Crate leptos_viz
===

Provides functions to easily integrate Leptos with Viz. For more details on how to use the integrations, see the `examples` directory in the Leptos repository.

Structs
---

* RequestParts: A struct to hold the parts of the incoming Request. Since `http::Request` isn't cloneable, we're forced to construct this for Leptos to use in Viz.
* ResponseOptions: Allows you to override details of the HTTP response, like the status code, and add Headers/Cookies.
* ResponseParts: This struct lets you define headers and override the status of the Response from an Element or a Server Function. Typically contained inside a ResponseOptions. Setting this is useful for cookies and custom responses.

Traits
---

* LeptosRoutes: This trait allows one to pass a list of routes and a render function to Viz's router, letting us avoid having to use wildcards or manually define all routes in multiple places.

Functions
---

* generate_request_parts: Decomposes an HTTP request into its parts, allowing you to read its headers and other data without consuming the body.
* generate_route_list: Generates a list of all routes defined in Leptos's Router in your app. We can then use this to automatically create routes in Viz's Router without having to use wildcard matching or fallbacks. Takes your root app Element as an argument so it can walk your app tree. This version is tailored to generate Viz-compatible paths.
* generate_route_list_with_exclusions: Like `generate_route_list`, but allows certain routes to be excluded.
* generate_route_list_with_exclusions_and_ssg: Generates a list of all routes defined in Leptos's Router in your app.
We can then use this to automatically create routes in Viz's Router without having to use wildcard matching or fallbacks. Takes your root app Element as an argument so it can walk your app tree. This version is tailored to generate Viz-compatible paths.
* generate_route_list_with_ssg: Like `generate_route_list`, but also returns a StaticDataMap for static site generation.
* handle_server_fns: A Viz handler that listens for a request with Leptos server function arguments in the body, runs the server function if found, and returns the resulting [Response].
* handle_server_fns_with_context: Like `handle_server_fns`, but allows you to pass a closure that provides additional context to the server function.
* redirect: Provides an easy way to redirect the user from within a server function. Mimicking the Remix `redirect()`, it sets a StatusCode of 302 and a LOCATION header with the provided value. If looking to redirect from the client, `leptos_router::use_navigate()` should be used instead.
* render_app_async: Returns a Viz Handler that listens for a `GET` request and tries to route it using leptos_router, asynchronously rendering an HTML page after all `async` Resources have loaded.
* render_app_async_with_context: Like `render_app_async`, but allows additional context to be provided from the layers above Leptos.
* render_app_to_stream: Returns a Viz Handler that listens for a `GET` request and tries to route it using leptos_router, serving an HTML stream of your application.
* render_app_to_stream_in_order: Returns a Viz Handler that listens for a `GET` request and tries to route it using leptos_router, serving an HTML stream of your application. This stream will pause at each `<Suspense/>` node and wait for it to resolve before sending down its HTML. The app will become interactive once it has fully loaded.
* render_app_to_stream_in_order_with_context: Like `render_app_to_stream_in_order`, serving an in-order HTML stream with additional context provided from the layers above Leptos.
* render_app_to_stream_with_context: Like `render_app_to_stream`, with additional context provided from the layers above Leptos.
* render_app_to_stream_with_context_and_replace_blocks: Returns a Viz Handler that listens for a `GET` request and tries to route it using leptos_router, serving an HTML stream of your application.
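Putting these pieces together, a typical server entry point generates the route list once and registers it with Viz's Router via the `LeptosRoutes` trait, with server functions handled on a separate route. The sketch below is illustrative, not a drop-in implementation; the root `App` component is a hypothetical stand-in for your own app.

```rust
use leptos::*;
use leptos_config::get_configuration;
use leptos_viz::LeptosRoutes;
use viz::{Router, ServiceMaker};

// Hypothetical root component standing in for your real app.
#[component]
fn App() -> impl IntoView {
    view! { <main>"Hello, world!"</main> }
}

#[tokio::main]
async fn main() {
    let conf = get_configuration(Some("Cargo.toml")).await.unwrap();
    let leptos_options = conf.leptos_options;
    let addr = leptos_options.site_addr.clone();

    // Walk the app tree once to produce Viz-compatible route listings.
    let routes = leptos_viz::generate_route_list(|| view! { <App/> });

    let app = Router::new()
        // Server functions are posted to their own route...
        .post("/api/:fn_name*", leptos_viz::handle_server_fns)
        // ...while the generated routes are dispatched to Leptos's renderer.
        .leptos_routes(leptos_options, routes, || view! { <App/> });

    // `viz::Server` is a re-export of `hyper::Server`.
    viz::Server::bind(&addr)
        .serve(ServiceMaker::from(app))
        .await
        .unwrap();
}
```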
Struct leptos_viz::RequestParts
===

```
pub struct RequestParts {
    pub version: Version,
    pub method: Method,
    pub uri: Uri,
    pub headers: HeaderMap<HeaderValue>,
    pub body: Bytes,
}
```

A struct to hold the parts of the incoming Request. Since `http::Request` isn't cloneable, we're forced to construct this for Leptos to use in Viz.

Fields
---

`version: Version`, `method: Method`, `uri: Uri`, `headers: HeaderMap<HeaderValue>`, `body: Bytes`

Trait Implementations
---

### impl Clone for RequestParts
#### fn clone(&self) -> RequestParts
Returns a copy of the value.
#### fn clone_from(&mut self, source: &Self)
Performs copy-assignment from `source`.
### impl Debug for RequestParts
#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter.

Auto Trait Implementations
---

### impl RefUnwindSafe for RequestParts
### impl Send for RequestParts
### impl Sync for RequestParts
### impl Unpin for RequestParts
### impl UnwindSafe for RequestParts

Blanket Implementations
---

The standard blanket implementations apply (`Any`, `Borrow`/`BorrowMut`, `From`/`Into`, `TryFrom`/`TryInto`, `ToOwned`, and tracing's `Instrument`/`WithSubscriber`, among others).
Struct leptos_viz::ResponseOptions
===

```
pub struct ResponseOptions(pub Arc<RwLock<ResponseParts>>);
```

Allows you to override details of the HTTP response, like the status code, and add Headers/Cookies.

Tuple Fields
---

`0: Arc<RwLock<ResponseParts>>`

Implementations
---

### impl ResponseOptions
#### pub fn overwrite(&self, parts: ResponseParts)
A simpler way to overwrite the contents of `ResponseOptions` with a new `ResponseParts`.
#### pub fn set_status(&self, status: StatusCode)
Set the status of the returned Response.
#### pub fn insert_header(&self, key: HeaderName, value: HeaderValue)
Insert a header, overwriting any previous value with the same key.
#### pub fn append_header(&self, key: HeaderName, value: HeaderValue)
Append a header, leaving any header with the same key intact.

Trait Implementations
---

### impl Clone for ResponseOptions
#### fn clone(&self) -> ResponseOptions
Returns a copy of the value.
#### fn clone_from(&mut self, source: &Self)
Performs copy-assignment from `source`.
### impl Debug for ResponseOptions
#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter.
### impl Default for ResponseOptions
#### fn default() -> ResponseOptions
Returns the "default value" for a type.

Auto Trait Implementations
---

### impl !RefUnwindSafe for ResponseOptions
### impl Send for ResponseOptions
### impl Sync for ResponseOptions
### impl Unpin for ResponseOptions
### impl !UnwindSafe for ResponseOptions

Blanket Implementations
---

The standard blanket implementations apply (`Any`, `Borrow`/`BorrowMut`, `From`/`Into`, `TryFrom`/`TryInto`, `ToOwned`, and tracing's `Instrument`/`WithSubscriber`, among others).
Struct leptos_viz::ResponseParts
===

```
pub struct ResponseParts {
    pub status: Option<StatusCode>,
    pub headers: HeaderMap,
}
```

This struct lets you define headers and override the status of the Response from an Element or a Server Function. Typically contained inside a ResponseOptions. Setting this is useful for cookies and custom responses.

Fields
---

`status: Option<StatusCode>`, `headers: HeaderMap`

Implementations
---

### impl ResponseParts
#### pub fn insert_header(&mut self, key: HeaderName, value: HeaderValue)
Insert a header, overwriting any previous value with the same key.
#### pub fn append_header(&mut self, key: HeaderName, value: HeaderValue)
Append a header, leaving any header with the same key intact.

Trait Implementations
---

### impl Clone for ResponseParts
#### fn clone(&self) -> ResponseParts
Returns a copy of the value.
#### fn clone_from(&mut self, source: &Self)
Performs copy-assignment from `source`.
### impl Debug for ResponseParts
#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter.
### impl Default for ResponseParts
#### fn default() -> ResponseParts
Returns the "default value" for a type.
Auto Trait Implementations
---

### impl RefUnwindSafe for ResponseParts
### impl Send for ResponseParts
### impl Sync for ResponseParts
### impl Unpin for ResponseParts
### impl UnwindSafe for ResponseParts

Blanket Implementations
---

The standard blanket implementations apply (`Any`, `Borrow`/`BorrowMut`, `From`/`Into`, `TryFrom`/`TryInto`, `ToOwned`, and tracing's `Instrument`/`WithSubscriber`, among others).
Trait leptos_viz::LeptosRoutes
===

```
pub trait LeptosRoutes {
    // Required methods
    fn leptos_routes<IV>(
        self,
        options: LeptosOptions,
        paths: Vec<RouteListing>,
        app_fn: impl Fn() -> IV + Clone + Send + Sync + 'static,
    ) -> Self
    where
        IV: IntoView + 'static;

    fn leptos_routes_with_context<IV>(
        self,
        options: LeptosOptions,
        paths: Vec<RouteListing>,
        additional_context: impl Fn() + Clone + Send + Sync + 'static,
        app_fn: impl Fn() -> IV + Clone + Send + Sync + 'static,
    ) -> Self
    where
        IV: IntoView + 'static;

    fn leptos_routes_with_handler<H, O>(
        self,
        paths: Vec<RouteListing>,
        handler: H,
    ) -> Self
    where
        H: Handler<Request, Output = Result<O>> + Clone,
        O: IntoResponse + Send + Sync + 'static;
}
```

This trait allows one to pass a list of routes and a render function to Viz's router, letting us avoid having to use wildcards or manually define all routes in multiple places.
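As a sketch of how the context-passing variant might be wired up: every rendered route can be given access to shared values via a context closure. Here `App`, `DbPool`, and `db_pool` are hypothetical stand-ins, not part of this crate.

```rust
// Hedged sketch: register generated routes while providing an extra context
// value (a hypothetical shared `db_pool`) to every rendered route.
fn build_router(leptos_options: LeptosOptions, db_pool: DbPool) -> Router {
    let routes = leptos_viz::generate_route_list(|| view! { <App/> });
    Router::new().leptos_routes_with_context(
        leptos_options,
        routes,
        move || provide_context(db_pool.clone()),
        || view! { <App/> },
    )
}
```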
Required Methods
---

#### fn leptos_routes<IV>(self, options: LeptosOptions, paths: Vec<RouteListing>, app_fn: impl Fn() -> IV + Clone + Send + Sync + 'static) -> Self where IV: IntoView + 'static
#### fn leptos_routes_with_context<IV>(self, options: LeptosOptions, paths: Vec<RouteListing>, additional_context: impl Fn() + Clone + Send + Sync + 'static, app_fn: impl Fn() -> IV + Clone + Send + Sync + 'static) -> Self where IV: IntoView + 'static
#### fn leptos_routes_with_handler<H, O>(self, paths: Vec<RouteListing>, handler: H) -> Self where H: Handler<Request, Output = Result<O>> + Clone, O: IntoResponse + Send + Sync + 'static

Implementations on Foreign Types
---

### impl LeptosRoutes for Router

The default implementation of `LeptosRoutes`, which takes in a list of paths and dispatches GET requests to those paths to Leptos's renderer. It provides `leptos_routes`, `leptos_routes_with_context`, and `leptos_routes_with_handler`.

Function leptos_viz::generate_request_parts
===

```
pub async fn generate_request_parts(req: Request) -> RequestParts
```

Decomposes an HTTP request into its parts, allowing you to read its headers and other data without consuming the body.

Function leptos_viz::generate_route_list
===

```
pub fn generate_route_list<IV>(
    app_fn: impl Fn() -> IV + 'static + Clone,
) -> Vec<RouteListing>
where
    IV: IntoView + 'static,
```

Generates a list of all routes defined in Leptos's Router in your app.
We can then use this to automatically create routes in Viz's Router without having to use wildcard matching or fallbacks. Takes your root app Element as an argument so it can walk your app tree. This version is tailored to generate Viz-compatible paths.

Function leptos_viz::generate_route_list_with_exclusions
===

```
pub fn generate_route_list_with_exclusions<IV>(
    app_fn: impl Fn() -> IV + 'static + Clone,
    excluded_routes: Option<Vec<String>>,
) -> Vec<RouteListing>
where
    IV: IntoView + 'static,
```

Generates a list of all routes defined in Leptos's Router in your app, excluding any paths passed in `excluded_routes`. We can then use this to automatically create routes in Viz's Router without having to use wildcard matching or fallbacks. Takes your root app Element as an argument so it can walk your app tree. This version is tailored to generate Viz-compatible paths.

Function leptos_viz::generate_route_list_with_exclusions_and_ssg
===

```
pub fn generate_route_list_with_exclusions_and_ssg<IV>(
    app_fn: impl Fn() -> IV + 'static + Clone,
    excluded_routes: Option<Vec<String>>,
) -> (Vec<RouteListing>, StaticDataMap)
where
    IV: IntoView + 'static,
```

Generates a list of all routes defined in Leptos's Router in your app, along with a StaticDataMap for static site generation, excluding any paths passed in `excluded_routes`. Otherwise identical in behavior to `generate_route_list`.

Function leptos_viz::generate_route_list_with_ssg
===

```
pub fn generate_route_list_with_ssg<IV>(
    app_fn: impl Fn() -> IV + 'static + Clone,
) -> (Vec<RouteListing>, StaticDataMap)
where
    IV: IntoView + 'static,
```

Generates a list of all routes defined in Leptos's Router in your app, along with a StaticDataMap for static site generation. Otherwise identical in behavior to `generate_route_list`.
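The variants above differ only in their inputs and outputs. Assuming the same hypothetical `App` component as elsewhere (and an illustrative excluded path), they might be called like this:

```rust
// Exclude a manually handled path from the generated routes
// (the "/custom" path is illustrative, not required by the API).
let routes = leptos_viz::generate_route_list_with_exclusions(
    || view! { <App/> },
    Some(vec!["/custom".to_string()]),
);

// The *_ssg variants additionally return a StaticDataMap
// for static site generation.
let (routes, static_data) =
    leptos_viz::generate_route_list_with_ssg(|| view! { <App/> });
```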
Function leptos_viz::handle_server_fns
===

```
pub async fn handle_server_fns(req: Request) -> Result<Response>
```

A Viz handler that listens for a request with Leptos server function arguments in the body, runs the server function if found, and returns the resulting [Response]. This can then be set up at an appropriate route in your application:

```
use leptos::*;
use std::net::SocketAddr;
use viz::{Router, ServiceMaker};

#[tokio::main]
async fn main() {
    let addr = SocketAddr::from(([127, 0, 0, 1], 8082));

    // build our application with a route
    let app = Router::new().post("/api/:fn_name*", leptos_viz::handle_server_fns);

    // run our app with hyper
    // `viz::Server` is a re-export of `hyper::Server`
    viz::Server::bind(&addr)
        .serve(ServiceMaker::from(app))
        .await
        .unwrap();
}
```

Leptos provides a generic implementation of `handle_server_fns`. If access to more specific parts of the Request is desired, you can specify your own server fn handler based on this one and give it its own route in the server macro.

### Provided Context Types

This function always provides context values including the following types:

* RequestParts
* ResponseOptions

Function leptos_viz::handle_server_fns_with_context
===

```
pub async fn handle_server_fns_with_context(
    req: Request,
    additional_context: impl Fn() + Clone + Send + 'static,
) -> Result<Response>
```

A Viz handler that listens for a request with Leptos server function arguments in the body, runs the server function if found, and returns the resulting [Response]. This version allows you to pass in a closure to capture additional data from the layers above Leptos and store it in context. To use it, you'll need to define your own route and a handler function that takes in the data you'd like. See the render_app_to_stream_with_context docs for an example that should work much like this one.
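A minimal sketch of such a handler, assuming a hypothetical cloneable `AppState` type registered in Viz's state, might look like:

```rust
// Hedged sketch: run server functions with extra context pulled from Viz state.
async fn server_fn_handler(req: Request) -> Result<Response> {
    // `AppState` is a hypothetical Clone type stored with the Router's state.
    let state = req
        .state::<AppState>()
        .ok_or(StateError::new::<AppState>())?;
    // Provide the state to the server function's context before running it.
    leptos_viz::handle_server_fns_with_context(req, move || {
        provide_context(state.clone());
    })
    .await
}
```

This mirrors the `custom_handler` pattern shown for the render functions below: extract what you need from the Request, then provide it via the context closure.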
### Provided Context Types

This function always provides context values including the following types:

* RequestParts
* ResponseOptions

Function leptos_viz::redirect
===

```
pub fn redirect(path: &str)
```

Provides an easy way to redirect the user from within a server function. Mimicking the Remix `redirect()`, it sets a StatusCode of 302 and a LOCATION header with the provided value. If looking to redirect from the client, `leptos_router::use_navigate()` should be used instead.

Function leptos_viz::render_app_async
===

```
pub fn render_app_async<IV>(
    options: LeptosOptions,
    app_fn: impl Fn() -> IV + Clone + Send + 'static,
) -> impl Fn(Request) -> Pin<Box<dyn Future<Output = Result<Response>> + Send + 'static>> + Clone + Send + 'static
where
    IV: IntoView,
```

Returns a Viz Handler that listens for a `GET` request and tries to route it using leptos_router, asynchronously rendering an HTML page after all `async` Resources have loaded. This provides a MetaContext and a RouterIntegrationContext to the app's context before rendering it, and includes any meta tags injected using leptos_meta. The HTML stream is rendered using [render_to_string_async], and includes everything described in the documentation for that function. This can then be set up at an appropriate route in your application:

```
use leptos::*;
use leptos_config::get_configuration;
use std::{env, net::SocketAddr};
use viz::{Router, ServiceMaker};

#[component]
fn MyApp() -> impl IntoView {
    view! { <main>"Hello, world!"</main> }
}

#[tokio::main]
async fn main() {
    let conf = get_configuration(Some("Cargo.toml")).await.unwrap();
    let leptos_options = conf.leptos_options;
    let addr = leptos_options.site_addr.clone();

    // build our application with a route
    let app = Router::new().any(
        "*",
        leptos_viz::render_app_async(leptos_options, || view!
 { <MyApp/> }),
    );

    // run our app with hyper
    // `viz::Server` is a re-export of `hyper::Server`
    viz::Server::bind(&addr)
        .serve(ServiceMaker::from(app))
        .await
        .unwrap();
}
```

### Provided Context Types

This function always provides context values including the following types:

* RequestParts
* ResponseOptions
* MetaContext
* RouterIntegrationContext

Function leptos_viz::render_app_async_with_context
===

```
pub fn render_app_async_with_context<IV>(
    options: LeptosOptions,
    additional_context: impl Fn() + 'static + Clone + Send,
    app_fn: impl Fn() -> IV + Clone + Send + 'static,
) -> impl Fn(Request) -> Pin<Box<dyn Future<Output = Result<Response>> + Send + 'static>> + Clone + Send + 'static
where
    IV: IntoView,
```

Returns a Viz Handler that listens for a `GET` request and tries to route it using leptos_router, asynchronously rendering an HTML page after all `async` Resources have loaded. This version allows us to pass Viz State/Extractor or other info from Viz or network layers above Leptos itself. To use it, you'll need to write your own handler function that provides the data to Leptos in a closure. An example is below:

```
async fn custom_handler(req: Request) -> Result<Response> {
    let id = req.params::<String>()?;
    let options = &*req
        .state::<Arc<LeptosOptions>>()
        .ok_or(StateError::new::<Arc<LeptosOptions>>())?;
    let handler = leptos_viz::render_app_async_with_context(
        options.clone(),
        move || {
            provide_context(id.clone());
        },
        || view! { <TodoApp/> },
    );
    handler(req).await
}
```

Otherwise, this function is identical to render_app_to_stream.
### Provided Context Types

This function always provides context values including the following types:

* RequestParts
* ResponseOptions
* MetaContext
* RouterIntegrationContext

Function leptos_viz::render_app_to_stream
===

```
pub fn render_app_to_stream<IV>(
    options: LeptosOptions,
    app_fn: impl Fn() -> IV + Clone + Send + 'static,
) -> impl Fn(Request) -> Pin<Box<dyn Future<Output = Result<Response>> + Send + 'static>> + Clone + Send + 'static
where
    IV: IntoView,
```

Returns a Viz Handler that listens for a `GET` request and tries to route it using leptos_router, serving an HTML stream of your application. This provides a MetaContext and a RouterIntegrationContext to the app's context before rendering it, and includes any meta tags injected using leptos_meta. The HTML stream is rendered using [render_to_stream], and includes everything described in the documentation for that function. This can then be set up at an appropriate route in your application:

```
use leptos::*;
use leptos_config::get_configuration;
use std::{env, net::SocketAddr};
use viz::{Router, ServiceMaker};

#[component]
fn MyApp() -> impl IntoView {
    view! { <main>"Hello, world!"</main> }
}

#[tokio::main]
async fn main() {
    let conf = get_configuration(Some("Cargo.toml")).await.unwrap();
    let leptos_options = conf.leptos_options;
    let addr = leptos_options.site_addr.clone();

    // build our application with a route
    let app = Router::new().any(
        "*",
        leptos_viz::render_app_to_stream(
            leptos_options,
            || view!
 { <MyApp/> },
        ),
    );

    // run our app with hyper
    // `viz::Server` is a re-export of `hyper::Server`
    viz::Server::bind(&addr)
        .serve(ServiceMaker::from(app))
        .await
        .unwrap();
}
```

### Provided Context Types

This function always provides context values including the following types:

* RequestParts
* ResponseOptions
* MetaContext
* RouterIntegrationContext

Function leptos_viz::render_app_to_stream_in_order
===

```
pub fn render_app_to_stream_in_order<IV>(
    options: LeptosOptions,
    app_fn: impl Fn() -> IV + Clone + Send + 'static,
) -> impl Fn(Request) -> Pin<Box<dyn Future<Output = Result<Response>> + Send + 'static>> + Clone + Send + 'static
where
    IV: IntoView,
```

Returns a Viz Handler that listens for a `GET` request and tries to route it using leptos_router, serving an HTML stream of your application. This stream will pause at each `<Suspense/>` node and wait for it to resolve before sending down its HTML. The app will become interactive once it has fully loaded. This provides a MetaContext and a RouterIntegrationContext to the app's context before rendering it, and includes any meta tags injected using leptos_meta. The HTML stream is rendered using [render_to_stream], and includes everything described in the documentation for that function. This can then be set up at an appropriate route in your application:

```
use leptos::*;
use leptos_config::get_configuration;
use std::{env, net::SocketAddr};
use viz::{Router, ServiceMaker};

#[component]
fn MyApp() -> impl IntoView {
    view! { <main>"Hello, world!"</main> }
}

#[tokio::main]
async fn main() {
    let conf = get_configuration(Some("Cargo.toml")).await.unwrap();
    let leptos_options = conf.leptos_options;
    let addr = leptos_options.site_addr.clone();

    // build our application with a route
    let app = Router::new().any(
        "*",
        leptos_viz::render_app_to_stream_in_order(
            leptos_options,
            || view!
            { <MyApp/> },
        ),
    );

    // run our app with hyper
    // `viz::Server` is a re-export of `hyper::Server`
    viz::Server::bind(&addr)
        .serve(ServiceMaker::from(app))
        .await
        .unwrap();
}
```

### Provided Context Types

This function always provides context values including the following types:

* RequestParts
* ResponseOptions
* MetaContext
* RouterIntegrationContext

Function leptos_viz::render_app_to_stream_in_order_with_context
===

```
pub fn render_app_to_stream_in_order_with_context<IV>(
    options: LeptosOptions,
    additional_context: impl Fn() + 'static + Clone + Send,
    app_fn: impl Fn() -> IV + Clone + Send + 'static
) -> impl Fn(Request) -> Pin<Box<dyn Future<Output = Result<Response>> + Send + 'static>> + Clone + Send + 'static
where
    IV: IntoView,
```

Returns a Viz Handler that listens for a `GET` request and tries to route it using leptos_router, serving an in-order HTML stream of your application. This stream will pause at each `<Suspense/>` node and wait for it to resolve before sending down its HTML. The app will become interactive once it has fully loaded.

This version allows us to pass Viz State/Extractor or other info from Viz or network layers above Leptos itself. To use it, you'll need to write your own handler function that provides the data to Leptos in a closure. An example is below:

```
async fn custom_handler(req: Request) -> Result<Response> {
    let id = req.params::<String>()?;
    let options = &*req
        .state::<Arc<LeptosOptions>>()
        .ok_or(StateError::new::<Arc<LeptosOptions>>())?;
    let handler = leptos_viz::render_app_to_stream_in_order_with_context(
        options.clone(),
        move || {
            provide_context(id.clone());
        },
        || view! { <TodoApp/> },
    );
    handler(req).await
}
```

Otherwise, this function is identical to render_app_to_stream.
### Provided Context Types

This function always provides context values including the following types:

* RequestParts
* ResponseOptions
* MetaContext
* RouterIntegrationContext

Function leptos_viz::render_app_to_stream_with_context
===

```
pub fn render_app_to_stream_with_context<IV>(
    options: LeptosOptions,
    additional_context: impl Fn() + Clone + Send + 'static,
    app_fn: impl Fn() -> IV + Clone + Send + 'static
) -> impl Fn(Request) -> Pin<Box<dyn Future<Output = Result<Response>> + Send + 'static>> + Clone + Send + 'static
where
    IV: IntoView,
```

Returns a Viz Handler that listens for a `GET` request and tries to route it using leptos_router, serving an HTML stream of your application.

This version allows us to pass Viz State/Extractor or other info from Viz or network layers above Leptos itself. To use it, you'll need to write your own handler function that provides the data to Leptos in a closure. An example is below:

```
async fn custom_handler(req: Request) -> Result<Response> {
    let id = req.params::<String>()?;
    let options = &*req
        .state::<Arc<LeptosOptions>>()
        .ok_or(Error::Responder(Response::text("missing state type LeptosOptions")))?;
    let handler = leptos_viz::render_app_to_stream_with_context(
        options.clone(),
        move || {
            provide_context(id.clone());
        },
        || view! { <TodoApp/> },
    );
    handler(req).await
}
```

Otherwise, this function is identical to render_app_to_stream.
### Provided Context Types

This function always provides context values including the following types:

* RequestParts
* ResponseOptions
* MetaContext
* RouterIntegrationContext

Function leptos_viz::render_app_to_stream_with_context_and_replace_blocks
===

```
pub fn render_app_to_stream_with_context_and_replace_blocks<IV>(
    options: LeptosOptions,
    additional_context: impl Fn() + Clone + Send + 'static,
    app_fn: impl Fn() -> IV + Clone + Send + 'static,
    replace_blocks: bool
) -> impl Fn(Request) -> Pin<Box<dyn Future<Output = Result<Response>> + Send + 'static>> + Clone + Send + 'static
where
    IV: IntoView,
```

Returns a Viz Handler that listens for a `GET` request and tries to route it using leptos_router, serving an HTML stream of your application.

This version allows us to pass Viz State/Extractor or other info from Viz or network layers above Leptos itself. To use it, you'll need to write your own handler function that provides the data to Leptos in a closure.

`replace_blocks` additionally lets you specify whether `<Suspense/>` fragments that read from blocking resources should be inlined into the HTML that's initially served, rather than dynamically inserting them with JavaScript on the client. This means you will have better support if JavaScript is not enabled, in exchange for a marginally slower response time.

Otherwise, this function is identical to render_app_to_stream_with_context.

### Provided Context Types

This function always provides context values including the following types:

* RequestParts
* ResponseOptions
* MetaContext
* RouterIntegrationContext
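A custom handler for this variant can follow the same shape as the `_with_context` examples above. The sketch below assumes the same hypothetical `TodoApp` component and `Arc<LeptosOptions>` state used in those examples; it is illustrative wiring, not a definitive implementation:

```
// Sketch only: same wiring as the `_with_context` examples above, with the
// extra `replace_blocks` flag set to `true` so that <Suspense/> fragments
// reading from blocking resources are inlined into the initial HTML.
async fn custom_handler(req: Request) -> Result<Response> {
    let options = &*req
        .state::<Arc<LeptosOptions>>()
        .ok_or(StateError::new::<Arc<LeptosOptions>>())?;
    let handler = leptos_viz::render_app_to_stream_with_context_and_replace_blocks(
        options.clone(),
        || {}, // no additional context provided in this sketch
        || view! { <TodoApp/> },
        true, // replace_blocks
    );
    handler(req).await
}
```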
alexandria-scrna-data-library
readthedoc
YAML
A Single-Cell RNA-Seq and Analytics Platform for Global Health

This is the work-in-progress documentation for the Alexandria platform and all associated tool workflows and notebooks.

## Background

Alexandria is a single-cell portal and data resource for the global health community. Alexandria will use unified pipelines to preprocess, store, and visualize datasets of interest to the global health community, enabling rapid realization of transformative insights and the prioritization of follow-ups. To maximize impact and utility, Alexandria will build upon existing efforts at the Broad, Single Cell Portal (SCP), but will be further enhanced to enable queries across gene sets, cell types, and models as inspired by the types of data collected by the global health community. This will power vital cross-comparisons while simultaneously providing novel analytic capabilities for the community at large. Moreover, Alexandria will similarly empower the broader global research community—from individuals with limited experience in single-cell analysis to power users looking to more rapidly isolate specific subsets of data from several experiments—to examine and parse scRNA-Seq data, so that the insights and intuitions of the entire scientific community can be leveraged to enable rapid progress in fighting a variety of human maladies.

## Why use Alexandria?

There are a variety of features that Alexandria plans to provide users:

* The ability to cross-project query on the cell level
* Easy communication with collaborators: studies may be kept private during active analysis and shared only with desired groups.
* Access to a variety of analysis pipelines and environments

## Data upload

In Alexandria and the Single Cell Portal, data is organized into 'studies', which are projects containing data intended to be analyzed together. Visualizations of subsets of single cells are available within a study, and a study may contain data from many samples.
An overview of visualizations provided on SCP can be found here. Data can be uploaded to Alexandria while samples are still being collected, while the analysis is in progress, or when the final analysis is complete.

### Types of studies

* Newly sequenced data: Raw sequencing data can be uploaded to Alexandria, aligned using integrated pipelines, and analyzed using either automated or interactive environments.
  * Currently, for automated visualization, all data must be uploaded and run on the alignment pipeline at the same time.
  * For human sequencing data, please ensure that the study is kept private for the time being.
  * The upload process for sequencing data is described here.
  * The process of sharing SCP studies is described here.
* Partially analyzed data: For ongoing analyses or data collection, data can be uploaded using SCP file formats to allow sharing of intermediate results with collaborators.
* Complete analyses: Data from published manuscripts, or manuscripts under revision, may be uploaded.
  * We request that as much metadata as is known be added to the project, even if it is the same for all cells in the project or is not relevant to the project.
  * These studies can be kept private and shared with reviewers while they are under revision.
  * The upload process for SCP file formats is described here.
  * The process of sharing SCP studies is described here.

### Upload from Single Cell Portal file formats

SCP file format upload is documented here. The Alexandria Project has several additional requirements on these file types. We will provide an interactive notebook to facilitate conversion of Seurat or Scanpy objects to these file types.

* Expression files
  * These files should represent normalized (but not scaled) data whose values would make sense to visualize in violin plots or heatmaps. Currently, we expect UMI counts, if available, to be uploaded under 'additional files'.
  * All genes, as opposed to only variable genes, should be included in these files, if possible.
* Though not required, the Alexandria Project requests that each expression file contain the cells from one sample.
  * Each cell name must be unique across all samples.
* Metadata file
  * To enable query, the Alexandria Project uses a structured metadata schema described here. This file should contain as many fields from that schema as possible, including several required fields. It may also contain unstructured metadata fields, provided their names differ from those used in the schema.
  * All cells must be included in a single file, and the cell names must match those in the expression files.
* Cluster files
  * These files represent dimensionality reduction visualizations and can contain subsets of cells (e.g., subclustering on a single cell type).
  * SCP allows these files to contain data labels which may be visualized only when that clustering is shown. These labels are not queryable, so Alexandria studies should only include labels that are not useful for query in these data labels.

Date: 2020-03-26

## The Alexandria Sheet

To instruct the workflow, you must create a comma-separated-value (csv) sample sheet called the Alexandria Sheet, which contains sample names, paths to sequencing data on the workspace bucket, and any metadata you wish to include on Alexandria.

### Formatting your Alexandria Sheet for FASTQ files

For processing a sequencing directory full of BCL files, see the below section instead.

* (REQUIRED) The 'Sample' column: the sample/array names that must prefix the respective .fastq or .fastq.gz files. Any preexisting count matrices must be named as `<sample>_dge.txt.gz` and located in the bucket at `<output_path>/dropseq/<sample>/<sample>_dge.txt.gz`.
* (RECOMMENDED) Both 'R1_Path' and 'R2_Path' columns: the paths to .fastq/.fastq.gz files on the bucket. Alternatively, see the section on the fastq_directory parameter.
* (OPTIONAL) Other metadata columns that will be appended to the alexandria_metadata.txt (tab-delimited) file generated after running Cumulus. Column headers must match exactly the names of attributes found in the Alexandria Metadata Convention. Labels outside of this convention will be supported in the future.

To verify that the paths you listed in the file are correct, you can navigate to your Google bucket and locate your sequence data files. Click on each file to view its URI (gsURL), which should resemble the format `gs://<bucket ID>/path/to/file.fastq.gz` in the case of `gzip`-compressed FASTQ files (regular FASTQ files are fine too). The locations you should enter in the path columns of your Alexandria Sheet can be the entire URI or all of the characters following the bucket ID and trailing slash, in this case `path/to/file.fastq.gz`.

### Formatting your Alexandria Sheet for bcl2fastq

Due to legal requirements, the default Docker image for the bcl2fastq workflow made by the Cumulus Team is privately available for use by Broad Institute affiliates. Affiliates must create a Docker account using their broadinstitute.org email address, download Docker Desktop, and log in by typing `docker login` in their terminal. Only then can you launch the bcl2fastq workflow on Alexandria or Terra. If you are not an affiliate, you can either create and reference your own Docker image or you can download the Bcl2Fastq software here and run it locally on a computer with plenty of disk space. If you choose the latter, once you have your FASTQs you can see the above section for writing a csv to process them.

* (REQUIRED) The 'Sample' column: the sample/array names that are found in each sequencing directory's sample sheet. Any pre-existing count matrices must have the sample names prefix each .txt.gz file.
* (REQUIRED) The 'BCL_Path' column: the paths to the sequencing run directories on the bucket.
  Ex: `path/to/191231_NB501935_0679_AHVY52BGXB/`
* (OPTIONAL) The 'SS_Path' column: recommended if you have BCL_Path and your sample sheets are not located within the root of the sequencing run directory. Paths to the sequencing run directories' `SampleSheet.csv` files on the bucket. If blank, the workflow will check inside the corresponding BCL_Path for `SampleSheet.csv`.
* (OPTIONAL) Other metadata columns that will be appended to the alexandria_metadata.txt file generated after running Cumulus. Column headers must match exactly the names of attributes found in the Alexandria Metadata Convention. Labels outside of this convention will be supported in the future.

To verify that the paths you listed in the file are correct, navigate to your Google bucket and locate your sequence data files. Click on each file to view its URI (gsURL), which should resemble the format `gs://<bucket ID>/path/to/sequencing_run_directory/` in the case of sequencing run directories. The locations you should enter in the path columns of your Alexandria Sheet should be all of the characters following the bucket ID and trailing slash, in this case `path/to/sequencing_run_directory`.

### Understanding the fastq_directory parameter

The use of this variable is not essential; it is only meant for quick and convenient CSV writing when you are submitting FASTQs. Refer to the above spreadsheet example. There are four samples which each have two FASTQ reads. All FASTQ files are found in a folder located at the root of the bucket called mouse_fastqs. Since they are all located in the same directory, one could set mouse_fastqs as the `fastq_directory` and no longer need the R1_Path and R2_Path columns. Furthermore, if the user has R1_Path and R2_Path columns but leaves spreadsheet cells blank, the pipeline will search in the `fastq_directory` for the corresponding sample.
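The two path conventions in this section, stripping the `gs://<bucket ID>/` prefix from a gsURL, and the workflow's documented fallback search pattern `<Sample Name>*_<R1 or R2>*.fastq(.gz)`, can be sketched in a few lines of Python. The bucket name and file names below are hypothetical:

```python
from fnmatch import fnmatch

def bucket_relative_path(gs_url: str, bucket_id: str) -> str:
    """Strip the 'gs://<bucket ID>/' prefix so only the in-bucket path remains."""
    prefix = f"gs://{bucket_id}/"
    return gs_url[len(prefix):] if gs_url.startswith(prefix) else gs_url

def find_fastq(filenames, sample, read):
    """Return files matching <sample>*_<read>*.fastq or .fastq.gz (read is 'R1' or 'R2')."""
    patterns = (f"{sample}*_{read}*.fastq", f"{sample}*_{read}*.fastq.gz")
    return [f for f in filenames if any(fnmatch(f, p) for p in patterns)]

# Hypothetical listing of a fastq_directory at the root of the bucket.
listing = ["DMSO_R2.fastq.gz", "LGD_R1.fastq.gz",
           "LKS_CGP_R1.fastq.gz", "LKS_CGP_R2.fastq.gz"]

print(bucket_relative_path("gs://my-bucket/mouse_fastqs/DMSO_R2.fastq.gz", "my-bucket"))
print(find_fastq(listing, "LKS_CGP", "R2"))
```

Either the full gsURL or the bucket-relative form can go into the path columns; the `find_fastq` helper mirrors the blank-cell fallback lookup described above.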
Here the pipeline will search the `gs://[bucket ID]/mouse_fastqs` directory for any spreadsheet cells left blank; DMSO_R2.fastq.gz, LGD_R1.fastq.gz, LKS_CGP_R1.fastq.gz, and LKS_CGP_R2.fastq.gz. The specific pattern the pipeline searches for is `<Sample Name>*_<R1 or R2>*.fastq(.gz)`.

## Inputs of the dropseq_cumulus workflow

### Basic usage

| Variable | Description |
| --- | --- |
| bucket | gsURL of the workspace bucket to which you have permissions, ex: |
| output_path | Path to bucket folder where outputs (count matrices, metadata files, etc.) will be stored. All folders in this path will be created if they do not exist. Ex: Entering |
| input_csv_file | Sample sheet (comma-separated value file) uploaded in the miscellaneous tab of this study's Upload/Edit Study Data page. **Formatting must adhere to the criteria!** |
| reference | Genome for alignment. Supported options: |
| run_dropseq | Yes: run Drop-seq pipeline (sequence alignment and QC). Sequencing data must be uploaded to the Google bucket associated with this study. |
| is_bcl | Yes: bcl2fastq will be run to convert all of your BCL directories to fastq.gz. No: all of your data is already of fastq.gz type. |
| fastq_directory, default = '' | Sequence data directory name for sequence data uploaded to the SCP study Google bucket. Ex: Enter |
| run_cumulus | Yes: run Cumulus (generate metadata, cluster files, coordinate files for data exploration in Alexandria). If |

### Advanced usage

| Variable | Description |
| --- | --- |
| cumulus_output_prefix, default = | Prefix for Cumulus files to distinguish them from files from different Cumulus jobs. |
| preemptible, default = | Number of attempts using a preemptible virtual machine before requesting a higher-cost, non-preemptible instance. See Google Cloud documentation. |
| zones, default = | The ordered list of Google Zone preferences for requesting a Google machine to run the pipeline. See Google Cloud documentation page. |
| alexandria_docker, default = | Full address of the shaleklab/alexandria Docker image to use. |
| dropseq_registry, default = | Registry of the Drop-Seq Tools Docker image. Default Drop-Seq Tools image registry. |
| dropseq_tools_version, default = | Image tag of the Drop-Seq Tools Docker image. Default Drop-Seq Tools image tags. |
| bcl2fastq_registry, default = | Registry of the bcl2fastq Docker image. Default is privately hosted for Broad Institute members only; see this page to create your own. |
| bcl2fastq_version, default = `2.20.0.422` | Image tag of the bcl2fastq Docker image. Default is privately hosted for Broad Institute members only; see this page to create your own. |
| cumulus_registry, default = | Registry of the Cumulus Docker image. Default Cumulus image registry. |
| cumulus_version, default = | Image tag of the Cumulus Docker image that corresponds to the version of Cumulus. Default Cumulus image tags. |

### Optional inputs exposed on Terra

See the Drop-Seq Pipeline workflow documentation.

Caution: dropEst does not account for strandedness and therefore its usage is not recommended.

See the Cumulus workflow documentation.

## Outputs of the dropseq_cumulus workflow

When running the Drop-Seq pipeline and/or Cumulus through dropseq_cumulus, the workflow yields the same outputs as its component workflows. You can see those documentations in the section above. Explicitly, dropseq_cumulus presents the Single Cell Portal with the alexandria metadata file (alexandria_metadata.txt), the dense expression matrix (ends with scp.expr.txt), and the coordinate file (ends with scp.X_fitsne.coords.txt).
@carbon/charts-angular
npm
JavaScript
Carbon Charts - Angular === Carbon Charts Angular is a thin Angular wrapper around the vanilla JavaScript `@carbon/charts` component library. This release is aimed at Angular >= 6 and < 16. If you need support for Angular 16 or higher, please try `@carbon/charts-angular@next`. The required styles should be imported from `@carbon/charts-angular/styles.css` and `@carbon/styles/css/styles.css`. Additional documentation is provided in the Storybook demos. **[Storybook demos](https://carbon-design-system.github.io/carbon-charts/angular)** **[Storybook demo sources](https://github.com/carbon-design-system/carbon-charts/tree/master/packages/angular/src/stories)** Getting started --- Run the following command using [npm](https://www.npmjs.com/): ``` npm install -S @carbon/charts-angular@latest @carbon/styles d3 d3-cloud d3-sankey ``` If you prefer [Yarn](https://yarnpkg.com/en/), use the following command instead: ``` yarn add @carbon/charts-angular@latest @carbon/styles d3 d3-cloud d3-sankey ``` Step-by-step instructions --- Read [Getting Started](https://charts.carbondesignsystem.com/?path=/docs/docs-getting-started-angular--docs) Charting data & options --- Although new charts will be introduced in the future (such as a choropleth), data and options follow the same model for all charts with minor exceptions. For example, in the case of a donut chart, you're able to pass in an additional field called `center` in your options to configure the donut center. 
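As a sketch of that shared model, a data/options pair for a donut chart might look like the plain objects below. The field names follow the tabular data format (`group`/`value` entries), and the `center` field is the donut-specific extension mentioned above; the exact option shapes should be checked against the chart-specific documentation:

```typescript
// Illustrative data/options shapes only; no library import is needed to
// show the structure that would be passed to a donut chart component.
const data = [
  { group: "Product A", value: 45 },
  { group: "Product B", value: 30 },
  { group: "Other", value: 25 },
];

const options = {
  title: "Sales share",          // chart title
  height: "400px",               // rendered height
  donut: {
    center: { label: "Units" },  // donut-specific `center` field
  },
};

console.log(data.length, options.donut.center.label);
```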
For instructions on using the **tabular data format**, see [here](https://charts.carbondesignsystem.com/angular/?path=/docs/docs-tutorials-tabular-data-format--docs) Customizable options (specific to chart type) can be found [here](https://charts.carbondesignsystem.com/documentation/modules/interfaces.html) Readme --- ### Keywords * carbon * charts * angular * dataviz * data-visualization * visualizations * d3 * svg * component * components * css * html * ibm * typescript * javascript * js * library * pattern * patterns * sass * scss
smallarea
cran
R
Package ‘smallarea’

October 14, 2022

Type Package
Title Fits a Fay Herriot Model
Version 0.1
Date 2015-10-03
Author <NAME>
Maintainer <NAME> <<EMAIL>>
Description Inference techniques for the Fay Herriot Model.
License GPL (>= 2)
Depends MASS
NeedsCompilation no
Repository CRAN
Date/Publication 2015-10-05 12:37:47

R topics documented:

smallarea-package
estimate.unknownsampvar
fayherriot
maximlikelihood
prasadraoest
resimaxilikelihood
smallareafit

smallarea-package    Fits a Fay Herriot model

Description

It has some useful functions which the users might find convenient for fitting the Fay Herriot Model. Details are included in the package vignette.

Details

Package: smallarea
Type: Package
Version: 0.1
Date: 2013-04-25
License: GPL (>= 2)

Author(s)

<NAME>
Maintainer: <NAME> <<EMAIL>>

References

On measuring the variability of small area estimators under a basic area level model. Datta, Rao, Smith. Biometrika (2005), 92, 1, pp. 183-196
Large Sample Techniques for Statistics, Springer Texts in Statistics. Jiming Jiang. Chapters 4, 12 and 13.
Small Area Estimation, JNK Rao, Wiley 2003
Variance Components, Wiley Series in Probability and Statistics, 2006. Searle, Casella, McCulloch

Examples

data=data.frame(response=c(1,2,3,4,8),D=c(0.2,0.5,0.1,0.9,1))
data
ans=smallareafit(response~D,data,method="FH")
ans1=smallareafit(response~D,data,method="REML")
ans2=smallareafit(response~D,data,method="PR")
ans3=smallareafit(response~D,data,method="ML")

estimate.unknownsampvar    Estimates of variance component, unknown sampling variance, regression coefficients and small area means in Fay Herriot model with unknown sampling variance.

Description

The function returns a list of 5 elements.
The first element is an estimate of the variance component, the second element is an estimate of the parameter related to sampling variance, the third element is a vector of estimates of the regression coefficients in the Fay-Herriot model, the fourth element is a vector of the predictors of the small area means, and the last element is the design matrix, the first column being a column of ones and the remaining columns representing the values of the covariates for different small areas. See details below.

Usage

estimate.unknownsampvar(response, mat.design, sample.size)

Arguments

response	A numeric vector. It represents the response or the direct survey based estimators in the Fay Herriot Model.
mat.design	A numeric matrix. The first column is a column of ones (also called the intercept). The other columns consist of observations of each of the covariates or explanatory variables in the Fay Herriot Model.
sample.size	A numeric vector. The known sample sizes used to calculate the direct survey based estimators.

Details

For more details please see the package vignette.

Value

psi.hat	Estimate of the variance component
del.hat	Estimate of the parameter for sampling variance
beta.hat	Estimate of the unknown regression coefficients
theta.hat	Predictors of the small area means
mat.design	design matrix

Author(s)

<NAME>

References

On measuring the variability of small area estimators under a basic area level model. Datta, Rao, Smith. Biometrika (2005), 92, 1, pp. 183-196
Large Sample Techniques for Statistics, Springer Texts in Statistics. <NAME>. Chapters 4, 12 and 13.
See Also

prasadraoest, fayherriot

Examples

set.seed( 55 )             # setting a random seed
require(MASS)              # the function mvrnorm requires MASS
x1 <- rep( 1, 10 )         # vector of ones representing intercept
x2 <- rnorm( 10 )          # vector of covariates randomly generated
x <- cbind( x1, x2 )       # design matrix
x <- as.matrix( x )        # coercing into class matrix
n <- rbinom( 10, 20, 0.4 ) # generating sample sizes for each small area
psi <- 1                   # true value of the psi parameter
delta <- 1                 # true value of the delta parameter
beta <- c( 1, 0.5 )        # true values of the regression parameters
theta <- mvrnorm( 1, as.vector( x %*% beta ), diag(10) )  # true small area means
y <- mvrnorm( 1, as.vector( theta ), diag( delta/n ) )    # design based estimators
estimate.unknownsampvar( y, x, n )

fayherriot    Estimate of the variance component in Fay Herriot Model using Fay Herriot Method

Description

This function returns a list with one element in it, which is the estimate of the variance component in the Fay Herriot Model. The estimate is found by solving an equation (for details see the vignette) and is due to Fay and Herriot. The uniroot function in the stats package is used to find the root. uniroot searches for a root of that equation in a particular interval: the lower bound is 0 and the upper bound is set to the Prasad-Rao estimate of the variance component plus three times the square root of the number of observations. It depends on the function prasadraoest in the same package. Note that our function does not accept missing values.

Usage

fayherriot(response, designmatrix, sampling.var)

Arguments

response	a numeric vector. It represents the response or the observed value in the Fay Herriot Model.
designmatrix	a numeric matrix. The first column is a column of ones (also called the intercept). The other columns consist of observations of each of the covariates or explanatory variables in the Fay Herriot Model.
sampling.var	a numeric vector consisting of the known sampling variances of each of the small area levels.
Details

For more details please see the attached vignette.

Value

estimate	estimate of the variance component

Author(s)

<NAME>

References

On measuring the variability of small area estimators under a basic area level model. Datta, Rao, Smith. Biometrika (2005), 92, 1, pp. 183-196
Large Sample Techniques for Statistics, Springer Texts in Statistics. Jiming Jiang. Chapters 4, 12 and 13.

See Also

prasadraoest, maximlikelihood, resimaxilikelihood

Examples

response=c(1,2,3,4,5)
designmatrix=cbind(c(1,1,1,1,1),c(1,2,4,4,1),c(2,1,3,1,5))
randomeffect.var=c(0.5,0.7,0.8,0.4,0.5)
fayherriot(response,designmatrix,randomeffect.var)

maximlikelihood    Maximum likelihood estimates of the variance components and the unknown regression coefficients in Fay Herriot Model.

Description

This function returns a list of three elements: the first one is the maximum likelihood estimate of the variance component; the second one is a vector of the maximum likelihood estimates of the unknown regression coefficients, the first being the coefficient of the intercept and the remaining ones in the same order as the columns of the design matrix; and the last one is the value of the maximized loglikelihood function in the Fay Herriot model. It uses optim in the stats package and the BFGS algorithm to minimize the negative loglikelihood. The initial values for this iterative procedure of maximization are chosen as follows. The initial value for the variance component is the Prasad-Rao estimate of the variance component; the initial values for the regression coefficients are the estimates of the regression coefficients using multiple linear regression, ignoring the random effects. (For more details see the vignette.) Note that our function does not accept any missing values.

Usage

maximlikelihood(response, designmatrix, sampling.var)

Arguments

response	A numeric vector. It represents the response or the direct survey based estimators in the Fay Herriot Model.
designmatrix	A numeric matrix.
The first column is a column of ones (also called the intercept). The other columns consist of observations of each of the covariates or explanatory variables in the Fay Herriot Model.
sampling.var	A numeric vector consisting of the known sampling variances of each of the small area levels.

Details

For more details please see the package vignette.

Value

estimate	Maximum likelihood estimate of the variance component
reg.coefficients	Maximum likelihood estimate of the unknown regression coefficients
loglikeli.optimum	The maximized value of the log likelihood function

Author(s)

<NAME>

References

On measuring the variability of small area estimators under a basic area level model. Datta, Rao, Smith. Biometrika (2005), 92, 1, pp. 183-196
Large Sample Techniques for Statistics, Springer Texts in Statistics. Jiming Jiang. Chapters 4, 12 and 13.

See Also

prasadraoest, fayherriot, resimaxilikelihood

Examples

response=c(1,2,3,4,5)
designmatrix=cbind(c(1,1,1,1,1),c(1,2,4,4,1),c(2,1,3,1,5))
randomeffect.var=c(0.5,0.7,0.8,0.4,0.5)
maximlikelihood(response,designmatrix,randomeffect.var)

prasadraoest    Estimate of the variance component in Fay Herriot Model using Prasad Rao Method

Description

This function returns a list with one element in it, which is the estimate of the variance component in the Fay Herriot Model. The method used to get the estimate is the Prasad-Rao method, also known as BLUP. (For details see the vignette.) Note that our function does not accept any missing values.

Usage

prasadraoest(response, designmatrix, sampling.var)

Arguments

response	a numeric vector. It represents the response or the observed value in the Fay Herriot Model.
designmatrix	a numeric matrix. The first column is a column of ones (also called the intercept). The other columns consist of observations of each of the covariates or explanatory variables in the Fay Herriot Model.
sampling.var	a numeric vector consisting of the known sampling variances of each of the small area levels.
Details

For more details see the package vignette.

Value

estimate	estimate of the variance component

Author(s)

<NAME>

References

On measuring the variability of small area estimators under a basic area level model. Datta, Rao, Smith. Biometrika (2005), 92, 1, pp. 183-196
Large Sample Techniques for Statistics, Springer Texts in Statistics. Jiming Jiang. Chapters 4, 12 and 13.

See Also

fayherriot, maximlikelihood, resimaxilikelihood

Examples

response=c(1,2,3,4,5)
designmatrix=cbind(c(1,1,1,1,1),c(1,2,4,4,1),c(2,1,3,1,5))
randomeffect.var=c(0.5,0.7,0.8,0.4,0.5)
prasadraoest(response,designmatrix,randomeffect.var)

resimaxilikelihood    Estimate of the variance component in Fay Herriot Model using Residual Maximum Likelihood, REML.

Description

This function returns a list with one element in it, which is the estimate of the variance component in the Fay Herriot Model using the residual maximum likelihood method. The estimates are obtained as a solution of equations known as the REML equations. The solution is obtained numerically using the Fisher-scoring algorithm. For more details please see the package vignette and the references. Note that our function does not accept any missing values.

Usage

resimaxilikelihood(response, designmatrix, sampling.var, maxiter)

Arguments

response	a numeric vector. It represents the response or the observed value in the Fay Herriot Model.
designmatrix	a numeric matrix. The first column is a column of ones (also called the intercept). The other columns consist of observations of each of the covariates or explanatory variables in the Fay Herriot Model.
sampling.var	a numeric vector consisting of the known sampling variances of each of the small area levels.
maxiter	maximum number of iterations of Fisher scoring

Details

For more details see the package vignette.

Value

estimate	estimate of the variance component

Author(s)

<NAME>

References

On measuring the variability of small area estimators under a basic area level model. Datta, Rao, Smith.
Biometrika (2005), 92, 1, pp. 183-196
Large Sample Techniques for Statistics, Springer Texts in Statistics. Jiming Jiang. Chapters 4, 12 and 13.
Small Area Estimation, JNK Rao, Wiley 2003
Variance Components, Wiley Series in Probability and Statistics, 2006. Searle, Casella, McCulloch

See Also

prasadraoest, maximlikelihood, fayherriot

Examples

response=c(1,2,3,4,5)
designmatrix=cbind(c(1,1,1,1,1),c(1,2,4,4,1),c(2,1,3,1,5))
randomeffect.var=c(0.5,0.7,0.8,0.4,0.5)
resimaxilikelihood(response,designmatrix,randomeffect.var,100)

smallareafit    Fits a Fay Herriot Model to data

Description

Fits a Fay Herriot model to the data and returns a list of items which are estimates of different parameters and the MSE of the estimates of the small area means; the details are provided in the value section.

Usage

smallareafit(formula, data, method)

Arguments

formula	an object of class formula, similar in appearance to that of the lm function in R. It has to be ascertained that the data contains a column of the sampling variances, and that while specifying the formula the name of the variable that contains the sampling variances should precede the variables which are the covariates. E.g. response~D+x1+x2 is a correct way of specifying the formula, whereas response~x1+D+x2 is not. (Note: D is the variable that contains the values of the sampling variances and x1 and x2 are covariates.) In general the first of the variables on the right hand side of ~ will be treated as the vector of sampling variances. Note that our function does not accept any missing values.
data	an optional data.frame containing the variable names and data. In the absence of this argument the function will accept the corresponding objects from the global environment.
method: one of "PR", "FH", "ML", "REML".

Details

For more details see the vignette.

Value

smallmean.est: estimates of the small area means

smallmean.mse: mean square prediction error of the estimates of the small area means

var.comp: an estimate of the variance component

est.coef: an estimate of the regression coefficients

Author(s)

<NAME>

References

On measuring the variability of small area estimators under a basic area level model. Datta, Rao, Smith. Biometrika (2005), 92, 1, pp. 183-196.

Large Sample Techniques for Statistics, Springer Texts in Statistics. Jiming Jiang. Chapters 4, 12 and 13.

Small Area Estimation. JNK Rao. Wiley, 2003.

Variance Components, Wiley Series in Probability and Statistics, 2006. Searle, Casella, McCulloch.

See Also

prasadraoest, fayherriot, resimaxilikelihood, maximlikelihood

Examples

data=data.frame(response=c(1,2,3,4,8),D=c(0.2,0.5,0.1,0.9,1))
data
ans=smallareafit(response~D,data,method="FH")
ans1=smallareafit(response~D,data,method="REML")
ans2=smallareafit(response~D,data,method="PR")
ans3=smallareafit(response~D,data,method="ML")
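All four estimation methods above (PR, FH, ML, REML) target the same variance component in the basic area-level Fay Herriot model, which — in the standard notation of the small area estimation literature, not necessarily the vignette's — can be written for areas i = 1, ..., m as:

```latex
y_i = x_i^{\top}\beta + v_i + e_i, \qquad
v_i \stackrel{iid}{\sim} N(0,\ \psi), \qquad
e_i \stackrel{ind}{\sim} N(0,\ D_i),
```

where y_i is the response, x_i the corresponding row of the design matrix, v_i the area-level random effect whose variance psi is the component being estimated, and e_i the sampling error whose known variance D_i is supplied through the sampling.var argument.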
github.com/aws/aws-sdk-go-v2/service/iotsecuretunneling
Documentation [¶](#section-documentation) --- ### Overview [¶](#pkg-overview) Package iotsecuretunneling provides the API client, operations, and parameter types for AWS IoT Secure Tunneling. IoT Secure Tunneling creates remote connections to devices deployed in the field. For more information about how IoT Secure Tunneling works, see IoT Secure Tunneling (<https://docs.aws.amazon.com/iot/latest/developerguide/secure-tunneling.html>). ### Index [¶](#pkg-index) * [Constants](#pkg-constants) * [func NewDefaultEndpointResolver() *internalendpoints.Resolver](#NewDefaultEndpointResolver) * [func WithAPIOptions(optFns ...func(*middleware.Stack) error) func(*Options)](#WithAPIOptions) * [func WithEndpointResolver(v EndpointResolver) func(*Options)](#WithEndpointResolver) (deprecated) * [func WithEndpointResolverV2(v EndpointResolverV2) func(*Options)](#WithEndpointResolverV2) * [type Client](#Client) * + [func New(options Options, optFns ...func(*Options)) *Client](#New) + [func NewFromConfig(cfg aws.Config, optFns ...func(*Options)) *Client](#NewFromConfig) * + [func (c *Client) CloseTunnel(ctx context.Context, params *CloseTunnelInput, optFns ...func(*Options)) (*CloseTunnelOutput, error)](#Client.CloseTunnel) + [func (c *Client) DescribeTunnel(ctx context.Context, params *DescribeTunnelInput, optFns ...func(*Options)) (*DescribeTunnelOutput, error)](#Client.DescribeTunnel) + [func (c *Client) ListTagsForResource(ctx context.Context, params *ListTagsForResourceInput, ...) (*ListTagsForResourceOutput, error)](#Client.ListTagsForResource) + [func (c *Client) ListTunnels(ctx context.Context, params *ListTunnelsInput, optFns ...func(*Options)) (*ListTunnelsOutput, error)](#Client.ListTunnels) + [func (c *Client) OpenTunnel(ctx context.Context, params *OpenTunnelInput, optFns ...func(*Options)) (*OpenTunnelOutput, error)](#Client.OpenTunnel) + [func (c *Client) RotateTunnelAccessToken(ctx context.Context, params *RotateTunnelAccessTokenInput, ...) 
(*RotateTunnelAccessTokenOutput, error)](#Client.RotateTunnelAccessToken) + [func (c *Client) TagResource(ctx context.Context, params *TagResourceInput, optFns ...func(*Options)) (*TagResourceOutput, error)](#Client.TagResource) + [func (c *Client) UntagResource(ctx context.Context, params *UntagResourceInput, optFns ...func(*Options)) (*UntagResourceOutput, error)](#Client.UntagResource) * [type CloseTunnelInput](#CloseTunnelInput) * [type CloseTunnelOutput](#CloseTunnelOutput) * [type DescribeTunnelInput](#DescribeTunnelInput) * [type DescribeTunnelOutput](#DescribeTunnelOutput) * [type EndpointParameters](#EndpointParameters) * + [func (p EndpointParameters) ValidateRequired() error](#EndpointParameters.ValidateRequired) + [func (p EndpointParameters) WithDefaults() EndpointParameters](#EndpointParameters.WithDefaults) * [type EndpointResolver](#EndpointResolver) * + [func EndpointResolverFromURL(url string, optFns ...func(*aws.Endpoint)) EndpointResolver](#EndpointResolverFromURL) * [type EndpointResolverFunc](#EndpointResolverFunc) * + [func (fn EndpointResolverFunc) ResolveEndpoint(region string, options EndpointResolverOptions) (endpoint aws.Endpoint, err error)](#EndpointResolverFunc.ResolveEndpoint) * [type EndpointResolverOptions](#EndpointResolverOptions) * [type EndpointResolverV2](#EndpointResolverV2) * + [func NewDefaultEndpointResolverV2() EndpointResolverV2](#NewDefaultEndpointResolverV2) * [type HTTPClient](#HTTPClient) * [type HTTPSignerV4](#HTTPSignerV4) * [type ListTagsForResourceInput](#ListTagsForResourceInput) * [type ListTagsForResourceOutput](#ListTagsForResourceOutput) * [type ListTunnelsAPIClient](#ListTunnelsAPIClient) * [type ListTunnelsInput](#ListTunnelsInput) * [type ListTunnelsOutput](#ListTunnelsOutput) * [type ListTunnelsPaginator](#ListTunnelsPaginator) * + [func NewListTunnelsPaginator(client ListTunnelsAPIClient, params *ListTunnelsInput, ...) 
*ListTunnelsPaginator](#NewListTunnelsPaginator) * + [func (p *ListTunnelsPaginator) HasMorePages() bool](#ListTunnelsPaginator.HasMorePages) + [func (p *ListTunnelsPaginator) NextPage(ctx context.Context, optFns ...func(*Options)) (*ListTunnelsOutput, error)](#ListTunnelsPaginator.NextPage) * [type ListTunnelsPaginatorOptions](#ListTunnelsPaginatorOptions) * [type OpenTunnelInput](#OpenTunnelInput) * [type OpenTunnelOutput](#OpenTunnelOutput) * [type Options](#Options) * + [func (o Options) Copy() Options](#Options.Copy) * [type ResolveEndpoint](#ResolveEndpoint) * + [func (m *ResolveEndpoint) HandleSerialize(ctx context.Context, in middleware.SerializeInput, ...) (out middleware.SerializeOutput, metadata middleware.Metadata, err error)](#ResolveEndpoint.HandleSerialize) + [func (*ResolveEndpoint) ID() string](#ResolveEndpoint.ID) * [type RotateTunnelAccessTokenInput](#RotateTunnelAccessTokenInput) * [type RotateTunnelAccessTokenOutput](#RotateTunnelAccessTokenOutput) * [type TagResourceInput](#TagResourceInput) * [type TagResourceOutput](#TagResourceOutput) * [type UntagResourceInput](#UntagResourceInput) * [type UntagResourceOutput](#UntagResourceOutput) ### Constants [¶](#pkg-constants) ``` const ServiceAPIVersion = "2018-10-05" ``` ``` const ServiceID = "IoTSecureTunneling" ``` ### Variables [¶](#pkg-variables) This section is empty. 
### Functions [¶](#pkg-functions) #### func [NewDefaultEndpointResolver](https://github.com/aws/aws-sdk-go-v2/blob/service/iotsecuretunneling/v1.17.2/service/iotsecuretunneling/endpoints.go#L33) [¶](#NewDefaultEndpointResolver) ``` func NewDefaultEndpointResolver() *[internalendpoints](/github.com/aws/aws-sdk-go-v2/service/iotsecuretunneling@v1.17.2/internal/endpoints).[Resolver](/github.com/aws/aws-sdk-go-v2/service/iotsecuretunneling@v1.17.2/internal/endpoints#Resolver) ``` NewDefaultEndpointResolver constructs a new service endpoint resolver #### func [WithAPIOptions](https://github.com/aws/aws-sdk-go-v2/blob/service/iotsecuretunneling/v1.17.2/service/iotsecuretunneling/api_client.go#L153) [¶](#WithAPIOptions) added in v1.0.0 ``` func WithAPIOptions(optFns ...func(*[middleware](/github.com/aws/smithy-go/middleware).[Stack](/github.com/aws/smithy-go/middleware#Stack)) [error](/builtin#error)) func(*[Options](#Options)) ``` WithAPIOptions returns a functional option for setting the Client's APIOptions option. #### func [WithEndpointResolver](https://github.com/aws/aws-sdk-go-v2/blob/service/iotsecuretunneling/v1.17.2/service/iotsecuretunneling/api_client.go#L164) deprecated ``` func WithEndpointResolver(v [EndpointResolver](#EndpointResolver)) func(*[Options](#Options)) ``` Deprecated: EndpointResolver and WithEndpointResolver. Providing a value for this field will likely prevent you from using any endpoint-related service features released after the introduction of EndpointResolverV2 and BaseEndpoint. To migrate an EndpointResolver implementation that uses a custom endpoint, set the client option BaseEndpoint instead. 
#### func [WithEndpointResolverV2](https://github.com/aws/aws-sdk-go-v2/blob/service/iotsecuretunneling/v1.17.2/service/iotsecuretunneling/api_client.go#L172) [¶](#WithEndpointResolverV2) added in v1.16.0 ``` func WithEndpointResolverV2(v [EndpointResolverV2](#EndpointResolverV2)) func(*[Options](#Options)) ``` WithEndpointResolverV2 returns a functional option for setting the Client's EndpointResolverV2 option. ### Types [¶](#pkg-types) #### type [Client](https://github.com/aws/aws-sdk-go-v2/blob/service/iotsecuretunneling/v1.17.2/service/iotsecuretunneling/api_client.go#L30) [¶](#Client) ``` type Client struct { // contains filtered or unexported fields } ``` Client provides the API client to make operations call for AWS IoT Secure Tunneling. #### func [New](https://github.com/aws/aws-sdk-go-v2/blob/service/iotsecuretunneling/v1.17.2/service/iotsecuretunneling/api_client.go#L37) [¶](#New) ``` func New(options [Options](#Options), optFns ...func(*[Options](#Options))) *[Client](#Client) ``` New returns an initialized Client based on the functional options. Provide additional functional options to further configure the behavior of the client, such as changing the client's endpoint or adding custom middleware behavior. #### func [NewFromConfig](https://github.com/aws/aws-sdk-go-v2/blob/service/iotsecuretunneling/v1.17.2/service/iotsecuretunneling/api_client.go#L281) [¶](#NewFromConfig) ``` func NewFromConfig(cfg [aws](/github.com/aws/aws-sdk-go-v2/aws).[Config](/github.com/aws/aws-sdk-go-v2/aws#Config), optFns ...func(*[Options](#Options))) *[Client](#Client) ``` NewFromConfig returns a new client from the provided config. 
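New and NewFromConfig both use Go's functional-options idiom: each entry in optFns receives a pointer to the client's Options value and mutates it before the client is built. A minimal stdlib-only sketch of the idiom (the Options fields and WithRegion helper below are illustrative stand-ins, not the SDK's actual definitions):

```go
package main

import "fmt"

// Options mirrors the shape of the SDK's client options: plain fields
// that functional options mutate before the client is constructed.
type Options struct {
	Region string
	AppID  string
}

// WithRegion returns a functional option that sets the Region field.
func WithRegion(region string) func(*Options) {
	return func(o *Options) { o.Region = region }
}

// New applies each option, in order, to a copy of the base options,
// so callers can layer overrides without mutating the base value.
func New(base Options, optFns ...func(*Options)) Options {
	for _, fn := range optFns {
		fn(&base)
	}
	return base
}

func main() {
	opts := New(Options{Region: "us-east-1"}, WithRegion("eu-west-1"))
	fmt.Println(opts.Region)
}
```

Because base is passed by value, the caller's original Options is never mutated; only the copy handed to the option functions is.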
#### func (*Client) [CloseTunnel](https://github.com/aws/aws-sdk-go-v2/blob/service/iotsecuretunneling/v1.17.2/service/iotsecuretunneling/api_op_CloseTunnel.go#L23) [¶](#Client.CloseTunnel) ``` func (c *[Client](#Client)) CloseTunnel(ctx [context](/context).[Context](/context#Context), params *[CloseTunnelInput](#CloseTunnelInput), optFns ...func(*[Options](#Options))) (*[CloseTunnelOutput](#CloseTunnelOutput), [error](/builtin#error)) ``` Closes a tunnel identified by the unique tunnel id. When a CloseTunnel request is received, we close the WebSocket connections between the client and proxy server so no data can be transmitted. Requires permission to access the CloseTunnel (<https://docs.aws.amazon.com/service-authorization/latest/reference/list_awsiot.html#awsiot-actions-as-permissions>) action. #### func (*Client) [DescribeTunnel](https://github.com/aws/aws-sdk-go-v2/blob/service/iotsecuretunneling/v1.17.2/service/iotsecuretunneling/api_op_DescribeTunnel.go#L22) [¶](#Client.DescribeTunnel) ``` func (c *[Client](#Client)) DescribeTunnel(ctx [context](/context).[Context](/context#Context), params *[DescribeTunnelInput](#DescribeTunnelInput), optFns ...func(*[Options](#Options))) (*[DescribeTunnelOutput](#DescribeTunnelOutput), [error](/builtin#error)) ``` Gets information about a tunnel identified by the unique tunnel id. Requires permission to access the DescribeTunnel (<https://docs.aws.amazon.com/service-authorization/latest/reference/list_awsiot.html#awsiot-actions-as-permissions>) action. 
#### func (*Client) [ListTagsForResource](https://github.com/aws/aws-sdk-go-v2/blob/service/iotsecuretunneling/v1.17.2/service/iotsecuretunneling/api_op_ListTagsForResource.go#L20) [¶](#Client.ListTagsForResource) ``` func (c *[Client](#Client)) ListTagsForResource(ctx [context](/context).[Context](/context#Context), params *[ListTagsForResourceInput](#ListTagsForResourceInput), optFns ...func(*[Options](#Options))) (*[ListTagsForResourceOutput](#ListTagsForResourceOutput), [error](/builtin#error)) ``` Lists the tags for the specified resource. #### func (*Client) [ListTunnels](https://github.com/aws/aws-sdk-go-v2/blob/service/iotsecuretunneling/v1.17.2/service/iotsecuretunneling/api_op_ListTunnels.go#L23) [¶](#Client.ListTunnels) ``` func (c *[Client](#Client)) ListTunnels(ctx [context](/context).[Context](/context#Context), params *[ListTunnelsInput](#ListTunnelsInput), optFns ...func(*[Options](#Options))) (*[ListTunnelsOutput](#ListTunnelsOutput), [error](/builtin#error)) ``` List all tunnels for an Amazon Web Services account. Tunnels are listed by creation time in descending order, newer tunnels will be listed before older tunnels. Requires permission to access the ListTunnels (<https://docs.aws.amazon.com/service-authorization/latest/reference/list_awsiot.html#awsiot-actions-as-permissions>) action. #### func (*Client) [OpenTunnel](https://github.com/aws/aws-sdk-go-v2/blob/service/iotsecuretunneling/v1.17.2/service/iotsecuretunneling/api_op_OpenTunnel.go#L23) [¶](#Client.OpenTunnel) ``` func (c *[Client](#Client)) OpenTunnel(ctx [context](/context).[Context](/context#Context), params *[OpenTunnelInput](#OpenTunnelInput), optFns ...func(*[Options](#Options))) (*[OpenTunnelOutput](#OpenTunnelOutput), [error](/builtin#error)) ``` Creates a new tunnel, and returns two client access tokens for clients to use to connect to the IoT Secure Tunneling proxy server. 
Requires permission to access the OpenTunnel (<https://docs.aws.amazon.com/service-authorization/latest/reference/list_awsiot.html#awsiot-actions-as-permissions>) action. #### func (*Client) [RotateTunnelAccessToken](https://github.com/aws/aws-sdk-go-v2/blob/service/iotsecuretunneling/v1.17.2/service/iotsecuretunneling/api_op_RotateTunnelAccessToken.go#L26) [¶](#Client.RotateTunnelAccessToken) added in v1.13.0 ``` func (c *[Client](#Client)) RotateTunnelAccessToken(ctx [context](/context).[Context](/context#Context), params *[RotateTunnelAccessTokenInput](#RotateTunnelAccessTokenInput), optFns ...func(*[Options](#Options))) (*[RotateTunnelAccessTokenOutput](#RotateTunnelAccessTokenOutput), [error](/builtin#error)) ``` Revokes the current client access token (CAT) and returns a new CAT for clients to use when reconnecting to secure tunneling to access the same tunnel. Requires permission to access the RotateTunnelAccessToken (<https://docs.aws.amazon.com/service-authorization/latest/reference/list_awsiot.html#awsiot-actions-as-permissions>) action. Rotating the CAT doesn't extend the tunnel duration. For example, say the tunnel duration is 12 hours and the tunnel has already been open for 4 hours. When you rotate the access tokens, the new tokens that are generated can only be used for the remaining 8 hours. #### func (*Client) [TagResource](https://github.com/aws/aws-sdk-go-v2/blob/service/iotsecuretunneling/v1.17.2/service/iotsecuretunneling/api_op_TagResource.go#L20) [¶](#Client.TagResource) ``` func (c *[Client](#Client)) TagResource(ctx [context](/context).[Context](/context#Context), params *[TagResourceInput](#TagResourceInput), optFns ...func(*[Options](#Options))) (*[TagResourceOutput](#TagResourceOutput), [error](/builtin#error)) ``` Adds tags to the specified resource.
#### func (*Client) [UntagResource](https://github.com/aws/aws-sdk-go-v2/blob/service/iotsecuretunneling/v1.17.2/service/iotsecuretunneling/api_op_UntagResource.go#L19) [¶](#Client.UntagResource) ``` func (c *[Client](#Client)) UntagResource(ctx [context](/context).[Context](/context#Context), params *[UntagResourceInput](#UntagResourceInput), optFns ...func(*[Options](#Options))) (*[UntagResourceOutput](#UntagResourceOutput), [error](/builtin#error)) ``` Removes a tag from a resource. #### type [CloseTunnelInput](https://github.com/aws/aws-sdk-go-v2/blob/service/iotsecuretunneling/v1.17.2/service/iotsecuretunneling/api_op_CloseTunnel.go#L38) [¶](#CloseTunnelInput) ``` type CloseTunnelInput struct { // The ID of the tunnel to close. // // This member is required. TunnelId *[string](/builtin#string) // When set to true, IoT Secure Tunneling deletes the tunnel data immediately. Delete *[bool](/builtin#bool) // contains filtered or unexported fields } ``` #### type [CloseTunnelOutput](https://github.com/aws/aws-sdk-go-v2/blob/service/iotsecuretunneling/v1.17.2/service/iotsecuretunneling/api_op_CloseTunnel.go#L51) [¶](#CloseTunnelOutput) ``` type CloseTunnelOutput struct { // Metadata pertaining to the operation's result. ResultMetadata [middleware](/github.com/aws/smithy-go/middleware).[Metadata](/github.com/aws/smithy-go/middleware#Metadata) // contains filtered or unexported fields } ``` #### type [DescribeTunnelInput](https://github.com/aws/aws-sdk-go-v2/blob/service/iotsecuretunneling/v1.17.2/service/iotsecuretunneling/api_op_DescribeTunnel.go#L37) [¶](#DescribeTunnelInput) ``` type DescribeTunnelInput struct { // The tunnel to describe. // // This member is required. 
TunnelId *[string](/builtin#string) // contains filtered or unexported fields } ``` #### type [DescribeTunnelOutput](https://github.com/aws/aws-sdk-go-v2/blob/service/iotsecuretunneling/v1.17.2/service/iotsecuretunneling/api_op_DescribeTunnel.go#L47) [¶](#DescribeTunnelOutput) ``` type DescribeTunnelOutput struct { // The tunnel being described. Tunnel *[types](/github.com/aws/aws-sdk-go-v2/service/iotsecuretunneling@v1.17.2/types).[Tunnel](/github.com/aws/aws-sdk-go-v2/service/iotsecuretunneling@v1.17.2/types#Tunnel) // Metadata pertaining to the operation's result. ResultMetadata [middleware](/github.com/aws/smithy-go/middleware).[Metadata](/github.com/aws/smithy-go/middleware#Metadata) // contains filtered or unexported fields } ``` #### type [EndpointParameters](https://github.com/aws/aws-sdk-go-v2/blob/service/iotsecuretunneling/v1.17.2/service/iotsecuretunneling/endpoints.go#L265) [¶](#EndpointParameters) added in v1.16.0 ``` type EndpointParameters struct { // The AWS region used to dispatch the request. // // Parameter is // required. // // AWS::Region Region *[string](/builtin#string) // When true, use the dual-stack endpoint. If the configured endpoint does not // support dual-stack, dispatching the request MAY return an error. // // Defaults to // false if no value is provided. // // AWS::UseDualStack UseDualStack *[bool](/builtin#bool) // When true, send this request to the FIPS-compliant regional endpoint. If the // configured endpoint does not have a FIPS compliant endpoint, dispatching the // request will return an error. // // Defaults to false if no value is // provided. // // AWS::UseFIPS UseFIPS *[bool](/builtin#bool) // Override the endpoint used to send this request // // Parameter is // required. // // SDK::Endpoint Endpoint *[string](/builtin#string) } ``` EndpointParameters provides the parameters that influence how endpoints are resolved. 
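EndpointParameters uses pointer-typed members so that nil can mean "not provided", as distinct from an explicit zero value such as false. A stdlib-only sketch of that pointer-field pattern and the validate/defaults methods it enables (all names here are illustrative, not the SDK's actual code):

```go
package main

import "fmt"

// Params mimics the layout of EndpointParameters: pointer fields so
// nil distinguishes "unset" from an explicit zero value.
type Params struct {
	Region       *string
	UseDualStack *bool
}

// ValidateRequired reports an error when a required member is unset.
func (p Params) ValidateRequired() error {
	if p.Region == nil {
		return fmt.Errorf("parameter Region is required")
	}
	return nil
}

// WithDefaults returns a shallow copy with defaults applied to
// optional members the caller left nil.
func (p Params) WithDefaults() Params {
	if p.UseDualStack == nil {
		f := false
		p.UseDualStack = &f
	}
	return p
}

func main() {
	p := Params{}.WithDefaults()
	fmt.Println(*p.UseDualStack, p.ValidateRequired())
}
```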
#### func (EndpointParameters) [ValidateRequired](https://github.com/aws/aws-sdk-go-v2/blob/service/iotsecuretunneling/v1.17.2/service/iotsecuretunneling/endpoints.go#L303) [¶](#EndpointParameters.ValidateRequired) added in v1.16.0 ``` func (p [EndpointParameters](#EndpointParameters)) ValidateRequired() [error](/builtin#error) ``` ValidateRequired validates required parameters are set. #### func (EndpointParameters) [WithDefaults](https://github.com/aws/aws-sdk-go-v2/blob/service/iotsecuretunneling/v1.17.2/service/iotsecuretunneling/endpoints.go#L317) [¶](#EndpointParameters.WithDefaults) added in v1.16.0 ``` func (p [EndpointParameters](#EndpointParameters)) WithDefaults() [EndpointParameters](#EndpointParameters) ``` WithDefaults returns a shallow copy of EndpointParameters with default values applied to members where applicable. #### type [EndpointResolver](https://github.com/aws/aws-sdk-go-v2/blob/service/iotsecuretunneling/v1.17.2/service/iotsecuretunneling/endpoints.go#L26) [¶](#EndpointResolver) ``` type EndpointResolver interface { ResolveEndpoint(region [string](/builtin#string), options [EndpointResolverOptions](#EndpointResolverOptions)) ([aws](/github.com/aws/aws-sdk-go-v2/aws).[Endpoint](/github.com/aws/aws-sdk-go-v2/aws#Endpoint), [error](/builtin#error)) } ``` EndpointResolver interface for resolving service endpoints. #### func [EndpointResolverFromURL](https://github.com/aws/aws-sdk-go-v2/blob/service/iotsecuretunneling/v1.17.2/service/iotsecuretunneling/endpoints.go#L51) [¶](#EndpointResolverFromURL) added in v1.1.0 ``` func EndpointResolverFromURL(url [string](/builtin#string), optFns ...func(*[aws](/github.com/aws/aws-sdk-go-v2/aws).[Endpoint](/github.com/aws/aws-sdk-go-v2/aws#Endpoint))) [EndpointResolver](#EndpointResolver) ``` EndpointResolverFromURL returns an EndpointResolver configured using the provided endpoint url.
By default, the resolved endpoint resolver uses the client region as signing region, and the endpoint source is set to EndpointSourceCustom. You can provide functional options to configure endpoint values for the resolved endpoint. #### type [EndpointResolverFunc](https://github.com/aws/aws-sdk-go-v2/blob/service/iotsecuretunneling/v1.17.2/service/iotsecuretunneling/endpoints.go#L40) [¶](#EndpointResolverFunc) ``` type EndpointResolverFunc func(region [string](/builtin#string), options [EndpointResolverOptions](#EndpointResolverOptions)) ([aws](/github.com/aws/aws-sdk-go-v2/aws).[Endpoint](/github.com/aws/aws-sdk-go-v2/aws#Endpoint), [error](/builtin#error)) ``` EndpointResolverFunc is a helper utility that wraps a function so it satisfies the EndpointResolver interface. This is useful when you want to add additional endpoint resolving logic, or stub out specific endpoints with custom values. #### func (EndpointResolverFunc) [ResolveEndpoint](https://github.com/aws/aws-sdk-go-v2/blob/service/iotsecuretunneling/v1.17.2/service/iotsecuretunneling/endpoints.go#L42) [¶](#EndpointResolverFunc.ResolveEndpoint) ``` func (fn [EndpointResolverFunc](#EndpointResolverFunc)) ResolveEndpoint(region [string](/builtin#string), options [EndpointResolverOptions](#EndpointResolverOptions)) (endpoint [aws](/github.com/aws/aws-sdk-go-v2/aws).[Endpoint](/github.com/aws/aws-sdk-go-v2/aws#Endpoint), err [error](/builtin#error)) ``` #### type [EndpointResolverOptions](https://github.com/aws/aws-sdk-go-v2/blob/service/iotsecuretunneling/v1.17.2/service/iotsecuretunneling/endpoints.go#L23) [¶](#EndpointResolverOptions) added in v0.29.0 ``` type EndpointResolverOptions = [internalendpoints](/github.com/aws/aws-sdk-go-v2/service/iotsecuretunneling@v1.17.2/internal/endpoints).[Options](/github.com/aws/aws-sdk-go-v2/service/iotsecuretunneling@v1.17.2/internal/endpoints#Options) ``` EndpointResolverOptions is the service endpoint resolver options #### type
[EndpointResolverV2](https://github.com/aws/aws-sdk-go-v2/blob/service/iotsecuretunneling/v1.17.2/service/iotsecuretunneling/endpoints.go#L329) [¶](#EndpointResolverV2) added in v1.16.0 ``` type EndpointResolverV2 interface { // ResolveEndpoint attempts to resolve the endpoint with the provided options, // returning the endpoint if found. Otherwise an error is returned. ResolveEndpoint(ctx [context](/context).[Context](/context#Context), params [EndpointParameters](#EndpointParameters)) ( [smithyendpoints](/github.com/aws/smithy-go/endpoints).[Endpoint](/github.com/aws/smithy-go/endpoints#Endpoint), [error](/builtin#error), ) } ``` EndpointResolverV2 provides the interface for resolving service endpoints. #### func [NewDefaultEndpointResolverV2](https://github.com/aws/aws-sdk-go-v2/blob/service/iotsecuretunneling/v1.17.2/service/iotsecuretunneling/endpoints.go#L340) [¶](#NewDefaultEndpointResolverV2) added in v1.16.0 ``` func NewDefaultEndpointResolverV2() [EndpointResolverV2](#EndpointResolverV2) ``` #### type [HTTPClient](https://github.com/aws/aws-sdk-go-v2/blob/service/iotsecuretunneling/v1.17.2/service/iotsecuretunneling/api_client.go#L178) [¶](#HTTPClient) ``` type HTTPClient interface { Do(*[http](/net/http).[Request](/net/http#Request)) (*[http](/net/http).[Response](/net/http#Response), [error](/builtin#error)) } ``` #### type [HTTPSignerV4](https://github.com/aws/aws-sdk-go-v2/blob/service/iotsecuretunneling/v1.17.2/service/iotsecuretunneling/api_client.go#L426) [¶](#HTTPSignerV4) ``` type HTTPSignerV4 interface { SignHTTP(ctx [context](/context).[Context](/context#Context), credentials [aws](/github.com/aws/aws-sdk-go-v2/aws).[Credentials](/github.com/aws/aws-sdk-go-v2/aws#Credentials), r *[http](/net/http).[Request](/net/http#Request), payloadHash [string](/builtin#string), service [string](/builtin#string), region [string](/builtin#string), signingTime [time](/time).[Time](/time#Time), optFns 
...func(*[v4](/github.com/aws/aws-sdk-go-v2/aws/signer/v4).[SignerOptions](/github.com/aws/aws-sdk-go-v2/aws/signer/v4#SignerOptions))) [error](/builtin#error) } ``` #### type [ListTagsForResourceInput](https://github.com/aws/aws-sdk-go-v2/blob/service/iotsecuretunneling/v1.17.2/service/iotsecuretunneling/api_op_ListTagsForResource.go#L35) [¶](#ListTagsForResourceInput) ``` type ListTagsForResourceInput struct { // The resource ARN. // // This member is required. ResourceArn *[string](/builtin#string) // contains filtered or unexported fields } ``` #### type [ListTagsForResourceOutput](https://github.com/aws/aws-sdk-go-v2/blob/service/iotsecuretunneling/v1.17.2/service/iotsecuretunneling/api_op_ListTagsForResource.go#L45) [¶](#ListTagsForResourceOutput) ``` type ListTagsForResourceOutput struct { // The tags for the specified resource. Tags [][types](/github.com/aws/aws-sdk-go-v2/service/iotsecuretunneling@v1.17.2/types).[Tag](/github.com/aws/aws-sdk-go-v2/service/iotsecuretunneling@v1.17.2/types#Tag) // Metadata pertaining to the operation's result. ResultMetadata [middleware](/github.com/aws/smithy-go/middleware).[Metadata](/github.com/aws/smithy-go/middleware#Metadata) // contains filtered or unexported fields } ``` #### type [ListTunnelsAPIClient](https://github.com/aws/aws-sdk-go-v2/blob/service/iotsecuretunneling/v1.17.2/service/iotsecuretunneling/api_op_ListTunnels.go#L141) [¶](#ListTunnelsAPIClient) added in v0.30.0 ``` type ListTunnelsAPIClient interface { ListTunnels([context](/context).[Context](/context#Context), *[ListTunnelsInput](#ListTunnelsInput), ...func(*[Options](#Options))) (*[ListTunnelsOutput](#ListTunnelsOutput), [error](/builtin#error)) } ``` ListTunnelsAPIClient is a client that implements the ListTunnels operation. 
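EndpointResolverFunc, above, is an instance of the same adapter idiom as net/http's HandlerFunc: a named function type satisfies a one-method interface by calling itself, so a plain closure can be used anywhere the interface is expected. A stdlib-only sketch with illustrative names:

```go
package main

import "fmt"

// Resolver is the one-method interface the client consumes.
type Resolver interface {
	Resolve(region string) (string, error)
}

// ResolverFunc adapts a plain function to the Resolver interface,
// mirroring how EndpointResolverFunc wraps a function.
type ResolverFunc func(region string) (string, error)

// Resolve satisfies Resolver by invoking the wrapped function.
func (fn ResolverFunc) Resolve(region string) (string, error) {
	return fn(region)
}

func main() {
	// A closure becomes a Resolver with no named struct needed,
	// which is handy for stubbing endpoints in tests.
	var r Resolver = ResolverFunc(func(region string) (string, error) {
		return "https://service." + region + ".example.com", nil
	})
	url, _ := r.Resolve("us-east-1")
	fmt.Println(url)
}
```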
#### type [ListTunnelsInput](https://github.com/aws/aws-sdk-go-v2/blob/service/iotsecuretunneling/v1.17.2/service/iotsecuretunneling/api_op_ListTunnels.go#L38) [¶](#ListTunnelsInput) ``` type ListTunnelsInput struct { // The maximum number of results to return at once. MaxResults *[int32](/builtin#int32) // To retrieve the next set of results, the nextToken value from a previous // response; otherwise null to receive the first set of results. NextToken *[string](/builtin#string) // The name of the IoT thing associated with the destination device. ThingName *[string](/builtin#string) // contains filtered or unexported fields } ``` #### type [ListTunnelsOutput](https://github.com/aws/aws-sdk-go-v2/blob/service/iotsecuretunneling/v1.17.2/service/iotsecuretunneling/api_op_ListTunnels.go#L53) [¶](#ListTunnelsOutput) ``` type ListTunnelsOutput struct { // The token to use to get the next set of results, or null if there are no // additional results. NextToken *[string](/builtin#string) // A short description of the tunnels in an Amazon Web Services account. TunnelSummaries [][types](/github.com/aws/aws-sdk-go-v2/service/iotsecuretunneling@v1.17.2/types).[TunnelSummary](/github.com/aws/aws-sdk-go-v2/service/iotsecuretunneling@v1.17.2/types#TunnelSummary) // Metadata pertaining to the operation's result. 
ResultMetadata [middleware](/github.com/aws/smithy-go/middleware).[Metadata](/github.com/aws/smithy-go/middleware#Metadata) // contains filtered or unexported fields } ``` #### type [ListTunnelsPaginator](https://github.com/aws/aws-sdk-go-v2/blob/service/iotsecuretunneling/v1.17.2/service/iotsecuretunneling/api_op_ListTunnels.go#L158) [¶](#ListTunnelsPaginator) added in v0.30.0 ``` type ListTunnelsPaginator struct { // contains filtered or unexported fields } ``` ListTunnelsPaginator is a paginator for ListTunnels #### func [NewListTunnelsPaginator](https://github.com/aws/aws-sdk-go-v2/blob/service/iotsecuretunneling/v1.17.2/service/iotsecuretunneling/api_op_ListTunnels.go#L167) [¶](#NewListTunnelsPaginator) added in v0.30.0 ``` func NewListTunnelsPaginator(client [ListTunnelsAPIClient](#ListTunnelsAPIClient), params *[ListTunnelsInput](#ListTunnelsInput), optFns ...func(*[ListTunnelsPaginatorOptions](#ListTunnelsPaginatorOptions))) *[ListTunnelsPaginator](#ListTunnelsPaginator) ``` NewListTunnelsPaginator returns a new ListTunnelsPaginator #### func (*ListTunnelsPaginator) [HasMorePages](https://github.com/aws/aws-sdk-go-v2/blob/service/iotsecuretunneling/v1.17.2/service/iotsecuretunneling/api_op_ListTunnels.go#L191) [¶](#ListTunnelsPaginator.HasMorePages) added in v0.30.0 ``` func (p *[ListTunnelsPaginator](#ListTunnelsPaginator)) HasMorePages() [bool](/builtin#bool) ``` HasMorePages returns a boolean indicating whether more pages are available #### func (*ListTunnelsPaginator) [NextPage](https://github.com/aws/aws-sdk-go-v2/blob/service/iotsecuretunneling/v1.17.2/service/iotsecuretunneling/api_op_ListTunnels.go#L196) [¶](#ListTunnelsPaginator.NextPage) added in v0.30.0 ``` func (p *[ListTunnelsPaginator](#ListTunnelsPaginator)) NextPage(ctx [context](/context).[Context](/context#Context), optFns ...func(*[Options](#Options))) (*[ListTunnelsOutput](#ListTunnelsOutput), [error](/builtin#error)) ``` NextPage retrieves the next ListTunnels page. 
#### type [ListTunnelsPaginatorOptions](https://github.com/aws/aws-sdk-go-v2/blob/service/iotsecuretunneling/v1.17.2/service/iotsecuretunneling/api_op_ListTunnels.go#L148) [¶](#ListTunnelsPaginatorOptions) added in v0.30.0 ``` type ListTunnelsPaginatorOptions struct { // The maximum number of results to return at once. Limit [int32](/builtin#int32) // Set to true if pagination should stop if the service returns a pagination token // that matches the most recent token provided to the service. StopOnDuplicateToken [bool](/builtin#bool) } ``` ListTunnelsPaginatorOptions is the paginator options for ListTunnels #### type [OpenTunnelInput](https://github.com/aws/aws-sdk-go-v2/blob/service/iotsecuretunneling/v1.17.2/service/iotsecuretunneling/api_op_OpenTunnel.go#L38) [¶](#OpenTunnelInput) ``` type OpenTunnelInput struct { // A short text description of the tunnel. Description *[string](/builtin#string) // The destination configuration for the OpenTunnel request. DestinationConfig *[types](/github.com/aws/aws-sdk-go-v2/service/iotsecuretunneling@v1.17.2/types).[DestinationConfig](/github.com/aws/aws-sdk-go-v2/service/iotsecuretunneling@v1.17.2/types#DestinationConfig) // A collection of tag metadata. Tags [][types](/github.com/aws/aws-sdk-go-v2/service/iotsecuretunneling@v1.17.2/types).[Tag](/github.com/aws/aws-sdk-go-v2/service/iotsecuretunneling@v1.17.2/types#Tag) // Timeout configuration for a tunnel. TimeoutConfig *[types](/github.com/aws/aws-sdk-go-v2/service/iotsecuretunneling@v1.17.2/types).[TimeoutConfig](/github.com/aws/aws-sdk-go-v2/service/iotsecuretunneling@v1.17.2/types#TimeoutConfig) // contains filtered or unexported fields } ``` #### type [OpenTunnelOutput](https://github.com/aws/aws-sdk-go-v2/blob/service/iotsecuretunneling/v1.17.2/service/iotsecuretunneling/api_op_OpenTunnel.go#L55) [¶](#OpenTunnelOutput) ``` type OpenTunnelOutput struct { // The access token the destination local proxy uses to connect to IoT Secure // Tunneling. 
DestinationAccessToken *[string](/builtin#string) // The access token the source local proxy uses to connect to IoT Secure Tunneling. SourceAccessToken *[string](/builtin#string) // The Amazon Resource Name for the tunnel. TunnelArn *[string](/builtin#string) // A unique alpha-numeric tunnel ID. TunnelId *[string](/builtin#string) // Metadata pertaining to the operation's result. ResultMetadata [middleware](/github.com/aws/smithy-go/middleware).[Metadata](/github.com/aws/smithy-go/middleware#Metadata) // contains filtered or unexported fields } ``` #### type [Options](https://github.com/aws/aws-sdk-go-v2/blob/service/iotsecuretunneling/v1.17.2/service/iotsecuretunneling/api_client.go#L61) [¶](#Options) ``` type Options struct { // Set of options to modify how an operation is invoked. These apply to all // operations invoked for this client. Use functional options on operation call to // modify this list for per operation behavior. APIOptions []func(*[middleware](/github.com/aws/smithy-go/middleware).[Stack](/github.com/aws/smithy-go/middleware#Stack)) [error](/builtin#error) // The optional application specific identifier appended to the User-Agent header. AppID [string](/builtin#string) // This endpoint will be given as input to an EndpointResolverV2. It is used for // providing a custom base endpoint that is subject to modifications by the // processing EndpointResolverV2. BaseEndpoint *[string](/builtin#string) // Configures the events that will be sent to the configured logger. ClientLogMode [aws](/github.com/aws/aws-sdk-go-v2/aws).[ClientLogMode](/github.com/aws/aws-sdk-go-v2/aws#ClientLogMode) // The credentials object to use when signing requests. Credentials [aws](/github.com/aws/aws-sdk-go-v2/aws).[CredentialsProvider](/github.com/aws/aws-sdk-go-v2/aws#CredentialsProvider) // The configuration DefaultsMode that the SDK should use when constructing the // clients initial default settings. 
DefaultsMode [aws](/github.com/aws/aws-sdk-go-v2/aws).[DefaultsMode](/github.com/aws/aws-sdk-go-v2/aws#DefaultsMode) // The endpoint options to be used when attempting to resolve an endpoint. EndpointOptions [EndpointResolverOptions](#EndpointResolverOptions) // The service endpoint resolver. // // Deprecated: EndpointResolver and WithEndpointResolver. Providing a // value for this field will likely prevent you from using any endpoint-related // service features released after the introduction of EndpointResolverV2 and // BaseEndpoint. To migrate an EndpointResolver implementation that uses a custom // endpoint, set the client option BaseEndpoint instead. EndpointResolver [EndpointResolver](#EndpointResolver) // Resolves the endpoint used for a particular service. This should be used over // the deprecated EndpointResolver. EndpointResolverV2 [EndpointResolverV2](#EndpointResolverV2) // Signature Version 4 (SigV4) Signer HTTPSignerV4 [HTTPSignerV4](#HTTPSignerV4) // The logger writer interface to write logging messages to. Logger [logging](/github.com/aws/smithy-go/logging).[Logger](/github.com/aws/smithy-go/logging#Logger) // The region to send requests to. (Required) Region [string](/builtin#string) // RetryMaxAttempts specifies the maximum number of attempts an API client will call // an operation that fails with a retryable error. A value of 0 is ignored, and // will not be used to configure the API client created default retryer, or modify // per operation call's retry max attempts. When creating a new API Clients this // member will only be used if the Retryer Options member is nil. This value will // be ignored if Retryer is not nil. If specified in an operation call's functional // options with a value that is different than the constructed client's Options, // the Client's Retryer will be wrapped to use the operation's specific // RetryMaxAttempts value.
RetryMaxAttempts [int](/builtin#int) // RetryMode specifies the retry mode the API client will be created with, if // Retryer option is not also specified. When creating a new API Clients this // member will only be used if the Retryer Options member is nil. This value will // be ignored if Retryer is not nil. Currently does not support per operation call // overrides, may in the future. RetryMode [aws](/github.com/aws/aws-sdk-go-v2/aws).[RetryMode](/github.com/aws/aws-sdk-go-v2/aws#RetryMode) // Retryer guides how HTTP requests should be retried in case of recoverable // failures. When nil the API client will use a default retryer. The kind of // default retry created by the API client can be changed with the RetryMode // option. Retryer [aws](/github.com/aws/aws-sdk-go-v2/aws).[Retryer](/github.com/aws/aws-sdk-go-v2/aws#Retryer) // The RuntimeEnvironment configuration, only populated if the DefaultsMode is set // to DefaultsModeAuto and is initialized using config.LoadDefaultConfig . You // should not populate this structure programmatically, or rely on the values here // within your applications. RuntimeEnvironment [aws](/github.com/aws/aws-sdk-go-v2/aws).[RuntimeEnvironment](/github.com/aws/aws-sdk-go-v2/aws#RuntimeEnvironment) // The HTTP client to invoke API calls with. Defaults to client's default HTTP // implementation if nil. HTTPClient [HTTPClient](#HTTPClient) // contains filtered or unexported fields } ``` #### func (Options) [Copy](https://github.com/aws/aws-sdk-go-v2/blob/service/iotsecuretunneling/v1.17.2/service/iotsecuretunneling/api_client.go#L183) [¶](#Options.Copy) ``` func (o [Options](#Options)) Copy() [Options](#Options) ``` Copy creates a clone where the APIOptions list is deep copied. 
#### type [ResolveEndpoint](https://github.com/aws/aws-sdk-go-v2/blob/service/iotsecuretunneling/v1.17.2/service/iotsecuretunneling/endpoints.go#L67) [¶](#ResolveEndpoint) ``` type ResolveEndpoint struct { Resolver [EndpointResolver](#EndpointResolver) Options [EndpointResolverOptions](#EndpointResolverOptions) } ``` #### func (*ResolveEndpoint) [HandleSerialize](https://github.com/aws/aws-sdk-go-v2/blob/service/iotsecuretunneling/v1.17.2/service/iotsecuretunneling/endpoints.go#L76) [¶](#ResolveEndpoint.HandleSerialize) ``` func (m *[ResolveEndpoint](#ResolveEndpoint)) HandleSerialize(ctx [context](/context).[Context](/context#Context), in [middleware](/github.com/aws/smithy-go/middleware).[SerializeInput](/github.com/aws/smithy-go/middleware#SerializeInput), next [middleware](/github.com/aws/smithy-go/middleware).[SerializeHandler](/github.com/aws/smithy-go/middleware#SerializeHandler)) ( out [middleware](/github.com/aws/smithy-go/middleware).[SerializeOutput](/github.com/aws/smithy-go/middleware#SerializeOutput), metadata [middleware](/github.com/aws/smithy-go/middleware).[Metadata](/github.com/aws/smithy-go/middleware#Metadata), err [error](/builtin#error), ) ``` #### func (*ResolveEndpoint) [ID](https://github.com/aws/aws-sdk-go-v2/blob/service/iotsecuretunneling/v1.17.2/service/iotsecuretunneling/endpoints.go#L72) [¶](#ResolveEndpoint.ID) ``` func (*[ResolveEndpoint](#ResolveEndpoint)) ID() [string](/builtin#string) ``` #### type [RotateTunnelAccessTokenInput](https://github.com/aws/aws-sdk-go-v2/blob/service/iotsecuretunneling/v1.17.2/service/iotsecuretunneling/api_op_RotateTunnelAccessToken.go#L41) [¶](#RotateTunnelAccessTokenInput) added in v1.13.0 ``` type RotateTunnelAccessTokenInput struct { // The mode of the client that will use the client token, which can be either the // source or destination, or both source and destination. // // This member is required. 
ClientMode [types](/github.com/aws/aws-sdk-go-v2/service/iotsecuretunneling@v1.17.2/types).[ClientMode](/github.com/aws/aws-sdk-go-v2/service/iotsecuretunneling@v1.17.2/types#ClientMode) // The tunnel for which you want to rotate the access tokens. // // This member is required. TunnelId *[string](/builtin#string) // The destination configuration. DestinationConfig *[types](/github.com/aws/aws-sdk-go-v2/service/iotsecuretunneling@v1.17.2/types).[DestinationConfig](/github.com/aws/aws-sdk-go-v2/service/iotsecuretunneling@v1.17.2/types#DestinationConfig) // contains filtered or unexported fields } ``` #### type [RotateTunnelAccessTokenOutput](https://github.com/aws/aws-sdk-go-v2/blob/service/iotsecuretunneling/v1.17.2/service/iotsecuretunneling/api_op_RotateTunnelAccessToken.go#L60) [¶](#RotateTunnelAccessTokenOutput) added in v1.13.0 ``` type RotateTunnelAccessTokenOutput struct { // The client access token that the destination local proxy uses to connect to IoT // Secure Tunneling. DestinationAccessToken *[string](/builtin#string) // The client access token that the source local proxy uses to connect to IoT // Secure Tunneling. SourceAccessToken *[string](/builtin#string) // The Amazon Resource Name for the tunnel. TunnelArn *[string](/builtin#string) // Metadata pertaining to the operation's result. ResultMetadata [middleware](/github.com/aws/smithy-go/middleware).[Metadata](/github.com/aws/smithy-go/middleware#Metadata) // contains filtered or unexported fields } ``` #### type [TagResourceInput](https://github.com/aws/aws-sdk-go-v2/blob/service/iotsecuretunneling/v1.17.2/service/iotsecuretunneling/api_op_TagResource.go#L35) [¶](#TagResourceInput) ``` type TagResourceInput struct { // The ARN of the resource. // // This member is required. ResourceArn *[string](/builtin#string) // The tags for the resource. // // This member is required. 
Tags [][types](/github.com/aws/aws-sdk-go-v2/service/iotsecuretunneling@v1.17.2/types).[Tag](/github.com/aws/aws-sdk-go-v2/service/iotsecuretunneling@v1.17.2/types#Tag) // contains filtered or unexported fields } ``` #### type [TagResourceOutput](https://github.com/aws/aws-sdk-go-v2/blob/service/iotsecuretunneling/v1.17.2/service/iotsecuretunneling/api_op_TagResource.go#L50) [¶](#TagResourceOutput) ``` type TagResourceOutput struct { // Metadata pertaining to the operation's result. ResultMetadata [middleware](/github.com/aws/smithy-go/middleware).[Metadata](/github.com/aws/smithy-go/middleware#Metadata) // contains filtered or unexported fields } ``` #### type [UntagResourceInput](https://github.com/aws/aws-sdk-go-v2/blob/service/iotsecuretunneling/v1.17.2/service/iotsecuretunneling/api_op_UntagResource.go#L34) [¶](#UntagResourceInput) ``` type UntagResourceInput struct { // The resource ARN. // // This member is required. ResourceArn *[string](/builtin#string) // The keys of the tags to remove. // // This member is required. TagKeys [][string](/builtin#string) // contains filtered or unexported fields } ``` #### type [UntagResourceOutput](https://github.com/aws/aws-sdk-go-v2/blob/service/iotsecuretunneling/v1.17.2/service/iotsecuretunneling/api_op_UntagResource.go#L49) [¶](#UntagResourceOutput) ``` type UntagResourceOutput struct { // Metadata pertaining to the operation's result. ResultMetadata [middleware](/github.com/aws/smithy-go/middleware).[Metadata](/github.com/aws/smithy-go/middleware#Metadata) // contains filtered or unexported fields } ```
interoptopus_reference_project
Rust
Crate interoptopus\_reference\_project === A reference project for Interoptopus. This project tries to use every Interoptopus feature at least once. When submitting new features or making changes to existing ones the types and functions in here will ensure existing backends still work as expected. Note, many items here are deliberately not documented as testing how and if documentation is generated is part of the test. Modules --- * constantsVarious ways to define constants. * functionsFunctions using all supported type patterns. * patternsReference implementations of patterns. * typesAll supported type patterns. Functions --- * ffi\_inventory Module interoptopus\_reference\_project::constants === Various ways to define constants. Constants --- * COMPUTED\_I32 * F32\_MIN\_POSITIVE * U8 Module interoptopus\_reference\_project::functions === Functions using all supported type patterns. Functions --- * ambiguous\_1 * ambiguous\_2 * ambiguous\_3 * array\_1 * callback * complex\_args\_1 * complex\_args\_2 * documentedThis function has documentation. 
* generic\_1a * generic\_1b * generic\_1c * generic\_2 * generic\_3 * generic\_4 * many\_args\_5 * many\_args\_10 * namespaced\_inner\_option * namespaced\_inner\_slice * namespaced\_inner\_slice\_mut * namespaced\_type * panics * primitive\_bool * primitive\_i8 * primitive\_i16 * primitive\_i32 * primitive\_i64 * primitive\_u8 * primitive\_u16 * primitive\_u32 * primitive\_u64 * primitive\_void * primitive\_void2 * ptr * ptr\_mut⚠Safety * ptr\_ptr * ref\_mut\_option * ref\_mut\_simple * ref\_option * ref\_simple * renamed * repr\_transparent * sleep * tupled * visibility * weird\_1 Module interoptopus\_reference\_project::patterns === Reference implementations of patterns. Modules --- * api\_guard * ascii\_pointer * callbacks * option * primitives * result * service * slice Module interoptopus\_reference\_project::types === All supported type patterns. Modules --- * ambiguous1 * ambiguous2 * associated\_types * common Structs --- * Align1 * Align2 * Array * CallbackFFISlice * Container * EmptyEmpty structs are only allowed as opaques. * ExtraType * Generic * Generic2 * Generic3 * Generic4 * GenericArray * Opaque * Packed1 * Packed2 * Phantom * SomeContextThis can also be used for the `class` pattern. * SomeForeignType * StructDocumentedDocumented struct. * StructRenamedXYZ * Transparent * Tupled * UseAsciiStringPattern * Vec3f32 * Visibility1 * Visibility2 * Weird1 * Weird2 Enums --- * EnumDocumentedDocumented enum. * EnumRenamedXYZ Traits --- * Helper Functions --- * some\_foreign\_type Type Aliases --- * Callbacku8u8
GITHUB_papers-we-love_papers-we-love.zip_unzipped_shazam-audio-search-algorithm.pdf
An Industrial-Strength Audio Search Algorithm
<NAME> <EMAIL>
Shazam Entertainment, Ltd.
USA: 2925 Ross Road Palo Alto, CA 94303
United Kingdom: 375 Kensington High Street 4th Floor Block F London W14 8Q

We have developed and commercially deployed a flexible audio search engine. The algorithm is noise and distortion resistant, computationally efficient, and massively scalable, capable of quickly identifying a short segment of music captured through a cellphone microphone in the presence of foreground voices and other dominant noise, and through voice codec compression, out of a database of over a million tracks. The algorithm uses a combinatorially hashed time-frequency constellation analysis of the audio, yielding unusual properties such as transparency, in which multiple tracks mixed together may each be identified. Furthermore, for applications such as radio monitoring, search times on the order of a few milliseconds per query are attained, even on a massive music database.

1 Introduction

Shazam Entertainment, Ltd. was started in 2000 with the idea of providing a service that could connect people to music by using their mobile phones to recognize music playing in the environment. The algorithm had to be able to recognize a short audio sample of music that had been broadcast, mixed with heavy ambient noise, subject to reverb and other processing, captured by a little cellphone microphone, subjected to voice codec compression and network dropouts, all before arriving at our servers. The algorithm also had to perform the recognition quickly over a large database of music with nearly 2M tracks, and furthermore have a low number of false positives while having a high recognition rate. This was a hard problem, and at the time there were no algorithms known to us that could satisfy all these constraints. We eventually developed our own technique that met all the operational constraints [1]. 
We have deployed the algorithm to scale in our commercial music recognition service, with over 1.8M tracks in the database. The service is currently live in Germany, Finland, and the UK, with over a half million users, and will soon be available in additional countries in Europe, Asia, and the USA. The user experience is as follows: A user hears music playing in the environment. She calls up our service using her mobile phone and samples up to 15 seconds of audio. An identification is performed on the sample at our server, then the track title and artist are sent back to the user via SMS text messaging. The information is also made available on a web site, where the user may register and log in with her mobile phone number and password. At the web site, or on a smart phone, the user may view her tagged track list and buy the CD. The user may also download the ringtone corresponding to the tagged track, if it is available. The user may also send a 30-second clip of the song to a friend. Other services, such as purchasing an MP3 download may become available soon. A variety of similar consumer services has sprung up recently. Musiwave has deployed a similar mobile-phone music identification service on the Spanish mobile carrier Amena using Philips robust hashing algorithm [2-4]. Using the algorithm from Relatable, Neuros has included a sampling feature on their MP3 player which allows a user to collect a 30-second sample from the built-in radio, then later plug into an online server to identify the music [5,6]. Audible Magic uses the Muscle Fish algorithm to offer the Clango service for identifying audio streaming from an internet radio station [7-9]. The Shazam algorithm can be used in many applications besides just music recognition over a mobile phone. Due to the ability to dig deep into noise we can identify music hidden behind a loud voiceover, such as in a radio advert. 
On the other hand, the algorithm is also very fast and can be used for copyright monitoring at a search speed of over 1000 times realtime, thus enabling a modest server to monitor a significant number of media streams. The algorithm is also suitable for content-based cueing and indexing for library and archival uses.

2 Basic principle of operation

Each audio file is fingerprinted, a process in which reproducible hash tokens are extracted. Both database and sample audio files are subjected to the same analysis. The fingerprints from the unknown sample are matched against a large set of fingerprints derived from the music database. The candidate matches are subsequently evaluated for correctness of match. Some guiding principles for the attributes to use as fingerprints are that they should be temporally localized, translation-invariant, robust, and sufficiently entropic. The temporal locality guideline suggests that each fingerprint hash is calculated using audio samples near a corresponding point in time, so that distant events do not affect the hash. The translation-invariant aspect means that fingerprint hashes derived from corresponding matching content are reproducible independent of position within an audio file, as long as the temporal locality containing the data from which the hash is computed is contained within the file. This makes sense, as an unknown sample could come from any portion of the original audio track. Robustness means that hashes generated from the original clean database track should be reproducible from a degraded copy of the audio. Furthermore, the fingerprint tokens should have sufficiently high entropy in order to minimize the probability of false token matches at non-corresponding locations between the unknown sample and tracks within the database. 
Insufficient entropy leads to excessive and spurious matches at non-corresponding locations, requiring more processing power to cull the results, and too much entropy usually leads to fragility and non-reproducibility of fingerprint tokens in the presence of noise and distortion. There are 3 main components, presented in the next sections.

2.1 Robust Constellations

In order to address the problem of robust identification in the presence of highly significant noise and distortion, we experimented with a variety of candidate features that could survive GSM encoding in the presence of noise. We settled on spectrogram peaks, due to their robustness in the presence of noise and approximate linear superposability [1]. A time-frequency point is a candidate peak if it has a higher energy content than all its neighbors in a region centered around the point. Candidate peaks are chosen according to a density criterion in order to assure that the time-frequency strip for the audio file has reasonably uniform coverage. The peaks in each time-frequency locality are also chosen according to amplitude, with the justification that the highest amplitude peaks are most likely to survive the distortions listed above. Thus, a complicated spectrogram, as illustrated in Figure 1A, may be reduced to a sparse set of coordinates, as illustrated in Figure 1B. Notice that at this point the amplitude component has been eliminated. This reduction has the advantage of being fairly insensitive to EQ, as generally a peak in the spectrum is still a peak with the same coordinates in a filtered spectrum (assuming that the derivative of the filter transfer function is reasonably small; peaks in the vicinity of a sharp transition in the transfer function are slightly frequency-shifted). We term the sparse coordinate lists constellation maps since the coordinate scatter plots often resemble a star field. The pattern of dots should be the same for matching segments of audio. 
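The peak-picking step described above can be sketched in a few lines. This is an illustrative simplification, not Shazam's implementation: the `neighborhood` size, the strict-maximum rule, and the positive-energy cutoff are assumptions made for the sketch, and the real system also applies a density criterion that is omitted here.

```python
def constellation_points(spectrogram, neighborhood=3):
    """spectrogram: 2-D list indexed as [time][freq] of magnitudes.

    A bin is kept as a constellation point if it is strictly the
    largest value in its (2*neighborhood+1)^2 region; the amplitude
    itself is then discarded, keeping only (time, freq) coordinates.
    """
    times, freqs = len(spectrogram), len(spectrogram[0])
    points = []
    for t in range(times):
        for f in range(freqs):
            v = spectrogram[t][f]
            if v <= 0:
                continue
            t0, t1 = max(0, t - neighborhood), min(times, t + neighborhood + 1)
            f0, f1 = max(0, f - neighborhood), min(freqs, f + neighborhood + 1)
            neighbors = [spectrogram[i][j] for i in range(t0, t1)
                         for j in range(f0, f1) if (i, j) != (t, f)]
            if all(v > n for n in neighbors):
                points.append((t, f))
    return points
```

The output is exactly the sparse "star field" the paper describes: coordinates only, amplitudes gone.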
If you put the constellation map of a database song on a strip chart, and the constellation map of a short matching audio sample of a few seconds length on a transparent piece of plastic, then slide the latter over the former, at some point a significant number of points will coincide when the proper time offset is located and the two constellation maps are aligned in register. The number of matching points will be significant in the presence of spurious peaks injected due to noise, as peak positions are relatively independent; further, the number of matches can also be significant even if many of the correct points have been deleted. Registration of constellation maps is thus a powerful way of matching in the presence of noise and/or deletion of features. This procedure reduces the search problem to a kind of astronavigation, in which a small patch of time-frequency constellation points must be quickly located within a large universe of points in a strip-chart universe with dimensions of bandlimited frequency versus nearly a billion seconds in the database. Yang also considered the use of spectrogram peaks, but employed them in a different way [10].

2.2 Fast Combinatorial Hashing

Finding the correct registration offset directly from constellation maps can be rather slow, due to raw constellation points having low entropy. For example, a 1024-bin frequency axis yields only at most 10 bits of frequency data per peak. We have developed a fast way of indexing constellation maps. Fingerprint hashes are formed from the constellation map, in which pairs of time-frequency points are combinatorially associated. Anchor points are chosen, each anchor point having a target zone associated with it. Each anchor point is sequentially paired with points within its target zone, each pair yielding two frequency components plus the time difference between the points (Figure 1C and 1D). These hashes are quite reproducible, even in the presence of noise and voice codec compression. 
Furthermore, each hash can be packed into a 32-bit unsigned integer. Each hash is also associated with the time offset from the beginning of the respective file to its anchor point, though the absolute time is not a part of the hash itself. To create a database index, the above operation is carried out on each track in a database to generate a corresponding list of hashes and their associated offset times. Track IDs may also be appended to the small data structs, yielding an aggregate 64-bit struct, 32 bits for the hash and 32 bits for the time offset and track ID. To facilitate fast processing, the 64-bit structs are sorted according to hash token value. The number of hashes per second of audio recording being processed is approximately equal to the density of constellation points per second times the fan-out factor into the target zone. For example, if each constellation point is taken to be an anchor point, and if the target zone has a fan-out of size F=10, then the number of hashes is approximately equal to 10 times the number of constellation points extracted from the file. By limiting the number of points chosen in each target zone, we seek to limit the combinatorial explosion of pairs. The fan-out factor leads directly to a cost factor in terms of storage space. By forming pairs instead of searching for matches against individual constellation points we gain a tremendous acceleration in the search process. For example, if each frequency component is 10 bits, and the Δt component is also 10 bits, then matching a pair of points yields 30 bits of information, versus only 10 for a single point. Then the specificity of the hash would be about a million times greater, due to the 20 extra bits, and thus the search speed for a single hash token is similarly accelerated. 
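The pairing and packing scheme can be sketched as follows. This is illustrative only: the 10-bit field widths and fan-out of 10 are the paper's examples, while the target-zone bound (`max_dt`) and all helper names are assumptions invented for the sketch.

```python
def pack_hash(f1, f2, dt, bits=10):
    """Pack the anchor frequency f1, paired frequency f2 and their
    time difference dt into one unsigned integer (3 * bits = 30 of
    32 bits with the paper's example widths)."""
    mask = (1 << bits) - 1
    return ((f1 & mask) << (2 * bits)) | ((f2 & mask) << bits) | (dt & mask)

def fingerprint(points, fan_out=10, max_dt=63):
    """points: list of (time, freq) constellation coordinates.

    Each anchor point is paired with up to fan_out later points
    whose time difference lies within the target zone, yielding
    (hash, anchor_time) records; the anchor time travels alongside
    the hash but is not part of it."""
    points = sorted(points)                      # order by time
    hashes = []
    for i, (t1, f1) in enumerate(points):
        paired = 0
        for t2, f2 in points[i + 1:]:
            dt = t2 - t1
            if dt > max_dt:
                break                            # left the target zone
            hashes.append((pack_hash(f1, f2, dt), t1))
            paired += 1
            if paired == fan_out:
                break
    return hashes
```

Sorting the resulting (hash, time, track ID) records by hash value, as the paper describes, is what makes the later lookup fast.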
On the other hand, due to the combinatorial generation of hashes, assuming symmetric density and fan-out for both database and sample hash generation, there are F times as many token combinations in the unknown sample to search for, and F times as many tokens in the database, thus the total speedup is a factor of about 1000000/F², or about 10000, over token searches based on single constellation points. Note that the combinatorial hashing squares the probability of point survival, i.e. if p is the probability of a spectrogram peak surviving the journey from the original source material to the captured sample recording, then the probability of a hash from a pair of points surviving is approximately p². This reduction in hash survivability is a tradeoff against the tremendous amount of speedup provided. The reduced probability of individual hash survival is mitigated by the combinatorial generation of a greater number of hashes than original constellation points. For example, if F=10, then the probability of at least one hash surviving for a given anchor point would be the joint probability of the anchor point and at least one target point in its target zone surviving. If we simplistically assume IID probability p of survival for all points involved, then the probability of at least one hash surviving per anchor point is p*[1-(1-p)^F]. For reasonably large values of F, e.g. F>10, and reasonable values of p, e.g. p>0.1, we have approximately p*[1-(1-p)^F] ≈ p, so we are actually not much worse off than before. We see that by using combinatorial hashing, we have traded off approximately 10 times the storage space for approximately 10000 times improvement in speed, and a small loss in probability of signal detection. 
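The back-of-envelope survival estimate above is easy to check numerically. A small illustration (the sample values of p are arbitrary, and the function name is invented here):

```python
def anchor_survival(p, fan_out):
    """Probability that at least one hash survives for an anchor
    point, assuming IID survival probability p for every point:
    the anchor must survive (factor p) AND at least one of its
    fan_out target points must survive (factor 1 - (1-p)^fan_out)."""
    return p * (1 - (1 - p) ** fan_out)

# For F = 10, p * [1 - (1 - p)^F] approaches the raw point survival
# probability p as p grows, matching the paper's approximation:
for p in (0.1, 0.3, 0.5):
    print(p, anchor_survival(p, 10))
```

At p = 0.5 the loss relative to p is under 0.1%, while at small p the per-anchor loss is larger but is offset by the F-fold increase in the number of hashes generated.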
[Figure 4: Recognition rate under additive noise — recognition rate (%) versus signal/noise ratio (dB) for 15, 10, and 5 second linear samples.]

Different fan-out and density factors may be chosen for different signal conditions. For relatively clean audio, e.g. for radio monitoring applications, F may be chosen to be modestly small and the density can also be chosen to be low, versus for the somewhat more challenging mobile phone consumer application. The difference in processing requirements can thus span many orders of magnitude.

2.3 Searching and Scoring

To perform a search, the above fingerprinting step is performed on a captured sample sound file to generate a set of hash:time offset records. Each hash from the sample is used to search in the database for matching hashes. For each matching hash found in the database, the corresponding offset times from the beginning of the sample and database files are associated into time pairs. The time pairs are distributed into bins according to the track ID associated with the matching database hash. After all sample hashes have been used to search in the database to form matching time pairs, the bins are scanned for matches. Within each bin the set of time pairs represents a scatterplot of association between the sample and database sound files. If the files match, matching features should occur at similar relative offsets from the beginning of the file, i.e. a sequence of hashes in one file should also occur in the matching file with the same relative time sequence. The problem of deciding whether a match has been found reduces to detecting a significant cluster of points forming a diagonal line within the scatterplot. Various techniques could be used to perform the detection, for example a Hough transform or other robust regression technique. Such techniques are overly general, computationally expensive, and susceptible to outliers. 
Due to the rigid constraints of the problem, the following technique solves the problem in approximately N*log(N) time, where N is the number of points appearing on the scatterplot. For the purposes of this discussion, we may assume that the slope of the diagonal line is 1.0. Then corresponding times of matching features between matching files have the relationship t'k = tk + offset, where t'k is the time coordinate of the feature in the matching (clean) database soundfile and tk is the time coordinate of the corresponding feature in the sample soundfile to be identified. For each (t'k, tk) coordinate in the scatterplot, we calculate δtk = t'k - tk. Then we calculate a histogram of these δtk values and scan for a peak. This may be done by sorting the set of δtk values and quickly scanning for a cluster of values. The scatterplots are usually very sparse, due to the specificity of the hashes owing to the combinatorial method of generation as discussed above. Since the number of time pairs in each bin is small, the scanning process takes on the order of microseconds per bin, or less. The score of the match is the number of matching points in the histogram peak. The presence of a statistically significant cluster indicates a match. Figure 2A illustrates a scatterplot of database time versus sample time for a track that does not match the sample. There are a few chance associations, but no linear correspondence appears. Figure 3A shows a case where a significant number of matching time pairs appear on a diagonal line. Figures 2B and 3B show the histograms of the δtk values corresponding to Figures 2A and 3A. This bin scanning process is repeated for each track in the database until a significant match is found.

[Figure 5: Recognition rate under additive noise plus GSM compression — recognition rate (%) versus signal/noise ratio (dB) for 15, 10, and 5 second GSM samples.]
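The binning-and-histogram search can be sketched as below. This is a simplified model assuming exact integer offsets and an in-memory dict index; the production system uses the sorted 64-bit struct layout described earlier, and all names here are invented for the sketch.

```python
from collections import Counter, defaultdict

def score_tracks(sample_hashes, db_index):
    """sample_hashes: list of (hash, sample_time) from the query.
    db_index: mapping hash -> list of (track_id, db_time) records.

    For every matching hash, the offset db_time - sample_time is
    dropped into the track's bin; a track's score is the size of
    its largest cluster of identical offsets, i.e. the histogram
    peak of the δt values."""
    bins = defaultdict(Counter)          # track_id -> Counter of offsets
    for h, t_sample in sample_hashes:
        for track_id, t_db in db_index.get(h, ()):
            bins[track_id][t_db - t_sample] += 1
    return {track: counts.most_common(1)[0][1]
            for track, counts in bins.items()}
```

A query that truly comes from a track produces many identical offsets (the diagonal line in the scatterplot), while chance hash collisions scatter across offsets and score low.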
Note that the matching and scanning phases do not make any special assumption about the format of the hashes. In fact, the hashes only need to have the properties of having sufficient entropy to avoid too many spurious matches occurring, as well as being reproducible. In the scanning phase the main thing that matters is for the matching hashes to be temporally aligned.

2.3.1 Significance

As described above, the score is simply the number of matching and time-aligned hash tokens. The distribution of scores of incorrectly-matching tracks is of interest in determining the rate of false positives as well as the rate of correct recognitions. To summarize briefly, a histogram of the scores of incorrectly-matching tracks is collected. The number of tracks in the database is taken into account and a probability density function of the score of the highest-scoring incorrectly-matching track is generated. Then an acceptable false positive rate is chosen (for example 0.1% false positive rate or 0.01%, depending on the application), then a threshold score is chosen that meets or exceeds the false-positive criterion.

3 Performance

3.1 Noise resistance

The algorithm performs well with significant levels of noise and even non-linear distortion. It can correctly identify music in the presence of voices, traffic noise, dropout, and even other music. To give an idea of the power of this technique, from a heavily corrupted 15 second sample, a statistically significant match can be determined with only about 1-2% of the generated hash tokens actually surviving and contributing to the offset cluster. A property of the scatterplot histogramming technique is that discontinuities are irrelevant, allowing immunity to dropouts and masking due to interference. One somewhat surprising result is that even with a large database, we can correctly identify each of several tracks mixed together, including multiple versions of the same piece, a property we call transparency. 
Figure 4 shows the result of performing 250 sample recognitions of varying length and noise levels against a test database of 10000 tracks consisting of popular music. A noise sample was recorded in a noisy pub to simulate real-life conditions. Audio excerpts of 15, 10, and 5 seconds in length were taken from the middle of each test track, each of which was taken from the test database. For each test excerpt, the relative power of the noise was normalized to the desired signal-to-noise ratio, then linearly added to the sample. We see that the recognition rate drops to 50% for 15, 10, and 5 second samples at approximately -9, -6, and -3 dB SNR, respectively. Figure 5 shows the same analysis, except that the resulting music+noise mixture was further subjected to GSM 6.10 compression, then reconverted to PCM audio. In this case, the 50% recognition rate level for 15, 10, and 5 second samples occurs at approximately -3, 0, and +4 dB SNR. Audio sampling and processing was carried out using 8KHz, mono, 16-bit samples.

3.2 Speed

For a database of about 20 thousand tracks implemented on a PC, the search time is on the order of 5-500 milliseconds, depending on parameter settings and application. The service can find a matching track for a heavily corrupted audio sample within a few hundred milliseconds of core search time. With radio quality audio, we can find a match in less than 10 milliseconds, with a likely optimization goal reaching down to 1 millisecond per query.

3.3 Specificity and False Positives

The algorithm was designed specifically to target recognition of sound files that are already present in the database. It is not expected to generalize to live recordings. That said, we have anecdotally discovered several artists in concert who apparently either have extremely accurate and reproducible timing (with millisecond precision), or are more plausibly lip synching. The algorithm is conversely very sensitive to which particular version of a track has been sampled. 
Given a multitude of different performances of the same song by an artist, the algorithm can pick the correct one even if they are virtually indistinguishable by the human ear. We occasionally get reports of false positives. Often we find that the algorithm was not actually wrong, since it had picked up an example of sampling or plagiarism. As mentioned above, there is a tradeoff between true hits and false positives, and thus the maximum allowable percentage of false positives is a design parameter that is chosen to suit the application.

4 Acknowledgements

Special thanks to <NAME>, III and <NAME> for providing guidance. Thanks also to Chris, Philip, Dhiraj, Claus, Ajay, Jerry, Matt, Mike, Rahul, Beth and all the other wonderful folks at Shazam, and to my Katja.
github.com/VirtusLab/render
README [¶](#section-readme) --- ### render [![Version](https://img.shields.io/badge/version-v0.3.0-brightgreen.svg)](https://github.com/VirtusLab/render/releases/tag/v0.3.0) [![Travis CI](https://img.shields.io/travis/VirtusLab/render.svg)](https://travis-ci.org/VirtusLab/render) [![Github All Releases](https://img.shields.io/github/downloads/VirtusLab/render/total.svg)](https://github.com/VirtusLab/render/releases) [![Go Report Card](https://goreportcard.com/badge/github.com/VirtusLab/render "Go Report Card")](https://goreportcard.com/report/github.com/VirtusLab/render) [![GoDoc](https://godoc.org/github.com/VirtusLab/render?status.svg "GoDoc Documentation")](https://godoc.org/github.com/VirtusLab/render/renderer) ![](https://github.com/VirtusLab/render/raw/v0.3.0/assets/render_typo.png) Universal data-driven templates for generating textual output. Can be used as a single static binary (no dependencies) or as a golang library. Just some of the things to `render`: * configuration files * Infrastructure as Code files (e.g. CloudFormation templates) * Kubernetes manifests The renderer extends [go-template](https://golang.org/pkg/text/template/) and [Sprig](http://masterminds.github.io/sprig/) functions. If you are interested in one of the use cases, take a look at this [blog post](https://medium.com/virtuslab/helm-alternative-d6568aa9d40b) about Kubernetes resources rendering. Also see [Helm compatibility](https://github.com/VirtusLab/render/blob/v0.3.0/README.md). 
* [Installation](https://github.com/VirtusLab/render/blob/v0.3.0/README.md)
  + [Official binary releases](https://github.com/VirtusLab/render/blob/v0.3.0/README.md)
* [Usage](https://github.com/VirtusLab/render/blob/v0.3.0/README.md)
  + [Command line](https://github.com/VirtusLab/render/blob/v0.3.0/README.md)
  + [Notable standard and sprig functions](https://github.com/VirtusLab/render/blob/v0.3.0/README.md)
  + [Custom functions](https://github.com/VirtusLab/render/blob/v0.3.0/README.md)
  + [Helm compatibility](https://github.com/VirtusLab/render/blob/v0.3.0/README.md)
  + [Limitations and future work](https://github.com/VirtusLab/render/blob/v0.3.0/README.md)
* [Contribution](https://github.com/VirtusLab/render/blob/v0.3.0/README.md)
* [Development](https://github.com/VirtusLab/render/blob/v0.3.0/README.md)
* [The Name](https://github.com/VirtusLab/render/blob/v0.3.0/README.md)

#### Installation

###### Official binary releases

For binaries, please visit the [Releases Page](https://github.com/VirtusLab/render/releases). The binaries are statically compiled and do not require any dependencies.

#### Usage

```
$ render --help
NAME:
   render - Universal file renderer

USAGE:
   render [global options] command [command options] [arguments...]

VERSION:
   v0.3.0

AUTHOR:
   VirtusLab

COMMANDS:
     help, h  Shows a list of commands or help for one command

GLOBAL OPTIONS:
   --debug, -d                   run in debug mode
   --indir value                 the input directory, can't be used with --out
   --outdir value                the output directory, the same as --indir if empty, can't be used with --in
   --in value                    the input template file, stdin if empty, can't be used with --outdir
   --out value                   the output file, stdout if empty, can't be used with --indir
   --config value                optional configuration YAML file, can be used multiple times
   --set value, --var value      additional parameters in key=value format, can be used multiple times
   --unsafe-ignore-missing-keys  do not fail on missing map key and print '<no value>' ('missingkey=invalid')
   --help, -h                    show help
   --version, -v                 print the version
```

**Notes:**

* `--in`, `--out` take only files (not directories), `--in` will consume any file as long as it can be parsed
* `stdin` and `stdout` can be used instead of `--in` and `--out`
* `--config` accepts any YAML file, can be used multiple times, the values of the configs will be merged
* `--set`, `--var` are the same (one is used in Helm, the other in Terraform), we provide both for convenience, any values set here **will override** values from configuration files

###### Command line

Example usage of `render` with `stdin`, `stdout` and `--var`:

```
$ echo "something {{ .value }}" | render --var "value=new"
something new
```

Example usage of `render` with `--in`, `--out` and `--config`:

```
$ echo "something {{ .value }}" > test.txt.tmpl
$ echo "value: new" > test.config.yaml
$ ./render --in test.txt.tmpl --out test.txt --config test.config.yaml
$ cat test.txt
something new
```

Also see a [more advanced template](https://github.com/VirtusLab/render/blob/v0.3.0/examples/example.yaml.tmpl) example.
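The precedence described in the notes above — later `--config` files merge over earlier ones, and `--set`/`--var` values override them all — behaves like a dictionary merge. A minimal sketch of just the precedence rule (illustrative Python, not render's Go implementation; real config merging may be deep rather than shallow):

```python
def merge_values(config_files, set_vars):
    """Later config files merge over earlier ones; --set/--var wins overall."""
    merged = {}
    for cfg in config_files:  # parsed YAML mappings, in the order given on the CLI
        merged.update(cfg)    # later files override earlier ones (shallow merge)
    merged.update(set_vars)   # --set / --var overrides any config value
    return merged
```

For example, `merge_values([{"value": "a", "x": 1}, {"value": "b"}], {"value": "new"})` yields `{"value": "new", "x": 1}`: the second config file wins over the first, and the `--set` value wins over both.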
###### As a library

```
package example

import (
	"github.com/VirtusLab/render/renderer"
	"github.com/VirtusLab/render/renderer/parameters"
)

func CustomRender(template string, opts []string, params parameters.Parameters) (string, error) {
	return renderer.New(
		renderer.WithOptions(opts...),
		renderer.WithParameters(params),
		renderer.WithSprigFunctions(),
		renderer.WithExtraFunctions(),
		renderer.WithCryptFunctions(),
	).Render(template)
}
```

See also [`other functions`](https://godoc.org/github.com/VirtusLab/render/renderer). Also see [tests](https://github.com/VirtusLab/render/raw/master/renderer/render_test.go) for more usage examples.

###### Notable standard and sprig functions

* [`indent`](https://masterminds.github.io/sprig/strings.html#indent)
* [`default`](https://masterminds.github.io/sprig/defaults.html#default)
* [`ternary`](https://masterminds.github.io/sprig/defaults.html#ternary)
* [`toJson`](https://masterminds.github.io/sprig/defaults.html#tojson)
* [`b64enc`, `b64dec`](https://masterminds.github.io/sprig/encoding.html)

All syntax and functions:

* [Go template functions](https://golang.org/pkg/text/template)
* [Sprig functions](http://masterminds.github.io/sprig)

###### Custom functions

* `render` - calls `render` from inside of the template, making the renderer recursive (also accepts an optional template parameters override)
* `toYaml` - serializes a configuration data structure fragment as YAML
* `fromYaml` - unmarshals YAML data into a data structure (supports multi-document input)
* `fromJson` - unmarshals JSON data into a data structure
* `jsonPath` - provides data structure manipulation with JSONPath (`kubectl` dialect)
* `n` - used with `range` to allow easy iteration over integers from the given start to end (inclusive)
* `gzip`, `ungzip` - use `gzip` compression and extraction inside the templates, for best results use with `b64enc` and `b64dec`
* `readFile` - reads a file from a path, relative paths are translated to absolute paths, based on
`root` function or property
* `writeFile` - writes a file to a path, relative paths are translated to absolute paths, based on `root` function or property
* `root` - the root path, used for relative to absolute path translation in any file based operations; by default `PWD` is used
* `cidrHost` - calculates a full host IP address for a given host number within a given IP network address prefix
* `cidrNetmask` - converts an IPv4 address prefix given in CIDR notation into a subnet mask address
* `cidrSubnets` - calculates a subnet address within a given IP network address prefix
* `cidrSubnetSizes` - calculates a sequence of consecutive IP address ranges within a particular CIDR prefix

See also [examples](https://github.com/VirtusLab/render/blob/v0.3.0/examples) and a more [detailed documentation](https://godoc.org/github.com/VirtusLab/render/renderer#Renderer.ExtraFunctions).

Cloud KMS (AWS, GCP, Azure) based cryptography functions from [`crypt`](https://github.com/VirtusLab/crypt):

* `encryptAWS` - encrypts data using AWS KMS, for best results use with `gzip` and `b64enc`
* `decryptAWS` - decrypts data using AWS KMS, for best results use with `ungzip` and `b64dec`
* `encryptGCP` - encrypts data using GCP KMS, for best results use with `gzip` and `b64enc`
* `decryptGCP` - decrypts data using GCP KMS, for best results use with `ungzip` and `b64dec`
* `encryptAzure` - encrypts data using Azure Key Vault, for best results use with `gzip` and `b64enc`
* `decryptAzure` - decrypts data using Azure Key Vault, for best results use with `ungzip` and `b64dec`

###### Helm compatibility

As of now, there is limited Helm 2 Chart compatibility; simple Charts will render just fine. To mimic Helm behaviour regarding missing keys, use the `--unsafe-ignore-missing-keys` option. There is no plan to implement full compatibility with Helm, because of the unnecessary complexity it would bring.
If you need fully Helm-compatible rendering, see [`helm-nomagic`](https://github.com/giantswarm/helm-nomagic).

#### Limitations and future work

###### Planned new features

* `.renderignore` files [`#12`](https://github.com/VirtusLab/render/issues/12)

###### Operating system support

We provide cross-compiled binaries for most platforms, but `render` is currently used mainly on `linux/amd64`.

#### Community & Contribution

There is a dedicated channel `#render` on [virtuslab-oss.slack.com](https://virtuslab-oss.slack.com) ([Invite form](https://forms.gle/X3X8qA1XMirdBuEH7))

Feel free to file [issues](https://github.com/VirtusLab/render/issues) or [pull requests](https://github.com/VirtusLab/render/pulls). Before any big pull request, please consult the maintainers to ensure a common direction.

#### Development

```
git clone <EMAIL>:VirtusLab/render.git
cd render
make init
make all
```

#### The name

We believe in obvious names. It renders. It's a *verb*. It's `render`.
gameR
Package ‘gameR’                                                March 29, 2023

Title Color Palettes Inspired by Video Games
Version 0.0.5
Description Palettes based on video games.
License GPL (>= 3)
Encoding UTF-8
RoxygenNote 7.2.3
Suggests testthat (>= 3.0.0), ggplot2, magrittr, palmerpenguins, knitr, rmarkdown, spelling
Config/testthat/edition 3
URL https://www.constantine-cooke.com/gameR/, https://github.com/nathansam/gameR/
BugReports https://github.com/nathansam/gameR/issues
VignetteBuilder knitr
Language en-US
NeedsCompilation no
Author <NAME> [aut, cre] (<https://orcid.org/0000-0002-4437-8713>), <NAME> [ctb] (<https://orcid.org/0000-0002-4308-7316>)
Maintainer <NAME> <<EMAIL>>
Repository CRAN
Date/Publication 2023-03-29 20:40:02 UTC

gameR_cols              Choose a gameR palette

Description
  Choose a gameR palette

Usage
  gameR_cols(palette = NULL, reverse = FALSE)

Arguments
  palette   Character name of palette. Either banjo, blocks, border, cowboy, cups, cyberpunk,
            fallout, gris, kirby, ocarina, okami, p4g, pman, rayman, sonic, spirit, splat,
            superbros, wow
  reverse   Logical. Should the palette be reversed? Defaults to FALSE.

Value
  Vector containing a hex color code representation for the chosen palette

gameR_cont              Generate continuous palette from a discrete gameR palette

Description
  Generate continuous palette from a discrete gameR palette

Usage
  gameR_cont(
    n,
    palette = NULL,
    reverse = FALSE,
    bias = NULL,
    interpolate = "spline"
  )

Arguments
  n            Number of colors to be generated
  palette      Character name of palette. Either banjo, blocks, border, cowboy, cups, cyberpunk,
               fallout, gris, kirby, ocarina, okami, p4g, pman, rayman, sonic, spirit, splat,
               superbros, wow
  reverse      Logical. Should the palette be reversed? Defaults to FALSE.
  bias         Passed to colorRamp. A positive number. Higher values give more widely spaced
               colors at the high end.
  interpolate  Passed to colorRamp. Use spline or linear interpolation

Value
  Vector containing a hex color code representation for the chosen palette interpolated across
  n values
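The interpolation gameR_cont delegates to colorRamp can be illustrated outside R as well. Below is a minimal Python sketch of the linear case (the function name is invented, and the spline option and bias handling are not reproduced): it converts the palette's hex colors to RGB, interpolates component-wise between neighbouring stops, and emits n hex colors.

```python
def continuous_palette(palette, n):
    """Linearly interpolate a discrete list of hex colors across n values.

    A rough analogue of R's colorRamp with interpolate = "linear";
    gameR's actual palettes and spline interpolation are not reproduced.
    """
    def hex_to_rgb(h):
        h = h.lstrip("#")
        return tuple(int(h[i:i + 2], 16) for i in (0, 2, 4))

    rgb = [hex_to_rgb(c) for c in palette]
    out = []
    for k in range(n):
        # Position along the ramp, scaled to the palette's segments.
        t = (k / (n - 1)) * (len(rgb) - 1) if n > 1 else 0.0
        i = min(int(t), len(rgb) - 2)  # index of the left stop
        f = t - i                      # fractional distance to the right stop
        mixed = [round(a + (b - a) * f) for a, b in zip(rgb[i], rgb[i + 1])]
        out.append("#{:02X}{:02X}{:02X}".format(*mixed))
    return out
```

With a two-color palette and n = 3, the middle value is the component-wise midpoint: `continuous_palette(["#000000", "#FFFFFF"], 3)` gives `["#000000", "#808080", "#FFFFFF"]`.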
Spack 0.12.1 documentation

Spack[¶](#spack)
===

> These are docs for the Spack package manager. For sphere packing, see [pyspack](https://pyspack.readthedocs.io).

Spack is a package management tool designed to support multiple versions and configurations of software on a wide variety of platforms and environments. It was designed for large supercomputing centers, where many users and application teams share common installations of software on clusters with exotic architectures, using libraries that do not have a standard ABI. Spack is non-destructive: installing a new version does not break existing installations, so many configurations can coexist on the same system.

Most importantly, Spack is *simple*. It offers a simple *spec* syntax so that users can specify versions and configuration options concisely. Spack is also simple for package authors: package files are written in pure Python, and specs allow package authors to maintain a single file for many different builds of the same package.

See the [Feature Overview](index.html#document-features) for examples and highlights.

Get spack from the [github repository](https://github.com/spack/spack) and install your first package:

```
$ git clone https://github.com/spack/spack.git
$ cd spack/bin
$ ./spack install libelf
```

If you’re new to spack and want to start using it, see [Getting Started](index.html#document-getting_started), or refer to the full manual below.

Feature Overview[¶](#feature-overview)
---

This is a high-level overview of features that make Spack different from other [package managers](http://en.wikipedia.org/wiki/Package_management_system) and [port systems](http://en.wikipedia.org/wiki/Ports_collection).

### Simple package installation[¶](#simple-package-installation)

Installing the default version of a package is simple.
This will install the latest version of the `mpileaks` package and all of its dependencies:

```
$ spack install mpileaks
```

### Custom versions & configurations[¶](#custom-versions-configurations)

Spack allows installation to be customized. Users can specify the version, build compiler, compile-time options, and cross-compile platform, all on the command line.

```
# Install a particular version by appending @
$ spack install mpileaks@1.1.2

# Specify a compiler (and its version), with %
$ spack install mpileaks@1.1.2 %gcc@4.7.3

# Add special compile-time options by name
$ spack install mpileaks@1.1.2 %gcc@4.7.3 debug=True

# Add special boolean compile-time options with +
$ spack install mpileaks@1.1.2 %gcc@4.7.3 +debug

# Add compiler flags using the conventional names
$ spack install mpileaks@1.1.2 %gcc@4.7.3 cppflags="-O3 -floop-block"

# Cross-compile for a different architecture with arch=
$ spack install mpileaks@1.1.2 arch=bgqos_0
```

Users can specify as many or few options as they care about. Spack will fill in the unspecified values with sensible defaults. The two listed syntaxes for variants are identical when the value is boolean.

### Customize dependencies[¶](#customize-dependencies)

Spack allows *dependencies* of a particular installation to be customized extensively. Suppose that `mpileaks` depends indirectly on `libelf` and `libdwarf`. Using `^`, users can add custom configurations for the dependencies:

```
# Install mpileaks and link it with specific versions of libelf and libdwarf
$ spack install mpileaks@1.1.2 %gcc@4.7.3 +debug ^libelf@0.8.12 ^libdwarf@20130729+debug
```

### Non-destructive installs[¶](#non-destructive-installs)

Spack installs every unique package/dependency configuration into its own prefix, so new installs will not break existing ones.

### Packages can peacefully coexist[¶](#packages-can-peacefully-coexist)

Spack avoids library misconfiguration by using `RPATH` to link dependencies.
When a user links a library or runs a program, it is tied to the dependencies it was built with, so there is no need to manipulate `LD_LIBRARY_PATH` at runtime.

### Creating packages is easy[¶](#creating-packages-is-easy)

To create a new package, all Spack needs is a URL for the source archive. The `spack create` command will create a boilerplate package file, and the package authors can fill in specific build steps in pure Python. For example, this command:

```
$ spack create http://www.mr511.de/software/libelf-0.8.13.tar.gz
```

creates a simple python file:

```
from spack import *

class Libelf(Package):
    """FIXME: Put a proper description of your package here."""

    # FIXME: Add a proper url for your package's homepage here.
    homepage = "http://www.example.com"
    url = "http://www.mr511.de/software/libelf-0.8.13.tar.gz"

    version('0.8.13', '4136d7b4c04df68b686570afa26988ac')

    # FIXME: Add dependencies if required.
    # depends_on('foo')

    def install(self, spec, prefix):
        # FIXME: Modify the configure line to suit your build system here.
        configure('--prefix={0}'.format(prefix))

        # FIXME: Add logic to build and install here.
        make()
        make('install')
```

It doesn’t take much python coding to get from there to a working package:

```
from spack import *

class Libelf(AutotoolsPackage):
    """libelf lets you read, modify or create ELF object files in an
    architecture-independent way. The library takes care of size and
    endian issues, e.g.
    you can process a file for SPARC processors on an
    Intel-based system."""

    homepage = "http://www.mr511.de/software/english.html"
    url = "http://www.mr511.de/software/libelf-0.8.13.tar.gz"

    version('0.8.13', '4136d7b4c04df68b686570afa26988ac')
    version('0.8.12', 'e21f8273d9f5f6d43a59878dc274fec7')

    provides('elf@0')

    def configure_args(self):
        args = ["--enable-shared",
                "--disable-dependency-tracking",
                "--disable-debug"]
        return args

    def install(self, spec, prefix):
        make('install', parallel=False)
```

Spack also provides wrapper functions around common commands like `configure`, `make`, and `cmake` to make writing packages simple.

Getting Started[¶](#getting-started)
---

### Prerequisites[¶](#prerequisites)

Spack has the following minimum requirements, which must be installed before Spack is run:

1. Python 2 (2.6 or 2.7) or 3 (3.4 - 3.7)
2. A C/C++ compiler
3. The `git` and `curl` commands.
4. If using the `gpg` subcommand, `gnupg2` is required.

These requirements can be easily installed on most modern Linux systems; on Macintosh, XCode is required. Spack is designed to run on HPC platforms like Cray and BlueGene/Q. Not all packages should be expected to work on all platforms. A build matrix showing which packages are working on which systems is planned but not yet available.

### Installation[¶](#installation)

Getting Spack is easy. You can clone it from the [github repository](https://github.com/spack/spack) using this command:

```
$ git clone https://github.com/spack/spack.git
```

This will create a directory called `spack`.

#### Add Spack to the Shell[¶](#add-spack-to-the-shell)

We’ll assume that the full path to your downloaded Spack directory is in the `SPACK_ROOT` environment variable. Add `$SPACK_ROOT/bin` to your path and you’re ready to go:

```
$ export PATH=$SPACK_ROOT/bin:$PATH
$ spack install libelf
```

For a richer experience, use Spack’s shell support:

```
# For bash/zsh users
$ export SPACK_ROOT=/path/to/spack
$ . $SPACK_ROOT/share/spack/setup-env.sh

# For tcsh or csh users (note you must set SPACK_ROOT)
$ setenv SPACK_ROOT /path/to/spack
$ source $SPACK_ROOT/share/spack/setup-env.csh
```

This automatically adds Spack to your `PATH` and allows the `spack` command to be used to execute spack [commands](index.html#shell-support) and [useful packaging commands](index.html#packaging-shell-support). If [environment-modules or dotkit](#installenvironmentmodules) is installed and available, the `spack` command can also load and unload [modules](index.html#modules).

#### Clean Environment[¶](#clean-environment)

Many packages’ installs can be broken by changing environment variables. For example, a package might pick up the wrong build-time dependencies (most of them not specified) depending on the setting of `PATH`. `GCC` seems to be particularly vulnerable to these issues. Therefore, it is recommended that Spack users run with a *clean environment*, especially for `PATH`. Only software that comes with the system, or that you know you wish to use with Spack, should be included. This procedure will avoid many strange build errors.

#### Check Installation[¶](#check-installation)

With Spack installed, you should be able to run some basic Spack commands.
For example:

```
$ spack spec netcdf
Input spec
---
netcdf

Concretized
---
netcdf@4.6.1%gcc@5.4.0~dap~hdf4 maxdims=1024 maxvars=8192 +mpi~parallel-netcdf+shared arch=linux-ubuntu16.04-x86_64
    ^hdf5@1.10.4%gcc@5.4.0~cxx~debug~fortran+hl+mpi+pic+shared~szip~threadsafe arch=linux-ubuntu16.04-x86_64
    ^openmpi@3.1.3%gcc@5.4.0~cuda+cxx_exceptions fabrics= ~java~legacylaunchers~memchecker~pmi schedulers= ~sqlite3~thread_multiple+vt arch=linux-ubuntu16.04-x86_64
    ^hwloc@1.11.9%gcc@5.4.0~cairo~cuda+libxml2+pci+shared arch=linux-ubuntu16.04-x86_64
    ^libpciaccess@0.13.5%gcc@5.4.0 arch=linux-ubuntu16.04-x86_64
    ^libtool@2.4.6%gcc@5.4.0 arch=linux-ubuntu16.04-x86_64
    ^m4@1.4.18%gcc@5.4.0 patches=3877ab548f88597ab2327a2230ee048d2d07ace1062efe81fc92e91b7f39cd00,c0a408fbffb7255fcc75e26bd8edab116fc81d216bfd18b473668b7739a4158e,fc9b61654a3ba1a8d6cd78ce087e7c96366c290bc8d2c299f09828d793b853c8 +sigsegv arch=linux-ubuntu16.04-x86_64
    ^libsigsegv@2.11%gcc@5.4.0 arch=linux-ubuntu16.04-x86_64
    ^pkgconf@1.4.2%gcc@5.4.0 arch=linux-ubuntu16.04-x86_64
    ^util-macros@1.19.1%gcc@5.4.0 arch=linux-ubuntu16.04-x86_64
    ^libxml2@2.9.8%gcc@5.4.0~python arch=linux-ubuntu16.04-x86_64
    ^xz@5.2.4%gcc@5.4.0 arch=linux-ubuntu16.04-x86_64
    ^zlib@1.2.11%gcc@5.4.0+optimize+pic+shared arch=linux-ubuntu16.04-x86_64
    ^numactl@2.0.11%gcc@5.4.0 patches=592f30f7f5f757dfc239ad0ffd39a9a048487ad803c26b419e0f96b8cda08c1a arch=linux-ubuntu16.04-x86_64
    ^autoconf@2.69%gcc@5.4.0 arch=linux-ubuntu16.04-x86_64
    ^perl@5.26.2%gcc@5.4.0+cpanm patches=0eac10ed90aeb0459ad8851f88081d439a4e41978e586ec743069e8b059370ac +shared+threads arch=linux-ubuntu16.04-x86_64
    ^gdbm@1.14.1%gcc@5.4.0 arch=linux-ubuntu16.04-x86_64
    ^readline@7.0%gcc@5.4.0 arch=linux-ubuntu16.04-x86_64
    ^ncurses@6.1%gcc@5.4.0~symlinks~termlib arch=linux-ubuntu16.04-x86_64
    ^automake@1.16.1%gcc@5.4.0 arch=linux-ubuntu16.04-x86_64
```

#### Optional: Alternate Prefix[¶](#optional-alternate-prefix)

You may want to run Spack out of a prefix other than the git repository you cloned.
The `spack clone` command provides this functionality. To install spack in a new directory, simply type: ``` $ spack clone /my/favorite/prefix ``` This will install a new spack script in `/my/favorite/prefix/bin`, which you can use just like you would the regular spack script. Each copy of spack installs packages into its own `$PREFIX/opt` directory. #### Next Steps[¶](#next-steps) In theory, Spack doesn’t need any additional installation; just download and run! But in real life, additional steps are usually required before Spack can work in a practical sense. Read on
### Compiler configuration[¶](#compiler-configuration)

Spack has the ability to build packages with multiple compilers and compiler versions. Spack searches for compilers on your machine automatically the first time it is run. It does this by inspecting your `PATH`.

#### `spack compilers`[¶](#spack-compilers)

You can see which compilers spack has found by running `spack compilers` or `spack compiler list`:

```
$ spack compilers
==> Available compilers
-- gcc ---
gcc@4.9.0  gcc@4.8.0  gcc@4.7.0  gcc@4.6.2  gcc@4.4.7
gcc@4.8.2  gcc@4.7.1  gcc@4.6.3  gcc@4.6.1  gcc@4.1.2
-- intel ---
intel@15.0.0  intel@14.0.0  intel@13.0.0  intel@12.1.0  intel@10.0
intel@14.0.3  intel@13.1.1  intel@12.1.5  intel@12.0.4  intel@9.1
intel@14.0.2  intel@13.1.0  intel@12.1.3  intel@11.1
intel@14.0.1  intel@13.0.1  intel@12.1.2  intel@10.1
-- clang ---
clang@3.4  clang@3.3  clang@3.2  clang@3.1
-- pgi ---
pgi@14.3-0   pgi@13.2-0   pgi@12.1-0   pgi@10.9-0  pgi@8.0-1
pgi@13.10-0  pgi@13.1-1   pgi@11.10-0  pgi@10.2-0  pgi@7.1-3
pgi@13.6-0   pgi@12.8-0   pgi@11.1-0   pgi@9.0-4   pgi@7.0-6
```

Any of these compilers can be used to build Spack packages. More on how this is done is in [Specs & dependencies](index.html#sec-specs).

#### `spack compiler add`[¶](#spack-compiler-add)

An alias for `spack compiler find`.

#### `spack compiler find`[¶](#spack-compiler-find)

If you do not see a compiler in this list, but you want to use it with Spack, you can simply run `spack compiler find` with the path to where the compiler is installed. For example:

```
$ spack compiler find /usr/local/tools/ic-13.0.079
==> Added 1 new compiler to ~/.spack/compilers.yaml
    intel@13.0.079
```

Or you can run `spack compiler find` with no arguments to force auto-detection. This is useful if you do not know where compilers are installed, but you know that new compilers have been added to your `PATH`.
For example, you might load a module, like this:

```
$ module load gcc-4.9.0
$ spack compiler find
==> Added 1 new compiler to ~/.spack/compilers.yaml
    gcc@4.9.0
```

This loads the environment module for gcc-4.9.0 to add it to `PATH`, and then it adds the compiler to Spack.

Note

By default, spack does not fill in the `modules:` field in the `compilers.yaml` file. If you are using a compiler from a module, then you should add this field manually. See the section on [Compilers Requiring Modules](#compilers-requiring-modules).

#### `spack compiler info`[¶](#spack-compiler-info)

If you want to see specifics on a particular compiler, you can run `spack compiler info` on it:

```
$ spack compiler info intel@15
intel@15.0.0:
    paths:
        cc = /usr/local/bin/icc-15.0.090
        cxx = /usr/local/bin/icpc-15.0.090
        f77 = /usr/local/bin/ifort-15.0.090
        fc = /usr/local/bin/ifort-15.0.090
    modules = []
    operating_system = centos6
    ...
```

This shows which C, C++, and Fortran compilers were detected by Spack. Notice also that we didn’t have to be too specific about the version. We just said `intel@15`, and information about the only matching Intel compiler was displayed.

#### Manual compiler configuration[¶](#manual-compiler-configuration)

If auto-detection fails, you can manually configure a compiler by editing your `~/.spack/compilers.yaml` file. You can do this by running `spack config edit compilers`, which will open the file in your `$EDITOR`.
Each compiler configuration in the file looks like this:

```
compilers:
- compiler:
    modules: []
    operating_system: centos6
    paths:
      cc: /usr/local/bin/icc-15.0.024-beta
      cxx: /usr/local/bin/icpc-15.0.024-beta
      f77: /usr/local/bin/ifort-15.0.024-beta
      fc: /usr/local/bin/ifort-15.0.024-beta
    spec: intel@15.0.0
```

For compilers that do not support Fortran (like `clang`), put `None` for `f77` and `fc`:

```
compilers:
- compiler:
    modules: []
    operating_system: centos6
    paths:
      cc: /usr/bin/clang
      cxx: /usr/bin/clang++
      f77: None
      fc: None
    spec: clang@3.3svn
```

Once you save the file, the configured compilers will show up in the list displayed by `spack compilers`.

You can also add compiler flags to manually configured compilers. These flags should be specified in the `flags` section of the compiler specification. The valid flags are `cflags`, `cxxflags`, `fflags`, `cppflags`, `ldflags`, and `ldlibs`. For example:

```
compilers:
- compiler:
    modules: []
    operating_system: centos6
    paths:
      cc: /usr/bin/gcc
      cxx: /usr/bin/g++
      f77: /usr/bin/gfortran
      fc: /usr/bin/gfortran
    flags:
      cflags: -O3 -fPIC
      cxxflags: -O3 -fPIC
      cppflags: -O3 -fPIC
    spec: gcc@4.7.2
```

These flags will be treated by spack as if they were entered from the command line each time this compiler is used. The compiler wrappers then inject those flags into the compiler command. Compiler flags entered from the command line will be discussed in more detail in the following section.

#### Build Your Own Compiler[¶](#build-your-own-compiler)

If you are particular about which compiler/version you use, you might wish to have Spack build it for you. For example:

```
$ spack install gcc@4.9.3
```

Once that has finished, you will need to add it to your `compilers.yaml` file.
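As an aside, each `compilers.yaml` entry above is just a nested mapping once parsed. A minimal sketch of pulling specs, compiler paths, and flags out of such a structure (hand-built dictionaries stand in for the parsed YAML; this is not Spack's actual configuration machinery, and the function name is invented):

```python
def list_compilers(config):
    """Return (spec, cc path, flags) for each entry in a parsed
    compilers.yaml-style mapping."""
    out = []
    for entry in config.get("compilers", []):
        comp = entry["compiler"]
        out.append((comp["spec"], comp["paths"]["cc"], comp.get("flags", {})))
    return out

# Hand-built stand-in for the parsed gcc entry with flags shown above.
parsed = {
    "compilers": [
        {"compiler": {
            "modules": [],
            "operating_system": "centos6",
            "paths": {"cc": "/usr/bin/gcc", "cxx": "/usr/bin/g++",
                      "f77": "/usr/bin/gfortran", "fc": "/usr/bin/gfortran"},
            "flags": {"cflags": "-O3 -fPIC", "cxxflags": "-O3 -fPIC",
                      "cppflags": "-O3 -fPIC"},
            "spec": "gcc@4.7.2",
        }},
    ]
}
```

Running `list_compilers(parsed)` yields one tuple per configured compiler, which is roughly the information `spack compilers` summarizes.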
You can then set Spack to use it by default by adding the following to your `packages.yaml` file:

```
packages:
  all:
    compiler: [gcc@4.9.3]
```

#### Compilers Requiring Modules[¶](#compilers-requiring-modules)

Many installed compilers will work regardless of the environment they are called with. However, some installed compilers require `$LD_LIBRARY_PATH` or other environment variables to be set in order to run; this is typical for Intel and other proprietary compilers. In such a case, you should tell Spack which module(s) to load in order to run the chosen compiler (If the compiler does not come with a module file, you might consider making one by hand). Spack will load this module into the environment ONLY when the compiler is run, and NOT in general for a package’s `install()` method. See, for example, this `compilers.yaml` file:

```
compilers:
- compiler:
    modules: [other/comp/gcc-5.3-sp3]
    operating_system: SuSE11
    paths:
      cc: /usr/local/other/SLES11.3/gcc/5.3.0/bin/gcc
      cxx: /usr/local/other/SLES11.3/gcc/5.3.0/bin/g++
      f77: /usr/local/other/SLES11.3/gcc/5.3.0/bin/gfortran
      fc: /usr/local/other/SLES11.3/gcc/5.3.0/bin/gfortran
    spec: gcc@5.3.0
```

Some compilers require special environment settings to be loaded not just to run, but also to execute the code they build, breaking packages that need to execute code they just compiled. If it’s not possible or practical to use a better compiler, you’ll need to ensure that environment settings are preserved for compilers like this (i.e., you’ll need to load the module or source the compiler’s shell script).

By default, Spack tries to ensure that builds are reproducible by cleaning the environment before building. If this interferes with your compiler settings, you can use `spack install --dirty` as a workaround. Note that this may interfere with package builds.

#### Licensed Compilers[¶](#licensed-compilers)

Some proprietary compilers require licensing to use.
If you need to use a licensed compiler (e.g., PGI), the process is similar to a mix of build your own, plus modules:

1. Create a Spack package (if it doesn’t exist already) to install your compiler. Follow instructions on installing [Licensed software](index.html#license).
2. Once the compiler is installed, you should be able to test it by using Spack to load the module it just created, and running simple builds (e.g., `cc helloWorld.c && ./a.out`)
3. Add the newly-installed compiler to `compilers.yaml` as shown above.

#### Mixed Toolchains[¶](#mixed-toolchains)

Modern compilers typically come with related compilers for C, C++ and Fortran bundled together. When possible, results are best if the same compiler is used for all languages. In some cases, this is not possible. For example, starting with macOS El Capitan (10.11), many packages no longer build with GCC, but XCode provides no Fortran compilers. The user is therefore forced to use a mixed toolchain: XCode-provided Clang for C/C++ and GNU `gfortran` for Fortran.

1. You need to make sure that Xcode is installed. Run the following command:

```
$ xcode-select --install
```

If the Xcode command-line tools are already installed, you will see an error message:

```
xcode-select: error: command line tools are already installed, use "Software Update" to install updates
```

2. For most packages, the Xcode command-line tools are sufficient. However, some packages like `qt` require the full Xcode suite. You can check to see which you have installed by running:

```
$ xcode-select -p
```

If the output is:

```
/Applications/Xcode.app/Contents/Developer
```

you already have the full Xcode suite installed. If the output is:

```
/Library/Developer/CommandLineTools
```

you only have the command-line tools installed. The full Xcode suite can be installed through the App Store. Make sure you launch the Xcode application and accept the license agreement before using Spack. It may ask you to install additional components.
   Alternatively, the license can be accepted through the command line:

   ```
   $ sudo xcodebuild -license accept
   ```

   Note: the flag is `-license`, not `--license`.

3. Run `spack compiler find` to locate Clang.
4. There are different ways to get `gfortran` on macOS. For example, you can install GCC with Spack (`spack install gcc`) or with Homebrew (`brew install gcc`).
5. The only thing left to do is to edit `~/.spack/compilers.yaml` to provide the path to `gfortran`:

   ```
   compilers:
     darwin-x86_64:
       clang@7.3.0-apple:
         cc: /usr/bin/clang
         cxx: /usr/bin/clang++
         f77: /path/to/bin/gfortran
         fc: /path/to/bin/gfortran
   ```

   If you used Spack to install GCC, you can get the installation prefix with `spack location -i gcc` (this will only work if you have a single version of GCC installed). With Homebrew, GCC is installed in `/usr/local/Cellar/gcc/x.y.z`.

#### Compiler Verification[¶](#compiler-verification)

You can verify that your compilers are configured properly by installing a simple package. For example:

```
$ spack install zlib%gcc@5.3.0
```

### Vendor-Specific Compiler Configuration[¶](#vendor-specific-compiler-configuration)

With Spack, things usually “just work” with GCC. Not so for other compilers. This section provides details on how to get specific compilers working.

#### Intel Compilers[¶](#intel-compilers)

Intel compilers are unusual because a single Intel compiler version can emulate multiple GCC versions. In order to provide this functionality, the Intel compiler needs GCC to be installed. Therefore, the following steps are necessary to successfully use Intel compilers:

1. Install a version of GCC that implements the desired language features (`spack install gcc`).
2. Tell the Intel compiler how to find that desired GCC. This may be done in one of two ways:

   > “By default, the compiler determines which version of `gcc` or `g++` you have installed from the `PATH` environment variable.
   > If you want to use a version of `gcc` or `g++` other than the default version on your system, you need to use either the `-gcc-name` or `-gxx-name` compiler option to specify the path to the version of `gcc` or `g++` that you want to use.”
   > —[Intel Reference Guide](https://software.intel.com/en-us/node/522750)

Intel compilers may therefore be configured in one of two ways with Spack: using modules, or using compiler flags.

##### Configuration with Modules[¶](#configuration-with-modules)

One can control which GCC is seen by the Intel compiler with modules. A module must be loaded both for the Intel compiler (so it will run) and for GCC (so the compiler can find the intended GCC). The following configuration in `compilers.yaml` illustrates this technique:

```
compilers:
- compiler:
    modules: [gcc-4.9.3, intel-15.0.24]
    operating_system: centos7
    paths:
      cc: /opt/intel-15.0.24/bin/icc-15.0.24-beta
      cxx: /opt/intel-15.0.24/bin/icpc-15.0.24-beta
      f77: /opt/intel-15.0.24/bin/ifort-15.0.24-beta
      fc: /opt/intel-15.0.24/bin/ifort-15.0.24-beta
    spec: intel@15.0.24.4.9.3
```

Note

The version number on the Intel compiler is a combination of the “native” Intel version number and the version of the GNU compiler it is targeting.

##### Command Line Configuration[¶](#command-line-configuration)

One can also control which GCC is seen by the Intel compiler by adding flags to the `icc` command:

1. Identify the location of the compiler you just installed:

   ```
   $ spack location --install-dir gcc
   ~/spack/opt/spack/linux-centos7-x86_64/gcc-4.9.3-iy4rw...
   ```

2. Set up `compilers.yaml`, for example:

   ```
   compilers:
   - compiler:
       modules: [intel-15.0.24]
       operating_system: centos7
       paths:
         cc: /opt/intel-15.0.24/bin/icc-15.0.24-beta
         cflags: -gcc-name ~/spack/opt/spack/linux-centos7-x86_64/gcc-4.9.3-iy4rw.../bin/gcc
         cxx: /opt/intel-15.0.24/bin/icpc-15.0.24-beta
         cxxflags: -gxx-name ~/spack/opt/spack/linux-centos7-x86_64/gcc-4.9.3-iy4rw.../bin/g++
         f77: /opt/intel-15.0.24/bin/ifort-15.0.24-beta
         fc: /opt/intel-15.0.24/bin/ifort-15.0.24-beta
         fflags: -gcc-name ~/spack/opt/spack/linux-centos7-x86_64/gcc-4.9.3-iy4rw.../bin/gcc
       spec: intel@15.0.24.4.9.3
   ```

#### PGI[¶](#pgi)

PGI comes with two sets of compilers for C++ and Fortran, distinguishable by their names.

“Old” compilers:

```
cc: /soft/pgi/15.10/linux86-64/15.10/bin/pgcc
cxx: /soft/pgi/15.10/linux86-64/15.10/bin/pgCC
f77: /soft/pgi/15.10/linux86-64/15.10/bin/pgf77
fc: /soft/pgi/15.10/linux86-64/15.10/bin/pgf90
```

“New” compilers:

```
cc: /soft/pgi/15.10/linux86-64/15.10/bin/pgcc
cxx: /soft/pgi/15.10/linux86-64/15.10/bin/pgc++
f77: /soft/pgi/15.10/linux86-64/15.10/bin/pgfortran
fc: /soft/pgi/15.10/linux86-64/15.10/bin/pgfortran
```

Older installations of PGI contain just the old compilers, whereas newer installations contain both the old and the new. The new compilers are preferable, as some packages (`hdf`) will not build with the old ones. When auto-detecting a PGI compiler, there are cases where Spack will find the old compilers when you really want it to find the new ones. It is best to check `compilers.yaml` for this; if the old compilers are being used, change `pgf77` and `pgf90` to `pgfortran`.

Other issues:

* There are reports that some packages will not build with PGI, including `libpciaccess` and `openssl`. A workaround is to build these packages with another compiler and then use them as dependencies for PGI-built packages.
  For example:

  ```
  $ spack install openmpi%pgi ^libpciaccess%gcc
  ```

* PGI requires a license to use; see [Licensed Compilers](#licensed-compilers) for more information on installation.

Note

It is believed the problem with HDF 4 is that everything is compiled with the `F77` compiler, but at some point some Fortran 90 code slipped in there. So compilers that can handle both FORTRAN 77 and Fortran 90 (`gfortran`, `pgfortran`, etc.) are fine, but compilers specific to one or the other (`pgf77`, `pgf90`) won’t work.

#### NAG[¶](#nag)

The Numerical Algorithms Group provides a licensed Fortran compiler. Like Clang, this requires you to set up a [mixed toolchain](#mixed-toolchains); it is recommended to use GCC for your C/C++ compilers.

The NAG Fortran compilers are a bit more strict than other compilers, and many packages will fail to install with error messages like:

```
Error: mpi_comm_spawn_multiple_f90.f90: Argument 3 to MPI_COMM_SPAWN_MULTIPLE has data type DOUBLE PRECISION in reference from MPI_COMM_SPAWN_MULTIPLEN and CHARACTER in reference from MPI_COMM_SPAWN_MULTIPLEA
```

In order to convince the NAG compiler not to be too picky about calling conventions, you can use `FFLAGS=-mismatch` and `FCFLAGS=-mismatch`.
This can be done through the command line:

```
$ spack install openmpi fflags="-mismatch"
```

Or it can be set permanently in your `compilers.yaml`:

```
- compiler:
    modules: []
    operating_system: centos6
    paths:
      cc: /soft/spack/opt/spack/linux-x86_64/gcc-5.3.0/gcc-6.1.0-q2zosj3igepi3pjnqt74bwazmptr5gpj/bin/gcc
      cxx: /soft/spack/opt/spack/linux-x86_64/gcc-5.3.0/gcc-6.1.0-q2zosj3igepi3pjnqt74bwazmptr5gpj/bin/g++
      f77: /soft/spack/opt/spack/linux-x86_64/gcc-4.4.7/nag-6.1-jt3h5hwt5myezgqguhfsan52zcskqene/bin/nagfor
      fc: /soft/spack/opt/spack/linux-x86_64/gcc-4.4.7/nag-6.1-jt3h5hwt5myezgqguhfsan52zcskqene/bin/nagfor
    flags:
      fflags: -mismatch
    spec: nag@6.1
```

### System Packages[¶](#system-packages)

Once compilers are configured, one needs to determine which pre-installed system packages, if any, to use in builds. This is configured in the file `~/.spack/packages.yaml`. For example, to use an OpenMPI installed in `/opt/local`, one would use:

```
packages:
  openmpi:
    paths:
      openmpi@1.10.1: /opt/local
    buildable: False
```

In general, Spack is easier to use and more reliable if it builds all of its own dependencies. However, there are two packages for which one commonly needs to use system versions:

#### MPI[¶](#mpi)

On supercomputers, sysadmins have already built MPI versions that take into account the specifics of that computer’s hardware. Unless you know how they were built and can choose the correct Spack variants, you are unlikely to get a working MPI from Spack. Instead, use an appropriate pre-installed MPI. If you choose a pre-installed MPI, you should consider using the pre-installed compiler used to build that MPI; see above on `compilers.yaml`.

#### OpenSSL[¶](#openssl)

The `openssl` package underlies much of the security of a modern OS; an attacker can easily “pwn” any computer on which they can modify SSL. Therefore, any `openssl` used on a system should be created in a “trusted environment” — for example, that of the OS vendor.
OpenSSL is also updated by the OS vendor from time to time, in response to security problems discovered in the wider community. It is in everyone’s best interest to use any newly updated versions as soon as they come out. Modern Linux installations have standard procedures for security updates without user involvement.

Spack running at user level is not a trusted environment, nor do Spack users generally keep up to date on the latest security holes in SSL. For these reasons, a Spack-installed OpenSSL should likely not be trusted.

As long as the system-provided SSL works, you can use it instead. One can check whether it works by trying to download an `https://` URL. For example:

```
$ curl -O https://github.com/ImageMagick/ImageMagick/archive/7.0.2-7.tar.gz
```

To tell Spack to use the system-supplied OpenSSL, first determine what version you have:

```
$ openssl version
OpenSSL 1.0.2g 1 Mar 2016
```

Then add the following to `~/.spack/packages.yaml`:

```
packages:
  openssl:
    paths:
      openssl@1.0.2g: /usr
    buildable: False
```

#### BLAS / LAPACK[¶](#blas-lapack)

The recommended way to use system-supplied BLAS / LAPACK packages is to add the following to `packages.yaml`:

```
packages:
  netlib-lapack:
    paths:
      netlib-lapack@3.6.1: /usr
    buildable: False
  all:
    providers:
      blas: [netlib-lapack]
      lapack: [netlib-lapack]
```

Note

Above we pretend that the system-provided BLAS / LAPACK is `netlib-lapack` only because it is the only BLAS / LAPACK provider which uses standard names for libraries (as opposed to, for example, `libopenblas.so`).

Although we specify the external package in `/usr`, Spack is smart enough not to add `/usr/lib` to RPATHs, where it could cause unrelated system libraries to be used instead of their Spack equivalents. `/usr/bin` will be present in PATH; however, it will have lower precedence compared to paths from other dependencies. This ensures that binaries in Spack dependencies are preferred over system binaries.
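The `packages.yaml` entry for a system OpenSSL can be derived mechanically from the `openssl version` output. A minimal sketch of that derivation — the `external_openssl_entry` helper and the default `/usr` prefix are illustrative assumptions, not part of Spack:

```python
def external_openssl_entry(version_output, prefix="/usr"):
    """Build a packages.yaml fragment for a system-supplied OpenSSL.

    `version_output` is the text printed by `openssl version`,
    e.g. "OpenSSL 1.0.2g 1 Mar 2016"; the second token is the version.
    """
    version = version_output.split()[1]
    return (
        "packages:\n"
        "  openssl:\n"
        "    paths:\n"
        f"      openssl@{version}: {prefix}\n"
        "    buildable: False\n"
    )

print(external_openssl_entry("OpenSSL 1.0.2g 1 Mar 2016"))
```

The emitted fragment matches the hand-written example above; the same pattern works for any external package whose version you can query from the command line.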
#### Git[¶](#git)

Some Spack packages use `git` to download, which might not work on some computers. For example, the following error was encountered on a Macintosh during `spack install julia-master`:

```
==> Cloning git repository: https://github.com/JuliaLang/julia.git on branch master
Cloning into 'julia'...
fatal: unable to access 'https://github.com/JuliaLang/julia.git/': SSL certificate problem: unable to get local issuer certificate
```

This problem is related to OpenSSL, and in some cases might be solved by installing a new version of `git` and `openssl`:

1. Run `spack install git`
2. Add the output of `spack module tcl loads git` to your `.bashrc`.

If this doesn’t work, it is also possible to disable checking of SSL certificates by using:

```
$ spack --insecure install
```

Using `--insecure` makes Spack disable SSL checking when fetching from websites and from git.

Warning

This workaround should be used ONLY as a last resort! Without SSL certificate verification, Spack and git will download from sites you wouldn’t normally trust. The code you download and run may then be compromised! While this is not a major issue for archives that will be checksummed, it is especially problematic when downloading from named Git branches or tags, which rely entirely on trusting a certificate for security (no verification).

### Utilities Configuration[¶](#utilities-configuration)

Although Spack does not need installation *per se*, it does rely on other packages being available on its host system. If those packages are out of date or missing, Spack will not work. Sometimes, an appeal to the system’s package manager can fix such problems. If not, the solution is to have Spack install the required packages, and then have Spack use them.
For example, if `curl` doesn’t work, one could use the following steps to provide Spack a working `curl`:

```
$ spack install curl
$ spack load curl
```

or alternately:

```
$ spack module tcl loads curl >> ~/.bashrc
```

or, if environment modules don’t work:

```
$ export PATH=`spack location --install-dir curl`/bin:$PATH
```

External commands are used by Spack in two places: within core Spack, and in the package recipes. The bootstrapping procedure for these two cases is somewhat different, and is treated separately below.

#### Core Spack Utilities[¶](#core-spack-utilities)

Core Spack uses the following packages, mainly to download and unpack source code, and to load generated environment modules: `curl`, `env`, `git`, `go`, `hg`, `svn`, `tar`, `unzip`, `patch`, `environment-modules`.

As long as the user’s environment is set up to successfully run these programs from outside of Spack, they should work inside of Spack as well. They can generally be activated as in the `curl` example above; or some systems might already have an appropriate hand-built environment module that may be loaded. Either way works.

If you find that you are missing some of these programs, `spack` can build some of them for you with `spack bootstrap`. Currently supported programs are `environment-modules`.

A few notes on specific programs in this list:

##### cURL, git, Mercurial, etc.[¶](#curl-git-mercurial-etc)

Spack depends on cURL to download tarballs, the format in which most Spack-installed packages come. Your system’s cURL should always be able to download unencrypted `http://` URLs. However, the cURL on some systems has problems with SSL-enabled `https://` URLs, due to outdated / insecure versions of OpenSSL on those systems. This will prevent Spack from installing any software requiring `https://` until a new cURL has been installed, using the technique above.

Warning

Remember that if you install `curl` via Spack, it may rely on a user-space OpenSSL that is not upgraded regularly.
It may fall out of date faster than your system OpenSSL.

Some packages use source code control systems as their download method: `git`, `hg`, `svn` and occasionally `go`. If you had to install a new `curl`, then chances are the system-supplied versions of these other programs will also not work, because they also rely on OpenSSL. Once `curl` has been installed, you can similarly install the others.

##### Environment Modules[¶](#environment-modules)

In order to use Spack’s generated module files, you must have installed `environment-modules` or `lmod`. The simplest way to get the latest version of either of these tools is installing it as part of Spack’s bootstrap procedure:

```
$ spack bootstrap
```

Warning

At the moment `spack bootstrap` is only able to install `environment-modules`. Extending its capabilities to prefer `lmod` where possible is on the roadmap, and likely to happen before the next release.

Alternatively, on many Linux distributions, you can install a pre-built binary from the vendor’s repository. On Fedora/RHEL/CentOS, for example, this can be done with the command:

```
$ yum install environment-modules
```

Once you have the tool installed and available in your path, you can source Spack’s setup file:

```
$ source share/spack/setup-env.sh
```

This activates [shell support](index.html#shell-support) and makes commands like `spack load` available for use.

#### Package Utilities[¶](#package-utilities)

Spack may also encounter bootstrapping problems inside a package’s `install()` method. In this case, Spack will normally be running inside a *sanitized build environment*. This includes all of the package’s dependencies, but none of the environment Spack inherited from the user: if you load a module or modify `$PATH` before launching Spack, it will have no effect. In this case, you will likely need to use the `--dirty` flag when running `spack install`, causing Spack to **not** sanitize the build environment.
You are now responsible for making sure that the environment does not do strange things to Spack or its installs.

Another way to get Spack to use its own version of something is to add that something as a dependency of the package that needs it. For example:

```
depends_on('binutils', type='build')
```

This is considered best practice for some common build dependencies, such as `autotools` (if the `autoreconf` command is needed) and `cmake` — `cmake` especially, because different packages require different versions of CMake.

##### binutils[¶](#binutils)

Sometimes, strange error messages can happen while building a package. For example, `ld` might crash. Or one receives a message like:

```
ld: final link failed: Nonrepresentable section on output
```

or:

```
ld: .../_fftpackmodule.o: unrecognized relocation (0x2a) in section `.text'
```

These problems are often caused by an outdated `binutils` on your system. Unlike CMake or Autotools, adding `depends_on('binutils')` to every package is not considered best practice, because every package written in C/C++/Fortran would need it. A potential workaround is to load a recent `binutils` into your environment and use the `--dirty` flag.

### GPG Signing[¶](#gpg-signing)

#### `spack gpg`[¶](#spack-gpg)

Spack has support for signing and verifying packages using GPG keys. A separate keyring is used for Spack, so any keys available in the user’s home directory are not used.

#### `spack gpg init`[¶](#spack-gpg-init)

When Spack is first installed, its keyring is empty. Keys stored in `var/spack/gpg` are the default keys for a Spack installation. These keys may be imported by running `spack gpg init`. This will import the default keys into the keyring as trusted keys.

#### Trusting keys[¶](#trusting-keys)

Additional keys may be added to the keyring using `spack gpg trust <keyfile>`. Once a key is trusted, packages signed by the owner of the key may be installed.
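The trust model above acts as a simple gate: a signed package is installable only if the signer's key is in Spack's own keyring, which starts empty. A toy sketch of that idea — the `Keyring` class and the fingerprint strings are illustrative inventions, not Spack's implementation:

```python
class Keyring:
    """Toy model of a per-installation keyring of trusted keys."""

    def __init__(self):
        self.trusted = set()  # fingerprints of trusted keys

    def trust(self, fingerprint):
        # e.g. the effect of `spack gpg trust <keyfile>`
        self.trusted.add(fingerprint)

    def untrust(self, fingerprint):
        # e.g. the effect of `spack gpg untrust <keyid>`
        self.trusted.discard(fingerprint)

    def may_install(self, signer_fingerprint):
        # A signed package is installable only if its signer is trusted.
        return signer_fingerprint in self.trusted

ring = Keyring()             # a fresh install trusts no one
ring.trust("8F3B0C3C...")    # hypothetical fingerprint of a trusted signer
```

The point of the sketch is the separation: the keys in your home-directory GPG keyring play no part in this decision, only the keys explicitly added to Spack's keyring do.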
#### Creating keys[¶](#creating-keys)

You may also create your own key so that you may sign your own packages, using `spack gpg create <name> <email>`. By default, the key has no expiration, but one may be set with the `--expires <date>` flag (see the `gnupg2` documentation for accepted date formats). It is also recommended to add a comment as to the use of the key using the `--comment <comment>` flag. The public half of the key can be exported for sharing with others, so that they may use packages you have signed, with the `--export <keyfile>` flag. Secret keys may also be exported later using the `spack gpg export <location> [<key>...]` command.

Note

Key creation speed: the creation of a new GPG key requires generating a lot of random numbers. Depending on the entropy produced on your system, the entire process may take a long time (*even appearing to hang*). Virtual machines and cloud instances are particularly likely to display this behavior. To speed it up you may install tools like `rngd`, which is usually available as a package in the host OS. On an Ubuntu machine, for example, run the following commands before generating the keys:

```
$ sudo apt-get install rng-tools
$ sudo rngd -r /dev/urandom
```

Another alternative is `haveged`, which can be installed on RHEL/CentOS machines as follows:

```
$ sudo yum install haveged
$ sudo chkconfig haveged on
```

[This Digital Ocean tutorial](https://www.digitalocean.com/community/tutorials/how-to-setup-additional-entropy-for-cloud-servers-using-haveged) provides a good overview of sources of randomness.

#### Listing keys[¶](#listing-keys)

To list the keys available in the keyring, use `spack gpg list`: it lists trusted keys with the `--trusted` flag and keys available for signing with the `--signing` flag. If you would like to remove keys from your keyring, use `spack gpg untrust <keyid>`. Key IDs can be email addresses, names, or (best) fingerprints.
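Fingerprints are the best key IDs because, unlike names and email addresses, they are derived from the key material itself, so two distinct keys cannot share one. A sketch of the idea, loosely modeled on the OpenPGP v4 scheme (a SHA-1 digest over the key data; the exact packet framing real GPG hashes is omitted here, so this is an illustration, not an interoperable fingerprint):

```python
import hashlib

def toy_fingerprint(key_bytes):
    """Derive a 40-hex-digit identifier from raw key material.

    Hashing the bytes directly is enough to show the property that the
    identifier is determined by the key, not by its owner's name.
    """
    digest = hashlib.sha1(key_bytes).hexdigest().upper()
    # Display in the familiar 4-character groups.
    return " ".join(digest[i:i + 4] for i in range(0, len(digest), 4))

print(toy_fingerprint(b"example public key material"))
```

Two keys created with the same `<name> <email>` would collide as name-based IDs, but their fingerprints differ, which is why `spack gpg untrust` is safest with a fingerprint.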
#### Signing and Verifying Packages[¶](#signing-and-verifying-packages)

In order to sign a package, `spack gpg sign <file>` should be used. By default, the signature will be written to `<file>.asc`, but that may be changed with the `--output <file>` flag. If there is only one signing key available, it will be used; if there is more than one, the key to use must be specified with the `--key <keyid>` flag. The `--clearsign` flag may also be used to create a signed file which contains the contents, but it is not recommended. Signed packages may be verified using `spack gpg verify <file>`.

### Spack on Cray[¶](#spack-on-cray)

Spack differs slightly when used on a Cray system. The architecture spec can differentiate between the front-end and back-end processor and operating system. For example, on Edison at NERSC, the back-end target processor is “Ivy Bridge”, so you can specify to use the back-end this way:

```
$ spack install zlib target=ivybridge
```

You can also use the operating system to build against the back-end:

```
$ spack install zlib os=CNL10
```

Notice that the name includes both the operating system name and the major version number concatenated together.

Alternatively, if you want to build something for the front-end, you can specify the front-end target processor. The processor for a login node on Edison is “Sandy Bridge”, so we specify it on the command line like so:

```
$ spack install zlib target=sandybridge
```

And the front-end operating system is:

```
$ spack install zlib os=SuSE11
```

#### Cray compiler detection[¶](#cray-compiler-detection)

Spack can detect compilers using two methods. For the front-end, we treat everything the same. The difference lies in back-end compiler detection. Back-end compiler detection is made via the Tcl `module avail` command. Once it detects a compiler, it writes the appropriate PrgEnv and compiler module name to `compilers.yaml` and sets the paths to each compiler with Cray’s compiler wrapper names (i.e. cc, CC, ftn). During build time, Spack will load the correct PrgEnv and compiler module and will call the appropriate wrapper.

The `compilers.yaml` config file will also differ. There is a modules section that is filled with the compiler’s Programming Environment and module name. On other systems, this field is empty `[]`:

```
- compiler:
    modules:
    - PrgEnv-intel
    - intel/15.0.109
```

As mentioned earlier, the compiler paths will look different on a Cray system. Since most compilers are invoked using cc, CC and ftn, the paths for each compiler are replaced with their respective Cray compiler wrapper names:

```
paths:
  cc: cc
  cxx: CC
  f77: ftn
  fc: ftn
```

as opposed to explicit paths to the compiler executables. This allows Spack to call the Cray compiler wrappers during build time. For more on compiler configuration, check out [Compiler configuration](#compiler-config).

Spack sets the default Cray link type to dynamic, to better match other platforms. Individual packages can enable static linking (which is the default outside of Spack on Cray systems) using the `-static` flag.

#### Setting defaults and using Cray modules[¶](#setting-defaults-and-using-cray-modules)

If you want to use the default compilers for each PrgEnv and also be able to load Cray external modules, you will need to set up a `packages.yaml`. Here’s an example of an external configuration for Cray modules:

```
packages:
  mpich:
    modules:
      mpich@7.3.1%gcc@5.2.0 arch=cray_xc-haswell-CNL10: cray-mpich
      mpich@7.3.1%intel@16.0.0.109 arch=cray_xc-haswell-CNL10: cray-mpich
  all:
    providers:
      mpi: [mpich]
```

This tells Spack that for any package that depends on mpi, it should load the cray-mpich module into the environment. You can then use whatever environment variables, libraries, etc., are brought into the environment via `module load`.
Note

For Cray-provided packages, it is best to use `modules:` instead of `paths:` in `packages.yaml`, because the Cray Programming Environment heavily relies on modules (e.g., loading the `cray-mpich` module adds MPI libraries to the compiler wrapper link line).

You can set the default compiler that Spack will use for each compiler type. If you want to use the Cray defaults, then set them under `all:` in `packages.yaml`. In the compiler field, set the compiler specs in your order of preference. Whenever you build with that compiler type, Spack will concretize to that version.

Here is an example of a full `packages.yaml` used at NERSC:

```
packages:
  mpich:
    modules:
      mpich@7.3.1%gcc@5.2.0 arch=cray_xc-CNL10-ivybridge: cray-mpich
      mpich@7.3.1%intel@16.0.0.109 arch=cray_xc-SuSE11-ivybridge: cray-mpich
    buildable: False
  netcdf:
    modules:
      netcdf@4.3.3.1%gcc@5.2.0 arch=cray_xc-CNL10-ivybridge: cray-netcdf
      netcdf@4.3.3.1%intel@16.0.0.109 arch=cray_xc-CNL10-ivybridge: cray-netcdf
    buildable: False
  hdf5:
    modules:
      hdf5@1.8.14%gcc@5.2.0 arch=cray_xc-CNL10-ivybridge: cray-hdf5
      hdf5@1.8.14%intel@16.0.0.109 arch=cray_xc-CNL10-ivybridge: cray-hdf5
    buildable: False
  all:
    compiler: [gcc@5.2.0, intel@16.0.0.109]
    providers:
      mpi: [mpich]
```

Here we tell Spack to use gcc@5.2.0 whenever we build with GCC, and intel@16.0.0.109 whenever we build with the Intel compilers. We add a spec for each compiler type for each Cray module. This ensures that for each compiler on our system we can use that external module. For more on external packages check out the section [External Packages](index.html#sec-external-packages).

#### Using Linux containers on Cray machines[¶](#using-linux-containers-on-cray-machines)

Spack uses environment variables particular to the Cray programming environment to determine which systems are Cray platforms. These environment variables may be propagated into containers that are not using the Cray programming environment.
To ensure that Spack does not autodetect the Cray programming environment, unset the environment variable `CRAYPE_VERSION`. This will cause Spack to treat a linux container on a Cray system as a base linux distro. Basic Usage[¶](#basic-usage) --- The `spack` command has many *subcommands*. You’ll only need a small subset of them for typical usage. Note that Spack colorizes output. `less -R` should be used with Spack to maintain this colorization. E.g.: ``` $ spack find | less -R ``` It is recommended that the following be put in your `.bashrc` file: ``` alias less='less -R' ``` ### Listing available packages[¶](#listing-available-packages) To install software with Spack, you need to know what software is available. You can see a list of available package names at the [Package List](index.html#package-list) webpage, or using the `spack list` command. #### `spack list`[¶](#spack-list) The `spack list` command prints out a list of all of the packages Spack can install: ``` $ spack list abinit mummer r-c50 abyss mumps r-callr accfft munge r-car ack muparser r-caret activeharmony muscle r-category adept-utils muse r-catools adios muster r-cdcfluview adios2 mvapich2 r-cellranger adlbx mxml r-checkmate adol-c mxnet r-checkpoint aegean nag r-chemometrics aida nalu r-chron albany nalu-wind r-circlize albert namd r-class alglib nano r-classint allinea-forge nanoflann r-cli allinea-reports nanopb r-clipr allpaths-lg nasm r-cluster alquimia nauty r-clusterprofiler alsa-lib ncbi-magicblast r-cner aluminum ncbi-rmblastn r-coda amg ncbi-toolkit r-codetools amg2013 nccl r-coin amp nccmp r-colorspace ampliconnoise ncdu r-complexheatmap amrex ncftp r-corpcor amrvis ncl r-corrplot andi nco r-covr angsd ncurses r-cowplot ant ncview r-crayon antlr ndiff r-crosstalk ants nek5000 r-ctc ape nekbone r-cubature aperture-photometry nekcem r-cubist apex nektar r-curl apple-libunwind neovim r-data-table applewmproto nest r-dbi appres netcdf r-dbplyr apr netcdf-cxx r-delayedarray apr-util 
netcdf-cxx4 r-deldir aragorn netcdf-fortran r-dendextend archer netgauge r-deoptim argobots netgen r-deoptimr argp-standalone netlib-lapack r-deseq argtable netlib-scalapack r-deseq2 arlecore netlib-xblas r-desolve armadillo nettle r-devtools arpack-ng neuron r-diagrammer arrow nextflow r-dicekriging ascent nfft r-dichromat asciidoc nghttp2 r-diffusionmap aspa nginx r-digest aspcud ngmlr r-diptest aspect ninja r-dirichletmultinomial aspell ninja-fortran r-dismo aspell6-de nlohmann-json r-dnacopy aspell6-en nlopt r-do-db aspell6-es nmap r-domc aspera-cli nnvm r-doparallel assimp node-js r-dorng astra notmuch r-dose astral npb r-downloader astyle npm r-dplyr at-spi2-atk npth r-dt at-spi2-core nspr r-dtw atk numactl r-dygraphs atlas numdiff r-e1071 atom-dft nut r-edger atompaw nvptx-tools r-ellipse atop nwchem r-ensembldb augustus ocaml r-ergm autoconf occa r-evaluate autodock-vina oce r-expm autofact oclint r-factoextra autogen oclock r-factominer automaded octave r-fastcluster automake octave-optim r-fastmatch axel octave-splines r-ff axl octave-struct r-fftwtools bamdst octopus r-fgsea bamtools of-adios-write r-filehash bamutil of-precice r-findpython barrnap omega-h r-fit-models bash ompss r-flashclust bash-completion ompt-openmp r-flexclust bats oniguruma r-flexmix bazel ont-albacore r-fnn bbcp opa-psm2 r-forcats bbmap opam r-foreach bc opari2 r-forecast bcftools openbabel r-foreign bcl2fastq2 openblas r-formatr bdftopcf opencoarrays r-formula bdw-gc opencv r-fpc bear openexr r-fracdiff beast1 openfast r-futile-logger beast2 openfoam-com r-futile-options bedops openfoam-org r-gbm bedtools2 openfst r-gcrma beforelight opengl r-gdata benchmark openglu r-gdsfmt berkeley-db openjpeg r-geiger bertini openmc r-genefilter bib2xhtml openmpi r-genelendatabase bigreqsproto opennurbs r-geneplotter binutils openpmd-api r-genie3 bioawk openscenegraph r-genomeinfodb biopieces openslide r-genomeinfodbdata bismark openspeedshop r-genomicalignments bison openspeedshop-utils 
r-genomicfeatures bitmap openssh r-genomicranges blasr openssl r-geomorph blasr-libcpp opium r-geoquery blast-plus optional-lite r-geosphere blat opus r-getopt blaze orca r-getoptlong blis orfm r-ggally bliss orthofinder r-ggbio blitz orthomcl r-ggdendro bmake osu-micro-benchmarks r-ggjoy bml otf r-ggmap bohrium otf2 r-ggplot2 bolt p4est r-ggpubr bookleaf-cpp p7zip r-ggrepel boost pacbio-daligner r-ggridges boostmplcartesianproduct pacbio-damasker r-ggsci bowtie pacbio-dazz-db r-ggvis bowtie2 pacbio-dextractor r-gistr boxlib packmol r-git2r bpp-core pacvim r-glimma bpp-phyl pagit r-glmnet bpp-seq pagmo r-globaloptions bpp-suite paml r-glue bracken panda r-gmodels braker pandaseq r-gmp branson pango r-go-db breakdancer pangomm r-googlevis breseq papi r-goplot brigand papyrus r-gosemsim bsseeker2 paradiseo r-goseq bucky parallel r-gostats bumpversion parallel-netcdf r-gplots busco paraver r-graph butter paraview r-gridbase bwa parmetis r-gridextra bwtool parmgridgen r-gseabase byobu parquet r-gss bzip2 parsimonator r-gsubfn c-blosc parsplice r-gtable c-lime partitionfinder r-gtools cabana patch r-gtrellis caffe patchelf r-gviz cairo pathfinder r-haven cairomm pax-utils r-hexbin caliper pbbam r-highr callpath pbmpi r-hmisc camellia pcma r-hms candle-benchmarks pcre r-htmltable cantera pcre2 r-htmltools canu pdf2svg r-htmlwidgets cap3 pdftk r-httpuv cares pdsh r-httr cask pdt r-hwriter casper pegtl r-hypergraph catalyst pennant r-ica catch percept r-igraph cbench perl r-illuminaio cblas perl-algorithm-diff r-impute cbtf perl-app-cmd r-influencer cbtf-argonavis perl-array-utils r-inline cbtf-argonavis-gui perl-b-hooks-endofscope r-interactivedisplaybase cbtf-krell perl-bio-perl r-ipred cbtf-lanl perl-bit-vector r-iranges ccache perl-cairo r-irdisplay cctools perl-capture-tiny r-irkernel cdbfasta perl-carp-clan r-irlba cdd perl-cgi r-iso cddlib perl-class-data-inheritable r-iterators cdhit perl-class-inspector r-janitor cdo perl-class-load r-jaspar2018 ceed 
perl-class-load-xs r-jomo cereal perl-compress-raw-bzip2 r-jpeg ceres-solver perl-compress-raw-zlib r-jsonlite cfitsio perl-contextual-return r-kegg-db cgal perl-cpan-meta-check r-kegggraph cgm perl-data-optlist r-keggrest cgns perl-data-stag r-kernlab channelflow perl-dbd-mysql r-kernsmooth charliecloud perl-dbd-sqlite r-kknn charmpp perl-dbfile r-knitr chatterbug perl-dbi r-ks check perl-devel-cycle r-labeling chlorop perl-devel-globaldestruction r-lambda-r chombo perl-devel-overloadinfo r-laplacesdemon cistem perl-devel-stacktrace r-lars cityhash perl-digest-md5 r-lattice clamr perl-dist-checkconflicts r-latticeextra clapack perl-encode-locale r-lava claw perl-eval-closure r-lazyeval cleaveland4 perl-exception-class r-leaflet cleverleaf perl-exporter-tiny r-leaps clfft perl-extutils-depends r-learnbayes clhep perl-extutils-makemaker r-lhs clingo perl-extutils-pkgconfig r-limma cloc perl-file-copy-recursive r-lme4 cloog perl-file-listing r-lmtest cloverleaf perl-file-pushd r-locfit cloverleaf3d perl-file-sharedir-install r-log4r clustalo perl-file-slurp-tiny r-lpsolve clustalw perl-file-slurper r-lsei cmake perl-file-which r-lubridate cmocka perl-font-ttf r-magic cmor perl-gd r-magrittr cnmem perl-gd-graph r-makecdfenv cnpy perl-gd-text r-maldiquant cntk perl-gdgraph-histogram r-mapproj cntk1bitsgd perl-graph r-maps codar-cheetah perl-graph-readwrite r-maptools codes perl-html-parser r-markdown coevp perl-html-tagset r-mass cohmm perl-http-cookies r-matr coinhsl perl-http-daemon r-matrix colm perl-http-date r-matrixmodels colordiff perl-http-message r-matrixstats comd perl-http-negotiate r-mclust commons-lang perl-inline r-mcmcglmm commons-lang3 perl-inline-c r-mco commons-logging perl-intervaltree r-mda compiz perl-io-compress r-memoise compositeproto perl-io-html r-mergemaid conduit perl-io-sessiondata r-methodss3 constype perl-io-socket-ssl r-mgcv converge perl-io-string r-mgraster coreutils perl-json r-mice corset perl-libwww-perl r-mime cosmomc 
perl-list-moreutils r-minfi cosp2 perl-log-log4perl r-minqa cp2k perl-lwp r-misc3d cppad perl-lwp-mediatypes r-mitml cppcheck perl-lwp-protocol-https r-mixtools cppgsl perl-math-cdf r-mlbench cpprestsdk perl-math-cephes r-mlinterfaces cppunit perl-math-matrixreal r-mlr cppzmq perl-module-build r-mlrmbo cpu-features perl-module-implementation r-mmwrweek cpuinfo perl-module-runtime r-mnormt cram perl-module-runtime-conflicts r-modelmetrics cryptopp perl-moose r-modelr cscope perl-mozilla-ca r-modeltools csdp perl-mro-compat r-mpm ctffind perl-namespace-clean r-msnbase cub perl-net-http r-multcomp cube perl-net-scp-expect r-multicool cubelib perl-net-ssleay r-multtest cubew perl-package-deprecationmanager r-munsell cuda perl-package-stash r-mvtnorm cuda-memtest perl-package-stash-xs r-mzid cudnn perl-padwalker r-mzr cufflinks perl-parallel-forkmanager r-nanotime cups perl-params-util r-ncbit curl perl-parse-recdescent r-ncdf4 cvs perl-pdf-api2 r-network czmq perl-pegex r-networkd3 dakota perl-perl4-corelibs r-nlme daligner perl-perl6-slurp r-nloptr damageproto perl-perlio-gzip r-nmf damaris perl-perlio-utf8-strict r-nnet damselfly perl-scalar-util-numeric r-nnls darshan-runtime perl-soap-lite r-nor1mix darshan-util perl-star-fusion r-np dash perl-statistics-descriptive r-numderiv datamash perl-statistics-pca r-oligoclasses dataspaces perl-sub-exporter r-oo davix perl-sub-exporter-progressive r-openssl dbcsr perl-sub-identify r-org-hs-eg-db dbus perl-sub-install r-organismdbi dealii perl-sub-name r-packrat dealii-parameter-gui perl-sub-uplevel r-pacman deconseq-standalone perl-svg r-pamr dejagnu perl-swissknife r-pan delly2 perl-task-weaken r-parallelmap denovogear perl-term-readkey r-paramhelpers dftfe perl-test-cleannamespaces r-party dia perl-test-deep r-partykit dialign-tx perl-test-differences r-pathview diamond perl-test-exception r-pbapply diffsplice perl-test-fatal r-pbdzmq diffutils perl-test-memory-cycle r-pbkrtest direnv perl-test-most r-pcamethods discovar 
perl-test-needs r-pcapp discovardenovo perl-test-requires r-permute dislin perl-test-requiresinternet r-pfam-db diy perl-test-warn r-phangorn dlpack perl-test-warnings r-phantompeakqualtools dmd perl-text-csv r-phyloseq dmlc-core perl-text-diff r-picante dmtcp perl-text-simpletable r-pkgconfig dmxproto perl-text-soundex r-pkgmaker docbook-xml perl-text-unidecode r-plogr docbook-xsl perl-time-hires r-plot3d dos2unix perl-time-piece r-plotly dotnet-core-sdk perl-try-tiny r-plotrix double-conversion perl-uri r-pls doxygen perl-uri-escape r-plyr dri2proto perl-version r-pmcmr dri3proto perl-want r-png dsdp perl-www-robotrules r-powerlaw dsrc perl-xml-parser r-prabclus dtcmp perl-xml-parser-lite r-praise dyninst perl-xml-simple r-preprocesscore ea-utils perl-yaml-libyaml r-prettyunits easybuild petsc r-processx ebms pexsi r-prodlim eccodes pfft r-progress eclipse-gcj-parser pflotran r-protgenerics ecp-proxy-apps pfunit r-proto ed pgdspider r-proxy editres pgi r-pryr eigen pgmath r-ps elasticsearch phantompeakqualtools r-psych elemental phast r-ptw elfutils phasta r-purrr elk phist r-quadprog elpa phylip r-quantmod emacs phyluce r-quantreg ember picard r-quantro emboss picsar r-qvalue encodings picsarlite r-r6 energyplus pidx r-randomforest environment-modules pigz r-ranger eospac pilon r-rappdirs er pindel r-raster es piranha r-rbgl esmf pism r-rbokeh essl pixman r-rcolorbrewer ethminer pkg-config r-rcpp etsf-io pkgconf r-rcpparmadillo everytrace planck-likelihood r-rcppblaze everytrace-example plasma r-rcppcctz evieext platypus r-rcppcnpy exabayes plink r-rcppeigen examinimd plplot r-rcppprogress exampm plumed r-rcurl exasp2 pmgr-collective r-rda exmcutils pmix r-readr exodusii pnfft r-readxl exonerate pngwriter r-registry expat pnmpi r-rematch expect poamsa r-reordercluster express pocl r-reportingtools extrae polymake r-repr exuberant-ctags poppler r-reprex f90cache poppler-data r-reshape fabtests porta r-reshape2 falcon portage r-rex fast-global-file-status 
portcullis r-rgdal fasta postgresql r-rgenoud fastjar ppl r-rgeos fastmath pplacer r-rgl fastme prank r-rgooglemaps fastphase precice r-rgraphviz fastq-screen presentproto r-rhdf5 fastqc preseq r-rhtslib fastqvalidator price r-rinside fasttree primer3 r-rjags fastx-toolkit prinseq-lite r-rjava fenics printproto r-rjson fermi prng r-rjsonio fermikit probconsrna r-rlang fermisciencetools prodigal r-rmarkdown ferret proj r-rminer ffmpeg protobuf r-rmpfr fftw proxymngr r-rmpi figtree pruners-ninja r-rmysql fimpute ps-lite r-rngtools findutils psi4 r-robustbase fio pslib r-rocr fish psm r-rodbc fixesproto psmc r-rots flac pstreams r-roxygen2 flang pugixml r-rpart flann pumi r-rpart-plot flash pv r-rpostgresql flatbuffers pvm r-rprojroot flecsale pxz r-rsamtools flecsi py-3to2 r-rsnns flex py-4suite-xml r-rsolnp flint py-abipy r-rsqlite flit py-adios r-rstan fltk py-affine r-rstudioapi flux-core py-alabaster r-rtracklayer flux-sched py-apache-libcloud r-rtsne fluxbox py-apipkg r-rvcheck fmt py-appdirs r-rvest foam-extend py-appnope r-rzmq folly py-apscheduler r-s4vectors font-adobe-100dpi py-argcomplete r-samr font-adobe-75dpi py-argparse r-sandwich font-adobe-utopia-100dpi py-ase r-scales font-adobe-utopia-75dpi py-asn1crypto r-scatterplot3d font-adobe-utopia-type1 py-astroid r-sdmtools font-alias py-astropy r-segmented font-arabic-misc py-atomicwrites r-selectr font-bh-100dpi py-attrs r-seqinr font-bh-75dpi py-autopep8 r-seqlogo font-bh-lucidatypewriter-100dpi py-avro r-seurat font-bh-lucidatypewriter-75dpi py-avro-json-serializer r-sf font-bh-ttf py-babel r-sfsmisc font-bh-type1 py-backcall r-shape font-bitstream-100dpi py-backports-abc r-shiny font-bitstream-75dpi py-backports-functools-lru-cache r-shinydashboard font-bitstream-speedo py-backports-shutil-get-terminal-size r-shortread font-bitstream-type1 py-backports-ssl-match-hostname r-siggenes font-cronyx-cyrillic py-basemap r-simpleaffy font-cursor-misc py-bcbio-gff r-sm font-daewoo-misc py-beautifulsoup4 r-smoof 
font-dec-misc py-binwalk r-sn font-ibm-type1 py-biom-format r-snow font-isas-misc py-biopython r-snowfall font-jis-misc py-bitarray r-snprelate font-micro-misc py-bitstring r-som font-misc-cyrillic py-bleach r-somaticsignatures font-misc-ethiopic py-blessings r-sourcetools font-misc-meltho py-bokeh r-sp font-misc-misc py-boltons r-sparsem font-mutt-misc py-bottleneck r-spdep font-schumacher-misc py-breakseq2 r-speedglm font-screen-cyrillic py-brian r-spem font-sony-misc py-brian2 r-splitstackshape font-sun-misc py-bsddb3 r-sqldf font-util py-bx-python r-squash font-winitzki-cyrillic py-cartopy r-stanheaders font-xfree86-type1 py-cclib r-statmod fontcacheproto py-cdat-lite r-statnet-common fontconfig py-cdo r-stringi fontsproto py-certifi r-stringr fonttosfnt py-cffi r-strucchange fp16 py-chardet r-subplex fpc py-checkm-genome r-summarizedexperiment fr-hit py-cheetah r-survey freebayes py-click r-survival freeglut py-cligj r-sva freetype py-cloudpickle r-tarifx fseq py-cogent r-tclust fsl py-colorama r-tensora fslsfonts py-colormath r-testit fstobdf py-configparser r-testthat ftgl py-counter r-tfbstools funhpc py-coverage r-tfmpvalue fyba py-cpuinfo r-th-data gapbs py-crispresso r-threejs gapcloser py-cryptography r-tibble gapfiller py-csvkit r-tidycensus gasnet py-current r-tidyr gatk py-cutadapt r-tidyselect gaussian py-cvxopt r-tidyverse gawk py-cycler r-tiff gblocks py-cython r-tigris gcc py-dask r-timedate gccmakedep py-dateutil r-tmixclust gccxml py-dbf r-topgo gconf py-decorator r-trimcluster gcta py-deeptools r-truncnorm gdal py-dendropy r-trust gdb py-dill r-tseries gdbm py-discover r-tsne gdk-pixbuf py-dlcpar r-ttr gdl py-docopt r-udunits2 geant4 py-docutils r-units gearshifft py-doxypy r-utils gemmlowp py-doxypypy r-uuid genemark-et py-dryscrape r-variantannotation genomefinisher py-dxchange r-varselrf genometools py-dxfile r-vcd geopm py-easybuild-easyblocks r-vegan geos py-easybuild-easyconfigs r-vgam gettext py-easybuild-framework r-vipor gflags 
py-edffile r-viridis ghost py-editdistance r-viridislite ghostscript py-elasticsearch r-visnetwork ghostscript-fonts py-elephant r-vsn giflib py-emcee r-whisker git py-entrypoints r-withr git-imerge py-enum34 r-xde git-lfs py-epydoc r-xgboost gl2ps py-espresso r-xlconnect glew py-espressopp r-xlconnectjars glfmultiples py-et-xmlfile r-xlsx glib py-eventlet r-xlsxjars glibmm py-execnet r-xmapbridge glimmer py-fastaindex r-xml glm py-fasteners r-xml2 global py-faststructure r-xtable globalarrays py-filelock r-xts globus-toolkit py-fiscalyear r-xvector glog py-flake8 r-yaml gloo py-flake8-polyfill r-yapsa glpk py-flask r-yaqcaffy glproto py-flask-compress r-yarn glvis py-flask-socketio r-zlibbioc gmake py-flexx r-zoo gmap-gsnap py-fn racon gmime py-fparser raft gmodel py-funcsigs ragel gmp py-functools32 raja gmsh py-future randfold gnat py-futures random123 gnu-prolog py-fypp randrproto gnupg py-gdbgui range-v3 gnuplot py-genders rankstr gnutls py-genshi rapidjson go py-gevent ravel go-bootstrap py-git-review raxml gobject-introspection py-git2 ray googletest py-gnuplot rclone gotcha py-goatools rdma-core gource py-gpaw rdp-classifier gperf py-greenlet re2c gperftools py-griddataformats readfq gplates py-guidata readline grackle py-guiqwt recordproto gradle py-h5py redset grandr py-hdfs redundans graph500 py-hepdata-validator regcm graphicsmagick py-html2text relion graphlib py-html5lib rempi graphmap py-htseq rename graphviz py-httpbin rendercheck grass py-hypothesis renderproto grib-api py-idna repeatmasker grnboost py-igraph resourceproto groff py-illumina-utils revbayes gromacs py-imageio rgb gsl py-imagesize rhash gslib py-iminuit rlwrap gtkmm py-importlib rmats gtkorvo-atl py-ipaddress rmlab gtkorvo-cercs-env py-ipdb rna-seqc gtkorvo-dill py-ipykernel rngstreams gtkorvo-enet py-ipython rockstar gtkplus py-ipython-genutils root gts py-ipywidgets rose guidance py-isort ross guile py-itsdangerous rr gurobi py-jdcal rsbench h5hut py-jedi rsem h5part py-jinja2 
rstart h5utils py-joblib rsync h5z-zfp py-jprops rtags hacckernels py-jpype rtax hadoop py-jsonschema ruby halc py-junit-xml ruby-gnuplot hapcut2 py-jupyter-client ruby-narray hapdip py-jupyter-console ruby-ronn haploview py-jupyter-core ruby-rubyinline harfbuzz py-jupyter-notebook ruby-svn2git harminv py-keras ruby-terminal-table hdf py-kiwisolver rust hdf5 py-lark-parser rust-bindgen hdf5-blosc py-latexcodec sabre help2man py-lazy sailfish henson py-lazy-object-proxy salmon hepmc py-lazy-property sambamba heppdt py-lazyarray samblaster hic-pro py-libconf samrai highfive py-libensemble samtools highwayhash py-line-profiler sandbox hiop py-linecache2 sas hisat2 py-lit satsuma2 hisea py-llvmlite savanna hmmer py-lmfit saws homer py-localcider sbt hoomd-blue py-locket scala hpccg py-lockfile scalasca hpctoolkit py-logilab-common scalpel hpctoolkit-externals py-lrudict scan-for-matches hpgmg py-lxml scons hpl py-lzstring scorec-core hpx py-macholib scorep hpx5 py-machotools scotch hsakmt py-macs2 scr hstr py-maestrowf screen htop py-mako scripts htslib py-mappy scrnsaverproto httpie py-markdown sctk hub py-markupsafe sdl2 hunspell py-matplotlib sdl2-image hwloc py-mccabe sed hybpiper py-mdanalysis sentieon-genomics hydra py-meep seqan hydrogen py-memory-profiler seqprep hypre py-methylcode seqtk i3 py-mg-rast-tools serf ibmisc py-misopy sessreg iceauth py-mistune setxkbmap icedtea py-mock sga icet py-moltemplate shapeit ico py-mongo shared-mime-info icu4c py-monotonic shiny-server id3lib py-monty shocklibs idba py-more-itertools shoremap igraph py-mpi4py shortbred igvtools py-mpmath shortstack ilmbase py-multiprocess showfont image-magick py-multiqc shuffile imake py-mx sickle imp py-mxnet siesta impute2 py-myhdl signalp infernal py-mysqldb1 signify inputproto py-natsort silo intel py-nbconvert simplemoc intel-daal py-nbformat simul intel-gpu-tools py-neo simulationio intel-ipp py-nestle singularity intel-mkl py-netcdf4 skilion-onedrive intel-mkl-dnn py-netifaces 
sleef intel-mpi py-networkx slepc intel-parallel-studio py-nose slurm intel-tbb py-nosexcover smalt intel-xed py-numba smproxy intltool py-numexpr snakemake ior py-numexpr3 snap iozone py-numpy snap-berkeley iperf2 py-numpydoc snap-korf iperf3 py-olefile snappy ipopt py-ont-fast5-api snbone isaac py-openpmd-validator sniffles isaac-server py-openpyxl snpeff isl py-openslide-python snphylo itstool py-opentuner snptest itsx py-ordereddict soap2 jackcess py-oset soapdenovo-trans jags py-owslib soapdenovo2 jansson py-packaging soapindel jasper py-palettable soapsnp jbigkit py-pandas sofa-c jchronoss py-paramiko somatic-sniper jdk py-partd sortmerna jellyfish py-pathlib2 sosflow jemalloc py-pathos sowing jmol py-pathspec sox jq py-patsy spades json-c py-pbr span-lite json-cwx py-pep8-naming spark json-glib py-perf sparsehash jsoncpp py-performance sparta judy py-periodictable spdlog julia py-petsc4py spectrum-mpi k8 py-pexpect speex kahip py-phonopy spglib kaiju py-pickleshare sph2pipe kaks-calculator py-picrust spherepack kaldi py-pil spindle kallisto py-pillow spot karma py-pip sqlite kbproto py-pipits sqlitebrowser kdiff3 py-pkgconfig squid kealib py-plotly sra-toolkit kentutils py-pluggy ssht kibana py-ply sspace-longread kim-api py-pmw sspace-standard kmergenie py-poster sst-core kokkos py-pox sst-dumpi kraken py-ppft sst-macro krb5 py-prettytable stacks krims py-progress staden-io-lib kripke py-proj star kvasir-mpl py-projectq star-ccm-plus kvtree py-prompt-toolkit startup-notification laghos py-protobuf stat lammps py-psutil stc last py-psyclone steps lastz py-ptyprocess stow latte py-pudb strace launchmon py-py stream lazyten py-py2bit strelka lbann py-py2cairo stress lbxproxy py-py2neo string-view-lite lbzip2 py-py4j stringtie lcals py-pyani structure lcms py-pyarrow strumpack ldc py-pyasn1 sublime-text ldc-bootstrap py-pybigwig subread legion py-pybind11 subversion leveldb py-pybtex suite-sparse lftp py-pybtex-docutils sumaclust libaec py-pycairo sundials 
libaio py-pychecker superlu libapplewm py-pycodestyle superlu-dist libarchive py-pycparser superlu-mt libassuan py-pycrypto supernova libatomic-ops py-pycurl sw4lite libbeagle py-pydatalog swap-assembler libbeato py-pydispatcher swarm libbsd py-pydot swfft libbson py-pyelftools swiftsim libcanberra py-pyepsg swig libcap py-pyfasta symengine libceed py-pyfftw sympol libcerf py-pyflakes sz libcheck py-pygdbmi tabix libcint py-pygments talass libcircle py-pygobject talloc libconfig py-pygpu tantan libcroco py-pygtk tar libctl py-pylint targetp libdivsufsort py-pymatgen task libdmx py-pyminifier taskd libdrm py-pymol tasmanian libdwarf py-pympler tassel libedit py-pymysql tau libelf py-pynn tcl libemos py-pypar tcl-itcl libepoxy py-pyparsing tcl-tcllib libev py-pypeflow tcl-tclxml libevent py-pyprof2html tclap libevpath py-pyqi tcoffee libfabric py-pyqt tcptrace libffi py-pyrad tcsh libffs py-pysam tealeaf libfontenc py-pyscaf templight libfs py-pyserial templight-tools libgcrypt py-pyshp tetgen libgd py-pyside tethex libgeotiff py-pysocks texinfo libgit2 py-pyspark texlive libgpg-error py-pysqlite the-platinum-searcher libgpuarray py-pytables the-silver-searcher libgridxc py-pytest thornado-mini libgtextutils py-pytest-cov thrift libharu py-pytest-flake8 thrust libhio py-pytest-httpbin tig libiberty py-pytest-mock tinyxml libice py-pytest-runner tinyxml2 libiconv py-pytest-xdist tioga libint py-python-daemon tk libjpeg py-python-engineio tldd libjpeg-turbo py-python-gitlab tmalign libksba py-python-levenshtein tmhmm liblbxutil py-python-socketio tmux liblockfile py-pythonqwt tmuxinator libmatheval py-pytorch tophat libmaxminddb py-pytz tppred libmesh py-pyutilib tracer libmng py-pywavelets transabyss libmongoc py-pyyaml transdecoder libmonitor py-qtawesome transposome libnbc py-qtconsole transset libnl py-qtpy trapproto libnova py-quantities tree libogg py-quast treesub liboldx py-radical-utils trf libpcap py-ranger triangle libpciaccess py-rasterio trilinos libpfm4 
py-readme-renderer trimal libpipeline py-regex trimgalore libpng py-reportlab trimmomatic libpsl py-requests trinity libpthread-stubs py-requests-toolbelt trinotate libquo py-restview trnascan-se librom py-rope turbine libsharp py-rpy2 turbomole libshm py-rsa tut libsigcpp py-rseqc twm libsigsegv py-rtree tycho2 libsm py-saga-python typhon libsodium py-scandir typhonio libspatialindex py-scientificpython uberftp libsplash py-scikit-image ucx libssh py-scikit-learn udunits2 libssh2 py-scipy ufo-core libsvm py-seaborn ufo-filters libszip py-setuptools umpire libtermkey py-setuptools-git unblur libtiff py-setuptools-scm uncrustify libtool py-sfepy unibilium libunistring py-sh unifycr libunwind py-shapely unison libuuid py-shiboken units libuv py-simplegeneric unixodbc libvorbis py-simplejson unuran libvterm py-singledispatch unzip libwebsockets py-sip usearch libwindowswm py-six util-linux libx11 py-slepc4py util-macros libxau py-slurm-pipeline uuid libxaw py-sncosmo valgrind libxaw3d py-snowballstemmer vampirtrace libxc py-snuggs vardictjava libxcb py-spectra varscan libxcomposite py-spefile vc libxcursor py-spglib vcftools libxdamage py-sphinx vcsh libxdmcp py-sphinx-bootstrap-theme vdt libxevie py-sphinx-rtd-theme vecgeom libxext py-sphinxcontrib-bibtex veclibfort libxfixes py-sphinxcontrib-programoutput vegas2 libxfont py-sphinxcontrib-websupport veloc libxfont2 py-spyder velvet libxfontcache py-spykeutils verilator libxft py-sqlalchemy verrou libxi py-statsmodels videoproto libxinerama py-stevedore viennarna libxkbcommon py-storm viewres libxkbfile py-subprocess32 vim libxkbui py-symengine virtualgl libxml2 py-symfit visit libxmu py-sympy vizglow libxp py-tabulate vmatch libxpm py-tappy voropp libxpresent py-terminado votca-csg libxprintapputil py-testinfra votca-ctp libxprintutil py-tetoolkit votca-tools libxrandr py-theano votca-xtp libxrender py-tifffile vpfft libxres py-toml vpic libxscrnsaver py-tomopy vsearch libxshmfence py-toolz vt libxslt py-tornado vtk 
libxsmm py-tqdm vtkh libxstream py-traceback2 vtkm libxt py-traitlets wannier90 libxtrap py-tuiview warpx libxtst py-twisted wcslib libxv py-typing wget libxvmc py-tzlocal wgsim libxxf86dga py-udunits windowswmproto libxxf86misc py-umi-tools wireshark libxxf86vm py-unittest2 workrave libyogrt py-unittest2py3k wt libzip py-urllib3 wx lighttpd py-urwid wxpropgrid likwid py-vcversioner x11perf linkphase3 py-virtualenv xapian-core linux-headers py-virtualenv-clone xauth listres py-virtualenvwrapper xbacklight llvm py-vsc-base xbiff llvm-openmp-ompt py-vsc-install xbitmaps lmdb py-wcsaxes xbraid lmod py-wcwidth xcalc lndir py-webkit-server xcb-demo log4cplus py-weblogo xcb-proto log4cxx py-werkzeug xcb-util loki py-wheel xcb-util-cursor lordec py-widgetsnbextension xcb-util-errors lrslib py-wrapt xcb-util-image lrzip py-xarray xcb-util-keysyms lsof py-xattr xcb-util-renderutil ltrace py-xdot xcb-util-wm lua py-xlrd xcb-util-xrm lua-bitlib py-xlsxwriter xclip lua-jit py-xmlrunner xclipboard lua-lpeg py-xopen xclock lua-luafilesystem py-xpyb xcmiscproto lua-luaposix py-xvfbwrapper xcmsdb lua-mpack py-yamlreader xcompmgr luit py-yapf xconsole lulesh py-yt xcursor-themes lumpy-sv py-ytopt xcursorgen lwgrp py-zmq xdbedizzy lwm2 py-zope-event xditview lz4 py-zope-interface xdm lzma pythia6 xdpyinfo lzo python xdriinfo m4 qbank xedit macsio qbox xerces-c mad-numdiff qhull xeus mafft qmcpack xev magics qmd-progress xextproto magma qorts xeyes makedepend qrupdate xf86bigfontproto mallocmc qt xf86dga man-db qt-creator xf86dgaproto manta qtgraph xf86driproto maq qthreads xf86miscproto mariadb quantum-espresso xf86rushproto masa quinoa xf86vidmodeproto masurca qwt xfd matio r xfindproxy matlab r-a4 xfontsel maven r-a4base xfs maverick r-a4classif xfsinfo mawk r-a4core xfwp mbedtls r-a4preproc xgamma mc r-a4reporting xgc mcl r-abadata xhmm mdtest r-abaenrichment xhost med r-abind xineramaproto meep r-absseq xinit mefit r-acde xinput megahit r-acepack xios memaxes r-acgh xkbcomp meme 
r-acme xkbdata memkind r-ada xkbevd meraculous r-adabag xkbprint mercurial r-ade4 xkbutils mesa r-adegenet xkeyboard-config mesa-glu r-adsplit xkill meshkit r-aer xload meson r-affxparser xlogo mesquite r-affy xlsatoms metabat r-affycomp xlsclients metaphysicl r-affycompatible xlsfonts metis r-affycontam xmag mfem r-affycoretools xman microbiomeutil r-affydata xmessage minced r-affyexpress xmh mindthegap r-affyilm xmlf90 miniaero r-affyio xmlto miniamr r-affypdnn xmodmap miniasm r-affyplm xmore miniconda2 r-affyqcreport xorg-cf-files miniconda3 r-affyrnadegradation xorg-docs minife r-agdex xorg-gtest minighost r-agilp xorg-server minigmg r-agimicrorna xorg-sgml-doctools minimap2 r-aims xphelloworld minimd r-aldex2 xplor-nih miniqmc r-allelicimbalance xplsprinters minisign r-alpine xpr minismac2d r-als xprehashprinterlist minitri r-alsace xprop minivite r-altcdfenvs xproto minixyce r-amap xproxymanagementprotocol minuit r-ampliqueso xqilla mira r-analysispageserver xrandr mirdeep2 r-anaquin xrdb mitofates r-aneufinder xrefresh mitos r-aneufinderdata xrootd mkfontdir r-animation xrx mkfontscale r-annaffy xsbench mlhka r-annotate xscope moab r-annotationdbi xsd modern-wheel r-annotationfilter xsdk mofem-cephas r-annotationforge xsdktrilinos mofem-fracture-module r-annotationhub xset mofem-minimal-surface-equation r-ape xsetmode mofem-users-modules r-argparse xsetpointer molcas r-assertthat xsetroot mono r-backports xsimd mosh r-bamsignals xsm mothur r-base64 xstdcmap motif r-base64enc xtensor motioncor2 r-bbmisc xtensor-python mount-point-attributes r-beanplot xterm mozjs r-bh xtl mpark-variant r-biasedurn xtrans mpc r-bindr xtrap mpe2 r-bindrcpp xts mpest r-biobase xvidtune mpfr r-biocgenerics xvinfo mpibash r-biocinstaller xwd mpiblast r-biocparallel xwininfo mpich r-biocstyle xwud mpifileutils r-biom-utils xxhash mpilander r-biomart xz mpileaks r-biomformat yajl mpip r-biostrings yambo mpir r-biovizbase yaml-cpp mpix-launch-swift r-bit yasm mrbayes r-bit64 yorick 
mrnet r-bitops z3 mrtrix3 r-blob zeromq mscgen r-bookdown zfp msgpack-c r-boot zip mshadow r-brew zlib msmc r-broom zoltan multitail r-bsgenome zsh multiverso r-bumphunter zstd
```

The packages are listed by name in alphabetical order. A pattern to match with no wildcards, `*` or `?`, will be treated as though it started and ended with `*`, so `util` is equivalent to `*util*`. All patterns are matched case-insensitively. You can also add the `-d` flag to search package descriptions in addition to names. Some examples:

All packages whose names contain "sql":

```
$ spack list sql
perl-dbd-mysql   postgresql   py-pymysql   py-sqlalchemy  r-rpostgresql  r-sqldf    sqlitebrowser
perl-dbd-sqlite  py-mysqldb1  py-pysqlite  r-rmysql       r-rsqlite      sqlite
```

All packages whose names or descriptions contain "documentation":

```
$ spack list --search-description documentation
compositeproto     libxfixes    py-docutils                  r-ggplot2   r-stanheaders
damageproto        libxpresent  py-epydoc                    r-quadprog  sowing
double-conversion  man-db       py-markdown                  r-rcpp      texinfo
doxygen            perl-dbfile  py-sphinx                    r-rinside   xorg-docs
gflags             py-alabaster py-sphinxcontrib-websupport  r-roxygen2  xorg-sgml-doctools
```

#### `spack info`[¶](#spack-info)

To get more information on a particular package from `spack list`, use `spack info`. Just supply the name of a package:

```
$ spack info mpich
AutotoolsPackage:   mpich

Description:
    MPICH is a high performance and widely portable implementation of the
    Message Passing Interface (MPI) standard.

Homepage: http://www.mpich.org

Tags:     None

Preferred version:
    3.2.1    http://www.mpich.org/static/downloads/3.2.1/mpich-3.2.1.tar.gz

Safe versions:
    develop  [git] https://github.com/pmodels/mpich.git
    3.2.1    http://www.mpich.org/static/downloads/3.2.1/mpich-3.2.1.tar.gz
    3.2      http://www.mpich.org/static/downloads/3.2/mpich-3.2.tar.gz
    3.1.4    http://www.mpich.org/static/downloads/3.1.4/mpich-3.1.4.tar.gz
    3.1.3    http://www.mpich.org/static/downloads/3.1.3/mpich-3.1.3.tar.gz
    3.1.2    http://www.mpich.org/static/downloads/3.1.2/mpich-3.1.2.tar.gz
    3.1.1    http://www.mpich.org/static/downloads/3.1.1/mpich-3.1.1.tar.gz
    3.1      http://www.mpich.org/static/downloads/3.1/mpich-3.1.tar.gz
    3.0.4    http://www.mpich.org/static/downloads/3.0.4/mpich-3.0.4.tar.gz

Variants:
    Name [Default]  Allowed values  Description

    device [ch3]    ch3, ch4        Abstract Device Interface (ADI)
                                    implementation. The ch4 device is
                                    currently in experimental state
    hydra [on]      True, False     Build the hydra process manager
    netmod [tcp]    tcp, mxm,       Network module. Only single netmod
                    ofi, ucx        builds are supported. For ch3 device
                                    configurations, this presumes the
                                    ch3:nemesis communication channel.
                                    ch3:sock is not supported by this
                                    spack package at this time.
    pmi [on]        True, False     Build with PMI support
    romio [on]      True, False     Enable ROMIO MPI I/O implementation
    verbs [off]     True, False     Build support for OpenFabrics verbs.

Installation Phases:
    autoreconf    configure    build    install

Build Dependencies:
    findutils  libfabric

Link Dependencies:
    libfabric

Run Dependencies:
    None

Virtual Packages:
    mpich@3: provides mpi@:3.0
    mpich@1: provides mpi@:1.3
    mpich provides mpi
```

Most of the information is self-explanatory. The *safe versions* are versions that Spack knows the checksum for, and it will use the checksum to verify that these versions download without errors or viruses. [Dependencies](#sec-specs) and [virtual dependencies](#sec-virtual-dependencies) are described in more detail later.
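The pattern semantics described above (a bare pattern is wrapped in `*` and matched case-insensitively) can be sketched in plain shell. This is a hypothetical illustration, not Spack's actual matching code, and the package names passed to `match` are just a small sample:

```shell
# Hypothetical sketch of `spack list` pattern matching:
# a bare pattern is wrapped in '*' and compared case-insensitively.
lower() { printf '%s' "$1" | tr 'A-Z' 'a-z'; }

match() {
  pattern="*$(lower "$1")*"
  shift
  for name in "$@"; do
    # lowercase the candidate so matching ignores case
    case "$(lower "$name")" in
      $pattern) echo "$name" ;;
    esac
  done
}

match sql perl-dbd-mysql readline SQLite zlib
# prints: perl-dbd-mysql
#         SQLite
```

Note that `SQLite` matches even though the pattern is lowercase, mirroring the case-insensitive behavior described above.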
#### `spack versions`[¶](#spack-versions)

To see *more* available versions of a package, run `spack versions`. For example:

```
$ spack versions libelf
==> Safe versions (already checksummed):
  0.8.13  0.8.12
==> Remote versions (not yet checksummed):
  0.8.11  0.8.10  0.8.9  0.8.8  0.8.7  0.8.6  0.8.5
  0.8.4   0.8.3   0.8.2  0.8.0  0.7.0  0.6.4  0.5.2
```

There are two sections in the output. *Safe versions* are versions for which Spack has a checksum on file. It can verify that these versions are downloaded correctly. In many cases, Spack can also show you what versions are available out on the web—these are *remote versions*. Spack gets this information by scraping it directly from package web pages. Depending on the package and how its releases are organized, Spack may or may not be able to find remote versions.

### Installing and uninstalling[¶](#installing-and-uninstalling)

#### `spack install`[¶](#spack-install)

`spack install` will install any package shown by `spack list`. For example, to install the latest version of the `mpileaks` package, you might type this:

```
$ spack install mpileaks
```

If `mpileaks` depends on other packages, Spack will install the dependencies first. It then fetches the `mpileaks` tarball, expands it, verifies that it was downloaded without errors, builds it, and installs it in its own directory under `$SPACK_ROOT/opt`. You'll see a number of messages from Spack, a lot of build output, and a message that the package is installed:

```
$ spack install mpileaks
==> Installing mpileaks
==> mpich is already installed in ~/spack/opt/linux-debian7-x86_64/gcc@4.4.7/mpich@3.0.4.
==> callpath is already installed in ~/spack/opt/linux-debian7-x86_64/gcc@4.4.7/callpath@1.0.2-5dce4318.
==> adept-utils is already installed in ~/spack/opt/linux-debian7-x86_64/gcc@4.4.7/adept-utils@1.0-5adef8da.
==> Trying to fetch from https://github.com/hpc/mpileaks/releases/download/v1.0/mpileaks-1.0.tar.gz
######################################################################## 100.0%
==> Staging archive: ~/spack/var/spack/stage/mpileaks@1.0%gcc@4.4.7 arch=linux-debian7-x86_64-59f6ad23/mpileaks-1.0.tar.gz
==> Created stage in ~/spack/var/spack/stage/mpileaks@1.0%gcc@4.4.7 arch=linux-debian7-x86_64-59f6ad23.
==> No patches needed for mpileaks.
==> Building mpileaks.

... build output ...

==> Successfully installed mpileaks.
  Fetch: 2.16s.  Build: 9.82s.  Total: 11.98s.
[+] ~/spack/opt/linux-debian7-x86_64/gcc@4.4.7/mpileaks@1.0-59f6ad23
```

The last line, with the `[+]`, indicates where the package is installed.

#### Building a specific version[¶](#building-a-specific-version)

Spack can also build *specific versions* of a package. To do this, just add `@` after the package name, followed by a version:

```
$ spack install mpich@3.0.4
```

Any number of versions of the same package can be installed at once without interfering with each other. This is good for multi-user sites, as installing a version that one user needs will not disrupt existing installations for other users.

In addition to different versions, Spack can customize the compiler, compile-time options (variants), compiler flags, and platform (for cross compiles) of an installation. Spack is unique in that it can also configure the *dependencies* a package is built with. For example, two configurations of the same version of a package, one built with boost 1.39.0 and the other built with boost 1.43.0, can coexist.

This can all be done on the command line using the *spec* syntax. Spack calls the descriptor used to refer to a particular package configuration a **spec**. In the commands above, `mpileaks` and `mpileaks@3.0.4` are both valid *specs*. We'll talk more about how you can use them to customize an installation in [Specs & dependencies](#sec-specs).
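As a toy illustration of the `name@version` form shown above, the shell functions below split a spec string at the `@` separator. These are hypothetical helpers, not Spack's parser; the real spec grammar is far richer (compilers, variants, dependencies, and so on):

```shell
# Toy splitter for the `name@version` spec form (illustration only;
# Spack's actual spec syntax includes much more than this).
spec_name()    { printf '%s\n' "${1%%@*}"; }
spec_version() { case "$1" in *@*) printf '%s\n' "${1#*@}" ;; esac; }

spec_name mpich@3.0.4      # prints: mpich
spec_version mpich@3.0.4   # prints: 3.0.4
spec_version mpileaks      # prints nothing: no version was given
```

A spec with no `@` simply means "any version", which is why `spec_version` stays silent for `mpileaks`.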
#### `spack uninstall`[¶](#spack-uninstall)

To uninstall a package, type `spack uninstall <package>`. This will ask the user for confirmation before completely removing the directory in which the package was installed.

```
$ spack uninstall mpich
```

If there are still installed packages that depend on the package to be uninstalled, Spack will refuse to uninstall it. To uninstall a package and every package that depends on it, you may give the `--dependents` option:

```
$ spack uninstall --dependents mpich
```

will display a list of all the packages that depend on `mpich` and, upon confirmation, will uninstall them in the right order.

A command like

```
$ spack uninstall mpich
```

may be ambiguous if multiple `mpich` configurations are installed. For example, if both `mpich@3.0.2` and `mpich@3.1` are installed, `mpich` could refer to either one. Because it cannot determine which one to uninstall, Spack will ask you either to provide a version number to remove the ambiguity or to use the `--all` option to uninstall all of the matching packages.

You may force-uninstall a package with the `--force` option

```
$ spack uninstall --force mpich
```

but you risk breaking other installed packages. In general, it is safer to remove dependent packages *before* removing their dependencies, or to use the `--dependents` option.

#### Non-Downloadable Tarballs[¶](#non-downloadable-tarballs)

The tarballs for some packages cannot be automatically downloaded by Spack. This could be for a number of reasons:

1. The author requires users to manually accept a license agreement before downloading (`jdk` and `galahad`).
2. The software is proprietary and cannot be downloaded on the open Internet.

To install these packages, one must create a mirror and manually add the tarballs in question to it (see [Mirrors](index.html#mirrors)):

1. Create a directory for the mirror.
   You can create this directory anywhere you like; it does not have to be inside `~/.spack`:

   ```
   $ mkdir ~/.spack/manual_mirror
   ```

2. Register the mirror with Spack by creating `~/.spack/mirrors.yaml`:

   ```
   mirrors:
     manual: file://~/.spack/manual_mirror
   ```

3. Put your tarballs in it. Tarballs should be named `<package>/<package>-<version>.tar.gz`. For example:

   ```
   $ ls -l manual_mirror/galahad
   -rw---. 1 me me 11657206 Jun 21 19:25 galahad-2.60003.tar.gz
   ```

4. Install as usual:

   ```
   $ spack install galahad
   ```

### Seeing installed packages[¶](#seeing-installed-packages)

We know that `spack list` shows you the names of available packages, but how do you figure out which ones are already installed?

#### `spack find`[¶](#spack-find)

`spack find` shows the *specs* of installed packages. A spec is like a name, but it has a version, compiler, architecture, and build options associated with it. In Spack, you can have many installations of the same package with different specs.

Running `spack find` with no arguments lists installed packages:

```
$ spack find
==> 74 installed packages.
-- linux-debian7-x86_64 / gcc@4.4.7 --- ImageMagick@6.8.9-10 libdwarf@20130729 py-dateutil@2.4.0 adept-utils@1.0 libdwarf@20130729 py-ipython@2.3.1 atk@2.14.0 libelf@0.8.12 py-matplotlib@1.4.2 boost@1.55.0 libelf@0.8.13 py-nose@1.3.4 bzip2@1.0.6 libffi@3.1 py-numpy@1.9.1 cairo@1.14.0 libmng@2.0.2 py-pygments@2.0.1 callpath@1.0.2 libpng@1.6.16 py-pyparsing@2.0.3 cmake@3.0.2 libtiff@4.0.3 py-pyside@1.2.2 dbus@1.8.6 libtool@2.4.2 py-pytz@2014.10 dbus@1.9.0 libxcb@1.11 py-setuptools@11.3.1 dyninst@8.1.2 libxml2@2.9.2 py-six@1.9.0 fontconfig@2.11.1 libxml2@2.9.2 python@2.7.8 freetype@2.5.3 llvm@3.0 qhull@1.0 gdk-pixbuf@2.31.2 memaxes@0.5 qt@4.8.6 glib@2.42.1 mesa@8.0.5 qt@5.4.0 graphlib@2.0.0 mpich@3.0.4 readline@6.3 gtkplus@2.24.25 mpileaks@1.0 sqlite@3.8.5 harfbuzz@0.9.37 mrnet@4.1.0 stat@2.1.0 hdf5@1.8.13 ncurses@5.9 tcl@8.6.3 icu@54.1 netcdf@4.3.3 tk@src jpeg@9a openssl@1.0.1h vtk@6.1.0 launchmon@1.0.1 pango@1.36.8 xcb-proto@1.11 lcms@2.6 pixman@0.32.6 xz@5.2.0 libdrm@2.4.33 py-dateutil@2.4.0 zlib@1.2.8 -- linux-debian7-x86_64 / gcc@4.9.2 --- libelf@0.8.10 mpich@3.0.4 ``` Packages are divided into groups according to their architecture and compiler. Within each group, Spack tries to keep the view simple, and only shows the version of installed packages. `spack find` can filter the package list based on the package name, spec, or a number of properties of their installation status. For example, missing dependencies of a spec can be shown with `--missing`, packages which were explicitly installed with `spack install <package>` can be singled out with `--explicit` and those which have been pulled in only as dependencies with `--implicit`. In some cases, there may be different configurations of the *same* version of a package installed. For example, there are two installations of `libdwarf@20130729` above. 
We can look at them in more detail using `spack find --deps`, asking it to show only `libdwarf` packages:

```
$ spack find --deps libdwarf
==> 2 installed packages.
-- linux-debian7-x86_64 / gcc@4.4.7 ---
    libdwarf@20130729-d9b90962
        ^libelf@0.8.12
    libdwarf@20130729-b52fac98
        ^libelf@0.8.13
```

Now we see that the two instances of `libdwarf` depend on *different* versions of `libelf`: 0.8.12 and 0.8.13. This view can become complicated for packages with many dependencies. If you just want to know whether two packages' dependencies differ, you can use `spack find --long`:

```
$ spack find --long libdwarf
==> 2 installed packages.
-- linux-debian7-x86_64 / gcc@4.4.7 ---
libdwarf@20130729-d9b90962  libdwarf@20130729-b52fac98
```

Now the `libdwarf` installs have hashes after their names. These are hashes over all of the dependencies of each package. If the hashes are the same, then the packages have the same dependency configuration.

If you want to know the path where each package is installed, you can use `spack find --paths`:

```
$ spack find --paths
==> 74 installed packages.
-- linux-debian7-x86_64 / gcc@4.4.7 ---
    ImageMagick@6.8.9-10  ~/spack/opt/linux-debian7-x86_64/gcc@4.4.7/ImageMagick@6.8.9-10-4df950dd
    adept-utils@1.0       ~/spack/opt/linux-debian7-x86_64/gcc@4.4.7/adept-utils@1.0-5adef8da
    atk@2.14.0            ~/spack/opt/linux-debian7-x86_64/gcc@4.4.7/atk@2.14.0-3d09ac09
    boost@1.55.0          ~/spack/opt/linux-debian7-x86_64/gcc@4.4.7/boost@1.55.0
    bzip2@1.0.6           ~/spack/opt/linux-debian7-x86_64/gcc@4.4.7/bzip2@1.0.6
    cairo@1.14.0          ~/spack/opt/linux-debian7-x86_64/gcc@4.4.7/cairo@1.14.0-fcc2ab44
    callpath@1.0.2        ~/spack/opt/linux-debian7-x86_64/gcc@4.4.7/callpath@1.0.2-5dce4318
    ...
```

And, finally, you can restrict your search to a particular package by supplying its name:

```
$ spack find --paths libelf
-- linux-debian7-x86_64 / gcc@4.4.7 ---
    libelf@0.8.11  ~/spack/opt/linux-debian7-x86_64/gcc@4.4.7/libelf@0.8.11
    libelf@0.8.12  ~/spack/opt/linux-debian7-x86_64/gcc@4.4.7/libelf@0.8.12
    libelf@0.8.13  ~/spack/opt/linux-debian7-x86_64/gcc@4.4.7/libelf@0.8.13
```

`spack find` actually does a lot more than this. You can use *specs* to query for specific configurations and builds of each package. If you want to find only libelf versions at 0.8.12 or above, you could say:

```
$ spack find libelf@0.8.12:
-- linux-debian7-x86_64 / gcc@4.4.7 ---
libelf@0.8.12  libelf@0.8.13
```

Finding just the versions of libdwarf built with a particular version of libelf would look like this:

```
$ spack find --long libdwarf ^libelf@0.8.12
==> 1 installed packages.
-- linux-debian7-x86_64 / gcc@4.4.7 ---
libdwarf@20130729-d9b90962
```

We can also search for packages that have a certain attribute. For example, `spack find libdwarf +debug` will show only installations of libdwarf with the 'debug' compile-time option enabled.

The full spec syntax is discussed in detail in [Specs & dependencies](#sec-specs).

### Specs & dependencies[¶](#specs-dependencies)

We know that `spack install`, `spack uninstall`, and other commands take a package name with an optional version specifier. In Spack, that descriptor is called a *spec*. Spack uses specs to refer to a particular build configuration (or configurations) of a package. Specs are more than a package name and a version; you can use them to specify the compiler, compiler version, architecture, compile options, and dependency options for a build. In this section, we'll go over the full syntax of specs.
Here is an example of a much longer spec than we've seen thus far:

```
mpileaks @1.2:1.4 %gcc@4.7.5 +debug -qt arch=bgq_os ^callpath @1.1 %gcc@4.7.2
```

If provided to `spack install`, this will install the `mpileaks` library at some version between `1.2` and `1.4` (inclusive), built using `gcc` at version 4.7.5 for the Blue Gene/Q architecture, with debug options enabled, and without Qt support. Additionally, it says to link it with the `callpath` library (which it depends on), and to build callpath with `gcc` 4.7.2. Most specs will not be as complicated as this one, but this is a good example of what is possible with specs.

More formally, a spec consists of the following pieces:

* Package name identifier (`mpileaks` above)
* `@` Optional version specifier (`@1.2:1.4`)
* `%` Optional compiler specifier, with an optional compiler version (`gcc` or `gcc@4.7.3`)
* `+` or `-` or `~` Optional variant specifiers (`+debug`, `-qt`, or `~qt`) for boolean variants
* `name=<value>` Optional variant specifiers that are not restricted to boolean variants
* `name=<value>` Optional compiler flag specifiers. Valid flag names are `cflags`, `cxxflags`, `fflags`, `cppflags`, `ldflags`, and `ldlibs`.
* `target=<value> os=<value>` Optional architecture specifier (`target=haswell os=CNL10`)
* `^` Dependency specs (`^callpath@1.1`)

There are two things to notice here. The first is that specs are recursively defined. That is, each dependency after `^` is a spec itself. The second is that everything is optional *except* for the initial package name identifier. Users can be as vague or as specific as they want about the details of building packages, and this makes Spack good for beginners and experts alike.

To really understand what's going on above, we need to think about how software is structured. An executable or a library (these are generally the artifacts produced by building software) depends on other libraries in order to run.
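Incidentally, the piece-per-sigil list above can be sketched as a toy tokenizer. This is purely illustrative -- it is not Spack's parser, and it ignores quoting, compiler-flag values, and many other details:

```python
# Toy spec tokenizer: split on "^" for dependencies, then classify each
# whitespace-separated token of a sub-spec by its leading sigil.
def parse_spec(text):
    chunks = text.split("^")

    def parse_one(chunk):
        spec = {"name": None, "version": None, "compiler": None, "variants": []}
        for tok in chunk.split():
            if tok.startswith("@"):
                spec["version"] = tok[1:]       # version specifier
            elif tok.startswith("%"):
                spec["compiler"] = tok[1:]      # compiler, with optional version
            elif tok[0] in "+~-" or "=" in tok:
                spec["variants"].append(tok)    # variant / flag / arch specifier
            else:
                spec["name"] = tok              # package name identifier
        return spec

    return parse_one(chunks[0]), [parse_one(c) for c in chunks[1:]]

root, deps = parse_spec(
    "mpileaks @1.2:1.4 %gcc@4.7.5 +debug -qt arch=bgq_os ^callpath @1.1 %gcc@4.7.2")
print(root)     # name mpileaks, version 1.2:1.4, compiler gcc@4.7.5, three variants
print(deps[0])  # name callpath, version 1.1, compiler gcc@4.7.2
```

Note how each dependency after `^` runs through the same function as the root spec -- the recursive definition the text describes.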
We can represent the relationship between a package and its dependencies as a graph. Here is the full dependency graph for `mpileaks`, written as a Graphviz digraph:

```
digraph {
    mpileaks -> mpich
    mpileaks -> callpath -> mpich
    callpath -> dyninst
    dyninst -> libdwarf -> libelf
    dyninst -> libelf
}
```

Each node above is a package and each arrow represents a dependency on some other package. For example, we say that the package `mpileaks` *depends on* `callpath` and `mpich`. `mpileaks` also depends *indirectly* on `dyninst`, `libdwarf`, and `libelf`, in that these libraries are dependencies of `callpath`. To install `mpileaks`, Spack has to build all of these packages. Dependency graphs in Spack have to be acyclic, and the *depends on* relationship is directional, so this is a *directed, acyclic graph* or *DAG*.

The package name identifier in the spec is the root of some dependency DAG, and the DAG itself is implicit. Spack knows the precise dependencies among packages, but users do not need to know the full DAG structure. Each `^` in the full spec refers to some dependency of the root package. Spack will raise an error if you supply a name after `^` that the root does not actually depend on (e.g. `mpileaks ^emacs@23.3`).

Spack further simplifies things by only allowing one configuration of each package within any single build. Above, both `mpileaks` and `callpath` depend on `mpich`, but `mpich` appears only once in the DAG. You cannot build an `mpileaks` version that depends on one version of `mpich` *and* on a `callpath` version that depends on some *other* version of `mpich`. In general, such a configuration would likely behave unexpectedly at runtime, and Spack enforces this to ensure a consistent runtime environment.

The point of specs is to abstract this full DAG from Spack users. If a user does not care about the DAG at all, she can refer to mpileaks by simply writing `mpileaks`.
If she knows that `mpileaks` indirectly uses `dyninst` and she wants a particular version of `dyninst`, then she can refer to `mpileaks ^dyninst@8.1`. Spack will fill in the rest when it parses the spec; the user only needs to know package names and minimal details about their relationship.

When Spack prints out specs, it sorts package names alphabetically to normalize the way they are displayed, but users do not need to worry about this when they write specs. The only restriction on the order of dependencies within a spec is that they appear *after* the root package. For example, these two specs represent exactly the same configuration:

```
mpileaks ^callpath@1.0 ^libelf@0.8.3
mpileaks ^libelf@0.8.3 ^callpath@1.0
```

You can put all the same modifiers on dependency specs that you would put on the root spec. That is, you can specify their versions, compilers, variants, and architectures just like any other spec. Specifiers are associated with the nearest package name to their left. For example, above, `@1.1` and `%gcc@4.7.2` associate with the `callpath` package, while `@1.2:1.4`, `%gcc@4.7.5`, `+debug`, `-qt`, and `target=haswell os=CNL10` all associate with the `mpileaks` package.

In the diagram above, `mpileaks` depends on `mpich` with an unspecified version, but packages can depend on other packages with *constraints* by adding more specifiers. For example, `mpileaks` could depend on `mpich@1.2:` if it can only build with version `1.2` or higher of `mpich`.

Below are more details about the specifiers that you can add to specs.

#### Version specifier[¶](#version-specifier)

A version specifier comes somewhere after a package name and starts with `@`. It can be a single version, e.g. `@1.0`, `@3`, or `@1.2a7`. Or, it can be a range of versions, such as `@1.0:1.5` (all versions between `1.0` and `1.5`, inclusive). Version ranges can be open, e.g. `:3` means any version up to and including `3`. This would include `3.4` and `3.4.2`.
`4.2:` means any version at or above `4.2`. Finally, a version specifier can be a set of arbitrary versions, such as `@1.0,1.5,1.7` (`1.0`, `1.5`, or `1.7`). When you supply such a specifier to `spack install`, it constrains the set of versions that Spack will install.

If the version spec is not provided, or if it is ambiguous, i.e. it could match multiple versions, Spack will choose a version within the spec's constraints according to policies set for the particular Spack installation. Details about how versions are compared and how Spack determines if one version is less than another are discussed in the developer guide.

#### Compiler specifier[¶](#compiler-specifier)

A compiler specifier comes somewhere after a package name and starts with `%`. It tells Spack what compiler(s) a particular package should be built with. After the `%` should come the name of some registered Spack compiler. This might include `gcc` or `intel`, but the specific compilers available depend on the site. You can run `spack compilers` to get a list; more on this below.

The compiler spec can be followed by an optional *compiler version*. A compiler version specifier looks exactly like a package version specifier. Version specifiers will associate with the nearest package name or compiler specifier to their left in the spec.

If the compiler spec is omitted, Spack will choose a default compiler based on site policies.

#### Variants[¶](#variants)

Variants are named options associated with a particular package. They are optional, as each package must provide default values for each variant it makes available.

Variants can be specified using a flexible parameter syntax `name=<value>`. For example, `spack install libelf debug=True` will install libelf built with debug flags. The names of particular variants available for a package depend on what was provided by the package author.
`spack info <package>` will provide information on what build variants are available.

For compatibility with earlier versions, variants which happen to be boolean in nature can be specified by a syntax that represents turning options on and off. For example, in the previous spec we could have supplied `libelf +debug` with the same effect of enabling the debug compile-time option for the libelf package.

Depending on the package, a variant may have any default value. For `libelf` here, `debug` is `False` by default, and we turned it on with `debug=True` or `+debug`. If a variant is `True` by default, you can turn it off by adding either `-name` or `~name` to the spec.

There are two syntaxes here because, depending on context, `~` and `-` may mean different things. In most shells, the following will result in the shell performing home directory substitution:

```
mpileaks ~debug # shell may try to substitute this!
mpileaks~debug  # use this instead
```

If there is a user called `debug`, the `~` will be incorrectly expanded. In this situation, you would want to write `libelf -debug`. However, `-` can be ambiguous when included after a package name without spaces:

```
mpileaks-debug  # wrong!
mpileaks -debug # right
```

Spack allows the `-` character to be part of package names, so the above will be interpreted as a request for the `mpileaks-debug` package, not a request for `mpileaks` built without `debug` options. In this scenario, you should write `mpileaks~debug` to avoid ambiguity.

When Spack normalizes specs, it prints them with no spaces, uses the backwards-compatibility syntax for boolean variants, and uses only `~` for disabled boolean variants. The `-` and spaces on the command line are provided for convenience and legibility.

#### Compiler Flags[¶](#compiler-flags)

Compiler flags are specified using the same syntax as non-boolean variants, but fulfill a different purpose.
While the function of a variant is set by the package, compiler flags are used by the compiler wrappers to inject flags into the compile line of the build. Additionally, compiler flags are inherited by dependencies. `spack install libdwarf cppflags="-g"` will install both libdwarf and libelf with the `-g` flag injected into their compile line.

Notice that the value of the compiler flags must be quoted if it contains any spaces. Any of `cppflags=-O3`, `cppflags="-O3"`, `cppflags='-O3'`, and `cppflags="-O3 -fPIC"` are acceptable, but `cppflags=-O3 -fPIC` is not. Additionally, if the value of the compiler flags is not the last thing on the line, it must be followed by a space. The command `spack install libelf cppflags="-O3"%intel` will be interpreted as an attempt to set `cppflags="-O3%intel"`.

The six compiler flags are injected in the order of implicit make commands in GNU Autotools. If all flags are set, the order is `$cppflags $cflags|$cxxflags $ldflags <command> $ldlibs` for C and C++ and `$fflags $cppflags $ldflags <command> $ldlibs` for Fortran.

#### Compiler environment variables and additional RPATHs[¶](#compiler-environment-variables-and-additional-rpaths)

In the exceptional case that a compiler requires setting special environment variables, like an explicit library load path, these can be set in an extra section in the compiler configuration (the supported environment modification commands are: `set`, `unset`, `append-path`, and `prepend-path`). The user can also specify additional `RPATHs` that the compiler will add to all executables generated by that compiler. This is useful for forcing certain compilers to RPATH their own runtime libraries, so that executables will run without the need to set `LD_LIBRARY_PATH`.

```
compilers:
  - compiler:
      spec: gcc@4.9.3
      paths:
        cc: /opt/gcc/bin/gcc
        c++: /opt/gcc/bin/g++
        f77: /opt/gcc/bin/gfortran
        fc: /opt/gcc/bin/gfortran
      environment:
        unset:
          BAD_VARIABLE: # The colon is required but the value must be empty
        set:
          GOOD_VARIABLE_NUM: 1
          GOOD_VARIABLE_STR: good
        prepend-path:
          PATH: /path/to/binutils
        append-path:
          LD_LIBRARY_PATH: /opt/gcc/lib
      extra_rpaths:
        - /path/to/some/compiler/runtime/directory
        - /path/to/some/other/compiler/runtime/directory
```

Note

The `environment` section is interpreted as an ordered dictionary, which means two things. First, environment modifications are applied in the order they are specified in the configuration file. Second, you cannot express environment modifications that require interleaving different commands, i.e. you cannot set one variable, then prepend-path to another one, and then set a third one.

#### Architecture specifiers[¶](#architecture-specifiers)

The architecture can be specified by using the reserved words `target` and/or `os` (`target=x86-64 os=debian7`). You can also use the triplet form of platform, operating system and processor:

```
$ spack install libelf arch=cray-CNL10-haswell
```

Users on non-Cray systems won't have to worry about specifying the architecture. Spack will autodetect what kind of operating system is on your machine as well as the processor. For more information on how the architecture can be used on Cray machines, see [Spack on Cray](index.html#cray-support).

### Virtual dependencies[¶](#virtual-dependencies)

The dependence graph for `mpileaks` we saw above wasn't *quite* accurate. `mpileaks` uses MPI, which is an interface that has many different implementations. Above, we showed `mpileaks` and `callpath` depending on `mpich`, which is one *particular* implementation of MPI. However, we could build either with another implementation, such as `openmpi` or `mvapich`. Spack represents interfaces like this using *virtual dependencies*.
The real dependency DAG for `mpileaks` looks like this, again as a Graphviz digraph (`mpi` is the virtual node):

```
digraph {
    mpi [color=red]
    mpileaks -> mpi
    mpileaks -> callpath -> mpi
    callpath -> dyninst
    dyninst -> libdwarf -> libelf
    dyninst -> libelf
}
```

Notice that `mpich` has now been replaced with `mpi`. There is no *real* MPI package, but some packages *provide* the MPI interface, and these packages can be substituted in for `mpi` when `mpileaks` is built. You can see what virtual packages a particular package provides by getting info on it:

```
$ spack info mpich
AutotoolsPackage:   mpich

Description:
    MPICH is a high performance and widely portable implementation of the
    Message Passing Interface (MPI) standard.

Homepage: http://www.mpich.org

Tags:  None

Preferred version:
    3.2.1    http://www.mpich.org/static/downloads/3.2.1/mpich-3.2.1.tar.gz

Safe versions:
    develop  [git] https://github.com/pmodels/mpich.git
    3.2.1    http://www.mpich.org/static/downloads/3.2.1/mpich-3.2.1.tar.gz
    3.2      http://www.mpich.org/static/downloads/3.2/mpich-3.2.tar.gz
    3.1.4    http://www.mpich.org/static/downloads/3.1.4/mpich-3.1.4.tar.gz
    3.1.3    http://www.mpich.org/static/downloads/3.1.3/mpich-3.1.3.tar.gz
    3.1.2    http://www.mpich.org/static/downloads/3.1.2/mpich-3.1.2.tar.gz
    3.1.1    http://www.mpich.org/static/downloads/3.1.1/mpich-3.1.1.tar.gz
    3.1      http://www.mpich.org/static/downloads/3.1/mpich-3.1.tar.gz
    3.0.4    http://www.mpich.org/static/downloads/3.0.4/mpich-3.0.4.tar.gz

Variants:
    Name [Default]   Allowed values    Description

    device [ch3]     ch3, ch4          Abstract Device Interface (ADI)
                                       implementation. The ch4 device is
                                       currently in experimental state
    hydra [on]       True, False       Build the hydra process manager
    netmod [tcp]     tcp, mxm, ofi,    Network module. Only single netmod
                     ucx               builds are supported. For ch3 device
                                       configurations, this presumes the
                                       ch3:nemesis communication channel.
                                       ch3:sock is not supported by this
                                       spack package at this time.
    pmi [on]         True, False       Build with PMI support
    romio [on]       True, False       Enable ROMIO MPI I/O implementation
    verbs [off]      True, False       Build support for OpenFabrics verbs.

Installation Phases:
    autoreconf    configure    build    install

Build Dependencies:
    findutils    libfabric

Link Dependencies:
    libfabric

Run Dependencies:
    None

Virtual Packages:
    mpich@3: provides mpi@:3.0
    mpich@1: provides mpi@:1.3
    mpich provides mpi
```

Spack is unique in that its virtual packages can be versioned, just like regular packages. A particular version of a package may provide a particular version of a virtual package. In the output above, `mpich` versions `1` and above provide the `mpi` interface up to version `1.3`, and `mpich` versions `3` and above provide `mpi` up to version `3.0`. A package can *depend on* a particular version of a virtual package, e.g. if an application needs MPI-2 functions, it can depend on `mpi@2:` to indicate that it needs some implementation that provides MPI-2 functions.

#### Constraining virtual packages[¶](#constraining-virtual-packages)

When installing a package that depends on a virtual package, you can opt to specify the particular provider you want to use, or you can let Spack pick. For example, if you just type this:

```
$ spack install mpileaks
```

Then Spack will pick a provider for you according to site policies. If you really want a particular implementation, say `mpich`, then you could run this instead:

```
$ spack install mpileaks ^mpich
```

This forces Spack to use some version of `mpich` for its implementation. As always, you can be even more specific and require a particular `mpich` version:

```
$ spack install mpileaks ^mpich@3
```

The `mpileaks` package in particular only needs MPI-1 commands, so any MPI implementation will do. If another package depends on `mpi@2` and you try to give it an insufficient MPI implementation (e.g., one that provides only `mpi@:1`), then Spack will raise an error.
Likewise, if you try to plug in some package that doesn't provide MPI, Spack will raise an error.

#### Specifying Specs by Hash[¶](#specifying-specs-by-hash)

Complicated specs can become cumbersome to enter on the command line, especially when many of the qualifications are necessary to distinguish between similar installs. To avoid this, when referencing an existing spec, Spack allows you to reference specs by their hash. We previously discussed the spec hash that Spack computes. In place of a spec in any command, substitute `/<hash>` where `<hash>` is any prefix of a spec hash.

For example, let's say that you accidentally installed two different `mvapich2` installations. If you want to uninstall one of them but don't know what the difference is, you can run:

```
$ spack find --long mvapich2
==> 2 installed packages.
-- linux-centos7-x86_64 / gcc@6.3.0 ---
qmt35td    mvapich2@2.2%gcc
er3die3    mvapich2@2.2%gcc
```

You can then uninstall the latter installation using:

```
$ spack uninstall /er3die3
```

Or, if you want to build with a specific installation as a dependency, you can use:

```
$ spack install trilinos ^/er3die3
```

If the given spec hash is sufficiently long as to be unique, Spack will replace the reference with the spec to which it refers. Otherwise, it will prompt for a more qualified hash.

Note that this will not work to reinstall a dependency uninstalled by `spack uninstall --force`.

#### `spack providers`[¶](#spack-providers)

You can see what packages provide a particular virtual package using `spack providers`.
If you wanted to see what packages provide `mpi`, you would just run:

```
$ spack providers mpi
charmpp@6.7.1:  intel-mpi  intel-parallel-studio  mpich  mpich@1:  mpich@3:
mpilander  mvapich2  openmpi  openmpi@1.6.5  openmpi@1.7.5:  openmpi@2.0.0:
spectrum-mpi
```

And if you *only* wanted to see packages that provide MPI-2, you would add a version specifier to the spec:

```
$ spack providers mpi@2
charmpp@6.7.1:  intel-mpi  intel-parallel-studio  mpich  mpich@3:
mpilander  mvapich2  openmpi  openmpi@1.6.5  openmpi@1.7.5:  openmpi@2.0.0:
spectrum-mpi
```

Notice that the package versions that provide insufficient MPI versions are now filtered out (`mpich@1:` no longer appears).

### Extensions & Python support[¶](#extensions-python-support)

Spack's installation model assumes that each package will live in its own install prefix. However, certain packages are typically installed *within* the directory hierarchy of other packages. For example, modules in interpreted languages like [Python](https://www.python.org) are typically installed in the `$prefix/lib/python-2.7/site-packages` directory. Spack has support for this type of installation as well. In Spack, a package that can live inside the prefix of another package is called an *extension*. Suppose you have Python installed like so:

```
$ spack find python
==> 1 installed packages.
-- linux-debian7-x86_64 / gcc@4.4.7 ---
python@2.7.8
```

#### `spack extensions`[¶](#spack-extensions)

You can find extensions for your Python installation like this:

```
$ spack extensions python
==> python@2.7.8%gcc@4.4.7 arch=linux-debian7-x86_64-703c7a96
==> 36 extensions:
geos          py-ipython     py-pexpect    py-pyside            py-sip
py-basemap    py-libxml2     py-pil        py-pytz              py-six
py-biopython  py-mako        py-pmw        py-rpy2              py-sympy
py-cython     py-matplotlib  py-pychecker  py-scientificpython  py-virtualenv
py-dateutil   py-mpi4py      py-pygments   py-scikit-learn      py-epydoc
py-mx         py-pylint      py-scipy      py-gnuplot           py-nose
py-pyparsing  py-setuptools  py-h5py       py-numpy             py-pyqt
py-shiboken

==> 12 installed:
-- linux-debian7-x86_64 / gcc@4.4.7 ---
py-dateutil@2.4.0    py-nose@1.3.4       py-pyside@1.2.2
py-dateutil@2.4.0    py-numpy@1.9.1      py-pytz@2014.10
py-ipython@2.3.1     py-pygments@2.0.1   py-setuptools@11.3.1
py-matplotlib@1.4.2  py-pyparsing@2.0.3  py-six@1.9.0

==> None activated.
```

The extensions are a subset of what's returned by `spack list`, and they are packages like any other. They are installed into their own prefixes, and you can see this with `spack find --paths`:

```
$ spack find --paths py-numpy
==> 1 installed packages.
-- linux-debian7-x86_64 / gcc@4.4.7 ---
    py-numpy@1.9.1  ~/spack/opt/linux-debian7-x86_64/gcc@4.4.7/py-numpy@1.9.1-66733244
```

However, even though this package is installed, you cannot use it directly when you run `python`:

```
$ spack load python
$ python
Python 2.7.8 (default, Feb 17 2015, 01:35:25)
[GCC 4.4.7 20120313 (Red Hat 4.4.7-11)] on linux2
Type "help", "copyright", "credits" or "license" for more information.
>>> import numpy
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
ImportError: No module named numpy
>>>
```

#### Using Extensions[¶](#using-extensions)

There are three ways to get `numpy` working in Python. The first is to use [Using module files via Spack](index.html#shell-support).
You can simply `use` or `load` the module for the extension, and it will be added to the `PYTHONPATH` in your current shell. For tcl modules:

```
$ spack load python
$ spack load py-numpy
```

or, for dotkit:

```
$ spack use python
$ spack use py-numpy
```

Now `import numpy` will succeed for as long as you keep your current session open.

#### Activating Extensions in a View[¶](#activating-extensions-in-a-view)

The second way to use extensions is to create a view, which merges the python installation along with the extensions into a single prefix. See [Filesystem Views](index.html#filesystem-views) for a more in-depth description of views and [spack view](index.html#cmd-spack-view) for usage of the `spack view` command.

#### Activating Extensions Globally[¶](#activating-extensions-globally)

As an alternative to creating a merged prefix with Python and its extensions, and prior to support for views, Spack has provided a means to install an extension into the Spack installation prefix of its extendee. This has typically been useful because extendable packages usually search their own installation path for add-ons by default. Global activations are performed with the `spack activate` command:

#### `spack activate`[¶](#spack-activate)

```
$ spack activate py-numpy
==> Activated extension py-setuptools@11.3.1%gcc@4.4.7 arch=linux-debian7-x86_64-3c74eb69 for python@2.7.8%gcc@4.4.7.
==> Activated extension py-nose@1.3.4%gcc@4.4.7 arch=linux-debian7-x86_64-5f70f816 for python@2.7.8%gcc@4.4.7.
==> Activated extension py-numpy@1.9.1%gcc@4.4.7 arch=linux-debian7-x86_64-66733244 for python@2.7.8%gcc@4.4.7.
```

Several things have happened here. The user requested that `py-numpy` be activated in the `python` installation it was built with. Spack knows that `py-numpy` depends on `py-nose` and `py-setuptools`, so it activated those packages first. Finally, once all dependencies were activated in the `python` installation, `py-numpy` was activated as well.
If we run `spack extensions` again, we now see the three new packages listed as activated:

```
$ spack extensions python
==> python@2.7.8%gcc@4.4.7 arch=linux-debian7-x86_64-703c7a96
==> 36 extensions:
geos          py-ipython     py-pexpect    py-pyside            py-sip
py-basemap    py-libxml2     py-pil        py-pytz              py-six
py-biopython  py-mako        py-pmw        py-rpy2              py-sympy
py-cython     py-matplotlib  py-pychecker  py-scientificpython  py-virtualenv
py-dateutil   py-mpi4py      py-pygments   py-scikit-learn      py-epydoc
py-mx         py-pylint      py-scipy      py-gnuplot           py-nose
py-pyparsing  py-setuptools  py-h5py       py-numpy             py-pyqt
py-shiboken

==> 12 installed:
-- linux-debian7-x86_64 / gcc@4.4.7 ---
py-dateutil@2.4.0    py-nose@1.3.4       py-pyside@1.2.2
py-dateutil@2.4.0    py-numpy@1.9.1      py-pytz@2014.10
py-ipython@2.3.1     py-pygments@2.0.1   py-setuptools@11.3.1
py-matplotlib@1.4.2  py-pyparsing@2.0.3  py-six@1.9.0

==> 3 currently activated:
-- linux-debian7-x86_64 / gcc@4.4.7 ---
py-nose@1.3.4  py-numpy@1.9.1  py-setuptools@11.3.1
```

Now, when a user runs python, `numpy` will be available for import *without* the user having to load it explicitly. `python@2.7.8` now acts like a system Python installation with `numpy` installed inside of it.

Spack accomplishes this by symbolically linking the *entire* prefix of the `py-numpy` package into the prefix of the `python` package. To the python interpreter, it looks like `numpy` is installed in the `site-packages` directory.

The only limitation of global activation is that you can only have a *single* version of an extension activated at a time. This is because multiple versions of the same extension would conflict if symbolically linked into the same prefix. Users who want a different version of a package can still get it by using environment modules or views, but they will have to explicitly load their preferred version.
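The symlinking that activation performs can be sketched in a few lines. This is an illustration of the mechanism, not Spack's implementation: it walks a temporary stand-in for the extension's prefix and mirrors every file into a stand-in extendee prefix as a symlink:

```python
import os
import tempfile

# Fake prefixes standing in for the py-numpy and python install prefixes.
ext_prefix = tempfile.mkdtemp(prefix="py-numpy-")
python_prefix = tempfile.mkdtemp(prefix="python-")
pkg_dir = os.path.join(ext_prefix, "lib", "python2.7", "site-packages")
os.makedirs(pkg_dir)
open(os.path.join(pkg_dir, "numpy_stub.py"), "w").close()

def activate(ext, extendee):
    """Symlink every file under ext into the same relative path under extendee."""
    for dirpath, _dirnames, filenames in os.walk(ext):
        rel = os.path.relpath(dirpath, ext)
        target_dir = os.path.join(extendee, rel)
        os.makedirs(target_dir, exist_ok=True)
        for name in filenames:
            os.symlink(os.path.join(dirpath, name),
                       os.path.join(target_dir, name))

activate(ext_prefix, python_prefix)
linked = os.path.join(python_prefix, "lib", "python2.7",
                      "site-packages", "numpy_stub.py")
print(os.path.islink(linked))  # True
```

Deactivation is then just removing those links -- and this is also why only one version of an extension can be active at a time: a second version's files would collide at the same link paths.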
#### `spack activate --force`[¶](#spack-activate-force)

If, for some reason, you want to activate a package *without* its dependencies, you can use `spack activate --force`:

```
$ spack activate --force py-numpy
==> Activated extension py-numpy@1.9.1%gcc@4.4.7 arch=linux-debian7-x86_64-66733244 for python@2.7.8%gcc@4.4.7.
```

#### `spack deactivate`[¶](#spack-deactivate)

We've seen how activating an extension can be used to set up a default version of a Python module. Obviously, you may want to change that at some point. `spack deactivate` is the command for this. There are several variants:

* `spack deactivate <extension>` will deactivate a single extension. If another activated extension depends on this one, Spack will warn you and exit with an error.
* `spack deactivate --force <extension>` deactivates an extension regardless of packages that depend on it.
* `spack deactivate --all <extension>` deactivates an extension and all of its dependencies. Use `--force` to disregard dependents.
* `spack deactivate --all <extendee>` deactivates *all* activated extensions of a package. For example, to deactivate *all* python extensions, use:

```
$ spack deactivate --all python
```

### Filesystem requirements[¶](#filesystem-requirements)

By default, Spack needs to be run from a filesystem that supports `flock` locking semantics. Nearly all local filesystems and recent versions of NFS support this, but parallel filesystems or NFS volumes may be configured without `flock` support enabled. You can determine how your filesystems are mounted with `mount`. The output for a Lustre filesystem might look like this:

```
$ mount | grep lscratch
mds1-lnet0@o2ib100:/lsd on /p/lscratchd type lustre (rw,nosuid,lazystatfs,flock)
mds2-lnet0@o2ib100:/lse on /p/lscratche type lustre (rw,nosuid,lazystatfs,flock)
```

Note the `flock` option on both Lustre mounts. If you do not see this or a similar option for your filesystem, you have a few options.
First, you can move your Spack installation to a filesystem that supports locking. Second, you could ask your system administrator to enable `flock` for your filesystem. If none of those work, you can disable locking in one of two ways:

> 1. Run Spack with the `-L` or `--disable-locks` option to disable locks on a call-by-call basis.
> 2. Edit [config.yaml](index.html#config-yaml) and set the `locks` option to `false` to always disable locking.

Warning

If you disable locking, concurrent instances of Spack will have no way to avoid stepping on each other. You must ensure that there is only **one** instance of Spack running at a time. Otherwise, Spack may end up with a corrupted database file, or you may not be able to see all installed packages in commands like `spack find`. If you are unfortunate enough to run into this situation, you may be able to fix it by running `spack reindex`.

This issue typically manifests with the error below:

```
$ ./spack find
Traceback (most recent call last):
  File "./spack", line 176, in <module>
    main()
  File "./spack", line 154, in main
    return_val = command(parser, args)
  File "./spack/lib/spack/spack/cmd/find.py", line 170, in find
    specs = set(spack.installed_db.query(**q_args))
  File "./spack/lib/spack/spack/database.py", line 551, in query
    with self.read_transaction():
  File "./spack/lib/spack/spack/database.py", line 598, in __enter__
    if self._enter() and self._acquire_fn:
  File "./spack/lib/spack/spack/database.py", line 608, in _enter
    return self._db.lock.acquire_read(self._timeout)
  File "./spack/lib/spack/llnl/util/lock.py", line 103, in acquire_read
    self._lock(fcntl.LOCK_SH, timeout)   # can raise LockError.
  File "./spack/lib/spack/llnl/util/lock.py", line 64, in _lock
    fcntl.lockf(self._fd, op | fcntl.LOCK_NB)
IOError: [Errno 38] Function not implemented
```

A nicer error message is TBD in future versions of Spack.
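A quick way to check whether the filesystem holding a given directory supports the locking primitives Spack needs is to attempt a lock yourself. Below is a hedged sketch using Python's standard `fcntl` module (Unix-only; the helper name is invented, not a Spack API):

```python
import errno
import fcntl
import os

def supports_locking(path):
    """Try to take and release a shared lock on a scratch file under
    `path`.  Returns False on the 'Function not implemented' error
    shown in the traceback above; re-raises anything unexpected."""
    probe = os.path.join(path, ".flock-probe")
    fd = os.open(probe, os.O_RDWR | os.O_CREAT, 0o600)
    try:
        fcntl.lockf(fd, fcntl.LOCK_SH | fcntl.LOCK_NB)
        fcntl.lockf(fd, fcntl.LOCK_UN)
        return True
    except OSError as exc:
        if exc.errno == errno.ENOSYS:   # Errno 38: locking unsupported
            return False
        raise
    finally:
        os.close(fd)
        os.remove(probe)
```

Running this against a directory on a suspect NFS or Lustre mount tells you directly whether you need `--disable-locks`.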
### Getting Help[¶](#getting-help)

#### `spack help`[¶](#spack-help)

If you don’t find what you need here, the `help` subcommand will print out a list of *all* of spack’s options and subcommands:

```
$ spack help
usage: spack [-hkV] [--color {always,never,auto}] COMMAND ...

A flexible package manager that supports multiple versions,
configurations, platforms, and compilers.

These are common spack commands:

query packages:
  list          list and search available packages
  info          get detailed information on a particular package
  find          list and search installed packages

build packages:
  install       build and install packages
  uninstall     remove installed packages
  spec          show what would be installed, given a spec

environments:
  env           manage virtual environments
  view          produce a single-rooted directory view of packages

modules:
  load          add package to environment using `module load`
  module        manipulate module files
  unload        remove package from environment using `module unload`

create packages:
  create        create a new package file
  edit          open package files in $EDITOR

system:
  arch          print architecture information about this machine
  compilers     list available compilers

optional arguments:
  -h, --help            show this help message and exit
  -k, --insecure        do not check ssl certificates when downloading
  -V, --version         show version number and exit
  --color {always,never,auto}
                        when to colorize output (default: auto)

more help:
  spack help --all       list all commands and options
  spack help <command>   help on a specific command
  spack help --spec      help on the spec syntax
  spack docs             open http://spack.rtfd.io/ in a browser
```

Adding an argument, e.g.
`spack help <subcommand>`, will print out usage information for a particular subcommand: ``` $ spack help install usage: spack install [-hInvy] [--only {package,dependencies}] [-j JOBS] [--overwrite] [--keep-prefix] [--keep-stage] [--dont-restage] [--use-cache | --no-cache] [--show-log-on-error] [--source] [--fake] [--only-concrete] [-f SPEC_YAML_FILE] [--clean | --dirty] [--test {root,all} | --run-tests] [--log-format {cdash,None,junit}] [--log-file LOG_FILE] [--cdash-upload-url CDASH_UPLOAD_URL] ... build and install packages positional arguments: package spec of the package to install optional arguments: -h, --help show this help message and exit --only {package,dependencies} select the mode of installation. the default is to install the package along with all its dependencies. alternatively one can decide to install only the package or only the dependencies -j JOBS, --jobs JOBS explicitly set number of make jobs, default is #cpus. -I, --install-status show install status of packages. packages can be: installed [+], missing and needed by an installed package [-], or not installed (no annotation) --overwrite reinstall an existing spec, even if it has dependents --keep-prefix don't remove the install prefix if installation fails --keep-stage don't remove the build stage if installation succeeds --dont-restage if a partial install is detected, don't delete prior state --use-cache check for pre-built Spack packages in mirrors (default) --no-cache do not check for pre-built Spack packages in mirrors --show-log-on-error print full build log to stderr if build fails --source install source files in prefix -n, --no-checksum do not use checksums to verify downloadeded files (unsafe) -v, --verbose display verbose build output while installing --fake fake install for debug purposes. --only-concrete (with environment) only install already concretized specs -f SPEC_YAML_FILE, --file SPEC_YAML_FILE install from file. 
Read specs to install from .yaml files
  --clean               unset harmful variables in the build environment (default)
  --dirty               preserve user environment in the spack build environment (danger!)
  --test {root,all}     If 'root' is chosen, run package tests during installation
                        for top-level packages (but skip tests for dependencies).
                        if 'all' is chosen, run package tests during installation
                        for all packages. If neither are chosen, don't run tests
                        for any packages.
  --run-tests           run package tests during installation (same as --test=all)
  --log-format {cdash,None,junit}
                        format to be used for log files
  --log-file LOG_FILE   filename for the log file. if not passed a default will be used
  --cdash-upload-url CDASH_UPLOAD_URL
                        CDash URL where reports will be uploaded
  -y, --yes-to-all      assume "yes" is the answer to every confirmation request
```

Alternately, you can use `spack --help` in place of `spack help`, or `spack <subcommand> --help` to get help on a particular subcommand.

Workflows[¶](#workflows)
---

The process of using Spack involves building packages, running binaries from those packages, and developing software that depends on those packages. For example, one might use Spack to build the `netcdf` package, use `spack load` to run the `ncdump` binary, and finally, write a small C program to read/write a particular NetCDF file.

Spack supports a variety of workflows to suit a variety of situations and user preferences; there is no single way to do all these things. This chapter demonstrates different workflows that have been developed, pointing out their pros and cons.

### Definitions[¶](#definitions)

First some basic definitions.

#### Package, Concrete Spec, Installed Package[¶](#package-concrete-spec-installed-package)

In Spack, a package is an abstract recipe to build one piece of software. Spack packages may be used to build, in principle, any version of that software with any set of variants. Examples of packages include `curl` and `zlib`.
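The two definitions that follow — a *concrete spec* as one instantiation of a package, and a *consistent set* as one in which each package is instantiated only one way — can be sketched in a few lines of Python. The field names are illustrative, not Spack's internal data model:

```python
from collections import namedtuple

# One possible realization of a package: the package name pinned to a
# concrete version and compiler (real specs also carry variants, an
# architecture, and dependency hashes).
Spec = namedtuple("Spec", ["package", "version", "compiler"])

def is_consistent(specs):
    """True iff no package appears in the set with two different
    instantiations."""
    chosen = {}
    for spec in specs:
        if chosen.setdefault(spec.package, spec) != spec:
            return False
    return True
```

Under this sketch, a set containing both `zlib@1.2.8` and `zlib@1.2.7` is inconsistent, mirroring the examples given shortly.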
A package may be *instantiated* to produce a concrete spec; one possible realization of a particular package, out of combinatorially many other realizations. For example, here is a concrete spec instantiated from `curl`: ``` $ spack spec curl Input spec --- curl Concretized --- curl@7.60.0%gcc@5.4.0~darwinssl~libssh~libssh2~nghttp2 arch=linux-ubuntu16.04-x86_64 ^openssl@1.0.2o%gcc@5.4.0+systemcerts arch=linux-ubuntu16.04-x86_64 ^perl@5.26.2%gcc@5.4.0+cpanm patches=0eac10ed90aeb0459ad8851f88081d439a4e41978e586ec743069e8b059370ac +shared+threads arch=linux-ubuntu16.04-x86_64 ^gdbm@1.14.1%gcc@5.4.0 arch=linux-ubuntu16.04-x86_64 ^readline@7.0%gcc@5.4.0 arch=linux-ubuntu16.04-x86_64 ^ncurses@6.1%gcc@5.4.0~symlinks~termlib arch=linux-ubuntu16.04-x86_64 ^pkgconf@1.4.2%gcc@5.4.0 arch=linux-ubuntu16.04-x86_64 ^zlib@1.2.11%gcc@5.4.0+optimize+pic+shared arch=linux-ubuntu16.04-x86_64 ``` Spack’s core concretization algorithm generates concrete specs by instantiating packages from its repo, based on a set of “hints”, including user input and the `packages.yaml` file. This algorithm may be accessed at any time with the `spack spec` command. Every time Spack installs a package, that installation corresponds to a concrete spec. Only a vanishingly small fraction of possible concrete specs will be installed at any one Spack site. #### Consistent Sets[¶](#consistent-sets) A set of Spack specs is said to be *consistent* if each package is only instantiated one way within it — that is, if two specs in the set have the same package, then they must also have the same version, variant, compiler, etc. 
For example, the following set is consistent: ``` curl@7.50.1%gcc@5.3.0 arch=linux-SuSE11-x86_64 ^openssl@1.0.2k%gcc@5.3.0 arch=linux-SuSE11-x86_64 ^zlib@1.2.8%gcc@5.3.0 arch=linux-SuSE11-x86_64 zlib@1.2.8%gcc@5.3.0 arch=linux-SuSE11-x86_64 ``` The following set is not consistent: ``` curl@7.50.1%gcc@5.3.0 arch=linux-SuSE11-x86_64 ^openssl@1.0.2k%gcc@5.3.0 arch=linux-SuSE11-x86_64 ^zlib@1.2.8%gcc@5.3.0 arch=linux-SuSE11-x86_64 zlib@1.2.7%gcc@5.3.0 arch=linux-SuSE11-x86_64 ``` The compatibility of a set of installed packages determines what may be done with it. It is always possible to `spack load` any set of installed packages, whether or not they are consistent, and run their binaries from the command line. However, a set of installed packages can only be linked together in one binary if it is consistent. If the user produces a series of `spack spec` or `spack load` commands, in general there is no guarantee of consistency between them. Spack’s concretization procedure guarantees that the results of any *single* `spack spec` call will be consistent. Therefore, the best way to ensure a consistent set of specs is to create a Spack package with dependencies, and then instantiate that package. We will use this technique below. ### Building Packages[¶](#building-packages) Suppose you are tasked with installing a set of software packages on a system in order to support one application – both a core application program, plus software to prepare input and analyze output. The required software might be summed up as a series of `spack install` commands placed in a script. If needed, this script can always be run again in the future. 
For example:

```
#!/bin/sh
spack install modele-utils
spack install emacs
spack install ncview
spack install nco
spack install modele-control
spack install py-numpy
```

In most cases, this script will not correctly install software according to your specific needs: choices need to be made for variants, versions, and virtual dependencies. It *is* possible to specify these choices by extending specs on the command line; however, the same choices must be specified repeatedly. For example, if you wish to use `openmpi` to satisfy the `mpi` dependency, then `^openmpi` will have to appear on *every* `spack install` line that uses MPI. It can get repetitive fast.

Customizing Spack installation options is easier to do in the `~/.spack/packages.yaml` file. In this file, you can specify preferred versions and variants to use for packages. For example:

```
packages:
  python:
    version: [3.5.1]
  modele-utils:
    version: [cmake]
  everytrace:
    version: [develop]
  eigen:
    variants: ~suitesparse
  netcdf:
    variants: +mpi
  all:
    compiler: [gcc@5.3.0]
    providers:
      mpi: [openmpi]
      blas: [openblas]
      lapack: [openblas]
```

This approach will work as long as you are building packages for just one application.

#### Multiple Applications[¶](#multiple-applications)

Suppose instead you’re building multiple inconsistent applications. For example, users want package A to be built with `openmpi` and package B with `mpich` — but still share many other lower-level dependencies. In this case, a single `packages.yaml` file will not work. Plans are to implement *per-project* `packages.yaml` files. In the meantime, one could write shell scripts to switch `packages.yaml` between multiple versions as needed, using symlinks.

#### Combinatorial Sets of Installs[¶](#combinatorial-sets-of-installs)

Suppose that you are now tasked with systematically building many incompatible versions of packages.
For example, you need to build `petsc` 9 times for 3 different MPI implementations on 3 different compilers, in order to support user needs. In this case, you will need to either create 9 different `packages.yaml` files; or more likely, create 9 different `spack install` command lines with the correct options in the spec. Here is a real-life example of this kind of usage: ``` #!/bin/bash compilers=( %gcc %intel %pgi ) mpis=( openmpi+psm~verbs openmpi~psm+verbs mvapich2+psm~mrail mvapich2~psm+mrail mpich+verbs ) for compiler in "${compilers[@]}" do # Serial installs spack install szip $compiler spack install hdf $compiler spack install hdf5 $compiler spack install netcdf $compiler spack install netcdf-fortran $compiler spack install ncview $compiler # Parallel installs for mpi in "${mpis[@]}" do spack install $mpi $compiler spack install hdf5~cxx+mpi $compiler ^$mpi spack install parallel-netcdf $compiler ^$mpi done done ``` ### Running Binaries from Packages[¶](#running-binaries-from-packages) Once Spack packages have been built, the next step is to use them. As with building packages, there are many ways to use them, depending on the use case. #### Find and Run[¶](#find-and-run) The simplest way to run a Spack binary is to find it and run it! In many cases, nothing more is needed because Spack builds binaries with RPATHs. Spack installation directories may be found with `spack location --install-dir` commands. For example: ``` $ spack location --install-dir cmake ~/spack/opt/spack/linux-SuSE11-x86_64/gcc-5.3.0/cmake-3.6.0-7cxrynb6esss6jognj23ak55fgxkwtx7 ``` This gives the root of the Spack package; relevant binaries may be found within it. For example: ``` $ CMAKE=`spack location --install-dir cmake`/bin/cmake ``` Standard UNIX tools can find binaries as well. 
For example:

```
$ find ~/spack/opt -name cmake | grep bin
~/spack/opt/spack/linux-SuSE11-x86_64/gcc-5.3.0/cmake-3.6.0-7cxrynb6esss6jognj23ak55fgxkwtx7/bin/cmake
```

These methods are suitable, for example, for setting up build processes or GUIs that need to know the location of particular tools. However, other more powerful methods are generally preferred for user environments.

#### Spack-Generated Modules[¶](#spack-generated-modules)

Suppose that Spack has been used to install a set of command-line programs, which users now wish to use. One can in principle put a number of `spack load` commands into `.bashrc`, for example, to load a set of Spack-generated modules:

```
spack load modele-utils
spack load emacs
spack load ncview
spack load nco
spack load modele-control
```

Although simple load scripts like this are useful in many cases, they have some drawbacks:

1. The set of modules loaded by them will in general not be consistent. They are a decent way to load commands to be called from command shells. See below for better ways to assemble a consistent set of packages for building application programs.
2. The `spack spec` and `spack install` commands use a sophisticated concretization algorithm that chooses the “best” among several options, taking the `packages.yaml` file into account. The `spack load` and `spack module tcl loads` commands, on the other hand, are not very smart: if the user-supplied spec matches more than one installed package, then `spack module tcl loads` will fail. This may change in the future. For now, the workaround is to be more specific on any `spack module tcl loads` lines that fail.

##### Generated Load Scripts[¶](#generated-load-scripts)

Another problem with `spack load` is that it is slow; a typical user environment could take several seconds to load, and would not be appropriate to put into `.bashrc` directly. It is preferable to use a series of `spack module tcl loads` commands to pre-compute which modules to load.
These can be put in a script that is run whenever installed Spack packages change. For example: ``` #!/bin/sh # # Generate module load commands in ~/env/spackenv cat <<EOF | /bin/sh >$HOME/env/spackenv FIND='spack module tcl loads --prefix linux-SuSE11-x86_64/' \$FIND modele-utils \$FIND emacs \$FIND ncview \$FIND nco \$FIND modele-control EOF ``` The output of this file is written in `~/env/spackenv`: ``` # binutils@2.25%gcc@5.3.0+gold~krellpatch~libiberty arch=linux-SuSE11-x86_64 module load linux-SuSE11-x86_64/binutils-2.25-gcc-5.3.0-6w5d2t4 # python@2.7.12%gcc@5.3.0~tk~ucs4 arch=linux-SuSE11-x86_64 module load linux-SuSE11-x86_64/python-2.7.12-gcc-5.3.0-2azoju2 # ncview@2.1.7%gcc@5.3.0 arch=linux-SuSE11-x86_64 module load linux-SuSE11-x86_64/ncview-2.1.7-gcc-5.3.0-uw3knq2 # nco@4.5.5%gcc@5.3.0 arch=linux-SuSE11-x86_64 module load linux-SuSE11-x86_64/nco-4.5.5-gcc-5.3.0-7aqmimu # modele-control@develop%gcc@5.3.0 arch=linux-SuSE11-x86_64 module load linux-SuSE11-x86_64/modele-control-develop-gcc-5.3.0-7rddsij # zlib@1.2.8%gcc@5.3.0 arch=linux-SuSE11-x86_64 module load linux-SuSE11-x86_64/zlib-1.2.8-gcc-5.3.0-fe5onbi # curl@7.50.1%gcc@5.3.0 arch=linux-SuSE11-x86_64 module load linux-SuSE11-x86_64/curl-7.50.1-gcc-5.3.0-4vlev55 # hdf5@1.10.0-patch1%gcc@5.3.0+cxx~debug+fortran+mpi+shared~szip~threadsafe arch=linux-SuSE11-x86_64 module load linux-SuSE11-x86_64/hdf5-1.10.0-patch1-gcc-5.3.0-pwnsr4w # netcdf@4.4.1%gcc@5.3.0~hdf4+mpi arch=linux-SuSE11-x86_64 module load linux-SuSE11-x86_64/netcdf-4.4.1-gcc-5.3.0-rl5canv # netcdf-fortran@4.4.4%gcc@5.3.0 arch=linux-SuSE11-x86_64 module load linux-SuSE11-x86_64/netcdf-fortran-4.4.4-gcc-5.3.0-stdk2xq # modele-utils@cmake%gcc@5.3.0+aux+diags+ic arch=linux-SuSE11-x86_64 module load linux-SuSE11-x86_64/modele-utils-cmake-gcc-5.3.0-idyjul5 # everytrace@develop%gcc@5.3.0+fortran+mpi arch=linux-SuSE11-x86_64 module load linux-SuSE11-x86_64/everytrace-develop-gcc-5.3.0-p5wmb25 ``` Users may now put `source ~/env/spackenv` into 
`.bashrc`.

Note

Some module systems put a prefix on the names of modules created by Spack. For example, that prefix is `linux-SuSE11-x86_64/` in the above case. If a prefix is not needed, you may omit the `--prefix` flag from `spack module tcl loads`.

##### Transitive Dependencies[¶](#transitive-dependencies)

In the script above, each `spack module tcl loads` command generates a *single* `module load` line. Transitive dependencies do not usually need to be loaded, only modules the user needs in `$PATH`. This is because Spack builds binaries with RPATH. Spack’s RPATH policy has some nice features:

1. Modules for multiple inconsistent applications may be loaded simultaneously. In the above example (Multiple Applications), package A and package B can coexist together in the user’s $PATH, even though they use different MPIs.
2. RPATH eliminates a whole class of strange errors that can happen in non-RPATH binaries when the wrong `LD_LIBRARY_PATH` is loaded.
3. Recursive module systems such as LMod are not necessary.
4. Modules are not needed at all to execute binaries. If a path to a binary is known, it may be executed. For example, the path for a Spack-built compiler can be given to an IDE without requiring the IDE to load that compiler’s module.

Unfortunately, Spack’s RPATH support does not work in all cases. For example:

1. Software comes in many forms — not just compiled ELF binaries, but also as interpreted code in Python, R, JVM bytecode, etc. Those systems almost universally use an environment variable analogous to `LD_LIBRARY_PATH` to dynamically load libraries.
2. Although Spack generally builds binaries with RPATH, it does not currently do so for compiled Python extensions (for example, `py-numpy`). Any libraries that these extensions depend on (`blas` in this case, for example) must be specified in `LD_LIBRARY_PATH`.
3. In some cases, Spack-generated binaries end up without a functional RPATH for no discernible reason.
In cases where RPATH support doesn’t make things “just work,” it can be necessary to load a module’s dependencies as well as the module itself. This is done by adding the `--dependencies` flag to the `spack module tcl loads` command. For example, the following line, added to the script above, would be used to load SciPy, along with Numpy, core Python, BLAS/LAPACK and anything else needed:

```
spack module tcl loads --dependencies py-scipy
```

#### Dummy Packages[¶](#dummy-packages)

As an alternative to a series of `module load` commands, one might consider dummy packages as a way to create a *consistent* set of packages that may be loaded as one unit. The idea here is pretty simple:

1. Create a package (say, `mydummy`) with no URL and no `install()` method, just dependencies.
2. Run `spack install mydummy` to install.

An advantage of this method is the set of packages produced will be consistent. This means that you can reliably build software against it. A disadvantage is the set of packages will be consistent; this means you cannot load up two applications this way if they are not consistent with each other.

#### Filesystem Views[¶](#filesystem-views)

Filesystem views offer an alternative to environment modules, another way to assemble packages in a useful way and load them into a user’s environment.

A filesystem view is a single directory tree that is the union of the directory hierarchies of a number of installed packages; it is similar to the directory hierarchy that might exist under `/usr/local`. The files of the view’s installed packages are brought into the view by symbolic or hard links, referencing the original Spack installation.

When software is built and installed, absolute paths are frequently “baked into” the software, making it non-relocatable. This happens not just in RPATHs, but also in shebangs, configuration files, and assorted other locations.
Therefore, programs run out of a Spack view will typically still look in the original Spack-installed location for shared libraries and other resources. This behavior is not easily changed; in general, there is no way to know where absolute paths might be written into an installed package, and how to relocate it. Therefore, the original Spack tree must be kept in place for a filesystem view to work, even if the view is built with hardlinks.

##### `spack view`[¶](#spack-view)

A filesystem view is created, and packages are linked in, by the `spack view` command’s `symlink` and `hardlink` sub-commands. The `spack view remove` command can be used to unlink some or all of the filesystem view.

The following example creates a filesystem view based on an installed `cmake` package and then removes from the view the files in the `cmake` package while retaining its dependencies.

```
$ spack view --verbose symlink myview cmake@3.5.2
==> Linking package: "ncurses"
==> Linking package: "zlib"
==> Linking package: "openssl"
==> Linking package: "cmake"

$ ls myview/
bin  doc  etc  include  lib  share

$ ls myview/bin/
captoinfo  clear  cpack     ctest    infotocap        openssl  tabs  toe  tset
ccmake     cmake  c_rehash  infocmp  ncurses6-config  reset    tic   tput

$ spack view --verbose --dependencies false rm myview cmake@3.5.2
==> Removing package: "cmake"

$ ls myview/bin/
captoinfo  c_rehash  infotocap        openssl  tabs  toe  tset
clear      infocmp   ncurses6-config  reset    tic   tput
```

Note

If the set of packages being included in a view is inconsistent, then it is possible that two packages will provide the same file. Any conflicts of this type are handled on a first-come-first-served basis, and a warning is printed.

Note

When packages are removed from a view, empty directories are purged.

##### Fine-Grain Control[¶](#fine-grain-control)

The `--exclude` and `--dependencies` option flags allow for fine-grained control over which packages and dependencies do or do not get included in a view.
For example, suppose you are developing the `appsy` package. You wish to build against a view of all `appsy` dependencies, but not `appsy` itself: ``` $ spack view --dependencies yes --exclude appsy symlink /path/to/MYVIEW/ appsy ``` Alternately, you wish to create a view whose purpose is to provide binary executables to end users. You only need to include applications they might want, and not those applications’ dependencies. In this case, you might use: ``` $ spack view --dependencies no symlink /path/to/MYVIEW/ cmake ``` ##### Hybrid Filesystem Views[¶](#hybrid-filesystem-views) Although filesystem views are usually created by Spack, users are free to add to them by other means. For example, imagine a filesystem view, created by Spack, that looks something like: ``` /path/to/MYVIEW/bin/programA -> /path/to/spack/.../bin/programA /path/to/MYVIEW/lib/libA.so -> /path/to/spack/.../lib/libA.so ``` Now, the user may add to this view by non-Spack means; for example, by running a classic install script. For example: ``` $ tar -xf B.tar.gz $ cd B/ $ ./configure --prefix=/path/to/MYVIEW \ --with-A=/path/to/MYVIEW $ make && make install ``` The result is a hybrid view: ``` /path/to/MYVIEW/bin/programA -> /path/to/spack/.../bin/programA /path/to/MYVIEW/bin/programB /path/to/MYVIEW/lib/libA.so -> /path/to/spack/.../lib/libA.so /path/to/MYVIEW/lib/libB.so ``` In this case, real files coexist, interleaved with the “view” symlinks. At any time one can delete `/path/to/MYVIEW` or use `spack view` to manage it surgically. None of this will affect the real Spack install area. #### Global Activations[¶](#global-activations) [spack activate](index.html#cmd-spack-activate) may be used as an alternative to loading Python (and similar systems) packages directly or creating a view. If extensions are globally activated, then `spack load python` will also load all the extensions activated for the given `python`. This reduces the need for users to load a large number of modules. 
However, Spack global activations have two potential drawbacks: 1. Activated packages that involve compiled C extensions may still need their dependencies to be loaded manually. For example, `spack load openblas` might be required to make `py-numpy` work. 2. Global activations “break” a core feature of Spack, which is that multiple versions of a package can co-exist side-by-side. For example, suppose you wish to run a Python package in two different environments but the same basic Python — one with `py-numpy@1.7` and one with `py-numpy@1.8`. Spack extensions will not support this potential debugging use case. #### Discussion: Running Binaries[¶](#discussion-running-binaries) Modules, extension packages and filesystem views are all ways to assemble sets of Spack packages into a useful environment. They are all semantically similar, in that conflicting installed packages cannot simultaneously be loaded, activated or included in a view. With all of these approaches, there is no guarantee that the environment created will be consistent. It is possible, for example, to simultaneously load application A that uses OpenMPI and application B that uses MPICH. Both applications will run just fine in this inconsistent environment because they rely on RPATHs, not the environment, to find their dependencies. In general, environments set up using modules vs. views will work similarly. Both can be used to set up ephemeral or long-lived testing/development environments. Operational differences between the two approaches can make one or the other preferable in certain environments: * Filesystem views do not require environment module infrastructure. Although Spack can install `environment-modules`, users might be hostile to its use. Filesystem views offer a good solution for sysadmins serving users who just “want all the stuff I need in one place” and don’t want to hear about Spack. 
* Although modern build systems will find dependencies wherever they might be, some applications with hand-built make files expect their dependencies to be in one place. One common problem is makefiles that assume that `netcdf` and `netcdf-fortran` are installed in the same tree. Or, one might use an IDE that requires tedious configuration of dependency paths; and it’s easier to automate that administration in a view-building script than in the IDE itself. For all these cases, a view will be preferable to other ways to assemble an environment.
* On systems with I-node quotas, modules might be preferable to views and extension packages.
* Views and activated extensions maintain state that is semantically equivalent to the information in a `spack module tcl loads` script. Administrators might find things easier to maintain without the added “heavyweight” state of a view.

### Developing Software with Spack[¶](#developing-software-with-spack)

For any project, one needs to assemble an environment of that application’s dependencies. You might consider loading a series of modules or creating a filesystem view. This approach, while obvious, has some serious drawbacks:

1. There is no guarantee that an environment created this way will be consistent. Your application could end up with dependency A expecting one version of MPI, and dependency B expecting another. The linker will not be happy…
 2. Suppose you need to debug a package deep within your software DAG. If you build that package with a manual environment, then it becomes difficult to have Spack auto-build things that depend on it. That could be a serious problem, depending on how deep the package in question is in your dependency DAG. 3. At its core, Spack is a sophisticated concretization algorithm that matches up packages with appropriate dependencies and creates a *consistent* environment for the package it’s building. Writing a list of `spack load` commands for your dependencies is at least as hard as writing the same list of `depends_on()` declarations in a Spack package. But it makes no use of Spack concretization and is more error-prone. 4. Spack provides an automated, systematic way not just to find a packages’s dependencies — but also to build other packages on top. Any Spack package can become a dependency for another Spack package, offering a powerful vision of software re-use. If you build your package A outside of Spack, then your ability to use it as a building block for other packages in an automated way is diminished: other packages depending on package A will not be able to use Spack to fulfill that dependency. 5. If you are reading this manual, you probably love Spack. You’re probably going to write a Spack package for your software so prospective users can install it with the least amount of pain. Why should you go to additional work to find dependencies in your development environment? Shouldn’t Spack be able to help you build your software based on the package you’ve already written? In this section, we show how Spack can be used in the software development process to greatest effect, and how development packages can be seamlessly integrated into the Spack ecosystem. We will show how this process works by example, assuming the software you are creating is called `mylib`. 
#### Write the CMake Build[¶](#write-the-cmake-build)

For now, the techniques in this section only work for CMake-based projects, although they could be easily extended to other build systems in the future. We will therefore assume you are using CMake to build your project.

The `CMakeLists.txt` file should be written as normal. A few caveats:

1. Your project should produce binaries with RPATHs. This will ensure that they work the same whether built manually or automatically by Spack. For example:

```
# enable @rpath in the install name for any shared library being built
# note: it is planned that a future version of CMake will enable this by default
set(CMAKE_MACOSX_RPATH 1)

# Always use full RPATH
# http://www.cmake.org/Wiki/CMake_RPATH_handling
# http://www.kitware.com/blog/home/post/510

# use, i.e. don't skip the full RPATH for the build tree
SET(CMAKE_SKIP_BUILD_RPATH FALSE)

# when building, don't use the install RPATH already
# (but later on when installing)
SET(CMAKE_BUILD_WITH_INSTALL_RPATH FALSE)

# add the automatically determined parts of the RPATH
# which point to directories outside the build tree to the install RPATH
SET(CMAKE_INSTALL_RPATH_USE_LINK_PATH TRUE)

# the RPATH to be used when installing, but only if it's not a system directory
LIST(FIND CMAKE_PLATFORM_IMPLICIT_LINK_DIRECTORIES "${CMAKE_INSTALL_PREFIX}/lib" isSystemDir)
IF("${isSystemDir}" STREQUAL "-1")
    SET(CMAKE_INSTALL_RPATH "${CMAKE_INSTALL_PREFIX}/lib")
ENDIF("${isSystemDir}" STREQUAL "-1")
```

2. Spack provides a CMake variable called `SPACK_TRANSITIVE_INCLUDE_PATH`, which contains the `include/` directory for all of your project’s transitive dependencies. It can be useful if your project `#include`s files from package B, which `#include`s files from package C, but your project only lists package B as a dependency. This works in traditional single-tree build environments, in which B and C’s include files live in the same place.
In order to make it work with Spack as well, you must add the following to `CMakeLists.txt`. It will have no effect when building without Spack:

```
# Include all the transitive dependencies determined by Spack.
# If we're not running with Spack, this does nothing...
include_directories($ENV{SPACK_TRANSITIVE_INCLUDE_PATH})
```

Note: This feature is controversial and could break with future versions of GNU ld. The best practice is to make sure anything you `#include` is listed as a dependency in your `CMakeLists.txt` (and Spack package).

#### Write the Spack Package[¶](#write-the-spack-package)

The Spack package also needs to be written, in tandem with setting up the build (for example, CMake). The most important part of this task is declaring dependencies. Here is an example of the Spack package for the `mylib` package (ellipses for brevity):

```
class Mylib(CMakePackage):
    """Misc. reusable utilities used by Myapp."""

    homepage = "https://github.com/citibeth/mylib"
    url = "https://github.com/citibeth/mylib/tarball/123"

    version('0.1.2', '3a6acd70085e25f81b63a7e96c504ef9')
    version('develop', git='https://github.com/citibeth/mylib.git',
            branch='develop')

    variant('everytrace', default=False,
            description='Report errors through Everytrace')
    ...

    extends('python')

    depends_on('eigen')
    depends_on('everytrace', when='+everytrace')
    depends_on('proj', when='+proj')
    ...
    depends_on('cmake', type='build')
    depends_on('doxygen', type='build')

    def cmake_args(self):
        spec = self.spec
        return [
            '-DUSE_EVERYTRACE=%s' % ('YES' if '+everytrace' in spec else 'NO'),
            '-DUSE_PROJ4=%s' % ('YES' if '+proj' in spec else 'NO'),
            ...
            '-DUSE_UDUNITS2=%s' % ('YES' if '+udunits2' in spec else 'NO'),
            '-DUSE_GTEST=%s' % ('YES' if '+googletest' in spec else 'NO')]
```

This is a standard Spack package that can be used to install `mylib` in a production environment. The list of dependencies in the Spack package will generally be a repeat of the list of CMake dependencies.
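The `cmake_args()` pattern above repeats the same YES/NO translation for each variant. The repetition can be factored into a small helper; the sketch below is hypothetical (not part of Spack's API) and uses a plain string in place of a real `Spec` object, which supports the same `'+variant' in spec` test.

```python
def variant_flags(spec, mapping):
    # Translate each boolean variant into a -D<OPTION>=YES/NO CMake argument,
    # mirroring the "'+variant' in spec" tests used in cmake_args() above.
    return ['-D%s=%s' % (opt, 'YES' if '+' + var in spec else 'NO')
            for var, opt in mapping]

# e.g. variant_flags('+everytrace~proj',
#                    [('everytrace', 'USE_EVERYTRACE'), ('proj', 'USE_PROJ4')])
# yields ['-DUSE_EVERYTRACE=YES', '-DUSE_PROJ4=NO']
```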
This package also has some features that allow it to be used for development:

1. It subclasses `CMakePackage` instead of `Package`. This eliminates the need to write an `install()` method, which is defined in the superclass. Instead, one just needs to write the `cmake_args()` method, which should return the arguments needed for the `cmake` command (beyond the standard CMake arguments, which Spack will include already). These arguments are typically used to turn features on/off in the build.
2. It specifies a non-checksummed version `develop`. Running `spack install mylib@develop` will install the latest version off the develop branch. This method of download is useful for the developer of a project while it is in active development; however, it should only be used by developers who control and trust the repository in question!
3. The `url`, `url_for_version()` and `homepage` attributes are not used in development. Don't worry if you don't have any, or if they are behind a firewall.

#### Build with Spack[¶](#build-with-spack)

Now that you have a Spack package, you can use Spack to find its dependencies automatically. For example:

```
$ cd mylib
$ spack setup mylib@local
```

The result will be a file `spconfig.py` in the top-level `mylib/` directory. It is a short script that calls CMake with the dependencies and options determined by Spack, similar to what happens in `spack install`, but now written out in script form. From a developer's point of view, you can think of `spconfig.py` as a stand-in for the `cmake` command.

Note: You can invent any "version" you like for the `spack setup` command.

Note: Although `spack setup` does not build your package, it does create and install a module file, and mark in the database that your package has been installed. This can lead to errors, of course, if you don't subsequently install your package. Also
you will need to `spack uninstall` before you run `spack setup` again.

You can now build your project as usual with CMake:

```
$ mkdir build; cd build
$ ../spconfig.py ..   # Instead of cmake ..
$ make
$ make install
```

Once your `make install` command is complete, your package will be installed, just as if you'd run `spack install`. Except you can now edit, re-build and re-install as often as needed, without checking into Git or downloading tarballs.

Note: The build you get this way will be *almost* the same as the build from `spack install`. The only difference is, you will not be using Spack's compiler wrappers. This difference has not caused problems in our experience, as long as your project sets RPATHs as shown above. You DO use RPATHs, right?

#### Build Other Software[¶](#build-other-software)

Now that you've built `mylib` with Spack, you might want to build another package that depends on it, for example `myapp`. This is accomplished easily enough:

```
$ spack install myapp ^mylib@local
```

Note that auto-built software has now been installed *on top of* manually-built software, without breaking Spack's "web." This property is useful if you need to debug a package deep in the dependency hierarchy of your application. It is a *big* advantage of using `spack setup` to build your package's environment.

If you feel your software is stable, you might wish to install it with `spack install` and skip the source directory. You can just use, for example:

```
$ spack install mylib@develop
```

#### Release Your Software[¶](#release-your-software)

You are now ready to release your software as a tarball with a numbered version, and a Spack package that can build it. If you're hosted on GitHub, this process will be a bit easier.

1. Put tag(s) on the version(s) in your GitHub repo you want to be release versions. For example, a tag `v0.1.0` for version 0.1.0.
2. Set the `url` in your `package.py` to download a tarball for the appropriate version.
GitHub will give you a tarball for any commit in the repo, if you tickle it the right way. For example:

```
url = 'https://github.com/citibeth/mylib/tarball/v0.1.2'
```

3. Use Spack to determine your version's hash, and cut'n'paste it into your `package.py`:

```
$ spack checksum mylib 0.1.2
==> Found 1 versions of mylib
  0.1.2      https://github.com/citibeth/mylib/tarball/v0.1.2

How many would you like to checksum? (default is 5, q to abort)
==> Downloading...
==> Trying to fetch from https://github.com/citibeth/mylib/tarball/v0.1.2
######################################################################## 100.0%
==> Checksummed new versions of mylib:
    version('0.1.2', '3a6acd70085e25f81b63a7e96c504ef9')
```

4. You should now be able to install released version 0.1.2 of your package with:

```
$ spack install mylib@0.1.2
```

5. There is no need to remove the `develop` version from your package. Spack concretization will always prefer a numbered version to a non-numeric version; users will only get `develop` if they ask for it.

#### Distribute Your Software[¶](#distribute-your-software)

Once you've released your software, other people will want to build it, and you will need to tell them how. In the past, that has meant a few paragraphs of prose explaining which dependencies to install. But now that you use Spack, those instructions are written in executable Python code, and Spack is the best way to install your software with its many dependencies:

1. First, you will want to fork Spack's `develop` branch. Your aim is to provide a stable version of Spack that you KNOW will install your software. If you make changes to Spack in the process, you will want to submit pull requests to Spack core.
2. Add your software's `package.py` to that fork. You should submit a pull request for this as well, unless you don't want the public to know about your software.
3. Prepare instructions that read approximately as follows:
   1. Download Spack from your forked repo.
   2.
Install Spack; see [Getting Started](index.html#getting-started). 3. Set up an appropriate `packages.yaml` file. You should tell your users to include in this file whatever versions/variants are needed to make your software work correctly (assuming those are not already in your `packages.yaml`). 4. Run `spack install mylib`. 5. Run this script to generate the `module load` commands or filesystem view needed to use this software. 4. Be aware that your users might encounter unexpected bootstrapping issues on their machines, especially if they are running on older systems. The [Getting Started](index.html#getting-started) section should cover this, but there could always be issues. #### Other Build Systems[¶](#other-build-systems) `spack setup` currently only supports CMake-based builds, in packages that subclass `CMakePackage`. The intent is that this mechanism should support a wider range of build systems; for example, GNU Autotools. Someone well-versed in Autotools is needed to develop this patch and test it out. Python Distutils is another popular build system that should get `spack setup` support. For non-compiled languages like Python, `spack diy` may be used. Even better is to put the source directory directly in the user’s `PYTHONPATH`. Then, edits in source files are immediately available to run without any install process at all! #### Conclusion[¶](#conclusion) The `spack setup` development workflow provides better automation, flexibility and safety than workflows relying on environment modules or filesystem views. However, it has some drawbacks: 1. It currently works only with projects that use the CMake build system. Support for other build systems is not hard to build, but will require a small amount of effort for each build system to be supported. It might not work well with some IDEs. 2. It only works with packages that sub-class `StagedPackage`. Currently, most Spack packages do not. 
Converting them is not hard, but it must be done on a package-by-package basis.
3. It requires that users are comfortable with Spack, as they integrate Spack explicitly in their workflow. Not all users are willing to do this.

### Using Spack on Travis-CI[¶](#using-spack-on-travis-ci)

Spack can be deployed as a provider for userland software in [Travis-CI](https://travis-ci.org). A starting point for a `.travis.yml` file can look as follows. It uses [caching](https://docs.travis-ci.com/user/caching/) for already built environments, so make sure to clean the Travis cache if you run into problems. The main points implemented below:

1. Travis is detected as having up to 34 cores available, but only 2 are actually allocated for the user. We limit the parallelism of the Spack builds in the config. (The Travis yaml parser is a bit buggy on the echo command.)
2. Builds over 10 minutes need to be prefixed with `travis_wait`. Alternatively, generate output once with `spack install -v`.
3. Travis builds are non-interactive. This prevents using bash aliases and functions for modules. We fix that by sourcing `/etc/profile` first (or running everything in a subshell with `bash -l -c '...'`).

```
language: cpp
sudo: false
dist: trusty

cache:
  apt: true
  directories:
    - $HOME/.cache

addons:
  apt:
    sources:
      - ubuntu-toolchain-r-test
    packages:
      - g++-4.9
      - environment-modules

env:
  global:
    - SPACK_ROOT: $HOME/.cache/spack
    - PATH: $PATH:$HOME/.cache/spack/bin

before_install:
  - export CXX=g++-4.9
  - export CC=gcc-4.9
  - export FC=gfortran-4.9
  - export CXXFLAGS="-std=c++11"

install:
  - if ! which spack >/dev/null; then
      mkdir -p $SPACK_ROOT &&
      git clone --depth 50 https://github.com/spack/spack.git $SPACK_ROOT &&
      echo -e "config:""\n  build_jobs:"" 2" > $SPACK_ROOT/etc/spack/config.yaml;
    fi
  - travis_wait spack install cmake@3.7.2~openssl~ncurses
  - travis_wait spack install boost@1.62.0~graph~iostream~locale~log~wave
  - spack clean -a
  - source /etc/profile &&
    source $SPACK_ROOT/share/spack/setup-env.sh
  - spack load cmake
  - spack load boost

script:
  - mkdir -p $HOME/build
  - cd $HOME/build
  - cmake $TRAVIS_BUILD_DIR
  - make -j 2
  - make test
```

### Using Spack to Create Docker Images[¶](#using-spack-to-create-docker-images)

Spack can be the ideal tool to set up images for Docker (and Singularity). An example `Dockerfile` is given below, downloading the latest Spack version. The following functionality is prepared:

1. Base image: the example starts from a minimal Ubuntu.
2. Installing as root: Docker images are usually set up as root. Since some autotools scripts might complain about this being unsafe, we set `FORCE_UNSAFE_CONFIGURE=1` to avoid configure errors.
3. Pre-install the Spack dependencies, including modules from the packages. This avoids needing to build those from scratch via `spack bootstrap`. Package installs are followed by a clean-up of the system package index, to avoid outdated information and to save space.
4. Install Spack in `/usr/local`. Add `setup-env.sh` to profile scripts, so commands in *login* shells can use the whole Spack functionality, including modules.
5. Install an example package (`tar`). As with the system package managers above, `spack install` commands should be concatenated with a `&& spack clean -a` in order to keep image sizes small.
6. Add a startup hook to an *interactive login shell* so Spack modules will be usable.

In order to build and run the image, execute:

```
docker build -t spack .
docker run -it spack
```

```
FROM ubuntu:16.04
MAINTAINER <NAME> <<EMAIL>>

# general environment for docker
ENV DEBIAN_FRONTEND=noninteractive \
    SPACK_ROOT=/usr/local \
    FORCE_UNSAFE_CONFIGURE=1

# install minimal spack dependencies
RUN apt-get update \
    && apt-get install -y --no-install-recommends \
        autoconf \
        build-essential \
        ca-certificates \
        coreutils \
        curl \
        environment-modules \
        git \
        python \
        unzip \
        vim \
    && rm -rf /var/lib/apt/lists/*

# load spack environment on login
RUN echo "source $SPACK_ROOT/share/spack/setup-env.sh" \
    > /etc/profile.d/spack.sh

# spack settings
# note: if you wish to change default settings, add files alongside
#       the Dockerfile with your desired settings. Then uncomment this line
#COPY packages.yaml modules.yaml $SPACK_ROOT/etc/spack/

# install spack
RUN curl -s -L https://api.github.com/repos/spack/spack/tarball \
    | tar xzC $SPACK_ROOT --strip 1
# note: at this point one could also run ``spack bootstrap`` to avoid
#       parts of the long apt-get install list above

# install software
RUN spack install tar \
    && spack clean -a

# need the modules already during image build?
#RUN /bin/bash -l -c ' \
#        spack load tar \
#        && which tar'

# image run hook: the -l will make sure /etc/profile environments are loaded
CMD /bin/bash -l
```

#### Best Practices[¶](#best-practices)

##### MPI[¶](#mpi)

Due to the dependency on Fortran for OpenMPI, which is the Spack default implementation, consider adding `gfortran` to the `apt-get install` list. Recent versions of OpenMPI will require you to pass `--allow-run-as-root` to your `mpirun` calls if started as root user inside Docker.

For execution on HPC clusters, it can be helpful to import the Docker image into Singularity in order to start a program with an *external* MPI. Otherwise, also add `openssh-server` to the `apt-get install` list.

##### CUDA[¶](#cuda)

Starting from CUDA 9.0, Nvidia provides minimal CUDA images based on Ubuntu. Please see [their instructions](https://hub.docker.com/r/nvidia/cuda/).
Avoid double-installing CUDA by adding, e.g.

```
packages:
  cuda:
    paths:
      cuda@9.0.176%gcc@5.4.0 arch=linux-ubuntu16-x86_64: /usr/local/cuda
    buildable: False
```

to your `packages.yaml`. Then `COPY` that file into the image as in the example above.

Users will either need `nvidia-docker` or e.g. Singularity to *execute* device kernels.

##### Singularity[¶](#singularity)

Importing the image created above into [Singularity](http://singularity.lbl.gov/) and running it works like a charm. Just use the [docker bootstrapping mechanism](http://singularity.lbl.gov/quickstart#bootstrap-recipes):

```
Bootstrap: docker
From: registry/user/image:tag

%runscript
exec /bin/bash -l
```

##### Docker for Development[¶](#docker-for-development)

For examples of how we use Docker in development, see [Docker for Developers](index.html#docker-for-developers).

##### Docker on Windows and OSX[¶](#docker-on-windows-and-osx)

On Mac OS and Windows, Docker runs on a hypervisor that is not allocated much memory by default, and some Spack packages may fail to build due to lack of memory. To work around this issue, consider configuring your Docker installation to use more of your host memory. In some cases, you can also ease the memory pressure on parallel builds by limiting the parallelism in your `config.yaml`:

```
config:
  build_jobs: 2
```

### Upstream Bug Fixes[¶](#upstream-bug-fixes)

It is not uncommon to discover a bug in an upstream project while trying to build with Spack. Typically, the bug is in a package that serves as a dependency of something else. This section describes the procedure to work around and ultimately resolve these bugs, while not delaying the Spack user's main goal.

#### Buggy New Version[¶](#buggy-new-version)

Sometimes, the old version of a package works fine, but a new version is buggy. For example, it was once found that [Adios did not build with hdf5@1.10](https://github.com/spack/spack/issues/1683).
If the old version of `hdf5` will work with `adios`, the suggested procedure is:

1. Revert `adios` to the old version of `hdf5`. Put in its `adios/package.py`:

```
# Adios does not build with HDF5 1.10
# See: https://github.com/spack/spack/issues/1683
depends_on('hdf5@:1.9')
```

2. Determine whether the problem is with `hdf5` or `adios`, and report the problem to the appropriate upstream project. In this case, the problem was with `adios`.

3. Once a new version of `adios` comes out with the bugfix, modify `adios/package.py` to reflect it:

```
# Adios up to v1.10.0 does not build with HDF5 1.10
# See: https://github.com/spack/spack/issues/1683
depends_on('hdf5@:1.9', when='@:1.10.0')
depends_on('hdf5', when='@1.10.1:')
```

#### No Version Works[¶](#no-version-works)

Sometimes, *no* existing versions of a dependency work for a build. This typically happens when developing a new project: only then does the developer notice that existing versions of a dependency are all buggy, or the non-buggy versions are all missing a critical feature.

In the long run, the upstream project will hopefully fix the bug and release a new version. But that could take a while, even if a bugfix has already been pushed to the project's repository. In the meantime, the Spack user needs things to work.

The solution is to create an unofficial Spack release of the project, as soon as the bug is fixed in *some* repository. A study of the [Git history](https://github.com/citibeth/spack/commits/efischer/develop/var/spack/repos/builtin/packages/py-proj/package.py) of `py-proj/package.py` is instructive here:

1. On [April 1](https://github.com/citibeth/spack/commit/44a1d6a96706affe6ef0a11c3a780b91d21d105a), an initial bugfix was identified for the PyProj project and a pull request submitted to PyProj. Because the upstream authors had not yet fixed the bug, the `py-proj` Spack package downloads from a forked repository, set up by the package's author.
A non-numeric version number is used to make it easy to upgrade the package without recomputing checksums; however, this is an untrusted download method and should not be distributed. The package author has now become, temporarily, a maintainer of the upstream project:

```
# We need the benefits of this PR
# https://github.com/jswhit/pyproj/pull/54
version('citibeth-latlong2',
        git='https://github.com/citibeth/pyproj.git',
        branch='latlong2')
```

2. By May 14, the upstream project had accepted a pull request with the required bugfix. At this point, the forked repository was deleted. However, the upstream project still had not released a new version with the bugfix. Therefore, a Spack-only release was created by specifying the desired hash in the main project repository. The version number `@1.9.5.1.1` was chosen for this "release" because it's a descendant of the officially released version `@1.9.5.1`. This is a trusted download method, and can be released to the Spack community:

```
# This is not a tagged release of pyproj.
# The changes in this "version" fix some bugs, especially with Python3 use.
version('1.9.5.1.1', 'd035e4bc704d136db79b43ab371b27d2',
        url='https://www.github.com/jswhit/pyproj/tarball/0be612cc9f972e38b50a90c946a9b353e2ab140f')
```

Note: It would have been simpler to use Spack's Git download method, which is also a trusted download in this case:

```
# This is not a tagged release of pyproj.
# The changes in this "version" fix some bugs, especially with Python3 use.
version('1.9.5.1.1',
        git='https://github.com/jswhit/pyproj.git',
        commit='0be612cc9f972e38b50a90c946a9b353e2ab140f')
```

Note: In this case, the upstream project fixed the bug in its repository in a relatively timely manner. If that had not been the case, the numbered version in this step could have been released from the forked repository.

3. The author of the Spack package has now become an unofficial release engineer for the upstream project.
Depending on the situation, it may be advisable to put `preferred=True` on the latest *officially released* version. 4. As of August 31, the upstream project still had not made a new release with the bugfix. In the meantime, Spack-built `py-proj` provides the bugfix needed by packages depending on it. As long as this works, there is no particular need for the upstream project to make a new official release. 5. If the upstream project releases a new official version with the bugfix, then the unofficial `version()` line should be removed from the Spack package. #### Patches[¶](#patches) Spack’s source patching mechanism provides another way to fix bugs in upstream projects. This has advantages and disadvantages compared to the procedures above. Advantages: > 1. It can fix bugs in existing released versions, and (probably) > future releases as well. > 2. It is lightweight, does not require a new fork to be set up. Disadvantages: > 1. It is harder to develop and debug a patch, vs. a branch in a > repository. The user loses the automation provided by version > control systems. > 2. Although patches of a few lines work OK, large patch files can be > hard to create and maintain. Tutorial: Spack 101[¶](#tutorial-spack-101) --- This is a full-day introduction to Spack with lectures and live demos. It was presented as a tutorial at [Supercomputing 2018](http://sc18.supercomputing.org). You can use these materials to teach a course on Spack at your own site, or you can just skip ahead and read the live demo scripts to see how Spack is used in practice. Slides [Download Slides](http://spack.io/slides/Spack-SC18-Tutorial.pdf). **Full citation:** <NAME>, <NAME>, <NAME>, <NAME>, <NAME>, <NAME>, and <NAME>. [Managing HPC Software Complexity with Spack](https://sc18.supercomputing.org/presentation/?id=tut165&sess=sess252). Tutorial presented at Supercomputing 2018. November 12, 2018, Dallas, TX, USA. Live Demos These scripts will take you step-by-step through basic Spack tasks. 
They correspond to sections in the slides above.

> 1. [Basic Installation Tutorial](index.html#basics-tutorial)
> 2. [Configuration Tutorial](index.html#configs-tutorial)
> 3. [Package Creation Tutorial](index.html#packaging-tutorial)
> 4. [Environments, spack.yaml, and spack.lock](index.html#environments-tutorial)
> 5. [Module Files](index.html#modules-tutorial)
> 6. [Spack Package Build Systems](index.html#build-systems-tutorial)
> 7. [Advanced Topics in Packaging](index.html#advanced-packaging-tutorial)

Full contents:

### Basic Installation Tutorial[¶](#basic-installation-tutorial)

This tutorial will guide you through the process of installing software using Spack. We will first cover the `spack install` command, focusing on the power of the spec syntax and the flexibility it gives to users. We will also cover the `spack find` command for viewing installed packages, and the `spack uninstall` command. Finally, we will touch on how Spack manages compilers, especially as it relates to using Spack-built compilers within Spack. We will include full output from all of the commands demonstrated, although we will frequently call attention to only small portions of that output (or merely to the fact that it succeeded). The provided output is all from an AWS instance running Ubuntu 16.04.

#### Installing Spack[¶](#installing-spack)

Spack works out of the box. Simply clone Spack and get going. We will clone Spack and immediately check out the most recent release, v0.12.

```
$ git clone https://github.com/spack/spack
Cloning into 'spack'...
remote: Enumerating objects: 68, done.
remote: Counting objects: 100% (68/68), done.
remote: Compressing objects: 100% (56/56), done.
remote: Total 135389 (delta 40), reused 16 (delta 9), pack-reused 135321
Receiving objects: 100% (135389/135389), 47.31 MiB | 1.01 MiB/s, done.
Resolving deltas: 100% (64414/64414), done.
Checking connectivity... done.
$ cd spack
$ git checkout releases/v0.12
Branch releases/v0.12 set up to track remote branch releases/v0.12 from origin.
Switched to a new branch 'releases/v0.12'
```

Next, add Spack to your path. Spack has some nice command-line integration tools, so instead of simply appending to your `PATH` variable, source the Spack setup script:

```
$ . share/spack/setup-env.sh
```

You're good to go!

#### What is in Spack?[¶](#what-is-in-spack)

The `spack list` command shows available packages.

```
$ spack list
==> 2907 packages.
abinit    libgpuarray    py-espresso    r-mlrmbo
abyss     libgridxc      py-espressopp  r-mmwrweek
accfft    libgtextutils  py-et-xmlfile  r-mnormt
...
```

The `spack list` command can also take a query string. Spack automatically adds wildcards to both ends of the string. For example, we can view all available Python packages.

```
$ spack list py-
==> 479 packages.
lumpy-sv                  py-funcsigs     py-numpydoc       py-utililib
perl-file-copy-recursive  py-functools32  py-olefile        py-pywavelets
py-3to2                   py-future       py-ont-fast5-api  py-pyyaml
...
```

#### Installing Packages[¶](#installing-packages)

Installing a package with Spack is very simple. To install a piece of software, simply type `spack install <package_name>`.

```
$ spack install zlib
==> Installing zlib
==> Searching for binary cache of zlib
==> Warning: No Spack mirrors are currently configured
==> No binary for zlib found: installing from source
==> Fetching http://zlib.net/fossils/zlib-1.2.11.tar.gz
######################################################################## 100.0%
==> Staging archive: /home/spack1/spack/var/spack/stage/zlib-1.2.11-5nus6knzumx4ik2yl44jxtgtsl7d54xb/zlib-1.2.11.tar.gz
==> Created stage in /home/spack1/spack/var/spack/stage/zlib-1.2.11-5nus6knzumx4ik2yl44jxtgtsl7d54xb
==> No patches needed for zlib
==> Building zlib [Package]
==> Executing phase: 'install'
==> Successfully installed zlib
  Fetch: 3.27s.  Build: 2.18s.  Total: 5.44s.
[+] /home/spack1/spack/opt/spack/linux-ubuntu16.04-x86_64/gcc-5.4.0/zlib-1.2.11-5nus6knzumx4ik2yl44jxtgtsl7d54xb ``` Spack can install software either from source or from a binary cache. Packages in the binary cache are signed with GPG for security. For the tutorial we have prepared a binary cache so you don’t have to wait on slow compilation from source. To be able to install from the binary cache, we will need to configure Spack with the location of the binary cache and trust the GPG key that the binary cache was signed with. ``` $ spack mirror add tutorial /mirror $ spack gpg trust /mirror/public.key gpg: keybox '/home/spack1/spack/opt/spack/gpg/pubring.kbx' created gpg: /home/spack1/spack/opt/spack/gpg/trustdb.gpg: trustdb created gpg: key 3B7C69B2: public key "sc-tutorial (GPG created for Spack) <<EMAIL>>" imported gpg: Total number processed: 1 gpg: imported: 1 ``` You’ll learn more about configuring Spack later in the tutorial, but for now you will be able to install the rest of the packages in the tutorial from a binary cache using the same `spack install` command. By default this will install the binary cached version if it exists and fall back on installing from source. Spack’s spec syntax is the interface by which we can request specific configurations of the package. The `%` sigil is used to specify compilers. 
``` $ spack install zlib %clang ==> Installing zlib ==> Searching for binary cache of zlib ==> Finding buildcaches in /mirror/build_cache ==> Fetching file:///mirror/build_cache/linux-ubuntu16.04-x86_64-gcc-7.2.0-texinfo-6.5-cuqnfgfhhmudqp5f7upmld6ax7pratzw.spec.yaml ######################################################################## 100.0% ==> Fetching file:///mirror/build_cache/linux-ubuntu16.04-x86_64-gcc-4.7-zlib-1.2.11-bq2wtdxakpjytk2tjr7qu23i4py2fi2r.spec.yaml ######################################################################## 100.0% ==> Fetching file:///mirror/build_cache/linux-ubuntu16.04-x86_64-gcc-5.4.0-dyninst-9.3.2-bu6s2jzievsjkwtcnrtimc5b625j5omf.spec.yaml ######################################################################## 100.0% ==> Fetching file:///mirror/build_cache/linux-ubuntu16.04-x86_64-gcc-7.2.0-openmpi-3.1.3-do5xfer2whhk7gc26atgs3ozr3ljbvs4.spec.yaml ... ==> Installing zlib from binary cache ==> Fetching file:///mirror/build_cache/linux-ubuntu16.04-x86_64/clang-3.8.0-2ubuntu4/zlib-1.2.11/linux-ubuntu16.04-x86_64-clang-3.8.0-2ubuntu4-zlib-1.2.11-4pt75q7qq6lygf3hgnona4lyc2uwedul.spack ######################################################################## 100.0% gpg: Signature made Sat Nov 10 05:08:01 2018 UTC using RSA key ID 3B7C69B2 gpg: Good signature from "sc-tutorial (GPG created for Spack) <<EMAIL>>" [unknown] gpg: WARNING: This key is not certified with a trusted signature! gpg: There is no indication that the signature belongs to the owner. Primary key fingerprint: <KEY> ==> Successfully installed zlib from binary cache [+] /home/spack1/spack/opt/spack/linux-ubuntu16.04-x86_64/clang-3.8.0-2ubuntu4/zlib-1.2.11-4pt75q7qq6lygf3hgnona4lyc2uwedul ``` Note that this installation is located separately from the previous one. We will discuss this in more detail later, but this is part of what allows Spack to support arbitrarily versioned software. You can check for particular versions before requesting them. 
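The separate installation locations come from embedding each spec's hash in its install prefix. A toy sketch of such a layout rule (illustrative only, not Spack's exact scheme or API):

```python
import hashlib

def install_dir(root, arch, compiler, name, version):
    # Build a per-spec prefix shaped like the paths in the output above:
    #   <root>/<arch>/<compiler>/<name>-<version>-<hash>
    # The hash folds in the whole spec description, so different
    # configurations of the same package land in different directories.
    spec = "%s@%s %%%s arch=%s" % (name, version, compiler, arch)
    digest = hashlib.sha1(spec.encode()).hexdigest()[:32]
    return "%s/%s/%s/%s-%s-%s" % (root, arch, compiler, name, version, digest)
```

Under such a rule, `zlib` built with `clang` and `zlib` built with `gcc` receive distinct prefixes, which is why both installs above can coexist.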
We will use the `spack versions` command to see the available versions, and then install a different version of `zlib`. ``` $ spack versions zlib ==> Safe versions (already checksummed): 1.2.11 1.2.8 1.2.3 ==> Remote versions (not yet checksummed): 1.2.10 1.2.7 1.2.5.1 1.2.4.2 1.2.3.7 ... ``` The `@` sigil is used to specify versions, both of packages and of compilers. ``` $ spack install zlib@1.2.8 ==> Installing zlib ==> Searching for binary cache of zlib ==> Finding buildcaches in /mirror/build_cache ==> Installing zlib from binary cache ==> Fetching file:///mirror/build_cache/linux-ubuntu16.04-x86_64/gcc-5.4.0/zlib-1.2.8/linux-ubuntu16.04-x86_64-gcc-5.4.0-zlib-1.2.8-bkyl5bhuep6fmhuxzkmhqy25qefjcvzc.spack ######################################################################## 100.0% gpg: Signature made Sat Nov 10 05:18:30 2018 UTC using RSA key ID 3B7C69B2 gpg: Good signature from "sc-tutorial (GPG created for Spack) <<EMAIL>>" [unknown] gpg: WARNING: This key is not certified with a trusted signature! gpg: There is no indication that the signature belongs to the owner. Primary key fingerprint: <KEY> ==> Successfully installed zlib from binary cache [+] /home/spack1/spack/opt/spack/linux-ubuntu16.04-x86_64/gcc-5.4.0/zlib-1.2.8-bkyl5bhuep6fmhuxzkmhqy25qefjcvzc $ spack install zlib %gcc@4.7 ==> Installing zlib ==> Searching for binary cache of zlib ==> Finding buildcaches in /mirror/build_cache ==> Installing zlib from binary cache ==> Fetching file:///mirror/build_cache/linux-ubuntu16.04-x86_64/gcc-4.7/zlib-1.2.11/linux-ubuntu16.04-x86_64-gcc-4.7-zlib-1.2.11-bq2wtdxakpjytk2tjr7qu23i4py2fi2r.spack ######################################################################## 100.0% gpg: Signature made Sat Nov 10 04:55:30 2018 UTC using RSA key ID 3B7C69B2 gpg: Good signature from "sc-tutorial (GPG created for Spack) <<EMAIL>>" [unknown] gpg: WARNING: This key is not certified with a trusted signature! gpg: There is no indication that the signature belongs to the owner. 
Primary key fingerprint: <KEY> ==> Successfully installed zlib from binary cache [+] /home/spack1/spack/opt/spack/linux-ubuntu16.04-x86_64/gcc-4.7/zlib-1.2.11-bq2wtdxakpjytk2tjr7qu23i4py2fi2r ``` The spec syntax also includes compiler flags. Spack accepts `cppflags`, `cflags`, `cxxflags`, `fflags`, `ldflags`, and `ldlibs` parameters. The values of these fields must be quoted on the command line if they include spaces. These values are injected into the compile line automatically by the Spack compiler wrappers. ``` $ spack install zlib @1.2.8 cppflags=-O3 ==> Installing zlib ==> Searching for binary cache of zlib ==> Finding buildcaches in /mirror/build_cache ==> Installing zlib from binary cache ==> Fetching file:///mirror/build_cache/linux-ubuntu16.04-x86_64/gcc-5.4.0/zlib-1.2.8/linux-ubuntu16.04-x86_64-gcc-5.4.0-zlib-1.2.8-64mns5mvdacqvlashkf7v6lqrxixhmxu.spack ######################################################################## 100.0% gpg: Signature made Sat Nov 10 05:31:54 2018 UTC using RSA key ID 3B7C69B2 gpg: Good signature from "sc-tutorial (GPG created for Spack) <<EMAIL>>" [unknown] gpg: WARNING: This key is not certified with a trusted signature! gpg: There is no indication that the signature belongs to the owner. Primary key fingerprint: <KEY> ==> Successfully installed zlib from binary cache [+] /home/spack1/spack/opt/spack/linux-ubuntu16.04-x86_64/gcc-5.4.0/zlib-1.2.8-64mns5mvdacqvlashkf7v6lqrxixhmxu ``` The `spack find` command is used to query installed packages. Note that some packages appear identical with the default output. The `-l` flag shows the hash of each package, and the `-f` flag shows any non-empty compiler flags of those packages. ``` $ spack find ==> 5 installed packages. -- linux-ubuntu16.04-x86_64 / clang@3.8.0-2ubuntu4 --- zlib@1.2.11 -- linux-ubuntu16.04-x86_64 / gcc@4.7 --- zlib@1.2.11 -- linux-ubuntu16.04-x86_64 / gcc@5.4.0 --- zlib@1.2.8 zlib@1.2.8 zlib@1.2.11 $ spack find -lf ==> 5 installed packages. 
-- linux-ubuntu16.04-x86_64 / clang@3.8.0-2ubuntu4 --- 4pt75q7 zlib@1.2.11%clang -- linux-ubuntu16.04-x86_64 / gcc@4.7 --- bq2wtdx zlib@1.2.11%gcc -- linux-ubuntu16.04-x86_64 / gcc@5.4.0 --- bkyl5bh zlib@1.2.8%gcc 64mns5m zlib@1.2.8%gcc cppflags="-O3" 5nus6kn zlib@1.2.11%gcc ``` Spack generates a hash for each spec. This hash is a function of the full provenance of the package, so any change to the spec affects the hash. Spack uses this value to compare specs and to generate unique installation directories for every combinatorial version. As we move into more complicated packages with software dependencies, we can see that Spack reuses existing packages to satisfy a dependency only when the existing package’s hash matches the desired spec. ``` $ spack install tcl ==> zlib is already installed in /home/spack1/spack/opt/spack/linux-ubuntu16.04-x86_64/gcc-5.4.0/zlib-1.2.11-5nus6knzumx4ik2yl44jxtgtsl7d54xb ==> Installing tcl ==> Searching for binary cache of tcl ==> Finding buildcaches in /mirror/build_cache ==> Installing tcl from binary cache ==> Fetching file:///mirror/build_cache/linux-ubuntu16.04-x86_64/gcc-5.4.0/tcl-8.6.8/linux-ubuntu16.04-x86_64-gcc-5.4.0-tcl-8.6.8-qhwyccywhx2i6s7ob2gvjrjtj3rnfuqt.spack ######################################################################## 100.0% gpg: Signature made Sat Nov 10 05:07:15 2018 UTC using RSA key ID 3B7C69B2 gpg: Good signature from "sc-tutorial (GPG created for Spack) <<EMAIL>>" [unknown] gpg: WARNING: This key is not certified with a trusted signature! gpg: There is no indication that the signature belongs to the owner. Primary key fingerprint: <KEY> ==> Successfully installed tcl from binary cache [+] /home/spack1/spack/opt/spack/linux-ubuntu16.04-x86_64/gcc-5.4.0/tcl-8.6.8-qhwyccywhx2i6s7ob2gvjrjtj3rnfuqt ``` Dependencies can be explicitly requested using the `^` sigil. Note that the spec syntax is recursive. 
Anything we could specify about the top-level package, we can also specify about a dependency using `^`. ``` $ spack install tcl ^zlib @1.2.8 %clang ==> Installing zlib ==> Searching for binary cache of zlib ==> Finding buildcaches in /mirror/build_cache ==> Installing zlib from binary cache ==> Fetching file:///mirror/build_cache/linux-ubuntu16.04-x86_64/clang-3.8.0-2ubuntu4/zlib-1.2.8/linux-ubuntu16.04-x86_64-clang-3.8.0-2ubuntu4-zlib-1.2.8-i426yu3o6lyau5fv5ljwsajfkqxj5rl5.spack ######################################################################## 100.0% gpg: Signature made Sat Nov 10 05:09:01 2018 UTC using RSA key ID 3B7C69B2 gpg: Good signature from "sc-tutorial (GPG created for Spack) <<EMAIL>3<EMAIL>>" [unknown] gpg: WARNING: This key is not certified with a trusted signature! gpg: There is no indication that the signature belongs to the owner. Primary key fingerprint: <KEY> ==> Successfully installed zlib from binary cache [+] /home/spack1/spack/opt/spack/linux-ubuntu16.04-x86_64/clang-3.8.0-2ubuntu4/zlib-1.2.8-i426yu3o6lyau5fv5ljwsajfkqxj5rl5 ==> Installing tcl ==> Searching for binary cache of tcl ==> Installing tcl from binary cache ==> Fetching file:///mirror/build_cache/linux-ubuntu16.04-x86_64/clang-3.8.0-2ubuntu4/tcl-8.6.8/linux-ubuntu16.04-x86_64-clang-3.8.0-2ubuntu4-tcl-8.6.8-6wc66etr7y6hgibp2derrdkf763exwvc.spack ######################################################################## 100.0% gpg: Signature made Sat Nov 10 05:10:21 2018 UTC using RSA key ID 3B7C69B2 gpg: Good signature from "sc-tutorial (GPG created for Spack) <<EMAIL>>" [unknown] gpg: WARNING: This key is not certified with a trusted signature! gpg: There is no indication that the signature belongs to the owner. 
Primary key fingerprint: <KEY> ==> Successfully installed tcl from binary cache [+] /home/spack1/spack/opt/spack/linux-ubuntu16.04-x86_64/clang-3.8.0-2ubuntu4/tcl-8.6.8-6wc66etr7y6hgibp2derrdkf763exwvc ``` Packages can also be referred to from the command line by their package hash. Using the `spack find -lf` command earlier we saw that the hash of our optimized installation of zlib (`cppflags="-O3"`) began with `64mns5m`. We can now explicitly build with that package without typing the entire spec, by using the `/` sigil to refer to it by hash. As with other tools like git, you do not need to specify an *entire* hash on the command line. You can specify just enough digits to identify a hash uniquely. If a hash prefix is ambiguous (i.e., two or more installed packages share the prefix) then spack will report an error. ``` $ spack install tcl ^/64mn ==> zlib is already installed in /home/spack1/spack/opt/spack/linux-ubuntu16.04-x86_64/gcc-5.4.0/zlib-1.2.8-64mns5mvdacqvlashkf7v6lqrxixhmxu ==> Installing tcl ==> Searching for binary cache of tcl ==> Finding buildcaches in /mirror/build_cache ==> Installing tcl from binary cache ==> Fetching file:///mirror/build_cache/linux-ubuntu16.04-x86_64/gcc-5.4.0/tcl-8.6.8/linux-ubuntu16.04-x86_64-gcc-5.4.0-tcl-8.6.8-am4pbatrtga3etyusg2akmsvrswwxno2.spack ######################################################################## 100.0% gpg: Signature made Sat Nov 10 05:11:53 2018 UTC using RSA key ID 3B7C69B2 gpg: Good signature from "sc-tutorial (GPG created for Spack) <<EMAIL>>" [unknown] gpg: WARNING: This key is not certified with a trusted signature! gpg: There is no indication that the signature belongs to the owner. Primary key fingerprint: <KEY> ==> Successfully installed tcl from binary cache [+] /home/spack1/spack/opt/spack/linux-ubuntu16.04-x86_64/gcc-5.4.0/tcl-8.6.8-am4pbatrtga3etyusg2akmsvrswwxno2 ``` The `spack find` command can also take a `-d` flag, which can show dependency information. 
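The unique-prefix lookup described above works much like abbreviated commit hashes in git. Purely as an illustration (this is not Spack's actual implementation), the rule "enough digits to be unique, error on ambiguity" can be sketched as:

```python
# Illustrative sketch (not Spack's real code): resolve a hash prefix to a
# unique installed spec hash, raising an error when the prefix is ambiguous.

def resolve_prefix(prefix, installed_hashes):
    """Return the single hash starting with `prefix`, or raise ValueError."""
    matches = [h for h in installed_hashes if h.startswith(prefix)]
    if not matches:
        raise ValueError(f"no installed spec matches hash prefix {prefix!r}")
    if len(matches) > 1:
        raise ValueError(f"hash prefix {prefix!r} is ambiguous: {matches}")
    return matches[0]

# Example hashes taken from the `spack find -lf` output above.
hashes = ["64mns5mvdacq", "6wc66etr7y6h", "bkyl5bhuep6f"]
print(resolve_prefix("64mn", hashes))  # unique -> 64mns5mvdacq
# resolve_prefix("6", hashes) would raise: two hashes share the prefix "6".
```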
Note that each package has a top-level entry, even if it also appears as a dependency. ``` $ spack find -ldf ==> 9 installed packages -- linux-ubuntu16.04-x86_64 / clang@3.8.0-2ubuntu4 --- 6wc66et tcl@8.6.8%clang i426yu3 ^zlib@1.2.8%clang i426yu3 zlib@1.2.8%clang 4pt75q7 zlib@1.2.11%clang -- linux-ubuntu16.04-x86_64 / gcc@4.7 --- bq2wtdx zlib@1.2.11%gcc -- linux-ubuntu16.04-x86_64 / gcc@5.4.0 --- am4pbat tcl@8.6.8%gcc 64mns5m ^zlib@1.2.8%gcc cppflags="-O3" qhwyccy tcl@8.6.8%gcc 5nus6kn ^zlib@1.2.11%gcc bkyl5bh zlib@1.2.8%gcc 64mns5m zlib@1.2.8%gcc cppflags="-O3" 5nus6kn zlib@1.2.11%gcc ``` Let’s move on to slightly more complicated packages. `HDF5` is a good example of a more complicated package, with an MPI dependency. If we install it “out of the box,” it will build with `openmpi`. ``` $ spack install hdf5 ==> Installing libsigsegv ==> Searching for binary cache of libsigsegv ==> Finding buildcaches in /mirror/build_cache ==> Installing libsigsegv from binary cache ==> Fetching file:///mirror/build_cache/linux-ubuntu16.04-x86_64/gcc-5.4.0/libsigsegv-2.11/linux-ubuntu16.04-x86_64-gcc-5.4.0-libsigsegv-2.11-fypapcprssrj3nstp6njprskeyynsgaz.spack ######################################################################## 100.0% gpg: Signature made Sat Nov 10 05:08:01 2018 UTC using RSA key ID 3B7C69B2 gpg: Good signature from "sc-tutorial (GPG created for Spack) <<EMAIL>>" [unknown] gpg: WARNING: This key is not certified with a trusted signature! gpg: There is no indication that the signature belongs to the owner. 
Primary key fingerprint: <KEY> ==> Successfully installed libsigsegv from binary cache [+] /home/spack1/spack/opt/spack/linux-ubuntu16.04-x86_64/gcc-5.4.0/libsigsegv-2.11-fypapcprssrj3nstp6njprskeyynsgaz ==> Installing m4 ==> Searching for binary cache of m4 ==> Installing m4 from binary cache ==> Fetching file:///mirror/build_cache/linux-ubuntu16.04-x86_64/gcc-5.4.0/m4-1.4.18/linux-ubuntu16.04-x86_64-gcc-5.4.0-m4-1.4.18-suf5jtcfehivwfesrc5hjy72r4nukyel.spack ######################################################################## 100.0% gpg: Signature made Sat Nov 10 05:24:11 2018 UTC using RSA key ID 3B7C69B2 gpg: Good signature from "sc-tutorial (GPG created for Spack) <<EMAIL>>" [unknown] gpg: WARNING: This key is not certified with a trusted signature! gpg: There is no indication that the signature belongs to the owner. Primary key fingerprint: <KEY> ==> Successfully installed m4 from binary cache [+] /home/spack1/spack/opt/spack/linux-ubuntu16.04-x86_64/gcc-5.4.0/m4-1.4.18-suf5jtcfehivwfesrc5hjy72r4nukyel ==> Installing libtool ==> Searching for binary cache of libtool ==> Installing libtool from binary cache ==> Fetching file:///mirror/build_cache/linux-ubuntu16.04-x86_64/gcc-5.4.0/libtool-2.4.6/linux-ubuntu16.04-x86_64-gcc-5.4.0-libtool-2.4.6-o2pfwjf44353ajgr42xqtvzyvqsazkgu.spack ######################################################################## 100.0% gpg: Signature made Sat Nov 10 05:12:47 2018 UTC using RSA key ID 3B7C69B2 gpg: Good signature from "sc-tutorial (GPG created for Spack) <<EMAIL>>" [unknown] gpg: WARNING: This key is not certified with a trusted signature! gpg: There is no indication that the signature belongs to the owner. 
Primary key fingerprint: 95C7 1787 7AC0 0FFD AA8F D6E9 9CFA 4A45 3B7C 69B2 ==> Successfully installed libtool from binary cache [+] /home/spack1/spack/opt/spack/linux-ubuntu16.04-x86_64/gcc-5.4.0/libtool-2.4.6-o2pfwjf44353ajgr42xqtvzyvqsazkgu ==> Installing pkgconf ==> Searching for binary cache of pkgconf ==> Installing pkgconf from binary cache ==> Fetching file:///mirror/build_cache/linux-ubuntu16.04-x86_64/gcc-5.4.0/pkgconf-1.4.2/linux-ubuntu16.04-x86_64-gcc-5.4.0-pkgconf-1.4.2-fovrh7alpft646n6mhis5mml6k6e5f4v.spack ######################################################################## 100.0% gpg: Signature made Sat Nov 10 05:00:47 2018 UTC using RSA key ID 3B7C69B2 gpg: Good signature from "sc-tutorial (GPG created for Spack) <<EMAIL>>" [unknown] gpg: WARNING: This key is not certified with a trusted signature! gpg: There is no indication that the signature belongs to the owner. Primary key fingerprint: <KEY> ==> Successfully installed pkgconf from binary cache [+] /home/spack1/spack/opt/spack/linux-ubuntu16.04-x86_64/gcc-5.4.0/pkgconf-1.4.2-fovrh7alpft646n6mhis5mml6k6e5f4v ==> Installing util-macros ==> Searching for binary cache of util-macros ==> Installing util-macros from binary cache ==> Fetching file:///mirror/build_cache/linux-ubuntu16.04-x86_64/gcc-5.4.0/util-macros-1.19.1/linux-ubuntu16.04-x86_64-gcc-5.4.0-util-macros-1.19.1-milz7fmttmptcic2qdk5cnel7ll5sybr.spack ######################################################################## 100.0% gpg: Signature made Sat Nov 10 05:31:54 2018 UTC using RSA key ID 3B7C69B2 gpg: Good signature from "sc-tutorial (GPG created for Spack) <<EMAIL>>" [unknown] gpg: WARNING: This key is not certified with a trusted signature! gpg: There is no indication that the signature belongs to the owner. 
Primary key fingerprint: <KEY> ==> Successfully installed util-macros from binary cache [+] /home/spack1/spack/opt/spack/linux-ubuntu16.04-x86_64/gcc-5.4.0/util-macros-1.19.1-milz7fmttmptcic2qdk5cnel7ll5sybr ==> Installing libpciaccess ==> Searching for binary cache of libpciaccess ==> Installing libpciaccess from binary cache ==> Fetching file:///mirror/build_cache/linux-ubuntu16.04-x86_64/gcc-5.4.0/libpciaccess-0.13.5/linux-ubuntu16.04-x86_64-gcc-5.4.0-libpciaccess-0.13.5-5urc6tcjae26fbbd2wyfohoszhgxtbmc.spack ######################################################################## 100.0% gpg: Signature made Sat Nov 10 05:09:34 2018 UTC using RSA key ID 3B7C69B2 gpg: Good signature from "sc-tutorial (GPG created for Spack) <<EMAIL>>" [unknown] gpg: WARNING: This key is not certified with a trusted signature! gpg: There is no indication that the signature belongs to the owner. Primary key fingerprint: <KEY> ==> Successfully installed libpciaccess from binary cache [+] /home/spack1/spack/opt/spack/linux-ubuntu16.04-x86_64/gcc-5.4.0/libpciaccess-0.13.5-5urc6tcjae26fbbd2wyfohoszhgxtbmc ==> Installing xz ==> Searching for binary cache of xz ==> Installing xz from binary cache ==> Fetching file:///mirror/build_cache/linux-ubuntu16.04-x86_64/gcc-5.4.0/xz-5.2.4/linux-ubuntu16.04-x86_64-gcc-5.4.0-xz-5.2.4-teneqii2xv5u6zl5r6qi3pwurc6pmypz.spack ######################################################################## 100.0% gpg: Signature made Sat Nov 10 05:05:03 2018 UTC using RSA key ID 3B7C69B2 gpg: Good signature from "sc-tutorial (GPG created for Spack) <<EMAIL>>" [unknown] gpg: WARNING: This key is not certified with a trusted signature! gpg: There is no indication that the signature belongs to the owner. 
Primary key fingerprint: <KEY> ==> Successfully installed xz from binary cache [+] /home/spack1/spack/opt/spack/linux-ubuntu16.04-x86_64/gcc-5.4.0/xz-5.2.4-teneqii2xv5u6zl5r6qi3pwurc6pmypz ==> zlib is already installed in /home/spack1/spack/opt/spack/linux-ubuntu16.04-x86_64/gcc-5.4.0/zlib-1.2.11-5nus6knzumx4ik2yl44jxtgtsl7d54xb ==> Installing libxml2 ==> Searching for binary cache of libxml2 ==> Installing libxml2 from binary cache ==> Fetching file:///mirror/build_cache/linux-ubuntu16.04-x86_64/gcc-5.4.0/libxml2-2.9.8/linux-ubuntu16.04-x86_64-gcc-5.4.0-libxml2-2.9.8-wpexsphdmfayxqxd4up5vgwuqgu5woo7.spack ######################################################################## 100.0% gpg: Signature made Sat Nov 10 04:56:04 2018 UTC using RSA key ID 3B7C69B2 gpg: Good signature from "sc-tutorial (GPG created for Spack) <<EMAIL>>" [unknown] gpg: WARNING: This key is not certified with a trusted signature! gpg: There is no indication that the signature belongs to the owner. Primary key fingerprint: <KEY> ==> Successfully installed libxml2 from binary cache [+] /home/spack1/spack/opt/spack/linux-ubuntu16.04-x86_64/gcc-5.4.0/libxml2-2.9.8-wpexsphdmfayxqxd4up5vgwuqgu5woo7 ==> Installing ncurses ==> Searching for binary cache of ncurses ==> Installing ncurses from binary cache ==> Fetching file:///mirror/build_cache/linux-ubuntu16.04-x86_64/gcc-5.4.0/ncurses-6.1/linux-ubuntu16.04-x86_64-gcc-5.4.0-ncurses-6.1-3o765ourmesfrji6yeclb4wb5w54aqbh.spack ######################################################################## 100.0% gpg: Signature made Sat Nov 10 05:04:49 2018 UTC using RSA key ID 3B7C69B2 gpg: Good signature from "sc-tutorial (GPG created for Spack) <<EMAIL>>" [unknown] gpg: WARNING: This key is not certified with a trusted signature! gpg: There is no indication that the signature belongs to the owner. 
Primary key fingerprint: <KEY> ==> Successfully installed ncurses from binary cache [+] /home/spack1/spack/opt/spack/linux-ubuntu16.04-x86_64/gcc-5.4.0/ncurses-6.1-3o765ourmesfrji6yeclb4wb5w54aqbh ==> Installing readline ==> Searching for binary cache of readline ==> Installing readline from binary cache ==> Fetching file:///mirror/build_cache/linux-ubuntu16.04-x86_64/gcc-5.4.0/readline-7.0/linux-ubuntu16.04-x86_64-gcc-5.4.0-readline-7.0-nxhwrg7xwc6nbsm2v4ezwe63l6nfidbi.spack ######################################################################## 100.0% gpg: Signature made Sat Nov 10 05:04:56 2018 UTC using RSA key ID 3B7C69B2 gpg: Good signature from "sc-tutorial (GPG created for Spack) <<EMAIL>>" [unknown] gpg: WARNING: This key is not certified with a trusted signature! gpg: There is no indication that the signature belongs to the owner. Primary key fingerprint: <KEY> ==> Successfully installed readline from binary cache [+] /home/spack1/spack/opt/spack/linux-ubuntu16.04-x86_64/gcc-5.4.0/readline-7.0-nxhwrg7xwc6nbsm2v4ezwe63l6nfidbi ==> Installing gdbm ==> Searching for binary cache of gdbm ==> Installing gdbm from binary cache ==> Fetching file:///mirror/build_cache/linux-ubuntu16.04-x86_64/gcc-5.4.0/gdbm-1.14.1/linux-ubuntu16.04-x86_64-gcc-5.4.0-gdbm-1.14.1-q4fpyuo7ouhkeq6d3oabtrppctpvxmes.spack ######################################################################## 100.0% gpg: Signature made Sat Nov 10 05:18:34 2018 UTC using RSA key ID 3B7C69B2 gpg: Good signature from "sc-tutorial (GPG created for Spack) <<EMAIL>>" [unknown] gpg: WARNING: This key is not certified with a trusted signature! gpg: There is no indication that the signature belongs to the owner. 
Primary key fingerprint: <KEY> ==> Successfully installed gdbm from binary cache [+] /home/spack1/spack/opt/spack/linux-ubuntu16.04-x86_64/gcc-5.4.0/gdbm-1.14.1-q4fpyuo7ouhkeq6d3oabtrppctpvxmes ==> Installing perl ==> Searching for binary cache of perl ==> Installing perl from binary cache ==> Fetching file:///mirror/build_cache/linux-ubuntu16.04-x86_64/gcc-5.4.0/perl-5.26.2/linux-ubuntu16.04-x86_64-gcc-5.4.0-perl-5.26.2-ic2kyoadgp3dxfejcbllyplj2wf524fo.spack ######################################################################## 100.0% gpg: Signature made Sat Nov 10 05:12:45 2018 UTC using RSA key ID 3B7C69B2 gpg: Good signature from "sc-tutorial (GPG created for Spack) <<EMAIL>>" [unknown] gpg: WARNING: This key is not certified with a trusted signature! gpg: There is no indication that the signature belongs to the owner. Primary key fingerprint: <KEY> ==> Successfully installed perl from binary cache [+] /home/spack1/spack/opt/spack/linux-ubuntu16.04-x86_64/gcc-5.4.0/perl-5.26.2-ic2kyoadgp3dxfejcbllyplj2wf524fo ==> Installing autoconf ==> Searching for binary cache of autoconf ==> Installing autoconf from binary cache ==> Fetching file:///mirror/build_cache/linux-ubuntu16.04-x86_64/gcc-5.4.0/autoconf-2.69/linux-ubuntu16.04-x86_64-gcc-5.4.0-autoconf-2.69-3sx2gxeibc4oasqd4o5h6lnwpcpsgd2q.spack ######################################################################## 100.0% gpg: Signature made Sat Nov 10 05:24:03 2018 UTC using RSA key ID 3B7C69B2 gpg: Good signature from "sc-tutorial (GPG created for Spack) <<EMAIL>>" [unknown] gpg: WARNING: This key is not certified with a trusted signature! gpg: There is no indication that the signature belongs to the owner. 
Primary key fingerprint: <KEY> ==> Successfully installed autoconf from binary cache [+] /home/spack1/spack/opt/spack/linux-ubuntu16.04-x86_64/gcc-5.4.0/autoconf-2.69-3sx2gxeibc4oasqd4o5h6lnwpcpsgd2q ==> Installing automake ==> Searching for binary cache of automake ==> Installing automake from binary cache ==> Fetching file:///mirror/build_cache/linux-ubuntu16.04-x86_64/gcc-5.4.0/automake-1.16.1/linux-ubuntu16.04-x86_64-gcc-5.4.0-automake-1.16.1-rymw7imfehycqxzj4nuy2oiw3abegooy.spack ######################################################################## 100.0% gpg: Signature made Sat Nov 10 05:12:03 2018 UTC using RSA key ID 3B7C69B2 gpg: Good signature from "sc-tutorial (GPG created for Spack) <<EMAIL>>" [unknown] gpg: WARNING: This key is not certified with a trusted signature! gpg: There is no indication that the signature belongs to the owner. Primary key fingerprint: 95C7 1787 7AC0 0FFD AA8F D6E9 9CFA 4A45 3B7C 69B2 ==> Successfully installed automake from binary cache [+] /home/spack1/spack/opt/spack/linux-ubuntu16.04-x86_64/gcc-5.4.0/automake-1.16.1-rymw7imfehycqxzj4nuy2oiw3abegooy ==> Installing numactl ==> Searching for binary cache of numactl ==> Installing numactl from binary cache ==> Fetching file:///mirror/build_cache/linux-ubuntu16.04-x86_64/gcc-5.4.0/numactl-2.0.11/linux-ubuntu16.04-x86_64-gcc-5.4.0-numactl-2.0.11-ft463odrombnxlc3qew4omckhlq7tqgc.spack ######################################################################## 100.0% gpg: Signature made Sat Nov 10 05:30:34 2018 UTC using RSA key ID 3B7C69B2 gpg: Good signature from "sc-tutorial (GPG created for Spack) <<EMAIL>>" [unknown] gpg: WARNING: This key is not certified with a trusted signature! gpg: There is no indication that the signature belongs to the owner. 
Primary key fingerprint: <KEY> ==> Successfully installed numactl from binary cache [+] /home/spack1/spack/opt/spack/linux-ubuntu16.04-x86_64/gcc-5.4.0/numactl-2.0.11-ft463odrombnxlc3qew4omckhlq7tqgc ==> Installing hwloc ==> Searching for binary cache of hwloc ==> Installing hwloc from binary cache ==> Fetching file:///mirror/build_cache/linux-ubuntu16.04-x86_64/gcc-5.4.0/hwloc-1.11.9/linux-ubuntu16.04-x86_64-gcc-5.4.0-hwloc-1.11.9-43tkw5mt6huhv37vqnybqgxtkodbsava.spack ######################################################################## 100.0% gpg: Signature made Sat Nov 10 05:08:00 2018 UTC using RSA key ID 3B7C69B2 gpg: Good signature from "sc-tutorial (GPG created for Spack) <<EMAIL>>" [unknown] gpg: WARNING: This key is not certified with a trusted signature! gpg: There is no indication that the signature belongs to the owner. Primary key fingerprint: <KEY> ==> Successfully installed hwloc from binary cache [+] /home/spack1/spack/opt/spack/linux-ubuntu16.04-x86_64/gcc-5.4.0/hwloc-1.11.9-43tkw5mt6huhv37vqnybqgxtkodbsava ==> Installing openmpi ==> Searching for binary cache of openmpi ==> Installing openmpi from binary cache ==> Fetching file:///mirror/build_cache/linux-ubuntu16.04-x86_64/gcc-5.4.0/openmpi-3.1.3/linux-ubuntu16.04-x86_64-gcc-5.4.0-openmpi-3.1.3-3njc4q5pqdpptq6jvqjrezkffwokv2sx.spack ######################################################################## 100.0% gpg: Signature made Sat Nov 10 05:01:54 2018 UTC using RSA key ID 3B7C69B2 gpg: Good signature from "sc-tutorial (GPG created for Spack) <<EMAIL>>" [unknown] gpg: WARNING: This key is not certified with a trusted signature! gpg: There is no indication that the signature belongs to the owner. 
Primary key fingerprint: <KEY> ==> Successfully installed openmpi from binary cache [+] /home/spack1/spack/opt/spack/linux-ubuntu16.04-x86_64/gcc-5.4.0/openmpi-3.1.3-3njc4q5pqdpptq6jvqjrezkffwokv2sx ==> Installing hdf5 ==> Searching for binary cache of hdf5 ==> Installing hdf5 from binary cache ==> Fetching file:///mirror/build_cache/linux-ubuntu16.04-x86_64/gcc-5.4.0/hdf5-1.10.4/linux-ubuntu16.04-x86_64-gcc-5.4.0-hdf5-1.10.4-ozyvmhzdew66byarohm4p36ep7wtcuiw.spack ######################################################################## 100.0% gpg: Signature made Sat Nov 10 05:23:04 2018 UTC using RSA key ID 3B7C69B2 gpg: Good signature from "sc-tutorial (GPG created for Spack) <<EMAIL>>" [unknown] gpg: WARNING: This key is not certified with a trusted signature! gpg: There is no indication that the signature belongs to the owner. Primary key fingerprint: <KEY> ==> Successfully installed hdf5 from binary cache [+] /home/spack1/spack/opt/spack/linux-ubuntu16.04-x86_64/gcc-5.4.0/hdf5-1.10.4-ozyvmhzdew66byarohm4p36ep7wtcuiw ``` Spack packages can also have build options, called variants. Boolean variants can be specified using the `+` and `~` or `-` sigils. There are two sigils for `False` to avoid conflicts with shell parsing in different situations. Variants (boolean or otherwise) can also be specified using the same syntax as compiler flags. Here we can install HDF5 without MPI support. 
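As a rough illustration of the sigil syntax (this is a toy tokenizer, not Spack's real spec parser, which is considerably more involved), boolean variants could be read like this; the sketch handles only `+` and `~`, since `-name` is simply an alternate spelling of `~name` used to avoid shell-parsing conflicts:

```python
# Hypothetical sketch of tokenizing boolean variant sigils in a spec string.
import re

def parse_bool_variants(spec):
    """Map each +name to True and each ~name to False."""
    variants = {}
    for sigil, name in re.findall(r"([+~])(\w+)", spec):
        variants[name] = (sigil == "+")
    return variants

print(parse_bool_variants("hdf5+hl~mpi"))  # {'hl': True, 'mpi': False}
```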
```
$ spack install hdf5~mpi
==> zlib is already installed in /home/spack1/spack/opt/spack/linux-ubuntu16.04-x86_64/gcc-5.4.0/zlib-1.2.11-5nus6knzumx4ik2yl44jxtgtsl7d54xb
==> Installing hdf5
==> Searching for binary cache of hdf5
==> Finding buildcaches in /mirror/build_cache
==> Installing hdf5 from binary cache
==> Fetching file:///mirror/build_cache/linux-ubuntu16.04-x86_64/gcc-5.4.0/hdf5-1.10.4/linux-ubuntu16.04-x86_64-gcc-5.4.0-hdf5-1.10.4-5vcv5r67vpjzenq4apyebshclelnzuja.spack
######################################################################## 100.0%
gpg: Signature made Sat Nov 10 05:23:24 2018 UTC using RSA key ID 3B7C69B2
gpg: Good signature from "sc-tutorial (GPG created for Spack) <<EMAIL>>" [unknown]
gpg: WARNING: This key is not certified with a trusted signature!
gpg: There is no indication that the signature belongs to the owner.
Primary key fingerprint: <KEY>
==> Successfully installed hdf5 from binary cache
[+] /home/spack1/spack/opt/spack/linux-ubuntu16.04-x86_64/gcc-5.4.0/hdf5-1.10.4-5vcv5r67vpjzenq4apyebshclelnzuja
```

We might also want to install HDF5 with a different MPI implementation. While MPI is not a package itself, packages can depend on abstract interfaces like MPI. Spack handles these through "virtual dependencies." A package, such as HDF5, can depend on the MPI interface. Other packages (`openmpi`, `mpich`, `mvapich`, etc.) provide the MPI interface. Any of these providers can be requested for an MPI dependency. For example, we can build HDF5 with MPI support provided by mpich by specifying a dependency on `mpich`.

Spack also supports versioning of virtual dependencies. A package can depend on the MPI interface at version 3, and provider packages specify what version of the interface *they* provide. The partial spec `^mpi@3` can be satisfied by any of several providers.
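To make the resolution idea concrete, here is a toy sketch of provider matching. The provider names and interface versions below are illustrative examples, not authoritative Spack data, and the matching rule is deliberately simplified:

```python
# Illustrative sketch: each provider declares which version of the mpi
# interface it offers; a request like ^mpi@3 is satisfiable by any
# provider whose interface version meets the requirement.
# The provider/version pairs here are hypothetical examples.

providers = {
    "openmpi@3.1.3": 3,   # provides the mpi interface at v3
    "mpich@3.2.1": 3,
    "mvapich2@2.2": 3,
    "openmpi@1.10": 2,    # older release: mpi interface v2 only
}

def satisfying_providers(required_version):
    """Return the providers that satisfy ^mpi@required_version."""
    return sorted(name for name, v in providers.items() if v >= required_version)

print(satisfying_providers(3))
```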
``` $ spack install hdf5+hl+mpi ^mpich ==> libsigsegv is already installed in /home/spack1/spack/opt/spack/linux-ubuntu16.04-x86_64/gcc-5.4.0/libsigsegv-2.11-fypapcprssrj3nstp6njprskeyynsgaz ==> m4 is already installed in /home/spack1/spack/opt/spack/linux-ubuntu16.04-x86_64/gcc-5.4.0/m4-1.4.18-suf5jtcfehivwfesrc5hjy72r4nukyel ==> pkgconf is already installed in /home/spack1/spack/opt/spack/linux-ubuntu16.04-x86_64/gcc-5.4.0/pkgconf-1.4.2-fovrh7alpft646n6mhis5mml6k6e5f4v ==> ncurses is already installed in /home/spack1/spack/opt/spack/linux-ubuntu16.04-x86_64/gcc-5.4.0/ncurses-6.1-3o765ourmesfrji6yeclb4wb5w54aqbh ==> readline is already installed in /home/spack1/spack/opt/spack/linux-ubuntu16.04-x86_64/gcc-5.4.0/readline-7.0-nxhwrg7xwc6nbsm2v4ezwe63l6nfidbi ==> gdbm is already installed in /home/spack1/spack/opt/spack/linux-ubuntu16.04-x86_64/gcc-5.4.0/gdbm-1.14.1-q4fpyuo7ouhkeq6d3oabtrppctpvxmes ==> perl is already installed in /home/spack1/spack/opt/spack/linux-ubuntu16.04-x86_64/gcc-5.4.0/perl-5.26.2-ic2kyoadgp3dxfejcbllyplj2wf524fo ==> autoconf is already installed in /home/spack1/spack/opt/spack/linux-ubuntu16.04-x86_64/gcc-5.4.0/autoconf-2.69-3sx2gxeibc4oasqd4o5h6lnwpcpsgd2q ==> automake is already installed in /home/spack1/spack/opt/spack/linux-ubuntu16.04-x86_64/gcc-5.4.0/automake-1.16.1-rymw7imfehycqxzj4nuy2oiw3abegooy ==> libtool is already installed in /home/spack1/spack/opt/spack/linux-ubuntu16.04-x86_64/gcc-5.4.0/libtool-2.4.6-o2pfwjf44353ajgr42xqtvzyvqsazkgu ==> Installing texinfo ==> Searching for binary cache of texinfo ==> Finding buildcaches in /mirror/build_cache ==> Installing texinfo from binary cache ==> Fetching file:///mirror/build_cache/linux-ubuntu16.04-x86_64/gcc-5.4.0/texinfo-6.5/linux-ubuntu16.04-x86_64-gcc-5.4.0-texinfo-6.5-zs7a2pcwhq6ho2cj2x26uxfktwkpyucn.spack ######################################################################## 100.0% gpg: Signature made Sat Nov 10 05:18:29 2018 UTC using RSA key ID 3B7C69B2 gpg: Good signature 
from "sc-tutorial (GPG created for Spack) <<EMAIL>>" [unknown] gpg: WARNING: This key is not certified with a trusted signature! gpg: There is no indication that the signature belongs to the owner. Primary key fingerprint: <KEY> ==> Successfully installed texinfo from binary cache [+] /home/spack1/spack/opt/spack/linux-ubuntu16.04-x86_64/gcc-5.4.0/texinfo-6.5-zs7a2pcwhq6ho2cj2x26uxfktwkpyucn ==> Installing findutils ==> Searching for binary cache of findutils ==> Installing findutils from binary cache ==> Fetching file:///mirror/build_cache/linux-ubuntu16.04-x86_64/gcc-5.4.0/findutils-4.6.0/linux-ubuntu16.04-x86_64-gcc-5.4.0-findutils-4.6.0-d4iajxsopzrlcjtasahxqeyjkjv5jx4v.spack ######################################################################## 100.0% gpg: Signature made Sat Nov 10 05:07:17 2018 UTC using RSA key ID 3B7C69B2 gpg: Good signature from "sc-tutorial (GPG created for Spack) <<EMAIL>>" [unknown] gpg: WARNING: This key is not certified with a trusted signature! gpg: There is no indication that the signature belongs to the owner. Primary key fingerprint: <KEY> ==> Successfully installed findutils from binary cache [+] /home/spack1/spack/opt/spack/linux-ubuntu16.04-x86_64/gcc-5.4.0/findutils-4.6.0-d4iajxsopzrlcjtasahxqeyjkjv5jx4v ==> Installing mpich ==> Searching for binary cache of mpich ==> Installing mpich from binary cache ==> Fetching file:///mirror/build_cache/linux-ubuntu16.04-x86_64/gcc-5.4.0/mpich-3.2.1/linux-ubuntu16.04-x86_64-gcc-5.4.0-mpich-3.2.1-p3f7p2r5ntrynqibosglxvhwyztiwqs5.spack ######################################################################## 100.0% gpg: Signature made Sat Nov 10 05:23:57 2018 UTC using RSA key ID 3B7C69B2 gpg: Good signature from "sc-tutorial (GPG created for Spack) <<EMAIL>>" [unknown] gpg: WARNING: This key is not certified with a trusted signature! gpg: There is no indication that the signature belongs to the owner. 
Primary key fingerprint: <KEY> ==> Successfully installed mpich from binary cache [+] /home/spack1/spack/opt/spack/linux-ubuntu16.04-x86_64/gcc-5.4.0/mpich-3.2.1-p3f7p2r5ntrynqibosglxvhwyztiwqs5 ==> zlib is already installed in /home/spack1/spack/opt/spack/linux-ubuntu16.04-x86_64/gcc-5.4.0/zlib-1.2.11-5nus6knzumx4ik2yl44jxtgtsl7d54xb ==> Installing hdf5 ==> Searching for binary cache of hdf5 ==> Installing hdf5 from binary cache ==> Fetching file:///mirror/build_cache/linux-ubuntu16.04-x86_64/gcc-5.4.0/hdf5-1.10.4/linux-ubuntu16.04-x86_64-gcc-5.4.0-hdf5-1.10.4-xxd7syhgej6onpyfyewxqcqe7ltkt7ob.spack ######################################################################## 100.0% gpg: Signature made Sat Nov 10 05:07:32 2018 UTC using RSA key ID 3B7C69B2 gpg: Good signature from "sc-tutorial (GPG created for Spack) <<EMAIL>>" [unknown] gpg: WARNING: This key is not certified with a trusted signature! gpg: There is no indication that the signature belongs to the owner. Primary key fingerprint: <KEY> ==> Successfully installed hdf5 from binary cache [+] /home/spack1/spack/opt/spack/linux-ubuntu16.04-x86_64/gcc-5.4.0/hdf5-1.10.4-xxd7syhgej6onpyfyewxqcqe7ltkt7ob ``` We’ll do a quick check in on what we have installed so far. 
``` $ spack find -ldf ==> 32 installed packages -- linux-ubuntu16.04-x86_64 / clang@3.8.0-2ubuntu4 --- 6wc66et tcl@8.6.8%clang i426yu3 ^zlib@1.2.8%clang i426yu3 zlib@1.2.8%clang 4pt75q7 zlib@1.2.11%clang -- linux-ubuntu16.04-x86_64 / gcc@4.7 --- bq2wtdx zlib@1.2.11%gcc -- linux-ubuntu16.04-x86_64 / gcc@5.4.0 --- 3sx2gxe autoconf@2.69%gcc suf5jtc ^m4@1.4.18%gcc fypapcp ^libsigsegv@2.11%gcc ic2kyoa ^perl@5.26.2%gcc q4fpyuo ^gdbm@1.14.1%gcc nxhwrg7 ^readline@7.0%gcc 3o765ou ^ncurses@6.1%gcc rymw7im automake@1.16.1%gcc ic2kyoa ^perl@5.26.2%gcc q4fpyuo ^gdbm@1.14.1%gcc nxhwrg7 ^readline@7.0%gcc 3o765ou ^ncurses@6.1%gcc d4iajxs findutils@4.6.0%gcc q4fpyuo gdbm@1.14.1%gcc nxhwrg7 ^readline@7.0%gcc 3o765ou ^ncurses@6.1%gcc 5vcv5r6 hdf5@1.10.4%gcc 5nus6kn ^zlib@1.2.11%gcc ozyvmhz hdf5@1.10.4%gcc 3njc4q5 ^openmpi@3.1.3%gcc 43tkw5m ^hwloc@1.11.9%gcc 5urc6tc ^libpciaccess@0.13.5%gcc wpexsph ^libxml2@2.9.8%gcc teneqii ^xz@5.2.4%gcc 5nus6kn ^zlib@1.2.11%gcc ft463od ^numactl@2.0.11%gcc xxd7syh hdf5@1.10.4%gcc p3f7p2r ^mpich@3.2.1%gcc 5nus6kn ^zlib@1.2.11%gcc 43tkw5m hwloc@1.11.9%gcc 5urc6tc ^libpciaccess@0.13.5%gcc wpexsph ^libxml2@2.9.8%gcc teneqii ^xz@5.2.4%gcc 5nus6kn ^zlib@1.2.11%gcc ft463od ^numactl@2.0.11%gcc 5urc6tc libpciaccess@0.13.5%gcc fypapcp libsigsegv@2.11%gcc o2pfwjf libtool@2.4.6%gcc wpexsph libxml2@2.9.8%gcc teneqii ^xz@5.2.4%gcc 5nus6kn ^zlib@1.2.11%gcc suf5jtc m4@1.4.18%gcc fypapcp ^libsigsegv@2.11%gcc p3f7p2r mpich@3.2.1%gcc 3o765ou ncurses@6.1%gcc ft463od numactl@2.0.11%gcc 3njc4q5 openmpi@3.1.3%gcc 43tkw5m ^hwloc@1.11.9%gcc 5urc6tc ^libpciaccess@0.13.5%gcc wpexsph ^libxml2@2.9.8%gcc teneqii ^xz@5.2.4%gcc 5nus6kn ^zlib@1.2.11%gcc ft463od ^numactl@2.0.11%gcc ic2kyoa perl@5.26.2%gcc q4fpyuo ^gdbm@1.14.1%gcc nxhwrg7 ^readline@7.0%gcc 3o765ou ^ncurses@6.1%gcc fovrh7a pkgconf@1.4.2%gcc nxhwrg7 readline@7.0%gcc 3o765ou ^ncurses@6.1%gcc am4pbat tcl@8.6.8%gcc 64mns5m ^zlib@1.2.8%gcc cppflags="-O3" qhwyccy tcl@8.6.8%gcc 5nus6kn ^zlib@1.2.11%gcc zs7a2pc texinfo@6.5%gcc 
ic2kyoa ^perl@5.26.2%gcc q4fpyuo ^gdbm@1.14.1%gcc nxhwrg7 ^readline@7.0%gcc 3o765ou ^ncurses@6.1%gcc milz7fm util-macros@1.19.1%gcc teneqii xz@5.2.4%gcc bkyl5bh zlib@1.2.8%gcc 64mns5m zlib@1.2.8%gcc cppflags="-O3" 5nus6kn zlib@1.2.11%gcc ``` Spack models the dependencies of packages as a directed acyclic graph (DAG). The `spack find -d` command shows the tree representation of that graph. We can also use the `spack graph` command to view the entire DAG as a graph. ``` $ spack graph hdf5+hl+mpi ^mpich o hdf5 |\ o | zlib / o mpich o findutils |\ | |\ | | |\ | | | |\ o | | | | texinfo | | | o | automake | |_|/| | |/| | | | | | | |/ | | | o autoconf | |_|/| |/| |/ | |/| o | | perl o | | gdbm o | | readline o | | ncurses o | | pkgconf / / | o libtool |/ o m4 o libsigsegv ``` You may also have noticed that there are some packages shown in the `spack find -d` output that we didn’t install explicitly. These are dependencies that were installed implicitly. A few packages installed implicitly are not shown as dependencies in the `spack find -d` output. These are build dependencies. For example, `libpciaccess` is a dependency of openmpi and requires `m4` to build. Spack will build `m4` as part of the installation of `openmpi`, but it does not become a part of the DAG because it is not linked in at run time. Spack handles build dependencies differently because of their different (less strict) consistency requirements. It is entirely possible to have two packages using different versions of a dependency to build, which obviously cannot be done with linked dependencies. `HDF5` is more complicated than our basic example of zlib and openssl, but it’s still within the realm of software that an experienced HPC user could reasonably expect to install given a bit of time. Now let’s look at an even more complicated package. 
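Before we do, it is worth seeing why a DAG is exactly the right model for an installer: any DAG admits a topological ordering, so every package can be installed after all of its dependencies. The sketch below illustrates that idea in plain Python over a pruned slice of the hdf5 graph above. It is a conceptual illustration only, not Spack's implementation; build dependencies and most packages are omitted.

```python
from collections import defaultdict, deque

# A pruned slice of the hdf5 DAG above: each package maps to the set of
# packages it links against. Note that zlib is shared by three parents --
# that sharing is what makes this a DAG rather than a tree, and why the
# `spack find -d` tree view repeats zlib while `spack graph` does not.
deps = {
    "hdf5": {"zlib", "openmpi"},
    "openmpi": {"hwloc", "zlib"},
    "hwloc": {"libxml2"},
    "libxml2": {"xz", "zlib"},
    "xz": set(),
    "zlib": set(),
}

def install_order(deps):
    """Kahn's algorithm: repeatedly 'install' any package whose deps are done."""
    remaining = {pkg: set(d) for pkg, d in deps.items()}
    dependents = defaultdict(set)
    for pkg, ds in deps.items():
        for dep in ds:
            dependents[dep].add(pkg)
    # Packages with no outstanding dependencies are ready immediately.
    ready = deque(sorted(pkg for pkg, d in remaining.items() if not d))
    order = []
    while ready:
        pkg = ready.popleft()
        order.append(pkg)
        # Installing pkg may unblock the packages that depend on it.
        for parent in sorted(dependents[pkg]):
            remaining[parent].discard(pkg)
            if not remaining[parent]:
                ready.append(parent)
    if len(order) != len(remaining):
        raise ValueError("cycle detected -- not a DAG")
    return order

print(install_order(deps))
# ['xz', 'zlib', 'libxml2', 'hwloc', 'openmpi', 'hdf5']
```

Real Spack must also respect build dependencies, compilers, and variants when it concretizes a spec, but the ordering guarantee it relies on is the same one shown here.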
``` $ spack install trilinos ==> Installing diffutils ==> Searching for binary cache of diffutils ==> Finding buildcaches in /mirror/build_cache ==> Installing diffutils from binary cache ==> Fetching file:///mirror/build_cache/linux-ubuntu16.04-x86_64/gcc-5.4.0/diffutils-3.6/linux-ubuntu16.04-x86_64-gcc-5.4.0-diffutils-3.6-2rhuivgjrna2nrxhntyde6md2khcvs34.spack ######################################################################## 100.0% gpg: Signature made Sat Nov 10 05:30:17 2018 UTC using RSA key ID 3B7C69B2 gpg: Good signature from "sc-tutorial (GPG created for Spack) <<EMAIL>>" [unknown] gpg: WARNING: This key is not certified with a trusted signature! gpg: There is no indication that the signature belongs to the owner. Primary key fingerprint: <KEY> ==> Successfully installed diffutils from binary cache [+] /home/spack1/spack/opt/spack/linux-ubuntu16.04-x86_64/gcc-5.4.0/diffutils-3.6-2rhuivgjrna2nrxhntyde6md2khcvs34 ==> Installing bzip2 ==> Searching for binary cache of bzip2 ==> Installing bzip2 from binary cache ==> Fetching file:///mirror/build_cache/linux-ubuntu16.04-x86_64/gcc-5.4.0/bzip2-1.0.6/linux-ubuntu16.04-x86_64-gcc-5.4.0-bzip2-1.0.6-ufczdvsqt6edesm36xiucyry7myhj7e7.spack ######################################################################## 100.0% gpg: Signature made Sat Nov 10 05:34:37 2018 UTC using RSA key ID 3B7C69B2 gpg: Good signature from "sc-tutorial (GPG created for Spack) <<EMAIL>>" [unknown] gpg: WARNING: This key is not certified with a trusted signature! gpg: There is no indication that the signature belongs to the owner. 
Primary key fingerprint: <KEY> ==> Successfully installed bzip2 from binary cache [+] /home/spack1/spack/opt/spack/linux-ubuntu16.04-x86_64/gcc-5.4.0/bzip2-1.0.6-ufczdvsqt6edesm36xiucyry7myhj7e7 ==> zlib is already installed in /home/spack1/spack/opt/spack/linux-ubuntu16.04-x86_64/gcc-5.4.0/zlib-1.2.11-5nus6knzumx4ik2yl44jxtgtsl7d54xb ==> Installing boost ==> Searching for binary cache of boost ==> Installing boost from binary cache ==> Fetching file:///mirror/build_cache/linux-ubuntu16.04-x86_64/gcc-5.4.0/boost-1.68.0/linux-ubuntu16.04-x86_64-gcc-5.4.0-boost-1.68.0-zbgfxapchxa4awxdwpleubfuznblxzvt.spack ######################################################################## 100.0% gpg: Signature made Sat Nov 10 04:58:55 2018 UTC using RSA key ID 3B7C69B2 gpg: Good signature from "sc-tutorial (GPG created for Spack) <<EMAIL>>" [unknown] gpg: WARNING: This key is not certified with a trusted signature! gpg: There is no indication that the signature belongs to the owner. Primary key fingerprint: <KEY> ==> Successfully installed boost from binary cache [+] /home/spack1/spack/opt/spack/linux-ubuntu16.04-x86_64/gcc-5.4.0/boost-1.68.0-zbgfxapchxa4awxdwpleubfuznblxzvt ==> pkgconf is already installed in /home/spack1/spack/opt/spack/linux-ubuntu16.04-x86_64/gcc-5.4.0/pkgconf-1.4.2-fovrh7alpft646n6mhis5mml6k6e5f4v ==> ncurses is already installed in /home/spack1/spack/opt/spack/linux-ubuntu16.04-x86_64/gcc-5.4.0/ncurses-6.1-3o765ourmesfrji6yeclb4wb5w54aqbh ==> readline is already installed in /home/spack1/spack/opt/spack/linux-ubuntu16.04-x86_64/gcc-5.4.0/readline-7.0-nxhwrg7xwc6nbsm2v4ezwe63l6nfidbi ==> gdbm is already installed in /home/spack1/spack/opt/spack/linux-ubuntu16.04-x86_64/gcc-5.4.0/gdbm-1.14.1-q4fpyuo7ouhkeq6d3oabtrppctpvxmes ==> perl is already installed in /home/spack1/spack/opt/spack/linux-ubuntu16.04-x86_64/gcc-5.4.0/perl-5.26.2-ic2kyoadgp3dxfejcbllyplj2wf524fo ==> Installing openssl ==> Searching for binary cache of openssl ==> Installing openssl from 
binary cache ==> Fetching file:///mirror/build_cache/linux-ubuntu16.04-x86_64/gcc-5.4.0/openssl-1.0.2o/linux-ubuntu16.04-x86_64-gcc-5.4.0-openssl-1.0.2o-b4y3w3bsyvjla6eesv4vt6aplpfrpsha.spack ######################################################################## 100.0% gpg: Signature made Sat Nov 10 05:24:10 2018 UTC using RSA key ID 3B7C69B2 gpg: Good signature from "sc-tutorial (GPG created for Spack) <<EMAIL>>" [unknown] gpg: WARNING: This key is not certified with a trusted signature! gpg: There is no indication that the signature belongs to the owner. Primary key fingerprint: <KEY> ==> Successfully installed openssl from binary cache [+] /home/spack1/spack/opt/spack/linux-ubuntu16.04-x86_64/gcc-5.4.0/openssl-1.0.2o-b4y3w3bsyvjla6eesv4vt6aplpfrpsha ==> Installing cmake ==> Searching for binary cache of cmake ==> Installing cmake from binary cache ==> Fetching file:///mirror/build_cache/linux-ubuntu16.04-x86_64/gcc-5.4.0/cmake-3.12.3/linux-ubuntu16.04-x86_64-gcc-5.4.0-cmake-3.12.3-otafqzhh4xnlq2mpakch7dr3tjfsrjnx.spack ######################################################################## 100.0% gpg: Signature made Sat Nov 10 05:33:15 2018 UTC using RSA key ID 3B7C69B2 gpg: Good signature from "sc-tutorial (GPG created for Spack) <<EMAIL>>" [unknown] gpg: WARNING: This key is not certified with a trusted signature! gpg: There is no indication that the signature belongs to the owner. 
Primary key fingerprint: <KEY> ==> Successfully installed cmake from binary cache [+] /home/spack1/spack/opt/spack/linux-ubuntu16.04-x86_64/gcc-5.4.0/cmake-3.12.3-otafqzhh4xnlq2mpakch7dr3tjfsrjnx ==> Installing glm ==> Searching for binary cache of glm ==> Installing glm from binary cache ==> Fetching file:///mirror/build_cache/linux-ubuntu16.04-x86_64/gcc-5.4.0/glm-0.9.7.1/linux-ubuntu16.04-x86_64-gcc-5.4.0-glm-0.9.7.1-jnw622jwcbsymzj2fsx22omjl7tmvaws.spack ######################################################################## 100.0% gpg: Signature made Sat Nov 10 05:30:33 2018 UTC using RSA key ID 3B7C69B2 gpg: Good signature from "sc-tutorial (GPG created for Spack) <<EMAIL>>" [unknown] gpg: WARNING: This key is not certified with a trusted signature! gpg: There is no indication that the signature belongs to the owner. Primary key fingerprint: <KEY> ==> Successfully installed glm from binary cache [+] /home/spack1/spack/opt/spack/linux-ubuntu16.04-x86_64/gcc-5.4.0/glm-0.9.7.1-jnw622jwcbsymzj2fsx22omjl7tmvaws ==> libsigsegv is already installed in /home/spack1/spack/opt/spack/linux-ubuntu16.04-x86_64/gcc-5.4.0/libsigsegv-2.11-fypapcprssrj3nstp6njprskeyynsgaz ==> m4 is already installed in /home/spack1/spack/opt/spack/linux-ubuntu16.04-x86_64/gcc-5.4.0/m4-1.4.18-suf5jtcfehivwfesrc5hjy72r4nukyel ==> libtool is already installed in /home/spack1/spack/opt/spack/linux-ubuntu16.04-x86_64/gcc-5.4.0/libtool-2.4.6-o2pfwjf44353ajgr42xqtvzyvqsazkgu ==> util-macros is already installed in /home/spack1/spack/opt/spack/linux-ubuntu16.04-x86_64/gcc-5.4.0/util-macros-1.19.1-milz7fmttmptcic2qdk5cnel7ll5sybr ==> libpciaccess is already installed in /home/spack1/spack/opt/spack/linux-ubuntu16.04-x86_64/gcc-5.4.0/libpciaccess-0.13.5-5urc6tcjae26fbbd2wyfohoszhgxtbmc ==> xz is already installed in /home/spack1/spack/opt/spack/linux-ubuntu16.04-x86_64/gcc-5.4.0/xz-5.2.4-teneqii2xv5u6zl5r6qi3pwurc6pmypz ==> libxml2 is already installed in 
/home/spack1/spack/opt/spack/linux-ubuntu16.04-x86_64/gcc-5.4.0/libxml2-2.9.8-wpexsphdmfayxqxd4up5vgwuqgu5woo7 ==> autoconf is already installed in /home/spack1/spack/opt/spack/linux-ubuntu16.04-x86_64/gcc-5.4.0/autoconf-2.69-3sx2gxeibc4oasqd4o5h6lnwpcpsgd2q ==> automake is already installed in /home/spack1/spack/opt/spack/linux-ubuntu16.04-x86_64/gcc-5.4.0/automake-1.16.1-rymw7imfehycqxzj4nuy2oiw3abegooy ==> numactl is already installed in /home/spack1/spack/opt/spack/linux-ubuntu16.04-x86_64/gcc-5.4.0/numactl-2.0.11-ft463odrombnxlc3qew4omckhlq7tqgc ==> hwloc is already installed in /home/spack1/spack/opt/spack/linux-ubuntu16.04-x86_64/gcc-5.4.0/hwloc-1.11.9-43tkw5mt6huhv37vqnybqgxtkodbsava ==> openmpi is already installed in /home/spack1/spack/opt/spack/linux-ubuntu16.04-x86_64/gcc-5.4.0/openmpi-3.1.3-3njc4q5pqdpptq6jvqjrezkffwokv2sx ==> Installing hdf5 ==> Searching for binary cache of hdf5 ==> Installing hdf5 from binary cache ==> Fetching file:///mirror/build_cache/linux-ubuntu16.04-x86_64/gcc-5.4.0/hdf5-1.10.4/linux-ubuntu16.04-x86_64-gcc-5.4.0-hdf5-1.10.4-oqwnui7wtovuf2id4vjwcxfmxlzjus6y.spack ######################################################################## 100.0% gpg: Signature made Sat Nov 10 05:09:10 2018 UTC using RSA key ID 3B7C69B2 gpg: Good signature from "sc-tutorial (GPG created for Spack) <<EMAIL>>" [unknown] gpg: WARNING: This key is not certified with a trusted signature! gpg: There is no indication that the signature belongs to the owner. 
Primary key fingerprint: <KEY> ==> Successfully installed hdf5 from binary cache [+] /home/spack1/spack/opt/spack/linux-ubuntu16.04-x86_64/gcc-5.4.0/hdf5-1.10.4-oqwnui7wtovuf2id4vjwcxfmxlzjus6y ==> Installing openblas ==> Searching for binary cache of openblas ==> Installing openblas from binary cache ==> Fetching file:///mirror/build_cache/linux-ubuntu16.04-x86_64/gcc-5.4.0/openblas-0.3.3/linux-ubuntu16.04-x86_64-gcc-5.4.0-openblas-0.3.3-cyeg2yiitpuqglhvbox5gtbgsim2v5vn.spack ######################################################################## 100.0% gpg: Signature made Sat Nov 10 05:32:04 2018 UTC using RSA key ID 3B7C69B2 gpg: Good signature from "sc-tutorial (GPG created for Spack) <<EMAIL>>" [unknown] gpg: WARNING: This key is not certified with a trusted signature! gpg: There is no indication that the signature belongs to the owner. Primary key fingerprint: <KEY> ==> Successfully installed openblas from binary cache [+] /home/spack1/spack/opt/spack/linux-ubuntu16.04-x86_64/gcc-5.4.0/openblas-0.3.3-cyeg2yiitpuqglhvbox5gtbgsim2v5vn ==> Installing hypre ==> Searching for binary cache of hypre ==> Installing hypre from binary cache ==> Fetching file:///mirror/build_cache/linux-ubuntu16.04-x86_64/gcc-5.4.0/hypre-2.15.1/linux-ubuntu16.04-x86_64-gcc-5.4.0-hypre-2.15.1-fshksdpecwiq7r6vawfswpboedhbisju.spack ######################################################################## 100.0% gpg: Signature made Sat Nov 10 05:07:34 2018 UTC using RSA key ID 3B7C69B2 gpg: Good signature from "sc-tutorial (GPG created for Spack) <<EMAIL>>" [unknown] gpg: WARNING: This key is not certified with a trusted signature! gpg: There is no indication that the signature belongs to the owner. 
Primary key fingerprint: <KEY> ==> Successfully installed hypre from binary cache [+] /home/spack1/spack/opt/spack/linux-ubuntu16.04-x86_64/gcc-5.4.0/hypre-2.15.1-fshksdpecwiq7r6vawfswpboedhbisju ==> Installing matio ==> Searching for binary cache of matio ==> Installing matio from binary cache ==> Fetching file:///mirror/build_cache/linux-ubuntu16.04-x86_64/gcc-5.4.0/matio-1.5.9/linux-ubuntu16.04-x86_64-gcc-5.4.0-matio-1.5.9-lmzdgssvobdljw52mtahelu2ju7osh6h.spack ######################################################################## 100.0% gpg: Signature made Sat Nov 10 05:05:13 2018 UTC using RSA key ID 3B7C69B2 gpg: Good signature from "sc-tutorial (GPG created for Spack) <<EMAIL>>" [unknown] gpg: WARNING: This key is not certified with a trusted signature! gpg: There is no indication that the signature belongs to the owner. Primary key fingerprint: <KEY> ==> Successfully installed matio from binary cache [+] /home/spack1/spack/opt/spack/linux-ubuntu16.04-x86_64/gcc-5.4.0/matio-1.5.9-lmzdgssvobdljw52mtahelu2ju7osh6h ==> Installing metis ==> Searching for binary cache of metis ==> Installing metis from binary cache ==> Fetching file:///mirror/build_cache/linux-ubuntu16.04-x86_64/gcc-5.4.0/metis-5.1.0/linux-ubuntu16.04-x86_64-gcc-5.4.0-metis-5.1.0-3wnvp4ji3wwu4v4vymszrhx6naehs6jc.spack ######################################################################## 100.0% gpg: Signature made Sat Nov 10 05:31:42 2018 UTC using RSA key ID 3B7C69B2 gpg: Good signature from "sc-tutorial (GPG created for Spack) <<EMAIL>>" [unknown] gpg: WARNING: This key is not certified with a trusted signature! gpg: There is no indication that the signature belongs to the owner. 
Primary key fingerprint: <KEY> ==> Successfully installed metis from binary cache [+] /home/spack1/spack/opt/spack/linux-ubuntu16.04-x86_64/gcc-5.4.0/metis-5.1.0-3wnvp4ji3wwu4v4vymszrhx6naehs6jc ==> Installing netlib-scalapack ==> Searching for binary cache of netlib-scalapack ==> Installing netlib-scalapack from binary cache ==> Fetching file:///mirror/build_cache/linux-ubuntu16.04-x86_64/gcc-5.4.0/netlib-scalapack-2.0.2/linux-ubuntu16.04-x86_64-gcc-5.4.0-netlib-scalapack-2.0.2-wotpfwfctgfkzzn2uescucxvvbg3tm6b.spack ######################################################################## 100.0% gpg: Signature made Sat Nov 10 05:07:22 2018 UTC using RSA key ID 3B7C69B2 gpg: Good signature from "sc-tutorial (GPG created for Spack) <<EMAIL>>" [unknown] gpg: WARNING: This key is not certified with a trusted signature! gpg: There is no indication that the signature belongs to the owner. Primary key fingerprint: <KEY> ==> Successfully installed netlib-scalapack from binary cache [+] /home/spack1/spack/opt/spack/linux-ubuntu16.04-x86_64/gcc-5.4.0/netlib-scalapack-2.0.2-wotpfwfctgfkzzn2uescucxvvbg3tm6b ==> Installing mumps ==> Searching for binary cache of mumps ==> Installing mumps from binary cache ==> Fetching file:///mirror/build_cache/linux-ubuntu16.04-x86_64/gcc-5.4.0/mumps-5.1.1/linux-ubuntu16.04-x86_64-gcc-5.4.0-mumps-5.1.1-acsg2dzroox2swssgc5cwgkvdy6jcm5q.spack ######################################################################## 100.0% gpg: Signature made Sat Nov 10 05:18:32 2018 UTC using RSA key ID 3B7C69B2 gpg: Good signature from "sc-tutorial (GPG created for Spack) <<EMAIL>>" [unknown] gpg: WARNING: This key is not certified with a trusted signature! gpg: There is no indication that the signature belongs to the owner. 
Primary key fingerprint: <KEY> ==> Successfully installed mumps from binary cache [+] /home/spack1/spack/opt/spack/linux-ubuntu16.04-x86_64/gcc-5.4.0/mumps-5.1.1-acsg2dzroox2swssgc5cwgkvdy6jcm5q ==> Installing netcdf ==> Searching for binary cache of netcdf ==> Installing netcdf from binary cache ==> Fetching file:///mirror/build_cache/linux-ubuntu16.04-x86_64/gcc-5.4.0/netcdf-4.6.1/linux-ubuntu16.04-x86_64-gcc-5.4.0-netcdf-4.6.1-mhm4izpogf4mrjidyskb6ewtzxdi7t6g.spack ######################################################################## 100.0% gpg: Signature made Sat Nov 10 05:11:57 2018 UTC using RSA key ID 3B7C69B2 gpg: Good signature from "sc-tutorial (GPG created for Spack) <<EMAIL>>" [unknown] gpg: WARNING: This key is not certified with a trusted signature! gpg: There is no indication that the signature belongs to the owner. Primary key fingerprint: <KEY> ==> Successfully installed netcdf from binary cache [+] /home/spack1/spack/opt/spack/linux-ubuntu16.04-x86_64/gcc-5.4.0/netcdf-4.6.1-mhm4izpogf4mrjidyskb6ewtzxdi7t6g ==> Installing parmetis ==> Searching for binary cache of parmetis ==> Installing parmetis from binary cache ==> Fetching file:///mirror/build_cache/linux-ubuntu16.04-x86_64/gcc-5.4.0/parmetis-4.0.3/linux-ubuntu16.04-x86_64-gcc-5.4.0-parmetis-4.0.3-uv6h3sqx6quqg22hxesi2mw2un3kw6b7.spack ######################################################################## 100.0% gpg: Signature made Sat Nov 10 05:12:04 2018 UTC using RSA key ID 3B7C69B2 gpg: Good signature from "sc-tutorial (GPG created for Spack) <<EMAIL>>" [unknown] gpg: WARNING: This key is not certified with a trusted signature! gpg: There is no indication that the signature belongs to the owner. 
Primary key fingerprint: <KEY> ==> Successfully installed parmetis from binary cache [+] /home/spack1/spack/opt/spack/linux-ubuntu16.04-x86_64/gcc-5.4.0/parmetis-4.0.3-uv6h3sqx6quqg22hxesi2mw2un3kw6b7 ==> Installing suite-sparse ==> Searching for binary cache of suite-sparse ==> Installing suite-sparse from binary cache ==> Fetching file:///mirror/build_cache/linux-ubuntu16.04-x86_64/gcc-5.4.0/suite-sparse-5.3.0/linux-ubuntu16.04-x86_64-gcc-5.4.0-suite-sparse-5.3.0-zaau4kifha2enpdcn3mjlrqym7hm7yon.spack ######################################################################## 100.0% gpg: Signature made Sat Nov 10 05:22:54 2018 UTC using RSA key ID 3B7C69B2 gpg: Good signature from "sc-tutorial (GPG created for Spack) <<EMAIL>>" [unknown] gpg: WARNING: This key is not certified with a trusted signature! gpg: There is no indication that the signature belongs to the owner. Primary key fingerprint: <KEY> ==> Successfully installed suite-sparse from binary cache [+] /home/spack1/spack/opt/spack/linux-ubuntu16.04-x86_64/gcc-5.4.0/suite-sparse-5.3.0-zaau4kifha2enpdcn3mjlrqym7hm7yon ==> Installing trilinos ==> Searching for binary cache of trilinos ==> Installing trilinos from binary cache ==> Fetching file:///mirror/build_cache/linux-ubuntu16.04-x86_64/gcc-5.4.0/trilinos-12.12.1/linux-ubuntu16.04-x86_64-gcc-5.4.0-trilinos-12.12.1-rlsruavxqvwk2tgxzxboclbo6ykjf54r.spack ######################################################################## 100.0% gpg: Signature made Sat Nov 10 05:18:10 2018 UTC using RSA key ID 3B7C69B2 gpg: Good signature from "sc-tutorial (GPG created for Spack) <<EMAIL>>" [unknown] gpg: WARNING: This key is not certified with a trusted signature! gpg: There is no indication that the signature belongs to the owner. Primary key fingerprint: <KEY> ==> Successfully installed trilinos from binary cache ``` Now we’re starting to see the power of Spack. 
Trilinos in its default configuration has 23 top-level dependencies, many of which have dependencies of their own. Installing more complex packages can take days or weeks even for an experienced user. Although we’ve done a binary installation for the tutorial, a source installation of trilinos using Spack takes about 3 hours (depending on the system), but only 20 seconds of programmer time. Spack manages consistency of the entire DAG. Every MPI dependency will be satisfied by the same configuration of MPI, etc. If we install `trilinos` again specifying a dependency on our previous HDF5 built with `mpich`: ``` $ spack install trilinos +hdf5 ^hdf5+hl+mpi ^mpich ==> diffutils is already installed in /home/spack1/spack/opt/spack/linux-ubuntu16.04-x86_64/gcc-5.4.0/diffutils-3.6-2rhuivgjrna2nrxhntyde6md2khcvs34 ==> bzip2 is already installed in /home/spack1/spack/opt/spack/linux-ubuntu16.04-x86_64/gcc-5.4.0/bzip2-1.0.6-ufczdvsqt6edesm36xiucyry7myhj7e7 ==> zlib is already installed in /home/spack1/spack/opt/spack/linux-ubuntu16.04-x86_64/gcc-5.4.0/zlib-1.2.11-5nus6knzumx4ik2yl44jxtgtsl7d54xb ==> boost is already installed in /home/spack1/spack/opt/spack/linux-ubuntu16.04-x86_64/gcc-5.4.0/boost-1.68.0-zbgfxapchxa4awxdwpleubfuznblxzvt ==> pkgconf is already installed in /home/spack1/spack/opt/spack/linux-ubuntu16.04-x86_64/gcc-5.4.0/pkgconf-1.4.2-fovrh7alpft646n6mhis5mml6k6e5f4v ==> ncurses is already installed in /home/spack1/spack/opt/spack/linux-ubuntu16.04-x86_64/gcc-5.4.0/ncurses-6.1-3o765ourmesfrji6yeclb4wb5w54aqbh ==> readline is already installed in /home/spack1/spack/opt/spack/linux-ubuntu16.04-x86_64/gcc-5.4.0/readline-7.0-nxhwrg7xwc6nbsm2v4ezwe63l6nfidbi ==> gdbm is already installed in /home/spack1/spack/opt/spack/linux-ubuntu16.04-x86_64/gcc-5.4.0/gdbm-1.14.1-q4fpyuo7ouhkeq6d3oabtrppctpvxmes ==> perl is already installed in /home/spack1/spack/opt/spack/linux-ubuntu16.04-x86_64/gcc-5.4.0/perl-5.26.2-ic2kyoadgp3dxfejcbllyplj2wf524fo ==> openssl is already
installed in /home/spack1/spack/opt/spack/linux-ubuntu16.04-x86_64/gcc-5.4.0/openssl-1.0.2o-b4y3w3bsyvjla6eesv4vt6aplpfrpsha ==> cmake is already installed in /home/spack1/spack/opt/spack/linux-ubuntu16.04-x86_64/gcc-5.4.0/cmake-3.12.3-otafqzhh4xnlq2mpakch7dr3tjfsrjnx ==> glm is already installed in /home/spack1/spack/opt/spack/linux-ubuntu16.04-x86_64/gcc-5.4.0/glm-0.9.7.1-jnw622jwcbsymzj2fsx22omjl7tmvaws ==> libsigsegv is already installed in /home/spack1/spack/opt/spack/linux-ubuntu16.04-x86_64/gcc-5.4.0/libsigsegv-2.11-fypapcprssrj3nstp6njprskeyynsgaz ==> m4 is already installed in /home/spack1/spack/opt/spack/linux-ubuntu16.04-x86_64/gcc-5.4.0/m4-1.4.18-suf5jtcfehivwfesrc5hjy72r4nukyel ==> autoconf is already installed in /home/spack1/spack/opt/spack/linux-ubuntu16.04-x86_64/gcc-5.4.0/autoconf-2.69-3sx2gxeibc4oasqd4o5h6lnwpcpsgd2q ==> automake is already installed in /home/spack1/spack/opt/spack/linux-ubuntu16.04-x86_64/gcc-5.4.0/automake-1.16.1-rymw7imfehycqxzj4nuy2oiw3abegooy ==> libtool is already installed in /home/spack1/spack/opt/spack/linux-ubuntu16.04-x86_64/gcc-5.4.0/libtool-2.4.6-o2pfwjf44353ajgr42xqtvzyvqsazkgu ==> texinfo is already installed in /home/spack1/spack/opt/spack/linux-ubuntu16.04-x86_64/gcc-5.4.0/texinfo-6.5-zs7a2pcwhq6ho2cj2x26uxfktwkpyucn ==> findutils is already installed in /home/spack1/spack/opt/spack/linux-ubuntu16.04-x86_64/gcc-5.4.0/findutils-4.6.0-d4iajxsopzrlcjtasahxqeyjkjv5jx4v ==> mpich is already installed in /home/spack1/spack/opt/spack/linux-ubuntu16.04-x86_64/gcc-5.4.0/mpich-3.2.1-p3f7p2r5ntrynqibosglxvhwyztiwqs5 ==> hdf5 is already installed in /home/spack1/spack/opt/spack/linux-ubuntu16.04-x86_64/gcc-5.4.0/hdf5-1.10.4-xxd7syhgej6onpyfyewxqcqe7ltkt7ob ==> openblas is already installed in /home/spack1/spack/opt/spack/linux-ubuntu16.04-x86_64/gcc-5.4.0/openblas-0.3.3-cyeg2yiitpuqglhvbox5gtbgsim2v5vn ==> Installing hypre ==> Searching for binary cache of hypre ==> Finding buildcaches in /mirror/build_cache ==> Installing 
hypre from binary cache ==> Fetching file:///mirror/build_cache/linux-ubuntu16.04-x86_64/gcc-5.4.0/hypre-2.15.1/linux-ubuntu16.04-x86_64-gcc-5.4.0-hypre-2.15.1-obewuozolon7tkdg4cfxc6ae2tzkronb.spack ######################################################################## 100.0% gpg: Signature made Sat Nov 10 05:34:36 2018 UTC using RSA key ID 3B7C69B2 gpg: Good signature from "sc-tutorial (GPG created for Spack) <<EMAIL>>" [unknown] gpg: WARNING: This key is not certified with a trusted signature! gpg: There is no indication that the signature belongs to the owner. Primary key fingerprint: <KEY> ==> Successfully installed hypre from binary cache [+] /home/spack1/spack/opt/spack/linux-ubuntu16.04-x86_64/gcc-5.4.0/hypre-2.15.1-obewuozolon7tkdg4cfxc6ae2tzkronb ==> Installing matio ==> Searching for binary cache of matio ==> Installing matio from binary cache ==> Fetching file:///mirror/build_cache/linux-ubuntu16.04-x86_64/gcc-5.4.0/matio-1.5.9/linux-ubuntu16.04-x86_64-gcc-5.4.0-matio-1.5.9-gvyqldhifflmvcrtui3b6s64jcczsxxh.spack ######################################################################## 100.0% gpg: Signature made Sat Nov 10 05:25:11 2018 UTC using RSA key ID 3B7C69B2 gpg: Good signature from "sc-tutorial (GPG created for Spack) <<EMAIL>>" [unknown] gpg: WARNING: This key is not certified with a trusted signature! gpg: There is no indication that the signature belongs to the owner. 
Primary key fingerprint: <KEY> ==> Successfully installed matio from binary cache [+] /home/spack1/spack/opt/spack/linux-ubuntu16.04-x86_64/gcc-5.4.0/matio-1.5.9-gvyqldhifflmvcrtui3b6s64jcczsxxh ==> metis is already installed in /home/spack1/spack/opt/spack/linux-ubuntu16.04-x86_64/gcc-5.4.0/metis-5.1.0-3wnvp4ji3wwu4v4vymszrhx6naehs6jc ==> Installing netlib-scalapack ==> Searching for binary cache of netlib-scalapack ==> Installing netlib-scalapack from binary cache ==> Fetching file:///mirror/build_cache/linux-ubuntu16.04-x86_64/gcc-5.4.0/netlib-scalapack-2.0.2/linux-ubuntu16.04-x86_64-gcc-5.4.0-netlib-scalapack-2.0.2-p7iln2pcosw2ipyqoyr7ie6lpva2oj7r.spack ######################################################################## 100.0% gpg: Signature made Sat Nov 10 05:32:20 2018 UTC using RSA key ID 3B7C69B2 gpg: Good signature from "sc-tutorial (GPG created for Spack) <<EMAIL>>" [unknown] gpg: WARNING: This key is not certified with a trusted signature! gpg: There is no indication that the signature belongs to the owner. Primary key fingerprint: <KEY> ==> Successfully installed netlib-scalapack from binary cache [+] /home/spack1/spack/opt/spack/linux-ubuntu16.04-x86_64/gcc-5.4.0/netlib-scalapack-2.0.2-p7iln2pcosw2ipyqoyr7ie6lpva2oj7r ==> Installing mumps ==> Searching for binary cache of mumps ==> Installing mumps from binary cache ==> Fetching file:///mirror/build_cache/linux-ubuntu16.04-x86_64/gcc-5.4.0/mumps-5.1.1/linux-ubuntu16.04-x86_64-gcc-5.4.0-mumps-5.1.1-cumcj5a75cagsznpjrgretxdg6okxaur.spack ######################################################################## 100.0% gpg: Signature made Sat Nov 10 05:33:18 2018 UTC using RSA key ID 3B7C69B2 gpg: Good signature from "sc-tutorial (GPG created for Spack) <<EMAIL>>" [unknown] gpg: WARNING: This key is not certified with a trusted signature! gpg: There is no indication that the signature belongs to the owner. 
Primary key fingerprint: <KEY> ==> Successfully installed mumps from binary cache [+] /home/spack1/spack/opt/spack/linux-ubuntu16.04-x86_64/gcc-5.4.0/mumps-5.1.1-cumcj5a75cagsznpjrgretxdg6okxaur ==> Installing netcdf ==> Searching for binary cache of netcdf ==> Installing netcdf from binary cache ==> Fetching file:///mirror/build_cache/linux-ubuntu16.04-x86_64/gcc-5.4.0/netcdf-4.6.1/linux-ubuntu16.04-x86_64-gcc-5.4.0-netcdf-4.6.1-wmmx5sgwfds34v7bkkhiduar5yecrnnd.spack ######################################################################## 100.0% gpg: Signature made Sat Nov 10 05:24:01 2018 UTC using RSA key ID 3B7C69B2 gpg: Good signature from "sc-tutorial (GPG created for Spack) <<EMAIL>>" [unknown] gpg: WARNING: This key is not certified with a trusted signature! gpg: There is no indication that the signature belongs to the owner. Primary key fingerprint: <KEY> ==> Successfully installed netcdf from binary cache [+] /home/spack1/spack/opt/spack/linux-ubuntu16.04-x86_64/gcc-5.4.0/netcdf-4.6.1-wmmx5sgwfds34v7bkkhiduar5yecrnnd ==> Installing parmetis ==> Searching for binary cache of parmetis ==> Installing parmetis from binary cache ==> Fetching file:///mirror/build_cache/linux-ubuntu16.04-x86_64/gcc-5.4.0/parmetis-4.0.3/linux-ubuntu16.04-x86_64-gcc-5.4.0-parmetis-4.0.3-jehtatan4y2lcobj6waoqv66jj4libtz.spack ######################################################################## 100.0% gpg: Signature made Sat Nov 10 05:07:41 2018 UTC using RSA key ID 3B7C69B2 gpg: Good signature from "sc-tutorial (GPG created for Spack) <<EMAIL>>" [unknown] gpg: WARNING: This key is not certified with a trusted signature! gpg: There is no indication that the signature belongs to the owner. 
Primary key fingerprint: <KEY> ==> Successfully installed parmetis from binary cache [+] /home/spack1/spack/opt/spack/linux-ubuntu16.04-x86_64/gcc-5.4.0/parmetis-4.0.3-jehtatan4y2lcobj6waoqv66jj4libtz ==> suite-sparse is already installed in /home/spack1/spack/opt/spack/linux-ubuntu16.04-x86_64/gcc-5.4.0/suite-sparse-5.3.0-zaau4kifha2enpdcn3mjlrqym7hm7yon ==> Installing trilinos ==> Searching for binary cache of trilinos ==> Installing trilinos from binary cache ==> Fetching file:///mirror/build_cache/linux-ubuntu16.04-x86_64/gcc-5.4.0/trilinos-12.12.1/linux-ubuntu16.04-x86_64-gcc-5.4.0-trilinos-12.12.1-kqc52moweigxqxzwzfqajc6ocxwdwn4w.spack ######################################################################## 100.0% gpg: Signature made Sat Nov 10 05:30:15 2018 UTC using RSA key ID 3B7C69B2 gpg: Good signature from "sc-tutorial (GPG created for Spack) <<EMAIL>.gov>" [unknown] gpg: WARNING: This key is not certified with a trusted signature! gpg: There is no indication that the signature belongs to the owner. Primary key fingerprint: <KEY> ==> Successfully installed trilinos from binary cache [+] /home/spack1/spack/opt/spack/linux-ubuntu16.04-x86_64/gcc-5.4.0/trilinos-12.12.1-kqc52moweigxqxzwzfqajc6ocxwdwn4w ``` We see that every package in the trilinos DAG that depends on MPI now uses `mpich`. 
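The same anonymous-dependency syntax used on the install line also works as a query, which is handy for telling the two trilinos installations apart. For example, a constraint on the MPI provider narrows the listing (a sketch; the hashes on your machine will differ):

```
$ spack find -l trilinos ^mpich
```

This should match only the second installation, since the first one satisfies its MPI dependency with `openmpi`.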
``` $ spack find -d trilinos ==> 2 installed packages -- linux-ubuntu16.04-x86_64 / gcc@5.4.0 --- trilinos@12.12.1 ^boost@1.68.0 ^bzip2@1.0.6 ^zlib@1.2.11 ^glm@0.9.7.1 ^hdf5@1.10.4 ^openmpi@3.1.3 ^hwloc@1.11.9 ^libpciaccess@0.13.5 ^libxml2@2.9.8 ^xz@5.2.4 ^numactl@2.0.11 ^hypre@2.15.1 ^openblas@0.3.3 ^matio@1.5.9 ^metis@5.1.0 ^mumps@5.1.1 ^netlib-scalapack@2.0.2 ^netcdf@4.6.1 ^parmetis@4.0.3 ^suite-sparse@5.3.0 trilinos@12.12.1 ^boost@1.68.0 ^bzip2@1.0.6 ^zlib@1.2.11 ^glm@0.9.7.1 ^hdf5@1.10.4 ^mpich@3.2.1 ^hypre@2.15.1 ^openblas@0.3.3 ^matio@1.5.9 ^metis@5.1.0 ^mumps@5.1.1 ^netlib-scalapack@2.0.2 ^netcdf@4.6.1 ^parmetis@4.0.3 ^suite-sparse@5.3.0 ``` As we discussed before, the `spack find -d` command shows the dependency information as a tree. While that is often sufficient, many complicated packages, including trilinos, have dependencies that cannot be fully represented as a tree. Again, the `spack graph` command shows the full DAG of the dependency information. ``` $ spack graph trilinos o trilinos |\ | |\ | | |\ | | | |\ | | | | |\ | | | | | |\ | | | | | | |\ | | | | | | | |\ | | | | | | | | |\ | | | | | | | | | |\ | | | | | | | | | | |\ | | | | | | | | | | | |\ | | | | | | | | | | | | |\ o | | | | | | | | | | | | | suite-sparse |\ \ \ \ \ \ \ \ \ \ \ \ \ \ | |_|_|/ / / / / / / / / / / |/| | | | | | | | | | | | | | |\ \ \ \ \ \ \ \ \ \ \ \ \ | | |_|_|_|_|_|/ / / / / / / | |/| | | | | | | | | | | | | | | |_|_|_|_|_|_|_|/ / / | | |/| | | | | | | | | | | | | o | | | | | | | | | parmetis | | |/| | | | | | | | | | | |/|/| | | | | | | | | | | | | |/ / / / / / / / / | | | | | | o | | | | | mumps | |_|_|_|_|/| | | | | | |/| | | |_|/| | | | | | | | | |/| |/ / / / / / | | | | |/| | | | | | | | | | o | | | | | | netlib-scalapack | |_|_|/| | | | | | | |/| | |/| | | | | | | | | |/|/ / / / / / / | o | | | | | | | | metis | |/ / / / / / / / | | | | | | | o | glm | | |_|_|_|_|/ / | |/| | | | | | | o | | | | | | cmake | |\ \ \ \ \ \ \ | o | | | | | | | openssl | |\ \ \ \ \ \ \ \ 
| | | | | o | | | | netcdf | | |_|_|/| | | | | | |/| | |/| | | | | | | | | | |\ \ \ \ \ | | | | | | | |_|/ / | | | | | | |/| | | | | | | | | | o | | matio | | |_|_|_|_|/| | | | |/| | | | |/ / / | | | | | | | o | hypre | |_|_|_|_|_|/| | |/| | | | |_|/ / | | | | |/| | | | | | | | | o | hdf5 | | |_|_|_|/| | | |/| | | |/ / | | | | |/| | | | | | o | | openmpi | | |_|/| | | | |/| | | | | | | | | |\ \ \ | | | | | o | | hwloc | | | | |/| | | | | | | | |\ \ \ | | | | | | |\ \ \ | | | | | | o | | | libxml2 | | |_|_|_|/| | | | | |/| | | |/| | | | | | | | | | | | | o boost | | |_|_|_|_|_|_|/| | |/| | | | | | | | | o | | | | | | | | zlib | / / / / / / / / | | | | | o | | | xz | | | | | / / / | | | | | o | | libpciaccess | | | | |/| | | | | | | | |\ \ \ | | | | | o | | | util-macros | | | | | / / / | | | o | | | | numactl | | | |\ \ \ \ \ | | | | |_|_|/ / | | | |/| | | | | | | | |\ \ \ \ | | | | | |_|/ / | | | | |/| | | | | | | | |\ \ \ | | | | | o | | | automake | | |_|_|/| | | | | |/| | | | | | | | | | | | |/ / / | | | | | o | | autoconf | | |_|_|/| | | | |/| | |/ / / | | | |/| | | | o | | | | | perl | o | | | | | gdbm | o | | | | | readline | |/ / / / / | o | | | | ncurses | | |_|/ / | |/| | | | o | | | pkgconf | / / / o | | | openblas / / / | o | libtool |/ / o | m4 o | libsigsegv / o bzip2 o diffutils ``` You can control how the output is displayed with a number of options. The ASCII output from `spack graph` can be difficult to parse for complicated packages. The output can be changed to the `graphviz` `.dot` format using the `--dot` flag, and then rendered with the `dot` tool. ``` $ spack graph --dot trilinos | dot -Tpdf > trilinos_graph.pdf ``` #### Uninstalling Packages Earlier we installed many configurations each of zlib and tcl. Now we will go through and uninstall some of those packages that we didn’t really need.
```
$ spack find -d tcl
==> 3 installed packages
-- linux-ubuntu16.04-x86_64 / clang@3.8.0-2ubuntu4 ---
    tcl@8.6.8
        ^zlib@1.2.8

-- linux-ubuntu16.04-x86_64 / gcc@5.4.0 ---
    tcl@8.6.8
        ^zlib@1.2.8

    tcl@8.6.8
        ^zlib@1.2.11

$ spack find zlib
==> 6 installed packages.
-- linux-ubuntu16.04-x86_64 / clang@3.8.0-2ubuntu4 ---
    zlib@1.2.8  zlib@1.2.11

-- linux-ubuntu16.04-x86_64 / gcc@4.7 ---
    zlib@1.2.11

-- linux-ubuntu16.04-x86_64 / gcc@5.4.0 ---
    zlib@1.2.8  zlib@1.2.8  zlib@1.2.11
```

We can uninstall packages by spec using the same syntax as install.

```
$ spack uninstall zlib %gcc@4.7
==> The following packages will be uninstalled:

-- linux-ubuntu16.04-x86_64 / gcc@4.7 ---
bq2wtdx zlib@1.2.11%gcc+optimize+pic+shared

==> Do you want to proceed? [y/N] y
==> Successfully uninstalled zlib@1.2.11%gcc@4.7+optimize+pic+shared arch=linux-ubuntu16.04-x86_64 /bq2wtdx

$ spack find -lf zlib
==> 5 installed packages.
-- linux-ubuntu16.04-x86_64 / clang@3.8.0-2ubuntu4 ---
i426yu3 zlib@1.2.8%clang
4pt75q7 zlib@1.2.11%clang

-- linux-ubuntu16.04-x86_64 / gcc@5.4.0 ---
bkyl5bh zlib@1.2.8%gcc
64mns5m zlib@1.2.8%gcc cppflags="-O3"
5nus6kn zlib@1.2.11%gcc
```

We can also uninstall packages by referring only to their hash. We can use either `-f` (force) or `-R` (remove dependents as well) to remove packages that are required by another installed package.

```
$ spack uninstall zlib/i426
==> Error: Will not uninstall zlib@1.2.8%clang@3.8.0-2ubuntu4/i426yu3
The following packages depend on it:

-- linux-ubuntu16.04-x86_64 / clang@3.8.0-2ubuntu4 ---
6wc66et tcl@8.6.8%clang

==> Error: Use `spack uninstall --dependents` to uninstall these dependencies as well.

$ spack uninstall -R zlib/i426
==> The following packages will be uninstalled:

-- linux-ubuntu16.04-x86_64 / clang@3.8.0-2ubuntu4 ---
6wc66et tcl@8.6.8%clang
i426yu3 zlib@1.2.8%clang+optimize+pic+shared

==> Do you want to proceed?
[y/N] y ==> Successfully uninstalled tcl@8.6.8%clang@3.8.0-2ubuntu4 arch=linux-ubuntu16.04-x86_64 /6wc66et ==> Successfully uninstalled zlib@1.2.8%clang@3.8.0-2ubuntu4+optimize+pic+shared arch=linux-ubuntu16.04-x86_64 /i426yu3 ``` Spack will not uninstall packages that are not sufficiently specified. The `-a` (all) flag can be used to uninstall multiple packages at once. ``` $ spack uninstall trilinos ==> Error: trilinos matches multiple packages: -- linux-ubuntu16.04-x86_64 / gcc@5.4.0 --- rlsruav trilinos@12.12.1%gcc~alloptpkgs+amesos+amesos2+anasazi+aztec+belos+boost build_type=RelWithDebInfo ~cgns~complex~dtk+epetra+epetraext+exodus+explicit_template_instantiation~float+fortran~fortrilinos+gtest+hdf5+hypre+ifpack+ifpack2~intrepid~intrepid2~isorropia+kokkos+metis~minitensor+ml+muelu+mumps~nox~openmp~phalanx~piro~pnetcdf~python~rol~rythmos+sacado~shards+shared~stk+suite-sparse~superlu~superlu-dist~teko~tempus+teuchos+tpetra~x11~xsdkflags~zlib+zoltan+zoltan2 kqc52mo trilinos@12.12.1%gcc~alloptpkgs+amesos+amesos2+anasazi+aztec+belos+boost build_type=RelWithDebInfo ~cgns~complex~dtk+epetra+epetraext+exodus+explicit_template_instantiation~float+fortran~fortrilinos+gtest+hdf5+hypre+ifpack+ifpack2~intrepid~intrepid2~isorropia+kokkos+metis~minitensor+ml+muelu+mumps~nox~openmp~phalanx~piro~pnetcdf~python~rol~rythmos+sacado~shards+shared~stk+suite-sparse~superlu~superlu-dist~teko~tempus+teuchos+tpetra~x11~xsdkflags~zlib+zoltan+zoltan2 ==> Error: You can either: a) use a more specific spec, or b) use `spack uninstall --all` to uninstall ALL matching specs. 
$ spack uninstall /rlsr ==> The following packages will be uninstalled: -- linux-ubuntu16.04-x86_64 / gcc@5.4.0 --- rlsruav trilinos@12.12.1%gcc~alloptpkgs+amesos+amesos2+anasazi+aztec+belos+boost build_type=RelWithDebInfo ~cgns~complex~dtk+epetra+epetraext+exodus+explicit_template_instantiation~float+fortran~fortrilinos+gtest+hdf5+hypre+ifpack+ifpack2~intrepid~intrepid2~isorropia+kokkos+metis~minitensor+ml+muelu+mumps~nox~openmp~phalanx~piro~pnetcdf~python~rol~rythmos+sacado~shards+shared~stk+suite-sparse~superlu~superlu-dist~teko~tempus+teuchos+tpetra~x11~xsdkflags~zlib+zoltan+zoltan2 ==> Do you want to proceed? [y/N] y ==> Successfully uninstalled trilinos@12.12.1%gcc@5.4.0~alloptpkgs+amesos+amesos2+anasazi+aztec+belos+boost build_type=RelWithDebInfo ~cgns~complex~dtk+epetra+epetraext+exodus+explicit_template_instantiation~float+fortran~fortrilinos+gtest+hdf5+hypre+ifpack+ifpack2~intrepid~intrepid2~isorropia+kokkos+metis~minitensor+ml+muelu+mumps~nox~openmp~phalanx~piro~pnetcdf~python~rol~rythmos+sacado~shards+shared~stk+suite-sparse~superlu~superlu-dist~teko~tempus+teuchos+tpetra~x11~xsdkflags~zlib+zoltan+zoltan2 arch=linux-ubuntu16.04-x86_64 /rlsruav ``` #### Advanced `spack find` Usage[¶](#advanced-spack-find-usage) We will go over some additional uses for the `spack find` command not already covered in the [Installing Spack](#basics-tutorial-install) and [Uninstalling Packages](#basics-tutorial-uninstall) sections. The `spack find` command can accept what we call “anonymous specs.” These are expressions in spec syntax that do not contain a package name. For example, `spack find ^mpich` will return every installed package that depends on mpich, and `spack find cppflags="-O3"` will return every package which was built with `cppflags="-O3"`. 
```
$ spack find ^mpich
==> 8 installed packages
-- linux-ubuntu16.04-x86_64 / gcc@5.4.0 ---
hdf5@1.10.4  matio@1.5.9  netcdf@4.6.1  parmetis@4.0.3
hypre@2.15.1  mumps@5.1.1  netlib-scalapack@2.0.2  trilinos@12.12.1

$ spack find cppflags=-O3
==> 1 installed packages.
-- linux-ubuntu16.04-x86_64 / gcc@5.4.0 ---
zlib@1.2.8
```

The `find` command can also show which packages were installed explicitly (rather than pulled in as a dependency) using the `-x` flag. The `-X` flag shows implicit installs only. The `find` command can also show the path to which a Spack package was installed using the `-p` flag.

```
$ spack find -px
==> 10 installed packages
-- linux-ubuntu16.04-x86_64 / clang@3.8.0-2ubuntu4 ---
zlib@1.2.11  /home/spack1/spack/opt/spack/linux-ubuntu16.04-x86_64/clang-3.8.0-2ubuntu4/zlib-1.2.11-4pt75q7qq6lygf3hgnona4lyc2uwedul

-- linux-ubuntu16.04-x86_64 / gcc@5.4.0 ---
hdf5@1.10.4  /home/spack1/spack/opt/spack/linux-ubuntu16.04-x86_64/gcc-5.4.0/hdf5-1.10.4-5vcv5r67vpjzenq4apyebshclelnzuja
hdf5@1.10.4  /home/spack1/spack/opt/spack/linux-ubuntu16.04-x86_64/gcc-5.4.0/hdf5-1.10.4-ozyvmhzdew66byarohm4p36ep7wtcuiw
hdf5@1.10.4  /home/spack1/spack/opt/spack/linux-ubuntu16.04-x86_64/gcc-5.4.0/hdf5-1.10.4-xxd7syhgej6onpyfyewxqcqe7ltkt7ob
tcl@8.6.8  /home/spack1/spack/opt/spack/linux-ubuntu16.04-x86_64/gcc-5.4.0/tcl-8.6.8-am4pbatrtga3etyusg2akmsvrswwxno2
tcl@8.6.8  /home/spack1/spack/opt/spack/linux-ubuntu16.04-x86_64/gcc-5.4.0/tcl-8.6.8-qhwyccywhx2i6s7ob2gvjrjtj3rnfuqt
trilinos@12.12.1  /home/spack1/spack/opt/spack/linux-ubuntu16.04-x86_64/gcc-5.4.0/trilinos-12.12.1-kqc52moweigxqxzwzfqajc6ocxwdwn4w
zlib@1.2.8  /home/spack1/spack/opt/spack/linux-ubuntu16.04-x86_64/gcc-5.4.0/zlib-1.2.8-bkyl5bhuep6fmhuxzkmhqy25qefjcvzc
zlib@1.2.8  /home/spack1/spack/opt/spack/linux-ubuntu16.04-x86_64/gcc-5.4.0/zlib-1.2.8-64mns5mvdacqvlashkf7v6lqrxixhmxu
zlib@1.2.11  /home/spack1/spack/opt/spack/linux-ubuntu16.04-x86_64/gcc-5.4.0/zlib-1.2.11-5nus6knzumx4ik2yl44jxtgtsl7d54xb
```

#### Customizing Compilers[¶](#customizing-compilers)

Spack manages a list of available compilers on the system, detected automatically from the user's `PATH` variable. The `spack compilers` command is an alias for the command `spack compiler list`.

```
$ spack compilers
==> Available compilers
-- clang ubuntu16.04-x86_64 ---
clang@3.8.0-2ubuntu4  clang@3.7.1-2ubuntu2

-- gcc ubuntu16.04-x86_64 ---
gcc@5.4.0  gcc@4.7
```

The compilers are maintained in a YAML file. Later in the tutorial you will learn how to configure compilers by hand for special cases. Spack also has tools to add compilers, and compilers built with Spack can be added to the configuration.

```
$ spack install gcc@7.2.0
==> libsigsegv is already installed in /home/spack1/spack/opt/spack/linux-ubuntu16.04-x86_64/gcc-5.4.0/libsigsegv-2.11-fypapcprssrj3nstp6njprskeyynsgaz
==> m4 is already installed in /home/spack1/spack/opt/spack/linux-ubuntu16.04-x86_64/gcc-5.4.0/m4-1.4.18-suf5jtcfehivwfesrc5hjy72r4nukyel
==> pkgconf is already installed in /home/spack1/spack/opt/spack/linux-ubuntu16.04-x86_64/gcc-5.4.0/pkgconf-1.4.2-fovrh7alpft646n6mhis5mml6k6e5f4v
==> ncurses is already installed in /home/spack1/spack/opt/spack/linux-ubuntu16.04-x86_64/gcc-5.4.0/ncurses-6.1-3o765ourmesfrji6yeclb4wb5w54aqbh
==> readline is already installed in /home/spack1/spack/opt/spack/linux-ubuntu16.04-x86_64/gcc-5.4.0/readline-7.0-nxhwrg7xwc6nbsm2v4ezwe63l6nfidbi
==> gdbm is already installed in /home/spack1/spack/opt/spack/linux-ubuntu16.04-x86_64/gcc-5.4.0/gdbm-1.14.1-q4fpyuo7ouhkeq6d3oabtrppctpvxmes
==> perl is already installed in /home/spack1/spack/opt/spack/linux-ubuntu16.04-x86_64/gcc-5.4.0/perl-5.26.2-ic2kyoadgp3dxfejcbllyplj2wf524fo
==> autoconf is already installed in /home/spack1/spack/opt/spack/linux-ubuntu16.04-x86_64/gcc-5.4.0/autoconf-2.69-3sx2gxeibc4oasqd4o5h6lnwpcpsgd2q
==> automake is already installed in /home/spack1/spack/opt/spack/linux-ubuntu16.04-x86_64/gcc-5.4.0/automake-1.16.1-rymw7imfehycqxzj4nuy2oiw3abegooy
==>
libtool is already installed in /home/spack1/spack/opt/spack/linux-ubuntu16.04-x86_64/gcc-5.4.0/libtool-2.4.6-o2pfwjf44353ajgr42xqtvzyvqsazkgu ==> Installing gmp ==> Searching for binary cache of gmp ==> Finding buildcaches in /mirror/build_cache ==> Installing gmp from binary cache ==> Fetching file:///mirror/build_cache/linux-ubuntu16.04-x86_64/gcc-5.4.0/gmp-6.1.2/linux-ubuntu16.04-x86_64-gcc-5.4.0-gmp-6.1.2-qc4qcfz4monpllc3nqupdo7vwinf73sw.spack ######################################################################## 100.0% gpg: Signature made Sat Nov 10 05:18:16 2018 UTC using RSA key ID 3B7C69B2 gpg: Good signature from "sc-tutorial (GPG created for Spack) <<EMAIL>>" [unknown] gpg: WARNING: This key is not certified with a trusted signature! gpg: There is no indication that the signature belongs to the owner. Primary key fingerprint: <KEY> ==> Successfully installed gmp from binary cache [+] /home/spack1/spack/opt/spack/linux-ubuntu16.04-x86_64/gcc-5.4.0/gmp-6.1.2-qc4qcfz4monpllc3nqupdo7vwinf73sw ==> Installing isl ==> Searching for binary cache of isl ==> Installing isl from binary cache ==> Fetching file:///mirror/build_cache/linux-ubuntu16.04-x86_64/gcc-5.4.0/isl-0.18/linux-ubuntu16.04-x86_64-gcc-5.4.0-isl-0.18-vttqoutnsmjpm3ogb52rninksc7hq5ax.spack ######################################################################## 100.0% gpg: Signature made Sat Nov 10 05:05:19 2018 UTC using RSA key ID 3B7C69B2 gpg: Good signature from "sc-tutorial (GPG created for Spack) <<EMAIL>>" [unknown] gpg: WARNING: This key is not certified with a trusted signature! gpg: There is no indication that the signature belongs to the owner. 
Primary key fingerprint: <KEY> ==> Successfully installed isl from binary cache [+] /home/spack1/spack/opt/spack/linux-ubuntu16.04-x86_64/gcc-5.4.0/isl-0.18-vttqoutnsmjpm3ogb52rninksc7hq5ax ==> Installing mpfr ==> Searching for binary cache of mpfr ==> Installing mpfr from binary cache ==> Fetching file:///mirror/build_cache/linux-ubuntu16.04-x86_64/gcc-5.4.0/mpfr-3.1.6/linux-ubuntu16.04-x86_64-gcc-5.4.0-mpfr-3.1.6-jnt2nnp5pmvikbw7opueajlbwbhmjxyv.spack ######################################################################## 100.0% gpg: Signature made Sat Nov 10 05:32:07 2018 UTC using RSA key ID 3B7C69B2 gpg: Good signature from "sc-tutorial (GPG created for Spack) <<EMAIL>>" [unknown] gpg: WARNING: This key is not certified with a trusted signature! gpg: There is no indication that the signature belongs to the owner. Primary key fingerprint: <KEY> ==> Successfully installed mpfr from binary cache [+] /home/spack1/spack/opt/spack/linux-ubuntu16.04-x86_64/gcc-5.4.0/mpfr-3.1.6-jnt2nnp5pmvikbw7opueajlbwbhmjxyv ==> Installing mpc ==> Searching for binary cache of mpc ==> Installing mpc from binary cache ==> Fetching file:///mirror/build_cache/linux-ubuntu16.04-x86_64/gcc-5.4.0/mpc-1.1.0/linux-ubuntu16.04-x86_64-gcc-5.4.0-mpc-1.1.0-iuf3gc3zpgr4n4mditnxhff6x3joxi27.spack ######################################################################## 100.0% gpg: Signature made Sat Nov 10 05:30:35 2018 UTC using RSA key ID 3B7C69B2 gpg: Good signature from "sc-tutorial (GPG created for Spack) <<EMAIL>>" [unknown] gpg: WARNING: This key is not certified with a trusted signature! gpg: There is no indication that the signature belongs to the owner. 
Primary key fingerprint: 95C7 1787 7AC0 0FFD AA8F D6E9 9CFA 4A45 3B7C 69B2
==> Successfully installed mpc from binary cache
[+] /home/spack1/spack/opt/spack/linux-ubuntu16.04-x86_64/gcc-5.4.0/mpc-1.1.0-iuf3gc3zpgr4n4mditnxhff6x3joxi27
==> zlib is already installed in /home/spack1/spack/opt/spack/linux-ubuntu16.04-x86_64/gcc-5.4.0/zlib-1.2.11-5nus6knzumx4ik2yl44jxtgtsl7d54xb
==> Installing gcc
==> Searching for binary cache of gcc
==> Finding buildcaches in /mirror/build_cache
==> Installing gcc from binary cache
==> Fetching file:///mirror/build_cache/linux-ubuntu16.04-x86_64/gcc-5.4.0/gcc-7.2.0/linux-ubuntu16.04-x86_64-gcc-5.4.0-gcc-7.2.0-b7smjjcsmwe5u5fcsvjmonlhlzzctnfs.spack
######################################################################## 100.0%
gpg: Signature made Sat Nov 10 05:22:47 2018 UTC using RSA key ID 3B7C69B2
gpg: Good signature from "sc-tutorial (GPG created for Spack) <<EMAIL>>" [unknown]
gpg: WARNING: This key is not certified with a trusted signature!
gpg: There is no indication that the signature belongs to the owner.
Primary key fingerprint: <KEY>
==> Successfully installed gcc from binary cache
[+] /home/spack1/spack/opt/spack/linux-ubuntu16.04-x86_64/gcc-5.4.0/gcc-7.2.0-b7smjjcsmwe5u5fcsvjmonlhlzzctnfs

$ spack find -p gcc
==> 1 installed package
-- linux-ubuntu16.04-x86_64 / gcc@5.4.0 ---
gcc@7.2.0  /home/spack1/spack/opt/spack/linux-ubuntu16.04-x86_64/gcc-5.4.0/gcc-7.2.0-b7smjjcsmwe5u5fcsvjmonlhlzzctnfs
```

We can add gcc to Spack as an available compiler using the `spack compiler add` command. This will allow future packages to build with `gcc@7.2.0`.
``` $ spack compiler add `spack location -i gcc@7.2.0` ==> Added 1 new compiler to /home/ubuntu/.spack/linux/compilers.yaml gcc@7.2.0 ==> Compilers are defined in the following files: /home/ubuntu/.spack/linux/compilers.yaml ``` We can also remove compilers from our configuration using `spack compiler remove <compiler_spec>` ``` $ spack compiler remove gcc@7.2.0 ==> Removed compiler gcc@7.2.0 ``` ### Configuration Tutorial[¶](#configuration-tutorial) This tutorial will guide you through various configuration options that allow you to customize Spack’s behavior with respect to software installation. We will first cover the configuration file hierarchy. Then, we will cover configuration options for compilers, focusing on how they can be used to extend Spack’s compiler auto-detection. Next, we will cover the packages configuration file, focusing on how it can be used to override default build options as well as specify external package installations to use. Finally, we will briefly touch on the config configuration file, which manages more high-level Spack configuration options. For all of these features we will demonstrate how we build up a full configuration file. For some we will then demonstrate how the configuration affects the install command, and for others we will use the `spack spec` command to demonstrate how the configuration changes have affected Spack’s concretization algorithm. The provided output is all from a server running Ubuntu version 16.04. #### Configuration Scopes[¶](#configuration-scopes) Depending on your use case, you may want to provide configuration settings common to everyone on your team, or you may want to set default behaviors specific to a single user account. Spack provides six configuration *scopes* to handle this customization. 
These scopes, in order of decreasing priority, are:

| Scope | Directory |
| --- | --- |
| Command-line | N/A |
| Custom | Custom directory, specified with `--config-scope` |
| User | `~/.spack/` |
| Site | `$SPACK_ROOT/etc/spack/` |
| System | `/etc/spack/` |
| Defaults | `$SPACK_ROOT/etc/spack/defaults/` |

Spack's default configuration settings reside in `$SPACK_ROOT/etc/spack/defaults`. These are useful for reference, but should never be directly edited. To override these settings, create new configuration files in any of the higher-priority configuration scopes.

A particular cluster may have multiple Spack installations associated with different projects. To provide settings common to all Spack installations, put your configuration files in `/etc/spack`. To provide settings specific to a particular Spack installation, you can use the `$SPACK_ROOT/etc/spack` directory. For settings specific to a particular user, you will want to add configuration files to the `~/.spack` directory. When Spack first checked for compilers on your system, you may have noticed that it placed your compiler configuration in this directory.

Configuration settings can also be placed in a custom location, which is then specified on the command line via `--config-scope`. An example use case is managing two sets of configurations, one for development and another for production preferences. Settings specified on the command line have precedence over all other configuration scopes.

##### Platform-specific Scopes[¶](#platform-specific-scopes)

Some facilities manage multiple platforms from a single shared file system. In order to handle this, each of the configuration scopes listed above has two *sub-scopes*: platform-specific and platform-independent. For example, compiler settings can be stored in `compilers.yaml` configuration files in the following locations: 1. `~/.spack/<platform>/compilers.yaml` 2. `~/.spack/compilers.yaml` 3. `$SPACK_ROOT/etc/spack/<platform>/compilers.yaml` 4.
`$SPACK_ROOT/etc/spack/compilers.yaml` 5. `/etc/spack/<platform>/compilers.yaml` 6. `/etc/spack/compilers.yaml` 7. `$SPACK_ROOT/etc/spack/defaults/<platform>/compilers.yaml` 8. `$SPACK_ROOT/etc/spack/defaults/compilers.yaml`

These files are listed in decreasing order of precedence, so files in `~/.spack/<platform>` will override settings in `~/.spack`.

#### YAML Format[¶](#yaml-format)

Spack configurations are YAML dictionaries. Every configuration file begins with a top-level dictionary that tells Spack which configuration set it modifies. When Spack checks its configuration, the configuration scopes are updated as dictionaries in increasing order of precedence, allowing higher-precedence files to override lower ones. YAML dictionaries use a colon ":" to specify key-value pairs. Spack extends YAML syntax slightly to allow a double-colon "::" to specify a key-value pair. When a double-colon is used, Spack replaces what was in that section with the new value instead of adding to it. For example, a user compilers configuration file like the following:

```
compilers::
- compiler:
    environment: {}
    extra_rpaths: []
    flags: {}
    modules: []
    operating_system: ubuntu16.04
    paths:
      cc: /usr/bin/gcc
      cxx: /usr/bin/g++
      f77: /usr/bin/gfortran
      fc: /usr/bin/gfortran
    spec: gcc@5.4.0
    target: x86_64
```

ensures that no other compilers are used, as the user configuration scope is the last scope searched and the `compilers::` line replaces the information from all previous configuration files. If the same configuration file had a single colon instead of the double colon, it would add the GCC version 5.4.0 compiler to whatever other compilers were listed in other configuration files.

#### Compiler Configuration[¶](#compiler-configuration)

For most tasks, we can use Spack with the compilers auto-detected the first time Spack runs on a system. As discussed in the basic installation tutorial, we can also tell Spack where compilers are located using the `spack compiler add` command.
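The colon-versus-double-colon merge behavior described under YAML Format above can be sketched in a few lines of Python. This is an illustrative model only, not Spack's actual implementation; a trailing `::` on a dictionary key stands in for YAML's double-colon syntax:

```python
# Illustrative model of Spack configuration scopes: scopes are applied
# from lowest to highest precedence. A key written with a double colon
# (modeled here as a trailing "::") replaces the section outright,
# while a single colon merges into whatever was there before.

def merge_scopes(scopes):
    """Combine config dicts ordered from lowest to highest precedence."""
    result = {}
    for scope in scopes:
        for key, value in scope.items():
            if key.endswith("::"):
                # Double colon: wipe out the section and replace it.
                result[key.rstrip(":")] = value
            elif isinstance(value, dict) and isinstance(result.get(key), dict):
                # Single colon: merge the new entries into the section.
                result[key] = {**result[key], **value}
            else:
                result[key] = value
    return result

defaults = {"compilers": {"gcc@5.4.0": "/usr/bin/gcc"}}
user_merge = {"compilers": {"clang@3.8.0": "/usr/bin/clang"}}
user_replace = {"compilers::": {"clang@3.8.0": "/usr/bin/clang"}}

# With ":", the user scope adds clang alongside the default gcc.
print(merge_scopes([defaults, user_merge]))
# With "::", the user scope discards the default compilers entirely.
print(merge_scopes([defaults, user_replace]))
```

With a single colon the user's clang entry is merged alongside the default gcc entry; with the double colon the defaults are discarded, which is exactly why the `compilers::` example above guarantees no other compilers are used.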
However, in some circumstances we want even more fine-grained control over the compilers available. This section will teach you how to exercise that control using the compilers configuration file. We will start by opening the compilers configuration file:

```
$ spack config edit compilers
```

```
compilers:
- compiler:
    environment: {}
    extra_rpaths: []
    flags: {}
    modules: []
    operating_system: ubuntu16.04
    paths:
      cc: /usr/bin/clang-3.7
      cxx: /usr/bin/clang++-3.7
      f77: null
      fc: null
    spec: clang@3.7.1-2ubuntu2
    target: x86_64
- compiler:
    environment: {}
    extra_rpaths: []
    flags: {}
    modules: []
    operating_system: ubuntu16.04
    paths:
      cc: /usr/bin/clang
      cxx: /usr/bin/clang++
      f77: null
      fc: null
    spec: clang@3.8.0-2ubuntu4
    target: x86_64
- compiler:
    environment: {}
    extra_rpaths: []
    flags: {}
    modules: []
    operating_system: ubuntu16.04
    paths:
      cc: /usr/bin/gcc-4.7
      cxx: /usr/bin/g++-4.7
      f77: /usr/bin/gfortran-4.7
      fc: /usr/bin/gfortran-4.7
    spec: gcc@4.7
    target: x86_64
- compiler:
    environment: {}
    extra_rpaths: []
    flags: {}
    modules: []
    operating_system: ubuntu16.04
    paths:
      cc: /usr/bin/gcc
      cxx: /usr/bin/g++
      f77: /usr/bin/gfortran
      fc: /usr/bin/gfortran
    spec: gcc@5.4.0
    target: x86_64
```

This specifies two versions of the GCC compiler and two versions of the Clang compiler, with no Flang compiler. Now suppose we have a code that we want to compile with the Clang compiler for C/C++ code, but with gfortran for Fortran components. We can do this by adding another entry to the `compilers.yaml` file.

```
- compiler:
    environment: {}
    extra_rpaths: []
    flags: {}
    modules: []
    operating_system: ubuntu16.04
    paths:
      cc: /usr/bin/clang
      cxx: /usr/bin/clang++
      f77: /usr/bin/gfortran
      fc: /usr/bin/gfortran
    spec: clang@3.8.0-gfortran
    target: x86_64
```

Let's talk about the sections of this compiler entry that we've changed. The biggest change we've made is to the `paths` section. This lists the paths to the compilers to use for each language/specification.
In this case, we point to the clang compiler for C/C++ and the gfortran compiler for both specifications of Fortran. We've also changed the `spec` entry for this compiler. The `spec` entry is effectively the name of the compiler for Spack. It consists of a name and a version number, separated by the `@` sigil. The name must be one of the supported compiler names in Spack (gcc, intel, pgi, xl, xl_r, clang, nag, cce, arm). The version number can be an arbitrary string of alphanumeric characters, as well as `-`, `.`, and `_`. We leave the `target` and `operating_system` sections unchanged. These sections specify when Spack can use different compilers, and are primarily useful for configuration files that will be used across multiple systems.

We can verify that our new compiler works by invoking it now:

```
$ spack install --no-cache zlib %clang@3.8.0-gfortran
...
```

This new compiler also works on Fortran codes:

```
$ spack install --no-cache cfitsio %clang@3.8.0-gfortran -bzip2
...
```

##### Compiler Flags[¶](#compiler-flags)

Some compilers may require specific compiler flags to work properly in a particular computing environment. Spack provides configuration options for setting compiler flags every time a specific compiler is invoked. These flags become part of the package spec and therefore of the build provenance. As on the command line, the flags are set through the implicit build variables `cflags`, `cxxflags`, `cppflags`, `fflags`, `ldflags`, and `ldlibs`. Let's open our compilers configuration file again and add a compiler flag.

```
- compiler:
    environment: {}
    extra_rpaths: []
    flags:
      cppflags: -g
    modules: []
    operating_system: ubuntu16.04
    paths:
      cc: /usr/bin/clang
      cxx: /usr/bin/clang++
      f77: /usr/bin/gfortran
      fc: /usr/bin/gfortran
    spec: clang@3.8.0-gfortran
    target: x86_64
```

We can test this out using the `spack spec` command to show how the spec is concretized.
```
$ spack spec cfitsio %clang@3.8.0-gfortran
Input spec
---
cfitsio%clang@3.8.0-gfortran

Normalized
---
cfitsio%clang@3.8.0-gfortran

Concretized
---
cfitsio@3.410%clang@3.8.0-gfortran cppflags="-g" +bzip2+shared arch=linux-ubuntu16.04-x86_64
    ^bzip2@1.0.6%clang@3.8.0-gfortran cppflags="-g" +shared arch=linux-ubuntu16.04-x86_64
```

We can see that `cppflags="-g"` has been added to every node in the DAG.

##### Advanced Compiler Configuration[¶](#advanced-compiler-configuration)

There are three fields of the compiler configuration entry that we have not yet talked about.

The `modules` field of the compiler is used primarily on Cray systems, but can be useful on any system that has compilers that are only useful when a particular module is loaded. Any modules in the `modules` field of the compiler configuration will be loaded as part of the build environment for packages using that compiler.

The `extra_rpaths` field of the compiler configuration is used for compilers that do not rpath all of their dependencies by default. Since compilers are often installed externally to Spack, Spack is unable to manage compiler dependencies and enforce rpath usage. This can lead to packages not properly finding link dependencies imposed by the compiler. For compilers that impose link dependencies on the resulting executables that are not rpath'ed into the executable automatically, the `extra_rpaths` field of the compiler configuration tells Spack which dependencies to rpath into every executable created by that compiler. The executables will then be able to find the link dependencies imposed by the compiler. As an example, this field can be set by:

```
- compiler:
    ...
    extra_rpaths:
    - /apps/intel/ComposerXE2017/compilers_and_libraries_2017.5.239/linux/compiler/lib/intel64_lin
    ...
```

The `environment` field of the compiler configuration is used for compilers that require environment variables to be set during build time.
For example, if your Intel compiler suite requires the `INTEL_LICENSE_FILE` environment variable to point to the proper license server, you can set this in `compilers.yaml` as follows:

```
- compiler:
    environment:
      set:
        INTEL_LICENSE_FILE: 1713@license4
    ...
```

In addition to `set`, `environment` also supports `unset`, `prepend-path`, and `append-path`.

#### Configuring Package Preferences[¶](#configuring-package-preferences)

Package preferences in Spack are managed through the `packages.yaml` configuration file. First, we will look at the default `packages.yaml` file.

```
$ spack config --scope defaults edit packages
```

```
# ---
# This file controls default concretization preferences for Spack.
#
# Settings here are versioned with Spack and are intended to provide
# sensible defaults out of the box. Spack maintainers should edit this
# file to keep it current.
#
# Users can override these settings by editing the following files.
#
# Per-spack-instance settings (overrides defaults):
#   $SPACK_ROOT/etc/spack/packages.yaml
#
# Per-user settings (overrides default and site settings):
#   ~/.spack/packages.yaml
# ---
packages:
  all:
    compiler: [gcc, intel, pgi, clang, xl, nag]
    providers:
      D: [ldc]
      awk: [gawk]
      blas: [openblas]
      daal: [intel-daal]
      elf: [elfutils]
      fftw-api: [fftw]
      gl: [mesa, opengl]
      glu: [mesa-glu, openglu]
      golang: [gcc]
      ipp: [intel-ipp]
      java: [jdk]
      jpeg: [libjpeg-turbo, libjpeg]
      lapack: [openblas]
      mkl: [intel-mkl]
      mpe: [mpe2]
      mpi: [openmpi, mpich]
      opencl: [pocl]
      openfoam: [openfoam-com, openfoam-org, foam-extend]
      pil: [py-pillow]
      pkgconfig: [pkgconf, pkg-config]
      scalapack: [netlib-scalapack]
      szip: [libszip, libaec]
      tbb: [intel-tbb]
      unwind: [libunwind]
    permissions:
      read: world
      write: user
```

This sets the default preferences for compilers and for providers of virtual packages. To illustrate how this works, suppose we want to change the preferences to prefer the Clang compiler and to prefer MPICH over OpenMPI. Currently, we prefer GCC and OpenMPI.
``` $ spack spec hdf5 Input spec --- hdf5 Concretized --- hdf5@1.10.4%gcc@5.4.0~cxx~debug~fortran~hl+mpi+pic+shared~szip~threadsafe arch=linux-ubuntu16.04-x86_64 ^openmpi@3.1.3%gcc@5.4.0~cuda+cxx_exceptions fabrics= ~java~legacylaunchers~memchecker~pmi schedulers= ~sqlite3~thread_multiple+vt arch=linux-ubuntu16.04-x86_64 ^hwloc@1.11.9%gcc@5.4.0~cairo~cuda+libxml2+pci+shared arch=linux-ubuntu16.04-x86_64 ^libpciaccess@0.13.5%gcc@5.4.0 arch=linux-ubuntu16.04-x86_64 ^libtool@2.4.6%gcc@5.4.0 arch=linux-ubuntu16.04-x86_64 ^m4@1.4.18%gcc@5.4.0 patches=3877ab548f88597ab2327a2230ee048d2d07ace1062efe81fc92e91b7f39cd00,c0a408fbffb7255fcc75e26bd8edab116fc81d216bfd18b473668b7739a4158e,fc9b61654a3ba1a8d6cd78ce087e7c96366c290bc8d2c299f09828d793b853c8 +sigsegv arch=linux-ubuntu16.04-x86_64 ^libsigsegv@2.11%gcc@5.4.0 arch=linux-ubuntu16.04-x86_64 ^pkgconf@1.4.2%gcc@5.4.0 arch=linux-ubuntu16.04-x86_64 ^util-macros@1.19.1%gcc@5.4.0 arch=linux-ubuntu16.04-x86_64 ^libxml2@2.9.8%gcc@5.4.0~python arch=linux-ubuntu16.04-x86_64 ^xz@5.2.4%gcc@5.4.0 arch=linux-ubuntu16.04-x86_64 ^zlib@1.2.11%gcc@5.4.0+optimize+pic+shared arch=linux-ubuntu16.04-x86_64 ^numactl@2.0.11%gcc@5.4.0 patches=592f30f7f5f757dfc239ad0ffd39a9a048487ad803c26b419e0f96b8cda08c1a arch=linux-ubuntu16.04-x86_64 ^autoconf@2.69%gcc@5.4.0 arch=linux-ubuntu16.04-x86_64 ^perl@5.26.2%gcc@5.4.0+cpanm patches=0eac10ed90aeb0459ad8851f88081d439a4e41978e586ec743069e8b059370ac +shared+threads arch=linux-ubuntu16.04-x86_64 ^gdbm@1.14.1%gcc@5.4.0 arch=linux-ubuntu16.04-x86_64 ^readline@7.0%gcc@5.4.0 arch=linux-ubuntu16.04-x86_64 ^ncurses@6.1%gcc@5.4.0~symlinks~termlib arch=linux-ubuntu16.04-x86_64 ^automake@1.16.1%gcc@5.4.0 arch=linux-ubuntu16.04-x86_64 ``` Now we will open the packages configuration file and update our preferences. 
``` $ spack config edit packages ``` ``` packages: all: compiler: [clang, gcc, intel, pgi, xl, nag] providers: mpi: [mpich, openmpi] ``` Because of the configuration scoping we discussed earlier, this overrides the default settings just for these two items. ``` $ spack spec hdf5 Input spec --- hdf5 Concretized --- hdf5@1.10.4%clang@3.8.0-2ubuntu4~cxx~debug~fortran~hl+mpi+pic+shared~szip~threadsafe arch=linux-ubuntu16.04-x86_64 ^mpich@3.2.1%clang@3.8.0-2ubuntu4 device=ch3 +hydra netmod=tcp +pmi+romio~verbs arch=linux-ubuntu16.04-x86_64 ^findutils@4.6.0%clang@3.8.0-2ubuntu4 patches=84b916c0bf8c51b7e7b28417692f0ad3e7030d1f3c248ba77c42ede5c1c5d11e,bd9e4e5cc280f9753ae14956c4e4aa17fe7a210f55dd6c84aa60b12d106d47a2 arch=linux-ubuntu16.04-x86_64 ^autoconf@2.69%clang@3.8.0-2ubuntu4 arch=linux-ubuntu16.04-x86_64 ^m4@1.4.18%clang@3.8.0-2ubuntu4 patches=3877ab548f88597ab2327a2230ee048d2d07ace1062efe81fc92e91b7f39cd00,c0a408fbffb7255fcc75e26bd8edab116fc81d216bfd18b473668b7739a4158e,fc9b61654a3ba1a8d6cd78ce087e7c96366c290bc8d2c299f09828d793b853c8 +sigsegv arch=linux-ubuntu16.04-x86_64 ^libsigsegv@2.11%clang@3.8.0-2ubuntu4 arch=linux-ubuntu16.04-x86_64 ^perl@5.26.2%clang@3.8.0-2ubuntu4+cpanm patches=0eac10ed90aeb0459ad8851f88081d439a4e41978e586ec743069e8b059370ac +shared+threads arch=linux-ubuntu16.04-x86_64 ^gdbm@1.14.1%clang@3.8.0-2ubuntu4 arch=linux-ubuntu16.04-x86_64 ^readline@7.0%clang@3.8.0-2ubuntu4 arch=linux-ubuntu16.04-x86_64 ^ncurses@6.1%clang@3.8.0-2ubuntu4~symlinks~termlib arch=linux-ubuntu16.04-x86_64 ^pkgconf@1.4.2%clang@3.8.0-2ubuntu4 arch=linux-ubuntu16.04-x86_64 ^automake@1.16.1%clang@3.8.0-2ubuntu4 arch=linux-ubuntu16.04-x86_64 ^libtool@2.4.6%clang@3.8.0-2ubuntu4 arch=linux-ubuntu16.04-x86_64 ^texinfo@6.5%clang@3.8.0-2ubuntu4 arch=linux-ubuntu16.04-x86_64 ^zlib@1.2.11%clang@3.8.0-2ubuntu4+optimize+pic+shared arch=linux-ubuntu16.04-x86_64 ``` ##### Variant Preferences[¶](#variant-preferences) The packages configuration file can also set variant preferences for 
package variants. For example, let’s change our preferences to build all packages without shared libraries. We will accomplish this by turning off the `shared` variant on all packages that have one. ``` packages: all: compiler: [clang, gcc, intel, pgi, xl, nag] providers: mpi: [mpich, openmpi] variants: ~shared ``` We can check the effect of this command with `spack spec hdf5` again. ``` $ spack spec hdf5 Input spec --- hdf5 Concretized --- hdf5@1.10.4%clang@3.8.0-2ubuntu4~cxx~debug~fortran~hl+mpi+pic~shared~szip~threadsafe arch=linux-ubuntu16.04-x86_64 ^mpich@3.2.1%clang@3.8.0-2ubuntu4 device=ch3 +hydra netmod=tcp +pmi+romio~verbs arch=linux-ubuntu16.04-x86_64 ^findutils@4.6.0%clang@3.8.0-2ubuntu4 patches=84b916c0bf8c51b7e7b28417692f0ad3e7030d1f3c248ba77c42ede5c1c5d11e,bd9e4e5cc280f9753ae14956c4e4aa17fe7a210f55dd6c84aa60b12d106d47a2 arch=linux-ubuntu16.04-x86_64 ^autoconf@2.69%clang@3.8.0-2ubuntu4 arch=linux-ubuntu16.04-x86_64 ^m4@1.4.18%clang@3.8.0-2ubuntu4 patches=3877ab548f88597ab2327a2230ee048d2d07ace1062efe81fc92e91b7f39cd00,c0a408fbffb7255fcc75e26bd8edab116fc81d216bfd18b473668b7739a4158e,fc9b61654a3ba1a8d6cd78ce087e7c96366c290bc8d2c299f09828d793b853c8 +sigsegv arch=linux-ubuntu16.04-x86_64 ^libsigsegv@2.11%clang@3.8.0-2ubuntu4 arch=linux-ubuntu16.04-x86_64 ^perl@5.26.2%clang@3.8.0-2ubuntu4+cpanm patches=0eac10ed90aeb0459ad8851f88081d439a4e41978e586ec743069e8b059370ac ~shared+threads arch=linux-ubuntu16.04-x86_64 ^gdbm@1.14.1%clang@3.8.0-2ubuntu4 arch=linux-ubuntu16.04-x86_64 ^readline@7.0%clang@3.8.0-2ubuntu4 arch=linux-ubuntu16.04-x86_64 ^ncurses@6.1%clang@3.8.0-2ubuntu4~symlinks~termlib arch=linux-ubuntu16.04-x86_64 ^pkgconf@1.4.2%clang@3.8.0-2ubuntu4 arch=linux-ubuntu16.04-x86_64 ^automake@1.16.1%clang@3.8.0-2ubuntu4 arch=linux-ubuntu16.04-x86_64 ^libtool@2.4.6%clang@3.8.0-2ubuntu4 arch=linux-ubuntu16.04-x86_64 ^texinfo@6.5%clang@3.8.0-2ubuntu4 arch=linux-ubuntu16.04-x86_64 ^zlib@1.2.11%clang@3.8.0-2ubuntu4+optimize+pic~shared 
arch=linux-ubuntu16.04-x86_64 ``` So far we have only made global changes to the package preferences. As we’ve seen throughout this tutorial, hdf5 builds with MPI enabled by default in Spack. If we were working on a project that would routinely need serial hdf5, that might get annoying quickly, having to type `hdf5~mpi` all the time. Instead, we’ll update our preferences for hdf5. ``` packages: all: compiler: [clang, gcc, intel, pgi, xl, nag] providers: mpi: [mpich, openmpi] variants: ~shared hdf5: variants: ~mpi ``` Now hdf5 will concretize without an MPI dependency by default. ``` $ spack spec hdf5 Input spec --- hdf5 Concretized --- hdf5@1.10.4%clang@3.8.0-2ubuntu4~cxx~debug~fortran~hl~mpi+pic+shared~szip~threadsafe arch=linux-ubuntu16.04-x86_64 ^zlib@1.2.11%clang@3.8.0-2ubuntu4+optimize+pic~shared arch=linux-ubuntu16.04-x86_64 ``` In general, every attribute that we can set for all packages we can set separately for an individual package. ##### External Packages[¶](#external-packages) The packages configuration file also controls when Spack will build against an externally installed package. On these systems we have a pre-installed zlib. ``` packages: all: compiler: [clang, gcc, intel, pgi, xl, nag] providers: mpi: [mpich, openmpi] variants: ~shared hdf5: variants: ~mpi zlib: paths: zlib@1.2.8%gcc@5.4.0 arch=linux-ubuntu16.04-x86_64: /usr ``` Here, we’ve told Spack that zlib 1.2.8 is installed on our system. We’ve also told it the installation prefix where zlib can be found. We don’t know exactly which variants it was built with, but that’s okay. ``` $ spack spec hdf5 Input spec --- hdf5 Concretized --- hdf5@1.10.4%gcc@5.4.0~cxx~debug~fortran~hl~mpi+pic+shared~szip~threadsafe arch=linux-ubuntu16.04-x86_64 ^zlib@1.2.8%gcc@5.4.0+optimize+pic~shared arch=linux-ubuntu16.04-x86_64 ``` You’ll notice that Spack is now using the external zlib installation, but the compiler used to build zlib is now overriding our compiler preference of clang. 
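Stepping back, the scoping rule at work here — a per-package entry in `packages.yaml` overrides the catch-all `all` entry, attribute by attribute — can be sketched as a simple dictionary merge. This is an illustrative simplification, not Spack's actual concretizer code:

```python
# Illustrative sketch: per-package preferences override the "all" entry,
# attribute by attribute, the way packages.yaml entries layer in Spack.
def effective_prefs(packages_cfg, pkg_name):
    prefs = dict(packages_cfg.get("all", {}))       # start from the defaults
    prefs.update(packages_cfg.get(pkg_name, {}))    # per-package values win
    return prefs

cfg = {
    "all": {"variants": "~shared"},
    "hdf5": {"variants": "~mpi"},
}

# hdf5 gets its own variant preference; zlib falls back to "all".
print(effective_prefs(cfg, "hdf5")["variants"])  # ~mpi
print(effective_prefs(cfg, "zlib")["variants"])  # ~shared
```

With this layering, `hdf5` picks up `~mpi` while every other package inherits `~shared` from the `all` entry.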
If we explicitly specify clang: ``` $ spack spec hdf5 %clang Input spec --- hdf5%clang Concretized --- hdf5@1.10.4%clang@3.8.0-2ubuntu4~cxx~debug~fortran~hl~mpi+pic+shared~szip~threadsafe arch=linux-ubuntu16.04-x86_64 ^zlib@1.2.11%clang@3.8.0-2ubuntu4+optimize+pic~shared arch=linux-ubuntu16.04-x86_64 ``` Spack concretizes to both hdf5 and zlib being built with clang. This has a side-effect of rebuilding zlib. If we want to force Spack to use the system zlib, we have two choices. We can either specify it on the command line, or we can tell Spack that it’s not allowed to build its own zlib. We’ll go with the latter. ``` packages: all: compiler: [clang, gcc, intel, pgi, xl, nag] providers: mpi: [mpich, openmpi] variants: ~shared hdf5: variants: ~mpi zlib: paths: zlib@1.2.8%gcc@5.4.0 arch=linux-ubuntu16.04-x86_64: /usr buildable: False ``` Now Spack will be forced to choose the external zlib. ``` $ spack spec hdf5 %clang Input spec --- hdf5%clang Concretized --- hdf5@1.10.4%clang@3.8.0-2ubuntu4~cxx~debug~fortran~hl~mpi+pic+shared~szip~threadsafe arch=linux-ubuntu16.04-x86_64 ^zlib@1.2.8%gcc@5.4.0+optimize+pic~shared arch=linux-ubuntu16.04-x86_64 ``` This gets slightly more complicated with virtual dependencies. Suppose we don’t want to build our own MPI, but we now want a parallel version of hdf5? Well, fortunately we have mpich installed on these systems. ``` packages: all: compiler: [clang, gcc, intel, pgi, xl, nag] providers: mpi: [mpich, openmpi] variants: ~shared hdf5: variants: ~mpi zlib: paths: zlib@1.2.8%gcc@5.4.0 arch=linux-ubuntu16.04-x86_64: /usr buildable: False mpich: paths: mpich@3.2%gcc@5.4.0 device=ch3 +hydra netmod=tcp +pmi+romio~verbs arch=linux-ubuntu16.04-x86_64: /usr buildable: False ``` If we concretize `hdf5+mpi` with this configuration file, we will just build with an alternate MPI implementation. 
``` $ spack spec hdf5 %clang +mpi Input spec --- hdf5%clang+mpi Concretized --- hdf5@1.10.4%clang@3.8.0-2ubuntu4~cxx~debug~fortran~hl+mpi+pic+shared~szip~threadsafe arch=linux-ubuntu16.04-x86_64 ^openmpi@3.1.3%clang@3.8.0-2ubuntu4~cuda+cxx_exceptions fabrics= ~java~legacylaunchers~memchecker~pmi schedulers= ~sqlite3~thread_multiple+vt arch=linux-ubuntu16.04-x86_64 ^hwloc@1.11.9%clang@3.8.0-2ubuntu4~cairo~cuda+libxml2+pci~shared arch=linux-ubuntu16.04-x86_64 ^libpciaccess@0.13.5%clang@3.8.0-2ubuntu4 arch=linux-ubuntu16.04-x86_64 ^libtool@2.4.6%clang@3.8.0-2ubuntu4 arch=linux-ubuntu16.04-x86_64 ^m4@1.4.18%clang@3.8.0-2ubuntu4 patches=3877ab548f88597ab2327a2230ee048d2d07ace1062efe81fc92e91b7f39cd00,c0a408fbffb7255fcc75e26bd8edab116fc81d216bfd18b473668b7739a4158e,fc9b61654a3ba1a8d6cd78ce087e7c96366c290bc8d2c299f09828d793b853c8 +sigsegv arch=linux-ubuntu16.04-x86_64 ^libsigsegv@2.11%clang@3.8.0-2ubuntu4 arch=linux-ubuntu16.04-x86_64 ^pkgconf@1.4.2%clang@3.8.0-2ubuntu4 arch=linux-ubuntu16.04-x86_64 ^util-macros@1.19.1%clang@3.8.0-2ubuntu4 arch=linux-ubuntu16.04-x86_64 ^libxml2@2.9.8%clang@3.8.0-2ubuntu4~python arch=linux-ubuntu16.04-x86_64 ^xz@5.2.4%clang@3.8.0-2ubuntu4 arch=linux-ubuntu16.04-x86_64 ^zlib@1.2.8%gcc@5.4.0+optimize+pic~shared arch=linux-ubuntu16.04-x86_64 ^numactl@2.0.11%clang@3.8.0-2ubuntu4 patches=592f30f7f5f757dfc239ad0ffd39a9a048487ad803c26b419e0f96b8cda08c1a arch=linux-ubuntu16.04-x86_64 ^autoconf@2.69%clang@3.8.0-2ubuntu4 arch=linux-ubuntu16.04-x86_64 ^perl@5.26.2%clang@3.8.0-2ubuntu4+cpanm patches=0eac10ed90aeb0459ad8851f88081d439a4e41978e586ec743069e8b059370ac ~shared+threads arch=linux-ubuntu16.04-x86_64 ^gdbm@1.14.1%clang@3.8.0-2ubuntu4 arch=linux-ubuntu16.04-x86_64 ^readline@7.0%clang@3.8.0-2ubuntu4 arch=linux-ubuntu16.04-x86_64 ^ncurses@6.1%clang@3.8.0-2ubuntu4~symlinks~termlib arch=linux-ubuntu16.04-x86_64 ^automake@1.16.1%clang@3.8.0-2ubuntu4 arch=linux-ubuntu16.04-x86_64 ``` We have only expressed a preference for mpich over other MPI 
implementations, and Spack will happily build with any implementation we haven't forbidden it from building. We could resolve this by requesting `hdf5%clang+mpi^mpich` explicitly, or we can configure Spack not to use any other MPI implementation. Since we're focused on configuration here and the former can get tedious, we'll modify our `packages.yaml` file again. While we're at it, we can configure hdf5 to build with MPI by default again.

```
packages:
  all:
    compiler: [clang, gcc, intel, pgi, xl, nag]
    providers:
      mpi: [mpich, openmpi]
    variants: ~shared
  zlib:
    paths:
      zlib@1.2.8%gcc@5.4.0 arch=linux-ubuntu16.04-x86_64: /usr
    buildable: False
  mpich:
    paths:
      mpich@3.2%gcc@5.4.0 device=ch3 +hydra netmod=tcp +pmi+romio~verbs arch=linux-ubuntu16.04-x86_64: /usr
    buildable: False
  openmpi:
    buildable: False
  mvapich2:
    buildable: False
  intel-mpi:
    buildable: False
  intel-parallel-studio:
    buildable: False
  spectrum-mpi:
    buildable: False
  mpilander:
    buildable: False
  charm:
    buildable: False
  charmpp:
    buildable: False
```

Now that we have configured Spack not to build any of the possible providers for MPI, we can try again.

```
$ spack spec hdf5 %clang
Input spec
---
hdf5%clang

Concretized
---
hdf5@1.10.4%clang@3.8.0-2ubuntu4~cxx~debug~fortran~hl+mpi+pic~shared~szip~threadsafe arch=linux-ubuntu16.04-x86_64
    ^mpich@3.2%gcc@5.4.0 device=ch3 +hydra netmod=tcp +pmi+romio~verbs arch=linux-ubuntu16.04-x86_64
    ^zlib@1.2.8%gcc@5.4.0+optimize+pic~shared arch=linux-ubuntu16.04-x86_64
```

By configuring most of our package preferences in `packages.yaml`, we can cut down on the amount of work we need to do when specifying a spec on the command line. In addition to compiler and variant preferences, we can specify version preferences as well. Anything that you can specify on the command line can be specified in `packages.yaml` with the exact same spec syntax.

##### Installation Permissions[¶](#installation-permissions)

The `packages.yaml` file also controls the default permissions to use when installing a package.
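The provider choice Spack just made — walk the `providers:` preference list and take the first MPI implementation that is either buildable or registered as an external — can be sketched like this. This is an illustrative model of the behavior, not Spack's real selection logic:

```python
# Illustrative sketch: choose a provider for a virtual dependency by
# preference order, skipping entries marked "buildable: False" unless
# they are registered as externals via "paths".
def pick_provider(virtual, preferences, packages_cfg):
    for name in preferences[virtual]:
        entry = packages_cfg.get(name, {})
        buildable = entry.get("buildable", True)
        external = "paths" in entry
        if buildable or external:
            return name
    raise RuntimeError("no usable provider for " + virtual)

packages_cfg = {
    "mpich": {"paths": {"mpich@3.2%gcc@5.4.0": "/usr"}, "buildable": False},
    "openmpi": {"buildable": False},
}
prefs = {"mpi": ["mpich", "openmpi"]}
print(pick_provider("mpi", prefs, packages_cfg))  # mpich (the external one)
```

Once every other provider is unbuildable and has no external path, the external mpich is the only remaining choice — which is exactly what the concretization above shows.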
You’ll notice that by default, the installation prefix will be world readable but only user writable. Let’s say we need to install `converge`, a licensed software package. Since a specific research group, `fluid_dynamics`, pays for this license, we want to ensure that only members of this group can access the software. We can do this like so: ``` packages: converge: permissions: read: group group: fluid_dynamics ``` Now, only members of the `fluid_dynamics` group can use any `converge` installations. Warning Make sure to delete or move the `packages.yaml` you have been editing up to this point. Otherwise, it will change the hashes of your packages, leading to differences in the output of later tutorial sections. #### High-level Config[¶](#high-level-config) In addition to compiler and package settings, Spack allows customization of several high-level settings. These settings are stored in the generic `config.yaml` configuration file. You can see the default settings by running: ``` $ spack config --scope defaults edit config ``` ``` # --- # This is the default spack configuration file. # # Settings here are versioned with Spack and are intended to provide # sensible defaults out of the box. Spack maintainers should edit this # file to keep it current. # # Users can override these settings by editing the following files. # # Per-spack-instance settings (overrides defaults): # $SPACK_ROOT/etc/spack/config.yaml # # Per-user settings (overrides default and site settings): # ~/.spack/config.yaml # --- config: # This is the path to the root of the Spack install tree. # You can use $spack here to refer to the root of the spack instance. install_tree: $spack/opt/spack # Locations where templates should be found template_dirs: - $spack/share/spack/templates # Default directory layout install_path_scheme: "${ARCHITECTURE}/${COMPILERNAME}-${COMPILERVER}/${PACKAGE}-${VERSION}-${HASH}" # Locations where different types of modules should be installed. 
module_roots: tcl: $spack/share/spack/modules lmod: $spack/share/spack/lmod dotkit: $spack/share/spack/dotkit # Temporary locations Spack can try to use for builds. # # Spack will use the first one it finds that exists and is writable. # You can use $tempdir to refer to the system default temp directory # (as returned by tempfile.gettempdir()). # # A value of $spack/var/spack/stage indicates that Spack should run # builds directly inside its install directory without staging them in # temporary space. # # The build stage can be purged with `spack clean --stage`. build_stage: - $tempdir - /nfs/tmp2/$user - $spack/var/spack/stage # Cache directory for already downloaded source tarballs and archived # repositories. This can be purged with `spack clean --downloads`. source_cache: $spack/var/spack/cache # Cache directory for miscellaneous files, like the package index. # This can be purged with `spack clean --misc-cache` misc_cache: ~/.spack/cache # If this is false, tools like curl that use SSL will not verify # certifiates. (e.g., curl will use use the -k option) verify_ssl: true # If set to true, Spack will always check checksums after downloading # archives. If false, Spack skips the checksum step. checksum: true # If set to true, `spack install` and friends will NOT clean # potentially harmful variables from the build environment. Use wisely. dirty: false # The language the build environment will use. This will produce English # compiler messages by default, so the log parser can highlight errors. # If set to C, it will use English (see man locale). # If set to the empty string (''), it will use the language from the # user's environment. build_language: C # When set to true, concurrent instances of Spack will use locks to # avoid modifying the install tree, database file, etc. If false, Spack # will disable all locking, but you must NOT run concurrent instances # of Spack. 
For filesystems that don't support locking, you should set # this to false and run one Spack at a time, but otherwise we recommend # enabling locks. locks: true # The default number of jobs to use when running `make` in parallel. # If set to 4, for example, `spack install` will run `make -j4`. # If not set, all available cores are used by default. # build_jobs: 4 # If set to true, Spack will use ccache to cache C compiles. ccache: false # How long to wait to lock the Spack installation database. This lock is used # when Spack needs to manage its own package metadata and all operations are # expected to complete within the default time limit. The timeout should # therefore generally be left untouched. db_lock_timeout: 120 # How long to wait when attempting to modify a package (e.g. to install it). # This value should typically be 'null' (never time out) unless the Spack # instance only ever has a single user at a time, and only if the user # anticipates that a significant delay indicates that the lock attempt will # never succeed. package_lock_timeout: null ``` As you can see, many of the directories Spack uses can be customized. For example, you can tell Spack to install packages to a prefix outside of the `$SPACK_ROOT` hierarchy. Module files can be written to a central location if you are using multiple Spack instances. If you have a fast scratch file system, you can run builds from this file system with the following `config.yaml`: ``` config: build_stage: - /scratch/$user ``` On systems with compilers that absolutely *require* environment variables like `LD_LIBRARY_PATH`, it is possible to prevent Spack from cleaning the build environment with the `dirty` setting: ``` config: dirty: true ``` However, this is strongly discouraged, as it can pull unwanted libraries into the build. One last setting that may be of interest to many users is the ability to customize the parallelism of Spack builds. 
By default, Spack installs all packages in parallel with the number of jobs equal to the number of cores on the node. For example, on a node with 16 cores, this will look like: ``` $ spack install --no-cache --verbose zlib ==> Installing zlib ==> Using cached archive: /home/user/spack/var/spack/cache/zlib/zlib-1.2.11.tar.gz ==> Staging archive: /home/user/spack/var/spack/stage/zlib-1.2.11-5nus6knzumx4ik2yl44jxtgtsl7d54xb/zlib-1.2.11.tar.gz ==> Created stage in /home/user/spack/var/spack/stage/zlib-1.2.11-5nus6knzumx4ik2yl44jxtgtsl7d54xb ==> No patches needed for zlib ==> Building zlib [Package] ==> Executing phase: 'install' ==> './configure' '--prefix=/home/user/spack/opt/spack/linux-ubuntu16.04-x86_64/gcc-5.4.0/zlib-1.2.11-5nus6knzumx4ik2yl44jxtgtsl7d54xb' ... ==> 'make' '-j16' ... ==> 'make' '-j16' 'install' ... ==> Successfully installed zlib Fetch: 0.00s. Build: 1.03s. Total: 1.03s. [+] /home/user/spack/opt/spack/linux-ubuntu16.04-x86_64/gcc-5.4.0/zlib-1.2.11-5nus6knzumx4ik2yl44jxtgtsl7d54xb ``` As you can see, we are building with all 16 cores on the node. If you are on a shared login node, this can slow down the system for other users. If you have a strict ulimit or restriction on the number of available licenses, you may not be able to build at all with this many cores. On nodes with 64+ cores, you may not see a significant speedup of the build anyway. 
To limit the number of cores our build uses, set `build_jobs` like so:

```
config:
  build_jobs: 4
```

If we uninstall and reinstall zlib, we see that it now uses only 4 cores:

```
$ spack install --no-cache --verbose zlib
==> Installing zlib
==> Using cached archive: /home/user/spack/var/spack/cache/zlib/zlib-1.2.11.tar.gz
==> Staging archive: /home/user/spack/var/spack/stage/zlib-1.2.11-5nus6knzumx4ik2yl44jxtgtsl7d54xb/zlib-1.2.11.tar.gz
==> Created stage in /home/user/spack/var/spack/stage/zlib-1.2.11-5nus6knzumx4ik2yl44jxtgtsl7d54xb
==> No patches needed for zlib
==> Building zlib [Package]
==> Executing phase: 'install'
==> './configure' '--prefix=/home/user/spack/opt/spack/linux-ubuntu16.04-x86_64/gcc-5.4.0/zlib-1.2.11-5nus6knzumx4ik2yl44jxtgtsl7d54xb'
...
==> 'make' '-j4'
...
==> 'make' '-j4' 'install'
...
==> Successfully installed zlib
  Fetch: 0.00s.  Build: 1.03s.  Total: 1.03s.
[+] /home/user/spack/opt/spack/linux-ubuntu16.04-x86_64/gcc-5.4.0/zlib-1.2.11-5nus6knzumx4ik2yl44jxtgtsl7d54xb
```

Obviously, if you want to build everything in serial for whatever reason, you would set `build_jobs` to 1.

#### Examples[¶](#examples)

For examples of how other sites configure Spack, see <https://github.com/spack/spack-configs>. If you use Spack at your site and want to share your config files, feel free to submit a pull request!

### Package Creation Tutorial[¶](#package-creation-tutorial)

This tutorial will walk you through the steps behind building a simple package installation script. We'll focus on building an mpileaks package, which is an MPI debugging tool. By creating a package file, we're essentially giving Spack a recipe for how to build a particular piece of software. We're describing some of the software's dependencies, where to find the package, what commands and options are used to build the package from source, and more. Once we've specified a package's recipe, we can ask Spack to build that package in many different ways.
This tutorial assumes you have a basic familiarity with some of the Spack commands, and that you have a working version of Spack installed. If not, we suggest looking at Spack’s *Getting Started* guide. This tutorial also assumes you have at least a beginner’s-level familiarity with Python. Also note that this document is a tutorial. It can help you get started with packaging, but is not intended to be complete. See Spack’s [Packaging Guide](index.html#packaging-guide) for more complete documentation on this topic. #### Getting Started[¶](#getting-started) A few things before we get started: * We’ll refer to the Spack installation location via the environment variable `SPACK_ROOT`. You should point `SPACK_ROOT` at wherever you have Spack installed. * Add `$SPACK_ROOT/bin` to your `PATH` before you start. * Make sure your `EDITOR` environment variable is set to some text editor you like. * We’ll be writing Python code as part of this tutorial. You can find successive versions of the Python code in `$SPACK_ROOT/lib/spack/docs/tutorial/examples`. #### Creating the Package File[¶](#creating-the-package-file) We will use a separate package repository for the tutorial. Package repositories allow you to separate sets of packages that take precedence over one another. We will use the tutorial repo that ships with Spack to avoid breaking the builtin Spack packages. ``` $ spack repo add $SPACK_ROOT/var/spack/repos/tutorial/ ==> Added repo with namespace 'tutorial'. ``` Spack comes with a handy command to create a new package: `spack create`. This command is given the location of a package’s source code, downloads the code, and sets up some basic packaging infrastructure for you. 
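Part of that infrastructure is a checksum recorded for each downloaded version, which later installs use to verify the tarball. Computing such a checksum in Python is straightforward — this is just a sketch (Spack's own fetcher handles this for you, and the filename shown is hypothetical):

```python
# Illustrative sketch: hash a downloaded tarball the way a checksum entry
# in a package.py is produced/verified. Reads in chunks so large files
# don't need to fit in memory.
import hashlib

def checksum(path, algo="md5"):
    h = hashlib.new(algo)
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            h.update(chunk)
    return h.hexdigest()

# Hypothetical usage -- "mpileaks-1.0.tar.gz" stands in for a real download:
#     checksum("mpileaks-1.0.tar.gz")
```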
The mpileaks source code can be found on GitHub, and here’s what happens when we run `spack create` on it: ``` $ spack create -t generic -f https://github.com/hpc/mpileaks/releases/download/v1.0/mpileaks-1.0.tar.gz ==> This looks like a URL for mpileaks ==> Found 1 version of mpileaks: 1.0 https://github.com/LLNL/mpileaks/releases/download/v1.0/mpileaks-1.0.tar.gz ==> How many would you like to checksum? (default is 1, q to abort) 1 ==> Downloading... ==> Fetching https://github.com/LLNL/mpileaks/releases/download/v1.0/mpileaks-1.0.tar.gz ############################################################################# 100.0% ==> Checksummed 1 version of mpileaks ==> Using specified package template: 'generic' ==> Created template for mpileaks package ==> Created package file: /home/spack1/spack/var/spack/repos/builtin/packages/mpileaks/package.py ``` And Spack should spawn a text editor with this file: ``` # Copyright 2013-2018 Lawrence Livermore National Security, LLC and other # Spack Project Developers. See the top-level COPYRIGHT file for details. # # SPDX-License-Identifier: (Apache-2.0 OR MIT) # # This is a template package file for Spack. We've put "FIXME" # next to all the things you'll want to change. Once you've handled # them, you can save this file and test your package like this: # # spack install mpileaks # # You can edit this file again by typing: # # spack edit mpileaks # # See the Spack documentation for more information on packaging. # If you submit this package back to Spack as a pull request, # please first remove this boilerplate and all FIXME comments. # from spack import * class Mpileaks(Package): """FIXME: Put a proper description of your package here.""" # FIXME: Add a proper url for your package's homepage here. homepage = "http://www.example.com" url = "https://github.com/hpc/mpileaks/releases/download/v1.0/mpileaks-1.0.tar.gz" version('1.0', '8838c574b39202a57d7c2d68692718aa') # FIXME: Add dependencies if required. 
    # depends_on('foo')

    def install(self, spec, prefix):
        # FIXME: Unknown build system
        make()
        make('install')
```

Spack has created this file in `/home/spack1/spack/var/spack/repos/builtin/packages/mpileaks/package.py`. Take a moment to look over the file. There are a few placeholders that Spack has created, which we'll fill in as part of this tutorial:

* We'll document some information about this package in the comments.
* We'll fill in the dependency list for this package.
* We'll fill in some of the configuration arguments needed to build this package.

For the moment, exit your editor and let's see what happens when we try to build this package:

```
$ spack install mpileaks
==> No binary for mpileaks found: installing from source
==> Fetching file:///mirror/mpileaks/mpileaks-1.0.tar.gz
curl: (37) Couldn't open file /mirror/mpileaks/mpileaks-1.0.tar.gz
==> Fetching from file:///mirror/mpileaks/mpileaks-1.0.tar.gz failed.
==> Fetching https://github.com/hpc/mpileaks/releases/download/v1.0/mpileaks-1.0.tar.gz
######################################################################## 100.0%
==> Staging archive: /home/ubuntu/packaging/spack/var/spack/stage/mpileaks-1.0-sv75n3u5ev6mljwcezisz3slooozbbxu/mpileaks-1.0.tar.gz
==> Created stage in /home/ubuntu/packaging/spack/var/spack/stage/mpileaks-1.0-sv75n3u5ev6mljwcezisz3slooozbbxu
==> No patches needed for mpileaks
==> Building mpileaks [Package]
==> Executing phase: 'install'
==> Error: ProcessError: Command exited with status 2:
    'make' '-j16'

1 error found in build log:
     1     ==> Executing phase: 'install'
     2     ==> 'make' '-j16'
  >> 3     make: *** No targets specified and no makefile found. Stop.

See build log for details:
  /home/ubuntu/packaging/spack/var/spack/stage/mpileaks-1.0-sv75n3u5ev6mljwcezisz3slooozbbxu/mpileaks-1.0/spack-build.out
```

This obviously didn't work; we need to fill in the package-specific information.
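The "1 error found in build log" summary above comes from Spack scanning the build log for error-like lines. A rough sketch of that idea (Spack's real log parser is considerably smarter):

```python
# Illustrative sketch: flag error-like lines in a build log, with their
# 1-based line numbers, similar in spirit to Spack's build-log summary.
import re

ERROR_RE = re.compile(r"(error:|\*\*\*)", re.IGNORECASE)

def find_errors(log_lines):
    return [(i + 1, line) for i, line in enumerate(log_lines)
            if ERROR_RE.search(line)]

log = [
    "==> Executing phase: 'install'",
    "==> 'make' '-j16'",
    "make: *** No targets specified and no makefile found. Stop.",
]
for lineno, line in find_errors(log):
    print(lineno, line)
```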
Specifically, Spack didn't try to build any of mpileaks' dependencies, nor did it use the proper configure arguments. Let's start fixing things.

#### Package Documentation[¶](#package-documentation)

We can bring the `package.py` file back into our `EDITOR` with the `spack edit` command:

```
$ spack edit mpileaks
```

Let's remove some of the `FIXME` comments, and add links to the mpileaks homepage and document what mpileaks does. I'm also going to cut out the Copyright clause at this point to keep this tutorial document shorter, but you shouldn't do that normally. The results of these changes can be found in `$SPACK_ROOT/lib/spack/docs/tutorial/examples/1.package.py` and are below. Make these changes to your `package.py`:

```
from spack import *


class Mpileaks(Package):
    """Tool to detect and report MPI objects like MPI_Requests and MPI_Datatypes."""

    homepage = "https://github.com/hpc/mpileaks"
    url = "https://github.com/hpc/mpileaks/releases/download/v1.0/mpileaks-1.0.tar.gz"  # NOQA

    version('1.0', '8838c574b39202a57d7c2d68692718aa')

    # FIXME: Add dependencies if required.
    # depends_on('foo')

    def install(self, spec, prefix):
        # FIXME: Unknown build system
        make()
        make('install')
```

We've filled in the comment that describes what this package does and added a link to the web site. That won't help us build yet, but it will allow Spack to provide some documentation on this package to other users:

```
$ spack info mpileaks
Package:   mpileaks

Description:
    Tool to detect and report MPI objects like MPI_Requests and MPI_Datatypes.
Homepage: https://github.com/hpc/mpileaks

Tags: None

Preferred version:
    1.0    https://github.com/hpc/mpileaks/releases/download/v1.0/mpileaks-1.0.tar.gz

Safe versions:
    1.0    https://github.com/hpc/mpileaks/releases/download/v1.0/mpileaks-1.0.tar.gz

Variants: None

Installation Phases:
    install

Build Dependencies: None

Link Dependencies: None

Run Dependencies: None

Virtual Packages: None
```

As we fill in more information about this package, the `spack info` command will become more informative. Now let's start making this package build.

#### Dependencies[¶](#dependencies)

The mpileaks package depends on three other packages: `mpi`, `adept-utils`, and `callpath`. Let's add those via the `depends_on` command in our `package.py` (this version is in `$SPACK_ROOT/lib/spack/docs/tutorial/examples/2.package.py`):

```
from spack import *


class Mpileaks(Package):
    """Tool to detect and report MPI objects like MPI_Requests and MPI_Datatypes."""

    homepage = "https://github.com/hpc/mpileaks"
    url = "https://github.com/hpc/mpileaks/releases/download/v1.0/mpileaks-1.0.tar.gz"

    version('1.0', '8838c574b39202a57d7c2d68692718aa')

    depends_on('mpi')
    depends_on('adept-utils')
    depends_on('callpath')

    def install(self, spec, prefix):
        # FIXME: Unknown build system
        make()
        make('install')
```

Now when we go to build mpileaks, Spack will fetch and build these dependencies before building mpileaks. Note that the mpi dependency is a different kind of beast than the adept-utils and callpath dependencies; there is no mpi package available in Spack. Instead, mpi is a virtual dependency. Spack may satisfy that dependency by installing packages such as `openmpi` or `mvapich`. See the [Packaging Guide](index.html#packaging-guide) for more information on virtual dependencies. Now when we try to install this package, a lot more happens:

```
$ spack install mpileaks
...
==> Successfully installed libdwarf from binary cache [+] /home/ubuntu/packaging/spack/opt/spack/linux-ubuntu16.04-x86_64/gcc-5.4.0/libdwarf-20180129-p4jeflorwlnkoq2vpuyocwrbcht2ayak ==> Installing callpath ==> Searching for binary cache of callpath ==> Installing callpath from binary cache ==> Fetching file:///mirror/build_cache/linux-ubuntu16.04-x86_64/gcc-5.4.0/callpath-1.0.4/linux-ubuntu16.04-x86_64-gcc-5.4.0-callpath-1.0.4-empvyxdkc4j4pwg7gznwhbiumruey66x.spack ######################################################################## 100.0% gpg: Signature made Sat 10 Nov 2018 05:30:21 AM UTC using RSA key ID 3B7C69B2 gpg: Good signature from "sc-tutorial (GPG created for Spack) <<EMAIL>>" [unknown] gpg: WARNING: This key is not certified with a trusted signature! gpg: There is no indication that the signature belongs to the owner. Primary key fingerprint: <KEY> 69B2 ==> Successfully installed callpath from binary cache [+] /home/ubuntu/packaging/spack/opt/spack/linux-ubuntu16.04-x86_64/gcc-5.4.0/callpath-1.0.4-empvyxdkc4j4pwg7gznwhbiumruey66x ==> Installing mpileaks ==> Searching for binary cache of mpileaks ==> No binary for mpileaks found: installing from source ==> Using cached archive: /home/ubuntu/packaging/spack/var/spack/cache/mpileaks/mpileaks-1.0.tar.gz ==> Staging archive: /home/ubuntu/packaging/spack/var/spack/stage/mpileaks-1.0-csoikctsalli4cdkkdk377gprkc472rb/mpileaks-1.0.tar.gz ==> Created stage in /home/ubuntu/packaging/spack/var/spack/stage/mpileaks-1.0-csoikctsalli4cdkkdk377gprkc472rb ==> No patches needed for mpileaks ==> Building mpileaks [Package] ==> Executing phase: 'install' ==> Error: ProcessError: Command exited with status 2: 'make' '-j16' 1 error found in build log: 1 ==> Executing phase: 'install' 2 ==> 'make' '-j16' >> 3 make: *** No targets specified and no makefile found. Stop. 
See build log for details:
  /home/ubuntu/packaging/spack/var/spack/stage/mpileaks-1.0-csoikctsalli4cdkkdk377gprkc472rb/mpileaks-1.0/spack-build.out
```

Note that this command may take a while to run and produce more output if you don't have an MPI already installed or configured in Spack. Now Spack has identified all of our dependencies and made sure they have been built. It found the `openmpi` package that will satisfy our `mpi` dependency, and the `callpath` and `adept-utils` packages to satisfy our concrete dependencies.

#### Debugging Package Builds[¶](#debugging-package-builds)

Our `mpileaks` package is still not building. It may be obvious to many of you that we never ran the configure script. Let's add a call to `configure()` to the top of the install routine. The resulting `package.py` is in `$SPACK_ROOT/lib/spack/docs/tutorial/examples/3.package.py`:

```
from spack import *


class Mpileaks(Package):
    """Tool to detect and report MPI objects like MPI_Requests and MPI_Datatypes."""

    homepage = "https://github.com/hpc/mpileaks"
    url = "https://github.com/hpc/mpileaks/releases/download/v1.0/mpileaks-1.0.tar.gz"

    version('1.0', '8838c574b39202a57d7c2d68692718aa')

    depends_on('mpi')
    depends_on('adept-utils')
    depends_on('callpath')

    def install(self, spec, prefix):
        configure()
        make()
        make('install')
```

If we re-run we still get errors:

```
$ spack install mpileaks
...
==> Installing mpileaks ==> Searching for binary cache of mpileaks ==> Finding buildcaches in /mirror/build_cache ==> No binary for mpileaks found: installing from source ==> Using cached archive: /home/ubuntu/packaging/spack/var/spack/cache/mpileaks/mpileaks-1.0.tar.gz ==> Staging archive: /home/ubuntu/packaging/spack/var/spack/stage/mpileaks-1.0-csoikctsalli4cdkkdk377gprkc472rb/mpileaks-1.0.tar.gz ==> Created stage in /home/ubuntu/packaging/spack/var/spack/stage/mpileaks-1.0-csoikctsalli4cdkkdk377gprkc472rb ==> No patches needed for mpileaks ==> Building mpileaks [Package] ==> Executing phase: 'install' ==> Error: ProcessError: Command exited with status 1: './configure' 1 error found in build log: 25 checking for /home/ubuntu/packaging/spack/opt/spack/linux-ubuntu16.04-x86_64/gcc- 5.4.0/openmpi-3.1.3-3njc4q5pqdpptq6jvqjrezkffwokv2sx/bin/mpicc... /home/ubuntu/pa ckaging/spack/opt/spack/linux-ubuntu16.04-x86_64/gcc-5.4.0/openmpi-3.1.3-3njc4q5p qdpptq6jvqjrezkffwokv2sx/bin/mpicc 26 Checking whether /home/ubuntu/packaging/spack/opt/spack/linux-ubuntu16.04-x86_64/ gcc-5.4.0/openmpi-3.1.3-3njc4q5pqdpptq6jvqjrezkffwokv2sx/bin/mpicc responds to '- showme:compile'... no 27 Checking whether /home/ubuntu/packaging/spack/opt/spack/linux-ubuntu16.04-x86_64/ gcc-5.4.0/openmpi-3.1.3-3njc4q5pqdpptq6jvqjrezkffwokv2sx/bin/mpicc responds to '- showme'... no 28 Checking whether /home/ubuntu/packaging/spack/opt/spack/linux-ubuntu16.04-x86_64/ gcc-5.4.0/openmpi-3.1.3-3njc4q5pqdpptq6jvqjrezkffwokv2sx/bin/mpicc responds to '- compile-info'... no 29 Checking whether /home/ubuntu/packaging/spack/opt/spack/linux-ubuntu16.04-x86_64/ gcc-5.4.0/openmpi-3.1.3-3njc4q5pqdpptq6jvqjrezkffwokv2sx/bin/mpicc responds to '- show'... 
no 30 ./configure: line 4809: Echo: command not found >> 31 configure: error: unable to locate adept-utils installation See build log for details: /home/ubuntu/packaging/spack/var/spack/stage/mpileaks-1.0-csoikctsalli4cdkkdk377gprkc472rb/mpileaks-1.0/spack-build.out ``` Again, the problem may be obvious. But let’s pretend we’re not all intelligent developers and use this opportunity spend some time debugging. We have a few options that can tell us about what’s going wrong: As per the error message, Spack has given us a `spack-build.out` debug log: ``` ==> Executing phase: 'install' ==> './configure' checking metadata... no checking installation directory variables... yes checking for a BSD-compatible install... /usr/bin/install -c checking whether build environment is sane... yes checking for a thread-safe mkdir -p... /bin/mkdir -p checking for gawk... gawk checking whether make sets $(MAKE)... yes checking for gcc... /home/spack1/spack/lib/spack/env/gcc/gcc checking for C compiler default output file name... a.out checking whether the C compiler works... yes checking whether we are cross compiling... no checking for suffix of executables... checking for suffix of object files... o checking whether we are using the GNU C compiler... yes checking whether /home/spack1/spack/lib/spack/env/gcc/gcc accepts -g... yes checking for /home/spack1/spack/lib/spack/env/gcc/gcc option to accept ISO C89... none needed checking for style of include used by make... GNU checking dependency style of /home/spack1/spack/lib/spack/env/gcc/gcc... gcc3 checking whether /home/spack1/spack/lib/spack/env/gcc/gcc and cc understand -c and -o together... yes checking whether we are using the GNU C++ compiler... yes checking whether /home/spack1/spack/lib/spack/env/gcc/g++ accepts -g... yes checking dependency style of /home/spack1/spack/lib/spack/env/gcc/g++... 
gcc3 checking for /home/spack1/spack/opt/spack/linux-ubuntu16.04-x86_64/gcc-5.4.0/openmpi-3.0.0-yo5qkfvumpmgmvlbalqcadu46j5bd52f/bin/mpicc... /home/spack1/spack/opt/spack/linux-ubuntu16.04-x86_64/gcc-5.4.0/openmpi-3.0.0-yo5qkfvumpmgmvlbalqcadu46j5bd52f/bin/mpicc Checking whether /home/spack1/spack/opt/spack/linux-ubuntu16.04-x86_64/gcc-5.4.0/openmpi-3.0.0-yo5qkfvumpmgmvlbalqcadu46j5bd52f/bin/mpicc responds to '-showme:compile'... yes configure: error: unable to locate adept-utils installation ``` This gives us the output from the build, and mpileaks isn’t finding its `adept-utils` package. Spack has automatically added the include and library directories of `adept-utils` to the compiler’s search path, but some packages like mpileaks can sometimes be picky and still want things spelled out on their command line. But let’s continue to pretend we’re not brilliant developers, and explore some other debugging paths: We can also enter the build area and try to manually run the build: ``` $ spack build-env mpileaks bash $ spack cd mpileaks ``` The `spack build-env` command spawned a new shell that contains the same environment that Spack used to build the mpileaks package (you can substitute your favorite shell for bash). The `spack cd` command changed our working directory to the last attempted build for mpileaks. From here we can manually re-run the build: ``` $ ./configure checking metadata... no checking installation directory variables... yes checking for a BSD-compatible install... /usr/bin/install -c checking whether build environment is sane... yes checking for a thread-safe mkdir -p... /bin/mkdir -p checking for gawk... gawk checking whether make sets $(MAKE)... yes checking for gcc... /home/spack1/spack/lib/spack/env/gcc/gcc checking for C compiler default output file name... a.out checking whether the C compiler works... yes checking whether we are cross compiling... no checking for suffix of executables... checking for suffix of object files...
o checking whether we are using the GNU C compiler... yes checking whether /home/spack1/spack/lib/spack/env/gcc/gcc accepts -g... yes checking for /home/spack1/spack/lib/spack/env/gcc/gcc option to accept ISO C89... none needed checking for style of include used by make... GNU checking dependency style of /home/spack1/spack/lib/spack/env/gcc/gcc... gcc3 checking whether /home/spack1/spack/lib/spack/env/gcc/gcc and cc understand -c and -o together... yes checking whether we are using the GNU C++ compiler... yes checking whether /home/spack1/spack/lib/spack/env/gcc/g++ accepts -g... yes checking dependency style of /home/spack1/spack/lib/spack/env/gcc/g++... gcc3 checking for /home/spack1/spack/opt/spack/linux-ubuntu16.04-x86_64/gcc-5.4.0/openmpi-3.0.0-yo5qkfvumpmgmvlbalqcadu46j5bd52f/bin/mpicc... /home/spack1/spack/opt/spack/linux-ubuntu16.04-x86_64/gcc-5.4.0/openmpi-3.0.0-yo5qkfvumpmgmvlbalqcadu46j5bd52f/bin/mpicc Checking whether /home/spack1/spack/opt/spack/linux-ubuntu16.04-x86_64/gcc-5.4.0/openmpi-3.0.0-yo5qkfvumpmgmvlbalqcadu46j5bd52f/bin/mpicc responds to '-showme:compile'... yes configure: error: unable to locate adept-utils installation ``` We’re seeing the same error, but now we’re in a shell where we can run the command ourselves and debug as needed. We could, for example, run `./configure --help` to see what options we can use to specify dependencies. We can use the `exit` command to leave the shell spawned by `spack build-env`. #### Specifying Configure Arguments[¶](#specifying-configure-arguments) Let’s add the configure arguments to the mpileaks `package.py`.
This version can be found in `$SPACK_ROOT/lib/spack/docs/tutorial/examples/4.package.py`: ``` from spack import * class Mpileaks(Package): """Tool to detect and report MPI objects like MPI_Requests and MPI_Datatypes.""" homepage = "https://github.com/hpc/mpileaks" url = "https://github.com/hpc/mpileaks/releases/download/v1.0/mpileaks-1.0.tar.gz" version('1.0', '8838c574b39202a57d7c2d68692718aa') depends_on('mpi') depends_on('adept-utils') depends_on('callpath') def install(self, spec, prefix): configure('--with-adept-utils=%s' % self.spec['adept-utils'].prefix, '--with-callpath=%s' % self.spec['callpath'].prefix, '--prefix=%s' % self.spec.prefix) make() make('install') ``` This is all we need for a working mpileaks! If we install now we’ll see: ``` $ spack install mpileaks ... ==> Installing mpileaks ==> Searching for binary cache of mpileaks ==> Finding buildcaches in /mirror/build_cache ==> No binary for mpileaks found: installing from source ==> Using cached archive: /home/ubuntu/packaging/spack/var/spack/cache/mpileaks/mpileaks-1.0.tar.gz ==> Staging archive: /home/ubuntu/packaging/spack/var/spack/stage/mpileaks-1.0-csoikctsalli4cdkkdk377gprkc472rb/mpileaks-1.0.tar.gz ==> Created stage in /home/ubuntu/packaging/spack/var/spack/stage/mpileaks-1.0-csoikctsalli4cdkkdk377gprkc472rb ==> No patches needed for mpileaks ==> Building mpileaks [Package] ==> Executing phase: 'install' ==> Successfully installed mpileaks Fetch: 0.00s. Build: 9.41s. Total: 9.41s. [+] /home/ubuntu/packaging/spack/opt/spack/linux-ubuntu16.04-x86_64/gcc-5.4.0/mpileaks-1.0-csoikctsalli4cdkkdk377gprkc472rb ``` There are some special circumstances in this package that are worth highlighting. Normally Spack would have automatically detected that mpileaks was an Autotools-based package when we ran `spack create` and made it an `AutotoolsPackage` class (except we added the `-t generic` option to skip this).
Instead of a full install routine we would have just written: ``` def configure_args(self): args = ['--with-adept-utils=%s' % self.spec['adept-utils'].prefix, '--with-callpath=%s' % self.spec['callpath'].prefix] return args ``` Similarly, if this had been a CMake-based package we would have been filling in a `cmake_args` function instead of `configure_args`. There are similar default package types for many build environments that will be discussed later in the tutorial. #### Variants[¶](#variants) We have a successful mpileaks build, but let’s take some time to improve it. `mpileaks` has a build-time option to truncate parts of the stack that it walks. Let’s add a variant to allow users to set this when they build in Spack. To do this, we’ll add a variant to our package, as per the following (see `$SPACK_ROOT/lib/spack/docs/tutorial/examples/5.package.py`): ``` from spack import * class Mpileaks(Package): """Tool to detect and report MPI objects like MPI_Requests and MPI_Datatypes.""" homepage = "https://github.com/hpc/mpileaks" url = "https://github.com/hpc/mpileaks/releases/download/v1.0/mpileaks-1.0.tar.gz" version('1.0', '8838c574b39202a57d7c2d68692718aa') variant('stackstart', values=int, default=0, description='Specify the number of stack frames to truncate.') depends_on('mpi') depends_on('adept-utils') depends_on('callpath') def install(self, spec, prefix): stackstart = int(self.spec.variants['stackstart'].value) confargs = ['--with-adept-utils=%s' % self.spec['adept-utils'].prefix, '--with-callpath=%s' % self.spec['callpath'].prefix, '--prefix=%s' % self.spec.prefix] if stackstart: confargs.extend(['--with-stack-start-c=%s' % stackstart, '--with-stack-start-fortran=%s' % stackstart]) configure(*confargs) make() make('install') ``` We’ve added the variant `stackstart`, and given it a default value of `0`. 
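The variant-to-flag translation in the `install()` method above can be isolated as a small helper. The following is a standalone sketch in plain Python (the function name `stackstart_args` is hypothetical, and the code runs outside of Spack), showing how a nonzero `stackstart` value turns into extra configure arguments while the default `0` adds nothing:

```python
def stackstart_args(stackstart):
    """Mirror the variant handling in install(): a truthy value
    produces the two --with-stack-start-* flags, 0 produces none."""
    args = []
    if stackstart:
        args.extend(['--with-stack-start-c=%s' % stackstart,
                     '--with-stack-start-fortran=%s' % stackstart])
    return args

print(stackstart_args(0))   # default: no extra flags -> []
print(stackstart_args(4))   # stackstart=4 adds both flags
```

Keeping the default at `0` makes the option strictly opt-in: users who never set the variant get exactly the configure line from the previous section.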
If we install now we can see the stackstart variant added to the configure line (output truncated for length): ``` $ spack install --verbose mpileaks stackstart=4 ... ==> Installing mpileaks ==> Searching for binary cache of mpileaks ==> Finding buildcaches in /mirror/build_cache ==> No binary for mpileaks found: installing from source ==> Using cached archive: /home/ubuntu/packaging/spack/var/spack/cache/mpileaks/mpileaks-1.0.tar.gz ==> Staging archive: /home/ubuntu/packaging/spack/var/spack/stage/mpileaks-1.0-meufjojkxve3l7rci2mbud3faidgplto/mpileaks-1.0.tar.gz ==> Created stage in /home/ubuntu/packaging/spack/var/spack/stage/mpileaks-1.0-meufjojkxve3l7rci2mbud3faidgplto ==> No patches needed for mpileaks ==> Building mpileaks [Package] ==> Executing phase: 'install' ==> './configure' '--with-adept-utils=/home/ubuntu/packaging/spack/opt/spack/linux-ubuntu16.04-x86_64/gcc-5.4.0/adept-utils-1.0.1-7tippnvo5g76wpijk7x5kwfpr3iqiaen' '--with-callpath=/home/ubuntu/packaging/spack/opt/spack/linux-ubuntu16.04-x86_64/gcc-5.4.0/callpath-1.0.4-empvyxdkc4j4pwg7gznwhbiumruey66x' '--prefix=/home/ubuntu/packaging/spack/opt/spack/linux-ubuntu16.04-x86_64/gcc-5.4.0/mpileaks-1.0-meufjojkxve3l7rci2mbud3faidgplto' '--with-stack-start-c=4' '--with-stack-start-fortran=4' ``` #### The Spec Object[¶](#the-spec-object) This tutorial has glossed over a few important features, which weren’t too relevant for mpileaks but may be useful for other packages. There were several places where we referenced the `self.spec` object. This is a powerful class for querying information about what we’re building. For example, you could use the spec to query information about how a package’s dependencies were built, or what compiler was being used, or what version of a package is being installed. Full documentation can be found in the [Packaging Guide](index.html#packaging-guide), but here are some quick snippets with common queries: * Am I building `mpileaks` version `1.1` or greater?
``` if self.spec.satisfies('@1.1:'): # Do things needed for 1.1+ ``` * Is `openmpi` the MPI I’m building with? ``` if self.spec['mpi'].name == 'openmpi': # Do openmpi things ``` * Am I building with `gcc` version `5.0.0` or lower? ``` if self.spec.satisfies('%gcc@:5.0.0'): # Add arguments specific to gcc versions up to 5.0.0 ``` * Am I building with the `debug` variant? ``` if self.spec.satisfies('+debug'): # Add -g option to configure flags ``` * Is my `dyninst` dependency greater than version `8.0`? ``` if self.spec['dyninst'].satisfies('@8.0:'): # Use newest dyninst options ``` More examples can be found in the thousands of packages already added to Spack in `$SPACK_ROOT/var/spack/repos/builtin/packages`. Good Luck! ### Environments, `spack.yaml`, and `spack.lock`[¶](#environments-spack-yaml-and-spack-lock) We’ve shown you how to install and remove packages with Spack. You can use [spack install](index.html#cmd-spack-install) to install packages, [spack uninstall](index.html#cmd-spack-uninstall) to remove them, and [spack find](index.html#cmd-spack-find) to look at and query what is installed. We’ve also shown you how to customize Spack’s installation with configuration files like [packages.yaml](index.html#build-settings). If you build a lot of software, or if you work on multiple projects, managing everything in one place can be overwhelming. The default `spack find` output may contain many packages, but you may want to focus on *just* the packages for a particular project. Moreover, you may want to include special configuration with your package groups, e.g., to build all the packages in the same group the same way. Spack **environments** provide a way to handle these problems. #### Environment basics[¶](#environment-basics) Let’s look at the output of `spack find` at this point in the tutorial.
``` $ bin/spack find ==> 70 installed packages -- linux-ubuntu16.04-x86_64 / clang@3.8.0-2ubuntu4 --- tcl@8.6.8 zlib@1.2.8 zlib@1.2.11 -- linux-ubuntu16.04-x86_64 / gcc@4.7 --- zlib@1.2.11 -- linux-ubuntu16.04-x86_64 / gcc@5.4.0 --- adept-utils@1.0.1 hdf5@1.10.4 mpc@1.1.0 perl@5.26.2 autoconf@2.69 hdf5@1.10.4 mpfr@3.1.6 pkgconf@1.4.2 automake@1.16.1 hdf5@1.10.4 mpich@3.2.1 readline@7.0 boost@1.68.0 hwloc@1.11.9 mpileaks@1.0 suite-sparse@5.3.0 bzip2@1.0.6 hypre@2.15.1 mumps@5.1.1 tar@1.30 callpath@1.0.4 hypre@2.15.1 mumps@5.1.1 tcl@8.6.8 cmake@3.12.3 isl@0.18 ncurses@6.1 tcl@8.6.8 diffutils@3.6 libdwarf@20180129 netcdf@4.6.1 texinfo@6.5 dyninst@9.3.2 libiberty@2.31.1 netcdf@4.6.1 trilinos@12.12.1 elfutils@0.173 libpciaccess@0.13.5 netlib-scalapack@2.0.2 trilinos@12.12.1 findutils@4.6.0 libsigsegv@2.11 netlib-scalapack@2.0.2 util-macros@1.19.1 gcc@7.2.0 libtool@2.4.6 numactl@2.0.11 xz@5.2.4 gdbm@1.14.1 libxml2@2.9.8 openblas@0.3.3 zlib@1.2.8 gettext@0.19.8.1 m4@1.4.18 openmpi@3.1.3 zlib@1.2.8 glm@0.9.7.1 matio@1.5.9 openssl@1.0.2o zlib@1.2.11 gmp@6.1.2 matio@1.5.9 parmetis@4.0.3 hdf5@1.10.4 metis@5.1.0 parmetis@4.0.3 ``` This is a complete, but cluttered view. There are packages built with both `openmpi` and `mpich`, as well as multiple variants of other packages, like `zlib`. The query mechanism we learned about in `spack find` can help, but it would be nice if we could start from a clean slate without losing what we’ve already done. ##### Creating and activating environments[¶](#creating-and-activating-environments) The `spack env` command can help. Let’s create a new environment: ``` $ spack env create myproject ==> Created environment 'myproject' in ~/spack/var/spack/environments/myproject ``` An environment is a virtualized `spack` instance that you can use for a specific purpose. 
You can see the environments we’ve created so far like this: ``` $ spack env list ==> 1 environments myproject ``` And you can **activate** an environment with `spack env activate`: ``` $ spack env activate myproject ``` Once you enter an environment, `spack find` shows only what is in the current environment. That’s nothing, so far: ``` $ spack find ==> In environment myproject ==> No root specs ==> 0 installed packages ``` The `spack find` output is still *slightly* different. It tells you that you’re in the `myproject` environment, so that you don’t panic when you see that there is nothing installed. It also says that there are *no root specs*. We’ll get back to what that means later. If you *only* want to check what environment you are in, you can use `spack env status`: ``` $ spack env status ==> In environment myproject ``` And, if you want to leave this environment and go back to normal Spack, you can use `spack env deactivate`. We like to use the `despacktivate` alias (which Spack sets up automatically) for short: ``` $ despacktivate # short alias for `spack env deactivate` $ spack env status ==> No active environment $ spack find netcdf@4.6.1 readline@7.0 zlib@1.2.11 diffutils@3.6 hdf5@1.10.4 m4@1.4.18 netcdf@4.6.1 suite-sparse@5.3.0 dyninst@10.0.0 hwloc@1.11.9 matio@1.5.9 netlib-scalapack@2.0.2 tar@1.30 elfutils@0.173 hypre@2.15.1 matio@1.5.9 netlib-scalapack@2.0.2 tcl@8.6.8 findutils@4.6.0 hypre@2.15.1 metis@5.1.0 numactl@2.0.11 tcl@8.6.8 gcc@7.2.0 intel-tbb@2019 mpc@1.1.0 openblas@0.3.3 texinfo@6.5~ ``` ##### Installing packages[¶](#installing-packages) Ok, now that we understand how creation and activation work, let’s go back to `myproject` and *install* a few packages: ``` $ spack env activate myproject $ spack install tcl ==> tcl is already installed in ~/spack/opt/spack/linux-ubuntu16.04-x86_64/gcc-5.4.0/tcl-8.6.8-qhwyccywhx2i6s7ob2gvjrjtj3rnfuqt $ spack install trilinos ==> trilinos is already installed in 
~/spack/opt/spack/linux-ubuntu16.04-x86_64/gcc-5.4.0/trilinos-12.12.1-rlsruavxqvwk2tgxzxboclbo6ykjf54r $ spack find ==> In environment myproject ==> Root specs tcl trilinos ==> 22 installed packages -- linux-ubuntu16.04-x86_64 / gcc@5.4.0 --- boost@1.68.0 hwloc@1.11.9 matio@1.5.9 netlib-scalapack@2.0.2 parmetis@4.0.3 xz@5.2.4 bzip2@1.0.6 hypre@2.15.1 metis@5.1.0 numactl@2.0.11 suite-sparse@5.3.0 zlib@1.2.11 glm@0.9.7.1 libpciaccess@0.13.5 mumps@5.1.1 openblas@0.3.3 tcl@8.6.8 hdf5@1.10.4 libxml2@2.9.8 netcdf@4.6.1 openmpi@3.1.3 trilinos@12.12.1 ``` We’ve installed `tcl` and `trilinos` in our environment, along with all of their dependencies. We call `tcl` and `trilinos` the **roots** because we asked for them explicitly. The other 20 packages listed under “installed packages” are present because they were needed as dependencies. So, these are the roots of the packages’ dependency graph. The “<package> is already installed” messages above are generated because we already installed these packages in previous steps of the tutorial, and we don’t have to rebuild them to put them in an environment. Now let’s create *another* project. 
We’ll call this one `myproject2`: ``` $ spack env create myproject2 ==> Created environment 'myproject2' in ~/spack/var/spack/environments/myproject2 $ spack env activate myproject2 $ spack install hdf5 ==> hdf5 is already installed in ~/spack/opt/spack/linux-ubuntu16.04-x86_64/gcc-5.4.0/hdf5-1.10.4-ozyvmhzdew66byarohm4p36ep7wtcuiw $ spack install trilinos ==> trilinos is already installed in ~/spack/opt/spack/linux-ubuntu16.04-x86_64/gcc-5.4.0/trilinos-12.12.1-rlsruavxqvwk2tgxzxboclbo6ykjf54r $ spack find ==> In environment myproject2 ==> Root specs hdf5 trilinos ==> 22 installed packages -- linux-ubuntu16.04-x86_64 / gcc@5.4.0 --- boost@1.68.0 hdf5@1.10.4 libxml2@2.9.8 netcdf@4.6.1 openmpi@3.1.3 xz@5.2.4 bzip2@1.0.6 hwloc@1.11.9 matio@1.5.9 netlib-scalapack@2.0.2 parmetis@4.0.3 zlib@1.2.11 glm@0.9.7.1 hypre@2.15.1 metis@5.1.0 numactl@2.0.11 suite-sparse@5.3.0 hdf5@1.10.4 libpciaccess@0.13.5 mumps@5.1.1 openblas@0.3.3 trilinos@12.12.1 ``` Now we have two environments: one with `tcl` and `trilinos`, and another with `hdf5` and `trilinos`. We can uninstall trilinos from `myproject2` as you would expect: ``` $ spack uninstall trilinos ==> The following packages will be uninstalled: -- linux-ubuntu16.04-x86_64 / gcc@5.4.0 --- rlsruav trilinos@12.12.1%gcc~alloptpkgs+amesos+amesos2+anasazi+aztec+belos+boost build_type=RelWithDebInfo ~cgns~complex~dtk+epetra+epetraext+exodus+explicit_template_instantiation~float+fortran~fortrilinos+gtest+hdf5+hypre+ifpack+ifpack2~intrepid~intrepid2~isorropia+kokkos+metis~minitensor+ml+muelu+mumps~nox~openmp~phalanx~piro~pnetcdf~python~rol~rythmos+sacado~shards+shared~stk+suite-sparse~superlu~superlu-dist~teko~tempus+teuchos+tpetra~x11~xsdkflags~zlib+zoltan+zoltan2 ==> Do you want to proceed? 
[y/N] y $ spack find ==> In environment myproject2 ==> Root specs hdf5 ==> 8 installed packages -- linux-ubuntu16.04-x86_64 / gcc@5.4.0 --- hdf5@1.10.4 libpciaccess@0.13.5 numactl@2.0.11 xz@5.2.4 hwloc@1.11.9 libxml2@2.9.8 openmpi@3.1.3 zlib@1.2.11 ``` Now there is only one root spec, `hdf5`, which requires fewer additional dependencies. However, we still needed `trilinos` for the `myproject` environment! What happened to it? Let’s switch back and see. ``` $ despacktivate $ spack env activate myproject $ spack find ==> In environment myproject ==> Root specs tcl trilinos ==> 22 installed packages -- linux-ubuntu16.04-x86_64 / gcc@5.4.0 --- boost@1.68.0 hwloc@1.11.9 matio@1.5.9 netlib-scalapack@2.0.2 parmetis@4.0.3 xz@5.2.4 bzip2@1.0.6 hypre@2.15.1 metis@5.1.0 numactl@2.0.11 suite-sparse@5.3.0 zlib@1.2.11 glm@0.9.7.1 libpciaccess@0.13.5 mumps@5.1.1 openblas@0.3.3 tcl@8.6.8 hdf5@1.10.4 libxml2@2.9.8 netcdf@4.6.1 openmpi@3.1.3 trilinos@12.12.1 ``` Spack is smart enough to realize that `trilinos` is still present in the other environment. Trilinos won’t *actually* be uninstalled unless it is no longer needed by any environments or packages. If it is still needed, it is only removed from the environment. #### Dealing with many specs at once[¶](#dealing-with-many-specs-at-once) In the above examples, we just used `install` and `uninstall`. There are other ways to deal with groups of packages, as well. 
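The environment-aware uninstall behavior described above can be modeled as simple reference counting: a spec stays in the installation tree as long as any environment still lists it. The sketch below is a toy model in plain Python, not Spack’s actual internals; the `EnvStore` class and its methods are hypothetical names for illustration:

```python
class EnvStore:
    """Toy model: specs are removed from the store only when no
    environment references them anymore."""

    def __init__(self):
        self.envs = {}      # environment name -> set of specs
        self.store = set()  # specs present in the install tree

    def install(self, env, spec):
        self.envs.setdefault(env, set()).add(spec)
        self.store.add(spec)

    def uninstall(self, env, spec):
        self.envs[env].discard(spec)
        # only drop the spec when no environment still needs it
        if not any(spec in specs for specs in self.envs.values()):
            self.store.discard(spec)

store = EnvStore()
store.install('myproject', 'trilinos')
store.install('myproject2', 'trilinos')
store.uninstall('myproject2', 'trilinos')
print('trilinos' in store.store)  # True: myproject still needs it
```

This is why the `spack uninstall` above only shrank `myproject2`’s package list: the underlying `trilinos` installation survives until every environment has let go of it.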
##### Adding specs[¶](#adding-specs) Let’s go back to our first `myproject` environment and *add* a few specs instead of installing them: ``` $ spack add hdf5 ==> Adding hdf5 to environment myproject $ spack add gmp ==> Adding gmp to environment myproject $ spack find ==> In environment myproject ==> Root specs gmp hdf5 tcl trilinos ==> 22 installed packages -- linux-ubuntu16.04-x86_64 / gcc@5.4.0 --- boost@1.68.0 hwloc@1.11.9 matio@1.5.9 netlib-scalapack@2.0.2 parmetis@4.0.3 xz@5.2.4 bzip2@1.0.6 hypre@2.15.1 metis@5.1.0 numactl@2.0.11 suite-sparse@5.3.0 zlib@1.2.11 glm@0.9.7.1 libpciaccess@0.13.5 mumps@5.1.1 openblas@0.3.3 tcl@8.6.8 hdf5@1.10.4 libxml2@2.9.8 netcdf@4.6.1 openmpi@3.1.3 trilinos@12.12.1 ``` Let’s take a close look at what happened. The two packages we added, `hdf5` and `gmp`, are present, but they’re not installed in the environment yet. `spack add` just adds *roots* to the environment, but it does not automatically install them. We can install *all* the as-yet uninstalled packages in an environment by simply running `spack install` with no arguments: ``` $ spack install ==> Concretizing hdf5 [+] ozyvmhz hdf5@1.10.4%gcc@5.4.0~cxx~debug~fortran~hl+mpi+pic+shared~szip~threadsafe arch=linux-ubuntu16.04-x86_64 [+] 3njc4q5 ^openmpi@3.1.3%gcc@5.4.0~cuda+cxx_exceptions fabrics= ~java~legacylaunchers~memchecker~pmi schedulers= ~sqlite3~thread_multiple+vt arch=linux-ubuntu16.04-x86_64 [+] 43tkw5m ^hwloc@1.11.9%gcc@5.4.0~cairo~cuda+libxml2+pci+shared arch=linux-ubuntu16.04-x86_64 [+] 5urc6tc ^libpciaccess@0.13.5%gcc@5.4.0 arch=linux-ubuntu16.04-x86_64 [+] o2pfwjf ^libtool@2.4.6%gcc@5.4.0 arch=linux-ubuntu16.04-x86_64 [+] suf5jtc ^m4@1.4.18%gcc@5.4.0 patches=3877ab548f88597ab2327a2230ee048d2d07ace1062efe81fc92e91b7f39cd00,c0a408fbffb7255fcc75e26bd8edab116fc81d216bfd18b473668b7739a4158e,fc9b61654a3ba1a8d6cd78ce087e7c96366c290bc8d2c299f09828d793b853c8 +sigsegv arch=linux-ubuntu16.04-x86_64 [+] fypapcp ^libsigsegv@2.11%gcc@5.4.0 arch=linux-ubuntu16.04-x86_64
[+] fovrh7a ^pkgconf@1.4.2%gcc@5.4.0 arch=linux-ubuntu16.04-x86_64 [+] milz7fm ^util-macros@1.19.1%gcc@5.4.0 arch=linux-ubuntu16.04-x86_64 [+] wpexsph ^libxml2@2.9.8%gcc@5.4.0~python arch=linux-ubuntu16.04-x86_64 [+] teneqii ^xz@5.2.4%gcc@5.4.0 arch=linux-ubuntu16.04-x86_64 [+] 5nus6kn ^zlib@1.2.11%gcc@5.4.0+optimize+pic+shared arch=linux-ubuntu16.04-x86_64 [+] ft463od ^numactl@2.0.11%gcc@5.4.0 patches=592f30f7f5f757dfc239ad0ffd39a9a048487ad803c26b419e0f96b8cda08c1a arch=linux-ubuntu16.04-x86_64 [+] 3sx2gxe ^autoconf@2.69%gcc@5.4.0 arch=linux-ubuntu16.04-x86_64 [+] ic2kyoa ^perl@5.26.2%gcc@5.4.0+cpanm patches=0eac10ed90aeb0459ad8851f88081d439a4e41978e586ec743069e8b059370ac +shared+threads arch=linux-ubuntu16.04-x86_64 [+] q4fpyuo ^gdbm@1.14.1%gcc@5.4.0 arch=linux-ubuntu16.04-x86_64 [+] nxhwrg7 ^readline@7.0%gcc@5.4.0 arch=linux-ubuntu16.04-x86_64 [+] 3o765ou ^ncurses@6.1%gcc@5.4.0~symlinks~termlib arch=linux-ubuntu16.04-x86_64 [+] rymw7im ^automake@1.16.1%gcc@5.4.0 arch=linux-ubuntu16.04-x86_64 ==> Concretizing gmp [+] qc4qcfz gmp@6.1.2%gcc@5.4.0 arch=linux-ubuntu16.04-x86_64 [+] 3sx2gxe ^autoconf@2.69%gcc@5.4.0 arch=linux-ubuntu16.04-x86_64 [+] suf5jtc ^m4@1.4.18%gcc@5.4.0 patches=3877ab548f88597ab2327a2230ee048d2d07ace1062efe81fc92e91b7f39cd00,c0a408fbffb7255fcc75e26bd8edab116fc81d216bfd18b473668b7739a4158e,fc9b61654a3ba1a8d6cd78ce087e7c96366c290bc8d2c299f09828d793b853c8 +sigsegv arch=linux-ubuntu16.04-x86_64 [+] fypapcp ^libsigsegv@2.11%gcc@5.4.0 arch=linux-ubuntu16.04-x86_64 [+] ic2kyoa ^perl@5.26.2%gcc@5.4.0+cpanm patches=0eac10ed90aeb0459ad8851f88081d439a4e41978e586ec743069e8b059370ac +shared+threads arch=linux-ubuntu16.04-x86_64 [+] q4fpyuo ^gdbm@1.14.1%gcc@5.4.0 arch=linux-ubuntu16.04-x86_64 [+] nxhwrg7 ^readline@7.0%gcc@5.4.0 arch=linux-ubuntu16.04-x86_64 [+] 3o765ou ^ncurses@6.1%gcc@5.4.0~symlinks~termlib arch=linux-ubuntu16.04-x86_64 [+] fovrh7a ^pkgconf@1.4.2%gcc@5.4.0 arch=linux-ubuntu16.04-x86_64 [+] rymw7im ^automake@1.16.1%gcc@5.4.0 
arch=linux-ubuntu16.04-x86_64 [+] o2pfwjf ^libtool@2.4.6%gcc@5.4.0 arch=linux-ubuntu16.04-x86_64 ==> Installing environment myproject ==> tcl is already installed in ~/spack/opt/spack/linux-ubuntu16.04-x86_64/gcc-5.4.0/tcl-8.6.8-qhwyccywhx2i6s7ob2gvjrjtj3rnfuqt ==> trilinos is already installed in ~/spack/opt/spack/linux-ubuntu16.04-x86_64/gcc-5.4.0/trilinos-12.12.1-rlsruavxqvwk2tgxzxboclbo6ykjf54r ==> hdf5 is already installed in ~/spack/opt/spack/linux-ubuntu16.04-x86_64/gcc-5.4.0/hdf5-1.10.4-ozyvmhzdew66byarohm4p36ep7wtcuiw ==> gmp is already installed in ~/spack/opt/spack/linux-ubuntu16.04-x86_64/gcc-5.4.0/gmp-6.1.2-qc4qcfz4monpllc3nqupdo7vwinf73sw ``` Spack will concretize the new roots, and install everything you added to the environment. Now we can see the installed roots in the output of `spack find`: ``` $ spack find ==> In environment myproject ==> Root specs gmp hdf5 tcl trilinos ==> 24 installed packages -- linux-ubuntu16.04-x86_64 / gcc@5.4.0 --- boost@1.68.0 hdf5@1.10.4 libpciaccess@0.13.5 mumps@5.1.1 openblas@0.3.3 tcl@8.6.8 bzip2@1.0.6 hdf5@1.10.4 libxml2@2.9.8 netcdf@4.6.1 openmpi@3.1.3 trilinos@12.12.1 glm@0.9.7.1 hwloc@1.11.9 matio@1.5.9 netlib-scalapack@2.0.2 parmetis@4.0.3 xz@5.2.4 gmp@6.1.2 hypre@2.15.1 metis@5.1.0 numactl@2.0.11 suite-sparse@5.3.0 zlib@1.2.11 ``` We can build whole environments this way, by adding specs and installing all at once, or we can install them with the usual `install` and `uninstall` commands. The advantage to doing them all at once is that we don’t have to write a script outside of Spack to automate this, and we can kick off a large build of many packages easily. ##### Configuration[¶](#configuration) So far, `myproject` does not have any special configuration associated with it.
The specs concretize using Spack’s defaults: ``` $ spack spec hypre Input spec --- hypre Concretized --- hypre@2.15.1%gcc@5.4.0~debug~int64+internal-superlu+mpi+shared arch=linux-ubuntu16.04-x86_64 ^openblas@0.3.3%gcc@5.4.0 cpu_target= ~ilp64 patches=47cfa7a952ac7b2e4632c73ae199d69fb54490627b66a62c681e21019c4ddc9d,714aea33692304a50bd0ccde42590c176c82ded4a8ac7f06e573dc8071929c33 +pic+shared threads=none ~virtual_machine arch=linux-ubuntu16.04-x86_64 ^openmpi@3.1.3%gcc@5.4.0~cuda+cxx_exceptions fabrics= ~java~legacylaunchers~memchecker~pmi schedulers= ~sqlite3~thread_multiple+vt arch=linux-ubuntu16.04-x86_64 ^hwloc@1.11.9%gcc@5.4.0~cairo~cuda+libxml2+pci+shared arch=linux-ubuntu16.04-x86_64 ^libpciaccess@0.13.5%gcc@5.4.0 arch=linux-ubuntu16.04-x86_64 ^libtool@2.4.6%gcc@5.4.0 arch=linux-ubuntu16.04-x86_64 ^m4@1.4.18%gcc@5.4.0 patches=3877ab548f88597ab2327a2230ee048d2d07ace1062efe81fc92e91b7f39cd00,c0a408fbffb7255fcc75e26bd8edab116fc81d216bfd18b473668b7739a4158e,fc9b61654a3ba1a8d6cd78ce087e7c96366c290bc8d2c299f09828d793b853c8 +sigsegv arch=linux-ubuntu16.04-x86_64 ^libsigsegv@2.11%gcc@5.4.0 arch=linux-ubuntu16.04-x86_64 ^pkgconf@1.4.2%gcc@5.4.0 arch=linux-ubuntu16.04-x86_64 ^util-macros@1.19.1%gcc@5.4.0 arch=linux-ubuntu16.04-x86_64 ^libxml2@2.9.8%gcc@5.4.0~python arch=linux-ubuntu16.04-x86_64 ^xz@5.2.4%gcc@5.4.0 arch=linux-ubuntu16.04-x86_64 ^zlib@1.2.11%gcc@5.4.0+optimize+pic+shared arch=linux-ubuntu16.04-x86_64 ^numactl@2.0.11%gcc@5.4.0 patches=592f30f7f5f757dfc239ad0ffd39a9a048487ad803c26b419e0f96b8cda08c1a arch=linux-ubuntu16.04-x86_64 ^autoconf@2.69%gcc@5.4.0 arch=linux-ubuntu16.04-x86_64 ^perl@5.26.2%gcc@5.4.0+cpanm patches=0eac10ed90aeb0459ad8851f88081d439a4e41978e586ec743069e8b059370ac +shared+threads arch=linux-ubuntu16.04-x86_64 ^gdbm@1.14.1%gcc@5.4.0 arch=linux-ubuntu16.04-x86_64 ^readline@7.0%gcc@5.4.0 arch=linux-ubuntu16.04-x86_64 ^ncurses@6.1%gcc@5.4.0~symlinks~termlib arch=linux-ubuntu16.04-x86_64 ^automake@1.16.1%gcc@5.4.0 arch=linux-ubuntu16.04-x86_64 
``` You may want to add extra configuration to your environment. You can see how your environment is configured using `spack config get`: ``` $ spack config get # This is a Spack Environment file. # # It describes a set of packages to be installed, along with # configuration settings. spack: # add package specs to the `specs` list specs: [tcl, trilinos, hdf5, gmp] ``` It turns out that this is a special configuration format where Spack stores the state for the environment. Currently, the file is just a `spack:` header and a list of `specs`. These are the roots. You can edit this file to add your own custom configuration. Spack provides a shortcut to do that: ``` spack config edit ``` You should now see the same file, and edit it to look like this: ``` # This is a Spack Environment file. # # It describes a set of packages to be installed, along with # configuration settings. spack: packages: all: providers: mpi: [mpich] # add package specs to the `specs` list specs: [tcl, trilinos, hdf5, gmp] ``` Now if we run `spack spec` again in the environment, specs will concretize with `mpich` as the MPI implementation: ``` $ spack spec hypre Input spec --- hypre Concretized --- hypre@2.15.1%gcc@5.4.0~debug~int64+internal-superlu+mpi+shared arch=linux-ubuntu16.04-x86_64 ^mpich@3.2.1%gcc@5.4.0 device=ch3 +hydra netmod=tcp +pmi+romio~verbs arch=linux-ubuntu16.04-x86_64 ^findutils@4.6.0%gcc@5.4.0 patches=84b916c0bf8c51b7e7b28417692f0ad3e7030d1f3c248ba77c42ede5c1c5d11e,bd9e4e5cc280f9753ae14956c4e4aa17fe7a210f55dd6c84aa60b12d106d47a2 arch=linux-ubuntu16.04-x86_64 ^autoconf@2.69%gcc@5.4.0 arch=linux-ubuntu16.04-x86_64 ^m4@1.4.18%gcc@5.4.0 patches=3877ab548f88597ab2327a2230ee048d2d07ace1062efe81fc92e91b7f39cd00,c0a408fbffb7255fcc75e26bd8edab116fc81d216bfd18b473668b7739a4158e,fc9b61654a3ba1a8d6cd78ce087e7c96366c290bc8d2c299f09828d793b853c8 +sigsegv arch=linux-ubuntu16.04-x86_64 ^libsigsegv@2.11%gcc@5.4.0 arch=linux-ubuntu16.04-x86_64 ^perl@5.26.2%gcc@5.4.0+cpanm 
patches=0eac10ed90aeb0459ad8851f88081d439a4e41978e586ec743069e8b059370ac +shared+threads arch=linux-ubuntu16.04-x86_64 ^gdbm@1.14.1%gcc@5.4.0 arch=linux-ubuntu16.04-x86_64 ^readline@7.0%gcc@5.4.0 arch=linux-ubuntu16.04-x86_64 ^ncurses@6.1%gcc@5.4.0~symlinks~termlib arch=linux-ubuntu16.04-x86_64 ^pkgconf@1.4.2%gcc@5.4.0 arch=linux-ubuntu16.04-x86_64 ^automake@1.16.1%gcc@5.4.0 arch=linux-ubuntu16.04-x86_64 ^libtool@2.4.6%gcc@5.4.0 arch=linux-ubuntu16.04-x86_64 ^texinfo@6.5%gcc@5.4.0 arch=linux-ubuntu16.04-x86_64 ^openblas@0.3.3%gcc@5.4.0 cpu_target= ~ilp64 patches=47cfa7a952ac7b2e4632c73ae199d69fb54490627b66a62c681e21019c4ddc9d,714aea33692304a50bd0ccde42590c176c82ded4a8ac7f06e573dc8071929c33 +pic+shared threads=none ~virtual_machine arch=linux-ubuntu16.04-x86_64 ``` In addition to the `specs` section, an environment’s configuration can contain any of the configuration options from Spack’s various config sections. You can add custom repositories, a custom install location, custom compilers, or custom external packages, in addition to the `packages` preferences we show here. But now we have a problem. We already installed part of this environment with `openmpi`, but now we want to install it with `mpich`. You can run `spack concretize` inside of an environment to concretize all of its specs.
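The `providers` preference shown above can be thought of as an ordered lookup: when concretizing a virtual dependency like `mpi`, the first preferred provider that is actually available wins. The following standalone sketch illustrates the idea in plain Python (the `pick_provider` helper is an illustrative name, not Spack’s concretizer):

```python
def pick_provider(virtual, preferences, available):
    """Return the first preferred provider of `virtual` that is
    available; fall back to the first available provider."""
    for name in preferences.get(virtual, []):
        if name in available:
            return name
    return available[0]

prefs = {'mpi': ['mpich']}        # mirrors the providers entry in spack.yaml
providers = ['openmpi', 'mpich']  # packages providing the mpi virtual
print(pick_provider('mpi', prefs, providers))  # -> mpich
print(pick_provider('mpi', {}, providers))     # no preference -> openmpi
```

With the preference in place, every spec in the environment that depends on `mpi` resolves to `mpich`, which is exactly what the `spack spec hypre` output above demonstrates.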
We can run it here: ``` $ spack concretize -f ==> Concretizing tcl [+] qhwyccy tcl@8.6.8%gcc@5.4.0 arch=linux-ubuntu16.04-x86_64 [+] 5nus6kn ^zlib@1.2.11%gcc@5.4.0+optimize+pic+shared arch=linux-ubuntu16.04-x86_64 ==> Concretizing trilinos [+] kqc52mo trilinos@12.12.1%gcc@5.4.0~alloptpkgs+amesos+amesos2+anasazi+aztec+belos+boost build_type=RelWithDebInfo ~cgns~complex~dtk+epetra+epetraext+exodus+explicit_template_instantiation~float+fortran~fortrilinos+gtest+hdf5+hypre+ifpack+ifpack2~intrepid~intrepid2~isorropia+kokkos+metis~minitensor+ml+muelu+mumps~nox~openmp~phalanx~piro~pnetcdf~python~rol~rythmos+sacado~shards+shared~stk+suite-sparse~superlu~superlu-dist~teko~tempus+teuchos+tpetra~x11~xsdkflags~zlib+zoltan+zoltan2 arch=linux-ubuntu16.04-x86_64 [+] zbgfxap ^boost@1.68.0%gcc@5.4.0+atomic+chrono~clanglibcpp cxxstd=default +date_time~debug+exception+filesystem+graph~icu+iostreams+locale+log+math~mpi+multithreaded~numpy patches=2ab6c72d03dec6a4ae20220a9dfd5c8c572c5294252155b85c6874d97c323199 +program_options~python+random+regex+serialization+shared+signals~singlethreaded+system~taggedlayout+test+thread+timer~versionedlayout+wave arch=linux-ubuntu16.04-x86_64 [+] ufczdvs ^bzip2@1.0.6%gcc@5.4.0+shared arch=linux-ubuntu16.04-x86_64 [+] 2rhuivg ^diffutils@3.6%gcc@5.4.0 arch=linux-ubuntu16.04-x86_64 [+] 5nus6kn ^zlib@1.2.11%gcc@5.4.0+optimize+pic+shared arch=linux-ubuntu16.04-x86_64 [+] otafqzh ^cmake@3.12.3%gcc@5.4.0~doc+ncurses+openssl+ownlibs patches=dd3a40d4d92f6b2158b87d6fb354c277947c776424aa03f6dc8096cf3135f5d0 ~qt arch=linux-ubuntu16.04-x86_64 [+] 3o765ou ^ncurses@6.1%gcc@5.4.0~symlinks~termlib arch=linux-ubuntu16.04-x86_64 [+] fovrh7a ^pkgconf@1.4.2%gcc@5.4.0 arch=linux-ubuntu16.04-x86_64 [+] b4y3w3b ^openssl@1.0.2o%gcc@5.4.0+systemcerts arch=linux-ubuntu16.04-x86_64 [+] ic2kyoa ^perl@5.26.2%gcc@5.4.0+cpanm patches=0eac10ed90aeb0459ad8851f88081d439a4e41978e586ec743069e8b059370ac +shared+threads arch=linux-ubuntu16.04-x86_64 [+] q4fpyuo ^gdbm@1.14.1%gcc@5.4.0 
arch=linux-ubuntu16.04-x86_64 [+] nxhwrg7 ^readline@7.0%gcc@5.4.0 arch=linux-ubuntu16.04-x86_64 [+] jnw622j ^glm@0.9.7.1%gcc@5.4.0 build_type=RelWithDebInfo arch=linux-ubuntu16.04-x86_64 [+] xxd7syh ^hdf5@1.10.4%gcc@5.4.0~cxx~debug~fortran+hl+mpi+pic+shared~szip~threadsafe arch=linux-ubuntu16.04-x86_64 [+] p3f7p2r ^mpich@3.2.1%gcc@5.4.0 device=ch3 +hydra netmod=tcp +pmi+romio~verbs arch=linux-ubuntu16.04-x86_64 [+] d4iajxs ^findutils@4.6.0%gcc@5.4.0 patches=84b916c0bf8c51b7e7b28417692f0ad3e7030d1f3c248ba77c42ede5c1c5d11e,bd9e4e5cc280f9753ae14956c4e4aa17fe7a210f55dd6c84aa60b12d106d47a2 arch=linux-ubuntu16.04-x86_64 [+] 3sx2gxe ^autoconf@2.69%gcc@5.4.0 arch=linux-ubuntu16.04-x86_64 [+] suf5jtc ^m4@1.4.18%gcc@5.4.0 patches=3877ab548f88597ab2327a2230ee048d2d07ace1062efe81fc92e91b7f39cd00,c0a408fbffb7255fcc75e26bd8edab116fc81d216bfd18b473668b7739a4158e,fc9b61654a3ba1a8d6cd78ce087e7c96366c290bc8d2c299f09828d793b853c8 +sigsegv arch=linux-ubuntu16.04-x86_64 [+] fypapcp ^libsigsegv@2.11%gcc@5.4.0 arch=linux-ubuntu16.04-x86_64 [+] rymw7im ^automake@1.16.1%gcc@5.4.0 arch=linux-ubuntu16.04-x86_64 [+] o2pfwjf ^libtool@2.4.6%gcc@5.4.0 arch=linux-ubuntu16.04-x86_64 [+] zs7a2pc ^texinfo@6.5%gcc@5.4.0 arch=linux-ubuntu16.04-x86_64 [+] obewuoz ^hypre@2.15.1%gcc@5.4.0~debug~int64~internal-superlu+mpi+shared arch=linux-ubuntu16.04-x86_64 [+] cyeg2yi ^openblas@0.3.3%gcc@5.4.0 cpu_target= ~ilp64 patches=47cfa7a952ac7b2e4632c73ae199d69fb54490627b66a62c681e21019c4ddc9d,714aea33692304a50bd0ccde42590c176c82ded4a8ac7f06e573dc8071929c33 +pic+shared threads=none ~virtual_machine arch=linux-ubuntu16.04-x86_64 [+] gvyqldh ^matio@1.5.9%gcc@5.4.0+hdf5+shared+zlib arch=linux-ubuntu16.04-x86_64 [+] 3wnvp4j ^metis@5.1.0%gcc@5.4.0 build_type=Release ~gdb~int64 patches=4991da938c1d3a1d3dea78e49bbebecba00273f98df2a656e38b83d55b281da1 ~real64+shared arch=linux-ubuntu16.04-x86_64 [+] cumcj5a ^mumps@5.1.1%gcc@5.4.0+complex+double+float~int64~metis+mpi~parmetis~ptscotch~scotch+shared 
arch=linux-ubuntu16.04-x86_64 [+] p7iln2p ^netlib-scalapack@2.0.2%gcc@5.4.0 build_type=RelWithDebInfo ~pic+shared arch=linux-ubuntu16.04-x86_64 [+] wmmx5sg ^netcdf@4.6.1%gcc@5.4.0~dap~hdf4 maxdims=1024 maxvars=8192 +mpi~parallel-netcdf+shared arch=linux-ubuntu16.04-x86_64 [+] jehtata ^parmetis@4.0.3%gcc@5.4.0 build_type=RelWithDebInfo ~gdb patches=4f892531eb0a807eb1b82e683a416d3e35154a455274cf9b162fb02054d11a5b,50ed2081bc939269689789942067c58b3e522c269269a430d5d34c00edbc5870,704b84f7c7444d4372cb59cca6e1209df4ef3b033bc4ee3cf50f369bce972a9d +shared arch=linux-ubuntu16.04-x86_64 [+] zaau4ki ^suite-sparse@5.3.0%gcc@5.4.0~cuda~openmp+pic~tbb arch=linux-ubuntu16.04-x86_64 ==> Concretizing hdf5 - zjgyn3w hdf5@1.10.4%gcc@5.4.0~cxx~debug~fortran~hl+mpi+pic+shared~szip~threadsafe arch=linux-ubuntu16.04-x86_64 [+] p3f7p2r ^mpich@3.2.1%gcc@5.4.0 device=ch3 +hydra netmod=tcp +pmi+romio~verbs arch=linux-ubuntu16.04-x86_64 [+] d4iajxs ^findutils@4.6.0%gcc@5.4.0 patches=84b916c0bf8c51b7e7b28417692f0ad3e7030d1f3c248ba77c42ede5c1c5d11e,bd9e4e5cc280f9753ae14956c4e4aa17fe7a210f55dd6c84aa60b12d106d47a2 arch=linux-ubuntu16.04-x86_64 [+] 3sx2gxe ^autoconf@2.69%gcc@5.4.0 arch=linux-ubuntu16.04-x86_64 [+] suf5jtc ^m4@1.4.18%gcc@5.4.0 patches=3877ab548f88597ab2327a2230ee048d2d07ace1062efe81fc92e91b7f39cd00,c0a408fbffb7255fcc75e26bd8edab116fc81d216bfd18b473668b7739a4158e,fc9b61654a3ba1a8d6cd78ce087e7c96366c290bc8d2c299f09828d793b853c8 +sigsegv arch=linux-ubuntu16.04-x86_64 [+] fypapcp ^libsigsegv@2.11%gcc@5.4.0 arch=linux-ubuntu16.04-x86_64 [+] ic2kyoa ^perl@5.26.2%gcc@5.4.0+cpanm patches=0eac10ed90aeb0459ad8851f88081d439a4e41978e586ec743069e8b059370ac +shared+threads arch=linux-ubuntu16.04-x86_64 [+] q4fpyuo ^gdbm@1.14.1%gcc@5.4.0 arch=linux-ubuntu16.04-x86_64 [+] nxhwrg7 ^readline@7.0%gcc@5.4.0 arch=linux-ubuntu16.04-x86_64 [+] 3o765ou ^ncurses@6.1%gcc@5.4.0~symlinks~termlib arch=linux-ubuntu16.04-x86_64 [+] fovrh7a ^pkgconf@1.4.2%gcc@5.4.0 arch=linux-ubuntu16.04-x86_64 [+] rymw7im 
^automake@1.16.1%gcc@5.4.0 arch=linux-ubuntu16.04-x86_64 [+] o2pfwjf ^libtool@2.4.6%gcc@5.4.0 arch=linux-ubuntu16.04-x86_64 [+] zs7a2pc ^texinfo@6.5%gcc@5.4.0 arch=linux-ubuntu16.04-x86_64 [+] 5nus6kn ^zlib@1.2.11%gcc@5.4.0+optimize+pic+shared arch=linux-ubuntu16.04-x86_64 ==> Concretizing gmp [+] qc4qcfz gmp@6.1.2%gcc@5.4.0 arch=linux-ubuntu16.04-x86_64 [+] 3sx2gxe ^autoconf@2.69%gcc@5.4.0 arch=linux-ubuntu16.04-x86_64 [+] suf5jtc ^m4@1.4.18%gcc@5.4.0 patches=3877ab548f88597ab2327a2230ee048d2d07ace1062efe81fc92e91b7f39cd00,c0a408fbffb7255fcc75e26bd8edab116fc81d216bfd18b473668b7739a4158e,fc9b61654a3ba1a8d6cd78ce087e7c96366c290bc8d2c299f09828d793b853c8 +sigsegv arch=linux-ubuntu16.04-x86_64 [+] fypapcp ^libsigsegv@2.11%gcc@5.4.0 arch=linux-ubuntu16.04-x86_64 [+] ic2kyoa ^perl@5.26.2%gcc@5.4.0+cpanm patches=0eac10ed90aeb0459ad8851f88081d439a4e41978e586ec743069e8b059370ac +shared+threads arch=linux-ubuntu16.04-x86_64 [+] q4fpyuo ^gdbm@1.14.1%gcc@5.4.0 arch=linux-ubuntu16.04-x86_64 [+] nxhwrg7 ^readline@7.0%gcc@5.4.0 arch=linux-ubuntu16.04-x86_64 [+] 3o765ou ^ncurses@6.1%gcc@5.4.0~symlinks~termlib arch=linux-ubuntu16.04-x86_64 [+] fovrh7a ^pkgconf@1.4.2%gcc@5.4.0 arch=linux-ubuntu16.04-x86_64 [+] rymw7im ^automake@1.16.1%gcc@5.4.0 arch=linux-ubuntu16.04-x86_64 [+] o2pfwjf ^libtool@2.4.6%gcc@5.4.0 arch=linux-ubuntu16.04-x86_64 ``` Now, all the specs in the environment are concrete and ready to be installed with `mpich` as the MPI implementation. Normally, we could just run `spack config edit`, edit the environment configuration, `spack add` some specs, and `install`. But, when we already have installed packages in the environment, we have to force everything in the environment to be re-concretized using `spack concretize -f`. *Then* we can re-run `spack install`.
#### `spack.yaml` and `spack.lock`[¶](#spack-yaml-and-spack-lock) So far we’ve shown you how to interact with environments from the command line, but they also have a file-based interface that can be used by developers and admins to manage workflows for projects. In this section we’ll dive a little deeper to see how environments are implemented, and how you could use this in your day-to-day development. ##### `spack.yaml`[¶](#spack-yaml) Earlier, we changed an environment’s configuration using `spack config edit`. We were actually editing a special file called `spack.yaml`. Let’s take a look. We can get directly to the current environment’s location using `spack cd`: ``` $ spack cd -e myproject $ pwd ~/spack/var/spack/environments/myproject $ ls spack.lock spack.yaml ``` We notice two things here. First, the environment is just a directory inside of `var/spack/environments` within the Spack installation. Second, it contains two important files: `spack.yaml` and `spack.lock`. `spack.yaml` is the configuration file for environments that we’ve already seen, but it does not *have* to live inside Spack. If you create an environment using `spack env create`, it is *managed* by Spack in the `var/spack/environments` directory, and you can refer to it by name. You can actually put a `spack.yaml` file *anywhere*, and you can use it to bundle an environment, or a list of dependencies to install, with your project. Let’s make a simple project: ``` $ cd $ mkdir code $ cd code $ spack env create -d . ==> Created environment in ~/code ``` Here, we made a new directory called *code*, and we used the `-d` option to create an environment in it. What really happened? ``` $ ls spack.yaml $ cat spack.yaml # This is a Spack Environment file. # # It describes a set of packages to be installed, along with # configuration settings. spack: # add package specs to the `specs` list specs: [] ``` Spack just created a `spack.yaml` file in the code directory, with an empty list of root specs. 
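The generated file is plain YAML with a single top-level `spack:` key. As an illustration, here is a short Python sketch that stamps out the same minimal file Spack just created; the `write_env` helper is hypothetical, and in practice you would let `spack env create -d` write this file for you:

```python
import os
import tempfile

# The minimal structure written by `spack env create -d .`,
# copied from the file shown above.
SPACK_YAML = """\
# This is a Spack Environment file.
#
# It describes a set of packages to be installed, along with
# configuration settings.
spack:
  # add package specs to the `specs` list
  specs: []
"""

def write_env(directory):
    """Write a minimal spack.yaml into `directory` and return its path."""
    path = os.path.join(directory, "spack.yaml")
    with open(path, "w") as f:
        f.write(SPACK_YAML)
    return path

with tempfile.TemporaryDirectory() as d:
    path = write_env(d)
    # The last line is the (empty) list of root specs.
    print(open(path).read().splitlines()[-1])  # prints "  specs: []"
```

Dropping such a file into any directory is all it takes to make that directory an environment.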
Now we have a Spack environment, *in a directory*, that we can use to manage dependencies. Suppose your project depends on `boost`, `trilinos`, and `openmpi`. You can add these to your spec list: ``` # This is a Spack Environment file. # # It describes a set of packages to be installed, along with # configuration settings. spack: # add package specs to the `specs` list specs: - boost - trilinos - openmpi ``` And now *anyone* who uses the *code* repository can use this format to install the project’s dependencies. They need only clone the repository, `cd` into it, and type `spack install`: ``` $ spack install ==> Concretizing boost [+] zbgfxap boost@1.68.0%gcc@5.4.0+atomic+chrono~clanglibcpp cxxstd=default +date_time~debug+exception+filesystem+graph~icu+iostreams+locale+log+math~mpi+multithreaded~numpy patches=2ab6c72d03dec6a4ae20220a9dfd5c8c572c5294252155b85c6874d97c323199 +program_options~python+random+regex+serialization+shared+signals~singlethreaded+system~taggedlayout+test+thread+timer~versionedlayout+wave arch=linux-ubuntu16.04-x86_64 [+] ufczdvs ^bzip2@1.0.6%gcc@5.4.0+shared arch=linux-ubuntu16.04-x86_64 [+] 2rhuivg ^diffutils@3.6%gcc@5.4.0 arch=linux-ubuntu16.04-x86_64 [+] 5nus6kn ^zlib@1.2.11%gcc@5.4.0+optimize+pic+shared arch=linux-ubuntu16.04-x86_64 ==> Concretizing trilinos [+] rlsruav trilinos@12.12.1%gcc@5.4.0~alloptpkgs+amesos+amesos2+anasazi+aztec+belos+boost build_type=RelWithDebInfo ~cgns~complex~dtk+epetra+epetraext+exodus+explicit_template_instantiation~float+fortran~fortrilinos+gtest+hdf5+hypre+ifpack+ifpack2~intrepid~intrepid2~isorropia+kokkos+metis~minitensor+ml+muelu+mumps~nox~openmp~phalanx~piro~pnetcdf~python~rol~rythmos+sacado~shards+shared~stk+suite-sparse~superlu~superlu-dist~teko~tempus+teuchos+tpetra~x11~xsdkflags~zlib+zoltan+zoltan2 arch=linux-ubuntu16.04-x86_64 [+] zbgfxap ^boost@1.68.0%gcc@5.4.0+atomic+chrono~clanglibcpp cxxstd=default +date_time~debug+exception+filesystem+graph~icu+iostreams+locale+log+math~mpi+multithreaded~numpy 
patches=2ab6c72d03dec6a4ae20220a9dfd5c8c572c5294252155b85c6874d97c323199 +program_options~python+random+regex+serialization+shared+signals~singlethreaded+system~taggedlayout+test+thread+timer~versionedlayout+wave arch=linux-ubuntu16.04-x86_64 [+] ufczdvs ^bzip2@1.0.6%gcc@5.4.0+shared arch=linux-ubuntu16.04-x86_64 [+] 2rhuivg ^diffutils@3.6%gcc@5.4.0 arch=linux-ubuntu16.04-x86_64 [+] 5nus6kn ^zlib@1.2.11%gcc@5.4.0+optimize+pic+shared arch=linux-ubuntu16.04-x86_64 [+] otafqzh ^cmake@3.12.3%gcc@5.4.0~doc+ncurses+openssl+ownlibs patches=dd3a40d4d92f6b2158b87d6fb354c277947c776424aa03f6dc8096cf3135f5d0 ~qt arch=linux-ubuntu16.04-x86_64 [+] 3o765ou ^ncurses@6.1%gcc@5.4.0~symlinks~termlib arch=linux-ubuntu16.04-x86_64 [+] fovrh7a ^pkgconf@1.4.2%gcc@5.4.0 arch=linux-ubuntu16.04-x86_64 [+] b4y3w3b ^openssl@1.0.2o%gcc@5.4.0+systemcerts arch=linux-ubuntu16.04-x86_64 [+] ic2kyoa ^perl@5.26.2%gcc@5.4.0+cpanm patches=0eac10ed90aeb0459ad8851f88081d439a4e41978e586ec743069e8b059370ac +shared+threads arch=linux-ubuntu16.04-x86_64 [+] q4fpyuo ^gdbm@1.14.1%gcc@5.4.0 arch=linux-ubuntu16.04-x86_64 [+] nxhwrg7 ^readline@7.0%gcc@5.4.0 arch=linux-ubuntu16.04-x86_64 [+] jnw622j ^glm@0.9.7.1%gcc@5.4.0 build_type=RelWithDebInfo arch=linux-ubuntu16.04-x86_64 [+] oqwnui7 ^hdf5@1.10.4%gcc@5.4.0~cxx~debug~fortran+hl+mpi+pic+shared~szip~threadsafe arch=linux-ubuntu16.04-x86_64 [+] 3njc4q5 ^openmpi@3.1.3%gcc@5.4.0~cuda+cxx_exceptions fabrics= ~java~legacylaunchers~memchecker~pmi schedulers= ~sqlite3~thread_multiple+vt arch=linux-ubuntu16.04-x86_64 [+] 43tkw5m ^hwloc@1.11.9%gcc@5.4.0~cairo~cuda+libxml2+pci+shared arch=linux-ubuntu16.04-x86_64 [+] 5urc6tc ^libpciaccess@0.13.5%gcc@5.4.0 arch=linux-ubuntu16.04-x86_64 [+] o2pfwjf ^libtool@2.4.6%gcc@5.4.0 arch=linux-ubuntu16.04-x86_64 [+] suf5jtc ^m4@1.4.18%gcc@5.4.0 
patches=3877ab548f88597ab2327a2230ee048d2d07ace1062efe81fc92e91b7f39cd00,c0a408fbffb7255fcc75e26bd8edab116fc81d216bfd18b473668b7739a4158e,fc9b61654a3ba1a8d6cd78ce087e7c96366c290bc8d2c299f09828d793b853c8 +sigsegv arch=linux-ubuntu16.04-x86_64 [+] fypapcp ^libsigsegv@2.11%gcc@5.4.0 arch=linux-ubuntu16.04-x86_64 [+] milz7fm ^util-macros@1.19.1%gcc@5.4.0 arch=linux-ubuntu16.04-x86_64 [+] wpexsph ^libxml2@2.9.8%gcc@5.4.0~python arch=linux-ubuntu16.04-x86_64 [+] teneqii ^xz@5.2.4%gcc@5.4.0 arch=linux-ubuntu16.04-x86_64 [+] ft463od ^numactl@2.0.11%gcc@5.4.0 patches=592f30f7f5f757dfc239ad0ffd39a9a048487ad803c26b419e0f96b8cda08c1a arch=linux-ubuntu16.04-x86_64 [+] 3sx2gxe ^autoconf@2.69%gcc@5.4.0 arch=linux-ubuntu16.04-x86_64 [+] rymw7im ^automake@1.16.1%gcc@5.4.0 arch=linux-ubuntu16.04-x86_64 [+] fshksdp ^hypre@2.15.1%gcc@5.4.0~debug~int64~internal-superlu+mpi+shared arch=linux-ubuntu16.04-x86_64 [+] cyeg2yi ^openblas@0.3.3%gcc@5.4.0 cpu_target= ~ilp64 patches=47cfa7a952ac7b2e4632c73ae199d69fb54490627b66a62c681e21019c4ddc9d,714aea33692304a50bd0ccde42590c176c82ded4a8ac7f06e573dc8071929c33 +pic+shared threads=none ~virtual_machine arch=linux-ubuntu16.04-x86_64 [+] lmzdgss ^matio@1.5.9%gcc@5.4.0+hdf5+shared+zlib arch=linux-ubuntu16.04-x86_64 [+] 3wnvp4j ^metis@5.1.0%gcc@5.4.0 build_type=Release ~gdb~int64 patches=4991da938c1d3a1d3dea78e49bbebecba00273f98df2a656e38b83d55b281da1 ~real64+shared arch=linux-ubuntu16.04-x86_64 [+] acsg2dz ^mumps@5.1.1%gcc@5.4.0+complex+double+float~int64~metis+mpi~parmetis~ptscotch~scotch+shared arch=linux-ubuntu16.04-x86_64 [+] wotpfwf ^netlib-scalapack@2.0.2%gcc@5.4.0 build_type=RelWithDebInfo ~pic+shared arch=linux-ubuntu16.04-x86_64 [+] mhm4izp ^netcdf@4.6.1%gcc@5.4.0~dap~hdf4 maxdims=1024 maxvars=8192 +mpi~parallel-netcdf+shared arch=linux-ubuntu16.04-x86_64 [+] uv6h3sq ^parmetis@4.0.3%gcc@5.4.0 build_type=RelWithDebInfo ~gdb 
patches=4f892531eb0a807eb1b82e683a416d3e35154a455274cf9b162fb02054d11a5b,50ed2081bc939269689789942067c58b3e522c269269a430d5d34c00edbc5870,704b84f7c7444d4372cb59cca6e1209df4ef3b033bc4ee3cf50f369bce972a9d +shared arch=linux-ubuntu16.04-x86_64 [+] zaau4ki ^suite-sparse@5.3.0%gcc@5.4.0~cuda~openmp+pic~tbb arch=linux-ubuntu16.04-x86_64 ==> Concretizing openmpi [+] 3njc4q5 openmpi@3.1.3%gcc@5.4.0~cuda+cxx_exceptions fabrics= ~java~legacylaunchers~memchecker~pmi schedulers= ~sqlite3~thread_multiple+vt arch=linux-ubuntu16.04-x86_64 [+] 43tkw5m ^hwloc@1.11.9%gcc@5.4.0~cairo~cuda+libxml2+pci+shared arch=linux-ubuntu16.04-x86_64 [+] 5urc6tc ^libpciaccess@0.13.5%gcc@5.4.0 arch=linux-ubuntu16.04-x86_64 [+] o2pfwjf ^libtool@2.4.6%gcc@5.4.0 arch=linux-ubuntu16.04-x86_64 [+] suf5jtc ^m4@1.4.18%gcc@5.4.0 patches=3877ab548f88597ab2327a2230ee048d2d07ace1062efe81fc92e91b7f39cd00,c0a408fbffb7255fcc75e26bd8edab116fc81d216bfd18b473668b7739a4158e,fc9b61654a3ba1a8d6cd78ce087e7c96366c290bc8d2c299f09828d793b853c8 +sigsegv arch=linux-ubuntu16.04-x86_64 [+] fypapcp ^libsigsegv@2.11%gcc@5.4.0 arch=linux-ubuntu16.04-x86_64 [+] fovrh7a ^pkgconf@1.4.2%gcc@5.4.0 arch=linux-ubuntu16.04-x86_64 [+] milz7fm ^util-macros@1.19.1%gcc@5.4.0 arch=linux-ubuntu16.04-x86_64 [+] wpexsph ^libxml2@2.9.8%gcc@5.4.0~python arch=linux-ubuntu16.04-x86_64 [+] teneqii ^xz@5.2.4%gcc@5.4.0 arch=linux-ubuntu16.04-x86_64 [+] 5nus6kn ^zlib@1.2.11%gcc@5.4.0+optimize+pic+shared arch=linux-ubuntu16.04-x86_64 [+] ft463od ^numactl@2.0.11%gcc@5.4.0 patches=592f30f7f5f757dfc239ad0ffd39a9a048487ad803c26b419e0f96b8cda08c1a arch=linux-ubuntu16.04-x86_64 [+] 3sx2gxe ^autoconf@2.69%gcc@5.4.0 arch=linux-ubuntu16.04-x86_64 [+] ic2kyoa ^perl@5.26.2%gcc@5.4.0+cpanm patches=0eac10ed90aeb0459ad8851f88081d439a4e41978e586ec743069e8b059370ac +shared+threads arch=linux-ubuntu16.04-x86_64 [+] q4fpyuo ^gdbm@1.14.1%gcc@5.4.0 arch=linux-ubuntu16.04-x86_64 [+] nxhwrg7 ^readline@7.0%gcc@5.4.0 arch=linux-ubuntu16.04-x86_64 [+] 3o765ou 
^ncurses@6.1%gcc@5.4.0~symlinks~termlib arch=linux-ubuntu16.04-x86_64 [+] rymw7im ^automake@1.16.1%gcc@5.4.0 arch=linux-ubuntu16.04-x86_64 ==> Installing environment ~/code ==> boost is already installed in ~/spack/opt/spack/linux-ubuntu16.04-x86_64/gcc-5.4.0/boost-1.68.0-zbgfxapchxa4awxdwpleubfuznblxzvt ==> trilinos is already installed in ~/spack/opt/spack/linux-ubuntu16.04-x86_64/gcc-5.4.0/trilinos-12.12.1-rlsruavxqvwk2tgxzxboclbo6ykjf54r ==> openmpi is already installed in ~/spack/opt/spack/linux-ubuntu16.04-x86_64/gcc-5.4.0/openmpi-3.1.3-3njc4q5pqdpptq6jvqjrezkffwokv2sx ``` Spack concretizes the specs in the `spack.yaml` file and installs them. What happened here? If you `cd` into a directory that has a `spack.yaml` file in it, Spack considers this directory’s environment to be activated. The directory does not have to live within Spack; it can be anywhere. So, from `~/code`, we can actually manipulate `spack.yaml` using `spack add` and `spack remove` (just like managed environments): ``` $ spack add hdf5@5.5.1 ==> Adding hdf5 to environment ~/code $ cat spack.yaml # This is a Spack Environment file. # # It describes a set of packages to be installed, along with # configuration settings. spack: # add package specs to the `specs` list specs: - boost - trilinos - openmpi - hdf5@5.5.1 $ spack remove hdf5 ==> Removing hdf5 from environment ~/code $ cat spack.yaml # This is a Spack Environment file. # # It describes a set of packages to be installed, along with # configuration settings. spack: # add package specs to the `specs` list specs: - boost - trilinos - openmpi ``` ##### `spack.lock`[¶](#spack-lock) We’ve now covered managed environments and environments in directories; the last thing we’ll cover is `spack.lock`. You may remember that when we ran `spack install`, Spack concretized all the specs in the `spack.yaml` file and installed them.
Whenever we concretize specs in an environment, all concrete specs in the environment are written out to a `spack.lock` file *alongside* `spack.yaml`. The `spack.lock` file is not human-readable like the `spack.yaml` file. It is a JSON file that contains all the information we need to reproduce the build of an environment: ``` $ head spack.lock { "concrete_specs": { "teneqii2xv5u6zl5r6qi3pwurc6pmypz": { "xz": { "version": "5.2.4", "arch": { "platform": "linux", "platform_os": "ubuntu16.04", "target": "x86_64" }, ... ``` `spack.yaml` and `spack.lock` correspond to two fundamental concepts in Spack, but for environments: > * `spack.yaml` is the set of *abstract* specs and configuration that > you want to install. > * `spack.lock` is the set of all fully *concretized* specs generated > from concretizing `spack.yaml`. Using either of these, you can recreate an environment that someone else built. `spack env create` takes an extra optional argument, which can be either a `spack.yaml` or a `spack.lock` file: ``` $ spack env create my-project spack.yaml $ spack env create my-project spack.lock ``` Both of these create a new environment called `my-project`, but which one you choose to use depends on your needs: > 1. copying the `spack.yaml` file allows someone else to build your *requirements*, > potentially a different way. > 2. copying the `spack.lock` file allows someone else to rebuild your > *installation* exactly as you built it. The first use case can *re-concretize* the same specs on new platforms in order to build, but it will preserve the abstract requirements. The second use case (currently) requires you to be on the same machine, but it retains all decisions made during concretization and is faithful to a prior install.
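Because `spack.lock` is plain JSON, it is easy to inspect programmatically. A minimal sketch follows; the structure below is a truncated stand-in modeled on the `head spack.lock` output above, not a complete lock file:

```python
import json

# Truncated stand-in for a real spack.lock (field names follow the
# `head spack.lock` output shown in the tutorial; a full lock file
# contains many more keys per spec).
lock_text = """
{
  "concrete_specs": {
    "teneqii2xv5u6zl5r6qi3pwurc6pmypz": {
      "xz": {
        "version": "5.2.4",
        "arch": {
          "platform": "linux",
          "platform_os": "ubuntu16.04",
          "target": "x86_64"
        }
      }
    }
  }
}
"""

lock = json.loads(lock_text)
# List each concrete spec as name@version plus a short hash prefix.
for full_hash, spec in lock["concrete_specs"].items():
    for name, info in spec.items():
        print(f"{name}@{info['version']} /{full_hash[:7]}")
# prints: xz@5.2.4 /teneqii
```

This kind of scripting can be handy for auditing exactly which concrete specs an environment pins.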
### Module Files[¶](#module-files) In this tutorial, we’ll introduce a few concepts that are fundamental to the generation of module files with Spack, and we’ll guide you through the customization of both the content of module files and their layout on disk. In the end you should have a clear understanding of: > * What module files are and how they work > * How Spack generates them > * Which commands are available to ease their maintenance > * How it is possible to customize them in all aspects #### Modules at a glance[¶](#modules-at-a-glance) Let’s start by summarizing what module files are and how you can use them to modify your environment. The idea is to give enough information so that people without any previous exposure to them will be able to follow the tutorial later on. We’ll also give a high-level view of how module files are generated in Spack. If you are already familiar with these topics you can quickly skim through this section or move directly to [Setup for the tutorial](#module-file-tutorial-prerequisites). ##### What are module files?[¶](#what-are-module-files) Module files are an easy way to modify your environment in a controlled manner during a shell session. In general, they contain the information needed to run an application or use a library, and they work in conjunction with a tool that interprets them. Typical module files instruct this tool to modify environment variables when a module file is loaded: > ``` > $ module show zlib > --- > /home/mculpo/PycharmProjects/spack/share/spack/modules/linux-ubuntu14.04-x86_64/zlib/1.2.11-gcc-7.2.0-linux-ubuntu14.04-x86_64-co2px3k: > module-whatis A free, general-purpose, legally unencumbered lossless data-compression library.
> prepend-path MANPATH /home/mculpo/PycharmProjects/spack/opt/spack/linux-ubuntu14.04-x86_64/gcc-7.2.0/zlib-1.2.11-co2px3k53m76lm6tofylh2mur2hnicux/share/man > prepend-path LIBRARY_PATH /home/mculpo/PycharmProjects/spack/opt/spack/linux-ubuntu14.04-x86_64/gcc-7.2.0/zlib-1.2.11-co2px3k53m76lm6tofylh2mur2hnicux/lib > prepend-path LD_LIBRARY_PATH /home/mculpo/PycharmProjects/spack/opt/spack/linux-ubuntu14.04-x86_64/gcc-7.2.0/zlib-1.2.11-co2px3k53m76lm6tofylh2mur2hnicux/lib > prepend-path CPATH /home/mculpo/PycharmProjects/spack/opt/spack/linux-ubuntu14.04-x86_64/gcc-7.2.0/zlib-1.2.11-co2px3k53m76lm6tofylh2mur2hnicux/include > prepend-path PKG_CONFIG_PATH /home/mculpo/PycharmProjects/spack/opt/spack/linux-ubuntu14.04-x86_64/gcc-7.2.0/zlib-1.2.11-co2px3k53m76lm6tofylh2mur2hnicux/lib/pkgconfig > prepend-path CMAKE_PREFIX_PATH /home/mculpo/PycharmProjects/spack/opt/spack/linux-ubuntu14.04-x86_64/gcc-7.2.0/zlib-1.2.11-co2px3k53m76lm6tofylh2mur2hnicux/ > --- > $ echo $LD_LIBRARY_PATH > $ module load zlib > $ echo $LD_LIBRARY_PATH > /home/mculpo/PycharmProjects/spack/opt/spack/linux-ubuntu14.04-x86_64/gcc-7.2.0/zlib-1.2.11-co2px3k53m76lm6tofylh2mur2hnicux/lib > ``` and to undo the modifications when the same module file is unloaded: > ``` > $ module unload zlib > $ echo $LD_LIBRARY_PATH > $ > ``` Different formats exist for module files, and different tools provide various levels of support for them. Spack can natively generate: 1. Non-hierarchical module files written in TCL 2. Hierarchical module files written in Lua and can build [environment-modules](http://modules.sourceforge.net/) and [lmod](http://lmod.readthedocs.io/en/latest) as support tools. Which of the formats or tools best suits one’s needs depends on each particular use-case. For the sake of illustration, we’ll be working on both formats using `lmod`. See also Environment modules This is the original tool that provided modules support. 
Its first version was coded in C in the early ’90s and was later superseded by a version coded entirely in TCL - the one Spack is distributing. More details on its features are given in the [homepage of the project](http://modules.sourceforge.net/) or in its [GitHub page](https://github.com/cea-hpc/modules). The tool is able to interpret the non-hierarchical TCL modulefiles written by Spack. Lmod Lmod is a module system written in Lua, designed to easily handle hierarchies of module files. It’s a drop-in replacement for Environment Modules and works with both of the module file formats generated by Spack. Despite being fully compatible with Environment Modules, Lmod has many features of its own. These features are either [targeted towards safety](http://lmod.readthedocs.io/en/latest/010_user.html#safety-features) or meant to [extend the module system functionality](http://lmod.readthedocs.io/en/latest/010_user.html#module-hierarchy). ##### How do we generate module files?[¶](#how-do-we-generate-module-files) Before we dive into the hands-on sections, it’s worth spending a few words to explain how module files are generated by Spack. The following diagram provides a high-level view of the process: The red dashed line above represents Spack’s boundaries, the blue one Spack’s dependencies [[1]](#f1). Module files are generated by combining: > * the configuration details in `config.yaml` and `modules.yaml` > * the information contained in Spack packages (and processed by the module subpackage) > * a set of template files with [Jinja2](http://jinja.pocoo.org/docs/2.9/), an external template engine that stamps out each particular module file. As Spack serves very diverse needs, this process has many points of customization, and we’ll explore most of them in the next sections. | [[1]](#id1) | Spack vendors its dependencies! This means that Spack comes with a copy of each one of its dependencies, including `Jinja2`, and is already configured to use them.
| #### Setup for the tutorial[¶](#setup-for-the-tutorial) In order to showcase the capabilities of Spack’s module file generation, we need a representative set of software to work with. This set must include different flavors of the same packages installed alongside each other and some [external packages](index.html#sec-external-packages). The purpose of this setup is not to make our life harder but to demonstrate how Spack can help with similar situations, as they commonly arise on real HPC clusters. For instance, it’s often preferable for Spack to use vendor-provided MPI implementations rather than build one itself. To keep the set of software we’re dealing with manageable, we’re going to uninstall everything from earlier in the tutorial. ##### Build a module tool[¶](#build-a-module-tool) The first thing that we need is the module tool. In this case we choose `lmod` as it can work with both hierarchical and non-hierarchical module file layouts. ``` $ spack install lmod ``` Once the module tool is installed we need to have it available in the current shell. As the installation directories are definitely not easy to remember, we’ll employ the command `spack location` to retrieve the `lmod` prefix directly from Spack: ``` $ . $(spack location -i lmod)/lmod/lmod/init/bash ``` Now we can re-source the setup file and Spack modules will be put in our module path. ``` $ . share/spack/setup-env.sh ``` ##### Add a new compiler[¶](#add-a-new-compiler) The second step is to build a recent compiler. On first use, Spack scans the environment and automatically locates the compiler(s) already available on the system. For this tutorial, however, we want to use `gcc@7.2.0`. ``` $ spack install gcc@7.2.0 ... Wait a long time ... ``` Once `gcc` is installed we can use shell support to load it and make it readily available: ``` $ spack load gcc@7.2.0 ``` It may not be apparent, but the last command employed the module files generated automatically by Spack.
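`spack load gcc@7.2.0` worked because the spec was translated into a module file name such as `gcc-7.2.0-gcc-5.4.0-b7smjjc`: the package name and version, the compiler name and version, and the first seven characters of the spec’s hash. A minimal sketch of that default naming scheme follows; the helper function is hypothetical, and Spack’s real logic lives in its modules subpackage:

```python
def module_file_name(name, version, compiler_name, compiler_version, dag_hash):
    """Compose the default non-hierarchical module file name:
    <name>-<version>-<compiler>-<compiler-version>-<hash prefix>."""
    return f"{name}-{version}-{compiler_name}-{compiler_version}-{dag_hash[:7]}"

# The full hash here is the one visible in the gcc install prefix
# shown later in the tutorial.
print(module_file_name("gcc", "7.2.0", "gcc", "5.4.0",
                       "b7smjjcsmwe5u5fcsvjmonlhlzzctnfs"))
# prints: gcc-7.2.0-gcc-5.4.0-b7smjjc
```

The hash suffix is what keeps module names unique when several variants of the same package coexist.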
What happens under the hood when you use the `spack load` command is: 1. the spec passed as argument is translated into a module file name 2. the current module tool is used to load that module file You can use this command to double check: ``` $ module list Currently Loaded Modules: 1) gcc-7.2.0-gcc-5.4.0-b7smjjc ``` Note that the 7-digit hash at the end of the generated module may vary depending on architecture or package version. Now that we have `gcc@7.2.0` in `PATH` we can finally add it to the list of compilers known to Spack: ``` $ spack compiler add ==> Added 1 new compiler to /home/spack1/.spack/linux/compilers.yaml gcc@7.2.0 ==> Compilers are defined in the following files: /home/spack1/.spack/linux/compilers.yaml $ spack compiler list ==> Available compilers -- clang ubuntu16.04-x86_64 --- clang@3.8.0-2ubuntu4 clang@3.7.1-2ubuntu2 -- gcc ubuntu16.04-x86_64 --- gcc@7.2.0 gcc@5.4.0 gcc@4.7 ``` ##### Build the software that will be used in the tutorial[¶](#build-the-software-that-will-be-used-in-the-tutorial) Finally, we should use Spack to install the packages used in the examples: ``` $ spack install netlib-scalapack ^openmpi ^openblas $ spack install netlib-scalapack ^mpich ^openblas $ spack install netlib-scalapack ^openmpi ^netlib-lapack $ spack install netlib-scalapack ^mpich ^netlib-lapack $ spack install py-scipy ^openblas ``` #### Non-hierarchical module files[¶](#non-hierarchical-module-files) If you arrived to this point you should have an environment that looks similar to: ``` $ module avail --- /home/spack1/spack/share/spack/modules/linux-ubuntu16.04-x86_64 --- autoconf-2.69-gcc-5.4.0-3sx2gxe libsigsegv-2.11-gcc-7.2.0-g67xpfd openssl-1.0.2o-gcc-5.4.0-b4y3w3b autoconf-2.69-gcc-7.2.0-yb2makb libtool-2.4.6-gcc-5.4.0-o2pfwjf openssl-1.0.2o-gcc-7.2.0-cvldq3v automake-1.16.1-gcc-5.4.0-rymw7im libtool-2.4.6-gcc-7.2.0-kt2udm6 pcre-8.42-gcc-5.4.0-gt5lgzi automake-1.16.1-gcc-7.2.0-qoowd5q libxml2-2.9.8-gcc-5.4.0-wpexsph perl-5.26.2-gcc-5.4.0-ic2kyoa 
bzip2-1.0.6-gcc-5.4.0-ufczdvs libxml2-2.9.8-gcc-7.2.0-47gf5kk perl-5.26.2-gcc-7.2.0-fdwz5yu bzip2-1.0.6-gcc-7.2.0-mwamumj lmod-7.8-gcc-5.4.0-kmhks3p pkgconf-1.4.2-gcc-5.4.0-fovrh7a cmake-3.12.3-gcc-7.2.0-obqgn2v lua-5.3.4-gcc-5.4.0-cpfeo2w pkgconf-1.4.2-gcc-7.2.0-yoxwmgb curl-7.60.0-gcc-5.4.0-vzqreb2 lua-luafilesystem-1_6_3-gcc-5.4.0-alakjim py-numpy-1.15.2-gcc-7.2.0-wbwtcxf diffutils-3.6-gcc-5.4.0-2rhuivg lua-luaposix-33.4.0-gcc-5.4.0-7wqhwoc py-scipy-1.1.0-gcc-7.2.0-d5n3cph diffutils-3.6-gcc-7.2.0-eauxwi7 m4-1.4.18-gcc-5.4.0-suf5jtc py-setuptools-40.4.3-gcc-7.2.0-5dbwfwn expat-2.2.5-gcc-5.4.0-emyv67q m4-1.4.18-gcc-7.2.0-wdzvagl python-2.7.15-gcc-7.2.0-ucmr2mn findutils-4.6.0-gcc-7.2.0-ca4b7zq mpc-1.1.0-gcc-5.4.0-iuf3gc3 readline-7.0-gcc-5.4.0-nxhwrg7 gcc-7.2.0-gcc-5.4.0-b7smjjc (L) mpfr-3.1.6-gcc-5.4.0-jnt2nnp readline-7.0-gcc-7.2.0-ccruj2i gdbm-1.14.1-gcc-5.4.0-q4fpyuo mpich-3.2.1-gcc-7.2.0-vt5xcat sqlite-3.23.1-gcc-7.2.0-5ltus3a gdbm-1.14.1-gcc-7.2.0-zk5lhob ncurses-6.1-gcc-5.4.0-3o765ou tar-1.30-gcc-5.4.0-dk7lrpo gettext-0.19.8.1-gcc-5.4.0-tawgous ncurses-6.1-gcc-7.2.0-xcgzqdv tcl-8.6.8-gcc-5.4.0-qhwyccy git-2.19.1-gcc-5.4.0-p3gjnfa netlib-lapack-3.8.0-gcc-7.2.0-fj7nayd texinfo-6.5-gcc-7.2.0-cuqnfgf gmp-6.1.2-gcc-5.4.0-qc4qcfz netlib-scalapack-2.0.2-gcc-7.2.0-67nmj7g unzip-6.0-gcc-5.4.0-ba23fbg hwloc-1.11.9-gcc-7.2.0-gbyc65s netlib-scalapack-2.0.2-gcc-7.2.0-6jgjbyg util-macros-1.19.1-gcc-7.2.0-t62kozq isl-0.18-gcc-5.4.0-vttqout netlib-scalapack-2.0.2-gcc-7.2.0-prgo67d xz-5.2.4-gcc-5.4.0-teneqii libbsd-0.8.6-gcc-5.4.0-f4qkkwm netlib-scalapack-2.0.2-gcc-7.2.0-zxpt252 xz-5.2.4-gcc-7.2.0-rql5kog libiconv-1.15-gcc-5.4.0-u2x3umv numactl-2.0.11-gcc-7.2.0-rifwktk zlib-1.2.11-gcc-5.4.0-5nus6kn libpciaccess-0.13.5-gcc-7.2.0-riipwi2 openblas-0.3.3-gcc-7.2.0-xxoxfh4 zlib-1.2.11-gcc-7.2.0-ezuwp4p libsigsegv-2.11-gcc-5.4.0-fypapcp openmpi-3.1.3-gcc-7.2.0-do5xfer Where: L: Module is loaded Use "module spider" to find all possible modules. Use "module keyword key1 key2 ..." 
to search for all possible modules matching any of the "keys". ``` The non-hierarchical module files that have been generated so far follow [the default rules for module generation](index.html#modules-yaml). Taking a look at the `gcc` module you’ll see, for example: ``` $ module show gcc-7.2.0-gcc-5.4.0-b7smjjc --- /home/spack1/spack/share/spack/modules/linux-ubuntu16.04-x86_64/gcc-7.2.0-gcc-5.4.0-b7smjjc: --- whatis("The GNU Compiler Collection includes front ends for C, C++, Objective-C, Fortran, Ada, and Go, as well as libraries for these languages. ") prepend_path("PATH","/home/spack1/spack/opt/spack/linux-ubuntu16.04-x86_64/gcc-5.4.0/gcc-7.2.0-b7smjjcsmwe5u5fcsvjmonlhlzzctnfs/bin") prepend_path("MANPATH","/home/spack1/spack/opt/spack/linux-ubuntu16.04-x86_64/gcc-5.4.0/gcc-7.2.0-b7smjjcsmwe5u5fcsvjmonlhlzzctnfs/share/man") prepend_path("LD_LIBRARY_PATH","/home/spack1/spack/opt/spack/linux-ubuntu16.04-x86_64/gcc-5.4.0/gcc-7.2.0-b7smjjcsmwe5u5fcsvjmonlhlzzctnfs/lib") prepend_path("LIBRARY_PATH","/home/spack1/spack/opt/spack/linux-ubuntu16.04-x86_64/gcc-5.4.0/gcc-7.2.0-b7smjjcsmwe5u5fcsvjmonlhlzzctnfs/lib") prepend_path("LD_LIBRARY_PATH","/home/spack1/spack/opt/spack/linux-ubuntu16.04-x86_64/gcc-5.4.0/gcc-7.2.0-b7smjjcsmwe5u5fcsvjmonlhlzzctnfs/lib64") prepend_path("LIBRARY_PATH","/home/spack1/spack/opt/spack/linux-ubuntu16.04-x86_64/gcc-5.4.0/gcc-7.2.0-b7smjjcsmwe5u5fcsvjmonlhlzzctnfs/lib64") prepend_path("CPATH","/home/spack1/spack/opt/spack/linux-ubuntu16.04-x86_64/gcc-5.4.0/gcc-7.2.0-b7smjjcsmwe5u5fcsvjmonlhlzzctnfs/include") prepend_path("CMAKE_PREFIX_PATH","/home/spack1/spack/opt/spack/linux-ubuntu16.04-x86_64/gcc-5.4.0/gcc-7.2.0-b7smjjcsmwe5u5fcsvjmonlhlzzctnfs/") setenv("CC","/home/spack1/spack/opt/spack/linux-ubuntu16.04-x86_64/gcc-5.4.0/gcc-7.2.0-b7smjjcsmwe5u5fcsvjmonlhlzzctnfs/bin/gcc") setenv("CXX","/home/spack1/spack/opt/spack/linux-ubuntu16.04-x86_64/gcc-5.4.0/gcc-7.2.0-b7smjjcsmwe5u5fcsvjmonlhlzzctnfs/bin/g++") 
setenv("FC","/home/spack1/spack/opt/spack/linux-ubuntu16.04-x86_64/gcc-5.4.0/gcc-7.2.0-b7smjjcsmwe5u5fcsvjmonlhlzzctnfs/bin/gfortran")
setenv("F77","/home/spack1/spack/opt/spack/linux-ubuntu16.04-x86_64/gcc-5.4.0/gcc-7.2.0-b7smjjcsmwe5u5fcsvjmonlhlzzctnfs/bin/gfortran")
setenv("F90","/home/spack1/spack/opt/spack/linux-ubuntu16.04-x86_64/gcc-5.4.0/gcc-7.2.0-b7smjjcsmwe5u5fcsvjmonlhlzzctnfs/bin/gfortran")
help([[The GNU Compiler Collection includes front ends for C, C++, Objective-C, Fortran, Ada, and Go, as well as libraries for these languages.
]])
```

As expected, a few environment variables representing paths will be modified by the module file according to the default prefix inspection rules.

##### Filter unwanted modifications to the environment[¶](#filter-unwanted-modifications-to-the-environment)

Now suppose your site has decided that `CPATH` and `LIBRARY_PATH` modifications should not appear in module files. To comply with this rule, create a configuration file `~/.spack/modules.yaml` with the following content:

```
modules:
  tcl:
    all:
      filter:
        environment_blacklist: ['CPATH', 'LIBRARY_PATH']
```

Next, regenerate all the module files:

```
$ spack module tcl refresh
==> You are about to regenerate tcl module files for:

-- linux-ubuntu16.04-x86_64 / gcc@5.4.0 ---
3sx2gxe autoconf@2.69    b7smjjc gcc@7.2.0         f4qkkwm libbsd@0.8.6     cpfeo2w lua@5.3.4                b4y3w3b openssl@1.0.2o   dk7lrpo tar@1.30
rymw7im automake@1.16.1  q4fpyuo gdbm@1.14.1       u2x3umv libiconv@1.15    alakjim lua-luafilesystem@1_6_3  3o765ou ncurses@6.1      qhwyccy tcl@8.6.8
ufczdvs bzip2@1.0.6      tawgous gettext@0.19.8.1  fypapcp libsigsegv@2.11  7wqhwoc lua-luaposix@33.4.0      gt5lgzi pcre@8.42        ba23fbg unzip@6.0
vzqreb2 curl@7.60.0      p3gjnfa git@2.19.1        o2pfwjf libtool@2.4.6    suf5jtc m4@1.4.18                ic2kyoa perl@5.26.2      teneqii xz@5.2.4
2rhuivg diffutils@3.6    qc4qcfz gmp@6.1.2         wpexsph libxml2@2.9.8    iuf3gc3 mpc@1.1.0                fovrh7a pkgconf@1.4.2    5nus6kn zlib@1.2.11
emyv67q expat@2.2.5      vttqout isl@0.18
kmhks3p lmod@7.8         jnt2nnp mpfr@3.1.6        nxhwrg7 readline@7.0

-- linux-ubuntu16.04-x86_64 / gcc@7.2.0 ---
yb2makb autoconf@2.69    riipwi2 libpciaccess@0.13.5  6jgjbyg netlib-scalapack@2.0.2  fdwz5yu perl@5.26.2          cuqnfgf texinfo@6.5
qoowd5q automake@1.16.1  g67xpfd libsigsegv@2.11      zxpt252 netlib-scalapack@2.0.2  yoxwmgb pkgconf@1.4.2        t62kozq util-macros@1.19.1
mwamumj bzip2@1.0.6      kt2udm6 libtool@2.4.6        67nmj7g netlib-scalapack@2.0.2  wbwtcxf py-numpy@1.15.2      rql5kog xz@5.2.4
obqgn2v cmake@3.12.3     47gf5kk libxml2@2.9.8        prgo67d netlib-scalapack@2.0.2  d5n3cph py-scipy@1.1.0       ezuwp4p zlib@1.2.11
eauxwi7 diffutils@3.6    wdzvagl m4@1.4.18            rifwktk numactl@2.0.11          5dbwfwn py-setuptools@40.4.3
ca4b7zq findutils@4.6.0  vt5xcat mpich@3.2.1          xxoxfh4 openblas@0.3.3          ucmr2mn python@2.7.15
zk5lhob gdbm@1.14.1      xcgzqdv ncurses@6.1          do5xfer openmpi@3.1.3           ccruj2i readline@7.0
gbyc65s hwloc@1.11.9     fj7nayd netlib-lapack@3.8.0  cvldq3v openssl@1.0.2o          5ltus3a sqlite@3.23.1

==> Do you want to proceed? [y/n] y
==> Regenerating tcl module files
```

If you take a look now at the module for `gcc` you'll see that the unwanted paths have disappeared:

```
$ module show gcc-7.2.0-gcc-5.4.0-b7smjjc
---
/home/spack1/spack/share/spack/modules/linux-ubuntu16.04-x86_64/gcc-7.2.0-gcc-5.4.0-b7smjjc:
---
whatis("The GNU Compiler Collection includes front ends for C, C++, Objective-C, Fortran, Ada, and Go, as well as libraries for these languages.
")
prepend_path("PATH","/home/spack1/spack/opt/spack/linux-ubuntu16.04-x86_64/gcc-5.4.0/gcc-7.2.0-b7smjjcsmwe5u5fcsvjmonlhlzzctnfs/bin")
prepend_path("MANPATH","/home/spack1/spack/opt/spack/linux-ubuntu16.04-x86_64/gcc-5.4.0/gcc-7.2.0-b7smjjcsmwe5u5fcsvjmonlhlzzctnfs/share/man")
prepend_path("LD_LIBRARY_PATH","/home/spack1/spack/opt/spack/linux-ubuntu16.04-x86_64/gcc-5.4.0/gcc-7.2.0-b7smjjcsmwe5u5fcsvjmonlhlzzctnfs/lib")
prepend_path("LD_LIBRARY_PATH","/home/spack1/spack/opt/spack/linux-ubuntu16.04-x86_64/gcc-5.4.0/gcc-7.2.0-b7smjjcsmwe5u5fcsvjmonlhlzzctnfs/lib64")
prepend_path("CMAKE_PREFIX_PATH","/home/spack1/spack/opt/spack/linux-ubuntu16.04-x86_64/gcc-5.4.0/gcc-7.2.0-b7smjjcsmwe5u5fcsvjmonlhlzzctnfs/")
setenv("CC","/home/spack1/spack/opt/spack/linux-ubuntu16.04-x86_64/gcc-5.4.0/gcc-7.2.0-b7smjjcsmwe5u5fcsvjmonlhlzzctnfs/bin/gcc")
setenv("CXX","/home/spack1/spack/opt/spack/linux-ubuntu16.04-x86_64/gcc-5.4.0/gcc-7.2.0-b7smjjcsmwe5u5fcsvjmonlhlzzctnfs/bin/g++")
setenv("FC","/home/spack1/spack/opt/spack/linux-ubuntu16.04-x86_64/gcc-5.4.0/gcc-7.2.0-b7smjjcsmwe5u5fcsvjmonlhlzzctnfs/bin/gfortran")
setenv("F77","/home/spack1/spack/opt/spack/linux-ubuntu16.04-x86_64/gcc-5.4.0/gcc-7.2.0-b7smjjcsmwe5u5fcsvjmonlhlzzctnfs/bin/gfortran")
setenv("F90","/home/spack1/spack/opt/spack/linux-ubuntu16.04-x86_64/gcc-5.4.0/gcc-7.2.0-b7smjjcsmwe5u5fcsvjmonlhlzzctnfs/bin/gfortran")
help([[The GNU Compiler Collection includes front ends for C, C++, Objective-C, Fortran, Ada, and Go, as well as libraries for these languages.
]])
```

##### Prevent some module files from being generated[¶](#prevent-some-module-files-from-being-generated)

Another common request at many sites is to avoid exposing software that is only needed as an intermediate step when building a newer stack. Let's try to prevent the generation of module files for anything that is compiled with `gcc@5.4.0` (the OS provided compiler).
To do this, add a `blacklist` keyword to `~/.spack/modules.yaml`:

```
modules:
  tcl:
    blacklist:
      - '%gcc@5.4.0'
    all:
      filter:
        environment_blacklist: ['CPATH', 'LIBRARY_PATH']
```

and regenerate the module files. This time it is convenient to pass the option `--delete-tree` to the command, instructing it to delete the existing tree and generate a new one, instead of overwriting the files in the existing directory:

```
$ spack module tcl refresh --delete-tree
==> You are about to regenerate tcl module files for:

-- linux-ubuntu16.04-x86_64 / gcc@5.4.0 ---
3sx2gxe autoconf@2.69    b7smjjc gcc@7.2.0         f4qkkwm libbsd@0.8.6     cpfeo2w lua@5.3.4                b4y3w3b openssl@1.0.2o   dk7lrpo tar@1.30
rymw7im automake@1.16.1  q4fpyuo gdbm@1.14.1       u2x3umv libiconv@1.15    alakjim lua-luafilesystem@1_6_3  3o765ou ncurses@6.1      qhwyccy tcl@8.6.8
ufczdvs bzip2@1.0.6      tawgous gettext@0.19.8.1  fypapcp libsigsegv@2.11  7wqhwoc lua-luaposix@33.4.0      gt5lgzi pcre@8.42        ba23fbg unzip@6.0
vzqreb2 curl@7.60.0      p3gjnfa git@2.19.1        o2pfwjf libtool@2.4.6    suf5jtc m4@1.4.18                ic2kyoa perl@5.26.2      teneqii xz@5.2.4
2rhuivg diffutils@3.6    qc4qcfz gmp@6.1.2         wpexsph libxml2@2.9.8    iuf3gc3 mpc@1.1.0                fovrh7a pkgconf@1.4.2    5nus6kn zlib@1.2.11
emyv67q expat@2.2.5      vttqout isl@0.18          kmhks3p lmod@7.8         jnt2nnp mpfr@3.1.6               nxhwrg7 readline@7.0

-- linux-ubuntu16.04-x86_64 / gcc@7.2.0 ---
yb2makb autoconf@2.69    riipwi2 libpciaccess@0.13.5  6jgjbyg netlib-scalapack@2.0.2  fdwz5yu perl@5.26.2          cuqnfgf texinfo@6.5
qoowd5q automake@1.16.1  g67xpfd libsigsegv@2.11      zxpt252 netlib-scalapack@2.0.2  yoxwmgb pkgconf@1.4.2        t62kozq util-macros@1.19.1
mwamumj bzip2@1.0.6      kt2udm6 libtool@2.4.6        67nmj7g netlib-scalapack@2.0.2  wbwtcxf py-numpy@1.15.2      rql5kog xz@5.2.4
obqgn2v cmake@3.12.3     47gf5kk libxml2@2.9.8        prgo67d netlib-scalapack@2.0.2  d5n3cph py-scipy@1.1.0       ezuwp4p zlib@1.2.11
eauxwi7 diffutils@3.6    wdzvagl m4@1.4.18            rifwktk numactl@2.0.11          5dbwfwn py-setuptools@40.4.3
ca4b7zq findutils@4.6.0  vt5xcat mpich@3.2.1          xxoxfh4 openblas@0.3.3          ucmr2mn python@2.7.15
zk5lhob gdbm@1.14.1      xcgzqdv ncurses@6.1          do5xfer openmpi@3.1.3           ccruj2i readline@7.0
gbyc65s hwloc@1.11.9     fj7nayd netlib-lapack@3.8.0  cvldq3v openssl@1.0.2o          5ltus3a sqlite@3.23.1

==> Do you want to proceed? [y/n] y
==> Regenerating tcl module files

$ module avail

--- /home/spack1/spack/share/spack/modules/linux-ubuntu16.04-x86_64 ---
autoconf-2.69-gcc-7.2.0-yb2makb        m4-1.4.18-gcc-7.2.0-wdzvagl               perl-5.26.2-gcc-7.2.0-fdwz5yu
automake-1.16.1-gcc-7.2.0-qoowd5q      mpich-3.2.1-gcc-7.2.0-vt5xcat             pkgconf-1.4.2-gcc-7.2.0-yoxwmgb
bzip2-1.0.6-gcc-7.2.0-mwamumj          ncurses-6.1-gcc-7.2.0-xcgzqdv             py-numpy-1.15.2-gcc-7.2.0-wbwtcxf
cmake-3.12.3-gcc-7.2.0-obqgn2v         netlib-lapack-3.8.0-gcc-7.2.0-fj7nayd     py-scipy-1.1.0-gcc-7.2.0-d5n3cph
diffutils-3.6-gcc-7.2.0-eauxwi7        netlib-scalapack-2.0.2-gcc-7.2.0-67nmj7g  py-setuptools-40.4.3-gcc-7.2.0-5dbwfwn
findutils-4.6.0-gcc-7.2.0-ca4b7zq      netlib-scalapack-2.0.2-gcc-7.2.0-6jgjbyg  python-2.7.15-gcc-7.2.0-ucmr2mn
gdbm-1.14.1-gcc-7.2.0-zk5lhob          netlib-scalapack-2.0.2-gcc-7.2.0-prgo67d  readline-7.0-gcc-7.2.0-ccruj2i
hwloc-1.11.9-gcc-7.2.0-gbyc65s         netlib-scalapack-2.0.2-gcc-7.2.0-zxpt252  sqlite-3.23.1-gcc-7.2.0-5ltus3a
libpciaccess-0.13.5-gcc-7.2.0-riipwi2  numactl-2.0.11-gcc-7.2.0-rifwktk          texinfo-6.5-gcc-7.2.0-cuqnfgf
libsigsegv-2.11-gcc-7.2.0-g67xpfd      openblas-0.3.3-gcc-7.2.0-xxoxfh4          util-macros-1.19.1-gcc-7.2.0-t62kozq
libtool-2.4.6-gcc-7.2.0-kt2udm6        openmpi-3.1.3-gcc-7.2.0-do5xfer           xz-5.2.4-gcc-7.2.0-rql5kog
libxml2-2.9.8-gcc-7.2.0-47gf5kk        openssl-1.0.2o-gcc-7.2.0-cvldq3v          zlib-1.2.11-gcc-7.2.0-ezuwp4p

Use "module spider" to find all possible modules.
Use "module keyword key1 key2 ..." to search for all possible modules matching any of the "keys".
```

If you look closely, though, you'll see that we went too far in blacklisting modules: the module for `gcc@7.2.0` disappeared, because it was itself bootstrapped with `gcc@5.4.0`.
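Why did `gcc@7.2.0` vanish? Because it satisfies the constraint `%gcc@5.4.0`: it was *built with* that compiler. The idea can be sketched in a few lines of Python with a deliberately simplified, hypothetical `satisfies` helper (Spack's real `Spec.satisfies()` handles versions, variants, architectures, and much more):

```python
# Simplified model: a blacklist entry like '%gcc@5.4.0' matches any spec
# *built with* that compiler, regardless of the package's own name/version.
def satisfies(spec, constraint):
    if constraint.startswith('%'):        # compiler constraint
        return constraint in spec         # '%gcc@5.4.0' appears in the spec string
    return spec.startswith(constraint)    # plain package-name constraint

def blacklisted(spec, blacklist):
    return any(satisfies(spec, c) for c in blacklist)

blacklist = ['%gcc@5.4.0']
# gcc 7.2.0 itself was bootstrapped with gcc 5.4.0, so it gets filtered out:
print(blacklisted('gcc@7.2.0%gcc@5.4.0', blacklist))    # True
# software rebuilt with the new compiler keeps its module:
print(blacklisted('zlib@1.2.11%gcc@7.2.0', blacklist))  # False
```

The fix, shown next, is to make an explicit exception for `gcc`.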
To specify exceptions to the blacklist rules you can use `whitelist`:

```
modules:
  tcl:
    whitelist:
      - gcc
    blacklist:
      - '%gcc@5.4.0'
    all:
      filter:
        environment_blacklist: ['CPATH', 'LIBRARY_PATH']
```

`whitelist` rules always take precedence over `blacklist` rules. If you regenerate the modules again:

```
$ spack module tcl refresh -y
==> Regenerating tcl module files
```

you'll see that the module for `gcc@7.2.0` has reappeared:

```
$ module avail gcc-7.2.0-gcc-5.4.0-b7smjjc

--- /home/spack1/spack/share/spack/modules/linux-ubuntu16.04-x86_64 ---
gcc-7.2.0-gcc-5.4.0-b7smjjc

Use "module spider" to find all possible modules.
Use "module keyword key1 key2 ..." to search for all possible modules matching any of the "keys".
```

An additional way to unclutter the environment is to prevent the generation of module files for implicitly installed packages. All you need to do is add a `blacklist_implicits: true` line:

```
modules:
  tcl:
    blacklist_implicits: true
    whitelist:
      - gcc
    blacklist:
      - '%gcc@5.4.0'
    all:
      filter:
        environment_blacklist: ['CPATH', 'LIBRARY_PATH']
```

to `modules.yaml` and regenerate the module file tree as above.

##### Change module file naming[¶](#change-module-file-naming)

The next step in making module files more user-friendly is to improve their naming scheme.
To reduce the length of the hash, or remove it altogether, you can use the `hash_length` keyword in the configuration file:

```
modules:
  tcl:
    hash_length: 0
    whitelist:
      - gcc
    blacklist:
      - '%gcc@5.4.0'
    all:
      filter:
        environment_blacklist: ['CPATH', 'LIBRARY_PATH']
```

If you try to regenerate the module files now you will get an error:

```
$ spack module tcl refresh --delete-tree -y
==> Error: Name clashes detected in module files:

file: /home/spack1/spack/share/spack/modules/linux-ubuntu16.04-x86_64/netlib-scalapack-2.0.2-gcc-7.2.0
spec: netlib-scalapack@2.0.2%gcc@7.2.0 build_type=RelWithDebInfo ~pic+shared arch=linux-ubuntu16.04-x86_64
spec: netlib-scalapack@2.0.2%gcc@7.2.0 build_type=RelWithDebInfo ~pic+shared arch=linux-ubuntu16.04-x86_64
spec: netlib-scalapack@2.0.2%gcc@7.2.0 build_type=RelWithDebInfo ~pic+shared arch=linux-ubuntu16.04-x86_64
spec: netlib-scalapack@2.0.2%gcc@7.2.0 build_type=RelWithDebInfo ~pic+shared arch=linux-ubuntu16.04-x86_64

==> Error: Operation aborted
```

Note

We try to check for errors upfront! In Spack we check for errors upfront whenever possible, so don't worry about your module files: since a name clash was detected, nothing has been changed on disk.

The problem here is that without the hashes the four different flavors of `netlib-scalapack` map to the same module file name. We can add suffixes to differentiate them:

```
modules:
  tcl:
    hash_length: 0
    whitelist:
      - gcc
    blacklist:
      - '%gcc@5.4.0'
    all:
      suffixes:
        '^openblas': openblas
        '^netlib-lapack': netlib
      filter:
        environment_blacklist: ['CPATH', 'LIBRARY_PATH']
    netlib-scalapack:
      suffixes:
        '^openmpi': openmpi
        '^mpich': mpich
```

As you can see, it is possible to specify rules that apply only to a restricted set of packages using [anonymous specs](index.html#anonymous-specs).
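Putting `hash_length` and `suffixes` together, the file-name computation can be pictured with a short Python sketch. The helper below is hypothetical (Spack derives all of this from the real `Spec` object), but it reproduces why the four `netlib-scalapack` flavors clash without suffixes and stay distinct with them:

```python
# Sketch: module file name = name-version-compiler [+ suffixes] [+ truncated hash].
# Hypothetical helper, not Spack's implementation.
def module_name(name, version, compiler, full_hash, hash_length, suffixes=()):
    parts = ['{}-{}-{}'.format(name, version, compiler)]
    parts += list(suffixes)            # e.g. 'openblas', 'mpich' from the config
    if hash_length > 0:
        parts.append(full_hash[:hash_length])
    return '-'.join(parts)

# hash_length: 0 and no suffixes -> every flavor collapses to the same name:
print(module_name('netlib-scalapack', '2.0.2', 'gcc-7.2.0', '67nmj7g', 0))
# with suffixes the flavors remain distinguishable:
print(module_name('netlib-scalapack', '2.0.2', 'gcc-7.2.0', '67nmj7g', 0,
                  ('openblas', 'mpich')))
```

This prints `netlib-scalapack-2.0.2-gcc-7.2.0` and `netlib-scalapack-2.0.2-gcc-7.2.0-openblas-mpich`, matching the names in the listing below.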
Regenerating the module files now, we obtain:

```
$ spack module tcl refresh --delete-tree -y
==> Regenerating tcl module files

$ module avail

--- /home/spack1/spack/share/spack/modules/linux-ubuntu16.04-x86_64 ---
autoconf-2.69-gcc-7.2.0        m4-1.4.18-gcc-7.2.0                              pkgconf-1.4.2-gcc-7.2.0
automake-1.16.1-gcc-7.2.0      mpich-3.2.1-gcc-7.2.0                            py-numpy-1.15.2-gcc-7.2.0-openblas
bzip2-1.0.6-gcc-7.2.0          ncurses-6.1-gcc-7.2.0                            py-scipy-1.1.0-gcc-7.2.0-openblas
cmake-3.12.3-gcc-7.2.0         netlib-lapack-3.8.0-gcc-7.2.0                    py-setuptools-40.4.3-gcc-7.2.0
diffutils-3.6-gcc-7.2.0        netlib-scalapack-2.0.2-gcc-7.2.0-netlib-mpich    python-2.7.15-gcc-7.2.0
findutils-4.6.0-gcc-7.2.0      netlib-scalapack-2.0.2-gcc-7.2.0-netlib-openmpi  readline-7.0-gcc-7.2.0
gcc-7.2.0-gcc-5.4.0            netlib-scalapack-2.0.2-gcc-7.2.0-openblas-mpich  sqlite-3.23.1-gcc-7.2.0
gdbm-1.14.1-gcc-7.2.0          netlib-scalapack-2.0.2-gcc-7.2.0-openblas-openmpi  texinfo-6.5-gcc-7.2.0
hwloc-1.11.9-gcc-7.2.0         numactl-2.0.11-gcc-7.2.0                         util-macros-1.19.1-gcc-7.2.0
libpciaccess-0.13.5-gcc-7.2.0  openblas-0.3.3-gcc-7.2.0                         xz-5.2.4-gcc-7.2.0
libsigsegv-2.11-gcc-7.2.0      openmpi-3.1.3-gcc-7.2.0                          zlib-1.2.11-gcc-7.2.0
libtool-2.4.6-gcc-7.2.0        openssl-1.0.2o-gcc-7.2.0
libxml2-2.9.8-gcc-7.2.0        perl-5.26.2-gcc-7.2.0

Use "module spider" to find all possible modules.
Use "module keyword key1 key2 ..." to search for all possible modules matching any of the "keys".
```

Finally, we can set a `naming_scheme` to prevent users from loading modules that refer to different flavors of the same library/application:

```
modules:
  tcl:
    hash_length: 0
    naming_scheme: '${PACKAGE}/${VERSION}-${COMPILERNAME}-${COMPILERVER}'
    whitelist:
      - gcc
    blacklist:
      - '%gcc@5.4.0'
    all:
      conflict:
        - '${PACKAGE}'
      suffixes:
        '^openblas': openblas
        '^netlib-lapack': netlib
      filter:
        environment_blacklist: ['CPATH', 'LIBRARY_PATH']
    netlib-scalapack:
      suffixes:
        '^openmpi': openmpi
        '^mpich': mpich
```

The final result should look like:

```
$ spack module tcl refresh --delete-tree -y
==> Regenerating tcl module files

$ module avail

--- /home/spack1/spack/share/spack/modules/linux-ubuntu16.04-x86_64 ---
autoconf/2.69-gcc-7.2.0        m4/1.4.18-gcc-7.2.0                              pkgconf/1.4.2-gcc-7.2.0
automake/1.16.1-gcc-7.2.0      mpich/3.2.1-gcc-7.2.0                            py-numpy/1.15.2-gcc-7.2.0-openblas
bzip2/1.0.6-gcc-7.2.0          ncurses/6.1-gcc-7.2.0                            py-scipy/1.1.0-gcc-7.2.0-openblas
cmake/3.12.3-gcc-7.2.0         netlib-lapack/3.8.0-gcc-7.2.0                    py-setuptools/40.4.3-gcc-7.2.0
diffutils/3.6-gcc-7.2.0        netlib-scalapack/2.0.2-gcc-7.2.0-netlib-mpich    python/2.7.15-gcc-7.2.0
findutils/4.6.0-gcc-7.2.0      netlib-scalapack/2.0.2-gcc-7.2.0-netlib-openmpi  readline/7.0-gcc-7.2.0
gcc/7.2.0-gcc-5.4.0            netlib-scalapack/2.0.2-gcc-7.2.0-openblas-mpich  sqlite/3.23.1-gcc-7.2.0
gdbm/1.14.1-gcc-7.2.0          netlib-scalapack/2.0.2-gcc-7.2.0-openblas-openmpi (D)  texinfo/6.5-gcc-7.2.0
hwloc/1.11.9-gcc-7.2.0         numactl/2.0.11-gcc-7.2.0                         util-macros/1.19.1-gcc-7.2.0
libpciaccess/0.13.5-gcc-7.2.0  openblas/0.3.3-gcc-7.2.0                         xz/5.2.4-gcc-7.2.0
libsigsegv/2.11-gcc-7.2.0      openmpi/3.1.3-gcc-7.2.0                          zlib/1.2.11-gcc-7.2.0
libtool/2.4.6-gcc-7.2.0        openssl/1.0.2o-gcc-7.2.0
libxml2/2.9.8-gcc-7.2.0        perl/5.26.2-gcc-7.2.0

Where:
 D:  Default Module

Use "module spider" to find all possible modules.
Use "module keyword key1 key2 ..." to search for all possible modules matching any of the "keys".
```

Note

TCL specific directive: the directives `naming_scheme` and `conflict` are TCL specific and can't be used in the `lmod` section of the configuration file.

##### Add custom environment modifications[¶](#add-custom-environment-modifications)

At many sites it is customary to set an environment variable in a package's module file that points to the folder in which the package is installed. You can achieve this with Spack by adding an `environment` directive to the configuration file:

```
modules:
  tcl:
    hash_length: 0
    naming_scheme: '${PACKAGE}/${VERSION}-${COMPILERNAME}-${COMPILERVER}'
    whitelist:
      - gcc
    blacklist:
      - '%gcc@5.4.0'
    all:
      conflict:
        - '${PACKAGE}'
      suffixes:
        '^openblas': openblas
        '^netlib-lapack': netlib
      filter:
        environment_blacklist: ['CPATH', 'LIBRARY_PATH']
      environment:
        set:
          '${PACKAGE}_ROOT': '${PREFIX}'
    netlib-scalapack:
      suffixes:
        '^openmpi': openmpi
        '^mpich': mpich
```

Under the hood Spack uses the [`format()`](index.html#spack.spec.Spec.format) API to substitute tokens in either environment variable names or values. There are two caveats though:

* The set of allowed tokens in variable names is restricted to `PACKAGE`, `VERSION`, `COMPILER`, `COMPILERNAME`, `COMPILERVER`, `ARCHITECTURE`
* Any token expanded in a variable name is made uppercase, but other than that case sensitivity is preserved

Regenerating the module files results in something like:

```
$ spack module tcl refresh -y
==> Regenerating tcl module files

$ module show gcc
---
/home/spack1/spack/share/spack/modules/linux-ubuntu16.04-x86_64/gcc/7.2.0-gcc-5.4.0:
---
whatis("The GNU Compiler Collection includes front ends for C, C++, Objective-C, Fortran, Ada, and Go, as well as libraries for these languages.
")
conflict("gcc")
prepend_path("PATH","/home/spack1/spack/opt/spack/linux-ubuntu16.04-x86_64/gcc-5.4.0/gcc-7.2.0-b7smjjcsmwe5u5fcsvjmonlhlzzctnfs/bin")
prepend_path("MANPATH","/home/spack1/spack/opt/spack/linux-ubuntu16.04-x86_64/gcc-5.4.0/gcc-7.2.0-b7smjjcsmwe5u5fcsvjmonlhlzzctnfs/share/man")
prepend_path("LD_LIBRARY_PATH","/home/spack1/spack/opt/spack/linux-ubuntu16.04-x86_64/gcc-5.4.0/gcc-7.2.0-b7smjjcsmwe5u5fcsvjmonlhlzzctnfs/lib")
prepend_path("LD_LIBRARY_PATH","/home/spack1/spack/opt/spack/linux-ubuntu16.04-x86_64/gcc-5.4.0/gcc-7.2.0-b7smjjcsmwe5u5fcsvjmonlhlzzctnfs/lib64")
prepend_path("CMAKE_PREFIX_PATH","/home/spack1/spack/opt/spack/linux-ubuntu16.04-x86_64/gcc-5.4.0/gcc-7.2.0-b7smjjcsmwe5u5fcsvjmonlhlzzctnfs/")
setenv("CC","/home/spack1/spack/opt/spack/linux-ubuntu16.04-x86_64/gcc-5.4.0/gcc-7.2.0-b7smjjcsmwe5u5fcsvjmonlhlzzctnfs/bin/gcc")
setenv("CXX","/home/spack1/spack/opt/spack/linux-ubuntu16.04-x86_64/gcc-5.4.0/gcc-7.2.0-b7smjjcsmwe5u5fcsvjmonlhlzzctnfs/bin/g++")
setenv("FC","/home/spack1/spack/opt/spack/linux-ubuntu16.04-x86_64/gcc-5.4.0/gcc-7.2.0-b7smjjcsmwe5u5fcsvjmonlhlzzctnfs/bin/gfortran")
setenv("F77","/home/spack1/spack/opt/spack/linux-ubuntu16.04-x86_64/gcc-5.4.0/gcc-7.2.0-b7smjjcsmwe5u5fcsvjmonlhlzzctnfs/bin/gfortran")
setenv("F90","/home/spack1/spack/opt/spack/linux-ubuntu16.04-x86_64/gcc-5.4.0/gcc-7.2.0-b7smjjcsmwe5u5fcsvjmonlhlzzctnfs/bin/gfortran")
setenv("GCC_ROOT","/home/spack1/spack/opt/spack/linux-ubuntu16.04-x86_64/gcc-5.4.0/gcc-7.2.0-b7smjjcsmwe5u5fcsvjmonlhlzzctnfs")
help([[The GNU Compiler Collection includes front ends for C, C++, Objective-C, Fortran, Ada, and Go, as well as libraries for these languages.
]])
```

As you can see, the `gcc` module has the environment variable `GCC_ROOT` set. Sometimes it's also useful to apply environment modifications selectively and target only certain packages. You can, for instance, set the common variables `CC`, `CXX`, etc.
in the `gcc` module file and apply other custom modifications to the `openmpi` modules as follows:

```
modules:
  tcl:
    hash_length: 0
    naming_scheme: '${PACKAGE}/${VERSION}-${COMPILERNAME}-${COMPILERVER}'
    whitelist:
      - gcc
    blacklist:
      - '%gcc@5.4.0'
    all:
      conflict:
        - '${PACKAGE}'
      suffixes:
        '^openblas': openblas
        '^netlib-lapack': netlib
      filter:
        environment_blacklist: ['CPATH', 'LIBRARY_PATH']
      environment:
        set:
          '${PACKAGE}_ROOT': '${PREFIX}'
    gcc:
      environment:
        set:
          CC: gcc
          CXX: g++
          FC: gfortran
          F90: gfortran
          F77: gfortran
    openmpi:
      environment:
        set:
          SLURM_MPI_TYPE: pmi2
          OMPI_MCA_btl_openib_warn_default_gid_prefix: '0'
    netlib-scalapack:
      suffixes:
        '^openmpi': openmpi
        '^mpich': mpich
```

This time we will be more selective and regenerate only the `gcc` and `openmpi` module files:

```
$ spack module tcl refresh -y gcc
==> Regenerating tcl module files

$ spack module tcl refresh -y openmpi
==> Regenerating tcl module files

$ module show gcc
---
/home/spack1/spack/share/spack/modules/linux-ubuntu16.04-x86_64/gcc/7.2.0-gcc-5.4.0:
---
whatis("The GNU Compiler Collection includes front ends for C, C++, Objective-C, Fortran, Ada, and Go, as well as libraries for these languages.
")
conflict("gcc")
prepend_path("PATH","/home/spack1/spack/opt/spack/linux-ubuntu16.04-x86_64/gcc-5.4.0/gcc-7.2.0-b7smjjcsmwe5u5fcsvjmonlhlzzctnfs/bin")
prepend_path("MANPATH","/home/spack1/spack/opt/spack/linux-ubuntu16.04-x86_64/gcc-5.4.0/gcc-7.2.0-b7smjjcsmwe5u5fcsvjmonlhlzzctnfs/share/man")
prepend_path("LD_LIBRARY_PATH","/home/spack1/spack/opt/spack/linux-ubuntu16.04-x86_64/gcc-5.4.0/gcc-7.2.0-b7smjjcsmwe5u5fcsvjmonlhlzzctnfs/lib")
prepend_path("LD_LIBRARY_PATH","/home/spack1/spack/opt/spack/linux-ubuntu16.04-x86_64/gcc-5.4.0/gcc-7.2.0-b7smjjcsmwe5u5fcsvjmonlhlzzctnfs/lib64")
prepend_path("CMAKE_PREFIX_PATH","/home/spack1/spack/opt/spack/linux-ubuntu16.04-x86_64/gcc-5.4.0/gcc-7.2.0-b7smjjcsmwe5u5fcsvjmonlhlzzctnfs/")
setenv("CC","/home/spack1/spack/opt/spack/linux-ubuntu16.04-x86_64/gcc-5.4.0/gcc-7.2.0-b7smjjcsmwe5u5fcsvjmonlhlzzctnfs/bin/gcc")
setenv("CXX","/home/spack1/spack/opt/spack/linux-ubuntu16.04-x86_64/gcc-5.4.0/gcc-7.2.0-b7smjjcsmwe5u5fcsvjmonlhlzzctnfs/bin/g++")
setenv("FC","/home/spack1/spack/opt/spack/linux-ubuntu16.04-x86_64/gcc-5.4.0/gcc-7.2.0-b7smjjcsmwe5u5fcsvjmonlhlzzctnfs/bin/gfortran")
setenv("F77","/home/spack1/spack/opt/spack/linux-ubuntu16.04-x86_64/gcc-5.4.0/gcc-7.2.0-b7smjjcsmwe5u5fcsvjmonlhlzzctnfs/bin/gfortran")
setenv("F90","/home/spack1/spack/opt/spack/linux-ubuntu16.04-x86_64/gcc-5.4.0/gcc-7.2.0-b7smjjcsmwe5u5fcsvjmonlhlzzctnfs/bin/gfortran")
setenv("GCC_ROOT","/home/spack1/spack/opt/spack/linux-ubuntu16.04-x86_64/gcc-5.4.0/gcc-7.2.0-b7smjjcsmwe5u5fcsvjmonlhlzzctnfs")
setenv("CC","gcc")
setenv("CXX","g++")
setenv("FC","gfortran")
setenv("F77","gfortran")
setenv("F90","gfortran")
help([[The GNU Compiler Collection includes front ends for C, C++, Objective-C, Fortran, Ada, and Go, as well as libraries for these languages.
]])

$ module show openmpi
---
/home/spack1/spack/share/spack/modules/linux-ubuntu16.04-x86_64/openmpi/3.1.3-gcc-7.2.0:
---
whatis("An open source Message Passing Interface implementation.
")
conflict("openmpi")
prepend_path("PATH","/home/spack1/spack/opt/spack/linux-ubuntu16.04-x86_64/gcc-7.2.0/openmpi-3.1.3-do5xfer2whhk7gc26atgs3ozr3ljbvs4/bin")
prepend_path("MANPATH","/home/spack1/spack/opt/spack/linux-ubuntu16.04-x86_64/gcc-7.2.0/openmpi-3.1.3-do5xfer2whhk7gc26atgs3ozr3ljbvs4/share/man")
prepend_path("LD_LIBRARY_PATH","/home/spack1/spack/opt/spack/linux-ubuntu16.04-x86_64/gcc-7.2.0/openmpi-3.1.3-do5xfer2whhk7gc26atgs3ozr3ljbvs4/lib")
prepend_path("PKG_CONFIG_PATH","/home/spack1/spack/opt/spack/linux-ubuntu16.04-x86_64/gcc-7.2.0/openmpi-3.1.3-do5xfer2whhk7gc26atgs3ozr3ljbvs4/lib/pkgconfig")
prepend_path("CMAKE_PREFIX_PATH","/home/spack1/spack/opt/spack/linux-ubuntu16.04-x86_64/gcc-7.2.0/openmpi-3.1.3-do5xfer2whhk7gc26atgs3ozr3ljbvs4/")
setenv("OPENMPI_ROOT","/home/spack1/spack/opt/spack/linux-ubuntu16.04-x86_64/gcc-7.2.0/openmpi-3.1.3-do5xfer2whhk7gc26atgs3ozr3ljbvs4")
setenv("SLURM_MPI_TYPE","pmi2")
setenv("OMPI_MCA_btl_openib_warn_default_gid_prefix","0")
help([[An open source Message Passing Interface implementation. The Open MPI
Project is an open source Message Passing Interface implementation that is
developed and maintained by a consortium of academic, research, and industry
partners. Open MPI is therefore able to combine the expertise, technologies,
and resources from all across the High Performance Computing community in
order to build the best MPI library available. Open MPI offers advantages for
system and software vendors, application developers and computer science
researchers.
]])
```

##### Autoload dependencies[¶](#autoload-dependencies)

Spack can also generate module files that contain code to load the dependencies automatically.
You can, for instance, generate Python modules that load their dependencies by adding the `autoload` directive and assigning it the value `direct`:

```
modules:
  tcl:
    verbose: True
    hash_length: 0
    naming_scheme: '${PACKAGE}/${VERSION}-${COMPILERNAME}-${COMPILERVER}'
    whitelist:
      - gcc
    blacklist:
      - '%gcc@5.4.0'
    all:
      conflict:
        - '${PACKAGE}'
      suffixes:
        '^openblas': openblas
        '^netlib-lapack': netlib
      filter:
        environment_blacklist: ['CPATH', 'LIBRARY_PATH']
      environment:
        set:
          '${PACKAGE}_ROOT': '${PREFIX}'
    gcc:
      environment:
        set:
          CC: gcc
          CXX: g++
          FC: gfortran
          F90: gfortran
          F77: gfortran
    openmpi:
      environment:
        set:
          SLURM_MPI_TYPE: pmi2
          OMPI_MCA_btl_openib_warn_default_gid_prefix: '0'
    netlib-scalapack:
      suffixes:
        '^openmpi': openmpi
        '^mpich': mpich
    ^python:
      autoload: 'direct'
```

and regenerating the module files for every package that depends on `python`:

```
root@module-file-tutorial:/# spack module tcl refresh -y ^python
==> Regenerating tcl module files
```

Now the `py-scipy` module will be:

```
#%Module1.0
## Module file created by spack (https://github.com/spack/spack) on 2018-11-11 22:10:48.834221
##
## py-scipy@1.1.0%gcc@7.2.0 arch=linux-ubuntu16.04-x86_64 /d5n3cph
##

module-whatis "SciPy (pronounced 'Sigh Pie') is a Scientific Library for Python. It provides many user-friendly and efficient numerical routines such as routines for numerical integration and optimization."

proc ModulesHelp { } {
    puts stderr "SciPy (pronounced "Sigh Pie") is a Scientific Library for Python. It"
    puts stderr "provides many user-friendly and efficient numerical routines such as"
    puts stderr "routines for numerical integration and optimization."
}

if { [ module-info mode load ] && ![ is-loaded python/2.7.15-gcc-7.2.0 ] } {
    puts stderr "Autoloading python/2.7.15-gcc-7.2.0"
    module load python/2.7.15-gcc-7.2.0
}

if { [ module-info mode load ] && ![ is-loaded openblas/0.3.3-gcc-7.2.0 ] } {
    puts stderr "Autoloading openblas/0.3.3-gcc-7.2.0"
    module load openblas/0.3.3-gcc-7.2.0
}

if { [ module-info mode load ] && ![ is-loaded py-numpy/1.15.2-gcc-7.2.0-openblas ] } {
    puts stderr "Autoloading py-numpy/1.15.2-gcc-7.2.0-openblas"
    module load py-numpy/1.15.2-gcc-7.2.0-openblas
}

conflict py-scipy

prepend-path LD_LIBRARY_PATH "/home/spack1/spack/opt/spack/linux-ubuntu16.04-x86_64/gcc-7.2.0/py-scipy-1.1.0-d5n3cphk2lx2v74ypwb6h7tna7vvgdyn/lib"
prepend-path CMAKE_PREFIX_PATH "/home/spack1/spack/opt/spack/linux-ubuntu16.04-x86_64/gcc-7.2.0/py-scipy-1.1.0-d5n3cphk2lx2v74ypwb6h7tna7vvgdyn/"
prepend-path PYTHONPATH "/home/spack1/spack/opt/spack/linux-ubuntu16.04-x86_64/gcc-7.2.0/py-scipy-1.1.0-d5n3cphk2lx2v74ypwb6h7tna7vvgdyn/lib/python2.7/site-packages"
setenv PY_SCIPY_ROOT "/home/spack1/spack/opt/spack/linux-ubuntu16.04-x86_64/gcc-7.2.0/py-scipy-1.1.0-d5n3cphk2lx2v74ypwb6h7tna7vvgdyn"
```

and will contain code to autoload all the dependencies:

```
$ module load py-scipy
Autoloading python/2.7.15-gcc-7.2.0
Autoloading openblas/0.3.3-gcc-7.2.0
Autoloading py-numpy/1.15.2-gcc-7.2.0-openblas
```

If these messages are unwanted during the autoload procedure, simply omit the line setting `verbose: True` from the configuration file above.

#### Hierarchical module files[¶](#hierarchical-module-files)

So far we have worked with non-hierarchical module files, i.e. module files that are all generated in the same root directory and that don't attempt to dynamically modify the `MODULEPATH`.
This results in a flat module structure where all the software is visible at the same time:

```
$ module avail

--- /home/spack1/spack/share/spack/modules/linux-ubuntu16.04-x86_64 ---
autoconf/2.69-gcc-7.2.0        m4/1.4.18-gcc-7.2.0                              pkgconf/1.4.2-gcc-7.2.0
automake/1.16.1-gcc-7.2.0      mpich/3.2.1-gcc-7.2.0                            py-numpy/1.15.2-gcc-7.2.0-openblas (L)
bzip2/1.0.6-gcc-7.2.0          ncurses/6.1-gcc-7.2.0                            py-scipy/1.1.0-gcc-7.2.0-openblas (L)
cmake/3.12.3-gcc-7.2.0         netlib-lapack/3.8.0-gcc-7.2.0                    py-setuptools/40.4.3-gcc-7.2.0
diffutils/3.6-gcc-7.2.0        netlib-scalapack/2.0.2-gcc-7.2.0-netlib-mpich    python/2.7.15-gcc-7.2.0 (L)
findutils/4.6.0-gcc-7.2.0      netlib-scalapack/2.0.2-gcc-7.2.0-netlib-openmpi  readline/7.0-gcc-7.2.0
gcc/7.2.0-gcc-5.4.0            netlib-scalapack/2.0.2-gcc-7.2.0-openblas-mpich  sqlite/3.23.1-gcc-7.2.0
gdbm/1.14.1-gcc-7.2.0          netlib-scalapack/2.0.2-gcc-7.2.0-openblas-openmpi (D)  texinfo/6.5-gcc-7.2.0
hwloc/1.11.9-gcc-7.2.0         numactl/2.0.11-gcc-7.2.0                         util-macros/1.19.1-gcc-7.2.0
libpciaccess/0.13.5-gcc-7.2.0  openblas/0.3.3-gcc-7.2.0 (L)                     xz/5.2.4-gcc-7.2.0
libsigsegv/2.11-gcc-7.2.0      openmpi/3.1.3-gcc-7.2.0                          zlib/1.2.11-gcc-7.2.0
libtool/2.4.6-gcc-7.2.0        openssl/1.0.2o-gcc-7.2.0
libxml2/2.9.8-gcc-7.2.0        perl/5.26.2-gcc-7.2.0

Where:
 L:  Module is loaded
 D:  Default Module

Use "module spider" to find all possible modules.
Use "module keyword key1 key2 ..." to search for all possible modules matching any of the "keys".
```

This layout is quite simple to deploy, but you can see from the above snippet that nothing prevents users from loading incompatible sets of modules:

```
$ module purge
$ module load netlib-lapack/3.8.0-gcc-7.2.0 openblas/0.3.3-gcc-7.2.0
$ module list

Currently Loaded Modules:
  1) netlib-lapack/3.8.0-gcc-7.2.0   2) openblas/0.3.3-gcc-7.2.0
```

Even if `conflicts` directives are carefully placed in module files, they:

* won't enforce a consistent environment, but will just report an error
* need constant updates, for instance as soon as a new compiler or MPI library is installed

[Hierarchical module files](http://lmod.readthedocs.io/en/latest/080_hierarchy.html) try to overcome these shortcomings by showing at start-up only a restricted view of what is available on the system: more specifically, only the software that has been installed with OS provided compilers. Among this software there will be other - usually more recent - compilers that, once loaded, will prepend new directories to `MODULEPATH`, unlocking all the software that was compiled with them. This "unlocking" idea can then be extended arbitrarily to virtual dependencies, as we'll see in the following section.

##### Core/Compiler/MPI[¶](#core-compiler-mpi)

The most widely used hierarchy is the so-called `Core/Compiler/MPI` where, on top of the compilers, different MPI libraries also unlock software linked to them. There are just a few steps needed to adapt the `modules.yaml` file we used previously:

1. enable the `lmod` file generator
2. change the `tcl` tag to `lmod`
3. remove `tcl` specific directives (`naming_scheme` and `conflict`)
4. declare which compilers are considered `core_compilers`
5. remove the `mpi` related suffixes (as they will be substituted by hierarchies)

After these modifications your configuration file should look like:

```
modules:
  enable::
    - lmod
  lmod:
    core_compilers:
      - 'gcc@5.4.0'
    hierarchy:
      - mpi
    hash_length: 0
    whitelist:
      - gcc
    blacklist:
      - '%gcc@5.4.0'
    all:
      suffixes:
        '^openblas': openblas
        '^netlib-lapack': netlib
      filter:
        environment_blacklist: ['CPATH', 'LIBRARY_PATH']
      environment:
        set:
          '${PACKAGE}_ROOT': '${PREFIX}'
    gcc:
      environment:
        set:
          CC: gcc
          CXX: g++
          FC: gfortran
          F90: gfortran
          F77: gfortran
    openmpi:
      environment:
        set:
          SLURM_MPI_TYPE: pmi2
          OMPI_MCA_btl_openib_warn_default_gid_prefix: '0'
```

Note

Double colon in configuration files: the double colon after `enable` is intentional and serves the purpose of overriding the default list of enabled generators, so that only `lmod` will be active (see [Overriding entire sections](index.html#config-overrides) for more details).

The directive `core_compilers` accepts a list of compilers. Everything built using these compilers will create a module in the `Core` part of the hierarchy, which is the entry point for hierarchical module files. It is common practice to put the OS provided compilers in the list and only build common utilities and other compilers with them.

If we now regenerate the module files:

```
$ spack module lmod refresh --delete-tree -y
==> Regenerating lmod module files
```

and update `MODULEPATH` to point to the `Core`:

```
$ module purge
$ module unuse /home/spack1/spack/share/spack/modules/linux-ubuntu16.04-x86_64
$ module use /home/spack1/spack/share/spack/lmod/linux-ubuntu16.04-x86_64/Core
```

asking for the available modules will return:

```
$ module avail

--- share/spack/lmod/linux-ubuntu16.04-x86_64/Core ---
gcc/7.2.0

Use "module spider" to find all possible modules.
Use "module keyword key1 key2 ..." to search for all possible modules matching any of the "keys".
```

Unsurprisingly, the only visible module is `gcc`.
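The mechanism behind this restricted view is plain `MODULEPATH` manipulation: loading a compiler (or, later, an MPI) module prepends its branch of the tree, making more modules visible. A small Python sketch of the idea, with hypothetical paths modeled on this tutorial's layout:

```python
# Sketch of hierarchy "unlocking": each loaded compiler/MPI module prepends
# its branch of the lmod tree to MODULEPATH. Paths are illustrative only.
root = '/home/spack1/spack/share/spack/lmod/linux-ubuntu16.04-x86_64'
modulepath = [root + '/Core']            # at start, only Core is visible

def load_compiler(name, version):
    # what the gcc/7.2.0 module file effectively does
    modulepath.insert(0, '{}/{}/{}'.format(root, name, version))

def load_mpi(name, version, comp, compver):
    # what an MPI module (e.g. mpich) effectively does
    modulepath.insert(0, '{}/{}/{}/{}/{}'.format(root, name, version,
                                                 comp, compver))

load_compiler('gcc', '7.2.0')
load_mpi('mpich', '3.2.1-vt5xcat', 'gcc', '7.2.0')
for path in modulepath:                  # highest priority first
    print(path)
```

Swapping `mpich` for `openmpi` amounts to replacing the MPI entry at the head of this list, which is exactly why Lmod can reload dependent modules consistently.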
Loading that we’ll unlock the `Compiler` part of the hierarchy: ``` $ module load gcc $ module avail --- /home/spack1/spack/share/spack/lmod/linux-ubuntu16.04-x86_64/gcc/7.2.0 --- autoconf/2.69 findutils/4.6.0 libtool/2.4.6 netlib-lapack/3.8.0 perl/5.26.2 python/2.7.15 xz/5.2.4 automake/1.16.1 gdbm/1.14.1 libxml2/2.9.8 numactl/2.0.11 pkgconf/1.4.2 readline/7.0 zlib/1.2.11 bzip2/1.0.6 hwloc/1.11.9 m4/1.4.18 openblas/0.3.3 py-numpy/1.15.2-openblas sqlite/3.23.1 cmake/3.12.3 libpciaccess/0.13.5 mpich/3.2.1 openmpi/3.1.3 py-scipy/1.1.0-openblas texinfo/6.5 diffutils/3.6 libsigsegv/2.11 ncurses/6.1 openssl/1.0.2o py-setuptools/40.4.3 util-macros/1.19.1 --- share/spack/lmod/linux-ubuntu16.04-x86_64/Core --- gcc/7.2.0 (L) Where: L: Module is loaded Use "module spider" to find all possible modules. Use "module keyword key1 key2 ..." to search for all possible modules matching any of the "keys". ``` The same holds true also for the `MPI` part, that you can enable by loading either `mpich` or `openmpi`. Let’s start by loading `mpich`: ``` $ module load mpich $ module avail --- /home/spack1/spack/share/spack/lmod/linux-ubuntu16.04-x86_64/mpich/3.2.1-vt5xcat/gcc/7.2.0 --- netlib-scalapack/2.0.2-netlib netlib-scalapack/2.0.2-openblas (D) --- /home/spack1/spack/share/spack/lmod/linux-ubuntu16.04-x86_64/gcc/7.2.0 --- autoconf/2.69 findutils/4.6.0 libtool/2.4.6 netlib-lapack/3.8.0 perl/5.26.2 python/2.7.15 xz/5.2.4 automake/1.16.1 gdbm/1.14.1 libxml2/2.9.8 numactl/2.0.11 pkgconf/1.4.2 readline/7.0 zlib/1.2.11 bzip2/1.0.6 hwloc/1.11.9 m4/1.4.18 openblas/0.3.3 py-numpy/1.15.2-openblas sqlite/3.23.1 cmake/3.12.3 libpciaccess/0.13.5 mpich/3.2.1 (L) openmpi/3.1.3 py-scipy/1.1.0-openblas texinfo/6.5 diffutils/3.6 libsigsegv/2.11 ncurses/6.1 openssl/1.0.2o py-setuptools/40.4.3 util-macros/1.19.1 --- share/spack/lmod/linux-ubuntu16.04-x86_64/Core --- gcc/7.2.0 (L) Where: L: Module is loaded D: Default Module Use "module spider" to find all possible modules. 
Use "module keyword key1 key2 ..." to search for all possible modules matching any of the "keys". root@module-file-tutorial:/# module load openblas netlib-scalapack/2.0.2-openblas root@module-file-tutorial:/# module list Currently Loaded Modules: 1) gcc/7.2.0 2) mpich/3.2.1 3) openblas/0.3.3 4) netlib-scalapack/2.0.2-openblas ``` At this point we can showcase the improved consistency that a hierarchical layout provides over a non-hierarchical one: ``` $ module load openmpi Lmod is automatically replacing "mpich/3.2.1" with "openmpi/3.1.3". Due to MODULEPATH changes, the following have been reloaded: 1) netlib-scalapack/2.0.2-openblas ``` `Lmod` took care of swapping the MPI provider for us, and it also substituted the `netlib-scalapack` module to conform to the change in MPI. In this way we can't accidentally pull in two different MPI providers at the same time or load a module file for a package linked to `openmpi` when `mpich` is also loaded. Consistency for compilers and MPI is ensured by the tool. ##### Add LAPACK to the hierarchy[¶](#add-lapack-to-the-hierarchy) The hierarchy just shown is already a great improvement over non-hierarchical layouts, but it still has an asymmetry: `LAPACK` providers cover the same semantic role as `MPI` providers, yet they are not part of the hierarchy.
To be more practical, this means that although we have gained an improved consistency in our environment when it comes to `MPI`, we still have the same problems as we had before for `LAPACK` implementations: ``` root@module-file-tutorial:/# module list Currently Loaded Modules: 1) gcc/7.2.0 2) openblas/0.3.3 3) openmpi/3.1.3 4) netlib-scalapack/2.0.2-openblas root@module-file-tutorial:/# module load netlib-scalapack/2.0.2-netlib The following have been reloaded with a version change: 1) netlib-scalapack/2.0.2-openblas => netlib-scalapack/2.0.2-netlib root@module-file-tutorial:/# module list Currently Loaded Modules: 1) gcc/7.2.0 2) openblas/0.3.3 3) openmpi/3.1.3 4) netlib-scalapack/2.0.2-netlib ``` Hierarchies that are deeper than `Core`/`Compiler`/`MPI` are probably still considered “unusual” or “impractical” at many sites, mainly because module files are written manually and keeping track of the combinations among multiple providers quickly becomes quite involved. For instance, having both `MPI` and `LAPACK` in the hierarchy means we must classify software into one of four categories: > 1. Software that doesn’t depend on `MPI` or `LAPACK` > 2. Software that depends only on `MPI` > 3. Software that depends only on `LAPACK` > 4. Software that depends on both to decide when to show it to the user. The situation becomes more involved as the number of virtual dependencies in the hierarchy increases. We can take advantage of the DAG that Spack maintains for the installed software and solve this combinatorial problem in a clean and automated way. In some sense Spack’s ability to manage this combinatorial complexity makes deeper hierarchies feasible. 
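The four-way classification above can be sketched as a small helper that decides which hierarchy layer should expose a package, based on which virtual dependencies appear in its DAG. This is only an illustration (the dependency sets below are hypothetical), not Spack's actual implementation:

```python
def hierarchy_layers(virtual_deps, hierarchy=("mpi", "lapack")):
    """Return the hierarchy virtuals a package depends on; this decides
    under which layer of the module tree the package is shown."""
    return [v for v in hierarchy if v in virtual_deps]

# Hypothetical dependency sets for the four categories:
assert hierarchy_layers(set()) == []                              # 1. neither MPI nor LAPACK
assert hierarchy_layers({"mpi"}) == ["mpi"]                       # 2. only MPI
assert hierarchy_layers({"lapack"}) == ["lapack"]                 # 3. only LAPACK
assert hierarchy_layers({"mpi", "lapack"}) == ["mpi", "lapack"]   # 4. both (e.g. a scalapack provider)
```

With Spack maintaining the dependency DAG, this lookup is automatic for every installed spec, which is what makes deeper hierarchies practical.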
Coming back to our example, let’s add `lapack` to the hierarchy and remove any remaining suffix: ``` modules: enable:: - lmod lmod: core_compilers: - 'gcc@5.4.0' hierarchy: - mpi - lapack hash_length: 0 whitelist: - gcc blacklist: - '%gcc@5.4.0' all: filter: environment_blacklist: ['CPATH', 'LIBRARY_PATH'] environment: set: '${PACKAGE}_ROOT': '${PREFIX}' gcc: environment: set: CC: gcc CXX: g++ FC: gfortran F90: gfortran F77: gfortran openmpi: environment: set: SLURM_MPI_TYPE: pmi2 OMPI_MCA_btl_openib_warn_default_gid_prefix: '0' ``` After module files have been regenerated as usual: ``` root@module-file-tutorial:/# module purge root@module-file-tutorial:/# spack module lmod refresh --delete-tree -y ==> Regenerating lmod module files ``` we can see that now we have additional components in the hierarchy: ``` $ module load gcc $ module load openblas $ module avail --- /home/spack1/spack/share/spack/lmod/linux-ubuntu16.04-x86_64/openblas/0.3.3-xxoxfh4/gcc/7.2.0 --- py-numpy/1.15.2 py-scipy/1.1.0 --- /home/spack1/spack/share/spack/lmod/linux-ubuntu16.04-x86_64/gcc/7.2.0 --- autoconf/2.69 findutils/4.6.0 libtool/2.4.6 netlib-lapack/3.8.0 perl/5.26.2 sqlite/3.23.1 automake/1.16.1 gdbm/1.14.1 libxml2/2.9.8 numactl/2.0.11 pkgconf/1.4.2 texinfo/6.5 bzip2/1.0.6 hwloc/1.11.9 m4/1.4.18 openblas/0.3.3 (L) py-setuptools/40.4.3 util-macros/1.19.1 cmake/3.12.3 libpciaccess/0.13.5 mpich/3.2.1 openmpi/3.1.3 python/2.7.15 xz/5.2.4 diffutils/3.6 libsigsegv/2.11 ncurses/6.1 openssl/1.0.2o readline/7.0 zlib/1.2.11 --- share/spack/lmod/linux-ubuntu16.04-x86_64/Core --- gcc/7.2.0 (L) Where: L: Module is loaded Use "module spider" to find all possible modules. Use "module keyword key1 key2 ..." to search for all possible modules matching any of the "keys". 
$ module load openmpi $ module avail --- /home/spack1/spack/share/spack/lmod/linux-ubuntu16.04-x86_64/openmpi/3.1.3-do5xfer/openblas/0.3.3-xxoxfh4/gcc/7.2.0 --- netlib-scalapack/2.0.2 --- /home/spack1/spack/share/spack/lmod/linux-ubuntu16.04-x86_64/openblas/0.3.3-xxoxfh4/gcc/7.2.0 --- py-numpy/1.15.2 py-scipy/1.1.0 --- /home/spack1/spack/share/spack/lmod/linux-ubuntu16.04-x86_64/gcc/7.2.0 --- autoconf/2.69 findutils/4.6.0 libtool/2.4.6 netlib-lapack/3.8.0 perl/5.26.2 sqlite/3.23.1 automake/1.16.1 gdbm/1.14.1 libxml2/2.9.8 numactl/2.0.11 pkgconf/1.4.2 texinfo/6.5 bzip2/1.0.6 hwloc/1.11.9 m4/1.4.18 openblas/0.3.3 (L) py-setuptools/40.4.3 util-macros/1.19.1 cmake/3.12.3 libpciaccess/0.13.5 mpich/3.2.1 openmpi/3.1.3 (L) python/2.7.15 xz/5.2.4 diffutils/3.6 libsigsegv/2.11 ncurses/6.1 openssl/1.0.2o readline/7.0 zlib/1.2.11 --- /home/spack1/spack/share/spack/lmod/linux-ubuntu16.04-x86_64/Core --- gcc/7.2.0 (L) Where: L: Module is loaded Use "module spider" to find all possible modules. Use "module keyword key1 key2 ..." to search for all possible modules matching any of the "keys". ``` Both `MPI` and `LAPACK` providers will now benefit from the same safety features: ``` $ module load py-numpy netlib-scalapack $ module load mpich Lmod is automatically replacing "openmpi/3.1.3" with "mpich/3.2.1". Due to MODULEPATH changes, the following have been reloaded: 1) netlib-scalapack/2.0.2 $ module load netlib-lapack Lmod is automatically replacing "openblas/0.3.3" with "netlib-lapack/3.8.0". Inactive Modules: 1) py-numpy Due to MODULEPATH changes, the following have been reloaded: 1) netlib-scalapack/2.0.2 ``` Because we only compiled `py-numpy` with `openblas` the module is made inactive when we switch the `LAPACK` provider. The user environment is now consistent by design! #### Working with templates[¶](#working-with-templates) As briefly mentioned in the introduction, Spack uses [Jinja2](http://jinja.pocoo.org/docs/2.9/) to generate each individual module file. 
This means that you have all of its flexibility and power when it comes to customizing what gets generated! ##### Module file templates[¶](#module-file-templates) The templates that Spack uses to generate module files are stored in the `share/spack/templates/module` directory within the Spack prefix, and they all share the same common structure. Usually, they start with a header that identifies the type of module being generated. In the case of hierarchical module files it’s: ``` -- -*- lua -*- -- Module file created by spack (https://github.com/spack/spack) on {{ timestamp }} -- -- {{ spec.short_spec }} -- ``` The statements within double curly brackets `{{ ... }}` denote [expressions](http://jinja.pocoo.org/docs/2.9/templates/#expressions) that will be evaluated and substituted at module generation time. The rest of the file is then divided into [blocks](http://jinja.pocoo.org/docs/2.9/templates/#template-inheritance) that can be overridden or extended by users, if need be. [Control structures](http://jinja.pocoo.org/docs/2.9/templates/#list-of-control-structures) , delimited by `{% ... 
%}`, are also permitted in the template language: ``` {% block environment %} {% for command_name, cmd in environment_modifications %} {% if command_name == 'PrependPath' %} prepend_path("{{ cmd.name }}", "{{ cmd.value }}", "{{ cmd.separator }}") {% elif command_name == 'AppendPath' %} append_path("{{ cmd.name }}", "{{ cmd.value }}", "{{ cmd.separator }}") {% elif command_name == 'RemovePath' %} remove_path("{{ cmd.name }}", "{{ cmd.value }}", "{{ cmd.separator }}") {% elif command_name == 'SetEnv' %} setenv("{{ cmd.name }}", "{{ cmd.value }}") {% elif command_name == 'UnsetEnv' %} unsetenv("{{ cmd.name }}") {% endif %} {% endfor %} {% endblock %} ``` The locations where Spack looks for templates are specified in `config.yaml`: ``` # Locations where templates should be found template_dirs: - $spack/share/spack/templates ``` and can be extended by users to employ custom templates, as we'll see next. ##### Extend the default templates[¶](#extend-the-default-templates) Let's assume one of our installed packages is protected by group membership: allowed users belong to the same Linux group, and access is granted at the group level. Wouldn't it be nice if people who are not yet entitled to use it could receive a helpful message at module load time telling them who in your organization to contact to be added to the group? To automate the generation of module files with such site-specific behavior, we'll start by extending the list of locations where Spack looks for template files. Let's create the file `~/.spack/config.yaml` with the content: ``` config: template_dirs: - $HOME/.spack/templates ``` This tells Spack to also search another location when looking for template files.
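The effect of that extra search location can be sketched in plain Python: a template is resolved by scanning the configured directories in order, so a user directory listed first can shadow or extend the stock templates. This is only an illustration of the lookup order (the directory contents below are hypothetical), not Spack's actual resolver:

```python
def find_template(name, template_dirs, available):
    """Return the first directory in template_dirs that provides `name`.
    `available` maps directory -> set of template files it contains."""
    for d in template_dirs:
        if name in available.get(d, set()):
            return d
    raise KeyError(name)

# Hypothetical directory contents for illustration:
available = {
    "~/.spack/templates": {"group-restricted.lua"},
    "$spack/share/spack/templates": {"modules/modulefile.lua"},
}
dirs = ["~/.spack/templates", "$spack/share/spack/templates"]

# Custom templates resolve from the user directory; stock ones still resolve.
assert find_template("group-restricted.lua", dirs, available) == "~/.spack/templates"
assert find_template("modules/modulefile.lua", dirs, available) == "$spack/share/spack/templates"
```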
Next, we need to create our custom template extension in the folder listed above: ``` {% extends "modules/modulefile.lua" %} {% block footer %} -- Access is granted only to specific groups if not isDir("{{ spec.prefix }}") then LmodError ( "You don't have the necessary rights to run \"{{ spec.name }}\".\n\n", "\tPlease write an e-mail to <EMAIL> if you need further information on how to get access to it.\n" ) end {% endblock %} ``` Let’s name this file `group-restricted.lua`. The line: ``` {% extends "modules/modulefile.lua" %} ``` tells Jinja2 that we are reusing the standard template for hierarchical module files. The section: ``` {% block footer %} -- Access is granted only to specific groups if not isDir("{{ spec.prefix }}") then LmodError ( "You don't have the necessary rights to run \"{{ spec.name }}\".\n\n", "\tPlease write an e-mail to <EMAIL> if you need further information on how to get access to it.\n" ) end {% endblock %} ``` overrides the `footer` block. Finally, we need to add a couple of lines in `modules.yaml` to tell Spack which specs need to use the new custom template. 
For the sake of illustration let’s assume it’s `netlib-scalapack`: ``` modules: enable:: - lmod lmod: core_compilers: - 'gcc@5.4.0' hierarchy: - mpi - lapack hash_length: 0 whitelist: - gcc blacklist: - '%gcc@5.4.0' - readline all: filter: environment_blacklist: ['CPATH', 'LIBRARY_PATH'] environment: set: '${PACKAGE}_ROOT': '${PREFIX}' gcc: environment: set: CC: gcc CXX: g++ FC: gfortran F90: gfortran F77: gfortran openmpi: environment: set: SLURM_MPI_TYPE: pmi2 OMPI_MCA_btl_openib_warn_default_gid_prefix: '0' netlib-scalapack: template: 'group-restricted.lua' ``` If we regenerate the module files one last time: ``` root@module-file-tutorial:/# spack module lmod refresh -y netlib-scalapack ==> Regenerating lmod module files ``` we’ll find the following at the end of each `netlib-scalapack` module file: ``` -- Access is granted only to specific groups if not isDir("/usr/local/opt/spack/linux-ubuntu16.04-x86_64/gcc-7.2.0/netlib-scalapack-2.0.2-d3lertflood3twaor44eam2kcr4l72ag") then LmodError ( "You don't have the necessary rights to run \"netlib-scalapack\".\n\n", "\tPlease write an e-mail to <EMAIL> if you need further information on how to get access to it.\n" ) end ``` and every user that doesn’t have access to the software will now be redirected to the right e-mail address where to ask for it! ### Spack Package Build Systems[¶](#spack-package-build-systems) You may begin to notice after writing a couple of package template files a pattern emerge for some packages. For example, you may find yourself writing an `install()` method that invokes: `configure`, `cmake`, `make`, `make install`. You may also find yourself writing `"prefix=" + prefix` as an argument to `configure` or `cmake`. Rather than having you repeat these lines for all packages, Spack has classes that can take care of these patterns. In addition, these package files allow for finer grained control of these build systems. 
In this section, we will describe each build system and give examples of how they can be manipulated to install a package. #### Package Class Hierarchy[¶](#package-class-hierarchy) ``` digraph G { node [ shape = "record" ] edge [ arrowhead = "empty" ] PackageBase -> Package [dir=back] PackageBase -> MakefilePackage [dir=back] PackageBase -> AutotoolsPackage [dir=back] PackageBase -> CMakePackage [dir=back] PackageBase -> PythonPackage [dir=back] } ``` The above diagram gives a high-level view of the class hierarchy and how each package relates. Each subclass inherits from the `PackageBase` super class. The bulk of the work is done in this super class, which includes fetching, extracting to a staging directory, and installing. Each subclass then adds build-system-specific functionality. In the following sections, we will go over examples of how to utilize each subclass and see how powerful these abstractions are when packaging. #### Package[¶](#package) We've already seen examples of a `Package` class in our walkthrough for writing package files, so we won't spend much time with them here. Briefly, the `Package` class allows for arbitrary control over the build process, whereas subclasses rely on certain patterns (e.g. `configure`, `make`, `make install`) to be useful. `Package` classes are particularly useful for packages that are built in a non-conventional way, since the packager can utilize some of Spack's helper functions to customize the building and installing of a package. #### Autotools[¶](#autotools) As we have seen earlier, packages using `Autotools` use the `configure`, `make` and `make install` commands to execute the build and install process. In our `Package` class, your typical build incantation will consist of the following: ``` def install(self, spec, prefix): configure("--prefix=" + prefix) make() make("install") ``` You'll see that this looks similar to what we wrote in our packaging tutorial.
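Stripped of Spack's machinery, the `install()` pattern above just assembles a command line. A rough sketch of the argument list the `configure` call builds (the prefix and flag paths below are hypothetical):

```python
def configure_command(prefix, extra_args=()):
    """Mirror configure("--prefix=" + prefix, *extra_args) as a plain
    argument vector, the way a shell would see it."""
    return ["./configure", "--prefix=" + prefix] + list(extra_args)

cmd = configure_command("/opt/spack/mpileaks-1.0",
                        ["--with-adept-utils=/opt/spack/adept-utils"])
assert cmd[0] == "./configure"
assert cmd[1] == "--prefix=/opt/spack/mpileaks-1.0"
assert cmd[2] == "--with-adept-utils=/opt/spack/adept-utils"
```

The build-system subclasses described next exist precisely so packagers can supply just the variable parts (the extra arguments) and leave this boilerplate to the base class.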
The `Autotools` subclass aims to simplify writing package files and provides convenience methods to manipulate each of the different phases of an `Autotools` build system. `Autotools` packages consist of four phases: 1. `autoreconf()` 2. `configure()` 3. `build()` 4. `install()` Each of these phases has sensible defaults. Let's take a quick look at some of the internals of the `Autotools` class: ``` $ spack edit --build-system autotools ``` This will open the `AutotoolsPackage` file in your text editor. Note The examples showing code for these classes are abridged to avoid overly long listings. We only show what is relevant to the packager. | | | | --- | --- | | ``` 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27 28 29 30 31 32 33 34 35 36 37 38 39 40 41 42 43 44 45 46 47 48 49 50 51 52 53 54 55 56 ``` | ``` They all have sensible defaults and for many packages the only thing necessary will be to override the helper method :py:meth:`~.AutotoolsPackage.configure_args`.
For a finer tuning you may also override: +---+---+ | **Method** | **Purpose** | +===+===+ | :py:attr:`~.AutotoolsPackage.build_targets` | Specify ``make`` | | | targets for the | | | build phase | +---+---+ | :py:attr:`~.AutotoolsPackage.install_targets` | Specify ``make`` | | | targets for the | | | install phase | +---+---+ | :py:meth:`~.AutotoolsPackage.check` | Run build time | | | tests if required | +---+---+ """ #: Phases of a GNU Autotools package phases = ['autoreconf', 'configure', 'build', 'install'] #: This attribute is used in UI queries that need to know the build #: system base class build_system_class = 'AutotoolsPackage' #: Whether or not to update ``config.guess`` on old architectures patch_config_guess = True #: Targets for ``make`` during the :py:meth:`~.AutotoolsPackage.build` #: phase build_targets = [] #: Targets for ``make`` during the :py:meth:`~.AutotoolsPackage.install` #: phase install_targets = ['install'] #: Callback names for build-time test build_time_test_callbacks = ['check'] #: Callback names for install-time test install_time_test_callbacks = ['installcheck'] #: Set to true to force the autoreconf step even if configure is present force_autoreconf = False #: Options to be passed to autoreconf when using the default implementation autoreconf_extra_args = [] setattr(self, 'configure_flag_args', []) for flag, values in flags.items(): if values: values_str = '{0}={1}'.format(flag.upper(), ' '.join(values)) self.configure_flag_args.append(values_str) def configure(self, spec, prefix): """Runs configure with the arguments specified in :py:meth:`~.AutotoolsPackage.configure_args` ``` | Important to note are the highlighted lines. These properties allow the packager to set what build targets and install targets they want for their package. 
If, for example, we wanted to add `foo` as our build target, then we can append to our `build_targets` property: ``` build_targets = ["foo"] ``` which is similar to invoking `make` in our `Package` class: ``` make("foo") ``` This is useful if we have packages that ignore environment variables and need a command-line argument. Another thing to take note of is the `configure()` method. Here we see that the `prefix` argument is already included, since it is a common pattern amongst packages using `Autotools`. We then only have to override `configure_args()`, which will return its output to `configure()`; `configure()` will then append the common arguments. Packagers also have the option to run `autoreconf` in case a package needs to update the build system and generate a new `configure`, though for the most part this will be unnecessary. Let's look at the `mpileaks` package.py file that we worked on earlier: ``` $ spack edit mpileaks ``` Notice that mpileaks is a `Package` class but uses the `Autotools` build system. Although this package is acceptable, let's make this into an `AutotoolsPackage` class and simplify it further. | | | | --- | --- | | ``` 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27 ``` | ``` # Copyright 2013-2018 Lawrence Livermore National Security, LLC and other # Spack Project Developers. See the top-level COPYRIGHT file for details.
# # SPDX-License-Identifier: (Apache-2.0 OR MIT) from spack import * class Mpileaks(AutotoolsPackage): """Tool to detect and report leaked MPI objects like MPI_Requests and MPI_Datatypes.""" homepage = "https://github.com/hpc/mpileaks" url = "https://github.com/hpc/mpileaks/releases/download/v1.0/mpileaks-1.0.tar.gz" version('1.0', '8838c574b39202a57d7c2d68692718aa') depends_on("mpi") depends_on("adept-utils") depends_on("callpath") def install(self, spec, prefix): configure("--prefix=" + prefix, "--with-adept-utils=" + spec['adept-utils'].prefix, "--with-callpath=" + spec['callpath'].prefix) make() make("install") ``` | We first inherit from the `AutotoolsPackage` class. Although we could keep the `install()` method, most of it can be handled by the `AutotoolsPackage` base class. In fact, the only thing that needs to be overridden is `configure_args()`. | | | | --- | --- | | ``` 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27 28 29 30 31 32 ``` | ``` # Copyright 2013-2018 Lawrence Livermore National Security, LLC and other # Spack Project Developers. See the top-level COPYRIGHT file for details. 
# # SPDX-License-Identifier: (Apache-2.0 OR MIT) from spack import * class Mpileaks(AutotoolsPackage): """Tool to detect and report leaked MPI objects like MPI_Requests and MPI_Datatypes.""" homepage = "https://github.com/hpc/mpileaks" url = "https://github.com/hpc/mpileaks/releases/download/v1.0/mpileaks-1.0.tar.gz" version('1.0', '8838c574b39202a57d7c2d68692718aa') variant("stackstart", values=int, default=0, description="Specify the number of stack frames to truncate") depends_on("mpi") depends_on("adept-utils") depends_on("callpath") def configure_args(self): stackstart = int(self.spec.variants['stackstart'].value) args = ["--with-adept-utils=" + self.spec['adept-utils'].prefix, "--with-callpath=" + self.spec['callpath'].prefix] if stackstart: args.extend(['--with-stack-start-c=%s' % stackstart, '--with-stack-start-fortran=%s' % stackstart]) return args ``` | Since Spack takes care of setting the prefix for us, we can exclude that as an argument to `configure`. Our packages look simpler, and the packager does not need to worry about whether they have properly included `configure` and `make`. This version of the `mpileaks` package installs the same as the previous one, but the `AutotoolsPackage` class lets us do it with a cleaner-looking package file. #### Makefile[¶](#makefile) Packages that utilize `Make` or a `Makefile` usually require you to edit a `Makefile` to set up platform- and compiler-specific variables. These packages are handled by the `Makefile` subclass, which provides convenience methods to help write these types of packages. A `MakefilePackage` class has three phases that can be overridden: > 1. `edit()` > 2. `build()` > 3. `install()` Packagers then have the ability to control how a `Makefile` is edited, and what targets to include for the build phase or install phase.
Let’s also take a look inside the `MakefilePackage` class: ``` $ spack edit --build-system makefile ``` Take note of the following: | | | | --- | --- | | ``` 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27 28 29 30 31 32 33 34 35 36 37 38 39 ``` | ``` class MakefilePackage(PackageBase): #: Phases of a package that is built with an hand-written Makefile phases = ['edit', 'build', 'install'] #: This attribute is used in UI queries that need to know the build #: system base class build_system_class = 'MakefilePackage' #: Targets for ``make`` during the :py:meth:`~.MakefilePackage.build` #: phase build_targets = [] #: Targets for ``make`` during the :py:meth:`~.MakefilePackage.install` #: phase install_targets = ['install'] #: Callback names for build-time test build_time_test_callbacks = ['check'] #: Callback names for install-time test install_time_test_callbacks = ['installcheck'] def edit(self, spec, prefix): """Edits the Makefile before calling make. This phase cannot be defaulted. """ tty.msg('Using default implementation: skipping edit phase.') def build(self, spec, prefix): """Calls make, passing :py:attr:`~.MakefilePackage.build_targets` as targets. """ with working_dir(self.build_directory): inspect.getmodule(self).make(*self.build_targets) def install(self, spec, prefix): """Calls make, passing :py:attr:`~.MakefilePackage.install_targets` as targets. """ with working_dir(self.build_directory): inspect.getmodule(self).make(*self.install_targets) ``` | Similar to `Autotools`, `MakefilePackage` class has properties that can be set by the packager. We can also override the different methods highlighted. 
Let’s try to recreate the [Bowtie](http://bowtie-bio.sourceforge.net/index.shtml) package: ``` $ spack create -f https://downloads.sourceforge.net/project/bowtie-bio/bowtie/1.2.1.1/bowtie-1.2.1.1-src.zip ==> This looks like a URL for bowtie ==> Found 1 version of bowtie: 1.2.1.1 https://downloads.sourceforge.net/project/bowtie-bio/bowtie/1.2.1.1/bowtie-1.2.1.1-src.zip ==> How many would you like to checksum? (default is 1, q to abort) 1 ==> Downloading... ==> Fetching https://downloads.sourceforge.net/project/bowtie-bio/bowtie/1.2.1.1/bowtie-1.2.1.1-src.zip ######################################################################## 100.0% ==> Checksummed 1 version of bowtie ==> This package looks like it uses the makefile build system ==> Created template for bowtie package ==> Created package file: /Users/mamelara/spack/var/spack/repos/builtin/packages/bowtie/package.py ``` Once the fetching is completed, Spack will open up your text editor in the usual fashion and create a template of a `MakefilePackage` package.py. | | | | --- | --- | | ``` 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 ``` | ``` # Copyright 2013-2018 Lawrence Livermore National Security, LLC and other # Spack Project Developers. See the top-level COPYRIGHT file for details. # # SPDX-License-Identifier: (Apache-2.0 OR MIT) from spack import * class Bowtie(MakefilePackage): """FIXME: Put a proper description of your package here.""" # FIXME: Add a proper url for your package's homepage here. homepage = "http://www.example.com" url = "https://downloads.sourceforge.net/project/bowtie-bio/bowtie/1.2.1.1/bowtie-1.2.1.1-src.zip" version('1.2.1.1', 'ec06265730c5f587cd58bcfef6697ddf') # FIXME: Add dependencies if required. 
# depends_on('foo') def edit(self, spec, prefix): # FIXME: Edit the Makefile if necessary # FIXME: If not needed delete this function # makefile = FileFilter('Makefile') # makefile.filter('CC = .*', 'CC = cc') return ``` | Spack was successfully able to detect that `Bowtie` uses `Make`. Let’s add in the rest of our details for our package: | | | | --- | --- | | ``` 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27 ``` | ``` # Copyright 2013-2018 Lawrence Livermore National Security, LLC and other # Spack Project Developers. See the top-level COPYRIGHT file for details. # # SPDX-License-Identifier: (Apache-2.0 OR MIT) from spack import * class Bowtie(MakefilePackage): """Bowtie is an ultrafast, memory efficient short read aligner for short DNA sequences (reads) from next-gen sequencers.""" homepage = "https://sourceforge.net/projects/bowtie-bio/" url = "https://downloads.sourceforge.net/project/bowtie-bio/bowtie/1.2.1.1/bowtie-1.2.1.1-src.zip" version('1.2.1.1', 'ec06265730c5f587cd58bcfef6697ddf') variant("tbb", default=False, description="Use Intel thread building block") depends_on("tbb", when="+tbb") def edit(self, spec, prefix): # FIXME: Edit the Makefile if necessary # FIXME: If not needed delete this function # makefile = FileFilter('Makefile') # makefile.filter('CC = .*', 'CC = cc') return ``` | As we mentioned earlier, most packages using a `Makefile` have hard-coded variables that must be edited. These variables are fine if you happen to not care about setup or types of compilers used but Spack is designed to work with any compiler. The `MakefilePackage` subclass makes it easy to edit these `Makefiles` by having an `edit()` method that can be overridden. Let’s take a look at the default `Makefile` that `Bowtie` provides. 
If we look inside, we see that `CC` and `CXX` point to our GNU compiler: ``` $ spack stage bowtie ``` Note As usual make sure you have shell support activated with spack: `source /path/to/spack_root/spack/share/spack/setup-env.sh` ``` $ spack cd -s bowtie $ cd bowtie-1.2 $ vim Makefile ``` ``` CPP = g++ -w CXX = $(CPP) CC = gcc LIBS = $(LDFLAGS) -lz HEADERS = $(wildcard *.h) ``` To fix this, we need to use the `edit()` method to write our custom `Makefile`. | | | | --- | --- | | ``` 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 ``` | ``` # Copyright 2013-2018 Lawrence Livermore National Security, LLC and other # Spack Project Developers. See the top-level COPYRIGHT file for details. # # SPDX-License-Identifier: (Apache-2.0 OR MIT) from spack import * class Bowtie(MakefilePackage): """Bowtie is an ultrafast, memory efficient short read aligner for short DNA sequences (reads) from next-gen sequencers.""" homepage = "https://sourceforge.net/projects/bowtie-bio/" url = "https://downloads.sourceforge.net/project/bowtie-bio/bowtie/1.2.1.1/bowtie-1.2.1.1-src.zip" version('1.2.1.1', 'ec06265730c5f587cd58bcfef6697ddf') variant("tbb", default=False, description="Use Intel thread building block") depends_on("tbb", when="+tbb") def edit(self, spec, prefix): makefile = FileFilter("Makefile") makefile.filter('CC= .*', 'CC = ' + env['CC']) makefile.filter('CXX = .*', 'CXX = ' + env['CXX']) ``` | Here we use a `FileFilter` object to edit our `Makefile`. It takes in a regular expression and then replaces `CC` and `CXX` to whatever Spack sets `CC` and `CXX` environment variables to. This allows us to build `Bowtie` with whatever compiler we specify through Spack’s `spec` syntax. 
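`FileFilter` is essentially line-wise regular-expression substitution. The same edit can be sketched with Python's standard `re` module; the compiler paths below are hypothetical stand-ins for whatever Spack puts in the `CC`/`CXX` environment variables:

```python
import re

# A fragment of the stock Bowtie Makefile:
makefile = """CPP = g++ -w
CXX = $(CPP)
CC = gcc
"""

def filter_lines(text, pattern, repl):
    """Apply re.sub to every line, like FileFilter.filter() does."""
    return "\n".join(re.sub(pattern, repl, line)
                     for line in text.splitlines()) + "\n"

patched = filter_lines(makefile, r"CC = .*", "CC = /opt/spack/bin/cc")
patched = filter_lines(patched, r"CXX = .*", "CXX = /opt/spack/bin/c++")
assert "CC = /opt/spack/bin/cc" in patched
assert "CXX = /opt/spack/bin/c++" in patched
```

Note that the pattern must match the Makefile's exact spacing (`CC = .*`, with spaces around `=`) for the substitution to fire.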
Let’s change the build and install phases of our package: | | | | --- | --- | | ``` 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27 28 29 30 31 32 33 34 35 36 ``` | ``` # Copyright 2013-2018 Lawrence Livermore National Security, LLC and other # Spack Project Developers. See the top-level COPYRIGHT file for details. # # SPDX-License-Identifier: (Apache-2.0 OR MIT) from spack import * class Bowtie(MakefilePackage): """Bowtie is an ultrafast, memory efficient short read aligner for short DNA sequences (reads) from next-gen sequencers.""" homepage = "https://sourceforge.net/projects/bowtie-bio/" url = "https://downloads.sourceforge.net/project/bowtie-bio/bowtie/1.2.1.1/bowtie-1.2.1.1-src.zip" version('1.2.1.1', 'ec06265730c5f587cd58bcfef6697ddf') variant("tbb", default=False, description="Use Intel thread building block") depends_on("tbb", when="+tbb") def edit(self, spec, prefix): makefile = FileFilter("Makefile") makefile.filter('CC= .*', 'CC = ' + env['CC']) makefile.filter('CXX = .*', 'CXX = ' + env['CXX']) @property def build_targets(self): if "+tbb" in spec: return [] else: return ["NO_TBB=1"] @property def install_targets(self): return ['prefix={0}'.format(self.prefix), 'install'] ``` | Here demonstrate another strategy that we can use to manipulate our package We can provide command-line arguments to `make()`. Since `Bowtie` can use `tbb` we can either add `NO_TBB=1` as a argument to prevent `tbb` support or we can just invoke `make` with no arguments. `Bowtie` requires our `install_target` to provide a path to the install directory. We can do this by providing `prefix=` as a command line argument to `make()`. Let’s look at a couple of other examples and go through them: ``` $ spack edit esmf ``` Some packages allow environment variables to be set and will honor them. Packages that use `?=` for assignment in their `Makefile` can be set using environment variables. 
In our `esmf` example we set two environment variables in our `edit()` method: ``` def edit(self, spec, prefix): for var in os.environ: if var.startswith('ESMF_'): os.environ.pop(var) # More code ... if self.compiler.name == 'gcc': os.environ['ESMF_COMPILER'] = 'gfortran' elif self.compiler.name == 'intel': os.environ['ESMF_COMPILER'] = 'intel' elif self.compiler.name == 'clang': os.environ['ESMF_COMPILER'] = 'gfortranclang' elif self.compiler.name == 'nag': os.environ['ESMF_COMPILER'] = 'nag' elif self.compiler.name == 'pgi': os.environ['ESMF_COMPILER'] = 'pgi' else: msg = "The compiler you are building with, " msg += "'{0}', is not supported by ESMF." raise InstallError(msg.format(self.compiler.name)) ``` As you may have noticed, we didn’t really write anything to the `Makefile` but rather we set environment variables that will override variables set in the `Makefile`. Some packages include a configuration file that sets certain compiler variables, platform specific variables, and the location of dependencies or libraries. If the file is simple and only requires a couple of changes, we can overwrite those entries with our `FileFilter` object. If the configuration involves complex changes, we can write a new configuration file from scratch. 
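Make's `?=` operator assigns a variable only if it is not already set in the environment, which is why exporting `ESMF_COMPILER` from `edit()` works without touching the `Makefile`. A minimal sketch of that lookup order (the variable names below are just examples):

```python
def make_conditional_assign(name, makefile_default, environ):
    # Emulates `NAME ?= makefile_default` in a Makefile: a value
    # exported in the environment wins; the Makefile default only
    # fills the gap when the variable is unset.
    return environ.get(name, makefile_default)

env = {"ESMF_COMPILER": "gfortran"}  # exported by edit(), as above
print(make_conditional_assign("ESMF_COMPILER", "intel", env))  # environment wins
print(make_conditional_assign("ESMF_BOPT", "O", env))          # default used
```

This is also why the `esmf` `edit()` method first pops any stale `ESMF_*` variables: leftovers from the user's shell would silently override the Makefile defaults.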
Let’s look at an example of this in the `elk` package: ``` $ spack edit elk ``` ``` def edit(self, spec, prefix): # Dictionary of configuration options config = { 'MAKE': 'make', 'AR': 'ar' } # Compiler-specific flags flags = '' if self.compiler.name == 'intel': flags = '-O3 -ip -unroll -no-prec-div' elif self.compiler.name == 'gcc': flags = '-O3 -ffast-math -funroll-loops' elif self.compiler.name == 'pgi': flags = '-O3 -lpthread' elif self.compiler.name == 'g95': flags = '-O3 -fno-second-underscore' elif self.compiler.name == 'nag': flags = '-O4 -kind=byte -dusty -dcfuns' elif self.compiler.name == 'xl': flags = '-O3' config['F90_OPTS'] = flags config['F77_OPTS'] = flags # BLAS/LAPACK support # Note: BLAS/LAPACK must be compiled with OpenMP support # if the +openmp variant is chosen blas = 'blas.a' lapack = 'lapack.a' if '+blas' in spec: blas = spec['blas'].libs.joined() if '+lapack' in spec: lapack = spec['lapack'].libs.joined() # lapack must come before blas config['LIB_LPK'] = ' '.join([lapack, blas]) # FFT support if '+fft' in spec: config['LIB_FFT'] = join_path(spec['fftw'].prefix.lib, 'libfftw3.so') config['SRC_FFT'] = 'zfftifc_fftw.f90' else: config['LIB_FFT'] = 'fftlib.a' config['SRC_FFT'] = 'zfftifc.f90' # MPI support if '+mpi' in spec: config['F90'] = spec['mpi'].mpifc config['F77'] = spec['mpi'].mpif77 else: config['F90'] = spack_fc config['F77'] = spack_f77 config['SRC_MPI'] = 'mpi_stub.f90' # OpenMP support if '+openmp' in spec: config['F90_OPTS'] += ' ' + self.compiler.openmp_flag config['F77_OPTS'] += ' ' + self.compiler.openmp_flag else: config['SRC_OMP'] = 'omp_stub.f90' # Libxc support if '+libxc' in spec: config['LIB_libxc'] = ' '.join([ join_path(spec['libxc'].prefix.lib, 'libxcf90.so'), join_path(spec['libxc'].prefix.lib, 'libxc.so') ]) config['SRC_libxc'] = ' '.join([ 'libxc_funcs.f90', 'libxc.f90', 'libxcifc.f90' ]) else: config['SRC_libxc'] = 'libxcifc_stub.f90' # Write configuration options to include file with open('make.inc', 'w') as 
inc: for key in config: inc.write('{0} = {1}\n'.format(key, config[key])) ``` `config` is just a dictionary that we can add key-value pairs to. By the end of the `edit()` method we write the contents of our dictionary to `make.inc`. #### CMake[¶](#cmake) [CMake](https://cmake.org) is another common build system that has been gaining popularity. It works in a similar manner to `Autotools` but with differences in variable names, the number of configuration options available, and the handling of shared libraries. Typical build incantations look like this: ``` def install(self, spec, prefix): cmake("-DCMAKE_INSTALL_PREFIX:PATH=/path/to/install_dir ..") make() make("install") ``` As you can see from the example above, it’s very similar to invoking `configure` and `make` in an `Autotools` build system. However, the variable names and options differ. Most options in CMake are prefixed with a `'-D'` flag to indicate a configuration setting. In the `CMakePackage` class we can override the following phases: 1. `cmake()` 2. `build()` 3. `install()` The `CMakePackage` class also provides sensible defaults so we only need to override `cmake_args()`. 
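Conceptually, a `cmake_args()` override just returns a list of extra `-D` definitions that `CMakePackage` appends to its standard arguments. A sketch of how such a list might be assembled from a dict of options (the option names here are hypothetical):

```python
def to_cmake_args(options):
    # Turn {'ENABLE_TESTS': 'OFF'} into ['-DENABLE_TESTS=OFF'],
    # the form a cmake_args() override hands back to CMakePackage.
    return ["-D{0}={1}".format(key, value)
            for key, value in sorted(options.items())]

print(to_cmake_args({"ENABLE_TESTS": "OFF", "BUILD_SHARED_LIBS": "ON"}))
```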
Let’s look at these defaults in the `CMakePackage` class in the `_std_args()` method: ``` $ spack edit --build-system cmake ``` | | | | --- | --- | | ``` 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27 28 29 30 31 32 33 34 35 36 37 38 39 40 41 42 43 44 45 46 ``` | ``` @staticmethod def _std_args(pkg): """Computes the standard cmake arguments for a generic package""" try: generator = pkg.generator except AttributeError: generator = 'Unix Makefiles' # Make sure a valid generator was chosen valid_generators = ['Unix Makefiles', 'Ninja'] if generator not in valid_generators: msg = "Invalid CMake generator: '{0}'\n".format(generator) msg += "CMakePackage currently supports the following " msg += "generators: '{0}'".format("', '".join(valid_generators)) raise InstallError(msg) try: build_type = pkg.spec.variants['build_type'].value except KeyError: build_type = 'RelWithDebInfo' args = [ '-G', generator, '-DCMAKE_INSTALL_PREFIX:PATH={0}'.format(pkg.prefix), '-DCMAKE_BUILD_TYPE:STRING={0}'.format(build_type), '-DCMAKE_VERBOSE_MAKEFILE:BOOL=ON' ] if platform.mac_ver()[0]: args.extend([ '-DCMAKE_FIND_FRAMEWORK:STRING=LAST', '-DCMAKE_FIND_APPBUNDLE:STRING=LAST' ]) # Set up CMake rpath args.append('-DCMAKE_INSTALL_RPATH_USE_LINK_PATH:BOOL=FALSE') rpaths = ';'.join(spack.build_environment.get_rpaths(pkg)) args.append('-DCMAKE_INSTALL_RPATH:STRING={0}'.format(rpaths)) # CMake's find_package() looks in CMAKE_PREFIX_PATH first, help CMake # to find immediate link dependencies in right places: deps = [d.prefix for d in pkg.spec.dependencies(deptype=('build', 'link'))] deps = filter_system_paths(deps) args.append('-DCMAKE_PREFIX_PATH:STRING={0}'.format(';'.join(deps))) return args ``` | Some `CMake` packages use different generators. Spack is able to support [Unix-Makefile](https://cmake.org/cmake/help/v3.4/generator/Unix%20Makefiles.html) generators as well as [Ninja](https://cmake.org/cmake/help/v3.4/generator/Ninja.html) generators. 
If no generator is specified, Spack will default to `Unix Makefiles`.

Next we set up the build type. In `CMake` you can specify the build type that you want. Options include:

1. `empty`
2. `Debug`
3. `Release`
4. `RelWithDebInfo`
5. `MinSizeRel`

With these options you can specify whether you want a debug build, a release build, or a release build with debug information. Release executables tend to be more optimized than Debug ones. In Spack, we set the default to `RelWithDebInfo` unless otherwise specified through a variant.

Spack then automatically sets up the `-DCMAKE_INSTALL_PREFIX` path, appends the build type (`RelWithDebInfo` by default), and then specifies a verbose `Makefile`. Next we add the `rpaths` to `-DCMAKE_INSTALL_RPATH:STRING`. Finally we add to `-DCMAKE_PREFIX_PATH:STRING` the locations of all our dependencies so that `CMake` can find them. In the end our `cmake` line will look like this (example is `xrootd`):

```
$ cmake $HOME/spack/var/spack/stage/xrootd-4.6.0-4ydm74kbrp4xmcgda5upn33co5pwddyk/xrootd-4.6.0 -G Unix Makefiles -DCMAKE_INSTALL_PREFIX:PATH=$HOME/spack/opt/spack/darwin-sierra-x86_64/clang-9.0.0-apple/xrootd-4.6.0-4ydm74kbrp4xmcgda5upn33co5pwddyk -DCMAKE_BUILD_TYPE:STRING=RelWithDebInfo -DCMAKE_VERBOSE_MAKEFILE:BOOL=ON -DCMAKE_FIND_FRAMEWORK:STRING=LAST -DCMAKE_INSTALL_RPATH_USE_LINK_PATH:BOOL=FALSE -DCMAKE_INSTALL_RPATH:STRING=$HOME/spack/opt/spack/darwin-sierra-x86_64/clang-9.0.0-apple/xrootd-4.6.0-4ydm74kbrp4xmcgda5upn33co5pwddyk/lib:$HOME/spack/opt/spack/darwin-sierra-x86_64/clang-9.0.0-apple/xrootd-4.6.0-4ydm74kbrp4xmcgda5upn33co5pwddyk/lib64 -DCMAKE_PREFIX_PATH:STRING=$HOME/spack/opt/spack/darwin-sierra-x86_64/clang-9.0.0-apple/cmake-3.9.4-hally3vnbzydiwl3skxcxcbzsscaasx5
```

We can see now how `CMake` takes care of a lot of the boilerplate code that would otherwise have to be typed in.
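The core of the `_std_args()` defaults above can be condensed into a small sketch: given an install prefix, a build type, and the prefixes of the build/link dependencies, assemble the argument list CMake receives (the paths below are made up):

```python
def std_cmake_args(prefix, build_type, dep_prefixes):
    # Mirrors the heart of CMakePackage._std_args(): install prefix,
    # build type, verbose Makefile, and a semicolon-separated
    # CMAKE_PREFIX_PATH so find_package() locates dependencies.
    args = [
        "-DCMAKE_INSTALL_PREFIX:PATH={0}".format(prefix),
        "-DCMAKE_BUILD_TYPE:STRING={0}".format(build_type),
        "-DCMAKE_VERBOSE_MAKEFILE:BOOL=ON",
    ]
    args.append("-DCMAKE_PREFIX_PATH:STRING={0}".format(";".join(dep_prefixes)))
    return args

args = std_cmake_args("/opt/example", "RelWithDebInfo", ["/opt/dep-a", "/opt/dep-b"])
print(args[-1])
```

A package author never writes these flags by hand; they come for free, and `cmake_args()` only supplies whatever is package-specific on top.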
Let’s try to recreate [callpath](https://github.com/LLNL/callpath.git): ``` $ spack create -f https://github.com/llnl/callpath/archive/v1.0.3.tar.gz ==> This looks like a URL for callpath ==> Found 4 versions of callpath: 1.0.3 https://github.com/LLNL/callpath/archive/v1.0.3.tar.gz 1.0.2 https://github.com/LLNL/callpath/archive/v1.0.2.tar.gz 1.0.1 https://github.com/LLNL/callpath/archive/v1.0.1.tar.gz 1.0 https://github.com/LLNL/callpath/archive/v1.0.tar.gz ==> How many would you like to checksum? (default is 1, q to abort) 1 ==> Downloading... ==> Fetching https://github.com/LLNL/callpath/archive/v1.0.3.tar.gz ######################################################################## 100.0% ==> Checksummed 1 version of callpath ==> This package looks like it uses the cmake build system ==> Created template for callpath package ==> Created package file: /Users/mamelara/spack/var/spack/repos/builtin/packages/callpath/package.py ``` which then produces the following template: | | | | --- | --- | | ``` 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27 28 29 30 31 32 33 34 35 36 37 38 39 40 41 ``` | ``` # Copyright 2013-2018 Lawrence Livermore National Security, LLC and other # Spack Project Developers. See the top-level COPYRIGHT file for details. # # SPDX-License-Identifier: (Apache-2.0 OR MIT) # # This is a template package file for Spack. We've put "FIXME" # next to all the things you'll want to change. Once you've handled # them, you can save this file and test your package like this: # # spack install callpath # # You can edit this file again by typing: # # spack edit callpath # # See the Spack documentation for more information on packaging. # If you submit this package back to Spack as a pull request, # please first remove this boilerplate and all FIXME comments. # from spack import * class Callpath(CMakePackage): """FIXME: Put a proper description of your package here.""" # FIXME: Add a proper url for your package's homepage here. 
homepage = "http://www.example.com" url = "https://github.com/llnl/callpath/archive/v1.0.1.tar.gz" version('1.0.3', 'c89089b3f1c1ba47b09b8508a574294a') # FIXME: Add dependencies if required. # depends_on('foo') def cmake_args(self): # FIXME: Add arguments other than # FIXME: CMAKE_INSTALL_PREFIX and CMAKE_BUILD_TYPE # FIXME: If not needed delete this function args = [] return args ``` | Again we fill in the details: | | | | --- | --- | | ``` 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 ``` | ``` # Copyright 2013-2018 Lawrence Livermore National Security, LLC and other # Spack Project Developers. See the top-level COPYRIGHT file for details. # # SPDX-License-Identifier: (Apache-2.0 OR MIT) from spack import * class Callpath(CMakePackage): """Library for representing callpaths consistently in distributed-memory performance tools.""" homepage = "https://github.com/llnl/callpath" url = "https://github.com/llnl/callpath/archive/v1.0.3.tar.gz" version('1.0.3', 'c89089b3f1c1ba47b09b8508a574294a') depends_on("elf", type="link") depends_on("libdwarf") depends_on("dyninst") depends_on("adept-utils") depends_on("mpi") depends_on("cmake@2.8:", type="build") ``` | As mentioned earlier, Spack will use sensible defaults to prevent repeated code and to make writing `CMake` package files simpler. In callpath, we want to add options to `CALLPATH_WALKER` as well as add compiler flags. We add the following options like so: | | | | --- | --- | | ``` 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27 28 29 30 31 32 33 ``` | ``` # Copyright 2013-2018 Lawrence Livermore National Security, LLC and other # Spack Project Developers. See the top-level COPYRIGHT file for details. 
#
# SPDX-License-Identifier: (Apache-2.0 OR MIT)

from spack import *


class Callpath(CMakePackage):
    """Library for representing callpaths consistently in
    distributed-memory performance tools."""

    homepage = "https://github.com/llnl/callpath"
    url = "https://github.com/llnl/callpath/archive/v1.0.3.tar.gz"

    version('1.0.3', 'c89089b3f1c1ba47b09b8508a574294a')

    depends_on("elf", type="link")
    depends_on("libdwarf")
    depends_on("dyninst")
    depends_on("adept-utils")
    depends_on("mpi")
    depends_on("cmake@2.8:", type="build")

    def cmake_args(self):
        args = ["-DCALLPATH_WALKER=dyninst"]

        if self.spec.satisfies("^dyninst@9.3.0:"):
            std_flag = self.compiler.cxx11_flag
            args.append("-DCMAKE_CXX_FLAGS='{0} -fpermissive'".format(
                std_flag))

        return args
```

|

Now we can control our build options using `cmake_args()`. If the defaults are sufficient for the package, we can leave this method out.

`CMakePackage` classes allow for control of other features in the build system. For example, you can specify the path to the "out of source" build directory and also point to the root of the `CMakeLists.txt` file if it is placed in a non-standard location. A good example of a package with its `CMakeLists.txt` file in a non-standard location is `spades`.

```
$ spack edit spades
```

```
root_cmakelists_dir = "src"
```

Here `root_cmakelists_dir` tells Spack where to find `CMakeLists.txt`. In this example, it is located one directory level below, in the `src` directory.

Some `CMake` packages also require the `install` phase to be overridden. For example, let's take a look at `sniffles`.
```
$ spack edit sniffles
```

In the `install()` method, we have to manually install our targets, so we override the `install()` method to do it for us:

```
# the build process doesn't actually install anything, do it by hand
def install(self, spec, prefix):
    mkdir(prefix.bin)
    src = "bin/sniffles-core-{0}".format(spec.version.dotted)
    binaries = ['sniffles', 'sniffles-debug']
    for b in binaries:
        install(join_path(src, b), join_path(prefix.bin, b))
```

#### PythonPackage[¶](#pythonpackage)

Python extensions and modules are built differently from source than most applications. Python uses a `setup.py` script to install Python modules. The script consists of a call to `setup()`, which provides the information required to build a module to Distutils. If you're familiar with `pip` or `easy_install`, `setup.py` does the same thing. These modules are usually installed using the following line:

```
$ python setup.py install
```

There is also a list of commands and phases that you can call. To see the full list you can run:

```
$ python setup.py --help-commands
Standard commands:
build build everything needed to install
build_py "build" pure Python modules (copy to build directory)
build_ext build C/C++ extensions (compile/link to build directory)
build_clib build C/C++ libraries used by Python extensions
build_scripts "build" scripts (copy and fixup #! line)
clean (no description available)
install install everything from build directory
install_lib install all Python modules (extensions and pure Python)
install_headers install C/C++ header files
install_scripts install scripts (Python or otherwise)
install_data install data files
sdist create a source distribution (tarball, zip file, etc.)
register register the distribution with the Python package index bdist create a built (binary) distribution bdist_dumb create a "dumb" built distribution bdist_rpm create an RPM distribution bdist_wininst create an executable installer for MS Windows upload upload binary package to PyPI check perform some checks on the package ``` We can write package files for Python packages using the `Package` class, but the class brings with it a lot of methods that are useless for Python packages. Instead, Spack has a `PythonPackage` subclass that allows packagers of Python modules to be able to invoke `setup.py` and use `Distutils`, which is much more familiar to a typical python user. To see the defaults that Spack has for each a methods, we will take a look at the `PythonPackage` class: ``` $ spack edit --build-system python ``` We see the following: | | | | --- | --- | | ``` 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27 28 29 30 31 32 33 34 35 36 37 38 39 40 41 42 43 44 45 46 47 48 49 50 51 52 53 54 55 56 57 58 59 60 61 62 63 64 65 66 67 68 69 70 71 72 73 74 75 76 77 78 79 80 81 82 83 84 85 86 87 88 89 90 91 92 93 94 95 96 97 98 99 100 101 102 103 104 105 106 107 108 109 110 111 112 113 114 115 116 117 118 119 120 121 122 123 124 125 126 127 128 129 130 131 132 133 134 135 136 137 138 139 140 141 142 143 144 145 146 147 148 149 150 151 152 153 154 155 156 157 158 159 160 161 162 163 164 165 166 167 168 169 170 171 172 173 174 175 176 177 178 179 180 181 182 183 184 185 186 187 188 189 190 191 192 193 194 195 196 197 198 199 200 201 202 203 204 205 206 207 208 209 210 211 212 213 ``` | ``` class PythonPackage(PackageBase): # Standard commands def build(self, spec, prefix): """Build everything needed to install.""" args = self.build_args(spec, prefix) self.setup_py('build', *args) def build_args(self, spec, prefix): """Arguments to pass to build.""" return [] def build_py(self, spec, prefix): '''"Build" pure Python modules (copy to build 
directory).''' args = self.build_py_args(spec, prefix) self.setup_py('build_py', *args) def build_py_args(self, spec, prefix): """Arguments to pass to build_py.""" return [] def build_ext(self, spec, prefix): """Build C/C++ extensions (compile/link to build directory).""" args = self.build_ext_args(spec, prefix) self.setup_py('build_ext', *args) def build_ext_args(self, spec, prefix): """Arguments to pass to build_ext.""" return [] def build_clib(self, spec, prefix): """Build C/C++ libraries used by Python extensions.""" args = self.build_clib_args(spec, prefix) self.setup_py('build_clib', *args) def build_clib_args(self, spec, prefix): """Arguments to pass to build_clib.""" return [] def build_scripts(self, spec, prefix): '''"Build" scripts (copy and fixup #! line).''' args = self.build_scripts_args(spec, prefix) self.setup_py('build_scripts', *args) def clean(self, spec, prefix): """Clean up temporary files from 'build' command.""" args = self.clean_args(spec, prefix) self.setup_py('clean', *args) def clean_args(self, spec, prefix): """Arguments to pass to clean.""" return [] def install(self, spec, prefix): """Install everything from build directory.""" args = self.install_args(spec, prefix) self.setup_py('install', *args) def install_args(self, spec, prefix): """Arguments to pass to install.""" args = ['--prefix={0}'.format(prefix)] # This option causes python packages (including setuptools) NOT # to create eggs or easy-install.pth files. Instead, they # install naturally into $prefix/pythonX.Y/site-packages. # # Eggs add an extra level of indirection to sys.path, slowing # down large HPC runs. They are also deprecated in favor of # wheels, which use a normal layout when installed. # # Spack manages the package directory on its own by symlinking # extensions into the site-packages directory, so we don't really # need the .pth files or egg directories, anyway. # # We need to make sure this is only for build dependencies. 
A package # such as py-basemap will not build properly with this flag since # it does not use setuptools to build and those does not recognize # the --single-version-externally-managed flag if ('py-setuptools' == spec.name or # this is setuptools, or 'py-setuptools' in spec._dependencies and # it's an immediate dep 'build' in spec._dependencies['py-setuptools'].deptypes): args += ['--single-version-externally-managed', '--root=/'] return args def install_lib(self, spec, prefix): """Install all Python modules (extensions and pure Python).""" args = self.install_lib_args(spec, prefix) self.setup_py('install_lib', *args) def install_lib_args(self, spec, prefix): """Arguments to pass to install_lib.""" return [] def install_headers(self, spec, prefix): """Install C/C++ header files.""" args = self.install_headers_args(spec, prefix) self.setup_py('install_headers', *args) def install_headers_args(self, spec, prefix): """Arguments to pass to install_headers.""" return [] def install_scripts(self, spec, prefix): """Install scripts (Python or otherwise).""" args = self.install_scripts_args(spec, prefix) self.setup_py('install_scripts', *args) def install_scripts_args(self, spec, prefix): """Arguments to pass to install_scripts.""" return [] def install_data(self, spec, prefix): """Install data files.""" args = self.install_data_args(spec, prefix) self.setup_py('install_data', *args) def install_data_args(self, spec, prefix): """Arguments to pass to install_data.""" return [] def sdist(self, spec, prefix): """Create a source distribution (tarball, zip file, etc.).""" args = self.sdist_args(spec, prefix) self.setup_py('sdist', *args) def sdist_args(self, spec, prefix): """Arguments to pass to sdist.""" return [] def register(self, spec, prefix): """Register the distribution with the Python package index.""" args = self.register_args(spec, prefix) self.setup_py('register', *args) def register_args(self, spec, prefix): """Arguments to pass to register.""" return [] def 
bdist(self, spec, prefix): """Create a built (binary) distribution.""" args = self.bdist_args(spec, prefix) self.setup_py('bdist', *args) def bdist_args(self, spec, prefix): """Arguments to pass to bdist.""" return [] def bdist_dumb(self, spec, prefix): '''Create a "dumb" built distribution.''' args = self.bdist_dumb_args(spec, prefix) self.setup_py('bdist_dumb', *args) def bdist_dumb_args(self, spec, prefix): """Arguments to pass to bdist_dumb.""" return [] def bdist_rpm(self, spec, prefix): """Create an RPM distribution.""" args = self.bdist_rpm(spec, prefix) self.setup_py('bdist_rpm', *args) def bdist_rpm_args(self, spec, prefix): """Arguments to pass to bdist_rpm.""" return [] def bdist_wininst(self, spec, prefix): """Create an executable installer for MS Windows.""" args = self.bdist_wininst_args(spec, prefix) self.setup_py('bdist_wininst', *args) def bdist_wininst_args(self, spec, prefix): """Arguments to pass to bdist_wininst.""" return [] def upload(self, spec, prefix): """Upload binary package to PyPI.""" args = self.upload_args(spec, prefix) self.setup_py('upload', *args) def upload_args(self, spec, prefix): """Arguments to pass to upload.""" return [] def check(self, spec, prefix): """Perform some checks on the package.""" args = self.check_args(spec, prefix) self.setup_py('check', *args) def check_args(self, spec, prefix): """Arguments to pass to check.""" return [] ``` | Each of these methods have sensible defaults or they can be overridden. We will write a package file for [Pandas](https://pandas.pydata.org): ``` $ spack create -f https://pypi.io/packages/source/p/pandas/pandas-0.19.0.tar.gz ==> This looks like a URL for pandas ==> Warning: Spack was unable to fetch url list due to a certificate verification problem. You can try running spack -k, which will not check SSL certificates. Use this at your own risk. ==> Found 1 version of pandas: 0.19.0 https://pypi.io/packages/source/p/pandas/pandas-0.19.0.tar.gz ==> How many would you like to checksum? 
(default is 1, q to abort) 1 ==> Downloading... ==> Fetching https://pypi.io/packages/source/p/pandas/pandas-0.19.0.tar.gz ######################################################################## 100.0% ==> Checksummed 1 version of pandas ==> This package looks like it uses the python build system ==> Changing package name from pandas to py-pandas ==> Created template for py-pandas package ==> Created package file: /Users/mamelara/spack/var/spack/repos/builtin/packages/py-pandas/package.py ``` And we are left with the following template: | | | | --- | --- | | ``` 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27 28 29 30 31 32 33 34 35 36 37 38 39 40 41 ``` | ``` # Copyright 2013-2018 Lawrence Livermore National Security, LLC and other # Spack Project Developers. See the top-level COPYRIGHT file for details. # # SPDX-License-Identifier: (Apache-2.0 OR MIT) # # This is a template package file for Spack. We've put "FIXME" # next to all the things you'll want to change. Once you've handled # them, you can save this file and test your package like this: # # spack install py-pandas # # You can edit this file again by typing: # # spack edit py-pandas # # See the Spack documentation for more information on packaging. # If you submit this package back to Spack as a pull request, # please first remove this boilerplate and all FIXME comments. # from spack import * class PyPandas(PythonPackage): """FIXME: Put a proper description of your package here.""" # FIXME: Add a proper url for your package's homepage here. homepage = "http://www.example.com" url = "https://pypi.io/packages/source/p/pandas/pandas-0.19.0.tar.gz" version('0.19.0', 'bc9bb7188e510b5d44fbdd249698a2c3') # FIXME: Add dependencies if required. 
    # depends_on('py-setuptools', type='build')
    # depends_on('py-foo', type=('build', 'run'))

    def build_args(self, spec, prefix):
        # FIXME: Add arguments other than --prefix
        # FIXME: If not needed delete this function
        args = []
        return args
```

|

As you can see, this is not any different from any other package template that we have written. We have the choice of providing build options or using the sensible defaults. Luckily for us, there is no need to provide build args.

Next we need to find the dependencies of a package. Dependencies are usually listed in `setup.py`. You can find the dependencies by searching for the `install_requires` keyword in that file. Here it is for `Pandas`:

```
# ... code
if sys.version_info[0] >= 3:
    setuptools_kwargs = {
        'zip_safe': False,
        'install_requires': ['python-dateutil >= 2',
                             'pytz >= 2011k',
                             'numpy >= %s' % min_numpy_ver],
        'setup_requires': ['numpy >= %s' % min_numpy_ver],
    }
    if not _have_setuptools:
        sys.exit("need setuptools/distribute for Py3k"
                 "\n$ pip install distribute")
# ... more code
```

You can find a more comprehensive list at the Pandas [documentation](https://pandas.pydata.org/pandas-docs/stable/install.html). By reading the documentation and `setup.py`, we found that `Pandas` depends on `python-dateutil`, `pytz`, `numpy`, `numexpr`, and `bottleneck`. Here is the completed `Pandas` script:

| | | | --- | --- | | ``` 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27 28 29 30 31 32 ``` | ```
# Copyright 2013-2018 Lawrence Livermore National Security, LLC and other
# Spack Project Developers. See the top-level COPYRIGHT file for details.
#
# SPDX-License-Identifier: (Apache-2.0 OR MIT)

from spack import *


class PyPandas(PythonPackage):
    """pandas is a Python package providing fast, flexible, and expressive
    data structures designed to make working with relational or labeled data
    both easy and intuitive.
    It aims to be the fundamental high-level building block for
    doing practical, real world data analysis in Python. Additionally, it
    has the broader goal of becoming the most powerful and flexible open
    source data analysis / manipulation tool available in any language.
    """

    homepage = "http://pandas.pydata.org/"
    url = "https://pypi.io/packages/source/p/pandas/pandas-0.19.0.tar.gz"

    version('0.19.0', 'bc9bb7188e510b5d44fbdd249698a2c3')
    version('0.18.0', 'f143762cd7a59815e348adf4308d2cf6')
    version('0.16.1', 'fac4f25748f9610a3e00e765474bdea8')
    version('0.16.0', 'bfe311f05dc0<KEY>')

    depends_on('py-dateutil', type=('build', 'run'))
    depends_on('py-numpy', type=('build', 'run'))
    depends_on('py-setuptools', type='build')
    depends_on('py-cython', type='build')
    depends_on('py-pytz', type=('build', 'run'))
    depends_on('py-numexpr', type=('build', 'run'))
    depends_on('py-bottleneck', type=('build', 'run'))
```

|

It is quite important to declare all the dependencies of a Python package. Spack can "activate" Python packages to prevent the user from having to load each dependency module explicitly. If a dependency is missed, Spack will be unable to properly activate the package and it will cause an issue. To learn more about extensions go to [spack extensions](index.html#cmd-spack-extensions).

From this example, you can see that building Python modules is made easy through the `PythonPackage` class.

#### Other Build Systems[¶](#other-build-systems)

Although we won't get in depth with any of the other build systems that Spack supports, it is worth mentioning that Spack does provide subclasses for the following build systems:

1. `IntelPackage`
2. `SconsPackage`
3. `WafPackage`
4. `RPackage`
5. `PerlPackage`
6. `QMakePackage`

Each of these classes has its own abstractions to help assist in writing package files. For whatever doesn't fit nicely into the other build systems, you can use the `Package` class.
Hopefully by now you can see how we aim to make packaging simple and robust through these classes. If you want to learn more about these build systems, check out [Implementing the installation procedure](index.html#installation-procedure) in the Packaging Guide.

### Advanced Topics in Packaging[¶](#advanced-topics-in-packaging)

Spack tries to automatically configure packages with information from dependencies such that all you need to do is list the dependencies (i.e., with the `depends_on` directive) and the build system (for example, by deriving from `CMakePackage`). However, there are many special cases. Often you need to retrieve details about dependencies to set package-specific configuration options, or to define package-specific environment variables used by the package's build system. This tutorial covers how to retrieve build information from dependencies, and how you can automatically provide important information to dependents in your package.

#### Setup for the tutorial[¶](#setup-for-the-tutorial)

Note: If you are not using the tutorial docker image, it is recommended that you do this section of the tutorial in a fresh clone of Spack.

The tutorial uses custom package definitions with missing sections that will be filled in during the tutorial. These package definitions are stored in a separate package repository, which can be enabled with:

```
$ spack repo add --scope=site var/spack/repos/tutorial
```

This section of the tutorial may also require a newer version of gcc, which you can add with:

```
$ spack install gcc@7.2.0
$ spack compiler add --scope=site path/to/spack-installed-gcc/bin
```

If you are using the tutorial docker image, all dependency packages will have been installed. Otherwise, to install these packages you can use the following commands:

```
$ spack install openblas
$ spack install netlib-lapack
$ spack install mpich
```

Now, you are ready to set your preferred `EDITOR` and continue with the rest of the tutorial.
Note: Several of these packages depend on an MPI implementation. You can use OpenMPI if you install it from scratch, but this is slow (>10 min.). A binary cache of MPICH may be provided, in which case you can force the package to use it and install quickly. All tutorial examples with packages that depend on MPICH include the spec syntax for building with it.

#### Modifying a package's build environment[¶](#modifying-a-package-s-build-environment)

Spack sets up several environment variables like `PATH` by default to aid in building a package, but many packages make use of environment variables which convey specific information about their dependencies (e.g., `MPICC`). This section covers how to update your Spack packages so that package-specific environment variables are defined at build-time.

##### Set environment variables in dependent packages at build-time[¶](#set-environment-variables-in-dependent-packages-at-build-time)

Dependencies can set environment variables that are required when their dependents build. For example, when a package depends on a Python extension like `py-numpy`, Spack's `python` package will add it to `PYTHONPATH` so it is available at build time; this is required because the default setup that Spack does is not sufficient for Python to import modules. To provide environment setup for a dependent, a package can implement the [`setup_dependent_environment`](index.html#spack.package.PackageBase.setup_dependent_environment) function. This function takes as a parameter an [`EnvironmentModifications`](index.html#spack.util.environment.EnvironmentModifications) object which includes convenience methods to update the environment.
For example, an MPI implementation can set `MPICC` for packages that depend on it: ``` def setup_dependent_environment(self, spack_env, run_env, dependent_spec): spack_env.set('MPICC', join_path(self.prefix.bin, 'mpicc')) ``` In this case packages that depend on `mpi` will have `MPICC` defined in their environment when they build. This section is focused on modifying the build-time environment represented by `spack_env`, but it’s worth noting that modifications to `run_env` are included in Spack’s automatically-generated module files. We can practice by editing the `mpich` package to set the `MPICC` environment variable in the build-time environment of dependent packages. ``` root@advanced-packaging-tutorial:/# spack edit mpich ``` Once you’re finished, the method should look like this: ``` def setup_dependent_environment(self, spack_env, run_env, dependent_spec): spack_env.set('MPICC', join_path(self.prefix.bin, 'mpicc')) spack_env.set('MPICXX', join_path(self.prefix.bin, 'mpic++')) spack_env.set('MPIF77', join_path(self.prefix.bin, 'mpif77')) spack_env.set('MPIF90', join_path(self.prefix.bin, 'mpif90')) spack_env.set('MPICH_CC', spack_cc) spack_env.set('MPICH_CXX', spack_cxx) spack_env.set('MPICH_F77', spack_f77) spack_env.set('MPICH_F90', spack_fc) spack_env.set('MPICH_FC', spack_fc) ``` At this point we can, for instance, install `netlib-scalapack` with `mpich`: ``` root@advanced-packaging-tutorial:/# spack install netlib-scalapack ^mpich ... ==> Created stage in /usr/local/var/spack/stage/netlib-scalapack-2.0.2-km7tsbgoyyywonyejkjoojskhc5knz3z ==> No patches needed for netlib-scalapack ==> Building netlib-scalapack [CMakePackage] ==> Executing phase: 'cmake' ==> Executing phase: 'build' ==> Executing phase: 'install' ==> Successfully installed netlib-scalapack Fetch: 0.01s. Build: 3m 59.86s. Total: 3m 59.87s. 
[+] /usr/local/opt/spack/linux-ubuntu16.04-x86_64/gcc-5.4.0/netlib-scalapack-2.0.2-km7tsbgoyyywonyejkjoojskhc5knz3z ``` and double check the environment logs to verify that every variable was set to the correct value. ##### Set environment variables in your own package[¶](#set-environment-variables-in-your-own-package) Packages can modify their own build-time environment by implementing the [`setup_environment`](index.html#spack.package.PackageBase.setup_environment) function. For `qt` this looks like: ``` def setup_environment(self, spack_env, run_env): spack_env.set('MAKEFLAGS', '-j{0}'.format(make_jobs)) run_env.set('QTDIR', self.prefix) ``` When `qt` builds, `MAKEFLAGS` will be defined in the environment. To contrast with `qt`’s [`setup_dependent_environment`](index.html#spack.package.PackageBase.setup_dependent_environment) function: ``` def setup_dependent_environment(self, spack_env, run_env, dependent_spec): spack_env.set('QTDIR', self.prefix) ``` Let’s see how it works by completing the `elpa` package: ``` root@advanced-packaging-tutorial:/# spack edit elpa ``` In the end your method should look like: ``` def setup_environment(self, spack_env, run_env): spec = self.spec spack_env.set('CC', spec['mpi'].mpicc) spack_env.set('FC', spec['mpi'].mpifc) spack_env.set('CXX', spec['mpi'].mpicxx) spack_env.set('SCALAPACK_LDFLAGS', spec['scalapack'].libs.joined()) spack_env.append_flags('LDFLAGS', spec['lapack'].libs.search_flags) spack_env.append_flags('LIBS', spec['lapack'].libs.link_flags) ``` At this point it’s possible to proceed with the installation of `elpa ^mpich` #### Retrieving library information[¶](#retrieving-library-information) Although Spack attempts to help packages locate their dependency libraries automatically (e.g. by setting `PKG_CONFIG_PATH` and `CMAKE_PREFIX_PATH`), a package may have unique configuration options that are required to locate libraries. 
When a package needs information about dependency libraries, the general approach in Spack is to query the dependencies for the locations of their libraries and set configuration options accordingly. By default most Spack packages know how to automatically locate their libraries. This section covers how to retrieve library information from dependencies and how to locate libraries when the default logic doesn’t work. ##### Accessing dependency libraries[¶](#accessing-dependency-libraries) If you need to access the libraries of a dependency, you can do so via the `libs` property of the spec, for example in the `arpack-ng` package: ``` def install(self, spec, prefix): lapack_libs = spec['lapack'].libs.joined(';') blas_libs = spec['blas'].libs.joined(';') cmake(*[ '-DLAPACK_LIBRARIES={0}'.format(lapack_libs), '-DBLAS_LIBRARIES={0}'.format(blas_libs) ], '.') ``` Note that `arpack-ng` is querying virtual dependencies, which Spack automatically resolves to the installed implementation (e.g. `openblas` for `blas`). We’ve started work on a package for `armadillo`. 
You should open it, read through the comment that starts with `# TUTORIAL:` and complete the `cmake_args` section: ``` root@advanced-packaging-tutorial:/# spack edit armadillo ``` If you followed the instructions in the package, when you are finished, your `cmake_args` method should look like: ``` def cmake_args(self): spec = self.spec return [ # ARPACK support '-DARPACK_LIBRARY={0}'.format(spec['arpack-ng'].libs.joined(";")), # BLAS support '-DBLAS_LIBRARY={0}'.format(spec['blas'].libs.joined(";")), # LAPACK support '-DLAPACK_LIBRARY={0}'.format(spec['lapack'].libs.joined(";")), # SuperLU support '-DSuperLU_INCLUDE_DIR={0}'.format(spec['superlu'].prefix.include), '-DSuperLU_LIBRARY={0}'.format(spec['superlu'].libs.joined(";")), # HDF5 support '-DDETECT_HDF5={0}'.format('ON' if '+hdf5' in spec else 'OFF') ] ``` As you can see, getting the list of libraries that your dependencies provide is as easy as accessing their `libs` attribute. Furthermore, the interface remains the same whether you are querying regular or virtual dependencies. At this point you can complete the installation of `armadillo` using `openblas` as a LAPACK provider (`armadillo ^openblas ^mpich`): ``` root@advanced-packaging-tutorial:/# spack install armadillo ^openblas ^mpich ==> pkg-config is already installed in /usr/local/opt/spack/linux-ubuntu16.04-x86_64/gcc-5.4.0/pkg-config-0.29.2-ae2hwm7q57byfbxtymts55xppqwk7ecj ... 
==> superlu is already installed in /usr/local/opt/spack/linux-ubuntu16.04-x86_64/gcc-5.4.0/superlu-5.2.1-q2mbtw2wo4kpzis2e2n227ip2fquxrno ==> Installing armadillo ==> Using cached archive: /usr/local/var/spack/cache/armadillo/armadillo-8.100.1.tar.xz ==> Staging archive: /usr/local/var/spack/stage/armadillo-8.100.1-n2eojtazxbku6g4l5izucwwgnpwz77r4/armadillo-8.100.1.tar.xz ==> Created stage in /usr/local/var/spack/stage/armadillo-8.100.1-n2eojtazxbku6g4l5izucwwgnpwz77r4 ==> Applied patch undef_linux.patch ==> Building armadillo [CMakePackage] ==> Executing phase: 'cmake' ==> Executing phase: 'build' ==> Executing phase: 'install' ==> Successfully installed armadillo Fetch: 0.01s. Build: 3.96s. Total: 3.98s. [+] /usr/local/opt/spack/linux-ubuntu16.04-x86_64/gcc-5.4.0/armadillo-8.100.1-n2eojtazxbku6g4l5izucwwgnpwz77r4 ``` Hopefully the installation went fine and the code we added expanded to the right list of semicolon-separated libraries (you are encouraged to open `armadillo`’s build logs to double-check). ##### Providing libraries to dependents[¶](#providing-libraries-to-dependents) Spack provides a default implementation for `libs` which often works out of the box. A user can write a package definition without having to implement a `libs` property and dependents can retrieve its libraries as shown in the above section. However, the default implementation assumes that libraries follow the naming scheme `lib<package name>.so` (or e.g. `lib<package name>.a` for static libraries). Packages which don’t follow this naming scheme must implement the `libs` property themselves, e.g. `opencv`: ``` @property def libs(self): shared = "+shared" in self.spec return find_libraries( "libopencv_*", root=self.prefix, shared=shared, recursive=True ) ``` This issue is common for packages which implement an interface (i.e. virtual package providers in Spack). 
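The default lookup can be pictured as a recursive search under the install prefix for files matching a name pattern. The sketch below emulates that idea with Python's standard library; the `find_libraries` name and keywords are modeled on Spack's helper, but this toy version only matches file names and is not Spack's implementation:

```python
import fnmatch
import os
import tempfile

def find_libraries(patterns, root, shared=True, recursive=False):
    """Toy stand-in for Spack's find_libraries: return paths under
    ``root`` whose basename matches one of ``patterns`` plus the
    shared/static suffix."""
    suffix = '.so' if shared else '.a'
    if isinstance(patterns, str):
        patterns = [patterns]
    found = []
    for dirpath, _, files in os.walk(root):
        for fname in files:
            if fname.endswith(suffix) and any(
                    fnmatch.fnmatch(fname[:-len(suffix)], p)
                    for p in patterns):
                found.append(os.path.join(dirpath, fname))
        if not recursive:
            break  # only look at the top-level directory
    return sorted(found)

# Demo: a fake prefix with an opencv-style library layout.
prefix = tempfile.mkdtemp()
os.makedirs(os.path.join(prefix, 'lib'))
open(os.path.join(prefix, 'lib', 'libopencv_core.so'), 'w').close()
open(os.path.join(prefix, 'lib', 'libz.so'), 'w').close()

libs = find_libraries('libopencv_*', root=prefix, shared=True,
                      recursive=True)
print([os.path.basename(p) for p in libs])  # ['libopencv_core.so']
```

The default `libs` implementation effectively runs such a search with the pattern `lib<package name>`, which is exactly why differently named libraries need a custom property.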
If we try to build another version of `armadillo` tied to `netlib-lapack` (`armadillo ^netlib-lapack ^mpich`) we’ll notice that this time the installation won’t complete: ``` root@advanced-packaging-tutorial:/# spack install armadillo ^netlib-lapack ^mpich ==> pkg-config is already installed in /usr/local/opt/spack/linux-ubuntu16.04-x86_64/gcc-5.4.0/pkg-config-0.29.2-ae2hwm7q57byfbxtymts55xppqwk7ecj ... ==> openmpi is already installed in /usr/local/opt/spack/linux-ubuntu16.04-x86_64/gcc-5.4.0/openmpi-3.0.0-yo5qkfvumpmgmvlbalqcadu46j5bd52f ==> Installing arpack-ng ==> Using cached archive: /usr/local/var/spack/cache/arpack-ng/arpack-ng-3.5.0.tar.gz ==> Already staged arpack-ng-3.5.0-bloz7cqirpdxj33pg7uj32zs5likz2un in /usr/local/var/spack/stage/arpack-ng-3.5.0-bloz7cqirpdxj33pg7uj32zs5likz2un ==> No patches needed for arpack-ng ==> Building arpack-ng [Package] ==> Executing phase: 'install' ==> Error: RuntimeError: Unable to recursively locate netlib-lapack libraries in /usr/local/opt/spack/linux-ubuntu16.04-x86_64/gcc-5.4.0/netlib-lapack-3.6.1-jjfe23wgt7nkjnp2adeklhseg3ftpx6z RuntimeError: RuntimeError: Unable to recursively locate netlib-lapack libraries in /usr/local/opt/spack/linux-ubuntu16.04-x86_64/gcc-5.4.0/netlib-lapack-3.6.1-jjfe23wgt7nkjnp2adeklhseg3ftpx6z /usr/local/var/spack/repos/builtin/packages/arpack-ng/package.py:105, in install: 5 options.append('-DCMAKE_INSTALL_NAME_DIR:PATH=%s/lib' % prefix) 6 7 # Make sure we use Spack's blas/lapack: >> 8 lapack_libs = spec['lapack'].libs.joined(';') 9 blas_libs = spec['blas'].libs.joined(';') 10 11 options.extend([ See build log for details: /usr/local/var/spack/stage/arpack-ng-3.5.0-bloz7cqirpdxj33pg7uj32zs5likz2un/arpack-ng-3.5.0/spack-build.out ``` Unlike `openblas` which provides a library named `libopenblas.so`, `netlib-lapack` provides `liblapack.so`, so it needs to implement customized library search logic. 
Let’s edit it: ``` root@advanced-packaging-tutorial:/# spack edit netlib-lapack ``` and follow the instructions in the `# TUTORIAL:` comment as before. What we need to implement is: ``` @property def lapack_libs(self): shared = '+shared' in self.spec return find_libraries( 'liblapack', root=self.prefix, shared=shared, recursive=True ) ``` i.e., a property that returns the correct list of libraries for the LAPACK interface. We use the name `lapack_libs` rather than `libs` because `netlib-lapack` can also provide `blas`, and when it does, it is provided as a separate library file. Using this name ensures that when dependents ask for `lapack` libraries, `netlib-lapack` will retrieve only the libraries associated with the `lapack` interface. Now we can finally install `armadillo ^netlib-lapack ^mpich`: ``` root@advanced-packaging-tutorial:/# spack install armadillo ^netlib-lapack ^mpich ... ==> Building armadillo [CMakePackage] ==> Executing phase: 'cmake' ==> Executing phase: 'build' ==> Executing phase: 'install' ==> Successfully installed armadillo Fetch: 0.01s. Build: 3.75s. Total: 3.76s. [+] /usr/local/opt/spack/linux-ubuntu16.04-x86_64/gcc-5.4.0/armadillo-8.100.1-sxmpu5an4dshnhickh6ykchyfda7jpyn ``` Since each implementation of a virtual package is responsible for locating the libraries associated with the interfaces it provides, dependents do not need to include special-case logic for different implementations and, for example, need only ask for `spec['blas'].libs`. #### Other Packaging Topics[¶](#other-packaging-topics) ##### Attach attributes to other packages[¶](#attach-attributes-to-other-packages) Build tools usually also provide a set of executables that can be used when another package is being installed. Spack gives you the opportunity to monkey-patch dependent modules and attach attributes to them. This helps make the packager experience as similar as possible to what would have been the manual installation of the same package. 
An example here is the `automake` package, which overrides [`setup_dependent_package`](index.html#spack.package.PackageBase.setup_dependent_package): ``` def setup_dependent_package(self, module, dependent_spec): # Automake is very likely to be a build dependency, # so we add the tools it provides to the dependent module executables = ['aclocal', 'automake'] for name in executables: setattr(module, name, self._make_executable(name)) ``` so that every other package that depends on it can directly use `aclocal` and `automake` with the usual function call syntax of [`Executable`](index.html#spack.util.executable.Executable): ``` aclocal('--force') ``` ##### Extra query parameters[¶](#extra-query-parameters) An advanced feature of the Spec’s build-interface protocol is the support for extra parameters after the subscript key. In fact, any of the keys used in the query can be followed by a comma-separated list of extra parameters which can be inspected by the package receiving the request to fine-tune a response. Let’s look at an example and try to install `netcdf ^mpich`: ``` root@advanced-packaging-tutorial:/# spack install netcdf ^mpich ==> libsigsegv is already installed in /usr/local/opt/spack/linux-ubuntu16.04-x86_64/gcc-5.4.0/libsigsegv-2.11-fypapcprssrj3nstp6njprskeyynsgaz ==> m4 is already installed in /usr/local/opt/spack/linux-ubuntu16.04-x86_64/gcc-5.4.0/m4-1.4.18-r5envx3kqctwwflhd4qax4ahqtt6x43a ... ==> Error: AttributeError: 'list' object has no attribute 'search_flags' AttributeError: AttributeError: 'list' object has no attribute 'search_flags' /usr/local/var/spack/repos/builtin/packages/netcdf/package.py:207, in configure_args: 50 # used instead. 
51 hdf5_hl = self.spec['hdf5:hl'] 52 CPPFLAGS.append(hdf5_hl.headers.cpp_flags) >> 53 LDFLAGS.append(hdf5_hl.libs.search_flags) 54 55 if '+parallel-netcdf' in self.spec: 56 config_args.append('--enable-pnetcdf') See build log for details: /usr/local/var/spack/stage/netcdf-4.4.1.1-gk2xxhbqijnrdwicawawcll4t3c7dvoj/netcdf-4.4.1.1/spack-build.out ``` We can see from the error that `netcdf` needs to know how to link the *high-level interface* of `hdf5`, and thus passes the extra parameter `hl` after the request to retrieve it. Clearly the implementation in the `hdf5` package is not complete, and we need to fix it: ``` root@advanced-packaging-tutorial:/# spack edit hdf5 ``` If you followed the instructions correctly, the code added to the `libs` property should be similar to: ``` query_parameters = self.spec.last_query.extra_parameters key = tuple(sorted(query_parameters)) libraries = query2libraries[key] shared = '+shared' in self.spec return find_libraries( libraries, root=self.prefix, shared=shared, recursive=True ) ``` The first line retrieves the extra parameters attached to the query. Now we can successfully complete the installation of `netcdf ^mpich`: ``` root@advanced-packaging-tutorial:/# spack install netcdf ^mpich ==> libsigsegv is already installed in /usr/local/opt/spack/linux-ubuntu16.04-x86_64/gcc-5.4.0/libsigsegv-2.11-fypapcprssrj3nstp6njprskeyynsgaz ==> m4 is already installed in /usr/local/opt/spack/linux-ubuntu16.04-x86_64/gcc-5.4.0/m4-1.4.18-r5envx3kqctwwflhd4qax4ahqtt6x43a ... ==> Installing netcdf ==> Using cached archive: /usr/local/var/spack/cache/netcdf/netcdf-4.4.1.1.tar.gz ==> Already staged netcdf-4.4.1.1-gk2xxhbqijnrdwicawawcll4t3c7dvoj in /usr/local/var/spack/stage/netcdf-4.4.1.1-gk2xxhbqijnrdwicawawcll4t3c7dvoj ==> Already patched netcdf ==> Building netcdf [AutotoolsPackage] ==> Executing phase: 'autoreconf' ==> Executing phase: 'configure' ==> Executing phase: 'build' ==> Executing phase: 'install' ==> Successfully installed netcdf Fetch: 0.01s. 
Build: 24.61s. Total: 24.62s. [+] /usr/local/opt/spack/linux-ubuntu16.04-x86_64/gcc-5.4.0/netcdf-4.4.1.1-gk2xxhbqijnrdwicawawcll4t3c7dvoj ``` Known Issues[¶](#known-issues) --- This is a list of known bugs in Spack. It provides ways of getting around these problems if you encounter them. ### Variants are not properly forwarded to dependencies[¶](#variants-are-not-properly-forwarded-to-dependencies) **Status:** Expected to be fixed in the next release Sometimes, a variant of a package can also affect how its dependencies are built. For example, in order to build MPI support for a package, it may require that its dependencies are also built with MPI support. In the `package.py`, this looks like: ``` depends_on('hdf5~mpi', when='~mpi') depends_on('hdf5+mpi', when='+mpi') ``` Spack handles this situation properly for *immediate* dependencies, and builds `hdf5` with the same variant you used for the package that depends on it. However, for *indirect* dependencies (dependencies of dependencies), Spack does not backtrack up the DAG far enough to handle this. Users commonly run into this situation when trying to build R with X11 support: ``` $ spack install r+X ... 
==> Error: Invalid spec: 'cairo@1.14.8%gcc@6.2.1+X arch=linux-fedora25-x86_64 ^bzip2@1.0.6%gcc@6.2.1+shared arch=linux-fedora25-x86_64 ^font-util@1.3.1%gcc@6.2.1 arch=linux-fedora25-x86_64 ^fontconfig@2.12.1%gcc@6.2.1 arch=linux-fedora25-x86_64 ^freetype@2.7.1%gcc@6.2.1 arch=linux-fedora25-x86_64 ^gettext@0.19.8.1%gcc@6.2.1+bzip2+curses+git~libunistring+libxml2+tar+xz arch=linux-fedora25-x86_64 ^glib@2.53.1%gcc@6.2.1~libmount arch=linux-fedora25-x86_64 ^inputproto@2.3.2%gcc@6.2.1 arch=linux-fedora25-x86_64 ^kbproto@1.0.7%gcc@6.2.1 arch=linux-fedora25-x86_64 ^libffi@3.2.1%gcc@6.2.1 arch=linux-fedora25-x86_64 ^libpng@1.6.29%gcc@6.2.1 arch=linux-fedora25-x86_64 ^libpthread-stubs@0.4%gcc@6.2.1 arch=linux-fedora25-x86_64 ^libx11@1.6.5%gcc@6.2.1 arch=linux-fedora25-x86_64 ^libxau@1.0.8%gcc@6.2.1 arch=linux-fedora25-x86_64 ^libxcb@1.12%gcc@6.2.1 arch=linux-fedora25-x86_64 ^libxdmcp@1.1.2%gcc@6.2.1 arch=linux-fedora25-x86_64 ^libxext@1.3.3%gcc@6.2.1 arch=linux-fedora25-x86_64 ^libxml2@2.9.4%gcc@6.2.1~python arch=linux-fedora25-x86_64 ^libxrender@0.9.10%gcc@6.2.1 arch=linux-fedora25-x86_64 ^ncurses@6.0%gcc@6.2.1~symlinks arch=linux-fedora25-x86_64 ^openssl@1.0.2k%gcc@6.2.1 arch=linux-fedora25-x86_64 ^pcre@8.40%gcc@6.2.1+utf arch=linux-fedora25-x86_64 ^pixman@0.34.0%gcc@6.2.1 arch=linux-fedora25-x86_64 ^pkg-config@0.29.2%gcc@6.2.1+internal_glib arch=linux-fedora25-x86_64 ^python@2.7.13%gcc@6.2.1+shared~tk~ucs4 arch=linux-fedora25-x86_64 ^readline@7.0%gcc@6.2.1 arch=linux-fedora25-x86_64 ^renderproto@0.11.1%gcc@6.2.1 arch=linux-fedora25-x86_64 ^sqlite@3.18.0%gcc@6.2.1 arch=linux-fedora25-x86_64 ^tar^util-macros@1.19.1%gcc@6.2.1 arch=linux-fedora25-x86_64 ^xcb-proto@1.12%gcc@6.2.1 arch=linux-fedora25-x86_64 ^xextproto@7.3.0%gcc@6.2.1 arch=linux-fedora25-x86_64 ^xproto@7.0.31%gcc@6.2.1 arch=linux-fedora25-x86_64 ^xtrans@1.3.5%gcc@6.2.1 arch=linux-fedora25-x86_64 ^xz@5.2.3%gcc@6.2.1 arch=linux-fedora25-x86_64 ^zlib@1.2.11%gcc@6.2.1+pic+shared arch=linux-fedora25-x86_64'. 
Package cairo requires variant ~X, but spec asked for +X ``` A workaround is to explicitly activate the variants of dependencies as well: ``` $ spack install r+X ^cairo+X ^pango+X ``` See <https://github.com/spack/spack/issues/267> and <https://github.com/spack/spack/issues/2546> for further details. ### `spack setup` doesn’t work[¶](#spack-setup-doesn-t-work) **Status:** Work in progress Spack provides a `setup` command that is useful for the development of software outside of Spack. Unfortunately, this command no longer works. See <https://github.com/spack/spack/issues/2597> and <https://github.com/spack/spack/issues/2662> for details. This is expected to be fixed by <https://github.com/spack/spack/pull/2664>. Configuration Files[¶](#configuration-files) --- Spack has many configuration files. Here is a quick list of them, in case you want to skip directly to specific docs: * [compilers.yaml](index.html#compiler-config) * [config.yaml](index.html#config-yaml) * [mirrors.yaml](index.html#mirrors) * [modules.yaml](index.html#modules) * [packages.yaml](index.html#build-settings) * [repos.yaml](index.html#repositories) ### YAML Format[¶](#yaml-format) Spack configuration files are written in YAML. We chose YAML because it’s human readable, but also versatile in that it supports dictionaries, lists, and nested sections. For more details on the format, see [yaml.org](http://yaml.org) and [libyaml](http://pyyaml.org/wiki/LibYAML). Here is an example `config.yaml` file: ``` config: install_tree: $spack/opt/spack module_roots: lmod: $spack/share/spack/lmod build_stage: - $tempdir - /nfs/tmp2/$user ``` Each Spack configuration file is nested under a top-level section corresponding to its name. So, `config.yaml` starts with `config:`, `mirrors.yaml` starts with `mirrors:`, etc. ### Configuration Scopes[¶](#configuration-scopes) Spack pulls configuration data from files in several directories. There are six configuration scopes. From lowest to highest: 1. 
**defaults**: Stored in `$(prefix)/etc/spack/defaults/`. These are the “factory” settings. Users should generally not modify the settings here, but should override them in other configuration scopes. The defaults here will change from version to version of Spack. 2. **system**: Stored in `/etc/spack/`. These are settings for this machine, or for all machines on which this file system is mounted. This scope can be used for settings idiosyncratic to a particular machine, such as the locations of compilers or external packages. These settings are presumably controlled by someone with root access on the machine. They override the defaults scope. 3. **site**: Stored in `$(prefix)/etc/spack/`. Settings here affect only *this instance* of Spack, and they override the defaults and system scopes. The site scope can be used for per-project settings (one Spack instance per project) or for site-wide settings on a multi-user machine (e.g., for a common Spack instance). 4. **user**: Stored in the home directory: `~/.spack/`. These settings affect all instances of Spack and take higher precedence than site, system, or defaults scopes. 5. **custom**: Stored in a custom directory specified by `--config-scope`. If multiple scopes are listed on the command line, they are ordered from lowest to highest precedence. 6. **command line**: Build settings specified on the command line take precedence over all other scopes. Each configuration directory may contain several configuration files, such as `config.yaml`, `compilers.yaml`, or `mirrors.yaml`. When configurations conflict, settings from higher-precedence scopes override lower-precedence settings. Commands that modify scopes (e.g., `spack compilers`, `spack repo`, etc.) take a `--scope=<name>` parameter that you can use to control which scope is modified. By default, they modify the highest-precedence scope. 
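For scalar settings, the precedence rule amounts to walking the scopes from highest to lowest and returning the first value found. A minimal sketch, with invented scope contents:

```python
# Scopes ordered from lowest to highest precedence, as in the list above.
# The contents are invented for illustration.
scopes = [
    ('defaults', {'install_tree': '$spack/opt/spack', 'build_jobs': 8}),
    ('user',     {'install_tree': '/some/other/directory'}),
]

def get_setting(key, scopes):
    # Walk from highest precedence down; the first scope that
    # defines the key wins.
    for name, data in reversed(scopes):
        if key in data:
            return data[key]
    raise KeyError(key)

print(get_setting('install_tree', scopes))  # /some/other/directory
print(get_setting('build_jobs', scopes))    # 8
```

The `user` scope overrides `install_tree`, while `build_jobs` falls through to the `defaults` scope because no higher scope sets it.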
#### Custom scopes[¶](#custom-scopes) In addition to the `defaults`, `system`, `site`, and `user` scopes, you may add configuration scopes directly on the command line with the `--config-scope` argument, or `-C` for short. For example, the following adds two configuration scopes, named `scopea` and `scopeb`, to a `spack spec` command: ``` $ spack -C ~/myscopes/scopea -C ~/myscopes/scopeb spec ncurses ``` Custom scopes come *after* the `spack` command and *before* the subcommand, and they specify a single path to a directory full of configuration files. You can add the same configuration files to that directory that you can add to any other scope (`config.yaml`, `packages.yaml`, etc.). If multiple scopes are provided: 1. Each must be preceded with the `--config-scope` or `-C` flag. 2. They must be ordered from lowest to highest precedence. ##### Example: scopes for release and development[¶](#example-scopes-for-release-and-development) Suppose that you need to support simultaneous building of release and development versions of `mypackage`, where `mypackage` -> `A` -> `B`. You could create the following files: ~/myscopes/release/packages.yaml[¶](#id4) ``` packages: mypackage: version: [1.7] A: version: [2.3] B: version: [0.8] ``` ~/myscopes/develop/packages.yaml[¶](#id5) ``` packages: mypackage: version: [develop] A: version: [develop] B: version: [develop] ``` You can switch between `release` and `develop` configurations using configuration arguments. You would type `spack -C ~/myscopes/release` when you want to build the designated release versions of `mypackage`, `A`, and `B`, and you would type `spack -C ~/myscopes/develop` when you want to build all of these packages at the `develop` version. ##### Example: swapping MPI providers[¶](#example-swapping-mpi-providers) Suppose that you need to build two software packages, `packagea` and `packageb`. `packagea` is Python 2-based and `packageb` is Python 3-based. 
`packagea` only builds with OpenMPI and `packageb` only builds with MPICH. You can create different configuration scopes for use with `packagea` and `packageb`: ~/myscopes/packagea/packages.yaml[¶](#id6) ``` packages: python: version: [2.7.11] all: providers: mpi: [openmpi] ``` ~/myscopes/packageb/packages.yaml[¶](#id7) ``` packages: python: version: [3.5.2] all: providers: mpi: [mpich] ``` ### Platform-specific Scopes[¶](#platform-specific-scopes) For each scope above, there can also be platform-specific settings. For example, on most platforms, GCC is the preferred compiler. However, on macOS (darwin), Clang often works for more packages, and is set as the default compiler. This configuration is set in `$(prefix)/etc/spack/defaults/darwin/packages.yaml`. It will take precedence over settings in the `defaults` scope, but can still be overridden by settings in `system`, `system/darwin`, `site`, `site/darwin`, `user`, `user/darwin`, `custom`, or `custom/darwin`. So, the full scope precedence is: 1. `defaults` 2. `defaults/<platform>` 3. `system` 4. `system/<platform>` 5. `site` 6. `site/<platform>` 7. `user` 8. `user/<platform>` 9. `custom` 10. `custom/<platform>` You can get the name to use for `<platform>` by running `spack arch --platform`. The system config scope has a `<platform>` section for sites at which `/etc` is mounted on multiple heterogeneous machines. ### Scope Precedence[¶](#scope-precedence) When Spack queries for configuration parameters, it searches in higher-precedence scopes first. So, settings in a higher-precedence file can override those with the same key in a lower-precedence one. For list-valued settings, Spack *prepends* higher-precedence settings to lower-precedence settings. Completely ignoring lower-precedence configuration options is supported with the `::` notation for keys (see [Overriding entire sections](#config-overrides) below). #### Simple keys[¶](#simple-keys) Let’s look at an example of overriding a single key in a Spack file. 
If your configurations look like this: $(prefix)/etc/spack/defaults/config.yaml[¶](#id8) ``` config: install_tree: $spack/opt/spack module_roots: lmod: $spack/share/spack/lmod build_stage: - $tempdir - /nfs/tmp2/$user ``` ~/.spack/config.yaml[¶](#id9) ``` config: install_tree: /some/other/directory ``` Spack will only override `install_tree` in the `config` section, and will keep the default values for the other settings. You can see the final, combined configuration with the `spack config get <configtype>` command: ``` $ spack config get config config: install_tree: /some/other/directory module_roots: lmod: $spack/share/spack/lmod build_stage: - $tempdir - /nfs/tmp2/$user ``` #### Overriding entire sections[¶](#overriding-entire-sections) Above, the user `config.yaml` only overrides specific settings in the default `config.yaml`. Sometimes, it is useful to *completely* override lower-precedence settings. To do this, you can use *two* colons at the end of a key in a configuration file. For example: ~/.spack/config.yaml[¶](#id10) ``` config:: install_tree: /some/other/directory ``` Spack will ignore all lower-precedence configuration under the `config::` section: ``` $ spack config get config config: install_tree: /some/other/directory ``` #### List-valued settings[¶](#list-valued-settings) Let’s revisit the `config.yaml` example one more time. The `build_stage` setting’s value is an ordered list of directories: $(prefix)/etc/spack/defaults/config.yaml[¶](#id11) ``` build_stage: - $tempdir - /nfs/tmp2/$user ``` Suppose the user configuration adds its *own* list of `build_stage` paths: ~/.spack/config.yaml[¶](#id12) ``` build_stage: - /lustre-scratch/$user - ~/mystage ``` Spack will first look at the paths in the defaults `config.yaml`, then the paths in the user’s `~/.spack/config.yaml`. The list in the higher-precedence scope is *prepended* to the defaults. 
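The prepend-and-override behavior for list-valued settings can be sketched in a few lines of plain Python. This is an illustrative model of the merge rule, not Spack's implementation:

```python
def merge_list_setting(key, scopes):
    """Toy sketch of Spack's merge rule for list-valued settings:
    higher-precedence lists are prepended to lower-precedence ones,
    and a ``key::`` entry discards everything from lower scopes."""
    merged = []
    for data in reversed(scopes):  # highest precedence first
        if key + '::' in data:
            merged.extend(data[key + '::'])
            break  # '::' ignores all lower-precedence scopes
        merged.extend(data.get(key, []))
    return merged

defaults = {'build_stage': ['$tempdir', '/nfs/tmp2/$user']}
user = {'build_stage': ['/lustre-scratch/$user', '~/mystage']}
print(merge_list_setting('build_stage', [defaults, user]))
# ['/lustre-scratch/$user', '~/mystage', '$tempdir', '/nfs/tmp2/$user']

user_override = {'build_stage::': ['/lustre-scratch/$user', '~/mystage']}
print(merge_list_setting('build_stage', [defaults, user_override]))
# ['/lustre-scratch/$user', '~/mystage']
```

The first call reproduces the prepend behavior described above; the second shows how `build_stage::` drops the default entries entirely.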
`spack config get config` shows the result: ``` $ spack config get config config: install_tree: /some/other/directory module_roots: lmod: $spack/share/spack/lmod build_stage: - /lustre-scratch/$user - ~/mystage - $tempdir - /nfs/tmp2/$user ``` As in [Overriding entire sections](#config-overrides), the higher-precedence scope can *completely* override the lower-precedence scope using `::`. So if the user config looked like this: ~/.spack/config.yaml[¶](#id13) ``` build_stage:: - /lustre-scratch/$user - ~/mystage ``` The merged configuration would look like this: ``` $ spack config get config config: install_tree: /some/other/directory module_roots: lmod: $spack/share/spack/lmod build_stage: - /lustre-scratch/$user - ~/mystage ``` ### Config File Variables[¶](#config-file-variables) Spack understands several variables which can be used in config file paths wherever they appear. There are three sets of these variables: Spack-specific variables, environment variables, and user path variables. Spack-specific variables and environment variables are both indicated by prefixing the variable name with `$`. User path variables are indicated at the start of the path with `~` or `~user`. #### Spack-specific variables[¶](#spack-specific-variables) Spack understands several special variables. These are: * `$spack`: path to the prefix of this Spack installation * `$tempdir`: default system temporary directory (as specified in Python’s [tempfile.tempdir](https://docs.python.org/2/library/tempfile.html#tempfile.tempdir) variable). * `$user`: name of the current user Note that, as with shell variables, you can write these as `$varname` or with braces to distinguish the variable from surrounding characters: `${varname}`. Their names are also case-insensitive, meaning that `$SPACK` works just as well as `$spack`. These special variables are substituted first, so any environment variables with the same name will not be used. 
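A toy model of this substitution order (Spack-specific variables first, matched case-insensitively, then environment variables, then tilde expansion) might look like the sketch below; it is illustrative only, and the variable table is invented:

```python
import os
import re

def substitute_path(path, spack_vars):
    """Toy sketch of the substitution order described above:
    Spack-specific variables win (case-insensitively), then
    environment variables, then ~ expansion."""
    def repl(match):
        name = match.group(1) or match.group(2)
        if name.lower() in spack_vars:       # Spack variables win
            return spack_vars[name.lower()]
        return os.environ.get(name, match.group(0))

    pattern = re.compile(r'\$\{([A-Za-z_]+)\}|\$([A-Za-z_]+)')
    return os.path.expanduser(pattern.sub(repl, path))

# Invented values standing in for the real $spack/$user/$tempdir.
spack_vars = {'spack': '/opt/spack', 'user': 'alice', 'tempdir': '/tmp'}
print(substitute_path('$spack/var/stage', spack_vars))  # /opt/spack/var/stage
print(substitute_path('${TEMPDIR}/x', spack_vars))      # /tmp/x
print(substitute_path('$SPACK/bin', spack_vars))        # /opt/spack/bin
```

The last call illustrates the case-insensitivity noted above: `$SPACK` resolves through the same table as `$spack`, so an environment variable named `SPACK` would never be consulted.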
#### Environment variables[¶](#environment-variables) After Spack-specific variables are evaluated, environment variables are expanded. These are formatted like Spack-specific variables, e.g., `${varname}`. You can use this to insert environment variables in your Spack configuration. #### User home directories[¶](#user-home-directories) Spack performs Unix-style tilde expansion on paths in configuration files. This means that tilde (`~`) will expand to the current user’s home directory, and `~user` will expand to a specified user’s home directory. The `~` must appear at the beginning of the path, or Spack will not expand it. ### Seeing Spack’s Configuration[¶](#seeing-spack-s-configuration) With so many scopes overriding each other, it can sometimes be difficult to understand what Spack’s final configuration looks like. Spack provides two useful ways to view the final “merged” version of any configuration file: `spack config get` and `spack config blame`. #### `spack config get`[¶](#spack-config-get) `spack config get` shows a fully merged configuration file, taking into account all scopes. 
For example, to see the fully merged `config.yaml`, you can type: ``` $ spack config get config config: debug: false checksum: true verify_ssl: true dirty: false build_jobs: 8 install_tree: $spack/opt/spack template_dirs: - $spack/templates directory_layout: ${ARCHITECTURE}/${COMPILERNAME}-${COMPILERVER}/${PACKAGE}-${VERSION}-${HASH} module_roots: tcl: $spack/share/spack/modules lmod: $spack/share/spack/lmod dotkit: $spack/share/spack/dotkit build_stage: - $tempdir - /nfs/tmp2/$user - $spack/var/spack/stage source_cache: $spack/var/spack/cache misc_cache: ~/.spack/cache locks: true ``` Likewise, this will show the fully merged `packages.yaml`: ``` $ spack config get packages ``` You can use this in conjunction with the `-C` / `--config-scope` argument to see how your scope will affect Spack’s configuration: ``` $ spack -C /path/to/my/scope config get packages ``` #### `spack config blame`[¶](#spack-config-blame) `spack config blame` functions much like `spack config get`, but it shows exactly which configuration file each preference came from. If you do not know why Spack is behaving a certain way, this can help you track down the problem: ``` $ spack --insecure -C ./my-scope -C ./my-scope-2 config blame config ==> Warning: You asked for --insecure. Will NOT check SSL certificates. 
--- config: _builtin debug: False /home/myuser/spack/etc/spack/defaults/config.yaml:72 checksum: True command_line verify_ssl: False ./my-scope-2/config.yaml:2 dirty: False _builtin build_jobs: 8 ./my-scope/config.yaml:2 install_tree: /path/to/some/tree /home/myuser/spack/etc/spack/defaults/config.yaml:23 template_dirs: /home/myuser/spack/etc/spack/defaults/config.yaml:24 - $spack/templates /home/myuser/spack/etc/spack/defaults/config.yaml:28 directory_layout: ${ARCHITECTURE}/${COMPILERNAME}-${COMPILERVER}/${PACKAGE}-${VERSION}-${HASH} /home/myuser/spack/etc/spack/defaults/config.yaml:32 module_roots: /home/myuser/spack/etc/spack/defaults/config.yaml:33 tcl: $spack/share/spack/modules /home/myuser/spack/etc/spack/defaults/config.yaml:34 lmod: $spack/share/spack/lmod /home/myuser/spack/etc/spack/defaults/config.yaml:35 dotkit: $spack/share/spack/dotkit /home/myuser/spack/etc/spack/defaults/config.yaml:49 build_stage: /home/myuser/spack/etc/spack/defaults/config.yaml:50 - $tempdir /home/myuser/spack/etc/spack/defaults/config.yaml:51 - /nfs/tmp2/$user /home/myuser/spack/etc/spack/defaults/config.yaml:52 - $spack/var/spack/stage /home/myuser/spack/etc/spack/defaults/config.yaml:57 source_cache: $spack/var/spack/cache /home/myuser/spack/etc/spack/defaults/config.yaml:62 misc_cache: ~/.spack/cache /home/myuser/spack/etc/spack/defaults/config.yaml:86 locks: True ``` You can see above that the `build_jobs` and `debug` settings are built in and are not overridden by a configuration file. The `verify_ssl` setting comes from the `--insecure` option on the command line. `dirty` and `install_tree` come from the custom scopes `./my-scope` and `./my-scope-2`, and all other configuration options come from the default configuration files that ship with Spack. Basic Settings[¶](#basic-settings) --- Spack’s basic configuration options are set in `config.yaml`.
You can see the default settings by looking at `etc/spack/defaults/config.yaml`: ``` # --- # This is the default spack configuration file. # # Settings here are versioned with Spack and are intended to provide # sensible defaults out of the box. Spack maintainers should edit this # file to keep it current. # # Users can override these settings by editing the following files. # # Per-spack-instance settings (overrides defaults): # $SPACK_ROOT/etc/spack/config.yaml # # Per-user settings (overrides default and site settings): # ~/.spack/config.yaml # --- config: # This is the path to the root of the Spack install tree. # You can use $spack here to refer to the root of the spack instance. install_tree: $spack/opt/spack # Locations where templates should be found template_dirs: - $spack/share/spack/templates # Default directory layout install_path_scheme: "${ARCHITECTURE}/${COMPILERNAME}-${COMPILERVER}/${PACKAGE}-${VERSION}-${HASH}" # Locations where different types of modules should be installed. module_roots: tcl: $spack/share/spack/modules lmod: $spack/share/spack/lmod dotkit: $spack/share/spack/dotkit # Temporary locations Spack can try to use for builds. # # Spack will use the first one it finds that exists and is writable. # You can use $tempdir to refer to the system default temp directory # (as returned by tempfile.gettempdir()). # # A value of $spack/var/spack/stage indicates that Spack should run # builds directly inside its install directory without staging them in # temporary space. # # The build stage can be purged with `spack clean --stage`. build_stage: - $tempdir - /nfs/tmp2/$user - $spack/var/spack/stage # Cache directory for already downloaded source tarballs and archived # repositories. This can be purged with `spack clean --downloads`. source_cache: $spack/var/spack/cache # Cache directory for miscellaneous files, like the package index. 
# This can be purged with `spack clean --misc-cache` misc_cache: ~/.spack/cache # If this is false, tools like curl that use SSL will not verify # certificates. (e.g., curl will use the -k option) verify_ssl: true # If set to true, Spack will always check checksums after downloading # archives. If false, Spack skips the checksum step. checksum: true # If set to true, `spack install` and friends will NOT clean # potentially harmful variables from the build environment. Use wisely. dirty: false # The language the build environment will use. This will produce English # compiler messages by default, so the log parser can highlight errors. # If set to C, it will use English (see man locale). # If set to the empty string (''), it will use the language from the # user's environment. build_language: C # When set to true, concurrent instances of Spack will use locks to # avoid modifying the install tree, database file, etc. If false, Spack # will disable all locking, but you must NOT run concurrent instances # of Spack. For filesystems that don't support locking, you should set # this to false and run one Spack at a time, but otherwise we recommend # enabling locks. locks: true # The default number of jobs to use when running `make` in parallel. # If set to 4, for example, `spack install` will run `make -j4`. # If not set, all available cores are used by default. # build_jobs: 4 # If set to true, Spack will use ccache to cache C compiles. ccache: false # How long to wait to lock the Spack installation database. This lock is used # when Spack needs to manage its own package metadata and all operations are # expected to complete within the default time limit. The timeout should # therefore generally be left untouched. db_lock_timeout: 120 # How long to wait when attempting to modify a package (e.g. to install it).
# This value should typically be 'null' (never time out) unless the Spack # instance only ever has a single user at a time, and only if the user # anticipates that a significant delay indicates that the lock attempt will # never succeed. package_lock_timeout: null ``` These settings can be overridden in `etc/spack/config.yaml` or `~/.spack/config.yaml`. See [Configuration Scopes](index.html#configuration-scopes) for details. ### `install_tree`[¶](#install-tree) The location where Spack will install packages and their dependencies. Default is `$spack/opt/spack`. ### `install_hash_length` and `install_path_scheme`[¶](#install-hash-length-and-install-path-scheme) The default Spack installation path can be very long and can create problems for scripts with hardcoded shebangs. There are two parameters to help with that. Firstly, the `install_hash_length` parameter can set the length of the hash in the installation path from 1 to 32. The default path uses the full 32 characters. Secondly, it is also possible to modify the entire installation scheme. By default Spack uses `${ARCHITECTURE}/${COMPILERNAME}-${COMPILERVER}/${PACKAGE}-${VERSION}-${HASH}`, where the tokens available for use in this directive are the same as those understood by the `Spec.format` method. Using this parameter, it is possible to use a different package layout or reduce the depth of the installation paths. For example: ``` config: install_path_scheme: '${PACKAGE}/${VERSION}/${HASH:7}' ``` would install packages into sub-directories using only the package name, version, and a hash length of 7 characters. When using either parameter to set the hash length, it only affects the representation of the hash in the installation directory. Be aware that the smaller the hash length, the more likely naming conflicts will occur. These parameters are independent of those used to configure module names.
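The `${TOKEN}` and `${TOKEN:n}` substitution described above can be sketched in a few lines of Python. The `format_path` helper and the spec values here are hypothetical illustrations of the behavior, not Spack's actual implementation:

```python
import hashlib
import re

def format_path(scheme, spec):
    """Expand ${TOKEN} and ${TOKEN:n} placeholders from a spec dict.

    ${HASH:7} keeps only the first 7 characters of the hash;
    a token without ':n' is substituted in full.
    """
    def repl(match):
        name, _, length = match.group(1).partition(":")
        value = str(spec[name.lower()])
        return value[: int(length)] if length else value

    return re.sub(r"\$\{([A-Z]+(?::\d+)?)\}", repl, scheme)

# Hypothetical spec values; Spack derives these from the concretized spec.
spec = {
    "package": "zlib",
    "version": "1.2.11",
    "hash": hashlib.sha1(b"zlib@1.2.11").hexdigest(),
}
print(format_path("${PACKAGE}/${VERSION}/${HASH:7}", spec))
```

Truncating the hash to 7 characters shortens paths considerably, but as noted above, the shorter the prefix, the greater the chance that two installations collide on the same directory name.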
Warning Modifying the installation hash length or path scheme after packages have been installed will prevent Spack from being able to find the old installation directories. ### `module_roots`[¶](#module-roots) Controls where Spack installs generated module files. You can customize the location for each type of module. e.g.: ``` module_roots: tcl: $spack/share/spack/modules lmod: $spack/share/spack/lmod dotkit: $spack/share/spack/dotkit ``` See [Modules](index.html#modules) for details. ### `build_stage`[¶](#build-stage) Spack is designed to run out of a user home directory, and on many systems the home directory is a (slow) network file system. On most systems, building in a temporary file system results in faster builds than building in the home directory. Usually, there is also more space available in the temporary location than in the home directory. So, Spack tries to create build stages in temporary space. By default, Spack’s `build_stage` is configured like this: ``` build_stage: - $tempdir - /nfs/tmp2/$user - $spack/var/spack/stage ``` This is an ordered list of paths that Spack should search when trying to find a temporary directory for the build stage. The list is searched in order, and Spack will use the first directory to which it has write access. See [Config File Variables](index.html#config-file-variables) for more on `$tempdir` and `$spack`. When Spack builds a package, it creates a temporary directory within the `build_stage`, and it creates a symbolic link to that directory in `$spack/var/spack/stage`. This is used to track the stage. After a package is successfully installed, Spack deletes the temporary directory it used to build. Unsuccessful builds are not deleted, but you can manually purge them with [spack clean –stage](index.html#cmd-spack-clean). Note The last item in the list is `$spack/var/spack/stage`. 
If this is the only writable directory in the `build_stage` list, Spack will build *directly* in `$spack/var/spack/stage` and will not link to temporary space. ### `source_cache`[¶](#source-cache) Location to cache downloaded tarballs and repositories. By default these are stored in `$spack/var/spack/cache`. These are stored indefinitely by default. Can be purged with [spack clean –downloads](index.html#cmd-spack-clean). ### `misc_cache`[¶](#misc-cache) Temporary directory to store long-lived cache files, such as indices of packages available in repositories. Defaults to `~/.spack/cache`. Can be purged with [spack clean –misc-cache](index.html#cmd-spack-clean). ### `verify_ssl`[¶](#verify-ssl) When set to `true` (default) Spack will verify certificates of remote hosts when making `ssl` connections. Set to `false` to disable, and tools like `curl` will use their `--insecure` options. Disabling this can expose you to attacks. Use at your own risk. ### `checksum`[¶](#checksum) When set to `true`, Spack verifies downloaded source code using a checksum, and will refuse to build packages that it cannot verify. Set to `false` to disable these checks. Disabling this can expose you to attacks. Use at your own risk. ### `locks`[¶](#locks) When set to `true`, concurrent instances of Spack will use locks to avoid modifying the install tree, database file, etc. If false, Spack will disable all locking, but you must **not** run concurrent instances of Spack. For file systems that don’t support locking, you should set this to `false` and run one Spack at a time, but otherwise we recommend enabling locks. ### `dirty`[¶](#dirty) By default, Spack unsets variables in your environment that can change the way packages build. This includes `LD_LIBRARY_PATH`, `CPATH`, `LIBRARY_PATH`, `DYLD_LIBRARY_PATH`, and others. By default, builds are `clean`, but on some machines, compilers and other tools may need custom `LD_LIBRARY_PATH` settings to run. 
You can set `dirty` to `true` to skip the cleaning step and make all builds “dirty” by default. Be aware that this will reduce the reproducibility of builds. ### `build_jobs`[¶](#build-jobs) Unless overridden in a package or on the command line, Spack builds all packages in parallel. For a build system that uses Makefiles, this means running `make -j<build_jobs>`, where `build_jobs` is the number of threads to use. The default parallelism is equal to the number of cores on your machine. If you work on a shared login node or have a strict ulimit, it may be necessary to set the default to a lower value. By setting `build_jobs` to 4, for example, commands like `spack install` will run `make -j4` instead of hogging every core. To build all software in serial, set `build_jobs` to 1. ### `ccache`[¶](#ccache) When set to `true` Spack will use ccache to cache compiles. This is useful specifically in two cases: (1) when using `spack setup`, and (2) when building the same package with many different variants. The default is `false`. When enabled, Spack will look inside your `PATH` for a `ccache` executable and stop if it is not found. Some systems come with `ccache`, but it can also be installed using `spack install ccache`. `ccache` comes with reasonable defaults for cache size and location. (See the *Configuration settings* section of `man ccache` to learn more about the default settings and how to change them). Please note that we currently disable ccache’s `hash_dir` feature to avoid an issue with the stage directory (see <https://github.com/LLNL/spack/pull/3761#issuecomment-294352232>). Build Customization[¶](#build-customization) --- Spack allows you to customize how your software is built through the `packages.yaml` file. Using it, you can make Spack prefer particular implementations of virtual dependencies (e.g., MPI or BLAS/LAPACK), or you can make it prefer to build with particular compilers. 
You can also tell Spack to use *external* software installations already present on your system. At a high level, the `packages.yaml` file is structured like this: ``` packages: package1: # settings for package1 package2: # settings for package2 # ... all: # settings that apply to all packages. ``` So you can either set build preferences specifically for *one* package, or you can specify that certain settings should apply to *all* packages. The types of settings you can customize are described in detail below. Spack’s build defaults are in the default `etc/spack/defaults/packages.yaml` file. You can override them in `~/.spack/packages.yaml` or `etc/spack/packages.yaml`. For more details on how this works, see [Configuration Scopes](index.html#configuration-scopes). ### External Packages[¶](#external-packages) Spack can be configured to use externally-installed packages rather than building its own packages. This may be desirable if machines ship with system packages, such as a customized MPI that should be used instead of Spack building its own MPI. External packages are configured through the `packages.yaml` file found in a Spack installation’s `etc/spack/` or a user’s `~/.spack/` directory. Here’s an example of an external configuration: ``` packages: openmpi: paths: openmpi@1.4.3%gcc@4.4.7 arch=linux-x86_64-debian7: /opt/openmpi-1.4.3 openmpi@1.4.3%gcc@4.4.7 arch=linux-x86_64-debian7+debug: /opt/openmpi-1.4.3-debug openmpi@1.6.5%intel@10.1 arch=linux-x86_64-debian7: /opt/openmpi-1.6.5-intel ``` This example lists three installations of OpenMPI, one built with GCC, one built with GCC and debug information, and another built with Intel. If Spack is asked to build a package that uses one of these MPIs as a dependency, it will use the pre-installed OpenMPI in the given directory. `packages.yaml` can also be used to specify modules to load instead of the installation prefixes. Each `packages.yaml` begins with a `packages:` token, followed by a list of package names. 
To specify externals, add a `paths` or `modules` token under the package name, which lists externals in a `spec: /path` or `spec: module-name` format. Each spec should be as well-defined as reasonably possible. If a package lacks a spec component, such as missing a compiler or package version, then Spack will guess the missing component based on its most-favored packages, and it may guess incorrectly. Each package version and compiler listed in an external should have entries in Spack’s packages and compiler configuration, even though the package and compiler may not ever be built. The packages configuration can tell Spack to use an external location for certain package versions, but it does not restrict Spack to using external packages. In the above example, since newer versions of OpenMPI are available, Spack will choose to start building and linking with the latest version rather than continue using the pre-installed OpenMPI versions. To prevent this, the `packages.yaml` configuration also allows packages to be flagged as non-buildable. The previous example could be modified to be: ``` packages: openmpi: paths: openmpi@1.4.3%gcc@4.4.7 arch=linux-x86_64-debian7: /opt/openmpi-1.4.3 openmpi@1.4.3%gcc@4.4.7 arch=linux-x86_64-debian7+debug: /opt/openmpi-1.4.3-debug openmpi@1.6.5%intel@10.1 arch=linux-x86_64-debian7: /opt/openmpi-1.6.5-intel buildable: False ``` The addition of the `buildable` flag tells Spack that it should never build its own version of OpenMPI, and it will instead always rely on a pre-built OpenMPI. Similar to `paths`, `buildable` is specified as a property under a package name. If an external module is specified as not buildable, then Spack will load the external module into the build environment so that it can be used for linking. The `buildable` flag does not need to be paired with external packages. It can also be used alone to forbid packages that may be buggy or otherwise undesirable.
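As a sketch of that standalone use, a `packages.yaml` entry can carry `buildable: False` with no `paths` or `modules` at all (the package chosen here is a hypothetical example):

```yaml
packages:
  # Forbid Spack from ever building this package. With no externals
  # listed, specs that require it cannot be satisfied by a Spack build.
  mpich:
    buildable: False
```

With this in place, Spack should refuse to build the package rather than silently choosing it as a dependency.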
### Concretization Preferences[¶](#concretization-preferences) Spack can be configured to prefer certain compilers, package versions, dependencies, and variants during concretization. The preferred configuration can be controlled via the `~/.spack/packages.yaml` file for user configurations, or the `etc/spack/packages.yaml` site configuration. Here’s an example `packages.yaml` file that sets preferred packages: ``` packages: opencv: compiler: [gcc@4.9] variants: +debug gperftools: version: [2.2, 2.4, 2.3] all: compiler: [gcc@4.4.7, gcc@4.6:, intel, clang, pgi] providers: mpi: [mvapich2, mpich, openmpi] ``` At a high level, this example is specifying how packages should be concretized. The opencv package should prefer using GCC 4.9 and be built with debug options. The gperftools package should prefer version 2.2 over 2.4. Every package on the system should prefer mvapich2 for its MPI and GCC 4.4.7 (except for opencv, which overrides this by preferring GCC 4.9). These options are used to fill in implicit defaults. Any of them can be overwritten on the command line if explicitly requested. Each `packages.yaml` file begins with the string `packages:` and package names are specified on the next level. The special string `all` applies settings to each package. Underneath each package name is one or more components: `compiler`, `variants`, `version`, or `providers`. Each component has an ordered list of spec `constraints`, with earlier entries in the list being preferred over later entries. Sometimes a package installation may have constraints that forbid the first concretization rule, in which case Spack will use the first legal concretization rule. Going back to the example, if a user requests gperftools 2.3 or later, then Spack will install version 2.4 as the 2.4 version of gperftools is preferred over 2.3. An explicit concretization rule in the preferred section will always take preference over unlisted concretizations. 
In the above example, xlc isn’t listed in the compiler list. Every listed compiler from gcc to pgi will thus be preferred over the xlc compiler. The syntax for the `providers` section differs slightly from other concretization rules. A provider lists a value that packages may `depends_on` (e.g., MPI) and a list of rules for fulfilling that dependency. ### Package Permissions[¶](#package-permissions) Spack can be configured to assign permissions to the files installed by a package. In the `packages.yaml` file under `permissions`, the attributes `read`, `write`, and `group` control the package permissions. These attributes can be set per-package, or for all packages under `all`. If permissions are set under `all` and for a specific package, the package-specific settings take precedence. The `read` and `write` attributes take one of `user`, `group`, and `world`. ``` packages: all: permissions: write: group group: spack my_app: permissions: read: group group: my_team ``` The permissions settings describe the broadest level of access to installations of the specified packages. The execute permissions of the file are set to the same level as read permissions for those files that are executable. The default setting for `read` is `world`, and for `write` is `user`. In the example above, installations of `my_app` will be installed with user and group permissions but no world permissions, and owned by the group `my_team`. All other packages will be installed with user and group write privileges, and world read privileges. Those packages will be owned by the group `spack`. The `group` attribute assigns a Unix-style group to a package. All files installed by the package will be owned by the assigned group, and the sticky group bit will be set on the install prefix and all directories inside the install prefix. This will ensure that even manually placed files within the install prefix are owned by the assigned group.
If no group is assigned, Spack will allow the OS default behavior to go as expected. Mirrors[¶](#mirrors) --- Some sites may not have access to the internet for fetching packages. These sites will need a local repository of tarballs from which they can get their files. Spack has support for this with *mirrors*. A mirror is a URL that points to a directory, either on the local filesystem or on some server, containing tarballs for all of Spack’s packages. Here’s an example of a mirror’s directory structure: ``` mirror/ cmake/ cmake-2.8.10.2.tar.gz dyninst/ dyninst-8.1.1.tgz dyninst-8.1.2.tgz libdwarf/ libdwarf-20130126.tar.gz libdwarf-20130207.tar.gz libdwarf-20130729.tar.gz libelf/ libelf-0.8.12.tar.gz libelf-0.8.13.tar.gz libunwind/ libunwind-1.1.tar.gz mpich/ mpich-3.0.4.tar.gz mvapich2/ mvapich2-1.9.tgz ``` The structure is very simple. There is a top-level directory. The second level directories are named after packages, and the third level contains tarballs for each package, named after each package. Note Archives are **not** named exactly the way they were in the package’s fetch URL. They have the form `<name>-<version>.<extension>`, where `<name>` is Spack’s name for the package, `<version>` is the version of the tarball, and `<extension>` is whatever format the package’s fetch URL contains. In order to make mirror creation reasonably fast, we copy the tarball in its original format to the mirror directory, but we do not standardize on a particular compression algorithm, because this would potentially require expanding and re-compressing each archive. ### `spack mirror`[¶](#spack-mirror) Mirrors are managed with the `spack mirror` command. The help for `spack mirror` looks like this: ``` $ spack help mirror usage: spack mirror [-hn] SUBCOMMAND ... manage mirrors (source and binary) positional arguments: SUBCOMMAND create Create a directory to be used as a spack mirror, and fill it with package archives. add Add a mirror to Spack. 
remove (rm) Remove a mirror by name. list Print out available mirrors to the console. optional arguments: -h, --help show this help message and exit -n, --no-checksum do not use checksums to verify downloaded files (unsafe) ``` The `create` command actually builds a mirror by fetching all of its packages from the internet and checksumming them. The other three commands are for managing mirror configuration. They control the URL(s) from which Spack downloads its packages. ### `spack mirror create`[¶](#spack-mirror-create) You can create a mirror using the `spack mirror create` command, assuming you’re on a machine where you can access the internet. The command will iterate through all of Spack’s packages and download the safe ones into a directory structure like the one above. Here is what it looks like: ``` $ spack mirror create libelf libdwarf ==> Created new mirror in spack-mirror-2014-06-24 ==> Trying to fetch from http://www.mr511.de/software/libelf-0.8.13.tar.gz ########################################################## 81.6% ==> Checksum passed for libelf@0.8.13 ==> Added libelf@0.8.13 ==> Trying to fetch from http://www.mr511.de/software/libelf-0.8.12.tar.gz ###################################################################### 98.6% ==> Checksum passed for libelf@0.8.12 ==> Added libelf@0.8.12 ==> Trying to fetch from http://www.prevanders.net/libdwarf-20130207.tar.gz ###################################################################### 97.3% ==> Checksum passed for libdwarf@20130207 ==> Added libdwarf@20130207 ==> Trying to fetch from http://www.prevanders.net/libdwarf-20130126.tar.gz ######################################################## 78.9% ==> Checksum passed for libdwarf@20130126 ==> Added libdwarf@20130126 ==> Trying to fetch from http://www.prevanders.net/libdwarf-20130729.tar.gz ############################################################# 84.7% ==> Added libdwarf@20130729 ==> Added spack-mirror-2014-06-24/libdwarf/libdwarf-20130729.tar.gz to
mirror ==> Added python@2.7.8. ==> Successfully updated mirror in spack-mirror-2015-02-24. Archive stats: 0 already present 5 added 0 failed to fetch. ``` Once this is done, you can tar up the `spack-mirror-2014-06-24` directory and copy it over to the machine you want it hosted on. #### Custom package sets[¶](#custom-package-sets) Normally, `spack mirror create` downloads all the archives it has checksums for. If you want to only create a mirror for a subset of packages, you can do that by supplying a list of package specs on the command line after `spack mirror create`. For example, this command: ``` $ spack mirror create libelf@0.8.12: boost@1.44: ``` will create a mirror for libelf versions greater than or equal to 0.8.12 and boost versions greater than or equal to 1.44. #### Mirror files[¶](#mirror-files) If you have a *very* large number of packages you want to mirror, you can supply a file with specs in it, one per line: ``` $ cat specs.txt libdwarf libelf@0.8.12: boost@1.44: boost@1.39.0 ... $ spack mirror create --file specs.txt ... ``` This is useful if there is a specific suite of software managed by your site. ### `spack mirror add`[¶](#spack-mirror-add) Once you have a mirror, you need to let Spack know about it. This is relatively simple. First, figure out the URL for the mirror. If it’s a directory, you can use a file URL like this one: ``` file://$HOME/spack-mirror-2014-06-24 ``` That points to the directory on the local filesystem. If it were on a web server, you could use a URL like this one: <https://example.com/some/web-hosted/directory/spack-mirror-2014-06-24> Spack will use the URL as the root for all of the packages it fetches. You can tell your Spack installation to use that mirror like this: ``` $ spack mirror add local_filesystem file://$HOME/spack-mirror-2014-06-24 ``` Each mirror has a name so that you can refer to it again later.
### `spack mirror list`[¶](#spack-mirror-list) To see all the mirrors Spack knows about, run `spack mirror list`: ``` $ spack mirror list local_filesystem file:///home/username/spack-mirror-2014-06-24 ``` ### `spack mirror remove`[¶](#spack-mirror-remove) To remove a mirror by name, run: ``` $ spack mirror remove local_filesystem $ spack mirror list ==> No mirrors configured. ``` ### Mirror precedence[¶](#mirror-precedence) Adding a mirror really adds a line in `~/.spack/mirrors.yaml`: ``` mirrors: local_filesystem: file:///home/username/spack-mirror-2014-06-24 remote_server: https://example.com/some/web-hosted/directory/spack-mirror-2014-06-24 ``` If you want to change the order in which mirrors are searched for packages, you can edit this file and reorder the sections. Spack will search the topmost mirror first and the bottom-most mirror last. ### Local Default Cache[¶](#local-default-cache) Spack caches resources that are downloaded as part of installs. The cache is a valid spack mirror: it uses the same directory structure and naming scheme as other Spack mirrors (so it can be copied anywhere and referenced with a URL like other mirrors). The mirror is maintained locally (within the Spack installation directory) at `var/spack/cache/`. It is always enabled (and is always searched first when attempting to retrieve files for an installation) but can be cleared with [clean](index.html#cmd-spack-clean); the cache directory can also be deleted manually without issue. Caching includes retrieved tarball archives and source control repositories, but only resources with an associated digest or commit ID (e.g. a revision number for SVN) will be cached. Modules[¶](#modules) --- The use of module systems to manage user environment in a controlled way is a common practice at HPC centers that is often embraced also by individual programmers on their development machines. 
To support this common practice Spack integrates with [Environment Modules](http://modules.sourceforge.net/) , [LMod](http://lmod.readthedocs.io/en/latest/) and [Dotkit](https://computing.llnl.gov/?set=jobs&page=dotkit) by providing post-install hooks that generate module files and commands to manipulate them. Note If your machine does not already have a module system installed, we advise you to use either Environment Modules or LMod. See [Environment Modules](index.html#installenvironmentmodules) for more details. ### Using module files via Spack[¶](#using-module-files-via-spack) If you have installed a supported module system either manually or through `spack bootstrap`, you should be able to run either `module avail` or `use -l spack` to see what module files have been installed. Here is sample output of those programs, showing lots of installed packages: ``` $ module avail --- ~/spack/share/spack/modules/linux-ubuntu14-x86_64 --- autoconf-2.69-gcc-4.8-qextxkq hwloc-1.11.6-gcc-6.3.0-akcisez m4-1.4.18-gcc-4.8-ev2znoc openblas-0.2.19-gcc-6.3.0-dhkmed6 py-setuptools-34.2.0-gcc-6.3.0-fadur4s automake-1.15-gcc-4.8-maqvukj isl-0.18-gcc-4.8-afi6taq m4-1.4.18-gcc-6.3.0-uppywnz openmpi-2.1.0-gcc-6.3.0-go2s4z5 py-six-1.10.0-gcc-6.3.0-p4dhkaw binutils-2.28-gcc-4.8-5s7c6rs libiconv-1.15-gcc-4.8-at46wg3 mawk-1.3.4-gcc-4.8-acjez57 openssl-1.0.2k-gcc-4.8-dkls5tk python-2.7.13-gcc-6.3.0-tyehea7 bison-3.0.4-gcc-4.8-ek4luo5 libpciaccess-0.13.4-gcc-6.3.0-gmufnvh mawk-1.3.4-gcc-6.3.0-ostdoms openssl-1.0.2k-gcc-6.3.0-gxgr5or readline-7.0-gcc-4.8-xhufqhn bzip2-1.0.6-gcc-4.8-iffrxzn libsigsegv-2.11-gcc-4.8-pp2cvte mpc-1.0.3-gcc-4.8-g5mztc5 pcre-8.40-gcc-4.8-r5pbrxb readline-7.0-gcc-6.3.0-zzcyicg bzip2-1.0.6-gcc-6.3.0-bequudr libsigsegv-2.11-gcc-6.3.0-7enifnh mpfr-3.1.5-gcc-4.8-o7xm7az perl-5.24.1-gcc-4.8-dg5j65u sqlite-3.8.5-gcc-6.3.0-6zoruzj cmake-3.7.2-gcc-6.3.0-fowuuby libtool-2.4.6-gcc-4.8-7a523za mpich-3.2-gcc-6.3.0-dmvd3aw perl-5.24.1-gcc-6.3.0-6uzkpt6 tar-1.29-gcc-4.8-wse2ass 
curl-7.53.1-gcc-4.8-3fz46n6 libtool-2.4.6-gcc-6.3.0-n7zmbzt ncurses-6.0-gcc-4.8-dcpe7ia pkg-config-0.29.2-gcc-4.8-ib33t75 tcl-8.6.6-gcc-4.8-tfxzqbr expat-2.2.0-gcc-4.8-mrv6bd4 libxml2-2.9.4-gcc-4.8-ryzxnsu ncurses-6.0-gcc-6.3.0-ucbhcdy pkg-config-0.29.2-gcc-6.3.0-jpgubk3 util-macros-1.19.1-gcc-6.3.0-xorz2x2 flex-2.6.3-gcc-4.8-yf345oo libxml2-2.9.4-gcc-6.3.0-rltzsdh netlib-lapack-3.6.1-gcc-6.3.0-js33dog py-appdirs-1.4.0-gcc-6.3.0-jxawmw7 xz-5.2.3-gcc-4.8-mew4log gcc-6.3.0-gcc-4.8-24puqve lmod-7.4.1-gcc-4.8-je4srhr netlib-scalapack-2.0.2-gcc-6.3.0-5aidk4l py-numpy-1.12.0-gcc-6.3.0-oemmoeu xz-5.2.3-gcc-6.3.0-3vqeuvb gettext-0.19.8.1-gcc-4.8-yymghlh lua-5.3.4-gcc-4.8-im75yaz netlib-scalapack-2.0.2-gcc-6.3.0-hjsemcn py-packaging-16.8-gcc-6.3.0-i2n3dtl zip-3.0-gcc-4.8-rwar22d gmp-6.1.2-gcc-4.8-5ub2wu5 lua-luafilesystem-1_6_3-gcc-4.8-wkey3nl netlib-scalapack-2.0.2-gcc-6.3.0-jva724b py-pyparsing-2.1.10-gcc-6.3.0-tbo6gmw zlib-1.2.11-gcc-4.8-pgxsxv7 help2man-1.47.4-gcc-4.8-kcnqmau lua-luaposix-33.4.0-gcc-4.8-mdod2ry netlib-scalapack-2.0.2-gcc-6.3.0-rgqfr6d py-scipy-0.19.0-gcc-6.3.0-kr7nat4 zlib-1.2.11-gcc-6.3.0-7cqp6cj ``` The names should look familiar, as they resemble the output from `spack find`. You *can* use the modules here directly. For example, you could type either of these commands to load the `cmake` module: ``` $ use cmake-3.7.2-gcc-6.3.0-fowuuby ``` ``` $ module load cmake-3.7.2-gcc-6.3.0-fowuuby ``` Neither of these is particularly pretty, easy to remember, or easy to type. Luckily, Spack has its own interface for using modules and dotkits. #### Shell support[¶](#id2) To enable additional Spack commands for loading and unloading module files, and to add the correct path to `MODULEPATH`, you need to source the appropriate setup file in the `$SPACK_ROOT/share/spack` directory. This will activate shell support for the commands that need it. For `bash`, `ksh` or `zsh` users: ``` $ . 
${SPACK_ROOT}/share/spack/setup-env.sh
```

For `csh` and `tcsh` instead:

```
$ set SPACK_ROOT ...
$ source $SPACK_ROOT/share/spack/setup-env.csh
```

Note that in the latter case it is necessary to explicitly set `SPACK_ROOT` before sourcing the setup file (you will get a meaningful error message if you don't).

When `bash` and `ksh` users update their environment with `setup-env.sh`, it will check for spack-installed environment modules and add the `module` command to their environment; this only occurs if the `module` command is not already available. You can install `environment-modules` with `spack bootstrap` as described in [Environment Modules](index.html#installenvironmentmodules).

Finally, if you want to have Spack's shell support available on the command line at any login you can put this source line in one of the files that are sourced at startup (like `.profile`, `.bashrc` or `.cshrc`). Be aware though that the startup time may be slightly increased because of that.

#### `spack load / unload`[¶](#spack-load-unload)

Once you have shell support enabled you can use the same spec syntax you're used to:

| Modules | Dotkit |
| --- | --- |
| `spack load <spec>` | `spack use <spec>` |
| `spack unload <spec>` | `spack unuse <spec>` |

And you can use the same shortened names you use everywhere else in Spack. For example, if you are using dotkit, this will add the `mpich` package built with `gcc` to your path:

```
$ spack install mpich %gcc@4.4.7

# ... wait for install ...

$ spack use mpich %gcc@4.4.7 # dotkit
Prepending: mpich@3.0.4%gcc@4.4.7 (ok)
$ which mpicc
~/spack/opt/linux-debian7-x86_64/gcc@4.4.7/mpich@3.0.4/bin/mpicc
```

Or, similarly, if you are using modules, you could type:

```
$ spack load mpich %gcc@4.4.7 # modules
```

These commands will add appropriate directories to your `PATH`, `MANPATH`, `CPATH`, and `LD_LIBRARY_PATH`.
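As a rough illustration of what a load does to these variables, the sketch below (plain Python, not Spack code; `prepend_path` is a hypothetical helper) shows the prepend semantics applied to colon-separated search paths:

```python
def prepend_path(env, var, path):
    """Prepend `path` to a colon-separated variable, as a module load does."""
    old = env.get(var, "")
    env[var] = path if not old else path + ":" + old

# Simulate loading mpich@3.0.4: its bin/, man/, include/ and lib/
# directories are pushed onto the front of the corresponding paths.
prefix = "~/spack/opt/linux-debian7-x86_64/gcc@4.4.7/mpich@3.0.4"
env = {"PATH": "/usr/bin:/bin"}
prepend_path(env, "PATH", prefix + "/bin")
prepend_path(env, "MANPATH", prefix + "/man")
prepend_path(env, "CPATH", prefix + "/include")
prepend_path(env, "LD_LIBRARY_PATH", prefix + "/lib")
print(env["PATH"])  # the package's bin/ now wins lookups
```

Because the new entry lands at the front of the variable, the loaded package shadows any system-provided version of the same tool.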
When you no longer want to use a package, you can type `unload` or `unuse` similarly:

```
$ spack unload mpich %gcc@4.4.7 # modules
$ spack unuse mpich %gcc@4.4.7 # dotkit
```

Note

These `use`, `unuse`, `load`, and `unload` subcommands are only available if you have enabled Spack's shell support *and* you have dotkit or modules installed on your machine.

#### Ambiguous module names[¶](#ambiguous-module-names)

If a spec used with load/unload or use/unuse is ambiguous (i.e. more than one installed package matches it), then Spack will warn you:

```
$ spack load libelf
==> Error: Multiple matches for spec libelf. Choose one:
libelf@0.8.13%gcc@4.4.7 arch=linux-debian7-x86_64
libelf@0.8.13%intel@15.0.0 arch=linux-debian7-x86_64
```

You can either type the `spack load` command again with a fully qualified argument, or you can add just enough extra constraints to identify one package. For example, above, the key differentiator is that one `libelf` is built with the Intel compiler, while the other used `gcc`. You could therefore just type:

```
$ spack load libelf %intel
```

to identify just the one built with the Intel compiler.

#### `spack module tcl loads`[¶](#spack-module-tcl-loads)

In some cases, it is desirable to load not just a module, but also all the modules it depends on. This is not required for most modules because Spack builds binaries with RPATH support. However, not all packages use RPATH to find their dependencies: this can be true in particular for Python extensions, which are currently *not* built with RPATH. Scripts to load modules recursively may be made with the command:

```
$ spack module tcl loads --dependencies <spec>
```

An equivalent alternative using [process substitution](http://tldp.org/LDP/abs/html/process-sub.html) is:

```
$ source <( spack module tcl loads --dependencies <spec> )
```

#### Module Commands for Shell Scripts[¶](#module-commands-for-shell-scripts)

Although Spack is flexible, the `module` command is much faster.
This could become an issue when emitting a series of `spack load` commands inside a shell script. In that case, `spack module tcl loads` may be used to generate `module load` commands that can be cut-and-pasted into a shell script. For example:

```
$ spack module tcl loads --dependencies py-numpy git
# bzip2@1.0.6%gcc@4.9.3=linux-x86_64
module load bzip2-1.0.6-gcc-4.9.3-ktnrhkrmbbtlvnagfatrarzjojmkvzsx
# ncurses@6.0%gcc@4.9.3=linux-x86_64
module load ncurses-6.0-gcc-4.9.3-kaazyneh3bjkfnalunchyqtygoe2mncv
# zlib@1.2.8%gcc@4.9.3=linux-x86_64
module load zlib-1.2.8-gcc-4.9.3-v3ufwaahjnviyvgjcelo36nywx2ufj7z
# sqlite@3.8.5%gcc@4.9.3=linux-x86_64
module load sqlite-3.8.5-gcc-4.9.3-a3eediswgd5f3rmto7g3szoew5nhehbr
# readline@6.3%gcc@4.9.3=linux-x86_64
module load readline-6.3-gcc-4.9.3-se6r3lsycrwxyhreg4lqirp6xixxejh3
# python@3.5.1%gcc@4.9.3=linux-x86_64
module load python-3.5.1-gcc-4.9.3-5q5rsrtjld4u6jiicuvtnx52m7tfhegi
# py-setuptools@20.5%gcc@4.9.3=linux-x86_64
module load py-setuptools-20.5-gcc-4.9.3-4qr2suj6p6glepnedmwhl4f62x64wxw2
# py-nose@1.3.7%gcc@4.9.3=linux-x86_64
module load py-nose-1.3.7-gcc-4.9.3-pwhtjw2dvdvfzjwuuztkzr7b4l6zepli
# openblas@0.2.17%gcc@4.9.3+shared=linux-x86_64
module load openblas-0.2.17-gcc-4.9.3-pw6rmlom7apfsnjtzfttyayzc7nx5e7y
# py-numpy@1.11.0%gcc@4.9.3+blas+lapack=linux-x86_64
module load py-numpy-1.11.0-gcc-4.9.3-mulodttw5pcyjufva4htsktwty4qd52r
# curl@7.47.1%gcc@4.9.3=linux-x86_64
module load curl-7.47.1-gcc-4.9.3-ohz3fwsepm3b462p5lnaquv7op7naqbi
# autoconf@2.69%gcc@4.9.3=linux-x86_64
module load autoconf-2.69-gcc-4.9.3-bkibjqhgqm5e3o423ogfv2y3o6h2uoq4
# cmake@3.5.0%gcc@4.9.3~doc+ncurses+openssl~qt=linux-x86_64
module load cmake-3.5.0-gcc-4.9.3-x7xnsklmgwla3ubfgzppamtbqk5rwn7t
# expat@2.1.0%gcc@4.9.3=linux-x86_64
module load expat-2.1.0-gcc-4.9.3-6pkz2ucnk2e62imwakejjvbv6egncppd
# git@2.8.0-rc2%gcc@4.9.3+curl+expat=linux-x86_64
module load git-2.8.0-rc2-gcc-4.9.3-3bib4hqtnv5xjjoq5ugt3inblt4xrgkd
```

The script may be further edited
by removing unnecessary modules. #### Module Prefixes[¶](#module-prefixes) On some systems, modules are automatically prefixed with a certain string; `spack module tcl loads` needs to know about that prefix when it issues `module load` commands. Add the `--prefix` option to your `spack module tcl loads` commands if this is necessary. For example, consider the following on one system: ``` $ module avail linux-SuSE11-x86_64/antlr-2.7.7-gcc-5.3.0-bdpl46y $ spack module tcl loads antlr # WRONG! # antlr@2.7.7%gcc@5.3.0~csharp+cxx~java~python arch=linux-SuSE11-x86_64 module load antlr-2.7.7-gcc-5.3.0-bdpl46y $ spack module tcl loads --prefix linux-SuSE11-x86_64/ antlr # antlr@2.7.7%gcc@5.3.0~csharp+cxx~java~python arch=linux-SuSE11-x86_64 module load linux-SuSE11-x86_64/antlr-2.7.7-gcc-5.3.0-bdpl46y ``` ### Module file customization[¶](#module-file-customization) Module files are generated by post-install hooks after the successful installation of a package. The table below summarizes the essential information associated with the different file formats that can be generated by Spack: > | | **Hook name** | **Default root directory** | **Default template file** | **Compatible tools** | > | --- | --- | --- | --- | --- | > | **Dotkit** | `dotkit` | share/spack/dotkit | share/spack/templates/modules/modulefile.dk | DotKit | > | **TCL - Non-Hierarchical** | `tcl` | share/spack/modules | share/spack/templates/modules/modulefile.tcl | Env. Modules/LMod | > | **Lua - Hierarchical** | `lmod` | share/spack/lmod | share/spack/templates/modules/modulefile.lua | LMod | Spack ships with sensible defaults for the generation of module files, but you can customize many aspects of it to accommodate package or site specific needs. In general you can override or extend the default behavior by: > 1. overriding certain callback APIs in the Python packages > 2. writing specific rules in the `modules.yaml` configuration file > 3. 
writing your own templates to override or extend the defaults

The first method lets you express changes to the run-time environment that are needed to use the installed software properly, e.g. injecting variables from language interpreters into their extensions. The latter two instead let you fine-tune the filesystem layout, content, and creation of module files to meet site-specific conventions.

#### Override API calls in `package.py`[¶](#override-api-calls-in-package-py)

There are two methods that you can override in any `package.py` to affect the content of the module files generated by Spack. The first one:

```
def setup_environment(self, spack_env, run_env):
    """Set up the compile and runtime environments for a package."""
    pass
```

can alter the content of the module file associated with the same package where it is overridden. The second method:

```
def setup_dependent_environment(self, spack_env, run_env, dependent_spec):
    """Set up the environment of packages that depend on this one"""
    pass
```

can instead inject run-time environment modifications into the module files of packages that depend on it. In both cases you need to fill `run_env` with the desired list of environment modifications.

Note

The `r` package and callback APIs

An example in which it is crucial to override both methods is given by the `r` package.
This package installs libraries and headers in non-standard locations and it is possible to prepend the appropriate directory to the corresponding environment variables:

| Variable | Path to prepend |
| --- | --- |
| `LIBRARY_PATH` | `self.prefix/rlib/R/lib` |
| `LD_LIBRARY_PATH` | `self.prefix/rlib/R/lib` |
| `CPATH` | `self.prefix/rlib/R/include` |

with the following snippet:

```
def setup_environment(self, spack_env, run_env):
    run_env.prepend_path('LIBRARY_PATH',
                         join_path(self.prefix, 'rlib', 'R', 'lib'))
    run_env.prepend_path('LD_LIBRARY_PATH',
                         join_path(self.prefix, 'rlib', 'R', 'lib'))
    run_env.prepend_path('CPATH',
                         join_path(self.prefix, 'rlib', 'R', 'include'))
```

The `r` package also knows which environment variable should be modified to make language extensions provided by other packages available, and modifies it appropriately in the override of the second method:

```
def setup_dependent_environment(self, spack_env, run_env, dependent_spec):
    # Set R_LIBS to include the library dir for the
    # extension and any other R extensions it depends on.
    r_libs_path = []
    for d in dependent_spec.traverse(
            deptype=('build', 'run'), deptype_query='run'):
        if d.package.extends(self.spec):
            r_libs_path.append(join_path(d.prefix, self.r_lib_dir))

    r_libs_path = ':'.join(r_libs_path)
    spack_env.set('R_LIBS', r_libs_path)
    spack_env.set('R_MAKEVARS_SITE',
                  join_path(self.etcdir, 'Makeconf.spack'))

    # Use the number of make_jobs set in spack. The make program will
    # determine how many jobs can actually be started.
    spack_env.set('MAKEFLAGS', '-j{0}'.format(make_jobs))

    # For run time environment set only the path for dependent_spec and
    # prepend it to R_LIBS
    if dependent_spec.package.extends(self.spec):
        run_env.prepend_path('R_LIBS', join_path(
            dependent_spec.prefix, self.r_lib_dir))
```

#### Write a configuration file[¶](#write-a-configuration-file)

The configuration files that control module generation behavior are named `modules.yaml`.
The default configuration:

```
# ---
# This is the default configuration for Spack's module file generation.
#
# Settings here are versioned with Spack and are intended to provide
# sensible defaults out of the box. Spack maintainers should edit this
# file to keep it current.
#
# Users can override these settings by editing the following files.
#
# Per-spack-instance settings (overrides defaults):
#   $SPACK_ROOT/etc/spack/modules.yaml
#
# Per-user settings (overrides default and site settings):
#   ~/.spack/modules.yaml
# ---
modules:
  enable:
    - tcl
    - dotkit
  prefix_inspections:
    bin:
      - PATH
    man:
      - MANPATH
    share/man:
      - MANPATH
    share/aclocal:
      - ACLOCAL_PATH
    lib:
      - LIBRARY_PATH
    lib64:
      - LIBRARY_PATH
    include:
      - CPATH
    lib/pkgconfig:
      - PKG_CONFIG_PATH
    lib64/pkgconfig:
      - PKG_CONFIG_PATH
    '':
      - CMAKE_PREFIX_PATH

  lmod:
    hierarchy:
      - mpi
```

activates the hooks to generate `tcl` and `dotkit` module files and inspects the installation folder of each package for the presence of a set of subdirectories (`bin`, `man`, `share/man`, etc.). If any is found, its full path is prepended to the environment variables listed below the folder name.

##### Activate other hooks[¶](#activate-other-hooks)

Any other module file generator shipped with Spack can be activated by adding it to the list under the `enable` key in the `modules.yaml` configuration file. Currently the only generator that is not active by default is `lmod`, which produces hierarchical lua module files. Each module system can then be configured separately.
In fact, you should list configuration options that affect a particular type of module file under a top-level key corresponding to the generator being customized:

```
modules:
  enable:
    - tcl
    - dotkit
    - lmod
  tcl:
    # contains environment modules specific customizations
  dotkit:
    # contains dotkit specific customizations
  lmod:
    # contains lmod specific customizations
```

In general, the configuration options that you can use in `modules.yaml` will either change the layout of the module files on the filesystem, or they will affect their content. For the latter point it is possible to use anonymous specs to fine-tune the set of packages to which the modifications should be applied.

##### Selection by anonymous specs[¶](#selection-by-anonymous-specs)

In the configuration file you can use *anonymous specs* (i.e. specs that **are not required to have a root package** and are thus used just to express constraints) to apply certain modifications to a selected set of the installed software. For instance, in the snippet below:

```
modules:
  tcl:
    # The keyword `all` selects every package
    all:
      environment:
        set:
          BAR: 'bar'
    # This anonymous spec selects any package that
    # depends on openmpi. The double colon at the
    # end clears the set of rules that matched so far.
    ^openmpi::
      environment:
        set:
          BAR: 'baz'
    # Selects any zlib package
    zlib:
      environment:
        prepend_path:
          LD_LIBRARY_PATH: 'foo'
    # Selects zlib compiled with gcc@4.8
    zlib%gcc@4.8:
      environment:
        unset:
          - FOOBAR
```

you are instructing Spack to set the environment variable `BAR=bar` for every module, unless the associated spec satisfies `^openmpi`, in which case `BAR=baz`. In addition, in any spec that satisfies `zlib` the value `foo` will be prepended to `LD_LIBRARY_PATH`, and in any spec that satisfies `zlib%gcc@4.8` the variable `FOOBAR` will be unset.

Note

Order does matter

The modifications associated with the `all` keyword are always evaluated first, no matter where they appear in the configuration file.
All the other spec constraints are instead evaluated top to bottom.

##### Blacklist or whitelist specific module files[¶](#blacklist-or-whitelist-specific-module-files)

You can also use anonymous specs to prevent module files from being written, or to force them to be written. Consider the case where you want to hide from users all the boilerplate software that you had to build in order to bootstrap a new compiler. Suppose for instance that `gcc@4.4.7` is the compiler provided by your system. If you write a configuration file like:

```
modules:
  tcl:
    whitelist: ['gcc', 'llvm']  # Whitelist will have precedence over blacklist
    blacklist: ['%gcc@4.4.7']   # Assuming gcc@4.4.7 is the system compiler
```

you will prevent the generation of module files for any package that is compiled with `gcc@4.4.7`, with the sole exception of any `gcc` or `llvm` installation.

##### Customize the naming scheme[¶](#customize-the-naming-scheme)

The names of environment modules generated by Spack are not always easy to fully comprehend due to the long hash in the name. There are two module configuration options to help with that. The first is a global setting to adjust the hash length. It can be set anywhere from 0 to 32 and has a default length of 7. This is the representation of the hash in the module file name and does not affect the size of the package hash. Be aware that the smaller the hash length, the more likely naming conflicts will occur. The following snippet shows how to set hash length in the module file names:

```
modules:
  tcl:
    hash_length: 7
```

To help make module names more readable, and to help alleviate name conflicts with a short hash, one can use the `suffixes` option in the modules configuration file. This option will add strings to modules that match a spec.
For instance, the following config options,

```
modules:
  tcl:
    all:
      suffixes:
        ^python@2.7.12: 'python-2.7.12'
        ^openblas: 'openblas'
```

will add a `python-2.7.12` version string to any packages compiled with a python matching the spec `python@2.7.12`. This is useful to know which version of python a set of python extensions is associated with. Likewise, the `openblas` string is attached to any program that has openblas in the spec, most likely via the `+blas` variant specification.

Note

TCL module files

A modification that is specific to `tcl` module files is the ability to change the naming scheme of modules.

```
modules:
  tcl:
    naming_scheme: '${PACKAGE}/${VERSION}-${COMPILERNAME}-${COMPILERVER}'
    all:
      conflict:
        - '${PACKAGE}'
        - 'intel/14.0.1'
```

will create module files that will conflict with `intel/14.0.1` and with the base directory of the same module, effectively preventing the possibility to load two or more versions of the same software at the same time. The tokens that are available for use in this directive are the same ones understood by the `Spec.format` method.

Note

LMod hierarchical module files

When `lmod` is activated Spack will generate a set of hierarchical lua module files that are understood by LMod. The hierarchy will always contain the two layers `Core` / `Compiler` but can be further extended to any of the virtual dependencies present in Spack. A case that could be useful in practice is for instance:

```
modules:
  enable:
    - lmod
  lmod:
    core_compilers:
      - 'gcc@4.8'
    hierarchy:
      - 'mpi'
      - 'lapack'
```

that will generate a hierarchy in which the `lapack` and `mpi` layers can be switched independently. This allows a site to build the same libraries or applications against different implementations of `mpi` and `lapack`, and lets LMod switch safely from one to the other.

Warning

Deep hierarchies and `lmod spider`

For hierarchies that are deeper than three layers `lmod spider` may have some issues.
See [this discussion on the LMod project](https://github.com/TACC/Lmod/issues/114). ##### Filter out environment modifications[¶](#filter-out-environment-modifications) Modifications to certain environment variables in module files are there by default, for instance because they are generated by prefix inspections. If you want to prevent modifications to some environment variables, you can do so by using the environment blacklist: ``` modules: dotkit: all: filter: # Exclude changes to any of these variables environment_blacklist: ['CPATH', 'LIBRARY_PATH'] ``` The configuration above will generate dotkit module files that will not contain modifications to either `CPATH` or `LIBRARY_PATH` and environment module files that instead will contain these modifications. ##### Autoload dependencies[¶](#autoload-dependencies) In some cases it can be useful to have module files that automatically load their dependencies. This may be the case for Python extensions, if not activated using `spack activate`: ``` modules: tcl: ^python: autoload: 'direct' ``` The configuration file above will produce module files that will load their direct dependencies if the package installed depends on `python`. The allowed values for the `autoload` statement are either `none`, `direct` or `all`. Note TCL prerequisites In the `tcl` section of the configuration file it is possible to use the `prerequisites` directive that accepts the same values as `autoload`. It will produce module files that have a `prereq` statement instead of automatically loading other modules. ### Maintaining Module Files[¶](#maintaining-module-files) Each type of module file has a command with the same name associated with it. The actions these commands permit are usually associated with the maintenance of a production environment. Here’s, for instance, a sample of the features of the `spack module tcl` command: ``` $ spack module tcl --help usage: spack module tcl [-h] SUBCOMMAND ... 
positional arguments:
  SUBCOMMAND
    refresh   regenerate module files
    find      find module files for packages
    rm        remove module files
    loads     prompt the list of modules associated with a constraint

optional arguments:
  -h, --help  show this help message and exit
```

#### Refresh the set of modules[¶](#refresh-the-set-of-modules)

The subcommand that regenerates module files to update their content or their layout is `refresh`:

```
$ spack module tcl refresh --help
usage: spack module tcl refresh [-hy] [--delete-tree] ...

positional arguments:
  constraint        constraint to select a subset of installed packages

optional arguments:
  -h, --help        show this help message and exit
  --delete-tree     delete the module file tree before refresh
  -y, --yes-to-all  assume "yes" is the answer to every confirmation request
```

A set of packages can be selected using anonymous specs for the optional `constraint` positional argument. Optionally the entire tree can be deleted before regeneration if the change in layout is radical.

#### Delete module files[¶](#delete-module-files)

If instead what you need is just to delete a few module files, then the right subcommand is `rm`:

```
$ spack module tcl rm --help
usage: spack module tcl rm [-hy] ...

positional arguments:
  constraint        constraint to select a subset of installed packages

optional arguments:
  -h, --help        show this help message and exit
  -y, --yes-to-all  assume "yes" is the answer to every confirmation request
```

Note

We care about your module files!

Every modification to already-existing module files will ask for confirmation by default. If the command is used in a script, it is possible though to pass the `-y` argument, which will skip this safety measure.

Package Repositories[¶](#package-repositories)
---

Spack comes with over 1,000 built-in package recipes in `var/spack/repos/builtin/`. This is a **package repository** – a directory that Spack searches when it needs to find a package by name.
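To make that name lookup concrete, here is a small, hypothetical sketch (plain Python, not Spack's actual implementation) of listing the package names one repository provides by scanning its `packages/` directory for `package.py` recipes:

```python
import os
import tempfile

def list_packages(repo_root):
    """Return the package names a repo provides: every subdirectory
    of packages/ that contains a package.py recipe file."""
    pkg_dir = os.path.join(repo_root, "packages")
    names = []
    for entry in sorted(os.listdir(pkg_dir)):
        if os.path.isfile(os.path.join(pkg_dir, entry, "package.py")):
            names.append(entry)
    return names

# Build a throwaway repo with the directory layout described below.
repo = tempfile.mkdtemp()
for name in ("hdf5", "mpich", "trilinos"):
    os.makedirs(os.path.join(repo, "packages", name))
    open(os.path.join(repo, "packages", name, "package.py"), "w").close()

print(list_packages(repo))  # ['hdf5', 'mpich', 'trilinos']
```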
You may need to maintain packages for restricted, proprietary or experimental software separately from the built-in repository. Spack allows you to configure local repositories using either the `repos.yaml` configuration file or the `spack repo` command.

A package repository is a directory structured like this:

```
repo/
  repo.yaml
  packages/
    hdf5/
      package.py
    mpich/
      package.py
      mpich-1.9-bugfix.patch
    trilinos/
      package.py
    ...
```

The top-level `repo.yaml` file contains configuration metadata for the repository, and the `packages` directory contains subdirectories for each package in the repository. Each package directory contains a `package.py` file and any patches or other files needed to build the package.

Package repositories allow you to:

1. Maintain your own packages separately from Spack;
2. Share your packages (e.g., by hosting them in a shared file system), without committing them to the built-in Spack package repository; and
3. Override built-in Spack packages with your own implementation.

Packages in a separate repository can also *depend on* built-in Spack packages. So, you can leverage existing recipes without re-implementing them in your own repository.

### `repos.yaml`[¶](#repos-yaml)

Spack uses the `repos.yaml` file in `~/.spack` (and [elsewhere](index.html#configuration)) to find repositories. Note that the `repos.yaml` configuration file is distinct from the `repo.yaml` file in each repository. For more on the YAML format, and on how configuration file precedence works in Spack, see [configuration](index.html#configuration). The default `etc/spack/defaults/repos.yaml` file looks like this:

```
repos:
  - $spack/var/spack/repos/builtin
```

The file starts with `repos:` and contains a single ordered list of paths to repositories. Each path is on a separate line starting with `-`.
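The `$spack` token in the path above stands for the Spack installation prefix and is expanded when the configuration is read. A minimal sketch of that substitution, using a hypothetical helper rather than Spack's config machinery:

```python
from string import Template

def expand_repo_path(path, spack_prefix):
    """Expand the $spack placeholder used in repos.yaml entries."""
    return Template(path).safe_substitute(spack=spack_prefix)

print(expand_repo_path("$spack/var/spack/repos/builtin", "/opt/spack"))
# /opt/spack/var/spack/repos/builtin
```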
You can add a repository by inserting another path into the list:

```
repos:
  - /opt/local-repo
  - $spack/var/spack/repos/builtin
```

When Spack interprets a spec, e.g., `mpich` in `spack install mpich`, it searches these repositories in order (first to last) to resolve each package name. In this example, Spack will look for the following packages and use the first valid file:

1. `/opt/local-repo/packages/mpich/package.py`
2. `$spack/var/spack/repos/builtin/packages/mpich/package.py`

Note

Currently, Spack can only use repositories in the file system. We plan to eventually support URLs in `repos.yaml`, so that you can easily point to remote package repositories, but that is not yet implemented.

### Namespaces[¶](#namespaces)

Every repository in Spack has an associated **namespace** defined in its top-level `repo.yaml` file. If you look at `var/spack/repos/builtin/repo.yaml` in the built-in repository, you'll see that its namespace is `builtin`:

```
$ cat var/spack/repos/builtin/repo.yaml
repo:
  namespace: builtin
```

Spack records the repository namespace of each installed package. For example, if you install the `mpich` package from the `builtin` repo, Spack records its fully qualified name as `builtin.mpich`. This accomplishes two things:

1. You can have packages with the same name from different namespaces installed at once.
2. You can easily determine which repository a package came from after it is installed (more [below](#namespace-example)).

Note

It may seem redundant for a repository to have both a namespace and a path, but repository *paths* may change over time, or, as mentioned above, a locally hosted repository path may eventually be hosted at some remote URL. Namespaces are designed to allow *package authors* to associate a unique identifier with their packages, so that the package can be identified even if the repository moves.
This is why the namespace is determined by the `repo.yaml` file in the repository rather than the local `repos.yaml` configuration: the *repository maintainer* sets the name. #### Uniqueness[¶](#uniqueness) You should choose a namespace that uniquely identifies your package repository. For example, if you make a repository for packages written by your organization, you could use your organization’s name. You can also nest namespaces using periods, so you could identify a repository by a sub-organization. For example, LLNL might use a namespace for its internal repositories like `llnl`. Packages from the Physical & Life Sciences directorate (PLS) might use the `llnl.pls` namespace, and packages created by the Computation directorate might use `llnl.comp`. Spack cannot ensure that every repository is named uniquely, but it will prevent you from registering two repositories with the same namespace at the same time. If you try to add a repository that has the same name as an existing one, e.g., `builtin`, Spack will print a warning message. #### Namespace example[¶](#namespace-example) Suppose that LLNL maintains its own version of `mpich`, separate from Spack’s built-in `mpich` package, and suppose you’ve installed both LLNL’s and Spack’s `mpich` packages. If you just use `spack find`, you won’t see a difference between these two packages: ``` $ spack find ==> 2 installed packages. -- linux-rhel6-x86_64 / gcc@4.4.7 --- mpich@3.2 mpich@3.2 ``` However, if you use `spack find -N`, Spack will display the packages with their namespaces: ``` $ spack find -N ==> 2 installed packages. -- linux-rhel6-x86_64 / gcc@4.4.7 --- builtin.mpich@3.2 llnl.comp.mpich@3.2 ``` Now you know which one is LLNL’s special version, and which one is the built-in Spack package. As you might guess, packages that are identical except for their namespace will still have different hashes: ``` $ spack find -lN ==> 2 installed packages. 
-- linux-rhel6-x86_64 / gcc@4.4.7 ---
c35p3gc builtin.mpich@3.2
itoqmox llnl.comp.mpich@3.2
```

All Spack commands that take a package [spec](index.html#sec-specs) can also accept a fully qualified spec with a namespace. This means you can use the namespace to be more specific when designating, e.g., which package you want to uninstall:

```
spack uninstall llnl.comp.mpich
```

### Overriding built-in packages[¶](#overriding-built-in-packages)

Spack's search semantics mean that you can make your own implementation of a built-in Spack package (like `mpich`), put it in a repository, and use it to override the built-in package. As long as the repository containing your `mpich` is earlier than any other in `repos.yaml`, any built-in package that depends on `mpich` will use the one in your repository.

Suppose you have three repositories: the builtin Spack repo (`builtin`), a shared repo for your institution (e.g., `llnl`), and a repo containing your own prototype packages (`proto`). Suppose they contain packages as follows:

> | Namespace | Path to repo | Packages |
> | --- | --- | --- |
> | `proto` | `~/proto` | `mpich` |
> | `llnl` | `/usr/local/llnl` | `hdf5` |
> | `builtin` | `$spack/var/spack/repos/builtin` | `mpich`, `hdf5`, others |

Suppose that `hdf5` depends on `mpich`. You can override the built-in `hdf5` by adding the `llnl` repo to `repos.yaml`:

```
repos:
  - /usr/local/llnl
  - $spack/var/spack/repos/builtin
```

`spack install hdf5` will install `llnl.hdf5 ^builtin.mpich`. If, instead, `repos.yaml` looks like this:

```
repos:
  - ~/proto
  - /usr/local/llnl
  - $spack/var/spack/repos/builtin
```

`spack install hdf5` will install `llnl.hdf5 ^proto.mpich`.

Any unqualified package name will be resolved by searching `repos.yaml` from the first entry to the last. You can force a particular repository's package by using a fully qualified name.
For example, if your `repos.yaml` is as above, and you want `builtin.mpich` instead of `proto.mpich`, you can write: ``` spack install hdf5 ^builtin.mpich ``` which will install `llnl.hdf5 ^builtin.mpich`. Similarly, you can force the `builtin.hdf5` like this: ``` spack install builtin.hdf5 ^builtin.mpich ``` This will not search `repos.yaml` at all, as the `builtin` repo is specified in both cases. It will install `builtin.hdf5 ^builtin.mpich`. If you want to see which repositories will be used in a build *before* you install it, you can use `spack spec -N`: ``` $ spack spec -N hdf5 Input spec --- hdf5 Normalized --- hdf5 ^zlib@1.1.2: Concretized --- builtin.hdf5@1.10.0-patch1%clang@7.0.2-apple+cxx~debug+fortran+mpi+shared~szip~threadsafe arch=darwin-elcapitan-x86_64 ^builtin.openmpi@2.0.1%clang@7.0.2-apple~mxm~pmi~psm~psm2~slurm~sqlite3~thread_multiple~tm~verbs+vt arch=darwin-elcapitan-x86_64 ^builtin.hwloc@1.11.4%clang@7.0.2-apple arch=darwin-elcapitan-x86_64 ^builtin.libpciaccess@0.13.4%clang@7.0.2-apple arch=darwin-elcapitan-x86_64 ^builtin.libtool@2.4.6%clang@7.0.2-apple arch=darwin-elcapitan-x86_64 ^builtin.m4@1.4.17%clang@7.0.2-apple+sigsegv arch=darwin-elcapitan-x86_64 ^builtin.libsigsegv@2.10%clang@7.0.2-apple arch=darwin-elcapitan-x86_64 ^builtin.pkg-config@0.29.1%clang@7.0.2-apple+internal_glib arch=darwin-elcapitan-x86_64 ^builtin.util-macros@1.19.0%clang@7.0.2-apple arch=darwin-elcapitan-x86_64 ^builtin.zlib@1.2.8%clang@7.0.2-apple+pic arch=darwin-elcapitan-x86_64 ``` Warning You *can* use a fully qualified package name in a `depends_on` directive in a `package.py` file, like so: ``` depends_on('proto.hdf5') ``` This is *not* recommended, as it makes it very difficult for multiple repos to be composed and shared. A `package.py` like this will fail if the `proto` repository is not registered in `repos.yaml`. 
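The search-versus-qualification behavior can be sketched as follows. This is a simplified model, not Spack's resolver: repositories are modeled as ordered `(namespace, package-names)` pairs, a qualified name pins the repo, and an unqualified name takes the first repo that provides the package:

```python
def resolve(name, repos):
    """Resolve a package name against an ordered list of
    (namespace, packages) repos, returning its fully qualified name."""
    if "." in name:
        # Qualified name like 'builtin.mpich': no search, just pin the repo.
        namespace, _, pkg = name.rpartition(".")
        for ns, packages in repos:
            if ns == namespace and pkg in packages:
                return f"{ns}.{pkg}"
        raise KeyError(name)
    # Unqualified name: first repo in order that provides it wins.
    for ns, packages in repos:
        if name in packages:
            return f"{ns}.{name}"
    raise KeyError(name)

repos = [
    ("proto", {"mpich"}),
    ("llnl", {"hdf5"}),
    ("builtin", {"mpich", "hdf5"}),
]
print(resolve("mpich", repos))          # proto.mpich (first match wins)
print(resolve("builtin.mpich", repos))  # builtin.mpich (no search)
```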
### `spack repo`[¶](#spack-repo)

Spack's [configuration system](index.html#configuration) allows repository settings to come from `repos.yaml` files in many locations. If you want to see the repositories registered as a result of all configuration files, use `spack repo list`.

#### `spack repo list`[¶](#spack-repo-list)

```
$ spack repo list
==> 2 package repositories.
myrepo     ~/myrepo
builtin    ~/spack/var/spack/repos/builtin
```

Each repository is listed with its associated namespace. To get the raw, merged YAML from all configuration files, use `spack config get repos`:

```
$ spack config get repos
repos:
  - ~/myrepo
  - $spack/var/spack/repos/builtin
```

Note that, unlike `spack repo list`, this does not include the namespace, which is read from each repo's `repo.yaml`.

#### `spack repo create`[¶](#spack-repo-create)

To make your own repository, you don't need to construct a directory yourself; you can use the `spack repo create` command.

```
$ spack repo create myrepo
==> Created repo with namespace 'myrepo'.
==> To register it with spack, run this command:
  spack repo add ~/myrepo

$ ls myrepo
packages/  repo.yaml

$ cat myrepo/repo.yaml
repo:
  namespace: 'myrepo'
```

By default, the namespace of a new repo matches its directory's name. You can supply a custom namespace with a second argument, e.g.:

```
$ spack repo create myrepo llnl.comp
==> Created repo with namespace 'llnl.comp'.
==> To register it with spack, run this command:
  spack repo add ~/myrepo

$ cat myrepo/repo.yaml
repo:
  namespace: 'llnl.comp'
```

#### `spack repo add`[¶](#spack-repo-add)

Once your repository is created, you can register it with Spack with `spack repo add`:

```
$ spack repo add ./myrepo
==> Added repo with namespace 'llnl.comp'.

$ spack repo list
==> 2 package repositories.
llnl.comp    ~/myrepo
builtin      ~/spack/var/spack/repos/builtin
```

This simply adds the repo to your `repos.yaml` file.
Once a repository is registered like this, you should be able to see its packages’ names in the output of `spack list`, and you should be able to build them using `spack install <name>` as you would with any built-in package. #### `spack repo remove`[¶](#spack-repo-remove) You can remove an already-registered repository with `spack repo rm`. This will work whether you pass the repository’s namespace *or* its path. By namespace: ``` $ spack repo rm llnl.comp ==> Removed repository ~/myrepo with namespace 'llnl.comp'. $ spack repo list ==> 1 package repository. builtin ~/spack/var/spack/repos/builtin ``` By path: ``` $ spack repo rm ~/myrepo ==> Removed repository ~/myrepo $ spack repo list ==> 1 package repository. builtin ~/spack/var/spack/repos/builtin ``` ### Repo namespaces and Python[¶](#repo-namespaces-and-python) You may have noticed that namespace notation for repositories is similar to the notation for namespaces in Python. As it turns out, you *can* treat Spack repositories like Python packages; this is how they are implemented. You could, for example, extend a `builtin` package in your own repository: ``` from spack.pkg.builtin.mpich import Mpich class MyPackage(Mpich): ... ``` Spack repo namespaces are actually Python namespaces tacked on under `spack.pkg`. The search semantics of `repos.yaml` are actually implemented using Python’s built-in [sys.path](https://docs.python.org/2/library/sys.html#sys.path) search. The [`spack.repo`](index.html#module-spack.repo) module implements a custom [Python importer](https://docs.python.org/2/library/imp.html). Warning The mechanism for extending packages is not yet extensively tested, and extending packages across repositories imposes inter-repo dependencies, which may be hard to manage. Use this feature at your own risk, but let us know if you have a use case for it. 
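The namespace-to-module mapping described above can be sketched with a small helper. `module_path` is hypothetical, shown only to illustrate the naming convention; Spack's real importer machinery lives in `spack.repo`:

```python
def module_path(namespace, pkg_name=None):
    """Map a repo namespace (and optionally a package name) to the
    dotted module path Spack exposes under spack.pkg."""
    parts = ["spack", "pkg"] + namespace.split(".")
    if pkg_name:
        parts.append(pkg_name)
    return ".".join(parts)

print(module_path("builtin", "mpich"))   # spack.pkg.builtin.mpich
print(module_path("llnl.comp", "hdf5"))  # spack.pkg.llnl.comp.hdf5
print(module_path("builtin"))            # spack.pkg.builtin
```

This is why `from spack.pkg.builtin.mpich import Mpich` works: the `builtin` namespace becomes a Python package under `spack.pkg`, and each Spack package becomes a module inside it.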
Build Caches[¶](#build-caches) --- Some sites may encourage users to set up their own test environments before carrying out central installations, or some users may prefer to set up these environments on their own initiative. To reduce the load of recompiling otherwise identical package specs in different installations, installed packages can be put into build cache tarballs, uploaded to your Spack mirror and then downloaded and installed by others. ### Creating build cache files[¶](#creating-build-cache-files) A compressed tarball of an installed package is created. Tarballs are created for all of its link and run dependency packages as well. Each compressed tarball is signed with gpg, and the signature and tarball are bundled together in a `.spack` file. Optionally, the rpaths (and ids and deps on macOS) can be changed to paths relative to the Spack install tree before the tarball is created. Build caches are created via: ``` $ spack buildcache create spec ``` ### Finding or installing build cache files[¶](#finding-or-installing-build-cache-files) To find build caches or install build caches, a Spack mirror must be configured with: ``` $ spack mirror add <name> <url> ``` Build caches are found via: ``` $ spack buildcache list ``` Build caches are installed via: ``` $ spack buildcache install ``` ### Relocation[¶](#relocation) Initial build and later installation do not necessarily happen at the same location. Spack provides a relocation capability and corrects for RPATHs and non-relocatable scripts. However, many packages compile paths into binary artifacts directly. In such cases, the build instructions of the package would need to be adjusted for better relocatability. ### `spack buildcache`[¶](#spack-buildcache) #### `spack buildcache create`[¶](#spack-buildcache-create) Create a tarball of an installed Spack package and all its dependencies. Tarballs are checksummed and signed if gpg2 is available, and placed in a `build_cache` directory that can be copied to a mirror.
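The rpath rewriting involved in relocation can be illustrated with a small sketch. `relativize_rpath` is a hypothetical helper, not Spack's actual relocation code; it uses the ELF `$ORIGIN` convention (a token the runtime linker expands to the directory holding the binary) and assumes POSIX paths:

```python
import os

def relativize_rpath(rpath, binary_dir, install_root):
    """If an rpath points inside the install tree, rewrite it relative
    to the directory holding the binary via the ELF $ORIGIN token;
    paths outside the tree (e.g. system libraries) are left untouched."""
    if not rpath.startswith(install_root):
        return rpath
    return os.path.join("$ORIGIN", os.path.relpath(rpath, binary_dir))

root = "/opt/spack/opt"
print(relativize_rpath("/opt/spack/opt/zlib/lib", "/opt/spack/opt/hdf5/bin", root))
# $ORIGIN/../../zlib/lib
print(relativize_rpath("/usr/lib", "/opt/spack/opt/hdf5/bin", root))
# /usr/lib
```

Rewriting rpaths this way makes the tarball independent of the absolute prefix it was built under, which is why the install tree can later be unpacked at a different location.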
Commands like `spack buildcache install` will search Spack mirrors for build_cache to get the list of build caches. | Arguments | Description | | --- | --- | | `<specs>` | list of partial specs or hashes with a leading `/` to match from installed packages and used for creating build caches | | `-d <path>` | directory in which `build_cache` directory is created, defaults to `.` | | `-f` | overwrite `.spack` file in `build_cache` directory if it exists | | `-k <key>` | the key to sign package with. In the case where multiple keys exist, the package will be unsigned unless `-k` is used. | | `-r` | make paths in binaries relative before creating tarball | | `-y` | answer yes to all prompts about creating unsigned `build_cache` files | #### `spack buildcache list`[¶](#spack-buildcache-list) Retrieves all specs for build caches available on a Spack mirror. | Arguments | Description | | --- | --- | | `<specs>` | list of partial package specs to be matched against specs downloaded for build caches | E.g. `spack buildcache list gcc` will print only commands to install `gcc` package(s) #### `spack buildcache install`[¶](#spack-buildcache-install) Retrieves all specs for build caches available on a Spack mirror and installs build caches with specs matching the specs input. | Arguments | Description | | --- | --- | | `<specs>` | list of partial package specs or hashes with a leading `/` to be installed from build caches | | `-f` | remove install directory if it exists before unpacking tarball | | `-y` | answer yes to all prompts, skipping gpg verification of packages | #### `spack buildcache keys`[¶](#spack-buildcache-keys) List public keys available on the Spack mirror. | Arguments | Description | | --- | --- | | `-i` | trust the downloaded keys, prompting for each | | `-y` | answer yes to all prompts, trusting all downloaded keys | Command Reference[¶](#command-reference) --- This is a reference for all commands in the Spack command line interface.
The same information is available through [spack help](#spack-help). Commands that also have sections in the main documentation have a link to “More documentation”. | Category | Commands | | --- | --- | | Administration | [bootstrap](#spack-bootstrap), [clone](#spack-clone), [reindex](#spack-reindex) | | Query packages | [dependencies](#spack-dependencies), [dependents](#spack-dependents), [find](#spack-find), [graph](#spack-graph), [info](#spack-info), [list](#spack-list), [location](#spack-location), [providers](#spack-providers) | | Build packages | [build](#spack-build), [build-env](#spack-build-env), [clean](#spack-clean), [configure](#spack-configure), [diy](#spack-diy), [fetch](#spack-fetch), [install](#spack-install), [log-parse](#spack-log-parse), [patch](#spack-patch), [restage](#spack-restage), [setup](#spack-setup), [spec](#spack-spec), [stage](#spack-stage), [uninstall](#spack-uninstall) | | Configuration | [config](#spack-config), [mirror](#spack-mirror), [repo](#spack-repo) | | Developer | [blame](#spack-blame), [cd](#spack-cd), [commands](#spack-commands), [debug](#spack-debug), [flake8](#spack-flake8), [license](#spack-license), [pkg](#spack-pkg), [pydoc](#spack-pydoc), [python](#spack-python), [test](#spack-test), [url](#spack-url) | | Environments | [add](#spack-add), [concretize](#spack-concretize), [env](#spack-env), [remove](#spack-remove), [view](#spack-view) | | Extensions | [activate](#spack-activate), [deactivate](#spack-deactivate), [extensions](#spack-extensions) | | More help | [docs](#spack-docs), [help](#spack-help) | | Modules | [load](#spack-load), [module](#spack-module), [unload](#spack-unload), [unuse](#spack-unuse), [use](#spack-use) | | Create packages | [buildcache](#spack-buildcache), [checksum](#spack-checksum), [create](#spack-create), [edit](#spack-edit), [gpg](#spack-gpg), [versions](#spack-versions) | | System | [arch](#spack-arch), [compiler](#spack-compiler), [compilers](#spack-compilers) | --- ### spack[¶](#spack) A 
flexible package manager that supports multiple versions, configurations, platforms, and compilers. ``` spack [-hHdklLmpvV] [--color {always,never,auto}] [-C DIR] [--pdb] [-e ENV | -D DIR | -E] [--use-env-repo] [--sorted-profile STAT] [--lines LINES] [--stacktrace] [--print-shell-vars PRINT_SHELL_VARS] COMMAND ... ``` **Optional arguments** `-h, --help` show this help message and exit `-H, --all-help` show help for all commands (same as spack help –all) `--color {always,never,auto}` when to colorize output (default: auto) `-C DIR, --config-scope DIR` add a custom configuration scope `-d, --debug` write out debug logs during compile `--pdb` run spack under the pdb debugger `-e ENV, --env ENV` run with a specific environment (see spack env) `-D DIR, --env-dir DIR` run with an environment directory (ignore named environments) `-E, --no-env` run without any environments activated (see spack env) `--use-env-repo` when running in an environment, use its package repository `-k, --insecure` do not check ssl certificates when downloading `-l, --enable-locks` use filesystem locking (default) `-L, --disable-locks` do not use filesystem locking (unsafe) `-m, --mock` use mock packages instead of real ones `-p, --profile` profile execution using cProfile `--sorted-profile STAT` profile and sort by one or more of: [line, pcalls, ncalls, time, nfl, file, cumulative, filename, calls, cumtime, module, name, stdname, tottime] `--lines LINES` lines of profile output or ‘all’ (default: 20) `-v, --verbose` print additional output during builds `--stacktrace` add stacktraces to all printed statements `-V, --version` show version number and exit `--print-shell-vars PRINT_SHELL_VARS` print info needed by setup-env.[c]sh **Subcommands** | | | | | | --- | --- | --- | --- | | * [activate](#spack-activate) * [add](#spack-add) * [arch](#spack-arch) * [blame](#spack-blame) * [bootstrap](#spack-bootstrap) * [build](#spack-build) * [build-env](#spack-build-env) * [buildcache](#spack-buildcache) * 
[cd](#spack-cd) * [checksum](#spack-checksum) * [clean](#spack-clean) * [clone](#spack-clone) * [commands](#spack-commands) * [compiler](#spack-compiler) * [compilers](#spack-compilers) * [concretize](#spack-concretize) | * [config](#spack-config) * [configure](#spack-configure) * [create](#spack-create) * [deactivate](#spack-deactivate) * [debug](#spack-debug) * [dependencies](#spack-dependencies) * [dependents](#spack-dependents) * [diy](#spack-diy) * [docs](#spack-docs) * [edit](#spack-edit) * [env](#spack-env) * [extensions](#spack-extensions) * [fetch](#spack-fetch) * [find](#spack-find) * [flake8](#spack-flake8) * [gpg](#spack-gpg) | * [graph](#spack-graph) * [help](#spack-help) * [info](#spack-info) * [install](#spack-install) * [license](#spack-license) * [list](#spack-list) * [load](#spack-load) * [location](#spack-location) * [log-parse](#spack-log-parse) * [mirror](#spack-mirror) * [module](#spack-module) * [patch](#spack-patch) * [pkg](#spack-pkg) * [providers](#spack-providers) * [pydoc](#spack-pydoc) * [python](#spack-python) | * [reindex](#spack-reindex) * [remove](#spack-remove) * [repo](#spack-repo) * [restage](#spack-restage) * [setup](#spack-setup) * [spec](#spack-spec) * [stage](#spack-stage) * [test](#spack-test) * [uninstall](#spack-uninstall) * [unload](#spack-unload) * [unuse](#spack-unuse) * [url](#spack-url) * [use](#spack-use) * [versions](#spack-versions) * [view](#spack-view) | --- ### spack activate[¶](#spack-activate) activate a package extension ``` spack activate [-hf] [-v VIEW] ... ``` [More documentation](index.html#cmd-spack-activate) **Positional arguments** spec spec of package extension to activate **Optional arguments** `-h, --help` show this help message and exit `-f, --force` activate without first activating dependencies `-v VIEW, --view VIEW` the view to operate on --- ### spack add[¶](#spack-add) add a spec to an environment ``` spack add [-h] ... 
``` **Positional arguments** specs specs of packages to add **Optional arguments** `-h, --help` show this help message and exit --- ### spack arch[¶](#spack-arch) print architecture information about this machine ``` spack arch [-h] [-p | -o | -t] ``` **Optional arguments** `-h, --help` show this help message and exit `-p, --platform` print only the platform `-o, --operating-system` print only the operating system `-t, --target` print only the target --- ### spack blame[¶](#spack-blame) show contributors to packages ``` spack blame [-h] [-t | -p | -g] package_name ``` **Positional arguments** package_name name of package to show contributions for, or path to a file in the spack repo **Optional arguments** `-h, --help` show this help message and exit `-t, --time` sort by last modification date (default) `-p, --percent` sort by percent of code `-g, --git` show git blame output instead of summary --- ### spack bootstrap[¶](#spack-bootstrap) Bootstrap packages needed for spack to run smoothly ``` spack bootstrap [-hnv] [-j JOBS] [--keep-prefix] [--keep-stage] [--clean | --dirty] ``` **Optional arguments** `-h, --help` show this help message and exit `-j JOBS, --jobs JOBS` explicitly set number of make jobs (default: #cpus) `--keep-prefix` don’t remove the install prefix if installation fails `--keep-stage` don’t remove the build stage if installation succeeds `-n, --no-checksum` do not use checksums to verify downloaded files (unsafe) `-v, --verbose` display verbose build output while installing `--clean` unset harmful variables in the build environment (default) `--dirty` preserve user environment in the spack build environment (danger!) --- ### spack build[¶](#spack-build) stops at build stage when installing a package, if possible ``` spack build [-hv] ...
``` **Positional arguments** package spec of the package to install **Optional arguments** `-h, --help` show this help message and exit `-v, --verbose` print additional output during builds --- ### spack build-env[¶](#spack-build-env) show install environment for a spec, and run commands ``` spack build-env [-h] [--clean] [--dirty] ... ``` **Positional arguments** spec specs of package environment to emulate **Optional arguments** `-h, --help` show this help message and exit `--clean` unset harmful variables in the build environment (default) `--dirty` preserve user environment in the spack build environment (danger!) --- ### spack buildcache[¶](#spack-buildcache) create, download and install binary packages ``` spack buildcache [-h] SUBCOMMAND ... ``` [More documentation](index.html#cmd-spack-buildcache) **Optional arguments** `-h, --help` show this help message and exit **Subcommands** | | | | | | --- | --- | --- | --- | | * [buildcache create](#spack-buildcache-create) | * [buildcache install](#spack-buildcache-install) | * [buildcache list](#spack-buildcache-list) | * [buildcache keys](#spack-buildcache-keys) | --- #### spack buildcache create[¶](#spack-buildcache-create) ``` spack buildcache create [-hrfua] [-k key] [-d directory] ... ``` **Positional arguments** packages specs of packages to create buildcache for **Optional arguments** `-h, --help` show this help message and exit `-r, --rel` make all rpaths relative before creating tarballs. `-f, --force` overwrite tarball if it exists. `-u, --unsigned` create unsigned buildcache tarballs for testing `-a, --allow_root` allow install root string in binary files after RPATH substitution `-k key, --key key` Key for signing. `-d directory, --directory directory` directory in which to save the tarballs. --- #### spack buildcache install[¶](#spack-buildcache-install) ``` spack buildcache install [-hfmau] ... 
``` **Positional arguments** packages specs of packages to install buildcache for **Optional arguments** `-h, --help` show this help message and exit `-f, --force` overwrite install directory if it exists. `-m, --multiple` allow all matching packages `-a, --allow_root` allow install root string in binary files after RPATH substitution `-u, --unsigned` install unsigned buildcache tarballs for testing --- #### spack buildcache list[¶](#spack-buildcache-list) ``` spack buildcache list [-hf] ... ``` **Positional arguments** packages specs of packages to search for **Optional arguments** `-h, --help` show this help message and exit `-f, --force` force new download of specs --- #### spack buildcache keys[¶](#spack-buildcache-keys) ``` spack buildcache keys [-hitf] ``` **Optional arguments** `-h, --help` show this help message and exit `-i, --install` install Keys pulled from mirror `-t, --trust` trust all downloaded keys `-f, --force` force new download of keys --- ### spack cd[¶](#spack-cd) cd to spack directories in the shell ``` spack cd [-h] [-m | -r | -i | -p | -P | -s | -S | -b | -e ENV] ... ``` [More documentation](index.html#cmd-spack-cd) **Positional arguments** spec spec of package to fetch directory for **Optional arguments** `-h, --help` show this help message and exit `-m, --module-dir` spack python module directory `-r, --spack-root` spack installation root `-i, --install-dir` install prefix for spec (spec need not be installed) `-p, --package-dir` directory enclosing a spec’s package.py file `-P, --packages` top-level packages directory for Spack `-s, --stage-dir` stage directory for a spec `-S, --stages` top level stage directory `-b, --build-dir` checked out or expanded source directory for a spec (requires it to be staged first) `-e ENV, --env ENV` location of an environment managed by spack --- ### spack checksum[¶](#spack-checksum) checksum available versions of a package ``` spack checksum [-h] [--keep-stage] package ... 
``` [More documentation](index.html#cmd-spack-checksum) **Positional arguments** package package to checksum versions for versions versions to generate checksums for **Optional arguments** `-h, --help` show this help message and exit `--keep-stage` don’t clean up staging area when command completes --- ### spack clean[¶](#spack-clean) remove temporary build files and/or downloaded archives ``` spack clean [-hsdmpa] ... ``` [More documentation](index.html#cmd-spack-clean) **Positional arguments** specs removes the build stages and tarballs for specs **Optional arguments** `-h, --help` show this help message and exit `-s, --stage` remove all temporary build stages (default) `-d, --downloads` remove cached downloads `-m, --misc-cache` remove long-lived caches, like the virtual package index `-p, --python-cache` remove .pyc, .pyo files and __pycache__ folders `-a, --all` equivalent to -sdmp --- ### spack clone[¶](#spack-clone) create a new installation of spack in another prefix ``` spack clone [-h] [-r REMOTE] prefix ``` **Positional arguments** prefix name of prefix where we should install spack **Optional arguments** `-h, --help` show this help message and exit `-r REMOTE, --remote REMOTE` name of the remote to clone from --- ### spack commands[¶](#spack-commands) list available spack commands ``` spack commands [-h] [--format {names,subcommands,rst}] ... ``` **Positional arguments** documented_commands list of documented commands to cross-references **Optional arguments** `-h, --help` show this help message and exit `--format {names,subcommands,rst}` format to be used to print the output (default: names) --- ### spack compiler[¶](#spack-compiler) manage compilers ``` spack compiler [-h] SUBCOMMAND ... 
``` **Optional arguments** `-h, --help` show this help message and exit **Subcommands** | | | | | | --- | --- | --- | --- | | * [compiler find](#spack-compiler-find) | * [compiler remove](#spack-compiler-remove) | * [compiler list](#spack-compiler-list) | * [compiler info](#spack-compiler-info) | --- #### spack compiler find[¶](#spack-compiler-find) ``` spack compiler find [-h] [--scope {defaults,system,site,user}[/PLATFORM]] ... ``` [More documentation](index.html#cmd-spack-compiler-find) **Positional arguments** add_paths **Optional arguments** `-h, --help` show this help message and exit `--scope {defaults,system,site,user}[/PLATFORM]` configuration scope to modify --- #### spack compiler remove[¶](#spack-compiler-remove) ``` spack compiler remove [-ha] [--scope {defaults,system,site,user}[/PLATFORM]] compiler_spec ``` **Positional arguments** compiler_spec **Optional arguments** `-h, --help` show this help message and exit `-a, --all` remove ALL compilers that match spec `--scope {defaults,system,site,user}[/PLATFORM]` configuration scope to modify --- #### spack compiler list[¶](#spack-compiler-list) ``` spack compiler list [-h] [--scope {defaults,system,site,user}[/PLATFORM]] ``` **Optional arguments** `-h, --help` show this help message and exit `--scope {defaults,system,site,user}[/PLATFORM]` configuration scope to read from --- #### spack compiler info[¶](#spack-compiler-info) ``` spack compiler info [-h] [--scope {defaults,system,site,user}[/PLATFORM]] compiler_spec ``` [More documentation](index.html#cmd-spack-compiler-info) **Positional arguments** compiler_spec **Optional arguments** `-h, --help` show this help message and exit `--scope {defaults,system,site,user}[/PLATFORM]` configuration scope to read from --- ### spack compilers[¶](#spack-compilers) list available compilers ``` spack compilers [-h] [--scope {defaults,system,site,user}[/PLATFORM]] ``` [More documentation](index.html#cmd-spack-compilers) **Optional arguments** `-h, --help` show this 
help message and exit `--scope {defaults,system,site,user}[/PLATFORM]` configuration scope to read/modify --- ### spack concretize[¶](#spack-concretize) concretize an environment and write a lockfile ``` spack concretize [-hf] ``` **Optional arguments** `-h, --help` show this help message and exit `-f, --force` Re-concretize even if already concretized. --- ### spack config[¶](#spack-config) get and set configuration options ``` spack config [-h] [--scope {defaults,system,site,user}[/PLATFORM]] SUBCOMMAND ... ``` **Optional arguments** `-h, --help` show this help message and exit `--scope {defaults,system,site,user}[/PLATFORM]` configuration scope to read/modify **Subcommands** | | | | | | --- | --- | --- | --- | | * [config get](#spack-config-get) | * [config blame](#spack-config-blame) | * [config edit](#spack-config-edit) | | --- #### spack config get[¶](#spack-config-get) ``` spack config get [-h] [SECTION] ``` [More documentation](index.html#cmd-spack-config-get) **Positional arguments** SECTION configuration section to print. options: %(choices)s **Optional arguments** `-h, --help` show this help message and exit --- #### spack config blame[¶](#spack-config-blame) ``` spack config blame [-h] SECTION ``` [More documentation](index.html#cmd-spack-config-blame) **Positional arguments** SECTION configuration section to print. options: %(choices)s **Optional arguments** `-h, --help` show this help message and exit --- #### spack config edit[¶](#spack-config-edit) ``` spack config edit [-h] [--print-file] [SECTION] ``` **Positional arguments** SECTION configuration section to edit. options: %(choices)s **Optional arguments** `-h, --help` show this help message and exit `--print-file` print the file name that would be edited --- ### spack configure[¶](#spack-configure) stage and configure a package but do not install ``` spack configure [-hv] ... 
``` **Positional arguments** package spec of the package to install **Optional arguments** `-h, --help` show this help message and exit `-v, --verbose` print additional output during builds --- ### spack create[¶](#spack-create) create a new package file ``` spack create [-hf] [--keep-stage] [-n NAME] [-t TEMPLATE] [-r REPO] [-N NAMESPACE] [url] ``` [More documentation](index.html#cmd-spack-create) **Positional arguments** url url of package archive **Optional arguments** `-h, --help` show this help message and exit `--keep-stage` don’t clean up staging area when command completes `-n NAME, --name NAME` name of the package to create `-t TEMPLATE, --template TEMPLATE` build system template to use. options: %(choices)s `-r REPO, --repo REPO` path to a repository where the package should be created `-N NAMESPACE, --namespace NAMESPACE` specify a namespace for the package. must be the namespace of a repository registered with Spack `-f, --force` overwrite any existing package file with the same name --- ### spack deactivate[¶](#spack-deactivate) deactivate a package extension ``` spack deactivate [-hfa] [-v VIEW] ... ``` [More documentation](index.html#cmd-spack-deactivate) **Positional arguments** spec spec of package extension to deactivate **Optional arguments** `-h, --help` show this help message and exit `-f, --force` run deactivation even if spec is NOT currently activated `-v VIEW, --view VIEW` the view to operate on `-a, --all` deactivate all extensions of an extendable package, or deactivate an extension AND its dependencies --- ### spack debug[¶](#spack-debug) debugging commands for troubleshooting Spack ``` spack debug [-h] SUBCOMMAND ... 
``` **Optional arguments** `-h, --help` show this help message and exit **Subcommands** | | | | | | --- | --- | --- | --- | | * [debug create-db-tarball](#spack-debug-create-db-tarball) | | | | --- #### spack debug create-db-tarball[¶](#spack-debug-create-db-tarball) ``` spack debug create-db-tarball [-h] ``` **Optional arguments** `-h, --help` show this help message and exit --- ### spack dependencies[¶](#spack-dependencies) show dependencies of a package ``` spack dependencies [-hitV] ... ``` **Positional arguments** spec spec or package name **Optional arguments** `-h, --help` show this help message and exit `-i, --installed` List installed dependencies of an installed spec, instead of possible dependencies of a package. `-t, --transitive` show all transitive dependencies `-V, --no-expand-virtuals` do not expand virtual dependencies --- ### spack dependents[¶](#spack-dependents) show packages that depend on another ``` spack dependents [-hit] ... ``` **Positional arguments** spec spec or package name **Optional arguments** `-h, --help` show this help message and exit `-i, --installed` List installed dependents of an installed spec, instead of possible dependents of a package. `-t, --transitive` Show all transitive dependents. --- ### spack diy[¶](#spack-diy) do-it-yourself: build from an existing source directory ``` spack diy [-hinq] [-j JOBS] [-d SOURCE_PATH] [--keep-prefix] [--skip-patch] [--clean | --dirty] ... ``` **Positional arguments** spec specs to use for install. must contain package AND version **Optional arguments** `-h, --help` show this help message and exit `-j JOBS, --jobs JOBS` explicitly set number of make jobs, default is #cpus. `-d SOURCE_PATH, --source-path SOURCE_PATH` path to source directory. 
defaults to the current directory `-i, --ignore-dependencies` don’t try to install dependencies of requested packages `-n, --no-checksum` do not use checksums to verify downloaded files (unsafe) `--keep-prefix` do not remove the install prefix if installation fails `--skip-patch` skip patching for the DIY build `-q, --quiet` do not display verbose build output while installing `--clean` unset harmful variables in the build environment (default) `--dirty` preserve user environment in the spack build environment (danger!) --- ### spack docs[¶](#spack-docs) open spack documentation in a web browser ``` spack docs [-h] ``` **Optional arguments** `-h, --help` show this help message and exit --- ### spack edit[¶](#spack-edit) open package files in $EDITOR ``` spack edit [-h] [-b | -c | -d | -t | -m | -r REPO | -N NAMESPACE] [name] ``` [More documentation](index.html#cmd-spack-edit) **Positional arguments** name name of package to edit **Optional arguments** `-h, --help` show this help message and exit `-b, --build-system` edit the build system with the supplied name `-c, --command` edit the command with the supplied name `-d, --docs` edit the docs with the supplied name `-t, --test` edit the test with the supplied name `-m, --module` edit the main spack module with the supplied name `-r REPO, --repo REPO` path to repo to edit package in `-N NAMESPACE, --namespace NAMESPACE` namespace of package to edit --- ### spack env[¶](#spack-env) manage virtual environments ``` spack env [-h] SUBCOMMAND ...
``` [More documentation](index.html#cmd-spack-env) **Optional arguments** `-h, --help` show this help message and exit **Subcommands** | | | | | | --- | --- | --- | --- | | * [env activate](#spack-env-activate) * [env deactivate](#spack-env-deactivate) | * [env create](#spack-env-create) * [env remove](#spack-env-remove) | * [env list](#spack-env-list) * [env status](#spack-env-status) | * [env loads](#spack-env-loads) | --- #### spack env activate[¶](#spack-env-activate) ``` spack env activate [-hp] [--sh | --csh | -d] env ``` **Positional arguments** env name of environment to activate **Optional arguments** `-h, --help` show this help message and exit `--sh` print sh commands to activate the environment `--csh` print csh commands to activate the environment `-d, --dir` force spack to treat env as a directory, not a name `-p, --prompt` decorate the command line prompt when activating --- #### spack env deactivate[¶](#spack-env-deactivate) ``` spack env deactivate [-h] [--sh | --csh] ``` **Optional arguments** `-h, --help` show this help message and exit `--sh` print sh commands to deactivate the environment `--csh` print csh commands to deactivate the environment --- #### spack env create[¶](#spack-env-create) ``` spack env create [-hd] ENV [envfile] ``` **Positional arguments** ENV name of environment to create envfile optional init file; can be spack.yaml or spack.lock **Optional arguments** `-h, --help` show this help message and exit `-d, --dir` create an environment in a specific directory --- #### spack env remove[¶](#spack-env-remove) ``` spack env remove [-hy] ENV [ENV ...] 
``` **Positional arguments** ENV environment(s) to remove **Optional arguments** `-h, --help` show this help message and exit `-y, --yes-to-all` assume “yes” is the answer to every confirmation request --- #### spack env list[¶](#spack-env-list) ``` spack env list [-h] ``` **Optional arguments** `-h, --help` show this help message and exit --- #### spack env status[¶](#spack-env-status) ``` spack env status [-h] ``` **Optional arguments** `-h, --help` show this help message and exit --- #### spack env loads[¶](#spack-env-loads) ``` spack env loads [-hr] [-m {tcl,lmod}] [--input-only] [-p PREFIX] [-x EXCLUDE] [env] ``` **Positional arguments** env name of env to generate loads file for **Optional arguments** `-h, --help` show this help message and exit `-m {tcl,lmod}, --module-type {tcl,lmod}` type of module system to generate loads for `--input-only` generate input for module command (instead of a shell script) `-p PREFIX, --prefix PREFIX` prepend to module names when issuing module load commands `-x EXCLUDE, --exclude EXCLUDE` exclude package from output; may be specified multiple times `-r, --dependencies` recursively traverse spec dependencies --- ### spack extensions[¶](#spack-extensions) list extensions for package ``` spack extensions [-h] [-l | -p | -d] [-s TYPE] [-v VIEW] ... ``` [More documentation](index.html#cmd-spack-extensions) **Positional arguments** spec spec of package to list extensions for **Optional arguments** `-h, --help` show this help message and exit `-l, --long` show dependency hashes as well as versions `-p, --paths` show paths to extension install directories `-d, --deps` show full dependency DAG of extensions `-s TYPE, --show TYPE` one of packages, installed, activated, all `-v VIEW, --view VIEW` the view to operate on --- ### spack fetch[¶](#spack-fetch) fetch archives for packages ``` spack fetch [-hnmD] ... 
``` [More documentation](index.html#cmd-spack-fetch) **Positional arguments** packages specs of packages to fetch **Optional arguments** `-h, --help` show this help message and exit `-n, --no-checksum` do not use checksums to verify downloaded files (unsafe) `-m, --missing` fetch only missing (not yet installed) dependencies `-D, --dependencies` also fetch all dependencies --- ### spack find[¶](#spack-find) list and search installed packages ``` spack find [-hlLcfumvMN] [-s | -p | -d] [-t TAGS] [--show-full-compiler] [-x | -X] [--start-date START_DATE] [--end-date END_DATE] ... ``` [More documentation](index.html#cmd-spack-find) **Positional arguments** constraint constraint to select a subset of installed packages **Optional arguments** `-h, --help` show this help message and exit `-s, --short` show only specs (default) `-p, --paths` show paths to package install directories `-d, --deps` show full dependency DAG of installed packages `-l, --long` show dependency hashes as well as versions `-L, --very-long` show full dependency hashes as well as versions `-t TAGS, --tags TAGS` filter a package query by tags `-c, --show-concretized` show concretized specs in an environment `-f, --show-flags` show spec compiler flags `--show-full-compiler` show full compiler specs `-x, --explicit` show only specs that were installed explicitly `-X, --implicit` show only specs that were installed as dependencies `-u, --unknown` show only specs Spack does not have a package for `-m, --missing` show missing dependencies as well as installed specs `-v, --variants` show variants in output (can be long) `-M, --only-missing` show only missing dependencies `-N, --namespace` show fully qualified package names `--start-date START_DATE` earliest date of installation [YYYY-MM-DD] `--end-date END_DATE` latest date of installation [YYYY-MM-DD] --- ### spack flake8[¶](#spack-flake8) runs source code style checks on Spack. requires flake8 ``` spack flake8 [-hkaorU] [-b BASE] ...
``` **Positional arguments** files specific files to check **Optional arguments** `-h, --help` show this help message and exit `-b BASE, --base BASE` select base branch for collecting list of modified files `-k, --keep-temp` do not delete temporary directory where flake8 runs. use for debugging, to see filtered files `-a, --all` check all files, not just changed files `-o, --output` send filtered files to stdout as well as temp files `-r, --root-relative` print root-relative paths (default: cwd-relative) `-U, --no-untracked` exclude untracked files from checks --- ### spack gpg[¶](#spack-gpg) handle GPG actions for spack ``` spack gpg [-h] SUBCOMMAND ... ``` [More documentation](index.html#cmd-spack-gpg) **Optional arguments** `-h, --help` show this help message and exit --- ### spack graph[¶](#spack-graph) generate graphs of package dependency relationships ``` spack graph [-hnsi] [-a | -d] [-t DEPTYPE] ... ``` [More documentation](index.html#cmd-spack-graph) **Positional arguments** specs specs of packages to graph **Optional arguments** `-h, --help` show this help message and exit `-a, --ascii` draw graph as ascii to stdout (default) `-d, --dot` generate graph in dot format and print to stdout `-n, --normalize` skip concretization; only print normalized spec `-s, --static` use static information from packages, not dynamic spec info `-i, --installed` graph all installed specs in dot format (implies `--dot`) `-t DEPTYPE, --deptype DEPTYPE` comma-separated list of deptypes to traverse.
default=build,link,run,test --- ### spack help[¶](#spack-help) get help on spack and its commands ``` spack help [-ha] [--spec] [help_command] ``` [More documentation](index.html#cmd-spack-help) **Positional arguments** help_command command to get help on **Optional arguments** `-h, --help` show this help message and exit `-a, --all` print all available commands `--spec` show help on the package spec syntax --- ### spack info[¶](#spack-info) get detailed information on a particular package ``` spack info [-h] PACKAGE ``` [More documentation](index.html#cmd-spack-info) **Positional arguments** PACKAGE name of package to get info for **Optional arguments** `-h, --help` show this help message and exit --- ### spack install[¶](#spack-install) build and install packages ``` spack install [-hInvy] [--only {package,dependencies}] [-j JOBS] [--overwrite] [--keep-prefix] [--keep-stage] [--dont-restage] [--use-cache | --no-cache] [--show-log-on-error] [--source] [--fake] [--only-concrete] [-f SPEC_YAML_FILE] [--clean | --dirty] [--test {root,all} | --run-tests] [--log-format {junit,None,cdash}] [--log-file LOG_FILE] [--cdash-upload-url CDASH_UPLOAD_URL] ... ``` [More documentation](index.html#cmd-spack-install) **Positional arguments** package spec of the package to install **Optional arguments** `-h, --help` show this help message and exit `--only {package,dependencies}` select the mode of installation. The default is to install the package along with all its dependencies. Alternatively one can decide to install only the package or only the dependencies `-j JOBS, --jobs JOBS` explicitly set number of make jobs, default is #cpus. `-I, --install-status` show install status of packages.
packages can be: installed [+], missing and needed by an installed package [-], or not installed (no annotation) `--overwrite` reinstall an existing spec, even if it has dependents `--keep-prefix` don’t remove the install prefix if installation fails `--keep-stage` don’t remove the build stage if installation succeeds `--dont-restage` if a partial install is detected, don’t delete prior state `--use-cache` check for pre-built Spack packages in mirrors (default) `--no-cache` do not check for pre-built Spack packages in mirrors `--show-log-on-error` print full build log to stderr if build fails `--source` install source files in prefix `-n, --no-checksum` do not use checksums to verify downloaded files (unsafe) `-v, --verbose` display verbose build output while installing `--fake` fake install for debug purposes. `--only-concrete` (with environment) only install already concretized specs `-f SPEC_YAML_FILE, --file SPEC_YAML_FILE` install from file. Read specs to install from .yaml files `--clean` unset harmful variables in the build environment (default) `--dirty` preserve user environment in the spack build environment (danger!) `--test {root,all}` If ‘root’ is chosen, run package tests during installation for top-level packages (but skip tests for dependencies). If ‘all’ is chosen, run package tests during installation for all packages. If neither is chosen, don’t run tests for any packages. `--run-tests` run package tests during installation (same as `--test=all`) `--log-format {junit,None,cdash}` format to be used for log files `--log-file LOG_FILE` filename for the log file. if not passed, a default will be used `--cdash-upload-url CDASH_UPLOAD_URL` CDash URL where reports will be uploaded `-y, --yes-to-all` assume “yes” is the answer to every confirmation request --- ### spack license[¶](#spack-license) list and check license headers on files in spack ``` spack license [-h] SUBCOMMAND ...
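# Illustrative usage:
spack license list-files   # list files that should carry a license header
spack license verify       # report files with missing or wrong headers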
``` **Optional arguments** `-h, --help` show this help message and exit **Subcommands** | | | | | | --- | --- | --- | --- | | * [license list-files](#spack-license-list-files) | * [license verify](#spack-license-verify) | | | --- #### spack license list-files[¶](#spack-license-list-files) ``` spack license list-files [-h] ``` **Optional arguments** `-h, --help` show this help message and exit --- #### spack license verify[¶](#spack-license-verify) ``` spack license verify [-h] [--root ROOT] ``` **Optional arguments** `-h, --help` show this help message and exit `--root ROOT` scan a different prefix for license issues --- ### spack list[¶](#spack-list) list and search available packages ``` spack list [-hd] [--format {rst,html,name_only}] [-t TAGS] ... ``` [More documentation](index.html#cmd-spack-list) **Positional arguments** filter optional case-insensitive glob patterns to filter results **Optional arguments** `-h, --help` show this help message and exit `-d, --search-description` filtering will also search the description for a match `--format {rst,html,name_only}` format to be used to print the output [default: name_only] `-t TAGS, --tags TAGS` filter a package query by tags --- ### spack load[¶](#spack-load) add package to environment using module load ``` spack load [-hr] ... ``` [More documentation](index.html#cmd-spack-load) **Positional arguments** spec spec of package to load with modules **Optional arguments** `-h, --help` show this help message and exit `-r, --dependencies` recursively traverse spec dependencies --- ### spack location[¶](#spack-location) print out locations of packages and spack directories ``` spack location [-h] [-m | -r | -i | -p | -P | -s | -S | -b | -e ENV] ... 
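# Illustrative usage (spec name is an example):
spack location -i zlib            # install prefix of an installed spec
cd "$(spack location -p zlib)"    # directory containing its package.py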
``` [More documentation](index.html#cmd-spack-location) **Positional arguments** spec spec of package to fetch directory for **Optional arguments** `-h, --help` show this help message and exit `-m, --module-dir` spack python module directory `-r, --spack-root` spack installation root `-i, --install-dir` install prefix for spec (spec need not be installed) `-p, --package-dir` directory enclosing a spec’s package.py file `-P, --packages` top-level packages directory for Spack `-s, --stage-dir` stage directory for a spec `-S, --stages` top level stage directory `-b, --build-dir` checked out or expanded source directory for a spec (requires it to be staged first) `-e ENV, --env ENV` location of an environment managed by spack --- ### spack log-parse[¶](#spack-log-parse) filter errors and warnings from build logs ``` spack log-parse [-hp] [--show SHOW] [-c CONTEXT] [-w WIDTH] [-j JOBS] file ``` **Positional arguments** file a log file containing build output, or - for stdin **Optional arguments** `-h, --help` show this help message and exit `--show SHOW` comma-separated list of what to show; options: errors, warnings `-c CONTEXT, --context CONTEXT` lines of context to show around lines of interest `-p, --profile` print out a profile of time spent in regexes during parse `-w WIDTH, --width WIDTH` wrap width: auto-size to terminal by default; 0 for no wrap `-j JOBS, --jobs JOBS` number of jobs to parse log file (default: 1 for short logs, ncpus for long logs) --- ### spack mirror[¶](#spack-mirror) manage mirrors (source and binary) ``` spack mirror [-hn] SUBCOMMAND ... 
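# Illustrative workflow (path, mirror name, and spec are examples):
spack mirror create -d /tmp/my-mirror -D zlib
spack mirror add my-mirror file:///tmp/my-mirror
spack mirror list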
``` [More documentation](index.html#cmd-spack-mirror) **Optional arguments** `-h, --help` show this help message and exit `-n, --no-checksum` do not use checksums to verify downloaded files (unsafe) **Subcommands** | | | | | | --- | --- | --- | --- | | * [mirror create](#spack-mirror-create) | * [mirror add](#spack-mirror-add) | * [mirror remove](#spack-mirror-remove) | * [mirror list](#spack-mirror-list) | --- #### spack mirror create[¶](#spack-mirror-create) ``` spack mirror create [-hDo] [-d DIRECTORY] [-f FILE] ... ``` [More documentation](index.html#cmd-spack-mirror-create) **Positional arguments** specs specs of packages to put in mirror **Optional arguments** `-h, --help` show this help message and exit `-d DIRECTORY, --directory DIRECTORY` directory in which to create mirror `-f FILE, --file FILE` file with specs of packages to put in mirror `-D, --dependencies` also fetch all dependencies `-o, --one-version-per-spec` only fetch one ‘preferred’ version per spec, not all known --- #### spack mirror add[¶](#spack-mirror-add) ``` spack mirror add [-h] [--scope {defaults,system,site,user}[/PLATFORM]] name url ``` [More documentation](index.html#cmd-spack-mirror-add) **Positional arguments** name mnemonic name for mirror url url of mirror directory from ‘spack mirror create’ **Optional arguments** `-h, --help` show this help message and exit `--scope {defaults,system,site,user}[/PLATFORM]` configuration scope to modify --- #### spack mirror remove[¶](#spack-mirror-remove) ``` spack mirror remove [-h] [--scope {defaults,system,site,user}[/PLATFORM]] name ``` [More documentation](index.html#cmd-spack-mirror-remove) **Positional arguments** name name of mirror to remove **Optional arguments** `-h, --help` show this help message and exit `--scope {defaults,system,site,user}[/PLATFORM]` configuration scope to modify --- #### spack mirror list[¶](#spack-mirror-list) ``` spack mirror list [-h] [--scope {defaults,system,site,user}[/PLATFORM]] ``` [More
documentation](index.html#cmd-spack-mirror-list) **Optional arguments** `-h, --help` show this help message and exit `--scope {defaults,system,site,user}[/PLATFORM]` configuration scope to read from --- ### spack module[¶](#spack-module) manipulate module files ``` spack module [-h] SUBCOMMAND ... ``` **Optional arguments** `-h, --help` show this help message and exit **Subcommands** | | | | | | --- | --- | --- | --- | | * [module dotkit](#spack-module-dotkit) | * [module lmod](#spack-module-lmod) | * [module tcl](#spack-module-tcl) | | --- #### spack module dotkit[¶](#spack-module-dotkit) ``` spack module dotkit [-h] SUBCOMMAND ... ``` **Optional arguments** `-h, --help` show this help message and exit **Subcommands** | | | | | | --- | --- | --- | --- | | * [module dotkit refresh](#spack-module-dotkit-refresh) | * [module dotkit find](#spack-module-dotkit-find) | * [module dotkit rm](#spack-module-dotkit-rm) | * [module dotkit loads](#spack-module-dotkit-loads) | --- ##### spack module dotkit refresh[¶](#spack-module-dotkit-refresh) ``` spack module dotkit refresh [-hy] [--delete-tree] ... ``` **Positional arguments** constraint constraint to select a subset of installed packages **Optional arguments** `-h, --help` show this help message and exit `--delete-tree` delete the module file tree before refresh `-y, --yes-to-all` assume “yes” is the answer to every confirmation request --- ##### spack module dotkit find[¶](#spack-module-dotkit-find) ``` spack module dotkit find [-hr] [--full-path] ... ``` **Positional arguments** constraint constraint to select a subset of installed packages **Optional arguments** `-h, --help` show this help message and exit `--full-path` display full path to module file `-r, --dependencies` recursively traverse spec dependencies --- ##### spack module dotkit rm[¶](#spack-module-dotkit-rm) ``` spack module dotkit rm [-hy] ... 
``` **Positional arguments** constraint constraint to select a subset of installed packages **Optional arguments** `-h, --help` show this help message and exit `-y, --yes-to-all` assume “yes” is the answer to every confirmation request --- ##### spack module dotkit loads[¶](#spack-module-dotkit-loads) ``` spack module dotkit loads [-hr] [--input-only] [-p PREFIX] [-x EXCLUDE] ... ``` **Positional arguments** constraint constraint to select a subset of installed packages **Optional arguments** `-h, --help` show this help message and exit `--input-only` generate input for module command (instead of a shell script) `-p PREFIX, --prefix PREFIX` prepend to module names when issuing module load commands `-x EXCLUDE, --exclude EXCLUDE` exclude package from output; may be specified multiple times `-r, --dependencies` recursively traverse spec dependencies --- #### spack module lmod[¶](#spack-module-lmod) ``` spack module lmod [-h] SUBCOMMAND ... ``` **Optional arguments** `-h, --help` show this help message and exit **Subcommands** | | | | | | --- | --- | --- | --- | | * [module lmod refresh](#spack-module-lmod-refresh) * [module lmod find](#spack-module-lmod-find) | * [module lmod rm](#spack-module-lmod-rm) | * [module lmod loads](#spack-module-lmod-loads) | * [module lmod setdefault](#spack-module-lmod-setdefault) | --- ##### spack module lmod refresh[¶](#spack-module-lmod-refresh) ``` spack module lmod refresh [-hy] [--delete-tree] ... ``` **Positional arguments** constraint constraint to select a subset of installed packages **Optional arguments** `-h, --help` show this help message and exit `--delete-tree` delete the module file tree before refresh `-y, --yes-to-all` assume “yes” is the answer to every confirmation request --- ##### spack module lmod find[¶](#spack-module-lmod-find) ``` spack module lmod find [-hr] [--full-path] ... 
``` **Positional arguments** constraint constraint to select a subset of installed packages **Optional arguments** `-h, --help` show this help message and exit `--full-path` display full path to module file `-r, --dependencies` recursively traverse spec dependencies --- ##### spack module lmod rm[¶](#spack-module-lmod-rm) ``` spack module lmod rm [-hy] ... ``` **Positional arguments** constraint constraint to select a subset of installed packages **Optional arguments** `-h, --help` show this help message and exit `-y, --yes-to-all` assume “yes” is the answer to every confirmation request --- ##### spack module lmod loads[¶](#spack-module-lmod-loads) ``` spack module lmod loads [-hr] [--input-only] [-p PREFIX] [-x EXCLUDE] ... ``` **Positional arguments** constraint constraint to select a subset of installed packages **Optional arguments** `-h, --help` show this help message and exit `--input-only` generate input for module command (instead of a shell script) `-p PREFIX, --prefix PREFIX` prepend to module names when issuing module load commands `-x EXCLUDE, --exclude EXCLUDE` exclude package from output; may be specified multiple times `-r, --dependencies` recursively traverse spec dependencies --- ##### spack module lmod setdefault[¶](#spack-module-lmod-setdefault) ``` spack module lmod setdefault [-h] ... ``` **Positional arguments** constraint constraint to select a subset of installed packages **Optional arguments** `-h, --help` show this help message and exit --- #### spack module tcl[¶](#spack-module-tcl) ``` spack module tcl [-h] SUBCOMMAND ... 
``` **Optional arguments** `-h, --help` show this help message and exit **Subcommands** | | | | | | --- | --- | --- | --- | | * [module tcl refresh](#spack-module-tcl-refresh) | * [module tcl find](#spack-module-tcl-find) | * [module tcl rm](#spack-module-tcl-rm) | * [module tcl loads](#spack-module-tcl-loads) | --- ##### spack module tcl refresh[¶](#spack-module-tcl-refresh) ``` spack module tcl refresh [-hy] [--delete-tree] ... ``` **Positional arguments** constraint constraint to select a subset of installed packages **Optional arguments** `-h, --help` show this help message and exit `--delete-tree` delete the module file tree before refresh `-y, --yes-to-all` assume “yes” is the answer to every confirmation request --- ##### spack module tcl find[¶](#spack-module-tcl-find) ``` spack module tcl find [-hr] [--full-path] ... ``` **Positional arguments** constraint constraint to select a subset of installed packages **Optional arguments** `-h, --help` show this help message and exit `--full-path` display full path to module file `-r, --dependencies` recursively traverse spec dependencies --- ##### spack module tcl rm[¶](#spack-module-tcl-rm) ``` spack module tcl rm [-hy] ... ``` **Positional arguments** constraint constraint to select a subset of installed packages **Optional arguments** `-h, --help` show this help message and exit `-y, --yes-to-all` assume “yes” is the answer to every confirmation request --- ##### spack module tcl loads[¶](#spack-module-tcl-loads) ``` spack module tcl loads [-hr] [--input-only] [-p PREFIX] [-x EXCLUDE] ... 
``` **Positional arguments** constraint constraint to select a subset of installed packages **Optional arguments** `-h, --help` show this help message and exit `--input-only` generate input for module command (instead of a shell script) `-p PREFIX, --prefix PREFIX` prepend to module names when issuing module load commands `-x EXCLUDE, --exclude EXCLUDE` exclude package from output; may be specified multiple times `-r, --dependencies` recursively traverse spec dependencies --- ### spack patch[¶](#spack-patch) patch expanded archive sources in preparation for install ``` spack patch [-hn] ... ``` [More documentation](index.html#cmd-spack-patch) **Positional arguments** packages specs of packages to stage **Optional arguments** `-h, --help` show this help message and exit `-n, --no-checksum` do not use checksums to verify downloaded files (unsafe) --- ### spack pkg[¶](#spack-pkg) query packages associated with particular git revisions ``` spack pkg [-h] SUBCOMMAND ... ``` **Optional arguments** `-h, --help` show this help message and exit **Subcommands** | | | | | | --- | --- | --- | --- | | * [pkg add](#spack-pkg-add) * [pkg list](#spack-pkg-list) | * [pkg diff](#spack-pkg-diff) | * [pkg added](#spack-pkg-added) | * [pkg removed](#spack-pkg-removed) | --- #### spack pkg add[¶](#spack-pkg-add) ``` spack pkg add [-h] ...
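# Illustrative usage (package name and revision are examples):
spack pkg add zlib       # stage the package's files in the builtin repo's git tree
spack pkg list HEAD~1    # packages present at an earlier revision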
``` **Positional arguments** packages names of packages to add to git repo **Optional arguments** `-h, --help` show this help message and exit --- #### spack pkg list[¶](#spack-pkg-list) ``` spack pkg list [-h] [rev] ``` **Positional arguments** rev revision to list packages for **Optional arguments** `-h, --help` show this help message and exit --- #### spack pkg diff[¶](#spack-pkg-diff) ``` spack pkg diff [-h] [rev1] [rev2] ``` **Positional arguments** rev1 revision to compare against rev2 revision to compare to rev1 (default is HEAD) **Optional arguments** `-h, --help` show this help message and exit --- #### spack pkg added[¶](#spack-pkg-added) ``` spack pkg added [-h] [rev1] [rev2] ``` **Positional arguments** rev1 revision to compare against rev2 revision to compare to rev1 (default is HEAD) **Optional arguments** `-h, --help` show this help message and exit --- #### spack pkg removed[¶](#spack-pkg-removed) ``` spack pkg removed [-h] [rev1] [rev2] ``` **Positional arguments** rev1 revision to compare against rev2 revision to compare to rev1 (default is HEAD) **Optional arguments** `-h, --help` show this help message and exit --- ### spack providers[¶](#spack-providers) list packages that provide a particular virtual package ``` spack providers [-h] [virtual_package [virtual_package ...]] ``` [More documentation](index.html#cmd-spack-providers) **Positional arguments** virtual_package find packages that provide this virtual package **Optional arguments** `-h, --help` show this help message and exit --- ### spack pydoc[¶](#spack-pydoc) run pydoc from within spack ``` spack pydoc [-h] entity ``` **Positional arguments** entity run pydoc help on entity **Optional arguments** `-h, --help` show this help message and exit --- ### spack python[¶](#spack-python) launch an interpreter as spack would launch a command ``` spack python [-h] [-c PYTHON_COMMAND] ... 
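# Illustrative usage (the attribute name may vary between Spack versions):
spack python -c 'import spack; print(spack.spack_version)'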
``` [More documentation](index.html#cmd-spack-python) **Positional arguments** python_args file to run plus arguments **Optional arguments** `-h, --help` show this help message and exit `-c PYTHON_COMMAND` command to execute --- ### spack reindex[¶](#spack-reindex) rebuild Spack’s package database ``` spack reindex [-h] ``` **Optional arguments** `-h, --help` show this help message and exit --- ### spack remove[¶](#spack-remove) remove specs from an environment ``` spack remove [-haf] ... ``` **Positional arguments** specs specs to be removed **Optional arguments** `-h, --help` show this help message and exit `-a, --all` remove all specs from (clear) the environment `-f, --force` remove concretized spec (if any) immediately --- ### spack repo[¶](#spack-repo) manage package source repositories ``` spack repo [-h] SUBCOMMAND ... ``` [More documentation](index.html#cmd-spack-repo) **Optional arguments** `-h, --help` show this help message and exit **Subcommands** | | | | | | --- | --- | --- | --- | | * [repo create](#spack-repo-create) | * [repo list](#spack-repo-list) | * [repo add](#spack-repo-add) | * [repo remove](#spack-repo-remove) | --- #### spack repo create[¶](#spack-repo-create) ``` spack repo create [-h] directory [namespace] ``` **Positional arguments** directory directory to create the repo in namespace namespace to identify packages in the repository. 
defaults to the directory name **Optional arguments** `-h, --help` show this help message and exit --- #### spack repo list[¶](#spack-repo-list) ``` spack repo list [-h] [--scope {defaults,system,site,user}[/PLATFORM]] ``` **Optional arguments** `-h, --help` show this help message and exit `--scope {defaults,system,site,user}[/PLATFORM]` configuration scope to read from --- #### spack repo add[¶](#spack-repo-add) ``` spack repo add [-h] [--scope {defaults,system,site,user}[/PLATFORM]] path ``` **Positional arguments** path path to a Spack package repository directory **Optional arguments** `-h, --help` show this help message and exit `--scope {defaults,system,site,user}[/PLATFORM]` configuration scope to modify --- #### spack repo remove[¶](#spack-repo-remove) ``` spack repo remove [-h] [--scope {defaults,system,site,user}[/PLATFORM]] path_or_namespace ``` **Positional arguments** path_or_namespace path or namespace of a Spack package repository **Optional arguments** `-h, --help` show this help message and exit `--scope {defaults,system,site,user}[/PLATFORM]` configuration scope to modify --- ### spack restage[¶](#spack-restage) revert checked out package source code ``` spack restage [-h] ... ``` [More documentation](index.html#cmd-spack-restage) **Positional arguments** packages specs of packages to restage **Optional arguments** `-h, --help` show this help message and exit --- ### spack setup[¶](#spack-setup) create a configuration script and module, but don’t build ``` spack setup [-hinv] [--clean | --dirty] ... ``` **Positional arguments** spec specs to use for install. 
must contain package AND version **Optional arguments** `-h, --help` show this help message and exit `-i, --ignore-dependencies` do not try to install dependencies of requested packages `-n, --no-checksum` do not use checksums to verify downloaded files (unsafe) `-v, --verbose` display verbose build output while installing `--clean` unset harmful variables in the build environment (default) `--dirty` preserve user environment in the spack build environment (danger!) --- ### spack spec[¶](#spack-spec) show what would be installed, given a spec ``` spack spec [-hlLIyNt] [-c {nodes,edges,paths}] ... ``` [More documentation](index.html#cmd-spack-spec) **Positional arguments** specs specs of packages **Optional arguments** `-h, --help` show this help message and exit `-l, --long` show dependency hashes as well as versions `-L, --very-long` show full dependency hashes as well as versions `-I, --install-status` show install status of packages. packages can be: installed [+], missing and needed by an installed package [-], or not installed (no annotation) `-y, --yaml` print concrete spec as YAML `-c {nodes,edges,paths}, --cover {nodes,edges,paths}` how extensively to traverse the DAG (default: nodes) `-N, --namespaces` show fully qualified package names `-t, --types` show dependency types --- ### spack stage[¶](#spack-stage) expand downloaded archive in preparation for install ``` spack stage [-hn] [-p PATH] ... ``` [More documentation](index.html#cmd-spack-stage) **Positional arguments** specs specs of packages to stage **Optional arguments** `-h, --help` show this help message and exit `-n, --no-checksum` do not use checksums to verify downloaded files (unsafe) `-p PATH, --path PATH` path to stage package, does not add to spack tree --- ### spack test[¶](#spack-test) run spack’s unit tests ``` spack test [-hH] [-l | -L] ...
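# Illustrative usage (test name is an example):
spack test -l          # list basic test names
spack test versions    # run tests whose names match 'versions' (via pytest -k)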
``` [More documentation](index.html#cmd-spack-test) **Positional arguments** tests list of tests to run (will be passed to pytest -k) **Optional arguments** `-h, --help` show this help message and exit `-H, --pytest-help` print full pytest help message, showing advanced options `-l, --list` list basic test names `-L, --long-list` list the entire hierarchy of tests --- ### spack uninstall[¶](#spack-uninstall) remove installed packages ``` spack uninstall [-hfRya] ... ``` [More documentation](index.html#cmd-spack-uninstall) **Positional arguments** packages specs of packages to uninstall **Optional arguments** `-h, --help` show this help message and exit `-f, --force` remove regardless of whether other packages or environments depend on this one `-R, --dependents` also uninstall any packages that depend on the ones given via command line `-y, --yes-to-all` assume “yes” is the answer to every confirmation request `-a, --all` USE CAREFULLY. Remove ALL installed packages that match each supplied spec. e.g., if you run `spack uninstall --all libelf`, ALL versions of libelf are uninstalled. If no spec is supplied, all installed packages will be uninstalled. If used in an environment, all packages in the environment will be uninstalled. --- ### spack unload[¶](#spack-unload) remove package from environment using module unload ``` spack unload [-h] ... ``` **Positional arguments** spec spec of package to unload with modules **Optional arguments** `-h, --help` show this help message and exit --- ### spack unuse[¶](#spack-unuse) remove package from environment using dotkit ``` spack unuse [-h] ... ``` **Positional arguments** spec spec of package to unuse with dotkit **Optional arguments** `-h, --help` show this help message and exit --- ### spack url[¶](#spack-url) debugging tool for url parsing ``` spack url [-h] SUBCOMMAND ...
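# Illustrative usage (URL is an example):
spack url parse http://example.com/foo-1.2.3.tar.gz   # show parsed name/version
spack url summary                                     # parsing stats across all packages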
``` [More documentation](index.html#cmd-spack-url) **Optional arguments** `-h, --help` show this help message and exit **Subcommands** | | | | | | --- | --- | --- | --- | | * [url parse](#spack-url-parse) | * [url list](#spack-url-list) | * [url summary](#spack-url-summary) | * [url stats](#spack-url-stats) | --- #### spack url parse[¶](#spack-url-parse) ``` spack url parse [-hs] url ``` **Positional arguments** url url to parse **Optional arguments** `-h, --help` show this help message and exit `-s, --spider` spider the source page for versions --- #### spack url list[¶](#spack-url-list) ``` spack url list [-hce] [-n | -N | -v | -V] ``` **Optional arguments** `-h, --help` show this help message and exit `-c, --color` color the parsed version and name in the urls shown (versions will be cyan, name red) `-e, --extrapolation` color the versions used for extrapolation as well (additional versions will be green, names magenta) `-n, --incorrect-name` only list urls for which the name was incorrectly parsed `-N, --correct-name` only list urls for which the name was correctly parsed `-v, --incorrect-version` only list urls for which the version was incorrectly parsed `-V, --correct-version` only list urls for which the version was correctly parsed --- #### spack url summary[¶](#spack-url-summary) ``` spack url summary [-h] ``` **Optional arguments** `-h, --help` show this help message and exit --- #### spack url stats[¶](#spack-url-stats) ``` spack url stats [-h] ``` **Optional arguments** `-h, --help` show this help message and exit --- ### spack use[¶](#spack-use) add package to environment using dotkit ``` spack use [-hr] ... 
``` **Positional arguments** spec spec of package to use with dotkit **Optional arguments** `-h, --help` show this help message and exit `-r, --dependencies` recursively traverse spec dependencies --- ### spack versions[¶](#spack-versions) list available versions of a package ``` spack versions [-h] PACKAGE ``` [More documentation](index.html#cmd-spack-versions) **Positional arguments** PACKAGE package to list versions for **Optional arguments** `-h, --help` show this help message and exit --- ### spack view[¶](#spack-view) produce a single-rooted directory view of packages ``` spack view [-hv] [-e EXCLUDE] [-d {true,false,yes,no}] ACTION ... ``` [More documentation](index.html#cmd-spack-view) **Optional arguments** `-h, --help` show this help message and exit `-v, --verbose` If not verbose only warnings/errors will be printed. `-e EXCLUDE, --exclude EXCLUDE` exclude packages with names matching the given regex pattern `-d {true,false,yes,no}, --dependencies {true,false,yes,no}` Link/remove/list dependencies. **Subcommands** | | | | | | --- | --- | --- | --- | | * [view symlink](#spack-view-symlink) | * [view hardlink](#spack-view-hardlink) | * [view remove](#spack-view-remove) | * [view statlink](#spack-view-statlink) | --- #### spack view symlink[¶](#spack-view-symlink) ``` spack view symlink [-hi] path spec [spec ...] ``` **Positional arguments** path path to file system view directory spec seed specs of the packages to view **Optional arguments** `-h, --help` show this help message and exit `-i, --ignore-conflicts` --- #### spack view hardlink[¶](#spack-view-hardlink) ``` spack view hardlink [-hi] path spec [spec ...] 
``` **Positional arguments** path path to file system view directory spec seed specs of the packages to view **Optional arguments** `-h, --help` show this help message and exit `-i, --ignore-conflicts` --- #### spack view remove[¶](#spack-view-remove) ``` spack view remove [-ha] [--no-remove-dependents] path [spec [spec ...]] ``` **Positional arguments** path path to file system view directory spec seed specs of the packages to view **Optional arguments** `-h, --help` show this help message and exit `--no-remove-dependents` Do not remove dependents of specified specs. `-a, --all` act on all specs in view --- #### spack view statlink[¶](#spack-view-statlink) ``` spack view statlink [-h] path [spec [spec ...]] ``` **Positional arguments** path path to file system view directory spec seed specs of the packages to view **Optional arguments** `-h, --help` show this help message and exit Package List[¶](#package-list) --- This is a list of things you can install using Spack. It is automatically generated based on the packages in the latest Spack release. 
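The same information can be queried from the command line with `spack list`; the filter patterns below are illustrative:

```
spack list              # print every available package name
spack list 'r-*'        # case-insensitive glob filter on names
spack list -d mpi       # also match against package descriptions
spack list --format rst # reStructuredText output, as used for this list
```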
Spack currently has 2907 mainline packages: | | | --- | | [abinit](#abinit) | [mummer](#mummer) | [r-c50](#r-c50) | | [abyss](#abyss) | [mumps](#mumps) | [r-callr](#r-callr) | | [accfft](#accfft) | [munge](#munge) | [r-car](#r-car) | | [ack](#ack) | [muparser](#muparser) | [r-caret](#r-caret) | | [activeharmony](#activeharmony) | [muscle](#muscle) | [r-category](#r-category) | | [adept-utils](#adept-utils) | [muse](#muse) | [r-catools](#r-catools) | | [adios](#adios) | [muster](#muster) | [r-cdcfluview](#r-cdcfluview) | | [adios2](#adios2) | [mvapich2](#mvapich2) | [r-cellranger](#r-cellranger) | | [adlbx](#adlbx) | [mxml](#mxml) | [r-checkmate](#r-checkmate) | | [adol-c](#adol-c) | [mxnet](#mxnet) | [r-checkpoint](#r-checkpoint) | | [aegean](#aegean) | [nag](#nag) | [r-chemometrics](#r-chemometrics) | | [aida](#aida) | [nalu](#nalu) | [r-chron](#r-chron) | | [albany](#albany) | [nalu-wind](#nalu-wind) | [r-circlize](#r-circlize) | | [albert](#albert) | [namd](#namd) | [r-class](#r-class) | | [alglib](#alglib) | [nano](#nano) | [r-classint](#r-classint) | | [allinea-forge](#allinea-forge) | [nanoflann](#nanoflann) | [r-cli](#r-cli) | | [allinea-reports](#allinea-reports) | [nanopb](#nanopb) | [r-clipr](#r-clipr) | | [allpaths-lg](#allpaths-lg) | [nasm](#nasm) | [r-cluster](#r-cluster) | | [alquimia](#alquimia) | [nauty](#nauty) | [r-clusterprofiler](#r-clusterprofiler) | | [alsa-lib](#alsa-lib) | [ncbi-magicblast](#ncbi-magicblast) | [r-cner](#r-cner) | | [aluminum](#aluminum) | [ncbi-rmblastn](#ncbi-rmblastn) | [r-coda](#r-coda) | | [amg](#amg) | [ncbi-toolkit](#ncbi-toolkit) | [r-codetools](#r-codetools) | | [amg2013](#amg2013) | [nccl](#nccl) | [r-coin](#r-coin) | | [amp](#amp) | [nccmp](#nccmp) | [r-colorspace](#r-colorspace) | | [ampliconnoise](#ampliconnoise) | [ncdu](#ncdu) | [r-complexheatmap](#r-complexheatmap) | | [amrex](#amrex) | [ncftp](#ncftp) | [r-corpcor](#r-corpcor) | | [amrvis](#amrvis) | [ncl](#ncl) | [r-corrplot](#r-corrplot) | | [andi](#andi) | 
[nco](#nco) | [r-covr](#r-covr) | | [angsd](#angsd) | [ncurses](#ncurses) | [r-cowplot](#r-cowplot) | | [ant](#ant) | [ncview](#ncview) | [r-crayon](#r-crayon) | | [antlr](#antlr) | [ndiff](#ndiff) | [r-crosstalk](#r-crosstalk) | | [ants](#ants) | [nek5000](#nek5000) | [r-ctc](#r-ctc) | | [ape](#ape) | [nekbone](#nekbone) | [r-cubature](#r-cubature) | | [aperture-photometry](#aperture-photometry) | [nekcem](#nekcem) | [r-cubist](#r-cubist) | | [apex](#apex) | [nektar](#nektar) | [r-curl](#r-curl) | | [apple-libunwind](#apple-libunwind) | [neovim](#neovim) | [r-data-table](#r-data-table) | | [applewmproto](#applewmproto) | [nest](#nest) | [r-dbi](#r-dbi) | | [appres](#appres) | [netcdf](#netcdf) | [r-dbplyr](#r-dbplyr) | | [apr](#apr) | [netcdf-cxx](#netcdf-cxx) | [r-delayedarray](#r-delayedarray) | | [apr-util](#apr-util) | [netcdf-cxx4](#netcdf-cxx4) | [r-deldir](#r-deldir) | | [aragorn](#aragorn) | [netcdf-fortran](#netcdf-fortran) | [r-dendextend](#r-dendextend) | | [archer](#archer) | [netgauge](#netgauge) | [r-deoptim](#r-deoptim) | | [argobots](#argobots) | [netgen](#netgen) | [r-deoptimr](#r-deoptimr) | | [argp-standalone](#argp-standalone) | [netlib-lapack](#netlib-lapack) | [r-deseq](#r-deseq) | | [argtable](#argtable) | [netlib-scalapack](#netlib-scalapack) | [r-deseq2](#r-deseq2) | | [arlecore](#arlecore) | [netlib-xblas](#netlib-xblas) | [r-desolve](#r-desolve) | | [armadillo](#armadillo) | [nettle](#nettle) | [r-devtools](#r-devtools) | | [arpack-ng](#arpack-ng) | [neuron](#neuron) | [r-diagrammer](#r-diagrammer) | | [arrow](#arrow) | [nextflow](#nextflow) | [r-dicekriging](#r-dicekriging) | | [ascent](#ascent) | [nfft](#nfft) | [r-dichromat](#r-dichromat) | | [asciidoc](#asciidoc) | [nghttp2](#nghttp2) | [r-diffusionmap](#r-diffusionmap) | | [aspa](#aspa) | [nginx](#nginx) | [r-digest](#r-digest) | | [aspcud](#aspcud) | [ngmlr](#ngmlr) | [r-diptest](#r-diptest) | | [aspect](#aspect) | [ninja](#ninja) | [r-dirichletmultinomial](#r-dirichletmultinomial) 
| | [aspell](#aspell) | [ninja-fortran](#ninja-fortran) | [r-dismo](#r-dismo) | | [aspell6-de](#aspell6-de) | [nlohmann-json](#nlohmann-json) | [r-dnacopy](#r-dnacopy) | | [aspell6-en](#aspell6-en) | [nlopt](#nlopt) | [r-do-db](#r-do-db) | | [aspell6-es](#aspell6-es) | [nmap](#nmap) | [r-domc](#r-domc) | | [aspera-cli](#aspera-cli) | [nnvm](#nnvm) | [r-doparallel](#r-doparallel) | | [assimp](#assimp) | [node-js](#node-js) | [r-dorng](#r-dorng) | | [astra](#astra) | [notmuch](#notmuch) | [r-dose](#r-dose) | | [astral](#astral) | [npb](#npb) | [r-downloader](#r-downloader) | | [astyle](#astyle) | [npm](#npm) | [r-dplyr](#r-dplyr) | | [at-spi2-atk](#at-spi2-atk) | [npth](#npth) | [r-dt](#r-dt) | | [at-spi2-core](#at-spi2-core) | [nspr](#nspr) | [r-dtw](#r-dtw) | | [atk](#atk) | [numactl](#numactl) | [r-dygraphs](#r-dygraphs) | | [atlas](#atlas) | [numdiff](#numdiff) | [r-e1071](#r-e1071) | | [atom-dft](#atom-dft) | [nut](#nut) | [r-edger](#r-edger) | | [atompaw](#atompaw) | [nvptx-tools](#nvptx-tools) | [r-ellipse](#r-ellipse) | | [atop](#atop) | [nwchem](#nwchem) | [r-ensembldb](#r-ensembldb) | | [augustus](#augustus) | [ocaml](#ocaml) | [r-ergm](#r-ergm) | | [autoconf](#autoconf) | [occa](#occa) | [r-evaluate](#r-evaluate) | | [autodock-vina](#autodock-vina) | [oce](#oce) | [r-expm](#r-expm) | | [autofact](#autofact) | [oclint](#oclint) | [r-factoextra](#r-factoextra) | | [autogen](#autogen) | [oclock](#oclock) | [r-factominer](#r-factominer) | | [automaded](#automaded) | [octave](#octave) | [r-fastcluster](#r-fastcluster) | | [automake](#automake) | [octave-optim](#octave-optim) | [r-fastmatch](#r-fastmatch) | | [axel](#axel) | [octave-splines](#octave-splines) | [r-ff](#r-ff) | | [axl](#axl) | [octave-struct](#octave-struct) | [r-fftwtools](#r-fftwtools) | | [bamdst](#bamdst) | [octopus](#octopus) | [r-fgsea](#r-fgsea) | | [bamtools](#bamtools) | [of-adios-write](#of-adios-write) | [r-filehash](#r-filehash) | | [bamutil](#bamutil) | [of-precice](#of-precice) | 
[r-findpython](#r-findpython) | | [barrnap](#barrnap) | [omega-h](#omega-h) | [r-fit-models](#r-fit-models) | | [bash](#bash) | [ompss](#ompss) | [r-flashclust](#r-flashclust) | | [bash-completion](#bash-completion) | [ompt-openmp](#ompt-openmp) | [r-flexclust](#r-flexclust) | | [bats](#bats) | [oniguruma](#oniguruma) | [r-flexmix](#r-flexmix) | | [bazel](#bazel) | [ont-albacore](#ont-albacore) | [r-fnn](#r-fnn) | | [bbcp](#bbcp) | [opa-psm2](#opa-psm2) | [r-forcats](#r-forcats) | | [bbmap](#bbmap) | [opam](#opam) | [r-foreach](#r-foreach) | | [bc](#bc) | [opari2](#opari2) | [r-forecast](#r-forecast) | | [bcftools](#bcftools) | [openbabel](#openbabel) | [r-foreign](#r-foreign) | | [bcl2fastq2](#bcl2fastq2) | [openblas](#openblas) | [r-formatr](#r-formatr) | | [bdftopcf](#bdftopcf) | [opencoarrays](#opencoarrays) | [r-formula](#r-formula) | | [bdw-gc](#bdw-gc) | [opencv](#opencv) | [r-fpc](#r-fpc) | | [bear](#bear) | [openexr](#openexr) | [r-fracdiff](#r-fracdiff) | | [beast1](#beast1) | [openfast](#openfast) | [r-futile-logger](#r-futile-logger) | | [beast2](#beast2) | [openfoam-com](#openfoam-com) | [r-futile-options](#r-futile-options) | | [bedops](#bedops) | [openfoam-org](#openfoam-org) | [r-gbm](#r-gbm) | | [bedtools2](#bedtools2) | [openfst](#openfst) | [r-gcrma](#r-gcrma) | | [beforelight](#beforelight) | [opengl](#opengl) | [r-gdata](#r-gdata) | | [benchmark](#benchmark) | [openglu](#openglu) | [r-gdsfmt](#r-gdsfmt) | | [berkeley-db](#berkeley-db) | [openjpeg](#openjpeg) | [r-geiger](#r-geiger) | | [bertini](#bertini) | [openmc](#openmc) | [r-genefilter](#r-genefilter) | | [bib2xhtml](#bib2xhtml) | [openmpi](#openmpi) | [r-genelendatabase](#r-genelendatabase) | | [bigreqsproto](#bigreqsproto) | [opennurbs](#opennurbs) | [r-geneplotter](#r-geneplotter) | | [binutils](#binutils) | [openpmd-api](#openpmd-api) | [r-genie3](#r-genie3) | | [bioawk](#bioawk) | [openscenegraph](#openscenegraph) | [r-genomeinfodb](#r-genomeinfodb) | | [biopieces](#biopieces) | 
[openslide](#openslide) | [r-genomeinfodbdata](#r-genomeinfodbdata) | | [bismark](#bismark) | [openspeedshop](#openspeedshop) | [r-genomicalignments](#r-genomicalignments) | | [bison](#bison) | [openspeedshop-utils](#openspeedshop-utils) | [r-genomicfeatures](#r-genomicfeatures) | | [bitmap](#bitmap) | [openssh](#openssh) | [r-genomicranges](#r-genomicranges) | | [blasr](#blasr) | [openssl](#openssl) | [r-geomorph](#r-geomorph) | | [blasr-libcpp](#blasr-libcpp) | [opium](#opium) | [r-geoquery](#r-geoquery) | | [blast-plus](#blast-plus) | [optional-lite](#optional-lite) | [r-geosphere](#r-geosphere) | | [blat](#blat) | [opus](#opus) | [r-getopt](#r-getopt) | | [blaze](#blaze) | [orca](#orca) | [r-getoptlong](#r-getoptlong) | | [blis](#blis) | [orfm](#orfm) | [r-ggally](#r-ggally) | | [bliss](#bliss) | [orthofinder](#orthofinder) | [r-ggbio](#r-ggbio) | | [blitz](#blitz) | [orthomcl](#orthomcl) | [r-ggdendro](#r-ggdendro) | | [bmake](#bmake) | [osu-micro-benchmarks](#osu-micro-benchmarks) | [r-ggjoy](#r-ggjoy) | | [bml](#bml) | [otf](#otf) | [r-ggmap](#r-ggmap) | | [bohrium](#bohrium) | [otf2](#otf2) | [r-ggplot2](#r-ggplot2) | | [bolt](#bolt) | [p4est](#p4est) | [r-ggpubr](#r-ggpubr) | | [bookleaf-cpp](#bookleaf-cpp) | [p7zip](#p7zip) | [r-ggrepel](#r-ggrepel) | | [boost](#boost) | [pacbio-daligner](#pacbio-daligner) | [r-ggridges](#r-ggridges) | | [boostmplcartesianproduct](#boostmplcartesianproduct) | [pacbio-damasker](#pacbio-damasker) | [r-ggsci](#r-ggsci) | | [bowtie](#bowtie) | [pacbio-dazz-db](#pacbio-dazz-db) | [r-ggvis](#r-ggvis) | | [bowtie2](#bowtie2) | [pacbio-dextractor](#pacbio-dextractor) | [r-gistr](#r-gistr) | | [boxlib](#boxlib) | [packmol](#packmol) | [r-git2r](#r-git2r) | | [bpp-core](#bpp-core) | [pacvim](#pacvim) | [r-glimma](#r-glimma) | | [bpp-phyl](#bpp-phyl) | [pagit](#pagit) | [r-glmnet](#r-glmnet) | | [bpp-seq](#bpp-seq) | [pagmo](#pagmo) | [r-globaloptions](#r-globaloptions) | | [bpp-suite](#bpp-suite) | [paml](#paml) | [r-glue](#r-glue) 
| | [bracken](#bracken) | [panda](#panda) | [r-gmodels](#r-gmodels) | | [braker](#braker) | [pandaseq](#pandaseq) | [r-gmp](#r-gmp) | | [branson](#branson) | [pango](#pango) | [r-go-db](#r-go-db) | | [breakdancer](#breakdancer) | [pangomm](#pangomm) | [r-googlevis](#r-googlevis) | | [breseq](#breseq) | [papi](#papi) | [r-goplot](#r-goplot) | | [brigand](#brigand) | [papyrus](#papyrus) | [r-gosemsim](#r-gosemsim) | | [bsseeker2](#bsseeker2) | [paradiseo](#paradiseo) | [r-goseq](#r-goseq) | | [bucky](#bucky) | [parallel](#parallel) | [r-gostats](#r-gostats) | | [bumpversion](#bumpversion) | [parallel-netcdf](#parallel-netcdf) | [r-gplots](#r-gplots) | | [busco](#busco) | [paraver](#paraver) | [r-graph](#r-graph) | | [butter](#butter) | [paraview](#paraview) | [r-gridbase](#r-gridbase) | | [bwa](#bwa) | [parmetis](#parmetis) | [r-gridextra](#r-gridextra) | | [bwtool](#bwtool) | [parmgridgen](#parmgridgen) | [r-gseabase](#r-gseabase) | | [byobu](#byobu) | [parquet](#parquet) | [r-gss](#r-gss) | | [bzip2](#bzip2) | [parsimonator](#parsimonator) | [r-gsubfn](#r-gsubfn) | | [c-blosc](#c-blosc) | [parsplice](#parsplice) | [r-gtable](#r-gtable) | | [c-lime](#c-lime) | [partitionfinder](#partitionfinder) | [r-gtools](#r-gtools) | | [cabana](#cabana) | [patch](#patch) | [r-gtrellis](#r-gtrellis) | | [caffe](#caffe) | [patchelf](#patchelf) | [r-gviz](#r-gviz) | | [cairo](#cairo) | [pathfinder](#pathfinder) | [r-haven](#r-haven) | | [cairomm](#cairomm) | [pax-utils](#pax-utils) | [r-hexbin](#r-hexbin) | | [caliper](#caliper) | [pbbam](#pbbam) | [r-highr](#r-highr) | | [callpath](#callpath) | [pbmpi](#pbmpi) | [r-hmisc](#r-hmisc) | | [camellia](#camellia) | [pcma](#pcma) | [r-hms](#r-hms) | | [candle-benchmarks](#candle-benchmarks) | [pcre](#pcre) | [r-htmltable](#r-htmltable) | | [cantera](#cantera) | [pcre2](#pcre2) | [r-htmltools](#r-htmltools) | | [canu](#canu) | [pdf2svg](#pdf2svg) | [r-htmlwidgets](#r-htmlwidgets) | | [cap3](#cap3) | [pdftk](#pdftk) | [r-httpuv](#r-httpuv) 
| | [cares](#cares) | [pdsh](#pdsh) | [r-httr](#r-httr) | | [cask](#cask) | [pdt](#pdt) | [r-hwriter](#r-hwriter) | | [casper](#casper) | [pegtl](#pegtl) | [r-hypergraph](#r-hypergraph) | | [catalyst](#catalyst) | [pennant](#pennant) | [r-ica](#r-ica) | | [catch](#catch) | [percept](#percept) | [r-igraph](#r-igraph) | | [cbench](#cbench) | [perl](#perl) | [r-illuminaio](#r-illuminaio) | | [cblas](#cblas) | [perl-algorithm-diff](#perl-algorithm-diff) | [r-impute](#r-impute) | | [cbtf](#cbtf) | [perl-app-cmd](#perl-app-cmd) | [r-influencer](#r-influencer) | | [cbtf-argonavis](#cbtf-argonavis) | [perl-array-utils](#perl-array-utils) | [r-inline](#r-inline) | | [cbtf-argonavis-gui](#cbtf-argonavis-gui) | [perl-b-hooks-endofscope](#perl-b-hooks-endofscope) | [r-interactivedisplaybase](#r-interactivedisplaybase) | | [cbtf-krell](#cbtf-krell) | [perl-bio-perl](#perl-bio-perl) | [r-ipred](#r-ipred) | | [cbtf-lanl](#cbtf-lanl) | [perl-bit-vector](#perl-bit-vector) | [r-iranges](#r-iranges) | | [ccache](#ccache) | [perl-cairo](#perl-cairo) | [r-irdisplay](#r-irdisplay) | | [cctools](#cctools) | [perl-capture-tiny](#perl-capture-tiny) | [r-irkernel](#r-irkernel) | | [cdbfasta](#cdbfasta) | [perl-carp-clan](#perl-carp-clan) | [r-irlba](#r-irlba) | | [cdd](#cdd) | [perl-cgi](#perl-cgi) | [r-iso](#r-iso) | | [cddlib](#cddlib) | [perl-class-data-inheritable](#perl-class-data-inheritable) | [r-iterators](#r-iterators) | | [cdhit](#cdhit) | [perl-class-inspector](#perl-class-inspector) | [r-janitor](#r-janitor) | | [cdo](#cdo) | [perl-class-load](#perl-class-load) | [r-jaspar2018](#r-jaspar2018) | | [ceed](#ceed) | [perl-class-load-xs](#perl-class-load-xs) | [r-jomo](#r-jomo) | | [cereal](#cereal) | [perl-compress-raw-bzip2](#perl-compress-raw-bzip2) | [r-jpeg](#r-jpeg) | | [ceres-solver](#ceres-solver) | [perl-compress-raw-zlib](#perl-compress-raw-zlib) | [r-jsonlite](#r-jsonlite) | | [cfitsio](#cfitsio) | [perl-contextual-return](#perl-contextual-return) | [r-kegg-db](#r-kegg-db) 
| | [cgal](#cgal) | [perl-cpan-meta-check](#perl-cpan-meta-check) | [r-kegggraph](#r-kegggraph) | | [cgm](#cgm) | [perl-data-optlist](#perl-data-optlist) | [r-keggrest](#r-keggrest) | | [cgns](#cgns) | [perl-data-stag](#perl-data-stag) | [r-kernlab](#r-kernlab) | | [channelflow](#channelflow) | [perl-dbd-mysql](#perl-dbd-mysql) | [r-kernsmooth](#r-kernsmooth) | | [charliecloud](#charliecloud) | [perl-dbd-sqlite](#perl-dbd-sqlite) | [r-kknn](#r-kknn) | | [charmpp](#charmpp) | [perl-dbfile](#perl-dbfile) | [r-knitr](#r-knitr) | | [chatterbug](#chatterbug) | [perl-dbi](#perl-dbi) | [r-ks](#r-ks) | | [check](#check) | [perl-devel-cycle](#perl-devel-cycle) | [r-labeling](#r-labeling) | | [chlorop](#chlorop) | [perl-devel-globaldestruction](#perl-devel-globaldestruction) | [r-lambda-r](#r-lambda-r) | | [chombo](#chombo) | [perl-devel-overloadinfo](#perl-devel-overloadinfo) | [r-laplacesdemon](#r-laplacesdemon) | | [cistem](#cistem) | [perl-devel-stacktrace](#perl-devel-stacktrace) | [r-lars](#r-lars) | | [cityhash](#cityhash) | [perl-digest-md5](#perl-digest-md5) | [r-lattice](#r-lattice) | | [clamr](#clamr) | [perl-dist-checkconflicts](#perl-dist-checkconflicts) | [r-latticeextra](#r-latticeextra) | | [clapack](#clapack) | [perl-encode-locale](#perl-encode-locale) | [r-lava](#r-lava) | | [claw](#claw) | [perl-eval-closure](#perl-eval-closure) | [r-lazyeval](#r-lazyeval) | | [cleaveland4](#cleaveland4) | [perl-exception-class](#perl-exception-class) | [r-leaflet](#r-leaflet) | | [cleverleaf](#cleverleaf) | [perl-exporter-tiny](#perl-exporter-tiny) | [r-leaps](#r-leaps) | | [clfft](#clfft) | [perl-extutils-depends](#perl-extutils-depends) | [r-learnbayes](#r-learnbayes) | | [clhep](#clhep) | [perl-extutils-makemaker](#perl-extutils-makemaker) | [r-lhs](#r-lhs) | | [clingo](#clingo) | [perl-extutils-pkgconfig](#perl-extutils-pkgconfig) | [r-limma](#r-limma) | | [cloc](#cloc) | [perl-file-copy-recursive](#perl-file-copy-recursive) | [r-lme4](#r-lme4) | | [cloog](#cloog) | 
[perl-file-listing](#perl-file-listing) | [r-lmtest](#r-lmtest) | | [cloverleaf](#cloverleaf) | [perl-file-pushd](#perl-file-pushd) | [r-locfit](#r-locfit) | | [cloverleaf3d](#cloverleaf3d) | [perl-file-sharedir-install](#perl-file-sharedir-install) | [r-log4r](#r-log4r) | | [clustalo](#clustalo) | [perl-file-slurp-tiny](#perl-file-slurp-tiny) | [r-lpsolve](#r-lpsolve) | | [clustalw](#clustalw) | [perl-file-slurper](#perl-file-slurper) | [r-lsei](#r-lsei) | | [cmake](#cmake) | [perl-file-which](#perl-file-which) | [r-lubridate](#r-lubridate) | | [cmocka](#cmocka) | [perl-font-ttf](#perl-font-ttf) | [r-magic](#r-magic) | | [cmor](#cmor) | [perl-gd](#perl-gd) | [r-magrittr](#r-magrittr) | | [cnmem](#cnmem) | [perl-gd-graph](#perl-gd-graph) | [r-makecdfenv](#r-makecdfenv) | | [cnpy](#cnpy) | [perl-gd-text](#perl-gd-text) | [r-maldiquant](#r-maldiquant) | | [cntk](#cntk) | [perl-gdgraph-histogram](#perl-gdgraph-histogram) | [r-mapproj](#r-mapproj) | | [cntk1bitsgd](#cntk1bitsgd) | [perl-graph](#perl-graph) | [r-maps](#r-maps) | | [codar-cheetah](#codar-cheetah) | [perl-graph-readwrite](#perl-graph-readwrite) | [r-maptools](#r-maptools) | | [codes](#codes) | [perl-html-parser](#perl-html-parser) | [r-markdown](#r-markdown) | | [coevp](#coevp) | [perl-html-tagset](#perl-html-tagset) | [r-mass](#r-mass) | | [cohmm](#cohmm) | [perl-http-cookies](#perl-http-cookies) | [r-matr](#r-matr) | | [coinhsl](#coinhsl) | [perl-http-daemon](#perl-http-daemon) | [r-matrix](#r-matrix) | | [colm](#colm) | [perl-http-date](#perl-http-date) | [r-matrixmodels](#r-matrixmodels) | | [colordiff](#colordiff) | [perl-http-message](#perl-http-message) | [r-matrixstats](#r-matrixstats) | | [comd](#comd) | [perl-http-negotiate](#perl-http-negotiate) | [r-mclust](#r-mclust) | | [commons-lang](#commons-lang) | [perl-inline](#perl-inline) | [r-mcmcglmm](#r-mcmcglmm) | | [commons-lang3](#commons-lang3) | [perl-inline-c](#perl-inline-c) | [r-mco](#r-mco) | | [commons-logging](#commons-logging) | 
[perl-intervaltree](#perl-intervaltree) | [r-mda](#r-mda) | | [compiz](#compiz) | [perl-io-compress](#perl-io-compress) | [r-memoise](#r-memoise) | | [compositeproto](#compositeproto) | [perl-io-html](#perl-io-html) | [r-mergemaid](#r-mergemaid) | | [conduit](#conduit) | [perl-io-sessiondata](#perl-io-sessiondata) | [r-methodss3](#r-methodss3) | | [constype](#constype) | [perl-io-socket-ssl](#perl-io-socket-ssl) | [r-mgcv](#r-mgcv) | | [converge](#converge) | [perl-io-string](#perl-io-string) | [r-mgraster](#r-mgraster) | | [coreutils](#coreutils) | [perl-json](#perl-json) | [r-mice](#r-mice) | | [corset](#corset) | [perl-libwww-perl](#perl-libwww-perl) | [r-mime](#r-mime) | | [cosmomc](#cosmomc) | [perl-list-moreutils](#perl-list-moreutils) | [r-minfi](#r-minfi) | | [cosp2](#cosp2) | [perl-log-log4perl](#perl-log-log4perl) | [r-minqa](#r-minqa) | | [cp2k](#cp2k) | [perl-lwp](#perl-lwp) | [r-misc3d](#r-misc3d) | | [cppad](#cppad) | [perl-lwp-mediatypes](#perl-lwp-mediatypes) | [r-mitml](#r-mitml) | | [cppcheck](#cppcheck) | [perl-lwp-protocol-https](#perl-lwp-protocol-https) | [r-mixtools](#r-mixtools) | | [cppgsl](#cppgsl) | [perl-math-cdf](#perl-math-cdf) | [r-mlbench](#r-mlbench) | | [cpprestsdk](#cpprestsdk) | [perl-math-cephes](#perl-math-cephes) | [r-mlinterfaces](#r-mlinterfaces) | | [cppunit](#cppunit) | [perl-math-matrixreal](#perl-math-matrixreal) | [r-mlr](#r-mlr) | | [cppzmq](#cppzmq) | [perl-module-build](#perl-module-build) | [r-mlrmbo](#r-mlrmbo) | | [cpu-features](#cpu-features) | [perl-module-implementation](#perl-module-implementation) | [r-mmwrweek](#r-mmwrweek) | | [cpuinfo](#cpuinfo) | [perl-module-runtime](#perl-module-runtime) | [r-mnormt](#r-mnormt) | | [cram](#cram) | [perl-module-runtime-conflicts](#perl-module-runtime-conflicts) | [r-modelmetrics](#r-modelmetrics) | | [cryptopp](#cryptopp) | [perl-moose](#perl-moose) | [r-modelr](#r-modelr) | | [cscope](#cscope) | [perl-mozilla-ca](#perl-mozilla-ca) | [r-modeltools](#r-modeltools) | | 
[csdp](#csdp) | [perl-mro-compat](#perl-mro-compat) | [r-mpm](#r-mpm) | | [ctffind](#ctffind) | [perl-namespace-clean](#perl-namespace-clean) | [r-msnbase](#r-msnbase) | | [cub](#cub) | [perl-net-http](#perl-net-http) | [r-multcomp](#r-multcomp) | | [cube](#cube) | [perl-net-scp-expect](#perl-net-scp-expect) | [r-multicool](#r-multicool) | | [cubelib](#cubelib) | [perl-net-ssleay](#perl-net-ssleay) | [r-multtest](#r-multtest) | | [cubew](#cubew) | [perl-package-deprecationmanager](#perl-package-deprecationmanager) | [r-munsell](#r-munsell) | | [cuda](#cuda) | [perl-package-stash](#perl-package-stash) | [r-mvtnorm](#r-mvtnorm) | | [cuda-memtest](#cuda-memtest) | [perl-package-stash-xs](#perl-package-stash-xs) | [r-mzid](#r-mzid) | | [cudnn](#cudnn) | [perl-padwalker](#perl-padwalker) | [r-mzr](#r-mzr) | | [cufflinks](#cufflinks) | [perl-parallel-forkmanager](#perl-parallel-forkmanager) | [r-nanotime](#r-nanotime) | | [cups](#cups) | [perl-params-util](#perl-params-util) | [r-ncbit](#r-ncbit) | | [curl](#curl) | [perl-parse-recdescent](#perl-parse-recdescent) | [r-ncdf4](#r-ncdf4) | | [cvs](#cvs) | [perl-pdf-api2](#perl-pdf-api2) | [r-network](#r-network) | | [czmq](#czmq) | [perl-pegex](#perl-pegex) | [r-networkd3](#r-networkd3) | | [dakota](#dakota) | [perl-perl4-corelibs](#perl-perl4-corelibs) | [r-nlme](#r-nlme) | | [daligner](#daligner) | [perl-perl6-slurp](#perl-perl6-slurp) | [r-nloptr](#r-nloptr) | | [damageproto](#damageproto) | [perl-perlio-gzip](#perl-perlio-gzip) | [r-nmf](#r-nmf) | | [damaris](#damaris) | [perl-perlio-utf8-strict](#perl-perlio-utf8-strict) | [r-nnet](#r-nnet) | | [damselfly](#damselfly) | [perl-scalar-util-numeric](#perl-scalar-util-numeric) | [r-nnls](#r-nnls) | | [darshan-runtime](#darshan-runtime) | [perl-soap-lite](#perl-soap-lite) | [r-nor1mix](#r-nor1mix) | | [darshan-util](#darshan-util) | [perl-star-fusion](#perl-star-fusion) | [r-np](#r-np) | | [dash](#dash) | [perl-statistics-descriptive](#perl-statistics-descriptive) | 
[r-numderiv](#r-numderiv) | | [datamash](#datamash) | [perl-statistics-pca](#perl-statistics-pca) | [r-oligoclasses](#r-oligoclasses) | | [dataspaces](#dataspaces) | [perl-sub-exporter](#perl-sub-exporter) | [r-oo](#r-oo) | | [davix](#davix) | [perl-sub-exporter-progressive](#perl-sub-exporter-progressive) | [r-openssl](#r-openssl) | | [dbcsr](#dbcsr) | [perl-sub-identify](#perl-sub-identify) | [r-org-hs-eg-db](#r-org-hs-eg-db) | | [dbus](#dbus) | [perl-sub-install](#perl-sub-install) | [r-organismdbi](#r-organismdbi) | | [dealii](#dealii) | [perl-sub-name](#perl-sub-name) | [r-packrat](#r-packrat) | | [dealii-parameter-gui](#dealii-parameter-gui) | [perl-sub-uplevel](#perl-sub-uplevel) | [r-pacman](#r-pacman) | | [deconseq-standalone](#deconseq-standalone) | [perl-svg](#perl-svg) | [r-pamr](#r-pamr) | | [dejagnu](#dejagnu) | [perl-swissknife](#perl-swissknife) | [r-pan](#r-pan) | | [delly2](#delly2) | [perl-task-weaken](#perl-task-weaken) | [r-parallelmap](#r-parallelmap) | | [denovogear](#denovogear) | [perl-term-readkey](#perl-term-readkey) | [r-paramhelpers](#r-paramhelpers) | | [dftfe](#dftfe) | [perl-test-cleannamespaces](#perl-test-cleannamespaces) | [r-party](#r-party) | | [dia](#dia) | [perl-test-deep](#perl-test-deep) | [r-partykit](#r-partykit) | | [dialign-tx](#dialign-tx) | [perl-test-differences](#perl-test-differences) | [r-pathview](#r-pathview) | | [diamond](#diamond) | [perl-test-exception](#perl-test-exception) | [r-pbapply](#r-pbapply) | | [diffsplice](#diffsplice) | [perl-test-fatal](#perl-test-fatal) | [r-pbdzmq](#r-pbdzmq) | | [diffutils](#diffutils) | [perl-test-memory-cycle](#perl-test-memory-cycle) | [r-pbkrtest](#r-pbkrtest) | | [direnv](#direnv) | [perl-test-most](#perl-test-most) | [r-pcamethods](#r-pcamethods) | | [discovar](#discovar) | [perl-test-needs](#perl-test-needs) | [r-pcapp](#r-pcapp) | | [discovardenovo](#discovardenovo) | [perl-test-requires](#perl-test-requires) | [r-permute](#r-permute) | | [dislin](#dislin) | 
[perl-test-requiresinternet](#perl-test-requiresinternet) | [r-pfam-db](#r-pfam-db) | | [diy](#diy) | [perl-test-warn](#perl-test-warn) | [r-phangorn](#r-phangorn) | | [dlpack](#dlpack) | [perl-test-warnings](#perl-test-warnings) | [r-phantompeakqualtools](#r-phantompeakqualtools) | | [dmd](#dmd) | [perl-text-csv](#perl-text-csv) | [r-phyloseq](#r-phyloseq) | | [dmlc-core](#dmlc-core) | [perl-text-diff](#perl-text-diff) | [r-picante](#r-picante) | | [dmtcp](#dmtcp) | [perl-text-simpletable](#perl-text-simpletable) | [r-pkgconfig](#r-pkgconfig) | | [dmxproto](#dmxproto) | [perl-text-soundex](#perl-text-soundex) | [r-pkgmaker](#r-pkgmaker) | | [docbook-xml](#docbook-xml) | [perl-text-unidecode](#perl-text-unidecode) | [r-plogr](#r-plogr) | | [docbook-xsl](#docbook-xsl) | [perl-time-hires](#perl-time-hires) | [r-plot3d](#r-plot3d) | | [dos2unix](#dos2unix) | [perl-time-piece](#perl-time-piece) | [r-plotly](#r-plotly) | | [dotnet-core-sdk](#dotnet-core-sdk) | [perl-try-tiny](#perl-try-tiny) | [r-plotrix](#r-plotrix) | | [double-conversion](#double-conversion) | [perl-uri](#perl-uri) | [r-pls](#r-pls) | | [doxygen](#doxygen) | [perl-uri-escape](#perl-uri-escape) | [r-plyr](#r-plyr) | | [dri2proto](#dri2proto) | [perl-version](#perl-version) | [r-pmcmr](#r-pmcmr) | | [dri3proto](#dri3proto) | [perl-want](#perl-want) | [r-png](#r-png) | | [dsdp](#dsdp) | [perl-www-robotrules](#perl-www-robotrules) | [r-powerlaw](#r-powerlaw) | | [dsrc](#dsrc) | [perl-xml-parser](#perl-xml-parser) | [r-prabclus](#r-prabclus) | | [dtcmp](#dtcmp) | [perl-xml-parser-lite](#perl-xml-parser-lite) | [r-praise](#r-praise) | | [dyninst](#dyninst) | [perl-xml-simple](#perl-xml-simple) | [r-preprocesscore](#r-preprocesscore) | | [ea-utils](#ea-utils) | [perl-yaml-libyaml](#perl-yaml-libyaml) | [r-prettyunits](#r-prettyunits) | | [easybuild](#easybuild) | [petsc](#petsc) | [r-processx](#r-processx) | | [ebms](#ebms) | [pexsi](#pexsi) | [r-prodlim](#r-prodlim) | | [eccodes](#eccodes) | [pfft](#pfft) | 
[r-progress](#r-progress) | | [eclipse-gcj-parser](#eclipse-gcj-parser) | [pflotran](#pflotran) | [r-protgenerics](#r-protgenerics) | | [ecp-proxy-apps](#ecp-proxy-apps) | [pfunit](#pfunit) | [r-proto](#r-proto) | | [ed](#ed) | [pgdspider](#pgdspider) | [r-proxy](#r-proxy) | | [editres](#editres) | [pgi](#pgi) | [r-pryr](#r-pryr) | | [eigen](#eigen) | [pgmath](#pgmath) | [r-ps](#r-ps) | | [elasticsearch](#elasticsearch) | [phantompeakqualtools](#phantompeakqualtools) | [r-psych](#r-psych) | | [elemental](#elemental) | [phast](#phast) | [r-ptw](#r-ptw) | | [elfutils](#elfutils) | [phasta](#phasta) | [r-purrr](#r-purrr) | | [elk](#elk) | [phist](#phist) | [r-quadprog](#r-quadprog) | | [elpa](#elpa) | [phylip](#phylip) | [r-quantmod](#r-quantmod) | | [emacs](#emacs) | [phyluce](#phyluce) | [r-quantreg](#r-quantreg) | | [ember](#ember) | [picard](#picard) | [r-quantro](#r-quantro) | | [emboss](#emboss) | [picsar](#picsar) | [r-qvalue](#r-qvalue) | | [encodings](#encodings) | [picsarlite](#picsarlite) | [r-r6](#r-r6) | | [energyplus](#energyplus) | [pidx](#pidx) | [r-randomforest](#r-randomforest) | | [environment-modules](#environment-modules) | [pigz](#pigz) | [r-ranger](#r-ranger) | | [eospac](#eospac) | [pilon](#pilon) | [r-rappdirs](#r-rappdirs) | | [er](#er) | [pindel](#pindel) | [r-raster](#r-raster) | | [es](#es) | [piranha](#piranha) | [r-rbgl](#r-rbgl) | | [esmf](#esmf) | [pism](#pism) | [r-rbokeh](#r-rbokeh) | | [essl](#essl) | [pixman](#pixman) | [r-rcolorbrewer](#r-rcolorbrewer) | | [ethminer](#ethminer) | [pkg-config](#pkg-config) | [r-rcpp](#r-rcpp) | | [etsf-io](#etsf-io) | [pkgconf](#pkgconf) | [r-rcpparmadillo](#r-rcpparmadillo) | | [everytrace](#everytrace) | [planck-likelihood](#planck-likelihood) | [r-rcppblaze](#r-rcppblaze) | | [everytrace-example](#everytrace-example) | [plasma](#plasma) | [r-rcppcctz](#r-rcppcctz) | | [evieext](#evieext) | [platypus](#platypus) | [r-rcppcnpy](#r-rcppcnpy) | | [exabayes](#exabayes) | [plink](#plink) | 
[r-rcppeigen](#r-rcppeigen) | | [examinimd](#examinimd) | [plplot](#plplot) | [r-rcppprogress](#r-rcppprogress) | | [exampm](#exampm) | [plumed](#plumed) | [r-rcurl](#r-rcurl) | | [exasp2](#exasp2) | [pmgr-collective](#pmgr-collective) | [r-rda](#r-rda) | | [exmcutils](#exmcutils) | [pmix](#pmix) | [r-readr](#r-readr) | | [exodusii](#exodusii) | [pnfft](#pnfft) | [r-readxl](#r-readxl) | | [exonerate](#exonerate) | [pngwriter](#pngwriter) | [r-registry](#r-registry) | | [expat](#expat) | [pnmpi](#pnmpi) | [r-rematch](#r-rematch) | | [expect](#expect) | [poamsa](#poamsa) | [r-reordercluster](#r-reordercluster) | | [express](#express) | [pocl](#pocl) | [r-reportingtools](#r-reportingtools) | | [extrae](#extrae) | [polymake](#polymake) | [r-repr](#r-repr) | | [exuberant-ctags](#exuberant-ctags) | [poppler](#poppler) | [r-reprex](#r-reprex) | | [f90cache](#f90cache) | [poppler-data](#poppler-data) | [r-reshape](#r-reshape) | | [fabtests](#fabtests) | [porta](#porta) | [r-reshape2](#r-reshape2) | | [falcon](#falcon) | [portage](#portage) | [r-rex](#r-rex) | | [fast-global-file-status](#fast-global-file-status) | [portcullis](#portcullis) | [r-rgdal](#r-rgdal) | | [fasta](#fasta) | [postgresql](#postgresql) | [r-rgenoud](#r-rgenoud) | | [fastjar](#fastjar) | [ppl](#ppl) | [r-rgeos](#r-rgeos) | | [fastmath](#fastmath) | [pplacer](#pplacer) | [r-rgl](#r-rgl) | | [fastme](#fastme) | [prank](#prank) | [r-rgooglemaps](#r-rgooglemaps) | | [fastphase](#fastphase) | [precice](#precice) | [r-rgraphviz](#r-rgraphviz) | | [fastq-screen](#fastq-screen) | [presentproto](#presentproto) | [r-rhdf5](#r-rhdf5) | | [fastqc](#fastqc) | [preseq](#preseq) | [r-rhtslib](#r-rhtslib) | | [fastqvalidator](#fastqvalidator) | [price](#price) | [r-rinside](#r-rinside) | | [fasttree](#fasttree) | [primer3](#primer3) | [r-rjags](#r-rjags) | | [fastx-toolkit](#fastx-toolkit) | [prinseq-lite](#prinseq-lite) | [r-rjava](#r-rjava) | | [fenics](#fenics) | [printproto](#printproto) | [r-rjson](#r-rjson) | | 
[fermi](#fermi) | [prng](#prng) | [r-rjsonio](#r-rjsonio) | | [fermikit](#fermikit) | [probconsrna](#probconsrna) | [r-rlang](#r-rlang) | | [fermisciencetools](#fermisciencetools) | [prodigal](#prodigal) | [r-rmarkdown](#r-rmarkdown) | | [ferret](#ferret) | [proj](#proj) | [r-rminer](#r-rminer) | | [ffmpeg](#ffmpeg) | [protobuf](#protobuf) | [r-rmpfr](#r-rmpfr) | | [fftw](#fftw) | [proxymngr](#proxymngr) | [r-rmpi](#r-rmpi) | | [figtree](#figtree) | [pruners-ninja](#pruners-ninja) | [r-rmysql](#r-rmysql) | | [fimpute](#fimpute) | [ps-lite](#ps-lite) | [r-rngtools](#r-rngtools) | | [findutils](#findutils) | [psi4](#psi4) | [r-robustbase](#r-robustbase) | | [fio](#fio) | [pslib](#pslib) | [r-rocr](#r-rocr) | | [fish](#fish) | [psm](#psm) | [r-rodbc](#r-rodbc) | | [fixesproto](#fixesproto) | [psmc](#psmc) | [r-rots](#r-rots) | | [flac](#flac) | [pstreams](#pstreams) | [r-roxygen2](#r-roxygen2) | | [flang](#flang) | [pugixml](#pugixml) | [r-rpart](#r-rpart) | | [flann](#flann) | [pumi](#pumi) | [r-rpart-plot](#r-rpart-plot) | | [flash](#flash) | [pv](#pv) | [r-rpostgresql](#r-rpostgresql) | | [flatbuffers](#flatbuffers) | [pvm](#pvm) | [r-rprojroot](#r-rprojroot) | | [flecsale](#flecsale) | [pxz](#pxz) | [r-rsamtools](#r-rsamtools) | | [flecsi](#flecsi) | [py-3to2](#py-3to2) | [r-rsnns](#r-rsnns) | | [flex](#flex) | [py-4suite-xml](#py-4suite-xml) | [r-rsolnp](#r-rsolnp) | | [flint](#flint) | [py-abipy](#py-abipy) | [r-rsqlite](#r-rsqlite) | | [flit](#flit) | [py-adios](#py-adios) | [r-rstan](#r-rstan) | | [fltk](#fltk) | [py-affine](#py-affine) | [r-rstudioapi](#r-rstudioapi) | | [flux-core](#flux-core) | [py-alabaster](#py-alabaster) | [r-rtracklayer](#r-rtracklayer) | | [flux-sched](#flux-sched) | [py-apache-libcloud](#py-apache-libcloud) | [r-rtsne](#r-rtsne) | | [fluxbox](#fluxbox) | [py-apipkg](#py-apipkg) | [r-rvcheck](#r-rvcheck) | | [fmt](#fmt) | [py-appdirs](#py-appdirs) | [r-rvest](#r-rvest) | | [foam-extend](#foam-extend) | [py-appnope](#py-appnope) | 
[r-rzmq](#r-rzmq) | | [folly](#folly) | [py-apscheduler](#py-apscheduler) | [r-s4vectors](#r-s4vectors) | | [font-adobe-100dpi](#font-adobe-100dpi) | [py-argcomplete](#py-argcomplete) | [r-samr](#r-samr) | | [font-adobe-75dpi](#font-adobe-75dpi) | [py-argparse](#py-argparse) | [r-sandwich](#r-sandwich) | | [font-adobe-utopia-100dpi](#font-adobe-utopia-100dpi) | [py-ase](#py-ase) | [r-scales](#r-scales) | | [font-adobe-utopia-75dpi](#font-adobe-utopia-75dpi) | [py-asn1crypto](#py-asn1crypto) | [r-scatterplot3d](#r-scatterplot3d) | | [font-adobe-utopia-type1](#font-adobe-utopia-type1) | [py-astroid](#py-astroid) | [r-sdmtools](#r-sdmtools) | | [font-alias](#font-alias) | [py-astropy](#py-astropy) | [r-segmented](#r-segmented) | | [font-arabic-misc](#font-arabic-misc) | [py-atomicwrites](#py-atomicwrites) | [r-selectr](#r-selectr) | | [font-bh-100dpi](#font-bh-100dpi) | [py-attrs](#py-attrs) | [r-seqinr](#r-seqinr) | | [font-bh-75dpi](#font-bh-75dpi) | [py-autopep8](#py-autopep8) | [r-seqlogo](#r-seqlogo) | | [font-bh-lucidatypewriter-100dpi](#font-bh-lucidatypewriter-100dpi) | [py-avro](#py-avro) | [r-seurat](#r-seurat) | | [font-bh-lucidatypewriter-75dpi](#font-bh-lucidatypewriter-75dpi) | [py-avro-json-serializer](#py-avro-json-serializer) | [r-sf](#r-sf) | | [font-bh-ttf](#font-bh-ttf) | [py-babel](#py-babel) | [r-sfsmisc](#r-sfsmisc) | | [font-bh-type1](#font-bh-type1) | [py-backcall](#py-backcall) | [r-shape](#r-shape) | | [font-bitstream-100dpi](#font-bitstream-100dpi) | [py-backports-abc](#py-backports-abc) | [r-shiny](#r-shiny) | | [font-bitstream-75dpi](#font-bitstream-75dpi) | [py-backports-functools-lru-cache](#py-backports-functools-lru-cache) | [r-shinydashboard](#r-shinydashboard) | | [font-bitstream-speedo](#font-bitstream-speedo) | [py-backports-shutil-get-terminal-size](#py-backports-shutil-get-terminal-size) | [r-shortread](#r-shortread) | | [font-bitstream-type1](#font-bitstream-type1) | 
[r-aldex2](#r-aldex2) | [xplor-nih](#xplor-nih) | | [miniqmc](#miniqmc) | [r-allelicimbalance](#r-allelicimbalance) | [xplsprinters](#xplsprinters) | | [minisign](#minisign) | [r-alpine](#r-alpine) | [xpr](#xpr) | | [minismac2d](#minismac2d) | [r-als](#r-als) | [xprehashprinterlist](#xprehashprinterlist) | | [minitri](#minitri) | [r-alsace](#r-alsace) | [xprop](#xprop) | | [minivite](#minivite) | [r-altcdfenvs](#r-altcdfenvs) | [xproto](#xproto) | | [minixyce](#minixyce) | [r-amap](#r-amap) | [xproxymanagementprotocol](#xproxymanagementprotocol) | | [minuit](#minuit) | [r-ampliqueso](#r-ampliqueso) | [xqilla](#xqilla) | | [mira](#mira) | [r-analysispageserver](#r-analysispageserver) | [xrandr](#xrandr) | | [mirdeep2](#mirdeep2) | [r-anaquin](#r-anaquin) | [xrdb](#xrdb) | | [mitofates](#mitofates) | [r-aneufinder](#r-aneufinder) | [xrefresh](#xrefresh) | | [mitos](#mitos) | [r-aneufinderdata](#r-aneufinderdata) | [xrootd](#xrootd) | | [mkfontdir](#mkfontdir) | [r-animation](#r-animation) | [xrx](#xrx) | | [mkfontscale](#mkfontscale) | [r-annaffy](#r-annaffy) | [xsbench](#xsbench) | | [mlhka](#mlhka) | [r-annotate](#r-annotate) | [xscope](#xscope) | | [moab](#moab) | [r-annotationdbi](#r-annotationdbi) | [xsd](#xsd) | | [modern-wheel](#modern-wheel) | [r-annotationfilter](#r-annotationfilter) | [xsdk](#xsdk) | | [mofem-cephas](#mofem-cephas) | [r-annotationforge](#r-annotationforge) | [xsdktrilinos](#xsdktrilinos) | | [mofem-fracture-module](#mofem-fracture-module) | [r-annotationhub](#r-annotationhub) | [xset](#xset) | | [mofem-minimal-surface-equation](#mofem-minimal-surface-equation) | [r-ape](#r-ape) | [xsetmode](#xsetmode) | | [mofem-users-modules](#mofem-users-modules) | [r-argparse](#r-argparse) | [xsetpointer](#xsetpointer) | | [molcas](#molcas) | [r-assertthat](#r-assertthat) | [xsetroot](#xsetroot) | | [mono](#mono) | [r-backports](#r-backports) | [xsimd](#xsimd) | | [mosh](#mosh) | [r-bamsignals](#r-bamsignals) | [xsm](#xsm) | | [mothur](#mothur) | 
[r-base64](#r-base64) | [xstdcmap](#xstdcmap) | | [motif](#motif) | [r-base64enc](#r-base64enc) | [xtensor](#xtensor) | | [motioncor2](#motioncor2) | [r-bbmisc](#r-bbmisc) | [xtensor-python](#xtensor-python) | | [mount-point-attributes](#mount-point-attributes) | [r-beanplot](#r-beanplot) | [xterm](#xterm) | | [mozjs](#mozjs) | [r-bh](#r-bh) | [xtl](#xtl) | | [mpark-variant](#mpark-variant) | [r-biasedurn](#r-biasedurn) | [xtrans](#xtrans) | | [mpc](#mpc) | [r-bindr](#r-bindr) | [xtrap](#xtrap) | | [mpe2](#mpe2) | [r-bindrcpp](#r-bindrcpp) | [xts](#xts) | | [mpest](#mpest) | [r-biobase](#r-biobase) | [xvidtune](#xvidtune) | | [mpfr](#mpfr) | [r-biocgenerics](#r-biocgenerics) | [xvinfo](#xvinfo) | | [mpibash](#mpibash) | [r-biocinstaller](#r-biocinstaller) | [xwd](#xwd) | | [mpiblast](#mpiblast) | [r-biocparallel](#r-biocparallel) | [xwininfo](#xwininfo) | | [mpich](#mpich) | [r-biocstyle](#r-biocstyle) | [xwud](#xwud) | | [mpifileutils](#mpifileutils) | [r-biom-utils](#r-biom-utils) | [xxhash](#xxhash) | | [mpilander](#mpilander) | [r-biomart](#r-biomart) | [xz](#xz) | | [mpileaks](#mpileaks) | [r-biomformat](#r-biomformat) | [yajl](#yajl) | | [mpip](#mpip) | [r-biostrings](#r-biostrings) | [yambo](#yambo) | | [mpir](#mpir) | [r-biovizbase](#r-biovizbase) | [yaml-cpp](#yaml-cpp) | | [mpix-launch-swift](#mpix-launch-swift) | [r-bit](#r-bit) | [yasm](#yasm) | | [mrbayes](#mrbayes) | [r-bit64](#r-bit64) | [yorick](#yorick) | | [mrnet](#mrnet) | [r-bitops](#r-bitops) | [z3](#z3) | | [mrtrix3](#mrtrix3) | [r-blob](#r-blob) | [zeromq](#zeromq) | | [mscgen](#mscgen) | [r-bookdown](#r-bookdown) | [zfp](#zfp) | | [msgpack-c](#msgpack-c) | [r-boot](#r-boot) | [zip](#zip) | | [mshadow](#mshadow) | [r-brew](#r-brew) | [zlib](#zlib) | | [msmc](#msmc) | [r-broom](#r-broom) | [zoltan](#zoltan) | | [multitail](#multitail) | [r-bsgenome](#r-bsgenome) | [zsh](#zsh) | | [multiverso](#multiverso) | [r-bumphunter](#r-bumphunter) | [zstd](#zstd) |

---

abinit[¶](#abinit)
===

Homepage: *
<http://www.abinit.org>

Spack package: * [abinit/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/abinit/package.py)

Versions: 8.8.2, 8.6.3, 8.2.2, 8.0.8b

Build Dependencies: [netcdf-fortran](#netcdf-fortran), [libxc](#libxc), mpi, [hdf5](#hdf5), scalapack, blas, [fftw](#fftw), lapack

Link Dependencies: [netcdf-fortran](#netcdf-fortran), [libxc](#libxc), mpi, [hdf5](#hdf5), scalapack, blas, [fftw](#fftw), lapack

Description: ABINIT is a package whose main program allows one to find the total energy, charge density and electronic structure of systems made of electrons and nuclei (molecules and periodic solids) within Density Functional Theory (DFT), using pseudopotentials and a planewave or wavelet basis. ABINIT also includes options to optimize the geometry according to the DFT forces and stresses, or to perform molecular dynamics simulations using these forces, or to generate dynamical matrices, Born effective charges, and dielectric tensors, based on Density-Functional Perturbation Theory, and many more properties. Excited states can be computed within the Many-Body Perturbation Theory (the GW approximation and the Bethe-Salpeter equation), and Time-Dependent Density Functional Theory (for molecules). In addition to the main ABINIT code, different utility programs are provided.

---

abyss[¶](#abyss)
===

Homepage: * <http://www.bcgsc.ca/platform/bioinfo/software/abyss>

Spack package: * [abyss/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/abyss/package.py)

Versions: 2.0.2, 1.5.2

Build Dependencies: mpi, [sqlite](#sqlite), [autoconf](#autoconf), [automake](#automake), [libtool](#libtool), [boost](#boost), [sparsehash](#sparsehash)

Link Dependencies: [libtool](#libtool), [boost](#boost), mpi, [sparsehash](#sparsehash), [sqlite](#sqlite)

Description: ABySS is a de novo, parallel, paired-end sequence assembler that is designed for short reads.
The single-processor version is useful for assembling genomes up to 100 Mbases in size.

---

accfft[¶](#accfft)
===

Homepage: * <http://accfft.org>

Spack package: * [accfft/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/accfft/package.py)

Versions: develop

Build Dependencies: [cuda](#cuda), [parallel-netcdf](#parallel-netcdf), [fftw](#fftw), [cmake](#cmake)

Link Dependencies: [cuda](#cuda), [parallel-netcdf](#parallel-netcdf), [fftw](#fftw)

Description: AccFFT extends existing FFT libraries for CUDA-enabled Graphics Processing Units (GPUs) to distributed memory clusters

---

ack[¶](#ack)
===

Homepage: * <http://beyondgrep.com/>

Spack package: * [ack/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/ack/package.py)

Versions: 2.22, 2.18, 2.16, 2.14

Build Dependencies: [perl](#perl)

Link Dependencies: [perl](#perl)

Description: ack 2.14 is a tool like grep, optimized for programmers. Designed for programmers with large heterogeneous trees of source code, ack is written purely in portable Perl 5 and takes advantage of the power of Perl's regular expressions.

---

activeharmony[¶](#activeharmony)
===

Homepage: * <http://www.dyninst.org/harmony>

Spack package: * [activeharmony/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/activeharmony/package.py)

Versions: 4.5

Description: Active Harmony: a framework for auto-tuning (the automated search for values to improve the performance of a target application).

---

adept-utils[¶](#adept-utils)
===

Homepage: * <https://github.com/llnl/adept-utils>

Spack package: * [adept-utils/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/adept-utils/package.py)

Versions: 1.0.1, 1.0

Build Dependencies: [cmake](#cmake), [boost](#boost), mpi

Link Dependencies: [boost](#boost), mpi

Description: Utility libraries for LLNL performance tools.
---

adios[¶](#adios)
===

Homepage: * <http://www.olcf.ornl.gov/center-projects/adios/>

Spack package: * [adios/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/adios/package.py)

Versions: develop, 1.13.1, 1.13.0, 1.12.0, 1.11.1, 1.11.0, 1.10.0, 1.9.0

Build Dependencies: szip, [zfp](#zfp), [lz4](#lz4), [libevpath](#libevpath), [hdf5](#hdf5), [autoconf](#autoconf), [dataspaces](#dataspaces), [zlib](#zlib), [bzip2](#bzip2), mpi, [m4](#m4), [automake](#automake), [libtool](#libtool), [sz](#sz), [python](#python), [c-blosc](#c-blosc), [netcdf](#netcdf)

Link Dependencies: szip, [zfp](#zfp), [lz4](#lz4), [libevpath](#libevpath), [hdf5](#hdf5), [dataspaces](#dataspaces), [zlib](#zlib), [bzip2](#bzip2), mpi, [netcdf](#netcdf), [sz](#sz), [c-blosc](#c-blosc)

Description: The Adaptable IO System (ADIOS) provides a simple, flexible way for scientists to describe the data in their code that may need to be written, read, or processed outside of the running simulation.
---

adios2[¶](#adios2)
===

Homepage: * <https://www.olcf.ornl.gov/center-projects/adios/>

Spack package: * [adios2/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/adios2/package.py)

Versions: develop, 2.2.0, 2.1.0, 2.0.0

Build Dependencies: [zeromq](#zeromq), [bzip2](#bzip2), pkgconfig, mpi, [hdf5](#hdf5), [cmake](#cmake), [py-numpy](#py-numpy), [zfp](#zfp), [adios](#adios), [python](#python), [py-mpi4py](#py-mpi4py)

Link Dependencies: [zeromq](#zeromq), [bzip2](#bzip2), mpi, [zfp](#zfp), [hdf5](#hdf5), [adios](#adios), [python](#python)

Run Dependencies: [py-numpy](#py-numpy), [py-mpi4py](#py-mpi4py), [python](#python)

Description: Next generation of ADIOS developed in the Exascale Computing Program

---

adlbx[¶](#adlbx)
===

Homepage: * <http://swift-lang.org/Swift-T>

Spack package: * [adlbx/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/adlbx/package.py)

Versions: 0.9.1, 0.8.0

Build Dependencies: [exmcutils](#exmcutils), mpi

Link Dependencies: [exmcutils](#exmcutils), mpi

Description: ADLB/X: Master-worker library + work stealing and data dependencies

---

adol-c[¶](#adol-c)
===

Homepage: * <https://projects.coin-or.org/ADOL-C>

Spack package: * [adol-c/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/adol-c/package.py)

Versions: develop, 2.6.3, 2.6.2, 2.6.1, 2.5.2

Build Dependencies: [autoconf](#autoconf), [boost](#boost), [libtool](#libtool), [automake](#automake), [m4](#m4)

Link Dependencies: [boost](#boost)

Description: A package for the automatic differentiation of first and higher derivatives of vector functions in C and C++ programs by operator overloading.
---

aegean[¶](#aegean)
===

Homepage: * <http://brendelgroup.github.io/AEGeAn/>

Spack package: * [aegean/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/aegean/package.py)

Versions: 0.15.2

Build Dependencies: [genometools](#genometools)

Link Dependencies: [genometools](#genometools)

Description: The AEGeAn Toolkit is designed for the Analysis and Evaluation of Genome Annotations. The toolkit includes a variety of analysis programs as well as a C library whose API provides access to AEGeAn's core functions and data structures.

---

aida[¶](#aida)
===

Homepage: * <http://aida.freehep.org/>

Spack package: * [aida/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/aida/package.py)

Versions: 3.2.1

Description: Abstract Interfaces for Data Analysis

---

albany[¶](#albany)
===

Homepage: * <http://gahansen.github.io/Albany>

Spack package: * [albany/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/albany/package.py)

Versions: develop

Build Dependencies: [cmake](#cmake), mpi, [trilinos](#trilinos)

Link Dependencies: mpi, [trilinos](#trilinos)

Description: Albany is an implicit, unstructured grid, finite element code for the solution and analysis of multiphysics problems. The Albany repository on the GitHub site contains hundreds of regression tests and examples that demonstrate the code's capabilities on a wide variety of problems including fluid mechanics, solid mechanics (elasticity and plasticity), ice-sheet flow, quantum device modeling, and many other applications.
---

albert[¶](#albert)
===

Homepage: * <https://people.cs.clemson.edu/~dpj/albertstuff/albert.html>

Spack package: * [albert/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/albert/package.py)

Versions: 4.0a_opt4

Build Dependencies: [readline](#readline)

Link Dependencies: [readline](#readline)

Description: Albert is an interactive program to assist the specialist in the study of nonassociative algebra.

---

alglib[¶](#alglib)
===

Homepage: * <http://www.alglib.net>

Spack package: * [alglib/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/alglib/package.py)

Versions: 3.11.0

Description: ALGLIB is a cross-platform numerical analysis and data processing library.

---

allinea-forge[¶](#allinea-forge)
===

Homepage: * <http://www.allinea.com/products/develop-allinea-forge>

Spack package: * [allinea-forge/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/allinea-forge/package.py)

Versions: 6.0.4

Description: Allinea Forge is the complete toolsuite for software development - with everything needed to debug, profile, optimize, edit and build C, C++ and Fortran applications on Linux for high performance - from single threads through to complex parallel HPC codes with MPI, OpenMP, threads or CUDA.

---

allinea-reports[¶](#allinea-reports)
===

Homepage: * <http://www.allinea.com/products/allinea-performance-reports>

Spack package: * [allinea-reports/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/allinea-reports/package.py)

Versions: 6.0.4

Description: Allinea Performance Reports are the most effective way to characterize and understand the performance of HPC application runs.
One single-page HTML report elegantly answers a range of vital questions for any HPC site

---

allpaths-lg[¶](#allpaths-lg)
===

Homepage: * <http://www.broadinstitute.org/software/allpaths-lg/blog/>

Spack package: * [allpaths-lg/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/allpaths-lg/package.py)

Versions: 52488

Description: ALLPATHS-LG is our original short read assembler and it works on both small and large (mammalian size) genomes.

---

alquimia[¶](#alquimia)
===

Homepage: * <https://github.com/LBL-EESA/alquimia-dev>

Spack package: * [alquimia/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/alquimia/package.py)

Versions: develop, xsdk-0.3.0, xsdk-0.2.0

Build Dependencies: [cmake](#cmake), mpi, [petsc](#petsc), [pflotran](#pflotran), [hdf5](#hdf5)

Link Dependencies: mpi, [petsc](#petsc), [pflotran](#pflotran), [hdf5](#hdf5)

Description: Alquimia is an interface that exposes the capabilities of mature geochemistry codes such as CrunchFlow and PFLOTRAN

---

alsa-lib[¶](#alsa-lib)
===

Homepage: * <https://www.alsa-project.org>

Spack package: * [alsa-lib/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/alsa-lib/package.py)

Versions: 1.1.4.1

Description: The Advanced Linux Sound Architecture (ALSA) provides audio and MIDI functionality to the Linux operating system. alsa-lib contains the user space library that developers compile ALSA applications against.
---

aluminum[¶](#aluminum)
===

Homepage: * <https://github.com/LLNL/Aluminum>

Spack package: * [aluminum/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/aluminum/package.py)

Versions: 0.1, master

Build Dependencies: [cub](#cub), mpi, [cudnn](#cudnn), [cuda](#cuda), [cmake](#cmake), [hwloc](#hwloc), [nccl](#nccl)

Link Dependencies: [cuda](#cuda), [cub](#cub), mpi, [cudnn](#cudnn), [hwloc](#hwloc), [nccl](#nccl)

Description: Aluminum provides a generic interface to high-performance communication libraries, with a focus on allreduce algorithms. Blocking and non-blocking algorithms and GPU-aware algorithms are supported. Aluminum also contains custom implementations of select algorithms to optimize for certain situations.

---

amg[¶](#amg)
===

Homepage: * <https://computation.llnl.gov/projects/co-design/amg2013>

Spack package: * [amg/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/amg/package.py)

Versions: develop, 1.1, 1.0

Build Dependencies: mpi

Link Dependencies: mpi

Description: AMG is a parallel algebraic multigrid solver for linear systems arising from problems on unstructured grids. The driver provided with AMG builds linear systems for various 3-dimensional problems.

---

amg2013[¶](#amg2013)
===

Homepage: * <https://computation.llnl.gov/projects/co-design/amg2013>

Spack package: * [amg2013/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/amg2013/package.py)

Versions: master

Build Dependencies: mpi

Link Dependencies: mpi

Description: AMG2013 is a parallel algebraic multigrid solver for linear systems arising from problems on unstructured grids. It has been derived directly from the BoomerAMG solver in the hypre library, a large linear solver library that is being developed in the Center for Applied Scientific Computing (CASC) at LLNL.
---

amp[¶](#amp)
===

Homepage: * <https://bitbucket.org/AdvancedMultiPhysics/amp>

Spack package: * [amp/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/amp/package.py)

Versions: develop

Build Dependencies: [zlib](#zlib), mpi, [trilinos](#trilinos), [silo](#silo), [hdf5](#hdf5), [cmake](#cmake), [boost](#boost), blas, [petsc](#petsc), lapack

Link Dependencies: [zlib](#zlib), mpi, [trilinos](#trilinos), [silo](#silo), [hdf5](#hdf5), [boost](#boost), blas, [petsc](#petsc), lapack

Description: The Advanced Multi-Physics (AMP) package is an open source parallel object-oriented computational framework that is designed with single and multi-domain multi-physics applications in mind. AMP can be used to build powerful and flexible multi-physics simulation algorithms from lightweight operator, solver, linear algebra, material database, discretization, and meshing components. The AMP design is meant to enable existing investments in application codes to be leveraged without having to adopt dramatically different data structures while developing new computational science applications. Application components are represented as discrete mathematical operators that only require a minimal interface and through operator composition the incremental development of complex parallel applications is enabled. AMP is meant to allow application domain scientists, computer scientists and mathematicians to simulate, collaborate, and conduct research on various aspects of massively parallel simulation algorithms.
---

ampliconnoise[¶](#ampliconnoise)
===

Homepage: * <https://code.google.com/archive/p/ampliconnoise/>

Spack package: * [ampliconnoise/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/ampliconnoise/package.py)

Versions: 1.29

Build Dependencies: mpi, [gsl](#gsl)

Link Dependencies: mpi, [gsl](#gsl)

Description: AmpliconNoise is a collection of programs for the removal of noise from 454 sequenced PCR amplicons.

---

amrex[¶](#amrex)
===

Homepage: * <https://amrex-codes.github.io/amrex/>

Spack package: * [amrex/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/amrex/package.py)

Versions: develop, 18.10.1, 18.10, 18.09.1

Build Dependencies: [cmake](#cmake), mpi, [python](#python)

Link Dependencies: mpi

Description: AMReX is a publicly available software framework designed for building massively parallel block-structured adaptive mesh refinement (AMR) applications.

---

amrvis[¶](#amrvis)
===

Homepage: * <https://github.com/AMReX-Codes/Amrvis>

Spack package: * [amrvis/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/amrvis/package.py)

Versions: master

Build Dependencies: [libice](#libice), [libxt](#libxt), mpi, [libx11](#libx11), [libxpm](#libxpm), [motif](#motif), [gmake](#gmake), [libxext](#libxext), [bison](#bison), [flex](#flex), [libsm](#libsm)

Link Dependencies: [libice](#libice), [libxt](#libxt), mpi, [libx11](#libx11), [libxpm](#libxpm), [motif](#motif), [libxext](#libxext), [bison](#bison), [flex](#flex), [libsm](#libsm)

Description: Amrvis is a visualization package specifically designed to read and display output and profiling data from codes built on the AMReX framework.
---

andi[¶](#andi)
===

Homepage: * <https://github.com/EvolBioInf/andi>

Spack package: * [andi/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/andi/package.py)

Versions: 0.10

Build Dependencies: [m4](#m4), [libdivsufsort](#libdivsufsort), [autoconf](#autoconf), [automake](#automake), [libtool](#libtool), [gsl](#gsl)

Link Dependencies: [gsl](#gsl), [libdivsufsort](#libdivsufsort)

Description: andi is used for estimating the evolutionary distance between closely related genomes.

---

angsd[¶](#angsd)
===

Homepage: * <https://github.com/ANGSD/angsd>

Spack package: * [angsd/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/angsd/package.py)

Versions: 0.921, 0.919

Build Dependencies: [htslib](#htslib)

Link Dependencies: [htslib](#htslib)

Description: Angsd is a program for analysing NGS data. The software can handle a number of different input types from mapped reads to imputed genotype probabilities. Most methods take genotype uncertainty into account instead of basing the analysis on called genotypes. This is especially useful for low and medium depth data.
---

ant[¶](#ant)
===

Homepage: * <http://ant.apache.org/>

Spack package: * [ant/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/ant/package.py)

Versions: 1.10.0, 1.9.9, 1.9.8, 1.9.7, 1.9.6

Build Dependencies: java

Link Dependencies: java

Description: Apache Ant is a Java library and command-line tool whose mission is to drive processes described in build files as targets and extension points dependent upon each other

---

antlr[¶](#antlr)
===

Homepage: * <http://www.antlr2.org/>

Spack package: * [antlr/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/antlr/package.py)

Versions: 2.7.7

Build Dependencies: java, [python](#python)

Link Dependencies: [python](#python)

Run Dependencies: java

Description: ANTLR (ANother Tool for Language Recognition) is a powerful parser generator for reading, processing, executing, or translating structured text or binary files. It's widely used to build languages, tools, and frameworks. From a grammar, ANTLR generates a parser that can build and walk parse trees.

---

ants[¶](#ants)
===

Homepage: * <http://stnava.github.io/ANTs/>

Spack package: * [ants/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/ants/package.py)

Versions: 2.2.0

Build Dependencies: [cmake](#cmake)

Description: ANTs extracts information from complex datasets that include imaging. Paired with ANTsR (answer), ANTs is useful for managing, interpreting and visualizing multidimensional data. ANTs is popularly considered a state-of-the-art medical image registration and segmentation toolkit. ANTs depends on the Insight ToolKit (ITK), a widely used medical image processing library to which ANTs developers contribute.
---

ape[¶](#ape)
===

Homepage: * <http://www.tddft.org/programs/APE/>

Spack package: * [ape/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/ape/package.py)

Versions: 2.2.1

Build Dependencies: [libxc](#libxc), [gsl](#gsl)

Link Dependencies: [libxc](#libxc), [gsl](#gsl)

Description: A tool for generating atomic pseudopotentials within a Density-Functional Theory framework

---

aperture-photometry[¶](#aperture-photometry)
===

Homepage: * <http://www.aperturephotometry.org/aptool/>

Spack package: * [aperture-photometry/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/aperture-photometry/package.py)

Versions: 2.7.2

Build Dependencies: java

Link Dependencies: java

Description: Aperture Photometry Tool (APT) is software for astronomical research

---

apex[¶](#apex)
===

Homepage: * <http://github.com/khuck/xpress-apex>

Spack package: * [apex/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/apex/package.py)

Versions: 0.1

Build Dependencies: [cmake](#cmake), [boost](#boost), [binutils](#binutils), [ompt-openmp](#ompt-openmp), [activeharmony](#activeharmony)

Link Dependencies: [boost](#boost), [binutils](#binutils), [ompt-openmp](#ompt-openmp), [activeharmony](#activeharmony)

Description:

---

apple-libunwind[¶](#apple-libunwind)
===

Homepage: * <https://opensource.apple.com/source/libunwind/libunwind-35.3/>

Spack package: * [apple-libunwind/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/apple-libunwind/package.py)

Description: Placeholder package for Apple's analogue to non-GNU libunwind

---

applewmproto[¶](#applewmproto)
===

Homepage: * <http://cgit.freedesktop.org/xorg/proto/applewmproto>

Spack package: * [applewmproto/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/applewmproto/package.py)

Versions: 1.4.2

Build Dependencies: [util-macros](#util-macros), pkgconfig

Description: Apple
Rootless Window Management Extension. This extension defines a protocol that allows X window managers to better interact with the Mac OS X Aqua user interface when running X11 in a rootless mode.

---

appres[¶](#appres)
===

Homepage: * <http://cgit.freedesktop.org/xorg/app/appres>

Spack package: * [appres/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/appres/package.py)

Versions: 1.0.4

Build Dependencies: [xproto](#xproto), [util-macros](#util-macros), pkgconfig, [libx11](#libx11), [libxt](#libxt)

Link Dependencies: [libxt](#libxt), [libx11](#libx11)

Description: The appres program prints the resources seen by an application (or subhierarchy of an application) with the specified class and instance names. It can be used to determine which resources a particular program will load.

---

apr[¶](#apr)
===

Homepage: * <https://apr.apache.org/>

Spack package: * [apr/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/apr/package.py)

Versions: 1.6.2, 1.5.2

Description: Apache portable runtime.
---

apr-util[¶](#apr-util)
===

Homepage: * <https://apr.apache.org/>

Spack package: * [apr-util/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/apr-util/package.py)

Versions: 1.6.0, 1.5.4

Build Dependencies: [libiconv](#libiconv), [gdbm](#gdbm), [sqlite](#sqlite), [openssl](#openssl), [postgresql](#postgresql), [apr](#apr), [expat](#expat), [unixodbc](#unixodbc)

Link Dependencies: [libiconv](#libiconv), [gdbm](#gdbm), [sqlite](#sqlite), [openssl](#openssl), [postgresql](#postgresql), [apr](#apr), [expat](#expat), [unixodbc](#unixodbc)

Description: Apache Portable Runtime Utility

---

aragorn[¶](#aragorn)
===

Homepage: * <http://mbio-serv2.mbioekol.lu.se/ARAGORN>

Spack package: * [aragorn/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/aragorn/package.py)

Versions: 1.2.38

Description: ARAGORN, a program to detect tRNA genes and tmRNA genes in nucleotide sequences.

---

archer[¶](#archer)
===

Homepage: * <https://github.com/PRUNERS/ARCHER>

Spack package: * [archer/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/archer/package.py)

Versions: 1.0.0

Build Dependencies: [ninja](#ninja), [cmake](#cmake), [llvm-openmp-ompt](#llvm-openmp-ompt), [llvm](#llvm)

Link Dependencies: [llvm-openmp-ompt](#llvm-openmp-ompt), [llvm](#llvm)

Description: ARCHER, a data race detection tool for large OpenMP applications.

---

argobots[¶](#argobots)
===

Homepage: * <http://www.argobots.org/>

Spack package: * [argobots/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/argobots/package.py)

Versions: 1.0b1, 1.0a1

Description: Argobots, which was developed as a part of the Argo project, is a lightweight runtime system that supports integrated computation and data movement with massive concurrency.
It will directly leverage the lowest- level constructs in the hardware and OS: lightweight notification mechanisms, data movement engines, memory mapping, and data placement strategies. It consists of an execution model and a memory model. --- argp-standalone[¶](#argp-standalone) === Homepage: * <https://www.lysator.liu.se/~nisse/miscSpack package: * [argp-standalone/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/argp-standalone/package.py) Versions: 1.3 Description: Standalone version of the argp interface from glibc for parsing unix- style arguments. --- argtable[¶](#argtable) === Homepage: * <http://argtable.sourceforge.net/Spack package: * [argtable/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/argtable/package.py) Versions: 2-13 Description: Argtable is an ANSI C library for parsing GNU style command line options with a minimum of fuss. --- arlecore[¶](#arlecore) === Homepage: * <http://cmpg.unibe.ch/software/arlequin35/Spack package: * [arlecore/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/arlecore/package.py) Versions: 3.5.2.2 Build Dependencies: [r](#r) Run Dependencies: [r](#r) Description: An Integrated Software for Population Genetics Data Analysis --- armadillo[¶](#armadillo) === Homepage: * <http://arma.sourceforge.net/Spack package: * [armadillo/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/armadillo/package.py) Versions: 8.100.1, 7.950.1, 7.900.1, 7.500.0, 7.200.2, 7.200.1 Build Dependencies: [superlu](#superlu), [arpack-ng](#arpack-ng), [hdf5](#hdf5), [cmake](#cmake), blas, lapack Link Dependencies: [superlu](#superlu), blas, lapack, [arpack-ng](#arpack-ng), [hdf5](#hdf5) Description: Armadillo is a high quality linear algebra library (matrix maths) for the C++ language, aiming towards a good balance between speed and ease of use. 
--- arpack-ng[¶](#arpack-ng) === Homepage: * <https://github.com/opencollab/arpack-ngSpack package: * [arpack-ng/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/arpack-ng/package.py) Versions: develop, 3.6.3, 3.6.2, 3.6.0, 3.5.0, 3.4.0, 3.3.0 Build Dependencies: mpi, [autoconf](#autoconf), [cmake](#cmake), [automake](#automake), [libtool](#libtool), blas, lapack Link Dependencies: mpi, blas, lapack Description: ARPACK-NG is a collection of Fortran77 subroutines designed to solve large scale eigenvalue problems. Important Features: * Reverse Communication Interface. * Single and Double Precision Real Arithmetic Versions for Symmetric, Non-symmetric, Standard or Generalized Problems. * Single and Double Precision Complex Arithmetic Versions for Standard or Generalized Problems. * Routines for Banded Matrices - Standard or Generalized Problems. * Routines for The Singular Value Decomposition. * Example driver routines that may be used as templates to implement numerous Shift-Invert strategies for all problem types, data types and precision. This project is a joint project between Debian, Octave and Scilab in order to provide a common and maintained version of arpack. Indeed, no single release has been published by Rice University for the last few years, and since many software packages (Octave, Scilab, R, Matlab...) forked it and implemented their own modifications, arpack-ng aims to tackle this by providing a common repository and maintained versions. arpack-ng is replacing arpack almost everywhere.
--- arrow[¶](#arrow) === Homepage: * <http://arrow.apache.orgSpack package: * [arrow/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/arrow/package.py) Versions: 0.11.0, 0.9.0, 0.8.0 Build Dependencies: [zlib](#zlib), [zstd](#zstd), [flatbuffers](#flatbuffers), [cmake](#cmake), [boost](#boost), [snappy](#snappy), [py-numpy](#py-numpy), [python](#python), [rapidjson](#rapidjson) Link Dependencies: [zlib](#zlib), [zstd](#zstd), [snappy](#snappy), [boost](#boost), [flatbuffers](#flatbuffers), [py-numpy](#py-numpy), [python](#python), [rapidjson](#rapidjson) Description: A cross-language development platform for in-memory data. This package contains the C++ bindings. --- ascent[¶](#ascent) === Homepage: * <https://github.com/Alpine-DAV/ascentSpack package: * [ascent/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/ascent/package.py) Versions: develop Build Dependencies: [py-mpi4py](#py-mpi4py), [vtkh](#vtkh), mpi, [cmake](#cmake), [py-numpy](#py-numpy), [conduit](#conduit), [adios](#adios), [python](#python), [py-sphinx](#py-sphinx) Link Dependencies: [vtkh](#vtkh), mpi, [cmake](#cmake), [py-mpi4py](#py-mpi4py), [adios](#adios), [python](#python), [conduit](#conduit) Run Dependencies: [py-numpy](#py-numpy) Description: Ascent is an open source many-core capable lightweight in situ visualization and analysis infrastructure for multi-physics HPC simulations. 
--- asciidoc[¶](#asciidoc) === Homepage: * <http://asciidoc.orgSpack package: * [asciidoc/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/asciidoc/package.py) Versions: 8.6.9 Build Dependencies: [libxslt](#libxslt), [docbook-xml](#docbook-xml), [libxml2](#libxml2), [docbook-xsl](#docbook-xsl) Link Dependencies: [libxslt](#libxslt), [docbook-xml](#docbook-xml), [libxml2](#libxml2), [docbook-xsl](#docbook-xsl) Description: A presentable text document format for writing articles, UNIX man pages and other small to medium sized documents. --- aspa[¶](#aspa) === Homepage: * <http://www.exmatex.org/aspa.htmlSpack package: * [aspa/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/aspa/package.py) Versions: master Build Dependencies: mpi, blas, lapack, [hdf5](#hdf5) Link Dependencies: mpi, blas, lapack, [hdf5](#hdf5) Description: A fundamental premise in ExMatEx is that scale-bridging performed in heterogeneous MPMD materials science simulations will place important demands upon the exascale ecosystem that need to be identified and quantified. --- aspcud[¶](#aspcud) === Homepage: * <https://potassco.org/aspcudSpack package: * [aspcud/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/aspcud/package.py) Versions: 1.9.4 Build Dependencies: [clingo](#clingo), [cmake](#cmake), [boost](#boost), [re2c](#re2c) Link Dependencies: [clingo](#clingo) Description: Aspcud: Package dependency solver Aspcud is a solver for package dependencies. A package universe and a request to install, remove, or upgrade packages have to be encoded in the CUDF format. Such a CUDF document can then be passed to aspcud along with an optimization criteria to obtain a solution to the given package problem. 
--- aspect[¶](#aspect) === Homepage: * <https://aspect.geodynamics.orgSpack package: * [aspect/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/aspect/package.py) Versions: develop, 2.0.1, 2.0.0 Build Dependencies: [dealii-parameter-gui](#dealii-parameter-gui), [cmake](#cmake), [dealii](#dealii) Link Dependencies: [dealii-parameter-gui](#dealii-parameter-gui), [dealii](#dealii) Description: Parallel, extendible finite element code to simulate convection in the Earth's mantle and elsewhere. --- aspell[¶](#aspell) === Homepage: * <http://aspell.net/Spack package: * [aspell/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/aspell/package.py) Versions: 0.60.6.1 Description: GNU Aspell is a Free and Open Source spell checker designed to eventually replace Ispell. --- aspell6-de[¶](#aspell6-de) === Homepage: * <http://aspell.net/Spack package: * [aspell6-de/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/aspell6-de/package.py) Versions: 6-de-20030222-1 Build Dependencies: [aspell](#aspell) Link Dependencies: [aspell](#aspell) Description: German (de) dictionary for aspell. --- aspell6-en[¶](#aspell6-en) === Homepage: * <http://aspell.net/Spack package: * [aspell6-en/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/aspell6-en/package.py) Versions: 2017.01.22-0 Build Dependencies: [aspell](#aspell) Link Dependencies: [aspell](#aspell) Description: English (en) dictionary for aspell. --- aspell6-es[¶](#aspell6-es) === Homepage: * <http://aspell.net/Spack package: * [aspell6-es/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/aspell6-es/package.py) Versions: 1.11-2 Build Dependencies: [aspell](#aspell) Link Dependencies: [aspell](#aspell) Description: Spanish (es) dictionary for aspell. 
--- aspera-cli[¶](#aspera-cli) === Homepage: * <https://asperasoft.comSpack package: * [aspera-cli/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/aspera-cli/package.py) Versions: 3.7.7 Description: The Aspera CLI client for the Fast and Secure Protocol (FASP). --- assimp[¶](#assimp) === Homepage: * <https://www.assimp.orgSpack package: * [assimp/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/assimp/package.py) Versions: 4.0.1 Build Dependencies: [cmake](#cmake), [boost](#boost) Link Dependencies: [boost](#boost) Description: Open Asset Import Library (Assimp) is a portable Open Source library to import various well-known 3D model formats in a uniform manner. --- astra[¶](#astra) === Homepage: * <http://www.desy.de/~mpyflo/Spack package: * [astra/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/astra/package.py) Versions: 2016-11-30 Description: A Space Charge Tracking Algorithm. --- astral[¶](#astral) === Homepage: * <https://github.com/smirarab/ASTRALSpack package: * [astral/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/astral/package.py) Versions: 4.10.7 Build Dependencies: java, [zip](#zip) Run Dependencies: java Description: ASTRAL is a tool for estimating an unrooted species tree given a set of unrooted gene trees. --- astyle[¶](#astyle) === Homepage: * <http://astyle.sourceforge.net/Spack package: * [astyle/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/astyle/package.py) Versions: 3.1, 3.0.1, 2.06, 2.05.1, 2.04 Description: A Free, Fast, and Small Automatic Formatter for C, C++, C++/CLI, Objective-C, C#, and Java Source Code. 
--- at-spi2-atk[¶](#at-spi2-atk) === Homepage: * <http://www.linuxfromscratch.org/blfs/view/cvs/x/at-spi2-atk.htmlSpack package: * [at-spi2-atk/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/at-spi2-atk/package.py) Versions: 2.26.2, 2.26.1 Build Dependencies: [ninja](#ninja), [meson](#meson), pkgconfig, [atk](#atk), [at-spi2-core](#at-spi2-core) Link Dependencies: [at-spi2-core](#at-spi2-core), [atk](#atk) Description: The At-Spi2 Atk package contains a library that bridges ATK to At-Spi2 D-Bus service. --- at-spi2-core[¶](#at-spi2-core) === Homepage: * <http://www.linuxfromscratch.org/blfs/view/cvs/x/at-spi2-core.htmlSpack package: * [at-spi2-core/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/at-spi2-core/package.py) Versions: 2.28.0 Build Dependencies: [dbus](#dbus), [glib](#glib), [ninja](#ninja), [recordproto](#recordproto), [meson](#meson), pkgconfig, [libxi](#libxi), [inputproto](#inputproto), [libx11](#libx11), [fixesproto](#fixesproto), [libxtst](#libxtst), [python](#python) Link Dependencies: [dbus](#dbus), [glib](#glib), [libx11](#libx11), [libxi](#libxi) Description: The At-Spi2 Core package provides a Service Provider Interface for the Assistive Technologies available on the GNOME platform and a library against which applications can be linked. --- atk[¶](#atk) === Homepage: * <https://developer.gnome.org/atk/Spack package: * [atk/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/atk/package.py) Versions: 2.28.1, 2.20.0, 2.14.0 Build Dependencies: [glib](#glib), [gettext](#gettext), [meson](#meson), pkgconfig, [gobject-introspection](#gobject-introspection) Link Dependencies: [glib](#glib), [gettext](#gettext), [gobject-introspection](#gobject-introspection) Description: ATK provides the set of accessibility interfaces that are implemented by other toolkits and applications. 
Using the ATK interfaces, accessibility tools have full access to view and control running applications. --- atlas[¶](#atlas) === Homepage: * <http://math-atlas.sourceforge.net/Spack package: * [atlas/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/atlas/package.py) Versions: 3.11.39, 3.11.34, 3.10.3, 3.10.2 Description: Automatically Tuned Linear Algebra Software, generic shared. ATLAS is an approach for the automatic generation and optimization of numerical software. Currently ATLAS supplies optimized versions for the complete set of linear algebra kernels known as the Basic Linear Algebra Subroutines (BLAS), and a subset of the linear algebra routines in the LAPACK library. --- atom-dft[¶](#atom-dft) === Homepage: * <https://departments.icmab.es/leem/siesta/Pseudopotentials/Spack package: * [atom-dft/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/atom-dft/package.py) Versions: 4.2.6 Build Dependencies: [libgridxc](#libgridxc), [xmlf90](#xmlf90) Link Dependencies: [libgridxc](#libgridxc), [xmlf90](#xmlf90) Description: ATOM is a program for DFT calculations in atoms and pseudopotential generation. --- atompaw[¶](#atompaw) === Homepage: * <http://users.wfu.edu/natalie/papers/pwpaw/man.htmlSpack package: * [atompaw/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/atompaw/package.py) Versions: 4.0.0.13, 3.1.0.3 Build Dependencies: [libxc](#libxc), blas, lapack Link Dependencies: [libxc](#libxc), blas, lapack Description: A Projector Augmented Wave (PAW) code for generating atom-centered functions.
Official website: http://pwpaw.wfu.edu User's guide: ~/doc/atompaw-usersguide.pdf --- atop[¶](#atop) === Homepage: * <http://www.atoptool.nl/index.phpSpack package: * [atop/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/atop/package.py) Versions: 2.2-3 Build Dependencies: [zlib](#zlib), [ncurses](#ncurses) Link Dependencies: [zlib](#zlib), [ncurses](#ncurses) Description: Atop is an ASCII full-screen performance monitor for Linux --- augustus[¶](#augustus) === Homepage: * <http://bioinf.uni-greifswald.de/augustus/Spack package: * [augustus/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/augustus/package.py) Versions: 3.3.1, 3.3, 3.2.3 Build Dependencies: [zlib](#zlib), [boost](#boost), [bamtools](#bamtools), [gsl](#gsl) Link Dependencies: [zlib](#zlib), [boost](#boost), [bamtools](#bamtools), [gsl](#gsl) Description: AUGUSTUS is a program that predicts genes in eukaryotic genomic sequences --- autoconf[¶](#autoconf) === Homepage: * <https://www.gnu.org/software/autoconf/Spack package: * [autoconf/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/autoconf/package.py) Versions: 2.69, 2.62, 2.59, 2.13 Build Dependencies: [perl](#perl), [m4](#m4) Run Dependencies: [perl](#perl), [m4](#m4) Description: Autoconf -- system configuration part of autotools --- autodock-vina[¶](#autodock-vina) === Homepage: * <http://vina.scripps.edu/Spack package: * [autodock-vina/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/autodock-vina/package.py) Versions: 1_1_2 Build Dependencies: [boost](#boost) Link Dependencies: [boost](#boost) Description: AutoDock Vina is an open-source program for doing molecular docking --- autofact[¶](#autofact) === Homepage: * <http://megasun.bch.umontreal.ca/Software/AutoFACT.htmSpack package: * 
[autofact/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/autofact/package.py) Versions: 3_4 Run Dependencies: [perl](#perl), [blast-plus](#blast-plus), [perl-io-string](#perl-io-string), [perl-bio-perl](#perl-bio-perl), [perl-lwp](#perl-lwp) Description: An Automatic Functional Annotation and Classification Tool --- autogen[¶](#autogen) === Homepage: * <https://www.gnu.org/software/autogen/index.htmlSpack package: * [autogen/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/autogen/package.py) Versions: 5.18.12 Build Dependencies: [guile](#guile), pkgconfig, [libxml2](#libxml2) Link Dependencies: [guile](#guile), [libxml2](#libxml2) Description: AutoGen is a tool designed to simplify the creation and maintenance of programs that contain large amounts of repetitious text. It is especially valuable in programs that have several blocks of text that must be kept synchronized. --- automaded[¶](#automaded) === Homepage: * <https://github.com/llnl/AutomaDeDSpack package: * [automaded/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/automaded/package.py) Versions: 1.0 Build Dependencies: [cmake](#cmake), [boost](#boost), mpi, [callpath](#callpath) Link Dependencies: [callpath](#callpath), [boost](#boost), mpi Description: AutomaDeD (Automata-based Debugging for Dissimilar parallel tasks) is a tool for automatic diagnosis of performance and correctness problems in MPI applications. It creates control-flow models of each MPI process and, when a failure occurs, these models are leveraged to find the origin of problems automatically. MPI calls are intercepted (using wrappers) to create the models. When an MPI application hangs, AutomaDeD creates a progress-dependence graph that helps finding the process (or group of processes) that caused the hang. 
--- automake[¶](#automake) === Homepage: * <http://www.gnu.org/software/automake/Spack package: * [automake/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/automake/package.py) Versions: 1.16.1, 1.15.1, 1.15, 1.14.1, 1.11.6 Build Dependencies: [perl](#perl), [autoconf](#autoconf) Run Dependencies: [perl](#perl) Description: Automake -- make file builder part of autotools --- axel[¶](#axel) === Homepage: * <https://github.com/axel-download-accelerator/axelSpack package: * [axel/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/axel/package.py) Versions: 2.16.1 Build Dependencies: [pkgconf](#pkgconf), [m4](#m4), [openssl](#openssl), [libtool](#libtool), [autoconf](#autoconf), [automake](#automake), [gettext](#gettext) Link Dependencies: [gettext](#gettext), [openssl](#openssl) Description: Axel is a light command line download accelerator for Linux and Unix --- axl[¶](#axl) === Homepage: * <https://github.com/ECP-VeloC/AXLSpack package: * [axl/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/axl/package.py) Versions: 0.1.1, master Build Dependencies: [cmake](#cmake), [kvtree](#kvtree) Link Dependencies: [kvtree](#kvtree) Description: Asynchronous transfer library --- bamdst[¶](#bamdst) === Homepage: * <https://github.com/shiquan/bamdstSpack package: * [bamdst/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/bamdst/package.py) Versions: master Build Dependencies: [zlib](#zlib) Link Dependencies: [zlib](#zlib) Description: Bamdst is a lightweight bam file depth statistical tool.
--- bamtools[¶](#bamtools) === Homepage: * <https://github.com/pezmaster31/bamtoolsSpack package: * [bamtools/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/bamtools/package.py) Versions: 2.5.1, 2.5.0, 2.4.1, 2.4.0, 2.3.0, 2.2.3 Build Dependencies: [cmake](#cmake) Link Dependencies: [zlib](#zlib) Description: C++ API & command-line toolkit for working with BAM data. --- bamutil[¶](#bamutil) === Homepage: * <http://genome.sph.umich.edu/wiki/BamUtilSpack package: * [bamutil/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/bamutil/package.py) Versions: 1.0.13 Build Dependencies: [zlib](#zlib) Link Dependencies: [zlib](#zlib) Description: bamUtil is a repository that contains several programs that perform operations on SAM/BAM files. All of these programs are built into a single executable, bam. --- barrnap[¶](#barrnap) === Homepage: * <https://github.com/tseemann/barrnapSpack package: * [barrnap/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/barrnap/package.py) Versions: 0.8 Run Dependencies: [hmmer](#hmmer) Description: Barrnap predicts the location of ribosomal RNA genes in genomes. --- bash[¶](#bash) === Homepage: * <https://www.gnu.org/software/bash/Spack package: * [bash/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/bash/package.py) Versions: 4.4.12, 4.4, 4.3 Build Dependencies: [readline](#readline), [ncurses](#ncurses) Link Dependencies: [readline](#readline), [ncurses](#ncurses) Description: The GNU Project's Bourne Again SHell. 
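The bash entry above describes the GNU Bourne Again SHell itself; as a tiny illustration of one of the shell features it provides (parameter expansion), a sketch assuming `bash` or any POSIX-compatible shell is on the PATH:

```shell
# POSIX parameter expansion: ${var##pattern} strips the longest
# prefix matching the pattern, leaving just the basename of a path.
path="/usr/local/bin/spack"
echo "${path##*/}"   # prints: spack
```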
--- bash-completion[¶](#bash-completion) === Homepage: * <https://github.com/scop/bash-completionSpack package: * [bash-completion/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/bash-completion/package.py) Versions: develop, 2.7, 2.3 Build Dependencies: [autoconf](#autoconf), [automake](#automake), [libtool](#libtool) Run Dependencies: [bash](#bash) Description: Programmable completion functions for bash. --- bats[¶](#bats) === Homepage: * <https://github.com/sstephenson/batsSpack package: * [bats/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/bats/package.py) Versions: 0.4.0 Description: Bats is a TAP-compliant testing framework for Bash. --- bazel[¶](#bazel) === Homepage: * <https://www.bazel.ioSpack package: * [bazel/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/bazel/package.py) Versions: 0.17.2, 0.16.1, 0.15.0, 0.14.1, 0.13.0, 0.12.0, 0.11.1, 0.11.0, 0.10.1, 0.10.0, 0.9.0, 0.4.5, 0.4.4, 0.3.1, 0.3.0, 0.2.3, 0.2.2b, 0.2.2 Build Dependencies: java, [zip](#zip) Link Dependencies: java, [zip](#zip) Run Dependencies: java Description: Bazel is Google's own build tool --- bbcp[¶](#bbcp) === Homepage: * <http://www.slac.stanford.edu/~abh/bbcp/Spack package: * [bbcp/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/bbcp/package.py) Versions: git Build Dependencies: [zlib](#zlib), [openssl](#openssl) Link Dependencies: [zlib](#zlib), [openssl](#openssl) Description: Securely and quickly copy data from source to target --- bbmap[¶](#bbmap) === Homepage: * <http://sourceforge.net/projects/bbmap/Spack package: * [bbmap/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/bbmap/package.py) Versions: 37.36 Build Dependencies: java Link Dependencies: java Description: Short read aligner for DNA and RNA-seq data. 
--- bc[¶](#bc) === Homepage: * <https://www.gnu.org/software/bcSpack package: * [bc/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/bc/package.py) Versions: 1.07 Build Dependencies: [ed](#ed), [texinfo](#texinfo) Description: bc is an arbitrary precision numeric processing language. Syntax is similar to C, but differs in many substantial areas. It supports interactive execution of statements. --- bcftools[¶](#bcftools) === Homepage: * <http://samtools.github.io/bcftools/Spack package: * [bcftools/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/bcftools/package.py) Versions: 1.9, 1.8, 1.7, 1.6, 1.4, 1.3.1, 1.2 Build Dependencies: [libzip](#libzip), [htslib](#htslib) Link Dependencies: [libzip](#libzip), [htslib](#htslib) Description: BCFtools is a set of utilities that manipulate variant calls in the Variant Call Format (VCF) and its binary counterpart BCF. All commands work transparently with both VCFs and BCFs, both uncompressed and BGZF- compressed. --- bcl2fastq2[¶](#bcl2fastq2) === Homepage: * <https://support.illumina.com/downloads/bcl2fastq-conversion-software-v2-20.htmlSpack package: * [bcl2fastq2/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/bcl2fastq2/package.py) Versions: 2.20.0.422, 2.19.1.403, 2.18.0.12, 2.17.1.14 Build Dependencies: [zlib](#zlib), [libxml2](#libxml2), [libxslt](#libxslt), [cmake](#cmake), [boost](#boost), [libgcrypt](#libgcrypt) Link Dependencies: [zlib](#zlib), [libxml2](#libxml2), [libxslt](#libxslt), [cmake](#cmake), [boost](#boost), [libgcrypt](#libgcrypt) Description: The bcl2fastq2 Conversion Software converts base call (BCL) files from a sequencing run into FASTQ files. 
--- bdftopcf[¶](#bdftopcf) === Homepage: * <http://cgit.freedesktop.org/xorg/app/bdftopcfSpack package: * [bdftopcf/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/bdftopcf/package.py) Versions: 1.0.5 Build Dependencies: [libxfont](#libxfont), [xproto](#xproto), pkgconfig, [fontsproto](#fontsproto), [util-macros](#util-macros) Link Dependencies: [libxfont](#libxfont) Description: bdftopcf is a font compiler for the X server and font server. Fonts in Portable Compiled Format can be read by any architecture, although the file is structured to allow one particular architecture to read them directly without reformatting. This allows fast reading on the appropriate machine, but the files are still portable (but read more slowly) on other machines. --- bdw-gc[¶](#bdw-gc) === Homepage: * <http://www.hboehm.info/gc/Spack package: * [bdw-gc/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/bdw-gc/package.py) Versions: 7.6.0, 7.4.4 Build Dependencies: [libatomic-ops](#libatomic-ops) Link Dependencies: [libatomic-ops](#libatomic-ops) Description: The Boehm-Demers-Weiser conservative garbage collector is a garbage collecting replacement for C malloc or C++ new. --- bear[¶](#bear) === Homepage: * <https://github.com/rizsotto/BearSpack package: * [bear/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/bear/package.py) Versions: 2.2.0, 2.0.4 Build Dependencies: [cmake](#cmake), [python](#python) Link Dependencies: [python](#python) Description: Bear is a tool that generates a compilation database for clang tooling from non-cmake build systems. 
--- beast1[¶](#beast1) === Homepage: * <http://beast.community/Spack package: * [beast1/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/beast1/package.py) Versions: 1.10.0, 1.8.4 Build Dependencies: [libbeagle](#libbeagle) Link Dependencies: [libbeagle](#libbeagle) Run Dependencies: java, [libbeagle](#libbeagle) Description: BEAST is a cross-platform program for Bayesian analysis of molecular sequences using MCMC. --- beast2[¶](#beast2) === Homepage: * <http://beast2.org/Spack package: * [beast2/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/beast2/package.py) Versions: 2.4.6 Build Dependencies: java Link Dependencies: java Description: BEAST is a cross-platform program for Bayesian inference using MCMC of molecular sequences. It is entirely orientated towards rooted, time- measured phylogenies inferred using strict or relaxed molecular clock models. It can be used as a method of reconstructing phylogenies but is also a framework for testing evolutionary hypotheses without conditioning on a single tree topology. --- bedops[¶](#bedops) === Homepage: * <https://bedops.readthedocs.ioSpack package: * [bedops/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/bedops/package.py) Versions: 2.4.35, 2.4.34, 2.4.30 Description: BEDOPS is an open-source command-line toolkit that performs highly efficient and scalable Boolean and other set operations, statistical calculations, archiving, conversion and other management of genomic data of arbitrary scale. 
--- bedtools2[¶](#bedtools2) === Homepage: * <https://github.com/arq5x/bedtools2Spack package: * [bedtools2/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/bedtools2/package.py) Versions: 2.27.1, 2.27.0, 2.26.0, 2.25.0, 2.23.0 Build Dependencies: [zlib](#zlib) Link Dependencies: [zlib](#zlib) Description: Collectively, the bedtools utilities are a swiss-army knife of tools for a wide-range of genomics analysis tasks. The most widely-used tools enable genome arithmetic: that is, set theory on the genome. --- beforelight[¶](#beforelight) === Homepage: * <http://cgit.freedesktop.org/xorg/app/beforelightSpack package: * [beforelight/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/beforelight/package.py) Versions: 1.0.5 Build Dependencies: [util-macros](#util-macros), pkgconfig, [libxscrnsaver](#libxscrnsaver), [libx11](#libx11), [libxt](#libxt) Link Dependencies: [libxt](#libxt), [libx11](#libx11), [libxscrnsaver](#libxscrnsaver) Description: The beforelight program is a sample implementation of a screen saver for X servers supporting the MIT-SCREEN-SAVER extension. It is only recommended for use as a code sample, as it does not include features such as screen locking or configurability. 
--- benchmark[¶](#benchmark) === Homepage: * <https://github.com/google/benchmarkSpack package: * [benchmark/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/benchmark/package.py) Versions: develop, 1.4.0, 1.3.0, 1.2.0, 1.1.0, 1.0.0 Build Dependencies: [cmake](#cmake) Description: A microbenchmark support library --- berkeley-db[¶](#berkeley-db) === Homepage: * <http://www.oracle.com/technetwork/database/database-technologies/berkeleydb/overview/index.htmlSpack package: * [berkeley-db/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/berkeley-db/package.py) Versions: 6.2.32, 6.1.29, 6.0.35, 5.3.28 Description: Oracle Berkeley DB --- bertini[¶](#bertini) === Homepage: * <https://bertini.nd.edu/Spack package: * [bertini/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/bertini/package.py) Versions: 1.5 Build Dependencies: [gmp](#gmp), mpi, [flex](#flex), [mpfr](#mpfr), [bison](#bison) Link Dependencies: [gmp](#gmp), mpi, [mpfr](#mpfr) Description: Bertini is a general-purpose solver, written in C, that was created for research about polynomial continuation. It solves for the numerical solution of systems of polynomial equations using homotopy continuation. --- bib2xhtml[¶](#bib2xhtml) === Homepage: * <http://www.spinellis.gr/sw/textproc/bib2xhtml/Spack package: * [bib2xhtml/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/bib2xhtml/package.py) Versions: 3.0-15-gf506 Description: bib2xhtml is a program that converts BibTeX files into HTML. --- bigreqsproto[¶](#bigreqsproto) === Homepage: * <http://cgit.freedesktop.org/xorg/proto/bigreqsprotoSpack package: * [bigreqsproto/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/bigreqsproto/package.py) Versions: 1.1.2 Build Dependencies: [util-macros](#util-macros), pkgconfig Description: Big Requests Extension. 
This extension defines a protocol to enable the use of requests that exceed 262140 bytes in length. --- binutils[¶](#binutils) === Homepage: * <http://www.gnu.org/software/binutils/Spack package: * [binutils/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/binutils/package.py) Versions: 2.31.1, 2.29.1, 2.28, 2.27, 2.26, 2.25.1, 2.25, 2.24, 2.23.2, 2.20.1 Build Dependencies: [zlib](#zlib), [gettext](#gettext), [m4](#m4), [bison](#bison) Link Dependencies: [zlib](#zlib), [gettext](#gettext) Description: GNU binutils, which contain the linker, assembler, objdump and others --- bioawk[¶](#bioawk) === Homepage: * <https://github.com/lh3/bioawkSpack package: * [bioawk/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/bioawk/package.py) Versions: 1.0 Build Dependencies: [zlib](#zlib), [bison](#bison) Link Dependencies: [zlib](#zlib) Description: Bioawk is an extension to <NAME>'s awk, adding the support of several common biological data formats, including optionally gzip'ed BED, GFF, SAM, VCF, FASTA/Q and TAB-delimited formats with column names. 
---

biopieces
===

Homepage: <http://maasha.github.io/biopieces/>

Spack package: [biopieces/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/biopieces/package.py)

Versions: 2016-04-12

Build Dependencies: [perl-parse-recdescent](#perl-parse-recdescent), [perl-dbfile](#perl-dbfile), [ruby-terminal-table](#ruby-terminal-table), [perl-dbi](#perl-dbi), [perl-dbd-mysql](#perl-dbd-mysql), [blat](#blat), [perl-version](#perl-version), [perl-svg](#perl-svg), [perl-html-parser](#perl-html-parser), [perl-bit-vector](#perl-bit-vector), [perl-soap-lite](#perl-soap-lite), [velvet](#velvet), [ruby-rubyinline](#ruby-rubyinline), [mummer](#mummer), [ruby](#ruby), [perl-inline](#perl-inline), [perl-lwp](#perl-lwp), [perl-xml-parser](#perl-xml-parser), [ruby-gnuplot](#ruby-gnuplot), [scan-for-matches](#scan-for-matches), [ruby-narray](#ruby-narray), [perl-time-hires](#perl-time-hires), [muscle](#muscle), [blast-plus](#blast-plus), [bowtie](#bowtie), [perl-inline-c](#perl-inline-c), [usearch](#usearch), [ray](#ray), [perl](#perl), [perl-module-build](#perl-module-build), [perl-term-readkey](#perl-term-readkey), [perl-uri](#perl-uri), [bwa](#bwa), [idba](#idba), [perl-carp-clan](#perl-carp-clan), [vmatch](#vmatch), [perl-class-inspector](#perl-class-inspector), [python](#python)

Link Dependencies: [muscle](#muscle), [blast-plus](#blast-plus), [ruby-terminal-table](#ruby-terminal-table), [usearch](#usearch), [blat](#blat), [ray](#ray), [scan-for-matches](#scan-for-matches), [ruby-rubyinline](#ruby-rubyinline), [mummer](#mummer), [vmatch](#vmatch), [ruby](#ruby), [velvet](#velvet), [bwa](#bwa), [bowtie](#bowtie), [ruby-gnuplot](#ruby-gnuplot), [idba](#idba), [ruby-narray](#ruby-narray)

Run Dependencies: [perl-time-hires](#perl-time-hires), [perl-parse-recdescent](#perl-parse-recdescent), [perl-dbi](#perl-dbi), [perl-dbd-mysql](#perl-dbd-mysql), [perl-term-readkey](#perl-term-readkey), [perl-version](#perl-version), [perl-bit-vector](#perl-bit-vector), [perl-html-parser](#perl-html-parser), [perl-dbfile](#perl-dbfile), [perl-svg](#perl-svg), [perl-soap-lite](#perl-soap-lite), [perl-uri](#perl-uri), [perl](#perl), [perl-module-build](#perl-module-build), [perl-inline](#perl-inline), [perl-carp-clan](#perl-carp-clan), [perl-lwp](#perl-lwp), [perl-xml-parser](#perl-xml-parser), [perl-class-inspector](#perl-class-inspector), [perl-inline-c](#perl-inline-c), [python](#python)

Description: The Biopieces are a collection of bioinformatics tools that can be pieced together in a very easy and flexible manner to perform both simple and complex tasks.

---

bismark
===

Homepage: <https://www.bioinformatics.babraham.ac.uk/projects/bismark>

Spack package: [bismark/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/bismark/package.py)

Versions: 0.19.0, 0.18.2

Run Dependencies: [perl](#perl), [samtools](#samtools), [bowtie2](#bowtie2)

Description: A tool to map bisulfite converted sequence reads and determine cytosine methylation states

---

bison
===

Homepage: <http://www.gnu.org/software/bison/>

Spack package: [bison/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/bison/package.py)

Versions: 3.0.5, 3.0.4, 2.7

Build Dependencies: [perl](#perl), [diffutils](#diffutils), [help2man](#help2man), [m4](#m4)

Run Dependencies: [m4](#m4)

Description: Bison is a general-purpose parser generator that converts an annotated context-free grammar into a deterministic LR or generalized LR (GLR) parser employing LALR(1) parser tables.
---

bitmap
===

Homepage: <http://cgit.freedesktop.org/xorg/app/bitmap>

Spack package: [bitmap/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/bitmap/package.py)

Versions: 1.0.8

Build Dependencies: [util-macros](#util-macros), pkgconfig, [libxaw](#libxaw), [libxt](#libxt), [libx11](#libx11), [libxmu](#libxmu), [xbitmaps](#xbitmaps), [xproto](#xproto)

Link Dependencies: [libxt](#libxt), [libx11](#libx11), [libxaw](#libxaw), [libxmu](#libxmu)

Description: bitmap, bmtoa, atobm - X bitmap (XBM) editor and converter utilities.

---

blasr
===

Homepage: <https://github.com/PacificBiosciences/blasr/wiki>

Spack package: [blasr/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/blasr/package.py)

Versions: 5.3.1

Build Dependencies: [zlib](#zlib), [blasr-libcpp](#blasr-libcpp), [ncurses](#ncurses), [hdf5](#hdf5), [boost](#boost), [pbbam](#pbbam), [python](#python), [htslib](#htslib)

Link Dependencies: [zlib](#zlib), [blasr-libcpp](#blasr-libcpp), [hdf5](#hdf5), [boost](#boost), [pbbam](#pbbam), [ncurses](#ncurses), [htslib](#htslib)

Description: The PacBio long read aligner.

---

blasr-libcpp
===

Homepage: <https://github.com/PacificBiosciences/blasr_libcpp>

Spack package: [blasr-libcpp/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/blasr-libcpp/package.py)

Versions: 5.3.1

Build Dependencies: [pbbam](#pbbam), [python](#python), [hdf5](#hdf5)

Link Dependencies: [pbbam](#pbbam), [hdf5](#hdf5)

Description: Blasr_libcpp is a library used by blasr and other executables such as samtoh5, loadPulses for analyzing PacBio sequences.
---

blast-plus
===

Homepage: <http://blast.ncbi.nlm.nih.gov/>

Spack package: [blast-plus/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/blast-plus/package.py)

Versions: 2.7.1, 2.6.0, 2.2.30

Build Dependencies: [lzo](#lzo), [libpng](#libpng), [perl](#perl), jpeg, [zlib](#zlib), [bzip2](#bzip2), [gnutls](#gnutls), [openssl](#openssl), [pcre](#pcre), [lmdb](#lmdb), [python](#python), [freetype](#freetype)

Link Dependencies: [lzo](#lzo), [libpng](#libpng), [perl](#perl), jpeg, [zlib](#zlib), [bzip2](#bzip2), [gnutls](#gnutls), [openssl](#openssl), [pcre](#pcre), [lmdb](#lmdb), [python](#python), [freetype](#freetype)

Description: Basic Local Alignment Search Tool.

---

blat
===

Homepage: <https://genome.ucsc.edu/FAQ/FAQblat.html>

Spack package: [blat/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/blat/package.py)

Versions: 35

Build Dependencies: [libpng](#libpng)

Link Dependencies: [libpng](#libpng)

Description: BLAT (BLAST-like alignment tool) is a pairwise sequence alignment algorithm.

---

blaze
===

Homepage: <https://bitbucket.org/blaze-lib/blaze/overview>

Spack package: [blaze/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/blaze/package.py)

Versions: 3.4, 3.3, 3.2, 3.1, 3.0, 2.6, 2.5, 2.4, 2.3, 2.2, 2.1, 2.0, 1.5, 1.4, 1.3, 1.2, 1.1, 1.0

Description: Blaze is an open-source, high-performance C++ math library for dense and sparse arithmetic. With its state-of-the-art Smart Expression Template implementation Blaze combines the elegance and ease of use of a domain-specific language with HPC-grade performance, making it one of the most intuitive and fastest C++ math libraries available.
---

blis
===

Homepage: <https://github.com/flame/blis>

Spack package: [blis/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/blis/package.py)

Versions: develop, 0.4.0, 0.3.2, 0.3.1, 0.3.0, 0.2.2

Build Dependencies: [python](#python)

Run Dependencies: [python](#python)

Description: BLIS is a portable software framework for instantiating high-performance BLAS-like dense linear algebra libraries. The framework was designed to isolate essential kernels of computation that, when optimized, immediately enable optimized implementations of most of its commonly used and computationally intensive operations. BLIS is written in ISO C99 and available under a new/modified/3-clause BSD license. While BLIS exports a new BLAS-like API, it also includes a BLAS compatibility layer which gives application developers access to BLIS implementations via traditional BLAS routine calls. An object-based API unique to BLIS is also available.

---

bliss
===

Homepage: <http://www.tcs.hut.fi/Software/bliss/>

Spack package: [bliss/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/bliss/package.py)

Versions: 0.73

Build Dependencies: [gmp](#gmp), [libtool](#libtool)

Link Dependencies: [gmp](#gmp)

Description: bliss: A Tool for Computing Automorphism Groups and Canonical Labelings of Graphs

---

blitz
===

Homepage: <http://github.com/blitzpp/blitz>

Spack package: [blitz/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/blitz/package.py)

Versions: 1.0.1, 1.0.0

Description: N-dimensional arrays for C++

---

bmake
===

Homepage: <http://www.crufty.net/help/sjg/bmake.htm>

Spack package: [bmake/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/bmake/package.py)

Versions: 20180512, 20171207

Description: Portable version of NetBSD make(1).
---

bml
===

Homepage: <http://lanl.github.io/bml/>

Spack package: [bml/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/bml/package.py)

Versions: develop, 1.3.0, 1.2.3, 1.2.2, 1.1.0

Build Dependencies: [cmake](#cmake), mpi, blas, lapack

Link Dependencies: mpi, blas, lapack

Description: The basic matrix library (bml) is a collection of various matrix data formats (in dense and sparse) and their associated algorithms for basic matrix operations.

---

bohrium
===

Homepage: <http://bh107.org>

Spack package: [bohrium/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/bohrium/package.py)

Versions: develop, 0.9.1, 0.9.0

Build Dependencies: [cuda](#cuda), [netlib-lapack](#netlib-lapack), [cmake](#cmake), [py-numpy](#py-numpy), [opencv](#opencv), opencl, [zlib](#zlib), [py-cython](#py-cython), [swig](#swig), [boost](#boost), blas, [python](#python)

Link Dependencies: [zlib](#zlib), [cuda](#cuda), [netlib-lapack](#netlib-lapack), [boost](#boost), blas, [opencv](#opencv), [python](#python), opencl

Run Dependencies: [py-numpy](#py-numpy)

Test Dependencies: [py-numpy](#py-numpy), [python](#python)

Description: Library for automatic acceleration of array operations

---

bolt
===

Homepage: <http://www.bolt-omp.org/>

Spack package: [bolt/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/bolt/package.py)

Versions: 1.0b1

Build Dependencies: [cmake](#cmake)

Description: BOLT targets a high-performing OpenMP implementation, especially specialized for fine-grain parallelism. Unlike other OpenMP implementations, BOLT utilizes a lightweight threading model for its underlying threading mechanism. It currently adopts Argobots, a new holistic, low-level threading and tasking runtime, in order to overcome shortcomings of conventional OS-level threads. The current BOLT implementation is based on the OpenMP runtime in LLVM, and thus it can be used with LLVM/Clang, Intel OpenMP compiler, and GCC.

---

bookleaf-cpp
===

Homepage: <https://github.com/UK-MAC/BookLeaf_Cpp>

Spack package: [bookleaf-cpp/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/bookleaf-cpp/package.py)

Versions: develop, 2.0.2, 2.0.1, 2.0

Build Dependencies: [parmetis](#parmetis), [yaml-cpp](#yaml-cpp), mpi, [silo](#silo), [cmake](#cmake), [typhon](#typhon), [caliper](#caliper)

Link Dependencies: [parmetis](#parmetis), [yaml-cpp](#yaml-cpp), mpi, [silo](#silo), [typhon](#typhon), [caliper](#caliper)

Description: BookLeaf is a 2D unstructured hydrodynamics mini-app.

---

boost
===

Homepage: <http://www.boost.org>

Spack package: [boost/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/boost/package.py)

Versions: develop, 1.68.0, 1.67.0, 1.66.0, 1.65.1, 1.65.0, 1.64.0, 1.63.0, 1.62.0, 1.61.0, 1.60.0, 1.59.0, 1.58.0, 1.57.0, 1.56.0, 1.55.0, 1.54.0, 1.53.0, 1.52.0, 1.51.0, 1.50.0, 1.49.0, 1.48.0, 1.47.0, 1.46.1, 1.46.0, 1.45.0, 1.44.0, 1.43.0, 1.42.0, 1.41.0, 1.40.0, 1.39.0, 1.38.0, 1.37.0, 1.36.0, 1.35.0, 1.34.1, 1.34.0

Build Dependencies: [zlib](#zlib), [bzip2](#bzip2), mpi, [icu4c](#icu4c), [py-numpy](#py-numpy), [python](#python)

Link Dependencies: [zlib](#zlib), [bzip2](#bzip2), [icu4c](#icu4c), mpi, [python](#python)

Run Dependencies: [py-numpy](#py-numpy)

Description: Boost provides free peer-reviewed portable C++ source libraries, emphasizing libraries that work well with the C++ Standard Library. Boost libraries are intended to be widely useful, and usable across a broad spectrum of applications. The Boost license encourages both commercial and non-commercial use.
---

boostmplcartesianproduct
===

Homepage: <http://www.organicvectory.com/index.php?option=com_content&view=article&id=75:boostmplcartesianproduct&catid=42:boost&Itemid=78>

Spack package: [boostmplcartesianproduct/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/boostmplcartesianproduct/package.py)

Versions: 20161205

Description: Cartesian_product is an extension to the Boost.MPL library and as such requires a version of the Boost libraries on your system.

---

bowtie
===

Homepage: <https://sourceforge.net/projects/bowtie-bio/>

Spack package: [bowtie/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/bowtie/package.py)

Versions: 1.2

Build Dependencies: tbb

Link Dependencies: tbb

Description: Bowtie is an ultrafast, memory-efficient short read aligner for short DNA sequences (reads) from next-gen sequencers.

---

bowtie2
===

Homepage: <http://bowtie-bio.sourceforge.net/bowtie2/index.shtml>

Spack package: [bowtie2/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/bowtie2/package.py)

Versions: 2.3.4.1, 2.3.1, 2.3.0, 2.2.5

Build Dependencies: [zlib](#zlib), [readline](#readline), tbb

Link Dependencies: [zlib](#zlib), [readline](#readline), tbb

Run Dependencies: [perl](#perl), [python](#python)

Description: Bowtie 2 is an ultrafast and memory-efficient tool for aligning sequencing reads to long reference sequences

---

boxlib
===

Homepage: <https://ccse.lbl.gov/BoxLib/>

Spack package: [boxlib/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/boxlib/package.py)

Versions: 16.12.2

Build Dependencies: [cmake](#cmake), mpi

Link Dependencies: mpi

Description: BoxLib, a software framework for massively parallel block-structured adaptive mesh refinement (AMR) codes.
---

bpp-core
===

Homepage: <http://biopp.univ-montp2.fr/wiki/index.php/Installation>

Spack package: [bpp-core/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/bpp-core/package.py)

Versions: 2.2.0

Build Dependencies: [cmake](#cmake)

Description: Bio++ core library.

---

bpp-phyl
===

Homepage: <http://biopp.univ-montp2.fr/wiki/index.php/Installation>

Spack package: [bpp-phyl/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/bpp-phyl/package.py)

Versions: 2.2.0

Build Dependencies: [bpp-core](#bpp-core), [cmake](#cmake), [bpp-seq](#bpp-seq)

Link Dependencies: [bpp-core](#bpp-core), [bpp-seq](#bpp-seq)

Description: Bio++ phylogeny library.

---

bpp-seq
===

Homepage: <http://biopp.univ-montp2.fr/wiki/index.php/Installation>

Spack package: [bpp-seq/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/bpp-seq/package.py)

Versions: 2.2.0

Build Dependencies: [bpp-core](#bpp-core), [cmake](#cmake)

Link Dependencies: [bpp-core](#bpp-core)

Description: Bio++ seq library.

---

bpp-suite
===

Homepage: <http://biopp.univ-montp2.fr/wiki/index.php/BppSuite>

Spack package: [bpp-suite/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/bpp-suite/package.py)

Versions: 2.2.0

Build Dependencies: [bpp-core](#bpp-core), [cmake](#cmake), [bpp-phyl](#bpp-phyl), [texinfo](#texinfo), [bpp-seq](#bpp-seq)

Link Dependencies: [bpp-core](#bpp-core), [bpp-phyl](#bpp-phyl), [bpp-seq](#bpp-seq)

Description: BppSuite is a suite of ready-to-use programs for phylogenetic and sequence analysis.
---

bracken
===

Homepage: <https://ccb.jhu.edu/software/bracken>

Spack package: [bracken/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/bracken/package.py)

Versions: 1.0.0

Build Dependencies: [perl](#perl), [perl-list-moreutils](#perl-list-moreutils), [perl-exporter-tiny](#perl-exporter-tiny), [python](#python), [perl-parallel-forkmanager](#perl-parallel-forkmanager)

Link Dependencies: [perl](#perl), [perl-list-moreutils](#perl-list-moreutils), [perl-exporter-tiny](#perl-exporter-tiny), [python](#python), [perl-parallel-forkmanager](#perl-parallel-forkmanager)

Description: Bracken (Bayesian Reestimation of Abundance with KrakEN) is a highly accurate statistical method that computes the abundance of species in DNA sequences from a metagenomics sample.

---

braker
===

Homepage: <http://exon.gatech.edu/braker1.html>

Spack package: [braker/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/braker/package.py)

Versions: 2.1.0, 1.11

Build Dependencies: [samtools](#samtools), [genemark-et](#genemark-et), [bamtools](#bamtools), [augustus](#augustus), [perl-file-which](#perl-file-which), [perl-parallel-forkmanager](#perl-parallel-forkmanager), [perl](#perl), [perl-scalar-util-numeric](#perl-scalar-util-numeric)

Link Dependencies: [samtools](#samtools), [genemark-et](#genemark-et), [bamtools](#bamtools), [augustus](#augustus)

Run Dependencies: [perl](#perl), [perl-scalar-util-numeric](#perl-scalar-util-numeric), [perl-file-which](#perl-file-which), [perl-parallel-forkmanager](#perl-parallel-forkmanager)

Description: BRAKER is a pipeline for unsupervised RNA-Seq-based genome annotation that combines the advantages of GeneMark-ET and AUGUSTUS

---

branson
===

Homepage: <https://github.com/lanl/branson>

Spack package: [branson/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/branson/package.py)

Versions: develop, 1.01

Build Dependencies: [parmetis](#parmetis), [cmake](#cmake), [boost](#boost), mpi, [metis](#metis)

Link Dependencies: [parmetis](#parmetis), [boost](#boost), mpi, [metis](#metis)

Description: Branson's purpose is to study different algorithms for parallel Monte Carlo transport. Currently it contains particle passing and mesh passing methods for domain decomposition.

---

breakdancer
===

Homepage: <http://gmt.genome.wustl.edu/packages/breakdancer>

Spack package: [breakdancer/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/breakdancer/package.py)

Versions: 1.4.5, master

Build Dependencies: [zlib](#zlib), [cmake](#cmake)

Link Dependencies: [zlib](#zlib)

Run Dependencies: [perl-gd-graph](#perl-gd-graph), [perl-gdgraph-histogram](#perl-gdgraph-histogram), [perl-math-cdf](#perl-math-cdf), [perl-statistics-descriptive](#perl-statistics-descriptive), [perl-list-moreutils](#perl-list-moreutils), [perl-exporter-tiny](#perl-exporter-tiny)

Description: BreakDancer-1.3.6, released under GPLv3, is a perl/Cpp package that provides genome-wide detection of structural variants from next generation paired-end sequencing reads. It includes two complementary programs. BreakDancerMax predicts five types of structural variants: insertions, deletions, inversions, inter- and intra-chromosomal translocations from next-generation short paired-end sequencing reads using read pairs that are mapped with unexpected separation distances or orientation. BreakDancerMini focuses on detecting small indels (usually between 10bp and 100bp) using normally mapped read pairs.
---

breseq
===

Homepage: <http://barricklab.org/breseq>

Spack package: [breseq/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/breseq/package.py)

Versions: 0.31.1

Build Dependencies: [autoconf](#autoconf), [automake](#automake), [libtool](#libtool), [m4](#m4)

Run Dependencies: [r](#r), [bedtools2](#bedtools2)

Description: breseq is a computational pipeline for finding mutations relative to a reference sequence in short-read DNA re-sequencing data for haploid microbial-sized genomes.

---

brigand
===

Homepage: <https://github.com/edouarda/brigand>

Spack package: [brigand/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/brigand/package.py)

Versions: 1.3.0, 1.2.0, 1.1.0, 1.0.0, master

Description: Brigand Meta-programming library

---

bsseeker2
===

Homepage: <http://pellegrini.mcdb.ucla.edu/BS_Seeker2>

Spack package: [bsseeker2/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/bsseeker2/package.py)

Versions: 2.1.2

Build Dependencies: [py-pysam](#py-pysam), [python](#python)

Run Dependencies: [py-pysam](#py-pysam), [python](#python)

Description: A versatile aligning pipeline for bisulfite sequencing data.

---

bucky
===

Homepage: <http://www.stat.wisc.edu/~ane/bucky/index.html>

Spack package: [bucky/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/bucky/package.py)

Versions: 1.4.4

Description: BUCKy is a free program to combine molecular data from multiple loci. BUCKy estimates the dominant history of sampled individuals, and how much of the genome supports each relationship, using Bayesian concordance analysis.
---

bumpversion
===

Homepage: <https://pypi.python.org/pypi/bumpversion>

Spack package: [bumpversion/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/bumpversion/package.py)

Versions: 0.5.3, 0.5.0

Build Dependencies: [py-setuptools](#py-setuptools), [python](#python)

Link Dependencies: [python](#python)

Run Dependencies: [python](#python)

Description: Version-bump your software with a single command.

---

busco
===

Homepage: <http://busco.ezlab.org/>

Spack package: [busco/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/busco/package.py)

Versions: 3.0.1, 2.0.1

Build Dependencies: [hmmer](#hmmer), [blast-plus](#blast-plus), [augustus](#augustus), [python](#python)

Link Dependencies: [hmmer](#hmmer), [blast-plus](#blast-plus), [augustus](#augustus), [python](#python)

Run Dependencies: [python](#python)

Description: Assesses genome assembly and annotation completeness with Benchmarking Universal Single-Copy Orthologs

---

butter
===

Homepage: <https://github.com/MikeAxtell/butter>

Spack package: [butter/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/butter/package.py)

Versions: 0.3.3

Build Dependencies: [perl](#perl), [samtools](#samtools), [bowtie](#bowtie)

Link Dependencies: [samtools](#samtools), [bowtie](#bowtie)

Run Dependencies: [perl](#perl)

Description: butter: Bowtie UTilizing iTerative placEment of Repetitive small rnas. A wrapper for bowtie to produce small RNA-seq alignments where multimapped small RNAs tend to be placed near regions of confidently high density.
---

bwa
===

Homepage: <http://github.com/lh3/bwa>

Spack package: [bwa/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/bwa/package.py)

Versions: 0.7.17, 0.7.16a, 0.7.15, 0.7.13, 0.7.12

Build Dependencies: [zlib](#zlib)

Link Dependencies: [zlib](#zlib)

Description: Burrows-Wheeler Aligner for pairwise alignment between DNA sequences.

---

bwtool
===

Homepage: <https://github.com/CRG-Barcelona/bwtool>

Spack package: [bwtool/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/bwtool/package.py)

Versions: 1.0

Build Dependencies: [libbeato](#libbeato)

Link Dependencies: [libbeato](#libbeato)

Description: bwtool is a command-line utility for bigWig files.

---

byobu
===

Homepage: <http://www.byobu.co>

Spack package: [byobu/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/byobu/package.py)

Versions: 5.127, 5.125, 5.123

Build Dependencies: [tmux](#tmux)

Run Dependencies: [tmux](#tmux)

Description: Byobu: Text-based window manager and terminal multiplexer.

---

bzip2
===

Homepage: <https://sourceware.org/bzip2/>

Spack package: [bzip2/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/bzip2/package.py)

Versions: 1.0.6

Build Dependencies: [diffutils](#diffutils)

Description: bzip2 is a freely available, patent free high-quality data compressor. It typically compresses files to within 10% to 15% of the best available techniques (the PPM family of statistical compressors), whilst being around twice as fast at compression and six times faster at decompression.
---

c-blosc
===

Homepage: <http://www.blosc.org>

Spack package: [c-blosc/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/c-blosc/package.py)

Versions: 1.12.1, 1.11.1, 1.9.2, 1.9.1, 1.9.0, 1.8.1, 1.8.0

Build Dependencies: [zlib](#zlib), [cmake](#cmake), [lz4](#lz4), [zstd](#zstd), [snappy](#snappy)

Link Dependencies: [zlib](#zlib), [lz4](#lz4), [zstd](#zstd), [snappy](#snappy)

Description: Blosc, an extremely fast, multi-threaded, meta-compressor library

---

c-lime
===

Homepage: <https://usqcd-software.github.io/c-lime/>

Spack package: [c-lime/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/c-lime/package.py)

Versions: 2-3-9

Description: LIME (which can stand for Lattice QCD Interchange Message Encapsulation or more generally, Large Internet Message Encapsulation) is a simple packaging scheme for combining records containing ASCII and/or binary data.

---

cabana
===

Homepage: <https://github.com/ECP-copa/Cabana>

Spack package: [cabana/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/cabana/package.py)

Versions: develop, 0.1.0-rc0, 0.1.0

Build Dependencies: [cmake](#cmake), [kokkos](#kokkos)

Link Dependencies: [kokkos](#kokkos)

Description: The Exascale Co-Design Center for Particle Applications Toolkit

---

caffe
===

Homepage: <http://caffe.berkeleyvision.org>

Spack package: [caffe/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/caffe/package.py)

Versions: 1.0, rc5, rc4, rc3, rc2

Build Dependencies: [cuda](#cuda), [hdf5](#hdf5), [cmake](#cmake), [py-numpy](#py-numpy), [gflags](#gflags), [opencv](#opencv), [leveldb](#leveldb), [glog](#glog), [protobuf](#protobuf), [boost](#boost), [matlab](#matlab), blas, [lmdb](#lmdb), [python](#python)

Link Dependencies: [cuda](#cuda), [hdf5](#hdf5), [opencv](#opencv), [gflags](#gflags), [leveldb](#leveldb), [glog](#glog), [protobuf](#protobuf), [boost](#boost), [matlab](#matlab), blas, [lmdb](#lmdb), [python](#python)

Run Dependencies: [py-numpy](#py-numpy)

Description: Caffe is a deep learning framework made with expression, speed, and modularity in mind. It is developed by the Berkeley Vision and Learning Center (BVLC) and by community contributors.

---

cairo
===

Homepage: <http://cairographics.org>

Spack package: [cairo/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/cairo/package.py)

Versions: 1.14.12, 1.14.8, 1.14.0

Build Dependencies: [libxrender](#libxrender), [glib](#glib), [libxext](#libxext), [fontconfig](#fontconfig), [libx11](#libx11), [libxcb](#libxcb), [libpng](#libpng), [pixman](#pixman), pkgconfig, [python](#python), [freetype](#freetype)

Link Dependencies: [libxrender](#libxrender), [glib](#glib), [libxext](#libxext), [fontconfig](#fontconfig), [libx11](#libx11), [libxcb](#libxcb), [pixman](#pixman), [libpng](#libpng), [freetype](#freetype)

Description: Cairo is a 2D graphics library with support for multiple output devices.

---

cairomm
===

Homepage: <https://www.cairographics.org/cairomm/>

Spack package: [cairomm/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/cairomm/package.py)

Versions: 1.6.4, 1.6.2

Build Dependencies: [libsigcpp](#libsigcpp), [cairo](#cairo)

Link Dependencies: [libsigcpp](#libsigcpp), [cairo](#cairo)

Description: Cairomm is a C++ wrapper for the cairo graphics library.
---

caliper
===

Homepage: <https://github.com/LLNL/Caliper>

Spack package: [caliper/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/caliper/package.py)

Versions: 1.7.0, 1.6.0, master

Build Dependencies: [libpfm4](#libpfm4), mpi, [sosflow](#sosflow), [cmake](#cmake), [dyninst](#dyninst), [gotcha](#gotcha), [papi](#papi), [python](#python), unwind

Link Dependencies: [libpfm4](#libpfm4), mpi, [gotcha](#gotcha), [dyninst](#dyninst), [papi](#papi), [sosflow](#sosflow), unwind

Description: Caliper is a program instrumentation and performance measurement framework. It provides data collection mechanisms and a source-code annotation API for a variety of performance engineering use cases, e.g., performance profiling, tracing, monitoring, and auto-tuning.

---

callpath
===

Homepage: <https://github.com/llnl/callpath>

Spack package: [callpath/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/callpath/package.py)

Versions: 1.0.4, 1.0.2, 1.0.1

Build Dependencies: [adept-utils](#adept-utils), [dyninst](#dyninst), mpi, [libdwarf](#libdwarf), [cmake](#cmake)

Link Dependencies: [adept-utils](#adept-utils), mpi, [libdwarf](#libdwarf), elf, [dyninst](#dyninst)

Description: Library for representing callpaths consistently in distributed-memory performance tools.

---

camellia
===

Homepage: <https://bitbucket.org/nateroberts/Camellia>

Spack package: [camellia/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/camellia/package.py)

Versions: master

Build Dependencies: [cmake](#cmake), mpi, [trilinos](#trilinos), [moab](#moab), [hdf5](#hdf5)

Link Dependencies: mpi, [trilinos](#trilinos), [moab](#moab), [hdf5](#hdf5)

Description: Camellia: user-friendly MPI-parallel adaptive finite element package, with support for DPG and other hybrid methods, built atop Trilinos.
---

candle-benchmarks
===

Homepage: <https://github.com/ECP-CANDLE/Benchmarks>

Spack package: [candle-benchmarks/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/candle-benchmarks/package.py)

Versions: 0.1, 0.0

Build Dependencies: [py-tqdm](#py-tqdm), [py-h5py](#py-h5py), [python](#python), [py-keras](#py-keras), [py-mdanalysis](#py-mdanalysis), [py-theano](#py-theano), [py-requests](#py-requests), [py-matplotlib](#py-matplotlib), [opencv](#opencv), [py-scikit-learn](#py-scikit-learn), [py-mpi4py](#py-mpi4py)

Link Dependencies: [opencv](#opencv), [python](#python)

Run Dependencies: [py-tqdm](#py-tqdm), [py-h5py](#py-h5py), [py-keras](#py-keras), [py-mdanalysis](#py-mdanalysis), [py-theano](#py-theano), [py-requests](#py-requests), [py-matplotlib](#py-matplotlib), [py-mpi4py](#py-mpi4py), [py-scikit-learn](#py-scikit-learn)

Description: ECP-CANDLE Benchmarks

---

cantera
===

Homepage: <http://www.cantera.org/docs/sphinx/html/index.html>

Spack package: [cantera/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/cantera/package.py)

Versions: 2.3.0, 2.2.1

Build Dependencies: [eigen](#eigen), [py-3to2](#py-3to2), [googletest](#googletest), [py-numpy](#py-numpy), [fmt](#fmt), lapack, [scons](#scons), [py-cython](#py-cython), [python](#python), [boost](#boost), [matlab](#matlab), blas, [sundials](#sundials), [py-scipy](#py-scipy)

Link Dependencies: [sundials](#sundials), [eigen](#eigen), [googletest](#googletest), [boost](#boost), [matlab](#matlab), [fmt](#fmt), blas, [python](#python), lapack

Run Dependencies: [py-numpy](#py-numpy), [py-scipy](#py-scipy), [py-3to2](#py-3to2)

Description: Cantera is a suite of object-oriented software tools for problems involving chemical kinetics, thermodynamics, and/or transport processes.
---
canu[¶](#canu)
===
Homepage:
* <http://canu.readthedocs.io/>
Spack package:
* [canu/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/canu/package.py)
Versions: 1.7.1, 1.5
Run Dependencies: [perl](#perl), [gnuplot](#gnuplot), [jdk](#jdk)
Description: A single molecule sequence assembler for genomes large and small.
---
cap3[¶](#cap3)
===
Homepage:
* <http://seq.cs.iastate.edu/>
Spack package:
* [cap3/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/cap3/package.py)
Versions: 2015-02-11
Description: CAP3 is a DNA sequence assembly program.
---
cares[¶](#cares)
===
Homepage:
* <https://c-ares.haxx.se>
Spack package:
* [cares/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/cares/package.py)
Versions: develop, 1.13.0
Build Dependencies: [cmake](#cmake)
Description: c-ares: A C library for asynchronous DNS requests.
---
cask[¶](#cask)
===
Homepage:
* <http://cask.readthedocs.io/en/latest/>
Spack package:
* [cask/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/cask/package.py)
Versions: 0.8.1, 0.7.4
Build Dependencies: [emacs](#emacs)
Run Dependencies: [emacs](#emacs)
Description: Cask is a project management tool for Emacs Lisp that automates the package development cycle: development, dependencies, testing, building, packaging and more.
---
casper[¶](#casper)
===
Homepage:
* <http://best.snu.ac.kr/casper/index.php?name=main>
Spack package:
* [casper/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/casper/package.py)
Versions: 0.8.2
Build Dependencies: [boost](#boost), [jellyfish](#jellyfish)
Link Dependencies: [boost](#boost), [jellyfish](#jellyfish)
Description: CASPER (Context-Aware Scheme for Paired-End Read) is a state-of-the-art merging tool in terms of accuracy and robustness. Using this sophisticated merging method, elongated reads can be obtained from the forward and reverse reads.
---
catalyst[¶](#catalyst)
===
Homepage:
* <http://www.paraview.org>
Spack package:
* [catalyst/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/catalyst/package.py)
Versions: 5.5.2, 5.5.1, 5.5.0, 5.4.1, 5.4.0, 5.3.0, 5.2.0, 5.1.2, 5.0.1, 4.4.0
Build Dependencies: [libxt](#libxt), mpi, [libx11](#libx11), [git](#git), [cmake](#cmake), [python](#python), [mesa](#mesa)
Link Dependencies: [libxt](#libxt), mpi, [libx11](#libx11), [git](#git), [python](#python), [mesa](#mesa)
Run Dependencies: [python](#python)
Description: Catalyst is an in situ use case library, with an adaptable application programming interface (API), that orchestrates the alliance between simulation and analysis and/or visualization tasks.
---
catch[¶](#catch)
===
Homepage:
* <https://github.com/catchorg/Catch2>
Spack package:
* [catch/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/catch/package.py)
Versions: 2.4.0, 2.3.0, 2.2.1, 2.1.0, 2.0.1, 1.12.1, 1.12.0, 1.11.0, 1.10.0, 1.9.7, 1.9.6, 1.9.5, 1.9.4, 1.9.3, 1.9.2, 1.9.1, 1.9.0, 1.8.2, 1.8.1, 1.8.0, 1.7.2, 1.7.1, 1.7.0, 1.6.1, 1.6.0, 1.5.9, 1.5.0, 1.4.0, 1.3.5, 1.3.0
Build Dependencies: [cmake](#cmake)
Description: Catch tests
---
cbench[¶](#cbench)
===
Homepage:
* <https://sourceforge.net/projects/cbench/>
Spack package:
* [cbench/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/cbench/package.py)
Versions: 1.3.0
Build Dependencies: mpi, blas, [fftw](#fftw), lapack
Link Dependencies: mpi, blas, [fftw](#fftw), lapack
Description: Cbench is intended as a relatively straightforward toolbox of tests, benchmarks, applications, utilities, and framework to hold them together, with the goal of facilitating scalable testing, benchmarking, and analysis of a Linux parallel compute cluster.
---
cblas[¶](#cblas)
===
Homepage:
* <http://www.netlib.org/blas/_cblas/>
Spack package:
* [cblas/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/cblas/package.py)
Versions: 2015-06-06
Build Dependencies: blas
Link Dependencies: blas
Description: The BLAS (Basic Linear Algebra Subprograms) are routines that provide standard building blocks for performing basic vector and matrix operations.
---
cbtf[¶](#cbtf)
===
Homepage:
* <http://sourceforge.net/p/cbtf/wiki/Home>
Spack package:
* [cbtf/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/cbtf/package.py)
Versions: develop, 1.9.2, 1.9.1.2, 1.9.1.1, 1.9.1.0
Build Dependencies: [cmake](#cmake), [boost](#boost), [xerces-c](#xerces-c), [libxml2](#libxml2), [mrnet](#mrnet)
Link Dependencies: [boost](#boost), [xerces-c](#xerces-c), [libxml2](#libxml2), [mrnet](#mrnet)
Description: The CBTF project contains the base code for CBTF, which supports creating components and component networks, plus the support to connect these components and component networks into sequential and distributed network tools.
---
cbtf-argonavis[¶](#cbtf-argonavis)
===
Homepage:
* <http://sourceforge.net/p/cbtf/wiki/Home/>
Spack package:
* [cbtf-argonavis/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/cbtf-argonavis/package.py)
Versions: develop, 1.9.2, 1.9.1.2, 1.9.1.1, 1.9.1.0
Build Dependencies: [cuda](#cuda), [libmonitor](#libmonitor), [cmake](#cmake), [cbtf-krell](#cbtf-krell), [cbtf](#cbtf), [boost](#boost), [mrnet](#mrnet), [papi](#papi)
Link Dependencies: [cuda](#cuda), [mrnet](#mrnet), [libmonitor](#libmonitor), [cbtf-krell](#cbtf-krell), [cbtf](#cbtf), [boost](#boost), elf, [papi](#papi)
Description: The CBTF Argo Navis project contains the CUDA collector and supporting libraries that were developed as a result of a DOE SBIR grant.
---
cbtf-argonavis-gui[¶](#cbtf-argonavis-gui)
===
Homepage:
* <http://sourceforge.net/p/cbtf/wiki/Home/>
Spack package:
* [cbtf-argonavis-gui/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/cbtf-argonavis-gui/package.py)
Versions: develop, 1.3.0.0
Build Dependencies: [cuda](#cuda), [openspeedshop-utils](#openspeedshop-utils), [qtgraph](#qtgraph), [cmake](#cmake), [cbtf-krell](#cbtf-krell), [qt](#qt), [mrnet](#mrnet), [xerces-c](#xerces-c), [cbtf-argonavis](#cbtf-argonavis), [boost](#boost), [graphviz](#graphviz), [cbtf](#cbtf)
Link Dependencies: [boost](#boost), [cuda](#cuda), [openspeedshop-utils](#openspeedshop-utils), [xerces-c](#xerces-c), [qtgraph](#qtgraph), [cbtf-argonavis](#cbtf-argonavis), [cbtf-krell](#cbtf-krell), [graphviz](#graphviz), [cbtf](#cbtf), [qt](#qt), [mrnet](#mrnet)
Description: The CBTF Argo Navis GUI project contains the GUI that views OpenSpeedShop performance information by loading the SQLite database files.
---
cbtf-krell[¶](#cbtf-krell)
===
Homepage:
* <http://sourceforge.net/p/cbtf/wiki/Home/>
Spack package:
* [cbtf-krell/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/cbtf-krell/package.py)
Versions: develop, 1.9.2, 1.9.1.2, 1.9.1.1, 1.9.1.0
Build Dependencies: [binutils](#binutils), [cbtf](#cbtf), mpich2, [libmonitor](#libmonitor), [cmake](#cmake), [libunwind](#libunwind), [papi](#papi), [mrnet](#mrnet), [mpich](#mpich), [llvm-openmp-ompt](#llvm-openmp-ompt), [xerces-c](#xerces-c), [gotcha](#gotcha), [mvapich2](#mvapich2), [openmpi](#openmpi), [boost](#boost), mvapich, mpt, [dyninst](#dyninst), [python](#python)
Link Dependencies: [boost](#boost), mpich2, [libmonitor](#libmonitor), [cbtf](#cbtf), [libunwind](#libunwind), [papi](#papi), [mrnet](#mrnet), mvapich, [llvm-openmp-ompt](#llvm-openmp-ompt), [xerces-c](#xerces-c), [gotcha](#gotcha), [mvapich2](#mvapich2), [openmpi](#openmpi), [mpich](#mpich), [binutils](#binutils), mpt, [dyninst](#dyninst)
Run Dependencies: [cbtf](#cbtf), [python](#python), [mrnet](#mrnet)
Description: The CBTF Krell project contains the Krell Institute contributions to the CBTF project. These contributions include many performance data collectors and support libraries, as well as some example tools that drive the data collection at HPC levels of scale.
---
cbtf-lanl[¶](#cbtf-lanl)
===
Homepage:
* <http://sourceforge.net/p/cbtf/wiki/Home/>
Spack package:
* [cbtf-lanl/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/cbtf-lanl/package.py)
Versions: develop, 1.9.2, 1.9.1.2, 1.9.1.1, 1.9.1.0
Build Dependencies: [cmake](#cmake), [cbtf-krell](#cbtf-krell), [xerces-c](#xerces-c), [cbtf](#cbtf), [mrnet](#mrnet)
Link Dependencies: [cbtf-krell](#cbtf-krell), [xerces-c](#xerces-c), [cbtf](#cbtf), [mrnet](#mrnet)
Description: The CBTF LANL project contains a memory tool and a data-center-type system command monitoring tool.
---
ccache[¶](#ccache)
===
Homepage:
* <https://ccache.samba.org/>
Spack package:
* [ccache/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/ccache/package.py)
Versions: 3.3.4, 3.3.3, 3.3.2, 3.3.1, 3.3, 3.2.9
Build Dependencies: [libxslt](#libxslt), [gperf](#gperf), [zlib](#zlib)
Link Dependencies: [libxslt](#libxslt), [gperf](#gperf), [zlib](#zlib)
Description: ccache is a compiler cache. It speeds up recompilation by caching previous compilations and detecting when the same compilation is being done again.
---
cctools[¶](#cctools)
===
Homepage:
* <https://github.com/cooperative-computing-lab/cctools>
Spack package:
* [cctools/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/cctools/package.py)
Versions: 6.1.1
Build Dependencies: [zlib](#zlib), [swig](#swig), [openssl](#openssl), [perl](#perl), [readline](#readline), [python](#python)
Link Dependencies: [zlib](#zlib), [readline](#readline), [swig](#swig), [openssl](#openssl)
Run Dependencies: [perl](#perl), [python](#python)
Description: The Cooperative Computing Tools (cctools) enable large scale distributed computations to harness hundreds to thousands of machines from clusters, clouds, and grids.
---
cdbfasta[¶](#cdbfasta)
===
Homepage:
* <https://github.com/gpertea/cdbfasta>
Spack package:
* [cdbfasta/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/cdbfasta/package.py)
Versions: 2017-03-16
Build Dependencies: [zlib](#zlib)
Link Dependencies: [zlib](#zlib)
Description: Fast indexing and retrieval of fasta records from flat file databases.
---
cdd[¶](#cdd)
===
Homepage:
* <https://www.inf.ethz.ch/personal/fukudak/cdd_home/cdd.html>
Spack package:
* [cdd/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/cdd/package.py)
Versions: 0.61a
Build Dependencies: [libtool](#libtool)
Description: The program cdd+ (cdd, respectively) is a C++ (ANSI C) implementation of the Double Description Method [MRTT53] for generating all vertices (i.e. extreme points) and extreme rays of a general convex polyhedron given by a system of linear inequalities.
---
cddlib[¶](#cddlib)
===
Homepage:
* <https://www.inf.ethz.ch/personal/fukudak/cdd_home/>
Spack package:
* [cddlib/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/cddlib/package.py)
Versions: 0.94h
Build Dependencies: [gmp](#gmp), [libtool](#libtool)
Link Dependencies: [gmp](#gmp)
Description: The C-library cddlib is a C implementation of the Double Description Method of Motzkin et al. for generating all vertices (i.e. extreme points) and extreme rays of a general convex polyhedron in R^d given by a system of linear inequalities.
---
cdhit[¶](#cdhit)
===
Homepage:
* <http://cd-hit.org/>
Spack package:
* [cdhit/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/cdhit/package.py)
Versions: 4.6.8
Build Dependencies: [perl](#perl)
Run Dependencies: [perl](#perl)
Description: CD-HIT is a very widely used program for clustering and comparing protein or nucleotide sequences.
---
cdo[¶](#cdo)
===
Homepage:
* <https://code.mpimet.mpg.de/projects/cdo>
Spack package:
* [cdo/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/cdo/package.py)
Versions: 1.9.5, 1.9.4, 1.9.3, 1.9.2, 1.9.1, 1.9.0, 1.8.2, 1.7.2
Build Dependencies: szip, [libxml2](#libxml2), [hdf5](#hdf5), [curl](#curl), [magics](#magics), [udunits2](#udunits2), [fftw](#fftw), [netcdf](#netcdf), [eccodes](#eccodes), [proj](#proj), [libuuid](#libuuid), [grib-api](#grib-api)
Link Dependencies: szip, [libxml2](#libxml2), [hdf5](#hdf5), [curl](#curl), [magics](#magics), [udunits2](#udunits2), [fftw](#fftw), [netcdf](#netcdf), [eccodes](#eccodes), [proj](#proj), [libuuid](#libuuid), [grib-api](#grib-api)
Description: CDO is a collection of command line Operators to manipulate and analyse Climate and NWP model Data.
---
ceed[¶](#ceed)
===
Homepage:
* <https://ceed.exascaleproject.org>
Spack package:
* [ceed/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/ceed/package.py)
Versions: 1.0.0
Build Dependencies: [pumi](#pumi), [petsc](#petsc), [nek5000](#nek5000), [magma](#magma), [suite-sparse](#suite-sparse), [nekbone](#nekbone), [gslib](#gslib), [nekcem](#nekcem), [laghos](#laghos), [mfem](#mfem), [occa](#occa), [hpgmg](#hpgmg), [libceed](#libceed), [hypre](#hypre)
Link Dependencies: [pumi](#pumi), [petsc](#petsc), [nek5000](#nek5000), [magma](#magma), [suite-sparse](#suite-sparse), [nekbone](#nekbone), [gslib](#gslib), [nekcem](#nekcem), [laghos](#laghos), [mfem](#mfem), [occa](#occa), [hpgmg](#hpgmg), [libceed](#libceed), [hypre](#hypre)
Description: Ceed is a collection of benchmarks, miniapps, software libraries and APIs for efficient high-order finite element and spectral element discretizations for exascale applications developed in the Department of Energy (DOE) and partially supported by the Exascale Computing Project (ECP). This is a Spack bundle package that installs the CEED software components.
---
cereal[¶](#cereal)
===
Homepage:
* <http://uscilab.github.io/cereal/>
Spack package:
* [cereal/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/cereal/package.py)
Versions: 1.2.2, 1.2.1, 1.2.0, 1.1.2, 1.1.1, 1.1.0, 1.0.0, 0.9.1
Build Dependencies: [cmake](#cmake)
Description: cereal is a header-only C++11 serialization library. cereal takes arbitrary data types and reversibly turns them into different representations, such as compact binary encodings, XML, or JSON. cereal was designed to be fast, light-weight, and easy to extend - it has no external dependencies and can be easily bundled with other code or used standalone.
---
ceres-solver[¶](#ceres-solver)
===
Homepage:
* <http://ceres-solver.org>
Spack package:
* [ceres-solver/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/ceres-solver/package.py)
Versions: 1.12.0
Build Dependencies: [cmake](#cmake), [glog](#glog), [eigen](#eigen), lapack
Link Dependencies: [glog](#glog), [eigen](#eigen), lapack
Description: Ceres Solver is an open source C++ library for modeling and solving large, complicated optimization problems. It can be used to solve Non-linear Least Squares problems with bounds constraints and general unconstrained optimization problems. It is a mature, feature rich, and performant library that has been used in production at Google since 2010.
---
cfitsio[¶](#cfitsio)
===
Homepage:
* <http://heasarc.gsfc.nasa.gov/fitsio/>
Spack package:
* [cfitsio/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/cfitsio/package.py)
Versions: 3.450, 3.420, 3.410, 3.370
Build Dependencies: [bzip2](#bzip2)
Link Dependencies: [bzip2](#bzip2)
Description: CFITSIO is a library of C and Fortran subroutines for reading and writing data files in FITS (Flexible Image Transport System) data format.
---
cgal[¶](#cgal)
===
Homepage:
* <http://www.cgal.org/>
Spack package:
* [cgal/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/cgal/package.py)
Versions: 4.12, 4.11, 4.9.1, 4.9, 4.7, 4.6.3
Build Dependencies: [gmp](#gmp), [zlib](#zlib), [cmake](#cmake), [boost](#boost), [mpfr](#mpfr), [qt](#qt)
Link Dependencies: [gmp](#gmp), [zlib](#zlib), [boost](#boost), [mpfr](#mpfr), [qt](#qt)
Description: The Computational Geometry Algorithms Library (CGAL) is a C++ library that aims to provide easy access to efficient and reliable algorithms in computational geometry.
CGAL is used in various areas needing geometric computation, such as geographic information systems, computer aided design, molecular biology, medical imaging, computer graphics, and robotics.
---
cgm[¶](#cgm)
===
Homepage:
* <http://sigma.mcs.anl.gov/cgm-library>
Spack package:
* [cgm/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/cgm/package.py)
Versions: 16.0, 13.1.1, 13.1.0, 13.1
Build Dependencies: [oce](#oce), mpi
Link Dependencies: [oce](#oce), mpi
Description: The Common Geometry Module, Argonne (CGMA) is a code library which provides geometry functionality used for mesh generation and other applications.
---
cgns[¶](#cgns)
===
Homepage:
* <http://cgns.github.io/>
Spack package:
* [cgns/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/cgns/package.py)
Versions: 3.3.1, 3.3.0
Build Dependencies: [cmake](#cmake), mpi, [hdf5](#hdf5)
Link Dependencies: mpi, [hdf5](#hdf5)
Description: The CFD General Notation System (CGNS) provides a general, portable, and extensible standard for the storage and retrieval of computational fluid dynamics (CFD) analysis data.
---
channelflow[¶](#channelflow)
===
Homepage:
* <https://github.com/epfl-ecps/channelflow>
Spack package:
* [channelflow/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/channelflow/package.py)
Versions: develop
Build Dependencies: mpi, [netcdf](#netcdf), [eigen](#eigen), [hdf5](#hdf5), [cmake](#cmake), [boost](#boost), [fftw](#fftw)
Link Dependencies: mpi, [netcdf](#netcdf), [eigen](#eigen), [hdf5](#hdf5), [boost](#boost), [fftw](#fftw)
Description: Channelflow is a software system for numerical analysis of the incompressible fluid flow in channel geometries, written in C++.
---
charliecloud[¶](#charliecloud)
===
Homepage:
* <https://hpc.github.io/charliecloud>
Spack package:
* [charliecloud/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/charliecloud/package.py)
Versions: 0.9.3, 0.9.2, 0.9.1, 0.9.0, 0.2.4
Description: Lightweight user-defined software stacks for HPC.
---
charmpp[¶](#charmpp)
===
Homepage:
* <http://charmplusplus.org>
Spack package:
* [charmpp/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/charmpp/package.py)
Versions: develop, 6.8.2, 6.8.1, 6.8.0, 6.7.1, 6.7.0, 6.6.1, 6.6.0, 6.5.1
Build Dependencies: [cuda](#cuda), [automake](#automake), mpi, [papi](#papi), [autoconf](#autoconf)
Link Dependencies: [cuda](#cuda), [automake](#automake), mpi, [papi](#papi), [autoconf](#autoconf)
Description: Charm++ is a parallel programming framework in C++ supported by an adaptive runtime system, which enhances user productivity and allows programs to run portably from small multicore computers (your laptop) to the largest supercomputers.
---
chatterbug[¶](#chatterbug)
===
Homepage:
* <https://chatterbug.readthedocs.io>
Spack package:
* [chatterbug/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/chatterbug/package.py)
Versions: develop, 1.0
Build Dependencies: [scorep](#scorep), mpi
Link Dependencies: [scorep](#scorep), mpi
Description: A suite of communication-intensive proxy applications that mimic commonly found communication patterns in HPC codes. These codes can be used as synthetic codes for benchmarking, or for trace generation using Score-P / OTF2.
---
check[¶](#check)
===
Homepage:
* <https://libcheck.github.io/check/index.html>
Spack package:
* [check/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/check/package.py)
Versions: 0.10.0
Description: Check is a unit testing framework for C.
It features a simple interface for defining unit tests, putting little in the way of the developer. Tests are run in a separate address space, so both assertion failures and code errors that cause segmentation faults or other signals can be caught. Test results are reportable in the following: Subunit, TAP, XML, and a generic logging format.
---
chlorop[¶](#chlorop)
===
Homepage:
* <http://www.cbs.dtu.dk/services/ChloroP/>
Spack package:
* [chlorop/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/chlorop/package.py)
Versions: 1.1
Run Dependencies: awk
Description: Chlorop predicts the presence of chloroplast transit peptides in protein sequences and the location of potential cTP cleavage sites. You will need to obtain the tarball by visiting the URL and completing the form. You can then either run spack install with the tarball in the directory, or add it to a mirror. You will need to set the CHLOROTMP environment variable to the full path of the directory you want chlorop to use as a temporary directory.
---
chombo[¶](#chombo)
===
Homepage:
* <https://commons.lbl.gov/display/chombo>
Spack package:
* [chombo/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/chombo/package.py)
Versions: develop, 3.2
Build Dependencies: [gmake](#gmake), mpi, blas, lapack, [hdf5](#hdf5)
Link Dependencies: mpi, blas, lapack, [hdf5](#hdf5)
Description: The Chombo package provides a set of tools for implementing finite difference and finite-volume methods for the solution of partial differential equations on block-structured adaptively refined logically rectangular (i.e. Cartesian) grids.
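The chlorop entry above describes a manual-download workflow. A sketch of those steps with standard Spack commands (the mirror path and scratch directory are placeholders, not from the entry; this assumes a working Spack installation):

```shell
# Download chlorop-1.1.tar.gz by hand via the form at the homepage, then,
# from the directory holding the tarball, install directly:
spack install chlorop

# ...or stage the tarball into a local mirror first (placeholder path):
spack mirror add local-mirror file:///path/to/mirror
spack mirror create -d /path/to/mirror chlorop

# chlorop needs a scratch directory at run time:
export CHLOROTMP=/tmp/chlorop-scratch
mkdir -p "$CHLOROTMP"
```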
---
cistem[¶](#cistem)
===
Homepage:
* <https://cistem.org/>
Spack package:
* [cistem/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/cistem/package.py)
Versions: 1.0.0-beta
Build Dependencies: [wx](#wx), [fftw](#fftw)
Link Dependencies: [wx](#wx), [fftw](#fftw)
Description: cisTEM is user-friendly software to process cryo-EM images of macromolecular complexes and obtain high-resolution 3D reconstructions from them.
---
cityhash[¶](#cityhash)
===
Homepage:
* <https://github.com/google/cityhash>
Spack package:
* [cityhash/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/cityhash/package.py)
Versions: 2013-07-31, master
Description: CityHash, a family of hash functions for strings.
---
clamr[¶](#clamr)
===
Homepage:
* <https://github.com/lanl/CLAMR>
Spack package:
* [clamr/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/clamr/package.py)
Versions: master
Build Dependencies: [cmake](#cmake), mpi, mpe
Link Dependencies: [cmake](#cmake), mpi, mpe
Description: The CLAMR code is a cell-based adaptive mesh refinement (AMR) mini-app developed as a testbed for hybrid algorithm development using MPI and OpenCL GPU code.
---
clapack[¶](#clapack)
===
Homepage:
* <http://www.netlib.org/clapack/>
Spack package:
* [clapack/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/clapack/package.py)
Versions: 3.2.1
Build Dependencies: [atlas](#atlas)
Link Dependencies: [atlas](#atlas)
Description: CLAPACK is a f2c'ed version of LAPACK. The CLAPACK library was built using a Fortran to C conversion utility called f2c. The entire Fortran 77 LAPACK library is run through f2c to obtain C code, and then modified to improve readability. CLAPACK's goal is to provide LAPACK for someone who does not have access to a Fortran compiler.
---
claw[¶](#claw)
===
Homepage:
* <https://claw-project.github.io/>
Spack package:
* [claw/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/claw/package.py)
Versions: 1.1.0
Build Dependencies: java, [cmake](#cmake), [ant](#ant), [libxml2](#libxml2), [bison](#bison)
Link Dependencies: java, [ant](#ant), [libxml2](#libxml2), [bison](#bison)
Description: The CLAW Compiler targets the performance portability problem in climate and weather applications written in Fortran. From a single source code, it generates architecture-specific code decorated with OpenMP or OpenACC.
---
cleaveland4[¶](#cleaveland4)
===
Homepage:
* <https://github.com/MikeAxtell/CleaveLand4>
Spack package:
* [cleaveland4/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/cleaveland4/package.py)
Versions: 4.4
Build Dependencies: [r](#r), [samtools](#samtools), [perl-math-cdf](#perl-math-cdf), [perl](#perl), [bowtie](#bowtie), [viennarna](#viennarna)
Link Dependencies: [samtools](#samtools), [bowtie](#bowtie), [viennarna](#viennarna)
Run Dependencies: [r](#r), [perl](#perl), [perl-math-cdf](#perl-math-cdf)
Description: CleaveLand4: Analysis of degradome data to find sliced miRNA and siRNA targets.
---
cleverleaf[¶](#cleverleaf)
===
Homepage:
* <http://uk-mac.github.io/CleverLeaf/>
Spack package:
* [cleverleaf/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/cleverleaf/package.py)
Versions: develop
Build Dependencies: [cmake](#cmake), [boost](#boost), [samrai](#samrai), [hdf5](#hdf5)
Link Dependencies: [boost](#boost), [samrai](#samrai), [hdf5](#hdf5)
Description: CleverLeaf is a hydrodynamics mini-app that extends CloverLeaf with Adaptive Mesh Refinement using the SAMRAI toolkit from Lawrence Livermore National Laboratory. The primary goal of CleverLeaf is to evaluate the application of AMR to the Lagrangian-Eulerian hydrodynamics scheme used by CloverLeaf.
---
clfft[¶](#clfft)
===
Homepage:
* <https://github.com/clMathLibraries/clFFT>
Spack package:
* [clfft/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/clfft/package.py)
Versions: 2.12.2
Build Dependencies: [cmake](#cmake), [boost](#boost), opencl
Link Dependencies: opencl, [boost](#boost)
Description: A software library containing FFT functions written in OpenCL.
---
clhep[¶](#clhep)
===
Homepage:
* <http://proj-clhep.web.cern.ch/proj-clhep/>
Spack package:
* [clhep/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/clhep/package.py)
Versions: 2.4.1.0, 2.4.0.4, 2.4.0.2, 2.4.0.1, 2.4.0.0, 2.3.4.6, 2.3.4.5, 2.3.4.4, 2.3.4.3, 2.3.4.2, 2.3.3.2, 2.3.3.1, 2.3.3.0, 2.3.2.2, 2.3.1.1, 2.3.1.0, 2.3.0.0, 2.2.0.8, 2.2.0.5, 2.2.0.4
Build Dependencies: [cmake](#cmake)
Description: CLHEP is a C++ Class Library for High Energy Physics.
---
clingo[¶](#clingo)
===
Homepage:
* <https://potassco.org/clingo/>
Spack package:
* [clingo/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/clingo/package.py)
Versions: 5.2.2
Build Dependencies: [cmake](#cmake), [doxygen](#doxygen), [python](#python)
Link Dependencies: [python](#python)
Description: Clingo: a grounder and solver for logic programs. Clingo is part of the Potassco project for Answer Set Programming (ASP). ASP offers a simple and powerful modeling language to describe combinatorial problems as logic programs. The clingo system then takes such a logic program and computes answer sets representing solutions to the given problem.
---
cloc[¶](#cloc)
===
Homepage:
* <https://github.com/AlDanial/cloc/>
Spack package:
* [cloc/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/cloc/package.py)
Versions: 1.74
Build Dependencies: [perl](#perl)
Link Dependencies: [perl](#perl)
Description: Count, or compute differences of, physical lines of source code in the given files (may be archives such as compressed tarballs or zip files) and/or recursively below the given directories.
---
cloog[¶](#cloog)
===
Homepage:
* <http://www.cloog.org>
Spack package:
* [cloog/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/cloog/package.py)
Versions: 0.18.1, 0.18.0, 0.17.0
Build Dependencies: [gmp](#gmp), [isl](#isl)
Link Dependencies: [gmp](#gmp), [isl](#isl)
Description: CLooG is a free software and library to generate code for scanning Z-polyhedra. That is, it finds a code (e.g. in C, FORTRAN...) that reaches each integral point of one or more parameterized polyhedra.
---
cloverleaf[¶](#cloverleaf)
===
Homepage:
* <http://uk-mac.github.io/CloverLeaf>
Spack package:
* [cloverleaf/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/cloverleaf/package.py)
Versions: 1.1
Build Dependencies: [cuda](#cuda), mpi
Link Dependencies: [cuda](#cuda), mpi
Description: Proxy Application. CloverLeaf is a miniapp that solves the compressible Euler equations on a Cartesian grid, using an explicit, second-order accurate method.
---
cloverleaf3d[¶](#cloverleaf3d)
===
Homepage:
* <http://uk-mac.github.io/CloverLeaf3D/>
Spack package:
* [cloverleaf3d/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/cloverleaf3d/package.py)
Versions: 1.0
Build Dependencies: mpi
Link Dependencies: mpi
Description: Proxy Application. CloverLeaf3D is a 3D version of the CloverLeaf mini-app.
CloverLeaf is a mini-app that solves the compressible Euler equations on a Cartesian grid, using an explicit, second-order accurate method.
---
clustalo[¶](#clustalo)
===
Homepage:
* <http://www.clustal.org/omega/>
Spack package:
* [clustalo/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/clustalo/package.py)
Versions: 1.2.4
Build Dependencies: [argtable](#argtable)
Link Dependencies: [argtable](#argtable)
Description: Clustal Omega: the last alignment program you'll ever need.
---
clustalw[¶](#clustalw)
===
Homepage:
* <http://www.clustal.org/clustal2/>
Spack package:
* [clustalw/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/clustalw/package.py)
Versions: 2.1
Description: Multiple alignment of nucleic acid and protein sequences.
---
cmake[¶](#cmake)
===
Homepage:
* <https://www.cmake.org>
Spack package:
* [cmake/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/cmake/package.py)
Versions: 3.12.3, 3.12.2, 3.12.1, 3.12.0, 3.11.4, 3.11.3, 3.11.2, 3.11.1, 3.11.0, 3.10.3, 3.10.2, 3.10.1, 3.10.0, 3.9.6, 3.9.4, 3.9.0, 3.8.2, 3.8.1, 3.8.0, 3.7.2, 3.7.1, 3.6.1, 3.6.0, 3.5.2, 3.5.1, 3.5.0, 3.4.3, 3.4.0, 3.3.1, 3.1.0, 3.0.2, 2.8.10.2
Build Dependencies: [libuv](#libuv), [rhash](#rhash), [xz](#xz), [curl](#curl), [expat](#expat), [qt](#qt), [zlib](#zlib), [bzip2](#bzip2), [python](#python), [openssl](#openssl), [libarchive](#libarchive), [ncurses](#ncurses), [py-sphinx](#py-sphinx)
Link Dependencies: [zlib](#zlib), [bzip2](#bzip2), [libuv](#libuv), [openssl](#openssl), [rhash](#rhash), [xz](#xz), [curl](#curl), [libarchive](#libarchive), [expat](#expat), [qt](#qt), [ncurses](#ncurses)
Description: A cross-platform, open-source build system. CMake is a family of tools designed to build, test and package software.
---
cmocka[¶](#cmocka)
===
Homepage:
* <https://cmocka.org/>
Spack package:
* [cmocka/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/cmocka/package.py)
Versions: 1.1.1, 1.1.0, 1.0.1
Build Dependencies: [cmake](#cmake)
Description: Unit-testing framework in pure C
---
cmor[¶](#cmor)
===
Homepage:
* <http://cmor.llnl.gov>
Spack package:
* [cmor/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/cmor/package.py)
Versions: 3.3.0, 3.2.0, 3.1.2
Build Dependencies: [netcdf](#netcdf), [uuid](#uuid), [udunits2](#udunits2), [py-numpy](#py-numpy), [hdf5](#hdf5), [python](#python)
Link Dependencies: [python](#python), [udunits2](#udunits2), [netcdf](#netcdf), [uuid](#uuid), [hdf5](#hdf5)
Run Dependencies: [py-numpy](#py-numpy)
Description: Climate Model Output Rewriter is used to produce CF-compliant netCDF files. The structure of the files created by the library and the metadata they contain fulfill the requirements of many of the climate community's standard model experiments.
---
cnmem[¶](#cnmem)
===
Homepage:
* <https://github.com/NVIDIA/cnmem>
Spack package:
* [cnmem/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/cnmem/package.py)
Versions: git
Build Dependencies: [cmake](#cmake)
Description: CNMem mempool for CUDA devices
---
cnpy[¶](#cnpy)
===
Homepage:
* <https://github.com/rogersce/cnpy>
Spack package:
* [cnpy/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/cnpy/package.py)
Versions: master
Build Dependencies: [cmake](#cmake)
Description: cnpy: library to read/write .npy and .npz files in C/C++.
--- cntk[¶](#cntk) === Homepage: * <https://www.microsoft.com/en-us/research/product/cognitive-toolkitSpack package: * [cntk/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/cntk/package.py) Versions: 2.0, master Build Dependencies: [cuda](#cuda), [kaldi](#kaldi), [cntk1bitsgd](#cntk1bitsgd), [opencv](#opencv), [cub](#cub), mpi, [libzip](#libzip), [cudnn](#cudnn), [protobuf](#protobuf), [multiverso](#multiverso), [boost](#boost), [nccl](#nccl), [openblas](#openblas) Link Dependencies: [cuda](#cuda), [kaldi](#kaldi), [cntk1bitsgd](#cntk1bitsgd), [opencv](#opencv), [cub](#cub), mpi, [libzip](#libzip), [cudnn](#cudnn), [protobuf](#protobuf), [multiverso](#multiverso), [boost](#boost), [nccl](#nccl), [openblas](#openblas) Description: The Microsoft Cognitive Toolkit is a unified deep-learning toolkit that describes neural networks as a series of computational steps via a directed graph. --- cntk1bitsgd[¶](#cntk1bitsgd) === Homepage: * <https://github.com/CNTK-components/CNTK1bitSGDSpack package: * [cntk1bitsgd/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/cntk1bitsgd/package.py) Versions: master, c8b77d Description: CNTK1bitSGD is the header-only 1-bit stochastic gradient descent (1bit-SGD) library for the Computational Network Toolkit (CNTK). --- codar-cheetah[¶](#codar-cheetah) === Homepage: * <https://github.com/CODARcode/cheetahSpack package: * [codar-cheetah/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/codar-cheetah/package.py) Versions: develop, 0.1 Build Dependencies: [savanna](#savanna), [python](#python) Link Dependencies: [savanna](#savanna) Run Dependencies: [python](#python) Description: CODAR Cheetah: The CODAR Experiment Harness for Exascale science applications. 
--- codes[¶](#codes) === Homepage: * <http://www.mcs.anl.gov/projects/codesSpack package: * [codes/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/codes/package.py) Versions: develop, 1.0.0 Build Dependencies: pkgconfig, mpi, [m4](#m4), [libtool](#libtool), [bison](#bison), [autoconf](#autoconf), [sst-dumpi](#sst-dumpi), [automake](#automake), [ross](#ross), [flex](#flex) Link Dependencies: [sst-dumpi](#sst-dumpi), mpi, [ross](#ross) Description: CO-Design of multi-layer Exascale Storage (CODES) simulation framework --- coevp[¶](#coevp) === Homepage: * <https://github.com/exmatex/CoEVPSpack package: * [coevp/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/coevp/package.py) Versions: develop Build Dependencies: [flann](#flann), mpi, [silo](#silo), lapack Link Dependencies: [flann](#flann), mpi, [silo](#silo), lapack Description: CoEVP is a scale-bridging proxy application for embedded viscoplasticity applications. It is created and maintained by The Exascale Co-Design Center for Materials in Extreme Environments (ExMatEx). The code is intended to serve as a vehicle for co-design by allowing others to extend and/or reimplement it as needed to test performance of new architectures, programming models, etc. Due to the size and complexity of the studied models, as well as restrictions on distribution, the currently available LULESH proxy application provides the coarse-scale model implementation and the ASPA proxy application provides the adaptive sampling support. 
--- cohmm[¶](#cohmm) === Homepage: * <http://www.exmatex.org/cohmm.htmlSpack package: * [cohmm/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/cohmm/package.py) Versions: develop Build Dependencies: [gnuplot](#gnuplot) Link Dependencies: [gnuplot](#gnuplot) Description: An anticipated important use-case for next-generation supercomputing is multiscale modeling, in which continuum equations for large-scale material deformation are augmented with high-fidelity, fine-scale simulations that provide constitutive data on demand. --- coinhsl[¶](#coinhsl) === Homepage: * <http://www.hsl.rl.ac.uk/ipopt/Spack package: * [coinhsl/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/coinhsl/package.py) Versions: 2015.06.23, 2014.01.17, 2014.01.10 Build Dependencies: blas Link Dependencies: blas Description: CoinHSL is a collection of linear algebra libraries (KB22, MA27, MA28, MA54, MA57, MA64, MA77, MA86, MA97, MC19, MC34, MC64, MC68, MC69, MC78, MC80, OF01, ZB01, ZB11) bundled for use with IPOPT and other applications that use these HSL routines. Note: CoinHSL is licensed software. You will need to request a license from Research Councils UK and download a .tar.gz archive of CoinHSL yourself. Spack will search your current directory for the download file. Alternatively, add this file to a mirror so that Spack can find it. For instructions on how to set up a mirror, see http://spack.readthedocs.io/en/latest/mirrors.html --- colm[¶](#colm) === Homepage: * <http://www.colm.net/open-source/colmSpack package: * [colm/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/colm/package.py) Versions: 0.12.0 Description: Colm Programming Language Colm is a programming language designed for the analysis and transformation of computer languages. Colm is influenced primarily by TXL. It is in the family of program transformation languages. 
--- colordiff[¶](#colordiff) === Homepage: * <https://www.colordiff.orgSpack package: * [colordiff/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/colordiff/package.py) Versions: 1.0.18 Build Dependencies: [perl](#perl) Link Dependencies: [perl](#perl) Description: Colorful diff utility. --- comd[¶](#comd) === Homepage: * <http://www.exmatex.org/comd.htmlSpack package: * [comd/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/comd/package.py) Versions: develop, 1.1 Build Dependencies: mpi, [graphviz](#graphviz) Link Dependencies: mpi, [graphviz](#graphviz) Description: CoMD is a reference implementation of classical molecular dynamics algorithms and workloads as used in materials science. It is created and maintained by The Exascale Co-Design Center for Materials in Extreme Environments (ExMatEx). The code is intended to serve as a vehicle for co-design by allowing others to extend and/or reimplement it as needed to test performance of new architectures, programming models, etc. New versions of CoMD will be released to incorporate the lessons learned from the co-design process. --- commons-lang[¶](#commons-lang) === Homepage: * <http://commons.apache.org/proper/commons-lang/Spack package: * [commons-lang/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/commons-lang/package.py) Versions: 2.6, 2.4 Build Dependencies: [jdk](#jdk) Link Dependencies: [jdk](#jdk) Run Dependencies: java Description: The standard Java libraries fail to provide enough methods for manipulation of its core classes. Apache Commons Lang provides these extra methods. Lang provides a host of helper utilities for the java.lang API, notably String manipulation methods, basic numerical methods, object reflection, concurrency, creation and serialization and System properties. 
Additionally it contains basic enhancements to java.util.Date and a series of utilities dedicated to help with building methods, such as hashCode, toString and equals. --- commons-lang3[¶](#commons-lang3) === Homepage: * <http://commons.apache.org/proper/commons-lang/Spack package: * [commons-lang3/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/commons-lang3/package.py) Versions: 3.7 Build Dependencies: [jdk](#jdk) Link Dependencies: [jdk](#jdk) Run Dependencies: java Description: The standard Java libraries fail to provide enough methods for manipulation of its core classes. Apache Commons Lang provides these extra methods. Lang provides a host of helper utilities for the java.lang API, notably String manipulation methods, basic numerical methods, object reflection, concurrency, creation and serialization and System properties. Additionally it contains basic enhancements to java.util.Date and a series of utilities dedicated to help with building methods, such as hashCode, toString and equals. --- commons-logging[¶](#commons-logging) === Homepage: * <http://commons.apache.org/proper/commons-logging/Spack package: * [commons-logging/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/commons-logging/package.py) Versions: 1.2, 1.1.3, 1.1.1 Build Dependencies: [jdk](#jdk) Link Dependencies: [jdk](#jdk) Run Dependencies: java Description: When writing a library it is very useful to log information. However there are many logging implementations out there, and a library cannot impose the use of a particular one on the overall application that the library is a part of. The Logging package is an ultra-thin bridge between different logging implementations. A library that uses the commons-logging API can be used with any logging implementation at runtime. 
Commons-logging comes with support for a number of popular logging implementations, and writing adapters for others is a reasonably simple task. --- compiz[¶](#compiz) === Homepage: * <http://www.compiz.org/Spack package: * [compiz/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/compiz/package.py) Versions: 0.7.8 Build Dependencies: [libxrender](#libxrender), [glib](#glib), [libxinerama](#libxinerama), [libpng](#libpng), [libxml2](#libxml2), [libxdamage](#libxdamage), [libxcomposite](#libxcomposite), [libxcb](#libxcb), [libxrandr](#libxrandr), [libice](#libice), [libsm](#libsm), [libxslt](#libxslt), [libxfixes](#libxfixes), [gconf](#gconf) Link Dependencies: [libxrender](#libxrender), [glib](#glib), [libxinerama](#libxinerama), [libpng](#libpng), [libxml2](#libxml2), [libxdamage](#libxdamage), [libxcomposite](#libxcomposite), [libxcb](#libxcb), [libxrandr](#libxrandr), [libice](#libice), [libsm](#libsm), [libxslt](#libxslt), [libxfixes](#libxfixes), [gconf](#gconf) Description: compiz - OpenGL window and compositing manager. Compiz is an OpenGL compositing manager that uses GLX_EXT_texture_from_pixmap for binding redirected top-level windows to texture objects. It has a flexible plug-in system and it is designed to run well on most graphics hardware. --- compositeproto[¶](#compositeproto) === Homepage: * <http://cgit.freedesktop.org/xorg/proto/compositeprotoSpack package: * [compositeproto/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/compositeproto/package.py) Versions: 0.4.2 Build Dependencies: [util-macros](#util-macros), pkgconfig Description: Composite Extension. This package contains header files and documentation for the composite extension. Library and server implementations are separate. 
--- conduit[¶](#conduit) === Homepage: * <http://software.llnl.gov/conduitSpack package: * [conduit/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/conduit/package.py) Versions: 0.3.1, 0.3.0, 0.2.1, 0.2.0, master Build Dependencies: mpi, [silo](#silo), [hdf5](#hdf5), [cmake](#cmake), [py-numpy](#py-numpy), [doxygen](#doxygen), [python](#python), [py-sphinx](#py-sphinx) Link Dependencies: mpi, [silo](#silo), [hdf5](#hdf5), [cmake](#cmake), [doxygen](#doxygen), [python](#python) Run Dependencies: [py-numpy](#py-numpy) Description: Conduit is an open source project from Lawrence Livermore National Laboratory that provides an intuitive model for describing hierarchical scientific data in C++, C, Fortran, and Python. It is used for data coupling between packages in-core, serialization, and I/O tasks. --- constype[¶](#constype) === Homepage: * <http://cgit.freedesktop.org/xorg/app/constypeSpack package: * [constype/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/constype/package.py) Versions: 1.0.4 Build Dependencies: [util-macros](#util-macros), pkgconfig Description: constype prints on the standard output the Sun code for the type of display that the specified device is. It was originally written for SunOS, but has been ported to other SPARC OS'es and to Solaris on both SPARC & x86. --- converge[¶](#converge) === Homepage: * <https://www.convergecfd.com/Spack package: * [converge/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/converge/package.py) Versions: 2.4.10, 2.3.23, 2.2.0, 2.1.0, 2.0.0 Build Dependencies: [openmpi](#openmpi), mpi Link Dependencies: [openmpi](#openmpi), mpi Description: CONVERGE is a revolutionary computational fluid dynamics (CFD) program that eliminates the grid generation bottleneck from the simulation process. 
CONVERGE was developed by engine simulation experts and is straightforward to use for both engine and non-engine simulations. Unlike many CFD programs, CONVERGE automatically generates a perfectly orthogonal, structured grid at runtime based on simple, user-defined grid control parameters. This grid generation method completely eliminates the need to manually generate a grid. In addition, CONVERGE offers many other features to expedite the setup process and to ensure that your simulations are as computationally efficient as possible. --- coreutils[¶](#coreutils) === Homepage: * <http://www.gnu.org/software/coreutils/Spack package: * [coreutils/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/coreutils/package.py) Versions: 8.29, 8.26, 8.23 Description: The GNU Core Utilities are the basic file, shell and text manipulation utilities of the GNU operating system. These are the core utilities which are expected to exist on every operating system. --- corset[¶](#corset) === Homepage: * <https://github.com/Oshlack/Corset/wikiSpack package: * [corset/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/corset/package.py) Versions: 1.06 Description: Corset is a command-line software program to go from a de novo transcriptome assembly to gene-level counts. 
--- cosmomc[¶](#cosmomc) === Homepage: * <http://cosmologist.info/cosmomc/Spack package: * [cosmomc/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/cosmomc/package.py) Versions: 2016.11, 2016.06 Build Dependencies: mpi, [python](#python), [planck-likelihood](#planck-likelihood), [py-numpy](#py-numpy), [py-matplotlib](#py-matplotlib), [py-pandas](#py-pandas), [py-scipy](#py-scipy), [py-six](#py-six) Link Dependencies: [planck-likelihood](#planck-likelihood), mpi, [python](#python) Run Dependencies: [py-scipy](#py-scipy), [py-numpy](#py-numpy), [py-matplotlib](#py-matplotlib), [py-pandas](#py-pandas), [python](#python), [py-six](#py-six) Description: CosmoMC is a Fortran 2008 Markov-Chain Monte-Carlo (MCMC) engine for exploring cosmological parameter space, together with Fortran and python code for analysing Monte-Carlo samples and importance sampling (plus a suite of scripts for building grids of runs, plotting and presenting results). --- cosp2[¶](#cosp2) === Homepage: * <http://www.exmatex.org/cosp2.htmlSpack package: * [cosp2/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/cosp2/package.py) Versions: master Build Dependencies: mpi Link Dependencies: mpi Description: Proxy Application. CoSP2 represents a sparse linear algebra parallel algorithm for calculating the density matrix in electronic structure theory. The algorithm is based on a recursive second-order Fermi-Operator expansion method (SP2) and is tailored for density functional based tight-binding calculations of non-metallic systems. 
--- cp2k[¶](#cp2k) === Homepage: * <https://www.cp2k.orgSpack package: * [cp2k/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/cp2k/package.py) Versions: 5.1, 4.1, 3.0 Build Dependencies: [plumed](#plumed), [pexsi](#pexsi), [libint](#libint), scalapack, [fftw](#fftw), lapack, [libxc](#libxc), mpi, [elpa](#elpa), [libxsmm](#libxsmm), blas, [python](#python), [wannier90](#wannier90) Link Dependencies: [plumed](#plumed), [pexsi](#pexsi), [libint](#libint), scalapack, [fftw](#fftw), lapack, [libxc](#libxc), mpi, [elpa](#elpa), [libxsmm](#libxsmm), blas, [wannier90](#wannier90) Description: CP2K is a quantum chemistry and solid state physics software package that can perform atomistic simulations of solid state, liquid, molecular, periodic, material, crystal, and biological systems --- cppad[¶](#cppad) === Homepage: * <https://www.coin-or.org/CppAD/Spack package: * [cppad/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/cppad/package.py) Versions: develop, 20170114 Build Dependencies: [cmake](#cmake) Description: A Package for Differentiation of C++ Algorithms. --- cppcheck[¶](#cppcheck) === Homepage: * <http://cppcheck.sourceforge.net/Spack package: * [cppcheck/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/cppcheck/package.py) Versions: 1.81, 1.78, 1.72, 1.68 Run Dependencies: [py-pygments](#py-pygments) Description: A tool for static C/C++ code analysis. 
--- cppgsl[¶](#cppgsl) === Homepage: * <https://github.com/Microsoft/GSLSpack package: * [cppgsl/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/cppgsl/package.py) Versions: develop, 2.0.0, 1.0.0 Build Dependencies: [cmake](#cmake) Description: C++ Guideline Support Library --- cpprestsdk[¶](#cpprestsdk) === Homepage: * <https://github.com/Microsoft/cpprestsdkSpack package: * [cpprestsdk/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/cpprestsdk/package.py) Versions: 2.9.1 Build Dependencies: [cmake](#cmake), [boost](#boost) Link Dependencies: [boost](#boost) Description: The C++ REST SDK is a Microsoft project for cloud-based client-server communication in native code using a modern asynchronous C++ API design. This project aims to help C++ developers connect to and interact with services. --- cppunit[¶](#cppunit) === Homepage: * <https://wiki.freedesktop.org/www/Software/cppunit/Spack package: * [cppunit/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/cppunit/package.py) Versions: 1.13.2 Description: Obsolete Unit testing framework for C++ --- cppzmq[¶](#cppzmq) === Homepage: * <http://www.zeromq.orgSpack package: * [cppzmq/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/cppzmq/package.py) Versions: develop, 4.3.0, 4.2.2 Build Dependencies: [zeromq](#zeromq), [cmake](#cmake) Link Dependencies: [zeromq](#zeromq) Description: C++ binding for 0MQ --- cpu-features[¶](#cpu-features) === Homepage: * <https://github.com/google/cpu_featuresSpack package: * [cpu-features/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/cpu-features/package.py) Versions: develop Build Dependencies: [cmake](#cmake) Description: A cross platform C99 library to get cpu features at runtime. 
--- cpuinfo[¶](#cpuinfo) === Homepage: * <https://github.com/Maratyszcza/cpuinfo/Spack package: * [cpuinfo/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/cpuinfo/package.py) Versions: master Build Dependencies: [cmake](#cmake) Description: cpuinfo is a library to detect information about the host CPU that is essential for performance optimization. --- cram[¶](#cram) === Homepage: * <https://github.com/llnl/cramSpack package: * [cram/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/cram/package.py) Versions: 1.0.1 Build Dependencies: [cmake](#cmake), mpi, [python](#python) Link Dependencies: mpi, [python](#python) Description: Cram runs many small MPI jobs inside one large MPI job. --- cryptopp[¶](#cryptopp) === Homepage: * <http://www.cryptopp.comSpack package: * [cryptopp/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/cryptopp/package.py) Versions: 7.0.0, 6.1.0, 6.0.0, 5.6.5, 5.6.4, 5.6.3, 5.6.2, 5.6.1 Build Dependencies: [gmake](#gmake) Description: Crypto++ is an open-source C++ library of cryptographic schemes. The library supports a number of different cryptography algorithms, including authenticated encryption schemes (GCM, CCM), hash functions (SHA-1, SHA2), public-key encryption (RSA, DSA), and a few obsolete/historical encryption algorithms (MD5, Panama). --- cscope[¶](#cscope) === Homepage: * <http://cscope.sourceforge.net/Spack package: * [cscope/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/cscope/package.py) Versions: 15.8b Build Dependencies: pkgconfig, [flex](#flex), [ncurses](#ncurses), [bison](#bison) Link Dependencies: [ncurses](#ncurses) Description: Cscope is a developer's tool for browsing source code. 
--- csdp[¶](#csdp) === Homepage: * <https://projects.coin-or.org/CsdpSpack package: * [csdp/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/csdp/package.py) Versions: 6.1.1 Build Dependencies: [atlas](#atlas) Link Dependencies: [atlas](#atlas) Description: CSDP is a library of routines that implements a predictor corrector variant of the semidefinite programming algorithm of Helmberg, Rendl, Vanderbei, and Wolkowicz --- ctffind[¶](#ctffind) === Homepage: * <http://grigoriefflab.janelia.org/ctffind4Spack package: * [ctffind/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/ctffind/package.py) Versions: 4.1.8 Build Dependencies: [wx](#wx), [fftw](#fftw) Link Dependencies: [wx](#wx), [fftw](#fftw) Description: Fast and accurate defocus estimation from electron micrographs. --- cub[¶](#cub) === Homepage: * <https://nvlabs.github.com/cubSpack package: * [cub/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/cub/package.py) Versions: 1.7.1, 1.6.4, 1.4.1 Description: CUB is a C++ header library of cooperative threadblock primitives and other utilities for CUDA kernel programming. --- cube[¶](#cube) === Homepage: * <http://www.scalasca.org/software/cube-4.x/download.htmlSpack package: * [cube/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/cube/package.py) Versions: 4.4.2, 4.4, 4.3.5, 4.3.4, 4.3.3, 4.2.3 Build Dependencies: [dbus](#dbus), [zlib](#zlib), pkgconfig, [cubelib](#cubelib), [qt](#qt) Link Dependencies: [dbus](#dbus), [zlib](#zlib), [cubelib](#cubelib), [qt](#qt) Description: Cube the profile viewer for Score-P and Scalasca profiles. 
It displays a multi-dimensional performance space consisting of three dimensions: performance metric, call path, and system resource. --- cubelib[¶](#cubelib) === Homepage: * <http://www.scalasca.org/software/cube-4.x/download.htmlSpack package: * [cubelib/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/cubelib/package.py) Versions: 4.4.2, 4.4 Build Dependencies: [zlib](#zlib) Link Dependencies: [zlib](#zlib) Description: Component of CubeBundle: General purpose C++ library and tools --- cubew[¶](#cubew) === Homepage: * <http://www.scalasca.org/software/cube-4.x/download.htmlSpack package: * [cubew/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/cubew/package.py) Versions: 4.4.1, 4.4 Build Dependencies: [zlib](#zlib) Link Dependencies: [zlib](#zlib) Description: Component of CubeBundle: High performance C Writer library --- cuda[¶](#cuda) === Homepage: * <https://developer.nvidia.com/cuda-zoneSpack package: * [cuda/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/cuda/package.py) Versions: 10.0.130, 9.2.88, 9.1.85, 9.0.176, 8.0.61, 8.0.44, 7.5.18, 6.5.14 Description: CUDA is a parallel computing platform and programming model invented by NVIDIA. It enables dramatic increases in computing performance by harnessing the power of the graphics processing unit (GPU). Note: This package does not currently install the drivers necessary to run CUDA. These will need to be installed manually. See: https://docs.nvidia.com/cuda/ for details. --- cuda-memtest[¶](#cuda-memtest) === Homepage: * <https://github.com/ComputationalRadiationPhysics/cuda_memtestSpack package: * [cuda-memtest/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/cuda-memtest/package.py) Versions: master Build Dependencies: [cmake](#cmake), [cuda](#cuda) Link Dependencies: [cuda](#cuda) Description: Maintained and updated fork of cuda_memtest. 
original homepage: http://sourceforge.net/projects/cudagpumemtest . This software tests GPU memory for hardware errors and soft errors using CUDA or OpenCL. --- cudnn[¶](#cudnn) === Homepage: * <https://developer.nvidia.com/cudnnSpack package: * [cudnn/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/cudnn/package.py) Versions: 7.3, 6.0, 5.1 Build Dependencies: [cuda](#cuda) Link Dependencies: [cuda](#cuda) Description: NVIDIA cuDNN is a GPU-accelerated library of primitives for deep neural networks --- cufflinks[¶](#cufflinks) === Homepage: * <http://cole-trapnell-lab.github.io/cufflinksSpack package: * [cufflinks/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/cufflinks/package.py) Versions: 2.2.1 Description: Cufflinks assembles transcripts, estimates their abundances, and tests for differential expression and regulation in RNA-Seq samples. --- cups[¶](#cups) === Homepage: * <https://www.cups.org/Spack package: * [cups/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/cups/package.py) Versions: 2.2.3 Build Dependencies: [gnutls](#gnutls) Link Dependencies: [gnutls](#gnutls) Description: CUPS is the standards-based, open source printing system developed by Apple Inc. for macOS and other UNIX-like operating systems. CUPS uses the Internet Printing Protocol (IPP) to support printing to local and network printers. This provides the core CUPS libraries, not a complete CUPS install. 
--- curl[¶](#curl) === Homepage: * <http://curl.haxx.seSpack package: * [curl/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/curl/package.py) Versions: 7.60.0, 7.59.0, 7.56.0, 7.54.0, 7.53.1, 7.52.1, 7.50.3, 7.50.2, 7.50.1, 7.49.1, 7.47.1, 7.46.0, 7.45.0, 7.44.0, 7.43.0, 7.42.1 Build Dependencies: [zlib](#zlib), [libssh](#libssh), [libssh2](#libssh2), [nghttp2](#nghttp2), [openssl](#openssl) Link Dependencies: [zlib](#zlib), [libssh](#libssh), [libssh2](#libssh2), [nghttp2](#nghttp2), [openssl](#openssl) Description: cURL is an open source command line tool and library for transferring data with URL syntax --- cvs[¶](#cvs) === Homepage: * <http://www.nongnu.org/cvs/Spack package: * [cvs/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/cvs/package.py) Versions: 1.12.13 Description: CVS a very traditional source control system --- czmq[¶](#czmq) === Homepage: * <http://czmq.zeromq.orgSpack package: * [czmq/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/czmq/package.py) Versions: 4.1.1, 4.0.2, 3.0.2 Build Dependencies: [zeromq](#zeromq), pkgconfig, [autoconf](#autoconf), [automake](#automake), [libtool](#libtool), [libuuid](#libuuid) Link Dependencies: [zeromq](#zeromq), [libuuid](#libuuid) Description: A C interface to the ZMQ library --- dakota[¶](#dakota) === Homepage: * <https://dakota.sandia.gov/Spack package: * [dakota/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/dakota/package.py) Versions: 6.3 Build Dependencies: mpi, [cmake](#cmake), [boost](#boost), blas, [python](#python), lapack Link Dependencies: [boost](#boost), mpi, blas, [python](#python), lapack Description: The Dakota toolkit provides a flexible, extensible interface between analysis codes and iterative systems analysis methods. 
Dakota contains algorithms for: optimization with gradient and nongradient-based methods; uncertainty quantification with sampling, reliability, stochastic expansion, and epistemic methods; parameter estimation with nonlinear least squares methods; and sensitivity/variance analysis with design of experiments and parameter study methods. These capabilities may be used on their own or as components within advanced strategies such as hybrid optimization, surrogate-based optimization, mixed integer nonlinear programming, or optimization under uncertainty. --- daligner[¶](#daligner) === Homepage: * <https://github.com/thegenemyers/DALIGNERSpack package: * [daligner/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/daligner/package.py) Versions: 1.0 Description: Daligner: The Dazzler "Overlap" Module. --- damageproto[¶](#damageproto) === Homepage: * <https://cgit.freedesktop.org/xorg/proto/damageprotoSpack package: * [damageproto/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/damageproto/package.py) Versions: 1.2.1 Build Dependencies: [util-macros](#util-macros), pkgconfig Description: X Damage Extension. This package contains header files and documentation for the X Damage extension. Library and server implementations are separate. --- damaris[¶](#damaris) === Homepage: * <https://project.inria.fr/damaris/Spack package: * [damaris/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/damaris/package.py) Versions: 1.3.1, master Build Dependencies: mpi, [catalyst](#catalyst), [xsd](#xsd), [hdf5](#hdf5), [cmake](#cmake), [boost](#boost), [xerces-c](#xerces-c), [visit](#visit) Link Dependencies: mpi, [catalyst](#catalyst), [xsd](#xsd), [hdf5](#hdf5), [boost](#boost), [xerces-c](#xerces-c), [visit](#visit) Description: Damaris is a middleware for I/O and in situ analytics targeting large-scale, MPI-based HPC simulations. 
--- damselfly[¶](#damselfly) === Homepage: * <https://github.com/llnl/damselflySpack package: * [damselfly/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/damselfly/package.py) Versions: 1.0 Build Dependencies: [cmake](#cmake) Description: Damselfly is a model-based parallel network simulator. --- darshan-runtime[¶](#darshan-runtime) === Homepage: * <http://www.mcs.anl.gov/research/projects/darshan/Spack package: * [darshan-runtime/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/darshan-runtime/package.py) Versions: 3.1.6, 3.1.0, 3.0.0 Build Dependencies: [zlib](#zlib), mpi Link Dependencies: [zlib](#zlib), mpi Description: Darshan (runtime) is a scalable HPC I/O characterization tool designed to capture an accurate picture of application I/O behavior, including properties such as patterns of access within files, with minimum overhead. DarshanRuntime package should be installed on systems where you intend to instrument MPI applications. --- darshan-util[¶](#darshan-util) === Homepage: * <http://www.mcs.anl.gov/research/projects/darshan/Spack package: * [darshan-util/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/darshan-util/package.py) Versions: 3.1.6, 3.1.0, 3.0.0 Build Dependencies: [zlib](#zlib), [bzip2](#bzip2) Link Dependencies: [zlib](#zlib), [bzip2](#bzip2) Run Dependencies: [bzip2](#bzip2) Description: Darshan (util) is collection of tools for parsing and summarizing log files produced by Darshan (runtime) instrumentation. This package is typically installed on systems (front-end) where you intend to analyze log files produced by Darshan (runtime). 
--- dash[¶](#dash) === Homepage: * <https://git.kernel.org/pub/scm/utils/dash/dash.gitSpack package: * [dash/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/dash/package.py) Versions: 0.5.9.1 Build Dependencies: [autoconf](#autoconf), [automake](#automake), [libtool](#libtool), [m4](#m4) Link Dependencies: [libedit](#libedit) Description: The Debian Almquist Shell. --- datamash[¶](#datamash) === Homepage: * <https://www.gnu.org/software/datamash/Spack package: * [datamash/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/datamash/package.py) Versions: 1.3, 1.1.0, 1.0.7, 1.0.6, 1.0.5 Description: GNU datamash is a command-line program which performs basic numeric, textual and statistical operations on input textual data files. --- dataspaces[¶](#dataspaces) === Homepage: * <http://www.dataspaces.orgSpack package: * [dataspaces/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/dataspaces/package.py) Versions: develop, 1.6.2 Build Dependencies: [autoconf](#autoconf), [automake](#automake), [libtool](#libtool), [m4](#m4), mpi Link Dependencies: mpi Description: an extreme scale data management framework. --- davix[¶](#davix) === Homepage: * <https://dmc.web.cern.ch/projects/davixSpack package: * [davix/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/davix/package.py) Versions: 0.6.8 Build Dependencies: [cmake](#cmake), pkgconfig, [libuuid](#libuuid), [libxml2](#libxml2), [openssl](#openssl) Link Dependencies: [libuuid](#libuuid), [libxml2](#libxml2), [openssl](#openssl) Description: High-performance file management over WebDAV/HTTP. 
---
dbcsr[¶](#dbcsr)
===
Homepage: <https://github.com/cp2k/dbcsr>
Spack package: [dbcsr/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/dbcsr/package.py)
Versions: develop
Build Dependencies: [cmake](#cmake), mpi, blas, [py-fypp](#py-fypp), lapack
Link Dependencies: mpi, blas, [py-fypp](#py-fypp), lapack
Description: Distributed Block Compressed Sparse Row matrix library.
---
dbus[¶](#dbus)
===
Homepage: <http://dbus.freedesktop.org/>
Spack package: [dbus/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/dbus/package.py)
Versions: 1.12.8, 1.11.2, 1.9.0, 1.8.8, 1.8.6, 1.8.4, 1.8.2
Build Dependencies: pkgconfig, [expat](#expat)
Link Dependencies: [expat](#expat)
Description: D-Bus is a message bus system, a simple way for applications to talk to one another. D-Bus supplies both a system daemon (for events such as "new hardware device added" or "printer queue changed") and a per-user-login-session daemon (for general IPC needs among user applications). Also, the message bus is built on top of a general one-to-one message passing framework, which can be used by any two applications to communicate directly (without going through the message bus daemon).
---
dealii[¶](#dealii)
===
Homepage: <https://www.dealii.org>
Spack package: [dealii/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/dealii/package.py)
Versions: develop, 9.0.1, 9.0.0, 8.5.1, 8.5.0, 8.4.2, 8.4.1, 8.4.0, 8.3.0, 8.2.1, 8.1.0
Build Dependencies: [cuda](#cuda), [petsc](#petsc), [arpack-ng](#arpack-ng), [assimp](#assimp), scalapack, tbb, [zlib](#zlib), [bzip2](#bzip2), mpi, [netcdf-cxx](#netcdf-cxx), [netcdf](#netcdf), [boost](#boost), lapack, [gmsh](#gmsh), [sundials](#sundials), [trilinos](#trilinos), [nanoflann](#nanoflann), [adol-c](#adol-c), [hdf5](#hdf5), [cmake](#cmake), [doxygen](#doxygen), [suite-sparse](#suite-sparse), [p4est](#p4est), [slepc](#slepc), [gsl](#gsl), [metis](#metis), [oce](#oce), [graphviz](#graphviz), blas, [muparser](#muparser), [python](#python)
Link Dependencies: [cuda](#cuda), [petsc](#petsc), [arpack-ng](#arpack-ng), [assimp](#assimp), scalapack, tbb, [zlib](#zlib), [bzip2](#bzip2), mpi, [netcdf-cxx](#netcdf-cxx), [netcdf](#netcdf), [boost](#boost), lapack, [sundials](#sundials), [trilinos](#trilinos), [nanoflann](#nanoflann), [adol-c](#adol-c), [hdf5](#hdf5), [cmake](#cmake), [doxygen](#doxygen), [suite-sparse](#suite-sparse), [p4est](#p4est), [slepc](#slepc), [gsl](#gsl), [metis](#metis), [oce](#oce), [graphviz](#graphviz), blas, [muparser](#muparser), [python](#python)
Run Dependencies: [gmsh](#gmsh)
Description: C++ software library providing well-documented tools to build finite element codes for a broad variety of PDEs.
---
dealii-parameter-gui[¶](#dealii-parameter-gui)
===
Homepage: <https://github.com/dealii/parameter_gui>
Spack package: [dealii-parameter-gui/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/dealii-parameter-gui/package.py)
Versions: develop
Build Dependencies: [cmake](#cmake), [qt](#qt)
Link Dependencies: [qt](#qt)
Description: A Qt-based graphical user interface for editing deal.II .prm parameter files.
---
deconseq-standalone[¶](#deconseq-standalone)
===
Homepage: <http://deconseq.sourceforge.net>
Spack package: [deconseq-standalone/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/deconseq-standalone/package.py)
Versions: 0.4.3
Build Dependencies: [perl](#perl)
Link Dependencies: [perl](#perl)
Description: The DeconSeq tool can be used to automatically detect and efficiently remove sequence contaminations from genomic and metagenomic datasets.
---
dejagnu[¶](#dejagnu)
===
Homepage: <https://www.gnu.org/software/dejagnu/>
Spack package: [dejagnu/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/dejagnu/package.py)
Versions: 1.6, 1.4.4
Build Dependencies: [tcl](#tcl), [expect](#expect)
Link Dependencies: [tcl](#tcl), [expect](#expect)
Description: DejaGnu is a framework for testing other programs. Its purpose is to provide a single front end for all tests.
---
delly2[¶](#delly2)
===
Homepage: <https://github.com/dellytools/delly>
Spack package: [delly2/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/delly2/package.py)
Versions: 2017-08-03
Build Dependencies: [boost](#boost), [htslib](#htslib), [bcftools](#bcftools)
Link Dependencies: [boost](#boost), [htslib](#htslib), [bcftools](#bcftools)
Description: Delly2 is an integrated structural variant prediction method that can discover, genotype and visualize deletions, tandem duplications, inversions and translocations at single-nucleotide resolution in short-read massively parallel sequencing data.
---
denovogear[¶](#denovogear)
===
Homepage: <https://github.com/denovogear/denovogear>
Spack package: [denovogear/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/denovogear/package.py)
Versions: 1.1.1, 1.1.0
Build Dependencies: [cmake](#cmake), [boost](#boost), [eigen](#eigen), [htslib](#htslib)
Description: DeNovoGear is a software package to detect de novo mutations using next-generation sequencing data. It supports the analysis of many differential experimental designs and uses advanced statistical models to reduce the false positive rate.
---
dftfe[¶](#dftfe)
===
Homepage: <https://sites.google.com/umich.edu/dftfe/>
Spack package: [dftfe/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/dftfe/package.py)
Versions: 0.6.0, 0.5.2, 0.5.1, 0.5.0
Build Dependencies: [libxc](#libxc), [dealii](#dealii), mpi, [libxml2](#libxml2), [cmake](#cmake), [spglib](#spglib), scalapack, [alglib](#alglib)
Link Dependencies: [libxc](#libxc), [dealii](#dealii), mpi, [libxml2](#libxml2), [spglib](#spglib), scalapack, [alglib](#alglib)
Description: Real-space DFT calculations using finite elements.
---
dia[¶](#dia)
===
Homepage: <https://wiki.gnome.org/Apps/Dia>
Spack package: [dia/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/dia/package.py)
Versions: 0.97.3
Build Dependencies: [libxrender](#libxrender), pkgconfig, [gettext](#gettext), [libsm](#libsm), [gtkplus](#gtkplus), [libxml2](#libxml2), [intltool](#intltool), [swig](#swig), [libxslt](#libxslt), [libxinerama](#libxinerama), [libuuid](#libuuid), [python](#python), [freetype](#freetype)
Link Dependencies: [libxrender](#libxrender), [libsm](#libsm), [swig](#swig), [gtkplus](#gtkplus), [libxml2](#libxml2), [libxslt](#libxslt), [libxinerama](#libxinerama), [libuuid](#libuuid), [python](#python), [freetype](#freetype)
Description: Dia is a program for drawing structured diagrams.
---
dialign-tx[¶](#dialign-tx)
===
Homepage: <http://dialign-tx.gobics.de/>
Spack package: [dialign-tx/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/dialign-tx/package.py)
Versions: 1.0.2
Description: DIALIGN-TX: greedy and progressive approaches for segment-based multiple sequence alignment.
---
diamond[¶](#diamond)
===
Homepage: <https://ab.inf.uni-tuebingen.de/software/diamond>
Spack package: [diamond/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/diamond/package.py)
Versions: 0.9.21, 0.9.20, 0.9.19, 0.9.14, 0.8.38, 0.8.26
Build Dependencies: [zlib](#zlib), [cmake](#cmake)
Link Dependencies: [zlib](#zlib)
Description: DIAMOND is a sequence aligner for protein and translated DNA searches, designed for high performance analysis of big sequence data.
---
diffsplice[¶](#diffsplice)
===
Homepage: <http://www.netlab.uky.edu/p/bioinfo/DiffSplice>
Spack package: [diffsplice/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/diffsplice/package.py)
Versions: 0.1.2beta, 0.1.1
Description: A novel tool for discovering and quantitating alternative splicing variants present in an RNA-seq dataset, without relying on an annotated transcriptome or pre-determined splice patterns.
---
diffutils[¶](#diffutils)
===
Homepage: <https://www.gnu.org/software/diffutils/>
Spack package: [diffutils/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/diffutils/package.py)
Versions: 3.6
Description: GNU Diffutils is a package of several programs related to finding differences between files.
---
direnv[¶](#direnv)
===
Homepage: <https://direnv.net/>
Spack package: [direnv/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/direnv/package.py)
Versions: 2.11.3
Build Dependencies: [go](#go)
Description: direnv is an environment switcher for the shell.
---
discovar[¶](#discovar)
===
Homepage: <https://software.broadinstitute.org/software/discovar/blog/>
Spack package: [discovar/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/discovar/package.py)
Versions: 52488
Description: DISCOVAR is a variant caller and small genome assembler.
---
discovardenovo[¶](#discovardenovo)
===
Homepage: <https://software.broadinstitute.org/software/discovar/blog/>
Spack package: [discovardenovo/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/discovardenovo/package.py)
Versions: 52488
Build Dependencies: [samtools](#samtools), [jemalloc](#jemalloc)
Link Dependencies: [samtools](#samtools), [jemalloc](#jemalloc)
Description: DISCOVAR de novo is a large (and small) de novo genome assembler. It quickly generates highly accurate and complete assemblies using the same single library data as used by DISCOVAR. It currently doesn't support variant calling; for that, please use DISCOVAR instead.
---
dislin[¶](#dislin)
===
Homepage: <http://www.mps.mpg.de/dislin>
Spack package: [dislin/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/dislin/package.py)
Versions: 11.1.linux.i586_64, 11.0.linux.i586_64
Build Dependencies: [motif](#motif), [mesa](#mesa)
Link Dependencies: [motif](#motif), [mesa](#mesa)
Description: DISLIN is a high-level and easy to use graphics library for displaying data as curves, bar graphs, pie charts, 3D-colour plots, surfaces, contours and maps.
---
diy[¶](#diy)
===
Homepage: <https://github.com/diatomic/diy>
Spack package: [diy/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/diy/package.py)
Versions: 3.5.0, master
Build Dependencies: [cmake](#cmake)
Description: Data-parallel out-of-core library.
---
dlpack[¶](#dlpack)
===
Homepage: <https://github.com/sjtuhpcc/dlpack>
Spack package: [dlpack/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/dlpack/package.py)
Versions: master
Description: DLPack is an RFC for common tensor and operator guidelines in deep learning systems.
---
dmd[¶](#dmd)
===
Homepage: <https://github.com/dlang/dmd>
Spack package: [dmd/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/dmd/package.py)
Versions: 2.081.1
Build Dependencies: [curl](#curl), [openssl](#openssl)
Link Dependencies: [curl](#curl), [openssl](#openssl)
Description: DMD is the reference compiler for the D programming language.
---
dmlc-core[¶](#dmlc-core)
===
Homepage: <https://github.com/dmlc/dmlc-core>
Spack package: [dmlc-core/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/dmlc-core/package.py)
Versions: 20170508, master
Build Dependencies: [cmake](#cmake)
Description: DMLC-Core is the backbone library to support all DMLC projects, offering the bricks to build efficient and scalable distributed machine learning libraries.
---
dmtcp[¶](#dmtcp)
===
Homepage: <http://dmtcp.sourceforge.net/>
Spack package: [dmtcp/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/dmtcp/package.py)
Versions: 2.5.2
Description: DMTCP (Distributed MultiThreaded Checkpointing) transparently checkpoints a single-host or distributed computation in user-space -- with no modifications to user code or to the O/S.
---
dmxproto[¶](#dmxproto)
===
Homepage: <http://dmx.sourceforge.net/>
Spack package: [dmxproto/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/dmxproto/package.py)
Versions: 2.3.1
Build Dependencies: [util-macros](#util-macros), pkgconfig
Description: Distributed Multihead X (DMX) Extension. This extension defines a protocol for clients to access a front-end proxy X server that controls multiple back-end X servers making up a large display.
---
docbook-xml[¶](#docbook-xml)
===
Homepage: <http://www.oasis-open.org/docbook>
Spack package: [docbook-xml/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/docbook-xml/package.py)
Versions: 4.5
Description: Docbook DTD XML files.
---
docbook-xsl[¶](#docbook-xsl)
===
Homepage: <http://docbook.sourceforge.net/>
Spack package: [docbook-xsl/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/docbook-xsl/package.py)
Versions: 1.79.1
Build Dependencies: [docbook-xml](#docbook-xml)
Link Dependencies: [docbook-xml](#docbook-xml)
Description: Docbook XSL vocabulary.
---
dos2unix[¶](#dos2unix)
===
Homepage: <https://waterlan.home.xs4all.nl/dos2unix.html>
Spack package: [dos2unix/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/dos2unix/package.py)
Versions: 7.3.4
Description: DOS/Mac to Unix and vice versa text file format converter.
---
dotnet-core-sdk[¶](#dotnet-core-sdk)
===
Homepage: <https://www.microsoft.com/net/>
Spack package: [dotnet-core-sdk/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/dotnet-core-sdk/package.py)
Versions: 2.1.300
Description: The .NET Core SDK is a powerful development environment to write applications for all types of infrastructure.
---
double-conversion[¶](#double-conversion)
===
Homepage: <https://github.com/google/double-conversion>
Spack package: [double-conversion/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/double-conversion/package.py)
Versions: 2.0.1, 2.0.0, 1.1.5, 1.1.4, 1.1.3
Build Dependencies: [cmake](#cmake)
Description: This project (double-conversion) provides binary-decimal and decimal-binary routines for IEEE doubles. The library consists of efficient conversion routines that have been extracted from the V8 JavaScript engine. The code has been refactored and improved so that it can be used more easily in other projects. There is extensive documentation in src/double-conversion.h. Other examples can be found in test/cctest/test-conversions.cc.
---
doxygen[¶](#doxygen)
===
Homepage: <http://www.stack.nl/~dimitri/doxygen/>
Spack package: [doxygen/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/doxygen/package.py)
Versions: 1.8.14, 1.8.12, 1.8.11, 1.8.10
Build Dependencies: [cmake](#cmake), [flex](#flex), [bison](#bison)
Run Dependencies: [graphviz](#graphviz)
Description: Doxygen is the de facto standard tool for generating documentation from annotated C++ sources, but it also supports other popular programming languages such as C, Objective-C, C#, PHP, Java, Python, IDL (Corba, Microsoft, and UNO/OpenOffice flavors), Fortran, VHDL, Tcl, and to some extent D.
---
dri2proto[¶](#dri2proto)
===
Homepage: <https://cgit.freedesktop.org/xorg/proto/dri2proto/>
Spack package: [dri2proto/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/dri2proto/package.py)
Versions: 2.8
Build Dependencies: [util-macros](#util-macros), pkgconfig
Description: Direct Rendering Infrastructure 2 Extension. This extension defines a protocol to securely allow user applications to access the video hardware without requiring data to be passed through the X server.
---
dri3proto[¶](#dri3proto)
===
Homepage: <https://cgit.freedesktop.org/xorg/proto/dri3proto/>
Spack package: [dri3proto/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/dri3proto/package.py)
Versions: 1.0
Build Dependencies: [util-macros](#util-macros), pkgconfig
Description: Direct Rendering Infrastructure 3 Extension. This extension defines a protocol to securely allow user applications to access the video hardware without requiring data to be passed through the X server.
---
dsdp[¶](#dsdp)
===
Homepage: <http://www.mcs.anl.gov/hs/software/DSDP/>
Spack package: [dsdp/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/dsdp/package.py)
Versions: 5.8
Build Dependencies: blas, lapack
Link Dependencies: blas, lapack
Description: The DSDP software is a free open source implementation of an interior-point method for semidefinite programming. It provides primal and dual solutions, exploits low-rank structure and sparsity in the data, and has relatively low memory requirements for an interior-point method. It allows feasible and infeasible starting points and provides approximate certificates of infeasibility when no feasible solution exists.
---
dsrc[¶](#dsrc)
===
Homepage: <http://sun.aei.polsl.pl/dsrc>
Spack package: [dsrc/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/dsrc/package.py)
Versions: 2.0.2
Description: DNA Sequence Reads Compression is an application designed for compression of data files containing reads from DNA sequencing in FASTQ format.
---
dtcmp[¶](#dtcmp)
===
Homepage: <https://github.com/hpc/dtcmp>
Spack package: [dtcmp/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/dtcmp/package.py)
Versions: 1.1.0, 1.0.3
Build Dependencies: mpi, [lwgrp](#lwgrp)
Link Dependencies: mpi, [lwgrp](#lwgrp)
Description: The Datatype Comparison Library provides comparison operations and parallel sort algorithms for MPI applications.
---
dyninst[¶](#dyninst)
===
Homepage: <https://paradyn.org>
Spack package: [dyninst/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/dyninst/package.py)
Versions: develop, 10.0.0, 9.3.2, 9.3.0, 9.2.0, 9.1.0, 8.2.1, 8.1.2, 8.1.1
Build Dependencies: [cmake](#cmake), [boost](#boost), [libiberty](#libiberty), [libdwarf](#libdwarf), tbb
Link Dependencies: [libdwarf](#libdwarf), [boost](#boost), [libiberty](#libiberty), elf, tbb
Description: API for dynamic binary instrumentation. Modify programs while they are executing without recompiling, re-linking, or re-executing.
---
ea-utils[¶](#ea-utils)
===
Homepage: <http://expressionanalysis.github.io/ea-utils/>
Spack package: [ea-utils/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/ea-utils/package.py)
Versions: 1.04.807
Build Dependencies: [subversion](#subversion), [zlib](#zlib), [bamtools](#bamtools), [perl](#perl), [gsl](#gsl)
Link Dependencies: [subversion](#subversion), [zlib](#zlib), [bamtools](#bamtools), [gsl](#gsl)
Description: Command-line tools for processing biological sequencing data. Barcode demultiplexing, adapter trimming, etc. Primarily written to support an Illumina based pipeline - but should work with any FASTQs.
---
easybuild[¶](#easybuild)
===
Homepage: <http://hpcugent.github.io/easybuild/>
Spack package: [easybuild/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/easybuild/package.py)
Versions: 3.1.2
Build Dependencies: [python](#python)
Link Dependencies: [python](#python)
Run Dependencies: [py-easybuild-framework](#py-easybuild-framework), [py-easybuild-easyconfigs](#py-easybuild-easyconfigs), [py-easybuild-easyblocks](#py-easybuild-easyblocks), [python](#python)
Description: EasyBuild is a software build and installation framework for (scientific) software on HPC systems.
---
ebms[¶](#ebms)
===
Homepage: <https://github.com/ANL-CESAR/EBMS>
Spack package: [ebms/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/ebms/package.py)
Versions: develop
Build Dependencies: mpi
Link Dependencies: mpi
Description: This is a miniapp for the Energy Banding Monte Carlo (EBMC) neutron transportation simulation code. It is adapted from a similar miniapp provided by <NAME>, whose algorithm is described in [1], where only one process in a compute node is used, and the compute nodes are divided into memory nodes and tracking nodes. Memory nodes do not participate in particle tracking. Obviously, there is a lot of resource waste in this design.
---
eccodes[¶](#eccodes)
===
Homepage: <https://software.ecmwf.int/wiki/display/ECC/ecCodes+Home>
Spack package: [eccodes/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/eccodes/package.py)
Versions: 2.5.0, 2.2.0
Build Dependencies: [libpng](#libpng), [netcdf](#netcdf), [openjpeg](#openjpeg), [cmake](#cmake), [py-numpy](#py-numpy), [jasper](#jasper), [python](#python), [libaec](#libaec)
Link Dependencies: [libpng](#libpng), [netcdf](#netcdf), [openjpeg](#openjpeg), [jasper](#jasper), [python](#python), [libaec](#libaec)
Run Dependencies: [py-numpy](#py-numpy), [python](#python)
Description: ecCodes is a package developed by ECMWF for processing meteorological data in GRIB (1/2), BUFR (3/4) and GTS header formats.
---
eclipse-gcj-parser[¶](#eclipse-gcj-parser)
===
Homepage: <https://github.com/spack/spack/issues/8165>
Spack package: [eclipse-gcj-parser/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/eclipse-gcj-parser/package.py)
Versions: 4.8
Description: GCJ requires the Eclipse Java parser, but does not ship with it. This builds that parser into an executable binary, thereby making GCJ work.
---
ecp-proxy-apps[¶](#ecp-proxy-apps)
===
Homepage: <https://exascaleproject.github.io/proxy-apps>
Spack package: [ecp-proxy-apps/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/ecp-proxy-apps/package.py)
Versions: 2.0, 1.1, 1.0
Build Dependencies: [miniamr](#miniamr), [xsbench](#xsbench), [sw4lite](#sw4lite), [macsio](#macsio), [nekbone](#nekbone), [ember](#ember), [minivite](#minivite), [amg](#amg), [picsarlite](#picsarlite), [laghos](#laghos), [thornado-mini](#thornado-mini), [minife](#minife), [swfft](#swfft), [miniqmc](#miniqmc), [examinimd](#examinimd), [minitri](#minitri), [candle-benchmarks](#candle-benchmarks), [comd](#comd)
Link Dependencies: [miniamr](#miniamr), [xsbench](#xsbench), [sw4lite](#sw4lite), [macsio](#macsio), [nekbone](#nekbone), [ember](#ember), [minivite](#minivite), [amg](#amg), [picsarlite](#picsarlite), [laghos](#laghos), [thornado-mini](#thornado-mini), [minife](#minife), [swfft](#swfft), [miniqmc](#miniqmc), [examinimd](#examinimd), [minitri](#minitri), [candle-benchmarks](#candle-benchmarks), [comd](#comd)
Description: This is a collection of packages that represents the official suite of DOE/ECP proxy applications. This is a Spack bundle package that installs the ECP proxy application suite.
---
ed[¶](#ed)
===
Homepage: <https://www.gnu.org/software/ed>
Spack package: [ed/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/ed/package.py)
Versions: 1.4
Description: GNU ed is a line-oriented text editor. It is used to create, display, modify and otherwise manipulate text files, both interactively and via shell scripts.
---
editres[¶](#editres)
===
Homepage: <http://cgit.freedesktop.org/xorg/app/editres>
Spack package: [editres/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/editres/package.py)
Versions: 1.0.6
Build Dependencies: [libxt](#libxt), pkgconfig, [libx11](#libx11), [util-macros](#util-macros), [libxaw](#libxaw), [libxmu](#libxmu)
Link Dependencies: [libxt](#libxt), [libxaw](#libxaw), [libx11](#libx11), [libxmu](#libxmu)
Description: Dynamic resource editor for X Toolkit applications.
---
eigen[¶](#eigen)
===
Homepage: <http://eigen.tuxfamily.org/>
Spack package: [eigen/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/eigen/package.py)
Versions: 3.3.5, 3.3.4, 3.3.3, 3.3.1, 3.2.10, 3.2.9, 3.2.8, 3.2.7
Build Dependencies: [gmp](#gmp), [suite-sparse](#suite-sparse), [scotch](#scotch), [metis](#metis), [cmake](#cmake), [mpfr](#mpfr), [fftw](#fftw)
Link Dependencies: [gmp](#gmp), [suite-sparse](#suite-sparse), [scotch](#scotch), [metis](#metis), [mpfr](#mpfr), [fftw](#fftw)
Description: Eigen is a C++ template library for linear algebra: matrices, vectors, numerical solvers, and related algorithms.
---
elasticsearch[¶](#elasticsearch)
===
Homepage: <https://www.elastic.co/>
Spack package: [elasticsearch/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/elasticsearch/package.py)
Versions: 6.4.0, 6.2.4
Run Dependencies: [jdk](#jdk)
Description: Elasticsearch is a search engine based on Lucene. It provides a distributed, multitenant-capable full-text search engine with an HTTP web interface and schema-free JSON documents.
---
elemental[¶](#elemental)
===
Homepage: <http://libelemental.org>
Spack package: [elemental/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/elemental/package.py)
Versions: develop, 0.87.7, 0.87.6
Build Dependencies: [gmp](#gmp), [intel-mkl](#intel-mkl), [mpfr](#mpfr), [mpc](#mpc), [cmake](#cmake), scalapack, [netlib-lapack](#netlib-lapack), [veclibfort](#veclibfort), lapack, mpi, [essl](#essl), [metis](#metis), blas, [openblas](#openblas), [python](#python)
Link Dependencies: [gmp](#gmp), [intel-mkl](#intel-mkl), [mpfr](#mpfr), [mpc](#mpc), scalapack, [netlib-lapack](#netlib-lapack), [veclibfort](#veclibfort), lapack, mpi, [essl](#essl), [metis](#metis), blas, [openblas](#openblas), [python](#python)
Description: Elemental: Distributed-memory dense and sparse-direct linear algebra and optimization library.
---
elfutils[¶](#elfutils)
===
Homepage: <https://fedorahosted.org/elfutils/>
Spack package: [elfutils/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/elfutils/package.py)
Versions: 0.173, 0.170, 0.168, 0.163
Build Dependencies: [gettext](#gettext)
Link Dependencies: [zlib](#zlib), [bzip2](#bzip2), [gettext](#gettext), [xz](#xz)
Description: elfutils is a collection of various binary tools such as eu-objdump, eu-readelf, and other utilities that allow you to inspect and manipulate ELF files. Refer to Table 5. Tools Included in elfutils for Red Hat Developer for a complete list of binary tools that are distributed with the Red Hat Developer Toolset version of elfutils.
---
elk[¶](#elk)
===
Homepage: <http://elk.sourceforge.net/>
Spack package: [elk/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/elk/package.py)
Versions: 3.3.17
Build Dependencies: [libxc](#libxc), mpi, blas, [fftw](#fftw), lapack
Link Dependencies: [libxc](#libxc), mpi, blas, [fftw](#fftw), lapack
Description: An all-electron full-potential linearised augmented-plane wave (FP-LAPW) code with many advanced features.
---
elpa[¶](#elpa)
===
Homepage: <http://elpa.mpcdf.mpg.de/>
Spack package: [elpa/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/elpa/package.py)
Versions: 2018.05.001.rc1, 2017.11.001, 2017.05.003, 2017.05.002, 2016.11.001.pre, 2016.05.004, 2016.05.003, 2015.11.001
Build Dependencies: scalapack, mpi, blas, lapack
Link Dependencies: scalapack, mpi, blas, lapack
Description: Eigenvalue solvers for Petaflop-Applications (ELPA).
---
emacs[¶](#emacs)
===
Homepage: <https://www.gnu.org/software/emacs>
Spack package: [emacs/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/emacs/package.py)
Versions: 26.1, 25.3, 25.2, 25.1, 24.5
Build Dependencies: pkgconfig, [libpng](#libpng), [gtkplus](#gtkplus), [libxpm](#libxpm), [zlib](#zlib), [libtiff](#libtiff), [libx11](#libx11), [gnutls](#gnutls), [giflib](#giflib), [pcre](#pcre), [ncurses](#ncurses), [libxaw](#libxaw)
Link Dependencies: [zlib](#zlib), [giflib](#giflib), [libxaw](#libxaw), [libpng](#libpng), [gtkplus](#gtkplus), [libxpm](#libxpm), [pcre](#pcre), [libtiff](#libtiff), [libx11](#libx11), [ncurses](#ncurses), [gnutls](#gnutls)
Description: The Emacs programmable text editor.
---
ember[¶](#ember)
===
Homepage: <http://sst-simulator.org/SSTPages/SSTElementEmber/>
Spack package: [ember/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/ember/package.py)
Versions: 1.0.0
Build Dependencies: mpi
Link Dependencies: mpi
Description: Ember Communication Pattern Library. The Ember suite provides communication patterns in a simplified setting (simplified by the removal of application calculations, control flow, etc.).
---
emboss[¶](#emboss)
===
Homepage: <http://emboss.sourceforge.net/>
Spack package: [emboss/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/emboss/package.py)
Versions: 6.6.0
Build Dependencies: [libgd](#libgd), [postgresql](#postgresql), [libxpm](#libxpm)
Link Dependencies: [libgd](#libgd), [postgresql](#postgresql), [libxpm](#libxpm)
Description: EMBOSS is a free Open Source software analysis package specially developed for the needs of the molecular biology (e.g. EMBnet) user community.
---
encodings[¶](#encodings)
===
Homepage: <http://cgit.freedesktop.org/xorg/font/encodings>
Spack package: [encodings/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/encodings/package.py)
Versions: 1.0.4
Build Dependencies: [util-macros](#util-macros), pkgconfig, [mkfontscale](#mkfontscale), [font-util](#font-util)
Link Dependencies: [font-util](#font-util)
Description: X.org encodings font.
---
energyplus[¶](#energyplus)
===
Homepage: <https://energyplus.net>
Spack package: [energyplus/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/energyplus/package.py)
Versions: 8.9.0
Description: EnergyPlus is a whole building energy simulation program that engineers, architects, and researchers use to model both energy consumption (for heating, cooling, ventilation, lighting, and plug and process loads) and water use in buildings.
---
environment-modules[¶](#environment-modules)
===
Homepage: <https://sourceforge.net/p/modules/wiki/Home/>
Spack package: [environment-modules/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/environment-modules/package.py)
Versions: 3.2.10
Build Dependencies: [tcl](#tcl)
Link Dependencies: [tcl](#tcl)
Run Dependencies: [tcl](#tcl)
Description: The Environment Modules package provides for the dynamic modification of a user's environment via modulefiles.
---
eospac[¶](#eospac)
===
Homepage: <https://laws.lanl.gov/projects/data/eos.html>
Spack package: [eospac/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/eospac/package.py)
Versions: 6.4.0beta.2, 6.4.0beta.1, 6.3.1
Description: A collection of C routines that can be used to access the Sesame data library.
---
er[¶](#er)
===
Homepage: <https://github.com/ECP-VeloC/er>
Spack package: [er/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/er/package.py)
Versions: 0.0.3, master
Build Dependencies: [cmake](#cmake), [shuffile](#shuffile), mpi, [kvtree](#kvtree), [redset](#redset)
Link Dependencies: [redset](#redset), [shuffile](#shuffile), mpi, [kvtree](#kvtree)
Description: Encoding and redundancy on a file set.
---
es[¶](#es)
===
Homepage: <http://wryun.github.io/es-shell/>
Spack package: [es/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/es/package.py)
Versions: 0.9.1
Build Dependencies: [readline](#readline)
Link Dependencies: [readline](#readline)
Description: Es is an extensible shell. The language was derived from the Plan 9 shell, rc, and was influenced by functional programming languages, such as Scheme, and the Tcl embeddable programming language. This implementation is derived from Byron Rakitzis's public domain implementation of rc.
---
esmf[¶](#esmf)
===
Homepage: <https://www.earthsystemcog.org/projects/esmf/>
Spack package: [esmf/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/esmf/package.py)
Versions: 7.0.1
Build Dependencies: [zlib](#zlib), [netcdf-fortran](#netcdf-fortran), mpi, [netcdf](#netcdf), [libxml2](#libxml2), [parallel-netcdf](#parallel-netcdf), [xerces-c](#xerces-c), lapack
Link Dependencies: [zlib](#zlib), [netcdf-fortran](#netcdf-fortran), mpi, [netcdf](#netcdf), [libxml2](#libxml2), [parallel-netcdf](#parallel-netcdf), [xerces-c](#xerces-c), lapack
Test Dependencies: [perl](#perl)
Description: The Earth System Modeling Framework (ESMF) is a high-performance, flexible software infrastructure for building and coupling weather, climate, and related Earth science applications. The ESMF defines an architecture for composing complex, coupled modeling systems and includes data structures and utilities for developing individual models.
---
essl[¶](#essl)
===
Homepage: <https://www.ibm.com/systems/power/software/essl/>
Spack package: [essl/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/essl/package.py)
Description: IBM's Engineering and Scientific Subroutine Library (ESSL).
---
ethminer[¶](#ethminer)
===
Homepage: <https://github.com/ethereum-mining/ethminer>
Spack package: [ethminer/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/ethminer/package.py)
Versions: 0.12.0
Build Dependencies: [json-c](#json-c), [zlib](#zlib), [mesa](#mesa), [curl](#curl), [cmake](#cmake), [boost](#boost), [cuda](#cuda), [python](#python)
Link Dependencies: [json-c](#json-c), [zlib](#zlib), [mesa](#mesa), [cuda](#cuda), [curl](#curl), [boost](#boost), [python](#python)
Description: Ethminer is an Ethereum GPU mining worker.
---
etsf-io[¶](#etsf-io)
===
Homepage: <http://www.etsf.eu/resources/software/libraries_and_tools>
Spack package: [etsf-io/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/etsf-io/package.py)
Versions: 1.0.4
Build Dependencies: [netcdf-fortran](#netcdf-fortran), [hdf5](#hdf5)
Link Dependencies: [netcdf-fortran](#netcdf-fortran), [hdf5](#hdf5)
Description: ETSF_IO is a library implementing the Nanoquanta/ETSF file format specifications. ETSF_IO enables an architecture-independent exchange of crystallographic data, electronic wavefunctions, densities and potentials, as well as spectroscopic data. It is meant to be used by quantum-physical and quantum-chemical applications relying upon Density Functional Theory (DFT).
---
everytrace[¶](#everytrace)
===
Homepage: * <https://github.com/citibeth/everytrace>
Spack package: * [everytrace/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/everytrace/package.py)
Versions: develop, 0.2.2
Build Dependencies: [cmake](#cmake), mpi
Link Dependencies: mpi
Description: Get stack trace EVERY time a program exits.
---
everytrace-example[¶](#everytrace-example)
===
Homepage: * <https://github.com/citibeth/everytrace-example>
Spack package: * [everytrace-example/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/everytrace-example/package.py)
Versions: develop
Build Dependencies: [cmake](#cmake), [everytrace](#everytrace), [openmpi](#openmpi)
Link Dependencies: [openmpi](#openmpi), [everytrace](#everytrace)
Description: Get stack trace EVERY time a program exits.
---
evieext[¶](#evieext)
===
Homepage: * <http://cgit.freedesktop.org/xorg/proto/evieproto>
Spack package: * [evieext/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/evieext/package.py)
Versions: 1.1.1
Build Dependencies: [util-macros](#util-macros), pkgconfig
Description: Extended Visual Information Extension (XEVIE). This extension defines a protocol for a client to determine information about core X visuals beyond what the core protocol provides.
---
exabayes[¶](#exabayes)
===
Homepage: * <https://sco.h-its.org/exelixis/web/software/exabayes/>
Spack package: * [exabayes/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/exabayes/package.py)
Versions: 1.5
Build Dependencies: mpi
Link Dependencies: mpi
Description: ExaBayes is a software package for Bayesian tree inference. It is particularly suitable for large-scale analyses on computer clusters.
---
examinimd[¶](#examinimd)
===
Homepage: * <https://github.com/ECP-copa/ExaMiniMD>
Spack package: * [examinimd/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/examinimd/package.py)
Versions: develop, 1.0
Build Dependencies: [kokkos](#kokkos), mpi
Link Dependencies: [kokkos](#kokkos), mpi
Description: ExaMiniMD is a proxy application and research vehicle for particle codes, in particular Molecular Dynamics (MD). Compared to previous MD proxy apps (MiniMD, COMD), its design is significantly more modular in order to allow independent investigation of different aspects. To achieve that the main components such as force calculation, communication, neighbor list construction and binning are derived classes whose main functionality is accessed via virtual functions. This allows a developer to write a new derived class and drop it into the code without touching much of the rest of the application.
---
exampm[¶](#exampm)
===
Homepage: * <https://github.com/ECP-copa/ExaMPM>
Spack package: * [exampm/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/exampm/package.py)
Versions: develop
Build Dependencies: [cmake](#cmake)
Description: Exascale Material Point Method (MPM) Mini-App.
---
exasp2[¶](#exasp2)
===
Homepage: * <https://github.com/ECP-copa/ExaSP2>
Spack package: * [exasp2/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/exasp2/package.py)
Versions: develop, 1.0
Build Dependencies: mpi, blas, [bml](#bml), lapack
Link Dependencies: mpi, blas, [bml](#bml), lapack
Description: ExaSP2 is a reference implementation of typical linear algebra algorithms and workloads for a quantum molecular dynamics (QMD) electronic structure code. The algorithm is based on a recursive second-order Fermi-Operator expansion method (SP2) and is tailored for density functional based tight-binding calculations of material systems. The SP2 algorithm variants are part of the Los Alamos Transferable Tight-binding for Energetics (LATTE) code, based on a matrix expansion of the Fermi operator in a recursive series of generalized matrix-matrix multiplications. It is created and maintained by Co-Design Center for Particle Applications (CoPA). The code is intended to serve as a vehicle for co-design by allowing others to extend and/or reimplement as needed to test performance of new architectures, programming models, etc.
---
exmcutils[¶](#exmcutils)
===
Homepage: * <http://swift-lang.org/Swift-T>
Spack package: * [exmcutils/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/exmcutils/package.py)
Versions: 0.5.6
Description: ExM C-Utils: Generic C utility library for ADLB/X and Swift/T.
---
exodusii[¶](#exodusii)
===
Homepage: * <https://github.com/gsjaardema/seacas>
Spack package: * [exodusii/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/exodusii/package.py)
Versions: 2016-08-09
Build Dependencies: [cmake](#cmake), mpi, [netcdf](#netcdf), [hdf5](#hdf5)
Link Dependencies: mpi, [netcdf](#netcdf), [hdf5](#hdf5)
Description: Exodus II is a C++/Fortran library developed to store and retrieve data for finite element analyses. It's used for preprocessing (problem definition), postprocessing (results visualization), and data transfer between codes. An Exodus II data file is a random access, machine independent, binary file that is written and read via C, C++, or Fortran API routines.
---
exonerate[¶](#exonerate)
===
Homepage: * <http://www.ebi.ac.uk/about/vertebrate-genomics/software/exonerate>
Spack package: * [exonerate/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/exonerate/package.py)
Versions: 2.4.0
Build Dependencies: pkgconfig, [glib](#glib)
Link Dependencies: [glib](#glib)
Description: Pairwise sequence alignment of DNA and proteins.
---
expat[¶](#expat)
===
Homepage: * <https://libexpat.github.io/>
Spack package: * [expat/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/expat/package.py)
Versions: 2.2.5, 2.2.2, 2.2.0
Build Dependencies: [libbsd](#libbsd)
Link Dependencies: [libbsd](#libbsd)
Description: Expat is an XML parser library written in C.
---
expect[¶](#expect)
===
Homepage: * <http://expect.sourceforge.net/>
Spack package: * [expect/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/expect/package.py)
Versions: 5.45
Build Dependencies: [autoconf](#autoconf), [tcl](#tcl), [libtool](#libtool), [automake](#automake), [m4](#m4)
Link Dependencies: [tcl](#tcl)
Description: Expect is a tool for automating interactive applications such as telnet, ftp, passwd, fsck, rlogin, tip, etc.
---
express[¶](#express)
===
Homepage: * <http://bio.math.berkeley.edu/eXpress/>
Spack package: * [express/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/express/package.py)
Versions: 2015-11-29
Build Dependencies: [zlib](#zlib), [cmake](#cmake), [boost](#boost), [bamtools](#bamtools)
Link Dependencies: [zlib](#zlib), [boost](#boost), [bamtools](#bamtools)
Description: eXpress is a streaming tool for quantifying the abundances of a set of target sequences from sampled subsequences.
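Expat, listed above, is the parser that Python's standard-library `xml.parsers.expat` module binds to, which makes for a quick way to see its event-driven (SAX-style) API; a minimal sketch using only the stdlib:

```python
# Event-driven XML parsing via the stdlib binding to the expat library.
import xml.parsers.expat

tags = []

def start_element(name, attrs):
    # Called once per opening tag, with its attributes as a dict.
    tags.append(name)

parser = xml.parsers.expat.ParserCreate()
parser.StartElementHandler = start_element
parser.Parse(b"<root><child id='1'/><child id='2'/></root>", True)

print(tags)  # -> ['root', 'child', 'child']
```

Because expat streams events rather than building a tree, memory use stays flat regardless of document size, which is why many higher-level parsers are built on it.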
---
extrae[¶](#extrae)
===
Homepage: * <https://tools.bsc.es/extrae>
Spack package: * [extrae/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/extrae/package.py)
Versions: 3.4.1
Build Dependencies: [gettext](#gettext), mpi, [libxml2](#libxml2), [dyninst](#dyninst), [libunwind](#libunwind), [binutils](#binutils), [libdwarf](#libdwarf), [boost](#boost), [papi](#papi)
Link Dependencies: [gettext](#gettext), mpi, [libxml2](#libxml2), [dyninst](#dyninst), [libunwind](#libunwind), [binutils](#binutils), [libdwarf](#libdwarf), [boost](#boost), elf, [papi](#papi)
Description: Extrae is the package devoted to generating tracefiles which can be analyzed later by Paraver. Extrae is a tool that uses different interposition mechanisms to inject probes into the target application so as to gather information regarding the application performance. The Extrae instrumentation package can instrument the MPI programming model, and the following parallel programming models either alone or in conjunction with MPI: OpenMP, CUDA, OpenCL, pthread, OmpSs.
---
exuberant-ctags[¶](#exuberant-ctags)
===
Homepage: * <ctags.sourceforge.net>
Spack package: * [exuberant-ctags/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/exuberant-ctags/package.py)
Versions: 5.8
Description: The canonical ctags generator.
---
f90cache[¶](#f90cache)
===
Homepage: * <https://perso.univ-rennes1.fr/edouard.canot/f90cache/>
Spack package: * [f90cache/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/f90cache/package.py)
Versions: 0.99
Description: f90cache is a compiler cache. It acts as a caching pre-processor to Fortran compilers, using the -E compiler switch and a hash to detect when a compilation can be satisfied from cache. This often results in a great speedup in common compilations.
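The hash-based lookup that the f90cache description mentions can be sketched generically: hash the (preprocessed) input, and reuse a stored result when the same hash has been seen before. This is a toy Python illustration of the idea, not f90cache's actual code:

```python
import hashlib

cache = {}

def compile_cached(source: str, compile_fn):
    """Return a cached result when the same source text was seen before."""
    key = hashlib.sha256(source.encode()).hexdigest()
    if key not in cache:
        # The expensive step runs only once per unique input.
        cache[key] = compile_fn(source)
    return cache[key]

calls = []
def fake_compile(src):
    calls.append(src)          # record each real "compilation"
    return f"object({len(src)})"

a = compile_cached("program x\nend", fake_compile)
b = compile_cached("program x\nend", fake_compile)
print(a, len(calls))  # -> object(13) 1  (second call satisfied from cache)
```

Keying on a content hash rather than the file path is what lets such caches survive touched-but-unchanged files.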
---
fabtests[¶](#fabtests)
===
Homepage: * <http://libfabric.org>
Spack package: * [fabtests/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/fabtests/package.py)
Versions: 1.6.0, 1.5.3
Build Dependencies: [libfabric](#libfabric)
Link Dependencies: [libfabric](#libfabric)
Description: Fabtests provides a set of examples that use libfabric.
---
falcon[¶](#falcon)
===
Homepage: * <https://github.com/PacificBiosciences/FALCON>
Spack package: * [falcon/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/falcon/package.py)
Versions: 2017-05-30
Build Dependencies: [py-networkx](#py-networkx), [python](#python)
Link Dependencies: [python](#python)
Run Dependencies: [pacbio-dextractor](#pacbio-dextractor), [pacbio-daligner](#pacbio-daligner), [py-networkx](#py-networkx), [pacbio-dazz-db](#pacbio-dazz-db), [pacbio-damasker](#pacbio-damasker), [python](#python), [py-pypeflow](#py-pypeflow), [py-setuptools](#py-setuptools)
Description: Falcon: a set of tools for fast aligning long reads for consensus and assembly. The Falcon tool kit is a set of simple code collection which I use for studying efficient assembly algorithms for haploid and diploid genomes. It has some back-end code implemented in C for speed and some simple front-end written in Python for convenience.
---
fast-global-file-status[¶](#fast-global-file-status)
===
Homepage: * <https://github.com/LLNL/FastGlobalFileStatus>
Spack package: * [fast-global-file-status/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/fast-global-file-status/package.py)
Versions: 1.1
Build Dependencies: mpi, [mrnet](#mrnet), [mount-point-attributes](#mount-point-attributes)
Link Dependencies: mpi, [mrnet](#mrnet), [mount-point-attributes](#mount-point-attributes)
Description: Provides a scalable mechanism to retrieve status information of a file, including its degree of distribution or replication and consistency.
---
fasta[¶](#fasta)
===
Homepage: * <https://fasta.bioch.virginia.edu/fasta_www2/fasta_list2.shtml>
Spack package: * [fasta/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/fasta/package.py)
Versions: 36.3.8g
Build Dependencies: [zlib](#zlib)
Link Dependencies: [zlib](#zlib)
Description: The FASTA programs find regions of local or global similarity between Protein or DNA sequences, either by searching Protein or DNA databases, or by identifying local duplications within a sequence. Other programs provide information on the statistical significance of an alignment. Like BLAST, FASTA can be used to infer functional and evolutionary relationships between sequences as well as help identify members of gene families.
---
fastjar[¶](#fastjar)
===
Homepage: * <http://savannah.nongnu.org/projects/fastjar/>
Spack package: * [fastjar/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/fastjar/package.py)
Versions: 0.98
Build Dependencies: [zlib](#zlib)
Link Dependencies: [zlib](#zlib)
Description: Fastjar is a version of Sun's 'jar' utility, written entirely in C.
---
fastmath[¶](#fastmath)
===
Homepage: * <www.fastmath-scidac.org/>
Spack package: * [fastmath/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/fastmath/package.py)
Versions: 1.0
Build Dependencies: [boxlib](#boxlib), [pumi](#pumi), [petsc](#petsc), [arpack-ng](#arpack-ng), [moab](#moab), [chombo](#chombo), [zoltan](#zoltan), mpi, [superlu-dist](#superlu-dist), [phasta](#phasta), [sundials](#sundials), [mesquite](#mesquite), [hypre](#hypre), [trilinos](#trilinos)
Link Dependencies: [boxlib](#boxlib), [pumi](#pumi), [petsc](#petsc), [arpack-ng](#arpack-ng), [moab](#moab), [chombo](#chombo), [zoltan](#zoltan), mpi, [superlu-dist](#superlu-dist), [phasta](#phasta), [sundials](#sundials), [mesquite](#mesquite), [hypre](#hypre), [trilinos](#trilinos)
Description: FASTMath is a suite of ~15 numerical libraries frequently used together in various SciDAC and CSE applications. The suite includes discretization libraries for structured, AMR and unstructured grids as well as solver libraries for ODEs, Time Integrators, Iterative, Non-Linear, and Direct Solvers.
---
fastme[¶](#fastme)
===
Homepage: * <http://www.atgc-montpellier.fr/fastme/>
Spack package: * [fastme/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/fastme/package.py)
Versions: 2.1.5.1
Build Dependencies: [autoconf](#autoconf), [automake](#automake), [libtool](#libtool), [m4](#m4)
Description: FastME is a distance-based phylogeny reconstruction program that works on distance matrices and, as of v2.0, sequence data.
---
fastphase[¶](#fastphase)
===
Homepage: * <http://stephenslab.uchicago.edu/software.html>
Spack package: * [fastphase/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/fastphase/package.py)
Versions: 2016-03-30
Description: Software for haplotype reconstruction, and estimating missing genotypes from population data.
---
fastq-screen[¶](#fastq-screen)
===
Homepage: * <https://www.bioinformatics.babraham.ac.uk/projects/fastq_screen/>
Spack package: * [fastq-screen/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/fastq-screen/package.py)
Versions: 0.11.2
Build Dependencies: [samtools](#samtools), [bwa](#bwa), [bowtie](#bowtie), [bowtie2](#bowtie2)
Link Dependencies: [samtools](#samtools), [bwa](#bwa), [bowtie](#bowtie), [bowtie2](#bowtie2)
Run Dependencies: [perl-gd-graph](#perl-gd-graph), [perl](#perl)
Description: FastQ Screen allows you to screen a library of sequences in FastQ format against a set of sequence databases so you can see if the composition of the library matches what you expect.
---
fastqc[¶](#fastqc)
===
Homepage: * <http://www.bioinformatics.babraham.ac.uk/projects/fastqc/>
Spack package: * [fastqc/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/fastqc/package.py)
Versions: 0.11.7, 0.11.5, 0.11.4
Build Dependencies: [perl](#perl)
Link Dependencies: [perl](#perl)
Run Dependencies: java
Description: A quality control tool for high throughput sequence data.
---
fastqvalidator[¶](#fastqvalidator)
===
Homepage: * <http://genome.sph.umich.edu/wiki/FastQValidator>
Spack package: * [fastqvalidator/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/fastqvalidator/package.py)
Versions: 2017-01-10
Description: The fastQValidator validates the format of fastq files.
---
fasttree[¶](#fasttree)
===
Homepage: * <http://www.microbesonline.org/fasttree>
Spack package: * [fasttree/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/fasttree/package.py)
Versions: 2.1.10
Description: FastTree infers approximately-maximum-likelihood phylogenetic trees from alignments of nucleotide or protein sequences. FastTree can handle alignments with up to a million sequences in a reasonable amount of time and memory.
---
fastx-toolkit[¶](#fastx-toolkit)
===
Homepage: * <http://hannonlab.cshl.edu/fastx_toolkit/>
Spack package: * [fastx-toolkit/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/fastx-toolkit/package.py)
Versions: 0.0.14
Build Dependencies: [libgtextutils](#libgtextutils)
Link Dependencies: [libgtextutils](#libgtextutils)
Description: The FASTX-Toolkit is a collection of command line tools for Short-Reads FASTA/FASTQ files preprocessing.
---
fenics[¶](#fenics)
===
Homepage: * <http://fenicsproject.org/>
Spack package: * [fenics/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/fenics/package.py)
Versions: 2016.1.0, 1.6.0, 1.5.0
Build Dependencies: [petsc](#petsc), [trilinos](#trilinos), [py-sympy](#py-sympy), [eigen](#eigen), [hdf5](#hdf5), [py-ply](#py-ply), [cmake](#cmake), [py-numpy](#py-numpy), [suite-sparse](#suite-sparse), [qt](#qt), [py-setuptools](#py-setuptools), [py-six](#py-six), [parmetis](#parmetis), [vtk](#vtk), mpi, [swig](#swig), [scotch](#scotch), [boost](#boost), [python](#python), [slepc](#slepc), [py-sphinx](#py-sphinx)
Link Dependencies: [petsc](#petsc), [trilinos](#trilinos), [eigen](#eigen), [hdf5](#hdf5), [suite-sparse](#suite-sparse), [slepc](#slepc), [qt](#qt), [parmetis](#parmetis), [vtk](#vtk), mpi, [scotch](#scotch), [boost](#boost), [python](#python)
Run Dependencies: [py-ply](#py-ply), [py-numpy](#py-numpy), [swig](#swig), [py-sympy](#py-sympy), [py-six](#py-six)
Description: FEniCS is organized as a collection of interoperable components that together form the FEniCS Project. These components include the problem-solving environment DOLFIN, the form compiler FFC, the finite element tabulator FIAT, the just-in-time compiler Instant, the code generation interface UFC, the form language UFL and a range of additional components.
---
fermi[¶](#fermi)
===
Homepage: * <https://github.com/lh3/fermi>
Spack package: * [fermi/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/fermi/package.py)
Versions: 1.1
Build Dependencies: [zlib](#zlib)
Link Dependencies: [zlib](#zlib)
Run Dependencies: [perl](#perl)
Description: A WGS de novo assembler based on the FMD-index for large genomes.
---
fermikit[¶](#fermikit)
===
Homepage: * <https://github.com/lh3/fermikit>
Spack package: * [fermikit/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/fermikit/package.py)
Versions: 2017-11-7
Build Dependencies: [zlib](#zlib)
Link Dependencies: [zlib](#zlib)
Description: De novo assembly based variant calling pipeline for Illumina short reads.
---
fermisciencetools[¶](#fermisciencetools)
===
Homepage: * <https://fermi.gsfc.nasa.gov/ssc/data/analysis/software/>
Spack package: * [fermisciencetools/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/fermisciencetools/package.py)
Versions: 11r5p3
Description: The Fermi Science Tools consists of the basic tools necessary to analyze Fermi data. This is the binary version for Linux x86_64 with libc-2.17.
---
ferret[¶](#ferret)
===
Homepage: * <http://ferret.pmel.noaa.gov/Ferret/home>
Spack package: * [ferret/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/ferret/package.py)
Versions: 6.96
Build Dependencies: [netcdf-fortran](#netcdf-fortran), [zlib](#zlib), [readline](#readline), [netcdf](#netcdf), [hdf5](#hdf5)
Link Dependencies: [netcdf-fortran](#netcdf-fortran), [zlib](#zlib), [readline](#readline), [netcdf](#netcdf), [hdf5](#hdf5)
Description: Ferret is an interactive computer visualization and analysis environment designed to meet the needs of oceanographers and meteorologists analyzing large and complex gridded data sets.
---
ffmpeg[¶](#ffmpeg)
===
Homepage: * <https://ffmpeg.org>
Spack package: * [ffmpeg/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/ffmpeg/package.py)
Versions: 3.2.4
Build Dependencies: [yasm](#yasm)
Link Dependencies: [yasm](#yasm)
Description: FFmpeg is a complete, cross-platform solution to record, convert and stream audio and video.
---
fftw[¶](#fftw)
===
Homepage: * <http://www.fftw.org>
Spack package: * [fftw/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/fftw/package.py)
Versions: 3.3.8, 3.3.7, 3.3.6-pl2, 3.3.5, 3.3.4, 2.1.5
Build Dependencies: [autoconf](#autoconf), [automake](#automake), [libtool](#libtool), mpi
Link Dependencies: mpi
Description: FFTW is a C subroutine library for computing the discrete Fourier transform (DFT) in one or more dimensions, of arbitrary input size, and of both real and complex data (as well as of even/odd data, i.e. the discrete cosine/sine transforms or DCT/DST). We believe that FFTW, which is free software, should become the FFT library of choice for most applications.
---
figtree[¶](#figtree)
===
Homepage: * <https://github.com/rambaut/figtree>
Spack package: * [figtree/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/figtree/package.py)
Versions: 1.4.3
Run Dependencies: java
Description: FigTree is designed as a graphical viewer of phylogenetic trees and as a program for producing publication-ready figures. As with most of my programs, it was written for my own needs so may not be as polished and feature-complete as a commercial program. In particular it is designed to display summarized and annotated trees produced by BEAST.
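The discrete Fourier transform that FFTW (above) computes can be illustrated with NumPy; NumPy ships its own FFT implementation rather than FFTW, but the transform is the same, so the spectrum of a pure tone shows the expected single peak:

```python
import numpy as np

# A pure 4 Hz cosine sampled at 32 Hz for one second.
n, fs = 32, 32
t = np.arange(n) / fs
x = np.cos(2 * np.pi * 4 * t)

spectrum = np.fft.fft(x)             # complex DFT coefficients
freqs = np.fft.fftfreq(n, d=1 / fs)  # frequency of each bin

# The dominant positive-frequency bin lands exactly on 4 Hz.
peak = float(freqs[np.argmax(np.abs(spectrum[: n // 2]))])
print(peak)  # -> 4.0
```

For a real input, the coefficients are conjugate-symmetric, which is why scanning only the first half of the bins suffices here.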
---
fimpute[¶](#fimpute)
===
Homepage: * <http://www.aps.uoguelph.ca/~msargol/fimpute/>
Spack package: * [fimpute/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/fimpute/package.py)
Versions: 2014-01
Description: FImpute uses an overlapping sliding window approach to efficiently exploit relationships or haplotype similarities between target and reference individuals.
---
findutils[¶](#findutils)
===
Homepage: * <https://www.gnu.org/software/findutils/>
Spack package: * [findutils/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/findutils/package.py)
Versions: 4.6.0, 4.4.2, 4.4.1, 4.4.0, 4.2.33, 4.2.32, 4.2.31, 4.2.30, 4.2.29, 4.2.28, 4.2.27, 4.2.26, 4.2.25, 4.2.23, 4.2.20, 4.2.18, 4.2.15, 4.1.20, 4.1
Build Dependencies: [autoconf](#autoconf), [automake](#automake), [libtool](#libtool), [m4](#m4), [texinfo](#texinfo)
Description: The GNU Find Utilities are the basic directory searching utilities of the GNU operating system.
---
fio[¶](#fio)
===
Homepage: * <https://github.com/axboe/fio>
Spack package: * [fio/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/fio/package.py)
Versions: 2.19
Build Dependencies: [cairo](#cairo), [gtkplus](#gtkplus), [py-sphinx](#py-sphinx)
Link Dependencies: [cairo](#cairo), [gtkplus](#gtkplus)
Description: Flexible I/O Tester.
---
fish[¶](#fish)
===
Homepage: * <https://fishshell.com/>
Spack package: * [fish/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/fish/package.py)
Versions: 2.7.1, 2.7.0, 2.2.0
Build Dependencies: [ncurses](#ncurses)
Link Dependencies: [ncurses](#ncurses)
Description: fish is a smart and user-friendly command line shell for OS X, Linux, and the rest of the family.
---
fixesproto[¶](#fixesproto)
===
Homepage: * <http://cgit.freedesktop.org/xorg/proto/fixesproto>
Spack package: * [fixesproto/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/fixesproto/package.py)
Versions: 5.0
Build Dependencies: [util-macros](#util-macros), pkgconfig
Description: X Fixes Extension. The extension makes changes to many areas of the protocol to resolve issues raised by application interaction with core protocol mechanisms that cannot be adequately worked around on the client side of the wire.
---
flac[¶](#flac)
===
Homepage: * <https://xiph.org/flac/index.html>
Spack package: * [flac/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/flac/package.py)
Versions: 1.3.2, 1.3.1, 1.3.0
Build Dependencies: [id3lib](#id3lib), [libvorbis](#libvorbis)
Link Dependencies: [id3lib](#id3lib), [libvorbis](#libvorbis)
Description: Encoder/decoder for the Free Lossless Audio Codec.
---
flang[¶](#flang)
===
Homepage: * <https://github.com/flang-compiler/flang>
Spack package: * [flang/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/flang/package.py)
Versions: develop, 20180612
Build Dependencies: [cmake](#cmake), [llvm](#llvm), [pgmath](#pgmath)
Link Dependencies: [llvm](#llvm), [pgmath](#pgmath)
Description: Flang is a Fortran compiler targeting LLVM.
---
flann[¶](#flann)
===
Homepage: * <http://www.cs.ubc.ca/research/flann/>
Spack package: * [flann/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/flann/package.py)
Versions: 1.9.1, 1.8.5, 1.8.4, 1.8.1, 1.8.0
Build Dependencies: [cuda](#cuda), mpi, [hdf5](#hdf5), [cmake](#cmake), [py-numpy](#py-numpy), [matlab](#matlab), [boost](#boost), [python](#python), latex
Link Dependencies: [cuda](#cuda), mpi, [hdf5](#hdf5), [boost](#boost), [python](#python), latex
Run Dependencies: [py-numpy](#py-numpy), [matlab](#matlab)
Test Dependencies: gtest, [hdf5](#hdf5)
Description: FLANN is a library for performing fast approximate nearest neighbor searches in high dimensional spaces. It contains a collection of algorithms we found to work best for nearest neighbor search and a system for automatically choosing the best algorithm and optimum parameters depending on the dataset. FLANN is written in C++ and contains bindings for the following languages: C, MATLAB and Python.
---
flash[¶](#flash)
===
Homepage: * <https://ccb.jhu.edu/software/FLASH/>
Spack package: * [flash/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/flash/package.py)
Versions: 1.2.11
Build Dependencies: [zlib](#zlib)
Link Dependencies: [zlib](#zlib)
Description: FLASH (Fast Length Adjustment of SHort reads) is a very fast and accurate software tool to merge paired-end reads from next-generation sequencing experiments.
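For a sense of what FLANN (above) accelerates: exact nearest-neighbor search is a brute-force scan over every point, which scales poorly in high dimensions; FLANN's approximate indexes exist to avoid exactly this. A toy NumPy sketch of the exact baseline, not FLANN's API:

```python
import numpy as np

def nearest_neighbor(query, points):
    """Index of the point closest to `query` by Euclidean distance.

    This is the O(n * d) brute-force scan that approximate indexes
    (randomized kd-trees, hierarchical k-means, ...) try to beat.
    """
    dists = np.linalg.norm(points - query, axis=1)
    return int(np.argmin(dists))

points = np.array([[0.0, 0.0], [1.0, 1.0], [5.0, 5.0]])
idx = nearest_neighbor(np.array([0.9, 1.2]), points)
print(idx)  # -> 1, i.e. the point (1.0, 1.0)
```

The trade-off FLANN's description names is precisely this: accept an occasionally non-exact answer in exchange for sub-linear query time on large, high-dimensional datasets.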
---
flatbuffers[¶](#flatbuffers)
===
Homepage: * <http://google.github.io/flatbuffers/>
Spack package: * [flatbuffers/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/flatbuffers/package.py)
Versions: 1.9.0, 1.8.0
Build Dependencies: [cmake](#cmake)
Description: Memory Efficient Serialization Library.
---
flecsale[¶](#flecsale)
===
Homepage: * <https://github.com/laristra/flecsale>
Spack package: * [flecsale/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/flecsale/package.py)
Versions: develop
Build Dependencies: [cmake](#cmake), [flecsi](#flecsi), [python](#python), [openssl](#openssl)
Link Dependencies: [flecsi](#flecsi), [python](#python), [openssl](#openssl)
Description: Flecsale is an ALE code based on FleCSI.
---
flecsi[¶](#flecsi)
===
Homepage: * <http://flecsi.lanl.gov/>
Spack package: * [flecsi/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/flecsi/package.py)
Versions: develop
Build Dependencies: [parmetis](#parmetis), [cmake](#cmake), [legion](#legion)
Link Dependencies: [parmetis](#parmetis), [legion](#legion)
Description: FleCSI is a compile-time configurable framework designed to support multi-physics application development. As such, FleCSI attempts to provide a very general set of infrastructure design patterns that can be specialized and extended to suit the needs of a broad variety of solver and data requirements. Current support includes multi-dimensional mesh topology, mesh geometry, and mesh adjacency information, n-dimensional hashed-tree data structures, graph partitioning interfaces, and dependency closures.
---

flex[¶](#flex)
===

Homepage:
* <https://github.com/westes/flex>

Spack package:
* [flex/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/flex/package.py)

Versions: 2.6.4, 2.6.3, 2.6.1, 2.6.0, 2.5.39

Build Dependencies: [gettext](#gettext), [help2man](#help2man), [m4](#m4), [bison](#bison), [autoconf](#autoconf), [automake](#automake), [libtool](#libtool)

Description: Flex is a tool for generating scanners.

---

flint[¶](#flint)
===

Homepage:
* <http://www.flintlib.org>

Spack package:
* [flint/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/flint/package.py)

Versions: develop, 2.5.2, 2.4.5

Build Dependencies: [gmp](#gmp), [autoconf](#autoconf), [mpfr](#mpfr)

Link Dependencies: [gmp](#gmp), [mpfr](#mpfr)

Description: FLINT (Fast Library for Number Theory).

---

flit[¶](#flit)
===

Homepage:
* <https://pruners.github.io/flit>

Spack package:
* [flit/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/flit/package.py)

Versions: 2.0-alpha.1

Run Dependencies: [py-numpy](#py-numpy), [py-matplotlib](#py-matplotlib), [python](#python), [py-toml](#py-toml)

Description: Floating-point Litmus Tests (FLiT) is a C++ test infrastructure for detecting variability in floating-point code caused by variations in compiler code generation, hardware, and execution environments.

---

fltk[¶](#fltk)
===

Homepage:
* <http://www.fltk.org/>

Spack package:
* [fltk/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/fltk/package.py)

Versions: 1.3.3

Build Dependencies: [libx11](#libx11)

Link Dependencies: [libx11](#libx11)

Description: FLTK (pronounced "fulltick") is a cross-platform C++ GUI toolkit for UNIX/Linux (X11), Microsoft Windows, and MacOS X. FLTK provides modern GUI functionality without the bloat and supports 3D graphics via OpenGL and its built-in GLUT emulation. FLTK is designed to be small and modular enough to be statically linked, but works fine as a shared library. FLTK also includes an excellent UI builder called FLUID that can be used to create applications in minutes.

---

flux-core[¶](#flux-core)
===

Homepage:
* <https://github.com/flux-framework/flux-core>

Spack package:
* [flux-core/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/flux-core/package.py)

Versions: 0.10.0, 0.9.0, 0.8.0, master

Build Dependencies: [hwloc](#hwloc), [yaml-cpp](#yaml-cpp), [lua](#lua), [munge](#munge), [czmq](#czmq), [autoconf](#autoconf), [libuuid](#libuuid), [zeromq](#zeromq), [py-cffi](#py-cffi), [py-six](#py-six), [lua-luaposix](#lua-luaposix), [lz4](#lz4), [asciidoc](#asciidoc), [jansson](#jansson), [automake](#automake), [libtool](#libtool), [py-pylint](#py-pylint), [python](#python)

Link Dependencies: [lua-luaposix](#lua-luaposix), [hwloc](#hwloc), [yaml-cpp](#yaml-cpp), [lua](#lua), [munge](#munge), [czmq](#czmq), [jansson](#jansson), [zeromq](#zeromq), [libuuid](#libuuid), [python](#python), [lz4](#lz4)

Run Dependencies: [lua](#lua), [py-cffi](#py-cffi), [python](#python), [py-six](#py-six)

Description: A next-generation resource manager (pre-alpha).

---

flux-sched[¶](#flux-sched)
===

Homepage:
* <https://github.com/flux-framework/flux-sched>

Spack package:
* [flux-sched/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/flux-sched/package.py)

Versions: 0.6.0, 0.5.0, 0.4.0, master

Build Dependencies: [py-pyyaml](#py-pyyaml), [flux-core](#flux-core), [libxml2](#libxml2), [autoconf](#autoconf), [boost](#boost), [libtool](#libtool), [automake](#automake)

Link Dependencies: [py-pyyaml](#py-pyyaml), [flux-core](#flux-core), [boost](#boost), [libxml2](#libxml2)

Run Dependencies: [flux-core](#flux-core)

Description: A scheduler for flux-core (pre-alpha).

---

fluxbox[¶](#fluxbox)
===

Homepage:
* <http://fluxbox.org/>

Spack package:
* [fluxbox/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/fluxbox/package.py)

Versions: 1.3.7

Build Dependencies: [libxrender](#libxrender), pkgconfig, [libx11](#libx11), [libxext](#libxext), [expat](#expat), [freetype](#freetype)

Link Dependencies: [libxrender](#libxrender), [libx11](#libx11), [libxext](#libxext), [expat](#expat), [freetype](#freetype)

Description: Fluxbox is a window manager for X based on the Blackbox 0.61.1 code. It is very light on resources and easy to handle, yet full of features that make for an easy and extremely fast desktop experience.

---

fmt[¶](#fmt)
===

Homepage:
* <http://fmtlib.net/latest/index.html>

Spack package:
* [fmt/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/fmt/package.py)

Versions: 5.2.1, 5.2.0, 5.1.0, 5.0.0, 4.1.0, 4.0.0, 3.0.2, 3.0.1, 3.0.0

Build Dependencies: [cmake](#cmake)

Description: fmt (formerly cppformat) is an open-source formatting library. It can be used as a safe alternative to printf or as a fast alternative to C++ IOStreams.

---

foam-extend[¶](#foam-extend)
===

Homepage:
* <http://www.extend-project.de/>

Spack package:
* [foam-extend/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/foam-extend/package.py)

Versions: 4.0, 3.2, 3.1, 3.0

Build Dependencies: [zlib](#zlib), [parmetis](#parmetis), mpi, [scotch](#scotch), [metis](#metis), [paraview](#paraview), [cmake](#cmake), [flex](#flex), [python](#python), [parmgridgen](#parmgridgen)

Link Dependencies: [zlib](#zlib), [parmetis](#parmetis), mpi, [scotch](#scotch), [metis](#metis), [python](#python), [paraview](#paraview)

Description: The Extend Project is a fork of the OpenFOAM open-source library for Computational Fluid Dynamics (CFD). This offering is not approved or endorsed by OpenCFD Ltd, producer and distributor of the OpenFOAM software via www.openfoam.com, and owner of the OPENFOAM trademark.
---

folly[¶](#folly)
===

Homepage:
* <https://github.com/facebook/folly>

Spack package:
* [folly/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/folly/package.py)

Versions: 2017.06.05.00, 2016.11.14.00, 2016.11.07.00, 2016.10.31.00, 2016.10.24.00, 2016.10.17.00

Build Dependencies: [libevent](#libevent), pkgconfig, [gflags](#gflags), [glog](#glog), [double-conversion](#double-conversion), [autoconf](#autoconf), [automake](#automake), [libtool](#libtool), [boost](#boost), [m4](#m4)

Link Dependencies: [libevent](#libevent), [boost](#boost), [glog](#glog), [double-conversion](#double-conversion), [gflags](#gflags)

Description: Folly (acronymed loosely after Facebook Open Source Library) is a library of C++11 components designed with practicality and efficiency in mind. Folly contains a variety of core library components used extensively at Facebook. In particular, it's often a dependency of Facebook's other open source C++ efforts and a place where those projects can share code.

---

font-adobe-100dpi[¶](#font-adobe-100dpi)
===

Homepage:
* <http://cgit.freedesktop.org/xorg/font/adobe-100dpi>

Spack package:
* [font-adobe-100dpi/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/font-adobe-100dpi/package.py)

Versions: 1.0.3

Build Dependencies: [bdftopcf](#bdftopcf), pkgconfig, [fontconfig](#fontconfig), [mkfontdir](#mkfontdir), [font-util](#font-util), [util-macros](#util-macros)

Link Dependencies: [font-util](#font-util)

Description: X.org adobe-100dpi font.
---

font-adobe-75dpi[¶](#font-adobe-75dpi)
===

Homepage:
* <http://cgit.freedesktop.org/xorg/font/adobe-75dpi>

Spack package:
* [font-adobe-75dpi/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/font-adobe-75dpi/package.py)

Versions: 1.0.3

Build Dependencies: [bdftopcf](#bdftopcf), pkgconfig, [fontconfig](#fontconfig), [mkfontdir](#mkfontdir), [font-util](#font-util), [util-macros](#util-macros)

Link Dependencies: [font-util](#font-util)

Description: X.org adobe-75dpi font.

---

font-adobe-utopia-100dpi[¶](#font-adobe-utopia-100dpi)
===

Homepage:
* <http://cgit.freedesktop.org/xorg/font/adobe-utopia-100dpi>

Spack package:
* [font-adobe-utopia-100dpi/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/font-adobe-utopia-100dpi/package.py)

Versions: 1.0.4

Build Dependencies: [bdftopcf](#bdftopcf), pkgconfig, [fontconfig](#fontconfig), [mkfontdir](#mkfontdir), [font-util](#font-util), [util-macros](#util-macros)

Link Dependencies: [font-util](#font-util)

Description: X.org adobe-utopia-100dpi font.

---

font-adobe-utopia-75dpi[¶](#font-adobe-utopia-75dpi)
===

Homepage:
* <http://cgit.freedesktop.org/xorg/font/adobe-utopia-75dpi>

Spack package:
* [font-adobe-utopia-75dpi/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/font-adobe-utopia-75dpi/package.py)

Versions: 1.0.4

Build Dependencies: [bdftopcf](#bdftopcf), pkgconfig, [fontconfig](#fontconfig), [mkfontdir](#mkfontdir), [font-util](#font-util), [util-macros](#util-macros)

Link Dependencies: [font-util](#font-util)

Description: X.org adobe-utopia-75dpi font.
---

font-adobe-utopia-type1[¶](#font-adobe-utopia-type1)
===

Homepage:
* <https://cgit.freedesktop.org/xorg/font/adobe-utopia-type1>

Spack package:
* [font-adobe-utopia-type1/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/font-adobe-utopia-type1/package.py)

Versions: 1.0.4

Build Dependencies: [util-macros](#util-macros), pkgconfig, [fontconfig](#fontconfig), [mkfontdir](#mkfontdir), [font-util](#font-util), [mkfontscale](#mkfontscale)

Description: X.org adobe-utopia-type1 font.

---

font-alias[¶](#font-alias)
===

Homepage:
* <http://cgit.freedesktop.org/xorg/font/alias>

Spack package:
* [font-alias/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/font-alias/package.py)

Versions: 1.0.3

Build Dependencies: [util-macros](#util-macros), pkgconfig, [font-util](#font-util)

Link Dependencies: [font-util](#font-util)

Description: X.org alias font.

---

font-arabic-misc[¶](#font-arabic-misc)
===

Homepage:
* <http://cgit.freedesktop.org/xorg/font/arabic-misc>

Spack package:
* [font-arabic-misc/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/font-arabic-misc/package.py)

Versions: 1.0.3

Build Dependencies: [bdftopcf](#bdftopcf), pkgconfig, [fontconfig](#fontconfig), [mkfontdir](#mkfontdir), [font-util](#font-util), [util-macros](#util-macros)

Link Dependencies: [font-util](#font-util)

Description: X.org arabic-misc font.

---

font-bh-100dpi[¶](#font-bh-100dpi)
===

Homepage:
* <http://cgit.freedesktop.org/xorg/font/bh-100dpi>

Spack package:
* [font-bh-100dpi/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/font-bh-100dpi/package.py)

Versions: 1.0.3

Build Dependencies: [bdftopcf](#bdftopcf), pkgconfig, [fontconfig](#fontconfig), [mkfontdir](#mkfontdir), [font-util](#font-util), [util-macros](#util-macros)

Link Dependencies: [font-util](#font-util)

Description: X.org bh-100dpi font.
---

font-bh-75dpi[¶](#font-bh-75dpi)
===

Homepage:
* <http://cgit.freedesktop.org/xorg/font/bh-75dpi>

Spack package:
* [font-bh-75dpi/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/font-bh-75dpi/package.py)

Versions: 1.0.3

Build Dependencies: [bdftopcf](#bdftopcf), pkgconfig, [fontconfig](#fontconfig), [mkfontdir](#mkfontdir), [font-util](#font-util), [util-macros](#util-macros)

Link Dependencies: [font-util](#font-util)

Description: X.org bh-75dpi font.

---

font-bh-lucidatypewriter-100dpi[¶](#font-bh-lucidatypewriter-100dpi)
===

Homepage:
* <http://cgit.freedesktop.org/xorg/font/bh-lucidatypewriter-100dpi>

Spack package:
* [font-bh-lucidatypewriter-100dpi/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/font-bh-lucidatypewriter-100dpi/package.py)

Versions: 1.0.3

Build Dependencies: [bdftopcf](#bdftopcf), pkgconfig, [fontconfig](#fontconfig), [mkfontdir](#mkfontdir), [font-util](#font-util), [util-macros](#util-macros)

Link Dependencies: [font-util](#font-util)

Description: X.org bh-lucidatypewriter-100dpi font.

---

font-bh-lucidatypewriter-75dpi[¶](#font-bh-lucidatypewriter-75dpi)
===

Homepage:
* <http://cgit.freedesktop.org/xorg/font/bh-lucidatypewriter-75dpi>

Spack package:
* [font-bh-lucidatypewriter-75dpi/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/font-bh-lucidatypewriter-75dpi/package.py)

Versions: 1.0.3

Build Dependencies: [bdftopcf](#bdftopcf), pkgconfig, [fontconfig](#fontconfig), [mkfontdir](#mkfontdir), [font-util](#font-util), [util-macros](#util-macros)

Link Dependencies: [font-util](#font-util)

Description: X.org bh-lucidatypewriter-75dpi font.
---

font-bh-ttf[¶](#font-bh-ttf)
===

Homepage:
* <http://cgit.freedesktop.org/xorg/font/bh-ttf>

Spack package:
* [font-bh-ttf/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/font-bh-ttf/package.py)

Versions: 1.0.3

Build Dependencies: [bdftopcf](#bdftopcf), pkgconfig, [fontconfig](#fontconfig), [mkfontdir](#mkfontdir), [font-util](#font-util), [util-macros](#util-macros)

Link Dependencies: [font-util](#font-util)

Description: X.org bh-ttf font.

---

font-bh-type1[¶](#font-bh-type1)
===

Homepage:
* <http://cgit.freedesktop.org/xorg/font/bh-type1>

Spack package:
* [font-bh-type1/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/font-bh-type1/package.py)

Versions: 1.0.3

Build Dependencies: [util-macros](#util-macros), pkgconfig, [fontconfig](#fontconfig), [mkfontdir](#mkfontdir), [font-util](#font-util), [mkfontscale](#mkfontscale)

Link Dependencies: [font-util](#font-util)

Description: X.org bh-type1 font.

---

font-bitstream-100dpi[¶](#font-bitstream-100dpi)
===

Homepage:
* <http://cgit.freedesktop.org/xorg/font/bitstream-100dpi>

Spack package:
* [font-bitstream-100dpi/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/font-bitstream-100dpi/package.py)

Versions: 1.0.3

Build Dependencies: [bdftopcf](#bdftopcf), pkgconfig, [fontconfig](#fontconfig), [mkfontdir](#mkfontdir), [font-util](#font-util), [util-macros](#util-macros)

Link Dependencies: [font-util](#font-util)

Description: X.org bitstream-100dpi font.
---

font-bitstream-75dpi[¶](#font-bitstream-75dpi)
===

Homepage:
* <http://cgit.freedesktop.org/xorg/font/bitstream-75dpi>

Spack package:
* [font-bitstream-75dpi/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/font-bitstream-75dpi/package.py)

Versions: 1.0.3

Build Dependencies: [bdftopcf](#bdftopcf), pkgconfig, [fontconfig](#fontconfig), [mkfontdir](#mkfontdir), [font-util](#font-util), [util-macros](#util-macros)

Link Dependencies: [font-util](#font-util)

Description: X.org bitstream-75dpi font.

---

font-bitstream-speedo[¶](#font-bitstream-speedo)
===

Homepage:
* <http://cgit.freedesktop.org/xorg/font/bitstream-speedo>

Spack package:
* [font-bitstream-speedo/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/font-bitstream-speedo/package.py)

Versions: 1.0.2

Build Dependencies: [util-macros](#util-macros), pkgconfig, [fontconfig](#fontconfig), [mkfontdir](#mkfontdir), [font-util](#font-util), [mkfontscale](#mkfontscale)

Link Dependencies: [font-util](#font-util)

Description: X.org bitstream-speedo font.

---

font-bitstream-type1[¶](#font-bitstream-type1)
===

Homepage:
* <http://cgit.freedesktop.org/xorg/font/bitstream-type1>

Spack package:
* [font-bitstream-type1/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/font-bitstream-type1/package.py)

Versions: 1.0.3

Build Dependencies: [util-macros](#util-macros), pkgconfig, [fontconfig](#fontconfig), [mkfontdir](#mkfontdir), [font-util](#font-util), [mkfontscale](#mkfontscale)

Link Dependencies: [font-util](#font-util)

Description: X.org bitstream-type1 font.
---

font-cronyx-cyrillic[¶](#font-cronyx-cyrillic)
===

Homepage:
* <http://cgit.freedesktop.org/xorg/font/cronyx-cyrillic>

Spack package:
* [font-cronyx-cyrillic/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/font-cronyx-cyrillic/package.py)

Versions: 1.0.3

Build Dependencies: [bdftopcf](#bdftopcf), pkgconfig, [fontconfig](#fontconfig), [mkfontdir](#mkfontdir), [font-util](#font-util), [util-macros](#util-macros)

Link Dependencies: [font-util](#font-util)

Description: X.org cronyx-cyrillic font.

---

font-cursor-misc[¶](#font-cursor-misc)
===

Homepage:
* <http://cgit.freedesktop.org/xorg/font/cursor-misc>

Spack package:
* [font-cursor-misc/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/font-cursor-misc/package.py)

Versions: 1.0.3

Build Dependencies: [bdftopcf](#bdftopcf), pkgconfig, [fontconfig](#fontconfig), [mkfontdir](#mkfontdir), [font-util](#font-util), [util-macros](#util-macros)

Link Dependencies: [font-util](#font-util)

Description: X.org cursor-misc font.

---

font-daewoo-misc[¶](#font-daewoo-misc)
===

Homepage:
* <http://cgit.freedesktop.org/xorg/font/daewoo-misc>

Spack package:
* [font-daewoo-misc/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/font-daewoo-misc/package.py)

Versions: 1.0.3

Build Dependencies: [bdftopcf](#bdftopcf), pkgconfig, [fontconfig](#fontconfig), [mkfontdir](#mkfontdir), [font-util](#font-util), [util-macros](#util-macros)

Link Dependencies: [font-util](#font-util)

Description: X.org daewoo-misc font.
---

font-dec-misc[¶](#font-dec-misc)
===

Homepage:
* <http://cgit.freedesktop.org/xorg/font/dec-misc>

Spack package:
* [font-dec-misc/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/font-dec-misc/package.py)

Versions: 1.0.3

Build Dependencies: [bdftopcf](#bdftopcf), pkgconfig, [fontconfig](#fontconfig), [mkfontdir](#mkfontdir), [font-util](#font-util), [util-macros](#util-macros)

Link Dependencies: [font-util](#font-util)

Description: X.org dec-misc font.

---

font-ibm-type1[¶](#font-ibm-type1)
===

Homepage:
* <http://cgit.freedesktop.org/xorg/font/ibm-type1>

Spack package:
* [font-ibm-type1/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/font-ibm-type1/package.py)

Versions: 1.0.3

Build Dependencies: [util-macros](#util-macros), pkgconfig, [fontconfig](#fontconfig), [mkfontdir](#mkfontdir), [font-util](#font-util), [mkfontscale](#mkfontscale)

Link Dependencies: [font-util](#font-util)

Description: X.org ibm-type1 font.

---

font-isas-misc[¶](#font-isas-misc)
===

Homepage:
* <http://cgit.freedesktop.org/xorg/font/isas-misc>

Spack package:
* [font-isas-misc/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/font-isas-misc/package.py)

Versions: 1.0.3

Build Dependencies: [bdftopcf](#bdftopcf), pkgconfig, [fontconfig](#fontconfig), [mkfontdir](#mkfontdir), [font-util](#font-util), [util-macros](#util-macros)

Link Dependencies: [font-util](#font-util)

Description: X.org isas-misc font.
---

font-jis-misc[¶](#font-jis-misc)
===

Homepage:
* <http://cgit.freedesktop.org/xorg/font/jis-misc>

Spack package:
* [font-jis-misc/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/font-jis-misc/package.py)

Versions: 1.0.3

Build Dependencies: [bdftopcf](#bdftopcf), pkgconfig, [fontconfig](#fontconfig), [mkfontdir](#mkfontdir), [font-util](#font-util), [util-macros](#util-macros)

Link Dependencies: [font-util](#font-util)

Description: X.org jis-misc font.

---

font-micro-misc[¶](#font-micro-misc)
===

Homepage:
* <http://cgit.freedesktop.org/xorg/font/micro-misc>

Spack package:
* [font-micro-misc/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/font-micro-misc/package.py)

Versions: 1.0.3

Build Dependencies: [bdftopcf](#bdftopcf), pkgconfig, [fontconfig](#fontconfig), [mkfontdir](#mkfontdir), [font-util](#font-util), [util-macros](#util-macros)

Link Dependencies: [font-util](#font-util)

Description: X.org micro-misc font.

---

font-misc-cyrillic[¶](#font-misc-cyrillic)
===

Homepage:
* <http://cgit.freedesktop.org/xorg/font/misc-cyrillic>

Spack package:
* [font-misc-cyrillic/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/font-misc-cyrillic/package.py)

Versions: 1.0.3

Build Dependencies: [bdftopcf](#bdftopcf), pkgconfig, [fontconfig](#fontconfig), [mkfontdir](#mkfontdir), [font-util](#font-util), [util-macros](#util-macros)

Link Dependencies: [font-util](#font-util)

Description: X.org misc-cyrillic font.
---

font-misc-ethiopic[¶](#font-misc-ethiopic)
===

Homepage:
* <http://cgit.freedesktop.org/xorg/font/misc-ethiopic>

Spack package:
* [font-misc-ethiopic/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/font-misc-ethiopic/package.py)

Versions: 1.0.3

Build Dependencies: [util-macros](#util-macros), pkgconfig, [fontconfig](#fontconfig), [mkfontdir](#mkfontdir), [font-util](#font-util), [mkfontscale](#mkfontscale)

Link Dependencies: [font-util](#font-util)

Description: X.org misc-ethiopic font.

---

font-misc-meltho[¶](#font-misc-meltho)
===

Homepage:
* <http://cgit.freedesktop.org/xorg/font/misc-meltho>

Spack package:
* [font-misc-meltho/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/font-misc-meltho/package.py)

Versions: 1.0.3

Build Dependencies: [util-macros](#util-macros), pkgconfig, [fontconfig](#fontconfig), [mkfontdir](#mkfontdir), [font-util](#font-util), [mkfontscale](#mkfontscale)

Link Dependencies: [font-util](#font-util)

Description: X.org misc-meltho font.

---

font-misc-misc[¶](#font-misc-misc)
===

Homepage:
* <http://cgit.freedesktop.org/xorg/font/misc-misc>

Spack package:
* [font-misc-misc/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/font-misc-misc/package.py)

Versions: 1.1.2

Build Dependencies: [bdftopcf](#bdftopcf), pkgconfig, [fontconfig](#fontconfig), [mkfontdir](#mkfontdir), [font-util](#font-util), [util-macros](#util-macros)

Link Dependencies: [font-util](#font-util)

Description: X.org misc-misc font.
---

font-mutt-misc[¶](#font-mutt-misc)
===

Homepage:
* <http://cgit.freedesktop.org/xorg/font/mutt-misc>

Spack package:
* [font-mutt-misc/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/font-mutt-misc/package.py)

Versions: 1.0.3

Build Dependencies: [bdftopcf](#bdftopcf), pkgconfig, [fontconfig](#fontconfig), [mkfontdir](#mkfontdir), [font-util](#font-util), [util-macros](#util-macros)

Link Dependencies: [font-util](#font-util)

Description: X.org mutt-misc font.

---

font-schumacher-misc[¶](#font-schumacher-misc)
===

Homepage:
* <http://cgit.freedesktop.org/xorg/font/schumacher-misc>

Spack package:
* [font-schumacher-misc/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/font-schumacher-misc/package.py)

Versions: 1.1.2

Build Dependencies: [bdftopcf](#bdftopcf), pkgconfig, [fontconfig](#fontconfig), [mkfontdir](#mkfontdir), [font-util](#font-util), [util-macros](#util-macros)

Link Dependencies: [font-util](#font-util)

Description: X.org schumacher-misc font.

---

font-screen-cyrillic[¶](#font-screen-cyrillic)
===

Homepage:
* <http://cgit.freedesktop.org/xorg/font/screen-cyrillic>

Spack package:
* [font-screen-cyrillic/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/font-screen-cyrillic/package.py)

Versions: 1.0.4

Build Dependencies: [bdftopcf](#bdftopcf), pkgconfig, [fontconfig](#fontconfig), [mkfontdir](#mkfontdir), [font-util](#font-util), [util-macros](#util-macros)

Link Dependencies: [font-util](#font-util)

Description: X.org screen-cyrillic font.
---

font-sony-misc[¶](#font-sony-misc)
===

Homepage:
* <http://cgit.freedesktop.org/xorg/font/sony-misc>

Spack package:
* [font-sony-misc/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/font-sony-misc/package.py)

Versions: 1.0.3

Build Dependencies: [bdftopcf](#bdftopcf), pkgconfig, [fontconfig](#fontconfig), [mkfontdir](#mkfontdir), [font-util](#font-util), [util-macros](#util-macros)

Link Dependencies: [font-util](#font-util)

Description: X.org sony-misc font.

---

font-sun-misc[¶](#font-sun-misc)
===

Homepage:
* <http://cgit.freedesktop.org/xorg/font/sun-misc>

Spack package:
* [font-sun-misc/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/font-sun-misc/package.py)

Versions: 1.0.3

Build Dependencies: [bdftopcf](#bdftopcf), pkgconfig, [fontconfig](#fontconfig), [mkfontdir](#mkfontdir), [font-util](#font-util), [util-macros](#util-macros)

Link Dependencies: [font-util](#font-util)

Description: X.org sun-misc font.

---

font-util[¶](#font-util)
===

Homepage:
* <http://cgit.freedesktop.org/xorg/font/util>

Spack package:
* [font-util/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/font-util/package.py)

Versions: 1.3.1

Build Dependencies: [util-macros](#util-macros), pkgconfig

Description: X.Org font package creation/installation utilities.

---

font-winitzki-cyrillic[¶](#font-winitzki-cyrillic)
===

Homepage:
* <http://cgit.freedesktop.org/xorg/font/winitzki-cyrillic>

Spack package:
* [font-winitzki-cyrillic/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/font-winitzki-cyrillic/package.py)

Versions: 1.0.3

Build Dependencies: [bdftopcf](#bdftopcf), pkgconfig, [fontconfig](#fontconfig), [mkfontdir](#mkfontdir), [font-util](#font-util), [util-macros](#util-macros)

Link Dependencies: [font-util](#font-util)

Description: X.org winitzki-cyrillic font.
---

font-xfree86-type1[¶](#font-xfree86-type1)
===

Homepage:
* <http://cgit.freedesktop.org/xorg/font/xfree86-type1>

Spack package:
* [font-xfree86-type1/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/font-xfree86-type1/package.py)

Versions: 1.0.4

Build Dependencies: [util-macros](#util-macros), pkgconfig, [fontconfig](#fontconfig), [mkfontdir](#mkfontdir), [font-util](#font-util), [mkfontscale](#mkfontscale)

Link Dependencies: [font-util](#font-util)

Description: X.org xfree86-type1 font.

---

fontcacheproto[¶](#fontcacheproto)
===

Homepage:
* <http://cgit.freedesktop.org/xorg/proto/fontcacheproto>

Spack package:
* [fontcacheproto/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/fontcacheproto/package.py)

Versions: 0.1.3

Description: X.org FontcacheProto protocol headers.

---

fontconfig[¶](#fontconfig)
===

Homepage:
* <http://www.freedesktop.org/wiki/Software/fontconfig/>

Spack package:
* [fontconfig/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/fontconfig/package.py)

Versions: 2.12.3, 2.12.1, 2.11.1

Build Dependencies: [gperf](#gperf), pkgconfig, [freetype](#freetype), [libxml2](#libxml2), [font-util](#font-util)

Link Dependencies: [freetype](#freetype), [libxml2](#libxml2), [font-util](#font-util)

Description: Fontconfig is a library for configuring/customizing font access.

---

fontsproto[¶](#fontsproto)
===

Homepage:
* <http://cgit.freedesktop.org/xorg/proto/fontsproto>

Spack package:
* [fontsproto/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/fontsproto/package.py)

Versions: 2.1.3

Build Dependencies: [util-macros](#util-macros), pkgconfig

Description: X Fonts Extension.
---

fonttosfnt[¶](#fonttosfnt)
===

Homepage:
* <http://cgit.freedesktop.org/xorg/app/fonttosfnt>

Spack package:
* [fonttosfnt/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/fonttosfnt/package.py)

Versions: 1.0.4

Build Dependencies: [util-macros](#util-macros), [xproto](#xproto), [freetype](#freetype), pkgconfig, [libfontenc](#libfontenc)

Link Dependencies: [libfontenc](#libfontenc), [freetype](#freetype)

Description: Wrap a bitmap font in an sfnt (TrueType) wrapper.

---

fp16[¶](#fp16)
===

Homepage:
* <https://github.com/Maratyszcza/FP16/>

Spack package:
* [fp16/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/fp16/package.py)

Versions: master

Description: FP16 is a header-only library for conversion to/from half-precision floating point formats.

---

fpc[¶](#fpc)
===

Homepage:
* <https://www.freepascal.org/>

Spack package:
* [fpc/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/fpc/package.py)

Versions: 3.0.2

Description: Free Pascal is a 32, 64 and 16 bit professional Pascal compiler.

---

fr-hit[¶](#fr-hit)
===

Homepage:
* <http://weizhong-lab.ucsd.edu/frhit>

Spack package:
* [fr-hit/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/fr-hit/package.py)

Versions: 0.7.1-2013-02-20

Build Dependencies: [perl](#perl), [python](#python)

Link Dependencies: [perl](#perl), [python](#python)

Description: An efficient algorithm for fragment recruitment for next generation sequences against microbial reference genomes.

---

freebayes[¶](#freebayes)
===

Homepage:
* <https://github.com/ekg/freebayes>

Spack package:
* [freebayes/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/freebayes/package.py)

Versions: 1.1.0

Build Dependencies: [zlib](#zlib), [cmake](#cmake)

Link Dependencies: [zlib](#zlib)

Description: Bayesian haplotype-based genetic polymorphism discovery and genotyping.
---

freeglut[¶](#freeglut)
===

Homepage:
* <http://freeglut.sourceforge.net/>

Spack package:
* [freeglut/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/freeglut/package.py)

Versions: 3.0.0

Build Dependencies: [inputproto](#inputproto), gl, [libx11](#libx11), [xrandr](#xrandr), glu, [cmake](#cmake), [libxrandr](#libxrandr), [libxi](#libxi)

Link Dependencies: [inputproto](#inputproto), gl, [libx11](#libx11), [xrandr](#xrandr), glu, [libxrandr](#libxrandr), [libxi](#libxi)

Description: FreeGLUT is a free-software/open-source alternative to the OpenGL Utility Toolkit (GLUT) library.

---

freetype[¶](#freetype)
===

Homepage:
* <https://www.freetype.org/index.html>

Spack package:
* [freetype/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/freetype/package.py)

Versions: 2.9.1, 2.7.1, 2.7, 2.5.3

Build Dependencies: [bzip2](#bzip2), pkgconfig, [libpng](#libpng)

Link Dependencies: [bzip2](#bzip2), [libpng](#libpng)

Description: FreeType is a freely available software library to render fonts. It is written in C, designed to be small, efficient, highly customizable, and portable while capable of producing high-quality output (glyph images) of most vector and bitmap font formats.
---

fseq[¶](#fseq)
===

Homepage:
* <http://fureylab.web.unc.edu/software/fseq/>

Spack package:
* [fseq/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/fseq/package.py)

Versions: 1.84

Build Dependencies: java

Run Dependencies: java

Description: F-Seq: A Feature Density Estimator for High-Throughput Sequence Tags.

---

fsl[¶](#fsl)
===

Homepage:
* <https://fsl.fmrib.ox.ac.uk>

Spack package:
* [fsl/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/fsl/package.py)

Versions: 5.0.10

Build Dependencies: [zlib](#zlib), [libpng](#libpng), [sqlite](#sqlite), [libx11](#libx11), [boost](#boost), [expat](#expat), [mesa-glu](#mesa-glu), [python](#python)

Link Dependencies: [zlib](#zlib), [libpng](#libpng), [sqlite](#sqlite), [libx11](#libx11), [boost](#boost), [expat](#expat), [mesa-glu](#mesa-glu)

Run Dependencies: [python](#python)

Description: FSL is a comprehensive library of analysis tools for FMRI, MRI and DTI brain imaging data. Note: A manual download is required for FSL. Spack will search your current directory for the download file. Alternatively, add this file to a mirror so that Spack can find it. For instructions on how to set up a mirror, see http://spack.readthedocs.io/en/latest/mirrors.html

---

fslsfonts[¶](#fslsfonts)
===

Homepage:
* <http://cgit.freedesktop.org/xorg/app/fslsfonts>

Spack package:
* [fslsfonts/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/fslsfonts/package.py)

Versions: 1.0.5

Build Dependencies: [xproto](#xproto), pkgconfig, [libfs](#libfs), [util-macros](#util-macros)

Link Dependencies: [libfs](#libfs)

Description: fslsfonts produces a list of fonts served by an X font server.
---

fstobdf[¶](#fstobdf)
===

Homepage:
* <http://cgit.freedesktop.org/xorg/app/fstobdf>

Spack package:
* [fstobdf/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/fstobdf/package.py)

Versions: 1.0.6

Build Dependencies: [xproto](#xproto), pkgconfig, [libx11](#libx11), [libfs](#libfs), [util-macros](#util-macros)

Link Dependencies: [libx11](#libx11), [libfs](#libfs)

Description: The fstobdf program reads a font from a font server and prints a BDF file on the standard output that may be used to recreate the font. This is useful in testing servers, debugging font metrics, and reproducing lost BDF files.

---

ftgl[¶](#ftgl)
===

Homepage:
* <http://ftgl.sourceforge.net/docs/html/>

Spack package:
* [ftgl/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/ftgl/package.py)

Versions: 2.1.2

Build Dependencies: pkgconfig, gl, [m4](#m4), glu, [autoconf](#autoconf), [doxygen](#doxygen), [libtool](#libtool), [automake](#automake), [freetype](#freetype)

Link Dependencies: gl, [freetype](#freetype), glu

Description: Library to use arbitrary fonts in OpenGL applications.
---

funhpc[¶](#funhpc)
===

Homepage:
* <https://github.com/eschnett/FunHPC.cxx>

Spack package:
* [funhpc/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/funhpc/package.py)

Versions: develop, 1.3.0, 1.2.0, 1.1.1, 1.1.0, 1.0.0, 0.1.1, 0.1.0

Build Dependencies: [hwloc](#hwloc), mpi, [qthreads](#qthreads), [jemalloc](#jemalloc), [googletest](#googletest), [cereal](#cereal), [cmake](#cmake)

Link Dependencies: [hwloc](#hwloc), mpi, [qthreads](#qthreads), [jemalloc](#jemalloc), [googletest](#googletest), [cereal](#cereal)

Description: FunHPC: Functional HPC Programming.

---

fyba[¶](#fyba)
===

Homepage:
* <https://github.com/kartverket/fyba>

Spack package:
* [fyba/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/fyba/package.py)

Versions: 4.1.1

Build Dependencies: [autoconf](#autoconf), [automake](#automake), [libtool](#libtool), [m4](#m4)

Description: OpenFYBA is the source code release of the FYBA library, distributed by the National Mapping Authority of Norway (Statens kartverk) to read and write files in the National geodata standard format SOSI.

---

gapbs[¶](#gapbs)
===

Homepage:
* <http://gap.cs.berkeley.edu/benchmark.html>

Spack package:
* [gapbs/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/gapbs/package.py)

Versions: 1.0

Description: The GAP Benchmark Suite is intended to help graph processing research by standardizing evaluations. Fewer differences between graph processing evaluations will make it easier to compare different research efforts and quantify improvements. The benchmark not only specifies graph kernels, input graphs, and evaluation methodologies, but it also provides an optimized baseline implementation (this repo). These baseline implementations are representative of state-of-the-art performance, and thus new contributions should outperform them to demonstrate an improvement.
---

gapcloser[¶](#gapcloser)
===

Homepage:
* <https://sourceforge.net/projects/soapdenovo2/files/GapCloser/>

Spack package:
* [gapcloser/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/gapcloser/package.py)

Versions: 1.12-r6

Description: GapCloser is designed to close the gaps that emerge during the scaffolding process.

---

gapfiller[¶](#gapfiller)
===

Homepage:
* <https://www.baseclear.com/genomics/bioinformatics/basetools/gapfiller>

Spack package:
* [gapfiller/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/gapfiller/package.py)

Versions: 1.10

Build Dependencies: [perl](#perl)

Run Dependencies: [perl](#perl)

Description: GapFiller is a stand-alone program for closing gaps within pre-assembled scaffolds. Note: A manual download is required for GapFiller. Spack will search your current directory for the download file. Alternatively, add this file to a mirror so that Spack can find it. For instructions on how to set up a mirror, see http://spack.readthedocs.io/en/latest/mirrors.html

---

gasnet[¶](#gasnet)
===

Homepage:
* <http://gasnet.lbl.gov>

Spack package:
* [gasnet/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/gasnet/package.py)

Versions: 1.32.0, 1.30.0, 1.28.2, 1.28.0, 1.24.0

Build Dependencies: mpi

Link Dependencies: mpi

Description: GASNet is a language-independent, low-level networking layer that provides network-independent, high-performance communication primitives tailored for implementing parallel global address space SPMD languages and libraries such as UPC, Co-Array Fortran, SHMEM, Cray Chapel, and Titanium.
--- gatk[¶](#gatk) === Homepage: * <https://software.broadinstitute.org/gatk/> Spack package: * [gatk/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/gatk/package.py) Versions: 4.0.8.1, 4.0.4.0, 3.8-0 Run Dependencies: java, [r](#r), [python](#python) Description: Genome Analysis Toolkit Variant Discovery in High-Throughput Sequencing Data --- gaussian[¶](#gaussian) === Homepage: * <http://www.gaussian.com/> Spack package: * [gaussian/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/gaussian/package.py) Versions: 09 Description: Gaussian is a computer program for computational chemistry --- gawk[¶](#gawk) === Homepage: * <https://www.gnu.org/software/gawk/> Spack package: * [gawk/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/gawk/package.py) Versions: 4.1.4 Build Dependencies: [gmp](#gmp), [libsigsegv](#libsigsegv), [gettext](#gettext), [mpfr](#mpfr), [readline](#readline) Link Dependencies: [gmp](#gmp), [libsigsegv](#libsigsegv), [gettext](#gettext), [mpfr](#mpfr), [readline](#readline) Description: If you are like many computer users, you would frequently like to make changes in various text files wherever certain patterns appear, or extract data from parts of certain lines while discarding the rest. To write a program to do this in a language such as C or Pascal is a time- consuming inconvenience that may take many lines of code. The job is easy with awk, especially the GNU implementation: gawk. The awk utility interprets a special-purpose programming language that makes it possible to handle simple data-reformatting jobs with just a few lines of code.
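The gawk description above claims that simple data-reformatting jobs take just a few lines of awk. A minimal sketch of that claim — the colon-separated records and the values below are made up purely for illustration:

```shell
# Sum the second field of colon-separated records with a one-line awk program.
# The input records here are hypothetical example data.
printf 'alice:3\nbob:5\ncarol:4\n' \
  | awk -F: '{ total += $2 } END { print total }'
# prints 12
```

The `-F:` flag sets the field separator, and the `END` block runs once after all records are read — the whole reformatting job fits in a single pattern-action pair.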
--- gblocks[¶](#gblocks) === Homepage: * <http://molevol.cmima.csic.es/castresana/Gblocks.html> Spack package: * [gblocks/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/gblocks/package.py) Versions: 0.91b Description: Gblocks is a computer program written in ANSI C language that eliminates poorly aligned positions and divergent regions of an alignment of DNA or protein sequences --- gcc[¶](#gcc) === Homepage: * <https://gcc.gnu.org> Spack package: * [gcc/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/gcc/package.py) Versions: 8.2.0, 8.1.0, 7.3.0, 7.2.0, 7.1.0, 6.4.0, 6.3.0, 6.2.0, 6.1.0, 5.5.0, 5.4.0, 5.3.0, 5.2.0, 5.1.0, 4.9.4, 4.9.3, 4.9.2, 4.9.1, 4.8.5, 4.8.4, 4.7.4, 4.6.4, 4.5.4 Build Dependencies: [gmp](#gmp), [zlib](#zlib), [isl](#isl), [mpc](#mpc), [gnat](#gnat), [mpfr](#mpfr), [binutils](#binutils), [zip](#zip) Link Dependencies: [gmp](#gmp), [zlib](#zlib), [isl](#isl), [mpc](#mpc), [gnat](#gnat), [mpfr](#mpfr), [binutils](#binutils) Test Dependencies: [dejagnu](#dejagnu), [guile](#guile), [tcl](#tcl), [expect](#expect), [autogen](#autogen) Description: The GNU Compiler Collection includes front ends for C, C++, Objective-C, Fortran, Ada, and Go, as well as libraries for these languages. --- gccmakedep[¶](#gccmakedep) === Homepage: * <https://cgit.freedesktop.org/xorg/util/gccmakedep/> Spack package: * [gccmakedep/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/gccmakedep/package.py) Versions: 1.0.3 Build Dependencies: pkgconfig Description: X.org gccmakedep utilities. --- gccxml[¶](#gccxml) === Homepage: * <http://gccxml.github.io> Spack package: * [gccxml/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/gccxml/package.py) Versions: develop, latest Build Dependencies: [cmake](#cmake) Description: gccxml dumps an XML description of C++ source code using an extension of the GCC C++ compiler.
--- gconf[¶](#gconf) === Homepage: * <https://projects.gnome.org/gconf/> Spack package: * [gconf/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/gconf/package.py) Versions: 3.2.6 Build Dependencies: [glib](#glib), [libxml2](#libxml2) Link Dependencies: [glib](#glib), [libxml2](#libxml2) Description: GConf is a system for storing application preferences. --- gcta[¶](#gcta) === Homepage: * <http://cnsgenomics.com/software/gcta/#Overview> Spack package: * [gcta/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/gcta/package.py) Versions: 1.91.2beta_mac, 1.91.2beta Description: GCTA (Genome-wide Complex Trait Analysis) was originally designed to estimate the proportion of phenotypic variance explained by all genome- wide SNPs for complex traits (the GREML method), and has subsequently been extended for many other analyses to better understand the genetic architecture of complex traits. GCTA currently supports the following analyses.
--- gdal[¶](#gdal) === Homepage: * <http://www.gdal.org/> Spack package: * [gdal/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/gdal/package.py) Versions: 2.3.1, 2.3.0, 2.1.2, 2.0.2, 1.11.5 Build Dependencies: pkgconfig, java, [cfitsio](#cfitsio), [openjpeg](#openjpeg), [poppler](#poppler), [libgeotiff](#libgeotiff), [curl](#curl), [py-numpy](#py-numpy), [jasper](#jasper), [qhull](#qhull), [zlib](#zlib), [libtiff](#libtiff), [gmake](#gmake), [netcdf](#netcdf), [openssl](#openssl), [giflib](#giflib), [pcre](#pcre), [postgresql](#postgresql), [libtool](#libtool), [hdf](#hdf), [libpng](#libpng), opencl, [geos](#geos), jpeg, [zstd](#zstd), [json-c](#json-c), [xz](#xz), [perl](#perl), [unixodbc](#unixodbc), [hdf5](#hdf5), [expat](#expat), [kealib](#kealib), [libiconv](#libiconv), [libxml2](#libxml2), [proj](#proj), [xerces-c](#xerces-c), [armadillo](#armadillo), [sqlite](#sqlite), [cryptopp](#cryptopp), [fyba](#fyba), [python](#python), [py-setuptools](#py-setuptools) Link Dependencies: [cfitsio](#cfitsio), [openjpeg](#openjpeg), [poppler](#poppler), [libgeotiff](#libgeotiff), [curl](#curl), [jasper](#jasper), [qhull](#qhull), [zlib](#zlib), [libtiff](#libtiff), [json-c](#json-c), [netcdf](#netcdf), [openssl](#openssl), [giflib](#giflib), [pcre](#pcre), [postgresql](#postgresql), [kealib](#kealib), [hdf](#hdf), [libpng](#libpng), opencl, [geos](#geos), [zstd](#zstd), [libxml2](#libxml2), [xz](#xz), java, [hdf5](#hdf5), [expat](#expat), [libiconv](#libiconv), [proj](#proj), [xerces-c](#xerces-c), [armadillo](#armadillo), [sqlite](#sqlite), jpeg, [unixodbc](#unixodbc), [fyba](#fyba), [python](#python), [cryptopp](#cryptopp) Run Dependencies: [perl](#perl), java, [py-numpy](#py-numpy), [jackcess](#jackcess), [python](#python) Description: GDAL (Geospatial Data Abstraction Library) is a translator library for raster and vector geospatial data formats that is released under an X/MIT style Open Source license by the Open Source
Geospatial Foundation. As a library, it presents a single raster abstract data model and vector abstract data model to the calling application for all supported formats. It also comes with a variety of useful command line utilities for data translation and processing. --- gdb[¶](#gdb) === Homepage: * <https://www.gnu.org/software/gdb> Spack package: * [gdb/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/gdb/package.py) Versions: 8.2, 8.1, 8.0.1, 8.0, 7.12.1, 7.11, 7.10.1, 7.10, 7.9.1, 7.9, 7.8.2 Build Dependencies: [texinfo](#texinfo), [python](#python), [xz](#xz) Link Dependencies: [python](#python), [xz](#xz) Description: GDB, the GNU Project debugger, allows you to see what is going on 'inside' another program while it executes -- or what another program was doing at the moment it crashed. --- gdbm[¶](#gdbm) === Homepage: * <http://www.gnu.org.ua/software/gdbm/gdbm.html> Spack package: * [gdbm/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/gdbm/package.py) Versions: 1.14.1, 1.13, 1.12, 1.11, 1.10, 1.9.1, 1.9 Build Dependencies: [readline](#readline) Link Dependencies: [readline](#readline) Description: GNU dbm (or GDBM, for short) is a library of database functions that use extensible hashing and work similar to the standard UNIX dbm. These routines are provided to a programmer needing to create and manipulate a hashed database.
--- gdk-pixbuf[¶](#gdk-pixbuf) === Homepage: * <https://developer.gnome.org/gdk-pixbuf/> Spack package: * [gdk-pixbuf/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/gdk-pixbuf/package.py) Versions: 2.31.2 Build Dependencies: pkgconfig, jpeg, [libtiff](#libtiff), [libpng](#libpng), [gettext](#gettext), [glib](#glib), [gobject-introspection](#gobject-introspection) Link Dependencies: [glib](#glib), jpeg, [libtiff](#libtiff), [libpng](#libpng), [gettext](#gettext), [gobject-introspection](#gobject-introspection) Description: The Gdk Pixbuf is a toolkit for image loading and pixel buffer manipulation. It is used by GTK+ 2 and GTK+ 3 to load and manipulate images. In the past it was distributed as part of GTK+ 2 but it was split off into a separate package in preparation for the change to GTK+ 3. --- gdl[¶](#gdl) === Homepage: * <https://github.com/gnudatalanguage/gdl> Spack package: * [gdl/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/gdl/package.py) Versions: 0.9.8 Build Dependencies: [readline](#readline), jpeg, [eigen](#eigen), [hdf5](#hdf5), [cmake](#cmake), [py-numpy](#py-numpy), [pslib](#pslib), [fftw](#fftw), [plplot](#plplot), [libice](#libice), [graphicsmagick](#graphicsmagick), [libsm](#libsm), [libx11](#libx11), [netcdf](#netcdf), [wx](#wx), [proj](#proj), [libxxf86vm](#libxxf86vm), [libxinerama](#libxinerama), [hdf](#hdf), [python](#python), [gsl](#gsl) Link Dependencies: [readline](#readline), jpeg, [eigen](#eigen), [hdf5](#hdf5), [wx](#wx), [libxxf86vm](#libxxf86vm), [pslib](#pslib), [fftw](#fftw), [plplot](#plplot), [libice](#libice), [graphicsmagick](#graphicsmagick), [libsm](#libsm), [libx11](#libx11), [netcdf](#netcdf), [proj](#proj), [libxinerama](#libxinerama), [hdf](#hdf), [python](#python), [gsl](#gsl) Run Dependencies: [py-numpy](#py-numpy), [python](#python) Description: A free and open-source IDL/PV-WAVE compiler.
GNU Data Language (GDL) is a free/libre/open source incremental compiler compatible with IDL and to some extent with PV-WAVE. --- geant4[¶](#geant4) === Homepage: * <http://geant4.cern.ch/> Spack package: * [geant4/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/geant4/package.py) Versions: 10.04, 10.03.p03, 10.02.p03, 10.02.p02, 10.02.p01, 10.01.p03 Build Dependencies: [zlib](#zlib), [motif](#motif), [clhep](#clhep), [xerces-c](#xerces-c), [libx11](#libx11), [mesa](#mesa), [cmake](#cmake), [expat](#expat), [libxmu](#libxmu), [vecgeom](#vecgeom), [qt](#qt) Link Dependencies: [zlib](#zlib), [motif](#motif), [clhep](#clhep), [xerces-c](#xerces-c), [libx11](#libx11), [mesa](#mesa), [expat](#expat), [libxmu](#libxmu), [vecgeom](#vecgeom), [qt](#qt) Description: Geant4 is a toolkit for the simulation of the passage of particles through matter. Its areas of application include high energy, nuclear and accelerator physics, as well as studies in medical and space science.
--- gearshifft[¶](#gearshifft) === Homepage: * <https://github.com/mpicbg-scicomp/gearshifft> Spack package: * [gearshifft/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/gearshifft/package.py) Versions: 0.2.1-lw Build Dependencies: opencl, [cuda](#cuda), [cmake](#cmake), [boost](#boost), [clfft](#clfft), [fftw](#fftw) Link Dependencies: opencl, [boost](#boost), [clfft](#clfft), [fftw](#fftw), [cuda](#cuda) Description: Benchmark Suite for Heterogeneous FFT Implementations --- gemmlowp[¶](#gemmlowp) === Homepage: * <https://github.com/google/gemmlowp> Spack package: * [gemmlowp/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/gemmlowp/package.py) Versions: a6f29d9ac Description: Google low-precision matrix multiplication library --- genemark-et[¶](#genemark-et) === Homepage: * <http://topaz.gatech.edu/GeneMark> Spack package: * [genemark-et/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/genemark-et/package.py) Versions: 4.33 Build Dependencies: [perl](#perl) Run Dependencies: [perl](#perl) Description: Gene Prediction in Bacteria, archaea, Metagenomes and Metatranscriptomes. --- genomefinisher[¶](#genomefinisher) === Homepage: * <http://gfinisher.sourceforge.net> Spack package: * [genomefinisher/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/genomefinisher/package.py) Versions: 1.4 Run Dependencies: java Description: GFinisher is an application tool for refinement and finalization of prokaryotic genome assemblies, using the bias of GC skew to identify assembly errors and organize the contigs/scaffolds against reference genomes.
--- genometools[¶](#genometools) === Homepage: * <http://genometools.org/> Spack package: * [genometools/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/genometools/package.py) Versions: 1.5.9 Build Dependencies: [perl](#perl), [cairo](#cairo), [pango](#pango) Link Dependencies: [cairo](#cairo), [pango](#pango) Run Dependencies: [perl](#perl) Description: genometools is a free collection of bioinformatics tools (in the realm of genome informatics) combined into a single binary named gt. --- geopm[¶](#geopm) === Homepage: * <https://geopm.github.io> Spack package: * [geopm/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/geopm/package.py) Versions: develop, 0.5.0, 0.4.0, 0.3.0, master Build Dependencies: [ruby-ronn](#ruby-ronn), [hwloc](#hwloc), mpi, [m4](#m4), [libtool](#libtool), [autoconf](#autoconf), [automake](#automake), [numactl](#numactl), [json-c](#json-c), [doxygen](#doxygen) Link Dependencies: [json-c](#json-c), [hwloc](#hwloc), mpi, [numactl](#numactl) Run Dependencies: [py-natsort](#py-natsort), [py-pandas](#py-pandas), [py-matplotlib](#py-matplotlib), [py-numpy](#py-numpy) Description: GEOPM is an extensible power management framework targeting HPC. The GEOPM package provides libgeopm, libgeopmpolicy and applications geopmctl and geopmpolicy, as well as tools for postprocessing. GEOPM is designed to be extended for new control algorithms and new hardware power management features via its plugin infrastructure. Note: GEOPM interfaces with hardware using Model Specific Registers (MSRs). For proper usage make sure MSRs are made available directly or via the msr- safe kernel module by your administrator.
--- geos[¶](#geos) === Homepage: * <http://trac.osgeo.org/geos/> Spack package: * [geos/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/geos/package.py) Versions: 3.6.2, 3.6.1, 3.6.0, 3.5.1, 3.5.0, 3.4.3, 3.4.2, 3.4.1, 3.4.0, 3.3.9, 3.3.8, 3.3.7, 3.3.6, 3.3.5, 3.3.4, 3.3.3 Build Dependencies: [ruby](#ruby), [swig](#swig), [python](#python) Link Dependencies: [ruby](#ruby), [python](#python) Description: GEOS (Geometry Engine - Open Source) is a C++ port of the Java Topology Suite (JTS). As such, it aims to contain the complete functionality of JTS in C++. This includes all the OpenGIS Simple Features for SQL spatial predicate functions and spatial operators, as well as specific JTS enhanced topology functions. --- gettext[¶](#gettext) === Homepage: * <https://www.gnu.org/software/gettext/> Spack package: * [gettext/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/gettext/package.py) Versions: 0.19.8.1, 0.19.7 Build Dependencies: [bzip2](#bzip2), [tar](#tar), [libxml2](#libxml2), [xz](#xz), [libunistring](#libunistring), [ncurses](#ncurses) Link Dependencies: [bzip2](#bzip2), [tar](#tar), [libxml2](#libxml2), [xz](#xz), [libunistring](#libunistring), [ncurses](#ncurses) Description: GNU internationalization (i18n) and localization (l10n) library. --- gflags[¶](#gflags) === Homepage: * <https://gflags.github.io/gflags> Spack package: * [gflags/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/gflags/package.py) Versions: 2.1.2 Build Dependencies: [cmake](#cmake) Description: The gflags package contains a C++ library that implements commandline flags processing. It includes built-in support for standard types such as string and the ability to define flags in the source file in which they are used.
Online documentation available at: https://gflags.github.io/gflags/ --- ghost[¶](#ghost) === Homepage: * <https://www.bitbucket.org/essex/ghost/> Spack package: * [ghost/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/ghost/package.py) Versions: develop Build Dependencies: [cuda](#cuda), [zoltan](#zoltan), mpi, [scotch](#scotch), [hwloc](#hwloc), [cmake](#cmake), blas Link Dependencies: [cuda](#cuda), [zoltan](#zoltan), mpi, [scotch](#scotch), [hwloc](#hwloc), [cmake](#cmake), blas Description: GHOST: a General, Hybrid and Optimized Sparse Toolkit. This library provides highly optimized building blocks for implementing sparse iterative eigenvalue and linear solvers on multi- and manycore clusters and on heterogeneous CPU/GPU machines. For an iterative solver library using these kernels, see the phist package. --- ghostscript[¶](#ghostscript) === Homepage: * <http://ghostscript.com/> Spack package: * [ghostscript/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/ghostscript/package.py) Versions: 9.21, 9.18 Build Dependencies: [zlib](#zlib), jpeg, pkgconfig, [libpng](#libpng), [lcms](#lcms), [libtiff](#libtiff), [libxext](#libxext), [freetype](#freetype) Link Dependencies: [zlib](#zlib), jpeg, [libpng](#libpng), [lcms](#lcms), [libtiff](#libtiff), [libxext](#libxext), [freetype](#freetype) Description: An interpreter for the PostScript language and for PDF.
--- ghostscript-fonts[¶](#ghostscript-fonts) === Homepage: * <http://ghostscript.com/> Spack package: * [ghostscript-fonts/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/ghostscript-fonts/package.py) Versions: 8.11 Description: Ghostscript Fonts --- giflib[¶](#giflib) === Homepage: * <http://giflib.sourceforge.net/> Spack package: * [giflib/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/giflib/package.py) Versions: 5.1.4 Description: The GIFLIB project maintains the giflib service library, which has been pulling images out of GIFs since 1989. --- git[¶](#git) === Homepage: * <http://git-scm.com> Spack package: * [git/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/git/package.py) Versions: 2.19.1, 2.18.0, 2.17.1, 2.17.0, 2.15.1, 2.14.1, 2.13.0, 2.12.2, 2.12.1, 2.12.0, 2.11.1, 2.11.0, 2.9.3, 2.9.2, 2.9.1, 2.9.0, 2.8.4, 2.8.3, 2.8.2, 2.8.1, 2.8.0, 2.7.3, 2.7.1 Build Dependencies: [libiconv](#libiconv), [libtool](#libtool), [perl](#perl), [curl](#curl), [expat](#expat), [zlib](#zlib), [m4](#m4), [openssl](#openssl), [tk](#tk), [autoconf](#autoconf), [pcre](#pcre), [automake](#automake), [gettext](#gettext) Link Dependencies: [zlib](#zlib), [libiconv](#libiconv), [pcre](#pcre), [openssl](#openssl), [tk](#tk), [perl](#perl), [curl](#curl), [gettext](#gettext), [expat](#expat) Description: Git is a free and open source distributed version control system designed to handle everything from small to very large projects with speed and efficiency.
--- git-imerge[¶](#git-imerge) === Homepage: * <https://github.com/mhagger/git-imerge> Spack package: * [git-imerge/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/git-imerge/package.py) Versions: 1.1.0, 1.0.0 Build Dependencies: [git](#git), [python](#python), [py-argparse](#py-argparse) Link Dependencies: [git](#git), [python](#python), [py-argparse](#py-argparse) Description: git-imerge: Incremental merge & rebase for git Perform a merge between two branches incrementally. If conflicts are encountered, figure out exactly which pairs of commits conflict, and present the user with one pairwise conflict at a time for resolution. git-imerge has two primary design goals: * Reduce the pain of resolving merge conflicts to its unavoidable minimum, by finding and presenting the smallest possible conflicts: those between the changes introduced by one commit from each branch. * Allow a merge to be saved, tested, interrupted, published, and collaborated on while it is in progress. --- git-lfs[¶](#git-lfs) === Homepage: * <https://git-lfs.github.com> Spack package: * [git-lfs/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/git-lfs/package.py) Versions: 2.3.0, 2.2.1, 2.0.2, 1.4.1, 1.3.1 Build Dependencies: [go](#go) Run Dependencies: [git](#git) Description: Git LFS is a system for managing and versioning large files in association with a Git repository. Instead of storing the large files within the Git repository as blobs, Git LFS stores special "pointer files" in the repository, while storing the actual file contents on a Git LFS server.
--- gl2ps[¶](#gl2ps) === Homepage: * <http://www.geuz.org/gl2ps/> Spack package: * [gl2ps/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/gl2ps/package.py) Versions: 1.3.9 Build Dependencies: [libice](#libice), [libdrm](#libdrm), [libxdamage](#libxdamage), [cmake](#cmake), [libxxf86vm](#libxxf86vm), [expat](#expat), [libxi](#libxi), [libxdmcp](#libxdmcp), [zlib](#zlib), [libxt](#libxt), [libsm](#libsm), [libxext](#libxext), [libxau](#libxau), [libxcb](#libxcb), [libxfixes](#libxfixes), [libpng](#libpng), [libxmu](#libxmu) Link Dependencies: [libice](#libice), [libdrm](#libdrm), [libxdamage](#libxdamage), [expat](#expat), [libxxf86vm](#libxxf86vm), [libxmu](#libxmu), [libxi](#libxi), [libxdmcp](#libxdmcp), [zlib](#zlib), [libxt](#libxt), [libsm](#libsm), [libxext](#libxext), [libxau](#libxau), [libxcb](#libxcb), [libxfixes](#libxfixes), [libpng](#libpng) Description: GL2PS is a C library providing high quality vector output for any OpenGL application. --- glew[¶](#glew) === Homepage: * <http://glew.sourceforge.net/> Spack package: * [glew/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/glew/package.py) Versions: 2.0.0 Build Dependencies: [cmake](#cmake), gl Link Dependencies: gl Description: The OpenGL Extension Wrangler Library. --- glfmultiples[¶](#glfmultiples) === Homepage: * <https://genome.sph.umich.edu/wiki/GlfMultiples> Spack package: * [glfmultiples/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/glfmultiples/package.py) Versions: 2010-06-16 Build Dependencies: [zlib](#zlib) Link Dependencies: [zlib](#zlib) Description: glfMultiples is a GLF-based variant caller for next-generation sequencing data. It takes a set of GLF format genotype likelihood files as input and generates a VCF-format set of variant calls as output.
--- glib[¶](#glib) === Homepage: * <https://developer.gnome.org/glib/> Spack package: * [glib/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/glib/package.py) Versions: 2.56.2, 2.56.1, 2.56.0, 2.55.1, 2.53.1, 2.49.7, 2.49.4, 2.48.1, 2.42.1 Build Dependencies: [zlib](#zlib), [libffi](#libffi), pkgconfig, [perl](#perl), [pcre](#pcre), [gettext](#gettext), [util-linux](#util-linux), [python](#python) Link Dependencies: [zlib](#zlib), [libffi](#libffi), [gettext](#gettext), [util-linux](#util-linux), [pcre](#pcre) Run Dependencies: [perl](#perl), [python](#python) Description: GLib provides the core application building blocks for libraries and applications written in C. The GLib package contains low-level libraries useful for providing data structure handling for C, portability wrappers and interfaces for such runtime functionality as an event loop, threads, dynamic loading and an object system. --- glibmm[¶](#glibmm) === Homepage: * <https://developer.gnome.org/glib/> Spack package: * [glibmm/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/glibmm/package.py) Versions: 2.19.3, 2.16.0, 2.4.8 Build Dependencies: [glib](#glib), [libsigcpp](#libsigcpp) Link Dependencies: [glib](#glib), [libsigcpp](#libsigcpp) Description: Glibmm is a C++ wrapper for the glib library. --- glimmer[¶](#glimmer) === Homepage: * <https://ccb.jhu.edu/software/glimmer> Spack package: * [glimmer/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/glimmer/package.py) Versions: 3.02b Description: Glimmer is a system for finding genes in microbial DNA, especially the genomes of bacteria, archaea, and viruses.
--- glm[¶](#glm) === Homepage: * <https://github.com/g-truc/glm> Spack package: * [glm/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/glm/package.py) Versions: 0.9.7.1 Build Dependencies: [cmake](#cmake) Description: OpenGL Mathematics (GLM) is a header only C++ mathematics library for graphics software based on the OpenGL Shading Language (GLSL) specification --- global[¶](#global) === Homepage: * <http://www.gnu.org/software/global> Spack package: * [global/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/global/package.py) Versions: 6.5 Build Dependencies: [exuberant-ctags](#exuberant-ctags), [ncurses](#ncurses) Link Dependencies: [ncurses](#ncurses) Run Dependencies: [exuberant-ctags](#exuberant-ctags) Description: The Gnu Global tagging system --- globalarrays[¶](#globalarrays) === Homepage: * <http://hpc.pnl.gov/globalarrays/> Spack package: * [globalarrays/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/globalarrays/package.py) Versions: 5.7, 5.6.5, 5.6.4, 5.6.3, 5.6.2, 5.6.1, 5.6 Build Dependencies: scalapack, mpi, blas, lapack Link Dependencies: scalapack, mpi, blas, lapack Description: Global Arrays (GA) is a Partitioned Global Address Space (PGAS) programming model. It provides primitives for one-sided communication (Get, Put, Accumulate) and Atomic Operations (read increment). It supports blocking and non-blocking primitives, and supports location consistency.
--- globus-toolkit[¶](#globus-toolkit) === Homepage: * <http://toolkit.globus.org> Spack package: * [globus-toolkit/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/globus-toolkit/package.py) Versions: 6.0.1506371041, 6.0.1493989444 Build Dependencies: [openssl](#openssl) Link Dependencies: [openssl](#openssl) Description: The Globus Toolkit is an open source software toolkit used for building grids --- glog[¶](#glog) === Homepage: * <https://github.com/google/glog> Spack package: * [glog/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/glog/package.py) Versions: 0.3.5, 0.3.4, 0.3.3 Build Dependencies: [cmake](#cmake), [gflags](#gflags) Link Dependencies: [cmake](#cmake), [gflags](#gflags) Description: C++ implementation of the Google logging module. --- gloo[¶](#gloo) === Homepage: * <https://github.com/facebookincubator/gloo> Spack package: * [gloo/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/gloo/package.py) Versions: master Build Dependencies: [cmake](#cmake) Description: Gloo is a collective communications library. --- glpk[¶](#glpk) === Homepage: * <https://www.gnu.org/software/glpk> Spack package: * [glpk/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/glpk/package.py) Versions: 4.65, 4.61, 4.57 Build Dependencies: [gmp](#gmp) Link Dependencies: [gmp](#gmp) Description: The GLPK (GNU Linear Programming Kit) package is intended for solving large-scale linear programming (LP), mixed integer programming (MIP), and other related problems. It is a set of routines written in ANSI C and organized in the form of a callable library.
--- glproto[¶](#glproto) === Homepage: * <https://www.x.org/wiki/> Spack package: * [glproto/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/glproto/package.py) Versions: 1.4.17 Build Dependencies: [util-macros](#util-macros), pkgconfig Description: OpenGL Extension to the X Window System. This extension defines a protocol for the client to send 3D rendering commands to the X server. --- glvis[¶](#glvis) === Homepage: * <http://glvis.org> Spack package: * [glvis/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/glvis/package.py) Versions: develop, 3.4, 3.3, 3.2, 3.1 Build Dependencies: [libx11](#libx11), gl, [mfem](#mfem), [fontconfig](#fontconfig), glu, [libtiff](#libtiff), [libpng](#libpng), [freetype](#freetype) Link Dependencies: [libx11](#libx11), gl, [mfem](#mfem), [fontconfig](#fontconfig), glu, [libtiff](#libtiff), [libpng](#libpng), [freetype](#freetype) Description: GLVis: an OpenGL tool for visualization of FEM meshes and functions --- gmake[¶](#gmake) === Homepage: * <https://www.gnu.org/software/make/> Spack package: * [gmake/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/gmake/package.py) Versions: 4.2.1, 4.0 Build Dependencies: [guile](#guile), [gettext](#gettext) Link Dependencies: [guile](#guile), [gettext](#gettext) Description: GNU Make is a tool which controls the generation of executables and other non-source files of a program from the program's source files.
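The gmake description above says Make controls the generation of non-source files from source files. A minimal sketch of that dependency-driven behavior, using throwaway file names in a temporary directory (the `/tmp/gmake-demo` path, `in.txt`, and `out.txt` are all hypothetical):

```shell
# A one-rule makefile: out.txt is regenerated only when in.txt is newer.
mkdir -p /tmp/gmake-demo && cd /tmp/gmake-demo
printf 'hello\n' > in.txt
# Recipe lines in a makefile must start with a TAB, hence the '\t' in printf.
printf 'out.txt: in.txt\n\ttr a-z A-Z < in.txt > out.txt\n' > Makefile
make            # first run builds out.txt from in.txt
cat out.txt     # prints HELLO
```

Running `make` a second time without touching `in.txt` rebuilds nothing, which is the whole point: targets are regenerated only when their prerequisites change.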
--- gmap-gsnap[¶](#gmap-gsnap) === Homepage: * <http://research-pub.gene.com/gmap/> Spack package: * [gmap-gsnap/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/gmap-gsnap/package.py) Versions: 2018-07-04, 2018-03-25, 2018-02-12, 2017-06-16, 2014-12-28 Build Dependencies: [zlib](#zlib), [bzip2](#bzip2) Link Dependencies: [zlib](#zlib), [bzip2](#bzip2) Description: GMAP: A Genomic Mapping and Alignment Program for mRNA and EST Sequences, and GSNAP: Genomic Short-read Nucleotide Alignment Program --- gmime[¶](#gmime) === Homepage: * <http://spruce.sourceforge.net/gmime/> Spack package: * [gmime/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/gmime/package.py) Versions: 2.6.23 Build Dependencies: [glib](#glib), [libgpg-error](#libgpg-error) Link Dependencies: [glib](#glib), [libgpg-error](#libgpg-error) Description: GMime is a C/C++ library which may be used for the creation and parsing of messages using the Multipurpose Internet Mail Extension (MIME). --- gmodel[¶](#gmodel) === Homepage: * <https://github.com/ibaned/gmodel> Spack package: * [gmodel/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/gmodel/package.py) Versions: 2.1.0 Build Dependencies: [cmake](#cmake) Description: Gmsh model generation library Gmodel is a C++11 library that implements a minimal CAD kernel based on the .geo format used by the Gmsh mesh generation code, and is designed to make it easier for users to quickly construct CAD models for Gmsh.
--- gmp[¶](#gmp) === Homepage: * <https://gmplib.org> Spack package: * [gmp/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/gmp/package.py) Versions: 6.1.2, 6.1.1, 6.1.0, 6.0.0a, 6.0.0, 5.1.3, 4.3.2 Build Dependencies: [autoconf](#autoconf), [automake](#automake), [libtool](#libtool), [m4](#m4) Description: GMP is a free library for arbitrary precision arithmetic, operating on signed integers, rational numbers, and floating-point numbers. --- gmsh[¶](#gmsh) === Homepage: * <http://gmsh.info> Spack package: * [gmsh/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/gmsh/package.py) Versions: develop, 4.0.0, 3.0.6, 3.0.1, 2.16.0, 2.15.0, 2.12.0 Build Dependencies: [gmp](#gmp), [fltk](#fltk), [petsc](#petsc), [hdf5](#hdf5), [tetgen](#tetgen), [cmake](#cmake), lapack, [zlib](#zlib), [slepc](#slepc), mpi, [netgen](#netgen), [oce](#oce), blas Link Dependencies: [gmp](#gmp), [fltk](#fltk), [petsc](#petsc), [hdf5](#hdf5), [tetgen](#tetgen), lapack, [zlib](#zlib), [slepc](#slepc), mpi, [netgen](#netgen), [oce](#oce), blas Description: Gmsh is a free 3D finite element grid generator with a built-in CAD engine and post-processor. Its design goal is to provide a fast, light and user-friendly meshing tool with parametric input and advanced visualization capabilities. Gmsh is built around four modules: geometry, mesh, solver and post-processing. The specification of any input to these modules is done either interactively using the graphical user interface or in ASCII text files using Gmsh's own scripting language. --- gnat[¶](#gnat) === Homepage: * <https://libre.adacore.com/tools/gnat-gpl-edition/> Spack package: * [gnat/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/gnat/package.py) Versions: 2016 Description: The GNAT Ada compiler.
Ada is a modern programming language designed for large, long-lived applications - and embedded systems in particular - where reliability and efficiency are essential. --- gnu-prolog[¶](#gnu-prolog) === Homepage: * <http://www.gprolog.org/Spack package: * [gnu-prolog/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/gnu-prolog/package.py) Versions: 1.4.4 Description: A free Prolog compiler with constraint solving over finite domains. --- gnupg[¶](#gnupg) === Homepage: * <https://gnupg.org/index.htmlSpack package: * [gnupg/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/gnupg/package.py) Versions: 2.2.3, 2.1.21 Build Dependencies: [libassuan](#libassuan), [npth](#npth), [libgpg-error](#libgpg-error), [libksba](#libksba), [libgcrypt](#libgcrypt) Link Dependencies: [libassuan](#libassuan), [npth](#npth), [libgpg-error](#libgpg-error), [libksba](#libksba), [libgcrypt](#libgcrypt) Description: GnuPG is a complete and free implementation of the OpenPGP standard as defined by RFC4880 --- gnuplot[¶](#gnuplot) === Homepage: * <http://www.gnuplot.infoSpack package: * [gnuplot/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/gnuplot/package.py) Versions: 5.2.5, 5.2.2, 5.2.0, 5.0.7, 5.0.6, 5.0.5, 5.0.1 Build Dependencies: [libcerf](#libcerf), [libiconv](#libiconv), pkgconfig, [libxpm](#libxpm), [libgd](#libgd), [wx](#wx), [readline](#readline), [cairo](#cairo), [pango](#pango) Link Dependencies: [libcerf](#libcerf), [libiconv](#libiconv), [libxpm](#libxpm), [libgd](#libgd), [wx](#wx), [readline](#readline), [cairo](#cairo), [pango](#pango) Description: Gnuplot is a portable command-line driven graphing utility for Linux, OS/2, MS Windows, OSX, VMS, and many other platforms. The source code is copyrighted but freely distributed (i.e., you don't have to pay for it). 
It was originally created to allow scientists and students to visualize mathematical functions and data interactively, but has grown to support many non-interactive uses such as web scripting. It is also used as a plotting engine by third-party applications like Octave. Gnuplot has been supported and under active development since 1986 --- gnutls[¶](#gnutls) === Homepage: * <http://www.gnutls.orgSpack package: * [gnutls/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/gnutls/package.py) Versions: 3.5.13, 3.5.10, 3.5.9, 3.3.9 Build Dependencies: [zlib](#zlib), pkgconfig, [gettext](#gettext), [nettle](#nettle) Link Dependencies: [zlib](#zlib), [gettext](#gettext), [nettle](#nettle) Description: GnuTLS is a secure communications library implementing the SSL, TLS and DTLS protocols and technologies around them. It provides a simple C language application programming interface (API) to access the secure communications protocols as well as APIs to parse and write X.509, PKCS #12, OpenPGP and other required structures. It is aimed to be portable and efficient with focus on security and interoperability. 
---

go
===
Homepage: <https://golang.org>
Spack package: [go/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/go/package.py)
Versions: 1.11.1, 1.11, 1.10.3, 1.10.2, 1.10.1, 1.9.5, 1.9.2, 1.9.1, 1.9, 1.8.3, 1.8.1, 1.8, 1.7.5, 1.7.4, 1.6.4
Build Dependencies: [go-bootstrap](#go-bootstrap), [git](#git)
Link Dependencies: [git](#git)
Run Dependencies: [git](#git)
Description: The golang compiler and build environment.

---

go-bootstrap
===
Homepage: <https://golang.org>
Spack package: [go-bootstrap/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/go-bootstrap/package.py)
Versions: 1.4-bootstrap-20171003, 1.4-bootstrap-20170531, 1.4-bootstrap-20161024
Build Dependencies: [git](#git)
Link Dependencies: [git](#git)
Run Dependencies: [git](#git)
Description: Old C-bootstrapped go used to bootstrap the real go.

---

gobject-introspection
===
Homepage: <https://wiki.gnome.org/Projects/GObjectIntrospection>
Spack package: [gobject-introspection/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/gobject-introspection/package.py)
Versions: 1.49.2, 1.48.0
Build Dependencies: [glib](#glib), [bison](#bison), [cairo](#cairo), [flex](#flex), pkgconfig, [python](#python), [sed](#sed)
Link Dependencies: [glib](#glib), [cairo](#cairo), [python](#python)
Description: GObject Introspection is used to describe program APIs and collect them in a uniform, machine readable format.

---

googletest
===
Homepage: <https://github.com/google/googletest>
Spack package: [googletest/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/googletest/package.py)
Versions: 1.8.0, 1.7.0, 1.6.0
Build Dependencies: [cmake](#cmake)
Description: Google test framework for C++. Also called gtest.
---

gotcha
===
Homepage: <http://github.com/LLNL/gotcha>
Spack package: [gotcha/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/gotcha/package.py)
Versions: develop, 1.0.2, 0.0.2, master
Build Dependencies: [cmake](#cmake)
Description: C software library for shared library function wrapping; enables tools to intercept calls into shared libraries.

---

gource
===
Homepage: <http://gource.io>
Spack package: [gource/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/gource/package.py)
Versions: 0.44
Build Dependencies: pkgconfig, jpeg, [automake](#automake), [autoconf](#autoconf), [sdl2-image](#sdl2-image), [libpng](#libpng), [pcre](#pcre), [boost](#boost), [libtool](#libtool), [glew](#glew), [sdl2](#sdl2), [glm](#glm), [freetype](#freetype)
Link Dependencies: [boost](#boost), jpeg, [libpng](#libpng), [pcre](#pcre), [sdl2-image](#sdl2-image), [glew](#glew), [sdl2](#sdl2), [freetype](#freetype)
Description: Software version control visualization.

---

gperf
===
Homepage: <https://www.gnu.org/software/gperf/>
Spack package: [gperf/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/gperf/package.py)
Versions: 3.0.4
Description: GNU gperf is a perfect hash function generator. For a given list of strings, it produces a hash function and hash table, in form of C or C++ code, for looking up a value depending on the input string. The hash function is perfect, which means that the hash table has no collisions, and the hash table lookup needs a single string comparison only.
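The lookup strategy gperf describes — a collision-free hash over a fixed keyword set, so membership needs only one string comparison after hashing — can be sketched in a few lines. This is an illustrative Python toy, not gperf output (gperf emits C/C++ and derives a much better hash function than the character sum used here):

```python
# Toy perfect-hash lookup in the style of a gperf-generated table.
KEYWORDS = ["if", "else", "while", "return"]

def build_perfect_table(words):
    """Grow the table until a simple hash maps each word to a unique slot."""
    size = len(words)
    while True:
        slots = {}
        for w in words:
            h = sum(map(ord, w)) % size   # toy hash; gperf searches for a better one
            if h in slots:                # collision: grow the table and retry
                break
            slots[h] = w
        else:
            return size, slots
        size += 1

def is_keyword(word, size, slots):
    h = sum(map(ord, word)) % size
    return slots.get(h) == word           # a single string comparison, as gperf promises

SIZE, SLOTS = build_perfect_table(KEYWORDS)
print(is_keyword("while", SIZE, SLOTS))   # True
print(is_keyword("foo", SIZE, SLOTS))     # False
```

The trade-off gperf makes is the same one shown here: spend effort up front finding a collision-free hash for a set that never changes, and every lookup afterwards is O(1) with one comparison.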
---

gperftools
===
Homepage: <https://github.com/gperftools/gperftools>
Spack package: [gperftools/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/gperftools/package.py)
Versions: 2.7, 2.4, 2.3
Build Dependencies: unwind
Link Dependencies: unwind
Description: Google's fast malloc/free implementation, especially for multi-threaded applications. Contains tcmalloc, heap-checker, heap-profiler, and cpu-profiler.

---

gplates
===
Homepage: <https://www.gplates.org>
Spack package: [gplates/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/gplates/package.py)
Versions: 2.0.0
Build Dependencies: [gdal](#gdal), [qwt](#qwt), [proj](#proj), [cmake](#cmake), [boost](#boost), [cgal](#cgal), [glew](#glew), [qt](#qt), [python](#python), [mesa-glu](#mesa-glu)
Link Dependencies: [gdal](#gdal), [qwt](#qwt), [proj](#proj), [boost](#boost), [cgal](#cgal), [glew](#glew), [qt](#qt), [python](#python), [mesa-glu](#mesa-glu)
Description: GPlates is desktop software for the interactive visualisation of plate-tectonics. GPlates offers a novel combination of interactive plate-tectonic reconstructions, geographic information system (GIS) functionality and raster data visualisation. GPlates enables both the visualisation and the manipulation of plate-tectonic reconstructions and associated data through geological time.

---

grackle
===
Homepage: <http://grackle.readthedocs.io/en/grackle-3.1/>
Spack package: [grackle/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/grackle/package.py)
Versions: 3.1, 3.0, 2.2, 2.0.1
Build Dependencies: [libtool](#libtool), mpi, [hdf5](#hdf5)
Link Dependencies: [libtool](#libtool), mpi, [hdf5](#hdf5)
Description: Grackle is a chemistry and radiative cooling library for astrophysical simulations with interfaces for C, C++, and Fortran codes. It is a generalized and trimmed down version of the chemistry network of the Enzo simulation code.

---

gradle
===
Homepage: <https://gradle.org>
Spack package: [gradle/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/gradle/package.py)
Versions: 4.8.1, 3.4, 3.3, 3.2.1, 3.2, 3.1, 3.0, 2.14.1, 2.14, 2.13, 2.12, 2.11, 2.10, 2.9, 2.8, 2.7, 2.6, 2.5, 2.4, 2.3, 2.2.1, 2.2, 2.1, 2.0, 1.12, 1.11, 1.10, 1.9, 1.8, 1.5, 1.4, 1.3, 1.2, 1.1, 1.0, 0.9.2, 0.9.1, 0.9, 0.8, 0.7
Build Dependencies: java
Link Dependencies: java
Description: Gradle is an open source build automation system that builds upon the concepts of Apache Ant and Apache Maven and introduces a Groovy-based domain-specific language (DSL) instead of the XML form used by Apache Maven for declaring the project configuration. Gradle uses a directed acyclic graph ("DAG") to determine the order in which tasks can be run.

---

grandr
===
Homepage: <https://cgit.freedesktop.org/xorg/app/grandr>
Spack package: [grandr/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/grandr/package.py)
Versions: 0.1
Build Dependencies: [gconf](#gconf), [gtkplus](#gtkplus), [xrandr](#xrandr)
Link Dependencies: [gconf](#gconf), [gtkplus](#gtkplus), [xrandr](#xrandr)
Description: RandR user interface using GTK+ libraries.

---

graph500
===
Homepage: <https://graph500.org>
Spack package: [graph500/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/graph500/package.py)
Versions: 3.0.0
Build Dependencies: mpi
Link Dependencies: mpi
Description: Graph500 reference implementations.
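Graph500 ranks systems on graph traversal rather than floating-point throughput; its central timed kernel is a breadth-first search that produces a parent tree, which is then validated. As a rough illustration only — the reference implementations build enormous Kronecker graphs and run over MPI — the shape of that kernel is:

```python
# Toy sequential version of the BFS kernel that Graph500 times.
from collections import deque

def bfs_parents(adj, root):
    """Return the BFS parent tree (the structure Graph500 validates)."""
    parent = {root: root}
    frontier = deque([root])
    while frontier:
        u = frontier.popleft()
        for v in adj.get(u, ()):
            if v not in parent:        # first visit: record BFS parent
                parent[v] = u
                frontier.append(v)
    return parent

# A small undirected graph given as an adjacency list.
adj = {0: [1, 2], 1: [0, 3], 2: [0, 3], 3: [1, 2, 4], 4: [3]}
print(bfs_parents(adj, 0))
```

The benchmark's figure of merit, traversed edges per second (TEPS), counts how fast this frontier expansion proceeds at scale.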
---

graphicsmagick
===
Homepage: <http://www.graphicsmagick.org/>
Spack package: [graphicsmagick/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/graphicsmagick/package.py)
Versions: 1.3.29
Build Dependencies: [ghostscript](#ghostscript), [graphviz](#graphviz), jpeg, [libxml2](#libxml2), [xz](#xz), [jasper](#jasper), [libice](#libice), [bzip2](#bzip2), [libsm](#libsm), [ghostscript-fonts](#ghostscript-fonts), [lcms](#lcms), [libtiff](#libtiff), [libpng](#libpng), [libtool](#libtool), [zlib](#zlib)
Link Dependencies: [ghostscript](#ghostscript), [graphviz](#graphviz), jpeg, [libxml2](#libxml2), [xz](#xz), [jasper](#jasper), [libice](#libice), [bzip2](#bzip2), [libsm](#libsm), [ghostscript-fonts](#ghostscript-fonts), [lcms](#lcms), [libtiff](#libtiff), [libpng](#libpng), [libtool](#libtool), [zlib](#zlib)
Description: GraphicsMagick is the swiss army knife of image processing. Provides a robust and efficient collection of tools and libraries which support reading, writing, and manipulating an image in over 88 major formats including important formats like DPX, GIF, JPEG, JPEG-2000, PNG, PDF, PNM, and TIFF.

---

graphlib
===
Homepage: <https://github.com/LLNL/graphlib>
Spack package: [graphlib/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/graphlib/package.py)
Versions: 3.0.0, 2.0.0
Build Dependencies: [cmake](#cmake)
Description: Graphlib: a library to create, manipulate, and export graphs.
---

graphmap
===
Homepage: <https://github.com/isovic/graphmap>
Spack package: [graphmap/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/graphmap/package.py)
Versions: 0.3.0
Description: A highly sensitive and accurate mapper for long, error-prone reads.

---

graphviz
===
Homepage: <http://www.graphviz.org>
Spack package: [graphviz/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/graphviz/package.py)
Versions: 2.40.1
Build Dependencies: [ghostscript](#ghostscript), pkgconfig, [gtkplus](#gtkplus), [bison](#bison), [autoconf](#autoconf), [glib](#glib), [pango](#pango), [zlib](#zlib), [swig](#swig), [gts](#gts), [libtool](#libtool), [libpng](#libpng), java, [flex](#flex), [expat](#expat), [libgd](#libgd), [fontconfig](#fontconfig), [qt](#qt), [automake](#automake), [cairo](#cairo), [python](#python), [freetype](#freetype)
Link Dependencies: [ghostscript](#ghostscript), [glib](#glib), [libpng](#libpng), [gtkplus](#gtkplus), java, [expat](#expat), [qt](#qt), [pango](#pango), [zlib](#zlib), [fontconfig](#fontconfig), [gts](#gts), [libgd](#libgd), [cairo](#cairo), [python](#python), [freetype](#freetype)
Description: Graph Visualization Software.

---

grass
===
Homepage: <http://grass.osgeo.org>
Spack package: [grass/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/grass/package.py)
Versions: 7.4.1
Build Dependencies: gl, [mariadb](#mariadb), [bison](#bison), [readline](#readline), lapack, [zlib](#zlib), [libtiff](#libtiff), [gmake](#gmake), [libx11](#libx11), [netcdf](#netcdf), [proj](#proj), [postgresql](#postgresql), [flex](#flex), opencl, [geos](#geos), [libpng](#libpng), [bzip2](#bzip2), [cairo](#cairo), [fftw](#fftw), [gdal](#gdal), [sqlite](#sqlite), blas, [python](#python), [freetype](#freetype)
Link Dependencies: opencl, gl, [readline](#readline), [libpng](#libpng), [gdal](#gdal), [bzip2](#bzip2), [cairo](#cairo), [mariadb](#mariadb), [geos](#geos), [fftw](#fftw), lapack, [zlib](#zlib), [libtiff](#libtiff), [libx11](#libx11), [netcdf](#netcdf), [sqlite](#sqlite), [proj](#proj), [postgresql](#postgresql), blas, [freetype](#freetype)
Run Dependencies: [python](#python)
Description: GRASS GIS (Geographic Resources Analysis Support System) is a free and open source Geographic Information System (GIS) software suite used for geospatial data management and analysis, image processing, graphics and maps production, spatial modeling, and visualization.

---

grib-api
===
Homepage: <https://software.ecmwf.int/wiki/display/GRIB/Home>
Spack package: [grib-api/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/grib-api/package.py)
Versions: 1.24.0, 1.21.0, 1.17.0, 1.16.0
Build Dependencies: [libpng](#libpng), [netcdf](#netcdf), [openjpeg](#openjpeg), [cmake](#cmake), [py-numpy](#py-numpy), [jasper](#jasper), [python](#python), [libaec](#libaec)
Link Dependencies: [libpng](#libpng), [netcdf](#netcdf), [openjpeg](#openjpeg), [jasper](#jasper), [python](#python), [libaec](#libaec)
Run Dependencies: [py-numpy](#py-numpy), [python](#python)
Description: The ECMWF GRIB API is an application program interface accessible from C, FORTRAN and Python programs developed for encoding and decoding WMO FM-92 GRIB edition 1 and edition 2 messages.

---

grnboost
===
Homepage: <https://github.com/aertslab/GRNBoost>
Spack package: [grnboost/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/grnboost/package.py)
Versions: 2017-10-9
Build Dependencies: java, [sbt](#sbt)
Run Dependencies: java, [spark](#spark), xgboost
Description: GRNBoost is a library built on top of Apache Spark that implements a scalable strategy for gene regulatory network (GRN) inference. See https://github.com/aertslab/GRNBoost/blob/master/docs/user_guide.md for the user guide. The locations of xgboost4j-<version>.jar and GRNBoost.jar are set to $XGBOOST_JAR and $GRNBOOST_JAR. The path to xgboost4j-<version>.jar is also added to CLASSPATH.

---

groff
===
Homepage: <https://www.gnu.org/software/groff/>
Spack package: [groff/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/groff/package.py)
Versions: 1.22.3
Build Dependencies: [ghostscript](#ghostscript), [gmake](#gmake), [gawk](#gawk), [sed](#sed)
Link Dependencies: [ghostscript](#ghostscript)
Description: Groff (GNU troff) is a typesetting system that reads plain text mixed with formatting commands and produces formatted output. Output may be PostScript or PDF, html, or ASCII/UTF8 for display at the terminal.

---

gromacs
===
Homepage: <http://www.gromacs.org>
Spack package: [gromacs/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/gromacs/package.py)
Versions: develop, 2018.2, 2018.1, 2018, 2016.5, 2016.4, 2016.3, 5.1.5, 5.1.4, 5.1.2
Build Dependencies: [plumed](#plumed), [cuda](#cuda), mpi, [fftw](#fftw), [cmake](#cmake)
Link Dependencies: [cuda](#cuda), mpi, [fftw](#fftw), [plumed](#plumed)
Description: GROMACS (GROningen MAchine for Chemical Simulations) is a molecular dynamics package primarily designed for simulations of proteins, lipids and nucleic acids. It was originally developed in the Biophysical Chemistry department of University of Groningen, and is now maintained by contributors in universities and research centers across the world. GROMACS is one of the fastest and most popular software packages available and can run on CPUs as well as GPUs. It is free, open source released under the GNU General Public License. Starting from version 4.6, GROMACS is released under the GNU Lesser General Public License.
---

gsl
===
Homepage: <http://www.gnu.org/software/gsl>
Spack package: [gsl/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/gsl/package.py)
Versions: 2.5, 2.4, 2.3, 2.2.1, 2.1, 2.0, 1.16
Description: The GNU Scientific Library (GSL) is a numerical library for C and C++ programmers. It is free software under the GNU General Public License. The library provides a wide range of mathematical routines such as random number generators, special functions and least-squares fitting. There are over 1000 functions in total with an extensive test suite.

---

gslib
===
Homepage: <https://github.com/gslib/gslib>
Spack package: [gslib/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/gslib/package.py)
Versions: 1.0.2, 1.0.1, 1.0.0
Build Dependencies: mpi, blas
Link Dependencies: mpi, blas
Description: Highly scalable gather-scatter code with AMG and XXT solvers.

---

gtkmm
===
Homepage: <https://www.gtkmm.org/en/>
Spack package: [gtkmm/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/gtkmm/package.py)
Versions: 2.19.7, 2.19.6, 2.19.4, 2.19.2, 2.17.11, 2.17.1, 2.16.0, 2.4.11
Build Dependencies: [cairomm](#cairomm), [atk](#atk), [glibmm](#glibmm), [pangomm](#pangomm), [gtkplus](#gtkplus)
Link Dependencies: [cairomm](#cairomm), [atk](#atk), [glibmm](#glibmm), [pangomm](#pangomm), [gtkplus](#gtkplus)
Description: Gtkmm is the official C++ interface for the popular GUI library GTK+.
---

gtkorvo-atl
===
Homepage: <https://github.com/GTkorvo/atl>
Spack package: [gtkorvo-atl/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/gtkorvo-atl/package.py)
Versions: develop, 2.2, 2.1
Build Dependencies: [cmake](#cmake), [gtkorvo-cercs-env](#gtkorvo-cercs-env)
Link Dependencies: [gtkorvo-cercs-env](#gtkorvo-cercs-env)
Description: Libatl provides a library for the creation and manipulation of lists of name/value pairs using an efficient binary representation.

---

gtkorvo-cercs-env
===
Homepage: <https://github.com/GTkorvo/cercs_env>
Spack package: [gtkorvo-cercs-env/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/gtkorvo-cercs-env/package.py)
Versions: develop, 1.0
Build Dependencies: [cmake](#cmake)
Description: A utility library used by some GTkorvo packages.

---

gtkorvo-dill
===
Homepage: <https://github.com/GTkorvo/dill>
Spack package: [gtkorvo-dill/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/gtkorvo-dill/package.py)
Versions: develop, 2.4, 2.1
Build Dependencies: [cmake](#cmake)
Description: DILL provides instruction-level code generation, register allocation and simple optimizations for generating executable code directly into memory regions for immediate use.

---

gtkorvo-enet
===
Homepage: <http://www.github.com/GTkorvo/enet>
Spack package: [gtkorvo-enet/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/gtkorvo-enet/package.py)
Versions: 1.3.14, 1.3.13
Description: ENet reliable UDP networking library. This is a downstream branch of lsalzman's ENet. This version has expanded the client ID to handle more clients. The original is at http://github.com/lsalzman/enet.
---

gtkplus
===
Homepage: <http://www.gtk.org>
Spack package: [gtkplus/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/gtkplus/package.py)
Versions: 3.20.10, 2.24.32, 2.24.31, 2.24.25
Build Dependencies: pkgconfig, [gdk-pixbuf](#gdk-pixbuf), [fixesproto](#fixesproto), [glib](#glib), [libepoxy](#libepoxy), [pango](#pango), [inputproto](#inputproto), [at-spi2-atk](#at-spi2-atk), [libxi](#libxi), [shared-mime-info](#shared-mime-info), [atk](#atk), [gobject-introspection](#gobject-introspection)
Link Dependencies: [inputproto](#inputproto), [glib](#glib), [at-spi2-atk](#at-spi2-atk), [gdk-pixbuf](#gdk-pixbuf), [shared-mime-info](#shared-mime-info), [fixesproto](#fixesproto), [libepoxy](#libepoxy), [atk](#atk), [gobject-introspection](#gobject-introspection), [libxi](#libxi), [pango](#pango)
Description: The GTK+ 2 package contains libraries used for creating graphical user interfaces for applications.

---

gts
===
Homepage: <http://gts.sourceforge.net/index.html>
Spack package: [gts/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/gts/package.py)
Versions: 121130
Build Dependencies: [glib](#glib)
Link Dependencies: [glib](#glib)
Description: GTS stands for the GNU Triangulated Surface Library. It is an Open Source Free Software Library intended to provide a set of useful functions to deal with 3D surfaces meshed with interconnected triangles. The source code is available free of charge under the Free Software LGPL license. The code is written entirely in C with an object-oriented approach based mostly on the design of GTK+. Careful attention is paid to performance related issues as the initial goal of GTS is to provide a simple and efficient library to scientists dealing with 3D computational surface meshes.
---

guidance
===
Homepage: <http://guidance.tau.ac.il/ver2/>
Spack package: [guidance/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/guidance/package.py)
Versions: 2.02
Build Dependencies: [clustalw](#clustalw), [muscle](#muscle), [ruby](#ruby), [prank](#prank), [mafft](#mafft), [perl](#perl), [perl-bio-perl](#perl-bio-perl)
Link Dependencies: [clustalw](#clustalw), [muscle](#muscle), [ruby](#ruby), [prank](#prank), [mafft](#mafft)
Run Dependencies: [perl](#perl), [perl-bio-perl](#perl-bio-perl)
Description: Guidance: accurate detection of unreliable alignment regions accounting for the uncertainty of multiple parameters.

---

guile
===
Homepage: <https://www.gnu.org/software/guile/>
Spack package: [guile/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/guile/package.py)
Versions: 2.2.0, 2.0.14, 2.0.11
Build Dependencies: [gmp](#gmp), pkgconfig, [libtool](#libtool), [bdw-gc](#bdw-gc), [libffi](#libffi), [libunistring](#libunistring), [gettext](#gettext), [readline](#readline)
Link Dependencies: [gmp](#gmp), [libtool](#libtool), [bdw-gc](#bdw-gc), [libffi](#libffi), [libunistring](#libunistring), [gettext](#gettext), [readline](#readline)
Description: Guile is the GNU Ubiquitous Intelligent Language for Extensions, the official extension language for the GNU operating system.

---

gurobi
===
Homepage: <http://www.gurobi.com/index>
Spack package: [gurobi/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/gurobi/package.py)
Versions: 7.5.2
Description: The Gurobi Optimizer was designed from the ground up to be the fastest, most powerful solver available for your LP, QP, QCP, and MIP (MILP, MIQP, and MIQCP) problems. Note: Gurobi is licensed software. You will need to create an account on the Gurobi homepage and download Gurobi Optimizer yourself. Spack will search your current directory for the download file. Alternatively, add this file to a mirror so that Spack can find it. For instructions on how to set up a mirror, see http://spack.readthedocs.io/en/latest/mirrors.html. Please set the path to the license file with the following command (for bash): export GRB_LICENSE_FILE=/path/to/gurobi/license/. See section 4 in $GUROBI_HOME/docs/quickstart_linux.pdf for more details.

---

h5hut
===
Homepage: <https://amas.psi.ch/H5hut/>
Spack package: [h5hut/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/h5hut/package.py)
Versions: 1.99.13
Build Dependencies: mpi, [hdf5](#hdf5)
Link Dependencies: mpi, [hdf5](#hdf5)
Description: H5hut (HDF5 Utility Toolkit). High-performance I/O library for particle-based simulations.

---

h5part
===
Homepage: <http://vis.lbl.gov/Research/H5Part/>
Spack package: [h5part/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/h5part/package.py)
Versions: 1.6.6
Build Dependencies: mpi, [hdf5](#hdf5)
Link Dependencies: mpi, [hdf5](#hdf5)
Description: Portable high performance parallel data interface to HDF5.

---

h5utils
===
Homepage: <http://ab-initio.mit.edu/wiki/index.php/H5utils>
Spack package: [h5utils/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/h5utils/package.py)
Versions: 1.12.1
Build Dependencies: [libmatheval](#libmatheval), [octave](#octave), [libpng](#libpng), [hdf](#hdf), [hdf5](#hdf5)
Link Dependencies: [libmatheval](#libmatheval), [octave](#octave), [libpng](#libpng), [hdf](#hdf), [hdf5](#hdf5)
Description: h5utils is a set of utilities for visualization and conversion of scientific data in the free, portable HDF5 format.
---

h5z-zfp
===
Homepage: <http://h5z-zfp.readthedocs.io/en/latest>
Spack package: [h5z-zfp/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/h5z-zfp/package.py)
Versions: develop, 0.8.0, 0.7.0
Build Dependencies: [zfp](#zfp), [hdf5](#hdf5)
Link Dependencies: [zfp](#zfp), [hdf5](#hdf5)
Description: A highly flexible floating point and integer compression plugin for the HDF5 library using ZFP compression.

---

hacckernels
===
Homepage: <https://xgitlab.cels.anl.gov/hacc/HACCKernels>
Spack package: [hacckernels/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/hacckernels/package.py)
Versions: develop
Build Dependencies: [cmake](#cmake)
Description: HACCKernels: a benchmark for HACC's particle force kernels. The Hardware/Hybrid Accelerated Cosmology Code (HACC), a cosmology N-body-code framework, is designed to run efficiently on diverse computing architectures and to scale to millions of cores and beyond.

---

hadoop
===
Homepage: <http://hadoop.apache.org/>
Spack package: [hadoop/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/hadoop/package.py)
Versions: 3.1.1, 2.9.0
Run Dependencies: java
Description: The Apache Hadoop software library is a framework that allows for the distributed processing of large data sets across clusters of computers using simple programming models.

---

halc
===
Homepage: <https://github.com/lanl001/halc>
Spack package: [halc/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/halc/package.py)
Versions: 1.1
Build Dependencies: [dos2unix](#dos2unix)
Run Dependencies: [blasr](#blasr), [lordec](#lordec), [python](#python)
Description: HALC is software that performs high-throughput error correction for long reads.
---

hapcut2
===
Homepage: <https://github.com/vibansal/HapCUT2>
Spack package: [hapcut2/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/hapcut2/package.py)
Versions: 2017-07-10
Description: HapCUT2 is a maximum-likelihood-based tool for assembling haplotypes from DNA sequence reads, designed to 'just work' with excellent speed and accuracy.

---

hapdip
===
Homepage: <https://github.com/lh3/hapdip>
Spack package: [hapdip/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/hapdip/package.py)
Versions: 2018.02.20
Run Dependencies: [k8](#k8)
Description: The CHM1-NA12878 benchmark for single-sample SNP/INDEL calling from WGS Illumina data.

---

haploview
===
Homepage: <http://www.broadinstitute.org/haploview/haploview>
Spack package: [haploview/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/haploview/package.py)
Versions: 4.1
Build Dependencies: java
Run Dependencies: java
Description: Haploview is designed to simplify and expedite the process of haplotype analysis.

---

harfbuzz
===
Homepage: <http://www.freedesktop.org/wiki/Software/HarfBuzz/>
Spack package: [harfbuzz/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/harfbuzz/package.py)
Versions: 1.4.6, 0.9.37
Build Dependencies: [zlib](#zlib), pkgconfig, [icu4c](#icu4c), [cairo](#cairo), [glib](#glib), [freetype](#freetype)
Link Dependencies: [zlib](#zlib), [icu4c](#icu4c), [cairo](#cairo), [glib](#glib), [freetype](#freetype)
Description: The Harfbuzz package contains an OpenType text shaping engine.
---

harminv
===
Homepage: <http://ab-initio.mit.edu/wiki/index.php/Harminv>
Spack package: [harminv/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/harminv/package.py)
Versions: 1.4
Build Dependencies: blas, lapack
Link Dependencies: blas, lapack
Description: Harminv is a free program (and accompanying library) to solve the problem of harmonic inversion - given a discrete-time, finite-length signal that consists of a sum of finitely-many sinusoids (possibly exponentially decaying) in a given bandwidth, it determines the frequencies, decay constants, amplitudes, and phases of those sinusoids.

---

hdf
===
Homepage: <https://support.hdfgroup.org/products/hdf4/>
Spack package: [hdf/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/hdf/package.py)
Versions: 4.2.13, 4.2.12, 4.2.11
Build Dependencies: szip, [zlib](#zlib), [bison](#bison), jpeg, [flex](#flex)
Link Dependencies: szip, [zlib](#zlib), jpeg
Description: HDF4 (also known as HDF) is a library and multi-object file format for storing and managing data between machines.

---

hdf5
===
Homepage: <https://support.hdfgroup.org/HDF5/>
Spack package: [hdf5/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/hdf5/package.py)
Versions: 1.10.4, 1.10.3, 1.10.2, 1.10.1, 1.10.0-patch1, 1.10.0, 1.8.19, 1.8.18, 1.8.17, 1.8.16, 1.8.15, 1.8.14, 1.8.13, 1.8.12, 1.8.10
Build Dependencies: szip, [zlib](#zlib), mpi, [numactl](#numactl)
Link Dependencies: szip, [zlib](#zlib), mpi, [numactl](#numactl)
Description: HDF5 is a data model, library, and file format for storing and managing data. It supports an unlimited variety of datatypes, and is designed for flexible and efficient I/O and for high volume and complex data.
---

hdf5-blosc
===
Homepage: <https://github.com/Blosc/hdf5-blosc>
Spack package: [hdf5-blosc/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/hdf5-blosc/package.py)
Versions: master
Build Dependencies: [libtool](#libtool), [c-blosc](#c-blosc), [hdf5](#hdf5)
Link Dependencies: [c-blosc](#c-blosc), [hdf5](#hdf5)
Description: Blosc filter for HDF5.

---

help2man
===
Homepage: <https://www.gnu.org/software/help2man/>
Spack package: [help2man/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/help2man/package.py)
Versions: 1.47.4
Build Dependencies: [perl](#perl), [gettext](#gettext)
Description: help2man produces simple manual pages from the '--help' and '--version' output of other commands.

---

henson
===
Homepage: <https://github.com/henson-insitu/henson>
Spack package: [henson/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/henson/package.py)
Versions: develop
Build Dependencies: [cmake](#cmake), mpi, [python](#python)
Link Dependencies: mpi, [python](#python)
Description: Cooperative multitasking for in situ processing.

---

hepmc
===
Homepage: <http://hepmc.web.cern.ch/hepmc/>
Spack package: [hepmc/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/hepmc/package.py)
Versions: 3.0.0, 2.06.09, 2.06.08, 2.06.07, 2.06.06, 2.06.05
Build Dependencies: [cmake](#cmake)
Description: The HepMC package is an object oriented, C++ event record for High Energy Physics Monte Carlo generators and simulation.
---

heppdt
===

Homepage:
* <http://lcgapp.cern.ch/project/simu/HepPDT/>

Spack package:
* [heppdt/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/heppdt/package.py)

Versions: 3.04.01, 3.04.00, 3.03.02, 3.03.01, 3.03.00, 2.06.01

Description: The HepPID library contains translation methods for particle IDs to and from various Monte Carlo generators and the PDG standard numbering scheme. We realize that the generators adhere closely to the standard, but there are occasional differences.

---

hic-pro
===

Homepage:
* <https://github.com/nservant/HiC-Pro>

Spack package:
* [hic-pro/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/hic-pro/package.py)

Versions: 2.10.0

Build Dependencies: [samtools](#samtools), [r](#r), [bowtie2](#bowtie2), [py-scipy](#py-scipy), [py-pysam](#py-pysam), [py-numpy](#py-numpy), [r-rcolorbrewer](#r-rcolorbrewer), [python](#python), [py-bx-python](#py-bx-python), [r-ggplot2](#r-ggplot2)

Link Dependencies: [samtools](#samtools), [r](#r), [bowtie2](#bowtie2), [python](#python)

Run Dependencies: [py-scipy](#py-scipy), [r-rcolorbrewer](#r-rcolorbrewer), [py-numpy](#py-numpy), [py-pysam](#py-pysam), [py-bx-python](#py-bx-python), [r-ggplot2](#r-ggplot2)

Description: HiC-Pro is a package designed to process Hi-C data, from raw fastq files (paired-end Illumina data) to the normalized contact maps.

---

highfive
===

Homepage:
* <https://github.com/BlueBrain/HighFive>

Spack package:
* [highfive/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/highfive/package.py)

Versions: 1.5, 1.2, 1.1, 1.0

Build Dependencies: [cmake](#cmake), [boost](#boost), [hdf5](#hdf5)

Link Dependencies: [boost](#boost), [hdf5](#hdf5)

Description: HighFive - a header-only C++ HDF5 interface.

---

highwayhash
===

Homepage:
* <https://github.com/google/highwayhash>

Spack package:
* [highwayhash/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/highwayhash/package.py)

Versions: dfcb97

Description: Strong (well-distributed and unpredictable) hashes:
* Portable implementation of SipHash
* HighwayHash, a 5x faster SIMD hash with security claims

---

hiop
===

Homepage:
* <https://github.com/LLNL/hiop>

Spack package:
* [hiop/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/hiop/package.py)

Versions: 0.1

Build Dependencies: [cmake](#cmake), mpi, blas, lapack

Link Dependencies: mpi, blas, lapack

Description: HiOp is an optimization solver for solving certain mathematical optimization problems expressed as nonlinear programming problems. HiOp is a lightweight HPC solver that leverages the application's existing data parallelism to parallelize the optimization iterations by using specialized linear algebra kernels.

---

hisat2
===

Homepage:
* <http://ccb.jhu.edu/software/hisat2>

Spack package:
* [hisat2/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/hisat2/package.py)

Versions: 2.1.0

Description: HISAT2 is a fast and sensitive alignment program for mapping next-generation sequencing reads (whole-genome, transcriptome, and exome sequencing data) against the general human population (as well as against a single reference genome).

---

hisea
===

Homepage:
* <https://doi.org/10.1186/s12859-017-1953-9>

Spack package:
* [hisea/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/hisea/package.py)

Versions: 2017.12.26

Build Dependencies: [boost](#boost)

Link Dependencies: [boost](#boost)

Description: HISEA is an efficient all-vs-all long read aligner for SMRT sequencing data. Its algorithm is designed to produce the highest alignment sensitivity among comparable tools.
---

hmmer
===

Homepage:
* <http://www.hmmer.org>

Spack package:
* [hmmer/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/hmmer/package.py)

Versions: 3.2.1, 3.1b2, 3.0, 2.4i, 2.3.2, 2.3.1

Build Dependencies: mpi, [gsl](#gsl)

Link Dependencies: mpi, [gsl](#gsl)

Description: HMMER is used for searching sequence databases for sequence homologs, and for making sequence alignments. It implements methods using probabilistic models called profile hidden Markov models (profile HMMs).

---

homer
===

Homepage:
* <http://homer.ucsd.edu/homer>

Spack package:
* [homer/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/homer/package.py)

Versions: 4.9.1

Build Dependencies: [perl](#perl)

Run Dependencies: [perl](#perl), [r-biocgenerics](#r-biocgenerics), [r-deseq2](#r-deseq2), [r-edger](#r-edger), [r-biocparallel](#r-biocparallel)

Description: Software for motif discovery and next generation sequencing analysis.

---

hoomd-blue
===

Homepage:
* <http://glotzerlab.engin.umich.edu/hoomd-blue/>

Spack package:
* [hoomd-blue/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/hoomd-blue/package.py)

Versions: develop, 2.2.2, 2.1.6

Build Dependencies: [cuda](#cuda), pkgconfig, mpi, [cmake](#cmake), [py-numpy](#py-numpy), [doxygen](#doxygen), [python](#python)

Link Dependencies: [cuda](#cuda), mpi, [python](#python)

Run Dependencies: [py-numpy](#py-numpy)

Description: HOOMD-blue is a general-purpose particle simulation toolkit. It scales from a single CPU core to thousands of GPUs. You define particle initial conditions and interactions in a high-level python script. Then tell HOOMD-blue how you want to execute the job and it takes care of the rest. Python job scripts give you unlimited flexibility to create custom initialization routines, control simulation parameters, and perform in situ analysis.
---

hpccg
===

Homepage:
* <https://mantevo.org/about/applications/>

Spack package:
* [hpccg/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/hpccg/package.py)

Versions: 1.0

Build Dependencies: mpi

Link Dependencies: mpi

Description: Proxy Application. Intended to be the 'best approximation to an unstructured implicit finite element or finite volume application in 800 lines or fewer.'

---

hpctoolkit
===

Homepage:
* <http://hpctoolkit.org>

Spack package:
* [hpctoolkit/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/hpctoolkit/package.py)

Versions: 2017.06, master

Build Dependencies: mpi, [papi](#papi), [hpctoolkit-externals](#hpctoolkit-externals)

Link Dependencies: mpi, [papi](#papi), [hpctoolkit-externals](#hpctoolkit-externals)

Description: HPCToolkit is an integrated suite of tools for measurement and analysis of program performance on computers ranging from multicore desktop systems to the nation's largest supercomputers. By using statistical sampling of timers and hardware performance counters, HPCToolkit collects accurate measurements of a program's work, resource consumption, and inefficiency and attributes them to the full calling context in which they occur.

---

hpctoolkit-externals
===

Homepage:
* <http://hpctoolkit.org>

Spack package:
* [hpctoolkit-externals/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/hpctoolkit-externals/package.py)

Versions: 2017.06, master

Description: The HPCToolkit performance analysis tool has many prerequisites, and the HpctoolkitExternals package provides all of these prerequisites.
---

hpgmg
===

Homepage:
* <https://bitbucket.org/hpgmg/hpgmg>

Spack package:
* [hpgmg/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/hpgmg/package.py)

Versions: develop, 0.3, a0a5510df23b

Build Dependencies: [petsc](#petsc), [cuda](#cuda), mpi, [python](#python)

Link Dependencies: [petsc](#petsc), [cuda](#cuda), mpi

Description: HPGMG implements full multigrid (FMG) algorithms using finite-volume and finite-element methods. Different algorithmic variants adjust the arithmetic intensity and architectural properties that are tested. These FMG methods converge up to discretization error in one F-cycle, thus may be considered direct solvers. An F-cycle visits the finest level a total of two times, the first coarsening (8x smaller) 4 times, the second coarsening 6 times, etc.

---

hpl
===

Homepage:
* <http://www.netlib.org/benchmark/hpl/>

Spack package:
* [hpl/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/hpl/package.py)

Versions: 2.2

Build Dependencies: mpi, blas

Link Dependencies: mpi, blas

Description: HPL is a software package that solves a (random) dense linear system in double precision (64 bits) arithmetic on distributed-memory computers. It can thus be regarded as a portable as well as freely available implementation of the High Performance Computing Linpack Benchmark.

---

hpx
===

Homepage:
* <http://stellar.cct.lsu.edu/tag/hpx/>

Spack package:
* [hpx/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/hpx/package.py)

Versions: 1.0.0

Build Dependencies: [cmake](#cmake), [boost](#boost), [hwloc](#hwloc)

Link Dependencies: [hwloc](#hwloc), [boost](#boost)

Description: C++ runtime system for parallel and distributed applications.
---

hpx5
===

Homepage:
* <http://hpx.crest.iu.edu>

Spack package:
* [hpx5/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/hpx5/package.py)

Versions: 4.1.0, 4.0.0, 3.1.0, 2.0.0, 1.3.0, 1.2.0, 1.1.0, 1.0.0

Build Dependencies: [jemalloc](#jemalloc), pkgconfig, mpi, [m4](#m4), [hwloc](#hwloc), [autoconf](#autoconf), [automake](#automake), [libtool](#libtool), [metis](#metis), opencl

Link Dependencies: [jemalloc](#jemalloc), mpi, [hwloc](#hwloc), [metis](#metis), opencl

Description: The HPX-5 Runtime System. HPX-5 (High Performance ParalleX) is an open source, portable, performance-oriented runtime developed at CREST (Indiana University). HPX-5 provides a distributed programming model allowing programs to run unmodified on systems from a single SMP to large clusters and supercomputers with thousands of nodes. HPX-5 supports a wide variety of Intel and ARM platforms. It is being used by a broad range of scientific applications enabling scientists to write code that performs and scales better than contemporary runtimes.

---

hsakmt
===

Homepage:
* <https://cgit.freedesktop.org/amd/hsakmt/>

Spack package:
* [hsakmt/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/hsakmt/package.py)

Versions: 1.0.0

Description: hsakmt is a thunk library that provides a userspace interface to amdkfd (AMD's HSA Linux kernel driver). It is the HSA equivalent of libdrm.
---

hstr
===

Homepage:
* <https://github.com/dvorka/hstr>

Spack package:
* [hstr/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/hstr/package.py)

Versions: 1.22

Build Dependencies: [readline](#readline), [m4](#m4), [autoconf](#autoconf), [automake](#automake), [libtool](#libtool), [ncurses](#ncurses)

Link Dependencies: [readline](#readline), [ncurses](#ncurses)

Description: hstr (hh) is a shell history suggestion box for Bash and Zsh that enables easy viewing, searching, and reuse of your command history.

---

htop
===

Homepage:
* <https://github.com/hishamhm/htop>

Spack package:
* [htop/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/htop/package.py)

Versions: 2.2.0, 2.0.2

Build Dependencies: [ncurses](#ncurses)

Link Dependencies: [ncurses](#ncurses)

Description: htop is an interactive text-mode process viewer for Unix systems.

---

htslib
===

Homepage:
* <https://github.com/samtools/htslib>

Spack package:
* [htslib/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/htslib/package.py)

Versions: 1.9, 1.8, 1.7, 1.6, 1.4, 1.3.1, 1.2

Build Dependencies: [zlib](#zlib), [bzip2](#bzip2), [m4](#m4), [xz](#xz), [autoconf](#autoconf), [automake](#automake), [libtool](#libtool)

Link Dependencies: [zlib](#zlib), [bzip2](#bzip2), [m4](#m4), [xz](#xz), [autoconf](#autoconf), [automake](#automake), [libtool](#libtool)

Description: C library for high-throughput sequencing data formats.
---

httpie
===

Homepage:
* <https://httpie.org/>

Spack package:
* [httpie/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/httpie/package.py)

Versions: 0.9.9, 0.9.8

Build Dependencies: [py-pysocks](#py-pysocks), [py-setuptools](#py-setuptools), [py-requests](#py-requests), [py-pygments](#py-pygments), [python](#python), [py-argparse](#py-argparse)

Link Dependencies: [python](#python)

Run Dependencies: [py-pysocks](#py-pysocks), [py-setuptools](#py-setuptools), [py-requests](#py-requests), [py-pygments](#py-pygments), [python](#python), [py-argparse](#py-argparse)

Description: Modern command line HTTP client.

---

hub
===

Homepage:
* <https://github.com/github/hub>

Spack package:
* [hub/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/hub/package.py)

Versions: 2.2.3, 2.2.2, 2.2.1, 2.2.0, 1.12.4, head

Build Dependencies: [go](#go)

Link Dependencies: [go](#go)

Description: The GitHub git wrapper.

---

hunspell
===

Homepage:
* <http://hunspell.github.io/>

Spack package:
* [hunspell/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/hunspell/package.py)

Versions: 1.6.0

Build Dependencies: [autoconf](#autoconf), [automake](#automake), [libtool](#libtool), [m4](#m4)

Description: The most popular spellchecking library (sez the author...).
---

hwloc
===

Homepage:
* <http://www.open-mpi.org/projects/hwloc/>

Spack package:
* [hwloc/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/hwloc/package.py)

Versions: 2.0.2, 2.0.1, 2.0.0, 1.11.9, 1.11.8, 1.11.7, 1.11.6, 1.11.5, 1.11.4, 1.11.3, 1.11.2, 1.11.1, 1.9

Build Dependencies: [cuda](#cuda), pkgconfig, [libxml2](#libxml2), [libpciaccess](#libpciaccess), [numactl](#numactl), [cairo](#cairo)

Link Dependencies: [cuda](#cuda), [numactl](#numactl), [cairo](#cairo), [libxml2](#libxml2), [libpciaccess](#libpciaccess)

Description: The Hardware Locality (hwloc) software project. The Portable Hardware Locality (hwloc) software package provides a portable abstraction (across OS, versions, architectures, ...) of the hierarchical topology of modern architectures, including NUMA memory nodes, sockets, shared caches, cores and simultaneous multithreading. It also gathers various system attributes such as cache and memory information as well as the locality of I/O devices such as network interfaces, InfiniBand HCAs or GPUs. It primarily aims at helping applications with gathering information about modern computing hardware so as to exploit it accordingly and efficiently.
---

hybpiper
===

Homepage:
* <https://github.com/mossmatters/HybPiper>

Spack package:
* [hybpiper/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/hybpiper/package.py)

Versions: 1.2.0

Build Dependencies: [spades](#spades), [bwa](#bwa), [blast-plus](#blast-plus), [parallel](#parallel), [samtools](#samtools), [py-biopython](#py-biopython), [exonerate](#exonerate), [python](#python)

Link Dependencies: [spades](#spades), [bwa](#bwa), [blast-plus](#blast-plus), [parallel](#parallel), [samtools](#samtools), [exonerate](#exonerate)

Run Dependencies: [py-biopython](#py-biopython), [python](#python)

Description: HybPiper was designed for targeted sequence capture, in which DNA sequencing libraries are enriched for gene regions of interest, especially for phylogenetics. HybPiper is a suite of Python scripts that wrap and connect bioinformatics tools in order to extract target sequences from high-throughput DNA sequencing reads.

---

hydra
===

Homepage:
* <http://www.mpich.org>

Spack package:
* [hydra/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/hydra/package.py)

Versions: 3.2

Description: Hydra is a process management system for starting parallel jobs. Hydra is designed to natively work with existing launcher daemons (such as ssh, rsh, fork), as well as natively integrate with resource management systems (such as slurm, pbs, sge).
---

hydrogen
===

Homepage:
* <http://libelemental.org>

Spack package:
* [hydrogen/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/hydrogen/package.py)

Versions: develop, 1.0, 0.99

Build Dependencies: [gmp](#gmp), [cuda](#cuda), [intel-mkl](#intel-mkl), [mpfr](#mpfr), [mpc](#mpc), [cmake](#cmake), scalapack, [netlib-lapack](#netlib-lapack), [veclibfort](#veclibfort), lapack, [cub](#cub), mpi, [essl](#essl), [cudnn](#cudnn), [aluminum](#aluminum), [openblas](#openblas)

Link Dependencies: [gmp](#gmp), [cuda](#cuda), [intel-mkl](#intel-mkl), [mpfr](#mpfr), [mpc](#mpc), [aluminum](#aluminum), scalapack, [netlib-lapack](#netlib-lapack), [veclibfort](#veclibfort), lapack, [cub](#cub), mpi, [essl](#essl), [cudnn](#cudnn), [openblas](#openblas)

Description: Hydrogen: Distributed-memory dense and sparse-direct linear algebra and optimization library. Based on the Elemental library.

---

hypre
===

Homepage:
* <http://computation.llnl.gov/project/linear_solvers/software.php>

Spack package:
* [hypre/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/hypre/package.py)

Versions: develop, 2.15.1, 2.15.0, 2.14.0, 2.13.0, 2.12.1, 2.11.2, 2.11.1, 2.10.1, 2.10.0b, xsdk-0.2.0

Build Dependencies: mpi, blas, lapack

Link Dependencies: mpi, blas, lapack

Description: Hypre is a library of high performance preconditioners that features parallel multigrid methods for both structured and unstructured grid problems.
---

i3
===

Homepage:
* <https://i3wm.org/>

Spack package:
* [i3/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/i3/package.py)

Versions: 4.14.1

Build Dependencies: [xcb-util-wm](#xcb-util-wm), pkgconfig, [xcb-util-xrm](#xcb-util-xrm), [xcb-util-keysyms](#xcb-util-keysyms), [libev](#libev), [startup-notification](#startup-notification), [autoconf](#autoconf), [pango](#pango), [libxkbcommon](#libxkbcommon), [xcb-util-cursor](#xcb-util-cursor), [m4](#m4), [automake](#automake), [libtool](#libtool), [cairo](#cairo), [yajl](#yajl)

Link Dependencies: [xcb-util-wm](#xcb-util-wm), [xcb-util-cursor](#xcb-util-cursor), [libxkbcommon](#libxkbcommon), [xcb-util-xrm](#xcb-util-xrm), [xcb-util-keysyms](#xcb-util-keysyms), [startup-notification](#startup-notification), [libev](#libev), [cairo](#cairo), [yajl](#yajl), [pango](#pango)

Description: i3, improved tiling wm. i3 is a tiling window manager, completely written from scratch. The target platforms are GNU/Linux and BSD operating systems, our code is Free and Open Source Software (FOSS) under the BSD license. i3 is primarily targeted at advanced users and developers.

---

ibmisc
===

Homepage:
* <https://github.com/citibeth/ibmisc>

Spack package:
* [ibmisc/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/ibmisc/package.py)

Versions: 0.1.0

Build Dependencies: [everytrace](#everytrace), [eigen](#eigen), [googletest](#googletest), [py-numpy](#py-numpy), [blitz](#blitz), [netcdf-cxx4](#netcdf-cxx4), [cmake](#cmake), [py-cython](#py-cython), [udunits2](#udunits2), [proj](#proj), [boost](#boost), [doxygen](#doxygen), [python](#python)

Link Dependencies: [everytrace](#everytrace), [eigen](#eigen), [blitz](#blitz), [proj](#proj), [boost](#boost), [udunits2](#udunits2), [netcdf-cxx4](#netcdf-cxx4), [python](#python)

Run Dependencies: [py-numpy](#py-numpy), [py-cython](#py-cython)

Description: Misc. reusable utilities used by IceBin.
---

iceauth
===

Homepage:
* <http://cgit.freedesktop.org/xorg/app/iceauth>

Spack package:
* [iceauth/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/iceauth/package.py)

Versions: 1.0.7

Build Dependencies: [libice](#libice), [xproto](#xproto), pkgconfig, [util-macros](#util-macros)

Link Dependencies: [libice](#libice)

Description: The iceauth program is used to edit and display the authorization information used in connecting with ICE. It operates very much like the xauth program for X11 connection authentication records.

---

icedtea
===

Homepage:
* <http://icedtea.classpath.org/wiki/Main_Page>

Spack package:
* [icedtea/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/icedtea/package.py)

Versions: 3.4.0

Build Dependencies: [libxrender](#libxrender), pkgconfig, [gtkplus](#gtkplus), [libxcomposite](#libxcomposite), [libxau](#libxau), [libxdmcp](#libxdmcp), [zlib](#zlib), [libxt](#libxt), [libxext](#libxext), [gmake](#gmake), [libx11](#libx11), [libxi](#libxi), [lcms](#lcms), [giflib](#giflib), [libpng](#libpng), [libxtst](#libxtst), jpeg, [jdk](#jdk), [wget](#wget), [xproto](#xproto), [alsa-lib](#alsa-lib), [cups](#cups), [libxinerama](#libxinerama), [freetype](#freetype)

Link Dependencies: [libxrender](#libxrender), [libxinerama](#libxinerama), jpeg, [gtkplus](#gtkplus), [xproto](#xproto), [libxcomposite](#libxcomposite), [freetype](#freetype), [libxau](#libxau), [libxdmcp](#libxdmcp), [zlib](#zlib), [libxt](#libxt), [libxext](#libxext), [alsa-lib](#alsa-lib), [libx11](#libx11), [libxi](#libxi), [lcms](#lcms), [giflib](#giflib), [cups](#cups), [libxtst](#libxtst), [libpng](#libpng)

Description: The IcedTea project provides a harness to build the source code from http://openjdk.java.net using Free Software build tools and adds a number of key features to the upstream OpenJDK codebase. IcedTea requires an existing IcedTea or OpenJDK install to build.
---

icet
===

Homepage:
* <http://icet.sandia.gov>

Spack package:
* [icet/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/icet/package.py)

Versions: develop, 2.1.1

Build Dependencies: [cmake](#cmake), mpi

Link Dependencies: mpi

Description: The Image Composition Engine for Tiles (IceT) is a high-performance sort-last parallel rendering library.

---

ico
===

Homepage:
* <http://cgit.freedesktop.org/xorg/app/ico>

Spack package:
* [ico/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/ico/package.py)

Versions: 1.0.4

Build Dependencies: [xproto](#xproto), pkgconfig, [libx11](#libx11), [util-macros](#util-macros)

Link Dependencies: [libx11](#libx11)

Description: ico is a simple animation program that may be used for testing various X11 operations and extensions. It displays a wire-frame rotating polyhedron, with hidden lines removed, or a solid-fill polyhedron with hidden faces removed.

---

icu4c
===

Homepage:
* <http://site.icu-project.org/>

Spack package:
* [icu4c/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/icu4c/package.py)

Versions: 60.1, 58.2, 57.1

Description: ICU is a mature, widely used set of C/C++ and Java libraries providing Unicode and Globalization support for software applications. ICU4C is the C/C++ interface.
---

id3lib
===

Homepage:
* <http://id3lib.sourceforge.net/>

Spack package:
* [id3lib/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/id3lib/package.py)

Versions: 3.8.3

Build Dependencies: [zlib](#zlib)

Link Dependencies: [zlib](#zlib)

Description: Library for manipulating ID3v1 and ID3v2 tags.

---

idba
===

Homepage:
* <http://i.cs.hku.hk/~alse/hkubrg/projects/idba/>

Spack package:
* [idba/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/idba/package.py)

Versions: 1.1.3

Build Dependencies: [autoconf](#autoconf), [automake](#automake), [libtool](#libtool), [m4](#m4)

Description: IDBA is a practical iterative De Bruijn Graph De Novo Assembler for sequence assembly in bioinformatics.

---

igraph
===

Homepage:
* <http://igraph.org/>

Spack package:
* [igraph/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/igraph/package.py)

Versions: 0.7.1

Build Dependencies: [libxml2](#libxml2)

Link Dependencies: [libxml2](#libxml2)

Description: igraph is a library for creating and manipulating graphs.
---

igvtools
===

Homepage:
* <https://software.broadinstitute.org/software/igv/home>

Spack package:
* [igvtools/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/igvtools/package.py)

Versions: 2.3.98

Build Dependencies: java

Link Dependencies: java

Description: IGVTools is a suite of command-line utilities for preprocessing data files.

---

ilmbase
===

Homepage:
* <http://www.openexr.com/>

Spack package:
* [ilmbase/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/ilmbase/package.py)

Versions: 2.2.0, 2.1.0, 2.0.1, 1.0.2, 0.9.0

Description: OpenEXR ILM Base libraries (high dynamic-range image file format).

---

image-magick
===

Homepage:
* <http://www.imagemagick.org>

Spack package:
* [image-magick/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/image-magick/package.py)

Versions: 7.0.5-9, 7.0.2-7, 7.0.2-6

Build Dependencies: [ghostscript](#ghostscript), [ghostscript-fonts](#ghostscript-fonts), [libtiff](#libtiff), [fontconfig](#fontconfig), [libtool](#libtool), [pango](#pango), jpeg, [freetype](#freetype), [libpng](#libpng)

Link Dependencies: [ghostscript](#ghostscript), [ghostscript-fonts](#ghostscript-fonts), [libtiff](#libtiff), [fontconfig](#fontconfig), [pango](#pango), jpeg, [freetype](#freetype), [libpng](#libpng)

Description: ImageMagick is a software suite to create, edit, compose, or convert bitmap images.

---

imake
===

Homepage:
* <http://www.snake.net/software/imake-stuff/>

Spack package:
* [imake/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/imake/package.py)

Versions: 1.0.7

Build Dependencies: [xproto](#xproto), pkgconfig

Description: The imake build system.
---

imp
===

Homepage:
* <https://integrativemodeling.org>

Spack package:
* [imp/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/imp/package.py)

Versions: 2.8.0

Build Dependencies: [swig](#swig), [eigen](#eigen), [hdf5](#hdf5), [cmake](#cmake), [boost](#boost), [python](#python)

Link Dependencies: [boost](#boost), [swig](#swig), [python](#python), [eigen](#eigen), [hdf5](#hdf5)

Description: IMP, the Integrative Modeling Platform.

---

impute2
===

Homepage:
* <https://mathgen.stats.ox.ac.uk/impute/impute_v2.html#home>

Spack package:
* [impute2/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/impute2/package.py)

Versions: 2.3.2

Description: IMPUTE2 is a genotype imputation and haplotype phasing program based on ideas from Howie et al. 2009.

---

infernal
===

Homepage:
* <http://eddylab.org/infernal/>

Spack package:
* [infernal/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/infernal/package.py)

Versions: 1.1.2

Build Dependencies: mpi

Link Dependencies: mpi

Description: Infernal (INFERence of RNA ALignment) is for searching DNA sequence databases for RNA structure and sequence similarities. It is an implementation of a special case of profile stochastic context-free grammars called covariance models (CMs).

---

inputproto
===

Homepage:
* <http://cgit.freedesktop.org/xorg/proto/inputproto>

Spack package:
* [inputproto/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/inputproto/package.py)

Versions: 2.3.2

Build Dependencies: [util-macros](#util-macros), pkgconfig

Description: X Input Extension. This extension defines a protocol to provide additional input device management such as graphic tablets.
---

intel
===

Homepage:
* <https://software.intel.com/en-us/intel-parallel-studio-xe>

Spack package:
* [intel/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/intel/package.py)

Versions: 18.0.3, 18.0.2, 18.0.1, 18.0.0, 17.0.7, 17.0.6, 17.0.4, 17.0.3, 17.0.2, 17.0.1, 17.0.0, 16.0.4, 16.0.3, 16.0.2, 15.0.6, 15.0.1

Description: Intel Compilers.

---

intel-daal
===

Homepage:
* <https://software.intel.com/en-us/daal>

Spack package:
* [intel-daal/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/intel-daal/package.py)

Versions: 2019.0.117, 2018.3.222, 2018.2.199, 2018.1.163, 2018.0.128, 2017.4.239, 2017.3.196, 2017.2.174, 2017.1.132, 2017.0.098, 2016.3.210, 2016.2.181

Description: Intel Data Analytics Acceleration Library.

---

intel-gpu-tools
===

Homepage:
* <https://cgit.freedesktop.org/xorg/app/intel-gpu-tools/>

Spack package:
* [intel-gpu-tools/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/intel-gpu-tools/package.py)

Versions: 1.16

Build Dependencies: [util-macros](#util-macros), [glib](#glib), [libdrm](#libdrm), [libpciaccess](#libpciaccess), [cairo](#cairo), [bison](#bison), [flex](#flex), pkgconfig, [python](#python)

Link Dependencies: [libpciaccess](#libpciaccess), [glib](#glib), [cairo](#cairo), [libdrm](#libdrm)

Description: Intel GPU Tools is a collection of tools for development and testing of the Intel DRM driver. There are many macro-level test suites that get used against the driver, including xtest, rendercheck, piglit, and oglconform, but failures from those can be difficult to track down to kernel changes, and many require complicated build procedures or specific testing environments to get useful results. Therefore, Intel GPU Tools includes low-level tools and tests specifically for development and testing of the Intel DRM Driver.
---

intel-ipp
===

Homepage:
* <https://software.intel.com/en-us/intel-ipp>

Spack package:
* [intel-ipp/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/intel-ipp/package.py)

Versions: 2019.0.117, 2018.3.222, 2018.2.199, 2018.1.163, 2018.0.128, 2017.3.196, 2017.2.174, 2017.1.132, 2017.0.098, 9.0.3.210

Description: Intel Integrated Performance Primitives.

---

intel-mkl
===

Homepage:
* <https://software.intel.com/en-us/intel-mkl>

Spack package:
* [intel-mkl/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/intel-mkl/package.py)

Versions: 2019.0.117, 2018.3.222, 2018.2.199, 2018.1.163, 2018.0.128, 2017.4.239, 2017.3.196, 2017.2.174, 2017.1.132, 2017.0.098, 11.3.3.210, 11.3.2.181

Description: Intel Math Kernel Library.

---

intel-mkl-dnn
===

Homepage:
* <https://01.org/mkl-dnn>

Spack package:
* [intel-mkl-dnn/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/intel-mkl-dnn/package.py)

Versions: 0.11, 0.10, 0.9

Build Dependencies: [cmake](#cmake), [intel-mkl](#intel-mkl)

Link Dependencies: [intel-mkl](#intel-mkl)

Description: Intel(R) Math Kernel Library for Deep Neural Networks (Intel(R) MKL-DNN).
---

intel-mpi
===

Homepage:
* <https://software.intel.com/en-us/intel-mpi-library>

Spack package:
* [intel-mpi/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/intel-mpi/package.py)

Versions: 2019.0.117, 2018.3.222, 2018.2.199, 2018.1.163, 2018.0.128, 2017.4.239, 2017.3.196, 2017.2.174, 2017.1.132, 5.1.3.223

Description: Intel MPI.

---

intel-parallel-studio
===

Homepage:
* <https://software.intel.com/en-us/intel-parallel-studio-xe>

Spack package:
* [intel-parallel-studio/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/intel-parallel-studio/package.py)

Versions: professional.2018.3, professional.2018.2, professional.2018.1, professional.2018.0, professional.2017.7, professional.2017.6, professional.2017.5, professional.2017.4, professional.2017.3, professional.2017.2, professional.2017.1, professional.2017.0, professional.2016.4, professional.2016.3, professional.2016.2, professional.2016.1, professional.2016.0, professional.2015.6, professional.2015.1, composer.2018.3, composer.2018.2, composer.2018.1, composer.2018.0, composer.2017.7, composer.2017.6, composer.2017.4, composer.2017.3, composer.2017.2, composer.2017.1, composer.2017.0, composer.2016.4, composer.2016.3, composer.2016.2, composer.2015.6, composer.2015.1, cluster.2019.0, cluster.2018.3, cluster.2018.2, cluster.2018.1, cluster.2018.0, cluster.2017.7, cluster.2017.6, cluster.2017.5, cluster.2017.4, cluster.2017.3, cluster.2017.2, cluster.2017.1, cluster.2017.0, cluster.2016.4, cluster.2016.3, cluster.2016.2, cluster.2016.1, cluster.2016.0, cluster.2015.6, cluster.2015.1

Description: Intel Parallel Studio.
---

intel-tbb
===

Homepage:
* <http://www.threadingbuildingblocks.org/>

Spack package:
* [intel-tbb/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/intel-tbb/package.py)

Versions: 2019, 2018.6, 2018.5, 2018.4, 2018.3, 2018.2, 2018.1, 2018, 2017.8, 2017.7, 2017.6, 2017.5, 2017.4, 2017.3, 2017.2, 2017.1, 2017, 4.4.6, 4.4.5, 4.4.4, 4.4.3, 4.4.2, 4.4.1, 4.4

Build Dependencies: [cmake](#cmake)

Description:
Widely used C++ template library for task parallelism. Intel Threading Building Blocks (Intel TBB) lets you easily write parallel C++ programs that take full advantage of multicore performance, that are portable and composable, and that have future-proof scalability.

---

intel-xed
===

Homepage:
* <https://intelxed.github.io/>

Spack package:
* [intel-xed/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/intel-xed/package.py)

Versions: 2018.02.14

Build Dependencies: [python](#python)

Description:
The Intel X86 Encoder Decoder library for encoding and decoding x86 machine instructions (64- and 32-bit). Also includes libxed-ild, a lightweight library for decoding the length of an instruction.

---

intltool
===

Homepage:
* <https://freedesktop.org/wiki/Software/intltool/>

Spack package:
* [intltool/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/intltool/package.py)

Versions: 0.51.0

Build Dependencies: [perl](#perl), [perl-xml-parser](#perl-xml-parser)

Run Dependencies: [perl](#perl), [perl-xml-parser](#perl-xml-parser)

Description:
intltool is a set of tools to centralize translation of many different file formats using GNU gettext-compatible PO files.
---

ior
===

Homepage:
* <https://github.com/LLNL/ior>

Spack package:
* [ior/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/ior/package.py)

Versions: 3.0.1

Build Dependencies: mpi, [m4](#m4), [hdf5](#hdf5), [autoconf](#autoconf), [automake](#automake), [libtool](#libtool), [parallel-netcdf](#parallel-netcdf)

Link Dependencies: mpi, [parallel-netcdf](#parallel-netcdf), [hdf5](#hdf5)

Description:
The IOR software is used for benchmarking parallel file systems using POSIX, MPI-IO, or HDF5 interfaces.

---

iozone
===

Homepage:
* <http://www.iozone.org/>

Spack package:
* [iozone/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/iozone/package.py)

Versions: 3_465

Description:
IOzone is a filesystem benchmark tool. The benchmark generates and measures a variety of file operations. IOzone has been ported to many machines and runs under many operating systems.

---

iperf2
===

Homepage:
* <https://sourceforge.net/projects/iperf2>

Spack package:
* [iperf2/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/iperf2/package.py)

Versions: 2.0.12

Description:
This code is a continuation of the no-longer-maintained iperf 2.0.5 code base. Iperf 2.0.5 is still widely deployed and used by many for testing networks and for qualifying networking products.

---

iperf3
===

Homepage:
* <https://software.es.net/iperf/>

Spack package:
* [iperf3/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/iperf3/package.py)

Versions: 3.6

Description:
The iperf series of tools perform active measurements to determine the maximum achievable bandwidth on IP networks. iperf2 is a separately maintained project.
---

ipopt
===

Homepage:
* <https://projects.coin-or.org/Ipopt>

Spack package:
* [ipopt/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/ipopt/package.py)

Versions: 3.12.10, 3.12.9, 3.12.8, 3.12.7, 3.12.6, 3.12.5, 3.12.4, 3.12.3, 3.12.2, 3.12.1, 3.12.0

Build Dependencies: pkgconfig, [metis](#metis), [mumps](#mumps), blas, [coinhsl](#coinhsl), lapack

Link Dependencies: [mumps](#mumps), [coinhsl](#coinhsl), lapack, blas, [metis](#metis)

Description:
Ipopt (Interior Point OPTimizer, pronounced eye-pea-Opt) is a software package for large-scale nonlinear optimization.

---

isaac
===

Homepage:
* <http://computationalradiationphysics.github.io/isaac/>

Spack package:
* [isaac/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/isaac/package.py)

Versions: develop, 1.4.0, 1.3.3, 1.3.2, 1.3.1, 1.3.0, master

Build Dependencies: [cmake](#cmake)

Link Dependencies: [cuda](#cuda), mpi, jpeg, [jansson](#jansson), [boost](#boost), [icet](#icet)

Description:
In Situ Animation of Accelerated Computations: Header-Only Library

---

isaac-server
===

Homepage:
* <http://computationalradiationphysics.github.io/isaac/>

Spack package:
* [isaac-server/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/isaac-server/package.py)

Versions: develop, 1.4.0, 1.3.3, 1.3.2, 1.3.1, 1.3.0, master

Build Dependencies: [cmake](#cmake)

Link Dependencies: [boost](#boost), jpeg, [libwebsockets](#libwebsockets), [jansson](#jansson)

Description:
In Situ Animation of Accelerated Computations: Server

---

isl
===

Homepage:
* <http://isl.gforge.inria.fr>

Spack package:
* [isl/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/isl/package.py)

Versions: 0.19, 0.18, 0.15, 0.14

Build Dependencies: [gmp](#gmp)

Link Dependencies: [gmp](#gmp)

Description:
isl (Integer Set Library) is a thread-safe C library for manipulating sets and
relations of integer points bounded by affine constraints.

---

itstool
===

Homepage:
* <http://itstool.org/>

Spack package:
* [itstool/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/itstool/package.py)

Versions: 2.0.2, 2.0.1, 2.0.0, 1.2.0

Description:
ITS Tool allows you to translate your XML documents with PO files, using rules from the W3C Internationalization Tag Set (ITS) to determine what to translate and how to separate it into PO file messages.

---

itsx
===

Homepage:
* <http://microbiology.se/software/itsx/>

Spack package:
* [itsx/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/itsx/package.py)

Versions: 1.0.11

Build Dependencies: [perl](#perl), [hmmer](#hmmer)

Link Dependencies: [hmmer](#hmmer)

Run Dependencies: [perl](#perl)

Description:
Improved software detection and extraction of ITS1 and ITS2 from ribosomal ITS sequences of fungi and other eukaryotes for use in environmental sequencing.

---

jackcess
===

Homepage:
* <http://jackcess.sourceforge.net/>

Spack package:
* [jackcess/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/jackcess/package.py)

Versions: 2.1.12, 1.2.14.3

Build Dependencies: [jdk](#jdk)

Link Dependencies: [jdk](#jdk)

Run Dependencies: java, [commons-lang](#commons-lang), [commons-logging](#commons-logging)

Description:
Jackcess is a pure Java library for reading from and writing to MS Access databases (currently supporting versions 2000-2016).

---

jags
===

Homepage:
* <http://mcmc-jags.sourceforge.net/>

Spack package:
* [jags/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/jags/package.py)

Versions: 4.3.0, 4.2.0

Build Dependencies: blas, lapack

Link Dependencies: blas, lapack

Description:
JAGS is Just Another Gibbs Sampler.
It is a program for analysis of Bayesian hierarchical models using Markov Chain Monte Carlo (MCMC) simulation, not wholly unlike BUGS.

---

jansson
===

Homepage:
* <http://www.digip.org/jansson/>

Spack package:
* [jansson/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/jansson/package.py)

Versions: 2.9

Build Dependencies: [cmake](#cmake)

Description:
Jansson is a C library for encoding, decoding and manipulating JSON data.

---

jasper
===

Homepage:
* <https://www.ece.uvic.ca/~frodo/jasper/>

Spack package:
* [jasper/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/jasper/package.py)

Versions: 2.0.14, 1.900.1

Build Dependencies: [cmake](#cmake), gl, jpeg

Link Dependencies: gl, jpeg

Description:
Library for manipulating JPEG-2000 images

---

jbigkit
===

Homepage:
* <http://www.cl.cam.ac.uk/~mgk25/jbigkit/>

Spack package:
* [jbigkit/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/jbigkit/package.py)

Versions: 2.1, 1.6

Description:
JBIG-Kit is a software implementation of the JBIG1 data compression standard.

---

jchronoss
===

Homepage:
* <http://jchronoss.hpcframework.com>

Spack package:
* [jchronoss/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/jchronoss/package.py)

Versions: 1.2, 1.1.1, 1.1, 1.0

Build Dependencies: [cmake](#cmake), [libev](#libev), [libxml2](#libxml2), [ncurses](#ncurses), [libwebsockets](#libwebsockets)

Link Dependencies: [libev](#libev), [libxml2](#libxml2), [ncurses](#ncurses), [libwebsockets](#libwebsockets)

Description:
JCHRONOSS aims to help HPC application testing process to scale as much as the application does.
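The JSON round trip that C libraries such as jansson provide (encode a native structure to text, decode it back) can be illustrated with Python's standard-library `json` module. This is a sketch of the concept only, not jansson's C API; the sample record is made up:

```python
import json

# Encode a native structure to a JSON string, then decode it back.
record = {"package": "jansson", "versions": [2, 9]}
text = json.dumps(record, sort_keys=True)

# Decoding the encoded text recovers an equal structure.
assert json.loads(text) == record
print(text)
```

jansson exposes the same round trip in C through `json_dumps`/`json_loads` over reference-counted `json_t` values.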
---

jdk
===

Homepage:
* <http://www.oracle.com/technetwork/java/javase/downloads/index.html>

Spack package:
* [jdk/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/jdk/package.py)

Versions: 11.0.1, 10.0.2_13, 10.0.1_10, 1.8.0_181-b13, 1.8.0_172-b11, 1.8.0_141-b15, 1.8.0_131-b11, 1.8.0_92-b14, 1.8.0_73-b02, 1.8.0_66-b17, 1.7.0_80-b0

Description:
The Java Development Kit (JDK) released by Oracle Corporation in the form of a binary product aimed at Java developers. Includes a complete JRE plus tools for developing, debugging, and monitoring Java applications.

---

jellyfish
===

Homepage:
* <http://www.cbcb.umd.edu/software/jellyfish/>

Spack package:
* [jellyfish/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/jellyfish/package.py)

Versions: 2.2.7, 1.1.11

Build Dependencies: [perl](#perl), [python](#python)

Run Dependencies: [perl](#perl), [python](#python)

Description:
JELLYFISH is a tool for fast, memory-efficient counting of k-mers in DNA.

---

jemalloc
===

Homepage:
* <http://www.canonware.com/jemalloc/>

Spack package:
* [jemalloc/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/jemalloc/package.py)

Versions: 4.5.0, 4.4.0, 4.3.1, 4.2.1, 4.2.0, 4.1.0, 4.0.4

Description:
jemalloc is a general purpose malloc(3) implementation that emphasizes fragmentation avoidance and scalable concurrency support.

---

jmol
===

Homepage:
* <http://jmol.sourceforge.net/>

Spack package:
* [jmol/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/jmol/package.py)

Versions: 14.8.0

Run Dependencies: java

Description:
Jmol: an open-source Java viewer for chemical structures in 3D with features for chemicals, crystals, materials and biomolecules.
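The k-mer counting that JELLYFISH (above) performs at scale can be sketched in a few lines of Python. This toy counter only illustrates what a k-mer count is; it has none of the lock-free, memory-efficient hashing that JELLYFISH actually implements:

```python
from collections import Counter

def count_kmers(seq: str, k: int) -> Counter:
    """Count every overlapping k-mer (length-k substring) in a DNA sequence."""
    return Counter(seq[i:i + k] for i in range(len(seq) - k + 1))

# Example: the 3-mers of a short sequence.
counts = count_kmers("GATTACA", 3)
print(sorted(counts.items()))
```

A length-n sequence yields n - k + 1 overlapping k-mers, so "GATTACA" has five 3-mers, each occurring once.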
---

jq
===

Homepage:
* <https://stedolan.github.io/jq/>

Spack package:
* [jq/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/jq/package.py)

Versions: 1.5

Build Dependencies: [oniguruma](#oniguruma), [bison](#bison)

Link Dependencies: [oniguruma](#oniguruma)

Description:
jq is a lightweight and flexible command-line JSON processor.

---

json-c
===

Homepage:
* <https://github.com/json-c/json-c/wiki>

Spack package:
* [json-c/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/json-c/package.py)

Versions: 0.13.1, 0.12.1, 0.11

Build Dependencies: [autoconf](#autoconf)

Description:
A JSON implementation in C.

---

json-cwx
===

Homepage:
* <https://github.com/LLNL/json-cwx>

Spack package:
* [json-cwx/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/json-cwx/package.py)

Versions: 0.12

Build Dependencies: [autoconf](#autoconf), [automake](#automake), [libtool](#libtool), [m4](#m4)

Description:
JSON-C with Extensions

---

json-glib
===

Homepage:
* <https://developer.gnome.org/json-glib>

Spack package:
* [json-glib/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/json-glib/package.py)

Versions: 1.2.8

Build Dependencies: [glib](#glib)

Link Dependencies: [glib](#glib)

Description:
JSON-GLib is a library for reading and parsing JSON using GLib and GObject data types and API.

---

jsoncpp
===

Homepage:
* <https://github.com/open-source-parsers/jsoncpp>

Spack package:
* [jsoncpp/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/jsoncpp/package.py)

Versions: 1.7.3

Build Dependencies: [cmake](#cmake)

Test Dependencies: [python](#python)

Description:
JsonCpp is a C++ library that allows manipulating JSON values, including serialization and deserialization to and from strings.
It can also preserve existing comments in deserialization/serialization steps, making it a convenient format to store user input files.

---

judy
===

Homepage:
* <http://judy.sourceforge.net/>

Spack package:
* [judy/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/judy/package.py)

Versions: 1.0.5

Description:
Judy: General-purpose dynamic array, associative array and hash-trie.

---

julia
===

Homepage:
* <http://julialang.org>

Spack package:
* [julia/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/julia/package.py)

Versions: 0.6.2, 0.5.2, 0.5.1, 0.5.0, 0.4.7, 0.4.6, 0.4.5, 0.4.3, release-0.5, release-0.4, master

Build Dependencies: [git](#git), [openssl](#openssl), [cmake](#cmake), [curl](#curl), [binutils](#binutils), [python](#python), [m4](#m4)

Link Dependencies: [git](#git), [openssl](#openssl), [cmake](#cmake), [curl](#curl), [binutils](#binutils), [python](#python)

Run Dependencies: mpi, [py-matplotlib](#py-matplotlib), [hdf5](#hdf5)

Description:
The Julia Language: A fresh approach to technical computing

---

k8
===

Homepage:
* <https://github.com/attractivechaos/k8>

Spack package:
* [k8/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/k8/package.py)

Versions: 0.2.4

Run Dependencies: [zlib](#zlib)

Description:
K8 is a Javascript shell based on Google's V8 Javascript engine.

---

kahip
===

Homepage:
* <http://algo2.iti.kit.edu/documents/kahip/index.html>

Spack package:
* [kahip/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/kahip/package.py)

Versions: develop, 2.00

Build Dependencies: [scons](#scons), mpi, [argtable](#argtable)

Link Dependencies: mpi, [argtable](#argtable)

Description:
KaHIP - Karlsruhe High Quality Partitioning - is a family of graph partitioning programs.
It includes KaFFPa (Karlsruhe Fast Flow Partitioner), which is a multilevel graph partitioning algorithm, in its variants Strong, Eco and Fast; KaFFPaE (KaFFPaEvolutionary), which is a parallel evolutionary algorithm that uses KaFFPa to provide combine and mutation operations; as well as KaBaPE, which extends the evolutionary algorithm. Moreover, specialized techniques are included to partition road networks (Buffoon), to output a vertex separator from a given partition, or techniques geared towards efficient partitioning of social networks.

---

kaiju
===

Homepage:
* <https://github.com/bioinformatics-centre/kaiju>

Spack package:
* [kaiju/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/kaiju/package.py)

Versions: 1.6.2

Run Dependencies: [py-htseq](#py-htseq), [perl-io-compress](#perl-io-compress)

Description:
Kaiju is a program for the taxonomic classification of high-throughput sequencing reads.

---

kaks-calculator
===

Homepage:
* <https://sourceforge.net/projects/kakscalculator2>

Spack package:
* [kaks-calculator/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/kaks-calculator/package.py)

Versions: 2.0

Description:
KaKs_Calculator adopts model selection and model averaging to calculate nonsynonymous (Ka) and synonymous (Ks) substitution rates, attempting to include as many features as needed for accurately capturing evolutionary information in protein-coding sequences.
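The objective that graph partitioners like KaHIP minimize, the edge cut of a partition, is easy to state concretely. The sketch below (a hypothetical toy graph, in Python rather than through KaHIP's interfaces) counts edges whose endpoints land in different blocks:

```python
def edge_cut(edges, part):
    """Number of edges whose two endpoints lie in different blocks."""
    return sum(1 for u, v in edges if part[u] != part[v])

# Toy 4-node cycle graph split into two blocks of two nodes each.
edges = [(0, 1), (1, 2), (2, 3), (3, 0)]
part = {0: 0, 1: 0, 2: 1, 3: 1}
print(edge_cut(edges, part))  # 2: edges (1,2) and (3,0) cross the cut
```

A partitioner searches for a balanced block assignment `part` that makes this count as small as possible.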
---

kaldi
===

Homepage:
* <https://github.com/kaldi-asr/kaldi>

Spack package:
* [kaldi/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/kaldi/package.py)

Versions: 2018-07-11, 2015-10-07, master

Build Dependencies: [cuda](#cuda), blas, [openfst](#openfst)

Link Dependencies: [cuda](#cuda), blas, [openfst](#openfst)

Run Dependencies: [sph2pipe](#sph2pipe), [sctk](#sctk), [speex](#speex)

Description:
Kaldi is a toolkit for speech recognition written in C++ and licensed under the Apache License v2.0. Kaldi is intended for use by speech recognition researchers.

---

kallisto
===

Homepage:
* <http://pachterlab.github.io/kallisto>

Spack package:
* [kallisto/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/kallisto/package.py)

Versions: 0.43.1

Build Dependencies: [zlib](#zlib), [cmake](#cmake), [mpich](#mpich), [hdf5](#hdf5)

Link Dependencies: [zlib](#zlib), [mpich](#mpich), [hdf5](#hdf5)

Description:
kallisto is a program for quantifying abundances of transcripts from RNA-Seq data.

---

karma
===

Homepage:
* <https://www.atnf.csiro.au/computing/software/karma/>

Spack package:
* [karma/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/karma/package.py)

Versions: 1.7.25-common

Build Dependencies: [libx11](#libx11), [libxaw](#libxaw)

Run Dependencies: [libx11](#libx11), [libxaw](#libxaw)

Description:
Karma is a toolkit for interprocess communications, authentication, encryption, graphics display, user interface and manipulating the Karma network data structure. It contains KarmaLib (the structured libraries and API) and a large number of modules (applications) to perform many standard tasks.
---

kbproto
===

Homepage:
* <https://cgit.freedesktop.org/xorg/proto/kbproto>

Spack package:
* [kbproto/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/kbproto/package.py)

Versions: 1.0.7

Build Dependencies: [util-macros](#util-macros), pkgconfig

Description:
X Keyboard Extension. This extension defines a protocol to provide a number of new capabilities and controls for text keyboards.

---

kdiff3
===

Homepage:
* <http://kdiff3.sourceforge.net/>

Spack package:
* [kdiff3/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/kdiff3/package.py)

Versions: 0.9.98

Build Dependencies: [qt](#qt)

Link Dependencies: [qt](#qt)

Description:
Compare and merge 2 or 3 files or directories.

---

kealib
===

Homepage:
* <http://www.kealib.org/>

Spack package:
* [kealib/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/kealib/package.py)

Versions: develop, 1.4.10, 1.4.9, 1.4.8, 1.4.7

Build Dependencies: [cmake](#cmake), [hdf5](#hdf5)

Link Dependencies: [hdf5](#hdf5)

Description:
An HDF5 Based Raster File Format. KEALib provides an implementation of the GDAL data model. The format supports raster attribute tables, image pyramids, meta-data and in-built statistics while also handling very large files and compression throughout. Based on the HDF5 standard, it also provides a base from which other formats can be derived and is a good choice for long term data archiving. An independent software library (libkea) provides complete access to the KEA image format and a GDAL driver allowing KEA images to be used from any GDAL supported software. Development work on this project has been funded by Landcare Research.
---

kentutils
===

Homepage:
* <https://github.com/ENCODE-DCC/kentUtils>

Spack package:
* [kentutils/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/kentutils/package.py)

Versions: 302.1

Build Dependencies: [mariadb](#mariadb), [libpng](#libpng), [openssl](#openssl)

Link Dependencies: [mariadb](#mariadb), [libpng](#libpng), [openssl](#openssl)

Description:
Jim Kent command line bioinformatic utilities

---

kibana
===

Homepage:
* <https://www.elastic.co/products/kibana>

Spack package:
* [kibana/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/kibana/package.py)

Versions: 6.4.0

Run Dependencies: [jdk](#jdk)

Description:
Kibana lets you visualize your Elasticsearch data and navigate the Elastic Stack

---

kim-api
===

Homepage:
* <https://openkim.org/>

Spack package:
* [kim-api/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/kim-api/package.py)

Versions: develop, 2.0rc1

Build Dependencies: [cmake](#cmake)

Description:
OpenKIM is an online framework for making molecular simulations reliable, reproducible, and portable. Computer implementations of interatomic models are archived in OpenKIM, verified for coding integrity, and tested by computing their predictions for a variety of material properties. Models conforming to the KIM application programming interface (API) work seamlessly with major simulation codes that have adopted the KIM API standard.
---

kmergenie
===

Homepage:
* <http://kmergenie.bx.psu.edu/>

Spack package:
* [kmergenie/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/kmergenie/package.py)

Versions: 1.7044

Build Dependencies: [r](#r), [zlib](#zlib), [py-setuptools](#py-setuptools), [python](#python)

Link Dependencies: [zlib](#zlib)

Run Dependencies: [r](#r), [py-setuptools](#py-setuptools), [python](#python)

Description:
KmerGenie estimates the best k-mer length for genome de novo assembly.

---

kokkos
===

Homepage:
* <https://github.com/kokkos/kokkos>

Spack package:
* [kokkos/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/kokkos/package.py)

Versions: develop, 2.7.00, 2.5.00, 2.04.11, 2.04.04, 2.04.00, 2.03.13, 2.03.05, 2.03.00, 2.02.15, 2.02.07

Build Dependencies: [hwloc](#hwloc), [qthreads](#qthreads), [cuda](#cuda)

Link Dependencies: [hwloc](#hwloc), [qthreads](#qthreads), [cuda](#cuda)

Description:
Kokkos implements a programming model in C++ for writing performance portable applications targeting all major HPC platforms.

---

kraken
===

Homepage:
* <https://ccb.jhu.edu/software/kraken/>

Spack package:
* [kraken/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/kraken/package.py)

Versions: 1.0

Build Dependencies: [perl](#perl), [jellyfish](#jellyfish)

Link Dependencies: [jellyfish](#jellyfish)

Run Dependencies: [perl](#perl)

Description:
Kraken is a system for assigning taxonomic labels to short DNA sequences, usually obtained through metagenomic studies.
---

krb5
===

Homepage:
* <https://kerberos.org>

Spack package:
* [krb5/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/krb5/package.py)

Versions: 1.16.1

Build Dependencies: [openssl](#openssl)

Link Dependencies: [openssl](#openssl)

Description:
Network authentication protocol

---

krims
===

Homepage:
* <http://lazyten.org/krims>

Spack package:
* [krims/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/krims/package.py)

Versions: develop, 0.2.1

Build Dependencies: [cmake](#cmake)

Description:
The bucket of Krimskrams every C or C++ project needs

---

kripke
===

Homepage:
* <https://computation.llnl.gov/projects/co-design/kripke>

Spack package:
* [kripke/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/kripke/package.py)

Versions: 1.1

Build Dependencies: [cmake](#cmake), mpi

Link Dependencies: mpi

Description:
Kripke is a simple, scalable, 3D Sn deterministic particle transport proxy/mini app.

---

kvasir-mpl
===

Homepage:
* <https://github.com/kvasir-io/mpl>

Spack package:
* [kvasir-mpl/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/kvasir-mpl/package.py)

Versions: develop

Description:
Kvasir metaprogramming library

---

kvtree
===

Homepage:
* <https://github.com/ECP-VeloC/KVTree>

Spack package:
* [kvtree/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/kvtree/package.py)

Versions: 1.0.2, master

Build Dependencies: [cmake](#cmake), mpi

Link Dependencies: mpi

Description:
KVTree provides a fully extensible C datastructure modeled after perl hashes.
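The perl-hash-style nesting that KVTree models can be sketched with nested Python dicts. This illustrates only the data model (keys mapping to values or to further hashes); the path and values below are hypothetical, and KVTree's actual interface is a C API:

```python
def nested_set(tree: dict, path, value):
    """Walk/create nested dicts along path and set the leaf value,
    mimicking a perl-style hash of hashes."""
    for key in path[:-1]:
        tree = tree.setdefault(key, {})
    tree[path[-1]] = value

config = {}
nested_set(config, ("rank", "0", "timestep"), 42)
print(config)  # {'rank': {'0': {'timestep': 42}}}
```

Each intermediate key is created on demand, so arbitrarily deep trees can be built one leaf at a time.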
---

laghos
===

Homepage:
* <https://computation.llnl.gov/projects/co-design/laghos>

Spack package:
* [laghos/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/laghos/package.py)

Versions: develop, 1.1, 1.0

Build Dependencies: [mfem](#mfem)

Link Dependencies: [mfem](#mfem)

Description:
Laghos (LAGrangian High-Order Solver) is a CEED miniapp that solves the time-dependent Euler equations of compressible gas dynamics in a moving Lagrangian frame using unstructured high-order finite element spatial discretization and explicit high-order time-stepping.

---

lammps
===

Homepage:
* <http://lammps.sandia.gov/>

Spack package:
* [lammps/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/lammps/package.py)

Versions: develop, 20180822, 20180629, 20180316, 20180222, 20170922, 20170901

Build Dependencies: lapack, mpi, [netcdf](#netcdf), [latte](#latte), [voropp](#voropp), [cmake](#cmake), [hdf5](#hdf5), blas, [python](#python), [fftw](#fftw)

Link Dependencies: mpi, [netcdf](#netcdf), [latte](#latte), [hdf5](#hdf5), [fftw](#fftw), [voropp](#voropp), blas, [python](#python), lapack

Description:
LAMMPS stands for Large-scale Atomic/Molecular Massively Parallel Simulator. This package uses patch releases, not stable releases. See https://github.com/spack/spack/pull/5342 for a detailed discussion.

---

last
===

Homepage:
* <http://last.cbrc.jp/>

Spack package:
* [last/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/last/package.py)

Versions: 869

Description:
LAST finds similar regions between sequences, and aligns them. It is designed for comparing large datasets to each other (e.g. vertebrate genomes and/or large numbers of DNA reads).
---

lastz
===

Homepage:
* <https://lastz.github.io/lastz>

Spack package:
* [lastz/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/lastz/package.py)

Versions: 1.04.00

Description:
LASTZ is a program for aligning DNA sequences, a pairwise aligner.

---

latte
===

Homepage:
* <https://github.com/lanl/latte>

Spack package:
* [latte/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/latte/package.py)

Versions: develop, 1.2.1, 1.2.0, 1.1.1, 1.0.1

Build Dependencies: [cmake](#cmake), mpi, blas, lapack, [qmd-progress](#qmd-progress)

Link Dependencies: mpi, blas, lapack, [qmd-progress](#qmd-progress)

Description:
Open source density functional tight binding molecular dynamics.

---

launchmon
===

Homepage:
* <https://github.com/LLNL/LaunchMON>

Spack package:
* [launchmon/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/launchmon/package.py)

Versions: 1.0.2

Build Dependencies: [libgpg-error](#libgpg-error), [spectrum-mpi](#spectrum-mpi), [autoconf](#autoconf), [automake](#automake), [libtool](#libtool), [boost](#boost), [libgcrypt](#libgcrypt)

Link Dependencies: [spectrum-mpi](#spectrum-mpi), [boost](#boost), [libgpg-error](#libgpg-error), [libgcrypt](#libgcrypt), elf

Description:
Software infrastructure that enables HPC run-time tools to co-locate tool daemons with a parallel job.
---

lazyten
===

Homepage:
* <http://lazyten.org>

Spack package:
* [lazyten/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/lazyten/package.py)

Versions: develop, 0.4.1

Build Dependencies: [arpack-ng](#arpack-ng), [armadillo](#armadillo), [cmake](#cmake), [krims](#krims), blas, lapack

Link Dependencies: blas, [krims](#krims), [arpack-ng](#arpack-ng), [armadillo](#armadillo), lapack

Description:
Lightweight linear algebra library based on lazy matrices

---

lbann
===

Homepage:
* <http://software.llnl.gov/lbann/>

Spack package:
* [lbann/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/lbann/package.py)

Versions: develop, 0.95, 0.94, 0.93, 0.92, 0.91

Build Dependencies: [cuda](#cuda), [hwloc](#hwloc), [cmake](#cmake), [hydrogen](#hydrogen), [opencv](#opencv), [cnpy](#cnpy), [cub](#cub), mpi, [elemental](#elemental), [protobuf](#protobuf), [aluminum](#aluminum), [conduit](#conduit), [nccl](#nccl), [cudnn](#cudnn)

Link Dependencies: [cuda](#cuda), [hwloc](#hwloc), [aluminum](#aluminum), [hydrogen](#hydrogen), [conduit](#conduit), [opencv](#opencv), [cub](#cub), mpi, [elemental](#elemental), [protobuf](#protobuf), [cnpy](#cnpy), [nccl](#nccl), [cudnn](#cudnn)

Description:
LBANN: Livermore Big Artificial Neural Network Toolkit. A distributed memory, HPC-optimized, model and data parallel training toolkit for deep neural networks.
---

lbxproxy
===

Homepage:
* <http://cgit.freedesktop.org/xorg/app/lbxproxy>

Spack package:
* [lbxproxy/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/lbxproxy/package.py)

Versions: 1.0.3

Build Dependencies: [libice](#libice), [util-macros](#util-macros), [liblbxutil](#liblbxutil), [libx11](#libx11), [libxext](#libxext), [bigreqsproto](#bigreqsproto), [xproxymanagementprotocol](#xproxymanagementprotocol), pkgconfig, [xtrans](#xtrans)

Link Dependencies: [libice](#libice), [libxext](#libxext), [liblbxutil](#liblbxutil), [libx11](#libx11)

Description:
lbxproxy accepts client connections, multiplexes them over a single connection to the X server, and performs various optimizations on the X protocol to make it faster over low bandwidth and/or high latency connections. Note that the X server source from X.Org no longer supports the LBX extension, so this program is only useful in connecting to older X servers.

---

lbzip2
===

Homepage:
* <http://lbzip2.org/>

Spack package:
* [lbzip2/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/lbzip2/package.py)

Versions: 2.5

Description:
Multi-threaded compression utility with support for bzip2 compressed file format

---

lcals
===

Homepage:
* <https://computation.llnl.gov/projects/co-design/lcals>

Spack package:
* [lcals/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/lcals/package.py)

Versions: 1.0.2

Description:
LCALS ("Livermore Compiler Analysis Loop Suite") is a collection of loop kernels based, in part, on historical "Livermore Loops" benchmarks (See the 1986 technical report: "The Livermore Fortran Kernels: A Computer Test of the Numerical Performance Range", by <NAME>, UCRL-53745.). The suite contains facilities to generate timing statistics and reports.
---

lcms
===

Homepage:
* <http://www.littlecms.com>

Spack package:
* [lcms/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/lcms/package.py)

Versions: 2.9, 2.8, 2.6

Build Dependencies: [libtiff](#libtiff), [zlib](#zlib), jpeg

Link Dependencies: [libtiff](#libtiff), [zlib](#zlib), jpeg

Description:
Little CMS is a color management library. Implements fast transforms between ICC profiles. It is focused on speed, and is portable across several platforms (MIT license).

---

ldc
===

Homepage:
* <https://dlang.org/>

Spack package:
* [ldc/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/ldc/package.py)

Versions: 1.3.0

Build Dependencies: [zlib](#zlib), [libconfig](#libconfig), [cmake](#cmake), [curl](#curl), [binutils](#binutils), [ldc-bootstrap](#ldc-bootstrap), [llvm](#llvm), [libedit](#libedit)

Link Dependencies: [zlib](#zlib), [libconfig](#libconfig), [curl](#curl), [binutils](#binutils), [ldc-bootstrap](#ldc-bootstrap), [llvm](#llvm), [libedit](#libedit)

Run Dependencies: [binutils](#binutils)

Description:
The LDC project aims to provide a portable D programming language compiler with modern optimization and code generation capabilities. LDC is fully Open Source; the parts of the code not taken/adapted from other projects are BSD-licensed (see the LICENSE file for details).
Consult the D wiki for further information: <http://wiki.dlang.org/LDC>

---

ldc-bootstrap
===

Homepage:

* <https://dlang.org/>

Spack package:

* [ldc-bootstrap/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/ldc-bootstrap/package.py)

Versions: 0.17.4

Build Dependencies: [zlib](#zlib), [libconfig](#libconfig), [cmake](#cmake), [curl](#curl), [binutils](#binutils), [llvm](#llvm), [libedit](#libedit)

Link Dependencies: [zlib](#zlib), [libconfig](#libconfig), [curl](#curl), [binutils](#binutils), [llvm](#llvm), [libedit](#libedit)

Description: The LDC project aims to provide a portable D programming language compiler with modern optimization and code generation capabilities. LDC is fully Open Source; the parts of the code not taken/adapted from other projects are BSD-licensed (see the LICENSE file for details). Consult the D wiki for further information: <http://wiki.dlang.org/LDC>. This old version of the compiler is needed to bootstrap newer ones.

---

legion
===

Homepage:

* <http://legion.stanford.edu/>

Spack package:

* [legion/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/legion/package.py)

Versions: develop, 18.05.0, 18.02.0, 17.10.0, 17.08.0, 17.02.0

Build Dependencies: [cmake](#cmake), [gasnet](#gasnet)

Link Dependencies: [gasnet](#gasnet)

Description: Legion is a data-centric parallel programming system for writing portable high performance programs targeted at distributed heterogeneous architectures. Legion presents abstractions which allow programmers to describe properties of program data (e.g. independence, locality). By making the Legion programming system aware of the structure of program data, it can automate many of the tedious tasks programmers currently face, including correctly extracting task- and data-level parallelism and moving data around complex memory hierarchies.
A novel mapping interface provides explicit programmer controlled placement of data in the memory hierarchy and assignment of tasks to processors in a way that is orthogonal to correctness, thereby enabling easy porting and tuning of Legion applications to new architectures.

---

leveldb
===

Homepage:

* <https://github.com/google/leveldb>

Spack package:

* [leveldb/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/leveldb/package.py)

Versions: 1.20, 1.18

Build Dependencies: [snappy](#snappy)

Link Dependencies: [snappy](#snappy)

Description: LevelDB is a fast key-value storage library written at Google that provides an ordered mapping from string keys to string values.

---

lftp
===

Homepage:

* <http://lftp.yar.ru/>

Spack package:

* [lftp/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/lftp/package.py)

Versions: 4.8.1, 4.7.7, 4.6.4

Build Dependencies: [zlib](#zlib), [libiconv](#libiconv), [openssl](#openssl), [readline](#readline), [expat](#expat), [ncurses](#ncurses)

Link Dependencies: [zlib](#zlib), [libiconv](#libiconv), [openssl](#openssl), [readline](#readline), [expat](#expat), [ncurses](#ncurses)

Description: LFTP is a sophisticated file transfer program supporting a number of network protocols (ftp, http, sftp, fish, torrent).

---

libaec
===

Homepage:

* <https://gitlab.dkrz.de/k202009/libaec>

Spack package:

* [libaec/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/libaec/package.py)

Versions: 1.0.2, 1.0.1, 1.0.0

Build Dependencies: [cmake](#cmake)

Description: Libaec provides fast lossless compression of 1 up to 32 bit wide signed or unsigned integers (samples). It implements the Golomb-Rice compression method under the BSD license and includes a free drop-in replacement for the SZIP library.
---

libaio
===

Homepage:

* <http://lse.sourceforge.net/io/aio.html>

Spack package:

* [libaio/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/libaio/package.py)

Versions: 0.3.110

Description: This is the Linux native Asynchronous I/O interface library.

---

libapplewm
===

Homepage:

* <http://cgit.freedesktop.org/xorg/lib/libAppleWM>

Spack package:

* [libapplewm/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/libapplewm/package.py)

Versions: 1.4.1

Build Dependencies: [util-macros](#util-macros), pkgconfig, [libx11](#libx11), [libxext](#libxext), [xextproto](#xextproto), [applewmproto](#applewmproto)

Link Dependencies: [libx11](#libx11), [libxext](#libxext)

Description: AppleWM is a simple library designed to interface with the Apple-WM extension. This extension allows X window managers to better interact with the Mac OS X Aqua user interface when running X11 in a rootless mode.

---

libarchive
===

Homepage:

* <http://www.libarchive.org>

Spack package:

* [libarchive/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/libarchive/package.py)

Versions: 3.3.2, 3.2.1, 3.1.2, 3.1.1, 3.1.0

Build Dependencies: [zlib](#zlib), [bzip2](#bzip2), [lz4](#lz4), [nettle](#nettle), [openssl](#openssl), [xz](#xz), [lzo](#lzo), [expat](#expat), [lzma](#lzma), [libxml2](#libxml2)

Link Dependencies: [zlib](#zlib), [bzip2](#bzip2), [lz4](#lz4), [nettle](#nettle), [openssl](#openssl), [xz](#xz), [lzo](#lzo), [expat](#expat), [lzma](#lzma), [libxml2](#libxml2)

Description: libarchive: C library and command-line tools for reading and writing tar, cpio, zip, ISO, and other archive formats.
---

libassuan
===

Homepage:

* <https://gnupg.org/software/libassuan/index.html>

Spack package:

* [libassuan/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/libassuan/package.py)

Versions: 2.4.5, 2.4.3

Build Dependencies: [libgpg-error](#libgpg-error)

Link Dependencies: [libgpg-error](#libgpg-error)

Description: Libassuan is a small library implementing the so-called Assuan protocol.

---

libatomic-ops
===

Homepage:

* <https://github.com/ivmai/libatomic_ops>

Spack package:

* [libatomic-ops/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/libatomic-ops/package.py)

Versions: 7.4.4

Description: This package provides semi-portable access to hardware-provided atomic memory update operations on a number of architectures.

---

libbeagle
===

Homepage:

* <https://github.com/beagle-dev/beagle-lib>

Spack package:

* [libbeagle/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/libbeagle/package.py)

Versions: 2.1.2

Build Dependencies: [subversion](#subversion), pkgconfig, [m4](#m4), java, [autoconf](#autoconf), [automake](#automake), [libtool](#libtool)

Description: Beagle performs genotype calling, genotype phasing, imputation of ungenotyped markers, and identity-by-descent segment detection.

---

libbeato
===

Homepage:

* <https://github.com/CRG-Barcelona/libbeato>

Spack package:

* [libbeato/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/libbeato/package.py)

Versions: master

Description: libbeato is a C library containing routines for various uses in Genomics, and includes a copy of the freeware portion of the C library from UCSC's Genome Browser Group.
---

libbsd
===

Homepage:

* <https://libbsd.freedesktop.org/wiki/>

Spack package:

* [libbsd/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/libbsd/package.py)

Versions: 0.8.6

Description: This library provides useful functions commonly found on BSD systems, and lacking on others like GNU systems, thus making it easier to port projects with strong BSD origins, without needing to embed the same code over and over again on each project.

---

libbson
===

Homepage:

* <https://github.com/mongodb/libbson>

Spack package:

* [libbson/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/libbson/package.py)

Versions: 1.9.1, 1.8.1, 1.8.0, 1.7.0, 1.6.3, 1.6.2, 1.6.1

Build Dependencies: [autoconf](#autoconf), [automake](#automake), [libtool](#libtool), [m4](#m4)

Description: libbson is a library providing useful routines related to building, parsing, and iterating BSON documents.

---

libcanberra
===

Homepage:

* <http://0pointer.de/lennart/projects/libcanberra/>

Spack package:

* [libcanberra/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/libcanberra/package.py)

Versions: 0.30

Build Dependencies: [libxrender](#libxrender), pkgconfig, [libxinerama](#libxinerama), [libxcursor](#libxcursor), [libxdamage](#libxdamage), [libxcomposite](#libxcomposite), [libxcb](#libxcb), [libxrandr](#libxrandr), [libxau](#libxau), [libxext](#libxext), [libvorbis](#libvorbis), [libx11](#libx11), [libxfixes](#libxfixes), [gtkplus](#gtkplus)

Link Dependencies: [libxrender](#libxrender), [libxcursor](#libxcursor), [libxdamage](#libxdamage), [libxcomposite](#libxcomposite), [libxcb](#libxcb), [libxrandr](#libxrandr), [libxau](#libxau), [libxext](#libxext), [libx11](#libx11), [libvorbis](#libvorbis), [libxfixes](#libxfixes), [libxinerama](#libxinerama), [gtkplus](#gtkplus)

Description: libcanberra is an implementation of the XDG Sound Theme and Name
Specifications, for generating event sounds on free desktops, such as GNOME.

---

libcap
===

Homepage:

* <https://sites.google.com/site/fullycapable/>

Spack package:

* [libcap/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/libcap/package.py)

Versions: 2.25

Description: Libcap implements the user-space interfaces to the POSIX 1003.1e capabilities available in Linux kernels. These capabilities are a partitioning of the all powerful root privilege into a set of distinct privileges.

---

libceed
===

Homepage:

* <https://github.com/CEED/libCEED>

Spack package:

* [libceed/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/libceed/package.py)

Versions: develop, 0.2, 0.1

Build Dependencies: [occa](#occa)

Link Dependencies: [occa](#occa)

Description: The CEED API Library: Code for Efficient Extensible Discretizations.

---

libcerf
===

Homepage:

* <http://sourceforge.net/projects/libcerf>

Spack package:

* [libcerf/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/libcerf/package.py)

Versions: 1.3

Description: A self-contained C library providing complex error functions, based on Faddeeva's plasma dispersion function w(z). Also provides Dawson's integral and Voigt's convolution of a Gaussian and a Lorentzian.

---

libcheck
===

Homepage:

* <https://libcheck.github.io/check/index.html>

Spack package:

* [libcheck/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/libcheck/package.py)

Versions: 0.12.0, 0.11.0, 0.10.0

Build Dependencies: [cmake](#cmake)

Description: A unit testing framework for C.
---

libcint
===

Homepage:

* <https://github.com/sunqm/libcint>

Spack package:

* [libcint/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/libcint/package.py)

Versions: 3.0.13, 3.0.12, 3.0.11, 3.0.10, 3.0.8, 3.0.7, 3.0.6, 3.0.5, 3.0.4

Build Dependencies: [cmake](#cmake), [py-numpy](#py-numpy), blas, [python](#python)

Link Dependencies: blas

Test Dependencies: [py-numpy](#py-numpy), [python](#python)

Description: Library for analytical Gaussian integrals for quantum chemistry.

---

libcircle
===

Homepage:

* <https://github.com/hpc/libcircle>

Spack package:

* [libcircle/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/libcircle/package.py)

Versions: 0.2.1-rc.1

Build Dependencies: mpi

Link Dependencies: mpi

Description: libcircle provides an efficient distributed queue on a cluster, using self-stabilizing work stealing.

---

libconfig
===

Homepage:

* <http://www.hyperrealm.com/libconfig/>

Spack package:

* [libconfig/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/libconfig/package.py)

Versions: 1.5

Build Dependencies: [autoconf](#autoconf), [automake](#automake), [libtool](#libtool), [m4](#m4)

Description: C/C++ Configuration File Library.

---

libcroco
===

Homepage:

* <https://developer.gnome.org/libcroco>

Spack package:

* [libcroco/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/libcroco/package.py)

Versions: 0.6.12

Build Dependencies: [glib](#glib), [libxml2](#libxml2)

Link Dependencies: [glib](#glib), [libxml2](#libxml2)

Description: Libcroco is a standalone CSS2 parsing and manipulation library.
---

libctl
===

Homepage:

* <http://ab-initio.mit.edu/wiki/index.php/Libctl>

Spack package:

* [libctl/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/libctl/package.py)

Versions: 3.2.2

Build Dependencies: [guile](#guile)

Link Dependencies: [guile](#guile)

Description: libctl is a free Guile-based library implementing flexible control files for scientific simulations.

---

libdivsufsort
===

Homepage:

* <https://github.com/y-256/libdivsufsort>

Spack package:

* [libdivsufsort/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/libdivsufsort/package.py)

Versions: 2.0.1

Build Dependencies: [cmake](#cmake)

Description: libdivsufsort is a software library that implements a lightweight suffix array construction algorithm.

---

libdmx
===

Homepage:

* <http://cgit.freedesktop.org/xorg/lib/libdmx>

Spack package:

* [libdmx/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/libdmx/package.py)

Versions: 1.1.3

Build Dependencies: [util-macros](#util-macros), pkgconfig, [libxext](#libxext), [dmxproto](#dmxproto), [libx11](#libx11), [xextproto](#xextproto)

Link Dependencies: [libx11](#libx11), [libxext](#libxext)

Description: libdmx - X Window System DMX (Distributed Multihead X) extension library.

---

libdrm
===

Homepage:

* <http://dri.freedesktop.org/libdrm/>

Spack package:

* [libdrm/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/libdrm/package.py)

Versions: 2.4.81, 2.4.75, 2.4.70, 2.4.59, 2.4.33

Build Dependencies: [libpciaccess](#libpciaccess), pkgconfig, [libpthread-stubs](#libpthread-stubs)

Link Dependencies: [libpciaccess](#libpciaccess), [libpthread-stubs](#libpthread-stubs)

Description: A userspace library for accessing the DRM, direct rendering manager, on Linux, BSD and other systems supporting the ioctl interface.
---

libdwarf
===

Homepage:

* <http://www.prevanders.net/dwarf.html>

Spack package:

* [libdwarf/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/libdwarf/package.py)

Versions: 20180129, 20160507, 20130729, 20130207, 20130126

Link Dependencies: [zlib](#zlib), [elfutils](#elfutils), elf

Description: The DWARF Debugging Information Format is of interest to programmers working on compilers and debuggers (and anyone interested in reading or writing DWARF information). It was developed by a committee (known as the PLSIG at the time) starting around 1991. Starting around 1991 SGI developed the libdwarf and dwarfdump tools for internal use and as part of SGI IRIX developer tools. Since that time dwarfdump and libdwarf have been shipped (as an executable and archive respectively, not source) with every release of the SGI MIPS/IRIX C compiler.

---

libedit
===

Homepage:

* <http://thrysoee.dk/editline/>

Spack package:

* [libedit/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/libedit/package.py)

Versions: 3.1-20170329, 3.1-20160903, 3.1-20150325

Build Dependencies: [ncurses](#ncurses)

Link Dependencies: [ncurses](#ncurses)

Description: An autotools compatible port of the NetBSD editline library.

---

libelf
===

Homepage:

* <http://www.mr511.de/software/english.html>

Spack package:

* [libelf/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/libelf/package.py)

Versions: 0.8.13, 0.8.12

Description: libelf lets you read, modify or create ELF object files in an architecture-independent way. The library takes care of size and endian issues, e.g. you can process a file for SPARC processors on an Intel-based system.
---

libemos
===

Homepage:

* <https://software.ecmwf.int/wiki/display/EMOS/Emoslib>

Spack package:

* [libemos/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/libemos/package.py)

Versions: 4.5.1, 4.5.0, 4.4.9, 4.4.7, 4.4.2

Build Dependencies: [cmake](#cmake), pkgconfig, [grib-api](#grib-api), [fftw](#fftw), [eccodes](#eccodes)

Link Dependencies: [grib-api](#grib-api), [fftw](#fftw), [eccodes](#eccodes)

Description: The Interpolation library (EMOSLIB) includes Interpolation software and BUFR & CREX encoding/decoding routines.

---

libepoxy
===

Homepage:

* <https://github.com/anholt/libepoxy>

Spack package:

* [libepoxy/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/libepoxy/package.py)

Versions: 1.4.3, 1.3.1

Build Dependencies: pkgconfig, [meson](#meson), [mesa](#mesa)

Link Dependencies: [meson](#meson), [mesa](#mesa)

Description: Epoxy is a library for handling OpenGL function pointer management for you.

---

libev
===

Homepage:

* <http://software.schmorp.de/pkg/libev.html>

Spack package:

* [libev/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/libev/package.py)

Versions: develop, 4.24

Build Dependencies: [autoconf](#autoconf), [automake](#automake), [libtool](#libtool), [m4](#m4)

Description: A full-featured and high-performance event loop that is loosely modelled after libevent, but without its limitations and bugs.
---

libevent
===

Homepage:

* <http://libevent.org>

Spack package:

* [libevent/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/libevent/package.py)

Versions: 2.0.21, 2.0.20, 2.0.19, 2.0.18, 2.0.17, 2.0.16, 2.0.15, 2.0.14, 2.0.13, 2.0.12

Build Dependencies: [openssl](#openssl)

Link Dependencies: [openssl](#openssl)

Description: The libevent API provides a mechanism to execute a callback function when a specific event occurs on a file descriptor or after a timeout has been reached. Furthermore, libevent also supports callbacks due to signals or regular timeouts.

---

libevpath
===

Homepage:

* <https://github.com/GTkorvo/evpath>

Spack package:

* [libevpath/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/libevpath/package.py)

Versions: develop, 4.4.0, 4.2.4, 4.2.1, 4.1.2, 4.1.1

Build Dependencies: [libffs](#libffs), [cmake](#cmake), [gtkorvo-enet](#gtkorvo-enet)

Link Dependencies: [libffs](#libffs), [gtkorvo-enet](#gtkorvo-enet)

Description: EVpath is an event transport middleware layer designed to allow for the easy implementation of overlay networks, with active data processing, routing and management at all points in the overlay. EVPath is designed for high performance systems.

---

libfabric
===

Homepage:

* <https://libfabric.org/>

Spack package:

* [libfabric/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/libfabric/package.py)

Versions: develop, 1.6.1, 1.6.0, 1.5.3, 1.5.0, 1.4.2

Build Dependencies: [opa-psm2](#opa-psm2), [rdma-core](#rdma-core), [ucx](#ucx), [m4](#m4), [autoconf](#autoconf), [automake](#automake), [libtool](#libtool), [psm](#psm)

Link Dependencies: [opa-psm2](#opa-psm2), [rdma-core](#rdma-core), [ucx](#ucx), [psm](#psm)

Description: The Open Fabrics Interfaces (OFI) is a framework focused on exporting fabric communication services to applications.
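The callback-on-event model the libevent entry above describes (run a function when a file descriptor becomes ready, with a timeout fallback) can be sketched with Python's standard-library `selectors` module. This is an analogous stdlib pattern, not libevent's own API:

```python
import selectors
import socket

# An event loop and a connected socket pair to watch.
sel = selectors.DefaultSelector()
recv_sock, send_sock = socket.socketpair()
recv_sock.setblocking(False)

results = []

def on_readable(sock):
    # Callback fired when the descriptor becomes readable,
    # analogous to a libevent read callback.
    results.append(sock.recv(1024))

# Associate the callback with the fd and event type.
sel.register(recv_sock, selectors.EVENT_READ, on_readable)

send_sock.send(b"ping")

# Dispatch once; the timeout bounds the wait if nothing is ready.
for key, _mask in sel.select(timeout=1.0):
    key.data(key.fileobj)

sel.close()
recv_sock.close()
send_sock.close()

print(results)  # [b'ping']
```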
---

libffi
===

Homepage:

* <https://sourceware.org/libffi/>

Spack package:

* [libffi/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/libffi/package.py)

Versions: 3.2.1

Description: The libffi library provides a portable, high level programming interface to various calling conventions. This allows a programmer to call any function specified by a call interface description at run time.

---

libffs
===

Homepage:

* <http://www.cc.gatech.edu/systems/projects/FFS>

Spack package:

* [libffs/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/libffs/package.py)

Versions: develop, 1.5, 1.1.1, 1.1

Build Dependencies: [gtkorvo-dill](#gtkorvo-dill), [gtkorvo-atl](#gtkorvo-atl), [bison](#bison), [cmake](#cmake), [flex](#flex), [gtkorvo-cercs-env](#gtkorvo-cercs-env)

Link Dependencies: [gtkorvo-dill](#gtkorvo-dill), [gtkorvo-atl](#gtkorvo-atl)

Description: FFS is a middleware library for data communication, including representation, processing and marshaling that preserves the performance of traditional approaches while relaxing the requirement of a priori knowledge and providing complex run-time flexibility.

---

libfontenc
===

Homepage:

* <http://cgit.freedesktop.org/xorg/lib/libfontenc>

Spack package:

* [libfontenc/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/libfontenc/package.py)

Versions: 1.1.3

Build Dependencies: [zlib](#zlib), [xproto](#xproto), pkgconfig, [util-macros](#util-macros)

Link Dependencies: [zlib](#zlib)

Description: libfontenc - font encoding library.
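As an aside on the libffi entry above: Python's `ctypes` module is built on libffi, so it gives a convenient view of the idea of calling a function chosen at run time through a declared call interface. The choice of `abs` from the C library is just an illustration:

```python
import ctypes

# Load the symbols of the current process (pulls in the C library
# on POSIX systems; on Windows one would load a DLL by name instead).
libc = ctypes.CDLL(None)

# Declare the call interface: int abs(int). This declaration is what
# libffi uses under the hood to marshal arguments and the return value.
libc.abs.argtypes = [ctypes.c_int]
libc.abs.restype = ctypes.c_int

print(libc.abs(-42))  # 42
```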
---

libfs
===

Homepage:

* <http://cgit.freedesktop.org/xorg/lib/libFS>

Spack package:

* [libfs/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/libfs/package.py)

Versions: 1.0.7

Build Dependencies: [util-macros](#util-macros), pkgconfig, [xtrans](#xtrans), [fontsproto](#fontsproto), [xproto](#xproto)

Description: libFS - X Font Service client library. This library is used by clients of X Font Servers (xfs), such as xfsinfo, fslsfonts, and the X servers themselves.

---

libgcrypt
===

Homepage:

* <http://www.gnu.org/software/libgcrypt/>

Spack package:

* [libgcrypt/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/libgcrypt/package.py)

Versions: 1.8.1, 1.7.6, 1.6.2

Build Dependencies: [libgpg-error](#libgpg-error)

Link Dependencies: [libgpg-error](#libgpg-error)

Description: Libgcrypt is a general purpose cryptographic library based on the code from GnuPG. It provides functions for all cryptographic building blocks: symmetric ciphers, hash algorithms, MACs, public key algorithms, large integer functions, random numbers and a lot of supporting functions.

---

libgd
===

Homepage:

* <https://github.com/libgd/libgd>

Spack package:

* [libgd/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/libgd/package.py)

Versions: 2.2.4, 2.2.3, 2.1.1

Build Dependencies: [libiconv](#libiconv), pkgconfig, [libtool](#libtool), [libpng](#libpng), [m4](#m4), [autoconf](#autoconf), [libtiff](#libtiff), [fontconfig](#fontconfig), [automake](#automake), [gettext](#gettext), jpeg

Link Dependencies: [libtiff](#libtiff), [libiconv](#libiconv), [libpng](#libpng), [fontconfig](#fontconfig), jpeg

Description: GD is an open source code library for the dynamic creation of images by programmers. GD is written in C, and "wrappers" are available for Perl, PHP and other languages. GD creates PNG, JPEG, GIF, WebP, XPM, BMP images, among other formats.
GD is commonly used to generate charts, graphics, thumbnails, and most anything else, on the fly. While not restricted to use on the web, the most common applications of GD involve website development.

---

libgeotiff
===

Homepage:

* <https://trac.osgeo.org/geotiff/>

Spack package:

* [libgeotiff/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/libgeotiff/package.py)

Versions: 1.4.2

Build Dependencies: [zlib](#zlib), [libtiff](#libtiff), [proj](#proj), jpeg

Link Dependencies: [zlib](#zlib), [libtiff](#libtiff), [proj](#proj), jpeg

Description: GeoTIFF represents an effort by over 160 different remote sensing, GIS, cartographic, and surveying related companies and organizations to establish a TIFF based interchange format for georeferenced raster imagery.

---

libgit2
===

Homepage:

* <https://libgit2.github.com/>

Spack package:

* [libgit2/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/libgit2/package.py)

Versions: 0.26.0, 0.24.2

Build Dependencies: [cmake](#cmake), [libssh2](#libssh2)

Link Dependencies: [libssh2](#libssh2)

Description: libgit2 is a portable, pure C implementation of the Git core methods provided as a re-entrant linkable library with a solid API, allowing you to write native speed custom Git applications in any language which supports C bindings.

---

libgpg-error
===

Homepage:

* <https://www.gnupg.org/related_software/libgpg-error>

Spack package:

* [libgpg-error/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/libgpg-error/package.py)

Versions: 1.27, 1.21, 1.18

Description: Libgpg-error is a small library that defines common error values for all GnuPG components. Among these are GPG, GPGSM, GPGME, GPG-Agent, libgcrypt, Libksba, DirMngr, Pinentry, SmartCard Daemon and possibly more in the future.
---

libgpuarray
===

Homepage:

* <http://deeplearning.net/software/libgpuarray/>

Spack package:

* [libgpuarray/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/libgpuarray/package.py)

Versions: 0.7.5, 0.7.4, 0.7.3, 0.7.2, 0.7.1, 0.7.0, 0.6.9, 0.6.2, 0.6.1, 0.6.0

Build Dependencies: [libcheck](#libcheck), [cmake](#cmake), [cuda](#cuda)

Link Dependencies: [libcheck](#libcheck), [cuda](#cuda)

Description: Make a common GPU ndarray (n-dimensional array) that can be reused by all projects, that is as future-proof as possible, while keeping it easy to use for simple needs and quick tests.

---

libgridxc
===

Homepage:

* <https://launchpad.net/libgridxc>

Spack package:

* [libgridxc/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/libgridxc/package.py)

Versions: 0.7.6

Description: A library to compute the exchange and correlation energy and potential in spherical (i.e. an atom) or periodic systems.

---

libgtextutils
===

Homepage:

* <https://github.com/agordon/libgtextutils>

Spack package:

* [libgtextutils/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/libgtextutils/package.py)

Versions: 0.7

Description: Gordon's Text utils Library.

---

libharu
===

Homepage:

* <http://libharu.org>

Spack package:

* [libharu/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/libharu/package.py)

Versions: 2.3.0, 2.2.0, master

Build Dependencies: [zlib](#zlib), [autoconf](#autoconf), [automake](#automake), [libtool](#libtool), [libpng](#libpng)

Link Dependencies: [zlib](#zlib), [libpng](#libpng)

Description: libharu - free PDF library. Haru is a free, cross platform, open-sourced software library for generating PDF.
---

libhio
===

Homepage:

* <https://github.com/hpc/libhio>

Spack package:

* [libhio/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/libhio/package.py)

Versions: 1.4.1.2, 1.4.1.0

Build Dependencies: [json-c](#json-c), [bzip2](#bzip2), pkgconfig, mpi, [hdf5](#hdf5)

Link Dependencies: [json-c](#json-c), [bzip2](#bzip2), mpi, [hdf5](#hdf5)

Description: libHIO is a flexible, high-performance parallel IO package developed at LANL. libHIO supports IO to either a conventional PFS or to Cray DataWarp with management of Cray DataWarp space and stage-in and stage-out from and to the PFS.

---

libiberty
===

Homepage:

* <https://www.gnu.org/software/binutils/>

Spack package:

* [libiberty/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/libiberty/package.py)

Versions: 2.31.1, 2.30, 2.29.1, 2.28.1

Description: The libiberty.a library from GNU binutils. Libiberty provides demangling and support functions for the GNU toolchain.

---

libice
===

Homepage:

* <http://cgit.freedesktop.org/xorg/lib/libICE>

Spack package:

* [libice/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/libice/package.py)

Versions: 1.0.9

Build Dependencies: [util-macros](#util-macros), pkgconfig, [xtrans](#xtrans), [xproto](#xproto)

Description: libICE - Inter-Client Exchange Library.

---

libiconv
===

Homepage:

* <https://www.gnu.org/software/libiconv/>

Spack package:

* [libiconv/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/libiconv/package.py)

Versions: 1.15, 1.14

Description: GNU libiconv provides an implementation of the iconv() function and the iconv program for character set conversion.
---

libint
===

Homepage:

* <https://github.com/evaleev/libint>

Spack package:

* [libint/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/libint/package.py)

Versions: 2.2.0, 2.1.0, 1.1.6, 1.1.5

Build Dependencies: [gmp](#gmp), [autoconf](#autoconf), [automake](#automake), [libtool](#libtool), [boost](#boost)

Link Dependencies: [gmp](#gmp), [boost](#boost)

Description: Libint is a high-performance library for computing Gaussian integrals in quantum mechanics.

---

libjpeg
===

Homepage:

* <http://www.ijg.org>

Spack package:

* [libjpeg/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/libjpeg/package.py)

Versions: 9c, 9b, 9a

Description: libjpeg is a widely used free library with functions for handling the JPEG image data format. It implements a JPEG codec (encoding and decoding) alongside various utilities for handling JPEG data.

---

libjpeg-turbo
===

Homepage:

* <https://libjpeg-turbo.org/>

Spack package:

* [libjpeg-turbo/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/libjpeg-turbo/package.py)

Versions: 1.5.90, 1.5.3, 1.5.0, 1.3.1

Build Dependencies: [autoconf](#autoconf), [automake](#automake), [libtool](#libtool), [nasm](#nasm), [cmake](#cmake)

Description: libjpeg-turbo is a fork of the original IJG libjpeg which uses SIMD to accelerate baseline JPEG compression and decompression. libjpeg is a library that implements JPEG image encoding, decoding and transcoding.
---

libksba
===

Homepage:

* <https://gnupg.org/software/libksba/index.html>

Spack package:

* [libksba/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/libksba/package.py)

Versions: 1.3.5

Build Dependencies: [libgpg-error](#libgpg-error)

Link Dependencies: [libgpg-error](#libgpg-error)

Description: Libksba is a library to make the tasks of working with X.509 certificates, CMS data and related objects easier.

---

liblbxutil
===

Homepage:

* <http://cgit.freedesktop.org/xorg/lib/liblbxutil>

Spack package:

* [liblbxutil/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/liblbxutil/package.py)

Versions: 1.1.0

Build Dependencies: [xproto](#xproto), pkgconfig, [xextproto](#xextproto), [util-macros](#util-macros)

Description: liblbxutil - Low Bandwidth X extension (LBX) utility routines.

---

liblockfile
===

Homepage:

* <https://github.com/miquels/liblockfile>

Spack package:

* [liblockfile/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/liblockfile/package.py)

Versions: 1.14

Description: NFS-safe locking library.

---

libmatheval
===

Homepage:

* <https://www.gnu.org/software/libmatheval/>

Spack package:

* [libmatheval/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/libmatheval/package.py)

Versions: 1.1.11

Build Dependencies: [guile](#guile), [flex](#flex)

Link Dependencies: [flex](#flex)

Description: GNU libmatheval is a library (callable from C and Fortran) to parse and evaluate symbolic expressions input as text. It supports expressions in any number of variables of arbitrary names, decimal and symbolic constants, basic unary and binary operators, and elementary mathematical functions. In addition to parsing and evaluation, libmatheval can also compute symbolic derivatives and output expressions to strings.
---

libmaxminddb
===

Homepage:

* <https://github.com/maxmind/libmaxminddb>

Spack package:

* [libmaxminddb/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/libmaxminddb/package.py)

Versions: 1.3.2

Description: C library for the MaxMind DB file format.

---

libmesh
===

Homepage:

* <http://libmesh.github.io/>

Spack package:

* [libmesh/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/libmesh/package.py)

Versions: 1.3.0, 1.2.1, 1.0.0

Build Dependencies: [slepc](#slepc), [petsc](#petsc), [eigen](#eigen), [hdf5](#hdf5), [perl](#perl), [boost](#boost), mpi, tbb

Link Dependencies: [slepc](#slepc), [petsc](#petsc), [eigen](#eigen), [hdf5](#hdf5), [perl](#perl), [boost](#boost), mpi, tbb

Description: The libMesh library provides a framework for the numerical simulation of partial differential equations using arbitrary unstructured discretizations on serial and parallel platforms.

---

libmng
===

Homepage:

* <http://sourceforge.net/projects/libmng/>

Spack package:

* [libmng/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/libmng/package.py)

Versions: 2.0.3, 2.0.2

Build Dependencies: [zlib](#zlib), jpeg, [lcms](#lcms)

Link Dependencies: [zlib](#zlib), jpeg, [lcms](#lcms)

Description: libmng - THE reference library for reading, displaying, writing and examining Multiple-Image Network Graphics. MNG is the animation extension to the popular PNG image format.
---

libmongoc
===

Homepage:

* <https://github.com/mongodb/mongo-c-driver>

Spack package:

* [libmongoc/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/libmongoc/package.py)

Versions: 1.9.1, 1.8.1, 1.8.0, 1.7.0, 1.6.3, 1.6.2, 1.6.1

Build Dependencies: [zlib](#zlib), pkgconfig, [m4](#m4), [openssl](#openssl), [snappy](#snappy), [autoconf](#autoconf), [automake](#automake), [libtool](#libtool), [libbson](#libbson)

Link Dependencies: [zlib](#zlib), [libbson](#libbson), [openssl](#openssl), [snappy](#snappy)

Description: libmongoc is a client library written in C for MongoDB.

---

libmonitor
===

Homepage:

* <https://github.com/HPCToolkit/libmonitor>

Spack package:

* [libmonitor/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/libmonitor/package.py)

Versions: 2018.07.18, 2013.02.18, master

Description: Libmonitor is a library providing callback functions for the begin and end of processes and threads. It provides a layer on which to build process monitoring tools and profilers.

---

libnbc
===

Homepage:

* <http://unixer.de/research/nbcoll/libnbc/>

Spack package:

* [libnbc/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/libnbc/package.py)

Versions: 1.1.1

Build Dependencies: mpi

Link Dependencies: mpi

Description: LibNBC is a prototypic implementation of a nonblocking interface for MPI collective operations. Based on ANSI C and MPI-1, it supports all MPI-1 collective operations in a nonblocking manner. LibNBC is distributed under the BSD license.
---

libnl
===

Homepage:

* <https://www.infradead.org/~tgr/libnl/>

Spack package:

* [libnl/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/libnl/package.py)

Versions: 3.3.0, 3.2.25

Build Dependencies: [flex](#flex), [m4](#m4), [bison](#bison)

Description: libnl - Netlink Protocol Library Suite.

---

libnova
===

Homepage:

* <http://libnova.sourceforge.net>

Spack package:

* [libnova/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/libnova/package.py)

Versions: 0.15.0

Build Dependencies: [autoconf](#autoconf), [automake](#automake), [libtool](#libtool), [m4](#m4)

Link Dependencies: [autoconf](#autoconf), [automake](#automake), [libtool](#libtool), [m4](#m4)

Description: libnova is a general purpose, double precision, Celestial Mechanics, Astrometry and Astrodynamics library.

---

libogg
===

Homepage:

* <https://www.xiph.org/ogg/>

Spack package:

* [libogg/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/libogg/package.py)

Versions: 1.3.2

Description: Ogg is a multimedia container format, and the native file and stream format for the Xiph.org multimedia codecs.

---

liboldx
===

Homepage:

* <https://cgit.freedesktop.org/xorg/lib/liboldX/>

Spack package:

* [liboldx/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/liboldx/package.py)

Versions: 1.0.1

Build Dependencies: [util-macros](#util-macros), pkgconfig, [libx11](#libx11)

Link Dependencies: [libx11](#libx11)

Description: X version 10 backwards compatibility.
---

libpcap
===

Homepage:

* <http://www.tcpdump.org/>

Spack package:

* [libpcap/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/libpcap/package.py)

Versions: 1.8.1

Build Dependencies: [flex](#flex), [bison](#bison)

Description: libpcap is a portable C/C++ library for packet capture.

---

libpciaccess
===

Homepage:

* <http://cgit.freedesktop.org/xorg/lib/libpciaccess/>

Spack package:

* [libpciaccess/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/libpciaccess/package.py)

Versions: 0.13.5, 0.13.4

Build Dependencies: [util-macros](#util-macros), pkgconfig, [libtool](#libtool)

Description: Generic PCI access library.

---

libpfm4
===

Homepage:

* <http://perfmon2.sourceforge.net>

Spack package:

* [libpfm4/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/libpfm4/package.py)

Versions: 4.10.1, 4.9.0, 4.8.0

Description: libpfm4 is a userspace library to help set up performance events for use with the perf_events Linux kernel interface.

---

libpipeline
===

Homepage:

* <http://libpipeline.nongnu.org/>

Spack package:

* [libpipeline/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/libpipeline/package.py)

Versions: 1.4.2

Build Dependencies: pkgconfig

Test Dependencies: [check](#check)

Description: libpipeline is a C library for manipulating pipelines of subprocesses in a flexible and convenient way.

---

libpng
===

Homepage:

* <http://www.libpng.org/pub/png/libpng.html>

Spack package:

* [libpng/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/libpng/package.py)

Versions: 1.6.34, 1.6.29, 1.6.28, 1.6.27, 1.2.57

Build Dependencies: [zlib](#zlib)

Link Dependencies: [zlib](#zlib)

Description: libpng is the official PNG reference library.
---

libpsl
===

Homepage:

* <https://github.com/rockdaboot/libpsl>

Spack package:

* [libpsl/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/libpsl/package.py)

Versions: 0.17.0

Build Dependencies: [icu4c](#icu4c), [gettext](#gettext), pkgconfig, [python](#python)

Link Dependencies: [icu4c](#icu4c)

Test Dependencies: [valgrind](#valgrind)

Description: libpsl - C library to handle the Public Suffix List.

---

libpthread-stubs
===

Homepage:

* <https://xcb.freedesktop.org/>

Spack package:

* [libpthread-stubs/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/libpthread-stubs/package.py)

Versions: 0.4, 0.3

Description: The libpthread-stubs package provides weak aliases for pthread functions not provided in libc or otherwise available by default.

---

libquo
===

Homepage:

* <https://github.com/lanl/libquo>

Spack package:

* [libquo/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/libquo/package.py)

Versions: develop, 1.3, 1.2.9

Build Dependencies: [autoconf](#autoconf), [automake](#automake), [libtool](#libtool), mpi, [m4](#m4)

Link Dependencies: mpi

Description: QUO (as in "status quo") is a runtime library that aids in accommodating thread-level heterogeneity in dynamic, phased MPI+X applications comprising single- and multi-threaded libraries.
---

librom
===

Homepage:

* <https://github.com/LLNL/libROM>

Spack package:

* [librom/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/librom/package.py)

Versions: develop

Build Dependencies: [zlib](#zlib), mpi, [libszip](#libszip), [hdf5](#hdf5), [perl](#perl), [doxygen](#doxygen), [graphviz](#graphviz), [boost](#boost), lapack

Link Dependencies: [zlib](#zlib), mpi, [libszip](#libszip), [hdf5](#hdf5), [perl](#perl), [doxygen](#doxygen), [graphviz](#graphviz), [boost](#boost), lapack

Description: libROM: a library for computing large-scale reduced order models.

---

libsharp
===

Homepage:

* <https://github.com/Libsharp/libsharp>

Spack package:

* [libsharp/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/libsharp/package.py)

Versions: 2018-01-17, 1.0.0

Build Dependencies: [autoconf](#autoconf), mpi

Link Dependencies: mpi

Description: Libsharp is a code library for spherical harmonic transforms (SHTs) and spin-weighted spherical harmonic transforms, which evolved from the libpsht library.

---

libshm
===

Homepage:

* <https://github.com/afeldman/libshm>

Spack package:

* [libshm/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/libshm/package.py)

Versions: master

Description: Libshm is a header library providing easy C++11 access to shared memory.
---

libsigcpp
===

Homepage:

* <https://libsigcplusplus.github.io/libsigcplusplus/index.html>

Spack package:

* [libsigcpp/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/libsigcpp/package.py)

Versions: 2.9.3, 2.1.1, 2.0.3

Description: Libsigc++ is a C++ library for typesafe callbacks.

---

libsigsegv
===

Homepage:

* <https://www.gnu.org/software/libsigsegv/>

Spack package:

* [libsigsegv/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/libsigsegv/package.py)

Versions: 2.11, 2.10

Description: GNU libsigsegv is a library for handling page faults in user mode.

---

libsm
===

Homepage:

* <http://cgit.freedesktop.org/xorg/lib/libSM>

Spack package:

* [libsm/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/libsm/package.py)

Versions: 1.2.2

Build Dependencies: [libice](#libice), [xproto](#xproto), pkgconfig, [xtrans](#xtrans), [util-macros](#util-macros)

Link Dependencies: [libice](#libice)

Description: libSM - X Session Management Library.

---

libsodium
===

Homepage:

* <https://download.libsodium.org/doc/>

Spack package:

* [libsodium/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/libsodium/package.py)

Versions: 1.0.15, 1.0.13, 1.0.12, 1.0.11, 1.0.10, 1.0.3, 1.0.2, 1.0.1, 1.0.0, 0.7.1

Description: Sodium is a modern, easy-to-use software library for encryption, decryption, signatures, password hashing and more.
---

libspatialindex
===

Homepage:

* <http://libspatialindex.github.io>

Spack package:

* [libspatialindex/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/libspatialindex/package.py)

Versions: 1.8.5

Build Dependencies: [cmake](#cmake)

Description:

---

libsplash
===

Homepage:

* <https://github.com/ComputationalRadiationPhysics/libSplash>

Spack package:

* [libsplash/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/libsplash/package.py)

Versions: develop, 1.7.0, 1.6.0, 1.5.0, 1.4.0, 1.3.1, 1.2.4, master

Build Dependencies: [cmake](#cmake), mpi, [hdf5](#hdf5)

Link Dependencies: mpi, [hdf5](#hdf5)

Description: libSplash aims at developing a HDF5-based I/O library for HPC simulations. It is created as an easy-to-use frontend for the standard HDF5 library with support for MPI processes in a cluster environment. While the standard HDF5 library provides detailed low-level control, libSplash simplifies tasks commonly found in large-scale HPC simulations, such as iterative computations and MPI distributed processes.
---

libssh
===

Homepage:

* <https://www.libssh.org>

Spack package:

* [libssh/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/libssh/package.py)

Versions: 0.7.5

Build Dependencies: [zlib](#zlib), [cmake](#cmake), [openssl](#openssl)

Link Dependencies: [zlib](#zlib), [openssl](#openssl)

Description: libssh: the SSH library.

---

libssh2
===

Homepage:

* <https://www.libssh2.org/>

Spack package:

* [libssh2/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/libssh2/package.py)

Versions: 1.8.0, 1.7.0, 1.4.3

Build Dependencies: [zlib](#zlib), [cmake](#cmake), [openssl](#openssl), [xz](#xz)

Link Dependencies: [zlib](#zlib), [openssl](#openssl), [xz](#xz)

Description: libssh2 is a client-side C library implementing the SSH2 protocol.

---

libsvm
===

Homepage:

* <https://www.csie.ntu.edu.tw/~cjlin/libsvm/>

Spack package:

* [libsvm/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/libsvm/package.py)

Versions: 322

Description: Libsvm is a simple, easy-to-use, and efficient software for SVM classification and regression.

---

libszip
===

Homepage:

* <https://support.hdfgroup.org/doc_resource/SZIP/>

Spack package:

* [libszip/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/libszip/package.py)

Versions: 2.1.1, 2.1

Description: Szip is an implementation of the extended-Rice lossless compression algorithm. It provides lossless compression of scientific data, and is provided with HDF software products.
---

libtermkey
===

Homepage:

* <http://www.leonerd.org.uk/code/libtermkey/>

Spack package:

* [libtermkey/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/libtermkey/package.py)

Versions: 0.18, 0.17, 0.16, 0.15b, 0.14

Build Dependencies: [libtool](#libtool), [ncurses](#ncurses)

Link Dependencies: [ncurses](#ncurses)

Description: Easy keyboard entry processing for terminal programs.

---

libtiff
===

Homepage:

* <http://www.simplesystems.org/libtiff/>

Spack package:

* [libtiff/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/libtiff/package.py)

Versions: 4.0.9, 4.0.8, 4.0.7, 4.0.6, 4.0.3, 3.9.7

Build Dependencies: [zlib](#zlib), jpeg, [xz](#xz)

Link Dependencies: [zlib](#zlib), jpeg, [xz](#xz)

Description: LibTIFF - Tag Image File Format (TIFF) Library and Utilities.

---

libtool
===

Homepage:

* <https://www.gnu.org/software/libtool/>

Spack package:

* [libtool/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/libtool/package.py)

Versions: develop, 2.4.6, 2.4.2

Build Dependencies: [help2man](#help2man), [m4](#m4), [xz](#xz), [autoconf](#autoconf), [automake](#automake), [texinfo](#texinfo)

Description: libtool -- the library-building part of autotools.

---

libunistring
===

Homepage:

* <https://www.gnu.org/software/libunistring/>

Spack package:

* [libunistring/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/libunistring/package.py)

Versions: 0.9.7, 0.9.6

Description: This library provides functions for manipulating Unicode strings and for manipulating C strings according to the Unicode standard.
---

libunwind
===

Homepage:

* <http://www.nongnu.org/libunwind/>

Spack package:

* [libunwind/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/libunwind/package.py)

Versions: 1.3-rc1, 1.2.1, 1.1

Link Dependencies: [xz](#xz)

Description: A portable and efficient C programming interface (API) to determine the call-chain of a program.

---

libuuid
===

Homepage:

* <http://sourceforge.net/projects/libuuid/>

Spack package:

* [libuuid/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/libuuid/package.py)

Versions: 1.0.3

Description: Portable uuid C library.

---

libuv
===

Homepage:

* <http://libuv.org>

Spack package:

* [libuv/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/libuv/package.py)

Versions: 1.9.0

Build Dependencies: [autoconf](#autoconf), [automake](#automake), [libtool](#libtool)

Description: Multi-platform library with a focus on asynchronous IO.

---

libvorbis
===

Homepage:

* <https://xiph.org/vorbis/>

Spack package:

* [libvorbis/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/libvorbis/package.py)

Versions: 1.3.5

Build Dependencies: [libogg](#libogg), pkgconfig

Link Dependencies: [libogg](#libogg)

Description: Ogg Vorbis is a fully open, non-proprietary, patent-and-royalty-free, general-purpose compressed audio format for mid to high quality (8kHz-48.0kHz, 16+ bit, polyphonic) audio and music at fixed and variable bitrates from 16 to 128 kbps/channel.
---

libvterm
===

Homepage:

* <http://www.leonerd.org.uk/code/libvterm/>

Spack package:

* [libvterm/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/libvterm/package.py)

Versions: 681

Description: An abstract library implementation of a terminal emulator.

---

libwebsockets
===

Homepage:

* <https://github.com/warmcat/libwebsockets>

Spack package:

* [libwebsockets/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/libwebsockets/package.py)

Versions: 2.2.1, 2.1.1, 2.1.0, 2.0.3, 1.7.9

Build Dependencies: [zlib](#zlib), [cmake](#cmake), [openssl](#openssl)

Link Dependencies: [zlib](#zlib), [openssl](#openssl)

Description: C library for lightweight websocket clients and servers.

---

libwindowswm
===

Homepage:

* <http://cgit.freedesktop.org/xorg/lib/libWindowsWM>

Spack package:

* [libwindowswm/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/libwindowswm/package.py)

Versions: 1.0.1

Build Dependencies: [util-macros](#util-macros), pkgconfig, [windowswmproto](#windowswmproto), [libx11](#libx11), [libxext](#libxext), [xextproto](#xextproto)

Link Dependencies: [libx11](#libx11), [libxext](#libxext)

Description: WindowsWM - Cygwin/X rootless window management extension. WindowsWM is a simple library designed to interface with the Windows-WM extension. This extension allows X window managers to better interact with the Cygwin XWin server when running X11 in a rootless mode.
---

libx11
===

Homepage:

* <https://www.x.org/>

Spack package:

* [libx11/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/libx11/package.py)

Versions: 1.6.5, 1.6.3

Build Dependencies: [inputproto](#inputproto), [xproto](#xproto), pkgconfig, [util-macros](#util-macros), [perl](#perl), [libxcb](#libxcb), [kbproto](#kbproto), [xextproto](#xextproto), [xtrans](#xtrans)

Link Dependencies: [libxcb](#libxcb), [kbproto](#kbproto), [xextproto](#xextproto)

Description: libX11 - Core X11 protocol client library.

---

libxau
===

Homepage:

* <https://cgit.freedesktop.org/xorg/lib/libXau/>

Spack package:

* [libxau/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/libxau/package.py)

Versions: 1.0.8

Build Dependencies: [xproto](#xproto), pkgconfig, [util-macros](#util-macros)

Link Dependencies: [xproto](#xproto)

Description: The libXau package contains a library implementing the X11 Authorization Protocol. This is useful for restricting client access to the display.

---

libxaw
===

Homepage:

* <http://cgit.freedesktop.org/xorg/lib/libXaw>

Spack package:

* [libxaw/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/libxaw/package.py)

Versions: 1.0.13

Build Dependencies: [libxt](#libxt), pkgconfig, [libxext](#libxext), [libxpm](#libxpm), [xproto](#xproto), [util-macros](#util-macros), [libx11](#libx11), [xextproto](#xextproto), [libxmu](#libxmu)

Link Dependencies: [libxt](#libxt), [libx11](#libx11), [libxext](#libxext), [libxpm](#libxpm), [libxmu](#libxmu)

Description: Xaw is the X Athena Widget Set. Xaw is a widget set based on the X Toolkit Intrinsics (Xt) Library.
---

libxaw3d
===

Homepage:

* <http://cgit.freedesktop.org/xorg/lib/libXaw3d>

Spack package:

* [libxaw3d/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/libxaw3d/package.py)

Versions: 1.6.2

Build Dependencies: [libxt](#libxt), pkgconfig, [libxext](#libxext), [libxpm](#libxpm), [util-macros](#util-macros), [libx11](#libx11), [libxmu](#libxmu)

Link Dependencies: [libxt](#libxt), [libx11](#libx11), [libxext](#libxext), [libxpm](#libxpm), [libxmu](#libxmu)

Description: Xaw3d is the X 3D Athena Widget Set. Xaw3d is a widget set based on the X Toolkit Intrinsics (Xt) Library.

---

libxc
===

Homepage:

* <http://www.tddft.org/programs/octopus/wiki/index.php/Libxc>

Spack package:

* [libxc/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/libxc/package.py)

Versions: 3.0.0, 2.2.2, 2.2.1

Description: Libxc is a library of exchange-correlation functionals for density-functional theory.

---

libxcb
===

Homepage:

* <https://xcb.freedesktop.org/>

Spack package:

* [libxcb/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/libxcb/package.py)

Versions: 1.13, 1.12, 1.11.1, 1.11

Build Dependencies: [util-macros](#util-macros), pkgconfig, [libpthread-stubs](#libpthread-stubs), [xcb-proto](#xcb-proto), [libxau](#libxau), [libxdmcp](#libxdmcp)

Link Dependencies: [libxdmcp](#libxdmcp), [libxau](#libxau), [libpthread-stubs](#libpthread-stubs)

Description: The X protocol C-language Binding (XCB) is a replacement for Xlib featuring a small footprint, latency hiding, direct access to the protocol, improved threading support, and extensibility.
---

libxcomposite
===

Homepage:

* <http://cgit.freedesktop.org/xorg/lib/libXcomposite>

Spack package:

* [libxcomposite/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/libxcomposite/package.py)

Versions: 0.4.4

Build Dependencies: [util-macros](#util-macros), pkgconfig, [libx11](#libx11), [compositeproto](#compositeproto), [libxfixes](#libxfixes), [fixesproto](#fixesproto)

Link Dependencies: [libxfixes](#libxfixes), [libx11](#libx11)

Description: libXcomposite - client library for the Composite extension to the X11 protocol.

---

libxcursor
===

Homepage:

* <http://cgit.freedesktop.org/xorg/lib/libXcursor>

Spack package:

* [libxcursor/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/libxcursor/package.py)

Versions: 1.1.14

Build Dependencies: [libxrender](#libxrender), [util-macros](#util-macros), pkgconfig, [libx11](#libx11), [libxfixes](#libxfixes), [fixesproto](#fixesproto)

Link Dependencies: [libxrender](#libxrender), [libxfixes](#libxfixes), [libx11](#libx11)

Description: libXcursor - X Window System Cursor management library.

---

libxdamage
===

Homepage:

* <http://cgit.freedesktop.org/xorg/lib/libXdamage>

Spack package:

* [libxdamage/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/libxdamage/package.py)

Versions: 1.1.4

Build Dependencies: [util-macros](#util-macros), pkgconfig, [libx11](#libx11), [damageproto](#damageproto), [libxfixes](#libxfixes), [xextproto](#xextproto), [fixesproto](#fixesproto)

Link Dependencies: [libxfixes](#libxfixes), [libx11](#libx11)

Description: This package contains the library for the X Damage extension.
---

libxdmcp
===

Homepage:

* <http://cgit.freedesktop.org/xorg/lib/libXdmcp>

Spack package:

* [libxdmcp/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/libxdmcp/package.py)

Versions: 1.1.2

Build Dependencies: [xproto](#xproto), pkgconfig, [util-macros](#util-macros)

Description: libXdmcp - X Display Manager Control Protocol library.

---

libxevie
===

Homepage:

* <http://cgit.freedesktop.org/xorg/lib/libXevie>

Spack package:

* [libxevie/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/libxevie/package.py)

Versions: 1.0.3

Build Dependencies: [xproto](#xproto), pkgconfig, [evieext](#evieext), [libxext](#libxext), [util-macros](#util-macros), [libx11](#libx11), [xextproto](#xextproto)

Link Dependencies: [libx11](#libx11), [libxext](#libxext)

Description: Xevie - X Event Interception Extension (XEvIE).

---

libxext
===

Homepage:

* <http://cgit.freedesktop.org/xorg/lib/libXext>

Spack package:

* [libxext/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/libxext/package.py)

Versions: 1.3.3

Build Dependencies: [xproto](#xproto), pkgconfig, [xextproto](#xextproto), [libx11](#libx11), [util-macros](#util-macros)

Link Dependencies: [libx11](#libx11)

Description: libXext - library for common extensions to the X11 protocol.

---

libxfixes
===

Homepage:

* <http://cgit.freedesktop.org/xorg/lib/libXfixes>

Spack package:

* [libxfixes/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/libxfixes/package.py)

Versions: 5.0.2

Build Dependencies: [xproto](#xproto), pkgconfig, [libx11](#libx11), [util-macros](#util-macros), [xextproto](#xextproto), [fixesproto](#fixesproto)

Link Dependencies: [libx11](#libx11)

Description: This package contains header files and documentation for the XFIXES extension. Library and server implementations are separate.
---

libxfont
===

Homepage:

* <http://cgit.freedesktop.org/xorg/lib/libXfont>

Spack package:

* [libxfont/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/libxfont/package.py)

Versions: 1.5.2

Build Dependencies: [util-macros](#util-macros), pkgconfig, [xproto](#xproto), [libfontenc](#libfontenc), [fontsproto](#fontsproto), [xtrans](#xtrans), [freetype](#freetype)

Link Dependencies: [libfontenc](#libfontenc), [freetype](#freetype)

Description: libXfont provides the core of the legacy X11 font system, handling the index files (fonts.dir, fonts.alias, fonts.scale), the various font file formats, and rasterizing them. It is used by the X servers, the X Font Server (xfs), and some font utilities (bdftopcf for instance), but should not be used by normal X11 clients. X11 clients access fonts via either the new APIs in libXft, or the legacy APIs in libX11.

---

libxfont2
===

Homepage:

* <http://cgit.freedesktop.org/xorg/lib/libXfont>

Spack package:

* [libxfont2/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/libxfont2/package.py)

Versions: 2.0.1

Build Dependencies: [util-macros](#util-macros), pkgconfig, [xproto](#xproto), [libfontenc](#libfontenc), [fontsproto](#fontsproto), [xtrans](#xtrans), [freetype](#freetype)

Link Dependencies: [libfontenc](#libfontenc), [freetype](#freetype)

Description: libXfont provides the core of the legacy X11 font system, handling the index files (fonts.dir, fonts.alias, fonts.scale), the various font file formats, and rasterizing them. It is used by the X servers, the X Font Server (xfs), and some font utilities (bdftopcf for instance), but should not be used by normal X11 clients. X11 clients access fonts via either the new APIs in libXft, or the legacy APIs in libX11.
---

libxfontcache
===

Homepage:

* <http://cgit.freedesktop.org/xorg/lib/libXfontcache>

Spack package:

* [libxfontcache/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/libxfontcache/package.py)

Versions: 1.0.5

Build Dependencies: [util-macros](#util-macros), pkgconfig, [fontcacheproto](#fontcacheproto), [libx11](#libx11), [libxext](#libxext), [xextproto](#xextproto)

Link Dependencies: [libx11](#libx11), [libxext](#libxext)

Description: Xfontcache - X-TrueType font cache extension client library.

---

libxft
===

Homepage:

* <http://cgit.freedesktop.org/xorg/lib/libXft>

Spack package:

* [libxft/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/libxft/package.py)

Versions: 2.3.2

Build Dependencies: [libxrender](#libxrender), [util-macros](#util-macros), pkgconfig, [libx11](#libx11), [fontconfig](#fontconfig), [freetype](#freetype)

Link Dependencies: [libxrender](#libxrender), [fontconfig](#fontconfig), [libx11](#libx11), [freetype](#freetype)

Description: X FreeType library. Xft version 2.1 was the first stand alone release of Xft, a library that connects X applications with the FreeType font rasterization library. Xft uses fontconfig to locate fonts so it has no configuration files.

---

libxi
===

Homepage:

* <http://cgit.freedesktop.org/xorg/lib/libXi>

Spack package:

* [libxi/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/libxi/package.py)

Versions: 1.7.6

Build Dependencies: [inputproto](#inputproto), [xproto](#xproto), pkgconfig, [libxext](#libxext), [libx11](#libx11), [libxfixes](#libxfixes), [xextproto](#xextproto), [fixesproto](#fixesproto)

Link Dependencies: [libxfixes](#libxfixes), [libx11](#libx11), [libxext](#libxext)

Description: libXi - library for the X Input Extension.
---

libxinerama
===

Homepage:

* <http://cgit.freedesktop.org/xorg/lib/libXinerama>

Spack package:

* [libxinerama/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/libxinerama/package.py)

Versions: 1.1.3

Build Dependencies: [xineramaproto](#xineramaproto), pkgconfig, [libx11](#libx11), [util-macros](#util-macros), [libxext](#libxext), [xextproto](#xextproto)

Link Dependencies: [libx11](#libx11), [libxext](#libxext)

Description: libXinerama - API for the Xinerama extension to the X11 Protocol.

---

libxkbcommon
===

Homepage:

* <https://xkbcommon.org/>

Spack package:

* [libxkbcommon/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/libxkbcommon/package.py)

Versions: 0.8.0

Build Dependencies: [xkbdata](#xkbdata), [m4](#m4), [bison](#bison), [autoconf](#autoconf), [automake](#automake), [libtool](#libtool)

Link Dependencies: [xkbdata](#xkbdata)

Description: xkbcommon is a library to handle keyboard descriptions, including loading them from disk, parsing them and handling their state. It's mainly meant for client toolkits, window systems, and other system applications.

---

libxkbfile
===

Homepage:

* <https://cgit.freedesktop.org/xorg/lib/libxkbfile>

Spack package:

* [libxkbfile/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/libxkbfile/package.py)

Versions: 1.0.9

Build Dependencies: [util-macros](#util-macros), [kbproto](#kbproto), [libx11](#libx11), pkgconfig

Link Dependencies: [libx11](#libx11)

Description: XKB file handling routines.
---

libxkbui
===

Homepage:

* <https://cgit.freedesktop.org/xorg/lib/libxkbui/>

Spack package:

* [libxkbui/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/libxkbui/package.py)

Versions: 1.0.2

Build Dependencies: [libx11](#libx11), [libxt](#libxt), pkgconfig, [libxkbfile](#libxkbfile), [util-macros](#util-macros)

Link Dependencies: [libxt](#libxt), [libx11](#libx11), [libxkbfile](#libxkbfile)

Description: X.org libxkbui library.

---

libxml2
===

Homepage:

* <http://xmlsoft.org>

Spack package:

* [libxml2/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/libxml2/package.py)

Versions: 2.9.8, 2.9.4, 2.9.2, 2.7.8

Build Dependencies: [zlib](#zlib), pkgconfig, [python](#python), [xz](#xz)

Link Dependencies: [zlib](#zlib), [python](#python), [xz](#xz)

Description: Libxml2 is the XML C parser and toolkit developed for the Gnome project (but usable outside of the Gnome platform); it is free software available under the MIT License.

---

libxmu
===

Homepage:

* <http://cgit.freedesktop.org/xorg/lib/libXmu>

Spack package:

* [libxmu/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/libxmu/package.py)

Versions: 1.1.2

Build Dependencies: [libxt](#libxt), pkgconfig, [libx11](#libx11), [util-macros](#util-macros), [libxext](#libxext), [xextproto](#xextproto)

Link Dependencies: [libxt](#libxt), [libxext](#libxext), [libx11](#libx11)

Description: This library contains miscellaneous utilities and is not part of the Xlib standard. It contains routines which only use public interfaces so that it may be layered on top of any proprietary implementation of Xlib or Xt.
---
libxp[¶](#libxp)
===
Homepage:
* <http://cgit.freedesktop.org/xorg/lib/libXp>
Spack package:
* [libxp/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/libxp/package.py)
Versions: 1.0.3
Build Dependencies: [util-macros](#util-macros), pkgconfig, [libx11](#libx11), [libxext](#libxext), [xextproto](#xextproto), [libxau](#libxau), [printproto](#printproto)
Link Dependencies: [libx11](#libx11), [libxext](#libxext), [libxau](#libxau)
Description: libXp - X Print Client Library.
---
libxpm[¶](#libxpm)
===
Homepage:
* <http://cgit.freedesktop.org/xorg/lib/libXpm>
Spack package:
* [libxpm/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/libxpm/package.py)
Versions: 3.5.12, 3.5.11, 3.5.10, 3.5.9, 3.5.8, 3.5.7
Build Dependencies: [xproto](#xproto), pkgconfig, [gettext](#gettext), [libx11](#libx11), [util-macros](#util-macros)
Link Dependencies: [gettext](#gettext), [libx11](#libx11)
Description: libXpm - X Pixmap (XPM) image file format library.
---
libxpresent[¶](#libxpresent)
===
Homepage:
* <https://cgit.freedesktop.org/xorg/lib/libXpresent/>
Spack package:
* [libxpresent/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/libxpresent/package.py)
Versions: 1.0.0
Build Dependencies: [xproto](#xproto), [presentproto](#presentproto), [libx11](#libx11), [util-macros](#util-macros), [xextproto](#xextproto), pkgconfig
Link Dependencies: [libx11](#libx11)
Description: This package contains header files and documentation for the Present extension. Library and server implementations are separate.
---
libxprintapputil[¶](#libxprintapputil)
===
Homepage:
* <https://cgit.freedesktop.org/xorg/lib/libXprintAppUtil/>
Spack package:
* [libxprintapputil/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/libxprintapputil/package.py)
Versions: 1.0.1
Build Dependencies: [util-macros](#util-macros), pkgconfig, [libx11](#libx11), [libxp](#libxp), [libxprintutil](#libxprintutil), [libxau](#libxau), [printproto](#printproto)
Link Dependencies: [libxprintutil](#libxprintutil), [libx11](#libx11), [libxp](#libxp), [libxau](#libxau)
Description: Xprint application utility routines.
---
libxprintutil[¶](#libxprintutil)
===
Homepage:
* <https://cgit.freedesktop.org/xorg/lib/libXprintUtil/>
Spack package:
* [libxprintutil/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/libxprintutil/package.py)
Versions: 1.0.1
Build Dependencies: [libxt](#libxt), pkgconfig, [libx11](#libx11), [libxp](#libxp), [util-macros](#util-macros), [libxau](#libxau), [printproto](#printproto)
Link Dependencies: [libxt](#libxt), [libx11](#libx11), [libxp](#libxp), [libxau](#libxau)
Description: Xprint application utility routines.
---
libxrandr[¶](#libxrandr)
===
Homepage:
* <http://cgit.freedesktop.org/xorg/lib/libXrandr>
Spack package:
* [libxrandr/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/libxrandr/package.py)
Versions: 1.5.0
Build Dependencies: [libxrender](#libxrender), [util-macros](#util-macros), pkgconfig, [libxext](#libxext), [libx11](#libx11), [xextproto](#xextproto), [renderproto](#renderproto), [randrproto](#randrproto)
Link Dependencies: [libxrender](#libxrender), [libx11](#libx11), [libxext](#libxext)
Description: libXrandr - X Resize, Rotate and Reflection extension library.
---
libxrender[¶](#libxrender)
===
Homepage:
* <http://cgit.freedesktop.org/xorg/lib/libXrender>
Spack package:
* [libxrender/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/libxrender/package.py)
Versions: 0.9.10, 0.9.9
Build Dependencies: [libx11](#libx11), pkgconfig, [renderproto](#renderproto), [util-macros](#util-macros)
Link Dependencies: [libx11](#libx11), [renderproto](#renderproto)
Description: libXrender - library for the Render Extension to the X11 protocol.
---
libxres[¶](#libxres)
===
Homepage:
* <http://cgit.freedesktop.org/xorg/lib/libXRes>
Spack package:
* [libxres/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/libxres/package.py)
Versions: 1.0.7
Build Dependencies: [util-macros](#util-macros), pkgconfig, [libx11](#libx11), [resourceproto](#resourceproto), [libxext](#libxext), [xextproto](#xextproto)
Link Dependencies: [libx11](#libx11), [libxext](#libxext)
Description: libXRes - X-Resource extension client library.
---
libxscrnsaver[¶](#libxscrnsaver)
===
Homepage:
* <http://cgit.freedesktop.org/xorg/lib/libXScrnSaver>
Spack package:
* [libxscrnsaver/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/libxscrnsaver/package.py)
Versions: 1.2.2
Build Dependencies: [util-macros](#util-macros), pkgconfig, [libx11](#libx11), [scrnsaverproto](#scrnsaverproto), [libxext](#libxext), [xextproto](#xextproto)
Link Dependencies: [libx11](#libx11), [libxext](#libxext)
Description: XScreenSaver - X11 Screen Saver extension client library.
---
libxshmfence[¶](#libxshmfence)
===
Homepage:
* <https://cgit.freedesktop.org/xorg/lib/libxshmfence/>
Spack package:
* [libxshmfence/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/libxshmfence/package.py)
Versions: 1.3, 1.2
Build Dependencies: [xproto](#xproto), pkgconfig, [util-macros](#util-macros)
Description: libxshmfence - Shared memory 'SyncFence' synchronization primitive. This library offers a CPU-based synchronization primitive compatible with the X SyncFence objects that can be shared between processes using file descriptor passing.
---
libxslt[¶](#libxslt)
===
Homepage:
* <http://www.xmlsoft.org/XSLT/index.html>
Spack package:
* [libxslt/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/libxslt/package.py)
Versions: 1.1.29, 1.1.28, 1.1.26
Build Dependencies: [zlib](#zlib), [libiconv](#libiconv), [libxml2](#libxml2), [libgcrypt](#libgcrypt), [xz](#xz)
Link Dependencies: [zlib](#zlib), [libiconv](#libiconv), [libxml2](#libxml2), [libgcrypt](#libgcrypt), [xz](#xz)
Description: Libxslt is the XSLT C library developed for the GNOME project. XSLT itself is an XML language to define transformations for XML. Libxslt is based on libxml2, the XML C library developed for the GNOME project. It also implements most of the EXSLT set of processor-portable extension functions and some of Saxon's evaluate and expressions extensions.
---
libxsmm[¶](#libxsmm)
===
Homepage:
* <https://github.com/hfp/libxsmm>
Spack package:
* [libxsmm/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/libxsmm/package.py)
Versions: develop, 1.9, 1.8.3, 1.8.2, 1.8.1, 1.8, 1.7.1, 1.7, 1.6.6, 1.6.5, 1.6.4, 1.6.3, 1.6.2, 1.6.1, 1.6, 1.5.2, 1.5.1, 1.5, 1.4.4, 1.4.3, 1.4.2, 1.4.1, 1.4
Description: Library targeting Intel Architecture for small, dense or sparse matrix multiplications, and small convolutions.
---
libxstream[¶](#libxstream)
===
Homepage:
* <https://github.com/hfp/libxstream>
Spack package:
* [libxstream/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/libxstream/package.py)
Versions: 0.9.0
Description: LIBXSTREAM is a library to work with streams, events, and code regions that are able to run asynchronously while preserving the usual stream conditions.
---
libxt[¶](#libxt)
===
Homepage:
* <http://cgit.freedesktop.org/xorg/lib/libXt>
Spack package:
* [libxt/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/libxt/package.py)
Versions: 1.1.5
Build Dependencies: [libice](#libice), [xproto](#xproto), pkgconfig, [libsm](#libsm), [libx11](#libx11), [util-macros](#util-macros), [kbproto](#kbproto)
Link Dependencies: [libice](#libice), [libsm](#libsm), [libx11](#libx11)
Description: libXt - X Toolkit Intrinsics library.
---
libxtrap[¶](#libxtrap)
===
Homepage:
* <http://cgit.freedesktop.org/xorg/lib/libXTrap>
Spack package:
* [libxtrap/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/libxtrap/package.py)
Versions: 1.0.1
Build Dependencies: [libxt](#libxt), pkgconfig, [libx11](#libx11), [util-macros](#util-macros), [libxext](#libxext), [xextproto](#xextproto), [trapproto](#trapproto)
Link Dependencies: [libxt](#libxt), [libx11](#libx11), [libxext](#libxext)
Description: libXTrap is the Xlib-based client API for the DEC-XTRAP extension.
XTrap was a proposed standard extension for X11R5 which facilitated the capturing of server protocol and synthesizing core input events. Digital participated in the X Consortium's xtest working group, which chose to evolve XTrap functionality into the XTEST & RECORD extensions for X11R6. As X11R6 was released in 1994, XTrap has now been deprecated for over 15 years, and uses of it should be quite rare.
---
libxtst[¶](#libxtst)
===
Homepage:
* <http://cgit.freedesktop.org/xorg/lib/libXtst>
Spack package:
* [libxtst/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/libxtst/package.py)
Versions: 1.2.2
Build Dependencies: [inputproto](#inputproto), [util-macros](#util-macros), pkgconfig, [libxext](#libxext), [libx11](#libx11), [libxi](#libxi), [recordproto](#recordproto), [xextproto](#xextproto), [fixesproto](#fixesproto)
Link Dependencies: [libx11](#libx11), [libxext](#libxext), [libxi](#libxi)
Description: libXtst provides the Xlib-based client API for the XTEST & RECORD extensions. The XTEST extension is a minimal set of client and server extensions required to completely test the X11 server with no user intervention. This extension is not intended to support general journaling and playback of user actions. The RECORD extension supports the recording and reporting of all core X protocol and arbitrary X extension protocol.
---
libxv[¶](#libxv)
===
Homepage:
* <http://cgit.freedesktop.org/xorg/lib/libXv>
Spack package:
* [libxv/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/libxv/package.py)
Versions: 1.0.10
Build Dependencies: [util-macros](#util-macros), pkgconfig, [libx11](#libx11), [libxext](#libxext), [xextproto](#xextproto), [videoproto](#videoproto)
Link Dependencies: [libx11](#libx11), [libxext](#libxext)
Description: libXv - library for the X Video (Xv) extension to the X Window System.
---
libxvmc[¶](#libxvmc)
===
Homepage:
* <https://cgit.freedesktop.org/xorg/lib/libXvMC>
Spack package:
* [libxvmc/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/libxvmc/package.py)
Versions: 1.0.9
Build Dependencies: [util-macros](#util-macros), pkgconfig, [libx11](#libx11), [libxext](#libxext), [xextproto](#xextproto), [videoproto](#videoproto), [libxv](#libxv)
Link Dependencies: [libx11](#libx11), [libxext](#libxext), [libxv](#libxv)
Description: X.org libXvMC library.
---
libxxf86dga[¶](#libxxf86dga)
===
Homepage:
* <http://cgit.freedesktop.org/xorg/lib/libXxf86dga>
Spack package:
* [libxxf86dga/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/libxxf86dga/package.py)
Versions: 1.1.4
Build Dependencies: [xproto](#xproto), pkgconfig, [libx11](#libx11), [util-macros](#util-macros), [libxext](#libxext), [xextproto](#xextproto), [xf86dgaproto](#xf86dgaproto)
Link Dependencies: [libx11](#libx11), [libxext](#libxext)
Description: libXxf86dga - Client library for the XFree86-DGA extension.
---
libxxf86misc[¶](#libxxf86misc)
===
Homepage:
* <http://cgit.freedesktop.org/xorg/lib/libXxf86misc>
Spack package:
* [libxxf86misc/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/libxxf86misc/package.py)
Versions: 1.0.3
Build Dependencies: [util-macros](#util-macros), pkgconfig, [libxext](#libxext), [xf86miscproto](#xf86miscproto), [libx11](#libx11), [xextproto](#xextproto), [xproto](#xproto)
Link Dependencies: [libx11](#libx11), [libxext](#libxext)
Description: libXxf86misc - Extension library for the XFree86-Misc X extension.
---
libxxf86vm[¶](#libxxf86vm)
===
Homepage:
* <http://cgit.freedesktop.org/xorg/lib/libXxf86vm>
Spack package:
* [libxxf86vm/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/libxxf86vm/package.py)
Versions: 1.1.4
Build Dependencies: [xproto](#xproto), [libx11](#libx11), [libxext](#libxext), [util-macros](#util-macros), [xf86vidmodeproto](#xf86vidmodeproto), [xextproto](#xextproto), pkgconfig
Link Dependencies: [libxext](#libxext), [libx11](#libx11)
Description: libXxf86vm - Extension library for the XFree86-VidMode X extension.
---
libyogrt[¶](#libyogrt)
===
Homepage:
* <https://github.com/LLNL/libyogrt>
Spack package:
* [libyogrt/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/libyogrt/package.py)
Versions: 1.20-6, 1.20-5, 1.20-4, 1.20-3, 1.20-2
Description: Your One Get Remaining Time Library.
---
libzip[¶](#libzip)
===
Homepage:
* <https://nih.at/libzip/index.html>
Spack package:
* [libzip/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/libzip/package.py)
Versions: 1.2.0
Description: libzip is a C library for reading, creating, and modifying zip archives.
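The libzip entry above describes a read/create/modify API for zip archives. libzip itself is a C library; the same workflow can be sketched with Python's stdlib `zipfile` module, entirely in memory:

```python
# Create, modify, and read back a zip archive in memory, mirroring the
# workflow libzip's C API provides (zipfile is a stand-in, not libzip).
import io
import zipfile

buf = io.BytesIO()

# Create an archive with one member.
with zipfile.ZipFile(buf, "w", zipfile.ZIP_DEFLATED) as zf:
    zf.writestr("hello.txt", "hello from a zip member")

# Reopen for modification and append a second member.
with zipfile.ZipFile(buf, "a") as zf:
    zf.writestr("second.txt", "appended later")

# Read the archive back: list members, extract one.
with zipfile.ZipFile(buf) as zf:
    names = zf.namelist()
    text = zf.read("hello.txt").decode()

print(names)  # ['hello.txt', 'second.txt']
print(text)   # hello from a zip member
```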
---
lighttpd[¶](#lighttpd)
===
Homepage:
* <https://www.lighttpd.net>
Spack package:
* [lighttpd/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/lighttpd/package.py)
Versions: 1.4.50, 1.4.49
Build Dependencies: [cmake](#cmake)
Description: A secure, fast, compliant and very flexible web server.
---
likwid[¶](#likwid)
===
Homepage:
* <https://github.com/RRZE-HPC/likwid>
Spack package:
* [likwid/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/likwid/package.py)
Versions: 4.3.2, 4.3.1, 4.3.0
Build Dependencies: [perl](#perl), [lua](#lua)
Link Dependencies: [lua](#lua)
Run Dependencies: [perl](#perl)
Description: Likwid is a simple to install and use toolsuite of command line applications for performance oriented programmers. It works for Intel and AMD processors on the Linux operating system. This version uses the perf_event backend, which reduces the feature set but allows user installs. See https://github.com/RRZE-HPC/likwid/wiki/TutorialLikwidPerf#feature-limitations for information.
---
linkphase3[¶](#linkphase3)
===
Homepage:
* <https://github.com/tdruet/LINKPHASE3>
Spack package:
* [linkphase3/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/linkphase3/package.py)
Versions: 2017-06-14
Description: Haplotype reconstruction in pedigreed populations.
---
linux-headers[¶](#linux-headers)
===
Homepage:
* <https://www.kernel.org/>
Spack package:
* [linux-headers/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/linux-headers/package.py)
Versions: 4.9.10
Description: The Linux kernel headers.
---
listres[¶](#listres)
===
Homepage:
* <http://cgit.freedesktop.org/xorg/app/listres>
Spack package:
* [listres/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/listres/package.py)
Versions: 1.0.3
Build Dependencies: [util-macros](#util-macros), pkgconfig, [libxaw](#libxaw), [libxt](#libxt), [libxmu](#libxmu), [xproto](#xproto)
Link Dependencies: [libxt](#libxt), [libxaw](#libxaw), [libxmu](#libxmu)
Description: The listres program generates a list of X resources for a widget in an X client written using a toolkit based on libXt.
---
llvm[¶](#llvm)
===
Homepage:
* <http://llvm.org/>
Spack package:
* [llvm/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/llvm/package.py)
Versions: develop, 7.0.0, 6.0.1, 6.0.0, 5.0.2, 5.0.1, 5.0.0, 4.0.1, 4.0.0, 3.9.1, 3.9.0, 3.8.1, 3.8.0, 3.7.1, 3.7.0, 3.6.2, 3.5.1, 3.0, flang-ppc64le-20180612, flang-develop, flang-20180612
Build Dependencies: [py-lit](#py-lit), [gmp](#gmp), [isl](#isl), [swig](#swig), [ncurses](#ncurses), [cmake](#cmake), [binutils](#binutils), [libedit](#libedit), [python](#python), [py-six](#py-six)
Link Dependencies: [gmp](#gmp), [isl](#isl), [swig](#swig), [ncurses](#ncurses), [binutils](#binutils), [libedit](#libedit), [python](#python), [py-six](#py-six)
Run Dependencies: [py-lit](#py-lit)
Description: The LLVM Project is a collection of modular and reusable compiler and toolchain technologies. Despite its name, LLVM has little to do with traditional virtual machines, though it does provide helpful libraries that can be used to build them. The name "LLVM" itself is not an acronym; it is the full name of the project.
---
llvm-openmp-ompt[¶](#llvm-openmp-ompt)
===
Homepage:
* <https://github.com/OpenMPToolsInterface/LLVM-openmp>
Spack package:
* [llvm-openmp-ompt/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/llvm-openmp-ompt/package.py)
Versions: 3.9.2b2, 3.9.2b, tr6_forwards
Build Dependencies: [ninja](#ninja), [cmake](#cmake), [llvm](#llvm)
Link Dependencies: [llvm](#llvm)
Description: The OpenMP subproject provides an OpenMP runtime for use with the OpenMP implementation in Clang. This branch includes experimental changes for OMPT, the OpenMP Tools interface.
---
lmdb[¶](#lmdb)
===
Homepage:
* <https://lmdb.tech/>
Spack package:
* [lmdb/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/lmdb/package.py)
Versions: 0.9.21, 0.9.16
Description: Symas LMDB is an extraordinarily fast, memory-efficient database we developed for the Symas OpenLDAP Project. With memory-mapped files, it has the read performance of a pure in-memory database while retaining the persistence of standard disk-based databases.
---
lmod[¶](#lmod)
===
Homepage:
* <https://www.tacc.utexas.edu/research-development/tacc-projects/lmod>
Spack package:
* [lmod/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/lmod/package.py)
Versions: 7.8, 7.7.29, 7.7.13, 7.7, 7.6.14, 7.4.11, 7.4.10, 7.4.9, 7.4.8, 7.4.5, 7.4.1, 7.3, 6.4.5, 6.4.1, 6.3.7, 6.0.1
Build Dependencies: [lua-luaposix](#lua-luaposix), [tcl](#tcl), [lua](#lua), [lua-luafilesystem](#lua-luafilesystem)
Link Dependencies: [lua](#lua)
Run Dependencies: [lua-luaposix](#lua-luaposix), [tcl](#tcl), [lua-luafilesystem](#lua-luafilesystem)
Description: Lmod is a Lua based module system that easily handles the MODULEPATH Hierarchical problem. Environment Modules provide a convenient way to dynamically change the users' environment through modulefiles. This includes easily adding or removing directories to the PATH environment variable.
Modulefiles for Library packages provide environment variables that specify where the library and header files can be found.
---
lndir[¶](#lndir)
===
Homepage:
* <http://cgit.freedesktop.org/xorg/util/lndir>
Spack package:
* [lndir/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/lndir/package.py)
Versions: 1.0.3
Build Dependencies: [xproto](#xproto), pkgconfig
Description: lndir - create a shadow directory of symbolic links to another directory tree.
---
log4cplus[¶](#log4cplus)
===
Homepage:
* <https://sourceforge.net/projects/log4cplus/>
Spack package:
* [log4cplus/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/log4cplus/package.py)
Versions: 2.0.1, 1.2.1, 1.2.0
Build Dependencies: [cmake](#cmake)
Description: log4cplus is a simple to use C++ logging API providing thread-safe, flexible, and arbitrarily granular control over log management and configuration.
---
log4cxx[¶](#log4cxx)
===
Homepage:
* <https://logging.apache.org/log4cxx/latest_stable/>
Spack package:
* [log4cxx/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/log4cxx/package.py)
Versions: 0.10.0
Build Dependencies: [apr](#apr), [apr-util](#apr-util), [zip](#zip)
Link Dependencies: [apr](#apr), [apr-util](#apr-util), [zip](#zip)
Description: A C++ port of Log4j.
---
loki[¶](#loki)
===
Homepage:
* <http://loki-lib.sourceforge.net>
Spack package:
* [loki/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/loki/package.py)
Versions: 0.1.7
Description: Loki is a C++ library of designs, containing flexible implementations of common design patterns and idioms.
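The log4cplus and log4cxx entries above both promise hierarchical, arbitrarily granular control over logging. The same idea can be sketched with Python's stdlib `logging` module (an analogy for illustration, not a binding for either library; the logger names are invented):

```python
# Hierarchical loggers: a child logger ("app.db") can override the
# threshold of its parent ("app"), which is the Log4j-style granularity
# log4cplus/log4cxx provide. Python's stdlib logging is used as a stand-in.
import logging
from io import StringIO

stream = StringIO()
handler = logging.StreamHandler(stream)
handler.setFormatter(logging.Formatter("%(name)s:%(levelname)s:%(message)s"))

app = logging.getLogger("app")
app.addHandler(handler)
app.setLevel(logging.WARNING)      # parent: only WARNING and above

db = logging.getLogger("app.db")   # child in the dotted-name hierarchy
db.setLevel(logging.DEBUG)         # child overrides the parent's threshold

app.debug("hidden")                # below the "app" logger's level, dropped
db.debug("shown")                  # allowed by "app.db", propagates to app's handler

print(stream.getvalue().strip())   # app.db:DEBUG:shown
```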
---
lordec[¶](#lordec)
===
Homepage:
* <http://www.atgc-montpellier.fr/lordec/>
Spack package:
* [lordec/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/lordec/package.py)
Versions: 0.8
Build Dependencies: [cmake](#cmake), [boost](#boost)
Link Dependencies: [boost](#boost)
Description: LoRDEC is a program to correct sequencing errors in long reads from 3rd generation sequencing with high error rate, and is especially intended for PacBio reads.
---
lrslib[¶](#lrslib)
===
Homepage:
* <http://cgm.cs.mcgill.ca/~avis/C/lrs.html>
Spack package:
* [lrslib/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/lrslib/package.py)
Versions: 6.2, 6.1, 6.0, 5.1, 4.3
Build Dependencies: [gmp](#gmp), [libtool](#libtool)
Link Dependencies: [gmp](#gmp)
Description: lrslib Ver 6.2 is a self-contained ANSI C implementation of the reverse search algorithm for vertex enumeration/convex hull problems and comes with a choice of three arithmetic packages.
---
lrzip[¶](#lrzip)
===
Homepage:
* <http://lrzip.kolivas.org>
Spack package:
* [lrzip/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/lrzip/package.py)
Versions: 0.630, 0.621, 0.616, 0.615, master
Build Dependencies: [lzo](#lzo), [zlib](#zlib), [bzip2](#bzip2)
Link Dependencies: [lzo](#lzo), [zlib](#zlib), [bzip2](#bzip2)
Description: A compression utility that excels at compressing large files (usually > 10-50 MB). Larger files and/or more free RAM means that the utility will be able to more effectively compress your files (i.e. faster / smaller size), especially if the filesize(s) exceed 100 MB. You can either choose to optimise for speed (fast compression / decompression) or size, but not both.
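The lrzip entry above frames compression as a speed-versus-size choice across its zlib, lzo, and bzip2 backends. A minimal sketch of that trade-off, using the stdlib counterparts of two of those backends (zlib tuned for speed, bz2 tuned for ratio; the sample data is invented):

```python
# Speed-vs-size trade-off with two stdlib codecs: zlib at its fastest
# level and bz2 at its highest compression level. Both are lossless.
import bz2
import zlib

data = b"abcdefgh" * 50_000  # highly redundant input, ~400 kB

fast = zlib.compress(data, level=1)  # optimise for speed
small = bz2.compress(data, 9)        # optimise for size

# Both round-trip losslessly; only speed and output size differ.
assert zlib.decompress(fast) == data
assert bz2.decompress(small) == data
print(len(data), len(fast), len(small))
```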
---
lsof[¶](#lsof)
===
Homepage:
* <https://people.freebsd.org/~abe/>
Spack package:
* [lsof/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/lsof/package.py)
Versions: 4.89
Description: Lsof displays information about files open to Unix processes.
---
ltrace[¶](#ltrace)
===
Homepage:
* <https://www.ltrace.org>
Spack package:
* [ltrace/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/ltrace/package.py)
Versions: 0.7.3
Description: Ltrace intercepts and records dynamic library calls which are called by an executed process and the signals received by that process. It can also intercept and print the system calls executed by the program.
---
lua[¶](#lua)
===
Homepage:
* <http://www.lua.org>
Spack package:
* [lua/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/lua/package.py)
Versions: 5.3.4, 5.3.2, 5.3.1, 5.3.0, 5.2.4, 5.2.3, 5.2.2, 5.2.1, 5.2.0, 5.1.5, 5.1.4, 5.1.3
Build Dependencies: [readline](#readline), [ncurses](#ncurses)
Link Dependencies: [readline](#readline), [ncurses](#ncurses)
Run Dependencies: [unzip](#unzip)
Description: The Lua programming language interpreter and library.
---
lua-bitlib[¶](#lua-bitlib)
===
Homepage:
* <http://luaforge.net/projects/bitlib>
Spack package:
* [lua-bitlib/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/lua-bitlib/package.py)
Versions: 23
Build Dependencies: [lua](#lua)
Link Dependencies: [lua](#lua)
Description: LuaJIT-like bitwise operations for Lua.
---
lua-jit[¶](#lua-jit)
===
Homepage:
* <http://www.luajit.org>
Spack package:
* [lua-jit/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/lua-jit/package.py)
Versions: 2.0.4
Description: Fast, flexible JITed Lua.
---
lua-lpeg[¶](#lua-lpeg)
===
Homepage:
* <http://www.inf.puc-rio.br/~roberto/lpeg/>
Spack package:
* [lua-lpeg/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/lua-lpeg/package.py)
Versions: 0.12.1
Build Dependencies: [lua](#lua)
Link Dependencies: [lua](#lua)
Description: Pattern-matching for Lua.
---
lua-luafilesystem[¶](#lua-luafilesystem)
===
Homepage:
* <http://keplerproject.github.io/luafilesystem>
Spack package:
* [lua-luafilesystem/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/lua-luafilesystem/package.py)
Versions: 1_6_3
Build Dependencies: [lua](#lua), [git](#git)
Link Dependencies: [lua](#lua)
Description: LuaFileSystem is a Lua library developed to complement the set of functions related to file systems offered by the standard Lua distribution. LuaFileSystem offers a portable way to access the underlying directory structure and file attributes.
LuaFileSystem is free software and uses the same license as Lua 5.1.
---
lua-luaposix[¶](#lua-luaposix)
===
Homepage:
* <https://github.com/luaposix/luaposix/>
Spack package:
* [lua-luaposix/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/lua-luaposix/package.py)
Versions: 33.4.0
Build Dependencies: [lua](#lua)
Link Dependencies: [lua](#lua)
Description: Lua POSIX bindings, including ncurses.
---
lua-mpack[¶](#lua-mpack)
===
Homepage:
* <https://luarocks.org/modules/tarruda/mpack>
Spack package:
* [lua-mpack/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/lua-mpack/package.py)
Versions: 1.0.0-0
Build Dependencies: [lua](#lua), [msgpack-c](#msgpack-c)
Link Dependencies: [lua](#lua), [msgpack-c](#msgpack-c)
Description: Lua bindings to libmpack.
---
luit[¶](#luit)
===
Homepage:
* <http://cgit.freedesktop.org/xorg/app/luit>
Spack package:
* [luit/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/luit/package.py)
Versions: 1.1.1
Build Dependencies: [libfontenc](#libfontenc), pkgconfig, [libx11](#libx11), [util-macros](#util-macros)
Link Dependencies: [libfontenc](#libfontenc), [libx11](#libx11)
Description: Luit is a filter that can be run between an arbitrary application and a UTF-8 terminal emulator such as xterm. It will convert application output from the locale's encoding into UTF-8, and convert terminal input from UTF-8 into the locale's encoding.
---
lulesh[¶](#lulesh)
===
Homepage:
* <https://computation.llnl.gov/projects/co-design/lulesh>
Spack package:
* [lulesh/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/lulesh/package.py)
Versions: 2.0.3
Build Dependencies: mpi, [silo](#silo), [hdf5](#hdf5)
Link Dependencies: mpi, [silo](#silo), [hdf5](#hdf5)
Description: LULESH is a highly simplified proxy application that represents the numerical algorithms, data motion, and programming style typical in scientific C or C++ based applications.
It is hard-coded to solve only a Sedov blast problem with an analytic answer.
---
lumpy-sv[¶](#lumpy-sv)
===
Homepage:
* <https://github.com/arq5x/lumpy-sv>
Spack package:
* [lumpy-sv/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/lumpy-sv/package.py)
Versions: 0.2.13
Build Dependencies: [htslib](#htslib)
Link Dependencies: [htslib](#htslib)
Description: A probabilistic framework for structural variant discovery.
---
lwgrp[¶](#lwgrp)
===
Homepage:
* <https://github.com/hpc/lwgrp>
Spack package:
* [lwgrp/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/lwgrp/package.py)
Versions: 1.0.2
Build Dependencies: mpi
Link Dependencies: mpi
Description: The light-weight group library provides process group representations using O(log N) space and time.
---
lwm2[¶](#lwm2)
===
Homepage:
* <https://jay.grs.rwth-aachen.de/redmine/projects/lwm2>
Spack package:
* [lwm2/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/lwm2/package.py)
Versions: torus
Build Dependencies: mpi, [papi](#papi)
Link Dependencies: mpi, [papi](#papi)
Description: LWM2: Light Weight Measurement Module. This is a PMPI module that can collect a number of time-sliced MPI and POSIX I/O measurements from a program.
---
lz4[¶](#lz4)
===
Homepage:
* <http://lz4.github.io/lz4/>
Spack package:
* [lz4/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/lz4/package.py)
Versions: 1.8.1.2, 1.7.5, 1.3.1
Test Dependencies: [valgrind](#valgrind)
Description: LZ4 is a lossless compression algorithm, providing compression speed at 400 MB/s per core, scalable with multi-core CPUs. It also features an extremely fast decoder, with speed in multiple GB/s per core, typically reaching RAM speed limits on multi-core systems.
---
lzma[¶](#lzma)
===
Homepage:
* <http://tukaani.org/lzma/>
Spack package:
* [lzma/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/lzma/package.py)
Versions: 4.32.7
Description: LZMA Utils are legacy data compression software with high compression ratio. LZMA Utils are no longer developed, although critical bugs may be fixed as long as fixing them doesn't require huge changes to the code. Users of LZMA Utils should move to XZ Utils. XZ Utils support the legacy .lzma format used by LZMA Utils, and can also emulate the command line tools of LZMA Utils. This should make transition from LZMA Utils to XZ Utils relatively easy.
---
lzo[¶](#lzo)
===
Homepage:
* <https://www.oberhumer.com/opensource/lzo/>
Spack package:
* [lzo/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/lzo/package.py)
Versions: 2.09, 2.08, 2.07, 2.06, 2.05
Description: Real-time data compression library.
---
m4[¶](#m4)
===
Homepage:
* <https://www.gnu.org/software/m4/m4.html>
Spack package:
* [m4/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/m4/package.py)
Versions: 1.4.18, 1.4.17
Build Dependencies: [libsigsegv](#libsigsegv)
Link Dependencies: [libsigsegv](#libsigsegv)
Description: GNU M4 is an implementation of the traditional Unix macro processor.
---
macsio[¶](#macsio)
===
Homepage:
* <https://computation.llnl.gov/projects/co-design/macsio>
Spack package:
* [macsio/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/macsio/package.py)
Versions: develop, 1.1, 1.0
Build Dependencies: [exodusii](#exodusii), mpi, [silo](#silo), [hdf5](#hdf5), [cmake](#cmake), [json-cwx](#json-cwx), [typhonio](#typhonio), [scr](#scr)
Link Dependencies: [exodusii](#exodusii), mpi, [silo](#silo), [json-cwx](#json-cwx), [hdf5](#hdf5), [typhonio](#typhonio), [scr](#scr)
Description: A Multi-purpose, Application-Centric, Scalable I/O Proxy Application.
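The lzma entry above notes that XZ Utils can still read the legacy .lzma format, which is what makes the recommended migration easy. Python's stdlib `lzma` module exposes both containers, so the compatibility claim can be demonstrated directly (the sample data is invented):

```python
# Compress the same payload into both the legacy .lzma ("alone") container
# and the current .xz container, then show that one decompressor handles
# both, which is the migration path the description recommends.
import lzma

data = b"spack " * 1000

legacy = lzma.compress(data, format=lzma.FORMAT_ALONE)  # old .lzma container
modern = lzma.compress(data, format=lzma.FORMAT_XZ)     # current .xz container

# lzma.decompress() auto-detects the container format by default.
assert lzma.decompress(legacy) == data
assert lzma.decompress(modern) == data
print(modern[:6])  # xz magic bytes: b'\xfd7zXZ\x00'
```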
---
mad-numdiff[¶](#mad-numdiff)
===
Homepage:
* <https://github.com/quinoacomputing/ndiff>
Spack package:
* [mad-numdiff/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/mad-numdiff/package.py)
Versions: develop, 20150724
Build Dependencies: [cmake](#cmake)
Description: Compare unformatted text files with numerical content.
---
mafft[¶](#mafft)
===
Homepage:
* <http://mafft.cbrc.jp/alignment/software/index.html>
Spack package:
* [mafft/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/mafft/package.py)
Versions: 7.221
Description: MAFFT is a multiple sequence alignment program for Unix-like operating systems. It offers a range of multiple alignment methods, L-INS-i (accurate; for alignment of <~200 sequences), FFT-NS-2 (fast; for alignment of <~30,000 sequences), etc.
---
magics[¶](#magics)
===
Homepage:
* <https://software.ecmwf.int/wiki/display/MAGP/Magics>
Spack package:
* [magics/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/magics/package.py)
Versions: 2.34.3, 2.34.1, 2.33.0, 2.32.0, 2.31.0, 2.29.6, 2.29.4, 2.29.0
Build Dependencies: pkgconfig, [libpng](#libpng), [grib-api](#grib-api), [perl](#perl), [cmake](#cmake), [py-numpy](#py-numpy), [expat](#expat), [qt](#qt), [pango](#pango), [zlib](#zlib), [netcdf-cxx](#netcdf-cxx), [python](#python), [eccodes](#eccodes), [proj](#proj), [swig](#swig), [boost](#boost), [libemos](#libemos), [perl-xml-parser](#perl-xml-parser)
Link Dependencies: [libpng](#libpng), [grib-api](#grib-api), [expat](#expat), [qt](#qt), [pango](#pango), [zlib](#zlib), [netcdf-cxx](#netcdf-cxx), [eccodes](#eccodes), [proj](#proj), [boost](#boost), [libemos](#libemos), [python](#python)
Run Dependencies: [py-numpy](#py-numpy), [python](#python)
Description: Magics is the latest generation of the ECMWF's meteorological plotting software MAGICS.
Although completely redesigned in C++, it is intended to be as backwards-compatible as possible with the Fortran interface.

---

magma[¶](#magma)
===

Homepage:
* <http://icl.cs.utk.edu/magma/>

Spack package:
* [magma/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/magma/package.py)

Versions: 2.4.0, 2.3.0, 2.2.0

Build Dependencies: [cuda](#cuda), lapack, blas, [cmake](#cmake)

Link Dependencies: [cuda](#cuda), blas, lapack

Description: The MAGMA project aims to develop a dense linear algebra library similar to LAPACK but for heterogeneous/hybrid architectures, starting with current "Multicore+GPU" systems.

---

makedepend[¶](#makedepend)
===

Homepage:
* <http://cgit.freedesktop.org/xorg/util/makedepend>

Spack package:
* [makedepend/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/makedepend/package.py)

Versions: 1.0.5

Build Dependencies: [xproto](#xproto), pkgconfig

Description: makedepend - create dependencies in makefiles.

---

mallocmc[¶](#mallocmc)
===

Homepage:
* <https://github.com/ComputationalRadiationPhysics/mallocMC>

Spack package:
* [mallocmc/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/mallocmc/package.py)

Versions: develop, 2.2.0crp, 2.1.0crp, 2.0.1crp, 2.0.0crp, 1.0.2crp, master

Build Dependencies: [cmake](#cmake)

Link Dependencies: [cuda](#cuda), [boost](#boost)

Description: mallocMC: Memory Allocator for Many Core Architectures. This project provides a framework for fast memory managers on many core accelerators. Currently, it supports NVIDIA GPUs of compute capability sm_20 or higher through the ScatterAlloc algorithm. mallocMC is header-only, but requires a few other C++ libraries to be available.
---

man-db[¶](#man-db)
===

Homepage:
* <http://www.nongnu.org/man-db/>

Spack package:
* [man-db/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/man-db/package.py)

Versions: 2.7.6.1

Build Dependencies: [bzip2](#bzip2), [libpipeline](#libpipeline), [lzma](#lzma), [groff](#groff), [autoconf](#autoconf), [automake](#automake), [gettext](#gettext), [flex](#flex), [xz](#xz)

Link Dependencies: [bzip2](#bzip2), [libpipeline](#libpipeline), [lzma](#lzma), [groff](#groff), [autoconf](#autoconf), [automake](#automake), [gettext](#gettext), [flex](#flex), [xz](#xz)

Run Dependencies: [groff](#groff), [bzip2](#bzip2), [xz](#xz), [lzma](#lzma)

Description: man-db is an implementation of the standard Unix documentation system accessed using the man command. It uses a Berkeley DB database in place of the traditional flat-text whatis databases.

---

manta[¶](#manta)
===

Homepage:
* <https://github.com/Illumina/manta>

Spack package:
* [manta/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/manta/package.py)

Versions: 1.4.0, 1.3.2, 1.3.1, 1.3.0

Build Dependencies: [cmake](#cmake), [boost](#boost), [python](#python)

Run Dependencies: [python](#python)

Description: Structural variant and indel caller for mapped sequencing data

---

maq[¶](#maq)
===

Homepage:
* <http://maq.sourceforge.net/>

Spack package:
* [maq/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/maq/package.py)

Versions: 0.7.1, 0.5.0

Description: Maq is software that builds mapping assemblies from short reads generated by the next-generation sequencing machines.
---

mariadb[¶](#mariadb)
===

Homepage:
* <https://mariadb.org/about/>

Spack package:
* [mariadb/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/mariadb/package.py)

Versions: 10.2.8, 10.1.23, 10.1.14, 5.5.56, 5.5.49

Build Dependencies: [zlib](#zlib), [jemalloc](#jemalloc), [libevent](#libevent), [cmake](#cmake), [libaio](#libaio), [boost](#boost), [ncurses](#ncurses), [libedit](#libedit)

Link Dependencies: [zlib](#zlib), [jemalloc](#jemalloc), [libevent](#libevent), [libaio](#libaio), [boost](#boost), [ncurses](#ncurses), [libedit](#libedit)

Description: MariaDB turns data into structured information in a wide array of applications, ranging from banking to websites. It is an enhanced, drop-in replacement for MySQL. MariaDB is used because it is fast, scalable and robust, with a rich ecosystem of storage engines, plugins and many other tools that make it very versatile for a wide variety of use cases.

---

masa[¶](#masa)
===

Homepage:
* <https://github.com/manufactured-solutions/MASA>

Spack package:
* [masa/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/masa/package.py)

Versions: master

Build Dependencies: [swig](#swig), [autoconf](#autoconf), [automake](#automake), [libtool](#libtool), [python](#python), [metaphysicl](#metaphysicl)

Link Dependencies: [python](#python), [metaphysicl](#metaphysicl)

Description: MASA (Manufactured Analytical Solution Abstraction) is a library written in C++ (with C, python and Fortran90 interfaces) which provides a suite of manufactured solutions for the software verification of partial differential equation solvers in multiple dimensions.
---

masurca[¶](#masurca)
===

Homepage:
* <http://www.genome.umd.edu/masurca.html>

Spack package:
* [masurca/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/masurca/package.py)

Versions: 3.2.6, 3.2.3

Build Dependencies: [perl](#perl), [zlib](#zlib), [boost](#boost)

Link Dependencies: [zlib](#zlib), [boost](#boost)

Run Dependencies: [perl](#perl)

Description: MaSuRCA is whole genome assembly software. It combines the efficiency of the de Bruijn graph and Overlap-Layout-Consensus (OLC) approaches.

---

matio[¶](#matio)
===

Homepage:
* <http://sourceforge.net/projects/matio/>

Spack package:
* [matio/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/matio/package.py)

Versions: 1.5.9, 1.5.2

Build Dependencies: [zlib](#zlib), [hdf5](#hdf5)

Link Dependencies: [zlib](#zlib), [hdf5](#hdf5)

Description: matio is a C library for reading and writing Matlab MAT files

---

matlab[¶](#matlab)
===

Homepage:
* <https://www.mathworks.com/products/matlab.html>

Spack package:
* [matlab/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/matlab/package.py)

Versions: R2018b, R2016b

Description: MATLAB (MATrix LABoratory) is a multi-paradigm numerical computing environment and fourth-generation programming language. A proprietary programming language developed by MathWorks, MATLAB allows matrix manipulations, plotting of functions and data, implementation of algorithms, creation of user interfaces, and interfacing with programs written in other languages, including C, C++, C#, Java, Fortran and Python. Note: MATLAB is licensed software. You will need to create an account on the MathWorks homepage and download MATLAB yourself. Spack will search your current directory for the download file. Alternatively, add this file to a mirror so that Spack can find it.
For instructions on how to set up a mirror, see http://spack.readthedocs.io/en/latest/mirrors.html

---

maven[¶](#maven)
===

Homepage:
* <https://maven.apache.org/index.html>

Spack package:
* [maven/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/maven/package.py)

Versions: 3.5.0, 3.3.9

Build Dependencies: java

Link Dependencies: java

Description: Apache Maven is a software project management and comprehension tool.

---

maverick[¶](#maverick)
===

Homepage:
* <https://github.com/bobverity/MavericK>

Spack package:
* [maverick/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/maverick/package.py)

Versions: 1.0.4

Description: MavericK is a program for inferring population structure on the basis of genetic information.

---

mawk[¶](#mawk)
===

Homepage:
* <http://invisible-island.net/mawk/mawk.html>

Spack package:
* [mawk/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/mawk/package.py)

Versions: 1.3.4

Description: mawk is an interpreter for the AWK Programming Language.

---

mbedtls[¶](#mbedtls)
===

Homepage:
* <https://tls.mbed.org>

Spack package:
* [mbedtls/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/mbedtls/package.py)

Versions: 2.3.0, 2.2.1, 2.2.0, 2.1.4, 2.1.3, 1.3.16

Build Dependencies: [cmake](#cmake)

Description: mbed TLS (formerly known as PolarSSL) makes it trivially easy for developers to include cryptographic and SSL/TLS capabilities in their (embedded) products, facilitating this functionality with a minimal coding footprint.
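The MATLAB entry above notes that Spack cannot download licensed software for you: the installer must be staged where Spack can find it, either in the current directory or in a mirror. A mirror is just a directory tree with one subdirectory per package. The sketch below uses hypothetical paths and a hypothetical installer filename; the `spack mirror add` registration step is shown only as a comment and assumes the standard Spack CLI.

```shell
# Stage a manually downloaded installer in a local mirror directory.
# Layout assumption: <mirror>/<package>/<archive>.
MIRROR=/tmp/spack-mirror
mkdir -p "$MIRROR/matlab"

# Copy the installer obtained from MathWorks (hypothetical filename);
# print a reminder if it has not been downloaded yet.
cp matlab_R2018b_glnxa64.zip "$MIRROR/matlab/" 2>/dev/null \
  || echo "place the MathWorks installer in the current directory first"

# Then register the mirror with Spack (assumed command, run manually):
# spack mirror add local "file://$MIRROR"
ls "$MIRROR"
```

The same staging approach applies to any licensed package in this index that Spack cannot fetch directly.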
---

mc[¶](#mc)
===

Homepage:
* <https://midnight-commander.org>

Spack package:
* [mc/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/mc/package.py)

Versions: 4.8.20

Build Dependencies: pkgconfig, [libssh2](#libssh2), [glib](#glib), [ncurses](#ncurses)

Link Dependencies: [glib](#glib), [libssh2](#libssh2), [ncurses](#ncurses)

Description: The GNU Midnight Commander is a visual file manager.

---

mcl[¶](#mcl)
===

Homepage:
* <https://www.micans.org/mcl/index.html>

Spack package:
* [mcl/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/mcl/package.py)

Versions: 14-137

Description: The MCL algorithm is short for the Markov Cluster Algorithm, a fast and scalable unsupervised cluster algorithm for graphs (also known as networks) based on simulation of (stochastic) flow in graphs.

---

mdtest[¶](#mdtest)
===

Homepage:
* <https://github.com/LLNL/mdtest>

Spack package:
* [mdtest/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/mdtest/package.py)

Versions: 1.9.3

Build Dependencies: mpi

Link Dependencies: mpi

Description: mdtest is an MPI-coordinated metadata benchmark test that performs open/stat/close operations on files and directories and then reports the performance.

---

med[¶](#med)
===

Homepage:
* <http://docs.salome-platform.org/latest/dev/MEDCoupling/med-file.html>

Spack package:
* [med/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/med/package.py)

Versions: 3.2.0

Build Dependencies: [cmake](#cmake), mpi, [hdf5](#hdf5)

Link Dependencies: mpi, [hdf5](#hdf5)

Description: The MED file format is a specialization of the HDF5 standard.
---

meep[¶](#meep)
===

Homepage:
* <http://ab-initio.mit.edu/wiki/index.php/Meep>

Spack package:
* [meep/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/meep/package.py)

Versions: 1.3, 1.2.1, 1.1.1

Build Dependencies: mpi, [gsl](#gsl), [guile](#guile), [libctl](#libctl), [harminv](#harminv), [hdf5](#hdf5), blas, lapack

Link Dependencies: mpi, [gsl](#gsl), [guile](#guile), [libctl](#libctl), [harminv](#harminv), [hdf5](#hdf5), blas, lapack

Description: Meep (or MEEP) is a free finite-difference time-domain (FDTD) simulation software package developed at MIT to model electromagnetic systems.

---

mefit[¶](#mefit)
===

Homepage:
* <https://github.com/nisheth/MeFiT>

Spack package:
* [mefit/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/mefit/package.py)

Versions: 1.0

Build Dependencies: [py-htseq](#py-htseq), [py-numpy](#py-numpy), [casper](#casper), [jellyfish](#jellyfish)

Link Dependencies: [py-htseq](#py-htseq), [py-numpy](#py-numpy), [casper](#casper), [jellyfish](#jellyfish)

Description: This pipeline will merge overlapping paired-end reads, calculate merge statistics, and filter reads for quality.

---

megahit[¶](#megahit)
===

Homepage:
* <https://github.com/voutcn/megahit>

Spack package:
* [megahit/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/megahit/package.py)

Versions: 1.1.3

Build Dependencies: [zlib](#zlib)

Link Dependencies: [zlib](#zlib)

Description: MEGAHIT: An ultra-fast single-node solution for large and complex metagenomics assembly via succinct de Bruijn graph

---

memaxes[¶](#memaxes)
===

Homepage:
* <https://github.com/llnl/MemAxes>

Spack package:
* [memaxes/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/memaxes/package.py)

Versions: 0.5

Build Dependencies: [cmake](#cmake), [qt](#qt)

Link Dependencies: [qt](#qt)

Description: MemAxes is a visualizer for sampled memory trace data.
---

meme[¶](#meme)
===

Homepage:
* <http://meme-suite.org>

Spack package:
* [meme/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/meme/package.py)

Versions: 4.12.0, 4.11.4

Build Dependencies: [perl](#perl), [image-magick](#image-magick), mpi, [python](#python), [perl-xml-parser](#perl-xml-parser)

Link Dependencies: [zlib](#zlib), mpi, [libgcrypt](#libgcrypt)

Run Dependencies: [perl](#perl), [image-magick](#image-magick), [python](#python), [perl-xml-parser](#perl-xml-parser)

Description: The MEME Suite allows the biologist to discover novel motifs in collections of unaligned nucleotide or protein sequences, and to perform a wide variety of other motif-based analyses.

---

memkind[¶](#memkind)
===

Homepage:
* <https://github.com/memkind/memkind>

Spack package:
* [memkind/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/memkind/package.py)

Versions: 1.7.0

Build Dependencies: [autoconf](#autoconf), [automake](#automake), [libtool](#libtool), [m4](#m4), [numactl](#numactl)

Link Dependencies: [numactl](#numactl)

Description: The memkind library is a user extensible heap manager built on top of jemalloc which enables control of memory characteristics and a partitioning of the heap between kinds of memory. The kinds of memory are defined by operating system memory policies that have been applied to virtual address ranges. Memory characteristics supported by memkind without user extension include control of NUMA and page size features. The jemalloc non-standard interface has been extended to enable specialized arenas to make requests for virtual memory from the operating system through the memkind partition interface. Through the other memkind interfaces the user can control and extend memory partition features and allocate memory while selecting enabled features.
---

meraculous[¶](#meraculous)
===

Homepage:
* <http://jgi.doe.gov/data-and-tools/meraculous/>

Spack package:
* [meraculous/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/meraculous/package.py)

Versions: 2.2.5.1, 2.2.4

Build Dependencies: [perl](#perl), [cmake](#cmake), [boost](#boost), [gnuplot](#gnuplot), [perl-log-log4perl](#perl-log-log4perl)

Link Dependencies: [boost](#boost), [gnuplot](#gnuplot)

Run Dependencies: [perl](#perl), [perl-log-log4perl](#perl-log-log4perl)

Description: Meraculous is a whole genome assembler for Next Generation Sequencing data geared for large genomes.

---

mercurial[¶](#mercurial)
===

Homepage:
* <https://www.mercurial-scm.org>

Spack package:
* [mercurial/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/mercurial/package.py)

Versions: 4.4.1, 4.1.2, 3.9.1, 3.9, 3.8.4, 3.8.3, 3.8.2, 3.8.1

Build Dependencies: [py-docutils](#py-docutils), [py-pygments](#py-pygments), [python](#python), [py-certifi](#py-certifi)

Link Dependencies: [python](#python)

Run Dependencies: [py-pygments](#py-pygments), [python](#python), [py-certifi](#py-certifi)

Description: Mercurial is a free, distributed source control management tool.
---

mesa[¶](#mesa)
===

Homepage:
* <http://www.mesa3d.org>

Spack package:
* [mesa/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/mesa/package.py)

Versions: 18.1.2, 17.2.3, 17.2.2, 17.1.5, 17.1.4, 17.1.3, 17.1.1, 13.0.6, 12.0.6, 12.0.3

Build Dependencies: pkgconfig, [libdrm](#libdrm), [libelf](#libelf), [libpthread-stubs](#libpthread-stubs), [py-mako](#py-mako), [libxshmfence](#libxshmfence), [zlib](#zlib), [presentproto](#presentproto), [libx11](#libx11), [openssl](#openssl), [libxfixes](#libxfixes), [dri3proto](#dri3proto), [gettext](#gettext), [dri2proto](#dri2proto), [icu4c](#icu4c), [libxv](#libxv), [libxvmc](#libxvmc), [python](#python), [libxcb](#libxcb), [glproto](#glproto), [bison](#bison), [expat](#expat), [xproto](#xproto), [libxdamage](#libxdamage), [fixesproto](#fixesproto), [libxext](#libxext), [binutils](#binutils), [flex](#flex), [llvm](#llvm), [damageproto](#damageproto), [py-argparse](#py-argparse)

Link Dependencies: [libdrm](#libdrm), [libelf](#libelf), [libpthread-stubs](#libpthread-stubs), [libxshmfence](#libxshmfence), [zlib](#zlib), [presentproto](#presentproto), [libxext](#libxext), [openssl](#openssl), [libxfixes](#libxfixes), [gettext](#gettext), [icu4c](#icu4c), [libxv](#libxv), [libxvmc](#libxvmc), [libxcb](#libxcb), [glproto](#glproto), [libxdamage](#libxdamage), [expat](#expat), [xproto](#xproto), [damageproto](#damageproto), [libx11](#libx11), [llvm](#llvm), [fixesproto](#fixesproto)

Description: Mesa is an open-source implementation of the OpenGL specification - a system for rendering interactive 3D graphics.

---

mesa-glu[¶](#mesa-glu)
===

Homepage:
* <https://www.mesa3d.org>

Spack package:
* [mesa-glu/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/mesa-glu/package.py)

Versions: 9.0.0

Build Dependencies: [mesa](#mesa)

Link Dependencies: [mesa](#mesa)

Description: This package provides the Mesa OpenGL Utility library.
---

meshkit[¶](#meshkit)
===

Homepage:
* <http://sigma.mcs.anl.gov/meshkit-library>

Spack package:
* [meshkit/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/meshkit/package.py)

Versions: 1.5.0

Build Dependencies: [netgen](#netgen), [cgm](#cgm), mpi, [moab](#moab)

Link Dependencies: [netgen](#netgen), [cgm](#cgm), mpi, [moab](#moab)

Description: MeshKit is an open-source library of mesh generation functionality. Its design philosophy is two-fold: it provides a collection of meshing algorithms for use in real meshing problems, along with other tools commonly needed to support mesh generation

---

meson[¶](#meson)
===

Homepage:
* <http://mesonbuild.com/>

Spack package:
* [meson/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/meson/package.py)

Versions: 0.42.0, 0.41.2, 0.41.1

Build Dependencies: [ninja](#ninja), [python](#python)

Link Dependencies: [python](#python)

Run Dependencies: [ninja](#ninja), [python](#python)

Description: Meson is a portable open source build system meant to be both extremely fast, and as user friendly as possible.

---

mesquite[¶](#mesquite)
===

Homepage:
* <https://software.sandia.gov/mesquite>

Spack package:
* [mesquite/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/mesquite/package.py)

Versions: 2.99, 2.3.0, 2.2.0

Build Dependencies: mpi

Link Dependencies: mpi

Description: Mesquite (Mesh Quality Improvement Toolkit) is designed to provide a stand-alone, portable, comprehensive suite of mesh quality improvement algorithms and components that can be used to construct custom quality improvement algorithms. Mesquite provides a robust and effective mesh improvement toolkit that allows both meshing researchers and application scientists to benefit from the latest developments in mesh quality control and improvement.
---

metabat[¶](#metabat)
===

Homepage:
* <https://bitbucket.org/berkeleylab/metabat>

Spack package:
* [metabat/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/metabat/package.py)

Versions: 2.12.1

Build Dependencies: [boost](#boost), [scons](#scons)

Run Dependencies: [perl](#perl), [boost](#boost)

Description: MetaBAT, an efficient tool for accurately reconstructing single genomes from complex microbial communities.

---

metaphysicl[¶](#metaphysicl)
===

Homepage:
* <https://github.com/roystgnr/MetaPhysicL>

Spack package:
* [metaphysicl/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/metaphysicl/package.py)

Versions: 0.2.0

Build Dependencies: [autoconf](#autoconf), [automake](#automake), [libtool](#libtool)

Description: Metaprogramming and operator-overloaded classes for numerical simulations.

---

metis[¶](#metis)
===

Homepage:
* <http://glaros.dtc.umn.edu/gkhome/metis/metis/overview>

Spack package:
* [metis/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/metis/package.py)

Versions: 5.1.0, 5.0.2, 4.0.3

Build Dependencies: [cmake](#cmake)

Description: METIS is a set of serial programs for partitioning graphs, partitioning finite element meshes, and producing fill reducing orderings for sparse matrices. The algorithms implemented in METIS are based on the multilevel recursive-bisection, multilevel k-way, and multi-constraint partitioning schemes.
---

mfem[¶](#mfem)
===

Homepage:
* <http://www.mfem.org>

Spack package:
* [mfem/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/mfem/package.py)

Versions: develop, 3.4.0, 3.3.2, 3.3, 3.2, 3.1, laghos-v1.0

Build Dependencies: [pumi](#pumi), [petsc](#petsc), [hypre](#hypre), [mpfr](#mpfr), [conduit](#conduit), [gnutls](#gnutls), lapack, [zlib](#zlib), mpi, [netcdf](#netcdf), [superlu-dist](#superlu-dist), [suite-sparse](#suite-sparse), [sundials](#sundials), blas, unwind, [metis](#metis)

Link Dependencies: [pumi](#pumi), [petsc](#petsc), [hypre](#hypre), [mpfr](#mpfr), [conduit](#conduit), [gnutls](#gnutls), lapack, [zlib](#zlib), mpi, [netcdf](#netcdf), [superlu-dist](#superlu-dist), [suite-sparse](#suite-sparse), [sundials](#sundials), blas, unwind, [metis](#metis)

Description: Free, lightweight, scalable C++ library for finite element methods.

---

microbiomeutil[¶](#microbiomeutil)
===

Homepage:
* <http://microbiomeutil.sourceforge.net/>

Spack package:
* [microbiomeutil/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/microbiomeutil/package.py)

Versions: 20110519

Build Dependencies: [perl](#perl), [cdbfasta](#cdbfasta), [blast-plus](#blast-plus)

Link Dependencies: [cdbfasta](#cdbfasta), [blast-plus](#blast-plus)

Run Dependencies: [perl](#perl)

Description: Microbiome analysis utilities

---

minced[¶](#minced)
===

Homepage:
* <https://github.com/ctSkennerton/minced>

Spack package:
* [minced/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/minced/package.py)

Versions: 0.2.0

Build Dependencies: java

Run Dependencies: java

Description: MinCED is a program to find Clustered Regularly Interspaced Short Palindromic Repeats (CRISPRs) in full genomes or environmental datasets such as metagenomes, in which sequence size can be anywhere from 100 to 800 bp.
---

mindthegap[¶](#mindthegap)
===

Homepage:
* <https://gatb.inria.fr/software/mind-the-gap/>

Spack package:
* [mindthegap/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/mindthegap/package.py)

Versions: 2.0.2

Build Dependencies: [zlib](#zlib), [cmake](#cmake)

Link Dependencies: [zlib](#zlib)

Description: MindTheGap is a software that performs integrated detection and assembly of genomic insertion variants in NGS read datasets with respect to a reference genome.

---

miniaero[¶](#miniaero)
===

Homepage:
* <http://mantevo.org>

Spack package:
* [miniaero/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/miniaero/package.py)

Versions: 2016-11-11

Build Dependencies: [kokkos](#kokkos)

Link Dependencies: [kokkos](#kokkos)

Description: Proxy Application. MiniAero is a mini-application for the evaluation of programming models and hardware for next generation platforms.

---

miniamr[¶](#miniamr)
===

Homepage:
* <https://mantevo.org>

Spack package:
* [miniamr/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/miniamr/package.py)

Versions: 1.4.1, 1.4.0

Build Dependencies: mpi

Link Dependencies: mpi

Description: Proxy Application. 3D stencil calculation with Adaptive Mesh Refinement (AMR)

---

miniasm[¶](#miniasm)
===

Homepage:
* <https://github.com/lh3/miniasm>

Spack package:
* [miniasm/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/miniasm/package.py)

Versions: 2018-3-30

Build Dependencies: [zlib](#zlib)

Link Dependencies: [zlib](#zlib)

Description: Miniasm is a very fast OLC-based de novo assembler for noisy long reads.
---

miniconda2[¶](#miniconda2)
===

Homepage:
* <https://conda.io/miniconda.html>

Spack package:
* [miniconda2/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/miniconda2/package.py)

Versions: 4.5.11, 4.5.4, 4.3.30, 4.3.14, 4.3.11

Description: The minimalist bootstrap toolset for conda and Python2.

---

miniconda3[¶](#miniconda3)
===

Homepage:
* <https://conda.io/miniconda.html>

Spack package:
* [miniconda3/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/miniconda3/package.py)

Versions: 4.5.11, 4.5.4, 4.3.30, 4.3.14, 4.3.11

Description: The minimalist bootstrap toolset for conda and Python3.

---

minife[¶](#minife)
===

Homepage:
* <https://mantevo.org/>

Spack package:
* [minife/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/minife/package.py)

Versions: 2.1.0

Build Dependencies: mpi, [qthreads](#qthreads)

Link Dependencies: mpi, [qthreads](#qthreads)

Description: Proxy Application. MiniFE is a proxy application for unstructured implicit finite element codes.

---

minighost[¶](#minighost)
===

Homepage:
* <http://mantevo.org>

Spack package:
* [minighost/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/minighost/package.py)

Versions: 1.0.1

Build Dependencies: mpi

Link Dependencies: mpi

Description: Proxy Application. A Finite Difference proxy application which implements a difference stencil across a homogeneous three dimensional domain.
---

minigmg[¶](#minigmg)
===

Homepage:
* <http://crd.lbl.gov/departments/computer-science/PAR/research/previous-projects/miniGMG/>

Spack package:
* [minigmg/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/minigmg/package.py)

Versions: master

Build Dependencies: mpi

Link Dependencies: mpi

Description: miniGMG is a compact benchmark for understanding the performance challenges associated with geometric multigrid solvers found in applications built from AMR MG frameworks like CHOMBO or BoxLib when running on modern multi- and manycore-based supercomputers. It includes both productive reference examples as well as highly-optimized implementations for CPUs and GPUs. It is sufficiently general that it has been used to evaluate a broad range of research topics including PGAS programming models and algorithmic tradeoffs inherent in multigrid. miniGMG was developed under the CACHE Joint Math-CS Institute. Note, miniGMG code has been superseded by HPGMG.

---

minimap2[¶](#minimap2)
===

Homepage:
* <https://github.com/lh3/minimap2>

Spack package:
* [minimap2/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/minimap2/package.py)

Versions: 2.2

Build Dependencies: [py-mappy](#py-mappy)

Run Dependencies: [py-mappy](#py-mappy)

Description: Minimap2 is a versatile sequence alignment program that aligns DNA or mRNA sequences against a large reference database.

---

minimd[¶](#minimd)
===

Homepage:
* <http://mantevo.org>

Spack package:
* [minimd/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/minimd/package.py)

Versions: 1.2

Build Dependencies: mpi

Link Dependencies: mpi

Description: Proxy Application. A simple proxy for the force computations in a typical molecular dynamics applications.
---

miniqmc[¶](#miniqmc)
===

Homepage:
* <https://github.com/QMCPACK/miniqmc>

Spack package:
* [miniqmc/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/miniqmc/package.py)

Versions: 0.4.0, 0.3.0, 0.2.0

Build Dependencies: [cmake](#cmake), mpi, lapack

Link Dependencies: mpi, lapack

Description: A simplified real space QMC code for algorithm development, performance portability testing, and computer science experiments

---

minisign[¶](#minisign)
===

Homepage:
* <https://jedisct1.github.io/minisign/>

Spack package:
* [minisign/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/minisign/package.py)

Versions: 0.7

Build Dependencies: [cmake](#cmake), [libsodium](#libsodium)

Link Dependencies: [libsodium](#libsodium)

Description: Minisign is a dead simple tool to sign files and verify signatures.

---

minismac2d[¶](#minismac2d)
===

Homepage:
* <http://mantevo.org>

Spack package:
* [minismac2d/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/minismac2d/package.py)

Versions: 2.0

Build Dependencies: mpi

Link Dependencies: mpi

Description: Proxy Application. Solves the finite-differenced 2D incompressible Navier-Stokes equations with Spalart-Allmaras one-equation turbulence model on a structured body conforming grid.

---

minitri[¶](#minitri)
===

Homepage:
* <https://github.com/Mantevo/miniTri>

Spack package:
* [minitri/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/minitri/package.py)

Versions: 1.0

Build Dependencies: mpi

Link Dependencies: mpi

Description: A simple, triangle-based data analytics proxy application.
---

minivite[¶](#minivite)
===

Homepage:
* <http://hpc.pnl.gov/people/hala/grappolo.html>

Spack package:
* [minivite/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/minivite/package.py)

Versions: develop, 1.0

Build Dependencies: mpi

Link Dependencies: mpi

Description: miniVite is a proxy application that implements a single phase of Louvain method in distributed memory for graph community detection.

---

minixyce[¶](#minixyce)
===

Homepage:
* <https://mantevo.org>

Spack package:
* [minixyce/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/minixyce/package.py)

Versions: 1.0

Build Dependencies: mpi

Link Dependencies: mpi

Description: Proxy Application. A portable proxy of some of the key capabilities in the electrical modeling Xyce.

---

minuit[¶](#minuit)
===

Homepage:
* <https://seal.web.cern.ch/seal/snapshot/work-packages/mathlibs/minuit/home.html>

Spack package:
* [minuit/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/minuit/package.py)

Versions: 5.34.14, 5.28.00, 5.27.02, 5.24.00, 5.22.00, 5.21.06, 5.20.00, 5.18.00, 5.16.00, 5.14.00, 5.12.00, 5.10.00, 5.08.00, 1.7.9, 1.7.6, 1.7.1, 1.6.3, 1.6.0, 1.5.2, 1.5.0

Description: MINUIT is a physics analysis tool for function minimization.

---

mira[¶](#mira)
===

Homepage:
* <http://sourceforge.net/projects/mira-assembler/>

Spack package:
* [mira/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/mira/package.py)

Versions: 4.0.2

Build Dependencies: [gperftools](#gperftools), [boost](#boost), [expat](#expat)

Link Dependencies: [gperftools](#gperftools), [boost](#boost), [expat](#expat)

Description: MIRA is a multi-pass DNA sequence data assembler/mapper for whole genome and EST/RNASeq projects.
---

mirdeep2[¶](#mirdeep2)
===

Homepage:
* <https://www.mdc-berlin.de/8551903/en/>

Spack package:
* [mirdeep2/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/mirdeep2/package.py)

Versions: 0.0.8

Build Dependencies: [perl-pdf-api2](#perl-pdf-api2), [squid](#squid), [perl](#perl), [randfold](#randfold), [bowtie](#bowtie), [viennarna](#viennarna)

Link Dependencies: [randfold](#randfold), [bowtie](#bowtie), [viennarna](#viennarna), [squid](#squid)

Run Dependencies: [perl](#perl), [perl-pdf-api2](#perl-pdf-api2)

Description: miRDeep2 is a completely overhauled tool which discovers microRNA genes by analyzing sequenced RNAs.

---

mitofates[¶](#mitofates)
===

Homepage:
* <http://mitf.cbrc.jp/MitoFates/cgi-bin/top.cgi>

Spack package:
* [mitofates/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/mitofates/package.py)

Versions: 1.2

Build Dependencies: [libsvm](#libsvm)

Link Dependencies: [libsvm](#libsvm)

Run Dependencies: [perl](#perl), [perl-perl6-slurp](#perl-perl6-slurp), [perl-math-cephes](#perl-math-cephes), [perl-inline-c](#perl-inline-c)

Description: MitoFates predicts mitochondrial presequence, a cleavable localization signal located in N-terminal, and its cleaved position.
---
mitos[¶](#mitos)
===
Homepage:
* <https://github.com/llnl/Mitos>
Spack package:
* [mitos/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/mitos/package.py)
Versions: 0.9.2, 0.9.1
Build Dependencies: [cmake](#cmake), [dyninst](#dyninst), mpi, [hwloc](#hwloc)
Link Dependencies: [hwloc](#hwloc), mpi, [dyninst](#dyninst)
Description: Mitos is a library and a tool for collecting sampled memory performance data to view with MemAxes.
---
mkfontdir[¶](#mkfontdir)
===
Homepage:
* <http://cgit.freedesktop.org/xorg/app/mkfontdir>
Spack package:
* [mkfontdir/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/mkfontdir/package.py)
Versions: 1.0.7
Build Dependencies: [util-macros](#util-macros), pkgconfig
Run Dependencies: [mkfontscale](#mkfontscale)
Description: mkfontdir creates the fonts.dir files needed by the legacy X server core font system. The current implementation is a simple wrapper script around the mkfontscale program, which must be built and installed first.
---
mkfontscale[¶](#mkfontscale)
===
Homepage:
* <http://cgit.freedesktop.org/xorg/app/mkfontscale>
Spack package:
* [mkfontscale/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/mkfontscale/package.py)
Versions: 1.1.2
Build Dependencies: [util-macros](#util-macros), [xproto](#xproto), [freetype](#freetype), pkgconfig, [libfontenc](#libfontenc)
Link Dependencies: [libfontenc](#libfontenc), [freetype](#freetype)
Description: mkfontscale creates the fonts.scale and fonts.dir index files used by the legacy X11 font system.
---
mlhka[¶](#mlhka)
===
Homepage:
* <https://wright.eeb.utoronto.ca>
Spack package:
* [mlhka/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/mlhka/package.py)
Versions: 2.1
Description: A maximum likelihood ratio test of natural selection, using polymorphism and divergence data.
---
moab[¶](#moab)
===
Homepage:
* <https://bitbucket.org/fathomteam/moab>
Spack package:
* [moab/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/moab/package.py)
Versions: 5.0.0, 4.9.2, 4.9.1, 4.9.0, 4.8.2
Build Dependencies: [parmetis](#parmetis), [cgm](#cgm), [zoltan](#zoltan), mpi, [netcdf](#netcdf), [hdf5](#hdf5), [metis](#metis), blas, [parallel-netcdf](#parallel-netcdf), lapack
Link Dependencies: [parmetis](#parmetis), [cgm](#cgm), [zoltan](#zoltan), mpi, [netcdf](#netcdf), [hdf5](#hdf5), [metis](#metis), blas, [parallel-netcdf](#parallel-netcdf), lapack
Description: MOAB is a component for representing and evaluating mesh data. MOAB can store structured and unstructured mesh, consisting of elements in the finite element 'zoo.' The functional interface to MOAB is simple yet powerful, allowing the representation of many types of metadata commonly found on the mesh. MOAB is optimized for efficiency in space and time, based on access to mesh in chunks rather than through individual entities, while also versatile enough to support individual entity access.
---
modern-wheel[¶](#modern-wheel)
===
Homepage:
* <https://github.com/alalazo/modern_wheel>
Spack package:
* [modern-wheel/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/modern-wheel/package.py)
Versions: 1.2, 1.1, 1.0
Build Dependencies: [cmake](#cmake), [boost](#boost)
Link Dependencies: [boost](#boost)
Description: C++ utility collection. Provides various facilities of common use in modern codebases, like dynamic linking helpers, loadable plugin facilities, and misc patterns.
---
mofem-cephas[¶](#mofem-cephas)
===
Homepage:
* <http://mofem.eng.gla.ac.uk>
Spack package:
* [mofem-cephas/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/mofem-cephas/package.py)
Versions: develop, 0.8.7
Build Dependencies: [parmetis](#parmetis), [slepc](#slepc), mpi, [moab](#moab), [hdf5](#hdf5), [tetgen](#tetgen), [cmake](#cmake), [boost](#boost), [adol-c](#adol-c), [petsc](#petsc), [med](#med)
Link Dependencies: [parmetis](#parmetis), [slepc](#slepc), mpi, [moab](#moab), [hdf5](#hdf5), [tetgen](#tetgen), [boost](#boost), [adol-c](#adol-c), [petsc](#petsc), [med](#med)
Description: mofem-cephas core library
---
mofem-fracture-module[¶](#mofem-fracture-module)
===
Homepage:
* <http://mofem.eng.gla.ac.uk>
Spack package:
* [mofem-fracture-module/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/mofem-fracture-module/package.py)
Versions: develop, 0.9.42
Build Dependencies: [cmake](#cmake), [mofem-cephas](#mofem-cephas), [mofem-users-modules](#mofem-users-modules)
Link Dependencies: [mofem-cephas](#mofem-cephas), [mofem-users-modules](#mofem-users-modules)
Run Dependencies: [mofem-users-modules](#mofem-users-modules)
Description: mofem fracture module
---
mofem-minimal-surface-equation[¶](#mofem-minimal-surface-equation)
===
Homepage:
* <http://mofem.eng.gla.ac.uk>
Spack package:
* [mofem-minimal-surface-equation/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/mofem-minimal-surface-equation/package.py)
Versions: develop, 0.3.9
Build Dependencies: [cmake](#cmake), [mofem-cephas](#mofem-cephas), [mofem-users-modules](#mofem-users-modules)
Link Dependencies: [mofem-cephas](#mofem-cephas), [mofem-users-modules](#mofem-users-modules)
Run Dependencies: [mofem-users-modules](#mofem-users-modules)
Description: mofem minimal surface equation
---
mofem-users-modules[¶](#mofem-users-modules)
===
Homepage:
* <http://mofem.eng.gla.ac.uk>
Spack package:
*
[mofem-users-modules/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/mofem-users-modules/package.py)
Versions: 1.0
Build Dependencies: [cmake](#cmake), [mofem-cephas](#mofem-cephas)
Link Dependencies: [mofem-cephas](#mofem-cephas)
Description: MofemUsersModules creates an installation environment for user-provided modules and extends the mofem-cephas package. The CMakeList.txt file for user modules is located in the mofem-cephas/user_modules prefix. MofemUsersModules itself does not contain any code (it is a dummy package with a single dummy version); it provides the source locations of user modules, e.g. mofem-fracture-module. Those are kept as stand-alone packages (instead of resources) because they have different versions and developers. One can install an extension, e.g. spack install mofem-fracture-module. Next, create a symlink view to run the code, e.g. spack view symlink um_view mofem-cephas, and activate the extension, e.g. spack activate um_view mofem-minimal-surface-equation. Basic mofem functionality is available with spack install mofem-users-modules; it provides simple examples for calculating elasticity problems, magnetostatics, saturated and unsaturated flow, and a couple more. For more information on how to work with Spack and MoFEM see http://mofem.eng.gla.ac.uk/mofem/html/install_spack.html
---
molcas[¶](#molcas)
===
Homepage:
* <http://www.molcas.org/>
Spack package:
* [molcas/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/molcas/package.py)
Versions: 8.2
Build Dependencies: [cmake](#cmake), [openblas](#openblas), [openmpi](#openmpi), [hdf5](#hdf5)
Link Dependencies: [openmpi](#openmpi), [openblas](#openblas), [hdf5](#hdf5)
Description: Molcas is an ab initio quantum chemistry software package developed by scientists to be used by scientists.
Please set the path to the license file with the following command: export MOLCAS_LICENSE=/path/to/molcas/license/
---
mono[¶](#mono)
===
Homepage:
* <http://www.mono-project.com/>
Spack package:
* [mono/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/mono/package.py)
Versions: 5.4.1.7, 5.4.0.167, 5.0.1.1, 4.8.0.524
Build Dependencies: [perl](#perl), [cmake](#cmake), [libiconv](#libiconv)
Link Dependencies: [libiconv](#libiconv)
Description: Mono is a software platform designed to allow developers to easily create cross-platform applications. It is an open source implementation of Microsoft's .NET Framework based on the ECMA standards for C# and the Common Language Runtime.
---
mosh[¶](#mosh)
===
Homepage:
* <https://mosh.org/>
Spack package:
* [mosh/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/mosh/package.py)
Versions: 1.3.2, 1.3.0, 1.2.6
Build Dependencies: [zlib](#zlib), pkgconfig, [ncurses](#ncurses), [openssl](#openssl), [protobuf](#protobuf)
Link Dependencies: [zlib](#zlib), [ncurses](#ncurses), [openssl](#openssl), [protobuf](#protobuf)
Run Dependencies: [perl](#perl)
Description: Remote terminal application that allows roaming, supports intermittent connectivity, and provides intelligent local echo and line editing of user keystrokes. Mosh is a replacement for SSH. It's more robust and responsive, especially over Wi-Fi, cellular, and long-distance links.
---
mothur[¶](#mothur)
===
Homepage:
* <https://github.com/mothur/mothur>
Spack package:
* [mothur/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/mothur/package.py)
Versions: 1.40.5, 1.39.5
Build Dependencies: [boost](#boost), [readline](#readline)
Link Dependencies: [boost](#boost), [readline](#readline)
Description: This project seeks to develop a single piece of open-source, expandable software to fill the bioinformatics needs of the microbial ecology community.
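The extension workflow described in the mofem-users-modules entry can be sketched as a short shell session. This is a sketch following the commands quoted in that entry; the view name um_view is the example name used there, and the exact set of extensions you install is up to you:

```shell
# Install the core library and the user-modules environment.
spack install mofem-cephas
spack install mofem-users-modules

# Install an optional extension, e.g. the fracture module.
spack install mofem-fracture-module

# Create a symlink view of the installation to run the code from ...
spack view symlink um_view mofem-cephas

# ... and activate an extension inside that view.
spack activate um_view mofem-minimal-surface-equation
```

The view keeps a single stable prefix (um_view) even as individual packages are rebuilt, which is why the entry recommends running the code through it.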
---
motif[¶](#motif)
===
Homepage:
* <http://motif.ics.com/>
Spack package:
* [motif/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/motif/package.py)
Versions: 2.3.8
Build Dependencies: [libxt](#libxt), [libxext](#libxext), [libxfixes](#libxfixes), [libxcomposite](#libxcomposite), [libx11](#libx11), [libxft](#libxft), [flex](#flex), [xbitmaps](#xbitmaps)
Link Dependencies: [libxt](#libxt), [libxext](#libxext), [libxfixes](#libxfixes), [libxcomposite](#libxcomposite), [libx11](#libx11), [libxft](#libxft), [flex](#flex), [xbitmaps](#xbitmaps)
Description: Motif - Graphical user interface (GUI) specification and the widget toolkit
---
motioncor2[¶](#motioncor2)
===
Homepage:
* <http://msg.ucsf.edu/em/software>
Spack package:
* [motioncor2/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/motioncor2/package.py)
Versions: 1.1.0, 1.0.5, 1.0.4
Run Dependencies: [libtiff](#libtiff), [cuda](#cuda)
Description: MotionCor2 is a multi-GPU program that corrects beam-induced sample motion recorded on dose fractionated movie stacks. It implements a robust iterative alignment algorithm that delivers precise measurement and correction of both global and non-uniform local motions at the single-pixel level, suitable for both single-particle and tomographic images. MotionCor2 is sufficiently fast to keep up with automated data collection.
---
mount-point-attributes[¶](#mount-point-attributes)
===
Homepage:
* <https://github.com/LLNL/MountPointAttributes>
Spack package:
* [mount-point-attributes/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/mount-point-attributes/package.py)
Versions: 1.1
Build Dependencies: [autoconf](#autoconf), [automake](#automake), [libtool](#libtool)
Description: Library to turn expensive, non-scalable file system calls into simple string comparison operations.
---
mozjs[¶](#mozjs)
===
Homepage:
* <https://developer.mozilla.org/en-US/docs/Mozilla/Projects/SpiderMonkey>
Spack package:
* [mozjs/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/mozjs/package.py)
Versions: 24.2.0, 17.0.0, 1.8.5
Build Dependencies: [zlib](#zlib), [perl](#perl), pkgconfig, [nspr](#nspr), [libffi](#libffi), [readline](#readline), [python](#python)
Link Dependencies: [zlib](#zlib), [libffi](#libffi), [readline](#readline), [nspr](#nspr)
Description: SpiderMonkey is Mozilla's JavaScript engine written in C/C++. It is used in various Mozilla products, including Firefox, and is available under the MPL2.
---
mpark-variant[¶](#mpark-variant)
===
Homepage:
* <https://mpark.github.io/variant>
Spack package:
* [mpark-variant/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/mpark-variant/package.py)
Versions: 1.3.0
Build Dependencies: [cmake](#cmake)
Description: C++17 `std::variant` for C++11/14/17
---
mpc[¶](#mpc)
===
Homepage:
* <http://www.multiprecision.org>
Spack package:
* [mpc/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/mpc/package.py)
Versions: 1.1.0, 1.0.3, 1.0.2
Build Dependencies: [gmp](#gmp), [mpfr](#mpfr)
Link Dependencies: [gmp](#gmp), [mpfr](#mpfr)
Description: GNU MPC is a C library for the arithmetic of complex numbers with arbitrarily high precision and correct rounding of the result.
---
mpe2[¶](#mpe2)
===
Homepage:
* <http://www.mcs.anl.gov/research/projects/perfvis/software/MPE/>
Spack package:
* [mpe2/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/mpe2/package.py)
Versions: 1.3.0
Build Dependencies: mpi
Link Dependencies: mpi
Description: Message Passing Extensions (MPE): Parallel, shared X window graphics
---
mpest[¶](#mpest)
===
Homepage:
* <http://faculty.franklin.uga.edu/lliu/content/mp-est>
Spack package:
* [mpest/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/mpest/package.py)
Versions: 1.5
Description: MP-EST estimates species trees from a set of gene trees by maximizing a pseudo-likelihood function.
---
mpfr[¶](#mpfr)
===
Homepage:
* <http://www.mpfr.org>
Spack package:
* [mpfr/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/mpfr/package.py)
Versions: 4.0.1, 4.0.0, 3.1.6, 3.1.5, 3.1.4, 3.1.3, 3.1.2
Build Dependencies: [gmp](#gmp)
Link Dependencies: [gmp](#gmp)
Description: The MPFR library is a C library for multiple-precision floating-point computations with correct rounding.
---
mpibash[¶](#mpibash)
===
Homepage:
* <https://github.com/lanl/MPI-Bash>
Spack package:
* [mpibash/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/mpibash/package.py)
Versions: 1.2
Build Dependencies: [bash](#bash), mpi, [libcircle](#libcircle)
Link Dependencies: [bash](#bash), mpi, [libcircle](#libcircle)
Description: Parallel scripting right from the Bourne-Again Shell (Bash)
---
mpiblast[¶](#mpiblast)
===
Homepage:
* <http://www.mpiblast.org/>
Spack package:
* [mpiblast/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/mpiblast/package.py)
Versions: 1.6.0
Build Dependencies: mpi
Link Dependencies: mpi
Description: mpiBLAST is a freely available, open-source, parallel implementation of NCBI BLAST
---
mpich[¶](#mpich)
===
Homepage:
* <http://www.mpich.org>
Spack package:
* [mpich/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/mpich/package.py)
Versions: develop, 3.2.1, 3.2, 3.1.4, 3.1.3, 3.1.2, 3.1.1, 3.1, 3.0.4
Build Dependencies: [libfabric](#libfabric), [findutils](#findutils)
Link Dependencies: [libfabric](#libfabric)
Description: MPICH is a high performance and widely portable implementation of the Message Passing Interface (MPI) standard.
---
mpifileutils[¶](#mpifileutils)
===
Homepage:
* <https://github.com/hpc/mpifileutils>
Spack package:
* [mpifileutils/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/mpifileutils/package.py)
Versions: develop, 0.8, 0.7, 0.6
Build Dependencies: [libarchive](#libarchive), mpi, [dtcmp](#dtcmp), [libcircle](#libcircle), [lwgrp](#lwgrp)
Link Dependencies: [libarchive](#libarchive), mpi, [dtcmp](#dtcmp), [libcircle](#libcircle), [lwgrp](#lwgrp)
Description: mpiFileUtils is a suite of MPI-based tools to manage large datasets, which may vary from large directory trees to large files.
High-performance computing users often generate large datasets with parallel applications that run with many processes (millions in some cases). However, those users are then stuck with single-process tools like cp and rm to manage their datasets. This suite provides MPI-based tools to handle typical jobs like copy, remove, and compare for such datasets, providing speedups of up to 20-30x.
---
mpilander[¶](#mpilander)
===
Homepage:
* <https://github.com/MPILander/MPILander>
Spack package:
* [mpilander/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/mpilander/package.py)
Versions: develop
Build Dependencies: [cmake](#cmake)
Description: There can only be one (MPI process)!
---
mpileaks[¶](#mpileaks)
===
Homepage:
* <https://github.com/hpc/mpileaks>
Spack package:
* [mpileaks/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/mpileaks/package.py)
Versions: 1.0
Build Dependencies: [adept-utils](#adept-utils), mpi, [callpath](#callpath)
Link Dependencies: [adept-utils](#adept-utils), mpi, [callpath](#callpath)
Description: Tool to detect and report leaked MPI objects like MPI_Requests and MPI_Datatypes.
---
mpip[¶](#mpip)
===
Homepage:
* <http://mpip.sourceforge.net/>
Spack package:
* [mpip/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/mpip/package.py)
Versions: 3.4.1
Build Dependencies: [libunwind](#libunwind), mpi, [libdwarf](#libdwarf), [libelf](#libelf)
Link Dependencies: [libunwind](#libunwind), mpi, [libdwarf](#libdwarf), [libelf](#libelf)
Description: mpiP: Lightweight, Scalable MPI Profiling
---
mpir[¶](#mpir)
===
Homepage:
* <https://github.com/wbhart/mpir>
Spack package:
* [mpir/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/mpir/package.py)
Versions: develop, 2.7.0, 2.6.0
Build Dependencies: [autoconf](#autoconf), [yasm](#yasm)
Link Dependencies: [yasm](#yasm)
Description: Multiple Precision Integers and Rationals.
---
mpix-launch-swift[¶](#mpix-launch-swift)
===
Homepage:
* <https://bitbucket.org/kshitijvmehta/mpix_launch_swift>
Spack package:
* [mpix-launch-swift/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/mpix-launch-swift/package.py)
Versions: develop
Build Dependencies: [tcl](#tcl), mpi, [swig](#swig), [stc](#stc)
Link Dependencies: [tcl](#tcl), mpi, [stc](#stc)
Description: Library that allows a child MPI application to be launched inside a subset of processes in a parent MPI application.
---
mrbayes[¶](#mrbayes)
===
Homepage:
* <http://mrbayes.sourceforge.net>
Spack package:
* [mrbayes/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/mrbayes/package.py)
Versions: 2017-11-22
Build Dependencies: mpi, [libbeagle](#libbeagle), [m4](#m4), [autoconf](#autoconf), [automake](#automake), [libtool](#libtool)
Link Dependencies: [libbeagle](#libbeagle), mpi
Description: MrBayes is a program for Bayesian inference and model choice across a wide range of phylogenetic and evolutionary models.
MrBayes uses Markov chain Monte Carlo (MCMC) methods to estimate the posterior distribution of model parameters.
---
mrnet[¶](#mrnet)
===
Homepage:
* <http://paradyn.org/mrnet>
Spack package:
* [mrnet/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/mrnet/package.py)
Versions: 5.0.1-3, 5.0.1-2, 5.0.1, 4.1.0, 4.0.0
Build Dependencies: cti, [boost](#boost)
Link Dependencies: cti, [boost](#boost)
Description: The MRNet Multi-Cast Reduction Network.
---
mrtrix3[¶](#mrtrix3)
===
Homepage:
* <http://www.mrtrix.org/>
Spack package:
* [mrtrix3/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/mrtrix3/package.py)
Versions: 2017-09-25
Build Dependencies: [zlib](#zlib), [python](#python), [eigen](#eigen), [libtiff](#libtiff), [py-numpy](#py-numpy), [mesa-glu](#mesa-glu), [fftw](#fftw), [qt](#qt)
Link Dependencies: [zlib](#zlib), [eigen](#eigen), [libtiff](#libtiff), [mesa-glu](#mesa-glu), [fftw](#fftw), [qt](#qt)
Run Dependencies: [py-numpy](#py-numpy), [python](#python)
Description: MRtrix provides a set of tools to perform various advanced diffusion MRI analyses, including constrained spherical deconvolution (CSD), probabilistic tractography, track-density imaging, and apparent fibre density.
---
mscgen[¶](#mscgen)
===
Homepage:
* <http://www.mcternan.me.uk/mscgen/>
Spack package:
* [mscgen/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/mscgen/package.py)
Versions: 0.20
Build Dependencies: [libgd](#libgd), [pkgconf](#pkgconf), [flex](#flex), [bison](#bison)
Link Dependencies: [libgd](#libgd), [pkgconf](#pkgconf), [flex](#flex), [bison](#bison)
Description: Mscgen is a small program that parses Message Sequence Chart descriptions and produces PNG, SVG, EPS or server side image maps (ismaps) as the output.
---
msgpack-c[¶](#msgpack-c)
===
Homepage:
* <http://www.msgpack.org>
Spack package:
* [msgpack-c/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/msgpack-c/package.py)
Versions: 3.0.1, 1.4.1
Build Dependencies: [cmake](#cmake)
Test Dependencies: [googletest](#googletest)
Description: A small, fast binary interchange format convertible to/from JSON
---
mshadow[¶](#mshadow)
===
Homepage:
* <https://github.com/dmlc/mshadow>
Spack package:
* [mshadow/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/mshadow/package.py)
Versions: 20170721, master
Description: MShadow is a lightweight CPU/GPU Matrix/Tensor Template Library in C++/CUDA.
---
msmc[¶](#msmc)
===
Homepage:
* <https://github.com/stschiff/msmc>
Spack package:
* [msmc/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/msmc/package.py)
Versions: 1.1.0
Build Dependencies: [dmd](#dmd), [gsl](#gsl)
Run Dependencies: [gsl](#gsl)
Description: This software implements MSMC, a method to infer population size and gene flow from multiple genome sequences
---
multitail[¶](#multitail)
===
Homepage:
* <https://www.vanheusden.com/multitail/index.php>
Spack package:
* [multitail/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/multitail/package.py)
Versions: 6.4.2
Build Dependencies: [ncurses](#ncurses)
Link Dependencies: [ncurses](#ncurses)
Description: MultiTail allows you to monitor logfiles and command output in multiple windows in a terminal, colorize, filter and merge.
---
multiverso[¶](#multiverso)
===
Homepage:
* <https://github.com/Microsoft/Multiverso>
Spack package:
* [multiverso/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/multiverso/package.py)
Versions: 143187, 0.2, master
Build Dependencies: [cmake](#cmake), [boost](#boost), mpi
Link Dependencies: [boost](#boost), mpi
Description: Multiverso is a parameter server based framework for training machine learning models on big data with numbers of machines.
---
mummer[¶](#mummer)
===
Homepage:
* <http://mummer.sourceforge.net/>
Spack package:
* [mummer/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/mummer/package.py)
Versions: 3.23
Build Dependencies: [perl](#perl), [gnuplot](#gnuplot)
Link Dependencies: [gnuplot](#gnuplot)
Run Dependencies: [perl](#perl)
Description: MUMmer is a system for rapidly aligning entire genomes.
---
mumps[¶](#mumps)
===
Homepage:
* <http://mumps.enseeiht.fr>
Spack package:
* [mumps/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/mumps/package.py)
Versions: 5.1.1, 5.0.2, 5.0.1
Build Dependencies: [parmetis](#parmetis), mpi, [scotch](#scotch), [metis](#metis), scalapack, blas, lapack
Link Dependencies: [parmetis](#parmetis), mpi, [scotch](#scotch), [metis](#metis), scalapack, blas, lapack
Description: MUMPS: a MUltifrontal Massively Parallel sparse direct Solver
---
munge[¶](#munge)
===
Homepage:
* <https://code.google.com/p/munge/>
Spack package:
* [munge/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/munge/package.py)
Versions: 0.5.11
Build Dependencies: [openssl](#openssl), [libgcrypt](#libgcrypt)
Link Dependencies: [openssl](#openssl), [libgcrypt](#libgcrypt)
Description: MUNGE Uid 'N' Gid Emporium
---
muparser[¶](#muparser)
===
Homepage:
* <http://muparser.beltoforion.de/>
Spack package:
*
[muparser/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/muparser/package.py)
Versions: 2.2.6.1, 2.2.5
Build Dependencies: [cmake](#cmake)
Description: C++ math expression parser library.
---
muscle[¶](#muscle)
===
Homepage:
* <http://drive5.com/muscle/>
Spack package:
* [muscle/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/muscle/package.py)
Versions: 3.8.1551
Description: MUSCLE is one of the best-performing multiple alignment programs according to published benchmark tests, with accuracy and speed that are consistently better than CLUSTALW.
---
muse[¶](#muse)
===
Homepage:
* <http://bioinformatics.mdanderson.org/main/MuSE>
Spack package:
* [muse/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/muse/package.py)
Versions: 1.0-rc
Description: Somatic point mutation caller.
---
muster[¶](#muster)
===
Homepage:
* <https://github.com/llnl/muster>
Spack package:
* [muster/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/muster/package.py)
Versions: 1.0.1, 1.0
Build Dependencies: [cmake](#cmake), [boost](#boost), mpi
Link Dependencies: [boost](#boost), mpi
Description: The Muster library provides implementations of sequential and parallel K-Medoids clustering algorithms. It is intended as a general framework for parallel cluster analysis, particularly for performance data analysis on systems with very large numbers of processes.
---
mvapich2[¶](#mvapich2)
===
Homepage:
* <http://mvapich.cse.ohio-state.edu/>
Spack package:
* [mvapich2/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/mvapich2/package.py)
Versions: 2.3rc2, 2.3rc1, 2.3a, 2.3, 2.2, 2.1, 2.0
Build Dependencies: [cuda](#cuda), [rdma-core](#rdma-core), [bison](#bison), [libpciaccess](#libpciaccess), [findutils](#findutils), [psm](#psm)
Link Dependencies: [cuda](#cuda), [rdma-core](#rdma-core), [psm](#psm), [libpciaccess](#libpciaccess)
Description: MVAPICH2 is an MPI implementation for Infiniband networks.
---
mxml[¶](#mxml)
===
Homepage:
* <http://michaelrsweet.github.io/mxml/>
Spack package:
* [mxml/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/mxml/package.py)
Versions: 2.10, 2.9, 2.8, 2.7, 2.6, 2.5
Description: Mini-XML is a small XML library that you can use to read and write XML and XML-like data files in your application without requiring large non-standard libraries.
---
mxnet[¶](#mxnet)
===
Homepage:
* <http://mxnet.io>
Spack package:
* [mxnet/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/mxnet/package.py)
Versions: 1.3.0, 0.10.0.post2, 0.10.0.post1, 0.10.0
Build Dependencies: [cub](#cub), [mshadow](#mshadow), [nnvm](#nnvm), [ps-lite](#ps-lite), [cudnn](#cudnn), [opencv](#opencv), blas, [dmlc-core](#dmlc-core), [python](#python), [py-setuptools](#py-setuptools)
Link Dependencies: [cub](#cub), [mshadow](#mshadow), [nnvm](#nnvm), [ps-lite](#ps-lite), [cudnn](#cudnn), blas, [dmlc-core](#dmlc-core), [python](#python), [opencv](#opencv)
Run Dependencies: [python](#python)
Description: MXNet is a deep learning framework designed for both efficiency and flexibility.
---
nag[¶](#nag)
===
Homepage:
* <http://www.nag.com/nagware/np.asp>
Spack package:
* [nag/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/nag/package.py)
Versions: 6.2, 6.1, 6.0
Description: The NAG Fortran Compiler.
---
nalu[¶](#nalu)
===
Homepage:
* <https://github.com/NaluCFD/Nalu>
Spack package:
* [nalu/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/nalu/package.py)
Versions: master
Build Dependencies: [yaml-cpp](#yaml-cpp), mpi, [openfast](#openfast), [cmake](#cmake), [tioga](#tioga), [hypre](#hypre), [trilinos](#trilinos)
Link Dependencies: [yaml-cpp](#yaml-cpp), mpi, [openfast](#openfast), [tioga](#tioga), [hypre](#hypre), [trilinos](#trilinos)
Description: Nalu: a generalized unstructured massively parallel low Mach flow code designed to support a variety of energy applications of interest, built on the Sierra Toolkit and the Trilinos solver Tpetra/Epetra stack
---
nalu-wind[¶](#nalu-wind)
===
Homepage:
* <https://github.com/exawind/nalu-wind>
Spack package:
* [nalu-wind/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/nalu-wind/package.py)
Versions: master
Build Dependencies: [yaml-cpp](#yaml-cpp), mpi, [openfast](#openfast), [cmake](#cmake), [tioga](#tioga), [hypre](#hypre), [trilinos](#trilinos)
Link Dependencies: [yaml-cpp](#yaml-cpp), mpi, [openfast](#openfast), [tioga](#tioga), [hypre](#hypre), [trilinos](#trilinos)
Description: Nalu-Wind: Wind energy focused variant of Nalu.
---
namd[¶](#namd)
===
Homepage:
* <http://www.ks.uiuc.edu/Research/namd/>
Spack package:
* [namd/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/namd/package.py)
Versions: 2.12
Build Dependencies: [charmpp](#charmpp), [tcl](#tcl), [intel-mkl](#intel-mkl), [python](#python), [fftw](#fftw)
Link Dependencies: [charmpp](#charmpp), [tcl](#tcl), [intel-mkl](#intel-mkl), [python](#python), [fftw](#fftw)
Description: NAMD is a parallel molecular dynamics code designed for high-performance simulation of large biomolecular systems.
---
nano[¶](#nano)
===
Homepage:
* <http://www.nano-editor.org>
Spack package:
* [nano/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/nano/package.py)
Versions: 2.6.3, 2.6.2
Build Dependencies: [ncurses](#ncurses)
Link Dependencies: [ncurses](#ncurses)
Description: Tiny little text editor
---
nanoflann[¶](#nanoflann)
===
Homepage:
* <https://github.com/jlblancoc/nanoflann>
Spack package:
* [nanoflann/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/nanoflann/package.py)
Versions: 1.2.3
Build Dependencies: [cmake](#cmake)
Description: A C++ header-only library for Nearest Neighbor (NN) search with KD-trees.
---
nanopb[¶](#nanopb)
===
Homepage:
* <https://jpa.kapsi.fi/nanopb/>
Spack package:
* [nanopb/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/nanopb/package.py)
Versions: 0.3.9.1
Build Dependencies: [cmake](#cmake), [py-protobuf](#py-protobuf), [protobuf](#protobuf)
Description: Nanopb is a small code-size Protocol Buffers implementation in ANSI C.
---
nasm[¶](#nasm)
===
Homepage:
* <http://www.nasm.us>
Spack package:
* [nasm/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/nasm/package.py)
Versions: 2.13.03, 2.11.06
Description: NASM (Netwide Assembler) is an 80x86 assembler designed for portability and modularity. It includes a disassembler as well.
---
nauty[¶](#nauty)
===
Homepage:
* <http://pallini.di.uniroma1.it/index.html>
Spack package:
* [nauty/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/nauty/package.py)
Versions: 2.6r7
Build Dependencies: [zlib](#zlib), [gmp](#gmp), pkgconfig, [help2man](#help2man), [m4](#m4), [autoconf](#autoconf), [automake](#automake), [libtool](#libtool)
Link Dependencies: [zlib](#zlib), [gmp](#gmp)
Description: nauty and Traces are programs for computing automorphism groups of graphs and digraphs
---
ncbi-magicblast[¶](#ncbi-magicblast)
===
Homepage:
* <https://ncbi.github.io/magicblast/>
Spack package:
* [ncbi-magicblast/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/ncbi-magicblast/package.py)
Versions: 1.3.0
Build Dependencies: [lmdb](#lmdb)
Link Dependencies: [lmdb](#lmdb)
Description: Magic-BLAST is a tool for mapping large next-generation RNA or DNA sequencing runs against a whole genome or transcriptome.
---
ncbi-rmblastn[¶](#ncbi-rmblastn)
===
Homepage:
* <https://www.ncbi.nlm.nih.gov/>
Spack package:
* [ncbi-rmblastn/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/ncbi-rmblastn/package.py)
Versions: 2.2.28
Description: RMBlast search engine for NCBI
---
ncbi-toolkit[¶](#ncbi-toolkit)
===
Homepage:
* <https://www.ncbi.nlm.nih.gov/IEB/ToolBox/CPP_DOC/>
Spack package:
* [ncbi-toolkit/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/ncbi-toolkit/package.py)
Versions: 21_0_0
Build Dependencies: [lzo](#lzo), [zlib](#zlib), [libpng](#libpng), [libxml2](#libxml2), [libxslt](#libxslt), [libjpeg](#libjpeg), [samtools](#samtools), [bzip2](#bzip2), [bamtools](#bamtools), [sqlite](#sqlite), [libtiff](#libtiff), [pcre](#pcre), [boost](#boost), [giflib](#giflib)
Link Dependencies: [lzo](#lzo), [zlib](#zlib), [libpng](#libpng), [libxml2](#libxml2), [libxslt](#libxslt), [libjpeg](#libjpeg), [samtools](#samtools), [bzip2](#bzip2),
[bamtools](#bamtools), [sqlite](#sqlite), [libtiff](#libtiff), [pcre](#pcre), [boost](#boost), [giflib](#giflib) Description: NCBI C++ Toolkit --- nccl[¶](#nccl) === Homepage: * <https://github.com/NVIDIA/nccl> Spack package: * [nccl/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/nccl/package.py) Versions: 2.3.5-5, 2.2, 1.3.4-1, 1.3.0-1 Build Dependencies: [cuda](#cuda) Link Dependencies: [cuda](#cuda) Description: Optimized primitives for collective multi-GPU communication. --- nccmp[¶](#nccmp) === Homepage: * <http://nccmp.sourceforge.net/> Spack package: * [nccmp/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/nccmp/package.py) Versions: 1.8.2.0 Build Dependencies: [netcdf](#netcdf) Link Dependencies: [netcdf](#netcdf) Description: Compare NetCDF Files --- ncdu[¶](#ncdu) === Homepage: * <http://dev.yorhel.nl/ncdu> Spack package: * [ncdu/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/ncdu/package.py) Versions: 1.13, 1.12, 1.11, 1.10, 1.9, 1.8, 1.7 Build Dependencies: [ncurses](#ncurses) Link Dependencies: [ncurses](#ncurses) Description: Ncdu is a disk usage analyzer with an ncurses interface. It is designed to find space hogs on a remote server where you don't have an entire graphical setup available, but it is a useful tool even on regular desktop systems. Ncdu aims to be fast, simple and easy to use, and should be able to run in any minimal POSIX-like environment with ncurses installed. --- ncftp[¶](#ncftp) === Homepage: * <http://www.ncftp.com/> Spack package: * [ncftp/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/ncftp/package.py) Versions: 3.2.6 Build Dependencies: [ncurses](#ncurses) Link Dependencies: [ncurses](#ncurses) Description: NcFTP Client is a set of application programs implementing the File Transfer Protocol.
--- ncl[¶](#ncl) === Homepage: * <https://www.ncl.ucar.eduSpack package: * [ncl/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/ncl/package.py) Versions: 6.5.0, 6.4.0 Build Dependencies: szip, [libiconv](#libiconv), jpeg, [bison](#bison), [curl](#curl), [hdf5](#hdf5), [libxmu](#libxmu), [gdal](#gdal), [libx11](#libx11), [netcdf](#netcdf), [udunits2](#udunits2), [tcsh](#tcsh), [libxaw](#libxaw), [cairo](#cairo), [hdf](#hdf), [flex](#flex) Link Dependencies: szip, [libiconv](#libiconv), jpeg, [hdf5](#hdf5), [curl](#curl), [libxmu](#libxmu), [gdal](#gdal), [libx11](#libx11), [netcdf](#netcdf), [udunits2](#udunits2), [tcsh](#tcsh), [libxaw](#libxaw), [cairo](#cairo), [hdf](#hdf), [flex](#flex) Run Dependencies: [esmf](#esmf) Description: NCL is an interpreted language designed specifically for scientific data analysis and visualization. Supports NetCDF 3/4, GRIB 1/2, HDF 4/5, HDF- EOD 2/5, shapefile, ASCII, binary. Numerous analysis functions are built-in. --- nco[¶](#nco) === Homepage: * <http://nco.sourceforge.net/Spack package: * [nco/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/nco/package.py) Versions: 4.6.7, 4.6.6, 4.6.5, 4.6.4, 4.6.3, 4.6.2, 4.6.1, 4.5.5 Build Dependencies: [antlr](#antlr), [netcdf](#netcdf), [bison](#bison), [udunits2](#udunits2), [flex](#flex), [texinfo](#texinfo), [gsl](#gsl) Link Dependencies: [antlr](#antlr), [gsl](#gsl), [netcdf](#netcdf), [udunits2](#udunits2) Description: The NCO toolkit manipulates and analyzes data stored in netCDF- accessible formats --- ncurses[¶](#ncurses) === Homepage: * <http://invisible-island.net/ncurses/ncurses.htmlSpack package: * [ncurses/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/ncurses/package.py) Versions: 6.1, 6.0, 5.9 Build Dependencies: pkgconfig Description: The ncurses (new curses) library is a free software emulation of curses in System V Release 4.0, and more. 
It uses terminfo format, supports pads and color and multiple highlights and forms characters and function-key mapping, and has all the other SYSV-curses enhancements over BSD curses. --- ncview[¶](#ncview) === Homepage: * <http://meteora.ucsd.edu/~pierce/ncview_home_page.html> Spack package: * [ncview/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/ncview/package.py) Versions: 2.1.7 Build Dependencies: [libpng](#libpng), [libxaw](#libxaw), [netcdf](#netcdf), [udunits2](#udunits2) Link Dependencies: [libpng](#libpng), [libxaw](#libxaw), [netcdf](#netcdf), [udunits2](#udunits2) Description: Simple viewer for NetCDF files. --- ndiff[¶](#ndiff) === Homepage: * <http://ftp.math.utah.edu/pub/ndiff/> Spack package: * [ndiff/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/ndiff/package.py) Versions: 2.00, 1.00 Description: The ndiff tool is a binary utility that compares putatively similar files while ignoring small numeric differences. This utility is most often used to compare files containing a lot of floating-point numeric data that may be slightly different due to numeric error.
--- nek5000[¶](#nek5000) === Homepage: * <https://nek5000.mcs.anl.gov/Spack package: * [nek5000/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/nek5000/package.py) Versions: develop, 17.0 Build Dependencies: [libxt](#libxt), [visit](#visit), mpi, [libx11](#libx11), [xproto](#xproto) Link Dependencies: [libxt](#libxt), [visit](#visit), mpi, [libx11](#libx11), [xproto](#xproto) Description: A fast and scalable high-order solver for computational fluid dynamics --- nekbone[¶](#nekbone) === Homepage: * <https://github.com/Nek5000/NekboneSpack package: * [nekbone/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/nekbone/package.py) Versions: develop, 17.0 Build Dependencies: mpi Link Dependencies: mpi Description: NEK5000 emulation software called NEKbone. Nekbone captures the basic structure and user interface of the extensive Nek5000 software. Nek5000 is a high order, incompressible Navier-Stokes solver based on the spectral element method. --- nekcem[¶](#nekcem) === Homepage: * <https://nekcem.mcs.anl.govSpack package: * [nekcem/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/nekcem/package.py) Versions: develop, 0b8bedd Build Dependencies: mpi, blas, lapack Link Dependencies: mpi, blas, lapack Description: Spectral-element solver for Maxwell's equations, drift-diffusion equations, and more. 
--- nektar[¶](#nektar) === Homepage: * <https://www.nektar.info/Spack package: * [nektar/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/nektar/package.py) Versions: 4.4.1 Build Dependencies: mpi, [arpack-ng](#arpack-ng), [scotch](#scotch), [hdf5](#hdf5), [cmake](#cmake), [boost](#boost), blas, [tinyxml](#tinyxml), [fftw](#fftw), lapack Link Dependencies: mpi, [arpack-ng](#arpack-ng), [scotch](#scotch), [hdf5](#hdf5), [boost](#boost), blas, [tinyxml](#tinyxml), [fftw](#fftw), lapack Description: Nektar++: Spectral/hp Element Framework --- neovim[¶](#neovim) === Homepage: * <http://neovim.ioSpack package: * [neovim/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/neovim/package.py) Versions: 0.3.1, 0.3.0, 0.2.2, 0.2.1, 0.2.0 Build Dependencies: [libuv](#libuv), [jemalloc](#jemalloc), [cmake](#cmake), [libtermkey](#libtermkey), [msgpack-c](#msgpack-c), [lua-mpack](#lua-mpack), [lua-bitlib](#lua-bitlib), [gperf](#gperf), [lua](#lua), [unibilium](#unibilium), [lua-lpeg](#lua-lpeg), [libvterm](#libvterm) Link Dependencies: [lua-bitlib](#lua-bitlib), [gperf](#gperf), [libuv](#libuv), [lua](#lua), [libvterm](#libvterm), [jemalloc](#jemalloc), [lua-lpeg](#lua-lpeg), [unibilium](#unibilium), [libtermkey](#libtermkey), [msgpack-c](#msgpack-c), [lua-mpack](#lua-mpack) Description: NeoVim: the future of vim --- nest[¶](#nest) === Homepage: * <http://www.nest-simulator.orgSpack package: * [nest/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/nest/package.py) Versions: 2.14.0, 2.12.0, 2.10.0, 2.8.0, 2.6.0, 2.4.2 Build Dependencies: pkgconfig, mpi, [py-cython](#py-cython), [py-setuptools](#py-setuptools), [libtool](#libtool), [cmake](#cmake), [py-numpy](#py-numpy), [readline](#readline), [doxygen](#doxygen), [python](#python), [gsl](#gsl) Link Dependencies: [libtool](#libtool), mpi, [readline](#readline), [gsl](#gsl), [python](#python) Run Dependencies: 
[py-numpy](#py-numpy), [python](#python) Test Dependencies: [py-nose](#py-nose) Description: NEST is a simulator for spiking neural network models. It focuses on the dynamics, size and structure of neural systems rather than on the exact morphology of individual neurons. --- netcdf[¶](#netcdf) === Homepage: * <http://www.unidata.ucar.edu/software/netcdf> Spack package: * [netcdf/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/netcdf/package.py) Versions: 4.6.1, 4.4.1.1, 4.4.1, 4.4.0, 4.3.3.1, 4.3.3 Build Dependencies: [zlib](#zlib), mpi, [m4](#m4), [hdf5](#hdf5), [curl](#curl), [parallel-netcdf](#parallel-netcdf), [hdf](#hdf) Link Dependencies: [zlib](#zlib), mpi, [hdf5](#hdf5), [curl](#curl), [parallel-netcdf](#parallel-netcdf), [hdf](#hdf) Description: NetCDF is a set of software libraries and self-describing, machine-independent data formats that support the creation, access, and sharing of array-oriented scientific data. --- netcdf-cxx[¶](#netcdf-cxx) === Homepage: * <http://www.unidata.ucar.edu/software/netcdf> Spack package: * [netcdf-cxx/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/netcdf-cxx/package.py) Versions: 4.2 Build Dependencies: [netcdf](#netcdf) Link Dependencies: [netcdf](#netcdf) Description: Deprecated C++ compatibility bindings for NetCDF. These do NOT read or write NetCDF-4 files, and are no longer maintained by Unidata. Developers should migrate to current NetCDF C++ bindings, in Spack package netcdf-cxx4.
--- netcdf-cxx4[¶](#netcdf-cxx4) === Homepage: * <http://www.unidata.ucar.edu/software/netcdfSpack package: * [netcdf-cxx4/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/netcdf-cxx4/package.py) Versions: 4.3.0, 4.2.1 Build Dependencies: [autoconf](#autoconf), [automake](#automake), [libtool](#libtool), [netcdf](#netcdf) Link Dependencies: [netcdf](#netcdf) Description: C++ interface for NetCDF4 --- netcdf-fortran[¶](#netcdf-fortran) === Homepage: * <http://www.unidata.ucar.edu/software/netcdfSpack package: * [netcdf-fortran/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/netcdf-fortran/package.py) Versions: 4.4.4, 4.4.3 Build Dependencies: [netcdf](#netcdf) Link Dependencies: [netcdf](#netcdf) Description: Fortran interface for NetCDF4 --- netgauge[¶](#netgauge) === Homepage: * <http://unixer.de/research/netgauge/Spack package: * [netgauge/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/netgauge/package.py) Versions: 2.4.6 Build Dependencies: mpi Link Dependencies: mpi Description: Netgauge is a high-precision network parameter measurement tool. It supports benchmarking of many different network protocols and communication patterns. The main focus lies on accuracy, statistical analysis and easy extensibility. --- netgen[¶](#netgen) === Homepage: * <https://ngsolve.org/Spack package: * [netgen/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/netgen/package.py) Versions: 5.3.1 Build Dependencies: [zlib](#zlib), [oce](#oce), mpi, [metis](#metis) Link Dependencies: [zlib](#zlib), [oce](#oce), mpi, [metis](#metis) Description: NETGEN is an automatic 3d tetrahedral mesh generator. It accepts input from constructive solid geometry (CSG) or boundary representation (BRep) from STL file format. The connection to a geometry kernel allows the handling of IGES and STEP files. 
NETGEN contains modules for mesh optimization and hierarchical mesh refinement. --- netlib-lapack[¶](#netlib-lapack) === Homepage: * <http://www.netlib.org/lapack/> Spack package: * [netlib-lapack/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/netlib-lapack/package.py) Versions: 3.8.0, 3.7.1, 3.7.0, 3.6.1, 3.6.0, 3.5.0, 3.4.2, 3.4.1, 3.4.0, 3.3.1 Build Dependencies: [cmake](#cmake), [netlib-xblas](#netlib-xblas), blas Link Dependencies: blas, [netlib-xblas](#netlib-xblas) Test Dependencies: [python](#python) Description: LAPACK version 3.X is a comprehensive FORTRAN library that does linear algebra operations including matrix inversions, least-squares solutions to linear sets of equations, eigenvector analysis, singular value decomposition, etc. It is a very comprehensive and reputable package that has found extensive use in the scientific community. --- netlib-scalapack[¶](#netlib-scalapack) === Homepage: * <http://www.netlib.org/scalapack/> Spack package: * [netlib-scalapack/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/netlib-scalapack/package.py) Versions: 2.0.2, 2.0.1, 2.0.0 Build Dependencies: [cmake](#cmake), mpi, blas, lapack Link Dependencies: mpi, blas, lapack Description: ScaLAPACK is a library of high-performance linear algebra routines for parallel distributed memory machines --- netlib-xblas[¶](#netlib-xblas) === Homepage: * <http://www.netlib.org/xblas> Spack package: * [netlib-xblas/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/netlib-xblas/package.py) Versions: 1.0.248 Description: XBLAS is a reference implementation for extra precision BLAS. XBLAS is a reference implementation for the dense and banded BLAS routines, along with extended and mixed precision versions. Extended precision is only used internally; input and output arguments remain the same as in the existing BLAS.
Extra precision is implemented as double-double (i.e., 128-bit total, 106-bit significand). Mixed precision permits some input/output arguments of different types (mixing real and complex) or precisions (mixing single and double). This implementation is proof of concept, and no attempt was made to optimize performance; performance should be as good as straightforward but careful code written by hand. --- nettle[¶](#nettle) === Homepage: * <https://www.lysator.liu.se/~nisse/nettle/> Spack package: * [nettle/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/nettle/package.py) Versions: 3.4, 3.3, 3.2, 2.7.1, 2.7 Build Dependencies: [gmp](#gmp), [m4](#m4) Link Dependencies: [gmp](#gmp) Description: The Nettle package contains the low-level cryptographic library that is designed to fit easily in many contexts. --- neuron[¶](#neuron) === Homepage: * <https://www.neuron.yale.edu/> Spack package: * [neuron/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/neuron/package.py) Versions: develop, 7.5, 7.4, 7.3, 7.2 Build Dependencies: pkgconfig, mpi, [python](#python), [bison](#bison), [autoconf](#autoconf), [automake](#automake), [libtool](#libtool), [flex](#flex), [ncurses](#ncurses) Link Dependencies: mpi, [python](#python), [ncurses](#ncurses) Description: NEURON is a simulation environment for single and networks of neurons. NEURON is a simulation environment for modeling individual and networks of neurons. NEURON models individual neurons via the use of sections that are automatically subdivided into individual compartments, instead of requiring the user to manually create compartments. The primary scripting language is hoc but a Python interface is also available.
--- nextflow[¶](#nextflow) === Homepage: * <http://www.nextflow.ioSpack package: * [nextflow/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/nextflow/package.py) Versions: 0.25.6, 0.24.1, 0.23.3, 0.21.0, 0.20.1, 0.17.3 Build Dependencies: java Link Dependencies: java Description: Data-driven computational pipelines --- nfft[¶](#nfft) === Homepage: * <https://www-user.tu-chemnitz.de/~potts/nfftSpack package: * [nfft/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/nfft/package.py) Versions: 3.4.1, 3.3.2 Build Dependencies: [fftw](#fftw) Link Dependencies: [fftw](#fftw) Description: NFFT is a C subroutine library for computing the nonequispaced discrete Fourier transform (NDFT) in one or more dimensions, of arbitrary input size, and of complex data. --- nghttp2[¶](#nghttp2) === Homepage: * <https://nghttp2.org/Spack package: * [nghttp2/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/nghttp2/package.py) Versions: 1.26.0 Build Dependencies: [py-cython](#py-cython), [py-setuptools](#py-setuptools), [python](#python) Run Dependencies: [py-cython](#py-cython), [python](#python) Description: nghttp2 is an implementation of HTTP/2 and its header compression algorithm HPACK in C. --- nginx[¶](#nginx) === Homepage: * <https://nginx.org/en/Spack package: * [nginx/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/nginx/package.py) Versions: 1.13.8, 1.12.0 Build Dependencies: [zlib](#zlib), [pcre](#pcre), [openssl](#openssl) Link Dependencies: [zlib](#zlib), [pcre](#pcre), [openssl](#openssl) Description: nginx [engine x] is an HTTP and reverse proxy server, a mail proxy server, and a generic TCP/UDP proxy server, originally written by <NAME>. 
--- ngmlr[¶](#ngmlr) === Homepage: * <https://github.com/philres/ngmlr> Spack package: * [ngmlr/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/ngmlr/package.py) Versions: 0.2.5 Build Dependencies: [cmake](#cmake) Description: Ngmlr is a long-read mapper designed to align PacBio or Oxford Nanopore reads to a reference genome with a focus on reads that span structural variations. --- ninja[¶](#ninja) === Homepage: * <https://ninja-build.org/> Spack package: * [ninja/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/ninja/package.py) Versions: 1.8.2, 1.7.2, 1.6.0 Build Dependencies: [python](#python) Run Dependencies: [python](#python) Description: Ninja is a small build system with a focus on speed. It differs from other build systems in two major respects: it is designed to have its input files generated by a higher-level build system, and it is designed to run builds as fast as possible. --- ninja-fortran[¶](#ninja-fortran) === Homepage: * <https://github.com/Kitware/ninja> Spack package: * [ninja-fortran/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/ninja-fortran/package.py) Versions: 1.7.2.gcc0ea, 1.7.2.gaad58, 1.7.1.g7ca7f Build Dependencies: [python](#python) Run Dependencies: [python](#python) Description: A Fortran capable fork of ninja.
--- nlohmann-json[¶](#nlohmann-json) === Homepage: * <https://nlohmann.github.io/json/> Spack package: * [nlohmann-json/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/nlohmann-json/package.py) Versions: 3.3.0, 3.2.0, 3.1.2, 3.1.1 Build Dependencies: [cmake](#cmake) Description: JSON for Modern C++ --- nlopt[¶](#nlopt) === Homepage: * <https://nlopt.readthedocs.io> Spack package: * [nlopt/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/nlopt/package.py) Versions: develop, 2.5.0 Build Dependencies: [swig](#swig), [cmake](#cmake), [guile](#guile), [py-numpy](#py-numpy), [matlab](#matlab), [octave](#octave), [python](#python) Link Dependencies: [guile](#guile), [octave](#octave), [matlab](#matlab), [swig](#swig), [python](#python) Run Dependencies: [py-numpy](#py-numpy) Description: NLopt is a free/open-source library for nonlinear optimization, providing a common interface for a number of different free optimization routines available online as well as original implementations of various other algorithms. --- nmap[¶](#nmap) === Homepage: * <https://nmap.org> Spack package: * [nmap/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/nmap/package.py) Versions: 7.70, 7.31, 7.30 Description: Nmap ("Network Mapper") is a free and open source (license) utility for network discovery and security auditing. It also provides ncat, an updated nc. --- nnvm[¶](#nnvm) === Homepage: * <https://github.com/dmlc/nnvm> Spack package: * [nnvm/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/nnvm/package.py) Versions: 20170418, master Build Dependencies: [cmake](#cmake), [dmlc-core](#dmlc-core) Link Dependencies: [dmlc-core](#dmlc-core) Description: nnvm is a modular, decentralized and lightweight part to help build deep learning libraries.
--- node-js[¶](#node-js) === Homepage: * <https://nodejs.org/> Spack package: * [node-js/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/node-js/package.py) Versions: 8.9.1, 7.1.0, 6.3.0, 6.2.2 Build Dependencies: [python](#python), pkgconfig, [libtool](#libtool), [icu4c](#icu4c), [openssl](#openssl) Link Dependencies: [icu4c](#icu4c), [openssl](#openssl) Description: Node.js is a JavaScript runtime built on Chrome's V8 JavaScript engine. --- notmuch[¶](#notmuch) === Homepage: * <https://notmuchmail.org/> Spack package: * [notmuch/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/notmuch/package.py) Versions: 0.23.7 Build Dependencies: [zlib](#zlib), [talloc](#talloc), [xapian-core](#xapian-core), [gmime](#gmime) Link Dependencies: [zlib](#zlib), [talloc](#talloc), [xapian-core](#xapian-core), [gmime](#gmime) Description: Notmuch is a mail indexer. Essentially, it is a very thin front end on top of xapian. --- npb[¶](#npb) === Homepage: * <https://www.nas.nasa.gov/publications/npb.html> Spack package: * [npb/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/npb/package.py) Versions: 3.3.1 Build Dependencies: mpi Link Dependencies: mpi Description: The NAS Parallel Benchmarks (NPB) are a small set of programs designed to help evaluate the performance of parallel supercomputers. The benchmarks are derived from computational fluid dynamics (CFD) applications and consist of five kernels and three pseudo-applications in the original "pencil-and-paper" specification (NPB 1). The benchmark suite has been extended to include new benchmarks for unstructured adaptive mesh, parallel I/O, multi-zone applications, and computational grids. Problem sizes in NPB are predefined and indicated as different classes. Reference implementations of NPB are available in commonly-used programming models like MPI and OpenMP (NPB 2 and NPB 3).
--- npm[¶](#npm) === Homepage: * <https://github.com/npm/npmSpack package: * [npm/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/npm/package.py) Versions: 3.10.9, 3.10.5 Build Dependencies: [node-js](#node-js) Run Dependencies: [node-js](#node-js) Description: npm: A package manager for javascript. --- npth[¶](#npth) === Homepage: * <https://gnupg.org/software/npth/index.htmlSpack package: * [npth/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/npth/package.py) Versions: 1.5, 1.4 Description: nPth is a library to provide the GNU Pth API and thus a non-preemptive threads implementation. --- nspr[¶](#nspr) === Homepage: * <https://developer.mozilla.org/en-US/docs/Mozilla/Projects/NSPRSpack package: * [nspr/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/nspr/package.py) Versions: 4.13.1 Build Dependencies: [perl](#perl) Description: Netscape Portable Runtime (NSPR) provides a platform-neutral API for system level and libc-like functions. --- numactl[¶](#numactl) === Homepage: * <http://oss.sgi.com/projects/libnuma/Spack package: * [numactl/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/numactl/package.py) Versions: 2.0.11 Build Dependencies: [autoconf](#autoconf), [automake](#automake), [libtool](#libtool), [m4](#m4) Description: NUMA support for Linux --- numdiff[¶](#numdiff) === Homepage: * <https://www.nongnu.org/numdiffSpack package: * [numdiff/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/numdiff/package.py) Versions: 5.9.0, 5.8.1 Build Dependencies: [gmp](#gmp), [gettext](#gettext) Link Dependencies: [gmp](#gmp), [gettext](#gettext) Description: Numdiff is a little program that can be used to compare putatively similar files line by line and field by field, ignoring small numeric differences or/and different numeric formats. 
--- nut[¶](#nut) === Homepage: * <https://github.com/lanl/NuT> Spack package: * [nut/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/nut/package.py) Versions: serial, openmp Build Dependencies: [cmake](#cmake), [random123](#random123) Link Dependencies: [cmake](#cmake), [random123](#random123) Description: NuT is a Monte Carlo code for neutrino transport and is a C++ analog to the Haskell McPhD code. NuT is principally aimed at exploring on-node parallelism and performance issues. --- nvptx-tools[¶](#nvptx-tools) === Homepage: * <https://github.com/MentorEmbedded/nvptx-tools> Spack package: * [nvptx-tools/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/nvptx-tools/package.py) Versions: 2018-03-01 Build Dependencies: [cuda](#cuda), [binutils](#binutils) Link Dependencies: [cuda](#cuda), [binutils](#binutils) Description: nvptx-tools: A collection of tools for use with nvptx-none GCC toolchains. These tools are necessary when building a version of GCC that enables offloading of OpenMP/OpenACC code to NVIDIA GPUs.
--- nwchem[¶](#nwchem) === Homepage: * <http://www.nwchem-sw.orgSpack package: * [nwchem/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/nwchem/package.py) Versions: 6.8.1, 6.8, 6.6 Build Dependencies: scalapack, mpi, blas, [python](#python), lapack Link Dependencies: scalapack, mpi, blas, [python](#python), lapack Run Dependencies: [python](#python) Description: High-performance computational chemistry software --- ocaml[¶](#ocaml) === Homepage: * <http://ocaml.org/Spack package: * [ocaml/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/ocaml/package.py) Versions: 4.06.0, 4.03.0 Build Dependencies: [ncurses](#ncurses) Link Dependencies: [ncurses](#ncurses) Description: OCaml is an industrial strength programming language supporting functional, imperative and object-oriented styles --- occa[¶](#occa) === Homepage: * <http://libocca.orgSpack package: * [occa/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/occa/package.py) Versions: develop, v1.0.0-alpha.5, v0.2.0, v0.1.0 Build Dependencies: [cuda](#cuda) Link Dependencies: [cuda](#cuda) Description: OCCA is an open-source (MIT license) library used to program current multi-core/many-core architectures. Devices (such as CPUs, GPUs, Intel's Xeon Phi, FPGAs, etc) are abstracted using an offload-model for application development and programming for the devices is done through a C-based (OKL) or Fortran-based kernel language (OFL). OCCA gives developers the ability to target devices at run-time by using run-time compilation for device kernels. 
--- oce[¶](#oce) === Homepage: * <https://github.com/tpaviot/oceSpack package: * [oce/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/oce/package.py) Versions: 0.18.3, 0.18.2, 0.18.1, 0.18, 0.17.2, 0.17.1, 0.17, 0.16.1, 0.16 Build Dependencies: [cmake](#cmake), tbb Link Dependencies: tbb Description: Open CASCADE Community Edition: patches/improvements/experiments contributed by users over the official Open CASCADE library. --- oclint[¶](#oclint) === Homepage: * <http://oclint.org/Spack package: * [oclint/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/oclint/package.py) Versions: 0.13 Build Dependencies: [subversion](#subversion), [git](#git), [ninja](#ninja), [cmake](#cmake), [llvm](#llvm), [python](#python), [py-argparse](#py-argparse) Link Dependencies: [llvm](#llvm) Description: OClint: a static analysis tool for C, C++, and Objective-C code OCLint is a static code analysis tool for improving quality and reducing defects by inspecting C, C++ and Objective-C code and looking for potential problems --- oclock[¶](#oclock) === Homepage: * <http://cgit.freedesktop.org/xorg/app/oclockSpack package: * [oclock/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/oclock/package.py) Versions: 1.0.3 Build Dependencies: [libxkbfile](#libxkbfile), [libxt](#libxt), pkgconfig, [libxext](#libxext), [util-macros](#util-macros), [libx11](#libx11), [libxmu](#libxmu) Link Dependencies: [libxkbfile](#libxkbfile), [libxt](#libxt), [libxext](#libxext), [libx11](#libx11), [libxmu](#libxmu) Description: oclock is a simple analog clock using the SHAPE extension to make a round (possibly transparent) window. 
--- octave[¶](#octave) === Homepage: * <https://www.gnu.org/software/octave/Spack package: * [octave/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/octave/package.py) Versions: 4.4.1, 4.4.0, 4.2.2, 4.2.1, 4.2.0, 4.0.2, 4.0.0 Build Dependencies: [glpk](#glpk), pkgconfig, [arpack-ng](#arpack-ng), [qrupdate](#qrupdate), [gl2ps](#gl2ps), [curl](#curl), [readline](#readline), [qhull](#qhull), lapack, [zlib](#zlib), [pcre](#pcre), [fltk](#fltk), [gnuplot](#gnuplot), [hdf5](#hdf5), java, [image-magick](#image-magick), [suite-sparse](#suite-sparse), [fftw](#fftw), [fontconfig](#fontconfig), [qt](#qt), blas, [llvm](#llvm), [freetype](#freetype) Link Dependencies: [glpk](#glpk), [arpack-ng](#arpack-ng), [qrupdate](#qrupdate), [gl2ps](#gl2ps), [curl](#curl), [readline](#readline), [qt](#qt), [qhull](#qhull), [zlib](#zlib), [pcre](#pcre), lapack, [fltk](#fltk), [gnuplot](#gnuplot), [hdf5](#hdf5), java, [image-magick](#image-magick), [suite-sparse](#suite-sparse), [fftw](#fftw), [fontconfig](#fontconfig), blas, [llvm](#llvm), [freetype](#freetype) Description: GNU Octave is a high-level language, primarily intended for numerical computations. It provides a convenient command line interface for solving linear and nonlinear problems numerically, and for performing other numerical experiments using a language that is mostly compatible with Matlab. It may also be used as a batch-oriented language. --- octave-optim[¶](#octave-optim) === Homepage: * <https://octave.sourceforge.io/optim/Spack package: * [octave-optim/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/octave-optim/package.py) Versions: 1.5.2 Build Dependencies: [octave](#octave), [octave-struct](#octave-struct) Link Dependencies: [octave](#octave), [octave-struct](#octave-struct) Run Dependencies: [octave](#octave) Description: Non-linear optimization toolkit for Octave. 
--- octave-splines[¶](#octave-splines) === Homepage: * <http://octave.sourceforge.net/splines/index.htmlSpack package: * [octave-splines/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/octave-splines/package.py) Versions: 1.3.1 Build Dependencies: [octave](#octave) Link Dependencies: [octave](#octave) Run Dependencies: [octave](#octave) Description: Additional spline functions. --- octave-struct[¶](#octave-struct) === Homepage: * <https://octave.sourceforge.io/struct/Spack package: * [octave-struct/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/octave-struct/package.py) Versions: 1.0.14 Build Dependencies: [octave](#octave) Link Dependencies: [octave](#octave) Run Dependencies: [octave](#octave) Description: Additional structure manipulation functions for Octave. --- octopus[¶](#octopus) === Homepage: * <http://www.tddft.org/programs/octopus/Spack package: * [octopus/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/octopus/package.py) Versions: 7.3, 6.0, 5.0.1 Build Dependencies: [libxc](#libxc), [parmetis](#parmetis), mpi, [netcdf-fortran](#netcdf-fortran), [gsl](#gsl), [metis](#metis), [arpack-ng](#arpack-ng), scalapack, blas, [fftw](#fftw), lapack Link Dependencies: [libxc](#libxc), [parmetis](#parmetis), mpi, [netcdf-fortran](#netcdf-fortran), [gsl](#gsl), [metis](#metis), [arpack-ng](#arpack-ng), scalapack, blas, [fftw](#fftw), lapack Description: A real-space finite-difference (time-dependent) density-functional theory code. 
---

of-adios-write
===

Homepage:
* <https://develop.openfoam.com/Community/feature-adiosWrite/>

Spack package:
* [of-adios-write/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/of-adios-write/package.py)

Versions: develop, 1706, 1612

Build Dependencies: [openfoam-com](#openfoam-com), [adios](#adios)

Link Dependencies: [openfoam-com](#openfoam-com), [adios](#adios)

Description: adios-write supplies additional libraries and function objects for reading/writing OpenFOAM data with ADIOS. This offering is part of the community repository supported by OpenCFD Ltd, producer and distributor of the OpenFOAM software via www.openfoam.com, and owner of the OPENFOAM trademark. OpenCFD Ltd has been developing and releasing OpenFOAM since its debut in 2004.

---

of-precice
===

Homepage:
* <https://www.precice.org>

Spack package:
* [of-precice/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/of-precice/package.py)

Versions: develop

Build Dependencies: [yaml-cpp](#yaml-cpp), openfoam, [precice](#precice)

Link Dependencies: [yaml-cpp](#yaml-cpp), openfoam, [precice](#precice)

Description: preCICE adapter for OpenFOAM

---

omega-h
===

Homepage:
* <https://github.com/ibaned/omega_h>

Spack package:
* [omega-h/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/omega-h/package.py)

Versions: develop, 9.19.1, 9.19.0, 9.15.0, 9.14.0, 9.13.14, 9.13.13

Build Dependencies: [zlib](#zlib), [cmake](#cmake), mpi, [gmsh](#gmsh), [trilinos](#trilinos)

Link Dependencies: [zlib](#zlib), mpi, [trilinos](#trilinos)

Description: Omega_h is a C++11 library providing data structures and algorithms for adaptive discretizations. Its specialty is anisotropic triangle and tetrahedral mesh adaptation. It runs efficiently on most modern HPC hardware including GPUs.
---

ompss
===

Homepage:
* <http://pm.bsc.es/>

Spack package:
* [ompss/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/ompss/package.py)

Versions: 14.10

Build Dependencies: [hwloc](#hwloc), [extrae](#extrae), mpi

Link Dependencies: [hwloc](#hwloc), [extrae](#extrae), mpi

Description: OmpSs is an effort to integrate features from the StarSs programming model developed by BSC into a single programming model. In particular, our objective is to extend OpenMP with new directives to support asynchronous parallelism and heterogeneity (devices like GPUs). However, it can also be understood as new directives extending other accelerator based APIs like CUDA or OpenCL. Our OmpSs environment is built on top of our Mercurium compiler and Nanos++ runtime system.

---

ompt-openmp
===

Homepage:
* <https://github.com/OpenMPToolsInterface/LLVM-openmp>

Spack package:
* [ompt-openmp/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/ompt-openmp/package.py)

Versions: 0.1

Build Dependencies: [cmake](#cmake)

Description: LLVM/Clang OpenMP runtime with OMPT support. This is a fork of the OpenMPToolsInterface/LLVM-openmp fork of the official LLVM OpenMP mirror. This library provides a drop-in replacement of the OpenMP runtimes for GCC, Intel and LLVM/Clang.

---

oniguruma
===

Homepage:
* <https://github.com/kkos/oniguruma>

Spack package:
* [oniguruma/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/oniguruma/package.py)

Versions: 6.1.3

Description: Regular expression library.
---

ont-albacore
===

Homepage:
* <https://nanoporetech.com>

Spack package:
* [ont-albacore/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/ont-albacore/package.py)

Versions: 2.3.1, 2.1.2, 1.2.4, 1.1.0

Build Dependencies: [py-ont-fast5-api](#py-ont-fast5-api), [py-pip](#py-pip), [py-h5py](#py-h5py), [py-setuptools](#py-setuptools), [py-numpy](#py-numpy), [py-dateutil](#py-dateutil), [python](#python)

Link Dependencies: [python](#python)

Run Dependencies: [py-ont-fast5-api](#py-ont-fast5-api), [py-dateutil](#py-dateutil), [py-setuptools](#py-setuptools), [py-numpy](#py-numpy), [py-h5py](#py-h5py), [python](#python)

Description: Albacore is a software project that provides an entry point to the Oxford Nanopore basecalling algorithms. It can be run from the command line on Windows and multiple Linux platforms. A selection of configuration files allow basecalling DNA libraries made with our current range of sequencing kits and Flow Cells.

---

opa-psm2
===

Homepage:
* <http://github.com/01org/opa-psm2>

Spack package:
* [opa-psm2/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/opa-psm2/package.py)

Versions: 10.3-37, 10.3-17, 10.3-10, 10.3-8, 10.2-260, 10.2-235, 10.2-175

Build Dependencies: [numactl](#numactl)

Link Dependencies: [numactl](#numactl)

Description: Intel Omni-Path Performance Scaled Messaging 2 (PSM2) library

---

opam
===

Homepage:
* <https://opam.ocaml.org/>

Spack package:
* [opam/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/opam/package.py)

Versions: 1.2.2, 1.2.1

Build Dependencies: [ocaml](#ocaml)

Link Dependencies: [ocaml](#ocaml)

Description: OPAM: OCaml Package Manager. OPAM is a source-based package manager for OCaml. It supports multiple simultaneous compiler installations, flexible package constraints, and a Git-friendly development workflow.
---

opari2
===

Homepage:
* <http://www.vi-hps.org/projects/score-p>

Spack package:
* [opari2/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/opari2/package.py)

Versions: 2.0.4, 2.0.3, 2.0.1, 2.0, 1.1.4, 1.1.2

Description: OPARI2 is a source-to-source instrumentation tool for OpenMP and hybrid codes. It surrounds OpenMP directives and runtime library calls with calls to the POMP2 measurement interface. OPARI2 will provide you with a new initialization method that allows for multi-directory and parallel builds as well as the usage of pre-instrumented libraries. Furthermore, an efficient way of tracking parent-child relationships was added. Additionally, we extended OPARI2 to support instrumentation of OpenMP 3.0 tied tasks.

---

openbabel
===

Homepage:
* <http://openbabel.org/wiki/Main_Page>

Spack package:
* [openbabel/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/openbabel/package.py)

Versions: 2.4.1

Build Dependencies: [zlib](#zlib), pkgconfig, [libxml2](#libxml2), [eigen](#eigen), [cmake](#cmake), [cairo](#cairo), [python](#python)

Link Dependencies: [zlib](#zlib), [python](#python), [cairo](#cairo), [libxml2](#libxml2), [eigen](#eigen)

Run Dependencies: [python](#python)

Description: Open Babel is a chemical toolbox designed to speak the many languages of chemical data. It's an open, collaborative project allowing anyone to search, convert, analyze, or store data from molecular modeling, chemistry, solid-state materials, biochemistry, or related areas.
---

openblas
===

Homepage:
* <http://www.openblas.net>

Spack package:
* [openblas/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/openblas/package.py)

Versions: develop, 0.3.3, 0.3.2, 0.3.1, 0.3.0, 0.2.20, 0.2.19, 0.2.18, 0.2.17, 0.2.16, 0.2.15

Description: OpenBLAS: An optimized BLAS library

---

opencoarrays
===

Homepage:
* <http://www.opencoarrays.org/>

Spack package:
* [opencoarrays/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/opencoarrays/package.py)

Versions: 2.2.0, 1.8.10, 1.8.4, 1.8.0, 1.7.4, 1.6.2

Build Dependencies: [cmake](#cmake), mpi

Link Dependencies: mpi

Description: OpenCoarrays is an open-source software project that produces an application binary interface (ABI) supporting coarray Fortran (CAF) compilers, an application programming interface (API) that supports users of non-CAF compilers, and an associated compiler wrapper and program launcher.

---

opencv
===

Homepage:
* <http://opencv.org/>

Spack package:
* [opencv/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/opencv/package.py)

Versions: 3.4.3, 3.4.1, 3.4.0, 3.3.1, 3.3.0, 3.2.0, 3.1.0, 2.4.13.2, 2.4.13.1, 2.4.13, 2.4.12.3, 2.4.12.2, 2.4.12.1, master

Build Dependencies: [cuda](#cuda), jpeg, [gtkplus](#gtkplus), [eigen](#eigen), java, [cmake](#cmake), [py-numpy](#py-numpy), [jasper](#jasper), [qt](#qt), [zlib](#zlib), [ffmpeg](#ffmpeg), mpi, [protobuf](#protobuf), [libtiff](#libtiff), [libpng](#libpng), [vtk](#vtk), [python](#python)

Link Dependencies: [cuda](#cuda), [qt](#qt), jpeg, [gtkplus](#gtkplus), java, [vtk](#vtk), [jasper](#jasper), [zlib](#zlib), [ffmpeg](#ffmpeg), mpi, [protobuf](#protobuf), [libtiff](#libtiff), [libpng](#libpng), [python](#python)

Run Dependencies: [py-numpy](#py-numpy)

Description: OpenCV is released under a BSD license and hence it's free for both academic and commercial use. It has C++, C, Python and Java interfaces and supports Windows, Linux, Mac OS, iOS and Android. OpenCV was designed for computational efficiency and with a strong focus on real-time applications. Written in optimized C/C++, the library can take advantage of multi-core processing. Enabled with OpenCL, it can take advantage of the hardware acceleration of the underlying heterogeneous compute platform. Adopted all around the world, OpenCV has a user community of more than 47 thousand people and an estimated number of downloads exceeding 9 million. Usage ranges from interactive art, to mines inspection, stitching maps on the web or through advanced robotics.

---

openexr
===

Homepage:
* <http://www.openexr.com/>

Spack package:
* [openexr/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/openexr/package.py)

Versions: 2.2.0, 2.1.0, 2.0.1, 1.7.0, 1.6.1, 1.5.0, 1.4.0a, 1.3.2

Build Dependencies: pkgconfig, [ilmbase](#ilmbase)

Link Dependencies: [ilmbase](#ilmbase)

Description: OpenEXR Graphics Tools (high dynamic-range image file format)

---

openfast
===

Homepage:
* <http://openfast.readthedocs.io/en/latest/>

Spack package:
* [openfast/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/openfast/package.py)

Versions: develop, master

Build Dependencies: [zlib](#zlib), [yaml-cpp](#yaml-cpp), mpi, [libxml2](#libxml2), [hdf5](#hdf5), [cmake](#cmake), blas, lapack

Link Dependencies: [zlib](#zlib), [yaml-cpp](#yaml-cpp), mpi, [libxml2](#libxml2), [hdf5](#hdf5), blas, lapack

Description: Wind turbine simulation package from NREL

---

openfoam-com
===

Homepage:
* <http://www.openfoam.com/>

Spack package:
* [openfoam-com/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/openfoam-com/package.py)

Versions: develop, 1806, 1712, 1706, 1612

Build Dependencies: [zoltan](#zoltan), [fftw](#fftw), [cmake](#cmake), [cgal](#cgal), [paraview](#paraview), [zlib](#zlib), [vtk](#vtk), mpi, [parmgridgen](#parmgridgen), [metis](#metis), [boost](#boost), [kahip](#kahip), [flex](#flex), [scotch](#scotch)

Link Dependencies: [zlib](#zlib), [zoltan](#zoltan), mpi, [paraview](#paraview), [metis](#metis), [boost](#boost), [cgal](#cgal), [kahip](#kahip), [vtk](#vtk), [fftw](#fftw), [scotch](#scotch)

Description: OpenFOAM is a GPL-opensource C++ CFD-toolbox. This offering is supported by OpenCFD Ltd, producer and distributor of the OpenFOAM software via www.openfoam.com, and owner of the OPENFOAM trademark. OpenCFD Ltd has been developing and releasing OpenFOAM since its debut in 2004.

---

openfoam-org
===

Homepage:
* <http://www.openfoam.org/>

Spack package:
* [openfoam-org/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/openfoam-org/package.py)

Versions: develop, 5.0, 4.1, 2.4.0

Build Dependencies: [zlib](#zlib), [cmake](#cmake), mpi, [flex](#flex), [scotch](#scotch)

Link Dependencies: [zlib](#zlib), mpi, [scotch](#scotch)

Description: OpenFOAM is a GPL-opensource C++ CFD-toolbox. The openfoam.org release is managed by the OpenFOAM Foundation Ltd as a licensee of the OPENFOAM trademark. This offering is not approved or endorsed by OpenCFD Ltd, producer and distributor of the OpenFOAM software via www.openfoam.com, and owner of the OPENFOAM trademark.

---

openfst
===

Homepage:
* <http://www.openfst.org>

Spack package:
* [openfst/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/openfst/package.py)

Versions: 1.6.1, 1.6.0, 1.5.4, 1.5.3, 1.5.2, 1.5.1, 1.5.0, 1.4.1-patch, 1.4.1, 1.4.0

Description: OpenFst is a library for constructing, combining, optimizing, and searching weighted finite-state transducers (FSTs). Weighted finite-state transducers are automata where each transition has an input label, an output label, and a weight.
---

opengl
===

Homepage:
* <https://www.opengl.org/>

Spack package:
* [opengl/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/opengl/package.py)

Description: Placeholder for external OpenGL libraries from hardware vendors

---

openglu
===

Homepage:
* <https://www.opengl.org/resources/libraries>

Spack package:
* [openglu/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/openglu/package.py)

Description: Placeholder for external OpenGL utility library (GLU) from hardware vendors

---

openjpeg
===

Homepage:
* <https://github.com/uclouvain/openjpeg>

Spack package:
* [openjpeg/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/openjpeg/package.py)

Versions: 2.3.0, 2.2.0, 2.1.2, 2.1.1, 2.1.0, 2.0.1, 2.0.0, 1.5.2, 1.5.1

Build Dependencies: [cmake](#cmake)

Description: OpenJPEG is an open-source JPEG 2000 codec written in C language. It has been developed in order to promote the use of JPEG 2000, a still-image compression standard from the Joint Photographic Experts Group (JPEG). Since April 2015, it is officially recognized by ISO/IEC and ITU-T as a JPEG 2000 Reference Software.

---

openmc
===

Homepage:
* <http://openmc.readthedocs.io/>

Spack package:
* [openmc/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/openmc/package.py)

Versions: develop, 0.10.0

Build Dependencies: [cmake](#cmake), [hdf5](#hdf5)

Link Dependencies: [hdf5](#hdf5)

Description: The OpenMC project aims to provide a fully-featured Monte Carlo particle transport code based on modern methods. It is a constructive solid geometry, continuous-energy transport code that uses ACE format cross sections. The project started under the Computational Reactor Physics Group at MIT.
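The opengl and openglu entries above are placeholders: Spack does not build these libraries itself, so they must be registered as externals before a dependent package can use them. A minimal sketch of such a `packages.yaml` entry follows; the `@4.5.0` version string and the `/usr` prefix are assumptions for illustration, and the exact schema (`paths:` vs. newer `externals:`) depends on your Spack version, so check `spack config` documentation for your release.

```yaml
# ~/.spack/packages.yaml -- register a vendor-provided OpenGL as an external
packages:
  opengl:
    # Tell Spack never to try building this package from source
    buildable: false
    paths:
      # spec: installation prefix (hypothetical version and path)
      opengl@4.5.0: /usr
```

With this in place, a spec such as `spack install paraview` would resolve its `opengl` dependency to the system library instead of failing on the placeholder package.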
---

openmpi
===

Homepage:
* <http://www.open-mpi.org>

Spack package:
* [openmpi/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/openmpi/package.py)

Versions: 3.1.3, 3.1.2, 3.1.1, 3.1.0, 3.0.2, 3.0.1, 3.0.0, 2.1.5, 2.1.4, 2.1.3, 2.1.2, 2.1.1, 2.1.0, 2.0.4, 2.0.3, 2.0.2, 2.0.1, 2.0.0, 1.10.7, 1.10.6, 1.10.5, 1.10.4, 1.10.3, 1.10.2, 1.10.1, 1.10.0, 1.8.8, 1.8.7, 1.8.6, 1.8.5, 1.8.4, 1.8.3, 1.8.2, 1.8.1, 1.8, 1.7.5, 1.7.4, 1.7.3, 1.7.2, 1.7.1, 1.7, 1.6.5, 1.6.4, 1.6.3, 1.6.2, 1.6.1, 1.6, 1.5.5, 1.5.4, 1.5.3, 1.5.2, 1.5.1, 1.5, 1.4.5, 1.4.4, 1.4.3, 1.4.2, 1.4.1, 1.4, 1.3.4, 1.3.3, 1.3.2, 1.3.1, 1.3, 1.2.9, 1.2.8, 1.2.7, 1.2.6, 1.2.5, 1.2.4, 1.2.3, 1.2.2, 1.2.1, 1.2, 1.1.5, 1.1.4, 1.1.3, 1.1.2, 1.1.1, 1.1, 1.0.2, 1.0.1, 1.0

Build Dependencies: [zlib](#zlib), [hwloc](#hwloc), [numactl](#numactl), [sqlite](#sqlite), java, [libfabric](#libfabric), [ucx](#ucx), [valgrind](#valgrind), [binutils](#binutils), [slurm](#slurm)

Link Dependencies: [zlib](#zlib), [hwloc](#hwloc), [numactl](#numactl), [sqlite](#sqlite), java, [libfabric](#libfabric), [ucx](#ucx), [valgrind](#valgrind), [binutils](#binutils), [slurm](#slurm)

Description: An open source Message Passing Interface implementation. The Open MPI Project is an open source Message Passing Interface implementation that is developed and maintained by a consortium of academic, research, and industry partners. Open MPI is therefore able to combine the expertise, technologies, and resources from all across the High Performance Computing community in order to build the best MPI library available. Open MPI offers advantages for system and software vendors, application developers and computer science researchers.
---

opennurbs
===

Homepage:
* <https://github.com/OpenNURBS/OpenNURBS>

Spack package:
* [opennurbs/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/opennurbs/package.py)

Versions: develop, percept

Description: OpenNURBS is an open-source NURBS-based geometric modeling library and toolset, with meshing and display / output functions.

---

openpmd-api
===

Homepage:
* <http://www.openPMD.org>

Spack package:
* [openpmd-api/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/openpmd-api/package.py)

Versions: develop

Build Dependencies: mpi, [mpark-variant](#mpark-variant), [hdf5](#hdf5), [cmake](#cmake), [adios2](#adios2), [py-pybind11](#py-pybind11), [adios](#adios), [python](#python)

Link Dependencies: mpi, [mpark-variant](#mpark-variant), [hdf5](#hdf5), [adios2](#adios2), [py-pybind11](#py-pybind11), [adios](#adios), [python](#python)

Run Dependencies: [py-numpy](#py-numpy)

Test Dependencies: [py-numpy](#py-numpy), [catch](#catch)

Description: API for easy reading and writing of openPMD files

---

openscenegraph
===

Homepage:
* <http://www.openscenegraph.org>

Spack package:
* [openscenegraph/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/openscenegraph/package.py)

Versions: 3.2.3, 3.1.5

Build Dependencies: [zlib](#zlib), [cmake](#cmake), [qt](#qt)

Link Dependencies: [zlib](#zlib), [qt](#qt)

Description: OpenSceneGraph is an open source, high performance 3D graphics toolkit that's used in a variety of visual simulation applications.
---

openslide
===

Homepage:
* <http://openslide.org/>

Spack package:
* [openslide/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/openslide/package.py)

Versions: 3.4.1

Build Dependencies: [libtiff](#libtiff), jpeg, [libxml2](#libxml2), [sqlite](#sqlite), [openjpeg](#openjpeg)

Link Dependencies: [libtiff](#libtiff), jpeg, [libxml2](#libxml2), [sqlite](#sqlite), [openjpeg](#openjpeg)

Description: OpenSlide reads whole slide image files.

---

openspeedshop
===

Homepage:
* <http://www.openspeedshop.org>

Spack package:
* [openspeedshop/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/openspeedshop/package.py)

Versions: develop, 2.4.0, 2.3.1.5, 2.3.1.4, 2.3.1.3

Build Dependencies: [libtool](#libtool), [libxml2](#libxml2), [libdwarf](#libdwarf), [bison](#bison), [cmake](#cmake), [cbtf-krell](#cbtf-krell), [qt](#qt), [mrnet](#mrnet), [sqlite](#sqlite), [cbtf-argonavis](#cbtf-argonavis), [dyninst](#dyninst), [boost](#boost), [binutils](#binutils), [cbtf](#cbtf), [python](#python), [flex](#flex)

Link Dependencies: [libxml2](#libxml2), [libdwarf](#libdwarf), [cbtf-krell](#cbtf-krell), [qt](#qt), [mrnet](#mrnet), [sqlite](#sqlite), [cbtf-argonavis](#cbtf-argonavis), [dyninst](#dyninst), [boost](#boost), [binutils](#binutils), [cbtf](#cbtf), elf

Run Dependencies: [cbtf-argonavis](#cbtf-argonavis), [cbtf-krell](#cbtf-krell), [cbtf](#cbtf), [python](#python), [mrnet](#mrnet)

Description: OpenSpeedShop is a community effort by The Krell Institute with current direct funding from DOE's NNSA. It builds on top of a broad list of community infrastructures, most notably Dyninst and MRNet from UW, libmonitor from Rice, and PAPI from UTK. OpenSpeedShop is an open source multi platform Linux performance tool which is targeted to support performance analysis of applications running on both single node and large scale IA64, IA32, EM64T, AMD64, PPC, ARM, Power8, Intel Phi, Blue Gene and Cray platforms. OpenSpeedShop development is hosted by the Krell Institute. The infrastructure and base components of OpenSpeedShop are released as open source code primarily under LGPL.

---

openspeedshop-utils
===

Homepage:
* <http://www.openspeedshop.org>

Spack package:
* [openspeedshop-utils/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/openspeedshop-utils/package.py)

Versions: develop, 2.4.0, 2.3.1.5, 2.3.1.4, 2.3.1.3

Build Dependencies: [libtool](#libtool), [libxml2](#libxml2), [libdwarf](#libdwarf), [bison](#bison), [cmake](#cmake), [cbtf-krell](#cbtf-krell), [mrnet](#mrnet), [sqlite](#sqlite), [cbtf-argonavis](#cbtf-argonavis), [dyninst](#dyninst), [boost](#boost), [binutils](#binutils), [cbtf](#cbtf), [python](#python), [flex](#flex)

Link Dependencies: [libdwarf](#libdwarf), [libxml2](#libxml2), [mrnet](#mrnet), [dyninst](#dyninst), [cbtf-argonavis](#cbtf-argonavis), [sqlite](#sqlite), [cbtf-krell](#cbtf-krell), [cbtf](#cbtf), [boost](#boost), elf

Run Dependencies: [cbtf-argonavis](#cbtf-argonavis), [cbtf-krell](#cbtf-krell), [cbtf](#cbtf), [python](#python), [mrnet](#mrnet)

Description: OpenSpeedShop is a community effort by The Krell Institute with current direct funding from DOE's NNSA. It builds on top of a broad list of community infrastructures, most notably Dyninst and MRNet from UW, libmonitor from Rice, and PAPI from UTK. OpenSpeedShop is an open source multi platform Linux performance tool which is targeted to support performance analysis of applications running on both single node and large scale IA64, IA32, EM64T, AMD64, PPC, ARM, Power8, Intel Phi, Blue Gene and Cray platforms. OpenSpeedShop development is hosted by the Krell Institute. The infrastructure and base components of OpenSpeedShop are released as open source code primarily under LGPL. openspeedshop-utils is a package that does not have the qt3 gui. It was created to avoid a conflict between openspeedshop and cbtf-argonavis-gui based on the fact that spack will not allow a qt3 and qt4/qt5 dependency in a packages dependency tree.

---

openssh
===

Homepage:
* <https://www.openssh.com/>

Spack package:
* [openssh/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/openssh/package.py)

Versions: 7.6p1, 7.5p1, 7.4p1, 7.3p1, 7.2p2, 7.1p2, 7.0p1, 6.9p1, 6.8p1, 6.7p1, 6.6p1

Build Dependencies: [zlib](#zlib), [openssl](#openssl), [libedit](#libedit), [ncurses](#ncurses)

Link Dependencies: [zlib](#zlib), [openssl](#openssl), [libedit](#libedit), [ncurses](#ncurses)

Description: OpenSSH is the premier connectivity tool for remote login with the SSH protocol. It encrypts all traffic to eliminate eavesdropping, connection hijacking, and other attacks. In addition, OpenSSH provides a large suite of secure tunneling capabilities, several authentication methods, and sophisticated configuration options.

---

openssl
===

Homepage:
* <http://www.openssl.org>

Spack package:
* [openssl/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/openssl/package.py)

Versions: 1.1.0g, 1.1.0e, 1.1.0d, 1.1.0c, 1.0.2o, 1.0.2n, 1.0.2m, 1.0.2k, 1.0.2j, 1.0.2i, 1.0.2h, 1.0.2g, 1.0.2f, 1.0.2e, 1.0.2d, 1.0.1u, 1.0.1t, 1.0.1r, 1.0.1h

Build Dependencies: [zlib](#zlib), [perl](#perl)

Link Dependencies: [zlib](#zlib)

Test Dependencies: [perl](#perl)

Description: OpenSSL is an open source project that provides a robust, commercial-grade, and full-featured toolkit for the Transport Layer Security (TLS) and Secure Sockets Layer (SSL) protocols. It is also a general-purpose cryptography library.
---

opium
===

Homepage:
* <https://opium.sourceforge.net/index.html>

Spack package:
* [opium/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/opium/package.py)

Versions: 3.8

Build Dependencies: blas, lapack

Link Dependencies: blas, lapack

Description: DFT pseudopotential generation project

---

optional-lite
===

Homepage:
* <https://github.com/martinmoene/optional-lite>

Spack package:
* [optional-lite/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/optional-lite/package.py)

Versions: 3.0.0, 2.3.0, 2.2.0, 2.0.0, 1.0.3

Description: A single-file header-only version of a C++17-like optional, a nullable object for C++98, C++11 and later.

---

opus
===

Homepage:
* <http://opus-codec.org/>

Spack package:
* [opus/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/opus/package.py)

Versions: 1.1.4, 1.1.3, 1.1.2, 1.1.1, 1.1, 1.0.3, 1.0.2, 1.0.1, 1.0.0, 0.9.14, 0.9.10, 0.9.9, 0.9.8, 0.9.7, 0.9.6, 0.9.5, 0.9.3, 0.9.2, 0.9.1, 0.9.0

Build Dependencies: [libvorbis](#libvorbis)

Link Dependencies: [libvorbis](#libvorbis)

Description: Opus is a totally open, royalty-free, highly versatile audio codec.

---

orca
===

Homepage:
* <https://cec.mpg.de>

Spack package:
* [orca/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/orca/package.py)

Versions: 4.0.1.2

Build Dependencies: [zstd](#zstd)

Run Dependencies: [openmpi](#openmpi)

Description: An ab initio, DFT and semiempirical SCF-MO package. Note: Orca is licensed software. You will need to create an account on the Orca homepage and download Orca yourself. Spack will search your current directory for the download file. Alternatively, add this file to a mirror so that Spack can find it. For instructions on how to set up a mirror, see http://spack.readthedocs.io/en/latest/mirrors.html

---

orfm
===

Homepage:
* <https://github.com/wwood/OrfM>

Spack package:
* [orfm/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/orfm/package.py)

Versions: 0.7.1

Link Dependencies: [zlib](#zlib)

Description: A simple and not slow open reading frame (ORF) caller. No bells or whistles like frameshift detection, just a straightforward goal of returning a FASTA file of open reading frames over a certain length from a FASTA/Q file of nucleotide sequences.

---

orthofinder
===

Homepage:
* <https://github.com/davidemms/OrthoFinder>

Spack package:
* [orthofinder/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/orthofinder/package.py)

Versions: 2.2.0

Run Dependencies: [blast-plus](#blast-plus), [mcl](#mcl), [fastme](#fastme), [py-dlcpar](#py-dlcpar)

Description: OrthoFinder is a fast, accurate and comprehensive analysis tool for comparative genomics. It finds orthologues and orthogroups, infers rooted gene trees for all orthogroups, and infers a rooted species tree for the species being analysed. OrthoFinder also provides comprehensive statistics for comparative genomic analyses. OrthoFinder is simple to use and all you need to run it is a set of protein sequence files (one per species) in FASTA format.

---

orthomcl
===

Homepage:
* <http://orthomcl.org/orthomcl/>

Spack package:
* [orthomcl/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/orthomcl/package.py)

Versions: 2.0.9

Build Dependencies: [perl](#perl), [blast-plus](#blast-plus), [mcl](#mcl), [mariadb](#mariadb)

Link Dependencies: [blast-plus](#blast-plus), [mcl](#mcl), [mariadb](#mariadb)

Run Dependencies: [perl](#perl)

Description: OrthoMCL is a genome-scale algorithm for grouping orthologous protein sequences.
---

osu-micro-benchmarks
===

Homepage:
* <http://mvapich.cse.ohio-state.edu/benchmarks/>

Spack package:
* [osu-micro-benchmarks/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/osu-micro-benchmarks/package.py)

Versions: 5.4, 5.3

Build Dependencies: [cuda](#cuda), mpi

Link Dependencies: [cuda](#cuda), mpi

Description: The Ohio MicroBenchmark suite is a collection of independent MPI message passing performance microbenchmarks developed and written at The Ohio State University. It includes traditional benchmarks and performance measures such as latency, bandwidth and host overhead and can be used for both traditional and GPU-enhanced nodes.

---

otf
===

Homepage:
* <http://tu-dresden.de/die_tu_dresden/zentrale_einrichtungen/zih/forschung/projekte/otf/index_html/document_view?set_language=en>

Spack package:
* [otf/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/otf/package.py)

Versions: 1.12.5salmon

Build Dependencies: [zlib](#zlib)

Link Dependencies: [zlib](#zlib)

Description: To improve scalability for very large and massively parallel traces the Open Trace Format (OTF) is developed at ZIH as a successor format to the Vampir Trace Format (VTF3).

---

otf2
===

Homepage:
* <http://www.vi-hps.org/projects/score-p>

Spack package:
* [otf2/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/otf2/package.py)

Versions: 2.1.1, 2.1, 2.0, 1.5.1, 1.4, 1.3.1, 1.2.1

Description: The Open Trace Format 2 is a highly scalable, memory efficient event trace data format plus support library.
---

p4est
===

Homepage:
* <http://www.p4est.org>

Spack package:
* [p4est/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/p4est/package.py)

Versions: 2.0, 1.1

Build Dependencies: [zlib](#zlib), [autoconf](#autoconf), [automake](#automake), [libtool](#libtool), mpi

Link Dependencies: [zlib](#zlib), mpi

Description: Dynamic management of a collection (a forest) of adaptive octrees in parallel

---

p7zip
===

Homepage:
* <http://p7zip.sourceforge.net>

Spack package:
* [p7zip/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/p7zip/package.py)

Versions: 16.02

Description: A Unix port of the 7z file archiver

---

pacbio-daligner
===

Homepage:
* <https://github.com/PacificBiosciences/DALIGNER>

Spack package:
* [pacbio-daligner/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/pacbio-daligner/package.py)

Versions: 2017-08-05

Build Dependencies: [gmake](#gmake), [pacbio-dazz-db](#pacbio-dazz-db)

Link Dependencies: [pacbio-dazz-db](#pacbio-dazz-db)

Description: Daligner: The Dazzler "Overlap" Module. This is a special fork required for some pacbio utilities.

---

pacbio-damasker
===

Homepage:
* <https://github.com/PacificBiosciences/DAMASKER>

Spack package:
* [pacbio-damasker/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/pacbio-damasker/package.py)

Versions: 2017-02-11

Build Dependencies: [gmake](#gmake)

Description: Damasker: The Dazzler Repeat Masking Suite. This is a special fork required for some pacbio utilities.

---

pacbio-dazz-db
===

Homepage:
* <https://github.com/PacificBiosciences/DAZZ_DB>

Spack package:
* [pacbio-dazz-db/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/pacbio-dazz-db/package.py)

Versions: 2017-04-10

Build Dependencies: [gmake](#gmake)

Description: The Dazzler Database Library. This version is a special fork required for some pacbio utilities.

---

pacbio-dextractor
===

Homepage:
* <https://github.com/PacificBiosciences/DEXTRACTOR>

Spack package:
* [pacbio-dextractor/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/pacbio-dextractor/package.py)

Versions: 2016-08-09

Build Dependencies: [gmake](#gmake), [hdf5](#hdf5)

Link Dependencies: [hdf5](#hdf5)

Description: The Dextractor and Compression Command Library. This is a special fork required by some pacbio utilities.

---

packmol
===

Homepage:
* <http://m3g.iqm.unicamp.br/packmol/home.shtml>

Spack package:
* [packmol/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/packmol/package.py)

Versions: 18.169

Build Dependencies: [cmake](#cmake)

Description: Packmol creates an initial point for molecular dynamics simulations by packing molecules in defined regions of space.

---

pacvim
===

Homepage:
* <https://github.com/jmoon018/PacVim>

Spack package:
* [pacvim/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/pacvim/package.py)

Versions: 1.1.1

Build Dependencies: [ncurses](#ncurses)

Link Dependencies: [ncurses](#ncurses)

Description: Pacvim is a command-line game based on Pacman. The main purpose of this software is to familiarize individuals with Vim.

---

pagit
===

Homepage:
* <http://www.sanger.ac.uk/science/tools/pagit>

Spack package:
* [pagit/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/pagit/package.py)

Versions: 1.01

Build Dependencies: java, [perl](#perl)

Run Dependencies: java, [perl](#perl)

Description: PAGIT addresses the need for software to generate high quality draft genomes.
--- pagmo[¶](#pagmo) === Homepage: * <https://esa.github.io/pagmo/> Spack package: * [pagmo/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/pagmo/package.py) Versions: 1.1.7 Build Dependencies: mpi, [gsl](#gsl), [python](#python), [cmake](#cmake), [boost](#boost), [ipopt](#ipopt), [py-scipy](#py-scipy), blas, [py-networkx](#py-networkx) Link Dependencies: mpi, [gsl](#gsl), [boost](#boost), [ipopt](#ipopt), blas, [python](#python) Run Dependencies: [py-networkx](#py-networkx), [py-scipy](#py-scipy) Description: Parallel Global Multiobjective Optimizer (and its Python alter ego PyGMO) is a C++ / Python platform to perform parallel computations of optimisation tasks (global and local) via the asynchronous generalized island model. --- paml[¶](#paml) === Homepage: * <http://abacus.gene.ucl.ac.uk/software/paml.html> Spack package: * [paml/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/paml/package.py) Versions: 4.9h Description: PAML is a package of programs for phylogenetic analyses of DNA or protein sequences using maximum likelihood.
--- panda[¶](#panda) === Homepage: * <http://comopt.ifi.uni-heidelberg.de/software/PANDA/index.htmlSpack package: * [panda/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/panda/package.py) Versions: 2016-03-07 Build Dependencies: [cmake](#cmake), mpi Link Dependencies: mpi Description: PANDA: Parallel AdjaceNcy Decomposition Algorithm --- pandaseq[¶](#pandaseq) === Homepage: * <https://github.com/neufeld/pandaseqSpack package: * [pandaseq/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/pandaseq/package.py) Versions: 2.11, 2.10 Build Dependencies: [zlib](#zlib), pkgconfig, [m4](#m4), [autoconf](#autoconf), [automake](#automake), [libtool](#libtool) Link Dependencies: [bzip2](#bzip2), [libtool](#libtool) Description: PANDASEQ is a program to align Illumina reads, optionally with PCR primers embedded in the sequence, and reconstruct an overlapping sequence. --- pango[¶](#pango) === Homepage: * <http://www.pango.orgSpack package: * [pango/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/pango/package.py) Versions: 1.41.0, 1.40.3, 1.40.1, 1.36.8 Build Dependencies: pkgconfig, [libxft](#libxft), [harfbuzz](#harfbuzz), [cairo](#cairo), [glib](#glib), [gobject-introspection](#gobject-introspection) Link Dependencies: [cairo](#cairo), [glib](#glib), [harfbuzz](#harfbuzz), [libxft](#libxft), [gobject-introspection](#gobject-introspection) Description: Pango is a library for laying out and rendering of text, with an emphasis on internationalization. It can be used anywhere that text layout is needed, though most of the work on Pango so far has been done in the context of the GTK+ widget toolkit. 
--- pangomm[¶](#pangomm) === Homepage: * <http://www.pango.org/> Spack package: * [pangomm/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/pangomm/package.py) Versions: 2.14.1, 2.14.0 Build Dependencies: [cairomm](#cairomm), [glibmm](#glibmm), [pango](#pango) Link Dependencies: [cairomm](#cairomm), [glibmm](#glibmm), [pango](#pango) Description: Pangomm is a C++ interface to Pango. --- papi[¶](#papi) === Homepage: * <http://icl.cs.utk.edu/papi/index.html> Spack package: * [papi/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/papi/package.py) Versions: 5.6.0, 5.5.1, 5.5.0, 5.4.3, 5.4.1, 5.3.0 Description: PAPI provides the tool designer and application engineer with a consistent interface and methodology for use of the performance counter hardware found in most major microprocessors. PAPI enables software engineers to see, in near real time, the relation between software performance and processor events. In addition, Component PAPI provides access to a collection of components that expose performance measurement opportunities across the hardware and software stack.
--- papyrus[¶](#papyrus) === Homepage: * <https://code.ornl.gov/eck/papyrusSpack package: * [papyrus/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/papyrus/package.py) Versions: develop, 1.0.0 Build Dependencies: [cmake](#cmake), mpi Link Dependencies: mpi Description: Parallel Aggregate Persistent Storage --- paradiseo[¶](#paradiseo) === Homepage: * <http://paradiseo.gforge.inria.fr/Spack package: * [paradiseo/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/paradiseo/package.py) Versions: head, dev-safe Build Dependencies: mpi, [eigen](#eigen), [cmake](#cmake), [doxygen](#doxygen), [gnuplot](#gnuplot), [boost](#boost) Link Dependencies: [gnuplot](#gnuplot), [boost](#boost), mpi Description: A C++ white-box object-oriented framework dedicated to the reusable design of metaheuristics. --- parallel[¶](#parallel) === Homepage: * <http://www.gnu.org/software/parallel/Spack package: * [parallel/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/parallel/package.py) Versions: 20170322, 20170122, 20160422, 20160322 Build Dependencies: [perl](#perl) Run Dependencies: [perl](#perl) Description: GNU parallel is a shell tool for executing jobs in parallel using one or more computers. A job can be a single command or a small script that has to be run for each of the lines in the input. --- parallel-netcdf[¶](#parallel-netcdf) === Homepage: * <https://trac.mcs.anl.gov/projects/parallel-netcdfSpack package: * [parallel-netcdf/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/parallel-netcdf/package.py) Versions: 1.8.0, 1.7.0, 1.6.1 Build Dependencies: mpi, [m4](#m4) Link Dependencies: mpi Description: Parallel netCDF (PnetCDF) is a library providing high-performance parallel I/O while still maintaining file-format compatibility with Unidata's NetCDF. 
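The GNU parallel entry above describes running one job per input line across one or more workers. As a minimal sketch of that one-job-per-line model, on systems where `parallel` itself is not yet installed, the more ubiquitous `xargs -P` gives the same fan-out (the worker count of 2 here is an arbitrary illustration, not a recommendation):

```shell
# One job per input line, at most two running concurrently;
# -I{} substitutes each line into the command, much like
# GNU parallel's {} placeholder.
seq 1 3 | xargs -P 2 -I{} echo "job {}"
```

Note that with concurrent workers the output order is not deterministic, which is one reason GNU parallel (which keeps output grouped per job) is often preferred for anything beyond a sketch like this.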
--- paraver[¶](#paraver) === Homepage: * <https://tools.bsc.es/paraver> Spack package: * [paraver/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/paraver/package.py) Versions: 4.6.3, 4.6.2 Build Dependencies: [wx](#wx), [boost](#boost), [wxpropgrid](#wxpropgrid) Link Dependencies: [wx](#wx), [boost](#boost), [wxpropgrid](#wxpropgrid) Description: A very powerful performance visualization and analysis tool based on traces that can be used to analyse any information that is expressed on its input trace format. Traces for parallel MPI, OpenMP and other programs can be generated with Extrae. --- paraview[¶](#paraview) === Homepage: * <http://www.paraview.org> Spack package: * [paraview/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/paraview/package.py) Versions: 5.5.2, 5.5.1, 5.5.0, 5.4.1, 5.4.0, 5.3.0, 5.2.0, 5.1.2, 5.0.1, 4.4.0 Build Dependencies: jpeg, [mesa](#mesa), [hdf5](#hdf5), [cmake](#cmake), [qt](#qt), [libxt](#libxt), [zlib](#zlib), [bzip2](#bzip2), mpi, [libtiff](#libtiff), [libpng](#libpng), [freetype](#freetype), [python](#python), [libxml2](#libxml2) Link Dependencies: jpeg, [mesa](#mesa), [hdf5](#hdf5), [qt](#qt), [libxt](#libxt), [zlib](#zlib), [bzip2](#bzip2), mpi, [libtiff](#libtiff), [libpng](#libpng), [freetype](#freetype), [python](#python), [libxml2](#libxml2) Run Dependencies: [py-numpy](#py-numpy), [py-matplotlib](#py-matplotlib) Description: ParaView is an open-source, multi-platform data analysis and visualization application.
--- parmetis[¶](#parmetis) === Homepage: * <http://glaros.dtc.umn.edu/gkhome/metis/parmetis/overviewSpack package: * [parmetis/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/parmetis/package.py) Versions: 4.0.3, 4.0.2 Build Dependencies: [cmake](#cmake), mpi, [metis](#metis) Link Dependencies: mpi, [metis](#metis) Description: ParMETIS is an MPI-based parallel library that implements a variety of algorithms for partitioning unstructured graphs, meshes, and for computing fill-reducing orderings of sparse matrices. --- parmgridgen[¶](#parmgridgen) === Homepage: * <http://www-users.cs.umn.edu/~moulitsa/software.htmlSpack package: * [parmgridgen/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/parmgridgen/package.py) Versions: 1.0 Build Dependencies: mpi Link Dependencies: mpi Description: MGRIDGEN is a serial library written entirely in ANSI C that implements (serial) algorithms for obtaining a sequence of successive coarse grids that are well-suited for geometric multigrid methods. ParMGridGen is the parallel version of MGridGen. --- parquet[¶](#parquet) === Homepage: * <https://github.com/apache/parquet-cppSpack package: * [parquet/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/parquet/package.py) Versions: 1.4.0 Build Dependencies: [thrift](#thrift), [cmake](#cmake), [boost](#boost), pkgconfig, [arrow](#arrow) Link Dependencies: [thrift](#thrift), [arrow](#arrow), [boost](#boost) Description: C++ bindings for the Apache Parquet columnar data format. 
--- parsimonator[¶](#parsimonator) === Homepage: * <http://www.exelixis-lab.org/Spack package: * [parsimonator/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/parsimonator/package.py) Versions: 1.0.2 Description: Parsimonator is a no-frills light-weight implementation for building starting trees under parsimony for RAxML --- parsplice[¶](#parsplice) === Homepage: * <https://gitlab.com/exaalt/parspliceSpack package: * [parsplice/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/parsplice/package.py) Versions: develop, 1.1 Build Dependencies: mpi, [eigen](#eigen), [nauty](#nauty), [cmake](#cmake), [boost](#boost), [berkeley-db](#berkeley-db), [lammps](#lammps) Link Dependencies: mpi, [eigen](#eigen), [nauty](#nauty), [boost](#boost), [berkeley-db](#berkeley-db), [lammps](#lammps) Description: ParSplice code implements the Parallel Trajectory Splicing algorithm --- partitionfinder[¶](#partitionfinder) === Homepage: * <https://github.com/brettc/partitionfinderSpack package: * [partitionfinder/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/partitionfinder/package.py) Versions: 2.1.1 Build Dependencies: [py-pyparsing](#py-pyparsing), [py-pytables](#py-pytables), [py-scikit-learn](#py-scikit-learn), [python](#python), [py-numpy](#py-numpy), [py-pandas](#py-pandas), [py-scipy](#py-scipy) Run Dependencies: [py-pyparsing](#py-pyparsing), [py-pytables](#py-pytables), [py-scikit-learn](#py-scikit-learn), [python](#python), [py-numpy](#py-numpy), [py-pandas](#py-pandas), [py-scipy](#py-scipy) Description: PartitionFinder is free open source software to select best-fit partitioning schemes and models of molecular evolution for phylogenetic analyses. 
--- patch[¶](#patch) === Homepage: * <http://savannah.gnu.org/projects/patch/Spack package: * [patch/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/patch/package.py) Versions: 2.7.6, 2.7.5 Description: Patch takes a patch file containing a difference listing produced by the diff program and applies those differences to one or more original files, producing patched versions. --- patchelf[¶](#patchelf) === Homepage: * <https://nixos.org/patchelf.htmlSpack package: * [patchelf/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/patchelf/package.py) Versions: 0.9, 0.8 Description: PatchELF is a small utility to modify the dynamic linker and RPATH of ELF executables. --- pathfinder[¶](#pathfinder) === Homepage: * <https://mantevo.org/packages/Spack package: * [pathfinder/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/pathfinder/package.py) Versions: 1.0.0 Description: Proxy Application. Signature search. --- pax-utils[¶](#pax-utils) === Homepage: * <https://wiki.gentoo.org/index.php?title=Project:Hardened/PaX_UtilitiesSpack package: * [pax-utils/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/pax-utils/package.py) Versions: 1.2.2 Description: ELF utils that can check files for security relevant properties --- pbbam[¶](#pbbam) === Homepage: * <https://github.com/PacificBiosciences/pbbamSpack package: * [pbbam/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/pbbam/package.py) Versions: 0.18.0 Build Dependencies: [zlib](#zlib), [cmake](#cmake), [boost](#boost), [doxygen](#doxygen), [htslib](#htslib) Link Dependencies: [zlib](#zlib), [boost](#boost), [doxygen](#doxygen), [htslib](#htslib) Description: The pbbam software package provides components to create, query, & edit PacBio BAM files and associated indices. 
These components include a core C++ library, bindings for additional languages, and command-line utilities. --- pbmpi[¶](#pbmpi) === Homepage: * <http://megasun.bch.umontreal.ca/People/lartillot/www/index.htmSpack package: * [pbmpi/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/pbmpi/package.py) Versions: partition Build Dependencies: [libfabric](#libfabric), mpi Link Dependencies: [libfabric](#libfabric), mpi Description: A Bayesian software for phylogenetic reconstruction using mixture models --- pcma[¶](#pcma) === Homepage: * <http://prodata.swmed.edu/pcma/pcma.phpSpack package: * [pcma/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/pcma/package.py) Versions: 2.0 Description: PCMA is a progressive multiple sequence alignment program that combines two different alignment strategies. --- pcre[¶](#pcre) === Homepage: * <http://www.pcre.orgSpack package: * [pcre/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/pcre/package.py) Versions: 8.42, 8.41, 8.40, 8.39, 8.38 Description: The PCRE package contains Perl Compatible Regular Expression libraries. These are useful for implementing regular expression pattern matching using the same syntax and semantics as Perl 5. --- pcre2[¶](#pcre2) === Homepage: * <http://www.pcre.orgSpack package: * [pcre2/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/pcre2/package.py) Versions: 10.31, 10.20 Description: The PCRE2 package contains Perl Compatible Regular Expression libraries. These are useful for implementing regular expression pattern matching using the same syntax and semantics as Perl 5. 
--- pdf2svg[¶](#pdf2svg) === Homepage: * <http://www.cityinthesky.co.uk/opensource/pdf2svgSpack package: * [pdf2svg/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/pdf2svg/package.py) Versions: 0.2.3, 0.2.2 Run Dependencies: [cairo](#cairo), [poppler](#poppler) Description: A simple PDF to SVG converter using the Poppler and Cairo libraries. --- pdftk[¶](#pdftk) === Homepage: * <https://www.pdflabs.com/tools/pdftk-serverSpack package: * [pdftk/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/pdftk/package.py) Versions: 2.02 Build Dependencies: [eclipse-gcj-parser](#eclipse-gcj-parser) Description: PDFtk Server is a command-line tool for working with PDFs. It is commonly used for client-side scripting or server-side processing of PDFs. --- pdsh[¶](#pdsh) === Homepage: * <https://github.com/grondo/pdshSpack package: * [pdsh/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/pdsh/package.py) Versions: 2.31 Description: PDSH: a high performance, parallel remote shell utility --- pdt[¶](#pdt) === Homepage: * <https://www.cs.uoregon.edu/research/pdt/home.phpSpack package: * [pdt/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/pdt/package.py) Versions: 3.25, 3.24, 3.23, 3.22.1, 3.22, 3.21, 3.20, 3.19, 3.18.1 Description: Program Database Toolkit (PDT) is a framework for analyzing source code written in several programming languages and for making rich program knowledge accessible to developers of static and dynamic analysis tools. PDT implements a standard program representation, the program database (PDB), that can be accessed in a uniform way through a class library supporting common PDB operations. 
--- pegtl[¶](#pegtl) === Homepage: * <https://github.com/taocpp/PEGTL> Spack package: * [pegtl/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/pegtl/package.py) Versions: develop, 2.1.4, 2.0.0 Build Dependencies: [cmake](#cmake) Description: The Parsing Expression Grammar Template Library (PEGTL) is a zero-dependency C++11 header-only library for creating parsers according to a Parsing Expression Grammar (PEG). --- pennant[¶](#pennant) === Homepage: * <https://github.com/lanl/PENNANT> Spack package: * [pennant/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/pennant/package.py) Versions: 0.9, 0.8, 0.7, 0.6, 0.5, 0.4 Build Dependencies: mpi Link Dependencies: mpi Description: PENNANT is an unstructured mesh physics mini-app designed for advanced architecture research. It contains mesh data structures and a few physics algorithms adapted from the LANL rad-hydro code FLAG, and gives a sample of the typical memory access patterns of FLAG. --- percept[¶](#percept) === Homepage: * <https://github.com/PerceptTools/percept> Spack package: * [percept/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/percept/package.py) Versions: develop Build Dependencies: [opennurbs](#opennurbs), [yaml-cpp](#yaml-cpp), [googletest](#googletest), [cmake](#cmake), [boost](#boost), [trilinos](#trilinos) Link Dependencies: [opennurbs](#opennurbs), [yaml-cpp](#yaml-cpp), [boost](#boost), [trilinos](#trilinos), [googletest](#googletest) Description: Parallel mesh refinement and adaptivity tools for the finite element method.
--- perl[¶](#perl) === Homepage: * <http://www.perl.orgSpack package: * [perl/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/perl/package.py) Versions: 5.28.0, 5.26.2, 5.25.11, 5.24.1, 5.22.4, 5.22.3, 5.22.2, 5.22.1, 5.22.0, 5.20.3, 5.18.4, 5.16.3 Build Dependencies: [gdbm](#gdbm) Link Dependencies: [gdbm](#gdbm) Description: Perl 5 is a highly capable, feature-rich programming language with over 27 years of development. --- perl-algorithm-diff[¶](#perl-algorithm-diff) === Homepage: * <http://search.cpan.org/~tyemq/Algorithm-Diff-1.1903/lib/Algorithm/Diff.pmSpack package: * [perl-algorithm-diff/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/perl-algorithm-diff/package.py) Versions: 1.1903 Build Dependencies: [perl](#perl) Link Dependencies: [perl](#perl) Run Dependencies: [perl](#perl) Description: Compute 'intelligent' differences between two files / lists --- perl-app-cmd[¶](#perl-app-cmd) === Homepage: * <http://search.cpan.org/~rjbs/App-Cmd/lib/App/Cmd.pmSpack package: * [perl-app-cmd/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/perl-app-cmd/package.py) Versions: 0.331 Build Dependencies: [perl](#perl) Link Dependencies: [perl](#perl) Run Dependencies: [perl](#perl) Description: Write command line apps with less suffering --- perl-array-utils[¶](#perl-array-utils) === Homepage: * <http://search.cpan.org/~zmij/Array-Utils/Utils.pmSpack package: * [perl-array-utils/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/perl-array-utils/package.py) Versions: 0.5 Build Dependencies: [perl](#perl) Link Dependencies: [perl](#perl) Run Dependencies: [perl](#perl) Description: Small utils for array manipulation --- perl-b-hooks-endofscope[¶](#perl-b-hooks-endofscope) === Homepage: * <http://search.cpan.org/~ether/B-Hooks-EndOfScope-0.21/lib/B/Hooks/EndOfScope.pmSpack package: * 
[perl-b-hooks-endofscope/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/perl-b-hooks-endofscope/package.py) Versions: 0.21 Build Dependencies: [perl](#perl), [perl-sub-exporter-progressive](#perl-sub-exporter-progressive) Link Dependencies: [perl](#perl) Run Dependencies: [perl](#perl), [perl-sub-exporter-progressive](#perl-sub-exporter-progressive) Description: Execute code after a scope finished compilation. --- perl-bio-perl[¶](#perl-bio-perl) === Homepage: * <http://search.cpan.org/~cjfields/BioPerl-1.007002/Bio/Perl.pmSpack package: * [perl-bio-perl/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/perl-bio-perl/package.py) Versions: 1.007002 Build Dependencies: [perl](#perl), [perl-test-most](#perl-test-most), [perl-data-stag](#perl-data-stag), [perl-uri-escape](#perl-uri-escape), [perl-module-build](#perl-module-build), [perl-io-string](#perl-io-string) Link Dependencies: [perl](#perl) Run Dependencies: [perl-data-stag](#perl-data-stag), [perl](#perl), [perl-test-most](#perl-test-most), [perl-io-string](#perl-io-string), [perl-uri-escape](#perl-uri-escape) Description: Functional access to BioPerl for people who don't know objects --- perl-bit-vector[¶](#perl-bit-vector) === Homepage: * <http://search.cpan.org/~stbey/Bit-Vector-7.4/Vector.podSpack package: * [perl-bit-vector/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/perl-bit-vector/package.py) Versions: 7.4 Build Dependencies: [perl](#perl), [perl-carp-clan](#perl-carp-clan) Link Dependencies: [perl](#perl) Run Dependencies: [perl](#perl), [perl-carp-clan](#perl-carp-clan) Description: Efficient bit vector, set of integers and "big int" math library --- perl-cairo[¶](#perl-cairo) === Homepage: * <http://search.cpan.org/~xaoc/Cairo/lib/Cairo.pmSpack package: * [perl-cairo/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/perl-cairo/package.py) 
Versions: 1.106 Build Dependencies: [perl](#perl), [perl-extutils-depends](#perl-extutils-depends), [perl-extutils-pkgconfig](#perl-extutils-pkgconfig), [cairo](#cairo) Link Dependencies: [perl](#perl), [perl-extutils-depends](#perl-extutils-depends), [perl-extutils-pkgconfig](#perl-extutils-pkgconfig), [cairo](#cairo) Run Dependencies: [perl](#perl) Description: Perl interface to the cairo 2d vector graphics library --- perl-capture-tiny[¶](#perl-capture-tiny) === Homepage: * <http://search.cpan.org/~dagolden/Capture-Tiny-0.46/lib/Capture/Tiny.pmSpack package: * [perl-capture-tiny/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/perl-capture-tiny/package.py) Versions: 0.46 Build Dependencies: [perl](#perl) Link Dependencies: [perl](#perl) Run Dependencies: [perl](#perl) Description: Capture STDOUT and STDERR from Perl, XS or external programs --- perl-carp-clan[¶](#perl-carp-clan) === Homepage: * <http://search.cpan.org/~kentnl/Carp-Clan-6.06/lib/Carp/Clan.podSpack package: * [perl-carp-clan/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/perl-carp-clan/package.py) Versions: 6.06 Build Dependencies: [perl](#perl), [perl-test-exception](#perl-test-exception), [perl-sub-uplevel](#perl-sub-uplevel) Link Dependencies: [perl](#perl) Run Dependencies: [perl](#perl), [perl-test-exception](#perl-test-exception), [perl-sub-uplevel](#perl-sub-uplevel) Description: Report errors from perspective of caller of a "clan" of modules --- perl-cgi[¶](#perl-cgi) === Homepage: * <https://metacpan.org/pod/CGISpack package: * [perl-cgi/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/perl-cgi/package.py) Versions: 4.40, 4.39, 4.38, 4.37 Build Dependencies: [perl](#perl), [perl-html-parser](#perl-html-parser) Link Dependencies: [perl](#perl) Run Dependencies: [perl](#perl), [perl-html-parser](#perl-html-parser) Description: CGI - Handle Common Gateway Interface 
requests and responses CGI was included in the Perl distribution from 5.4 to 5.20 but has since been removed. --- perl-class-data-inheritable[¶](#perl-class-data-inheritable) === Homepage: * <http://search.cpan.org/~tmtm/Class-Data-Inheritable-0.08/lib/Class/Data/Inheritable.pmSpack package: * [perl-class-data-inheritable/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/perl-class-data-inheritable/package.py) Versions: 0.08 Build Dependencies: [perl](#perl) Link Dependencies: [perl](#perl) Run Dependencies: [perl](#perl) Description: For creating accessor/mutators to class data. --- perl-class-inspector[¶](#perl-class-inspector) === Homepage: * <http://search.cpan.org/~plicease/Class-Inspector-1.32/lib/Class/Inspector.pmSpack package: * [perl-class-inspector/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/perl-class-inspector/package.py) Versions: 1.32 Build Dependencies: [perl](#perl) Link Dependencies: [perl](#perl) Run Dependencies: [perl](#perl) Description: Get information about a class and its structure --- perl-class-load[¶](#perl-class-load) === Homepage: * <http://search.cpan.org/~ether/Class-Load-0.24/lib/Class/Load.pmSpack package: * [perl-class-load/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/perl-class-load/package.py) Versions: 0.24 Build Dependencies: [perl](#perl) Link Dependencies: [perl](#perl) Run Dependencies: [perl](#perl) Description: A working (require "Class::Name") and more --- perl-class-load-xs[¶](#perl-class-load-xs) === Homepage: * <http://search.cpan.org/~ether/Class-Load-XS-0.10/lib/Class/Load/XS.pmSpack package: * [perl-class-load-xs/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/perl-class-load-xs/package.py) Versions: 0.10 Build Dependencies: [perl](#perl), [perl-class-load](#perl-class-load) Link Dependencies: [perl](#perl) Run Dependencies: [perl](#perl), 
[perl-class-load](#perl-class-load) Description: This module provides an XS implementation for portions of Class::Load. --- perl-compress-raw-bzip2[¶](#perl-compress-raw-bzip2) === Homepage: * <http://search.cpan.org/~pmqs/Compress-Raw-Bzip2-2.081/lib/Compress/Raw/Bzip2.pm> Spack package: * [perl-compress-raw-bzip2/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/perl-compress-raw-bzip2/package.py) Versions: 2.081 Build Dependencies: [perl](#perl), [bzip2](#bzip2), [perl-extutils-makemaker](#perl-extutils-makemaker) Link Dependencies: [perl](#perl), [bzip2](#bzip2) Run Dependencies: [perl](#perl) Description: A low-level interface to the bzip2 compression library. --- perl-compress-raw-zlib[¶](#perl-compress-raw-zlib) === Homepage: * <http://search.cpan.org/~pmqs/Compress-Raw-Zlib-2.081/lib/Compress/Raw/Zlib.pm> Spack package: * [perl-compress-raw-zlib/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/perl-compress-raw-zlib/package.py) Versions: 2.081 Build Dependencies: [perl](#perl), [zlib](#zlib), [perl-extutils-makemaker](#perl-extutils-makemaker) Link Dependencies: [perl](#perl), [zlib](#zlib) Run Dependencies: [perl](#perl) Description: A low-level interface to the zlib compression library --- perl-contextual-return[¶](#perl-contextual-return) === Homepage: * <http://search.cpan.org/~dconway/Contextual-Return/lib/Contextual/Return.pm> Spack package: * [perl-contextual-return/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/perl-contextual-return/package.py) Versions: 0.004014 Build Dependencies: [perl](#perl), [perl-want](#perl-want) Link Dependencies: [perl](#perl), [perl-want](#perl-want) Run Dependencies: [perl](#perl) Description: Create context-sensitive return values --- perl-cpan-meta-check[¶](#perl-cpan-meta-check) === Homepage: * <http://search.cpan.org/~leont/CPAN-Meta-Check-0.014/lib/CPAN/Meta/Check.pm> Spack package: *
[perl-cpan-meta-check/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/perl-cpan-meta-check/package.py) Versions: 0.014 Build Dependencies: [perl](#perl), [perl-test-deep](#perl-test-deep) Link Dependencies: [perl](#perl) Run Dependencies: [perl](#perl), [perl-test-deep](#perl-test-deep) Description: This module verifies whether requirements described in a CPAN::Meta object are present. --- perl-data-optlist[¶](#perl-data-optlist) === Homepage: * <http://search.cpan.org/~rjbs/Data-OptList-0.110/lib/Data/OptList.pm> Spack package: * [perl-data-optlist/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/perl-data-optlist/package.py) Versions: 0.110 Build Dependencies: [perl](#perl), [perl-sub-install](#perl-sub-install) Link Dependencies: [perl](#perl) Run Dependencies: [perl](#perl), [perl-sub-install](#perl-sub-install) Description: Parse and validate simple name/value option pairs --- perl-data-stag[¶](#perl-data-stag) === Homepage: * <http://search.cpan.org/~cmungall/Data-Stag-0.14/Data/Stag.pm> Spack package: * [perl-data-stag/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/perl-data-stag/package.py) Versions: 0.14 Build Dependencies: [perl](#perl), [perl-io-string](#perl-io-string) Link Dependencies: [perl](#perl) Run Dependencies: [perl](#perl), [perl-io-string](#perl-io-string) Description: Structured Tags datastructures --- perl-dbd-mysql[¶](#perl-dbd-mysql) === Homepage: * <http://search.cpan.org/~michielb/DBD-mysql-4.043/lib/DBD/mysql.pm> Spack package: * [perl-dbd-mysql/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/perl-dbd-mysql/package.py) Versions: 4.043 Build Dependencies: [perl](#perl), [perl-test-deep](#perl-test-deep), [mariadb](#mariadb), [perl-dbi](#perl-dbi) Link Dependencies: [perl](#perl), [mariadb](#mariadb) Run Dependencies: [perl](#perl), [perl-test-deep](#perl-test-deep),
[perl-dbi](#perl-dbi) Description: MySQL driver for the Perl5 Database Interface (DBI) --- perl-dbd-sqlite[¶](#perl-dbd-sqlite) === Homepage: * <https://metacpan.org/pod/DBD::SQLiteSpack package: * [perl-dbd-sqlite/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/perl-dbd-sqlite/package.py) Versions: 1.59_01, 1.58, 1.57_01, 1.56 Build Dependencies: [perl](#perl), [perl-dbi](#perl-dbi) Link Dependencies: [perl](#perl) Run Dependencies: [perl](#perl), [perl-dbi](#perl-dbi) Description: DBD::SQLite - Self-contained RDBMS in a DBI Driver --- perl-dbfile[¶](#perl-dbfile) === Homepage: * <https://metacpan.org/pod/DB_FileSpack package: * [perl-dbfile/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/perl-dbfile/package.py) Versions: 1.840 Build Dependencies: [perl](#perl), [perl-extutils-makemaker](#perl-extutils-makemaker) Link Dependencies: [perl](#perl) Run Dependencies: [perl](#perl) Description: DB_File is a module which allows Perl programs to make use of the facilities provided by Berkeley DB version 1.x (if you have a newer version of DB, see "Using DB_File with Berkeley DB version 2 or greater"). It is assumed that you have a copy of the Berkeley DB manual pages at hand when reading this documentation. The interface defined here mirrors the Berkeley DB interface closely. --- perl-dbi[¶](#perl-dbi) === Homepage: * <https://dbi.perl.org/Spack package: * [perl-dbi/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/perl-dbi/package.py) Versions: 1.636 Build Dependencies: [perl](#perl) Link Dependencies: [perl](#perl) Run Dependencies: [perl](#perl) Description: The DBI is the standard database interface module for Perl. It defines a set of methods, variables and conventions that provide a consistent database interface independent of the actual database being used. 
---
perl-devel-cycle
===
Homepage: <http://search.cpan.org/~lds/Devel-Cycle-1.12/lib/Devel/Cycle.pm>
Spack package: [perl-devel-cycle/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/perl-devel-cycle/package.py)
Versions: 1.12
Build Dependencies: [perl](#perl)
Link Dependencies: [perl](#perl)
Run Dependencies: [perl](#perl)
Description: Find memory cycles in objects

---
perl-devel-globaldestruction
===
Homepage: <http://search.cpan.org/~haarg/Devel-GlobalDestruction-0.14/lib/Devel/GlobalDestruction.pm>
Spack package: [perl-devel-globaldestruction/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/perl-devel-globaldestruction/package.py)
Versions: 0.14
Build Dependencies: [perl](#perl)
Link Dependencies: [perl](#perl)
Run Dependencies: [perl](#perl)
Description: Makes Perl's global destruction less tricky to deal with

---
perl-devel-overloadinfo
===
Homepage: <http://search.cpan.org/~ilmari/Devel-OverloadInfo-0.004/lib/Devel/OverloadInfo.pm>
Spack package: [perl-devel-overloadinfo/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/perl-devel-overloadinfo/package.py)
Versions: 0.005, 0.004
Build Dependencies: [perl](#perl), [perl-mro-compat](#perl-mro-compat)
Link Dependencies: [perl](#perl)
Run Dependencies: [perl](#perl), [perl-mro-compat](#perl-mro-compat)
Description: Returns information about overloaded operators for a given class

---
perl-devel-stacktrace
===
Homepage: <http://search.cpan.org/~drolsky/Devel-StackTrace-2.02/lib/Devel/StackTrace.pm>
Spack package: [perl-devel-stacktrace/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/perl-devel-stacktrace/package.py)
Versions: 2.02
Build Dependencies: [perl](#perl)
Link Dependencies: [perl](#perl)
Run Dependencies: [perl](#perl)
Description: An object representing a stack trace.

---
perl-digest-md5
===
Homepage: <http://search.cpan.org/dist/Digest-MD5/MD5.pm>
Spack package: [perl-digest-md5/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/perl-digest-md5/package.py)
Versions: 2.55
Build Dependencies: [perl](#perl)
Link Dependencies: [perl](#perl)
Run Dependencies: [perl](#perl)
Description: Perl interface to the MD5 Algorithm

---
perl-dist-checkconflicts
===
Homepage: <http://search.cpan.org/~doy/Dist-CheckConflicts-0.11/lib/Dist/CheckConflicts.pm>
Spack package: [perl-dist-checkconflicts/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/perl-dist-checkconflicts/package.py)
Versions: 0.11
Build Dependencies: [perl](#perl)
Link Dependencies: [perl](#perl)
Run Dependencies: [perl](#perl)
Description: Declare version conflicts for your dist

---
perl-encode-locale
===
Homepage: <http://search.cpan.org/~gaas/Encode-Locale-1.05/lib/Encode/Locale.pm>
Spack package: [perl-encode-locale/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/perl-encode-locale/package.py)
Versions: 1.05
Build Dependencies: [perl](#perl)
Link Dependencies: [perl](#perl)
Run Dependencies: [perl](#perl)
Description: Determine the locale encoding

---
perl-eval-closure
===
Homepage: <http://search.cpan.org/~doy/Eval-Closure-0.14/lib/Eval/Closure.pm>
Spack package: [perl-eval-closure/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/perl-eval-closure/package.py)
Versions: 0.14
Build Dependencies: [perl](#perl)
Link Dependencies: [perl](#perl)
Run Dependencies: [perl](#perl)
Description: Safely and cleanly create closures via string eval

---
perl-exception-class
===
Homepage: <http://search.cpan.org/~drolsky/Exception-Class-1.43/lib/Exception/Class.pm>
Spack package: [perl-exception-class/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/perl-exception-class/package.py)
Versions: 1.43
Build Dependencies: [perl](#perl), [perl-class-data-inheritable](#perl-class-data-inheritable), [perl-devel-stacktrace](#perl-devel-stacktrace)
Link Dependencies: [perl](#perl)
Run Dependencies: [perl](#perl), [perl-class-data-inheritable](#perl-class-data-inheritable), [perl-devel-stacktrace](#perl-devel-stacktrace)
Description: A module that allows you to declare real exception classes in Perl

---
perl-exporter-tiny
===
Homepage: <http://search.cpan.org/~tobyink/Exporter-Tiny/lib/Exporter/Tiny.pm>
Spack package: [perl-exporter-tiny/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/perl-exporter-tiny/package.py)
Versions: 1.000000
Build Dependencies: [perl](#perl)
Link Dependencies: [perl](#perl)
Run Dependencies: [perl](#perl)
Description: An exporter with the features of Sub::Exporter but only core dependencies

---
perl-extutils-depends
===
Homepage: <http://search.cpan.org/~xaoc/ExtUtils-Depends/lib/ExtUtils/Depends.pm>
Spack package: [perl-extutils-depends/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/perl-extutils-depends/package.py)
Versions: 0.405
Build Dependencies: [perl](#perl)
Link Dependencies: [perl](#perl)
Run Dependencies: [perl](#perl)
Description: Easily build XS extensions that depend on XS extensions

---
perl-extutils-makemaker
===
Homepage: <https://github.com/Perl-Toolchain-Gang/ExtUtils-MakeMaker>
Spack package: [perl-extutils-makemaker/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/perl-extutils-makemaker/package.py)
Versions: 7.24
Build Dependencies: [perl](#perl)
Link Dependencies:
[perl](#perl)
Run Dependencies: [perl](#perl)
Description: ExtUtils::MakeMaker - Create a module Makefile. This utility is designed to write a Makefile for an extension module from a Makefile.PL. It is based on the Makefile.SH model provided by <NAME> and the perl5-porters.

---
perl-extutils-pkgconfig
===
Homepage: <http://search.cpan.org/~xaoc/ExtUtils-PkgConfig-1.16/lib/ExtUtils/PkgConfig.pm>
Spack package: [perl-extutils-pkgconfig/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/perl-extutils-pkgconfig/package.py)
Versions: 1.16
Build Dependencies: [perl](#perl), pkgconfig
Link Dependencies: [perl](#perl)
Run Dependencies: [perl](#perl), pkgconfig
Description: simplistic interface to pkg-config

---
perl-file-copy-recursive
===
Homepage: <http://search.cpan.org/~dmuey/File-Copy-Recursive-0.38/Recursive.pm>
Spack package: [perl-file-copy-recursive/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/perl-file-copy-recursive/package.py)
Versions: 0.40, 0.38
Build Dependencies: [perl](#perl)
Link Dependencies: [perl](#perl)
Run Dependencies: [perl](#perl)
Description: Perl extension for recursively copying files and directories

---
perl-file-listing
===
Homepage: <http://search.cpan.org/~gaas/File-Listing-6.04/lib/File/Listing.pm>
Spack package: [perl-file-listing/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/perl-file-listing/package.py)
Versions: 6.04
Build Dependencies: [perl](#perl), [perl-http-date](#perl-http-date)
Link Dependencies: [perl](#perl)
Run Dependencies: [perl](#perl), [perl-http-date](#perl-http-date)
Description: Parse directory listing

---
perl-file-pushd
===
Homepage: <http://search.cpan.org/~dagolden/File-pushd-1.014/lib/File/pushd.pm>
Spack package: [perl-file-pushd/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/perl-file-pushd/package.py)
Versions: 1.014
Build Dependencies: [perl](#perl)
Link Dependencies: [perl](#perl)
Run Dependencies: [perl](#perl)
Description: Change directory temporarily for a limited scope

---
perl-file-sharedir-install
===
Homepage: <http://search.cpan.org/~ether/File-ShareDir-Install-0.11/lib/File/ShareDir/Install.pm>
Spack package: [perl-file-sharedir-install/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/perl-file-sharedir-install/package.py)
Versions: 0.11
Build Dependencies: [perl](#perl), [perl-module-build](#perl-module-build)
Link Dependencies: [perl](#perl)
Run Dependencies: [perl](#perl)
Description: Install shared files

---
perl-file-slurp-tiny
===
Homepage: <http://search.cpan.org/~leont/File-Slurp-Tiny-0.004/lib/File/Slurp/Tiny.pm>
Spack package: [perl-file-slurp-tiny/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/perl-file-slurp-tiny/package.py)
Versions: 0.004
Build Dependencies: [perl](#perl)
Link Dependencies: [perl](#perl)
Run Dependencies: [perl](#perl)
Description: A simple, sane and efficient file slurper

---
perl-file-slurper
===
Homepage: <http://search.cpan.org/~leont/File-Slurper/lib/File/Slurper.pm>
Spack package: [perl-file-slurper/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/perl-file-slurper/package.py)
Versions: 0.011
Build Dependencies: [perl](#perl)
Link Dependencies: [perl](#perl)
Run Dependencies: [perl](#perl)
Description: A simple, sane and efficient module to slurp a file

---
perl-file-which
===
Homepage: <http://cpansearch.perl.org/src/PLICEASE/File-Which-1.22/lib/File/Which.pm>
Spack package: [perl-file-which/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/perl-file-which/package.py)
Versions: 1.22
Build Dependencies: [perl](#perl)
Link Dependencies: [perl](#perl)
Run Dependencies: [perl](#perl)
Description: Perl implementation of the which utility as an API

---
perl-font-ttf
===
Homepage: <http://search.cpan.org/~bhallissy/Font-TTF-1.06/lib/Font/TTF.pm>
Spack package: [perl-font-ttf/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/perl-font-ttf/package.py)
Versions: 1.06
Build Dependencies: [perl](#perl)
Link Dependencies: [perl](#perl)
Run Dependencies: [perl](#perl)
Description: Perl module for TrueType Font hacking

---
perl-gd
===
Homepage: <http://search.cpan.org/~lds/GD-2.53/GD.pm>
Spack package: [perl-gd/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/perl-gd/package.py)
Versions: 2.53
Build Dependencies: [perl](#perl), [libgd](#libgd), [perl-module-build](#perl-module-build), [perl-extutils-pkgconfig](#perl-extutils-pkgconfig), [perl-extutils-makemaker](#perl-extutils-makemaker)
Link Dependencies: [perl](#perl), [libgd](#libgd)
Run Dependencies: [perl](#perl), [perl-extutils-pkgconfig](#perl-extutils-pkgconfig), [perl-extutils-makemaker](#perl-extutils-makemaker)
Description: Interface to Gd Graphics Library

---
perl-gd-graph
===
Homepage: <http://search.cpan.org/~bwarfield/GDGraph/Graph.pm>
Spack package: [perl-gd-graph/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/perl-gd-graph/package.py)
Versions: 1.4308
Build Dependencies: [perl](#perl), [perl-gd-text](#perl-gd-text), [perl-test-exception](#perl-test-exception), [perl-capture-tiny](#perl-capture-tiny), [perl-gd](#perl-gd)
Link Dependencies: [perl](#perl)
Run Dependencies: [perl](#perl), [perl-gd-text](#perl-gd-text), [perl-test-exception](#perl-test-exception),
[perl-capture-tiny](#perl-capture-tiny), [perl-gd](#perl-gd)
Description: Graph Plotting Module for Perl 5

---
perl-gd-text
===
Homepage: <http://search.cpan.org/~mverb/GDTextUtil-0.86/Text.pm>
Spack package: [perl-gd-text/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/perl-gd-text/package.py)
Versions: 0.86
Build Dependencies: [perl](#perl), [perl-gd](#perl-gd)
Link Dependencies: [perl](#perl)
Run Dependencies: [perl](#perl), [perl-gd](#perl-gd)
Description: Text utilities for use with GD

---
perl-gdgraph-histogram
===
Homepage: <https://metacpan.org/pod/GD::Graph::histogram>
Spack package: [perl-gdgraph-histogram/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/perl-gdgraph-histogram/package.py)
Versions: 1.1
Build Dependencies: [perl](#perl)
Link Dependencies: [perl](#perl)
Run Dependencies: [perl](#perl)
Description: GD::Graph::histogram extends the GD::Graph module to create histograms. The module allows creation of count or percentage histograms.
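
Every entry in this index lists its Build, Link, and Run dependencies separately; in a Spack recipe these correspond to the `type` argument of `depends_on`. The following is a hypothetical sketch loosely modeled on the perl-gd-text entry above, not the actual package.py (real recipes also carry checksums and download URLs):

```python
# Hypothetical Spack recipe sketch; illustrates how the Build/Link/Run
# dependency columns in this index map onto depends_on(..., type=...).
from spack.package import *


class PerlGdText(PerlPackage):
    """Text utilities for use with GD."""

    homepage = "http://search.cpan.org/~mverb/GDTextUtil-0.86/Text.pm"

    version("0.86")

    # A package listed under both Build and Run Dependencies:
    depends_on("perl-gd", type=("build", "run"))
    # perl appears under Build, Link, and Run Dependencies:
    depends_on("perl", type=("build", "link", "run"))
```

The dependency type controls when the dependency must be present: at build time, in the link line of the installed package, or in its runtime environment.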

---
perl-graph
===
Homepage: <http://search.cpan.org/~jhi/Graph/lib/Graph.pod>
Spack package: [perl-graph/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/perl-graph/package.py)
Versions: 0.9704
Build Dependencies: [perl](#perl)
Link Dependencies: [perl](#perl)
Run Dependencies: [perl](#perl)
Description: Graph data structures and algorithms

---
perl-graph-readwrite
===
Homepage: <http://search.cpan.org/~neilb/Graph-ReadWrite/lib/Graph/Writer/Dot.pm>
Spack package: [perl-graph-readwrite/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/perl-graph-readwrite/package.py)
Versions: 2.09
Build Dependencies: [perl](#perl)
Link Dependencies: [perl](#perl)
Run Dependencies: [perl](#perl)
Description: Write out directed graph in Dot format

---
perl-html-parser
===
Homepage: <http://search.cpan.org/~gaas/HTML-Parser-3.72/Parser.pm>
Spack package: [perl-html-parser/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/perl-html-parser/package.py)
Versions: 3.72
Build Dependencies: [perl](#perl), [perl-html-tagset](#perl-html-tagset)
Link Dependencies: [perl](#perl)
Run Dependencies: [perl](#perl), [perl-html-tagset](#perl-html-tagset)
Description: HTML parser class

---
perl-html-tagset
===
Homepage: <http://search.cpan.org/~petdance/HTML-Tagset-3.20/Tagset.pm>
Spack package: [perl-html-tagset/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/perl-html-tagset/package.py)
Versions: 3.20
Build Dependencies: [perl](#perl)
Link Dependencies: [perl](#perl)
Run Dependencies: [perl](#perl)
Description: Data tables useful in parsing HTML

---
perl-http-cookies
===
Homepage: <http://search.cpan.org/~oalders/HTTP-Cookies-6.04/lib/HTTP/Cookies.pm>
Spack package: [perl-http-cookies/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/perl-http-cookies/package.py)
Versions: 6.04
Build Dependencies: [perl](#perl), [perl-http-message](#perl-http-message), [perl-uri](#perl-uri)
Link Dependencies: [perl](#perl)
Run Dependencies: [perl](#perl), [perl-http-message](#perl-http-message), [perl-uri](#perl-uri)
Description: HTTP cookie jars

---
perl-http-daemon
===
Homepage: <http://search.cpan.org/~gaas/HTTP-Daemon-6.01/lib/HTTP/Daemon.pm>
Spack package: [perl-http-daemon/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/perl-http-daemon/package.py)
Versions: 6.01
Build Dependencies: [perl](#perl), [perl-lwp-mediatypes](#perl-lwp-mediatypes), [perl-http-message](#perl-http-message)
Link Dependencies: [perl](#perl)
Run Dependencies: [perl](#perl), [perl-lwp-mediatypes](#perl-lwp-mediatypes), [perl-http-message](#perl-http-message)
Description: A simple http server class

---
perl-http-date
===
Homepage: <http://search.cpan.org/~gaas/HTTP-Date-6.02/lib/HTTP/Date.pm>
Spack package: [perl-http-date/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/perl-http-date/package.py)
Versions: 6.02
Build Dependencies: [perl](#perl)
Link Dependencies: [perl](#perl)
Run Dependencies: [perl](#perl)
Description: Date conversion routines

---
perl-http-message
===
Homepage: <http://search.cpan.org/~oalders/HTTP-Message-6.13/lib/HTTP/Status.pm>
Spack package: [perl-http-message/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/perl-http-message/package.py)
Versions: 6.13
Build Dependencies: [perl-http-date](#perl-http-date), [perl](#perl), [perl-try-tiny](#perl-try-tiny), [perl-io-html](#perl-io-html), [perl-lwp-mediatypes](#perl-lwp-mediatypes), [perl-encode-locale](#perl-encode-locale), [perl-uri](#perl-uri)
Link Dependencies: [perl](#perl)
Run Dependencies: [perl-http-date](#perl-http-date), [perl](#perl), [perl-try-tiny](#perl-try-tiny), [perl-io-html](#perl-io-html), [perl-lwp-mediatypes](#perl-lwp-mediatypes), [perl-encode-locale](#perl-encode-locale), [perl-uri](#perl-uri)
Description: HTTP style message (base class)

---
perl-http-negotiate
===
Homepage: <http://search.cpan.org/~gaas/HTTP-Negotiate-6.01/lib/HTTP/Negotiate.pm>
Spack package: [perl-http-negotiate/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/perl-http-negotiate/package.py)
Versions: 6.01
Build Dependencies: [perl](#perl), [perl-http-message](#perl-http-message)
Link Dependencies: [perl](#perl)
Run Dependencies: [perl](#perl), [perl-http-message](#perl-http-message)
Description: Choose a variant to serve

---
perl-inline
===
Homepage: <http://search.cpan.org/~ingy/Inline-0.80/lib/Inline.pod>
Spack package: [perl-inline/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/perl-inline/package.py)
Versions: 0.80
Build Dependencies: [perl](#perl), [perl-test-warn](#perl-test-warn)
Link Dependencies: [perl](#perl)
Run Dependencies: [perl](#perl), [perl-test-warn](#perl-test-warn)
Description: Write Perl Subroutines in Other Programming Languages

---
perl-inline-c
===
Homepage: <http://search.cpan.org/~tinita/Inline-C-0.78/lib/Inline/C.pod>
Spack package: [perl-inline-c/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/perl-inline-c/package.py)
Versions: 0.78
Build Dependencies: [perl-parse-recdescent](#perl-parse-recdescent), [perl-pegex](#perl-pegex), [perl-file-copy-recursive](#perl-file-copy-recursive), [perl-inline](#perl-inline), [perl](#perl), [perl-yaml-libyaml](#perl-yaml-libyaml)
Link Dependencies: [perl](#perl)
Run Dependencies: [perl-parse-recdescent](#perl-parse-recdescent), [perl-pegex](#perl-pegex),
[perl-file-copy-recursive](#perl-file-copy-recursive), [perl-inline](#perl-inline), [perl](#perl), [perl-yaml-libyaml](#perl-yaml-libyaml)
Description: C Language Support for Inline

---
perl-intervaltree
===
Homepage: <https://metacpan.org/release/Set-IntervalTree>
Spack package: [perl-intervaltree/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/perl-intervaltree/package.py)
Versions: 0.10
Build Dependencies: [perl](#perl), [perl-extutils-makemaker](#perl-extutils-makemaker)
Link Dependencies: [perl](#perl)
Run Dependencies: [perl](#perl)
Description: Set::IntervalTree uses interval trees to store ranges and look them up efficiently by range.

---
perl-io-compress
===
Homepage: <http://search.cpan.org/~pmqs/IO-Compress-2.070/lib/IO/Uncompress/AnyUncompress.pm>
Spack package: [perl-io-compress/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/perl-io-compress/package.py)
Versions: 2.081
Build Dependencies: [perl](#perl)
Link Dependencies: [perl](#perl)
Run Dependencies: [perl](#perl), [perl-compress-raw-zlib](#perl-compress-raw-zlib), [perl-compress-raw-bzip2](#perl-compress-raw-bzip2)
Description: A Perl library for uncompressing gzip, zip, bzip2 or lzop files and buffers.

---
perl-io-html
===
Homepage: <http://search.cpan.org/~cjm/IO-HTML-1.001/lib/IO/HTML.pm>
Spack package: [perl-io-html/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/perl-io-html/package.py)
Versions: 1.001
Build Dependencies: [perl](#perl)
Link Dependencies: [perl](#perl)
Run Dependencies: [perl](#perl)
Description: Open an HTML file with automatic charset detection.
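
The range-based lookup that Set::IntervalTree (perl-intervaltree, above) provides can be illustrated with a tiny stand-in; the names below are made up, and a naive linear scan replaces the actual interval-tree structure, which answers the same overlap queries in logarithmic rather than linear time:

```python
# Illustrates the overlap-query semantics of an interval store such as
# Set::IntervalTree; a linear scan stands in for the real tree.
def find_overlaps(intervals, lo, hi):
    """Return values of all (start, end, value) intervals overlapping [lo, hi)."""
    return [v for s, e, v in intervals if s < hi and e > lo]

genes = [(100, 200, "geneA"), (150, 300, "geneB"), (400, 500, "geneC")]
print(find_overlaps(genes, 180, 420))  # all three intervals overlap [180, 420)
print(find_overlaps(genes, 310, 390))  # nothing overlaps the gap
```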

---
perl-io-sessiondata
===
Homepage: <http://search.cpan.org/~phred/IO-SessionData-1.03/>
Spack package: [perl-io-sessiondata/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/perl-io-sessiondata/package.py)
Versions: 1.03
Build Dependencies: [perl](#perl)
Link Dependencies: [perl](#perl)
Run Dependencies: [perl](#perl)
Description: A wrapper around a single IO::Socket object

---
perl-io-socket-ssl
===
Homepage: <http://search.cpan.org/~sullr/IO-Socket-SSL-2.052/lib/IO/Socket/SSL.pod>
Spack package: [perl-io-socket-ssl/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/perl-io-socket-ssl/package.py)
Versions: 2.052
Build Dependencies: [perl](#perl), [perl-net-ssleay](#perl-net-ssleay)
Link Dependencies: [perl](#perl)
Run Dependencies: [perl](#perl), [perl-net-ssleay](#perl-net-ssleay)
Description: SSL sockets with IO::Socket interface

---
perl-io-string
===
Homepage: <http://search.cpan.org/~gaas/IO-String-1.08/String.pm>
Spack package: [perl-io-string/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/perl-io-string/package.py)
Versions: 1.08
Build Dependencies: [perl](#perl)
Link Dependencies: [perl](#perl)
Run Dependencies: [perl](#perl)
Description: Emulate file interface for in-core strings

---
perl-json
===
Homepage: <http://search.cpan.org/~ishigaki/JSON/lib/JSON.pm>
Spack package: [perl-json/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/perl-json/package.py)
Versions: 2.97001
Build Dependencies: [perl](#perl)
Link Dependencies: [perl](#perl)
Run Dependencies: [perl](#perl)
Description: JSON (JavaScript Object Notation) encoder/decoder

---
perl-libwww-perl
===
Homepage: <https://github.com/libwww-perl/libwww-perl>
Spack package:
[perl-libwww-perl/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/perl-libwww-perl/package.py)
Versions: 6.33
Build Dependencies: [perl](#perl)
Link Dependencies: [perl](#perl)
Run Dependencies: [perl](#perl)
Description: The libwww-perl collection is a set of Perl modules which provides a simple and consistent application programming interface to the World-Wide Web. The main focus of the library is to provide classes and functions that allow you to write WWW clients.

---
perl-list-moreutils
===
Homepage: <http://search.cpan.org/~rehsack/List-MoreUtils/lib/List/MoreUtils.pm>
Spack package: [perl-list-moreutils/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/perl-list-moreutils/package.py)
Versions: 0.428
Build Dependencies: [perl](#perl)
Link Dependencies: [perl](#perl)
Run Dependencies: [perl](#perl)
Description: Provide the stuff missing in List::Util

---
perl-log-log4perl
===
Homepage: <http://search.cpan.org/~mschilli/Log-Log4perl-1.44/lib/Log/Log4perl.pm>
Spack package: [perl-log-log4perl/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/perl-log-log4perl/package.py)
Versions: 1.46
Build Dependencies: [perl](#perl)
Link Dependencies: [perl](#perl)
Run Dependencies: [perl](#perl)
Description: Log4j implementation for Perl

---
perl-lwp
===
Homepage: <http://search.cpan.org/~oalders/libwww-perl-6.29/lib/LWP.pm>
Spack package: [perl-lwp/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/perl-lwp/package.py)
Versions: 6.29
Build Dependencies: [perl](#perl), [perl-http-negotiate](#perl-http-negotiate), [perl-file-listing](#perl-file-listing), [perl-http-message](#perl-http-message), [perl-test-requiresinternet](#perl-test-requiresinternet), [perl-net-http](#perl-net-http), [perl-www-robotrules](#perl-www-robotrules),
[perl-test-fatal](#perl-test-fatal), [perl-html-parser](#perl-html-parser), [perl-http-cookies](#perl-http-cookies), [perl-http-daemon](#perl-http-daemon)
Link Dependencies: [perl](#perl)
Run Dependencies: [perl](#perl), [perl-http-negotiate](#perl-http-negotiate), [perl-file-listing](#perl-file-listing), [perl-http-message](#perl-http-message), [perl-test-requiresinternet](#perl-test-requiresinternet), [perl-net-http](#perl-net-http), [perl-www-robotrules](#perl-www-robotrules), [perl-test-fatal](#perl-test-fatal), [perl-html-parser](#perl-html-parser), [perl-http-cookies](#perl-http-cookies), [perl-http-daemon](#perl-http-daemon)
Description: The World-Wide Web library for Perl

---
perl-lwp-mediatypes
===
Homepage: <http://search.cpan.org/~gaas/LWP-MediaTypes-6.02/lib/LWP/MediaTypes.pm>
Spack package: [perl-lwp-mediatypes/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/perl-lwp-mediatypes/package.py)
Versions: 6.02
Build Dependencies: [perl](#perl)
Link Dependencies: [perl](#perl)
Run Dependencies: [perl](#perl)
Description: Guess media type for a file or a URL

---
perl-lwp-protocol-https
===
Homepage: <http://search.cpan.org/~gaas/LWP-Protocol-https-6.04/lib/LWP/Protocol/https.pm>
Spack package: [perl-lwp-protocol-https/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/perl-lwp-protocol-https/package.py)
Versions: 6.04
Build Dependencies: [perl-io-socket-ssl](#perl-io-socket-ssl), [perl-test-requiresinternet](#perl-test-requiresinternet), [perl-lwp](#perl-lwp), [perl](#perl), [perl-net-http](#perl-net-http), [perl-mozilla-ca](#perl-mozilla-ca)
Link Dependencies: [perl](#perl)
Run Dependencies: [perl-io-socket-ssl](#perl-io-socket-ssl), [perl-test-requiresinternet](#perl-test-requiresinternet), [perl-lwp](#perl-lwp), [perl](#perl), [perl-net-http](#perl-net-http), [perl-mozilla-ca](#perl-mozilla-ca)
Description: Provide https
support for LWP::UserAgent

---
perl-math-cdf
===
Homepage: <http://search.cpan.org/~callahan/Math-CDF-0.1/CDF.pm>
Spack package: [perl-math-cdf/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/perl-math-cdf/package.py)
Versions: 0.1
Build Dependencies: [perl](#perl)
Link Dependencies: [perl](#perl)
Run Dependencies: [perl](#perl)
Description: Generate probabilities and quantiles from several statistical probability functions

---
perl-math-cephes
===
Homepage: <http://search.cpan.org/~shlomif/Math-Cephes/lib/Math/Cephes.pod>
Spack package: [perl-math-cephes/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/perl-math-cephes/package.py)
Versions: 0.5305
Build Dependencies: [perl](#perl)
Link Dependencies: [perl](#perl)
Run Dependencies: [perl](#perl)
Description: This module provides an interface to over 150 functions of the cephes math library of Stephen Moshier.

---
perl-math-matrixreal
===
Homepage: <http://search.cpan.org/~leto/Math-MatrixReal/lib/Math/MatrixReal.pm>
Spack package: [perl-math-matrixreal/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/perl-math-matrixreal/package.py)
Versions: 2.13
Build Dependencies: [perl](#perl), [perl-module-build](#perl-module-build)
Link Dependencies: [perl](#perl)
Run Dependencies: [perl](#perl)
Description: Implements the data type "matrix of real numbers" (and consequently also "vector of real numbers").
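
Math::CDF (above) generates probabilities and quantiles from standard distributions. The same kind of computation can be sketched for the standard normal distribution using the error function; the function names below mirror the usual p/q naming convention and are not part of any of the packages listed here:

```python
# Sketch of probability/quantile computation for the standard normal
# distribution, the kind of routine Math::CDF exposes for many distributions.
import math

def pnorm(x):
    """P(Z <= x) for a standard normal Z, via the error function."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def qnorm(p, tol=1e-12):
    """Quantile function: invert pnorm by bisection on [-10, 10]."""
    lo, hi = -10.0, 10.0
    while hi - lo > tol:
        mid = (lo + hi) / 2.0
        if pnorm(mid) < p:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2.0

print(round(pnorm(1.96), 3))  # 0.975, the familiar two-sided 5% critical value
```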

---
perl-module-build
===
Homepage: <http://search.cpan.org/perldoc/Module::Build>
Spack package: [perl-module-build/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/perl-module-build/package.py)
Versions: 0.4224, 0.4220
Build Dependencies: [perl](#perl)
Link Dependencies: [perl](#perl)
Run Dependencies: [perl](#perl)
Description: Module::Build is a system for building, testing, and installing Perl modules. It is meant to be an alternative to ExtUtils::MakeMaker. Developers may alter the behavior of the module through subclassing in a much more straightforward way than with MakeMaker. It also does not require a make on your system - most of the Module::Build code is pure-perl and written in a very cross-platform way.

---
perl-module-implementation
===
Homepage: <http://search.cpan.org/~drolsky/Module-Implementation/lib/Module/Implementation.pm>
Spack package: [perl-module-implementation/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/perl-module-implementation/package.py)
Versions: 0.09
Build Dependencies: [perl](#perl), [perl-try-tiny](#perl-try-tiny), [perl-module-runtime](#perl-module-runtime), [perl-test-fatal](#perl-test-fatal), [perl-test-requires](#perl-test-requires)
Link Dependencies: [perl](#perl)
Run Dependencies: [perl](#perl), [perl-try-tiny](#perl-try-tiny), [perl-module-runtime](#perl-module-runtime), [perl-test-fatal](#perl-test-fatal), [perl-test-requires](#perl-test-requires)
Description: Loads one of several alternate underlying implementations for a module

---
perl-module-runtime
===
Homepage: <http://search.cpan.org/~zefram/Module-Runtime/lib/Module/Runtime.pm>
Spack package: [perl-module-runtime/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/perl-module-runtime/package.py)
Versions: 0.016
Build Dependencies: [perl](#perl),
[perl-module-build](#perl-module-build)
Link Dependencies: [perl](#perl)
Run Dependencies: [perl](#perl)
Description: Runtime module handling

---
perl-module-runtime-conflicts
===
Homepage: <http://search.cpan.org/~ether/Module-Runtime-Conflicts-0.003/lib/Module/Runtime/Conflicts.pm>
Spack package: [perl-module-runtime-conflicts/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/perl-module-runtime-conflicts/package.py)
Versions: 0.003
Build Dependencies: [perl](#perl)
Link Dependencies: [perl](#perl)
Run Dependencies: [perl](#perl)
Description: Provide information on conflicts for Module::Runtime

---
perl-moose
===
Homepage: <http://search.cpan.org/~ether/Moose-2.2006/lib/Moose.pm>
Spack package: [perl-moose/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/perl-moose/package.py)
Versions: 2.2010, 2.2009, 2.2007, 2.2006
Build Dependencies: [perl-devel-overloadinfo](#perl-devel-overloadinfo), [perl-cpan-meta-check](#perl-cpan-meta-check), [perl-eval-closure](#perl-eval-closure), [perl-devel-stacktrace](#perl-devel-stacktrace), [perl](#perl), [perl-test-cleannamespaces](#perl-test-cleannamespaces), [perl-sub-name](#perl-sub-name), [perl-package-deprecationmanager](#perl-package-deprecationmanager), [perl-class-load-xs](#perl-class-load-xs), [perl-module-runtime-conflicts](#perl-module-runtime-conflicts), [perl-devel-globaldestruction](#perl-devel-globaldestruction), [perl-package-stash-xs](#perl-package-stash-xs)
Link Dependencies: [perl](#perl)
Run Dependencies: [perl-devel-overloadinfo](#perl-devel-overloadinfo), [perl-cpan-meta-check](#perl-cpan-meta-check), [perl-eval-closure](#perl-eval-closure), [perl-devel-stacktrace](#perl-devel-stacktrace), [perl](#perl), [perl-test-cleannamespaces](#perl-test-cleannamespaces), [perl-sub-name](#perl-sub-name), [perl-package-deprecationmanager](#perl-package-deprecationmanager), [perl-class-load-xs](#perl-class-load-xs), [perl-module-runtime-conflicts](#perl-module-runtime-conflicts), [perl-devel-globaldestruction](#perl-devel-globaldestruction), [perl-package-stash-xs](#perl-package-stash-xs)
Description: A postmodern object system for Perl 5

---
perl-mozilla-ca
===
Homepage: <http://search.cpan.org/~abh/Mozilla-CA-20160104/lib/Mozilla/CA.pm>
Spack package: [perl-mozilla-ca/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/perl-mozilla-ca/package.py)
Versions: 20160104
Build Dependencies: [perl](#perl)
Link Dependencies: [perl](#perl)
Run Dependencies: [perl](#perl)
Description: Mozilla's CA cert bundle in PEM format

---
perl-mro-compat
===
Homepage: <http://search.cpan.org/~haarg/MRO-Compat-0.13/lib/MRO/Compat.pm>
Spack package: [perl-mro-compat/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/perl-mro-compat/package.py)
Versions: 0.13
Build Dependencies: [perl](#perl)
Link Dependencies: [perl](#perl)
Run Dependencies: [perl](#perl)
Description: Provides several utilities for dealing with method resolution order.

---
perl-namespace-clean
===
Homepage: <http://search.cpan.org/~ribasushi/namespace-clean-0.27/lib/namespace/clean.pm>
Spack package: [perl-namespace-clean/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/perl-namespace-clean/package.py)
Versions: 0.27
Build Dependencies: [perl](#perl), [perl-b-hooks-endofscope](#perl-b-hooks-endofscope)
Link Dependencies: [perl](#perl)
Run Dependencies: [perl](#perl), [perl-b-hooks-endofscope](#perl-b-hooks-endofscope)
Description: Keep imports and functions out of your namespace.
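
MRO::Compat (perl-mro-compat, above) deals with method resolution order, i.e. the order in which a diamond-shaped class hierarchy is searched for a method. Python's own MRO is a C3 linearization, so the effect can be demonstrated directly; the class names below are made up for illustration:

```python
# C3 method resolution order in a diamond hierarchy: both siblings are
# searched before the shared base, whereas naive depth-first search
# would reach A immediately after B and miss C's override.
class A:
    def who(self):
        return "A"

class B(A):
    pass

class C(A):
    def who(self):
        return "C"

class D(B, C):
    pass

print([cls.__name__ for cls in D.__mro__])  # ['D', 'B', 'C', 'A', 'object']
print(D().who())  # 'C': C3 finds C.who before falling back to A.who
```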
--- perl-net-http[¶](#perl-net-http) === Homepage: * <http://search.cpan.org/~oalders/Net-HTTP-6.17/lib/Net/HTTP.pmSpack package: * [perl-net-http/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/perl-net-http/package.py) Versions: 6.17 Build Dependencies: [perl](#perl), [perl-uri](#perl-uri) Link Dependencies: [perl](#perl) Run Dependencies: [perl](#perl), [perl-uri](#perl-uri) Description: Low-level HTTP connection (client) --- perl-net-scp-expect[¶](#perl-net-scp-expect) === Homepage: * <http://search.cpan.org/~rybskej/Net-SCP-Expect/Expect.pmSpack package: * [perl-net-scp-expect/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/perl-net-scp-expect/package.py) Versions: 0.16 Build Dependencies: [perl](#perl) Link Dependencies: [perl](#perl) Run Dependencies: [perl](#perl) Description: Wrapper for scp that allows passwords via Expect. --- perl-net-ssleay[¶](#perl-net-ssleay) === Homepage: * <http://search.cpan.org/~mikem/Net-SSLeay-1.82/lib/Net/SSLeay.podSpack package: * [perl-net-ssleay/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/perl-net-ssleay/package.py) Versions: 1.82 Build Dependencies: [perl](#perl), [openssl](#openssl) Link Dependencies: [perl](#perl), [openssl](#openssl) Run Dependencies: [perl](#perl) Description: Perl extension for using OpenSSL --- perl-package-deprecationmanager[¶](#perl-package-deprecationmanager) === Homepage: * <http://search.cpan.org/~drolsky/Package-DeprecationManager-0.17/lib/Package/DeprecationManager.pmSpack package: * [perl-package-deprecationmanager/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/perl-package-deprecationmanager/package.py) Versions: 0.17 Build Dependencies: [perl](#perl) Link Dependencies: [perl](#perl) Run Dependencies: [perl](#perl) Description: Manage deprecation warnings for your distribution --- perl-package-stash[¶](#perl-package-stash) 
=== Homepage: * <http://search.cpan.org/~doy/Package-Stash-0.37/lib/Package/Stash.pm> Spack package: * [perl-package-stash/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/perl-package-stash/package.py) Versions: 0.37 Build Dependencies: [perl](#perl), [perl-test-fatal](#perl-test-fatal), [perl-dist-checkconflicts](#perl-dist-checkconflicts), [perl-module-implementation](#perl-module-implementation), [perl-test-requires](#perl-test-requires) Link Dependencies: [perl](#perl) Run Dependencies: [perl](#perl), [perl-test-fatal](#perl-test-fatal), [perl-dist-checkconflicts](#perl-dist-checkconflicts), [perl-module-implementation](#perl-module-implementation), [perl-test-requires](#perl-test-requires) Description: Routines for manipulating stashes --- perl-package-stash-xs[¶](#perl-package-stash-xs) === Homepage: * <http://search.cpan.org/~doy/Package-Stash-XS-0.28/lib/Package/Stash/XS.pm> Spack package: * [perl-package-stash-xs/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/perl-package-stash-xs/package.py) Versions: 0.28 Build Dependencies: [perl](#perl) Link Dependencies: [perl](#perl) Run Dependencies: [perl](#perl) Description: Faster and more correct implementation of the Package::Stash API --- perl-padwalker[¶](#perl-padwalker) === Homepage: * <http://search.cpan.org/~robin/PadWalker-2.2/PadWalker.pm> Spack package: * [perl-padwalker/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/perl-padwalker/package.py) Versions: 2.2 Build Dependencies: [perl](#perl) Link Dependencies: [perl](#perl) Run Dependencies: [perl](#perl) Description: play with other peoples' lexical variables --- perl-parallel-forkmanager[¶](#perl-parallel-forkmanager) === Homepage: * <http://search.cpan.org/~yanick/Parallel-ForkManager/lib/Parallel/ForkManager.pm> Spack package: * 
[perl-parallel-forkmanager/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/perl-parallel-forkmanager/package.py) Versions: 1.19 Build Dependencies: [perl](#perl) Link Dependencies: [perl](#perl) Run Dependencies: [perl](#perl) Description: A simple parallel processing fork manager --- perl-params-util[¶](#perl-params-util) === Homepage: * <http://search.cpan.org/~adamk/Params-Util-1.07/lib/Params/Util.pm> Spack package: * [perl-params-util/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/perl-params-util/package.py) Versions: 1.07 Build Dependencies: [perl](#perl) Link Dependencies: [perl](#perl) Run Dependencies: [perl](#perl) Description: Simple, compact and correct param-checking functions --- perl-parse-recdescent[¶](#perl-parse-recdescent) === Homepage: * <http://search.cpan.org/~jtbraun/Parse-RecDescent-1.967015/lib/Parse/RecDescent.pm> Spack package: * [perl-parse-recdescent/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/perl-parse-recdescent/package.py) Versions: 1.967015 Build Dependencies: [perl](#perl), [perl-module-build](#perl-module-build) Link Dependencies: [perl](#perl) Run Dependencies: [perl](#perl) Description: Generate Recursive-Descent Parsers --- perl-pdf-api2[¶](#perl-pdf-api2) === Homepage: * <http://search.cpan.org/~ssimms/PDF-API2-2.033/lib/PDF/API2.pm> Spack package: * [perl-pdf-api2/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/perl-pdf-api2/package.py) Versions: 2.033 Build Dependencies: [perl](#perl), [perl-test-memory-cycle](#perl-test-memory-cycle), [perl-test-exception](#perl-test-exception), [perl-font-ttf](#perl-font-ttf) Link Dependencies: [perl](#perl) Run Dependencies: [perl](#perl), [perl-test-memory-cycle](#perl-test-memory-cycle), [perl-test-exception](#perl-test-exception), [perl-font-ttf](#perl-font-ttf) Description: Facilitates the creation and modification of PDF 
files --- perl-pegex[¶](#perl-pegex) === Homepage: * <http://search.cpan.org/~ingy/Pegex-0.64/lib/Pegex.pod> Spack package: * [perl-pegex/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/perl-pegex/package.py) Versions: 0.64 Build Dependencies: [perl](#perl), [perl-yaml-libyaml](#perl-yaml-libyaml), [perl-file-sharedir-install](#perl-file-sharedir-install) Link Dependencies: [perl](#perl) Run Dependencies: [perl](#perl), [perl-yaml-libyaml](#perl-yaml-libyaml), [perl-file-sharedir-install](#perl-file-sharedir-install) Description: Acmeist PEG Parser Framework --- perl-perl4-corelibs[¶](#perl-perl4-corelibs) === Homepage: * <https://metacpan.org/pod/release/ZEFRAM/Perl4-CoreLibs-0.003/lib/Perl4/CoreLibs.pm> Spack package: * [perl-perl4-corelibs/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/perl-perl4-corelibs/package.py) Versions: 0.004, 0.003, 0.002, 0.001, 0.000 Build Dependencies: [perl](#perl), [perl-module-build](#perl-module-build) Link Dependencies: [perl](#perl) Run Dependencies: [perl](#perl) Description: Perl4::CoreLibs - libraries historically supplied with Perl 4 --- perl-perl6-slurp[¶](#perl-perl6-slurp) === Homepage: * <http://search.cpan.org/~dconway/Perl6-Slurp-0.051005/lib/Perl6/Slurp.pm> Spack package: * [perl-perl6-slurp/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/perl-perl6-slurp/package.py) Versions: 0.051005 Build Dependencies: [perl](#perl) Link Dependencies: [perl](#perl) Run Dependencies: [perl](#perl) Description: Perl6::Slurp - Implements the Perl 6 'slurp' built-in --- perl-perlio-gzip[¶](#perl-perlio-gzip) === Homepage: * <http://search.cpan.org/~nwclark/PerlIO-gzip/gzip.pm> Spack package: * [perl-perlio-gzip/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/perl-perlio-gzip/package.py) Versions: 0.20, 0.19 Build Dependencies: [perl](#perl) Link Dependencies: [perl](#perl) Run 
Dependencies: [perl](#perl) Description: Perl extension to provide a PerlIO layer to gzip/gunzip --- perl-perlio-utf8-strict[¶](#perl-perlio-utf8-strict) === Homepage: * <http://search.cpan.org/~leont/PerlIO-utf8_strict/lib/PerlIO/utf8_strict.pm> Spack package: * [perl-perlio-utf8-strict/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/perl-perlio-utf8-strict/package.py) Versions: 0.002 Build Dependencies: [perl](#perl) Link Dependencies: [perl](#perl) Run Dependencies: [perl](#perl) Description: This module provides a fast and correct UTF-8 PerlIO layer. --- perl-scalar-util-numeric[¶](#perl-scalar-util-numeric) === Homepage: * <https://metacpan.org/pod/Scalar::Util::Numeric> Spack package: * [perl-scalar-util-numeric/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/perl-scalar-util-numeric/package.py) Versions: 0.40 Build Dependencies: [perl](#perl) Link Dependencies: [perl](#perl) Run Dependencies: [perl](#perl) Description: This module exports a number of wrappers around perl's builtin grok_number function, which returns the numeric type of its argument, or 0 if it isn't numeric. 
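The grok_number behaviour that Scalar::Util::Numeric wraps — classifying a scalar's numeric type — can be sketched in Python. This is a loose, illustrative analog only; `numeric_type` is a hypothetical helper, not part of the Perl module or its API:

```python
def numeric_type(s):
    """Classify a string as 'int', 'float', or None (non-numeric),
    loosely mirroring what grok_number reports for a scalar."""
    try:
        int(s)
        return "int"
    except ValueError:
        pass
    try:
        float(s)
        return "float"
    except ValueError:
        return None
```

For example, `numeric_type("42")` yields `"int"`, `numeric_type("3.14")` yields `"float"`, and a non-numeric string yields `None`.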
--- perl-soap-lite[¶](#perl-soap-lite) === Homepage: * <http://search.cpan.org/~phred/SOAP-Lite-1.20/lib/SOAP/Lite.pm> Spack package: * [perl-soap-lite/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/perl-soap-lite/package.py) Versions: 1.22 Build Dependencies: [perl-xml-parser](#perl-xml-parser), [perl-lwp-protocol-https](#perl-lwp-protocol-https), [perl](#perl), [perl-xml-parser-lite](#perl-xml-parser-lite), [perl-class-inspector](#perl-class-inspector), [perl-task-weaken](#perl-task-weaken), [perl-test-warn](#perl-test-warn), [perl-io-sessiondata](#perl-io-sessiondata) Link Dependencies: [perl](#perl) Run Dependencies: [perl-xml-parser](#perl-xml-parser), [perl-lwp-protocol-https](#perl-lwp-protocol-https), [perl](#perl), [perl-xml-parser-lite](#perl-xml-parser-lite), [perl-class-inspector](#perl-class-inspector), [perl-task-weaken](#perl-task-weaken), [perl-test-warn](#perl-test-warn), [perl-io-sessiondata](#perl-io-sessiondata) Description: Perl's Web Services Toolkit --- perl-star-fusion[¶](#perl-star-fusion) === Homepage: * <https://github.com/STAR-Fusion/STAR-Fusion> Spack package: * [perl-star-fusion/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/perl-star-fusion/package.py) Versions: master Build Dependencies: [perl](#perl), [star](#star), [perl-dbi](#perl-dbi), [perl-dbfile](#perl-dbfile), [perl-intervaltree](#perl-intervaltree), [perl-uri-escape](#perl-uri-escape) Link Dependencies: [perl](#perl) Run Dependencies: [perl](#perl), [star](#star), [perl-dbi](#perl-dbi), [perl-dbfile](#perl-dbfile), [perl-intervaltree](#perl-intervaltree), [perl-uri-escape](#perl-uri-escape) Description: STAR-Fusion is a component of the Trinity Cancer Transcriptome Analysis Toolkit (CTAT). STAR-Fusion uses the STAR aligner to identify candidate fusion transcripts supported by Illumina reads. 
STAR-Fusion further processes the output generated by the STAR aligner to map junction reads and spanning reads to a reference annotation set. --- perl-statistics-descriptive[¶](#perl-statistics-descriptive) === Homepage: * <http://search.cpan.org/~shlomif/Statistics-Descriptive-3.0612/lib/Statistics/Descriptive.pm> Spack package: * [perl-statistics-descriptive/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/perl-statistics-descriptive/package.py) Versions: 3.0612 Build Dependencies: [perl](#perl) Link Dependencies: [perl](#perl) Run Dependencies: [perl](#perl) Description: Module of basic descriptive statistical functions. --- perl-statistics-pca[¶](#perl-statistics-pca) === Homepage: * <http://search.cpan.org/~dsth/Statistics-PCA/lib/Statistics/PCA.pm> Spack package: * [perl-statistics-pca/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/perl-statistics-pca/package.py) Versions: 0.0.1 Build Dependencies: [perl](#perl), [perl-text-simpletable](#perl-text-simpletable), [perl-module-build](#perl-module-build), [perl-math-matrixreal](#perl-math-matrixreal), [perl-contextual-return](#perl-contextual-return) Link Dependencies: [perl](#perl) Run Dependencies: [perl](#perl), [perl-text-simpletable](#perl-text-simpletable), [perl-math-matrixreal](#perl-math-matrixreal), [perl-contextual-return](#perl-contextual-return) Description: A simple Perl implementation of Principal Component Analysis. 
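The core of Principal Component Analysis, as implemented by modules like Statistics::PCA, is an eigendecomposition of the data's covariance matrix. A minimal Python sketch for the two-variable case (where the 2x2 eigenvalues have a closed form) illustrates the idea; `pca_2d` is a hypothetical name, not part of the Perl module:

```python
import math

def pca_2d(xs, ys):
    """Return the variances along the two principal axes of 2-D data,
    i.e. the eigenvalues of the 2x2 sample covariance matrix."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    a = sum((x - mx) ** 2 for x in xs) / (n - 1)                     # var(x)
    c = sum((y - my) ** 2 for y in ys) / (n - 1)                     # var(y)
    b = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / (n - 1)   # cov(x, y)
    # Eigenvalues of [[a, b], [b, c]] via the quadratic formula.
    d = math.sqrt((a - c) ** 2 + 4 * b * b)
    return (a + c + d) / 2, (a + c - d) / 2
```

For perfectly correlated data (e.g. `ys = 2 * xs`) the second eigenvalue is zero: a single principal component captures all of the variance.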
--- perl-sub-exporter[¶](#perl-sub-exporter) === Homepage: * <http://search.cpan.org/~rjbs/Sub-Exporter-0.987/lib/Sub/Exporter.pm> Spack package: * [perl-sub-exporter/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/perl-sub-exporter/package.py) Versions: 0.987 Build Dependencies: [perl](#perl), [perl-data-optlist](#perl-data-optlist), [perl-params-util](#perl-params-util) Link Dependencies: [perl](#perl) Run Dependencies: [perl](#perl), [perl-data-optlist](#perl-data-optlist), [perl-params-util](#perl-params-util) Description: A sophisticated exporter for custom-built routines --- perl-sub-exporter-progressive[¶](#perl-sub-exporter-progressive) === Homepage: * <http://search.cpan.org/~frew/Sub-Exporter-Progressive-0.001013/lib/Sub/Exporter/Progressive.pm> Spack package: * [perl-sub-exporter-progressive/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/perl-sub-exporter-progressive/package.py) Versions: 0.001013 Build Dependencies: [perl](#perl) Link Dependencies: [perl](#perl) Run Dependencies: [perl](#perl) Description: Progressive Sub::Exporter --- perl-sub-identify[¶](#perl-sub-identify) === Homepage: * <http://search.cpan.org/~rgarcia/Sub-Identify-0.14/lib/Sub/Identify.pm> Spack package: * [perl-sub-identify/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/perl-sub-identify/package.py) Versions: 0.14 Build Dependencies: [perl](#perl) Link Dependencies: [perl](#perl) Run Dependencies: [perl](#perl) Description: Retrieve names of code references --- perl-sub-install[¶](#perl-sub-install) === Homepage: * <http://search.cpan.org/~rjbs/Sub-Install-0.928/lib/Sub/Install.pm> Spack package: * [perl-sub-install/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/perl-sub-install/package.py) Versions: 0.928 Build Dependencies: [perl](#perl) Link Dependencies: [perl](#perl) Run Dependencies: [perl](#perl) Description: Install 
subroutines into packages easily --- perl-sub-name[¶](#perl-sub-name) === Homepage: * <http://search.cpan.org/~ether/Sub-Name-0.21/lib/Sub/Name.pm> Spack package: * [perl-sub-name/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/perl-sub-name/package.py) Versions: 0.21 Build Dependencies: [perl](#perl) Link Dependencies: [perl](#perl) Run Dependencies: [perl](#perl) Description: Name or rename a sub --- perl-sub-uplevel[¶](#perl-sub-uplevel) === Homepage: * <http://search.cpan.org/~dagolden/Sub-Uplevel-0.2800/lib/Sub/Uplevel.pm> Spack package: * [perl-sub-uplevel/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/perl-sub-uplevel/package.py) Versions: 0.2800 Build Dependencies: [perl](#perl) Link Dependencies: [perl](#perl) Run Dependencies: [perl](#perl) Description: apparently run a function in a higher stack frame --- perl-svg[¶](#perl-svg) === Homepage: * <http://search.cpan.org/~manwar/SVG-2.78/lib/SVG.pm> Spack package: * [perl-svg/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/perl-svg/package.py) Versions: 2.78 Build Dependencies: [perl](#perl) Link Dependencies: [perl](#perl) Run Dependencies: [perl](#perl) Description: Perl extension for generating Scalable Vector Graphics (SVG) documents. 
--- perl-swissknife[¶](#perl-swissknife) === Homepage: * <http://swissknife.sourceforge.net> Spack package: * [perl-swissknife/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/perl-swissknife/package.py) Versions: 1.75 Build Dependencies: [perl](#perl), [perl-module-build](#perl-module-build) Link Dependencies: [perl](#perl) Run Dependencies: [perl](#perl) Description: An object-oriented Perl library to handle Swiss-Prot entries --- perl-task-weaken[¶](#perl-task-weaken) === Homepage: * <http://search.cpan.org/~adamk/Task-Weaken-1.04/lib/Task/Weaken.pm> Spack package: * [perl-task-weaken/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/perl-task-weaken/package.py) Versions: 1.04 Build Dependencies: [perl](#perl) Link Dependencies: [perl](#perl) Run Dependencies: [perl](#perl) Description: Ensure that a platform has weaken support --- perl-term-readkey[¶](#perl-term-readkey) === Homepage: * <http://search.cpan.org/perldoc/Term::ReadKey> Spack package: * [perl-term-readkey/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/perl-term-readkey/package.py) Versions: 2.37 Build Dependencies: [perl](#perl) Link Dependencies: [perl](#perl) Run Dependencies: [perl](#perl) Description: Term::ReadKey is a compiled perl module dedicated to providing simple control over terminal driver modes (cbreak, raw, cooked, etc.,) support for non-blocking reads, if the architecture allows, and some generalized handy functions for working with terminals. One of the main goals is to have the functions as portable as possible, so you can just plug in "use Term::ReadKey" on any architecture and have a good likelihood of it working. 
--- perl-test-cleannamespaces[¶](#perl-test-cleannamespaces) === Homepage: * <http://search.cpan.org/~ether/Test-CleanNamespaces-0.22/lib/Test/CleanNamespaces.pm> Spack package: * [perl-test-cleannamespaces/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/perl-test-cleannamespaces/package.py) Versions: 0.22 Build Dependencies: [perl-test-deep](#perl-test-deep), [perl-module-runtime](#perl-module-runtime), [perl-package-stash](#perl-package-stash), [perl-test-needs](#perl-test-needs), [perl-sub-identify](#perl-sub-identify), [perl](#perl), [perl-file-pushd](#perl-file-pushd), [perl-sub-exporter](#perl-sub-exporter), [perl-test-warnings](#perl-test-warnings), [perl-namespace-clean](#perl-namespace-clean) Link Dependencies: [perl](#perl) Run Dependencies: [perl-test-deep](#perl-test-deep), [perl-module-runtime](#perl-module-runtime), [perl-package-stash](#perl-package-stash), [perl-test-needs](#perl-test-needs), [perl-sub-identify](#perl-sub-identify), [perl](#perl), [perl-file-pushd](#perl-file-pushd), [perl-sub-exporter](#perl-sub-exporter), [perl-test-warnings](#perl-test-warnings), [perl-namespace-clean](#perl-namespace-clean) Description: This module lets you check your module's namespaces for imported functions you might have forgotten to remove --- perl-test-deep[¶](#perl-test-deep) === Homepage: * <http://search.cpan.org/~rjbs/Test-Deep-1.127/lib/Test/Deep.pm> Spack package: * [perl-test-deep/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/perl-test-deep/package.py) Versions: 1.127 Build Dependencies: [perl](#perl) Link Dependencies: [perl](#perl) Run Dependencies: [perl](#perl) Description: Extremely flexible deep comparison --- perl-test-differences[¶](#perl-test-differences) === Homepage: * <http://search.cpan.org/~dcantrell/Test-Differences-0.64/lib/Test/Differences.pm> Spack package: * 
[perl-test-differences/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/perl-test-differences/package.py) Versions: 0.64 Build Dependencies: [perl](#perl), [perl-text-diff](#perl-text-diff), [perl-module-build](#perl-module-build), [perl-capture-tiny](#perl-capture-tiny) Link Dependencies: [perl](#perl) Run Dependencies: [perl](#perl), [perl-text-diff](#perl-text-diff), [perl-capture-tiny](#perl-capture-tiny) Description: Test strings and data structures and show differences if not ok --- perl-test-exception[¶](#perl-test-exception) === Homepage: * <http://search.cpan.org/~exodist/Test-Exception-0.43/lib/Test/Exception.pm> Spack package: * [perl-test-exception/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/perl-test-exception/package.py) Versions: 0.43 Build Dependencies: [perl](#perl), [perl-sub-uplevel](#perl-sub-uplevel) Link Dependencies: [perl](#perl) Run Dependencies: [perl](#perl), [perl-sub-uplevel](#perl-sub-uplevel) Description: Test exception-based code --- perl-test-fatal[¶](#perl-test-fatal) === Homepage: * <http://search.cpan.org/~rjbs/Test-Fatal-0.014/lib/Test/Fatal.pm> Spack package: * [perl-test-fatal/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/perl-test-fatal/package.py) Versions: 0.014 Build Dependencies: [perl](#perl), [perl-try-tiny](#perl-try-tiny) Link Dependencies: [perl](#perl) Run Dependencies: [perl](#perl), [perl-try-tiny](#perl-try-tiny) Description: Incredibly simple helpers for testing code with exceptions --- perl-test-memory-cycle[¶](#perl-test-memory-cycle) === Homepage: * <http://search.cpan.org/~petdance/Test-Memory-Cycle-1.06/Cycle.pm> Spack package: * [perl-test-memory-cycle/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/perl-test-memory-cycle/package.py) Versions: 1.06 Build Dependencies: [perl](#perl), [perl-devel-cycle](#perl-devel-cycle), 
[perl-padwalker](#perl-padwalker) Link Dependencies: [perl](#perl) Run Dependencies: [perl](#perl), [perl-devel-cycle](#perl-devel-cycle), [perl-padwalker](#perl-padwalker) Description: Check for memory leaks and circular memory references --- perl-test-most[¶](#perl-test-most) === Homepage: * <http://search.cpan.org/~ovid/Test-Most-0.35/lib/Test/Most.pm> Spack package: * [perl-test-most/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/perl-test-most/package.py) Versions: 0.35 Build Dependencies: [perl-test-deep](#perl-test-deep), [perl-test-exception](#perl-test-exception), [perl](#perl), [perl-exception-class](#perl-exception-class), [perl-test-warn](#perl-test-warn), [perl-test-differences](#perl-test-differences) Link Dependencies: [perl](#perl) Run Dependencies: [perl-test-deep](#perl-test-deep), [perl-test-exception](#perl-test-exception), [perl](#perl), [perl-exception-class](#perl-exception-class), [perl-test-warn](#perl-test-warn), [perl-test-differences](#perl-test-differences) Description: Most commonly needed test functions and features. --- perl-test-needs[¶](#perl-test-needs) === Homepage: * <http://search.cpan.org/~haarg/Test-Needs-0.002005/lib/Test/Needs.pm> Spack package: * [perl-test-needs/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/perl-test-needs/package.py) Versions: 0.002005 Build Dependencies: [perl](#perl) Link Dependencies: [perl](#perl) Run Dependencies: [perl](#perl) Description: Skip tests when modules not available. 
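The pattern Test::Needs implements — skipping tests when an optional module is unavailable rather than failing — has a direct analog in Python's unittest, sketched below as an illustration (the `module_available` helper is hypothetical, not part of any of the Perl modules listed here):

```python
import importlib.util
import unittest

def module_available(name):
    """True if the named module can be imported, without importing it."""
    return importlib.util.find_spec(name) is not None

class TestOptionalFeature(unittest.TestCase):
    # Analogous to Test::Needs: the test is skipped, not failed,
    # when the optional dependency is missing.
    @unittest.skipUnless(module_available("json"), "json not installed")
    def test_roundtrip(self):
        import json
        self.assertEqual(json.loads("[1, 2]"), [1, 2])
```

The same idea underlies Test::RequiresInternet further down, with network reachability in place of module availability.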
--- perl-test-requires[¶](#perl-test-requires) === Homepage: * <http://search.cpan.org/~tokuhirom/Test-Requires-0.10/lib/Test/Requires.pm> Spack package: * [perl-test-requires/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/perl-test-requires/package.py) Versions: 0.10 Build Dependencies: [perl](#perl) Link Dependencies: [perl](#perl) Run Dependencies: [perl](#perl) Description: Checks to see if the module can be loaded. --- perl-test-requiresinternet[¶](#perl-test-requiresinternet) === Homepage: * <http://search.cpan.org/~mallen/Test-RequiresInternet-0.05/lib/Test/RequiresInternet.pm> Spack package: * [perl-test-requiresinternet/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/perl-test-requiresinternet/package.py) Versions: 0.05 Build Dependencies: [perl](#perl) Link Dependencies: [perl](#perl) Run Dependencies: [perl](#perl) Description: Easily test network connectivity --- perl-test-warn[¶](#perl-test-warn) === Homepage: * <http://search.cpan.org/~chorny/Test-Warn-0.30/Warn.pm> Spack package: * [perl-test-warn/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/perl-test-warn/package.py) Versions: 0.30 Build Dependencies: [perl](#perl) Link Dependencies: [perl](#perl) Run Dependencies: [perl](#perl) Description: Perl extension to test methods for warnings --- perl-test-warnings[¶](#perl-test-warnings) === Homepage: * <http://deps.cpantesters.org/?module=Test%3A%3ACleanNamespaces;perl=latest> Spack package: * [perl-test-warnings/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/perl-test-warnings/package.py) Versions: 0.026 Build Dependencies: [perl](#perl) Link Dependencies: [perl](#perl) Run Dependencies: [perl](#perl) Description: Test for warnings and the lack of them --- perl-text-csv[¶](#perl-text-csv) === Homepage: * <http://search.cpan.org/~ishigaki/Text-CSV/lib/Text/CSV.pm> Spack package: * 
[perl-text-csv/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/perl-text-csv/package.py) Versions: 1.95 Build Dependencies: [perl](#perl) Link Dependencies: [perl](#perl) Run Dependencies: [perl](#perl) Description: Comma-separated values manipulator (using XS or PurePerl) --- perl-text-diff[¶](#perl-text-diff) === Homepage: * <http://search.cpan.org/~neilb/Text-Diff-1.45/lib/Text/Diff.pm> Spack package: * [perl-text-diff/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/perl-text-diff/package.py) Versions: 1.45 Build Dependencies: [perl](#perl), [perl-algorithm-diff](#perl-algorithm-diff) Link Dependencies: [perl](#perl) Run Dependencies: [perl](#perl), [perl-algorithm-diff](#perl-algorithm-diff) Description: Provides a basic set of services akin to the GNU diff utility. --- perl-text-simpletable[¶](#perl-text-simpletable) === Homepage: * <http://search.cpan.org/~mramberg/Text-SimpleTable/lib/Text/SimpleTable.pm> Spack package: * [perl-text-simpletable/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/perl-text-simpletable/package.py) Versions: 2.04 Build Dependencies: [perl](#perl) Link Dependencies: [perl](#perl) Run Dependencies: [perl](#perl) Description: Simple Eyecandy ASCII Tables --- perl-text-soundex[¶](#perl-text-soundex) === Homepage: * <http://search.cpan.org/~rjbs/Text-Soundex-3.05/Soundex.pm> Spack package: * [perl-text-soundex/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/perl-text-soundex/package.py) Versions: 3.05 Build Dependencies: [perl](#perl) Link Dependencies: [perl](#perl) Run Dependencies: [perl](#perl) Description: Soundex is a phonetic algorithm for indexing names by sound, as pronounced in English. 
The goal is for names with the same pronunciation to be encoded to the same representation so that they can be matched despite minor differences in spelling --- perl-text-unidecode[¶](#perl-text-unidecode) === Homepage: * <http://search.cpan.org/~sburke/Text-Unidecode/lib/Text/Unidecode.pm> Spack package: * [perl-text-unidecode/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/perl-text-unidecode/package.py) Versions: 1.30 Build Dependencies: [perl](#perl) Link Dependencies: [perl](#perl) Run Dependencies: [perl](#perl) Description: plain ASCII transliterations of Unicode text --- perl-time-hires[¶](#perl-time-hires) === Homepage: * <http://search.cpan.org/~jhi/Time-HiRes-1.9746/HiRes.pm> Spack package: * [perl-time-hires/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/perl-time-hires/package.py) Versions: 1.9746 Build Dependencies: [perl](#perl) Link Dependencies: [perl](#perl) Run Dependencies: [perl](#perl) Description: High resolution alarm, sleep, gettimeofday, interval timers --- perl-time-piece[¶](#perl-time-piece) === Homepage: * <http://search.cpan.org/~esaym/Time-Piece/Piece.pm> Spack package: * [perl-time-piece/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/perl-time-piece/package.py) Versions: 1.3203 Build Dependencies: [perl](#perl) Link Dependencies: [perl](#perl) Run Dependencies: [perl](#perl) Description: Object Oriented time objects --- perl-try-tiny[¶](#perl-try-tiny) === Homepage: * <http://search.cpan.org/~ether/Try-Tiny-0.28/lib/Try/Tiny.pm> Spack package: * [perl-try-tiny/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/perl-try-tiny/package.py) Versions: 0.28 Build Dependencies: [perl](#perl) Link Dependencies: [perl](#perl) Run Dependencies: [perl](#perl) Description: Minimal try/catch with proper preservation of $@ --- perl-uri[¶](#perl-uri) === Homepage: * 
<http://search.cpan.org/~ether/URI-1.72/lib/URI.pm> Spack package: * [perl-uri/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/perl-uri/package.py) Versions: 1.72 Build Dependencies: [perl](#perl), [perl-test-needs](#perl-test-needs) Link Dependencies: [perl](#perl) Run Dependencies: [perl](#perl), [perl-test-needs](#perl-test-needs) Description: Uniform Resource Identifiers (absolute and relative) --- perl-uri-escape[¶](#perl-uri-escape) === Homepage: * <https://metacpan.org/pod/URI::Escape> Spack package: * [perl-uri-escape/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/perl-uri-escape/package.py) Versions: 1.71 Build Dependencies: [perl](#perl), [perl-extutils-makemaker](#perl-extutils-makemaker) Link Dependencies: [perl](#perl) Run Dependencies: [perl](#perl) Description: This module provides functions to percent-encode and percent-decode URI strings as defined by RFC 3986. Percent-encoding URI's is informally called "URI escaping". This is the terminology used by this module, which predates the formalization of the terms by the RFC by several years. 
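The percent-encoding that URI::Escape provides is the same RFC 3986 scheme exposed by Python's standard library, shown here purely as an illustration of the described behaviour (this is `urllib.parse`, not the Perl module's API):

```python
from urllib.parse import quote, unquote

# "URI escaping": percent-encode characters outside the unreserved set.
# Passing safe="" encodes even "/", so the whole string is treated as data.
encoded = quote("a b/c", safe="")   # space -> %20, slash -> %2F
decoded = unquote(encoded)          # round-trips back to "a b/c"
```

Note that `quote` leaves `/` unescaped by default (it assumes a path); `safe=""` is what makes the encoding total, matching the informal "URI escaping" described above.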
--- perl-version[¶](#perl-version) === Homepage: * <http://search.cpan.org/~bdfoy/Perl-Version-1.013/lib/Perl/Version.pm> Spack package: * [perl-version/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/perl-version/package.py) Versions: 1.013_03 Build Dependencies: [perl](#perl), [perl-file-slurp-tiny](#perl-file-slurp-tiny) Link Dependencies: [perl](#perl) Run Dependencies: [perl](#perl), [perl-file-slurp-tiny](#perl-file-slurp-tiny) Description: Parse and manipulate Perl version strings --- perl-want[¶](#perl-want) === Homepage: * <http://search.cpan.org/~robin/Want/Want.pm> Spack package: * [perl-want/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/perl-want/package.py) Versions: 0.29 Build Dependencies: [perl](#perl) Link Dependencies: [perl](#perl) Run Dependencies: [perl](#perl) Description: A generalisation of wantarray. --- perl-www-robotrules[¶](#perl-www-robotrules) === Homepage: * <http://deps.cpantesters.org/?module=WWW%3A%3ARobotRules;perl=latest> Spack package: * [perl-www-robotrules/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/perl-www-robotrules/package.py) Versions: 6.02 Build Dependencies: [perl](#perl), [perl-uri](#perl-uri) Link Dependencies: [perl](#perl) Run Dependencies: [perl](#perl), [perl-uri](#perl-uri) Description: Database of robots.txt-derived permissions --- perl-xml-parser[¶](#perl-xml-parser) === Homepage: * <http://search.cpan.org/perldoc/XML::Parser> Spack package: * [perl-xml-parser/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/perl-xml-parser/package.py) Versions: 2.44 Build Dependencies: [perl](#perl), [expat](#expat) Link Dependencies: [perl](#perl), [expat](#expat) Run Dependencies: [perl](#perl) Description: XML::Parser - A perl module for parsing XML documents --- perl-xml-parser-lite[¶](#perl-xml-parser-lite) === Homepage: * 
<http://search.cpan.org/~phred/XML-Parser-Lite-0.721/lib/XML/Parser/Lite.pm> Spack package: * [perl-xml-parser-lite/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/perl-xml-parser-lite/package.py) Versions: 0.721 Build Dependencies: [perl](#perl), [perl-test-requires](#perl-test-requires) Link Dependencies: [perl](#perl) Run Dependencies: [perl](#perl), [perl-test-requires](#perl-test-requires) Description: Lightweight pure-perl XML Parser (based on regexps) --- perl-xml-simple[¶](#perl-xml-simple) === Homepage: * <http://search.cpan.org/~grantm/XML-Simple/lib/XML/Simple.pm> Spack package: * [perl-xml-simple/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/perl-xml-simple/package.py) Versions: 2.24 Build Dependencies: [perl](#perl), [perl-xml-parser](#perl-xml-parser) Link Dependencies: [perl](#perl) Run Dependencies: [perl](#perl), [perl-xml-parser](#perl-xml-parser) Description: An API for simple XML files --- perl-yaml-libyaml[¶](#perl-yaml-libyaml) === Homepage: * <http://search.cpan.org/~tinita/YAML-LibYAML/> Spack package: * [perl-yaml-libyaml/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/perl-yaml-libyaml/package.py) Versions: 0.67 Build Dependencies: [perl](#perl) Link Dependencies: [perl](#perl) Run Dependencies: [perl](#perl) Description: Perl YAML Serialization using XS and libyaml --- petsc[¶](#petsc) === Homepage: * <http://www.mcs.anl.gov/petsc/index.html> Spack package: * [petsc/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/petsc/package.py) Versions: develop, 3.10.2, 3.10.1, 3.10.0, 3.9.4, 3.9.3, 3.9.2, 3.9.1, 3.9.0, 3.8.4, 3.8.3, 3.8.2, 3.8.1, 3.8.0, 3.7.7, 3.7.6, 3.7.5, 3.7.4, 3.7.2, 3.6.4, 3.6.3, 3.5.3, 3.5.2, 3.5.1, 3.4.4, xsdk-0.2.0 Build Dependencies: [zlib](#zlib), [hdf5](#hdf5), [mumps](#mumps), scalapack, [suite-sparse](#suite-sparse), [sowing](#sowing), lapack, [parmetis](#parmetis), 
mpi, [python](#python), [superlu-dist](#superlu-dist), [metis](#metis), blas, [hypre](#hypre), [trilinos](#trilinos) Link Dependencies: [zlib](#zlib), [hdf5](#hdf5), [mumps](#mumps), scalapack, [suite-sparse](#suite-sparse), [sowing](#sowing), lapack, [parmetis](#parmetis), mpi, [superlu-dist](#superlu-dist), [metis](#metis), blas, [hypre](#hypre), [trilinos](#trilinos) Description: PETSc is a suite of data structures and routines for the scalable (parallel) solution of scientific applications modeled by partial differential equations. --- pexsi[¶](#pexsi) === Homepage: * <https://math.berkeley.edu/~linlin/pexsi/index.html> Spack package: * [pexsi/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/pexsi/package.py) Versions: 0.10.2, 0.9.2, 0.9.0 Build Dependencies: [parmetis](#parmetis), [superlu-dist](#superlu-dist) Link Dependencies: [parmetis](#parmetis), [superlu-dist](#superlu-dist) Description: The PEXSI library is written in C++, and uses message passing interface (MPI) to parallelize the computation on distributed memory computing systems and achieve scalability on more than 10,000 processors. The Pole EXpansion and Selected Inversion (PEXSI) method is a fast method for electronic structure calculation based on Kohn-Sham density functional theory. It efficiently evaluates certain selected elements of matrix functions, e.g., the Fermi-Dirac function of the KS Hamiltonian, which yields a density matrix. It can be used as an alternative to diagonalization methods for obtaining the density, energy and forces in electronic structure calculations. 
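The Fermi-Dirac function mentioned in the PEXSI description is, for a scalar energy, the standard occupation function; a one-line Python sketch (purely illustrative; `fermi_dirac` is a hypothetical name, and PEXSI applies this to matrix arguments, not scalars):

```python
import math

def fermi_dirac(e, mu, kt):
    """Fermi-Dirac occupation f(E) = 1 / (1 + exp((E - mu) / kT))
    for energy e, chemical potential mu, and thermal energy kt."""
    return 1.0 / (1.0 + math.exp((e - mu) / kt))
```

At the chemical potential the occupation is exactly 1/2; well below it the occupation approaches 1, and well above it approaches 0.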
---

pfft[¶](#pfft)
===
Homepage:
* <https://www-user.tu-chemnitz.de/~potts/workgroup/pippig/software.php.en>
Spack package:
* [pfft/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/pfft/package.py)
Versions: 1.0.8-alpha
Build Dependencies: mpi, [fftw](#fftw)
Link Dependencies: mpi, [fftw](#fftw)
Description: PFFT is a software library for computing massively parallel, fast Fourier transformations on distributed memory architectures. PFFT can be understood as a generalization of FFTW-MPI to multidimensional data decomposition.

---

pflotran[¶](#pflotran)
===
Homepage:
* <http://www.pflotran.org>
Spack package:
* [pflotran/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/pflotran/package.py)
Versions: develop, xsdk-0.3.0, xsdk-0.2.0
Build Dependencies: mpi, [petsc](#petsc), [hdf5](#hdf5)
Link Dependencies: mpi, [petsc](#petsc), [hdf5](#hdf5)
Description: PFLOTRAN is an open source, state-of-the-art massively parallel subsurface flow and reactive transport code.

---

pfunit[¶](#pfunit)
===
Homepage:
* <http://pfunit.sourceforge.net/>
Spack package:
* [pfunit/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/pfunit/package.py)
Versions: 3.2.9
Build Dependencies: [cmake](#cmake), mpi, [python](#python)
Link Dependencies: mpi
Run Dependencies: [python](#python)
Description: pFUnit is a unit testing framework enabling JUnit-like testing of serial and MPI-parallel software written in Fortran.
---

pgdspider[¶](#pgdspider)
===
Homepage:
* <http://www.cmpg.unibe.ch/software/PGDSpider>
Spack package:
* [pgdspider/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/pgdspider/package.py)
Versions: 2.1.1.2
Build Dependencies: java, [bwa](#bwa), [samtools](#samtools), [bcftools](#bcftools)
Link Dependencies: [samtools](#samtools), [bwa](#bwa), [bcftools](#bcftools)
Run Dependencies: java
Description: PGDSpider is a powerful automated data conversion tool for population genetic and genomics programs.

---

pgi[¶](#pgi)
===
Homepage:
* <http://www.pgroup.com/>
Spack package:
* [pgi/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/pgi/package.py)
Versions: 18.4, 17.10, 17.4, 17.3, 16.10, 16.5, 16.3, 15.7
Description: PGI optimizing multi-core x64 compilers for Linux, MacOS & Windows with support for debugging and profiling of local MPI processes. Note: The PGI compilers are licensed software. You will need to create an account on the PGI homepage and download PGI yourself. Spack will search your current directory for the download tarball. Alternatively, add this file to a mirror so that Spack can find it. For instructions on how to set up a mirror, see http://spack.readthedocs.io/en/latest/mirrors.html

---

pgmath[¶](#pgmath)
===
Homepage:
* <https://github.com/flang-compiler/flang>
Spack package:
* [pgmath/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/pgmath/package.py)
Versions: develop, 20180612
Build Dependencies: [cmake](#cmake)
Description: Flang's math library

---

phantompeakqualtools[¶](#phantompeakqualtools)
===
Homepage:
* <https://github.com/kundajelab/phantompeakqualtools>
Spack package:
* [phantompeakqualtools/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/phantompeakqualtools/package.py)
Versions: 1.2
Build Dependencies: [r](#r), [samtools](#samtools), awk, [r-phantompeakqualtools](#r-phantompeakqualtools)
Link Dependencies: [r](#r), [samtools](#samtools), awk
Run Dependencies: [r](#r), [r-phantompeakqualtools](#r-phantompeakqualtools)
Description: This package computes informative enrichment and quality measures for ChIP-seq/DNase-seq/FAIRE-seq/MNase-seq data.

---

phast[¶](#phast)
===
Homepage:
* <http://compgen.cshl.edu/phast/index.php>
Spack package:
* [phast/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/phast/package.py)
Versions: 1.4
Build Dependencies: [clapack](#clapack)
Link Dependencies: [clapack](#clapack)
Description: PHAST is a freely available software package for comparative and evolutionary genomics.

---

phasta[¶](#phasta)
===
Homepage:
* <https://www.scorec.rpi.edu/software.php>
Spack package:
* [phasta/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/phasta/package.py)
Versions: develop, 0.0.1
Build Dependencies: [cmake](#cmake), mpi
Link Dependencies: mpi
Description: SCOREC RPI's Parallel Hierarchic Adaptive Stabilized Transient Analysis (PHASTA) of compressible and incompressible Navier Stokes equations.
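The pgi entry above describes Spack's manual-download workflow for licensed software: fetch the tarball yourself, then let Spack find it in the current directory or in a mirror. A sketch of both options (the tarball filename and mirror path are placeholders, not the exact names Spack expects):

```console
$ ls pgilinux-2018-184-x86_64.tar.gz  # downloaded from pgroup.com into the current directory
$ spack install pgi                   # Spack searches the current directory for the archive
# ...or register a local mirror so every user and machine can find it:
$ spack mirror add local file:///opt/spack-mirror
$ cp pgilinux-2018-184-x86_64.tar.gz /opt/spack-mirror/pgi/
$ spack install pgi
```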
---

phist[¶](#phist)
===
Homepage:
* <https://bitbucket.org/essex/phist/>
Spack package:
* [phist/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/phist/package.py)
Versions: develop, 1.7.3, 1.7.2, 1.6.1, 1.6.0, 1.4.3, master
Build Dependencies: [parmetis](#parmetis), mpi, [trilinos](#trilinos), [eigen](#eigen), [ghost](#ghost), [cmake](#cmake), blas, [python](#python), [petsc](#petsc), lapack
Link Dependencies: [parmetis](#parmetis), mpi, [trilinos](#trilinos), [eigen](#eigen), [ghost](#ghost), blas, [petsc](#petsc), lapack
Description: The Pipelined, Hybrid-parallel Iterative Solver Toolkit provides implementations of and interfaces to block iterative solvers for sparse linear and eigenvalue problems. In contrast to other libraries we support multiple backends (e.g. Trilinos, PETSc and our own optimized kernels), and interfaces in multiple languages such as C, C++, Fortran 2003 and Python. PHIST has a clear focus on portability and hardware performance: in particular it supports row-major storage of block vectors and the use of GPUs (via the ghost library or Trilinos/Tpetra).

---

phylip[¶](#phylip)
===
Homepage:
* <http://evolution.genetics.washington.edu/phylip/>
Spack package:
* [phylip/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/phylip/package.py)
Versions: 3.697, 3.696
Description: PHYLIP (the PHYLogeny Inference Package) is a package of programs for inferring phylogenies (evolutionary trees).
---

phyluce[¶](#phyluce)
===
Homepage:
* <https://github.com/faircloth-lab/phyluce>
Spack package:
* [phyluce/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/phyluce/package.py)
Versions: 1.6.7
Build Dependencies: [py-setuptools](#py-setuptools), [python](#python)
Link Dependencies: [python](#python)
Run Dependencies: [gatk](#gatk), [samtools](#samtools), [lastz](#lastz), [py-biopython](#py-biopython), [mafft](#mafft), [trimal](#trimal), [gblocks](#gblocks), [picard](#picard), [velvet](#velvet), [spades](#spades), [seqtk](#seqtk), [abyss](#abyss), [py-setuptools](#py-setuptools), [trinity](#trinity), [bwa](#bwa), [muscle](#muscle), [raxml](#raxml), [python](#python), [bcftools](#bcftools)
Description: phyluce (phy-loo-chee) is a software package that was initially developed for analyzing data collected from ultraconserved elements in organismal genomes

---

picard[¶](#picard)
===
Homepage:
* <http://broadinstitute.github.io/picard/>
Spack package:
* [picard/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/picard/package.py)
Versions: 2.18.3, 2.18.0, 2.17.0, 2.16.0, 2.15.0, 2.13.2, 2.10.0, 2.9.4, 2.9.3, 2.9.2, 2.9.0, 2.8.3, 2.6.0, 1.140
Run Dependencies: java
Description: Picard is a set of command line tools for manipulating high-throughput sequencing (HTS) data and formats such as SAM/BAM/CRAM and VCF.

---

picsar[¶](#picsar)
===
Homepage:
* <https://picsar.net>
Spack package:
* [picsar/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/picsar/package.py)
Versions: develop
Build Dependencies: mpi, [fftw](#fftw)
Link Dependencies: mpi, [fftw](#fftw)
Description: PICSAR is a high performance library of optimized versions of the key functionalities of the PIC loop.
---

picsarlite[¶](#picsarlite)
===
Homepage:
* <https://picsar.net>
Spack package:
* [picsarlite/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/picsarlite/package.py)
Versions: develop, 0.1
Build Dependencies: mpi, [fftw](#fftw)
Link Dependencies: mpi, [fftw](#fftw)
Description: PICSARlite is a self-contained proxy that adequately portrays the computational loads and dataflow of more complex PIC codes.

---

pidx[¶](#pidx)
===
Homepage:
* <http://www.cedmav.com/pidx>
Spack package:
* [pidx/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/pidx/package.py)
Versions: 1.0
Build Dependencies: [cmake](#cmake), mpi
Link Dependencies: mpi
Description: PIDX Parallel I/O Library. PIDX is an efficient parallel I/O library that reads and writes multiresolution IDX data files.

---

pigz[¶](#pigz)
===
Homepage:
* <http://zlib.net/pigz/>
Spack package:
* [pigz/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/pigz/package.py)
Versions: 2.4, 2.3.4
Build Dependencies: [zlib](#zlib)
Link Dependencies: [zlib](#zlib)
Description: A parallel implementation of gzip for modern multi-processor, multi-core machines.

---

pilon[¶](#pilon)
===
Homepage:
* <https://github.com/broadinstitute/pilon>
Spack package:
* [pilon/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/pilon/package.py)
Versions: 1.22, 1.13
Run Dependencies: java
Description: Pilon is an automated genome assembly improvement and variant detection tool.

---

pindel[¶](#pindel)
===
Homepage:
* <http://gmt.genome.wustl.edu/packages/pindel/>
Spack package:
* [pindel/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/pindel/package.py)
Versions: 0.2.5b8, 0.2.5b6, 0.2.5b5, 0.2.5b4, 0.2.5b1, 0.2.5a7, 0.2.5
Build Dependencies: [htslib](#htslib)
Link Dependencies: [htslib](#htslib)
Description: Pindel can detect breakpoints from next-gen sequence data.
---

piranha[¶](#piranha)
===
Homepage:
* <https://bluescarni.github.io/piranha/sphinx/>
Spack package:
* [piranha/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/piranha/package.py)
Versions: develop, 0.5
Build Dependencies: [gmp](#gmp), [bzip2](#bzip2), [cmake](#cmake), [boost](#boost), [mpfr](#mpfr), [python](#python)
Link Dependencies: [gmp](#gmp), [bzip2](#bzip2), [boost](#boost), [mpfr](#mpfr), [python](#python)
Description: Piranha is a computer-algebra library for the symbolic manipulation of sparse multivariate polynomials and other closely-related symbolic objects (such as Poisson series).

---

pism[¶](#pism)
===
Homepage:
* <http://pism-docs.org/wiki/doku.php>
Spack package:
* [pism/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/pism/package.py)
Versions: develop, 0.7.3, 0.7.x, icebin
Build Dependencies: [everytrace](#everytrace), [petsc](#petsc), [cmake](#cmake), [py-numpy](#py-numpy), [py-matplotlib](#py-matplotlib), [fftw](#fftw), mpi, [gsl](#gsl), [udunits2](#udunits2), [proj](#proj), [python](#python), [netcdf](#netcdf)
Link Dependencies: [python](#python), [everytrace](#everytrace), [petsc](#petsc), [gsl](#gsl), [udunits2](#udunits2), [proj](#proj), [py-numpy](#py-numpy), [py-matplotlib](#py-matplotlib), mpi, [fftw](#fftw), [netcdf](#netcdf)
Description: Parallel Ice Sheet Model

---

pixman[¶](#pixman)
===
Homepage:
* <http://www.pixman.org>
Spack package:
* [pixman/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/pixman/package.py)
Versions: 0.34.0, 0.32.6
Build Dependencies: pkgconfig, [libpng](#libpng)
Link Dependencies: [libpng](#libpng)
Description: The Pixman package contains a library that provides low-level pixel manipulation features such as image compositing and trapezoid rasterization.
---

pkg-config[¶](#pkg-config)
===
Homepage:
* <http://www.freedesktop.org/wiki/Software/pkg-config/>
Spack package:
* [pkg-config/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/pkg-config/package.py)
Versions: 0.29.2, 0.29.1, 0.28
Description: pkg-config is a helper tool used when compiling applications and libraries

---

pkgconf[¶](#pkgconf)
===
Homepage:
* <http://pkgconf.org/>
Spack package:
* [pkgconf/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/pkgconf/package.py)
Versions: 1.4.2, 1.4.0, 1.3.10, 1.3.8
Description: pkgconf is a program which helps to configure compiler and linker flags for development frameworks. It is similar to pkg-config from freedesktop.org, providing additional functionality while also maintaining compatibility.

---

planck-likelihood[¶](#planck-likelihood)
===
Homepage:
* <https://wiki.cosmos.esa.int/planckpla2015/index.php/CMB_spectrum_%26_Likelihood_Code>
Spack package:
* [planck-likelihood/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/planck-likelihood/package.py)
Versions: 2.00
Build Dependencies: blas, [cfitsio](#cfitsio), lapack
Link Dependencies: blas, [cfitsio](#cfitsio), lapack
Description: 2015 Cosmic Microwave Background (CMB) spectra and likelihood code

---

plasma[¶](#plasma)
===
Homepage:
* <https://bitbucket.org/icl/plasma/>
Spack package:
* [plasma/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/plasma/package.py)
Versions: develop, 18.10.0, 18.9.0, 17.1
Build Dependencies: [cmake](#cmake), blas, lapack
Link Dependencies: blas, lapack
Description: Parallel Linear Algebra Software for Multicore Architectures, PLASMA is a software package for solving problems in dense linear algebra using multicore processors and Xeon Phi coprocessors. PLASMA provides implementations of state-of-the-art algorithms using cutting-edge task scheduling techniques. PLASMA currently offers a collection of routines for solving linear systems of equations, least squares problems, eigenvalue problems, and singular value problems.

---

platypus[¶](#platypus)
===
Homepage:
* <http://www.well.ox.ac.uk/platypus>
Spack package:
* [platypus/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/platypus/package.py)
Versions: 0.8.1
Build Dependencies: [py-cython](#py-cython), [python](#python), [htslib](#htslib)
Link Dependencies: [htslib](#htslib)
Run Dependencies: [python](#python)
Description: A Haplotype-Based Variant Caller For Next Generation Sequence Data

---

plink[¶](#plink)
===
Homepage:
* <https://www.cog-genomics.org/plink/1.9/>
Spack package:
* [plink/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/plink/package.py)
Versions: 1.9-beta5, 1.07
Build Dependencies: [atlas](#atlas), [netlib-lapack](#netlib-lapack)
Link Dependencies: [atlas](#atlas), [netlib-lapack](#netlib-lapack)
Description: PLINK is a free, open-source whole genome association analysis toolset, designed to perform a range of basic, large-scale analyses in a computationally efficient manner.

---

plplot[¶](#plplot)
===
Homepage:
* <http://plplot.sourceforge.net/>
Spack package:
* [plplot/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/plplot/package.py)
Versions: 5.13.0, 5.12.0, 5.11.0
Build Dependencies: [tcl](#tcl), [lua](#lua), [gtkplus](#gtkplus), java, [cmake](#cmake), [py-numpy](#py-numpy), [qhull](#qhull), [pango](#pango), [libx11](#libx11), [qt](#qt), [wx](#wx), [swig](#swig), [python](#python), [freetype](#freetype)
Link Dependencies: [freetype](#freetype), [tcl](#tcl), [lua](#lua), [libx11](#libx11), [gtkplus](#gtkplus), java, [wx](#wx), [pango](#pango), [qhull](#qhull), [swig](#swig), [qt](#qt)
Run Dependencies: [py-numpy](#py-numpy), [python](#python)
Description: PLplot is a cross-platform package for creating scientific plots.
---

plumed[¶](#plumed)
===
Homepage:
* <http://www.plumed.org/>
Spack package:
* [plumed/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/plumed/package.py)
Versions: 2.4.2, 2.4.1, 2.3.5, 2.3.3, 2.3.0, 2.2.4, 2.2.3
Build Dependencies: [zlib](#zlib), [libmatheval](#libmatheval), mpi, [gsl](#gsl), [autoconf](#autoconf), [automake](#automake), [libtool](#libtool), blas, lapack
Link Dependencies: [zlib](#zlib), [libmatheval](#libmatheval), mpi, [gsl](#gsl), blas, lapack
Description: PLUMED is an open source library for free energy calculations in molecular systems which works together with some of the most popular molecular dynamics engines. Free energy calculations can be performed as a function of many order parameters with a particular focus on biological problems, using state of the art methods such as metadynamics, umbrella sampling and Jarzynski-equation based steered MD. The software, written in C++, can be easily interfaced with both fortran and C/C++ codes.

---

pmgr-collective[¶](#pmgr-collective)
===
Homepage:
* <http://www.sourceforge.net/projects/pmgrcollective>
Spack package:
* [pmgr-collective/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/pmgr-collective/package.py)
Versions: 1.0
Description: PMGR_COLLECTIVE provides a scalable network for bootstrapping MPI jobs.

---

pmix[¶](#pmix)
===
Homepage:
* <https://pmix.github.io/pmix>
Spack package:
* [pmix/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/pmix/package.py)
Versions: 3.0.2, 3.0.1, 3.0.0, 2.1.4, 2.1.3, 2.1.2, 2.0.1, 2.0.0, 1.2.5, 1.2.4, 1.2.3, 1.2.2, 1.2.1, 1.2.0
Build Dependencies: [libevent](#libevent)
Link Dependencies: [libevent](#libevent)
Description: The Process Management Interface (PMI) has been used for quite some time as a means of exchanging wireup information needed for interprocess communication. Two versions (PMI-1 and PMI-2) have been released as part of the MPICH effort. While PMI-2 demonstrates better scaling properties than its PMI-1 predecessor, attaining rapid launch and wireup of the roughly 1M processes executing across 100k nodes expected for exascale operations remains challenging. PMI Exascale (PMIx) represents an attempt to resolve these questions by providing an extended version of the PMI definitions specifically designed to support clusters up to and including exascale sizes. The overall objective of the project is not to branch the existing definitions - in fact, PMIx fully supports both of the existing PMI-1 and PMI-2 APIs - but rather to (a) augment and extend those APIs to eliminate some current restrictions that impact scalability, (b) establish a standards-like body for maintaining the definitions, and (c) provide a reference implementation of the PMIx standard that demonstrates the desired level of scalability.

---

pnfft[¶](#pnfft)
===
Homepage:
* <https://www-user.tu-chemnitz.de/~potts/workgroup/pippig/software.php.en>
Spack package:
* [pnfft/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/pnfft/package.py)
Versions: 1.0.7-alpha
Build Dependencies: [gsl](#gsl), [pfft](#pfft)
Link Dependencies: [gsl](#gsl), [pfft](#pfft)
Description: PNFFT is a parallel software library for the calculation of three-dimensional nonequispaced FFTs.

---

pngwriter[¶](#pngwriter)
===
Homepage:
* <http://pngwriter.sourceforge.net/>
Spack package:
* [pngwriter/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/pngwriter/package.py)
Versions: develop, 0.7.0, 0.6.0, 0.5.6, master
Build Dependencies: [zlib](#zlib), [cmake](#cmake), [libpng](#libpng), [freetype](#freetype)
Link Dependencies: [zlib](#zlib), [libpng](#libpng), [freetype](#freetype)
Description: PNGwriter is a very easy to use open source graphics library that uses PNG as its output format. The interface has been designed to be as simple and intuitive as possible. It supports plotting and reading pixels in the RGB (red, green, blue), HSV (hue, saturation, value/brightness) and CMYK (cyan, magenta, yellow, black) colour spaces, basic shapes, scaling, bilinear interpolation, full TrueType antialiased and rotated text support, bezier curves, opening existing PNG images and more.

---

pnmpi[¶](#pnmpi)
===
Homepage:
* <https://github.com/LLNL/PnMPI>
Spack package:
* [pnmpi/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/pnmpi/package.py)
Versions: 1.7
Build Dependencies: mpi, [help2man](#help2man), [cmake](#cmake), [argp-standalone](#argp-standalone), [doxygen](#doxygen), [binutils](#binutils)
Link Dependencies: [binutils](#binutils), [argp-standalone](#argp-standalone), [doxygen](#doxygen), mpi, [help2man](#help2man)
Description: PnMPI is a dynamic MPI tool infrastructure that builds on top of the standardized PMPI interface.

---

poamsa[¶](#poamsa)
===
Homepage:
* <https://sourceforge.net/projects/poamsa>
Spack package:
* [poamsa/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/poamsa/package.py)
Versions: 2.0
Description: POA is Partial Order Alignment, a fast program for multiple sequence alignment in bioinformatics. Its advantages are speed, scalability, sensitivity, and the superior ability to handle branching / indels in the alignment.
---

pocl[¶](#pocl)
===
Homepage:
* <http://portablecl.org>
Spack package:
* [pocl/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/pocl/package.py)
Versions: 1.1, 1.0, 0.14, 0.13, 0.12, 0.11, 0.10, master
Build Dependencies: [cmake](#cmake), pkgconfig, [libtool](#libtool), [llvm](#llvm), [hwloc](#hwloc)
Link Dependencies: [hwloc](#hwloc), [libtool](#libtool), [llvm](#llvm)
Run Dependencies: [libtool](#libtool)
Description: Portable Computing Language (pocl) is an open source implementation of the OpenCL standard which can be easily adapted for new targets and devices, both for homogeneous CPU and heterogeneous GPUs/accelerators.

---

polymake[¶](#polymake)
===
Homepage:
* <https://polymake.org/doku.php>
Spack package:
* [polymake/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/polymake/package.py)
Versions: 3.0r2, 3.0r1
Build Dependencies: [gmp](#gmp), [mpfr](#mpfr), [bliss](#bliss), [lrslib](#lrslib), [boost](#boost), [ppl](#ppl), [cddlib](#cddlib)
Link Dependencies: [gmp](#gmp), [mpfr](#mpfr), [bliss](#bliss), [lrslib](#lrslib), [boost](#boost), [ppl](#ppl), [cddlib](#cddlib)
Description: polymake is open source software for research in polyhedral geometry

---

poppler[¶](#poppler)
===
Homepage:
* <https://poppler.freedesktop.org>
Spack package:
* [poppler/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/poppler/package.py)
Versions: 0.65.0, 0.64.0
Build Dependencies: [libiconv](#libiconv), pkgconfig, jpeg, [openjpeg](#openjpeg), [cmake](#cmake), [qt](#qt), [gobject-introspection](#gobject-introspection), [zlib](#zlib), [fontconfig](#fontconfig), [lcms](#lcms), [curl](#curl), [libtiff](#libtiff), [libpng](#libpng), [cairo](#cairo), [glib](#glib), [poppler-data](#poppler-data), [freetype](#freetype)
Link Dependencies: [libiconv](#libiconv), [glib](#glib), jpeg, [lcms](#lcms), [curl](#curl), [qt](#qt), [gobject-introspection](#gobject-introspection), [zlib](#zlib), [fontconfig](#fontconfig), [openjpeg](#openjpeg), [libtiff](#libtiff), [libpng](#libpng), [cairo](#cairo), [freetype](#freetype)
Run Dependencies: [poppler-data](#poppler-data)
Description: Poppler is a PDF rendering library based on the xpdf-3.0 code base.

---

poppler-data[¶](#poppler-data)
===
Homepage:
* <https://poppler.freedesktop.org/>
Spack package:
* [poppler-data/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/poppler-data/package.py)
Versions: 0.4.9
Build Dependencies: [cmake](#cmake)
Description: This package consists of encoding files for use with poppler. The encoding files are optional and poppler will automatically read them if they are present. When installed, the encoding files enable poppler to correctly render CJK and Cyrillic text. While poppler is licensed under the GPL, these encoding files have a different license, and thus are distributed separately.

---

porta[¶](#porta)
===
Homepage:
* <http://porta.zib.de>
Spack package:
* [porta/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/porta/package.py)
Versions: 1.4.1
Build Dependencies: [libtool](#libtool)
Description: PORTA is a collection of routines for analyzing polytopes and polyhedra

---

portage[¶](#portage)
===
Homepage:
* <http://portage.lanl.gov/>
Spack package:
* [portage/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/portage/package.py)
Versions: develop, 1.1.1, 1.1.0
Build Dependencies: [cmake](#cmake), mpi, lapack
Link Dependencies: mpi, lapack
Description: Portage is a framework that computational physics applications can use to build a highly customized, hybrid parallel (MPI+X) conservative remapping library for transfer of field data between meshes.
---

portcullis[¶](#portcullis)
===
Homepage:
* <https://github.com/maplesond/portcullis>
Spack package:
* [portcullis/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/portcullis/package.py)
Versions: 1.1.2
Build Dependencies: [zlib](#zlib), [samtools](#samtools), [m4](#m4), [py-setuptools](#py-setuptools), [autoconf](#autoconf), [automake](#automake), [libtool](#libtool), [py-pandas](#py-pandas), [python](#python), [py-sphinx](#py-sphinx)
Link Dependencies: [py-sphinx](#py-sphinx)
Run Dependencies: [py-pandas](#py-pandas), [py-setuptools](#py-setuptools), [python](#python)
Description: PORTable CULLing of Invalid Splice junctions

---

postgresql[¶](#postgresql)
===
Homepage:
* <http://www.postgresql.org/>
Spack package:
* [postgresql/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/postgresql/package.py)
Versions: 10.3, 10.2, 9.5.3, 9.3.4
Build Dependencies: [readline](#readline), [openssl](#openssl)
Link Dependencies: [readline](#readline), [openssl](#openssl)
Description: PostgreSQL is a powerful, open source object-relational database system. It has more than 15 years of active development and a proven architecture that has earned it a strong reputation for reliability, data integrity, and correctness.

---

ppl[¶](#ppl)
===
Homepage:
* <http://bugseng.com/products/ppl/>
Spack package:
* [ppl/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/ppl/package.py)
Versions: 1.1
Build Dependencies: [gmp](#gmp)
Link Dependencies: [gmp](#gmp)
Description: The Parma Polyhedra Library (PPL) provides numerical abstractions especially targeted at applications in the field of analysis and verification of complex systems. These abstractions include convex polyhedra, some special classes of polyhedra shapes that offer interesting complexity/precision tradeoffs, and grids which represent regularly spaced points that satisfy a set of linear congruence relations. The library also supports finite powersets and products of polyhedra and grids, a mixed integer linear programming problem solver using an exact-arithmetic version of the simplex algorithm, a parametric integer programming solver, and primitives for termination analysis via the automatic synthesis of linear ranking functions.

---

pplacer[¶](#pplacer)
===
Homepage:
* <http://matsen.fhcrc.org/pplacer/>
Spack package:
* [pplacer/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/pplacer/package.py)
Versions: 1.1.alpha19
Description: Pplacer places query sequences on a fixed reference phylogenetic tree to maximize phylogenetic likelihood or posterior probability according to a reference alignment. Pplacer is designed to be fast, to give useful information about uncertainty, and to offer advanced visualization and downstream analysis.

---

prank[¶](#prank)
===
Homepage:
* <http://wasabiapp.org/software/prank/>
Spack package:
* [prank/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/prank/package.py)
Versions: 170427, 150803
Build Dependencies: [mafft](#mafft), [bpp-suite](#bpp-suite), [exonerate](#exonerate)
Link Dependencies: [mafft](#mafft), [bpp-suite](#bpp-suite), [exonerate](#exonerate)
Description: A powerful multiple sequence alignment browser.

---

precice[¶](#precice)
===
Homepage:
* <https://www.precice.org>
Spack package:
* [precice/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/precice/package.py)
Versions: develop
Build Dependencies: mpi, [petsc](#petsc), [eigen](#eigen), [cmake](#cmake), [boost](#boost), [python](#python)
Link Dependencies: [petsc](#petsc), [boost](#boost), mpi, [eigen](#eigen)
Run Dependencies: [python](#python)
Description: preCICE (Precise Code Interaction Coupling Environment) is a coupling library for partitioned multi-physics simulations. Partitioned means that preCICE couples existing programs (solvers) capable of simulating a subpart of the complete physics involved in a simulation.

---

presentproto[¶](#presentproto)
===
Homepage:
* <https://cgit.freedesktop.org/xorg/proto/presentproto/>
Spack package:
* [presentproto/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/presentproto/package.py)
Versions: 1.0
Build Dependencies: [util-macros](#util-macros), pkgconfig
Description: Present protocol specification and Xlib/Xserver headers.

---

preseq[¶](#preseq)
===
Homepage:
* <https://github.com/smithlabcode/preseq>
Spack package:
* [preseq/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/preseq/package.py)
Versions: 2.0.2
Build Dependencies: [samtools](#samtools), [gsl](#gsl)
Link Dependencies: [samtools](#samtools), [gsl](#gsl)
Description: The preseq package is aimed at predicting and estimating the complexity of a genomic sequencing library, equivalent to predicting and estimating the number of redundant reads from a given sequencing depth and how many will be expected from additional sequencing using an initial sequencing experiment.

---

price[¶](#price)
===
Homepage:
* <http://derisilab.ucsf.edu/software/price/>
Spack package:
* [price/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/price/package.py)
Versions: 140408
Description: PRICE (Paired-Read Iterative Contig Extension): a de novo genome assembler implemented in C++.

---

primer3[¶](#primer3)
===
Homepage:
* <http://primer3.sourceforge.net/>
Spack package:
* [primer3/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/primer3/package.py)
Versions: 2.3.7
Description: Primer3 is a widely used program for designing PCR primers (PCR = "Polymerase Chain Reaction"). PCR is an essential and ubiquitous tool in genetics and molecular biology. Primer3 can also design hybridization probes and sequencing primers.

---

prinseq-lite[¶](#prinseq-lite)
===
Homepage:
* <http://prinseq.sourceforge.net>
Spack package:
* [prinseq-lite/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/prinseq-lite/package.py)
Versions: 0.20.4
Run Dependencies: [perl](#perl), [perl-digest-md5](#perl-digest-md5), [perl-json](#perl-json), [perl-cairo](#perl-cairo)
Description: PRINSEQ will help you to preprocess your genomic or metagenomic sequence data in FASTA or FASTQ format.

---

printproto[¶](#printproto)
===
Homepage:
* <http://cgit.freedesktop.org/xorg/proto/printproto>
Spack package:
* [printproto/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/printproto/package.py)
Versions: 1.0.5
Build Dependencies: [util-macros](#util-macros), pkgconfig
Description: Xprint extension to the X11 protocol - a portable, network-transparent printing system.

---

prng[¶](#prng)
===
Homepage:
* <http://statmath.wu.ac.at/prng/>
Spack package:
* [prng/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/prng/package.py)
Versions: 3.0.2
Build Dependencies: [autoconf](#autoconf), [automake](#automake), [libtool](#libtool), [m4](#m4)
Description: Pseudo-Random Number Generator library.

---

probconsrna[¶](#probconsrna)
===
Homepage:
* <http://probcons.stanford.edu/>
Spack package:
* [probconsrna/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/probconsrna/package.py)
Versions: 2005-6-7
Description: Experimental version of PROBCONS with parameters estimated via unsupervised training on BRAliBASE

---

prodigal[¶](#prodigal)
===
Homepage:
* <https://github.com/hyattpd/Prodigal>
Spack package:
* [prodigal/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/prodigal/package.py)
Versions: 2.6.3
Description: Fast, reliable protein-coding gene prediction for prokaryotic genomes.
--- proj[¶](#proj) === Homepage: * <https://proj4.org/Spack package: * [proj/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/proj/package.py) Versions: 5.0.1, 4.9.2, 4.9.1, 4.8.0, 4.7.0, 4.6.1 Description: PROJ is generic coordinate transformation software that transforms geospatial coordinates from one coordinate reference system (CRS) to another. This includes cartographic projections as well as geodetic transformations. --- protobuf[¶](#protobuf) === Homepage: * <https://developers.google.com/protocol-buffersSpack package: * [protobuf/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/protobuf/package.py) Versions: 3.5.2, 3.5.1.1, 3.5.1, 3.5.0.1, 3.5.0, 3.4.1, 3.4.0, 3.3.0, 3.2.0, 3.1.0, 3.0.2 Build Dependencies: [zlib](#zlib), [cmake](#cmake) Link Dependencies: [zlib](#zlib) Description: Google's data interchange format. --- proxymngr[¶](#proxymngr) === Homepage: * <http://cgit.freedesktop.org/xorg/app/proxymngrSpack package: * [proxymngr/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/proxymngr/package.py) Versions: 1.0.4 Build Dependencies: [libice](#libice), [libxt](#libxt), pkgconfig, [lbxproxy](#lbxproxy), [xproto](#xproto), [util-macros](#util-macros), [xproxymanagementprotocol](#xproxymanagementprotocol) Link Dependencies: [libice](#libice), [libxt](#libxt), [lbxproxy](#lbxproxy) Description: The proxy manager (proxymngr) is responsible for resolving requests from xfindproxy (and other similar clients), starting new proxies when appropriate, and keeping track of all of the available proxy services. The proxy manager strives to reuse existing proxies whenever possible.
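The cartographic projections PROJ implements can be illustrated without the library itself. The following is a minimal sketch of the classic spherical Mercator forward projection in plain Python; it is not PROJ's API, and it uses a spherical approximation rather than a real CRS transform:

```python
import math

EARTH_RADIUS_M = 6378137.0  # WGS84 equatorial radius, spherical approximation

def mercator(lon_deg, lat_deg, radius=EARTH_RADIUS_M):
    """Forward spherical Mercator: (lon, lat) in degrees -> planar (x, y) in metres."""
    lam = math.radians(lon_deg)
    phi = math.radians(lat_deg)
    x = radius * lam
    y = radius * math.log(math.tan(math.pi / 4 + phi / 2))
    return x, y

# (0, 0) maps to (approximately) the origin; 45N sits about 5.6 million
# metres up the map, illustrating Mercator's poleward stretching.
print(mercator(0.0, 0.0))
print(mercator(0.0, 45.0))
```

A real PROJ pipeline additionally handles datums, ellipsoids, and axis conventions, which is why a dedicated library exists at all.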
--- pruners-ninja[¶](#pruners-ninja) === Homepage: * <https://github.com/PRUNERS/NINJASpack package: * [pruners-ninja/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/pruners-ninja/package.py) Versions: 1.0.1, 1.0.0 Build Dependencies: [autoconf](#autoconf), [automake](#automake), [libtool](#libtool), mpi Link Dependencies: mpi Description: NINJA: Noise Inject agent tool to expose subtle and unintended message races. --- ps-lite[¶](#ps-lite) === Homepage: * <https://github.com/dmlc/ps-liteSpack package: * [ps-lite/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/ps-lite/package.py) Versions: 20170328, master Build Dependencies: [zeromq](#zeromq), [cmake](#cmake), [protobuf](#protobuf) Link Dependencies: [zeromq](#zeromq), [protobuf](#protobuf) Description: ps-lite is a light and efficient implementation of the parameter server framework. --- psi4[¶](#psi4) === Homepage: * <http://www.psicode.org/Spack package: * [psi4/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/psi4/package.py) Versions: 0.5 Build Dependencies: [cmake](#cmake), [boost](#boost), blas, [py-numpy](#py-numpy), [python](#python), lapack Link Dependencies: [boost](#boost), blas, [python](#python), lapack Run Dependencies: [py-numpy](#py-numpy) Description: Psi4 is an open-source suite of ab initio quantum chemistry programs designed for efficient, high-accuracy simulations of a variety of molecular properties. --- pslib[¶](#pslib) === Homepage: * <http://pslib.sourceforge.net/Spack package: * [pslib/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/pslib/package.py) Versions: 0.4.5 Build Dependencies: jpeg, [libpng](#libpng) Link Dependencies: jpeg, [libpng](#libpng) Description: C-library to create PostScript files on the fly.
--- psm[¶](#psm) === Homepage: * <https://github.com/intel/psmSpack package: * [psm/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/psm/package.py) Versions: 2017-04-28, 3.3 Build Dependencies: [libuuid](#libuuid) Link Dependencies: [libuuid](#libuuid) Description: Intel Performance Scaled Messaging library --- psmc[¶](#psmc) === Homepage: * <https://github.com/lh3/psmcSpack package: * [psmc/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/psmc/package.py) Versions: 2016-1-21 Description: Implementation of the Pairwise Sequentially Markovian Coalescent (PSMC) model --- pstreams[¶](#pstreams) === Homepage: * <http://pstreams.sourceforge.net/Spack package: * [pstreams/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/pstreams/package.py) Versions: 1.0.1 Description: C++ wrapper for the POSIX.2 functions popen(3) and pclose(3) --- pugixml[¶](#pugixml) === Homepage: * <http://pugixml.org/Spack package: * [pugixml/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/pugixml/package.py) Versions: 1.8.1 Build Dependencies: [cmake](#cmake) Description: Light-weight, simple, and fast XML parser for C++ with XPath support --- pumi[¶](#pumi) === Homepage: * <https://www.scorec.rpi.edu/pumiSpack package: * [pumi/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/pumi/package.py) Versions: develop, 2.2.0, 2.1.0 Build Dependencies: [cmake](#cmake), [zoltan](#zoltan), mpi Link Dependencies: [zoltan](#zoltan), mpi Description: SCOREC RPI's Parallel Unstructured Mesh Infrastructure (PUMI).
An efficient distributed mesh data structure and methods to support parallel adaptive analysis including general mesh-based operations, such as mesh entity creation/deletion, adjacency and geometric classification, iterators, arbitrary (field) data attachable to mesh entities, efficient communication involving entities duplicated across multiple tasks, migration of mesh entities between tasks, and dynamic load balancing. --- pv[¶](#pv) === Homepage: * <http://www.ivarch.com/programs/pv.shtmlSpack package: * [pv/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/pv/package.py) Versions: 1.6.6 Description: Pipe Viewer (pv) is a terminal-based tool for monitoring the progress of data through a pipeline --- pvm[¶](#pvm) === Homepage: * <http://www.csm.ornl.gov/pvm/pvm_home.htmlSpack package: * [pvm/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/pvm/package.py) Versions: 3.4.6 Description: PVM (Parallel Virtual Machine) is a software package that permits a heterogeneous collection of Unix and/or Windows computers hooked together by a network to be used as a single large parallel computer. --- pxz[¶](#pxz) === Homepage: * <https://jnovy.fedorapeople.org/pxz/pxz.htmlSpack package: * [pxz/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/pxz/package.py) Versions: develop, 4.999.9beta.20091201git Build Dependencies: [lzma](#lzma) Link Dependencies: [lzma](#lzma) Description: Pxz is a parallel LZMA compressor using liblzma.
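pstreams wraps the POSIX popen(3)/pclose(3) pattern for C++: launch a child process and read its output as a stream. The same pattern can be sketched with Python's stdlib subprocess module (not pstreams' API; the child command here is just the current interpreter to keep the example portable):

```python
import subprocess
import sys

# Run a child process and capture its stdout, the pattern popen(3) provides in C.
result = subprocess.run(
    [sys.executable, "-c", "print('hello from the child')"],
    capture_output=True,
    text=True,
    check=True,  # raise CalledProcessError on a non-zero exit status
)
print(result.stdout.strip())
```

In pstreams the equivalent is an ipstream object that behaves like a std::istream over the child's stdout.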
--- py-3to2[¶](#py-3to2) === Homepage: * <https://pypi.python.org/pypi/3to2Spack package: * [py-3to2/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/py-3to2/package.py) Versions: 1.1.1 Build Dependencies: [python](#python) Link Dependencies: [python](#python) Run Dependencies: [python](#python) Description: lib3to2 is a set of fixers that are intended to backport code written for Python version 3.x into Python version 2.x. --- py-4suite-xml[¶](#py-4suite-xml) === Homepage: * <http://4suite.org/Spack package: * [py-4suite-xml/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/py-4suite-xml/package.py) Versions: 1.0.2 Build Dependencies: [python](#python) Link Dependencies: [python](#python) Run Dependencies: [python](#python) Description: XML tools and libraries for Python: Domlette, XPath, XSLT, XPointer, XLink, XUpdate --- py-abipy[¶](#py-abipy) === Homepage: * <https://github.com/abinit/abipySpack package: * [py-abipy/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/py-abipy/package.py) Versions: 0.2.0 Build Dependencies: [py-pyyaml](#py-pyyaml), [py-html2text](#py-html2text), [py-matplotlib](#py-matplotlib), [py-pandas](#py-pandas), [py-apscheduler](#py-apscheduler), [py-six](#py-six), [py-pydispatcher](#py-pydispatcher), [py-cython](#py-cython), [py-setuptools](#py-setuptools), [py-netcdf4](#py-netcdf4), py-jupyter, [py-tabulate](#py-tabulate), [py-seaborn](#py-seaborn), [py-spglib](#py-spglib), [py-tqdm](#py-tqdm), [py-nbformat](#py-nbformat), [py-pymatgen](#py-pymatgen), py-wxpython, [py-prettytable](#py-prettytable), [py-scipy](#py-scipy), py-wxmplot, [py-ipython](#py-ipython), [py-numpy](#py-numpy), [python](#python) Link Dependencies: [python](#python) Run Dependencies: [py-pyyaml](#py-pyyaml), [py-html2text](#py-html2text), [py-matplotlib](#py-matplotlib), [py-pandas](#py-pandas), [py-apscheduler](#py-apscheduler), [py-six](#py-six), 
[py-pydispatcher](#py-pydispatcher), [py-netcdf4](#py-netcdf4), py-jupyter, [py-tabulate](#py-tabulate), [py-seaborn](#py-seaborn), [py-spglib](#py-spglib), [py-tqdm](#py-tqdm), [py-nbformat](#py-nbformat), [py-pymatgen](#py-pymatgen), py-wxpython, [py-prettytable](#py-prettytable), [py-scipy](#py-scipy), py-wxmplot, [py-ipython](#py-ipython), [py-numpy](#py-numpy), [python](#python) Description: Python package to automate ABINIT calculations and analyze the results. --- py-adios[¶](#py-adios) === Homepage: * <https://www.olcf.ornl.gov/center-projects/adios/Spack package: * [py-adios/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/py-adios/package.py) Versions: develop, 1.13.0, 1.12.0, 1.11.1, 1.11.0, 1.10.0, 1.9.0 Build Dependencies: [py-numpy](#py-numpy), mpi, [py-cython](#py-cython), [adios](#adios), [python](#python) Link Dependencies: mpi, [adios](#adios), [python](#python) Run Dependencies: [py-numpy](#py-numpy), [adios](#adios), [python](#python), [py-mpi4py](#py-mpi4py) Description: NumPy bindings of ADIOS1 --- py-affine[¶](#py-affine) === Homepage: * <https://github.com/sgillies/affineSpack package: * [py-affine/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/py-affine/package.py) Versions: 2.1.0 Build Dependencies: [py-setuptools](#py-setuptools), [python](#python) Link Dependencies: [python](#python) Run Dependencies: [python](#python) Description: Matrices describing affine transformation of the plane. 
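py-affine represents plane transforms as six matrix coefficients. As a stdlib-only sketch of the same idea (not py-affine's API, though the coefficient layout matches the usual convention):

```python
import math

# A plane affine transform as six coefficients (a, b, c, d, e, f) such that
#   x' = a*x + b*y + c
#   y' = d*x + e*y + f

def make_rotation(degrees):
    """Coefficients for a counter-clockwise rotation about the origin."""
    t = math.radians(degrees)
    return (math.cos(t), -math.sin(t), 0.0,
            math.sin(t),  math.cos(t), 0.0)

def apply(coeffs, point):
    a, b, c, d, e, f = coeffs
    x, y = point
    return (a * x + b * y + c, d * x + e * y + f)

rot90 = make_rotation(90.0)
print(apply(rot90, (1.0, 0.0)))  # (1, 0) rotated 90 degrees lands near (0, 1)
```

Composition of two transforms is then just 2x3 matrix multiplication, which is what the library packages up with operator overloading.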
--- py-alabaster[¶](#py-alabaster) === Homepage: * <https://alabaster.readthedocs.io/Spack package: * [py-alabaster/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/py-alabaster/package.py) Versions: 0.7.10, 0.7.9 Build Dependencies: [py-setuptools](#py-setuptools), [python](#python) Link Dependencies: [python](#python) Run Dependencies: [python](#python) Description: Alabaster is a visually (c)lean, responsive, configurable theme for the Sphinx documentation system. --- py-apache-libcloud[¶](#py-apache-libcloud) === Homepage: * <http://libcloud.apache.orgSpack package: * [py-apache-libcloud/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/py-apache-libcloud/package.py) Versions: 1.2.1 Build Dependencies: [py-setuptools](#py-setuptools), [python](#python) Link Dependencies: [python](#python) Run Dependencies: [python](#python) Description: Python library for multiple cloud provider APIs --- py-apipkg[¶](#py-apipkg) === Homepage: * <https://pypi.python.org/pypi/apipkgSpack package: * [py-apipkg/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/py-apipkg/package.py) Versions: 1.4 Build Dependencies: [py-setuptools](#py-setuptools), [python](#python) Link Dependencies: [python](#python) Run Dependencies: [python](#python) Description: apipkg: namespace control and lazy-import mechanism --- py-appdirs[¶](#py-appdirs) === Homepage: * <https://github.com/ActiveState/appdirsSpack package: * [py-appdirs/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/py-appdirs/package.py) Versions: 1.4.3, 1.4.0 Build Dependencies: [python](#python) Link Dependencies: [python](#python) Run Dependencies: [python](#python) Description: A small Python module for determining appropriate platform-specific dirs, e.g. a "user data dir". 
--- py-appnope[¶](#py-appnope) === Homepage: * <https://github.com/minrk/appnopeSpack package: * [py-appnope/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/py-appnope/package.py) Versions: 0.1.0 Build Dependencies: [python](#python) Link Dependencies: [python](#python) Run Dependencies: [python](#python) Description: Disable App Nap on OS X 10.9 --- py-apscheduler[¶](#py-apscheduler) === Homepage: * <https://github.com/agronholm/apschedulerSpack package: * [py-apscheduler/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/py-apscheduler/package.py) Versions: 3.3.1, 2.1.0 Build Dependencies: [py-six](#py-six), [py-tzlocal](#py-tzlocal), [py-setuptools](#py-setuptools), [python](#python), [py-pytz](#py-pytz) Link Dependencies: [python](#python) Run Dependencies: [py-six](#py-six), [py-tzlocal](#py-tzlocal), [python](#python), [py-pytz](#py-pytz) Description: In-process task scheduler with Cron-like capabilities. --- py-argcomplete[¶](#py-argcomplete) === Homepage: * <https://pypi.python.org/pypi/argcompleteSpack package: * [py-argcomplete/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/py-argcomplete/package.py) Versions: 1.1.1 Build Dependencies: [py-setuptools](#py-setuptools), [python](#python) Link Dependencies: [python](#python) Run Dependencies: [python](#python) Description: Bash tab completion for argparse. --- py-argparse[¶](#py-argparse) === Homepage: * <https://github.com/ThomasWaldmann/argparse/Spack package: * [py-argparse/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/py-argparse/package.py) Versions: 1.4.0 Build Dependencies: [py-setuptools](#py-setuptools), [python](#python) Link Dependencies: [python](#python) Run Dependencies: [python](#python) Description: Python command-line parsing library. 
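py-argparse backports the standard argparse module to older interpreters; on any recent Python it ships with the standard library, so its use can be sketched directly:

```python
import argparse

# Minimal argparse usage: one positional argument and one boolean flag.
parser = argparse.ArgumentParser(description="Greet someone.")
parser.add_argument("name", help="who to greet")
parser.add_argument("--shout", action="store_true", help="uppercase the greeting")

# Passing an explicit list stands in for sys.argv[1:] in a real script.
args = parser.parse_args(["world", "--shout"])
greeting = f"hello, {args.name}"
print(greeting.upper() if args.shout else greeting)
```

argparse also generates `--help` output and usage errors automatically, which is the main reason to prefer it over hand-rolled sys.argv parsing.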
--- py-ase[¶](#py-ase) === Homepage: * <https://wiki.fysik.dtu.dk/ase/Spack package: * [py-ase/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/py-ase/package.py) Versions: 3.15.0, 3.13.0 Build Dependencies: [py-numpy](#py-numpy), [python](#python) Link Dependencies: [python](#python) Run Dependencies: [py-numpy](#py-numpy), [python](#python) Description: The Atomic Simulation Environment (ASE) is a set of tools and Python modules for setting up, manipulating, running, visualizing and analyzing atomistic simulations. --- py-asn1crypto[¶](#py-asn1crypto) === Homepage: * <https://github.com/wbond/asn1cryptoSpack package: * [py-asn1crypto/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/py-asn1crypto/package.py) Versions: 0.22.0 Build Dependencies: [py-setuptools](#py-setuptools), [python](#python) Link Dependencies: [python](#python) Run Dependencies: [python](#python) Description: Python ASN.1 library with a focus on performance and a pythonic API --- py-astroid[¶](#py-astroid) === Homepage: * <https://www.astroid.org/Spack package: * [py-astroid/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/py-astroid/package.py) Versions: 1.4.5, 1.4.4, 1.4.3, 1.4.2, 1.4.1 Build Dependencies: [py-wrapt](#py-wrapt), [py-enum34](#py-enum34), [py-setuptools](#py-setuptools), [py-singledispatch](#py-singledispatch), [py-lazy-object-proxy](#py-lazy-object-proxy), [py-backports-functools-lru-cache](#py-backports-functools-lru-cache), [python](#python), [py-six](#py-six) Link Dependencies: [py-wrapt](#py-wrapt), [py-enum34](#py-enum34), [py-setuptools](#py-setuptools), [py-singledispatch](#py-singledispatch), [py-lazy-object-proxy](#py-lazy-object-proxy), [py-backports-functools-lru-cache](#py-backports-functools-lru-cache), [python](#python), [py-six](#py-six) Run Dependencies: [python](#python) Description: --- py-astropy[¶](#py-astropy) === Homepage: * 
<http://www.astropy.org/Spack package: * [py-astropy/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/py-astropy/package.py) Versions: 1.1.2, 1.1.post1 Build Dependencies: [py-pyyaml](#py-pyyaml), [cfitsio](#cfitsio), [libxml2](#libxml2), [py-scikit-image](#py-scikit-image), [py-pytz](#py-pytz), [py-scipy](#py-scipy), [expat](#expat), [py-numpy](#py-numpy), [py-h5py](#py-h5py), [py-matplotlib](#py-matplotlib), [py-markupsafe](#py-markupsafe), [py-setuptools](#py-setuptools), [py-beautifulsoup4](#py-beautifulsoup4), [py-pandas](#py-pandas), [python](#python) Link Dependencies: [expat](#expat), [cfitsio](#cfitsio), [libxml2](#libxml2), [python](#python) Run Dependencies: [py-pyyaml](#py-pyyaml), [py-markupsafe](#py-markupsafe), [py-beautifulsoup4](#py-beautifulsoup4), [py-scipy](#py-scipy), [py-scikit-image](#py-scikit-image), [py-pytz](#py-pytz), [py-numpy](#py-numpy), [py-h5py](#py-h5py), [py-matplotlib](#py-matplotlib), [py-pandas](#py-pandas), [python](#python) Description: The Astropy Project is a community effort to develop a single core package for Astronomy in Python and foster interoperability between Python astronomy packages. --- py-atomicwrites[¶](#py-atomicwrites) === Homepage: * <https://github.com/untitaker/python-atomicwritesSpack package: * [py-atomicwrites/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/py-atomicwrites/package.py) Versions: 1.1.5 Build Dependencies: [py-setuptools](#py-setuptools), [python](#python) Link Dependencies: [python](#python) Run Dependencies: [python](#python) Description: Atomic file writes. 
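The technique behind py-atomicwrites (write to a temporary file, then rename into place so readers never see a partial file) can be sketched with the stdlib alone; this is not the package's API, just the underlying pattern:

```python
import os
import tempfile

def atomic_write(path, data):
    """Write data to path so readers never observe a partially written file.

    The write goes to a temporary file in the same directory, then os.replace
    swaps it into place; on POSIX that rename is atomic."""
    dirname = os.path.dirname(os.path.abspath(path))
    fd, tmp = tempfile.mkstemp(dir=dirname)
    try:
        with os.fdopen(fd, "w") as f:
            f.write(data)
        os.replace(tmp, path)
    except BaseException:
        os.unlink(tmp)  # clean up the temporary file on any failure
        raise

target = os.path.join(tempfile.gettempdir(), "atomic-demo.txt")
atomic_write(target, "complete contents\n")
with open(target) as f:
    print(f.read())
```

The temporary file must live on the same filesystem as the destination; that is why it is created in the target's own directory rather than in a default temp location.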
--- py-attrs[¶](#py-attrs) === Homepage: * <http://attrs.org/Spack package: * [py-attrs/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/py-attrs/package.py) Versions: 18.1.0, 16.3.0 Build Dependencies: [py-setuptools](#py-setuptools), [python](#python) Link Dependencies: [python](#python) Run Dependencies: [python](#python) Test Dependencies: [py-coverage](#py-coverage), [py-zope-interface](#py-zope-interface), [py-pympler](#py-pympler), [py-pytest](#py-pytest), [py-hypothesis](#py-hypothesis), [py-six](#py-six) Description: Classes Without Boilerplate --- py-autopep8[¶](#py-autopep8) === Homepage: * <https://github.com/hhatto/autopep8Spack package: * [py-autopep8/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/py-autopep8/package.py) Versions: 1.3.3, 1.2.4, 1.2.2 Build Dependencies: [py-pycodestyle](#py-pycodestyle), [py-setuptools](#py-setuptools), [python](#python) Link Dependencies: [python](#python) Run Dependencies: [py-pycodestyle](#py-pycodestyle), [python](#python) Description: autopep8 automatically formats Python code to conform to the PEP 8 style guide. --- py-avro[¶](#py-avro) === Homepage: * <http://avro.apache.org/docs/current/Spack package: * [py-avro/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/py-avro/package.py) Versions: 1.8.2 Build Dependencies: [py-setuptools](#py-setuptools), [python](#python) Link Dependencies: [python](#python) Run Dependencies: [python](#python) Description: Avro is a serialization and RPC framework. 
--- py-avro-json-serializer[¶](#py-avro-json-serializer) === Homepage: * <https://github.com/linkedin/python-avro-json-serializerSpack package: * [py-avro-json-serializer/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/py-avro-json-serializer/package.py) Versions: 0.4 Build Dependencies: [py-avro](#py-avro), [py-simplejson](#py-simplejson), [py-setuptools](#py-setuptools), [python](#python) Link Dependencies: [python](#python) Run Dependencies: [py-avro](#py-avro), [py-simplejson](#py-simplejson), [python](#python) Description: Serializes data into a JSON format using AVRO schema. --- py-babel[¶](#py-babel) === Homepage: * <http://babel.pocoo.org/en/latest/Spack package: * [py-babel/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/py-babel/package.py) Versions: 2.4.0, 2.3.4 Build Dependencies: [py-setuptools](#py-setuptools), [python](#python), [py-pytz](#py-pytz) Link Dependencies: [python](#python) Run Dependencies: [python](#python), [py-pytz](#py-pytz) Description: Babel is an integrated collection of utilities that assist in internationalizing and localizing Python applications, with an emphasis on web-based applications. 
--- py-backcall[¶](#py-backcall) === Homepage: * <https://github.com/takluyver/backcallSpack package: * [py-backcall/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/py-backcall/package.py) Versions: 0.1.0 Build Dependencies: [python](#python) Link Dependencies: [python](#python) Run Dependencies: [python](#python) Description: Specifications for callback functions passed in to an API --- py-backports-abc[¶](#py-backports-abc) === Homepage: * <https://github.com/cython/backports_abcSpack package: * [py-backports-abc/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/py-backports-abc/package.py) Versions: 0.4 Build Dependencies: [py-setuptools](#py-setuptools), [python](#python) Link Dependencies: [python](#python) Run Dependencies: [python](#python) Description: Backports_ABC: A backport of recent additions to the 'collections.abc' module. --- py-backports-functools-lru-cache[¶](#py-backports-functools-lru-cache) === Homepage: * <https://github.com/jaraco/backports.functools_lru_cacheSpack package: * [py-backports-functools-lru-cache/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/py-backports-functools-lru-cache/package.py) Versions: 1.5, 1.4, 1.0.1 Build Dependencies: [py-setuptools-scm](#py-setuptools-scm), [py-setuptools](#py-setuptools), [python](#python) Link Dependencies: [python](#python) Run Dependencies: [python](#python) Description: Backport of functools.lru_cache from Python 3.3 --- py-backports-shutil-get-terminal-size[¶](#py-backports-shutil-get-terminal-size) === Homepage: * <https://pypi.python.org/pypi/backports.shutil_get_terminal_sizeSpack package: * [py-backports-shutil-get-terminal-size/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/py-backports-shutil-get-terminal-size/package.py) Versions: 1.0.0 Build Dependencies: [py-setuptools](#py-setuptools), [python](#python) Link Dependencies: 
[python](#python) Run Dependencies: [python](#python) Description: A backport of the get_terminal_size function from Python 3.3's shutil. --- py-backports-ssl-match-hostname[¶](#py-backports-ssl-match-hostname) === Homepage: * <https://pypi.python.org/pypi/backports.ssl_match_hostnameSpack package: * [py-backports-ssl-match-hostname/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/py-backports-ssl-match-hostname/package.py) Versions: 3.5.0.1 Build Dependencies: [python](#python) Link Dependencies: [python](#python) Run Dependencies: [python](#python) Description: The ssl.match_hostname() function from Python 3.5 --- py-basemap[¶](#py-basemap) === Homepage: * <http://matplotlib.org/basemap/Spack package: * [py-basemap/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/py-basemap/package.py) Versions: 1.0.7 Build Dependencies: pil, [geos](#geos), [py-matplotlib](#py-matplotlib), [py-numpy](#py-numpy), [python](#python) Link Dependencies: [geos](#geos), [python](#python) Run Dependencies: pil, [py-numpy](#py-numpy), [py-matplotlib](#py-matplotlib), [py-setuptools](#py-setuptools), [python](#python) Description: The matplotlib basemap toolkit is a library for plotting 2D data on maps in Python. --- py-bcbio-gff[¶](#py-bcbio-gff) === Homepage: * <https://pypi.python.org/pypi/bcbio-gff/0.6.2Spack package: * [py-bcbio-gff/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/py-bcbio-gff/package.py) Versions: 0.6.2 Build Dependencies: [py-biopython](#py-biopython), [py-setuptools](#py-setuptools), [python](#python), [py-six](#py-six) Link Dependencies: [python](#python) Run Dependencies: [py-biopython](#py-biopython), [python](#python), [py-six](#py-six) Description: Read and write Generic Feature Format (GFF) with Biopython integration. 
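py-backports-functools-lru-cache exists so that older interpreters get the functools.lru_cache decorator; on Python 3 the decorator is in the stdlib, so what the backport delivers can be shown directly:

```python
from functools import lru_cache

calls = 0

@lru_cache(maxsize=None)
def fib(n):
    """Naive recursive Fibonacci made linear-time by memoisation."""
    global calls
    calls += 1  # count actual executions, not cache hits
    if n < 2:
        return n
    return fib(n - 1) + fib(n - 2)

print(fib(30))   # 832040
print(calls)     # one execution per distinct argument 0..30, i.e. 31
print(fib.cache_info())
```

Without the cache the same call tree would execute fib well over a million times; the decorator turns repeated identical calls into dictionary lookups.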
--- py-beautifulsoup4[¶](#py-beautifulsoup4) === Homepage: * <https://www.crummy.com/software/BeautifulSoupSpack package: * [py-beautifulsoup4/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/py-beautifulsoup4/package.py) Versions: 4.5.3, 4.5.1, 4.4.1 Build Dependencies: [py-setuptools](#py-setuptools), [python](#python) Link Dependencies: [python](#python) Run Dependencies: [python](#python) Description: Beautiful Soup is a Python library for pulling data out of HTML and XML files. It works with your favorite parser to provide idiomatic ways of navigating, searching, and modifying the parse tree. --- py-binwalk[¶](#py-binwalk) === Homepage: * <https://github.com/devttys0/binwalkSpack package: * [py-binwalk/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/py-binwalk/package.py) Versions: 2.1.0 Build Dependencies: [py-setuptools](#py-setuptools), [python](#python) Link Dependencies: [python](#python) Run Dependencies: [python](#python) Description: Binwalk is a fast, easy to use tool for analyzing, reverse engineering, and extracting firmware images. 
--- py-biom-format[¶](#py-biom-format) === Homepage: * <https://pypi.python.org/pypi/biom-format/2.1.6Spack package: * [py-biom-format/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/py-biom-format/package.py) Versions: 2.1.6 Build Dependencies: [py-click](#py-click), [python](#python), [py-cython](#py-cython), [py-setuptools](#py-setuptools), [py-future](#py-future), [py-pandas](#py-pandas), [py-pyqi](#py-pyqi), [py-h5py](#py-h5py), [py-numpy](#py-numpy), [py-scipy](#py-scipy), [py-six](#py-six) Link Dependencies: [python](#python) Run Dependencies: [py-click](#py-click), [python](#python), [py-pandas](#py-pandas), [py-setuptools](#py-setuptools), [py-future](#py-future), [py-pyqi](#py-pyqi), [py-h5py](#py-h5py), [py-numpy](#py-numpy), [py-scipy](#py-scipy), [py-six](#py-six) Description: The BIOM file format (canonically pronounced biome) is designed to be a general-use format for representing biological sample by observation contingency tables. --- py-biopython[¶](#py-biopython) === Homepage: * <http://biopython.org/wiki/Main_PageSpack package: * [py-biopython/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/py-biopython/package.py) Versions: 1.70, 1.65 Build Dependencies: [py-numpy](#py-numpy), [python](#python) Link Dependencies: [python](#python) Run Dependencies: [py-numpy](#py-numpy), [python](#python) Description: A distributed collaborative effort to develop Python libraries and applications which address the needs of current and future work in bioinformatics. 
--- py-bitarray[¶](#py-bitarray) === Homepage: * <https://pypi.python.org/pypi/bitarraySpack package: * [py-bitarray/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/py-bitarray/package.py) Versions: 0.8.1 Build Dependencies: [py-setuptools](#py-setuptools), [python](#python) Link Dependencies: [python](#python) Run Dependencies: [python](#python) Description: Efficient array of booleans - C extension --- py-bitstring[¶](#py-bitstring) === Homepage: * <http://pythonhosted.org/bitstringSpack package: * [py-bitstring/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/py-bitstring/package.py) Versions: 3.1.5 Build Dependencies: [python](#python) Link Dependencies: [python](#python) Run Dependencies: [python](#python) Description: Simple construction, analysis and modification of binary data. --- py-bleach[¶](#py-bleach) === Homepage: * <http://github.com/mozilla/bleachSpack package: * [py-bleach/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/py-bleach/package.py) Versions: 1.5.0 Build Dependencies: [py-html5lib](#py-html5lib), [py-setuptools](#py-setuptools), [python](#python), [py-six](#py-six) Link Dependencies: [python](#python) Run Dependencies: [py-html5lib](#py-html5lib), [python](#python), [py-six](#py-six) Description: An easy whitelist-based HTML-sanitizing tool. 
--- py-blessings[¶](#py-blessings) === Homepage: * <https://github.com/erikrose/blessingsSpack package: * [py-blessings/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/py-blessings/package.py) Versions: 1.6 Build Dependencies: [py-setuptools](#py-setuptools), [python](#python) Link Dependencies: [python](#python) Run Dependencies: [python](#python) Description: A nicer, kinder way to write to the terminal --- py-bokeh[¶](#py-bokeh) === Homepage: * <http://github.com/bokeh/bokehSpack package: * [py-bokeh/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/py-bokeh/package.py) Versions: 0.12.2 Build Dependencies: [py-pyyaml](#py-pyyaml), [py-futures](#py-futures), [python](#python), [py-requests](#py-requests), [py-numpy](#py-numpy), [py-dateutil](#py-dateutil), [py-tornado](#py-tornado), [py-jinja2](#py-jinja2), [py-six](#py-six) Link Dependencies: [python](#python) Run Dependencies: [py-pyyaml](#py-pyyaml), [py-futures](#py-futures), [python](#python), [py-requests](#py-requests), [py-numpy](#py-numpy), [py-dateutil](#py-dateutil), [py-tornado](#py-tornado), [py-jinja2](#py-jinja2), [py-six](#py-six) Description: Statistical and novel interactive HTML plots for Python --- py-boltons[¶](#py-boltons) === Homepage: * <https://boltons.readthedocs.io/Spack package: * [py-boltons/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/py-boltons/package.py) Versions: 16.5.1 Build Dependencies: [py-setuptools](#py-setuptools), [python](#python) Link Dependencies: [python](#python) Run Dependencies: [python](#python) Description: When they're not builtins, they're boltons. Functionality that should be in the standard library. Like builtins, but Boltons. Otherwise known as, "everyone's util.py," but cleaned up and tested. 
---
py-bottleneck[¶](#py-bottleneck)
===
Homepage: <https://pypi.python.org/pypi/Bottleneck/1.0.0>
Spack package: [py-bottleneck/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/py-bottleneck/package.py)
Versions: 1.2.1, 1.0.0
Build Dependencies: [py-numpy](#py-numpy), [py-setuptools](#py-setuptools), [python](#python)
Link Dependencies: [python](#python)
Run Dependencies: [py-numpy](#py-numpy), [python](#python)
Description: A collection of fast NumPy array functions written in Cython.
---
py-breakseq2[¶](#py-breakseq2)
===
Homepage: <http://bioinform.github.io/breakseq2/>
Spack package: [py-breakseq2/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/py-breakseq2/package.py)
Versions: 2.2
Build Dependencies: [python](#python), [py-biopython](#py-biopython), [py-cython](#py-cython), [py-setuptools](#py-setuptools), [py-pysam](#py-pysam)
Link Dependencies: [python](#python)
Run Dependencies: [samtools](#samtools), [bwa](#bwa), [py-biopython](#py-biopython), [python](#python), [py-pysam](#py-pysam)
Description: nucleotide-resolution analysis of structural variants
---
py-brian[¶](#py-brian)
===
Homepage: <http://www.briansimulator.org>
Spack package: [py-brian/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/py-brian/package.py)
Versions: 1.4.3
Build Dependencies: [py-numpy](#py-numpy), [py-matplotlib](#py-matplotlib), [python](#python), [py-scipy](#py-scipy)
Link Dependencies: [python](#python)
Run Dependencies: [py-numpy](#py-numpy), [py-matplotlib](#py-matplotlib), [python](#python), [py-scipy](#py-scipy)
Description: A clock-driven simulator for spiking neural networks
---
py-brian2[¶](#py-brian2)
===
Homepage: <http://www.briansimulator.org>
Spack package: [py-brian2/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/py-brian2/package.py)
Versions: 2.0.1, 2.0rc3
Build Dependencies: [py-sympy](#py-sympy), [py-pyparsing](#py-pyparsing), [py-setuptools](#py-setuptools), [py-cpuinfo](#py-cpuinfo), [py-numpy](#py-numpy), [python](#python), [py-jinja2](#py-jinja2), [py-sphinx](#py-sphinx)
Link Dependencies: [python](#python)
Run Dependencies: [py-pyparsing](#py-pyparsing), [python](#python), [py-cpuinfo](#py-cpuinfo), [py-numpy](#py-numpy), [py-sympy](#py-sympy), [py-jinja2](#py-jinja2), [py-sphinx](#py-sphinx)
Test Dependencies: py-nosetests
Description: A clock-driven simulator for spiking neural networks
---
py-bsddb3[¶](#py-bsddb3)
===
Homepage: <https://pypi.python.org/pypi/bsddb3/6.2.5>
Spack package: [py-bsddb3/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/py-bsddb3/package.py)
Versions: 6.2.5
Build Dependencies: [berkeley-db](#berkeley-db), [py-setuptools](#py-setuptools), [python](#python)
Link Dependencies: [berkeley-db](#berkeley-db), [python](#python)
Run Dependencies: [python](#python)
Description: This module provides a nearly complete wrapping of the Oracle/Sleepycat C API for the Database Environment, Database, Cursor, Log Cursor, Sequence and Transaction objects, and each of these is exposed as a Python type in the bsddb3.db module.
---
py-bx-python[¶](#py-bx-python)
===
Homepage: <https://github.com/bxlab/bx-python>
Spack package: [py-bx-python/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/py-bx-python/package.py)
Versions: 0.7.4
Build Dependencies: [py-numpy](#py-numpy), [py-setuptools](#py-setuptools), [python](#python), [py-six](#py-six)
Link Dependencies: [python](#python)
Run Dependencies: [py-numpy](#py-numpy), [python](#python), [py-six](#py-six)
Description: The bx-python project is a python library and associated set of scripts to allow for rapid implementation of genome scale analyses.
---
py-cartopy[¶](#py-cartopy)
===
Homepage: <http://scitools.org.uk/cartopy/>
Spack package: [py-cartopy/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/py-cartopy/package.py)
Versions: 0.16.0
Build Dependencies: [geos](#geos), [py-shapely](#py-shapely), [py-pyepsg](#py-pyepsg), [py-numpy](#py-numpy), [py-matplotlib](#py-matplotlib), [py-six](#py-six), [gdal](#gdal), [py-owslib](#py-owslib), [py-cython](#py-cython), [py-setuptools](#py-setuptools), [py-pillow](#py-pillow), [proj](#proj), [py-pyshp](#py-pyshp), [python](#python), [py-scipy](#py-scipy)
Link Dependencies: [proj](#proj), [geos](#geos), [python](#python)
Run Dependencies: [gdal](#gdal), [py-owslib](#py-owslib), [py-shapely](#py-shapely), [py-pillow](#py-pillow), [py-pyepsg](#py-pyepsg), [py-numpy](#py-numpy), [py-pyshp](#py-pyshp), [py-matplotlib](#py-matplotlib), [python](#python), [py-scipy](#py-scipy), [py-six](#py-six)
Test Dependencies: [py-mock](#py-mock), [py-pytest](#py-pytest)
Description: Cartopy - a cartographic python library with matplotlib support.
---
py-cclib[¶](#py-cclib)
===
Homepage: <https://cclib.github.io/>
Spack package: [py-cclib/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/py-cclib/package.py)
Versions: 1.5.post1
Build Dependencies: [py-numpy](#py-numpy), [python](#python)
Link Dependencies: [python](#python)
Run Dependencies: [py-numpy](#py-numpy), [python](#python)
Description: Open source library for parsing and interpreting the results of computational chemistry packages
---
py-cdat-lite[¶](#py-cdat-lite)
===
Homepage: <http://proj.badc.rl.ac.uk/cedaservices/wiki/CdatLite>
Spack package: [py-cdat-lite/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/py-cdat-lite/package.py)
Versions: 6.0.1
Build Dependencies: [py-numpy](#py-numpy), [netcdf](#netcdf), [python](#python), [py-setuptools](#py-setuptools)
Link Dependencies: [netcdf](#netcdf), [python](#python)
Run Dependencies: [py-numpy](#py-numpy), [python](#python)
Description: Cdat-lite is a Python package for managing and analysing climate science data. It is a subset of the Climate Data Analysis Tools (CDAT) developed by PCMDI at Lawrence Livermore National Laboratory.
---
py-cdo[¶](#py-cdo)
===
Homepage: <https://pypi.python.org/pypi/cdo>
Spack package: [py-cdo/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/py-cdo/package.py)
Versions: 1.3.2
Build Dependencies: [py-netcdf4](#py-netcdf4), [python](#python), [py-setuptools](#py-setuptools), [py-scipy](#py-scipy), [cdo](#cdo)
Link Dependencies: [python](#python), [cdo](#cdo)
Run Dependencies: [py-netcdf4](#py-netcdf4), [python](#python), [py-scipy](#py-scipy)
Description: The cdo package provides an interface to the Climate Data Operators from Python.
---
py-certifi[¶](#py-certifi)
===
Homepage: <http://certifi.io/>
Spack package: [py-certifi/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/py-certifi/package.py)
Versions: 2017.1.23, 2016.02.28
Build Dependencies: [py-setuptools](#py-setuptools), [python](#python)
Link Dependencies: [python](#python)
Run Dependencies: [python](#python)
Description: Certifi: A carefully curated collection of Root Certificates for validating the trustworthiness of SSL certificates while verifying the identity of TLS hosts.
---
py-cffi[¶](#py-cffi)
===
Homepage: <http://cffi.readthedocs.org/en/latest/>
Spack package: [py-cffi/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/py-cffi/package.py)
Versions: 1.11.5, 1.10.0, 1.1.2
Build Dependencies: [libffi](#libffi), [python](#python), [py-setuptools](#py-setuptools), [py-pycparser](#py-pycparser)
Link Dependencies: [libffi](#libffi), [python](#python)
Run Dependencies: [python](#python), [py-pycparser](#py-pycparser)
Description: Foreign Function Interface for Python calling C code
---
py-chardet[¶](#py-chardet)
===
Homepage: <https://github.com/chardet/chardet>
Spack package: [py-chardet/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/py-chardet/package.py)
Versions: 2.3.0
Build Dependencies: [py-setuptools](#py-setuptools), [python](#python)
Link Dependencies: [python](#python)
Run Dependencies: [python](#python)
Description: Universal encoding detector for Python 2 and 3
---
py-checkm-genome[¶](#py-checkm-genome)
===
Homepage: <https://ecogenomics.github.io/CheckM>
Spack package: [py-checkm-genome/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/py-checkm-genome/package.py)
Versions: 1.0.11
Build Dependencies: [py-scipy](#py-scipy), [py-dendropy](#py-dendropy), [hmmer](#hmmer), [py-matplotlib](#py-matplotlib), [py-numpy](#py-numpy), [py-pysam](#py-pysam), [prodigal](#prodigal), [python](#python)
Link Dependencies: [hmmer](#hmmer), [prodigal](#prodigal), [python](#python)
Run Dependencies: [py-scipy](#py-scipy), [py-dendropy](#py-dendropy), [py-pysam](#py-pysam), [py-numpy](#py-numpy), [py-matplotlib](#py-matplotlib), [python](#python)
Description: Assess the quality of microbial genomes recovered from isolates, single cells, and metagenomes
---
py-cheetah[¶](#py-cheetah)
===
Homepage: <https://pypi.python.org/pypi/Cheetah/2.4.4>
Spack package: [py-cheetah/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/py-cheetah/package.py)
Versions: 2.3.0
Build Dependencies: [python](#python)
Link Dependencies: [python](#python)
Run Dependencies: [python](#python)
Description: Cheetah is a template engine and code generation tool.
---
py-click[¶](#py-click)
===
Homepage: <http://github.com/mitsuhiko/click>
Spack package: [py-click/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/py-click/package.py)
Versions: 6.6
Build Dependencies: [py-setuptools](#py-setuptools), [python](#python)
Link Dependencies: [python](#python)
Run Dependencies: [python](#python)
Description: A simple wrapper around optparse for powerful command line utilities.
---
py-cligj[¶](#py-cligj)
===
Homepage: <https://github.com/mapbox/cligj>
Spack package: [py-cligj/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/py-cligj/package.py)
Versions: 0.4.0
Build Dependencies: [py-click](#py-click), [py-setuptools](#py-setuptools), [python](#python)
Link Dependencies: [python](#python)
Run Dependencies: [py-click](#py-click), [python](#python)
Description: Click-based argument and option decorators for Python GIS command line programs
---
py-cloudpickle[¶](#py-cloudpickle)
===
Homepage: <https://github.com/cloudpipe/cloudpickle>
Spack package: [py-cloudpickle/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/py-cloudpickle/package.py)
Versions: 0.5.2
Build Dependencies: [py-setuptools](#py-setuptools), [python](#python)
Link Dependencies: [python](#python)
Run Dependencies: [python](#python)
Description: Extended pickling support for Python objects.
---
py-cogent[¶](#py-cogent)
===
Homepage: <http://pycogent.org>
Spack package: [py-cogent/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/py-cogent/package.py)
Versions: 1.9, 1.5.3
Build Dependencies: [zlib](#zlib), [py-pymysql](#py-pymysql), [py-cython](#py-cython), [py-setuptools](#py-setuptools), [py-sqlalchemy](#py-sqlalchemy), [py-numpy](#py-numpy), [py-matplotlib](#py-matplotlib), [py-mpi4py](#py-mpi4py), [python](#python)
Link Dependencies: [zlib](#zlib), [python](#python)
Run Dependencies: [py-pymysql](#py-pymysql), [py-sqlalchemy](#py-sqlalchemy), [py-numpy](#py-numpy), [py-matplotlib](#py-matplotlib), [py-mpi4py](#py-mpi4py), [python](#python)
Description: A toolkit for statistical analysis of biological sequences.
---
py-colorama[¶](#py-colorama)
===
Homepage: <https://github.com/tartley/colorama>
Spack package: [py-colorama/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/py-colorama/package.py)
Versions: 0.3.7
Build Dependencies: [py-setuptools](#py-setuptools), [python](#python)
Link Dependencies: [python](#python)
Run Dependencies: [python](#python)
Description: Cross-platform colored terminal text.
---
py-colormath[¶](#py-colormath)
===
Homepage: <https://pypi.python.org/pypi/colormath/2.1.1>
Spack package: [py-colormath/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/py-colormath/package.py)
Versions: 3.0.0, 2.1.1
Build Dependencies: [py-numpy](#py-numpy), [py-networkx](#py-networkx), [py-setuptools](#py-setuptools), [python](#python)
Link Dependencies: [python](#python)
Run Dependencies: [py-numpy](#py-numpy), [py-networkx](#py-networkx), [python](#python)
Description: Color math and conversion library.
---
py-configparser[¶](#py-configparser)
===
Homepage: <https://docs.python.org/3/library/configparser.html>
Spack package: [py-configparser/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/py-configparser/package.py)
Versions: 3.5.0
Build Dependencies: [py-setuptools](#py-setuptools), [python](#python)
Link Dependencies: [python](#python)
Run Dependencies: [python](#python)
Description: This library brings the updated configparser from Python 3.5 to Python 2.6-3.5.
---
py-counter[¶](#py-counter)
===
Homepage: <https://github.com/KelSolaar/Counter>
Spack package: [py-counter/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/py-counter/package.py)
Versions: 1.0.0
Build Dependencies: [py-setuptools](#py-setuptools), [python](#python)
Link Dependencies: [python](#python)
Run Dependencies: [python](#python)
Description: Counter package defines the "counter.Counter" class similar to bags or multisets in other languages.
---
py-coverage[¶](#py-coverage)
===
Homepage: <http://nedbatchelder.com/code/coverage/>
Spack package: [py-coverage/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/py-coverage/package.py)
Versions: 4.3.4, 4.0a6
Build Dependencies: [py-setuptools](#py-setuptools), [python](#python)
Link Dependencies: [python](#python)
Run Dependencies: [python](#python)
Description: Testing coverage checker for python
---
py-cpuinfo[¶](#py-cpuinfo)
===
Homepage: <https://github.com/workhorsy/py-cpuinfo>
Spack package: [py-cpuinfo/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/py-cpuinfo/package.py)
Versions: 0.2.3
Build Dependencies: [py-setuptools](#py-setuptools), [python](#python)
Link Dependencies: [python](#python)
Run Dependencies: [python](#python)
Description: Get CPU info with pure Python 2 & 3
---
py-crispresso[¶](#py-crispresso)
===
Homepage: <https://github.com/lucapinello/CRISPResso>
Spack package: [py-crispresso/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/py-crispresso/package.py)
Versions: 1.0.8
Build Dependencies: [py-matplotlib](#py-matplotlib), [emboss](#emboss), [py-setuptools](#py-setuptools), [flash](#flash), java, [py-seaborn](#py-seaborn), [py-numpy](#py-numpy), [py-biopython](#py-biopython), [py-pandas](#py-pandas), [python](#python)
Link Dependencies: [python](#python)
Run Dependencies: [py-matplotlib](#py-matplotlib), [emboss](#emboss), [flash](#flash), java, [py-seaborn](#py-seaborn), [py-numpy](#py-numpy), [py-biopython](#py-biopython), [py-pandas](#py-pandas), [python](#python)
Description: Software pipeline for the analysis of CRISPR-Cas9 genome editing outcomes from deep sequencing data.
---
py-cryptography[¶](#py-cryptography)
===
Homepage: <https://pypi.python.org/pypi/cryptography>
Spack package: [py-cryptography/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/py-cryptography/package.py)
Versions: 1.8.1
Build Dependencies: [py-enum34](#py-enum34), [py-ipaddress](#py-ipaddress), [py-setuptools](#py-setuptools), [openssl](#openssl), [py-six](#py-six), [py-cffi](#py-cffi), [py-idna](#py-idna), [python](#python), [py-asn1crypto](#py-asn1crypto)
Link Dependencies: [python](#python), [openssl](#openssl)
Run Dependencies: [py-enum34](#py-enum34), [py-ipaddress](#py-ipaddress), [py-six](#py-six), [py-cffi](#py-cffi), [py-idna](#py-idna), [python](#python), [py-asn1crypto](#py-asn1crypto)
Description: cryptography is a package which provides cryptographic recipes and primitives to Python developers
---
py-csvkit[¶](#py-csvkit)
===
Homepage: <http://csvkit.rtfd.org/>
Spack package: [py-csvkit/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/py-csvkit/package.py)
Versions: 0.9.1
Build Dependencies: [py-openpyxl](#py-openpyxl), [py-setuptools](#py-setuptools), [py-sqlalchemy](#py-sqlalchemy), [py-xlrd](#py-xlrd), [py-dateutil](#py-dateutil), [py-dbf](#py-dbf), [python](#python), [py-six](#py-six)
Link Dependencies: [python](#python)
Run Dependencies: [py-openpyxl](#py-openpyxl), [py-sqlalchemy](#py-sqlalchemy), [py-xlrd](#py-xlrd), [py-dateutil](#py-dateutil), [py-dbf](#py-dbf), [python](#python), [py-six](#py-six)
Description: A library of utilities for working with CSV, the king of tabular file formats
---
py-current[¶](#py-current)
===
Homepage: <http://github.com/xflr6/current>
Spack package: [py-current/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/py-current/package.py)
Versions: 0.3.1
Build Dependencies: [py-setuptools](#py-setuptools), [python](#python)
Link Dependencies: [python](#python)
Run Dependencies: [python](#python)
Description: Current module relative paths and imports
---
py-cutadapt[¶](#py-cutadapt)
===
Homepage: <https://cutadapt.readthedocs.io>
Spack package: [py-cutadapt/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/py-cutadapt/package.py)
Versions: 1.13
Build Dependencies: [python](#python), [py-setuptools](#py-setuptools), [py-xopen](#py-xopen)
Link Dependencies: [python](#python)
Run Dependencies: [python](#python), [py-setuptools](#py-setuptools), [py-xopen](#py-xopen)
Description: Cutadapt finds and removes adapter sequences, primers, poly-A tails and other types of unwanted sequence from your high-throughput sequencing reads.
---
py-cvxopt[¶](#py-cvxopt)
===
Homepage: <http://cvxopt.org/>
Spack package: [py-cvxopt/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/py-cvxopt/package.py)
Versions: 1.1.9
Build Dependencies: [glpk](#glpk), lapack, [gsl](#gsl), [dsdp](#dsdp), [suite-sparse](#suite-sparse), blas, [python](#python), [fftw](#fftw), [py-setuptools](#py-setuptools)
Link Dependencies: [glpk](#glpk), [gsl](#gsl), [dsdp](#dsdp), [suite-sparse](#suite-sparse), blas, [python](#python), [fftw](#fftw), lapack
Run Dependencies: [python](#python)
Description: CVXOPT is a free software package for convex optimization based on the Python programming language.
---
py-cycler[¶](#py-cycler)
===
Homepage: <http://matplotlib.org/cycler/>
Spack package: [py-cycler/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/py-cycler/package.py)
Versions: 0.10.0
Build Dependencies: [py-setuptools](#py-setuptools), [python](#python), [py-six](#py-six)
Link Dependencies: [python](#python)
Run Dependencies: [python](#python), [py-six](#py-six)
Description: Composable style cycles.
---
py-cython[¶](#py-cython)
===
Homepage: <https://pypi.python.org/pypi/cython>
Spack package: [py-cython/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/py-cython/package.py)
Versions: 0.28.3, 0.28.1, 0.25.2, 0.23.5, 0.23.4, 0.22, 0.21.2
Build Dependencies: [python](#python)
Link Dependencies: [python](#python)
Run Dependencies: [python](#python)
Description: The Cython compiler for writing C extensions for the Python language.
---
py-dask[¶](#py-dask)
===
Homepage: <https://github.com/dask/dask/>
Spack package: [py-dask/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/py-dask/package.py)
Versions: 0.17.4, 0.8.1
Build Dependencies: [py-partd](#py-partd), [py-cloudpickle](#py-cloudpickle), [py-setuptools](#py-setuptools), [py-numpy](#py-numpy), [py-pandas](#py-pandas), [python](#python), [py-toolz](#py-toolz)
Link Dependencies: [python](#python)
Run Dependencies: [py-partd](#py-partd), [py-cloudpickle](#py-cloudpickle), [py-numpy](#py-numpy), [py-pandas](#py-pandas), [python](#python), [py-toolz](#py-toolz)
Test Dependencies: [py-pytest](#py-pytest), [py-requests](#py-requests)
Description: Dask is a flexible parallel computing library for analytics.
---
py-dateutil[¶](#py-dateutil)
===
Homepage: <https://pypi.python.org/pypi/dateutil>
Spack package: [py-dateutil/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/py-dateutil/package.py)
Versions: 2.5.2, 2.4.2, 2.4.0, 2.2
Build Dependencies: [py-setuptools](#py-setuptools), [python](#python), [py-six](#py-six)
Link Dependencies: [python](#python)
Run Dependencies: [python](#python), [py-six](#py-six)
Description: Extensions to the standard Python datetime module.
---
py-dbf[¶](#py-dbf)
===
Homepage: <https://pypi.python.org/pypi/dbf>
Spack package: [py-dbf/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/py-dbf/package.py)
Versions: 0.96.005, 0.94.003
Build Dependencies: [python](#python)
Link Dependencies: [python](#python)
Run Dependencies: [python](#python)
Description: Pure python package for reading/writing dBase, FoxPro, and Visual FoxPro .dbf files (including memos)
---
py-decorator[¶](#py-decorator)
===
Homepage: <https://github.com/micheles/decorator>
Spack package: [py-decorator/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/py-decorator/package.py)
Versions: 4.3.0, 4.0.9
Build Dependencies: [py-setuptools](#py-setuptools), [python](#python)
Link Dependencies: [python](#python)
Run Dependencies: [python](#python)
Description: The aim of the decorator module is to simplify the usage of decorators for the average programmer, and to popularize decorators by showing various non-trivial examples.
---
py-deeptools[¶](#py-deeptools)
===
Homepage: <https://pypi.io/packages/source/d/deepTools>
Spack package: [py-deeptools/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/py-deeptools/package.py)
Versions: 2.5.2
Build Dependencies: [py-py2bit](#py-py2bit), [py-setuptools](#py-setuptools), [py-numpydoc](#py-numpydoc), [py-pybigwig](#py-pybigwig), [py-pysam](#py-pysam), [py-numpy](#py-numpy), [py-matplotlib](#py-matplotlib), [python](#python), [py-scipy](#py-scipy)
Link Dependencies: [python](#python)
Run Dependencies: [py-py2bit](#py-py2bit), [py-scipy](#py-scipy), [py-numpydoc](#py-numpydoc), [py-pybigwig](#py-pybigwig), [py-pysam](#py-pysam), [py-numpy](#py-numpy), [py-matplotlib](#py-matplotlib), [python](#python)
Description: deepTools addresses the challenge of handling the large amounts of data that are now routinely generated from DNA sequencing centers.
---
py-dendropy[¶](#py-dendropy)
===
Homepage: <https://www.dendropy.org>
Spack package: [py-dendropy/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/py-dendropy/package.py)
Versions: 4.3.0, 3.12.0
Build Dependencies: [py-setuptools](#py-setuptools), [python](#python)
Link Dependencies: [python](#python)
Run Dependencies: [python](#python)
Description: DendroPy is a Python library for phylogenetic computing. It provides classes and functions for the simulation, processing, and manipulation of phylogenetic trees and character matrices, and supports the reading and writing of phylogenetic data in a range of formats, such as NEXUS, NEWICK, NeXML, Phylip, FASTA, etc.
---
py-dill[¶](#py-dill)
===
Homepage: <https://github.com/uqfoundation/dill>
Spack package: [py-dill/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/py-dill/package.py)
Versions: 0.2.6, 0.2.5, 0.2.4, 0.2.3, 0.2.2, 0.2.1, 0.2
Build Dependencies: [py-setuptools](#py-setuptools), [python](#python)
Link Dependencies: [python](#python)
Run Dependencies: [python](#python)
Description: Serialize all of python
---
py-discover[¶](#py-discover)
===
Homepage: <https://pypi.python.org/pypi/discover>
Spack package: [py-discover/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/py-discover/package.py)
Versions: 0.4.0
Build Dependencies: [python](#python)
Link Dependencies: [python](#python)
Run Dependencies: [python](#python)
Description: Test discovery for unittest.
---
py-dlcpar[¶](#py-dlcpar)
===
Homepage: <https://www.cs.hmc.edu/~yjw/software/dlcpar/>
Spack package: [py-dlcpar/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/py-dlcpar/package.py)
Versions: 1.0
Build Dependencies: [py-numpy](#py-numpy), [python](#python)
Link Dependencies: [python](#python)
Run Dependencies: [py-numpy](#py-numpy), [python](#python)
Description: DLCpar is a reconciliation method for inferring gene duplications, losses, and coalescence (accounting for incomplete lineage sorting).
---
py-docopt[¶](#py-docopt)
===
Homepage: <http://docopt.org/>
Spack package: [py-docopt/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/py-docopt/package.py)
Versions: 0.6.2
Build Dependencies: [py-setuptools](#py-setuptools), [python](#python)
Link Dependencies: [python](#python)
Run Dependencies: [python](#python)
Description: Command-line interface description language.
---
py-docutils[¶](#py-docutils)
===
Homepage: <http://docutils.sourceforge.net/>
Spack package: [py-docutils/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/py-docutils/package.py)
Versions: 0.13.1, 0.12
Build Dependencies: [python](#python)
Link Dependencies: [python](#python)
Run Dependencies: [python](#python)
Description: Docutils is an open-source text processing system for processing plaintext documentation into useful formats, such as HTML, LaTeX, man-pages, open-document or XML. It includes reStructuredText, the easy to read, easy to use, what-you-see-is-what-you-get plaintext markup language.
---
py-doxypy[¶](#py-doxypy)
===
Homepage: <https://pypi.python.org/pypi/doxypy>
Spack package: [py-doxypy/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/py-doxypy/package.py)
Versions: 0.3
Build Dependencies: [python](#python)
Link Dependencies: [python](#python)
Run Dependencies: [python](#python)
Description: doxypy is an input filter for Doxygen.
---
py-doxypypy[¶](#py-doxypypy)
===
Homepage: <https://github.com/Feneric/doxypypy>
Spack package: [py-doxypypy/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/py-doxypypy/package.py)
Versions: 0.8.8.6
Build Dependencies: [py-setuptools](#py-setuptools), [python](#python)
Link Dependencies: [python](#python)
Run Dependencies: [python](#python)
Description: A Doxygen filter for Python. A more Pythonic version of doxypy, a Doxygen filter for Python.
---
py-dryscrape[¶](#py-dryscrape)
===
Homepage: <https://github.com/niklasb/dryscrape>
Spack package: [py-dryscrape/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/py-dryscrape/package.py)
Versions: develop, 1.0
Build Dependencies: [py-lxml](#py-lxml), [python](#python), [py-webkit-server](#py-webkit-server), [py-xvfbwrapper](#py-xvfbwrapper)
Link Dependencies: [python](#python)
Run Dependencies: [py-lxml](#py-lxml), [python](#python), [py-webkit-server](#py-webkit-server), [py-xvfbwrapper](#py-xvfbwrapper)
Description: a lightweight Javascript-aware, headless web scraping library for Python
---
py-dxchange[¶](#py-dxchange)
===
Homepage: <https://github.com/data-exchange/dxchange>
Spack package: [py-dxchange/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/py-dxchange/package.py)
Versions: 0.1.2
Build Dependencies: [py-astropy](#py-astropy), [python](#python), [py-dxfile](#py-dxfile), [py-numpy](#py-numpy), [py-h5py](#py-h5py), [py-six](#py-six), [py-edffile](#py-edffile), [py-tifffile](#py-tifffile), [py-setuptools](#py-setuptools), [py-netcdf4](#py-netcdf4), [py-olefile](#py-olefile), [py-scipy](#py-scipy), [py-spefile](#py-spefile)
Link Dependencies: [python](#python)
Run Dependencies: [py-astropy](#py-astropy), [py-dxfile](#py-dxfile), [py-numpy](#py-numpy), [py-h5py](#py-h5py), [py-six](#py-six), [py-edffile](#py-edffile), [py-tifffile](#py-tifffile), [py-scipy](#py-scipy), [py-netcdf4](#py-netcdf4), [py-olefile](#py-olefile), [python](#python), [py-spefile](#py-spefile)
Description: DXchange provides an interface with tomoPy and raw tomographic data collected at different synchrotron facilities.
---
py-dxfile[¶](#py-dxfile)
===
Homepage: <https://github.com/data-exchange/dxfile>
Spack package: [py-dxfile/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/py-dxfile/package.py)
Versions: 0.4
Build Dependencies: [py-h5py](#py-h5py), [py-setuptools](#py-setuptools), [python](#python)
Link Dependencies: [python](#python)
Run Dependencies: [py-h5py](#py-h5py), [python](#python)
Description: Scientific Data Exchange [A1] is a set of guidelines for storing scientific data and metadata in a Hierarchical Data Format 5 [B6] file.
---
py-easybuild-easyblocks[¶](#py-easybuild-easyblocks)
===
Homepage: <http://hpcugent.github.io/easybuild/>
Spack package: [py-easybuild-easyblocks/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/py-easybuild-easyblocks/package.py)
Versions: 3.1.2
Build Dependencies: [python](#python)
Link Dependencies: [python](#python)
Run Dependencies: [py-easybuild-framework](#py-easybuild-framework), [python](#python)
Description: Collection of easyblocks for EasyBuild, a software build and installation framework for (scientific) software on HPC systems.
---
py-easybuild-easyconfigs[¶](#py-easybuild-easyconfigs)
===
Homepage: <http://hpcugent.github.io/easybuild/>
Spack package: [py-easybuild-easyconfigs/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/py-easybuild-easyconfigs/package.py)
Versions: 3.1.2
Build Dependencies: [python](#python)
Link Dependencies: [python](#python)
Run Dependencies: [py-easybuild-framework](#py-easybuild-framework), [py-easybuild-easyblocks](#py-easybuild-easyblocks), [python](#python)
Description: Collection of easyconfig files for EasyBuild, a software build and installation framework for (scientific) software on HPC systems.
---
py-easybuild-framework[¶](#py-easybuild-framework)
===
Homepage: <http://hpcugent.github.io/easybuild/>
Spack package: [py-easybuild-framework/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/py-easybuild-framework/package.py)
Versions: 3.1.2
Build Dependencies: [py-setuptools](#py-setuptools), [python](#python)
Link Dependencies: [python](#python)
Run Dependencies: [python](#python), [py-setuptools](#py-setuptools), [py-vsc-base](#py-vsc-base), [py-vsc-install](#py-vsc-install)
Description: The core of EasyBuild, a software build and installation framework for (scientific) software on HPC systems.
---
py-edffile[¶](#py-edffile)
===
Homepage: <https://github.com/vasole/pymca/blob/master/PyMca5/PyMcaIO/EdfFile.py>
Spack package: [py-edffile/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/py-edffile/package.py)
Versions: 5.0.0
Build Dependencies: [py-numpy](#py-numpy), [py-setuptools](#py-setuptools), [python](#python)
Link Dependencies: [python](#python)
Run Dependencies: [py-numpy](#py-numpy), [python](#python)
Description: Generic class for Edf files manipulation.
---
py-editdistance[¶](#py-editdistance)
===
Homepage: <https://github.com/aflc/editdistance>
Spack package: [py-editdistance/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/py-editdistance/package.py)
Versions: 0.4
Build Dependencies: [py-setuptools](#py-setuptools), [python](#python)
Link Dependencies: [python](#python)
Run Dependencies: [python](#python)
Description: Fast implementation of the edit distance (Levenshtein distance).
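py-editdistance is a fast (C++-backed) implementation of the Levenshtein distance. For reference, the textbook dynamic program it accelerates can be written in a few lines of plain Python (this sketch is not py-editdistance's API, just the underlying algorithm):

```python
def levenshtein(a, b):
    """Minimum number of single-character insertions, deletions, and
    substitutions needed to turn string `a` into string `b`."""
    # prev[j] holds the edit distance between a[:i-1] and b[:j].
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]  # distance from a[:i] to the empty prefix of b
        for j, cb in enumerate(b, 1):
            cur.append(min(
                prev[j] + 1,                 # delete ca
                cur[j - 1] + 1,              # insert cb
                prev[j - 1] + (ca != cb),    # substitute (free if equal)
            ))
        prev = cur
    return prev[-1]

print(levenshtein("kitten", "sitting"))  # → 3
```

The C implementation in py-editdistance computes the same quantity, just much faster than this O(len(a)·len(b)) pure-Python loop.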
---
py-elasticsearch[¶](#py-elasticsearch)
===
Homepage: <https://github.com/elastic/elasticsearch-py>
Spack package: [py-elasticsearch/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/py-elasticsearch/package.py)
Versions: 5.2.0, 2.3.0
Build Dependencies: [py-urllib3](#py-urllib3), [py-setuptools](#py-setuptools), [python](#python)
Link Dependencies: [python](#python)
Run Dependencies: [py-urllib3](#py-urllib3), [python](#python)
Description: Python client for Elasticsearch
---
py-elephant[¶](#py-elephant)
===
Homepage: <http://neuralensemble.org/elephant>
Spack package: [py-elephant/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/py-elephant/package.py)
Versions: 0.4.1, 0.3.0
Build Dependencies: [py-quantities](#py-quantities), [py-numpydoc](#py-numpydoc), [py-setuptools](#py-setuptools), [py-neo](#py-neo), [python](#python), [py-numpy](#py-numpy), [py-pandas](#py-pandas), [py-scipy](#py-scipy), [py-sphinx](#py-sphinx)
Link Dependencies: [python](#python)
Run Dependencies: [py-quantities](#py-quantities), [py-numpydoc](#py-numpydoc), [python](#python), [py-neo](#py-neo), [py-pandas](#py-pandas), [py-numpy](#py-numpy), [py-scipy](#py-scipy), [py-sphinx](#py-sphinx)
Test Dependencies: [py-nose](#py-nose)
Description: Elephant is a package for analysis of electrophysiology data in Python
---
py-emcee[¶](#py-emcee)
===
Homepage: <http://dan.iel.fm/emcee/current/>
Spack package: [py-emcee/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/py-emcee/package.py)
Versions: 2.1.0
Build Dependencies: [py-numpy](#py-numpy), [py-setuptools](#py-setuptools), [python](#python)
Link Dependencies: [python](#python)
Run Dependencies: [py-numpy](#py-numpy), [python](#python)
Description: emcee is an MIT licensed pure-Python implementation of Goodman & Weare's Affine Invariant Markov chain Monte Carlo (MCMC) Ensemble sampler.
---

py-entrypoints[¶](#py-entrypoints)
===

Homepage:
* <https://pypi.python.org/pypi/entrypoints>

Spack package:
* [py-entrypoints/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/py-entrypoints/package.py)

Versions: 0.2.3
Build Dependencies: [py-configparser](#py-configparser), [python](#python)
Link Dependencies: [python](#python)
Run Dependencies: [py-configparser](#py-configparser), [python](#python)

Description: Discover and load entry points from installed packages.

---

py-enum34[¶](#py-enum34)
===

Homepage:
* <https://pypi.python.org/pypi/enum34>

Spack package:
* [py-enum34/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/py-enum34/package.py)

Versions: 1.1.6
Build Dependencies: [py-setuptools](#py-setuptools), [python](#python)
Link Dependencies: [python](#python)
Run Dependencies: [python](#python)

Description: Python 3.4 Enum backported to 3.3, 3.2, 3.1, 2.7, 2.6, 2.5, and 2.4.

---

py-epydoc[¶](#py-epydoc)
===

Homepage:
* <https://pypi.python.org/pypi/epydoc>

Spack package:
* [py-epydoc/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/py-epydoc/package.py)

Versions: 3.0.1
Build Dependencies: [python](#python)
Link Dependencies: [python](#python)
Run Dependencies: [python](#python)

Description: Epydoc is a tool for generating API documentation for Python modules, based on their docstrings.
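The API that py-enum34 (above) backports is the `enum` module that became standard in Python 3.4; the stdlib version sketches the interface the backport provides on older interpreters:

```python
from enum import Enum

class Color(Enum):
    RED = 1
    GREEN = 2
    BLUE = 3

# Members have a name and a value, and each member is a singleton,
# so lookups by value or by name return the same object.
print(Color.RED.name)       # → RED
print(Color(2))             # → Color.GREEN
print(Color["BLUE"].value)  # → 3
```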
---

py-espresso[¶](#py-espresso)
===

Homepage:
* <http://espressomd.org/>

Spack package:
* [py-espresso/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/py-espresso/package.py)

Versions: develop, 4.0.0
Build Dependencies: mpi, [py-cython](#py-cython), [python](#python), [hdf5](#hdf5), [cmake](#cmake), [boost](#boost), [py-numpy](#py-numpy), [fftw](#fftw)
Link Dependencies: [boost](#boost), mpi, [python](#python), [fftw](#fftw), [hdf5](#hdf5)
Run Dependencies: [py-numpy](#py-numpy)

Description: ESPResSo is a highly versatile software package for performing and analyzing scientific Molecular Dynamics many-particle simulations of coarse-grained atomistic or bead-spring models as they are used in soft matter research in physics, chemistry and molecular biology. It can be used to simulate systems such as polymers, liquid crystals, colloids, polyelectrolytes, ferrofluids and biological systems, for example DNA and lipid membranes. It also has a DPD and lattice Boltzmann solver for hydrodynamic interactions, and allows several particle couplings to the LB fluid.

---

py-espressopp[¶](#py-espressopp)
===

Homepage:
* <https://espressopp.github.io>

Spack package:
* [py-espressopp/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/py-espressopp/package.py)

Versions: develop, 1.9.5, 1.9.4.1, 1.9.4
Build Dependencies: [py-sphinx](#py-sphinx), mpi, [texlive](#texlive), [python](#python), [cmake](#cmake), [py-numpy](#py-numpy), [doxygen](#doxygen), [py-matplotlib](#py-matplotlib), [boost](#boost), [fftw](#fftw), [py-mpi4py](#py-mpi4py)
Link Dependencies: [boost](#boost), mpi, [python](#python), [fftw](#fftw)
Run Dependencies: [py-numpy](#py-numpy), [py-mpi4py](#py-mpi4py)

Description: ESPResSo++ is an extensible, flexible, fast and parallel simulation software for soft matter research. It is a highly versatile software package for the scientific simulation and analysis of coarse-grained atomistic or bead-spring models as they are used in soft matter research.

---

py-et-xmlfile[¶](#py-et-xmlfile)
===

Homepage:
* <https://bitbucket.org/openpyxl/et_xmlfile>

Spack package:
* [py-et-xmlfile/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/py-et-xmlfile/package.py)

Versions: 1.0.1
Build Dependencies: [py-setuptools](#py-setuptools), [python](#python)
Link Dependencies: [python](#python)
Run Dependencies: [python](#python)

Description: An implementation of lxml.xmlfile for the standard library.

---

py-eventlet[¶](#py-eventlet)
===

Homepage:
* <https://github.com/eventlet/eventlet>

Spack package:
* [py-eventlet/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/py-eventlet/package.py)

Versions: 0.22.0
Build Dependencies: [py-greenlet](#py-greenlet), [py-setuptools](#py-setuptools), [python](#python), [py-enum34](#py-enum34)
Link Dependencies: [py-greenlet](#py-greenlet), [python](#python)
Run Dependencies: [py-enum34](#py-enum34), [python](#python)

Description: Concurrent networking library for Python

---

py-execnet[¶](#py-execnet)
===

Homepage:
* <http://codespeak.net/execnet>

Spack package:
* [py-execnet/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/py-execnet/package.py)

Versions: 1.4.1
Build Dependencies: [py-setuptools-scm](#py-setuptools-scm), [py-apipkg](#py-apipkg), [py-setuptools](#py-setuptools), [python](#python)
Link Dependencies: [python](#python)
Run Dependencies: [py-apipkg](#py-apipkg), [python](#python)

Description: execnet provides a share-nothing model with channel-send/receive communication for distributing execution across many Python interpreters across version, platform and network barriers.
---

py-fastaindex[¶](#py-fastaindex)
===

Homepage:
* <https://github.com/lpryszcz/FastaIndex>

Spack package:
* [py-fastaindex/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/py-fastaindex/package.py)

Versions: 0.11rc7
Build Dependencies: [py-setuptools](#py-setuptools), [python](#python)
Link Dependencies: [python](#python)
Run Dependencies: [python](#python)

Description: FastA index (.fai) handler compatible with samtools faidx, extended with 4 columns storing counts for A, C, G & T for each sequence.

---

py-fasteners[¶](#py-fasteners)
===

Homepage:
* <https://github.com/harlowja/fasteners>

Spack package:
* [py-fasteners/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/py-fasteners/package.py)

Versions: 0.14.1
Build Dependencies: [py-monotonic](#py-monotonic), [py-setuptools](#py-setuptools), [python](#python), [py-six](#py-six)
Link Dependencies: [python](#python)
Run Dependencies: [py-monotonic](#py-monotonic), [python](#python), [py-six](#py-six)

Description: A Python package that provides useful locks.

---

py-faststructure[¶](#py-faststructure)
===

Homepage:
* <https://github.com/rajanil/fastStructure>

Spack package:
* [py-faststructure/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/py-faststructure/package.py)

Versions: 1.0
Build Dependencies: [py-numpy](#py-numpy), [py-cython](#py-cython), [gsl](#gsl), [python](#python)
Link Dependencies: [gsl](#gsl), [python](#python)
Run Dependencies: [py-numpy](#py-numpy), [python](#python)

Description: FastStructure is a fast algorithm for inferring population structure from large SNP genotype data.
---

py-filelock[¶](#py-filelock)
===

Homepage:
* <https://github.com/benediktschmitt/py-filelock>

Spack package:
* [py-filelock/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/py-filelock/package.py)

Versions: 3.0.4, 3.0.3, 3.0.1, 3.0.0, 2.0.13, 2.0.12, 2.0.11, 2.0.10, 2.0.9, 2.0.8
Build Dependencies: [python](#python)
Link Dependencies: [python](#python)
Run Dependencies: [python](#python)

Description: This package contains a single module, which implements a platform-independent file lock in Python, providing a simple way of inter-process communication.

---

py-fiscalyear[¶](#py-fiscalyear)
===

Homepage:
* <https://github.com/adamjstewart/fiscalyear>

Spack package:
* [py-fiscalyear/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/py-fiscalyear/package.py)

Versions: 0.1.0, master
Build Dependencies: [py-setuptools](#py-setuptools), [python](#python)
Link Dependencies: [python](#python)
Run Dependencies: [python](#python)
Test Dependencies: [py-pytest](#py-pytest), [py-pytest-runner](#py-pytest-runner)

Description: fiscalyear is a small, lightweight Python module providing helpful utilities for managing the fiscal calendar. It is designed as an extension of the built-in datetime and calendar modules, adding the ability to query the fiscal year and fiscal quarter of a date or datetime object.
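The file-lock idea behind py-filelock (above) can be sketched with the standard library alone: file creation with `O_CREAT | O_EXCL` is atomic, so whichever process creates the lock file first holds the lock. This is a simplified illustration, not the package's actual implementation or API (the real package also handles timeouts, reentrancy, and platform-specific locking):

```python
import os
import tempfile

class SimpleFileLock:
    """Illustrative advisory lock: exclusive file creation is atomic,
    so at most one process can hold the lock file at a time."""
    def __init__(self, path):
        self.path = path
        self.fd = None

    def acquire(self):
        # Raises FileExistsError if another process already holds the lock.
        self.fd = os.open(self.path, os.O_CREAT | os.O_EXCL | os.O_WRONLY)

    def release(self):
        os.close(self.fd)
        os.remove(self.path)

lock_path = os.path.join(tempfile.gettempdir(), "demo.lock")
lock = SimpleFileLock(lock_path)
lock.acquire()
print(os.path.exists(lock_path))  # → True while the lock is held
lock.release()
print(os.path.exists(lock_path))  # → False
```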
---

py-flake8[¶](#py-flake8)
===

Homepage:
* <https://github.com/PyCQA/flake8>

Spack package:
* [py-flake8/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/py-flake8/package.py)

Versions: 3.5.0, 3.0.4, 2.5.4
Build Dependencies: [py-pyflakes](#py-pyflakes), [py-enum34](#py-enum34), [py-configparser](#py-configparser), [py-mccabe](#py-mccabe), [py-setuptools](#py-setuptools), [py-pycodestyle](#py-pycodestyle), [python](#python)
Link Dependencies: [python](#python)
Run Dependencies: [py-pyflakes](#py-pyflakes), [py-enum34](#py-enum34), [py-configparser](#py-configparser), [py-mccabe](#py-mccabe), [py-setuptools](#py-setuptools), [py-pycodestyle](#py-pycodestyle), [python](#python)
Test Dependencies: [py-nose](#py-nose)

Description: Flake8 is a wrapper around PyFlakes, pep8 and Ned Batchelder's McCabe script.

---

py-flake8-polyfill[¶](#py-flake8-polyfill)
===

Homepage:
* <https://pypi.org/project/flake8-polyfill/>

Spack package:
* [py-flake8-polyfill/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/py-flake8-polyfill/package.py)

Versions: 1.0.2
Build Dependencies: [python](#python)
Link Dependencies: [python](#python)
Run Dependencies: [python](#python), [py-flake8](#py-flake8)

Description: flake8-polyfill is a package that provides some compatibility helpers for Flake8 plugins that intend to support Flake8 2.x and 3.x simultaneously.
---

py-flask[¶](#py-flask)
===

Homepage:
* <http://github.com/pallets/flask>

Spack package:
* [py-flask/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/py-flask/package.py)

Versions: 0.12.2, 0.12.1, 0.11.1
Build Dependencies: [py-click](#py-click), [py-setuptools](#py-setuptools), [py-werkzeug](#py-werkzeug), [py-itsdangerous](#py-itsdangerous), [python](#python), [py-jinja2](#py-jinja2)
Link Dependencies: [python](#python)
Run Dependencies: [py-click](#py-click), [py-itsdangerous](#py-itsdangerous), [py-jinja2](#py-jinja2), [python](#python), [py-werkzeug](#py-werkzeug)

Description: A microframework based on Werkzeug, Jinja2 and good intentions

---

py-flask-compress[¶](#py-flask-compress)
===

Homepage:
* <https://github.com/libwilliam/flask-compress>

Spack package:
* [py-flask-compress/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/py-flask-compress/package.py)

Versions: 1.4.0
Build Dependencies: [py-setuptools](#py-setuptools), [python](#python), [py-flask](#py-flask)
Link Dependencies: [python](#python)
Run Dependencies: [py-flask](#py-flask), [python](#python)

Description: Flask-Compress allows you to easily compress your Flask application's responses with gzip.
---

py-flask-socketio[¶](#py-flask-socketio)
===

Homepage:
* <https://flask-socketio.readthedocs.io>

Spack package:
* [py-flask-socketio/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/py-flask-socketio/package.py)

Versions: 2.9.6
Build Dependencies: [python](#python), [py-flask](#py-flask), [py-setuptools](#py-setuptools), [py-python-socketio](#py-python-socketio), [py-werkzeug](#py-werkzeug)
Link Dependencies: [python](#python)
Run Dependencies: [py-flask](#py-flask), [python](#python), [py-python-socketio](#py-python-socketio), [py-werkzeug](#py-werkzeug)

Description: Flask-SocketIO gives Flask applications access to low-latency bidirectional communications between the clients and the server. The client-side application can use any of the official SocketIO client libraries in JavaScript, C++, Java and Swift, or any compatible client, to establish a permanent connection to the server.

---

py-flexx[¶](#py-flexx)
===

Homepage:
* <http://flexx.readthedocs.io>

Spack package:
* [py-flexx/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/py-flexx/package.py)

Versions: 0.4.1
Build Dependencies: [py-setuptools](#py-setuptools), [python](#python), [py-tornado](#py-tornado)
Link Dependencies: [python](#python)
Run Dependencies: [py-tornado](#py-tornado), [python](#python)

Description: Write desktop and web apps in pure Python.

---

py-fn[¶](#py-fn)
===

Homepage:
* <https://github.com/fnpy/fn.py>

Spack package:
* [py-fn/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/py-fn/package.py)

Versions: 0.5.2
Build Dependencies: [py-setuptools](#py-setuptools), [python](#python)
Link Dependencies: [python](#python)
Run Dependencies: [python](#python)

Description: Functional programming in Python: implementation of missing features to enjoy FP.
---

py-fparser[¶](#py-fparser)
===

Homepage:
* <https://github.com/stfc/fparser>

Spack package:
* [py-fparser/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/py-fparser/package.py)

Versions: develop, 0.0.6, 0.0.5
Build Dependencies: [python](#python), [py-numpy](#py-numpy), [py-setuptools](#py-setuptools), [py-nose](#py-nose), [py-six](#py-six)
Link Dependencies: [python](#python)
Run Dependencies: [py-numpy](#py-numpy), [python](#python)
Test Dependencies: [py-pytest](#py-pytest)

Description: Parser for Fortran 77..2003 code.

---

py-funcsigs[¶](#py-funcsigs)
===

Homepage:
* <https://pypi.python.org/pypi/funcsigs>

Spack package:
* [py-funcsigs/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/py-funcsigs/package.py)

Versions: 1.0.2, 0.4
Build Dependencies: [py-setuptools](#py-setuptools), [python](#python)
Link Dependencies: [python](#python)
Run Dependencies: [python](#python)
Test Dependencies: [py-unittest2](#py-unittest2)

Description: Python function signatures from PEP362 for Python 2.6, 2.7 and 3.2.

---

py-functools32[¶](#py-functools32)
===

Homepage:
* <https://github.com/MiCHiLU/python-functools32>

Spack package:
* [py-functools32/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/py-functools32/package.py)

Versions: 3.2.3-2
Build Dependencies: [python](#python)
Link Dependencies: [python](#python)
Run Dependencies: [python](#python)

Description: Backport of the functools module from Python 3.2.3 for use on 2.7 and PyPy.
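The PEP 362 signature objects that py-funcsigs (above) backports are available in the standard library's `inspect` module on Python 3; the stdlib version sketches the same API (`resize` is just an illustrative function):

```python
import inspect

def resize(image, width, height=None, *, keep_aspect=True):
    pass

# A Signature object exposes parameters, kinds, and defaults.
sig = inspect.signature(resize)
print(list(sig.parameters))  # → ['image', 'width', 'height', 'keep_aspect']

# bind() applies the same argument-matching rules as a real call,
# without invoking the function.
bound = sig.bind("img.png", 640)
bound.apply_defaults()
print(bound.arguments["keep_aspect"])  # → True
```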
---

py-future[¶](#py-future)
===

Homepage:
* <https://python-future.org/>

Spack package:
* [py-future/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/py-future/package.py)

Versions: 0.16.0, 0.15.2
Build Dependencies: [py-setuptools](#py-setuptools), [python](#python)
Link Dependencies: [python](#python)
Run Dependencies: [python](#python)

Description: Clean single-source support for Python 3 and 2

---

py-futures[¶](#py-futures)
===

Homepage:
* <https://pypi.python.org/pypi/futures>

Spack package:
* [py-futures/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/py-futures/package.py)

Versions: 3.0.5
Build Dependencies: [py-setuptools](#py-setuptools), [python](#python)
Link Dependencies: [python](#python)
Run Dependencies: [python](#python)

Description: Backport of the concurrent.futures package from Python 3.2

---

py-fypp[¶](#py-fypp)
===

Homepage:
* <https://github.com/aradi/fypp>

Spack package:
* [py-fypp/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/py-fypp/package.py)

Versions: 2.1.1
Build Dependencies: [py-setuptools](#py-setuptools), [python](#python)
Link Dependencies: [python](#python)
Run Dependencies: [python](#python)

Description: Python powered Fortran preprocessor.
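The `concurrent.futures` API that py-futures (above) backports to Python 2 is part of the standard library on Python 3; a minimal sketch of the executor/future pattern it provides:

```python
from concurrent.futures import ThreadPoolExecutor, as_completed

def square(n):
    return n * n

# submit() returns a Future immediately; as_completed() yields each
# Future as its result becomes available, in completion order.
with ThreadPoolExecutor(max_workers=4) as pool:
    futures = [pool.submit(square, n) for n in range(5)]
    results = sorted(f.result() for f in as_completed(futures))

print(results)  # → [0, 1, 4, 9, 16]
```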
---

py-gdbgui[¶](#py-gdbgui)
===

Homepage:
* <https://gdbgui.com>

Spack package:
* [py-gdbgui/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/py-gdbgui/package.py)

Versions: 0.11.2.1
Build Dependencies: [py-gevent](#py-gevent), [py-flask-compress](#py-flask-compress), [py-flask-socketio](#py-flask-socketio), [py-setuptools](#py-setuptools), [py-flask](#py-flask), [py-pygdbmi](#py-pygdbmi), [python](#python), [py-pygments](#py-pygments)
Link Dependencies: [python](#python)
Run Dependencies: [py-gevent](#py-gevent), [gdb](#gdb), [py-flask-socketio](#py-flask-socketio), [py-setuptools](#py-setuptools), [py-flask-compress](#py-flask-compress), [py-pygdbmi](#py-pygdbmi), [py-pygments](#py-pygments), [python](#python), [py-flask](#py-flask)

Description: gdbgui is a modern, free, browser-based frontend to gdb

---

py-genders[¶](#py-genders)
===

Homepage:
* <https://github.com/chaos/genders>

Spack package:
* [py-genders/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/py-genders/package.py)

Versions: 1.22
Build Dependencies: [python](#python)
Link Dependencies: [python](#python)

Description: Genders is a static cluster configuration database used for cluster configuration management. It is used by a variety of tools and scripts for management of large clusters.
---

py-genshi[¶](#py-genshi)
===

Homepage:
* <https://genshi.edgewall.org/>

Spack package:
* [py-genshi/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/py-genshi/package.py)

Versions: 0.7, 0.6.1, 0.6
Build Dependencies: [py-setuptools](#py-setuptools), [python](#python)
Link Dependencies: [python](#python)
Run Dependencies: [python](#python)

Description: Python toolkit for generation of output for the web

---

py-gevent[¶](#py-gevent)
===

Homepage:
* <http://www.gevent.org>

Spack package:
* [py-gevent/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/py-gevent/package.py)

Versions: 1.3a2
Build Dependencies: [py-cython](#py-cython), [py-cffi](#py-cffi), [py-setuptools](#py-setuptools), [python](#python), [py-greenlet](#py-greenlet)
Link Dependencies: [python](#python)
Run Dependencies: [py-greenlet](#py-greenlet), [py-cffi](#py-cffi), [python](#python)

Description: gevent is a coroutine-based Python networking library.
---

py-git-review[¶](#py-git-review)
===

Homepage:
* <http://docs.openstack.org/infra/git-review>

Spack package:
* [py-git-review/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/py-git-review/package.py)

Versions: 1.26.0, 1.25.0, 1.24, 1.23, 1.22, 1.21
Build Dependencies: [py-pbr](#py-pbr), [py-setuptools](#py-setuptools), [python](#python), [py-requests](#py-requests)
Link Dependencies: [python](#python)
Run Dependencies: [tk](#tk), [py-pbr](#py-pbr), [git](#git), [python](#python), [py-requests](#py-requests)

Description: git-review is a tool that helps with submitting git branches to Gerrit

---

py-git2[¶](#py-git2)
===

Homepage:
* <http://www.pygit2.org/>

Spack package:
* [py-git2/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/py-git2/package.py)

Versions: 0.24.1
Build Dependencies: [libgit2](#libgit2), [py-cffi](#py-cffi), [py-setuptools](#py-setuptools), [python](#python), [py-six](#py-six)
Link Dependencies: [libgit2](#libgit2), [python](#python)
Run Dependencies: [py-cffi](#py-cffi), [python](#python), [py-six](#py-six)

Description: Pygit2 is a set of Python bindings to the libgit2 shared library; libgit2 implements the core of Git.

---

py-gnuplot[¶](#py-gnuplot)
===

Homepage:
* <http://gnuplot-py.sourceforge.net/>

Spack package:
* [py-gnuplot/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/py-gnuplot/package.py)

Versions: 1.8
Build Dependencies: [py-numpy](#py-numpy), [python](#python)
Link Dependencies: [python](#python)
Run Dependencies: [py-numpy](#py-numpy), [python](#python)

Description: Gnuplot.py is a Python package that allows you to create graphs from within Python using the gnuplot plotting program.
---

py-goatools[¶](#py-goatools)
===

Homepage:
* <https://github.com/tanghaibao/goatools>

Spack package:
* [py-goatools/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/py-goatools/package.py)

Versions: 0.7.11
Build Dependencies: [py-pandas](#py-pandas), [python](#python), [py-pyparsing](#py-pyparsing), [py-statsmodels](#py-statsmodels), [py-nose](#py-nose), [py-pydot](#py-pydot), [py-xlsxwriter](#py-xlsxwriter), [py-xlrd](#py-xlrd), [py-numpy](#py-numpy), [py-scipy](#py-scipy), [py-pytest](#py-pytest)
Link Dependencies: [python](#python)
Run Dependencies: [py-pandas](#py-pandas), [python](#python), [py-pyparsing](#py-pyparsing), [py-statsmodels](#py-statsmodels), [py-nose](#py-nose), [py-pydot](#py-pydot), [py-xlsxwriter](#py-xlsxwriter), [py-xlrd](#py-xlrd), [py-numpy](#py-numpy), [py-scipy](#py-scipy), [py-pytest](#py-pytest)

Description: Python scripts to find enrichment of GO terms

---

py-gpaw[¶](#py-gpaw)
===

Homepage:
* <https://wiki.fysik.dtu.dk/gpaw/index.html>

Spack package:
* [py-gpaw/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/py-gpaw/package.py)

Versions: 1.3.0
Build Dependencies: [libxc](#libxc), mpi, [python](#python), [py-ase](#py-ase), [py-numpy](#py-numpy), lapack, blas, scalapack, [fftw](#fftw), [py-scipy](#py-scipy)
Link Dependencies: [libxc](#libxc), mpi, [python](#python), scalapack, blas, [fftw](#fftw), lapack
Run Dependencies: [py-ase](#py-ase), [py-numpy](#py-numpy), mpi, [py-scipy](#py-scipy), [python](#python)

Description: GPAW is a density-functional theory (DFT) Python code based on the projector-augmented wave (PAW) method and the atomic simulation environment (ASE).
---

py-greenlet[¶](#py-greenlet)
===

Homepage:
* <https://github.com/python-greenlet/greenlet>

Spack package:
* [py-greenlet/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/py-greenlet/package.py)

Versions: 0.4.13
Build Dependencies: [python](#python)
Link Dependencies: [python](#python)
Run Dependencies: [python](#python)

Description: Lightweight in-process concurrent programming

---

py-griddataformats[¶](#py-griddataformats)
===

Homepage:
* <http://www.mdanalysis.org/GridDataFormats>

Spack package:
* [py-griddataformats/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/py-griddataformats/package.py)

Versions: 0.3.3
Build Dependencies: [py-numpy](#py-numpy), [py-setuptools](#py-setuptools), [python](#python), [py-six](#py-six)
Link Dependencies: [python](#python)
Run Dependencies: [py-numpy](#py-numpy), [python](#python), [py-six](#py-six)

Description: The gridDataFormats package provides classes to unify reading and writing n-dimensional datasets. One can read grid data from files, make them available as a Grid object, and write out the data again.
---

py-guidata[¶](#py-guidata)
===

Homepage:
* <https://github.com/PierreRaybaut/guidata>

Spack package:
* [py-guidata/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/py-guidata/package.py)

Versions: 1.7.5
Build Dependencies: [py-spyder](#py-spyder), [python](#python), [py-h5py](#py-h5py), [py-setuptools](#py-setuptools), [py-pyqt](#py-pyqt)
Link Dependencies: [python](#python)
Run Dependencies: [py-spyder](#py-spyder), [py-h5py](#py-h5py), [python](#python), [py-pyqt](#py-pyqt)

Description: Automatic graphical user interface generation for easy dataset editing and display

---

py-guiqwt[¶](#py-guiqwt)
===

Homepage:
* <https://github.com/PierreRaybaut/guiqwt>

Spack package:
* [py-guiqwt/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/py-guiqwt/package.py)

Versions: 3.0.2
Build Dependencies: [py-pythonqwt](#py-pythonqwt), [py-setuptools](#py-setuptools), [py-numpy](#py-numpy), [py-guidata](#py-guidata), [py-scipy](#py-scipy), [python](#python), [py-pillow](#py-pillow)
Link Dependencies: [python](#python)
Run Dependencies: [py-pythonqwt](#py-pythonqwt), [py-pillow](#py-pillow), [py-numpy](#py-numpy), [py-guidata](#py-guidata), [py-scipy](#py-scipy), [python](#python)

Description: guiqwt is a set of tools for curve and image plotting (extension to PythonQwt)

---

py-h5py[¶](#py-h5py)
===

Homepage:
* <http://www.h5py.org/>

Spack package:
* [py-h5py/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/py-h5py/package.py)

Versions: 2.8.0, 2.7.1, 2.7.0, 2.6.0, 2.5.0, 2.4.0
Build Dependencies: [py-pkgconfig](#py-pkgconfig), mpi, [py-cython](#py-cython), [py-setuptools](#py-setuptools), [hdf5](#hdf5), [py-numpy](#py-numpy), [py-mpi4py](#py-mpi4py), [python](#python), [py-six](#py-six)
Link Dependencies: mpi, [python](#python), [hdf5](#hdf5)
Run Dependencies: [py-numpy](#py-numpy), [py-mpi4py](#py-mpi4py), [python](#python), [py-six](#py-six)

Description: The h5py package provides both a high- and low-level interface to the HDF5 library from Python.

---

py-hdfs[¶](#py-hdfs)
===

Homepage:
* <https://hdfscli.readthedocs.io/en/latest/>

Spack package:
* [py-hdfs/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/py-hdfs/package.py)

Versions: 2.1.0
Build Dependencies: [py-docopt](#py-docopt), [py-requests](#py-requests), [py-setuptools](#py-setuptools), [python](#python), [py-six](#py-six)
Link Dependencies: [python](#python)
Run Dependencies: [py-docopt](#py-docopt), [py-requests](#py-requests), [python](#python), [py-six](#py-six)

Description: API and command line interface for HDFS

---

py-hepdata-validator[¶](#py-hepdata-validator)
===

Homepage:
* <https://github.com/hepdata/hepdata-validator>

Spack package:
* [py-hepdata-validator/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/py-hepdata-validator/package.py)

Versions: 0.1.16, 0.1.15, 0.1.14, 0.1.8
Build Dependencies: [py-pyyaml](#py-pyyaml), [python](#python), [py-setuptools](#py-setuptools), [py-jsonschema](#py-jsonschema)
Link Dependencies: [python](#python)
Run Dependencies: [py-pyyaml](#py-pyyaml), [python](#python), [py-jsonschema](#py-jsonschema)

Description: Validation schema and code for HEPdata submissions.

---

py-html2text[¶](#py-html2text)
===

Homepage:
* <https://github.com/Alir3z4/html2text/>

Spack package:
* [py-html2text/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/py-html2text/package.py)

Versions: 2016.9.19
Build Dependencies: [py-setuptools](#py-setuptools), [python](#python)
Link Dependencies: [python](#python)
Run Dependencies: [python](#python)

Description: Turn HTML into equivalent Markdown-structured text.
---

py-html5lib[¶](#py-html5lib)
===

Homepage:
* <https://github.com/html5lib/html5lib-python>

Spack package:
* [py-html5lib/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/py-html5lib/package.py)

Versions: 0.9999999
Build Dependencies: [python](#python), [py-six](#py-six)
Link Dependencies: [python](#python)
Run Dependencies: [python](#python), [py-six](#py-six)

Description: HTML parser based on the WHATWG HTML specification.

---

py-htseq[¶](#py-htseq)
===

Homepage:
* <http://htseq.readthedocs.io/en/release_0.9.1/overview.html>

Spack package:
* [py-htseq/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/py-htseq/package.py)

Versions: 0.9.1
Build Dependencies: [py-matplotlib](#py-matplotlib), [swig](#swig), [py-setuptools](#py-setuptools), [py-cython](#py-cython), [py-numpy](#py-numpy), [py-pysam](#py-pysam), [python](#python)
Link Dependencies: [python](#python)
Run Dependencies: [py-matplotlib](#py-matplotlib), [swig](#swig), [py-cython](#py-cython), [py-numpy](#py-numpy), [py-pysam](#py-pysam), [python](#python)

Description: HTSeq is a Python package that provides infrastructure to process data from high-throughput sequencing assays.
---

py-httpbin[¶](#py-httpbin)
===

Homepage:
* <https://github.com/Runscope/httpbin>

Spack package:
* [py-httpbin/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/py-httpbin/package.py)

Versions: 0.5.0
Build Dependencies: [py-markupsafe](#py-markupsafe), [py-setuptools](#py-setuptools), [py-decorator](#py-decorator), [py-itsdangerous](#py-itsdangerous), [py-flask](#py-flask), [python](#python), [py-six](#py-six)
Link Dependencies: [python](#python)
Run Dependencies: [py-markupsafe](#py-markupsafe), [py-decorator](#py-decorator), [py-itsdangerous](#py-itsdangerous), [py-flask](#py-flask), [python](#python), [py-six](#py-six)

Description: HTTP Request and Response Service

---

py-hypothesis[¶](#py-hypothesis)
===

Homepage:
* <https://github.com/HypothesisWorks/hypothesis-python>

Spack package:
* [py-hypothesis/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/py-hypothesis/package.py)

Versions: 3.7.0
Build Dependencies: [py-enum34](#py-enum34), [py-setuptools](#py-setuptools), [python](#python)
Link Dependencies: [python](#python)
Run Dependencies: [py-enum34](#py-enum34), [python](#python)

Description: A library for property-based testing.
---

py-idna[¶](#py-idna)
===

Homepage:
* <https://github.com/kjd/idna>

Spack package:
* [py-idna/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/py-idna/package.py)

Versions: 2.5
Build Dependencies: [py-setuptools](#py-setuptools), [python](#python)
Link Dependencies: [py-setuptools](#py-setuptools), [python](#python)
Run Dependencies: [python](#python)

Description: Internationalized Domain Names for Python (IDNA 2008 and UTS #46)

---

py-igraph[¶](#py-igraph)
===

Homepage:
* <http://igraph.org/>

Spack package:
* [py-igraph/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/py-igraph/package.py)

Versions: 0.7.0
Build Dependencies: [igraph](#igraph), [py-setuptools](#py-setuptools), [python](#python)
Link Dependencies: [igraph](#igraph), [python](#python)
Run Dependencies: [python](#python)

Description: igraph is a collection of network analysis tools with the emphasis on efficiency, portability and ease of use.

---

py-illumina-utils[¶](#py-illumina-utils)
===

Homepage:
* <https://github.com/meren/illumina-utils>

Spack package:
* [py-illumina-utils/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/py-illumina-utils/package.py)

Versions: 2.3, 2.2
Build Dependencies: [py-pip](#py-pip), [py-setuptools](#py-setuptools), [py-python-levenshtein](#py-python-levenshtein), [py-numpy](#py-numpy), [py-matplotlib](#py-matplotlib), [python](#python)
Link Dependencies: [python](#python)
Run Dependencies: [py-numpy](#py-numpy), [py-matplotlib](#py-matplotlib), [python](#python), [py-python-levenshtein](#py-python-levenshtein)

Description: A library and collection of scripts to work with Illumina paired-end data (for CASAVA 1.8+).
---

py-imageio[¶](#py-imageio)
===

Homepage:
* <http://imageio.github.io/>

Spack package:
* [py-imageio/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/py-imageio/package.py)

Versions: 2.3.0
Build Dependencies: [py-numpy](#py-numpy), [python](#python), [py-pillow](#py-pillow), [py-setuptools](#py-setuptools)
Link Dependencies: [python](#python)
Run Dependencies: [python](#python), [py-numpy](#py-numpy), [ffmpeg](#ffmpeg), [py-pillow](#py-pillow)

Description: Imageio is a Python library that provides an easy interface to read and write a wide range of image data, including animated images, video, volumetric data, and scientific formats. It is cross-platform, runs on Python 2.7 and 3.4+, and is easy to install.

---

py-imagesize[¶](#py-imagesize)
===

Homepage:
* <https://github.com/shibukawa/imagesize_py>

Spack package:
* [py-imagesize/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/py-imagesize/package.py)

Versions: 0.7.1
Build Dependencies: [py-setuptools](#py-setuptools), [python](#python)
Link Dependencies: [python](#python)
Run Dependencies: [python](#python)

Description: Parses image file headers and returns image size. Supports PNG, JPEG, JPEG2000, and GIF image file formats.

---

py-iminuit[¶](#py-iminuit)
===

Homepage:
* <https://pypi.python.org/pypi/iminuit>

Spack package:
* [py-iminuit/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/py-iminuit/package.py)

Versions: 1.2
Build Dependencies: [py-matplotlib](#py-matplotlib), [py-numpy](#py-numpy), [py-cython](#py-cython), [py-setuptools](#py-setuptools), [python](#python)
Link Dependencies: [python](#python)
Run Dependencies: [py-numpy](#py-numpy), [py-matplotlib](#py-matplotlib), [python](#python)

Description: Interactive IPython-Friendly Minimizer based on SEAL Minuit2.
--- py-importlib[¶](#py-importlib) === Homepage: * <https://github.com/brettcannon/importlibSpack package: * [py-importlib/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/py-importlib/package.py) Versions: 1.0.4 Build Dependencies: [python](#python) Link Dependencies: [python](#python) Run Dependencies: [python](#python) Description: Packaging for importlib from Python 2.7 --- py-ipaddress[¶](#py-ipaddress) === Homepage: * <https://github.com/phihag/ipaddressSpack package: * [py-ipaddress/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/py-ipaddress/package.py) Versions: 1.0.18 Build Dependencies: [py-setuptools](#py-setuptools), [python](#python) Link Dependencies: [python](#python) Run Dependencies: [python](#python) Description: Python 3.3's ipaddress for older Python versions --- py-ipdb[¶](#py-ipdb) === Homepage: * <https://pypi.python.org/pypi/ipdbSpack package: * [py-ipdb/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/py-ipdb/package.py) Versions: 0.10.1 Build Dependencies: [py-traitlets](#py-traitlets), [py-setuptools](#py-setuptools), [py-pexpect](#py-pexpect), [py-prompt-toolkit](#py-prompt-toolkit), [py-ipython](#py-ipython), [python](#python), [py-six](#py-six) Link Dependencies: [py-traitlets](#py-traitlets), [py-pexpect](#py-pexpect), [py-prompt-toolkit](#py-prompt-toolkit), [py-ipython](#py-ipython), [python](#python), [py-six](#py-six) Run Dependencies: [python](#python) Description: ipdb is the iPython debugger and has many additional features, including a better interactive debugging experience via colorized output. 
---

py-ipykernel[¶](#py-ipykernel)
===
Homepage: <https://pypi.python.org/pypi/ipykernel>
Spack package: [py-ipykernel/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/py-ipykernel/package.py)
Versions: 4.5.0, 4.4.1, 4.4.0, 4.3.1, 4.3.0, 4.2.2, 4.2.1, 4.2.0, 4.1.1, 4.1.0
Build Dependencies: [py-traitlets](#py-traitlets), [py-jupyter-client](#py-jupyter-client), [py-pexpect](#py-pexpect), [py-ipython](#py-ipython), [py-tornado](#py-tornado), [python](#python)
Link Dependencies: [python](#python)
Run Dependencies: [py-traitlets](#py-traitlets), [py-jupyter-client](#py-jupyter-client), [py-pexpect](#py-pexpect), [py-ipython](#py-ipython), [py-tornado](#py-tornado), [python](#python)
Description: IPython Kernel for Jupyter

---

py-ipython[¶](#py-ipython)
===
Homepage: <https://pypi.python.org/pypi/ipython>
Spack package: [py-ipython/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/py-ipython/package.py)
Versions: 5.1.0, 3.1.0, 2.3.1
Build Dependencies: [py-traitlets](#py-traitlets), [py-pickleshare](#py-pickleshare), [py-simplegeneric](#py-simplegeneric), [py-pexpect](#py-pexpect), [py-prompt-toolkit](#py-prompt-toolkit), [py-pathlib2](#py-pathlib2), [py-pygments](#py-pygments), [py-backports-shutil-get-terminal-size](#py-backports-shutil-get-terminal-size), [python](#python), [py-decorator](#py-decorator)
Link Dependencies: [python](#python)
Run Dependencies: [py-traitlets](#py-traitlets), [py-pickleshare](#py-pickleshare), [py-simplegeneric](#py-simplegeneric), [py-pexpect](#py-pexpect), [py-prompt-toolkit](#py-prompt-toolkit), [py-pathlib2](#py-pathlib2), [py-pygments](#py-pygments), [py-backports-shutil-get-terminal-size](#py-backports-shutil-get-terminal-size), [python](#python), [py-decorator](#py-decorator)
Description: IPython provides a rich toolkit to help you make the most out of using Python interactively.

---

py-ipython-genutils[¶](#py-ipython-genutils)
===
Homepage: <https://pypi.python.org/pypi/ipython_genutils>
Spack package: [py-ipython-genutils/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/py-ipython-genutils/package.py)
Versions: 0.1.0
Build Dependencies: [python](#python)
Link Dependencies: [python](#python)
Run Dependencies: [python](#python)
Description: Vestigial utilities from IPython

---

py-ipywidgets[¶](#py-ipywidgets)
===
Homepage: <https://github.com/ipython/ipywidgets>
Spack package: [py-ipywidgets/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/py-ipywidgets/package.py)
Versions: 5.2.2
Build Dependencies: [py-traitlets](#py-traitlets), [py-ipython](#py-ipython), [py-ipykernel](#py-ipykernel), [python](#python)
Link Dependencies: [python](#python)
Run Dependencies: [py-traitlets](#py-traitlets), [py-ipython](#py-ipython), [py-ipykernel](#py-ipykernel), [python](#python)
Description: IPython widgets for the Jupyter Notebook

---

py-isort[¶](#py-isort)
===
Homepage: <https://github.com/timothycrosley/isort>
Spack package: [py-isort/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/py-isort/package.py)
Versions: 4.2.15
Build Dependencies: [py-setuptools](#py-setuptools), [python](#python)
Link Dependencies: [python](#python)
Run Dependencies: [python](#python)
Description: A Python utility / library to sort Python imports.

---

py-itsdangerous[¶](#py-itsdangerous)
===
Homepage: <http://github.com/mitsuhiko/itsdangerous>
Spack package: [py-itsdangerous/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/py-itsdangerous/package.py)
Versions: 0.24
Build Dependencies: [py-setuptools](#py-setuptools), [python](#python)
Link Dependencies: [python](#python)
Run Dependencies: [python](#python)
Description: Various helpers to pass trusted data to untrusted environments.
---

py-jdcal[¶](#py-jdcal)
===
Homepage: <http://github.com/phn/jdcal>
Spack package: [py-jdcal/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/py-jdcal/package.py)
Versions: 1.3, 1.2
Build Dependencies: [py-setuptools](#py-setuptools), [python](#python)
Link Dependencies: [python](#python)
Run Dependencies: [python](#python)
Description: Julian dates from proleptic Gregorian and Julian calendars

---

py-jedi[¶](#py-jedi)
===
Homepage: <https://github.com/davidhalter/jedi>
Spack package: [py-jedi/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/py-jedi/package.py)
Versions: 0.10.0, 0.9.0
Build Dependencies: [py-setuptools](#py-setuptools), [python](#python)
Link Dependencies: [python](#python)
Run Dependencies: [python](#python)
Description: An autocompletion tool for Python that can be used for text editors.

---

py-jinja2[¶](#py-jinja2)
===
Homepage: <http://jinja.pocoo.org/>
Spack package: [py-jinja2/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/py-jinja2/package.py)
Versions: 2.9.6, 2.8, 2.7.3, 2.7.2, 2.7.1, 2.7
Build Dependencies: [py-markupsafe](#py-markupsafe), [py-setuptools](#py-setuptools), [python](#python), [py-babel](#py-babel)
Link Dependencies: [python](#python)
Run Dependencies: [py-markupsafe](#py-markupsafe), [py-babel](#py-babel), [python](#python)
Description: Jinja2 is a template engine written in pure Python. It provides a Django inspired non-XML syntax but supports inline expressions and an optional sandboxed environment.

---

py-joblib[¶](#py-joblib)
===
Homepage: <http://packages.python.org/joblib/>
Spack package: [py-joblib/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/py-joblib/package.py)
Versions: 0.10.3, 0.10.2, 0.10.0
Build Dependencies: [python](#python)
Link Dependencies: [python](#python)
Run Dependencies: [python](#python)
Description: Python function as pipeline jobs

---

py-jprops[¶](#py-jprops)
===
Homepage: <https://github.com/mgood/jprops/>
Spack package: [py-jprops/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/py-jprops/package.py)
Versions: 2.0.2
Build Dependencies: [py-setuptools](#py-setuptools), [python](#python)
Link Dependencies: [python](#python)
Run Dependencies: [python](#python)
Description: Java properties file parser for Python

---

py-jpype[¶](#py-jpype)
===
Homepage: <https://github.com/originell/jpype>
Spack package: [py-jpype/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/py-jpype/package.py)
Versions: 0.6.2, 0.6.1, 0.6.0
Build Dependencies: java, [py-setuptools](#py-setuptools), [python](#python)
Link Dependencies: [python](#python)
Run Dependencies: java, [python](#python)
Description: JPype is an effort to allow python programs full access to java class libraries.

---

py-jsonschema[¶](#py-jsonschema)
===
Homepage: <http://github.com/Julian/jsonschema>
Spack package: [py-jsonschema/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/py-jsonschema/package.py)
Versions: 2.5.1
Build Dependencies: [py-functools32](#py-functools32), [py-setuptools](#py-setuptools), [python](#python), [py-vcversioner](#py-vcversioner)
Link Dependencies: [python](#python)
Run Dependencies: [py-functools32](#py-functools32), [py-vcversioner](#py-vcversioner), [python](#python)
Description: Jsonschema: An(other) implementation of JSON Schema for Python.

---

py-junit-xml[¶](#py-junit-xml)
===
Homepage: <https://github.com/kyrus/python-junit-xml>
Spack package: [py-junit-xml/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/py-junit-xml/package.py)
Versions: 1.7
Build Dependencies: [py-setuptools](#py-setuptools), [python](#python), [py-six](#py-six)
Link Dependencies: [python](#python)
Run Dependencies: [python](#python), [py-six](#py-six)
Description: Creates JUnit XML test result documents that can be read by tools such as Jenkins

---

py-jupyter-client[¶](#py-jupyter-client)
===
Homepage: <https://github.com/jupyter/jupyter_client>
Spack package: [py-jupyter-client/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/py-jupyter-client/package.py)
Versions: 4.4.0, 4.3.0, 4.2.2, 4.2.1, 4.2.0, 4.1.1, 4.1.0, 4.0.0
Build Dependencies: [py-traitlets](#py-traitlets), [python](#python), [py-jupyter-core](#py-jupyter-core), [py-zmq](#py-zmq)
Link Dependencies: [python](#python)
Run Dependencies: [py-traitlets](#py-traitlets), [python](#python), [py-jupyter-core](#py-jupyter-core), [py-zmq](#py-zmq)
Description: Jupyter protocol client APIs

---

py-jupyter-console[¶](#py-jupyter-console)
===
Homepage: <https://github.com/jupyter/jupyter_console>
Spack package: [py-jupyter-console/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/py-jupyter-console/package.py)
Versions: 5.0.0, 4.1.1, 4.1.0, 4.0.3, 4.0.2
Build Dependencies: [python](#python), [py-prompt-toolkit](#py-prompt-toolkit), [py-ipython](#py-ipython), [py-pygments](#py-pygments), [py-jupyter-client](#py-jupyter-client), [py-ipykernel](#py-ipykernel)
Link Dependencies: [python](#python)
Run Dependencies: [python](#python), [py-prompt-toolkit](#py-prompt-toolkit), [py-ipython](#py-ipython), [py-pygments](#py-pygments), [py-jupyter-client](#py-jupyter-client), [py-ipykernel](#py-ipykernel)
Description: Jupyter Terminal Console

---
py-jupyter-core[¶](#py-jupyter-core)
===
Homepage: <http://jupyter-core.readthedocs.io/>
Spack package: [py-jupyter-core/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/py-jupyter-core/package.py)
Versions: 4.2.0, 4.1.1, 4.1.0, 4.0.6, 4.0.5, 4.0.4, 4.0.3, 4.0.2, 4.0.1, 4.0
Build Dependencies: [py-traitlets](#py-traitlets), [python](#python)
Link Dependencies: [python](#python)
Run Dependencies: [py-traitlets](#py-traitlets), [python](#python)
Description: Core Jupyter functionality

---

py-jupyter-notebook[¶](#py-jupyter-notebook)
===
Homepage: <https://github.com/jupyter/notebook>
Spack package: [py-jupyter-notebook/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/py-jupyter-notebook/package.py)
Versions: 4.2.3, 4.2.2, 4.2.1, 4.2.0, 4.1.0, 4.0.6, 4.0.5, 4.0.4, 4.0.3, 4.0.2
Build Dependencies: [py-traitlets](#py-traitlets), [py-jupyter-core](#py-jupyter-core), [py-nbconvert](#py-nbconvert), [py-ipywidgets](#py-ipywidgets), [node-js](#node-js), [py-terminado](#py-terminado), [py-nbformat](#py-nbformat), [npm](#npm), [py-jupyter-client](#py-jupyter-client), [py-tornado](#py-tornado), [py-ipython-genutils](#py-ipython-genutils), [py-jinja2](#py-jinja2), [py-jupyter-console](#py-jupyter-console), [py-ipykernel](#py-ipykernel), [python](#python)
Link Dependencies: [python](#python)
Run Dependencies: [py-traitlets](#py-traitlets), [py-jupyter-core](#py-jupyter-core), [py-nbconvert](#py-nbconvert), [py-ipywidgets](#py-ipywidgets), [node-js](#node-js), [py-terminado](#py-terminado), [py-nbformat](#py-nbformat), [py-tornado](#py-tornado), [py-jinja2](#py-jinja2), [py-ipython-genutils](#py-ipython-genutils), [py-jupyter-client](#py-jupyter-client), [py-jupyter-console](#py-jupyter-console), [py-ipykernel](#py-ipykernel), [python](#python)
Description: Jupyter Interactive Notebook

---

py-keras[¶](#py-keras)
===
Homepage: <http://keras.io>
Spack package: [py-keras/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/py-keras/package.py)
Versions: 2.0.3, 1.2.2, 1.2.1, 1.2.0, 1.1.2, 1.1.1, 1.1.0
Build Dependencies: [py-pyyaml](#py-pyyaml), [py-theano](#py-theano), [py-setuptools](#py-setuptools), [python](#python), [py-six](#py-six)
Link Dependencies: [python](#python)
Run Dependencies: [py-pyyaml](#py-pyyaml), [py-theano](#py-theano), [python](#python), [py-six](#py-six)
Description: Deep Learning library for Python. Convnets, recurrent neural networks, and more. Runs on Theano or TensorFlow.

---

py-kiwisolver[¶](#py-kiwisolver)
===
Homepage: <https://github.com/nucleic/kiwi>
Spack package: [py-kiwisolver/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/py-kiwisolver/package.py)
Versions: 1.0.1
Build Dependencies: [py-setuptools](#py-setuptools), [python](#python)
Link Dependencies: [python](#python)
Run Dependencies: [python](#python)
Description: A fast implementation of the Cassowary constraint solver

---

py-lark-parser[¶](#py-lark-parser)
===
Homepage: <https://github.com/lark-parser/lark/>
Spack package: [py-lark-parser/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/py-lark-parser/package.py)
Versions: 0.6.2
Build Dependencies: [py-setuptools](#py-setuptools), [python](#python)
Link Dependencies: [python](#python)
Run Dependencies: [python](#python)
Description: Lark is a modern general-purpose parsing library for Python.
---

py-latexcodec[¶](#py-latexcodec)
===
Homepage: <http://latexcodec.readthedocs.io>
Spack package: [py-latexcodec/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/py-latexcodec/package.py)
Versions: 1.0.4
Build Dependencies: [py-setuptools](#py-setuptools), [python](#python), [py-six](#py-six)
Link Dependencies: [python](#python)
Run Dependencies: [python](#python), [py-six](#py-six)
Description: A lexer and codec to work with LaTeX code in Python.

---

py-lazy[¶](#py-lazy)
===
Homepage: <https://pypi.python.org/pypi/lazy>
Spack package: [py-lazy/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/py-lazy/package.py)
Versions: 1.2
Build Dependencies: [py-setuptools](#py-setuptools), [python](#python)
Link Dependencies: [python](#python)
Run Dependencies: [python](#python)
Description: Lazy attributes for Python objects

---

py-lazy-object-proxy[¶](#py-lazy-object-proxy)
===
Homepage: <https://github.com/ionelmc/python-lazy-object-proxy>
Spack package: [py-lazy-object-proxy/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/py-lazy-object-proxy/package.py)
Versions: 1.3.1
Build Dependencies: [py-setuptools](#py-setuptools), [python](#python)
Link Dependencies: [python](#python)
Run Dependencies: [python](#python)
Description: A fast and thorough lazy object proxy.

---

py-lazy-property[¶](#py-lazy-property)
===
Homepage: <https://github.com/jackmaney/lazy-property>
Spack package: [py-lazy-property/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/py-lazy-property/package.py)
Versions: 0.0.1, 0.0.0
Build Dependencies: [py-setuptools](#py-setuptools), [python](#python)
Link Dependencies: [python](#python)
Run Dependencies: [python](#python)
Description: A package for making properties lazy

---

py-lazyarray[¶](#py-lazyarray)
===
Homepage: <http://bitbucket.org/apdavison/lazyarray/>
Spack package: [py-lazyarray/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/py-lazyarray/package.py)
Versions: 0.2.10, 0.2.8
Build Dependencies: [py-numpy](#py-numpy), [python](#python)
Link Dependencies: [python](#python)
Run Dependencies: [py-numpy](#py-numpy), [python](#python)
Description: a Python package that provides a lazily-evaluated numerical array class, larray, based on and compatible with NumPy arrays.
---

py-libconf[¶](#py-libconf)
===
Homepage: <https://pypi.python.org/pypi/libconf>
Spack package: [py-libconf/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/py-libconf/package.py)
Versions: 1.0.1
Build Dependencies: [py-setuptools](#py-setuptools), [python](#python)
Link Dependencies: [python](#python)
Run Dependencies: [python](#python)
Description: A pure-Python libconfig reader/writer with permissive license

---

py-libensemble[¶](#py-libensemble)
===
Homepage: <https://libensemble.readthedocs.io>
Spack package: [py-libensemble/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/py-libensemble/package.py)
Versions: develop, 0.3.0, 0.2.0, 0.1.0
Build Dependencies: mpi, [py-setuptools](#py-setuptools), [py-petsc4py](#py-petsc4py), [python](#python), [nlopt](#nlopt), [py-numpy](#py-numpy), [py-mpi4py](#py-mpi4py), [py-scipy](#py-scipy)
Link Dependencies: mpi, [python](#python)
Run Dependencies: [python](#python), [py-petsc4py](#py-petsc4py), [nlopt](#nlopt), [py-numpy](#py-numpy), [py-mpi4py](#py-mpi4py), [py-scipy](#py-scipy)
Description: Library for managing ensemble-like collections of computations.

---

py-line-profiler[¶](#py-line-profiler)
===
Homepage: <https://github.com/rkern/line_profiler>
Spack package: [py-line-profiler/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/py-line-profiler/package.py)
Versions: 2.0
Build Dependencies: [py-ipython](#py-ipython), [py-cython](#py-cython), [py-setuptools](#py-setuptools), [python](#python)
Link Dependencies: [python](#python)
Run Dependencies: [py-ipython](#py-ipython), [python](#python)
Description: Line-by-line profiler.

---

py-linecache2[¶](#py-linecache2)
===
Homepage: <https://github.com/testing-cabal/linecache2>
Spack package: [py-linecache2/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/py-linecache2/package.py)
Versions: 1.0.0
Build Dependencies: [py-pbr](#py-pbr), [py-setuptools](#py-setuptools), [python](#python)
Link Dependencies: [python](#python)
Run Dependencies: [py-pbr](#py-pbr), [python](#python)
Description: Backports of the linecache module

---

py-lit[¶](#py-lit)
===
Homepage: <https://pypi.python.org/pypi/lit>
Spack package: [py-lit/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/py-lit/package.py)
Versions: 0.5.0
Build Dependencies: [py-setuptools](#py-setuptools), [python](#python)
Link Dependencies: [python](#python)
Run Dependencies: [python](#python)
Description: lit is a portable tool for executing LLVM and Clang style test suites, summarizing their results, and providing indication of failures. lit is designed to be a lightweight testing tool with as simple a user interface as possible.
---

py-llvmlite[¶](#py-llvmlite)
===
Homepage: <http://llvmlite.readthedocs.io/en/latest/index.html>
Spack package: [py-llvmlite/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/py-llvmlite/package.py)
Versions: 0.23.0, 0.20.0
Build Dependencies: [py-enum34](#py-enum34), [binutils](#binutils), [py-setuptools](#py-setuptools), [python](#python), [llvm](#llvm)
Link Dependencies: [llvm](#llvm), [python](#python)
Run Dependencies: [py-enum34](#py-enum34), [python](#python)
Description: A lightweight LLVM python binding for writing JIT compilers

---

py-lmfit[¶](#py-lmfit)
===
Homepage: <http://lmfit.github.io/lmfit-py/>
Spack package: [py-lmfit/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/py-lmfit/package.py)
Versions: 0.9.5
Build Dependencies: [py-numpy](#py-numpy), [python](#python), [py-scipy](#py-scipy), [py-setuptools](#py-setuptools)
Link Dependencies: [python](#python)
Run Dependencies: [py-numpy](#py-numpy), [python](#python), [py-scipy](#py-scipy)
Description: Least-Squares Minimization with Bounds and Constraints

---

py-localcider[¶](#py-localcider)
===
Homepage: <http://pappulab.github.io/localCIDER>
Spack package: [py-localcider/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/py-localcider/package.py)
Versions: 0.1.14
Build Dependencies: [python](#python), [py-numpy](#py-numpy), [py-matplotlib](#py-matplotlib), [py-setuptools](#py-setuptools), [py-scipy](#py-scipy)
Link Dependencies: [python](#python)
Run Dependencies: [py-numpy](#py-numpy), [py-matplotlib](#py-matplotlib), [python](#python), [py-scipy](#py-scipy)
Description: Tools for calculating sequence properties of disordered proteins

---

py-locket[¶](#py-locket)
===
Homepage: <http://github.com/mwilliamson/locket.py>
Spack package: [py-locket/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/py-locket/package.py)
Versions: 0.2.0
Build Dependencies: [python](#python)
Link Dependencies: [python](#python)
Run Dependencies: [python](#python)
Description: File-based locks for Python for Linux and Windows.

---

py-lockfile[¶](#py-lockfile)
===
Homepage: <https://pypi.python.org/pypi/lockfile>
Spack package: [py-lockfile/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/py-lockfile/package.py)
Versions: 0.10.2
Build Dependencies: [py-setuptools](#py-setuptools), [python](#python)
Link Dependencies: [python](#python)
Run Dependencies: [python](#python)
Description: The lockfile package exports a LockFile class which provides a simple API for locking files. Unlike the Windows msvcrt.locking function, the fcntl.lockf and flock functions, and the deprecated posixfile module, the API is identical across both Unix (including Linux and Mac) and Windows platforms. The lock mechanism relies on the atomic nature of the link (on Unix) and mkdir (on Windows) system calls. An implementation based on SQLite is also provided, more as a demonstration of the possibilities it provides than as production-quality code.

---

py-logilab-common[¶](#py-logilab-common)
===
Homepage: <https://www.logilab.org/project/logilab-common>
Spack package: [py-logilab-common/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/py-logilab-common/package.py)
Versions: 1.2.0
Build Dependencies: [py-setuptools](#py-setuptools), [python](#python), [py-six](#py-six)
Link Dependencies: [python](#python)
Run Dependencies: [python](#python), [py-six](#py-six)
Description: Common modules used by Logilab projects

---

py-lrudict[¶](#py-lrudict)
===
Homepage: <https://github.com/amitdev/lru-dict>
Spack package: [py-lrudict/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/py-lrudict/package.py)
Versions: 1.1.6
Build Dependencies: [py-setuptools](#py-setuptools), [python](#python)
Link Dependencies: [python](#python)
Run Dependencies: [python](#python)
Description: A fast LRU cache

---

py-lxml[¶](#py-lxml)
===
Homepage: <http://lxml.de/>
Spack package: [py-lxml/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/py-lxml/package.py)
Versions: 3.7.3, 2.3
Build Dependencies: [libxslt](#libxslt), [py-cython](#py-cython), [py-setuptools](#py-setuptools), [python](#python), [libxml2](#libxml2)
Link Dependencies: [python](#python)
Run Dependencies: [libxslt](#libxslt), [libxml2](#libxml2), [python](#python)
Description: lxml is the most feature-rich and easy-to-use library for processing XML and HTML in the Python language.

---

py-lzstring[¶](#py-lzstring)
===
Homepage: <https://github.com/gkovacs/lz-string-python>
Spack package: [py-lzstring/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/py-lzstring/package.py)
Versions: 1.0.3
Build Dependencies: [py-future](#py-future), [py-setuptools](#py-setuptools), [python](#python)
Link Dependencies: [python](#python)
Run Dependencies: [py-future](#py-future), [python](#python)
Description: lz-string for python.
--- py-macholib[¶](#py-macholib) === Homepage: * <https://pypi.python.org/pypi/macholibSpack package: * [py-macholib/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/py-macholib/package.py) Versions: 1.8 Build Dependencies: [py-setuptools](#py-setuptools), [python](#python) Link Dependencies: [python](#python) Run Dependencies: [python](#python) Description: Python package for Mach-O header analysis and editing --- py-machotools[¶](#py-machotools) === Homepage: * <https://pypi.python.org/pypi/machotoolsSpack package: * [py-machotools/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/py-machotools/package.py) Versions: 0.2.0 Build Dependencies: [py-macholib](#py-macholib), [py-setuptools](#py-setuptools), [python](#python) Link Dependencies: [python](#python) Run Dependencies: [py-macholib](#py-macholib), [python](#python) Description: Python package for editing Mach-O headers using macholib --- py-macs2[¶](#py-macs2) === Homepage: * <https://github.com/taoliu/MACSSpack package: * [py-macs2/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/py-macs2/package.py) Versions: 2.1.1.20160309 Build Dependencies: [py-numpy](#py-numpy), [py-setuptools](#py-setuptools), [python](#python) Link Dependencies: [python](#python) Run Dependencies: [py-numpy](#py-numpy), [py-setuptools](#py-setuptools), [python](#python) Description: MACS2 Model-based Analysis of ChIP-Seq --- py-maestrowf[¶](#py-maestrowf) === Homepage: * <https://github.com/LLNL/maestrowf/Spack package: * [py-maestrowf/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/py-maestrowf/package.py) Versions: 1.1.2, 1.1.1, 1.1.0, 1.0.1 Build Dependencies: [py-pyyaml](#py-pyyaml), [py-enum34](#py-enum34), [py-setuptools](#py-setuptools), [py-filelock](#py-filelock), [python](#python), [py-tabulate](#py-tabulate), [py-six](#py-six) Link Dependencies: [python](#python) 
Run Dependencies: [py-pyyaml](#py-pyyaml), [py-enum34](#py-enum34), [py-tabulate](#py-tabulate), [py-filelock](#py-filelock), [python](#python), [py-six](#py-six) Description: A general purpose workflow conductor for running multi-step simulation studies. --- py-mako[¶](#py-mako) === Homepage: * <https://pypi.python.org/pypi/makoSpack package: * [py-mako/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/py-mako/package.py) Versions: 1.0.4, 1.0.1 Build Dependencies: [py-markupsafe](#py-markupsafe), [py-setuptools](#py-setuptools), [python](#python) Link Dependencies: [python](#python) Run Dependencies: [py-markupsafe](#py-markupsafe), [python](#python) Test Dependencies: [py-pytest](#py-pytest), [py-mock](#py-mock) Description: A super-fast templating language that borrows the best ideas from the existing templating languages. --- py-mappy[¶](#py-mappy) === Homepage: * <https://pypi.python.org/pypi/mappySpack package: * [py-mappy/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/py-mappy/package.py) Versions: 2.2 Build Dependencies: [zlib](#zlib), [python](#python) Link Dependencies: [zlib](#zlib), [python](#python) Run Dependencies: [python](#python) Description: Mappy provides a convenient interface to minimap2. --- py-markdown[¶](#py-markdown) === Homepage: * <https://pythonhosted.org/Markdown/Spack package: * [py-markdown/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/py-markdown/package.py) Versions: 2.6.11, 2.6.7, 2.6.6, 2.6.5, 2.6.4, 2.6.3, 2.6.2, 2.6.1, 2.6, 2.5.2, 2.5.1, 2.5 Build Dependencies: [py-setuptools](#py-setuptools), [python](#python) Link Dependencies: [python](#python) Run Dependencies: [python](#python) Description: This is a Python implementation of <NAME>'s Markdown. It is almost completely compliant with the reference implementation, though there are a few very minor differences. 
See John's Syntax Documentation for the syntax rules. --- py-markupsafe[¶](#py-markupsafe) === Homepage: * <http://www.pocoo.org/projects/markupsafe/Spack package: * [py-markupsafe/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/py-markupsafe/package.py) Versions: 1.0, 0.23, 0.22, 0.21, 0.20, 0.19 Build Dependencies: [py-setuptools](#py-setuptools), [python](#python) Link Dependencies: [python](#python) Run Dependencies: [python](#python) Description: MarkupSafe is a library for Python that implements a unicode string that is aware of HTML escaping rules and can be used to implement automatic string escaping. It is used by Jinja 2, the Mako templating engine, the Pylons web framework and many more. --- py-matplotlib[¶](#py-matplotlib) === Homepage: * <https://pypi.python.org/pypi/matplotlibSpack package: * [py-matplotlib/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/py-matplotlib/package.py) Versions: 3.0.0, 2.2.3, 2.2.2, 2.0.2, 2.0.0, 1.5.3, 1.5.1, 1.4.3, 1.4.2 Build Dependencies: pil, pkgconfig, [py-pyparsing](#py-pyparsing), [py-pyside](#py-pyside), [py-pytz](#py-pytz), [py-functools32](#py-functools32), [py-numpy](#py-numpy), [py-kiwisolver](#py-kiwisolver), [qhull](#qhull), [py-subprocess32](#py-subprocess32), [qt](#qt), [py-cycler](#py-cycler), [py-setuptools](#py-setuptools), [tk](#tk), [py-dateutil](#py-dateutil), [libpng](#libpng), [py-ipython](#py-ipython), [py-six](#py-six), [image-magick](#image-magick), [python](#python), [freetype](#freetype) Link Dependencies: [libpng](#libpng), [tk](#tk), [image-magick](#image-magick), [freetype](#freetype), [qhull](#qhull), [python](#python), [qt](#qt) Run Dependencies: pil, [ghostscript](#ghostscript), [py-cycler](#py-cycler), [py-pyparsing](#py-pyparsing), [py-pyside](#py-pyside), [py-pytz](#py-pytz), [py-functools32](#py-functools32), [py-numpy](#py-numpy), [py-dateutil](#py-dateutil), [py-subprocess32](#py-subprocess32), 
[py-six](#py-six), [texlive](#texlive), [py-setuptools](#py-setuptools), [py-ipython](#py-ipython), [python](#python), [py-kiwisolver](#py-kiwisolver) Test Dependencies: [py-mock](#py-mock), [py-nose](#py-nose) Description: matplotlib is a python 2D plotting library which produces publication quality figures in a variety of hardcopy formats and interactive environments across platforms. --- py-mccabe[¶](#py-mccabe) === Homepage: * <https://github.com/PyCQA/mccabeSpack package: * [py-mccabe/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/py-mccabe/package.py) Versions: 0.6.1, 0.6.0, 0.5.3, 0.5.2, 0.5.1, 0.5.0, 0.4.0, 0.3.1, 0.3, 0.2.1, 0.2, 0.1 Build Dependencies: [py-setuptools](#py-setuptools), [python](#python) Link Dependencies: [python](#python) Run Dependencies: [python](#python) Test Dependencies: [py-pytest](#py-pytest) Description: Ned's script to check McCabe complexity. --- py-mdanalysis[¶](#py-mdanalysis) === Homepage: * <http://www.mdanalysis.orgSpack package: * [py-mdanalysis/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/py-mdanalysis/package.py) Versions: 0.15.0 Build Dependencies: [py-networkx](#py-networkx), [py-seaborn](#py-seaborn), [py-scipy](#py-scipy), [py-biopython](#py-biopython), [py-numpy](#py-numpy), [py-matplotlib](#py-matplotlib), [py-six](#py-six), [py-cython](#py-cython), [py-setuptools](#py-setuptools), [py-netcdf4](#py-netcdf4), [python](#python), [py-griddataformats](#py-griddataformats) Link Dependencies: [python](#python) Run Dependencies: [py-griddataformats](#py-griddataformats), [py-networkx](#py-networkx), [py-seaborn](#py-seaborn), [python](#python), [hdf5](#hdf5), [py-netcdf4](#py-netcdf4), [py-biopython](#py-biopython), [py-numpy](#py-numpy), [py-matplotlib](#py-matplotlib), [py-scipy](#py-scipy), [py-six](#py-six) Description: MDAnalysis is a Python toolkit to analyze molecular dynamics trajectories generated by a wide range of popular 
simulation packages including DL_Poly, CHARMM, Amber, NAMD, LAMMPS, and Gromacs. (See the lists of supported trajectory formats and topology formats.) --- py-meep[¶](#py-meep) === Homepage: * <https://launchpad.net/python-meepSpack package: * [py-meep/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/py-meep/package.py) Versions: 1.4.2 Build Dependencies: [meep](#meep), mpi, [swig](#swig), [py-scipy](#py-scipy), [py-numpy](#py-numpy), [py-matplotlib](#py-matplotlib), [python](#python) Link Dependencies: [meep](#meep), mpi, [swig](#swig), [python](#python) Run Dependencies: [py-numpy](#py-numpy), [py-matplotlib](#py-matplotlib), [py-scipy](#py-scipy), [python](#python) Description: Python-meep is a wrapper around libmeep. It allows the scripting of Meep-simulations with Python --- py-memory-profiler[¶](#py-memory-profiler) === Homepage: * <https://github.com/fabianp/memory_profilerSpack package: * [py-memory-profiler/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/py-memory-profiler/package.py) Versions: 0.47 Build Dependencies: [python](#python), [py-setuptools](#py-setuptools), [py-psutil](#py-psutil) Link Dependencies: [python](#python) Run Dependencies: [python](#python), [py-psutil](#py-psutil) Description: A module for monitoring memory usage of a python program --- py-methylcode[¶](#py-methylcode) === Homepage: * <https://github.com/brentp/methylcodeSpack package: * [py-methylcode/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/py-methylcode/package.py) Versions: 1.0.0 Build Dependencies: [py-pyfasta](#py-pyfasta), [py-pyparsing](#py-pyparsing), [py-setuptools](#py-setuptools), [py-bsddb3](#py-bsddb3), [py-numpy](#py-numpy), [bowtie](#bowtie), [python](#python), [py-six](#py-six) Link Dependencies: [py-pyfasta](#py-pyfasta), [py-pyparsing](#py-pyparsing), [py-setuptools](#py-setuptools), [py-bsddb3](#py-bsddb3), [py-numpy](#py-numpy), 
[bowtie](#bowtie), [python](#python), [py-six](#py-six) Run Dependencies: [python](#python) Description: MethylCoder is a single program that takes a set of bisulfite-treated reads and outputs per-base methylation data. --- py-mg-rast-tools[¶](#py-mg-rast-tools) === Homepage: * <https://github.com/MG-RAST/MG-RAST-Tools> Spack package: * [py-mg-rast-tools/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/py-mg-rast-tools/package.py) Versions: 2018.04.17 Build Dependencies: [perl-list-moreutils](#perl-list-moreutils), [py-prettytable](#py-prettytable), [r-matr](#r-matr), [perl](#perl), [py-numpy](#py-numpy), [perl-exporter-tiny](#perl-exporter-tiny), [py-scipy](#py-scipy), [perl-libwww-perl](#perl-libwww-perl), [py-poster](#py-poster), [perl-json](#perl-json), [py-requests-toolbelt](#py-requests-toolbelt), [py-setuptools](#py-setuptools), [py-requests](#py-requests), [python](#python), [shocklibs](#shocklibs), [perl-http-message](#perl-http-message) Link Dependencies: [shocklibs](#shocklibs), [python](#python) Run Dependencies: [perl-list-moreutils](#perl-list-moreutils), [py-prettytable](#py-prettytable), [r-matr](#r-matr), [perl](#perl), [py-numpy](#py-numpy), [perl-exporter-tiny](#perl-exporter-tiny), [perl-libwww-perl](#perl-libwww-perl), [py-poster](#py-poster), [perl-json](#perl-json), [py-requests-toolbelt](#py-requests-toolbelt), [py-scipy](#py-scipy), [py-requests](#py-requests), [python](#python), [perl-http-message](#perl-http-message) Description: Repository of scripts and libraries for using the MG-RAST API and MG-RAST data. 
--- py-misopy[¶](#py-misopy) === Homepage: * <http://miso.readthedocs.io/en/fastmiso/Spack package: * [py-misopy/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/py-misopy/package.py) Versions: 0.5.4 Build Dependencies: [samtools](#samtools), [bedtools2](#bedtools2), [py-setuptools](#py-setuptools), [py-scipy](#py-scipy), [py-pysam](#py-pysam), [py-numpy](#py-numpy), [py-matplotlib](#py-matplotlib), [python](#python) Link Dependencies: [samtools](#samtools), [bedtools2](#bedtools2), [python](#python) Run Dependencies: [py-pysam](#py-pysam), [py-numpy](#py-numpy), [py-matplotlib](#py-matplotlib), [py-scipy](#py-scipy), [python](#python) Description: MISO (Mixture of Isoforms) is a probabilistic framework that quantitates the expression level of alternatively spliced genes from RNA-Seq data, and identifies differentially regulated isoforms or exons across samples. --- py-mistune[¶](#py-mistune) === Homepage: * <http://mistune.readthedocs.org/en/latest/Spack package: * [py-mistune/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/py-mistune/package.py) Versions: 0.7.1, 0.7, 0.6, 0.5.1, 0.5 Build Dependencies: [py-setuptools](#py-setuptools), [python](#python) Link Dependencies: [python](#python) Run Dependencies: [python](#python) Description: Python markdown parser --- py-mock[¶](#py-mock) === Homepage: * <https://github.com/testing-cabal/mockSpack package: * [py-mock/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/py-mock/package.py) Versions: 2.0.0, 1.3.0 Build Dependencies: [py-funcsigs](#py-funcsigs), [py-pbr](#py-pbr), [py-setuptools](#py-setuptools), [python](#python), [py-six](#py-six) Link Dependencies: [python](#python) Run Dependencies: [py-funcsigs](#py-funcsigs), [py-pbr](#py-pbr), [python](#python), [py-six](#py-six) Description: mock is a library for testing in Python. 
It allows you to replace parts of your system under test with mock objects and make assertions about how they have been used. --- py-moltemplate[¶](#py-moltemplate) === Homepage: * <http://moltemplate.org> Spack package: * [py-moltemplate/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/py-moltemplate/package.py) Versions: 2.5.8 Build Dependencies: [py-setuptools](#py-setuptools), [python](#python) Link Dependencies: [python](#python) Run Dependencies: [py-setuptools](#py-setuptools), [python](#python) Description: Moltemplate is a general cross-platform text-based molecule builder for LAMMPS. --- py-mongo[¶](#py-mongo) === Homepage: * <http://github.com/mongodb/mongo-python-driver> Spack package: * [py-mongo/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/py-mongo/package.py) Versions: 3.6.0, 3.3.0 Build Dependencies: [py-setuptools](#py-setuptools), [python](#python) Link Dependencies: [python](#python) Run Dependencies: [python](#python) Description: Python driver for MongoDB (<http://www.mongodb.org>). --- py-monotonic[¶](#py-monotonic) === Homepage: * <https://pypi.python.org/pypi/monotonic> Spack package: * [py-monotonic/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/py-monotonic/package.py) Versions: 1.2 Build Dependencies: [py-setuptools](#py-setuptools), [python](#python) Link Dependencies: [python](#python) Run Dependencies: [python](#python) Description: An implementation of time.monotonic() for Python 2 and Python < 3.3 --- py-monty[¶](#py-monty) === Homepage: * <https://github.com/materialsvirtuallab/monty> Spack package: * [py-monty/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/py-monty/package.py) Versions: 0.9.6 Build Dependencies: [py-setuptools](#py-setuptools), [python](#python), [py-six](#py-six) Link Dependencies: [python](#python) Run Dependencies: [python](#python), [py-six](#py-six) Description: 
Monty is the missing complement to Python. --- py-more-itertools[¶](#py-more-itertools) === Homepage: * <https://github.com/erikrose/more-itertoolsSpack package: * [py-more-itertools/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/py-more-itertools/package.py) Versions: 4.3.0, 4.1.0, 2.2 Build Dependencies: [py-setuptools](#py-setuptools), [python](#python), [py-six](#py-six) Link Dependencies: [python](#python) Run Dependencies: [python](#python), [py-six](#py-six) Description: Additions to the standard Python itertools package. --- py-mpi4py[¶](#py-mpi4py) === Homepage: * <https://pypi.python.org/pypi/mpi4pySpack package: * [py-mpi4py/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/py-mpi4py/package.py) Versions: develop, 3.0.0, 2.0.0, 1.3.1 Build Dependencies: mpi, [py-cython](#py-cython), [py-setuptools](#py-setuptools), [python](#python) Link Dependencies: mpi, [python](#python) Run Dependencies: [python](#python) Description: This package provides Python bindings for the Message Passing Interface (MPI) standard. It is implemented on top of the MPI-1/MPI-2 specification and exposes an API which grounds on the standard MPI-2 C++ bindings. --- py-mpmath[¶](#py-mpmath) === Homepage: * <http://mpmath.orgSpack package: * [py-mpmath/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/py-mpmath/package.py) Versions: 1.0.0, 0.19 Build Dependencies: [python](#python) Link Dependencies: [python](#python) Run Dependencies: [python](#python) Description: A Python library for arbitrary-precision floating-point arithmetic. 
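py-mpmath itself may not be installed in a given environment, but the standard library's decimal module illustrates the same idea of arbitrary-precision arithmetic beyond native floats; a minimal sketch (variable names are illustrative):

```python
from decimal import Decimal, getcontext

# Work at 50 significant digits instead of float's ~15-17.
getcontext().prec = 50

# Decimal.sqrt() honors the context precision, unlike math.sqrt().
root_two = Decimal(2).sqrt()

print(root_two)
```

Libraries like mpmath extend this idea to full arbitrary-precision floating-point with transcendental functions, complex numbers, and more.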
--- py-multiprocess[¶](#py-multiprocess) === Homepage: * <https://github.com/uqfoundation/multiprocessSpack package: * [py-multiprocess/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/py-multiprocess/package.py) Versions: 0.70.5, 0.70.4 Build Dependencies: [py-setuptools](#py-setuptools), [python](#python), [py-dill](#py-dill) Link Dependencies: [python](#python) Run Dependencies: [py-dill](#py-dill), [python](#python) Description: Better multiprocessing and multithreading in Python --- py-multiqc[¶](#py-multiqc) === Homepage: * <https://multiqc.infoSpack package: * [py-multiqc/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/py-multiqc/package.py) Versions: 1.5, 1.3, 1.0 Build Dependencies: [py-pyyaml](#py-pyyaml), [py-lzstring](#py-lzstring), [py-future](#py-future), [py-numpy](#py-numpy), [py-matplotlib](#py-matplotlib), [py-jinja2](#py-jinja2), [py-click](#py-click), [py-enum34](#py-enum34), [py-setuptools](#py-setuptools), [py-requests](#py-requests), [py-simplejson](#py-simplejson), [py-spectra](#py-spectra), [py-markdown](#py-markdown), [python](#python) Link Dependencies: [python](#python) Run Dependencies: [py-pyyaml](#py-pyyaml), [py-lzstring](#py-lzstring), [py-future](#py-future), [py-numpy](#py-numpy), [py-matplotlib](#py-matplotlib), [py-jinja2](#py-jinja2), [py-click](#py-click), [py-enum34](#py-enum34), [py-requests](#py-requests), [py-simplejson](#py-simplejson), [py-spectra](#py-spectra), [py-markdown](#py-markdown), [python](#python) Description: MultiQC is a tool to aggregate bioinformatics results across many samples into a single report. It is written in Python and contains modules for a large number of common bioinformatics tools. 
--- py-mx[¶](#py-mx) === Homepage: * <http://www.egenix.com/products/python/mxBase/Spack package: * [py-mx/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/py-mx/package.py) Versions: 3.2.8 Build Dependencies: [python](#python) Link Dependencies: [python](#python) Run Dependencies: [python](#python) Description: The eGenix.com mx Base Distribution for Python is a collection of professional quality software tools which enhance Python's usability in many important areas such as fast text searching, date/time processing and high speed data types. --- py-mxnet[¶](#py-mxnet) === Homepage: * <http://mxnet.ioSpack package: * [py-mxnet/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/py-mxnet/package.py) Versions: 0.10.0.post2 Build Dependencies: [py-numpy](#py-numpy), [py-setuptools](#py-setuptools), [python](#python), [mxnet](#mxnet) Link Dependencies: [python](#python) Run Dependencies: [py-numpy](#py-numpy), [py-setuptools](#py-setuptools), [python](#python), [mxnet](#mxnet) Description: Python binding for DMLC/MXNet. 
--- py-myhdl[¶](#py-myhdl) === Homepage: * <http://www.myhdl.orgSpack package: * [py-myhdl/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/py-myhdl/package.py) Versions: 0.9.0 Build Dependencies: [py-setuptools](#py-setuptools), [python](#python) Link Dependencies: [python](#python) Run Dependencies: [python](#python) Description: Python as a Hardware Description Language --- py-mysqldb1[¶](#py-mysqldb1) === Homepage: * <https://github.com/farcepest/MySQLdb1Spack package: * [py-mysqldb1/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/py-mysqldb1/package.py) Versions: 1.2.5 Build Dependencies: [py-setuptools](#py-setuptools), [python](#python) Link Dependencies: [python](#python) Run Dependencies: [python](#python) Description: Legacy mysql bindings for python --- py-natsort[¶](#py-natsort) === Homepage: * <https://pypi.org/project/natsort/Spack package: * [py-natsort/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/py-natsort/package.py) Versions: 5.2.0, 5.1.1, 5.1.0, 5.0.3, 5.0.2, 5.0.1, 5.0.0, 4.0.4, 4.0.3, 4.0.1 Build Dependencies: [py-setuptools](#py-setuptools), [python](#python) Link Dependencies: [python](#python) Run Dependencies: [python](#python) Description: Simple yet flexible natural sorting in Python. 
--- py-nbconvert[¶](#py-nbconvert) === Homepage: * <https://github.com/jupyter/nbconvertSpack package: * [py-nbconvert/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/py-nbconvert/package.py) Versions: 4.2.0, 4.1.0, 4.0.0 Build Dependencies: [py-traitlets](#py-traitlets), [py-pygments](#py-pygments), [py-entrypoints](#py-entrypoints), [py-jupyter-core](#py-jupyter-core), [py-jinja2](#py-jinja2), [python](#python), [py-mistune](#py-mistune), [py-nbformat](#py-nbformat), [py-tornado](#py-tornado), [py-jupyter-client](#py-jupyter-client), [py-pycurl](#py-pycurl) Link Dependencies: [python](#python) Run Dependencies: [py-traitlets](#py-traitlets), [py-entrypoints](#py-entrypoints), [py-jupyter-core](#py-jupyter-core), [py-jinja2](#py-jinja2), [python](#python), [py-mistune](#py-mistune), [py-nbformat](#py-nbformat), [py-tornado](#py-tornado), [py-jupyter-client](#py-jupyter-client), [py-pygments](#py-pygments) Description: Jupyter Notebook Conversion --- py-nbformat[¶](#py-nbformat) === Homepage: * <https://github.com/jupyter/nbformatSpack package: * [py-nbformat/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/py-nbformat/package.py) Versions: 4.1.0, 4.0.1, 4.0.0 Build Dependencies: [py-traitlets](#py-traitlets), [python](#python), [py-ipython-genutils](#py-ipython-genutils), [py-jsonschema](#py-jsonschema), [py-jupyter-core](#py-jupyter-core) Link Dependencies: [python](#python) Run Dependencies: [py-traitlets](#py-traitlets), [python](#python), [py-ipython-genutils](#py-ipython-genutils), [py-jsonschema](#py-jsonschema), [py-jupyter-core](#py-jupyter-core) Description: The Jupyter Notebook format --- py-neo[¶](#py-neo) === Homepage: * <http://neuralensemble.org/neoSpack package: * [py-neo/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/py-neo/package.py) Versions: 0.5.2, 0.4.1 Build Dependencies: [py-quantities](#py-quantities), 
[py-numpy](#py-numpy), [py-setuptools](#py-setuptools), [python](#python) Link Dependencies: [python](#python) Run Dependencies: [py-quantities](#py-quantities), [py-numpy](#py-numpy), [python](#python) Description: Neo is a package for representing electrophysiology data in Python, together with support for reading a wide range of neurophysiology file formats --- py-nestle[¶](#py-nestle) === Homepage: * <http://kbarbary.github.io/nestle/Spack package: * [py-nestle/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/py-nestle/package.py) Versions: 0.1.1 Build Dependencies: [py-numpy](#py-numpy), [python](#python), [py-scipy](#py-scipy) Link Dependencies: [python](#python) Run Dependencies: [py-numpy](#py-numpy), [python](#python), [py-scipy](#py-scipy) Description: Nested sampling algorithms for evaluating Bayesian evidence. --- py-netcdf4[¶](#py-netcdf4) === Homepage: * <https://github.com/Unidata/netcdf4-pythonSpack package: * [py-netcdf4/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/py-netcdf4/package.py) Versions: 1.2.7, 1.2.3.1 Build Dependencies: [py-cython](#py-cython), [py-setuptools](#py-setuptools), [hdf5](#hdf5), [py-numpy](#py-numpy), [python](#python), [netcdf](#netcdf) Link Dependencies: [netcdf](#netcdf), [python](#python), [hdf5](#hdf5) Run Dependencies: [py-numpy](#py-numpy), [python](#python) Description: Python interface to the netCDF Library. 
--- py-netifaces[¶](#py-netifaces) === Homepage: * <https://bitbucket.org/al45tair/netifacesSpack package: * [py-netifaces/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/py-netifaces/package.py) Versions: 0.10.5 Build Dependencies: [py-setuptools](#py-setuptools), [python](#python) Link Dependencies: [python](#python) Run Dependencies: [python](#python) Description: Portable network interface information --- py-networkx[¶](#py-networkx) === Homepage: * <http://networkx.github.io/Spack package: * [py-networkx/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/py-networkx/package.py) Versions: 2.1, 1.11, 1.10 Build Dependencies: [py-setuptools](#py-setuptools), [python](#python), [py-decorator](#py-decorator) Link Dependencies: [python](#python) Run Dependencies: [python](#python), [py-decorator](#py-decorator) Description: NetworkX is a Python package for the creation, manipulation, and study of the structure, dynamics, and functions of complex networks. --- py-nose[¶](#py-nose) === Homepage: * <https://pypi.python.org/pypi/noseSpack package: * [py-nose/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/py-nose/package.py) Versions: 1.3.7, 1.3.6, 1.3.4 Build Dependencies: [py-setuptools](#py-setuptools), [python](#python) Link Dependencies: [python](#python) Run Dependencies: [python](#python) Description: nose extends the test loading and running features of unittest, making it easier to write, find and run tests. 
--- py-nosexcover[¶](#py-nosexcover) === Homepage: * <https://github.com/cmheisel/nose-xcoverSpack package: * [py-nosexcover/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/py-nosexcover/package.py) Versions: 1.0.11 Build Dependencies: [py-coverage](#py-coverage), [python](#python), [py-setuptools](#py-setuptools), [py-nose](#py-nose) Link Dependencies: [python](#python) Run Dependencies: [py-coverage](#py-coverage), [python](#python), [py-nose](#py-nose) Description: A companion to the built-in nose.plugins.cover, this plugin will write out an XML coverage report to a file named coverage.xml. --- py-numba[¶](#py-numba) === Homepage: * <https://numba.pydata.org/Spack package: * [py-numba/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/py-numba/package.py) Versions: 0.35.0 Build Dependencies: [py-singledispatch](#py-singledispatch), [py-llvmlite](#py-llvmlite), [py-numpy](#py-numpy), [py-funcsigs](#py-funcsigs), [python](#python), [py-argparse](#py-argparse) Link Dependencies: [python](#python) Run Dependencies: [py-singledispatch](#py-singledispatch), [py-llvmlite](#py-llvmlite), [py-numpy](#py-numpy), [py-funcsigs](#py-funcsigs), [python](#python), [py-argparse](#py-argparse) Description: NumPy aware dynamic Python compiler using LLVM --- py-numexpr[¶](#py-numexpr) === Homepage: * <https://pypi.python.org/pypi/numexprSpack package: * [py-numexpr/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/py-numexpr/package.py) Versions: 2.6.5, 2.6.1, 2.5, 2.4.6 Build Dependencies: [py-numpy](#py-numpy), [py-setuptools](#py-setuptools), [python](#python) Link Dependencies: [python](#python) Run Dependencies: [py-numpy](#py-numpy), [python](#python) Description: Fast numerical expression evaluator for NumPy --- py-numexpr3[¶](#py-numexpr3) === Homepage: * <https://github.com/pydata/numexpr/tree/numexpr-3.0Spack package: * 
[py-numexpr3/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/py-numexpr3/package.py) Versions: 3.0.1.a1 Build Dependencies: [py-numpy](#py-numpy), [py-setuptools](#py-setuptools), [python](#python) Link Dependencies: [python](#python) Run Dependencies: [py-numpy](#py-numpy), [python](#python) Description: Numexpr3 is a fast numerical expression evaluator for NumPy. With it, expressions that operate on arrays (like "3*a+4*b") are accelerated and use less memory than doing the same calculation in Python. In addition, its multi-threaded capabilities can make use of all your cores, which may accelerate computations, especially if they are not memory-bounded (e.g. those using transcendental functions). Compared to NumExpr 2.6, functions have been re-written in a fashion such that gcc can auto-vectorize them with SIMD instruction sets such as SSE2 or AVX2, if your processor supports them. Use of a newer version of gcc such as 5.4 is strongly recommended. --- py-numpy[¶](#py-numpy) === Homepage: * <http://www.numpy.org/> Spack package: * [py-numpy/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/py-numpy/package.py) Versions: 1.15.2, 1.15.1, 1.14.3, 1.14.2, 1.14.1, 1.14.0, 1.13.3, 1.13.1, 1.13.0, 1.12.1, 1.12.0, 1.11.2, 1.11.1, 1.11.0, 1.10.4, 1.9.2, 1.9.1 Build Dependencies: blas, [py-setuptools](#py-setuptools), [python](#python), lapack Link Dependencies: blas, [python](#python), lapack Run Dependencies: [python](#python) Test Dependencies: [py-pytest](#py-pytest), [py-nose](#py-nose) Description: NumPy is the fundamental package for scientific computing with Python. 
It contains among other things: a powerful N-dimensional array object, sophisticated (broadcasting) functions, tools for integrating C/C++ and Fortran code, and useful linear algebra, Fourier transform, and random number capabilities --- py-numpydoc[¶](#py-numpydoc) === Homepage: * <https://github.com/numpy/numpydocSpack package: * [py-numpydoc/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/py-numpydoc/package.py) Versions: 0.6.0 Build Dependencies: [py-setuptools](#py-setuptools), [python](#python), [py-sphinx](#py-sphinx) Link Dependencies: [python](#python) Run Dependencies: [python](#python) Description: numpydoc - Numpy's Sphinx extensions --- py-olefile[¶](#py-olefile) === Homepage: * <https://www.decalage.info/python/olefileioSpack package: * [py-olefile/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/py-olefile/package.py) Versions: 0.44 Build Dependencies: [python](#python) Link Dependencies: [python](#python) Run Dependencies: [python](#python) Description: Python package to parse, read and write Microsoft OLE2 files --- py-ont-fast5-api[¶](#py-ont-fast5-api) === Homepage: * <https://github.com/nanoporetech/ont_fast5_apiSpack package: * [py-ont-fast5-api/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/py-ont-fast5-api/package.py) Versions: 0.3.2 Build Dependencies: [py-numpy](#py-numpy), [py-h5py](#py-h5py), [py-setuptools](#py-setuptools), [python](#python) Link Dependencies: [python](#python) Run Dependencies: [py-numpy](#py-numpy), [py-h5py](#py-h5py), [python](#python) Description: This project provides classes and utility functions for working with read fast5 files. It provides an abstraction layer between the underlying h5py library and the various concepts central to read fast5 files, such as "reads", "analyses", "analysis summaries", and "analysis datasets". 
Ideally all interaction with a read fast5 file should be possible via this API, without having to directly invoke the h5py library. --- py-openpmd-validator[¶](#py-openpmd-validator) === Homepage: * <http://www.openPMD.orgSpack package: * [py-openpmd-validator/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/py-openpmd-validator/package.py) Versions: 1.0.0.2 Build Dependencies: [py-dateutil](#py-dateutil), [py-numpy](#py-numpy), [py-h5py](#py-h5py), [py-setuptools](#py-setuptools), [python](#python) Link Dependencies: [python](#python) Run Dependencies: [py-dateutil](#py-dateutil), [py-numpy](#py-numpy), [py-h5py](#py-h5py), [python](#python) Description: Validator and Example Scripts for the openPMD markup. openPMD is an open standard for particle-mesh data files. --- py-openpyxl[¶](#py-openpyxl) === Homepage: * <http://openpyxl.readthedocs.org/Spack package: * [py-openpyxl/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/py-openpyxl/package.py) Versions: 2.4.5, 2.2.0-b1 Build Dependencies: [python](#python), [py-et-xmlfile](#py-et-xmlfile), [py-setuptools](#py-setuptools), [py-jdcal](#py-jdcal) Link Dependencies: [python](#python) Run Dependencies: [py-et-xmlfile](#py-et-xmlfile), [python](#python), [py-jdcal](#py-jdcal) Description: A Python library to read/write Excel 2010 xlsx/xlsm files --- py-openslide-python[¶](#py-openslide-python) === Homepage: * <https://github.com/openslide/openslide-pythonSpack package: * [py-openslide-python/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/py-openslide-python/package.py) Versions: 1.1.1 Build Dependencies: [openslide](#openslide), [py-setuptools](#py-setuptools), [python](#python), [py-pillow](#py-pillow) Link Dependencies: [openslide](#openslide), [python](#python) Run Dependencies: [py-pillow](#py-pillow), [python](#python) Description: OpenSlide Python is a Python interface to the OpenSlide 
library. --- py-opentuner[¶](#py-opentuner) === Homepage: * <http://opentuner.org/Spack package: * [py-opentuner/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/py-opentuner/package.py) Versions: 0.8.0 Build Dependencies: [py-pysqlite](#py-pysqlite), [py-sqlalchemy](#py-sqlalchemy), [py-argparse](#py-argparse), [py-numpy](#py-numpy), [py-fn](#py-fn), [python](#python), [py-setuptools](#py-setuptools) Link Dependencies: [python](#python) Run Dependencies: [py-pysqlite](#py-pysqlite), [py-sqlalchemy](#py-sqlalchemy), [py-numpy](#py-numpy), [py-fn](#py-fn), [python](#python), [py-argparse](#py-argparse) Description: An extensible framework for program autotuning. --- py-ordereddict[¶](#py-ordereddict) === Homepage: * <https://pypi.python.org/pypi/ordereddictSpack package: * [py-ordereddict/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/py-ordereddict/package.py) Versions: 1.1 Build Dependencies: [python](#python) Link Dependencies: [python](#python) Run Dependencies: [python](#python) Description: A drop-in substitute for Py2.7's new collections. OrderedDict that works in Python 2.4-2.6. --- py-oset[¶](#py-oset) === Homepage: * <https://pypi.python.org/pypi/osetSpack package: * [py-oset/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/py-oset/package.py) Versions: 0.1.3 Build Dependencies: [py-setuptools](#py-setuptools), [python](#python) Link Dependencies: [python](#python) Run Dependencies: [python](#python) Description: Set that remembers original insertion order. 
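The behavior py-oset provides can be approximated in modern Python without any dependency, since dict keys preserve insertion order (guaranteed since Python 3.7); a minimal sketch (the helper name is illustrative):

```python
def ordered_unique(items):
    # dict.fromkeys() drops duplicates while keeping the order in which
    # each key was first inserted, i.e. a set that remembers order.
    return list(dict.fromkeys(items))

print(ordered_unique([3, 1, 3, 2, 1]))
```

A dedicated ordered-set type like oset additionally supports mutation and set algebra; this one-liner only covers deduplication with stable order.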
--- py-owslib[¶](#py-owslib) === Homepage: * <http://geopython.github.io/OWSLib/#installation> Spack package: * [py-owslib/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/py-owslib/package.py) Versions: 0.16.0 Build Dependencies: [py-pytz](#py-pytz), [py-setuptools](#py-setuptools), [py-requests](#py-requests), [py-dateutil](#py-dateutil), [py-proj](#py-proj), [python](#python) Link Dependencies: [python](#python) Run Dependencies: [py-dateutil](#py-dateutil), [py-requests](#py-requests), [py-proj](#py-proj), [python](#python), [py-pytz](#py-pytz) Description: OWSLib is a Python package for client programming with Open Geospatial Consortium (OGC) web service (hence OWS) interface standards, and their related content models. --- py-packaging[¶](#py-packaging) === Homepage: * <https://github.com/pypa/packaging> Spack package: * [py-packaging/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/py-packaging/package.py) Versions: 17.1, 16.8 Build Dependencies: [python](#python) Link Dependencies: [python](#python) Run Dependencies: [py-pyparsing](#py-pyparsing), [python](#python), [py-six](#py-six) Description: Core utilities for Python packages. --- py-palettable[¶](#py-palettable) === Homepage: * <https://jiffyclub.github.io/palettable/> Spack package: * [py-palettable/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/py-palettable/package.py) Versions: 3.0.0 Build Dependencies: [py-setuptools](#py-setuptools), [python](#python) Link Dependencies: [python](#python) Run Dependencies: [python](#python) Description: Color palettes for Python. 
--- py-pandas[¶](#py-pandas) === Homepage: * <http://pandas.pydata.org/Spack package: * [py-pandas/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/py-pandas/package.py) Versions: 0.23.4, 0.21.1, 0.19.2, 0.19.0, 0.18.0, 0.16.1, 0.16.0 Build Dependencies: [py-numexpr](#py-numexpr), [py-cython](#py-cython), [py-setuptools](#py-setuptools), [py-pytz](#py-pytz), [py-numpy](#py-numpy), [py-dateutil](#py-dateutil), [python](#python), [py-bottleneck](#py-bottleneck) Link Dependencies: [python](#python) Run Dependencies: [py-numexpr](#py-numexpr), [py-pytz](#py-pytz), [py-numpy](#py-numpy), [py-dateutil](#py-dateutil), [python](#python), [py-bottleneck](#py-bottleneck) Description: pandas is a Python package providing fast, flexible, and expressive data structures designed to make working with relational or labeled data both easy and intuitive. It aims to be the fundamental high-level building block for doing practical, real world data analysis in Python. Additionally, it has the broader goal of becoming the most powerful and flexible open source data analysis / manipulation tool available in any language. 
--- py-paramiko[¶](#py-paramiko) === Homepage: * <http://www.paramiko.org/Spack package: * [py-paramiko/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/py-paramiko/package.py) Versions: 2.1.2 Build Dependencies: [py-cryptography](#py-cryptography), [py-pyasn1](#py-pyasn1), [py-setuptools](#py-setuptools), [python](#python) Link Dependencies: [python](#python) Run Dependencies: [py-cryptography](#py-cryptography), [py-pyasn1](#py-pyasn1), [python](#python) Description: SSH2 protocol library --- py-partd[¶](#py-partd) === Homepage: * <http://github.com/dask/partd/Spack package: * [py-partd/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/py-partd/package.py) Versions: 0.3.8 Build Dependencies: [py-locket](#py-locket), [py-setuptools](#py-setuptools), [python](#python), [py-toolz](#py-toolz) Link Dependencies: [python](#python) Run Dependencies: [py-locket](#py-locket), [python](#python), [py-toolz](#py-toolz) Description: Key-value byte store with appendable values. 
--- py-pathlib2[¶](#py-pathlib2) === Homepage: * <https://pypi.python.org/pypi/pathlib2> Spack package: * [py-pathlib2/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/py-pathlib2/package.py) Versions: 2.3.2, 2.1.0 Build Dependencies: [python](#python), [py-setuptools](#py-setuptools), [py-scandir](#py-scandir), [py-six](#py-six) Link Dependencies: [python](#python) Run Dependencies: [python](#python), [py-scandir](#py-scandir), [py-six](#py-six) Description: Backport of pathlib from Python 3.4 --- py-pathos[¶](#py-pathos) === Homepage: * <https://github.com/uqfoundation/pathos> Spack package: * [py-pathos/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/py-pathos/package.py) Versions: 0.2.0 Build Dependencies: [py-multiprocess](#py-multiprocess), [py-setuptools](#py-setuptools), [py-ppft](#py-ppft), [py-pox](#py-pox), [py-dill](#py-dill), [python](#python) Link Dependencies: [python](#python) Run Dependencies: [py-pox](#py-pox), [py-multiprocess](#py-multiprocess), [python](#python), [py-dill](#py-dill), [py-ppft](#py-ppft) Description: Parallel graph management and execution in heterogeneous computing --- py-pathspec[¶](#py-pathspec) === Homepage: * <https://pypi.python.org/pypi/pathspec> Spack package: * [py-pathspec/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/py-pathspec/package.py) Versions: 0.3.4 Build Dependencies: [py-setuptools](#py-setuptools), [python](#python) Link Dependencies: [python](#python) Run Dependencies: [python](#python) Description: pathspec is a utility library for pattern matching of file paths, supporting gitignore-style patterns. 
---
py-patsy[¶](#py-patsy)
===
Homepage: <https://github.com/pydata/patsy>
Spack package: [py-patsy/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/py-patsy/package.py)
Versions: 0.4.1
Build Dependencies: [py-numpy](#py-numpy), [py-scipy](#py-scipy), [py-setuptools](#py-setuptools), [python](#python), [py-six](#py-six)
Link Dependencies: [python](#python)
Run Dependencies: [py-numpy](#py-numpy), [py-scipy](#py-scipy), [python](#python), [py-six](#py-six)
Test Dependencies: [py-nose](#py-nose)
Description: A Python package for describing statistical models and for building design matrices.

---
py-pbr[¶](#py-pbr)
===
Homepage: <https://pypi.python.org/pypi/pbr>
Spack package: [py-pbr/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/py-pbr/package.py)
Versions: 3.1.1, 2.0.0, 1.10.0, 1.8.1
Build Dependencies: [py-enum34](#py-enum34), [py-setuptools](#py-setuptools), [python](#python)
Link Dependencies: [python](#python)
Run Dependencies: [python](#python)
Description: PBR is a library that injects some useful and sensible default behaviors into your setuptools run.

---
py-pep8-naming[¶](#py-pep8-naming)
===
Homepage: <https://pypi.org/project/pep8-naming/>
Spack package: [py-pep8-naming/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/py-pep8-naming/package.py)
Versions: 0.7.0
Build Dependencies: [python](#python)
Link Dependencies: [python](#python)
Run Dependencies: [py-flake8-polyfill](#py-flake8-polyfill), [python](#python)
Description: Check PEP-8 naming conventions, plugin for flake8.
---
py-perf[¶](#py-perf)
===
Homepage: <https://pypi.python.org/pypi/perf>
Spack package: [py-perf/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/py-perf/package.py)
Versions: 1.5.1
Build Dependencies: [py-setuptools](#py-setuptools), [python](#python), [py-six](#py-six)
Link Dependencies: [python](#python)
Run Dependencies: [python](#python), [py-six](#py-six)
Description: The Python perf module is a toolkit to write, run and analyze benchmarks.

---
py-performance[¶](#py-performance)
===
Homepage: <http://pyperformance.readthedocs.io/>
Spack package: [py-performance/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/py-performance/package.py)
Versions: 0.6.1, 0.6.0
Build Dependencies: [py-perf](#py-perf), [py-setuptools](#py-setuptools), [python](#python), [py-six](#py-six)
Link Dependencies: [python](#python)
Run Dependencies: [py-perf](#py-perf), [py-setuptools](#py-setuptools), [python](#python), [py-six](#py-six)
Description: The performance project is intended to be an authoritative source of benchmarks for all Python implementations. The focus is on real-world benchmarks, rather than synthetic benchmarks, using whole applications when possible.

---
py-periodictable[¶](#py-periodictable)
===
Homepage: <https://pypi.python.org/pypi/periodictable>
Spack package: [py-periodictable/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/py-periodictable/package.py)
Versions: 1.4.1
Build Dependencies: [py-numpy](#py-numpy), [py-pyparsing](#py-pyparsing), [py-setuptools](#py-setuptools), [python](#python)
Link Dependencies: [python](#python)
Run Dependencies: [py-numpy](#py-numpy), [py-pyparsing](#py-pyparsing), [python](#python)
Description: Extensible periodic table of the elements, with support for mass, density, and X-ray/neutron scattering information.
---
py-petsc4py[¶](#py-petsc4py)
===
Homepage: <https://pypi.python.org/pypi/petsc4py>
Spack package: [py-petsc4py/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/py-petsc4py/package.py)
Versions: 3.8.1, 3.8.0, 3.7.0
Build Dependencies: [py-numpy](#py-numpy), [petsc](#petsc), [py-setuptools](#py-setuptools), [python](#python), [py-mpi4py](#py-mpi4py)
Link Dependencies: [petsc](#petsc), [python](#python)
Run Dependencies: [py-numpy](#py-numpy), [py-mpi4py](#py-mpi4py), [python](#python)
Description: This package provides Python bindings for the PETSc package.

---
py-pexpect[¶](#py-pexpect)
===
Homepage: <https://pypi.python.org/pypi/pexpect>
Spack package: [py-pexpect/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/py-pexpect/package.py)
Versions: 4.2.1, 3.3
Build Dependencies: [py-ptyprocess](#py-ptyprocess), [python](#python)
Link Dependencies: [python](#python)
Run Dependencies: [py-ptyprocess](#py-ptyprocess), [python](#python)
Description: Pexpect allows easy control of interactive console applications.

---
py-phonopy[¶](#py-phonopy)
===
Homepage: <http://atztogo.github.io/phonopy/index.html>
Spack package: [py-phonopy/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/py-phonopy/package.py)
Versions: 1.10.0
Build Dependencies: [py-pyyaml](#py-pyyaml), [py-numpy](#py-numpy), [py-matplotlib](#py-matplotlib), [python](#python), [py-scipy](#py-scipy)
Link Dependencies: [python](#python)
Run Dependencies: [py-pyyaml](#py-pyyaml), [py-numpy](#py-numpy), [py-matplotlib](#py-matplotlib), [python](#python), [py-scipy](#py-scipy)
Description: Phonopy is an open source package for phonon calculations at harmonic and quasi-harmonic levels.
---
py-pickleshare[¶](#py-pickleshare)
===
Homepage: <https://pypi.python.org/pypi/pickleshare>
Spack package: [py-pickleshare/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/py-pickleshare/package.py)
Versions: 0.7.4
Build Dependencies: [py-setuptools](#py-setuptools), [python](#python)
Link Dependencies: [python](#python)
Run Dependencies: [python](#python)
Description: Tiny 'shelve'-like database with concurrency support

---
py-picrust[¶](#py-picrust)
===
Homepage: <http://picrust.github.io/picrust/index.html>
Spack package: [py-picrust/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/py-picrust/package.py)
Versions: 1.1.3
Build Dependencies: [py-cogent](#py-cogent), [py-biom-format](#py-biom-format), [py-future](#py-future), [py-numpy](#py-numpy), [python](#python), [py-setuptools](#py-setuptools)
Link Dependencies: [python](#python)
Run Dependencies: [py-future](#py-future), [py-numpy](#py-numpy), [py-cogent](#py-cogent), [py-biom-format](#py-biom-format), [python](#python)
Description: Bioinformatics software package designed to predict metagenome functional content from marker gene surveys and full genomes.

---
py-pil[¶](#py-pil)
===
Homepage: <http://www.pythonware.com/products/pil/>
Spack package: [py-pil/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/py-pil/package.py)
Versions: 1.1.7
Build Dependencies: [python](#python)
Link Dependencies: [python](#python)
Run Dependencies: [python](#python)
Description: The Python Imaging Library (PIL) adds image processing capabilities to your Python interpreter. This library supports many file formats, and provides powerful image processing and graphics capabilities.
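pickleshare's "shelve-like" semantics are a persistent dict whose values are pickled to disk. The stdlib `shelve` module, shown below, illustrates the same usage pattern without requiring pickleshare itself (which additionally aims at safe concurrent access).

```python
# Persistent dict semantics via the stdlib shelve module: keys are strings,
# values are pickled, and data survives across re-opens of the database.
import os
import shelve
import tempfile

tmpdir = tempfile.mkdtemp()
path = os.path.join(tmpdir, "demo_db")

with shelve.open(path) as db:   # first open: store a value
    db["counts"] = {"a": 1, "b": 2}

with shelve.open(path) as db:   # second open: the value persisted
    restored = db["counts"]

print(restored)  # {'a': 1, 'b': 2}
```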
---
py-pillow[¶](#py-pillow)
===
Homepage: <https://python-pillow.org/>
Spack package: [py-pillow/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/py-pillow/package.py)
Versions: 5.1.0, 3.2.0, 3.0.0
Build Dependencies: [zlib](#zlib), jpeg, [py-setuptools](#py-setuptools), [lcms](#lcms), [openjpeg](#openjpeg), [libtiff](#libtiff), [binutils](#binutils), [python](#python), [freetype](#freetype)
Link Dependencies: [zlib](#zlib), jpeg, [lcms](#lcms), [openjpeg](#openjpeg), [libtiff](#libtiff), [python](#python), [freetype](#freetype)
Run Dependencies: [python](#python)
Description: Pillow is a fork of the Python Imaging Library (PIL). It adds image processing capabilities to your Python interpreter. This library supports many file formats, and provides powerful image processing and graphics capabilities.

---
py-pip[¶](#py-pip)
===
Homepage: <https://pypi.python.org/pypi/pip>
Spack package: [py-pip/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/py-pip/package.py)
Versions: 10.0.1, 9.0.1
Build Dependencies: [py-setuptools](#py-setuptools), [python](#python)
Link Dependencies: [python](#python)
Run Dependencies: [py-setuptools](#py-setuptools), [python](#python)
Description: The PyPA recommended tool for installing Python packages.
---
py-pipits[¶](#py-pipits)
===
Homepage: <https://github.com/hsgweon/pipits>
Spack package: [py-pipits/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/py-pipits/package.py)
Versions: 1.5.0
Build Dependencies: java, [rdp-classifier](#rdp-classifier), [py-biom-format](#py-biom-format), [itsx](#itsx), [hmmer](#hmmer), [py-numpy](#py-numpy), [fastx-toolkit](#fastx-toolkit), [vsearch](#vsearch), [python](#python)
Link Dependencies: [rdp-classifier](#rdp-classifier), [itsx](#itsx), [hmmer](#hmmer), [fastx-toolkit](#fastx-toolkit), [vsearch](#vsearch), [python](#python)
Run Dependencies: java, [py-numpy](#py-numpy), [py-biom-format](#py-biom-format), [python](#python)
Description: Automated pipeline for analyses of fungal ITS sequences from the Illumina sequencing platform.

---
py-pkgconfig[¶](#py-pkgconfig)
===
Homepage: <http://github.com/matze/pkgconfig>
Spack package: [py-pkgconfig/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/py-pkgconfig/package.py)
Versions: 1.2.2
Build Dependencies: [python](#python), pkgconfig, [py-setuptools](#py-setuptools), [py-nose](#py-nose)
Link Dependencies: [python](#python)
Run Dependencies: pkgconfig, [python](#python)
Test Dependencies: [py-nose](#py-nose)
Description: Interface Python with pkg-config.
---
py-plotly[¶](#py-plotly)
===
Homepage: <https://plot.ly/python/>
Spack package: [py-plotly/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/py-plotly/package.py)
Versions: 2.2.0
Build Dependencies: [py-requests](#py-requests), [py-pytz](#py-pytz), [py-setuptools](#py-setuptools), [python](#python), [py-six](#py-six)
Link Dependencies: [python](#python)
Run Dependencies: [py-requests](#py-requests), [py-pytz](#py-pytz), [python](#python), [py-six](#py-six)
Description: An interactive, browser-based graphing library for Python

---
py-pluggy[¶](#py-pluggy)
===
Homepage: <https://github.com/pytest-dev/pluggy>
Spack package: [py-pluggy/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/py-pluggy/package.py)
Versions: 0.7.1, 0.6.0
Build Dependencies: [py-setuptools-scm](#py-setuptools-scm), [py-setuptools](#py-setuptools), [python](#python)
Link Dependencies: [python](#python)
Run Dependencies: [python](#python)
Description: Plugin and hook calling mechanisms for python.

---
py-ply[¶](#py-ply)
===
Homepage: <http://www.dabeaz.com/ply>
Spack package: [py-ply/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/py-ply/package.py)
Versions: 3.11, 3.8
Build Dependencies: [python](#python)
Link Dependencies: [python](#python)
Run Dependencies: [python](#python)
Description: PLY is nothing more than a straightforward lex/yacc implementation.

---
py-pmw[¶](#py-pmw)
===
Homepage: <https://pypi.python.org/pypi/Pmw>
Spack package: [py-pmw/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/py-pmw/package.py)
Versions: 2.0.0
Build Dependencies: [python](#python)
Link Dependencies: [python](#python)
Run Dependencies: [python](#python)
Description: Pmw is a toolkit for building high-level compound widgets, or megawidgets, constructed using other widgets as component parts.
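The "plugin and hook calling mechanism" that pluggy provides can be sketched as a registry: plugins register implementations for named hooks, and calling a hook runs every registered implementation and collects the results. The `HookRegistry` below is a hypothetical illustration of that pattern, not pluggy's actual API.

```python
# Minimal sketch of a plugin/hook mechanism (hypothetical `HookRegistry`;
# pluggy's real API uses hookspec/hookimpl markers and a PluginManager).

class HookRegistry:
    def __init__(self):
        self._hooks = {}

    def register(self, hook_name, func):
        self._hooks.setdefault(hook_name, []).append(func)

    def call(self, hook_name, **kwargs):
        # Every registered implementation runs; results are collected into
        # a list, mirroring how pluggy gathers hook results.
        return [f(**kwargs) for f in self._hooks.get(hook_name, [])]

registry = HookRegistry()
registry.register("greet", lambda name: f"hello {name}")
registry.register("greet", lambda name: f"hi {name}")
print(registry.call("greet", name="spack"))  # ['hello spack', 'hi spack']
```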
---
py-poster[¶](#py-poster)
===
Homepage: <https://pypi.org/project/poster/>
Spack package: [py-poster/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/py-poster/package.py)
Versions: 0.8.1
Build Dependencies: [py-setuptools](#py-setuptools), [python](#python)
Link Dependencies: [python](#python)
Run Dependencies: [python](#python)
Description: Streaming HTTP uploads and multipart/form-data encoding.

---
py-pox[¶](#py-pox)
===
Homepage: <https://github.com/uqfoundation/pox>
Spack package: [py-pox/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/py-pox/package.py)
Versions: 0.2.3, 0.2.2, 0.2.1
Build Dependencies: [py-setuptools](#py-setuptools), [python](#python)
Link Dependencies: [python](#python)
Run Dependencies: [python](#python)
Description: Utilities for filesystem exploration and automated builds.

---
py-ppft[¶](#py-ppft)
===
Homepage: <https://github.com/uqfoundation/ppft>
Spack package: [py-ppft/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/py-ppft/package.py)
Versions: 1.6.4.7.1, 1.6.4.6, 1.6.4.5
Build Dependencies: [py-six](#py-six), [py-dill](#py-dill), [python](#python), [py-setuptools](#py-setuptools)
Link Dependencies: [python](#python)
Run Dependencies: [py-dill](#py-dill), [python](#python), [py-six](#py-six)
Description: Distributed and parallel python

---
py-prettytable[¶](#py-prettytable)
===
Homepage: <https://code.google.com/archive/p/prettytable/>
Spack package: [py-prettytable/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/py-prettytable/package.py)
Versions: 0.7.2
Build Dependencies: [py-setuptools](#py-setuptools), [python](#python)
Link Dependencies: [python](#python)
Run Dependencies: [python](#python)
Description: PrettyTable is a simple Python library designed to make it quick and easy to represent tabular data in visually appealing ASCII tables.
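The kind of ASCII-table output PrettyTable produces can be approximated in a few lines of plain Python; the `ascii_table` helper below is a standalone sketch of the idea, not prettytable's API.

```python
# Minimal ASCII table renderer (hypothetical `ascii_table`; PrettyTable's
# real API builds tables via add_row/add_column and many style options).

def ascii_table(headers, rows):
    cols = [headers] + [[str(c) for c in row] for row in rows]
    # Column width = widest cell in that column, header included.
    widths = [max(len(r[i]) for r in cols) for i in range(len(headers))]
    sep = "+" + "+".join("-" * (w + 2) for w in widths) + "+"

    def fmt(row):
        return "| " + " | ".join(c.ljust(w) for c, w in zip(row, widths)) + " |"

    lines = [sep, fmt(headers), sep] + [fmt(r) for r in cols[1:]] + [sep]
    return "\n".join(lines)

table = ascii_table(["package", "version"],
                    [["py-pox", "0.2.3"], ["py-ppft", "1.6.4.7.1"]])
print(table)
```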
---
py-progress[¶](#py-progress)
===
Homepage: <https://github.com/verigak/progress/>
Spack package: [py-progress/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/py-progress/package.py)
Versions: 1.4
Build Dependencies: [py-setuptools](#py-setuptools), [python](#python)
Link Dependencies: [python](#python)
Run Dependencies: [python](#python)
Description: Easy progress reporting for Python

---
py-proj[¶](#py-proj)
===
Homepage: <http://jswhit.github.io/pyproj/>
Spack package: [py-proj/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/py-proj/package.py)
Versions: 1.9.5.1.1, 1.9.5.1
Build Dependencies: [py-cython](#py-cython), [py-setuptools](#py-setuptools), [python](#python)
Link Dependencies: [python](#python)
Run Dependencies: [python](#python)
Description: Python interface to the PROJ.4 Library.

---
py-projectq[¶](#py-projectq)
===
Homepage: <https://projectq.ch>
Spack package: [py-projectq/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/py-projectq/package.py)
Versions: develop, 0.3.6
Build Dependencies: [py-setuptools](#py-setuptools), [py-requests](#py-requests), [py-future](#py-future), [py-numpy](#py-numpy), [py-pybind11](#py-pybind11), [py-scipy](#py-scipy), [python](#python)
Link Dependencies: [python](#python)
Run Dependencies: [py-scipy](#py-scipy), [py-requests](#py-requests), [py-future](#py-future), [py-numpy](#py-numpy), [py-pybind11](#py-pybind11), [python](#python)
Test Dependencies: [py-pytest](#py-pytest)
Description: ProjectQ is an open-source software framework for quantum computing started at ETH Zurich. It allows users to implement their quantum programs in Python using a powerful and intuitive syntax. ProjectQ can then translate these programs to any type of back-end, be it a simulator run on a classical computer or an actual quantum chip.
---
py-prompt-toolkit[¶](#py-prompt-toolkit)
===
Homepage: <https://pypi.python.org/pypi/prompt_toolkit>
Spack package: [py-prompt-toolkit/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/py-prompt-toolkit/package.py)
Versions: 1.0.9
Build Dependencies: [py-wcwidth](#py-wcwidth), [py-setuptools](#py-setuptools), [python](#python), [py-six](#py-six)
Link Dependencies: [python](#python)
Run Dependencies: [py-wcwidth](#py-wcwidth), [python](#python), [py-six](#py-six)
Description: Library for building powerful interactive command lines in Python

---
py-protobuf[¶](#py-protobuf)
===
Homepage: <https://developers.google.com/protocol-buffers/>
Spack package: [py-protobuf/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/py-protobuf/package.py)
Versions: 3.5.2.post1, 3.5.2, 3.5.1, 3.0.0b2, 2.6.1, 2.5.0, 2.4.1, 2.3.0
Build Dependencies: [py-setuptools](#py-setuptools), [python](#python), [protobuf](#protobuf)
Link Dependencies: [python](#python), [protobuf](#protobuf)
Run Dependencies: [python](#python)
Description: Protocol buffers are Google's language-neutral, platform-neutral, extensible mechanism for serializing structured data - think XML, but smaller, faster, and simpler. You define how you want your data to be structured once, then you can use special generated source code to easily write and read your structured data to and from a variety of data streams and using a variety of languages.
---
py-psutil[¶](#py-psutil)
===
Homepage: <https://pypi.python.org/pypi/psutil>
Spack package: [py-psutil/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/py-psutil/package.py)
Versions: 5.4.5, 5.0.1
Build Dependencies: [py-setuptools](#py-setuptools), [python](#python)
Link Dependencies: [python](#python)
Run Dependencies: [python](#python)
Description: psutil is a cross-platform library for retrieving information on running processes and system utilization (CPU, memory, disks, network) in Python.

---
py-psyclone[¶](#py-psyclone)
===
Homepage: <https://github.com/stfc/PSyclone>
Spack package: [py-psyclone/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/py-psyclone/package.py)
Versions: develop, 1.5.1
Build Dependencies: [py-fparser](#py-fparser), [py-pyparsing](#py-pyparsing), [py-setuptools](#py-setuptools), [python](#python)
Link Dependencies: [python](#python)
Run Dependencies: [py-fparser](#py-fparser), [py-pyparsing](#py-pyparsing), [python](#python)
Test Dependencies: [py-numpy](#py-numpy), [py-pytest](#py-pytest), [py-nose](#py-nose)
Description: Code generation for the PSyKAl framework from the GungHo project, as used by the LFRic model at the UK Met Office.
---
py-ptyprocess[¶](#py-ptyprocess)
===
Homepage: <https://pypi.python.org/pypi/ptyprocess>
Spack package: [py-ptyprocess/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/py-ptyprocess/package.py)
Versions: 0.5.1
Build Dependencies: [python](#python)
Link Dependencies: [python](#python)
Run Dependencies: [python](#python)
Description: Run a subprocess in a pseudo terminal

---
py-pudb[¶](#py-pudb)
===
Homepage: <http://mathema.tician.de/software/pudb>
Spack package: [py-pudb/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/py-pudb/package.py)
Versions: 2017.1.1, 2016.2
Build Dependencies: [py-urwid](#py-urwid), [py-setuptools](#py-setuptools), [python](#python), [py-pygments](#py-pygments)
Link Dependencies: [python](#python)
Run Dependencies: [py-urwid](#py-urwid), [py-setuptools](#py-setuptools), [python](#python), [py-pygments](#py-pygments)
Description: Full-screen console debugger for Python

---
py-py[¶](#py-py)
===
Homepage: <http://pylib.readthedocs.io/en/latest/>
Spack package: [py-py/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/py-py/package.py)
Versions: 1.5.4, 1.5.3, 1.4.33, 1.4.31
Build Dependencies: [py-setuptools-scm](#py-setuptools-scm), [py-setuptools](#py-setuptools), [python](#python)
Link Dependencies: [python](#python)
Run Dependencies: [python](#python)
Description: Library with cross-python path, ini-parsing, io, code, log facilities

---
py-py2bit[¶](#py-py2bit)
===
Homepage: <https://pypi.python.org/pypi/py2bit>
Spack package: [py-py2bit/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/py-py2bit/package.py)
Versions: 0.2.1
Build Dependencies: [py-setuptools](#py-setuptools), [python](#python)
Link Dependencies: [python](#python)
Run Dependencies: [python](#python)
Description: A package for accessing 2bit files using lib2bit.
---
py-py2cairo[¶](#py-py2cairo)
===
Homepage: <https://www.cairographics.org/pycairo/>
Spack package: [py-py2cairo/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/py-py2cairo/package.py)
Versions: 1.10.0
Build Dependencies: [pixman](#pixman), pkgconfig, [cairo](#cairo), [python](#python)
Link Dependencies: [pixman](#pixman), [cairo](#cairo), [python](#python)
Run Dependencies: [python](#python)
Test Dependencies: [py-pytest](#py-pytest)
Description: Pycairo is a set of Python bindings for the cairo graphics library.

---
py-py2neo[¶](#py-py2neo)
===
Homepage: <http://py2neo.org/>
Spack package: [py-py2neo/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/py-py2neo/package.py)
Versions: 2.0.8, 2.0.7, 2.0.6, 2.0.5, 2.0.4
Build Dependencies: [py-setuptools](#py-setuptools), [python](#python)
Link Dependencies: [python](#python)
Run Dependencies: [python](#python)
Description: Py2neo is a client library and toolkit for working with Neo4j from within Python applications and from the command line.

---
py-py4j[¶](#py-py4j)
===
Homepage: <https://www.py4j.org/>
Spack package: [py-py4j/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/py-py4j/package.py)
Versions: 0.10.6, 0.10.4, 0.10.3
Build Dependencies: [py-setuptools](#py-setuptools), [python](#python)
Link Dependencies: [python](#python)
Run Dependencies: [python](#python)
Description: Enables Python programs to dynamically access arbitrary Java objects.
---
py-pyani[¶](#py-pyani)
===
Homepage: <http://widdowquinn.github.io/pyani>
Spack package: [py-pyani/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/py-pyani/package.py)
Versions: 0.2.7, 0.2.6
Build Dependencies: [py-seaborn](#py-seaborn), [py-setuptools](#py-setuptools), [py-matplotlib](#py-matplotlib), [py-pandas](#py-pandas), [py-biopython](#py-biopython), [python](#python), [py-scipy](#py-scipy)
Link Dependencies: [python](#python)
Run Dependencies: [mummer](#mummer), [blast-plus](#blast-plus), [py-seaborn](#py-seaborn), [python](#python), [py-matplotlib](#py-matplotlib), [py-pandas](#py-pandas), [py-biopython](#py-biopython), [py-scipy](#py-scipy)
Description: pyani is a Python3 module that provides support for calculating average nucleotide identity (ANI) and related measures for whole genome comparisons, and rendering relevant graphical summary output. Where available, it takes advantage of multicore systems, and can integrate with SGE/OGE-type job schedulers for the sequence comparisons.

---
py-pyarrow[¶](#py-pyarrow)
===
Homepage: <http://arrow.apache.org>
Spack package: [py-pyarrow/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/py-pyarrow/package.py)
Versions: 0.11.0, 0.9.0
Build Dependencies: [arrow](#arrow), [pkg-config](#pkg-config), [py-cython](#py-cython), [py-setuptools](#py-setuptools), [cmake](#cmake), [python](#python)
Link Dependencies: [arrow](#arrow), [python](#python)
Run Dependencies: [python](#python)
Description: A cross-language development platform for in-memory data. This package contains the Python bindings.
---
py-pyasn1[¶](#py-pyasn1)
===
Homepage: <https://github.com/etingof/pyasn1>
Spack package: [py-pyasn1/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/py-pyasn1/package.py)
Versions: 0.2.3
Build Dependencies: [py-setuptools](#py-setuptools), [python](#python)
Link Dependencies: [python](#python)
Run Dependencies: [python](#python)
Description: Generic ASN.1 library for Python (http://pyasn1.sf.net)

---
py-pybigwig[¶](#py-pybigwig)
===
Homepage: <https://pypi.python.org/pypi/pyBigWig>
Spack package: [py-pybigwig/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/py-pybigwig/package.py)
Versions: 0.3.4
Build Dependencies: [curl](#curl), [py-setuptools](#py-setuptools), [python](#python)
Link Dependencies: [python](#python)
Run Dependencies: [curl](#curl), [python](#python)
Description: A package for accessing bigWig files using libBigWig.

---
py-pybind11[¶](#py-pybind11)
===
Homepage: <https://pybind11.readthedocs.io>
Spack package: [py-pybind11/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/py-pybind11/package.py)
Versions: develop, 2.2.4, 2.2.3, 2.2.2, 2.2.1, 2.2.0, 2.1.1, 2.1.0
Build Dependencies: [cmake](#cmake), [py-setuptools](#py-setuptools), [python](#python)
Link Dependencies: [python](#python)
Test Dependencies: [py-pytest](#py-pytest)
Description: pybind11 -- Seamless operability between C++11 and Python. pybind11 is a lightweight header-only library that exposes C++ types in Python and vice versa, mainly to create Python bindings of existing C++ code. Its goals and syntax are similar to the excellent Boost.Python library by <NAME>: to minimize boilerplate code in traditional extension modules by inferring type information using compile-time introspection.
---
py-pybtex[¶](#py-pybtex)
===
Homepage: <https://pybtex.org>
Spack package: [py-pybtex/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/py-pybtex/package.py)
Versions: 0.21
Build Dependencies: [py-pyyaml](#py-pyyaml), [py-latexcodec](#py-latexcodec), [py-setuptools](#py-setuptools), [python](#python)
Link Dependencies: [python](#python)
Run Dependencies: [py-pyyaml](#py-pyyaml), [py-latexcodec](#py-latexcodec), [python](#python)
Description: Pybtex is a BibTeX-compatible bibliography processor written in Python.

---
py-pybtex-docutils[¶](#py-pybtex-docutils)
===
Homepage: <https://pypi.python.org/pypi/pybtex-docutils/>
Spack package: [py-pybtex-docutils/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/py-pybtex-docutils/package.py)
Versions: 0.2.1
Build Dependencies: [py-docutils](#py-docutils), [py-pybtex](#py-pybtex), [py-setuptools](#py-setuptools), [python](#python), [py-six](#py-six)
Link Dependencies: [python](#python)
Run Dependencies: [py-docutils](#py-docutils), [py-pybtex](#py-pybtex), [python](#python), [py-six](#py-six)
Description: A docutils backend for pybtex.

---
py-pycairo[¶](#py-pycairo)
===
Homepage: <https://www.cairographics.org/pycairo/>
Spack package: [py-pycairo/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/py-pycairo/package.py)
Versions: 1.17.1
Build Dependencies: pkgconfig, [cairo](#cairo), [py-setuptools](#py-setuptools), [python](#python)
Link Dependencies: [cairo](#cairo), [python](#python)
Run Dependencies: [python](#python)
Description: Pycairo is a set of Python bindings for the cairo graphics library.
---
py-pychecker[¶](#py-pychecker)
===
Homepage: <http://pychecker.sourceforge.net/>
Spack package: [py-pychecker/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/py-pychecker/package.py)
Versions: 0.8.19
Build Dependencies: [python](#python)
Link Dependencies: [python](#python)
Run Dependencies: [python](#python)
Description: Python source code checking tool.

---
py-pycodestyle[¶](#py-pycodestyle)
===
Homepage: <https://github.com/PyCQA/pycodestyle>
Spack package: [py-pycodestyle/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/py-pycodestyle/package.py)
Versions: 2.3.1, 2.3.0, 2.2.0, 2.1.0, 2.0.0, 1.7.0, 1.6.2, 1.6.1, 1.6, 1.5.7, 1.5.6, 1.5.5, 1.5.4
Build Dependencies: [py-setuptools](#py-setuptools), [python](#python)
Link Dependencies: [python](#python)
Run Dependencies: [py-setuptools](#py-setuptools), [python](#python)
Description: pycodestyle is a tool to check your Python code against some of the style conventions in PEP 8. Note: formerly called pep8.

---
py-pycparser[¶](#py-pycparser)
===
Homepage: <https://github.com/eliben/pycparser>
Spack package: [py-pycparser/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/py-pycparser/package.py)
Versions: 2.18, 2.17, 2.13
Build Dependencies: [py-setuptools](#py-setuptools), [python](#python)
Link Dependencies: [python](#python)
Run Dependencies: [python](#python)
Description: A complete parser of the C language, written in pure python.
---
py-pycrypto[¶](#py-pycrypto)
===
Homepage: <https://www.dlitz.net/software/pycrypto/>
Spack package: [py-pycrypto/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/py-pycrypto/package.py)
Versions: 2.6.1
Build Dependencies: [gmp](#gmp), [python](#python)
Link Dependencies: [gmp](#gmp), [python](#python)
Run Dependencies: [python](#python)
Description: The Python Cryptography Toolkit

---
py-pycurl[¶](#py-pycurl)
===
Homepage: <http://pycurl.io/>
Spack package: [py-pycurl/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/py-pycurl/package.py)
Versions: 7.43.0
Build Dependencies: [curl](#curl), [python](#python)
Link Dependencies: [curl](#curl), [python](#python)
Run Dependencies: [python](#python)
Description: PycURL is a Python interface to libcurl. PycURL can be used to fetch objects identified by a URL from a Python program.

---
py-pydatalog[¶](#py-pydatalog)
===
Homepage: <https://pypi.python.org/pypi/pyDatalog/>
Spack package: [py-pydatalog/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/py-pydatalog/package.py)
Versions: 0.17.1
Build Dependencies: [python](#python)
Link Dependencies: [python](#python)
Run Dependencies: [python](#python)
Description: pyDatalog adds logic programming to Python.

---
py-pydispatcher[¶](#py-pydispatcher)
===
Homepage: <http://pydispatcher.sourceforge.net/>
Spack package: [py-pydispatcher/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/py-pydispatcher/package.py)
Versions: 2.0.5
Build Dependencies: [py-setuptools](#py-setuptools), [python](#python)
Link Dependencies: [python](#python)
Run Dependencies: [python](#python)
Description: Multi-producer-multi-consumer signal dispatching mechanism.
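The multi-producer/multi-consumer signal dispatching that PyDispatcher provides boils down to: any sender can emit a named signal, and every connected receiver gets the payload. The `Dispatcher` class below is a minimal sketch of that pattern, not PyDispatcher's actual API (which uses module-level `dispatcher.connect`/`dispatcher.send`).

```python
# Minimal sketch of signal dispatch (hypothetical `Dispatcher`; PyDispatcher
# also supports sender filtering and weak references to receivers).

class Dispatcher:
    def __init__(self):
        self._receivers = {}

    def connect(self, signal, receiver):
        self._receivers.setdefault(signal, []).append(receiver)

    def send(self, signal, **payload):
        # Every receiver connected to this signal is invoked in order.
        for receiver in self._receivers.get(signal, []):
            receiver(**payload)

log = []
bus = Dispatcher()
bus.connect("file-saved", lambda path: log.append(f"indexed {path}"))
bus.connect("file-saved", lambda path: log.append(f"backed up {path}"))
bus.send("file-saved", path="/tmp/demo.txt")
print(log)  # ['indexed /tmp/demo.txt', 'backed up /tmp/demo.txt']
```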
---
py-pydot[¶](#py-pydot)
===
Homepage: <https://github.com/erocarrera/pydot/>
Spack package: [py-pydot/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/py-pydot/package.py)
Versions: 1.2.3, 1.2.2
Build Dependencies: [py-pyparsing](#py-pyparsing), [graphviz](#graphviz), [py-setuptools](#py-setuptools), [python](#python)
Link Dependencies: [python](#python)
Run Dependencies: [py-pyparsing](#py-pyparsing), [graphviz](#graphviz), [python](#python)
Description: Python interface to Graphviz's Dot language

---
py-pyelftools[¶](#py-pyelftools)
===
Homepage: <https://pypi.python.org/pypi/pyelftools>
Spack package: [py-pyelftools/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/py-pyelftools/package.py)
Versions: 0.23
Build Dependencies: [python](#python)
Link Dependencies: [python](#python)
Run Dependencies: [python](#python)
Description: A pure-Python library for parsing and analyzing ELF files and DWARF debugging information

---
py-pyepsg[¶](#py-pyepsg)
===
Homepage: <https://pyepsg.readthedocs.io/en/latest/>
Spack package: [py-pyepsg/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/py-pyepsg/package.py)
Versions: 0.3.2
Build Dependencies: [py-setuptools](#py-setuptools), [python](#python), [py-requests](#py-requests)
Link Dependencies: [python](#python)
Run Dependencies: [python](#python), [py-requests](#py-requests)
Description: Provides simple access to http://epsg.io/.
---
py-pyfasta[¶](#py-pyfasta)
===
Homepage: <https://pypi.python.org/pypi/pyfasta/>
Spack package: [py-pyfasta/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/py-pyfasta/package.py)
Versions: 0.5.2
Build Dependencies: [py-numpy](#py-numpy), [py-setuptools](#py-setuptools), [python](#python)
Link Dependencies: [py-numpy](#py-numpy), [py-setuptools](#py-setuptools), [python](#python)
Run Dependencies: [python](#python)
Description: Pyfasta: fast, memory-efficient, pythonic (and command-line) access to fasta sequence files

---
py-pyfftw[¶](#py-pyfftw)
===
Homepage: <http://hgomersall.github.com/pyFFTW>
Spack package: [py-pyfftw/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/py-pyfftw/package.py)
Versions: 0.10.4
Build Dependencies: [py-cython](#py-cython), [python](#python), [fftw](#fftw), [py-numpy](#py-numpy), [py-scipy](#py-scipy), [py-setuptools](#py-setuptools)
Link Dependencies: [fftw](#fftw), [python](#python)
Run Dependencies: [py-numpy](#py-numpy), [py-scipy](#py-scipy), [python](#python)
Description: A pythonic wrapper around FFTW, the FFT library, presenting a unified interface for all the supported transforms.

---
py-pyflakes[¶](#py-pyflakes)
===
Homepage: <https://github.com/PyCQA/pyflakes>
Spack package: [py-pyflakes/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/py-pyflakes/package.py)
Versions: 1.6.0, 1.5.0, 1.4.0, 1.3.0, 1.2.3, 1.2.2, 1.2.1, 1.2.0, 1.1.0, 1.0.0, 0.9.2, 0.9.1, 0.9.0
Build Dependencies: [py-setuptools](#py-setuptools), [python](#python)
Link Dependencies: [python](#python)
Run Dependencies: [py-setuptools](#py-setuptools), [python](#python)
Description: A simple program which checks Python source files for errors.
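The record structure pyfasta exposes (sequence names mapped to sequences, with multi-line sequences joined) can be shown with a toy parser in plain Python; this is a self-contained illustration of the FASTA format, not pyfasta's memory-mapped implementation.

```python
# Toy FASTA parser: headers start with '>', the first whitespace-separated
# token names the record, and subsequent lines are sequence data.

def parse_fasta(text):
    records, name = {}, None
    for line in text.strip().splitlines():
        if line.startswith(">"):
            name = line[1:].split()[0]   # ">seq1 description" -> "seq1"
            records[name] = []
        elif name is not None:
            records[name].append(line.strip())
    return {k: "".join(v) for k, v in records.items()}

fasta = """>seq1 test record
ACGT
ACGT
>seq2
GGCC
"""
seqs = parse_fasta(fasta)
print(seqs)  # {'seq1': 'ACGTACGT', 'seq2': 'GGCC'}
```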

---

py-pygdbmi[¶](#py-pygdbmi)
===

Homepage:
* <https://github.com/cs01/pygdbmi>

Spack package:
* [py-pygdbmi/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/py-pygdbmi/package.py)

Versions: 0.8.2.0
Build Dependencies: [py-setuptools](#py-setuptools), [python](#python)
Link Dependencies: [python](#python)
Run Dependencies: [python](#python)

Description: Parse gdb machine interface output with Python

---

py-pygments[¶](#py-pygments)
===

Homepage:
* <http://pygments.org/>

Spack package:
* [py-pygments/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/py-pygments/package.py)

Versions: 2.2.0, 2.1.3, 2.0.2, 2.0.1
Build Dependencies: [py-setuptools](#py-setuptools), [python](#python)
Link Dependencies: [python](#python)
Run Dependencies: [python](#python)

Description: Pygments is a syntax highlighting package written in Python.

---

py-pygobject[¶](#py-pygobject)
===

Homepage:
* <https://pypi.python.org/pypi/pygobject>

Spack package:
* [py-pygobject/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/py-pygobject/package.py)

Versions: 3.28.3, 2.28.6, 2.28.3
Build Dependencies: [py-pycairo](#py-pycairo), [glib](#glib), [py-setuptools](#py-setuptools), [py-py2cairo](#py-py2cairo), [libffi](#libffi), [gobject-introspection](#gobject-introspection), [python](#python), [gtkplus](#gtkplus)
Link Dependencies: [libffi](#libffi), [glib](#glib), [gobject-introspection](#gobject-introspection), [python](#python), [gtkplus](#gtkplus)
Run Dependencies: [py-pycairo](#py-pycairo), [python](#python), [py-py2cairo](#py-py2cairo)

Description: Python bindings for GLib and GObject.

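py-pygdbmi parses GDB's machine-interface (MI) output records into Python structures. As a hedged, toy sketch of the idea — not pygdbmi's API, which handles the full MI grammar including async records and nested tuples — one flat result record such as `^done,value="42"` can be split like this:

```python
# Toy parse of one flat GDB/MI result record: '^' marks a result
# record, the first token is the result class ("done", "error", ...),
# and the rest is a comma-separated list of key="value" fields.
def parse_mi_result(line):
    assert line.startswith("^"), "expected an MI result record"
    head, _, rest = line[1:].partition(",")
    fields = {}
    for part in rest.split(","):
        if "=" in part:
            key, _, val = part.partition("=")
            fields[key] = val.strip('"')
    return head, fields

status, fields = parse_mi_result('^done,value="42",name="x"')
```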

---

py-pygpu[¶](#py-pygpu)
===

Homepage:
* <http://deeplearning.net/software/libgpuarray/>

Spack package:
* [py-pygpu/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/py-pygpu/package.py)

Versions: 0.7.5, 0.7.4, 0.7.3, 0.7.2, 0.7.1, 0.7.0, 0.6.9, 0.6.2, 0.6.1, 0.6.0
Build Dependencies: [libgpuarray](#libgpuarray), [py-cython](#py-cython), [py-setuptools](#py-setuptools), [py-nose](#py-nose), [libcheck](#libcheck), [py-numpy](#py-numpy), [py-mako](#py-mako), [python](#python)
Link Dependencies: [libcheck](#libcheck), [libgpuarray](#libgpuarray), [python](#python)
Run Dependencies: [py-cython](#py-cython), [py-setuptools](#py-setuptools), [py-nose](#py-nose), [py-mako](#py-mako), [py-numpy](#py-numpy), [python](#python)

Description: Python package for the libgpuarray C library.

---

py-pygtk[¶](#py-pygtk)
===

Homepage:
* <http://www.pygtk.org/>

Spack package:
* [py-pygtk/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/py-pygtk/package.py)

Versions: 2.24.0
Build Dependencies: [libffi](#libffi), pkgconfig, [gtkplus](#gtkplus), [py-py2cairo](#py-py2cairo), [py-pygobject](#py-pygobject), [cairo](#cairo), [atk](#atk), [glib](#glib), [python](#python)
Link Dependencies: [glib](#glib), [gtkplus](#gtkplus), [libffi](#libffi), [atk](#atk), [cairo](#cairo), [python](#python)
Run Dependencies: [py-pygobject](#py-pygobject), [python](#python), [py-py2cairo](#py-py2cairo)

Description: Python bindings for Gtk2. Use pygobject for Gtk3.


---

py-pylint[¶](#py-pylint)
===

Homepage:
* <https://pypi.python.org/pypi/pylint>

Spack package:
* [py-pylint/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/py-pylint/package.py)

Versions: 1.6.5, 1.4.3, 1.4.1
Build Dependencies: [py-isort](#py-isort), [py-configparser](#py-configparser), [py-mccabe](#py-mccabe), [py-setuptools](#py-setuptools), [py-editdistance](#py-editdistance), [py-singledispatch](#py-singledispatch), [py-astroid](#py-astroid), [py-backports-functools-lru-cache](#py-backports-functools-lru-cache), [python](#python), [py-six](#py-six)
Link Dependencies: [py-isort](#py-isort), [py-configparser](#py-configparser), [py-mccabe](#py-mccabe), [py-editdistance](#py-editdistance), [py-singledispatch](#py-singledispatch), [py-backports-functools-lru-cache](#py-backports-functools-lru-cache), [python](#python)
Run Dependencies: [py-astroid](#py-astroid), [python](#python), [py-six](#py-six)

Description: A Python source code analyzer which looks for programming errors.


---

py-pymatgen[¶](#py-pymatgen)
===

Homepage:
* <http://www.pymatgen.org/>

Spack package:
* [py-pymatgen/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/py-pymatgen/package.py)

Versions: 4.7.2, 4.6.2
Build Dependencies: [py-pyyaml](#py-pyyaml), [py-tabulate](#py-tabulate), [py-pydispatcher](#py-pydispatcher), [py-scipy](#py-scipy), [py-numpy](#py-numpy), [py-matplotlib](#py-matplotlib), [py-six](#py-six), [py-spglib](#py-spglib), [py-monty](#py-monty), [py-setuptools](#py-setuptools), [py-requests](#py-requests), [python](#python), [py-palettable](#py-palettable)
Link Dependencies: [python](#python)
Run Dependencies: [py-pyyaml](#py-pyyaml), [py-pydispatcher](#py-pydispatcher), [py-scipy](#py-scipy), [py-numpy](#py-numpy), [py-matplotlib](#py-matplotlib), [py-six](#py-six), [py-spglib](#py-spglib), [py-monty](#py-monty), [py-tabulate](#py-tabulate), [py-requests](#py-requests), [python](#python), [py-palettable](#py-palettable)

Description: Python Materials Genomics is a robust materials analysis code that defines core object representations for structures and molecules with support for many electronic structure codes. It is currently the core analysis code powering the Materials Project.

---

py-pyminifier[¶](#py-pyminifier)
===

Homepage:
* <http://liftoff.github.io/pyminifier/>

Spack package:
* [py-pyminifier/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/py-pyminifier/package.py)

Versions: 2.1
Build Dependencies: [py-setuptools](#py-setuptools), [python](#python)
Link Dependencies: [python](#python)
Run Dependencies: [python](#python)

Description: Pyminifier is a Python code minifier, obfuscator, and compressor.


---

py-pymol[¶](#py-pymol)
===

Homepage:
* <https://pymol.org>

Spack package:
* [py-pymol/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/py-pymol/package.py)

Versions: 2.1.0
Build Dependencies: [tcl](#tcl), [libpng](#libpng), [libxml2](#libxml2), [tk](#tk), gl, [py-pyqt](#py-pyqt), [py-pmw](#py-pmw), glu, [msgpack-c](#msgpack-c), [freetype](#freetype), [glew](#glew), [python](#python), [freeglut](#freeglut)
Link Dependencies: [tcl](#tcl), [libpng](#libpng), [libxml2](#libxml2), [tk](#tk), gl, [msgpack-c](#msgpack-c), [py-pmw](#py-pmw), glu, [freetype](#freetype), [glew](#glew), [python](#python), [freeglut](#freeglut)
Run Dependencies: [python](#python), [py-pyqt](#py-pyqt)

Description: PyMOL is a Python-enhanced molecular graphics tool. It excels at 3D visualization of proteins, small molecules, density, surfaces, and trajectories. It also includes molecular editing, ray tracing, and movies. Open Source PyMOL is free to everyone!

---

py-pympler[¶](#py-pympler)
===

Homepage:
* <https://github.com/pympler/pympler>

Spack package:
* [py-pympler/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/py-pympler/package.py)

Versions: 0.4.3, 0.4.2, 0.4.1, 0.4, 0.3.1
Build Dependencies: [python](#python)
Link Dependencies: [python](#python)
Run Dependencies: [python](#python)

Description: Development tool to measure, monitor and analyze the memory behavior of Python objects in a running Python application.

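py-pympler reports the memory footprint of live Python objects. The stdlib only exposes shallow sizes via `sys.getsizeof`; the simplified recursive sketch below conveys the "deep size" idea that pympler's `asizeof` implements far more carefully (this is an illustrative approximation, not pympler's algorithm or API).

```python
import sys

def deep_sizeof(obj, seen=None):
    """Rough recursive size in bytes for common containers.

    Simplified sketch in the spirit of pympler's asizeof: follows
    dict/list/tuple/set contents and avoids double-counting shared
    objects via a seen-id set.
    """
    if seen is None:
        seen = set()
    if id(obj) in seen:
        return 0
    seen.add(id(obj))
    size = sys.getsizeof(obj)
    if isinstance(obj, dict):
        size += sum(deep_sizeof(k, seen) + deep_sizeof(v, seen)
                    for k, v in obj.items())
    elif isinstance(obj, (list, tuple, set, frozenset)):
        size += sum(deep_sizeof(x, seen) for x in obj)
    return size

nested = {"xs": [1, 2, 3], "s": "hello"}
total = deep_sizeof(nested)
```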

---

py-pymysql[¶](#py-pymysql)
===

Homepage:
* <https://github.com/PyMySQL/PyMySQL/>

Spack package:
* [py-pymysql/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/py-pymysql/package.py)

Versions: 0.9.2
Build Dependencies: [py-cryptography](#py-cryptography), [py-setuptools](#py-setuptools), [python](#python)
Link Dependencies: [python](#python)
Run Dependencies: [py-cryptography](#py-cryptography), [python](#python)

Description: Pure-Python MySQL client library

---

py-pynn[¶](#py-pynn)
===

Homepage:
* <http://neuralensemble.org/PyNN/>

Spack package:
* [py-pynn/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/py-pynn/package.py)

Versions: 0.9.1, 0.8.3, 0.8.1, 0.8beta, 0.7.5
Build Dependencies: [py-quantities](#py-quantities), [py-docutils](#py-docutils), [py-jinja2](#py-jinja2), [py-neo](#py-neo), [py-numpy](#py-numpy), [python](#python), [py-lazyarray](#py-lazyarray)
Link Dependencies: [python](#python)
Run Dependencies: [py-quantities](#py-quantities), [py-docutils](#py-docutils), [py-jinja2](#py-jinja2), [py-neo](#py-neo), [py-numpy](#py-numpy), [python](#python), [py-lazyarray](#py-lazyarray)
Test Dependencies: [py-mock](#py-mock)

Description: A Python package for simulator-independent specification of neuronal network models

---

py-pypar[¶](#py-pypar)
===

Homepage:
* <http://code.google.com/p/pypar/>

Spack package:
* [py-pypar/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/py-pypar/package.py)

Versions: 2.1.5_108
Build Dependencies: [py-numpy](#py-numpy), mpi, [python](#python)
Link Dependencies: mpi, [python](#python)
Run Dependencies: [py-numpy](#py-numpy), [python](#python)

Description: Pypar is an efficient but easy-to-use module that allows programs written in Python to run in parallel on multiple processors and communicate using MPI.


---

py-pyparsing[¶](#py-pyparsing)
===

Homepage:
* <http://pyparsing.wikispaces.com/>

Spack package:
* [py-pyparsing/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/py-pyparsing/package.py)

Versions: 2.2.0, 2.1.10, 2.0.3
Build Dependencies: [python](#python)
Link Dependencies: [python](#python)
Run Dependencies: [python](#python)

Description: A Python Parsing Module.

---

py-pypeflow[¶](#py-pypeflow)
===

Homepage:
* <https://github.com/PacificBiosciences/pypeFLOW>

Spack package:
* [py-pypeflow/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/py-pypeflow/package.py)

Versions: 2017-05-04
Build Dependencies: [py-networkx](#py-networkx), [py-setuptools](#py-setuptools), [python](#python)
Link Dependencies: [python](#python)
Run Dependencies: [py-networkx](#py-networkx), [python](#python)

Description: pypeFLOW is a lightweight and reusable make/flow-style data-processing library written in Python.

---

py-pyprof2html[¶](#py-pyprof2html)
===

Homepage:
* <https://pypi.python.org/pypi/pyprof2html/>

Spack package:
* [py-pyprof2html/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/py-pyprof2html/package.py)

Versions: 0.3.1
Build Dependencies: [py-setuptools](#py-setuptools), [python](#python), [py-jinja2](#py-jinja2)
Link Dependencies: [python](#python)
Run Dependencies: [py-jinja2](#py-jinja2), [python](#python)

Description: Converts Python cProfile and hotshot profiling data to HTML.

---

py-pyqi[¶](#py-pyqi)
===

Homepage:
* <https://pyqi.readthedocs.io>

Spack package:
* [py-pyqi/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/py-pyqi/package.py)

Versions: 0.3.2
Build Dependencies: [py-setuptools](#py-setuptools), [python](#python)
Link Dependencies: [python](#python)
Run Dependencies: [python](#python)

Description: pyqi (canonically pronounced pie chee) is a Python framework designed to support wrapping general commands in multiple
types of interfaces, including at the command line, HTML, and API levels.

---

py-pyqt[¶](#py-pyqt)
===

Homepage:
* <http://www.riverbankcomputing.com/software/pyqt/intro>

Spack package:
* [py-pyqt/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/py-pyqt/package.py)

Versions: 4.11.3
Build Dependencies: [py-sip](#py-sip), [qt](#qt), [python](#python)
Link Dependencies: [qt](#qt), [python](#python)
Run Dependencies: [py-sip](#py-sip)

Description: PyQt is a set of Python v2 and v3 bindings for Digia's Qt application framework and runs on all platforms supported by Qt including Windows, MacOS/X and Linux.

---

py-pyrad[¶](#py-pyrad)
===

Homepage:
* <http://dereneaton.com/software/pyrad/>

Spack package:
* [py-pyrad/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/py-pyrad/package.py)

Versions: 3.0.66
Build Dependencies: [muscle](#muscle), [py-setuptools](#py-setuptools), [py-scipy](#py-scipy), [py-numpy](#py-numpy), [vsearch](#vsearch), [python](#python)
Link Dependencies: [muscle](#muscle), [vsearch](#vsearch), [python](#python)
Run Dependencies: [py-numpy](#py-numpy), [py-scipy](#py-scipy), [python](#python)

Description: RADseq for phylogenetics & introgression analyses

---

py-pysam[¶](#py-pysam)
===

Homepage:
* <https://pypi.python.org/pypi/pysam>

Spack package:
* [py-pysam/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/py-pysam/package.py)

Versions: 0.14.1, 0.13, 0.11.2.2, 0.7.7
Build Dependencies: [samtools](#samtools), [py-cython](#py-cython), [py-setuptools](#py-setuptools), [curl](#curl), [bcftools](#bcftools), [python](#python), [htslib](#htslib)
Link Dependencies: [samtools](#samtools), [curl](#curl), [bcftools](#bcftools), [python](#python), [htslib](#htslib)
Run Dependencies: [python](#python)

Description: A python module for reading, manipulating and writing genomic data sets.


---

py-pyscaf[¶](#py-pyscaf)
===

Homepage:
* <https://pypi.python.org/pypi/pyScaf>

Spack package:
* [py-pyscaf/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/py-pyscaf/package.py)

Versions: 0.12a4
Build Dependencies: [py-fastaindex](#py-fastaindex), [py-setuptools](#py-setuptools), [python](#python)
Link Dependencies: [python](#python)
Run Dependencies: [py-fastaindex](#py-fastaindex), [python](#python)

Description: pyScaf orders contigs from genome assemblies utilising several types of information

---

py-pyserial[¶](#py-pyserial)
===

Homepage:
* <https://github.com/pyserial/pyserial>

Spack package:
* [py-pyserial/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/py-pyserial/package.py)

Versions: 3.1.1
Build Dependencies: [py-setuptools](#py-setuptools), [python](#python)
Link Dependencies: [python](#python)
Run Dependencies: [python](#python)

Description: Python Serial Port Extension

---

py-pyshp[¶](#py-pyshp)
===

Homepage:
* <https://github.com/GeospatialPython/pyshp>

Spack package:
* [py-pyshp/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/py-pyshp/package.py)

Versions: 1.2.12
Build Dependencies: [py-setuptools](#py-setuptools), [python](#python)
Link Dependencies: [python](#python)
Run Dependencies: [python](#python)

Description: The Python Shapefile Library (pyshp) reads and writes ESRI Shapefiles in pure Python.


---

py-pyside[¶](#py-pyside)
===

Homepage:
* <https://pypi.python.org/pypi/pyside>

Spack package:
* [py-pyside/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/py-pyside/package.py)

Versions: 1.2.4, 1.2.2
Build Dependencies: [py-setuptools](#py-setuptools), [libxslt](#libxslt), [cmake](#cmake), [py-sphinx](#py-sphinx), [qt](#qt), [python](#python), [libxml2](#libxml2)
Link Dependencies: [libxslt](#libxslt), [libxml2](#libxml2), [python](#python), [qt](#qt)
Run Dependencies: [python](#python), [py-sphinx](#py-sphinx)

Description: Python bindings for Qt.

---

py-pysocks[¶](#py-pysocks)
===

Homepage:
* <https://github.com/Anorov/PySocks>

Spack package:
* [py-pysocks/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/py-pysocks/package.py)

Versions: 1.6.6, 1.5.7
Build Dependencies: [python](#python)
Link Dependencies: [python](#python)
Run Dependencies: [python](#python)

Description: A Python SOCKS client module.

---

py-pyspark[¶](#py-pyspark)
===

Homepage:
* <http://spark.apache.org>

Spack package:
* [py-pyspark/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/py-pyspark/package.py)

Versions: 2.3.0
Build Dependencies: [py-py4j](#py-py4j), [py-setuptools](#py-setuptools), [python](#python)
Link Dependencies: [python](#python)
Run Dependencies: [py-py4j](#py-py4j), [python](#python)

Description: Python bindings for Apache Spark

---

py-pysqlite[¶](#py-pysqlite)
===

Homepage:
* <https://github.com/ghaering/pysqlite>

Spack package:
* [py-pysqlite/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/py-pysqlite/package.py)

Versions: 2.8.3
Build Dependencies: [python](#python), [sqlite](#sqlite)
Link Dependencies: [python](#python)
Run Dependencies: [python](#python), [sqlite](#sqlite)

Description: Python DB-API module for SQLite 3.


---

py-pytables[¶](#py-pytables)
===

Homepage:
* <http://www.pytables.org/>

Spack package:
* [py-pytables/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/py-pytables/package.py)

Versions: 3.3.0, 3.2.2
Build Dependencies: [py-numexpr](#py-numexpr), [py-cython](#py-cython), [py-setuptools](#py-setuptools), [hdf5](#hdf5), [py-numpy](#py-numpy), [python](#python), [py-six](#py-six)
Link Dependencies: [python](#python), [hdf5](#hdf5)
Run Dependencies: [py-numexpr](#py-numexpr), [py-numpy](#py-numpy), [py-cython](#py-cython), [python](#python), [py-six](#py-six)

Description: PyTables is a package for managing hierarchical datasets and designed to efficiently and easily cope with extremely large amounts of data.

---

py-pytest[¶](#py-pytest)
===

Homepage:
* <http://pytest.org/>

Spack package:
* [py-pytest/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/py-pytest/package.py)

Versions: 3.7.2, 3.7.1, 3.5.1, 3.0.7, 3.0.2
Build Dependencies: [py-py](#py-py), [py-setuptools](#py-setuptools), [py-pluggy](#py-pluggy), [py-setuptools-scm](#py-setuptools-scm), [py-pathlib2](#py-pathlib2), [py-funcsigs](#py-funcsigs), [py-six](#py-six), [py-attrs](#py-attrs), [py-more-itertools](#py-more-itertools), [python](#python), [py-atomicwrites](#py-atomicwrites)
Link Dependencies: [python](#python)
Run Dependencies: [py-setuptools](#py-setuptools), [py-pluggy](#py-pluggy), [py-py](#py-py), [py-pathlib2](#py-pathlib2), [py-funcsigs](#py-funcsigs), [py-six](#py-six), [py-attrs](#py-attrs), [py-more-itertools](#py-more-itertools), [python](#python), [py-atomicwrites](#py-atomicwrites)

Description: pytest: simple powerful testing with Python.

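py-pytest discovers plain functions named `test_*` in `test_*.py` files and runs them, rewriting bare `assert` statements so failures show the compared values. A minimal sketch (the module and function names are illustrative); because the tests are ordinary functions with ordinary asserts, they can also be called directly without pytest installed:

```python
# Minimal pytest-style test module (a file name like test_math.py is
# illustrative). pytest would collect the test_* functions below
# automatically; they are also plain Python and can be run directly.
def add(a, b):
    return a + b

def test_add_ints():
    assert add(2, 3) == 5

def test_add_strings():
    assert add("py", "test") == "pytest"

if __name__ == "__main__":
    test_add_ints()
    test_add_strings()
    print("ok")
```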

---

py-pytest-cov[¶](#py-pytest-cov)
===

Homepage:
* <https://github.com/pytest-dev/pytest-cov>

Spack package:
* [py-pytest-cov/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/py-pytest-cov/package.py)

Versions: 2.3.1
Build Dependencies: [py-coverage](#py-coverage), [py-setuptools](#py-setuptools), [python](#python), [py-pytest](#py-pytest)
Link Dependencies: [python](#python)
Run Dependencies: [py-coverage](#py-coverage), [py-pytest](#py-pytest), [python](#python)

Description: Pytest plugin for measuring coverage.

---

py-pytest-flake8[¶](#py-pytest-flake8)
===

Homepage:
* <https://github.com/tholo/pytest-flake8>

Spack package:
* [py-pytest-flake8/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/py-pytest-flake8/package.py)

Versions: 0.8.1
Build Dependencies: [py-flake8](#py-flake8), [py-pytest](#py-pytest), [python](#python), [py-setuptools](#py-setuptools)
Link Dependencies: [python](#python)
Run Dependencies: [py-pytest](#py-pytest), [python](#python), [py-flake8](#py-flake8)

Description: pytest plugin to check FLAKE8 requirements.


---

py-pytest-httpbin[¶](#py-pytest-httpbin)
===

Homepage:
* <https://github.com/kevin1024/pytest-httpbin>

Spack package:
* [py-pytest-httpbin/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/py-pytest-httpbin/package.py)

Versions: 0.2.3
Build Dependencies: [py-httpbin](#py-httpbin), [py-setuptools](#py-setuptools), [py-decorator](#py-decorator), [py-flask](#py-flask), [python](#python), [py-six](#py-six)
Link Dependencies: [python](#python)
Run Dependencies: [py-httpbin](#py-httpbin), [py-decorator](#py-decorator), [py-flask](#py-flask), [python](#python), [py-six](#py-six)

Description: Easily test your HTTP library against a local copy of httpbin

---

py-pytest-mock[¶](#py-pytest-mock)
===

Homepage:
* <https://github.com/pytest-dev/pytest-mock>

Spack package:
* [py-pytest-mock/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/py-pytest-mock/package.py)

Versions: 1.2
Build Dependencies: [py-pytest](#py-pytest), [py-mock](#py-mock), [python](#python), [py-setuptools](#py-setuptools)
Link Dependencies: [python](#python)
Run Dependencies: [py-mock](#py-mock), [python](#python), [py-pytest](#py-pytest)

Description: Thin-wrapper around the mock package for easier use with py.test

---

py-pytest-runner[¶](#py-pytest-runner)
===

Homepage:
* <https://github.com/pytest-dev/pytest-runner>

Spack package:
* [py-pytest-runner/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/py-pytest-runner/package.py)

Versions: 2.11.1
Build Dependencies: [py-setuptools-scm](#py-setuptools-scm), [py-setuptools](#py-setuptools), [python](#python)
Link Dependencies: [python](#python)
Run Dependencies: [python](#python)

Description: Invoke py.test as distutils command with dependency resolution.


---

py-pytest-xdist[¶](#py-pytest-xdist)
===

Homepage:
* <https://github.com/pytest-dev/pytest-xdist>

Spack package:
* [py-pytest-xdist/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/py-pytest-xdist/package.py)

Versions: 1.16.0
Build Dependencies: [py-execnet](#py-execnet), [py-pytest](#py-pytest), [py-py](#py-py), [python](#python), [py-setuptools](#py-setuptools)
Link Dependencies: [python](#python)
Run Dependencies: [py-execnet](#py-execnet), [py-py](#py-py), [python](#python), [py-pytest](#py-pytest)

Description: py.test xdist plugin for distributed testing and loop-on-failing mode

---

py-python-daemon[¶](#py-python-daemon)
===

Homepage:
* <https://pypi.python.org/pypi/python-daemon/>

Spack package:
* [py-python-daemon/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/py-python-daemon/package.py)

Versions: 2.0.5
Build Dependencies: [py-setuptools](#py-setuptools), [python](#python), [py-lockfile](#py-lockfile)
Link Dependencies: [python](#python)
Run Dependencies: [python](#python), [py-lockfile](#py-lockfile)

Description: Library to implement a well-behaved Unix daemon process. This library implements the well-behaved daemon specification of PEP 3143, "Standard daemon process library". A well-behaved Unix daemon process is tricky to get right, but the required steps are much the same for every daemon program. A DaemonContext instance holds the behaviour and configured process environment for the program; use the instance as a context manager to enter a daemon state.


---

py-python-engineio[¶](#py-python-engineio)
===

Homepage:
* <http://python-engineio.readthedocs.io/en/latest/>

Spack package:
* [py-python-engineio/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/py-python-engineio/package.py)

Versions: 2.0.2
Build Dependencies: [py-setuptools](#py-setuptools), [python](#python), [py-six](#py-six)
Link Dependencies: [python](#python)
Run Dependencies: [python](#python), [py-six](#py-six)

Description: Engine.IO is the implementation of a transport-based cross-browser/cross-device bi-directional communication layer for Socket.IO.

---

py-python-gitlab[¶](#py-python-gitlab)
===

Homepage:
* <https://github.com/gpocentek/python-gitlab>

Spack package:
* [py-python-gitlab/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/py-python-gitlab/package.py)

Versions: 0.19, 0.18, 0.17, 0.16
Build Dependencies: [py-six](#py-six), [py-setuptools](#py-setuptools), [python](#python), [py-requests](#py-requests)
Link Dependencies: [python](#python)
Run Dependencies: [py-six](#py-six), [python](#python), [py-requests](#py-requests)

Description: Python wrapper for the GitLab API

---

py-python-levenshtein[¶](#py-python-levenshtein)
===

Homepage:
* <https://github.com/ztane/python-Levenshtein>

Spack package:
* [py-python-levenshtein/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/py-python-levenshtein/package.py)

Versions: 0.12.0
Build Dependencies: [py-setuptools](#py-setuptools), [python](#python)
Link Dependencies: [python](#python)
Run Dependencies: [python](#python)

Description: Python extension for computing string edit distances and similarities.

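py-python-levenshtein implements string edit distance as a fast C extension. The operation it accelerates is the classic Levenshtein distance; a pure-Python sketch of the standard dynamic-programming formulation (two rolling rows instead of the full table) looks like this:

```python
# Levenshtein edit distance: minimum number of single-character
# insertions, deletions, and substitutions to turn a into b.
# Uses two rolling rows of the (len(a)+1) x (len(b)+1) DP table.
def levenshtein(a, b):
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,                 # deletion
                           cur[j - 1] + 1,              # insertion
                           prev[j - 1] + (ca != cb)))   # substitution
        prev = cur
    return prev[-1]
```

The C extension exists because this inner loop is O(len(a) * len(b)) and dominates when comparing many strings.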

---

py-python-socketio[¶](#py-python-socketio)
===

Homepage:
* <https://github.com/miguelgrinberg/python-socketio>

Spack package:
* [py-python-socketio/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/py-python-socketio/package.py)

Versions: 1.8.4
Build Dependencies: [py-python-engineio](#py-python-engineio), [py-six](#py-six), [py-setuptools](#py-setuptools), [python](#python), [py-eventlet](#py-eventlet)
Link Dependencies: [python](#python)
Run Dependencies: [py-python-engineio](#py-python-engineio), [py-six](#py-six), [python](#python), [py-eventlet](#py-eventlet)

Description: Python implementation of the Socket.IO realtime server.

---

py-pythonqwt[¶](#py-pythonqwt)
===

Homepage:
* <https://github.com/PierreRaybaut/PythonQwt>

Spack package:
* [py-pythonqwt/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/py-pythonqwt/package.py)

Versions: 0.5.5
Build Dependencies: [py-sip](#py-sip), [py-setuptools](#py-setuptools), [py-numpy](#py-numpy), [python](#python), [py-pyqt](#py-pyqt), [py-sphinx](#py-sphinx)
Link Dependencies: [python](#python)
Run Dependencies: [py-numpy](#py-numpy), [py-sip](#py-sip), [py-pyqt](#py-pyqt), [python](#python), [py-sphinx](#py-sphinx)

Description: Qt plotting widgets for Python

---

py-pytorch[¶](#py-pytorch)
===

Homepage:
* <http://pytorch.org/>

Spack package:
* [py-pytorch/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/py-pytorch/package.py)

Versions: 0.4.0, 0.3.1
Build Dependencies: [py-pyyaml](#py-pyyaml), [cuda](#cuda), [intel-mkl](#intel-mkl), blas, [py-numpy](#py-numpy), [py-cffi](#py-cffi), lapack, [magma](#magma), [py-setuptools](#py-setuptools), [py-typing](#py-typing), [nccl](#nccl), [python](#python), [cudnn](#cudnn)
Link Dependencies: [cuda](#cuda), [intel-mkl](#intel-mkl), [magma](#magma), [cudnn](#cudnn), blas, [nccl](#nccl), [python](#python), lapack
Run Dependencies: [py-pyyaml](#py-pyyaml), [cuda](#cuda),
[py-numpy](#py-numpy), [py-typing](#py-typing), [python](#python)

Description: Tensors and Dynamic neural networks in Python with strong GPU acceleration.

---

py-pytz[¶](#py-pytz)
===

Homepage:
* <http://pythonhosted.org/pytz>

Spack package:
* [py-pytz/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/py-pytz/package.py)

Versions: 2017.2, 2016.10, 2016.6.1, 2016.3, 2015.4, 2014.10, 2014.9
Build Dependencies: [py-setuptools](#py-setuptools), [python](#python)
Link Dependencies: [python](#python)
Run Dependencies: [python](#python)

Description: World timezone definitions, modern and historical.

---

py-pyutilib[¶](#py-pyutilib)
===

Homepage:
* <https://github.com/PyUtilib/pyutilib>

Spack package:
* [py-pyutilib/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/py-pyutilib/package.py)

Versions: 5.6.2, 5.6.1, 5.6, 5.5.1, 5.5, 5.4.1, 5.4, 5.3.5, 5.3.4, 5.3.3
Build Dependencies: [python](#python), [py-nose](#py-nose), [py-six](#py-six)
Link Dependencies: [python](#python)
Run Dependencies: [python](#python), [py-nose](#py-nose), [py-six](#py-six)

Description: The PyUtilib project supports a collection of Python utilities, including a well-developed component architecture and extensions to the PyUnit testing framework. PyUtilib has been developed to support several Python-centric projects, especially Pyomo. PyUtilib is available under the BSD License.

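py-pytz ships the full historical tz database (DST rules, zone renames), which the stdlib does not; `datetime.timezone` only models fixed UTC offsets. That fixed-offset subset is still enough to show the conversion pattern pytz is typically used for. The zone below is a hypothetical fixed-offset stand-in for what pytz would expose as a named zone such as "Asia/Kolkata":

```python
from datetime import datetime, timedelta, timezone

# A UTC-aware timestamp, converted to a fixed +05:30 offset. pytz
# would supply a real named zone with historical DST rules instead of
# this hand-built fixed offset.
utc_noon = datetime(2017, 6, 1, 12, 0, tzinfo=timezone.utc)
ist = timezone(timedelta(hours=5, minutes=30), "UTC+05:30")
local = utc_noon.astimezone(ist)
print(local.isoformat())  # 2017-06-01T17:30:00+05:30
```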

---

py-pywavelets[¶](#py-pywavelets)
===

Homepage:
* <https://github.com/PyWavelets>

Spack package:
* [py-pywavelets/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/py-pywavelets/package.py)

Versions: 0.5.2
Build Dependencies: [py-numpy](#py-numpy), [py-cython](#py-cython), [py-setuptools](#py-setuptools), [python](#python)
Link Dependencies: [python](#python)
Run Dependencies: [py-numpy](#py-numpy), [python](#python)

Description: PyWavelets is a free Open Source library for wavelet transforms in Python

---

py-pyyaml[¶](#py-pyyaml)
===

Homepage:
* <http://pyyaml.org/wiki/PyYAML>

Spack package:
* [py-pyyaml/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/py-pyyaml/package.py)

Versions: 3.12, 3.11
Build Dependencies: [python](#python)
Link Dependencies: [python](#python)
Run Dependencies: [python](#python)

Description: PyYAML is a YAML parser and emitter for Python.

---

py-qtawesome[¶](#py-qtawesome)
===

Homepage:
* <https://github.com/spyder-ide/qtawesome>

Spack package:
* [py-qtawesome/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/py-qtawesome/package.py)

Versions: 0.4.1, 0.3.3
Build Dependencies: [py-qtpy](#py-qtpy), [py-setuptools](#py-setuptools), [python](#python), [py-six](#py-six)
Link Dependencies: [python](#python)
Run Dependencies: [py-qtpy](#py-qtpy), [python](#python), [py-six](#py-six)

Description: FontAwesome icons in PyQt and PySide applications

---

py-qtconsole[¶](#py-qtconsole)
===

Homepage:
* <http://ipython.org>

Spack package:
* [py-qtconsole/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/py-qtconsole/package.py)

Versions: 4.2.1
Build Dependencies: [py-traitlets](#py-traitlets), [py-jupyter-core](#py-jupyter-core), [py-ipykernel](#py-ipykernel), [py-jupyter-client](#py-jupyter-client), [py-pygments](#py-pygments), [python](#python), [py-sphinx](#py-sphinx)
Link Dependencies:
[python](#python)
Run Dependencies: [py-traitlets](#py-traitlets), [py-jupyter-core](#py-jupyter-core), [py-ipykernel](#py-ipykernel), [py-jupyter-client](#py-jupyter-client), [py-pygments](#py-pygments), [python](#python), [py-sphinx](#py-sphinx)
Test Dependencies: [py-mock](#py-mock)

Description: Jupyter Qt console

---

py-qtpy[¶](#py-qtpy)
===

Homepage:
* <https://github.com/spyder-ide/qtpy>

Spack package:
* [py-qtpy/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/py-qtpy/package.py)

Versions: 1.2.1
Build Dependencies: [python](#python), [py-setuptools](#py-setuptools), [py-pyqt](#py-pyqt)
Link Dependencies: [python](#python)
Run Dependencies: [python](#python), [py-pyqt](#py-pyqt)

Description: QtPy: Abstraction layer for PyQt5/PyQt4/PySide

---

py-quantities[¶](#py-quantities)
===

Homepage:
* <http://python-quantities.readthedocs.org>

Spack package:
* [py-quantities/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/py-quantities/package.py)

Versions: 0.12.1, 0.11.1
Build Dependencies: [py-numpy](#py-numpy), [python](#python)
Link Dependencies: [python](#python)
Run Dependencies: [py-numpy](#py-numpy), [python](#python)

Description: Support for physical quantities with units, based on numpy

---

py-quast[¶](#py-quast)
===

Homepage:
* <http://cab.spbu.ru/software/quast>

Spack package:
* [py-quast/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/py-quast/package.py)

Versions: 4.6.3, 4.6.1, 4.6.0
Build Dependencies: [perl-time-hires](#perl-time-hires), [gnuplot](#gnuplot), [bwa](#bwa), java, [perl](#perl), [py-matplotlib](#py-matplotlib), [mummer](#mummer), [bedtools2](#bedtools2), [py-setuptools](#py-setuptools), [boost](#boost), [glimmer](#glimmer), [python](#python)
Link Dependencies: [perl](#perl), [boost](#boost), [python](#python)
Run Dependencies: [perl-time-hires](#perl-time-hires), [mummer](#mummer), [gnuplot](#gnuplot), [bwa](#bwa), java,
[py-matplotlib](#py-matplotlib), [glimmer](#glimmer), [python](#python), [bedtools2](#bedtools2)

Description: Quality Assessment Tool for Genome Assemblies

---

py-radical-utils[¶](#py-radical-utils)
===

Homepage:
* <http://radical.rutgers.edu>

Spack package:
* [py-radical-utils/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/py-radical-utils/package.py)

Versions: 0.45, 0.41.1
Build Dependencies: [py-colorama](#py-colorama), [py-netifaces](#py-netifaces), [py-setuptools](#py-setuptools), [python](#python)
Link Dependencies: [python](#python)
Run Dependencies: [py-colorama](#py-colorama), [py-netifaces](#py-netifaces), [python](#python)

Description: Shared code and tools for various RADICAL Projects

---

py-ranger[¶](#py-ranger)
===

Homepage:
* <http://ranger.nongnu.org/>

Spack package:
* [py-ranger/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/py-ranger/package.py)

Versions: 1.7.2
Build Dependencies: [python](#python)
Link Dependencies: [python](#python)
Run Dependencies: [python](#python)

Description: A VIM-inspired file manager for the console

---

py-rasterio[¶](#py-rasterio)
===

Homepage:
* <https://github.com/mapbox/rasterio>

Spack package:
* [py-rasterio/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/py-rasterio/package.py)

Versions: 1.0a12
Build Dependencies: [py-click](#py-click), [py-snuggs](#py-snuggs), [py-cython](#py-cython), [py-setuptools](#py-setuptools), [py-attrs](#py-attrs), [gdal](#gdal), [py-cligj](#py-cligj), [py-numpy](#py-numpy), [py-affine](#py-affine), jpeg, [python](#python)
Link Dependencies: [gdal](#gdal), jpeg, [python](#python)
Run Dependencies: [py-click](#py-click), [py-snuggs](#py-snuggs), [py-attrs](#py-attrs), [py-cligj](#py-cligj), [py-numpy](#py-numpy), [py-affine](#py-affine), [python](#python)

Description: Rasterio reads and writes geospatial raster data.
Geographic information systems use GeoTIFF and other formats to organize and store gridded, or raster, datasets. Rasterio reads and writes these formats and provides a Python API based on N-D arrays. --- py-readme-renderer[¶](#py-readme-renderer) === Homepage: * <https://github.com/pypa/readme_rendererSpack package: * [py-readme-renderer/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/py-readme-renderer/package.py) Versions: 16.0 Build Dependencies: [py-docutils](#py-docutils), [py-setuptools](#py-setuptools), [py-bleach](#py-bleach), [py-pygments](#py-pygments), [python](#python), [py-six](#py-six) Link Dependencies: [python](#python) Run Dependencies: [py-docutils](#py-docutils), [py-bleach](#py-bleach), [py-pygments](#py-pygments), [python](#python), [py-six](#py-six) Description: readme_renderer is a library for rendering "readme" descriptions for Warehouse. --- py-regex[¶](#py-regex) === Homepage: * <https://pypi.python.org/pypi/regex/Spack package: * [py-regex/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/py-regex/package.py) Versions: 2017.07.11 Build Dependencies: [py-setuptools](#py-setuptools), [python](#python) Link Dependencies: [python](#python) Run Dependencies: [python](#python) Description: Alternative regular expression module, to replace re. --- py-reportlab[¶](#py-reportlab) === Homepage: * <https://pypi.python.org/pypi/reportlabSpack package: * [py-reportlab/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/py-reportlab/package.py) Versions: 3.4.0 Build Dependencies: [python](#python) Link Dependencies: [python](#python) Run Dependencies: [python](#python) Description: The ReportLab Toolkit. An Open Source Python library for generating PDFs and graphics. 
---

py-requests[¶](#py-requests)
===

Homepage:
* <http://python-requests.org>

Spack package:
* [py-requests/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/py-requests/package.py)

Versions: 2.14.2, 2.13.0, 2.11.1, 2.3.0
Build Dependencies: [py-setuptools](#py-setuptools), [python](#python)
Link Dependencies: [python](#python)
Run Dependencies: [python](#python)
Test Dependencies: [py-pytest-httpbin](#py-pytest-httpbin), [py-pytest-cov](#py-pytest-cov), [py-pytest](#py-pytest), [py-pytest-mock](#py-pytest-mock)
Description: Python HTTP for Humans.

---

py-requests-toolbelt[¶](#py-requests-toolbelt)
===

Homepage:
* <https://toolbelt.readthedocs.org/>

Spack package:
* [py-requests-toolbelt/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/py-requests-toolbelt/package.py)

Versions: 0.8.0
Build Dependencies: [py-setuptools](#py-setuptools), [python](#python), [py-requests](#py-requests)
Link Dependencies: [python](#python)
Run Dependencies: [python](#python), [py-requests](#py-requests)
Description: A toolbelt of useful classes and functions to be used with python-requests

---

py-restview[¶](#py-restview)
===

Homepage:
* <https://mg.pov.lt/restview/>

Spack package:
* [py-restview/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/py-restview/package.py)

Versions: 2.6.1
Build Dependencies: [py-docutils](#py-docutils), [py-readme-renderer](#py-readme-renderer), [py-setuptools](#py-setuptools), [python](#python), [py-pygments](#py-pygments)
Link Dependencies: [python](#python)
Run Dependencies: [py-docutils](#py-docutils), [py-readme-renderer](#py-readme-renderer), [py-pygments](#py-pygments), [python](#python)
Description: A viewer for ReStructuredText documents that renders them on the fly.
---

py-rope[¶](#py-rope)
===

Homepage:
* <https://github.com/python-rope/rope>

Spack package:
* [py-rope/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/py-rope/package.py)

Versions: 0.10.5
Build Dependencies: [py-setuptools](#py-setuptools), [python](#python)
Link Dependencies: [python](#python)
Run Dependencies: [python](#python)
Description: a python refactoring library.

---

py-rpy2[¶](#py-rpy2)
===

Homepage:
* <https://pypi.python.org/pypi/rpy2>

Spack package:
* [py-rpy2/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/py-rpy2/package.py)

Versions: 2.9.4, 2.8.6, 2.5.6, 2.5.4
Build Dependencies: [r](#r), [py-setuptools](#py-setuptools), [py-singledispatch](#py-singledispatch), [py-six](#py-six), [python](#python), [py-jinja2](#py-jinja2)
Link Dependencies: [python](#python)
Run Dependencies: [r](#r), [py-six](#py-six), [python](#python), [py-jinja2](#py-jinja2), [py-singledispatch](#py-singledispatch)
Description: rpy2 is a redesign and rewrite of rpy. It provides a low-level interface to R from Python, a proposed high-level interface, including wrappers to graphical libraries, as well as R-like structures and functions.
---

py-rsa[¶](#py-rsa)
===

Homepage:
* <https://stuvel.eu/rsa>

Spack package:
* [py-rsa/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/py-rsa/package.py)

Versions: 3.4.2
Build Dependencies: [py-pyasn1](#py-pyasn1), [py-setuptools](#py-setuptools), [python](#python)
Link Dependencies: [python](#python)
Run Dependencies: [py-pyasn1](#py-pyasn1), [py-setuptools](#py-setuptools), [python](#python)
Description: Pure-Python RSA implementation

---

py-rseqc[¶](#py-rseqc)
===

Homepage:
* <http://rseqc.sourceforge.net>

Spack package:
* [py-rseqc/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/py-rseqc/package.py)

Versions: 2.6.4
Build Dependencies: [r](#r), [py-setuptools](#py-setuptools), [py-numpy](#py-numpy), [py-pysam](#py-pysam), [python](#python), [py-bx-python](#py-bx-python)
Link Dependencies: [python](#python)
Run Dependencies: [r](#r), [py-numpy](#py-numpy), [py-pysam](#py-pysam), [python](#python), [py-bx-python](#py-bx-python)
Description: RSeQC package provides a number of useful modules that can comprehensively evaluate high throughput sequence data, especially RNA-seq data.

---

py-rtree[¶](#py-rtree)
===

Homepage:
* <http://toblerity.org/rtree/>

Spack package:
* [py-rtree/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/py-rtree/package.py)

Versions: 0.8.3
Build Dependencies: [py-setuptools](#py-setuptools), [python](#python), [libspatialindex](#libspatialindex)
Link Dependencies: [python](#python), [libspatialindex](#libspatialindex)
Run Dependencies: [python](#python)
Description: Python interface to the RTREE.4 Library.
---

py-saga-python[¶](#py-saga-python)
===

Homepage:
* <http://radical.rutgers.edu>

Spack package:
* [py-saga-python/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/py-saga-python/package.py)

Versions: 0.41.3
Build Dependencies: [python](#python), [py-setuptools](#py-setuptools), [py-apache-libcloud](#py-apache-libcloud), [py-radical-utils](#py-radical-utils)
Link Dependencies: [python](#python)
Run Dependencies: [python](#python), [py-apache-libcloud](#py-apache-libcloud), [py-radical-utils](#py-radical-utils)
Description: A light-weight access layer for distributed computing infrastructure

---

py-scandir[¶](#py-scandir)
===

Homepage:
* <https://github.com/benhoyt/scandir>

Spack package:
* [py-scandir/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/py-scandir/package.py)

Versions: 1.9.0, 1.6
Build Dependencies: [py-setuptools](#py-setuptools), [python](#python)
Link Dependencies: [python](#python)
Run Dependencies: [python](#python)
Description: scandir, a better directory iterator and faster os.walk().

---

py-scientificpython[¶](#py-scientificpython)
===

Homepage:
* <https://sourcesup.renater.fr/projects/scientific-py/>

Spack package:
* [py-scientificpython/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/py-scientificpython/package.py)

Versions: 2.8.1
Build Dependencies: [py-numpy](#py-numpy), [python](#python)
Link Dependencies: [python](#python)
Run Dependencies: [py-numpy](#py-numpy), [python](#python)
Description: ScientificPython is a collection of Python modules for scientific computing. It contains support for geometry, mathematical functions, statistics, physical units, IO, visualization, and parallelization.
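The fast directory iteration that py-scandir (listed above) provides was later absorbed into the standard library as os.scandir (Python 3.5+), with the package serving as a backport of the same API. A minimal sketch of the idea, using the stdlib version:

```python
# py-scandir's directory iterator, sketched with the stdlib os.scandir
# (Python 3.5+), which exposes the same API. Each entry carries cached file
# type information, so os.walk built on it avoids extra stat() calls.
import os
import tempfile

with tempfile.TemporaryDirectory() as root:
    open(os.path.join(root, "a.txt"), "w").close()
    os.mkdir(os.path.join(root, "sub"))

    # is_file() answers from the cached directory data where possible.
    names = sorted(e.name for e in os.scandir(root) if e.is_file())

print(names)  # ['a.txt']
```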
---

py-scikit-image[¶](#py-scikit-image)
===

Homepage:
* <http://scikit-image.org/>

Spack package:
* [py-scikit-image/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/py-scikit-image/package.py)

Versions: 0.12.3
Build Dependencies: pil, [py-networkx](#py-networkx), [py-cython](#py-cython), [py-setuptools](#py-setuptools), [py-dask](#py-dask), [py-matplotlib](#py-matplotlib), [python](#python), [py-scipy](#py-scipy), [py-six](#py-six)
Link Dependencies: [python](#python)
Run Dependencies: pil, [py-networkx](#py-networkx), [python](#python), [py-dask](#py-dask), [py-matplotlib](#py-matplotlib), [py-scipy](#py-scipy), [py-six](#py-six)
Description: Image processing algorithms for SciPy, including IO, morphology, filtering, warping, color manipulation, object detection, etc.

---

py-scikit-learn[¶](#py-scikit-learn)
===

Homepage:
* <https://pypi.python.org/pypi/scikit-learn>

Spack package:
* [py-scikit-learn/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/py-scikit-learn/package.py)

Versions: 0.20.0, 0.19.1, 0.18.1, 0.17.1, 0.16.1, 0.15.2, 0.13.1
Build Dependencies: [py-numpy](#py-numpy), [python](#python), [py-scipy](#py-scipy)
Link Dependencies: [python](#python)
Run Dependencies: [py-numpy](#py-numpy), [python](#python), [py-scipy](#py-scipy)
Description: A set of python modules for machine learning and data mining.
---

py-scipy[¶](#py-scipy)
===

Homepage:
* <http://www.scipy.org/>

Spack package:
* [py-scipy/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/py-scipy/package.py)

Versions: 1.1.0, 1.0.0, 0.19.1, 0.19.0, 0.18.1, 0.17.0, 0.15.1, 0.15.0
Build Dependencies: [py-numpy](#py-numpy), blas, [py-setuptools](#py-setuptools), [python](#python), lapack
Link Dependencies: blas, [python](#python), lapack
Run Dependencies: [py-numpy](#py-numpy), [python](#python)
Test Dependencies: [py-nose](#py-nose)
Description: SciPy (pronounced "Sigh Pie") is a Scientific Library for Python. It provides many user-friendly and efficient numerical routines such as routines for numerical integration and optimization.

---

py-seaborn[¶](#py-seaborn)
===

Homepage:
* <http://seaborn.pydata.org/>

Spack package:
* [py-seaborn/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/py-seaborn/package.py)

Versions: 0.9.0, 0.7.1
Build Dependencies: [py-setuptools](#py-setuptools), [py-scipy](#py-scipy), [py-numpy](#py-numpy), [py-matplotlib](#py-matplotlib), [py-pandas](#py-pandas), [python](#python)
Link Dependencies: [python](#python)
Run Dependencies: [py-scipy](#py-scipy), [py-pandas](#py-pandas), [py-matplotlib](#py-matplotlib), [py-numpy](#py-numpy), [python](#python)
Description: Seaborn: statistical data visualization. Seaborn is a library for making attractive and informative statistical graphics in Python. It is built on top of matplotlib and tightly integrated with the PyData stack, including support for numpy and pandas data structures and statistical routines from scipy and statsmodels.
---

py-setuptools[¶](#py-setuptools)
===

Homepage:
* <https://github.com/pypa/setuptools>

Spack package:
* [py-setuptools/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/py-setuptools/package.py)

Versions: 40.4.3, 40.2.0, 39.2.0, 39.0.1, 35.0.2, 34.4.1, 34.2.0, 25.2.0, 20.7.0, 20.6.7, 20.5, 19.2, 18.1, 16.0, 11.3.1
Build Dependencies: [py-appdirs](#py-appdirs), [py-packaging](#py-packaging), [python](#python), [py-six](#py-six)
Link Dependencies: [python](#python)
Run Dependencies: [py-appdirs](#py-appdirs), [py-packaging](#py-packaging), [python](#python), [py-six](#py-six)
Description: A Python utility that aids in the process of downloading, building, upgrading, installing, and uninstalling Python packages.

---

py-setuptools-git[¶](#py-setuptools-git)
===

Homepage:
* <https://pypi.python.org/pypi/setuptools-git>

Spack package:
* [py-setuptools-git/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/py-setuptools-git/package.py)

Versions: 1.2
Build Dependencies: [py-setuptools](#py-setuptools), [python](#python), [git](#git)
Link Dependencies: [git](#git), [python](#python)
Run Dependencies: [python](#python)
Description: Setuptools revision control system plugin for Git

---

py-setuptools-scm[¶](#py-setuptools-scm)
===

Homepage:
* <https://github.com/pypa/setuptools_scm>

Spack package:
* [py-setuptools-scm/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/py-setuptools-scm/package.py)

Versions: 3.1.0, 1.15.6
Build Dependencies: [py-setuptools](#py-setuptools), [python](#python)
Link Dependencies: [python](#python)
Run Dependencies: [python](#python)
Description: The blessed package to manage your versions by scm tags.
---

py-sfepy[¶](#py-sfepy)
===

Homepage:
* <http://sfepy.org>

Spack package:
* [py-sfepy/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/py-sfepy/package.py)

Versions: 2017.3
Build Dependencies: [py-numpy](#py-numpy), [py-setuptools](#py-setuptools), [python](#python)
Link Dependencies: [python](#python)
Run Dependencies: [py-pytables](#py-pytables), [py-sympy](#py-sympy), [hdf5](#hdf5), [py-scipy](#py-scipy), [py-petsc4py](#py-petsc4py), [py-numpy](#py-numpy), [py-matplotlib](#py-matplotlib), [python](#python), [py-six](#py-six)
Description: SfePy (http://sfepy.org) is a software for solving systems of coupled partial differential equations (PDEs) by the finite element method in 1D, 2D and 3D. It can be viewed both as a black-box PDE solver, and as a Python package which can be used for building custom applications.

---

py-sh[¶](#py-sh)
===

Homepage:
* <https://github.com/amoffat/sh>

Spack package:
* [py-sh/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/py-sh/package.py)

Versions: 1.12.9, 1.11
Build Dependencies: [py-setuptools](#py-setuptools), [python](#python)
Link Dependencies: [python](#python)
Run Dependencies: [python](#python)
Description: Python subprocess interface

---

py-shapely[¶](#py-shapely)
===

Homepage:
* <https://github.com/Toblerity/Shapely>

Spack package:
* [py-shapely/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/py-shapely/package.py)

Versions: 1.6.4
Build Dependencies: [py-numpy](#py-numpy), [py-cython](#py-cython), [geos](#geos), [python](#python), [py-setuptools](#py-setuptools)
Link Dependencies: [geos](#geos), [python](#python)
Run Dependencies: [py-numpy](#py-numpy), [python](#python)
Description: Manipulation and analysis of geometric objects in the Cartesian plane.
---

py-shiboken[¶](#py-shiboken)
===

Homepage:
* <https://shiboken.readthedocs.org/>

Spack package:
* [py-shiboken/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/py-shiboken/package.py)

Versions: 1.2.2
Build Dependencies: [libxml2](#libxml2), [py-setuptools](#py-setuptools), [cmake](#cmake), [qt](#qt), [python](#python), [py-sphinx](#py-sphinx)
Link Dependencies: [libxml2](#libxml2), [python](#python), [qt](#qt)
Run Dependencies: [python](#python), [py-sphinx](#py-sphinx)
Description: Shiboken generates bindings for C++ libraries using CPython.

---

py-simplegeneric[¶](#py-simplegeneric)
===

Homepage:
* <https://pypi.python.org/pypi/simplegeneric>

Spack package:
* [py-simplegeneric/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/py-simplegeneric/package.py)

Versions: 0.8.1, 0.8
Build Dependencies: [py-setuptools](#py-setuptools), [python](#python)
Link Dependencies: [python](#python)
Run Dependencies: [python](#python)
Description: Simple generic functions (similar to Python's own len(), pickle.dump(), etc.)
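The dispatch-by-argument-type idea behind py-simplegeneric later landed in the standard library as functools.singledispatch. A minimal stdlib sketch of that pattern (illustrative only; this is not simplegeneric's own API):

```python
# Generic-function dispatch, as py-simplegeneric provides it, sketched with
# the stdlib functools.singledispatch (Python 3.4+): one callable whose
# implementation is chosen by the type of its first argument.
from functools import singledispatch

@singledispatch
def describe(obj):
    # Fallback for types with no registered implementation.
    return "object"

@describe.register(list)
def _(obj):
    return "list of %d items" % len(obj)

@describe.register(str)
def _(obj):
    return "string %r" % obj

print(describe([1, 2, 3]))  # list of 3 items
print(describe("hi"))       # string 'hi'
```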
---

py-simplejson[¶](#py-simplejson)
===

Homepage:
* <https://github.com/simplejson/simplejson>

Spack package:
* [py-simplejson/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/py-simplejson/package.py)

Versions: 3.10.0, 3.9.0, 3.8.2, 3.8.1, 3.8.0
Build Dependencies: [py-setuptools](#py-setuptools), [python](#python)
Link Dependencies: [python](#python)
Run Dependencies: [python](#python)
Description: Simplejson is a simple, fast, extensible JSON encoder/decoder for Python

---

py-singledispatch[¶](#py-singledispatch)
===

Homepage:
* <https://pypi.python.org/pypi/singledispatch>

Spack package:
* [py-singledispatch/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/py-singledispatch/package.py)

Versions: 3.4.0.3
Build Dependencies: [py-setuptools](#py-setuptools), [python](#python), [py-six](#py-six)
Link Dependencies: [python](#python)
Run Dependencies: [python](#python), [py-six](#py-six)
Description: This library brings functools.singledispatch to Python 2.6-3.3.

---

py-sip[¶](#py-sip)
===

Homepage:
* <http://www.riverbankcomputing.com/software/sip/intro>

Spack package:
* [py-sip/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/py-sip/package.py)

Versions: 4.16.7, 4.16.5
Build Dependencies: [python](#python)
Link Dependencies: [python](#python)
Description: SIP is a tool that makes it very easy to create Python bindings for C and C++ libraries.

---

py-six[¶](#py-six)
===

Homepage:
* <https://pypi.python.org/pypi/six>

Spack package:
* [py-six/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/py-six/package.py)

Versions: 1.11.0, 1.10.0, 1.9.0, 1.8.0
Build Dependencies: [python](#python)
Link Dependencies: [python](#python)
Run Dependencies: [python](#python)
Description: Python 2 and 3 compatibility utilities.
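The compatibility utilities py-six bundles are thin shims of the following shape. This is a sketch of the pattern only, not six's source; the names are modeled on six's `string_types` and `ensure_text`:

```python
# The kind of shim py-six provides: a single name that behaves sensibly on
# both Python 2 and 3. Sketch of the pattern, not six's actual code.
import sys

PY3 = sys.version_info[0] >= 3

# Cf. six.string_types: what "text-like" means on this interpreter.
string_types = (str,) if PY3 else (basestring,)  # noqa: F821 (Py2-only name)

def ensure_text(value, encoding="utf-8"):
    """Return value as text, decoding bytes if needed (cf. six.ensure_text)."""
    if isinstance(value, bytes):
        return value.decode(encoding)
    return value

print(ensure_text(b"hello"))              # hello
print(isinstance("hello", string_types))  # True
```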
---

py-slepc4py[¶](#py-slepc4py)
===

Homepage:
* <https://pypi.python.org/pypi/slepc4py>

Spack package:
* [py-slepc4py/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/py-slepc4py/package.py)

Versions: 3.7.0
Build Dependencies: [py-petsc4py](#py-petsc4py), [slepc](#slepc), [py-setuptools](#py-setuptools), [python](#python)
Link Dependencies: [slepc](#slepc), [python](#python)
Run Dependencies: [py-petsc4py](#py-petsc4py), [python](#python)
Description: This package provides Python bindings for the SLEPc package.

---

py-slurm-pipeline[¶](#py-slurm-pipeline)
===

Homepage:
* <https://github.com/acorg/slurm-pipeline>

Spack package:
* [py-slurm-pipeline/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/py-slurm-pipeline/package.py)

Versions: 2.0.9, 1.1.13
Build Dependencies: [py-setuptools](#py-setuptools), [python](#python), [py-six](#py-six)
Link Dependencies: [python](#python)
Run Dependencies: [python](#python), [py-six](#py-six)
Description: A Python class for scheduling SLURM jobs

---

py-sncosmo[¶](#py-sncosmo)
===

Homepage:
* <http://sncosmo.readthedocs.io/>

Spack package:
* [py-sncosmo/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/py-sncosmo/package.py)

Versions: 1.2.0
Build Dependencies: [py-astropy](#py-astropy), [py-setuptools](#py-setuptools), [py-iminuit](#py-iminuit), [py-scipy](#py-scipy), [py-nestle](#py-nestle), [py-numpy](#py-numpy), [py-matplotlib](#py-matplotlib), [python](#python), [py-emcee](#py-emcee)
Link Dependencies: [python](#python)
Run Dependencies: [py-astropy](#py-astropy), [py-scipy](#py-scipy), [py-iminuit](#py-iminuit), [py-nestle](#py-nestle), [py-numpy](#py-numpy), [py-matplotlib](#py-matplotlib), [python](#python), [py-emcee](#py-emcee)
Description: SNCosmo is a Python library for high-level supernova cosmology analysis.
---

py-snowballstemmer[¶](#py-snowballstemmer)
===

Homepage:
* <https://github.com/shibukawa/snowball_py>

Spack package:
* [py-snowballstemmer/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/py-snowballstemmer/package.py)

Versions: 1.2.1
Build Dependencies: [python](#python)
Link Dependencies: [python](#python)
Run Dependencies: [python](#python)
Description: This package provides 16 stemmer algorithms (15 + Porter English stemmer) generated from Snowball algorithms.

---

py-snuggs[¶](#py-snuggs)
===

Homepage:
* <https://github.com/mapbox/snuggs>

Spack package:
* [py-snuggs/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/py-snuggs/package.py)

Versions: 1.4.1
Build Dependencies: [py-click](#py-click), [py-numpy](#py-numpy), [py-pyparsing](#py-pyparsing), [python](#python)
Link Dependencies: [python](#python)
Run Dependencies: [py-click](#py-click), [py-numpy](#py-numpy), [py-pyparsing](#py-pyparsing), [python](#python)
Description: Snuggs are s-expressions for Numpy

---

py-spectra[¶](#py-spectra)
===

Homepage:
* <https://pypi.python.org/pypi/spectra/0.0.8>

Spack package:
* [py-spectra/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/py-spectra/package.py)

Versions: 0.0.11, 0.0.8
Build Dependencies: [py-colormath](#py-colormath), [py-setuptools](#py-setuptools), [python](#python)
Link Dependencies: [python](#python)
Run Dependencies: [py-colormath](#py-colormath), [python](#python)
Description: Color scales and color conversion made easy for Python.
---

py-spefile[¶](#py-spefile)
===

Homepage:
* <https://github.com/conda-forge/spefile-feedstock>

Spack package:
* [py-spefile/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/py-spefile/package.py)

Versions: 1.6
Build Dependencies: [py-numpy](#py-numpy), [py-setuptools](#py-setuptools), [python](#python)
Link Dependencies: [python](#python)
Run Dependencies: [py-numpy](#py-numpy), [python](#python)
Description: Reader for SPE files, part of pyspec, a set of python routines for data analysis of x-ray scattering experiments

---

py-spglib[¶](#py-spglib)
===

Homepage:
* <http://atztogo.github.io/spglib/>

Spack package:
* [py-spglib/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/py-spglib/package.py)

Versions: 1.9.9.18
Build Dependencies: [py-numpy](#py-numpy), [py-setuptools](#py-setuptools), [python](#python)
Link Dependencies: [python](#python)
Run Dependencies: [py-numpy](#py-numpy), [py-setuptools](#py-setuptools), [python](#python)
Description: Python bindings for C library for finding and handling crystal symmetries.
---

py-sphinx[¶](#py-sphinx)
===

Homepage:
* <http://sphinx-doc.org>

Spack package:
* [py-sphinx/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/py-sphinx/package.py)

Versions: 1.7.4, 1.6.3, 1.6.1, 1.5.5, 1.4.5, 1.3.1
Build Dependencies: [py-pygments](#py-pygments), [py-docutils](#py-docutils), [py-sphinx-rtd-theme](#py-sphinx-rtd-theme), [python](#python), [py-alabaster](#py-alabaster), [py-six](#py-six), [py-jinja2](#py-jinja2), [py-snowballstemmer](#py-snowballstemmer), [py-babel](#py-babel), [py-imagesize](#py-imagesize), [py-setuptools](#py-setuptools), [py-requests](#py-requests), [py-typing](#py-typing), [py-packaging](#py-packaging), [py-sphinxcontrib-websupport](#py-sphinxcontrib-websupport)
Link Dependencies: [python](#python)
Run Dependencies: [py-pygments](#py-pygments), [py-docutils](#py-docutils), [py-sphinx-rtd-theme](#py-sphinx-rtd-theme), [python](#python), [py-alabaster](#py-alabaster), [py-six](#py-six), [py-jinja2](#py-jinja2), [py-snowballstemmer](#py-snowballstemmer), [py-babel](#py-babel), [py-imagesize](#py-imagesize), [py-setuptools](#py-setuptools), [py-requests](#py-requests), [py-typing](#py-typing), [py-packaging](#py-packaging), [py-sphinxcontrib-websupport](#py-sphinxcontrib-websupport)
Test Dependencies: [py-simplejson](#py-simplejson), [py-html5lib](#py-html5lib), [py-pytest](#py-pytest), [py-mock](#py-mock)
Description: Sphinx Documentation Generator.

---

py-sphinx-bootstrap-theme[¶](#py-sphinx-bootstrap-theme)
===

Homepage:
* <https://pypi.python.org/pypi/sphinx-bootstrap-theme/>

Spack package:
* [py-sphinx-bootstrap-theme/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/py-sphinx-bootstrap-theme/package.py)

Versions: 0.4.13
Build Dependencies: [py-setuptools](#py-setuptools), [python](#python)
Link Dependencies: [python](#python)
Run Dependencies: [python](#python)
Description: Sphinx Bootstrap Theme.
---

py-sphinx-rtd-theme[¶](#py-sphinx-rtd-theme)
===

Homepage:
* <https://github.com/rtfd/sphinx_rtd_theme/>

Spack package:
* [py-sphinx-rtd-theme/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/py-sphinx-rtd-theme/package.py)

Versions: 0.2.5b1, 0.1.10a0
Build Dependencies: [py-setuptools](#py-setuptools), [python](#python)
Link Dependencies: [python](#python)
Run Dependencies: [python](#python)
Description: ReadTheDocs.org theme for Sphinx.

---

py-sphinxcontrib-bibtex[¶](#py-sphinxcontrib-bibtex)
===

Homepage:
* <https://pypi.python.org/pypi/sphinxcontrib-bibtex>

Spack package:
* [py-sphinxcontrib-bibtex/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/py-sphinxcontrib-bibtex/package.py)

Versions: 0.3.5
Build Dependencies: [py-ordereddict](#py-ordereddict), [py-six](#py-six), [py-setuptools](#py-setuptools), [py-latexcodec](#py-latexcodec), [py-pybtex-docutils](#py-pybtex-docutils), [py-pybtex](#py-pybtex), [py-oset](#py-oset), [python](#python), [py-sphinx](#py-sphinx)
Link Dependencies: [python](#python)
Run Dependencies: [py-ordereddict](#py-ordereddict), [py-six](#py-six), [py-latexcodec](#py-latexcodec), [py-pybtex-docutils](#py-pybtex-docutils), [py-pybtex](#py-pybtex), [py-oset](#py-oset), [python](#python), [py-sphinx](#py-sphinx)
Description: A Sphinx extension for BibTeX style citations.
---

py-sphinxcontrib-programoutput[¶](#py-sphinxcontrib-programoutput)
===

Homepage:
* <https://sphinxcontrib-programoutput.readthedocs.org/>

Spack package:
* [py-sphinxcontrib-programoutput/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/py-sphinxcontrib-programoutput/package.py)

Versions: 0.10
Build Dependencies: [py-setuptools](#py-setuptools), [python](#python), [py-sphinx](#py-sphinx)
Link Dependencies: [python](#python)
Run Dependencies: [python](#python), [py-sphinx](#py-sphinx)
Description: A Sphinx extension to literally insert the output of arbitrary commands into documents, helping you to keep your command examples up to date.

---

py-sphinxcontrib-websupport[¶](#py-sphinxcontrib-websupport)
===

Homepage:
* <http://sphinx-doc.org/>

Spack package:
* [py-sphinxcontrib-websupport/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/py-sphinxcontrib-websupport/package.py)

Versions: 1.0.1
Build Dependencies: [py-setuptools](#py-setuptools), [python](#python)
Link Dependencies: [python](#python)
Run Dependencies: [python](#python)
Test Dependencies: [py-mock](#py-mock), [py-pytest](#py-pytest)
Description: sphinxcontrib-websupport provides a Python API to easily integrate Sphinx documentation into your Web application.
---

py-spyder[¶](#py-spyder)
===

Homepage:
* <https://github.com/spyder-ide/spyder>

Spack package:
* [py-spyder/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/py-spyder/package.py)

Versions: 3.1.3, 2.3.9
Build Dependencies: [py-sphinx](#py-sphinx), [py-pylint](#py-pylint), [py-jedi](#py-jedi), [py-numpydoc](#py-numpydoc), [py-pickleshare](#py-pickleshare), [py-nbconvert](#py-nbconvert), [qt](#qt), [py-psutil](#py-psutil), [py-pyflakes](#py-pyflakes), [py-chardet](#py-chardet), [py-qtconsole](#py-qtconsole), [py-zmq](#py-zmq), [py-qtpy](#py-qtpy), [py-pycodestyle](#py-pycodestyle), [py-qtawesome](#py-qtawesome), [py-pygments](#py-pygments), [python](#python), [py-rope](#py-rope)
Link Dependencies: [python](#python)
Run Dependencies: [py-sphinx](#py-sphinx), [py-pylint](#py-pylint), [py-jedi](#py-jedi), [py-numpydoc](#py-numpydoc), [py-pickleshare](#py-pickleshare), [py-nbconvert](#py-nbconvert), [qt](#qt), [py-psutil](#py-psutil), [py-pyflakes](#py-pyflakes), [py-chardet](#py-chardet), [py-qtconsole](#py-qtconsole), [py-zmq](#py-zmq), [py-qtpy](#py-qtpy), [py-pycodestyle](#py-pycodestyle), [py-qtawesome](#py-qtawesome), [py-pygments](#py-pygments), [python](#python), [py-rope](#py-rope)
Description: Scientific PYthon Development EnviRonment

---

py-spykeutils[¶](#py-spykeutils)
===

Homepage:
* <https://github.com/rproepp/spykeutils>

Spack package:
* [py-spykeutils/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/py-spykeutils/package.py)

Versions: 0.4.3
Build Dependencies: [py-quantities](#py-quantities), [py-neo](#py-neo), [python](#python), [py-setuptools](#py-setuptools), [py-scipy](#py-scipy)
Link Dependencies: [python](#python)
Run Dependencies: [py-quantities](#py-quantities), [py-neo](#py-neo), [python](#python), [py-scipy](#py-scipy)
Description: Utilities for analyzing electrophysiological data

---

py-sqlalchemy[¶](#py-sqlalchemy)
===

Homepage:
* <http://www.sqlalchemy.org/>

Spack package:
* [py-sqlalchemy/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/py-sqlalchemy/package.py)

Versions: 1.0.12
Build Dependencies: [python](#python)
Link Dependencies: [python](#python)
Run Dependencies: [python](#python)
Description: The Python SQL Toolkit and Object Relational Mapper

---

py-statsmodels[¶](#py-statsmodels)
===

Homepage:
* <http://www.statsmodels.org>

Spack package:
* [py-statsmodels/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/py-statsmodels/package.py)

Versions: 0.8.0
Build Dependencies: [py-patsy](#py-patsy), [py-cython](#py-cython), [py-setuptools](#py-setuptools), [py-pandas](#py-pandas), [py-scipy](#py-scipy), [py-numpy](#py-numpy), [python](#python)
Link Dependencies: [python](#python)
Run Dependencies: [py-patsy](#py-patsy), [py-cython](#py-cython), [py-scipy](#py-scipy), [py-pandas](#py-pandas), [py-matplotlib](#py-matplotlib), [py-numpy](#py-numpy), [python](#python)
Description: Statistical computations and models for use with SciPy

---

py-stevedore[¶](#py-stevedore)
===

Homepage:
* <https://docs.openstack.org/stevedore/latest/>

Spack package:
* [py-stevedore/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/py-stevedore/package.py)

Versions: 1.28.0
Build Dependencies: [py-pbr](#py-pbr), [python](#python), [py-six](#py-six)
Link Dependencies: [python](#python)
Run Dependencies: [py-pbr](#py-pbr), [python](#python), [py-six](#py-six)
Description: Manage Dynamic Plugins for Python Applications.
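stevedore manages plugins registered as setuptools entry points and loads them by name at runtime. The core mechanism can be sketched with the stdlib alone; here the PLUGINS table stands in for the entry-point metadata, and the names are illustrative rather than stevedore's API:

```python
# Dynamic plugin loading of the kind stevedore manages: resolve a short name
# to a "module:attribute" target and import it on demand. The PLUGINS dict
# stands in for the setuptools entry-point registry stevedore actually reads.
import importlib

PLUGINS = {
    "sqrt": "math:sqrt",
    "loads": "json:loads",
}

def load_plugin(name):
    """Import and return the object a plugin name points at."""
    module_name, _, attr = PLUGINS[name].partition(":")
    module = importlib.import_module(module_name)
    return getattr(module, attr)

sqrt = load_plugin("sqrt")
print(sqrt(9.0))  # 3.0
```

Keeping the registry in metadata rather than code is what lets applications discover third-party plugins without importing them up front.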
---

py-storm[¶](#py-storm)
===

Homepage:
* <https://storm.canonical.com/>

Spack package:
* [py-storm/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/py-storm/package.py)

Versions: 0.20
Build Dependencies: [py-setuptools](#py-setuptools), [python](#python)
Link Dependencies: [python](#python)
Run Dependencies: [python](#python)
Description: Storm is an object-relational mapper (ORM) for Python

---

py-subprocess32[¶](#py-subprocess32)
===

Homepage:
* <https://pypi.python.org/pypi/subprocess32>

Spack package:
* [py-subprocess32/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/py-subprocess32/package.py)

Versions: 3.2.7
Build Dependencies: [python](#python)
Link Dependencies: [python](#python)
Run Dependencies: [python](#python)
Description: A backport of the subprocess module from Python 3.2/3.3 for 2.x.

---

py-symengine[¶](#py-symengine)
===

Homepage:
* <https://github.com/symengine/symengine.py>

Spack package:
* [py-symengine/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/py-symengine/package.py)

Versions: develop, 0.2.0
Build Dependencies: [cmake](#cmake), [py-cython](#py-cython), [py-setuptools](#py-setuptools), [python](#python), [symengine](#symengine)
Link Dependencies: [python](#python), [symengine](#symengine)
Run Dependencies: [python](#python)
Description: Python wrappers for SymEngine, a symbolic manipulation library.
---

py-symfit[¶](#py-symfit)
===
Homepage: * <http://symfit.readthedocs.org>
Spack package: * [py-symfit/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/py-symfit/package.py)
Versions: 0.3.5
Build Dependencies: [py-pbr](#py-pbr), [py-setuptools](#py-setuptools), [python](#python)
Link Dependencies: [python](#python)
Run Dependencies: [py-funcsigs](#py-funcsigs), [py-scipy](#py-scipy), [py-numpy](#py-numpy), [python](#python), [py-sympy](#py-sympy)
Description: Symbolic Fitting; fitting as it should be.

---

py-sympy[¶](#py-sympy)
===
Homepage: * <https://pypi.python.org/pypi/sympy>
Spack package: * [py-sympy/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/py-sympy/package.py)
Versions: 1.1.1, 1.0, 0.7.6
Build Dependencies: [py-mpmath](#py-mpmath), [python](#python)
Link Dependencies: [python](#python)
Run Dependencies: [py-mpmath](#py-mpmath), [python](#python)
Description: SymPy is a Python library for symbolic mathematics.
---

py-tabulate[¶](#py-tabulate)
===
Homepage: * <https://bitbucket.org/astanin/python-tabulate>
Spack package: * [py-tabulate/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/py-tabulate/package.py)
Versions: 0.7.7
Build Dependencies: [py-setuptools](#py-setuptools), [python](#python)
Link Dependencies: [python](#python)
Run Dependencies: [python](#python)
Description: Pretty-print tabular data

---

py-tappy[¶](#py-tappy)
===
Homepage: * <https://github.com/mblayman/tappy>
Spack package: * [py-tappy/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/py-tappy/package.py)
Versions: 1.6
Build Dependencies: [python](#python), [py-setuptools](#py-setuptools), [py-nose](#py-nose), [py-pygments](#py-pygments)
Link Dependencies: [python](#python)
Run Dependencies: [python](#python), [py-nose](#py-nose), [py-pygments](#py-pygments)
Description: Python TAP interface module for unit tests

---

py-terminado[¶](#py-terminado)
===
Homepage: * <https://pypi.python.org/pypi/terminado>
Spack package: * [py-terminado/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/py-terminado/package.py)
Versions: 0.6
Build Dependencies: [py-ptyprocess](#py-ptyprocess), [py-tornado](#py-tornado), [python](#python)
Link Dependencies: [python](#python)
Run Dependencies: [py-ptyprocess](#py-ptyprocess), [py-tornado](#py-tornado), [python](#python)
Description: Terminals served to term.js using Tornado websockets

---

py-testinfra[¶](#py-testinfra)
===
Homepage: * <https://testinfra.readthedocs.io>
Spack package: * [py-testinfra/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/py-testinfra/package.py)
Versions: 1.13.0, 1.12.0, 1.11.1
Build Dependencies: [py-importlib](#py-importlib), [py-paramiko](#py-paramiko), [py-setuptools](#py-setuptools), [py-pytest-xdist](#py-pytest-xdist), [py-six](#py-six), [python](#python), [py-pytest](#py-pytest)
Link Dependencies: [python](#python)
Run Dependencies: [py-importlib](#py-importlib), [py-paramiko](#py-paramiko), [py-pytest](#py-pytest), [py-pytest-xdist](#py-pytest-xdist), [python](#python), [py-six](#py-six)
Description: With Testinfra you can write unit tests in Python to test actual state of your servers configured by management tools like Salt, Ansible, Puppet, Chef and so on.

---

py-tetoolkit[¶](#py-tetoolkit)
===
Homepage: * <http://hammelllab.labsites.cshl.edu/software>
Spack package: * [py-tetoolkit/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/py-tetoolkit/package.py)
Versions: 1.5.1
Build Dependencies: [py-pysam](#py-pysam), [r-deseq](#r-deseq), [py-setuptools](#py-setuptools), [python](#python)
Link Dependencies: [py-setuptools](#py-setuptools), [python](#python)
Run Dependencies: [py-pysam](#py-pysam), [r-deseq](#r-deseq), [python](#python)
Description: TEToolkit is a software package that utilizes both unambiguously (uniquely) and ambiguously (multi-) mapped reads to perform differential enrichment analyses from high throughput sequencing experiments.
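The py-tabulate entry above is a pretty-printer for tabular data. The core of what such a formatter does can be sketched in a few lines of standard-library Python (an illustrative sketch only, not py-tabulate's implementation; the function name and layout choices here are assumptions):

```python
def format_table(rows, headers):
    """Render rows as a plain-text table with left-aligned, padded columns."""
    table = [list(headers)] + [[str(cell) for cell in row] for row in rows]
    # Each column is as wide as its widest cell.
    widths = [max(len(row[i]) for row in table) for i in range(len(headers))]
    lines = ["  ".join(cell.ljust(w) for cell, w in zip(row, widths)) for row in table]
    # Separator between the header and the body, as tabular printers usually draw.
    lines.insert(1, "  ".join("-" * w for w in widths))
    return "\n".join(lines)

print(format_table([["py-tabulate", "0.7.7"], ["py-tappy", "1.6"]],
                   ["package", "version"]))
```

The real library adds many output formats (grid, HTML, LaTeX, and so on) on top of this basic alignment step.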
---

py-theano[¶](#py-theano)
===
Homepage: * <http://deeplearning.net/software/theano/>
Spack package: * [py-theano/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/py-theano/package.py)
Versions: 1.0.2, 1.0.1, 0.8.2, master
Build Dependencies: [cuda](#cuda), [libgpuarray](#libgpuarray), [cudnn](#cudnn), [py-setuptools](#py-setuptools), [python](#python), [py-numpy](#py-numpy), blas, [py-pygpu](#py-pygpu), [py-scipy](#py-scipy), [py-six](#py-six)
Link Dependencies: [cuda](#cuda), [libgpuarray](#libgpuarray), blas, [cudnn](#cudnn), [python](#python)
Run Dependencies: [py-scipy](#py-scipy), [py-setuptools](#py-setuptools), [py-numpy](#py-numpy), [py-pygpu](#py-pygpu), [python](#python), [py-six](#py-six)
Test Dependencies: py-nose-parameterized, [py-nose](#py-nose)
Description: Optimizing compiler for evaluating mathematical expressions on CPUs and GPUs.

---

py-tifffile[¶](#py-tifffile)
===
Homepage: * <https://github.com/blink1073/tifffile>
Spack package: * [py-tifffile/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/py-tifffile/package.py)
Versions: 0.12.1
Build Dependencies: [py-numpy](#py-numpy), [py-setuptools](#py-setuptools), [python](#python)
Link Dependencies: [python](#python)
Run Dependencies: [py-numpy](#py-numpy), [python](#python)
Description: Read and write image data from and to TIFF files.

---

py-toml[¶](#py-toml)
===
Homepage: * <https://github.com/uiri/toml.git>
Spack package: * [py-toml/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/py-toml/package.py)
Versions: 0.9.3
Build Dependencies: [py-setuptools](#py-setuptools), [python](#python)
Link Dependencies: [python](#python)
Run Dependencies: [python](#python)
Description: A Python library for parsing and creating TOML configuration files. For more information on the TOML standard, see https://github.com/toml-lang/toml.git

---

py-tomopy[¶](#py-tomopy)
===
Homepage: * <http://tomopy.readthedocs.io/en/latest/index.html>
Spack package: * [py-tomopy/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/py-tomopy/package.py)
Versions: 1.0.0
Build Dependencies: [py-h5py](#py-h5py), [py-setuptools](#py-setuptools), [py-scikit-image](#py-scikit-image), [py-scipy](#py-scipy), [py-pywavelets](#py-pywavelets), [py-dxchange](#py-dxchange), [py-pyfftw](#py-pyfftw), [py-numpy](#py-numpy), [python](#python), [py-six](#py-six)
Link Dependencies: [python](#python)
Run Dependencies: [py-dxchange](#py-dxchange), [py-scikit-image](#py-scikit-image), [py-scipy](#py-scipy), [py-numpy](#py-numpy), [py-h5py](#py-h5py), [py-pyfftw](#py-pyfftw), [py-pywavelets](#py-pywavelets), [python](#python), [py-six](#py-six)
Description: TomoPy is an open-source Python package for tomographic data processing and image reconstruction.
---

py-toolz[¶](#py-toolz)
===
Homepage: * <http://github.com/pytoolz/toolz/>
Spack package: * [py-toolz/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/py-toolz/package.py)
Versions: 0.9.0
Build Dependencies: [py-setuptools](#py-setuptools), [python](#python)
Link Dependencies: [python](#python)
Run Dependencies: [python](#python)
Description: A set of utility functions for iterators, functions, and dictionaries

---

py-tornado[¶](#py-tornado)
===
Homepage: * <https://github.com/tornadoweb/tornado>
Spack package: * [py-tornado/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/py-tornado/package.py)
Versions: 4.4.0
Build Dependencies: [py-backports-ssl-match-hostname](#py-backports-ssl-match-hostname), [py-setuptools](#py-setuptools), [py-backports-abc](#py-backports-abc), [py-singledispatch](#py-singledispatch), [py-certifi](#py-certifi), [python](#python)
Link Dependencies: [python](#python)
Run Dependencies: [py-backports-ssl-match-hostname](#py-backports-ssl-match-hostname), [py-singledispatch](#py-singledispatch), [python](#python), [py-backports-abc](#py-backports-abc), [py-certifi](#py-certifi)
Description: Tornado is a Python web framework and asynchronous networking library.
---

py-tqdm[¶](#py-tqdm)
===
Homepage: * <https://github.com/tqdm/tqdm>
Spack package: * [py-tqdm/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/py-tqdm/package.py)
Versions: 4.8.4
Build Dependencies: [py-setuptools](#py-setuptools), [python](#python)
Link Dependencies: [python](#python)
Run Dependencies: [python](#python)
Description: A Fast, Extensible Progress Meter

---

py-traceback2[¶](#py-traceback2)
===
Homepage: * <https://github.com/testing-cabal/traceback2>
Spack package: * [py-traceback2/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/py-traceback2/package.py)
Versions: 1.4.0
Build Dependencies: [py-pbr](#py-pbr), [py-setuptools](#py-setuptools), [python](#python), [py-linecache2](#py-linecache2)
Link Dependencies: [python](#python)
Run Dependencies: [py-pbr](#py-pbr), [python](#python), [py-linecache2](#py-linecache2)
Description: Backports of the traceback module

---

py-traitlets[¶](#py-traitlets)
===
Homepage: * <https://pypi.python.org/pypi/traitlets>
Spack package: * [py-traitlets/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/py-traitlets/package.py)
Versions: 4.3.1, 4.3.0, 4.2.2, 4.2.1, 4.2.0, 4.1.0, 4.0.0, 4.0
Build Dependencies: [py-enum34](#py-enum34), [py-ipython-genutils](#py-ipython-genutils), [python](#python), [py-decorator](#py-decorator)
Link Dependencies: [python](#python)
Run Dependencies: [py-enum34](#py-enum34), [py-ipython-genutils](#py-ipython-genutils), [python](#python), [py-decorator](#py-decorator)
Description: Traitlets Python config system

---

py-tuiview[¶](#py-tuiview)
===
Homepage: * <https://bitbucket.org/chchrsc/tuiview>
Spack package: * [py-tuiview/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/py-tuiview/package.py)
Versions: 1.1.7
Build Dependencies: [gdal](#gdal), [py-numpy](#py-numpy), [python](#python), [py-pyqt](#py-pyqt)
Link Dependencies: [gdal](#gdal), [python](#python)
Run Dependencies: [py-numpy](#py-numpy), [python](#python), [py-pyqt](#py-pyqt)
Description: TuiView is a lightweight raster GIS with powerful raster attribute table manipulation abilities.

---

py-twisted[¶](#py-twisted)
===
Homepage: * <https://twistedmatrix.com/>
Spack package: * [py-twisted/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/py-twisted/package.py)
Versions: 15.4.0, 15.3.0
Build Dependencies: [py-setuptools](#py-setuptools), [python](#python)
Link Dependencies: [python](#python)
Run Dependencies: [python](#python)
Description: An asynchronous networking framework written in Python

---

py-typing[¶](#py-typing)
===
Homepage: * <https://docs.python.org/3/library/typing.html>
Spack package: * [py-typing/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/py-typing/package.py)
Versions: 3.6.1
Build Dependencies: [python](#python)
Link Dependencies: [python](#python)
Run Dependencies: [python](#python)
Description: This is a backport of the standard library typing module to Python versions older than 3.6.

---

py-tzlocal[¶](#py-tzlocal)
===
Homepage: * <https://github.com/regebro/tzlocal>
Spack package: * [py-tzlocal/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/py-tzlocal/package.py)
Versions: 1.3
Build Dependencies: [py-setuptools](#py-setuptools), [python](#python), [py-pytz](#py-pytz)
Link Dependencies: [python](#python)
Run Dependencies: [python](#python), [py-pytz](#py-pytz)
Description: tzinfo object for the local timezone.
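The py-tqdm entry above describes a progress meter that wraps an iterable and draws a live text bar. The core idea fits in a few lines of standard-library Python (a sketch of the concept, not py-tqdm's implementation; the `progress` name and bar style are assumptions):

```python
import sys

def progress(iterable, total=None, width=20):
    """Yield items from iterable while drawing a simple text bar to stderr."""
    items = list(iterable) if total is None else iterable
    total = total if total is not None else len(items)
    for i, item in enumerate(items, 1):
        filled = width * i // total
        bar = "#" * filled + "-" * (width - filled)
        # \r rewrites the same terminal line on each step, like tqdm does.
        sys.stderr.write(f"\r[{bar}] {i}/{total}")
        sys.stderr.flush()
        yield item
    sys.stderr.write("\n")

# Usage: wrap any iterable, in the same spirit as `for x in tqdm(seq):`.
total = sum(progress(range(10)))
```

The real library adds rate estimation, ETA, nested bars, and terminal-width handling on top of this loop.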
---

py-udunits[¶](#py-udunits)
===
Homepage: * <https://github.com/SciTools/cf_units>
Spack package: * [py-udunits/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/py-udunits/package.py)
Versions: 1.1.3
Build Dependencies: [py-netcdf4](#py-netcdf4), [py-six](#py-six), [py-setuptools](#py-setuptools), [python](#python), [udunits2](#udunits2)
Link Dependencies: [python](#python), [udunits2](#udunits2)
Run Dependencies: [py-netcdf4](#py-netcdf4), [python](#python), [py-six](#py-six)
Description: The MetOffice cf_units Python interface to the UDUNITS-2 Library.

---

py-umi-tools[¶](#py-umi-tools)
===
Homepage: * <https://github.com/CGATOxford/UMI-tools>
Spack package: * [py-umi-tools/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/py-umi-tools/package.py)
Versions: 0.5.3
Build Dependencies: [python](#python), [py-pandas](#py-pandas), [py-setuptools](#py-setuptools), [py-future](#py-future), [py-pysam](#py-pysam), [py-regex](#py-regex), [py-matplotlib](#py-matplotlib), [py-numpy](#py-numpy), [py-scipy](#py-scipy), [py-six](#py-six)
Link Dependencies: [python](#python)
Run Dependencies: [py-pandas](#py-pandas), [python](#python), [py-future](#py-future), [py-pysam](#py-pysam), [py-regex](#py-regex), [py-matplotlib](#py-matplotlib), [py-numpy](#py-numpy), [py-scipy](#py-scipy), [py-six](#py-six)
Description: Tools for handling Unique Molecular Identifiers in NGS data sets

---

py-unittest2[¶](#py-unittest2)
===
Homepage: * <https://pypi.python.org/pypi/unittest2>
Spack package: * [py-unittest2/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/py-unittest2/package.py)
Versions: 1.1.0
Build Dependencies: [py-enum34](#py-enum34), [py-six](#py-six), [py-setuptools](#py-setuptools), [py-argparse](#py-argparse), [python](#python), [py-traceback2](#py-traceback2)
Link Dependencies: [python](#python)
Run Dependencies: [py-enum34](#py-enum34), [py-six](#py-six), [py-traceback2](#py-traceback2), [python](#python), [py-argparse](#py-argparse)
Description: unittest2 is a backport of the new features added to the unittest testing framework in Python 2.7 and onwards.

---

py-unittest2py3k[¶](#py-unittest2py3k)
===
Homepage: * <https://pypi.python.org/pypi/unittest2py3k>
Spack package: * [py-unittest2py3k/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/py-unittest2py3k/package.py)
Versions: 0.5.1
Build Dependencies: [py-setuptools](#py-setuptools), [python](#python)
Link Dependencies: [python](#python)
Run Dependencies: [python](#python)
Description: unittest2 is a backport of the new features added to the unittest testing framework in Python 2.7 and 3.2. This is a Python 3 compatible version of unittest2.

---

py-urllib3[¶](#py-urllib3)
===
Homepage: * <https://urllib3.readthedocs.io/>
Spack package: * [py-urllib3/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/py-urllib3/package.py)
Versions: 1.20, 1.14
Build Dependencies: [py-setuptools](#py-setuptools), [python](#python)
Link Dependencies: [python](#python)
Run Dependencies: [python](#python)
Description: HTTP library with thread-safe connection pooling, file post, and more.
---

py-urwid[¶](#py-urwid)
===
Homepage: * <http://urwid.org/>
Spack package: * [py-urwid/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/py-urwid/package.py)
Versions: 1.3.0
Build Dependencies: [py-setuptools](#py-setuptools), [python](#python)
Link Dependencies: [python](#python)
Run Dependencies: [python](#python)
Description: A full-featured console UI library

---

py-vcversioner[¶](#py-vcversioner)
===
Homepage: * <https://github.com/habnabit/vcversioner>
Spack package: * [py-vcversioner/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/py-vcversioner/package.py)
Versions: 2.16.0.0
Build Dependencies: [py-setuptools](#py-setuptools), [python](#python)
Link Dependencies: [python](#python)
Run Dependencies: [python](#python)
Description: Vcversioner: Take version numbers from version control.

---

py-virtualenv[¶](#py-virtualenv)
===
Homepage: * <https://virtualenv.pypa.io/>
Spack package: * [py-virtualenv/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/py-virtualenv/package.py)
Versions: 16.0.0, 15.1.0, 15.0.1, 13.0.1, 1.11.6
Build Dependencies: [py-setuptools](#py-setuptools), [python](#python)
Link Dependencies: [python](#python)
Run Dependencies: [py-setuptools](#py-setuptools), [python](#python)
Description: virtualenv is a tool to create isolated Python environments.

---

py-virtualenv-clone[¶](#py-virtualenv-clone)
===
Homepage: * <https://github.com/edwardgeorge/virtualenv-clone>
Spack package: * [py-virtualenv-clone/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/py-virtualenv-clone/package.py)
Versions: 0.2.6
Build Dependencies: [py-setuptools](#py-setuptools), [python](#python)
Link Dependencies: [python](#python)
Run Dependencies: [py-setuptools](#py-setuptools), [python](#python)
Description: A script for cloning a non-relocatable virtualenv.
---

py-virtualenvwrapper[¶](#py-virtualenvwrapper)
===
Homepage: * <https://bitbucket.org/virtualenvwrapper/virtualenvwrapper.git>
Spack package: * [py-virtualenvwrapper/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/py-virtualenvwrapper/package.py)
Versions: 4.8.2
Build Dependencies: [py-virtualenv-clone](#py-virtualenv-clone), [py-stevedore](#py-stevedore), [py-virtualenv](#py-virtualenv), [python](#python), [py-setuptools](#py-setuptools)
Link Dependencies: [python](#python)
Run Dependencies: [py-virtualenv-clone](#py-virtualenv-clone), [py-stevedore](#py-stevedore), [py-virtualenv](#py-virtualenv), [python](#python), [py-setuptools](#py-setuptools)
Description: virtualenvwrapper is a set of extensions to Ian Bicking's virtualenv tool. The extensions include wrappers for creating and deleting virtual environments and otherwise managing your development workflow, making it easier to work on more than one project at a time without introducing conflicts in their dependencies.
---

py-vsc-base[¶](#py-vsc-base)
===
Homepage: * <https://github.com/hpcugent/vsc-base/>
Spack package: * [py-vsc-base/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/py-vsc-base/package.py)
Versions: 2.5.8
Build Dependencies: [py-setuptools](#py-setuptools), [python](#python)
Link Dependencies: [python](#python)
Run Dependencies: [py-setuptools](#py-setuptools), [python](#python)
Description: Common Python libraries tools created by HPC-UGent

---

py-vsc-install[¶](#py-vsc-install)
===
Homepage: * <https://github.com/hpcugent/vsc-install/>
Spack package: * [py-vsc-install/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/py-vsc-install/package.py)
Versions: 0.10.25
Build Dependencies: [py-setuptools](#py-setuptools), [python](#python)
Link Dependencies: [python](#python)
Run Dependencies: [py-setuptools](#py-setuptools), [python](#python)
Description: Shared setuptools functions and classes for Python libraries developed by HPC-UGent.

---

py-wcsaxes[¶](#py-wcsaxes)
===
Homepage: * <http://wcsaxes.readthedocs.io/en/latest/index.html>
Spack package: * [py-wcsaxes/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/py-wcsaxes/package.py)
Versions: 0.8
Build Dependencies: [py-numpy](#py-numpy), [py-astropy](#py-astropy), [py-matplotlib](#py-matplotlib), [py-setuptools](#py-setuptools), [python](#python)
Link Dependencies: [python](#python)
Run Dependencies: [py-numpy](#py-numpy), [py-astropy](#py-astropy), [py-matplotlib](#py-matplotlib), [python](#python)
Description: WCSAxes is a framework for making plots of Astronomical data in Matplotlib.
---

py-wcwidth[¶](#py-wcwidth)
===
Homepage: * <https://pypi.python.org/pypi/wcwidth>
Spack package: * [py-wcwidth/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/py-wcwidth/package.py)
Versions: 0.1.7
Build Dependencies: [py-setuptools](#py-setuptools), [python](#python)
Link Dependencies: [python](#python)
Run Dependencies: [python](#python)
Description: Measures number of Terminal column cells of wide-character codes

---

py-webkit-server[¶](#py-webkit-server)
===
Homepage: * <https://github.com/niklasb/webkit-server>
Spack package: * [py-webkit-server/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/py-webkit-server/package.py)
Versions: develop, 1.0
Build Dependencies: [python](#python)
Link Dependencies: [python](#python)
Run Dependencies: [python](#python)
Description: a Webkit-based, headless web client

---

py-weblogo[¶](#py-weblogo)
===
Homepage: * <http://weblogo.threeplusone.com>
Spack package: * [py-weblogo/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/py-weblogo/package.py)
Versions: 3.6.0
Build Dependencies: [pdf2svg](#pdf2svg), [ghostscript](#ghostscript), [py-numpy](#py-numpy), [py-setuptools](#py-setuptools), [python](#python)
Link Dependencies: [python](#python)
Run Dependencies: [pdf2svg](#pdf2svg), [ghostscript](#ghostscript), [py-numpy](#py-numpy), [python](#python)
Description: WebLogo is a web based application designed to make the generation of sequence logos as easy and painless as possible.
---

py-werkzeug[¶](#py-werkzeug)
===
Homepage: * <http://werkzeug.pocoo.org>
Spack package: * [py-werkzeug/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/py-werkzeug/package.py)
Versions: 0.11.15, 0.11.11
Build Dependencies: [py-setuptools](#py-setuptools), [python](#python)
Link Dependencies: [python](#python)
Run Dependencies: [python](#python)
Description: The Swiss Army knife of Python web development

---

py-wheel[¶](#py-wheel)
===
Homepage: * <https://pypi.python.org/pypi/wheel>
Spack package: * [py-wheel/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/py-wheel/package.py)
Versions: 0.29.0, 0.26.0
Build Dependencies: [py-setuptools](#py-setuptools), [python](#python)
Link Dependencies: [python](#python)
Run Dependencies: [python](#python)
Description: A built-package format for Python.

---

py-widgetsnbextension[¶](#py-widgetsnbextension)
===
Homepage: * <https://pypi.python.org/pypi/widgetsnbextension>
Spack package: * [py-widgetsnbextension/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/py-widgetsnbextension/package.py)
Versions: 1.2.6
Build Dependencies: [py-setuptools](#py-setuptools), [python](#python), [py-jupyter-notebook](#py-jupyter-notebook)
Link Dependencies: [python](#python)
Run Dependencies: [python](#python), [py-jupyter-notebook](#py-jupyter-notebook)
Description: IPython HTML widgets for Jupyter

---

py-wrapt[¶](#py-wrapt)
===
Homepage: * <https://github.com/GrahamDumpleton/wrapt>
Spack package: * [py-wrapt/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/py-wrapt/package.py)
Versions: 1.10.10
Build Dependencies: [python](#python)
Link Dependencies: [python](#python)
Run Dependencies: [python](#python)
Description: Module for decorators, wrappers and monkey patching.
---

py-xarray[¶](#py-xarray)
===
Homepage: * <https://github.com/pydata/xarray>
Spack package: * [py-xarray/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/py-xarray/package.py)
Versions: 0.9.1
Build Dependencies: [py-numpy](#py-numpy), [py-pandas](#py-pandas), [python](#python), [py-setuptools](#py-setuptools)
Link Dependencies: [python](#python)
Run Dependencies: [py-numpy](#py-numpy), [py-pandas](#py-pandas), [python](#python)
Description: N-D labeled arrays and datasets in Python

---

py-xattr[¶](#py-xattr)
===
Homepage: * <http://pyxattr.k1024.org/>
Spack package: * [py-xattr/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/py-xattr/package.py)
Versions: develop
Build Dependencies: [py-setuptools](#py-setuptools), [python](#python)
Link Dependencies: [python](#python)
Run Dependencies: [python](#python)
Description: A python interface to access extended file attributes, sans libattr dependency

---

py-xdot[¶](#py-xdot)
===
Homepage: * <https://github.com/jrfonseca/xdot.py>
Spack package: * [py-xdot/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/py-xdot/package.py)
Versions: 1.0, 0.9, master
Build Dependencies: [py-pycairo](#py-pycairo), [gtkplus](#gtkplus), [py-pygobject](#py-pygobject), [pango](#pango), [atk](#atk), [python](#python), [py-setuptools](#py-setuptools), [gdk-pixbuf](#gdk-pixbuf)
Link Dependencies: [python](#python)
Run Dependencies: [py-pycairo](#py-pycairo), [gtkplus](#gtkplus), [py-pygobject](#py-pygobject), [pango](#pango), [atk](#atk), [python](#python), [py-setuptools](#py-setuptools), [gdk-pixbuf](#gdk-pixbuf)
Description: xdot.py is an interactive viewer for graphs written in Graphviz's dot language.
---

py-xlrd[¶](#py-xlrd)
===
Homepage: * <http://www.python-excel.org/>
Spack package: * [py-xlrd/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/py-xlrd/package.py)
Versions: 0.9.4
Build Dependencies: [python](#python)
Link Dependencies: [python](#python)
Run Dependencies: [python](#python)
Description: Library for developers to extract data from Microsoft Excel (tm) spreadsheet files

---

py-xlsxwriter[¶](#py-xlsxwriter)
===
Homepage: * <https://pypi.python.org/pypi/XlsxWriter>
Spack package: * [py-xlsxwriter/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/py-xlsxwriter/package.py)
Versions: 1.0.2
Build Dependencies: [python](#python)
Link Dependencies: [python](#python)
Run Dependencies: [python](#python)
Description: XlsxWriter is a Python module for writing files in the Excel 2007+ XLSX file format.

---

py-xmlrunner[¶](#py-xmlrunner)
===
Homepage: * <https://github.com/pycontribs/xmlrunner>
Spack package: * [py-xmlrunner/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/py-xmlrunner/package.py)
Versions: 1.7.7
Build Dependencies: [py-unittest2](#py-unittest2), [py-setuptools](#py-setuptools), [python](#python)
Link Dependencies: [python](#python)
Run Dependencies: [py-unittest2](#py-unittest2), [python](#python)
Description: PyUnit-based test runner with JUnit like XML reporting.

---

py-xopen[¶](#py-xopen)
===
Homepage: * <https://github.com/marcelm/xopen>
Spack package: * [py-xopen/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/py-xopen/package.py)
Versions: 0.1.1
Build Dependencies: [py-setuptools](#py-setuptools), [python](#python)
Link Dependencies: [python](#python)
Run Dependencies: [python](#python)
Description: This small Python module provides an xopen function that works like the built-in open function, but can also deal with compressed files. Supported compression formats are gzip, bzip2 and xz. They are automatically recognized by their file extensions .gz, .bz2 or .xz.

---

py-xpyb[¶](#py-xpyb)
===
Homepage: * <https://xcb.freedesktop.org/>
Spack package: * [py-xpyb/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/py-xpyb/package.py)
Versions: 1.3.1
Build Dependencies: [libxcb](#libxcb), [xcb-proto](#xcb-proto), [python](#python)
Link Dependencies: [libxcb](#libxcb), [python](#python)
Description: xpyb provides a Python binding to the X Window System protocol via libxcb.

---

py-xvfbwrapper[¶](#py-xvfbwrapper)
===
Homepage: * <https://pypi.python.org/pypi/xvfbwrapper/0.2.9>
Spack package: * [py-xvfbwrapper/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/py-xvfbwrapper/package.py)
Versions: 0.2.9
Build Dependencies: [py-setuptools](#py-setuptools), [python](#python)
Link Dependencies: [python](#python)
Run Dependencies: [python](#python)
Description: run headless display inside X virtual framebuffer (Xvfb)

---

py-yamlreader[¶](#py-yamlreader)
===
Homepage: * <http://pyyaml.org/wiki/PyYAML>
Spack package: * [py-yamlreader/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/py-yamlreader/package.py)
Versions: 3.0.4
Build Dependencies: [py-pyyaml](#py-pyyaml), [py-setuptools](#py-setuptools), [python](#python), [py-six](#py-six)
Link Dependencies: [python](#python)
Run Dependencies: [py-pyyaml](#py-pyyaml), [py-setuptools](#py-setuptools), [python](#python), [py-six](#py-six)
Description: Yamlreader merges YAML data from a directory, a list of files or a file glob.
---

py-yapf[¶](#py-yapf)
===
Homepage: * <https://github.com/google/yapf>
Spack package: * [py-yapf/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/py-yapf/package.py)
Versions: 0.2.1
Build Dependencies: [py-setuptools](#py-setuptools), [python](#python)
Link Dependencies: [python](#python)
Run Dependencies: [python](#python)
Description: Yet Another Python Formatter

---

py-yt[¶](#py-yt)
===
Homepage: * <http://yt-project.org>
Spack package: * [py-yt/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/py-yt/package.py)
Versions: develop, 3.4.1, 3.4.0, 3.3.5, 3.3.4, 3.3.3, 3.3.2, 3.3.1, 3.3.0, 3.2.3, 3.2.2
Build Dependencies: [py-astropy](#py-astropy), [py-cython](#py-cython), [py-sympy](#py-sympy), [python](#python), [py-matplotlib](#py-matplotlib), [py-ipython](#py-ipython), [py-h5py](#py-h5py), [rockstar](#rockstar), [py-numpy](#py-numpy), [py-scipy](#py-scipy), [py-setuptools](#py-setuptools)
Link Dependencies: [python](#python)
Run Dependencies: [py-astropy](#py-astropy), [py-cython](#py-cython), [py-sympy](#py-sympy), [python](#python), [py-matplotlib](#py-matplotlib), [py-ipython](#py-ipython), [py-h5py](#py-h5py), [rockstar](#rockstar), [py-numpy](#py-numpy), [py-scipy](#py-scipy), [py-setuptools](#py-setuptools)
Description: Volumetric Data Analysis. yt is a python package for analyzing and visualizing volumetric, multi-resolution data from astrophysical simulations, radio telescopes, and a burgeoning interdisciplinary community.
---

py-ytopt[¶](#py-ytopt)
===
Homepage: * <https://xgitlab.cels.anl.gov/pbalapra/ytopt>
Spack package: * [py-ytopt/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/py-ytopt/package.py)
Versions: 0.1.0
Build Dependencies: py-scikit-optimize, [python](#python), [py-scikit-learn](#py-scikit-learn)
Link Dependencies: [python](#python)
Run Dependencies: py-scikit-optimize, [python](#python), [py-scikit-learn](#py-scikit-learn)
Description: Ytopt package implements search using Random Forest (SuRF), an autotuning search method developed within Y-Tune ECP project.

---

py-zmq[¶](#py-zmq)
===
Homepage: * <https://github.com/zeromq/pyzmq>
Spack package: * [py-zmq/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/py-zmq/package.py)
Versions: 16.0.2, 14.7.0
Build Dependencies: [zeromq](#zeromq), [py-cython](#py-cython), [py-cffi](#py-cffi), [py-py](#py-py), [python](#python)
Link Dependencies: [zeromq](#zeromq), [python](#python)
Run Dependencies: [py-cython](#py-cython), [py-cffi](#py-cffi), [py-py](#py-py), [python](#python)
Description: PyZMQ: Python bindings for zeromq.

---

py-zope-event[¶](#py-zope-event)
===
Homepage: * <http://github.com/zopefoundation/zope.event>
Spack package: * [py-zope-event/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/py-zope-event/package.py)
Versions: 4.3.0
Build Dependencies: [py-setuptools](#py-setuptools), [python](#python)
Link Dependencies: [python](#python)
Run Dependencies: [python](#python)
Description: Very basic event publishing system.
---

py-zope-interface[¶](#py-zope-interface)
===
Homepage: * <https://github.com/zopefoundation/zope.interface>
Spack package: * [py-zope-interface/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/py-zope-interface/package.py)
Versions: 4.5.0
Build Dependencies: [py-setuptools](#py-setuptools), [python](#python)
Link Dependencies: [python](#python)
Run Dependencies: [python](#python)
Test Dependencies: [py-coverage](#py-coverage), [py-zope-event](#py-zope-event), [py-nose](#py-nose)
Description: This package provides an implementation of "object interfaces" for Python. Interfaces are a mechanism for labeling objects as conforming to a given API or contract. So, this package can be considered as implementation of the Design By Contract methodology support in Python.

---

pythia6[¶](#pythia6)
===
Homepage: * <https://pythiasix.hepforge.org/>
Spack package: * [pythia6/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/pythia6/package.py)
Versions: 6.4.28
Build Dependencies: [cmake](#cmake)
Description: PYTHIA is a program for the generation of high-energy physics events, i.e. for the description of collisions at high energies between elementary particles such as e+, e-, p and pbar in various combinations. PYTHIA6 is a Fortran package which is no longer maintained: new prospective users should use Pythia8 instead. This recipe includes patches required to interoperate with Root.
---
python[¶](#python)
===
Homepage:
* <http://www.python.org>

Spack package:
* [python/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/python/package.py)

Versions: 3.7.0, 3.6.5, 3.6.4, 3.6.3, 3.6.2, 3.6.1, 3.6.0, 3.5.2, 3.5.1, 3.5.0, 3.4.3, 3.3.6, 3.2.6, 3.1.5, 2.7.15, 2.7.14, 2.7.13, 2.7.12, 2.7.11, 2.7.10, 2.7.9, 2.7.8
Build Dependencies: [zlib](#zlib), [bzip2](#bzip2), pkgconfig, [sqlite](#sqlite), [openssl](#openssl), [tk](#tk), [libffi](#libffi), [readline](#readline), [gdbm](#gdbm), [ncurses](#ncurses), [tcl](#tcl)
Link Dependencies: [zlib](#zlib), [bzip2](#bzip2), [tcl](#tcl), [sqlite](#sqlite), [openssl](#openssl), [tk](#tk), [libffi](#libffi), [readline](#readline), [gdbm](#gdbm), [ncurses](#ncurses)
Description: The Python programming language.

---
qbank[¶](#qbank)
===
Homepage:
* <http://www.pnnl.gov/>

Spack package:
* [qbank/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/qbank/package.py)

Versions: 2.10.4
Build Dependencies: [perl](#perl), [perl-dbi](#perl-dbi), [openssl](#openssl)
Link Dependencies: [openssl](#openssl)
Run Dependencies: [perl](#perl), [perl-dbi](#perl-dbi)
Description: QBank is a unique dynamic reservation-based allocation management system that manages the utilization of computational resources in a multi-project environment. It is used in conjunction with a resource management system allowing an organization to guarantee greater fairness and enforce mission priorities by associating a charge with the use of computational resources and allocating resource credits which limit how much of the resources may be used at what time and by whom. It tracks resource utilization and allows for insightful planning.
---
qbox[¶](#qbox)
===
Homepage:
* <http://qboxcode.org/>

Spack package:
* [qbox/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/qbox/package.py)

Versions: 1.63.7, 1.63.5, 1.63.4, 1.63.2, 1.63.0, 1.62.3, 1.60.9, 1.60.4, 1.60.0, 1.58.0, 1.56.2, 1.54.4, 1.54.2, 1.52.3, 1.52.2, 1.50.4, 1.50.2, 1.50.1, 1.47.0, 1.45.3, 1.45.1, 1.45.0, 1.44.0
Build Dependencies: scalapack, mpi, blas, [xerces-c](#xerces-c), [fftw](#fftw)
Link Dependencies: scalapack, mpi, blas, [xerces-c](#xerces-c), [fftw](#fftw)
Description: Qbox is a C++/MPI scalable parallel implementation of first-principles molecular dynamics (FPMD) based on the plane-wave, pseudopotential formalism. Qbox is designed for operation on large parallel computers.

---
qhull[¶](#qhull)
===
Homepage:
* <http://www.qhull.org>

Spack package:
* [qhull/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/qhull/package.py)

Versions: 2015.2, 2012.1
Build Dependencies: [cmake](#cmake)
Description: Qhull computes the convex hull, Delaunay triangulation, Voronoi diagram, halfspace intersection about a point, furthest-site Delaunay triangulation, and furthest-site Voronoi diagram. The source code runs in 2-d, 3-d, 4-d, and higher dimensions. Qhull implements the Quickhull algorithm for computing the convex hull. It handles roundoff errors from floating point arithmetic. It computes volumes, surface areas, and approximations to the convex hull.
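To make the qhull description concrete, here is a minimal pure-Python sketch of the kind of 2-d convex-hull computation Qhull performs. For brevity it uses Andrew's monotone chain method rather than Qhull's actual Quickhull algorithm, and it is an illustration only, not part of the Qhull library:

```python
def cross(o, a, b):
    """Cross product of vectors OA and OB; positive means a left turn."""
    return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])

def convex_hull(points):
    """Return the convex hull of 2-d points in counter-clockwise order."""
    pts = sorted(set(points))
    if len(pts) <= 2:
        return pts
    lower, upper = [], []
    for p in pts:                       # build the lower hull left-to-right
        while len(lower) >= 2 and cross(lower[-2], lower[-1], p) <= 0:
            lower.pop()
        lower.append(p)
    for p in reversed(pts):             # build the upper hull right-to-left
        while len(upper) >= 2 and cross(upper[-2], upper[-1], p) <= 0:
            upper.pop()
        upper.append(p)
    return lower[:-1] + upper[:-1]      # endpoints are shared, drop repeats

# The interior point (0.5, 0.5) is correctly excluded from the hull.
hull = convex_hull([(0, 0), (1, 0), (1, 1), (0, 1), (0.5, 0.5)])
```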
---
qmcpack[¶](#qmcpack)
===
Homepage:
* <http://www.qmcpack.org/>

Spack package:
* [qmcpack/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/qmcpack/package.py)

Versions: develop, 3.5.0, 3.4.0, 3.3.0, 3.2.0, 3.1.1, 3.1.0
Build Dependencies: [cuda](#cuda), mpi, [libxml2](#libxml2), [hdf5](#hdf5), [cmake](#cmake), [boost](#boost), blas, [fftw](#fftw), lapack
Link Dependencies: [cuda](#cuda), mpi, [libxml2](#libxml2), [hdf5](#hdf5), [boost](#boost), blas, [fftw](#fftw), lapack
Run Dependencies: [quantum-espresso](#quantum-espresso), [py-numpy](#py-numpy), [py-matplotlib](#py-matplotlib)
Description: QMCPACK is a modern high-performance open-source Quantum Monte Carlo (QMC) simulation code.

---
qmd-progress[¶](#qmd-progress)
===
Homepage:
* <https://github.com/lanl/qmd-progress>

Spack package:
* [qmd-progress/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/qmd-progress/package.py)

Versions: develop, 1.1.0, 1.0.0
Build Dependencies: [cmake](#cmake), mpi, [bml](#bml), [metis](#metis)
Link Dependencies: mpi, [bml](#bml), [metis](#metis)
Description: PROGRESS: Parallel, Rapid O(N) and Graph-based Recursive Electronic Structure Solver. This library is focused on the development of general solvers that are commonly used in quantum chemistry packages.

---
qorts[¶](#qorts)
===
Homepage:
* <https://github.com/hartleys/QoRTs>

Spack package:
* [qorts/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/qorts/package.py)

Versions: 1.2.42
Build Dependencies: [r](#r)
Link Dependencies: [r](#r)
Run Dependencies: [r](#r), java
Description: The QoRTs software package is a fast, efficient, and portable multifunction toolkit designed to assist in the analysis, quality control, and data management of RNA-Seq and DNA-Seq datasets. Its primary function is to aid in the detection and identification of errors, biases, and artifacts produced by high-throughput sequencing technology.
---
qrupdate[¶](#qrupdate)
===
Homepage:
* <http://sourceforge.net/projects/qrupdate/>

Spack package:
* [qrupdate/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/qrupdate/package.py)

Versions: 1.1.2
Build Dependencies: blas, lapack
Link Dependencies: blas, lapack
Description: qrupdate is a Fortran library for fast updates of QR and Cholesky decompositions.

---
qt[¶](#qt)
===
Homepage:
* <http://qt.io>

Spack package:
* [qt/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/qt/package.py)

Versions: 5.11.2, 5.10.0, 5.9.1, 5.9.0, 5.8.0, 5.7.1, 5.7.0, 5.5.1, 5.4.2, 5.4.0, 5.3.2, 5.2.1, 4.8.6, 3.3.8b
Build Dependencies: [zlib](#zlib), [glib](#glib), [libpng](#libpng), [gtkplus](#gtkplus), [bison](#bison), [libxcb](#libxcb), [icu4c](#icu4c), [libtiff](#libtiff), gl, [dbus](#dbus), [gperf](#gperf), [libxml2](#libxml2), [libx11](#libx11), [openssl](#openssl), [fontconfig](#fontconfig), [libmng](#libmng), [libxext](#libxext), [flex](#flex), jpeg, [python](#python), [freetype](#freetype)
Link Dependencies: [zlib](#zlib), [glib](#glib), [libpng](#libpng), [gtkplus](#gtkplus), [libxml2](#libxml2), [libxcb](#libxcb), [icu4c](#icu4c), [libtiff](#libtiff), gl, [dbus](#dbus), [gperf](#gperf), [libx11](#libx11), [openssl](#openssl), [fontconfig](#fontconfig), [libmng](#libmng), [libxext](#libxext), jpeg, [freetype](#freetype)
Description: Qt is a comprehensive cross-platform C++ application framework.

---
qt-creator[¶](#qt-creator)
===
Homepage:
* <https://www.qt.io/ide/>

Spack package:
* [qt-creator/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/qt-creator/package.py)

Versions: 4.4.0, 4.3.1, 4.1.0
Build Dependencies: [qt](#qt), [sqlite](#sqlite)
Link Dependencies: [qt](#qt), [sqlite](#sqlite)
Description: The Qt Creator IDE.
---
qtgraph[¶](#qtgraph)
===
Homepage:
* <https://github.com/OpenSpeedShop/QtGraph>

Spack package:
* [qtgraph/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/qtgraph/package.py)

Versions: develop, 1.0.0.0
Build Dependencies: [graphviz](#graphviz), [qt](#qt)
Link Dependencies: [graphviz](#graphviz), [qt](#qt)
Description: The baseline library used in the CUDA-centric Open|SpeedShop Graphical User Interface (GUI) which allows Graphviz DOT formatted data to be imported into a Qt application by wrapping the Graphviz libcgraph and libgvc within the Qt Graphics View Framework.

---
qthreads[¶](#qthreads)
===
Homepage:
* <http://www.cs.sandia.gov/qthreads/>

Spack package:
* [qthreads/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/qthreads/package.py)

Versions: 1.12, 1.11, 1.10
Build Dependencies: [hwloc](#hwloc)
Link Dependencies: [hwloc](#hwloc)
Description: The qthreads API is designed to make using large numbers of threads convenient and easy, and to allow portable access to threading constructs used in massively parallel shared memory environments. The API maps well to both MTA-style threading and PIM-style threading, and we provide an implementation of this interface in both a standard SMP context as well as the SST context. The qthreads API provides access to full/empty-bit (FEB) semantics, where every word of memory can be marked either full or empty, and a thread can wait for any word to attain either state.
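The full/empty-bit (FEB) semantics described above can be sketched in a few lines of Python. This is an illustrative model only, not the qthreads C API: the method names `write_ef` ("write when empty, leave full") and `read_fe` ("read when full, leave empty") are chosen here to mirror the FEB idea, with a condition variable standing in for the per-word hardware bit:

```python
import threading

class FEBWord:
    """A single word with a full/empty bit, in the spirit of qthreads FEB
    semantics. Writers block until the word is empty; readers block until
    it is full. Names and structure are illustrative assumptions."""

    def __init__(self):
        self._cv = threading.Condition()
        self._full = False
        self._value = None

    def write_ef(self, value):
        # Wait for the word to be empty, then write and mark it full.
        with self._cv:
            self._cv.wait_for(lambda: not self._full)
            self._value, self._full = value, True
            self._cv.notify_all()

    def read_fe(self):
        # Wait for the word to be full, then read and mark it empty.
        with self._cv:
            self._cv.wait_for(lambda: self._full)
            self._full = False
            self._cv.notify_all()
            return self._value

# One producer fills the word; the consumer blocks until it is full.
word = FEBWord()
producer = threading.Thread(target=word.write_ef, args=(42,))
producer.start()
result = word.read_fe()
producer.join()
```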
---
quantum-espresso[¶](#quantum-espresso)
===
Homepage:
* <http://quantum-espresso.org>

Spack package:
* [quantum-espresso/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/quantum-espresso/package.py)

Versions: 6.2.0, 6.1.0, 5.4, 5.3
Build Dependencies: mpi, [elpa](#elpa), [hdf5](#hdf5), scalapack, blas, [fftw](#fftw), lapack
Link Dependencies: mpi, [elpa](#elpa), [hdf5](#hdf5), scalapack, blas, [fftw](#fftw), lapack
Description: Quantum-ESPRESSO is an integrated suite of Open-Source computer codes for electronic-structure calculations and materials modeling at the nanoscale. It is based on density-functional theory, plane waves, and pseudopotentials.

---
quinoa[¶](#quinoa)
===
Homepage:
* <http://quinoacomputing.org>

Spack package:
* [quinoa/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/quinoa/package.py)

Versions: develop
Build Dependencies: [h5part](#h5part), [mad-numdiff](#mad-numdiff), [boostmplcartesianproduct](#boostmplcartesianproduct), [netlib-lapack](#netlib-lapack), [cmake](#cmake), [pstreams](#pstreams), [random123](#random123), [boost](#boost), [charmpp](#charmpp), [pugixml](#pugixml), [tut](#tut), [hdf5](#hdf5), [pegtl](#pegtl), [hypre](#hypre), [trilinos](#trilinos)
Link Dependencies: [h5part](#h5part), [mad-numdiff](#mad-numdiff), [boostmplcartesianproduct](#boostmplcartesianproduct), [pstreams](#pstreams), [random123](#random123), [boost](#boost), [charmpp](#charmpp), [pugixml](#pugixml), [netlib-lapack](#netlib-lapack), [tut](#tut), [hdf5](#hdf5), [pegtl](#pegtl), [hypre](#hypre), [trilinos](#trilinos)
Description: Quinoa is a set of computational tools that enables research and numerical analysis in fluid dynamics. At this time it is a test-bed to experiment with various algorithms using fully asynchronous runtime systems.
---
qwt[¶](#qwt)
===
Homepage:
* <http://qwt.sourceforge.net/>

Spack package:
* [qwt/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/qwt/package.py)

Versions: 6.1.3, 5.2.2
Build Dependencies: [qt](#qt)
Link Dependencies: [qt](#qt)
Description: The Qwt library contains GUI Components and utility classes which are primarily useful for programs with a technical background. Beside a framework for 2D plots it provides scales, sliders, dials, compasses, thermometers, wheels and knobs to control or display values, arrays, or ranges of type double.

---
r[¶](#r)
===
Homepage:
* <https://www.r-project.org>

Spack package:
* [r/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/r/package.py)

Versions: 3.5.1, 3.5.0, 3.4.4, 3.4.3, 3.4.2, 3.4.1, 3.4.0, 3.3.3, 3.3.2, 3.3.1, 3.3.0, 3.2.5, 3.2.3, 3.2.2, 3.2.1, 3.2.0, 3.1.3, 3.1.2
Build Dependencies: [glib](#glib), [cairo](#cairo), jpeg, [bzip2](#bzip2), java, [curl](#curl), [icu4c](#icu4c), [readline](#readline), [tcl](#tcl), lapack, [zlib](#zlib), [libxt](#libxt), [libx11](#libx11), [tk](#tk), [libtiff](#libtiff), [pcre](#pcre), [pango](#pango), blas, [ncurses](#ncurses), [freetype](#freetype)
Link Dependencies: [glib](#glib), [cairo](#cairo), jpeg, [bzip2](#bzip2), java, [curl](#curl), [icu4c](#icu4c), [readline](#readline), [tcl](#tcl), lapack, [zlib](#zlib), [libxt](#libxt), [libx11](#libx11), [tk](#tk), [libtiff](#libtiff), [pcre](#pcre), [pango](#pango), blas, [ncurses](#ncurses), [freetype](#freetype)
Description: R is 'GNU S', a freely available language and environment for statistical computing and graphics which provides a wide variety of statistical and graphical techniques: linear and nonlinear modelling, statistical tests, time series analysis, classification, clustering, etc. Please consult the R project homepage for further information.
---
r-a4[¶](#r-a4)
===
Homepage:
* <https://www.bioconductor.org/packages/a4/>

Spack package:
* [r-a4/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/r-a4/package.py)

Versions: 1.24.0
Build Dependencies: [r](#r), [r-a4base](#r-a4base), [r-a4reporting](#r-a4reporting), [r-a4preproc](#r-a4preproc), [r-a4core](#r-a4core), [r-a4classif](#r-a4classif)
Link Dependencies: [r](#r)
Run Dependencies: [r](#r), [r-a4base](#r-a4base), [r-a4reporting](#r-a4reporting), [r-a4preproc](#r-a4preproc), [r-a4core](#r-a4core), [r-a4classif](#r-a4classif)
Description: Automated Affymetrix Array Analysis Umbrella Package.

---
r-a4base[¶](#r-a4base)
===
Homepage:
* <https://www.bioconductor.org/packages/a4Base/>

Spack package:
* [r-a4base/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/r-a4base/package.py)

Versions: 1.24.0
Build Dependencies: [r-a4core](#r-a4core), [r-multtest](#r-multtest), [r-limma](#r-limma), [r-mpm](#r-mpm), [r](#r), [r-glmnet](#r-glmnet), [r-a4preproc](#r-a4preproc), [r-biobase](#r-biobase), [r-gplots](#r-gplots), [r-annotationdbi](#r-annotationdbi), [r-annaffy](#r-annaffy), [r-genefilter](#r-genefilter)
Link Dependencies: [r](#r)
Run Dependencies: [r-a4core](#r-a4core), [r-multtest](#r-multtest), [r-limma](#r-limma), [r-mpm](#r-mpm), [r](#r), [r-glmnet](#r-glmnet), [r-a4preproc](#r-a4preproc), [r-biobase](#r-biobase), [r-gplots](#r-gplots), [r-annotationdbi](#r-annotationdbi), [r-annaffy](#r-annaffy), [r-genefilter](#r-genefilter)
Description: Automated Affymetrix Array Analysis.
---
r-a4classif[¶](#r-a4classif)
===
Homepage:
* <https://www.bioconductor.org/packages/a4Classif/>

Spack package:
* [r-a4classif/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/r-a4classif/package.py)

Versions: 1.24.0
Build Dependencies: [r](#r), [r-a4core](#r-a4core), [r-a4preproc](#r-a4preproc), [r-mlinterfaces](#r-mlinterfaces), [r-pamr](#r-pamr), [r-varselrf](#r-varselrf), [r-glmnet](#r-glmnet), [r-rocr](#r-rocr)
Link Dependencies: [r](#r)
Run Dependencies: [r](#r), [r-a4core](#r-a4core), [r-a4preproc](#r-a4preproc), [r-mlinterfaces](#r-mlinterfaces), [r-pamr](#r-pamr), [r-varselrf](#r-varselrf), [r-glmnet](#r-glmnet), [r-rocr](#r-rocr)
Description: Automated Affymetrix Array Analysis Classification Package.

---
r-a4core[¶](#r-a4core)
===
Homepage:
* <https://www.bioconductor.org/packages/a4Core/>

Spack package:
* [r-a4core/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/r-a4core/package.py)

Versions: 1.24.0
Build Dependencies: [r](#r), [r-biobase](#r-biobase), [r-glmnet](#r-glmnet)
Link Dependencies: [r](#r)
Run Dependencies: [r](#r), [r-biobase](#r-biobase), [r-glmnet](#r-glmnet)
Description: Automated Affymetrix Array Analysis Core Package.

---
r-a4preproc[¶](#r-a4preproc)
===
Homepage:
* <https://www.bioconductor.org/packages/a4Preproc/>

Spack package:
* [r-a4preproc/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/r-a4preproc/package.py)

Versions: 1.24.0
Build Dependencies: [r](#r), [r-annotationdbi](#r-annotationdbi)
Link Dependencies: [r](#r)
Run Dependencies: [r](#r), [r-annotationdbi](#r-annotationdbi)
Description: Automated Affymetrix Array Analysis Preprocessing Package.
---
r-a4reporting[¶](#r-a4reporting)
===
Homepage:
* <https://www.bioconductor.org/packages/a4Reporting>

Spack package:
* [r-a4reporting/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/r-a4reporting/package.py)

Versions: 1.24.0
Build Dependencies: [r](#r), [r-xtable](#r-xtable), [r-annaffy](#r-annaffy)
Link Dependencies: [r](#r)
Run Dependencies: [r](#r), [r-xtable](#r-xtable), [r-annaffy](#r-annaffy)
Description: Automated Affymetrix Array Analysis Reporting Package.

---
r-abadata[¶](#r-abadata)
===
Homepage:
* <https://bioconductor.org/packages/ABAData/>

Spack package:
* [r-abadata/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/r-abadata/package.py)

Versions: 1.6.0
Build Dependencies: [r](#r)
Link Dependencies: [r](#r)
Run Dependencies: [r](#r)
Description: Provides the data for the gene expression enrichment analysis conducted in the package 'ABAEnrichment'. The package includes three datasets which are derived from the Allen Brain Atlas: (1) Gene expression data from Human Brain (adults) averaged across donors, (2) Gene expression data from the Developing Human Brain pooled into five age categories and averaged across donors and (3) a developmental effect score based on the Developing Human Brain expression data. All datasets are restricted to protein coding genes.
---
r-abaenrichment[¶](#r-abaenrichment)
===
Homepage:
* <https://bioconductor.org/packages/ABAEnrichment/>

Spack package:
* [r-abaenrichment/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/r-abaenrichment/package.py)

Versions: 1.6.0
Build Dependencies: [r](#r), [r-gplots](#r-gplots), [r-rcpp](#r-rcpp), [r-abadata](#r-abadata), [r-gtools](#r-gtools)
Link Dependencies: [r](#r)
Run Dependencies: [r](#r), [r-gplots](#r-gplots), [r-rcpp](#r-rcpp), [r-abadata](#r-abadata), [r-gtools](#r-gtools)
Description: The package ABAEnrichment is designed to test for enrichment of user defined candidate genes in the set of expressed genes in different human brain regions. The core function 'aba_enrich' integrates the expression of the candidate gene set (averaged across donors) and the structural information of the brain using an ontology, both provided by the Allen Brain Atlas project. 'aba_enrich' interfaces the ontology enrichment software FUNC to perform the statistical analyses. Additional functions provided in this package like 'get_expression' and 'plot_expression' facilitate exploring the expression data. From version 1.3.5 onwards genomic regions can be provided as input, too; and from version 1.5.9 onwards the function 'get_annotated_genes' offers an easy way to obtain annotations of genes to enriched or user-defined brain regions.

---
r-abind[¶](#r-abind)
===
Homepage:
* <https://cran.r-project.org/>

Spack package:
* [r-abind/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/r-abind/package.py)

Versions: 1.4-3
Build Dependencies: [r](#r)
Link Dependencies: [r](#r)
Run Dependencies: [r](#r)
Description: Combine multidimensional arrays into a single array. This is a generalization of 'cbind' and 'rbind'. Works with vectors, matrices, and higher-dimensional arrays. Also provides functions 'adrop', 'asub', and 'afill' for manipulating, extracting and replacing data in arrays.
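The "generalization of 'cbind' and 'rbind'" that abind provides can be illustrated with a rough Python analogue for the 2-d case. This is a sketch of the idea only, not the R package's API, and the function name `abind` is reused here purely for illustration:

```python
def abind(a, b, axis=0):
    """Stack two matrices (lists of lists) along a chosen axis.

    axis=0 appends rows (like R's rbind); axis=1 appends columns
    row-by-row (like R's cbind). The real abind() additionally handles
    vectors and higher-dimensional arrays.
    """
    if axis == 0:
        return a + b
    if axis == 1:
        if len(a) != len(b):
            raise ValueError("row counts must match for column binding")
        return [row_a + row_b for row_a, row_b in zip(a, b)]
    raise ValueError("only axis 0 or 1 supported in this sketch")

rows = abind([[1, 2]], [[3, 4]], axis=0)   # rbind-style
cols = abind([[1, 2]], [[3, 4]], axis=1)   # cbind-style
```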
---
r-absseq[¶](#r-absseq)
===
Homepage:
* <https://www.bioconductor.org/packages/ABSSeq/>

Spack package:
* [r-absseq/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/r-absseq/package.py)

Versions: 1.22.8
Build Dependencies: [r](#r), [r-locfit](#r-locfit), [r-limma](#r-limma)
Link Dependencies: [r](#r)
Run Dependencies: [r](#r), [r-locfit](#r-locfit), [r-limma](#r-limma)
Description: Inferring differential expression genes by absolute counts difference between two groups, utilizing Negative binomial distribution and moderating fold-change according to heterogeneity of dispersion across expression level.

---
r-acde[¶](#r-acde)
===
Homepage:
* <https://www.bioconductor.org/packages/acde/>

Spack package:
* [r-acde/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/r-acde/package.py)

Versions: 1.6.0
Build Dependencies: [r](#r), [r-boot](#r-boot)
Link Dependencies: [r](#r)
Run Dependencies: [r](#r), [r-boot](#r-boot)
Description: This package provides a multivariate inferential analysis method for detecting differentially expressed genes in gene expression data. It uses artificial components, close to the data's principal components but with an exact interpretation in terms of differential genetic expression, to identify differentially expressed genes while controlling the false discovery rate (FDR). The methods on this package are described in the vignette or in the article 'Multivariate Method for Inferential Identification of Differentially Expressed Genes in Gene Expression Experiments' by <NAME>, <NAME> and <NAME> (2015, pending publication).
---
r-acepack[¶](#r-acepack)
===
Homepage:
* <https://CRAN.R-project.org/package=acepack>

Spack package:
* [r-acepack/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/r-acepack/package.py)

Versions: 1.4.1
Build Dependencies: [r](#r)
Link Dependencies: [r](#r)
Run Dependencies: [r](#r)
Description: ACE and AVAS for Selecting Multiple Regression Transformations.

---
r-acgh[¶](#r-acgh)
===
Homepage:
* <https://www.bioconductor.org/packages/aCGH/>

Spack package:
* [r-acgh/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/r-acgh/package.py)

Versions: 1.54.0
Build Dependencies: [r](#r), [r-biobase](#r-biobase), [r-survival](#r-survival), [r-multtest](#r-multtest), [r-cluster](#r-cluster)
Link Dependencies: [r](#r)
Run Dependencies: [r](#r), [r-biobase](#r-biobase), [r-survival](#r-survival), [r-multtest](#r-multtest), [r-cluster](#r-cluster)
Description: Functions for reading aCGH data from image analysis output files and clone information files, creation of aCGH S3 objects for storing these data. Basic methods for accessing/replacing, subsetting, printing and plotting aCGH objects.

---
r-acme[¶](#r-acme)
===
Homepage:
* <https://www.bioconductor.org/packages/ACME/>

Spack package:
* [r-acme/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/r-acme/package.py)

Versions: 2.32.0
Build Dependencies: [r](#r), [r-biobase](#r-biobase), [r-biocgenerics](#r-biocgenerics)
Link Dependencies: [r](#r)
Run Dependencies: [r](#r), [r-biobase](#r-biobase), [r-biocgenerics](#r-biocgenerics)
Description: ACME (Algorithms for Calculating Microarray Enrichment) is a set of tools for analysing tiling array ChIP/chip, DNAse hypersensitivity, or other experiments that result in regions of the genome showing "enrichment".
It does not rely on a specific array technology (although the array should be a "tiling" array), is very general (can be applied in experiments resulting in regions of enrichment), and is very insensitive to array noise or normalization methods. It is also very fast and can be applied on whole-genome tiling array experiments quite easily with enough memory.

---
r-ada[¶](#r-ada)
===
Homepage:
* <https://cran.r-project.org/web/packages/ada/index.html>

Spack package:
* [r-ada/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/r-ada/package.py)

Versions: 2.0-5
Build Dependencies: [r](#r), [r-rpart](#r-rpart)
Link Dependencies: [r](#r)
Run Dependencies: [r](#r), [r-rpart](#r-rpart)
Description: Performs discrete, real, and gentle boost under both exponential and logistic loss on a given data set.

---
r-adabag[¶](#r-adabag)
===
Homepage:
* <https://cran.r-project.org/package=adabag>

Spack package:
* [r-adabag/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/r-adabag/package.py)

Versions: 4.1
Build Dependencies: [r](#r), [r-mlbench](#r-mlbench), [r-rpart](#r-rpart), [r-caret](#r-caret)
Link Dependencies: [r](#r)
Run Dependencies: [r](#r), [r-mlbench](#r-mlbench), [r-rpart](#r-rpart), [r-caret](#r-caret)
Description: Applies Multiclass AdaBoost.M1, SAMME and Bagging.
---
r-ade4[¶](#r-ade4)
===
Homepage:
* <http://pbil.univ-lyon1.fr/ADE-4>

Spack package:
* [r-ade4/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/r-ade4/package.py)

Versions: 1.7-6
Build Dependencies: [r](#r)
Link Dependencies: [r](#r)
Run Dependencies: [r](#r)
Description: Analysis of Ecological Data: Exploratory and Euclidean Methods in Environmental Sciences.

---
r-adegenet[¶](#r-adegenet)
===
Homepage:
* <https://github.com/thibautjombart/adegenet/wiki>

Spack package:
* [r-adegenet/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/r-adegenet/package.py)

Versions: 2.0.1
Build Dependencies: [r](#r), [r-ade4](#r-ade4), [r-reshape2](#r-reshape2), [r-spdep](#r-spdep), [r-ggplot2](#r-ggplot2), [r-shiny](#r-shiny), [r-vegan](#r-vegan), [r-igraph](#r-igraph), [r-dplyr](#r-dplyr), [r-ape](#r-ape), [r-seqinr](#r-seqinr)
Link Dependencies: [r](#r)
Run Dependencies: [r](#r), [r-ade4](#r-ade4), [r-reshape2](#r-reshape2), [r-spdep](#r-spdep), [r-ggplot2](#r-ggplot2), [r-shiny](#r-shiny), [r-vegan](#r-vegan), [r-igraph](#r-igraph), [r-dplyr](#r-dplyr), [r-ape](#r-ape), [r-seqinr](#r-seqinr)
Description: Toolset for the exploration of genetic and genomic data. Adegenet provides formal (S4) classes for storing and handling various genetic data, including genetic markers with varying ploidy and hierarchical population structure ('genind' class), alleles counts by populations ('genpop'), and genome-wide SNP data ('genlight'). It also implements original multivariate methods (DAPC, sPCA), graphics, statistical tests, simulation tools, distance and similarity measures, and several spatial methods. A range of both empirical and simulated datasets is also provided to illustrate various methods.
---
r-adsplit[¶](#r-adsplit)
===
Homepage:
* <https://www.bioconductor.org/packages/adSplit/>

Spack package:
* [r-adsplit/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/r-adsplit/package.py)

Versions: 1.46.0
Build Dependencies: [r](#r), [r-multtest](#r-multtest), [r-go-db](#r-go-db), [r-kegg-db](#r-kegg-db), [r-cluster](#r-cluster), [r-annotationdbi](#r-annotationdbi), [r-biobase](#r-biobase)
Link Dependencies: [r](#r)
Run Dependencies: [r](#r), [r-multtest](#r-multtest), [r-go-db](#r-go-db), [r-kegg-db](#r-kegg-db), [r-cluster](#r-cluster), [r-annotationdbi](#r-annotationdbi), [r-biobase](#r-biobase)
Description: This package implements clustering of microarray gene expression profiles according to functional annotations. For each term genes are annotated to, splits into two subclasses are computed and a significance of the supporting gene set is determined.

---
r-aer[¶](#r-aer)
===
Homepage:
* <https://cran.r-project.org/web/packages/AER/index.html>

Spack package:
* [r-aer/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/r-aer/package.py)

Versions: 1.2-5
Build Dependencies: [r](#r), [r-sandwich](#r-sandwich), [r-lmtest](#r-lmtest), [r-car](#r-car), [r-survival](#r-survival), [r-zoo](#r-zoo), [r-formula](#r-formula)
Link Dependencies: [r](#r)
Run Dependencies: [r](#r), [r-sandwich](#r-sandwich), [r-lmtest](#r-lmtest), [r-car](#r-car), [r-survival](#r-survival), [r-zoo](#r-zoo), [r-formula](#r-formula)
Description: Functions, data sets, examples, demos, and vignettes for the book <NAME> and <NAME> (2008), Applied Econometrics with R, Springer-Verlag, New York. ISBN 978-0-387-77316-2.
---
r-affxparser[¶](#r-affxparser)
===
Homepage:
* <https://www.bioconductor.org/packages/affxparser/>

Spack package:
* [r-affxparser/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/r-affxparser/package.py)

Versions: 1.48.0
Build Dependencies: [r](#r)
Link Dependencies: [r](#r)
Run Dependencies: [r](#r)
Description: Package for parsing Affymetrix files (CDF, CEL, CHP, BPMAP, BAR). It provides methods for fast and memory efficient parsing of Affymetrix files using the Affymetrix' Fusion SDK. Both ASCII- and binary-based files are supported. Currently, there are methods for reading chip definition file (CDF) and a cell intensity file (CEL). These files can be read either in full or in part. For example, probe signals from a few probesets can be extracted very quickly from a set of CEL files into a convenient list structure.

---
r-affy[¶](#r-affy)
===
Homepage:
* <https://bioconductor.org/packages/affy/>

Spack package:
* [r-affy/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/r-affy/package.py)

Versions: 1.54.0
Build Dependencies: [r](#r), [r-biocinstaller](#r-biocinstaller), [r-zlibbioc](#r-zlibbioc), [r-affyio](#r-affyio), [r-biobase](#r-biobase), [r-preprocesscore](#r-preprocesscore), [r-biocgenerics](#r-biocgenerics)
Link Dependencies: [r](#r)
Run Dependencies: [r](#r), [r-biocinstaller](#r-biocinstaller), [r-zlibbioc](#r-zlibbioc), [r-affyio](#r-affyio), [r-biobase](#r-biobase), [r-preprocesscore](#r-preprocesscore), [r-biocgenerics](#r-biocgenerics)
Description: The package contains functions for exploratory oligonucleotide array analysis. The dependence on tkWidgets only concerns few convenience functions. 'affy' is fully functional without it.
---
r-affycomp[¶](#r-affycomp)
===
Homepage:
* <https://www.bioconductor.org/packages/affycomp/>

Spack package:
* [r-affycomp/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/r-affycomp/package.py)

Versions: 1.52.0
Build Dependencies: [r](#r), [r-biobase](#r-biobase)
Link Dependencies: [r](#r)
Run Dependencies: [r](#r), [r-biobase](#r-biobase)
Description: The package contains functions that can be used to compare expression measures for Affymetrix Oligonucleotide Arrays.

---
r-affycompatible[¶](#r-affycompatible)
===
Homepage:
* <https://www.bioconductor.org/packages/AffyCompatible/>

Spack package:
* [r-affycompatible/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/r-affycompatible/package.py)

Versions: 1.36.0
Build Dependencies: [r](#r), [r-xml](#r-xml), [r-rcurl](#r-rcurl), [r-biostrings](#r-biostrings)
Link Dependencies: [r](#r)
Run Dependencies: [r](#r), [r-xml](#r-xml), [r-rcurl](#r-rcurl), [r-biostrings](#r-biostrings)
Description: This package provides an interface to Affymetrix chip annotation and sample attribute files. The package allows an easy way for users to download and manage local data bases of Affymetrix NetAffx annotation files. The package also provides access to GeneChip Operating System (GCOS) and GeneChip Command Console (AGCC)-compatible sample annotation files.

---
r-affycontam[¶](#r-affycontam)
===
Homepage:
* <https://www.bioconductor.org/packages/affyContam/>

Spack package:
* [r-affycontam/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/r-affycontam/package.py)

Versions: 1.34.0
Build Dependencies: [r](#r), [r-biobase](#r-biobase), [r-affydata](#r-affydata), [r-affy](#r-affy)
Link Dependencies: [r](#r)
Run Dependencies: [r](#r), [r-biobase](#r-biobase), [r-affydata](#r-affydata), [r-affy](#r-affy)
Description: Structured corruption of cel file data to demonstrate QA effectiveness.
---
r-affycoretools[¶](#r-affycoretools)
===
Homepage:
* <https://www.bioconductor.org/packages/affycoretools/>

Spack package:
* [r-affycoretools/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/r-affycoretools/package.py)

Versions: 1.48.0
Build Dependencies: [r-gostats](#r-gostats), [r-oligoclasses](#r-oligoclasses), [r-ggplot2](#r-ggplot2), [r-limma](#r-limma), [r-s4vectors](#r-s4vectors), [r-xtable](#r-xtable), [r-rsqlite](#r-rsqlite), [r-affy](#r-affy), [r](#r), [r-hwriter](#r-hwriter), [r-gcrma](#r-gcrma), [r-annotationdbi](#r-annotationdbi), [r-reportingtools](#r-reportingtools), [r-gplots](#r-gplots), [r-biobase](#r-biobase), [r-biocgenerics](#r-biocgenerics), [r-lattice](#r-lattice), [r-edger](#r-edger)
Link Dependencies: [r](#r)
Run Dependencies: [r-gostats](#r-gostats), [r-oligoclasses](#r-oligoclasses), [r-ggplot2](#r-ggplot2), [r-limma](#r-limma), [r-s4vectors](#r-s4vectors), [r-xtable](#r-xtable), [r-rsqlite](#r-rsqlite), [r-affy](#r-affy), [r](#r), [r-hwriter](#r-hwriter), [r-gcrma](#r-gcrma), [r-annotationdbi](#r-annotationdbi), [r-reportingtools](#r-reportingtools), [r-gplots](#r-gplots), [r-biobase](#r-biobase), [r-biocgenerics](#r-biocgenerics), [r-lattice](#r-lattice), [r-edger](#r-edger)
Description: Various wrapper functions that have been written to streamline the more common analyses that a core Biostatistician might see.

---
r-affydata[¶](#r-affydata)
===
Homepage:
* <https://www.bioconductor.org/packages/affydata/>

Spack package:
* [r-affydata/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/r-affydata/package.py)

Versions: 1.24.0
Build Dependencies: [r](#r), [r-affy](#r-affy)
Link Dependencies: [r](#r)
Run Dependencies: [r](#r), [r-affy](#r-affy)
Description: Example datasets of a slightly large size. They represent 'real world examples', unlike the artificial examples included in the package affy.
---
r-affyexpress[¶](#r-affyexpress)
===
Homepage:
* <https://www.bioconductor.org/packages/AffyExpress/>
Spack package:
* [r-affyexpress/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/r-affyexpress/package.py)
Versions: 1.42.0
Build Dependencies: [r](#r), [r-affy](#r-affy), [r-limma](#r-limma)
Link Dependencies: [r](#r)
Run Dependencies: [r](#r), [r-affy](#r-affy), [r-limma](#r-limma)
Description: The purpose of this package is to provide a comprehensive and easy-to-use tool for quality assessment and for identifying differentially expressed genes in Affymetrix gene expression data.

---
r-affyilm[¶](#r-affyilm)
===
Homepage:
* <https://www.bioconductor.org/packages/affyILM/>
Spack package:
* [r-affyilm/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/r-affyilm/package.py)
Versions: 1.28.0
Build Dependencies: [r](#r), [r-affxparser](#r-affxparser), [r-gcrma](#r-gcrma), [r-affy](#r-affy), [r-biobase](#r-biobase)
Link Dependencies: [r](#r)
Run Dependencies: [r](#r), [r-affxparser](#r-affxparser), [r-gcrma](#r-gcrma), [r-affy](#r-affy), [r-biobase](#r-biobase)
Description: affyILM is a preprocessing tool which estimates gene expression levels for Affymetrix GeneChips. Input from physical chemistry is employed to first background-subtract intensities before calculating concentrations based on the Langmuir model.

---
r-affyio[¶](#r-affyio)
===
Homepage:
* <https://bioconductor.org/packages/affyio/>
Spack package:
* [r-affyio/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/r-affyio/package.py)
Versions: 1.46.0
Build Dependencies: [r](#r), [r-zlibbioc](#r-zlibbioc)
Link Dependencies: [r](#r)
Run Dependencies: [r](#r), [r-zlibbioc](#r-zlibbioc)
Description: Routines for parsing Affymetrix data files based upon file format information. The primary focus is on accessing the CEL and CDF file formats.
---
r-affypdnn[¶](#r-affypdnn)
===
Homepage:
* <https://www.bioconductor.org/packages/affypdnn/>
Spack package:
* [r-affypdnn/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/r-affypdnn/package.py)
Versions: 1.50.0
Build Dependencies: [r](#r), [r-affy](#r-affy)
Link Dependencies: [r](#r)
Run Dependencies: [r](#r), [r-affy](#r-affy)
Description: The package contains functions to perform the PDNN method described by <NAME> et al.

---
r-affyplm[¶](#r-affyplm)
===
Homepage:
* <https://www.bioconductor.org/packages/affyPLM/>
Spack package:
* [r-affyplm/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/r-affyplm/package.py)
Versions: 1.52.1
Build Dependencies: [r](#r), [r-gcrma](#r-gcrma), [r-zlibbioc](#r-zlibbioc), [r-biobase](#r-biobase), [r-preprocesscore](#r-preprocesscore), [r-affy](#r-affy), [r-biocgenerics](#r-biocgenerics)
Link Dependencies: [r](#r)
Run Dependencies: [r](#r), [r-gcrma](#r-gcrma), [r-zlibbioc](#r-zlibbioc), [r-biobase](#r-biobase), [r-preprocesscore](#r-preprocesscore), [r-affy](#r-affy), [r-biocgenerics](#r-biocgenerics)
Description: A package that extends and improves the functionality of the base affy package. Its routines make heavy use of compiled code for speed. The central focus is the implementation of methods for fitting probe-level models, tools using these models, and PLM-based quality assessment tools.
---
r-affyqcreport[¶](#r-affyqcreport)
===
Homepage:
* <https://www.bioconductor.org/packages/affyQCReport/>
Spack package:
* [r-affyqcreport/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/r-affyqcreport/package.py)
Versions: 1.54.0
Build Dependencies: [r](#r), [r-lattice](#r-lattice), [r-affyplm](#r-affyplm), [r-affy](#r-affy), [r-biobase](#r-biobase), [r-xtable](#r-xtable), [r-rcolorbrewer](#r-rcolorbrewer), [r-genefilter](#r-genefilter), [r-simpleaffy](#r-simpleaffy)
Link Dependencies: [r](#r)
Run Dependencies: [r](#r), [r-lattice](#r-lattice), [r-affyplm](#r-affyplm), [r-affy](#r-affy), [r-biobase](#r-biobase), [r-xtable](#r-xtable), [r-rcolorbrewer](#r-rcolorbrewer), [r-genefilter](#r-genefilter), [r-simpleaffy](#r-simpleaffy)
Description: This package creates a QC report for an AffyBatch object. The report is intended to allow the user to quickly assess the quality of a set of arrays in an AffyBatch object.

---
r-affyrnadegradation[¶](#r-affyrnadegradation)
===
Homepage:
* <https://www.bioconductor.org/packages/AffyRNADegradation/>
Spack package:
* [r-affyrnadegradation/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/r-affyrnadegradation/package.py)
Versions: 1.22.0
Build Dependencies: [r](#r), [r-affy](#r-affy)
Link Dependencies: [r](#r)
Run Dependencies: [r](#r), [r-affy](#r-affy)
Description: The package helps with the assessment and correction of RNA degradation effects in Affymetrix 3' expression arrays. The parameter d gives a robust and accurate measure of RNA integrity. The correction removes the probe positional bias, and thus improves comparability of samples that are affected by RNA degradation.
---
r-agdex[¶](#r-agdex)
===
Homepage:
* <http://bioconductor.org/packages/AGDEX/>
Spack package:
* [r-agdex/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/r-agdex/package.py)
Versions: 1.24.0
Build Dependencies: [r](#r), [r-biobase](#r-biobase), [r-gseabase](#r-gseabase)
Link Dependencies: [r](#r)
Run Dependencies: [r](#r), [r-biobase](#r-biobase), [r-gseabase](#r-gseabase)
Description: A tool to evaluate agreement of differential expression for cross-species genomics.

---
r-agilp[¶](#r-agilp)
===
Homepage:
* <http://bioconductor.org/packages/agilp/>
Spack package:
* [r-agilp/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/r-agilp/package.py)
Versions: 3.8.0
Build Dependencies: [r](#r)
Link Dependencies: [r](#r)
Run Dependencies: [r](#r)
Description: Agilent expression array processing package.

---
r-agimicrorna[¶](#r-agimicrorna)
===
Homepage:
* <https://www.bioconductor.org/packages/AgiMicroRna/>
Spack package:
* [r-agimicrorna/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/r-agimicrorna/package.py)
Versions: 2.26.0
Build Dependencies: [r](#r), [r-affycoretools](#r-affycoretools), [r-limma](#r-limma), [r-biobase](#r-biobase), [r-preprocesscore](#r-preprocesscore), [r-affy](#r-affy)
Link Dependencies: [r](#r)
Run Dependencies: [r](#r), [r-affycoretools](#r-affycoretools), [r-limma](#r-limma), [r-biobase](#r-biobase), [r-preprocesscore](#r-preprocesscore), [r-affy](#r-affy)
Description: Processing and analysis of Agilent microRNA data.
---
r-aims[¶](#r-aims)
===
Homepage:
* <http://bioconductor.org/packages/AIMS/>
Spack package:
* [r-aims/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/r-aims/package.py)
Versions: 1.8.0
Build Dependencies: [r](#r), [r-biobase](#r-biobase), [r-e1071](#r-e1071)
Link Dependencies: [r](#r)
Run Dependencies: [r](#r), [r-biobase](#r-biobase), [r-e1071](#r-e1071)
Description: This package contains the AIMS implementation. It contains the functions necessary to assign the five intrinsic molecular subtypes (Luminal A, Luminal B, Her2-enriched, Basal-like, Normal-like). Assignments can be made on individual samples as well as on a dataset of gene expression data.

---
r-aldex2[¶](#r-aldex2)
===
Homepage:
* <http://bioconductor.org/packages/ALDEx2/>
Spack package:
* [r-aldex2/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/r-aldex2/package.py)
Versions: 1.8.0
Build Dependencies: [r](#r), [r-iranges](#r-iranges), [r-genomicranges](#r-genomicranges), [r-biocparallel](#r-biocparallel), [r-s4vectors](#r-s4vectors), [r-summarizedexperiment](#r-summarizedexperiment)
Link Dependencies: [r](#r)
Run Dependencies: [r](#r), [r-iranges](#r-iranges), [r-genomicranges](#r-genomicranges), [r-biocparallel](#r-biocparallel), [r-s4vectors](#r-s4vectors), [r-summarizedexperiment](#r-summarizedexperiment)
Description: A differential abundance analysis for the comparison of two or more conditions; for example, single-organism and meta-RNA-seq high-throughput sequencing assays, or selected and unselected values from in-vitro sequence selections. Uses a Dirichlet-multinomial model to infer abundance from counts, which has been optimized for three or more experimental replicates. Infers sampling variation and calculates the expected false discovery rate given the biological and sampling variation, using the Wilcoxon rank test or Welch's t-test (aldex.ttest) or the glm and Kruskal-Wallis tests (aldex.glm).
Reports both p-values and FDR values calculated by the Benjamini-Hochberg correction.

---
r-allelicimbalance[¶](#r-allelicimbalance)
===
Homepage:
* <http://bioconductor.org/packages/AllelicImbalance/>
Spack package:
* [r-allelicimbalance/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/r-allelicimbalance/package.py)
Versions: 1.14.0
Build Dependencies: [r-iranges](#r-iranges), [r-biocgenerics](#r-biocgenerics), [r-latticeextra](#r-latticeextra), [r-seqinr](#r-seqinr), [r-s4vectors](#r-s4vectors), [r-bsgenome](#r-bsgenome), [r-variantannotation](#r-variantannotation), [r-rsamtools](#r-rsamtools), [r-summarizedexperiment](#r-summarizedexperiment), [r-genomicfeatures](#r-genomicfeatures), [r-biostrings](#r-biostrings), [r-genomicranges](#r-genomicranges), [r-gviz](#r-gviz), [r-annotationdbi](#r-annotationdbi), [r-genomicalignments](#r-genomicalignments), [r](#r), [r-lattice](#r-lattice), [r-gridextra](#r-gridextra), [r-genomeinfodb](#r-genomeinfodb), [r-nlme](#r-nlme)
Link Dependencies: [r](#r)
Run Dependencies: [r-iranges](#r-iranges), [r-biocgenerics](#r-biocgenerics), [r-latticeextra](#r-latticeextra), [r-seqinr](#r-seqinr), [r-s4vectors](#r-s4vectors), [r-bsgenome](#r-bsgenome), [r-variantannotation](#r-variantannotation), [r-rsamtools](#r-rsamtools), [r-summarizedexperiment](#r-summarizedexperiment), [r-genomicfeatures](#r-genomicfeatures), [r-biostrings](#r-biostrings), [r-genomicranges](#r-genomicranges), [r-gviz](#r-gviz), [r-annotationdbi](#r-annotationdbi), [r-genomicalignments](#r-genomicalignments), [r](#r), [r-lattice](#r-lattice), [r-gridextra](#r-gridextra), [r-genomeinfodb](#r-genomeinfodb), [r-nlme](#r-nlme)
Description: Provides a framework for allele-specific expression investigation using RNA-seq data.
---
r-alpine[¶](#r-alpine)
===
Homepage:
* <http://bioconductor.org/packages/alpine/>
Spack package:
* [r-alpine/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/r-alpine/package.py)
Versions: 1.2.0
Build Dependencies: [r](#r), [r-graph](#r-graph), [r-s4vectors](#r-s4vectors), [r-summarizedexperiment](#r-summarizedexperiment), [r-genomicfeatures](#r-genomicfeatures), [r-biostrings](#r-biostrings), [r-iranges](#r-iranges), [r-genomicranges](#r-genomicranges), [r-stringr](#r-stringr), [r-genomicalignments](#r-genomicalignments), [r-rbgl](#r-rbgl), [r-rsamtools](#r-rsamtools), [r-speedglm](#r-speedglm), [r-genomeinfodb](#r-genomeinfodb)
Link Dependencies: [r](#r)
Run Dependencies: [r](#r), [r-graph](#r-graph), [r-s4vectors](#r-s4vectors), [r-summarizedexperiment](#r-summarizedexperiment), [r-genomicfeatures](#r-genomicfeatures), [r-biostrings](#r-biostrings), [r-iranges](#r-iranges), [r-genomicranges](#r-genomicranges), [r-stringr](#r-stringr), [r-genomicalignments](#r-genomicalignments), [r-rbgl](#r-rbgl), [r-rsamtools](#r-rsamtools), [r-speedglm](#r-speedglm), [r-genomeinfodb](#r-genomeinfodb)
Description: Fragment sequence bias modeling and correction for RNA-seq transcript abundance estimation.

---
r-als[¶](#r-als)
===
Homepage:
* <https://cran.r-project.org/package=ALS>
Spack package:
* [r-als/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/r-als/package.py)
Versions: 0.0.6
Build Dependencies: [r](#r), [r-iso](#r-iso), [r-nnls](#r-nnls)
Link Dependencies: [r](#r)
Run Dependencies: [r](#r), [r-iso](#r-iso), [r-nnls](#r-nnls)
Description: Alternating least squares is often used to resolve components contributing to data with a bilinear structure; the basic technique may be extended to alternating constrained least squares. Commonly applied constraints include unimodality, non-negativity, and normalization of components.
Several data matrices may be decomposed simultaneously by assuming that one of the two matrices in the bilinear decomposition is shared between datasets.

---
r-alsace[¶](#r-alsace)
===
Homepage:
* <https://www.bioconductor.org/packages/alsace/>
Spack package:
* [r-alsace/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/r-alsace/package.py)
Versions: 1.12.0
Build Dependencies: [r](#r), [r-als](#r-als), [r-ptw](#r-ptw)
Link Dependencies: [r](#r)
Run Dependencies: [r](#r), [r-als](#r-als), [r-ptw](#r-ptw)
Description: Alternating Least Squares (or Multivariate Curve Resolution) for analytical chemical data, in particular hyphenated data where the first direction is a retention time axis, and the second a spectral axis. The package builds on the basic als function from the ALS package and adds functionality for high-throughput analysis, including definition of time windows, clustering of profiles, retention time correction, etc.

---
r-altcdfenvs[¶](#r-altcdfenvs)
===
Homepage:
* <https://www.bioconductor.org/packages/altcdfenvs/>
Spack package:
* [r-altcdfenvs/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/r-altcdfenvs/package.py)
Versions: 2.38.0
Build Dependencies: [r](#r), [r-hypergraph](#r-hypergraph), [r-s4vectors](#r-s4vectors), [r-biobase](#r-biobase), [r-biocgenerics](#r-biocgenerics), [r-makecdfenv](#r-makecdfenv), [r-affy](#r-affy), [r-biostrings](#r-biostrings)
Link Dependencies: [r](#r)
Run Dependencies: [r](#r), [r-hypergraph](#r-hypergraph), [r-s4vectors](#r-s4vectors), [r-biobase](#r-biobase), [r-biocgenerics](#r-biocgenerics), [r-makecdfenv](#r-makecdfenv), [r-affy](#r-affy), [r-biostrings](#r-biostrings)
Description: Convenience data structures and functions to handle cdfenvs.
---
r-amap[¶](#r-amap)
===
Homepage:
* <http://mulcyber.toulouse.inra.fr/projects/amap/>
Spack package:
* [r-amap/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/r-amap/package.py)
Versions: 0.8-16
Build Dependencies: [r](#r)
Link Dependencies: [r](#r)
Run Dependencies: [r](#r)
Description: Tools for Clustering and Principal Component Analysis (with robust methods and parallelized functions).

---
r-ampliqueso[¶](#r-ampliqueso)
===
Homepage:
* <https://www.bioconductor.org/packages/ampliQueso/>
Spack package:
* [r-ampliqueso/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/r-ampliqueso/package.py)
Versions: 1.14.0
Build Dependencies: [r-knitr](#r-knitr), [r-doparallel](#r-doparallel), [r-foreach](#r-foreach), [r-ggplot2](#r-ggplot2), [r-variantannotation](#r-variantannotation), [r-statmod](#r-statmod), [r](#r), [r-deseq](#r-deseq), [r-edger](#r-edger), [r-gplots](#r-gplots), [r-xtable](#r-xtable), [r-rgl](#r-rgl), [r-samr](#r-samr), [r-genefilter](#r-genefilter), r-rnaseqmap
Link Dependencies: [r](#r)
Run Dependencies: [r-knitr](#r-knitr), [r-doparallel](#r-doparallel), [r-foreach](#r-foreach), [r-ggplot2](#r-ggplot2), [r-variantannotation](#r-variantannotation), [r-statmod](#r-statmod), [r](#r), [r-deseq](#r-deseq), [r-edger](#r-edger), [r-gplots](#r-gplots), [r-xtable](#r-xtable), [r-rgl](#r-rgl), [r-samr](#r-samr), [r-genefilter](#r-genefilter), r-rnaseqmap
Description: The package provides tools and reports for the analysis of amplicon sequencing panels, such as AmpliSeq.
---
r-analysispageserver[¶](#r-analysispageserver)
===
Homepage:
* <https://www.bioconductor.org/packages/AnalysisPageServer/>
Spack package:
* [r-analysispageserver/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/r-analysispageserver/package.py)
Versions: 1.10.0
Build Dependencies: [r](#r), [r-rjson](#r-rjson), [r-graph](#r-graph), [r-log4r](#r-log4r), [r-biobase](#r-biobase)
Link Dependencies: [r](#r)
Run Dependencies: [r](#r), [r-rjson](#r-rjson), [r-graph](#r-graph), [r-log4r](#r-log4r), [r-biobase](#r-biobase)
Description: AnalysisPageServer is a modular system that enables sharing of customizable R analyses via the web.

---
r-anaquin[¶](#r-anaquin)
===
Homepage:
* <https://www.bioconductor.org/packages/Anaquin/>
Spack package:
* [r-anaquin/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/r-anaquin/package.py)
Versions: 1.2.0
Build Dependencies: [r](#r), [r-locfit](#r-locfit), [r-knitr](#r-knitr), [r-ggplot2](#r-ggplot2), [r-deseq2](#r-deseq2), [r-plyr](#r-plyr), [r-qvalue](#r-qvalue), [r-rocr](#r-rocr)
Link Dependencies: [r](#r)
Run Dependencies: [r](#r), [r-locfit](#r-locfit), [r-knitr](#r-knitr), [r-ggplot2](#r-ggplot2), [r-deseq2](#r-deseq2), [r-plyr](#r-plyr), [r-qvalue](#r-qvalue), [r-rocr](#r-rocr)
Description: The project is intended to support the use of sequins (synthetic sequencing spike-in controls) owned and made available by the Garvan Institute of Medical Research. The goal is to provide a standard open source library for quantitative analysis, modelling and visualization of spike-in controls.
---
r-aneufinder[¶](#r-aneufinder)
===
Homepage:
* <https://www.bioconductor.org/packages/AneuFinder/>
Spack package:
* [r-aneufinder/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/r-aneufinder/package.py)
Versions: 1.4.0
Build Dependencies: [r-iranges](#r-iranges), [r-biocgenerics](#r-biocgenerics), [r-ggrepel](#r-ggrepel), [r-doparallel](#r-doparallel), [r-ggplot2](#r-ggplot2), [r-mclust](#r-mclust), [r-s4vectors](#r-s4vectors), [r-reordercluster](#r-reordercluster), [r-aneufinderdata](#r-aneufinderdata), [r-biostrings](#r-biostrings), [r-genomicranges](#r-genomicranges), [r-cowplot](#r-cowplot), [r-reshape2](#r-reshape2), [r-foreach](#r-foreach), [r-genomeinfodb](#r-genomeinfodb), [r-ggdendro](#r-ggdendro), [r-genomicalignments](#r-genomicalignments), [r](#r), [r-rsamtools](#r-rsamtools), [r-dnacopy](#r-dnacopy), [r-bamsignals](#r-bamsignals)
Link Dependencies: [r](#r)
Run Dependencies: [r-iranges](#r-iranges), [r-biocgenerics](#r-biocgenerics), [r-ggrepel](#r-ggrepel), [r-doparallel](#r-doparallel), [r-ggplot2](#r-ggplot2), [r-mclust](#r-mclust), [r-s4vectors](#r-s4vectors), [r-reordercluster](#r-reordercluster), [r-aneufinderdata](#r-aneufinderdata), [r-biostrings](#r-biostrings), [r-genomicranges](#r-genomicranges), [r-cowplot](#r-cowplot), [r-reshape2](#r-reshape2), [r-foreach](#r-foreach), [r-genomeinfodb](#r-genomeinfodb), [r-ggdendro](#r-ggdendro), [r-genomicalignments](#r-genomicalignments), [r](#r), [r-rsamtools](#r-rsamtools), [r-dnacopy](#r-dnacopy), [r-bamsignals](#r-bamsignals)
Description: This package implements functions for CNV calling, plotting, export and analysis from whole-genome single cell sequencing data.
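Dependency lists like the one above define a directed graph that Spack must install in order: every build dependency before its dependent. A minimal sketch of that ordering idea, using a hand-picked subset of the edges listed in this index (the real Spack concretizer additionally resolves versions, variants, and compilers):

```python
# Topological install order over a simplified subset of the
# dependencies listed above; each package maps to the set of
# packages that must be installed before it.
from graphlib import TopologicalSorter

deps = {
    "r-aneufinder": {"r-aneufinderdata", "r-ggplot2", "r-rsamtools"},
    "r-aneufinderdata": {"r"},
    "r-ggplot2": {"r"},
    "r-rsamtools": {"r"},
    "r": set(),
}

# static_order() yields prerequisites before their dependents.
order = list(TopologicalSorter(deps).static_order())
print(order)
```

Any valid order starts with `r` and ends with `r-aneufinder`, since `r` has no prerequisites and nothing depends on `r-aneufinder`.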
---
r-aneufinderdata[¶](#r-aneufinderdata)
===
Homepage:
* <https://www.bioconductor.org/packages/AneuFinderData/>
Spack package:
* [r-aneufinderdata/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/r-aneufinderdata/package.py)
Versions: 1.4.0
Build Dependencies: [r](#r)
Link Dependencies: [r](#r)
Run Dependencies: [r](#r)
Description: Whole-genome single cell sequencing data for demonstration purposes in the AneuFinder package.

---
r-animation[¶](#r-animation)
===
Homepage:
* <https://cran.r-project.org/package=animation>
Spack package:
* [r-animation/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/r-animation/package.py)
Versions: 2.5
Build Dependencies: [r](#r)
Link Dependencies: [r](#r)
Run Dependencies: [r](#r)
Description: Provides functions for animations in statistics, covering topics in probability theory, mathematical statistics, multivariate statistics, non-parametric statistics, sampling survey, linear models, time series, computational statistics, data mining and machine learning. These functions may be helpful in teaching statistics and data analysis.

---
r-annaffy[¶](#r-annaffy)
===
Homepage:
* <https://www.bioconductor.org/packages/annaffy/>
Spack package:
* [r-annaffy/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/r-annaffy/package.py)
Versions: 1.48.0
Build Dependencies: [r](#r), [r-biobase](#r-biobase), [r-go-db](#r-go-db), [r-kegg-db](#r-kegg-db)
Link Dependencies: [r](#r)
Run Dependencies: [r](#r), [r-biobase](#r-biobase), [r-go-db](#r-go-db), [r-kegg-db](#r-kegg-db)
Description: Functions for handling data from Bioconductor Affymetrix annotation data packages. Produces compact HTML and text reports including experimental data and URL links to many online databases. Allows searching biological metadata using various criteria.
---
r-annotate[¶](#r-annotate)
===
Homepage:
* <https://www.bioconductor.org/packages/annotate/>
Spack package:
* [r-annotate/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/r-annotate/package.py)
Versions: 1.58.0, 1.54.0
Build Dependencies: [r](#r), [r-annotationdbi](#r-annotationdbi), [r-xml](#r-xml), [r-rcurl](#r-rcurl), [r-xtable](#r-xtable)
Link Dependencies: [r](#r)
Run Dependencies: [r](#r), [r-annotationdbi](#r-annotationdbi), [r-xml](#r-xml), [r-rcurl](#r-rcurl), [r-xtable](#r-xtable)
Description: Using R environments for annotation.

---
r-annotationdbi[¶](#r-annotationdbi)
===
Homepage:
* <https://www.bioconductor.org/packages/AnnotationDbi/>
Spack package:
* [r-annotationdbi/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/r-annotationdbi/package.py)
Versions: 1.42.1, 1.38.2
Build Dependencies: [r](#r), [r-iranges](#r-iranges), [r-dbi](#r-dbi), [r-rsqlite](#r-rsqlite), [r-biobase](#r-biobase), [r-biocgenerics](#r-biocgenerics)
Link Dependencies: [r](#r)
Run Dependencies: [r](#r), [r-iranges](#r-iranges), [r-dbi](#r-dbi), [r-rsqlite](#r-rsqlite), [r-biobase](#r-biobase), [r-biocgenerics](#r-biocgenerics)
Description: Provides user interface and database connection code for annotation data packages using SQLite data storage.

---
r-annotationfilter[¶](#r-annotationfilter)
===
Homepage:
* <https://bioconductor.org/packages/AnnotationFilter/>
Spack package:
* [r-annotationfilter/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/r-annotationfilter/package.py)
Versions: 1.0.0
Build Dependencies: [r](#r), [r-genomicranges](#r-genomicranges), [r-lazyeval](#r-lazyeval)
Link Dependencies: [r](#r)
Run Dependencies: [r](#r), [r-genomicranges](#r-genomicranges), [r-lazyeval](#r-lazyeval)
Description: This package provides classes and other infrastructure to implement filters for manipulating Bioconductor annotation resources.
The filters will be used by ensembldb, Organism.dplyr, and other packages.

---
r-annotationforge[¶](#r-annotationforge)
===
Homepage:
* <https://www.bioconductor.org/packages/AnnotationForge/>
Spack package:
* [r-annotationforge/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/r-annotationforge/package.py)
Versions: 1.18.2
Build Dependencies: [r](#r), [r-biocgenerics](#r-biocgenerics), [r-dbi](#r-dbi), [r-biobase](#r-biobase), [r-s4vectors](#r-s4vectors), [r-annotationdbi](#r-annotationdbi), [r-rsqlite](#r-rsqlite), [r-rcurl](#r-rcurl), [r-xml](#r-xml)
Link Dependencies: [r](#r)
Run Dependencies: [r](#r), [r-biocgenerics](#r-biocgenerics), [r-dbi](#r-dbi), [r-biobase](#r-biobase), [r-s4vectors](#r-s4vectors), [r-annotationdbi](#r-annotationdbi), [r-rsqlite](#r-rsqlite), [r-rcurl](#r-rcurl), [r-xml](#r-xml)
Description: Provides code for generating annotation packages and their databases. Packages produced are intended to be used with AnnotationDbi.

---
r-annotationhub[¶](#r-annotationhub)
===
Homepage:
* <https://bioconductor.org/packages/AnnotationHub/>
Spack package:
* [r-annotationhub/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/r-annotationhub/package.py)
Versions: 2.8.3
Build Dependencies: [r](#r), [r-s4vectors](#r-s4vectors), [r-biocinstaller](#r-biocinstaller), [r-rsqlite](#r-rsqlite), [r-httr](#r-httr), [r-annotationdbi](#r-annotationdbi), [r-interactivedisplaybase](#r-interactivedisplaybase), [r-yaml](#r-yaml)
Link Dependencies: [r](#r)
Run Dependencies: [r](#r), [r-s4vectors](#r-s4vectors), [r-biocinstaller](#r-biocinstaller), [r-rsqlite](#r-rsqlite), [r-httr](#r-httr), [r-annotationdbi](#r-annotationdbi), [r-interactivedisplaybase](#r-interactivedisplaybase), [r-yaml](#r-yaml)
Description: This package provides a client for the Bioconductor AnnotationHub web resource.
The AnnotationHub web resource provides a central location where genomic files (e.g., VCF, BED, WIG) and other resources from standard locations (e.g., UCSC, Ensembl) can be discovered. The resource includes metadata about each resource, e.g., a textual description, tags, and date of modification. The client creates and manages a local cache of files retrieved by the user, helping with quick and reproducible access.

---
r-ape[¶](#r-ape)
===
Homepage:
* <http://ape-package.ird.fr/>
Spack package:
* [r-ape/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/r-ape/package.py)
Versions: 5.0, 4.1
Build Dependencies: [r](#r), [r-nlme](#r-nlme), [r-lattice](#r-lattice), [r-rcpp](#r-rcpp)
Link Dependencies: [r](#r)
Run Dependencies: [r](#r), [r-nlme](#r-nlme), [r-lattice](#r-lattice), [r-rcpp](#r-rcpp)
Description: Functions for reading, writing, plotting, and manipulating phylogenetic trees, analyses of comparative data in a phylogenetic framework, ancestral character analyses, analyses of diversification and macroevolution, computing distances from DNA sequences, reading and writing nucleotide sequences as well as importing from BioConductor, and several tools such as Mantel's test, generalized skyline plots, graphical exploration of phylogenetic data (alex, trex, kronoviz), estimation of absolute evolutionary rates and clock-like trees using mean path lengths and penalized likelihood, dating trees with non-contemporaneous sequences, translating DNA into AA sequences, and assessing sequence alignments. Phylogeny estimation can be done with the NJ, BIONJ, ME, MVR, SDM, and triangle methods, and several methods handling incomplete distance matrices (NJ*, BIONJ*, MVR*, and the corresponding triangle method). Some functions call external applications (PhyML, Clustal, T-Coffee, Muscle) whose results are returned into R.
---
r-argparse[¶](#r-argparse)
===
Homepage:
* <https://github.com/trevorld/argparse>
Spack package:
* [r-argparse/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/r-argparse/package.py)
Versions: 1.1.1
Build Dependencies: [r](#r), [r-jsonlite](#r-jsonlite), [r-getopt](#r-getopt), [r-proto](#r-proto), [r-findpython](#r-findpython)
Link Dependencies: [r](#r)
Run Dependencies: [r](#r), [r-jsonlite](#r-jsonlite), [r-getopt](#r-getopt), [r-proto](#r-proto), [r-findpython](#r-findpython)
Description: A command line parser to be used with Rscript to write "#!" shebang scripts that gracefully accept positional and optional arguments and automatically generate usage.

---
r-assertthat[¶](#r-assertthat)
===
Homepage:
* <https://cran.r-project.org/package=assertthat>
Spack package:
* [r-assertthat/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/r-assertthat/package.py)
Versions: 0.2.0, 0.1
Build Dependencies: [r](#r)
Link Dependencies: [r](#r)
Run Dependencies: [r](#r)
Description: assertthat is an extension to stopifnot() that makes it easy to declare the pre- and post-conditions that your code should satisfy, while also producing friendly error messages so that your users know what they've done wrong.

---
r-backports[¶](#r-backports)
===
Homepage:
* <https://cran.r-project.org/package=backports>
Spack package:
* [r-backports/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/r-backports/package.py)
Versions: 1.1.1, 1.1.0
Build Dependencies: [r](#r)
Link Dependencies: [r](#r)
Run Dependencies: [r](#r)
Description: Implementations of functions which have been introduced in R since version 3.0.0. The backports are conditionally exported, which results in R resolving the function names to the version shipped with R (if available) and using the implemented backports as fallback.
This way package developers can make use of the new functions without worrying about the minimum required R version.

---
r-bamsignals[¶](#r-bamsignals)
===
Homepage:
* <https://www.bioconductor.org/packages/bamsignals/>
Spack package:
* [r-bamsignals/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/r-bamsignals/package.py)
Versions: 1.8.0
Build Dependencies: [r](#r), [r-iranges](#r-iranges), [r-genomicranges](#r-genomicranges), [r-zlibbioc](#r-zlibbioc), [r-rhtslib](#r-rhtslib), [r-rcpp](#r-rcpp), [r-biocgenerics](#r-biocgenerics)
Link Dependencies: [r](#r)
Run Dependencies: [r](#r), [r-iranges](#r-iranges), [r-genomicranges](#r-genomicranges), [r-zlibbioc](#r-zlibbioc), [r-rhtslib](#r-rhtslib), [r-rcpp](#r-rcpp), [r-biocgenerics](#r-biocgenerics)
Description: This package makes it possible to efficiently obtain count vectors from indexed BAM files. It counts the number of reads in given genomic ranges and computes read profiles and coverage profiles. It also handles paired-end data.

---
r-base64[¶](#r-base64)
===
Homepage:
* <https://cran.r-project.org/package=base64>
Spack package:
* [r-base64/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/r-base64/package.py)
Versions: 2.0
Build Dependencies: [r](#r), [r-openssl](#r-openssl)
Link Dependencies: [r](#r)
Run Dependencies: [r](#r), [r-openssl](#r-openssl)
Description: Compatibility wrapper to replace the orphaned package by Romain Francois. New applications should use the 'openssl' or 'base64enc' package instead.

---
r-base64enc[¶](#r-base64enc)
===
Homepage:
* <http://www.rforge.net/base64enc>
Spack package:
* [r-base64enc/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/r-base64enc/package.py)
Versions: 0.1-3
Build Dependencies: [r](#r)
Link Dependencies: [r](#r)
Run Dependencies: [r](#r)
Description: This package provides tools for handling base64 encoding.
It is more flexible than the orphaned base64 package.

---
r-bbmisc[¶](#r-bbmisc)
===
Homepage:
* <https://github.com/berndbischl/BBmisc>
Spack package:
* [r-bbmisc/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/r-bbmisc/package.py)
Versions: 1.11
Build Dependencies: [r](#r), [r-checkmate](#r-checkmate)
Link Dependencies: [r](#r)
Run Dependencies: [r](#r), [r-checkmate](#r-checkmate)
Description: Miscellaneous helper functions for and from B. Bischl and some other guys, mainly for package development.

---
r-beanplot[¶](#r-beanplot)
===
Homepage:
* <https://cran.r-project.org/web/packages/beanplot/index.html>
Spack package:
* [r-beanplot/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/r-beanplot/package.py)
Versions: 1.2
Build Dependencies: [r](#r)
Link Dependencies: [r](#r)
Run Dependencies: [r](#r)
Description: Plots univariate comparison graphs, an alternative to boxplot/stripchart/violin plot.

---
r-bh[¶](#r-bh)
===
Homepage:
* <https://cran.r-project.org/web/packages/BH/index.html>
Spack package:
* [r-bh/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/r-bh/package.py)
Versions: 1.65.0-1, 1.60.0-2
Build Dependencies: [r](#r)
Link Dependencies: [r](#r)
Run Dependencies: [r](#r)
Description: Boost provides free peer-reviewed portable C++ source libraries. A large part of Boost is provided as C++ template code which is resolved entirely at compile time without linking. This package aims to provide the most useful subset of Boost libraries for template use among CRAN packages. By placing these libraries in this package, we offer a more efficient distribution system for CRAN, as replication of this code in the sources of other packages is avoided.
As of release 1.60.0-2, the following Boost libraries are included: 'algorithm' 'any' 'bimap' 'bind' 'circular_buffer' 'concept' 'config' 'container' 'date'_'time' 'detail' 'dynamic_bitset' 'exception' 'filesystem' 'flyweight' 'foreach' 'functional' 'fusion' 'geometry' 'graph' 'heap' 'icl' 'integer' 'interprocess' 'intrusive' 'io' 'iostreams' 'iterator' 'math' 'move' 'mpl' 'multiprcecision' 'numeric' 'pending' 'phoenix' 'preprocessor' 'random' 'range' 'smart_ptr' 'spirit' 'tuple' 'type_trains' 'typeof' 'unordered' 'utility' 'uuid'. --- r-biasedurn[¶](#r-biasedurn) === Homepage: * <http://www.agner.org/random/Spack package: * [r-biasedurn/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/r-biasedurn/package.py) Versions: 1.07 Build Dependencies: [r](#r) Link Dependencies: [r](#r) Run Dependencies: [r](#r) Description: Statistical models of biased sampling in the form of univariate and multivariate noncentral hypergeometric distributions, including Wallenius' noncentral hypergeometric distribution and Fisher's noncentral hypergeometric distribution (also called extended hypergeometric distribution). See vignette("UrnTheory") for explanation of these distributions. --- r-bindr[¶](#r-bindr) === Homepage: * <https://github.com/krlmlr/bindrSpack package: * [r-bindr/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/r-bindr/package.py) Versions: 0.1 Build Dependencies: [r](#r) Link Dependencies: [r](#r) Run Dependencies: [r](#r) Description: Provides a simple interface for creating active bindings where the bound function accepts additional arguments. 
--- r-bindrcpp[¶](#r-bindrcpp) === Homepage: * <https://github.com/krlmlr/bindrcppSpack package: * [r-bindrcpp/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/r-bindrcpp/package.py) Versions: 0.2.2, 0.2 Build Dependencies: [r](#r), [r-bindr](#r-bindr), [r-rcpp](#r-rcpp), [r-plogr](#r-plogr) Link Dependencies: [r](#r) Run Dependencies: [r](#r), [r-bindr](#r-bindr), [r-rcpp](#r-rcpp), [r-plogr](#r-plogr) Description: Provides an easy way to fill an environment with active bindings that call a C++ function. --- r-biobase[¶](#r-biobase) === Homepage: * <https://www.bioconductor.org/packages/Biobase/Spack package: * [r-biobase/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/r-biobase/package.py) Versions: 2.40.0, 2.38.0, 2.36.2 Build Dependencies: [r](#r), [r-biocgenerics](#r-biocgenerics) Link Dependencies: [r](#r) Run Dependencies: [r](#r), [r-biocgenerics](#r-biocgenerics) Description: Functions that are needed by many other packages or which replace R functions. --- r-biocgenerics[¶](#r-biocgenerics) === Homepage: * <https://www.bioconductor.org/packages/BiocGenerics/Spack package: * [r-biocgenerics/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/r-biocgenerics/package.py) Versions: 0.26.0, 0.24.0, 0.22.1 Build Dependencies: [r](#r) Link Dependencies: [r](#r) Run Dependencies: [r](#r) Description: S4 generic functions needed by many Bioconductor packages. --- r-biocinstaller[¶](#r-biocinstaller) === Homepage: * <https://bioconductor.org/packages/BiocInstaller/Spack package: * [r-biocinstaller/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/r-biocinstaller/package.py) Versions: 1.26.1 Build Dependencies: [r](#r) Link Dependencies: [r](#r) Run Dependencies: [r](#r) Description: This package is used to install and update Bioconductor, CRAN, and (some) github packages. 
--- r-biocparallel[¶](#r-biocparallel) === Homepage: * <https://bioconductor.org/packages/BiocParallel/Spack package: * [r-biocparallel/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/r-biocparallel/package.py) Versions: 1.14.2, 1.10.1 Build Dependencies: [r](#r), [r-futile-logger](#r-futile-logger), [r-bh](#r-bh), [r-snow](#r-snow) Link Dependencies: [r](#r), [r-bh](#r-bh) Run Dependencies: [r](#r), [r-futile-logger](#r-futile-logger), [r-bh](#r-bh), [r-snow](#r-snow) Description: This package provides modified versions and novel implementation of functions for parallel evaluation, tailored to use with Bioconductor objects. --- r-biocstyle[¶](#r-biocstyle) === Homepage: * <https://www.bioconductor.org/packages/BiocStyle/Spack package: * [r-biocstyle/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/r-biocstyle/package.py) Versions: 2.4.1 Build Dependencies: [r](#r), [r-bookdown](#r-bookdown), [r-knitr](#r-knitr), [r-yaml](#r-yaml), [r-rmarkdown](#r-rmarkdown) Link Dependencies: [r](#r) Run Dependencies: [r](#r), [r-bookdown](#r-bookdown), [r-knitr](#r-knitr), [r-yaml](#r-yaml), [r-rmarkdown](#r-rmarkdown) Description: Provides standard formatting styles for Bioconductor PDF and HTML documents. Package vignettes illustrate use and functionality. --- r-biom-utils[¶](#r-biom-utils) === Homepage: * <https://github.com/braithwaite/BIOM.utils/Spack package: * [r-biom-utils/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/r-biom-utils/package.py) Versions: 0.9 Build Dependencies: [r](#r) Link Dependencies: [r](#r) Run Dependencies: [r](#r) Description: Provides utilities to facilitate import, export and computation with the BIOM (Biological Observation Matrix) format (http://biom-format.org). 
--- r-biomart[¶](#r-biomart) === Homepage: * <https://bioconductor.org/packages/biomaRt/Spack package: * [r-biomart/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/r-biomart/package.py) Versions: 2.36.1, 2.34.2, 2.32.1 Build Dependencies: [r](#r), [r-progress](#r-progress), [r-httr](#r-httr), [r-annotationdbi](#r-annotationdbi), [r-rcurl](#r-rcurl), [r-stringr](#r-stringr), [r-xml](#r-xml) Link Dependencies: [r](#r) Run Dependencies: [r](#r), [r-progress](#r-progress), [r-httr](#r-httr), [r-annotationdbi](#r-annotationdbi), [r-rcurl](#r-rcurl), [r-stringr](#r-stringr), [r-xml](#r-xml) Description: In recent years a wealth of biological data has become available in public data repositories. Easy access to these valuable data resources and firm integration with data analysis is needed for comprehensive bioinformatics data analysis. biomaRt provides an interface to a growing collection of databases implementing the BioMart software suite (http://www.biomart.org). The package enables retrieval of large amounts of data in a uniform way without the need to know the underlying database schemas or write complex SQL queries. Examples of BioMart databases are Ensembl, COSMIC, Uniprot, HGNC, Gramene, Wormbase and dbSNP mapped to Ensembl. These major databases give biomaRt users direct access to a diverse set of data and enable a wide range of powerful online queries from gene annotation to database mining. 
--- r-biomformat[¶](#r-biomformat) === Homepage: * <https://www.bioconductor.org/packages/biomformat/Spack package: * [r-biomformat/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/r-biomformat/package.py) Versions: 1.4.0 Build Dependencies: [r](#r), [r-plyr](#r-plyr), [r-rhdf5](#r-rhdf5), [r-matrix](#r-matrix), [r-jsonlite](#r-jsonlite) Link Dependencies: [r](#r) Run Dependencies: [r](#r), [r-plyr](#r-plyr), [r-rhdf5](#r-rhdf5), [r-matrix](#r-matrix), [r-jsonlite](#r-jsonlite) Description: This is an R package for interfacing with the BIOM format. This package includes basic tools for reading biom-format files, accessing and subsetting data tables from a biom object (which is more complex than a single table), as well as limited support for writing a biom-object back to a biom-format file. The design of this API is intended to match the python API and other tools included with the biom-format project, but with a decidedly "R flavor" that should be familiar to R users. This includes S4 classes and methods, as well as extensions of common core functions/methods. --- r-biostrings[¶](#r-biostrings) === Homepage: * <https://bioconductor.org/packages/Biostrings/Spack package: * [r-biostrings/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/r-biostrings/package.py) Versions: 2.48.0, 2.44.2 Build Dependencies: [r](#r), [r-s4vectors](#r-s4vectors), [r-xvector](#r-xvector), [r-iranges](#r-iranges), [r-biocgenerics](#r-biocgenerics) Link Dependencies: [r](#r) Run Dependencies: [r](#r), [r-s4vectors](#r-s4vectors), [r-xvector](#r-xvector), [r-iranges](#r-iranges), [r-biocgenerics](#r-biocgenerics) Description: Memory efficient string containers, string matching algorithms, and other utilities, for fast manipulation of large biological sequences or sets of sequences. 
--- r-biovizbase[¶](#r-biovizbase) === Homepage: * <http://bioconductor.org/packages/biovizBase/Spack package: * [r-biovizbase/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/r-biovizbase/package.py) Versions: 1.24.0 Build Dependencies: [r-hmisc](#r-hmisc), [r-ensembldb](#r-ensembldb), [r-scales](#r-scales), [r-s4vectors](#r-s4vectors), [r-dichromat](#r-dichromat), [r-variantannotation](#r-variantannotation), [r-rcolorbrewer](#r-rcolorbrewer), [r-summarizedexperiment](#r-summarizedexperiment), [r-genomicfeatures](#r-genomicfeatures), [r-biostrings](#r-biostrings), [r-iranges](#r-iranges), [r-genomicranges](#r-genomicranges), [r-annotationfilter](#r-annotationfilter), [r-annotationdbi](#r-annotationdbi), [r-genomicalignments](#r-genomicalignments), [r](#r), [r-rsamtools](#r-rsamtools), [r-genomeinfodb](#r-genomeinfodb), [r-biocgenerics](#r-biocgenerics) Link Dependencies: [r](#r) Run Dependencies: [r-hmisc](#r-hmisc), [r-ensembldb](#r-ensembldb), [r-scales](#r-scales), [r-s4vectors](#r-s4vectors), [r-dichromat](#r-dichromat), [r-variantannotation](#r-variantannotation), [r-rcolorbrewer](#r-rcolorbrewer), [r-summarizedexperiment](#r-summarizedexperiment), [r-genomicfeatures](#r-genomicfeatures), [r-biostrings](#r-biostrings), [r-iranges](#r-iranges), [r-genomicranges](#r-genomicranges), [r-annotationfilter](#r-annotationfilter), [r-annotationdbi](#r-annotationdbi), [r-genomicalignments](#r-genomicalignments), [r](#r), [r-rsamtools](#r-rsamtools), [r-genomeinfodb](#r-genomeinfodb), [r-biocgenerics](#r-biocgenerics) Description: The biovizBase package is designed to provide a set of utilities, color schemes and conventions for genomic data. It serves as the base for various high-level packages for biological data visualization. This saves development effort and encourages consistency. 
--- r-bit[¶](#r-bit) === Homepage: * <https://cran.rstudio.com/web/packages/bit/index.htmlSpack package: * [r-bit/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/r-bit/package.py) Versions: 1.1-12 Build Dependencies: [r](#r) Link Dependencies: [r](#r) Run Dependencies: [r](#r) Description: A class for vectors of 1-bit booleans. --- r-bit64[¶](#r-bit64) === Homepage: * <https://cran.rstudio.com/web/packages/bit64/index.htmlSpack package: * [r-bit64/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/r-bit64/package.py) Versions: 0.9-7 Build Dependencies: [r](#r), [r-bit](#r-bit) Link Dependencies: [r](#r) Run Dependencies: [r](#r), [r-bit](#r-bit) Description: Package 'bit64' provides serializable S3 atomic 64bit (signed) integers. These are useful for handling database keys and exact counting in +-2^63. WARNING: do not use them as replacement for 32bit integers, integer64 are not supported for subscripting by R-core and they have different semantics when combined with double, e.g. integer64 + double => integer64. Class integer64 can be used in vectors, matrices, arrays and data.frames. Methods are available for coercion from and to logicals, integers, doubles, characters and factors as well as many elementwise and summary functions. Many fast algorithmic operations such as 'match' and 'order' support inter- active data exploration and manipulation and optionally leverage caching. --- r-bitops[¶](#r-bitops) === Homepage: * <https://cran.r-project.org/package=bitopsSpack package: * [r-bitops/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/r-bitops/package.py) Versions: 1.0-6 Build Dependencies: [r](#r) Link Dependencies: [r](#r) Run Dependencies: [r](#r) Description: Functions for bitwise operations on integer vectors. 
--- r-blob[¶](#r-blob) === Homepage: * <https://cran.rstudio.com/web/packages/blob/index.htmlSpack package: * [r-blob/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/r-blob/package.py) Versions: 1.1.0 Build Dependencies: [r](#r), [r-tibble](#r-tibble) Link Dependencies: [r](#r) Run Dependencies: [r](#r), [r-tibble](#r-tibble) Description: R's raw vector is useful for storing a single binary object. What if you want to put a vector of them in a data frame? The blob package provides the blob object, a list of raw vectors, suitable for use as a column in data frame. --- r-bookdown[¶](#r-bookdown) === Homepage: * <https://cran.r-project.org/package=bookdownSpack package: * [r-bookdown/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/r-bookdown/package.py) Versions: 0.5 Build Dependencies: [r](#r), [r-knitr](#r-knitr), [r-yaml](#r-yaml), [r-htmltools](#r-htmltools), [r-rmarkdown](#r-rmarkdown) Link Dependencies: [r](#r) Run Dependencies: [r](#r), [r-knitr](#r-knitr), [r-yaml](#r-yaml), [r-htmltools](#r-htmltools), [r-rmarkdown](#r-rmarkdown) Description: Output formats and utilities for authoring books and technical documents with R Markdown. --- r-boot[¶](#r-boot) === Homepage: * <https://cran.r-project.org/package=bootSpack package: * [r-boot/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/r-boot/package.py) Versions: 1.3-20, 1.3-18 Build Dependencies: [r](#r) Link Dependencies: [r](#r) Run Dependencies: [r](#r) Description: Functions and datasets for bootstrapping from the book "Bootstrap Methods and Their Application" by <NAME> and <NAME> (1997, CUP), originally written by <NAME> S. 
--- r-brew[¶](#r-brew) === Homepage: * <https://cran.r-project.org/package=brewSpack package: * [r-brew/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/r-brew/package.py) Versions: 1.0-6 Build Dependencies: [r](#r) Link Dependencies: [r](#r) Run Dependencies: [r](#r) Description: brew implements a templating framework for mixing text and R code for report generation. brew template syntax is similar to PHP, Ruby's erb module, Java Server Pages, and Python's psp module. --- r-broom[¶](#r-broom) === Homepage: * <http://github.com/tidyverse/broomSpack package: * [r-broom/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/r-broom/package.py) Versions: 0.4.2 Build Dependencies: [r](#r), [r-tidyr](#r-tidyr), [r-reshape2](#r-reshape2), [r-psych](#r-psych), [r-dplyr](#r-dplyr), [r-plyr](#r-plyr), [r-stringr](#r-stringr), [r-nlme](#r-nlme) Link Dependencies: [r](#r) Run Dependencies: [r](#r), [r-tidyr](#r-tidyr), [r-reshape2](#r-reshape2), [r-psych](#r-psych), [r-dplyr](#r-dplyr), [r-plyr](#r-plyr), [r-stringr](#r-stringr), [r-nlme](#r-nlme) Description: Convert statistical analysis objects from R into tidy data frames, so that they can more easily be combined, reshaped and otherwise processed with tools like 'dplyr', 'tidyr' and 'ggplot2'. The package provides three S3 generics: tidy, which summarizes a model's statistical findings such as coefficients of a regression; augment, which adds columns to the original data such as predictions, residuals and cluster assignments; and glance, which provides a one-row summary of model-level statistics. 
--- r-bsgenome[¶](#r-bsgenome) === Homepage: * <https://www.bioconductor.org/packages/BSgenome/Spack package: * [r-bsgenome/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/r-bsgenome/package.py) Versions: 1.46.0, 1.44.2 Build Dependencies: [r](#r), [r-iranges](#r-iranges), [r-rtracklayer](#r-rtracklayer), [r-genomicranges](#r-genomicranges), [r-s4vectors](#r-s4vectors), [r-xvector](#r-xvector), [r-biocgenerics](#r-biocgenerics), [r-rsamtools](#r-rsamtools), [r-genomeinfodb](#r-genomeinfodb), [r-biostrings](#r-biostrings) Link Dependencies: [r](#r) Run Dependencies: [r](#r), [r-iranges](#r-iranges), [r-rtracklayer](#r-rtracklayer), [r-genomicranges](#r-genomicranges), [r-s4vectors](#r-s4vectors), [r-xvector](#r-xvector), [r-biocgenerics](#r-biocgenerics), [r-rsamtools](#r-rsamtools), [r-genomeinfodb](#r-genomeinfodb), [r-biostrings](#r-biostrings) Description: Infrastructure shared by all the Biostrings-based genome data packages. --- r-bumphunter[¶](#r-bumphunter) === Homepage: * <http://bioconductor.org/packages/bumphunter/Spack package: * [r-bumphunter/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/r-bumphunter/package.py) Versions: 1.16.0 Build Dependencies: [r](#r), [r-iterators](#r-iterators), [r-limma](#r-limma), [r-s4vectors](#r-s4vectors), [r-genomicfeatures](#r-genomicfeatures), [r-iranges](#r-iranges), [r-locfit](#r-locfit), [r-genomicranges](#r-genomicranges), [r-foreach](#r-foreach), [r-dorng](#r-dorng), [r-matrixstats](#r-matrixstats), [r-annotationdbi](#r-annotationdbi), [r-genomeinfodb](#r-genomeinfodb), [r-biocgenerics](#r-biocgenerics) Link Dependencies: [r](#r) Run Dependencies: [r](#r), [r-iterators](#r-iterators), [r-limma](#r-limma), [r-s4vectors](#r-s4vectors), [r-genomicfeatures](#r-genomicfeatures), [r-iranges](#r-iranges), [r-locfit](#r-locfit), [r-genomicranges](#r-genomicranges), [r-foreach](#r-foreach), [r-dorng](#r-dorng), [r-matrixstats](#r-matrixstats), 
[r-annotationdbi](#r-annotationdbi), [r-genomeinfodb](#r-genomeinfodb), [r-biocgenerics](#r-biocgenerics) Description: Tools for finding bumps in genomic data --- r-c50[¶](#r-c50) === Homepage: * <https://cran.r-project.org/package=C50Spack package: * [r-c50/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/r-c50/package.py) Versions: 0.1.0-24 Build Dependencies: [r](#r), [r-partykit](#r-partykit) Link Dependencies: [r](#r) Run Dependencies: [r](#r), [r-partykit](#r-partykit) Description: C5.0 decision trees and rule-based models for pattern recognition. --- r-callr[¶](#r-callr) === Homepage: * <https://github.com/MangoTheCat/callrSpack package: * [r-callr/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/r-callr/package.py) Versions: 3.0.0, 1.0.0 Build Dependencies: [r](#r) Link Dependencies: [r](#r) Run Dependencies: [r](#r) Description: It is sometimes useful to perform a computation in a separate R process, without affecting the current R process at all. This packages does exactly that. --- r-car[¶](#r-car) === Homepage: * <https://r-forge.r-project.org/projects/car/Spack package: * [r-car/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/r-car/package.py) Versions: 2.1-4, 2.1-2 Build Dependencies: [r](#r), [r-quantreg](#r-quantreg), [r-pbkrtest](#r-pbkrtest), [r-mgcv](#r-mgcv), [r-nnet](#r-nnet), [r-mass](#r-mass) Link Dependencies: [r](#r) Run Dependencies: [r](#r), [r-quantreg](#r-quantreg), [r-pbkrtest](#r-pbkrtest), [r-mgcv](#r-mgcv), [r-nnet](#r-nnet), [r-mass](#r-mass) Description: Functions and Datasets to Accompany <NAME> and <NAME>, An R Companion to Applied Regression, Second Edition, Sage, 2011. 
--- r-caret[¶](#r-caret) === Homepage: * <https://github.com/topepo/caret/Spack package: * [r-caret/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/r-caret/package.py) Versions: 6.0-73, 6.0-70 Build Dependencies: [r](#r), [r-modelmetrics](#r-modelmetrics), [r-car](#r-car), [r-reshape2](#r-reshape2), [r-ggplot2](#r-ggplot2), [r-plyr](#r-plyr), [r-lattice](#r-lattice), [r-foreach](#r-foreach), [r-nlme](#r-nlme) Link Dependencies: [r](#r) Run Dependencies: [r](#r), [r-modelmetrics](#r-modelmetrics), [r-car](#r-car), [r-reshape2](#r-reshape2), [r-ggplot2](#r-ggplot2), [r-plyr](#r-plyr), [r-lattice](#r-lattice), [r-foreach](#r-foreach), [r-nlme](#r-nlme) Description: Misc functions for training and plotting classification and regression models. --- r-category[¶](#r-category) === Homepage: * <https://www.bioconductor.org/packages/Category/Spack package: * [r-category/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/r-category/package.py) Versions: 2.42.1 Build Dependencies: [r](#r), [r-graph](#r-graph), [r-rbgl](#r-rbgl), [r-dbi](#r-dbi), [r-annotationdbi](#r-annotationdbi), [r-annotate](#r-annotate), [r-matrix](#r-matrix), [r-biobase](#r-biobase), [r-gseabase](#r-gseabase), [r-genefilter](#r-genefilter), [r-biocgenerics](#r-biocgenerics) Link Dependencies: [r](#r) Run Dependencies: [r](#r), [r-graph](#r-graph), [r-rbgl](#r-rbgl), [r-dbi](#r-dbi), [r-annotationdbi](#r-annotationdbi), [r-annotate](#r-annotate), [r-matrix](#r-matrix), [r-biobase](#r-biobase), [r-gseabase](#r-gseabase), [r-genefilter](#r-genefilter), [r-biocgenerics](#r-biocgenerics) Description: A collection of tools for performing category analysis. 
--- r-catools[¶](#r-catools) === Homepage: * <https://cran.r-project.org/package=caToolsSpack package: * [r-catools/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/r-catools/package.py) Versions: 1.17.1 Build Dependencies: [r](#r), [r-bitops](#r-bitops) Link Dependencies: [r](#r) Run Dependencies: [r](#r), [r-bitops](#r-bitops) Description: Contains several basic utility functions including: moving (rolling, running) window statistic functions, read/write for GIF and ENVI binary files, fast calculation of AUC, LogitBoost classifier, base64 encoder/decoder, round-off-error-free sum and cumsum, etc. --- r-cdcfluview[¶](#r-cdcfluview) === Homepage: * <https://cran.r-project.org/package=cdcfluviewSpack package: * [r-cdcfluview/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/r-cdcfluview/package.py) Versions: 0.7.0 Build Dependencies: [r](#r), [r-httr](#r-httr), [r-purrr](#r-purrr), [r-jsonlite](#r-jsonlite), [r-dplyr](#r-dplyr), [r-units](#r-units), [r-mmwrweek](#r-mmwrweek), [r-sf](#r-sf), [r-xml2](#r-xml2), [r-readr](#r-readr) Link Dependencies: [r](#r) Run Dependencies: [r](#r), [r-httr](#r-httr), [r-purrr](#r-purrr), [r-jsonlite](#r-jsonlite), [r-dplyr](#r-dplyr), [r-units](#r-units), [r-mmwrweek](#r-mmwrweek), [r-sf](#r-sf), [r-xml2](#r-xml2), [r-readr](#r-readr) Description: The 'U.S.' Centers for Disease Control ('CDC') maintains a portal <http://gis.cdc.gov/grasp/fluview/fluportaldashboard.html> for accessing state, regional and national influenza statistics as well as Mortality Surveillance Data. The web interface makes it difficult and time- consuming to select and retrieve influenza data. Tools are provided to access the data provided by the portal's underlying 'API'. 
--- r-cellranger[¶](#r-cellranger) === Homepage: * <https://cran.r-project.org/package=cellrangerSpack package: * [r-cellranger/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/r-cellranger/package.py) Versions: 1.1.0 Build Dependencies: [r](#r), [r-tibble](#r-tibble), [r-rematch](#r-rematch) Link Dependencies: [r](#r) Run Dependencies: [r](#r), [r-tibble](#r-tibble), [r-rematch](#r-rematch) Description: Helper functions to work with spreadsheets and the "A1:D10" style of cell range specification. --- r-checkmate[¶](#r-checkmate) === Homepage: * <https://cran.r-project.org/package=checkmateSpack package: * [r-checkmate/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/r-checkmate/package.py) Versions: 1.8.4 Build Dependencies: [r](#r), [r-backports](#r-backports) Link Dependencies: [r](#r) Run Dependencies: [r](#r), [r-backports](#r-backports) Description: Tests and assertions to perform frequent argument checks. A substantial part of the package was written in C to minimize any worries about execution time overhead. --- r-checkpoint[¶](#r-checkpoint) === Homepage: * <https://cran.r-project.org/package=checkpointSpack package: * [r-checkpoint/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/r-checkpoint/package.py) Versions: 0.3.18, 0.3.15 Build Dependencies: [r](#r) Link Dependencies: [r](#r) Run Dependencies: [r](#r) Description: The goal of checkpoint is to solve the problem of package reproducibility in R. Specifically, checkpoint allows you to install packages as they existed on CRAN on a specific snapshot date as if you had a CRAN time machine. 
--- r-chemometrics[¶](#r-chemometrics) === Homepage: * <https://cran.r-project.org/web/packages/chemometrics/index.htmlSpack package: * [r-chemometrics/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/r-chemometrics/package.py) Versions: 1.4.2, 1.4.1, 1.3.9, 1.3.8, 1.3.7 Build Dependencies: [r](#r), [r-lars](#r-lars), [r-mclust](#r-mclust), [r-robustbase](#r-robustbase), [r-pls](#r-pls), [r-som](#r-som), [r-pcapp](#r-pcapp), [r-rpart](#r-rpart), [r-e1071](#r-e1071) Link Dependencies: [r](#r) Run Dependencies: [r](#r), [r-lars](#r-lars), [r-mclust](#r-mclust), [r-robustbase](#r-robustbase), [r-pls](#r-pls), [r-som](#r-som), [r-pcapp](#r-pcapp), [r-rpart](#r-rpart), [r-e1071](#r-e1071) Description: R companion to the book "Introduction to Multivariate Statistical Analysis in Chemometrics" written by <NAME> and <NAME> (2009). --- r-chron[¶](#r-chron) === Homepage: * <https://cran.r-project.org/package=chronSpack package: * [r-chron/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/r-chron/package.py) Versions: 2.3-47 Build Dependencies: [r](#r) Link Dependencies: [r](#r) Run Dependencies: [r](#r) Description: Chronological objects which can handle dates and times. --- r-circlize[¶](#r-circlize) === Homepage: * <https://cran.r-project.org/package=circlizeSpack package: * [r-circlize/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/r-circlize/package.py) Versions: 0.4.1, 0.4.0 Build Dependencies: [r](#r), [r-shape](#r-shape), [r-colorspace](#r-colorspace), [r-globaloptions](#r-globaloptions) Link Dependencies: [r](#r) Run Dependencies: [r](#r), [r-shape](#r-shape), [r-colorspace](#r-colorspace), [r-globaloptions](#r-globaloptions) Description: Circular layout is an efficient way for the visualization of huge amounts of information. 
Here this package provides an implementation of circular layout generation in R as well as an enhancement of available software. The flexibility of the package is based on the usage of low- level graphics functions such that self-defined high-level graphics can be easily implemented by users for specific purposes. Together with the seamless connection between the powerful computational and visual environment in R, it gives users more convenience and freedom to design figures for better understanding complex patterns behind multiple dimensional data. --- r-class[¶](#r-class) === Homepage: * <http://www.stats.ox.ac.uk/pub/MASS4/Spack package: * [r-class/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/r-class/package.py) Versions: 7.3-14 Build Dependencies: [r](#r), [r-mass](#r-mass) Link Dependencies: [r](#r) Run Dependencies: [r](#r), [r-mass](#r-mass) Description: Various functions for classification, including k-nearest neighbour, Learning Vector Quantization and Self-Organizing Maps. --- r-classint[¶](#r-classint) === Homepage: * <https://cran.r-project.org/package=classIntSpack package: * [r-classint/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/r-classint/package.py) Versions: 0.1-24 Build Dependencies: [r](#r), [r-e1071](#r-e1071), [r-class](#r-class) Link Dependencies: [r](#r) Run Dependencies: [r](#r), [r-e1071](#r-e1071), [r-class](#r-class) Description: Selected commonly used methods for choosing univariate class intervals for mapping or other graphics purposes. 
--- r-cli[¶](#r-cli) === Homepage: * <https://github.com/r-lib/cli#readmeSpack package: * [r-cli/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/r-cli/package.py) Versions: 1.0.0 Build Dependencies: [r](#r), [r-assertthat](#r-assertthat), [r-crayon](#r-crayon) Link Dependencies: [r](#r) Run Dependencies: [r](#r), [r-assertthat](#r-assertthat), [r-crayon](#r-crayon) Description: A suite of tools designed to build attractive command line interfaces ('CLIs'). Includes tools for drawing rules, boxes, trees, and 'Unicode' symbols with 'ASCII' alternatives. --- r-clipr[¶](#r-clipr) === Homepage: * <https://github.com/mdlincoln/cliprSpack package: * [r-clipr/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/r-clipr/package.py) Versions: 0.4.0 Build Dependencies: [r](#r), [r-rstudioapi](#r-rstudioapi), [r-testthat](#r-testthat), [xclip](#xclip) Link Dependencies: [r](#r), [xclip](#xclip) Run Dependencies: [r](#r), [r-rstudioapi](#r-rstudioapi), [r-testthat](#r-testthat) Description: Simple utility functions to read from and write to the Windows, OS X, and X11 clipboards. --- r-cluster[¶](#r-cluster) === Homepage: * <https://cran.r-project.org/web/packages/cluster/index.htmlSpack package: * [r-cluster/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/r-cluster/package.py) Versions: 2.0.7-1, 2.0.5, 2.0.4 Build Dependencies: [r](#r) Link Dependencies: [r](#r) Run Dependencies: [r](#r) Description: Methods for Cluster analysis. Much extended the original from Peter Rousseeuw, <NAME> and <NAME>, based on Kaufman and Rousseeuw (1990) "Finding Groups in Data". 
--- r-clusterprofiler[¶](#r-clusterprofiler) === Homepage: * <https://www.bioconductor.org/packages/clusterProfiler/Spack package: * [r-clusterprofiler/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/r-clusterprofiler/package.py) Versions: 3.4.4 Build Dependencies: [r](#r), [r-tidyr](#r-tidyr), [r-go-db](#r-go-db), [r-gosemsim](#r-gosemsim), [r-ggplot2](#r-ggplot2), [r-plyr](#r-plyr), [r-magrittr](#r-magrittr), [r-rvcheck](#r-rvcheck), [r-annotationdbi](#r-annotationdbi), [r-qvalue](#r-qvalue), [r-dose](#r-dose) Link Dependencies: [r](#r) Run Dependencies: [r](#r), [r-tidyr](#r-tidyr), [r-go-db](#r-go-db), [r-gosemsim](#r-gosemsim), [r-ggplot2](#r-ggplot2), [r-plyr](#r-plyr), [r-magrittr](#r-magrittr), [r-rvcheck](#r-rvcheck), [r-annotationdbi](#r-annotationdbi), [r-qvalue](#r-qvalue), [r-dose](#r-dose) Description: This package implements methods to analyze and visualize functional profiles (GO and KEGG) of gene and gene clusters. --- r-cner[¶](#r-cner) === Homepage: * <https://bioconductor.org/packages/CNEr/Spack package: * [r-cner/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/r-cner/package.py) Versions: 1.14.0 Build Dependencies: [r-readr](#r-readr), [r-iranges](#r-iranges), [r-reshape2](#r-reshape2), [r-dbi](#r-dbi), [r-ggplot2](#r-ggplot2), [r-powerlaw](#r-powerlaw), [r-rtracklayer](#r-rtracklayer), [r-s4vectors](#r-s4vectors), [r-rsqlite](#r-rsqlite), [r-xvector](#r-xvector), [r-keggrest](#r-keggrest), [r-biostrings](#r-biostrings), [r-genomicranges](#r-genomicranges), [r-go-db](#r-go-db), [r-utils](#r-utils), [r-annotate](#r-annotate), [r-genomicalignments](#r-genomicalignments), [r](#r), [r-genomeinfodb](#r-genomeinfodb), [r-biocgenerics](#r-biocgenerics) Link Dependencies: [r](#r) Run Dependencies: [r-readr](#r-readr), [r-iranges](#r-iranges), [r-reshape2](#r-reshape2), [r-dbi](#r-dbi), [r-ggplot2](#r-ggplot2), [r-powerlaw](#r-powerlaw), [r-rtracklayer](#r-rtracklayer), 
[r-s4vectors](#r-s4vectors), [r-rsqlite](#r-rsqlite), [r-xvector](#r-xvector), [r-keggrest](#r-keggrest), [r-biostrings](#r-biostrings), [r-genomicranges](#r-genomicranges), [r-go-db](#r-go-db), [r-utils](#r-utils), [r-annotate](#r-annotate), [r-genomicalignments](#r-genomicalignments), [r](#r), [r-genomeinfodb](#r-genomeinfodb), [r-biocgenerics](#r-biocgenerics) Description: "Large-scale identification and advanced visualization of sets of conserved noncoding elements. --- r-coda[¶](#r-coda) === Homepage: * <https://cran.r-project.org/web/packages/coda/index.htmlSpack package: * [r-coda/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/r-coda/package.py) Versions: 0.19-1 Build Dependencies: [r](#r), [r-lattice](#r-lattice) Link Dependencies: [r](#r) Run Dependencies: [r](#r), [r-lattice](#r-lattice) Description: Provides functions for summarizing and plotting the output from Markov Chain Monte Carlo (MCMC) simulations, as well as diagnostic tests of convergence to the equilibrium distribution of the Markov chain. --- r-codetools[¶](#r-codetools) === Homepage: * <https://cran.r-project.org/web/packages/codetools/index.htmlSpack package: * [r-codetools/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/r-codetools/package.py) Versions: 0.2-15, 0.2-14 Build Dependencies: [r](#r) Link Dependencies: [r](#r) Run Dependencies: [r](#r) Description: Code analysis tools for R. 
--- r-coin[¶](#r-coin) === Homepage: * <https://cran.r-project.org/package=coinSpack package: * [r-coin/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/r-coin/package.py) Versions: 1.1-3 Build Dependencies: [r](#r), [r-modeltools](#r-modeltools), [r-survival](#r-survival), [r-multcomp](#r-multcomp), [r-mvtnorm](#r-mvtnorm) Link Dependencies: [r](#r) Run Dependencies: [r](#r), [r-modeltools](#r-modeltools), [r-survival](#r-survival), [r-multcomp](#r-multcomp), [r-mvtnorm](#r-mvtnorm) Description: Conditional inference procedures for the general independence problem including two-sample, K-sample (non-parametric ANOVA), correlation, censored, ordered and multivariate problems. --- r-colorspace[¶](#r-colorspace) === Homepage: * <https://cran.r-project.org/web/packages/colorspace/index.htmlSpack package: * [r-colorspace/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/r-colorspace/package.py) Versions: 1.3-2, 1.2-6 Build Dependencies: [r](#r) Link Dependencies: [r](#r) Run Dependencies: [r](#r) Description: Carries out mapping between assorted color spaces including RGB, HSV, HLS, CIEXYZ, CIELUV, HCL (polar CIELUV), CIELAB and polar CIELAB. Qualitative, sequential, and diverging color palettes based on HCL colors are provided. 
--- r-complexheatmap[¶](#r-complexheatmap) === Homepage: * <https://bioconductor.org/packages/ComplexHeatmap/Spack package: * [r-complexheatmap/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/r-complexheatmap/package.py) Versions: 1.14.0 Build Dependencies: [r](#r), [r-dendextend](#r-dendextend), [r-circlize](#r-circlize), [r-globaloptions](#r-globaloptions), [r-colorspace](#r-colorspace), [r-rcolorbrewer](#r-rcolorbrewer), [r-getoptlong](#r-getoptlong) Link Dependencies: [r](#r) Run Dependencies: [r](#r), [r-dendextend](#r-dendextend), [r-circlize](#r-circlize), [r-globaloptions](#r-globaloptions), [r-colorspace](#r-colorspace), [r-rcolorbrewer](#r-rcolorbrewer), [r-getoptlong](#r-getoptlong) Description: Complex heatmaps are efficient to visualize associations between different sources of data sets and reveal potential structures. Here the ComplexHeatmap package provides a highly flexible way to arrange multiple heatmaps and supports self-defined annotation graphics. --- r-corpcor[¶](#r-corpcor) === Homepage: * <https://cran.r-project.org/package=corpcorSpack package: * [r-corpcor/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/r-corpcor/package.py) Versions: 1.6.9 Build Dependencies: [r](#r) Link Dependencies: [r](#r) Run Dependencies: [r](#r) Description: Efficient Estimation of Covariance and (Partial) Correlation --- r-corrplot[¶](#r-corrplot) === Homepage: * <https://cran.r-project.org/package=corrplotSpack package: * [r-corrplot/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/r-corrplot/package.py) Versions: 0.77 Build Dependencies: [r](#r) Link Dependencies: [r](#r) Run Dependencies: [r](#r) Description: A graphical display of a correlation matrix or general matrix. It also contains some algorithms to do matrix reordering. 
--- r-covr[¶](#r-covr) === Homepage: * <https://cran.r-project.org/package=covrSpack package: * [r-covr/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/r-covr/package.py) Versions: 3.0.1 Build Dependencies: [r](#r), [r-jsonlite](#r-jsonlite), [r-withr](#r-withr), [r-crayon](#r-crayon), [r-httr](#r-httr), [r-rex](#r-rex) Link Dependencies: [r](#r) Run Dependencies: [r](#r), [r-jsonlite](#r-jsonlite), [r-withr](#r-withr), [r-crayon](#r-crayon), [r-httr](#r-httr), [r-rex](#r-rex) Description: Track and report code coverage for your package and (optionally) upload the results to a coverage service like 'Codecov' <http://codecov.io> or 'Coveralls' <http://coveralls.io>. Code coverage is a measure of the amount of code being exercised by a set of tests. It is an indirect measure of test quality and completeness. This package is compatible with any testing methodology or framework and tracks coverage of both R code and compiled C/C++/FORTRAN code. --- r-cowplot[¶](#r-cowplot) === Homepage: * <https://cran.r-project.org/package=cowplotSpack package: * [r-cowplot/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/r-cowplot/package.py) Versions: 0.8.0 Build Dependencies: [r](#r), [r-plyr](#r-plyr), [r-ggplot2](#r-ggplot2), [r-gtable](#r-gtable) Link Dependencies: [r](#r) Run Dependencies: [r](#r), [r-plyr](#r-plyr), [r-ggplot2](#r-ggplot2), [r-gtable](#r-gtable) Description: Some helpful extensions and modifications to the 'ggplot2' package. In particular, this package makes it easy to combine multiple 'ggplot2' plots into one and label them with letters, e.g. A, B, C, etc., as is often required for scientific publications. The package also provides a streamlined and clean theme that is used in the Wilke lab, hence the package name, which stands for <NAME>'s plot package. 
--- r-crayon[¶](#r-crayon) === Homepage: * <https://cran.r-project.org/package=crayonSpack package: * [r-crayon/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/r-crayon/package.py) Versions: 1.3.4, 1.3.2 Build Dependencies: [r](#r) Link Dependencies: [r](#r) Run Dependencies: [r](#r) Description: Colored terminal output on terminals that support 'ANSI' color and highlight codes. It also works in 'Emacs' 'ESS'. 'ANSI' color support is automatically detected. Colors and highlighting can be combined and nested. New styles can also be created easily. This package was inspired by the 'chalk' 'JavaScript' project. --- r-crosstalk[¶](#r-crosstalk) === Homepage: * <https://cran.r-project.org/web/packages/crosstalk/index.htmlSpack package: * [r-crosstalk/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/r-crosstalk/package.py) Versions: 1.0.0 Build Dependencies: [r](#r), [r-jsonlite](#r-jsonlite), [r-lazyeval](#r-lazyeval), [r-ggplot2](#r-ggplot2), [r-shiny](#r-shiny), [r-htmltools](#r-htmltools) Link Dependencies: [r](#r) Run Dependencies: [r](#r), [r-jsonlite](#r-jsonlite), [r-lazyeval](#r-lazyeval), [r-ggplot2](#r-ggplot2), [r-shiny](#r-shiny), [r-htmltools](#r-htmltools) Description: Provides building blocks for allowing HTML widgets to communicate with each other, with Shiny or without (i.e. static .html files).
--- r-ctc[¶](#r-ctc) === Homepage: * <https://www.bioconductor.org/packages/release/bioc/html/ctc.htmlSpack package: * [r-ctc/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/r-ctc/package.py) Versions: 1.54.0 Build Dependencies: [r](#r), [r-amap](#r-amap) Link Dependencies: [r](#r) Run Dependencies: [r](#r), [r-amap](#r-amap) Description: Tools for exporting and importing classification trees and clusters to other programs. --- r-cubature[¶](#r-cubature) === Homepage: * <https://cran.r-project.org/package=cubatureSpack package: * [r-cubature/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/r-cubature/package.py) Versions: 1.1-2 Build Dependencies: [r](#r) Link Dependencies: [r](#r) Run Dependencies: [r](#r) Description: Adaptive multivariate integration over hypercubes. --- r-cubist[¶](#r-cubist) === Homepage: * <https://cran.r-project.org/package=CubistSpack package: * [r-cubist/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/r-cubist/package.py) Versions: 0.0.19 Build Dependencies: [r](#r), [r-reshape2](#r-reshape2), [r-lattice](#r-lattice) Link Dependencies: [r](#r) Run Dependencies: [r](#r), [r-reshape2](#r-reshape2), [r-lattice](#r-lattice) Description: Regression modeling using rules with added instance-based corrections. --- r-curl[¶](#r-curl) === Homepage: * <https://github.com/jeroenooms/curlSpack package: * [r-curl/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/r-curl/package.py) Versions: 3.0, 2.3, 1.0, 0.9.7 Build Dependencies: [r](#r), [curl](#curl) Link Dependencies: [r](#r), [curl](#curl) Run Dependencies: [r](#r) Description: The curl() and curl_download() functions provide highly configurable drop-in replacements for base url() and download.file() with better performance, support for encryption (https, ftps), gzip compression, authentication, and other libcurl goodies.
The core of the package implements a framework for performing fully customized requests where data can be processed either in memory, on disk, or streaming via the callback or connection interfaces. Some knowledge of libcurl is recommended; for a more-user-friendly web client see the 'httr' package which builds on this package with http specific tools and logic. --- r-data-table[¶](#r-data-table) === Homepage: * <https://github.com/Rdatatable/data.table/wikiSpack package: * [r-data-table/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/r-data-table/package.py) Versions: 1.10.4-3, 1.10.4-2, 1.10.0, 1.9.6 Build Dependencies: [r](#r) Link Dependencies: [r](#r) Run Dependencies: [r](#r) Description: Fast aggregation of large data (e.g. 100GB in RAM), fast ordered joins, fast add/modify/delete of columns by group using no copies at all, list columns and a fast file reader (fread). Offers a natural and flexible syntax, for faster development. --- r-dbi[¶](#r-dbi) === Homepage: * <http://rstats-db.github.io/DBISpack package: * [r-dbi/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/r-dbi/package.py) Versions: 0.7, 0.4-1 Build Dependencies: [r](#r) Link Dependencies: [r](#r) Run Dependencies: [r](#r) Description: A database interface definition for communication between R and relational database management systems. All classes in this package are virtual and need to be extended by the various R/DBMS implementations. 
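The r-dbi entry above defines a uniform database interface that backends extend. The same idea underlies Python's DB-API, so a minimal, hedged sketch of the connect/execute/fetch pattern using the standard-library `sqlite3` driver (illustration only, not the DBI API):

```python
import sqlite3

# An in-memory database: create a table, insert rows, query them back
# through the generic connection/cursor interface.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE pkg (name TEXT, version TEXT)")
conn.executemany(
    "INSERT INTO pkg VALUES (?, ?)",
    [("r-coda", "0.19-1"), ("r-dbi", "0.7")],
)
rows = conn.execute("SELECT name FROM pkg ORDER BY name").fetchall()
conn.close()
```

Swapping the driver (e.g. for PostgreSQL) leaves this calling code unchanged, which is exactly the portability DBI aims for in R.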
--- r-dbplyr[¶](#r-dbplyr) === Homepage: * <https://github.com/tidyverse/dbplyrSpack package: * [r-dbplyr/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/r-dbplyr/package.py) Versions: 1.1.0 Build Dependencies: [r](#r), [r-assertthat](#r-assertthat), [r-purrr](#r-purrr), [r-dbi](#r-dbi), [r-tibble](#r-tibble), [r-dplyr](#r-dplyr), [r-r6](#r-r6), [r-glue](#r-glue), [r-rlang](#r-rlang) Link Dependencies: [r](#r) Run Dependencies: [r](#r), [r-assertthat](#r-assertthat), [r-purrr](#r-purrr), [r-dbi](#r-dbi), [r-tibble](#r-tibble), [r-dplyr](#r-dplyr), [r-r6](#r-r6), [r-glue](#r-glue), [r-rlang](#r-rlang) Description: A 'dplyr' back end for databases that allows you to work with remote database tables as if they are in-memory data frames. Basic features works with any database that has a 'DBI' back end; more advanced features require 'SQL' translation to be provided by the package author. --- r-delayedarray[¶](#r-delayedarray) === Homepage: * <https://bioconductor.org/packages/DelayedArray/Spack package: * [r-delayedarray/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/r-delayedarray/package.py) Versions: 0.6.5, 0.4.1, 0.2.7 Build Dependencies: [r](#r), [r-iranges](#r-iranges), [r-matrixstats](#r-matrixstats), [r-s4vectors](#r-s4vectors), [r-biocparallel](#r-biocparallel), [r-biocgenerics](#r-biocgenerics) Link Dependencies: [r](#r) Run Dependencies: [r](#r), [r-iranges](#r-iranges), [r-matrixstats](#r-matrixstats), [r-s4vectors](#r-s4vectors), [r-biocparallel](#r-biocparallel), [r-biocgenerics](#r-biocgenerics) Description: Wrapping an array-like object (typically an on-disk object) in a DelayedArray object allows one to perform common array operations on it without loading the object in memory. In order to reduce memory usage and optimize performance, operations on the object are either delayed or executed using a block processing mechanism. 
Note that this also works on in-memory array-like objects like DataFrame objects (typically with Rle columns), Matrix objects, and ordinary arrays and data frames. --- r-deldir[¶](#r-deldir) === Homepage: * <https://CRAN.R-project.org/package=deldirSpack package: * [r-deldir/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/r-deldir/package.py) Versions: 0.1-14 Build Dependencies: [r](#r) Link Dependencies: [r](#r) Run Dependencies: [r](#r) Description: Calculates the Delaunay triangulation and the Dirichlet or Voronoi tessellation (with respect to the entire plane) of a planar point set. Plots triangulations and tessellations in various ways. Clips tessellations to sub-windows. Calculates perimeters of tessellations. Summarises information about the tiles of the tessellation.
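The r-delayedarray entry above describes delaying array operations and executing them blockwise to bound memory. A toy Python sketch of that delay-then-block-process idea (class and method names are mine, not the DelayedArray API):

```python
class Delayed:
    """Toy delayed array: record elementwise ops, apply them blockwise."""

    def __init__(self, data, ops=()):
        self.data, self.ops = list(data), list(ops)

    def map(self, fn):
        # Delaying: nothing is computed here, the operation is only recorded.
        return Delayed(self.data, self.ops + [fn])

    def realize(self, block=2):
        # Execution: run every recorded op over one block at a time,
        # so peak memory is bounded by the block size.
        out = []
        for i in range(0, len(self.data), block):
            chunk = self.data[i:i + block]
            for fn in self.ops:
                chunk = [fn(x) for x in chunk]
            out.extend(chunk)
        return out

x = Delayed([1, 2, 3, 4, 5]).map(lambda v: v + 1).map(lambda v: v * 10)
```

The block size changes only memory behavior, never the result.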
--- r-dendextend[¶](#r-dendextend) === Homepage: * <https://CRAN.R-project.org/package=dendextendSpack package: * [r-dendextend/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/r-dendextend/package.py) Versions: 1.5.2 Build Dependencies: [r](#r), [r-ggplot2](#r-ggplot2), [r-whisker](#r-whisker), [r-viridis](#r-viridis), [r-fpc](#r-fpc), [r-magrittr](#r-magrittr) Link Dependencies: [r](#r) Run Dependencies: [r](#r), [r-ggplot2](#r-ggplot2), [r-whisker](#r-whisker), [r-viridis](#r-viridis), [r-fpc](#r-fpc), [r-magrittr](#r-magrittr) Description: dendextend: Extending 'Dendrogram' Functionality in R --- r-deoptim[¶](#r-deoptim) === Homepage: * <https://cran.r-project.org/package=DEoptimSpack package: * [r-deoptim/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/r-deoptim/package.py) Versions: 2.2-3 Build Dependencies: [r](#r) Link Dependencies: [r](#r) Run Dependencies: [r](#r) Description: Implements the differential evolution algorithm for global optimization of a real-valued function of a real-valued parameter vector. --- r-deoptimr[¶](#r-deoptimr) === Homepage: * <https://cran.r-project.org/web/packages/DEoptimR/index.htmlSpack package: * [r-deoptimr/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/r-deoptimr/package.py) Versions: 1.0-8 Build Dependencies: [r](#r) Link Dependencies: [r](#r) Run Dependencies: [r](#r) Description: An implementation of a bespoke jDE variant of the Differential Evolution stochastic algorithm for global optimization of nonlinear programming problems. 
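r-deoptim and r-deoptimr above implement differential evolution for global optimization. A minimal DE/rand/1/bin minimizer in Python, offered as a hedged sketch of the algorithm family (parameter names like `f_weight` and `cr` are illustrative, not the DEoptim API):

```python
import random

def differential_evolution(f, bounds, pop_size=20, f_weight=0.8, cr=0.9,
                           generations=200, seed=1):
    """Minimize f over box bounds [(lo, hi), ...] with DE/rand/1/bin."""
    rng = random.Random(seed)
    dim = len(bounds)
    pop = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(pop_size)]
    cost = [f(x) for x in pop]
    for _ in range(generations):
        for i in range(pop_size):
            # Mutation: combine three distinct other members.
            a, b, c = rng.sample([x for j, x in enumerate(pop) if j != i], 3)
            jrand = rng.randrange(dim)  # guarantee at least one mutated gene
            trial = []
            for j in range(dim):
                if rng.random() < cr or j == jrand:
                    v = a[j] + f_weight * (b[j] - c[j])
                    lo, hi = bounds[j]
                    v = min(max(v, lo), hi)  # clip back into the box
                else:
                    v = pop[i][j]
                trial.append(v)
            # Greedy selection: keep the trial only if it is no worse.
            tc = f(trial)
            if tc <= cost[i]:
                pop[i], cost[i] = trial, tc
    best = min(range(pop_size), key=cost.__getitem__)
    return pop[best], cost[best]

best_x, best_val = differential_evolution(lambda x: sum(v * v for v in x),
                                          [(-5, 5), (-5, 5)])
```

On the 2-D sphere function this converges close to the origin within a few hundred generations.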
--- r-deseq[¶](#r-deseq) === Homepage: * <https://www.bioconductor.org/packages/DESeq/Spack package: * [r-deseq/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/r-deseq/package.py) Versions: 1.28.0 Build Dependencies: [r](#r), [r-locfit](#r-locfit), [r-lattice](#r-lattice), [r-biobase](#r-biobase), [r-biocgenerics](#r-biocgenerics), [r-rcolorbrewer](#r-rcolorbrewer), [r-geneplotter](#r-geneplotter), [r-genefilter](#r-genefilter), [r-mass](#r-mass) Link Dependencies: [r](#r) Run Dependencies: [r](#r), [r-locfit](#r-locfit), [r-lattice](#r-lattice), [r-biobase](#r-biobase), [r-biocgenerics](#r-biocgenerics), [r-rcolorbrewer](#r-rcolorbrewer), [r-geneplotter](#r-geneplotter), [r-genefilter](#r-genefilter), [r-mass](#r-mass) Description: Estimate variance-mean dependence in count data from high-throughput sequencing assays and test for differential expression based on a model using the negative binomial distribution. --- r-deseq2[¶](#r-deseq2) === Homepage: * <https://www.bioconductor.org/packages/DESeq2/Spack package: * [r-deseq2/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/r-deseq2/package.py) Versions: 1.20.0, 1.18.1, 1.16.1 Build Dependencies: [r-rcpparmadillo](#r-rcpparmadillo), [r-genomicranges](#r-genomicranges), [r-biocgenerics](#r-biocgenerics), [r-ggplot2](#r-ggplot2), [r-hmisc](#r-hmisc), [r-s4vectors](#r-s4vectors), [r-summarizedexperiment](#r-summarizedexperiment), [r-iranges](#r-iranges), [r-locfit](#r-locfit), [r-biocparallel](#r-biocparallel), [r-biobase](#r-biobase), [r](#r), [r-geneplotter](#r-geneplotter), [r-genefilter](#r-genefilter), [r-rcpp](#r-rcpp) Link Dependencies: [r](#r) Run Dependencies: [r-rcpparmadillo](#r-rcpparmadillo), [r-genomicranges](#r-genomicranges), [r-biocgenerics](#r-biocgenerics), [r-ggplot2](#r-ggplot2), [r-hmisc](#r-hmisc), [r-s4vectors](#r-s4vectors), [r-summarizedexperiment](#r-summarizedexperiment), [r-iranges](#r-iranges), 
[r-locfit](#r-locfit), [r-biocparallel](#r-biocparallel), [r-biobase](#r-biobase), [r](#r), [r-geneplotter](#r-geneplotter), [r-genefilter](#r-genefilter), [r-rcpp](#r-rcpp) Description: Estimate variance-mean dependence in count data from high-throughput sequencing assays and test for differential expression based on a model using the negative binomial distribution. --- r-desolve[¶](#r-desolve) === Homepage: * <https://cran.r-project.org/package=deSolveSpack package: * [r-desolve/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/r-desolve/package.py) Versions: 1.20 Build Dependencies: [r](#r) Link Dependencies: [r](#r) Run Dependencies: [r](#r) Description: Functions that solve initial value problems of a system of first-order ordinary differential equations ('ODE'), of partial differential equations ('PDE'), of differential algebraic equations ('DAE'), and of delay differential equations. --- r-devtools[¶](#r-devtools) === Homepage: * <https://github.com/hadley/devtoolsSpack package: * [r-devtools/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/r-devtools/package.py) Versions: 1.12.0, 1.11.1 Build Dependencies: [r](#r), [r-withr](#r-withr), [r-httr](#r-httr), [r-git2r](#r-git2r), [r-rstudioapi](#r-rstudioapi), [r-jsonlite](#r-jsonlite), [r-digest](#r-digest), [r-whisker](#r-whisker), [r-memoise](#r-memoise) Link Dependencies: [r](#r) Run Dependencies: [r](#r), [r-withr](#r-withr), [r-httr](#r-httr), [r-git2r](#r-git2r), [r-rstudioapi](#r-rstudioapi), [r-jsonlite](#r-jsonlite), [r-digest](#r-digest), [r-whisker](#r-whisker), [r-memoise](#r-memoise) Description: Collection of package development tools. 
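The r-desolve entry above solves initial value problems for ODE systems. To illustrate what such a solver does numerically, here is a minimal classic fourth-order Runge-Kutta integrator in Python (a sketch of the method only; the name `rk4` is mine, not deSolve's API):

```python
import math

def rk4(f, y0, t0, t1, steps):
    """Integrate y' = f(t, y) from t0 to t1 with classic RK4."""
    h = (t1 - t0) / steps
    t, y = t0, y0
    for _ in range(steps):
        k1 = f(t, y)
        k2 = f(t + h / 2, y + h * k1 / 2)
        k3 = f(t + h / 2, y + h * k2 / 2)
        k4 = f(t + h, y + h * k3)
        y += h * (k1 + 2 * k2 + 2 * k3 + k4) / 6
        t += h
    return y

# Exponential decay y' = -y with y(0) = 1; the analytic solution is exp(-t).
approx = rk4(lambda t, y: -y, 1.0, 0.0, 1.0, 100)
```

With 100 steps the RK4 error on this problem is far below 1e-6, consistent with its fourth-order accuracy.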
--- r-diagrammer[¶](#r-diagrammer) === Homepage: * <https://github.com/rich-iannone/DiagrammeRSpack package: * [r-diagrammer/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/r-diagrammer/package.py) Versions: 0.8.4 Build Dependencies: [r](#r), [r-igraph](#r-igraph), [r-rstudioapi](#r-rstudioapi), [r-htmlwidgets](#r-htmlwidgets), [r-visnetwork](#r-visnetwork), [r-influencer](#r-influencer), [r-scales](#r-scales), [r-stringr](#r-stringr) Link Dependencies: [r](#r) Run Dependencies: [r](#r), [r-igraph](#r-igraph), [r-rstudioapi](#r-rstudioapi), [r-htmlwidgets](#r-htmlwidgets), [r-visnetwork](#r-visnetwork), [r-influencer](#r-influencer), [r-scales](#r-scales), [r-stringr](#r-stringr) Description: Create graph diagrams and flowcharts using R. --- r-dicekriging[¶](#r-dicekriging) === Homepage: * <http://dice.emse.fr/Spack package: * [r-dicekriging/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/r-dicekriging/package.py) Versions: 1.5.5 Build Dependencies: [r](#r) Link Dependencies: [r](#r) Run Dependencies: [r](#r) Description: Estimation, validation and prediction of kriging models. Important functions : km, print.km, plot.km, predict.km. --- r-dichromat[¶](#r-dichromat) === Homepage: * <https://cran.r-project.org/web/packages/dichromat/index.htmlSpack package: * [r-dichromat/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/r-dichromat/package.py) Versions: 2.0-0 Build Dependencies: [r](#r) Link Dependencies: [r](#r) Run Dependencies: [r](#r) Description: Collapse red-green or green-blue distinctions to simulate the effects of different types of color-blindness. 
--- r-diffusionmap[¶](#r-diffusionmap) === Homepage: * <https://cran.r-project.org/web/packages/diffusionMap/index.htmlSpack package: * [r-diffusionmap/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/r-diffusionmap/package.py) Versions: 1.1-0, 1.0-0, 0.0-2, 0.0-1 Build Dependencies: [r](#r), [r-matrix](#r-matrix), [r-igraph](#r-igraph), [r-scatterplot3d](#r-scatterplot3d) Link Dependencies: [r](#r) Run Dependencies: [r](#r), [r-matrix](#r-matrix), [r-igraph](#r-igraph), [r-scatterplot3d](#r-scatterplot3d) Description: Implements diffusion map method of data parametrization, including creation and visualization of diffusion map, clustering with diffusion K-means and regression using adaptive regression model. --- r-digest[¶](#r-digest) === Homepage: * <http://dirk.eddelbuettel.com/code/digest.htmlSpack package: * [r-digest/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/r-digest/package.py) Versions: 0.6.12, 0.6.11, 0.6.9 Build Dependencies: [r](#r) Link Dependencies: [r](#r) Run Dependencies: [r](#r) Description: Implementation of a function 'digest()' for the creation of hash digests of arbitrary R objects (using the md5, sha-1, sha-256, crc32, xxhash and murmurhash algorithms) permitting easy comparison of R language objects, as well as a function 'hmac()' to create hash-based message authentication code. The md5 algorithm by <NAME> is specified in RFC 1321, the sha-1 and sha-256 algorithms are specified in FIPS-180-1 and FIPS-180-2, and the crc32 algorithm is described in ftp://ftp.rocksoft.com/clients/rocksoft/papers/crc_v3.txt. For md5, sha-1, sha-256 and aes, this package uses small standalone implementations that were provided by <NAME>. For crc32, code from the zlib library is used. For sha-512, an implementation by Aaron <NAME> is used. For xxhash, the implementation by Yann Collet is used. For murmurhash, an implementation by Shane Day is used.
Please note that this package is not meant to be deployed for cryptographic purposes for which more comprehensive (and widely tested) libraries such as OpenSSL should be used. --- r-diptest[¶](#r-diptest) === Homepage: * <https://CRAN.R-project.org/package=diptestSpack package: * [r-diptest/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/r-diptest/package.py) Versions: 0.75-7 Build Dependencies: [r](#r) Link Dependencies: [r](#r) Run Dependencies: [r](#r) Description: diptest: Hartigan's Dip Test Statistic for Unimodality - Corrected --- r-dirichletmultinomial[¶](#r-dirichletmultinomial) === Homepage: * <https://bioconductor.org/packages/DirichletMultinomial/Spack package: * [r-dirichletmultinomial/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/r-dirichletmultinomial/package.py) Versions: 1.20.0 Build Dependencies: [r](#r), [r-s4vectors](#r-s4vectors), [r-iranges](#r-iranges), [gsl](#gsl) Link Dependencies: [r](#r), [gsl](#gsl) Run Dependencies: [r](#r), [r-s4vectors](#r-s4vectors), [r-iranges](#r-iranges) Description: Dirichlet-multinomial mixture models can be used to describe variability in microbial metagenomic data. This package is an interface to code originally made available by Holmes, Harris, and Quince, 2012, PLoS ONE 7(2): 1-15, as discussed further in the man page for this package, ?DirichletMultinomial. --- r-dismo[¶](#r-dismo) === Homepage: * <https://cran.r-project.org/package=dismoSpack package: * [r-dismo/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/r-dismo/package.py) Versions: 1.1-4 Build Dependencies: [r](#r), [r-raster](#r-raster), [r-sp](#r-sp) Link Dependencies: [r](#r) Run Dependencies: [r](#r), [r-raster](#r-raster), [r-sp](#r-sp) Description: Functions for species distribution modeling, that is, predicting entire geographic distributions from occurrences at a number of sites and the environment at these sites.
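The r-digest entry above describes hash digests for cheap object comparison plus a keyed `hmac()`. Python's standard library exposes the same primitives; a minimal sketch (illustration of the concepts, not the digest API):

```python
import hashlib
import hmac

# Two identical payloads hash to identical digests, so equality of large
# serialized objects can be checked by comparing short hex strings.
payload = b"some serialized object"
d1 = hashlib.sha256(payload).hexdigest()
d2 = hashlib.sha256(b"some serialized object").hexdigest()

# A keyed message authentication code, the analogue of digest's hmac().
mac = hmac.new(b"secret-key", payload, hashlib.sha256).hexdigest()
```

As the entry notes for digest itself, these stdlib routines are fine for comparison and integrity checks, while full cryptographic deployments should rely on vetted libraries.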
--- r-dnacopy[¶](#r-dnacopy) === Homepage: * <https://www.bioconductor.org/packages/DNAcopy/Spack package: * [r-dnacopy/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/r-dnacopy/package.py) Versions: 1.50.1 Build Dependencies: [r](#r) Link Dependencies: [r](#r) Run Dependencies: [r](#r) Description: Implements the circular binary segmentation (CBS) algorithm to segment DNA copy number data and identify genomic regions with abnormal copy number. --- r-do-db[¶](#r-do-db) === Homepage: * <https://bioconductor.org/packages/DO.db/Spack package: * [r-do-db/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/r-do-db/package.py) Versions: 2.9 Build Dependencies: [r](#r), [r-annotationdbi](#r-annotationdbi) Link Dependencies: [r](#r) Run Dependencies: [r](#r), [r-annotationdbi](#r-annotationdbi) Description: A set of annotation maps describing the entire Disease Ontology assembled using data from DO. --- r-domc[¶](#r-domc) === Homepage: * <https://cran.r-project.org/package=doMCSpack package: * [r-domc/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/r-domc/package.py) Versions: 1.3.4 Build Dependencies: [r](#r), [r-iterators](#r-iterators), [r-foreach](#r-foreach) Link Dependencies: [r](#r) Run Dependencies: [r](#r), [r-iterators](#r-iterators), [r-foreach](#r-foreach) Description: Provides a parallel backend for the %dopar% function using the multicore functionality of the parallel package. 
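r-domc above registers a parallel backend so that foreach's %dopar% loop bodies fan out across workers. The analogous fan-out in Python, shown here as a hedged illustration with `concurrent.futures` rather than the R API:

```python
from concurrent.futures import ThreadPoolExecutor

def simulate(task_id: int) -> int:
    # Stand-in for one iteration body of a foreach(...) %dopar% loop.
    return task_id * task_id

# Executor.map distributes the iterations and returns results in input order.
with ThreadPoolExecutor(max_workers=4) as pool:
    results = list(pool.map(simulate, range(8)))
```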
--- r-doparallel[¶](#r-doparallel) === Homepage: * <https://cran.r-project.org/web/packages/doParallel/index.htmlSpack package: * [r-doparallel/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/r-doparallel/package.py) Versions: 1.0.11, 1.0.10 Build Dependencies: [r](#r), [r-iterators](#r-iterators), [r-foreach](#r-foreach) Link Dependencies: [r](#r) Run Dependencies: [r](#r), [r-iterators](#r-iterators), [r-foreach](#r-foreach) Description: Provides a parallel backend for the %dopar% function using the parallel package. --- r-dorng[¶](#r-dorng) === Homepage: * <https://cran.rstudio.com/web/packages/doRNG/index.htmlSpack package: * [r-dorng/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/r-dorng/package.py) Versions: 1.6.6 Build Dependencies: [r](#r), [r-iterators](#r-iterators), [r-rngtools](#r-rngtools), [r-pkgmaker](#r-pkgmaker), [r-foreach](#r-foreach) Link Dependencies: [r](#r) Run Dependencies: [r](#r), [r-iterators](#r-iterators), [r-rngtools](#r-rngtools), [r-pkgmaker](#r-pkgmaker), [r-foreach](#r-foreach) Description: Provides functions to perform reproducible parallel foreach loops, using independent random streams as generated by L'Ecuyer's combined multiple-recursive generator [L'Ecuyer (1999), <doi:10.1287/opre.47.1.159>]. It makes it easy to convert standard %dopar% loops into fully reproducible loops, independently of the number of workers, the task scheduling strategy, or the chosen parallel environment and associated foreach backend.
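The key idea behind r-dorng's reproducibility is seeding a random stream per task rather than per worker, so results do not depend on worker count or scheduling. A hedged Python sketch of that idea (per-task string seeds here are a simplification of L'Ecuyer streams):

```python
import random
from concurrent.futures import ThreadPoolExecutor

def draw(task_id: int) -> float:
    # Seed by task index, not by worker: task i's stream is fixed no matter
    # how many workers run or in what order tasks are scheduled.
    return random.Random(f"seed-42-task-{task_id}").random()

with ThreadPoolExecutor(max_workers=2) as two_workers:
    a = list(two_workers.map(draw, range(6)))
with ThreadPoolExecutor(max_workers=5) as five_workers:
    b = list(five_workers.map(draw, range(6)))
```

The two runs use different worker counts yet produce identical sequences.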
--- r-dose[¶](#r-dose) === Homepage: * <https://www.bioconductor.org/packages/DOSE/Spack package: * [r-dose/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/r-dose/package.py) Versions: 3.2.0 Build Dependencies: [r](#r), [r-reshape2](#r-reshape2), [r-ggplot2](#r-ggplot2), [r-scales](#r-scales), [r-s4vectors](#r-s4vectors), [r-gosemsim](#r-gosemsim), [r-igraph](#r-igraph), [r-biocparallel](#r-biocparallel), [r-annotationdbi](#r-annotationdbi), [r-fgsea](#r-fgsea), [r-do-db](#r-do-db), [r-qvalue](#r-qvalue) Link Dependencies: [r](#r) Run Dependencies: [r](#r), [r-reshape2](#r-reshape2), [r-ggplot2](#r-ggplot2), [r-scales](#r-scales), [r-s4vectors](#r-s4vectors), [r-gosemsim](#r-gosemsim), [r-igraph](#r-igraph), [r-biocparallel](#r-biocparallel), [r-annotationdbi](#r-annotationdbi), [r-fgsea](#r-fgsea), [r-do-db](#r-do-db), [r-qvalue](#r-qvalue) Description: This package implements five methods proposed by Resnik, Schlicker, Jiang, Lin and Wang respectively for measuring semantic similarities among DO terms and gene products. Enrichment analyses including hypergeometric model and gene set enrichment analysis are also implemented for discovering disease associations of high-throughput biological data. --- r-downloader[¶](#r-downloader) === Homepage: * <https://cran.rstudio.com/web/packages/downloader/index.htmlSpack package: * [r-downloader/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/r-downloader/package.py) Versions: 0.4 Build Dependencies: [r](#r), [r-digest](#r-digest) Link Dependencies: [r](#r) Run Dependencies: [r](#r), [r-digest](#r-digest) Description: Provides a wrapper for the download.file function, making it possible to download files over HTTPS on Windows, Mac OS X, and other Unix-like platforms. The 'RCurl' package provides this functionality (and much more) but can be difficult to install because it must be compiled with external dependencies. 
This package has no external dependencies, so it is much easier to install. --- r-dplyr[¶](#r-dplyr) === Homepage: * <https://cran.r-project.org/package=dplyrSpack package: * [r-dplyr/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/r-dplyr/package.py) Versions: 0.7.5, 0.7.4, 0.7.3, 0.5.0 Build Dependencies: [r-bindrcpp](#r-bindrcpp), [r-assertthat](#r-assertthat), [r-r6](#r-r6), [r-tidyselect](#r-tidyselect), [r-dbi](#r-dbi), [r-tibble](#r-tibble), [r-bindr](#r-bindr), [r-bh](#r-bh), [r-magrittr](#r-magrittr), [r-lazyeval](#r-lazyeval), [r-plogr](#r-plogr), [r-pkgconfig](#r-pkgconfig), [r](#r), [r-glue](#r-glue), [r-rcpp](#r-rcpp) Link Dependencies: [r](#r) Run Dependencies: [r-bindrcpp](#r-bindrcpp), [r-assertthat](#r-assertthat), [r-r6](#r-r6), [r-tidyselect](#r-tidyselect), [r-dbi](#r-dbi), [r-tibble](#r-tibble), [r-bindr](#r-bindr), [r-bh](#r-bh), [r-magrittr](#r-magrittr), [r-lazyeval](#r-lazyeval), [r-plogr](#r-plogr), [r-pkgconfig](#r-pkgconfig), [r](#r), [r-glue](#r-glue), [r-rcpp](#r-rcpp) Description: A fast, consistent tool for working with data frame like objects, both in memory and out of memory. --- r-dt[¶](#r-dt) === Homepage: * <http://rstudio.github.io/DTSpack package: * [r-dt/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/r-dt/package.py) Versions: 0.1 Build Dependencies: [r](#r), [r-htmltools](#r-htmltools), [r-htmlwidgets](#r-htmlwidgets), [r-magrittr](#r-magrittr) Link Dependencies: [r](#r) Run Dependencies: [r](#r), [r-htmltools](#r-htmltools), [r-htmlwidgets](#r-htmlwidgets), [r-magrittr](#r-magrittr) Description: Data objects in R can be rendered as HTML tables using the JavaScript library 'DataTables' (typically via R Markdown or Shiny). The 'DataTables' library has been included in this R package. The package name 'DT' is an abbreviation of 'DataTables'. 
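The r-dt entry above renders R data objects as HTML tables. A minimal, hedged sketch of the underlying transformation in Python (the helper `html_table` is mine; DT additionally wires in the DataTables JavaScript for interactivity):

```python
from html import escape

def html_table(headers, rows):
    """Render rows of data as a minimal HTML table, escaping cell text."""
    head = "".join(f"<th>{escape(h)}</th>" for h in headers)
    body = "".join(
        "<tr>" + "".join(f"<td>{escape(str(c))}</td>" for c in row) + "</tr>"
        for row in rows
    )
    return f"<table><tr>{head}</tr>{body}</table>"

table = html_table(["pkg", "version"], [("r-dt", "0.1")])
```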
--- r-dtw[¶](#r-dtw) === Homepage: * <https://cran.r-project.org/web/packages/dtw/index.htmlSpack package: * [r-dtw/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/r-dtw/package.py) Versions: 1.18-1, 1.17-1, 1.16, 1.15, 1.14-3 Build Dependencies: [r](#r), [r-proxy](#r-proxy) Link Dependencies: [r](#r) Run Dependencies: [r](#r), [r-proxy](#r-proxy) Description: A comprehensive implementation of dynamic time warping (DTW) algorithms in R. DTW computes the optimal (least cumulative distance) alignment between points of two time series. --- r-dygraphs[¶](#r-dygraphs) === Homepage: * <https://cran.r-project.org/web/packages/dygraphs/index.htmlSpack package: * [r-dygraphs/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/r-dygraphs/package.py) Versions: 0.9 Build Dependencies: [r](#r), [r-xts](#r-xts), [r-zoo](#r-zoo), [r-htmlwidgets](#r-htmlwidgets), [r-magrittr](#r-magrittr) Link Dependencies: [r](#r) Run Dependencies: [r](#r), [r-xts](#r-xts), [r-zoo](#r-zoo), [r-htmlwidgets](#r-htmlwidgets), [r-magrittr](#r-magrittr) Description: An R interface to the 'dygraphs' JavaScript charting library (a copy of which is included in the package). Provides rich facilities for charting time-series data in R, including highly configurable series- and axis- display and interactive features like zoom/pan and series/point highlighting. --- r-e1071[¶](#r-e1071) === Homepage: * <https://cran.r-project.org/package=e1071Spack package: * [r-e1071/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/r-e1071/package.py) Versions: 1.6-7 Build Dependencies: [r](#r), [r-class](#r-class) Link Dependencies: [r](#r) Run Dependencies: [r](#r), [r-class](#r-class) Description: Functions for latent class analysis, short time Fourier transform, fuzzy clustering, support vector machines, shortest path computation, bagged clustering, naive Bayes classifier, ... 
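The r-dtw entry above computes the least-cumulative-distance alignment between two series. That alignment is a classic O(n*m) dynamic program; a self-contained Python sketch of the algorithm (not the dtw package's API):

```python
def dtw_distance(a, b):
    """Dynamic-time-warping distance between two 1-D sequences."""
    inf = float("inf")
    n, m = len(a), len(b)
    # d[i][j] = cheapest alignment cost of a[:i] against b[:j].
    d = [[inf] * (m + 1) for _ in range(n + 1)]
    d[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            # Extend the best of: insertion, deletion, or match/step.
            d[i][j] = cost + min(d[i - 1][j], d[i][j - 1], d[i - 1][j - 1])
    return d[n][m]

# The second series repeats a sample, but warping absorbs it: distance 0.
dist = dtw_distance([0, 1, 2, 1], [0, 1, 1, 2, 1])
```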
--- r-edger[¶](#r-edger) === Homepage: * <https://bioconductor.org/packages/edgeR/> Spack package: * [r-edger/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/r-edger/package.py) Versions: 3.22.3, 3.18.1 Build Dependencies: [r](#r), [r-locfit](#r-locfit), [r-limma](#r-limma), [r-rcpp](#r-rcpp) Link Dependencies: [r](#r), [r-rcpp](#r-rcpp) Run Dependencies: [r](#r), [r-locfit](#r-locfit), [r-limma](#r-limma), [r-rcpp](#r-rcpp) Description: Differential expression analysis of RNA-seq expression profiles with biological replication. Implements a range of statistical methodology based on the negative binomial distributions, including empirical Bayes estimation, exact tests, generalized linear models and quasi-likelihood tests. As well as RNA-seq, it can be applied to differential signal analysis of other types of genomic data that produce counts, including ChIP-seq, SAGE and CAGE. --- r-ellipse[¶](#r-ellipse) === Homepage: * <https://cran.r-project.org/package=ellipse> Spack package: * [r-ellipse/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/r-ellipse/package.py) Versions: 0.3-8 Build Dependencies: [r](#r) Link Dependencies: [r](#r) Run Dependencies: [r](#r) Description: This package contains various routines for drawing ellipses and ellipse-like confidence regions.
--- r-ensembldb[¶](#r-ensembldb) === Homepage: * <https://bioconductor.org/packages/ensembldb/Spack package: * [r-ensembldb/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/r-ensembldb/package.py) Versions: 2.0.4 Build Dependencies: [r-iranges](#r-iranges), [r-dbi](#r-dbi), [r-curl](#r-curl), [r-s4vectors](#r-s4vectors), [r-rsqlite](#r-rsqlite), [r-genomicfeatures](#r-genomicfeatures), [r-annotationhub](#r-annotationhub), [r-genomicranges](#r-genomicranges), [r-biostrings](#r-biostrings), [r-rtracklayer](#r-rtracklayer), [r-annotationfilter](#r-annotationfilter), [r-biobase](#r-biobase), [r-protgenerics](#r-protgenerics), [r-annotationdbi](#r-annotationdbi), [r](#r), [r-rsamtools](#r-rsamtools), [r-genomeinfodb](#r-genomeinfodb), [r-biocgenerics](#r-biocgenerics) Link Dependencies: [r](#r) Run Dependencies: [r-iranges](#r-iranges), [r-dbi](#r-dbi), [r-curl](#r-curl), [r-s4vectors](#r-s4vectors), [r-rsqlite](#r-rsqlite), [r-genomicfeatures](#r-genomicfeatures), [r-annotationhub](#r-annotationhub), [r-genomicranges](#r-genomicranges), [r-biostrings](#r-biostrings), [r-rtracklayer](#r-rtracklayer), [r-annotationfilter](#r-annotationfilter), [r-biobase](#r-biobase), [r-protgenerics](#r-protgenerics), [r-annotationdbi](#r-annotationdbi), [r](#r), [r-rsamtools](#r-rsamtools), [r-genomeinfodb](#r-genomeinfodb), [r-biocgenerics](#r-biocgenerics) Description: The package provides functions to create and use transcript centric annotation databases/packages. The annotation for the databases are directly fetched from Ensembl using their Perl API. The functionality and data is similar to that of the TxDb packages from the GenomicFeatures package, but, in addition to retrieve all gene/transcript models and annotations from the database, the ensembldb package provides also a filter framework allowing to retrieve annotations for specific entries like genes encoded on a chromosome region or transcript models of lincRNA genes. 
--- r-ergm[¶](#r-ergm) === Homepage: * <http://statnet.orgSpack package: * [r-ergm/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/r-ergm/package.py) Versions: 3.7.1 Build Dependencies: [r](#r), [r-coda](#r-coda), [r-lpsolve](#r-lpsolve), [r-robustbase](#r-robustbase), [r-matrix](#r-matrix), [r-network](#r-network), [r-mass](#r-mass), [r-trust](#r-trust), [r-statnet-common](#r-statnet-common) Link Dependencies: [r](#r) Run Dependencies: [r](#r), [r-coda](#r-coda), [r-lpsolve](#r-lpsolve), [r-robustbase](#r-robustbase), [r-matrix](#r-matrix), [r-network](#r-network), [r-mass](#r-mass), [r-trust](#r-trust), [r-statnet-common](#r-statnet-common) Description: An integrated set of tools to analyze and simulate networks based on exponential-family random graph models (ERGM). "ergm" is a part of the "statnet" suite of packages for network analysis. --- r-evaluate[¶](#r-evaluate) === Homepage: * <https://cran.r-project.org/package=evaluateSpack package: * [r-evaluate/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/r-evaluate/package.py) Versions: 0.10.1, 0.10, 0.9 Build Dependencies: [r](#r), [r-stringr](#r-stringr) Link Dependencies: [r](#r) Run Dependencies: [r](#r), [r-stringr](#r-stringr) Description: Parsing and evaluation tools that make it easy to recreate the command line behaviour of R. --- r-expm[¶](#r-expm) === Homepage: * <http://R-Forge.R-project.org/projects/expmSpack package: * [r-expm/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/r-expm/package.py) Versions: 0.999-2 Build Dependencies: [r](#r) Link Dependencies: [r](#r) Run Dependencies: [r](#r) Description: Computation of the matrix exponential, logarithm, sqrt, and related quantities. 
--- r-factoextra[¶](#r-factoextra) === Homepage: * <http://www.sthda.com/english/rpkgs/factoextraSpack package: * [r-factoextra/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/r-factoextra/package.py) Versions: 1.0.4 Build Dependencies: [r](#r), [r-dendextend](#r-dendextend), [r-reshape2](#r-reshape2), [r-factominer](#r-factominer), [r-ggplot2](#r-ggplot2), [r-ggpubr](#r-ggpubr), [r-ggrepel](#r-ggrepel), [r-abind](#r-abind), [r-tidyr](#r-tidyr) Link Dependencies: [r](#r) Run Dependencies: [r](#r), [r-dendextend](#r-dendextend), [r-reshape2](#r-reshape2), [r-factominer](#r-factominer), [r-ggplot2](#r-ggplot2), [r-ggpubr](#r-ggpubr), [r-ggrepel](#r-ggrepel), [r-abind](#r-abind), [r-tidyr](#r-tidyr) Description: factoextra: Extract and Visualize the Results of Multivariate Data Analyses --- r-factominer[¶](#r-factominer) === Homepage: * <http://factominer.free.frSpack package: * [r-factominer/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/r-factominer/package.py) Versions: 1.35 Build Dependencies: [r](#r), [r-knitr](#r-knitr), [r-scatterplot3d](#r-scatterplot3d), [r-leaps](#r-leaps), [r-flashclust](#r-flashclust), [r-ellipse](#r-ellipse), [r-car](#r-car) Link Dependencies: [r](#r) Run Dependencies: [r](#r), [r-knitr](#r-knitr), [r-scatterplot3d](#r-scatterplot3d), [r-leaps](#r-leaps), [r-flashclust](#r-flashclust), [r-ellipse](#r-ellipse), [r-car](#r-car) Description: FactoMineR: Multivariate Exploratory Data Analysis and Data Mining --- r-fastcluster[¶](#r-fastcluster) === Homepage: * <http://danifold.net/fastcluster.htmlSpack package: * [r-fastcluster/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/r-fastcluster/package.py) Versions: 1.1.25 Build Dependencies: [r](#r) Link Dependencies: [r](#r) Run Dependencies: [r](#r) Description: This is a two-in-one package which provides interfaces to both R and 'Python'. 
It implements fast hierarchical, agglomerative clustering routines. Part of the functionality is designed as drop-in replacement for existing routines: linkage() in the 'SciPy' package 'scipy.cluster.hierarchy', hclust() in R's 'stats' package, and the 'flashClust' package. It provides the same functionality with the benefit of a much faster implementation. Moreover, there are memory-saving routines for clustering of vector data, which go beyond what the existing packages provide. For information on how to install the 'Python' files, see the file INSTALL in the source distribution. --- r-fastmatch[¶](#r-fastmatch) === Homepage: * <http://www.rforge.net/fastmatch> Spack package: * [r-fastmatch/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/r-fastmatch/package.py) Versions: 1.1-0 Build Dependencies: [r](#r) Link Dependencies: [r](#r) Run Dependencies: [r](#r) Description: Package providing a fast match() replacement for cases that require repeated look-ups. It is slightly faster than R's built-in match() function on first match against a table, but extremely fast on any subsequent lookup as it keeps the hash table in memory. --- r-ff[¶](#r-ff) === Homepage: * <http://ff.r-forge.r-project.org/> Spack package: * [r-ff/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/r-ff/package.py) Versions: 2.2-13 Build Dependencies: [r](#r), [r-bit](#r-bit) Link Dependencies: [r](#r) Run Dependencies: [r](#r), [r-bit](#r-bit) Description: Memory-efficient storage of large data on disk and fast access functions.
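The trick the r-fastmatch entry above describes (pay the hashing cost once, then reuse the table across look-ups) is easy to picture in Python. This is a hedged sketch of the idea only, not the package's interface; `make_matcher` is an invented name, and `None` stands in for R's NA:

```python
def make_matcher(table):
    """Return a match() function that reuses a prebuilt hash index."""
    index = {}
    for pos, value in enumerate(table):
        # keep the FIRST position of each value, 1-based like R's match()
        index.setdefault(value, pos + 1)

    def match(values):
        # every call after the first is a cheap dict lookup per element
        return [index.get(v) for v in values]  # None plays the role of NA

    return match
```

For example, `make_matcher(["a", "b", "c", "b"])(["b", "z", "c"])` yields `[2, None, 3]`: repeated calls never rebuild the index.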
--- r-fftwtools[¶](#r-fftwtools) === Homepage: * <https://github.com/krahim/fftwtools> Spack package: * [r-fftwtools/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/r-fftwtools/package.py) Versions: 0.9-8 Build Dependencies: [r](#r), [fftw](#fftw) Link Dependencies: [r](#r), [fftw](#fftw) Run Dependencies: [r](#r) Description: Provides a wrapper for several 'FFTW' functions. This package provides access to the two-dimensional 'FFT', the multivariate 'FFT', and the one-dimensional real to complex 'FFT' using the 'FFTW3' library. The package includes the functions fftw() and mvfftw() which are designed to mimic the functionality of the R functions fft() and mvfft(). The 'FFT' functions have a parameter that allows them to not return the redundant complex conjugate when the input is real data. --- r-fgsea[¶](#r-fgsea) === Homepage: * <https://www.bioconductor.org/packages/fgsea/> Spack package: * [r-fgsea/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/r-fgsea/package.py) Versions: 1.2.1 Build Dependencies: [r](#r), [r-ggplot2](#r-ggplot2), [r-data-table](#r-data-table), [r-biocparallel](#r-biocparallel), [r-fastmatch](#r-fastmatch), [r-gridextra](#r-gridextra), [r-rcpp](#r-rcpp) Link Dependencies: [r](#r) Run Dependencies: [r](#r), [r-ggplot2](#r-ggplot2), [r-data-table](#r-data-table), [r-biocparallel](#r-biocparallel), [r-fastmatch](#r-fastmatch), [r-gridextra](#r-gridextra), [r-rcpp](#r-rcpp) Description: The package implements an algorithm for fast gene set enrichment analysis. Using the fast algorithm makes it possible to run more permutations and obtain more fine-grained p-values, which in turn allows accurate standard approaches to multiple hypothesis correction to be used.
--- r-filehash[¶](#r-filehash) === Homepage: * <https://cran.r-project.org/Spack package: * [r-filehash/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/r-filehash/package.py) Versions: 2.3 Build Dependencies: [r](#r) Link Dependencies: [r](#r) Run Dependencies: [r](#r) Description: Implements a simple key-value style database where character string keys are associated with data values that are stored on the disk. A simple interface is provided for inserting, retrieving, and deleting data from the database. Utilities are provided that allow 'filehash' databases to be treated much like environments and lists are already used in R. These utilities are provided to encourage interactive and exploratory analysis on large datasets. Three different file formats for representing the database are currently available and new formats can easily be incorporated by third parties for use in the 'filehash' framework. --- r-findpython[¶](#r-findpython) === Homepage: * <https://github.com/trevorld/findpythonSpack package: * [r-findpython/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/r-findpython/package.py) Versions: 1.0.3 Build Dependencies: [r](#r) Link Dependencies: [r](#r) Run Dependencies: [r](#r), [python](#python) Description: Package designed to find an acceptable python binary. 
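The r-filehash entry above describes a disk-backed key-value database that can be used much like an in-memory environment. The same concept can be approximated with Python's standard `dbm` module; this is a rough sketch of the idea only, not the filehash file format or API, and the key/value names are invented for the example:

```python
import dbm
import os
import tempfile

# A tiny disk-backed key-value store in the spirit of 'filehash':
# string keys map to values persisted on disk rather than held in memory.
db_path = os.path.join(tempfile.mkdtemp(), "demo-db")

with dbm.open(db_path, "c") as db:  # "c" creates the database if needed
    db["alpha"] = "1"               # str keys/values are encoded to bytes
    db["beta"] = "2"

# Reopening read-only stands in for a later interactive session: the
# data survives on disk between opens.
with dbm.open(db_path, "r") as db:
    beta = db["beta"]               # values come back as bytes, here b"2"
```

Inserting, retrieving, and deleting entries all go through the mapping interface, which is what makes such a store feel like an ordinary environment or list during exploratory work on large data.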
--- r-fit-models[¶](#r-fit-models) === Homepage: * <https://cran.r-project.org/package=fit.modelsSpack package: * [r-fit-models/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/r-fit-models/package.py) Versions: 0.5-14, 0.5-13 Build Dependencies: [r](#r), [r-lattice](#r-lattice) Link Dependencies: [r](#r) Run Dependencies: [r](#r), [r-lattice](#r-lattice) Description: Compare Fitted Models --- r-flashclust[¶](#r-flashclust) === Homepage: * <https://CRAN.R-project.org/package=flashClustSpack package: * [r-flashclust/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/r-flashclust/package.py) Versions: 1.01-2 Build Dependencies: [r](#r) Link Dependencies: [r](#r) Run Dependencies: [r](#r) Description: flashClust: Implementation of optimal hierarchical clustering --- r-flexclust[¶](#r-flexclust) === Homepage: * <https://cran.r-project.org/package=flexclustSpack package: * [r-flexclust/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/r-flexclust/package.py) Versions: 1.3-5 Build Dependencies: [r](#r), [r-modeltools](#r-modeltools), [r-lattice](#r-lattice) Link Dependencies: [r](#r) Run Dependencies: [r](#r), [r-modeltools](#r-modeltools), [r-lattice](#r-lattice) Description: The main function kcca implements a general framework for k-centroids cluster analysis supporting arbitrary distance measures and centroid computation. Further cluster methods include hard competitive learning, neural gas, and QT clustering. There are numerous visualization methods for cluster results (neighborhood graphs, convex cluster hulls, barcharts of centroids, ...), and bootstrap methods for the analysis of cluster stability. 
--- r-flexmix[¶](#r-flexmix) === Homepage: * <https://CRAN.R-project.org/package=flexmixSpack package: * [r-flexmix/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/r-flexmix/package.py) Versions: 2.3-14 Build Dependencies: [r](#r), [r-modeltools](#r-modeltools) Link Dependencies: [r](#r) Run Dependencies: [r](#r), [r-modeltools](#r-modeltools) Description: flexmix: Flexible Mixture Modeling --- r-fnn[¶](#r-fnn) === Homepage: * <https://cran.r-project.org/web/packages/FNN/index.htmlSpack package: * [r-fnn/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/r-fnn/package.py) Versions: 1.1, 1.0, 0.6-4, 0.6-3, 0.6-2 Build Dependencies: [r](#r), [r-mvtnorm](#r-mvtnorm), [r-chemometrics](#r-chemometrics) Link Dependencies: [r](#r) Run Dependencies: [r](#r), [r-mvtnorm](#r-mvtnorm), [r-chemometrics](#r-chemometrics) Description: Cover-tree and kd-tree fast k-nearest neighbor search algorithms and related applications including KNN classification, regression and information measures are implemented. --- r-forcats[¶](#r-forcats) === Homepage: * <http://forcats.tidyverse.org/Spack package: * [r-forcats/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/r-forcats/package.py) Versions: 0.2.0 Build Dependencies: [r](#r), [r-tibble](#r-tibble), [r-magrittr](#r-magrittr) Link Dependencies: [r](#r) Run Dependencies: [r](#r), [r-tibble](#r-tibble), [r-magrittr](#r-magrittr) Description: Helpers for reordering factor levels (including moving specified levels to front, ordering by first appearance, reversing, and randomly shuffling), and tools for modifying factor levels (including collapsing rare levels into other, 'anonymising', and manually 'recoding'). 
--- r-foreach[¶](#r-foreach) === Homepage: * <https://cran.r-project.org/web/packages/foreach/index.htmlSpack package: * [r-foreach/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/r-foreach/package.py) Versions: 1.4.3 Build Dependencies: [r](#r), [r-codetools](#r-codetools), [r-iterators](#r-iterators) Link Dependencies: [r](#r) Run Dependencies: [r](#r), [r-codetools](#r-codetools), [r-iterators](#r-iterators) Description: Support for the foreach looping construct. Foreach is an idiom that allows for iterating over elements in a collection, without the use of an explicit loop counter. This package in particular is intended to be used for its return value, rather than for its side effects. In that sense, it is similar to the standard lapply function, but doesn't require the evaluation of a function. Using foreach without side effects also facilitates executing the loop in parallel. --- r-forecast[¶](#r-forecast) === Homepage: * <https://cran.r-project.org/package=forecastSpack package: * [r-forecast/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/r-forecast/package.py) Versions: 8.2 Build Dependencies: [r-rcpparmadillo](#r-rcpparmadillo), [r-ggplot2](#r-ggplot2), [r-colorspace](#r-colorspace), [r](#r), [r-fracdiff](#r-fracdiff), [r-tseries](#r-tseries), [r-nnet](#r-nnet), [r-magrittr](#r-magrittr), [r-lmtest](#r-lmtest), [r-zoo](#r-zoo), [r-timedate](#r-timedate), [r-rcpp](#r-rcpp) Link Dependencies: [r](#r) Run Dependencies: [r-rcpparmadillo](#r-rcpparmadillo), [r-ggplot2](#r-ggplot2), [r-colorspace](#r-colorspace), [r](#r), [r-fracdiff](#r-fracdiff), [r-tseries](#r-tseries), [r-nnet](#r-nnet), [r-magrittr](#r-magrittr), [r-lmtest](#r-lmtest), [r-zoo](#r-zoo), [r-timedate](#r-timedate), [r-rcpp](#r-rcpp) Description: Methods and tools for displaying and analysing univariate time series forecasts including exponential smoothing via state space models and automatic ARIMA modelling. 
--- r-foreign[¶](#r-foreign) === Homepage: * <https://cran.r-project.org/web/packages/foreign/index.htmlSpack package: * [r-foreign/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/r-foreign/package.py) Versions: 0.8-66 Build Dependencies: [r](#r) Link Dependencies: [r](#r) Run Dependencies: [r](#r) Description: Functions for reading and writing data stored by some versions of Epi Info, Minitab, S, SAS, SPSS, Stata, Systat and Weka and for reading and writing some dBase files. --- r-formatr[¶](#r-formatr) === Homepage: * <https://cran.r-project.org/package=formatRSpack package: * [r-formatr/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/r-formatr/package.py) Versions: 1.5, 1.4 Build Dependencies: [r](#r), [r-codetools](#r-codetools), [r-testit](#r-testit), [r-shiny](#r-shiny) Link Dependencies: [r](#r) Run Dependencies: [r](#r), [r-codetools](#r-codetools), [r-testit](#r-testit), [r-shiny](#r-shiny) Description: Provides a function tidy_source() to format R source code. Spaces and indent will be added to the code automatically, and comments will be preserved under certain conditions, so that R code will be more human- readable and tidy. There is also a Shiny app as a user interface in this package. --- r-formula[¶](#r-formula) === Homepage: * <https://cran.r-project.org/package=FormulaSpack package: * [r-formula/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/r-formula/package.py) Versions: 1.2-2, 1.2-1 Build Dependencies: [r](#r) Link Dependencies: [r](#r) Run Dependencies: [r](#r) Description: Infrastructure for extended formulas with multiple parts on the right- hand side and/or multiple responses on the left-hand side. 
--- r-fpc[¶](#r-fpc) === Homepage: * <http://www.homepages.ucl.ac.uk/~ucakcheSpack package: * [r-fpc/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/r-fpc/package.py) Versions: 2.1-10 Build Dependencies: [r](#r), [r-flexmix](#r-flexmix), [r-mvtnorm](#r-mvtnorm), [r-prabclus](#r-prabclus), [r-kernlab](#r-kernlab), [r-mclust](#r-mclust), [r-robustbase](#r-robustbase), [r-trimcluster](#r-trimcluster), [r-diptest](#r-diptest) Link Dependencies: [r](#r) Run Dependencies: [r](#r), [r-flexmix](#r-flexmix), [r-mvtnorm](#r-mvtnorm), [r-prabclus](#r-prabclus), [r-kernlab](#r-kernlab), [r-mclust](#r-mclust), [r-robustbase](#r-robustbase), [r-trimcluster](#r-trimcluster), [r-diptest](#r-diptest) Description: fpc: Flexible Procedures for Clustering --- r-fracdiff[¶](#r-fracdiff) === Homepage: * <https://cran.r-project.org/package=fracdiffSpack package: * [r-fracdiff/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/r-fracdiff/package.py) Versions: 1.4-2 Build Dependencies: [r](#r) Link Dependencies: [r](#r) Run Dependencies: [r](#r) Description: Maximum likelihood estimation of the parameters of a fractionally differenced ARIMA(p,d,q) model (Haslett and Raftery, Appl.Statistics, 1989). --- r-futile-logger[¶](#r-futile-logger) === Homepage: * <https://cran.rstudio.com/web/packages/futile.logger/index.htmlSpack package: * [r-futile-logger/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/r-futile-logger/package.py) Versions: 1.4.3 Build Dependencies: [r](#r), [r-lambda-r](#r-lambda-r), [r-futile-options](#r-futile-options) Link Dependencies: [r](#r) Run Dependencies: [r](#r), [r-lambda-r](#r-lambda-r), [r-futile-options](#r-futile-options) Description: Provides a simple yet powerful logging utility. Based loosely on log4j, futile.logger takes advantage of R idioms to make logging a convenient and easy to use replacement for cat and print statements. 
--- r-futile-options[¶](#r-futile-options) === Homepage: * <https://cran.rstudio.com/web/packages/futile.options/index.htmlSpack package: * [r-futile-options/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/r-futile-options/package.py) Versions: 1.0.0 Build Dependencies: [r](#r) Link Dependencies: [r](#r) Run Dependencies: [r](#r) Description: A scoped options management framework --- r-gbm[¶](#r-gbm) === Homepage: * <https://cran.rstudio.com/web/packages/gbm/index.htmlSpack package: * [r-gbm/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/r-gbm/package.py) Versions: 2.1.3 Build Dependencies: [r](#r), [r-survival](#r-survival), [r-lattice](#r-lattice) Link Dependencies: [r](#r) Run Dependencies: [r](#r), [r-survival](#r-survival), [r-lattice](#r-lattice) Description: Generalized Boosted Regression Models. --- r-gcrma[¶](#r-gcrma) === Homepage: * <https://bioconductor.org/packages/gcrma/Spack package: * [r-gcrma/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/r-gcrma/package.py) Versions: 2.48.0 Build Dependencies: [r](#r), [r-biocinstaller](#r-biocinstaller), [r-biobase](#r-biobase), [r-xvector](#r-xvector), [r-affyio](#r-affyio), [r-affy](#r-affy), [r-biostrings](#r-biostrings) Link Dependencies: [r](#r) Run Dependencies: [r](#r), [r-biocinstaller](#r-biocinstaller), [r-biobase](#r-biobase), [r-xvector](#r-xvector), [r-affyio](#r-affyio), [r-affy](#r-affy), [r-biostrings](#r-biostrings) Description: Background adjustment using sequence information --- r-gdata[¶](#r-gdata) === Homepage: * <https://cran.r-project.org/package=gdataSpack package: * [r-gdata/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/r-gdata/package.py) Versions: 2.18.0, 2.17.0 Build Dependencies: [r](#r), [r-gtools](#r-gtools) Link Dependencies: [r](#r) Run Dependencies: [r](#r), [r-gtools](#r-gtools) Description: Various R programming 
tools for data manipulation, including: - medical unit conversions ('ConvertMedUnits', 'MedUnits'), - combining objects ('bindData', 'cbindX', 'combine', 'interleave'), - character vector operations ('centerText', 'startsWith', 'trim'), - factor manipulation ('levels', 'reorder.factor', 'mapLevels'), - obtaining information about R objects ('object.size', 'elem', 'env', 'humanReadable', 'is.what', 'll', 'keep', 'ls.funs', 'Args','nPairs', 'nobs'), - manipulating MS- Excel formatted files ('read.xls', 'installXLSXsupport', 'sheetCount', 'xlsFormats'), - generating fixed-width format files ('write.fwf'), - extricating components of date & time objects ('getYear', 'getMonth', 'getDay', 'getHour', 'getMin', 'getSec'), - operations on columns of data frames ('matchcols', 'rename.vars'), - matrix operations ('unmatrix', 'upperTriangle', 'lowerTriangle'), - operations on vectors ('case', 'unknownToNA', 'duplicated2', 'trimSum'), - operations on data frames ('frameApply', 'wideByFactor'), - value of last evaluated expression ('ans'), and - wrapper for 'sample' that ensures consistent behavior for both scalar and vector arguments ('resample'). --- r-gdsfmt[¶](#r-gdsfmt) === Homepage: * <http://bioconductor.org/packages/gdsfmt/Spack package: * [r-gdsfmt/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/r-gdsfmt/package.py) Versions: 1.14.1 Build Dependencies: [r](#r) Link Dependencies: [r](#r) Run Dependencies: [r](#r) Description: This package provides a high-level R interface to CoreArray Genomic Data Structure (GDS) data files, which are portable across platforms with hierarchical structure to store multiple scalable array-oriented data sets with metadata information. It is suited for large-scale datasets, especially for data which are much larger than the available random- access memory. 
The gdsfmt package offers the efficient operations specifically designed for integers of less than 8 bits, since a diploid genotype, like single-nucleotide polymorphism (SNP), usually occupies fewer bits than a byte. Data compression and decompression are available with relatively efficient random access. It is also allowed to read a GDS file in parallel with multiple R processes supported by the package parallel. --- r-geiger[¶](#r-geiger) === Homepage: * <https://cran.r-project.org/package=geigerSpack package: * [r-geiger/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/r-geiger/package.py) Versions: 2.0.6 Build Dependencies: [r](#r), [r-coda](#r-coda), [r-mvtnorm](#r-mvtnorm), [r-subplex](#r-subplex), [r-digest](#r-digest), [r-ncbit](#r-ncbit), [r-desolve](#r-desolve), [r-colorspace](#r-colorspace), [r-ape](#r-ape), [r-mass](#r-mass), [r-rcpp](#r-rcpp) Link Dependencies: [r](#r) Run Dependencies: [r](#r), [r-coda](#r-coda), [r-mvtnorm](#r-mvtnorm), [r-subplex](#r-subplex), [r-digest](#r-digest), [r-ncbit](#r-ncbit), [r-desolve](#r-desolve), [r-colorspace](#r-colorspace), [r-ape](#r-ape), [r-mass](#r-mass), [r-rcpp](#r-rcpp) Description: Methods for fitting macroevolutionary models to phylogenetic trees. 
--- r-genefilter[¶](#r-genefilter) === Homepage: * <https://bioconductor.org/packages/genefilter/Spack package: * [r-genefilter/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/r-genefilter/package.py) Versions: 1.62.0, 1.58.1 Build Dependencies: [r](#r), [r-s4vectors](#r-s4vectors), [r-annotationdbi](#r-annotationdbi), [r-biobase](#r-biobase), [r-annotate](#r-annotate) Link Dependencies: [r](#r) Run Dependencies: [r](#r), [r-s4vectors](#r-s4vectors), [r-annotationdbi](#r-annotationdbi), [r-biobase](#r-biobase), [r-annotate](#r-annotate) Description: Some basic functions for filtering genes --- r-genelendatabase[¶](#r-genelendatabase) === Homepage: * <https://bioconductor.org/packages/release/data/experiment/html/geneLenDataBase.htmlSpack package: * [r-genelendatabase/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/r-genelendatabase/package.py) Versions: 1.16.0 Build Dependencies: [r](#r), [r-rtracklayer](#r-rtracklayer), [r-genomicfeatures](#r-genomicfeatures) Link Dependencies: [r](#r) Run Dependencies: [r](#r), [r-rtracklayer](#r-rtracklayer), [r-genomicfeatures](#r-genomicfeatures) Description: Length of mRNA transcripts for a number of genomes and gene ID formats, largely based on UCSC table browser --- r-geneplotter[¶](#r-geneplotter) === Homepage: * <https://www.bioconductor.org/packages/geneplotter/Spack package: * [r-geneplotter/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/r-geneplotter/package.py) Versions: 1.58.0, 1.54.0 Build Dependencies: [r](#r), [r-lattice](#r-lattice), [r-biobase](#r-biobase), [r-annotationdbi](#r-annotationdbi), [r-annotate](#r-annotate), [r-rcolorbrewer](#r-rcolorbrewer), [r-biocgenerics](#r-biocgenerics) Link Dependencies: [r](#r) Run Dependencies: [r](#r), [r-lattice](#r-lattice), [r-biobase](#r-biobase), [r-annotationdbi](#r-annotationdbi), [r-annotate](#r-annotate), [r-rcolorbrewer](#r-rcolorbrewer), 
[r-biocgenerics](#r-biocgenerics) Description: Functions for plotting genomic data. --- r-genie3[¶](#r-genie3) === Homepage: * <https://bioconductor.org/packages/GENIE3/> Spack package: * [r-genie3/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/r-genie3/package.py) Versions: 1.2.0 Build Dependencies: [r](#r), [r-reshape2](#r-reshape2) Link Dependencies: [r](#r) Run Dependencies: [r](#r), [r-reshape2](#r-reshape2) Description: This package implements the GENIE3 algorithm for inferring gene regulatory networks from expression data. --- r-genomeinfodb[¶](#r-genomeinfodb) === Homepage: * <https://bioconductor.org/packages/GenomeInfoDb/> Spack package: * [r-genomeinfodb/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/r-genomeinfodb/package.py) Versions: 1.16.0, 1.14.0, 1.12.3 Build Dependencies: [r](#r), [r-iranges](#r-iranges), [r-genomeinfodbdata](#r-genomeinfodbdata), [r-s4vectors](#r-s4vectors), [r-rcurl](#r-rcurl), [r-biocgenerics](#r-biocgenerics) Link Dependencies: [r](#r) Run Dependencies: [r](#r), [r-iranges](#r-iranges), [r-genomeinfodbdata](#r-genomeinfodbdata), [r-s4vectors](#r-s4vectors), [r-rcurl](#r-rcurl), [r-biocgenerics](#r-biocgenerics) Description: Contains data and functions that define and allow translation between different chromosome sequence naming conventions (e.g., "chr1" versus "1"), including a function that attempts to place sequence names in their natural, rather than lexicographic, order. --- r-genomeinfodbdata[¶](#r-genomeinfodbdata) === Homepage: * <https://bioconductor.org/packages/GenomeInfoDbData/> Spack package: * [r-genomeinfodbdata/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/r-genomeinfodbdata/package.py) Versions: 1.1.0, 0.99.0 Build Dependencies: [r](#r) Link Dependencies: [r](#r) Run Dependencies: [r](#r) Description: For mapping between NCBI taxonomy ID and species.
Used by functions in the GenomeInfoDb package. --- r-genomicalignments[¶](#r-genomicalignments) === Homepage: * <https://bioconductor.org/packages/GenomicAlignments/Spack package: * [r-genomicalignments/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/r-genomicalignments/package.py) Versions: 1.16.0, 1.14.2, 1.12.2 Build Dependencies: [r](#r), [r-iranges](#r-iranges), [r-genomicranges](#r-genomicranges), [r-biocparallel](#r-biocparallel), [r-s4vectors](#r-s4vectors), [r-biocgenerics](#r-biocgenerics), [r-rsamtools](#r-rsamtools), [r-summarizedexperiment](#r-summarizedexperiment), [r-genomeinfodb](#r-genomeinfodb), [r-biostrings](#r-biostrings) Link Dependencies: [r](#r) Run Dependencies: [r](#r), [r-iranges](#r-iranges), [r-genomicranges](#r-genomicranges), [r-biocparallel](#r-biocparallel), [r-s4vectors](#r-s4vectors), [r-biocgenerics](#r-biocgenerics), [r-rsamtools](#r-rsamtools), [r-summarizedexperiment](#r-summarizedexperiment), [r-genomeinfodb](#r-genomeinfodb), [r-biostrings](#r-biostrings) Description: Provides efficient containers for storing and manipulating short genomic alignments (typically obtained by aligning short reads to a reference genome). This includes read counting, computing the coverage, junction detection, and working with the nucleotide content of the alignments. 
--- r-genomicfeatures[¶](#r-genomicfeatures) === Homepage: * <http://bioconductor.org/packages/GenomicFeatures/Spack package: * [r-genomicfeatures/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/r-genomicfeatures/package.py) Versions: 1.32.2, 1.28.5 Build Dependencies: [r](#r), [r-dbi](#r-dbi), [r-s4vectors](#r-s4vectors), [r-rsqlite](#r-rsqlite), [r-xvector](#r-xvector), [r-biomart](#r-biomart), [r-rcurl](#r-rcurl), [r-biostrings](#r-biostrings), [r-iranges](#r-iranges), [r-rtracklayer](#r-rtracklayer), [r-genomicranges](#r-genomicranges), [r-annotationdbi](#r-annotationdbi), [r-biobase](#r-biobase), [r-genomeinfodb](#r-genomeinfodb), [r-biocgenerics](#r-biocgenerics) Link Dependencies: [r](#r) Run Dependencies: [r](#r), [r-dbi](#r-dbi), [r-s4vectors](#r-s4vectors), [r-rsqlite](#r-rsqlite), [r-xvector](#r-xvector), [r-biomart](#r-biomart), [r-rcurl](#r-rcurl), [r-biostrings](#r-biostrings), [r-iranges](#r-iranges), [r-rtracklayer](#r-rtracklayer), [r-genomicranges](#r-genomicranges), [r-annotationdbi](#r-annotationdbi), [r-biobase](#r-biobase), [r-genomeinfodb](#r-genomeinfodb), [r-biocgenerics](#r-biocgenerics) Description: A set of tools and methods for making and manipulating transcript centric annotations. With these tools the user can easily download the genomic locations of the transcripts, exons and cds of a given organism, from either the UCSC Genome Browser or a BioMart database (more sources will be supported in the future). This information is then stored in a local database that keeps track of the relationship between transcripts, exons, cds and genes. Flexible methods are provided for extracting the desired features in a convenient format. 
---

r-genomicranges[¶](#r-genomicranges)
===

Homepage:
* <https://bioconductor.org/packages/GenomicRanges/>

Spack package:
* [r-genomicranges/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/r-genomicranges/package.py)

Versions: 1.32.6, 1.30.3, 1.28.6

Build Dependencies: [r](#r), [r-iranges](#r-iranges), [r-s4vectors](#r-s4vectors), [r-xvector](#r-xvector), [r-genomeinfodb](#r-genomeinfodb), [r-biocgenerics](#r-biocgenerics)

Link Dependencies: [r](#r)

Run Dependencies: [r](#r), [r-iranges](#r-iranges), [r-s4vectors](#r-s4vectors), [r-xvector](#r-xvector), [r-genomeinfodb](#r-genomeinfodb), [r-biocgenerics](#r-biocgenerics)

Description: The ability to efficiently represent and manipulate genomic annotations and alignments is playing a central role when it comes to analyzing high-throughput sequencing data (a.k.a. NGS data). The GenomicRanges package defines general purpose containers for storing and manipulating genomic intervals and variables defined along a genome. More specialized containers for representing and manipulating short alignments against a reference genome, or a matrix-like summarization of an experiment, are defined in the GenomicAlignments and SummarizedExperiment packages respectively. Both packages build on top of the GenomicRanges infrastructure.
---

r-geomorph[¶](#r-geomorph)
===

Homepage:
* <https://cran.r-project.org/package=geomorph>

Spack package:
* [r-geomorph/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/r-geomorph/package.py)

Versions: 3.0.5

Build Dependencies: [r](#r), [r-matrix](#r-matrix), [r-geiger](#r-geiger), [r-jpeg](#r-jpeg), [r-rgl](#r-rgl), [r-ape](#r-ape)

Link Dependencies: [r](#r)

Run Dependencies: [r](#r), [r-matrix](#r-matrix), [r-geiger](#r-geiger), [r-jpeg](#r-jpeg), [r-rgl](#r-rgl), [r-ape](#r-ape)

Description: Read, manipulate, and digitize landmark data, generate shape variables via Procrustes analysis for points, curves and surfaces, perform shape analyses, and provide graphical depictions of shapes and patterns of shape variation.

---

r-geoquery[¶](#r-geoquery)
===

Homepage:
* <https://bioconductor.org/packages/GEOquery/>

Spack package:
* [r-geoquery/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/r-geoquery/package.py)

Versions: 2.42.0

Build Dependencies: [r](#r), [r-biobase](#r-biobase), [r-httr](#r-httr), [r-rcurl](#r-rcurl), [r-xml](#r-xml)

Link Dependencies: [r](#r)

Run Dependencies: [r](#r), [r-biobase](#r-biobase), [r-httr](#r-httr), [r-rcurl](#r-rcurl), [r-xml](#r-xml)

Description: The NCBI Gene Expression Omnibus (GEO) is a public repository of microarray data. Given the rich and varied nature of this resource, it is only natural to want to apply BioConductor tools to these data. GEOquery is the bridge between GEO and BioConductor.

---

r-geosphere[¶](#r-geosphere)
===

Homepage:
* <https://cran.r-project.org/package=geosphere>

Spack package:
* [r-geosphere/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/r-geosphere/package.py)

Versions: 1.5-5

Build Dependencies: [r](#r), [r-sp](#r-sp)

Link Dependencies: [r](#r)

Run Dependencies: [r](#r), [r-sp](#r-sp)

Description: Spherical trigonometry for geographic applications.
That is, compute distances and related measures for angular (longitude/latitude) locations.

---

r-getopt[¶](#r-getopt)
===

Homepage:
* <https://github.com/trevorld/getopt>

Spack package:
* [r-getopt/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/r-getopt/package.py)

Versions: 1.20.1

Build Dependencies: [r](#r)

Link Dependencies: [r](#r)

Run Dependencies: [r](#r)

Description: Package designed to be used with Rscript to write "#!" shebang scripts that accept short and long flags/options. Many users will prefer using instead the packages optparse or argparse which add extra features like automatically generated help option and usage, support for default values, positional argument support, etc.

---

r-getoptlong[¶](#r-getoptlong)
===

Homepage:
* <https://cran.rstudio.com/web/packages/GetoptLong/index.html>

Spack package:
* [r-getoptlong/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/r-getoptlong/package.py)

Versions: 0.1.6

Build Dependencies: [r](#r), [r-rjson](#r-rjson), [r-globaloptions](#r-globaloptions), [perl](#perl)

Link Dependencies: [r](#r), [perl](#perl)

Run Dependencies: [r](#r), [r-rjson](#r-rjson), [r-globaloptions](#r-globaloptions)

Description: This is yet another command-line argument parser which wraps the powerful Perl module Getopt::Long and with some adaptation for easier use in R. It also provides a simple way for variable interpolation in R.
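The r-geosphere entry above is about computing distances for angular (longitude/latitude) locations. As a rough illustration of the kind of calculation involved (this is the standard haversine formula, not geosphere's actual API), a great-circle distance sketch in Python:

```python
from math import radians, sin, cos, asin, sqrt


def haversine_km(lon1, lat1, lon2, lat2, radius_km=6371.0):
    """Great-circle distance in km between two (longitude, latitude) points.

    Uses the haversine formula on a spherical Earth with a mean radius of
    6371 km; geosphere itself offers this and more refined ellipsoid methods.
    """
    lon1, lat1, lon2, lat2 = map(radians, (lon1, lat1, lon2, lat2))
    dlon, dlat = lon2 - lon1, lat2 - lat1
    # Haversine of the central angle between the two points.
    h = sin(dlat / 2) ** 2 + cos(lat1) * cos(lat2) * sin(dlon / 2) ** 2
    return 2 * radius_km * asin(sqrt(h))
```

For example, the distance from the equator to a pole along a meridian comes out as a quarter of the sphere's circumference.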
---

r-ggally[¶](#r-ggally)
===

Homepage:
* <https://cran.r-project.org/package=GGally>

Spack package:
* [r-ggally/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/r-ggally/package.py)

Versions: 1.3.2

Build Dependencies: [r](#r), [r-progress](#r-progress), [r-ggplot2](#r-ggplot2), [r-plyr](#r-plyr), [r-reshape](#r-reshape), [r-rcolorbrewer](#r-rcolorbrewer), [r-gtable](#r-gtable)

Link Dependencies: [r](#r)

Run Dependencies: [r](#r), [r-progress](#r-progress), [r-ggplot2](#r-ggplot2), [r-plyr](#r-plyr), [r-reshape](#r-reshape), [r-rcolorbrewer](#r-rcolorbrewer), [r-gtable](#r-gtable)

Description: The R package 'ggplot2' is a plotting system based on the grammar of graphics. 'GGally' extends 'ggplot2' by adding several functions to reduce the complexity of combining geometric objects with transformed data. Some of these functions include a pairwise plot matrix, a two group pairwise plot matrix, a parallel coordinates plot, a survival plot, and several functions to plot networks.
---

r-ggbio[¶](#r-ggbio)
===

Homepage:
* <http://bioconductor.org/packages/ggbio/>

Spack package:
* [r-ggbio/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/r-ggbio/package.py)

Versions: 1.24.1

Build Dependencies: [r-hmisc](#r-hmisc), [r-ensembldb](#r-ensembldb), [r-reshape2](#r-reshape2), [r-gtable](#r-gtable), [r-genomicfeatures](#r-genomicfeatures), [r-organismdbi](#r-organismdbi), [r-genomicranges](#r-genomicranges), [r-biobase](#r-biobase), [r-annotationdbi](#r-annotationdbi), [r-genomicalignments](#r-genomicalignments), [r-biostrings](#r-biostrings), [r-gridextra](#r-gridextra), [r-genomeinfodb](#r-genomeinfodb), [r-bsgenome](#r-bsgenome), [r-scales](#r-scales), [r-s4vectors](#r-s4vectors), [r-variantannotation](#r-variantannotation), [r-summarizedexperiment](#r-summarizedexperiment), [r-ggally](#r-ggally), [r](#r), [r-rtracklayer](#r-rtracklayer), [r-iranges](#r-iranges), [r-biovizbase](#r-biovizbase), [r-rsamtools](#r-rsamtools), [r-annotationfilter](#r-annotationfilter)

Link Dependencies: [r](#r)

Run Dependencies: [r-hmisc](#r-hmisc), [r-ensembldb](#r-ensembldb), [r-reshape2](#r-reshape2), [r-gtable](#r-gtable), [r-genomicfeatures](#r-genomicfeatures), [r-organismdbi](#r-organismdbi), [r-genomicranges](#r-genomicranges), [r-biobase](#r-biobase), [r-annotationdbi](#r-annotationdbi), [r-genomicalignments](#r-genomicalignments), [r-biostrings](#r-biostrings), [r-gridextra](#r-gridextra), [r-genomeinfodb](#r-genomeinfodb), [r-bsgenome](#r-bsgenome), [r-scales](#r-scales), [r-s4vectors](#r-s4vectors), [r-variantannotation](#r-variantannotation), [r-summarizedexperiment](#r-summarizedexperiment), [r-ggally](#r-ggally), [r](#r), [r-rtracklayer](#r-rtracklayer), [r-iranges](#r-iranges), [r-biovizbase](#r-biovizbase), [r-rsamtools](#r-rsamtools), [r-annotationfilter](#r-annotationfilter)

Description: The ggbio package extends and specializes the grammar of graphics for biological data.
The graphics are designed to answer common scientific questions, in particular those often asked of high throughput genomics data. All core Bioconductor data structures are supported, where appropriate. The package supports detailed views of particular genomic regions, as well as genome-wide overviews. Supported overviews include ideograms and grand linear views. High-level plots include sequence fragment length, edge-linked interval to data view, mismatch pileup, and several splicing summaries.

---

r-ggdendro[¶](#r-ggdendro)
===

Homepage:
* <https://cran.r-project.org/package=ggdendro>

Spack package:
* [r-ggdendro/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/r-ggdendro/package.py)

Versions: 0.1-20

Build Dependencies: [r](#r), [r-ggplot2](#r-ggplot2), [r-mass](#r-mass)

Link Dependencies: [r](#r)

Run Dependencies: [r](#r), [r-ggplot2](#r-ggplot2), [r-mass](#r-mass)

Description: This is a set of tools for dendrograms and tree plots using 'ggplot2'. The 'ggplot2' philosophy is to clearly separate data from the presentation. Unfortunately the plot method for dendrograms plots directly to a plot device without exposing the data. The 'ggdendro' package resolves this by making available functions that extract the dendrogram plot data. The package provides implementations for tree, rpart, as well as diana and agnes cluster diagrams.

---

r-ggjoy[¶](#r-ggjoy)
===

Homepage:
* <https://cran.r-project.org/web/packages/ggjoy/index.html>

Spack package:
* [r-ggjoy/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/r-ggjoy/package.py)

Versions: 0.4.0, 0.3.0, 0.2.0

Build Dependencies: [r](#r), [r-ggridges](#r-ggridges), [r-ggplot2](#r-ggplot2)

Link Dependencies: [r](#r)

Run Dependencies: [r](#r), [r-ggridges](#r-ggridges), [r-ggplot2](#r-ggplot2)

Description: Joyplots provide a convenient way of visualizing changes in distributions over time or space.
---

r-ggmap[¶](#r-ggmap)
===

Homepage:
* <https://github.com/dkahle/ggmap>

Spack package:
* [r-ggmap/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/r-ggmap/package.py)

Versions: 2.6.1

Build Dependencies: [r](#r), [r-reshape2](#r-reshape2), [r-geosphere](#r-geosphere), [r-ggplot2](#r-ggplot2), [r-scales](#r-scales), [r-digest](#r-digest), [r-plyr](#r-plyr), [r-jpeg](#r-jpeg), [r-rjson](#r-rjson), [r-proto](#r-proto), [r-png](#r-png), [r-mapproj](#r-mapproj), [r-rgooglemaps](#r-rgooglemaps)

Link Dependencies: [r](#r)

Run Dependencies: [r](#r), [r-reshape2](#r-reshape2), [r-geosphere](#r-geosphere), [r-ggplot2](#r-ggplot2), [r-scales](#r-scales), [r-digest](#r-digest), [r-plyr](#r-plyr), [r-jpeg](#r-jpeg), [r-rjson](#r-rjson), [r-proto](#r-proto), [r-png](#r-png), [r-mapproj](#r-mapproj), [r-rgooglemaps](#r-rgooglemaps)

Description: A collection of functions to visualize spatial data and models on top of static maps from various online sources (e.g. Google Maps and Stamen Maps). It includes tools common to those tasks, including functions for geolocation and routing.

---

r-ggplot2[¶](#r-ggplot2)
===

Homepage:
* <http://ggplot2.org/>

Spack package:
* [r-ggplot2/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/r-ggplot2/package.py)

Versions: 2.2.1, 2.1.0

Build Dependencies: [r](#r), [r-tibble](#r-tibble), [r-reshape2](#r-reshape2), [r-lazyeval](#r-lazyeval), [r-gtable](#r-gtable), [r-scales](#r-scales), [r-digest](#r-digest), [r-plyr](#r-plyr), [r-mass](#r-mass)

Link Dependencies: [r](#r)

Run Dependencies: [r](#r), [r-tibble](#r-tibble), [r-reshape2](#r-reshape2), [r-lazyeval](#r-lazyeval), [r-gtable](#r-gtable), [r-scales](#r-scales), [r-digest](#r-digest), [r-plyr](#r-plyr), [r-mass](#r-mass)

Description: An implementation of the grammar of graphics in R.
It combines the advantages of both base and lattice graphics: conditioning and shared axes are handled automatically, and you can still build up a plot step by step from multiple data sources. It also implements a sophisticated multidimensional conditioning system and a consistent interface to map data to aesthetic attributes. See http://ggplot2.org for more information, documentation and examples.

---

r-ggpubr[¶](#r-ggpubr)
===

Homepage:
* <http://www.sthda.com/english/rpkgs/ggpubr>

Spack package:
* [r-ggpubr/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/r-ggpubr/package.py)

Versions: 0.1.2

Build Dependencies: [r](#r), [r-ggsci](#r-ggsci), [r-ggrepel](#r-ggrepel), [r-ggplot2](#r-ggplot2), [r-plyr](#r-plyr)

Link Dependencies: [r](#r)

Run Dependencies: [r](#r), [r-ggsci](#r-ggsci), [r-ggrepel](#r-ggrepel), [r-ggplot2](#r-ggplot2), [r-plyr](#r-plyr)

Description: ggpubr: 'ggplot2' Based Publication Ready Plots

---

r-ggrepel[¶](#r-ggrepel)
===

Homepage:
* <http://github.com/slowkow/ggrepel>

Spack package:
* [r-ggrepel/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/r-ggrepel/package.py)

Versions: 0.6.5

Build Dependencies: [r](#r), [r-rcpp](#r-rcpp), [r-ggplot2](#r-ggplot2), [r-scales](#r-scales)

Link Dependencies: [r](#r)

Run Dependencies: [r](#r), [r-rcpp](#r-rcpp), [r-ggplot2](#r-ggplot2), [r-scales](#r-scales)

Description: ggrepel: Repulsive Text and Label Geoms for 'ggplot2'

---

r-ggridges[¶](#r-ggridges)
===

Homepage:
* <https://cran.r-project.org/web/packages/ggridges/index.html>

Spack package:
* [r-ggridges/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/r-ggridges/package.py)

Versions: 0.4.1, 0.4.0

Build Dependencies: [r](#r), [r-ggplot2](#r-ggplot2)

Link Dependencies: [r](#r)

Run Dependencies: [r](#r), [r-ggplot2](#r-ggplot2)

Description: Ridgeline plots provide a convenient way of visualizing changes in distributions over time or space.
---

r-ggsci[¶](#r-ggsci)
===

Homepage:
* <https://github.com/road2stat/ggsci>

Spack package:
* [r-ggsci/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/r-ggsci/package.py)

Versions: 2.4

Build Dependencies: [r](#r), [r-ggplot2](#r-ggplot2), [r-scales](#r-scales)

Link Dependencies: [r](#r)

Run Dependencies: [r](#r), [r-ggplot2](#r-ggplot2), [r-scales](#r-scales)

Description: ggsci: Scientific Journal and Sci-Fi Themed Color Palettes for 'ggplot2'

---

r-ggvis[¶](#r-ggvis)
===

Homepage:
* <http://ggvis.rstudio.com/>

Spack package:
* [r-ggvis/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/r-ggvis/package.py)

Versions: 0.4.3, 0.4.2

Build Dependencies: [r](#r), [r-assertthat](#r-assertthat), [r-jsonlite](#r-jsonlite), [r-shiny](#r-shiny), [r-magrittr](#r-magrittr), [r-dplyr](#r-dplyr), [r-lazyeval](#r-lazyeval), [r-htmltools](#r-htmltools)

Link Dependencies: [r](#r)

Run Dependencies: [r](#r), [r-assertthat](#r-assertthat), [r-jsonlite](#r-jsonlite), [r-shiny](#r-shiny), [r-magrittr](#r-magrittr), [r-dplyr](#r-dplyr), [r-lazyeval](#r-lazyeval), [r-htmltools](#r-htmltools)

Description: An implementation of an interactive grammar of graphics, taking the best parts of 'ggplot2', combining them with the reactive framework from 'shiny' and web graphics from 'vega'.
---

r-gistr[¶](#r-gistr)
===

Homepage:
* <https://github.com/ropensci/gistr>

Spack package:
* [r-gistr/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/r-gistr/package.py)

Versions: 0.3.6

Build Dependencies: [r](#r), [r-jsonlite](#r-jsonlite), [r-httr](#r-httr), [r-knitr](#r-knitr), [r-assertthat](#r-assertthat), [r-dplyr](#r-dplyr), [r-magrittr](#r-magrittr), [r-rmarkdown](#r-rmarkdown)

Link Dependencies: [r](#r)

Run Dependencies: [r](#r), [r-jsonlite](#r-jsonlite), [r-httr](#r-httr), [r-knitr](#r-knitr), [r-assertthat](#r-assertthat), [r-dplyr](#r-dplyr), [r-magrittr](#r-magrittr), [r-rmarkdown](#r-rmarkdown)

Description: Work with 'GitHub' 'gists' from 'R'. This package allows the user to create new 'gists', update 'gists' with new files, rename files, delete files, get and delete 'gists', star and 'un-star' 'gists', fork 'gists', open a 'gist' in your default browser, get embed code for a 'gist', list 'gist' 'commits', and get rate limit information when 'authenticated'. Some requests require authentication and some do not.

---

r-git2r[¶](#r-git2r)
===

Homepage:
* <https://github.com/ropensci/git2r>

Spack package:
* [r-git2r/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/r-git2r/package.py)

Versions: 0.18.0, 0.15.0

Build Dependencies: [r](#r), [zlib](#zlib), [openssl](#openssl)

Link Dependencies: [r](#r), [zlib](#zlib), [openssl](#openssl)

Run Dependencies: [r](#r)

Description: Interface to the 'libgit2' library, which is a pure C implementation of the 'Git' core methods. Provides access to 'Git' repositories to extract data and run some basic 'Git' commands.
---

r-glimma[¶](#r-glimma)
===

Homepage:
* <https://bioconductor.org/packages/release/bioc/html/Glimma.html>

Spack package:
* [r-glimma/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/r-glimma/package.py)

Versions: 1.8.2

Build Dependencies: [r](#r), [r-jsonlite](#r-jsonlite), [r-s4vectors](#r-s4vectors), [r-edger](#r-edger)

Link Dependencies: [r](#r)

Run Dependencies: [r](#r), [r-jsonlite](#r-jsonlite), [r-s4vectors](#r-s4vectors), [r-edger](#r-edger)

Description: This package generates interactive visualisations for analysis of RNA-sequencing data using output from limma, edgeR or DESeq2 packages in an HTML page. The interactions are built on top of the popular static representations of analysis results in order to provide additional information.

---

r-glmnet[¶](#r-glmnet)
===

Homepage:
* <https://cran.rstudio.com/web/packages/glmnet/index.html>

Spack package:
* [r-glmnet/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/r-glmnet/package.py)

Versions: 2.0-13, 2.0-5

Build Dependencies: [r](#r), [r-matrix](#r-matrix), [r-foreach](#r-foreach)

Link Dependencies: [r](#r)

Run Dependencies: [r](#r), [r-matrix](#r-matrix), [r-foreach](#r-foreach)

Description: Extremely efficient procedures for fitting the entire lasso or elastic-net regularization path for linear regression, logistic and multinomial regression models, Poisson regression and the Cox model. Two recent additions are the multiple-response Gaussian, and the grouped multinomial. The algorithm uses cyclical coordinate descent in a path-wise fashion, as described in the paper linked to via the URL below.
---

r-globaloptions[¶](#r-globaloptions)
===

Homepage:
* <https://cran.r-project.org/package=GlobalOptions>

Spack package:
* [r-globaloptions/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/r-globaloptions/package.py)

Versions: 0.0.12

Build Dependencies: [r](#r), [r-knitr](#r-knitr), [r-markdown](#r-markdown), [r-testthat](#r-testthat)

Link Dependencies: [r](#r)

Run Dependencies: [r](#r), [r-knitr](#r-knitr), [r-markdown](#r-markdown), [r-testthat](#r-testthat)

Description: Provides more control over option values, such as validation and filtering of the values, and making options invisible or private.

---

r-glue[¶](#r-glue)
===

Homepage:
* <https://github.com/tidyverse/glue>

Spack package:
* [r-glue/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/r-glue/package.py)

Versions: 1.2.0

Build Dependencies: [r](#r)

Link Dependencies: [r](#r)

Run Dependencies: [r](#r)

Description: An implementation of interpreted string literals, inspired by Python's Literal String Interpolation <https://www.python.org/dev/peps/pep-0498/> and Docstrings <https://www.python.org/dev/peps/pep-0257/> and Julia's Triple-Quoted String Literals <https://docs.julialang.org/en/stable/manual/strings/#triple-quoted-string-literals>.

---

r-gmodels[¶](#r-gmodels)
===

Homepage:
* <http://www.sf.net/projects/r-gregmisc>

Spack package:
* [r-gmodels/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/r-gmodels/package.py)

Versions: 2.16.2

Build Dependencies: [r](#r), [r-gdata](#r-gdata)

Link Dependencies: [r](#r)

Run Dependencies: [r](#r), [r-gdata](#r-gdata)

Description: Various R programming tools for model fitting.
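Each "Spack package" link in these entries points at a `package.py` recipe that encodes exactly the metadata shown here (homepage, versions, dependency types). As a rough, non-runnable sketch of how the r-glue entry above maps onto Spack's recipe DSL (the class layout follows Spack's RPackage conventions, but the URL and checksum are placeholders, not the real recipe):

```python
# Hypothetical sketch of a Spack recipe shaped like the r-glue entry above.
# Not the actual r-glue/package.py; the sha256 is a placeholder.
from spack import *


class RGlue(RPackage):
    """An implementation of interpreted string literals for R."""

    homepage = "https://github.com/tidyverse/glue"
    url = "https://cran.r-project.org/src/contrib/glue_1.2.0.tar.gz"

    version('1.2.0', sha256='<placeholder>')

    # r appears as a build, link, and run dependency in the listing above.
    depends_on('r', type=('build', 'link', 'run'))
```

The "Build/Link/Run Dependencies" fields in each entry correspond to the `type=` argument of `depends_on` in the recipe.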
---

r-gmp[¶](#r-gmp)
===

Homepage:
* <http://mulcyber.toulouse.inra.fr/projects/gmp>

Spack package:
* [r-gmp/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/r-gmp/package.py)

Versions: 0.5-13.1

Build Dependencies: [r](#r), [gmp](#gmp)

Link Dependencies: [r](#r), [gmp](#gmp)

Run Dependencies: [r](#r)

Description: Multiple Precision Arithmetic (big integers and rationals, prime number tests, matrix computation), "arithmetic without limitations" using the C library GMP (GNU Multiple Precision Arithmetic).

---

r-go-db[¶](#r-go-db)
===

Homepage:
* <https://www.bioconductor.org/packages/GO.db/>

Spack package:
* [r-go-db/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/r-go-db/package.py)

Versions: 3.4.1

Build Dependencies: [r](#r), [r-annotationdbi](#r-annotationdbi)

Link Dependencies: [r](#r)

Run Dependencies: [r](#r), [r-annotationdbi](#r-annotationdbi)

Description: A set of annotation maps describing the entire Gene Ontology assembled using data from GO.

---

r-googlevis[¶](#r-googlevis)
===

Homepage:
* <https://github.com/mages/googleVis#googlevis>

Spack package:
* [r-googlevis/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/r-googlevis/package.py)

Versions: 0.6.0

Build Dependencies: [r](#r), [r-jsonlite](#r-jsonlite)

Link Dependencies: [r](#r)

Run Dependencies: [r](#r), [r-jsonlite](#r-jsonlite)

Description: R interface to Google Charts API, allowing users to create interactive charts based on data frames. Charts are displayed locally via the R HTTP help server. A modern browser with an Internet connection is required and for some charts a Flash player. The data remains local and is not uploaded to Google.
---

r-goplot[¶](#r-goplot)
===

Homepage:
* <https://github.com/wencke/wencke.github.io/issues>

Spack package:
* [r-goplot/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/r-goplot/package.py)

Versions: 1.0.2

Build Dependencies: [r](#r), [r-ggdendro](#r-ggdendro), [r-rcolorbrewer](#r-rcolorbrewer), [r-gridextra](#r-gridextra), [r-ggplot2](#r-ggplot2)

Link Dependencies: [r](#r)

Run Dependencies: [r](#r), [r-ggdendro](#r-ggdendro), [r-rcolorbrewer](#r-rcolorbrewer), [r-gridextra](#r-gridextra), [r-ggplot2](#r-ggplot2)

Description: Implementation of multilayered visualizations for enhanced graphical representation of functional analysis data. It combines and integrates omics data derived from expression and functional annotation enrichment analyses. Its plotting functions have been developed with a hierarchical structure in mind: starting from a general overview to identify the most enriched categories (modified bar plot, bubble plot) to a more detailed one displaying different types of relevant information for the molecules in a given set of categories (circle plot, chord plot, cluster plot, Venn diagram, heatmap).

---

r-gosemsim[¶](#r-gosemsim)
===

Homepage:
* <https://www.bioconductor.org/packages/GOSemSim/>

Spack package:
* [r-gosemsim/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/r-gosemsim/package.py)

Versions: 2.2.0

Build Dependencies: [r](#r), [r-annotationdbi](#r-annotationdbi), [r-go-db](#r-go-db), [r-rcpp](#r-rcpp)

Link Dependencies: [r](#r)

Run Dependencies: [r](#r), [r-annotationdbi](#r-annotationdbi), [r-go-db](#r-go-db), [r-rcpp](#r-rcpp)

Description: The semantic comparisons of Gene Ontology (GO) annotations provide quantitative ways to compute similarities between genes and gene groups, and have become an important basis for many bioinformatics analysis approaches.
GOSemSim is an R package for semantic similarity computation among GO terms, sets of GO terms, gene products and gene clusters. GOSemSim implemented five methods proposed by Resnik, Schlicker, Jiang, Lin and Wang respectively.

---

r-goseq[¶](#r-goseq)
===

Homepage:
* <https://bioconductor.org/packages/release/bioc/html/goseq.html>

Spack package:
* [r-goseq/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/r-goseq/package.py)

Versions: 1.32.0

Build Dependencies: [r](#r), [r-biasedurn](#r-biasedurn), [r-go-db](#r-go-db), [r-mgcv](#r-mgcv), [r-annotationdbi](#r-annotationdbi), [r-genelendatabase](#r-genelendatabase), [r-biocgenerics](#r-biocgenerics)

Link Dependencies: [r](#r)

Run Dependencies: [r](#r), [r-biasedurn](#r-biasedurn), [r-go-db](#r-go-db), [r-mgcv](#r-mgcv), [r-annotationdbi](#r-annotationdbi), [r-genelendatabase](#r-genelendatabase), [r-biocgenerics](#r-biocgenerics)

Description: Detects Gene Ontology and/or other user defined categories which are over/under represented in RNA-seq data.

---

r-gostats[¶](#r-gostats)
===

Homepage:
* <https://www.bioconductor.org/packages/GOstats/>

Spack package:
* [r-gostats/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/r-gostats/package.py)

Versions: 2.42.0

Build Dependencies: [r](#r), [r-category](#r-category), [r-go-db](#r-go-db), [r-annotate](#r-annotate), [r-annotationdbi](#r-annotationdbi), [r-rbgl](#r-rbgl), [r-graph](#r-graph), [r-annotationforge](#r-annotationforge), [r-biobase](#r-biobase)

Link Dependencies: [r](#r)

Run Dependencies: [r](#r), [r-category](#r-category), [r-go-db](#r-go-db), [r-annotate](#r-annotate), [r-annotationdbi](#r-annotationdbi), [r-rbgl](#r-rbgl), [r-graph](#r-graph), [r-annotationforge](#r-annotationforge), [r-biobase](#r-biobase)

Description: A set of tools for interacting with GO and microarray data, including a variety of basic manipulation tools for graphs, hypothesis testing and other simple calculations.
---

r-gplots[¶](#r-gplots)
===

Homepage:
* <https://cran.r-project.org/package=gplots>

Spack package:
* [r-gplots/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/r-gplots/package.py)

Versions: 3.0.1

Build Dependencies: [r](#r), [r-gdata](#r-gdata), [r-kernsmooth](#r-kernsmooth), [r-gtools](#r-gtools), [r-catools](#r-catools)

Link Dependencies: [r](#r)

Run Dependencies: [r](#r), [r-gdata](#r-gdata), [r-kernsmooth](#r-kernsmooth), [r-gtools](#r-gtools), [r-catools](#r-catools)

Description: Various R Programming Tools for Plotting Data.

---

r-graph[¶](#r-graph)
===

Homepage:
* <https://www.bioconductor.org/packages/graph/>

Spack package:
* [r-graph/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/r-graph/package.py)

Versions: 1.54.0

Build Dependencies: [r](#r), [r-biocgenerics](#r-biocgenerics)

Link Dependencies: [r](#r)

Run Dependencies: [r](#r), [r-biocgenerics](#r-biocgenerics)

Description: A package that implements some simple graph handling capabilities.

---

r-gridbase[¶](#r-gridbase)
===

Homepage:
* <https://cran.r-project.org/web/packages/gridBase/index.html>

Spack package:
* [r-gridbase/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/r-gridbase/package.py)

Versions: 0.4-7

Build Dependencies: [r](#r)

Link Dependencies: [r](#r)

Run Dependencies: [r](#r)

Description: Integration of base and grid graphics.

---

r-gridextra[¶](#r-gridextra)
===

Homepage:
* <https://cran.r-project.org/package=gridExtra>

Spack package:
* [r-gridextra/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/r-gridextra/package.py)

Versions: 2.3, 2.2.1

Build Dependencies: [r](#r), [r-gtable](#r-gtable)

Link Dependencies: [r](#r)

Run Dependencies: [r](#r), [r-gtable](#r-gtable)

Description: Provides a number of user-level functions to work with "grid" graphics, notably to arrange multiple grid-based plots on a page, and draw tables.
---

r-gseabase[¶](#r-gseabase)
===

Homepage:
* <https://www.bioconductor.org/packages/GSEABase/>

Spack package:
* [r-gseabase/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/r-gseabase/package.py)

Versions: 1.38.2

Build Dependencies: [r](#r), [r-graph](#r-graph), [r-biocgenerics](#r-biocgenerics), [r-biobase](#r-biobase), [r-annotationdbi](#r-annotationdbi), [r-annotate](#r-annotate), [r-xml](#r-xml)

Link Dependencies: [r](#r)

Run Dependencies: [r](#r), [r-graph](#r-graph), [r-biocgenerics](#r-biocgenerics), [r-biobase](#r-biobase), [r-annotationdbi](#r-annotationdbi), [r-annotate](#r-annotate), [r-xml](#r-xml)

Description: This package provides classes and methods to support Gene Set Enrichment Analysis (GSEA).

---

r-gss[¶](#r-gss)
===

Homepage:
* <https://cran.r-project.org/package=gss>

Spack package:
* [r-gss/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/r-gss/package.py)

Versions: 2.1-7

Build Dependencies: [r](#r)

Link Dependencies: [r](#r)

Run Dependencies: [r](#r)

Description: A comprehensive package for structural multivariate function estimation using smoothing splines.

---

r-gsubfn[¶](#r-gsubfn)
===

Homepage:
* <https://cran.r-project.org/package=gsubfn>

Spack package:
* [r-gsubfn/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/r-gsubfn/package.py)

Versions: 0.6-6

Build Dependencies: [r](#r), [r-proto](#r-proto)

Link Dependencies: [r](#r)

Run Dependencies: [r](#r), [r-proto](#r-proto)

Description: gsubfn is like gsub but can take a replacement function or certain other objects instead of the replacement string. Matches and back references are input to the replacement function and replaced by the function output. gsubfn can be used to split strings based on content rather than delimiters and for quasi-perl-style string interpolation.
The package also has facilities for translating formulas to functions and allowing such formulas in function calls instead of functions. This can be used with R functions such as apply, sapply, lapply, optim, integrate, xyplot, Filter and any other function that expects another function as an input argument or functions like cat or sql calls that may involve strings where substitution is desirable.

---

r-gtable[¶](#r-gtable)
===

Homepage:
* <https://cran.r-project.org/web/packages/gtable/index.html>

Spack package:
* [r-gtable/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/r-gtable/package.py)

Versions: 0.2.0

Build Dependencies: [r](#r)

Link Dependencies: [r](#r)

Run Dependencies: [r](#r)

Description: Tools to make it easier to work with "tables" of 'grobs'.

---

r-gtools[¶](#r-gtools)
===

Homepage:
* <https://cran.r-project.org/package=gtools>

Spack package:
* [r-gtools/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/r-gtools/package.py)

Versions: 3.5.0

Build Dependencies: [r](#r)

Link Dependencies: [r](#r)

Run Dependencies: [r](#r)

Description: Functions to assist in R programming, including: - assist in developing, updating, and maintaining R and R packages ('ask', 'checkRVersion', 'getDependencies', 'keywords', 'scat'), - calculate the logit and inverse logit transformations ('logit', 'inv.logit'), - test if a value is missing, empty or contains only NA and NULL values ('invalid'), - manipulate R's .Last function ('addLast'), - define macros ('defmacro'), - detect odd and even integers ('odd', 'even'), - convert strings containing non-ASCII characters (like single quotes) to plain ASCII ('ASCIIfy'), - perform a binary search ('binsearch'), - sort strings containing both numeric and character components ('mixedsort'), - create a factor variable from the quantiles of a continuous variable ('quantcut'), - enumerate permutations and combinations ('combinations', 'permutation'), - calculate
and convert between fold-change and log-ratio ('foldchange', 'logratio2foldchange', 'foldchange2logratio'), - calculate probabilities and generate random numbers from Dirichlet distributions ('rdirichlet', 'ddirichlet'), - apply a function over adjacent subsets of a vector ('running'), - modify the TCP_NODELAY ('de-Nagle') flag for socket objects, - efficient 'rbind' of data frames, even if the column names don't match ('smartbind'), - generate significance stars from p-values ('stars.pval'), - convert characters to/from ASCII codes.

---

r-gtrellis[¶](#r-gtrellis)
===

Homepage:
* <https://bioconductor.org/packages/gtrellis/>

Spack package:
* [r-gtrellis/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/r-gtrellis/package.py)

Versions: 1.8.0

Build Dependencies: [r](#r), [r-iranges](#r-iranges), [r-circlize](#r-circlize), [r-genomicranges](#r-genomicranges), [r-getoptlong](#r-getoptlong)

Link Dependencies: [r](#r)

Run Dependencies: [r](#r), [r-iranges](#r-iranges), [r-circlize](#r-circlize), [r-genomicranges](#r-genomicranges), [r-getoptlong](#r-getoptlong)

Description: Genome level Trellis graph visualizes genomic data conditioned by genomic categories (e.g. chromosomes). For each genomic category, multiple dimensional data which are represented as tracks describe different features from different aspects. This package provides high flexibility to arrange genomic categories and to add self-defined graphics in the plot.
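The 'mixedsort' behaviour named in the r-gtools entry above (sorting strings that contain both numeric and character components, so that "chr2" sorts before "chr10") is a natural-order sort. A minimal sketch of the same idea in Python, not gtools' actual implementation:

```python
import re


def mixedsort(strings):
    """Sort strings so embedded digit runs compare numerically.

    Each string is split into alternating digit and non-digit runs; digit
    runs compare as integers, non-digit runs case-insensitively. Tuples
    tagged (0, ...) / (1, ...) keep int/str comparisons type-safe.
    """
    def key(s):
        return [(0, int(tok)) if tok.isdigit() else (1, tok.lower())
                for tok in re.findall(r"\d+|\D+", s)]
    return sorted(strings, key=key)
```

With a plain lexicographic sort, "chr10" would land between "chr1" and "chr2"; here the numeric runs are compared as numbers instead.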
---

r-gviz[¶](#r-gviz)
===

Homepage:
* <http://bioconductor.org/packages/Gviz/>

Spack package:
* [r-gviz/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/r-gviz/package.py)

Versions: 1.20.0
Build Dependencies: [r-digest](#r-digest), [r-xvector](#r-xvector), [r-biomart](#r-biomart), [r-genomicfeatures](#r-genomicfeatures), [r-genomicranges](#r-genomicranges), [r-biobase](#r-biobase), [r-annotationdbi](#r-annotationdbi), [r-genomicalignments](#r-genomicalignments), [r](#r), [r-genomeinfodb](#r-genomeinfodb), [r-iranges](#r-iranges), [r-latticeextra](#r-latticeextra), [r-matrixstats](#r-matrixstats), [r-lattice](#r-lattice), [r-rcolorbrewer](#r-rcolorbrewer), [r-biostrings](#r-biostrings), [r-bsgenome](#r-bsgenome), [r-s4vectors](#r-s4vectors), [r-rtracklayer](#r-rtracklayer), [r-biovizbase](#r-biovizbase), [r-rsamtools](#r-rsamtools), [r-biocgenerics](#r-biocgenerics)
Link Dependencies: [r](#r)
Run Dependencies: [r-digest](#r-digest), [r-xvector](#r-xvector), [r-biomart](#r-biomart), [r-genomicfeatures](#r-genomicfeatures), [r-genomicranges](#r-genomicranges), [r-biobase](#r-biobase), [r-annotationdbi](#r-annotationdbi), [r-genomicalignments](#r-genomicalignments), [r](#r), [r-genomeinfodb](#r-genomeinfodb), [r-iranges](#r-iranges), [r-latticeextra](#r-latticeextra), [r-matrixstats](#r-matrixstats), [r-lattice](#r-lattice), [r-rcolorbrewer](#r-rcolorbrewer), [r-biostrings](#r-biostrings), [r-bsgenome](#r-bsgenome), [r-s4vectors](#r-s4vectors), [r-rtracklayer](#r-rtracklayer), [r-biovizbase](#r-biovizbase), [r-rsamtools](#r-rsamtools), [r-biocgenerics](#r-biocgenerics)

Description: Genomic data analyses require integrated visualization of known genomic information and new experimental data. Gviz uses the biomaRt and the rtracklayer packages to perform live annotation queries to Ensembl and UCSC and translates this to e.g. gene/transcript structures in viewports of the grid graphics package. This results in genomic information plotted together with your data.

---

r-haven[¶](#r-haven)
===

Homepage:
* <http://haven.tidyverse.org/>

Spack package:
* [r-haven/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/r-haven/package.py)

Versions: 1.1.0
Build Dependencies: [r](#r), [r-readr](#r-readr), [r-tibble](#r-tibble), [r-hms](#r-hms), [r-forcats](#r-forcats), [r-rcpp](#r-rcpp)
Link Dependencies: [r](#r)
Run Dependencies: [r](#r), [r-readr](#r-readr), [r-tibble](#r-tibble), [r-hms](#r-hms), [r-forcats](#r-forcats), [r-rcpp](#r-rcpp)

Description: Import foreign statistical formats into R via the embedded 'ReadStat' C library, <https://github.com/WizardMac/ReadStat>.

---

r-hexbin[¶](#r-hexbin)
===

Homepage:
* <http://github.com/edzer/hexbin>

Spack package:
* [r-hexbin/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/r-hexbin/package.py)

Versions: 1.27.1
Build Dependencies: [r](#r), [r-lattice](#r-lattice)
Link Dependencies: [r](#r)
Run Dependencies: [r](#r), [r-lattice](#r-lattice)

Description: Binning and plotting functions for hexagonal bins. Now uses and relies on grid graphics and formal (S4) classes and methods.

---

r-highr[¶](#r-highr)
===

Homepage:
* <https://github.com/yihui/highr>

Spack package:
* [r-highr/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/r-highr/package.py)

Versions: 0.6
Build Dependencies: [r](#r)
Link Dependencies: [r](#r)
Run Dependencies: [r](#r)

Description: Provides syntax highlighting for R source code. Currently it supports LaTeX and HTML output. Source code of other languages is supported via <NAME>'s highlight package.
---

r-hmisc[¶](#r-hmisc)
===

Homepage:
* <http://biostat.mc.vanderbilt.edu/Hmisc>

Spack package:
* [r-hmisc/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/r-hmisc/package.py)

Versions: 4.1-1, 4.0-3
Build Dependencies: [r-base64enc](#r-base64enc), [r-latticeextra](#r-latticeextra), [r-htmltable](#r-htmltable), [r-data-table](#r-data-table), [r-viridis](#r-viridis), [r-survival](#r-survival), [r-htmltools](#r-htmltools), [r-formula](#r-formula), [r](#r), [r-acepack](#r-acepack), [r-lattice](#r-lattice), [r-gridextra](#r-gridextra), [r-ggplot2](#r-ggplot2)
Link Dependencies: [r](#r)
Run Dependencies: [r-base64enc](#r-base64enc), [r-latticeextra](#r-latticeextra), [r-htmltable](#r-htmltable), [r-data-table](#r-data-table), [r-viridis](#r-viridis), [r-survival](#r-survival), [r-htmltools](#r-htmltools), [r-formula](#r-formula), [r](#r), [r-acepack](#r-acepack), [r-lattice](#r-lattice), [r-gridextra](#r-gridextra), [r-ggplot2](#r-ggplot2)

Description: Contains many functions useful for data analysis, high-level graphics, utility operations, functions for computing sample size and power, importing and annotating datasets, imputing missing values, advanced table making, variable clustering, character string manipulation, conversion of R objects to LaTeX and html code, and recoding variables.

---

r-hms[¶](#r-hms)
===

Homepage:
* <https://cran.rstudio.com/web/packages/hms/index.html>

Spack package:
* [r-hms/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/r-hms/package.py)

Versions: 0.3
Build Dependencies: [r](#r)
Link Dependencies: [r](#r)
Run Dependencies: [r](#r)

Description: Implements an S3 class for storing and formatting time-of-day values, based on the 'difftime' class.
---

r-htmltable[¶](#r-htmltable)
===

Homepage:
* <https://CRAN.R-project.org/package=htmlTable>

Spack package:
* [r-htmltable/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/r-htmltable/package.py)

Versions: 1.11.2, 1.9
Build Dependencies: [r](#r), [r-knitr](#r-knitr), [r-rstudioapi](#r-rstudioapi), [r-htmlwidgets](#r-htmlwidgets), [r-magrittr](#r-magrittr), [r-checkmate](#r-checkmate), [r-stringr](#r-stringr)
Link Dependencies: [r](#r)
Run Dependencies: [r](#r), [r-knitr](#r-knitr), [r-rstudioapi](#r-rstudioapi), [r-htmlwidgets](#r-htmlwidgets), [r-magrittr](#r-magrittr), [r-checkmate](#r-checkmate), [r-stringr](#r-stringr)

Description: Tables with state-of-the-art layout elements such as row spanners, column spanners, table spanners, zebra striping, and more. While allowing advanced layout, the underlying css-structure is simple in order to maximize compatibility with word processors such as 'MS Word' or 'LibreOffice'. The package also contains a few text formatting functions that help outputting text compatible with HTML/'LaTeX'.

---

r-htmltools[¶](#r-htmltools)
===

Homepage:
* <https://github.com/rstudio/htmltools>

Spack package:
* [r-htmltools/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/r-htmltools/package.py)

Versions: 0.3.6, 0.3.5
Build Dependencies: [r](#r), [r-digest](#r-digest), [r-rcpp](#r-rcpp)
Link Dependencies: [r](#r)
Run Dependencies: [r](#r), [r-digest](#r-digest), [r-rcpp](#r-rcpp)

Description: Tools for HTML generation and output.
---

r-htmlwidgets[¶](#r-htmlwidgets)
===

Homepage:
* <https://cran.r-project.org/package=htmlTable>

Spack package:
* [r-htmlwidgets/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/r-htmlwidgets/package.py)

Versions: 0.9, 0.8, 0.6
Build Dependencies: [r](#r), [r-jsonlite](#r-jsonlite), [r-yaml](#r-yaml), [r-htmltools](#r-htmltools)
Link Dependencies: [r](#r)
Run Dependencies: [r](#r), [r-jsonlite](#r-jsonlite), [r-yaml](#r-yaml), [r-htmltools](#r-htmltools)

Description: A framework for creating HTML widgets that render in various contexts including the R console, 'R Markdown' documents, and 'Shiny' web applications.

---

r-httpuv[¶](#r-httpuv)
===

Homepage:
* <https://github.com/rstudio/httpuv>

Spack package:
* [r-httpuv/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/r-httpuv/package.py)

Versions: 1.3.5, 1.3.3
Build Dependencies: [r](#r), [r-rcpp](#r-rcpp)
Link Dependencies: [r](#r)
Run Dependencies: [r](#r), [r-rcpp](#r-rcpp)

Description: Provides low-level socket and protocol support for handling HTTP and WebSocket requests directly from within R. It is primarily intended as a building block for other packages, rather than making it particularly easy to create complete web applications using httpuv alone. httpuv is built on top of the libuv and http-parser C libraries, both of which were developed by Joyent, Inc. (See LICENSE file for libuv and http-parser license information.)
---

r-httr[¶](#r-httr)
===

Homepage:
* <https://github.com/hadley/httr>

Spack package:
* [r-httr/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/r-httr/package.py)

Versions: 1.3.1, 1.2.1, 1.1.0
Build Dependencies: [r](#r), [r-jsonlite](#r-jsonlite), [r-r6](#r-r6), [r-curl](#r-curl), [r-mime](#r-mime), [r-openssl](#r-openssl)
Link Dependencies: [r](#r)
Run Dependencies: [r](#r), [r-jsonlite](#r-jsonlite), [r-r6](#r-r6), [r-curl](#r-curl), [r-mime](#r-mime), [r-openssl](#r-openssl)

Description: Useful tools for working with HTTP organised by HTTP verbs (GET(), POST(), etc). Configuration functions make it easy to control additional request components (authenticate(), add_headers() and so on).

---

r-hwriter[¶](#r-hwriter)
===

Homepage:
* <https://cran.rstudio.com/web/packages/hwriter/index.html>

Spack package:
* [r-hwriter/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/r-hwriter/package.py)

Versions: 1.3.2
Build Dependencies: [r](#r)
Link Dependencies: [r](#r)
Run Dependencies: [r](#r)

Description: Easy-to-use and versatile functions to output R objects in HTML format.

---

r-hypergraph[¶](#r-hypergraph)
===

Homepage:
* <https://www.bioconductor.org/packages/hypergraph/>

Spack package:
* [r-hypergraph/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/r-hypergraph/package.py)

Versions: 1.48.0
Build Dependencies: [r](#r), [r-graph](#r-graph)
Link Dependencies: [r](#r)
Run Dependencies: [r](#r), [r-graph](#r-graph)

Description: A package that implements some simple capabilities for representing and manipulating hypergraphs.
---

r-ica[¶](#r-ica)
===

Homepage:
* <https://cran.r-project.org/web/packages/ica/index.html>

Spack package:
* [r-ica/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/r-ica/package.py)

Versions: 1.0-1, 1.0-0
Build Dependencies: [r](#r)
Link Dependencies: [r](#r)
Run Dependencies: [r](#r)

Description: Independent Component Analysis (ICA) using various algorithms: FastICA, Information-Maximization (Infomax), and Joint Approximate Diagonalization of Eigenmatrices (JADE).

---

r-igraph[¶](#r-igraph)
===

Homepage:
* <http://igraph.org/>

Spack package:
* [r-igraph/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/r-igraph/package.py)

Versions: 1.1.2, 1.0.1
Build Dependencies: [r](#r), [gmp](#gmp), [r-irlba](#r-irlba), [libxml2](#libxml2), [r-magrittr](#r-magrittr), [r-matrix](#r-matrix), [r-pkgconfig](#r-pkgconfig)
Link Dependencies: [r](#r), [gmp](#gmp), [libxml2](#libxml2)
Run Dependencies: [r](#r), [r-matrix](#r-matrix), [r-irlba](#r-irlba), [r-pkgconfig](#r-pkgconfig), [r-magrittr](#r-magrittr)

Description: Routines for simple graphs and network analysis. It can handle large graphs very well and provides functions for generating random and regular graphs, graph visualization, centrality methods and much more.

---

r-illuminaio[¶](#r-illuminaio)
===

Homepage:
* <http://bioconductor.org/packages/illuminaio/>

Spack package:
* [r-illuminaio/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/r-illuminaio/package.py)

Versions: 0.18.0
Build Dependencies: [r](#r), [r-base64](#r-base64)
Link Dependencies: [r](#r)
Run Dependencies: [r](#r), [r-base64](#r-base64)

Description: Tools for parsing Illumina's microarray output files, including IDAT.
---

r-impute[¶](#r-impute)
===

Homepage:
* <https://www.bioconductor.org/packages/impute/>

Spack package:
* [r-impute/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/r-impute/package.py)

Versions: 1.50.1
Build Dependencies: [r](#r)
Link Dependencies: [r](#r)
Run Dependencies: [r](#r)

Description: Imputation for microarray data (currently KNN only).

---

r-influencer[¶](#r-influencer)
===

Homepage:
* <https://github.com/rcc-uchicago/influenceR>

Spack package:
* [r-influencer/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/r-influencer/package.py)

Versions: 0.1.0
Build Dependencies: [r](#r), [r-matrix](#r-matrix), [r-igraph](#r-igraph)
Link Dependencies: [r](#r)
Run Dependencies: [r](#r), [r-matrix](#r-matrix), [r-igraph](#r-igraph)

Description: Provides functionality to compute various node centrality measures on networks. Included are functions to compute betweenness centrality (by utilizing Madduri and Bader's SNAP library), implementations of Burt's constraint and effective network size (ENS) metrics, Borgatti's algorithm to identify key players, and Valente's bridging metric. On Unix systems, the betweenness, Key Players, and bridging implementations are parallelized with OpenMP, which may run faster on systems which have OpenMP configured.

---

r-inline[¶](#r-inline)
===

Homepage:
* <https://cran.r-project.org/web/packages/inline/index.html>

Spack package:
* [r-inline/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/r-inline/package.py)

Versions: 0.3.14
Build Dependencies: [r](#r)
Link Dependencies: [r](#r)
Run Dependencies: [r](#r)

Description: Functionality to dynamically define R functions and S4 methods with inlined C, C++ or Fortran code supporting .C and .Call calling conventions.
---

r-interactivedisplaybase[¶](#r-interactivedisplaybase)
===

Homepage:
* <https://bioconductor.org/packages/interactiveDisplayBase/>

Spack package:
* [r-interactivedisplaybase/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/r-interactivedisplaybase/package.py)

Versions: 1.14.0
Build Dependencies: [r](#r), [r-shiny](#r-shiny), [r-biocgenerics](#r-biocgenerics)
Link Dependencies: [r](#r)
Run Dependencies: [r](#r), [r-shiny](#r-shiny), [r-biocgenerics](#r-biocgenerics)

Description: The interactiveDisplayBase package contains the basic methods needed to generate interactive Shiny-based display methods for Bioconductor objects.

---

r-ipred[¶](#r-ipred)
===

Homepage:
* <https://cran.r-project.org/package=ipred>

Spack package:
* [r-ipred/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/r-ipred/package.py)

Versions: 0.9-5
Build Dependencies: [r](#r), [r-nnet](#r-nnet), [r-prodlim](#r-prodlim), [r-survival](#r-survival), [r-rpart](#r-rpart), [r-class](#r-class), [r-mass](#r-mass)
Link Dependencies: [r](#r)
Run Dependencies: [r](#r), [r-nnet](#r-nnet), [r-prodlim](#r-prodlim), [r-survival](#r-survival), [r-rpart](#r-rpart), [r-class](#r-class), [r-mass](#r-mass)

Description: Improved predictive models by indirect classification and bagging for classification, regression and survival problems as well as resampling based estimators of prediction error.
---

r-iranges[¶](#r-iranges)
===

Homepage:
* <https://www.bioconductor.org/packages/IRanges/>

Spack package:
* [r-iranges/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/r-iranges/package.py)

Versions: 2.14.10, 2.12.0, 2.10.5
Build Dependencies: [r](#r), [r-s4vectors](#r-s4vectors), [r-biocgenerics](#r-biocgenerics)
Link Dependencies: [r](#r)
Run Dependencies: [r](#r), [r-s4vectors](#r-s4vectors), [r-biocgenerics](#r-biocgenerics)

Description: Provides efficient low-level and highly reusable S4 classes for storing, manipulating and aggregating over annotated ranges of integers. Implements an algebra of range operations, including efficient algorithms for finding overlaps and nearest neighbors. Defines efficient list-like classes for storing, transforming and aggregating large grouped data, i.e., collections of atomic vectors and DataFrames.

---

r-irdisplay[¶](#r-irdisplay)
===

Homepage:
* <https://irkernel.github.io>

Spack package:
* [r-irdisplay/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/r-irdisplay/package.py)

Versions: 0.4.4
Build Dependencies: [r](#r), [r-repr](#r-repr)
Link Dependencies: [r](#r)
Run Dependencies: [r](#r), [r-repr](#r-repr)

Description: An interface to the rich display capabilities of Jupyter front-ends (e.g. 'Jupyter Notebook'). Designed to be used from a running IRkernel session.

---

r-irkernel[¶](#r-irkernel)
===

Homepage:
* <https://irkernel.github.io/>

Spack package:
* [r-irkernel/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/r-irkernel/package.py)

Versions: master
Build Dependencies: [r](#r), [r-devtools](#r-devtools), [r-evaluate](#r-evaluate), [r-crayon](#r-crayon), [r-digest](#r-digest), [r-irdisplay](#r-irdisplay), [r-uuid](#r-uuid), [r-repr](#r-repr), [r-pbdzmq](#r-pbdzmq)
Link Dependencies: [r](#r)
Run Dependencies: [r](#r), [r-devtools](#r-devtools), [r-evaluate](#r-evaluate), [r-crayon](#r-crayon), [r-digest](#r-digest), [r-irdisplay](#r-irdisplay), [r-uuid](#r-uuid), [r-repr](#r-repr), [r-pbdzmq](#r-pbdzmq)

Description: R kernel for Jupyter.

---

r-irlba[¶](#r-irlba)
===

Homepage:
* <https://cran.r-project.org/web/packages/irlba/index.html>

Spack package:
* [r-irlba/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/r-irlba/package.py)

Versions: 2.1.2, 2.0.0
Build Dependencies: [r](#r), [r-matrix](#r-matrix)
Link Dependencies: [r](#r)
Run Dependencies: [r](#r), [r-matrix](#r-matrix)

Description: Fast and memory efficient methods for truncated singular and eigenvalue decompositions and principal component analysis of large sparse or dense matrices.

---

r-iso[¶](#r-iso)
===

Homepage:
* <https://cran.r-project.org/package=Iso>

Spack package:
* [r-iso/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/r-iso/package.py)

Versions: 0.0-17
Build Dependencies: [r](#r)
Link Dependencies: [r](#r)
Run Dependencies: [r](#r)

Description: Linear order and unimodal order (univariate) isotonic regression; bivariate isotonic regression with linear order on both variables.
---

r-iterators[¶](#r-iterators)
===

Homepage:
* <https://cran.r-project.org/web/packages/iterators/index.html>

Spack package:
* [r-iterators/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/r-iterators/package.py)

Versions: 1.0.8
Build Dependencies: [r](#r)
Link Dependencies: [r](#r)
Run Dependencies: [r](#r)

Description: Support for iterators, which allow a programmer to traverse through all the elements of a vector, list, or other collection of data.

---

r-janitor[¶](#r-janitor)
===

Homepage:
* <https://github.com/sfirke/janitor>

Spack package:
* [r-janitor/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/r-janitor/package.py)

Versions: 0.3.0
Build Dependencies: [r](#r), [r-dplyr](#r-dplyr), [r-tidyr](#r-tidyr), [r-magrittr](#r-magrittr)
Link Dependencies: [r](#r)
Run Dependencies: [r](#r), [r-dplyr](#r-dplyr), [r-tidyr](#r-tidyr), [r-magrittr](#r-magrittr)

Description: The main janitor functions can: perfectly format data.frame column names; provide quick one- and two-variable tabulations (i.e., frequency tables and crosstabs); and isolate duplicate records. Other janitor functions nicely format the tabulation results. These tabulate-and-report functions approximate popular features of SPSS and Microsoft Excel. This package follows the principles of the "tidyverse" and works well with the pipe function %>%. janitor was built with beginning-to-intermediate R users in mind and is optimized for user-friendliness. Advanced R users can already do everything covered here, but with janitor they can do it faster and save their thinking for the fun stuff.
---

r-jaspar2018[¶](#r-jaspar2018)
===

Homepage:
* <http://jaspar.genereg.net/>

Spack package:
* [r-jaspar2018/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/r-jaspar2018/package.py)

Versions: 1.0.0
Build Dependencies: [r](#r)
Link Dependencies: [r](#r)
Run Dependencies: [r](#r)

Description: Data package for JASPAR 2018. To search this database, please use the package TFBSTools (>= 1.15.6).

---

r-jomo[¶](#r-jomo)
===

Homepage:
* <https://cran.r-project.org/package=jomo>

Spack package:
* [r-jomo/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/r-jomo/package.py)

Versions: 2.6-2
Build Dependencies: [r](#r), [r-lme4](#r-lme4), [r-survival](#r-survival)
Link Dependencies: [r](#r)
Run Dependencies: [r](#r), [r-lme4](#r-lme4), [r-survival](#r-survival)

Description: Similarly to Schafer's package 'pan', 'jomo' is a package for multilevel joint modelling multiple imputation (Carpenter and Kenward, 2013) <doi:10.1002/9781119942283>. Novel aspects of 'jomo' are the possibility of handling binary and categorical data through latent normal variables, the option to use cluster-specific covariance matrices and to impute compatibly with the substantive model.

---

r-jpeg[¶](#r-jpeg)
===

Homepage:
* <http://www.rforge.net/jpeg/>

Spack package:
* [r-jpeg/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/r-jpeg/package.py)

Versions: 0.1-8
Build Dependencies: [r](#r), [jpeg](#jpeg)
Link Dependencies: [r](#r), [jpeg](#jpeg)
Run Dependencies: [r](#r)

Description: This package provides an easy and simple way to read, write and display bitmap images stored in the JPEG format. It can read and write both files and in-memory raw vectors.
---

r-jsonlite[¶](#r-jsonlite)
===

Homepage:
* <https://github.com/jeroenooms/jsonlite>

Spack package:
* [r-jsonlite/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/r-jsonlite/package.py)

Versions: 1.5, 1.2, 1.0, 0.9.21
Build Dependencies: [r](#r)
Link Dependencies: [r](#r)
Run Dependencies: [r](#r)

Description: A fast JSON parser and generator optimized for statistical data and the web. Started out as a fork of 'RJSONIO', but has been completely rewritten in recent versions. The package offers flexible, robust, high performance tools for working with JSON in R and is particularly powerful for building pipelines and interacting with a web API. The implementation is based on the mapping described in the vignette (Ooms, 2014). In addition to converting JSON data from/to R objects, 'jsonlite' contains functions to stream, validate, and prettify JSON data. The unit tests included with the package verify that all edge cases are encoded and decoded consistently for use with dynamic data in systems and applications.

---

r-kegg-db[¶](#r-kegg-db)
===

Homepage:
* <https://www.bioconductor.org/packages/KEGG.db/>

Spack package:
* [r-kegg-db/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/r-kegg-db/package.py)

Versions: 3.2.3
Build Dependencies: [r](#r), [r-annotationdbi](#r-annotationdbi)
Link Dependencies: [r](#r)
Run Dependencies: [r](#r), [r-annotationdbi](#r-annotationdbi)

Description: A set of annotation maps for KEGG assembled using data from KEGG.
---

r-kegggraph[¶](#r-kegggraph)
===

Homepage:
* <https://www.bioconductor.org/packages/KEGGgraph/>

Spack package:
* [r-kegggraph/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/r-kegggraph/package.py)

Versions: 1.38.1
Build Dependencies: [r](#r), [r-graph](#r-graph), [r-xml](#r-xml)
Link Dependencies: [r](#r)
Run Dependencies: [r](#r), [r-graph](#r-graph), [r-xml](#r-xml)

Description: KEGGgraph is an interface between KEGG pathways and graph objects as well as a collection of tools to analyze, dissect and visualize these graphs. It parses the regularly updated KGML (KEGG XML) files into graph models maintaining all essential pathway attributes. The package offers functionalities including parsing, graph operation, visualization, etc.

---

r-keggrest[¶](#r-keggrest)
===

Homepage:
* <http://bioconductor.org/packages/KEGGREST>

Spack package:
* [r-keggrest/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/r-keggrest/package.py)

Versions: 1.18.1, 1.2.0
Build Dependencies: [r](#r), [r-httr](#r-httr), [r-png](#r-png), [r-biostrings](#r-biostrings)
Link Dependencies: [r](#r)
Run Dependencies: [r](#r), [r-httr](#r-httr), [r-png](#r-png), [r-biostrings](#r-biostrings)

Description: Provides a client interface to the KEGG REST server.
---

r-kernlab[¶](#r-kernlab)
===

Homepage:
* <https://cran.r-project.org/package=kernlab>

Spack package:
* [r-kernlab/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/r-kernlab/package.py)

Versions: 0.9-25
Build Dependencies: [r](#r)
Link Dependencies: [r](#r)
Run Dependencies: [r](#r)

Description: Kernel-based machine learning methods for classification, regression, clustering, novelty detection, quantile regression and dimensionality reduction. Among other methods 'kernlab' includes Support Vector Machines, Spectral Clustering, Kernel PCA, Gaussian Processes and a QP solver.

---

r-kernsmooth[¶](#r-kernsmooth)
===

Homepage:
* <https://cran.r-project.org/package=KernSmooth>

Spack package:
* [r-kernsmooth/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/r-kernsmooth/package.py)

Versions: 2.23-15
Build Dependencies: [r](#r)
Link Dependencies: [r](#r)
Run Dependencies: [r](#r)

Description: Functions for kernel smoothing (and density estimation).

---

r-kknn[¶](#r-kknn)
===

Homepage:
* <https://cran.r-project.org/package=kknn>

Spack package:
* [r-kknn/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/r-kknn/package.py)

Versions: 1.3.1
Build Dependencies: [r](#r), [r-matrix](#r-matrix), [r-igraph](#r-igraph)
Link Dependencies: [r](#r)
Run Dependencies: [r](#r), [r-matrix](#r-matrix), [r-igraph](#r-igraph)

Description: Weighted k-Nearest Neighbors for Classification, Regression and Clustering.
---

r-knitr[¶](#r-knitr)
===

Homepage:
* <https://cran.r-project.org/package=knitr>

Spack package:
* [r-knitr/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/r-knitr/package.py)

Versions: 1.17, 1.14, 0.6
Build Dependencies: [r](#r), [r-formatr](#r-formatr), [r-markdown](#r-markdown), [r-yaml](#r-yaml), [r-highr](#r-highr), [r-digest](#r-digest), [r-stringr](#r-stringr), [r-evaluate](#r-evaluate)
Link Dependencies: [r](#r)
Run Dependencies: [r](#r), [r-formatr](#r-formatr), [r-markdown](#r-markdown), [r-yaml](#r-yaml), [r-highr](#r-highr), [r-digest](#r-digest), [r-stringr](#r-stringr), [r-evaluate](#r-evaluate)

Description: Provides a general-purpose tool for dynamic report generation in R using Literate Programming techniques.

---

r-ks[¶](#r-ks)
===

Homepage:
* <https://cran.r-project.org/package=ks>

Spack package:
* [r-ks/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/r-ks/package.py)

Versions: 1.11.2
Build Dependencies: [r](#r), [r-kernlab](#r-kernlab), [r-mvtnorm](#r-mvtnorm), [r-multicool](#r-multicool), [r-mclust](#r-mclust), [r-matrix](#r-matrix), [r-fnn](#r-fnn), [r-mgcv](#r-mgcv), [r-kernsmooth](#r-kernsmooth)
Link Dependencies: [r](#r)
Run Dependencies: [r](#r), [r-kernlab](#r-kernlab), [r-mvtnorm](#r-mvtnorm), [r-multicool](#r-multicool), [r-mclust](#r-mclust), [r-matrix](#r-matrix), [r-fnn](#r-fnn), [r-mgcv](#r-mgcv), [r-kernsmooth](#r-kernsmooth)

Description: Kernel smoothers for univariate and multivariate data.

---

r-labeling[¶](#r-labeling)
===

Homepage:
* <https://cran.r-project.org/web/packages/labeling/index.html>

Spack package:
* [r-labeling/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/r-labeling/package.py)

Versions: 0.3
Build Dependencies: [r](#r)
Link Dependencies: [r](#r)
Run Dependencies: [r](#r)

Description: Provides a range of axis labeling algorithms.
---

r-lambda-r[¶](#r-lambda-r)
===

Homepage:
* <https://cran.rstudio.com/web/packages/lambda.r/index.html>

Spack package:
* [r-lambda-r/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/r-lambda-r/package.py)

Versions: 1.2
Build Dependencies: [r](#r)
Link Dependencies: [r](#r)
Run Dependencies: [r](#r)

Description: A language extension to efficiently write functional programs in R. Syntax extensions include multi-part function definitions, pattern matching, guard statements, and built-in (optional) type safety.

---

r-laplacesdemon[¶](#r-laplacesdemon)
===

Homepage:
* <https://github.com/LaplacesDemonR/LaplacesDemon>

Spack package:
* [r-laplacesdemon/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/r-laplacesdemon/package.py)

Versions: 16.0.1
Build Dependencies: [r](#r)
Link Dependencies: [r](#r)
Run Dependencies: [r](#r)

Description: Provides a complete environment for Bayesian inference using a variety of different samplers (see ?LaplacesDemon for an overview). The README describes the history of the package development process.

---

r-lars[¶](#r-lars)
===

Homepage:
* <https://cran.r-project.org/web/packages/lars/index.html>

Spack package:
* [r-lars/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/r-lars/package.py)

Versions: 1.2, 1.1, 0.9-8
Build Dependencies: [r](#r)
Link Dependencies: [r](#r)
Run Dependencies: [r](#r)

Description: Efficient procedures for fitting an entire lasso sequence with the cost of a single least squares fit.
---

r-lattice[¶](#r-lattice)
===

Homepage:
* <http://lattice.r-forge.r-project.org/>

Spack package:
* [r-lattice/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/r-lattice/package.py)

Versions: 0.20-35, 0.20-34
Build Dependencies: [r](#r)
Link Dependencies: [r](#r)
Run Dependencies: [r](#r)

Description: A powerful and elegant high-level data visualization system inspired by Trellis graphics, with an emphasis on multivariate data. Lattice is sufficient for typical graphics needs, and is also flexible enough to handle most nonstandard requirements. See ?Lattice for an introduction.

---

r-latticeextra[¶](#r-latticeextra)
===

Homepage:
* <http://latticeextra.r-forge.r-project.org/>

Spack package:
* [r-latticeextra/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/r-latticeextra/package.py)

Versions: 0.6-28
Build Dependencies: [r](#r), [r-lattice](#r-lattice), [r-rcolorbrewer](#r-rcolorbrewer)
Link Dependencies: [r](#r)
Run Dependencies: [r](#r), [r-lattice](#r-lattice), [r-rcolorbrewer](#r-rcolorbrewer)

Description: Building on the infrastructure provided by the lattice package, this package provides several new high-level functions and methods, as well as additional utilities such as panel and axis annotation functions.

---

r-lava[¶](#r-lava)
===

Homepage:
* <https://cran.r-project.org/package=lava>

Spack package:
* [r-lava/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/r-lava/package.py)

Versions: 1.4.7
Build Dependencies: [r](#r), [r-survival](#r-survival), [r-numderiv](#r-numderiv)
Link Dependencies: [r](#r)
Run Dependencies: [r](#r), [r-survival](#r-survival), [r-numderiv](#r-numderiv)

Description: Estimation and simulation of latent variable models.
---

r-lazyeval[¶](#r-lazyeval)
===

Homepage:
* <https://cran.r-project.org/web/packages/lazyeval/index.html>

Spack package:
* [r-lazyeval/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/r-lazyeval/package.py)

Versions: 0.2.0
Build Dependencies: [r](#r)
Link Dependencies: [r](#r)
Run Dependencies: [r](#r)

Description: An alternative approach to non-standard evaluation using formulas. Provides a full implementation of LISP style 'quasiquotation', making it easier to generate code with other code.

---

r-leaflet[¶](#r-leaflet)
===

Homepage:
* <http://rstudio.github.io/leaflet/>

Spack package:
* [r-leaflet/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/r-leaflet/package.py)

Versions: 1.0.1
Build Dependencies: [r](#r), [r-base64enc](#r-base64enc), [r-markdown](#r-markdown), [r-htmlwidgets](#r-htmlwidgets), [r-magrittr](#r-magrittr), [r-png](#r-png), [r-raster](#r-raster), [r-scales](#r-scales), [r-rcolorbrewer](#r-rcolorbrewer), [r-htmltools](#r-htmltools), [r-sp](#r-sp)
Link Dependencies: [r](#r)
Run Dependencies: [r](#r), [r-base64enc](#r-base64enc), [r-markdown](#r-markdown), [r-htmlwidgets](#r-htmlwidgets), [r-magrittr](#r-magrittr), [r-png](#r-png), [r-raster](#r-raster), [r-scales](#r-scales), [r-rcolorbrewer](#r-rcolorbrewer), [r-htmltools](#r-htmltools), [r-sp](#r-sp)

Description: Create and customize interactive maps using the 'Leaflet' JavaScript library and the 'htmlwidgets' package. These maps can be used directly from the R console, from 'RStudio', in Shiny apps and R Markdown documents.
---

r-leaps[¶](#r-leaps)
===

Homepage: <https://CRAN.R-project.org/package=leaps>
Spack package: [r-leaps/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/r-leaps/package.py)
Versions: 3.0
Build Dependencies: [r](#r)
Link Dependencies: [r](#r)
Run Dependencies: [r](#r)
Description: leaps: Regression Subset Selection

---

r-learnbayes[¶](#r-learnbayes)
===

Homepage: <https://CRAN.R-project.org/package=LearnBayes>
Spack package: [r-learnbayes/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/r-learnbayes/package.py)
Versions: 2.15
Build Dependencies: [r](#r)
Link Dependencies: [r](#r)
Run Dependencies: [r](#r)
Description: LearnBayes contains a collection of functions helpful in learning the basic tenets of Bayesian statistical inference. It contains functions for summarizing basic one and two parameter posterior distributions and predictive distributions. It contains MCMC algorithms for summarizing posterior distributions defined by the user. It also contains functions for regression models, hierarchical models, Bayesian tests, and illustrations of Gibbs sampling.

---

r-lhs[¶](#r-lhs)
===

Homepage: <http://lhs.r-forge.r-project.org/>
Spack package: [r-lhs/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/r-lhs/package.py)
Versions: 0.16
Build Dependencies: [r](#r)
Link Dependencies: [r](#r)
Run Dependencies: [r](#r)
Description: Provides a number of methods for creating and augmenting Latin Hypercube Samples.

---

r-limma[¶](#r-limma)
===

Homepage: <https://www.bioconductor.org/packages/limma/>
Spack package: [r-limma/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/r-limma/package.py)
Versions: 3.36.2, 3.34.9, 3.32.10
Build Dependencies: [r](#r)
Link Dependencies: [r](#r)
Run Dependencies: [r](#r)
Description: Data analysis, linear models and differential expression for microarray data.
---

r-lme4[¶](#r-lme4)
===

Homepage: <https://github.com/lme4/lme4/>
Spack package: [r-lme4/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/r-lme4/package.py)
Versions: 1.1-12
Build Dependencies: [r](#r), [r-lattice](#r-lattice), [r-nlme](#r-nlme), [r-nloptr](#r-nloptr), [r-matrix](#r-matrix), [r-minqa](#r-minqa), [r-mass](#r-mass), [r-rcppeigen](#r-rcppeigen), [r-rcpp](#r-rcpp)
Link Dependencies: [r](#r)
Run Dependencies: [r](#r), [r-lattice](#r-lattice), [r-nlme](#r-nlme), [r-nloptr](#r-nloptr), [r-matrix](#r-matrix), [r-minqa](#r-minqa), [r-mass](#r-mass), [r-rcppeigen](#r-rcppeigen), [r-rcpp](#r-rcpp)
Description: Fit linear and generalized linear mixed-effects models. The models and their components are represented using S4 classes and methods. The core computational algorithms are implemented using the 'Eigen' C++ library for numerical linear algebra and 'RcppEigen' "glue".

---

r-lmtest[¶](#r-lmtest)
===

Homepage: <https://cran.r-project.org/package=lmtest>
Spack package: [r-lmtest/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/r-lmtest/package.py)
Versions: 0.9-34
Build Dependencies: [r](#r), [r-zoo](#r-zoo)
Link Dependencies: [r](#r)
Run Dependencies: [r](#r), [r-zoo](#r-zoo)
Description: A collection of tests, data sets, and examples for diagnostic checking in linear regression models. Furthermore, some generic tools for inference in parametric models are provided.

---

r-locfit[¶](#r-locfit)
===

Homepage: <https://cran.rstudio.com/web/packages/locfit/index.html>
Spack package: [r-locfit/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/r-locfit/package.py)
Versions: 1.5-9.1
Build Dependencies: [r](#r), [r-lattice](#r-lattice)
Link Dependencies: [r](#r)
Run Dependencies: [r](#r), [r-lattice](#r-lattice)
Description: Local regression, likelihood and density estimation.
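The dependency fields in these entries form a graph: installing a package pulls in its run dependencies, and those packages' run dependencies in turn. A minimal sketch of that closure, using a hand-copied subset of the run-dependency lists above (the `RUN_DEPS` table and `closure` helper are illustrative, not part of Spack's API):

```python
# Hand-copied subset of the "Run Dependencies" fields above.
RUN_DEPS = {
    "r-lme4": ["r", "r-lattice", "r-nlme", "r-nloptr", "r-matrix",
               "r-minqa", "r-mass", "r-rcppeigen", "r-rcpp"],
    "r-matrix": ["r", "r-lattice"],
    "r-lattice": ["r"],
    "r-minqa": ["r", "r-rcpp"],
}

def closure(pkg: str, deps: dict = RUN_DEPS) -> set:
    """Transitive run-dependency closure via iterative DFS."""
    seen: set = set()
    stack = [pkg]
    while stack:
        p = stack.pop()
        for d in deps.get(p, ()):   # packages not in the table are leaves
            if d not in seen:
                seen.add(d)
                stack.append(d)
    return seen

print(sorted(closure("r-lme4")))
```

For r-lme4, the indirect dependencies (r-lattice via r-matrix, r-rcpp via r-minqa) are already listed directly, so the closure here coincides with the direct list; the traversal matters for packages whose dependencies are not flattened.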
---

r-log4r[¶](#r-log4r)
===

Homepage: <https://cran.r-project.org/package=log4r>
Spack package: [r-log4r/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/r-log4r/package.py)
Versions: 0.2
Build Dependencies: [r](#r), [r-testthat](#r-testthat)
Link Dependencies: [r](#r)
Run Dependencies: [r](#r), [r-testthat](#r-testthat)
Description: log4r provides an object-oriented logging system that uses an API roughly equivalent to log4j and its related variants.

---

r-lpsolve[¶](#r-lpsolve)
===

Homepage: <https://cran.r-project.org/web/packages/lpSolve/index.html>
Spack package: [r-lpsolve/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/r-lpsolve/package.py)
Versions: 5.6.13
Build Dependencies: [r](#r)
Link Dependencies: [r](#r)
Run Dependencies: [r](#r)
Description: Lp_solve is freely available (under LGPL 2) software for solving linear, integer and mixed integer programs. In this implementation we supply a "wrapper" function in C and some R functions that solve general linear/integer problems, assignment problems, and transportation problems. This version calls lp_solve.

---

r-lsei[¶](#r-lsei)
===

Homepage: <https://cran.r-project.org/package=lsei>
Spack package: [r-lsei/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/r-lsei/package.py)
Versions: 1.2-0
Build Dependencies: [r](#r)
Link Dependencies: [r](#r)
Run Dependencies: [r](#r)
Description: It contains functions that solve least squares linear regression problems under linear equality/inequality constraints. Functions for solving quadratic programming problems are also available, which transform such problems into least squares ones first. It is developed based on the 'Fortran' program of Lawson and Hanson (1974, 1995), which is public domain and available at <http://www.netlib.org/lawson-hanson>.
---

r-lubridate[¶](#r-lubridate)
===

Homepage: <https://cran.r-project.org/web/packages/lubridate/index.html>
Spack package: [r-lubridate/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/r-lubridate/package.py)
Versions: 1.7.1, 1.5.6
Build Dependencies: [r](#r), [r-stringr](#r-stringr), [r-rcpp](#r-rcpp)
Link Dependencies: [r](#r)
Run Dependencies: [r](#r), [r-stringr](#r-stringr), [r-rcpp](#r-rcpp)
Description: Functions to work with date-times and timespans: fast and user friendly parsing of date-time data, extraction and updating of components of a date-time (years, months, days, hours, minutes, and seconds), algebraic manipulation on date-time and timespan objects. The 'lubridate' package has a consistent and memorable syntax that makes working with dates easy and fun.

---

r-magic[¶](#r-magic)
===

Homepage: <https://cran.r-project.org/>
Spack package: [r-magic/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/r-magic/package.py)
Versions: 1.5-6
Build Dependencies: [r](#r), [r-abind](#r-abind)
Link Dependencies: [r](#r)
Run Dependencies: [r](#r), [r-abind](#r-abind)
Description: A collection of efficient, vectorized algorithms for the creation and investigation of magic squares and hypercubes, including a variety of functions for the manipulation and analysis of arbitrarily dimensioned arrays.

---

r-magrittr[¶](#r-magrittr)
===

Homepage: <https://cran.r-project.org/web/packages/magrittr/index.html>
Spack package: [r-magrittr/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/r-magrittr/package.py)
Versions: 1.5
Build Dependencies: [r](#r)
Link Dependencies: [r](#r)
Run Dependencies: [r](#r)
Description: Provides a mechanism for chaining commands with a new forward-pipe operator, %>%. This operator will forward a value, or the result of an expression, into the next function call/expression. There is flexible support for the type of right-hand side expressions. For more information, see package vignette.

---

r-makecdfenv[¶](#r-makecdfenv)
===

Homepage: <https://www.bioconductor.org/packages/makecdfenv/>
Spack package: [r-makecdfenv/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/r-makecdfenv/package.py)
Versions: 1.52.0
Build Dependencies: [r](#r), [r-biobase](#r-biobase), [r-affyio](#r-affyio), [r-affy](#r-affy), [r-zlibbioc](#r-zlibbioc)
Link Dependencies: [r](#r)
Run Dependencies: [r](#r), [r-biobase](#r-biobase), [r-affyio](#r-affyio), [r-affy](#r-affy), [r-zlibbioc](#r-zlibbioc)
Description: This package has two functions. One reads an Affymetrix chip description file (CDF) and creates a hash table environment containing the location/probe set membership mapping. The other creates a package that automatically loads that environment.

---

r-maldiquant[¶](#r-maldiquant)
===

Homepage: <https://cran.r-project.org/package=MALDIquant>
Spack package: [r-maldiquant/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/r-maldiquant/package.py)
Versions: 1.16.4
Build Dependencies: [r](#r), [r-knitr](#r-knitr), [r-testthat](#r-testthat)
Link Dependencies: [r](#r)
Run Dependencies: [r](#r), [r-knitr](#r-knitr), [r-testthat](#r-testthat)
Description: A complete analysis pipeline for matrix-assisted laser desorption/ionization-time-of-flight (MALDI-TOF) and other two-dimensional mass spectrometry data. In addition to commonly used plotting and processing methods it includes distinctive features, namely baseline subtraction methods such as morphological filters (TopHat) or the statistics-sensitive non-linear iterative peak-clipping algorithm (SNIP), peak alignment using warping functions, handling of replicated measurements as well as allowing spectra with different resolutions.
---

r-mapproj[¶](#r-mapproj)
===

Homepage: <https://cran.r-project.org/package=mapproj>
Spack package: [r-mapproj/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/r-mapproj/package.py)
Versions: 1.2-4
Build Dependencies: [r](#r), [r-maps](#r-maps)
Link Dependencies: [r](#r)
Run Dependencies: [r](#r), [r-maps](#r-maps)
Description: Converts latitude/longitude into projected coordinates.

---

r-maps[¶](#r-maps)
===

Homepage: <https://cran.r-project.org/>
Spack package: [r-maps/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/r-maps/package.py)
Versions: 3.1.1
Build Dependencies: [r](#r)
Link Dependencies: [r](#r)
Run Dependencies: [r](#r)
Description: Display of maps. Projection code and larger maps are in separate packages ('mapproj' and 'mapdata').

---

r-maptools[¶](#r-maptools)
===

Homepage: <http://r-forge.r-project.org/projects/maptools/>
Spack package: [r-maptools/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/r-maptools/package.py)
Versions: 0.8-39
Build Dependencies: [r](#r), [r-lattice](#r-lattice), [r-sp](#r-sp), [r-foreign](#r-foreign)
Link Dependencies: [r](#r)
Run Dependencies: [r](#r), [r-lattice](#r-lattice), [r-sp](#r-sp), [r-foreign](#r-foreign)
Description: Set of tools for manipulating and reading geographic data, in particular ESRI shapefiles; C code used from shapelib. It includes binary access to GSHHG shoreline files. The package also provides interface wrappers for exchanging spatial objects with packages such as PBSmapping, spatstat, maps, RArcInfo, Stata tmap, WinBUGS, Mondrian, and others.
---

r-markdown[¶](#r-markdown)
===

Homepage: <https://cran.r-project.org/package=markdown>
Spack package: [r-markdown/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/r-markdown/package.py)
Versions: 0.8, 0.7.7
Build Dependencies: [r](#r), [r-mime](#r-mime)
Link Dependencies: [r](#r)
Run Dependencies: [r](#r), [r-mime](#r-mime)
Description: Provides R bindings to the 'Sundown' 'Markdown' rendering library (https://github.com/vmg/sundown). 'Markdown' is a plain-text formatting syntax that can be converted to 'XHTML' or other formats. See http://en.wikipedia.org/wiki/Markdown for more information about 'Markdown'.

---

r-mass[¶](#r-mass)
===

Homepage: <https://cran.r-project.org/web/packages/MASS/index.html>
Spack package: [r-mass/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/r-mass/package.py)
Versions: 7.3-47, 7.3-45
Build Dependencies: [r](#r)
Link Dependencies: [r](#r)
Run Dependencies: [r](#r)
Description: Functions and datasets to support Venables and Ripley, "Modern Applied Statistics with S" (4th edition, 2002).

---

r-matr[¶](#r-matr)
===

Homepage: <https://github.com/MG-RAST/matR>
Spack package: [r-matr/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/r-matr/package.py)
Versions: 0.9.1, 0.9
Build Dependencies: [r](#r), [r-biom-utils](#r-biom-utils), [r-mgraster](#r-mgraster)
Link Dependencies: [r](#r)
Run Dependencies: [r](#r), [r-biom-utils](#r-biom-utils), [r-mgraster](#r-mgraster)
Description: Package matR (Metagenomics Analysis Tools for R) is an analysis client for the MG-RAST metagenome annotation engine, part of the US Department of Energy (DOE) Systems Biology Knowledge Base (KBase). Customized analysis and visualization tools securely access remote data and metadata within the popular open source R language and environment for statistical computing.
---

r-matrix[¶](#r-matrix)
===

Homepage: <http://matrix.r-forge.r-project.org/>
Spack package: [r-matrix/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/r-matrix/package.py)
Versions: 1.2-14, 1.2-12, 1.2-11, 1.2-8, 1.2-6
Build Dependencies: [r](#r), [r-lattice](#r-lattice)
Link Dependencies: [r](#r)
Run Dependencies: [r](#r), [r-lattice](#r-lattice)
Description: Classes and methods for dense and sparse matrices and operations on them using 'LAPACK' and 'SuiteSparse'.

---

r-matrixmodels[¶](#r-matrixmodels)
===

Homepage: <http://matrix.r-forge.r-project.org/>
Spack package: [r-matrixmodels/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/r-matrixmodels/package.py)
Versions: 0.4-1
Build Dependencies: [r](#r), [r-matrix](#r-matrix)
Link Dependencies: [r](#r)
Run Dependencies: [r](#r), [r-matrix](#r-matrix)
Description: Modelling with sparse and dense 'Matrix' matrices, using modular prediction and response module classes.

---

r-matrixstats[¶](#r-matrixstats)
===

Homepage: <https://cran.rstudio.com/web/packages/matrixStats/index.html>
Spack package: [r-matrixstats/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/r-matrixstats/package.py)
Versions: 0.52.2
Build Dependencies: [r](#r)
Link Dependencies: [r](#r)
Run Dependencies: [r](#r)
Description: High-performing functions operating on rows and columns of matrices, e.g. col / rowMedians(), col / rowRanks(), and col / rowSds(). Functions are optimized per data type and for subsetted calculations such that both memory usage and processing time are minimized. There are also optimized vector-based methods, e.g. binMeans(), madDiff() and weightedMedian().
---

r-mclust[¶](#r-mclust)
===

Homepage: <http://www.stat.washington.edu/mclust>
Spack package: [r-mclust/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/r-mclust/package.py)
Versions: 5.3
Build Dependencies: [r](#r)
Link Dependencies: [r](#r)
Run Dependencies: [r](#r)
Description: mclust: Gaussian Mixture Modelling for Model-Based Clustering, Classification, and Density Estimation.

---

r-mcmcglmm[¶](#r-mcmcglmm)
===

Homepage: <https://cran.r-project.org/web/packages/MCMCglmm/index.html>
Spack package: [r-mcmcglmm/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/r-mcmcglmm/package.py)
Versions: 2.25
Build Dependencies: [r](#r), [r-coda](#r-coda), [r-cubature](#r-cubature), [r-tensora](#r-tensora), [r-matrix](#r-matrix), [r-corpcor](#r-corpcor), [r-ape](#r-ape)
Link Dependencies: [r](#r)
Run Dependencies: [r](#r), [r-coda](#r-coda), [r-cubature](#r-cubature), [r-tensora](#r-tensora), [r-matrix](#r-matrix), [r-corpcor](#r-corpcor), [r-ape](#r-ape)
Description: MCMC Generalised Linear Mixed Models.

---

r-mco[¶](#r-mco)
===

Homepage: <https://github.com/cran/mco>
Spack package: [r-mco/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/r-mco/package.py)
Versions: 1.0-15.1, 1.0-15
Build Dependencies: [r](#r)
Link Dependencies: [r](#r)
Run Dependencies: [r](#r)
Description: Functions for multiple criteria optimization using genetic algorithms and related test problems.

---

r-mda[¶](#r-mda)
===

Homepage: <https://cran.r-project.org/package=mda>
Spack package: [r-mda/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/r-mda/package.py)
Versions: 0.4-9
Build Dependencies: [r](#r), [r-class](#r-class)
Link Dependencies: [r](#r)
Run Dependencies: [r](#r), [r-class](#r-class)
Description: Mixture and flexible discriminant analysis, multivariate adaptive regression splines (MARS), BRUTO.
---

r-memoise[¶](#r-memoise)
===

Homepage: <https://github.com/hadley/memoise>
Spack package: [r-memoise/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/r-memoise/package.py)
Versions: 1.1.0, 1.0.0
Build Dependencies: [r](#r), [r-digest](#r-digest)
Link Dependencies: [r](#r)
Run Dependencies: [r](#r), [r-digest](#r-digest)
Description: Cache the results of a function so that when you call it again with the same arguments it returns the pre-computed value.

---

r-mergemaid[¶](#r-mergemaid)
===

Homepage: <https://www.bioconductor.org/packages/MergeMaid/>
Spack package: [r-mergemaid/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/r-mergemaid/package.py)
Versions: 2.48.0
Build Dependencies: [r](#r), [r-biobase](#r-biobase), [r-survival](#r-survival), [r-mass](#r-mass)
Link Dependencies: [r](#r)
Run Dependencies: [r](#r), [r-biobase](#r-biobase), [r-survival](#r-survival), [r-mass](#r-mass)
Description: The functions in this R extension are intended for cross-study comparison of gene expression array data. Required from the user are gene expression matrices, their corresponding gene-id vectors and other useful information, which could be 'list', 'matrix', or 'ExpressionSet'. The main function is 'mergeExprs', which transforms the input objects into data in the merged format, such that common genes in different datasets can be easily found. The function 'intcor' calculates the correlation coefficients. Other functions use the output from 'modelOutcome' to graphically display the results and cross-validate associations of gene expression data with survival.
---

r-methodss3[¶](#r-methodss3)
===

Homepage: <https://cran.r-project.org/package=R.methodsS3>
Spack package: [r-methodss3/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/r-methodss3/package.py)
Versions: 1.7.1
Build Dependencies: [r](#r)
Link Dependencies: [r](#r)
Run Dependencies: [r](#r)
Description: Methods that simplify the setup of S3 generic functions and S3 methods. Major effort has been made in making definition of methods as simple as possible with a minimum of maintenance for package developers. For example, generic functions are created automatically, if missing, and naming conflicts are automatically resolved, if possible. The method setMethodS3() is a good start for those who in the future may want to migrate to S4. This is a cross-platform package implemented in pure R that generates standard S3 methods.

---

r-mgcv[¶](#r-mgcv)
===

Homepage: <https://cran.r-project.org/package=mgcv>
Spack package: [r-mgcv/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/r-mgcv/package.py)
Versions: 1.8-22, 1.8-21, 1.8-20, 1.8-19, 1.8-18, 1.8-17, 1.8-16, 1.8-13
Build Dependencies: [r](#r), [r-matrix](#r-matrix), [r-nlme](#r-nlme)
Link Dependencies: [r](#r)
Run Dependencies: [r](#r), [r-matrix](#r-matrix), [r-nlme](#r-nlme)
Description: GAMs, GAMMs and other generalized ridge regression with multiple smoothing parameter estimation by GCV, REML or UBRE/AIC. Includes a gam() function, a wide variety of smoothers, JAGS support and distributions beyond the exponential family.
---

r-mgraster[¶](#r-mgraster)
===

Homepage: <https://github.com/braithwaite/MGRASTer/>
Spack package: [r-mgraster/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/r-mgraster/package.py)
Versions: 0.9
Build Dependencies: [r](#r)
Link Dependencies: [r](#r)
Run Dependencies: [r](#r)
Description: Convenience Functions for R Language Access to the v.1 API of the MG-RAST Metagenome Annotation Server, part of the US Department of Energy (DOE) Systems Biology Knowledge Base (KBase).

---

r-mice[¶](#r-mice)
===

Homepage: <https://cran.r-project.org/package=mice>
Spack package: [r-mice/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/r-mice/package.py)
Versions: 3.0.0
Build Dependencies: [r](#r), [r-broom](#r-broom), [r-mass](#r-mass), [r-mitml](#r-mitml), [r-dplyr](#r-dplyr), [r-survival](#r-survival), [r-rcpp](#r-rcpp), [r-lattice](#r-lattice), [r-rpart](#r-rpart), [r-nnet](#r-nnet), [r-rlang](#r-rlang)
Link Dependencies: [r](#r)
Run Dependencies: [r](#r), [r-broom](#r-broom), [r-mass](#r-mass), [r-mitml](#r-mitml), [r-dplyr](#r-dplyr), [r-survival](#r-survival), [r-rcpp](#r-rcpp), [r-lattice](#r-lattice), [r-rpart](#r-rpart), [r-nnet](#r-nnet), [r-rlang](#r-rlang)
Description: Multiple imputation using Fully Conditional Specification (FCS) implemented by the MICE algorithm as described in <NAME> and Groothuis-Oudshoorn (2011) <doi:10.18637/jss.v045.i03>. Each variable has its own imputation model. Built-in imputation models are provided for continuous data (predictive mean matching, normal), binary data (logistic regression), unordered categorical data (polytomous logistic regression) and ordered categorical data (proportional odds). MICE can also impute continuous two-level data (normal model, pan, second-level variables). Passive imputation can be used to maintain consistency between variables. Various diagnostic plots are available to inspect the quality of the imputations.
---

r-mime[¶](#r-mime)
===

Homepage: <https://github.com/yihui/mime>
Spack package: [r-mime/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/r-mime/package.py)
Versions: 0.5, 0.4
Build Dependencies: [r](#r)
Link Dependencies: [r](#r)
Run Dependencies: [r](#r)
Description: Guesses the MIME type from a filename extension using the data derived from /etc/mime.types in UNIX-type systems.

---

r-minfi[¶](#r-minfi)
===

Homepage: <https://bioconductor.org/packages/minfi/>
Spack package: [r-minfi/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/r-minfi/package.py)
Versions: 1.22.1
Build Dependencies: [r-quadprog](#r-quadprog), [r-biocgenerics](#r-biocgenerics), [r-data-table](#r-data-table), [r-mclust](#r-mclust), [r-geoquery](#r-geoquery), [r-preprocesscore](#r-preprocesscore), [r-iranges](#r-iranges), [r-bumphunter](#r-bumphunter), [r-biobase](#r-biobase), [r-illuminaio](#r-illuminaio), [r-genomeinfodb](#r-genomeinfodb), [r-nlme](#r-nlme), [r-genomicranges](#r-genomicranges), [r-siggenes](#r-siggenes), [r-limma](#r-limma), [r-s4vectors](#r-s4vectors), [r-beanplot](#r-beanplot), [r-rcolorbrewer](#r-rcolorbrewer), [r-summarizedexperiment](#r-summarizedexperiment), [r-biostrings](#r-biostrings), [r](#r), [r-matrixstats](#r-matrixstats), [r-nor1mix](#r-nor1mix), [r-reshape](#r-reshape), [r-lattice](#r-lattice), [r-genefilter](#r-genefilter), [r-mass](#r-mass)
Link Dependencies: [r](#r)
Run Dependencies: [r-quadprog](#r-quadprog), [r-biocgenerics](#r-biocgenerics), [r-data-table](#r-data-table), [r-mclust](#r-mclust), [r-geoquery](#r-geoquery), [r-preprocesscore](#r-preprocesscore), [r-iranges](#r-iranges), [r-bumphunter](#r-bumphunter), [r-biobase](#r-biobase), [r-illuminaio](#r-illuminaio), [r-genomeinfodb](#r-genomeinfodb), [r-nlme](#r-nlme), [r-genomicranges](#r-genomicranges), [r-siggenes](#r-siggenes), [r-limma](#r-limma), [r-s4vectors](#r-s4vectors), [r-beanplot](#r-beanplot), [r-rcolorbrewer](#r-rcolorbrewer), [r-summarizedexperiment](#r-summarizedexperiment), [r-biostrings](#r-biostrings), [r](#r), [r-matrixstats](#r-matrixstats), [r-nor1mix](#r-nor1mix), [r-reshape](#r-reshape), [r-lattice](#r-lattice), [r-genefilter](#r-genefilter), [r-mass](#r-mass)
Description: Tools to analyze & visualize Illumina Infinium methylation arrays.

---

r-minqa[¶](#r-minqa)
===

Homepage: <http://optimizer.r-forge.r-project.org/>
Spack package: [r-minqa/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/r-minqa/package.py)
Versions: 1.2.4
Build Dependencies: [r](#r), [r-rcpp](#r-rcpp)
Link Dependencies: [r](#r)
Run Dependencies: [r](#r), [r-rcpp](#r-rcpp)
Description: Derivative-free optimization by quadratic approximation based on an interface to Fortran implementations by <NAME>.

---

r-misc3d[¶](#r-misc3d)
===

Homepage: <https://cran.r-project.org/web/packages/misc3d/index.html>
Spack package: [r-misc3d/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/r-misc3d/package.py)
Versions: 0.8-4
Build Dependencies: [r](#r)
Link Dependencies: [r](#r)
Run Dependencies: [r](#r)
Description: A collection of miscellaneous 3d plots, including isosurfaces.

---

r-mitml[¶](#r-mitml)
===

Homepage: <https://cran.r-project.org/package=mitml>
Spack package: [r-mitml/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/r-mitml/package.py)
Versions: 0.3-5
Build Dependencies: [r](#r), [r-pan](#r-pan), [r-haven](#r-haven), [r-jomo](#r-jomo)
Link Dependencies: [r](#r)
Run Dependencies: [r](#r), [r-pan](#r-pan), [r-haven](#r-haven), [r-jomo](#r-jomo)
Description: Provides tools for multiple imputation of missing data in multilevel modeling. Includes a user-friendly interface to the packages 'pan' and 'jomo', and several functions for visualization, data management and the analysis of multiply imputed data sets.
---

r-mixtools[¶](#r-mixtools)
===

Homepage: <https://cran.r-project.org/web/packages/mixtools/index.html>
Spack package: [r-mixtools/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/r-mixtools/package.py)
Versions: 1.1.0, 1.0.4
Build Dependencies: [r](#r), [r-survival](#r-survival), [r-segmented](#r-segmented), [r-mass](#r-mass)
Link Dependencies: [r](#r)
Run Dependencies: [r](#r), [r-survival](#r-survival), [r-segmented](#r-segmented), [r-mass](#r-mass)
Description: mixtools: Tools for Analyzing Finite Mixture Models. Analyzes finite mixture models for various parametric and semiparametric settings.

---

r-mlbench[¶](#r-mlbench)
===

Homepage: <https://cran.r-project.org/web/packages/mlbench/index.html>
Spack package: [r-mlbench/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/r-mlbench/package.py)
Versions: 2.1-1
Build Dependencies: [r](#r), [r-lattice](#r-lattice)
Link Dependencies: [r](#r)
Run Dependencies: [r](#r), [r-lattice](#r-lattice)
Description: A collection of artificial and real-world machine learning benchmark problems, including, e.g., several data sets from the UCI repository.
---

r-mlinterfaces[¶](#r-mlinterfaces)
===

Homepage: <https://www.bioconductor.org/packages/MLInterfaces/>
Spack package: [r-mlinterfaces/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/r-mlinterfaces/package.py)
Versions: 1.56.0
Build Dependencies: [r](#r), [r-biocgenerics](#r-biocgenerics), [r-gbm](#r-gbm), [r-shiny](#r-shiny), [r-ggvis](#r-ggvis), [r-sfsmisc](#r-sfsmisc), [r-pls](#r-pls), [r-fpc](#r-fpc), [r-rcolorbrewer](#r-rcolorbrewer), [r-rda](#r-rda), [r-gdata](#r-gdata), [r-mlbench](#r-mlbench), [r-biobase](#r-biobase), [r-hwriter](#r-hwriter), [r-genefilter](#r-genefilter), [r-threejs](#r-threejs)
Link Dependencies: [r](#r)
Run Dependencies: [r](#r), [r-biocgenerics](#r-biocgenerics), [r-gbm](#r-gbm), [r-shiny](#r-shiny), [r-ggvis](#r-ggvis), [r-sfsmisc](#r-sfsmisc), [r-pls](#r-pls), [r-fpc](#r-fpc), [r-rcolorbrewer](#r-rcolorbrewer), [r-rda](#r-rda), [r-gdata](#r-gdata), [r-mlbench](#r-mlbench), [r-biobase](#r-biobase), [r-hwriter](#r-hwriter), [r-genefilter](#r-genefilter), [r-threejs](#r-threejs)
Description: This package provides uniform interfaces to machine learning code for data in R and Bioconductor containers.
---

r-mlr[¶](#r-mlr)
===

Homepage: <https://github.com/mlr-org/mlr/>
Spack package: [r-mlr/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/r-mlr/package.py)
Versions: 2.12.1, 2.12
Build Dependencies: [r](#r), [r-stringi](#r-stringi), [r-parallelmap](#r-parallelmap), [r-ggplot2](#r-ggplot2), [r-bbmisc](#r-bbmisc), [r-backports](#r-backports), [r-xml](#r-xml), [r-paramhelpers](#r-paramhelpers), [r-checkmate](#r-checkmate), [r-data-table](#r-data-table)
Link Dependencies: [r](#r)
Run Dependencies: [r](#r), [r-stringi](#r-stringi), [r-parallelmap](#r-parallelmap), [r-ggplot2](#r-ggplot2), [r-bbmisc](#r-bbmisc), [r-backports](#r-backports), [r-xml](#r-xml), [r-paramhelpers](#r-paramhelpers), [r-checkmate](#r-checkmate), [r-data-table](#r-data-table)
Description: Interface to a large number of classification and regression techniques, including machine-readable parameter descriptions. There is also an experimental extension for survival analysis, clustering and general, example-specific cost-sensitive learning. Generic resampling, including cross-validation, bootstrapping and subsampling. Hyperparameter tuning with modern optimization techniques, for single- and multi-objective problems. Filter and wrapper methods for feature selection. Extension of basic learners with additional operations common in machine learning, also allowing for easy nested resampling. Most operations can be parallelized.
---

r-mlrmbo[¶](#r-mlrmbo)
===

Homepage: <https://github.com/mlr-org/mlrMBO/>
Spack package: [r-mlrmbo/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/r-mlrmbo/package.py)
Versions: 1.1.1, 1.1.0
Build Dependencies: [r](#r), [r-paramhelpers](#r-paramhelpers), [r-parallelmap](#r-parallelmap), [r-data-table](#r-data-table), [r-lhs](#r-lhs), [r-smoof](#r-smoof), [r-backports](#r-backports), [r-checkmate](#r-checkmate), [r-mlr](#r-mlr), [r-bbmisc](#r-bbmisc)
Link Dependencies: [r](#r)
Run Dependencies: [r](#r), [r-paramhelpers](#r-paramhelpers), [r-parallelmap](#r-parallelmap), [r-data-table](#r-data-table), [r-lhs](#r-lhs), [r-smoof](#r-smoof), [r-backports](#r-backports), [r-checkmate](#r-checkmate), [r-mlr](#r-mlr), [r-bbmisc](#r-bbmisc)
Description: Flexible and comprehensive R toolbox for model-based optimization ('MBO'), also known as Bayesian optimization. It is designed for both single- and multi-objective optimization with mixed continuous, categorical and conditional parameters. The machine learning toolbox 'mlr' provides dozens of regression learners to model the performance of the target algorithm with respect to the parameter settings. It provides many different infill criteria to guide the search process. Additional features include multi-point batch proposal, parallel execution as well as visualization and sophisticated logging mechanisms, which is especially useful for teaching and understanding of algorithm behavior. 'mlrMBO' is implemented in a modular fashion, such that single components can be easily replaced or adapted by the user for specific use cases.
---

r-mmwrweek[¶](#r-mmwrweek)
===

Homepage: <https://cran.r-project.org/package=MMWRweek>
Spack package: [r-mmwrweek/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/r-mmwrweek/package.py)
Versions: 0.1.1
Build Dependencies: [r](#r)
Link Dependencies: [r](#r)
Run Dependencies: [r](#r)
Description: The first day of any MMWR week is Sunday. MMWR week numbering is sequential beginning with 1 and incrementing with each week to a maximum of 52 or 53. MMWR week #1 of an MMWR year is the first week of the year that has at least four days in the calendar year. This package provides functionality to convert Dates to MMWR day, week, and year and the reverse.

---

r-mnormt[¶](#r-mnormt)
===

Homepage: <http://azzalini.stat.unipd.it/SW/Pkg-mnormt>
Spack package: [r-mnormt/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/r-mnormt/package.py)
Versions: 1.5-5
Build Dependencies: [r](#r)
Link Dependencies: [r](#r)
Run Dependencies: [r](#r)
Description: Functions are provided for computing the density and the distribution function of multivariate normal and "t" random variables, and for generating random vectors sampled from these distributions. Probabilities are computed via non-Monte Carlo methods; different routines are used in the case d=1, d=2, d>2, if d denotes the number of dimensions.

---

r-modelmetrics[¶](#r-modelmetrics)
===

Homepage: <https://cran.r-project.org/package=ModelMetrics>
Spack package: [r-modelmetrics/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/r-modelmetrics/package.py)
Versions: 1.1.0
Build Dependencies: [r](#r), [r-rcpp](#r-rcpp)
Link Dependencies: [r](#r)
Run Dependencies: [r](#r), [r-rcpp](#r-rcpp)
Description: Collection of metrics for evaluating models written in C++ using 'Rcpp'.
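The MMWRweek description above fully specifies the week rule: weeks run Sunday through Saturday, and week 1 is the first week with at least four days in the calendar year. A minimal Python sketch of that rule (illustrative only; the actual package implements this in R):

```python
from datetime import date, timedelta

def mmwr_week(d: date) -> tuple[int, int]:
    """Return (MMWR year, MMWR week) per the rule quoted above:
    weeks start on Sunday; week 1 is the first week of the year
    that has at least four days in the calendar year."""
    def week_start(day: date) -> date:
        # Sunday beginning the week that contains `day`.
        return day - timedelta(days=(day.weekday() + 1) % 7)

    def first_week_start(year: int) -> date:
        start = week_start(date(year, 1, 1))
        # The week has >= 4 days in `year` iff its Wednesday falls in `year`.
        if (start + timedelta(days=3)).year < year:
            start += timedelta(days=7)
        return start

    # Late-December dates can belong to the next MMWR year,
    # early-January dates to the previous one.
    start = first_week_start(d.year + 1)
    if d >= start:
        year = d.year + 1
    else:
        start = first_week_start(d.year)
        if d < start:
            year = d.year - 1
            start = first_week_start(year)
        else:
            year = d.year
    return year, (d - start).days // 7 + 1
```

For example, January 1, 2021 (a Friday) falls in week 53 of MMWR year 2020, while Sunday, January 3, 2021 opens week 1 of MMWR year 2021.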
---

r-modelr[¶](#r-modelr)
===

Homepage:
* <https://github.com/hadley/modelr>

Spack package:
* [r-modelr/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/r-modelr/package.py)

Versions: 0.1.1

Build Dependencies: [r](#r), [r-lazyeval](#r-lazyeval), [r-purrr](#r-purrr), [r-broom](#r-broom), [r-tibble](#r-tibble), [r-dplyr](#r-dplyr), [r-magrittr](#r-magrittr), [r-tidyr](#r-tidyr)

Link Dependencies: [r](#r)

Run Dependencies: [r](#r), [r-lazyeval](#r-lazyeval), [r-purrr](#r-purrr), [r-broom](#r-broom), [r-tibble](#r-tibble), [r-dplyr](#r-dplyr), [r-magrittr](#r-magrittr), [r-tidyr](#r-tidyr)

Description: Functions for modelling that help you seamlessly integrate modelling into a pipeline of data manipulation and visualisation.

---

r-modeltools[¶](#r-modeltools)
===

Homepage:
* <https://cran.r-project.org/package=modeltools>

Spack package:
* [r-modeltools/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/r-modeltools/package.py)

Versions: 0.2-21

Build Dependencies: [r](#r)

Link Dependencies: [r](#r)

Run Dependencies: [r](#r)

Description: A collection of tools to deal with statistical models.

---

r-mpm[¶](#r-mpm)
===

Homepage:
* <https://cran.rstudio.com/web/packages/mpm/index.html>

Spack package:
* [r-mpm/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/r-mpm/package.py)

Versions: 1.0-22

Build Dependencies: [r](#r), [r-kernsmooth](#r-kernsmooth)

Link Dependencies: [r](#r)

Run Dependencies: [r](#r), [r-kernsmooth](#r-kernsmooth)

Description: Exploratory graphical analysis of multivariate data, specifically gene expression data, with different projection methods: principal component analysis, correspondence analysis, spectral map analysis.
---

r-msnbase[¶](#r-msnbase)
===

Homepage:
* <https://www.bioconductor.org/packages/MSnbase/>

Spack package:
* [r-msnbase/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/r-msnbase/package.py)

Versions: 2.2.0

Build Dependencies: [r-maldiquant](#r-maldiquant), [r-biocgenerics](#r-biocgenerics), [r-impute](#r-impute), [r-ggplot2](#r-ggplot2), [r-pcamethods](#r-pcamethods), [r-plyr](#r-plyr), [r-mzid](#r-mzid), [r-preprocesscore](#r-preprocesscore), [r-affy](#r-affy), [r-xml](#r-xml), [r-iranges](#r-iranges), [r-digest](#r-digest), [r-s4vectors](#r-s4vectors), [r-vsn](#r-vsn), [r-biocparallel](#r-biocparallel), [r-protgenerics](#r-protgenerics), [r-biobase](#r-biobase), [r](#r), [r-lattice](#r-lattice), [r-mzr](#r-mzr), [r-rcpp](#r-rcpp)

Link Dependencies: [r](#r)

Run Dependencies: [r-maldiquant](#r-maldiquant), [r-biocgenerics](#r-biocgenerics), [r-impute](#r-impute), [r-ggplot2](#r-ggplot2), [r-pcamethods](#r-pcamethods), [r-plyr](#r-plyr), [r-mzid](#r-mzid), [r-preprocesscore](#r-preprocesscore), [r-affy](#r-affy), [r-xml](#r-xml), [r-iranges](#r-iranges), [r-digest](#r-digest), [r-s4vectors](#r-s4vectors), [r-vsn](#r-vsn), [r-biocparallel](#r-biocparallel), [r-protgenerics](#r-protgenerics), [r-biobase](#r-biobase), [r](#r), [r-lattice](#r-lattice), [r-mzr](#r-mzr), [r-rcpp](#r-rcpp)

Description: Manipulation, processing and visualisation of mass spectrometry and proteomics data.
---

r-multcomp[¶](#r-multcomp)
===

Homepage:
* <http://multcomp.r-forge.r-project.org/>

Spack package:
* [r-multcomp/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/r-multcomp/package.py)

Versions: 1.4-6

Build Dependencies: [r](#r), [r-sandwich](#r-sandwich), [r-mvtnorm](#r-mvtnorm), [r-codetools](#r-codetools), [r-th-data](#r-th-data), [r-survival](#r-survival)

Link Dependencies: [r](#r)

Run Dependencies: [r](#r), [r-sandwich](#r-sandwich), [r-mvtnorm](#r-mvtnorm), [r-codetools](#r-codetools), [r-th-data](#r-th-data), [r-survival](#r-survival)

Description: Simultaneous tests and confidence intervals for general linear hypotheses in parametric models, including linear, generalized linear, linear mixed effects, and survival models. The package includes demos reproducing analyses presented in the book "Multiple Comparisons Using R" (<NAME>, Westfall, 2010, CRC Press).

---

r-multicool[¶](#r-multicool)
===

Homepage:
* <https://cran.r-project.org/package=multicool>

Spack package:
* [r-multicool/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/r-multicool/package.py)

Versions: 0.1-9

Build Dependencies: [r](#r), [r-rcpp](#r-rcpp)

Link Dependencies: [r](#r)

Run Dependencies: [r](#r), [r-rcpp](#r-rcpp)

Description: Permutations of multisets in cool-lex order.
---

r-multtest[¶](#r-multtest)
===

Homepage:
* <https://www.bioconductor.org/packages/multtest/>

Spack package:
* [r-multtest/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/r-multtest/package.py)

Versions: 2.32.0

Build Dependencies: [r](#r), [r-biobase](#r-biobase), [r-biocgenerics](#r-biocgenerics)

Link Dependencies: [r](#r)

Run Dependencies: [r](#r), [r-biobase](#r-biobase), [r-biocgenerics](#r-biocgenerics)

Description: Resampling-based multiple hypothesis testing.

---

r-munsell[¶](#r-munsell)
===

Homepage:
* <https://cran.r-project.org/web/packages/munsell/index.html>

Spack package:
* [r-munsell/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/r-munsell/package.py)

Versions: 0.4.3

Build Dependencies: [r](#r), [r-colorspace](#r-colorspace)

Link Dependencies: [r](#r)

Run Dependencies: [r](#r), [r-colorspace](#r-colorspace)

Description: Provides easy access to, and manipulation of, the Munsell colours. Provides a mapping between Munsell's original notation (e.g. "5R 5/10") and hexadecimal strings suitable for use directly in R graphics. Also provides utilities to explore slices through the Munsell colour tree, to transform Munsell colours and display colour palettes.

---

r-mvtnorm[¶](#r-mvtnorm)
===

Homepage:
* <http://mvtnorm.r-forge.r-project.org/>

Spack package:
* [r-mvtnorm/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/r-mvtnorm/package.py)

Versions: 1.0-6, 1.0-5

Build Dependencies: [r](#r)

Link Dependencies: [r](#r)

Run Dependencies: [r](#r)

Description: Computes multivariate normal and t probabilities, quantiles, random deviates and densities.
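The munsell entry above maps notation such as "5R 5/10" to colours; the notation itself has a fixed structure (hue step and hue letters, then value/chroma). Purely as a sketch of that structure, and not of the package's actual colour conversion, a minimal parser (hypothetical names, not part of the R package) could look like:

```python
import re

# A basic Munsell chip: hue step + hue letter(s), then value/chroma,
# e.g. "5R 5/10" or "2.5YR 6/8".
MUNSELL = re.compile(r"^(\d+(?:\.\d+)?)([A-Z]{1,2})\s+(\d+(?:\.\d+)?)/(\d+(?:\.\d+)?)$")

def parse_munsell(spec):
    """Split a Munsell notation string into hue, value and chroma."""
    m = MUNSELL.match(spec.strip())
    if not m:
        raise ValueError(f"not a Munsell colour spec: {spec!r}")
    step, letters, value, chroma = m.groups()
    return {"hue": step + letters, "value": float(value), "chroma": float(chroma)}
```

For instance, `parse_munsell("5R 5/10")` yields hue "5R", value 5, chroma 10; the actual mapping to hex strings requires the package's colour tables.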
---

r-mzid[¶](#r-mzid)
===

Homepage:
* <https://www.bioconductor.org/packages/mzID/>

Spack package:
* [r-mzid/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/r-mzid/package.py)

Versions: 1.14.0

Build Dependencies: [r](#r), [r-iterators](#r-iterators), [r-protgenerics](#r-protgenerics), [r-plyr](#r-plyr), [r-doparallel](#r-doparallel), [r-foreach](#r-foreach), [r-xml](#r-xml)

Link Dependencies: [r](#r)

Run Dependencies: [r](#r), [r-iterators](#r-iterators), [r-protgenerics](#r-protgenerics), [r-plyr](#r-plyr), [r-doparallel](#r-doparallel), [r-foreach](#r-foreach), [r-xml](#r-xml)

Description: A parser for mzIdentML files implemented using the XML package. The parser tries to be general and able to handle all types of mzIdentML files, with the drawback of having less 'pretty' output than a vendor-specific parser. Please contact the maintainer with any problems and supply an mzIdentML file so the problems can be fixed quickly.

---

r-mzr[¶](#r-mzr)
===

Homepage:
* <https://www.bioconductor.org/packages/mzR/>

Spack package:
* [r-mzr/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/r-mzr/package.py)

Versions: 2.10.0

Build Dependencies: [r](#r), [netcdf](#netcdf), [r-zlibbioc](#r-zlibbioc), [r-protgenerics](#r-protgenerics), [r-biobase](#r-biobase), [r-rcpp](#r-rcpp), [r-biocgenerics](#r-biocgenerics)

Link Dependencies: [r](#r), [netcdf](#netcdf)

Run Dependencies: [r](#r), [r-zlibbioc](#r-zlibbioc), [r-protgenerics](#r-protgenerics), [r-biobase](#r-biobase), [r-rcpp](#r-rcpp), [r-biocgenerics](#r-biocgenerics)

Description: mzR provides a unified API to the common file formats and parsers available for mass spectrometry data. It comes with a wrapper for the ISB random access parser for mass spectrometry mzXML, mzData and mzML files. The package contains the original code written by the ISB, and a subset of the proteowizard library for mzML and mzIdentML.
The netCDF reading code has previously been used in XCMS.

---

r-nanotime[¶](#r-nanotime)
===

Homepage:
* <https://cran.r-project.org/package=nanotime>

Spack package:
* [r-nanotime/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/r-nanotime/package.py)

Versions: 0.2.0

Build Dependencies: [r](#r), [r-bit64](#r-bit64), [r-rcppcctz](#r-rcppcctz), [r-zoo](#r-zoo)

Link Dependencies: [r](#r)

Run Dependencies: [r](#r), [r-bit64](#r-bit64), [r-rcppcctz](#r-rcppcctz), [r-zoo](#r-zoo)

Description: Full 64-bit resolution date and time support with resolution up to nanosecond granularity is provided, with easy transition to and from the standard 'POSIXct' type.

---

r-ncbit[¶](#r-ncbit)
===

Homepage:
* <https://cran.r-project.org/package=ncbit>

Spack package:
* [r-ncbit/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/r-ncbit/package.py)

Versions: 2013.03.29

Build Dependencies: [r](#r)

Link Dependencies: [r](#r)

Run Dependencies: [r](#r)

Description: Making NCBI taxonomic data locally available and searchable as an R object.

---

r-ncdf4[¶](#r-ncdf4)
===

Homepage:
* <http://cirrus.ucsd.edu/~pierce/ncdf>

Spack package:
* [r-ncdf4/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/r-ncdf4/package.py)

Versions: 1.15

Build Dependencies: [r](#r), [netcdf](#netcdf)

Link Dependencies: [r](#r), [netcdf](#netcdf)

Run Dependencies: [r](#r)

Description: Provides a high-level R interface to data files written using Unidata's netCDF library (version 4 or earlier), which are binary data files that are portable across platforms and include metadata information in addition to the data sets. Using this package, netCDF files (either version 4 or "classic" version 3) can be opened and data sets read in easily. It is also easy to create new netCDF dimensions, variables, and files, in either version 3 or 4 format, and manipulate existing netCDF files.
This package replaces the former ncdf package, which only worked with netCDF version 3 files. For various reasons the names of the functions have had to be changed from the names in the ncdf package. The old ncdf package is still available at the URL given below, if you need to have backward compatibility. It should be possible to have both the ncdf and ncdf4 packages installed simultaneously without a problem. However, the ncdf package does not provide an interface for netCDF version 4 files.

---

r-network[¶](#r-network)
===

Homepage:
* <https://statnet.org>

Spack package:
* [r-network/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/r-network/package.py)

Versions: 1.13.0

Build Dependencies: [r](#r)

Link Dependencies: [r](#r)

Run Dependencies: [r](#r)

Description: Tools to create and modify network objects. The network class can represent a range of relational data types, and supports arbitrary vertex/edge/graph attributes.

---

r-networkd3[¶](#r-networkd3)
===

Homepage:
* <http://cran.r-project.org/package=networkD3>

Spack package:
* [r-networkd3/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/r-networkd3/package.py)

Versions: 0.2.12

Build Dependencies: [r](#r), [r-igraph](#r-igraph), [r-htmlwidgets](#r-htmlwidgets), [r-magrittr](#r-magrittr)

Link Dependencies: [r](#r)

Run Dependencies: [r](#r), [r-igraph](#r-igraph), [r-htmlwidgets](#r-htmlwidgets), [r-magrittr](#r-magrittr)

Description: Creates 'D3' 'JavaScript' network, tree, dendrogram, and Sankey graphs from 'R'.
---

r-nlme[¶](#r-nlme)
===

Homepage:
* <https://cran.r-project.org/package=nlme>

Spack package:
* [r-nlme/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/r-nlme/package.py)

Versions: 3.1-131, 3.1-130, 3.1-128

Build Dependencies: [r](#r), [r-lattice](#r-lattice)

Link Dependencies: [r](#r)

Run Dependencies: [r](#r), [r-lattice](#r-lattice)

Description: Fit and compare Gaussian linear and nonlinear mixed-effects models.

---

r-nloptr[¶](#r-nloptr)
===

Homepage:
* <https://cran.r-project.org/package=nloptr>

Spack package:
* [r-nloptr/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/r-nloptr/package.py)

Versions: 1.0.4

Build Dependencies: [r](#r), [nlopt](#nlopt), [r-testthat](#r-testthat)

Link Dependencies: [r](#r), [nlopt](#nlopt)

Run Dependencies: [r](#r), [r-testthat](#r-testthat)

Description: nloptr is an R interface to NLopt. NLopt is a free/open-source library for nonlinear optimization, providing a common interface for a number of different free optimization routines available online as well as original implementations of various other algorithms. See http://ab-initio.mit.edu/wiki/index.php/NLopt_Introduction for more information on the available algorithms. During installation on Unix the NLopt code is downloaded and compiled from the NLopt website.
---

r-nmf[¶](#r-nmf)
===

Homepage:
* <http://renozao.github.io/NMF>

Spack package:
* [r-nmf/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/r-nmf/package.py)

Versions: 0.20.6

Build Dependencies: [r-reshape2](#r-reshape2), [r-gridbase](#r-gridbase), [r-cluster](#r-cluster), [r-digest](#r-digest), [r-pkgmaker](#r-pkgmaker), [r-colorspace](#r-colorspace), [r-rcolorbrewer](#r-rcolorbrewer), [r-registry](#r-registry), [r](#r), [r-foreach](#r-foreach), [r-rngtools](#r-rngtools), [r-doparallel](#r-doparallel), [r-stringr](#r-stringr), [r-ggplot2](#r-ggplot2)

Link Dependencies: [r](#r)

Run Dependencies: [r-reshape2](#r-reshape2), [r-gridbase](#r-gridbase), [r-cluster](#r-cluster), [r-digest](#r-digest), [r-pkgmaker](#r-pkgmaker), [r-colorspace](#r-colorspace), [r-rcolorbrewer](#r-rcolorbrewer), [r-registry](#r-registry), [r](#r), [r-foreach](#r-foreach), [r-rngtools](#r-rngtools), [r-doparallel](#r-doparallel), [r-stringr](#r-stringr), [r-ggplot2](#r-ggplot2)

Description: Provides a framework to perform Non-negative Matrix Factorization (NMF). The package implements a set of already published algorithms and seeding methods, and provides a framework to test, develop and plug new/custom algorithms. Most of the built-in algorithms have been optimized in C++, and the main interface function provides an easy way of performing parallel computations on multicore machines.

---

r-nnet[¶](#r-nnet)
===

Homepage:
* <http://www.stats.ox.ac.uk/pub/MASS4/>

Spack package:
* [r-nnet/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/r-nnet/package.py)

Versions: 7.3-12

Build Dependencies: [r](#r)

Link Dependencies: [r](#r)

Run Dependencies: [r](#r)

Description: Software for feed-forward neural networks with a single hidden layer, and for multinomial log-linear models.
---

r-nnls[¶](#r-nnls)
===

Homepage:
* <https://cran.r-project.org/package=nnls>

Spack package:
* [r-nnls/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/r-nnls/package.py)

Versions: 1.4

Build Dependencies: [r](#r)

Link Dependencies: [r](#r)

Run Dependencies: [r](#r)

Description: An R interface to the Lawson-Hanson implementation of an algorithm for non-negative least squares (NNLS). Also allows the combination of non-negative and non-positive constraints.

---

r-nor1mix[¶](#r-nor1mix)
===

Homepage:
* <https://CRAN.R-project.org/package=nor1mix>

Spack package:
* [r-nor1mix/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/r-nor1mix/package.py)

Versions: 1.2-3

Build Dependencies: [r](#r)

Link Dependencies: [r](#r)

Run Dependencies: [r](#r)

Description: One-dimensional normal mixture model classes, for, e.g., density estimation or clustering algorithm research and teaching; provides the widely used Marron-Wand densities. Efficient random number generation and graphics; now also fits to data by ML (Maximum Likelihood) or EM estimation.

---

r-np[¶](#r-np)
===

Homepage:
* <https://github.com/JeffreyRacine/R-Package-np/>

Spack package:
* [r-np/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/r-np/package.py)

Versions: 0.60-2

Build Dependencies: [r](#r), [r-cubature](#r-cubature), [r-boot](#r-boot)

Link Dependencies: [r](#r)

Run Dependencies: [r](#r), [r-cubature](#r-cubature), [r-boot](#r-boot)

Description: This package provides a variety of nonparametric (and semiparametric) kernel methods that seamlessly handle a mix of continuous, unordered, and ordered factor data types.
We would like to gratefully acknowledge support from the Natural Sciences and Engineering Research Council of Canada (NSERC: www.nserc.ca), the Social Sciences and Humanities Research Council of Canada (SSHRC: www.sshrc.ca), and the Shared Hierarchical Academic Research Computing Network (SHARCNET: www.sharcnet.ca).

---

r-numderiv[¶](#r-numderiv)
===

Homepage:
* <https://cran.r-project.org/package=numDeriv>

Spack package:
* [r-numderiv/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/r-numderiv/package.py)

Versions: 2016.8-1

Build Dependencies: [r](#r)

Link Dependencies: [r](#r)

Run Dependencies: [r](#r)

Description: Methods for calculating (usually) accurate numerical first and second order derivatives.

---

r-oligoclasses[¶](#r-oligoclasses)
===

Homepage:
* <https://www.bioconductor.org/packages/oligoClasses/>

Spack package:
* [r-oligoclasses/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/r-oligoclasses/package.py)

Versions: 1.38.0

Build Dependencies: [r](#r), [r-ff](#r-ff), [r-genomicranges](#r-genomicranges), [r-s4vectors](#r-s4vectors), [r-rsqlite](#r-rsqlite), [r-summarizedexperiment](#r-summarizedexperiment), [r-biostrings](#r-biostrings), [r-iranges](#r-iranges), [r-biocinstaller](#r-biocinstaller), [r-biobase](#r-biobase), [r-affyio](#r-affyio), [r-foreach](#r-foreach), [r-biocgenerics](#r-biocgenerics)

Link Dependencies: [r](#r)

Run Dependencies: [r](#r), [r-ff](#r-ff), [r-genomicranges](#r-genomicranges), [r-s4vectors](#r-s4vectors), [r-rsqlite](#r-rsqlite), [r-summarizedexperiment](#r-summarizedexperiment), [r-biostrings](#r-biostrings), [r-iranges](#r-iranges), [r-biocinstaller](#r-biocinstaller), [r-biobase](#r-biobase), [r-affyio](#r-affyio), [r-foreach](#r-foreach), [r-biocgenerics](#r-biocgenerics)

Description: This package contains class definitions, validity checks, and initialization methods for classes used by the oligo and crlmm packages.
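The numDeriv entry above concerns accurate numerical first and second derivatives; the package itself defaults to Richardson extrapolation. As a simpler hedged illustration of the underlying idea, not of numDeriv's actual method, plain central differences can be sketched as follows:

```python
import math

def central_diff(f, x, h=1e-5):
    """First derivative of f at x via the central difference
    (f(x+h) - f(x-h)) / (2h), accurate to O(h^2)."""
    return (f(x + h) - f(x - h)) / (2 * h)

def second_diff(f, x, h=1e-4):
    """Second derivative via (f(x+h) - 2 f(x) + f(x-h)) / h^2,
    also O(h^2); h is kept larger to limit rounding error."""
    return (f(x + h) - 2 * f(x) + f(x - h)) / (h * h)
```

For example, `central_diff(math.sin, 0.0)` is close to cos(0) = 1, and `second_diff(math.exp, 0.0)` is close to exp(0) = 1; Richardson extrapolation combines several such step sizes to cancel the leading error terms.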
---

r-oo[¶](#r-oo)
===

Homepage:
* <https://github.com/HenrikBengtsson/R.oo>

Spack package:
* [r-oo/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/r-oo/package.py)

Versions: 1.21.0

Build Dependencies: [r](#r), [r-methodss3](#r-methodss3)

Link Dependencies: [r](#r)

Run Dependencies: [r](#r), [r-methodss3](#r-methodss3)

Description: Methods and classes for object-oriented programming in R with or without references. A large effort has been made to keep the definition of methods as simple as possible, with a minimum of maintenance for package developers. The package has been developed since 2001 and is now considered very stable. This is a cross-platform package implemented in pure R that defines standard S3 classes without any tricks.

---

r-openssl[¶](#r-openssl)
===

Homepage:
* <https://CRAN.R-project.org/package=openssl>

Spack package:
* [r-openssl/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/r-openssl/package.py)

Versions: 0.9.7, 0.9.6, 0.9.4

Build Dependencies: [r](#r), [openssl](#openssl)

Link Dependencies: [r](#r), [openssl](#openssl)

Run Dependencies: [r](#r)

Description: Bindings to OpenSSL libssl and libcrypto, plus custom SSH pubkey parsers. Supports RSA, DSA and EC curves P-256, P-384 and P-521. Cryptographic signatures can either be created and verified manually or via x509 certificates. AES can be used in cbc, ctr or gcm mode for symmetric encryption; RSA for asymmetric (public key) encryption or EC for Diffie-Hellman. High-level envelope functions combine RSA and AES for encrypting arbitrary sized data. Other utilities include key generators, hash functions (md5, sha1, sha256, etc.), base64 encoder, a secure random number generator, and 'bignum' math methods for manually performing crypto calculations on large multibyte integers.
---

r-org-hs-eg-db[¶](#r-org-hs-eg-db)
===

Homepage:
* <https://bioconductor.org/packages/org.Hs.eg.db/>

Spack package:
* [r-org-hs-eg-db/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/r-org-hs-eg-db/package.py)

Versions: 3.4.1

Build Dependencies: [r](#r), [r-annotationdbi](#r-annotationdbi)

Link Dependencies: [r](#r)

Run Dependencies: [r](#r), [r-annotationdbi](#r-annotationdbi)

Description: Genome wide annotation for Human, primarily based on mapping using Entrez Gene identifiers.

---

r-organismdbi[¶](#r-organismdbi)
===

Homepage:
* <https://bioconductor.org/packages/OrganismDbi/>

Spack package:
* [r-organismdbi/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/r-organismdbi/package.py)

Versions: 1.18.1

Build Dependencies: [r-iranges](#r-iranges), [r-graph](#r-graph), [r-dbi](#r-dbi), [r-s4vectors](#r-s4vectors), [r-biobase](#r-biobase), [r-genomicranges](#r-genomicranges), [r-biocinstaller](#r-biocinstaller), [r-genomicfeatures](#r-genomicfeatures), [r](#r), [r-annotationdbi](#r-annotationdbi), [r-rbgl](#r-rbgl), [r-biocgenerics](#r-biocgenerics)

Link Dependencies: [r](#r)

Run Dependencies: [r-iranges](#r-iranges), [r-graph](#r-graph), [r-dbi](#r-dbi), [r-s4vectors](#r-s4vectors), [r-biobase](#r-biobase), [r-genomicranges](#r-genomicranges), [r-biocinstaller](#r-biocinstaller), [r-genomicfeatures](#r-genomicfeatures), [r](#r), [r-annotationdbi](#r-annotationdbi), [r-rbgl](#r-rbgl), [r-biocgenerics](#r-biocgenerics)

Description: The package enables a simple unified interface to several annotation packages, each of which has its own schema, by taking advantage of the fact that each of these packages implements a select method.
---

r-packrat[¶](#r-packrat)
===

Homepage:
* <https://github.com/rstudio/packrat/>

Spack package:
* [r-packrat/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/r-packrat/package.py)

Versions: 0.4.8-1, 0.4.7-1

Build Dependencies: [r](#r)

Link Dependencies: [r](#r)

Run Dependencies: [r](#r)

Description: Manage the R packages your project depends on in an isolated, portable, and reproducible way.

---

r-pacman[¶](#r-pacman)
===

Homepage:
* <https://cran.r-project.org/package=pacman>

Spack package:
* [r-pacman/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/r-pacman/package.py)

Versions: 0.4.1

Build Dependencies: [r](#r), [r-devtools](#r-devtools)

Link Dependencies: [r](#r)

Run Dependencies: [r](#r), [r-devtools](#r-devtools)

Description: Tools to more conveniently perform tasks associated with add-on packages. pacman conveniently wraps library and package related functions and names them in an intuitive and consistent fashion. It seeks to combine functionality from lower level functions which can speed up workflow.

---

r-pamr[¶](#r-pamr)
===

Homepage:
* <https://cran.r-project.org/package=pamr>

Spack package:
* [r-pamr/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/r-pamr/package.py)

Versions: 1.55

Build Dependencies: [r](#r)

Link Dependencies: [r](#r)

Run Dependencies: [r](#r)

Description: Some functions for sample classification in microarrays.

---

r-pan[¶](#r-pan)
===

Homepage:
* <https://cran.r-project.org/package=pan>

Spack package:
* [r-pan/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/r-pan/package.py)

Versions: 1.4

Build Dependencies: [r](#r)

Link Dependencies: [r](#r)

Run Dependencies: [r](#r)

Description: Multiple imputation for multivariate panel or clustered data.
---

r-parallelmap[¶](#r-parallelmap)
===

Homepage:
* <https://github.com/berndbischl/parallelMap>

Spack package:
* [r-parallelmap/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/r-parallelmap/package.py)

Versions: 1.3

Build Dependencies: [r](#r), [r-checkmate](#r-checkmate), [r-bbmisc](#r-bbmisc)

Link Dependencies: [r](#r)

Run Dependencies: [r](#r), [r-checkmate](#r-checkmate), [r-bbmisc](#r-bbmisc)

Description: Unified parallelization framework for multiple back-ends, designed for internal package and interactive usage. The main operation is a parallel "map" over lists. Supports local, multicore, mpi and BatchJobs modes. Allows "tagging" of the parallel operation with a level name that can be later selected by the user to switch on parallel execution for exactly this operation.

---

r-paramhelpers[¶](#r-paramhelpers)
===

Homepage:
* <https://github.com/berndbischl/ParamHelpers>

Spack package:
* [r-paramhelpers/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/r-paramhelpers/package.py)

Versions: 1.10

Build Dependencies: [r](#r), [r-checkmate](#r-checkmate), [r-bbmisc](#r-bbmisc)

Link Dependencies: [r](#r)

Run Dependencies: [r](#r), [r-checkmate](#r-checkmate), [r-bbmisc](#r-bbmisc)

Description: Functions for parameter descriptions and operations in black-box optimization, tuning and machine learning. Parameters can be described (type, constraints, defaults, etc.), combined into parameter sets, and can in general be programmed on. A useful OptPath object (archive) to log function evaluations is also provided.
---

r-party[¶](#r-party)
===

Homepage:
* <https://cran.r-project.org/web/packages/party/index.html>

Spack package:
* [r-party/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/r-party/package.py)

Versions: 1.1-2

Build Dependencies: [r](#r), [r-modeltools](#r-modeltools), [r-mvtnorm](#r-mvtnorm), [r-sandwich](#r-sandwich), [r-strucchange](#r-strucchange), [r-survival](#r-survival), [r-zoo](#r-zoo), [r-coin](#r-coin)

Link Dependencies: [r](#r)

Run Dependencies: [r](#r), [r-modeltools](#r-modeltools), [r-mvtnorm](#r-mvtnorm), [r-sandwich](#r-sandwich), [r-strucchange](#r-strucchange), [r-survival](#r-survival), [r-zoo](#r-zoo), [r-coin](#r-coin)

Description: A computational toolbox for recursive partitioning.

---

r-partykit[¶](#r-partykit)
===

Homepage:
* <http://partykit.r-forge.r-project.org/partykit>

Spack package:
* [r-partykit/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/r-partykit/package.py)

Versions: 1.1-1

Build Dependencies: [r](#r), [r-survival](#r-survival), [r-formula](#r-formula)

Link Dependencies: [r](#r)

Run Dependencies: [r](#r), [r-survival](#r-survival), [r-formula](#r-formula)

Description: A toolkit with infrastructure for representing, summarizing, and visualizing tree-structured regression and classification models. This unified infrastructure can be used for reading/coercing tree models from different sources ('rpart', 'RWeka', 'PMML'), yielding objects that share functionality for print()/plot()/predict() methods. Furthermore, new and improved reimplementations of conditional inference trees (ctree()) and model-based recursive partitioning (mob()) from the 'party' package are provided based on the new infrastructure.
---

r-pathview[¶](#r-pathview)
===

Homepage:
* <https://www.bioconductor.org/packages/pathview/>

Spack package:
* [r-pathview/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/r-pathview/package.py)

Versions: 1.16.7

Build Dependencies: [r](#r), [r-kegggraph](#r-kegggraph), [r-graph](#r-graph), [r-org-hs-eg-db](#r-org-hs-eg-db), [r-png](#r-png), [r-annotationdbi](#r-annotationdbi), [r-keggrest](#r-keggrest), [r-rgraphviz](#r-rgraphviz), [r-xml](#r-xml)

Link Dependencies: [r](#r)

Run Dependencies: [r](#r), [r-kegggraph](#r-kegggraph), [r-graph](#r-graph), [r-org-hs-eg-db](#r-org-hs-eg-db), [r-png](#r-png), [r-annotationdbi](#r-annotationdbi), [r-keggrest](#r-keggrest), [r-rgraphviz](#r-rgraphviz), [r-xml](#r-xml)

Description: Pathview is a tool set for pathway-based data integration and visualization. It maps and renders a wide variety of biological data on relevant pathway graphs. All users need to do is supply their data and specify the target pathway. Pathview automatically downloads the pathway graph data, parses the data file, maps user data to the pathway, and renders the pathway graph with the mapped data. In addition, Pathview also seamlessly integrates with pathway and gene set (enrichment) analysis tools for large-scale and fully automated analysis.

---

r-pbapply[¶](#r-pbapply)
===

Homepage:
* <https://cran.r-project.org/web/packages/pbapply/index.html>

Spack package:
* [r-pbapply/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/r-pbapply/package.py)

Versions: 1.3-3, 1.3-2, 1.3-1, 1.3-0, 1.2-2

Build Dependencies: [r](#r)

Link Dependencies: [r](#r)

Run Dependencies: [r](#r)

Description: A lightweight package that adds a progress bar to vectorized R apply functions.
---

r-pbdzmq[¶](#r-pbdzmq)
===

Homepage:
* <http://r-pbd.org/>

Spack package:
* [r-pbdzmq/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/r-pbdzmq/package.py)

Versions: 0.2-4

Build Dependencies: [r](#r), [zeromq](#zeromq), [r-r6](#r-r6)

Link Dependencies: [r](#r), [zeromq](#zeromq)

Run Dependencies: [r](#r), [r-r6](#r-r6)

Description: 'ZeroMQ' is a well-known library for high-performance asynchronous messaging in scalable, distributed applications. This package provides high-level R wrapper functions to easily utilize 'ZeroMQ'. We mainly focus on interactive client/server programming frameworks. For convenience, a minimal 'ZeroMQ' library (4.1.0 rc1) is shipped with 'pbdZMQ', which can be used if no system installation of 'ZeroMQ' is available. A few wrapper functions compatible with 'rzmq' are also provided.

---

r-pbkrtest[¶](#r-pbkrtest)
===

Homepage:
* <http://people.math.aau.dk/~sorenh/software/pbkrtest/>

Spack package:
* [r-pbkrtest/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/r-pbkrtest/package.py)

Versions: 0.4-6, 0.4-4

Build Dependencies: [r](#r), [r-lme4](#r-lme4), [r-matrix](#r-matrix), [r-mass](#r-mass)

Link Dependencies: [r](#r)

Run Dependencies: [r](#r), [r-lme4](#r-lme4), [r-matrix](#r-matrix), [r-mass](#r-mass)

Description: Tests in mixed effects models. Attention is on mixed effects models as implemented in the 'lme4' package. This package implements a parametric bootstrap test and a Kenward-Roger modification of F-tests for linear mixed effects models, and a parametric bootstrap test for generalized linear mixed models.
---

r-pcamethods[¶](#r-pcamethods)
===

Homepage:
* <http://bioconductor.org/packages/pcaMethods/>

Spack package:
* [r-pcamethods/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/r-pcamethods/package.py)

Versions: 1.68.0

Build Dependencies: [r](#r), [r-biobase](#r-biobase), [r-biocgenerics](#r-biocgenerics), [r-mass](#r-mass), [r-rcpp](#r-rcpp)

Link Dependencies: [r](#r)

Run Dependencies: [r](#r), [r-biobase](#r-biobase), [r-biocgenerics](#r-biocgenerics), [r-mass](#r-mass), [r-rcpp](#r-rcpp)

Description: Provides Bayesian PCA, Probabilistic PCA, Nipals PCA, Inverse Non-Linear PCA and the conventional SVD PCA. A cluster-based method for missing value estimation is included for comparison. BPCA, PPCA and NipalsPCA may be used to perform PCA on incomplete data as well as for accurate missing value estimation. A set of methods for printing and plotting the results is also provided. All PCA methods make use of the same data structure (pcaRes) to provide a common interface to the PCA results. Initiated at the Max-Planck Institute for Molecular Plant Physiology, Golm, Germany.

---

r-pcapp[¶](#r-pcapp)
===

Homepage:
* <https://cran.r-project.org/web/packages/pcaPP/index.html>

Spack package:
* [r-pcapp/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/r-pcapp/package.py)

Versions: 1.9-72, 1.9-70, 1.9-61, 1.9-60, 1.9-50

Build Dependencies: [r](#r), [r-mvtnorm](#r-mvtnorm)

Link Dependencies: [r](#r)

Run Dependencies: [r](#r), [r-mvtnorm](#r-mvtnorm)

Description: Provides functions for robust PCA by projection pursuit.
---

r-permute[¶](#r-permute)
===

Homepage:
* <https://github.com/gavinsimpson/permute>

Spack package:
* [r-permute/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/r-permute/package.py)

Versions: 0.9-4
Build Dependencies: [r](#r)
Link Dependencies: [r](#r)
Run Dependencies: [r](#r)

Description: A set of restricted permutation designs for freely exchangeable, line transects (time series), and spatial grid designs plus permutation of blocks (groups of samples) is provided. 'permute' also allows split-plot designs, in which the whole-plots or split-plots or both can be freely exchangeable or one of the restricted designs. The 'permute' package is modelled after the permutation schemes of 'Canoco 3.1' (and later) by <NAME>.

---

r-pfam-db[¶](#r-pfam-db)
===

Homepage:
* <https://www.bioconductor.org/packages/PFAM.db/>

Spack package:
* [r-pfam-db/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/r-pfam-db/package.py)

Versions: 3.4.1
Build Dependencies: [r](#r), [r-annotationdbi](#r-annotationdbi)
Link Dependencies: [r](#r)
Run Dependencies: [r](#r), [r-annotationdbi](#r-annotationdbi)

Description: A set of protein ID mappings for PFAM assembled using data from public repositories.
---

r-phangorn[¶](#r-phangorn)
===

Homepage:
* <https://cran.r-project.org/package=phangorn>

Spack package:
* [r-phangorn/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/r-phangorn/package.py)

Versions: 2.3.1
Build Dependencies: [r](#r), [r-quadprog](#r-quadprog), [r-magrittr](#r-magrittr), [r-matrix](#r-matrix), [r-fastmatch](#r-fastmatch), [r-igraph](#r-igraph), [r-ape](#r-ape), [r-rcpp](#r-rcpp)
Link Dependencies: [r](#r)
Run Dependencies: [r](#r), [r-quadprog](#r-quadprog), [r-magrittr](#r-magrittr), [r-matrix](#r-matrix), [r-fastmatch](#r-fastmatch), [r-igraph](#r-igraph), [r-ape](#r-ape), [r-rcpp](#r-rcpp)

Description: Package contains methods for estimation of phylogenetic trees and networks using Maximum Likelihood, Maximum Parsimony, distance methods and Hadamard conjugation. Allows comparison of trees and model selection, and offers visualizations for trees and split networks.

---

r-phantompeakqualtools[¶](#r-phantompeakqualtools)
===

Homepage:
* <https://github.com/kundajelab/phantompeakqualtools>

Spack package:
* [r-phantompeakqualtools/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/r-phantompeakqualtools/package.py)

Versions: 1.14
Build Dependencies: [r](#r), [r-catools](#r-catools), [r-bitops](#r-bitops), [r-snowfall](#r-snowfall), [r-rsamtools](#r-rsamtools), [boost](#boost), [r-snow](#r-snow)
Link Dependencies: [r](#r), [boost](#boost)
Run Dependencies: [r](#r), [r-catools](#r-catools), [r-bitops](#r-bitops), [r-snowfall](#r-snowfall), [r-rsamtools](#r-rsamtools), [r-snow](#r-snow)

Description: Computes informative enrichment and quality measures for ChIP-seq/DNase-seq/FAIRE-seq/MNase-seq data. This is a modified version of r-spp to be used in conjunction with the phantompeakqualtools package.
---

r-phyloseq[¶](#r-phyloseq)
===

Homepage:
* <https://www.bioconductor.org/packages/phyloseq/>

Spack package:
* [r-phyloseq/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/r-phyloseq/package.py)

Versions: 1.20.0
Build Dependencies: [r-ade4](#r-ade4), [r-multtest](#r-multtest), [r-reshape2](#r-reshape2), [r-ggplot2](#r-ggplot2), [r-data-table](#r-data-table), [r-cluster](#r-cluster), [r-vegan](#r-vegan), [r-plyr](#r-plyr), [r-scales](#r-scales), [r-biostrings](#r-biostrings), [r](#r), [r-biomformat](#r-biomformat), [r-biobase](#r-biobase), [r-igraph](#r-igraph), [r-ape](#r-ape), [r-foreach](#r-foreach), [r-biocgenerics](#r-biocgenerics)
Link Dependencies: [r](#r)
Run Dependencies: [r-ade4](#r-ade4), [r-multtest](#r-multtest), [r-reshape2](#r-reshape2), [r-ggplot2](#r-ggplot2), [r-data-table](#r-data-table), [r-cluster](#r-cluster), [r-vegan](#r-vegan), [r-plyr](#r-plyr), [r-scales](#r-scales), [r-biostrings](#r-biostrings), [r](#r), [r-biomformat](#r-biomformat), [r-biobase](#r-biobase), [r-igraph](#r-igraph), [r-ape](#r-ape), [r-foreach](#r-foreach), [r-biocgenerics](#r-biocgenerics)

Description: phyloseq provides a set of classes and tools to facilitate the import, storage, analysis, and graphical display of microbiome census data.
---

r-picante[¶](#r-picante)
===

Homepage:
* <https://cran.r-project.org/package=picante>

Spack package:
* [r-picante/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/r-picante/package.py)

Versions: 1.6-2, 1.6-1
Build Dependencies: [r](#r), [r-vegan](#r-vegan), [r-ape](#r-ape), [r-nlme](#r-nlme)
Link Dependencies: [r](#r)
Run Dependencies: [r](#r), [r-vegan](#r-vegan), [r-ape](#r-ape), [r-nlme](#r-nlme)

Description: R tools for integrating phylogenies and ecology.

---

r-pkgconfig[¶](#r-pkgconfig)
===

Homepage:
* <https://cran.rstudio.com/web/packages/pkgconfig/index.html>

Spack package:
* [r-pkgconfig/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/r-pkgconfig/package.py)

Versions: 2.0.1
Build Dependencies: [r](#r)
Link Dependencies: [r](#r)
Run Dependencies: [r](#r)

Description: Set configuration options on a per-package basis. Options set by a given package only apply to that package; other packages are unaffected.

---

r-pkgmaker[¶](#r-pkgmaker)
===

Homepage:
* <https://renozao.github.io/pkgmaker>

Spack package:
* [r-pkgmaker/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/r-pkgmaker/package.py)

Versions: 0.22
Build Dependencies: [r](#r), [r-codetools](#r-codetools), [r-stringr](#r-stringr), [r-digest](#r-digest), [r-xtable](#r-xtable), [r-registry](#r-registry)
Link Dependencies: [r](#r)
Run Dependencies: [r](#r), [r-codetools](#r-codetools), [r-stringr](#r-stringr), [r-digest](#r-digest), [r-xtable](#r-xtable), [r-registry](#r-registry)

Description: This package provides some low-level utilities to use for package development. It currently provides managers for multiple package-specific options and registries, vignette, unit test and bibtex related utilities. It serves as a base package for packages like NMF, RcppOctave, doRNG, and as an incubator package for other general-purpose utilities that will eventually be packaged separately. It is still under heavy development and changes in the interface(s) are more than likely to happen.

---

r-plogr[¶](#r-plogr)
===

Homepage:
* <https://cran.r-project.org/package=plogr>

Spack package:
* [r-plogr/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/r-plogr/package.py)

Versions: 0.2.0, 0.1-1
Build Dependencies: [r](#r)
Link Dependencies: [r](#r)
Run Dependencies: [r](#r)

Description: A simple header-only logging library for C++. Add 'LinkingTo: plogr' to 'DESCRIPTION', and '#include <plogr.h>' in your C++ modules to use it.

---

r-plot3d[¶](#r-plot3d)
===

Homepage:
* <https://CRAN.R-project.org/package=plot3D>

Spack package:
* [r-plot3d/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/r-plot3d/package.py)

Versions: 1.1.1
Build Dependencies: [r](#r), [r-misc3d](#r-misc3d)
Link Dependencies: [r](#r)
Run Dependencies: [r](#r), [r-misc3d](#r-misc3d)

Description: Functions for viewing 2-D and 3-D data, including perspective plots, slice plots, surface plots, scatter plots, etc. Includes data sets from oceanography.
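As the 'plogr' description above notes, client packages opt in to this header-only logger through their package metadata rather than by linking a library. A minimal sketch of that wiring, for a hypothetical package named 'mypkg' (the package name and file path are ours, not from the Spack entry):

```
# DESCRIPTION of a hypothetical R package 'mypkg' using plogr from C++ code
Package: mypkg
Imports: plogr
LinkingTo: plogr

# Then, in src/mypkg.cpp, the header becomes visible on the include path:
#   #include <plogr.h>
```

Because the library is header-only, 'LinkingTo' only adds plogr's include directory at compile time; nothing is linked at install time.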
---

r-plotly[¶](#r-plotly)
===

Homepage:
* <https://cran.r-project.org/web/packages/plotly/index.html>

Spack package:
* [r-plotly/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/r-plotly/package.py)

Versions: 4.7.1, 4.7.0, 4.6.0, 4.5.6, 4.5.2
Build Dependencies: [r-base64enc](#r-base64enc), [r](#r), [r-purrr](#r-purrr), [r-ggplot2](#r-ggplot2), [r-htmlwidgets](#r-htmlwidgets), [r-dplyr](#r-dplyr), [r-htmltools](#r-htmltools), [r-hexbin](#r-hexbin), [r-crosstalk](#r-crosstalk), [r-httr](#r-httr), [r-tidyr](#r-tidyr), [r-data-table](#r-data-table)
Link Dependencies: [r](#r)
Run Dependencies: [r-base64enc](#r-base64enc), [r](#r), [r-purrr](#r-purrr), [r-ggplot2](#r-ggplot2), [r-htmlwidgets](#r-htmlwidgets), [r-dplyr](#r-dplyr), [r-htmltools](#r-htmltools), [r-hexbin](#r-hexbin), [r-crosstalk](#r-crosstalk), [r-httr](#r-httr), [r-tidyr](#r-tidyr), [r-data-table](#r-data-table)

Description: Easily translate 'ggplot2' graphs to an interactive web-based version and/or create custom web-based visualizations directly from R.

---

r-plotrix[¶](#r-plotrix)
===

Homepage:
* <https://cran.r-project.org/package=plotrix>

Spack package:
* [r-plotrix/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/r-plotrix/package.py)

Versions: 3.6-4, 3.6-3
Build Dependencies: [r](#r)
Link Dependencies: [r](#r)
Run Dependencies: [r](#r)

Description: Lots of plots, various labeling, axis and color scaling functions.

---

r-pls[¶](#r-pls)
===

Homepage:
* <https://cran.r-project.org/package=pls>

Spack package:
* [r-pls/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/r-pls/package.py)

Versions: 2.6-0
Build Dependencies: [r](#r)
Link Dependencies: [r](#r)
Run Dependencies: [r](#r)

Description: Multivariate regression methods Partial Least Squares Regression (PLSR), Principal Component Regression (PCR) and Canonical Powered Partial Least Squares (CPPLS).
---

r-plyr[¶](#r-plyr)
===

Homepage:
* <http://had.co.nz/plyr>

Spack package:
* [r-plyr/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/r-plyr/package.py)

Versions: 1.8.4
Build Dependencies: [r](#r), [r-rcpp](#r-rcpp)
Link Dependencies: [r](#r)
Run Dependencies: [r](#r), [r-rcpp](#r-rcpp)

Description: A set of tools that solves a common set of problems: you need to break a big problem down into manageable pieces, operate on each piece and then put all the pieces back together. For example, you might want to fit a model to each spatial location or time point in your study, summarise data by panels or collapse high-dimensional arrays to simpler summary statistics. The development of 'plyr' has been generously supported by '<NAME>'.

---

r-pmcmr[¶](#r-pmcmr)
===

Homepage:
* <https://cran.r-project.org/package=PMCMR>

Spack package:
* [r-pmcmr/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/r-pmcmr/package.py)

Versions: 4.1
Build Dependencies: [r](#r)
Link Dependencies: [r](#r)
Run Dependencies: [r](#r)

Description: The Kruskal and Wallis one-way analysis of variance by ranks or van der Waerden's normal score test can be employed, if the data do not meet the assumptions for one-way ANOVA. Provided that significant differences were detected by the omnibus test, one may be interested in applying post-hoc tests for pairwise multiple comparisons (such as Nemenyi's test, Dunn's test, Conover's test, van der Waerden's test). Similarly, one-way ANOVA with repeated measures that is also referred to as ANOVA with unreplicated block design can also be conducted via the Friedman test or the Quade test. The consequent post-hoc pairwise multiple comparison tests according to Nemenyi, Conover and Quade are also provided in this package. Finally, Durbin's test for a two-way balanced incomplete block design (BIBD) is also given in this package.
---

r-png[¶](#r-png)
===

Homepage:
* <http://www.rforge.net/png/>

Spack package:
* [r-png/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/r-png/package.py)

Versions: 0.1-7
Build Dependencies: [r](#r), [libpng](#libpng)
Link Dependencies: [r](#r), [libpng](#libpng)
Run Dependencies: [r](#r)

Description: This package provides an easy and simple way to read, write and display bitmap images stored in the PNG format. It can read and write both files and in-memory raw vectors.

---

r-powerlaw[¶](#r-powerlaw)
===

Homepage:
* <https://github.com/csgillespie/poweRlaw>

Spack package:
* [r-powerlaw/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/r-powerlaw/package.py)

Versions: 0.70.1
Build Dependencies: [r](#r), [r-vgam](#r-vgam)
Link Dependencies: [r](#r)
Run Dependencies: [r](#r), [r-vgam](#r-vgam)

Description: An implementation of maximum likelihood estimators for a variety of heavy-tailed distributions, including both the discrete and continuous power law distributions. Additionally, a goodness-of-fit based approach is used to estimate the lower cut-off for the scaling region.
---

r-prabclus[¶](#r-prabclus)
===

Homepage:
* <http://www.homepages.ucl.ac.uk/~ucakche>

Spack package:
* [r-prabclus/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/r-prabclus/package.py)

Versions: 2.2-6
Build Dependencies: [r](#r), [r-mclust](#r-mclust)
Link Dependencies: [r](#r)
Run Dependencies: [r](#r), [r-mclust](#r-mclust)

Description: prabclus: Functions for Clustering of Presence-Absence, Abundance and Multilocus Genetic Data.

---

r-praise[¶](#r-praise)
===

Homepage:
* <https://github.com/gaborcsardi/praise>

Spack package:
* [r-praise/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/r-praise/package.py)

Versions: 1.0.0
Build Dependencies: [r](#r)
Link Dependencies: [r](#r)
Run Dependencies: [r](#r)

Description: Build friendly R packages that praise their users if they have done something good, or they just need it to feel better.

---

r-preprocesscore[¶](#r-preprocesscore)
===

Homepage:
* <https://bioconductor.org/packages/preprocessCore/>

Spack package:
* [r-preprocesscore/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/r-preprocesscore/package.py)

Versions: 1.38.1
Build Dependencies: [r](#r)
Link Dependencies: [r](#r)
Run Dependencies: [r](#r)

Description: A library of core preprocessing routines.

---

r-prettyunits[¶](#r-prettyunits)
===

Homepage:
* <https://cran.r-project.org/package=prettyunits>

Spack package:
* [r-prettyunits/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/r-prettyunits/package.py)

Versions: 1.0.2
Build Dependencies: [r](#r), [r-assertthat](#r-assertthat), [r-magrittr](#r-magrittr)
Link Dependencies: [r](#r)
Run Dependencies: [r](#r), [r-assertthat](#r-assertthat), [r-magrittr](#r-magrittr)

Description: Pretty, human readable formatting of quantities. Time intervals: 1337000 -> 15d 11h 23m 20s. Vague time intervals: 2674000 -> about a month ago. Bytes: 1337 -> 1.34 kB.
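The 'prettyunits' time-interval example above (1337000 seconds -> "15d 11h 23m 20s") is plain integer arithmetic. A minimal Python sketch of the same formatting, to make the conversion concrete (the function name is ours, not part of the R package):

```python
def pretty_interval(total_seconds: int) -> str:
    """Render a second count as days/hours/minutes/seconds,
    mirroring the prettyunits example: 1337000 -> '15d 11h 23m 20s'."""
    days, rem = divmod(total_seconds, 86400)
    hours, rem = divmod(rem, 3600)
    minutes, seconds = divmod(rem, 60)
    # Keep only the nonzero components, largest unit first.
    parts = [f"{v}{u}" for v, u in
             ((days, "d"), (hours, "h"), (minutes, "m"), (seconds, "s")) if v]
    return " ".join(parts) or "0s"

print(pretty_interval(1337000))  # -> 15d 11h 23m 20s
```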
---

r-processx[¶](#r-processx)
===

Homepage:
* <https://github.com/r-lib/processx>

Spack package:
* [r-processx/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/r-processx/package.py)

Versions: 3.2.0, 3.1.0, 3.0.3, 2.0.0.1, 2.0.0
Build Dependencies: [r](#r), [r-assertthat](#r-assertthat), [r-r6](#r-r6), [r-utils](#r-utils), [r-crayon](#r-crayon), [r-ps](#r-ps)
Link Dependencies: [r](#r)
Run Dependencies: [r](#r), [r-assertthat](#r-assertthat), [r-r6](#r-r6), [r-utils](#r-utils), [r-crayon](#r-crayon), [r-ps](#r-ps)

Description: Tools to run system processes in the background.

---

r-prodlim[¶](#r-prodlim)
===

Homepage:
* <https://cran.r-project.org/package=prodlim>

Spack package:
* [r-prodlim/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/r-prodlim/package.py)

Versions: 1.5.9
Build Dependencies: [r](#r), [r-lava](#r-lava), [r-survival](#r-survival), [r-kernsmooth](#r-kernsmooth), [r-rcpp](#r-rcpp)
Link Dependencies: [r](#r)
Run Dependencies: [r](#r), [r-lava](#r-lava), [r-survival](#r-survival), [r-kernsmooth](#r-kernsmooth), [r-rcpp](#r-rcpp)

Description: Product-Limit Estimation for Censored Event History Analysis. Fast and user-friendly implementation of nonparametric estimators for censored event history (survival) analysis. Kaplan-Meier and Aalen-Johansen method.

---

r-progress[¶](#r-progress)
===

Homepage:
* <https://cran.r-project.org/package=progress>

Spack package:
* [r-progress/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/r-progress/package.py)

Versions: 1.1.2
Build Dependencies: [r](#r), [r-prettyunits](#r-prettyunits), [r-r6](#r-r6)
Link Dependencies: [r](#r)
Run Dependencies: [r](#r), [r-prettyunits](#r-prettyunits), [r-r6](#r-r6)

Description: Configurable progress bars; they may include percentage, elapsed time, and/or the estimated completion time. They work in terminals, in 'Emacs' 'ESS', 'RStudio', 'Windows' 'Rgui' and the 'macOS' 'R.app'. The package also provides a 'C++' 'API', that works with or without 'Rcpp'.

---

r-protgenerics[¶](#r-protgenerics)
===

Homepage:
* <https://bioconductor.org/packages/ProtGenerics/>

Spack package:
* [r-protgenerics/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/r-protgenerics/package.py)

Versions: 1.8.0
Build Dependencies: [r](#r)
Link Dependencies: [r](#r)
Run Dependencies: [r](#r)

Description: S4 generic functions needed by Bioconductor proteomics packages.

---

r-proto[¶](#r-proto)
===

Homepage:
* <http://r-proto.googlecode.com/>

Spack package:
* [r-proto/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/r-proto/package.py)

Versions: 1.0.0, 0.3-10
Build Dependencies: [r](#r)
Link Dependencies: [r](#r)
Run Dependencies: [r](#r)

Description: An object oriented system using object-based, also called prototype-based, rather than class-based object oriented ideas.

---

r-proxy[¶](#r-proxy)
===

Homepage:
* <https://cran.r-project.org/package=proxy>

Spack package:
* [r-proxy/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/r-proxy/package.py)

Versions: 0.4-19
Build Dependencies: [r](#r)
Link Dependencies: [r](#r)
Run Dependencies: [r](#r)

Description: Provides an extensible framework for the efficient calculation of auto- and cross-proximities, along with implementations of the most popular ones.
---

r-pryr[¶](#r-pryr)
===

Homepage:
* <https://github.com/hadley/pryr>

Spack package:
* [r-pryr/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/r-pryr/package.py)

Versions: 0.1.2
Build Dependencies: [r](#r), [r-stringr](#r-stringr), [r-rcpp](#r-rcpp)
Link Dependencies: [r](#r)
Run Dependencies: [r](#r), [r-stringr](#r-stringr), [r-rcpp](#r-rcpp)

Description: Useful tools to pry back the covers of R and understand the language at a deeper level.

---

r-ps[¶](#r-ps)
===

Homepage:
* <https://github.com/r-lib/ps>

Spack package:
* [r-ps/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/r-ps/package.py)

Versions: 1.1.0, 1.0.0
Build Dependencies: [r](#r)
Link Dependencies: [r](#r)
Run Dependencies: [r](#r)

Description: Manipulate processes on Windows, Linux and macOS.

---

r-psych[¶](#r-psych)
===

Homepage:
* <http://personality-project.org/r/psych>

Spack package:
* [r-psych/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/r-psych/package.py)

Versions: 1.7.8
Build Dependencies: [r](#r), [r-lattice](#r-lattice), [r-mnormt](#r-mnormt), [r-foreign](#r-foreign), [r-nlme](#r-nlme)
Link Dependencies: [r](#r)
Run Dependencies: [r](#r), [r-lattice](#r-lattice), [r-mnormt](#r-mnormt), [r-foreign](#r-foreign), [r-nlme](#r-nlme)

Description: A general purpose toolbox for personality, psychometric theory and experimental psychology. Functions are primarily for multivariate analysis and scale construction using factor analysis, principal component analysis, cluster analysis and reliability analysis, although others provide basic descriptive statistics. Item Response Theory is done using factor analysis of tetrachoric and polychoric correlations. Functions for analyzing data at multiple levels include within and between group statistics, including correlations and factor analysis. Functions for simulating and testing particular item and test structures are included. Several functions serve as a useful front end for structural equation modeling. Graphical displays of path diagrams, factor analysis and structural equation models are created using basic graphics. Some of the functions are written to support a book on psychometric theory as well as publications in personality research. For more information, see the <http://personality-project.org/r> web page.

---

r-ptw[¶](#r-ptw)
===

Homepage:
* <https://cran.r-project.org/package=ptw>

Spack package:
* [r-ptw/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/r-ptw/package.py)

Versions: 1.9-12
Build Dependencies: [r](#r), [r-nloptr](#r-nloptr)
Link Dependencies: [r](#r)
Run Dependencies: [r](#r), [r-nloptr](#r-nloptr)

Description: Parametric Time Warping aligns patterns, i.e. it aims to put corresponding features at the same locations. The algorithm searches for an optimal polynomial describing the warping. It is possible to align one sample to a reference, several samples to the same reference, or several samples to several references. One can choose between calculating individual warpings, or one global warping for a set of samples and one reference. Two optimization criteria are implemented: RMS (Root Mean Square error) and WCC (Weighted Cross Correlation). Both warping of peak profiles and of peak lists are supported.

---

r-purrr[¶](#r-purrr)
===

Homepage:
* <http://purrr.tidyverse.org/>

Spack package:
* [r-purrr/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/r-purrr/package.py)

Versions: 0.2.4
Build Dependencies: [r](#r), [r-tibble](#r-tibble), [r-magrittr](#r-magrittr), [r-rlang](#r-rlang)
Link Dependencies: [r](#r)
Run Dependencies: [r](#r), [r-tibble](#r-tibble), [r-magrittr](#r-magrittr), [r-rlang](#r-rlang)

Description: A complete and consistent functional programming toolkit for R.
---

r-quadprog[¶](#r-quadprog)
===

Homepage:
* <https://cran.r-project.org/web/packages/quadprog/index.html>

Spack package:
* [r-quadprog/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/r-quadprog/package.py)

Versions: 1.5-5
Build Dependencies: [r](#r)
Link Dependencies: [r](#r)
Run Dependencies: [r](#r)

Description: This package contains routines and documentation for solving quadratic programming problems.

---

r-quantmod[¶](#r-quantmod)
===

Homepage:
* <http://www.quantmod.com/>

Spack package:
* [r-quantmod/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/r-quantmod/package.py)

Versions: 0.4-10, 0.4-5
Build Dependencies: [r](#r), [r-ttr](#r-ttr), [r-xts](#r-xts), [r-zoo](#r-zoo), [r-curl](#r-curl)
Link Dependencies: [r](#r)
Run Dependencies: [r](#r), [r-ttr](#r-ttr), [r-xts](#r-xts), [r-zoo](#r-zoo), [r-curl](#r-curl)

Description: Specify, build, trade, and analyse quantitative financial trading strategies.

---

r-quantreg[¶](#r-quantreg)
===

Homepage:
* <https://cran.r-project.org/package=quantreg>

Spack package:
* [r-quantreg/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/r-quantreg/package.py)

Versions: 5.29, 5.26
Build Dependencies: [r](#r), [r-matrix](#r-matrix), [r-sparsem](#r-sparsem), [r-matrixmodels](#r-matrixmodels)
Link Dependencies: [r](#r)
Run Dependencies: [r](#r), [r-matrix](#r-matrix), [r-sparsem](#r-sparsem), [r-matrixmodels](#r-matrixmodels)

Description: Estimation and inference methods for models of conditional quantiles: linear and nonlinear parametric and non-parametric (total variation penalized) models for conditional quantiles of a univariate response, and several methods for handling censored survival data. Portfolio selection methods based on expected shortfall risk are also included.
---

r-quantro[¶](#r-quantro)
===

Homepage:
* <https://www.bioconductor.org/packages/quantro/>

Spack package:
* [r-quantro/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/r-quantro/package.py)

Versions: 1.10.0
Build Dependencies: [r](#r), [r-iterators](#r-iterators), [r-ggplot2](#r-ggplot2), [r-biobase](#r-biobase), [r-doparallel](#r-doparallel), [r-rcolorbrewer](#r-rcolorbrewer), [r-minfi](#r-minfi), [r-foreach](#r-foreach)
Link Dependencies: [r](#r)
Run Dependencies: [r](#r), [r-iterators](#r-iterators), [r-ggplot2](#r-ggplot2), [r-biobase](#r-biobase), [r-doparallel](#r-doparallel), [r-rcolorbrewer](#r-rcolorbrewer), [r-minfi](#r-minfi), [r-foreach](#r-foreach)

Description: A data-driven test for the assumptions of quantile normalization using raw data such as objects that inherit eSets (e.g. ExpressionSet, MethylSet). Group level information about each sample (such as Tumor / Normal status) must also be provided because the test assesses if there are global differences in the distributions between the user-defined groups.

---

r-qvalue[¶](#r-qvalue)
===

Homepage:
* <https://www.bioconductor.org/packages/qvalue/>

Spack package:
* [r-qvalue/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/r-qvalue/package.py)

Versions: 2.12.0, 2.8.0
Build Dependencies: [r](#r), [r-reshape2](#r-reshape2), [r-ggplot2](#r-ggplot2)
Link Dependencies: [r](#r)
Run Dependencies: [r](#r), [r-reshape2](#r-reshape2), [r-ggplot2](#r-ggplot2)

Description: This package takes a list of p-values resulting from the simultaneous testing of many hypotheses and estimates their q-values and local FDR values. The q-value of a test measures the proportion of false positives incurred (called the false discovery rate) when that particular test is called significant. The local FDR measures the posterior probability the null hypothesis is true given the test's p-value. Various plots are automatically generated, allowing one to make sensible significance cut-offs. Several mathematical results have recently been shown on the conservative accuracy of the estimated q-values from this software. The software can be applied to problems in genomics, brain imaging, astrophysics, and data mining.

---

r-r6[¶](#r-r6)
===

Homepage:
* <https://github.com/wch/R6/>

Spack package:
* [r-r6/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/r-r6/package.py)

Versions: 2.2.2, 2.2.0, 2.1.2
Build Dependencies: [r](#r)
Link Dependencies: [r](#r)
Run Dependencies: [r](#r)

Description: The R6 package allows the creation of classes with reference semantics, similar to R's built-in reference classes. Compared to reference classes, R6 classes are simpler and lighter-weight, and they are not built on S4 classes so they do not require the methods package. These classes allow public and private members, and they support inheritance, even when the classes are defined in different packages.

---

r-randomforest[¶](#r-randomforest)
===

Homepage:
* <https://www.stat.berkeley.edu/~breiman/RandomForests/>

Spack package:
* [r-randomforest/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/r-randomforest/package.py)

Versions: 4.6-12
Build Dependencies: [r](#r)
Link Dependencies: [r](#r)
Run Dependencies: [r](#r)

Description: Classification and regression based on a forest of trees using random inputs.
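The q-value described for 'qvalue' above bounds the false discovery rate incurred by calling a test significant. The package implements Storey's estimator, but the flavor of the idea can be seen in a minimal Benjamini-Hochberg adjustment — a simpler, more conservative relative. This sketch is illustrative only and is not the package's algorithm:

```python
def bh_adjust(pvals):
    """Benjamini-Hochberg adjusted p-values: each adjusted value bounds
    the false discovery rate of declaring that test significant."""
    m = len(pvals)
    order = sorted(range(m), key=lambda i: pvals[i])
    adjusted = [0.0] * m
    running_min = 1.0
    # Walk from the largest p-value down, enforcing monotonicity
    # of the adjusted values via a running minimum.
    for rank in range(m - 1, -1, -1):
        i = order[rank]
        running_min = min(running_min, pvals[i] * m / (rank + 1))
        adjusted[i] = running_min
    return adjusted

print(bh_adjust([0.01, 0.04, 0.03, 0.50]))
```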
---

r-ranger[¶](#r-ranger)
===

Homepage:
* <https://cran.r-project.org/web/packages/ranger/index.html>

Spack package:
* [r-ranger/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/r-ranger/package.py)

Versions: 0.8.0, 0.7.0, 0.6.0, 0.5.0, 0.4.0
Build Dependencies: [r](#r), [r-matrix](#r-matrix), [r-rcppeigen](#r-rcppeigen), [r-rcpp](#r-rcpp)
Link Dependencies: [r](#r)
Run Dependencies: [r](#r), [r-matrix](#r-matrix), [r-rcppeigen](#r-rcppeigen), [r-rcpp](#r-rcpp)

Description: A fast implementation of Random Forests, particularly suited for high dimensional data.

---

r-rappdirs[¶](#r-rappdirs)
===

Homepage:
* <https://cran.r-project.org/package=rappdirs>

Spack package:
* [r-rappdirs/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/r-rappdirs/package.py)

Versions: 0.3.1
Build Dependencies: [r](#r)
Link Dependencies: [r](#r)
Run Dependencies: [r](#r)

Description: An easy way to determine which directories on the user's computer you should use to save data, caches and logs. A port of Python's 'Appdirs' to R.

---

r-raster[¶](#r-raster)
===

Homepage:
* <http://cran.r-project.org/package=raster>

Spack package:
* [r-raster/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/r-raster/package.py)

Versions: 2.5-8
Build Dependencies: [r](#r), [r-sp](#r-sp), [r-rcpp](#r-rcpp)
Link Dependencies: [r](#r)
Run Dependencies: [r](#r), [r-sp](#r-sp), [r-rcpp](#r-rcpp)

Description: Reading, writing, manipulating, analyzing and modeling of gridded spatial data. The package implements basic and high-level functions. Processing of very large files is supported.
---

r-rbgl[¶](#r-rbgl)
===

Homepage:
* <https://www.bioconductor.org/packages/RBGL/>

Spack package:
* [r-rbgl/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/r-rbgl/package.py)

Versions: 1.52.0
Build Dependencies: [r](#r), [r-graph](#r-graph)
Link Dependencies: [r](#r)
Run Dependencies: [r](#r), [r-graph](#r-graph)

Description: A fairly extensive and comprehensive interface to the graph algorithms contained in the BOOST library.

---

r-rbokeh[¶](#r-rbokeh)
===

Homepage:
* <https://hafen.github.io/rbokeh>

Spack package:
* [r-rbokeh/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/r-rbokeh/package.py)

Versions: 0.5.0
Build Dependencies: [r-hexbin](#r-hexbin), [r-gistr](#r-gistr), [r-ggplot2](#r-ggplot2), [r-htmlwidgets](#r-htmlwidgets), [r-scales](#r-scales), [r-digest](#r-digest), [r-maps](#r-maps), [r-jsonlite](#r-jsonlite), [r-lazyeval](#r-lazyeval), [r-magrittr](#r-magrittr), [r](#r), [r-pryr](#r-pryr)
Link Dependencies: [r](#r)
Run Dependencies: [r-hexbin](#r-hexbin), [r-gistr](#r-gistr), [r-ggplot2](#r-ggplot2), [r-htmlwidgets](#r-htmlwidgets), [r-scales](#r-scales), [r-digest](#r-digest), [r-maps](#r-maps), [r-jsonlite](#r-jsonlite), [r-lazyeval](#r-lazyeval), [r-magrittr](#r-magrittr), [r](#r), [r-pryr](#r-pryr)

Description: R interface for creating plots in Bokeh. Bokeh by Continuum Analytics.
---

r-rcolorbrewer[¶](#r-rcolorbrewer)
===

Homepage:
* <http://colorbrewer2.org>

Spack package:
* [r-rcolorbrewer/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/r-rcolorbrewer/package.py)

Versions: 1.1-2
Build Dependencies: [r](#r)
Link Dependencies: [r](#r)
Run Dependencies: [r](#r)

Description: Provides color schemes for maps (and other graphics) designed by Cynthia Brewer as described at http://colorbrewer2.org

---

r-rcpp[¶](#r-rcpp)
===

Homepage:
* <http://dirk.eddelbuettel.com/code/rcpp.html>

Spack package:
* [r-rcpp/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/r-rcpp/package.py)

Versions: 0.12.16, 0.12.15, 0.12.14, 0.12.13, 0.12.12, 0.12.11, 0.12.9, 0.12.6, 0.12.5
Build Dependencies: [r](#r)
Link Dependencies: [r](#r)
Run Dependencies: [r](#r)

Description: The 'Rcpp' package provides R functions as well as C++ classes which offer a seamless integration of R and C++. Many R data types and objects can be mapped back and forth to C++ equivalents which facilitates both writing of new code as well as easier integration of third-party libraries. Documentation about 'Rcpp' is provided by several vignettes included in this package, via the 'Rcpp Gallery' site at <http://gallery.rcpp.org>, the paper by Eddelbuettel and Francois (2011, JSS), and the book by Eddelbuettel (2013, Springer); see 'citation("Rcpp")' for details on these last two.

---

r-rcpparmadillo[¶](#r-rcpparmadillo)
===

Homepage:
* <https://cran.r-project.org/package=RcppArmadillo>

Spack package:
* [r-rcpparmadillo/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/r-rcpparmadillo/package.py)

Versions: 0.8.100.1.0
Build Dependencies: [r](#r), [r-rcpp](#r-rcpp)
Link Dependencies: [r](#r)
Run Dependencies: [r](#r), [r-rcpp](#r-rcpp)

Description: 'Rcpp' Integration for the 'Armadillo' Templated Linear Algebra Library.
--- r-rcppblaze[¶](#r-rcppblaze) === Homepage: * <https://github.com/Chingchuan-chen/RcppBlazeSpack package: * [r-rcppblaze/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/r-rcppblaze/package.py) Versions: 0.2.2 Build Dependencies: [r](#r), [r-matrix](#r-matrix), [r-bh](#r-bh), [r-rcpp](#r-rcpp) Link Dependencies: [r](#r) Run Dependencies: [r](#r), [r-matrix](#r-matrix), [r-bh](#r-bh), [r-rcpp](#r-rcpp) Description: 'Blaze' is an open-source, high-performance C++ math library for dense and sparse arithmetic. With its state-of-the-art Smart Expression Template implementation 'Blaze' combines the elegance and ease of use of a domain-specific language with 'HPC'-grade performance, making it one of the most intuitive and fastest C++ math libraries available. The 'Blaze' library offers: - high performance through the integration of 'BLAS' libraries and manually tuned 'HPC' math kernels - vectorization by 'SSE', 'SSE2', 'SSE3', 'SSSE3', 'SSE4', 'AVX', 'AVX2', 'AVX-512', 'FMA', and 'SVML' - parallel execution by 'OpenMP', C++11 threads and 'Boost' threads ('Boost' threads are disabled in 'RcppBlaze') - the intuitive and easy to use API of a domain specific language - unified arithmetic with dense and sparse vectors and matrices - thoroughly tested matrix and vector arithmetic - completely portable, high quality C++ source code. The 'RcppBlaze' package includes the header files from the 'Blaze' library with disabling some functionalities related to link to the thread and system libraries which make 'RcppBlaze' be a header- only library. Therefore, users do not need to install 'Blaze' and the dependency 'Boost'. 'Blaze' is licensed under the New (Revised) BSD license, while 'RcppBlaze' (the 'Rcpp' bindings/bridge to 'Blaze') is licensed under the GNU GPL version 2 or later, as is the rest of 'Rcpp'. 
Note that since version 3.0 'Blaze' has committed to 'C++14', which most R users' toolchains cannot use, we will use version 2.6 of 'Blaze', which is 'C++98' compatible, to support the widest range of compilers and systems. --- r-rcppcctz[¶](#r-rcppcctz) === Homepage: * <https://github.com/eddelbuettel/rcppcctz> Spack package: * [r-rcppcctz/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/r-rcppcctz/package.py) Versions: 0.2.3 Build Dependencies: [r](#r), [r-rcpp](#r-rcpp) Link Dependencies: [r](#r) Run Dependencies: [r](#r), [r-rcpp](#r-rcpp) Description: 'Rcpp' access to the 'CCTZ' timezone library is provided. 'CCTZ' is a C++ library for translating between absolute and civil times using the rules of a time zone. The 'CCTZ' source code, released under the Apache 2.0 License, is included in this package. See <https://github.com/google/cctz> for more details. --- r-rcppcnpy[¶](#r-rcppcnpy) === Homepage: * <https://github.com/eddelbuettel/rcppcnpy> Spack package: * [r-rcppcnpy/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/r-rcppcnpy/package.py) Versions: 0.2.9 Build Dependencies: [r](#r), [cnpy](#cnpy), [r-rcpp](#r-rcpp) Link Dependencies: [r](#r), [cnpy](#cnpy) Run Dependencies: [r](#r), [r-rcpp](#r-rcpp) Description: Rcpp bindings for NumPy files. --- r-rcppeigen[¶](#r-rcppeigen) === Homepage: * <http://eigen.tuxfamily.org/> Spack package: * [r-rcppeigen/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/r-rcppeigen/package.py) Versions: 0.3.3.3.1, 0.3.2.9.0, 0.3.2.8.1 Build Dependencies: [r](#r), [r-matrix](#r-matrix), [r-rcpp](#r-rcpp) Link Dependencies: [r](#r) Run Dependencies: [r](#r), [r-matrix](#r-matrix), [r-rcpp](#r-rcpp) Description: R and 'Eigen' integration using 'Rcpp'. 'Eigen' is a C++ template library for linear algebra: matrices, vectors, numerical solvers and related algorithms.
It supports dense and sparse matrices on integer, floating point and complex numbers, decompositions of such matrices, and solutions of linear systems. Its performance on many algorithms is comparable with some of the best implementations based on 'Lapack' and level-3 'BLAS'. The 'RcppEigen' package includes the header files from the 'Eigen' C++ template library (currently version 3.2.8). Thus users do not need to install 'Eigen' itself in order to use 'RcppEigen'. Since version 3.1.1, 'Eigen' is licensed under the Mozilla Public License (version 2); earlier versions were licensed under the GNU LGPL version 3 or later. 'RcppEigen' (the 'Rcpp' bindings/bridge to 'Eigen') is licensed under the GNU GPL version 2 or later, as is the rest of 'Rcpp'. --- r-rcppprogress[¶](#r-rcppprogress) === Homepage: * <https://cran.r-project.org/web/packages/RcppProgress/index.html> Spack package: * [r-rcppprogress/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/r-rcppprogress/package.py) Versions: 0.3, 0.2.1, 0.2, 0.1 Build Dependencies: [r](#r), [r-rcpp](#r-rcpp) Link Dependencies: [r](#r) Run Dependencies: [r](#r), [r-rcpp](#r-rcpp) Description: Allows displaying a progress bar in the R console for long-running computations taking place in C++ code, and supports interrupting those computations even in multithreaded code, typically using OpenMP.
--- r-rcurl[¶](#r-rcurl) === Homepage: * <https://cran.rstudio.com/web/packages/RCurl/index.htmlSpack package: * [r-rcurl/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/r-rcurl/package.py) Versions: 1.95-4.8 Build Dependencies: [r](#r), [curl](#curl), [r-bitops](#r-bitops) Link Dependencies: [r](#r), [curl](#curl) Run Dependencies: [r](#r), [r-bitops](#r-bitops) Description: A wrapper for 'libcurl' <http://curl.haxx.se/libcurl/> Provides functions to allow one to compose general HTTP requests and provides convenient functions to fetch URIs, get & post forms, etc. and process the results returned by the Web server. This provides a great deal of control over the HTTP/FTP/... connection and the form of the request while providing a higher-level interface than is available just using R socket connections. Additionally, the underlying implementation is robust and extensive, supporting FTP/FTPS/TFTP (uploads and downloads), SSL/HTTPS, telnet, dict, ldap, and also supports cookies, redirects, authentication, etc. --- r-rda[¶](#r-rda) === Homepage: * <https://cran.r-project.org/web/packages/rda/index.htmlSpack package: * [r-rda/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/r-rda/package.py) Versions: 1.0.2-1 Build Dependencies: [r](#r) Link Dependencies: [r](#r) Run Dependencies: [r](#r) Description: Shrunken Centroids Regularized Discriminant Analysis for the classification purpose in high dimensional data. 
--- r-readr[¶](#r-readr) === Homepage: * <https://cran.rstudio.com/web/packages/readr/index.htmlSpack package: * [r-readr/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/r-readr/package.py) Versions: 1.1.1 Build Dependencies: [r](#r), [r-tibble](#r-tibble), [r-hms](#r-hms), [r-r6](#r-r6), [r-bh](#r-bh), [r-rcpp](#r-rcpp) Link Dependencies: [r](#r) Run Dependencies: [r](#r), [r-tibble](#r-tibble), [r-hms](#r-hms), [r-r6](#r-r6), [r-bh](#r-bh), [r-rcpp](#r-rcpp) Description: The goal of 'readr' is to provide a fast and friendly way to read rectangular data (like 'csv', 'tsv', and 'fwf'). It is designed to flexibly parse many types of data found in the wild, while still cleanly failing when data unexpectedly changes. --- r-readxl[¶](#r-readxl) === Homepage: * <http://readxl.tidyverse.org/Spack package: * [r-readxl/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/r-readxl/package.py) Versions: 1.1.0, 1.0.0 Build Dependencies: [r](#r), [r-tibble](#r-tibble), [r-cellranger](#r-cellranger), [r-rcpp](#r-rcpp) Link Dependencies: [r](#r) Run Dependencies: [r](#r), [r-tibble](#r-tibble), [r-cellranger](#r-cellranger), [r-rcpp](#r-rcpp) Description: Import excel files into R. Supports '.xls' via the embedded 'libxls' C library <https://sourceforge.net/projects/libxls/> and '.xlsx' via the embedded 'RapidXML' C++ library <https://rapidxml.sourceforge.net>. Works on Windows, Mac and Linux without external dependencies. --- r-registry[¶](#r-registry) === Homepage: * <https://cran.r-project.org/web/packages/registry/index.htmlSpack package: * [r-registry/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/r-registry/package.py) Versions: 0.3 Build Dependencies: [r](#r) Link Dependencies: [r](#r) Run Dependencies: [r](#r) Description: Provides a generic infrastructure for creating and using registries. 
--- r-rematch[¶](#r-rematch) === Homepage: * <https://cran.r-project.org/package=rematchSpack package: * [r-rematch/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/r-rematch/package.py) Versions: 1.0.1 Build Dependencies: [r](#r), [r-covr](#r-covr), [r-testthat](#r-testthat) Link Dependencies: [r](#r) Run Dependencies: [r](#r), [r-covr](#r-covr), [r-testthat](#r-testthat) Description: A small wrapper on 'regexpr' to extract the matches and captured groups from the match of a regular expression to a character vector. --- r-reordercluster[¶](#r-reordercluster) === Homepage: * <https://cran.r-project.org/package=ReorderClusterSpack package: * [r-reordercluster/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/r-reordercluster/package.py) Versions: 1.0 Build Dependencies: [r](#r), [r-gplots](#r-gplots), [r-rcpp](#r-rcpp) Link Dependencies: [r](#r) Run Dependencies: [r](#r), [r-gplots](#r-gplots), [r-rcpp](#r-rcpp) Description: Tools for performing the leaf reordering for the dendrogram that preserves the hierarchical clustering result and at the same time tries to group instances from the same class together. 
--- r-reportingtools[¶](#r-reportingtools) === Homepage: * <https://bioconductor.org/packages/ReportingTools/Spack package: * [r-reportingtools/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/r-reportingtools/package.py) Versions: 2.16.0 Build Dependencies: [r-knitr](#r-knitr), [r-category](#r-category), [r-biocgenerics](#r-biocgenerics), [r-ggplot2](#r-ggplot2), [r-deseq2](#r-deseq2), [r-edger](#r-edger), [r-pfam-db](#r-pfam-db), [r-gseabase](#r-gseabase), [r-xml](#r-xml), [r-iranges](#r-iranges), [r-hwriter](#r-hwriter), [r-utils](#r-utils), [r-annotationdbi](#r-annotationdbi), [r-annotate](#r-annotate), [r-biobase](#r-biobase), [r-limma](#r-limma), [r](#r), [r-lattice](#r-lattice), [r-ggbio](#r-ggbio), [r-gostats](#r-gostats) Link Dependencies: [r](#r) Run Dependencies: [r-knitr](#r-knitr), [r-category](#r-category), [r-biocgenerics](#r-biocgenerics), [r-ggplot2](#r-ggplot2), [r-deseq2](#r-deseq2), [r-edger](#r-edger), [r-pfam-db](#r-pfam-db), [r-gseabase](#r-gseabase), [r-xml](#r-xml), [r-iranges](#r-iranges), [r-hwriter](#r-hwriter), [r-utils](#r-utils), [r-annotationdbi](#r-annotationdbi), [r-annotate](#r-annotate), [r-biobase](#r-biobase), [r-limma](#r-limma), [r](#r), [r-lattice](#r-lattice), [r-ggbio](#r-ggbio), [r-gostats](#r-gostats) Description: The ReportingTools software package enables users to easily display reports of analysis results generated from sources such as microarray and sequencing data. The package allows users to create HTML pages that may be viewed on a web browser such as Safari, or in other formats readable by programs such as Excel. Users can generate tables with sortable and filterable columns, make and display plots, and link table entries to other data sources such as NCBI or larger plots within the HTML page. Using the package, users can also produce a table of contents page to link various reports together for a particular project that can be viewed in a web browser. 
For more examples, please visit our site: http://research-pub.gene.com/ReportingTools. --- r-repr[¶](#r-repr) === Homepage: * <https://github.com/IRkernel/repr> Spack package: * [r-repr/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/r-repr/package.py) Versions: 0.9 Build Dependencies: [r](#r) Link Dependencies: [r](#r) Run Dependencies: [r](#r) Description: String and binary representations of objects for several formats and mime types. --- r-reprex[¶](#r-reprex) === Homepage: * <https://github.com/jennybc/reprex> Spack package: * [r-reprex/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/r-reprex/package.py) Versions: 0.1.1 Build Dependencies: [r](#r), [r-knitr](#r-knitr), [r-whisker](#r-whisker), [r-clipr](#r-clipr), [r-callr](#r-callr), [r-rmarkdown](#r-rmarkdown) Link Dependencies: [r](#r) Run Dependencies: [r](#r), [r-knitr](#r-knitr), [r-whisker](#r-whisker), [r-clipr](#r-clipr), [r-callr](#r-callr), [r-rmarkdown](#r-rmarkdown) Description: Convenience wrapper that uses the 'rmarkdown' package to render small snippets of code to target formats that include both code and output. The goal is to encourage the sharing of small, reproducible, and runnable examples on code-oriented websites, such as <http://stackoverflow.com> and <https://github.com>, or in email. 'reprex' also extracts clean, runnable R code from various common formats, such as copy/paste from an R session. --- r-reshape[¶](#r-reshape) === Homepage: * <https://cran.r-project.org/package=reshape> Spack package: * [r-reshape/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/r-reshape/package.py) Versions: 0.8.7 Build Dependencies: [r](#r), [r-plyr](#r-plyr) Link Dependencies: [r](#r) Run Dependencies: [r](#r), [r-plyr](#r-plyr) Description: Flexibly restructure and aggregate data using just two functions: melt and cast.
--- r-reshape2[¶](#r-reshape2) === Homepage: * <https://github.com/hadley/reshapeSpack package: * [r-reshape2/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/r-reshape2/package.py) Versions: 1.4.2, 1.4.1 Build Dependencies: [r](#r), [r-plyr](#r-plyr), [r-stringr](#r-stringr), [r-rcpp](#r-rcpp) Link Dependencies: [r](#r) Run Dependencies: [r](#r), [r-plyr](#r-plyr), [r-stringr](#r-stringr), [r-rcpp](#r-rcpp) Description: Flexibly restructure and aggregate data using just two functions: melt and dcast (or acast). --- r-rex[¶](#r-rex) === Homepage: * <https://cran.r-project.org/package=rexSpack package: * [r-rex/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/r-rex/package.py) Versions: 1.1.2 Build Dependencies: [r](#r), [r-lazyeval](#r-lazyeval), [r-magrittr](#r-magrittr) Link Dependencies: [r](#r) Run Dependencies: [r](#r), [r-lazyeval](#r-lazyeval), [r-magrittr](#r-magrittr) Description: A friendly interface for the construction of regular expressions. --- r-rgdal[¶](#r-rgdal) === Homepage: * <https://cran.r-project.org/package=rgdalSpack package: * [r-rgdal/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/r-rgdal/package.py) Versions: 1.2-16 Build Dependencies: [r](#r), [gdal](#gdal), [proj](#proj), [r-sp](#r-sp) Link Dependencies: [r](#r), [gdal](#gdal), [proj](#proj) Run Dependencies: [r](#r), [r-sp](#r-sp) Description: Provides bindings to the 'Geospatial' Data Abstraction Library ('GDAL') (>= 1.6.3) and access to projection/transformation operations from the 'PROJ.4' library. The 'GDAL' and 'PROJ.4' libraries are external to the package, and, when installing the package from source, must be correctly installed first. Both 'GDAL' raster and 'OGR' vector map data can be imported into R, and 'GDAL' raster data and 'OGR' vector data exported. Use is made of classes defined in the 'sp' package. 
Windows and Mac Intel OS X binaries (including 'GDAL', 'PROJ.4' and 'Expat') are provided on 'CRAN'. --- r-rgenoud[¶](#r-rgenoud) === Homepage: * <http://sekhon.berkeley.edu/rgenoud/Spack package: * [r-rgenoud/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/r-rgenoud/package.py) Versions: 5.8-1.0 Build Dependencies: [r](#r) Link Dependencies: [r](#r) Run Dependencies: [r](#r) Description: A genetic algorithm plus derivative optimizer. --- r-rgeos[¶](#r-rgeos) === Homepage: * <https://cran.r-project.org/package=rgeosSpack package: * [r-rgeos/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/r-rgeos/package.py) Versions: 0.3-26 Build Dependencies: [r](#r), [geos](#geos), [r-sp](#r-sp) Link Dependencies: [r](#r), [geos](#geos) Run Dependencies: [r](#r), [r-sp](#r-sp) Description: Interface to Geometry Engine - Open Source ('GEOS') using the C 'API' for topology operations on geometries. The 'GEOS' library is external to the package, and, when installing the package from source, must be correctly installed first. Windows and Mac Intel OS X binaries are provided on 'CRAN'. 
--- r-rgl[¶](#r-rgl) === Homepage: * <https://r-forge.r-project.org/projects/rglSpack package: * [r-rgl/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/r-rgl/package.py) Versions: 0.98.1 Build Dependencies: [r](#r), [r-knitr](#r-knitr), [libx11](#libx11), [r-htmlwidgets](#r-htmlwidgets), [r-shiny](#r-shiny), [r-jsonlite](#r-jsonlite), [r-magrittr](#r-magrittr), [r-htmltools](#r-htmltools) Link Dependencies: [zlib](#zlib), [r](#r), [libpng](#libpng), [mesa](#mesa), [libx11](#libx11), [mesa-glu](#mesa-glu), [freetype](#freetype) Run Dependencies: [r](#r), [r-knitr](#r-knitr), [r-htmlwidgets](#r-htmlwidgets), [r-shiny](#r-shiny), [r-jsonlite](#r-jsonlite), [r-magrittr](#r-magrittr), [r-htmltools](#r-htmltools) Description: Provides medium to high level functions for 3D interactive graphics, including functions modelled on base graphics (plot3d(), etc.) as well as functions for constructing representations of geometric objects (cube3d(), etc.). Output may be on screen using OpenGL, or to various standard 3D file formats including WebGL, PLY, OBJ, STL as well as 2D image formats, including PNG, Postscript, SVG, PGF. --- r-rgooglemaps[¶](#r-rgooglemaps) === Homepage: * <https://cran.r-project.org/package=RgoogleMapsSpack package: * [r-rgooglemaps/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/r-rgooglemaps/package.py) Versions: 1.2.0.7 Build Dependencies: [r](#r), [r-png](#r-png), [r-rjsonio](#r-rjsonio) Link Dependencies: [r](#r) Run Dependencies: [r](#r), [r-png](#r-png), [r-rjsonio](#r-rjsonio) Description: This package serves two purposes: (i) Provide a comfortable R interface to query the Google server for static maps, and (ii) Use the map as a background image to overlay plots within R. This requires proper coordinate scaling. 
--- r-rgraphviz[¶](#r-rgraphviz) === Homepage: * <http://bioconductor.org/packages/Rgraphviz/> Spack package: * [r-rgraphviz/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/r-rgraphviz/package.py) Versions: 2.20.0 Build Dependencies: [r](#r), [r-graph](#r-graph) Link Dependencies: [r](#r) Run Dependencies: [r](#r), [r-graph](#r-graph) Description: Interfaces R with the AT&T graphviz library for plotting R graph objects from the graph package. --- r-rhdf5[¶](#r-rhdf5) === Homepage: * <https://www.bioconductor.org/packages/rhdf5/> Spack package: * [r-rhdf5/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/r-rhdf5/package.py) Versions: 2.20.0 Build Dependencies: [r](#r), [r-zlibbioc](#r-zlibbioc) Link Dependencies: [r](#r) Run Dependencies: [r](#r), [r-zlibbioc](#r-zlibbioc) Description: This R/Bioconductor package provides an interface between HDF5 and R. HDF5's main features are the ability to store and access very large and/or complex datasets and a wide variety of metadata on mass storage (disk) through a completely portable file format. The rhdf5 package is thus suited for the exchange of large and/or complex datasets between R and other software packages, and for letting R applications work on datasets that are larger than the available RAM. --- r-rhtslib[¶](#r-rhtslib) === Homepage: * <https://www.bioconductor.org/packages/Rhtslib/> Spack package: * [r-rhtslib/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/r-rhtslib/package.py) Versions: 1.8.0 Build Dependencies: [r](#r), [autoconf](#autoconf), [r-zlibbioc](#r-zlibbioc) Link Dependencies: [r](#r) Run Dependencies: [r](#r), [r-zlibbioc](#r-zlibbioc) Description: This package provides version 1.1 of the 'HTSlib' C library for high-throughput sequence analysis. The package is primarily useful to developers of other R packages who wish to make use of HTSlib.
Motivation and instructions for use of this package are in the vignette, vignette(package="Rhtslib", "Rhtslib"). --- r-rinside[¶](#r-rinside) === Homepage: * <http://dirk.eddelbuettel.com/code/rinside.html> Spack package: * [r-rinside/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/r-rinside/package.py) Versions: 0.2.14, 0.2.13 Build Dependencies: [r](#r), [r-rcpp](#r-rcpp) Link Dependencies: [r](#r) Run Dependencies: [r](#r), [r-rcpp](#r-rcpp) Description: C++ classes to embed R in C++ applications. The 'RInside' package makes it easier to have "R inside" your C++ application by providing a C++ wrapper class around the R interpreter. As R itself is embedded into your application, a shared library build of R is required. This works on Linux, OS X and even on Windows provided you use the same tools used to build R itself. Numerous examples are provided in the eight subdirectories of the examples/ directory of the installed package: standard, mpi (for parallel computing), qt (showing how to embed 'RInside' inside a Qt GUI application), wt (showing how to build a "web-application" using the Wt toolkit), armadillo (for 'RInside' use with 'RcppArmadillo') and eigen (for 'RInside' use with 'RcppEigen'). The examples use GNUmakefile(s) with GNU extensions, so a GNU make is required (and will use the GNUmakefile automatically). Doxygen-generated documentation of the C++ classes is available at the 'RInside' website as well. --- r-rjags[¶](#r-rjags) === Homepage: * <https://cran.r-project.org/web/packages/rjags/index.html> Spack package: * [r-rjags/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/r-rjags/package.py) Versions: 4-8, 4-6 Build Dependencies: [r](#r), [r-coda](#r-coda) Link Dependencies: [r](#r), [jags](#jags) Run Dependencies: [r](#r), [r-coda](#r-coda) Description: Interface to the JAGS MCMC library.
Usage: $ spack load r-rjags --- r-rjava[¶](#r-rjava) === Homepage: * <http://www.rforge.net/rJava/> Spack package: * [r-rjava/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/r-rjava/package.py) Versions: 0.9-8 Build Dependencies: [r](#r), java Link Dependencies: [r](#r), java Run Dependencies: [r](#r) Description: Low-level interface to the Java VM, very much like .C/.Call and friends. Allows creation of objects, calling methods and accessing fields. --- r-rjson[¶](#r-rjson) === Homepage: * <https://cran.r-project.org/package=rjson> Spack package: * [r-rjson/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/r-rjson/package.py) Versions: 0.2.15 Build Dependencies: [r](#r) Link Dependencies: [r](#r) Run Dependencies: [r](#r) Description: Converts R objects into JSON objects and vice-versa. --- r-rjsonio[¶](#r-rjsonio) === Homepage: * <https://cran.r-project.org/package=RJSONIO> Spack package: * [r-rjsonio/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/r-rjsonio/package.py) Versions: 1.3-0 Build Dependencies: [r](#r) Link Dependencies: [r](#r) Run Dependencies: [r](#r) Description: This is a package that allows conversion to and from data in Javascript object notation (JSON) format. This allows R objects to be inserted into Javascript/ECMAScript/ActionScript code and allows R programmers to read and convert JSON content to R objects. This is an alternative to the rjson package. Originally, that was too slow for converting large R objects to JSON and was not extensible. rjson's performance is now similar to this package, and perhaps slightly faster in some cases. This package uses methods and is readily extensible by defining methods for different classes, vectorized operations, and C code and callbacks to R functions for deserializing JSON objects to R. The two packages intentionally share the same basic interface.
This package (RJSONIO) has many additional options to allow customizing the generation and processing of JSON content. This package uses libjson rather than implementing yet another JSON parser. The aim is to support other general projects by building on their work, providing feedback and benefit from their ongoing development. --- r-rlang[¶](#r-rlang) === Homepage: * <https://cran.r-project.org/package=rlangSpack package: * [r-rlang/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/r-rlang/package.py) Versions: 0.2.2, 0.1.4, 0.1.2, 0.1.1 Build Dependencies: [r](#r) Link Dependencies: [r](#r) Run Dependencies: [r](#r) Description: A toolbox for working with base types, core R features like the condition system, and core 'Tidyverse' features like tidy evaluation. --- r-rmarkdown[¶](#r-rmarkdown) === Homepage: * <http://rmarkdown.rstudio.com/Spack package: * [r-rmarkdown/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/r-rmarkdown/package.py) Versions: 1.7, 1.0 Build Dependencies: [r](#r), [r-knitr](#r-knitr), [r-base64enc](#r-base64enc), [r-yaml](#r-yaml), [r-evaluate](#r-evaluate), [r-jsonlite](#r-jsonlite), [r-rprojroot](#r-rprojroot), [r-catools](#r-catools), [r-htmltools](#r-htmltools), [r-stringr](#r-stringr), [r-mime](#r-mime) Link Dependencies: [r](#r) Run Dependencies: [r](#r), [r-knitr](#r-knitr), [r-base64enc](#r-base64enc), [r-yaml](#r-yaml), [r-evaluate](#r-evaluate), [r-jsonlite](#r-jsonlite), [r-rprojroot](#r-rprojroot), [r-catools](#r-catools), [r-htmltools](#r-htmltools), [r-stringr](#r-stringr), [r-mime](#r-mime) Description: Convert R Markdown documents into a variety of formats. 
--- r-rminer[¶](#r-rminer) === Homepage: * <http://www3.dsi.uminho.pt/pcortez/rminer.htmlSpack package: * [r-rminer/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/r-rminer/package.py) Versions: 1.4.2 Build Dependencies: [r-kernlab](#r-kernlab), [r-randomforest](#r-randomforest), [r-party](#r-party), [r-pls](#r-pls), [r-cubist](#r-cubist), [r-plotrix](#r-plotrix), [r-adabag](#r-adabag), [r-glmnet](#r-glmnet), [r-e1071](#r-e1071), [r-xgboost](#r-xgboost), [r-mda](#r-mda), [r-kknn](#r-kknn), [r](#r), [r-lattice](#r-lattice), [r-rpart](#r-rpart), [r-nnet](#r-nnet), [r-mass](#r-mass) Link Dependencies: [r](#r) Run Dependencies: [r-kernlab](#r-kernlab), [r-randomforest](#r-randomforest), [r-party](#r-party), [r-pls](#r-pls), [r-cubist](#r-cubist), [r-plotrix](#r-plotrix), [r-adabag](#r-adabag), [r-glmnet](#r-glmnet), [r-e1071](#r-e1071), [r-xgboost](#r-xgboost), [r-mda](#r-mda), [r-kknn](#r-kknn), [r](#r), [r-lattice](#r-lattice), [r-rpart](#r-rpart), [r-nnet](#r-nnet), [r-mass](#r-mass) Description: Facilitates the use of data mining algorithms in classification and regression (including time series forecasting) tasks by presenting a short and coherent set of functions. --- r-rmpfr[¶](#r-rmpfr) === Homepage: * <http://rmpfr.r-forge.r-project.orgSpack package: * [r-rmpfr/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/r-rmpfr/package.py) Versions: 0.6-1 Build Dependencies: [r](#r), [r-gmp](#r-gmp), [mpfr](#mpfr) Link Dependencies: [r](#r), [mpfr](#mpfr) Run Dependencies: [r](#r), [r-gmp](#r-gmp) Description: Arithmetic (via S4 classes and methods) for arbitrary precision floating point numbers, including transcendental ("special") functions. To this end, Rmpfr interfaces to the LGPL'ed MPFR (Multiple Precision Floating- Point Reliable) Library which itself is based on the GMP (GNU Multiple Precision) Library. 
--- r-rmpi[¶](#r-rmpi) === Homepage: * <http://www.stats.uwo.ca/faculty/yu/RmpiSpack package: * [r-rmpi/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/r-rmpi/package.py) Versions: 0.6-6 Build Dependencies: [r](#r), mpi Link Dependencies: [r](#r), mpi Run Dependencies: [r](#r) Description: An interface (wrapper) to MPI APIs. It also provides interactive R manager and worker environment. --- r-rmysql[¶](#r-rmysql) === Homepage: * <https://github.com/rstats-db/rmysqlSpack package: * [r-rmysql/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/r-rmysql/package.py) Versions: 0.10.9 Build Dependencies: [r](#r), [mariadb](#mariadb), [r-dbi](#r-dbi) Link Dependencies: [r](#r), [mariadb](#mariadb) Run Dependencies: [r](#r), [r-dbi](#r-dbi) Description: Implements 'DBI' Interface to 'MySQL' and 'MariaDB' Databases. --- r-rngtools[¶](#r-rngtools) === Homepage: * <https://renozao.github.io/rngtoolsSpack package: * [r-rngtools/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/r-rngtools/package.py) Versions: 1.2.4 Build Dependencies: [r](#r), [r-digest](#r-digest), [r-pkgmaker](#r-pkgmaker), [r-stringr](#r-stringr) Link Dependencies: [r](#r) Run Dependencies: [r](#r), [r-digest](#r-digest), [r-pkgmaker](#r-pkgmaker), [r-stringr](#r-stringr) Description: This package contains a set of functions for working with Random Number Generators (RNGs). In particular, it defines a generic S4 framework for getting/setting the current RNG, or RNG data that are embedded into objects for reproducibility. Notably, convenient default methods greatly facilitate the way current RNG settings can be changed. 
--- r-robustbase[¶](#r-robustbase) === Homepage: * <https://robustbase.r-forge.r-project.org> Spack package: * [r-robustbase/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/r-robustbase/package.py) Versions: 0.92-7 Build Dependencies: [r](#r), [r-deoptimr](#r-deoptimr) Link Dependencies: [r](#r) Run Dependencies: [r](#r), [r-deoptimr](#r-deoptimr) Description: "Essential" Robust Statistics. Tools allowing one to analyze data with robust methods. This includes regression methodology, including model selection, and multivariate statistics, where we strive to cover the book "Robust Statistics, Theory and Methods" by '<NAME> and Yohai'; Wiley 2006. --- r-rocr[¶](#r-rocr) === Homepage: * <https://cran.r-project.org/package=ROCR> Spack package: * [r-rocr/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/r-rocr/package.py) Versions: 1.0-7 Build Dependencies: [r](#r), [r-gplots](#r-gplots) Link Dependencies: [r](#r) Run Dependencies: [r](#r), [r-gplots](#r-gplots) Description: ROC graphs, sensitivity/specificity curves, lift charts, and precision/recall plots are popular examples of trade-off visualizations for specific pairs of performance measures. ROCR is a flexible tool for creating cutoff-parameterized 2D performance curves by freely combining two from over 25 performance measures (new performance measures can be added using a standard interface). Curves from different cross-validation or bootstrapping runs can be averaged by different methods, and standard deviations, standard errors or box plots can be used to visualize the variability across the runs. The parameterization can be visualized by printing cutoff values at the corresponding curve positions, or by coloring the curve according to cutoff. All components of a performance plot can be quickly adjusted using a flexible parameter dispatching mechanism.
Despite its flexibility, ROCR is easy to use, with only three commands and reasonable default values for all optional parameters. --- r-rodbc[¶](#r-rodbc) === Homepage: * <https://cran.rstudio.com/web/packages/RODBC/Spack package: * [r-rodbc/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/r-rodbc/package.py) Versions: 1.3-13 Build Dependencies: [r](#r), [unixodbc](#unixodbc) Link Dependencies: [r](#r), [unixodbc](#unixodbc) Run Dependencies: [r](#r) Description: An ODBC database interface. --- r-rots[¶](#r-rots) === Homepage: * <https://bioconductor.org/packages/release/bioc/html/ROTS.htmlSpack package: * [r-rots/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/r-rots/package.py) Versions: 1.8.0 Build Dependencies: [r](#r), [r-biobase](#r-biobase), [r-rcpp](#r-rcpp) Link Dependencies: [r](#r) Run Dependencies: [r](#r), [r-biobase](#r-biobase), [r-rcpp](#r-rcpp) Description: Calculates the Reproducibility-Optimized Test Statistic (ROTS) for differential testing in omics data. --- r-roxygen2[¶](#r-roxygen2) === Homepage: * <https://github.com/klutometis/roxygenSpack package: * [r-roxygen2/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/r-roxygen2/package.py) Versions: 5.0.1 Build Dependencies: [r](#r), [r-stringi](#r-stringi), [r-brew](#r-brew), [r-digest](#r-digest), [r-stringr](#r-stringr), [r-rcpp](#r-rcpp) Link Dependencies: [r](#r) Run Dependencies: [r](#r), [r-stringi](#r-stringi), [r-brew](#r-brew), [r-digest](#r-digest), [r-stringr](#r-stringr), [r-rcpp](#r-rcpp) Description: A 'Doxygen'-like in-source documentation system for Rd, collation, and 'NAMESPACE' files. 
---

r-rpart
===

* Homepage: <https://cran.r-project.org/package=rpart>
* Spack package: [r-rpart/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/r-rpart/package.py)
* Versions: 4.1-11, 4.1-10
* Build Dependencies: [r](#r)
* Link Dependencies: [r](#r)
* Run Dependencies: [r](#r)

Description: Recursive partitioning for classification, regression and survival trees.

---

r-rpart-plot
===

* Homepage: <https://cran.r-project.org/package=rpart.plot>
* Spack package: [r-rpart-plot/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/r-rpart-plot/package.py)
* Versions: 2.1.0
* Build Dependencies: [r](#r), [r-rpart](#r-rpart)
* Link Dependencies: [r](#r)
* Run Dependencies: [r](#r), [r-rpart](#r-rpart)

Description: Plot 'rpart' models. Extends plot.rpart() and text.rpart() in the 'rpart' package.

---

r-rpostgresql
===

* Homepage: <https://code.google.com/p/rpostgresql/>
* Spack package: [r-rpostgresql/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/r-rpostgresql/package.py)
* Versions: 0.4-1
* Build Dependencies: [r](#r), [postgresql](#postgresql), [r-dbi](#r-dbi)
* Link Dependencies: [r](#r), [postgresql](#postgresql)
* Run Dependencies: [r](#r), [r-dbi](#r-dbi)

Description: Database interface and PostgreSQL driver for R. This package provides a Database Interface (DBI) compliant driver for R to access PostgreSQL database systems. In order to build and install this package from source, PostgreSQL itself must be present on your system to provide PostgreSQL functionality via its libraries and header files. These files are provided as a postgresql-devel package under some Linux distributions. On Microsoft Windows systems the attached libpq library source will be used. A wiki and issue tracking system for the package are available at Google Code at https://code.google.com/p/rpostgresql/.
---

r-rprojroot
===

* Homepage: <https://cran.r-project.org/package=rprojroot>
* Spack package: [r-rprojroot/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/r-rprojroot/package.py)
* Versions: 1.2
* Build Dependencies: [r](#r), [r-backports](#r-backports)
* Link Dependencies: [r](#r)
* Run Dependencies: [r](#r), [r-backports](#r-backports)

Description: Robust, reliable and flexible paths to files below a project root. The 'root' of a project is defined as a directory that matches a certain criterion, e.g., it contains a certain regular file.

---

r-rsamtools
===

* Homepage: <https://bioconductor.org/packages/Rsamtools/>
* Spack package: [r-rsamtools/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/r-rsamtools/package.py)
* Versions: 1.32.2, 1.28.0
* Build Dependencies: [r](#r), [r-genomicranges](#r-genomicranges), [r-biocgenerics](#r-biocgenerics), [r-iranges](#r-iranges), [r-zlibbioc](#r-zlibbioc), [r-s4vectors](#r-s4vectors), [r-xvector](#r-xvector), [r-bitops](#r-bitops), [r-biocparallel](#r-biocparallel), [r-genomeinfodb](#r-genomeinfodb), [r-biostrings](#r-biostrings)
* Link Dependencies: [r](#r)
* Run Dependencies: [r](#r), [r-genomicranges](#r-genomicranges), [r-biocgenerics](#r-biocgenerics), [r-iranges](#r-iranges), [r-zlibbioc](#r-zlibbioc), [r-s4vectors](#r-s4vectors), [r-xvector](#r-xvector), [r-bitops](#r-bitops), [r-biocparallel](#r-biocparallel), [r-genomeinfodb](#r-genomeinfodb), [r-biostrings](#r-biostrings)

Description: This package provides an interface to the 'samtools', 'bcftools', and 'tabix' utilities (see 'LICENCE') for manipulating SAM (Sequence Alignment / Map), FASTA, binary variant call (BCF) and compressed indexed tab-delimited (tabix) files.
---

r-rsnns
===

* Homepage: <http://sci2s.ugr.es/dicits/software/RSNNS>
* Spack package: [r-rsnns/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/r-rsnns/package.py)
* Versions: 0.4-7
* Build Dependencies: [r](#r), [r-rcpp](#r-rcpp)
* Link Dependencies: [r](#r)
* Run Dependencies: [r](#r), [r-rcpp](#r-rcpp)

Description: The Stuttgart Neural Network Simulator (SNNS) is a library containing many standard implementations of neural networks. This package wraps the SNNS functionality to make it available from within R. Using the RSNNS low-level interface, all of the algorithmic functionality and flexibility of SNNS can be accessed. Furthermore, the package contains a convenient high-level interface, so that the most common neural network topologies and learning algorithms integrate seamlessly into R.

---

r-rsolnp
===

* Homepage: <https://cran.r-project.org/package=Rsolnp>
* Spack package: [r-rsolnp/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/r-rsolnp/package.py)
* Versions: 1.16
* Build Dependencies: [r](#r), [r-truncnorm](#r-truncnorm)
* Link Dependencies: [r](#r)
* Run Dependencies: [r](#r), [r-truncnorm](#r-truncnorm)

Description: General Non-linear Optimization Using Augmented Lagrange Multiplier Method.
---

r-rsqlite
===

* Homepage: <https://cran.rstudio.com/web/packages/RSQLite/index.html>
* Spack package: [r-rsqlite/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/r-rsqlite/package.py)
* Versions: 2.0
* Build Dependencies: [r](#r), [r-bit64](#r-bit64), [r-dbi](#r-dbi), [r-memoise](#r-memoise), [r-plogr](#r-plogr), [r-blob](#r-blob), [r-bh](#r-bh), [r-pkgconfig](#r-pkgconfig), [r-rcpp](#r-rcpp)
* Link Dependencies: [r](#r)
* Run Dependencies: [r](#r), [r-bit64](#r-bit64), [r-dbi](#r-dbi), [r-memoise](#r-memoise), [r-plogr](#r-plogr), [r-blob](#r-blob), [r-bh](#r-bh), [r-pkgconfig](#r-pkgconfig), [r-rcpp](#r-rcpp)

Description: This package embeds the SQLite database engine in R and provides an interface compliant with the DBI package. The source for the SQLite engine (version 3.8.6) is included.

---

r-rstan
===

* Homepage: <http://mc-stan.org/>
* Spack package: [r-rstan/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/r-rstan/package.py)
* Versions: 2.17.2, 2.10.1
* Build Dependencies: [r](#r), [r-inline](#r-inline), [r-ggplot2](#r-ggplot2), [r-stanheaders](#r-stanheaders), [r-bh](#r-bh), [r-rcppeigen](#r-rcppeigen), [r-gridextra](#r-gridextra), [r-rcpp](#r-rcpp)
* Link Dependencies: [r](#r)
* Run Dependencies: [r](#r), [r-inline](#r-inline), [r-ggplot2](#r-ggplot2), [r-stanheaders](#r-stanheaders), [r-bh](#r-bh), [r-rcppeigen](#r-rcppeigen), [r-gridextra](#r-gridextra), [r-rcpp](#r-rcpp)

Description: User-facing R functions are provided to parse, compile, test, estimate, and analyze Stan models by accessing the header-only Stan library provided by the 'StanHeaders' package. The Stan project develops a probabilistic programming language that implements full Bayesian statistical inference via Markov Chain Monte Carlo, rough Bayesian inference via variational approximation, and (optionally penalized) maximum likelihood estimation via optimization. In all three cases, automatic differentiation is used to quickly and accurately evaluate gradients without burdening the user with the need to derive the partial derivatives.

---

r-rstudioapi
===

* Homepage: <https://cran.r-project.org/web/packages/rstudioapi/index.html>
* Spack package: [r-rstudioapi/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/r-rstudioapi/package.py)
* Versions: 0.7, 0.6, 0.5
* Build Dependencies: [r](#r)
* Link Dependencies: [r](#r)
* Run Dependencies: [r](#r)

Description: Access the RStudio API (if available) and provide informative error messages when it's not.

---

r-rtracklayer
===

* Homepage: <http://bioconductor.org/packages/rtracklayer/>
* Spack package: [r-rtracklayer/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/r-rtracklayer/package.py)
* Versions: 1.40.5, 1.36.6
* Build Dependencies: [r](#r), [r-s4vectors](#r-s4vectors), [r-xvector](#r-xvector), [r-rcurl](#r-rcurl), [r-xml](#r-xml), [r-iranges](#r-iranges), [r-genomicranges](#r-genomicranges), [r-zlibbioc](#r-zlibbioc), [r-genomicalignments](#r-genomicalignments), [r-biostrings](#r-biostrings), [r-rsamtools](#r-rsamtools), [r-genomeinfodb](#r-genomeinfodb), [r-biocgenerics](#r-biocgenerics)
* Link Dependencies: [r](#r)
* Run Dependencies: [r](#r), [r-s4vectors](#r-s4vectors), [r-xvector](#r-xvector), [r-rcurl](#r-rcurl), [r-xml](#r-xml), [r-iranges](#r-iranges), [r-genomicranges](#r-genomicranges), [r-zlibbioc](#r-zlibbioc), [r-genomicalignments](#r-genomicalignments), [r-biostrings](#r-biostrings), [r-rsamtools](#r-rsamtools), [r-genomeinfodb](#r-genomeinfodb), [r-biocgenerics](#r-biocgenerics)

Description: Extensible framework for interacting with multiple genome browsers (currently UCSC built-in) and manipulating annotation tracks in various formats (currently GFF, BED, bedGraph, BED15, WIG, BigWig and 2bit built-in). The user may export/import tracks to/from the supported browsers, as well as query and modify the browser state, such as the current viewport.

---

r-rtsne
===

* Homepage: <https://CRAN.R-project.org/package=Rtsne>
* Spack package: [r-rtsne/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/r-rtsne/package.py)
* Versions: 0.13, 0.11, 0.10
* Build Dependencies: [r](#r), [r-rcpp](#r-rcpp)
* Link Dependencies: [r](#r)
* Run Dependencies: [r](#r), [r-rcpp](#r-rcpp)

Description: An R wrapper around the fast T-distributed Stochastic Neighbor Embedding implementation.

---

r-rvcheck
===

* Homepage: <https://cran.r-project.org/package=rvcheck>
* Spack package: [r-rvcheck/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/r-rvcheck/package.py)
* Versions: 0.0.9
* Build Dependencies: [r](#r)
* Link Dependencies: [r](#r)
* Run Dependencies: [r](#r)

Description: Check the latest release version of R and R packages (in 'CRAN', 'Bioconductor' or 'Github').

---

r-rvest
===

* Homepage: <https://github.com/hadley/rvest>
* Spack package: [r-rvest/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/r-rvest/package.py)
* Versions: 0.3.2
* Build Dependencies: [r](#r), [r-httr](#r-httr), [r-selectr](#r-selectr), [r-xml2](#r-xml2), [r-magrittr](#r-magrittr)
* Link Dependencies: [r](#r)
* Run Dependencies: [r](#r), [r-httr](#r-httr), [r-selectr](#r-selectr), [r-xml2](#r-xml2), [r-magrittr](#r-magrittr)

Description: Wrappers around the 'xml2' and 'httr' packages to make it easy to download, then manipulate, HTML and XML.
---

r-rzmq
===

* Homepage: <http://github.com/armstrtw/rzmq>
* Spack package: [r-rzmq/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/r-rzmq/package.py)
* Versions: 0.7.7
* Build Dependencies: [r](#r), [zeromq](#zeromq)
* Link Dependencies: [r](#r), [zeromq](#zeromq)
* Run Dependencies: [r](#r)

Description: Interface to the ZeroMQ lightweight messaging kernel.

---

r-s4vectors
===

* Homepage: <https://bioconductor.org/packages/S4Vectors/>
* Spack package: [r-s4vectors/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/r-s4vectors/package.py)
* Versions: 0.18.3, 0.16.0, 0.14.7
* Build Dependencies: [r](#r), [r-biocgenerics](#r-biocgenerics)
* Link Dependencies: [r](#r)
* Run Dependencies: [r](#r), [r-biocgenerics](#r-biocgenerics)

Description: The S4Vectors package defines the Vector and List virtual classes and a set of generic functions that extend the semantic of ordinary vectors and lists in R. Package developers can easily implement vector-like or list-like objects as concrete subclasses of Vector or List. In addition, a few low-level concrete subclasses of general interest (e.g. DataFrame, Rle, and Hits) are implemented in the S4Vectors package itself (many more are implemented in the IRanges package and in other Bioconductor infrastructure packages).

---

r-samr
===

* Homepage: <https://cran.r-project.org/package=samr>
* Spack package: [r-samr/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/r-samr/package.py)
* Versions: 2.0
* Build Dependencies: [r](#r), [r-matrixstats](#r-matrixstats), [r-impute](#r-impute)
* Link Dependencies: [r](#r)
* Run Dependencies: [r](#r), [r-matrixstats](#r-matrixstats), [r-impute](#r-impute)

Description: Significance Analysis of Microarrays.
---

r-sandwich
===

* Homepage: <https://cran.r-project.org/package=sandwich>
* Spack package: [r-sandwich/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/r-sandwich/package.py)
* Versions: 2.3-4
* Build Dependencies: [r](#r), [r-zoo](#r-zoo)
* Link Dependencies: [r](#r)
* Run Dependencies: [r](#r), [r-zoo](#r-zoo)

Description: Model-robust standard error estimators for cross-sectional, time series, and longitudinal data.

---

r-scales
===

* Homepage: <https://github.com/hadley/scales>
* Spack package: [r-scales/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/r-scales/package.py)
* Versions: 0.5.0, 0.4.1, 0.4.0
* Build Dependencies: [r](#r), [r-r6](#r-r6), [r-viridislite](#r-viridislite), [r-dichromat](#r-dichromat), [r-plyr](#r-plyr), [r-munsell](#r-munsell), [r-labeling](#r-labeling), [r-rcolorbrewer](#r-rcolorbrewer), [r-rcpp](#r-rcpp)
* Link Dependencies: [r](#r)
* Run Dependencies: [r](#r), [r-r6](#r-r6), [r-viridislite](#r-viridislite), [r-dichromat](#r-dichromat), [r-plyr](#r-plyr), [r-munsell](#r-munsell), [r-labeling](#r-labeling), [r-rcolorbrewer](#r-rcolorbrewer), [r-rcpp](#r-rcpp)

Description: Graphical scales map data to aesthetics, and provide methods for automatically determining breaks and labels for axes and legends.
---

r-scatterplot3d
===

* Homepage: <https://CRAN.R-project.org/package=scatterplot3d>
* Spack package: [r-scatterplot3d/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/r-scatterplot3d/package.py)
* Versions: 0.3-40
* Build Dependencies: [r](#r)
* Link Dependencies: [r](#r)
* Run Dependencies: [r](#r)

Description: scatterplot3d: 3D Scatter Plot

---

r-sdmtools
===

* Homepage: <https://cran.r-project.org/web/packages/SDMTools/index.html>
* Spack package: [r-sdmtools/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/r-sdmtools/package.py)
* Versions: 1.1-221, 1.1-20, 1.1-13, 1.1-12, 1.1-11
* Build Dependencies: [r](#r), [r-utils](#r-utils)
* Link Dependencies: [r](#r)
* Run Dependencies: [r](#r), [r-utils](#r-utils)

Description: Species Distribution Modelling Tools: tools for processing data associated with species distribution modelling exercises. This package provides a set of tools for post processing the outcomes of species distribution modelling exercises.

---

r-segmented
===

* Homepage: <https://CRAN.R-project.org/package=segmented>
* Spack package: [r-segmented/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/r-segmented/package.py)
* Versions: 0.5-2.2, 0.5-1.4
* Build Dependencies: [r](#r)
* Link Dependencies: [r](#r)
* Run Dependencies: [r](#r)

Description: Given a regression model, segmented 'updates' the model by adding one or more segmented (i.e., piecewise-linear) relationships. Several variables with multiple breakpoints are allowed.
---

r-selectr
===

* Homepage: <https://sjp.co.nz/projects/selectr>
* Spack package: [r-selectr/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/r-selectr/package.py)
* Versions: 0.3-1
* Build Dependencies: [r](#r), [r-testthat](#r-testthat), [r-stringr](#r-stringr), [r-xml2](#r-xml2), [r-xml](#r-xml)
* Link Dependencies: [r](#r)
* Run Dependencies: [r](#r), [r-testthat](#r-testthat), [r-stringr](#r-stringr), [r-xml2](#r-xml2), [r-xml](#r-xml)

Description: Translates a CSS3 selector into an equivalent XPath expression. This allows us to use CSS selectors when working with the XML package as it can only evaluate XPath expressions. Also provided are convenience functions useful for using CSS selectors on XML nodes. This package is a port of the Python package 'cssselect' (<https://pythonhosted.org/cssselect/>).

---

r-seqinr
===

* Homepage: <http://seqinr.r-forge.r-project.org>
* Spack package: [r-seqinr/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/r-seqinr/package.py)
* Versions: 3.4-5, 3.3-6
* Build Dependencies: [r](#r), [r-ade4](#r-ade4), [zlib](#zlib), [r-segmented](#r-segmented)
* Link Dependencies: [r](#r), [zlib](#zlib)
* Run Dependencies: [r](#r), [r-ade4](#r-ade4), [r-segmented](#r-segmented)

Description: Exploratory data analysis and data visualization for biological sequence (DNA and protein) data. Includes also utilities for sequence data management under the ACNUC system.

---

r-seqlogo
===

* Homepage: <https://bioconductor.org/packages/seqLogo/>
* Spack package: [r-seqlogo/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/r-seqlogo/package.py)
* Versions: 1.44.0
* Build Dependencies: [r](#r)
* Link Dependencies: [r](#r)
* Run Dependencies: [r](#r)

Description: seqLogo takes the position weight matrix of a DNA sequence motif and plots the corresponding sequence logo as introduced by Schneider and Stephens (1990).
---

r-seurat
===

* Homepage: <http://satijalab.org/seurat/>
* Spack package: [r-seurat/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/r-seurat/package.py)
* Versions: 2.1.0, 2.0.1
* Build Dependencies: [r-hmisc](#r-hmisc), [r-rtsne](#r-rtsne), [r-reshape2](#r-reshape2), [r-irlba](#r-irlba), [r-ggplot2](#r-ggplot2), [r-rcppprogress](#r-rcppprogress), [r-tsne](#r-tsne), [r-pkgconfig](#r-pkgconfig), [r-sdmtools](#r-sdmtools), [r-nmf](#r-nmf), [r-ape](#r-ape), [r-diffusionmap](#r-diffusionmap), [r-gdata](#r-gdata), [r-cowplot](#r-cowplot), [r-plogr](#r-plogr), [r-caret](#r-caret), [r-vgam](#r-vgam), [r-gridextra](#r-gridextra), [r-tidyr](#r-tidyr), [r-lars](#r-lars), [r-ranger](#r-ranger), [r-ica](#r-ica), [r-dtw](#r-dtw), [r-pbapply](#r-pbapply), [r-plotly](#r-plotly), [r-rocr](#r-rocr), [r](#r), [r-tclust](#r-tclust), [r-fpc](#r-fpc), [r-gplots](#r-gplots), [r-ggjoy](#r-ggjoy), [r-igraph](#r-igraph), [r-fnn](#r-fnn), [r-glue](#r-glue), [r-mixtools](#r-mixtools)
* Link Dependencies: [r](#r)
* Run Dependencies: [r-hmisc](#r-hmisc), [r-rtsne](#r-rtsne), [r-reshape2](#r-reshape2), [r-irlba](#r-irlba), [r-ggplot2](#r-ggplot2), [r-rcppprogress](#r-rcppprogress), [r-tsne](#r-tsne), [r-pkgconfig](#r-pkgconfig), [r-sdmtools](#r-sdmtools), [r-nmf](#r-nmf), [r-ape](#r-ape), [r-diffusionmap](#r-diffusionmap), [r-gdata](#r-gdata), [r-cowplot](#r-cowplot), [r-plogr](#r-plogr), [r-caret](#r-caret), [r-vgam](#r-vgam), [r-gridextra](#r-gridextra), [r-tidyr](#r-tidyr), [r-lars](#r-lars), [r-ranger](#r-ranger), [r-ica](#r-ica), [r-dtw](#r-dtw), [r-pbapply](#r-pbapply), [r-plotly](#r-plotly), [r-rocr](#r-rocr), [r](#r), [r-tclust](#r-tclust), [r-fpc](#r-fpc), [r-gplots](#r-gplots), [r-ggjoy](#r-ggjoy), [r-igraph](#r-igraph), [r-fnn](#r-fnn), [r-glue](#r-glue), [r-mixtools](#r-mixtools)

Description: Seurat is an R package designed for QC, analysis, and exploration of single cell RNA-seq data.
---

r-sf
===

* Homepage: <https://github.com/r-spatial/sf/>
* Spack package: [r-sf/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/r-sf/package.py)
* Versions: 0.5-5
* Build Dependencies: [r](#r), [r-units](#r-units), [geos](#geos), [r-dbi](#r-dbi), [gdal](#gdal), [r-magrittr](#r-magrittr), [proj](#proj), [r-classint](#r-classint), [r-rcpp](#r-rcpp)
* Link Dependencies: [r](#r), [gdal](#gdal), [geos](#geos), [proj](#proj)
* Run Dependencies: [r](#r), [r-units](#r-units), [r-dbi](#r-dbi), [r-magrittr](#r-magrittr), [r-classint](#r-classint), [r-rcpp](#r-rcpp)

Description: Support for simple features, a standardized way to encode spatial vector data. Binds to GDAL for reading and writing data, to GEOS for geometrical operations, and to Proj.4 for projection conversions and datum transformations.

---

r-sfsmisc
===

* Homepage: <https://cran.r-project.org/web/packages/sfsmisc/index.html>
* Spack package: [r-sfsmisc/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/r-sfsmisc/package.py)
* Versions: 1.1-0
* Build Dependencies: [r](#r)
* Link Dependencies: [r](#r)
* Run Dependencies: [r](#r)

Description: Useful utilities ['goodies'] from Seminar fuer Statistik ETH Zurich, quite a few related to graphics; some were ported from S-plus.

---

r-shape
===

* Homepage: <https://cran.r-project.org/package=shape>
* Spack package: [r-shape/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/r-shape/package.py)
* Versions: 1.4.3, 1.4.2
* Build Dependencies: [r](#r)
* Link Dependencies: [r](#r)
* Run Dependencies: [r](#r)

Description: Functions for plotting graphical shapes such as ellipses, circles, cylinders, arrows, ...
---

r-shiny
===

* Homepage: <http://shiny.rstudio.com/>
* Spack package: [r-shiny/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/r-shiny/package.py)
* Versions: 1.0.5, 0.13.2
* Build Dependencies: [r](#r), [r-mime](#r-mime), [r-xtable](#r-xtable), [r-httpuv](#r-httpuv), [r-jsonlite](#r-jsonlite), [r-digest](#r-digest), [r-sourcetools](#r-sourcetools), [r-r6](#r-r6), [r-htmltools](#r-htmltools)
* Link Dependencies: [r](#r)
* Run Dependencies: [r](#r), [r-mime](#r-mime), [r-xtable](#r-xtable), [r-httpuv](#r-httpuv), [r-jsonlite](#r-jsonlite), [r-digest](#r-digest), [r-sourcetools](#r-sourcetools), [r-r6](#r-r6), [r-htmltools](#r-htmltools)

Description: Makes it incredibly easy to build interactive web applications with R. Automatic "reactive" binding between inputs and outputs and extensive pre-built widgets make it possible to build beautiful, responsive, and powerful applications with minimal effort.

---

r-shinydashboard
===

* Homepage: <https://cran.r-project.org/package=shinydashboard>
* Spack package: [r-shinydashboard/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/r-shinydashboard/package.py)
* Versions: 0.7.0, 0.6.1
* Build Dependencies: [r](#r), [r-htmltools](#r-htmltools), [r-shiny](#r-shiny)
* Link Dependencies: [r](#r)
* Run Dependencies: [r](#r), [r-htmltools](#r-htmltools), [r-shiny](#r-shiny)

Description: Create Dashboards with 'Shiny'

---

r-shortread
===

* Homepage: <https://www.bioconductor.org/packages/ShortRead/>
* Spack package: [r-shortread/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/r-shortread/package.py)
* Versions: 1.34.2
* Build Dependencies: [r](#r), [r-latticeextra](#r-latticeextra), [r-s4vectors](#r-s4vectors), [r-lattice](#r-lattice), [r-zlibbioc](#r-zlibbioc), [r-biostrings](#r-biostrings), [r-genomicranges](#r-genomicranges), [r-iranges](#r-iranges), [r-biocparallel](#r-biocparallel), [r-biobase](#r-biobase), [r-genomicalignments](#r-genomicalignments), [r-hwriter](#r-hwriter), [r-rsamtools](#r-rsamtools), [r-genomeinfodb](#r-genomeinfodb), [r-biocgenerics](#r-biocgenerics)
* Link Dependencies: [r](#r)
* Run Dependencies: [r](#r), [r-latticeextra](#r-latticeextra), [r-s4vectors](#r-s4vectors), [r-lattice](#r-lattice), [r-zlibbioc](#r-zlibbioc), [r-biostrings](#r-biostrings), [r-genomicranges](#r-genomicranges), [r-iranges](#r-iranges), [r-biocparallel](#r-biocparallel), [r-biobase](#r-biobase), [r-genomicalignments](#r-genomicalignments), [r-hwriter](#r-hwriter), [r-rsamtools](#r-rsamtools), [r-genomeinfodb](#r-genomeinfodb), [r-biocgenerics](#r-biocgenerics)

Description: This package implements sampling, iteration, and input of FASTQ files. The package includes functions for filtering and trimming reads, and for generating a quality assessment report. Data are represented as DNAStringSet-derived objects, and easily manipulated for a diversity of purposes. The package also contains legacy support for early single-end, ungapped alignment formats.

---

r-siggenes
===

* Homepage: <http://bioconductor.org/packages/siggenes/>
* Spack package: [r-siggenes/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/r-siggenes/package.py)
* Versions: 1.50.0
* Build Dependencies: [r](#r), [r-biobase](#r-biobase), [r-multtest](#r-multtest)
* Link Dependencies: [r](#r)
* Run Dependencies: [r](#r), [r-biobase](#r-biobase), [r-multtest](#r-multtest)

Description: Identification of differentially expressed genes and estimation of the False Discovery Rate (FDR) using both the Significance Analysis of Microarrays (SAM) and the Empirical Bayes Analyses of Microarrays (EBAM).
---

r-simpleaffy
===

* Homepage: <http://bioconductor.org/packages/simpleaffy/>
* Spack package: [r-simpleaffy/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/r-simpleaffy/package.py)
* Versions: 2.52.0
* Build Dependencies: [r](#r), [r-gcrma](#r-gcrma), [r-biobase](#r-biobase), [r-affy](#r-affy), [r-genefilter](#r-genefilter), [r-biocgenerics](#r-biocgenerics)
* Link Dependencies: [r](#r)
* Run Dependencies: [r](#r), [r-gcrma](#r-gcrma), [r-biobase](#r-biobase), [r-affy](#r-affy), [r-genefilter](#r-genefilter), [r-biocgenerics](#r-biocgenerics)

Description: Provides high level functions for reading Affy .CEL files, phenotypic data, and then computing simple things with it, such as t-tests, fold changes and the like. Makes heavy use of the affy library. Also has some basic scatter plot functions and mechanisms for generating high resolution journal figures...

---

r-sm
===

* Homepage: <http://www.stats.gla.ac.uk/~adrian/sm>
* Spack package: [r-sm/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/r-sm/package.py)
* Versions: 2.2-5.5
* Build Dependencies: [r](#r)
* Link Dependencies: [r](#r)
* Run Dependencies: [r](#r)

Description: This is software linked to the book 'Applied Smoothing Techniques for Data Analysis - The Kernel Approach with S-Plus Illustrations', Oxford University Press.
---

r-smoof
===

* Homepage: <http://github.com/jakobbossek/smoof>
* Spack package: [r-smoof/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/r-smoof/package.py)
* Versions: 1.5.1, 1.5
* Build Dependencies: [r-rjsonio](#r-rjsonio), [r-rcpparmadillo](#r-rcpparmadillo), [r-ggplot2](#r-ggplot2), [r-mco](#r-mco), [r-rcolorbrewer](#r-rcolorbrewer), [r-plotly](#r-plotly), [r-bbmisc](#r-bbmisc), [r](#r), [r-plot3d](#r-plot3d), [r-paramhelpers](#r-paramhelpers), [r-checkmate](#r-checkmate), [r-rcpp](#r-rcpp)
* Link Dependencies: [r](#r)
* Run Dependencies: [r-rjsonio](#r-rjsonio), [r-rcpparmadillo](#r-rcpparmadillo), [r-ggplot2](#r-ggplot2), [r-mco](#r-mco), [r-rcolorbrewer](#r-rcolorbrewer), [r-plotly](#r-plotly), [r-bbmisc](#r-bbmisc), [r](#r), [r-plot3d](#r-plot3d), [r-paramhelpers](#r-paramhelpers), [r-checkmate](#r-checkmate), [r-rcpp](#r-rcpp)

Description: Provides generators for a high number of both single- and multi-objective test functions which are frequently used for the benchmarking of (numerical) optimization algorithms. Moreover, it offers a set of convenient functions to generate, plot and work with objective functions.

---

r-sn
===

* Homepage: <https://cran.r-project.org/web/packages/sn/index.html>
* Spack package: [r-sn/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/r-sn/package.py)
* Versions: 1.5-0, 1.4-0, 1.3-0, 1.2-4, 1.2-3
* Build Dependencies: [r](#r), [r-numderiv](#r-numderiv), [r-mnormt](#r-mnormt)
* Link Dependencies: [r](#r)
* Run Dependencies: [r](#r), [r-numderiv](#r-numderiv), [r-mnormt](#r-mnormt)

Description: Build and manipulate probability distributions of the skew-normal family and some related ones, notably the skew-t family, and provide related statistical methods for data fitting and diagnostics, in the univariate and the multivariate case.
---

r-snow
===

* Homepage: <https://cran.r-project.org/web/packages/snow/index.html>
* Spack package: [r-snow/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/r-snow/package.py)
* Versions: 0.4-2
* Build Dependencies: [r](#r)
* Link Dependencies: [r](#r)
* Run Dependencies: [r](#r), [r-rmpi](#r-rmpi)

Description: Support for simple parallel computing in R.

---

r-snowfall
===

* Homepage: <https://cran.r-project.org/web/packages/snowfall/index.html>
* Spack package: [r-snowfall/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/r-snowfall/package.py)
* Versions: 1.84-6.1
* Build Dependencies: [r](#r), [r-snow](#r-snow)
* Link Dependencies: [r](#r)
* Run Dependencies: [r](#r), [r-snow](#r-snow)

Description: Usability wrapper around snow for easier development of parallel R programs. This package offers e.g. extended error checks and additional functions. All functions work in sequential mode, too, if no cluster is present or wished. The package is also designed as a connector to the cluster management tool sfCluster, but can also be used without it.

---

r-snprelate
===

* Homepage: <https://bioconductor.org/packages/SNPRelate>
* Spack package: [r-snprelate/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/r-snprelate/package.py)
* Versions: 1.12.2
* Build Dependencies: [r](#r), [r-gdsfmt](#r-gdsfmt)
* Link Dependencies: [r](#r)
* Run Dependencies: [r](#r), [r-gdsfmt](#r-gdsfmt)

Description: Genome-wide association studies (GWAS) are widely used to investigate the genetic basis of diseases and traits, but they pose many computational challenges. We developed an R package SNPRelate to provide a binary format for single-nucleotide polymorphism (SNP) data in GWAS utilizing CoreArray Genomic Data Structure (GDS) data files. The GDS format offers the efficient operations specifically designed for integers with two bits, since a SNP could occupy only two bits. SNPRelate is also designed to accelerate two key computations on SNP data using parallel computing for multi-core symmetric multiprocessing computer architectures: Principal Component Analysis (PCA) and relatedness analysis using Identity-By-Descent measures. The SNP GDS format is also used by the GWASTools package with the support of S4 classes and generic functions. The extended GDS format is implemented in the SeqArray package to support the storage of single nucleotide variations (SNVs), insertion/deletion polymorphism (indel) and structural variation calls.

---

r-som
===

* Homepage: <https://cran.r-project.org/web/packages/som/index.html>
* Spack package: [r-som/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/r-som/package.py)
* Versions: 0.3-5.1, 0.3-5, 0.3-4, 0.3-3, 0.3-2
* Build Dependencies: [r](#r)
* Link Dependencies: [r](#r)
* Run Dependencies: [r](#r)

Description: Self-Organizing Map (with application in gene clustering).
---

r-somaticsignatures
===

* Homepage: <https://bioconductor.org/packages/SomaticSignatures/>
* Spack package: [r-somaticsignatures/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/r-somaticsignatures/package.py)
* Versions: 2.12.1
* Build Dependencies: [r-iranges](#r-iranges), [r-reshape2](#r-reshape2), [r-ggplot2](#r-ggplot2), [r-s4vectors](#r-s4vectors), [r-variantannotation](#r-variantannotation), [r-nmf](#r-nmf), [r-biostrings](#r-biostrings), [r-genomicranges](#r-genomicranges), [r-pcamethods](#r-pcamethods), [r-proxy](#r-proxy), [r-biobase](#r-biobase), [r](#r), [r-ggbio](#r-ggbio), [r-genomeinfodb](#r-genomeinfodb)
* Link Dependencies: [r](#r)
* Run Dependencies: [r-iranges](#r-iranges), [r-reshape2](#r-reshape2), [r-ggplot2](#r-ggplot2), [r-s4vectors](#r-s4vectors), [r-variantannotation](#r-variantannotation), [r-nmf](#r-nmf), [r-biostrings](#r-biostrings), [r-genomicranges](#r-genomicranges), [r-pcamethods](#r-pcamethods), [r-proxy](#r-proxy), [r-biobase](#r-biobase), [r](#r), [r-ggbio](#r-ggbio), [r-genomeinfodb](#r-genomeinfodb)

Description: The SomaticSignatures package identifies mutational signatures of single nucleotide variants (SNVs). It provides an infrastructure related to the methodology described in Nik-Zainal (2012, Cell), with flexibility in the matrix decomposition algorithms.

---

r-sourcetools
===

* Homepage: <https://cran.r-project.org/package=sourcetools>
* Spack package: [r-sourcetools/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/r-sourcetools/package.py)
* Versions: 0.1.6, 0.1.5
* Build Dependencies: [r](#r), [r-testthat](#r-testthat)
* Link Dependencies: [r](#r)
* Run Dependencies: [r](#r), [r-testthat](#r-testthat)

Description: Tools for Reading, Tokenizing and Parsing R Code.
--- r-sp[¶](#r-sp) === Homepage: * <https://github.com/edzer/sp/Spack package: * [r-sp/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/r-sp/package.py) Versions: 1.2-3 Build Dependencies: [r](#r), [r-lattice](#r-lattice) Link Dependencies: [r](#r) Run Dependencies: [r](#r), [r-lattice](#r-lattice) Description: Classes and methods for spatial data; the classes document where the spatial location information resides, for 2D or 3D data. Utility functions are provided, e.g. for plotting data as maps, spatial selection, as well as methods for retrieving coordinates, for subsetting, print, summary, etc. --- r-sparsem[¶](#r-sparsem) === Homepage: * <http://www.econ.uiuc.edu/~roger/research/sparse/sparse.htmlSpack package: * [r-sparsem/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/r-sparsem/package.py) Versions: 1.74, 1.7 Build Dependencies: [r](#r) Link Dependencies: [r](#r) Run Dependencies: [r](#r) Description: Some basic linear algebra functionality for sparse matrices is provided: including Cholesky decomposition and backsolving as well as standard R subsetting and Kronecker products. 
--- r-spdep[¶](#r-spdep) === Homepage: * <https://r-forge.r-project.org/projects/spdepSpack package: * [r-spdep/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/r-spdep/package.py) Versions: 0.6-13 Build Dependencies: [r](#r), [r-learnbayes](#r-learnbayes), [r-deldir](#r-deldir), [r-coda](#r-coda), [r-expm](#r-expm), [r-gmodels](#r-gmodels), [r-sp](#r-sp) Link Dependencies: [r](#r) Run Dependencies: [r](#r), [r-learnbayes](#r-learnbayes), [r-deldir](#r-deldir), [r-coda](#r-coda), [r-expm](#r-expm), [r-gmodels](#r-gmodels), [r-sp](#r-sp) Description: A collection of functions to create spatial weights matrix objects from polygon contiguities, from point patterns by distance and tessellations, for summarizing these objects, and for permitting their use in spatial data analysis, including regional aggregation by minimum spanning tree; a collection of tests for spatial autocorrelation, including global Moran's I, APLE, Geary's C, Hubert/Mantel general cross product statistic, Empirical Bayes estimates and Assuncao/Reis Index, Getis/Ord G and multicoloured join count statistics, local Moran's I and Getis/Ord G, saddlepoint approximations and exact tests for global and local Moran's I; and functions for estimating spatial simultaneous autoregressive (SAR) lag and error models, impact measures for lag models, weighted and unweighted SAR and CAR spatial regression models, semi-parametric and Moran eigenvector spatial filtering, GM SAR error models, and generalized spatial two stage least squares models. 
--- r-speedglm[¶](#r-speedglm) === Homepage: * <https://cran.r-project.org/package=speedglmSpack package: * [r-speedglm/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/r-speedglm/package.py) Versions: 0.3-2 Build Dependencies: [r](#r), [r-matrix](#r-matrix), [r-mass](#r-mass) Link Dependencies: [r](#r) Run Dependencies: [r](#r), [r-matrix](#r-matrix), [r-mass](#r-mass) Description: Fitting linear models and generalized linear models to large data sets by updating algorithms. --- r-spem[¶](#r-spem) === Homepage: * <https://bioconductor.org/packages/SPEM/Spack package: * [r-spem/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/r-spem/package.py) Versions: 1.18.0 Build Dependencies: [r](#r), [r-biobase](#r-biobase), [r-rsolnp](#r-rsolnp) Link Dependencies: [r](#r) Run Dependencies: [r](#r), [r-biobase](#r-biobase), [r-rsolnp](#r-rsolnp) Description: This package can optimize the parameters in S-system models given time series data. --- r-splitstackshape[¶](#r-splitstackshape) === Homepage: * <http://github.com/mrdwab/splitstackshapeSpack package: * [r-splitstackshape/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/r-splitstackshape/package.py) Versions: 1.4.4 Build Dependencies: [r](#r), [r-data-table](#r-data-table) Link Dependencies: [r](#r) Run Dependencies: [r](#r), [r-data-table](#r-data-table) Description: Stack and Reshape Datasets After Splitting Concatenated Values. Online data collection tools like Google Forms often export multiple-response questions with data concatenated in cells. The concat.split (cSplit) family of functions splits such data into separate cells. The package also includes functions to stack groups of columns and to reshape wide data, even when the data are "unbalanced", something which reshape (from base R) does not handle, and which melt and dcast from reshape2 do not easily handle. 
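The concatenated-cell problem that splitstackshape's cSplit family addresses can be illustrated outside R as well. A minimal Python sketch, using only the standard library and made-up survey data, of splitting comma-separated responses into one row per value:

```python
# Rows as a form tool might export them: several responses concatenated
# into a single cell (data invented for illustration).
rows = [
    ("alice", "apples,pears"),
    ("bob", "pears"),
    ("carol", "apples,plums,pears"),
]

def split_long(rows, sep=","):
    """Split the concatenated second column into one (id, value) row per
    value, mimicking the long-format reshape that cSplit performs."""
    return [(name, value) for name, cell in rows for value in cell.split(sep)]

long_rows = split_long(rows)
print(long_rows)
# [('alice', 'apples'), ('alice', 'pears'), ('bob', 'pears'),
#  ('carol', 'apples'), ('carol', 'plums'), ('carol', 'pears')]
```

The real package additionally handles unbalanced data and wide-format output via data.table; this sketch only shows the core split-and-stack idea.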
--- r-sqldf[¶](#r-sqldf) === Homepage: * <https://cran.r-project.org/package=sqldfSpack package: * [r-sqldf/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/r-sqldf/package.py) Versions: 0.4-11 Build Dependencies: [r](#r), [r-dbi](#r-dbi), [r-proto](#r-proto), [r-rsqlite](#r-rsqlite), [r-gsubfn](#r-gsubfn), [r-chron](#r-chron) Link Dependencies: [r](#r) Run Dependencies: [r](#r), [r-dbi](#r-dbi), [r-proto](#r-proto), [r-rsqlite](#r-rsqlite), [r-gsubfn](#r-gsubfn), [r-chron](#r-chron) Description: The sqldf() function is typically passed a single argument which is an SQL select statement where the table names are ordinary R data frame names. sqldf() transparently sets up a database, imports the data frames into that database, performs the SQL select or other statement and returns the result using a heuristic to determine which class to assign to each column of the returned data frame. The sqldf() or read.csv.sql() functions can also be used to read filtered files into R even if the original files are larger than R itself can handle. 'RSQLite', 'RH2', 'RMySQL' and 'RPostgreSQL' backends are supported. 
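The workflow sqldf automates (set up a temporary database, import the data frames, run the SQL statement, return the result) can be sketched in Python with the standard-library sqlite3 module; the table name and data here are invented for illustration:

```python
import sqlite3

# In-memory database standing in for the temporary database sqldf creates.
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE df (name TEXT, score REAL)")
con.executemany("INSERT INTO df VALUES (?, ?)",
                [("a", 1.0), ("b", 2.5), ("c", 4.0)])

# The SQL statement references the imported table by name, just as sqldf
# lets an SQL statement reference an R data frame by its variable name.
result = con.execute("SELECT name, score FROM df WHERE score > 1.5").fetchall()
print(result)  # [('b', 2.5), ('c', 4.0)]
```

sqldf goes further by inferring column classes for the returned data frame and by supporting several database backends, but the round trip shown here is the essence of the mechanism.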
--- r-squash[¶](#r-squash) === Homepage: * <https://cran.r-project.org/package=squashSpack package: * [r-squash/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/r-squash/package.py) Versions: 1.0.8, 1.0.7 Build Dependencies: [r](#r) Link Dependencies: [r](#r) Run Dependencies: [r](#r) Description: Color-Based Plots for Multivariate Visualization --- r-stanheaders[¶](#r-stanheaders) === Homepage: * <http://mc-stan.org/Spack package: * [r-stanheaders/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/r-stanheaders/package.py) Versions: 2.17.1, 2.10.0-2 Build Dependencies: [r](#r) Link Dependencies: [r](#r) Run Dependencies: [r](#r) Description: The C++ header files of the Stan project are provided by this package, but it contains no R code, vignettes, or function documentation. There is a shared object containing part of the CVODES library, but it is not accessible from R. StanHeaders is only useful for developers who want to utilize the LinkingTo directive of their package's DESCRIPTION file to build on the Stan library without incurring unnecessary dependencies. The Stan project develops a probabilistic programming language that implements full or approximate Bayesian statistical inference via Markov Chain Monte Carlo or variational methods and implements (optionally penalized) maximum likelihood estimation via optimization. The Stan library includes an advanced automatic differentiation scheme, templated statistical and linear algebra functions that can handle the automatically differentiable scalar types (and doubles, ints, etc.), and a parser for the Stan language. The 'rstan' package provides user-facing R functions to parse, compile, test, estimate, and analyze Stan models. 
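The LinkingTo mechanism the StanHeaders description refers to is declared in a downstream package's DESCRIPTION file; a hypothetical package (name invented for illustration) that compiles against the Stan headers would include lines like:

```
Package: mystanmodel
Version: 0.1.0
Imports: Rcpp
LinkingTo: StanHeaders, Rcpp
```

LinkingTo puts the listed packages' header directories on the include path at compile time without creating a run-time dependency, which is exactly the "headers only" usage StanHeaders is designed for.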
--- r-statmod[¶](#r-statmod) === Homepage: * <https://cran.r-project.org/package=statmodSpack package: * [r-statmod/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/r-statmod/package.py) Versions: 1.4.30 Build Dependencies: [r](#r) Link Dependencies: [r](#r) Run Dependencies: [r](#r) Description: A collection of algorithms and functions to aid statistical modeling. Includes growth curve comparisons, limiting dilution analysis (aka ELDA), mixed linear models, heteroscedastic regression, inverse-Gaussian probability calculations, Gauss quadrature and a secure convergence algorithm for nonlinear models. Includes advanced generalized linear model functions that implement secure convergence, dispersion modeling and Tweedie power-law families. --- r-statnet-common[¶](#r-statnet-common) === Homepage: * <http://www.statnet.orgSpack package: * [r-statnet-common/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/r-statnet-common/package.py) Versions: 3.3.0 Build Dependencies: [r](#r) Link Dependencies: [r](#r) Run Dependencies: [r](#r) Description: Non-statistical utilities used by the software developed by the Statnet Project. They may also be of use to others. --- r-stringi[¶](#r-stringi) === Homepage: * <http://www.gagolewski.com/software/stringi/Spack package: * [r-stringi/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/r-stringi/package.py) Versions: 1.1.5, 1.1.3, 1.1.2, 1.1.1 Build Dependencies: [r](#r), [icu4c](#icu4c) Link Dependencies: [r](#r), [icu4c](#icu4c) Run Dependencies: [r](#r) Description: Allows for fast, correct, consistent, portable, as well as convenient character string/text processing in every locale and any native encoding. Owing to the use of the ICU library, the package provides R users with platform-independent functions known to Java, Perl, Python, PHP, and Ruby programmers. 
Among available features there are: pattern searching (e.g., with ICU Java-like regular expressions or the Unicode Collation Algorithm), random string generation, case mapping, string transliteration, concatenation, Unicode normalization, date-time formatting and parsing, etc. --- r-stringr[¶](#r-stringr) === Homepage: * <https://cran.r-project.org/web/packages/stringr/index.htmlSpack package: * [r-stringr/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/r-stringr/package.py) Versions: 1.2.0, 1.1.0, 1.0.0 Build Dependencies: [r](#r), [r-stringi](#r-stringi), [r-magrittr](#r-magrittr) Link Dependencies: [r](#r) Run Dependencies: [r](#r), [r-stringi](#r-stringi), [r-magrittr](#r-magrittr) Description: A consistent, simple and easy to use set of wrappers around the fantastic 'stringi' package. All function and argument names (and positions) are consistent, all functions deal with "NA"'s and zero length vectors in the same way, and the output from one function is easy to feed into the input of another. --- r-strucchange[¶](#r-strucchange) === Homepage: * <https://cran.r-project.org/package=strucchangeSpack package: * [r-strucchange/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/r-strucchange/package.py) Versions: 1.5-1 Build Dependencies: [r](#r), [r-sandwich](#r-sandwich), [r-zoo](#r-zoo) Link Dependencies: [r](#r) Run Dependencies: [r](#r), [r-sandwich](#r-sandwich), [r-zoo](#r-zoo) Description: Testing, monitoring and dating structural changes in (linear) regression models. 
--- r-subplex[¶](#r-subplex) === Homepage: * <https://cran.r-project.org/package=subplexSpack package: * [r-subplex/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/r-subplex/package.py) Versions: 1.4-1 Build Dependencies: [r](#r) Link Dependencies: [r](#r) Run Dependencies: [r](#r) Description: Unconstrained Optimization using the Subplex Algorithm --- r-summarizedexperiment[¶](#r-summarizedexperiment) === Homepage: * <https://bioconductor.org/packages/SummarizedExperiment/Spack package: * [r-summarizedexperiment/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/r-summarizedexperiment/package.py) Versions: 1.10.0, 1.8.1, 1.6.5 Build Dependencies: [r](#r), [r-genomicranges](#r-genomicranges), [r-iranges](#r-iranges), [r-matrix](#r-matrix), [r-s4vectors](#r-s4vectors), [r-biobase](#r-biobase), [r-delayedarray](#r-delayedarray), [r-genomeinfodb](#r-genomeinfodb), [r-biocgenerics](#r-biocgenerics) Link Dependencies: [r](#r) Run Dependencies: [r](#r), [r-genomicranges](#r-genomicranges), [r-iranges](#r-iranges), [r-matrix](#r-matrix), [r-s4vectors](#r-s4vectors), [r-biobase](#r-biobase), [r-delayedarray](#r-delayedarray), [r-genomeinfodb](#r-genomeinfodb), [r-biocgenerics](#r-biocgenerics) Description: The SummarizedExperiment container contains one or more assays, each represented by a matrix-like object of numeric or other mode. The rows typically represent genomic ranges of interest and the columns represent samples. 
--- r-survey[¶](#r-survey) === Homepage: * <http://r-survey.r-forge.r-project.org/survey/Spack package: * [r-survey/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/r-survey/package.py) Versions: 3.30-3 Build Dependencies: [r](#r) Link Dependencies: [r](#r) Run Dependencies: [r](#r) Description: Summary statistics, two-sample tests, rank tests, generalised linear models, cumulative link models, Cox models, loglinear models, and general maximum pseudolikelihood estimation for multistage stratified, cluster-sampled, unequally weighted survey samples. Variances by Taylor series linearisation or replicate weights. Post-stratification, calibration, and raking. Two-phase subsampling designs. Graphics. PPS sampling without replacement. Principal components, factor analysis. --- r-survival[¶](#r-survival) === Homepage: * <https://cran.r-project.org/package=survivalSpack package: * [r-survival/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/r-survival/package.py) Versions: 2.41-3, 2.40-1, 2.39-5 Build Dependencies: [r](#r), [r-matrix](#r-matrix) Link Dependencies: [r](#r) Run Dependencies: [r](#r), [r-matrix](#r-matrix) Description: Contains the core survival analysis routines, including definition of Surv objects, Kaplan-Meier and Aalen-Johansen (multi-state) curves, Cox models, and parametric accelerated failure time models. 
--- r-sva[¶](#r-sva) === Homepage: * <https://www.bioconductor.org/packages/sva/Spack package: * [r-sva/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/r-sva/package.py) Versions: 3.24.4 Build Dependencies: [r](#r), [r-limma](#r-limma), [r-mgcv](#r-mgcv), [r-matrixstats](#r-matrixstats), [r-biocparallel](#r-biocparallel), [r-genefilter](#r-genefilter) Link Dependencies: [r](#r) Run Dependencies: [r](#r), [r-limma](#r-limma), [r-mgcv](#r-mgcv), [r-matrixstats](#r-matrixstats), [r-biocparallel](#r-biocparallel), [r-genefilter](#r-genefilter) Description: Surrogate Variable Analysis. --- r-tarifx[¶](#r-tarifx) === Homepage: * <https://cran.r-project.org/package=taRifxSpack package: * [r-tarifx/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/r-tarifx/package.py) Versions: 1.0.6 Build Dependencies: [r](#r), [r-plyr](#r-plyr), [r-reshape2](#r-reshape2) Link Dependencies: [r](#r) Run Dependencies: [r](#r), [r-plyr](#r-plyr), [r-reshape2](#r-reshape2) Description: A collection of various utility and convenience functions. --- r-tclust[¶](#r-tclust) === Homepage: * <https://cran.r-project.org/web/packages/tclust/index.htmlSpack package: * [r-tclust/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/r-tclust/package.py) Versions: 1.3-1, 1.2-7, 1.2-3, 1.1-03, 1.1-02 Build Dependencies: [r](#r), [r-mvtnorm](#r-mvtnorm), [r-cluster](#r-cluster), [r-sn](#r-sn), [r-mclust](#r-mclust) Link Dependencies: [r](#r) Run Dependencies: [r](#r), [r-mvtnorm](#r-mvtnorm), [r-cluster](#r-cluster), [r-sn](#r-sn), [r-mclust](#r-mclust) Description: Provides functions for robust trimmed clustering. 
--- r-tensora[¶](#r-tensora) === Homepage: * <https://cran.r-project.org/web/packages/tensorA/index.htmlSpack package: * [r-tensora/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/r-tensora/package.py) Versions: 0.36 Build Dependencies: [r](#r) Link Dependencies: [r](#r) Run Dependencies: [r](#r) Description: The package provides convenience functions for advanced linear algebra with tensors and computation on datasets of tensors at a higher level of abstraction. --- r-testit[¶](#r-testit) === Homepage: * <https://cran.r-project.org/package=testitSpack package: * [r-testit/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/r-testit/package.py) Versions: 0.7, 0.5 Build Dependencies: [r](#r) Link Dependencies: [r](#r) Run Dependencies: [r](#r) Description: Provides two convenience functions assert() and test_pkg() to facilitate testing R packages. --- r-testthat[¶](#r-testthat) === Homepage: * <https://github.com/hadley/testthatSpack package: * [r-testthat/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/r-testthat/package.py) Versions: 1.0.2 Build Dependencies: [r](#r), [r-praise](#r-praise), [r-r6](#r-r6), [r-magrittr](#r-magrittr), [r-digest](#r-digest), [r-crayon](#r-crayon) Link Dependencies: [r](#r) Run Dependencies: [r](#r), [r-praise](#r-praise), [r-r6](#r-r6), [r-magrittr](#r-magrittr), [r-digest](#r-digest), [r-crayon](#r-crayon) Description: A unit testing system designed to be fun, flexible and easy to set up. 
--- r-tfbstools[¶](#r-tfbstools) === Homepage: * <http://bioconductor.org/packages/TFBSTools/Spack package: * [r-tfbstools/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/r-tfbstools/package.py) Versions: 1.16.0 Build Dependencies: [r-iranges](#r-iranges), [r-dbi](#r-dbi), [r-catools](#r-catools), [r-s4vectors](#r-s4vectors), [r-rsqlite](#r-rsqlite), [r-xvector](#r-xvector), [r-biocparallel](#r-biocparallel), [r-dirichletmultinomial](#r-dirichletmultinomial), [r-biostrings](#r-biostrings), [r-bsgenome](#r-bsgenome), [r-seqlogo](#r-seqlogo), [r-genomeinfodb](#r-genomeinfodb), [r-rtracklayer](#r-rtracklayer), [r-genomicranges](#r-genomicranges), [r-gtools](#r-gtools), [r-tfmpvalue](#r-tfmpvalue), [r-xml](#r-xml), [r-biobase](#r-biobase), [r](#r), [r-cner](#r-cner), [r-biocgenerics](#r-biocgenerics) Link Dependencies: [r](#r) Run Dependencies: [r-iranges](#r-iranges), [r-dbi](#r-dbi), [r-catools](#r-catools), [r-s4vectors](#r-s4vectors), [r-rsqlite](#r-rsqlite), [r-xvector](#r-xvector), [r-biocparallel](#r-biocparallel), [r-dirichletmultinomial](#r-dirichletmultinomial), [r-biostrings](#r-biostrings), [r-bsgenome](#r-bsgenome), [r-seqlogo](#r-seqlogo), [r-genomeinfodb](#r-genomeinfodb), [r-rtracklayer](#r-rtracklayer), [r-genomicranges](#r-genomicranges), [r-gtools](#r-gtools), [r-tfmpvalue](#r-tfmpvalue), [r-xml](#r-xml), [r-biobase](#r-biobase), [r](#r), [r-cner](#r-cner), [r-biocgenerics](#r-biocgenerics) Description: TFBSTools is a package for the analysis and manipulation of transcription factor binding sites. It includes conversion between Position Frequency Matrix (PFM), Position Weight Matrix (PWM) and Information Content Matrix (ICM) representations. It can also scan for putative TFBSs in sequences/alignments, query the JASPAR database, and provides a wrapper around de novo motif discovery software. 
--- r-tfmpvalue[¶](#r-tfmpvalue) === Homepage: * <https://github.com/ge11232002/TFMPvalueSpack package: * [r-tfmpvalue/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/r-tfmpvalue/package.py) Versions: 0.0.6 Build Dependencies: [r](#r), [r-rcpp](#r-rcpp) Link Dependencies: [r](#r) Run Dependencies: [r](#r), [r-rcpp](#r-rcpp) Description: In putative Transcription Factor Binding Site (TFBS) identification from sequences/alignments, we are interested in the significance of a certain match score. TFMPvalue provides accurate calculation of the P-value for a given score threshold for Position Weight Matrices, or of the score for a given P-value. This package is an interface to code originally made available by <NAME> and <NAME>, 2007, Algorithms Mol Biol:2, 15. --- r-th-data[¶](#r-th-data) === Homepage: * <https://cran.r-project.org/package=TH.dataSpack package: * [r-th-data/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/r-th-data/package.py) Versions: 1.0-8, 1.0-7 Build Dependencies: [r](#r), [r-survival](#r-survival), [r-mass](#r-mass) Link Dependencies: [r](#r) Run Dependencies: [r](#r), [r-survival](#r-survival), [r-mass](#r-mass) Description: Contains data sets used in other packages Torsten Hothorn maintains. 
--- r-threejs[¶](#r-threejs) === Homepage: * <http://bwlewis.github.io/rthreejsSpack package: * [r-threejs/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/r-threejs/package.py) Versions: 0.2.2 Build Dependencies: [r](#r), [r-base64enc](#r-base64enc), [r-matrix](#r-matrix), [r-htmlwidgets](#r-htmlwidgets), [r-jsonlite](#r-jsonlite) Link Dependencies: [r](#r) Run Dependencies: [r](#r), [r-base64enc](#r-base64enc), [r-matrix](#r-matrix), [r-htmlwidgets](#r-htmlwidgets), [r-jsonlite](#r-jsonlite) Description: Create interactive 3D scatter plots, network plots, and globes using the 'three.js' visualization library ("http://threejs.org"). --- r-tibble[¶](#r-tibble) === Homepage: * <https://github.com/tidyverse/tibbleSpack package: * [r-tibble/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/r-tibble/package.py) Versions: 1.3.4, 1.2, 1.1 Build Dependencies: [r](#r), [r-assertthat](#r-assertthat), [r-rcpp](#r-rcpp), [r-lazyeval](#r-lazyeval), [r-rlang](#r-rlang) Link Dependencies: [r](#r) Run Dependencies: [r](#r), [r-assertthat](#r-assertthat), [r-rcpp](#r-rcpp), [r-lazyeval](#r-lazyeval), [r-rlang](#r-rlang) Description: Provides a 'tbl_df' class that offers better checking and printing capabilities than traditional data frames. 
--- r-tidycensus[¶](#r-tidycensus) === Homepage: * <https://cran.r-project.org/package=tidycensusSpack package: * [r-tidycensus/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/r-tidycensus/package.py) Versions: 0.3.1 Build Dependencies: [r-tidyr](#r-tidyr), [r-readr](#r-readr), [r-purrr](#r-purrr), [r-rvest](#r-rvest), [r-dplyr](#r-dplyr), [r](#r), [r-jsonlite](#r-jsonlite), [r-stringr](#r-stringr), [r-tigris](#r-tigris), [r-httr](#r-httr), [r-rappdirs](#r-rappdirs), [r-units](#r-units), [r-sf](#r-sf), [r-xml2](#r-xml2) Link Dependencies: [r](#r) Run Dependencies: [r-tidyr](#r-tidyr), [r-readr](#r-readr), [r-purrr](#r-purrr), [r-rvest](#r-rvest), [r-dplyr](#r-dplyr), [r](#r), [r-jsonlite](#r-jsonlite), [r-stringr](#r-stringr), [r-tigris](#r-tigris), [r-httr](#r-httr), [r-rappdirs](#r-rappdirs), [r-units](#r-units), [r-sf](#r-sf), [r-xml2](#r-xml2) Description: An integrated R interface to the decennial US Census and American Community Survey APIs and the US Census Bureau's geographic boundary files. Allows R users to return Census and ACS data as tidyverse-ready data frames, and optionally returns a list-column with feature geometry for many geographies. --- r-tidyr[¶](#r-tidyr) === Homepage: * <https://github.com/hadley/tidyrSpack package: * [r-tidyr/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/r-tidyr/package.py) Versions: 0.7.2, 0.5.1 Build Dependencies: [r](#r), [r-tibble](#r-tibble), [r-purrr](#r-purrr), [r-tidyselect](#r-tidyselect), [r-magrittr](#r-magrittr), [r-dplyr](#r-dplyr), [r-rcpp](#r-rcpp), [r-stringi](#r-stringi), [r-glue](#r-glue), [r-rlang](#r-rlang) Link Dependencies: [r](#r) Run Dependencies: [r](#r), [r-tibble](#r-tibble), [r-purrr](#r-purrr), [r-tidyselect](#r-tidyselect), [r-magrittr](#r-magrittr), [r-dplyr](#r-dplyr), [r-rcpp](#r-rcpp), [r-stringi](#r-stringi), [r-glue](#r-glue), [r-rlang](#r-rlang) Description: An evolution of 'reshape2'. 
It's designed specifically for data tidying (not general reshaping or aggregating) and works well with 'dplyr' data pipelines. --- r-tidyselect[¶](#r-tidyselect) === Homepage: * <https://cran.r-project.org/package=tidyselectSpack package: * [r-tidyselect/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/r-tidyselect/package.py) Versions: 0.2.3 Build Dependencies: [r](#r), [r-purrr](#r-purrr), [r-rlang](#r-rlang), [r-glue](#r-glue), [r-rcpp](#r-rcpp) Link Dependencies: [r](#r) Run Dependencies: [r](#r), [r-purrr](#r-purrr), [r-rlang](#r-rlang), [r-glue](#r-glue), [r-rcpp](#r-rcpp) Description: A backend for the selecting functions of the 'tidyverse'. It makes it easy to implement select-like functions in your own packages in a way that is consistent with other 'tidyverse' interfaces for selection. --- r-tidyverse[¶](#r-tidyverse) === Homepage: * <http://tidyverse.tidyverse.org/Spack package: * [r-tidyverse/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/r-tidyverse/package.py) Versions: 1.2.1 Build Dependencies: [r-tibble](#r-tibble), [r-ggplot2](#r-ggplot2), [r-rvest](#r-rvest), [r-dbplyr](#r-dbplyr), [r-dplyr](#r-dplyr), [r-hms](#r-hms), [r-forcats](#r-forcats), [r-cli](#r-cli), [r-jsonlite](#r-jsonlite), [r-modelr](#r-modelr), [r-httr](#r-httr), [r-rlang](#r-rlang), [r-stringr](#r-stringr), [r-readr](#r-readr), [r-lubridate](#r-lubridate), [r-purrr](#r-purrr), [r-haven](#r-haven), [r-crayon](#r-crayon), [r-reprex](#r-reprex), [r](#r), [r-rstudioapi](#r-rstudioapi), [r-xml2](#r-xml2), [r-magrittr](#r-magrittr), [r-broom](#r-broom), [r-tidyr](#r-tidyr), [r-readxl](#r-readxl) Link Dependencies: [r](#r) Run Dependencies: [r-tibble](#r-tibble), [r-ggplot2](#r-ggplot2), [r-rvest](#r-rvest), [r-dbplyr](#r-dbplyr), [r-dplyr](#r-dplyr), [r-hms](#r-hms), [r-forcats](#r-forcats), [r-cli](#r-cli), [r-jsonlite](#r-jsonlite), [r-modelr](#r-modelr), [r-httr](#r-httr), [r-rlang](#r-rlang), 
[r-stringr](#r-stringr), [r-readr](#r-readr), [r-lubridate](#r-lubridate), [r-purrr](#r-purrr), [r-haven](#r-haven), [r-crayon](#r-crayon), [r-reprex](#r-reprex), [r](#r), [r-rstudioapi](#r-rstudioapi), [r-xml2](#r-xml2), [r-magrittr](#r-magrittr), [r-broom](#r-broom), [r-tidyr](#r-tidyr), [r-readxl](#r-readxl) Description: The 'tidyverse' is a set of packages that work in harmony because they share common data representations and 'API' design. This package is designed to make it easy to install and load multiple 'tidyverse' packages in a single step. --- r-tiff[¶](#r-tiff) === Homepage: * <http://www.rforge.net/tiff/Spack package: * [r-tiff/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/r-tiff/package.py) Versions: 0.1-5 Build Dependencies: [r](#r), [libtiff](#libtiff), [libjpeg](#libjpeg) Link Dependencies: [r](#r), [libtiff](#libtiff), [libjpeg](#libjpeg) Run Dependencies: [r](#r) Description: This package provides an easy and simple way to read, write and display bitmap images stored in the TIFF format. It can read and write both files and in-memory raw vectors. 
--- r-tigris[¶](#r-tigris) === Homepage: * <https://cran.r-project.org/package=tigrisSpack package: * [r-tigris/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/r-tigris/package.py) Versions: 0.5.3 Build Dependencies: [r](#r), [r-dplyr](#r-dplyr), [r-sf](#r-sf), [r-rgeos](#r-rgeos), [r-rgdal](#r-rgdal), [r-sp](#r-sp), [r-magrittr](#r-magrittr), [r-httr](#r-httr), [r-rappdirs](#r-rappdirs), [r-maptools](#r-maptools), [r-uuid](#r-uuid), [r-stringr](#r-stringr) Link Dependencies: [r](#r) Run Dependencies: [r](#r), [r-dplyr](#r-dplyr), [r-sf](#r-sf), [r-rgeos](#r-rgeos), [r-rgdal](#r-rgdal), [r-sp](#r-sp), [r-magrittr](#r-magrittr), [r-httr](#r-httr), [r-rappdirs](#r-rappdirs), [r-maptools](#r-maptools), [r-uuid](#r-uuid), [r-stringr](#r-stringr) Description: Download TIGER/Line shapefiles from the United States Census Bureau and load into R as 'SpatialDataFrame' or 'sf' objects. --- r-timedate[¶](#r-timedate) === Homepage: * <https://cran.r-project.org/package=timeDateSpack package: * [r-timedate/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/r-timedate/package.py) Versions: 3012.100 Build Dependencies: [r](#r) Link Dependencies: [r](#r) Run Dependencies: [r](#r) Description: Environment for teaching "Financial Engineering and Computational Finance". Managing chronological and calendar objects. 
--- r-tmixclust[¶](#r-tmixclust) === Homepage: * <https://bioconductor.org/packages/TMixClust/Spack package: * [r-tmixclust/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/r-tmixclust/package.py) Versions: 1.0.1 Build Dependencies: [r](#r), [r-mvtnorm](#r-mvtnorm), [r-flexclust](#r-flexclust), [r-gss](#r-gss), [r-cluster](#r-cluster), [r-biobase](#r-biobase), [r-biocparallel](#r-biocparallel), [r-zoo](#r-zoo), [r-spem](#r-spem) Link Dependencies: [r](#r) Run Dependencies: [r](#r), [r-mvtnorm](#r-mvtnorm), [r-flexclust](#r-flexclust), [r-gss](#r-gss), [r-cluster](#r-cluster), [r-biobase](#r-biobase), [r-biocparallel](#r-biocparallel), [r-zoo](#r-zoo), [r-spem](#r-spem) Description: Implementation of a clustering method for time series gene expression data based on mixed-effects models with Gaussian variables and non-parametric cubic splines estimation. The method can robustly account for the high levels of noise present in typical gene expression time series datasets. --- r-topgo[¶](#r-topgo) === Homepage: * <https://www.bioconductor.org/packages/topGO/Spack package: * [r-topgo/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/r-topgo/package.py) Versions: 2.30.1, 2.28.0 Build Dependencies: [r](#r), [r-graph](#r-graph), [r-go-db](#r-go-db), [r-dbi](#r-dbi), [r-biobase](#r-biobase), [r-matrixstats](#r-matrixstats), [r-annotationdbi](#r-annotationdbi), [r-sparsem](#r-sparsem), [r-lattice](#r-lattice), [r-biocgenerics](#r-biocgenerics) Link Dependencies: [r](#r) Run Dependencies: [r](#r), [r-graph](#r-graph), [r-go-db](#r-go-db), [r-dbi](#r-dbi), [r-biobase](#r-biobase), [r-matrixstats](#r-matrixstats), [r-annotationdbi](#r-annotationdbi), [r-sparsem](#r-sparsem), [r-lattice](#r-lattice), [r-biocgenerics](#r-biocgenerics) Description: topGO package provides tools for testing GO terms while accounting for the topology of the GO graph. 
Different test statistics and different methods for eliminating local similarities and dependencies between GO terms can be implemented and applied. --- r-trimcluster[¶](#r-trimcluster) === Homepage: * <http://www.homepages.ucl.ac.uk/~ucakcheSpack package: * [r-trimcluster/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/r-trimcluster/package.py) Versions: 0.1-2 Build Dependencies: [r](#r) Link Dependencies: [r](#r) Run Dependencies: [r](#r) Description: trimcluster: Cluster analysis with trimming --- r-truncnorm[¶](#r-truncnorm) === Homepage: * <https://cran.r-project.org/package=truncnormSpack package: * [r-truncnorm/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/r-truncnorm/package.py) Versions: 1.0-8 Build Dependencies: [r](#r) Link Dependencies: [r](#r) Run Dependencies: [r](#r) Description: Density, probability, quantile and random number generation functions for the truncated normal distribution. --- r-trust[¶](#r-trust) === Homepage: * <http://www.stat.umn.edu/geyer/trustSpack package: * [r-trust/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/r-trust/package.py) Versions: 0.1-7 Build Dependencies: [r](#r) Link Dependencies: [r](#r) Run Dependencies: [r](#r) Description: Does local optimization using two derivatives and trust regions. Guaranteed to converge to local minimum of objective function. --- r-tseries[¶](#r-tseries) === Homepage: * <https://cran.r-project.org/package=tseriesSpack package: * [r-tseries/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/r-tseries/package.py) Versions: 0.10-42 Build Dependencies: [r](#r), [r-quadprog](#r-quadprog), [r-zoo](#r-zoo), [r-quantmod](#r-quantmod) Link Dependencies: [r](#r) Run Dependencies: [r](#r), [r-quadprog](#r-quadprog), [r-zoo](#r-zoo), [r-quantmod](#r-quantmod) Description: Time series analysis and computational finance. 
--- r-tsne[¶](#r-tsne) === Homepage: * <https://cran.r-project.org/web/packages/tsne/index.htmlSpack package: * [r-tsne/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/r-tsne/package.py) Versions: 0.1-3, 0.1-2, 0.1-1 Build Dependencies: [r](#r) Link Dependencies: [r](#r) Run Dependencies: [r](#r) Description: A "pure R" implementation of the t-SNE algorithm. --- r-ttr[¶](#r-ttr) === Homepage: * <https://github.com/joshuaulrich/TTRSpack package: * [r-ttr/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/r-ttr/package.py) Versions: 0.23-1 Build Dependencies: [r](#r), [r-xts](#r-xts), [r-zoo](#r-zoo) Link Dependencies: [r](#r) Run Dependencies: [r](#r), [r-xts](#r-xts), [r-zoo](#r-zoo) Description: Functions and data to construct technical trading rules with R. --- r-udunits2[¶](#r-udunits2) === Homepage: * <https://github.com/pacificclimate/Rudunits2Spack package: * [r-udunits2/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/r-udunits2/package.py) Versions: 0.13 Build Dependencies: [r](#r), [udunits2](#udunits2) Link Dependencies: [r](#r), [udunits2](#udunits2) Run Dependencies: [r](#r) Description: Provides simple bindings to Unidata's udunits library. --- r-units[¶](#r-units) === Homepage: * <https://github.com/edzer/units/Spack package: * [r-units/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/r-units/package.py) Versions: 0.4-6 Build Dependencies: [r](#r), [r-udunits2](#r-udunits2) Link Dependencies: [r](#r) Run Dependencies: [r](#r), [r-udunits2](#r-udunits2) Description: Support for measurement units in R vectors, matrices and arrays: automatic propagation, conversion, derivation and simplification of units; raising errors in case of unit incompatibility. Compatible with the POSIXct, Date and difftime classes. 
Uses the UNIDATA udunits library and unit database for unit compatibility checking and conversion. --- r-utils[¶](#r-utils) === Homepage: * <https://github.com/HenrikBengtsson/R.utilsSpack package: * [r-utils/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/r-utils/package.py) Versions: 2.5.0 Build Dependencies: [r](#r), [r-oo](#r-oo) Link Dependencies: [r](#r) Run Dependencies: [r](#r), [r-oo](#r-oo) Description: Utility functions useful when programming and developing R packages. --- r-uuid[¶](#r-uuid) === Homepage: * <http://www.rforge.net/uuidSpack package: * [r-uuid/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/r-uuid/package.py) Versions: 0.1-2 Build Dependencies: [r](#r) Link Dependencies: [r](#r) Run Dependencies: [r](#r) Description: Tools for generating and handling of UUIDs (Universally Unique Identifiers). --- r-variantannotation[¶](#r-variantannotation) === Homepage: * <https://www.bioconductor.org/packages/VariantAnnotation/Spack package: * [r-variantannotation/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/r-variantannotation/package.py) Versions: 1.22.3 Build Dependencies: [r-iranges](#r-iranges), [r-dbi](#r-dbi), [r-s4vectors](#r-s4vectors), [r-bsgenome](#r-bsgenome), [r-xvector](#r-xvector), [r-summarizedexperiment](#r-summarizedexperiment), [r-genomicfeatures](#r-genomicfeatures), [r-biostrings](#r-biostrings), [r-genomicranges](#r-genomicranges), [r-biobase](#r-biobase), [r-rtracklayer](#r-rtracklayer), [r-zlibbioc](#r-zlibbioc), [r-annotationdbi](#r-annotationdbi), [r](#r), [r-rsamtools](#r-rsamtools), [r-genomeinfodb](#r-genomeinfodb), [r-biocgenerics](#r-biocgenerics) Link Dependencies: [r](#r) Run Dependencies: [r-iranges](#r-iranges), [r-dbi](#r-dbi), [r-s4vectors](#r-s4vectors), [r-bsgenome](#r-bsgenome), [r-xvector](#r-xvector), [r-summarizedexperiment](#r-summarizedexperiment), 
[r-genomicfeatures](#r-genomicfeatures), [r-biostrings](#r-biostrings), [r-genomicranges](#r-genomicranges), [r-biobase](#r-biobase), [r-rtracklayer](#r-rtracklayer), [r-zlibbioc](#r-zlibbioc), [r-annotationdbi](#r-annotationdbi), [r](#r), [r-rsamtools](#r-rsamtools), [r-genomeinfodb](#r-genomeinfodb), [r-biocgenerics](#r-biocgenerics) Description: Annotate variants, compute amino acid coding changes, predict coding outcomes. --- r-varselrf[¶](#r-varselrf) === Homepage: * <http://ligarto.org/rdiaz/Software/Software.htmlSpack package: * [r-varselrf/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/r-varselrf/package.py) Versions: 0.7-8 Build Dependencies: [r](#r), [r-randomforest](#r-randomforest) Link Dependencies: [r](#r) Run Dependencies: [r](#r), [r-randomforest](#r-randomforest) Description: Variable selection from random forests using both backwards variable elimination (for the selection of small sets of non-redundant variables) and selection based on the importance spectrum (somewhat similar to scree plots; for the selection of large, potentially highly-correlated variables) . Main applications in high-dimensional data (e.g., microarray data, and other genomics and proteomics applications). --- r-vcd[¶](#r-vcd) === Homepage: * <https://cran.r-project.org/package=vcdSpack package: * [r-vcd/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/r-vcd/package.py) Versions: 1.4-1 Build Dependencies: [r](#r), [r-colorspace](#r-colorspace), [r-lmtest](#r-lmtest), [r-mass](#r-mass) Link Dependencies: [r](#r) Run Dependencies: [r](#r), [r-colorspace](#r-colorspace), [r-lmtest](#r-lmtest), [r-mass](#r-mass) Description: Visualization techniques, data sets, summary and inference procedures aimed particularly at categorical data. Special emphasis is given to highly extensible grid graphics. 
The package was originally inspired by the book "Visualizing Categorical Data" by <NAME> and is now the main support package for a new book, "Discrete Data Analysis with R" by <NAME> and <NAME> (2015). --- r-vegan[¶](#r-vegan) === Homepage: * <https://github.com/vegandevs/veganSpack package: * [r-vegan/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/r-vegan/package.py) Versions: 2.4-3 Build Dependencies: [r](#r), [r-permute](#r-permute) Link Dependencies: [r](#r) Run Dependencies: [r](#r), [r-permute](#r-permute) Description: Ordination methods, diversity analysis and other functions for community and vegetation ecologists. --- r-vgam[¶](#r-vgam) === Homepage: * <https://cran.r-project.org/web/packages/VGAM/index.htmlSpack package: * [r-vgam/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/r-vgam/package.py) Versions: 1.0-4, 1.0-3, 1.0-2, 1.0-1, 1.0-0 Build Dependencies: [r](#r), [r-mass](#r-mass), [r-mgcv](#r-mgcv) Link Dependencies: [r](#r) Run Dependencies: [r](#r), [r-mass](#r-mass), [r-mgcv](#r-mgcv) Description: An implementation of about 6 major classes of statistical regression models.
--- r-vipor[¶](#r-vipor) === Homepage: * <https://cran.r-project.org/package=viporSpack package: * [r-vipor/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/r-vipor/package.py) Versions: 0.4.5, 0.4.4 Build Dependencies: [r](#r) Link Dependencies: [r](#r) Run Dependencies: [r](#r) Description: Plot Categorical Data Using Quasirandom Noise and Density Estimates --- r-viridis[¶](#r-viridis) === Homepage: * <https://github.com/sjmgarnier/viridisSpack package: * [r-viridis/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/r-viridis/package.py) Versions: 0.4.0 Build Dependencies: [r](#r), [r-ggplot2](#r-ggplot2), [r-gridextra](#r-gridextra), [r-viridislite](#r-viridislite) Link Dependencies: [r](#r) Run Dependencies: [r](#r), [r-ggplot2](#r-ggplot2), [r-gridextra](#r-gridextra), [r-viridislite](#r-viridislite) Description: viridis: Default Color Maps from 'matplotlib' --- r-viridislite[¶](#r-viridislite) === Homepage: * <https://github.com/sjmgarnier/viridisLiteSpack package: * [r-viridislite/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/r-viridislite/package.py) Versions: 0.2.0 Build Dependencies: [r](#r) Link Dependencies: [r](#r) Run Dependencies: [r](#r) Description: viridisLite: Default Color Maps from 'matplotlib' (Lite Version) --- r-visnetwork[¶](#r-visnetwork) === Homepage: * <https://github.com/datastorm-open/visNetworkSpack package: * [r-visnetwork/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/r-visnetwork/package.py) Versions: 1.0.1 Build Dependencies: [r](#r), [r-jsonlite](#r-jsonlite), [r-htmltools](#r-htmltools), [r-htmlwidgets](#r-htmlwidgets), [r-magrittr](#r-magrittr) Link Dependencies: [r](#r) Run Dependencies: [r](#r), [r-jsonlite](#r-jsonlite), [r-htmltools](#r-htmltools), [r-htmlwidgets](#r-htmlwidgets), [r-magrittr](#r-magrittr) Description: Provides an R interface to the 'vis.js' 
JavaScript charting library. It allows an interactive visualization of networks. --- r-vsn[¶](#r-vsn) === Homepage: * <https://www.bioconductor.org/packages/vsn/Spack package: * [r-vsn/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/r-vsn/package.py) Versions: 3.44.0 Build Dependencies: [r](#r), [r-hexbin](#r-hexbin), [r-ggplot2](#r-ggplot2), [r-limma](#r-limma), [r-biobase](#r-biobase), [r-lattice](#r-lattice), [r-affy](#r-affy) Link Dependencies: [r](#r) Run Dependencies: [r](#r), [r-hexbin](#r-hexbin), [r-ggplot2](#r-ggplot2), [r-limma](#r-limma), [r-biobase](#r-biobase), [r-lattice](#r-lattice), [r-affy](#r-affy) Description: The package implements a method for normalising microarray intensities, and works for single- and multiple-color arrays. It can also be used for data from other technologies, as long as they have a similar format. The method uses a robust variant of the maximum-likelihood estimator for an additive-multiplicative error model and affine calibration. The model incorporates a data calibration step (a.k.a. normalization), a model for the dependence of the variance on the mean intensity and a variance stabilizing data transformation. Differences between transformed intensities are analogous to "normalized log-ratios". However, in contrast to the latter, their variance is independent of the mean, and they are usually more sensitive and specific in detecting differential transcription.
--- r-whisker[¶](#r-whisker) === Homepage: * <http://github.com/edwindj/whiskerSpack package: * [r-whisker/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/r-whisker/package.py) Versions: 0.3-2 Build Dependencies: [r](#r) Link Dependencies: [r](#r) Run Dependencies: [r](#r) Description: logicless templating, reuse templates in many programming languages including R --- r-withr[¶](#r-withr) === Homepage: * <http://github.com/jimhester/withrSpack package: * [r-withr/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/r-withr/package.py) Versions: 1.0.2, 1.0.1 Build Dependencies: [r](#r) Link Dependencies: [r](#r) Run Dependencies: [r](#r) Description: A set of functions to run code 'with' safely and temporarily modified global state. Many of these functions were originally a part of the 'devtools' package, this provides a simple package with limited dependencies to provide access to these functions. --- r-xde[¶](#r-xde) === Homepage: * <https://www.bioconductor.org/packages/XDE/Spack package: * [r-xde/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/r-xde/package.py) Versions: 2.22.0 Build Dependencies: [r](#r), [r-mvtnorm](#r-mvtnorm), [r-mergemaid](#r-mergemaid), [r-gtools](#r-gtools), [r-biobase](#r-biobase), [r-genefilter](#r-genefilter), [r-biocgenerics](#r-biocgenerics) Link Dependencies: [r](#r) Run Dependencies: [r](#r), [r-mvtnorm](#r-mvtnorm), [r-mergemaid](#r-mergemaid), [r-gtools](#r-gtools), [r-biobase](#r-biobase), [r-genefilter](#r-genefilter), [r-biocgenerics](#r-biocgenerics) Description: Multi-level model for cross-study detection of differential gene expression. 
--- r-xgboost[¶](#r-xgboost) === Homepage: * <https://github.com/dmlc/xgboostSpack package: * [r-xgboost/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/r-xgboost/package.py) Versions: 0.6-4, 0.4-4 Build Dependencies: [r](#r), [r-stringi](#r-stringi), [r-data-table](#r-data-table), [r-magrittr](#r-magrittr), [r-matrix](#r-matrix), [r-stringr](#r-stringr) Link Dependencies: [r](#r) Run Dependencies: [r](#r), [r-stringi](#r-stringi), [r-data-table](#r-data-table), [r-magrittr](#r-magrittr), [r-matrix](#r-matrix), [r-stringr](#r-stringr) Description: Extreme Gradient Boosting, which is an efficient implementation of gradient boosting framework. This package is its R interface. The package includes efficient linear model solver and tree learning algorithms. The package can automatically do parallel computation on a single machine which could be more than 10 times faster than existing gradient boosting packages. It supports various objective functions, including regression, classification and ranking. The package is made to be extensible, so that users are also allowed to define their own objectives easily. --- r-xlconnect[¶](#r-xlconnect) === Homepage: * <http://miraisolutions.wordpress.com/Spack package: * [r-xlconnect/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/r-xlconnect/package.py) Versions: 0.2-12, 0.2-11 Build Dependencies: [r](#r), [r-rjava](#r-rjava), [r-xlconnectjars](#r-xlconnectjars) Link Dependencies: [r](#r) Run Dependencies: [r](#r), [r-rjava](#r-rjava), [r-xlconnectjars](#r-xlconnectjars) Description: Provides comprehensive functionality to read, write and format Excel data. 
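The r-xgboost entry above describes gradient boosting. The core idea — each new weak learner is fit to the residuals of the current ensemble, then added with a small learning rate — can be sketched in a few lines of Python. This is a toy illustration with depth-1 regression stumps, not XGBoost's actual regularized tree algorithm:

```python
def fit_stump(xs, ys):
    """Find the best single-split regression stump by squared error."""
    best = None
    for t in sorted(set(xs)):
        left = [y for x, y in zip(xs, ys) if x <= t]
        right = [y for x, y in zip(xs, ys) if x > t]
        if not left or not right:
            continue
        lm, rm = sum(left) / len(left), sum(right) / len(right)
        err = sum((y - lm) ** 2 for y in left) + sum((y - rm) ** 2 for y in right)
        if best is None or err < best[0]:
            best = (err, t, lm, rm)
    _, t, lm, rm = best
    return lambda x, t=t, lm=lm, rm=rm: lm if x <= t else rm

def gradient_boost(xs, ys, rounds=100, lr=0.1):
    """Each round fits a stump to the residuals and adds it, scaled by lr."""
    pred = [0.0] * len(xs)
    stumps = []
    for _ in range(rounds):
        resid = [y - p for y, p in zip(ys, pred)]
        s = fit_stump(xs, resid)
        stumps.append(s)
        pred = [p + lr * s(x) for p, x in zip(pred, xs)]
    return lambda x: sum(lr * s(x) for s in stumps)
```

With squared-error loss the residual is exactly the negative gradient, which is why fitting residuals generalizes to arbitrary differentiable losses in the full framework.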
--- r-xlconnectjars[¶](#r-xlconnectjars) === Homepage: * <http://miraisolutions.wordpress.com/Spack package: * [r-xlconnectjars/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/r-xlconnectjars/package.py) Versions: 0.2-12, 0.2-9 Build Dependencies: [r](#r), [r-rjava](#r-rjava) Link Dependencies: [r](#r) Run Dependencies: [r](#r), [r-rjava](#r-rjava) Description: Provides external JAR dependencies for the XLConnect package. --- r-xlsx[¶](#r-xlsx) === Homepage: * <http://code.google.com/p/rexcel/Spack package: * [r-xlsx/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/r-xlsx/package.py) Versions: 0.5.7 Build Dependencies: [r](#r), [r-rjava](#r-rjava), [r-xlsxjars](#r-xlsxjars) Link Dependencies: [r](#r) Run Dependencies: [r](#r), [r-rjava](#r-rjava), [r-xlsxjars](#r-xlsxjars) Description: Provide R functions to read/write/format Excel 2007 and Excel 97/2000/XP/2003 file formats. --- r-xlsxjars[¶](#r-xlsxjars) === Homepage: * <https://cran.rstudio.com/web/packages/xlsxjars/index.htmlSpack package: * [r-xlsxjars/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/r-xlsxjars/package.py) Versions: 0.6.1 Build Dependencies: [r](#r), [r-rjava](#r-rjava) Link Dependencies: [r](#r) Run Dependencies: [r](#r), [r-rjava](#r-rjava) Description: The xlsxjars package collects all the external jars required for the xlsx package. This release corresponds to POI 3.10.1. --- r-xmapbridge[¶](#r-xmapbridge) === Homepage: * <https://www.bioconductor.org/packages/xmapbridge/Spack package: * [r-xmapbridge/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/r-xmapbridge/package.py) Versions: 1.34.0 Build Dependencies: [r](#r) Link Dependencies: [r](#r) Run Dependencies: [r](#r) Description: xmapBridge can plot graphs in the X:Map genome browser. This package exports plotting files in a suitable format.
--- r-xml[¶](#r-xml) === Homepage: * <https://cran.r-project.org/web/packages/XML/index.htmlSpack package: * [r-xml/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/r-xml/package.py) Versions: 3.98-1.9, 3.98-1.5, 3.98-1.4 Build Dependencies: [r](#r), [libxml2](#libxml2) Link Dependencies: [r](#r), [libxml2](#libxml2) Run Dependencies: [r](#r) Description: Many approaches for both reading and creating XML (and HTML) documents (including DTDs), both local and accessible via HTTP or FTP. Also offers access to an 'XPath' "interpreter". --- r-xml2[¶](#r-xml2) === Homepage: * <https://cran.r-project.org/package=xml2Spack package: * [r-xml2/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/r-xml2/package.py) Versions: 1.1.1 Build Dependencies: [r](#r), [r-bh](#r-bh), [libxml2](#libxml2), [r-rcpp](#r-rcpp) Link Dependencies: [r](#r), [libxml2](#libxml2) Run Dependencies: [r](#r), [r-bh](#r-bh), [r-rcpp](#r-rcpp) Description: Work with XML files using a simple, consistent interface. Built on top of the 'libxml2' C library. --- r-xtable[¶](#r-xtable) === Homepage: * <http://xtable.r-forge.r-project.org/Spack package: * [r-xtable/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/r-xtable/package.py) Versions: 1.8-2 Build Dependencies: [r](#r) Link Dependencies: [r](#r) Run Dependencies: [r](#r) Description: Coerce data to LaTeX and HTML tables. 
--- r-xts[¶](#r-xts) === Homepage: * <http://r-forge.r-project.org/projects/xts/Spack package: * [r-xts/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/r-xts/package.py) Versions: 0.9-7 Build Dependencies: [r](#r), [r-zoo](#r-zoo) Link Dependencies: [r](#r) Run Dependencies: [r](#r), [r-zoo](#r-zoo) Description: Provide for uniform handling of R's different time-based data classes by extending zoo, maximizing native format information preservation and allowing for user level customization and extension, while simplifying cross-class interoperability. --- r-xvector[¶](#r-xvector) === Homepage: * <https://bioconductor.org/packages/XVector/Spack package: * [r-xvector/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/r-xvector/package.py) Versions: 0.20.0, 0.16.0 Build Dependencies: [r](#r), [r-s4vectors](#r-s4vectors), [r-biocgenerics](#r-biocgenerics), [r-iranges](#r-iranges), [r-zlibbioc](#r-zlibbioc) Link Dependencies: [r](#r) Run Dependencies: [r](#r), [r-s4vectors](#r-s4vectors), [r-biocgenerics](#r-biocgenerics), [r-iranges](#r-iranges), [r-zlibbioc](#r-zlibbioc) Description: Memory efficient S4 classes for storing sequences "externally" (behind an R external pointer, or on disk). --- r-yaml[¶](#r-yaml) === Homepage: * <https://cran.r-project.org/web/packages/yaml/index.htmlSpack package: * [r-yaml/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/r-yaml/package.py) Versions: 2.1.14, 2.1.13 Build Dependencies: [r](#r) Link Dependencies: [r](#r) Run Dependencies: [r](#r) Description: This package implements the libyaml YAML 1.1 parser and emitter (http://pyyaml.org/wiki/LibYAML) for R. 
--- r-yapsa[¶](#r-yapsa) === Homepage: * <http://bioconductor.org/packages/YAPSA/Spack package: * [r-yapsa/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/r-yapsa/package.py) Versions: 1.2.0 Build Dependencies: [r-dendextend](#r-dendextend), [r-reshape2](#r-reshape2), [r-ggplot2](#r-ggplot2), [r-pmcmr](#r-pmcmr), [r-variantannotation](#r-variantannotation), [r-keggrest](#r-keggrest), [r-lsei](#r-lsei), [r-genomicranges](#r-genomicranges), [r-getoptlong](#r-getoptlong), [r-somaticsignatures](#r-somaticsignatures), [r-corrplot](#r-corrplot), [r-gtrellis](#r-gtrellis), [r](#r), [r-gridextra](#r-gridextra), [r-genomeinfodb](#r-genomeinfodb), [r-complexheatmap](#r-complexheatmap) Link Dependencies: [r](#r) Run Dependencies: [r-dendextend](#r-dendextend), [r-reshape2](#r-reshape2), [r-ggplot2](#r-ggplot2), [r-pmcmr](#r-pmcmr), [r-variantannotation](#r-variantannotation), [r-keggrest](#r-keggrest), [r-lsei](#r-lsei), [r-genomicranges](#r-genomicranges), [r-getoptlong](#r-getoptlong), [r-somaticsignatures](#r-somaticsignatures), [r-corrplot](#r-corrplot), [r-gtrellis](#r-gtrellis), [r](#r), [r-gridextra](#r-gridextra), [r-genomeinfodb](#r-genomeinfodb), [r-complexheatmap](#r-complexheatmap) Description: This package provides functions and routines useful in the analysis of somatic signatures (cf. <NAME> al., Nature 2013). In particular, functions to perform a signature analysis with known signatures (LCD = linear combination decomposition) and a signature analysis on stratified mutational catalogue (SMC = stratify mutational catalogue) are provided. 
--- r-yaqcaffy[¶](#r-yaqcaffy) === Homepage: * <http://bioconductor.org/packages/yaqcaffy/Spack package: * [r-yaqcaffy/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/r-yaqcaffy/package.py) Versions: 1.36.0 Build Dependencies: [r](#r), [r-simpleaffy](#r-simpleaffy) Link Dependencies: [r](#r) Run Dependencies: [r](#r), [r-simpleaffy](#r-simpleaffy) Description: Quality control of Affymetrix GeneChip expression data and reproducibility analysis of human whole genome chips with the MAQC reference datasets. --- r-yarn[¶](#r-yarn) === Homepage: * <https://bioconductor.org/packages/yarn/Spack package: * [r-yarn/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/r-yarn/package.py) Versions: 1.2.0 Build Dependencies: [r-readr](#r-readr), [r-quantro](#r-quantro), [r-limma](#r-limma), [r-matrixstats](#r-matrixstats), [r-rcolorbrewer](#r-rcolorbrewer), [r-biomart](#r-biomart), [r-preprocesscore](#r-preprocesscore), [r](#r), [r-downloader](#r-downloader), [r-edger](#r-edger), [r-gplots](#r-gplots), [r-biobase](#r-biobase) Link Dependencies: [r](#r) Run Dependencies: [r-readr](#r-readr), [r-quantro](#r-quantro), [r-limma](#r-limma), [r-matrixstats](#r-matrixstats), [r-rcolorbrewer](#r-rcolorbrewer), [r-biomart](#r-biomart), [r-preprocesscore](#r-preprocesscore), [r](#r), [r-downloader](#r-downloader), [r-edger](#r-edger), [r-gplots](#r-gplots), [r-biobase](#r-biobase) Description: Expedite large RNA-Seq analyses using a combination of previously developed tools. YARN is meant to make it easier for the user in performing basic mis-annotation quality control, filtering, and condition-aware normalization. YARN leverages many Bioconductor tools and statistical techniques to account for the large heterogeneity and sparsity found in very large RNA-seq experiments. 
--- r-zlibbioc[¶](#r-zlibbioc) === Homepage: * <http://bioconductor.org/packages/release/bioc/html/zlibbioc.htmlSpack package: * [r-zlibbioc/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/r-zlibbioc/package.py) Versions: 1.26.0, 1.22.0 Build Dependencies: [r](#r) Link Dependencies: [r](#r) Run Dependencies: [r](#r) Description: This package uses the source code of zlib-1.2.5 to create libraries for systems that do not have these available via other means (most Linux and Mac users should have system-level access to zlib, and no direct need for this package). See the vignette for instructions on use. --- r-zoo[¶](#r-zoo) === Homepage: * <http://zoo.r-forge.r-project.org/Spack package: * [r-zoo/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/r-zoo/package.py) Versions: 1.7-14, 1.7-13 Build Dependencies: [r](#r), [r-lattice](#r-lattice) Link Dependencies: [r](#r) Run Dependencies: [r](#r), [r-lattice](#r-lattice) Description: An S3 class with methods for totally ordered indexed observations. It is particularly aimed at irregular time series of numeric vectors/matrices and factors. zoo's key design goals are independence of a particular index/date/time class and consistency with ts and base R by providing methods to extend standard generics. --- racon[¶](#racon) === Homepage: * <https://github.com/isovic/raconSpack package: * [racon/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/racon/package.py) Versions: 1.3.0, 1.2.1 Build Dependencies: [cmake](#cmake), [python](#python) Description: Ultrafast consensus module for raw de novo genome assembly of long uncorrected reads. 
--- raft[¶](#raft) === Homepage: * <https://bitbucket.org/gill_martinez/raft_apsSpack package: * [raft/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/raft/package.py) Versions: develop, 1.2.3 Build Dependencies: [cuda](#cuda), [hdf5](#hdf5), mpi, [fftw](#fftw), [cmake](#cmake) Link Dependencies: [cuda](#cuda), mpi, [fftw](#fftw), [hdf5](#hdf5) Description: RAFT: Reconstruct Algorithms for Tomography. Toolbox under development at Brazilian Synchrotron Light Source. --- ragel[¶](#ragel) === Homepage: * <http://www.colm.net/open-source/ragelSpack package: * [ragel/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/ragel/package.py) Versions: 6.10 Build Dependencies: [colm](#colm) Description: Ragel State Machine Compiler Ragel compiles executable finite state machines from regular languages. Ragel targets C, C++ and ASM. Ragel state machines can not only recognize byte sequences as regular expression machines do, but can also execute code at arbitrary points in the recognition of a regular language. Code embedding is done using inline operators that do not disrupt the regular language syntax. --- raja[¶](#raja) === Homepage: * <http://software.llnl.gov/RAJA/Spack package: * [raja/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/raja/package.py) Versions: develop, 0.5.3, 0.5.2, 0.5.1, 0.5.0, 0.4.1, 0.4.0, master Build Dependencies: [cmake](#cmake), [cuda](#cuda) Link Dependencies: [cuda](#cuda) Description: RAJA Parallel Framework. 
--- randfold[¶](#randfold) === Homepage: * <http://bioinformatics.psb.ugent.be/supplementary_data/erbon/nov2003/Spack package: * [randfold/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/randfold/package.py) Versions: 2.0.1 Build Dependencies: [squid](#squid) Link Dependencies: [squid](#squid) Description: Minimum free energy of folding randomization test software --- random123[¶](#random123) === Homepage: * <http://www.deshawresearch.com/resources_random123.htmlSpack package: * [random123/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/random123/package.py) Versions: 1.09 Description: Random123 is a library of 'counter-based' random number generators (CBRNGs), in which the Nth random number can be obtained by applying a stateless mixing function to N instead of the conventional approach of using N iterations of a stateful transformation. --- randrproto[¶](#randrproto) === Homepage: * <http://cgit.freedesktop.org/xorg/proto/randrprotoSpack package: * [randrproto/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/randrproto/package.py) Versions: 1.5.0 Build Dependencies: [util-macros](#util-macros), pkgconfig Description: X Resize and Rotate Extension (RandR). This extension defines a protocol for clients to dynamically change X screens, so as to resize, rotate and reflect the root window of a screen. 
--- range-v3[¶](#range-v3) === Homepage: * <https://github.com/ericniebler/range-v3Spack package: * [range-v3/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/range-v3/package.py) Versions: develop, 0.3.6, 0.3.5, 0.3.0, 0.2.6, 0.2.5, 0.2.4, 0.2.3, 0.2.2, 0.2.1, 0.2.0 Build Dependencies: [cmake](#cmake) Description: Range library for C++11/14/17 --- rankstr[¶](#rankstr) === Homepage: * <https://github.com/ECP-VeloC/rankstrSpack package: * [rankstr/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/rankstr/package.py) Versions: 0.0.2, master Build Dependencies: [cmake](#cmake), mpi Link Dependencies: mpi Description: Assign one-to-one mapping of MPI ranks to strings --- rapidjson[¶](#rapidjson) === Homepage: * <http://rapidjson.orgSpack package: * [rapidjson/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/rapidjson/package.py) Versions: 1.1.0, 1.0.2, 1.0.1, 1.0.0 Build Dependencies: [cmake](#cmake) Description: A fast JSON parser/generator for C++ with both SAX/DOM style API --- ravel[¶](#ravel) === Homepage: * <https://github.com/llnl/ravelSpack package: * [ravel/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/ravel/package.py) Versions: 1.0.0 Build Dependencies: [cmake](#cmake), [otf](#otf), [muster](#muster), [otf2](#otf2), [qt](#qt) Link Dependencies: [otf](#otf), [muster](#muster), [otf2](#otf2), [qt](#qt) Description: Ravel is a parallel communication trace visualization tool that orders events according to logical time. 
--- raxml[¶](#raxml) === Homepage: * <https://sco.h-its.org/exelixis/web/software/raxmlSpack package: * [raxml/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/raxml/package.py) Versions: 8.2.11 Build Dependencies: mpi Link Dependencies: mpi Description: RAxML (Randomized Axelerated Maximum Likelihood) is a program for sequential and parallel Maximum Likelihood based inference of large phylogenetic trees. --- ray[¶](#ray) === Homepage: * <http://denovoassembler.sourceforge.net/Spack package: * [ray/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/ray/package.py) Versions: 2.3.1 Build Dependencies: [cmake](#cmake), mpi Link Dependencies: mpi Description: Parallel genome assemblies for parallel DNA sequencing --- rclone[¶](#rclone) === Homepage: * <http://rclone.orgSpack package: * [rclone/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/rclone/package.py) Versions: 1.43 Build Dependencies: [go](#go) Description: Rclone is a command line program to sync files and directories to and from various cloud storage providers --- rdma-core[¶](#rdma-core) === Homepage: * <https://github.com/linux-rdma/rdma-coreSpack package: * [rdma-core/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/rdma-core/package.py) Versions: 20, 17.1, 13 Build Dependencies: [cmake](#cmake), pkgconfig, [libnl](#libnl) Link Dependencies: [libnl](#libnl) Description: RDMA core userspace libraries and daemons --- rdp-classifier[¶](#rdp-classifier) === Homepage: * <http://rdp.cme.msu.edu/Spack package: * [rdp-classifier/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/rdp-classifier/package.py) Versions: 2.12 Build Dependencies: java Run Dependencies: java Description: The RDP Classifier is a naive Bayesian classifier that can rapidly and accurately provide taxonomic assignments from domain to genus, with
confidence estimates for each assignment. --- re2c[¶](#re2c) === Homepage: * <http://re2c.org/index.htmlSpack package: * [re2c/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/re2c/package.py) Versions: 1.0.3 Description: re2c: a free and open-source lexer generator for C and C++ --- readfq[¶](#readfq) === Homepage: * <https://github.com/lh3/readfqSpack package: * [readfq/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/readfq/package.py) Versions: 2013.04.10 Description: Readfq is a collection of routines for parsing the FASTA/FASTQ format. It seamlessly parses both FASTA and multi-line FASTQ with a simple interface. --- readline[¶](#readline) === Homepage: * <http://cnswww.cns.cwru.edu/php/chet/readline/rltop.htmlSpack package: * [readline/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/readline/package.py) Versions: 7.0, 6.3 Build Dependencies: [ncurses](#ncurses) Link Dependencies: [ncurses](#ncurses) Description: The GNU Readline library provides a set of functions for use by applications that allow users to edit command lines as they are typed in. Both Emacs and vi editing modes are available. The Readline library includes additional functions to maintain a list of previously-entered command lines, to recall and perhaps reedit those lines, and perform csh-like history expansion on previous commands. --- recordproto[¶](#recordproto) === Homepage: * <http://cgit.freedesktop.org/xorg/proto/recordprotoSpack package: * [recordproto/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/recordproto/package.py) Versions: 1.14.2 Build Dependencies: [util-macros](#util-macros), pkgconfig Description: X Record Extension. This extension defines a protocol for the recording and playback of user actions in the X Window System. 

---

redset[¶](#redset)
===
* Homepage: <https://github.com/ECP-VeloC/redset>
* Spack package: [redset/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/redset/package.py)
* Versions: 0.0.3, master
* Build Dependencies: [cmake](#cmake), mpi, [rankstr](#rankstr), [kvtree](#kvtree)
* Link Dependencies: mpi, [rankstr](#rankstr), [kvtree](#kvtree)

Description: Create MPI communicators for disparate redundancy sets.

---

redundans[¶](#redundans)
===
* Homepage: <https://github.com/Gabaldonlab/redundans>
* Spack package: [redundans/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/redundans/package.py)
* Versions: 0.13c
* Build Dependencies: [bwa](#bwa), [gapcloser](#gapcloser), [py-fastaindex](#py-fastaindex), [parallel](#parallel), [perl](#perl), [py-pyscaf](#py-pyscaf), [py-numpy](#py-numpy), [sspace-standard](#sspace-standard), [snap-berkeley](#snap-berkeley), [python](#python), [last](#last)
* Link Dependencies: [bwa](#bwa), [gapcloser](#gapcloser), [sspace-standard](#sspace-standard), [parallel](#parallel), [last](#last)
* Run Dependencies: [snap-berkeley](#snap-berkeley), [py-fastaindex](#py-fastaindex), [py-pyscaf](#py-pyscaf), [perl](#perl), [py-numpy](#py-numpy), [python](#python)

Description: The Redundans pipeline assists the assembly of heterozygous genomes.

---

regcm[¶](#regcm)
===
* Homepage: <https://gforge.ictp.it/gf/project/regcm/>
* Spack package: [regcm/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/regcm/package.py)
* Versions: 4.7.0
* Build Dependencies: [netcdf-fortran](#netcdf-fortran), mpi, [netcdf](#netcdf), [hdf5](#hdf5)
* Link Dependencies: [netcdf-fortran](#netcdf-fortran), mpi, [netcdf](#netcdf), [hdf5](#hdf5)

Description: RegCM ICTP Regional Climate Model.

---

relion[¶](#relion)
===
* Homepage: <http://www2.mrc-lmb.cam.ac.uk/relion>
* Spack package: [relion/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/relion/package.py)
* Versions: develop, 3.0_beta, 2.1, 2.0.3
* Build Dependencies: [fltk](#fltk), [cuda](#cuda), mpi, [libtiff](#libtiff), [cmake](#cmake), [fftw](#fftw)
* Link Dependencies: [fltk](#fltk), [cuda](#cuda), [libtiff](#libtiff), mpi, [fftw](#fftw)

Description: RELION (for REgularised LIkelihood OptimisatioN, pronounce rely-on) is a stand-alone computer program that employs an empirical Bayesian approach to refinement of (multiple) 3D reconstructions or 2D class averages in electron cryo-microscopy (cryo-EM).

---

rempi[¶](#rempi)
===
* Homepage: <https://github.com/PRUNERS/ReMPI>
* Spack package: [rempi/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/rempi/package.py)
* Versions: 1.1.0, 1.0.0
* Build Dependencies: [zlib](#zlib), [autoconf](#autoconf), [automake](#automake), mpi, [libtool](#libtool)
* Link Dependencies: [zlib](#zlib), mpi

Description: ReMPI is a record-and-replay tool for MPI applications.

---

rename[¶](#rename)
===
* Homepage: <http://plasmasturm.org/code/rename>
* Spack package: [rename/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/rename/package.py)
* Versions: 1.600
* Build Dependencies: [perl](#perl)
* Run Dependencies: [perl](#perl)

Description: Perl-powered file rename script with many helpful built-ins.

---

rendercheck[¶](#rendercheck)
===
* Homepage: <http://cgit.freedesktop.org/xorg/app/rendercheck>
* Spack package: [rendercheck/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/rendercheck/package.py)
* Versions: 1.5
* Build Dependencies: [libxrender](#libxrender), [xproto](#xproto), pkgconfig, [libx11](#libx11), [util-macros](#util-macros)
* Link Dependencies: [libxrender](#libxrender), [libx11](#libx11)

Description: rendercheck is a program to test a Render extension implementation against separate calculations of expected output.

---

renderproto[¶](#renderproto)
===
* Homepage: <http://cgit.freedesktop.org/xorg/proto/renderproto>
* Spack package: [renderproto/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/renderproto/package.py)
* Versions: 0.11.1
* Build Dependencies: [util-macros](#util-macros), pkgconfig

Description: X Rendering Extension. This extension defines the protocol for a digital image composition as the foundation of a new rendering model within the X Window System.

---

repeatmasker[¶](#repeatmasker)
===
* Homepage: <http://www.repeatmasker.org>
* Spack package: [repeatmasker/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/repeatmasker/package.py)
* Versions: 4.0.7
* Build Dependencies: [perl](#perl), [hmmer](#hmmer), [perl-text-soundex](#perl-text-soundex), [trf](#trf), [ncbi-rmblastn](#ncbi-rmblastn)
* Link Dependencies: [hmmer](#hmmer), [ncbi-rmblastn](#ncbi-rmblastn), [trf](#trf)
* Run Dependencies: [perl](#perl), [perl-text-soundex](#perl-text-soundex)

Description: RepeatMasker is a program that screens DNA sequences for interspersed repeats and low complexity DNA sequences.

---

resourceproto[¶](#resourceproto)
===
* Homepage: <http://cgit.freedesktop.org/xorg/proto/resourceproto>
* Spack package: [resourceproto/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/resourceproto/package.py)
* Versions: 1.2.0
* Build Dependencies: [util-macros](#util-macros), pkgconfig

Description: X Resource Extension. This extension defines a protocol that allows a client to query the X server about its usage of various resources.

---

revbayes[¶](#revbayes)
===
* Homepage: <https://revbayes.github.io>
* Spack package: [revbayes/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/revbayes/package.py)
* Versions: 1.0.4
* Build Dependencies: [cmake](#cmake), [boost](#boost), mpi
* Link Dependencies: [boost](#boost), mpi

Description: Bayesian phylogenetic inference using probabilistic graphical models and an interpreted language.

---

rgb[¶](#rgb)
===
* Homepage: <http://cgit.freedesktop.org/xorg/app/rgb>
* Spack package: [rgb/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/rgb/package.py)
* Versions: 1.0.6
* Build Dependencies: [xproto](#xproto), [xorg-server](#xorg-server)
* Link Dependencies: [xorg-server](#xorg-server)

Description: X color name database. This package includes both the list mapping X color names to RGB values (rgb.txt) and, if configured to use a database for color lookup, the rgb program to convert the text file into the binary database format. The "others" subdirectory contains some alternate color databases.

---

rhash[¶](#rhash)
===
* Homepage: <https://sourceforge.net/projects/rhash/>
* Spack package: [rhash/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/rhash/package.py)
* Versions: 1.3.5

Description: RHash is a console utility for computing and verifying hash sums of files. It supports CRC32, MD4, MD5, SHA1, SHA256, SHA512, SHA3, Tiger, TTH, Torrent BTIH, AICH, ED2K, GOST R 34.11-94, RIPEMD-160, HAS-160, EDON-R 256/512, WHIRLPOOL and SNEFRU hash sums.

---

rlwrap[¶](#rlwrap)
===
* Homepage: <https://github.com/hanslub42/rlwrap>
* Spack package: [rlwrap/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/rlwrap/package.py)
* Versions: 0.43
* Build Dependencies: [readline](#readline)
* Link Dependencies: [readline](#readline)

Description: rlwrap is a 'readline wrapper', a small utility that uses the GNU readline library to allow the editing of keyboard input for any command.

---

rmats[¶](#rmats)
===
* Homepage: <https://rnaseq-mats.sourceforge.net/index.html>
* Spack package: [rmats/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/rmats/package.py)
* Versions: 4.0.2
* Build Dependencies: [py-numpy](#py-numpy), [openblas](#openblas)
* Link Dependencies: [openblas](#openblas)
* Run Dependencies: [py-numpy](#py-numpy), [python](#python)

Description: MATS is a computational tool to detect differential alternative splicing events from RNA-Seq data.

---

rmlab[¶](#rmlab)
===
* Homepage: <https://github.com/ax3l/lines-are-beautiful>
* Spack package: [rmlab/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/rmlab/package.py)
* Versions: develop
* Build Dependencies: [cmake](#cmake), [pngwriter](#pngwriter)
* Link Dependencies: [pngwriter](#pngwriter)

Description: C++ File API for the reMarkable tablet.

---

rna-seqc[¶](#rna-seqc)
===
* Homepage: <http://archive.broadinstitute.org/cancer/cga/rna-seqc>
* Spack package: [rna-seqc/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/rna-seqc/package.py)
* Versions: 1.1.8, 1.1.7, 1.1.6, 1.1.5, 1.1.4
* Run Dependencies: [jdk](#jdk)

Description: RNA-SeQC is a java program which computes a series of quality control metrics for RNA-seq data.

---

rngstreams[¶](#rngstreams)
===
* Homepage: <http://statmath.wu.ac.at/software/RngStreams>
* Spack package: [rngstreams/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/rngstreams/package.py)
* Versions: 1.0.1

Description: Multiple independent streams of pseudo-random numbers.

---

rockstar[¶](#rockstar)
===
* Homepage: <https://bitbucket.org/gfcstanford/rockstar>
* Spack package: [rockstar/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/rockstar/package.py)
* Versions: develop, yt
* Build Dependencies: [hdf5](#hdf5)
* Link Dependencies: [hdf5](#hdf5)

Description: The Rockstar Halo Finder.

---

root[¶](#root)
===
* Homepage: <https://root.cern.ch>
* Spack package: [root/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/root/package.py)
* Versions: 6.09.02, 6.08.06, 6.06.08, 6.06.06, 6.06.04, 6.06.02, 5.34.36
* Build Dependencies: pkgconfig, [cfitsio](#cfitsio), [zlib](#zlib), [libxext](#libxext), [openssl](#openssl), [pcre](#pcre), [libpng](#libpng), [libice](#libice), [binutils](#binutils), jpeg, [libxml2](#libxml2), [libxpm](#libxpm), [xz](#xz), [cmake](#cmake), [fftw](#fftw), [libsm](#libsm), [gsl](#gsl), [sqlite](#sqlite), [libx11](#libx11), [libxft](#libxft), [graphviz](#graphviz), [python](#python), [freetype](#freetype)
* Link Dependencies: [zlib](#zlib), [graphviz](#graphviz), jpeg, [libxml2](#libxml2), [libxpm](#libxpm), [xz](#xz), [sqlite](#sqlite), [freetype](#freetype), [fftw](#fftw), [libice](#libice), [libxext](#libxext), [libsm](#libsm), [libx11](#libx11), [gsl](#gsl), [openssl](#openssl), [cfitsio](#cfitsio), [pcre](#pcre), [libxft](#libxft), [binutils](#binutils), [python](#python), [libpng](#libpng)

Description: ROOT is a data analysis framework.

---

rose[¶](#rose)
===
* Homepage: <http://rosecompiler.org/>
* Spack package: [rose/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/rose/package.py)
* Versions: 0.9.7, master
* Build Dependencies: [z3](#z3), java, [autoconf](#autoconf), [automake](#automake), [libtool](#libtool), [boost](#boost), [libgcrypt](#libgcrypt)
* Link Dependencies: java, [boost](#boost), [z3](#z3)
* Run Dependencies: [py-binwalk](#py-binwalk)

Description: A compiler infrastructure to build source-to-source program transformation and analysis tools. (Developed at Lawrence Livermore National Lab)

---

ross[¶](#ross)
===
* Homepage: <http://carothersc.github.io/ROSS/>
* Spack package: [ross/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/ross/package.py)
* Versions: develop, 7.0.0
* Build Dependencies: [cmake](#cmake), mpi
* Link Dependencies: mpi

Description: Rensselaer Optimistic Simulation System.

---

rr[¶](#rr)
===
* Homepage: <http://rr-project.org/>
* Spack package: [rr/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/rr/package.py)
* Versions: 4.5.0, 4.4.0, 4.3.0
* Build Dependencies: [gdb](#gdb), [zlib](#zlib), pkgconfig, [git](#git), [cmake](#cmake)
* Link Dependencies: [gdb](#gdb), [zlib](#zlib), [git](#git)
* Test Dependencies: [py-pexpect](#py-pexpect)

Description: Application execution recorder, player and debugger.

---

rsbench[¶](#rsbench)
===
* Homepage: <https://github.com/ANL-CESAR/RSBench>
* Spack package: [rsbench/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/rsbench/package.py)
* Versions: 2, 0

Description: A mini-app to represent the multipole resonance representation lookup cross section algorithm.

---

rsem[¶](#rsem)
===
* Homepage: <http://deweylab.github.io/RSEM/>
* Spack package: [rsem/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/rsem/package.py)
* Versions: 1.3.0
* Build Dependencies: [r](#r), [star](#star), [bowtie](#bowtie), [perl](#perl), [bowtie2](#bowtie2), [python](#python)
* Link Dependencies: [star](#star), [bowtie](#bowtie), [bowtie2](#bowtie2)
* Run Dependencies: [r](#r), [perl](#perl), [python](#python)

Description: RSEM is a software package for estimating gene and isoform expression levels from RNA-Seq data.

---

rstart[¶](#rstart)
===
* Homepage: <http://cgit.freedesktop.org/xorg/app/rstart>
* Spack package: [rstart/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/rstart/package.py)
* Versions: 1.0.5
* Build Dependencies: [xproto](#xproto), pkgconfig, [util-macros](#util-macros)

Description: This package includes both the client and server sides implementing the protocol described in the "A Flexible Remote Execution Protocol Based on rsh" paper found in the specs/ subdirectory. This software has been deprecated in favor of the X11 forwarding provided in common ssh implementations.

---

rsync[¶](#rsync)
===
* Homepage: <https://rsync.samba.org>
* Spack package: [rsync/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/rsync/package.py)
* Versions: 3.1.3, 3.1.2, 3.1.1

Description: An open source utility that provides fast incremental file transfer.

---

rtags[¶](#rtags)
===
* Homepage: <https://github.com/Andersbakken/rtags/>
* Spack package: [rtags/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/rtags/package.py)
* Versions: 2.17
* Build Dependencies: [zlib](#zlib), pkgconfig, [lua](#lua), [openssl](#openssl), [bash-completion](#bash-completion), [cmake](#cmake), [llvm](#llvm)
* Link Dependencies: [zlib](#zlib), [bash-completion](#bash-completion), [lua](#lua), [llvm](#llvm), [openssl](#openssl)

Description: RTags is a client/server application that indexes C/C++ code.

---

rtax[¶](#rtax)
===
* Homepage: <https://github.com/davidsoergel/rtax>
* Spack package: [rtax/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/rtax/package.py)
* Versions: 0.984
* Build Dependencies: [usearch](#usearch)
* Link Dependencies: [usearch](#usearch)

Description: Rapid and accurate taxonomic classification of short paired-end sequence reads from the 16S ribosomal RNA gene.

---

ruby[¶](#ruby)
===
* Homepage: <https://www.ruby-lang.org/>
* Spack package: [ruby/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/ruby/package.py)
* Versions: 2.2.0
* Build Dependencies: [zlib](#zlib), pkgconfig, [libx11](#libx11), [openssl](#openssl), [tk](#tk), [libffi](#libffi), [readline](#readline), [tcl](#tcl)
* Link Dependencies: [zlib](#zlib), [tcl](#tcl), [libx11](#libx11), [openssl](#openssl), [tk](#tk), [libffi](#libffi), [readline](#readline)

Description: A dynamic, open source programming language with a focus on simplicity and productivity.

---

ruby-gnuplot[¶](#ruby-gnuplot)
===
* Homepage: <https://rubygems.org/gems/gnuplot/versions/2.6.2>
* Spack package: [ruby-gnuplot/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/ruby-gnuplot/package.py)
* Versions: 2.6.2
* Build Dependencies: [gnuplot](#gnuplot), [ruby](#ruby)
* Link Dependencies: [gnuplot](#gnuplot), [ruby](#ruby)

Description: Utility library to aid in interacting with gnuplot from ruby.

---

ruby-narray[¶](#ruby-narray)
===
* Homepage: <https://rubygems.org/gems/narray>
* Spack package: [ruby-narray/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/ruby-narray/package.py)
* Versions: 0.9.0.9
* Build Dependencies: [ruby](#ruby)
* Link Dependencies: [ruby](#ruby)

Description: Numo::NArray is a numerical N-dimensional array class for fast processing and easy manipulation of multi-dimensional numerical data, similar to numpy.ndarray.

---

ruby-ronn[¶](#ruby-ronn)
===
* Homepage: <https://rubygems.org/gems/ronn>
* Spack package: [ruby-ronn/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/ruby-ronn/package.py)
* Versions: 0.7.3, 0.7.0
* Build Dependencies: [ruby](#ruby)
* Link Dependencies: [ruby](#ruby)

Description: Ronn builds manuals. It converts simple, human-readable text files to roff for terminal display, and also to HTML for the web.

---

ruby-rubyinline[¶](#ruby-rubyinline)
===
* Homepage: <https://rubygems.org/gems/RubyInline>
* Spack package: [ruby-rubyinline/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/ruby-rubyinline/package.py)
* Versions: 3.12.4
* Build Dependencies: [ruby](#ruby)
* Link Dependencies: [ruby](#ruby)

Description: Inline allows you to write foreign code within your ruby code.

---

ruby-svn2git[¶](#ruby-svn2git)
===
* Homepage: <https://github.com/nirvdrum/svn2git/>
* Spack package: [ruby-svn2git/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/ruby-svn2git/package.py)
* Versions: 2.4.0
* Build Dependencies: [subversion](#subversion), [ruby](#ruby), [git](#git)
* Link Dependencies: [subversion](#subversion), [ruby](#ruby), [git](#git)

Description: svn2git is a tiny utility for migrating projects from Subversion to Git while keeping the trunk, branches and tags where they should be. It uses git-svn to clone an svn repository and does some clean-up to make sure branches and tags are imported in a meaningful way, and that the code checked into master ends up being what's currently in your svn trunk rather than whichever svn branch your last commit was in.

---

ruby-terminal-table[¶](#ruby-terminal-table)
===
* Homepage: <https://rubygems.org/gems/terminal-table>
* Spack package: [ruby-terminal-table/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/ruby-terminal-table/package.py)
* Versions: 1.8.0
* Build Dependencies: [ruby](#ruby)
* Link Dependencies: [ruby](#ruby)

Description: Simple, feature-rich ASCII table generation library.

---

rust[¶](#rust)
===
* Homepage: <http://www.rust-lang.org>
* Spack package: [rust/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/rust/package.py)
* Versions: 1.8.0
* Build Dependencies: [git](#git), [openssl](#openssl), [curl](#curl), [cmake](#cmake), [llvm](#llvm), [python](#python)
* Link Dependencies: [git](#git), [openssl](#openssl), [curl](#curl), [cmake](#cmake), [llvm](#llvm), [python](#python)

Description: The Rust programming language toolchain.

---

rust-bindgen[¶](#rust-bindgen)
===
* Homepage: <http://www.rust-lang.org>
* Spack package: [rust-bindgen/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/rust-bindgen/package.py)
* Versions: 0.20.5
* Build Dependencies: [rust](#rust), [llvm](#llvm)
* Link Dependencies: [rust](#rust), [llvm](#llvm)

Description: The Rust programming language toolchain.

---

sabre[¶](#sabre)
===
* Homepage: <https://github.com/najoshi/sabre>
* Spack package: [sabre/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/sabre/package.py)
* Versions: 2013-09-27
* Build Dependencies: [zlib](#zlib)
* Link Dependencies: [zlib](#zlib)

Description: Sabre is a tool that will demultiplex barcoded reads into separate files. It will work on both single-end and paired-end data in fastq format. It simply compares the provided barcodes with each read and separates the read into its appropriate barcode file, after stripping the barcode from the read (and also stripping the quality values of the barcode bases). If a read does not have a recognized barcode, then it is put into the unknown file.

---

sailfish[¶](#sailfish)
===
* Homepage: <http://www.cs.cmu.edu/~ckingsf/software/sailfish>
* Spack package: [sailfish/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/sailfish/package.py)
* Versions: 0.10.1
* Build Dependencies: [cmake](#cmake), [boost](#boost), tbb
* Link Dependencies: [boost](#boost), tbb

Description: Sailfish is a tool for transcript quantification from RNA-seq data.

---

salmon[¶](#salmon)
===
* Homepage: <http://combine-lab.github.io/salmon/>
* Spack package: [salmon/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/salmon/package.py)
* Versions: 0.9.1, 0.8.2
* Build Dependencies: [cmake](#cmake), [boost](#boost), tbb
* Link Dependencies: [boost](#boost), tbb

Description: Salmon is a tool for quantifying the expression of transcripts using RNA-seq data.

---

sambamba[¶](#sambamba)
===
* Homepage: <http://lomereiter.github.io/sambamba/>
* Spack package: [sambamba/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/sambamba/package.py)
* Versions: 0.6.6
* Build Dependencies: [ldc](#ldc), [python](#python)
* Link Dependencies: [ldc](#ldc)

Description: Sambamba: process your BAM data faster (bioinformatics).

---

samblaster[¶](#samblaster)
===
* Homepage: <https://github.com/GregoryFaust/samblaster>
* Spack package: [samblaster/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/samblaster/package.py)
* Versions: 0.1.24, 0.1.23

Description: A tool to mark duplicates and extract discordant and split reads from sam files.

---

samrai[¶](#samrai)
===
* Homepage: <https://computation.llnl.gov/projects/samrai>
* Spack package: [samrai/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/samrai/package.py)
* Versions: 3.12.0, 3.11.5, 3.11.4, 3.11.2, 3.11.1, 3.10.0, 3.9.1, 3.8.0, 3.7.3, 3.7.2, 3.6.3-beta, 3.5.2-beta, 3.5.0-beta, 3.4.1-beta, 3.3.3-beta, 3.3.2-beta, 2.4.4
* Build Dependencies: [zlib](#zlib), mpi, [m4](#m4), [silo](#silo), [hdf5](#hdf5), [boost](#boost)
* Link Dependencies: [zlib](#zlib), mpi, [silo](#silo), [hdf5](#hdf5)

Description: SAMRAI (Structured Adaptive Mesh Refinement Application Infrastructure) is an object-oriented C++ software library that enables exploration of numerical, algorithmic, parallel computing, and software issues associated with applying structured adaptive mesh refinement (SAMR) technology in large-scale parallel application development.

---

samtools[¶](#samtools)
===
* Homepage: <www.htslib.org>
* Spack package: [samtools/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/samtools/package.py)
* Versions: 1.9, 1.8, 1.7, 1.6, 1.4, 1.3.1, 1.2
* Build Dependencies: [zlib](#zlib), [bzip2](#bzip2), [ncurses](#ncurses), [htslib](#htslib)
* Link Dependencies: [zlib](#zlib), [bzip2](#bzip2), [ncurses](#ncurses), [htslib](#htslib)

Description: SAM Tools provide various utilities for manipulating alignments in the SAM format, including sorting, merging, indexing and generating alignments in a per-position format.

---

sandbox[¶](#sandbox)
===
* Homepage: <https://www.gentoo.org/proj/en/portage/sandbox/>
* Spack package: [sandbox/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/sandbox/package.py)
* Versions: 2.12

Description: sandbox'd LD_PRELOAD hack by Gentoo Linux.

---

sas[¶](#sas)
===
* Homepage: <https://github.com/dpiparo/SAS>
* Spack package: [sas/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/sas/package.py)
* Versions: 0.2.0, 0.1.4, 0.1.3
* Build Dependencies: [cmake](#cmake), [llvm](#llvm), [python](#python)
* Link Dependencies: [llvm](#llvm), [python](#python)

Description: SAS (Static Analysis Suite) is a powerful tool for running static analysis on C++ code.

---

satsuma2[¶](#satsuma2)
===
* Homepage: <https://github.com/bioinfologics/satsuma2>
* Spack package: [satsuma2/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/satsuma2/package.py)
* Versions: 2016-11-22
* Build Dependencies: [cmake](#cmake)

Description: Satsuma2 is an optimised version of Satsuma, a tool to reliably align large and complex DNA sequences providing maximum sensitivity (to find all there is to find), specificity (to only find real homology) and speed (to accommodate the billions of base pairs in vertebrate genomes).

---

savanna[¶](#savanna)
===
* Homepage: <https://github.com/CODARcode/savanna>
* Spack package: [savanna/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/savanna/package.py)
* Versions: develop, 0.5
* Build Dependencies: [tau](#tau), mpi, [adios](#adios), [stc](#stc), [mpix-launch-swift](#mpix-launch-swift)
* Link Dependencies: [tau](#tau), mpi, [adios](#adios), [stc](#stc), [mpix-launch-swift](#mpix-launch-swift)

Description: CODARcode Savanna runtime framework for high performance workflow management using Swift/T and ADIOS.

---

saws[¶](#saws)
===
* Homepage: <https://bitbucket.org/saws/saws/wiki/Home>
* Spack package: [saws/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/saws/package.py)
* Versions: develop, 0.1.0

Description: The Scientific Application Web server (SAWs) turns any C or C++ scientific or engineering application code into a webserver, allowing one to examine (and even modify) the state of the simulation with any browser from anywhere.

---

sbt[¶](#sbt)
===
* Homepage: <http://www.scala-sbt.org>
* Spack package: [sbt/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/sbt/package.py)
* Versions: 1.1.6, 1.1.5, 1.1.4, 0.13.17
* Build Dependencies: java
* Link Dependencies: java

Description: Scala Build Tool.

---

scala[¶](#scala)
===
* Homepage: <https://www.scala-lang.org/>
* Spack package: [scala/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/scala/package.py)
* Versions: 2.12.5, 2.12.1, 2.11.11, 2.10.6
* Build Dependencies: java
* Link Dependencies: java

Description: Scala is a general-purpose programming language providing support for functional programming and a strong static type system. Designed to be concise, many of Scala's design decisions were intended to address criticisms of Java.

---

scalasca[¶](#scalasca)
===
* Homepage: <http://www.scalasca.org>
* Spack package: [scalasca/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/scalasca/package.py)
* Versions: 2.4, 2.3.1, 2.2.2, 2.1
* Build Dependencies: [cubew](#cubew), [otf2](#otf2), mpi, [cube](#cube)
* Link Dependencies: [cubew](#cubew), [otf2](#otf2), mpi, [cube](#cube)

Description: Scalasca is a software tool that supports the performance optimization of parallel programs by measuring and analyzing their runtime behavior. The analysis identifies potential performance bottlenecks - in particular those concerning communication and synchronization - and offers guidance in exploring their causes.

---

scalpel[¶](#scalpel)
===
* Homepage: <http://scalpel.sourceforge.net/index.html>
* Spack package: [scalpel/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/scalpel/package.py)
* Versions: 0.5.3
* Build Dependencies: [perl](#perl), [cmake](#cmake)
* Link Dependencies: [perl](#perl), [cmake](#cmake)

Description: Scalpel is a software package for detecting INDEL (INsertion and DELetion) mutations in a reference genome which has been sequenced with next-generation sequencing technology.

---

scan-for-matches[¶](#scan-for-matches)
===
* Homepage: <http://blog.theseed.org/servers/2010/07/scan-for-matches.html>
* Spack package: [scan-for-matches/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/scan-for-matches/package.py)
* Versions: 2010-7-16

Description: scan_for_matches is a utility written in C for locating patterns in DNA or protein FASTA files.

---

scons[¶](#scons)
===
* Homepage: <http://scons.org>
* Spack package: [scons/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/scons/package.py)
* Versions: 3.0.1, 2.5.1, 2.5.0
* Build Dependencies: [python](#python)
* Link Dependencies: [python](#python)
* Run Dependencies: [python](#python)

Description: SCons is a software construction tool.

---

scorec-core[¶](#scorec-core)
===
* Homepage: <https://www.scorec.rpi.edu/>
* Spack package: [scorec-core/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/scorec-core/package.py)
* Versions: develop
* Build Dependencies: [cmake](#cmake), [zoltan](#zoltan), mpi
* Link Dependencies: [zoltan](#zoltan), mpi

Description: The SCOREC Core is a set of C/C++ libraries for unstructured mesh simulations on supercomputers.

---

scorep[¶](#scorep)
===
* Homepage: <http://www.vi-hps.org/projects/score-p>
* Spack package: [scorep/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/scorep/package.py)
* Versions: 4.1, 4.0, 3.1, 3.0, 2.0.2, 1.4.2, 1.3
* Build Dependencies: [pdt](#pdt), [cubew](#cubew), mpi, [otf2](#otf2), [cubelib](#cubelib), [cube](#cube), [papi](#papi), [opari2](#opari2)
* Link Dependencies: [pdt](#pdt), [cubew](#cubew), mpi, [otf2](#otf2), [cubelib](#cubelib), [cube](#cube), [papi](#papi), [opari2](#opari2)

Description: The Score-P measurement infrastructure is a highly scalable and easy-to-use tool suite for profiling, event tracing, and online analysis of HPC applications.

---

scotch[¶](#scotch)
===
* Homepage: <http://www.labri.fr/perso/pelegrin/scotch/>
* Spack package: [scotch/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/scotch/package.py)
* Versions: 6.0.6, 6.0.5a, 6.0.4, 6.0.3, 6.0.0, 5.1.10b
* Build Dependencies: [zlib](#zlib), mpi, [flex](#flex), [bison](#bison)
* Link Dependencies: [zlib](#zlib), mpi

Description: Scotch is a software package for graph and mesh/hypergraph partitioning, graph clustering, and sparse matrix ordering.

---

scr[¶](#scr)
===
* Homepage: <http://computation.llnl.gov/projects/scalable-checkpoint-restart-for-mpi>
* Spack package: [scr/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/scr/package.py)
* Versions: 1.2.0, master
* Build Dependencies: [zlib](#zlib), mpi, [dtcmp](#dtcmp), [libyogrt](#libyogrt), [pdsh](#pdsh), [cmake](#cmake)
* Link Dependencies: [zlib](#zlib), mpi, [dtcmp](#dtcmp), [libyogrt](#libyogrt)
* Run Dependencies: [pdsh](#pdsh)

Description: SCR caches checkpoint data in storage on the compute nodes of a Linux cluster to provide a fast, scalable checkpoint/restart capability for MPI codes.

---

screen[¶](#screen)
===
* Homepage: <https://www.gnu.org/software/screen/>
* Spack package: [screen/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/screen/package.py)
* Versions: 4.6.2, 4.3.1, 4.3.0, 4.2.1, 4.2.0, 4.0.3, 4.0.2, 3.9.15, 3.9.11, 3.9.10, 3.9.9, 3.9.8, 3.9.4, 3.7.6, 3.7.4, 3.7.2, 3.7.1
* Build Dependencies: [ncurses](#ncurses)
* Link Dependencies: [ncurses](#ncurses)

Description: Screen is a full-screen window manager that multiplexes a physical terminal between several processes, typically interactive shells.

---

scripts[¶](#scripts)
===
* Homepage: <http://cgit.freedesktop.org/xorg/app/scripts>
* Spack package: [scripts/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/scripts/package.py)
* Versions: 1.0.1
* Build Dependencies: [util-macros](#util-macros), pkgconfig, [libx11](#libx11)
* Link Dependencies: [libx11](#libx11)

Description: Various X related scripts.

---

scrnsaverproto[¶](#scrnsaverproto)
===
* Homepage: <http://cgit.freedesktop.org/xorg/proto/scrnsaverproto>
* Spack package: [scrnsaverproto/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/scrnsaverproto/package.py)
* Versions: 1.2.2
* Build Dependencies: [util-macros](#util-macros), pkgconfig

Description: MIT Screen Saver Extension. This extension defines a protocol to control screensaver features and also to query screensaver info on specific windows.

---

sctk[¶](#sctk)
===
* Homepage: <https://www.nist.gov/itl/iad/mig/tools>
* Spack package: [sctk/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/sctk/package.py)
* Versions: 2.4.10, 2.4.9, 2.4.8, 2.4.0

Description: The NIST Scoring Toolkit (SCTK) is a collection of software tools designed to score benchmark test evaluations of Automatic Speech Recognition (ASR) Systems. The toolkit is currently used by NIST, benchmark test participants, and researchers worldwide as a common scoring engine.

---

sdl2[¶](#sdl2)
===
* Homepage: <https://wiki.libsdl.org/FrontPage>
* Spack package: [sdl2/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/sdl2/package.py)
* Versions: 2.0.5
* Build Dependencies: [cmake](#cmake)

Description: Simple DirectMedia Layer is a cross-platform development library designed to provide low level access to audio, keyboard, mouse, joystick, and graphics hardware via OpenGL and Direct3D.

---

sdl2-image[¶](#sdl2-image)
===

Homepage:
* <http://sdl.beuc.net/sdl.wiki/SDL_image>

Spack package:
* [sdl2-image/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/sdl2-image/package.py)

Versions: 2.0.1
Build Dependencies: [sdl2](#sdl2)
Link Dependencies: [sdl2](#sdl2)

Description: SDL is designed to provide the bare bones of creating a graphical program.

---

sed[¶](#sed)
===

Homepage:
* <http://www.gnu.org/software/sed/>

Spack package:
* [sed/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/sed/package.py)

Versions: 4.2.2

Description: GNU implementation of the famous stream editor.

---

sentieon-genomics[¶](#sentieon-genomics)
===

Homepage:
* <https://www.sentieon.com/>

Spack package:
* [sentieon-genomics/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/sentieon-genomics/package.py)

Versions: 201808.01

Description: Sentieon provides complete solutions for secondary DNA analysis. Our software improves upon BWA, GATK, Mutect, and Mutect2 based pipelines. The Sentieon tools are deployable on any CPU-based computing system. Please set the path to the sentieon license server with: export SENTIEON_LICENSE=[FQDN]:[PORT]

Note: A manual download is required. Spack will search your current directory for the download file. Alternatively, add this file to a mirror so that Spack can find it.
For instructions on how to set up a mirror, see http://spack.readthedocs.io/en/latest/mirrors.html

---

seqan[¶](#seqan)
===

Homepage:
* <https://www.seqan.de>

Spack package:
* [seqan/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/seqan/package.py)

Versions: 2.4.0
Build Dependencies: [zlib](#zlib), [bzip2](#bzip2), [py-nose](#py-nose), [cmake](#cmake), [boost](#boost), [python](#python), [py-sphinx](#py-sphinx)
Link Dependencies: [zlib](#zlib), [bzip2](#bzip2), [boost](#boost)

Description: SeqAn is an open source C++ library of efficient algorithms and data structures for the analysis of sequences with the focus on biological data. Our library applies a unique generic design that guarantees high performance, generality, extensibility, and integration with other libraries. SeqAn is easy to use and simplifies the development of new software tools with a minimal loss of performance.

---

seqprep[¶](#seqprep)
===

Homepage:
* <https://github.com/jstjohn/SeqPrep>

Spack package:
* [seqprep/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/seqprep/package.py)

Versions: 1.3.2

Description: SeqPrep is a program to merge overlapping paired-end Illumina reads into a single longer read.

---

seqtk[¶](#seqtk)
===

Homepage:
* <https://github.com/lh3/seqtk>

Spack package:
* [seqtk/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/seqtk/package.py)

Versions: 1.2, 1.1
Build Dependencies: [zlib](#zlib)
Link Dependencies: [zlib](#zlib)

Description: Toolkit for processing sequences in FASTA/Q formats.

---

serf[¶](#serf)
===

Homepage:
* <https://serf.apache.org/>

Spack package:
* [serf/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/serf/package.py)

Versions: 1.3.9, 1.3.8
Build Dependencies: [zlib](#zlib), [scons](#scons), [apr](#apr), [apr-util](#apr-util), [openssl](#openssl)
Link Dependencies: [zlib](#zlib), [apr](#apr), [apr-util](#apr-util), [openssl](#openssl)

Description: Apache Serf - a high performance C-based HTTP client library built upon the Apache Portable Runtime (APR) library.

---

sessreg[¶](#sessreg)
===

Homepage:
* <http://cgit.freedesktop.org/xorg/app/sessreg>

Spack package:
* [sessreg/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/sessreg/package.py)

Versions: 1.1.0
Build Dependencies: [xproto](#xproto), pkgconfig, [util-macros](#util-macros)

Description: Sessreg is a simple program for managing utmp/wtmp entries for X sessions. It was originally written for use with xdm, but may also be used with other display managers such as gdm or kdm.

---

setxkbmap[¶](#setxkbmap)
===

Homepage:
* <http://cgit.freedesktop.org/xorg/app/setxkbmap>

Spack package:
* [setxkbmap/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/setxkbmap/package.py)

Versions: 1.3.1
Build Dependencies: [libxkbfile](#libxkbfile), pkgconfig, [libx11](#libx11), [util-macros](#util-macros)
Link Dependencies: [libxkbfile](#libxkbfile), [libx11](#libx11)

Description: setxkbmap is an X11 client to change the keymaps in the X server for a specified keyboard to use the layout determined by the options listed on the command line.

---

sga[¶](#sga)
===

Homepage:
* <https://www.msi.umn.edu/sw/sga>

Spack package:
* [sga/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/sga/package.py)

Versions: 0.10.15, 0.10.14, 0.10.13, 0.10.12, 0.10.11, 0.10.10, 0.10.9, 0.10.8, 0.10.3
Build Dependencies: [zlib](#zlib), [jemalloc](#jemalloc), [bamtools](#bamtools), [autoconf](#autoconf), [automake](#automake), [libtool](#libtool), [sparsehash](#sparsehash)
Link Dependencies: [zlib](#zlib), [jemalloc](#jemalloc), [bamtools](#bamtools), [sparsehash](#sparsehash)

Description: SGA is a de novo genome assembler based on the concept of string graphs. The major goal of SGA is to be very memory efficient, which is achieved by using a compressed representation of DNA sequence reads.

---

shapeit[¶](#shapeit)
===

Homepage:
* <https://mathgen.stats.ox.ac.uk/genetics_software/shapeit/shapeit.html>

Spack package:
* [shapeit/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/shapeit/package.py)

Versions: 2.837

Description: SHAPEIT is a fast and accurate method for estimation of haplotypes (aka phasing) from genotype or sequencing data.

---

shared-mime-info[¶](#shared-mime-info)
===

Homepage:
* <https://freedesktop.org/wiki/Software/shared-mime-info>

Spack package:
* [shared-mime-info/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/shared-mime-info/package.py)

Versions: 1.9, 1.8
Build Dependencies: pkgconfig, [gettext](#gettext), [glib](#glib), [intltool](#intltool), [libxml2](#libxml2)
Link Dependencies: [glib](#glib), [libxml2](#libxml2)

Description: Database of common MIME types.

---

shiny-server[¶](#shiny-server)
===

Homepage:
* <https://www.rstudio.com/products/shiny/shiny-server/>

Spack package:
* [shiny-server/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/shiny-server/package.py)

Versions: 1.5.3.838
Build Dependencies: [r](#r), [cmake](#cmake), [python](#python), [git](#git), [openssl](#openssl)
Link Dependencies: [r](#r), [cmake](#cmake), [python](#python), [git](#git), [openssl](#openssl)

Description: Shiny Server lets you put Shiny web applications and interactive documents online. Take your Shiny apps and share them with your organization or the world.

---

shocklibs[¶](#shocklibs)
===

Homepage:
* <https://github.com/MG-RAST/Shock>

Spack package:
* [shocklibs/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/shocklibs/package.py)

Versions: 0.9.24

Description: The lib for Shock: an object store for scientific data.

---

shoremap[¶](#shoremap)
===

Homepage:
* <http://bioinfo.mpipz.mpg.de/shoremap/>

Spack package:
* [shoremap/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/shoremap/package.py)

Versions: 3.6
Build Dependencies: [dislin](#dislin)
Link Dependencies: [dislin](#dislin)

Description: SHOREmap is a computational tool implementing a method that enables simple and straightforward mapping-by-sequencing analysis. Whole genome resequencing of pools of recombinant mutant genomes allows directly linking phenotypic traits to causal mutations. Such an analysis, called mapping-by-sequencing, combines classical genetic mapping and next generation sequencing by relying on selection-induced patterns within genome-wide allele frequency in pooled genomes.

---

shortbred[¶](#shortbred)
===

Homepage:
* <https://huttenhower.sph.harvard.edu/shortbred>

Spack package:
* [shortbred/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/shortbred/package.py)

Versions: 0.9.4
Build Dependencies: [muscle](#muscle), [blast-plus](#blast-plus), [usearch](#usearch), [py-biopython](#py-biopython), [cdhit](#cdhit), [python](#python)
Link Dependencies: [muscle](#muscle), [blast-plus](#blast-plus), [usearch](#usearch), [py-biopython](#py-biopython), [cdhit](#cdhit), [python](#python)

Description: ShortBRED is a system for profiling protein families of interest at very high specificity in shotgun meta'omic sequencing data.

---

shortstack[¶](#shortstack)
===

Homepage:
* <http://sites.psu.edu/axtell/software/shortstack/>

Spack package:
* [shortstack/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/shortstack/package.py)

Versions: 3.8.3
Build Dependencies: [perl](#perl), [samtools](#samtools), [viennarna](#viennarna), [bowtie](#bowtie)
Link Dependencies: [samtools](#samtools), [viennarna](#viennarna), [bowtie](#bowtie)
Run Dependencies: [perl](#perl)

Description: ShortStack is a tool developed to process and analyze small RNA-seq data with respect to a reference genome, and output a comprehensive and informative annotation of all discovered small RNA genes.

---

showfont[¶](#showfont)
===

Homepage:
* <http://cgit.freedesktop.org/xorg/app/showfont>

Spack package:
* [showfont/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/showfont/package.py)

Versions: 1.0.5
Build Dependencies: [util-macros](#util-macros), pkgconfig, [libfs](#libfs)
Link Dependencies: [libfs](#libfs)

Description: showfont displays data about a font from an X font server. The information shown includes font information, font properties, character metrics, and character bitmaps.

---

shuffile[¶](#shuffile)
===

Homepage:
* <https://github.com/ECP-VeloC/shuffile>

Spack package:
* [shuffile/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/shuffile/package.py)

Versions: 0.0.3, master
Build Dependencies: [cmake](#cmake), mpi, [kvtree](#kvtree)
Link Dependencies: mpi, [kvtree](#kvtree)

Description: Shuffle files between MPI ranks.

---

sickle[¶](#sickle)
===

Homepage:
* <https://github.com/najoshi/sickle>

Spack package:
* [sickle/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/sickle/package.py)

Versions: 1.33
Build Dependencies: [zlib](#zlib)
Link Dependencies: [zlib](#zlib)

Description: Sickle is a tool that uses sliding windows along with quality and length thresholds to determine when quality is sufficiently low to trim the 3'-end of reads, and also to determine when quality is sufficiently high to trim the 5'-end of reads.

---

siesta[¶](#siesta)
===

Homepage:
* <https://departments.icmab.es/leem/siesta/>

Spack package:
* [siesta/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/siesta/package.py)

Versions: 4.0.1, 3.2-pl-5
Build Dependencies: [netcdf-fortran](#netcdf-fortran), mpi, [netcdf](#netcdf), scalapack, blas, lapack
Link Dependencies: [netcdf-fortran](#netcdf-fortran), mpi, [netcdf](#netcdf), scalapack, blas, lapack

Description: SIESTA performs electronic structure calculations and ab initio molecular dynamics simulations of molecules and solids.

---

signalp[¶](#signalp)
===

Homepage:
* <http://www.cbs.dtu.dk/services/SignalP/>

Spack package:
* [signalp/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/signalp/package.py)

Versions: 4.1f
Build Dependencies: [perl](#perl), [gnuplot](#gnuplot)
Link Dependencies: [gnuplot](#gnuplot)
Run Dependencies: [perl](#perl)

Description: SignalP predicts the presence and location of signal peptide cleavage sites in amino acid sequences from different organisms: Gram-positive bacteria, Gram-negative bacteria, and eukaryotes.

Note: A manual download is required for SignalP. Spack will search your current directory for the download file. Alternatively, add this file to a mirror so that Spack can find it. For instructions on how to set up a mirror, see http://spack.readthedocs.io/en/latest/mirrors.html

---

signify[¶](#signify)
===

Homepage:
* <https://github.com/aperezdc/signify>

Spack package:
* [signify/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/signify/package.py)

Versions: 23
Build Dependencies: [libbsd](#libbsd)
Link Dependencies: [libbsd](#libbsd)

Description: OpenBSD tool to sign and verify signatures on files.

---

silo[¶](#silo)
===

Homepage:
* <http://wci.llnl.gov/simulation/computer-codes/silo>

Spack package:
* [silo/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/silo/package.py)

Versions: 4.10.2-bsd, 4.10.2, 4.9, 4.8
Build Dependencies: [zlib](#zlib), mpi, [qt](#qt), [hdf5](#hdf5)
Link Dependencies: [zlib](#zlib), mpi, [qt](#qt), [hdf5](#hdf5)

Description: Silo is a library for reading and writing a wide variety of scientific data to binary disk files.

---

simplemoc[¶](#simplemoc)
===

Homepage:
* <https://github.com/ANL-CESAR/SimpleMOC/>

Spack package:
* [simplemoc/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/simplemoc/package.py)

Versions: 4
Build Dependencies: mpi
Link Dependencies: mpi

Description: The purpose of this mini-app is to demonstrate the performance characteristics and viability of the Method of Characteristics (MOC) for 3D neutron transport calculations in the context of full scale light water reactor simulation.

---

simul[¶](#simul)
===

Homepage:
* <https://github.com/LLNL/simul>

Spack package:
* [simul/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/simul/package.py)

Versions: 1.16, 1.15, 1.14, 1.13
Build Dependencies: mpi
Link Dependencies: mpi

Description: simul is an MPI coordinated test of parallel filesystem system calls and library functions.

---

simulationio[¶](#simulationio)
===

Homepage:
* <https://github.com/eschnett/SimulationIO>

Spack package:
* [simulationio/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/simulationio/package.py)

Versions: develop, 1.0.0, 0.1.0
Build Dependencies: [swig](#swig), [hdf5](#hdf5), [julia](#julia), [cmake](#cmake), [py-numpy](#py-numpy), [py-h5py](#py-h5py), [python](#python)
Link Dependencies: [python](#python), [hdf5](#hdf5)
Run Dependencies: [julia](#julia), [py-numpy](#py-numpy), [py-h5py](#py-h5py), [python](#python)

Description: SimulationIO: Efficient and convenient I/O for large PDE simulations.

---

singularity[¶](#singularity)
===

Homepage:
* <https://www.sylabs.io/singularity/>

Spack package:
* [singularity/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/singularity/package.py)

Versions: develop, 2.6.0, 2.5.2
Build Dependencies: [libarchive](#libarchive), [automake](#automake), [libtool](#libtool), [m4](#m4), [autoconf](#autoconf)
Link Dependencies: [libarchive](#libarchive)

Description:
Singularity is a container platform focused on supporting 'Mobility of Compute'.

---

skilion-onedrive[¶](#skilion-onedrive)
===

Homepage:
* <https://github.com/skilion/onedrive>

Spack package:
* [skilion-onedrive/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/skilion-onedrive/package.py)

Versions: 1.1.1
Build Dependencies: [curl](#curl), [dmd](#dmd), [sqlite](#sqlite)
Link Dependencies: [curl](#curl), [dmd](#dmd), [sqlite](#sqlite)

Description: A complete tool to interact with OneDrive on Linux, developed by Skilion, following the UNIX philosophy.

---

sleef[¶](#sleef)
===

Homepage:
* <http://sleef.org>

Spack package:
* [sleef/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/sleef/package.py)

Versions: 3.2
Build Dependencies: [cmake](#cmake)

Description: SIMD Library for Evaluating Elementary Functions, vectorized libm and DFT.

---

slepc[¶](#slepc)
===

Homepage:
* <http://www.grycap.upv.es/slepc>

Spack package:
* [slepc/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/slepc/package.py)

Versions: develop, 3.10.1, 3.10.0, 3.9.2, 3.9.1, 3.9.0, 3.8.2, 3.8.0, 3.7.4, 3.7.3, 3.7.1, 3.6.3, 3.6.2
Build Dependencies: [petsc](#petsc), [arpack-ng](#arpack-ng), [python](#python)
Link Dependencies: [petsc](#petsc), [arpack-ng](#arpack-ng)

Description: Scalable Library for Eigenvalue Problem Computations.

---

slurm[¶](#slurm)
===

Homepage:
* <https://slurm.schedmd.com>

Spack package:
* [slurm/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/slurm/package.py)

Versions: 18-08-0-1, 17-11-9-2, 17-02-6-1
Build Dependencies: [json-c](#json-c), [glib](#glib), [lz4](#lz4), [munge](#munge), [hdf5](#hdf5), [curl](#curl), [readline](#readline), pkgconfig, [zlib](#zlib), [hwloc](#hwloc), [openssl](#openssl), [mariadb](#mariadb), [gtkplus](#gtkplus)
Link Dependencies: [json-c](#json-c), [glib](#glib), [lz4](#lz4), [munge](#munge), [hdf5](#hdf5), [curl](#curl), [readline](#readline), [zlib](#zlib), [hwloc](#hwloc), [openssl](#openssl), [mariadb](#mariadb), [gtkplus](#gtkplus)

Description: Slurm is an open source, fault-tolerant, and highly scalable cluster management and job scheduling system for large and small Linux clusters. Slurm requires no kernel modifications for its operation and is relatively self-contained. As a cluster workload manager, Slurm has three key functions. First, it allocates exclusive and/or non-exclusive access to resources (compute nodes) to users for some duration of time so they can perform work. Second, it provides a framework for starting, executing, and monitoring work (normally a parallel job) on the set of allocated nodes. Finally, it arbitrates contention for resources by managing a queue of pending work.

---

smalt[¶](#smalt)
===

Homepage:
* <http://www.sanger.ac.uk/science/tools/smalt-0>

Spack package:
* [smalt/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/smalt/package.py)

Versions: 0.7.6

Description: SMALT aligns DNA sequencing reads with a reference genome.

---

smproxy[¶](#smproxy)
===

Homepage:
* <http://cgit.freedesktop.org/xorg/app/smproxy>

Spack package:
* [smproxy/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/smproxy/package.py)

Versions: 1.0.6
Build Dependencies: [libice](#libice), [libxt](#libxt), pkgconfig, [libsm](#libsm), [util-macros](#util-macros), [libxmu](#libxmu)
Link Dependencies: [libice](#libice), [libxt](#libxt), [libsm](#libsm), [libxmu](#libxmu)

Description: smproxy allows X applications that do not support X11R6 session management to participate in an X11R6 session.

---

snakemake[¶](#snakemake)
===

Homepage:
* <https://snakemake.readthedocs.io/en/stable/>

Spack package:
* [snakemake/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/snakemake/package.py)

Versions: 3.11.2
Build Dependencies: [py-wrapt](#py-wrapt), [py-setuptools](#py-setuptools), [python](#python), [py-requests](#py-requests)
Link Dependencies: [python](#python)
Run Dependencies: [py-wrapt](#py-wrapt), [py-setuptools](#py-setuptools), [python](#python), [py-requests](#py-requests)

Description: Snakemake is an MIT-licensed workflow management system.

---

snap[¶](#snap)
===

Homepage:
* <https://github.com/lanl/SNAP>

Spack package:
* [snap/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/snap/package.py)

Versions: master
Build Dependencies: mpi
Link Dependencies: mpi

Description: SNAP serves as a proxy application to model the performance of a modern discrete ordinates neutral particle transport application. SNAP may be considered an update to Sweep3D, intended for hybrid computing architectures. It is modeled off the Los Alamos National Laboratory code PARTISN.

---

snap-berkeley[¶](#snap-berkeley)
===

Homepage:
* <http://snap.cs.berkeley.edu/>

Spack package:
* [snap-berkeley/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/snap-berkeley/package.py)

Versions: 1.0beta.18, 0.15
Build Dependencies: [zlib](#zlib)
Link Dependencies: [zlib](#zlib)

Description: SNAP is a fast and accurate aligner for short DNA reads. It is optimized for modern read lengths of 100 bases or higher, and takes advantage of these reads to align data quickly through a hash-based indexing scheme.

---

snap-korf[¶](#snap-korf)
===

Homepage:
* <http://korflab.ucdavis.edu/software.html>

Spack package:
* [snap-korf/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/snap-korf/package.py)

Versions: 2013-11-29
Build Dependencies: [perl](#perl), [boost](#boost), [sparsehash](#sparsehash), [sqlite](#sqlite)
Link Dependencies: [boost](#boost), [sparsehash](#sparsehash), [sqlite](#sqlite)
Run Dependencies: [perl](#perl)

Description: SNAP is a general purpose gene finding program suitable for both eukaryotic and prokaryotic genomes.

---

snappy[¶](#snappy)
===

Homepage:
* <https://github.com/google/snappy>

Spack package:
* [snappy/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/snappy/package.py)

Versions: 1.1.7
Build Dependencies: [cmake](#cmake)
Test Dependencies: [googletest](#googletest)

Description: A fast compressor/decompressor: https://code.google.com/p/snappy

---

snbone[¶](#snbone)
===

Homepage:
* <https://github.com/ANL-CESAR/>

Spack package:
* [snbone/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/snbone/package.py)

Versions: develop
Build Dependencies: [metis](#metis)
Link Dependencies: [metis](#metis)

Description: This application targets the primary computational solve burden of an SN, continuous finite element based transport equation solver.

---

sniffles[¶](#sniffles)
===

Homepage:
* <https://github.com/fritzsedlazeck/Sniffles/wiki>

Spack package:
* [sniffles/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/sniffles/package.py)

Versions: 1.0.7, 1.0.5
Build Dependencies: [cmake](#cmake)

Description: Structural variation caller using third generation sequencing.

---

snpeff[¶](#snpeff)
===

Homepage:
* <http://snpeff.sourceforge.net/>

Spack package:
* [snpeff/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/snpeff/package.py)

Versions: 2017-11-24
Build Dependencies: [jdk](#jdk)
Run Dependencies: [jdk](#jdk)

Description: SnpEff is a variant annotation and effect prediction tool. It annotates and predicts the effects of genetic variants (such as amino acid changes).

---

snphylo[¶](#snphylo)
===

Homepage:
* <http://chibba.pgml.uga.edu/snphylo/>

Spack package:
* [snphylo/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/snphylo/package.py)

Versions: 2016-02-04
Build Dependencies: [r](#r), [r-phangorn](#r-phangorn), [r-gdsfmt](#r-gdsfmt), [muscle](#muscle), [phylip](#phylip), [r-getopt](#r-getopt), [r-snprelate](#r-snprelate), [python](#python)
Link Dependencies: [muscle](#muscle), [phylip](#phylip)
Run Dependencies: [r](#r), [r-phangorn](#r-phangorn), [r-gdsfmt](#r-gdsfmt), [r-getopt](#r-getopt), [r-snprelate](#r-snprelate), [python](#python)

Description: A pipeline to generate a phylogenetic tree from huge SNP data.

---

snptest[¶](#snptest)
===

Homepage:
* <https://mathgen.stats.ox.ac.uk/genetics_software/snptest/snptest.html>

Spack package:
* [snptest/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/snptest/package.py)

Versions: 2.5.2

Description: SNPTEST is a program for the analysis of single SNP association in genome-wide studies.

---

soap2[¶](#soap2)
===

Homepage:
* <http://soap.genomics.org.cn/soapaligner.html>

Spack package:
* [soap2/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/soap2/package.py)

Versions: 2.21

Description: Software for short oligonucleotide alignment.

---

soapdenovo-trans[¶](#soapdenovo-trans)
===

Homepage:
* <http://soap.genomics.org.cn/SOAPdenovo-Trans.html>

Spack package:
* [soapdenovo-trans/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/soapdenovo-trans/package.py)

Versions: 1.0.4

Description: SOAPdenovo-Trans is a de novo transcriptome assembler based on the SOAPdenovo framework, adapted to alternative splicing and differing expression levels among transcripts.

---

soapdenovo2[¶](#soapdenovo2)
===

Homepage:
* <https://github.com/aquaskyline/SOAPdenovo2>

Spack package:
* [soapdenovo2/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/soapdenovo2/package.py)

Versions: 240

Description: SOAPdenovo is a novel short-read assembly method that can build a de novo draft assembly for human-sized genomes. The program is specially designed to assemble Illumina GA short reads. It creates new opportunities for building reference sequences and carrying out accurate analyses of unexplored genomes in a cost-effective way.

---

soapindel[¶](#soapindel)
===

Homepage:
* <http://soap.genomics.org.cn/soapindel.html>

Spack package:
* [soapindel/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/soapindel/package.py)

Versions: 2.1.7.17
Build Dependencies: [perl](#perl)
Run Dependencies: [perl](#perl)

Description: SOAPindel focuses on calling indels from next-generation paired-end sequencing data.

---

soapsnp[¶](#soapsnp)
===

Homepage:
* <http://soap.genomics.org.cn/soapsnp.html>

Spack package:
* [soapsnp/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/soapsnp/package.py)

Versions: 1.03
Build Dependencies: [boost](#boost)
Link Dependencies: [boost](#boost)

Description: SOAPsnp uses a method based on Bayes' theorem (the reverse probability model) to call consensus genotypes by carefully considering the data quality, alignment, and recurring experimental errors.

---

sofa-c[¶](#sofa-c)
===

Homepage:
* <http://www.iausofa.org/current_C.html>

Spack package:
* [sofa-c/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/sofa-c/package.py)

Versions: 20180130

Description: Standards of Fundamental Astronomy (SOFA) library for ANSI C.

---

somatic-sniper[¶](#somatic-sniper)
===

Homepage:
* <http://gmt.genome.wustl.edu/packages/somatic-sniper>

Spack package:
* [somatic-sniper/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/somatic-sniper/package.py)

Versions: 1.0.5.0
Build Dependencies: [cmake](#cmake), [ncurses](#ncurses)
Link Dependencies: [ncurses](#ncurses)

Description: A tool to call somatic single nucleotide variants.

---

sortmerna[¶](#sortmerna)
===

Homepage:
* <https://github.com/biocore/sortmerna>

Spack package:
* [sortmerna/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/sortmerna/package.py)

Versions: 2017-07-13
Build Dependencies: [zlib](#zlib), [cmake](#cmake)
Link Dependencies: [zlib](#zlib)

Description: SortMeRNA is a tool for filtering, mapping and OTU-picking NGS reads in metatranscriptomic and metagenomic data.

---

sosflow[¶](#sosflow)
===

Homepage:
* <https://github.com/cdwdirect/sos_flow/wiki>

Spack package:
* [sosflow/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/sosflow/package.py)

Versions: spack
Build Dependencies: [cmake](#cmake), pkgconfig, mpi, [libevpath](#libevpath), [sqlite](#sqlite)
Link Dependencies: pkgconfig, mpi, [libevpath](#libevpath), [sqlite](#sqlite)

Description: SOSflow provides a flexible, scalable, and programmable framework for observation, introspection, feedback, and control of HPC applications.

---

sowing[¶](#sowing)
===

Homepage:
* <http://www.mcs.anl.gov/petsc/index.html>

Spack package:
* [sowing/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/sowing/package.py)

Versions: 1.1.25-p1, 1.1.23-p1

Description: Sowing generates Fortran interfaces and documentation for PETSc and MPICH.

---

sox[¶](#sox)
===

Homepage:
* <http://sox.sourceforge.net/Main/HomePage>

Spack package:
* [sox/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/sox/package.py)

Versions: 14.4.2
Build Dependencies: [opus](#opus), [bzip2](#bzip2), [id3lib](#id3lib), [flac](#flac), [libvorbis](#libvorbis)
Link Dependencies: [opus](#opus), [bzip2](#bzip2), [id3lib](#id3lib), [flac](#flac), [libvorbis](#libvorbis)

Description: SoX, the Swiss Army knife of sound processing programs.

---

spades[¶](#spades)
===

Homepage:
* <http://cab.spbu.ru/software/spades/>

Spack package:
* [spades/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/spades/package.py)

Versions: 3.12.0, 3.11.1, 3.10.1
Build Dependencies: [zlib](#zlib), [cmake](#cmake), [python](#python), [bzip2](#bzip2)
Link Dependencies: [zlib](#zlib), [bzip2](#bzip2)
Run Dependencies: [python](#python)

Description: SPAdes - St. Petersburg genome assembler - is intended for both standard isolates and single-cell MDA bacteria assemblies.

---

span-lite[¶](#span-lite)
===

Homepage:
* <https://github.com/martinmoene/span-lite>

Spack package:
* [span-lite/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/span-lite/package.py)

Versions: 0.3.0, 0.2.0, 0.1.0

Description: A single-file header-only version of a C++20-like span for C++98, C++11 and later.

---

spark[¶](#spark)
===

Homepage:
* <http://spark.apache.org>

Spack package:
* [spark/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/spark/package.py)

Versions: 2.3.0, 2.1.0, 2.0.2, 2.0.0, 1.6.2, 1.6.1, 1.6.0
Build Dependencies: java, [hadoop](#hadoop)
Run Dependencies: java, [hadoop](#hadoop)

Description: Apache Spark is a fast and general engine for large-scale data processing.

---

sparsehash[¶](#sparsehash)
===

Homepage:
* <https://github.com/sparsehash/sparsehash>

Spack package:
* [sparsehash/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/sparsehash/package.py)

Versions: 2.0.3

Description: Sparse and dense hash-tables for C++ by Google.

---

sparta[¶](#sparta)
===

Homepage:
* <https://github.com/atulkakrana/sPARTA.github>

Spack package:
* [sparta/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/sparta/package.py)

Versions: 1.25
Build Dependencies: [py-numpy](#py-numpy), [bowtie2](#bowtie2), [python](#python), [py-scipy](#py-scipy)
Link Dependencies: [bowtie2](#bowtie2)
Run Dependencies: [py-numpy](#py-numpy), [python](#python), [py-scipy](#py-scipy)

Description: small RNA-PARE Target Analyzer (sPARTA) is a tool which utilizes high-throughput sequencing to profile genome-wide cleavage products.

---

spdlog[¶](#spdlog)
===

Homepage:
* <https://github.com/gabime/spdlog>

Spack package:
* [spdlog/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/spdlog/package.py)

Versions: 1.2.1, 1.2.0, 1.1.0, 1.0.0, 0.17.0, 0.16.3, 0.16.2, 0.16.1, 0.16.0, 0.14.0, 0.13.0, 0.12.0, 0.11.0, 0.10.0, 0.9.0
Build Dependencies: [cmake](#cmake)

Description: Very fast, header-only C++ logging library.

---

spectrum-mpi[¶](#spectrum-mpi)
===

Homepage:
* <http://www-03.ibm.com/systems/spectrum-computing/products/mpi>

Spack package:
* [spectrum-mpi/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/spectrum-mpi/package.py)

Description: IBM MPI implementation from Spectrum MPI.

---

speex[¶](#speex)
===

Homepage:
* <https://speex.org>

Spack package:
* [speex/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/speex/package.py)

Versions: 1.2.0

Description: Speex is an Open Source/Free Software patent-free audio compression format designed for speech.
---
spglib[¶](#spglib)
===
Homepage:
* <https://atztogo.github.io/spglib/>
Spack package:
* [spglib/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/spglib/package.py)
Versions: 1.10.3, 1.10.0
Build Dependencies: [cmake](#cmake)
Description: C library for finding and handling crystal symmetries.

---
sph2pipe[¶](#sph2pipe)
===
Homepage:
* <https://www.ldc.upenn.edu/language-resources/tools/sphere-conversion-tools>
Spack package:
* [sph2pipe/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/sph2pipe/package.py)
Versions: 2.5
Build Dependencies: [cmake](#cmake)
Description: Sph2pipe is a portable tool for converting SPHERE files to other formats.

---
spherepack[¶](#spherepack)
===
Homepage:
* <https://www2.cisl.ucar.edu/resources/legacy/spherepack>
Spack package:
* [spherepack/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/spherepack/package.py)
Versions: 3.2
Description: SPHEREPACK - A Package for Modeling Geophysical Processes

---
spindle[¶](#spindle)
===
Homepage:
* <https://computation.llnl.gov/project/spindle/>
Spack package:
* [spindle/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/spindle/package.py)
Versions: 0.8.1
Build Dependencies: [launchmon](#launchmon)
Link Dependencies: [launchmon](#launchmon)
Description: Spindle improves the library-loading performance of dynamically linked HPC applications. Without Spindle large MPI jobs can overload on a shared file system when loading dynamically linked libraries, causing site-wide performance problems.
---
spot[¶](#spot)
===
Homepage:
* <https://spot.lrde.epita.fr/>
Spack package:
* [spot/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/spot/package.py)
Versions: 1.99.3, 1.2.6
Build Dependencies: [boost](#boost), [python](#python)
Link Dependencies: [boost](#boost), [python](#python)
Description: Spot is a C++11 library for omega-automata manipulation and model checking.

---
sqlite[¶](#sqlite)
===
Homepage:
* <www.sqlite.org>
Spack package:
* [sqlite/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/sqlite/package.py)
Versions: 3.23.1, 3.22.0, 3.21.0, 3.20.0, 3.18.0, 3.8.10.2, 3.8.5
Build Dependencies: [readline](#readline)
Link Dependencies: [readline](#readline)
Description: SQLite3 is an SQL database engine in a C library. Programs that link the SQLite3 library can have SQL database access without running a separate RDBMS process.

---
sqlitebrowser[¶](#sqlitebrowser)
===
Homepage:
* <https://sqlitebrowser.org>
Spack package:
* [sqlitebrowser/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/sqlitebrowser/package.py)
Versions: 3.10.1
Build Dependencies: [cmake](#cmake), [qt](#qt), [sqlite](#sqlite)
Link Dependencies: [qt](#qt), [sqlite](#sqlite)
Description: DB Browser for SQLite (DB4S) is a high quality, visual, open source tool to create, design, and edit database files compatible with SQLite.

---
squid[¶](#squid)
===
Homepage:
* <http://eddylab.org/software.html>
Spack package:
* [squid/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/squid/package.py)
Versions: 1.9g
Description: C function library for sequence analysis.
---
sra-toolkit[¶](#sra-toolkit)
===
Homepage:
* <https://trace.ncbi.nlm.nih.gov/Traces/sra>
Spack package:
* [sra-toolkit/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/sra-toolkit/package.py)
Versions: 2.9.2, 2.8.2-1
Description: The NCBI SRA Toolkit enables reading ("dumping") of sequencing files from the SRA database and writing ("loading") files into the .sra format.

---
ssht[¶](#ssht)
===
Homepage:
* <https://astro-informatics.github.io/ssht/>
Spack package:
* [ssht/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/ssht/package.py)
Versions: 1.2b1
Build Dependencies: [fftw](#fftw)
Link Dependencies: [fftw](#fftw)
Description: The SSHT code provides functionality to perform fast and exact spin spherical harmonic transforms.

---
sspace-longread[¶](#sspace-longread)
===
Homepage:
* <https://www.baseclear.com/genomics/bioinformatics/basetools/SSPACE-longread>
Spack package:
* [sspace-longread/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/sspace-longread/package.py)
Versions: 1.1
Build Dependencies: [perl](#perl)
Run Dependencies: [perl](#perl)
Description: SSPACE-LongRead is a stand-alone program for scaffolding pre-assembled contigs using long reads Note: A manual download is required for SSPACE-LongRead. Spack will search your current directory for the download file. Alternatively, add this file to a mirror so that Spack can find it.
For instructions on how to set up a mirror, see http://spack.readthedocs.io/en/latest/mirrors.html

---
sspace-standard[¶](#sspace-standard)
===
Homepage:
* <https://www.baseclear.com/genomics/bioinformatics/basetools/SSPACE>
Spack package:
* [sspace-standard/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/sspace-standard/package.py)
Versions: 3.0
Build Dependencies: [perl](#perl), [perl-perl4-corelibs](#perl-perl4-corelibs)
Run Dependencies: [perl](#perl), [perl-perl4-corelibs](#perl-perl4-corelibs)
Description: SSPACE standard is a stand-alone program for scaffolding pre-assembled contigs using NGS paired-read data Note: A manual download is required for SSPACE-Standard. Spack will search your current directory for the download file. Alternatively, add this file to a mirror so that Spack can find it. For instructions on how to set up a mirror, see http://spack.readthedocs.io/en/latest/mirrors.html

---
sst-core[¶](#sst-core)
===
Homepage:
* <http://sst-simulator.org/>
Spack package:
* [sst-core/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/sst-core/package.py)
Versions: develop, 8.0.0
Build Dependencies: [zlib](#zlib), mpi, [m4](#m4), [autoconf](#autoconf), [automake](#automake), [libtool](#libtool), [boost](#boost), [python](#python)
Link Dependencies: mpi, [python](#python)
Description: The Structural Simulation Toolkit (SST) was developed to explore innovations in highly concurrent systems where the ISA, microarchitecture, and memory interact with the programming model and communications system

---
sst-dumpi[¶](#sst-dumpi)
===
Homepage:
* <http://sst.sandia.gov/about_dumpi.html>
Spack package:
* [sst-dumpi/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/sst-dumpi/package.py)
Versions: 6.1.0, master
Build Dependencies: [autoconf](#autoconf), [automake](#automake), [libtool](#libtool), [m4](#m4)
Description: The DUMPI package provides libraries to
collect and read traces of MPI applications. Traces are created by linking an application with a library that uses the PMPI interface to intercept MPI calls. DUMPI records signatures of all MPI-1 and MPI-2 subroutine calls, return values, request information, and PAPI counters.

---
sst-macro[¶](#sst-macro)
===
Homepage:
* <http://sst.sandia.gov/about_sstmacro.html>
Spack package:
* [sst-macro/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/sst-macro/package.py)
Versions: develop, 8.0.0, 6.1.0
Build Dependencies: [zlib](#zlib), mpi, [m4](#m4), [sst-core](#sst-core), [automake](#automake), [libtool](#libtool), [autoconf](#autoconf), [otf2](#otf2), [binutils](#binutils), [boost](#boost), [llvm](#llvm)
Link Dependencies: [zlib](#zlib), mpi, [sst-core](#sst-core), [otf2](#otf2), [boost](#boost), [llvm](#llvm)
Description: The Structural Simulation Toolkit Macroscale Element Library simulates large-scale parallel computer architectures for the coarse-grained study of distributed-memory applications. The simulator is driven from either a trace file or skeleton application. SST/macro's modular architecture can be extended with additional network models, trace file formats, software services, and processor models.

---
stacks[¶](#stacks)
===
Homepage:
* <http://catchenlab.life.illinois.edu/stacks/>
Spack package:
* [stacks/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/stacks/package.py)
Versions: 1.46
Build Dependencies: [perl](#perl), [sparsehash](#sparsehash)
Link Dependencies: [sparsehash](#sparsehash)
Run Dependencies: [perl](#perl)
Description: Stacks is a software pipeline for building loci from short-read sequences, such as those generated on the Illumina platform.
---
staden-io-lib[¶](#staden-io-lib)
===
Homepage:
* <http://staden.sourceforge.net/>
Spack package:
* [staden-io-lib/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/staden-io-lib/package.py)
Versions: 1.14.8
Build Dependencies: [zlib](#zlib)
Link Dependencies: [zlib](#zlib)
Description: Io_lib is a library for reading/writing various bioinformatics file formats.

---
star[¶](#star)
===
Homepage:
* <https://github.com/alexdobin/STAR>
Spack package:
* [star/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/star/package.py)
Versions: 2.6.1b, 2.6.1a, 2.6.0c, 2.6.0b, 2.6.0a, 2.5.4b, 2.5.4a, 2.5.3a, 2.5.2b, 2.5.2a, 2.4.2a
Build Dependencies: [zlib](#zlib)
Link Dependencies: [zlib](#zlib)
Description: STAR is an ultrafast universal RNA-seq aligner.

---
star-ccm-plus[¶](#star-ccm-plus)
===
Homepage:
* <http://mdx.plm.automation.siemens.com/star-ccm-plus>
Spack package:
* [star-ccm-plus/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/star-ccm-plus/package.py)
Versions: 11.06.010_02
Description: STAR-CCM+ (Computational Continuum Mechanics) CFD solver.

---
startup-notification[¶](#startup-notification)
===
Homepage:
* <https://www.freedesktop.org/wiki/Software/startup-notification/>
Spack package:
* [startup-notification/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/startup-notification/package.py)
Versions: 0.12
Build Dependencies: [xcb-util](#xcb-util), [libxcb](#libxcb), [libx11](#libx11)
Link Dependencies: [xcb-util](#xcb-util), [libxcb](#libxcb), [libx11](#libx11)
Description: startup-notification contains a reference implementation of the freedesktop startup notification protocol.
---
stat[¶](#stat)
===
Homepage:
* <http://paradyn.org/STAT/STAT.html>
Spack package:
* [stat/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/stat/package.py)
Versions: develop, 4.0.1, 4.0.0, 3.0.1, 3.0.0, 2.2.0, 2.1.0, 2.0.0
Build Dependencies: [launchmon](#launchmon), [graphviz](#graphviz), [graphlib](#graphlib), [autoconf](#autoconf), [py-pygtk](#py-pygtk), [mrnet](#mrnet), [py-xdot](#py-xdot), [fast-global-file-status](#fast-global-file-status), mpi, [swig](#swig), [dyninst](#dyninst), [automake](#automake), [libtool](#libtool), [python](#python)
Link Dependencies: [py-xdot](#py-xdot), [fast-global-file-status](#fast-global-file-status), [launchmon](#launchmon), mpi, [swig](#swig), [python](#python), [graphlib](#graphlib), [dyninst](#dyninst), [graphviz](#graphviz), [mrnet](#mrnet)
Run Dependencies: [py-enum34](#py-enum34), [py-pygtk](#py-pygtk), [graphviz](#graphviz)
Description: Library to create, manipulate, and export graphs Graphlib.

---
stc[¶](#stc)
===
Homepage:
* <http://swift-lang.org/Swift-T>
Spack package:
* [stc/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/stc/package.py)
Versions: 0.8.2
Build Dependencies: java, [ant](#ant), [turbine](#turbine), [zsh](#zsh)
Run Dependencies: java, [turbine](#turbine), [zsh](#zsh)
Description: STC: The Swift-Turbine Compiler

---
steps[¶](#steps)
===
Homepage:
* <https://groups.oist.jp/cnu/software>
Spack package:
* [steps/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/steps/package.py)
Versions: develop, 3.3.0, 3.2.0
Build Dependencies: mpi, [petsc](#petsc), [py-cython](#py-cython), [cmake](#cmake), blas, [python](#python), lapack
Link Dependencies: mpi, [petsc](#petsc), [py-cython](#py-cython), blas, [python](#python), lapack
Description: STochastic Engine for Pathway Simulation

---
stow[¶](#stow)
===
Homepage:
* <https://www.gnu.org/software/stow/>
Spack package:
* [stow/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/stow/package.py)
Versions: 2.2.2, 2.2.0, 2.1.3, 2.1.2, 2.1.1, 2.1.0
Build Dependencies: [perl](#perl)
Link Dependencies: [perl](#perl)
Description: GNU Stow: a symlink farm manager. GNU Stow is a symlink farm manager which takes distinct packages of software and/or data located in separate directories on the filesystem, and makes them appear to be installed in the same place.

---
strace[¶](#strace)
===
Homepage:
* <https://strace.io>
Spack package:
* [strace/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/strace/package.py)
Versions: 4.21
Description: Strace is a diagnostic, debugging and instructional userspace utility for Linux. It is used to monitor and tamper with interactions between processes and the Linux kernel, which include system calls, signal deliveries, and changes of process state.

---
stream[¶](#stream)
===
Homepage:
* <https://www.cs.virginia.edu/stream/ref.html>
Spack package:
* [stream/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/stream/package.py)
Versions: 5.10
Description: The STREAM benchmark is a simple synthetic benchmark program that measures sustainable memory bandwidth (in MB/s) and the corresponding computation rate for simple vector kernels.

---
strelka[¶](#strelka)
===
Homepage:
* <https://github.com/Illumina/strelka>
Spack package:
* [strelka/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/strelka/package.py)
Versions: 2.8.2
Build Dependencies: [zlib](#zlib), [cmake](#cmake), [boost](#boost), [python](#python), [bzip2](#bzip2)
Link Dependencies: [zlib](#zlib), [cmake](#cmake), [boost](#boost), [python](#python), [bzip2](#bzip2)
Description: Somatic and germline small variant caller for mapped sequencing data.
---
stress[¶](#stress)
===
Homepage:
* <https://people.seas.harvard.edu/~apw/stress/>
Spack package:
* [stress/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/stress/package.py)
Versions: 1.0.4
Description: stress is a deliberately simple workload generator for POSIX systems. It imposes a configurable amount of CPU, memory, I/O, and disk stress on the system. It is written in C, and is free software licensed under the GPLv2.

---
string-view-lite[¶](#string-view-lite)
===
Homepage:
* <https://github.com/martinmoene/string-view-lite>
Spack package:
* [string-view-lite/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/string-view-lite/package.py)
Versions: 1.0.0, 0.2.0, 0.1.0
Description: A single-file header-only version of a C++17-like string_view for C++98, C++11 and later

---
stringtie[¶](#stringtie)
===
Homepage:
* <https://ccb.jhu.edu/software/stringtie>
Spack package:
* [stringtie/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/stringtie/package.py)
Versions: 1.3.4a, 1.3.3b
Build Dependencies: [samtools](#samtools)
Link Dependencies: [samtools](#samtools)
Description: StringTie is a fast and highly efficient assembler of RNA-Seq alignments into potential transcripts.

---
structure[¶](#structure)
===
Homepage:
* <http://web.stanford.edu/group/pritchardlab/structure.html>
Spack package:
* [structure/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/structure/package.py)
Versions: 2.3.4
Build Dependencies: [jdk](#jdk)
Run Dependencies: [jdk](#jdk)
Description: Structure is a free software package for using multi-locus genotype data to investigate population structure.
---
strumpack[¶](#strumpack)
===
Homepage:
* <http://portal.nersc.gov/project/sparse/strumpack>
Spack package:
* [strumpack/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/strumpack/package.py)
Versions: 3.1.1, 3.1.0, 3.0.3, 3.0.2, 3.0.1, 3.0.0, 2.2.0, master
Build Dependencies: [parmetis](#parmetis), mpi, [scotch](#scotch), [metis](#metis), [cmake](#cmake), scalapack, blas, lapack
Link Dependencies: [parmetis](#parmetis), mpi, [scotch](#scotch), [metis](#metis), scalapack, blas, lapack
Description: STRUMPACK -- STRUctured Matrix PACKage - provides linear solvers for sparse matrices and for dense rank-structured matrices, i.e., matrices that exhibit some kind of low-rank property. It provides a distributed memory fully algebraic sparse solver and preconditioner. The preconditioner is mostly aimed at large sparse linear systems which result from the discretization of a partial differential equation, but is not limited to any particular type of problem. STRUMPACK also provides preconditioned GMRES and BiCGStab iterative solvers.

---
sublime-text[¶](#sublime-text)
===
Homepage:
* <http://www.sublimetext.com/>
Spack package:
* [sublime-text/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/sublime-text/package.py)
Versions: 3_build_3176, 3_build_3126, 2.0.2
Run Dependencies: [glib](#glib), [libx11](#libx11), [pcre](#pcre), [libffi](#libffi), [libxcb](#libxcb), [libxau](#libxau)
Description: Sublime Text is a sophisticated text editor for code, markup and prose.

---
subread[¶](#subread)
===
Homepage:
* <http://subread.sourceforge.net/>
Spack package:
* [subread/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/subread/package.py)
Versions: 1.6.2, 1.6.0, 1.5.2
Build Dependencies: [zlib](#zlib)
Link Dependencies: [zlib](#zlib)
Description: The Subread software package is a tool kit for processing next-gen sequencing data.
---
subversion[¶](#subversion)
===
Homepage:
* <https://subversion.apache.org/>
Spack package:
* [subversion/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/subversion/package.py)
Versions: 1.9.7, 1.9.6, 1.9.5, 1.9.3, 1.8.17, 1.8.13
Build Dependencies: [zlib](#zlib), [serf](#serf), [swig](#swig), [sqlite](#sqlite), [perl-term-readkey](#perl-term-readkey), [perl](#perl), [apr](#apr), [apr-util](#apr-util)
Link Dependencies: [zlib](#zlib), [serf](#serf), [swig](#swig), [sqlite](#sqlite), [perl-term-readkey](#perl-term-readkey), [perl](#perl), [apr](#apr), [apr-util](#apr-util)
Description: Apache Subversion - an open source version control system.

---
suite-sparse[¶](#suite-sparse)
===
Homepage:
* <http://faculty.cse.tamu.edu/davis/suitesparse.html>
Spack package:
* [suite-sparse/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/suite-sparse/package.py)
Versions: 5.3.0, 5.2.0, 5.1.0, 4.5.5, 4.5.4, 4.5.3, 4.5.1
Build Dependencies: [cuda](#cuda), [metis](#metis), [cmake](#cmake), tbb, blas, lapack
Link Dependencies: [cuda](#cuda), lapack, blas, tbb, [metis](#metis)
Description: SuiteSparse is a suite of sparse matrix algorithms

---
sumaclust[¶](#sumaclust)
===
Homepage:
* <https://git.metabarcoding.org/obitools/sumaclust>
Spack package:
* [sumaclust/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/sumaclust/package.py)
Versions: 1.0.20
Description: Sumaclust aims to cluster sequences in a way that is fast and exact at the same time.
---
sundials[¶](#sundials)
===
Homepage:
* <https://computation.llnl.gov/projects/sundials>
Spack package:
* [sundials/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/sundials/package.py)
Versions: 4.0.0-dev.2, 4.0.0-dev.1, 4.0.0-dev, 3.2.1, 3.2.0, 3.1.2, 3.1.1, 3.1.0, 3.0.0, 2.7.0, 2.6.2
Build Dependencies: [petsc](#petsc), [cuda](#cuda), mpi, [raja](#raja), [cmake](#cmake), [superlu-mt](#superlu-mt), [suite-sparse](#suite-sparse), blas, [hypre](#hypre), lapack
Link Dependencies: [petsc](#petsc), [cuda](#cuda), mpi, [raja](#raja), [superlu-mt](#superlu-mt), [suite-sparse](#suite-sparse), blas, [hypre](#hypre), lapack
Description: SUNDIALS (SUite of Nonlinear and DIfferential/ALgebraic equation Solvers)

---
superlu[¶](#superlu)
===
Homepage:
* <http://crd-legacy.lbl.gov/~xiaoye/SuperLU/#superlu>
Spack package:
* [superlu/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/superlu/package.py)
Versions: 5.2.1, 4.3
Build Dependencies: [cmake](#cmake), blas
Link Dependencies: blas
Description: SuperLU is a general purpose library for the direct solution of large, sparse, nonsymmetric systems of linear equations on high performance machines. SuperLU is designed for sequential machines.

---
superlu-dist[¶](#superlu-dist)
===
Homepage:
* <http://crd-legacy.lbl.gov/~xiaoye/SuperLU/>
Spack package:
* [superlu-dist/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/superlu-dist/package.py)
Versions: develop, 6.0.0, 5.4.0, 5.3.0, 5.2.2, 5.2.1, 5.1.3, 5.1.2, 5.1.0, 5.0.0, xsdk-0.2.0
Build Dependencies: [parmetis](#parmetis), mpi, [metis](#metis), [cmake](#cmake), blas, lapack
Link Dependencies: [parmetis](#parmetis), mpi, blas, lapack, [metis](#metis)
Description: A general purpose library for the direct solution of large, sparse, nonsymmetric systems of linear equations on high performance machines.
---
superlu-mt[¶](#superlu-mt)
===
Homepage:
* <http://crd-legacy.lbl.gov/~xiaoye/SuperLU/#superlu_mt>
Spack package:
* [superlu-mt/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/superlu-mt/package.py)
Versions: 3.1
Build Dependencies: blas
Link Dependencies: blas
Description: SuperLU is a general purpose library for the direct solution of large, sparse, nonsymmetric systems of linear equations on high performance machines. SuperLU_MT is designed for shared memory parallel machines.

---
supernova[¶](#supernova)
===
Homepage:
* <https://support.10xgenomics.com/de-novo-assembly/software/overview/latest/welcome>
Spack package:
* [supernova/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/supernova/package.py)
Versions: 2.0.1
Build Dependencies: [bcl2fastq2](#bcl2fastq2)
Link Dependencies: [bcl2fastq2](#bcl2fastq2)
Description: Supernova is a software package for de novo assembly from Chromium Linked-Reads that are made from a single whole-genome library from an individual DNA source. A key feature of Supernova is that it creates diploid assemblies, thus separately representing maternal and paternal chromosomes over very long distances. Almost all other methods instead merge homologous chromosomes into single incorrect 'consensus' sequences. Supernova is the only practical method for creating diploid assemblies of large genomes. To install this package, you will need to go to the supernova download page, register with your email address and download supernova yourself. Spack will search your current directory for the download file. Alternatively, add this file to a mirror so that Spack can find it.
For instructions on how to set up a mirror, see http://spack.readthedocs.io/en/latest/mirrors.html

---
sw4lite[¶](#sw4lite)
===
Homepage:
* <https://geodynamics.org/cig/software/sw4>
Spack package:
* [sw4lite/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/sw4lite/package.py)
Versions: develop, 1.1, 1.0
Build Dependencies: mpi, blas, lapack
Link Dependencies: mpi, blas, lapack
Description: Sw4lite is a bare bone version of SW4 intended for testing performance optimizations in a few important numerical kernels of SW4.

---
swap-assembler[¶](#swap-assembler)
===
Homepage:
* <https://sourceforge.net/projects/swapassembler/>
Spack package:
* [swap-assembler/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/swap-assembler/package.py)
Versions: 0.4
Build Dependencies: [mpich](#mpich)
Link Dependencies: [mpich](#mpich)
Description: A scalable and fully parallelized genome assembler.

---
swarm[¶](#swarm)
===
Homepage:
* <https://github.com/torognes/swarm>
Spack package:
* [swarm/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/swarm/package.py)
Versions: 2.1.13
Description: A robust and fast clustering method for amplicon-based studies.

---
swfft[¶](#swfft)
===
Homepage:
* <https://xgitlab.cels.anl.gov/hacc/SWFFT>
Spack package:
* [swfft/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/swfft/package.py)
Versions: develop, 1.0
Build Dependencies: mpi, [fftw](#fftw)
Link Dependencies: mpi, [fftw](#fftw)
Description: A stand-alone version of HACC's distributed-memory, pencil-decomposed, parallel 3D FFT.
---
swiftsim[¶](#swiftsim)
===
Homepage:
* <http://icc.dur.ac.uk/swift/>
Spack package:
* [swiftsim/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/swiftsim/package.py)
Versions: 0.7.0, 0.3.0
Build Dependencies: mpi, [m4](#m4), [metis](#metis), [autoconf](#autoconf), [automake](#automake), [libtool](#libtool), [hdf5](#hdf5)
Link Dependencies: [hdf5](#hdf5), mpi, [metis](#metis)
Description: SPH With Inter-dependent Fine-grained Tasking (SWIFT) provides astrophysicists with a state of the art framework to perform particle based simulations.

---
swig[¶](#swig)
===
Homepage:
* <http://www.swig.org>
Spack package:
* [swig/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/swig/package.py)
Versions: 3.0.12, 3.0.11, 3.0.10, 3.0.8, 3.0.2, 2.0.12, 2.0.2, 1.3.40
Build Dependencies: [pcre](#pcre)
Link Dependencies: [pcre](#pcre)
Description: SWIG is an interface compiler that connects programs written in C and C++ with scripting languages such as Perl, Python, Ruby, and Tcl. It works by taking the declarations found in C/C++ header files and using them to generate the wrapper code that scripting languages need to access the underlying C/C++ code. In addition, SWIG provides a variety of customization features that let you tailor the wrapping process to suit your application.

---
symengine[¶](#symengine)
===
Homepage:
* <https://github.com/symengine/symengine>
Spack package:
* [symengine/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/symengine/package.py)
Versions: develop, 0.3.0, 0.2.0, 0.1.0
Build Dependencies: [gmp](#gmp), [piranha](#piranha), [mpc](#mpc), [flint](#flint), [cmake](#cmake), [boost](#boost), [mpfr](#mpfr), [llvm](#llvm)
Link Dependencies: [gmp](#gmp), [piranha](#piranha), [mpc](#mpc), [flint](#flint), [boost](#boost), [mpfr](#mpfr), [llvm](#llvm)
Description: SymEngine is a fast symbolic manipulation library, written in C++.
---
sympol[¶](#sympol)
===
Homepage:
* <http://www.math.uni-rostock.de/~rehn/software/sympol.html>
Spack package:
* [sympol/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/sympol/package.py)
Versions: 0.1.8
Build Dependencies: [gmp](#gmp), [cmake](#cmake), [boost](#boost), [bliss](#bliss), [lrslib](#lrslib)
Link Dependencies: [gmp](#gmp), [lrslib](#lrslib), [boost](#boost), [bliss](#bliss)
Description: SymPol is a C++ tool to work with symmetric polyhedra

---
sz[¶](#sz)
===
Homepage:
* <https://collab.cels.anl.gov/display/ESR/SZ>
Spack package:
* [sz/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/sz/package.py)
Versions: develop, 2.0.2.0, 1.4.13.5, 1.4.13.4, 1.4.13.3, 1.4.13.2, 1.4.13.0, 1.4.12.3, 1.4.12.1, 1.4.11.1, 1.4.11.0, 1.4.10.0, 1.4.9.2
Description: Error-bounded Lossy Compressor for HPC Data.

---
tabix[¶](#tabix)
===
Homepage:
* <https://github.com/samtools/tabix>
Spack package:
* [tabix/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/tabix/package.py)
Versions: 2013-12-16
Build Dependencies: [perl](#perl), [python](#python)
Run Dependencies: [perl](#perl), [python](#python)
Description: Generic indexer for TAB-delimited genome position files

---
talass[¶](#talass)
===
Homepage:
* <http://www.cedmav.org/research/project/16-talass.html>
Spack package:
* [talass/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/talass/package.py)
Versions: 2018-09-21
Build Dependencies: [cmake](#cmake)
Description: TALASS: Topological Analysis of Large-Scale Simulations. This package compiles the talass tool chain that implements various topological algorithms to analyze large scale data. The package is organized hierarchically: FileFormat < Statistics < StreamingTopology, and any of the subsets can be built stand-alone.
---
talloc[¶](#talloc)
===
Homepage:
* <https://talloc.samba.org>
Spack package:
* [talloc/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/talloc/package.py)
Versions: 2.1.9
Description: Talloc provides a hierarchical, reference counted memory pool system with destructors. It is the core memory allocator used in Samba.

---
tantan[¶](#tantan)
===
Homepage:
* <http://cbrc3.cbrc.jp/~martin/tantan>
Spack package:
* [tantan/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/tantan/package.py)
Versions: 13
Description: tantan is a tool to mask simple regions (low complexity and short-period tandem repeats) in DNA, RNA, and protein sequences.

---
tar[¶](#tar)
===
Homepage:
* <https://www.gnu.org/software/tar/>
Spack package:
* [tar/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/tar/package.py)
Versions: 1.30, 1.29, 1.28
Description: GNU Tar provides the ability to create tar archives, as well as various other kinds of manipulation.

---
targetp[¶](#targetp)
===
Homepage:
* <http://www.cbs.dtu.dk/services/TargetP/>
Spack package:
* [targetp/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/targetp/package.py)
Versions: 1.1b
Build Dependencies: [chlorop](#chlorop), [signalp](#signalp)
Link Dependencies: [chlorop](#chlorop), [signalp](#signalp)
Run Dependencies: [perl](#perl), awk
Description: TargetP predicts the subcellular location of eukaryotic protein sequences. Note: A manual download is required for TargetP. Spack will search your current directory for the download file. Alternatively, add this file to a mirror so that Spack can find it.
For instructions on how to set up a mirror, see http://spack.readthedocs.io/en/latest/mirrors.html

---
task[¶](#task)
===
Homepage:
* <http://www.taskwarrior.org>
Spack package:
* [task/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/task/package.py)
Versions: 2.5.1, 2.4.4
Build Dependencies: [cmake](#cmake), [libuuid](#libuuid), [gnutls](#gnutls)
Link Dependencies: [libuuid](#libuuid), [gnutls](#gnutls)
Description: Feature-rich console based todo list manager

---
taskd[¶](#taskd)
===
Homepage:
* <http://www.taskwarrior.org>
Spack package:
* [taskd/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/taskd/package.py)
Versions: 1.1.0
Build Dependencies: [cmake](#cmake), [libuuid](#libuuid), [gnutls](#gnutls)
Link Dependencies: [libuuid](#libuuid), [gnutls](#gnutls)
Description: TaskWarrior task synchronization daemon

---
tasmanian[¶](#tasmanian)
===
Homepage:
* <http://tasmanian.ornl.gov>
Spack package:
* [tasmanian/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/tasmanian/package.py)
Versions: develop, 6.0, 5.1, 5.0
Build Dependencies: [cuda](#cuda), mpi, [magma](#magma), [cmake](#cmake), [py-numpy](#py-numpy), blas, [python](#python)
Link Dependencies: [python](#python)
Run Dependencies: [cuda](#cuda), mpi, [magma](#magma), [py-numpy](#py-numpy), blas, [python](#python)
Description: The Toolkit for Adaptive Stochastic Modeling and Non-Intrusive ApproximatioN is a robust library for high dimensional integration and interpolation as well as parameter calibration.
---
tassel[¶](#tassel)
===
Homepage:
* <http://www.maizegenetics.net/tassel>
Spack package:
* [tassel/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/tassel/package.py)
Versions: 2017-07-22
Build Dependencies: java, [perl](#perl)
Run Dependencies: java, [perl](#perl)
Description: TASSEL is a software package to evaluate traits associations, evolutionary patterns, and linkage disequilibrium.

---
tau[¶](#tau)
===
Homepage:
* <http://www.cs.uoregon.edu/research/tau>
Spack package:
* [tau/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/tau/package.py)
Versions: 2.27.1, 2.27, 2.26.3, 2.25, 2.24.1, 2.24, 2.23.1
Build Dependencies: [pdt](#pdt), [scorep](#scorep), mpi, [binutils](#binutils)
Link Dependencies: [pdt](#pdt), [scorep](#scorep), mpi, [binutils](#binutils)
Description: A portable profiling and tracing toolkit for performance analysis of parallel programs written in Fortran, C, C++, UPC, Java, Python.

---
tcl[¶](#tcl)
===
Homepage:
* <http://www.tcl.tk>
Spack package:
* [tcl/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/tcl/package.py)
Versions: 8.6.8, 8.6.6, 8.6.5, 8.6.4, 8.6.3, 8.5.19
Build Dependencies: [zlib](#zlib)
Link Dependencies: [zlib](#zlib)
Description: Tcl (Tool Command Language) is a very powerful but easy to learn dynamic programming language, suitable for a very wide range of uses, including web and desktop applications, networking, administration, testing and many more. Open source and business-friendly, Tcl is a mature yet evolving language that is truly cross platform, easily deployed and highly extensible.
--- tcl-itcl[¶](#tcl-itcl) === Homepage: * <https://sourceforge.net/projects/incrtcl/Spack package: * [tcl-itcl/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/tcl-itcl/package.py) Versions: 4.0.4 Build Dependencies: [tcl](#tcl) Link Dependencies: [tcl](#tcl) Description: [incr Tcl] is the most widely used O-O system for Tcl. The name is a play on C++, and [incr Tcl] provides a similar object model, including multiple inheritance and public and private classes and variables. --- tcl-tcllib[¶](#tcl-tcllib) === Homepage: * <http://www.tcl.tk/software/tcllibSpack package: * [tcl-tcllib/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/tcl-tcllib/package.py) Versions: 1.19, 1.18, 1.17, 1.16, 1.15, 1.14 Build Dependencies: [tcl](#tcl) Link Dependencies: [tcl](#tcl) Description: Tcllib is a collection of utility modules for Tcl. These modules provide a wide variety of functionality, from implementations of standard data structures to implementations of common networking protocols. The intent is to collect commonly used functions into a single library, which users can rely on to be available and stable. --- tcl-tclxml[¶](#tcl-tclxml) === Homepage: * <http://tclxml.sourceforge.net/tclxml.htmlSpack package: * [tcl-tclxml/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/tcl-tclxml/package.py) Versions: 3.2, 3.1 Build Dependencies: [libxslt](#libxslt), [tcl](#tcl), [tcl-tcllib](#tcl-tcllib), [libxml2](#libxml2) Link Dependencies: [libxslt](#libxslt), [tcl](#tcl), [tcl-tcllib](#tcl-tcllib), [libxml2](#libxml2) Description: TclXML is an API for parsing XML documents using the Tcl scripting language. It is also a package including a DOM implementation (TclDOM) and XSL Transformations (TclXSLT). These allow Tcl scripts to read, manipulate and write XML documents. 
--- tclap[¶](#tclap) === Homepage: * <http://tclap.sourceforge.netSpack package: * [tclap/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/tclap/package.py) Versions: 1.2.2, 1.2.1 Description: Templatized C++ Command Line Parser --- tcoffee[¶](#tcoffee) === Homepage: * <http://www.tcoffee.org/Spack package: * [tcoffee/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/tcoffee/package.py) Versions: 2017-08-17 Build Dependencies: [clustalw](#clustalw), [muscle](#muscle), [blast-plus](#blast-plus), [viennarna](#viennarna), [pcma](#pcma), [mafft](#mafft), [perl](#perl), [poamsa](#poamsa), [dialign-tx](#dialign-tx), [probconsrna](#probconsrna), [tmalign](#tmalign) Link Dependencies: [clustalw](#clustalw), [muscle](#muscle), [blast-plus](#blast-plus), [viennarna](#viennarna), [pcma](#pcma), [mafft](#mafft), [poamsa](#poamsa), [dialign-tx](#dialign-tx), [probconsrna](#probconsrna), [tmalign](#tmalign) Run Dependencies: [perl](#perl) Description: T-Coffee is a multiple sequence alignment program. --- tcptrace[¶](#tcptrace) === Homepage: * <http://www.tcptrace.org/Spack package: * [tcptrace/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/tcptrace/package.py) Versions: 6.6.7 Build Dependencies: [flex](#flex), [libpcap](#libpcap), [bison](#bison) Link Dependencies: [libpcap](#libpcap) Description: tcptrace is a tool written by <NAME> at Ohio University for analysis of TCP dump files. It can take as input the files produced by several popular packet-capture programs, including tcpdump, snoop, etherpeek, HP Net Metrix, and WinDump. 
--- tcsh[¶](#tcsh) === Homepage: * <http://www.tcsh.org/Spack package: * [tcsh/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/tcsh/package.py) Versions: 6.20.00 Build Dependencies: [ncurses](#ncurses) Link Dependencies: [ncurses](#ncurses) Description: Tcsh is an enhanced but completely compatible version of csh, the C shell. Tcsh is a command language interpreter which can be used both as an interactive login shell and as a shell script command processor. Tcsh includes a command line editor, programmable word completion, spelling correction, a history mechanism, job control and a C language like syntax. --- tealeaf[¶](#tealeaf) === Homepage: * <http://uk-mac.github.io/TeaLeaf/Spack package: * [tealeaf/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/tealeaf/package.py) Versions: 1.0 Build Dependencies: mpi Link Dependencies: mpi Description: Proxy Application. TeaLeaf is a mini-app that solves the linear heat conduction equation on a spatially decomposed regular grid using a 5 point stencil with implicit solvers. --- templight[¶](#templight) === Homepage: * <https://github.com/mikael-s-persson/templightSpack package: * [templight/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/templight/package.py) Versions: develop, 2018.07.20 Build Dependencies: [py-lit](#py-lit), [cmake](#cmake), [python](#python) Link Dependencies: [python](#python) Run Dependencies: [py-lit](#py-lit) Description: Templight is a Clang-based tool to profile the time and memory consumption of template instantiations and to perform interactive debugging sessions to gain introspection into the template instantiation process. 
--- templight-tools[¶](#templight-tools) === Homepage: * <https://github.com/mikael-s-persson/templight-toolsSpack package: * [templight-tools/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/templight-tools/package.py) Versions: develop Build Dependencies: [cmake](#cmake), [boost](#boost) Link Dependencies: [boost](#boost) Description: Supporting tools for the Templight Profiler --- tetgen[¶](#tetgen) === Homepage: * <http://wias-berlin.de/software/tetgen/Spack package: * [tetgen/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/tetgen/package.py) Versions: 1.5.0, 1.4.3 Description: TetGen is a program and library that can be used to generate tetrahedral meshes for given 3D polyhedral domains. TetGen generates exact constrained Delaunay tetrahedralizations, boundary conforming Delaunay meshes, and Voronoi partitions. --- tethex[¶](#tethex) === Homepage: * <https://github.com/martemyev/tethexSpack package: * [tethex/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/tethex/package.py) Versions: develop, 0.0.7 Build Dependencies: [cmake](#cmake) Description: Tethex is designed to convert triangular (in 2D) or tetrahedral (in 3D) Gmsh's mesh to quadrilateral or hexahedral one respectively. These meshes can be used in software packages working with hexahedrals only - for example, deal.II. --- texinfo[¶](#texinfo) === Homepage: * <https://www.gnu.org/software/texinfo/Spack package: * [texinfo/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/texinfo/package.py) Versions: 6.5, 6.3, 6.0, 5.2, 5.1, 5.0 Build Dependencies: [perl](#perl) Link Dependencies: [perl](#perl) Description: Texinfo is the official documentation format of the GNU project. It was invented by <NAME> and <NAME> many years ago, loosely based on Brian Reid's Scribe and other formatting languages of the time. It is used by many non-GNU projects as well. 
--- texlive[¶](#texlive) === Homepage: * <http://www.tug.org/texliveSpack package: * [texlive/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/texlive/package.py) Versions: live Build Dependencies: [perl](#perl) Description: TeX Live is a free software distribution for the TeX typesetting system. Heads up, it's not a reproducible installation. --- the-platinum-searcher[¶](#the-platinum-searcher) === Homepage: * <https://github.com/monochromegane/the_platinum_searcherSpack package: * [the-platinum-searcher/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/the-platinum-searcher/package.py) Versions: head Build Dependencies: [go](#go) Link Dependencies: [go](#go) Description: Fast parallel recursive grep alternative --- the-silver-searcher[¶](#the-silver-searcher) === Homepage: * <http://geoff.greer.fm/ag/Spack package: * [the-silver-searcher/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/the-silver-searcher/package.py) Versions: 2.1.0, 0.32.0, 0.30.0 Build Dependencies: [zlib](#zlib), [pcre](#pcre), pkgconfig, [xz](#xz) Link Dependencies: [zlib](#zlib), [pcre](#pcre), [xz](#xz) Description: Fast recursive grep alternative --- thornado-mini[¶](#thornado-mini) === Homepage: * <https://sites.google.com/lbl.gov/exastar/homeSpack package: * [thornado-mini/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/thornado-mini/package.py) Versions: 1.0 Build Dependencies: [hdf5](#hdf5), mpi, lapack Link Dependencies: [hdf5](#hdf5), mpi, lapack Description: Code to solve the equation of radiative transfer in the multi-group two- moment approximation --- thrift[¶](#thrift) === Homepage: * <http://thrift.apache.orgSpack package: * [thrift/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/thrift/package.py) Versions: 0.11.0, 0.10.0, 0.9.3, 0.9.2 Build Dependencies: [zlib](#zlib), 
[libevent](#libevent), [openssl](#openssl), [bison](#bison), java, [autoconf](#autoconf), [automake](#automake), [libtool](#libtool), [flex](#flex), [boost](#boost), [python](#python) Link Dependencies: [zlib](#zlib), [libevent](#libevent), [openssl](#openssl), java, [boost](#boost), [python](#python) Description: Software framework for scalable cross-language services development. Thrift combines a software stack with a code generation engine to build services that work efficiently and seamlessly between C++, Java, Python, PHP, Ruby, Erlang, Perl, Haskell, C#, Cocoa, JavaScript, Node.js, Smalltalk, OCaml and Delphi and other languages. --- thrust[¶](#thrust) === Homepage: * <https://thrust.github.ioSpack package: * [thrust/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/thrust/package.py) Versions: 1.8.2 Description: Thrust is a parallel algorithms library which resembles the C++ Standard Template Library (STL). --- tig[¶](#tig) === Homepage: * <https://jonas.github.io/tig/Spack package: * [tig/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/tig/package.py) Versions: 2.2.2 Build Dependencies: [ncurses](#ncurses) Link Dependencies: [ncurses](#ncurses) Description: Text-mode interface for git --- tinyxml[¶](#tinyxml) === Homepage: * <http://grinninglizard.com/tinyxml/Spack package: * [tinyxml/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/tinyxml/package.py) Versions: 2.6.2 Build Dependencies: [cmake](#cmake) Description: Simple, small, efficient, C++ XML parser --- tinyxml2[¶](#tinyxml2) === Homepage: * <http://grinninglizard.com/tinyxml2/Spack package: * [tinyxml2/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/tinyxml2/package.py) Versions: 4.0.1, 4.0.0, 3.0.0, 2.2.0, 2.1.0, 2.0.2 Build Dependencies: [cmake](#cmake) Description: Simple, small, efficient, C++ XML parser --- tioga[¶](#tioga) === 
Homepage: * <https://github.com/jsitaraman/tiogaSpack package: * [tioga/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/tioga/package.py) Versions: develop Build Dependencies: [cmake](#cmake), mpi Link Dependencies: mpi Description: Topology Independent Overset Grid Assembly (TIOGA) --- tk[¶](#tk) === Homepage: * <http://www.tcl.tkSpack package: * [tk/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/tk/package.py) Versions: 8.6.8, 8.6.6, 8.6.5, 8.6.3 Build Dependencies: [tcl](#tcl), [libx11](#libx11) Link Dependencies: [tcl](#tcl), [libx11](#libx11) Description: Tk is a graphical user interface toolkit that takes developing desktop applications to a higher level than conventional approaches. Tk is the standard GUI not only for Tcl, but for many other dynamic languages, and can produce rich, native applications that run unchanged across Windows, Mac OS X, Linux and more. --- tldd[¶](#tldd) === Homepage: * <https://gitlab.com/miscripts/tlddSpack package: * [tldd/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/tldd/package.py) Versions: 2018-10-05, master Build Dependencies: [pstreams](#pstreams) Link Dependencies: [pstreams](#pstreams) Description: A program similar to ldd(1) but showing the output as a tree. --- tmalign[¶](#tmalign) === Homepage: * <http://zhanglab.ccmb.med.umich.edu/TM-alignSpack package: * [tmalign/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/tmalign/package.py) Versions: 2016-05-25 Description: TM-align is an algorithm for sequence-order independent protein structure comparisons. 
--- tmhmm[¶](#tmhmm) === Homepage: * <http://www.cbs.dtu.dk/cgi-bin/nph-sw_request?tmhmmSpack package: * [tmhmm/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/tmhmm/package.py) Versions: 2.0c Run Dependencies: [perl](#perl) Description: Transmembrane helices in proteins Note: A manual download is required for TMHMM. Spack will search your current directory for the download file. Alternatively, add this file to a mirror so that Spack can find it. For instructions on how to set up a mirror, see http://spack.readthedocs.io/en/latest/mirrors.html --- tmux[¶](#tmux) === Homepage: * <http://tmux.github.ioSpack package: * [tmux/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/tmux/package.py) Versions: 2.7, 2.6, 2.5, 2.4, 2.3, 2.2, 2.1, 1.9a Build Dependencies: [libevent](#libevent), [ncurses](#ncurses) Link Dependencies: [libevent](#libevent), [ncurses](#ncurses) Description: Tmux is a terminal multiplexer. What is a terminal multiplexer? It lets you switch easily between several programs in one terminal, detach them (they keep running in the background) and reattach them to a different terminal. And do a lot more. 
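Several entries in this list (TMHMM above, and TRF and Turbomole further down) note that a manually downloaded archive must be placed where Spack can find it, ideally in a mirror. A minimal sketch of that workflow, assuming Spack is on the PATH; the directory layout and the archive name are illustrative, so check the package recipe for the exact file Spack expects:

```shell
# Sketch only: the archive name below is an assumption, not the exact
# file Spack expects -- consult the package recipe first.
mkdir -p "$HOME/spack-mirror/tmhmm"
cp tmhmm-2.0c.Linux.tar.gz "$HOME/spack-mirror/tmhmm/" 2>/dev/null || true
# Register the directory as a local mirror (skipped when spack is absent):
command -v spack >/dev/null 2>&1 && spack mirror add local "file://$HOME/spack-mirror" || true
```

Once the mirror is registered, an install of the package should pick the archive up from the mirror instead of attempting a download.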
--- tmuxinator[¶](#tmuxinator) === Homepage: * <https://github.com/tmuxinator/tmuxinatorSpack package: * [tmuxinator/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/tmuxinator/package.py) Versions: 0.6.11 Build Dependencies: [ruby](#ruby) Link Dependencies: [ruby](#ruby) Description: A session configuration creator and manager for tmux --- tophat[¶](#tophat) === Homepage: * <http://ccb.jhu.edu/software/tophat/index.shtmlSpack package: * [tophat/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/tophat/package.py) Versions: 2.1.2, 2.1.1 Build Dependencies: [autoconf](#autoconf), [boost](#boost), [libtool](#libtool), [m4](#m4), [automake](#automake) Link Dependencies: [boost](#boost) Run Dependencies: [bowtie2](#bowtie2) Description: Spliced read mapper for RNA-Seq. --- tppred[¶](#tppred) === Homepage: * <https://tppred2.biocomp.unibo.it/tppred2/default/softwareSpack package: * [tppred/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/tppred/package.py) Versions: 2.0 Build Dependencies: [emboss](#emboss) Link Dependencies: [emboss](#emboss) Run Dependencies: [python](#python), [py-scikit-learn](#py-scikit-learn) Description: TPPRED is a software package for the prediction of mitochondrial targeting peptides from protein primary sequence. 
--- tracer[¶](#tracer) === Homepage: * <https://tracer-codes.readthedocs.ioSpack package: * [tracer/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/tracer/package.py) Versions: develop Build Dependencies: [codes](#codes), [otf2](#otf2), mpi Link Dependencies: [codes](#codes), [otf2](#otf2), mpi Description: Trace Replay and Network Simulation Framework --- transabyss[¶](#transabyss) === Homepage: * <http://www.bcgsc.ca/platform/bioinfo/software/trans-abyssSpack package: * [transabyss/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/transabyss/package.py) Versions: 1.5.5 Build Dependencies: [abyss](#abyss), [python](#python), [py-igraph](#py-igraph), [blat](#blat) Link Dependencies: [abyss](#abyss), [blat](#blat) Run Dependencies: [python](#python), [py-igraph](#py-igraph) Description: De novo assembly of RNAseq data using ABySS --- transdecoder[¶](#transdecoder) === Homepage: * <http://transdecoder.github.io/Spack package: * [transdecoder/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/transdecoder/package.py) Versions: 3.0.1 Build Dependencies: [perl](#perl) Run Dependencies: [perl](#perl), [perl-uri-escape](#perl-uri-escape) Description: TransDecoder identifies candidate coding regions within transcript sequences, such as those generated by de novo RNA-Seq transcript assembly using Trinity, or constructed based on RNA-Seq alignments to the genome using Tophat and Cufflinks. 
--- transposome[¶](#transposome) === Homepage: * <https://sestaton.github.io/Transposome/Spack package: * [transposome/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/transposome/package.py) Versions: 0.11.2 Build Dependencies: [perl](#perl), [blast-plus](#blast-plus) Link Dependencies: [perl](#perl), [blast-plus](#blast-plus) Run Dependencies: [perl](#perl) Description: A toolkit for annotation of transposable element families from unassembled sequence reads. --- transset[¶](#transset) === Homepage: * <http://cgit.freedesktop.org/xorg/app/transsetSpack package: * [transset/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/transset/package.py) Versions: 1.0.1 Build Dependencies: [xproto](#xproto), pkgconfig, [libx11](#libx11), [util-macros](#util-macros) Link Dependencies: [libx11](#libx11) Description: transset is a utility for setting the opacity property. --- trapproto[¶](#trapproto) === Homepage: * <https://cgit.freedesktop.org/xorg/proto/trapprotoSpack package: * [trapproto/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/trapproto/package.py) Versions: 3.4.3 Description: X.org TrapProto protocol headers. --- tree[¶](#tree) === Homepage: * <http://mama.indstate.edu/users/ice/tree/Spack package: * [tree/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/tree/package.py) Versions: 1.7.0 Description: Tree is a recursive directory listing command that produces a depth indented listing of files, which is colorized ala dircolors if the LS_COLORS environment variable is set and output is to tty. Tree has been ported and reported to work under the following operating systems: Linux, FreeBSD, OS X, Solaris, HP/UX, Cygwin, HP Nonstop and OS/2. 
--- treesub[¶](#treesub) === Homepage: * <https://github.com/tamuri/treesubSpack package: * [treesub/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/treesub/package.py) Versions: 0.2, 0.1 Build Dependencies: [ant](#ant) Run Dependencies: [raxml](#raxml), [jdk](#jdk), [paml](#paml), [figtree](#figtree) Description: A small program (which glues together other programs) that allows a user to input a codon alignment in FASTA format and produce an annotated phylogenetic tree showing which substitutions occurred on a given branch. Originally written for colleagues at the MRC NIMR. --- trf[¶](#trf) === Homepage: * <https://tandem.bu.edu/trf/trf.htmlSpack package: * [trf/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/trf/package.py) Versions: 4.09 Description: Tandem Repeats Finder is a program to locate and display tandem repeats in DNA sequences. Note: A manual download is required for TRF. Spack will search your current directory for the download file. Alternatively, add this file to a mirror so that Spack can find it. For instructions on how to set up a mirror, see http://spack.readthedocs.io/en/latest/mirrors.html --- triangle[¶](#triangle) === Homepage: * <http://www.cs.cmu.edu/~quake/triangle.htmlSpack package: * [triangle/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/triangle/package.py) Versions: 1.6 Description: Triangle is a two-dimensional mesh generator and Delaunay triangulator. Triangle generates exact Delaunay triangulations, constrained Delaunay triangulations, conforming Delaunay triangulations, Voronoi diagrams, and high-quality triangular meshes. 
--- trilinos[¶](#trilinos) === Homepage: * <https://trilinos.org/Spack package: * [trilinos/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/trilinos/package.py) Versions: develop, 12.12.1, 12.10.1, 12.8.1, 12.6.4, 12.6.3, 12.6.2, 12.6.1, 12.4.2, 12.2.1, 12.0.1, 11.14.3, 11.14.2, 11.14.1, xsdk-0.2.0, master Build Dependencies: [mumps](#mumps), scalapack, lapack, [zlib](#zlib), mpi, [swig](#swig), [netcdf](#netcdf), [superlu-dist](#superlu-dist), [py-numpy](#py-numpy), [boost](#boost), [superlu](#superlu), [parmetis](#parmetis), [hdf5](#hdf5), [cmake](#cmake), [suite-sparse](#suite-sparse), [parallel-netcdf](#parallel-netcdf), [cgns](#cgns), [python](#python), [metis](#metis), blas, [hypre](#hypre), [glm](#glm), [matio](#matio) Link Dependencies: [superlu](#superlu), [zlib](#zlib), [hdf5](#hdf5), [mumps](#mumps), scalapack, [suite-sparse](#suite-sparse), [parallel-netcdf](#parallel-netcdf), lapack, [parmetis](#parmetis), [python](#python), mpi, [swig](#swig), [netcdf](#netcdf), [superlu-dist](#superlu-dist), [metis](#metis), [cgns](#cgns), [boost](#boost), blas, [hypre](#hypre), [glm](#glm), [matio](#matio) Run Dependencies: [py-numpy](#py-numpy) Description: The Trilinos Project is an effort to develop algorithms and enabling technologies within an object-oriented software framework for the solution of large-scale, complex multi-physics engineering and scientific problems. A unique design feature of Trilinos is its focus on packages. 
--- trimal[¶](#trimal) === Homepage: * <https://github.com/scapella/trimalSpack package: * [trimal/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/trimal/package.py) Versions: 1.4.1 Description: A tool for automated alignment trimming in large-scale phylogenetic analyses --- trimgalore[¶](#trimgalore) === Homepage: * <https://github.com/FelixKrueger/TrimGaloreSpack package: * [trimgalore/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/trimgalore/package.py) Versions: 0.4.5, 0.4.4 Build Dependencies: [perl](#perl), [py-cutadapt](#py-cutadapt), [fastqc](#fastqc) Link Dependencies: [fastqc](#fastqc) Run Dependencies: [perl](#perl), [py-cutadapt](#py-cutadapt) Description: Trim Galore! is a wrapper around Cutadapt and FastQC to consistently apply adapter and quality trimming to FastQ files, with extra functionality for RRBS data. --- trimmomatic[¶](#trimmomatic) === Homepage: * <http://www.usadellab.org/cms/?page=trimmomaticSpack package: * [trimmomatic/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/trimmomatic/package.py) Versions: 0.36, 0.33 Run Dependencies: java Description: A flexible read trimming tool for Illumina NGS data. 
--- trinity[¶](#trinity) === Homepage: * <http://trinityrnaseq.github.io/Spack package: * [trinity/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/trinity/package.py) Versions: 2.6.6 Build Dependencies: java, [salmon](#salmon), [perl](#perl), [autoconf](#autoconf), [automake](#automake), [libtool](#libtool), [bowtie2](#bowtie2), [jellyfish](#jellyfish) Link Dependencies: [bowtie2](#bowtie2), [salmon](#salmon), [jellyfish](#jellyfish) Run Dependencies: [kallisto](#kallisto), [py-numpy](#py-numpy), [r-glimma](#r-glimma), [samtools](#samtools), [bowtie](#bowtie), [r-goseq](#r-goseq), [r-tidyverse](#r-tidyverse), [fastqc](#fastqc), [r-biobase](#r-biobase), [r-rots](#r-rots), [r-ape](#r-ape), [r-qvalue](#r-qvalue), [r-goplot](#r-goplot), [rsem](#rsem), [blast-plus](#blast-plus), [express](#express), [r-argparse](#r-argparse), [perl-dbfile](#perl-dbfile), [perl](#perl), [r-fastcluster](#r-fastcluster), [r-deseq2](#r-deseq2), [r](#r), java, [r-sm](#r-sm), [r-edger](#r-edger), [r-ctc](#r-ctc), [r-gplots](#r-gplots), [perl-uri-escape](#perl-uri-escape) Description: Trinity, developed at the Broad Institute and the Hebrew University of Jerusalem, represents a novel method for the efficient and robust de novo reconstruction of transcriptomes from RNA-seq data. Trinity combines three independent software modules: Inchworm, Chrysalis, and Butterfly, applied sequentially to process large volumes of RNA-seq reads. Trinity partitions the sequence data into many individual de Bruijn graphs, each representing the transcriptional complexity at a given gene or locus, and then processes each graph independently to extract full-length splicing isoforms and to tease apart transcripts derived from paralogous genes. 
--- trinotate[¶](#trinotate) === Homepage: * <https://trinotate.github.io/Spack package: * [trinotate/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/trinotate/package.py) Versions: 3.1.1 Run Dependencies: [transdecoder](#transdecoder), [perl-cgi](#perl-cgi), [perl-dbd-mysql](#perl-dbd-mysql), [trinity](#trinity), [sqlite](#sqlite), [lighttpd](#lighttpd), [hmmer](#hmmer), [ncbi-rmblastn](#ncbi-rmblastn), [perl](#perl), [perl-dbi](#perl-dbi) Description: Trinotate is a comprehensive annotation suite designed for automatic functional annotation of transcriptomes, particularly de novo assembled transcriptomes, from model or non-model organisms --- trnascan-se[¶](#trnascan-se) === Homepage: * <http://lowelab.ucsc.edu/tRNAscan-SE/Spack package: * [trnascan-se/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/trnascan-se/package.py) Versions: 2.0.0 Description: Searching for tRNA genes in genomic sequence --- turbine[¶](#turbine) === Homepage: * <http://swift-lang.org/Swift-TSpack package: * [turbine/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/turbine/package.py) Versions: 1.2.3, 1.2.1, 1.1.0 Build Dependencies: [r](#r), [tcl](#tcl), [zsh](#zsh), [swig](#swig), [adlbx](#adlbx), [python](#python) Link Dependencies: [r](#r), [adlbx](#adlbx), [python](#python) Run Dependencies: [tcl](#tcl), [zsh](#zsh) Description: Turbine: The Swift/T runtime --- turbomole[¶](#turbomole) === Homepage: * <http://www.turbomole-gmbh.com/Spack package: * [turbomole/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/turbomole/package.py) Versions: 7.0.2 Description: TURBOMOLE: Program Package for ab initio Electronic Structure Calculations. Note: Turbomole requires purchase of a license to download. Go to the Turbomole home page, http://www.turbomole-gmbh.com, for details. Spack will search the current directory for this file. 
It is probably best to add this file to a Spack mirror so that it can be found from anywhere. For information on setting up a Spack mirror see http://spack.readthedocs.io/en/latest/mirrors.html --- tut[¶](#tut) === Homepage: * <http://mrzechonek.github.io/tut-framework/Spack package: * [tut/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/tut/package.py) Versions: 2016-12-19 Build Dependencies: [python](#python) Description: TUT is a small and portable unit test framework for C++. --- twm[¶](#twm) === Homepage: * <http://cgit.freedesktop.org/xorg/app/twmSpack package: * [twm/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/twm/package.py) Versions: 1.0.9 Build Dependencies: [libice](#libice), [libxt](#libxt), pkgconfig, [libsm](#libsm), [libxext](#libxext), [xproto](#xproto), [util-macros](#util-macros), [libx11](#libx11), [bison](#bison), [libxmu](#libxmu), [flex](#flex) Link Dependencies: [libice](#libice), [libxt](#libxt), [libsm](#libsm), [libx11](#libx11), [libxext](#libxext), [libxmu](#libxmu) Description: twm is a window manager for the X Window System. It provides titlebars, shaped windows, several forms of icon management, user-defined macro functions, click-to-type and pointer-driven keyboard focus, and user- specified key and pointer button bindings. --- tycho2[¶](#tycho2) === Homepage: * <https://github.com/lanl/tycho2Spack package: * [tycho2/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/tycho2/package.py) Versions: develop Build Dependencies: mpi Link Dependencies: mpi Description: A neutral particle transport mini-app to study performance of sweeps on unstructured, 3D tetrahedral meshes. 
--- typhon[¶](#typhon) === Homepage: * <https://github.com/UK-MAC/TyphonSpack package: * [typhon/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/typhon/package.py) Versions: develop, 3.0.2, 3.0.1, 3.0 Build Dependencies: [cmake](#cmake), mpi Link Dependencies: mpi Description: Typhon is a distributed communications library for unstructured mesh applications. --- typhonio[¶](#typhonio) === Homepage: * <http://uk-mac.github.io/typhonio/Spack package: * [typhonio/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/typhonio/package.py) Versions: develop, 1.6_CMake Build Dependencies: [cmake](#cmake), mpi, [hdf5](#hdf5) Link Dependencies: mpi, [hdf5](#hdf5) Description: TyphonIO is a library of routines that perform input/output (I/O) of scientific data within application codes --- uberftp[¶](#uberftp) === Homepage: * <http://toolkit.globus.org/grid_software/data/uberftp.phpSpack package: * [uberftp/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/uberftp/package.py) Versions: 2_8, 2_7, 2_6 Build Dependencies: [globus-toolkit](#globus-toolkit) Link Dependencies: [globus-toolkit](#globus-toolkit) Description: UberFTP is an interactive (text-based) client for GridFTP --- ucx[¶](#ucx) === Homepage: * <http://www.openucx.orgSpack package: * [ucx/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/ucx/package.py) Versions: 1.3.1, 1.3.0, 1.2.2, 1.2.1 Build Dependencies: [rdma-core](#rdma-core), [numactl](#numactl) Link Dependencies: [rdma-core](#rdma-core), [numactl](#numactl) Description: a communication library implementing high-performance messaging for MPI/PGAS frameworks --- udunits2[¶](#udunits2) === Homepage: * <http://www.unidata.ucar.edu/software/udunitsSpack package: * [udunits2/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/udunits2/package.py) Versions: 2.2.24, 2.2.23, 
2.2.21 Build Dependencies: [expat](#expat) Link Dependencies: [expat](#expat) Description: Automated units conversion --- ufo-core[¶](#ufo-core) === Homepage: * <https://ufo.kit.eduSpack package: * [ufo-core/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/ufo-core/package.py) Versions: 0.14.0 Build Dependencies: [cmake](#cmake), [glib](#glib), [json-glib](#json-glib) Link Dependencies: [glib](#glib), [json-glib](#json-glib) Description: The UFO data processing framework is a C library suited to build general purpose streams data processing on heterogeneous architectures such as CPUs, GPUs or clusters. This package contains the run-time system and development files. --- ufo-filters[¶](#ufo-filters) === Homepage: * <https://ufo.kit.eduSpack package: * [ufo-filters/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/ufo-filters/package.py) Versions: 0.14.1 Build Dependencies: [cmake](#cmake), [ufo-core](#ufo-core) Link Dependencies: [ufo-core](#ufo-core) Description: The UFO data processing framework is a C library suited to build general purpose streams data processing on heterogeneous architectures such as CPUs, GPUs or clusters. This package contains filter plugins. 
---

umpire[¶](#umpire)
===

Homepage: <https://github.com/LLNL/Umpire>
Spack package: [umpire/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/umpire/package.py)
Versions: develop, 0.1.4, 0.1.3, master
Build Dependencies: [cmake](#cmake), [cuda](#cuda)
Link Dependencies: [cuda](#cuda)

Description: An application-focused API for memory management on NUMA & GPU architectures.

---

unblur[¶](#unblur)
===

Homepage: <http://grigoriefflab.janelia.org/unblur>
Spack package: [unblur/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/unblur/package.py)
Versions: 1.0.2
Build Dependencies: [zlib](#zlib), [jbigkit](#jbigkit), jpeg, [gsl](#gsl), [libtiff](#libtiff), [fftw](#fftw)
Link Dependencies: [zlib](#zlib), [jbigkit](#jbigkit), jpeg, [gsl](#gsl), [libtiff](#libtiff), [fftw](#fftw)

Description: Unblur is used to align the frames of movies recorded on an electron microscope to reduce image blurring due to beam-induced motion.

---

uncrustify[¶](#uncrustify)
===

Homepage: <http://uncrustify.sourceforge.net/>
Spack package: [uncrustify/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/uncrustify/package.py)
Versions: 0.67, 0.61
Build Dependencies: [cmake](#cmake)

Description: Source Code Beautifier for C, C++, C#, ObjectiveC, Java, and others.

---

unibilium[¶](#unibilium)
===

Homepage: <https://github.com/mauke/unibilium>
Spack package: [unibilium/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/unibilium/package.py)
Versions: 1.2.0

Description: A terminfo parsing library.

---

unifycr[¶](#unifycr)
===

Homepage: <https://github.com/LLNL/UnifyCR>
Spack package: [unifycr/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/unifycr/package.py)
Versions: develop, 0.1.1
Build Dependencies: mpi, [leveldb](#leveldb), [m4](#m4), [pkg-config](#pkg-config), [hdf5](#hdf5), [libtool](#libtool), [autoconf](#autoconf), [gotcha](#gotcha), [numactl](#numactl), [automake](#automake)
Link Dependencies: mpi, [leveldb](#leveldb), [gotcha](#gotcha), [hdf5](#hdf5), [numactl](#numactl), [pkg-config](#pkg-config)

Description: User-level file system that enables applications to use node-local storage as burst buffers for shared files. Supports scalable and efficient aggregation of I/O bandwidth from burst buffers while having the same life cycle as a batch-submitted job. UnifyCR is designed to support common I/O workloads, including checkpoint/restart. While primarily designed for N-N write/read, UnifyCR complements its functionality with the support for N-1 write/read.

---

unison[¶](#unison)
===

Homepage: <https://www.cis.upenn.edu/~bcpierce/unison/>
Spack package: [unison/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/unison/package.py)
Versions: 2.48.4
Build Dependencies: [ocaml](#ocaml)

Description: Unison is a file-synchronization tool for OSX, Unix, and Windows. It allows two replicas of a collection of files and directories to be stored on different hosts (or different disks on the same host), modified separately, and then brought up to date by propagating the changes in each replica to the other.
---

units[¶](#units)
===

Homepage: <https://www.gnu.org/software/units/>
Spack package: [units/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/units/package.py)
Versions: 2.13
Build Dependencies: [python](#python)
Run Dependencies: [python](#python)

Description: GNU units converts between different systems of units.

---

unixodbc[¶](#unixodbc)
===

Homepage: <http://www.unixodbc.org/>
Spack package: [unixodbc/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/unixodbc/package.py)
Versions: 2.3.4
Build Dependencies: [libiconv](#libiconv), [libtool](#libtool)
Link Dependencies: [libiconv](#libiconv), [libtool](#libtool)

Description: ODBC is an open specification for providing application developers with a predictable API with which to access Data Sources. Data Sources include SQL Servers and any Data Source with an ODBC Driver.

---

unuran[¶](#unuran)
===

Homepage: <http://statmath.wu.ac.at/unuran>
Spack package: [unuran/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/unuran/package.py)
Versions: 1.8.1
Build Dependencies: [rngstreams](#rngstreams), [gsl](#gsl)
Link Dependencies: [rngstreams](#rngstreams), [gsl](#gsl)

Description: Universal Non-Uniform Random number generator.

---

unzip[¶](#unzip)
===

Homepage: <http://www.info-zip.org/Zip.html>
Spack package: [unzip/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/unzip/package.py)
Versions: 6.0

Description: Unzip is a compression and file packaging/archive utility.

---

usearch[¶](#usearch)
===

Homepage: <http://www.drive5.com/usearch/>
Spack package: [usearch/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/usearch/package.py)
Versions: 10.0.240

Description: USEARCH is a unique sequence analysis tool with thousands of users world-wide. Note: A manual download is required for USEARCH. Spack will search your current directory for the download file. Alternatively, add this file to a mirror so that Spack can find it. For instructions on how to set up a mirror, see http://spack.readthedocs.io/en/latest/mirrors.html

---

util-linux[¶](#util-linux)
===

Homepage: <http://freecode.com/projects/util-linux>
Spack package: [util-linux/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/util-linux/package.py)
Versions: 2.29.2, 2.29.1, 2.25
Build Dependencies: [python](#python)
Link Dependencies: [python](#python)

Description: Util-linux is a suite of essential utilities for any Linux system.

---

util-macros[¶](#util-macros)
===

Homepage: <http://cgit.freedesktop.org/xorg/util/macros/>
Spack package: [util-macros/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/util-macros/package.py)
Versions: 1.19.1, 1.19.0

Description: This is a set of autoconf macros used by the configure.ac scripts in other Xorg modular packages, and is needed to generate new versions of their configure scripts with autoconf.

---

uuid[¶](#uuid)
===

Homepage: <http://www.ossp.org/pkg/lib/uuid>
Spack package: [uuid/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/uuid/package.py)
Versions: 1.6.2

Description: OSSP uuid is an ISO-C:1999 application programming interface (API) and corresponding command line interface (CLI) for the generation of DCE 1.1, ISO/IEC 11578:1996 and RFC 4122 compliant Universally Unique Identifiers (UUIDs).
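Several entries in this list (usearch above, and vizglow further down) require a manual download that Spack cannot fetch itself, and suggest staging the file on a mirror. The following is a minimal sketch of that workflow using Spack's directory-backed mirror layout; the mirror name, paths, and the archive file name are illustrative examples only, and the exact file name Spack expects is defined by each package recipe.

```shell
# Create a local directory to serve as a file:// mirror.
# Spack's mirror layout places each archive under <mirror>/<package>/.
mkdir -p ~/spack-mirror/usearch

# Copy the manually downloaded vendor file into the mirror under the
# name the package recipe expects (hypothetical file name shown here).
cp ~/Downloads/usearch-10.0.240-download \
   ~/spack-mirror/usearch/usearch-10.0.240

# Register the mirror so Spack consults it before trying the network.
spack mirror add local-manual file://$HOME/spack-mirror

# The install can now resolve its fetch from the local mirror.
spack install usearch@10.0.240
```

Alternatively, per the note above, simply running `spack install` from the directory containing the downloaded file also works, since Spack searches the current directory for manual downloads.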
---

valgrind[¶](#valgrind)
===

Homepage: <http://valgrind.org/>
Spack package: [valgrind/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/valgrind/package.py)
Versions: develop, 3.14.0, 3.13.0, 3.12.0, 3.11.0, 3.10.1, 3.10.0
Build Dependencies: [autoconf](#autoconf), [boost](#boost), [libtool](#libtool), mpi, [automake](#automake)
Link Dependencies: [boost](#boost), mpi

Description: An instrumentation framework for building dynamic analysis tools. There are Valgrind tools that can automatically detect many memory management and threading bugs, and profile your programs in detail. You can also use Valgrind to build new tools. Valgrind is Open Source / Free Software, and is freely available under the GNU General Public License, version 2.

---

vampirtrace[¶](#vampirtrace)
===

Homepage: <https://tu-dresden.de/zih/forschung/projekte/vampirtrace>
Spack package: [vampirtrace/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/vampirtrace/package.py)
Versions: 5.14.4
Build Dependencies: [zlib](#zlib), [otf](#otf), mpi, [papi](#papi)
Link Dependencies: [zlib](#zlib), [otf](#otf), mpi, [papi](#papi)

Description: VampirTrace is an open source library that allows detailed logging of program execution for parallel applications using message passing (MPI) and threads (OpenMP, Pthreads).

---

vardictjava[¶](#vardictjava)
===

Homepage: <https://github.com/AstraZeneca-NGS/VarDictJava>
Spack package: [vardictjava/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/vardictjava/package.py)
Versions: 1.5.1, 1.4.4
Run Dependencies: java

Description: VarDictJava is a variant discovery program written in Java. It is a partial Java port of the VarDict variant caller.

---

varscan[¶](#varscan)
===

Homepage: <http://dkoboldt.github.io/varscan/>
Spack package: [varscan/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/varscan/package.py)
Versions: 2.4.2
Build Dependencies: java
Run Dependencies: java

Description: Variant calling and somatic mutation/CNV detection for next-generation sequencing data.

---

vc[¶](#vc)
===

Homepage: <https://github.com/VcDevel/Vc>
Spack package: [vc/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/vc/package.py)
Versions: 1.3.0, 1.2.0, 1.1.0
Build Dependencies: [cmake](#cmake)

Description: SIMD Vector Classes for C++.

---

vcftools[¶](#vcftools)
===

Homepage: <https://vcftools.github.io/>
Spack package: [vcftools/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/vcftools/package.py)
Versions: 0.1.14
Build Dependencies: [perl](#perl), [zlib](#zlib)
Link Dependencies: [zlib](#zlib)
Run Dependencies: [perl](#perl)

Description: VCFtools is a program package designed for working with VCF files, such as those generated by the 1000 Genomes Project. The aim of VCFtools is to provide easily accessible methods for working with complex genetic variation data in the form of VCF files.

---

vcsh[¶](#vcsh)
===

Homepage: <https://github.com/RichiH/vcsh>
Spack package: [vcsh/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/vcsh/package.py)
Versions: 1.20151229-1, 1.20151229, 1.20150502, 1.20141026, 1.20141025
Run Dependencies: [git](#git)

Description: Config manager based on git.

---

vdt[¶](#vdt)
===

Homepage: <https://github.com/dpiparo/vdt>
Spack package: [vdt/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/vdt/package.py)
Versions: 0.3.9, 0.3.8, 0.3.7, 0.3.6
Build Dependencies: [cmake](#cmake)

Description: Vectorised math. A collection of fast and inline implementations of mathematical functions.
---

vecgeom[¶](#vecgeom)
===

Homepage: <https://gitlab.cern.ch/VecGeom/VecGeom>
Spack package: [vecgeom/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/vecgeom/package.py)
Versions: 01.00.00, 00.05.00, 0.3.rc
Build Dependencies: [cmake](#cmake)

Description: The vectorized geometry library for particle-detector simulation (toolkits).

---

veclibfort[¶](#veclibfort)
===

Homepage: <https://github.com/mcg1969/vecLibFort>
Spack package: [veclibfort/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/veclibfort/package.py)
Versions: develop, 0.4.2

Description: Lightweight but flexible shim designed to rectify the incompatibilities between the Accelerate/vecLib BLAS and LAPACK libraries shipped with macOS and FORTRAN code compiled with modern compilers such as GNU Fortran.

---

vegas2[¶](#vegas2)
===

Homepage: <https://vegas2.qimrberghofer.edu.au/>
Spack package: [vegas2/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/vegas2/package.py)
Versions: 2
Build Dependencies: [plink](#plink)
Link Dependencies: [plink](#plink)
Run Dependencies: [perl](#perl), [r](#r), [r-mvtnorm](#r-mvtnorm), [r-corpcor](#r-corpcor)

Description: VEGAS2 is an extension that uses 1,000 Genomes data to model SNP correlations across the autosomes and chromosome X.

---

veloc[¶](#veloc)
===

Homepage: <https://github.com/ECP-VeloC/VELOC>
Spack package: [veloc/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/veloc/package.py)
Versions: 1.0rc1, 1.0, master
Build Dependencies: mpi, [cmake](#cmake), [axl](#axl), [boost](#boost), [libpthread-stubs](#libpthread-stubs), [er](#er)
Link Dependencies: [axl](#axl), [boost](#boost), mpi, [er](#er), [libpthread-stubs](#libpthread-stubs)

Description: Very-Low Overhead Checkpointing System. VELOC is a multi-level checkpoint-restart runtime for HPC supercomputing infrastructures.

---

velvet[¶](#velvet)
===

Homepage: <http://www.ebi.ac.uk/~zerbino/velvet/>
Spack package: [velvet/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/velvet/package.py)
Versions: 1.2.10

Description: Velvet is a de novo genomic assembler specially designed for short read sequencing technologies.

---

verilator[¶](#verilator)
===

Homepage: <https://www.veripool.org/projects/verilator>
Spack package: [verilator/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/verilator/package.py)
Versions: 3.920, 3.904
Build Dependencies: [perl](#perl), [flex](#flex), [bison](#bison)
Run Dependencies: [perl](#perl)

Description: Verilator is the fastest free Verilog HDL simulator. It compiles synthesizable Verilog (not test-bench code!), plus some PSL, SystemVerilog and Synthesis assertions into C++ or SystemC code. It is designed for large projects where fast simulation performance is of primary concern, and is especially well suited to generate executable models of CPUs for embedded software design teams. Please do not download this program if you are expecting a full featured replacement for NC-Verilog, VCS or another commercial Verilog simulator or Verilog compiler for a little project! (Try Icarus instead.) However, if you are looking for a path to migrate synthesizable Verilog to C++ or SystemC, and writing just a touch of C code and Makefiles doesn't scare you off, this is the free Verilog compiler for you. Verilator supports the synthesis subset of Verilog, plus initial statements, proper blocking/non-blocking assignments, functions, tasks, multi-dimensional arrays, and signed numbers. It also supports very simple forms of SystemVerilog assertions and coverage analysis.
Verilator supports the more important Verilog 2005 constructs, and some SystemVerilog features, with additional constructs being added as users request them. Verilator has been used to simulate many very large multi-million gate designs with thousands of modules.

---

verrou[¶](#verrou)
===

Homepage: <https://github.com/edf-hpc/verrou>
Spack package: [verrou/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/verrou/package.py)
Versions: develop, 2.0.0, 1.1.0
Build Dependencies: [autoconf](#autoconf), [automake](#automake), [libtool](#libtool), [m4](#m4)

Description: A floating-point error checker. Verrou helps you look for floating-point round-off errors in programs. It implements a stochastic floating-point arithmetic based on random rounding: all floating-point operations are perturbed by randomly switching rounding modes. This can be seen as an asynchronous variant of the CESTAC method, or a subset of Monte Carlo Arithmetic, performing only output randomization through random rounding.

---

videoproto[¶](#videoproto)
===

Homepage: <http://cgit.freedesktop.org/xorg/proto/videoproto>
Spack package: [videoproto/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/videoproto/package.py)
Versions: 2.3.3
Build Dependencies: [util-macros](#util-macros), pkgconfig

Description: X Video Extension. This extension provides a protocol for a video output mechanism, mainly to rescale video playback in the video controller hardware.
---

viennarna[¶](#viennarna)
===

Homepage: <https://www.tbi.univie.ac.at/RNA/>
Spack package: [viennarna/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/viennarna/package.py)
Versions: 2.4.3, 2.3.5
Build Dependencies: [perl](#perl), [libsvm](#libsvm), [gsl](#gsl), [python](#python)
Link Dependencies: [libsvm](#libsvm), [gsl](#gsl)
Run Dependencies: [perl](#perl), [python](#python)

Description: The ViennaRNA Package consists of a C code library and several stand-alone programs for the prediction and comparison of RNA secondary structures.

---

viewres[¶](#viewres)
===

Homepage: <http://cgit.freedesktop.org/xorg/app/viewres>
Spack package: [viewres/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/viewres/package.py)
Versions: 1.0.4
Build Dependencies: [util-macros](#util-macros), pkgconfig, [libxmu](#libxmu), [libxt](#libxt), [libxaw](#libxaw)
Link Dependencies: [libxaw](#libxaw), [libxmu](#libxmu), [libxt](#libxt)

Description: viewres displays a tree showing the widget class hierarchy of the Athena Widget Set (libXaw).

---

vim[¶](#vim)
===

Homepage: <http://www.vim.org>
Spack package: [vim/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/vim/package.py)
Versions: 8.1.0338, 8.1.0001, 8.0.1376, 8.0.0503, 8.0.0454, 8.0.0134, 7.4.2367
Build Dependencies: [ruby](#ruby), [libxt](#libxt), [lua](#lua), [libx11](#libx11), [python](#python), [libxpm](#libxpm), [perl](#perl), [libxtst](#libxtst), [ncurses](#ncurses), [libsm](#libsm)
Link Dependencies: [ruby](#ruby), [libxt](#libxt), [lua](#lua), [libx11](#libx11), [python](#python), [libxpm](#libxpm), [perl](#perl), [libxtst](#libxtst), [ncurses](#ncurses), [libsm](#libsm)
Run Dependencies: [cscope](#cscope)

Description: Vim is a highly configurable text editor built to enable efficient text editing. It is an improved version of the vi editor distributed with most UNIX systems. Vim is often called a "programmer's editor," and so useful for programming that many consider it an entire IDE. It's not just for programmers, though. Vim is perfect for all kinds of text editing, from composing email to editing configuration files.

---

virtualgl[¶](#virtualgl)
===

Homepage: <http://www.virtualgl.org/Main/HomePage>
Spack package: [virtualgl/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/virtualgl/package.py)
Versions: 2.5.2
Build Dependencies: [cmake](#cmake), [mesa-glu](#mesa-glu), [libjpeg-turbo](#libjpeg-turbo)
Link Dependencies: [mesa-glu](#mesa-glu), [libjpeg-turbo](#libjpeg-turbo)

Description: VirtualGL redirects 3D commands from a Unix/Linux OpenGL application onto a server-side GPU and converts the rendered 3D images into a video stream with which remote clients can interact to view and control the 3D application in real time.

---

visit[¶](#visit)
===

Homepage: <https://wci.llnl.gov/simulation/computer-codes/visit/>
Spack package: [visit/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/visit/package.py)
Versions: 2.13.0, 2.12.3, 2.12.2, 2.10.3, 2.10.2, 2.10.1
Build Dependencies: [vtk](#vtk), mpi, [silo](#silo), [qwt](#qwt), [hdf5](#hdf5), [cmake](#cmake), [qt](#qt), [python](#python)
Link Dependencies: [vtk](#vtk), mpi, [silo](#silo), [qwt](#qwt), [hdf5](#hdf5), [qt](#qt), [python](#python)

Description: VisIt is an Open Source, interactive, scalable, visualization, animation and analysis tool.
---

vizglow[¶](#vizglow)
===

Homepage: <http://esgeetech.com/products/vizglow-plasma-modeling/>
Spack package: [vizglow/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/vizglow/package.py)
Versions: 2.2alpha20, 2.2alpha17, 2.2alpha15
Build Dependencies: [zlib](#zlib), [libxrender](#libxrender), [fontconfig](#fontconfig), [libcanberra](#libcanberra), [xterm](#xterm), [freetype](#freetype)
Link Dependencies: [zlib](#zlib), [libxrender](#libxrender), [fontconfig](#fontconfig), [libcanberra](#libcanberra), [xterm](#xterm), [freetype](#freetype)

Description: VizGlow is a software tool used for high-fidelity multi-dimensional modeling of non-equilibrium plasma discharges. Note: VizGlow is licensed software. You will need to create an account on the EsgeeTech homepage and download VizGlow yourself. Spack will search your current directory for a file of this format. Alternatively, add this file to a mirror so that Spack can find it. For instructions on how to set up a mirror, see http://spack.readthedocs.io/en/latest/mirrors.html

---

vmatch[¶](#vmatch)
===

Homepage: <http://www.vmatch.de/>
Spack package: [vmatch/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/vmatch/package.py)
Versions: 2.3.0

Description: Vmatch is a versatile software tool for efficiently solving large scale sequence matching tasks.

---

voropp[¶](#voropp)
===

Homepage: <http://math.lbl.gov/voro++/about.html>
Spack package: [voropp/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/voropp/package.py)
Versions: 0.4.6

Description: Voro++ is an open source software library for the computation of the Voronoi diagram, a widely-used tessellation that has applications in many scientific fields.
---

votca-csg[¶](#votca-csg)
===

Homepage: <http://www.votca.org>
Spack package: [votca-csg/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/votca-csg/package.py)
Versions: develop, 1.4.1, 1.4
Build Dependencies: [cmake](#cmake), [votca-tools](#votca-tools), [gromacs](#gromacs), [hdf5](#hdf5)
Link Dependencies: [votca-tools](#votca-tools), [gromacs](#gromacs), [hdf5](#hdf5)

Description: Versatile Object-oriented Toolkit for Coarse-graining Applications (VOTCA) is a package intended to reduce the amount of routine work when doing systematic coarse-graining of various systems. The core is written in C++. This package contains the VOTCA coarse-graining engine.

---

votca-ctp[¶](#votca-ctp)
===

Homepage: <http://www.votca.org>
Spack package: [votca-ctp/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/votca-ctp/package.py)
Versions: develop
Build Dependencies: [cmake](#cmake), [votca-tools](#votca-tools), [gsl](#gsl), [votca-csg](#votca-csg)
Link Dependencies: [votca-tools](#votca-tools), [gsl](#gsl), [votca-csg](#votca-csg)

Description: Versatile Object-oriented Toolkit for Coarse-graining Applications (VOTCA) is a package intended to reduce the amount of routine work when doing systematic coarse-graining of various systems. The core is written in C++. This package contains the VOTCA charge transport engine.

---

votca-tools[¶](#votca-tools)
===

Homepage: <http://www.votca.org>
Spack package: [votca-tools/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/votca-tools/package.py)
Versions: develop, 1.4.1, 1.4
Build Dependencies: [sqlite](#sqlite), [gsl](#gsl), [eigen](#eigen), [cmake](#cmake), [boost](#boost), [expat](#expat), [fftw](#fftw)
Link Dependencies: [gsl](#gsl), [eigen](#eigen), [sqlite](#sqlite), [boost](#boost), [expat](#expat), [fftw](#fftw)

Description: Versatile Object-oriented Toolkit for Coarse-graining Applications (VOTCA) is a package intended to reduce the amount of routine work when doing systematic coarse-graining of various systems. The core is written in C++. This package contains the basic tools library of VOTCA.

---

votca-xtp[¶](#votca-xtp)
===

Homepage: <http://www.votca.org>
Spack package: [votca-xtp/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/votca-xtp/package.py)
Versions: develop, 1.4.1
Build Dependencies: [libxc](#libxc), [ceres-solver](#ceres-solver), [cmake](#cmake), [votca-ctp](#votca-ctp), [votca-tools](#votca-tools), [votca-csg](#votca-csg)
Link Dependencies: [libxc](#libxc), [votca-ctp](#votca-ctp), [votca-tools](#votca-tools), [ceres-solver](#ceres-solver), [votca-csg](#votca-csg)

Description: Versatile Object-oriented Toolkit for Coarse-graining Applications (VOTCA) is a package intended to reduce the amount of routine work when doing systematic coarse-graining of various systems. The core is written in C++. This package contains the VOTCA exciton transport engine.

---

vpfft[¶](#vpfft)
===

Homepage: <http://www.exmatex.org/vpfft.html>
Spack package: [vpfft/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/vpfft/package.py)
Versions: develop
Build Dependencies: mpi, [eigen](#eigen), [fftw](#fftw)
Link Dependencies: mpi, [eigen](#eigen), [fftw](#fftw)

Description: Proxy Application. VPFFT is an implementation of a mesoscale micromechanical materials model. By solving the viscoplasticity model, VPFFT simulates the evolution of a material under deformation. The solution time to the viscoplasticity model, described by a set of partial differential equations, is significantly reduced by the application of Fast Fourier Transform in the VPFFT algorithm.

---

vpic[¶](#vpic)
===

Homepage: <https://github.com/lanl/vpic>
Spack package: [vpic/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/vpic/package.py)
Versions: develop
Build Dependencies: [cmake](#cmake), mpi
Link Dependencies: mpi

Description: VPIC is a general purpose particle-in-cell simulation code for modeling kinetic plasmas in one, two, or three spatial dimensions. It employs a second-order, explicit, leapfrog algorithm to update charged particle positions and velocities in order to solve the relativistic kinetic equation for each species in the plasma, along with a full Maxwell description for the electric and magnetic fields evolved via a second-order finite-difference-time-domain (FDTD) solve.

---

vsearch[¶](#vsearch)
===

Homepage: <https://github.com/torognes/vsearch>
Spack package: [vsearch/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/vsearch/package.py)
Versions: 2.4.3
Build Dependencies: [autoconf](#autoconf), [automake](#automake), [libtool](#libtool), [m4](#m4)

Description: VSEARCH is a versatile open-source tool for metagenomics.

---

vt[¶](#vt)
===

Homepage: <http://genome.sph.umich.edu/wiki/vt>
Spack package: [vt/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/vt/package.py)
Versions: 0.577

Description: A tool set for short variant discovery in genetic sequence data.
---

vtk[¶](#vtk)
===

Homepage: <http://www.vtk.org>
Spack package: [vtk/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/vtk/package.py)
Versions: 8.1.1, 8.0.1, 7.1.0, 7.0.0, 6.3.0, 6.1.0
Build Dependencies: gl, [mesa](#mesa), [libjpeg](#libjpeg), [qt](#qt), [zlib](#zlib), [ffmpeg](#ffmpeg), mpi, [netcdf-cxx](#netcdf-cxx), [netcdf](#netcdf), [libtiff](#libtiff), [boost](#boost), [glew](#glew), [lz4](#lz4), [libpng](#libpng), [libxml2](#libxml2), [hdf5](#hdf5), [cmake](#cmake), [jsoncpp](#jsoncpp), [expat](#expat), [libharu](#libharu), [opengl](#opengl), [python](#python), [freetype](#freetype)
Link Dependencies: gl, [mesa](#mesa), [libjpeg](#libjpeg), [qt](#qt), [zlib](#zlib), [ffmpeg](#ffmpeg), mpi, [netcdf-cxx](#netcdf-cxx), [netcdf](#netcdf), [libtiff](#libtiff), [boost](#boost), [glew](#glew), [lz4](#lz4), [libpng](#libpng), [libxml2](#libxml2), [hdf5](#hdf5), [jsoncpp](#jsoncpp), [expat](#expat), [libharu](#libharu), [opengl](#opengl), [python](#python), [freetype](#freetype)
Run Dependencies: [py-mpi4py](#py-mpi4py)

Description: The Visualization Toolkit (VTK) is an open-source, freely available software system for 3D computer graphics, image processing and visualization.

---

vtkh[¶](#vtkh)
===

Homepage: <https://github.com/Alpine-DAV/vtk-h>
Spack package: [vtkh/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/vtkh/package.py)
Versions: master
Build Dependencies: [cuda](#cuda), mpi, tbb, [vtkm](#vtkm), [cmake](#cmake)
Link Dependencies: [cuda](#cuda), mpi, tbb, [vtkm](#vtkm), [cmake](#cmake)

Description: VTK-h is a toolkit of scientific visualization algorithms for emerging processor architectures. VTK-h brings together several projects like VTK-m and DIY2 to provide a toolkit with hybrid parallel capabilities.

---

vtkm[¶](#vtkm)
===

Homepage: <https://m.vtk.org/>
Spack package: [vtkm/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/vtkm/package.py)
Versions: 1.1.0, master
Build Dependencies: [cuda](#cuda), tbb, [cmake](#cmake)
Link Dependencies: [cuda](#cuda), tbb, [cmake](#cmake)

Description: VTK-m is a toolkit of scientific visualization algorithms for emerging processor architectures. VTK-m supports the fine-grained concurrency for data analysis and visualization algorithms required to drive extreme scale computing by providing abstract models for data and execution that can be applied to a variety of algorithms across many different processor architectures.

---

wannier90[¶](#wannier90)
===

Homepage: <http://wannier.org>
Spack package: [wannier90/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/wannier90/package.py)
Versions: 2.1.0, 2.0.1
Build Dependencies: mpi, blas, lapack
Link Dependencies: mpi, blas, lapack

Description: Wannier90 calculates maximally-localised Wannier functions (MLWFs). Wannier90 is released under the GNU General Public License.

---

warpx[¶](#warpx)
===

Homepage: <https://ecp-warpx.github.io/index.html>
Spack package: [warpx/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/warpx/package.py)
Versions: master, dev
Build Dependencies: mpi
Link Dependencies: mpi

Description: WarpX is an advanced electromagnetic Particle-In-Cell code. It supports many features including Perfectly-Matched Layers (PML) and mesh refinement. In addition, WarpX is a highly-parallel and highly-optimized code and features hybrid OpenMP/MPI parallelization, advanced vectorization techniques and load balancing capabilities.
---

wcslib[¶](#wcslib)
===

Homepage: <http://www.atnf.csiro.au/people/mcalabre/WCS/wcslib/>
Spack package: [wcslib/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/wcslib/package.py)
Versions: 5.18
Build Dependencies: [libx11](#libx11), [gmake](#gmake), [cfitsio](#cfitsio), [flex](#flex)
Link Dependencies: [libx11](#libx11), [cfitsio](#cfitsio)

Description: WCSLIB is a C implementation of the coordinate transformations defined in the FITS WCS papers.

---

wget[¶](#wget)
===

Homepage: <http://www.gnu.org/software/wget/>
Spack package: [wget/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/wget/package.py)
Versions: 1.19.1, 1.17, 1.16
Build Dependencies: [zlib](#zlib), pkgconfig, [gettext](#gettext), [gnutls](#gnutls), [openssl](#openssl), [perl](#perl), [pcre](#pcre), [libpsl](#libpsl), [python](#python)
Link Dependencies: [zlib](#zlib), [pcre](#pcre), [libpsl](#libpsl), [gnutls](#gnutls), [openssl](#openssl)
Test Dependencies: [valgrind](#valgrind)

Description: GNU Wget is a free software package for retrieving files using HTTP, HTTPS and FTP, the most widely-used Internet protocols. It is a non-interactive commandline tool, so it may easily be called from scripts, cron jobs, terminals without X-Windows support, etc.

---

wgsim[¶](#wgsim)
===

Homepage: <https://github.com/lh3/wgsim>
Spack package: [wgsim/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/wgsim/package.py)
Versions: 2011.10.17
Build Dependencies: [zlib](#zlib)
Link Dependencies: [zlib](#zlib)

Description: Wgsim is a small tool for simulating sequence reads from a reference genome. It is able to simulate diploid genomes with SNPs and insertion/deletion (INDEL) polymorphisms, and simulate reads with uniform substitution sequencing errors. It does not generate INDEL sequencing errors, but this can be partly compensated by simulating INDEL polymorphisms.
--- windowswmproto[¶](#windowswmproto) === Homepage: * <http://cgit.freedesktop.org/xorg/proto/windowswmprotoSpack package: * [windowswmproto/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/windowswmproto/package.py) Versions: 1.0.4 Description: This module provides the definition of the WindowsWM extension to the X11 protocol, used for coordination between an X11 server and the Microsoft Windows native window manager. WindowsWM is only intended to be used on Cygwin when running a rootless XWin server. --- wireshark[¶](#wireshark) === Homepage: * <https://www.wireshark.orgSpack package: * [wireshark/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/wireshark/package.py) Versions: 2.6.0 Build Dependencies: pkgconfig, [lua](#lua), [git](#git), [bison](#bison), adwaita-icon-theme, [cares](#cares), [qt](#qt), gtkplus3, portaudio, [libgcrypt](#libgcrypt), [libtool](#libtool), [libpcap](#libpcap), [gtkplus](#gtkplus), [libssh](#libssh), [libmaxminddb](#libmaxminddb), [cmake](#cmake), [doxygen](#doxygen), [nghttp2](#nghttp2), [krb5](#krb5), libsmi, [flex](#flex), [glib](#glib), [gnutls](#gnutls) Link Dependencies: [cares](#cares), adwaita-icon-theme, [lua](#lua), [gtkplus](#gtkplus), gtkplus3, [glib](#glib), [krb5](#krb5), [nghttp2](#nghttp2), libsmi, portaudio, [gnutls](#gnutls), [qt](#qt), [libssh](#libssh), [libpcap](#libpcap), [libgcrypt](#libgcrypt), [libmaxminddb](#libmaxminddb) Description: Graphical network analyzer and capture tool --- workrave[¶](#workrave) === Homepage: * <http://www.workrave.org/Spack package: * [workrave/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/workrave/package.py) Versions: 1_10_20, 1_10_19, 1_10_18, 1_10_17, 1_10_16, 1_10_15, 1_10_14, 1_10_13, 1_10_12, 1_10_10 Build Dependencies: [glib](#glib), [libsigcpp](#libsigcpp), [libx11](#libx11), [m4](#m4), [glibmm](#glibmm), [py-cheetah](#py-cheetah), 
[autoconf](#autoconf), [automake](#automake), [libtool](#libtool), [gtkmm](#gtkmm), [gtkplus](#gtkplus)
Link Dependencies: [glib](#glib), [libsigcpp](#libsigcpp), [libx11](#libx11), [gtkplus](#gtkplus), [py-cheetah](#py-cheetah), [gtkmm](#gtkmm), [glibmm](#glibmm)
Description: Workrave is a program that assists in the recovery and prevention of Repetitive Strain Injury (RSI). The program frequently alerts you to take micro-pauses, rest breaks and restricts you to your daily limit. The program runs on GNU/Linux and Microsoft Windows.
---
wt[¶](#wt)
===
Homepage:
* <http://www.webtoolkit.eu/wt>
Spack package:
* [wt/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/wt/package.py)
Versions: 3.3.7, master
Build Dependencies: [zlib](#zlib), [pango](#pango), [openssl](#openssl), [sqlite](#sqlite), [cmake](#cmake), [postgresql](#postgresql), [mariadb](#mariadb), [boost](#boost), [libharu](#libharu)
Link Dependencies: [zlib](#zlib), [pango](#pango), [openssl](#openssl), [sqlite](#sqlite), [boost](#boost), [mariadb](#mariadb), [postgresql](#postgresql), [libharu](#libharu)
Description: Wt, C++ Web Toolkit. Wt is a C++ library for developing web applications.
---
wx[¶](#wx)
===
Homepage:
* <http://www.wxwidgets.org/>
Spack package:
* [wx/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/wx/package.py)
Versions: develop, 3.1.0, 3.0.2, 3.0.1
Build Dependencies: pkgconfig, [gtkplus](#gtkplus)
Link Dependencies: [gtkplus](#gtkplus)
Description: wxWidgets is a C++ library that lets developers create applications for Windows, Mac OS X, Linux and other platforms with a single code base. It has popular language bindings for Python, Perl, Ruby and many other languages, and unlike other cross-platform toolkits, wxWidgets gives applications a truly native look and feel because it uses the platform's native API rather than emulating the GUI. It's also extensive, free, open-source and mature.
---
wxpropgrid[¶](#wxpropgrid)
===
Homepage:
* <http://wxpropgrid.sourceforge.net/>
Spack package:
* [wxpropgrid/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/wxpropgrid/package.py)
Versions: 1.4.15
Build Dependencies: [wx](#wx)
Link Dependencies: [wx](#wx)
Description: wxPropertyGrid is a property sheet control for wxWidgets. In other words, it is a specialized two-column grid for editing properties such as strings, numbers, flagsets, string arrays, and colours.
---
x11perf[¶](#x11perf)
===
Homepage:
* <http://cgit.freedesktop.org/xorg/app/x11perf>
Spack package:
* [x11perf/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/x11perf/package.py)
Versions: 1.6.0
Build Dependencies: [libxrender](#libxrender), [xproto](#xproto), pkgconfig, [libxft](#libxft), [util-macros](#util-macros), [libx11](#libx11), [libxmu](#libxmu)
Link Dependencies: [libxrender](#libxrender), [libx11](#libx11), [libxft](#libxft), [libxmu](#libxmu)
Description: Simple X server performance benchmarker.
---
xapian-core[¶](#xapian-core)
===
Homepage:
* <https://xapian.org>
Spack package:
* [xapian-core/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/xapian-core/package.py)
Versions: 1.4.3
Build Dependencies: [zlib](#zlib)
Link Dependencies: [zlib](#zlib)
Description: Xapian is a highly adaptable toolkit which allows developers to easily add advanced indexing and search facilities to their own applications. It supports the Probabilistic Information Retrieval model and also supports a rich set of boolean query operators.
---
xauth[¶](#xauth)
===
Homepage:
* <http://cgit.freedesktop.org/xorg/app/xauth>
Spack package:
* [xauth/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/xauth/package.py)
Versions: 1.0.9
Build Dependencies: [xproto](#xproto), pkgconfig, [libx11](#libx11), [util-macros](#util-macros), [libxext](#libxext), [libxmu](#libxmu), [libxau](#libxau)
Link Dependencies: [xproto](#xproto), [libx11](#libx11), [libxext](#libxext), [libxau](#libxau), [libxmu](#libxmu)
Description: The xauth program is used to edit and display the authorization information used in connecting to the X server.
---
xbacklight[¶](#xbacklight)
===
Homepage:
* <http://cgit.freedesktop.org/xorg/app/xbacklight>
Spack package:
* [xbacklight/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/xbacklight/package.py)
Versions: 1.2.1
Build Dependencies: [xcb-util](#xcb-util), [libxcb](#libxcb), pkgconfig, [util-macros](#util-macros)
Link Dependencies: [xcb-util](#xcb-util), [libxcb](#libxcb)
Description: Xbacklight is used to adjust the backlight brightness where supported. It uses the RandR extension to find all outputs on the X server supporting backlight brightness control and changes them all in the same way.
---
xbiff[¶](#xbiff)
===
Homepage:
* <http://cgit.freedesktop.org/xorg/app/xbiff>
Spack package:
* [xbiff/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/xbiff/package.py)
Versions: 1.0.3
Build Dependencies: [util-macros](#util-macros), pkgconfig, [libx11](#libx11), [libxext](#libxext), [libxaw](#libxaw), [libxmu](#libxmu), [xbitmaps](#xbitmaps)
Link Dependencies: [libx11](#libx11), [libxext](#libxext), [libxaw](#libxaw), [libxmu](#libxmu)
Description: xbiff provides graphical notification of new e-mail. It only handles mail stored in a filesystem-accessible file, not via IMAP, POP or other remote access protocols.
---
xbitmaps[¶](#xbitmaps)
===
Homepage:
* <https://cgit.freedesktop.org/xorg/data/bitmaps/>
Spack package:
* [xbitmaps/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/xbitmaps/package.py)
Versions: 1.1.1
Build Dependencies: [util-macros](#util-macros), pkgconfig
Description: The xbitmaps package contains bitmap images used by multiple applications built in Xorg.
---
xbraid[¶](#xbraid)
===
Homepage:
* <https://computation.llnl.gov/projects/parallel-time-integration-multigrid/software>
Spack package:
* [xbraid/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/xbraid/package.py)
Versions: 2.2.0
Build Dependencies: mpi
Link Dependencies: mpi
Description: XBraid: Parallel time integration with Multigrid
---
xcalc[¶](#xcalc)
===
Homepage:
* <http://cgit.freedesktop.org/xorg/app/xcalc>
Spack package:
* [xcalc/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/xcalc/package.py)
Versions: 1.0.6
Build Dependencies: [util-macros](#util-macros), pkgconfig, [libx11](#libx11), [libxt](#libxt), [libxaw](#libxaw), [xproto](#xproto)
Link Dependencies: [libxt](#libxt), [libxaw](#libxaw), [libx11](#libx11)
Description: xcalc is a scientific calculator X11 client that can emulate a TI-30 or an HP-10C.
---
xcb-demo[¶](#xcb-demo)
===
Homepage:
* <https://xcb.freedesktop.org/>
Spack package:
* [xcb-demo/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/xcb-demo/package.py)
Versions: 0.1
Build Dependencies: [xcb-util](#xcb-util), [libxcb](#libxcb), pkgconfig, [xcb-util-wm](#xcb-util-wm), [xcb-util-image](#xcb-util-image)
Link Dependencies: [xcb-util](#xcb-util), [libxcb](#libxcb), [xcb-util-wm](#xcb-util-wm), [xcb-util-image](#xcb-util-image)
Description: xcb-demo: A collection of demo programs that use the XCB library.
---
xcb-proto[¶](#xcb-proto)
===
Homepage:
* <http://xcb.freedesktop.org/>
Spack package:
* [xcb-proto/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/xcb-proto/package.py)
Versions: 1.13, 1.12, 1.11
Description: xcb-proto provides the XML-XCB protocol descriptions that libxcb uses to generate the majority of its code and API.
---
xcb-util[¶](#xcb-util)
===
Homepage:
* <https://xcb.freedesktop.org/>
Spack package:
* [xcb-util/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/xcb-util/package.py)
Versions: 0.4.0
Build Dependencies: [libxcb](#libxcb), pkgconfig
Link Dependencies: [libxcb](#libxcb)
Description: The XCB util modules provide a number of libraries which sit on top of libxcb, the core X protocol library, and some of the extension libraries. These experimental libraries provide convenience functions and interfaces which make the raw X protocol more usable. Some of the libraries also provide client-side code which is not strictly part of the X protocol but which has traditionally been provided by Xlib.
---
xcb-util-cursor[¶](#xcb-util-cursor)
===
Homepage:
* <https://xcb.freedesktop.org/>
Spack package:
* [xcb-util-cursor/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/xcb-util-cursor/package.py)
Versions: 0.1.3
Build Dependencies: [libxcb](#libxcb), pkgconfig, [xcb-util-renderutil](#xcb-util-renderutil), [xcb-util-image](#xcb-util-image)
Link Dependencies: [libxcb](#libxcb), [xcb-util-renderutil](#xcb-util-renderutil), [xcb-util-image](#xcb-util-image)
Description: The XCB util modules provide a number of libraries which sit on top of libxcb, the core X protocol library, and some of the extension libraries. These experimental libraries provide convenience functions and interfaces which make the raw X protocol more usable.
Some of the libraries also provide client-side code which is not strictly part of the X protocol but which has traditionally been provided by Xlib.
---
xcb-util-errors[¶](#xcb-util-errors)
===
Homepage:
* <https://xcb.freedesktop.org/>
Spack package:
* [xcb-util-errors/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/xcb-util-errors/package.py)
Versions: 1.0
Build Dependencies: [libxcb](#libxcb), [xcb-proto](#xcb-proto), pkgconfig
Link Dependencies: [libxcb](#libxcb)
Description: The XCB util modules provide a number of libraries which sit on top of libxcb, the core X protocol library, and some of the extension libraries. These experimental libraries provide convenience functions and interfaces which make the raw X protocol more usable. Some of the libraries also provide client-side code which is not strictly part of the X protocol but which has traditionally been provided by Xlib.
---
xcb-util-image[¶](#xcb-util-image)
===
Homepage:
* <https://xcb.freedesktop.org/>
Spack package:
* [xcb-util-image/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/xcb-util-image/package.py)
Versions: 0.4.0
Build Dependencies: [xcb-util](#xcb-util), [libxcb](#libxcb), pkgconfig, [xproto](#xproto)
Link Dependencies: [xcb-util](#xcb-util), [libxcb](#libxcb)
Description: The XCB util modules provide a number of libraries which sit on top of libxcb, the core X protocol library, and some of the extension libraries. These experimental libraries provide convenience functions and interfaces which make the raw X protocol more usable. Some of the libraries also provide client-side code which is not strictly part of the X protocol but which has traditionally been provided by Xlib.
---
xcb-util-keysyms[¶](#xcb-util-keysyms)
===
Homepage:
* <https://xcb.freedesktop.org/>
Spack package:
* [xcb-util-keysyms/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/xcb-util-keysyms/package.py)
Versions: 0.4.0
Build Dependencies: [libxcb](#libxcb), pkgconfig, [xproto](#xproto)
Link Dependencies: [libxcb](#libxcb)
Description: The XCB util modules provide a number of libraries which sit on top of libxcb, the core X protocol library, and some of the extension libraries. These experimental libraries provide convenience functions and interfaces which make the raw X protocol more usable. Some of the libraries also provide client-side code which is not strictly part of the X protocol but which has traditionally been provided by Xlib.
---
xcb-util-renderutil[¶](#xcb-util-renderutil)
===
Homepage:
* <https://xcb.freedesktop.org/>
Spack package:
* [xcb-util-renderutil/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/xcb-util-renderutil/package.py)
Versions: 0.3.9
Build Dependencies: [libxcb](#libxcb), pkgconfig
Link Dependencies: [libxcb](#libxcb)
Description: The XCB util modules provide a number of libraries which sit on top of libxcb, the core X protocol library, and some of the extension libraries. These experimental libraries provide convenience functions and interfaces which make the raw X protocol more usable. Some of the libraries also provide client-side code which is not strictly part of the X protocol but which has traditionally been provided by Xlib.
---
xcb-util-wm[¶](#xcb-util-wm)
===
Homepage:
* <https://xcb.freedesktop.org/>
Spack package:
* [xcb-util-wm/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/xcb-util-wm/package.py)
Versions: 0.4.1
Build Dependencies: [libxcb](#libxcb), pkgconfig
Link Dependencies: [libxcb](#libxcb)
Description: The XCB util modules provide a number of libraries which sit on top of libxcb, the core X protocol library, and some of the extension libraries. These experimental libraries provide convenience functions and interfaces which make the raw X protocol more usable. Some of the libraries also provide client-side code which is not strictly part of the X protocol but which has traditionally been provided by Xlib.
---
xcb-util-xrm[¶](#xcb-util-xrm)
===
Homepage:
* <https://github.com/Airblader/xcb-util-xrm>
Spack package:
* [xcb-util-xrm/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/xcb-util-xrm/package.py)
Versions: 1.2
Build Dependencies: pkgconfig, [m4](#m4), [autoconf](#autoconf), [libxcb](#libxcb), [automake](#automake), [libtool](#libtool)
Link Dependencies: [libxcb](#libxcb)
Description: The XCB util-xrm module provides the 'xrm' library, i.e. utility functions for the X resource manager.
---
xclip[¶](#xclip)
===
Homepage:
* <https://github.com/astrand/xclip>
Spack package:
* [xclip/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/xclip/package.py)
Versions: 0.13
Build Dependencies: [libx11](#libx11), [m4](#m4), [autoconf](#autoconf), [automake](#automake), [libtool](#libtool), [libxmu](#libxmu)
Link Dependencies: [libxmu](#libxmu), [libx11](#libx11)
Description: xclip is a command line utility that is designed to run on any system with an X11 implementation. It provides an interface to X selections ("the clipboard") from the command line. It can read data from standard in or a file and place it in an X selection for pasting into other X applications.
xclip can also print an X selection to standard out, which can then be redirected to a file or another program.
---
xclipboard[¶](#xclipboard)
===
Homepage:
* <http://cgit.freedesktop.org/xorg/app/xclipboard>
Spack package:
* [xclipboard/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/xclipboard/package.py)
Versions: 1.1.3
Build Dependencies: [libxkbfile](#libxkbfile), [util-macros](#util-macros), pkgconfig, [libx11](#libx11), [libxt](#libxt), [libxaw](#libxaw), [libxmu](#libxmu), [xproto](#xproto)
Link Dependencies: [libxmu](#libxmu), [libx11](#libx11), [libxaw](#libxaw), [libxkbfile](#libxkbfile), [libxt](#libxt)
Description: xclipboard is used to collect and display text selections that are sent to the CLIPBOARD by other clients. It is typically used to save CLIPBOARD selections for later use. It stores each CLIPBOARD selection as a separate string, each of which can be selected.
---
xclock[¶](#xclock)
===
Homepage:
* <http://cgit.freedesktop.org/xorg/app/xclock>
Spack package:
* [xclock/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/xclock/package.py)
Versions: 1.0.7
Build Dependencies: [libxrender](#libxrender), [libxt](#libxt), pkgconfig, [libx11](#libx11), [xproto](#xproto), [util-macros](#util-macros), [libxaw](#libxaw), [libxft](#libxft), [libxkbfile](#libxkbfile), [libxmu](#libxmu)
Link Dependencies: [libxrender](#libxrender), [libxt](#libxt), [libx11](#libx11), [libxmu](#libxmu), [libxaw](#libxaw), [libxft](#libxft), [libxkbfile](#libxkbfile)
Description: xclock is the classic X Window System clock utility. It displays the time in analog or digital form, continuously updated at a frequency which may be specified by the user.
---
xcmiscproto[¶](#xcmiscproto)
===
Homepage:
* <http://cgit.freedesktop.org/xorg/proto/xcmiscproto>
Spack package:
* [xcmiscproto/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/xcmiscproto/package.py)
Versions: 1.2.2
Build Dependencies: [util-macros](#util-macros), pkgconfig
Description: XC-MISC Extension. This extension defines a protocol that provides Xlib two ways to query the server for available resource IDs.
---
xcmsdb[¶](#xcmsdb)
===
Homepage:
* <http://cgit.freedesktop.org/xorg/app/xcmsdb>
Spack package:
* [xcmsdb/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/xcmsdb/package.py)
Versions: 1.0.5
Build Dependencies: [util-macros](#util-macros), pkgconfig, [libx11](#libx11)
Link Dependencies: [libx11](#libx11)
Description: xcmsdb is used to load, query, or remove Device Color Characterization data stored in properties on the root window of the screen as specified in section 7, Device Color Characterization, of the X11 Inter-Client Communication Conventions Manual (ICCCM).
---
xcompmgr[¶](#xcompmgr)
===
Homepage:
* <http://cgit.freedesktop.org/xorg/app/xcompmgr>
Spack package:
* [xcompmgr/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/xcompmgr/package.py)
Versions: 1.1.7
Build Dependencies: [libxrender](#libxrender), [util-macros](#util-macros), pkgconfig, [libxext](#libxext), [libxdamage](#libxdamage), [libxcomposite](#libxcomposite), [libxfixes](#libxfixes)
Link Dependencies: [libxrender](#libxrender), [libxcomposite](#libxcomposite), [libxfixes](#libxfixes), [libxext](#libxext), [libxdamage](#libxdamage)
Description: xcompmgr is a sample compositing manager for X servers supporting the XFIXES, DAMAGE, RENDER, and COMPOSITE extensions. It enables basic eye-candy effects.
---
xconsole[¶](#xconsole)
===
Homepage:
* <http://cgit.freedesktop.org/xorg/app/xconsole>
Spack package:
* [xconsole/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/xconsole/package.py)
Versions: 1.0.6
Build Dependencies: [util-macros](#util-macros), pkgconfig, [libx11](#libx11), [xproto](#xproto), [libxaw](#libxaw), [libxmu](#libxmu), [libxt](#libxt)
Link Dependencies: [libxmu](#libxmu), [xproto](#xproto), [libxaw](#libxaw), [libx11](#libx11), [libxt](#libxt)
Description: xconsole displays in an X11 window the messages which are usually sent to /dev/console.
---
xcursor-themes[¶](#xcursor-themes)
===
Homepage:
* <http://cgit.freedesktop.org/xorg/data/cursors>
Spack package:
* [xcursor-themes/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/xcursor-themes/package.py)
Versions: 1.0.4
Build Dependencies: [util-macros](#util-macros), pkgconfig, [libxcursor](#libxcursor), [xcursorgen](#xcursorgen)
Link Dependencies: [libxcursor](#libxcursor)
Description: This is a default set of cursor themes for use with libXcursor, originally created for the XFree86 Project, and now shipped as part of the X.Org software distribution.
---
xcursorgen[¶](#xcursorgen)
===
Homepage:
* <http://cgit.freedesktop.org/xorg/app/xcursorgen>
Spack package:
* [xcursorgen/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/xcursorgen/package.py)
Versions: 1.0.6
Build Dependencies: [libx11](#libx11), pkgconfig, [libpng](#libpng), [libxcursor](#libxcursor), [util-macros](#util-macros)
Link Dependencies: [libx11](#libx11), [libpng](#libpng), [libxcursor](#libxcursor)
Description: xcursorgen prepares X11 cursor sets for use with libXcursor.
---
xdbedizzy[¶](#xdbedizzy)
===
Homepage:
* <http://cgit.freedesktop.org/xorg/app/xdbedizzy>
Spack package:
* [xdbedizzy/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/xdbedizzy/package.py)
Versions: 1.1.0
Build Dependencies: [libx11](#libx11), pkgconfig, [libxext](#libxext), [util-macros](#util-macros)
Link Dependencies: [libx11](#libx11), [libxext](#libxext)
Description: xdbedizzy is a demo of the X11 Double Buffer Extension (DBE) creating a double buffered spinning scene.
---
xditview[¶](#xditview)
===
Homepage:
* <http://cgit.freedesktop.org/xorg/app/xditview>
Spack package:
* [xditview/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/xditview/package.py)
Versions: 1.0.4
Build Dependencies: [libxt](#libxt), pkgconfig, [libx11](#libx11), [util-macros](#util-macros), [libxaw](#libxaw), [libxmu](#libxmu)
Link Dependencies: [libxt](#libxt), [libxaw](#libxaw), [libx11](#libx11), [libxmu](#libxmu)
Description: xditview displays ditroff output on an X display.
---
xdm[¶](#xdm)
===
Homepage:
* <http://cgit.freedesktop.org/xorg/app/xdm>
Spack package:
* [xdm/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/xdm/package.py)
Versions: 1.1.11
Build Dependencies: [util-macros](#util-macros), pkgconfig, [libxpm](#libxpm), [libxmu](#libxmu), [libxau](#libxau), [libxdmcp](#libxdmcp), [libxt](#libxt), [libxext](#libxext), [libx11](#libx11), [libxft](#libxft), [libxinerama](#libxinerama), [libxaw](#libxaw)
Link Dependencies: [libxt](#libxt), [libxext](#libxext), [libx11](#libx11), [libxpm](#libxpm), [libxaw](#libxaw), [libxft](#libxft), [libxinerama](#libxinerama), [libxmu](#libxmu), [libxau](#libxau), [libxdmcp](#libxdmcp)
Description: X Display Manager / XDMCP server.
---
xdpyinfo[¶](#xdpyinfo)
===
Homepage:
* <http://cgit.freedesktop.org/xorg/app/xdpyinfo>
Spack package:
* [xdpyinfo/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/xdpyinfo/package.py)
Versions: 1.3.2
Build Dependencies: [inputproto](#inputproto), [xproto](#xproto), pkgconfig, [libx11](#libx11), [libxext](#libxext), [libxcb](#libxcb), [util-macros](#util-macros), [recordproto](#recordproto), [libxtst](#libxtst), [fixesproto](#fixesproto)
Link Dependencies: [libxcb](#libxcb), [libxext](#libxext), [libxtst](#libxtst), [libx11](#libx11)
Description: xdpyinfo is a utility for displaying information about an X server. It is used to examine the capabilities of a server, the predefined values for various parameters used in communicating between clients and the server, and the different types of screens, visuals, and X11 protocol extensions that are available.
---
xdriinfo[¶](#xdriinfo)
===
Homepage:
* <http://cgit.freedesktop.org/xorg/app/xdriinfo>
Spack package:
* [xdriinfo/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/xdriinfo/package.py)
Versions: 1.0.5
Build Dependencies: [libx11](#libx11), [util-macros](#util-macros), pkgconfig, [libxext](#libxext), [libxdamage](#libxdamage), [libxfixes](#libxfixes), [glproto](#glproto), [libxshmfence](#libxshmfence), [expat](#expat), [pcre](#pcre)
Link Dependencies: [libxext](#libxext), [libx11](#libx11), [libxdamage](#libxdamage), [libxfixes](#libxfixes), [libxshmfence](#libxshmfence), [expat](#expat), [pcre](#pcre)
Description: xdriinfo - query configuration information of X11 DRI drivers.
---
xedit[¶](#xedit)
===
Homepage:
* <https://cgit.freedesktop.org/xorg/app/xedit>
Spack package:
* [xedit/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/xedit/package.py)
Versions: 1.2.2
Build Dependencies: [libxt](#libxt), pkgconfig, [libx11](#libx11), [util-macros](#util-macros), [libxaw](#libxaw), [libxmu](#libxmu)
Link Dependencies: [libxt](#libxt), [libxaw](#libxaw), [libx11](#libx11), [libxmu](#libxmu)
Description: Xedit is a simple text editor for X.
---
xerces-c[¶](#xerces-c)
===
Homepage:
* <https://xerces.apache.org/xerces-c>
Spack package:
* [xerces-c/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/xerces-c/package.py)
Versions: 3.2.2, 3.2.1, 3.1.4
Link Dependencies: [libiconv](#libiconv), [icu4c](#icu4c)
Description: Xerces-C++ is a validating XML parser written in a portable subset of C++. Xerces-C++ makes it easy to give your application the ability to read and write XML data. A shared library is provided for parsing, generating, manipulating, and validating XML documents using the DOM, SAX, and SAX2 APIs.
---
xeus[¶](#xeus)
===
Homepage:
* <https://xeus.readthedocs.io/en/latest/>
Spack package:
* [xeus/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/xeus/package.py)
Versions: develop, 0.15.0, 0.14.1
Build Dependencies: [zeromq](#zeromq), [cppzmq](#cppzmq), [nlohmann-json](#nlohmann-json), [cmake](#cmake), [libuuid](#libuuid), [xtl](#xtl), [cryptopp](#cryptopp)
Link Dependencies: [zeromq](#zeromq), [cppzmq](#cppzmq), [nlohmann-json](#nlohmann-json), [libuuid](#libuuid), [xtl](#xtl), [cryptopp](#cryptopp)
Description: QuantStack C++ implementation of the Jupyter kernel protocol
---
xev[¶](#xev)
===
Homepage:
* <http://cgit.freedesktop.org/xorg/app/xev>
Spack package:
* [xev/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/xev/package.py)
Versions: 1.2.2
Build Dependencies: [xproto](#xproto), pkgconfig, [libx11](#libx11), [libxrandr](#libxrandr), [util-macros](#util-macros)
Link Dependencies: [libx11](#libx11), [libxrandr](#libxrandr)
Description: xev creates a window and then asks the X server to send it X11 events whenever anything happens to the window (such as it being moved, resized, typed in, clicked in, etc.). You can also attach it to an existing window. It is useful for seeing what causes events to occur and to display the information that they contain; it is essentially a debugging and development tool, and should not be needed in normal usage.
---
xextproto[¶](#xextproto)
===
Homepage:
* <http://cgit.freedesktop.org/xorg/proto/xextproto>
Spack package:
* [xextproto/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/xextproto/package.py)
Versions: 7.3.0
Build Dependencies: [util-macros](#util-macros), pkgconfig
Description: X Protocol Extensions.
---
xeyes[¶](#xeyes)
===
Homepage:
* <http://cgit.freedesktop.org/xorg/app/xeyes>
Spack package:
* [xeyes/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/xeyes/package.py)
Versions: 1.1.1
Build Dependencies: [libxrender](#libxrender), [libxt](#libxt), pkgconfig, [libxext](#libxext), [util-macros](#util-macros), [libx11](#libx11), [libxmu](#libxmu)
Link Dependencies: [libxrender](#libxrender), [libxt](#libxt), [libx11](#libx11), [libxext](#libxext), [libxmu](#libxmu)
Description: xeyes - a follow-the-mouse X demo, using the X SHAPE extension
---
xf86bigfontproto[¶](#xf86bigfontproto)
===
Homepage:
* <https://cgit.freedesktop.org/xorg/proto/xf86bigfontproto>
Spack package:
* [xf86bigfontproto/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/xf86bigfontproto/package.py)
Versions: 1.2.0
Description: X.org XF86BigFontProto protocol headers.
---
xf86dga[¶](#xf86dga)
===
Homepage:
* <http://cgit.freedesktop.org/xorg/app/xf86dga>
Spack package:
* [xf86dga/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/xf86dga/package.py)
Versions: 1.0.3
Build Dependencies: [libxxf86dga](#libxxf86dga), pkgconfig, [libx11](#libx11), [util-macros](#util-macros)
Link Dependencies: [libxxf86dga](#libxxf86dga), [libx11](#libx11)
Description: dga is a simple test client for the XFree86-DGA extension.
---
xf86dgaproto[¶](#xf86dgaproto)
===
Homepage:
* <https://cgit.freedesktop.org/xorg/proto/xf86dgaproto>
Spack package:
* [xf86dgaproto/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/xf86dgaproto/package.py)
Versions: 2.1
Description: X.org XF86DGAProto protocol headers.
---
xf86driproto[¶](#xf86driproto)
===
Homepage:
* <http://cgit.freedesktop.org/xorg/proto/xf86driproto>
Spack package:
* [xf86driproto/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/xf86driproto/package.py)
Versions: 2.1.1
Build Dependencies: [util-macros](#util-macros), pkgconfig
Description: XFree86 Direct Rendering Infrastructure Extension. This extension defines a protocol to allow user applications to access the video hardware without requiring data to be passed through the X server.
---
xf86miscproto[¶](#xf86miscproto)
===
Homepage:
* <http://cgit.freedesktop.org/xorg/proto/xf86miscproto>
Spack package:
* [xf86miscproto/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/xf86miscproto/package.py)
Versions: 0.9.3
Description: This package includes the protocol definitions of the "XFree86-Misc" extension to the X11 protocol. The "XFree86-Misc" extension is supported by the XFree86 X server and versions of the Xorg X server prior to Xorg 1.6.
---
xf86rushproto[¶](#xf86rushproto)
===
Homepage:
* <https://cgit.freedesktop.org/xorg/proto/xf86rushproto>
Spack package:
* [xf86rushproto/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/xf86rushproto/package.py)
Versions: 1.1.2
Description: X.org XF86RushProto protocol headers.
---
xf86vidmodeproto[¶](#xf86vidmodeproto)
===
Homepage:
* <http://cgit.freedesktop.org/xorg/proto/xf86vidmodeproto>
Spack package:
* [xf86vidmodeproto/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/xf86vidmodeproto/package.py)
Versions: 2.3.1
Build Dependencies: [util-macros](#util-macros), pkgconfig
Description: XFree86 Video Mode Extension. This extension defines a protocol for dynamically configuring modelines and gamma.
---
xfd[¶](#xfd)
===
Homepage:
* <http://cgit.freedesktop.org/xorg/app/xfd>
Spack package:
* [xfd/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/xfd/package.py)
Versions: 1.1.2
Build Dependencies: [libxrender](#libxrender), [libxt](#libxt), pkgconfig, [fontconfig](#fontconfig), [util-macros](#util-macros), [libxaw](#libxaw), [libxft](#libxft), [libxmu](#libxmu), [xproto](#xproto)
Link Dependencies: [libxrender](#libxrender), [libxaw](#libxaw), [fontconfig](#fontconfig), [libxt](#libxt), [libxft](#libxft), [libxmu](#libxmu)
Description: xfd - display all the characters in a font using either the X11 core protocol or libXft2.
---
xfindproxy[¶](#xfindproxy)
===
Homepage:
* <http://cgit.freedesktop.org/xorg/app/xfindproxy>
Spack package:
* [xfindproxy/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/xfindproxy/package.py)
Versions: 1.0.4
Build Dependencies: [libice](#libice), [libxt](#libxt), pkgconfig, [util-macros](#util-macros), [xproxymanagementprotocol](#xproxymanagementprotocol), [xproto](#xproto)
Link Dependencies: [libice](#libice), [libxt](#libxt)
Description: xfindproxy is used to locate available X11 proxy services. It utilizes the Proxy Management Protocol to communicate with a proxy manager. The proxy manager keeps track of all available proxy services, starts new proxies when necessary, and makes sure that proxies are shared whenever possible.
---
xfontsel[¶](#xfontsel)
===
Homepage:
* <http://cgit.freedesktop.org/xorg/app/xfontsel>
Spack package:
* [xfontsel/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/xfontsel/package.py)
Versions: 1.0.5
Build Dependencies: [libxt](#libxt), pkgconfig, [libx11](#libx11), [util-macros](#util-macros), [libxaw](#libxaw), [libxmu](#libxmu)
Link Dependencies: [libxt](#libxt), [libxaw](#libxaw), [libx11](#libx11), [libxmu](#libxmu)
Description: The xfontsel application provides a simple way to display the X11 core protocol fonts known to your X server, examine samples of each, and retrieve the X Logical Font Description ("XLFD") full name for a font.
---
xfs[¶](#xfs)
===
Homepage:
* <http://cgit.freedesktop.org/xorg/app/xfs>
Spack package:
* [xfs/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/xfs/package.py)
Versions: 1.1.4
Build Dependencies: [xproto](#xproto), pkgconfig, [util-macros](#util-macros), [fontsproto](#fontsproto), [libxfont](#libxfont), [font-util](#font-util), [xtrans](#xtrans)
Link Dependencies: [libxfont](#libxfont), [font-util](#font-util)
Description: X Font Server.
---
xfsinfo[¶](#xfsinfo)
===
Homepage:
* <http://cgit.freedesktop.org/xorg/app/xfsinfo>
Spack package:
* [xfsinfo/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/xfsinfo/package.py)
Versions: 1.0.5
Build Dependencies: [xproto](#xproto), pkgconfig, [libfs](#libfs), [util-macros](#util-macros)
Link Dependencies: [libfs](#libfs)
Description: xfsinfo is a utility for displaying information about an X font server. It is used to examine the capabilities of a server, the predefined values for various parameters used in communicating between clients and the server, and the font catalogues and alternate servers that are available.
---

xfwp[¶](#xfwp)
===

Homepage:
* <http://cgit.freedesktop.org/xorg/app/xfwp>

Spack package:
* [xfwp/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/xfwp/package.py)

Versions: 1.0.3

Build Dependencies: [libice](#libice), [xproto](#xproto), pkgconfig, [xproxymanagementprotocol](#xproxymanagementprotocol), [util-macros](#util-macros)

Link Dependencies: [libice](#libice)

Description: xfwp proxies X11 protocol connections, such as through a firewall.

---

xgamma[¶](#xgamma)
===

Homepage:
* <http://cgit.freedesktop.org/xorg/app/xgamma>

Spack package:
* [xgamma/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/xgamma/package.py)

Versions: 1.0.6

Build Dependencies: [xproto](#xproto), [libxxf86vm](#libxxf86vm), [libx11](#libx11), pkgconfig, [util-macros](#util-macros)

Link Dependencies: [libxxf86vm](#libxxf86vm), [libx11](#libx11)

Description: xgamma allows X users to query and alter the gamma correction of a monitor via the X video mode extension (XFree86-VidModeExtension).

---

xgc[¶](#xgc)
===

Homepage:
* <http://cgit.freedesktop.org/xorg/app/xgc>

Spack package:
* [xgc/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/xgc/package.py)

Versions: 1.0.5

Build Dependencies: [util-macros](#util-macros), pkgconfig, [libxaw](#libxaw), [libxt](#libxt), [bison](#bison), [flex](#flex)

Link Dependencies: [libxt](#libxt), [libxaw](#libxaw)

Description: xgc is an X11 graphics demo that shows various features of the X11 core protocol graphics primitives.
---

xhmm[¶](#xhmm)
===

Homepage:
* <http://atgu.mgh.harvard.edu/xhmm/index.shtml>

Spack package:
* [xhmm/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/xhmm/package.py)

Versions: 20160104

Build Dependencies: lapack

Link Dependencies: lapack

Description: The XHMM C++ software suite was written to call copy number variation (CNV) from next-generation sequencing projects, where exome capture was used (or targeted sequencing, more generally).

---

xhost[¶](#xhost)
===

Homepage:
* <http://cgit.freedesktop.org/xorg/app/xhost>

Spack package:
* [xhost/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/xhost/package.py)

Versions: 1.0.7

Build Dependencies: [xproto](#xproto), pkgconfig, [libx11](#libx11), [util-macros](#util-macros), [libxmu](#libxmu), [libxau](#libxau)

Link Dependencies: [libxmu](#libxmu), [libx11](#libx11), [libxau](#libxau)

Description: xhost is used to manage the list of host names or user names allowed to make connections to the X server.

---

xineramaproto[¶](#xineramaproto)
===

Homepage:
* <http://cgit.freedesktop.org/xorg/proto/xineramaproto>

Spack package:
* [xineramaproto/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/xineramaproto/package.py)

Versions: 1.2.1

Build Dependencies: [util-macros](#util-macros), pkgconfig

Description: X Xinerama Extension. This is an X extension that allows multiple physical screens controlled by a single X server to appear as a single screen.
---

xinit[¶](#xinit)
===

Homepage:
* <http://cgit.freedesktop.org/xorg/app/xinit>

Spack package:
* [xinit/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/xinit/package.py)

Versions: 1.3.4

Build Dependencies: [xproto](#xproto), pkgconfig, [libx11](#libx11), [util-macros](#util-macros)

Link Dependencies: [libx11](#libx11)

Description: The xinit program is used to start the X Window System server and a first client program on systems that are not using a display manager such as xdm.

---

xinput[¶](#xinput)
===

Homepage:
* <http://cgit.freedesktop.org/xorg/app/xinput>

Spack package:
* [xinput/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/xinput/package.py)

Versions: 1.6.2

Build Dependencies: [inputproto](#inputproto), [xineramaproto](#xineramaproto), pkgconfig, [libxext](#libxext), [fixesproto](#fixesproto), [util-macros](#util-macros), [libxi](#libxi), [libx11](#libx11), [libxinerama](#libxinerama), [libxrandr](#libxrandr), [randrproto](#randrproto)

Link Dependencies: [libx11](#libx11), [libxinerama](#libxinerama), [libxext](#libxext), [libxrandr](#libxrandr), [libxi](#libxi)

Description: xinput is a utility to configure and test XInput devices.

---

xios[¶](#xios)
===

Homepage:
* <https://forge.ipsl.jussieu.fr/ioserver/wiki>

Spack package:
* [xios/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/xios/package.py)

Versions: develop, 1.0

Build Dependencies: [netcdf-fortran](#netcdf-fortran), [perl](#perl), mpi, [netcdf](#netcdf), [hdf5](#hdf5), [gmake](#gmake), [perl-uri-escape](#perl-uri-escape), [boost](#boost), [blitz](#blitz)

Link Dependencies: [netcdf-fortran](#netcdf-fortran), mpi, [netcdf](#netcdf), [hdf5](#hdf5), [boost](#boost), [blitz](#blitz)

Description: XML-IO-SERVER library for IO management of climate models.
---

xkbcomp[¶](#xkbcomp)
===

Homepage:
* <https://www.x.org/wiki/XKB/>

Spack package:
* [xkbcomp/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/xkbcomp/package.py)

Versions: 1.3.1

Build Dependencies: [util-macros](#util-macros), pkgconfig, [libx11](#libx11), [xproto](#xproto), [bison](#bison), [libxkbfile](#libxkbfile)

Link Dependencies: [libxkbfile](#libxkbfile), [libx11](#libx11)

Description: The X Keyboard (XKB) Extension essentially replaces the core protocol definition of a keyboard. The extension makes it possible to specify clearly and explicitly most aspects of keyboard behaviour on a per-key basis, and to track more closely the logical and physical state of a keyboard. It also includes a number of keyboard controls designed to make keyboards more accessible to people with physical impairments.

---

xkbdata[¶](#xkbdata)
===

Homepage:
* <https://www.x.org/wiki/XKB/>

Spack package:
* [xkbdata/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/xkbdata/package.py)

Versions: 1.0.1

Build Dependencies: [xkbcomp](#xkbcomp)

Description: The XKB data files for the various keyboard models, layouts, and locales.

---

xkbevd[¶](#xkbevd)
===

Homepage:
* <http://cgit.freedesktop.org/xorg/app/xkbevd>

Spack package:
* [xkbevd/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/xkbevd/package.py)

Versions: 1.1.4

Build Dependencies: [libxkbfile](#libxkbfile), [util-macros](#util-macros), pkgconfig, [libx11](#libx11), [bison](#bison)

Link Dependencies: [libxkbfile](#libxkbfile), [libx11](#libx11)

Description: XKB event daemon demo.
---

xkbprint[¶](#xkbprint)
===

Homepage:
* <http://cgit.freedesktop.org/xorg/app/xkbprint>

Spack package:
* [xkbprint/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/xkbprint/package.py)

Versions: 1.0.4

Build Dependencies: [libxkbfile](#libxkbfile), [util-macros](#util-macros), pkgconfig, [libx11](#libx11), [xproto](#xproto)

Link Dependencies: [libxkbfile](#libxkbfile), [libx11](#libx11)

Description: xkbprint generates a printable or encapsulated PostScript description of an XKB keyboard description.

---

xkbutils[¶](#xkbutils)
===

Homepage:
* <http://cgit.freedesktop.org/xorg/app/xkbutils>

Spack package:
* [xkbutils/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/xkbutils/package.py)

Versions: 1.0.4

Build Dependencies: [inputproto](#inputproto), [libxt](#libxt), pkgconfig, [libx11](#libx11), [xproto](#xproto), [util-macros](#util-macros), [libxaw](#libxaw)

Link Dependencies: [libxt](#libxt), [libxaw](#libxaw), [libx11](#libx11)

Description: xkbutils is a collection of small utilities utilizing the XKeyboard (XKB) extension to the X11 protocol.

---

xkeyboard-config[¶](#xkeyboard-config)
===

Homepage:
* <https://www.freedesktop.org/wiki/Software/XKeyboardConfig/>

Spack package:
* [xkeyboard-config/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/xkeyboard-config/package.py)

Versions: 2.18

Build Dependencies: [libxslt](#libxslt), [xproto](#xproto), pkgconfig, [intltool](#intltool), [libx11](#libx11)

Link Dependencies: [libx11](#libx11)

Description: This project provides a consistent, well-structured, frequently released, open source database of keyboard configuration data. The project is targeted to XKB-based systems.
---

xkill[¶](#xkill)
===

Homepage:
* <http://cgit.freedesktop.org/xorg/app/xkill>

Spack package:
* [xkill/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/xkill/package.py)

Versions: 1.0.4

Build Dependencies: [libx11](#libx11), [util-macros](#util-macros), pkgconfig, [libxmu](#libxmu), [xproto](#xproto)

Link Dependencies: [libx11](#libx11), [libxmu](#libxmu)

Description: xkill is a utility for forcing the X server to close connections to clients. This program is very dangerous, but is useful for aborting programs that have displayed undesired windows on a user's screen.

---

xload[¶](#xload)
===

Homepage:
* <http://cgit.freedesktop.org/xorg/app/xload>

Spack package:
* [xload/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/xload/package.py)

Versions: 1.1.2

Build Dependencies: [util-macros](#util-macros), pkgconfig, [libx11](#libx11), [xproto](#xproto), [libxaw](#libxaw), [libxmu](#libxmu), [libxt](#libxt)

Link Dependencies: [libxt](#libxt), [libxaw](#libxaw), [libx11](#libx11), [libxmu](#libxmu)

Description: xload displays a periodically updating histogram of the system load average.

---

xlogo[¶](#xlogo)
===

Homepage:
* <http://cgit.freedesktop.org/xorg/app/xlogo>

Spack package:
* [xlogo/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/xlogo/package.py)

Versions: 1.0.4

Build Dependencies: [libxrender](#libxrender), [libxt](#libxt), [libxaw](#libxaw), [libsm](#libsm), [libxext](#libxext), [util-macros](#util-macros), [libx11](#libx11), [libxft](#libxft), [libxmu](#libxmu), pkgconfig

Link Dependencies: [libxrender](#libxrender), [libxt](#libxt), [libxaw](#libxaw), [libsm](#libsm), [libxext](#libxext), [libx11](#libx11), [libxft](#libxft), [libxmu](#libxmu)

Description: The xlogo program simply displays the X Window System logo.
---

xlsatoms[¶](#xlsatoms)
===

Homepage:
* <http://cgit.freedesktop.org/xorg/app/xlsatoms>

Spack package:
* [xlsatoms/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/xlsatoms/package.py)

Versions: 1.1.2

Build Dependencies: [libxcb](#libxcb), pkgconfig, [libx11](#libx11), [util-macros](#util-macros)

Link Dependencies: [libxcb](#libxcb), [libx11](#libx11)

Description: xlsatoms lists the interned atoms defined on an X11 server.

---

xlsclients[¶](#xlsclients)
===

Homepage:
* <http://cgit.freedesktop.org/xorg/app/xlsclients>

Spack package:
* [xlsclients/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/xlsclients/package.py)

Versions: 1.1.3

Build Dependencies: [libxcb](#libxcb), pkgconfig, [libx11](#libx11), [util-macros](#util-macros)

Link Dependencies: [libxcb](#libxcb), [libx11](#libx11)

Description: xlsclients is a utility for listing information about the client applications running on an X11 server.

---

xlsfonts[¶](#xlsfonts)
===

Homepage:
* <http://cgit.freedesktop.org/xorg/app/xlsfonts>

Spack package:
* [xlsfonts/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/xlsfonts/package.py)

Versions: 1.0.5

Build Dependencies: [xproto](#xproto), pkgconfig, [libx11](#libx11), [util-macros](#util-macros)

Link Dependencies: [libx11](#libx11)

Description: xlsfonts lists fonts available from an X server via the X11 core protocol.

---

xmag[¶](#xmag)
===

Homepage:
* <http://cgit.freedesktop.org/xorg/app/xmag>

Spack package:
* [xmag/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/xmag/package.py)

Versions: 1.0.6

Build Dependencies: [libxt](#libxt), pkgconfig, [libx11](#libx11), [util-macros](#util-macros), [libxaw](#libxaw), [libxmu](#libxmu)

Link Dependencies: [libxt](#libxt), [libxaw](#libxaw), [libx11](#libx11), [libxmu](#libxmu)

Description: xmag displays a magnified snapshot of a portion of an X11 screen.
---

xman[¶](#xman)
===

Homepage:
* <http://cgit.freedesktop.org/xorg/app/xman>

Spack package:
* [xman/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/xman/package.py)

Versions: 1.1.4

Build Dependencies: [xproto](#xproto), [util-macros](#util-macros), pkgconfig, [libxaw](#libxaw), [libxt](#libxt)

Link Dependencies: [libxt](#libxt), [libxaw](#libxaw)

Description: xman is a graphical manual page browser using the Athena Widgets (Xaw) toolkit.

---

xmessage[¶](#xmessage)
===

Homepage:
* <http://cgit.freedesktop.org/xorg/app/xmessage>

Spack package:
* [xmessage/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/xmessage/package.py)

Versions: 1.0.4

Build Dependencies: [libxt](#libxt), pkgconfig, [libxaw](#libxaw), [util-macros](#util-macros)

Link Dependencies: [libxt](#libxt), [libxaw](#libxaw)

Description: xmessage displays a message or query in a window. The user can click on an "okay" button to dismiss it or can select one of several buttons to answer a question. xmessage can also exit after a specified time.

---

xmh[¶](#xmh)
===

Homepage:
* <http://cgit.freedesktop.org/xorg/app/xmh>

Spack package:
* [xmh/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/xmh/package.py)

Versions: 1.0.3

Build Dependencies: [libxt](#libxt), pkgconfig, [libx11](#libx11), [util-macros](#util-macros), [libxaw](#libxaw), [libxmu](#libxmu), [xbitmaps](#xbitmaps)

Link Dependencies: [libxt](#libxt), [libxaw](#libxaw), [libx11](#libx11), [libxmu](#libxmu)

Description: The xmh program provides a graphical user interface to the MH Message Handling System. To actually do things with your mail, it makes calls to the MH package.
---

xmlf90[¶](#xmlf90)
===

Homepage:
* <https://launchpad.net/xmlf90>

Spack package:
* [xmlf90/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/xmlf90/package.py)

Versions: 1.5.2

Build Dependencies: [autoconf](#autoconf), [automake](#automake), [libtool](#libtool), [m4](#m4)

Description: xmlf90 is a suite of libraries to handle XML in Fortran.

---

xmlto[¶](#xmlto)
===

Homepage:
* <https://pagure.io/xmlto>

Spack package:
* [xmlto/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/xmlto/package.py)

Versions: 0.0.28

Build Dependencies: [libxslt](#libxslt)

Link Dependencies: [libxslt](#libxslt)

Description: xmlto is a simple shell script for converting XML files to various formats. It serves as an easy-to-use command-line frontend that produces fine output without the user having to remember many long options or the syntax of the backends.

---

xmodmap[¶](#xmodmap)
===

Homepage:
* <http://cgit.freedesktop.org/xorg/app/xmodmap>

Spack package:
* [xmodmap/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/xmodmap/package.py)

Versions: 1.0.9

Build Dependencies: [xproto](#xproto), pkgconfig, [libx11](#libx11), [util-macros](#util-macros)

Link Dependencies: [libx11](#libx11)

Description: The xmodmap program is used to edit and display the keyboard modifier map and keymap table that are used by client applications to convert event keycodes into keysyms. It is usually run from the user's session startup script to configure the keyboard according to personal tastes.
---

xmore[¶](#xmore)
===

Homepage:
* <http://cgit.freedesktop.org/xorg/app/xmore>

Spack package:
* [xmore/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/xmore/package.py)

Versions: 1.0.2

Build Dependencies: [libxt](#libxt), pkgconfig, [libxaw](#libxaw), [util-macros](#util-macros)

Link Dependencies: [libxt](#libxt), [libxaw](#libxaw)

Description: xmore - plain text display program for the X Window System.

---

xorg-cf-files[¶](#xorg-cf-files)
===

Homepage:
* <http://cgit.freedesktop.org/xorg/util/cf>

Spack package:
* [xorg-cf-files/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/xorg-cf-files/package.py)

Versions: 1.0.6

Build Dependencies: pkgconfig

Description: The xorg-cf-files package contains the data files for the imake utility, defining the known settings for a wide variety of platforms (many of which have not been verified or tested in over a decade), and for many of the libraries formerly delivered in the X.Org monolithic releases.

---

xorg-docs[¶](#xorg-docs)
===

Homepage:
* <http://cgit.freedesktop.org/xorg/doc/xorg-docs>

Spack package:
* [xorg-docs/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/xorg-docs/package.py)

Versions: 1.7.1

Build Dependencies: [util-macros](#util-macros), pkgconfig, [xmlto](#xmlto), [xorg-sgml-doctools](#xorg-sgml-doctools)

Description: This package provides miscellaneous documentation for the X Window System that does not fit better into other packages. The preferred documentation format for these documents is DocBook XML.
---

xorg-gtest[¶](#xorg-gtest)
===

Homepage:
* <https://people.freedesktop.org/~cndougla/xorg-gtest/>

Spack package:
* [xorg-gtest/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/xorg-gtest/package.py)

Versions: 0.7.1

Build Dependencies: [libx11](#libx11), pkgconfig, [xorg-server](#xorg-server), [libxi](#libxi), [util-macros](#util-macros)

Link Dependencies: [libx11](#libx11), [xorg-server](#xorg-server), [libxi](#libxi)

Description: Provides a Google Test environment for starting and stopping a X server for testing purposes.

---

xorg-server[¶](#xorg-server)
===

Homepage:
* <http://cgit.freedesktop.org/xorg/xserver>

Spack package:
* [xorg-server/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/xorg-server/package.py)

Versions: 1.18.99.901

Build Dependencies: [util-macros](#util-macros), pkgconfig, [dri3proto](#dri3proto), [mesa](#mesa), [font-util](#font-util), [compositeproto](#compositeproto), [kbproto](#kbproto), [libxshmfence](#libxshmfence), [fontsproto](#fontsproto), [xtrans](#xtrans), [xproto](#xproto), [inputproto](#inputproto), [presentproto](#presentproto), [randrproto](#randrproto), [libx11](#libx11), [damageproto](#damageproto), [xf86driproto](#xf86driproto), [libdrm](#libdrm), [resourceproto](#resourceproto), [dri2proto](#dri2proto), [videoproto](#videoproto), [xcmiscproto](#xcmiscproto), [libxfont2](#libxfont2), [bigreqsproto](#bigreqsproto), [pixman](#pixman), [glproto](#glproto), [libxdamage](#libxdamage), [renderproto](#renderproto), [libepoxy](#libepoxy), [xineramaproto](#xineramaproto), [bison](#bison), [recordproto](#recordproto), [fixesproto](#fixesproto), [libxext](#libxext), [scrnsaverproto](#scrnsaverproto), [libxfixes](#libxfixes), [xextproto](#xextproto), [flex](#flex), [libxkbfile](#libxkbfile)

Link Dependencies: [libdrm](#libdrm), [damageproto](#damageproto), [libxdamage](#libxdamage), [compositeproto](#compositeproto), [kbproto](#kbproto), [libxshmfence](#libxshmfence), [xtrans](#xtrans), [xproto](#xproto), [inputproto](#inputproto), [presentproto](#presentproto), [libx11](#libx11), [resourceproto](#resourceproto), [xf86driproto](#xf86driproto), [libxkbfile](#libxkbfile), [videoproto](#videoproto), [xcmiscproto](#xcmiscproto), [libxfont2](#libxfont2), [bigreqsproto](#bigreqsproto), [pixman](#pixman), [glproto](#glproto), [font-util](#font-util), [renderproto](#renderproto), [libepoxy](#libepoxy), [xineramaproto](#xineramaproto), [fontsproto](#fontsproto), [recordproto](#recordproto), [randrproto](#randrproto), [libxext](#libxext), [scrnsaverproto](#scrnsaverproto), [libxfixes](#libxfixes), [xextproto](#xextproto), [fixesproto](#fixesproto)

Description: X.Org Server is the free and open source implementation of the display server for the X Window System stewarded by the X.Org Foundation.

---

xorg-sgml-doctools[¶](#xorg-sgml-doctools)
===

Homepage:
* <http://cgit.freedesktop.org/xorg/doc/xorg-sgml-doctools>

Spack package:
* [xorg-sgml-doctools/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/xorg-sgml-doctools/package.py)

Versions: 1.11

Build Dependencies: [util-macros](#util-macros), pkgconfig

Description: This package provides a common set of SGML entities and XML/CSS style sheets used in building/formatting the documentation provided in other X.Org packages.
---

xphelloworld[¶](#xphelloworld)
===

Homepage:
* <http://cgit.freedesktop.org/xorg/app/xphelloworld>

Spack package:
* [xphelloworld/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/xphelloworld/package.py)

Versions: 1.0.1

Build Dependencies: [libxt](#libxt), pkgconfig, [libxaw](#libxaw), [libxp](#libxp), [libxprintapputil](#libxprintapputil), [util-macros](#util-macros), [libxprintutil](#libxprintutil), [libx11](#libx11)

Link Dependencies: [libxt](#libxt), [libxaw](#libxaw), [libxp](#libxp), [libxprintapputil](#libxprintapputil), [libxprintutil](#libxprintutil), [libx11](#libx11)

Description: Xprint sample applications.

---

xplor-nih[¶](#xplor-nih)
===

Homepage:
* <https://nmr.cit.nih.gov/xplor-nih/>

Spack package:
* [xplor-nih/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/xplor-nih/package.py)

Versions: 2.45

Build Dependencies: [python](#python)

Run Dependencies: [python](#python)

Description: XPLOR-NIH is a structure determination program. Note: A manual download is required for XPLOR-NIH. Spack will search your current directory for the download file. Alternatively, add this file to a mirror so that Spack can find it. For instructions on how to set up a mirror, see http://spack.readthedocs.io/en/latest/mirrors.html

---

xplsprinters[¶](#xplsprinters)
===

Homepage:
* <http://cgit.freedesktop.org/xorg/app/xplsprinters>

Spack package:
* [xplsprinters/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/xplsprinters/package.py)

Versions: 1.0.1

Build Dependencies: [libxprintutil](#libxprintutil), [util-macros](#util-macros), pkgconfig, [libx11](#libx11), [libxp](#libxp)

Link Dependencies: [libxprintutil](#libxprintutil), [libx11](#libx11), [libxp](#libxp)

Description: List Xprint printers.
---

xpr[¶](#xpr)
===

Homepage:
* <http://cgit.freedesktop.org/xorg/app/xpr>

Spack package:
* [xpr/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/xpr/package.py)

Versions: 1.0.4

Build Dependencies: [xproto](#xproto), [util-macros](#util-macros), pkgconfig, [libx11](#libx11), [libxmu](#libxmu)

Link Dependencies: [libxmu](#libxmu), [libx11](#libx11)

Description: xpr takes as input a window dump file produced by xwd and formats it for output on various types of printers.

---

xprehashprinterlist[¶](#xprehashprinterlist)
===

Homepage:
* <http://cgit.freedesktop.org/xorg/app/xprehashprinterlist>

Spack package:
* [xprehashprinterlist/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/xprehashprinterlist/package.py)

Versions: 1.0.1

Build Dependencies: [util-macros](#util-macros), pkgconfig, [libx11](#libx11), [libxp](#libxp)

Link Dependencies: [libx11](#libx11), [libxp](#libxp)

Description: Rehash list of Xprint printers.

---

xprop[¶](#xprop)
===

Homepage:
* <http://cgit.freedesktop.org/xorg/app/xprop>

Spack package:
* [xprop/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/xprop/package.py)

Versions: 1.2.2

Build Dependencies: [xproto](#xproto), pkgconfig, [libx11](#libx11), [util-macros](#util-macros)

Link Dependencies: [libx11](#libx11)

Description: xprop is a command line tool to display and/or set window and font properties of an X server.

---

xproto[¶](#xproto)
===

Homepage:
* <http://cgit.freedesktop.org/xorg/proto/x11proto>

Spack package:
* [xproto/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/xproto/package.py)

Versions: 7.0.31, 7.0.29

Build Dependencies: [util-macros](#util-macros), pkgconfig

Description: X Window System Core Protocol. This package provides the headers and specification documents defining the X Window System Core Protocol, Version 11. It also includes a number of headers that aren't purely protocol related, but are depended upon by many other X Window System packages to provide common definitions and a porting layer.

---

xproxymanagementprotocol[¶](#xproxymanagementprotocol)
===

Homepage:
* <http://cgit.freedesktop.org/xorg/proto/pmproto>

Spack package:
* [xproxymanagementprotocol/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/xproxymanagementprotocol/package.py)

Versions: 1.0.3

Description: The Proxy Management Protocol is an ICE based protocol that provides a way for application servers to easily locate proxy services available to them.

---

xqilla[¶](#xqilla)
===

Homepage:
* <http://xqilla.sourceforge.net/HomePage>

Spack package:
* [xqilla/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/xqilla/package.py)

Versions: 2.3.3

Build Dependencies: [xerces-c](#xerces-c)

Link Dependencies: [xerces-c](#xerces-c)

Description: XQilla is an XQuery and XPath 2 library and command line utility written in C++, implemented on top of the Xerces-C library.

---

xrandr[¶](#xrandr)
===

Homepage:
* <http://cgit.freedesktop.org/xorg/app/xrandr>

Spack package:
* [xrandr/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/xrandr/package.py)

Versions: 1.5.0

Build Dependencies: [libxrender](#libxrender), [xproto](#xproto), pkgconfig, [libx11](#libx11), [util-macros](#util-macros), [libxrandr](#libxrandr), [randrproto](#randrproto)

Link Dependencies: [libxrender](#libxrender), [libx11](#libx11), [libxrandr](#libxrandr), [randrproto](#randrproto)

Description: xrandr - primitive command line interface to X11 Resize, Rotate, and Reflect (RandR) extension.
---

xrdb[¶](#xrdb)
===

Homepage:
* <http://cgit.freedesktop.org/xorg/app/xrdb>

Spack package:
* [xrdb/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/xrdb/package.py)

Versions: 1.1.0

Build Dependencies: [xproto](#xproto), [util-macros](#util-macros), pkgconfig, [libx11](#libx11), [libxmu](#libxmu)

Link Dependencies: [libxmu](#libxmu), [libx11](#libx11)

Description: xrdb - X server resource database utility.

---

xrefresh[¶](#xrefresh)
===

Homepage:
* <http://cgit.freedesktop.org/xorg/app/xrefresh>

Spack package:
* [xrefresh/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/xrefresh/package.py)

Versions: 1.0.5

Build Dependencies: [xproto](#xproto), pkgconfig, [libx11](#libx11), [util-macros](#util-macros)

Link Dependencies: [libx11](#libx11)

Description: xrefresh - refresh all or part of an X screen.

---

xrootd[¶](#xrootd)
===

Homepage:
* <http://xrootd.org>

Spack package:
* [xrootd/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/xrootd/package.py)

Versions: 4.8.3, 4.8.2, 4.8.1, 4.8.0, 4.7.1, 4.7.0, 4.6.1, 4.6.0, 4.5.0, 4.4.1, 4.4.0, 4.3.0

Build Dependencies: [zlib](#zlib), [bzip2](#bzip2), [libxml2](#libxml2), [openssl](#openssl), [xz](#xz), [cmake](#cmake), [readline](#readline), [python](#python)

Link Dependencies: [zlib](#zlib), [bzip2](#bzip2), [libxml2](#libxml2), [openssl](#openssl), [xz](#xz), [readline](#readline), [python](#python)

Description: The XROOTD project aims at giving high performance, scalable fault tolerant access to data repositories of many kinds.
---

xrx[¶](#xrx)
===

Homepage:
* <http://cgit.freedesktop.org/xorg/app/xrx>

Spack package:
* [xrx/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/xrx/package.py)

Versions: 1.0.4

Build Dependencies: [libice](#libice), [libxt](#libxt), [libx11](#libx11), [libxext](#libxext), [xproxymanagementprotocol](#xproxymanagementprotocol), [util-macros](#util-macros), [libxaw](#libxaw), pkgconfig, [libxau](#libxau), [xtrans](#xtrans)

Link Dependencies: [libice](#libice), [libxt](#libxt), [libx11](#libx11), [libxext](#libxext), [libxaw](#libxaw), [libxau](#libxau)

Description: The remote execution (RX) service specifies a MIME format for invoking applications remotely, for example via a World Wide Web browser. This RX format specifies a syntax for listing network services required by the application, for example an X display server. The requesting Web browser must identify specific instances of the services in the request to invoke the application.

---

xsbench[¶](#xsbench)
===

Homepage:
* <https://github.com/ANL-CESAR/XSBench/>

Spack package:
* [xsbench/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/xsbench/package.py)

Versions: 18, 14, 13

Build Dependencies: mpi

Link Dependencies: mpi

Description: XSBench is a mini-app representing a key computational kernel of the Monte Carlo neutronics application OpenMC. A full explanation of the theory and purpose of XSBench is provided in docs/XSBench_Theory.pdf.

---

xscope[¶](#xscope)
===

Homepage:
* <http://cgit.freedesktop.org/xorg/app/xscope>

Spack package:
* [xscope/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/xscope/package.py)

Versions: 1.4.1

Build Dependencies: [util-macros](#util-macros), pkgconfig, [xtrans](#xtrans), [xproto](#xproto)

Description: XSCOPE -- a program to monitor X11/Client conversations.
---

xsd[¶](#xsd)
===

Homepage:
* <https://www.codesynthesis.com>

Spack package:
* [xsd/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/xsd/package.py)

Versions: 4.0.0

Build Dependencies: [libtool](#libtool), [xerces-c](#xerces-c)

Link Dependencies: [xerces-c](#xerces-c)

Description: CodeSynthesis XSD is an open-source, cross-platform W3C XML Schema to C++ data binding compiler. It supports in-memory and event-driven XML processing models and is available for a wide range of C++ compilers and platforms.

---

xsdk[¶](#xsdk)
===

Homepage:
* <http://xsdk.info>

Spack package:
* [xsdk/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/xsdk/package.py)

Versions: develop, 0.3.0, xsdk-0.2.0

Build Dependencies: [petsc](#petsc), [alquimia](#alquimia), [dealii](#dealii), [slepc](#slepc), [hypre](#hypre), [plasma](#plasma), [mfem](#mfem), [magma](#magma), [superlu-dist](#superlu-dist), [pflotran](#pflotran), [sundials](#sundials), [phist](#phist), [amrex](#amrex), [trilinos](#trilinos)

Link Dependencies: [petsc](#petsc), [alquimia](#alquimia), [dealii](#dealii), [slepc](#slepc), [hypre](#hypre), [plasma](#plasma), [mfem](#mfem), [magma](#magma), [superlu-dist](#superlu-dist), [pflotran](#pflotran), [sundials](#sundials), [phist](#phist), [amrex](#amrex), [trilinos](#trilinos)

Description: xSDK is a suite of Department of Energy (DOE) packages for numerical simulation. This is a Spack bundle package that installs the xSDK packages.

---

xsdktrilinos[¶](#xsdktrilinos)
===

Homepage:
* <https://trilinos.org/>

Spack package:
* [xsdktrilinos/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/xsdktrilinos/package.py)

Versions: develop, 12.8.1, 12.6.4, xsdk-0.2.0

Build Dependencies: [cmake](#cmake), mpi, [petsc](#petsc), [hypre](#hypre), [trilinos](#trilinos)

Link Dependencies: mpi, [petsc](#petsc), [hypre](#hypre), [trilinos](#trilinos)

Description: xSDKTrilinos contains the portions of Trilinos that depend on PETSc because they would cause a circular dependency if built as part of Trilinos.

---

xset[¶](#xset)
===

Homepage:
* <http://cgit.freedesktop.org/xorg/app/xset>

Spack package:
* [xset/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/xset/package.py)

Versions: 1.2.3

Build Dependencies: [xproto](#xproto), [util-macros](#util-macros), pkgconfig, [libx11](#libx11), [libxmu](#libxmu)

Link Dependencies: [libxmu](#libxmu), [libx11](#libx11)

Description: User preference utility for X.

---

xsetmode[¶](#xsetmode)
===

Homepage:
* <http://cgit.freedesktop.org/xorg/app/xsetmode>

Spack package:
* [xsetmode/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/xsetmode/package.py)

Versions: 1.0.0

Build Dependencies: [util-macros](#util-macros), pkgconfig, [libx11](#libx11), [libxi](#libxi)

Link Dependencies: [libx11](#libx11), [libxi](#libxi)

Description: Set the mode for an X Input device.
---
xsetpointer[¶](#xsetpointer)
===
Homepage:
* <http://cgit.freedesktop.org/xorg/app/xsetpointer>
Spack package:
* [xsetpointer/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/xsetpointer/package.py)
Versions: 1.0.1
Build Dependencies: [inputproto](#inputproto), [util-macros](#util-macros), pkgconfig, [libx11](#libx11), [libxi](#libxi)
Link Dependencies: [libx11](#libx11), [libxi](#libxi)
Description: Set an X Input device as the main pointer.

---
xsetroot[¶](#xsetroot)
===
Homepage:
* <http://cgit.freedesktop.org/xorg/app/xsetroot>
Spack package:
* [xsetroot/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/xsetroot/package.py)
Versions: 1.1.1
Build Dependencies: [xproto](#xproto), pkgconfig, [libx11](#libx11), [libxcursor](#libxcursor), [util-macros](#util-macros), [libxmu](#libxmu), [xbitmaps](#xbitmaps)
Link Dependencies: [libxmu](#libxmu), [libx11](#libx11), [libxcursor](#libxcursor)
Description: xsetroot - root window parameter setting utility for X.

---
xsimd[¶](#xsimd)
===
Homepage:
* <http://quantstack.net/xsimd>
Spack package:
* [xsimd/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/xsimd/package.py)
Versions: develop, 4.0.0, 3.1.0
Build Dependencies: [cmake](#cmake)
Test Dependencies: [googletest](#googletest)
Description: C++ wrappers for SIMD intrinsics

---
xsm[¶](#xsm)
===
Homepage:
* <http://cgit.freedesktop.org/xorg/app/xsm>
Spack package:
* [xsm/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/xsm/package.py)
Versions: 1.0.3
Build Dependencies: [libice](#libice), [libxt](#libxt), pkgconfig, [libsm](#libsm), [libxaw](#libxaw), [util-macros](#util-macros), [libx11](#libx11)
Link Dependencies: [libice](#libice), [libxt](#libxt), [libx11](#libx11), [libsm](#libsm), [libxaw](#libxaw)
Description: X Session Manager.
---
xstdcmap[¶](#xstdcmap)
===
Homepage:
* <http://cgit.freedesktop.org/xorg/app/xstdcmap>
Spack package:
* [xstdcmap/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/xstdcmap/package.py)
Versions: 1.0.3
Build Dependencies: [xproto](#xproto), [util-macros](#util-macros), pkgconfig, [libx11](#libx11), [libxmu](#libxmu)
Link Dependencies: [libxmu](#libxmu), [libx11](#libx11)
Description: The xstdcmap utility can be used to selectively define standard colormap properties. It is intended to be run from a user's X startup script to create standard colormap definitions in order to facilitate sharing of scarce colormap resources among clients using PseudoColor visuals.

---
xtensor[¶](#xtensor)
===
Homepage:
* <http://quantstack.net/xtensor>
Spack package:
* [xtensor/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/xtensor/package.py)
Versions: develop, 0.15.1, 0.13.1
Build Dependencies: [cmake](#cmake), [xsimd](#xsimd), [xtl](#xtl)
Link Dependencies: [xsimd](#xsimd), [xtl](#xtl)
Description: Multi-dimensional arrays with broadcasting and lazy computing

---
xtensor-python[¶](#xtensor-python)
===
Homepage:
* <https://xtensor-python.readthedocs.io>
Spack package:
* [xtensor-python/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/xtensor-python/package.py)
Versions: develop, 0.17.0
Build Dependencies: [xtensor](#xtensor), [cmake](#cmake), [py-numpy](#py-numpy), [py-pybind11](#py-pybind11), [xtl](#xtl), [python](#python)
Link Dependencies: [python](#python), [py-numpy](#py-numpy), [py-pybind11](#py-pybind11), [xtl](#xtl), [xtensor](#xtensor)
Run Dependencies: [python](#python)
Description: Python bindings for the xtensor C++ multi-dimensional array library

---
xterm[¶](#xterm)
===
Homepage:
* <http://invisible-island.net/xterm/>
Spack package:
* [xterm/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/xterm/package.py)
Versions: 327
Build Dependencies: [libxrender](#libxrender), pkgconfig, [fontconfig](#fontconfig), [libxpm](#libxpm), [libxt](#libxt), [libxcb](#libxcb), [libxmu](#libxmu), [libxau](#libxau), [libice](#libice), [bzip2](#bzip2), [libx11](#libx11), [libsm](#libsm), [libxext](#libxext), [libxaw](#libxaw), [libxft](#libxft), [libxinerama](#libxinerama), [freetype](#freetype)
Link Dependencies: [libxrender](#libxrender), [fontconfig](#fontconfig), [libxpm](#libxpm), [libxt](#libxt), [libxcb](#libxcb), [libxmu](#libxmu), [libxau](#libxau), [libice](#libice), [bzip2](#bzip2), [libx11](#libx11), [libsm](#libsm), [libxext](#libxext), [libxaw](#libxaw), [libxft](#libxft), [libxinerama](#libxinerama), [freetype](#freetype)
Description: The xterm program is a terminal emulator for the X Window System. It provides DEC VT102 and Tektronix 4014 compatible terminals for programs that can't use the window system directly.

---
xtl[¶](#xtl)
===
Homepage:
* <https://github.com/QuantStack/xtl>
Spack package:
* [xtl/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/xtl/package.py)
Versions: develop, 0.4.0, 0.3.4, 0.3.3
Build Dependencies: [cmake](#cmake)
Description: QuantStack tools library

---
xtrans[¶](#xtrans)
===
Homepage:
* <http://cgit.freedesktop.org/xorg/lib/libxtrans>
Spack package:
* [xtrans/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/xtrans/package.py)
Versions: 1.3.5
Build Dependencies: [util-macros](#util-macros), pkgconfig
Description: xtrans is a library of code that is shared among various X packages to handle network protocol transport in a modular fashion, allowing a single place to add new transport types. It is used by the X server, libX11, libICE, the X font server, and related components.
---
xtrap[¶](#xtrap)
===
Homepage:
* <http://cgit.freedesktop.org/xorg/app/xtrap>
Spack package:
* [xtrap/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/xtrap/package.py)
Versions: 1.0.2
Build Dependencies: [libxtrap](#libxtrap), [util-macros](#util-macros), pkgconfig, [libx11](#libx11)
Link Dependencies: [libxtrap](#libxtrap), [libx11](#libx11)
Description: XTrap sample clients.

---
xts[¶](#xts)
===
Homepage:
* <https://www.x.org/wiki/XorgTesting/>
Spack package:
* [xts/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/xts/package.py)
Versions: 0.99.1
Build Dependencies: [bdftopcf](#bdftopcf), [xdpyinfo](#xdpyinfo), [xset](#xset), [mkfontdir](#mkfontdir), [perl](#perl), [libxmu](#libxmu), [libxau](#libxau), [libxt](#libxt), [libxext](#libxext), [libx11](#libx11), [libxi](#libxi), [libxaw](#libxaw), [libxtst](#libxtst), [xtrans](#xtrans)
Link Dependencies: [libxt](#libxt), [libxext](#libxext), [libx11](#libx11), [libxau](#libxau), [libxaw](#libxaw), [libxtst](#libxtst), [libxmu](#libxmu), [libxi](#libxi)
Description: This is a revamped version of X Test Suite (XTS) which removes some of the ugliness of building and running the tests.

---
xvidtune[¶](#xvidtune)
===
Homepage:
* <http://cgit.freedesktop.org/xorg/app/xvidtune>
Spack package:
* [xvidtune/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/xvidtune/package.py)
Versions: 1.0.3
Build Dependencies: [libxt](#libxt), pkgconfig, [libx11](#libx11), [util-macros](#util-macros), [libxaw](#libxaw), [libxxf86vm](#libxxf86vm), [libxmu](#libxmu)
Link Dependencies: [libxt](#libxt), [libxaw](#libxaw), [libx11](#libx11), [libxxf86vm](#libxxf86vm), [libxmu](#libxmu)
Description: xvidtune is a client interface to the X server video mode extension (XFree86-VidModeExtension).
---
xvinfo[¶](#xvinfo)
===
Homepage:
* <http://cgit.freedesktop.org/xorg/app/xvinfo>
Spack package:
* [xvinfo/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/xvinfo/package.py)
Versions: 1.1.3
Build Dependencies: [util-macros](#util-macros), pkgconfig, [libxv](#libxv), [libx11](#libx11), [xproto](#xproto)
Link Dependencies: [libx11](#libx11), [libxv](#libxv)
Description: xvinfo prints out the capabilities of any video adaptors associated with the display that are accessible through the X-Video extension.

---
xwd[¶](#xwd)
===
Homepage:
* <http://cgit.freedesktop.org/xorg/app/xwd>
Spack package:
* [xwd/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/xwd/package.py)
Versions: 1.0.6
Build Dependencies: [libx11](#libx11), [util-macros](#util-macros), pkgconfig, [libxkbfile](#libxkbfile), [xproto](#xproto)
Link Dependencies: [libx11](#libx11), [libxkbfile](#libxkbfile)
Description: xwd - dump an image of an X window.

---
xwininfo[¶](#xwininfo)
===
Homepage:
* <http://cgit.freedesktop.org/xorg/app/xwininfo>
Spack package:
* [xwininfo/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/xwininfo/package.py)
Versions: 1.1.3
Build Dependencies: [xproto](#xproto), [util-macros](#util-macros), pkgconfig, [libx11](#libx11), [libxcb](#libxcb)
Link Dependencies: [libxcb](#libxcb), [libx11](#libx11)
Description: xwininfo prints information about windows on an X server. Various information is displayed depending on which options are selected.
---
xwud[¶](#xwud)
===
Homepage:
* <http://cgit.freedesktop.org/xorg/app/xwud>
Spack package:
* [xwud/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/xwud/package.py)
Versions: 1.0.4
Build Dependencies: [xproto](#xproto), pkgconfig, [libx11](#libx11), [util-macros](#util-macros)
Link Dependencies: [libx11](#libx11)
Description: xwud allows X users to display in a window an image saved in a specially formatted dump file, such as produced by xwd.

---
xxhash[¶](#xxhash)
===
Homepage:
* <https://github.com/Cyan4973/xxHash>
Spack package:
* [xxhash/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/xxhash/package.py)
Versions: 0.6.5, 0.6.4, 0.6.3, 0.6.2, 0.6.1, 0.6.0, 0.5.1, 0.5.0
Description: xxHash is an Extremely fast Hash algorithm, running at RAM speed limits. It successfully completes the SMHasher test suite which evaluates collision, dispersion and randomness qualities of hash functions. Code is highly portable, and hashes are identical on all platforms (little / big endian).

---
xz[¶](#xz)
===
Homepage:
* <http://tukaani.org/xz/>
Spack package:
* [xz/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/xz/package.py)
Versions: 5.2.4, 5.2.3, 5.2.2, 5.2.0
Description: XZ Utils is free general-purpose data compression software with high compression ratio. XZ Utils were written for POSIX-like systems, but also work on some not-so-POSIX systems. XZ Utils are the successor to LZMA Utils.
---
yajl[¶](#yajl)
===
Homepage:
* <http://lloyd.github.io/yajl/>
Spack package:
* [yajl/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/yajl/package.py)
Versions: develop, 2.1.0
Build Dependencies: [cmake](#cmake)
Description: Yet Another JSON Library (YAJL)

---
yambo[¶](#yambo)
===
Homepage:
* <http://www.yambo-code.org/index.php>
Spack package:
* [yambo/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/yambo/package.py)
Versions: 4.2.2, 4.2.1, 4.2.0
Build Dependencies: [netcdf-fortran](#netcdf-fortran), [libxc](#libxc), mpi, [netcdf](#netcdf), [hdf5](#hdf5), scalapack, blas, [fftw](#fftw), lapack
Link Dependencies: [netcdf-fortran](#netcdf-fortran), [libxc](#libxc), mpi, [netcdf](#netcdf), [hdf5](#hdf5), scalapack, blas, [fftw](#fftw), lapack
Description: Yambo is a FORTRAN/C code for Many-Body calculations in solid state and molecular physics. Yambo relies on the Kohn-Sham wavefunctions generated by two DFT public codes: abinit, and PWscf. The code was originally developed in the Condensed Matter Theoretical Group of the Physics Department at the University of Rome "Tor Vergata" by <NAME>. Previous to its release under the GPL license, yambo was known as SELF.
---
yaml-cpp[¶](#yaml-cpp)
===
Homepage:
* <https://github.com/jbeder/yaml-cpp>
Spack package:
* [yaml-cpp/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/yaml-cpp/package.py)
Versions: develop, 0.6.2, 0.5.3
Build Dependencies: [cmake](#cmake), [boost](#boost)
Link Dependencies: [boost](#boost)
Description: A YAML parser and emitter in C++

---
yasm[¶](#yasm)
===
Homepage:
* <http://yasm.tortall.net>
Spack package:
* [yasm/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/yasm/package.py)
Versions: develop, 1.3.0
Build Dependencies: [autoconf](#autoconf), [automake](#automake), [libtool](#libtool), [m4](#m4)
Link Dependencies: [autoconf](#autoconf), [automake](#automake), [libtool](#libtool), [m4](#m4)
Description: Yasm is a complete rewrite of the NASM-2.11.06 assembler. It supports the x86 and AMD64 instruction sets, accepts NASM and GAS assembler syntaxes and outputs binary, ELF32 and ELF64 object formats.

---
yorick[¶](#yorick)
===
Homepage:
* <http://dhmunro.github.io/yorick-doc/>
Spack package:
* [yorick/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/yorick/package.py)
Versions: 2.2.04, master, f90-plugin
Build Dependencies: [libx11](#libx11)
Link Dependencies: [libx11](#libx11)
Description: Yorick is an interpreted programming language for scientific simulations or calculations, postprocessing or steering large simulation codes, interactive scientific graphics, and reading, writing, or translating files of numbers. Yorick includes an interactive graphics package, and a binary file package capable of translating to and from the raw numeric formats of all modern computers. Yorick is written in ANSI C and runs on most operating systems (\*nix systems, MacOS X, Windows).
---
z3[¶](#z3)
===
Homepage:
* <https://github.com/Z3Prover/z3/wiki>
Spack package:
* [z3/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/z3/package.py)
Versions: 4.5.0, 4.4.1, 4.4.0
Build Dependencies: [python](#python)
Link Dependencies: [python](#python)
Description: Z3 is a theorem prover from Microsoft Research. It is licensed under the MIT license.

---
zeromq[¶](#zeromq)
===
Homepage:
* <http://zguide.zeromq.org/>
Spack package:
* [zeromq/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/zeromq/package.py)
Versions: develop, 4.2.5, 4.2.2, 4.1.4, 4.1.2, 4.1.1, 4.0.7, 4.0.6, 4.0.5
Build Dependencies: [autoconf](#autoconf), [automake](#automake), [libtool](#libtool), pkgconfig, [libsodium](#libsodium)
Link Dependencies: [libsodium](#libsodium)
Description: The ZMQ networking/concurrency library and core API

---
zfp[¶](#zfp)
===
Homepage:
* <http://computation.llnl.gov/projects/floating-point-compression>
Spack package:
* [zfp/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/zfp/package.py)
Versions: 0.5.2, 0.5.1, 0.5.0
Description: zfp is an open source C/C++ library for high-fidelity, high-throughput lossy compression of floating-point and integer multi-dimensional arrays.

---
zip[¶](#zip)
===
Homepage:
* <http://www.info-zip.org/Zip.html>
Spack package:
* [zip/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/zip/package.py)
Versions: 3.0
Build Dependencies: [bzip2](#bzip2)
Link Dependencies: [bzip2](#bzip2)
Description: Zip is a compression and file packaging/archive utility.

---
zlib[¶](#zlib)
===
Homepage:
* <http://zlib.net>
Spack package:
* [zlib/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/zlib/package.py)
Versions: 1.2.11, 1.2.8, 1.2.3
Description: A free, general-purpose, legally unencumbered lossless data-compression library.
---
zoltan[¶](#zoltan)
===
Homepage:
* <http://www.cs.sandia.gov/zoltan>
Spack package:
* [zoltan/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/zoltan/package.py)
Versions: 3.83, 3.8, 3.6, 3.3
Build Dependencies: [parmetis](#parmetis), mpi, [metis](#metis)
Link Dependencies: [parmetis](#parmetis), mpi, [metis](#metis)
Description: The Zoltan library is a toolkit of parallel combinatorial algorithms for parallel, unstructured, and/or adaptive scientific applications. Zoltan's largest component is a suite of dynamic load-balancing and partitioning algorithms that increase applications' parallel performance by reducing idle time. Zoltan also has graph coloring and graph ordering algorithms, which are useful in task schedulers and parallel preconditioners.

---
zsh[¶](#zsh)
===
Homepage:
* <http://www.zsh.org>
Spack package:
* [zsh/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/zsh/package.py)
Versions: 5.4.2, 5.3.1, 5.1.1
Build Dependencies: [pcre](#pcre), [ncurses](#ncurses)
Link Dependencies: [pcre](#pcre), [ncurses](#ncurses)
Description: Zsh is a shell designed for interactive use, although it is also a powerful scripting language. Many of the useful features of bash, ksh, and tcsh were incorporated into zsh; many original features were added.

---
zstd[¶](#zstd)
===
Homepage:
* <http://facebook.github.io/zstd/>
Spack package:
* [zstd/package.py](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/zstd/package.py)
Versions: 1.3.0, 1.1.2
Description: Zstandard, or zstd as short version, is a fast lossless compression algorithm, targeting real-time compression scenarios at zlib-level and better compression ratios.

---

Contribution Guide[¶](#contribution-guide)
---

This guide is intended for developers or administrators who want to contribute a new package, feature, or bugfix to Spack. It assumes that you have at least some familiarity with the Git VCS and GitHub.
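Before following the workflows below you need a working checkout. The sketch below shows a fork-style setup, with one remote for your fork (`origin`) and one for the official repository (`upstream`); the remote names are a common convention, not a requirement. Local repositories stand in for GitHub here so the example is self-contained.

```shell
# Sketch of a fork-style setup. In real use, "my-fork.git" would be your
# GitHub fork (https://github.com/<your-username>/spack.git) and
# "upstream-spack.git" would be https://github.com/spack/spack.git.
git init -q --bare upstream-spack.git   # stands in for spack/spack
git init -q --bare my-fork.git          # stands in for your fork (origin)
git clone -q my-fork.git spack
cd spack
git remote add upstream ../upstream-spack.git
git fetch -q upstream
git remote                              # lists both "origin" and "upstream"
```

With this layout, `git pull upstream develop` (used throughout the guide) updates your local branches from the official repository, while `git push origin …` publishes branches to your fork for PRs.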
The guide will show a few examples of contributing workflows and discuss the granularity of pull-requests (PRs). It will also discuss the tests your PR must pass in order to be accepted into Spack.

First, what is a PR? Quoting [Bitbucket's tutorials](https://www.atlassian.com/git/tutorials/making-a-pull-request/):

> Pull requests are a mechanism for a developer to notify team members that
> they have **completed a feature**. The pull request is more than just a
> notification—it's a dedicated forum for discussing the proposed feature.

The key phrase is **completed feature**. The changes one proposes in a PR should correspond to one feature/bugfix/extension/etc. One can create PRs with changes relevant to different ideas; however, reviewing such PRs becomes tedious and error-prone. If possible, try to follow the **one-PR-one-package/feature** rule.

Spack uses a rough approximation of the [Git Flow](http://nvie.com/posts/a-successful-git-branching-model/) branching model. The `develop` branch contains the latest contributions, and `master` is always tagged and points to the latest stable release. Therefore, when you send your request, make `develop` the destination branch on the [Spack repository](https://github.com/spack/spack).

### Continuous Integration[¶](#continuous-integration)

Spack uses [Travis CI](https://travis-ci.org/spack/spack) for Continuous Integration testing. This means that every time you submit a pull request, a series of tests will be run to make sure you didn't accidentally introduce any bugs into Spack. **Your PR will not be accepted until it passes all of these tests.** While you can certainly wait for the results of these tests after submitting a PR, we recommend that you run them locally to speed up the review process.

Note
Oftentimes, Travis will fail for reasons other than a problem with your PR.
For example, apt-get, pip, or homebrew will fail to download one of the dependencies for the test suite, or a transient bug will cause the unit tests to time out. If Travis fails, click the "Details" link and click on the test(s) that are failing. If it doesn't look like the failure is related to your PR, you have two options. If you have write permissions for the Spack repository, you should see a "Restart job" button on the right-hand side. If not, you can close and reopen your PR to rerun all of the tests. If the same test keeps failing, there may be a problem with your PR. If you notice that every recent PR is failing with the same error message, it may be that Travis is down or one of Spack's dependencies put out a new release that is causing problems. If this is the case, please file an issue.

If you take a look in `$SPACK_ROOT/.travis.yml`, you'll notice that we test against Python 2.6, 2.7, and 3.4-3.7 on both macOS and Linux. We currently perform 3 types of tests:

#### Unit Tests[¶](#unit-tests)

Unit tests ensure that core Spack features like fetching or spec resolution are working as expected. If your PR only adds new packages or modifies existing ones, there's very little chance that your changes could cause the unit tests to fail. However, if you make changes to Spack's core libraries, you should run the unit tests to make sure you didn't break anything.

Since they test things like fetching from VCS repos, the unit tests require [git](https://git-scm.com/), [mercurial](https://www.mercurial-scm.org/), and [subversion](https://subversion.apache.org/) to run. Make sure these are installed on your system and can be found in your `PATH`. All of these can be installed with Spack or with your system package manager.

To run *all* of the unit tests, use:

```
$ spack test
```

These tests may take several minutes to complete.
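Because the suite shells out to the VCS tools mentioned above, a quick sanity check before a long run can save time. A minimal sketch, assuming a POSIX shell:

```shell
# The unit tests shell out to git, mercurial (hg), and subversion (svn);
# report any of the three that does not resolve on the current PATH.
for tool in git hg svn; do
    command -v "$tool" >/dev/null 2>&1 || echo "missing from PATH: $tool"
done
```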
If you know you are only modifying a single Spack feature, you can run a single unit test at a time:

```
$ spack test architecture
```

This allows you to develop iteratively: make a change, test that change, make another change, test that change, etc. To get a list of all available unit tests, run:

```
$ spack test --list
architecture make_executable spec_dag blame license lock build_environment mirror spec_semantics build_env list log build_system_guess module_parsing spec_syntax cd mirror common build_systems multimethod spec_yaml clean module dotkit cc namespace_trie stage commands print_shell_vars lmod compilers optional_deps svn_fetch config providers tcl concretize package_hash tengine debug python file_cache concretize_preferences package_sanity test_activations dependencies spec log_parser config packages url_fetch dependents test_compiler_cmd prefix database packaging url_parse env uninstall spack_lock_wrapper directory_layout patch url_substitution find url spack_yaml environment_modifications pattern variant flake8 versions util_string flag_handlers provider_index versions gpg view git_fetch python_version views graph file_list graph repo web help filesystem hg_fetch sbang activate info lang install spack_yaml arch install link_tree
```

A more detailed list of available unit tests can be found by running `spack test --long-list`.

By default, `pytest` captures the output of all unit tests. If you add print statements to a unit test and want to see the output, simply run:

```
$ spack test -s -k architecture
```

Unit tests are crucial to making sure bugs aren't introduced into Spack. If you are modifying core Spack libraries or adding new functionality, please consider adding new unit tests or strengthening existing tests.

Note
There is also a `run-unit-tests` script in `share/spack/qa` that runs the unit tests. Afterwards, it reports back to Codecov with the percentage of Spack that is covered by unit tests. This script is designed for Travis CI.
If you want to run the unit tests yourself, we suggest you use `spack test`.

#### Flake8 Tests[¶](#flake8-tests)

Spack uses [Flake8](http://flake8.pycqa.org/en/latest/) to test for [PEP 8](https://www.python.org/dev/peps/pep-0008/) conformance. PEP 8 is a series of style guides for Python that provide suggestions for everything from variable naming to indentation. In order to limit the number of PRs that were mostly style changes, we decided to enforce PEP 8 conformance. Your PR needs to comply with PEP 8 in order to be accepted.

Testing for PEP 8 compliance is easy. Simply run the `spack flake8` command:

```
$ spack flake8
```

`spack flake8` has a couple of advantages over running `flake8` by hand:

1. It only tests files that you have modified since branching off of `develop`.
2. It works regardless of what directory you are in.
3. It automatically adds approved exemptions from the `flake8` checks. For example, URLs are often longer than 80 characters, so we exempt them from line length checks. We also exempt lines that start with "homepage", "url", "version", "variant", "depends_on", and "extends" in `package.py` files.

More approved flake8 exemptions can be found [here](https://github.com/spack/spack/blob/develop/.flake8).

If all is well, you'll see something like this:

```
$ run-flake8-tests
Dependencies found.
===
flake8: running flake8 code checks on spack.

Modified files:

  var/spack/repos/builtin/packages/hdf5/package.py
  var/spack/repos/builtin/packages/hdf/package.py
  var/spack/repos/builtin/packages/netcdf/package.py
===
Flake8 checks were clean.
```

However, if you aren't compliant with PEP 8, flake8 will complain:

```
var/spack/repos/builtin/packages/netcdf/package.py:26: [F401] 'os' imported but unused
var/spack/repos/builtin/packages/netcdf/package.py:61: [E303] too many blank lines (2)
var/spack/repos/builtin/packages/netcdf/package.py:106: [E501] line too long (92 > 79 characters)
Flake8 found errors.
```

Most of the error messages are straightforward, but if you don't understand what they mean, just ask questions about them when you submit your PR. The line numbers will change if you add or delete lines, so simply run `spack flake8` again to update them.

Tip
Try fixing flake8 errors in reverse order. This eliminates the need for multiple runs of `spack flake8` just to re-compute line numbers and makes it much easier to fix errors directly off of the Travis output.

Warning
Flake8 and `pep8-naming` require a number of dependencies in order to run. If you installed `py-flake8` and `py-pep8-naming`, the easiest way to ensure the right packages are on your `PYTHONPATH` is to run:

```
spack activate py-flake8
spack activate py-pep8-naming
```

so that all of the dependencies are symlinked to a central location. If you see an error message like:

```
Traceback (most recent call last):
  File: "/usr/bin/flake8", line 5, in <module>
    from pkg_resources import load_entry_point
ImportError: No module named pkg_resources
```

that means Flake8 couldn't find setuptools in your `PYTHONPATH`.

#### Documentation Tests[¶](#documentation-tests)

Spack uses [Sphinx](http://www.sphinx-doc.org/en/stable/) to build its documentation. In order to prevent things like broken links and missing imports, we added documentation tests that build the documentation and fail if there are any warning or error messages. Building the documentation requires several dependencies, all of which can be installed with Spack:

* sphinx
* sphinxcontrib-programoutput
* sphinx-rtd-theme
* graphviz
* git
* mercurial
* subversion

Warning
Sphinx has [several required dependencies](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/py-sphinx/package.py). If you installed `py-sphinx` with Spack, make sure to add all of these dependencies to your `PYTHONPATH`.
The easiest way to do this is to run:

```
$ spack activate py-sphinx
$ spack activate py-sphinx-rtd-theme
$ spack activate py-sphinxcontrib-programoutput
```

so that all of the dependencies are symlinked to a central location. If you see an error message like:

```
Extension error:
Could not import extension sphinxcontrib.programoutput (exception: No module named sphinxcontrib.programoutput)
make: *** [html] Error 1
```

that means Sphinx couldn't find `py-sphinxcontrib-programoutput` in your `PYTHONPATH`.

Once all of the dependencies are installed, you can try building the documentation:

```
$ cd "$SPACK_ROOT/lib/spack/docs"
$ make clean
$ make
```

If you see any warning or error messages, you will have to correct those before your PR is accepted.

Note
There is also a `run-doc-tests` script in `share/spack/qa`. The only difference between running this script and running `make` by hand is that the script will exit immediately if it encounters an error or warning. This is necessary for Travis CI. If you made a lot of documentation changes, it is much quicker to run `make` by hand so that you can see all of the warnings at once.

If you are editing the documentation, you should obviously be running the documentation tests. But even if you are simply adding a new package, your changes could cause the documentation tests to fail:

```
package_list.rst:8745: WARNING: Block quote ends without a blank line; unexpected unindent.
```

At first, this error message will mean nothing to you, since you didn't edit that file, until you look at line 8745 of the file in question:

```
Description:
NetCDF is a set of software libraries and self-describing, machine-
independent data formats that support the creation, access, and sharing
of array-oriented scientific data.
```

Our documentation includes [a list of all Spack packages](index.html#package-list). If you add a new package, its docstring is added to this page.
The problem in this case was that the docstring looked like:

```
class Netcdf(Package):
    """
    NetCDF is a set of software libraries and self-describing,
    machine-independent data formats that support the creation,
    access, and sharing of array-oriented scientific data.
    """
```

Docstrings cannot start with a newline character, or else Sphinx will complain. Instead, they should look like:

```
class Netcdf(Package):
    """NetCDF is a set of software libraries and self-describing,
    machine-independent data formats that support the creation,
    access, and sharing of array-oriented scientific data."""
```

Documentation changes can result in much more obfuscated warning messages. If you don't understand what they mean, feel free to ask when you submit your PR.

### Coverage[¶](#coverage)

Spack uses [Codecov](https://codecov.io/) to generate and report unit test coverage. This helps us tell what percentage of lines of code in Spack are covered by unit tests. Although code covered by unit tests can still contain bugs, it is much less error-prone than code that is not covered by unit tests.

Codecov provides [browser extensions](https://github.com/codecov/browser-extension) for Google Chrome, Firefox, and Opera. These extensions integrate with GitHub and allow you to see coverage line-by-line when viewing the Spack repository. If you are new to Spack, a great way to get started is to write unit tests to increase coverage!

Unlike with Travis, Codecov tests are not required to pass in order for your PR to be merged. If you modify core Spack libraries, we would greatly appreciate unit tests that cover these changed lines. Otherwise, we have no way of knowing whether or not your changes introduce a bug. If you make substantial changes to the core, we may request unit tests to increase coverage.

Note
If the only files you modified are package files, we do not care about coverage on your PR. You may notice that the Codecov tests fail even though you didn't modify any core files.
This means that Spack's overall coverage has increased since you branched off of develop. This is a good thing! If you really want to get the Codecov tests to pass, you can rebase off of the latest develop, but again, this is not required.

### Git Workflows[¶](#git-workflows)

Spack is still in the beta stages of development. Most of our users run off of the develop branch, and fixes and new features are constantly being merged. So how do you keep up-to-date with upstream while maintaining your own local differences and contributing PRs to Spack?

#### Branching[¶](#branching)

The easiest way to contribute a pull request is to make all of your changes on new branches. Make sure your `develop` is up-to-date and create a new branch off of it:

```
$ git checkout develop
$ git pull upstream develop
$ git branch <descriptive_branch_name>
$ git checkout <descriptive_branch_name>
```

Here we assume that the local `develop` branch tracks the upstream develop branch of Spack. This is not a requirement and you could also do the same with remote branches. But for some it is more convenient to have a local branch that tracks upstream.

Normally we prefer that commits pertaining to a package `<package-name>` have a message of the form `<package-name>: descriptive message`. It is important to add a descriptive message so that others, who might be looking at your changes later (in a year or maybe two), would understand the rationale behind them.

Now, you can make your changes while keeping the `develop` branch pure. Edit a few files and commit them by running:

```
$ git add <files_to_be_part_of_the_commit>
$ git commit --message <descriptive_message_of_this_particular_commit>
```

Next, push it to your remote fork and create a PR:

```
$ git push origin <descriptive_branch_name> --set-upstream
```

GitHub provides a [tutorial](https://help.github.com/articles/about-pull-requests/) on how to file a pull request. When you send the request, make `develop` the destination branch.
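The branching steps above assume a local `develop` that tracks the upstream repository. One way to establish that tracking, sketched here with a local repository standing in for the real upstream so the example is self-contained (on a real checkout with an `upstream` remote, only the `git fetch` and `git branch --set-upstream-to` commands are needed):

```shell
# Sketch: make a local develop branch track the remote named "upstream",
# so a plain "git pull" refreshes it. The "upstream-demo" repository
# stands in for https://github.com/spack/spack.git in this example.
git init -q upstream-demo
cd upstream-demo
git -c user.name=demo -c user.email=demo@example.com \
    commit -q --allow-empty -m "initial commit"
git branch -q -m develop                # call the default branch "develop"
cd ..
git clone -q upstream-demo tracking-demo
cd tracking-demo
git remote rename origin upstream       # pretend the remote is "upstream"
git fetch -q upstream
git branch --set-upstream-to=upstream/develop develop
git pull -q                             # now updates develop from upstream
```

Once the tracking branch is configured, `git pull` with no arguments does what the explicit `git pull upstream develop` in the examples above does.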
If you need this change immediately and don’t have time to wait for your PR to be merged, you can always work on this branch. But if you have multiple PRs, another option is to maintain a Frankenstein branch that combines all of your other branches:

```
$ git checkout develop
$ git branch <your_modified_develop_branch>
$ git checkout <your_modified_develop_branch>
$ git merge <descriptive_branch_name>
```

This can be done with each new PR you submit. Just make sure to keep this local branch up-to-date with upstream `develop` too.

#### Cherry-Picking[¶](#cherry-picking)

What if you made some changes to your local modified develop branch and already committed them, but later decided to contribute them to Spack? You can use cherry-picking to create a new branch with only these commits.

First, check out your local modified develop branch:

```
$ git checkout <your_modified_develop_branch>
```

Now, get the hashes of the commits you want from the output of:

```
$ git log
```

Next, create a new branch off of upstream `develop` and copy the commits that you want in your PR:

```
$ git checkout develop
$ git pull upstream develop
$ git branch <descriptive_branch_name>
$ git checkout <descriptive_branch_name>
$ git cherry-pick <hash>
$ git push origin <descriptive_branch_name> --set-upstream
```

Now you can create a PR from the web interface of GitHub. The net result is as follows:

1. You patched your local version of Spack and can use it further.
2. You “cherry-picked” these changes in a stand-alone branch and submitted it as a PR upstream.

Should you have several commits to contribute, you could follow the same procedure by getting the hashes of all of them and cherry-picking to the PR branch.

Note

It is important that whenever you change something that might be of importance upstream, you create a pull request as soon as possible. Do not wait for weeks/months to do this, because: 1. you might forget why you modified certain files 2.
it could get difficult to isolate this change into a stand-alone clean PR.

#### Rebasing[¶](#rebasing)

Other developers are constantly making contributions to Spack, possibly on the same files that your PR changed. If their PR is merged before yours, it can create a merge conflict. This means that your PR can no longer be automatically merged without a chance of breaking your changes. In this case, you will be asked to rebase on top of the latest upstream `develop`.

First, make sure your develop branch is up-to-date:

```
$ git checkout develop
$ git pull upstream develop
```

Now, we need to switch to the branch you submitted for your PR and rebase it on top of develop:

```
$ git checkout <descriptive_branch_name>
$ git rebase develop
```

Git will likely ask you to resolve conflicts. Edit the file that it says can’t be merged automatically and resolve the conflict. Then, run:

```
$ git add <file_that_could_not_be_merged>
$ git rebase --continue
```

You may have to repeat this process multiple times until all conflicts are resolved. Once this is done, simply force push your rebased branch to your remote fork:

```
$ git push --force origin <descriptive_branch_name>
```

#### Rebasing with cherry-pick[¶](#rebasing-with-cherry-pick)

You can also perform a rebase using `cherry-pick`. First, create a temporary backup branch:

```
$ git checkout <descriptive_branch_name>
$ git branch tmp
```

If anything goes wrong, you can always go back to your `tmp` branch. Now, look at the logs and save the hashes of any commits you would like to keep:

```
$ git log
```

Next, go back to the original branch and reset it to `develop`.
Before doing so, make sure that your local `develop` branch is up-to-date with upstream:

```
$ git checkout develop
$ git pull upstream develop
$ git checkout <descriptive_branch_name>
$ git reset --hard develop
```

Now you can cherry-pick relevant commits:

```
$ git cherry-pick <hash1>
$ git cherry-pick <hash2>
```

Push the modified branch to your fork:

```
$ git push --force origin <descriptive_branch_name>
```

If everything looks good, delete the backup branch:

```
$ git branch --delete --force tmp
```

#### Re-writing History[¶](#re-writing-history)

Sometimes you may end up on a branch that has diverged so much from develop that it cannot easily be rebased. If the current commit history is more of an experimental nature and only the net result is important, you may rewrite the history.

First, merge upstream `develop` and reset your branch to it. On the branch in question, run:

```
$ git merge develop
$ git reset develop
```

At this point your branch will point to the same commit as develop, and thereby the two are indistinguishable. However, all the files that were previously modified will stay as such. In other words, you do not lose the changes you made. Changes can be reviewed by looking at diffs:

```
$ git status
$ git diff
```

The next step is to rewrite the history by adding files and creating commits:

```
$ git add <files_to_be_part_of_commit>
$ git commit --message <descriptive_message>
```

After all changed files are committed, you can push the branch to your fork and create a PR:

```
$ git push origin --set-upstream
```

Packaging Guide[¶](#packaging-guide)
---

This guide is intended for developers or administrators who want to package software so that Spack can install it. It assumes that you have at least some familiarity with Python, and that you’ve read the [basic usage guide](index.html#basic-usage), especially the part about [specs](index.html#sec-specs).

There are two key parts of Spack: 1.
**Specs**: expressions for describing builds of software, and 2. **Packages**: Python modules that describe how to build software according to a spec.

Specs allow a user to describe a *particular* build in a way that a package author can understand. Packages allow the packager to encapsulate the build logic for different versions, compilers, options, platforms, and dependency combinations in one place. Essentially, a package translates a spec into build logic.

Packages in Spack are written in pure Python, so you can do anything in Spack that you can do in Python. Python was chosen as the implementation language for two reasons. First, Python is becoming ubiquitous in the scientific software community. Second, it’s a modern language and has many powerful features to help make package writing easy.

### Creating & editing packages[¶](#creating-editing-packages)

#### `spack create`[¶](#spack-create)

The `spack create` command creates a directory with the package name and generates a `package.py` file with a boilerplate package template. If given a URL pointing to a tarball or other software archive, `spack create` is smart enough to determine basic information about the package, including its name and build system. In most cases, `spack create` plus a few modifications is all you need to get a package working. Here’s an example:

```
$ spack create https://gmplib.org/download/gmp/gmp-6.1.2.tar.bz2
```

Spack examines the tarball URL and tries to figure out the name of the package to be created. If the name contains uppercase letters, these are automatically converted to lowercase. If the name contains underscores or periods, these are automatically converted to dashes. Spack also searches for *additional* versions located in the same directory on the website.
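The name cleanup just described (lowercase the name, map underscores and periods to dashes) can be sketched as a small helper. This is only an illustration of the rule stated above; `normalize_name` is a hypothetical function, not Spack's actual implementation:

```python
import re

def normalize_name(name):
    """Illustrative sketch of spack create's name cleanup:
    lowercase, then turn '_' and '.' into '-'."""
    return re.sub(r'[_.]', '-', name.lower())

print(normalize_name('Py_NumPy'))  # py-numpy
```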
Spack tells you how many versions it found and asks how many you would like to download and checksum:

```
$ spack create https://gmplib.org/download/gmp/gmp-6.1.2.tar.bz2
==> This looks like a URL for gmp
==> Found 16 versions of gmp:
  6.1.2  https://gmplib.org/download/gmp/gmp-6.1.2.tar.bz2
  6.1.1  https://gmplib.org/download/gmp/gmp-6.1.1.tar.bz2
  6.1.0  https://gmplib.org/download/gmp/gmp-6.1.0.tar.bz2
  ...
  5.0.0  https://gmplib.org/download/gmp/gmp-5.0.0.tar.bz2

How many would you like to checksum? (default is 1, q to abort)
```

Spack will automatically download the number of tarballs you specify (starting with the most recent) and checksum each of them.

You do not *have* to download all of the versions up front. You can always choose to download just one tarball initially, and run [spack checksum](#cmd-spack-checksum) later if you need more versions.

Let’s say you download 3 tarballs:

```
How many would you like to checksum? (default is 1, q to abort) 3
==> Downloading...
==> Fetching https://gmplib.org/download/gmp/gmp-6.1.2.tar.bz2
######################################################################## 100.0%
==> Fetching https://gmplib.org/download/gmp/gmp-6.1.1.tar.bz2
######################################################################## 100.0%
==> Fetching https://gmplib.org/download/gmp/gmp-6.1.0.tar.bz2
######################################################################## 100.0%
==> Checksummed 3 versions of gmp:
==> This package looks like it uses the autotools build system
==> Created template for gmp package
==> Created package file: /Users/Adam/spack/var/spack/repos/builtin/packages/gmp/package.py
```

Spack automatically creates a directory in the appropriate repository, generates a boilerplate template for your package, and opens up the new `package.py` in your favorite `$EDITOR`:

```
#
# This is a template package file for Spack. We've put "FIXME"
# next to all the things you'll want to change. Once you've handled
# them, you can save this file and test your package like this:
#
#     spack install gmp
#
# You can edit this file again by typing:
#
#     spack edit gmp
#
# See the Spack documentation for more information on packaging.
# If you submit this package back to Spack as a pull request,
# please first remove this boilerplate and all FIXME comments.
#
from spack import *


class Gmp(AutotoolsPackage):
    """FIXME: Put a proper description of your package here."""

    # FIXME: Add a proper url for your package's homepage here.
    homepage = "http://www.example.com"
    url      = "https://gmplib.org/download/gmp/gmp-6.1.2.tar.bz2"

    version('6.1.2', '8ddbb26dc3bd4e2302984debba1406a5')
    version('6.1.1', '4c175f86e11eb32d8bf9872ca3a8e11d')
    version('6.1.0', '86ee6e54ebfc4a90b643a65e402c4048')

    # FIXME: Add dependencies if required.
    # depends_on('foo')

    def configure_args(self):
        # FIXME: Add arguments other than --prefix
        # FIXME: If not needed delete the function
        args = []
        return args
```

The tedious stuff (creating the class, checksumming archives) has been done for you. You’ll notice that `spack create` correctly detected that `gmp` uses the Autotools build system. It created a new `Gmp` package that subclasses the `AutotoolsPackage` base class. This base class provides basic installation methods common to all Autotools packages:

```
./configure --prefix=/path/to/installation/directory

make
make check
make install
```

For most Autotools packages, this is sufficient. If you need to add additional arguments to the `./configure` call, add them via the `configure_args` function.

In the generated package, the download `url` attribute is already set. All the things you still need to change are marked with `FIXME` labels. You can delete the commented instructions between the license and the first import statement after reading them. The rest of the tasks you need to do are as follows:

1. Add a description. Immediately inside the package class is a *docstring* in triple-quotes (`"""`). It is used to generate the description shown when users run `spack info`.
2. Change the `homepage` to a useful URL. The `homepage` is displayed when users run `spack info` so that they can learn more about your package.
3. Add `depends_on()` calls for the package’s dependencies. `depends_on` tells Spack that other packages need to be built and installed before this one. See [Dependencies](#dependencies).
4. Get the installation working. Your new package may require specific flags during `configure`. These can be added via `configure_args`. Specifics will differ depending on the package and its build system. [Implementing the install method](#install-method) is covered in detail later.
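In Spack itself, `configure_args` is a method on the package class that returns a list of flag strings. The standalone sketch below shows the shape of that contract outside of Spack; the `enable_cxx` parameter stands in for a real variant check, and `--enable-cxx` is just an example flag:

```python
# A minimal sketch of how configure_args supplies extra flags to
# ./configure. In a real package this is a method taking self, and
# the condition would inspect self.spec; enable_cxx is illustrative.
def configure_args(enable_cxx=False):
    args = []
    if enable_cxx:
        args.append('--enable-cxx')
    return args

# AutotoolsPackage effectively runs: ./configure --prefix=<prefix> <args...>
cmd = ['./configure', '--prefix=/opt/gmp'] + configure_args(enable_cxx=True)
print(' '.join(cmd))  # ./configure --prefix=/opt/gmp --enable-cxx
```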
Passing a URL to `spack create` is a convenient and easy way to get a basic package template, but what if your software is licensed and cannot be downloaded from a URL? You can still create a boilerplate `package.py` by telling `spack create` what name you want to use:

```
$ spack create --name intel
```

This will create a simple `intel` package with an `install()` method that you can craft to install your package.

What if `spack create <url>` guessed the wrong name or build system? For example, if your package uses the Autotools build system but does not come with a `configure` script, Spack won’t realize it uses Autotools. You can overwrite the old package with `--force` and specify a name with `--name` or a build system template to use with `--template`:

```
$ spack create --name gmp https://gmplib.org/download/gmp/gmp-6.1.2.tar.bz2
$ spack create --force --template autotools https://gmplib.org/download/gmp/gmp-6.1.2.tar.bz2
```

Note

If you are creating a package that uses the Autotools build system but does not come with a `configure` script, you’ll need to add an `autoreconf` method to your package that explains how to generate the `configure` script. You may also need the following dependencies:

```
depends_on('autoconf', type='build')
depends_on('automake', type='build')
depends_on('libtool', type='build')
depends_on('m4', type='build')
```

A complete list of available build system templates can be found by running `spack create --help`.

#### `spack edit`[¶](#spack-edit)

One of the easiest ways to learn how to write packages is to look at existing ones. You can edit a package file by name with the `spack edit` command:

```
$ spack edit gmp
```

So, if you used `spack create` to create a package, then saved and closed the resulting file, you can get back to it with `spack edit`.
The `gmp` package actually lives in `$SPACK_ROOT/var/spack/repos/builtin/packages/gmp/package.py`, but `spack edit` provides a much simpler shortcut and saves you the trouble of typing the full path.

### Naming & directory structure[¶](#naming-directory-structure)

This section describes how packages need to be named, and where they live in Spack’s directory structure. In general, [spack create](#cmd-spack-create) handles creating package files for you, so you can skip most of the details here.

#### `var/spack/repos/builtin/packages`[¶](#var-spack-repos-builtin-packages)

A Spack installation directory is structured like a standard UNIX install prefix (`bin`, `lib`, `include`, `var`, `opt`, etc.). Most of the code for Spack lives in `$SPACK_ROOT/lib/spack`. Packages themselves live in `$SPACK_ROOT/var/spack/repos/builtin/packages`.

If you `cd` to that directory, you will see directories for each package:

```
$ cd $SPACK_ROOT/var/spack/repos/builtin/packages && ls
abinit  abyss  accfft  ack  activeharmony  adept-utils  adios  adios2  adlbx  adol-c
...
```

Each directory contains a file called `package.py`, which is where all the python code for the package goes. For example, the `libelf` package lives in:

```
$SPACK_ROOT/var/spack/repos/builtin/packages/libelf/package.py
```

Alongside the `package.py` file, a package may contain extra directories or files (like patches) that it needs to build.

#### Package Names[¶](#package-names)

Packages are named after the directory containing `package.py`. So, `libelf`’s `package.py` lives in a directory called `libelf`. The `package.py` file defines a class called `Libelf`, which extends Spack’s `Package` class. For example, here is `$SPACK_ROOT/var/spack/repos/builtin/packages/libelf/package.py`:

```
from spack import *

class Libelf(Package):
    """ ... description ... """
    homepage = ...
    url      = ...

    version(...)
    depends_on(...)

    def install():
        ...
```

The **directory name** (`libelf`) determines the package name that users should provide on the command line. e.g., if you type any of these:

```
$ spack info libelf
$ spack versions libelf
$ spack install libelf@0.8.13
```

Spack sees the package name in the spec and looks for `libelf/package.py` in `var/spack/repos/builtin/packages`. Likewise, if you run `spack install py-numpy`, Spack looks for `py-numpy/package.py`.

Spack uses the directory name as the package name in order to give packagers more freedom in naming their packages. Package names can contain letters, numbers, and dashes. Using a Python identifier (e.g., a class name or a module name) would make it difficult to support these options. So, you can name a package `3proxy` or `foo-bar` and Spack won’t care. It just needs to see that name in the packages directory.

#### Package class names[¶](#package-class-names)

Spack loads `package.py` files dynamically, and it needs to find a special class name in the file for the load to succeed. The **class name** (`Libelf` in our example) is formed by converting words separated by `-` in the file name to CamelCase. If the name starts with a number, we prefix the class name with `_`. Here are some examples:

| Module Name | Class Name |
| --- | --- |
| `foo-bar` | `FooBar` |
| `3proxy` | `_3proxy` |

In general, you won’t have to remember this naming convention because [spack create](#cmd-spack-create) and [spack edit](#cmd-spack-edit) handle the details for you.

### Trusted Downloads[¶](#trusted-downloads)

Spack verifies that the source code it downloads is not corrupted or compromised; or at least, that it is the same version the author of the Spack package saw when the package was created. If Spack uses a download method it can verify, we say the download method is *trusted*.
Trust is important for *all downloads*: Spack has no control over the security of the various sites from which it downloads source code, and can never assume that any particular site hasn’t been compromised.

Trust is established in different ways for different download methods. For the most common download method, a single-file tarball, the tarball is checksummed. Git downloads using `commit=` are trusted implicitly, as long as a hash is specified.

Spack also provides untrusted download methods: tarball URLs may be supplied without a checksum, or Git downloads may specify a branch or tag instead of a hash. If the user does not control or trust the source of an untrusted download, it is a security risk. Unless otherwise specified by the user for special cases, Spack should by default use *only* trusted download methods.

Unfortunately, Spack does not currently provide that guarantee. It does provide the following mechanisms for safety:

1. By default, Spack will only install a tarball package if it has a checksum and that checksum matches. You can override this with `spack install --no-checksum`.
2. Numeric versions are almost always tarball downloads, whereas non-numeric versions not named `develop` frequently download untrusted branches or tags from a version control system. As long as a package has at least one numeric version, and no non-numeric version named `develop`, Spack will prefer it over any non-numeric versions.

#### Checksums[¶](#checksums)

For tarball downloads, Spack can currently support checksums using the MD5, SHA-1, SHA-224, SHA-256, SHA-384, and SHA-512 algorithms. It determines the algorithm to use based on the hash length.
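The length-based detection works because each supported algorithm produces a hex digest of a distinct length. The sketch below illustrates the idea using the standard digest lengths; the function name and mapping are illustrative, not Spack's actual lookup code:

```python
# Hex-digest lengths for the algorithms named above. Each length is
# unique, so the checksum's length identifies the algorithm.
HEX_LEN_TO_ALGO = {
    32: 'md5',      # 128-bit digest
    40: 'sha1',     # 160-bit digest
    56: 'sha224',
    64: 'sha256',
    96: 'sha384',
    128: 'sha512',
}

def algo_for_checksum(checksum):
    """Guess the hash algorithm from a hex checksum's length (sketch)."""
    try:
        return HEX_LEN_TO_ALGO[len(checksum)]
    except KeyError:
        raise ValueError('unrecognized checksum length: %d' % len(checksum))

print(algo_for_checksum('4136d7b4c04df68b686570afa26988ac'))  # md5
```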
### Versions and fetching[¶](#versions-and-fetching)

The most straightforward way to add new versions to your package is to add a line like this in the package class:

```
class Foo(Package):
    url = "http://example.com/foo-1.0.tar.gz"

    version('8.2.1', '4136d7b4c04df68b686570afa26988ac')
    version('8.2.0', '1c9f62f0778697a09d36121ead88e08e')
    version('8.1.2', 'd47dd09ed7ae6e7fd6f9a816d7f5fdf6')
```

Versions should be listed in descending order, from newest to oldest.

#### Date Versions[¶](#date-versions)

If you wish to use dates as versions, it is best to use the format `@yyyy-mm-dd`. This will ensure they sort in the correct order. Alternately, you might use a hybrid release-version / date scheme. For example, `@1.3_2016-08-31` would mean the version from the `1.3` branch, as of August 31, 2016.

#### Version URLs[¶](#version-urls)

By default, each version’s URL is extrapolated from the `url` field in the package. For example, Spack is smart enough to download version `8.2.1` of the `Foo` package above from <http://example.com/foo-8.2.1.tar.gz>.

If the URL is particularly complicated or changes based on the release, you can override the default URL generation algorithm by defining your own `url_for_version()` function. For example, the download URL for OpenMPI contains the major.minor version in one spot and the major.minor.patch version in another: <https://www.open-mpi.org/software/ompi/v2.1/downloads/openmpi-2.1.1.tar.bz2>. In order to handle this, you can define a `url_for_version()` function like so:

```
def url_for_version(self, version):
    url = "http://www.open-mpi.org/software/ompi/v{0}/downloads/openmpi-{1}.tar.bz2"
    return url.format(version.up_to(2), version)
```

With the use of this `url_for_version()`, Spack knows to download OpenMPI `2.1.1` from <http://www.open-mpi.org/software/ompi/v2.1/downloads/openmpi-2.1.1.tar.bz2> but download OpenMPI `1.10.7` from <http://www.open-mpi.org/software/ompi/v1.10/downloads/openmpi-1.10.7.tar.bz2>.
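The `{0}`/`{1}` substitution above can be exercised outside of Spack with a toy stand-in for the version object. `SimpleVersion` below is illustrative only; it mimics just enough of the `up_to()` behavior (described next) to show how the two URL slots get filled:

```python
# Toy stand-in for Spack's Version class, for illustration only.
class SimpleVersion:
    def __init__(self, s):
        self.parts = s.split('.')

    def up_to(self, n):
        # '2.1.1'.up_to(2) -> '2.1'
        return '.'.join(self.parts[:n])

    def __str__(self):
        return '.'.join(self.parts)

url = "http://www.open-mpi.org/software/ompi/v{0}/downloads/openmpi-{1}.tar.bz2"
v = SimpleVersion('2.1.1')
print(url.format(v.up_to(2), v))
# http://www.open-mpi.org/software/ompi/v2.1/downloads/openmpi-2.1.1.tar.bz2
```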
You’ll notice that OpenMPI’s `url_for_version()` function makes use of a special `Version` function called `up_to()`. When you call `version.up_to(2)` on a version like `1.10.0`, it returns `1.10`. `version.up_to(1)` would return `1`. This can be very useful for packages that place all `X.Y.*` versions in a single directory and then place all `X.Y.Z` versions in a sub-directory.

There are a few `Version` properties you should be aware of. We generally prefer numeric versions to be separated by dots for uniformity, but not all tarballs are named that way. For example, `icu4c` separates its major and minor versions with underscores, like `icu4c-57_1-src.tgz`. The value `57_1` can be obtained with the use of the `version.underscored` property. There are other separator properties as well:

| Property | Result |
| --- | --- |
| `version.dotted` | `1.2.3` |
| `version.dashed` | `1-2-3` |
| `version.underscored` | `1_2_3` |
| `version.joined` | `123` |

Note

Python properties don’t need parentheses. `version.dashed` is correct. `version.dashed()` is incorrect.

In addition, these version properties can be combined with `up_to()`. For example:

```
>>> version = Version('1.2.3')
>>> version.up_to(2).dashed
Version('1-2')
>>> version.underscored.up_to(2)
Version('1_2')
```

As you can see, order is not important. Just keep in mind that `up_to()` and the other version properties return `Version` objects, not strings.

If a URL cannot be derived systematically, or there is a special URL for one of its versions, you can add an explicit URL for a particular version:

```
version('8.2.1', '4136d7b4c04df68b686570afa26988ac',
        url='http://example.com/foo-8.2.1-special-version.tar.gz')
```

When you supply a custom URL for a version, Spack uses that URL *verbatim* and does not perform extrapolation. The order of precedence of these methods is: 1. package-level `url` 2. `url_for_version()` 3.
version-specific `url`, so if your package contains a `url_for_version()`, it can be overridden by a version-specific `url`.

If your package does not contain a package-level `url` or `url_for_version()`, Spack can determine which URL to download from even if only some of the versions specify their own `url`. Spack will use the nearest URL *before* the requested version. This is useful for packages that have an easy-to-extrapolate URL, but keep changing their URL format every few releases. With this method, you only need to specify the `url` when the URL changes.

#### Skipping the expand step[¶](#skipping-the-expand-step)

Spack normally expands archives (e.g. `*.tar.gz` and `*.zip`) automatically after downloading them. If you want to skip this step (e.g., for self-extracting executables and other custom archive types), you can add `expand=False` to a `version` directive.

```
version('8.2.1', '4136d7b4c04df68b686570afa26988ac',
        url='http://example.com/foo-8.2.1-special-version.sh', expand=False)
```

When `expand` is set to `False`, Spack sets the current working directory to the directory containing the downloaded archive before it calls your `install` method. Within `install`, the path to the downloaded archive is available as `self.stage.archive_file`.

Here is an example snippet for packages distributed as self-extracting archives. The example sets permissions on the downloaded file to make it executable, then runs it with some arguments.

```
def install(self, spec, prefix):
    set_executable(self.stage.archive_file)
    installer = Executable(self.stage.archive_file)
    installer('--prefix=%s' % prefix, 'arg1', 'arg2', 'etc.')
```

#### Download caching[¶](#download-caching)

Spack maintains a cache (described [here](index.html#caching)) which saves files retrieved during package installations to avoid re-downloading in the case that a package is installed with a different specification (but the same version) or reinstalled on account of a change in the hashing scheme.
#### Version comparison[¶](#version-comparison)

Most Spack versions are numeric, a tuple of integers; for example, `apex@0.1`, `ferret@6.96` or `py-netcdf@1.2.3.1`. Spack knows how to compare and sort numeric versions.

Some Spack versions involve slight extensions of numeric syntax; for example, `py-sphinx-rtd-theme@0.1.10a0`. In this case, numbers are always considered to be “newer” than letters. This is for consistency with [RPM](https://bugzilla.redhat.com/show_bug.cgi?id=50977).

Spack versions may also be arbitrary non-numeric strings; for example, `@develop`, `@master`, `@local`. The following rules determine the sort order of numeric vs. non-numeric versions:

1. The non-numeric version `@develop` is considered greatest (newest).
2. Numeric versions are all less than the `@develop` version, and are sorted numerically.
3. All other non-numeric versions are less than numeric versions, and are sorted alphabetically.

The logic behind this sort order is two-fold:

1. Non-numeric versions are usually used for special cases while developing or debugging a piece of software. Keeping most of them less than numeric versions ensures that Spack chooses numeric versions by default whenever possible.
2. The most-recent development version of a package will usually be newer than any released numeric versions. This allows the `develop` version to satisfy dependencies like `depends_on('abc', when='@x.y.z:')`.

#### Version selection[¶](#version-selection)

When concretizing, many versions might match a user-supplied spec. For example, the spec `python` matches all available versions of the package `python`. Similarly, `python@3:` matches all versions of Python3. Given a set of versions that match a spec, Spack concretization uses the following priorities to decide which one to use: 1. If the user provided a list of versions in `packages.yaml`, the first matching version in that list will be used. 2.
If one or more versions is specified as `preferred=True`, in either `packages.yaml` or `package.py`, the largest matching version will be used. (“Largest” is defined by the sort order above.)
3. If no preferences in particular are specified in the package or in `packages.yaml`, then the largest matching non-develop version will be used. By avoiding `@develop`, this prevents users from accidentally installing a `@develop` version.
4. If all else fails and `@develop` is the only matching version, it will be used.

#### `spack checksum`[¶](#spack-checksum)

If you want to add new versions to a package you’ve already created, this is automated with the `spack checksum` command. Here’s an example for `libelf`:

```
$ spack checksum libelf
==> Found 16 versions of libelf.
  0.8.13  http://www.mr511.de/software/libelf-0.8.13.tar.gz
  0.8.12  http://www.mr511.de/software/libelf-0.8.12.tar.gz
  0.8.11  http://www.mr511.de/software/libelf-0.8.11.tar.gz
  0.8.10  http://www.mr511.de/software/libelf-0.8.10.tar.gz
  0.8.9   http://www.mr511.de/software/libelf-0.8.9.tar.gz
  0.8.8   http://www.mr511.de/software/libelf-0.8.8.tar.gz
  0.8.7   http://www.mr511.de/software/libelf-0.8.7.tar.gz
  0.8.6   http://www.mr511.de/software/libelf-0.8.6.tar.gz
  0.8.5   http://www.mr511.de/software/libelf-0.8.5.tar.gz
  ...
  0.5.2   http://www.mr511.de/software/libelf-0.5.2.tar.gz

How many would you like to checksum? (default is 1, q to abort)
```

This does the same thing that `spack create` does, but it allows you to go back and add new versions easily as you need them (e.g., as they’re released).
It fetches the tarballs you ask for and prints out a list of `version` commands ready to copy/paste into your package file:

```
==> Checksummed new versions of libelf:
    version('0.8.13', '4136d7b4c04df68b686570afa26988ac')
    version('0.8.12', 'e21f8273d9f5f6d43a59878dc274fec7')
    version('0.8.11', 'e931910b6d100f6caa32239849947fbf')
    version('0.8.10', '9db4d36c283d9790d8fa7df1f4d7b4d9')
```

By default, Spack will search for new tarball downloads by scraping the parent directory of the tarball you gave it. So, if your tarball is at `http://example.com/downloads/foo-1.0.tar.gz`, Spack will look in `http://example.com/downloads/` for links to additional versions. If you need to search another path for download links, you can supply some extra attributes that control how your package finds new versions. See the documentation on [list_url](#attribute-list-url) and [list_depth](#attribute-list-depth).

Note

* This command assumes that Spack can extrapolate new URLs from an existing URL in the package, and that Spack can find similar URLs on a webpage. If that’s not possible, e.g. if the package’s developers don’t name their tarballs consistently, you’ll need to manually add `version` calls yourself.
* For `spack checksum` to work, Spack needs to be able to `import` your package in Python. That means it can’t have any syntax errors, or the `import` will fail. Use this once you’ve got your package in working order.
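The parent-directory scraping described above starts by deriving the listing URL from the tarball URL. That first step can be sketched with the standard library; `parent_listing_url` is an illustrative helper, not Spack's spidering code:

```python
import posixpath
from urllib.parse import urlparse, urlunparse

def parent_listing_url(tarball_url):
    """Derive the directory URL that would be scraped for more
    versions, given a tarball URL (illustrative sketch)."""
    parts = urlparse(tarball_url)
    parent = posixpath.dirname(parts.path) + '/'
    return urlunparse(parts._replace(path=parent))

print(parent_listing_url('http://example.com/downloads/foo-1.0.tar.gz'))
# http://example.com/downloads/
```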
### Finding new versions[¶](#finding-new-versions)

You’ve already seen the `homepage` and `url` package attributes:

```
from spack import *

class Mpich(Package):
    """MPICH is a high performance and widely portable implementation of
       the Message Passing Interface (MPI) standard."""
    homepage = "http://www.mpich.org"
    url      = "http://www.mpich.org/static/downloads/3.0.4/mpich-3.0.4.tar.gz"
```

These are class-level attributes used by Spack to show users information about the package, and to determine where to download its source code.

Spack uses the tarball URL to extrapolate where to find other tarballs of the same package (e.g. in [spack checksum](#cmd-spack-checksum)), but this does not always work. This section covers ways you can tell Spack to find tarballs elsewhere.

#### `list_url`[¶](#list-url)

When Spack tries to find available versions of packages (e.g. with [spack checksum](#cmd-spack-checksum)), it spiders the parent directory of the tarball in the `url` attribute. For example, for libelf, the url is:

```
url = "http://www.mr511.de/software/libelf-0.8.13.tar.gz"
```

Here, Spack spiders `http://www.mr511.de/software/` to find similar tarball links and ultimately to make a list of available versions of `libelf`.

For many packages, the tarball’s parent directory may be unlistable, or it may not contain any links to source code archives. In fact, many times additional package downloads aren’t even available in the same directory as the download URL. For these, you can specify a separate `list_url` indicating the page to search for tarballs.
For example, `libdwarf` has the homepage as the `list_url`, because that is where links to old versions are:

```
class Libdwarf(Package):
    homepage = "http://www.prevanders.net/dwarf.html"
    url      = "http://www.prevanders.net/libdwarf-20130729.tar.gz"
    list_url = homepage
```

#### `list_depth`[¶](#list-depth)

`libdwarf` and many other packages have a listing of available versions on a single webpage, but not all do. For example, `mpich` has a tarball URL that looks like this:

```
url = "http://www.mpich.org/static/downloads/3.0.4/mpich-3.0.4.tar.gz"
```

But its downloads are in many different subdirectories of `http://www.mpich.org/static/downloads/`. So, we need to add a `list_url` *and* a `list_depth` attribute:

```
class Mpich(Package):
    homepage   = "http://www.mpich.org"
    url        = "http://www.mpich.org/static/downloads/3.0.4/mpich-3.0.4.tar.gz"
    list_url   = "http://www.mpich.org/static/downloads/"
    list_depth = 1
```

By default, Spack only looks at the top-level page available at `list_url`. `list_depth = 1` tells it to follow up to 1 level of links from the top-level page. Note that here, this implies 1 level of subdirectories, as the `mpich` website is structured much like a filesystem. But `list_depth` really refers to link depth when spidering the page.

### Fetching from code repositories[¶](#fetching-from-code-repositories)

For some packages, source code is provided in a Version Control System (VCS) repository rather than in a tarball. Spack can fetch packages from VCS repositories. Currently, Spack supports fetching with [Git](#git-fetch), [Mercurial (hg)](#hg-fetch), [Subversion (svn)](#svn-fetch), and [Go](#go-fetch).

To fetch a package from a source repository, Spack needs to know which VCS to use and where to download from. Much like with `url`, package authors can specify a class-level `git`, `hg`, `svn`, or `go` attribute containing the correct download location.
Many packages developed with Git have both a Git repository and release tarballs available for download. Packages can define both a class-level tarball URL and VCS. For example:

```
class Trilinos(CMakePackage):
    homepage = "https://trilinos.org/"
    url = "https://github.com/trilinos/Trilinos/archive/trilinos-release-12-12-1.tar.gz"
    git = "https://github.com/trilinos/Trilinos.git"

    version('develop', branch='develop')
    version('master', branch='master')
    version('12.12.1', 'ecd4606fa332212433c98bf950a69cc7')
    version('12.10.1', '667333dbd7c0f031d47d7c5511fd0810')
    version('12.8.1', '9f37f683ee2b427b5540db8a20ed6b15')
```

If a package contains both a `url` and `git` class-level attribute, Spack decides which to use based on the arguments to the `version()` directive. Versions containing a specific branch, tag, or revision are assumed to be for VCS download methods, while versions containing a checksum are assumed to be for URL download methods.

Like `url`, if a specific version downloads from a different repository than the default repo, it can be overridden with a version-specific argument.

Note

In order to reduce ambiguity, each package can only have a single VCS top-level attribute in addition to `url`. In the rare case that a package uses multiple VCS, a fetch strategy can be specified for each version. For example, the `rockstar` package contains:

```
class Rockstar(MakefilePackage):
    homepage = "https://bitbucket.org/gfcstanford/rockstar"

    version('develop', git='https://bitbucket.org/gfcstanford/rockstar.git')
    version('yt', hg='https://bitbucket.org/MatthewTurk/rockstar')
```

#### Git[¶](#git)

Git fetching supports the following parameters to `version`:

* `git`: URL of the git repository, if different than the class-level `git`.
* `branch`: Name of a branch to fetch.
* `tag`: Name of a tag to fetch.
* `commit`: SHA hash (or prefix) of a commit to fetch.
* `submodules`: Also fetch submodules recursively when checking out this repository.
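The rule above, that `version()` arguments select the download method, can be sketched roughly as follows (illustrative helper, not Spack's real dispatch logic):

```python
def choose_fetcher(**version_args):
    """Hypothetical sketch of how a version() directive's arguments
    might select a fetcher: branch/tag/commit arguments imply a VCS
    download, a bare checksum implies a URL download."""
    vcs_keys = {'branch', 'tag', 'commit'}
    given = vcs_keys & set(version_args)
    if len(given) > 1:
        raise ValueError("only one of tag, branch, or commit allowed")
    if given:
        return 'vcs'
    if 'checksum' in version_args:
        return 'url'
    raise ValueError("cannot infer a fetch strategy")

print(choose_fetcher(branch='develop'))                             # vcs
print(choose_fetcher(checksum='ecd4606fa332212433c98bf950a69cc7'))  # url
```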
Only one of `tag`, `branch`, or `commit` can be used at a time.

Default branch

To fetch a repository's default branch:

```
class Example(Package):
    git = "https://github.com/example-project/example.git"

    version('develop')
```

This download method is untrusted, and is not recommended. Aside from HTTPS, there is no way to verify that the repository has not been compromised, and the commit you get when you install the package likely won't be the same commit that was used when the package was first written. Additionally, the default branch may change. It is best to at least specify a branch name.

Branches

To fetch a particular branch, use the `branch` parameter:

```
version('experimental', branch='experimental')
```

This download method is untrusted, and is not recommended. Branches are moving targets, so the commit you get when you install the package likely won't be the same commit that was used when the package was first written.

Tags

To fetch from a particular tag, use `tag` instead:

```
version('1.0.1', tag='v1.0.1')
```

This download method is untrusted, and is not recommended. Although tags are generally more stable than branches, Git allows tags to be moved. Many developers use tags to denote rolling releases, and may move the tag when a bug is patched.

Commits

Finally, to fetch a particular commit, use `commit`:

```
version('2014-10-08', commit='9d38cd4e2c94c3cea97d0e2924814acc')
```

This doesn't have to be a full hash; you can abbreviate it as you'd expect with git:

```
version('2014-10-08', commit='9d38cd')
```

This download method *is trusted*. It is the recommended way to securely download from a Git repository.

It may be useful to provide a saner version for commits like this, e.g. you might use the date as the version, as done above. Or, if you know the commit at which a release was cut, you can use the release version. It's up to the package author to decide what makes the most sense.
Although you can use the commit hash as the version number, this is not recommended, as it won't sort properly.

Submodules

You can supply `submodules=True` to cause Spack to fetch submodules recursively along with the repository at fetch time. For more information about git submodules see the manpage of git: `man git-submodule`.

```
version('1.0.1', tag='v1.0.1', submodules=True)
```

#### GitHub[¶](#github)

If a project is hosted on GitHub, *any* valid Git branch, tag, or hash may be downloaded as a tarball. This is accomplished simply by constructing an appropriate URL. Spack can checksum any package downloaded this way, thereby producing a trusted download. For example, the following downloads a particular hash, and then applies a checksum.

```
version('1.9.5.1.1', 'd035e4bc704d136db79b43ab371b27d2',
        url='https://www.github.com/jswhit/pyproj/tarball/0be612cc9f972e38b50a90c946a9b353e2ab140f')
```

#### Mercurial[¶](#mercurial)

Fetching with Mercurial works much like [Git](#git-fetch), but you use the `hg` parameter.

Default branch

Add the `hg` attribute with no `revision` passed to `version`:

```
class Example(Package):
    hg = "https://bitbucket.org/example-project/example"

    version('develop')
```

This download method is untrusted, and is not recommended. As with Git's default fetching strategy, there is no way to verify the integrity of the download.

Revisions

To fetch a particular revision, use the `revision` parameter:

```
version('1.0', revision='v1.0')
```

Unlike `git`, which has special parameters for different types of revisions, you can use `revision` for branches, tags, and commits when you fetch with Mercurial. Like Git, fetching specific branches or tags is an untrusted download method, and is not recommended. The recommended fetch strategy is to specify a particular commit hash as the revision.

#### Subversion[¶](#subversion)

To fetch with subversion, use the `svn` and `revision` parameters.
Fetching the head

Simply add an `svn` parameter to the package:

```
class Example(Package):
    svn = "https://outreach.scidac.gov/svn/example/trunk"

    version('develop')
```

This download method is untrusted, and is not recommended for the same reasons as mentioned above.

Fetching a revision

To fetch a particular revision, add a `revision` argument to the version directive:

```
version('develop', revision=128)
```

This download method is untrusted, and is not recommended. Unfortunately, Subversion has no commit hashing scheme like Git and Mercurial do, so there is no way to guarantee that the download you get is the same as the download used when the package was created. Use at your own risk.

Subversion branches are handled as part of the directory structure, so you can check out a branch or tag by changing the URL. If you want to package multiple branches, simply add a `svn` argument to each version directive.

#### Go[¶](#go)

Go isn't a VCS; it is a programming language with a builtin command, [go get](https://golang.org/cmd/go/#hdr-Download_and_install_packages_and_dependencies), that fetches packages and their dependencies automatically. It can clone a Git repository, or download from another source location. For example:

```
class ThePlatinumSearcher(Package):
    homepage = "https://github.com/monochromegane/the_platinum_searcher"
    go = "github.com/monochromegane/the_platinum_searcher/..."

    version('head')
```

Go cannot be used to fetch a particular commit or branch; it always downloads the head of the repository. This download method is untrusted, and is not recommended. Use another fetch strategy whenever possible.

### Resources (expanding extra tarballs)[¶](#resources-expanding-extra-tarballs)

Some packages (most notably compilers) provide optional features if additional resources are expanded within their source tree before building.
In Spack it is possible to describe such a need with the `resource` directive:

```
resource(
    name='cargo',
    git='https://github.com/rust-lang/cargo.git',
    tag='0.10.0',
    destination='cargo'
)
```

Based on the keywords present among the arguments, the appropriate `FetchStrategy` will be used for the resource. The keyword `destination` is relative to the source root of the package and should point to where the resource is to be expanded.

### Licensed software[¶](#licensed-software)

In order to install licensed software, Spack needs to know a few more details about a package. The following class attributes should be defined.

#### `license_required`[¶](#license-required)

Boolean. If set to `True`, this software requires a license. If set to `False`, all of the following attributes will be ignored. Defaults to `False`.

#### `license_comment`[¶](#license-comment)

String. Contains the symbol used by the license manager to denote a comment. Defaults to `#`.

#### `license_files`[¶](#license-files)

List of strings. These are files that the software searches for when looking for a license. All file paths must be relative to the installation directory. More complex packages like Intel may require multiple licenses for individual components. Defaults to the empty list.

#### `license_vars`[¶](#license-vars)

List of strings. Environment variables that can be set to tell the software where to look for a license if it is not in the usual location. Defaults to the empty list.

#### `license_url`[¶](#license-url)

String. A URL pointing to license setup instructions for the software. Defaults to the empty string.

For example, let's take a look at the package for the PGI compilers.

```
# Licensing
license_required = True
license_comment = '#'
license_files = ['license.dat']
license_vars = ['PGROUPD_LICENSE_FILE', 'LM_LICENSE_FILE']
license_url = 'http://www.pgroup.com/doc/pgiinstall.pdf'
```

As you can see, PGI requires a license.
Its license manager, FlexNet, uses the `#` symbol to denote a comment. It expects the license file to be named `license.dat` and to be located directly in the installation prefix. If you would like the license file to be located elsewhere, simply set `PGROUPD_LICENSE_FILE` or `LM_LICENSE_FILE` after installation. For further instructions on installation and licensing, see the URL provided.

Let's walk through a sample PGI installation to see exactly what Spack is and isn't capable of. Since PGI does not provide a download URL, it must be downloaded manually. It can either be added to a mirror or located in the current directory when `spack install pgi` is run. See [Mirrors](index.html#mirrors) for instructions on setting up a mirror.

After running `spack install pgi`, the first thing that will happen is Spack will create a global license file located at `$SPACK_ROOT/etc/spack/licenses/pgi/license.dat`. It will then open up the file using the editor set in `$EDITOR`, or vi if unset. It will look like this:

```
# A license is required to use pgi.
#
# The recommended solution is to store your license key in this global
# license file. After installation, the following symlink(s) will be
# added to point to this file (relative to the installation prefix):
#
#   license.dat
#
# Alternatively, use one of the following environment variable(s):
#
#   PGROUPD_LICENSE_FILE
#   LM_LICENSE_FILE
#
# If you choose to store your license in a non-standard location, you may
# set one of these variable(s) to the full pathname to the license file, or
# port@host if you store your license keys on a dedicated license server.
# You will likely want to set this variable in a module file so that it
# gets loaded every time someone tries to use pgi.
#
# For further information on how to acquire a license, please refer to:
#
#   http://www.pgroup.com/doc/pgiinstall.pdf
#
# You may enter your license below.
```

You can add your license directly to this file, or tell FlexNet to use a license stored on a separate license server. Here is an example that points to a license server called licman1:

```
SERVER licman1.mcs.anl.gov 00163eb7fba5 27200
USE_SERVER
```

If your package requires the license to install, you can reference the location of this global license using `self.global_license_file`. After installation, symlinks for all of the files given in `license_files` will be created, pointing to this global license. If you install a different version or variant of the package, Spack will automatically detect and reuse the already existing global license.

If the software you are trying to package doesn't rely on license files, Spack will print a warning message, letting the user know that they need to set an environment variable, or pointing them to the installation documentation.

### Patches[¶](#patches)

Depending on the host architecture, package version, known bugs, or other issues, you may need to patch your software to get it to build correctly. Like many other package systems, Spack allows you to store patches alongside your package files and apply them to source code after it's downloaded.

#### `patch`[¶](#patch)

You can specify patches in your package file with the `patch()` directive. `patch` looks like this:

```
class Mvapich2(Package):
    ...
    patch('ad_lustre_rwcontig_open_source.patch', when='@1.9:')
```

The first argument can be either a URL or a filename. It specifies a patch file that should be applied to your source. If the patch you supply is a filename, then the patch needs to live within the Spack source tree.
For example, the patch above lives in a directory structure like this: ``` $SPACK_ROOT/var/spack/repos/builtin/packages/ mvapich2/ package.py ad_lustre_rwcontig_open_source.patch ``` If you supply a URL instead of a filename, you need to supply a `sha256` checksum, like this: ``` patch('http://www.nwchem-sw.org/images/Tddft_mxvec20.patch', sha256='252c0af58be3d90e5dc5e0d16658434c9efa5d20a5df6c10bf72c2d77f780866') ``` Spack includes the hashes of patches in its versioning information, so that the same package with different patches applied will have different hash identifiers. To ensure that the hashing scheme is consistent, you must use a `sha256` checksum for the patch. Patches will be fetched from their URLs, checked, and applied to your source code. You can use the `spack sha256` command to generate a checksum for a patch file or URL. Spack can also handle compressed patches. If you use these, Spack needs a little more help. Specifically, it needs *two* checksums: the `sha256` of the patch and `archive_sha256` for the compressed archive. `archive_sha256` helps Spack ensure that the downloaded file is not corrupted or malicious, before running it through a tool like `tar` or `zip`. The `sha256` of the patch is still required so that it can be included in specs. Providing it in the package file ensures that Spack won’t have to download and decompress patches it won’t end up using at install time. Both the archive and patch checksum are checked when patch archives are downloaded. ``` patch('http://www.nwchem-sw.org/images/Tddft_mxvec20.patch.gz', sha256='252c0af58be3d90e5dc5e0d16658434c9efa5d20a5df6c10bf72c2d77f780866', archive_sha256='4e8092a161ec6c3a1b5253176fcf33ce7ba23ee2ff27c75dbced589dabacd06e') ``` `patch` keyword arguments are described below. ##### `sha256`, `archive_sha256`[¶](#sha256-archive-sha256) Hashes of downloaded patch and compressed archive, respectively. Only needed for patches fetched from URLs. 
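For reference, the `sha256` values above are ordinary SHA-256 hex digests of the patch file's bytes; the snippet below computes the same kind of digest with the Python standard library (a hypothetical helper, shown only to make the checksum format concrete):

```python
import hashlib

def sha256_of(data):
    """Return the hex SHA-256 digest of raw patch bytes -- the same
    kind of value the sha256= arguments above expect."""
    return hashlib.sha256(data).hexdigest()

# Digest of a tiny patch fragment (no expected value shown; run it)
print(sha256_of(b"--- a/file.c\n+++ b/file.c\n"))
```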
##### `when`[¶](#when)

If supplied, this is a spec that tells Spack when to apply the patch. If the installed package spec matches this spec, the patch will be applied. In our example above, the patch is applied when mvapich2 is at version `1.9` or higher.

##### `level`[¶](#level)

This tells Spack how to run the `patch` command. By default, the level is 1 and Spack runs `patch -p 1`. If level is 2, Spack will run `patch -p 2`, and so on.

A lot of people are confused by level, so here's a primer. If you look in your patch file, you may see something like this:

```
--- a/src/mpi/romio/adio/ad_lustre/ad_lustre_rwcontig.c 2013-12-10 12:05:44.806417000 -0800
+++ b/src/mpi/romio/adio/ad_lustre/ad_lustre_rwcontig.c 2013-12-10 11:53:03.295622000 -0800
@@ -8,7 +8,7 @@
  *   Copyright (C) 2008 Sun Microsystems, Lustre group
  */

-#define _XOPEN_SOURCE 600
+//#define _XOPEN_SOURCE 600
 #include <stdlib.h>
 #include <malloc.h>
 #include "ad_lustre.h"
```

The first two lines show paths with synthetic `a/` and `b/` prefixes. These are placeholders for the two `mvapich2` source directories that `diff` compared when it created the patch file. This is git's default behavior when creating patch files, but other programs may behave differently.

`-p1` strips off the first level of the prefix in both paths, allowing the patch to be applied from the root of an expanded mvapich2 archive. If you set level to `2`, it would strip off `src`, and so on.

It's generally easier to just structure your patch file so that it applies cleanly with `-p1`, but if you're using a patch you didn't create yourself, `level` can be handy.

##### `working_dir`[¶](#working-dir)

This tells Spack where to run the `patch` command. By default, the working directory is the source path of the stage (`.`). However, sometimes patches are made with respect to a subdirectory and this is where the working directory comes in handy.
Internally, the working directory is given to `patch` via the `-d` option. Let's take the example patch from above and assume for some reason, it can only be downloaded in the following form:

```
--- a/romio/adio/ad_lustre/ad_lustre_rwcontig.c 2013-12-10 12:05:44.806417000 -0800
+++ b/romio/adio/ad_lustre/ad_lustre_rwcontig.c 2013-12-10 11:53:03.295622000 -0800
@@ -8,7 +8,7 @@
  *   Copyright (C) 2008 Sun Microsystems, Lustre group
  */

-#define _XOPEN_SOURCE 600
+//#define _XOPEN_SOURCE 600
 #include <stdlib.h>
 #include <malloc.h>
 #include "ad_lustre.h"
```

Hence, the patch needs to be applied in the `src/mpi` subdirectory, and the `working_dir='src/mpi'` option does exactly that.

#### Patch functions[¶](#patch-functions)

In addition to supplying patch files, you can write a custom function to patch a package's source. For example, the `py-pyside` package contains some custom code for tweaking the way the PySide build handles `RPATH`:

```
def patch(self):
    """Undo PySide RPATH handling and add Spack RPATH."""
    # Figure out the special RPATH
    pypkg = self.spec['python'].package
    rpath = self.rpath
    rpath.append(os.path.join(
        self.prefix, pypkg.site_packages_dir, 'PySide'))

    # Add Spack's standard CMake args to the sub-builds.
    # They're called BY setup.py so we have to patch it.
    filter_file(
        r'OPTION_CMAKE,',
        r'OPTION_CMAKE, ' + (
            '"-DCMAKE_INSTALL_RPATH_USE_LINK_PATH=FALSE", '
            '"-DCMAKE_INSTALL_RPATH=%s",' % ':'.join(rpath)),
        'setup.py')

    # PySide tries to patch ELF files to remove RPATHs
    # Disable this and go with the one we set.
    if self.spec.satisfies('@1.2.4:'):
        rpath_file = 'setup.py'
    else:
        rpath_file = 'pyside_postinstall.py'

    filter_file(r'(^\s*)(rpath_cmd\(.*\))', r'\1#\2', rpath_file)

    # TODO: rpath handling for PySide 1.2.4 still doesn't work.
    # PySide can't find the Shiboken library, even though it comes
    # bundled with it and is installed in the same directory.

    # PySide does not provide official support for
    # Python 3.5, but it should work fine
    filter_file("'Programming Language :: Python :: 3.4'",
                "'Programming Language :: Python :: 3.4',\r\n        "
                "'Programming Language :: Python :: 3.5'",
                "setup.py")
```

A `patch` function, if present, will be run after patch files are applied and before `install()` is run.

You could put this logic in `install()`, but putting it in a patch function gives you some benefits. First, Spack ensures that the `patch()` function is run once per code checkout. That means that if you run install, hit ctrl-C, and run install again, the code in the patch function is only run once. Also, you can tell Spack to run only the patching part of the build using the [spack patch](#cmd-spack-patch) command.

#### Dependency patching[¶](#dependency-dependency-patching)

So far we've covered how the `patch` directive can be used by a package to patch *its own* source code. Packages can *also* specify patches to be applied to their dependencies, if they require special modifications. As with all packages in Spack, a patched dependency library can coexist with other versions of that library. See the [section on depends_on](#dependency-dependency-patching) for more details.

### Handling RPATHs[¶](#handling-rpaths)

Spack installs each package in a way that ensures that all of its dependencies are found when it runs. It does this using [RPATHs](http://en.wikipedia.org/wiki/Rpath). An RPATH is a search path, stored in a binary (an executable or library), that tells the dynamic loader where to find its dependencies at runtime. You may be familiar with [LD_LIBRARY_PATH](http://tldp.org/HOWTO/Program-Library-HOWTO/shared-libraries.html) on Linux or [DYLD_LIBRARY_PATH](https://developer.apple.com/library/mac/documentation/Darwin/Reference/ManPages/man1/dyld.1.html) on Mac OS X.
RPATH is similar to these paths, in that it tells the loader where to find libraries. Unlike them, it is embedded in the binary and not set in each user's environment.

RPATHs in Spack are handled in one of three ways:

1. For most packages, RPATHs are handled automatically using Spack's [compiler wrappers](#compiler-wrappers). These wrappers are set in standard variables like `CC`, `CXX`, `F77`, and `FC`, so most build systems (autotools and many gmake systems) pick them up and use them.
2. CMake also respects Spack's compiler wrappers, but many CMake builds have logic to overwrite RPATHs when binaries are installed. Spack provides the `std_cmake_args` variable, which includes parameters necessary for CMake builds to use the right installation RPATH. It can be used like this when `cmake` is invoked:

   ```
   class MyPackage(Package):
       ...
       def install(self, spec, prefix):
           cmake('..', *std_cmake_args)
           make()
           make('install')
   ```

3. If you need to modify the build to add your own RPATHs, you can use the `self.rpath` property of your package, which will return a list of all the RPATHs that Spack will use when it links. You can see how this is used in the [PySide example](#pyside-patch) above.

### Parallel builds[¶](#parallel-builds)

By default, Spack will invoke `make()` with a `-j <njobs>` argument, so that builds run in parallel. It figures out how many jobs to run by determining how many cores are on the host machine. Specifically, it uses the number of CPUs reported by Python's [multiprocessing.cpu_count()](http://docs.python.org/library/multiprocessing.html#multiprocessing.cpu_count).

If a package does not build properly in parallel, you can override this setting by adding `parallel = False` to your package.
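The job-count logic just described can be sketched as follows (a hypothetical helper, not Spack's internals):

```python
import multiprocessing

def make_jobs(parallel=True, njobs=None):
    """Sketch of how a '-j <njobs>' argument might be chosen: the host
    core count by default, or no flag at all when parallel builds are
    disabled (e.g. via parallel = False)."""
    if not parallel:
        return []
    if njobs is None:
        njobs = multiprocessing.cpu_count()
    return ["-j", str(njobs)]

print(make_jobs(njobs=4))         # ['-j', '4']
print(make_jobs(parallel=False))  # []
```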
For example, OpenSSL's build does not work in parallel, so its package looks like this:

```
class Openssl(Package):
    homepage = "http://www.openssl.org"
    url = "http://www.openssl.org/source/openssl-1.0.1h.tar.gz"

    version('1.0.1h', '8d6d684a9430d5cc98a62a5d8fbda8cf')
    depends_on("zlib")

    parallel = False
```

Similarly, you can disable parallel builds only for specific make commands, as `libelf` does:

```
class Libelf(Package):
    ...
    def install(self, spec, prefix):
        configure("--prefix=" + prefix,
                  "--enable-shared",
                  "--disable-dependency-tracking",
                  "--disable-debug")
        make()

        # The mkdir commands in libelf's install can fail in parallel
        make("install", parallel=False)
```

The first make will run in parallel here, but the second will not. If you set `parallel` to `False` at the package level, then each call to `make()` will be sequential by default, but packagers can call `make(parallel=True)` to override it.

### Dependencies[¶](#dependencies)

We've covered how to build a simple package, but what if one package relies on another package to build? How do you express that in a package file? And how do you refer to the other package in the build script for your own package?

Spack makes this relatively easy. Let's take a look at the `libdwarf` package to see how it's done:

```
class Libdwarf(Package):
    homepage = "http://www.prevanders.net/dwarf.html"
    url = "http://www.prevanders.net/libdwarf-20130729.tar.gz"
    list_url = homepage

    version('20130729', '4cc5e48693f7b93b7aa0261e63c0e21d')
    ...
    depends_on("libelf")

    def install(self, spec, prefix):
        ...
```

#### `depends_on()`[¶](#depends-on)

The `depends_on('libelf')` call tells Spack that it needs to build and install the `libelf` package before it builds `libdwarf`.
This means that in your `install()` method, you are guaranteed that `libelf` has been built and installed successfully, so you can rely on it for your libdwarf build.

#### Dependency specs[¶](#dependency-specs)

`depends_on` doesn't just take the name of another package. It can take a full spec as well. This means that you can restrict the versions or other configuration options of `libelf` that `libdwarf` will build with. For example, suppose that in the `libdwarf` package you write:

```
depends_on('libelf@0.8')
```

Now `libdwarf` will require `libelf` at *exactly* version `0.8`. You can also specify a requirement for a particular variant or for specific compiler flags:

```
depends_on('libelf@0.8+debug')
depends_on('libelf debug=True')
depends_on('libelf cppflags="-fPIC"')
```

Both users *and* package authors can use the same spec syntax to refer to different package configurations. Users use the spec syntax on the command line to find installed packages or to install packages with particular constraints, and package authors can use specs to describe relationships between packages.

#### Version ranges[¶](#version-ranges)

Although some packages require a specific version for their dependencies, most can be built with a range of versions. For example, if you are writing a package for a legacy Python module that only works with Python 2.4 through 2.6, this would look like:

```
depends_on('python@2.4:2.6')
```

Version ranges in Spack are *inclusive*, so `2.4:2.6` means any version greater than or equal to `2.4` and up to and including `2.6`. If you want to specify that a package works with any version of Python 3, this would look like:

```
depends_on('python@3:')
```

Here we leave out the upper bound. If you want to say that a package requires Python 2, you can similarly leave out the lower bound:

```
depends_on('python@:2.9')
```

Notice that we didn't use `@:3`. Version ranges are *inclusive*, so `@:3` means "up to and including 3".
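A toy model of the inclusive range matching described above may help (illustrative only; real Spack version semantics are richer, and a bare version like `2.6` with no colon means exactly that version, which this sketch does not handle):

```python
def parse(v):
    """Turn '2.4' into a comparable tuple (2, 4)."""
    return tuple(int(x) for x in v.split('.'))

def in_range(version, rng):
    """Check an inclusive Spack-style range such as '2.4:2.6',
    ':2.9', or '3:'.  Both endpoints, when present, are included."""
    lo, _, hi = rng.partition(':')
    v = parse(version)
    if lo and v < parse(lo):
        return False
    if hi and v > parse(hi):
        return False
    return True

print(in_range('2.5', '2.4:2.6'))  # True
print(in_range('3.5', '3:'))       # True
print(in_range('3.1', ':2.9'))     # False
```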
What if a package can only be built with Python 2.6? You might be inclined to use:

```
depends_on('python@2.6')
```

However, this would be wrong. Spack assumes that all version constraints are absolute, so it would try to install Python at exactly `2.6`. The correct way to specify this would be:

```
depends_on('python@2.6.0:2.6.999')
```

A spec can contain multiple version ranges separated by commas. For example, if you need Boost 1.59.0 or newer, but there are known issues with 1.64.0, 1.65.0, and 1.66.0, you can say:

```
depends_on('boost@1.59.0:1.63,1.65.1,1.67.0:')
```

#### Dependency types[¶](#dependency-types)

Not all dependencies are created equal, and Spack allows you to specify exactly what kind of a dependency you need. For example:

```
depends_on('cmake', type='build')
depends_on('py-numpy', type=('build', 'run'))
depends_on('libelf', type=('build', 'link'))
```

The following dependency types are available:

* **"build"**: the package is made available during the project's build. It will be added to `PATH`, the compiler include paths, and `PYTHONPATH`. Other projects which depend on this one will not have these modified (building project X doesn't need project Y's build dependencies).
* **"link"**: the package is linked to by the project. It will be added to the current package's `rpath`.
* **"run"**: the package is used by the project at runtime. It will be added to `PATH` and `PYTHONPATH`.

One of the advantages of the `build` dependency type is that although the dependency needs to be installed in order for the package to be built, it can be uninstalled without concern afterwards. `link` and `run` disallow this because uninstalling the dependency would break the package.

If the dependency type is not specified, Spack uses a default of `('build', 'link')`. This is the common case for compiled languages. Non-compiled packages like Python modules commonly use `('build', 'run')`.
This means that the compiler wrappers don’t need to inject the dependency’s `prefix/lib` directory, but the package needs to be in `PATH` and `PYTHONPATH` during the build process and later when a user wants to run the package. #### Dependency patching[¶](#dependency-dependency-patching) Some packages maintain special patches on their dependencies, either to add new features or to fix bugs. This typically makes a package harder to maintain, and we encourage developers to upstream (contribute back) their changes rather than maintaining patches. However, in some cases it’s not possible to upstream. Maybe the dependency’s developers don’t accept changes, or maybe they just haven’t had time to integrate them. For times like these, Spack’s `depends_on` directive can optionally take a patch or list of patches: ``` class SpecialTool(Package): ... depends_on('binutils', patches='special-binutils-feature.patch') ... ``` Here, the `special-tool` package requires a special feature in `binutils`, so it provides an extra `patches=<filename>` keyword argument. This is similar to the [patch directive](#patching), with one small difference. Here, `special-tool` is responsible for the patch, so it should live in `special-tool`’s directory in the package repository, not the `binutils` directory. If you need something more sophisticated than this, you can simply nest a `patch()` directive inside of `depends_on`: ``` class SpecialTool(Package): ... depends_on( 'binutils', patches=patch('special-binutils-feature.patch', level=3, when='@:1.3'), # condition on binutils when='@2.0:') # condition on special-tool ... ``` Note that there are two optional `when` conditions here – one on the `patch` directive and the other on `depends_on`. The condition in the `patch` directive applies to `binutils` (the package being patched), while the condition in `depends_on` applies to `special-tool`. See [patch directive](#patching) for details on all the arguments the `patch` directive can take. 
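The interplay of the two `when` conditions above can be sketched with toy version comparisons (illustrative only; real spec matching is richer than tuple comparison):

```python
def patch_applies(dependent_version, dependency_version):
    """Sketch of the nested conditions in the special-tool example:
    the depends_on condition (when='@2.0:') is checked against
    special-tool, while the patch condition (when='@:1.3') is checked
    against binutils.  The patch applies only when both hold."""
    applies_to_dependent = dependent_version >= (2, 0)    # when='@2.0:'
    applies_to_dependency = dependency_version <= (1, 3)  # when='@:1.3'
    return applies_to_dependent and applies_to_dependency

print(patch_applies((2, 1), (1, 2)))  # True
print(patch_applies((1, 9), (1, 2)))  # False
print(patch_applies((2, 1), (1, 4)))  # False
```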
Finally, if you need *multiple* patches on a dependency, you can provide a list for `patches`, e.g.:

```
class SpecialTool(Package):
    ...
    depends_on(
        'binutils',
        patches=[
            'binutils-bugfix1.patch',
            'binutils-bugfix2.patch',
            patch('https://example.com/special-binutils-feature.patch',
                  sha256='252c0af58be3d90e5dc5e0d16658434c9efa5d20a5df6c10bf72c2d77f780866',
                  when='@:1.3')],
        when='@2.0:')
    ...
```

As with `patch` directives, patches are applied in the order they appear in the package file (or in this case, in the list).

Note

You may wonder whether dependency patching will interfere with other packages that depend on `binutils`. It won't. As described in [patching](#patching), patching a package adds the `sha256` of the patch to the package's spec, which means it will have a *different* unique hash than other versions without the patch. The patched version coexists with unpatched versions, and Spack's support for [handling_rpaths](#handling-rpaths) guarantees that each installation finds the right version. If two packages depend on `binutils` patched *the same* way, they can both use a single installation of `binutils`.

#### `setup_dependent_environment()`[¶](#setup-dependent-environment)

Spack provides a mechanism for dependencies to provide variables that can be used in their dependents' build. Any package can declare a `setup_dependent_environment()` function, and this function will be called before the `install()` method of any dependent packages. This allows dependencies to set up environment variables and other properties to be used by dependents. The function declaration should look like this:

```
def setup_dependent_environment(self, spack_env, run_env, dependent_spec):
    spack_env.set('QTDIR', self.prefix)
```

Here, the Qt package sets the `QTDIR` environment variable so that packages that depend on a particular Qt installation will find it.
The arguments to this function are:

* **spack_env**: List of environment modifications to be applied when the dependent package is built within Spack.
* **run_env**: List of environment modifications to be applied when the dependent package is run outside of Spack. These are added to the resulting module file.
* **dependent_spec**: The spec of the dependent package about to be built. This allows the extendee (self) to query the dependent’s state. Note that *this* package’s spec is available as `self.spec`.

A good example of using these is in the Python package:

```
def setup_dependent_environment(self, spack_env, run_env, dependent_spec):
    """Set PYTHONPATH to include the site-packages
    directory for the extension and any other python
    extensions it depends on."""

    # If we set PYTHONHOME, we must also ensure that the corresponding
    # python is found in the build environment. This to prevent cases
    # where a system provided python is run against the standard libraries
    # of a Spack built python. See issue #7128
    spack_env.set('PYTHONHOME', self.home)

    path = os.path.dirname(self.command.path)
    if not is_system_path(path):
        spack_env.prepend_path('PATH', path)

    python_paths = []
    for d in dependent_spec.traverse(
            deptype=('build', 'run', 'test')):
        if d.package.extends(self.spec):
            python_paths.append(join_path(d.prefix,
                                          self.site_packages_dir))

    pythonpath = ':'.join(python_paths)
    spack_env.set('PYTHONPATH', pythonpath)

    # For run time environment set only the path for
    # dependent_spec and prepend it to PYTHONPATH
    if dependent_spec.package.extends(self.spec):
        run_env.prepend_path('PYTHONPATH', join_path(
            dependent_spec.prefix, self.site_packages_dir))
```

The first thing that happens here is that the `python` command is inserted into module scope of the dependent.
This allows most python packages to have a very simple install method, like this:

```
def install(self, spec, prefix):
    python('setup.py', 'install', '--prefix={0}'.format(prefix))
```

Python’s `setup_dependent_environment` method also sets up some other variables, creates a directory, and sets up the `PYTHONPATH` so that dependent packages can find their dependencies at build time.

### Conflicts[¶](#conflicts)

Sometimes packages have known bugs, or limitations, that would prevent them from building, e.g., against other dependencies or with certain compilers. Spack makes it possible to express such constraints with the `conflicts` directive. By adding the following to a package:

```
conflicts('%intel', when='@1.2')
```

we express the fact that the current package *cannot be built* with the Intel compiler when we are trying to install version “1.2”. The `when` argument can be omitted, in which case the conflict will always be active. Conflicts are always evaluated after the concretization step has been performed, and if any match is found a detailed error message is shown to the user.

### Extensions[¶](#extensions)

Spack’s support for package extensions is documented extensively in [spack module tcl loads](index.html#extensions). This section documents how to make your own extendable packages and extensions.

To support extensions, a package needs to set its `extendable` property to `True`, e.g.:

```
class Python(Package):
    ...
    extendable = True
    ...
```

To make a package into an extension, simply add an `extends` call in the package definition, and pass it the name of an extendable package:

```
class PyNumpy(Package):
    ...
    extends('python')
    ...
```

Now, the `py-numpy` package can be used as an argument to `spack activate`. When it is activated, all the files in its prefix will be symbolically linked into the prefix of the python package.

Some packages produce a Python extension, but are only compatible with Python 3, or with Python 2.
In those cases, a `depends_on()` declaration should be made in addition to the `extends()` declaration:

```
class Icebin(Package):
    extends('python', when='+python')
    depends_on('python@3:', when='+python')
```

Many packages produce Python extensions for *some* variants, but not others: they should extend `python` only if the appropriate variant(s) are selected. This may be accomplished with conditional `extends()` declarations:

```
class FooLib(Package):
    variant('python', default=True,
            description='Build the Python extension Module')

    extends('python', when='+python')
    ...
```

Sometimes, certain files in one package will conflict with those in another, which means they cannot both be activated (symlinked) at the same time. In this case, you can tell Spack to ignore those files when it does the activation:

```
class PySncosmo(Package):
    ...
    # py-sncosmo binaries are duplicates of those from py-astropy
    extends('python', ignore=r'bin/.*')
    depends_on('py-astropy')
    ...
```

The code above will prevent everything in the `$prefix/bin/` directory from being linked in at activation time.

Note

You can call *either* `depends_on` or `extends` on any one package, but not both. For example, you cannot both `depends_on('python')` and `extends('python')` in the same package. `extends` implies `depends_on`.

### Views[¶](#views)

As covered in [Filesystem Views](index.html#filesystem-views), the `spack view` command can be used to symlink a number of packages into a merged prefix. The methods of `PackageViewMixin` can be overridden to customize how packages are added to views. Generally this can be used to create copies of specific files rather than symlinking them when symlinking does not work.
For example, `Python` overrides `add_files_to_view` in order to create a copy of the `python` binary since the real path of the Python executable is used to detect extensions; as a consequence python extension packages (those inheriting from `PythonPackage`) likewise override `add_files_to_view` in order to rewrite shebang lines which point to the Python interpreter. #### Activation & deactivation[¶](#activation-deactivation) Adding an extension to a view is referred to as an activation. If the view is maintained in the Spack installation prefix of the extendee this is called a global activation. Activations may involve updating some centralized state that is maintained by the extendee package, so there can be additional work for adding extensions compared with non-extension packages. Spack’s `Package` class has default `activate` and `deactivate` implementations that handle symbolically linking extensions’ prefixes into a specified view. Extendable packages can override these methods to add custom activate/deactivate logic of their own. For example, the `activate` and `deactivate` methods in the Python class handle symbolic linking of extensions, but they also handle details surrounding Python’s `.pth` files, and other aspects of Python packaging. Spack’s extensions mechanism is designed to be extensible, so that other packages (like Ruby, R, Perl, etc.) can provide their own custom extension management logic, as they may not handle modules the same way that Python does. 
Let’s look at Python’s activate function:

```
def activate(self, ext_pkg, view, **args):
    ignore = self.python_ignore(ext_pkg, args)
    args.update(ignore=ignore)
    super(Python, self).activate(ext_pkg, view, **args)

    extensions_layout = view.extensions_layout
    exts = extensions_layout.extension_map(self.spec)
    exts[ext_pkg.name] = ext_pkg.spec

    self.write_easy_install_pth(exts, prefix=view.root)
```

This function is called on the *extendee* (Python). It first calls `activate` in the superclass, which handles symlinking the extension package’s prefix into the specified view. It then does some special handling of the `easy-install.pth` file, part of Python’s setuptools.

Deactivate behaves similarly to activate, but it unlinks files:

```
def deactivate(self, ext_pkg, view, **args):
    args.update(ignore=self.python_ignore(ext_pkg, args))
    super(Python, self).deactivate(ext_pkg, view, **args)

    extensions_layout = view.extensions_layout
    exts = extensions_layout.extension_map(self.spec)
    # Make deactivate idempotent
    if ext_pkg.name in exts:
        del exts[ext_pkg.name]

    self.write_easy_install_pth(exts, prefix=view.root)
```

Both of these methods call some custom functions in the Python package. See the source for Spack’s Python package for details.

#### Activation arguments[¶](#activation-arguments)

You may have noticed that the `activate` function defined above takes keyword arguments. These are the keyword arguments from `extends()`, and they are passed to both activate and deactivate. This capability allows an extension to customize its own activation by passing arguments to the extendee. Extendees can likewise implement custom `activate()` and `deactivate()` functions to suit their needs.
The only keyword argument supported by default is the `ignore` argument, which can take a regex, list of regexes, or a predicate to determine which files *not* to symlink during activation.

### Virtual dependencies[¶](#virtual-dependencies)

In some cases, more than one package can satisfy another package’s dependency. One way this can happen is if a package depends on a particular *interface*, but there are multiple *implementations* of the interface, and the package could be built with any of them. A *very* common interface in HPC is the [Message Passing Interface (MPI)](http://www.mcs.anl.gov/research/projects/mpi/), which is used in many large-scale parallel applications.

MPI has several different implementations (e.g., [MPICH](http://www.mpich.org), [OpenMPI](http://www.open-mpi.org), and [MVAPICH](http://mvapich.cse.ohio-state.edu)) and scientific applications can be built with any one of them. Complicating matters, MPI does not have a standardized ABI, so a package built with one implementation cannot simply be relinked with another implementation. Many package managers handle interfaces like this by requiring many similar package files, e.g., `foo`, `foo-mvapich`, `foo-mpich`, but Spack avoids this explosion of package files by providing support for *virtual dependencies*.

#### `provides`[¶](#provides)

In Spack, `mpi` is handled as a *virtual package*. A package like `mpileaks` can depend on it just like any other package, by supplying a `depends_on` call in the package definition. For example:

```
class Mpileaks(Package):
    homepage = "https://github.com/hpc/mpileaks"
    url = "https://github.com/hpc/mpileaks/releases/download/v1.0/mpileaks-1.0.tar.gz"

    version('1.0', '8838c574b39202a57d7c2d68692718aa')

    depends_on("mpi")
    depends_on("adept-utils")
    depends_on("callpath")
```

Here, `callpath` and `adept-utils` are concrete packages, but there is no actual package file for `mpi`, so we say it is a *virtual* package.
The syntax of `depends_on` is the same for both. If we look inside the package file of an MPI implementation, say MPICH, we’ll see something like this:

```
class Mpich(Package):
    provides('mpi')
    ...
```

The `provides("mpi")` call tells Spack that the `mpich` package can be used to satisfy the dependency of any package that `depends_on('mpi')`.

#### Versioned Interfaces[¶](#versioned-interfaces)

Just as you can pass a spec to `depends_on`, so can you pass a spec to `provides` to add constraints. This allows Spack to support the notion of *versioned interfaces*. The MPI standard has gone through many revisions, each with new functions added, and each revision of the standard has a version number. Some packages may require a recent implementation that supports MPI-3 functions, but some MPI versions may only provide up to MPI-2. Others may need MPI 2.1 or higher. You can indicate this by adding a version constraint to the spec passed to `provides`:

```
provides("mpi@:2")
```

Suppose that the above `provides` call is in the `mpich2` package. This says that `mpich2` provides MPI support *up to* version 2, but if a package `depends_on("mpi@3")`, then Spack will *not* build that package with `mpich2`.

#### `provides when`[¶](#provides-when)

The same package may provide different versions of an interface depending on *its* version. Above, we simplified the `provides` call in `mpich` to make the explanation easier. In reality, this is how `mpich` calls `provides`:

```
provides('mpi@:3', when='@3:')
provides('mpi@:1', when='@1:')
```

The `when` argument to `provides` allows you to specify optional constraints on the *providing* package, or the *provider*. The provider only provides the declared virtual spec when *it* matches the constraints in the when clause. Here, when `mpich` is at version 3 or higher, it provides MPI up to version 3. When `mpich` is at version 1 or higher, it provides the MPI virtual package at version 1.
The `when` qualifier ensures that Spack selects a suitably high version of `mpich` to satisfy some other package that `depends_on` a particular version of MPI. It will also prevent a user from building with too low a version of `mpich`. For example, suppose the package `foo` declares this:

```
class Foo(Package):
    ...
    depends_on('mpi@2')
```

Suppose a user invokes `spack install` like this:

```
$ spack install foo ^mpich@1.0
```

Spack will fail with a constraint violation, because the version of MPICH requested is too low for the `mpi` requirement in `foo`.

### Abstract & concrete specs[¶](#abstract-concrete-specs)

Now that we’ve seen how spec constraints can be specified [on the command line](index.html#sec-specs) and within package definitions, we can talk about how Spack puts all of this information together. When you run this:

```
$ spack install mpileaks ^callpath@1.0+debug ^libelf@0.8.11
```

Spack parses the command line and builds a spec from the description. The spec says that `mpileaks` should be built with the `callpath` library at 1.0 and with the debug option enabled, and with `libelf` version 0.8.11. Spack will also look at the `depends_on` calls in all of these packages, and it will build a spec from that. The specs from the command line and the specs built from package descriptions are then combined, and the constraints are checked against each other to make sure they’re satisfiable.

What we have after this is done is called an *abstract spec*. An abstract spec is partially specified. In other words, it could describe more than one build of a package. Spack does this to make things easier on the user: they should only have to specify as much of the package spec as they care about.
Here’s an example partial spec DAG, based on the constraints above:

```
mpileaks
    ^callpath@1.0+debug
        ^dyninst
            ^libdwarf
                ^libelf@0.8.11
    ^mpi
```

```
digraph {
    mpileaks -> mpi
    mpileaks -> "callpath@1.0+debug" -> mpi
    "callpath@1.0+debug" -> dyninst
    dyninst -> libdwarf -> "libelf@0.8.11"
    dyninst -> "libelf@0.8.11"
}
```

This diagram shows a spec DAG output as a tree, where successive levels of indentation represent a depends-on relationship. In the above DAG, we can see some packages annotated with their constraints, and some packages with no annotations at all. When there are no annotations, it means the user doesn’t care what configuration of that package is built, just so long as it works.

#### Concretization[¶](#concretization)

An abstract spec is useful for the user, but you can’t install an abstract spec. Spack has to take the abstract spec and “fill in” the remaining unspecified parts in order to install. This process is called **concretization**. Concretization happens in between the time the user runs `spack install` and the time the `install()` method is called.
The concretized version of the spec above might look like this:

```
mpileaks@2.3%gcc@4.7.3 arch=linux-debian7-x86_64
    ^callpath@1.0%gcc@4.7.3+debug arch=linux-debian7-x86_64
        ^dyninst@8.1.2%gcc@4.7.3 arch=linux-debian7-x86_64
            ^libdwarf@20130729%gcc@4.7.3 arch=linux-debian7-x86_64
                ^libelf@0.8.11%gcc@4.7.3 arch=linux-debian7-x86_64
    ^mpich@3.0.4%gcc@4.7.3 arch=linux-debian7-x86_64
```

```
digraph {
    "mpileaks@2.3\n%gcc@4.7.3\n arch=linux-debian7-x86_64" -> "mpich@3.0.4\n%gcc@4.7.3\n arch=linux-debian7-x86_64"
    "mpileaks@2.3\n%gcc@4.7.3\n arch=linux-debian7-x86_64" -> "callpath@1.0\n%gcc@4.7.3+debug\n arch=linux-debian7-x86_64" -> "mpich@3.0.4\n%gcc@4.7.3\n arch=linux-debian7-x86_64"
    "callpath@1.0\n%gcc@4.7.3+debug\n arch=linux-debian7-x86_64" -> "dyninst@8.1.2\n%gcc@4.7.3\n arch=linux-debian7-x86_64"
    "dyninst@8.1.2\n%gcc@4.7.3\n arch=linux-debian7-x86_64" -> "libdwarf@20130729\n%gcc@4.7.3\n arch=linux-debian7-x86_64" -> "libelf@0.8.11\n%gcc@4.7.3\n arch=linux-debian7-x86_64"
    "dyninst@8.1.2\n%gcc@4.7.3\n arch=linux-debian7-x86_64" -> "libelf@0.8.11\n%gcc@4.7.3\n arch=linux-debian7-x86_64"
}
```

Here, all versions, compilers, and platforms are filled in, and there is a single version (no version ranges) for each package. All decisions about configuration have been made, and only after this point will Spack call the `install()` method for your package.

Concretization in Spack is based on certain selection policies that tell Spack how to select, e.g., a version, when one is not specified explicitly. Concretization policies are discussed in more detail in [Configuration Files](index.html#configuration). Sites using Spack can customize them to match the preferences of their own users.

#### `spack spec`[¶](#spack-spec)

For an arbitrary spec, you can see the result of concretization by running `spack spec`.
For example:

```
$ spack spec dyninst@8.0.1
dyninst@8.0.1
    ^libdwarf
        ^libelf

dyninst@8.0.1%gcc@4.7.3 arch=linux-debian7-x86_64
    ^libdwarf@20130729%gcc@4.7.3 arch=linux-debian7-x86_64
        ^libelf@0.8.13%gcc@4.7.3 arch=linux-debian7-x86_64
```

This is useful when you want to know exactly what Spack will do when you ask for a particular spec.

#### `Concretization Policies`[¶](#concretization-policies)

A user may have certain preferences for how packages should be concretized on their system. For example, one user may prefer packages built with OpenMPI and the Intel compiler. Another user may prefer packages be built with MVAPICH and GCC. See the [Concretization Preferences](index.html#concretization-preferences) section for more details.

### Conflicting Specs[¶](#conflicting-specs)

Suppose a user needs to install package C, which depends on packages A and B. Package A builds a library with a Python2 extension, and package B builds a library with a Python3 extension. Packages A and B cannot be loaded together in the same Python runtime:

```
class A(Package):
    variant('python', default=True, description='enable python bindings')
    depends_on('python@2.7', when='+python')

    def install(self, spec, prefix):
        # do whatever is necessary to enable/disable python
        # bindings according to variant
        pass

class B(Package):
    variant('python', default=True, description='enable python bindings')
    depends_on('python@3.2:', when='+python')

    def install(self, spec, prefix):
        # do whatever is necessary to enable/disable python
        # bindings according to variant
        pass
```

Package C needs to use the libraries from packages A and B, but does not need either of the Python extensions. In this case, package C should simply depend on the `~python` variant of A and B:

```
class C(Package):
    depends_on('A~python')
    depends_on('B~python')
```

This may require that A or B be built twice, if the user wishes to use the Python extensions provided by them: once for `+python` and once for `~python`.
Other than using a little extra disk space, that solution has no serious problems.

### Implementing the installation procedure[¶](#implementing-the-installation-procedure)

The last element of a package is its **installation procedure**. This is where the real work of installation happens, and it’s the main part of the package you’ll need to customize for each piece of software. Defining an installation procedure means overriding a set of methods or attributes that will be called at some point during the installation of the package. The package base class, usually specialized for a given build system, determines the actual set of entities available for overriding. The classes that are currently provided by Spack are:

| **Base Class** | **Purpose** |
| --- | --- |
| `Package` | General base class not specialized for any build system |
| `MakefilePackage` | Specialized class for packages built invoking hand-written Makefiles |
| `AutotoolsPackage` | Specialized class for packages built using GNU Autotools |
| `CMakePackage` | Specialized class for packages built using CMake |
| `CudaPackage` | A helper class for packages that use CUDA. It is intended to be used in combination with others |
| `QMakePackage` | Specialized class for packages built using QMake |
| `SConsPackage` | Specialized class for packages built using SCons |
| `WafPackage` | Specialized class for packages built using Waf |
| `RPackage` | Specialized class for `R` extensions |
| `OctavePackage` | Specialized class for `Octave` packages |
| `PythonPackage` | Specialized class for `Python` extensions |
| `PerlPackage` | Specialized class for `Perl` extensions |
| `IntelPackage` | Specialized class for licensed Intel software |

Note

Choice of the appropriate base class for a package

In most cases packagers don’t have to worry about the selection of the right base class for a package, as `spack create` will make the appropriate choice on their behalf. In those rare cases where manual intervention is needed, we need to stress that a package base class depends on the *build system* being used, not the language of the package. For example, a Python extension installed with CMake would `extends('python')` and subclass from `CMakePackage`.

#### Installation pipeline[¶](#installation-pipeline)

When a user runs `spack install`, Spack:

1. Fetches an archive for the correct version of the software.
2. Expands the archive.
3. Sets the current working directory to the root directory of the expanded archive.

Then, depending on the base class of the package under consideration, it will execute a certain number of **phases** that reflect the way a package of that type is usually built.
The name and order in which the phases will be executed can be obtained either reading the API docs at [`build_systems`](index.html#module-spack.build_systems), or using the `spack info` command:

```
$ spack info m4
AutotoolsPackage:    m4
Homepage:            https://www.gnu.org/software/m4/m4.html

Safe versions:
    1.4.17    ftp://ftp.gnu.org/gnu/m4/m4-1.4.17.tar.gz

Variants:
    Name       Default   Description

    sigsegv    on        Build the libsigsegv dependency

Installation Phases:
    autoreconf    configure    build    install

Build Dependencies:
    libsigsegv

...
```

Typically, phases have default implementations that fit most of the common cases:

```
def configure(self, spec, prefix):
    """Runs configure with the arguments specified in
    :py:meth:`~.AutotoolsPackage.configure_args`
    and an appropriately set prefix.
    """
    options = getattr(self, 'configure_flag_args', [])
    options += ['--prefix={0}'.format(prefix)]
    options += self.configure_args()

    with working_dir(self.build_directory, create=True):
        inspect.getmodule(self).configure(*options)
```

It is thus just sufficient for a packager to override a few build system specific helper methods or attributes to provide, for instance, configure arguments:

```
def configure_args(self):
    spec = self.spec
    args = ['--enable-c++']

    if spec.satisfies('%clang') and not spec.satisfies('platform=darwin'):
        args.append('CFLAGS=-rtlib=compiler-rt')

    if spec.satisfies('%intel'):
        args.append('CFLAGS=-no-gcc')

    if '+sigsegv' in spec:
        args.append('--with-libsigsegv-prefix={0}'.format(
            spec['libsigsegv'].prefix))
    else:
        args.append('--without-libsigsegv-prefix')

    # http://lists.gnu.org/archive/html/bug-m4/2016-09/msg00002.html
    arch = spec.architecture
    if (arch.platform == 'darwin' and
            arch.platform_os == 'sierra' and
            '%gcc' in spec):
        args.append('ac_cv_type_struct_sched_param=yes')

    return args
```

Note

Each specific build system has a list of
attributes that can be overridden to fine-tune the installation of a package without overriding an entire phase. To have more information on them the place to go is the API docs of the [`build_systems`](index.html#module-spack.build_systems) module.

#### Overriding an entire phase[¶](#overriding-an-entire-phase)

In extreme cases it may be necessary to override an entire phase. Regardless of the build system, the signature is the same. For example, the signature for the install phase is:

```
class Foo(Package):
    def install(self, spec, prefix):
        ...
```

`self`

For those not used to Python instance methods, this is the package itself. In this case it’s an instance of `Foo`, which extends `Package`. For API docs on Package objects, see [`Package`](index.html#spack.package.Package).

`spec`

This is the concrete spec object created by Spack from an abstract spec supplied by the user. It describes what should be installed. It will be of type [`Spec`](index.html#spack.spec.Spec).

`prefix`

This is the path that your install method should copy build targets into. It acts like a string, but it’s actually its own special type, [`Prefix`](index.html#spack.util.prefix.Prefix).

The arguments `spec` and `prefix` are passed only for convenience, as they always correspond to `self.spec` and `self.spec.prefix` respectively.

As mentioned in [The build environment](#install-environment), you will usually not need to refer to dependencies explicitly in your package file, as the compiler wrappers take care of most of the heavy lifting here. There will be times, though, when you need to refer to the install locations of dependencies, or when you need to do something different depending on the version, compiler, dependencies, etc. that your package is built with. These parameters give you access to this type of information.
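Since `prefix` behaves like a string that also supports attribute-style path joining, the idea can be illustrated with a small standalone sketch (a toy stand-in written for this example, not Spack’s actual `Prefix` class, and the install path below is invented):

```python
import os

class Prefix(str):
    """Toy stand-in for Spack's Prefix type: a plain string whose
    attribute accesses join further path components onto the prefix."""

    def __getattr__(self, name):
        # prefix.bin -> '<prefix>/bin'; chained access keeps joining
        return Prefix(os.path.join(self, name))

prefix = Prefix('/opt/spack/example-1.0')
print(prefix)                # usable anywhere a string is expected
print(prefix.bin)            # /opt/spack/example-1.0/bin
print(prefix.lib.pkgconfig)  # /opt/spack/example-1.0/lib/pkgconfig
```

This is why install methods can pass `prefix` directly to `configure` while still writing `prefix.bin` or `prefix.lib` for subdirectories.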
### The build environment[¶](#the-build-environment)

In general, you should not have to do much differently in your install method than you would when installing a package on the command line. In fact, you may need to do *less* than you would on the command line. Spack tries to set environment variables and modify compiler calls so that it *appears* to the build system that you’re building with a standard system install of everything. Obviously that’s not going to cover *all* build systems, but it should make it easy to port packages to Spack if they use a standard build system. Usually with autotools or cmake, building and installing is easy. With builds that use custom Makefiles, you may need to add logic to modify the makefiles. The remainder of the section covers the way Spack’s build environment works.

#### Forking `install()`[¶](#forking-install)

To give packagers free rein over their install environment, Spack forks a new process each time it invokes a package’s `install()` method. This allows packages to have a sandboxed build environment, without impacting the environments of other jobs that the main Spack process runs. Packages are free to change the environment or to modify Spack internals, because each `install()` call has its own dedicated process.

#### Environment variables[¶](#environment-variables)

Spack sets a number of standard environment variables that serve two purposes:

1. Make build systems use Spack’s compiler wrappers for their builds.
2. Allow build systems to find dependencies more easily.

The compiler environment variables that Spack sets are:

| Variable | Purpose |
| --- | --- |
| `CC` | C compiler |
| `CXX` | C++ compiler |
| `F77` | Fortran 77 compiler |
| `FC` | Fortran 90 and above compiler |

Spack sets these variables so that they point to *compiler wrappers*. These are covered in [their own section](#compiler-wrappers) below. All of these are standard variables respected by most build systems.
If your project uses `Autotools` or `CMake`, then it should pick them up automatically when you run `configure` or `cmake` in the `install()` function. Many traditional builds using GNU Make and BSD make also respect these variables, so they may work with these systems.

If your build system does *not* automatically pick these variables up from the environment, then you can simply pass them on the command line or use a patch as part of your build process to get the correct compilers into the project’s build system. There are also some file editing commands you can use – these are described later in the [section on file manipulation](#file-manipulation).

In addition to the compiler variables, these variables are set before entering `install()` so that packages can locate dependencies easily:

| Variable | Purpose |
| --- | --- |
| `PATH` | Set to point to `/bin` directories of dependencies |
| `CMAKE_PREFIX_PATH` | Path to dependency prefixes for CMake |
| `PKG_CONFIG_PATH` | Path to any pkgconfig directories for dependencies |
| `PYTHONPATH` | Path to site-packages dir of any python dependencies |

`PATH` is set up to point to the `/bin` directories of dependencies so that you can use tools installed by dependency packages at build time. For example, `$MPICH_ROOT/bin/mpicc` is frequently used by dependencies of `mpich`.

`CMAKE_PREFIX_PATH` contains a colon-separated list of prefixes where `cmake` will search for dependency libraries and headers. This causes all standard CMake find commands to look in the paths of your dependencies, so you *do not* have to manually specify arguments like `-DDEPENDENCY_DIR=/path/to/dependency` to `cmake`. More on this is [in the CMake documentation](http://www.cmake.org/cmake/help/v3.0/variable/CMAKE_PREFIX_PATH.html).

`PKG_CONFIG_PATH` is for packages that attempt to discover dependencies using the GNU `pkg-config` tool. It is similar to `CMAKE_PREFIX_PATH` in that it allows a build to automatically discover its dependencies.
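As a rough, hypothetical illustration (the install prefixes below are invented), these separator-joined variables are composed from dependency prefixes roughly like so:

```python
import os

# hypothetical install prefixes of two dependencies
prefixes = ['/opt/spack/zlib-1.2.11-abc123',
            '/opt/spack/openssl-1.1.1-def456']

env = {}
# dependency bin/ directories go onto PATH
env['PATH'] = os.pathsep.join(os.path.join(p, 'bin') for p in prefixes)
# CMake searches every prefix on CMAKE_PREFIX_PATH for libs and headers
env['CMAKE_PREFIX_PATH'] = os.pathsep.join(prefixes)
# pkg-config looks for .pc files in each prefix's lib/pkgconfig
env['PKG_CONFIG_PATH'] = os.pathsep.join(
    os.path.join(p, 'lib', 'pkgconfig') for p in prefixes)

print(env['CMAKE_PREFIX_PATH'])  # the prefixes joined with ':' on POSIX
```

The real build environment does more (compiler wrappers, module files), but the composition of these search-path variables is just this kind of joining.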
If you want to see the environment that a package will build with, or if you want to run commands in that environment to test them out, you can use the [spack env](#cmd-spack-env) command, documented below.

#### Failing the build[¶](#failing-the-build)

Sometimes you don’t want a package to successfully install unless some condition is true. You can explicitly cause the build to fail from `install()` by raising an `InstallError`, for example:

```
if spec.architecture.startswith('darwin'):
    raise InstallError('This package does not build on Mac OS X!')
```

#### Shell command functions[¶](#shell-command-functions)

Recall the install method from `libelf`:

```
def install(self, spec, prefix):
    make('install', parallel=False)
```

Normally in Python, you’d have to write something like this in order to execute shell commands:

```
import subprocess
subprocess.check_call(['./configure', '--prefix={0}'.format(prefix)])
```

We’ve tried to make this a bit easier by providing callable wrapper objects for some shell commands. By default, `configure`, `cmake`, and `make` wrappers are provided, so you can call them more naturally in your package files. If you need other commands, you can use `which` to get them:

```
sed = which('sed')
sed('s/foo/bar/', filename)
```

The `which` function will search the `PATH` for the application.

Callable wrappers also allow spack to provide some special features. For example, in Spack, `make` is parallel by default, and Spack figures out the number of cores on your machine and passes an appropriate value for `-j<numjobs>` when it calls `make` (see the package’s `parallel` attribute). In a package file, you can supply a keyword argument, `parallel=False`, to the `make` wrapper to disable parallel make. In the `libelf` package, this allows us to avoid race conditions in the library’s build system.
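A rough approximation of such callable wrappers can be written with the standard library alone; the `Command` and `which` names below are illustrative for this sketch, not Spack’s actual `Executable` API:

```python
import shutil
import subprocess

class Command:
    """Callable wrapper around an executable; calling it runs the
    program with the given arguments and returns its captured stdout."""

    def __init__(self, path):
        self.path = path

    def __call__(self, *args):
        result = subprocess.run([self.path, *args], capture_output=True,
                                text=True, check=True)
        return result.stdout

def which(name):
    """Search PATH for `name` and wrap it, or return None if absent."""
    path = shutil.which(name)
    return Command(path) if path else None

echo = which('echo')
if echo is not None:
    print(echo('hello, spack').strip())  # hello, spack
```

The payoff of the wrapper approach is that keyword arguments (like Spack’s `parallel=False` for `make`) can be intercepted in `__call__` before the process is spawned.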
#### Compiler flags[¶](#compiler-flags)

Compiler flags set by the user through the Spec object can be passed to the build in one of three ways. By default, the build environment injects these flags directly into the compiler commands using Spack’s compiler wrappers. In cases where the build system requires knowledge of the compiler flags, they can alternatively be registered with the build system by passing them through environment variables or as build system arguments. The `flag_handler` method can be used to change this behavior.

Packages can override the `flag_handler` method with one of three built-in flag handlers, named `inject_flags`, `env_flags`, and `build_system_flags`. The `inject_flags` method is the default. The `env_flags` method puts all of the flags into the environment variables that `make` uses as implicit variables (`CFLAGS`, `CXXFLAGS`, etc.). The `build_system_flags` method adds the flags as arguments to the invocation of `configure` or `cmake`.

Warning

Passing compiler flags using build system arguments is only supported for CMake and Autotools packages. Individual packages may also differ in whether they properly respect these arguments.

Individual packages may also define their own `flag_handler` methods. The `flag_handler` method takes the package instance (`self`), the name of the flag, and a list of the values of the flag. It will be called on each of the six compiler flags supported in Spack. It should return a triple of `(injf, envf, bsf)` where `injf` is a list of flags to inject via the Spack compiler wrappers, `envf` is a list of flags to set in the appropriate environment variables, and `bsf` is a list of flags to pass to the build system as arguments.

Warning

Passing a non-empty list of flags to `bsf` for a build system that does not support build system arguments will result in an error.
Here are the definitions of the three built-in flag handlers: ``` def inject_flags(pkg, name, flags): return (flags, None, None) def env_flags(pkg, name, flags): return (None, flags, None) def build_system_flags(pkg, name, flags): return (None, None, flags) ``` Note Returning `[]` and `None` are equivalent in a `flag_handler` method. Packages can override the default behavior either by specifying one of the built-in flag handlers, ``` flag_handler = env_flags ``` or by implementing the `flag_handler` method. Suppose for a package `Foo` we need to pass `cflags`, `cxxflags`, and `cppflags` through the environment, the rest of the flags through compiler wrapper injection, and we need to add `-lbar` to `ldlibs`. The following flag handler method accomplishes that. ``` def flag_handler(self, name, flags): if name in ['cflags', 'cxxflags', 'cppflags']: return (None, flags, None) elif name == 'ldlibs': flags.append('-lbar') return (flags, None, None) ``` Because these methods can pass values through environment variables, it is important not to override these variables unnecessarily (e.g., setting `env['CFLAGS']`) in other package methods when using non-default flag handlers. In the `setup_environment` and `setup_dependent_environment` methods, use the `append_flags` method of the `EnvironmentModifications` class to append values to a list of flags whenever the flag handler is `env_flags`. If the package passes flags through the environment or the build system manually (in the install method, for example), we recommend using the default flag handler, or removing manual references and implementing a custom flag handler method that adds the desired flags to export as environment variables or pass to the build system. Manual flag passing is likely to interfere with the `env_flags` and `build_system_flags` methods.
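The routing logic above can be exercised as a self-contained sketch, with plain functions standing in for the package methods (nothing here is Spack’s actual code):

```python
# Stand-ins for the three built-in handlers: each routes the flag list
# to exactly one of (inject, environment, build-system arguments).
def inject_flags(pkg, name, flags):
    return (flags, None, None)

def env_flags(pkg, name, flags):
    return (None, flags, None)

def build_system_flags(pkg, name, flags):
    return (None, None, flags)

# The custom handler for `Foo` from the text: C-family flags go through
# the environment, ldlibs gains -lbar, everything else is injected.
def flag_handler(pkg, name, flags):
    if name in ('cflags', 'cxxflags', 'cppflags'):
        return (None, flags, None)
    elif name == 'ldlibs':
        flags.append('-lbar')
    return (flags, None, None)

print(flag_handler(None, 'cflags', ['-O2']))  # (None, ['-O2'], None)
print(flag_handler(None, 'ldlibs', []))       # (['-lbar'], None, None)
print(flag_handler(None, 'fflags', ['-g']))   # (['-g'], None, None)
```

Note how `ldlibs` falls through to the final `return`, so the appended `-lbar` is injected via the compiler wrappers along with any user-supplied `ldlibs`.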
In rare circumstances such as compiling and running small unit tests, a package developer may need to know what are the appropriate compiler flags to enable features like `OpenMP`, `c++11`, `c++14` and the like. To that end the compiler classes in Spack implement the following **properties**: `openmp_flag`, `cxx98_flag`, `cxx11_flag`, `cxx14_flag`, and `cxx17_flag`, which can be accessed in a package by `self.compiler.cxx11_flag` and so on. Note that the implementation is such that if a given compiler version does not support this feature, an error will be produced. Therefore package developers can also use these properties to assert that a compiler supports the requested feature. This is handy when a package supports additional variants like ``` variant('openmp', default=True, description="Enable OpenMP support.") ``` #### Blas, Lapack and ScaLapack libraries[¶](#blas-lapack-and-scalapack-libraries) Multiple packages provide implementations of `Blas`, `Lapack` and `ScaLapack` routines. The names of the resulting static and/or shared libraries differ from package to package. In order to make the `install()` method independent of the choice of `Blas` implementation, each package which provides it implements `@property def blas_libs(self):` to return an object of [LibraryList](http://spack.readthedocs.io/en/latest/llnl.util.html#llnl.util.filesystem.LibraryList) type which simplifies usage of a set of libraries. The same applies to packages which provide `Lapack` and `ScaLapack`. Package developers are requested to use this interface. Common usage cases are: 1. Space-separated list of full paths ``` lapack_blas = spec['lapack'].libs + spec['blas'].libs options.append( '--with-blas-lapack-lib={0}'.format(lapack_blas.joined()) ) ``` 2. Names of libraries and directories which contain them ``` blas = spec['blas'].libs options.extend([ '-DBLAS_LIBRARY_NAMES={0}'.format(';'.join(blas.names)), '-DBLAS_LIBRARY_DIRS={0}'.format(';'.join(blas.directories)) ]) ``` 3.
Search and link flags ``` math_libs = spec['scalapack'].libs + spec['lapack'].libs + spec['blas'].libs options.append( '-DMATH_LIBS:STRING={0}'.format(math_libs.ld_flags) ) ``` For more information, see documentation of [LibraryList](http://spack.readthedocs.io/en/latest/llnl.util.html#llnl.util.filesystem.LibraryList) class. #### Prefix objects[¶](#prefix-objects) Spack passes the `prefix` parameter to the install method so that you can pass it to `configure`, `cmake`, or some other installer, e.g.: ``` configure('--prefix={0}'.format(prefix)) ``` For the most part, prefix objects behave exactly like strings. For packages that do not have their own install target, or for those that implement it poorly (like `libdwarf`), you may need to manually copy things into particular directories under the prefix. For this, you can refer to standard subdirectories without having to construct paths yourself, e.g.: ``` def install(self, spec, prefix): mkdirp(prefix.bin) install('foo-tool', prefix.bin) mkdirp(prefix.include) install('foo.h', prefix.include) mkdirp(prefix.lib) install('libfoo.a', prefix.lib) ``` Attributes of this object are created on the fly when you request them, so any of the following will work: | Prefix Attribute | Location | | --- | --- | | `prefix.bin` | `$prefix/bin` | | `prefix.lib64` | `$prefix/lib64` | | `prefix.share.man` | `$prefix/share/man` | | `prefix.foo.bar.baz` | `$prefix/foo/bar/baz` | Of course, this only works if your file or directory is a valid Python variable name. If your file or directory contains dashes or dots, use `join` instead: ``` prefix.lib.join('libz.a') ``` ### Spec objects[¶](#spec-objects) When `install` is called, most parts of the build process are set up for you. The correct version’s tarball has been downloaded and expanded. Environment variables like `CC` and `CXX` are set to point to the correct compiler and version. An install prefix has already been selected and passed in as `prefix`. 
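The on-the-fly attribute behavior of the prefix objects described above can be sketched in a few lines. This is a toy model for illustration only, not Spack’s actual `Prefix` class:

```python
import os

# Toy model of a Spack-style prefix object: it is a string, but
# unknown attribute accesses create joined sub-paths on the fly.
class Prefix(str):
    def __getattr__(self, attr):
        # only called for attributes str does not already have
        return Prefix(os.path.join(self, attr))

    def join(self, path):
        # for path components that are not valid Python identifiers
        return Prefix(os.path.join(self, path))

prefix = Prefix('/opt/foo')
print(prefix.bin)                 # /opt/foo/bin
print(prefix.share.man)           # /opt/foo/share/man
print(prefix.lib.join('libz.a'))  # /opt/foo/lib/libz.a
```

Because the object subclasses `str`, it can be passed anywhere a path string is expected, which is why `configure('--prefix={0}'.format(prefix))` works unchanged.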
In most cases this is all you need to get `configure`, `cmake`, or another install working correctly. There will be times when you need to know more about the build configuration. For example, some software requires that you pass special parameters to `configure`, like `--with-libelf=/path/to/libelf` or `--with-mpich`. You might also need to supply special compiler flags depending on the compiler. All of this information is available in the spec. #### Testing spec constraints[¶](#testing-spec-constraints) You can test whether your spec is configured a certain way by using the `satisfies` method. For example, if you want to check whether the package’s version is in a particular range, you can use specs to do that, e.g.: ``` configure_args = [ '--prefix={0}'.format(prefix) ] if spec.satisfies('@1.2:1.4'): configure_args.append("CXXFLAGS='-DWITH_FEATURE'") configure(*configure_args) ``` This works for compilers, too: ``` if spec.satisfies('%gcc'): configure_args.append('CXXFLAGS="-g3 -O3"') if spec.satisfies('%intel'): configure_args.append('CXXFLAGS="-xSSE2 -fast"') ``` Or for combinations of spec constraints: ``` if spec.satisfies('@1.2%intel'): tty.error("Version 1.2 breaks when using Intel compiler!") ``` You can also do similar satisfaction tests for dependencies: ``` if spec.satisfies('^dyninst@8.0'): configure_args.append('CXXFLAGS=-DSPECIAL_DYNINST_FEATURE') ``` This could allow you to easily work around a bug in a particular dependency version. You can use `satisfies()` to test for particular dependencies, e.g. 
`foo.satisfies('^openmpi@1.2')` or `foo.satisfies('^mpich')`, or you can use Python’s built-in `in` operator: ``` if 'libelf' in spec: print("this package depends on libelf") ``` This is useful for virtual dependencies, as you can easily see what implementation was selected for this build: ``` if 'openmpi' in spec: configure_args.append('--with-openmpi') elif 'mpich' in spec: configure_args.append('--with-mpich') elif 'mvapich' in spec: configure_args.append('--with-mvapich') ``` It’s also a bit more concise than `satisfies()`. The difference between the two functions is that `satisfies()` tests whether spec constraints overlap at all, while `in` tests whether a spec or any of its dependencies satisfy the provided spec. #### Accessing Dependencies[¶](#accessing-dependencies) You may need to get at some file or binary that’s in the installation prefix of one of your dependencies. You can do that by subscripting the spec: ``` spec['mpi'] ``` The value in the brackets needs to be some package name, and spec needs to depend on that package, or the operation will fail. For example, the above code will fail if the `spec` doesn’t depend on `mpi`. The value returned is itself just another `Spec` object, so you can do all the same things you would do with the package’s own spec: ``` spec['mpi'].prefix.bin spec['mpi'].version ``` #### Multimethods and `@when`[¶](#multimethods-and-when) Spack allows you to make multiple versions of instance functions in packages, based on whether the package’s spec satisfies particular criteria. The `@when` annotation lets packages declare multiple versions of methods like `install()` that depend on the package’s spec. For example: ``` class SomePackage(Package): ... def install(self, prefix): # Do default install @when('arch=chaos_5_x86_64_ib') def install(self, prefix): # This will be executed instead of the default install if # the package's sys_type() is chaos_5_x86_64_ib.
@when('arch=linux-debian7-x86_64') def install(self, prefix): # This will be executed if the package's sys_type() is # linux-debian7-x86_64. ``` In the above code there are three versions of `install()`, two of which are specialized for particular platforms. The version that is called depends on the architecture of the package spec. Note that this works for methods other than install, as well. So, if you only have part of the install that is platform-specific, you could do something more like this: ``` class SomePackage(Package): ... # virtual dependence on MPI. # could resolve to mpich, mpich2, OpenMPI depends_on('mpi') def setup(self): # do nothing in the default case pass @when('^openmpi') def setup(self): # do something special when this is built with OpenMPI for # its MPI implementations. pass def install(self, prefix): # Do common install stuff self.setup() # Do more common install stuff ``` You can write multiple `@when` specs that satisfy the package’s spec, for example: ``` class SomePackage(Package): ... depends_on('mpi') def setup_mpi(self): # the default, called when no @when specs match pass @when('^mpi@3:') def setup_mpi(self): # this will be called when mpi is version 3 or higher pass @when('^mpi@2:') def setup_mpi(self): # this will be called when mpi is version 2 or higher pass @when('^mpi@1:') def setup_mpi(self): # this will be called when mpi is version 1 or higher pass ``` In situations like this, the first matching spec, in declaration order, will be called. As before, if no `@when` spec matches, the default method (the one without the `@when` decorator) will be called. Warning The default version of decorated methods must **always** come first. Otherwise it will override all of the platform-specific versions. There’s not much we can do to get around this because of the way decorators work. ### Compiler wrappers[¶](#compiler-wrappers) As mentioned, `CC`, `CXX`, `F77`, and `FC` are set to point to Spack’s compiler wrappers.
These are simply called `cc`, `c++`, `f77`, and `f90`, and they live in `$SPACK_ROOT/lib/spack/env`. `$SPACK_ROOT/lib/spack/env` is added first in the `PATH` environment variable when `install()` runs so that system compilers are not picked up instead. All of these compiler wrappers point to a single compiler wrapper script that figures out which *real* compiler it should be building with. This comes either from spec [concretization](abstract-and-concrete) or from a user explicitly asking for a particular compiler using, e.g., `%intel` on the command line. In addition to invoking the right compiler, the compiler wrappers add flags to the compile line so that dependencies can be easily found. These flags are added for each dependency, if they exist: Compile-time library search paths * `-L$dep_prefix/lib` * `-L$dep_prefix/lib64` Runtime library search paths (RPATHs) * `$rpath_flag$dep_prefix/lib` * `$rpath_flag$dep_prefix/lib64` Include search paths * `-I$dep_prefix/include` An example of this would be the `libdwarf` build, which has one dependency: `libelf`. Every call to `cc` in the `libdwarf` build will have `-I$LIBELF_PREFIX/include`, `-L$LIBELF_PREFIX/lib`, and `$rpath_flag$LIBELF_PREFIX/lib` inserted on the command line. This is done transparently to the project’s build system, which will just think it’s using a system where `libelf` is readily available. Because of this, you **do not** have to insert extra `-I`, `-L`, etc. on the command line. Another useful consequence of this is that you often do *not* have to add extra parameters on the `configure` line to get autotools to find dependencies. The `libdwarf` install method just calls configure like this: ``` configure("--prefix=" + prefix) ``` Because of the `-L` and `-I` arguments, configure will successfully find `libdwarf.h` and `libdwarf.so`, without the packager having to provide `--with-libdwarf=/path/to/libdwarf` on the command line. Note For most compilers, `$rpath_flag` is `-Wl,-rpath,`. 
However, NAG passes its flags to GCC instead of passing them directly to the linker. Therefore, its `$rpath_flag` is doubly wrapped: `-Wl,-Wl,,-rpath,`. `$rpath_flag` can be overridden on a compiler-specific basis in `lib/spack/spack/compilers/$compiler.py`. The compiler wrappers also pass the compiler flags specified by the user from the command line (`cflags`, `cxxflags`, `fflags`, `cppflags`, `ldflags`, and/or `ldlibs`). They do not override the canonical autotools flags with the same names (but in ALL-CAPS) that may be passed into the build by particularly challenging package scripts. ### MPI support in Spack[¶](#mpi-support-in-spack) It is common for high performance computing software/packages to use the Message Passing Interface (`MPI`). As a result of concretization, a given package can be built using different implementations of MPI such as `Openmpi`, `MPICH` or `IntelMPI`. That is, when your package declares that it `depends_on('mpi')`, it can be built with any of these `mpi` implementations. In some scenarios, to configure a package, one has to provide it with appropriate MPI compiler wrappers such as `mpicc`, `mpic++`. However, different implementations of `MPI` may have different names for those wrappers. Spack provides an idiomatic way to use MPI compilers in your package. To use MPI wrappers to compile your whole build, do this in your `install()` method: ``` env['CC'] = spec['mpi'].mpicc env['CXX'] = spec['mpi'].mpicxx env['F77'] = spec['mpi'].mpif77 env['FC'] = spec['mpi'].mpifc ``` That’s all. A longer explanation of why this works is below. We don’t try to force any particular build method on packagers. The decision to use MPI wrappers depends on the way the package is written, on common practice, and on “what works”. Loosely, there are three types of MPI builds: > 1. Some build systems work well without the wrappers and can treat MPI > as an external library, where the person doing the build has to > supply includes/libs/etc.
This is fairly uncommon. > 2. Others really want the wrappers and assume you’re using an MPI > “compiler” – i.e., they have no mechanism to add MPI > includes/libraries/etc. > 3. CMake’s `FindMPI` needs the compiler wrappers, but it uses them to > extract `-I` / `-L` / `-D` arguments, then treats MPI like a > regular library. Note that some CMake builds fall into case 2 because they either don’t know about or don’t like CMake’s `FindMPI` support – they just assume an MPI compiler. Also, some autotools builds fall into case 3 (e.g. [here is an autotools version of CMake’s FindMPI](https://github.com/tgamblin/libra/blob/master/m4/lx_find_mpi.m4)). Given all of this, we leave the use of the wrappers up to the packager. Spack will support all three ways of building MPI packages. #### Packaging Conventions[¶](#packaging-conventions) As mentioned above, in the `install()` method, `CC`, `CXX`, `F77`, and `FC` point to Spack’s wrappers around the chosen compiler. Spack’s wrappers are not the MPI compiler wrappers, though they do automatically add `-I`, `-L`, and `-Wl,-rpath` args for dependencies in a similar way. The MPI wrappers are a bit different in that they also add `-l` arguments for the MPI libraries, and some add special `-D` arguments to trigger build options in MPI programs. For case 1 above, you generally don’t need to do more than patch your Makefile or add configure args as you normally would. For case 3, you don’t need to do much of anything, as Spack puts the MPI compiler wrappers in the PATH, and the build will find them and interrogate them. For case 2, things are a bit more complicated, as you’ll need to tell the build to use the MPI compiler wrappers instead of Spack’s compiler wrappers. All it takes is some lines like this: ``` env['CC'] = spec['mpi'].mpicc env['CXX'] = spec['mpi'].mpicxx env['F77'] = spec['mpi'].mpif77 env['FC'] = spec['mpi'].mpifc ``` Or, if you pass CC, CXX, etc.
directly to your build with, e.g., `--with-cc=<path>`, you’ll want to substitute `spec['mpi'].mpicc` in there instead, e.g.: ``` configure('--prefix=%s' % prefix, '--with-cc=%s' % spec['mpi'].mpicc) ``` Now, you may think that doing this will lose the includes, library paths, and RPATHs that Spack’s compiler wrappers get you, but we’ve actually set things up so that the MPI compiler wrappers use Spack’s compiler wrappers when run from within Spack. So using the MPI wrappers should really be as simple as the code above. #### `spec['mpi']`[¶](#spec-mpi) Ok, so how does all this work? If your package has a virtual dependency like `mpi`, then referring to `spec['mpi']` within `install()` will get you the concrete `mpi` implementation in your dependency DAG. That is a spec object just like the one passed to install, only the MPI implementations all set some additional properties on it to help you out. E.g., in mvapich2, you’ll find this: ``` def setup_dependent_package(self, module, dependent_spec): self.spec.mpicc = join_path(self.prefix.bin, 'mpicc') self.spec.mpicxx = join_path(self.prefix.bin, 'mpicxx') self.spec.mpifc = join_path(self.prefix.bin, 'mpif90') self.spec.mpif77 = join_path(self.prefix.bin, 'mpif77') self.spec.mpicxx_shared_libs = [ join_path(self.prefix.lib, 'libmpicxx.{0}'.format(dso_suffix)), join_path(self.prefix.lib, 'libmpi.{0}'.format(dso_suffix)) ] ``` That code allows the mvapich2 package to associate an `mpicc` property with the `mvapich2` node in the DAG, so that dependents can access it. `openmpi` and `mpich` do similar things. So, no matter what MPI you’re using, `spec['mpi'].mpicc` gets you the location of the MPI compilers. This allows us to have a fairly simple polymorphic interface for information about virtual dependencies like MPI. #### Wrapping wrappers[¶](#wrapping-wrappers) Spack likes to use its own compiler wrappers to make it easy to add `RPATHs` to builds, and to try hard to ensure that your builds use the right dependencies.
This doesn’t play nicely by default with MPI, so we have to do a couple of tricks. > 1. If we build MPI with Spack’s wrappers, mpicc and friends will be > installed with hard-coded paths to Spack’s wrappers, and using them > from outside of Spack will fail because they only work within Spack. > To fix this, we patch mpicc and friends to use the regular > compilers. Look at the filter_compilers method in mpich, openmpi, > or mvapich2 for details. > 2. We still want to use the Spack compiler wrappers when Spack is > calling mpicc. Luckily, wrappers in all mainstream MPI > implementations provide environment variables that allow us to > dynamically set the compiler to be used by mpicc, mpicxx, etc. > Spack’s build environment > sets `MPICC`, `MPICXX`, etc. for mpich derivatives and > `OMPI_CC`, `OMPI_CXX`, etc. for OpenMPI. This makes the MPI > compiler wrappers use the Spack compiler wrappers so that your > dependencies still get proper RPATHs even if you use the MPI > wrappers. #### MPI on Cray machines[¶](#mpi-on-cray-machines) The Cray programming environment notably uses ITS OWN compiler wrappers, which function like MPI wrappers. On Cray systems, the `CC`, `cc`, and `ftn` wrappers ARE the MPI compiler wrappers, and it’s assumed that you’ll use them for all of your builds. So on Cray we don’t bother with `mpicc`, `mpicxx`, etc. Spack MPI implementations set `spec['mpi'].mpicc` to point to Spack’s wrappers, which wrap the Cray wrappers, which wrap the regular compilers and include MPI flags. That may seem complicated, but for packagers, that means the same code for using MPI wrappers will work, even on a Cray: ``` env['CC'] = spec['mpi'].mpicc ``` This is because on Cray, `spec['mpi'].mpicc` is just `spack_cc`.
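The environment-variable trick in point 2 above can be sketched as a small mapping. The variable names follow the text above (`MPICC`-style for mpich derivatives, `OMPI_CC`-style for OpenMPI); the wrapper paths are hypothetical placeholders, and the function itself is illustrative, not Spack code:

```python
# Sketch only: build the environment updates that point an MPI
# family's compiler wrappers back at Spack's compiler wrappers.
def mpi_wrapper_env(family, spack_cc, spack_cxx):
    if family == 'mpich':      # mpich, mvapich2, and derivatives
        return {'MPICC': spack_cc, 'MPICXX': spack_cxx}
    if family == 'openmpi':
        return {'OMPI_CC': spack_cc, 'OMPI_CXX': spack_cxx}
    raise ValueError('unknown MPI family: {0}'.format(family))

# hypothetical Spack wrapper locations
updates = mpi_wrapper_env('openmpi',
                          '/spack/lib/spack/env/cc',
                          '/spack/lib/spack/env/c++')
print(updates)
```

With these variables exported, a build can call `mpicc` normally and still compile through Spack's wrappers, so dependencies keep their RPATHs.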
### Checking an installation[¶](#checking-an-installation) By default, Spack assumes that a build has failed if nothing is written to the install prefix, and that it has succeeded if anything (a file, a directory, etc.) is written to the install prefix after `install()` completes. Consider a simple autotools build like this: ``` def install(self, spec, prefix): configure("--prefix={0}".format(prefix)) make() make("install") ``` If you are using standard autotools or CMake, `configure` and `make` will not write anything to the install prefix. Only `make install` writes the files, and only once the build is already complete. #### `sanity_check_is_file` and `sanity_check_is_dir`[¶](#sanity-check-is-file-and-sanity-check-is-dir) Unfortunately, many builds of scientific software modify the install prefix *before* `make install`. Builds like this can falsely report that they were successfully installed if an error occurs before the install is complete but after files have been written to the `prefix`. You can optionally specify *sanity checks* to deal with this problem. Add properties like this to your package: ``` class MyPackage(Package): ... sanity_check_is_file = ['include/libelf.h'] sanity_check_is_dir = ['lib'] def install(self, spec, prefix): configure("--prefix=" + prefix) make() make("install") ``` Now, after `install()` runs, Spack will check whether `$prefix/include/libelf.h` exists and is a file, and whether `$prefix/lib` exists and is a directory. If the checks fail, then the build will fail and the install prefix will be removed. If they succeed, Spack considers the build successful and keeps the prefix in place. #### Build-time tests[¶](#build-time-tests) Sometimes packages finish building “correctly”, yet issues with their run-time behavior are discovered only at a later stage, maybe after a full software stack relying on them has already been built.
To avoid situations of that kind it’s possible to write build-time tests that will be executed only if the option `--run-tests` of `spack install` has been activated. The proper way to write these tests is to rely on two decorators that come with any base class listed in [Implementing the installation procedure](#installation-procedure). ``` @run_after('build') @on_package_attributes(run_tests=True) def check_build(self): # Custom implementation goes here pass ``` The first decorator `run_after('build')` schedules this function to be invoked after the `build` phase has been executed, while the second one makes the invocation conditional on the fact that `self.run_tests == True`. It is also possible to schedule a function to be invoked *before* a given phase using the `run_before` decorator. Note Default implementations for build-time tests > Packages that are built using specific build systems may already have a > default implementation for build-time tests. For instance `AutotoolsPackage` > based packages will try to invoke `make test` and `make check` if > Spack is asked to run tests. > More information on each class is available in the [`build_systems`](index.html#module-spack.build_systems) > documentation. Warning The API for adding tests is not yet considered stable and may change drastically in future releases. ### File manipulation functions[¶](#file-manipulation-functions) Many builds are not perfect. If a build lacks an install target, or if it does not use systems like CMake or autotools, which have standard ways of setting compilers and options, you may need to edit files or install some files yourself to get them working with Spack. You can do this with standard Python code, and Python has rich libraries with functions for file manipulation and filtering. Spack also provides a number of convenience functions of its own to make your life even easier. These functions are described in this section.
All of the functions in this section can be included by simply running: ``` from spack import * ``` This is already part of the boilerplate for packages created with `spack create`. #### Filtering functions[¶](#filtering-functions) `filter_file(regex, repl, *filenames, **kwargs)` Works like `sed` but with Python regular expression syntax. Takes a regular expression, a replacement, and a set of files. `repl` can be a raw string or a callable function. If it is a raw string, it can contain `\1`, `\2`, etc. to refer to capture groups in the regular expression. If it is a callable, it is passed the Python `MatchObject` and should return a suitable replacement string for the particular match. Examples: 1. Filtering a Makefile to force it to use Spack’s compiler wrappers: ``` filter_file(r'^CC\s*=.*', spack_cc, 'Makefile') filter_file(r'^CXX\s*=.*', spack_cxx, 'Makefile') filter_file(r'^F77\s*=.*', spack_f77, 'Makefile') filter_file(r'^FC\s*=.*', spack_fc, 'Makefile') ``` 2. Replacing `#!/usr/bin/perl` with `#!/usr/bin/env perl` in `bib2xhtml`: ``` filter_file(r'#!/usr/bin/perl', '#!/usr/bin/env perl', prefix.bin.bib2xhtml) ``` 3. Switching the compilers used by `mpich`’s MPI wrapper scripts from `cc`, etc. to the compilers used by the Spack build: ``` filter_file('CC="cc"', 'CC="%s"' % self.compiler.cc, prefix.bin.mpicc) filter_file('CXX="c++"', 'CXX="%s"' % self.compiler.cxx, prefix.bin.mpicxx) ``` `change_sed_delimiter(old_delim, new_delim, *filenames)` Some packages, like TAU, have a build system that can’t install into directories with, e.g., `@` in the name, because they use hard-coded `sed` commands in their build. `change_sed_delimiter` finds all `sed` search/replace commands and changes the delimiter. For example, if the file contains commands that look like `s///`, you can use this to change them to `s@@@`.
Example of changing the `sed` delimiter from `@` to `;` in TAU’s build scripts: ``` change_sed_delimiter('@', ';', 'configure') change_sed_delimiter('@', ';', 'utils/FixMakefile') change_sed_delimiter('@', ';', 'utils/FixMakefile.sed.default') ``` #### File functions[¶](#file-functions) `ancestor(dir, n=1)` Get the nth ancestor of the directory `dir`. `can_access(path)` True if we can read and write to the file at `path`. Same as native Python `os.access(file_name, os.R_OK|os.W_OK)`. `install(src, dest)` Install a file to a particular location. For example, install a header into the `include` directory under the install `prefix`: ``` install('my-header.h', prefix.include) ``` `join_path(*paths)` An alias for `os.path.join`. This joins paths using the OS path separator. `mkdirp(*paths)` Create each of the directories in `paths`, creating any parent directories if they do not exist. `working_dir(dirname, **kwargs)` This is a Python [Context Manager](https://docs.python.org/2/library/contextlib.html) that makes it easier to work with subdirectories in builds. You use this with the Python `with` statement to change into a working directory, and when the with block is done, you change back to the original directory. Think of it as a safe `pushd` / `popd` combination, where `popd` is guaranteed to be called at the end, even if exceptions are thrown. Example usage: 1. The `libdwarf` build first runs `configure` and `make` in a subdirectory called `libdwarf`. It then implements the installation code itself. This is natural with `working_dir`: ``` with working_dir('libdwarf'): configure("--prefix=" + prefix, "--enable-shared") make() install('libdwarf.a', prefix.lib) ``` 2. Many CMake builds require that you build “out of source”, that is, in a subdirectory.
You can handle creating and `cd`’ing to the subdirectory like the LLVM package does: ``` with working_dir('spack-build', create=True): cmake('..', '-DLLVM_REQUIRES_RTTI=1', '-DPYTHON_EXECUTABLE=/usr/bin/python', '-DPYTHON_INCLUDE_DIR=/usr/include/python2.6', '-DPYTHON_LIBRARY=/usr/lib64/libpython2.6.so', *std_cmake_args) make() make("install") ``` The `create=True` keyword argument causes the command to create the directory if it does not exist. `touch(path)` Create an empty file at `path`. ### Style guidelines for packages[¶](#style-guidelines-for-packages) The following guidelines are provided, in the interests of making Spack packages work in a consistent manner: #### Variant Names[¶](#variant-names) Spack packages with variants similar to already-existing Spack packages should use the same name for their variants. Standard variant names are: > | Name | Default | Description | > | --- | --- | --- | > | shared | True | Build shared libraries | > | static | True | Build static libraries | > | mpi | True | Use MPI | > | python | False | Build Python extension | If specified in this table, the corresponding default should be used when declaring a variant. #### Version Lists[¶](#version-lists) Spack packages should list supported versions with the newest first. ### Packaging workflow commands[¶](#packaging-workflow-commands) When you are building packages, you will likely not get things completely right the first time. The `spack install` command performs a number of tasks before it finally installs each package. It downloads an archive, expands it in a temporary directory, and only then gives control to the package’s `install()` method. If the build doesn’t go as planned, you may want to clean up the temporary directory, or if the package isn’t downloading properly, you might want to run *only* the `fetch` stage of the build. A typical package workflow might look like this: ``` $ spack edit mypackage $ spack install mypackage ... build breaks! ... 
$ spack clean mypackage $ spack edit mypackage $ spack install mypackage ... repeat clean/install until install works ... ``` Below are some commands that will allow you some finer-grained control over the install process. #### `spack fetch`[¶](#spack-fetch) The first step of `spack install`. Takes a spec and determines the correct download URL to use for the requested package version, then downloads the archive, checks it against an MD5 checksum, and stores it in a staging directory if the check was successful. The staging directory will be located under `$SPACK_ROOT/var/spack`. When run after the archive has already been downloaded, `spack fetch` is idempotent and will not download the archive again. #### `spack stage`[¶](#spack-stage) The second step in `spack install` after `spack fetch`. Expands the downloaded archive in its temporary directory, where it will be built by `spack install`. Similar to `fetch`, if the archive has already been expanded, `stage` is idempotent. #### `spack patch`[¶](#spack-patch) After staging, Spack applies patches to downloaded packages, if any have been specified in the package file. This command will run the install process through the fetch, stage, and patch phases. Spack keeps track of whether patches have already been applied and skips this step if they have been. If Spack discovers that patches didn’t apply cleanly on some previous run, then it will restage the entire package before patching. #### `spack restage`[¶](#spack-restage) Restores the source code to pristine state, as it was before building. Does this in one of two ways: 1. If the source was fetched as a tarball, deletes the entire build directory and re-expands the tarball. 2. If the source was checked out from a repository, this deletes the build directory and checks it out again. #### `spack clean`[¶](#spack-clean) Cleans up all of Spack’s temporary and cached files.
This can be used to recover disk space if temporary files from interrupted or failed installs accumulate in the staging area. When called with `--stage` or without arguments this removes all staged files. When called with `--downloads` this will clear all resources [cached](index.html#caching) during installs. When called with `--user-cache` this will remove caches in the user home directory, including cached virtual indices. To remove all of the above, the command can be called with `--all`. When called with positional arguments, cleans up temporary files only for a particular package. If `fetch`, `stage`, or `install` are run again after this, Spack’s build process will start from scratch. #### Keeping the stage directory on success[¶](#keeping-the-stage-directory-on-success) By default, `spack install` will delete the staging area once a package has been successfully built and installed. Use `--keep-stage` to leave the build directory intact: ``` $ spack install --keep-stage <spec> ``` This allows you to inspect the build directory and potentially debug the build. You can use `clean` later to get rid of the unwanted temporary files. #### Keeping the install prefix on failure[¶](#keeping-the-install-prefix-on-failure) By default, `spack install` will delete any partially constructed install prefix if anything fails during `install()`. If you want to keep the prefix anyway (e.g. to diagnose a bug), you can use `--keep-prefix`: ``` $ spack install --keep-prefix <spec> ``` Note that this may confuse Spack into thinking that the package has been installed properly, so you may need to use `spack uninstall --force` to get rid of the install prefix before you build again: ``` $ spack uninstall --force <spec> ``` ### Graphing dependencies[¶](#graphing-dependencies) #### `spack graph`[¶](#spack-graph) Spack provides the `spack graph` command for graphing dependencies. The command by default generates an ASCII rendering of a spec’s dependency graph. 
For example: ``` $ spack graph mpileaks o mpileaks |\ | |\ | o | callpath |/| | | |\| | |\ \ | | |\ \ | | | |\ \ | | | o | | dyninst | | |/| | | | | | |\| | | | | |\ \ \ | | | | |\ \ \ | | | | o | | | intel-tbb | | | | | |/ / | | | | |/| | | | | | | | o adept-utils | |_|_|_|_|/| |/| | | | |/| | | | | |/|/ | | | | o | cmake | | | | |\ \ | | | | o | | openssl | | | | |\ \ \ o | | | | | | | openmpi |\ \ \ \ \ \ \ \ | |_|_|_|/ / / / |/| | | | | | | | |\ \ \ \ \ \ \ | | | o | | | | | libdwarf | |_|/| | | | | | |/| | | | | | | | | | | |/ / / / / | | o | | | | | hwloc | |/| | | | | | | | |\ \ \ \ \ \ | | | |\ \ \ \ \ \ | | | | | o | | | | elfutils | |_|_|_|/| | | | | |/| | | | | | | | | | | | | | o | | | | gettext | | | | |/| | | | | | | | |/| | | | | | | | | | | |\ \ \ \ \ | | | | | | |\ \ \ \ \ | | | | | | | |\ \ \ \ \ | | | | | | | | |_|_|/ / | | | | | | | |/| | | | | | | o | | | | | | | | libxml2 | |_|/| | | | | | | | | |/| |/| | | | | | | | | | | | | |/ / / / / / / | | | |/| | | | | | | | | | | | | | | | | o boost | |_|_|_|_|_|_|_|_|/| |/| | | | | | | |_|/ | | | | | | | |/| | o | | | | | | | | | zlib / / / / / / / / / | | o | | | | | | xz | | / / / / / / | | o | | | | | libpciaccess | |/| | | | | | | | |\ \ \ \ \ \ | | o | | | | | | util-macros | | / / / / / / | | | o | | | | tar | | | / / / / o | | | | | | numactl |\ \ \ \ \ \ \ | |\ \ \ \ \ \ \ | | |_|/ / / / / | |/| | | | | | | | |\ \ \ \ \ \ | | o | | | | | | automake | | |\| | | | | | | | | |_|_|_|_|/ | | |/| | | | | | | | o | | | | autoconf | |_|/| | | | | |/| |/ / / / / | | o | | | | perl | | o | | | | gdbm | | o | | | | readline | | | |/ / / | | |/| | | | | o | | | ncurses | | |/ / / | | o | | pkgconf | | / / | o | | libtool |/ / / o | | m4 o | | libsigsegv / / | o libiberty | o bzip2 o diffutils ``` At the top is the root package in the DAG, with dependency edges emerging from it. On a color terminal, the edges are colored by which dependency they lead to. 
``` $ spack graph --deptype=all mpileaks o mpileaks |\ | |\ | o | callpath |/| | | |\| | |\ \ | | |\ \ | | | |\ \ | | | o | | dyninst | | |/| | | | | | |\| | | | | |\ \ \ | | | | |\ \ \ | | | | o | | | intel-tbb | | | | | |/ / | | | | |/| | | | | | | | o adept-utils | |_|_|_|_|/| |/| | | | |/| | | | | |/|/ | | | | o | cmake | | | | |\ \ | | | | o | | openssl | | | | |\ \ \ o | | | | | | | openmpi |\ \ \ \ \ \ \ \ | |_|_|_|/ / / / |/| | | | | | | | |\ \ \ \ \ \ \ | | | o | | | | | libdwarf | |_|/| | | | | | |/| | | | | | | | | | | |/ / / / / | | o | | | | | hwloc | |/| | | | | | | | |\ \ \ \ \ \ | | | |\ \ \ \ \ \ | | | | | o | | | | elfutils | |_|_|_|/| | | | | |/| | | | | | | | | | | | | | o | | | | gettext | | | | |/| | | | | | | | |/| | | | | | | | | | | |\ \ \ \ \ | | | | | | |\ \ \ \ \ | | | | | | | |\ \ \ \ \ | | | | | | | | |_|_|/ / | | | | | | | |/| | | | | | | o | | | | | | | | libxml2 | |_|/| | | | | | | | | |/| |/| | | | | | | | | | | | | |/ / / / / / / | | | |/| | | | | | | | | | | | | | | | | o boost | |_|_|_|_|_|_|_|_|/| |/| | | | | | | |_|/ | | | | | | | |/| | o | | | | | | | | | zlib / / / / / / / / / | | o | | | | | | xz | | / / / / / / | | o | | | | | libpciaccess | |/| | | | | | | | |\ \ \ \ \ \ | | o | | | | | | util-macros | | / / / / / / | | | o | | | | tar | | | / / / / o | | | | | | numactl |\ \ \ \ \ \ \ | |\ \ \ \ \ \ \ | | |_|/ / / / / | |/| | | | | | | | |\ \ \ \ \ \ | | o | | | | | | automake | | |\| | | | | | | | | |_|_|_|_|/ | | |/| | | | | | | | o | | | | autoconf | |_|/| | | | | |/| |/ / / / / | | o | | | | perl | | o | | | | gdbm | | o | | | | readline | | | |/ / / | | |/| | | | | o | | | ncurses | | |/ / / | | o | | pkgconf | | / / | o | | libtool |/ / / o | | m4 o | | libsigsegv / / | o libiberty | o bzip2 o diffutils ``` The `deptype` argument tells Spack what types of dependencies to graph. By default it includes link and run dependencies but not build dependencies. 
Supplying `--deptype=all` will show the build dependencies as well. This is equivalent to `--deptype=build,link,run`. Options for `deptype` include: * Any combination of `build`, `link`, and `run` separated by commas. * `all` or `alldeps` for all types of dependencies. You can also use `spack graph` to generate graphs in the widely used [Dot](http://www.graphviz.org/doc/info/lang.html) format. For example: ``` $ spack graph --dot mpileaks digraph G { labelloc = "b" rankdir = "TB" ranksep = "5" node[ fontname=Monaco, penwidth=2, fontsize=12, margin=.1, shape=box, fillcolor=lightblue, style="rounded,filled"] "jicflieg2zyqi5wzd7z2zds3bltaitgu" [label="mpileaks/jicflie"] "7tippnvo5g76wpijk7x5kwfpr3iqiaen" [label="adept-utils/7tippnv"] "zbgfxapchxa4awxdwpleubfuznblxzvt" [label="boost/zbgfxap"] "ufczdvsqt6edesm36xiucyry7myhj7e7" [label="bzip2/ufczdvs"] "2rhuivgjrna2nrxhntyde6md2khcvs34" [label="diffutils/2rhuivg"] "5nus6knzumx4ik2yl44jxtgtsl7d54xb" [label="zlib/5nus6kn"] "otafqzhh4xnlq2mpakch7dr3tjfsrjnx" [label="cmake/otafqzh"] "3o765ourmesfrji6yeclb4wb5w54aqbh" [label="ncurses/3o765ou"] "fovrh7alpft646n6mhis5mml6k6e5f4v" [label="pkgconf/fovrh7a"] "b4y3w3bsyvjla6eesv4vt6aplpfrpsha" [label="openssl/b4y3w3b"] "ic2kyoadgp3dxfejcbllyplj2wf524fo" [label="perl/ic2kyoa"] "q4fpyuo7ouhkeq6d3oabtrppctpvxmes" [label="gdbm/q4fpyuo"] "nxhwrg7xwc6nbsm2v4ezwe63l6nfidbi" [label="readline/nxhwrg7"] "3njc4q5pqdpptq6jvqjrezkffwokv2sx" [label="openmpi/3njc4q5"] "43tkw5mt6huhv37vqnybqgxtkodbsava" [label="hwloc/43tkw5m"] "5urc6tcjae26fbbd2wyfohoszhgxtbmc" [label="libpciaccess/5urc6tc"] "o2pfwjf44353ajgr42xqtvzyvqsazkgu" [label="libtool/o2pfwjf"] "suf5jtcfehivwfesrc5hjy72r4nukyel" [label="m4/suf5jtc"] "fypapcprssrj3nstp6njprskeyynsgaz" [label="libsigsegv/fypapcp"] "milz7fmttmptcic2qdk5cnel7ll5sybr" [label="util-macros/milz7fm"] "wpexsphdmfayxqxd4up5vgwuqgu5woo7" [label="libxml2/wpexsph"] "teneqii2xv5u6zl5r6qi3pwurc6pmypz" [label="xz/teneqii"] "ft463odrombnxlc3qew4omckhlq7tqgc" 
[label="numactl/ft463od"] "3sx2gxeibc4oasqd4o5h6lnwpcpsgd2q" [label="autoconf/3sx2gxe"] "rymw7imfehycqxzj4nuy2oiw3abegooy" [label="automake/rymw7im"] "empvyxdkc4j4pwg7gznwhbiumruey66x" [label="callpath/empvyxd"] "sggxwr72g7mbunx6dovhsfjjxxtbwbkv" [label="dyninst/sggxwr7"] "hjmhfrvwmwugnnjhitp3n6o6rk3wyjbr" [label="elfutils/hjmhfrv"] "tawgousiaddjmwyr6nmrorxmfmfgtcmp" [label="gettext/tawgous"] "dk7lrpo3i2vfppw7kohg5aqhkrk4ckca" [label="tar/dk7lrpo"] "jwc7tgqxbccwtbw65j3vq2okbzfo4nj5" [label="intel-tbb/jwc7tgq"] "bmy5ujub65mnjwvf3mbg2y5ushsu2mov" [label="libiberty/bmy5uju"] "p4jeflorwlnkoq2vpuyocwrbcht2ayak" [label="libdwarf/p4jeflo"] "tawgousiaddjmwyr6nmrorxmfmfgtcmp" -> "teneqii2xv5u6zl5r6qi3pwurc6pmypz" "o2pfwjf44353ajgr42xqtvzyvqsazkgu" -> "suf5jtcfehivwfesrc5hjy72r4nukyel" "rymw7imfehycqxzj4nuy2oiw3abegooy" -> "3sx2gxeibc4oasqd4o5h6lnwpcpsgd2q" "q4fpyuo7ouhkeq6d3oabtrppctpvxmes" -> "nxhwrg7xwc6nbsm2v4ezwe63l6nfidbi" "<KEY>" -> "ft463odrombnxlc3qew4omckhlq7tqgc" "b4y3w3bsyvjla6eesv4vt6aplpfrpsha" -> "<KEY>ik2yl<KEY>" "<KEY>" -> "<KEY>" "hjmhfrvwmwugnnjhitp3n6o6rk3wyjbr" -> "tawgousiaddjmwyr6nmrorxmfmfgtcmp" "p4jeflorwlnkoq2vpuyocwrbcht2ayak" -> "5nus6kn<KEY>" "empvyxdkc4j4pwg7gznwhbiumruey66x" -> "7tippnvo5g76wpijk7x5kwfpr3iqiaen" "empvyxdkc4j4pwg7gznwhbiumruey66x" -> "3njc4q5pqdpptq6jvqjrezkffwokv2sx" "5urc6tcjae26fbbd2wyfohoszhgxtbmc" -> "fovrh7alpft646n6mhis5mml6k6e5f4v" "tawgousiaddjmwyr6nmrorxmfmfgtcmp" -> "<KEY>7myhj7e7" "zbgfxapchxa4awxdwpleubfuznblxzvt" -> "<KEY>" "<KEY>" -> "<KEY>" "emp<KEY>" -> "<KEY>mbunx6dovhsfjjxxtbwbkv" "hjmhfrvwmwugnnjhitp3n6o6rk3wyjbr" -> "<KEY>" "ft463odrombn<KEY>" -> "suf5jtcfehivwfes<KEY>" "<KEY>huhv37vqnybqgxtkodbsava" -> "<KEY>" "empvyxdkc4j4pwg7gznwhbiumruey66x" -> "otafqzhh4xnlq2mpakch7dr3tjfsrjnx" "wpexsphdmfayxqxd4up5vgwuqgu5woo7" -> "fovrh7alpft646n6mhis5mml6k6e5f4v" "ufczdvsqt6edesm36xiucyry7myhj7e7" -> "2rhuivgjrna2nrxhntyde6md2khcvs34" "jicflieg2zyqi5wzd7z2zds3bltaitgu" -> "7tippnvo5g76wpijk7x5kwfpr3iqiaen" 
"rymw7imfehycqxzj4nuy2oiw3abegooy" -> "ic2kyoadgp3dxfejcbllyplj2wf524fo" "tawgousiaddjmwyr6nmrorxmfmfgtcmp" -> "dk7lrpo3i2vfppw7kohg5aqhkrk4ckca" "<KEY>" -> "fovrh7alpft646n6mhis5mml6k6e5f4v" "ft463odrombnxlc3qew4omckhlq7tqgc" -> "<KEY>" "sggxwr72g7mbunx6dovhsfjjxxtbwbkv" -> "zbgfxapchxa4awxdwpleubfuznblxzvt" "sggxwr72g7mbunx6dovhsfjjxxtbwbkv" -> "otafqzhh4xnlq2mpakch7dr3tjfsrjnx" "otafqzhh4xnlq2mpakch7dr3tjfsrjnx" -> "<KEY>" "ft463odrombnxlc3qew4omckhlq7tqgc" -> "o2pfwjf44353ajgr42xqtvzyvqsazkgu" "<KEY>" -> "fovrh7alpft646n6mhis5mml6k6e5f4v" "wpexsphdmfayxqxd4up5vgwuqgu5woo7" -> "5nus6knzumx4ik2yl44jxtgtsl7d54xb" "empvyxdkc4j4pwg7gznwhbiumruey66x" -> "p4jeflorwlnkoq2vpuyocwrbcht2ayak" "<KEY>" -> "<KEY>" "<KEY>" -> "<KEY>" "<KEY>" -> "<KEY>" "<KEY>" -> "<KEY>" "<KEY>" -> "wpexsphdmfayxqxd4up5vgwuqgu5woo7" "empvyxdkc4j4pwg7gznwhbiumruey66x" -> "hjmhfrvwmwugnnjhitp3n6o6rk3wyjbr" "<KEY>" -> "q4fpyuo7ouhkeq6d3oabtrppctpvxmes" "7tippnvo5g76wpijk7x5kwfpr3iqiaen" -> "zbgfxapchxa4awxdwpleubfuznblxzvt" "suf5jtcfehivwf<KEY>" -> "fypapcprssrj3nstp6njprskeyynsgaz" "7tippnvo5g76wpijk7x5kwfpr3iqiaen" -> "otafqzhh4xnlq2mpakch7dr3tjfsrjnx" "jicflieg2zyqi5wzd7z2zds3bltaitgu" -> "empvyxdkc4j4pwg7gznwhbiumruey66x" "sggxwr72g7mbunx6dovhsfjjxxtbwbkv" -> "<KEY>" "<KEY>" -> "<KEY>" "<KEY>" -> "<KEY>" "<KEY>" -> "<KEY>" "5urc6tcjae26fbbd2wyfohoszhgxtbmc" -> "o2pfwjf44353ajgr42xqtvzyvqsazkgu" "wpexsphdmfayxqxd4up5vgwuqgu5woo7" -> "teneqii2xv5u6zl5r6qi3pwurc6pmypz" "tawgousiaddjmwyr6nmrorxmfmfgtcmp" -> "3o765ourmesfrji6yeclb4wb5w54aqbh" "43tkw5mt6huhv37vqnybqgxtkodbsava" -> "<KEY>mc" "p4jeflorwlnkoq2vpuyocwrbcht2ayak" -> "hjmhfrvwmwugnnjhitp3n6o6rk3wyjbr" "7tippnvo5g76wpijk7x5kwfpr3iqiaen" -> "3njc4q5pqdpptq6jvqjrezkffwokv2sx" "zbgfxapchxa4awxdwpleubfuznblxzvt" -> "5nus6knzumx4ik2yl44jxtgtsl7d54xb" "otafqzhh4xnlq2mpakch7dr3tjfsrjnx" -> "b4y3w3bsyvjla6eesv4vt6aplpfrpsha" "5urc6tcjae26fbbd2wyfohoszhgxtbmc" -> "milz7fmttmptcic2qdk5cnel7ll5sybr" "ft463odrombnxlc3qew4omckhlq7tqgc" -> 
"3sx2gxeibc4oasqd4o5h6lnwpcpsgd2q" "3sx2gxeibc4oasqd4o5h6lnwpcpsgd2q" -> "ic2kyoadgp3dxfejcbllyplj2wf524fo" "tawgousiaddjmwyr6nmrorxmfmfgtcmp" -> "wpexsphdmfayxqxd4up5vgwuqgu5woo7" } ``` This graph can be provided as input to other graphing tools, such as those in [Graphviz](http://www.graphviz.org). ### Interactive shell support[¶](#interactive-shell-support) Spack provides some limited shell support to make life easier for packagers. You can enable these commands by sourcing a setup file in the `share/spack` directory. For `bash` or `ksh`, run: ``` export SPACK_ROOT=/path/to/spack . $SPACK_ROOT/share/spack/setup-env.sh ``` For `csh` and `tcsh` run: ``` setenv SPACK_ROOT /path/to/spack source $SPACK_ROOT/share/spack/setup-env.csh ``` `spack cd` will then be available. #### `spack cd`[¶](#spack-cd) `spack cd` allows you to quickly cd to pertinent directories in Spack. Suppose you’ve staged a package but you want to modify it before you build it: ``` $ spack stage libelf ==> Trying to fetch from http://www.mr511.de/software/libelf-0.8.13.tar.gz ######################################################################## 100.0% ==> Staging archive: ~/spack/var/spack/stage/libelf@0.8.13%gcc@4.8.3 arch=linux-debian7-x86_64/libelf-0.8.13.tar.gz ==> Created stage in ~/spack/var/spack/stage/libelf@0.8.13%gcc@4.8.3 arch=linux-debian7-x86_64. $ spack cd libelf $ pwd ~/spack/var/spack/stage/libelf@0.8.13%gcc@4.8.3 arch=linux-debian7-x86_64/libelf-0.8.13 ``` `spack cd` here changed the current working directory to the directory containing the expanded `libelf` source code. There are a number of other places you can cd to in the spack directory hierarchy: ``` $ spack cd --help usage: spack cd [-h] [-m | -r | -i | -p | -P | -s | -S | -b | -e ENV] ... 
cd to spack directories in the shell positional arguments: spec spec of package to fetch directory for optional arguments: -h, --help show this help message and exit -m, --module-dir spack python module directory -r, --spack-root spack installation root -i, --install-dir install prefix for spec (spec need not be installed) -p, --package-dir directory enclosing a spec's package.py file -P, --packages top-level packages directory for Spack -s, --stage-dir stage directory for a spec -S, --stages top level stage directory -b, --build-dir checked out or expanded source directory for a spec (requires it to be staged first) -e ENV, --env ENV location of an environment managed by spack ``` Some of these change directory into package-specific locations (stage directory, install directory, package directory) and others change to core spack locations. For example, `spack cd --module-dir` will take you to the main python source directory of your spack install. #### `spack env`[¶](#spack-env) `spack env` functions much like the standard unix `env` command, but it takes a spec as an argument. You can use it to see the environment variables that will be set when a particular build runs, for example: ``` $ spack env mpileaks@1.1%intel ``` This will display the entire environment that will be set when the `mpileaks@1.1%intel` build runs. To run commands in a package’s build environment, you can simply provide them after the spec argument to `spack env`: ``` $ spack cd mpileaks@1.1%intel $ spack env mpileaks@1.1%intel ./configure ``` This will cd to the build directory and then run `configure` in the package’s build environment. #### `spack location`[¶](#spack-location) `spack location` is the same as `spack cd` but it does not require shell support. It simply prints out the path you ask for, rather than cd’ing to it. 
In bash, this: ``` $ cd $(spack location --build-dir <spec>) ``` is the same as: ``` $ spack cd --build-dir <spec> ``` `spack location` is intended for use in scripts or makefiles that need to know where packages are installed. e.g., in a makefile you might write: ``` DWARF_PREFIX = $(spack location --install-dir libdwarf) CXXFLAGS += -I$DWARF_PREFIX/include CXXFLAGS += -L$DWARF_PREFIX/lib ``` #### Build System Configuration Support[¶](#build-system-configuration-support) Imagine a developer creating a CMake or Autotools-based project in a local directory, which depends on libraries A-Z. Once Spack has installed those dependencies, one would like to run `cmake` with appropriate command line and environment so CMake can find them. The `spack setup` command does this conveniently, producing a CMake configuration that is essentially the same as how Spack *would have* configured the project. This can be demonstrated with a usage example: ``` $ cd myproject $ spack setup myproject@local $ mkdir build; cd build $ ../spconfig.py .. $ make $ make install ``` Notes: * Spack must have `myproject/package.py` in its repository for this to work. * `spack setup` produces the executable script `spconfig.py` in the local directory, and also creates the module file for the package. `spconfig.py` is normally run from the user’s out-of-source build directory. * The version number given to `spack setup` is arbitrary, just like `spack diy`. `myproject/package.py` does not need to have any valid downloadable versions listed (typical when a project is new). * spconfig.py produces a CMake configuration that *does not* use the Spack wrappers. Any resulting binaries *will not* use RPATH, unless the user has enabled it. This is recommended for development purposes, not production. * `spconfig.py` is human readable, and can serve as a developer reference of what dependencies are being used. 
* `make install` installs the package into the Spack repository, where it may be used by other Spack packages. * CMake-generated makefiles re-run CMake in some circumstances. Use of `spconfig.py` breaks this behavior, requiring the developer to manually re-run `spconfig.py` when a `CMakeLists.txt` file has changed. #### CMakePackage[¶](#cmakepackage) In order to enable `spack setup` functionality, the author of `myproject/package.py` must subclass from `CMakePackage` instead of the standard `Package` superclass. Because CMake is standardized, the packager does not need to tell Spack how to run `cmake; make; make install`. Instead the packager only needs to create (optional) methods `configure_args()` and `configure_env()`, which provide the arguments (as a list) and extra environment variables (as a dict) to provide to the `cmake` command. Usually, these will translate variant flags into CMake definitions. For example: ``` def configure_args(self): spec = self.spec return [ '-DUSE_EVERYTRACE=%s' % ('YES' if '+everytrace' in spec else 'NO'), '-DBUILD_PYTHON=%s' % ('YES' if '+python' in spec else 'NO'), '-DBUILD_GRIDGEN=%s' % ('YES' if '+gridgen' in spec else 'NO'), '-DBUILD_COUPLER=%s' % ('YES' if '+coupler' in spec else 'NO'), '-DUSE_PISM=%s' % ('YES' if '+pism' in spec else 'NO') ] ``` If needed, a packager may also override methods defined in `StagedPackage` (see below). #### StagedPackage[¶](#stagedpackage) `CMakePackage` is implemented by subclassing the `StagedPackage` superclass, which breaks down the standard `Package.install()` method into several sub-stages: `setup`, `configure`, `build` and `install`. Details: * Instead of implementing the standard `install()` method, package authors implement the methods for the sub-stages `install_setup()`, `install_configure()`, `install_build()`, and `install_install()`. * The `spack install` command runs the sub-stages `configure`, `build` and `install` in order. (The `setup` stage is not run by default; see below). 
* The `spack setup` command runs the sub-stages `setup` and a dummy install (to create the module file). * The sub-stage install methods take no arguments (other than `self`). The arguments `spec` and `prefix` to the standard `install()` method may be accessed via `self.spec` and `self.prefix`. #### GNU Autotools[¶](#gnu-autotools) The `setup` functionality is currently only available for CMake-based packages. Extending this functionality to GNU Autotools-based packages would be easy (and should be done by a developer who actively uses Autotools). Packages that use non-standard build systems can gain `setup` functionality by subclassing `StagedPackage` directly. Build Systems[¶](#build-systems) --- Spack defines a number of classes which understand how to use common [build systems](https://en.wikipedia.org/wiki/List_of_build_automation_software) (Makefiles, CMake, etc.). Spack package definitions can inherit these classes in order to streamline their builds. This guide provides information specific to each particular build system. It assumes that you’ve read the [Packaging Guide](index.html#packaging-guide) and expands on these ideas for each distinct build system that Spack supports: ### MakefilePackage[¶](#makefilepackage) The most primitive build system a package can use is a plain Makefile. Makefiles are simple to write for small projects, but they usually require you to edit the Makefile to set platform and compiler-specific variables. #### Phases[¶](#phases) The `MakefilePackage` base class comes with 3 phases: 1. `edit` - edit the Makefile 2. `build` - build the project 3. `install` - install the project By default, `edit` does nothing, but you can override it to replace hard-coded Makefile variables. The `build` and `install` phases run: ``` $ make $ make install ``` #### Important files[¶](#important-files) The main file that matters for a `MakefilePackage` is the Makefile. 
This file will be named one of the following ways: * GNUmakefile (only works with GNU Make) * Makefile (most common) * makefile Some Makefiles also *include* other configuration files. Check for an `include` directive in the Makefile. #### Build system dependencies[¶](#build-system-dependencies) Spack assumes that the operating system will have a valid `make` utility installed already, so you don’t need to add a dependency on `make`. However, if the package uses a `GNUmakefile` or the developers recommend using GNU Make, you should add a dependency on `gmake`: ``` depends_on('gmake', type='build') ``` #### Types of Makefile packages[¶](#types-of-makefile-packages) Most of the work involved in packaging software that uses Makefiles involves overriding or replacing hard-coded variables. Many packages make the mistake of hard-coding compilers, usually for GCC or Intel. This is fine if you happen to be using that particular compiler, but Spack is designed to work with *any* compiler, and you need to ensure that this is the case. Depending on how the Makefile is designed, there are 4 common strategies that can be used to set or override the appropriate variables: ##### Environment variables[¶](#environment-variables) Make has multiple types of [assignment operators](https://www.gnu.org/software/make/manual/make.html#Setting). Some Makefiles use `=` to assign variables. The only way to override these variables is to edit the Makefile or override them on the command-line. However, Makefiles that use `?=` for assignment honor environment variables. Since Spack already sets `CC`, `CXX`, `F77`, and `FC`, you won’t need to worry about setting these variables. 
If there are any other variables you need to set, you can do this in the `edit` method:

```
def edit(self, spec, prefix):
    env['PREFIX'] = prefix
    env['BLASLIB'] = spec['blas'].libs.ld_flags
```

[cbench](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/cbench/package.py) is a good example of a simple package that does this, while [esmf](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/esmf/package.py) is a good example of a more complex package.

##### Command-line arguments[¶](#command-line-arguments)

If the Makefile ignores environment variables, the next thing to try is command-line arguments. You can do this by overriding the `build_targets` attribute. If you don't need access to the spec, you can do this like so:

```
build_targets = ['CC=cc']
```

If you do need access to the spec, you can create a property like so:

```
@property
def build_targets(self):
    spec = self.spec
    return [
        'CC=cc',
        'BLASLIB={0}'.format(spec['blas'].libs.ld_flags),
    ]
```

[cloverleaf](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/cloverleaf/package.py) is a good example of a package that uses this strategy.

##### Edit Makefile[¶](#edit-makefile)

Some Makefiles are just plain stubborn and will ignore command-line variables. The only way to ensure that these packages build correctly is to directly edit the Makefile. Spack provides a `FileFilter` class and a `filter_file` method to help with this. For example:

```
def edit(self, spec, prefix):
    makefile = FileFilter('Makefile')

    makefile.filter('CC = gcc', 'CC = cc')
    makefile.filter('CXX = g++', 'CXX = c++')
```

[stream](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/stream/package.py) is a good example of a package that involves editing a Makefile to set the appropriate variables.

##### Config file[¶](#config-file)

More complex packages often involve Makefiles that *include* a configuration file.
These configuration files are primarily composed of variables relating to the compiler, platform, and the location of dependencies or names of libraries. Since these config files are dependent on the compiler and platform, you will often see entire directories of examples for common compilers and architectures. Use these examples to help determine what possible values to use.

If the config file is long and only contains one or two variables that need to be modified, you can use the technique above to edit the config file. However, if you end up needing to modify most of the variables, it may be easier to write a new file from scratch.

If the variables are independent of one another, a dictionary works well for storing them:

```
def edit(self, spec, prefix):
    config = {
        'CC': 'cc',
        'MAKE': 'make',
    }

    if '+blas' in spec:
        config['BLAS_LIBS'] = spec['blas'].libs.joined()

    with open('make.inc', 'w') as inc:
        for key in config:
            inc.write('{0} = {1}\n'.format(key, config[key]))
```

[elk](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/elk/package.py) is a good example of a package that uses a dictionary to store configuration variables.

If the order of variables is important, it may be easier to store them in a list:

```
def edit(self, spec, prefix):
    config = [
        'INSTALL_DIR = {0}'.format(prefix),
        'INCLUDE_DIR = $(INSTALL_DIR)/include',
        'LIBRARY_DIR = $(INSTALL_DIR)/lib',
    ]

    with open('make.inc', 'w') as inc:
        for var in config:
            inc.write('{0}\n'.format(var))
```

[hpl](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/hpl/package.py) is a good example of a package that uses a list to store configuration variables.

#### Variables to watch out for[¶](#variables-to-watch-out-for)

The following is a list of common variables to watch out for.
The first two sections are [implicit variables](https://www.gnu.org/software/make/manual/html_node/Implicit-Variables.html) defined by Make and will always use the same name, while the rest are user-defined variables and may vary from package to package.

* **Compilers**

  This includes variables such as `CC`, `CXX`, `F77`, `F90`, and `FC`, as well as variables related to MPI compiler wrappers, like `MPICC` and friends.
* **Compiler flags**

  This includes variables for specific compilers, like `CFLAGS`, `CXXFLAGS`, `F77FLAGS`, `F90FLAGS`, `FCFLAGS`, and `CPPFLAGS`. These variables are often hard-coded to contain flags specific to a certain compiler. If these flags don't work for every compiler, you may want to consider filtering them.
* **Variables that enable or disable features**

  This includes variables like `MPI`, `OPENMP`, `PIC`, and `DEBUG`. These flags often require you to create a variant so that you can either build with or without MPI support, for example. These flags are often compiler-dependent. You should replace them with the appropriate compiler flags, such as `self.compiler.openmp_flag` or `self.compiler.pic_flag`.
* **Platform flags**

  These flags control the type of architecture that the executable is compiled for. Watch out for variables like `PLAT` or `ARCH`.
* **Dependencies**

  Look out for variables that sound like they could be used to locate dependencies, such as `JAVA_HOME`, `JPEG_ROOT`, or `ZLIBDIR`. Also watch out for variables that control linking, such as `LIBS`, `LDFLAGS`, and `INCLUDES`. These variables need to be set to the installation prefix of a dependency, or to the correct linker flags to link to that dependency.
* **Installation prefix**

  If your Makefile has an `install` target, it needs some way of knowing where to install. By default, many packages install to `/usr` or `/usr/local`. Since many Spack users won't have sudo privileges, it is imperative that each package is installed to the proper prefix.
Look for variables like `PREFIX` or `INSTALL`.

#### Makefiles in a sub-directory[¶](#makefiles-in-a-sub-directory)

Not every package places its Makefile in the root of the package tarball. If the Makefile is in a sub-directory like `src`, you can tell Spack where to locate it like so:

```
build_directory = 'src'
```

#### Manual installation[¶](#manual-installation)

Not every Makefile includes an `install` target. If this is the case, you can override the default `install` method to manually install the package:

```
def install(self, spec, prefix):
    mkdir(prefix.bin)
    install('foo', prefix.bin)
    install_tree('lib', prefix.lib)
```

#### External documentation[¶](#external-documentation)

For more information on reading and writing Makefiles, see: <https://www.gnu.org/software/make/manual/make.html>

### SConsPackage[¶](#sconspackage)

SCons is a general-purpose build system that does not rely on Makefiles to build software. SCons is written in Python, and handles all building and linking itself. As far as build systems go, SCons is very non-uniform. It provides a common framework for developers to write build scripts, but the build scripts themselves can vary drastically. Some developers add subcommands like:

```
$ scons clean
$ scons build
$ scons test
$ scons install
```

Others don't add any subcommands. Some have configuration options that can be specified through variables on the command line. Others don't.

#### Phases[¶](#phases)

As previously mentioned, SCons allows developers to add subcommands like `build` and `install`, but by default, installation usually looks like:

```
$ scons
$ scons install
```

To facilitate this, the `SConsPackage` base class provides the following phases:

1. `build` - build the package
2. `install` - install the package

Package developers often add unit tests that can be invoked with `scons test` or `scons check`. Spack provides a `test` method to handle this.
Since we don’t know which one the package developer chose, the `test` method does nothing by default, but can be easily overridden like so: ``` def test(self): scons('check') ``` #### Important files[¶](#important-files) SCons packages can be identified by their `SConstruct` files. These files handle everything from setting up subcommands and command-line options to linking and compiling. One thing to look for is the `EnsureSConsVersion` function: ``` EnsureSConsVersion(2, 3, 0) ``` This means that SCons 2.3.0 is the earliest release that will work. You should specify this in a `depends_on` statement. #### Build system dependencies[¶](#build-system-dependencies) At the bare minimum, packages that use the SCons build system need a `scons` dependency. Since this is always the case, the `SConsPackage` base class already contains: ``` depends_on('scons', type='build') ``` If you want to specify a particular version requirement, you can override this in your package: ``` depends_on('scons@2.3.0:', type='build') ``` #### Finding available options[¶](#finding-available-options) The first place to start when looking for a list of valid options to build a package is `scons --help`. Some packages like [kahip](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/kahip/package.py) don’t bother overwriting the default SCons help message, so this isn’t very useful, but other packages like [serf](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/serf/package.py) print a list of valid command-line variables: ``` $ scons --help scons: Reading SConscript files ... Checking for GNU-compatible C compiler...yes scons: done reading SConscript files. 
PREFIX: Directory to install under ( /path/to/PREFIX ) default: /usr/local actual: /usr/local LIBDIR: Directory to install architecture dependent libraries under ( /path/to/LIBDIR ) default: $PREFIX/lib actual: /usr/local/lib APR: Path to apr-1-config, or to APR's install area ( /path/to/APR ) default: /usr actual: /usr APU: Path to apu-1-config, or to APR's install area ( /path/to/APU ) default: /usr actual: /usr OPENSSL: Path to OpenSSL's install area ( /path/to/OPENSSL ) default: /usr actual: /usr ZLIB: Path to zlib's install area ( /path/to/ZLIB ) default: /usr actual: /usr GSSAPI: Path to GSSAPI's install area ( /path/to/GSSAPI ) default: None actual: None DEBUG: Enable debugging info and strict compile warnings (yes|no) default: False actual: False APR_STATIC: Enable using a static compiled APR (yes|no) default: False actual: False CC: Command name or path of the C compiler default: None actual: gcc CFLAGS: Extra flags for the C compiler (space-separated) default: None actual: LIBS: Extra libraries passed to the linker, e.g. "-l<library1> -l<library2>" (space separated) default: None actual: None LINKFLAGS: Extra flags for the linker (space-separated) default: None actual: CPPFLAGS: Extra flags for the C preprocessor (space separated) default: None actual: None Use scons -H for help about command-line options. ``` More advanced packages like [cantera](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/cantera/package.py) use `scons --help` to print a list of subcommands: ``` $ scons --help scons: Reading SConscript files ... SCons build script for Cantera Basic usage: 'scons help' - print a description of user-specifiable options. 'scons build' - Compile Cantera and the language interfaces using default options. 'scons clean' - Delete files created while building Cantera. '[sudo] scons install' - Install Cantera. '[sudo] scons uninstall' - Uninstall Cantera. 
'scons test' - Run all tests which did not previously pass or for which the results may have changed. 'scons test-reset' - Reset the passing status of all tests. 'scons test-clean' - Delete files created while running the tests. 'scons test-help' - List available tests. 'scons test-NAME' - Run the test named "NAME". 'scons <command> dump' - Dump the state of the SCons environment to the screen instead of doing <command>, e.g. 'scons build dump'. For debugging purposes. 'scons samples' - Compile the C++ and Fortran samples. 'scons msi' - Build a Windows installer (.msi) for Cantera. 'scons sphinx' - Build the Sphinx documentation 'scons doxygen' - Build the Doxygen documentation ``` You’ll notice that cantera provides a `scons help` subcommand. Running `scons help` prints a list of valid command-line variables. #### Passing arguments to scons[¶](#passing-arguments-to-scons) Now that you know what arguments the project accepts, you can add them to the package build phase. This is done by overriding `build_args` like so: ``` def build_args(self, spec, prefix): args = [ 'PREFIX={0}'.format(prefix), 'ZLIB={0}'.format(spec['zlib'].prefix), ] if '+debug' in spec: args.append('DEBUG=yes') else: args.append('DEBUG=no') return args ``` `SConsPackage` also provides an `install_args` function that you can override to pass additional arguments to `scons install`. #### Compiler wrappers[¶](#compiler-wrappers) By default, SCons builds all packages in a separate execution environment, and doesn’t pass any environment variables from the user environment. Even changes to `PATH` are not propagated unless the package developer does so. This is particularly troublesome for Spack’s compiler wrappers, which depend on environment variables to manage dependencies and linking flags. In many cases, SCons packages are not compatible with Spack’s compiler wrappers, and linking must be done manually. First of all, check the list of valid options for anything relating to environment variables. 
For example, cantera has the following option: ``` * env_vars: [ string ] Environment variables to propagate through to SCons. Either the string "all" or a comma separated list of variable names, e.g. 'LD_LIBRARY_PATH,HOME'. - default: 'LD_LIBRARY_PATH,PYTHONPATH' ``` In the case of cantera, using `env_vars=all` allows us to use Spack’s compiler wrappers. If you don’t see an option related to environment variables, try using Spack’s compiler wrappers by passing `spack_cc`, `spack_cxx`, and `spack_fc` via the `CC`, `CXX`, and `FC` arguments, respectively. If you pass them to the build and you see an error message like: ``` Spack compiler must be run from Spack! Input 'SPACK_PREFIX' is missing. ``` you’ll know that the package isn’t compatible with Spack’s compiler wrappers. In this case, you’ll have to use the path to the actual compilers, which are stored in `self.compiler.cc` and friends. Note that this may involve passing additional flags to the build to locate dependencies, a task normally done by the compiler wrappers. serf is an example of a package with this limitation. #### External documentation[¶](#external-documentation) For more information on the SCons build system, see: <http://scons.org/documentation.html> ### WafPackage[¶](#wafpackage) Like SCons, Waf is a general-purpose build system that does not rely on Makefiles to build software. #### Phases[¶](#phases) The `WafPackage` base class comes with the following phases: 1. `configure` - configure the project 2. `build` - build the project 3. `install` - install the project By default, these phases run: ``` $ python waf configure --prefix=/path/to/installation/prefix $ python waf build $ python waf install ``` Each of these is a standard Waf command, and all of them can be found by running: ``` $ python waf --help ``` Each phase provides a `<phase>` function that runs: ``` $ python waf -j<jobs> <phase> ``` where `<jobs>` is the number of parallel jobs to build with.
Each phase also has a `<phase_args>` function that can pass arguments to this call. All of these functions are empty except for the `configure_args` function, which passes `--prefix=/path/to/installation/prefix`. #### Testing[¶](#testing) `WafPackage` also provides `test` and `installtest` methods, which are run after the `build` and `install` phases, respectively. By default, these phases do nothing, but you can override them to run package-specific unit tests. For example, the [py-py2cairo](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/py-py2cairo/package.py) package uses: ``` def installtest(self): with working_dir('test'): pytest = which('py.test') pytest() ``` #### Important files[¶](#important-files) Each Waf package comes with a custom `waf` build script, written in Python. This script contains instructions to build the project. The package also comes with a `wscript` file. This file is used to override the default `configure`, `build`, and `install` phases to customize the Waf project. It also allows developers to override the default `./waf --help` message. Check this file to find useful information about dependencies and the minimum versions that are supported. #### Build system dependencies[¶](#build-system-dependencies) `WafPackage` does not require `waf` to build. `waf` is only needed to create the `./waf` script. Since `./waf` is a Python script, Python is needed to build the project. `WafPackage` adds the following dependency automatically: ``` depends_on('python@2.5:', type='build') ``` Waf only supports Python 2.5 and up. #### Passing arguments to waf[¶](#passing-arguments-to-waf) As previously mentioned, each phase comes with a `<phase_args>` function that can be used to pass arguments to that particular phase. 
For example, if you need to pass arguments to the build phase, you can use: ``` def build_args(self, spec, prefix): args = [] if self.run_tests: args.append('--test') return args ``` A list of valid options can be found by running `./waf --help`. #### External documentation[¶](#external-documentation) For more information on the Waf build system, see: <https://waf.io/book/> ### AutotoolsPackage[¶](#autotoolspackage) Autotools is a GNU build system that provides a build-script generator. By running the platform-independent `./configure` script that comes with the package, you can generate a platform-dependent Makefile. #### Phases[¶](#phases) The `AutotoolsPackage` base class comes with the following phases: 1. `autoreconf` - generate the configure script 2. `configure` - generate the Makefiles 3. `build` - build the package 4. `install` - install the package Most of the time, the `autoreconf` phase will do nothing, but if the package is missing a `configure` script, `autoreconf` will generate one for you. The other phases run: ``` $ ./configure --prefix=/path/to/installation/prefix $ make $ make check # optional $ make install $ make installcheck # optional ``` Of course, you may need to add a few arguments to the `./configure` line. #### Important files[¶](#important-files) The most important file for an Autotools-based package is the `configure` script. This script is automatically generated by Autotools and generates the appropriate Makefile when run. Warning Watch out for fake Autotools packages! Autotools is a very popular build system, and many people are used to the classic steps to install a package: ``` $ ./configure $ make $ make install ``` For this reason, some developers will write their own `configure` scripts that have nothing to do with Autotools. These packages may not accept the same flags as other Autotools packages, so it is better to use the `Package` base class and create a [custom build system](index.html#custompackage).
You can tell if a package uses Autotools by running `./configure --help` and comparing the output to other known Autotools packages. You should also look for files like: * `configure.ac` * `configure.in` * `Makefile.am` Packages that don’t use Autotools aren’t likely to have these files. #### Build system dependencies[¶](#build-system-dependencies) Whether or not your package requires Autotools to install depends on how the source code is distributed. Most of the time, when developers distribute tarballs, they will already contain the `configure` script necessary for installation. If this is the case, your package does not require any Autotools dependencies. However, a basic rule of version control systems is to never commit code that can be generated. The source code repository itself likely does not have a `configure` script. Developers typically write (or auto-generate) a `configure.ac` script that contains configuration preferences and a `Makefile.am` script that contains build instructions. Then, `autoconf` is used to convert `configure.ac` into `configure`, while `automake` is used to convert `Makefile.am` into `Makefile.in`. `Makefile.in` is used by `configure` to generate a platform-dependent `Makefile` for you. The following diagram provides a high-level overview of the process: [GNU autoconf and automake process for generating makefiles](https://commons.wikimedia.org/wiki/File:Autoconf-automake-process.svg) by Jdthood under [CC BY-SA 3.0](https://creativecommons.org/licenses/by-sa/3.0/deed.en) If a `configure` script is not present in your tarball, you will need to generate one yourself. Luckily, Spack already has an `autoreconf` phase to do most of the work for you. By default, the `autoreconf` phase runs: ``` $ libtoolize $ aclocal $ autoreconf --install --verbose --force ``` All you need to do is add a few Autotools dependencies to the package. 
Most stable releases will come with a `configure` script, but if you check out a commit from the `develop` branch, you would want to add: ``` depends_on('autoconf', type='build', when='@develop') depends_on('automake', type='build', when='@develop') depends_on('libtool', type='build', when='@develop') depends_on('m4', type='build', when='@develop') ``` In some cases, developers might need to distribute a patch that modifies one of the files used to generate `configure` or `Makefile.in`. In this case, these scripts will need to be regenerated. It is preferable to regenerate these manually using the patch, and then create a new patch that directly modifies `configure`. That way, Spack can use the secondary patch and additional build system dependencies aren’t necessary. ##### force_autoreconf[¶](#force-autoreconf) If for whatever reason you really want to add the original patch and tell Spack to regenerate `configure`, you can do so using the following setting: ``` force_autoreconf = True ``` This line tells Spack to wipe away the existing `configure` script and generate a new one. If you only need to do this for a single version, this can be done like so: ``` @property def force_autoreconf(self): return self.version == Version('1.2.3') ``` #### Finding configure flags[¶](#finding-configure-flags) Once you have a `configure` script present, the next step is to determine what option flags are available. These flags can be found by running: ``` $ ./configure --help ``` `configure` will display a list of valid flags separated into some or all of the following sections: * Configuration * Installation directories * Fine tuning of the installation directories * Program names * X features * System types * **Optional Features** * **Optional Packages** * **Some influential environment variables** For the most part, you can ignore all but the last 3 sections. The “Optional Features” section lists flags that enable/disable features you may be interested in.
The “Optional Packages” section often lists dependencies and the flags needed to locate them. The “environment variables” section lists environment variables that the build system uses to pass flags to the compiler and linker. #### Adding flags to configure[¶](#addings-flags-to-configure) For most of the flags you encounter, you will want a variant to optionally enable/disable them. You can then optionally pass these flags to the `configure` call by overriding the `configure_args` function like so: ``` def configure_args(self): args = [] if '+mpi' in self.spec: args.append('--enable-mpi') else: args.append('--disable-mpi') return args ``` Note that we are explicitly disabling MPI support if it is not requested. This is important, as many Autotools packages will enable options by default if the dependencies are found, and disable them otherwise. We want Spack installations to be as deterministic as possible. If two users install a package with the same variants, the goal is that both installations work the same way. See [here](https://www.linux.com/news/best-practices-autotools) and [here](https://wiki.gentoo.org/wiki/Project:Quality_Assurance/Automagic_dependencies) for a rationale as to why these so-called “automagic” dependencies are a problem. By default, Autotools installs packages to `/usr`. We don’t want this, so Spack automatically adds `--prefix=/path/to/installation/prefix` to your list of `configure_args`. You don’t need to add this yourself. #### Helper functions[¶](#helper-functions) You may have noticed that most of the Autotools flags are of the form `--enable-foo`, `--disable-bar`, `--with-baz=<prefix>`, or `--without-baz`. Since these flags are so common, Spack provides a couple of helper functions to make your life easier. TODO: document `with_or_without` and `enable_or_disable`.
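As a rough illustration of what these helpers do, the following standalone sketch mimics the boolean case with a plain dict standing in for a real spec, so the flag-building logic can be seen (and run) outside of Spack. The real helpers are methods on `AutotoolsPackage` that consult `self.spec` and also handle multi-valued variants; this sketch is an assumption-laden simplification, not Spack's actual implementation.

```python
def with_or_without(variants, name, activation_value=None):
    """Sketch of the --with/--without convention: a truthy variant
    becomes --with-<name>[=value], a falsy one becomes --without-<name>."""
    if variants.get(name):
        if activation_value is not None:
            return ['--with-{0}={1}'.format(name, activation_value)]
        return ['--with-{0}'.format(name)]
    return ['--without-{0}'.format(name)]


def enable_or_disable(variants, name):
    """Sketch of the --enable/--disable convention for boolean variants."""
    if variants.get(name):
        return ['--enable-{0}'.format(name)]
    return ['--disable-{0}'.format(name)]


# With helpers like these, the configure_args example above collapses
# to one line per variant, and always emits an explicit flag either
# way -- which is exactly the determinism argument made above.
```

Note how both helpers always return a flag, never an empty list: that is what prevents the "automagic" behavior discussed earlier.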
#### Configure script in a sub-directory[¶](#configure-script-in-a-sub-directory) Occasionally, developers will hide their source code and `configure` script in a subdirectory like `src`. If this happens, Spack won’t be able to automatically detect the build system properly when running `spack create`. You will have to manually change the package base class and tell Spack where the `configure` script resides. You can do this like so: ``` configure_directory = 'src' ``` #### Building out of source[¶](#building-out-of-source) Some packages like `gcc` recommend building their software in a different directory than the source code to prevent build pollution. This can be done using the `build_directory` variable: ``` build_directory = 'spack-build' ``` By default, Spack will build the package in the same directory that contains the `configure` script. #### Build and install targets[¶](#build-and-install-targets) For most Autotools packages, the usual: ``` $ ./configure $ make $ make install ``` is sufficient to install the package. However, if you need to run make with any other targets, for example, to build an optional library or build the documentation, you can add these like so: ``` build_targets = ['all', 'docs'] install_targets = ['install', 'docs'] ``` #### Testing[¶](#testing) Autotools-based packages typically provide unit testing via the `check` and `installcheck` targets. If you build your software with `spack install --test=root`, Spack will check for the presence of a `check` or `test` target in the Makefile and run `make check` for you. After installation, it will check for an `installcheck` target and run `make installcheck` if it finds one. #### External documentation[¶](#external-documentation) For more information on the Autotools build system, see: <https://www.gnu.org/software/automake/manual/html_node/Autotools-Introduction.html> ### CMakePackage[¶](#cmakepackage) Like Autotools, CMake is a widely-used build-script generator.
Designed by Kitware, CMake is the most popular build system for new C, C++, and Fortran projects, and many older projects are switching to it as well. Unlike Autotools, CMake can generate build scripts for builders other than Make: Ninja, Visual Studio, etc. It is therefore cross-platform, whereas Autotools is Unix-only. #### Phases[¶](#phases) The `CMakePackage` base class comes with the following phases: 1. `cmake` - generate the Makefile 2. `build` - build the package 3. `install` - install the package By default, these phases run: ``` $ mkdir spack-build $ cd spack-build $ cmake .. -DCMAKE_INSTALL_PREFIX=/path/to/installation/prefix $ make $ make test # optional $ make install ``` A few more flags are passed to `cmake` by default, including flags for setting the build type and flags for locating dependencies. Of course, you may need to add a few arguments yourself. #### Important files[¶](#important-files) A CMake-based package can be identified by the presence of a `CMakeLists.txt` file. This file defines the build flags that can be passed to the cmake invocation, as well as linking instructions. If you are familiar with CMake, it can prove very useful for determining dependencies and dependency version requirements. One thing to look for is the `cmake_minimum_required` function: ``` cmake_minimum_required(VERSION 2.8.12) ``` This means that CMake 2.8.12 is the earliest release that will work. You should specify this in a `depends_on` statement. CMake-based packages may also contain `CMakeLists.txt` in subdirectories. This modularization helps to manage complex builds in a hierarchical fashion. Sometimes these nested `CMakeLists.txt` require additional dependencies not mentioned in the top-level file. There’s also usually a `cmake` or `CMake` directory containing additional macros, find scripts, etc. These may prove useful in determining dependency version requirements. 
#### Build system dependencies[¶](#build-system-dependencies) Every package that uses the CMake build system requires a `cmake` dependency. Since this is always the case, the `CMakePackage` base class already contains: ``` depends_on('cmake', type='build') ``` If you need to specify a particular version requirement, you can override this in your package: ``` depends_on('cmake@2.8.12:', type='build') ``` #### Finding cmake flags[¶](#finding-cmake-flags) To get a list of valid flags that can be passed to `cmake`, run the following command in the directory that contains `CMakeLists.txt`: ``` $ cmake . -LAH ``` CMake will start by checking for compilers and dependencies. Eventually it will begin to list build options. You’ll notice that most of the build options at the top are prefixed with `CMAKE_`. You can safely ignore most of these options as Spack already sets them for you. This includes flags needed to locate dependencies, RPATH libraries, set the installation directory, and set the build type. The rest of the flags are the ones you should consider adding to your package. They often include flags to enable/disable support for certain features and locate specific dependencies. One thing you’ll notice that makes CMake different from Autotools is that CMake has an understanding of build flag hierarchy. That is, certain flags will not display unless their parent flag has been selected. For example, flags to specify the `lib` and `include` directories for a package might not appear unless CMake found the dependency it was looking for. You may need to manually specify certain flags to explore the full depth of supported build flags, or check the `CMakeLists.txt` yourself. 
#### Adding flags to cmake[¶](#adding-flags-to-cmake) To add additional flags to the `cmake` call, simply override the `cmake_args` function: ``` def cmake_args(self): args = [] if '+hdf5' in self.spec: args.append('-DDETECT_HDF5=ON') else: args.append('-DDETECT_HDF5=OFF') return args ``` #### Generators[¶](#generators) CMake and Autotools are build-script generation tools; they “generate” the Makefiles that are used to build a software package. CMake actually supports multiple generators, not just Makefiles. Another common generator is Ninja. To switch to the Ninja generator, simply add: ``` generator = 'Ninja' ``` `CMakePackage` defaults to “Unix Makefiles”. If you switch to the Ninja generator, make sure to add: ``` depends_on('ninja', type='build') ``` to the package as well. Aside from that, you shouldn’t need to do anything else. Spack will automatically detect that you are using Ninja and run: ``` $ cmake .. -G Ninja $ ninja $ ninja install ``` Spack currently only supports “Unix Makefiles” and “Ninja” as valid generators, but it should be simple to add support for alternative generators. For more information on CMake generators, see: <https://cmake.org/cmake/help/latest/manual/cmake-generators.7.html> #### CMAKE_BUILD_TYPE[¶](#cmake-build-type) Every CMake-based package accepts a `-DCMAKE_BUILD_TYPE` flag to dictate which level of optimization to use. In order to ensure uniformity across packages, the `CMakePackage` base class adds a variant to control this: ``` variant('build_type', default='RelWithDebInfo', description='CMake build type', values=('Debug', 'Release', 'RelWithDebInfo', 'MinSizeRel')) ``` However, not every CMake package accepts all four of these options. Grep the `CMakeLists.txt` file to see if the default values are missing or replaced.
For example, the [dealii](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/dealii/package.py) package overrides the default variant with: ``` variant('build_type', default='DebugRelease', description='The build type to build', values=('Debug', 'Release', 'DebugRelease')) ``` For more information on `CMAKE_BUILD_TYPE`, see: <https://cmake.org/cmake/help/latest/variable/CMAKE_BUILD_TYPE.html> #### CMakeLists.txt in a sub-directory[¶](#cmakelists-txt-in-a-sub-directory) Occasionally, developers will hide their source code and `CMakeLists.txt` in a subdirectory like `src`. If this happens, Spack won’t be able to automatically detect the build system properly when running `spack create`. You will have to manually change the package base class and tell Spack where `CMakeLists.txt` resides. You can do this like so: ``` root_cmakelists_dir = 'src' ``` Note that this path is relative to the root of the extracted tarball, not to the `build_directory`. It defaults to the current directory. #### Building out of source[¶](#building-out-of-source) By default, Spack builds every `CMakePackage` in a `spack-build` sub-directory. If, for whatever reason, you would like to build in a different sub-directory, simply override `build_directory` like so: ``` build_directory = 'my-build' ``` #### Build and install targets[¶](#build-and-install-targets) For most CMake packages, the usual: ``` $ cmake $ make $ make install ``` is sufficient to install the package. However, if you need to run make with any other targets, for example, to build an optional library or build the documentation, you can add these like so: ``` build_targets = ['all', 'docs'] install_targets = ['install', 'docs'] ``` #### Testing[¶](#testing) CMake-based packages typically provide unit testing via the `test` target. If you build your software with `spack install --test=root`, Spack will check for the presence of a `test` target in the Makefile and run `make test` for you.
If you want to run a different test instead, simply override the `check` method. #### External documentation[¶](#external-documentation) For more information on the CMake build system, see: <https://cmake.org/cmake/help/latest/> ### MesonPackage[¶](#mesonpackage) Much like Autotools and CMake, Meson is a build system, but it is designed to be both fast and as user-friendly as possible. GNOME’s goal is to port its modules to use the Meson build system. #### Phases[¶](#phases) The `MesonPackage` base class comes with the following phases: 1. `meson` - generate ninja files 2. `build` - build the project 3. `install` - install the project By default, these phases run: ``` $ mkdir spack-build $ cd spack-build $ meson .. --prefix=/path/to/installation/prefix $ ninja $ ninja test # optional $ ninja install ``` Any of these phases can be overridden in your package as necessary. There is also a `check` method that looks for a `test` target in the build file. If a `test` target exists and the user runs: ``` $ spack install --test=root <meson-package> ``` Spack will run `ninja test` after the build phase. #### Important files[¶](#important-files) Packages that use the Meson build system can be identified by the presence of a `meson.build` file. This file declares things like build instructions and dependencies. #### Build system dependencies[¶](#build-system-dependencies) At the bare minimum, packages that use the Meson build system need `meson` and `ninja` dependencies. Since this is always the case, the `MesonPackage` base class already contains: ``` depends_on('meson', type='build') depends_on('ninja', type='build') ``` #### Passing arguments to meson[¶](#passing-arguments-to-meson) If you need to pass any arguments to the `meson` call, you can override the `meson_args` method like so: ``` def meson_args(self): return ['--default-library=both'] ``` This method can be used to pass flags as well as variables.
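In practice, a `meson_args` override usually maps variants onto Meson options, just like `configure_args` does for Autotools. The standalone sketch below shows that kind of logic with a plain dict standing in for a real spec; the `shared` and `strict` variant names are hypothetical, and a real package would read `self.spec` instead:

```python
def meson_args_for(variants):
    """Sketch of variant-to-flag mapping for a meson_args override.
    'shared' and 'strict' are hypothetical variant names used only
    for illustration."""
    args = ['--default-library={0}'.format(
        'shared' if variants.get('shared', True) else 'static')]
    if variants.get('strict', False):
        # Treat compiler warnings as errors.
        args.append('--werror')
    return args
```

As with Autotools, emitting an explicit value for every variant keeps the build deterministic rather than dependent on Meson's defaults.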
#### External documentation[¶](#external-documentation) For more information on the Meson build system, see: <https://mesonbuild.com/index.html> ### QMakePackage[¶](#qmakepackage) Much like Autotools and CMake, QMake is a build-script generator designed by the developers of Qt. In its simplest form, Spack’s `QMakePackage` runs the following steps: ``` $ qmake $ make $ make check # optional $ make install ``` QMake does not appear to have a standardized way of specifying the installation directory, so you may have to set environment variables or edit `*.pro` files to get things working properly. #### Phases[¶](#phases) The `QMakePackage` base class comes with the following phases: 1. `qmake` - generate Makefiles 2. `build` - build the project 3. `install` - install the project By default, these phases run: ``` $ qmake $ make $ make install ``` Any of these phases can be overridden in your package as necessary. There is also a `check` method that looks for a `check` target in the Makefile. If a `check` target exists and the user runs: ``` $ spack install --test=root <qmake-package> ``` Spack will run `make check` after the build phase. #### Important files[¶](#important-files) Packages that use the QMake build system can be identified by the presence of a `<project-name>.pro` file. This file declares things like build instructions and dependencies. One thing to look for is the `minQtVersion` function: ``` minQtVersion(5, 6, 0) ``` This means that Qt 5.6.0 is the earliest release that will work. You should specify this in a `depends_on` statement. #### Build system dependencies[¶](#build-system-dependencies) At the bare minimum, packages that use the QMake build system need a `qt` dependency.
Since this is always the case, the `QMakePackage` base class already contains: ``` depends_on('qt', type='build') ``` If you want to specify a particular version requirement, or need to link to the `qt` libraries, you can override this in your package: ``` depends_on('qt@5.6.0:') ``` #### Passing arguments to qmake[¶](#passing-arguments-to-qmake) If you need to pass any arguments to the `qmake` call, you can override the `qmake_args` method like so: ``` def qmake_args(self): return ['-recursive'] ``` This method can be used to pass flags as well as variables. #### External documentation[¶](#external-documentation) For more information on the QMake build system, see: <http://doc.qt.io/qt-5/qmake-manual.html> ### OctavePackage[¶](#octavepackage) Octave has its own build system for installing packages. #### Phases[¶](#phases) The `OctavePackage` base class has a single phase: 1. `install` - install the package By default, this phase runs the following command: ``` $ octave '--eval' 'pkg prefix <prefix>; pkg install <archive_file>' ``` Beware that uninstallation is not implemented at the moment. After uninstalling a package via Spack, you also need to manually uninstall it from Octave via `pkg uninstall <package_name>`. #### Finding Octave packages[¶](#finding-octave-packages) Most Octave packages are listed at <https://octave.sourceforge.io/packages.php>. #### Dependencies[¶](#dependencies) Usually, the homepage of a package will list dependencies, i.e. `Dependencies: Octave >= 3.6.0 struct >= 1.0.12`. The same information should be available in the `DESCRIPTION` file in the root of each archive. #### External Documentation[¶](#external-documentation) For more information on the Octave build system, see: <https://octave.org/doc/v4.4.0/Installing-and-Removing-Packages.html> ### PerlPackage[¶](#perlpackage) Much like Octave, Perl has its own language-specific build system. #### Phases[¶](#phases) The `PerlPackage` base class comes with 3 phases that can be overridden: 1.
`configure` - configure the package 2. `build` - build the package 3. `install` - install the package Perl packages have 2 common modules used for module installation: ##### `ExtUtils::MakeMaker`[¶](#extutils-makemaker) The `ExtUtils::MakeMaker` module is just what it sounds like, a module designed to generate Makefiles. It can be identified by the presence of a `Makefile.PL` file, and has the following installation steps: ``` $ perl Makefile.PL INSTALL_BASE=/path/to/installation/prefix $ make $ make test # optional $ make install ``` ##### `Module::Build`[¶](#module-build) The `Module::Build` module is a pure-Perl build system, and can be identified by the presence of a `Build.PL` file. It has the following installation steps: ``` $ perl Build.PL --install_base /path/to/installation/prefix $ ./Build $ ./Build test # optional $ ./Build install ``` If both `Makefile.PL` and `Build.PL` files exist in the package, Spack will use `Makefile.PL` by default. If your package uses a different module, `PerlPackage` will need to be extended to support it. `PerlPackage` automatically detects which build steps to use, so there shouldn’t be much work on the package developer’s side to get things working. #### Finding Perl packages[¶](#finding-perl-packages) Most Perl modules are hosted on CPAN - The Comprehensive Perl Archive Network. If you need to find a package for `XML::Parser`, for example, you should search for “CPAN XML::Parser”. Some CPAN pages are versioned. Check for a link to the “Latest Release” to make sure you have the latest version. #### Package name[¶](#package-name) When you use `spack create` to create a new Perl package, Spack will automatically prepend `perl-` to the front of the package name. This helps to keep Perl modules separate from other packages. The same naming scheme is used for other language extensions, like Python and R. 
#### Description[¶](#description) Most CPAN pages have a short description under “NAME” and a longer description under “DESCRIPTION”. Use whichever you think is more useful while still being succinct. #### Homepage[¶](#homepage) In the top-right corner of the CPAN page, you’ll find a “permalink” for the package. This should be used instead of the current URL, as it doesn’t contain the version number and will always link to the latest release. #### URL[¶](#url) If you haven’t found it already, the download URL is on the right side of the page below the permalink. Search for “Download”. #### Build system dependencies[¶](#build-system-dependencies) Every `PerlPackage` obviously depends on Perl at build and run-time, so `PerlPackage` contains: ``` extends('perl') depends_on('perl', type=('build', 'run')) ``` If your package requires a specific version of Perl, you should specify this. Although newer versions of Perl include `ExtUtils::MakeMaker` and `Module::Build` as “core” modules, you may want to add dependencies on `perl-extutils-makemaker` and `perl-module-build` anyway. Many people add Perl as an external package, and we want the build to work properly. If your package uses `Makefile.PL` to build, add: ``` depends_on('perl-extutils-makemaker', type='build') ``` If your package uses `Build.PL` to build, add: ``` depends_on('perl-module-build', type='build') ``` #### Perl dependencies[¶](#perl-dependencies) Below the download URL, you will find a “Dependencies” link, which takes you to a page listing all of the dependencies of the package. Packages listed as “Core module” don’t need to be added as dependencies, but all direct dependencies should be added. Don’t add dependencies of dependencies. These should be added as dependencies to the dependency, not to your package. #### Passing arguments to configure[¶](#passing-arguments-to-configure) Packages that have non-Perl dependencies often use command-line variables to specify their installation directory. 
You can pass arguments to `Makefile.PL` or `Build.PL` by overriding `configure_args` like so: ``` def configure_args(self): expat = self.spec['expat'].prefix return [ 'EXPATLIBPATH={0}'.format(expat.lib), 'EXPATINCPATH={0}'.format(expat.include), ] ``` #### Alternatives to Spack[¶](#alternatives-to-spack) If you need to maintain a stack of Perl modules for a user and don't want to add all of them to Spack, a good alternative is `cpanm`. If Perl is already installed on your system, it should come with a `cpan` executable. To install `cpanm`, run the following command: ``` $ cpan App::cpanminus ``` Now, you can install any Perl module you want by running: ``` $ cpanm Module::Name ``` Obviously, these commands can only be run if you have root privileges. Furthermore, `cpanm` is not capable of installing non-Perl dependencies. If you need to install to your home directory or need to install a module with non-Perl dependencies, Spack is a better option. #### External documentation[¶](#external-documentation) You can find more information on installing Perl modules from source at: <http://www.perlmonks.org/?node_id=128077> More generic Perl module installation instructions can be found at: <http://www.cpan.org/modules/INSTALL.html> ### PythonPackage[¶](#pythonpackage) Python packages and modules have their own special build system. #### Phases[¶](#phases) The `PythonPackage` base class provides the following phases that can be overridden: * `build` * `build_py` * `build_ext` * `build_clib` * `build_scripts` * `clean` * `install` * `install_lib` * `install_headers` * `install_scripts` * `install_data` * `sdist` * `register` * `bdist` * `bdist_dumb` * `bdist_rpm` * `bdist_wininst` * `upload` * `check` These are all standard `setup.py` commands and can be found by running: ``` $ python setup.py --help-commands ``` By default, only the `build` and `install` phases are run: 1. `build` - build everything needed to install 2.
`install` - install everything from build directory If for whatever reason you need to run more phases, simply modify your `phases` list like so: ``` phases = ['build_ext', 'install', 'bdist'] ``` Each phase provides a function `<phase>` that runs: ``` $ python -s setup.py --no-user-cfg <phase> ``` Each phase also has a `<phase_args>` function that can pass arguments to this call. All of these functions are empty except for the `install_args` function, which passes `--prefix=/path/to/installation/prefix`. There is also some additional logic specific to setuptools and eggs. If you need to run a phase that is not a standard `setup.py` command, you’ll need to define a function for it like so: ``` phases = ['configure', 'build', 'install'] def configure(self, spec, prefix): self.setup_py('configure') ``` #### Important files[¶](#important-files) Python packages can be identified by the presence of a `setup.py` file. This file is used by package managers like `pip` to determine a package’s dependencies and the version of dependencies required, so if the `setup.py` file is not accurate, the package will not build properly. For this reason, the `setup.py` file should be fairly reliable. If the documentation and `setup.py` disagree on something, the `setup.py` file should be considered to be the truth. As dependencies are added or removed, the documentation is much more likely to become outdated than the `setup.py`. #### Finding Python packages[¶](#finding-python-packages) The vast majority of Python packages are hosted on PyPI - The Python Package Index. `pip` only supports packages hosted on PyPI, making it the only option for developers who want a simple installation. Search for “PyPI <package-name>” to find the download page. Note that some pages are versioned, and the first result may not be the newest version. Click on the “Latest Version” button to the top right to see if a newer version is available. 
The download page is usually at: `https://pypi.org/project/<package-name>` #### Description[¶](#description) The top of the PyPI downloads page contains a description of the package. The first line is usually a short description, while a longer "Project Description" may follow. Choose whichever is more useful. You can also get these descriptions on the command-line using: ``` $ python setup.py --description $ python setup.py --long-description ``` #### Homepage[¶](#homepage) Package developers use `setup.py` to upload new versions to PyPI. The `setup` method often passes metadata like `homepage` to PyPI. This metadata is displayed on the left side of the download page. Search for the text "Homepage" under "Project links" to find it. You should use this page instead of the PyPI page if they differ. You can also get the homepage on the command-line by running: ``` $ python setup.py --url ``` #### URL[¶](#url) You may have noticed that Spack allows you to add multiple versions of the same package without adding multiple versions of the download URL. It does this by guessing what the version string in the URL is and replacing this with the requested version. Obviously, if Spack cannot guess the version correctly, or if non-version-related things change in the URL, Spack cannot substitute the version properly. Once upon a time, PyPI offered nice, simple download URLs like: <https://pypi.python.org/packages/source/n/numpy/numpy-1.13.1.zip> As you can see, the version is 1.13.1. It probably isn't hard to guess what URL to use to download version 1.12.0, and Spack was perfectly capable of performing this calculation.
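The version substitution Spack performs on such URLs can be illustrated with a short regular-expression sketch (a deliberate simplification, not Spack's real URL parser, which handles many more layouts than a dotted version directly before the extension):

```python
import re

def substitute_version(url, new_version):
    """Swap the dotted version number that appears immediately
    before the archive extension. Simplified sketch of the idea."""
    return re.sub(r'\d+(\.\d+)+(?=\.(?:zip|tar\.gz|tgz)$)',
                  new_version, url)

old = 'https://pypi.python.org/packages/source/n/numpy/numpy-1.13.1.zip'
print(substitute_version(old, '1.12.0'))
# -> https://pypi.python.org/packages/source/n/numpy/numpy-1.12.0.zip
```

This guess only works while the rest of the URL stays stable across releases, which is exactly the property the hash-based URLs below break.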
However, PyPI switched to a new download URL format: <https://pypi.python.org/packages/c0/3a/40967d9f5675fbb097ffec170f59c2ba19fc96373e73ad47c2cae9a30aed/numpy-1.13.1.zip#md5=2c3c0f4edf720c3a7b525dacc825b9ae> and more recently: <https://files.pythonhosted.org/packages/b0/2b/497c2bb7c660b2606d4a96e2035e92554429e139c6c71cdff67af66b58d2/numpy-1.14.3.zip> As you can imagine, it is impossible for Spack to guess what URL to use to download version 1.12.0 given this URL. There is a solution, however. PyPI offers a new hidden interface for downloading Python packages that does not include a hash in the URL: <https://pypi.io/packages/source/n/numpy/numpy-1.13.1.zip> This URL redirects to the files.pythonhosted.org URL. The general syntax for this pypi.io URL is: `https://pypi.io/packages/source/<first-letter-of-name>/<name>/<name>-<version>.<extension>` Please use the pypi.io URL instead of the pypi.python.org URL. If both `.tar.gz` and `.zip` versions are available, `.tar.gz` is preferred. If some releases offer both `.tar.gz` and `.zip` versions, but some only offer `.zip` versions, use `.zip`. ##### PyPI vs. GitHub[¶](#pypi-vs-github) Many packages are hosted on PyPI, but are developed on GitHub and other version control systems. The tarball can be downloaded from either location, but PyPI is preferred for the following reasons: 1. PyPI contains the bare minimum of files to install the package. You may notice that the tarball you download from PyPI does not have the same checksum as the tarball you download from GitHub. When a developer uploads a new release to PyPI, it doesn't contain every file in the repository, only the files necessary to install the package. PyPI tarballs are therefore smaller. 2. PyPI is the official source for package managers like `pip`. Let's be honest, `pip` is much more popular than Spack. If the GitHub tarball contains a file not present in the PyPI tarball that causes a bug, the developers may not realize this for quite some time.
If the bug was in a file contained in the PyPI tarball, users would notice the bug much more quickly. 3. A GitHub release may be a beta version. When a developer releases a new version of a package on GitHub, it may not be intended for most users. Until that release also makes its way to PyPI, it should be assumed that the release is not yet ready for general use. 4. The checksum for a GitHub release may change. Unfortunately, some developers have a habit of patching releases without incrementing the version number. This results in a change in tarball checksum. Package managers like Spack that use checksums to verify the integrity of a downloaded tarball grind to a halt when the checksum for a known version changes. Most of the time, the change is intentional, and contains a needed bug fix. However, sometimes the change indicates a download source that has been compromised, and a tarball that contains a virus. If this happens, you must contact the developers to determine which is the case. PyPI is nice because it makes it physically impossible to re-release the same version of a package with a different checksum. There are some reasons to prefer downloading from GitHub: 1. The GitHub tarball may contain unit tests. As previously mentioned, the PyPI tarball contains the bare minimum of files to install the package. Unless explicitly specified by the developers, it will not contain development files like unit tests. If you desire to run the unit tests during installation, you should use the GitHub tarball instead. 2. Spack does not yet support `spack versions` and `spack checksum` with PyPI URLs. These commands work just fine with GitHub URLs. This is a minor annoyance, not a reason to prefer GitHub over PyPI. If you really want to run these unit tests, no one will stop you from submitting a PR for a new package that downloads from GitHub. #### Build system dependencies[¶](#build-system-dependencies) There are a few dependencies common to the `PythonPackage` build system.
##### Python[¶](#python) Obviously, every `PythonPackage` needs Python at build-time to run `python setup.py build && python setup.py install`. Python is also needed at run-time if you want to import the module. Due to backwards-incompatible changes between Python 2 and 3, it is very important to specify which versions of Python are supported. If the documentation mentions that Python 3 is required, this can be specified as: ``` depends_on('python@3:', type=('build', 'run')) ``` If Python 2 is required, this would look like: ``` depends_on('python@:2', type=('build', 'run')) ``` If Python 2.7 is the only version that works, you can use: ``` depends_on('python@2.7:2.8', type=('build', 'run')) ``` The documentation may not always specify supported Python versions. Another place to check is in the `setup.py` file. Look for a line containing `python_requires`. An example from [py-numpy](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/py-numpy/package.py) looks like: ``` python_requires='>=2.7,!=3.0.*,!=3.1.*,!=3.2.*,!=3.3.*' ``` More commonly, you will find a version check at the top of the file: ``` if sys.version_info[:2] < (2, 7) or (3, 0) <= sys.version_info[:2] < (3, 4): raise RuntimeError("Python version 2.7 or >= 3.4 required.") ``` This can be converted to Spack's spec notation like so: ``` depends_on('python@2.7:2.8,3.4:', type=('build', 'run')) ``` ##### setuptools[¶](#setuptools) Originally, the Python language had a single build system called distutils, which is built into Python. Distutils provided a common framework for package authors to describe their project and how it should be built. However, distutils was not without limitations. Most notably, there was no way to list a project's dependencies with distutils. Along came setuptools, a non-builtin build system designed to overcome the limitations of distutils. Both projects use a similar API, making the transition easy while adding much needed functionality.
Today, setuptools is used in around 75% of the Python packages in Spack. Since setuptools isn't built into Python, you need to add it as a dependency. To determine whether or not a package uses setuptools, search the file for an import statement like: ``` import setuptools ``` or: ``` from setuptools import setup ``` Some packages are designed to work with both setuptools and distutils, so you may find something like: ``` try: from setuptools import setup except ImportError: from distutils.core import setup ``` This uses setuptools if available, and falls back to distutils if not. In this case, you would still want to add a setuptools dependency, as it offers us more control over the installation. Unless specified otherwise, setuptools is usually a build-only dependency. That is, it is needed to install the software, but is not needed at run-time. This can be specified as: ``` depends_on('py-setuptools', type='build') ``` ##### cython[¶](#cython) Compared to compiled languages, interpreted languages like Python can be quite a bit slower. To work around this, some Python developers rewrite computationally demanding sections of code in C, a process referred to as "cythonizing". In order to build these packages, you need to add a build dependency on cython: ``` depends_on('py-cython', type='build') ``` Look for references to "cython" in the `setup.py` to determine whether or not this is necessary. Cython may be optional, but even then you should list it as a required dependency. Spack is designed to compile software, and is meant for HPC facilities where speed is crucial. There is no reason why someone would not want an optimized version of a library instead of the pure-Python version. #### Python dependencies[¶](#python-dependencies) When you install a package with `pip`, it reads the `setup.py` file in order to determine the dependencies of the package. If the dependencies are not yet installed, `pip` downloads them and installs them for you.
This may sound convenient, but Spack cannot rely on this behavior for two reasons: 1. Spack needs to be able to install packages on air-gapped networks. If there is no internet connection, `pip` can't download the package dependencies. By explicitly listing every dependency in the `package.py`, Spack knows what to download ahead of time. 2. Duplicate installations of the same dependency may occur. Spack supports *activation* of Python extensions, which involves symlinking the package installation prefix to the Python installation prefix. If your package is missing a dependency, that dependency will be installed to the installation directory of the same package. If you try to activate the package + dependency, it may cause a problem if that package has already been activated. For these reasons, you must always explicitly list all dependencies. Although the documentation may list the package's dependencies, often the developers assume people will use `pip` and won't have to worry about it. Always check the `setup.py` to find the true dependencies. If the package relies on `distutils`, it may not explicitly list its dependencies. Check for statements like: ``` try: import numpy except ImportError: raise ImportError("numpy must be installed prior to installation") ``` Obviously, this means that `py-numpy` is a dependency. If the package uses `setuptools`, check for the following clues: * `install_requires` These packages are required for installation. * `extras_require` These packages are optional dependencies that enable additional functionality. You should add a variant that optionally adds these dependencies. * `tests_require` These are packages that are required to run the unit tests for the package. These dependencies can be specified using the `type='test'` dependency type. In the root directory of the package, you may notice a `requirements.txt` file. It may look like this file contains a list of all of the package's dependencies. Don't be fooled.
This file is used by tools like Travis to install the prerequisites for the package and a whole bunch of other things. It often contains dependencies only needed for unit tests, like: * mock * nose * pytest It can also contain dependencies for building the documentation, like sphinx. If you can't find any information about the package's dependencies, you can take a look in `requirements.txt`, but be sure not to add test or documentation dependencies. ##### setuptools[¶](#id2) Setuptools is a bit of a special case. If a package requires setuptools at run-time, how does it express this? It could be added to `install_requires`, but setuptools is imported long before this line and is needed to read it. And since you can't install the package without setuptools, the developers assume that setuptools will already be there, so they never mention when it is required. We don't want to add run-time dependencies if they aren't needed, so you need to determine whether or not setuptools is needed. Grep the installation directory for any files containing a reference to `setuptools` or `pkg_resources`. Both modules come from `py-setuptools`. `pkg_resources` is particularly common in scripts in `prefix/bin`. #### Passing arguments to setup.py[¶](#passing-arguments-to-setup-py) The default build and install phases should be sufficient to install most packages. However, you may want to pass additional flags to either phase. You can view the available options for a particular phase with: ``` $ python setup.py <phase> --help ``` Each phase provides a `<phase_args>` function that can be used to pass arguments to that phase. For example, [py-numpy](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/py-numpy/package.py) adds: ``` def build_args(self, spec, prefix): args = [] # From NumPy 1.10.0 on it's possible to do a parallel build. if self.version >= Version('1.10.0'): # But Parallel build in Python 3.5+ is broken.
See: # https://github.com/spack/spack/issues/7927 # https://github.com/scipy/scipy/issues/7112 if spec['python'].version < Version('3.5'): args = ['-j', str(make_jobs)] return args ``` #### Testing[¶](#testing) `PythonPackage` provides a couple of options for testing packages. ##### Import tests[¶](#import-tests) Just because a package successfully built does not mean that it built correctly. The most reliable test of whether or not the package was correctly installed is to attempt to import all of the modules that get installed. To get a list of modules, run the following command in the source directory: ``` $ python >>> import setuptools >>> setuptools.find_packages() ['numpy', 'numpy._build_utils', 'numpy.compat', 'numpy.core', 'numpy.distutils', 'numpy.doc', 'numpy.f2py', 'numpy.fft', 'numpy.lib', 'numpy.linalg', 'numpy.ma', 'numpy.matrixlib', 'numpy.polynomial', 'numpy.random', 'numpy.testing', 'numpy.core.code_generators', 'numpy.distutils.command', 'numpy.distutils.fcompiler'] ``` Large, complex packages like `numpy` will return a long list of packages, while other packages like `six` will return an empty list. `py-six` installs a single `six.py` file. In Python packaging lingo, a "package" is a directory containing files like: ``` foo/__init__.py foo/bar.py foo/baz.py ``` whereas a "module" is a single Python file. Since `find_packages` only returns packages, you'll have to determine the correct module names yourself. You can now add these packages and modules to the package like so: ``` import_modules = ['six'] ``` When you run `spack install --test=root py-six`, Spack will attempt to import the `six` module after installation. These tests most often catch missing dependencies and non-RPATHed libraries. Make sure not to add modules/packages containing the word "test", as these likely won't end up in the installation directory. ##### Unit tests[¶](#unit-tests) The package you want to install may come with additional unit tests.
By default, Spack runs: ``` $ python setup.py test ``` if it detects that the `setup.py` file supports a `test` phase. You can add additional build-time or install-time tests by overriding `test` and `install_test`, respectively. For example, `py-numpy` adds: ``` def install_test(self): with working_dir('..'): python('-c', 'import numpy; numpy.test("full", verbose=2)') ``` #### Setup file in a sub-directory[¶](#setup-file-in-a-sub-directory) In order to be compatible with package managers like `pip`, the package is required to place its `setup.py` in the root of the tarball. However, not every Python package cares about `pip` or PyPI. If you are installing a package that is not hosted on PyPI, you may find that it places its `setup.py` in a sub-directory. To handle this, add the directory containing `setup.py` to the package like so: ``` build_directory = 'source' ``` #### Alternate names for setup.py[¶](#alternate-names-for-setup-py) As previously mentioned, packages need to call their setup script `setup.py` in order to be compatible with package managers like `pip`. However, some packages like [py-meep](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/py-meep/package.py) and [py-adios](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/py-adios/package.py) come with multiple setup scripts, one for a serial build and another for a parallel build. You can override the default name to use like so: ``` def setup_file(self): return 'setup-mpi.py' if '+mpi' in self.spec else 'setup.py' ``` #### PythonPackage vs. packages that use Python[¶](#pythonpackage-vs-packages-that-use-python) There are many packages that make use of Python, but packages that depend on Python are not necessarily `PythonPackages`. ##### Choosing a build system[¶](#choosing-a-build-system) First of all, you need to select a build system.
`spack create` usually does this for you, but if for whatever reason you need to do this manually, choose `PythonPackage` if and only if the package contains a `setup.py` file. ##### Choosing a package name[¶](#choosing-a-package-name) Selecting the appropriate package name is a little more complicated than choosing the build system. By default, `spack create` will prepend `py-` to the beginning of the package name if it detects that the package uses the `PythonPackage` build system. However, there are occasionally packages that use `PythonPackage` that shouldn't start with `py-`. For example: * busco * easybuild * httpie * mercurial * scons * snakemake The thing these packages have in common is that they are command-line tools that just so happen to be written in Python. Someone who wants to install `mercurial` with Spack isn't going to realize that it is written in Python, and they certainly aren't going to assume the package is called `py-mercurial`. For this reason, we manually renamed the package to `mercurial`. Likewise, there are occasionally packages that don't use the `PythonPackage` build system but should still be prepended with `py-`. For example: * py-genders * py-py2cairo * py-pygobject * py-pygtk * py-pyqt * py-pyserial * py-sip * py-xpyb These packages are primarily used as Python libraries, not as command-line tools. You may see C/C++ packages that have optional Python language bindings, such as: * antlr * cantera * conduit * pagmo * vtk Don't prepend these kinds of packages with `py-`. When in doubt, think about how this package will be used. Is it primarily a Python library that will be imported in other Python scripts? Or is it a command-line tool, or C/C++/Fortran program with optional Python modules? The former should be prepended with `py-`, while the latter should not. ##### extends vs. depends_on[¶](#extends-vs-depends-on) This is very similar to the naming dilemma above, with a slight twist.
As mentioned in the [Packaging Guide](index.html#packaging-extensions), `extends` and `depends_on` are very similar, but `extends` adds the ability to *activate* the package. Activation involves symlinking everything in the installation prefix of the package to the installation prefix of Python. This allows the user to import a Python module without having to add that module to `PYTHONPATH`. When deciding between `extends` and `depends_on`, the best rule of thumb is to check the installation prefix. If Python libraries are installed to `prefix/lib/python2.7/site-packages` (where 2.7 is the MAJOR.MINOR version of Python you used to install the package), then you should use `extends`. If Python libraries are installed elsewhere or the only files that get installed reside in `prefix/bin`, then don't use `extends`, as symlinking the package wouldn't be useful. #### Alternatives to Spack[¶](#alternatives-to-spack) PyPI has hundreds of thousands of packages that are not yet in Spack, and `pip` may be a perfectly valid alternative to using Spack. The main advantage of Spack over `pip` is its ability to compile non-Python dependencies. It can also build cythonized versions of a package or link to an optimized BLAS/LAPACK library like MKL, resulting in calculations that run orders of magnitude faster. Spack does not offer a significant advantage over other Python package-management systems for installing and using tools like flake8 and sphinx. But if you need packages with non-Python dependencies like numpy and scipy, Spack will be very valuable to you. Anaconda is another great alternative to Spack, and comes with its own `conda` package manager. Like Spack, Anaconda is capable of compiling non-Python dependencies. Anaconda contains many Python packages that are not yet in Spack, and Spack contains many Python packages that are not yet in Anaconda. The main advantage of Spack over Anaconda is its ability to choose a specific compiler and BLAS/LAPACK or MPI library.
Spack also has better platform support for supercomputers. On the other hand, Anaconda offers Windows support. #### External documentation[¶](#external-documentation) For more information on Python packaging, see: <https://packaging.python.org/> ### RPackage[¶](#rpackage) Like Python, R has its own built-in build system. The R build system is remarkably uniform and well-tested. This makes it one of the easiest build systems to create new Spack packages for. #### Phases[¶](#phases) The `RPackage` base class has a single phase: 1. `install` - install the package By default, this phase runs the following command: ``` $ R CMD INSTALL --library=/path/to/installation/prefix/rlib/R/library . ``` #### Finding R packages[¶](#finding-r-packages) The vast majority of R packages are hosted on CRAN - The Comprehensive R Archive Network. If you are looking for a particular R package, search for "CRAN <package-name>" and you should quickly find what you want. If it isn't on CRAN, try Bioconductor, another common R repository. For the purposes of this tutorial, we will be walking through [r-caret](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/r-caret/package.py) as an example. If you search for "CRAN caret", you will quickly find what you are looking for at <https://cran.r-project.org/web/packages/caret/index.html>. If you search for "Package source", you will find the download URL for the latest release. Use this URL with `spack create` to create a new package. #### Package name[¶](#package-name) The first thing you'll notice is that Spack prepends `r-` to the front of the package name. This is how Spack separates R package extensions from the rest of the packages in Spack. Without this, we would end up with package name collisions more frequently than we would like.
For instance, there are already packages for both: * `ape` and `r-ape` * `curl` and `r-curl` * `gmp` and `r-gmp` * `jpeg` and `r-jpeg` * `openssl` and `r-openssl` * `uuid` and `r-uuid` * `xts` and `r-xts` Many popular programs written in C/C++ are later ported to R as a separate project. #### Description[¶](#description) The first thing you'll need to add to your new package is a description. The top of the homepage for `caret` lists the following description: > caret: Classification and Regression Training > Misc functions for training and plotting classification and regression models. You can either use the short description (first line), long description (second line), or both depending on what you feel is most appropriate. #### Homepage[¶](#homepage) If you look at the bottom of the page, you'll see: > Linking: > Please use the canonical form <https://CRAN.R-project.org/package=caret> to link to this page. Please uphold the wishes of the CRAN admins and use <https://CRAN.R-project.org/package=caret> as the homepage instead of <https://cran.r-project.org/web/packages/caret/index.html>. The latter may change without notice. #### URL[¶](#url) As previously mentioned, the download URL for the latest release can be found by searching "Package source" on the homepage. #### List URL[¶](#list-url) CRAN maintains a single webpage containing the latest release of every single package: <https://cran.r-project.org/src/contrib/> Of course, as soon as a new release comes out, the version you were using in your package is no longer available at that URL. It is moved to an archive directory. If you search for "Old sources", you will find: <https://cran.r-project.org/src/contrib/Archive/caret> If you only specify the URL for the latest release, your package will no longer be able to fetch that version as soon as a new release comes out. To get around this, add the archive directory as a `list_url`.
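Putting the URL and list URL together, the relevant portion of the `r-caret` package might look like the following sketch (the specific version in `url` is illustrative, and the checksummed `version` directives and R dependencies are omitted):

```python
# Hypothetical excerpt of a package.py for r-caret; the exact
# release filename is the kind of thing you'd copy from the
# "Package source" link and should be treated as illustrative.
from spack import *


class RCaret(RPackage):
    """Misc functions for training and plotting classification and
    regression models."""

    homepage = "https://CRAN.R-project.org/package=caret"
    url      = "https://cran.r-project.org/src/contrib/caret_6.0-80.tar.gz"
    # Old releases move into the archive directory when a new one
    # comes out, so list it as list_url to keep them fetchable:
    list_url = "https://cran.r-project.org/src/contrib/Archive/caret"
```

With `list_url` set, older versions remain fetchable even after CRAN rotates them out of the main contrib directory.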
#### Build system dependencies[¶](#build-system-dependencies) As an extension of the R ecosystem, your package will obviously depend on R to build and run. Normally, we would use `depends_on` to express this, but for R packages, we use `extends`. `extends` is similar to `depends_on`, but adds an additional feature: the ability to “activate” the package by symlinking it to the R installation directory. Since every R package needs this, the `RPackage` base class contains: ``` extends('r') depends_on('r', type=('build', 'run')) ``` Take a close look at the homepage for `caret`. If you look at the “Depends” section, you’ll notice that `caret` depends on “R (≥ 2.10)”. You should add this to your package like so: ``` depends_on('r@2.10:', type=('build', 'run')) ``` #### R dependencies[¶](#r-dependencies) R packages are often small and follow the classic Unix philosophy of doing one thing well. They are modular and usually depend on several other packages. You may find a single package with over a hundred dependencies. Luckily, CRAN packages are well-documented and list all of their dependencies in the following sections: * Depends * Imports * LinkingTo As far as Spack is concerned, all 3 of these dependency types correspond to `type=('build', 'run')`, so you don’t have to worry about them. If you are curious what they mean, <https://github.com/spack/spack/issues/2951> has a pretty good summary: > `Depends` is required and will cause those R packages to be *attached*, > that is, their APIs are exposed to the user. `Imports` *loads* packages > so that *the package* importing these packages can access their APIs, > while *not* being exposed to the user. When a user calls `library(foo)` > s/he *attaches* package `foo` and all of the packages under `Depends`. > Any function in one of these package can be called directly as `bar()`. > If there are conflicts, user can also specify `pkgA::bar()` and > `pkgB::bar()` to distinguish between them. 
> Historically, there was only `Depends` and `Suggests`, hence the confusing names. Today, maybe `Depends` would have been named `Attaches`.
>
> The `LinkingTo` is not perfect and there was recently an extensive discussion about API/ABI among other things on the R-devel mailing list among very skilled R developers:
>
> * <https://stat.ethz.ch/pipermail/r-devel/2016-December/073505.html>
> * <https://stat.ethz.ch/pipermail/r-devel/2017-January/073647.html>

Some packages also have a fourth section:

* Suggests

These are optional, rarely-used dependencies that a user might find useful. You should **NOT** add these dependencies to your package. R packages already have enough dependencies as it is, and adding optional dependencies can really slow down the concretization process. They can also introduce circular dependencies.

#### Core, recommended, and non-core packages[¶](#core-recommended-and-non-core-packages)

If you look at "Depends", "Imports", and "LinkingTo", you will notice 3 different types of packages:

##### Core packages[¶](#core-packages)

If you look at the `caret` homepage, you'll notice a few dependencies that don't have a link to the package, like `methods`, `stats`, and `utils`. These packages are part of the core R distribution and are tied to the R version installed. You can basically consider them to be "R itself". They are so essential to R that it would not make sense for them to be updated via CRAN; if they were, you would essentially get a different version of R. Instead, they are updated when R itself is updated. You can find a list of these core libraries at: <https://github.com/wch/r-source/tree/trunk/src/library>

##### Recommended packages[¶](#recommended-packages)

When you install R, there is an option called `--with-recommended-packages`. This flag causes the R installation to include a few "Recommended" packages (a legacy term). For historical reasons they are quite closely tied to the core R distribution, and are developed by the R core team or people closely associated with it.
The R core distribution "knows" about these packages, but they are indeed distributed via CRAN. Because they're distributed via CRAN, they can also be updated between R version releases.

Spack explicitly adds the `--without-recommended-packages` flag to prevent the installation of these packages. Due to the way Spack handles package activation (symlinking packages to the R installation directory), pre-existing recommended packages would cause conflicts with already-existing files. We could either not include these recommended packages in Spack and require them to be installed through `--with-recommended-packages`, or we could not install them with R and let users choose the version of the package they want to install. We chose the latter.

Since these packages are so commonly distributed with the R system, many developers may assume they exist and fail to list them as dependencies. Watch out for this.

You can find a list of these recommended packages at: <https://github.com/wch/r-source/blob/trunk/share/make/vars.mk>

##### Non-core packages[¶](#non-core-packages)

These are packages that are neither "core" nor "recommended". There are more than 10,000 of them hosted on CRAN alone.

For each of these package types, if you see that a specific version is required, for example, "lattice (≥ 0.20)", please add this information to the dependency:

```
depends_on('r-lattice@0.20:', type=('build', 'run'))
```

#### Non-R dependencies[¶](#non-r-dependencies)

Some packages depend on non-R libraries for linking. Check out the [r-stringi](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/r-stringi/package.py) package for an example: <https://CRAN.R-project.org/package=stringi>. If you search for the text "SystemRequirements", you will see:

> ICU4C (>= 52, optional)

This is how non-R dependencies are listed. Make sure to add these dependencies. The default dependency type should suffice.
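The translation from a CRAN dependency entry to a `depends_on` spec is mechanical. The following stand-alone sketch (not a Spack function, for illustration only) captures the convention described above:

```python
import re

def cran_dep_to_spec(dep):
    """Translate a CRAN entry such as 'lattice (>= 0.20)' into the
    spec string used in a Spack depends_on() directive."""
    m = re.match(r'^(\S+)\s*(?:\(>=\s*([0-9][\w.\-]*)\))?\s*$', dep.strip())
    name, version = m.group(1), m.group(2)
    spec = 'r-' + name.lower()
    if version:
        # a trailing colon means "this version or any newer one"
        spec += '@{0}:'.format(version)
    return spec
```

So "lattice (>= 0.20)" becomes `r-lattice@0.20:`, matching the directive shown above; an entry without a version constraint maps to just the prefixed package name.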
#### Passing arguments to the installation[¶](#passing-arguments-to-the-installation)

Some R packages provide additional flags that can be passed to `R CMD INSTALL`, often to locate non-R dependencies. [r-rmpi](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/r-rmpi/package.py) is an example of this; it provides flags for linking to an MPI library. To pass these to the installation command, you can override `configure_args` like so:

```
def configure_args(self, spec, prefix):
    mpi_name = spec['mpi'].name

    # The type of MPI. Supported values are:
    # OPENMPI, LAM, MPICH, MPICH2, or CRAY
    if mpi_name == 'openmpi':
        Rmpi_type = 'OPENMPI'
    elif mpi_name == 'mpich':
        Rmpi_type = 'MPICH2'
    else:
        raise InstallError('Unsupported MPI type')

    return [
        '--with-Rmpi-type={0}'.format(Rmpi_type),
        '--with-mpi={0}'.format(spec['mpi'].prefix),
    ]
```

There is a similar `configure_vars` function that can be overridden to pass variables to the build.

#### Alternatives to Spack[¶](#alternatives-to-spack)

CRAN hosts over 10,000 R packages, most of which are not in Spack. Many users may not need the advanced features of Spack, and may prefer to install R packages the normal way:

```
$ R
> install.packages("ggplot2")
```

R will search CRAN for the `ggplot2` package and install all necessary dependencies for you. If you want to update all installed R packages to the latest release, you can use:

```
> update.packages(ask = FALSE)
```

This works great for users who have internet access, but those on an air-gapped cluster will find it easier to let Spack build a download mirror and install these packages for you.

Where Spack really shines is its ability to install non-R dependencies and link to them properly, something the R installation mechanism cannot handle.
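For the air-gapped scenario just mentioned, the mirror workflow might look roughly like this, on the connected machine and on the cluster respectively (the directory path is illustrative; see Spack's mirror documentation for details):

```
# On a machine with internet access: fetch the package and its dependencies
$ spack mirror create -d /path/to/mirror r-ggplot2

# On the air-gapped cluster, after copying the mirror directory over:
$ spack mirror add local file:///path/to/mirror
$ spack install r-ggplot2
```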
#### External documentation[¶](#external-documentation)

For more information on installing R packages, see: <https://stat.ethz.ch/R-manual/R-devel/library/utils/html/INSTALL.html>

### RubyPackage[¶](#rubypackage)

Like Perl, Python, and R, Ruby has its own build system for installing Ruby gems. This build system is a work-in-progress. See <https://github.com/spack/spack/pull/3127> for more information.

### CudaPackage[¶](#cudapackage)

Unlike the other classes described here, `CudaPackage` does not represent a build system. Instead, its goal is to simplify and unify the usage of CUDA in other packages.

#### Provided variants and dependencies[¶](#provided-variants-and-dependencies)

`CudaPackage` provides a `cuda` variant (off by default) to enable/disable CUDA, and a `cuda_arch` variant to optionally specify the architecture. It also declares dependencies on the CUDA package (`depends_on('cuda@...')`) based on the architecture, and specifies conflicts for certain compiler versions.

#### Usage[¶](#usage)

In order to use it, just add another base class to your package, for example:

```
class MyPackage(CMakePackage, CudaPackage):
    ...
    def cmake_args(self):
        options = []
        spec = self.spec
        if '+cuda' in spec:
            options.append('-DWITH_CUDA=ON')
            cuda_arch = spec.variants['cuda_arch'].value
            if cuda_arch is not None:
                options.append('-DCUDA_FLAGS=-arch=sm_{0}'.format(cuda_arch[0]))
        else:
            options.append('-DWITH_CUDA=OFF')
        return options
```

### [IntelPackage](#id12)[¶](#intelpackage)

Contents

* [IntelPackage](#intelpackage)
  + [Intel packages in Spack](#intel-packages-in-spack)
    - [Packages under no-cost license](#packages-under-no-cost-license)
    - [Licensed packages](#licensed-packages)
    - [Unrelated packages](#unrelated-packages)
    - [Configuring Spack to use Intel licenses](#configuring-spack-to-use-intel-licenses)
      * [Pointing to an existing license server](#pointing-to-an-existing-license-server)
      * [Installing a standalone license file](#installing-a-standalone-license-file)
  + [Integration of Intel tools installed *external* to Spack](#integration-of-intel-tools-installed-external-to-spack)
    - [Integrating external compilers](#integrating-external-compilers)
    - [Integrating external libraries](#integrating-external-libraries)
  + [Installing Intel tools *within* Spack](#installing-intel-tools-within-spack)
    - [Install steps for packages with compilers and libraries](#install-steps-for-packages-with-compilers-and-libraries)
    - [Install steps for library-only packages](#install-steps-for-library-only-packages)
    - [Debug notes](#debug-notes)
  + [Using Intel tools in Spack to install client packages](#using-intel-tools-in-spack-to-install-client-packages)
    - [Selecting Intel compilers](#selecting-intel-compilers)
    - [Selecting libraries to satisfy virtual packages](#selecting-libraries-to-satisfy-virtual-packages)
    - [Using Intel tools as explicit dependency](#using-intel-tools-as-explicit-dependency)
    - [Tips for configuring client packages to use MKL](#tips-for-configuring-client-packages-to-use-mkl)
  + [Footnotes](#footnotes)

#### [Intel packages in Spack](#id13)[¶](#intel-packages-in-spack)

Spack can install and use several software development
products offered by Intel. Some of these are available under no-cost terms, others require a paid license. All share the same basic steps for configuration, installation, and, where applicable, license management. The Spack Python class `IntelPackage` implements these steps.

Spack interacts with Intel tools through several routes, as it does for any other package:

1. Accept system-provided tools after you declare them to Spack as *external packages*.
2. Install the products for you as *internal packages* in Spack.
3. *Use* the packages, regardless of installation route, to install what we'll call *client packages* for you, this being Spack's primary purpose.

An auxiliary route follows from route 2, as it would for most Spack packages, namely:

4. Make Spack-installed Intel tools available outside of Spack for ad-hoc use, typically through Spack-managed modulefiles.

This document covers routes 1 through 3.

##### [Packages under no-cost license](#id14)[¶](#packages-under-no-cost-license)

Intel's standalone performance library products, notably MKL and MPI, are available for use under a [simplified license](https://software.intel.com/en-us/license/intel-simplified-software-license) since 2017 [[fn1]](index.html#fn1). They are packaged in Spack as:

* `intel-mkl` – Math Kernel Library (linear algebra and FFT),
* `intel-mpi` – The Intel-MPI implementation (derived from MPICH),
* `intel-ipp` – Primitives for image-, signal-, and data-processing,
* `intel-daal` – Machine learning and data analytics.

Some earlier versions of these libraries were released under a paid license. For these older versions, the license must be available at installation time of the products and during compilation of client packages.

The library packages work well with the Intel compilers but do not require them – those packages can just as well be used with other compilers.
The Intel compiler invocation commands offer custom options to simplify linking Intel libraries (sometimes considerably), but Spack always uses fairly explicit linkage anyway.

##### [Licensed packages](#id15)[¶](#licensed-packages)

Intel's core software development products that provide compilers, analyzers, and optimizers do require a paid license. In Spack, they are packaged as:

* `intel-parallel-studio` – the entire suite of compilers and libraries,
* `intel` – a subset containing just the compilers and the Intel-MPI runtime [[fn2]](index.html#fn2).

The license is needed at installation time and to compile client packages, but never to merely run any resulting binaries. The license status for a given Spack package is normally specified in the *package code* through directives like `license_required` (see [Licensed software](index.html#license)). For the Intel packages, however, the *class code* provides these directives (at the cost of forfeiting a measure of OOP purity) and takes care of idiosyncrasies like historic version dependence.

The libraries that are provided in the standalone packages are also included in the all-encompassing `intel-parallel-studio`. To complicate matters a bit, that package is sold in 3 "editions", of which only the upper-tier `cluster` edition supports *compiling* MPI applications, and hence only that edition can provide the `mpi` virtual package. (As mentioned [[fn2]](index.html#fn2), all editions provide support for *running* MPI applications.)

The edition forms the leading part of the version number for Spack's `intel*` packages discussed here. This differs from the primarily numeric version numbers seen with most other Spack packages. For example, we have:

```
$ spack info intel-parallel-studio
...
Preferred version:  professional.2018.3  http:...

Safe versions:      professional.2018.3  http:...
                    ...
                    composer.2018.3      http:...
                    ...
                    cluster.2018.3       http:...
                    ...
...
```

The full studio suite, capable of compiling MPI applications, currently requires about 12 GB of disk space when installed (see section [Install steps for packages with compilers and libraries](#install-steps-for-packages-with-compilers-and-libraries) for detailed instructions). If you need to save disk space or installation time, you could install the `intel` compilers-only subset (0.6 GB) and just the library packages you need, for example `intel-mpi` (0.5 GB) and `intel-mkl` (2.5 GB).

##### [Unrelated packages](#id16)[¶](#unrelated-packages)

The following packages do not use the Intel installer and are not part of the `IntelPackage` class discussed here:

* `intel-gpu-tools` – Test suite and low-level tools for the Linux [Direct Rendering Manager](https://en.wikipedia.org/wiki/Direct_Rendering_Manager)
* `intel-mkl-dnn` – Math Kernel Library for Deep Neural Networks (`CMakePackage`)
* `intel-xed` – X86 machine instructions encoder/decoder
* `intel-tbb` – Standalone version of Intel Threading Building Blocks. Note that a TBB runtime version is included with `intel-mkl`, and development versions are provided by the packages `intel-parallel-studio` (all editions) and its `intel` subset.

##### [Configuring Spack to use Intel licenses](#id17)[¶](#configuring-spack-to-use-intel-licenses)

If you wish to integrate licensed Intel products into Spack as external packages ([route 1](#route-1) above), we assume that their license configuration is in place and working [[fn3]](index.html#fn3). In this case, skip to section [Integration of Intel tools installed external to Spack](#integration-of-intel-tools-installed-external-to-spack).

If you plan to have Spack install licensed products for you ([route 2](#route-2) above), the Intel product installer that Spack will run underneath must have access to a license that is either provided by a *license server* or as a *license file*. The installer may be able to locate a license that is already configured on your system.
If it cannot, you must configure Spack to provide either the server location or the license file.

For authoritative information on Intel licensing, see:

* <https://software.intel.com/en-us/faq/licensing>
* <https://software.intel.com/en-us/articles/how-do-i-manage-my-licenses>

###### [Pointing to an existing license server](#id18)[¶](#pointing-to-an-existing-license-server)

Installing and configuring a license server is outside the scope of Spack. We assume that:

* Your system administrator has a license server running.
* The license server offers valid licenses for the Intel packages of interest.
* You can access these licenses under the user id running Spack.

Be aware of the difference between (a) installing and configuring a license server, and (b) configuring client software to *use* a server's so-called floating licenses. We are concerned here with (b) only. The process of obtaining a license from a server for temporary use is called "checking out a license". For that, a client application such as the Intel package installer or a compiler needs to know the host name and port number of one or more license servers that it may query [[fn4]](index.html#fn4).

Follow one of three methods to [point client software to a floating license server](https://software.intel.com/en-us/articles/licensing-setting-up-the-client-floating-license). Ideally, your license administrator will already have implemented one that can be used unchanged in Spack: Look for the environment variable `INTEL_LICENSE_FILE` or for files `/opt/intel/licenses/*.lic` that contain:

```
SERVER hostname hostid_or_ANY portnum
USE_SERVER
```

The relevant tokens, among possibly others, are the `USE_SERVER` line, intended specifically for clients, and one or more `SERVER` lines above it which give the network address.
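As an aside, extracting the server addresses from such a client license file is straightforward. This toy parser (not part of Spack or of any FLEXlm tooling, shown only to highlight which fields matter) reads the format sketched above:

```python
def find_license_servers(lic_text):
    """Return (hostname, port) pairs from the SERVER lines of a
    FLEXlm-style client license file in the format shown above."""
    servers = []
    for line in lic_text.splitlines():
        fields = line.split()
        # Layout: SERVER hostname hostid_or_ANY portnum
        if len(fields) >= 4 and fields[0] == 'SERVER':
            servers.append((fields[1], fields[3]))
    return servers
```

Only the host name (second field) and port number (fourth field) are needed by clients; the host id field is used by the server itself.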
If you cannot find pre-existing `/opt/intel/licenses/*.lic` files and the `INTEL_LICENSE_FILE` environment variable is not set (even after you loaded any relevant modulefiles), ask your license administrator for the server address(es) and place them in a "global" license file within your Spack directory tree ([as shown below](#spack-managed-file)).

###### [Installing a standalone license file](#id19)[¶](#installing-a-standalone-license-file)

If you purchased a user-specific license, follow [Intel's instructions](https://software.intel.com/en-us/faq/licensing#license-management) to "activate" it for your serial number, then download the resulting license file. If needed, [request to have the file re-sent](https://software.intel.com/en-us/articles/resend-license-file) to you.

Intel's license files are text files that contain tokens in the proprietary "FLEXlm" format and whose name ends in `.lic`. Intel installers and compilers look for license files in several locations when they run. Place your license file by one of the following means, in order of decreasing preference:

* Default directory

  Install your license file in the directory `/opt/intel/licenses/` if you have write permission to it. This directory is inspected by all Intel tools and is therefore preferred, as no further configuration will be needed. Create the directory if it does not yet exist. For the file name, either keep the downloaded name or use another suitably plain yet descriptive name that ends in `.lic`. Adjust file permissions for access by licensed users.

* Directory given in environment variable

  If you cannot use the default directory, but your system already has set the environment variable `INTEL_LICENSE_FILE` independent from Spack [[fn5]](index.html#fn5), then, if you have the necessary write permissions, place your license file in one of the directories mentioned in this environment variable. Adjust file permissions to match licensed users.
Tip

If your system has not yet set and used the environment variable `INTEL_LICENSE_FILE`, you could start using it with the `spack install` stage of licensed tools and subsequent client packages. You would, however, be in a bind to always set that variable in the same manner, across updates and re-installations, and perhaps accommodate additions to it. As this may be difficult in the long run, we recommend that you do *not* attempt to start using the variable solely for Spack.

* Spack-managed file

  The first time Spack encounters an Intel package that requires a license, it will initialize a Spack-global Intel-specific license file for you, as a template with instructional comments, and bring up an editor [[fn6]](index.html#fn6). Spack will do this *even if you have a working license elsewhere* on the system.

  + To proceed with an externally configured license, leave the newly templated file as is (containing comments only) and close the editor. You do not need to touch the file again.
  + To configure your own standalone license, copy the contents of your downloaded license file into the opened file, save it, and close the editor.
  + To use a license server (i.e., a floating network license) that is not already configured elsewhere on the system, supply your license server address(es) in the form of `SERVER` and `USE_SERVER` lines at the *beginning of the file* [[fn7]](index.html#fn7), in the format shown in section [Pointing to an existing license server](#pointing-to-an-existing-license-server). Save the file and close the editor.

  To revisit and manually edit this file, such as prior to a subsequent installation attempt, find it at `$SPACK_ROOT/etc/spack/licenses/intel/intel.lic`. Spack will place symbolic links to this file in each directory where licensed Intel binaries were installed. If you kept the template unchanged, Intel tools will simply ignore it.
#### [Integration of Intel tools installed *external* to Spack](#id20)[¶](#integration-of-intel-tools-installed-external-to-spack) This section discusses [route 1](#route-1) from the introduction. A site that already uses Intel tools, especially licensed ones, will likely have some versions already installed on the system, especially at a time when Spack is just being introduced. It will be useful to make such previously installed tools available for use by Spack as they are. How to do this varies depending on the type of the tools: ##### [Integrating external compilers](#id21)[¶](#integrating-external-compilers) For Spack to use external Intel compilers, you must tell it both *where* to find them and *when* to use them. The present section documents the “where” aspect, involving `compilers.yaml` and, in most cases, long absolute paths. The “when” aspect actually relates to [route 3](#route-3) and requires explicitly stating the compiler as a spec component (in the form `foo %intel` or `foo %intel@compilerversion`) when installing client packages or altering Spack’s compiler default in `packages.yaml`. See section [Selecting Intel compilers](#selecting-intel-compilers) for details. To integrate a new set of externally installed Intel compilers into Spack follow section [Compiler configuration](index.html#compiler-config). Briefly, prepare your shell environment like you would if you were to use these compilers normally, i.e., typically by a `module load ...` or a shell `source ...` command, then use `spack compiler find` to make Spack aware of these compilers. This will create a new entry in a suitably scoped and possibly new `compilers.yaml` file. You could certainly create such a compiler entry manually, but this is error-prone due to the indentation and different data types involved. The Intel compilers need and use the system’s native GCC compiler (`gcc` on most systems, `clang` on macOS) to provide certain functionality, notably to support C++. 
To provide a different GCC compiler for the Intel tools, or more generally set persistent flags for all invocations of the Intel compilers, locate the `compilers.yaml` entry that defines your Intel compiler, and, using a text editor, change one or both of the following: 1. At the `modules:` tag, add a `gcc` module to the list. 2. At the `flags:` tag, add `cflags:`, `cxxflags:`, and `fflags:` key-value entries. Consult the examples under [Compiler configuration](index.html#compiler-config) and [Vendor-Specific Compiler Configuration](index.html#vendor-specific-compiler-configuration) in the Spack documentation. When done, validate your compiler definition by running `spack compiler info intel@compilerversion` (replacing `compilerversion` by the version that you defined). Be aware that both the GCC integration and persistent compiler flags can also be affected by an advanced third method: 3. A modulefile that provides the Intel compilers for you could, for the benefit of users outside of Spack, implicitly integrate a specific `gcc` version via compiler flag environment variables or (hopefully not) via a sneaky extra `PATH` addition. Next, visit section [Selecting Intel Compilers](#selecting-intel-compilers) to learn how to tell Spack to use the newly configured compilers. ##### [Integrating external libraries](#id22)[¶](#integrating-external-libraries) Configure external library-type packages (as opposed to compilers) in the files `$SPACK_ROOT/etc/spack/packages.yaml` or `~/.spack/packages.yaml`, following the Spack documentation under [External Packages](index.html#sec-external-packages). Similar to `compilers.yaml`, the `packages.yaml` files define a package external to Spack in terms of a Spack spec and resolve each such spec via either the `paths` or `modules` tokens to a specific pre-installed package version on the system. 
Since Intel tools generally need environment variables to interoperate, which cannot be conveyed in a mere `paths` specification, the `modules` token will be more sensible to use. It resolves the Spack-side spec to a modulefile generated and managed outside of Spack's purview, which Spack will load internally and transiently when the corresponding spec is called upon to compile client packages.

Unlike for compilers, where `spack compiler find [spec]` generates an entry in an existing or new `compilers.yaml` file, Spack does not offer a command to generate an entirely new `packages.yaml` entry. You must create new entries yourself in a text editor, though the command `spack config [--scope=...] edit packages` can help with selecting the proper file. See section [Configuration Scopes](index.html#configuration-scopes) for an explanation of the different files, and section [Build customization](index.html#build-settings) for specifics and examples for `packages.yaml` files.

The following example integrates packages embodied by hypothetical external modulefiles `intel-mkl/18/...` into Spack as packages `intel-mkl@...`:

```
$ spack config edit packages
```

Make sure the file begins with:

```
packages:
```

Adapt the following example. Be sure to maintain the indentation:

```
  # other content ...
  intel-mkl:
    modules:
      intel-mkl@2018.2.199 arch=linux-centos6-x86_64: intel-mkl/18/18.0.2
      intel-mkl@2018.3.222 arch=linux-centos6-x86_64: intel-mkl/18/18.0.3
```

The version numbers for the `intel-mkl` specs defined here correspond to file and directory names that Intel uses for its products, because they were adopted and declared as such within Spack's package repository. You can inspect the versions known to your current Spack installation by:

```
$ spack info intel-mkl
```

Using the same version numbers for external packages as for packages known internally is useful for clarity, but not strictly necessary.
Moreover, with a `packages.yaml` entry, you can go beyond internally known versions.

Note that the Spack spec in the example does not contain a compiler specification. This is intentional, as the Intel library packages can be used unmodified with different compilers.

A slightly more advanced example illustrates how to provide [variants](index.html#basic-variants) and how to use the `buildable: False` directive to prevent Spack from installing other versions or variants of the named package through its normal internal mechanism.

```
packages:
  intel-parallel-studio:
    modules:
      intel-parallel-studio@cluster.2018.2.199 +mkl+mpi+ipp+tbb+daal arch=linux-centos6-x86_64: intel/18/18.0.2
      intel-parallel-studio@cluster.2018.3.222 +mkl+mpi+ipp+tbb+daal arch=linux-centos6-x86_64: intel/18/18.0.3
    buildable: False
```

One additional example illustrates the use of `paths:` instead of `modules:`, useful when external modulefiles are not available or not suitable:

```
packages:
  intel-parallel-studio:
    paths:
      intel-parallel-studio@cluster.2018.2.199 +mkl+mpi+ipp+tbb+daal: /opt/intel
      intel-parallel-studio@cluster.2018.3.222 +mkl+mpi+ipp+tbb+daal: /opt/intel
    buildable: False
```

Note that for the Intel packages discussed here, the directory values in the `paths:` entries must be the high-level and typically version-less "installation directory" that has been used by Intel's product installer. Such a directory will typically accumulate various product versions. Amongst them, Spack will select the correct version-specific product directory based on the `@version` spec component that each path is being defined for. For further background and details, see [External Packages](index.html#sec-external-packages).

#### [Installing Intel tools *within* Spack](#id23)[¶](#installing-intel-tools-within-spack)

This section discusses [route 2](#route-2) from the introduction.
When a system does not yet have Intel tools installed already, or the installed versions are undesirable, Spack can install these tools like any regular Spack package for you and, with appropriate pre- and post-install configuration, use its compilers and/or libraries to install client packages.

##### [Install steps for packages with compilers and libraries](#id24)[¶](#install-steps-for-packages-with-compilers-and-libraries)

The packages `intel-parallel-studio` and `intel` (which is a subset of the former) are many-in-one products that contain both compilers and a set of library packages whose scope depends on the edition. Because they are general products geared towards shell environments, it can be somewhat involved to integrate these packages at their full extent into Spack.

Note: To install library-only packages like `intel-mkl`, `intel-mpi`, and `intel-daal`, follow [the next section](#intel-install-libs) instead.

1. Review the section [Configuring Spack to use Intel licenses](#configuring-spack-to-use-intel-licenses).

2. To install a version of `intel-parallel-studio` that provides Intel compilers at a version that you have *not yet declared in Spack*, the following preparatory steps are recommended:

   1. Determine the compiler spec that the new `intel-parallel-studio` package will provide, as follows: from the package version, combine the last two digits of the version year, a literal "0" (zero), and the version component that immediately follows the year.

      | Package version | Compiler spec provided |
      | --- | --- |
      | `intel-parallel-studio@edition.yyyy.u` | `intel@yy.0.u` |

      Example: The package `intel-parallel-studio@cluster.2018.3` will provide the compiler with spec `intel@18.0.3`.

   2. Add a new compiler section with the newly anticipated version at the end of a `compilers.yaml` file in a suitable scope.
For example, run:

```
$ spack config --scope=user/linux edit compilers
```

and append a stub entry:

```
- compiler:
    target: x86_64
    operating_system: centos6
    modules: []
    spec: intel@18.0.3
    paths:
      cc: stub
      cxx: stub
      f77: stub
      fc: stub
```

Replace `18.0.3` with the version that you determined in the preceding step. The contents under `paths:` do not matter yet.

You are right to ask: "Why on earth is that necessary?" [[fn8]](index.html#fn8). The answer lies in Spack striving for strict compiler consistency. Consider what happens without such a pre-declared compiler stub: Say, you ask Spack to install a particular version `intel-parallel-studio@edition.V`. Spack will apply an unrelated compiler spec to concretize and install your request, resulting in `intel-parallel-studio@edition.V %X`. That compiler `%X` is not going to be the version that this new package itself provides. Rather, it would typically be `%gcc@...` in a default Spack installation or possibly indeed `%intel@...`, but at a version that precedes `V`.

The problem comes to the fore as soon as you try to use any virtual `mkl` or `mpi` packages that you would expect to now be provided by `intel-parallel-studio@edition.V`. Spack will indeed see those virtual packages, but only as being tied to the compiler that the package `intel-parallel-studio@edition.V` was concretized with *at installation*. If you were to install a client package with the new compilers now available to you, you would naturally run `spack install foo +mkl %intel@V`, yet Spack will either complain about `mkl%intel@V` being missing (because it only knows about `mkl%X`) or it will go and attempt to install *another instance* of `intel-parallel-studio@edition.V %intel@V` so as to match the compiler spec `%intel@V` that you gave for your client package `foo`. This will be unexpected and will quickly get annoying because each reinstallation takes up time and extra disk space.
To escape this trap, put the compiler stub declaration shown here in place, then use that pre-declared compiler spec to install the actual package, as shown next. This approach works because during installation only the package's own self-sufficient installer will be used, not any compiler.

3. Verify that the compiler version provided by the new `studio` version would be used as expected if you were to compile a client package:

   ```
   $ spack spec zlib %intel
   ```

   If the version does not match, explicitly state the anticipated compiler version, e.g.:

   ```
   $ spack spec zlib %intel@18.0.3
   ```

   If there are problems, review and correct the compiler's `compilers.yaml` entry, be it still in stub form or already complete (as it would be for a re-installation).

4. Install the new `studio` package using Spack's regular `install` command. It may be wise to provide the anticipated compiler ([see above](#verify-compiler-anticipated)) as an explicit concretization element:

   ```
   $ spack install intel-parallel-studio@cluster.2018.3 %intel@18.0.3
   ```

5. Follow the same steps as under [Integrating external compilers](#integrating-external-compilers) to tell Spack the minutiae for actually using those compilers with client packages. If you placed a stub entry in a `compilers.yaml` file, now is the time to edit it and fill in the particulars.

   * Under `paths:`, give the full paths to the actual compiler binaries (`icc`, `ifort`, etc.) located within the Spack installation tree, in all their unsightly length [[fn9]](index.html#fn9). To determine the full path to the C compiler, adapt and run:

     ```
     $ find `spack location -i intel-parallel-studio@cluster.2018.3` \
         -name icc -type f -ls
     ```

     If you get hits for both `intel64` and `ia32`, you almost certainly will want to use the `intel64` variant. The `icpc` and `ifort` compilers will be located in the same directory as `icc`.
* Use the `modules:` and/or `cflags:` tokens to specify a suitable accompanying `gcc` version to help pacify picky client packages that ask for C++ standards more recent than supported by your system-provided `gcc` and its `libstdc++.so`.
* To set the Intel compilers for default use in Spack, instead of the usual `%gcc`, follow section [Selecting Intel compilers](#selecting-intel-compilers).

Tip

Compiler packages like `intel-parallel-studio` can easily be above 10 GB in size, which can tax the disk space available for temporary files on small, busy, or restricted systems (like virtual machines). The Intel installer will stop and report insufficient space as:

```
==> './install.sh' '--silent' 'silent.cfg'
...
Missing critical prerequisite -- Not enough disk space
```

As a first remedy, clean Spack’s existing staging area:

```
$ spack clean --stage
```

then retry installing the large package. Spack normally cleans staging directories, but certain failures may prevent it from doing so. If the error persists, tell Spack to use an alternative location for temporary files:

1. Run `df -h` to identify an alternative location on your system.
2. Tell Spack to use that location for staging. Do **one** of the following:
   * Run Spack with the environment variable `TMPDIR` altered for just a single command. For example, to use your `$HOME` directory:

     ```
     $ TMPDIR="$HOME/spack-stage" spack install ....
     ```

     This example uses Bourne shell syntax. Adapt for other shells as needed.
   * Alternatively, customize Spack’s `build_stage` [configuration setting](index.html#config-overrides):

     ```
     $ spack config edit config
     ```

     Append:

     ```
     config:
       build_stage:
       - /home/$user/spack-stage
     ```

     Do not duplicate the `config:` line if it already is present. Adapt the location, which here is the same as in the preceding example.
3. Retry installing the large package.
##### [Install steps for library-only packages](#id25)[¶](#install-steps-for-library-only-packages) To install library-only packages like `intel-mkl`, `intel-mpi`, and `intel-daal` follow the steps given here. For packages that contain a compiler, follow [the previous section](#intel-install-studio) instead. 1. For pre-2017 product releases, review the section [Configuring Spack to use Intel licenses](#configuring-spack-to-use-intel-licenses). 2. Inspect the package spec. Specify an explicit compiler if necessary, e.g.: ``` $ spack spec intel-mpi@2018.3.199 $ spack spec intel-mpi@2018.3.199 %intel ``` Check that the package will use the compiler flavor and version that you expect. 3. Install the package normally within Spack. Use the same spec as in the previous command, i.e., as general or as specific as needed: ``` $ spack install intel-mpi@2018.3.199 $ spack install intel-mpi@2018.3.199 %intel@18 ``` 4. To prepare the new packages for use with client packages, follow [Selecting libraries to satisfy virtual packages](#selecting-libraries-to-satisfy-virtual-packages). ##### [Debug notes](#id26)[¶](#debug-notes) * You can trigger a wall of additional diagnostics using Spack options, e.g.: ``` $ spack --debug -v install intel-mpi ``` The `--debug` option can also be useful while installing client packages [(see below)](#using-intel-tools-in-spack-to-install-client-packages) to confirm the integration of the Intel tools in Spack, notably MKL and MPI. * The `.spack/` subdirectory of an installed `IntelPackage` will contain, besides Spack’s usual archival items, a copy of the `silent.cfg` file that was passed to the Intel installer: ``` $ grep COMPONENTS ...intel-mpi...<hash>/.spack/silent.cfg COMPONENTS=ALL ``` * If an installation error occurs, Spack will normally clean up and remove a partially installed target directory. 
You can direct Spack to keep it using `--keep-prefix`, e.g.:

```
$ spack install --keep-prefix intel-mpi
```

You must, however, *remove such partial installations* prior to subsequent installation attempts. Otherwise, the Intel installer will behave incorrectly.

#### [Using Intel tools in Spack to install client packages](#id27)[¶](#using-intel-tools-in-spack-to-install-client-packages)

Finally, this section pertains to [route 3](#route-3) from the introduction. Once Intel tools are installed within Spack as external or internal packages, they can be used as intended for installing client packages.

##### [Selecting Intel compilers](#id28)[¶](#selecting-intel-compilers)

Select Intel compilers to compile client packages, like any compiler in Spack, by one of the following means:

* Request the Intel compilers explicitly in the client spec, e.g.:

```
$ spack install libxc@3.0.0%intel
```

* Alternatively, request Intel compilers implicitly by concretization preferences. Configure the order of compilers in the appropriate `packages.yaml` file, under either an `all:` or client-package-specific entry, in a `compiler:` list. Consult the Spack documentation for [Configuring Package Preferences](index.html#configs-tutorial-package-prefs) and [Concretization Preferences](index.html#concretization-preferences).

Example: `etc/spack/packages.yaml` might simply contain:

```
packages:
  all:
    compiler: [ intel, gcc, ]
```

To be more specific, you can state partial or full compiler version numbers, for example:

```
packages:
  all:
    compiler: [ intel@18, intel@17, gcc@4.4.7, gcc@4.9.3, gcc@7.3.0, ]
```
The relevant virtual packages for Intel are `blas`, `lapack`, `scalapack`, and `mpi`. In both integration routes, Intel packages can have optional [variants](index.html#basic-variants) which alter the list of virtual packages they can satisfy. For Spack-external packages, the active variants are a combination of the defaults declared in Spack’s package repository and the spec under which the package is declared in `packages.yaml`. Needless to say, those should match the components that are actually present in the external product installation. Likewise, for Spack-internal packages, the active variants are determined, persistently at installation time, from the defaults in the repository and the spec selected to be installed.

To have Intel packages satisfy virtual package requests for all or selected client packages, edit the `packages.yaml` file. Customize, either in the `all:` or a more specific entry, a `providers:` dictionary whose keys are the virtual packages and whose values are the Spack specs that satisfy the virtual package, in order of decreasing preference. To learn more about the `providers:` settings, see the Spack tutorial for [Configuring Package Preferences](index.html#configs-tutorial-package-prefs) and the section [Concretization Preferences](index.html#concretization-preferences).

Example: The following fairly minimal example for `packages.yaml` shows how to exclusively use the standalone `intel-mkl` package for all the linear algebra virtual packages in Spack, and `intel-mpi` as the preferred MPI implementation. Other providers can still be chosen on a per-package basis.

```
packages:
  all:
    providers:
      mpi: [intel-mpi]
      blas: [intel-mkl]
      lapack: [intel-mkl]
      scalapack: [intel-mkl]
```

If you have access to the `intel-parallel-studio@cluster` edition, you can use instead:

```
all:
  providers:
    mpi: [intel-parallel-studio+mpi]      # Note: +mpi vs. +mkl
    blas: [intel-parallel-studio+mkl]
    lapack: [intel-parallel-studio+mkl]
    scalapack: [intel-parallel-studio+mkl]
```

If you installed `intel-parallel-studio` within Spack (“[route 2](#route-2)”), make sure you followed the [special installation step](#intel-compiler-anticipation) to ensure that its virtual packages match the compilers it provides.

##### [Using Intel tools as explicit dependency](#id30)[¶](#using-intel-tools-as-explicit-dependency)

With the proper installation as detailed above, no special steps should be required when a client package specifically (and thus deliberately) requests an Intel package as dependency, this being one of the target use cases for Spack.

##### [Tips for configuring client packages to use MKL](#id31)[¶](#tips-for-configuring-client-packages-to-use-mkl)

The Math Kernel Library (MKL) is provided by several Intel packages, currently `intel-parallel-studio` when variant `+mkl` is active (it is by default) and the standalone `intel-mkl`. Because of these different provider packages, a *virtual* `mkl` package is declared in Spack.

* To use MKL-specific APIs in a client package: Declare a dependency on `mkl`, rather than a specific provider like `intel-mkl`. Declare the dependency either absolutely or conditionally based on variants that your package might have declared:

```
# Examples for absolute and conditional dependencies:
depends_on('mkl')
depends_on('mkl', when='+mkl')
depends_on('mkl', when='fftw=mkl')
```

The `MKLROOT` environment variable (part of the documented API) will be set during all stages of client package installation, and is available to both the Spack packaging code and the client code.

* To use MKL as provider for BLAS, LAPACK, or ScaLAPACK: The packages that provide `mkl` also provide the narrower virtual `blas`, `lapack`, and `scalapack` packages. See the relevant [Packaging Guide section](index.html#blas-lapack-scalapack) for an introduction.
To portably use these virtual packages, construct preprocessor and linker option strings in your package configuration code using the package functions `.headers` and `.libs` in conjunction with utility functions from the following classes:

+ [`llnl.util.filesystem.FileList`](index.html#llnl.util.filesystem.FileList),
+ [`llnl.util.filesystem.HeaderList`](index.html#llnl.util.filesystem.HeaderList),
+ [`llnl.util.filesystem.LibraryList`](index.html#llnl.util.filesystem.LibraryList).

Tip

*Do not* use constructs like `.prefix.include` or `.prefix.lib` with Intel or any other implementation of `blas`, `lapack`, and `scalapack`.

For example, for an [AutotoolsPackage](index.html#autotoolspackage) use `.libs.ld_flags` to transform the library file list into linker options passed to `./configure`:

```
def configure_args(self):
    args = []
    ...
    args.append('--with-blas=%s' % self.spec['blas'].libs.ld_flags)
    args.append('--with-lapack=%s' % self.spec['lapack'].libs.ld_flags)
    ...
```

Tip

Even though `.ld_flags` will return a string of multiple words, *do not* use quotes for options like `--with-blas=...` because Spack passes them to `./configure` without invoking a shell.

Likewise, in a [MakefilePackage](index.html#makefilepackage) or similar package that does not use AutoTools, you may need to provide include and link options for use on command lines or in environment variables. For example, to generate an option string of the form `-I<dir>`, use:

```
self.spec['blas'].headers.include_flags
```

and to generate linker options (`-L<dir> -llibname ...`), use the same as above,

```
self.spec['blas'].libs.ld_flags
```

See [MakefilePackage](index.html#makefilepackage) and more generally the [Packaging Guide](index.html#blas-lapack-scalapack) for background and further examples.

#### [Footnotes](#id32)[¶](#footnotes)

| [[fn1]](#id2) | Strictly speaking, versions from `2017.2` onward. 
| | [fn2] | *([1](#id3), [2](#id4))* The package `intel` intentionally does not have a `+mpi` variant since it is meant to be small. The native installer will always add MPI *runtime* components because it follows defaults defined in the download package, even when `intel-parallel-studio ~mpi` has been requested. For `intel-parallel-studio +mpi`, the class function `IntelPackage.pset_components` will include `"intel-mpi intel-imb"` in a list of component patterns passed to the Intel installer. The installer will extend each pattern word with an implied glob-like `*` to resolve it to package names that are *actually present in the product BOM*. As a side effect, this pattern approach accommodates occasional package name changes, e.g., capturing both `intel-mpirt` and `intel-mpi-rt`. | | [[fn3]](#id5) | How could the external installation have succeeded otherwise? | | [[fn4]](#id6) | According to Intel’s documentation, there is supposedly a way to install a product using a network license even [when a FLEXlm server is not running](https://software.intel.com/en-us/articles/licensing-setting-up-the-client-floating-license): Specify the license in the form `port@serverhost` in the `INTEL_LICENSE_FILE` environment variable. All other means of specifying a network license require that the license server be up. | | [[fn5]](#id7) | Despite the name, `INTEL_LICENSE_FILE` can hold several and diverse entries. They can be either directories (presumed to contain `*.lic` files), file names, or network locations in the form `port@host` (on Linux and Mac), with all items separated by “:” (on Linux and Mac). | | [[fn6]](#id8) | Should said editor turn out to be `vi`, you better be in a position to know how to use it. | | [[fn7]](#id9) | Comment lines in FLEXlm files, indicated by `#` as the first non-whitespace character on the line, are generally allowed anywhere in the file. 
There [have been reports](https://github.com/spack/spack/issues/6534), however, that as of 2018, `SERVER` and `USE_SERVER` lines must precede any comment lines. | | [[fn8]](#id10) | Spack’s close coupling of installed packages to compilers, which both necessitates the detour for installing `intel-parallel-studio`, and largely limits any of its provided virtual packages to a single compiler, heavily favors [recommending to install Intel Parallel Studio outside of Spack](#integrate-external-intel) and declare it for Spack in `packages.yaml` by a [compiler-less spec](#compiler-neutral-package). | | [[fn9]](#id11) | With some effort, you can convince Spack to use shorter paths.

Warning

Altering the naming scheme means that Spack will lose track of all packages it has installed for you so far. That said, the time is right for this kind of customization when you are defining a new set of compilers.

The relevant tunables are:

1. Set the `install_tree` location in `config.yaml` ([see doc](index.html#config-yaml)).
2. Set the hash length in `install-path-scheme`, also in `config.yaml` ([q.v.](index.html#config-yaml)).
3. You will want to set the *same* hash length for [tcl module files](index.html#modules-naming-scheme) if you have Spack produce them for you, under `naming_scheme` in `modules.yaml`. Other module dialects cannot be altered in this manner. |

### Custom Build Systems[¶](#custom-build-systems)

While the build systems listed above should meet your needs for the vast majority of packages, some packages provide custom build scripts. This guide is intended for the following use cases:

* Packaging software with its own custom build system
* Adding support for new build systems

If you want to add support for a new build system, a good place to start is to look at the definitions of other build systems. This guide focuses mostly on how Spack’s build systems work. 
In this guide, we will be using the [perl](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/perl/package.py) and [cmake](https://github.com/spack/spack/blob/develop/var/spack/repos/builtin/packages/cmake/package.py) packages as examples. `perl`’s build system is a hand-written `Configure` shell script, while `cmake` bootstraps itself during installation. Both of these packages require custom build systems. #### Base class[¶](#base-class) If your package does not belong to any of the aforementioned build systems that Spack already supports, you should inherit from the `Package` base class. `Package` is a simple base class with a single phase: `install`. If your package is simple, you may be able to simply write an `install` method that gets the job done. However, if your package is more complex and installation involves multiple steps, you should add separate phases as mentioned in the next section. If you are creating a new build system base class, you should inherit from `PackageBase`. This is the superclass for all build systems in Spack. #### Phases[¶](#phases) The most important concept in Spack’s build system support is the idea of phases. Each build system defines a set of phases that are necessary to install the package. They usually follow some sort of “configure”, “build”, “install” guideline, but any of those phases may be missing or combined with another phase. If you look at the `perl` package, you’ll see: ``` phases = ['configure', 'build', 'install'] ``` Similarly, `cmake` defines: ``` phases = ['bootstrap', 'build', 'install'] ``` If we look at the `cmake` example, this tells Spack’s `PackageBase` class to run the `bootstrap`, `build`, and `install` functions in that order. It is now up to you to define these methods. 
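Conceptually, the phase machinery just looks up each named phase as a method and calls the methods in order. The following toy sketch illustrates that idea in plain Python; the class and method names here are invented stand-ins, not Spack's actual `PackageBase` internals:

```python
class ToyPackageBase:
    """Toy phase runner: call each phase listed in `phases`, in order."""
    phases = []

    def do_install(self, spec, prefix):
        for phase in self.phases:
            getattr(self, phase)(spec, prefix)


class ToyCMake(ToyPackageBase):
    phases = ['bootstrap', 'build', 'install']

    def __init__(self):
        self.log = []   # record the order in which phases actually ran

    def bootstrap(self, spec, prefix):
        self.log.append('bootstrap')

    def build(self, spec, prefix):
        self.log.append('build')

    def install(self, spec, prefix):
        self.log.append('install')


pkg = ToyCMake()
pkg.do_install(spec=None, prefix='/tmp/prefix')
print(pkg.log)  # ['bootstrap', 'build', 'install']
```

Roughly speaking, Spack's real machinery additionally checks that every declared phase method exists and wraps each phase so that registered callbacks can fire around it, as the next sections describe.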
#### Phase and phase_args functions[¶](#phase-and-phase-args-functions)

If we look at `perl`, we see that it defines a `configure` method:

```
def configure(self, spec, prefix):
    configure = Executable('./Configure')
    configure(*self.configure_args())
```

There is also a corresponding `configure_args` function that handles all of the arguments to pass to `Configure`, just like in `AutotoolsPackage`. Comparatively, the `build` and `install` phases are pretty simple:

```
def build(self, spec, prefix):
    make()

def install(self, spec, prefix):
    make('install')
```

The `cmake` package looks very similar, but with a `bootstrap` function instead of `configure`:

```
def bootstrap(self, spec, prefix):
    bootstrap = Executable('./bootstrap')
    bootstrap(*self.bootstrap_args())

def build(self, spec, prefix):
    make()

def install(self, spec, prefix):
    make('install')
```

Again, there is a `bootstrap_args` function that determines the correct bootstrap flags to use.

#### run_before/run_after[¶](#run-before-run-after)

Occasionally, you may want to run extra steps either before or after a given phase. This applies not just to custom build systems, but to existing build systems as well. You may need to patch a file that is generated by configure, or install extra files in addition to what `make install` copies to the installation prefix. This is where `@run_before` and `@run_after` come in. These Python decorators allow you to write functions that are called before or after a particular phase. For example, in `perl`, we see:

```
@run_after('install')
def install_cpanm(self):
    spec = self.spec
    if '+cpanm' in spec:
        with working_dir(join_path('cpanm', 'cpanm')):
            perl = spec['perl'].command
            perl('Makefile.PL')
            make()
            make('install')
```

This extra step automatically installs `cpanm` in addition to the base Perl installation.
#### on_package_attributes[¶](#on-package-attributes)

The `run_before`/`run_after` logic discussed above becomes particularly powerful when combined with the `@on_package_attributes` decorator. This decorator allows you to conditionally run certain functions depending on the attributes of that package. The most common example is conditional testing. Many unit tests are prone to failure, even when there is nothing wrong with the installation. Unfortunately, non-portable unit tests and tests that are “supposed to fail” are more common than we would like. Instead of always running unit tests on installation, Spack lets users conditionally run tests with the `--test=root` flag. If we wanted to define a function that would conditionally run if and only if this flag is set, we would use the following line:

```
@on_package_attributes(run_tests=True)
```

#### Testing[¶](#testing)

Let’s put everything together and add unit tests to our package. In the `perl` package, we can see:

```
@run_after('build')
@on_package_attributes(run_tests=True)
def test(self):
    make('test')
```

As you can guess, this runs `make test` *after* building the package, if and only if testing is requested. Again, this is not specific to custom build systems; it can be added to existing build systems as well. Ideally, every package in Spack will have some sort of test to ensure that it was built correctly. It is up to the package authors to make sure this happens. If you are adding a package for some software and the developers list commands to test the installation, please add these tests to your `package.py`.

Warning

The order of decorators matters. The following ordering:

```
@run_after('install')
@on_package_attributes(run_tests=True)
```

works as expected. However, if you reverse the ordering:

```
@on_package_attributes(run_tests=True)
@run_after('install')
```

the tests will always be run regardless of whether or not `--test=root` is requested.
See <https://github.com/spack/spack/issues/3833> for more information.

For reference, the [`Build System API docs`](index.html#module-spack.build_systems) provide a list of build systems and methods/attributes that can be overridden. If you are curious about the implementation of a particular build system, you can view the source code by running:

```
$ spack edit --build-system autotools
```

This will open up the `AutotoolsPackage` definition in your favorite editor. In addition, if you are working with a less common build system like QMake, SCons, or Waf, it may be useful to see examples of other packages. You can quickly find examples by running:

```
$ cd var/spack/repos/builtin/packages
$ grep -l QMakePackage */package.py
```

You can then view these packages with `spack edit`. This guide is intended to supplement the [`Build System API docs`](index.html#module-spack.build_systems) with examples of how to override commonly used methods. It also provides rules of thumb and suggestions for package developers who are unfamiliar with a particular build system.

Developer Guide[¶](#developer-guide)
---

This guide is intended for people who want to work on Spack itself. If you just want to develop packages, see the [Packaging Guide](index.html#packaging-guide). It is assumed that you’ve read the [Basic Usage](index.html#basic-usage) and [Packaging Guide](index.html#packaging-guide) sections, and that you’re familiar with the concepts discussed there. If you’re not, we recommend reading those first.

### Overview[¶](#overview)

Spack is designed with three separate roles in mind:

1. **Users**, who need to install software *without* knowing all the details about how it is built.
2. **Packagers** who know how a particular software package is built and encode this information in package files.
3. **Developers** who work on Spack, add new features, and try to make the jobs of packagers and users easier.
Users could be end users installing software in their home directory, or administrators installing software to a shared directory on a shared machine. Packagers could be administrators who want to automate software builds, or application developers who want to make their software more accessible to users. As you might expect, there are many types of users with different levels of sophistication, and Spack is designed to accommodate both simple and complex use cases for packages. A user who only knows that he needs a certain package should be able to type something simple, like `spack install <package name>`, and get the package that he wants. If a user wants to ask for a specific version, use particular compilers, or build several versions with different configurations, then that should be possible with a minimal amount of additional specification.

This gets us to the two key concepts in Spack’s software design:

1. **Specs**: expressions for describing builds of software, and
2. **Packages**: Python modules that build software according to a spec.

A package is a template for building particular software, and a spec is a descriptor for one or more instances of that template. Users express the configuration they want using a spec, and a package turns the spec into a complete build.

The obvious difficulty with this design is that users under-specify what they want. To build a software package, the package object needs a *complete* specification. In Spack, if a spec describes only one instance of a package, then we say it is **concrete**. If a spec could describe many instances (i.e., it is under-specified in one way or another), then we say it is **abstract**. Spack’s job is to take an *abstract* spec from the user, find a *concrete* spec that satisfies the constraints, and hand the task of building the software off to the package object. The rest of this document describes all the pieces that come together to make that happen.
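That division of labor can be sketched in miniature. The toy `Spec` below calls itself concrete once every field has a definite value, and `concretize` fills the gaps from a default policy; the class, its fields, and the default values are all invented for illustration, and real Spack specs carry far more structure:

```python
class Spec:
    """Toy spec: tracks only a name, a version, and a compiler."""

    def __init__(self, name, version=None, compiler=None):
        self.name = name
        self.version = version
        self.compiler = compiler

    @property
    def concrete(self):
        # Concrete means nothing is left unspecified.
        return None not in (self.version, self.compiler)


def concretize(spec):
    """Fill every unconstrained field from a stand-in default policy."""
    spec.version = spec.version or '1.2.11'
    spec.compiler = spec.compiler or 'gcc@7.3.0'
    return spec


abstract = Spec('zlib')            # user under-specifies: just "zlib"
print(abstract.concrete)           # False
concrete = concretize(abstract)    # Spack's job: pick the rest
print(concrete.version, concrete.compiler, concrete.concrete)
```

The real concretizer must also respect whatever constraints the user *did* give, the defaults and conflicts declared in package files, and site-wide preferences, rather than blindly filling in fixed values.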
### Directory Structure[¶](#directory-structure)

So that you can familiarize yourself with the project, we’ll start with a high level view of Spack’s directory structure:

```
spack/                     <- installation root
   bin/
      spack                <- main spack executable
   etc/
      spack/               <- Spack config files. Can be overridden by files in ~/.spack.
   var/
      spack/               <- build & stage directories
         repos/            <- contains package repositories
            builtin/       <- pkg repository that comes with Spack
               repo.yaml   <- descriptor for the builtin repository
               packages/   <- directories under here contain packages
         cache/            <- saves resources downloaded during installs
   opt/
      spack/               <- packages are installed here
   lib/
      spack/
         docs/             <- source for this documentation
         env/              <- compiler wrappers for build environment
         external/         <- external libs included in Spack distro
         llnl/             <- some general-use libraries
         spack/            <- spack module; contains Python code
            cmd/           <- each file in here is a spack subcommand
            compilers/     <- compiler description files
            test/          <- unit test modules
            util/          <- common code
```

Spack is designed so that it could live within a [standard UNIX directory hierarchy](http://linux.die.net/man/7/hier), so `lib`, `var`, and `opt` all contain a `spack` subdirectory in case Spack is installed alongside other software. Most of the interesting parts of Spack live in `lib/spack`. Spack has *one* directory layout and there is no install process. Most Python programs don’t look like this (they use distutils, `setup.py`, etc.) but we wanted to make Spack *very* easy to use. The simple layout spares users from the need to install Spack into a Python environment. Many users don’t have write access to a Python installation, and installing an entire new instance of Python to bootstrap Spack would be very complicated. Users should not have to install a big, complicated package to use the thing that’s supposed to spare them from the details of big, complicated packages. 
The end result is that Spack works out of the box: clone it and add `bin` to your PATH and you’re ready to go. ### Code Structure[¶](#code-structure) This section gives an overview of the various Python modules in Spack, grouped by functionality. #### Package-related modules[¶](#package-related-modules) [`spack.package`](index.html#module-spack.package) Contains the [`Package`](index.html#spack.package.Package) class, which is the superclass for all packages in Spack. Methods on `Package` implement all phases of the [package lifecycle](index.html#package-lifecycle) and manage the build process. `spack.packages` Contains all of the packages in Spack and methods for managing them. Functions like `packages.get` and `class_name_for_package_name` handle mapping package module names to class names and dynamically instantiating packages by name from module files. `spack.relations` *Relations* are relationships between packages, like `depends_on` and `provides`. See [Dependencies](index.html#dependencies) and [Virtual dependencies](index.html#virtual-dependencies). [`spack.multimethod`](index.html#module-spack.multimethod) Implementation of the [`@when`](index.html#spack.multimethod.when) decorator, which allows [multimethods](index.html#multimethods) in packages. #### Spec-related modules[¶](#spec-related-modules) [`spack.spec`](index.html#module-spack.spec) Contains [`Spec`](index.html#spack.spec.Spec) and `SpecParser`. Also implements most of the logic for normalization and concretization of specs. [`spack.parse`](index.html#module-spack.parse) Contains some base classes for implementing simple recursive descent parsers: [`Parser`](index.html#spack.parse.Parser) and [`Lexer`](index.html#spack.parse.Lexer). Used by `SpecParser`. [`spack.concretize`](index.html#module-spack.concretize) Contains `DefaultConcretizer` implementation, which allows site administrators to change Spack’s [Concretization Policies](index.html#concretization-policies). 
[`spack.version`](index.html#module-spack.version) Implements a simple [`Version`](index.html#spack.version.Version) class with simple comparison semantics. Also implements [`VersionRange`](index.html#spack.version.VersionRange) and [`VersionList`](index.html#spack.version.VersionList). All three are comparable with each other and offer union and intersection operations. Spack uses these classes to compare versions and to manage version constraints on specs. Comparison semantics are similar to the `LooseVersion` class in `distutils` and to the way RPM compares version strings.

[`spack.compilers`](index.html#module-spack.compilers) Submodules contain descriptors for all valid compilers in Spack. This is used by the build system to set up the build environment.

Warning

Not yet implemented. Currently has two compiler descriptions, but compilers aren’t fully integrated with the build process yet.

[`spack.architecture`](index.html#module-spack.architecture) `architecture.sys_type` is used to determine the host architecture while building.

Warning

Not yet implemented. Should eventually have architecture descriptions for cross-compiling.

#### Build environment[¶](#build-environment)

[`spack.stage`](index.html#module-spack.stage) Handles creating temporary directories for builds.

`spack.compilation` This contains utility functions used by the compiler wrapper script, `cc`.

[`spack.directory_layout`](index.html#module-spack.directory_layout) Classes that control the way an installation directory is laid out. Create more implementations of this to change the hierarchy and naming scheme in `$spack_prefix/opt`.

#### Spack Subcommands[¶](#spack-subcommands)

[`spack.cmd`](index.html#module-spack.cmd) Each module in this package implements a Spack subcommand. See [writing commands](#writing-commands) for details.

#### Unit tests[¶](#unit-tests)

[`spack.test`](index.html#module-spack.test) Implements Spack’s test suite. 
Add a module and put its name in the test suite in `__init__.py` to add more unit tests. `spack.test.mock_packages` This is a fake package hierarchy used to mock up packages for Spack’s test suite. #### Other Modules[¶](#other-modules) [`spack.url`](index.html#module-spack.url) URL parsing, for deducing names and versions of packages from tarball URLs. [`spack.error`](index.html#module-spack.error) [`SpackError`](index.html#spack.error.SpackError), the base class for Spack’s exception hierarchy. [`llnl.util.tty`](index.html#module-llnl.util.tty) Basic output functions for all of the messages Spack writes to the terminal. [`llnl.util.tty.color`](index.html#module-llnl.util.tty.color) Implements a color formatting syntax used by `spack.tty`. [`llnl.util`](index.html#module-llnl.util) In this package are a number of utility modules for the rest of Spack. ### Spec objects[¶](#spec-objects) ### Package objects[¶](#package-objects) Most spack commands look something like this: 1. Parse an abstract spec (or specs) from the command line, 2. *Normalize* the spec based on information in package files, 3. *Concretize* the spec according to some customizable policies, 4. Instantiate a package based on the spec, and 5. Call methods (e.g., `install()`) on the package object. The information in Package files is used at all stages in this process. Conceptually, packages are overloaded. They contain: ### Stage objects[¶](#stage-objects) ### Writing commands[¶](#writing-commands) Adding a new command to Spack is easy. Simply add a `<name>.py` file to `lib/spack/spack/cmd/`, where `<name>` is the name of the subcommand. At the bare minimum, two functions are required in this file: #### `setup_parser()`[¶](#setup-parser) Unless your command doesn’t accept any arguments, a `setup_parser()` function is required to define what arguments and flags your command takes. 
See the [Argparse documentation](https://docs.python.org/2.7/library/argparse.html) for more details on how to add arguments. Some commands have a set of subcommands, like `spack compiler find` or `spack module lmod refresh`. You can add subparsers to your parser to handle this. Check out `spack edit --command compiler` for an example of this. A lot of commands take the same arguments and flags. These arguments should be defined in `lib/spack/spack/cmd/common/arguments.py` so that they don’t need to be redefined in multiple commands. #### `<name>()`[¶](#name) In order to run your command, Spack searches for a function with the same name as your command in `<name>.py`. This is the main method for your command, and can call other helper methods to handle common tasks. Before adding a new command, consider whether the functionality you need could be added to an existing command instead. Also remember to add unit tests for your command. If a command is used infrequently, changes elsewhere in Spack can break it unnoticed unless unit tests catch the regression. Whenever you add/remove/rename a command or flags for an existing command, make sure to update Spack’s [Bash tab completion script](https://github.com/adamjstewart/spack/blob/develop/share/spack/spack-completion.bash). ### Unit tests[¶](#id3) ### Unit testing[¶](#unit-testing) ### Developer commands[¶](#developer-commands) #### `spack doc`[¶](#spack-doc) #### `spack test`[¶](#spack-test) #### `spack python`[¶](#spack-python) `spack python` is a command that lets you import and debug things as if you were in a Spack interactive shell. 
Without any arguments, it is similar to a normal interactive Python shell, except you can import spack and any other Spack modules: ``` $ spack python Spack version 0.10.0 Python 2.7.13, Linux x86_64 >>> from spack.version import Version >>> a = Version('1.2.3') >>> b = Version('1_2_3') >>> a == b True >>> c = Version('1.2.3b') >>> c > a True >>> ``` You can also run a single command: ``` $ spack python -c 'import distro; distro.linux_distribution()' ('Fedora', '25', 'Workstation Edition') ``` or a file: ``` $ spack python ~/test_fetching.py ``` just like you would with the normal `python` command. #### `spack url`[¶](#spack-url) A package containing a single URL can be used to download several different versions of the package. If you’ve ever wondered how this works, all of the magic is in [`spack.url`](index.html#module-spack.url). This module contains methods for extracting the name and version of a package from its URL. The name is used by `spack create` to guess the name of the package. By determining the version from the URL, Spack can replace it with other versions to determine where to download them from. The regular expressions in `parse_name_offset` and `parse_version_offset` are used to extract the name and version, but they aren’t perfect. In order to debug Spack’s URL parsing support, the `spack url` command can be used. 
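To make the idea concrete, here is a simplified, self-contained sketch of version extraction from a tarball URL. The helper function is hypothetical (the real logic lives in `spack.url` and tries many patterns and archive suffixes), but the regular expression is one actually reported by `spack url summary`:

```python
import re

# One of the version regexes reported by `spack url summary` (index 0).
VERSION_RE = re.compile(r'^[a-zA-Z+._-]+[._-]v?(\d[\d._-]*)$')


# Hypothetical helper; real Spack tries many regexes and suffixes.
def guess_version(url):
    """Return the version embedded in a tarball URL, or None."""
    stem = url.rsplit('/', 1)[-1]           # e.g. 'ruby-2.2.0.tar.gz'
    for suffix in ('.tar.gz', '.tar.bz2', '.tar.xz', '.tgz', '.zip'):
        if stem.endswith(suffix):
            stem = stem[:-len(suffix)]      # e.g. 'ruby-2.2.0'
            break
    match = VERSION_RE.match(stem)
    return match.group(1) if match else None


print(guess_version('http://cache.ruby-lang.org/pub/ruby/2.2/ruby-2.2.0.tar.gz'))  # 2.2.0
```

Once the version's offset in the URL is known, substituting a different version string yields the download location for other releases, which is exactly what `spack url parse` demonstrates below.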
##### `spack url parse`[¶](#spack-url-parse) If you need to debug a single URL, you can use the following command: ``` $ spack url parse http://cache.ruby-lang.org/pub/ruby/2.2/ruby-2.2.0.tar.gz ==> Parsing URL: http://cache.ruby-lang.org/pub/ruby/2.2/ruby-2.2.0.tar.gz ==> Matched version regex 0: r'^[a-zA-Z+._-]+[._-]v?(\\d[\\d._-]*)$' ==> Matched name regex 8: r'^([A-Za-z\\d+\\._-]+)$' ==> Detected: http://cache.ruby-lang.org/pub/ruby/2.2/ruby-2.2.0.tar.gz --- ~~~~~ name: ruby version: 2.2.0 ==> Substituting version 9.9.9b: http://cache.ruby-lang.org/pub/ruby/2.2/ruby-9.9.9b.tar.gz --- ~~~~~~ ``` You’ll notice that the name and version of this URL are correctly detected, and you can even see which regular expressions it was matched to. However, when it substitutes a new version in, only the filename is updated: the `2.2` directory component, which should become `9.9` for version `9.9.9b`, is left untouched. This particular package may require a `list_url` or `url_for_version` function. This command also accepts a `--spider` flag. If provided, Spack searches for other versions of the package and prints the matching URLs. ##### `spack url list`[¶](#spack-url-list) This command lists every URL in every package in Spack. If given the `--color` and `--extrapolation` flags, it also colors the part of the string that it detected to be the name and version. The `--incorrect-name` and `--incorrect-version` flags can be used to print URLs that were not being parsed correctly. ##### `spack url summary`[¶](#spack-url-summary) This command attempts to parse every URL for every package in Spack and prints a summary of how many of them are being correctly parsed. It also prints a histogram showing which regular expressions are being matched and how frequently: ``` $ spack url summary ==> Generating a summary of URL parsing in Spack... 
Total URLs found: 2752 Names correctly parsed: 2406/2752 (87.43%) Versions correctly parsed: 2558/2752 (92.95%) ==> Statistics on name regular expressions: Index Count Regular Expression 0: 595 r'github\\.com/[^/]+/([^/]+)' 1: 8 r'gitlab[^/]+/api/v4/projects/[^/]+%2F([^/]+)' 2: 3 r'gitlab[^/]+/(?!api/v4/projects)[^/]+/([^/]+)' 3: 15 r'bitbucket\\.org/[^/]+/([^/]+)' 4: 368 r'pypi\\.(?:python\\.org|io)/packages/source/[A-Za-z\\d]/([^/]+)' 6: 5 r'\\?package=([A-Za-z\\d+-]+)' 7: 1 r'([^/]+)/download.php$' 8: 1756 r'^([A-Za-z\\d+\\._-]+)$' ==> Statistics on version regular expressions: Index Count Regular Expression 0: 1939 r'^[a-zA-Z+._-]+[._-]v?(\\d[\\d._-]*)$' 1: 397 r'^v?(\\d[\\d._-]*)$' 2: 23 r'^[a-zA-Z+]*(\\d[\\da-zA-Z]*)$' 3: 10 r'^[a-zA-Z+-]*(\\d[\\da-zA-Z-]*)$' 4: 54 r'^[a-zA-Z+_]*(\\d[\\da-zA-Z_]*)$' 5: 42 r'^[a-zA-Z+.]*(\\d[\\da-zA-Z.]*)$' 6: 150 r'^[a-zA-Z\\d+-]+-v?(\\d[\\da-zA-Z.]*)$' 7: 1 r'^[a-zA-Z\\d+-]+-v?(\\d[\\da-zA-Z_]*)$' 8: 18 r'^[a-zA-Z\\d+_]+_v?(\\d[\\da-zA-Z.]*)$' 9: 1 r'^[a-zA-Z\\d+_]+\\.v?(\\d[\\da-zA-Z.]*)$' 10: 32 r'^(?:[a-zA-Z\\d+-]+-)?v?(\\d[\\da-zA-Z.-]*)$' 11: 3 r'^[a-zA-Z+]+v?(\\d[\\da-zA-Z.-]*)$' 12: 3 r'^[a-zA-Z\\d+_]+-v?(\\d[\\da-zA-Z.]*)$' 13: 12 r'^[a-zA-Z\\d+.]+_v?(\\d[\\da-zA-Z.-]*)$' 14: 1 r'^[a-zA-Z\\d+-]+-v?(\\d[\\da-zA-Z._]*)$' 16: 3 r'^[a-zA-Z+-]+(\\d[\\da-zA-Z._]*)$' 17: 1 r'bzr(\\d[\\da-zA-Z._-]*)$' 18: 2 r'github\\.com/[^/]+/[^/]+/releases/download/[a-zA-Z+._-]*v?(\\d[\\da-zA-Z._-]*)/' 19: 8 r'\\?sha=[a-zA-Z+._-]*v?(\\d[\\da-zA-Z._-]*)$' 20: 1 r'\\?ref=[a-zA-Z+._-]*v?(\\d[\\da-zA-Z._-]*)$' 23: 2 r'\\?package=[a-zA-Z\\d+-]+&get=[a-zA-Z\\d+-]+-v?(\\d[\\da-zA-Z.]*)$' ``` This command is essential for anyone adding or changing the regular expressions that parse names and versions. By running this command before and after the change, you can make sure that your regular expression fixes more packages than it breaks. 
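The before-and-after comparison can be mimicked in a few lines. This mini-harness is hypothetical (the real `spack url summary` walks every package's URLs), and the "improved" pattern is an illustrative variant of regex 0 above, not one Spack ships:

```python
import re

# Sample filename stems; the last one lacks a 'name-' prefix.
SAMPLES = ['ruby-2.2.0', 'cmake-3.18.4', 'gettext-0.21', 'v2.7.1']


def hit_rate(pattern, stems):
    """Count how many stems a candidate version regex parses."""
    regex = re.compile(pattern)
    return sum(1 for s in stems if regex.match(s)), len(stems)


old = r'^[a-zA-Z+._-]+[._-]v?(\d[\d._-]*)$'        # version regex 0 above
new = r'^(?:[a-zA-Z+._-]+[._-])?v?(\d[\d._-]*)$'   # prefix now optional

print(hit_rate(old, SAMPLES))  # (3, 4) -- misses 'v2.7.1'
print(hit_rate(new, SAMPLES))  # (4, 4)
```

Checking a candidate pattern against stems it previously missed, without losing stems it already handled, is the same discipline the summary command enforces at full scale.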
### Profiling[¶](#profiling) Spack has some limited built-in support for profiling, and can report statistics using standard Python timing tools. To use this feature, supply `--profile` to Spack on the command line, before any subcommands. #### `spack --profile`[¶](#spack-profile) `spack --profile` output looks like this: ``` $ spack --profile graph dyninst o dyninst |\ | |\ | | |\ | | | |\ | o | | | intel-tbb | | |/ / | |/| | | o | | cmake | |\ \ \ | o | | | openssl | |\ \ \ \ | | | | o | elfutils | | |_|/| | | |/| | | | | | | | o | gettext | | | |/| | | | | | |\ \ | | | | | |\ \ | | | | | | |\ \ | | | | | | o | | libxml2 | | |_|_|_|/| | | | |/| | | |/| | | | | | | |/| | | | | | | | | | | | o boost ... ``` The bottom of the output lists the most time-consuming functions, slowest first. The profiling support comes from Python’s built-in tool, [cProfile](https://docs.python.org/2/library/profile.html#module-cProfile). Docker for Developers[¶](#docker-for-developers) --- This guide is intended for people who want to use our prepared docker environments to develop Spack itself or to work on Spack packages. It is meant to serve as the companion documentation for the [Packaging Guide](index.html#packaging-guide). ### Overview[¶](#overview) To get started, all you need is the latest version of `docker`. ``` $ cd share/spack/docker $ source config/ubuntu.bash $ ./run-image.sh ``` This command should drop you into an interactive shell where you can run spack within an isolated docker container running ubuntu. The copy of spack being used is tied to the working copy of your cloned git repo, so any changes you make are immediately reflected in the running docker container. Feel free to add or modify packages, or to hack on Spack itself. To work within a container running a different linux distro, source one of the other environment files under `config`. 
``` $ source config/fedora.bash $ ./run-image.sh ``` spack package[¶](#spack-package) --- ### Subpackages[¶](#subpackages) #### spack.build_systems package[¶](#spack-build-systems-package) ##### Submodules[¶](#submodules) ##### spack.build_systems.aspell_dict module[¶](#module-spack.build_systems.aspell_dict) *class* `spack.build_systems.aspell_dict.``AspellDictPackage`(*spec*)[¶](#spack.build_systems.aspell_dict.AspellDictPackage) Bases: [`spack.build_systems.autotools.AutotoolsPackage`](#spack.build_systems.autotools.AutotoolsPackage) Specialized class for building aspell dictionaries. `configure`(*spec*, *prefix*)[¶](#spack.build_systems.aspell_dict.AspellDictPackage.configure) Runs configure with the arguments specified in `configure_args()` and an appropriately set prefix. `patch`()[¶](#spack.build_systems.aspell_dict.AspellDictPackage.patch) `view_destination`(*view*)[¶](#spack.build_systems.aspell_dict.AspellDictPackage.view_destination) The target root directory: each file is added relative to this directory. `view_source`()[¶](#spack.build_systems.aspell_dict.AspellDictPackage.view_source) The source root directory that will be added to the view: files are added such that their path relative to the view destination matches their path relative to the view source. ##### spack.build_systems.autotools module[¶](#module-spack.build_systems.autotools) *class* `spack.build_systems.autotools.``AutotoolsPackage`(*spec*)[¶](#spack.build_systems.autotools.AutotoolsPackage) Bases: [`spack.package.PackageBase`](index.html#spack.package.PackageBase) Specialized class for packages built using GNU Autotools. This class provides four phases that can be overridden: > 1. [`autoreconf()`](#spack.build_systems.autotools.AutotoolsPackage.autoreconf) > 2. [`configure()`](#spack.build_systems.autotools.AutotoolsPackage.configure) > 3. [`build()`](#spack.build_systems.autotools.AutotoolsPackage.build) > 4. 
[`install()`](#spack.build_systems.autotools.AutotoolsPackage.install) They all have sensible defaults and for many packages the only thing necessary will be to override the helper method [`configure_args()`](#spack.build_systems.autotools.AutotoolsPackage.configure_args). For a finer tuning you may also override: > | **Method** | **Purpose** | > | --- | --- | > | [`build_targets`](#spack.build_systems.autotools.AutotoolsPackage.build_targets) | Specify `make` > targets for the > build phase | > | [`install_targets`](#spack.build_systems.autotools.AutotoolsPackage.install_targets) | Specify `make` > targets for the > install phase | > | [`check()`](#spack.build_systems.autotools.AutotoolsPackage.check) | Run build time > tests if required | `archive_files`[¶](#spack.build_systems.autotools.AutotoolsPackage.archive_files) Files to archive for packages based on autotools `autoreconf`(*spec*, *prefix*)[¶](#spack.build_systems.autotools.AutotoolsPackage.autoreconf) Not needed usually, configure should be already there `autoreconf_extra_args` *= []*[¶](#spack.build_systems.autotools.AutotoolsPackage.autoreconf_extra_args) Options to be passed to autoreconf when using the default implementation `build`(*spec*, *prefix*)[¶](#spack.build_systems.autotools.AutotoolsPackage.build) Makes the build targets specified by [`build_targets`](#spack.build_systems.autotools.AutotoolsPackage.build_targets) `build_directory`[¶](#spack.build_systems.autotools.AutotoolsPackage.build_directory) Override to provide another place to build the package `build_system_class` *= 'AutotoolsPackage'*[¶](#spack.build_systems.autotools.AutotoolsPackage.build_system_class) This attribute is used in UI queries that need to know the build system base class `build_targets` *= []*[¶](#spack.build_systems.autotools.AutotoolsPackage.build_targets) Targets for `make` during the [`build()`](#spack.build_systems.autotools.AutotoolsPackage.build) phase `build_time_test_callbacks` *= 
['check']*[¶](#spack.build_systems.autotools.AutotoolsPackage.build_time_test_callbacks) Callback names for build-time test `check`()[¶](#spack.build_systems.autotools.AutotoolsPackage.check) Searches the Makefile for targets `test` and `check` and runs them if found. `configure`(*spec*, *prefix*)[¶](#spack.build_systems.autotools.AutotoolsPackage.configure) Runs configure with the arguments specified in [`configure_args()`](#spack.build_systems.autotools.AutotoolsPackage.configure_args) and an appropriately set prefix. `configure_abs_path`[¶](#spack.build_systems.autotools.AutotoolsPackage.configure_abs_path) `configure_args`()[¶](#spack.build_systems.autotools.AutotoolsPackage.configure_args) Produces a list containing all the arguments that must be passed to configure, except `--prefix` which will be pre-pended to the list. | Returns: | list of arguments for configure | `configure_directory`[¶](#spack.build_systems.autotools.AutotoolsPackage.configure_directory) Returns the directory where ‘configure’ resides. | Returns: | directory where to find configure | `delete_configure_to_force_update`()[¶](#spack.build_systems.autotools.AutotoolsPackage.delete_configure_to_force_update) `enable_or_disable`(*name*, *activation_value=None*)[¶](#spack.build_systems.autotools.AutotoolsPackage.enable_or_disable) Same as [`with_or_without()`](#spack.build_systems.autotools.AutotoolsPackage.with_or_without) but substitute `with` with `enable` and `without` with `disable`. | Parameters: | * **name** (*str*) – name of a valid multi-valued variant * **activation_value** (*callable*) – if present accepts a single value and returns the parameter to be used leading to an entry of the type `--enable-{name}={parameter}` The special value ‘prefix’ can also be assigned and will return `spec[name].prefix` as activation parameter. 
| | Returns: | list of arguments to configure | `flags_to_build_system_args`(*flags*)[¶](#spack.build_systems.autotools.AutotoolsPackage.flags_to_build_system_args) Produces a list of all command line arguments to pass specified compiler flags to configure. `force_autoreconf` *= False*[¶](#spack.build_systems.autotools.AutotoolsPackage.force_autoreconf) Set to true to force the autoreconf step even if configure is present `install`(*spec*, *prefix*)[¶](#spack.build_systems.autotools.AutotoolsPackage.install) Makes the install targets specified by [`install_targets`](#spack.build_systems.autotools.AutotoolsPackage.install_targets) `install_targets` *= ['install']*[¶](#spack.build_systems.autotools.AutotoolsPackage.install_targets) Targets for `make` during the [`install()`](#spack.build_systems.autotools.AutotoolsPackage.install) phase `install_time_test_callbacks` *= ['installcheck']*[¶](#spack.build_systems.autotools.AutotoolsPackage.install_time_test_callbacks) Callback names for install-time test `installcheck`()[¶](#spack.build_systems.autotools.AutotoolsPackage.installcheck) Searches the Makefile for an `installcheck` target and runs it if found. `patch_config_guess` *= True*[¶](#spack.build_systems.autotools.AutotoolsPackage.patch_config_guess) Whether or not to update `config.guess` on old architectures `phases` *= ['autoreconf', 'configure', 'build', 'install']*[¶](#spack.build_systems.autotools.AutotoolsPackage.phases) Phases of a GNU Autotools package `set_configure_or_die`()[¶](#spack.build_systems.autotools.AutotoolsPackage.set_configure_or_die) Checks the presence of a `configure` file after the autoreconf phase. If it is found sets a module attribute appropriately, otherwise raises an error. 
| Raises: | **RuntimeError** – if a configure script is not found in [`configure_directory()`](#spack.build_systems.autotools.AutotoolsPackage.configure_directory) | `with_or_without`(*name*, *activation_value=None*)[¶](#spack.build_systems.autotools.AutotoolsPackage.with_or_without) Inspects a variant and returns the arguments that activate or deactivate the selected feature(s) for the configure options. This function works on all types of variants. For bool-valued variants it will return by default `--with-{name}` or `--without-{name}`. For other kinds of variants it will cycle over the allowed values and return either `--with-{value}` or `--without-{value}`. If activation_value is given, then for each possible value of the variant, the option `--with-{value}=activation_value(value)` or `--without-{value}` will be added depending on whether or not `variant=value` is in the spec. | Parameters: | * **name** (*str*) – name of a valid multi-valued variant * **activation_value** (*callable*) – callable that accepts a single value and returns the parameter to be used leading to an entry of the type `--with-{name}={parameter}`. The special value ‘prefix’ can also be assigned and will return `spec[name].prefix` as activation parameter. | | Returns: | list of arguments to configure | ##### spack.build_systems.cmake module[¶](#module-spack.build_systems.cmake) *class* `spack.build_systems.cmake.``CMakePackage`(*spec*)[¶](#spack.build_systems.cmake.CMakePackage) Bases: [`spack.package.PackageBase`](index.html#spack.package.PackageBase) Specialized class for packages built using CMake. For more information on the CMake build system, see: <https://cmake.org/cmake/help/latest/> This class provides three phases that can be overridden: > 1. [`cmake()`](#spack.build_systems.cmake.CMakePackage.cmake) > 2. [`build()`](#spack.build_systems.cmake.CMakePackage.build) > 3. 
[`install()`](#spack.build_systems.cmake.CMakePackage.install) They all have sensible defaults and for many packages the only thing necessary will be to override [`cmake_args()`](#spack.build_systems.cmake.CMakePackage.cmake_args). For a finer tuning you may also override: > | **Method** | **Purpose** | > | --- | --- | > | [`root_cmakelists_dir()`](#spack.build_systems.cmake.CMakePackage.root_cmakelists_dir) | Location of the > root CMakeLists.txt | > | [`build_directory()`](#spack.build_systems.cmake.CMakePackage.build_directory) | Directory where to > build the package | `archive_files`[¶](#spack.build_systems.cmake.CMakePackage.archive_files) Files to archive for packages based on CMake `build`(*spec*, *prefix*)[¶](#spack.build_systems.cmake.CMakePackage.build) Make the build targets `build_directory`[¶](#spack.build_systems.cmake.CMakePackage.build_directory) Returns the directory to use when building the package | Returns: | directory where to build the package | `build_system_class` *= 'CMakePackage'*[¶](#spack.build_systems.cmake.CMakePackage.build_system_class) This attribute is used in UI queries that need to know the build system base class `build_targets` *= []*[¶](#spack.build_systems.cmake.CMakePackage.build_targets) `build_time_test_callbacks` *= ['check']*[¶](#spack.build_systems.cmake.CMakePackage.build_time_test_callbacks) `check`()[¶](#spack.build_systems.cmake.CMakePackage.check) Searches the CMake-generated Makefile for the target `test` and runs it if found. `cmake`(*spec*, *prefix*)[¶](#spack.build_systems.cmake.CMakePackage.cmake) Runs `cmake` in the build directory `cmake_args`()[¶](#spack.build_systems.cmake.CMakePackage.cmake_args) Produces a list containing all the arguments that must be passed to cmake, except: > * CMAKE_INSTALL_PREFIX > * CMAKE_BUILD_TYPE which will be set automatically. 
| Returns: | list of arguments for cmake | `flags_to_build_system_args`(*flags*)[¶](#spack.build_systems.cmake.CMakePackage.flags_to_build_system_args) Produces a list of all command line arguments to pass the specified compiler flags to cmake. Note CMAKE does not have a cppflags option, so cppflags will be added to cflags, cxxflags, and fflags to mimic the behavior in other tools. `generator` *= 'Unix Makefiles'*[¶](#spack.build_systems.cmake.CMakePackage.generator) The build system generator to use. See `cmake --help` for a list of valid generators. Currently, “Unix Makefiles” and “Ninja” are the only generators that Spack supports. Defaults to “Unix Makefiles”. See <https://cmake.org/cmake/help/latest/manual/cmake-generators.7.html> for more information. `install`(*spec*, *prefix*)[¶](#spack.build_systems.cmake.CMakePackage.install) Make the install targets `install_targets` *= ['install']*[¶](#spack.build_systems.cmake.CMakePackage.install_targets) `phases` *= ['cmake', 'build', 'install']*[¶](#spack.build_systems.cmake.CMakePackage.phases) Phases of a CMake package `root_cmakelists_dir`[¶](#spack.build_systems.cmake.CMakePackage.root_cmakelists_dir) The relative path to the directory containing CMakeLists.txt This path is relative to the root of the extracted tarball, not to the `build_directory`. Defaults to the current directory. | Returns: | directory containing CMakeLists.txt | `std_cmake_args`[¶](#spack.build_systems.cmake.CMakePackage.std_cmake_args) Standard cmake arguments provided as a property for convenience of package writers | Returns: | standard cmake arguments | ##### spack.build_systems.cuda module[¶](#module-spack.build_systems.cuda) *class* `spack.build_systems.cuda.``CudaPackage`(*spec*)[¶](#spack.build_systems.cuda.CudaPackage) Bases: [`spack.package.PackageBase`](index.html#spack.package.PackageBase) Auxiliary class which contains CUDA variant, dependencies and conflicts and is meant to unify and facilitate its usage. 
*static* `cuda_flags`()[¶](#spack.build_systems.cuda.CudaPackage.cuda_flags) ##### spack.build_systems.intel module[¶](#module-spack.build_systems.intel) *class* `spack.build_systems.intel.``IntelPackage`(*spec*)[¶](#spack.build_systems.intel.IntelPackage) Bases: [`spack.package.PackageBase`](index.html#spack.package.PackageBase) Specialized class for licensed Intel software. This class provides two phases that can be overridden: 1. [`configure()`](#spack.build_systems.intel.IntelPackage.configure) 2. [`install()`](#spack.build_systems.intel.IntelPackage.install) They both have sensible defaults and for many packages the only thing necessary will be to override setup_environment to set the appropriate environment variables. `blas_libs`[¶](#spack.build_systems.intel.IntelPackage.blas_libs) `build_system_class` *= 'IntelPackage'*[¶](#spack.build_systems.intel.IntelPackage.build_system_class) This attribute is used in UI queries that need to know the build system base class `component_bin_dir`(*component*, ***kwargs*)[¶](#spack.build_systems.intel.IntelPackage.component_bin_dir) `component_include_dir`(*component*, ***kwargs*)[¶](#spack.build_systems.intel.IntelPackage.component_include_dir) `component_lib_dir`(*component*, ***kwargs*)[¶](#spack.build_systems.intel.IntelPackage.component_lib_dir) Provide directory suitable for find_libraries() and SPACK_COMPILER_EXTRA_RPATHS. `configure`(*spec*, *prefix*)[¶](#spack.build_systems.intel.IntelPackage.configure) Generates the silent.cfg file to pass to installer.sh. See <https://software.intel.com/en-us/articles/configuration-file-format> `configure_rpath`()[¶](#spack.build_systems.intel.IntelPackage.configure_rpath) `file_to_source`[¶](#spack.build_systems.intel.IntelPackage.file_to_source) Full path of file to source for initializing an Intel package. 
A client package could override as follows:

```
@property
def file_to_source(self):
    return self.normalize_path("apsvars.sh", "vtune_amplifier")
```

`filter_compiler_wrappers`()[¶](#spack.build_systems.intel.IntelPackage.filter_compiler_wrappers) `global_license_file`[¶](#spack.build_systems.intel.IntelPackage.global_license_file) Returns the path where a Spack-global license file should be stored. All Intel software shares the same license, so we store it in a common ‘intel’ directory. `headers`[¶](#spack.build_systems.intel.IntelPackage.headers) `install`(*spec*, *prefix*)[¶](#spack.build_systems.intel.IntelPackage.install) Runs Intel’s install.sh installation script. Afterwards, save the installer config and logs to <prefix>/.spack `intel64_int_suffix`[¶](#spack.build_systems.intel.IntelPackage.intel64_int_suffix) Provide the suffix for Intel library names to match a client application’s desired int size, conveyed by the active spec variant. The possible suffixes and their meanings are: > `ilp64` all of int, long, and pointer are 64 bit, > `lp64` only long and pointer are 64 bit; int will be 32 bit. `lapack_libs`[¶](#spack.build_systems.intel.IntelPackage.lapack_libs) `libs`[¶](#spack.build_systems.intel.IntelPackage.libs) `license_comment` *= '#'*[¶](#spack.build_systems.intel.IntelPackage.license_comment) Comment symbol used in the license.lic file `license_files`[¶](#spack.build_systems.intel.IntelPackage.license_files) list() -> new empty list list(iterable) -> new list initialized from iterable’s items `license_required`[¶](#spack.build_systems.intel.IntelPackage.license_required) bool(x) -> bool Returns True when the argument x is true, False otherwise. The builtins True and False are the only two instances of the class bool. The class bool is a subclass of the class int, and cannot be subclassed. 
`license_url` *= 'https://software.intel.com/en-us/articles/intel-license-manager-faq'*[¶](#spack.build_systems.intel.IntelPackage.license_url) URL providing information on how to acquire a license key `license_vars` *= ['INTEL_LICENSE_FILE']*[¶](#spack.build_systems.intel.IntelPackage.license_vars) Environment variables that Intel searches for a license file `mpi_compiler_wrappers`[¶](#spack.build_systems.intel.IntelPackage.mpi_compiler_wrappers) Return paths to compiler wrappers as a dict of env-like names `mpi_setup_dependent_environment`(*spack_env*, *run_env*, *dependent_spec*, *compilers_of_client={}*)[¶](#spack.build_systems.intel.IntelPackage.mpi_setup_dependent_environment) Unified back-end for setup_dependent_environment() of Intel packages that provide ‘mpi’. | Parameters: | * **spack_env, run_env, dependent_spec** – same as in setup_dependent_environment(). * **compilers_of_client** (*dict*) – Conveys spack_cc, spack_cxx, etc., from the scope of dependent packages; constructed in caller. | `normalize_path`(*component_path*, *component_suite_dir=None*, *relative=False*)[¶](#spack.build_systems.intel.IntelPackage.normalize_path) Returns the absolute or relative path to a component or file under a component suite directory. Intel’s product names, scope, and directory layout changed over the years. This function provides a unified interface to their directory names. | Parameters: | * **component_path** (*str*) – a component name like ‘mkl’, or ‘mpi’, or a deeper relative path. * **component_suite_dir** (*str*) – _Unversioned_ name of the expected parent directory of component_path. When absent or None, an appropriate default will be used. A present but empty string “” requests that component_path refer to self.prefix directly. Typical values: compilers_and_libraries, composer_xe, parallel_studio_xe. Also supported: advisor, inspector, vtune. The actual directory name for these suites varies by release year. 
The name will be corrected as needed for use in the return value. * **relative** (*bool*) – When True, return path relative to self.prefix, otherwise, return an absolute path (the default). | `normalize_suite_dir`(*suite_dir_name, version_globs=['*.*.*']*)[¶](#spack.build_systems.intel.IntelPackage.normalize_suite_dir) Returns the version-specific and absolute path to the directory of an Intel product or a suite of product components. | Parameters: | * **suite_dir_name** (*str*) – Name of the product directory, without numeric version. + Examples: ``` composer_xe, parallel_studio_xe, compilers_and_libraries ``` The following will work as well, even though they are not directly targets for Spack installation: ``` advisor_xe, inspector_xe, vtune_amplifier_xe, performance_snapshots (new name for vtune as of 2018) ``` These are single-component products without subordinate components and are normally made available to users by a toplevel psxevars.sh or equivalent file to source (and thus by the modulefiles that Spack produces). * **version_globs** (*list of str*) – Suffix glob patterns (most specific first) expected to qualify suite_dir_name to its fully version-specific install directory (as opposed to a compatibility directory or symlink). | `openmp_libs`[¶](#spack.build_systems.intel.IntelPackage.openmp_libs) Supply LibraryList for linking OpenMP `phases` *= ['configure', 'install']*[¶](#spack.build_systems.intel.IntelPackage.phases) Phases of an Intel package `pset_components`[¶](#spack.build_systems.intel.IntelPackage.pset_components) `scalapack_libs`[¶](#spack.build_systems.intel.IntelPackage.scalapack_libs) `setup_dependent_environment`(*spack_env*, *run_env*, *dependent_spec*)[¶](#spack.build_systems.intel.IntelPackage.setup_dependent_environment) Set up the environment of packages that depend on this one. This is similar to `setup_environment`, but it is used to modify the compile and runtime environments of packages that *depend* on this one. 
This gives packages like Python and others that follow the extension model a way to implement common environment or compile-time settings for dependencies. This is useful if there are some common steps to installing all extensions for a certain package. Example: 1. Installing python modules generally requires `PYTHONPATH` to point to the `lib/pythonX.Y/site-packages` directory in the module’s install prefix. This method could be used to set that variable. | Parameters: | * **spack_env** (*EnvironmentModifications*) – List of environment modifications to be applied when the dependent package is built within Spack. * **run_env** (*EnvironmentModifications*) – List of environment modifications to be applied when the dependent package is run outside of Spack. These are added to the resulting module file. * **dependent_spec** (*Spec*) – The spec of the dependent package about to be built. This allows the extendee (self) to query the dependent’s state. Note that *this* package’s spec is available as `self.spec`. | `setup_dependent_package`(*module*, *dep_spec*)[¶](#spack.build_systems.intel.IntelPackage.setup_dependent_package) Set up Python module-scope variables for dependent packages. Called before the install() method of dependents. Default implementation does nothing, but this can be overridden by an extendable package to set up the module of its extensions. This is useful if there are some common steps to installing all extensions for a certain package. Examples: 1. Extensions often need to invoke the `python` interpreter from the Python installation being extended. This routine can put a `python()` Executable object in the module scope for the extension package to simplify extension installs. 2. MPI compilers could set some variables in the dependent’s scope that point to `mpicc`, `mpicxx`, etc., allowing them to be called by common name regardless of which MPI is used. 3. 
BLAS/LAPACK implementations can set some variables indicating the path to their libraries, since these paths differ by BLAS/LAPACK implementation. | Parameters: | * **module** ([*spack.package.PackageBase.module*](index.html#spack.package.PackageBase.module)) – The Python `module` object of the dependent package. Packages can use this to set module-scope variables for the dependent to use. * **dependent_spec** (*Spec*) – The spec of the dependent package about to be built. This allows the extendee (self) to query the dependent’s state. Note that *this* package’s spec is available as `self.spec`. | `setup_environment`(*spack_env*, *run_env*)[¶](#spack.build_systems.intel.IntelPackage.setup_environment) Adds environment variables to the generated module file. These environment variables come from running: ``` $ source parallel_studio_xe_2017/bin/psxevars.sh intel64 [and likewise for MKL, MPI, and other components] ``` `tbb_libs`[¶](#spack.build_systems.intel.IntelPackage.tbb_libs) Supply LibraryList for linking TBB `uninstall_ism`()[¶](#spack.build_systems.intel.IntelPackage.uninstall_ism) `version_yearlike`[¶](#spack.build_systems.intel.IntelPackage.version_yearlike) Return the version in a unified style, suitable for Version class conditionals. `version_years` *= {'intel-ipp@9.0:9.99': 2016, 'intel-mkl@11.3.0:11.3.999': 2016, 'intel-mpi@5.1:5.99': 2016}*[¶](#spack.build_systems.intel.IntelPackage.version_years) `spack.build_systems.intel.``debug_print`(*msg*, **args*)[¶](#spack.build_systems.intel.debug_print) Prints a message (usu. a variable) and the callers’ names for a couple of stack frames. `spack.build_systems.intel.``raise_lib_error`(**args*)[¶](#spack.build_systems.intel.raise_lib_error) Bails out with an error message. Shows args after the first as one per line, tab-indented, useful for long paths to line up and stand out. 
##### spack.build_systems.makefile module[¶](#module-spack.build_systems.makefile) *class* `spack.build_systems.makefile.``MakefilePackage`(*spec*)[¶](#spack.build_systems.makefile.MakefilePackage) Bases: [`spack.package.PackageBase`](index.html#spack.package.PackageBase) Specialized class for packages that are built using editable Makefiles. This class provides three phases that can be overridden: > 1. [`edit()`](#spack.build_systems.makefile.MakefilePackage.edit) > 2. [`build()`](#spack.build_systems.makefile.MakefilePackage.build) > 3. [`install()`](#spack.build_systems.makefile.MakefilePackage.install) It is usually necessary to override the [`edit()`](#spack.build_systems.makefile.MakefilePackage.edit) phase, while [`build()`](#spack.build_systems.makefile.MakefilePackage.build) and [`install()`](#spack.build_systems.makefile.MakefilePackage.install) have sensible defaults. For finer tuning you may override:
> | **Method** | **Purpose** |
> | --- | --- |
> | [`build_targets`](#spack.build_systems.makefile.MakefilePackage.build_targets) | Specify `make` targets for the build phase |
> | [`install_targets`](#spack.build_systems.makefile.MakefilePackage.install_targets) | Specify `make` targets for the install phase |
> | [`build_directory()`](#spack.build_systems.makefile.MakefilePackage.build_directory) | Directory where the Makefile is located |
`build`(*spec*, *prefix*)[¶](#spack.build_systems.makefile.MakefilePackage.build) Calls make, passing [`build_targets`](#spack.build_systems.makefile.MakefilePackage.build_targets) as targets. 
`build_directory`[¶](#spack.build_systems.makefile.MakefilePackage.build_directory) Returns the directory containing the main Makefile | Returns: | build directory | `build_system_class` *= 'MakefilePackage'*[¶](#spack.build_systems.makefile.MakefilePackage.build_system_class) This attribute is used in UI queries that need to know the build system base class `build_targets` *= []*[¶](#spack.build_systems.makefile.MakefilePackage.build_targets) Targets for `make` during the [`build()`](#spack.build_systems.makefile.MakefilePackage.build) phase `build_time_test_callbacks` *= ['check']*[¶](#spack.build_systems.makefile.MakefilePackage.build_time_test_callbacks) Callback names for build-time test `check`()[¶](#spack.build_systems.makefile.MakefilePackage.check) Searches the Makefile for targets `test` and `check` and runs them if found. `edit`(*spec*, *prefix*)[¶](#spack.build_systems.makefile.MakefilePackage.edit) Edits the Makefile before calling make. This phase cannot be defaulted. `install`(*spec*, *prefix*)[¶](#spack.build_systems.makefile.MakefilePackage.install) Calls make, passing [`install_targets`](#spack.build_systems.makefile.MakefilePackage.install_targets) as targets. `install_targets` *= ['install']*[¶](#spack.build_systems.makefile.MakefilePackage.install_targets) Targets for `make` during the [`install()`](#spack.build_systems.makefile.MakefilePackage.install) phase `install_time_test_callbacks` *= ['installcheck']*[¶](#spack.build_systems.makefile.MakefilePackage.install_time_test_callbacks) Callback names for install-time test `installcheck`()[¶](#spack.build_systems.makefile.MakefilePackage.installcheck) Searches the Makefile for an `installcheck` target and runs it if found. 
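A minimal sketch of how these phases compose. The base class here is a standalone stand-in for `spack.build_systems.makefile.MakefilePackage` that logs commands instead of running `make`, and the `Libelf` recipe is illustrative only:

```python
class MakefilePackageSketch:
    """Stand-in for MakefilePackage: build() and install() just call make
    with the class-level target lists, so subclasses usually only edit()."""
    build_targets = []
    install_targets = ["install"]
    phases = ["edit", "build", "install"]

    def __init__(self):
        self.log = []  # records commands instead of executing them

    def make(self, *targets):
        self.log.append(" ".join(("make",) + targets))

    def build(self, spec, prefix):
        self.make(*self.build_targets)

    def install(self, spec, prefix):
        self.make(*self.install_targets)

class Libelf(MakefilePackageSketch):
    """Hypothetical recipe: only edit() is mandatory."""
    build_targets = ["all"]

    def edit(self, spec, prefix):
        # A real edit() would rewrite PREFIX/CC lines in the Makefile here.
        self.log.append("edit Makefile (PREFIX=%s)" % prefix)

pkg = Libelf()
for phase in pkg.phases:
    getattr(pkg, phase)(None, "/opt/libelf")
print(pkg.log)
```

Running the phases in order yields an edit step followed by `make all` and `make install`, which is why most Makefile recipes only need `edit()` plus the two target lists.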
`phases` *= ['edit', 'build', 'install']*[¶](#spack.build_systems.makefile.MakefilePackage.phases) Phases of a package that is built with a hand-written Makefile ##### spack.build_systems.meson module[¶](#module-spack.build_systems.meson) *class* `spack.build_systems.meson.``MesonPackage`(*spec*)[¶](#spack.build_systems.meson.MesonPackage) Bases: [`spack.package.PackageBase`](index.html#spack.package.PackageBase) Specialized class for packages built using Meson. For more information on the Meson build system, see: <https://mesonbuild.com/> This class provides three phases that can be overridden: > 1. [`meson()`](#spack.build_systems.meson.MesonPackage.meson) > 2. [`build()`](#spack.build_systems.meson.MesonPackage.build) > 3. [`install()`](#spack.build_systems.meson.MesonPackage.install) They all have sensible defaults and for many packages the only thing necessary will be to override [`meson_args()`](#spack.build_systems.meson.MesonPackage.meson_args). For finer tuning you may also override:
> | **Method** | **Purpose** |
> | --- | --- |
> | [`root_mesonlists_dir()`](#spack.build_systems.meson.MesonPackage.root_mesonlists_dir) | Location of the root meson.build |
> | [`build_directory()`](#spack.build_systems.meson.MesonPackage.build_directory) | Directory where to build the package |
`archive_files`[¶](#spack.build_systems.meson.MesonPackage.archive_files) Files to archive for packages based on Meson `build`(*spec*, *prefix*)[¶](#spack.build_systems.meson.MesonPackage.build) Make the build targets `build_directory`[¶](#spack.build_systems.meson.MesonPackage.build_directory) Returns the directory to use when building the package | Returns: | directory where to build the package | `build_system_class` *= 'MesonPackage'*[¶](#spack.build_systems.meson.MesonPackage.build_system_class) This attribute is used in UI queries that need to know the build system base class `build_targets` *= []*[¶](#spack.build_systems.meson.MesonPackage.build_targets) 
`build_time_test_callbacks` *= ['check']*[¶](#spack.build_systems.meson.MesonPackage.build_time_test_callbacks) `check`()[¶](#spack.build_systems.meson.MesonPackage.check) Searches the Meson-generated file for the target `test` and runs it if found. `flags_to_build_system_args`(*flags*)[¶](#spack.build_systems.meson.MesonPackage.flags_to_build_system_args) Produces a list of all command line arguments to pass the specified compiler flags to meson. `install`(*spec*, *prefix*)[¶](#spack.build_systems.meson.MesonPackage.install) Make the install targets `install_targets` *= ['install']*[¶](#spack.build_systems.meson.MesonPackage.install_targets) `meson`(*spec*, *prefix*)[¶](#spack.build_systems.meson.MesonPackage.meson) Runs `meson` in the build directory `meson_args`()[¶](#spack.build_systems.meson.MesonPackage.meson_args) Produces a list containing all the arguments that must be passed to meson, except: * `--prefix` * `--libdir` * `--buildtype` * `--strip` which will be set automatically. | Returns: | list of arguments for meson | `phases` *= ['meson', 'build', 'install']*[¶](#spack.build_systems.meson.MesonPackage.phases) Phases of a Meson package `root_mesonlists_dir`[¶](#spack.build_systems.meson.MesonPackage.root_mesonlists_dir) The relative path to the directory containing meson.build This path is relative to the root of the extracted tarball, not to the `build_directory`. Defaults to the current directory. | Returns: | directory containing meson.build | `std_meson_args`[¶](#spack.build_systems.meson.MesonPackage.std_meson_args) Standard meson arguments provided as a property for convenience of package writers | Returns: | standard meson arguments | ##### spack.build_systems.octave module[¶](#module-spack.build_systems.octave) *class* `spack.build_systems.octave.``OctavePackage`(*spec*)[¶](#spack.build_systems.octave.OctavePackage) Bases: [`spack.package.PackageBase`](index.html#spack.package.PackageBase) Specialized class for Octave packages. 
See <https://www.gnu.org/software/octave/doc/v4.2.0/Installing-and-Removing-Packages.html> for more information. This class provides the following phases that can be overridden: 1. [`install()`](#spack.build_systems.octave.OctavePackage.install) `build_system_class` *= 'OctavePackage'*[¶](#spack.build_systems.octave.OctavePackage.build_system_class) `install`(*spec*, *prefix*)[¶](#spack.build_systems.octave.OctavePackage.install) Install the package from the archive file `phases` *= ['install']*[¶](#spack.build_systems.octave.OctavePackage.phases) `setup_environment`(*spack_env*, *run_env*)[¶](#spack.build_systems.octave.OctavePackage.setup_environment) Set up the compile and runtime environments for a package. ##### spack.build_systems.perl module[¶](#module-spack.build_systems.perl) *class* `spack.build_systems.perl.``PerlPackage`(*spec*)[¶](#spack.build_systems.perl.PerlPackage) Bases: [`spack.package.PackageBase`](index.html#spack.package.PackageBase) Specialized class for packages that are built using Perl. This class provides four phases that can be overridden if required: > 1. [`configure()`](#spack.build_systems.perl.PerlPackage.configure) > 2. [`build()`](#spack.build_systems.perl.PerlPackage.build) > 3. [`check()`](#spack.build_systems.perl.PerlPackage.check) > 4. [`install()`](#spack.build_systems.perl.PerlPackage.install) The default methods use, in order of preference: 1. Makefile.PL, 2. Build.PL. Some packages may need to override [`configure_args()`](#spack.build_systems.perl.PerlPackage.configure_args), which produces a list of arguments for [`configure()`](#spack.build_systems.perl.PerlPackage.configure). Arguments should not include the installation base directory. `build`(*spec*, *prefix*)[¶](#spack.build_systems.perl.PerlPackage.build) Builds a Perl package. 
`build_system_class` *= 'PerlPackage'*[¶](#spack.build_systems.perl.PerlPackage.build_system_class) This attribute is used in UI queries that need to know the build system base class `build_time_test_callbacks` *= ['check']*[¶](#spack.build_systems.perl.PerlPackage.build_time_test_callbacks) Callback names for build-time test `check`()[¶](#spack.build_systems.perl.PerlPackage.check) Runs built-in tests of a Perl package. `configure`(*spec*, *prefix*)[¶](#spack.build_systems.perl.PerlPackage.configure) Runs Makefile.PL or Build.PL with arguments consisting of an appropriate installation base directory followed by the list returned by [`configure_args()`](#spack.build_systems.perl.PerlPackage.configure_args). | Raises: | **RuntimeError** – if neither Makefile.PL nor Build.PL exists | `configure_args`()[¶](#spack.build_systems.perl.PerlPackage.configure_args) Produces a list containing the arguments that must be passed to [`configure()`](#spack.build_systems.perl.PerlPackage.configure). Arguments should not include the installation base directory, which is prepended automatically. | Returns: | list of arguments for Makefile.PL or Build.PL | `install`(*spec*, *prefix*)[¶](#spack.build_systems.perl.PerlPackage.install) Installs a Perl package. 
`phases` *= ['configure', 'build', 'install']*[¶](#spack.build_systems.perl.PerlPackage.phases) Phases of a Perl package ##### spack.build_systems.python module[¶](#module-spack.build_systems.python) *class* `spack.build_systems.python.``PythonPackage`(*spec*)[¶](#spack.build_systems.python.PythonPackage) Bases: [`spack.package.PackageBase`](index.html#spack.package.PackageBase) Specialized class for packages that are built using Python setup.py files This class provides the following phases that can be overridden: * build * build_py * build_ext * build_clib * build_scripts * clean * install * install_lib * install_headers * install_scripts * install_data * sdist * register * bdist * bdist_dumb * bdist_rpm * bdist_wininst * upload * check These are all standard setup.py commands and can be found by running: ``` $ python setup.py --help-commands ``` By default, only the ‘build’ and ‘install’ phases are run, but if you need to run more phases, simply modify your `phases` list like so: ``` phases = ['build_ext', 'install', 'bdist'] ``` Each phase provides a function <phase> that runs: ``` $ python -s setup.py --no-user-cfg <phase> ``` Each phase also has a <phase_args> function that can pass arguments to this call. All of these functions are empty except for the `install_args` function, which passes `--prefix=/path/to/installation/directory`. If you need to run a phase which is not a standard setup.py command, you’ll need to define a function for it like so: ``` def configure(self, spec, prefix): self.setup_py('configure') ``` `add_files_to_view`(*view*, *merge_map*)[¶](#spack.build_systems.python.PythonPackage.add_files_to_view) Given a map of package files to destination paths in the view, add the files to the view. By default this adds all files. Alternative implementations may skip some files, for example if other packages linked into the view already include the file. 
`bdist`(*spec*, *prefix*)[¶](#spack.build_systems.python.PythonPackage.bdist) Create a built (binary) distribution. `bdist_args`(*spec*, *prefix*)[¶](#spack.build_systems.python.PythonPackage.bdist_args) Arguments to pass to bdist. `bdist_dumb`(*spec*, *prefix*)[¶](#spack.build_systems.python.PythonPackage.bdist_dumb) Create a “dumb” built distribution. `bdist_dumb_args`(*spec*, *prefix*)[¶](#spack.build_systems.python.PythonPackage.bdist_dumb_args) Arguments to pass to bdist_dumb. `bdist_rpm`(*spec*, *prefix*)[¶](#spack.build_systems.python.PythonPackage.bdist_rpm) Create an RPM distribution. `bdist_rpm_args`(*spec*, *prefix*)[¶](#spack.build_systems.python.PythonPackage.bdist_rpm_args) Arguments to pass to bdist_rpm. `bdist_wininst`(*spec*, *prefix*)[¶](#spack.build_systems.python.PythonPackage.bdist_wininst) Create an executable installer for MS Windows. `bdist_wininst_args`(*spec*, *prefix*)[¶](#spack.build_systems.python.PythonPackage.bdist_wininst_args) Arguments to pass to bdist_wininst. `build`(*spec*, *prefix*)[¶](#spack.build_systems.python.PythonPackage.build) Build everything needed to install. `build_args`(*spec*, *prefix*)[¶](#spack.build_systems.python.PythonPackage.build_args) Arguments to pass to build. `build_clib`(*spec*, *prefix*)[¶](#spack.build_systems.python.PythonPackage.build_clib) Build C/C++ libraries used by Python extensions. `build_clib_args`(*spec*, *prefix*)[¶](#spack.build_systems.python.PythonPackage.build_clib_args) Arguments to pass to build_clib. `build_directory`[¶](#spack.build_systems.python.PythonPackage.build_directory) The directory containing the `setup.py` file. `build_ext`(*spec*, *prefix*)[¶](#spack.build_systems.python.PythonPackage.build_ext) Build C/C++ extensions (compile/link to build directory). `build_ext_args`(*spec*, *prefix*)[¶](#spack.build_systems.python.PythonPackage.build_ext_args) Arguments to pass to build_ext. 
`build_py`(*spec*, *prefix*)[¶](#spack.build_systems.python.PythonPackage.build_py) “Build” pure Python modules (copy to build directory). `build_py_args`(*spec*, *prefix*)[¶](#spack.build_systems.python.PythonPackage.build_py_args) Arguments to pass to build_py. `build_scripts`(*spec*, *prefix*)[¶](#spack.build_systems.python.PythonPackage.build_scripts) “Build” scripts (copy and fixup #! line). `build_system_class` *= 'PythonPackage'*[¶](#spack.build_systems.python.PythonPackage.build_system_class) `build_time_test_callbacks` *= ['test']*[¶](#spack.build_systems.python.PythonPackage.build_time_test_callbacks) Callback names for build-time test `check`(*spec*, *prefix*)[¶](#spack.build_systems.python.PythonPackage.check) Perform some checks on the package. `check_args`(*spec*, *prefix*)[¶](#spack.build_systems.python.PythonPackage.check_args) Arguments to pass to check. `clean`(*spec*, *prefix*)[¶](#spack.build_systems.python.PythonPackage.clean) Clean up temporary files from ‘build’ command. `clean_args`(*spec*, *prefix*)[¶](#spack.build_systems.python.PythonPackage.clean_args) Arguments to pass to clean. `import_module_test`()[¶](#spack.build_systems.python.PythonPackage.import_module_test) Attempts to import the module that was just installed. This test is only run if the package overrides [`import_modules`](#spack.build_systems.python.PythonPackage.import_modules) with a list of module names. `import_modules` *= []*[¶](#spack.build_systems.python.PythonPackage.import_modules) `install`(*spec*, *prefix*)[¶](#spack.build_systems.python.PythonPackage.install) Install everything from build directory. `install_args`(*spec*, *prefix*)[¶](#spack.build_systems.python.PythonPackage.install_args) Arguments to pass to install. `install_data`(*spec*, *prefix*)[¶](#spack.build_systems.python.PythonPackage.install_data) Install data files. `install_data_args`(*spec*, *prefix*)[¶](#spack.build_systems.python.PythonPackage.install_data_args) Arguments to pass to install_data. 
`install_headers`(*spec*, *prefix*)[¶](#spack.build_systems.python.PythonPackage.install_headers) Install C/C++ header files. `install_headers_args`(*spec*, *prefix*)[¶](#spack.build_systems.python.PythonPackage.install_headers_args) Arguments to pass to install_headers. `install_lib`(*spec*, *prefix*)[¶](#spack.build_systems.python.PythonPackage.install_lib) Install all Python modules (extensions and pure Python). `install_lib_args`(*spec*, *prefix*)[¶](#spack.build_systems.python.PythonPackage.install_lib_args) Arguments to pass to install_lib. `install_scripts`(*spec*, *prefix*)[¶](#spack.build_systems.python.PythonPackage.install_scripts) Install scripts (Python or otherwise). `install_scripts_args`(*spec*, *prefix*)[¶](#spack.build_systems.python.PythonPackage.install_scripts_args) Arguments to pass to install_scripts. `install_time_test_callbacks` *= ['import_module_test']*[¶](#spack.build_systems.python.PythonPackage.install_time_test_callbacks) Callback names for install-time test `phases` *= ['build', 'install']*[¶](#spack.build_systems.python.PythonPackage.phases) `py_namespace` *= None*[¶](#spack.build_systems.python.PythonPackage.py_namespace) `python`(**args*, ***kwargs*)[¶](#spack.build_systems.python.PythonPackage.python) `register`(*spec*, *prefix*)[¶](#spack.build_systems.python.PythonPackage.register) Register the distribution with the Python package index. `register_args`(*spec*, *prefix*)[¶](#spack.build_systems.python.PythonPackage.register_args) Arguments to pass to register. `remove_files_from_view`(*view*, *merge_map*)[¶](#spack.build_systems.python.PythonPackage.remove_files_from_view) Given a map of package files to files currently linked in the view, remove the files from the view. The default implementation removes all files. Alternative implementations may not remove all files. For example if two packages include the same file, it should only be removed when both packages are removed. 
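The phase-to-invocation mapping described for this class (each phase runs `python -s setup.py --no-user-cfg <phase>` plus that phase's `<phase>_args`) can be sketched as pure command construction; nothing is executed, and the prefix is made up:

```python
class PythonPackageSketch:
    """Illustrates how a phase name plus its <phase>_args method become
    a setup.py command line (stand-in, not Spack's implementation)."""
    phases = ["build", "install"]

    def build_args(self, spec, prefix):
        return []  # empty by default, like most <phase>_args

    def install_args(self, spec, prefix):
        return ["--prefix=%s" % prefix]  # the one non-empty default

    def command_for(self, phase, spec, prefix):
        # Look up <phase>_args dynamically and splice it onto the base call.
        args = getattr(self, phase + "_args")(spec, prefix)
        return ["python", "-s", "setup.py", "--no-user-cfg", phase] + args

pkg = PythonPackageSketch()
for phase in pkg.phases:
    print(pkg.command_for(phase, None, "/opt/mypkg"))
```

Adding a non-standard phase is then just defining another method and appending its name to `phases`, as the `configure` example above shows.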
`sdist`(*spec*, *prefix*)[¶](#spack.build_systems.python.PythonPackage.sdist) Create a source distribution (tarball, zip file, etc.). `sdist_args`(*spec*, *prefix*)[¶](#spack.build_systems.python.PythonPackage.sdist_args) Arguments to pass to sdist. `setup_file`()[¶](#spack.build_systems.python.PythonPackage.setup_file) Returns the name of the setup file to use. `setup_py`(**args*, ***kwargs*)[¶](#spack.build_systems.python.PythonPackage.setup_py) `test`()[¶](#spack.build_systems.python.PythonPackage.test) Run unit tests after in-place build. These tests are only run if the package actually has a ‘test’ command. `test_args`(*spec*, *prefix*)[¶](#spack.build_systems.python.PythonPackage.test_args) Arguments to pass to test. `upload`(*spec*, *prefix*)[¶](#spack.build_systems.python.PythonPackage.upload) Upload binary package to PyPI. `upload_args`(*spec*, *prefix*)[¶](#spack.build_systems.python.PythonPackage.upload_args) Arguments to pass to upload. `view_file_conflicts`(*view*, *merge_map*)[¶](#spack.build_systems.python.PythonPackage.view_file_conflicts) Report all file conflicts, excepting special cases for python. Specifically, this does not report errors for duplicate __init__.py files for packages in the same namespace. ##### spack.build_systems.qmake module[¶](#module-spack.build_systems.qmake) *class* `spack.build_systems.qmake.``QMakePackage`(*spec*)[¶](#spack.build_systems.qmake.QMakePackage) Bases: [`spack.package.PackageBase`](index.html#spack.package.PackageBase) Specialized class for packages built using qmake. For more information on the qmake build system, see: <http://doc.qt.io/qt-5/qmake-manual.html> This class provides three phases that can be overridden: 1. [`qmake()`](#spack.build_systems.qmake.QMakePackage.qmake) 2. [`build()`](#spack.build_systems.qmake.QMakePackage.build) 3. 
[`install()`](#spack.build_systems.qmake.QMakePackage.install) They all have sensible defaults and for many packages the only thing necessary will be to override [`qmake_args()`](#spack.build_systems.qmake.QMakePackage.qmake_args). `build`(*spec*, *prefix*)[¶](#spack.build_systems.qmake.QMakePackage.build) Make the build targets `build_system_class` *= 'QMakePackage'*[¶](#spack.build_systems.qmake.QMakePackage.build_system_class) This attribute is used in UI queries that need to know the build system base class `build_time_test_callbacks` *= ['check']*[¶](#spack.build_systems.qmake.QMakePackage.build_time_test_callbacks) Callback names for build-time test `check`()[¶](#spack.build_systems.qmake.QMakePackage.check) Searches the Makefile for a `check:` target and runs it if found. `install`(*spec*, *prefix*)[¶](#spack.build_systems.qmake.QMakePackage.install) Make the install targets `phases` *= ['qmake', 'build', 'install']*[¶](#spack.build_systems.qmake.QMakePackage.phases) Phases of a qmake package `qmake`(*spec*, *prefix*)[¶](#spack.build_systems.qmake.QMakePackage.qmake) Run `qmake` to configure the project and generate a Makefile. `qmake_args`()[¶](#spack.build_systems.qmake.QMakePackage.qmake_args) Produces a list containing all the arguments that must be passed to qmake ##### spack.build_systems.r module[¶](#module-spack.build_systems.r) *class* `spack.build_systems.r.``RPackage`(*spec*)[¶](#spack.build_systems.r.RPackage) Bases: [`spack.package.PackageBase`](index.html#spack.package.PackageBase) Specialized class for packages that are built using R. For more information on the R build system, see: <https://stat.ethz.ch/R-manual/R-devel/library/utils/html/INSTALL.html> This class provides a single phase that can be overridden: > 1. 
[`install()`](#spack.build_systems.r.RPackage.install) It has sensible defaults, and for many packages the only thing necessary will be to add dependencies `build_system_class` *= 'RPackage'*[¶](#spack.build_systems.r.RPackage.build_system_class) This attribute is used in UI queries that need to know the build system base class `configure_args`()[¶](#spack.build_systems.r.RPackage.configure_args) Arguments to pass to install via `--configure-args`. `configure_vars`()[¶](#spack.build_systems.r.RPackage.configure_vars) Arguments to pass to install via `--configure-vars`. `install`(*spec*, *prefix*)[¶](#spack.build_systems.r.RPackage.install) Installs an R package. `phases` *= ['install']*[¶](#spack.build_systems.r.RPackage.phases) ##### spack.build_systems.scons module[¶](#module-spack.build_systems.scons) *class* `spack.build_systems.scons.``SConsPackage`(*spec*)[¶](#spack.build_systems.scons.SConsPackage) Bases: [`spack.package.PackageBase`](index.html#spack.package.PackageBase) Specialized class for packages built using SCons. See <http://scons.org/documentation.html> for more information. This class provides the following phases that can be overridden: 1. [`build()`](#spack.build_systems.scons.SConsPackage.build) 2. [`install()`](#spack.build_systems.scons.SConsPackage.install) Packages that use SCons as a build system are less uniform than packages that use other build systems. Developers can add custom subcommands or variables that control the build. You will likely need to override [`build_args()`](#spack.build_systems.scons.SConsPackage.build_args) to pass the appropriate variables. `build`(*spec*, *prefix*)[¶](#spack.build_systems.scons.SConsPackage.build) Build the package. `build_args`(*spec*, *prefix*)[¶](#spack.build_systems.scons.SConsPackage.build_args) Arguments to pass to build. 
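Because SCons builds are driven by build-specific variables, the usual customization point is `build_args()`. A sketch with a stand-in base class that records `scons` invocations instead of running them; the recipe name and its variables are hypothetical:

```python
class SConsPackageSketch:
    """Stand-in for SConsPackage: build() and install() forward to scons
    with the arguments returned by the corresponding *_args() hook."""
    def __init__(self):
        self.invocations = []

    def scons(self, *args):
        self.invocations.append(("scons",) + args)

    def build_args(self, spec, prefix):
        return []

    def install_args(self, spec, prefix):
        return []

    def build(self, spec, prefix):
        self.scons(*self.build_args(spec, prefix))

    def install(self, spec, prefix):
        self.scons("install", *self.install_args(spec, prefix))

class MySConsRecipe(SConsPackageSketch):
    """Hypothetical recipe passing variables the SConstruct would read."""
    def build_args(self, spec, prefix):
        return ["prefix=%s" % prefix, "python_package=none"]

pkg = MySConsRecipe()
pkg.build(None, "/opt/pkg")
pkg.install(None, "/opt/pkg")
print(pkg.invocations)
```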
`build_system_class` *= 'SConsPackage'*[¶](#spack.build_systems.scons.SConsPackage.build_system_class) Used in UI queries that need to know which build-system class we are using `build_time_test_callbacks` *= ['test']*[¶](#spack.build_systems.scons.SConsPackage.build_time_test_callbacks) Callback names for build-time test `install`(*spec*, *prefix*)[¶](#spack.build_systems.scons.SConsPackage.install) Install the package. `install_args`(*spec*, *prefix*)[¶](#spack.build_systems.scons.SConsPackage.install_args) Arguments to pass to install. `phases` *= ['build', 'install']*[¶](#spack.build_systems.scons.SConsPackage.phases) Phases of a SCons package `test`()[¶](#spack.build_systems.scons.SConsPackage.test) Run unit tests after build. By default, does nothing. Override this if you want to add package-specific tests. ##### spack.build_systems.waf module[¶](#module-spack.build_systems.waf) *class* `spack.build_systems.waf.``WafPackage`(*spec*)[¶](#spack.build_systems.waf.WafPackage) Bases: [`spack.package.PackageBase`](index.html#spack.package.PackageBase) Specialized class for packages that are built using the Waf build system. See <https://waf.io/book/> for more information. This class provides the following phases that can be overridden: * configure * build * install These are all standard Waf commands and can be found by running: ``` $ python waf --help ``` Each phase provides a function <phase> that runs: ``` $ python waf -j<jobs> <phase> ``` where <jobs> is the number of parallel jobs to build with. Each phase also has a <phase_args> function that can pass arguments to this call. All of these functions are empty except for the `configure_args` function, which passes `--prefix=/path/to/installation/prefix`. `build`(*spec*, *prefix*)[¶](#spack.build_systems.waf.WafPackage.build) Executes the build. `build_args`()[¶](#spack.build_systems.waf.WafPackage.build_args) Arguments to pass to build. 
`build_directory`[¶](#spack.build_systems.waf.WafPackage.build_directory) The directory containing the `waf` file. `build_system_class` *= 'WafPackage'*[¶](#spack.build_systems.waf.WafPackage.build_system_class) `build_time_test_callbacks` *= ['test']*[¶](#spack.build_systems.waf.WafPackage.build_time_test_callbacks) `configure`(*spec*, *prefix*)[¶](#spack.build_systems.waf.WafPackage.configure) Configures the project. `configure_args`()[¶](#spack.build_systems.waf.WafPackage.configure_args) Arguments to pass to configure. `install`(*spec*, *prefix*)[¶](#spack.build_systems.waf.WafPackage.install) Installs the targets on the system. `install_args`()[¶](#spack.build_systems.waf.WafPackage.install_args) Arguments to pass to install. `install_time_test_callbacks` *= ['installtest']*[¶](#spack.build_systems.waf.WafPackage.install_time_test_callbacks) `installtest`()[¶](#spack.build_systems.waf.WafPackage.installtest) Run unit tests after install. By default, does nothing. Override this if you want to add package-specific tests. `phases` *= ['configure', 'build', 'install']*[¶](#spack.build_systems.waf.WafPackage.phases) `python`(**args*, ***kwargs*)[¶](#spack.build_systems.waf.WafPackage.python) The python `Executable`. `test`()[¶](#spack.build_systems.waf.WafPackage.test) Run unit tests after build. By default, does nothing. Override this if you want to add package-specific tests. `waf`(**args*, ***kwargs*)[¶](#spack.build_systems.waf.WafPackage.waf) Runs the waf `Executable`. 
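The quoted invocation pattern (`python waf -j<jobs> <phase>`) reduces to simple argv construction; a sketch, with the jobs count and prefix arbitrary:

```python
def waf_command(phase, jobs, extra_args=()):
    """Build the argv for one Waf phase, mirroring the pattern above:
    python waf -j<jobs> <phase> [phase args...]"""
    return ["python", "waf", "-j%d" % jobs, phase] + list(extra_args)

# configure is the phase whose default args pass --prefix
print(waf_command("configure", 8, ["--prefix=/opt/pkg"]))
```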
##### Module contents[¶](#module-spack.build_systems) #### spack.cmd package[¶](#spack-cmd-package) ##### Subpackages[¶](#subpackages) ###### spack.cmd.common package[¶](#spack-cmd-common-package) ####### Submodules[¶](#submodules) ####### spack.cmd.common.arguments module[¶](#module-spack.cmd.common.arguments) `spack.cmd.common.arguments.``add_common_arguments`(*parser*, *list_of_arguments*)[¶](#spack.cmd.common.arguments.add_common_arguments) Extend a parser with extra arguments | Parameters: | * **parser** – parser to be extended * **list_of_arguments** – arguments to be added to the parser | ####### Module contents[¶](#module-spack.cmd.common) `spack.cmd.common.``print_module_placeholder_help`()[¶](#spack.cmd.common.print_module_placeholder_help) For use by commands to tell the user how to activate shell support. ###### spack.cmd.modules package[¶](#spack-cmd-modules-package) ####### Submodules[¶](#submodules) ####### spack.cmd.modules.dotkit module[¶](#module-spack.cmd.modules.dotkit) `spack.cmd.modules.dotkit.``add_command`(*parser*, *command_dict*)[¶](#spack.cmd.modules.dotkit.add_command) ####### spack.cmd.modules.lmod module[¶](#module-spack.cmd.modules.lmod) `spack.cmd.modules.lmod.``add_command`(*parser*, *command_dict*)[¶](#spack.cmd.modules.lmod.add_command) `spack.cmd.modules.lmod.``setdefault`(*module_type*, *specs*, *args*)[¶](#spack.cmd.modules.lmod.setdefault) Set the default module file, when multiple are present ####### spack.cmd.modules.tcl module[¶](#module-spack.cmd.modules.tcl) `spack.cmd.modules.tcl.``add_command`(*parser*, *command_dict*)[¶](#spack.cmd.modules.tcl.add_command) ####### Module contents[¶](#module-spack.cmd.modules) Implementation details of the `spack module` command. *exception* `spack.cmd.modules.``MultipleSpecsMatch`[¶](#spack.cmd.modules.MultipleSpecsMatch) Bases: `Exception` Raised when multiple specs match a constraint, in a context where this is not allowed. 
*exception* `spack.cmd.modules.``NoSpecMatches`[¶](#spack.cmd.modules.NoSpecMatches) Bases: `Exception` Raised when no spec matches a constraint, in a context where this is not allowed. `spack.cmd.modules.``add_loads_arguments`(*subparser*)[¶](#spack.cmd.modules.add_loads_arguments) `spack.cmd.modules.``callbacks` *= {'find': <function find at 0x7f5730f40840>, 'loads': <function loads at 0x7f5730f407b8>, 'refresh': <function refresh at 0x7f5730f40950>, 'rm': <function rm at 0x7f5730f408c8>}*[¶](#spack.cmd.modules.callbacks) Dictionary populated with the list of sub-commands. Each sub-command must be callable and accept 3 arguments: > * module_type: the type of module it refers to > * specs : the list of specs to be processed > * args : namespace containing the parsed command line arguments `spack.cmd.modules.``find`(*module_type*, *specs*, *args*)[¶](#spack.cmd.modules.find) Returns the module file “use” name if there’s a single match. Raises error messages otherwise. `spack.cmd.modules.``loads`(*module_type*, *specs*, *args*, *out=<_io.TextIOWrapper name='<stdout>' mode='w' encoding='UTF-8'>*)[¶](#spack.cmd.modules.loads) Prompt the list of modules associated with a list of specs `spack.cmd.modules.``modules_cmd`(*parser*, *args*, *module_type*, *callbacks={'find': <function find at 0x7f5730f40840>*, *'loads': <function loads at 0x7f5730f407b8>*, *'refresh': <function refresh at 0x7f5730f40950>*, *'rm': <function rm at 0x7f5730f408c8>}*)[¶](#spack.cmd.modules.modules_cmd) `spack.cmd.modules.``one_spec_or_raise`(*specs*)[¶](#spack.cmd.modules.one_spec_or_raise) Ensures exactly one spec has been selected, or raises the appropriate exception. `spack.cmd.modules.``refresh`(*module_type*, *specs*, *args*)[¶](#spack.cmd.modules.refresh) Regenerates the module files for every spec in specs and every module type in module types. 
`spack.cmd.modules.``rm`(*module_type*, *specs*, *args*)[¶](#spack.cmd.modules.rm) Deletes the module files associated with every spec in specs, for every module type in module types. `spack.cmd.modules.``setup_parser`(*subparser*)[¶](#spack.cmd.modules.setup_parser) ##### Submodules[¶](#submodules) ##### spack.cmd.activate module[¶](#module-spack.cmd.activate) `spack.cmd.activate.``activate`(*parser*, *args*)[¶](#spack.cmd.activate.activate) `spack.cmd.activate.``setup_parser`(*subparser*)[¶](#spack.cmd.activate.setup_parser) ##### spack.cmd.add module[¶](#module-spack.cmd.add) `spack.cmd.add.``add`(*parser*, *args*)[¶](#spack.cmd.add.add) `spack.cmd.add.``setup_parser`(*subparser*)[¶](#spack.cmd.add.setup_parser) ##### spack.cmd.arch module[¶](#module-spack.cmd.arch) `spack.cmd.arch.``arch`(*parser*, *args*)[¶](#spack.cmd.arch.arch) `spack.cmd.arch.``setup_parser`(*subparser*)[¶](#spack.cmd.arch.setup_parser) ##### spack.cmd.blame module[¶](#module-spack.cmd.blame) `spack.cmd.blame.``blame`(*parser*, *args*)[¶](#spack.cmd.blame.blame) `spack.cmd.blame.``setup_parser`(*subparser*)[¶](#spack.cmd.blame.setup_parser) ##### spack.cmd.bootstrap module[¶](#module-spack.cmd.bootstrap) `spack.cmd.bootstrap.``bootstrap`(*parser*, *args*, ***kwargs*)[¶](#spack.cmd.bootstrap.bootstrap) `spack.cmd.bootstrap.``setup_parser`(*subparser*)[¶](#spack.cmd.bootstrap.setup_parser) ##### spack.cmd.build module[¶](#module-spack.cmd.build) `spack.cmd.build.``build`(*parser*, *args*)[¶](#spack.cmd.build.build) `spack.cmd.build.``setup_parser`(*subparser*)[¶](#spack.cmd.build.setup_parser) ##### spack.cmd.build_env module[¶](#module-spack.cmd.build_env) `spack.cmd.build_env.``build_env`(*parser*, *args*)[¶](#spack.cmd.build_env.build_env) `spack.cmd.build_env.``setup_parser`(*subparser*)[¶](#spack.cmd.build_env.setup_parser) ##### spack.cmd.buildcache module[¶](#module-spack.cmd.buildcache) `spack.cmd.buildcache.``buildcache`(*parser*, *args*)[¶](#spack.cmd.buildcache.buildcache) 
`spack.cmd.buildcache.``createtarball`(*args*)[¶](#spack.cmd.buildcache.createtarball) create a binary package from an existing install `spack.cmd.buildcache.``find_matching_specs`(*pkgs*, *allow_multiple_matches=False*, *force=False*)[¶](#spack.cmd.buildcache.find_matching_specs) Returns a list of specs matching the not necessarily concretized specs given from cli | Parameters: | * **specs** – list of specs to be matched against installed packages * **allow_multiple_matches** – if True multiple matches are admitted | | Returns: | list of specs | `spack.cmd.buildcache.``getkeys`(*args*)[¶](#spack.cmd.buildcache.getkeys) get public keys available on mirrors `spack.cmd.buildcache.``install_tarball`(*spec*, *args*)[¶](#spack.cmd.buildcache.install_tarball) `spack.cmd.buildcache.``installtarball`(*args*)[¶](#spack.cmd.buildcache.installtarball) install from a binary package `spack.cmd.buildcache.``listspecs`(*args*)[¶](#spack.cmd.buildcache.listspecs) list binary packages available from mirrors `spack.cmd.buildcache.``match_downloaded_specs`(*pkgs*, *allow_multiple_matches=False*, *force=False*)[¶](#spack.cmd.buildcache.match_downloaded_specs) Returns a list of specs matching the not necessarily concretized specs given from cli | Parameters: | * **specs** – list of specs to be matched against buildcaches on mirror * **allow_multiple_matches** – if True multiple matches are admitted | | Returns: | list of specs | `spack.cmd.buildcache.``setup_parser`(*subparser*)[¶](#spack.cmd.buildcache.setup_parser) ##### spack.cmd.cd module[¶](#module-spack.cmd.cd) `spack.cmd.cd.``cd`(*parser*, *args*)[¶](#spack.cmd.cd.cd) `spack.cmd.cd.``setup_parser`(*subparser*)[¶](#spack.cmd.cd.setup_parser) This is for decoration – spack cd is used through spack’s shell support. This allows spack cd to print a descriptive help message when called with -h. 
##### spack.cmd.checksum module[¶](#module-spack.cmd.checksum) `spack.cmd.checksum.``checksum`(*parser*, *args*)[¶](#spack.cmd.checksum.checksum) `spack.cmd.checksum.``setup_parser`(*subparser*)[¶](#spack.cmd.checksum.setup_parser) ##### spack.cmd.clean module[¶](#module-spack.cmd.clean) *class* `spack.cmd.clean.``AllClean`(*option_strings*, *dest*, *nargs=None*, *const=None*, *default=None*, *type=None*, *choices=None*, *required=False*, *help=None*, *metavar=None*)[¶](#spack.cmd.clean.AllClean) Bases: `argparse.Action` Activates flags -s -d -m and -p simultaneously `spack.cmd.clean.``clean`(*parser*, *args*)[¶](#spack.cmd.clean.clean) `spack.cmd.clean.``setup_parser`(*subparser*)[¶](#spack.cmd.clean.setup_parser) ##### spack.cmd.clone module[¶](#module-spack.cmd.clone) `spack.cmd.clone.``clone`(*parser*, *args*)[¶](#spack.cmd.clone.clone) `spack.cmd.clone.``get_origin_info`(*remote*)[¶](#spack.cmd.clone.get_origin_info) `spack.cmd.clone.``setup_parser`(*subparser*)[¶](#spack.cmd.clone.setup_parser) ##### spack.cmd.commands module[¶](#module-spack.cmd.commands) *class* `spack.cmd.commands.``SpackArgparseRstWriter`(*documented_commands*, *out=<_io.TextIOWrapper name='<stdout>' mode='w' encoding='UTF-8'>*)[¶](#spack.cmd.commands.SpackArgparseRstWriter) Bases: [`llnl.util.argparsewriter.ArgparseRstWriter`](index.html#llnl.util.argparsewriter.ArgparseRstWriter) RST writer tailored for spack documentation. 
`usage`(**args*)[¶](#spack.cmd.commands.SpackArgparseRstWriter.usage) *class* `spack.cmd.commands.``SubcommandWriter`[¶](#spack.cmd.commands.SubcommandWriter) Bases: [`llnl.util.argparsewriter.ArgparseWriter`](index.html#llnl.util.argparsewriter.ArgparseWriter) `begin_command`(*prog*)[¶](#spack.cmd.commands.SubcommandWriter.begin_command) `spack.cmd.commands.``commands`(*parser*, *args*)[¶](#spack.cmd.commands.commands) `spack.cmd.commands.``formatter`(*func*)[¶](#spack.cmd.commands.formatter) Decorator used to register formatters `spack.cmd.commands.``formatters` *= {'names': <function names at 0x7f5731120a60>, 'rst': <function rst at 0x7f57311206a8>, 'subcommands': <function subcommands at 0x7f57311208c8>}*[¶](#spack.cmd.commands.formatters) list of command formatters `spack.cmd.commands.``names`(*args*)[¶](#spack.cmd.commands.names) `spack.cmd.commands.``rst`(*args*)[¶](#spack.cmd.commands.rst) `spack.cmd.commands.``rst_index`(*out=<_io.TextIOWrapper name='<stdout>' mode='w' encoding='UTF-8'>*)[¶](#spack.cmd.commands.rst_index) `spack.cmd.commands.``setup_parser`(*subparser*)[¶](#spack.cmd.commands.setup_parser) `spack.cmd.commands.``subcommands`(*args*)[¶](#spack.cmd.commands.subcommands) ##### spack.cmd.compiler module[¶](#module-spack.cmd.compiler) `spack.cmd.compiler.``compiler`(*parser*, *args*)[¶](#spack.cmd.compiler.compiler) `spack.cmd.compiler.``compiler_find`(*args*)[¶](#spack.cmd.compiler.compiler_find) Search either $PATH or a list of paths OR MODULES for compilers and add them to Spack’s configuration. `spack.cmd.compiler.``compiler_info`(*args*)[¶](#spack.cmd.compiler.compiler_info) Print info about all compilers matching a spec. 
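The `formatter` decorator and `formatters` registry above follow a common decorator-registration pattern; a self-contained sketch in which the formatter bodies are placeholders:

```python
# Sketch of the decorator-registration pattern behind `formatter` /
# `formatters`: decorating a function records it under its own name.

formatters = {}

def formatter(func):
    """Register func in the formatters dict and return it unchanged."""
    formatters[func.__name__] = func
    return func

@formatter
def names(args):
    return "\n".join(sorted(args))

@formatter
def rst(args):
    return "\n".join(".. _%s:" % a for a in sorted(args))
```

Because the decorator returns `func` untouched, the functions remain directly callable while also being discoverable by name.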
`spack.cmd.compiler.``compiler_list`(*args*)[¶](#spack.cmd.compiler.compiler_list) `spack.cmd.compiler.``compiler_remove`(*args*)[¶](#spack.cmd.compiler.compiler_remove) `spack.cmd.compiler.``setup_parser`(*subparser*)[¶](#spack.cmd.compiler.setup_parser) ##### spack.cmd.compilers module[¶](#module-spack.cmd.compilers) `spack.cmd.compilers.``compilers`(*parser*, *args*)[¶](#spack.cmd.compilers.compilers) `spack.cmd.compilers.``setup_parser`(*subparser*)[¶](#spack.cmd.compilers.setup_parser) ##### spack.cmd.concretize module[¶](#module-spack.cmd.concretize) `spack.cmd.concretize.``concretize`(*parser*, *args*)[¶](#spack.cmd.concretize.concretize) `spack.cmd.concretize.``setup_parser`(*subparser*)[¶](#spack.cmd.concretize.setup_parser) ##### spack.cmd.config module[¶](#module-spack.cmd.config) `spack.cmd.config.``config`(*parser*, *args*)[¶](#spack.cmd.config.config) `spack.cmd.config.``config_blame`(*args*)[¶](#spack.cmd.config.config_blame) Print out line-by-line blame of merged YAML. `spack.cmd.config.``config_edit`(*args*)[¶](#spack.cmd.config.config_edit) Edit the configuration file for a specific scope and config section. With no arguments and an active environment, edit the spack.yaml for the active environment. `spack.cmd.config.``config_get`(*args*)[¶](#spack.cmd.config.config_get) Dump merged YAML configuration for a specific section. With no arguments and an active environment, print the contents of the environment’s manifest file (spack.yaml). 
`spack.cmd.config.``setup_parser`(*subparser*)[¶](#spack.cmd.config.setup_parser) ##### spack.cmd.configure module[¶](#module-spack.cmd.configure) `spack.cmd.configure.``configure`(*parser*, *args*)[¶](#spack.cmd.configure.configure) `spack.cmd.configure.``setup_parser`(*subparser*)[¶](#spack.cmd.configure.setup_parser) ##### spack.cmd.create module[¶](#module-spack.cmd.create) *class* `spack.cmd.create.``AutoreconfPackageTemplate`(*name*, *url*, *versions*)[¶](#spack.cmd.create.AutoreconfPackageTemplate) Bases: [`spack.cmd.create.PackageTemplate`](#spack.cmd.create.PackageTemplate) Provides appropriate overrides for Autotools-based packages that *do not* come with a `configure` script `base_class_name` *= 'AutotoolsPackage'*[¶](#spack.cmd.create.AutoreconfPackageTemplate.base_class_name) `body` *= " def autoreconf(self, spec, prefix):\n # FIXME: Modify the autoreconf method as necessary\n autoreconf('--install', '--verbose', '--force')\n\n def configure_args(self):\n # FIXME: Add arguments other than --prefix\n # FIXME: If not needed delete this function\n args = []\n return args"*[¶](#spack.cmd.create.AutoreconfPackageTemplate.body) `dependencies` *= " depends_on('autoconf', type='build')\n depends_on('automake', type='build')\n depends_on('libtool', type='build')\n depends_on('m4', type='build')\n\n # FIXME: Add additional dependencies if required.\n # depends_on('foo')"*[¶](#spack.cmd.create.AutoreconfPackageTemplate.dependencies) *class* `spack.cmd.create.``AutotoolsPackageTemplate`(*name*, *url*, *versions*)[¶](#spack.cmd.create.AutotoolsPackageTemplate) Bases: [`spack.cmd.create.PackageTemplate`](#spack.cmd.create.PackageTemplate) Provides appropriate overrides for Autotools-based packages that *do* come with a `configure` script `base_class_name` *= 'AutotoolsPackage'*[¶](#spack.cmd.create.AutotoolsPackageTemplate.base_class_name) `body` *= ' def configure_args(self):\n # FIXME: Add arguments other than --prefix\n # FIXME: If not needed delete this 
function\n args = []\n return args'*[¶](#spack.cmd.create.AutotoolsPackageTemplate.body) *class* `spack.cmd.create.``BazelPackageTemplate`(*name*, *url*, *versions*)[¶](#spack.cmd.create.BazelPackageTemplate) Bases: [`spack.cmd.create.PackageTemplate`](#spack.cmd.create.PackageTemplate) Provides appropriate overrides for Bazel-based packages `body` *= ' def install(self, spec, prefix):\n # FIXME: Add logic to build and install here.\n bazel()'*[¶](#spack.cmd.create.BazelPackageTemplate.body) `dependencies` *= " # FIXME: Add additional dependencies if required.\n depends_on('bazel', type='build')"*[¶](#spack.cmd.create.BazelPackageTemplate.dependencies) *class* `spack.cmd.create.``BuildSystemGuesser`[¶](#spack.cmd.create.BuildSystemGuesser) Bases: `object` An instance of BuildSystemGuesser provides a callable object to be used during `spack create`. By passing this object to `spack checksum`, we can take a peek at the fetched tarball and discern the build system it uses *class* `spack.cmd.create.``CMakePackageTemplate`(*name*, *url*, *versions*)[¶](#spack.cmd.create.CMakePackageTemplate) Bases: [`spack.cmd.create.PackageTemplate`](#spack.cmd.create.PackageTemplate) Provides appropriate overrides for CMake-based packages `base_class_name` *= 'CMakePackage'*[¶](#spack.cmd.create.CMakePackageTemplate.base_class_name) `body` *= ' def cmake_args(self):\n # FIXME: Add arguments other than\n # FIXME: CMAKE_INSTALL_PREFIX and CMAKE_BUILD_TYPE\n # FIXME: If not needed delete this function\n args = []\n return args'*[¶](#spack.cmd.create.CMakePackageTemplate.body) *class* `spack.cmd.create.``IntelPackageTemplate`(*name*, *url*, *versions*)[¶](#spack.cmd.create.IntelPackageTemplate) Bases: [`spack.cmd.create.PackageTemplate`](#spack.cmd.create.PackageTemplate) Provides appropriate overrides for licensed Intel software `base_class_name` *= 'IntelPackage'*[¶](#spack.cmd.create.IntelPackageTemplate.base_class_name) `body` *= ' # FIXME: Override `setup_environment` if 
necessary.'*[¶](#spack.cmd.create.IntelPackageTemplate.body) *class* `spack.cmd.create.``MakefilePackageTemplate`(*name*, *url*, *versions*)[¶](#spack.cmd.create.MakefilePackageTemplate) Bases: [`spack.cmd.create.PackageTemplate`](#spack.cmd.create.PackageTemplate) Provides appropriate overrides for Makefile packages `base_class_name` *= 'MakefilePackage'*[¶](#spack.cmd.create.MakefilePackageTemplate.base_class_name) `body` *= " def edit(self, spec, prefix):\n # FIXME: Edit the Makefile if necessary\n # FIXME: If not needed delete this function\n # makefile = FileFilter('Makefile')\n # makefile.filter('CC = .*', 'CC = cc')"*[¶](#spack.cmd.create.MakefilePackageTemplate.body) *class* `spack.cmd.create.``MesonPackageTemplate`(*name*, *url*, *versions*)[¶](#spack.cmd.create.MesonPackageTemplate) Bases: [`spack.cmd.create.PackageTemplate`](#spack.cmd.create.PackageTemplate) Provides appropriate overrides for meson-based packages `base_class_name` *= 'MesonPackage'*[¶](#spack.cmd.create.MesonPackageTemplate.base_class_name) `body` *= ' def meson_args(self):\n # FIXME: If not needed delete this function\n args = []\n return args'*[¶](#spack.cmd.create.MesonPackageTemplate.body) *class* `spack.cmd.create.``OctavePackageTemplate`(*name*, **args*)[¶](#spack.cmd.create.OctavePackageTemplate) Bases: [`spack.cmd.create.PackageTemplate`](#spack.cmd.create.PackageTemplate) Provides appropriate overrides for octave packages `base_class_name` *= 'OctavePackage'*[¶](#spack.cmd.create.OctavePackageTemplate.base_class_name) `dependencies` *= " extends('octave')\n\n # FIXME: Add additional dependencies if required.\n # depends_on('octave-foo', type=('build', 'run'))"*[¶](#spack.cmd.create.OctavePackageTemplate.dependencies) *class* `spack.cmd.create.``PackageTemplate`(*name*, *url*, *versions*)[¶](#spack.cmd.create.PackageTemplate) Bases: `object` Provides the default values to be used for the package file template `base_class_name` *= 
'Package'*[¶](#spack.cmd.create.PackageTemplate.base_class_name) `body` *= " def install(self, spec, prefix):\n # FIXME: Unknown build system\n make()\n make('install')"*[¶](#spack.cmd.create.PackageTemplate.body) `dependencies` *= " # FIXME: Add dependencies if required.\n # depends_on('foo')"*[¶](#spack.cmd.create.PackageTemplate.dependencies) `write`(*pkg_path*)[¶](#spack.cmd.create.PackageTemplate.write) Writes the new package file. *class* `spack.cmd.create.``PerlbuildPackageTemplate`(*name*, **args*)[¶](#spack.cmd.create.PerlbuildPackageTemplate) Bases: [`spack.cmd.create.PerlmakePackageTemplate`](#spack.cmd.create.PerlmakePackageTemplate) Provides appropriate overrides for Perl extensions that come with a Build.PL instead of a Makefile.PL `dependencies` *= " depends_on('perl-module-build', type='build')\n\n # FIXME: Add additional dependencies if required:\n # depends_on('perl-foo', type=('build', 'run'))"*[¶](#spack.cmd.create.PerlbuildPackageTemplate.dependencies) *class* `spack.cmd.create.``PerlmakePackageTemplate`(*name*, **args*)[¶](#spack.cmd.create.PerlmakePackageTemplate) Bases: [`spack.cmd.create.PackageTemplate`](#spack.cmd.create.PackageTemplate) Provides appropriate overrides for Perl extensions that come with a Makefile.PL `base_class_name` *= 'PerlPackage'*[¶](#spack.cmd.create.PerlmakePackageTemplate.base_class_name) `body` *= ' def configure_args(self):\n # FIXME: Add non-standard arguments\n # FIXME: If not needed delete this function\n args = []\n return args'*[¶](#spack.cmd.create.PerlmakePackageTemplate.body) `dependencies` *= " # FIXME: Add dependencies if required:\n # depends_on('perl-foo', type=('build', 'run'))"*[¶](#spack.cmd.create.PerlmakePackageTemplate.dependencies) *class* `spack.cmd.create.``PythonPackageTemplate`(*name*, **args*)[¶](#spack.cmd.create.PythonPackageTemplate) Bases: [`spack.cmd.create.PackageTemplate`](#spack.cmd.create.PackageTemplate) Provides appropriate overrides for python extensions `base_class_name` *= 
'PythonPackage'*[¶](#spack.cmd.create.PythonPackageTemplate.base_class_name) `body` *= ' def build_args(self, spec, prefix):\n # FIXME: Add arguments other than --prefix\n # FIXME: If not needed delete this function\n args = []\n return args'*[¶](#spack.cmd.create.PythonPackageTemplate.body) `dependencies` *= " # FIXME: Add dependencies if required.\n # depends_on('py-setuptools', type='build')\n # depends_on('py-foo', type=('build', 'run'))"*[¶](#spack.cmd.create.PythonPackageTemplate.dependencies) *class* `spack.cmd.create.``QMakePackageTemplate`(*name*, *url*, *versions*)[¶](#spack.cmd.create.QMakePackageTemplate) Bases: [`spack.cmd.create.PackageTemplate`](#spack.cmd.create.PackageTemplate) Provides appropriate overrides for QMake-based packages `base_class_name` *= 'QMakePackage'*[¶](#spack.cmd.create.QMakePackageTemplate.base_class_name) `body` *= ' def qmake_args(self):\n # FIXME: If not needed delete this function\n args = []\n return args'*[¶](#spack.cmd.create.QMakePackageTemplate.body) *class* `spack.cmd.create.``RPackageTemplate`(*name*, **args*)[¶](#spack.cmd.create.RPackageTemplate) Bases: [`spack.cmd.create.PackageTemplate`](#spack.cmd.create.PackageTemplate) Provides appropriate overrides for R extensions `base_class_name` *= 'RPackage'*[¶](#spack.cmd.create.RPackageTemplate.base_class_name) `body` *= ' def configure_args(self, spec, prefix):\n # FIXME: Add arguments to pass to install via --configure-args\n # FIXME: If not needed delete this function\n args = []\n return args'*[¶](#spack.cmd.create.RPackageTemplate.body) `dependencies` *= " # FIXME: Add dependencies if required.\n # depends_on('r-foo', type=('build', 'run'))"*[¶](#spack.cmd.create.RPackageTemplate.dependencies) *class* `spack.cmd.create.``SconsPackageTemplate`(*name*, *url*, *versions*)[¶](#spack.cmd.create.SconsPackageTemplate) Bases: [`spack.cmd.create.PackageTemplate`](#spack.cmd.create.PackageTemplate) Provides appropriate overrides for SCons-based packages `base_class_name` *= 
'SConsPackage'*[¶](#spack.cmd.create.SconsPackageTemplate.base_class_name) `body` *= ' def build_args(self, spec, prefix):\n # FIXME: Add arguments to pass to build.\n # FIXME: If not needed delete this function\n args = []\n return args'*[¶](#spack.cmd.create.SconsPackageTemplate.body) *class* `spack.cmd.create.``WafPackageTemplate`(*name*, *url*, *versions*)[¶](#spack.cmd.create.WafPackageTemplate) Bases: [`spack.cmd.create.PackageTemplate`](#spack.cmd.create.PackageTemplate) Provides appropriate override for Waf-based packages `base_class_name` *= 'WafPackage'*[¶](#spack.cmd.create.WafPackageTemplate.base_class_name) `body` *= ' # FIXME: Override configure_args(), build_args(),\n # or install_args() if necessary.'*[¶](#spack.cmd.create.WafPackageTemplate.body) `spack.cmd.create.``create`(*parser*, *args*)[¶](#spack.cmd.create.create) `spack.cmd.create.``get_build_system`(*args*, *guesser*)[¶](#spack.cmd.create.get_build_system) Determine the build system template. If a template is specified, always use that. Otherwise, if a URL is provided, download the tarball and peek inside to guess what build system it uses. Otherwise, use a generic template by default. | Parameters: | * **args** (*argparse.Namespace*) – The arguments given to `spack create` * **guesser** (*BuildSystemGuesser*) – The first_stage_function given to `spack checksum` which records the build system it detects | | Returns: | The name of the build system template to use | | Return type: | str | `spack.cmd.create.``get_name`(*args*)[¶](#spack.cmd.create.get_name) Get the name of the package based on the supplied arguments. If a name was provided, always use that. Otherwise, if a URL was provided, extract the name from that. Otherwise, use a default. 
| Parameters: | **args** (*param argparse.Namespace*) – The arguments given to `spack create` | | Returns: | The name of the package | | Return type: | str | `spack.cmd.create.``get_repository`(*args*, *name*)[¶](#spack.cmd.create.get_repository) Returns a Repo object that will allow us to determine the path where the new package file should be created. | Parameters: | * **args** (*argparse.Namespace*) – The arguments given to `spack create` * **name** (*str*) – The name of the package to create | | Returns: | A Repo object capable of determining the path to the package file | | Return type: | Repo | `spack.cmd.create.``get_url`(*args*)[¶](#spack.cmd.create.get_url) Get the URL to use. Use a default URL if none is provided. | Parameters: | **args** (*argparse.Namespace*) – The arguments given to `spack create` | | Returns: | The URL of the package | | Return type: | str | `spack.cmd.create.``get_versions`(*args*, *name*)[¶](#spack.cmd.create.get_versions) Returns a list of versions and hashes for a package. Also returns a BuildSystemGuesser object. Returns default values if no URL is provided. 
| Parameters: | * **args** (*argparse.Namespace*) – The arguments given to `spack create` * **name** (*str*) – The name of the package | | Returns: | Versions and hashes, and a BuildSystemGuesser object | | Return type: | str and BuildSystemGuesser | `spack.cmd.create.``setup_parser`(*subparser*)[¶](#spack.cmd.create.setup_parser) ##### spack.cmd.deactivate module[¶](#module-spack.cmd.deactivate) `spack.cmd.deactivate.``deactivate`(*parser*, *args*)[¶](#spack.cmd.deactivate.deactivate) `spack.cmd.deactivate.``setup_parser`(*subparser*)[¶](#spack.cmd.deactivate.setup_parser) ##### spack.cmd.debug module[¶](#module-spack.cmd.debug) `spack.cmd.debug.``create_db_tarball`(*args*)[¶](#spack.cmd.debug.create_db_tarball) `spack.cmd.debug.``debug`(*parser*, *args*)[¶](#spack.cmd.debug.debug) `spack.cmd.debug.``setup_parser`(*subparser*)[¶](#spack.cmd.debug.setup_parser) ##### spack.cmd.dependencies module[¶](#module-spack.cmd.dependencies) `spack.cmd.dependencies.``dependencies`(*parser*, *args*)[¶](#spack.cmd.dependencies.dependencies) `spack.cmd.dependencies.``setup_parser`(*subparser*)[¶](#spack.cmd.dependencies.setup_parser) ##### spack.cmd.dependents module[¶](#module-spack.cmd.dependents) `spack.cmd.dependents.``dependents`(*parser*, *args*)[¶](#spack.cmd.dependents.dependents) `spack.cmd.dependents.``get_dependents`(*pkg_name*, *ideps*, *transitive=False*, *dependents=None*)[¶](#spack.cmd.dependents.get_dependents) Get all dependents for a package. | Parameters: | * **pkg_name** (*str*) – name of the package whose dependents should be returned * **ideps** (*dict*) – dictionary of dependents, from inverted_dependencies() * **transitive** (*bool**,* *optional*) – return transitive dependents when True | `spack.cmd.dependents.``inverted_dependencies`()[¶](#spack.cmd.dependents.inverted_dependencies) Iterate through all packages and return a dictionary mapping package names to possible dependencies. 
Virtual packages are included as sources, so that you can query dependents of, e.g., mpi, but virtuals are not included as actual dependents. `spack.cmd.dependents.``setup_parser`(*subparser*)[¶](#spack.cmd.dependents.setup_parser) ##### spack.cmd.diy module[¶](#module-spack.cmd.diy) `spack.cmd.diy.``diy`(*self*, *args*)[¶](#spack.cmd.diy.diy) `spack.cmd.diy.``setup_parser`(*subparser*)[¶](#spack.cmd.diy.setup_parser) ##### spack.cmd.docs module[¶](#module-spack.cmd.docs) `spack.cmd.docs.``docs`(*parser*, *args*)[¶](#spack.cmd.docs.docs) ##### spack.cmd.edit module[¶](#module-spack.cmd.edit) `spack.cmd.edit.``edit`(*parser*, *args*)[¶](#spack.cmd.edit.edit) `spack.cmd.edit.``edit_package`(*name*, *repo_path*, *namespace*)[¶](#spack.cmd.edit.edit_package) Opens the requested package file in your favorite $EDITOR. | Parameters: | * **name** (*str*) – The name of the package * **repo_path** (*str*) – The path to the repository containing this package * **namespace** (*str*) – A valid namespace registered with Spack | `spack.cmd.edit.``setup_parser`(*subparser*)[¶](#spack.cmd.edit.setup_parser) ##### spack.cmd.env module[¶](#module-spack.cmd.env) `spack.cmd.env.``env`(*parser*, *args*)[¶](#spack.cmd.env.env) Look for a function called environment_<name> and call it. 
`spack.cmd.env.``env_activate`(*args*)[¶](#spack.cmd.env.env_activate) `spack.cmd.env.``env_activate_setup_parser`(*subparser*)[¶](#spack.cmd.env.env_activate_setup_parser) set the current environment `spack.cmd.env.``env_create`(*args*)[¶](#spack.cmd.env.env_create) `spack.cmd.env.``env_create_setup_parser`(*subparser*)[¶](#spack.cmd.env.env_create_setup_parser) create a new environment `spack.cmd.env.``env_deactivate`(*args*)[¶](#spack.cmd.env.env_deactivate) `spack.cmd.env.``env_deactivate_setup_parser`(*subparser*)[¶](#spack.cmd.env.env_deactivate_setup_parser) deactivate any active environment in the shell `spack.cmd.env.``env_list`(*args*)[¶](#spack.cmd.env.env_list) `spack.cmd.env.``env_list_setup_parser`(*subparser*)[¶](#spack.cmd.env.env_list_setup_parser) list available environments `spack.cmd.env.``env_loads`(*args*)[¶](#spack.cmd.env.env_loads) `spack.cmd.env.``env_loads_setup_parser`(*subparser*)[¶](#spack.cmd.env.env_loads_setup_parser) list modules for an installed environment ‘(see spack module loads)’ `spack.cmd.env.``env_remove`(*args*)[¶](#spack.cmd.env.env_remove) Remove a *named* environment. This removes an environment managed by Spack. Directory environments and spack.yaml files embedded in repositories should be removed manually. 
`spack.cmd.env.``env_remove_setup_parser`(*subparser*)[¶](#spack.cmd.env.env_remove_setup_parser) remove an existing environment `spack.cmd.env.``env_status`(*args*)[¶](#spack.cmd.env.env_status) `spack.cmd.env.``env_status_setup_parser`(*subparser*)[¶](#spack.cmd.env.env_status_setup_parser) print whether there is an active environment `spack.cmd.env.``setup_parser`(*subparser*)[¶](#spack.cmd.env.setup_parser) `spack.cmd.env.``subcommand_functions` *= {}*[¶](#spack.cmd.env.subcommand_functions) Dictionary mapping subcommand names and aliases to functions `spack.cmd.env.``subcommands` *= ['activate', 'deactivate', 'create', ['remove', 'rm'], ['list', 'ls'], ['status', 'st'], 'loads']*[¶](#spack.cmd.env.subcommands) List of subcommands of spack env ##### spack.cmd.extensions module[¶](#module-spack.cmd.extensions) `spack.cmd.extensions.``extensions`(*parser*, *args*)[¶](#spack.cmd.extensions.extensions) `spack.cmd.extensions.``setup_parser`(*subparser*)[¶](#spack.cmd.extensions.setup_parser) ##### spack.cmd.fetch module[¶](#module-spack.cmd.fetch) `spack.cmd.fetch.``fetch`(*parser*, *args*)[¶](#spack.cmd.fetch.fetch) `spack.cmd.fetch.``setup_parser`(*subparser*)[¶](#spack.cmd.fetch.setup_parser) ##### spack.cmd.find module[¶](#module-spack.cmd.find) `spack.cmd.find.``find`(*parser*, *args*)[¶](#spack.cmd.find.find) `spack.cmd.find.``query_arguments`(*args*)[¶](#spack.cmd.find.query_arguments) `spack.cmd.find.``setup_env`(*env*)[¶](#spack.cmd.find.setup_env) Create a function for decorating specs when in an environment. `spack.cmd.find.``setup_parser`(*subparser*)[¶](#spack.cmd.find.setup_parser) ##### spack.cmd.flake8 module[¶](#module-spack.cmd.flake8) `spack.cmd.flake8.``add_pattern_exemptions`(*line*, *codes*)[¶](#spack.cmd.flake8.add_pattern_exemptions) Add a flake8 exemption to a line. `spack.cmd.flake8.``changed_files`(*args*)[¶](#spack.cmd.flake8.changed_files) Get list of changed files in the Spack repository. 
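The `subcommands` list above mixes bare names with `[name, alias]` lists (e.g. `['remove', 'rm']`). A sketch of flattening such a list into a `subcommand_functions`-style dictionary; the `env_*` bodies and the `_implementations` mapping are placeholders, not Spack's code:

```python
# Sketch: flatten a subcommand list with aliases into a name->function
# dict, so 'rm' and 'remove' dispatch to the same handler.

def env_create(args): return 'create'
def env_remove(args): return 'remove'

# each entry is either a name or a [canonical, alias, ...] list
subcommands = ['create', ['remove', 'rm']]
_implementations = {'create': env_create, 'remove': env_remove}

subcommand_functions = {}
for entry in subcommands:
    names = entry if isinstance(entry, list) else [entry]
    func = _implementations[names[0]]
    for name in names:
        subcommand_functions[name] = func
```
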
`spack.cmd.flake8.``exclude_directories` *= ['/home/docs/checkouts/readthedocs.org/user_builds/spack/checkouts/v0.12.1/lib/spack/external']*[¶](#spack.cmd.flake8.exclude_directories) List of directories to exclude from checks. `spack.cmd.flake8.``filter_file`(*source*, *dest*, *output=False*)[¶](#spack.cmd.flake8.filter_file) Filter a single file through all the patterns in pattern_exemptions. `spack.cmd.flake8.``flake8`(*parser*, *args*)[¶](#spack.cmd.flake8.flake8) `spack.cmd.flake8.``is_package`(*f*)[¶](#spack.cmd.flake8.is_package) Whether flake8 should consider a file as a core file or a package. We run flake8 with different exceptions for the core and for packages, since we allow from spack import * and poking globals into packages. `spack.cmd.flake8.``pattern_exemptions` *= {re.compile('package.py$'): {'F403': [re.compile('^from spack import \\*$')], 'F811': [re.compile('^\\s*@when\\(.*\\)')], 'E501': [re.compile('^\\s*homepage\\s*='), re.compile('^\\s*url\\s*='), re.compile('^\\s*git\\s*='), re.compile('^\\s*svn\\s*='), re.compile('^\\s*hg\\s*='), re.compile('^\\s*list_url\\s*='), re.compile('^\\s*version\\('), re.compile('^\\s*variant\\('), re.compile('^\\s*provides\\('), re.compile('^\\s*extends\\('), re.compile('^\\s*depends_on\\('), re.compile('^\\s*conflicts\\('), re.compile('^\\s*resource\\('), re.compile('^\\s*patch\\(')]}, re.compile('.py$'): {'E501': [re.compile('(https?|ftp|file)\\:'), re.compile('([\\\'"])[0-9a-fA-F]{32,}\\1')]}}*[¶](#spack.cmd.flake8.pattern_exemptions) This is a dict that maps: filename pattern -> flake8 exemption code -> list of patterns for which matching lines should have codes applied. For each file, if the filename pattern matches, we’ll add per-line exemptions if any patterns in the sub-dict match. 
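The nested shape of `pattern_exemptions` (filename pattern -> exemption code -> line patterns) can be exercised with a small stand-in dict; the patterns and the `exempt_codes` helper below are simplified illustrations, not the full Spack set:

```python
import re

# Stand-in for the nested pattern_exemptions structure described above:
# filename regex -> flake8 code -> list of line regexes to exempt.
exemptions = {
    re.compile(r'package\.py$'): {
        'F403': [re.compile(r'^from spack import \*$')],
    },
}

def exempt_codes(filename, line):
    """Return the flake8 codes to waive for this line of this file."""
    codes = []
    for fname_re, by_code in exemptions.items():
        if not fname_re.search(filename):
            continue
        for code, line_res in by_code.items():
            if any(r.search(line) for r in line_res):
                codes.append(code)
    return sorted(codes)
```
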
`spack.cmd.flake8.``setup_parser`(*subparser*)[¶](#spack.cmd.flake8.setup_parser) ##### spack.cmd.gpg module[¶](#module-spack.cmd.gpg) `spack.cmd.gpg.``gpg`(*parser*, *args*)[¶](#spack.cmd.gpg.gpg) `spack.cmd.gpg.``gpg_create`(*args*)[¶](#spack.cmd.gpg.gpg_create) `spack.cmd.gpg.``gpg_export`(*args*)[¶](#spack.cmd.gpg.gpg_export) `spack.cmd.gpg.``gpg_init`(*args*)[¶](#spack.cmd.gpg.gpg_init) `spack.cmd.gpg.``gpg_list`(*args*)[¶](#spack.cmd.gpg.gpg_list) `spack.cmd.gpg.``gpg_sign`(*args*)[¶](#spack.cmd.gpg.gpg_sign) `spack.cmd.gpg.``gpg_trust`(*args*)[¶](#spack.cmd.gpg.gpg_trust) `spack.cmd.gpg.``gpg_untrust`(*args*)[¶](#spack.cmd.gpg.gpg_untrust) `spack.cmd.gpg.``gpg_verify`(*args*)[¶](#spack.cmd.gpg.gpg_verify) `spack.cmd.gpg.``setup_parser`(*subparser*)[¶](#spack.cmd.gpg.setup_parser) ##### spack.cmd.graph module[¶](#module-spack.cmd.graph) `spack.cmd.graph.``graph`(*parser*, *args*)[¶](#spack.cmd.graph.graph) `spack.cmd.graph.``setup_parser`(*subparser*)[¶](#spack.cmd.graph.setup_parser) ##### spack.cmd.help module[¶](#module-spack.cmd.help) `spack.cmd.help.``help`(*parser*, *args*)[¶](#spack.cmd.help.help) `spack.cmd.help.``setup_parser`(*subparser*)[¶](#spack.cmd.help.setup_parser) ##### spack.cmd.info module[¶](#module-spack.cmd.info) *class* `spack.cmd.info.``VariantFormatter`(*variants*, *max_widths=(30*, *20*, *30)*)[¶](#spack.cmd.info.VariantFormatter) Bases: `object` `default`(*v*)[¶](#spack.cmd.info.VariantFormatter.default) `lines`[¶](#spack.cmd.info.VariantFormatter.lines) `spack.cmd.info.``info`(*parser*, *args*)[¶](#spack.cmd.info.info) `spack.cmd.info.``padder`(*str_list*, *extra=0*)[¶](#spack.cmd.info.padder) Return a function to pad elements of a list. `spack.cmd.info.``print_text_info`(*pkg*)[¶](#spack.cmd.info.print_text_info) Print out a plain text description of a package. 
`spack.cmd.info.``section_title`(*s*)[¶](#spack.cmd.info.section_title) `spack.cmd.info.``setup_parser`(*subparser*)[¶](#spack.cmd.info.setup_parser) `spack.cmd.info.``variant`(*s*)[¶](#spack.cmd.info.variant) `spack.cmd.info.``version`(*s*)[¶](#spack.cmd.info.version) ##### spack.cmd.install module[¶](#module-spack.cmd.install) `spack.cmd.install.``default_log_file`(*spec*)[¶](#spack.cmd.install.default_log_file) Computes the default filename for the log file and creates the corresponding directory if not present `spack.cmd.install.``install`(*parser*, *args*, ***kwargs*)[¶](#spack.cmd.install.install) `spack.cmd.install.``install_spec`(*cli_args*, *kwargs*, *abstract_spec*, *spec*)[¶](#spack.cmd.install.install_spec) Do the actual installation. `spack.cmd.install.``setup_parser`(*subparser*)[¶](#spack.cmd.install.setup_parser) `spack.cmd.install.``update_kwargs_from_args`(*args*, *kwargs*)[¶](#spack.cmd.install.update_kwargs_from_args) Parse cli arguments and construct a dictionary that will be passed to Package.do_install API ##### spack.cmd.license module[¶](#module-spack.cmd.license) `spack.cmd.license.``apache2_mit_spdx` *= '(Apache-2.0 OR MIT)'*[¶](#spack.cmd.license.apache2_mit_spdx) Spack’s license identifier `spack.cmd.license.``git` *= <exe: ['/usr/bin/git']>*[¶](#spack.cmd.license.git) need the git command to check new files `spack.cmd.license.``lgpl_exceptions` *= ['lib/spack/spack/cmd/license.py', 'lib/spack/spack/test/cmd/license.py']*[¶](#spack.cmd.license.lgpl_exceptions) licensed files that can have LGPL language in them so far, just this command – so it can find LGPL things elsewhere `spack.cmd.license.``license`(*parser*, *args*)[¶](#spack.cmd.license.license) `spack.cmd.license.``license_lines` *= 6*[¶](#spack.cmd.license.license_lines) SPDX license id must appear in the first <license_lines> lines of a file `spack.cmd.license.``licensed_files` *= ['^bin/spack$', '^bin/spack-python$', '^bin/sbang$', '^lib/spack/spack/.*\\.py$', 
'^lib/spack/spack/.*\\.sh$', '^lib/spack/llnl/.*\\.py$', '^lib/spack/env/cc$', '^lib/spack/docs/(?!command_index|spack|llnl).*\\.rst$', '^lib/spack/docs/.*\\.py$', '^lib/spack/external/__init__.py$', '^lib/spack/external/ordereddict_backport.py$', '^share/spack/.*\\.sh$', '^share/spack/.*\\.bash$', '^share/spack/.*\\.csh$', '^share/spack/qa/run-[^/]*$', '^var/spack/repos/.*/package.py$']*[¶](#spack.cmd.license.licensed_files) regular expressions for licensed files. `spack.cmd.license.``list_files`(*args*)[¶](#spack.cmd.license.list_files) list files in spack that should have license headers `spack.cmd.license.``setup_parser`(*subparser*)[¶](#spack.cmd.license.setup_parser) `spack.cmd.license.``verify`(*args*)[¶](#spack.cmd.license.verify) verify that files in spack have the right license header ##### spack.cmd.list module[¶](#module-spack.cmd.list) `spack.cmd.list.``filter_by_name`(*pkgs*, *args*)[¶](#spack.cmd.list.filter_by_name) Filters the sequence of packages according to user prescriptions | Parameters: | * **pkgs** – sequence of packages * **args** – parsed command line arguments | | Returns: | filtered and sorted list of packages | `spack.cmd.list.``formatter`(*func*)[¶](#spack.cmd.list.formatter) Decorator used to register formatters `spack.cmd.list.``github_url`(*pkg*)[¶](#spack.cmd.list.github_url) Link to a package file on github. `spack.cmd.list.``html`(*pkg_names*)[¶](#spack.cmd.list.html) Print out information on all packages in Sphinx HTML. This is intended to be inlined directly into Sphinx documentation. We write HTML instead of RST for speed; generating RST from *all* packages causes the Sphinx build to take forever. Including this as raw HTML is much faster. `spack.cmd.list.``list`(*parser*, *args*)[¶](#spack.cmd.list.list) `spack.cmd.list.``name_only`(*pkgs*)[¶](#spack.cmd.list.name_only) `spack.cmd.list.``rows_for_ncols`(*elts*, *ncols*)[¶](#spack.cmd.list.rows_for_ncols) Print out rows in a table with ncols of elts laid out vertically. 
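The vertical layout that the `rows_for_ncols` docstring describes is a column-major fill: elements run down each column before moving to the next. An illustrative generator version (not Spack's code) that yields rows:

```python
# Sketch of column-major layout: with ncols columns, element i of a
# column sits nrows apart in the flat list, so row r collects
# elts[r], elts[r + nrows], elts[r + 2*nrows], ...

def rows_for_ncols(elts, ncols):
    elts = list(elts)
    nrows = (len(elts) + ncols - 1) // ncols  # ceiling division
    for row in range(nrows):
        yield [elts[i] for i in range(row, len(elts), nrows)]
```
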
`spack.cmd.list.``rst`(*pkg_names*)[¶](#spack.cmd.list.rst) Print out information on all packages in restructured text. `spack.cmd.list.``rst_table`(*elts*)[¶](#spack.cmd.list.rst_table) Print out a RST-style table. `spack.cmd.list.``setup_parser`(*subparser*)[¶](#spack.cmd.list.setup_parser) ##### spack.cmd.load module[¶](#module-spack.cmd.load) `spack.cmd.load.``load`(*parser*, *args*)[¶](#spack.cmd.load.load) `spack.cmd.load.``setup_parser`(*subparser*)[¶](#spack.cmd.load.setup_parser) Parser is only constructed so that this prints a nice help message with -h. ##### spack.cmd.location module[¶](#module-spack.cmd.location) `spack.cmd.location.``location`(*parser*, *args*)[¶](#spack.cmd.location.location) `spack.cmd.location.``setup_parser`(*subparser*)[¶](#spack.cmd.location.setup_parser) ##### spack.cmd.log_parse module[¶](#module-spack.cmd.log_parse) `spack.cmd.log_parse.``log_parse`(*parser*, *args*)[¶](#spack.cmd.log_parse.log_parse) `spack.cmd.log_parse.``setup_parser`(*subparser*)[¶](#spack.cmd.log_parse.setup_parser) ##### spack.cmd.mirror module[¶](#module-spack.cmd.mirror) `spack.cmd.mirror.``mirror`(*parser*, *args*)[¶](#spack.cmd.mirror.mirror) `spack.cmd.mirror.``mirror_add`(*args*)[¶](#spack.cmd.mirror.mirror_add) Add a mirror to Spack. `spack.cmd.mirror.``mirror_create`(*args*)[¶](#spack.cmd.mirror.mirror_create) Create a directory to be used as a spack mirror, and fill it with package archives. `spack.cmd.mirror.``mirror_list`(*args*)[¶](#spack.cmd.mirror.mirror_list) Print out available mirrors to the console. `spack.cmd.mirror.``mirror_remove`(*args*)[¶](#spack.cmd.mirror.mirror_remove) Remove a mirror by name. 
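`spack.cmd.list.rst_table` above prints an RST-style table. In reStructuredText, a "simple table" is just rows of cells framed by lines of `=` characters; a hedged sketch of producing one (the column count and fixed width here are arbitrary choices, not Spack's):

```python
def rst_table(elts, ncols=3, width=20):
    """Render strings as an RST simple table: a border row of '='
    segments, the elements laid out ncols per row, a closing border."""
    border = " ".join("=" * width for _ in range(ncols))
    rows = []
    for i in range(0, len(elts), ncols):
        row = list(elts[i:i + ncols])
        row += [""] * (ncols - len(row))  # pad the final, short row
        rows.append(" ".join(e.ljust(width) for e in row).rstrip())
    return "\n".join([border] + rows + [border])
```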
`spack.cmd.mirror.``setup_parser`(*subparser*)[¶](#spack.cmd.mirror.setup_parser) ##### spack.cmd.module module[¶](#module-spack.cmd.module) `spack.cmd.module.``add_deprecated_command`(*subparser*, *name*)[¶](#spack.cmd.module.add_deprecated_command) `spack.cmd.module.``handle_deprecated_command`(*args*, *unknown_args*)[¶](#spack.cmd.module.handle_deprecated_command) `spack.cmd.module.``module`(*parser*, *args*, *unknown_args*)[¶](#spack.cmd.module.module) `spack.cmd.module.``setup_parser`(*subparser*)[¶](#spack.cmd.module.setup_parser) ##### spack.cmd.patch module[¶](#module-spack.cmd.patch) `spack.cmd.patch.``patch`(*parser*, *args*)[¶](#spack.cmd.patch.patch) `spack.cmd.patch.``setup_parser`(*subparser*)[¶](#spack.cmd.patch.setup_parser) ##### spack.cmd.pkg module[¶](#module-spack.cmd.pkg) `spack.cmd.pkg.``diff_packages`(*rev1*, *rev2*)[¶](#spack.cmd.pkg.diff_packages) `spack.cmd.pkg.``list_packages`(*rev*)[¶](#spack.cmd.pkg.list_packages) `spack.cmd.pkg.``pkg`(*parser*, *args*)[¶](#spack.cmd.pkg.pkg) `spack.cmd.pkg.``pkg_add`(*args*)[¶](#spack.cmd.pkg.pkg_add) `spack.cmd.pkg.``pkg_added`(*args*)[¶](#spack.cmd.pkg.pkg_added) Show packages added since a commit. `spack.cmd.pkg.``pkg_diff`(*args*)[¶](#spack.cmd.pkg.pkg_diff) Compare packages available in two different git revisions. `spack.cmd.pkg.``pkg_list`(*args*)[¶](#spack.cmd.pkg.pkg_list) List packages associated with a particular spack git revision. `spack.cmd.pkg.``pkg_removed`(*args*)[¶](#spack.cmd.pkg.pkg_removed) Show packages removed since a commit. 
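`pkg_diff` above compares the packages available at two git revisions; conceptually this is a pair of set differences over the package lists that `list_packages` produces for each revision. A sketch of that core step (the real command obtains the lists by shelling out to git; this helper only shows the comparison):

```python
def diff_packages(pkgs_at_rev1, pkgs_at_rev2):
    """Return (removed, added): names only in rev1, and names only in rev2."""
    at1, at2 = set(pkgs_at_rev1), set(pkgs_at_rev2)
    return sorted(at1 - at2), sorted(at2 - at1)
```

`pkg_removed` and `pkg_added` then correspond to the two halves of this result.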
`spack.cmd.pkg.``setup_parser`(*subparser*)[¶](#spack.cmd.pkg.setup_parser) ##### spack.cmd.providers module[¶](#module-spack.cmd.providers) `spack.cmd.providers.``providers`(*parser*, *args*)[¶](#spack.cmd.providers.providers) `spack.cmd.providers.``setup_parser`(*subparser*)[¶](#spack.cmd.providers.setup_parser) ##### spack.cmd.pydoc module[¶](#module-spack.cmd.pydoc) `spack.cmd.pydoc.``pydoc`(*parser*, *args*)[¶](#spack.cmd.pydoc.pydoc) `spack.cmd.pydoc.``setup_parser`(*subparser*)[¶](#spack.cmd.pydoc.setup_parser) ##### spack.cmd.python module[¶](#module-spack.cmd.python) `spack.cmd.python.``python`(*parser*, *args*)[¶](#spack.cmd.python.python) `spack.cmd.python.``setup_parser`(*subparser*)[¶](#spack.cmd.python.setup_parser) ##### spack.cmd.reindex module[¶](#module-spack.cmd.reindex) `spack.cmd.reindex.``reindex`(*parser*, *args*)[¶](#spack.cmd.reindex.reindex) ##### spack.cmd.remove module[¶](#module-spack.cmd.remove) `spack.cmd.remove.``remove`(*parser*, *args*)[¶](#spack.cmd.remove.remove) `spack.cmd.remove.``setup_parser`(*subparser*)[¶](#spack.cmd.remove.setup_parser) ##### spack.cmd.repo module[¶](#module-spack.cmd.repo) `spack.cmd.repo.``repo`(*parser*, *args*)[¶](#spack.cmd.repo.repo) `spack.cmd.repo.``repo_add`(*args*)[¶](#spack.cmd.repo.repo_add) Add a package source to Spack’s configuration. `spack.cmd.repo.``repo_create`(*args*)[¶](#spack.cmd.repo.repo_create) Create a new package repository. `spack.cmd.repo.``repo_list`(*args*)[¶](#spack.cmd.repo.repo_list) Show registered repositories and their namespaces. `spack.cmd.repo.``repo_remove`(*args*)[¶](#spack.cmd.repo.repo_remove) Remove a repository from Spack’s configuration. 
`spack.cmd.repo.``setup_parser`(*subparser*)[¶](#spack.cmd.repo.setup_parser) ##### spack.cmd.restage module[¶](#module-spack.cmd.restage) `spack.cmd.restage.``restage`(*parser*, *args*)[¶](#spack.cmd.restage.restage) `spack.cmd.restage.``setup_parser`(*subparser*)[¶](#spack.cmd.restage.setup_parser) ##### spack.cmd.setup module[¶](#module-spack.cmd.setup) `spack.cmd.setup.``setup`(*self*, *args*)[¶](#spack.cmd.setup.setup) `spack.cmd.setup.``setup_parser`(*subparser*)[¶](#spack.cmd.setup.setup_parser) `spack.cmd.setup.``spack_transitive_include_path`()[¶](#spack.cmd.setup.spack_transitive_include_path) `spack.cmd.setup.``write_spconfig`(*package*, *dirty*)[¶](#spack.cmd.setup.write_spconfig) ##### spack.cmd.spec module[¶](#module-spack.cmd.spec) `spack.cmd.spec.``setup_parser`(*subparser*)[¶](#spack.cmd.spec.setup_parser) `spack.cmd.spec.``spec`(*parser*, *args*)[¶](#spack.cmd.spec.spec) ##### spack.cmd.stage module[¶](#module-spack.cmd.stage) `spack.cmd.stage.``setup_parser`(*subparser*)[¶](#spack.cmd.stage.setup_parser) `spack.cmd.stage.``stage`(*parser*, *args*)[¶](#spack.cmd.stage.stage) ##### spack.cmd.test module[¶](#module-spack.cmd.test) `spack.cmd.test.``do_list`(*args*, *unknown_args*)[¶](#spack.cmd.test.do_list) Print a simpler list of tests than what pytest offers. `spack.cmd.test.``setup_parser`(*subparser*)[¶](#spack.cmd.test.setup_parser) `spack.cmd.test.``test`(*parser*, *args*, *unknown_args*)[¶](#spack.cmd.test.test) ##### spack.cmd.uninstall module[¶](#module-spack.cmd.uninstall) `spack.cmd.uninstall.``add_common_arguments`(*subparser*)[¶](#spack.cmd.uninstall.add_common_arguments) `spack.cmd.uninstall.``dependent_environments`(*specs*)[¶](#spack.cmd.uninstall.dependent_environments) Map each spec to environments that depend on it.
| Parameters: | **specs** (*list*) – list of Specs | | Returns: | mapping from spec to lists of dependent Environments | | Return type: | (dict) | `spack.cmd.uninstall.``do_uninstall`(*env*, *specs*, *force*)[¶](#spack.cmd.uninstall.do_uninstall) Uninstalls all the specs in a list. | Parameters: | * **env** (*Environment*) – active environment, or `None` if there is not one * **specs** (*list*) – list of specs to be uninstalled * **force** (*bool*) – force uninstallation | `spack.cmd.uninstall.``find_matching_specs`(*env*, *specs*, *allow_multiple_matches=False*, *force=False*)[¶](#spack.cmd.uninstall.find_matching_specs) Returns a list of specs matching the (not necessarily concretized) specs given on the command line. | Parameters: | * **env** (*Environment*) – active environment, or `None` if there is not one * **specs** (*list*) – list of specs to be matched against installed packages * **allow_multiple_matches** (*bool*) – if True multiple matches are admitted | | Returns: | list of specs | `spack.cmd.uninstall.``get_uninstall_list`(*args*, *specs*, *env*)[¶](#spack.cmd.uninstall.get_uninstall_list) `spack.cmd.uninstall.``inactive_dependent_environments`(*spec_envs*)[¶](#spack.cmd.uninstall.inactive_dependent_environments) Strip the active environment from a dependent map. Take the output of `dependent_environment()` and remove the active environment from all mappings. Remove any specs in the map that now have no dependent environments. Return the result. | Parameters: | **spec_envs** (*dict*) – mapping from spec to lists of dependent Environments | | Returns: | mapping from spec to lists of *inactive* dependent Environments | | Return type: | (dict) | `spack.cmd.uninstall.``installed_dependents`(*specs*, *env*)[¶](#spack.cmd.uninstall.installed_dependents) Map each spec to a list of its installed dependents.
| Parameters: | * **specs** (*list*) – list of Specs * **env** (*Environment*) – the active environment, or None | | Returns: | two mappings: one from specs to their dependent environments in the active environment (or global scope if there is no environment), and one from specs to their dependents in *inactive* environments (empty if there is no environment) | | Return type: | (tuple of dicts) | `spack.cmd.uninstall.``setup_parser`(*subparser*)[¶](#spack.cmd.uninstall.setup_parser) `spack.cmd.uninstall.``uninstall`(*parser*, *args*)[¶](#spack.cmd.uninstall.uninstall) `spack.cmd.uninstall.``uninstall_specs`(*args*, *specs*)[¶](#spack.cmd.uninstall.uninstall_specs) ##### spack.cmd.unload module[¶](#module-spack.cmd.unload) `spack.cmd.unload.``setup_parser`(*subparser*)[¶](#spack.cmd.unload.setup_parser) Parser is only constructed so that this prints a nice help message with -h. `spack.cmd.unload.``unload`(*parser*, *args*)[¶](#spack.cmd.unload.unload) ##### spack.cmd.unuse module[¶](#module-spack.cmd.unuse) `spack.cmd.unuse.``setup_parser`(*subparser*)[¶](#spack.cmd.unuse.setup_parser) Parser is only constructed so that this prints a nice help message with -h. `spack.cmd.unuse.``unuse`(*parser*, *args*)[¶](#spack.cmd.unuse.unuse) ##### spack.cmd.url module[¶](#module-spack.cmd.url) `spack.cmd.url.``name_parsed_correctly`(*pkg*, *name*)[¶](#spack.cmd.url.name_parsed_correctly) Determine if the name of a package was correctly parsed. | Parameters: | * **pkg** ([*spack.package.PackageBase*](index.html#spack.package.PackageBase)) – The Spack package * **name** (*str*) – The name that was extracted from the URL | | Returns: | True if the name was correctly parsed, else False | | Return type: | bool | `spack.cmd.url.``print_name_and_version`(*url*)[¶](#spack.cmd.url.print_name_and_version) Prints a URL. Underlines the detected name with dashes and the detected version with tildes.
| Parameters: | **url** (*str*) – The url to parse | `spack.cmd.url.``remove_separators`(*version*)[¶](#spack.cmd.url.remove_separators) Removes separator characters (‘.’, ‘_’, and ‘-‘) from a version. A version like 1.2.3 may be displayed as 1_2_3 in the URL. Make sure 1.2.3, 1-2-3, 1_2_3, and 123 are considered equal. Unfortunately, this also means that 1.23 and 12.3 are equal. | Parameters: | **version** (*str* *or* *Version*) – A version | | Returns: | The version with all separator characters removed | | Return type: | str | `spack.cmd.url.``setup_parser`(*subparser*)[¶](#spack.cmd.url.setup_parser) `spack.cmd.url.``url`(*parser*, *args*)[¶](#spack.cmd.url.url) `spack.cmd.url.``url_list`(*args*)[¶](#spack.cmd.url.url_list) `spack.cmd.url.``url_list_parsing`(*args*, *urls*, *url*, *pkg*)[¶](#spack.cmd.url.url_list_parsing) Helper function for [`url_list()`](#spack.cmd.url.url_list). | Parameters: | * **args** (*argparse.Namespace*) – The arguments given to `spack url list` * **urls** (*set*) – List of URLs that have already been added * **url** (*str* *or* *None*) – A URL to potentially add to `urls` depending on `args` * **pkg** ([*spack.package.PackageBase*](index.html#spack.package.PackageBase)) – The Spack package | | Returns: | The updated set of `urls` | | Return type: | set | `spack.cmd.url.``url_parse`(*args*)[¶](#spack.cmd.url.url_parse) `spack.cmd.url.``url_stats`(*args*)[¶](#spack.cmd.url.url_stats) `spack.cmd.url.``url_summary`(*args*)[¶](#spack.cmd.url.url_summary) `spack.cmd.url.``version_parsed_correctly`(*pkg*, *version*)[¶](#spack.cmd.url.version_parsed_correctly) Determine if the version of a package was correctly parsed. 
| Parameters: | * **pkg** ([*spack.package.PackageBase*](index.html#spack.package.PackageBase)) – The Spack package * **version** (*str*) – The version that was extracted from the URL | | Returns: | True if the version was correctly parsed, else False | | Return type: | bool | ##### spack.cmd.use module[¶](#module-spack.cmd.use) `spack.cmd.use.``setup_parser`(*subparser*)[¶](#spack.cmd.use.setup_parser) Parser is only constructed so that this prints a nice help message with -h. `spack.cmd.use.``use`(*parser*, *args*)[¶](#spack.cmd.use.use) ##### spack.cmd.versions module[¶](#module-spack.cmd.versions) `spack.cmd.versions.``setup_parser`(*subparser*)[¶](#spack.cmd.versions.setup_parser) `spack.cmd.versions.``versions`(*parser*, *args*)[¶](#spack.cmd.versions.versions) ##### spack.cmd.view module[¶](#module-spack.cmd.view) Produce a “view” of a Spack DAG. A “view” is a file hierarchy representing the union of a number of Spack-installed package file hierarchies. The union is formed from: * specs resolved from the package names given by the user (the seeds) * all dependencies of the seeds unless the user specifies –no-dependencies * less any specs with names matching the regular expressions given by –exclude The view can be built and torn down via a number of methods (the “actions”): * symlink :: a file system view which is a directory hierarchy that is the union of the hierarchies of the installed packages in the DAG where installed files are referenced via symlinks. * hardlink :: like the symlink view but hardlinks are used. * statlink :: a view producing a status report of a symlink or hardlink view. The file system view concept is inspired by Nix, implemented by [<EMAIL>](mailto:<EMAIL>) ca 2016. All operations on views are performed via proxy objects such as YamlFilesystemView.
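The symlink action described above forms the union of installed package hierarchies by recreating directories and symlinking files into the view. A minimal sketch of that idea (conflict reporting, hardlink mode, and removal — all of which the real YamlFilesystemView handles — are omitted, and the function name is illustrative):

```python
import os

def add_to_view(view_root, prefix):
    """Merge one installed prefix into a view directory.

    Directories are recreated as real directories so hierarchies from
    many packages can overlap; files become symlinks back into the
    install prefix. The first package to provide a given path wins."""
    for dirpath, _dirnames, filenames in os.walk(prefix):
        rel = os.path.relpath(dirpath, prefix)
        target_dir = os.path.join(view_root, rel)
        os.makedirs(target_dir, exist_ok=True)
        for fname in filenames:
            src = os.path.join(dirpath, fname)
            dst = os.path.join(target_dir, fname)
            if not os.path.lexists(dst):  # skip conflicting paths
                os.symlink(src, dst)
```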
`spack.cmd.view.``relaxed_disambiguate`(*specs*, *view*)[¶](#spack.cmd.view.relaxed_disambiguate) When dealing with querying actions (remove/status) the name of the spec is sufficient even though more versions of that name might be in the database. `spack.cmd.view.``setup_parser`(*sp*)[¶](#spack.cmd.view.setup_parser) `spack.cmd.view.``view`(*parser*, *args*)[¶](#spack.cmd.view.view) Produce a view of a set of packages. ##### Module contents[¶](#module-spack.cmd) `spack.cmd.``all_commands`()[¶](#spack.cmd.all_commands) Names of all commands `spack.cmd.``cmd_name`(*python_name*)[¶](#spack.cmd.cmd_name) Convert module name (with `_`) to command name (with `-`). `spack.cmd.``disambiguate_spec`(*spec*)[¶](#spack.cmd.disambiguate_spec) `spack.cmd.``display_specs`(*specs*, *args=None*, ***kwargs*)[¶](#spack.cmd.display_specs) Display human readable specs with customizable formatting. Prints the supplied specs to the screen, formatted according to the arguments provided. Specs are grouped by architecture and compiler, and columnized if possible. There are three possible “modes”: > * `short` (default): short specs with name and version, columnized > * `paths`: Two columns: one for specs, one for paths > * `deps`: Dependency-tree style, like `spack spec`; can get long Options can add more information to the default display. Options can be provided either as keyword arguments or as an argparse namespace. Keyword arguments take precedence over settings in the argparse namespace. 
| Parameters: | * **specs** (*list of spack.spec.Spec*) – the specs to display * **args** (*optional argparse.Namespace*) – namespace containing formatting arguments | | Keyword Arguments: | * **mode** (*str*) – Either ‘short’, ‘paths’, or ‘deps’ * **long** (*bool*) – Display short hashes with specs * **very_long** (*bool*) – Display full hashes with specs (supersedes `long`) * **namespace** (*bool*) – Print namespaces along with names * **show_flags** (*bool*) – Show compiler flags with specs * **variants** (*bool*) – Show variants with specs * **indent** (*int*) – indent each line this much * **decorators** (*dict*) – dictionary mapping specs to decorators * **header_callback** (*function*) – called at start of arch/compiler sections * **all_headers** (*bool*) – show headers even when arch/compiler aren’t defined | `spack.cmd.``elide_list`(*line_list*, *max_num=10*)[¶](#spack.cmd.elide_list) Takes a long list and limits it to a smaller number of elements, replacing intervening elements with '...'. For example: ``` elide_list([1,2,3,4,5,6], 4) ``` gives: ``` [1, 2, 3, '...', 6] ``` `spack.cmd.``get_command`(*cmd_name*)[¶](#spack.cmd.get_command) Imports the command’s function from a module and returns it. | Parameters: | **cmd_name** (*str*) – name of the command for which to get a module (contains `-`, not `_`). | `spack.cmd.``get_module`(*cmd_name*)[¶](#spack.cmd.get_module) Imports the module for a particular command name and returns it. | Parameters: | **cmd_name** (*str*) – name of the command for which to get a module (contains `-`, not `_`). | `spack.cmd.``gray_hash`(*spec*, *length*)[¶](#spack.cmd.gray_hash) `spack.cmd.``parse_specs`(*args*, ***kwargs*)[¶](#spack.cmd.parse_specs) Convenience function for parsing arguments from specs. Handles common exceptions and dies if there are errors. `spack.cmd.``python_name`(*cmd_name*)[¶](#spack.cmd.python_name) Convert `-` to `_` in command name, to make a valid identifier. `spack.cmd.``remove_options`(*parser*, **options*)[¶](#spack.cmd.remove_options) Remove some options from a parser. `spack.cmd.``spack_is_git_repo`()[¶](#spack.cmd.spack_is_git_repo) Ensure that this instance of Spack is a git clone.
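The `elide_list` example and the `cmd_name`/`python_name` conversions above are easy to reproduce; this sketch mirrors the documented behavior (Spack's own edge-case handling may differ slightly):

```python
def elide_list(line_list, max_num=10):
    """Keep the first max_num - 1 elements and the last one, putting
    '...' in place of everything elided in between."""
    if len(line_list) > max_num:
        return line_list[:max_num - 1] + ["..."] + [line_list[-1]]
    return line_list

def python_name(cmd_name):
    """Convert `-` to `_` in a command name, making a valid identifier."""
    return cmd_name.replace("-", "_")

def cmd_name(python_name):
    """Convert a module name (with `_`) to a command name (with `-`)."""
    return python_name.replace("_", "-")
```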
#### spack.compilers package[¶](#spack-compilers-package) ##### Submodules[¶](#submodules) ##### spack.compilers.arm module[¶](#module-spack.compilers.arm) *class* `spack.compilers.arm.``Arm`(*cspec*, *operating_system*, *target*, *paths*, *modules=[]*, *alias=None*, *environment=None*, *extra_rpaths=None*, ***kwargs*)[¶](#spack.compilers.arm.Arm) Bases: [`spack.compiler.Compiler`](index.html#spack.compiler.Compiler) `cc_names` *= ['armclang']*[¶](#spack.compilers.arm.Arm.cc_names) `cxx11_flag`[¶](#spack.compilers.arm.Arm.cxx11_flag) `cxx14_flag`[¶](#spack.compilers.arm.Arm.cxx14_flag) `cxx17_flag`[¶](#spack.compilers.arm.Arm.cxx17_flag) `cxx_names` *= ['armclang++']*[¶](#spack.compilers.arm.Arm.cxx_names) *classmethod* `default_version`(*comp*)[¶](#spack.compilers.arm.Arm.default_version) Override just this to override all compiler version functions. `f77_names` *= ['armflang']*[¶](#spack.compilers.arm.Arm.f77_names) *classmethod* `f77_version`(*f77*)[¶](#spack.compilers.arm.Arm.f77_version) `fc_names` *= ['armflang']*[¶](#spack.compilers.arm.Arm.fc_names) *classmethod* `fc_version`(*fc*)[¶](#spack.compilers.arm.Arm.fc_version) `link_paths` *= {'cc': 'clang/clang', 'cxx': 'clang/clang++', 'f77': 'clang/flang', 'fc': 'clang/flang'}*[¶](#spack.compilers.arm.Arm.link_paths) `openmp_flag`[¶](#spack.compilers.arm.Arm.openmp_flag) `pic_flag`[¶](#spack.compilers.arm.Arm.pic_flag) ##### spack.compilers.cce module[¶](#module-spack.compilers.cce) *class* `spack.compilers.cce.``Cce`(*cspec*, *operating_system*, *target*, *paths*, *modules=[]*, *alias=None*, *environment=None*, *extra_rpaths=None*, ***kwargs*)[¶](#spack.compilers.cce.Cce) Bases: [`spack.compiler.Compiler`](index.html#spack.compiler.Compiler) Cray compiler environment compiler. 
`PrgEnv` *= 'PrgEnv-cray'*[¶](#spack.compilers.cce.Cce.PrgEnv) `PrgEnv_compiler` *= 'cce'*[¶](#spack.compilers.cce.Cce.PrgEnv_compiler) `cc_names` *= ['cc']*[¶](#spack.compilers.cce.Cce.cc_names) `cxx11_flag`[¶](#spack.compilers.cce.Cce.cxx11_flag) `cxx_names` *= ['CC']*[¶](#spack.compilers.cce.Cce.cxx_names) *classmethod* `default_version`(*comp*)[¶](#spack.compilers.cce.Cce.default_version) Override just this to override all compiler version functions. `f77_names` *= ['ftn']*[¶](#spack.compilers.cce.Cce.f77_names) `fc_names` *= ['ftn']*[¶](#spack.compilers.cce.Cce.fc_names) `link_paths` *= {'cc': 'cc', 'cxx': 'c++', 'f77': 'f77', 'fc': 'fc'}*[¶](#spack.compilers.cce.Cce.link_paths) `openmp_flag`[¶](#spack.compilers.cce.Cce.openmp_flag) `pic_flag`[¶](#spack.compilers.cce.Cce.pic_flag) `suffixes` *= ['-mp-\\d\\.\\d']*[¶](#spack.compilers.cce.Cce.suffixes) ##### spack.compilers.clang module[¶](#module-spack.compilers.clang) *class* `spack.compilers.clang.``Clang`(*cspec*, *operating_system*, *target*, *paths*, *modules=[]*, *alias=None*, *environment=None*, *extra_rpaths=None*, ***kwargs*)[¶](#spack.compilers.clang.Clang) Bases: [`spack.compiler.Compiler`](index.html#spack.compiler.Compiler) `cc_names` *= ['clang']*[¶](#spack.compilers.clang.Clang.cc_names) `cxx11_flag`[¶](#spack.compilers.clang.Clang.cxx11_flag) `cxx14_flag`[¶](#spack.compilers.clang.Clang.cxx14_flag) `cxx17_flag`[¶](#spack.compilers.clang.Clang.cxx17_flag) `cxx_names` *= ['clang++']*[¶](#spack.compilers.clang.Clang.cxx_names) *classmethod* `default_version`(*comp*)[¶](#spack.compilers.clang.Clang.default_version) The `--version` option works for clang compilers. 
On most platforms, output looks like this: ``` clang version 3.1 (trunk 149096) Target: x86_64-unknown-linux-gnu Thread model: posix ``` On macOS, it looks like this: ``` Apple LLVM version 7.0.2 (clang-700.1.81) Target: x86_64-apple-darwin15.2.0 Thread model: posix ``` `f77_names` *= ['flang', 'gfortran', 'xlf_r']*[¶](#spack.compilers.clang.Clang.f77_names) *classmethod* `f77_version`(*f77*)[¶](#spack.compilers.clang.Clang.f77_version) `fc_names` *= ['flang', 'gfortran', 'xlf90_r']*[¶](#spack.compilers.clang.Clang.fc_names) *classmethod* `fc_version`(*fc*)[¶](#spack.compilers.clang.Clang.fc_version) `is_apple`[¶](#spack.compilers.clang.Clang.is_apple) `link_paths`[¶](#spack.compilers.clang.Clang.link_paths) `openmp_flag`[¶](#spack.compilers.clang.Clang.openmp_flag) `pic_flag`[¶](#spack.compilers.clang.Clang.pic_flag) `setup_custom_environment`(*pkg*, *env*)[¶](#spack.compilers.clang.Clang.setup_custom_environment) Set the DEVELOPER_DIR environment for the Xcode toolchain. On macOS, not all buildsystems support querying CC and CXX for the compilers to use and instead query the Xcode toolchain for what compiler to run. This side-steps the spack wrappers. In order to inject spack into this setup, we need to copy (a subset of) Xcode.app and replace the compiler executables with symlinks to the spack wrapper. Currently, the stage is used to store the Xcode.app copies. We then set the ‘DEVELOPER_DIR’ environment variables to cause the xcrun and related tools to use this Xcode.app. 
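Both `--version` output formats shown above carry the version right after the word "version", so a single regex can cover the upstream and Apple variants. A sketch of that extraction (the pattern is illustrative only; Spack's real detection lives in `Clang.default_version`):

```python
import re

def parse_clang_version(output):
    """Extract the version number from `clang --version` output,
    handling both 'clang version 3.1 ...' and
    'Apple LLVM version 7.0.2 ...' formats."""
    match = re.search(r"(?:clang|LLVM) version (\d+(?:\.\d+)+)", output)
    return match.group(1) if match else "unknown"
```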
`spack.compilers.clang.``f77_mapping` *= [('gfortran', 'clang/gfortran'), ('xlf_r', 'xl_r/xlf_r'), ('xlf', 'xl/xlf'), ('pgfortran', 'pgi/pgfortran'), ('ifort', 'intel/ifort')]*[¶](#spack.compilers.clang.f77_mapping) compiler symlink mappings for mixed f77 compilers `spack.compilers.clang.``fc_mapping` *= [('gfortran', 'clang/gfortran'), ('xlf90_r', 'xl_r/xlf90_r'), ('xlf90', 'xl/xlf90'), ('pgfortran', 'pgi/pgfortran'), ('ifort', 'intel/ifort')]*[¶](#spack.compilers.clang.fc_mapping) compiler symlink mappings for mixed f90/fc compilers ##### spack.compilers.gcc module[¶](#module-spack.compilers.gcc) *class* `spack.compilers.gcc.``Gcc`(*cspec*, *operating_system*, *target*, *paths*, *modules=[]*, *alias=None*, *environment=None*, *extra_rpaths=None*, ***kwargs*)[¶](#spack.compilers.gcc.Gcc) Bases: [`spack.compiler.Compiler`](index.html#spack.compiler.Compiler) `PrgEnv` *= 'PrgEnv-gnu'*[¶](#spack.compilers.gcc.Gcc.PrgEnv) `PrgEnv_compiler` *= 'gcc'*[¶](#spack.compilers.gcc.Gcc.PrgEnv_compiler) `cc_names` *= ['gcc']*[¶](#spack.compilers.gcc.Gcc.cc_names) `cxx11_flag`[¶](#spack.compilers.gcc.Gcc.cxx11_flag) `cxx14_flag`[¶](#spack.compilers.gcc.Gcc.cxx14_flag) `cxx17_flag`[¶](#spack.compilers.gcc.Gcc.cxx17_flag) `cxx98_flag`[¶](#spack.compilers.gcc.Gcc.cxx98_flag) `cxx_names` *= ['g++']*[¶](#spack.compilers.gcc.Gcc.cxx_names) *classmethod* `default_version`(*cc*)[¶](#spack.compilers.gcc.Gcc.default_version) Older versions of gcc use the `-dumpversion` option. 
Output looks like this: ``` 4.4.7 ``` In GCC 7, this option was changed to only return the major version of the compiler: ``` 7 ``` A new `-dumpfullversion` option was added that gives us what we want: ``` 7.2.0 ``` `f77_names` *= ['gfortran']*[¶](#spack.compilers.gcc.Gcc.f77_names) *classmethod* `f77_version`(*f77*)[¶](#spack.compilers.gcc.Gcc.f77_version) `fc_names` *= ['gfortran']*[¶](#spack.compilers.gcc.Gcc.fc_names) *classmethod* `fc_version`(*fc*)[¶](#spack.compilers.gcc.Gcc.fc_version) Older versions of gfortran use the `-dumpversion` option. Output looks like this: ``` GNU Fortran (GCC) 4.4.7 20120313 (Red Hat 4.4.7-18) Copyright (C) 2010 Free Software Foundation, Inc. ``` or: ``` 4.8.5 ``` In GCC 7, this option was changed to only return the major version of the compiler: ``` 7 ``` A new `-dumpfullversion` option was added that gives us what we want: ``` 7.2.0 ``` `link_paths` *= {'cc': 'gcc/gcc', 'cxx': 'gcc/g++', 'f77': 'gcc/gfortran', 'fc': 'gcc/gfortran'}*[¶](#spack.compilers.gcc.Gcc.link_paths) `openmp_flag`[¶](#spack.compilers.gcc.Gcc.openmp_flag) `pic_flag`[¶](#spack.compilers.gcc.Gcc.pic_flag) `stdcxx_libs`[¶](#spack.compilers.gcc.Gcc.stdcxx_libs) `suffixes` *= ['-mp-\\d\\.\\d', '-\\d\\.\\d', '-\\d', '\\d\\d']*[¶](#spack.compilers.gcc.Gcc.suffixes) ##### spack.compilers.intel module[¶](#module-spack.compilers.intel) *class* `spack.compilers.intel.``Intel`(*cspec*, *operating_system*, *target*, *paths*, *modules=[]*, *alias=None*, *environment=None*, *extra_rpaths=None*, ***kwargs*)[¶](#spack.compilers.intel.Intel) Bases: [`spack.compiler.Compiler`](index.html#spack.compiler.Compiler) `PrgEnv` *= 'PrgEnv-intel'*[¶](#spack.compilers.intel.Intel.PrgEnv) `PrgEnv_compiler` *= 'intel'*[¶](#spack.compilers.intel.Intel.PrgEnv_compiler) `cc_names` *= ['icc']*[¶](#spack.compilers.intel.Intel.cc_names) `cxx11_flag`[¶](#spack.compilers.intel.Intel.cxx11_flag) `cxx14_flag`[¶](#spack.compilers.intel.Intel.cxx14_flag) `cxx_names` *= 
['icpc']*[¶](#spack.compilers.intel.Intel.cxx_names) *classmethod* `default_version`(*comp*)[¶](#spack.compilers.intel.Intel.default_version) The `--version` option seems to be the most consistent one for intel compilers. Output looks like this: ``` icpc (ICC) 12.1.5 20120612 Copyright (C) 1985-2012 Intel Corporation. All rights reserved. ``` or: ``` ifort (IFORT) 12.1.5 20120612 Copyright (C) 1985-2012 Intel Corporation. All rights reserved. ``` `f77_names` *= ['ifort']*[¶](#spack.compilers.intel.Intel.f77_names) `fc_names` *= ['ifort']*[¶](#spack.compilers.intel.Intel.fc_names) `link_paths` *= {'cc': 'intel/icc', 'cxx': 'intel/icpc', 'f77': 'intel/ifort', 'fc': 'intel/ifort'}*[¶](#spack.compilers.intel.Intel.link_paths) `openmp_flag`[¶](#spack.compilers.intel.Intel.openmp_flag) `pic_flag`[¶](#spack.compilers.intel.Intel.pic_flag) `stdcxx_libs`[¶](#spack.compilers.intel.Intel.stdcxx_libs) ##### spack.compilers.nag module[¶](#module-spack.compilers.nag) *class* `spack.compilers.nag.``Nag`(*cspec*, *operating_system*, *target*, *paths*, *modules=[]*, *alias=None*, *environment=None*, *extra_rpaths=None*, ***kwargs*)[¶](#spack.compilers.nag.Nag) Bases: [`spack.compiler.Compiler`](index.html#spack.compiler.Compiler) `cc_names` *= []*[¶](#spack.compilers.nag.Nag.cc_names) `cxx11_flag`[¶](#spack.compilers.nag.Nag.cxx11_flag) `cxx_names` *= []*[¶](#spack.compilers.nag.Nag.cxx_names) *classmethod* `default_version`(*comp*)[¶](#spack.compilers.nag.Nag.default_version) The `-V` option works for nag compilers. 
Output looks like this: ``` NAG Fortran Compiler Release 6.0(Hibiya) Build 1037 Product NPL6A60NA for x86-64 Linux ``` `f77_names` *= ['nagfor']*[¶](#spack.compilers.nag.Nag.f77_names) `f77_rpath_arg`[¶](#spack.compilers.nag.Nag.f77_rpath_arg) `fc_names` *= ['nagfor']*[¶](#spack.compilers.nag.Nag.fc_names) `fc_rpath_arg`[¶](#spack.compilers.nag.Nag.fc_rpath_arg) `link_paths` *= {'cc': 'cc', 'cxx': 'c++', 'f77': 'nag/nagfor', 'fc': 'nag/nagfor'}*[¶](#spack.compilers.nag.Nag.link_paths) `openmp_flag`[¶](#spack.compilers.nag.Nag.openmp_flag) `pic_flag`[¶](#spack.compilers.nag.Nag.pic_flag) ##### spack.compilers.pgi module[¶](#module-spack.compilers.pgi) *class* `spack.compilers.pgi.``Pgi`(*cspec*, *operating_system*, *target*, *paths*, *modules=[]*, *alias=None*, *environment=None*, *extra_rpaths=None*, ***kwargs*)[¶](#spack.compilers.pgi.Pgi) Bases: [`spack.compiler.Compiler`](index.html#spack.compiler.Compiler) `PrgEnv` *= 'PrgEnv-pgi'*[¶](#spack.compilers.pgi.Pgi.PrgEnv) `PrgEnv_compiler` *= 'pgi'*[¶](#spack.compilers.pgi.Pgi.PrgEnv_compiler) `cc_names` *= ['pgcc']*[¶](#spack.compilers.pgi.Pgi.cc_names) `cxx11_flag`[¶](#spack.compilers.pgi.Pgi.cxx11_flag) `cxx_names` *= ['pgc++', 'pgCC']*[¶](#spack.compilers.pgi.Pgi.cxx_names) *classmethod* `default_version`(*comp*)[¶](#spack.compilers.pgi.Pgi.default_version) The `-V` option works for all the PGI compilers. Output looks like this: ``` pgcc 15.10-0 64-bit target on x86-64 Linux -tp sandybridge The Portland Group - PGI Compilers and Tools Copyright (c) 2015, NVIDIA CORPORATION. All rights reserved. ``` on x86-64, and: ``` pgcc 17.4-0 linuxpower target on Linuxpower PGI Compilers and Tools Copyright (c) 2017, NVIDIA CORPORATION. All rights reserved. ``` on PowerPC. 
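The PGI `-V` output shown above encodes the version as `15.10-0` or `17.4-0` right after the compiler name; a hedged sketch of pulling it out (the regex is illustrative, not Spack's actual pattern):

```python
import re

def parse_pgi_version(output):
    """Extract '15.10' from 'pgcc 15.10-0 64-bit target ...' style
    output of the PGI compilers' -V option."""
    match = re.search(r"pg[^ ]* (\d+\.\d+)-\d+", output)
    return match.group(1) if match else "unknown"
```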
`f77_names` *= ['pgfortran', 'pgf77']*[¶](#spack.compilers.pgi.Pgi.f77_names) `fc_names` *= ['pgfortran', 'pgf95', 'pgf90']*[¶](#spack.compilers.pgi.Pgi.fc_names) `link_paths` *= {'cc': 'pgi/pgcc', 'cxx': 'pgi/pgc++', 'f77': 'pgi/pgfortran', 'fc': 'pgi/pgfortran'}*[¶](#spack.compilers.pgi.Pgi.link_paths) `openmp_flag`[¶](#spack.compilers.pgi.Pgi.openmp_flag) `pic_flag`[¶](#spack.compilers.pgi.Pgi.pic_flag) ##### spack.compilers.xl module[¶](#module-spack.compilers.xl) *class* `spack.compilers.xl.``Xl`(*cspec*, *operating_system*, *target*, *paths*, *modules=[]*, *alias=None*, *environment=None*, *extra_rpaths=None*, ***kwargs*)[¶](#spack.compilers.xl.Xl) Bases: [`spack.compiler.Compiler`](index.html#spack.compiler.Compiler) `cc_names` *= ['xlc']*[¶](#spack.compilers.xl.Xl.cc_names) `cxx11_flag`[¶](#spack.compilers.xl.Xl.cxx11_flag) `cxx_names` *= ['xlC', 'xlc++']*[¶](#spack.compilers.xl.Xl.cxx_names) *classmethod* `default_version`(*comp*)[¶](#spack.compilers.xl.Xl.default_version) The ‘-qversion’ option is the standard one for XL compilers. Output looks like this: ``` IBM XL C/C++ for Linux, V11.1 (5724-X14) Version: 11.01.0000.0000 ``` or: ``` IBM XL Fortran for Linux, V13.1 (5724-X16) Version: 13.01.0000.0000 ``` or: ``` IBM XL C/C++ for AIX, V11.1 (5724-X13) Version: 11.01.0000.0009 ``` or: ``` IBM XL C/C++ Advanced Edition for Blue Gene/P, V9.0 Version: 09.00.0000.0017 ``` `f77_names` *= ['xlf']*[¶](#spack.compilers.xl.Xl.f77_names) *classmethod* `f77_version`(*f77*)[¶](#spack.compilers.xl.Xl.f77_version) `fc_names` *= ['xlf90', 'xlf95', 'xlf2003', 'xlf2008']*[¶](#spack.compilers.xl.Xl.fc_names) *classmethod* `fc_version`(*fc*)[¶](#spack.compilers.xl.Xl.fc_version) The Fortran and C/C++ versions of the XL compiler are always two units apart. By this we mean that the Fortran release that goes with XL C/C++ 11.1 is 13.1. Having such a difference in version numbers confuses Spack quite a lot.
Most notably, if you keep the versions as is, the default XL compiler will only have Fortran and no C/C++. So we associate the Fortran compiler with the version of the corresponding C/C++ compiler. One last stumble: version numbers over 10 have at least a .1, while those under 10 have a .0. There is no xlf 9.x or under currently available. BG/P and BG/L can have such a compiler mix, and possibly older versions of AIX and Linux on Power can too. `fflags`[¶](#spack.compilers.xl.Xl.fflags) `link_paths` *= {'cc': 'xl/xlc', 'cxx': 'xl/xlc++', 'f77': 'xl/xlf', 'fc': 'xl/xlf90'}*[¶](#spack.compilers.xl.Xl.link_paths) `openmp_flag`[¶](#spack.compilers.xl.Xl.openmp_flag) `pic_flag`[¶](#spack.compilers.xl.Xl.pic_flag) ##### spack.compilers.xl_r module[¶](#module-spack.compilers.xl_r) *class* `spack.compilers.xl_r.``XlR`(*cspec*, *operating_system*, *target*, *paths*, *modules=[]*, *alias=None*, *environment=None*, *extra_rpaths=None*, ***kwargs*)[¶](#spack.compilers.xl_r.XlR) Bases: [`spack.compiler.Compiler`](index.html#spack.compiler.Compiler) `cc_names` *= ['xlc_r']*[¶](#spack.compilers.xl_r.XlR.cc_names) `cxx11_flag`[¶](#spack.compilers.xl_r.XlR.cxx11_flag) `cxx_names` *= ['xlC_r', 'xlc++_r']*[¶](#spack.compilers.xl_r.XlR.cxx_names) *classmethod* `default_version`(*comp*)[¶](#spack.compilers.xl_r.XlR.default_version) The ‘-qversion’ option is the standard one for XL compilers.
Output looks like this:

```
IBM XL C/C++ for Linux, V11.1 (5724-X14)
Version: 11.01.0000.0000
```

or:

```
IBM XL Fortran for Linux, V13.1 (5724-X16)
Version: 13.01.0000.0000
```

or:

```
IBM XL C/C++ for AIX, V11.1 (5724-X13)
Version: 11.01.0000.0009
```

or:

```
IBM XL C/C++ Advanced Edition for Blue Gene/P, V9.0
Version: 09.00.0000.0017
```

`f77_names` *= ['xlf_r']*[¶](#spack.compilers.xl_r.XlR.f77_names) *classmethod* `f77_version`(*f77*)[¶](#spack.compilers.xl_r.XlR.f77_version) `fc_names` *= ['xlf90_r', 'xlf95_r', 'xlf2003_r', 'xlf2008_r']*[¶](#spack.compilers.xl_r.XlR.fc_names) *classmethod* `fc_version`(*fc*)[¶](#spack.compilers.xl_r.XlR.fc_version) The Fortran and C/C++ versions of the XL compiler are always two units apart. By this we mean that the Fortran release that goes with XL C/C++ 11.1 is 13.1. Having such a difference in version numbers is quite confusing to Spack. Most notably, if the versions are kept as is, the default XL compiler will only have Fortran and no C/C++. So we associate the Fortran compiler with the version of the corresponding C/C++ compiler. One last stumbling block: version numbers over 10 have at least a .1 minor, while those under 10 have a .0. There is no xlf 9.x or earlier currently available. BG/P and BG/L can run such a compiler mix, and possibly so can older versions of AIX and Linux on POWER. `fflags`[¶](#spack.compilers.xl_r.XlR.fflags) `link_paths` *= {'cc': 'xl_r/xlc_r', 'cxx': 'xl_r/xlc++_r', 'f77': 'xl_r/xlf_r', 'fc': 'xl_r/xlf90_r'}*[¶](#spack.compilers.xl_r.XlR.link_paths) `openmp_flag`[¶](#spack.compilers.xl_r.XlR.openmp_flag) `pic_flag`[¶](#spack.compilers.xl_r.XlR.pic_flag) ##### Module contents[¶](#module-spack.compilers) This module contains functions related to finding compilers on the system and configuring Spack to use multiple compilers.
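The Fortran/C++ version pairing that `fc_version` applies, as described in the XL docstrings above, can be sketched as follows. The helper name and the exact minor-version rule are illustrative assumptions, not Spack's actual implementation:

```python
def c_version_for_fortran(fc_version):
    """Map an XL Fortran version string to the paired XL C/C++ version.

    Illustrative sketch only: XL Fortran releases are two major units
    ahead of the matching C/C++ release (xlf 13.1 pairs with xlc 11.1).
    Versions over 10 carry at least a .1 minor; those under 10 use .0.
    """
    major, minor = (int(x) for x in fc_version.split(".")[:2])
    c_major = major - 2
    # Apply the minor-version convention described in the docstring above.
    c_minor = max(minor, 1) if c_major > 10 else 0
    return "{0}.{1}".format(c_major, c_minor)
```

For example, this maps xlf 13.1 back to the xlc 11.1 release it ships with, so both compilers can be registered under one version.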
*exception* `spack.compilers.``CompilerDuplicateError`(*compiler_spec*, *arch_spec*)[¶](#spack.compilers.CompilerDuplicateError) Bases: [`spack.error.SpackError`](index.html#spack.error.SpackError) *exception* `spack.compilers.``CompilerSpecInsufficientlySpecificError`(*compiler_spec*)[¶](#spack.compilers.CompilerSpecInsufficientlySpecificError) Bases: [`spack.error.SpackError`](index.html#spack.error.SpackError) *exception* `spack.compilers.``InvalidCompilerConfigurationError`(*compiler_spec*)[¶](#spack.compilers.InvalidCompilerConfigurationError) Bases: [`spack.error.SpackError`](index.html#spack.error.SpackError) *exception* `spack.compilers.``NoCompilerForSpecError`(*compiler_spec*, *target*)[¶](#spack.compilers.NoCompilerForSpecError) Bases: [`spack.error.SpackError`](index.html#spack.error.SpackError) *exception* `spack.compilers.``NoCompilersError`[¶](#spack.compilers.NoCompilersError) Bases: [`spack.error.SpackError`](index.html#spack.error.SpackError) `spack.compilers.``add_compilers_to_config`(*compilers*, *scope=None*, *init_config=True*)[¶](#spack.compilers.add_compilers_to_config) Add compilers to the config for the specified architecture. | Parameters: | * **compilers** (*-*) – a list of Compiler objects. * **scope** (*-*) – configuration scope to modify. | `spack.compilers.``all_compiler_specs`(*scope=None*, *init_config=True*)[¶](#spack.compilers.all_compiler_specs) `spack.compilers.``all_compiler_types`()[¶](#spack.compilers.all_compiler_types) `spack.compilers.``all_compilers`(*scope=None*)[¶](#spack.compilers.all_compilers) `spack.compilers.``all_compilers_config`(*scope=None*, *init_config=True*)[¶](#spack.compilers.all_compilers_config) Return a set of specs for all the compiler versions currently available to build with. These are instances of CompilerSpec. 
`spack.compilers.``all_os_classes`()[¶](#spack.compilers.all_os_classes) Return the list of classes for all operating systems available on this platform `spack.compilers.``class_for_compiler_name`(*compiler_name*)[¶](#spack.compilers.class_for_compiler_name) Given a compiler module name, get the corresponding Compiler class. `spack.compilers.``compiler_config_files`()[¶](#spack.compilers.compiler_config_files) `spack.compilers.``compiler_for_spec`(*cspec_like*, **args*, ***kwargs*)[¶](#spack.compilers.compiler_for_spec) `spack.compilers.``compiler_from_config_entry`(*items*)[¶](#spack.compilers.compiler_from_config_entry) `spack.compilers.``compilers_for_arch`(*arch_spec*, *scope=None*)[¶](#spack.compilers.compilers_for_arch) `spack.compilers.``compilers_for_spec`(*cspec_like*, **args*, ***kwargs*)[¶](#spack.compilers.compilers_for_spec) `spack.compilers.``find`(*cspec_like*, **args*, ***kwargs*)[¶](#spack.compilers.find) `spack.compilers.``find_compilers`(**paths*)[¶](#spack.compilers.find_compilers) Return a list of compilers found in the supplied paths. This invokes the find_compilers() method for each operating system associated with the host platform, and appends the compilers detected to a list. `spack.compilers.``get_compiler_config`(*scope=None*, *init_config=True*)[¶](#spack.compilers.get_compiler_config) Return the compiler configuration for the specified architecture. 
`spack.compilers.``get_compiler_duplicates`(*cspec_like*, **args*, ***kwargs*)[¶](#spack.compilers.get_compiler_duplicates) `spack.compilers.``get_compilers`(*config*, *cspec=None*, *arch_spec=None*)[¶](#spack.compilers.get_compilers) `spack.compilers.``remove_compiler_from_config`(*cspec_like*, **args*, ***kwargs*)[¶](#spack.compilers.remove_compiler_from_config) `spack.compilers.``supported`(*cspec_like*, **args*, ***kwargs*)[¶](#spack.compilers.supported) `spack.compilers.``supported_compilers`()[¶](#spack.compilers.supported_compilers) Return a set of names of compilers supported by Spack. See available_compilers() to get a list of all the available versions of supported compilers. #### spack.hooks package[¶](#spack-hooks-package) ##### Submodules[¶](#submodules) ##### spack.hooks.extensions module[¶](#module-spack.hooks.extensions) `spack.hooks.extensions.``pre_uninstall`(*spec*)[¶](#spack.hooks.extensions.pre_uninstall) ##### spack.hooks.licensing module[¶](#module-spack.hooks.licensing) `spack.hooks.licensing.``post_install`(*spec*)[¶](#spack.hooks.licensing.post_install) This hook symlinks local licenses to the global license for licensed software. `spack.hooks.licensing.``pre_install`(*spec*)[¶](#spack.hooks.licensing.pre_install) This hook handles global license setup for licensed software. `spack.hooks.licensing.``set_up_license`(*pkg*)[¶](#spack.hooks.licensing.set_up_license) Prompt the user, letting them know that a license is required. For packages that rely on license files, a global license file is created and opened for editing. For packages that rely on environment variables to point to a license, a warning message is printed. For all other packages, documentation on how to set up a license is printed. `spack.hooks.licensing.``symlink_license`(*pkg*)[¶](#spack.hooks.licensing.symlink_license) Create local symlinks that point to the global license file. 
`spack.hooks.licensing.``write_license_file`(*pkg*, *license_path*)[¶](#spack.hooks.licensing.write_license_file) Writes empty license file. Comments give suggestions on alternative methods of installing a license. ##### spack.hooks.module_file_generation module[¶](#module-spack.hooks.module_file_generation) `spack.hooks.module_file_generation.``post_install`(*spec*)[¶](#spack.hooks.module_file_generation.post_install) `spack.hooks.module_file_generation.``post_uninstall`(*spec*)[¶](#spack.hooks.module_file_generation.post_uninstall) ##### spack.hooks.permissions_setters module[¶](#module-spack.hooks.permissions_setters) `spack.hooks.permissions_setters.``chmod_real_entries`(*path*, *perms*)[¶](#spack.hooks.permissions_setters.chmod_real_entries) `spack.hooks.permissions_setters.``forall_files`(*path*, *fn*, *args*, *dir_args=None*)[¶](#spack.hooks.permissions_setters.forall_files) Apply function to all files in directory, with file as first arg. Does not apply to the root dir. Does not apply to links `spack.hooks.permissions_setters.``post_install`(*spec*)[¶](#spack.hooks.permissions_setters.post_install) ##### spack.hooks.sbang module[¶](#module-spack.hooks.sbang) `spack.hooks.sbang.``filter_shebang`(*path*)[¶](#spack.hooks.sbang.filter_shebang) Adds a second shebang line, using sbang, at the beginning of a file. `spack.hooks.sbang.``filter_shebangs_in_directory`(*directory*, *filenames=None*)[¶](#spack.hooks.sbang.filter_shebangs_in_directory) `spack.hooks.sbang.``post_install`(*spec*)[¶](#spack.hooks.sbang.post_install) This hook edits scripts so that they call /bin/bash $spack_prefix/bin/sbang instead of something longer than the shebang limit. `spack.hooks.sbang.``shebang_too_long`(*path*)[¶](#spack.hooks.sbang.shebang_too_long) Detects whether a file has a shebang line that is too long. 
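The sbang hooks above can be illustrated with a self-contained sketch. The 127-byte limit and the `/usr/bin/sbang` path are assumptions for illustration; Spack actually prepends `#!/bin/bash $spack_prefix/bin/sbang` and keeps its own limit handling:

```python
import os
import tempfile

# Typical Linux kernel shebang limit (BINPRM_BUF_SIZE - 1); an assumption here.
SHEBANG_LIMIT = 127

def shebang_too_long(path):
    """Return True if the file starts with a shebang line longer than the limit."""
    with open(path, "rb") as f:
        first_line = f.readline()
    if not first_line.startswith(b"#!"):
        return False
    return len(first_line.rstrip(b"\n")) > SHEBANG_LIMIT

def filter_shebang(path, sbang="/usr/bin/sbang"):
    """Prepend a short sbang shebang, keeping the original line below it.

    The sbang path is a hypothetical placeholder for this sketch.
    """
    with open(path, "rb") as f:
        original = f.read()
    with open(path, "wb") as f:
        f.write(("#!/bin/bash %s\n" % sbang).encode())
        f.write(original)
```

The sbang interpreter then reads the second line of the script to find the real, over-long interpreter path.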
##### spack.hooks.yaml_version_check module[¶](#module-spack.hooks.yaml_version_check) Yaml Version Check is a module for ensuring that config file formats are compatible with the current version of Spack. `spack.hooks.yaml_version_check.``check_compiler_yaml_version`()[¶](#spack.hooks.yaml_version_check.check_compiler_yaml_version) `spack.hooks.yaml_version_check.``pre_run`()[¶](#spack.hooks.yaml_version_check.pre_run) ##### Module contents[¶](#module-spack.hooks) This package contains modules with hooks for various stages in the Spack install process. You can add modules here and they'll be executed by Spack at various points during the package lifecycle. Each hook is just a function that takes a package as a parameter. Hooks are not executed in any particular order. Currently the following hooks are supported: 
> * pre_run() 
> * pre_install(spec) 
> * post_install(spec) 
> * pre_uninstall(spec) 
> * post_uninstall(spec) 
This can be used to implement support for things like module systems (e.g. modules, dotkit, etc.) or to add other custom features. *class* `spack.hooks.``HookRunner`(*hook_name*)[¶](#spack.hooks.HookRunner) Bases: `object` #### spack.modules package[¶](#spack-modules-package) ##### Submodules[¶](#submodules) ##### spack.modules.common module[¶](#module-spack.modules.common) Here we consolidate the logic for creating an abstract description of the information that module systems need. This information maps **a single spec** to: 
> * a unique module filename 
> * the module file content 
and is divided among four classes: 
> * a configuration class that provides a convenient interface to query 
> details about the configuration for the spec under consideration.
> * a layout class that provides the information associated with module > file names and directories > * a context class that provides the dictionary used by the template engine > to generate the module file > * a writer that collects and uses the information above to either write > or remove the module file Each of the four classes needs to be sub-classed when implementing a new module type. *class* `spack.modules.common.``BaseConfiguration`(*spec*)[¶](#spack.modules.common.BaseConfiguration) Bases: `object` Manipulates the information needed to generate a module file to make querying easier. It needs to be sub-classed for specific module types. `blacklisted`[¶](#spack.modules.common.BaseConfiguration.blacklisted) Returns True if the module has been blacklisted, False otherwise. `context`[¶](#spack.modules.common.BaseConfiguration.context) `env`[¶](#spack.modules.common.BaseConfiguration.env) List of environment modifications that should be done in the module. `environment_blacklist`[¶](#spack.modules.common.BaseConfiguration.environment_blacklist) List of variables that should be left unmodified. `hash`[¶](#spack.modules.common.BaseConfiguration.hash) Hash tag for the module or None `literals_to_load`[¶](#spack.modules.common.BaseConfiguration.literals_to_load) List of literal modules to be loaded. `naming_scheme`[¶](#spack.modules.common.BaseConfiguration.naming_scheme) Naming scheme suitable for non-hierarchical layouts `specs_to_load`[¶](#spack.modules.common.BaseConfiguration.specs_to_load) List of specs that should be loaded in the module file. `specs_to_prereq`[¶](#spack.modules.common.BaseConfiguration.specs_to_prereq) List of specs that should be prerequisite of the module file. `suffixes`[¶](#spack.modules.common.BaseConfiguration.suffixes) List of suffixes that should be appended to the module file name. 
`template`[¶](#spack.modules.common.BaseConfiguration.template) Returns the name of the template to use for the module file, or None if not specified in the configuration. `verbose`[¶](#spack.modules.common.BaseConfiguration.verbose) Returns True if the module file needs to be verbose, False otherwise *class* `spack.modules.common.``BaseContext`(*configuration*)[¶](#spack.modules.common.BaseContext) Bases: [`spack.tengine.Context`](index.html#spack.tengine.Context) Provides the base context needed for template rendering. This class needs to be sub-classed for specific module types. The following attributes need to be implemented: * fields `autoload`[¶](#spack.modules.common.BaseContext.autoload) List of modules that need to be loaded automatically. `category`[¶](#spack.modules.common.BaseContext.category) `configure_options`[¶](#spack.modules.common.BaseContext.configure_options) `context_properties` *= ['spec', 'timestamp', 'category', 'short_description', 'long_description', 'configure_options', 'environment_modifications', 'autoload', 'verbose']*[¶](#spack.modules.common.BaseContext.context_properties) `environment_modifications`[¶](#spack.modules.common.BaseContext.environment_modifications) List of environment modifications to be processed. `long_description`[¶](#spack.modules.common.BaseContext.long_description) `short_description`[¶](#spack.modules.common.BaseContext.short_description) `spec`[¶](#spack.modules.common.BaseContext.spec) `timestamp`[¶](#spack.modules.common.BaseContext.timestamp) `verbose`[¶](#spack.modules.common.BaseContext.verbose) Verbosity level. *class* `spack.modules.common.``BaseFileLayout`(*configuration*)[¶](#spack.modules.common.BaseFileLayout) Bases: `object` Provides information on the layout of module files. Needs to be sub-classed for specific module types. *classmethod* `dirname`()[¶](#spack.modules.common.BaseFileLayout.dirname) Root folder for module files of this type.
`extension` *= None*[¶](#spack.modules.common.BaseFileLayout.extension) This needs to be redefined `filename`[¶](#spack.modules.common.BaseFileLayout.filename) Name of the module file for the current spec. `spec`[¶](#spack.modules.common.BaseFileLayout.spec) Spec under consideration `use_name`[¶](#spack.modules.common.BaseFileLayout.use_name) Returns the ‘use’ name of the module, i.e. the name you have to type at the console to use it. This implementation fits the needs of most non-hierarchical layouts. *class* `spack.modules.common.``BaseModuleFileWriter`(*spec*)[¶](#spack.modules.common.BaseModuleFileWriter) Bases: `object` `remove`()[¶](#spack.modules.common.BaseModuleFileWriter.remove) Deletes the module file. `write`(*overwrite=False*)[¶](#spack.modules.common.BaseModuleFileWriter.write) Writes the module file. | Parameters: | **overwrite** (*bool*) – if True it is fine to overwrite an already existing file. If False the operation is skipped and we print a warning to the user. | *exception* `spack.modules.common.``DefaultTemplateNotDefined`[¶](#spack.modules.common.DefaultTemplateNotDefined) Bases: `AttributeError`, [`spack.modules.common.ModulesError`](#spack.modules.common.ModulesError) Raised if the attribute ‘default_template’ has not been specified in the derived classes. *exception* `spack.modules.common.``ModulesError`(*message*, *long_message=None*)[¶](#spack.modules.common.ModulesError) Bases: [`spack.error.SpackError`](index.html#spack.error.SpackError) Base error for modules. *exception* `spack.modules.common.``ModulesTemplateNotFoundError`(*message*, *long_message=None*)[¶](#spack.modules.common.ModulesTemplateNotFoundError) Bases: [`spack.modules.common.ModulesError`](#spack.modules.common.ModulesError), `RuntimeError` Raised if the template for a module file was not found.
`spack.modules.common.``configuration` *= {'enable': ['tcl', 'dotkit'], 'lmod': {'hierarchy': ['mpi'], 'blacklist_implicits': False, 'verbose': False, 'hash_length': 7}, 'prefix_inspections': {'bin': ['PATH'], 'man': ['MANPATH'], 'share/man': ['MANPATH'], 'share/aclocal': ['ACLOCAL_PATH'], 'lib': ['LD_LIBRARY_PATH', 'LIBRARY_PATH'], 'lib64': ['LD_LIBRARY_PATH', 'LIBRARY_PATH'], 'include': ['CPATH'], 'lib/pkgconfig': ['PKG_CONFIG_PATH'], 'lib64/pkgconfig': ['PKG_CONFIG_PATH'], '': ['CMAKE_PREFIX_PATH']}}*[¶](#spack.modules.common.configuration) config section for this file `spack.modules.common.``dependencies`(*spec*, *request='all'*)[¶](#spack.modules.common.dependencies) Returns the list of dependent specs for a given spec, according to the request passed as parameter. | Parameters: | * **spec** – spec to be analyzed * **request** – either ‘none’, ‘direct’ or ‘all’ | | Returns: | list of dependencies The return list will be empty if request is ‘none’, will contain the direct dependencies if request is ‘direct’, or the entire DAG if request is ‘all’. | `spack.modules.common.``merge_config_rules`(*configuration*, *spec*)[¶](#spack.modules.common.merge_config_rules) Parses the module specific part of a configuration and returns a dictionary containing the actions to be performed on the spec passed as an argument. | Parameters: | * **configuration** – module specific configuration (e.g. 
entries under the top-level ‘tcl’ key) * **spec** – spec for which we need to generate a module file | | Returns: | actions to be taken on the spec passed as an argument | | Return type: | dict | `spack.modules.common.``prefix_inspections` *= {'': ['CMAKE_PREFIX_PATH'], 'bin': ['PATH'], 'include': ['CPATH'], 'lib': ['LD_LIBRARY_PATH', 'LIBRARY_PATH'], 'lib/pkgconfig': ['PKG_CONFIG_PATH'], 'lib64': ['LD_LIBRARY_PATH', 'LIBRARY_PATH'], 'lib64/pkgconfig': ['PKG_CONFIG_PATH'], 'man': ['MANPATH'], 'share/aclocal': ['ACLOCAL_PATH'], 'share/man': ['MANPATH']}*[¶](#spack.modules.common.prefix_inspections) Inspections that need to be done on spec prefixes `spack.modules.common.``root_path`(*name*)[¶](#spack.modules.common.root_path) Returns the root folder for module file installation. | Parameters: | **name** – name of the module system to be used (e.g. ‘tcl’) | | Returns: | root folder for module file installation | `spack.modules.common.``roots` *= {'dotkit': '$spack/share/spack/dotkit', 'lmod': '$spack/share/spack/lmod', 'tcl': '$spack/share/spack/modules'}*[¶](#spack.modules.common.roots) Root folders where the various module files should be written `spack.modules.common.``update_dictionary_extending_lists`(*target*, *update*)[¶](#spack.modules.common.update_dictionary_extending_lists) Updates a dictionary, but extends lists instead of overriding them. | Parameters: | * **target** – dictionary to be updated * **update** – update to be applied | ##### spack.modules.dotkit module[¶](#module-spack.modules.dotkit) This module implements the classes necessary to generate dotkit modules. *class* `spack.modules.dotkit.``DotkitConfiguration`(*spec*)[¶](#spack.modules.dotkit.DotkitConfiguration) Bases: [`spack.modules.common.BaseConfiguration`](#spack.modules.common.BaseConfiguration) Configuration class for dotkit module files.
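The list-extending merge that `update_dictionary_extending_lists` performs can be sketched as follows; this is a minimal reimplementation of the documented behavior for illustration, not Spack's actual code:

```python
def update_dictionary_extending_lists(target, update):
    """Update `target` in place, but extend lists instead of overriding them.

    Nested dictionaries are merged recursively; any other value type in
    `update` simply replaces the one in `target`.
    """
    for key, value in update.items():
        if key in target and isinstance(target[key], list) and isinstance(value, list):
            target[key].extend(value)
        elif key in target and isinstance(target[key], dict) and isinstance(value, dict):
            update_dictionary_extending_lists(target[key], value)
        else:
            target[key] = value
```

This is why, for example, merging two configuration scopes that both set `enable` yields the union of their module systems rather than the last scope winning outright.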
*class* `spack.modules.dotkit.``DotkitContext`(*configuration*)[¶](#spack.modules.dotkit.DotkitContext) Bases: [`spack.modules.common.BaseContext`](#spack.modules.common.BaseContext) Context class for dotkit module files. `context_properties` *= ['spec', 'timestamp', 'category', 'short_description', 'long_description', 'configure_options', 'environment_modifications', 'autoload', 'verbose']*[¶](#spack.modules.dotkit.DotkitContext.context_properties) *class* `spack.modules.dotkit.``DotkitFileLayout`(*configuration*)[¶](#spack.modules.dotkit.DotkitFileLayout) Bases: [`spack.modules.common.BaseFileLayout`](#spack.modules.common.BaseFileLayout) File layout for dotkit module files. `extension` *= 'dk'*[¶](#spack.modules.dotkit.DotkitFileLayout.extension) file extension of dotkit module files *class* `spack.modules.dotkit.``DotkitModulefileWriter`(*spec*)[¶](#spack.modules.dotkit.DotkitModulefileWriter) Bases: [`spack.modules.common.BaseModuleFileWriter`](#spack.modules.common.BaseModuleFileWriter) Writer class for dotkit module files. 
`default_template` *= 'modules/modulefile.dk'*[¶](#spack.modules.dotkit.DotkitModulefileWriter.default_template) `spack.modules.dotkit.``configuration` *= {}*[¶](#spack.modules.dotkit.configuration) Dotkit specific part of the configuration `spack.modules.dotkit.``configuration_registry` *= {}*[¶](#spack.modules.dotkit.configuration_registry) Caches the configuration: {spec_hash: configuration} `spack.modules.dotkit.``make_configuration`(*spec*)[¶](#spack.modules.dotkit.make_configuration) Returns the dotkit configuration for spec `spack.modules.dotkit.``make_context`(*spec*)[¶](#spack.modules.dotkit.make_context) Returns the context information for spec `spack.modules.dotkit.``make_layout`(*spec*)[¶](#spack.modules.dotkit.make_layout) Returns the layout information for spec ##### spack.modules.lmod module[¶](#module-spack.modules.lmod) *exception* `spack.modules.lmod.``CoreCompilersNotFoundError`(*message*, *long_message=None*)[¶](#spack.modules.lmod.CoreCompilersNotFoundError) Bases: [`spack.error.SpackError`](index.html#spack.error.SpackError), `KeyError` Error raised if the key ‘core_compilers’ has not been specified in the configuration file. *class* `spack.modules.lmod.``LmodConfiguration`(*spec*)[¶](#spack.modules.lmod.LmodConfiguration) Bases: [`spack.modules.common.BaseConfiguration`](#spack.modules.common.BaseConfiguration) Configuration class for lmod module files. `available`[¶](#spack.modules.lmod.LmodConfiguration.available) Returns a dictionary of the services that are currently available. `core_compilers`[¶](#spack.modules.lmod.LmodConfiguration.core_compilers) Returns the list of “Core” compilers | Raises: | [`CoreCompilersNotFoundError`](#spack.modules.lmod.CoreCompilersNotFoundError) – if the key was not specified in the configuration file or the sequence is empty | `hierarchy_tokens`[¶](#spack.modules.lmod.LmodConfiguration.hierarchy_tokens) Returns the list of tokens that are part of the modulefile hierarchy. ‘compiler’ is always present.
`missing`[¶](#spack.modules.lmod.LmodConfiguration.missing) Returns the list of tokens that are not available. `provides`[¶](#spack.modules.lmod.LmodConfiguration.provides) Returns a dictionary mapping all the services provided by this spec to the spec itself. `requires`[¶](#spack.modules.lmod.LmodConfiguration.requires) Returns a dictionary mapping all the requirements of this spec to the actual provider. ‘compiler’ is always present among the requirements. *class* `spack.modules.lmod.``LmodContext`(*configuration*)[¶](#spack.modules.lmod.LmodContext) Bases: [`spack.modules.common.BaseContext`](#spack.modules.common.BaseContext) Context class for lmod module files. `conditionally_unlocked_paths`[¶](#spack.modules.lmod.LmodContext.conditionally_unlocked_paths) Returns the list of paths that are unlocked conditionally. Each item in the list is a tuple with the structure (condition, path). `context_properties` *= ['has_modulepath_modifications', 'has_conditional_modifications', 'name_part', 'version_part', 'provides', 'missing', 'unlocked_paths', 'conditionally_unlocked_paths', 'spec', 'timestamp', 'category', 'short_description', 'long_description', 'configure_options', 'environment_modifications', 'autoload', 'verbose']*[¶](#spack.modules.lmod.LmodContext.context_properties) `has_conditional_modifications`[¶](#spack.modules.lmod.LmodContext.has_conditional_modifications) True if this module modifies MODULEPATH conditionally to the presence of other services in the environment, False otherwise. `has_modulepath_modifications`[¶](#spack.modules.lmod.LmodContext.has_modulepath_modifications) True if this module modifies MODULEPATH, False otherwise. `missing`[¶](#spack.modules.lmod.LmodContext.missing) Returns a list of missing services. `name_part`[¶](#spack.modules.lmod.LmodContext.name_part) Name of this provider. `provides`[¶](#spack.modules.lmod.LmodContext.provides) Returns the dictionary of provided services. 
`unlocked_paths`[¶](#spack.modules.lmod.LmodContext.unlocked_paths) Returns the list of paths that are unlocked unconditionally. `version_part`[¶](#spack.modules.lmod.LmodContext.version_part) Version of this provider. *class* `spack.modules.lmod.``LmodFileLayout`(*configuration*)[¶](#spack.modules.lmod.LmodFileLayout) Bases: [`spack.modules.common.BaseFileLayout`](#spack.modules.common.BaseFileLayout) File layout for lmod module files. `arch_dirname`[¶](#spack.modules.lmod.LmodFileLayout.arch_dirname) Returns the root folder for THIS architecture `available_path_parts`[¶](#spack.modules.lmod.LmodFileLayout.available_path_parts) List of path parts that are currently available. Needed to construct the file name. `extension` *= 'lua'*[¶](#spack.modules.lmod.LmodFileLayout.extension) file extension of lua module files `filename`[¶](#spack.modules.lmod.LmodFileLayout.filename) Returns the filename for the current module file `token_to_path`(*name*, *value*)[¶](#spack.modules.lmod.LmodFileLayout.token_to_path) Transforms a hierarchy token into the corresponding path part. | Parameters: | * **name** (*str*) – name of the service in the hierarchy * **value** – actual provider of the service | | Returns: | part of the path associated with the service | | Return type: | str | `unlocked_paths`[¶](#spack.modules.lmod.LmodFileLayout.unlocked_paths) Returns a dictionary mapping conditions to a list of unlocked paths. The paths that are unconditionally unlocked are under the key ‘None’. The other keys represent the list of services you need loaded to unlock the corresponding paths. `use_name`[¶](#spack.modules.lmod.LmodFileLayout.use_name) Returns the ‘use’ name of the module, i.e. the name you have to type at the console to use it. *class* `spack.modules.lmod.``LmodModulefileWriter`(*spec*)[¶](#spack.modules.lmod.LmodModulefileWriter) Bases: [`spack.modules.common.BaseModuleFileWriter`](#spack.modules.common.BaseModuleFileWriter) Writer class for lmod module files.
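The hierarchy-token-to-path mapping performed by `token_to_path`, and the way path parts combine into a module directory, can be sketched with toy types. The `FakeSpec` class, the `module_dir` helper, and the exact layout are illustrative assumptions, not Spack's real implementation (which also handles the ‘Core’ directory and hash suffixes):

```python
class FakeSpec(object):
    """Minimal stand-in for a Spec: just a name and a version."""
    def __init__(self, name, version):
        self.name = name
        self.version = version

def token_to_path(name, value):
    """Map a hierarchy token (e.g. 'compiler', or a virtual like 'mpi')
    to the 'name/version' path part used in an lmod hierarchy."""
    return "{0}/{1}".format(value.name, value.version)

def module_dir(root, arch, hierarchy, providers):
    """Join the path parts for every token in the hierarchy that has a
    provider, yielding the directory where module files would live."""
    parts = [root, arch]
    parts += [token_to_path(tok, providers[tok]) for tok in hierarchy if tok in providers]
    return "/".join(parts)
```

Loading an MPI module then "unlocks" the subtree rooted at its path part, which is the mechanism behind `unlocked_paths` above.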
`default_template` *= 'modules/modulefile.lua'*[¶](#spack.modules.lmod.LmodModulefileWriter.default_template) *exception* `spack.modules.lmod.``NonVirtualInHierarchyError`(*message*, *long_message=None*)[¶](#spack.modules.lmod.NonVirtualInHierarchyError) Bases: [`spack.error.SpackError`](index.html#spack.error.SpackError), `TypeError` Error raised if non-virtual specs are used as hierarchy tokens in the lmod section of ‘modules.yaml’. `spack.modules.lmod.``configuration` *= {'blacklist_implicits': False, 'hash_length': 7, 'hierarchy': ['mpi'], 'verbose': False}*[¶](#spack.modules.lmod.configuration) lmod specific part of the configuration `spack.modules.lmod.``configuration_registry` *= {}*[¶](#spack.modules.lmod.configuration_registry) Caches the configuration: {spec_hash: configuration} `spack.modules.lmod.``guess_core_compilers`(*store=False*)[¶](#spack.modules.lmod.guess_core_compilers) Guesses the list of core compilers installed in the system. | Parameters: | **store** (*bool*) – if True writes the core compilers to the modules.yaml configuration file | | Returns: | List of core compilers, if found, or None | `spack.modules.lmod.``make_configuration`(*spec*)[¶](#spack.modules.lmod.make_configuration) Returns the lmod configuration for spec `spack.modules.lmod.``make_context`(*spec*)[¶](#spack.modules.lmod.make_context) Returns the context information for spec `spack.modules.lmod.``make_layout`(*spec*)[¶](#spack.modules.lmod.make_layout) Returns the layout information for spec ##### spack.modules.tcl module[¶](#module-spack.modules.tcl) This module implements the classes necessary to generate TCL non-hierarchical modules. *class* `spack.modules.tcl.``TclConfiguration`(*spec*)[¶](#spack.modules.tcl.TclConfiguration) Bases: [`spack.modules.common.BaseConfiguration`](#spack.modules.common.BaseConfiguration) Configuration class for tcl module files.
`conflicts`[¶](#spack.modules.tcl.TclConfiguration.conflicts) Conflicts for this module file *class* `spack.modules.tcl.``TclContext`(*configuration*)[¶](#spack.modules.tcl.TclContext) Bases: [`spack.modules.common.BaseContext`](#spack.modules.common.BaseContext) Context class for tcl module files. `conflicts`[¶](#spack.modules.tcl.TclContext.conflicts) List of conflicts for the tcl module file. `context_properties` *= ['prerequisites', 'conflicts', 'spec', 'timestamp', 'category', 'short_description', 'long_description', 'configure_options', 'environment_modifications', 'autoload', 'verbose']*[¶](#spack.modules.tcl.TclContext.context_properties) `prerequisites`[¶](#spack.modules.tcl.TclContext.prerequisites) List of modules that need to be loaded automatically. *class* `spack.modules.tcl.``TclFileLayout`(*configuration*)[¶](#spack.modules.tcl.TclFileLayout) Bases: [`spack.modules.common.BaseFileLayout`](#spack.modules.common.BaseFileLayout) File layout for tcl module files. *class* `spack.modules.tcl.``TclModulefileWriter`(*spec*)[¶](#spack.modules.tcl.TclModulefileWriter) Bases: [`spack.modules.common.BaseModuleFileWriter`](#spack.modules.common.BaseModuleFileWriter) Writer class for tcl module files.
`default_template` *= 'modules/modulefile.tcl'*[¶](#spack.modules.tcl.TclModulefileWriter.default_template) `spack.modules.tcl.``configuration` *= {}*[¶](#spack.modules.tcl.configuration) TCL specific part of the configuration `spack.modules.tcl.``configuration_registry` *= {}*[¶](#spack.modules.tcl.configuration_registry) Caches the configuration: {spec_hash: configuration} `spack.modules.tcl.``make_configuration`(*spec*)[¶](#spack.modules.tcl.make_configuration) Returns the tcl configuration for spec `spack.modules.tcl.``make_context`(*spec*)[¶](#spack.modules.tcl.make_context) Returns the context information for spec `spack.modules.tcl.``make_layout`(*spec*)[¶](#spack.modules.tcl.make_layout) Returns the layout information for spec ##### Module contents[¶](#module-spack.modules) This package contains code for creating environment modules, which can include dotkits, TCL non-hierarchical modules, LUA hierarchical modules, and others. *class* `spack.modules.``DotkitModulefileWriter`(*spec*)[¶](#spack.modules.DotkitModulefileWriter) Bases: [`spack.modules.common.BaseModuleFileWriter`](#spack.modules.common.BaseModuleFileWriter) Writer class for dotkit module files. `default_template` *= 'modules/modulefile.dk'*[¶](#spack.modules.DotkitModulefileWriter.default_template) *class* `spack.modules.``TclModulefileWriter`(*spec*)[¶](#spack.modules.TclModulefileWriter) Bases: [`spack.modules.common.BaseModuleFileWriter`](#spack.modules.common.BaseModuleFileWriter) Writer class for tcl module files. `default_template` *= 'modules/modulefile.tcl'*[¶](#spack.modules.TclModulefileWriter.default_template) *class* `spack.modules.``LmodModulefileWriter`(*spec*)[¶](#spack.modules.LmodModulefileWriter) Bases: [`spack.modules.common.BaseModuleFileWriter`](#spack.modules.common.BaseModuleFileWriter) Writer class for lmod module files.
`default_template` *= 'modules/modulefile.lua'*[¶](#spack.modules.LmodModulefileWriter.default_template) #### spack.operating_systems package[¶](#spack-operating-systems-package) ##### Submodules[¶](#submodules) ##### spack.operating_systems.cnk module[¶](#module-spack.operating_systems.cnk) *class* `spack.operating_systems.cnk.``Cnk`[¶](#spack.operating_systems.cnk.Cnk) Bases: [`spack.architecture.OperatingSystem`](index.html#spack.architecture.OperatingSystem) Compute Node Kernel (CNK) is the node-level operating system for the IBM Blue Gene series of supercomputers. The compute nodes of the Blue Gene family of supercomputers run CNK, a lightweight kernel that runs on each node and supports one application running for one user on that node. ##### spack.operating_systems.cnl module[¶](#module-spack.operating_systems.cnl) *class* `spack.operating_systems.cnl.``Cnl`[¶](#spack.operating_systems.cnl.Cnl) Bases: [`spack.architecture.OperatingSystem`](index.html#spack.architecture.OperatingSystem) Compute Node Linux (CNL) is the operating system used on the Cray XC series of supercomputers. It is a very stripped-down version of GNU/Linux. Any compilers found through this operating system will be used with modules. If updated, the user must make sure that the version and name are updated to indicate that the OS has been upgraded (or downgraded). `find_compiler`(*cmp_cls*, **paths*)[¶](#spack.operating_systems.cnl.Cnl.find_compiler) Try to find the given type of compiler in the user’s environment. For each set of compilers found, this returns compiler objects with the cc, cxx, f77, fc paths and the version filled in. This will search for compilers with the names in cc_names, cxx_names, etc. and it will group them if they have common prefixes, suffixes, and versions. e.g., gcc-mp-4.7 would be grouped with g++-mp-4.7 and gfortran-mp-4.7. `find_compilers`(**paths*)[¶](#spack.operating_systems.cnl.Cnl.find_compilers) Return a list of compilers found in the supplied paths.
This invokes the find() method for each Compiler class, and appends the compilers detected to a list. ##### spack.operating_systems.cray_frontend module[¶](#module-spack.operating_systems.cray_frontend) *class* `spack.operating_systems.cray_frontend.``CrayFrontend`[¶](#spack.operating_systems.cray_frontend.CrayFrontend) Bases: [`spack.operating_systems.linux_distro.LinuxDistro`](#spack.operating_systems.linux_distro.LinuxDistro) Represents OS that runs on login and service nodes of the Cray platform. It acts as a regular Linux without Cray-specific modules and compiler wrappers. `find_compilers`(**paths*)[¶](#spack.operating_systems.cray_frontend.CrayFrontend.find_compilers) Calls the overridden method but prevents it from detecting Cray compiler wrappers to avoid possible false detections. The detected compilers come into play only if a user decides to work with the Cray’s frontend OS as if it was a regular Linux environment. ##### spack.operating_systems.linux_distro module[¶](#module-spack.operating_systems.linux_distro) *class* `spack.operating_systems.linux_distro.``LinuxDistro`[¶](#spack.operating_systems.linux_distro.LinuxDistro) Bases: [`spack.architecture.OperatingSystem`](index.html#spack.architecture.OperatingSystem) This class will represent the autodetected operating system for a Linux System. Since there are many different flavors of Linux, this class will attempt to encompass them all through autodetection using the python module platform and the method platform.dist() ##### spack.operating_systems.mac_os module[¶](#module-spack.operating_systems.mac_os) *class* `spack.operating_systems.mac_os.``MacOs`[¶](#spack.operating_systems.mac_os.MacOs) Bases: [`spack.architecture.OperatingSystem`](index.html#spack.architecture.OperatingSystem) This class represents the macOS operating system. This will be auto detected using the python platform.mac_ver. 
The macOS platform will be represented using the major-version operating system name, e.g. El Capitan, Yosemite,
etc. `spack.operating_systems.mac_os.``macos_version`()[¶](#spack.operating_systems.mac_os.macos_version) temporary workaround to return a macOS version as a Version object ##### Module contents[¶](#module-spack.operating_systems) #### spack.platforms package[¶](#spack-platforms-package) ##### Submodules[¶](#submodules) ##### spack.platforms.bgq module[¶](#module-spack.platforms.bgq) *class* `spack.platforms.bgq.``Bgq`[¶](#spack.platforms.bgq.Bgq) Bases: [`spack.architecture.Platform`](index.html#spack.architecture.Platform) `back_end` *= 'ppc64'*[¶](#spack.platforms.bgq.Bgq.back_end) `default` *= 'ppc64'*[¶](#spack.platforms.bgq.Bgq.default) *classmethod* `detect`()[¶](#spack.platforms.bgq.Bgq.detect) Subclass is responsible for implementing this method. Returns True if the Platform class detects that it is the current platform and False if it’s not. `front_end` *= 'power7'*[¶](#spack.platforms.bgq.Bgq.front_end) `priority` *= 30*[¶](#spack.platforms.bgq.Bgq.priority) ##### spack.platforms.cray module[¶](#module-spack.platforms.cray) *class* `spack.platforms.cray.``Cray`[¶](#spack.platforms.cray.Cray) Bases: [`spack.architecture.Platform`](index.html#spack.architecture.Platform) *classmethod* `detect`()[¶](#spack.platforms.cray.Cray.detect) Subclass is responsible for implementing this method. Returns True if the Platform class detects that it is the current platform and False if it’s not. 
`priority` *= 10*[¶](#spack.platforms.cray.Cray.priority) *classmethod* `setup_platform_environment`(*pkg*, *env*)[¶](#spack.platforms.cray.Cray.setup_platform_environment) Change the linker to default dynamic to be more similar to linux/standard linker behavior ##### spack.platforms.darwin module[¶](#module-spack.platforms.darwin) *class* `spack.platforms.darwin.``Darwin`[¶](#spack.platforms.darwin.Darwin) Bases: [`spack.architecture.Platform`](index.html#spack.architecture.Platform) `back_end` *= 'x86_64'*[¶](#spack.platforms.darwin.Darwin.back_end) `default` *= 'x86_64'*[¶](#spack.platforms.darwin.Darwin.default) *classmethod* `detect`()[¶](#spack.platforms.darwin.Darwin.detect) Subclass is responsible for implementing this method. Returns True if the Platform class detects that it is the current platform and False if it’s not. `front_end` *= 'x86_64'*[¶](#spack.platforms.darwin.Darwin.front_end) `priority` *= 89*[¶](#spack.platforms.darwin.Darwin.priority) ##### spack.platforms.linux module[¶](#module-spack.platforms.linux) *class* `spack.platforms.linux.``Linux`[¶](#spack.platforms.linux.Linux) Bases: [`spack.architecture.Platform`](index.html#spack.architecture.Platform) *classmethod* `detect`()[¶](#spack.platforms.linux.Linux.detect) Subclass is responsible for implementing this method. Returns True if the Platform class detects that it is the current platform and False if it’s not. 
`priority` *= 90*[¶](#spack.platforms.linux.Linux.priority) ##### spack.platforms.test module[¶](#module-spack.platforms.test) *class* `spack.platforms.test.``Test`[¶](#spack.platforms.test.Test) Bases: [`spack.architecture.Platform`](index.html#spack.architecture.Platform) `back_end` *= 'x86_64'*[¶](#spack.platforms.test.Test.back_end) `back_os` *= 'debian6'*[¶](#spack.platforms.test.Test.back_os) `default` *= 'x86_64'*[¶](#spack.platforms.test.Test.default) `default_os` *= 'debian6'*[¶](#spack.platforms.test.Test.default_os) *classmethod* `detect`()[¶](#spack.platforms.test.Test.detect) Subclass is responsible for implementing this method. Returns True if the Platform class detects that it is the current platform and False if it’s not. `front_end` *= 'x86_32'*[¶](#spack.platforms.test.Test.front_end) `front_os` *= 'redhat6'*[¶](#spack.platforms.test.Test.front_os) `priority` *= 1000000*[¶](#spack.platforms.test.Test.priority) ##### Module contents[¶](#module-spack.platforms) #### spack.reporters package[¶](#spack-reporters-package) ##### Submodules[¶](#submodules) ##### spack.reporters.cdash module[¶](#module-spack.reporters.cdash) *class* `spack.reporters.cdash.``CDash`(*install_command*, *cdash_upload_url*)[¶](#spack.reporters.cdash.CDash) Bases: [`spack.reporter.Reporter`](index.html#spack.reporter.Reporter) Generate reports of spec installations for CDash. To use this reporter, pass the `--cdash-upload-url` argument to `spack install`: ``` spack install --cdash-upload-url=\ https://mydomain.com/cdash/submit.php?project=Spack <spec> ``` In this example, results will be uploaded to the *Spack* project on the CDash instance hosted at <https://mydomain.com/cdash>. 
`build_report`(*filename*, *report_data*)[¶](#spack.reporters.cdash.CDash.build_report) `concretization_report`(*filename*, *msg*)[¶](#spack.reporters.cdash.CDash.concretization_report) `initialize_report`(*filename*, *report_data*)[¶](#spack.reporters.cdash.CDash.initialize_report) `upload`(*filename*)[¶](#spack.reporters.cdash.CDash.upload) ##### spack.reporters.junit module[¶](#module-spack.reporters.junit) *class* `spack.reporters.junit.``JUnit`(*install_command*, *cdash_upload_url*)[¶](#spack.reporters.junit.JUnit) Bases: [`spack.reporter.Reporter`](index.html#spack.reporter.Reporter) Generate reports of spec installations for JUnit. `build_report`(*filename*, *report_data*)[¶](#spack.reporters.junit.JUnit.build_report) ##### Module contents[¶](#module-spack.reporters) #### spack.schema package[¶](#spack-schema-package) ##### Submodules[¶](#submodules) ##### spack.schema.compilers module[¶](#module-spack.schema.compilers) Schema for compilers.yaml configuration file. ``` #: Properties for inclusion in other schemas properties = { 'compilers': { 'type': 'array', 'items': [{ 'type': 'object', 'additionalProperties': False, 'properties': { 'compiler': { 'type': 'object', 'additionalProperties': False, 'required': [ 'paths', 'spec', 'modules', 'operating_system'], 'properties': { 'paths': { 'type': 'object', 'required': ['cc', 'cxx', 'f77', 'fc'], 'additionalProperties': False, 'properties': { 'cc': {'anyOf': [{'type': 'string'}, {'type': 'null'}]}, 'cxx': {'anyOf': [{'type': 'string'}, {'type': 'null'}]}, 'f77': {'anyOf': [{'type': 'string'}, {'type': 'null'}]}, 'fc': {'anyOf': [{'type': 'string'}, {'type': 'null'}]}}}, 'flags': { 'type': 'object', 'additionalProperties': False, 'properties': { 'cflags': {'anyOf': [{'type': 'string'}, {'type': 'null'}]}, 'cxxflags': {'anyOf': [{'type': 'string'}, {'type': 'null'}]}, 'fflags': {'anyOf': [{'type': 'string'}, {'type': 'null'}]}, 'cppflags': {'anyOf': [{'type': 'string'}, {'type': 'null'}]}, 'ldflags': {'anyOf': 
[{'type': 'string'}, {'type': 'null'}]}, 'ldlibs': {'anyOf': [{'type': 'string'}, {'type': 'null'}]}}}, 'spec': {'type': 'string'}, 'operating_system': {'type': 'string'}, 'target': {'type': 'string'}, 'alias': {'anyOf': [{'type': 'string'}, {'type': 'null'}]}, 'modules': {'anyOf': [{'type': 'string'}, {'type': 'null'}, {'type': 'array'}]}, 'environment': { 'type': 'object', 'default': {}, 'additionalProperties': False, 'properties': { 'set': { 'type': 'object', 'patternProperties': { # Variable name r'\w[\w-]*': { 'anyOf': [{'type': 'string'}, {'type': 'number'}] } } }, 'unset': { 'type': 'object', 'patternProperties': { # Variable name r'\w[\w-]*': {'type': 'null'} } }, 'prepend-path': { 'type': 'object', 'patternProperties': { # Variable name r'\w[\w-]*': { 'anyOf': [{'type': 'string'}, {'type': 'number'}] } } }, 'append-path': { 'type': 'object', 'patternProperties': { # Variable name r'\w[\w-]*': { 'anyOf': [{'type': 'string'}, {'type': 'number'}] } } } } }, 'extra_rpaths': { 'type': 'array', 'default': [], 'items': {'type': 'string'} } } } } }] } } #: Full schema with metadata schema = { '$schema': 'http://json-schema.org/schema#', 'title': 'Spack compiler configuration file schema', 'type': 'object', 'additionalProperties': False, 'properties': properties, } ``` `spack.schema.compilers.``properties` *= {'compilers': {'items': [{'additionalProperties': False, 'properties': {'compiler': {'additionalProperties': False, 'properties': {'modules': {'anyOf': [{'type': 'string'}, {'type': 'null'}, {'type': 'array'}]}, 'extra_rpaths': {'items': {'type': 'string'}, 'default': [], 'type': 'array'}, 'operating_system': {'type': 'string'}, 'spec': {'type': 'string'}, 'environment': {'properties': {'append-path': {'patternProperties': {'\\w[\\w-]*': {'anyOf': [{'type': 'string'}, {'type': 'number'}]}}, 'type': 'object'}, 'prepend-path': {'patternProperties': {'\\w[\\w-]*': {'anyOf': [{'type': 'string'}, {'type': 'number'}]}}, 'type': 'object'}, 'unset': 
{'patternProperties': {'\\w[\\w-]*': {'type': 'null'}}, 'type': 'object'}, 'set': {'patternProperties': {'\\w[\\w-]*': {'anyOf': [{'type': 'string'}, {'type': 'number'}]}}, 'type': 'object'}}, 'additionalProperties': False, 'default': {}, 'type': 'object'}, 'target': {'type': 'string'}, 'flags': {'additionalProperties': False, 'properties': {'fflags': {'anyOf': [{'type': 'string'}, {'type': 'null'}]}, 'ldflags': {'anyOf': [{'type': 'string'}, {'type': 'null'}]}, 'cppflags': {'anyOf': [{'type': 'string'}, {'type': 'null'}]}, 'cflags': {'anyOf': [{'type': 'string'}, {'type': 'null'}]}, 'ldlibs': {'anyOf': [{'type': 'string'}, {'type': 'null'}]}, 'cxxflags': {'anyOf': [{'type': 'string'}, {'type': 'null'}]}}, 'type': 'object'}, 'paths': {'additionalProperties': False, 'properties': {'fc': {'anyOf': [{'type': 'string'}, {'type': 'null'}]}, 'cc': {'anyOf': [{'type': 'string'}, {'type': 'null'}]}, 'cxx': {'anyOf': [{'type': 'string'}, {'type': 'null'}]}, 'f77': {'anyOf': [{'type': 'string'}, {'type': 'null'}]}}, 'required': ['cc', 'cxx', 'f77', 'fc'], 'type': 'object'}, 'alias': {'anyOf': [{'type': 'string'}, {'type': 'null'}]}}, 'required': ['paths', 'spec', 'modules', 'operating_system'], 'type': 'object'}}, 'type': 'object'}], 'type': 'array'}}*[¶](#spack.schema.compilers.properties) Properties for inclusion in other schemas `spack.schema.compilers.``schema` *= {'$schema': 'http://json-schema.org/schema#', 'additionalProperties': False, 'properties': {'compilers': {'items': [{'additionalProperties': False, 'properties': {'compiler': {'additionalProperties': False, 'properties': {'modules': {'anyOf': [{'type': 'string'}, {'type': 'null'}, {'type': 'array'}]}, 'extra_rpaths': {'items': {'type': 'string'}, 'default': [], 'type': 'array'}, 'operating_system': {'type': 'string'}, 'spec': {'type': 'string'}, 'environment': {'properties': {'append-path': {'patternProperties': {'\\w[\\w-]*': {'anyOf': [{'type': 'string'}, {'type': 'number'}]}}, 'type': 'object'}, 
'prepend-path': {'patternProperties': {'\\w[\\w-]*': {'anyOf': [{'type': 'string'}, {'type': 'number'}]}}, 'type': 'object'}, 'unset': {'patternProperties': {'\\w[\\w-]*': {'type': 'null'}}, 'type': 'object'}, 'set': {'patternProperties': {'\\w[\\w-]*': {'anyOf': [{'type': 'string'}, {'type': 'number'}]}}, 'type': 'object'}}, 'additionalProperties': False, 'default': {}, 'type': 'object'}, 'target': {'type': 'string'}, 'flags': {'additionalProperties': False, 'properties': {'fflags': {'anyOf': [{'type': 'string'}, {'type': 'null'}]}, 'ldflags': {'anyOf': [{'type': 'string'}, {'type': 'null'}]}, 'cppflags': {'anyOf': [{'type': 'string'}, {'type': 'null'}]}, 'cflags': {'anyOf': [{'type': 'string'}, {'type': 'null'}]}, 'ldlibs': {'anyOf': [{'type': 'string'}, {'type': 'null'}]}, 'cxxflags': {'anyOf': [{'type': 'string'}, {'type': 'null'}]}}, 'type': 'object'}, 'paths': {'additionalProperties': False, 'properties': {'fc': {'anyOf': [{'type': 'string'}, {'type': 'null'}]}, 'cc': {'anyOf': [{'type': 'string'}, {'type': 'null'}]}, 'cxx': {'anyOf': [{'type': 'string'}, {'type': 'null'}]}, 'f77': {'anyOf': [{'type': 'string'}, {'type': 'null'}]}}, 'required': ['cc', 'cxx', 'f77', 'fc'], 'type': 'object'}, 'alias': {'anyOf': [{'type': 'string'}, {'type': 'null'}]}}, 'required': ['paths', 'spec', 'modules', 'operating_system'], 'type': 'object'}}, 'type': 'object'}], 'type': 'array'}}, 'title': 'Spack compiler configuration file schema', 'type': 'object'}*[¶](#spack.schema.compilers.schema) Full schema with metadata ##### spack.schema.config module[¶](#module-spack.schema.config) Schema for config.yaml configuration file. 
``` #: Properties for inclusion in other schemas properties = { 'config': { 'type': 'object', 'default': {}, 'properties': { 'install_tree': {'type': 'string'}, 'install_hash_length': {'type': 'integer', 'minimum': 1}, 'install_path_scheme': {'type': 'string'}, 'build_stage': { 'oneOf': [ {'type': 'string'}, {'type': 'array', 'items': {'type': 'string'}}], }, 'template_dirs': { 'type': 'array', 'items': {'type': 'string'} }, 'module_roots': { 'type': 'object', 'additionalProperties': False, 'properties': { 'tcl': {'type': 'string'}, 'lmod': {'type': 'string'}, 'dotkit': {'type': 'string'}, }, }, 'source_cache': {'type': 'string'}, 'misc_cache': {'type': 'string'}, 'verify_ssl': {'type': 'boolean'}, 'debug': {'type': 'boolean'}, 'checksum': {'type': 'boolean'}, 'locks': {'type': 'boolean'}, 'dirty': {'type': 'boolean'}, 'build_language': {'type': 'string'}, 'build_jobs': {'type': 'integer', 'minimum': 1}, 'ccache': {'type': 'boolean'}, 'db_lock_timeout': {'type': 'integer', 'minimum': 1}, 'package_lock_timeout': { 'anyOf': [ {'type': 'integer', 'minimum': 1}, {'type': 'null'} ], }, }, }, } #: Full schema with metadata schema = { '$schema': 'http://json-schema.org/schema#', 'title': 'Spack core configuration file schema', 'type': 'object', 'additionalProperties': False, 'properties': properties, } ``` `spack.schema.config.``properties` *= {'config': {'properties': {'install_path_scheme': {'type': 'string'}, 'locks': {'type': 'boolean'}, 'db_lock_timeout': {'minimum': 1, 'type': 'integer'}, 'module_roots': {'additionalProperties': False, 'properties': {'tcl': {'type': 'string'}, 'lmod': {'type': 'string'}, 'dotkit': {'type': 'string'}}, 'type': 'object'}, 'template_dirs': {'items': {'type': 'string'}, 'type': 'array'}, 'ccache': {'type': 'boolean'}, 'install_hash_length': {'minimum': 1, 'type': 'integer'}, 'source_cache': {'type': 'string'}, 'checksum': {'type': 'boolean'}, 'verify_ssl': {'type': 'boolean'}, 'install_tree': {'type': 'string'}, 'build_jobs': 
{'minimum': 1, 'type': 'integer'}, 'debug': {'type': 'boolean'}, 'build_stage': {'oneOf': [{'type': 'string'}, {'items': {'type': 'string'}, 'type': 'array'}]}, 'misc_cache': {'type': 'string'}, 'build_language': {'type': 'string'}, 'dirty': {'type': 'boolean'}, 'package_lock_timeout': {'anyOf': [{'minimum': 1, 'type': 'integer'}, {'type': 'null'}]}}, 'default': {}, 'type': 'object'}}*[¶](#spack.schema.config.properties) Properties for inclusion in other schemas `spack.schema.config.``schema` *= {'$schema': 'http://json-schema.org/schema#', 'additionalProperties': False, 'properties': {'config': {'properties': {'install_path_scheme': {'type': 'string'}, 'locks': {'type': 'boolean'}, 'db_lock_timeout': {'minimum': 1, 'type': 'integer'}, 'module_roots': {'additionalProperties': False, 'properties': {'tcl': {'type': 'string'}, 'lmod': {'type': 'string'}, 'dotkit': {'type': 'string'}}, 'type': 'object'}, 'template_dirs': {'items': {'type': 'string'}, 'type': 'array'}, 'ccache': {'type': 'boolean'}, 'install_hash_length': {'minimum': 1, 'type': 'integer'}, 'source_cache': {'type': 'string'}, 'checksum': {'type': 'boolean'}, 'verify_ssl': {'type': 'boolean'}, 'install_tree': {'type': 'string'}, 'build_jobs': {'minimum': 1, 'type': 'integer'}, 'debug': {'type': 'boolean'}, 'build_stage': {'oneOf': [{'type': 'string'}, {'items': {'type': 'string'}, 'type': 'array'}]}, 'misc_cache': {'type': 'string'}, 'build_language': {'type': 'string'}, 'dirty': {'type': 'boolean'}, 'package_lock_timeout': {'anyOf': [{'minimum': 1, 'type': 'integer'}, {'type': 'null'}]}}, 'default': {}, 'type': 'object'}}, 'title': 'Spack core configuration file schema', 'type': 'object'}*[¶](#spack.schema.config.schema) Full schema with metadata ##### spack.schema.env module[¶](#module-spack.schema.env) Schema for env.yaml configuration file. 
``` 'type': 'string' }, }, 'specs': { # Specs is a list of specs, which can have # optional additional properties in a sub-dict 'type': 'array', 'default': [], 'additionalProperties': False, 'items': { 'anyOf': [ {'type': 'string'}, {'type': 'null'}, {'type': 'object'}, ] } } } ) } } } ``` ##### spack.schema.merged module[¶](#module-spack.schema.merged) Schema for configuration merged into one file. ``` } ``` `spack.schema.merged.``properties` *= {'compilers': {'items': [{'additionalProperties': False, 'properties': {'compiler': {'additionalProperties': False, 'properties': {'modules': {'anyOf': [{'type': 'string'}, {'type': 'null'}, {'type': 'array'}]}, 'extra_rpaths': {'items': {'type': 'string'}, 'default': [], 'type': 'array'}, 'operating_system': {'type': 'string'}, 'spec': {'type': 'string'}, 'environment': {'properties': {'append-path': {'patternProperties': {'\\w[\\w-]*': {'anyOf': [{'type': 'string'}, {'type': 'number'}]}}, 'type': 'object'}, 'prepend-path': {'patternProperties': {'\\w[\\w-]*': {'anyOf': [{'type': 'string'}, {'type': 'number'}]}}, 'type': 'object'}, 'unset': {'patternProperties': {'\\w[\\w-]*': {'type': 'null'}}, 'type': 'object'}, 'set': {'patternProperties': {'\\w[\\w-]*': {'anyOf': [{'type': 'string'}, {'type': 'number'}]}}, 'type': 'object'}}, 'additionalProperties': False, 'default': {}, 'type': 'object'}, 'target': {'type': 'string'}, 'flags': {'additionalProperties': False, 'properties': {'fflags': {'anyOf': [{'type': 'string'}, {'type': 'null'}]}, 'ldflags': {'anyOf': [{'type': 'string'}, {'type': 'null'}]}, 'cppflags': {'anyOf': [{'type': 'string'}, {'type': 'null'}]}, 'cflags': {'anyOf': [{'type': 'string'}, {'type': 'null'}]}, 'ldlibs': {'anyOf': [{'type': 'string'}, {'type': 'null'}]}, 'cxxflags': {'anyOf': [{'type': 'string'}, {'type': 'null'}]}}, 'type': 'object'}, 'paths': {'additionalProperties': False, 'properties': {'fc': {'anyOf': [{'type': 'string'}, {'type': 'null'}]}, 'cc': {'anyOf': [{'type': 'string'}, {'type': 
'null'}]}, 'cxx': {'anyOf': [{'type': 'string'}, {'type': 'null'}]}, 'f77': {'anyOf': [{'type': 'string'}, {'type': 'null'}]}}, 'required': ['cc', 'cxx', 'f77', 'fc'], 'type': 'object'}, 'alias': {'anyOf': [{'type': 'string'}, {'type': 'null'}]}}, 'required': ['paths', 'spec', 'modules', 'operating_system'], 'type': 'object'}}, 'type': 'object'}], 'type': 'array'}, 'config': {'properties': {'install_path_scheme': {'type': 'string'}, 'locks': {'type': 'boolean'}, 'db_lock_timeout': {'minimum': 1, 'type': 'integer'}, 'module_roots': {'additionalProperties': False, 'properties': {'tcl': {'type': 'string'}, 'lmod': {'type': 'string'}, 'dotkit': {'type': 'string'}}, 'type': 'object'}, 'template_dirs': {'items': {'type': 'string'}, 'type': 'array'}, 'ccache': {'type': 'boolean'}, 'install_hash_length': {'minimum': 1, 'type': 'integer'}, 'source_cache': {'type': 'string'}, 'checksum': {'type': 'boolean'}, 'verify_ssl': {'type': 'boolean'}, 'install_tree': {'type': 'string'}, 'build_jobs': {'minimum': 1, 'type': 'integer'}, 'debug': {'type': 'boolean'}, 'build_stage': {'oneOf': [{'type': 'string'}, {'items': {'type': 'string'}, 'type': 'array'}]}, 'misc_cache': {'type': 'string'}, 'build_language': {'type': 'string'}, 'dirty': {'type': 'boolean'}, 'package_lock_timeout': {'anyOf': [{'minimum': 1, 'type': 'integer'}, {'type': 'null'}]}}, 'default': {}, 'type': 'object'}, 'mirrors': {'additionalProperties': False, 'patternProperties': {'\\w[\\w-]*': {'type': 'string'}}, 'default': {}, 'type': 'object'}, 'modules': {'properties': {'dotkit': {'allOf': [{'$ref': '#/definitions/module_type_configuration'}, {}]}, 'prefix_inspections': {'patternProperties': {'\\w[\\w-]*': {'$ref': '#/definitions/array_of_strings'}}, 'type': 'object'}, 'lmod': {'allOf': [{'$ref': '#/definitions/module_type_configuration'}, {'hierarchical_scheme': {'$ref': '#/definitions/array_of_strings'}, 'core_compilers': {'$ref': '#/definitions/array_of_strings'}}]}, 'tcl': {'allOf': [{'$ref': 
'#/definitions/module_type_configuration'}, {}]}, 'enable': {'items': {'enum': ['tcl', 'dotkit', 'lmod'], 'type': 'string'}, 'default': [], 'type': 'array'}}, 'additionalProperties': False, 'default': {}, 'type': 'object'}, 'packages': {'additionalProperties': False, 'patternProperties': {'\\w[\\w-]*': {'properties': {'providers': {'additionalProperties': False, 'patternProperties': {'\\w[\\w-]*': {'items': {'type': 'string'}, 'default': [], 'type': 'array'}}, 'default': {}, 'type': 'object'}, 'compiler': {'items': {'type': 'string'}, 'default': [], 'type': 'array'}, 'variants': {'oneOf': [{'type': 'string'}, {'items': {'type': 'string'}, 'type': 'array'}]}, 'modules': {'default': {}, 'type': 'object'}, 'paths': {'default': {}, 'type': 'object'}, 'permissions': {'additionalProperties': False, 'properties': {'group': {'type': 'string'}, 'read': {'enum': ['user', 'group', 'world'], 'type': 'string'}, 'write': {'enum': ['user', 'group', 'world'], 'type': 'string'}}, 'type': 'object'}, 'version': {'items': {'anyOf': [{'type': 'string'}, {'type': 'number'}]}, 'default': [], 'type': 'array'}, 'buildable': {'default': True, 'type': 'boolean'}}, 'additionalProperties': False, 'default': {}, 'type': 'object'}}, 'default': {}, 'type': 'object'}, 'repos': {'items': {'type': 'string'}, 'default': [], 'type': 'array'}}*[¶](#spack.schema.merged.properties) Properties for inclusion in other schemas `spack.schema.merged.``schema` *= {'$schema': 'http://json-schema.org/schema#', 'additionalProperties': False, 'definitions': {'module_type_configuration': {'anyOf': [{'properties': {'blacklist_implicits': {'default': False, 'type': 'boolean'}, 'naming_scheme': {'type': 'string'}, 'verbose': {'default': False, 'type': 'boolean'}, 'hash_length': {'minimum': 0, 'default': 7, 'type': 'integer'}, 'blacklist': {'$ref': '#/definitions/array_of_strings'}, 'whitelist': {'$ref': '#/definitions/array_of_strings'}}}, {'patternProperties': {'\\w[\\w-]*': {'$ref': 
'#/definitions/module_file_configuration'}}}], 'default': {}, 'type': 'object'}, 'module_file_configuration': {'properties': {'conflict': {'$ref': '#/definitions/array_of_strings'}, 'filter': {'properties': {'environment_blacklist': {'items': {'type': 'string'}, 'default': [], 'type': 'array'}}, 'additionalProperties': False, 'default': {}, 'type': 'object'}, 'suffixes': {'$ref': '#/definitions/dictionary_of_strings'}, 'environment': {'properties': {'append_path': {'$ref': '#/definitions/dictionary_of_strings'}, 'prepend_path': {'$ref': '#/definitions/dictionary_of_strings'}, 'unset': {'$ref': '#/definitions/array_of_strings'}, 'set': {'$ref': '#/definitions/dictionary_of_strings'}}, 'additionalProperties': False, 'default': {}, 'type': 'object'}, 'load': {'$ref': '#/definitions/array_of_strings'}, 'prerequisites': {'$ref': '#/definitions/dependency_selection'}, 'autoload': {'$ref': '#/definitions/dependency_selection'}, 'template': {'type': 'string'}}, 'additionalProperties': False, 'default': {}, 'type': 'object'}, 'array_of_strings': {'items': {'type': 'string'}, 'default': [], 'type': 'array'}, 'dictionary_of_strings': {'patternProperties': {'\\w[\\w-]*': {'type': 'string'}}, 'type': 'object'}, 'dependency_selection': {'enum': ['none', 'direct', 'all'], 'type': 'string'}}, 'properties': {'mirrors': {'additionalProperties': False, 'patternProperties': {'\\w[\\w-]*': {'type': 'string'}}, 'default': {}, 'type': 'object'}, 'config': {'properties': {'install_path_scheme': {'type': 'string'}, 'locks': {'type': 'boolean'}, 'db_lock_timeout': {'minimum': 1, 'type': 'integer'}, 'module_roots': {'additionalProperties': False, 'properties': {'tcl': {'type': 'string'}, 'lmod': {'type': 'string'}, 'dotkit': {'type': 'string'}}, 'type': 'object'}, 'template_dirs': {'items': {'type': 'string'}, 'type': 'array'}, 'ccache': {'type': 'boolean'}, 'install_hash_length': {'minimum': 1, 'type': 'integer'}, 'source_cache': {'type': 'string'}, 'checksum': {'type': 'boolean'}, 
'verify_ssl': {'type': 'boolean'}, 'install_tree': {'type': 'string'}, 'build_jobs': {'minimum': 1, 'type': 'integer'}, 'debug': {'type': 'boolean'}, 'build_stage': {'oneOf': [{'type': 'string'}, {'items': {'type': 'string'}, 'type': 'array'}]}, 'misc_cache': {'type': 'string'}, 'build_language': {'type': 'string'}, 'dirty': {'type': 'boolean'}, 'package_lock_timeout': {'anyOf': [{'minimum': 1, 'type': 'integer'}, {'type': 'null'}]}}, 'default': {}, 'type': 'object'}, 'modules': {'properties': {'dotkit': {'allOf': [{'$ref': '#/definitions/module_type_configuration'}, {}]}, 'prefix_inspections': {'patternProperties': {'\\w[\\w-]*': {'$ref': '#/definitions/array_of_strings'}}, 'type': 'object'}, 'lmod': {'allOf': [{'$ref': '#/definitions/module_type_configuration'}, {'hierarchical_scheme': {'$ref': '#/definitions/array_of_strings'}, 'core_compilers': {'$ref': '#/definitions/array_of_strings'}}]}, 'tcl': {'allOf': [{'$ref': '#/definitions/module_type_configuration'}, {}]}, 'enable': {'items': {'enum': ['tcl', 'dotkit', 'lmod'], 'type': 'string'}, 'default': [], 'type': 'array'}}, 'additionalProperties': False, 'default': {}, 'type': 'object'}, 'packages': {'additionalProperties': False, 'patternProperties': {'\\w[\\w-]*': {'properties': {'providers': {'additionalProperties': False, 'patternProperties': {'\\w[\\w-]*': {'items': {'type': 'string'}, 'default': [], 'type': 'array'}}, 'default': {}, 'type': 'object'}, 'compiler': {'items': {'type': 'string'}, 'default': [], 'type': 'array'}, 'variants': {'oneOf': [{'type': 'string'}, {'items': {'type': 'string'}, 'type': 'array'}]}, 'modules': {'default': {}, 'type': 'object'}, 'paths': {'default': {}, 'type': 'object'}, 'permissions': {'additionalProperties': False, 'properties': {'group': {'type': 'string'}, 'read': {'enum': ['user', 'group', 'world'], 'type': 'string'}, 'write': {'enum': ['user', 'group', 'world'], 'type': 'string'}}, 'type': 'object'}, 'version': {'items': {'anyOf': [{'type': 'string'}, {'type': 
'number'}]}, 'default': [], 'type': 'array'}, 'buildable': {'default': True, 'type': 'boolean'}}, 'additionalProperties': False, 'default': {}, 'type': 'object'}}, 'default': {}, 'type': 'object'}, 'compilers': {'items': [{'additionalProperties': False, 'properties': {'compiler': {'additionalProperties': False, 'properties': {'modules': {'anyOf': [{'type': 'string'}, {'type': 'null'}, {'type': 'array'}]}, 'extra_rpaths': {'items': {'type': 'string'}, 'default': [], 'type': 'array'}, 'operating_system': {'type': 'string'}, 'spec': {'type': 'string'}, 'environment': {'properties': {'append-path': {'patternProperties': {'\\w[\\w-]*': {'anyOf': [{'type': 'string'}, {'type': 'number'}]}}, 'type': 'object'}, 'prepend-path': {'patternProperties': {'\\w[\\w-]*': {'anyOf': [{'type': 'string'}, {'type': 'number'}]}}, 'type': 'object'}, 'unset': {'patternProperties': {'\\w[\\w-]*': {'type': 'null'}}, 'type': 'object'}, 'set': {'patternProperties': {'\\w[\\w-]*': {'anyOf': [{'type': 'string'}, {'type': 'number'}]}}, 'type': 'object'}}, 'additionalProperties': False, 'default': {}, 'type': 'object'}, 'target': {'type': 'string'}, 'flags': {'additionalProperties': False, 'properties': {'fflags': {'anyOf': [{'type': 'string'}, {'type': 'null'}]}, 'ldflags': {'anyOf': [{'type': 'string'}, {'type': 'null'}]}, 'cppflags': {'anyOf': [{'type': 'string'}, {'type': 'null'}]}, 'cflags': {'anyOf': [{'type': 'string'}, {'type': 'null'}]}, 'ldlibs': {'anyOf': [{'type': 'string'}, {'type': 'null'}]}, 'cxxflags': {'anyOf': [{'type': 'string'}, {'type': 'null'}]}}, 'type': 'object'}, 'paths': {'additionalProperties': False, 'properties': {'fc': {'anyOf': [{'type': 'string'}, {'type': 'null'}]}, 'cc': {'anyOf': [{'type': 'string'}, {'type': 'null'}]}, 'cxx': {'anyOf': [{'type': 'string'}, {'type': 'null'}]}, 'f77': {'anyOf': [{'type': 'string'}, {'type': 'null'}]}}, 'required': ['cc', 'cxx', 'f77', 'fc'], 'type': 'object'}, 'alias': {'anyOf': [{'type': 'string'}, {'type': 'null'}]}}, 
'required': ['paths', 'spec', 'modules', 'operating_system'], 'type': 'object'}}, 'type': 'object'}], 'type': 'array'}, 'repos': {'items': {'type': 'string'}, 'default': [], 'type': 'array'}}, 'title': 'Spack merged configuration file schema', 'type': 'object'}*[¶](#spack.schema.merged.schema) Full schema with metadata ##### spack.schema.mirrors module[¶](#module-spack.schema.mirrors) Schema for mirrors.yaml configuration file. ``` #: Properties for inclusion in other schemas properties = { 'mirrors': { 'type': 'object', 'default': {}, 'additionalProperties': False, 'patternProperties': { r'\w[\w-]*': {'type': 'string'}, }, }, } #: Full schema with metadata schema = { '$schema': 'http://json-schema.org/schema#', 'title': 'Spack mirror configuration file schema', 'type': 'object', 'additionalProperties': False, 'properties': properties, } ``` `spack.schema.mirrors.``properties` *= {'mirrors': {'additionalProperties': False, 'patternProperties': {'\\w[\\w-]*': {'type': 'string'}}, 'default': {}, 'type': 'object'}}*[¶](#spack.schema.mirrors.properties) Properties for inclusion in other schemas `spack.schema.mirrors.``schema` *= {'$schema': 'http://json-schema.org/schema#', 'additionalProperties': False, 'properties': {'mirrors': {'additionalProperties': False, 'patternProperties': {'\\w[\\w-]*': {'type': 'string'}}, 'default': {}, 'type': 'object'}}, 'title': 'Spack mirror configuration file schema', 'type': 'object'}*[¶](#spack.schema.mirrors.schema) Full schema with metadata ##### spack.schema.modules module[¶](#module-spack.schema.modules) Schema for modules.yaml configuration file. 
``` #: Definitions for parts of module schema definitions = { 'array_of_strings': { 'type': 'array', 'default': [], 'items': { 'type': 'string' } }, 'dictionary_of_strings': { 'type': 'object', 'patternProperties': { r'\w[\w-]*': { # key 'type': 'string' } } }, 'dependency_selection': { 'type': 'string', 'enum': ['none', 'direct', 'all'] }, 'module_file_configuration': { 'type': 'object', 'default': {}, 'additionalProperties': False, 'properties': { 'filter': { 'type': 'object', 'default': {}, 'additionalProperties': False, 'properties': { 'environment_blacklist': { 'type': 'array', 'default': [], 'items': { 'type': 'string' } } } }, 'template': { 'type': 'string' }, 'autoload': { '$ref': '#/definitions/dependency_selection'}, 'prerequisites': { '$ref': '#/definitions/dependency_selection'}, 'conflict': { '$ref': '#/definitions/array_of_strings'}, 'load': { '$ref': '#/definitions/array_of_strings'}, 'suffixes': { '$ref': '#/definitions/dictionary_of_strings'}, 'environment': { 'type': 'object', 'default': {}, 'additionalProperties': False, 'properties': { 'set': { '$ref': '#/definitions/dictionary_of_strings'}, 'unset': { '$ref': '#/definitions/array_of_strings'}, 'prepend_path': { '$ref': '#/definitions/dictionary_of_strings'}, 'append_path': { '$ref': '#/definitions/dictionary_of_strings'} } } } }, 'module_type_configuration': { 'type': 'object', 'default': {}, 'anyOf': [ {'properties': { 'verbose': { 'type': 'boolean', 'default': False }, 'hash_length': { 'type': 'integer', 'minimum': 0, 'default': 7 }, 'whitelist': { '$ref': '#/definitions/array_of_strings'}, 'blacklist': { '$ref': '#/definitions/array_of_strings'}, 'blacklist_implicits': { 'type': 'boolean', 'default': False }, 'naming_scheme': { 'type': 'string' # Can we be more specific here? 
} }}, {'patternProperties': { r'\w[\w-]*': { '$ref': '#/definitions/module_file_configuration' } }} ] } } # Properties for inclusion into other schemas (requires definitions) properties = { 'modules': { 'type': 'object', 'default': {}, 'additionalProperties': False, 'properties': { 'prefix_inspections': { 'type': 'object', 'patternProperties': { # prefix-relative path to be inspected for existence r'\w[\w-]*': { '$ref': '#/definitions/array_of_strings'}}}, 'enable': { 'type': 'array', 'default': [], 'items': { 'type': 'string', 'enum': ['tcl', 'dotkit', 'lmod']}}, 'lmod': { 'allOf': [ # Base configuration {'$ref': '#/definitions/module_type_configuration'}, { 'core_compilers': { '$ref': '#/definitions/array_of_strings' }, 'hierarchical_scheme': { '$ref': '#/definitions/array_of_strings' } } # Specific lmod extensions ] }, 'tcl': { 'allOf': [ # Base configuration {'$ref': '#/definitions/module_type_configuration'}, {} # Specific tcl extensions ] }, 'dotkit': { 'allOf': [ # Base configuration {'$ref': '#/definitions/module_type_configuration'}, {} # Specific dotkit extensions ] }, }, }, } #: Full schema with metadata schema = { '$schema': 'http://json-schema.org/schema#', 'title': 'Spack module file configuration file schema', 'definitions': definitions, 'type': 'object', 'additionalProperties': False, 'properties': properties, } ``` `spack.schema.modules.``definitions` *= {'array_of_strings': {'items': {'type': 'string'}, 'default': [], 'type': 'array'}, 'dependency_selection': {'enum': ['none', 'direct', 'all'], 'type': 'string'}, 'dictionary_of_strings': {'patternProperties': {'\\w[\\w-]*': {'type': 'string'}}, 'type': 'object'}, 'module_file_configuration': {'properties': {'conflict': {'$ref': '#/definitions/array_of_strings'}, 'filter': {'properties': {'environment_blacklist': {'items': {'type': 'string'}, 'default': [], 'type': 'array'}}, 'additionalProperties': False, 'default': {}, 'type': 'object'}, 'suffixes': {'$ref': 
'#/definitions/dictionary_of_strings'}, 'environment': {'properties': {'append_path': {'$ref': '#/definitions/dictionary_of_strings'}, 'prepend_path': {'$ref': '#/definitions/dictionary_of_strings'}, 'unset': {'$ref': '#/definitions/array_of_strings'}, 'set': {'$ref': '#/definitions/dictionary_of_strings'}}, 'additionalProperties': False, 'default': {}, 'type': 'object'}, 'load': {'$ref': '#/definitions/array_of_strings'}, 'prerequisites': {'$ref': '#/definitions/dependency_selection'}, 'autoload': {'$ref': '#/definitions/dependency_selection'}, 'template': {'type': 'string'}}, 'additionalProperties': False, 'default': {}, 'type': 'object'}, 'module_type_configuration': {'anyOf': [{'properties': {'blacklist_implicits': {'default': False, 'type': 'boolean'}, 'naming_scheme': {'type': 'string'}, 'verbose': {'default': False, 'type': 'boolean'}, 'hash_length': {'minimum': 0, 'default': 7, 'type': 'integer'}, 'blacklist': {'$ref': '#/definitions/array_of_strings'}, 'whitelist': {'$ref': '#/definitions/array_of_strings'}}}, {'patternProperties': {'\\w[\\w-]*': {'$ref': '#/definitions/module_file_configuration'}}}], 'default': {}, 'type': 'object'}}*[¶](#spack.schema.modules.definitions) Definitions for parts of module schema `spack.schema.modules.``schema` *= {'$schema': 'http://json-schema.org/schema#', 'additionalProperties': False, 'definitions': {'module_type_configuration': {'anyOf': [{'properties': {'blacklist_implicits': {'default': False, 'type': 'boolean'}, 'naming_scheme': {'type': 'string'}, 'verbose': {'default': False, 'type': 'boolean'}, 'hash_length': {'minimum': 0, 'default': 7, 'type': 'integer'}, 'blacklist': {'$ref': '#/definitions/array_of_strings'}, 'whitelist': {'$ref': '#/definitions/array_of_strings'}}}, {'patternProperties': {'\\w[\\w-]*': {'$ref': '#/definitions/module_file_configuration'}}}], 'default': {}, 'type': 'object'}, 'module_file_configuration': {'properties': {'conflict': {'$ref': '#/definitions/array_of_strings'}, 'filter': 
{'properties': {'environment_blacklist': {'items': {'type': 'string'}, 'default': [], 'type': 'array'}}, 'additionalProperties': False, 'default': {}, 'type': 'object'}, 'suffixes': {'$ref': '#/definitions/dictionary_of_strings'}, 'environment': {'properties': {'append_path': {'$ref': '#/definitions/dictionary_of_strings'}, 'prepend_path': {'$ref': '#/definitions/dictionary_of_strings'}, 'unset': {'$ref': '#/definitions/array_of_strings'}, 'set': {'$ref': '#/definitions/dictionary_of_strings'}}, 'additionalProperties': False, 'default': {}, 'type': 'object'}, 'load': {'$ref': '#/definitions/array_of_strings'}, 'prerequisites': {'$ref': '#/definitions/dependency_selection'}, 'autoload': {'$ref': '#/definitions/dependency_selection'}, 'template': {'type': 'string'}}, 'additionalProperties': False, 'default': {}, 'type': 'object'}, 'array_of_strings': {'items': {'type': 'string'}, 'default': [], 'type': 'array'}, 'dictionary_of_strings': {'patternProperties': {'\\w[\\w-]*': {'type': 'string'}}, 'type': 'object'}, 'dependency_selection': {'enum': ['none', 'direct', 'all'], 'type': 'string'}}, 'properties': {'modules': {'properties': {'dotkit': {'allOf': [{'$ref': '#/definitions/module_type_configuration'}, {}]}, 'prefix_inspections': {'patternProperties': {'\\w[\\w-]*': {'$ref': '#/definitions/array_of_strings'}}, 'type': 'object'}, 'lmod': {'allOf': [{'$ref': '#/definitions/module_type_configuration'}, {'hierarchical_scheme': {'$ref': '#/definitions/array_of_strings'}, 'core_compilers': {'$ref': '#/definitions/array_of_strings'}}]}, 'tcl': {'allOf': [{'$ref': '#/definitions/module_type_configuration'}, {}]}, 'enable': {'items': {'enum': ['tcl', 'dotkit', 'lmod'], 'type': 'string'}, 'default': [], 'type': 'array'}}, 'additionalProperties': False, 'default': {}, 'type': 'object'}}, 'title': 'Spack module file configuration file schema', 'type': 'object'}*[¶](#spack.schema.modules.schema) Full schema with metadata ##### spack.schema.packages 
module[¶](#module-spack.schema.packages) Schema for packages.yaml configuration files. ``` #: Properties for inclusion in other schemas properties = { 'packages': { 'type': 'object', 'default': {}, 'additionalProperties': False, 'patternProperties': { r'\w[\w-]*': { # package name 'type': 'object', 'default': {}, 'additionalProperties': False, 'properties': { 'version': { 'type': 'array', 'default': [], # version strings 'items': {'anyOf': [{'type': 'string'}, {'type': 'number'}]}}, 'compiler': { 'type': 'array', 'default': [], 'items': {'type': 'string'}}, # compiler specs 'buildable': { 'type': 'boolean', 'default': True, }, 'permissions': { 'type': 'object', 'additionalProperties': False, 'properties': { 'read': { 'type': 'string', 'enum': ['user', 'group', 'world'], }, 'write': { 'type': 'string', 'enum': ['user', 'group', 'world'], }, 'group': { 'type': 'string', }, }, }, 'modules': { 'type': 'object', 'default': {}, }, 'providers': { 'type': 'object', 'default': {}, 'additionalProperties': False, 'patternProperties': { r'\w[\w-]*': { 'type': 'array', 'default': [], 'items': {'type': 'string'}, }, }, }, 'paths': { 'type': 'object', 'default': {}, }, 'variants': { 'oneOf': [ {'type': 'string'}, {'type': 'array', 'items': {'type': 'string'}}], }, }, }, }, }, } #: Full schema with metadata schema = { '$schema': 'http://json-schema.org/schema#', 'title': 'Spack package configuration file schema', 'type': 'object', 'additionalProperties': False, 'properties': properties, } ``` `spack.schema.packages.``properties` *= {'packages': {'additionalProperties': False, 'patternProperties': {'\\w[\\w-]*': {'properties': {'providers': {'additionalProperties': False, 'patternProperties': {'\\w[\\w-]*': {'items': {'type': 'string'}, 'default': [], 'type': 'array'}}, 'default': {}, 'type': 'object'}, 'compiler': {'items': {'type': 'string'}, 'default': [], 'type': 'array'}, 'variants': {'oneOf': [{'type': 'string'}, {'items': {'type': 'string'}, 'type': 'array'}]}, 'modules': 
{'default': {}, 'type': 'object'}, 'paths': {'default': {}, 'type': 'object'}, 'permissions': {'additionalProperties': False, 'properties': {'group': {'type': 'string'}, 'read': {'enum': ['user', 'group', 'world'], 'type': 'string'}, 'write': {'enum': ['user', 'group', 'world'], 'type': 'string'}}, 'type': 'object'}, 'version': {'items': {'anyOf': [{'type': 'string'}, {'type': 'number'}]}, 'default': [], 'type': 'array'}, 'buildable': {'default': True, 'type': 'boolean'}}, 'additionalProperties': False, 'default': {}, 'type': 'object'}}, 'default': {}, 'type': 'object'}}*[¶](#spack.schema.packages.properties) Properties for inclusion in other schemas `spack.schema.packages.``schema` *= {'$schema': 'http://json-schema.org/schema#', 'additionalProperties': False, 'properties': {'packages': {'additionalProperties': False, 'patternProperties': {'\\w[\\w-]*': {'properties': {'providers': {'additionalProperties': False, 'patternProperties': {'\\w[\\w-]*': {'items': {'type': 'string'}, 'default': [], 'type': 'array'}}, 'default': {}, 'type': 'object'}, 'compiler': {'items': {'type': 'string'}, 'default': [], 'type': 'array'}, 'variants': {'oneOf': [{'type': 'string'}, {'items': {'type': 'string'}, 'type': 'array'}]}, 'modules': {'default': {}, 'type': 'object'}, 'paths': {'default': {}, 'type': 'object'}, 'permissions': {'additionalProperties': False, 'properties': {'group': {'type': 'string'}, 'read': {'enum': ['user', 'group', 'world'], 'type': 'string'}, 'write': {'enum': ['user', 'group', 'world'], 'type': 'string'}}, 'type': 'object'}, 'version': {'items': {'anyOf': [{'type': 'string'}, {'type': 'number'}]}, 'default': [], 'type': 'array'}, 'buildable': {'default': True, 'type': 'boolean'}}, 'additionalProperties': False, 'default': {}, 'type': 'object'}}, 'default': {}, 'type': 'object'}}, 'title': 'Spack package configuration file schema', 'type': 'object'}*[¶](#spack.schema.packages.schema) Full schema with metadata ##### spack.schema.repos 
module[¶](#module-spack.schema.repos) Schema for repos.yaml configuration file. ``` #: Properties for inclusion in other schemas properties = { 'repos': { 'type': 'array', 'default': [], 'items': {'type': 'string'}, }, } #: Full schema with metadata schema = { '$schema': 'http://json-schema.org/schema#', 'title': 'Spack repository configuration file schema', 'type': 'object', 'additionalProperties': False, 'properties': properties, } ``` `spack.schema.repos.``properties` *= {'repos': {'items': {'type': 'string'}, 'default': [], 'type': 'array'}}*[¶](#spack.schema.repos.properties) Properties for inclusion in other schemas `spack.schema.repos.``schema` *= {'$schema': 'http://json-schema.org/schema#', 'additionalProperties': False, 'properties': {'repos': {'items': {'type': 'string'}, 'default': [], 'type': 'array'}}, 'title': 'Spack repository configuration file schema', 'type': 'object'}*[¶](#spack.schema.repos.schema) Full schema with metadata ##### Module contents[¶](#module-spack.schema) This module contains jsonschema files for all of Spack’s YAML formats. 
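Each schema module above follows the same two-part convention: a module-level `properties` dict meant for inclusion in other schemas (this is how `spack.schema.merged` composes them), and a full `schema` dict that wraps `properties` with `$schema`, `title`, and `type` metadata. The sketch below shows what the mirrors schema enforces; the schema dict is copied verbatim from `spack.schema.mirrors` above, but `valid_mirrors` is a hand-rolled, stdlib-only illustration, not Spack's actual validator (Spack validates configuration with the external `jsonschema` package):

```python
import re

# Schema copied verbatim from spack.schema.mirrors above: the module's
# 'properties' dict is embedded under the full schema's 'properties' key.
mirrors_schema = {
    '$schema': 'http://json-schema.org/schema#',
    'title': 'Spack mirror configuration file schema',
    'type': 'object',
    'additionalProperties': False,
    'properties': {
        'mirrors': {
            'type': 'object',
            'default': {},
            'additionalProperties': False,
            'patternProperties': {r'\w[\w-]*': {'type': 'string'}},
        },
    },
}

def valid_mirrors(data):
    # Hand-rolled check of what the schema enforces: an optional top-level
    # 'mirrors' object mapping \w[\w-]* names to URL strings, nothing else.
    if not isinstance(data, dict) or set(data) - {'mirrors'}:
        return False
    mirrors = data.get('mirrors', {})
    if not isinstance(mirrors, dict):
        return False
    name_re = re.compile(r'\w[\w-]*')
    return all(isinstance(k, str) and name_re.search(k) and isinstance(v, str)
               for k, v in mirrors.items())

print(valid_mirrors({'mirrors': {'local-cache': 'file:///tmp/mirror'}}))  # True
print(valid_mirrors({'mirrors': {'bad': 42}}))                            # False
```

The same composition pattern (`properties` reused, then wrapped into a full `schema`) applies to the modules, packages, and repos schemas above.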
#### spack.test package[¶](#spack-test-package) ##### Submodules[¶](#submodules) ##### spack.test.architecture module[¶](#module-spack.test.architecture) Tests that the architecture class is created correctly and that its functions look up the correct architecture name `spack.test.architecture.``test_boolness`()[¶](#spack.test.architecture.test_boolness) `spack.test.architecture.``test_dict_functions_for_architecture`()[¶](#spack.test.architecture.test_dict_functions_for_architecture) `spack.test.architecture.``test_platform`()[¶](#spack.test.architecture.test_platform) `spack.test.architecture.``test_user_back_end_input`(*config*)[¶](#spack.test.architecture.test_user_back_end_input) Tests that when the user specifies only the backend, both the backend target and the backend operating system match `spack.test.architecture.``test_user_defaults`(*config*)[¶](#spack.test.architecture.test_user_defaults) `spack.test.architecture.``test_user_front_end_input`(*config*)[¶](#spack.test.architecture.test_user_front_end_input) Tests that when the user specifies only the frontend, both the frontend target and the frontend operating system match `spack.test.architecture.``test_user_input_combination`(*config*)[¶](#spack.test.architecture.test_user_input_combination) ##### spack.test.build_environment module[¶](#module-spack.test.build_environment) `spack.test.build_environment.``build_environment`()[¶](#spack.test.build_environment.build_environment) `spack.test.build_environment.``test_cc_not_changed_by_modules`(*monkeypatch*)[¶](#spack.test.build_environment.test_cc_not_changed_by_modules) `spack.test.build_environment.``test_compiler_config_modifications`(*monkeypatch*)[¶](#spack.test.build_environment.test_compiler_config_modifications) `spack.test.build_environment.``test_spack_paths_before_module_paths`(*config*, *mock_packages*, *monkeypatch*)[¶](#spack.test.build_environment.test_spack_paths_before_module_paths)
`spack.test.build_environment.``test_static_to_shared_library`(*build_environment*)[¶](#spack.test.build_environment.test_static_to_shared_library) ##### spack.test.build_system_guess module[¶](#module-spack.test.build_system_guess) `spack.test.build_system_guess.``test_build_systems`(*url_and_build_system*)[¶](#spack.test.build_system_guess.test_build_systems) `spack.test.build_system_guess.``url_and_build_system`(*request*, *tmpdir*)[¶](#spack.test.build_system_guess.url_and_build_system) Sets up the resources to be pulled by the stage with the appropriate file name and returns their url along with the correct build-system guess ##### spack.test.build_systems module[¶](#module-spack.test.build_systems) *class* `spack.test.build_systems.``TestAutotoolsPackage`[¶](#spack.test.build_systems.TestAutotoolsPackage) Bases: `object` `pytestmark` *= [Mark(name='usefixtures', args=('config', 'mock_packages'), kwargs={})]*[¶](#spack.test.build_systems.TestAutotoolsPackage.pytestmark) `test_with_or_without`()[¶](#spack.test.build_systems.TestAutotoolsPackage.test_with_or_without) `spack.test.build_systems.``test_affirmative_make_check`(*directory*, *config*, *mock_packages*)[¶](#spack.test.build_systems.test_affirmative_make_check) Tests that Spack correctly detects targets in a Makefile. `spack.test.build_systems.``test_affirmative_ninja_check`(*directory*, *config*, *mock_packages*)[¶](#spack.test.build_systems.test_affirmative_ninja_check) Tests that Spack correctly detects targets in a Ninja build script. `spack.test.build_systems.``test_cmake_std_args`(*config*, *mock_packages*)[¶](#spack.test.build_systems.test_cmake_std_args) `spack.test.build_systems.``test_negative_make_check`(*directory*, *config*, *mock_packages*)[¶](#spack.test.build_systems.test_negative_make_check) Tests that Spack correctly ignores false positives in a Makefile. 
`spack.test.build_systems.``test_negative_ninja_check`(*directory*, *config*, *mock_packages*)[¶](#spack.test.build_systems.test_negative_ninja_check) Tests that Spack correctly ignores false positives in a Ninja build script. ##### spack.test.cc module[¶](#module-spack.test.cc) This test checks that the Spack cc compiler wrapper is parsing arguments correctly. `spack.test.cc.``check_args`(*cc*, *args*, *expected*)[¶](#spack.test.cc.check_args) Check output arguments that cc produces when called with args. This assumes that cc will print debug command output with one element per line, so that we see whether arguments that should (or shouldn’t) contain spaces are parsed correctly. `spack.test.cc.``dep1`(*tmpdir_factory*)[¶](#spack.test.cc.dep1) `spack.test.cc.``dep2`(*tmpdir_factory*)[¶](#spack.test.cc.dep2) `spack.test.cc.``dep3`(*tmpdir_factory*)[¶](#spack.test.cc.dep3) `spack.test.cc.``dep4`(*tmpdir_factory*)[¶](#spack.test.cc.dep4) `spack.test.cc.``dump_mode`(*cc*, *args*)[¶](#spack.test.cc.dump_mode) Make cc dump the mode it detects, and return it. `spack.test.cc.``pkg_prefix` *= '/spack-test-prefix'*[¶](#spack.test.cc.pkg_prefix) The prefix of the package being mock installed `spack.test.cc.``real_cc` *= '/bin/mycc'*[¶](#spack.test.cc.real_cc) the “real” compiler the wrapper is expected to invoke `spack.test.cc.``test_as_mode`()[¶](#spack.test.cc.test_as_mode) `spack.test.cc.``test_cc_deps`(*dep1*, *dep2*, *dep3*, *dep4*)[¶](#spack.test.cc.test_cc_deps) Ensure -L and RPATHs are not added in cc mode. `spack.test.cc.``test_cc_flags`(*wrapper_flags*)[¶](#spack.test.cc.test_cc_flags) `spack.test.cc.``test_ccache_prepend_for_cc`()[¶](#spack.test.cc.test_ccache_prepend_for_cc) `spack.test.cc.``test_ccld_deps`(*dep1*, *dep2*, *dep3*, *dep4*)[¶](#spack.test.cc.test_ccld_deps) Ensure all flags are added in ccld mode. 
`spack.test.cc.``test_ccld_mode`()[¶](#spack.test.cc.test_ccld_mode) `spack.test.cc.``test_ccld_with_system_dirs`(*dep1*, *dep2*, *dep3*, *dep4*)[¶](#spack.test.cc.test_ccld_with_system_dirs) Ensure all flags are added in ccld mode. `spack.test.cc.``test_cpp_flags`(*wrapper_flags*)[¶](#spack.test.cc.test_cpp_flags) `spack.test.cc.``test_cpp_mode`()[¶](#spack.test.cc.test_cpp_mode) `spack.test.cc.``test_cxx_flags`(*wrapper_flags*)[¶](#spack.test.cc.test_cxx_flags) `spack.test.cc.``test_dep_include`(*dep4*)[¶](#spack.test.cc.test_dep_include) Ensure a single dependency include directory is added. `spack.test.cc.``test_dep_lib`(*dep2*)[¶](#spack.test.cc.test_dep_lib) Ensure a single dependency RPATH is added. `spack.test.cc.``test_dep_lib_no_lib`(*dep2*)[¶](#spack.test.cc.test_dep_lib_no_lib) Ensure a single dependency RPATH is added with no -L. `spack.test.cc.``test_dep_lib_no_rpath`(*dep2*)[¶](#spack.test.cc.test_dep_lib_no_rpath) Ensure a single dependency link flag is added with no dep RPATH. `spack.test.cc.``test_dep_rpath`()[¶](#spack.test.cc.test_dep_rpath) Ensure RPATHs for root package are added. `spack.test.cc.``test_fc_flags`(*wrapper_flags*)[¶](#spack.test.cc.test_fc_flags) `spack.test.cc.``test_ld_deps`(*dep1*, *dep2*, *dep3*, *dep4*)[¶](#spack.test.cc.test_ld_deps) Ensure no (extra) -I args or -Wl, are passed in ld mode. `spack.test.cc.``test_ld_deps_no_link`(*dep1*, *dep2*, *dep3*, *dep4*)[¶](#spack.test.cc.test_ld_deps_no_link) Ensure SPACK_RPATH_DEPS controls -rpath for ld. `spack.test.cc.``test_ld_deps_no_rpath`(*dep1*, *dep2*, *dep3*, *dep4*)[¶](#spack.test.cc.test_ld_deps_no_rpath) Ensure SPACK_LINK_DEPS controls -L for ld. `spack.test.cc.``test_ld_deps_partial`(*dep1*)[¶](#spack.test.cc.test_ld_deps_partial) Make sure ld -r (partial link) is handled correctly on OS’s where it doesn’t accept rpaths. 
`spack.test.cc.``test_ld_flags`(*wrapper_flags*)[¶](#spack.test.cc.test_ld_flags) `spack.test.cc.``test_ld_mode`()[¶](#spack.test.cc.test_ld_mode) `spack.test.cc.``test_no_ccache_prepend_for_fc`()[¶](#spack.test.cc.test_no_ccache_prepend_for_fc) `spack.test.cc.``test_vcheck_mode`()[¶](#spack.test.cc.test_vcheck_mode) `spack.test.cc.``wrapper_environment`()[¶](#spack.test.cc.wrapper_environment) `spack.test.cc.``wrapper_flags`()[¶](#spack.test.cc.wrapper_flags) ##### spack.test.compilers module[¶](#module-spack.test.compilers) *class* `spack.test.compilers.``MockCompiler`[¶](#spack.test.compilers.MockCompiler) Bases: [`spack.compiler.Compiler`](index.html#spack.compiler.Compiler) `name`[¶](#spack.test.compilers.MockCompiler.name) `version`[¶](#spack.test.compilers.MockCompiler.version) `spack.test.compilers.``flag_value`(*flag*, *spec*)[¶](#spack.test.compilers.flag_value) `spack.test.compilers.``supported_flag_test`(*flag*, *flag_value_ref*, *spec=None*)[¶](#spack.test.compilers.supported_flag_test) `spack.test.compilers.``test_all_compilers`(*config*)[¶](#spack.test.compilers.test_all_compilers) `spack.test.compilers.``test_cce_flags`()[¶](#spack.test.compilers.test_cce_flags) `spack.test.compilers.``test_clang_flags`()[¶](#spack.test.compilers.test_clang_flags) `spack.test.compilers.``test_compiler_flags_from_config_are_grouped`()[¶](#spack.test.compilers.test_compiler_flags_from_config_are_grouped) `spack.test.compilers.``test_default_flags`()[¶](#spack.test.compilers.test_default_flags) `spack.test.compilers.``test_gcc_flags`()[¶](#spack.test.compilers.test_gcc_flags) `spack.test.compilers.``test_get_compiler_duplicates`(*config*)[¶](#spack.test.compilers.test_get_compiler_duplicates) `spack.test.compilers.``test_intel_flags`()[¶](#spack.test.compilers.test_intel_flags) `spack.test.compilers.``test_nag_flags`()[¶](#spack.test.compilers.test_nag_flags) `spack.test.compilers.``test_pgi_flags`()[¶](#spack.test.compilers.test_pgi_flags) 
`spack.test.compilers.``test_version_detection_is_empty`()[¶](#spack.test.compilers.test_version_detection_is_empty) `spack.test.compilers.``test_version_detection_is_successful`()[¶](#spack.test.compilers.test_version_detection_is_successful) `spack.test.compilers.``test_xl_flags`()[¶](#spack.test.compilers.test_xl_flags) `spack.test.compilers.``test_xl_r_flags`()[¶](#spack.test.compilers.test_xl_r_flags) `spack.test.compilers.``unsupported_flag_test`(*flag*, *spec=None*)[¶](#spack.test.compilers.unsupported_flag_test) ##### spack.test.concretize module[¶](#module-spack.test.concretize) *class* `spack.test.concretize.``TestConcretize`[¶](#spack.test.concretize.TestConcretize) Bases: `object` `concretize_multi_provider`()[¶](#spack.test.concretize.TestConcretize.concretize_multi_provider) `pytestmark` *= [Mark(name='usefixtures', args=('config', 'mock_packages'), kwargs={})]*[¶](#spack.test.concretize.TestConcretize.pytestmark) `test_architecture_deep_inheritance`()[¶](#spack.test.concretize.TestConcretize.test_architecture_deep_inheritance) Make sure that indirect dependencies receive architecture information from the root even when partial architecture information is provided by an intermediate dependency. `test_architecture_inheritance`()[¶](#spack.test.concretize.TestConcretize.test_architecture_inheritance) test_architecture_inheritance is likely to fail with an UnavailableCompilerVersionError if the architecture is concretized incorrectly. 
`test_compiler_child`()[¶](#spack.test.concretize.TestConcretize.test_compiler_child) `test_compiler_flags_from_user_are_grouped`()[¶](#spack.test.concretize.TestConcretize.test_compiler_flags_from_user_are_grouped) `test_compiler_inheritance`()[¶](#spack.test.concretize.TestConcretize.test_compiler_inheritance) `test_concretize`(*spec*)[¶](#spack.test.concretize.TestConcretize.test_concretize) `test_concretize_disable_compiler_existence_check`()[¶](#spack.test.concretize.TestConcretize.test_concretize_disable_compiler_existence_check) `test_concretize_mention_build_dep`()[¶](#spack.test.concretize.TestConcretize.test_concretize_mention_build_dep) `test_concretize_preferred_version`()[¶](#spack.test.concretize.TestConcretize.test_concretize_preferred_version) `test_concretize_two_virtuals`()[¶](#spack.test.concretize.TestConcretize.test_concretize_two_virtuals) Test a package with multiple virtual dependencies. `test_concretize_two_virtuals_with_dual_provider`()[¶](#spack.test.concretize.TestConcretize.test_concretize_two_virtuals_with_dual_provider) Test a package with multiple virtual dependencies and force a provider that provides both. `test_concretize_two_virtuals_with_dual_provider_and_a_conflict`()[¶](#spack.test.concretize.TestConcretize.test_concretize_two_virtuals_with_dual_provider_and_a_conflict) Test a package with multiple virtual dependencies and force a provider that provides both, and another conflicting package that provides one. `test_concretize_two_virtuals_with_one_bound`(*mutable_mock_packages*)[¶](#spack.test.concretize.TestConcretize.test_concretize_two_virtuals_with_one_bound) Test a package with multiple virtual dependencies and one preset. `test_concretize_two_virtuals_with_two_bound`()[¶](#spack.test.concretize.TestConcretize.test_concretize_two_virtuals_with_two_bound) Test a package with multiple virtual deps and two of them preset. 
`test_concretize_with_provides_when`()[¶](#spack.test.concretize.TestConcretize.test_concretize_with_provides_when) Make sure insufficient versions of MPI are not in providers list when we ask for some advanced version. `test_concretize_with_restricted_virtual`()[¶](#spack.test.concretize.TestConcretize.test_concretize_with_restricted_virtual) `test_conflicts_in_spec`(*conflict_spec*)[¶](#spack.test.concretize.TestConcretize.test_conflicts_in_spec) `test_different_compilers_get_different_flags`()[¶](#spack.test.concretize.TestConcretize.test_different_compilers_get_different_flags) `test_external_and_virtual`()[¶](#spack.test.concretize.TestConcretize.test_external_and_virtual) `test_external_package`()[¶](#spack.test.concretize.TestConcretize.test_external_package) `test_external_package_module`()[¶](#spack.test.concretize.TestConcretize.test_external_package_module) `test_find_spec_children`()[¶](#spack.test.concretize.TestConcretize.test_find_spec_children) `test_find_spec_none`()[¶](#spack.test.concretize.TestConcretize.test_find_spec_none) `test_find_spec_parents`()[¶](#spack.test.concretize.TestConcretize.test_find_spec_parents) Tests the spec finding logic used by concretization. 
`test_find_spec_self`()[¶](#spack.test.concretize.TestConcretize.test_find_spec_self) `test_find_spec_sibling`()[¶](#spack.test.concretize.TestConcretize.test_find_spec_sibling) `test_my_dep_depends_on_provider_of_my_virtual_dep`()[¶](#spack.test.concretize.TestConcretize.test_my_dep_depends_on_provider_of_my_virtual_dep) `test_no_compilers_for_arch`()[¶](#spack.test.concretize.TestConcretize.test_no_compilers_for_arch) `test_no_matching_compiler_specs`()[¶](#spack.test.concretize.TestConcretize.test_no_matching_compiler_specs) `test_nobuild_package`()[¶](#spack.test.concretize.TestConcretize.test_nobuild_package) `test_provides_handles_multiple_providers_of_same_vesrion`()[¶](#spack.test.concretize.TestConcretize.test_provides_handles_multiple_providers_of_same_vesrion) `test_regression_issue_4492`()[¶](#spack.test.concretize.TestConcretize.test_regression_issue_4492) `test_regression_issue_7239`()[¶](#spack.test.concretize.TestConcretize.test_regression_issue_7239) `test_regression_issue_7705`()[¶](#spack.test.concretize.TestConcretize.test_regression_issue_7705) `test_regression_issue_7941`()[¶](#spack.test.concretize.TestConcretize.test_regression_issue_7941) `test_virtual_is_fully_expanded_for_callpath`()[¶](#spack.test.concretize.TestConcretize.test_virtual_is_fully_expanded_for_callpath) `test_virtual_is_fully_expanded_for_mpileaks`()[¶](#spack.test.concretize.TestConcretize.test_virtual_is_fully_expanded_for_mpileaks) `spack.test.concretize.``check_concretize`(*abstract_spec*)[¶](#spack.test.concretize.check_concretize) `spack.test.concretize.``check_spec`(*abstract*, *concrete*)[¶](#spack.test.concretize.check_spec) `spack.test.concretize.``spec`(*request*)[¶](#spack.test.concretize.spec) Spec to be concretized ##### spack.test.concretize_preferences module[¶](#module-spack.test.concretize_preferences) *class* `spack.test.concretize_preferences.``TestConcretizePreferences`[¶](#spack.test.concretize_preferences.TestConcretizePreferences) Bases: `object` 
`pytestmark` *= [Mark(name='usefixtures', args=('concretize_scope', 'mock_packages'), kwargs={})]*[¶](#spack.test.concretize_preferences.TestConcretizePreferences.pytestmark) `test_all_is_not_a_virtual`()[¶](#spack.test.concretize_preferences.TestConcretizePreferences.test_all_is_not_a_virtual) Verify that all is allowed in packages.yaml. `test_config_permissions_differ_read_write`(*configure_permissions*)[¶](#spack.test.concretize_preferences.TestConcretizePreferences.test_config_permissions_differ_read_write) `test_config_permissions_from_all`(*configure_permissions*)[¶](#spack.test.concretize_preferences.TestConcretizePreferences.test_config_permissions_from_all) `test_config_permissions_from_package`(*configure_permissions*)[¶](#spack.test.concretize_preferences.TestConcretizePreferences.test_config_permissions_from_package) `test_config_perms_fail_write_gt_read`(*configure_permissions*)[¶](#spack.test.concretize_preferences.TestConcretizePreferences.test_config_perms_fail_write_gt_read) `test_develop`()[¶](#spack.test.concretize_preferences.TestConcretizePreferences.test_develop) Test concretization with develop version `test_external_mpi`()[¶](#spack.test.concretize_preferences.TestConcretizePreferences.test_external_mpi) `test_no_virtuals_in_packages_yaml`()[¶](#spack.test.concretize_preferences.TestConcretizePreferences.test_no_virtuals_in_packages_yaml) Verify that virtuals are not allowed in packages.yaml. 
`test_preferred_compilers`(*mutable_mock_packages*)[¶](#spack.test.concretize_preferences.TestConcretizePreferences.test_preferred_compilers) Test preferred compilers are applied correctly `test_preferred_providers`()[¶](#spack.test.concretize_preferences.TestConcretizePreferences.test_preferred_providers) Test preferred providers of virtual packages are applied correctly `test_preferred_variants`()[¶](#spack.test.concretize_preferences.TestConcretizePreferences.test_preferred_variants) Test preferred variants are applied correctly `test_preferred_versions`()[¶](#spack.test.concretize_preferences.TestConcretizePreferences.test_preferred_versions) Test preferred package versions are applied correctly `test_preferred_versions_mixed_version_types`()[¶](#spack.test.concretize_preferences.TestConcretizePreferences.test_preferred_versions_mixed_version_types) `spack.test.concretize_preferences.``assert_variant_values`(*spec*, ***variants*)[¶](#spack.test.concretize_preferences.assert_variant_values) `spack.test.concretize_preferences.``concretize`(*abstract_spec*)[¶](#spack.test.concretize_preferences.concretize) `spack.test.concretize_preferences.``concretize_scope`(*config*, *tmpdir*)[¶](#spack.test.concretize_preferences.concretize_scope) Adds a scope for concretization preferences `spack.test.concretize_preferences.``configure_permissions`()[¶](#spack.test.concretize_preferences.configure_permissions) `spack.test.concretize_preferences.``update_packages`(*pkgname*, *section*, *value*)[¶](#spack.test.concretize_preferences.update_packages) Update config and reread package list ##### spack.test.config module[¶](#module-spack.test.config) `spack.test.config.``check_compiler_config`(*comps*, **compiler_names*)[¶](#spack.test.config.check_compiler_config) Check that named compilers in comps match Spack’s config. 
`spack.test.config.``check_schema`(*name*, *file_contents*)[¶](#spack.test.config.check_schema) Check a Spack YAML schema against some data `spack.test.config.``compiler_specs`()[¶](#spack.test.config.compiler_specs) Returns a couple of compiler specs needed for the tests `spack.test.config.``get_config_error`(*filename*, *schema*, *yaml_string*)[¶](#spack.test.config.get_config_error) Parse a YAML string and return the resulting ConfigFormatError. Fail if there is no ConfigFormatError `spack.test.config.``test_add_command_line_scopes`(*tmpdir*, *mutable_config*)[¶](#spack.test.config.test_add_command_line_scopes) `spack.test.config.``test_bad_command_line_scopes`(*tmpdir*, *mock_config*)[¶](#spack.test.config.test_bad_command_line_scopes) `spack.test.config.``test_bad_compilers_yaml`(*tmpdir*)[¶](#spack.test.config.test_bad_compilers_yaml) `spack.test.config.``test_bad_config_section`(*mock_config*)[¶](#spack.test.config.test_bad_config_section) Test that getting or setting a bad section gives an error. `spack.test.config.``test_bad_config_yaml`(*tmpdir*)[¶](#spack.test.config.test_bad_config_yaml) `spack.test.config.``test_bad_env_yaml`(*tmpdir*)[¶](#spack.test.config.test_bad_env_yaml) `spack.test.config.``test_bad_mirrors_yaml`(*tmpdir*)[¶](#spack.test.config.test_bad_mirrors_yaml) `spack.test.config.``test_bad_repos_yaml`(*tmpdir*)[¶](#spack.test.config.test_bad_repos_yaml) `spack.test.config.``test_config_format_error`(*mutable_config*)[¶](#spack.test.config.test_config_format_error) This is raised when we try to write a bad configuration. 
`spack.test.config.``test_config_parse_dict_in_list`(*tmpdir*)[¶](#spack.test.config.test_config_parse_dict_in_list) `spack.test.config.``test_config_parse_list_in_dict`(*tmpdir*)[¶](#spack.test.config.test_config_parse_list_in_dict) `spack.test.config.``test_config_parse_str_not_bool`(*tmpdir*)[¶](#spack.test.config.test_config_parse_str_not_bool) `spack.test.config.``test_good_env_yaml`(*tmpdir*)[¶](#spack.test.config.test_good_env_yaml) `spack.test.config.``test_immutable_scope`(*tmpdir*)[¶](#spack.test.config.test_immutable_scope) `spack.test.config.``test_internal_config_filename`(*mock_config*, *write_config_file*)[¶](#spack.test.config.test_internal_config_filename) `spack.test.config.``test_internal_config_from_data`()[¶](#spack.test.config.test_internal_config_from_data) `spack.test.config.``test_internal_config_update`(*mock_config*, *write_config_file*)[¶](#spack.test.config.test_internal_config_update) `spack.test.config.``test_keys_are_ordered`()[¶](#spack.test.config.test_keys_are_ordered) Test that keys in Spack YAML files retain their order from the file. `spack.test.config.``test_mark_internal`()[¶](#spack.test.config.test_mark_internal) `spack.test.config.``test_merge_with_defaults`(*mock_config*, *write_config_file*)[¶](#spack.test.config.test_merge_with_defaults) This ensures that specified preferences merge with defaults as expected. Originally all defaults were initialized with the exact same object, which led to aliasing problems. Therefore the test configs used here leave ‘version’ blank for multiple packages in ‘packages_merge_low’. 
`spack.test.config.``test_read_config`(*mock_config*, *write_config_file*)[¶](#spack.test.config.test_read_config) `spack.test.config.``test_read_config_merge_list`(*mock_config*, *write_config_file*)[¶](#spack.test.config.test_read_config_merge_list) `spack.test.config.``test_read_config_override_all`(*mock_config*, *write_config_file*)[¶](#spack.test.config.test_read_config_override_all) `spack.test.config.``test_read_config_override_key`(*mock_config*, *write_config_file*)[¶](#spack.test.config.test_read_config_override_key) `spack.test.config.``test_read_config_override_list`(*mock_config*, *write_config_file*)[¶](#spack.test.config.test_read_config_override_list) `spack.test.config.``test_single_file_scope`(*tmpdir*, *config*)[¶](#spack.test.config.test_single_file_scope) `spack.test.config.``test_substitute_config_variables`(*mock_config*)[¶](#spack.test.config.test_substitute_config_variables) `spack.test.config.``test_substitute_tempdir`(*mock_config*)[¶](#spack.test.config.test_substitute_tempdir) `spack.test.config.``test_substitute_user`(*mock_config*)[¶](#spack.test.config.test_substitute_user) `spack.test.config.``test_write_key_in_memory`(*mock_config*, *compiler_specs*)[¶](#spack.test.config.test_write_key_in_memory) `spack.test.config.``test_write_key_to_disk`(*mock_config*, *compiler_specs*)[¶](#spack.test.config.test_write_key_to_disk) `spack.test.config.``test_write_list_in_memory`(*mock_config*)[¶](#spack.test.config.test_write_list_in_memory) `spack.test.config.``test_write_to_same_priority_file`(*mock_config*, *compiler_specs*)[¶](#spack.test.config.test_write_to_same_priority_file) `spack.test.config.``write_config_file`(*tmpdir*)[¶](#spack.test.config.write_config_file) Returns a function that writes a config file. 
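The `write_config_file` fixture above follows the factory-fixture pattern: rather than yielding a value, it returns a function that the test calls to write config files on demand. A minimal sketch of the pattern, with illustrative names and `tempfile` standing in for pytest's `tmpdir`:

```python
import os
import tempfile

def make_config_writer(directory):
    """Return a function that writes a named config file into *directory*."""
    def _write(name, contents):
        path = os.path.join(directory, name + ".yaml")
        with open(path, "w") as f:
            f.write(contents)
        return path
    return _write

# A test asks the factory for as many config files as it needs.
tmpdir = tempfile.mkdtemp()
write = make_config_writer(tmpdir)
path = write("compilers", "compilers: []\n")
```

The factory shape lets one fixture serve tests that need one file and tests that need several, without parametrizing the fixture itself.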
##### spack.test.conftest module[¶](#module-spack.test.conftest)
*class* `spack.test.conftest.``MockPackage`(*name*, *dependencies*, *dependency_types*, *conditions=None*, *versions=None*)[¶](#spack.test.conftest.MockPackage) Bases: `object`
*class* `spack.test.conftest.``MockPackageMultiRepo`(*packages*)[¶](#spack.test.conftest.MockPackageMultiRepo) Bases: `object`
`exists`(*name*)[¶](#spack.test.conftest.MockPackageMultiRepo.exists)
`get`(*spec*)[¶](#spack.test.conftest.MockPackageMultiRepo.get)
`get_pkg_class`(*name*)[¶](#spack.test.conftest.MockPackageMultiRepo.get_pkg_class)
`is_virtual`(*name*)[¶](#spack.test.conftest.MockPackageMultiRepo.is_virtual)
`repo_for_pkg`(*name*)[¶](#spack.test.conftest.MockPackageMultiRepo.repo_for_pkg)
`spack.test.conftest.``check_for_leftover_stage_files`(*request*, *mock_stage*, *_ignore_stage_files*)[¶](#spack.test.conftest.check_for_leftover_stage_files) Ensure that each test leaves a clean stage when done. This can be disabled for tests that are expected to dirty the stage by adding:

```
@pytest.mark.disable_clean_stage_check
```

to tests that need it.
`spack.test.conftest.``clean_user_environment`()[¶](#spack.test.conftest.clean_user_environment)
`spack.test.conftest.``config`(*configuration_dir*)[¶](#spack.test.conftest.config) Hooks the mock configuration files into spack.config
`spack.test.conftest.``configuration_dir`(*tmpdir_factory*, *linux_os*)[¶](#spack.test.conftest.configuration_dir) Copies mock configuration files into a temporary directory. Returns the directory path.
`spack.test.conftest.``conflict_spec`(*request*)[¶](#spack.test.conftest.conflict_spec) Specs which violate constraints specified with the “conflicts” directive in the “conflict” package.
`spack.test.conftest.``database`(*tmpdir_factory*, *mock_packages*, *config*)[¶](#spack.test.conftest.database) Creates a mock database with some packages installed. Note that the ref count for dyninst here will be 3, as it’s recycled across each install.
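The leftover-stage check performed by `check_for_leftover_stage_files` can be sketched as a before/after snapshot of the stage directory. The names and the temporary directory below are illustrative, not Spack's actual implementation:

```python
import os
import tempfile

def leftover_files(stage_dir, before):
    """Return files present in stage_dir that were not in the snapshot."""
    return sorted(set(os.listdir(stage_dir)) - before)

stage = tempfile.mkdtemp()           # stands in for the mock stage directory
snapshot = set(os.listdir(stage))    # taken before the test body runs
# A "dirty" test would leave something behind in the stage:
open(os.path.join(stage, "leftover.txt"), "w").close()
leftovers = leftover_files(stage, snapshot)
```

A fixture built on this idea would fail the test whenever `leftovers` is non-empty, unless the `disable_clean_stage_check` marker is present.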
`spack.test.conftest.``install_mockery`(*tmpdir*, *config*, *mock_packages*)[¶](#spack.test.conftest.install_mockery) Hooks a fake install directory, DB, and stage directory into Spack. `spack.test.conftest.``invalid_spec`(*request*)[¶](#spack.test.conftest.invalid_spec) Specs that do not parse cleanly due to invalid formatting. `spack.test.conftest.``linux_os`()[¶](#spack.test.conftest.linux_os) Returns a named tuple with attributes ‘name’ and ‘version’ representing the OS. `spack.test.conftest.``mock_archive`(*tmpdir_factory*)[¶](#spack.test.conftest.mock_archive) Creates a very simple archive directory with a configure script and a makefile that installs to a prefix. Tars it up into an archive. `spack.test.conftest.``mock_config`(*tmpdir*)[¶](#spack.test.conftest.mock_config) Mocks two configuration scopes: ‘low’ and ‘high’. `spack.test.conftest.``mock_fetch`(*mock_archive*)[¶](#spack.test.conftest.mock_fetch) Fake the URL for a package so it downloads from a file. `spack.test.conftest.``mock_fetch_cache`(*monkeypatch*)[¶](#spack.test.conftest.mock_fetch_cache) Substitutes spack.paths.fetch_cache with a mock object that does nothing and raises on fetch. `spack.test.conftest.``mock_git_repository`(*tmpdir_factory*)[¶](#spack.test.conftest.mock_git_repository) Creates a very simple git repository with two branches and two commits. `spack.test.conftest.``mock_hg_repository`(*tmpdir_factory*)[¶](#spack.test.conftest.mock_hg_repository) Creates a very simple hg repository with two commits. `spack.test.conftest.``mock_packages`(*repo_path*)[¶](#spack.test.conftest.mock_packages) Use the ‘builtin.mock’ repository instead of ‘builtin’ `spack.test.conftest.``mock_stage`(*tmpdir_factory*)[¶](#spack.test.conftest.mock_stage) Mocks up a fake stage directory for use by tests. `spack.test.conftest.``mock_svn_repository`(*tmpdir_factory*)[¶](#spack.test.conftest.mock_svn_repository) Creates a very simple svn repository with two commits. 
`spack.test.conftest.``module_configuration`(*monkeypatch*, *request*)[¶](#spack.test.conftest.module_configuration) Reads the module configuration file from the mock ones prepared for tests and monkeypatches the right classes to hook it in.
`spack.test.conftest.``mutable_config`(*tmpdir_factory*, *configuration_dir*, *config*)[¶](#spack.test.conftest.mutable_config) Like config, but tests can modify the configuration.
`spack.test.conftest.``mutable_database`(*database*)[¶](#spack.test.conftest.mutable_database) For tests that need to modify the database instance.
`spack.test.conftest.``mutable_mock_env_path`(*tmpdir_factory*)[¶](#spack.test.conftest.mutable_mock_env_path) Fixture for mocking the internal spack environments directory.
`spack.test.conftest.``mutable_mock_packages`(*mock_packages*, *repo_path*)[¶](#spack.test.conftest.mutable_mock_packages) Function-scoped mock packages, for tests that need to modify them.
`spack.test.conftest.``no_chdir`()[¶](#spack.test.conftest.no_chdir) Ensure that no test changes Spack’s working directory. This prevents Spack tests (and therefore Spack commands) from changing the working directory and causing other tests to fail mysteriously. Tests should use `working_dir` or `py.path`’s `.as_cwd()` instead of `os.chdir` to avoid failing this check. We assert that the working directory hasn’t changed, unless the original wd somehow ceased to exist.
`spack.test.conftest.``pytest_addoption`(*parser*)[¶](#spack.test.conftest.pytest_addoption)
`spack.test.conftest.``pytest_collection_modifyitems`(*config*, *items*)[¶](#spack.test.conftest.pytest_collection_modifyitems)
`spack.test.conftest.``remove_whatever_it_is`(*path*)[¶](#spack.test.conftest.remove_whatever_it_is) Type-agnostic remove.
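The guard that `no_chdir` provides boils down to recording the working directory before the test body runs and asserting it is unchanged afterwards, skipping the check if the original directory ceased to exist. A minimal sketch as a plain context manager (Spack's real fixture uses pytest machinery):

```python
import contextlib
import os

@contextlib.contextmanager
def assert_no_chdir():
    """Fail if the body changes the working directory and leaves it changed."""
    before = os.getcwd()
    try:
        yield
    finally:
        # Skip the check if the original directory no longer exists.
        if os.path.isdir(before):
            assert os.getcwd() == before, "test changed the working directory"

# A well-behaved test body leaves the cwd alone:
with assert_no_chdir():
    pass
```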
`spack.test.conftest.``repo_path`()[¶](#spack.test.conftest.repo_path) Session scoped RepoPath object pointing to the mock repository ##### spack.test.database module[¶](#module-spack.test.database) These tests check the database is functioning properly, both in memory and in its file `spack.test.database.``test_005_db_exists`(*database*)[¶](#spack.test.database.test_005_db_exists) Make sure db cache file exists after creating. `spack.test.database.``test_010_all_install_sanity`(*database*)[¶](#spack.test.database.test_010_all_install_sanity) Ensure that the install layout reflects what we think it does. `spack.test.database.``test_015_write_and_read`(*database*)[¶](#spack.test.database.test_015_write_and_read) `spack.test.database.``test_020_db_sanity`(*database*)[¶](#spack.test.database.test_020_db_sanity) Make sure query() returns what’s actually in the db. `spack.test.database.``test_025_reindex`(*database*)[¶](#spack.test.database.test_025_reindex) Make sure reindex works and ref counts are valid. `spack.test.database.``test_030_db_sanity_from_another_process`(*mutable_database*)[¶](#spack.test.database.test_030_db_sanity_from_another_process) `spack.test.database.``test_040_ref_counts`(*database*)[¶](#spack.test.database.test_040_ref_counts) Ensure that we got ref counts right when we read the DB. `spack.test.database.``test_050_basic_query`(*database*)[¶](#spack.test.database.test_050_basic_query) Ensure querying database is consistent with what is installed. 
`spack.test.database.``test_060_remove_and_add_root_package`(*database*)[¶](#spack.test.database.test_060_remove_and_add_root_package) `spack.test.database.``test_070_remove_and_add_dependency_package`(*database*)[¶](#spack.test.database.test_070_remove_and_add_dependency_package) `spack.test.database.``test_080_root_ref_counts`(*database*)[¶](#spack.test.database.test_080_root_ref_counts) `spack.test.database.``test_090_non_root_ref_counts`(*database*)[¶](#spack.test.database.test_090_non_root_ref_counts) `spack.test.database.``test_100_no_write_with_exception_on_remove`(*database*)[¶](#spack.test.database.test_100_no_write_with_exception_on_remove) `spack.test.database.``test_110_no_write_with_exception_on_install`(*database*)[¶](#spack.test.database.test_110_no_write_with_exception_on_install) `spack.test.database.``test_115_reindex_with_packages_not_in_repo`(*mutable_database*)[¶](#spack.test.database.test_115_reindex_with_packages_not_in_repo) `spack.test.database.``test_default_queries`(*database*)[¶](#spack.test.database.test_default_queries) `spack.test.database.``test_external_entries_in_db`(*database*)[¶](#spack.test.database.test_external_entries_in_db) `spack.test.database.``test_regression_issue_8036`(*mutable_database*, *usr_folder_exists*)[¶](#spack.test.database.test_regression_issue_8036) `spack.test.database.``usr_folder_exists`(*monkeypatch*)[¶](#spack.test.database.usr_folder_exists) The `/usr` folder is assumed to be existing in some tests. This fixture makes it such that its existence is mocked, so we have no requirements on the system running tests. ##### spack.test.directory_layout module[¶](#module-spack.test.directory_layout) This test verifies that the Spack directory layout works properly. `spack.test.directory_layout.``layout_and_dir`(*tmpdir*)[¶](#spack.test.directory_layout.layout_and_dir) Returns a directory layout and the corresponding directory. 
`spack.test.directory_layout.``test_find`(*layout_and_dir*, *config*, *mock_packages*)[¶](#spack.test.directory_layout.test_find) Test that finding specs within an install layout works. `spack.test.directory_layout.``test_handle_unknown_package`(*layout_and_dir*, *config*, *mock_packages*)[¶](#spack.test.directory_layout.test_handle_unknown_package) This test ensures that spack can at least do *some* operations with packages that are installed but that it does not know about. This is actually not such an uncommon scenario with spack; it can happen when you switch from a git branch where you’re working on a new package. This test ensures that the directory layout stores enough information about installed packages’ specs to uninstall or query them again if the package goes away. `spack.test.directory_layout.``test_read_and_write_spec`(*layout_and_dir*, *config*, *mock_packages*)[¶](#spack.test.directory_layout.test_read_and_write_spec) This goes through each package in spack and creates a directory for it. It then ensures that the spec for the directory’s installed package can be read back in consistently, and finally that the directory can be removed by the directory layout. `spack.test.directory_layout.``test_yaml_directory_layout_parameters`(*tmpdir*, *config*)[¶](#spack.test.directory_layout.test_yaml_directory_layout_parameters) This tests the various parameters that can be used to configure the install location ##### spack.test.environment_modifications module[¶](#module-spack.test.environment_modifications) `spack.test.environment_modifications.``env`(*prepare_environment_for_tests*)[¶](#spack.test.environment_modifications.env) Returns an empty EnvironmentModifications object. 
`spack.test.environment_modifications.``files_to_be_sourced`()[¶](#spack.test.environment_modifications.files_to_be_sourced) Returns a list of files to be sourced `spack.test.environment_modifications.``miscellaneous_paths`()[¶](#spack.test.environment_modifications.miscellaneous_paths) Returns a list of paths, including system ones. `spack.test.environment_modifications.``prepare_environment_for_tests`()[¶](#spack.test.environment_modifications.prepare_environment_for_tests) Sets a few dummy variables in the current environment, that will be useful for the tests below. `spack.test.environment_modifications.``test_append_flags`(*env*)[¶](#spack.test.environment_modifications.test_append_flags) Tests appending to a value in the environment. `spack.test.environment_modifications.``test_exclude_paths_from_inspection`()[¶](#spack.test.environment_modifications.test_exclude_paths_from_inspection) `spack.test.environment_modifications.``test_extend`(*env*)[¶](#spack.test.environment_modifications.test_extend) Tests that we can construct a list of environment modifications starting from another list. `spack.test.environment_modifications.``test_extra_arguments`(*env*)[¶](#spack.test.environment_modifications.test_extra_arguments) Tests that we can attach extra arguments to any command. `spack.test.environment_modifications.``test_filter_system_paths`(*miscellaneous_paths*)[¶](#spack.test.environment_modifications.test_filter_system_paths) Tests that the filtering of system paths works as expected. `spack.test.environment_modifications.``test_inspect_path`(*tmpdir*)[¶](#spack.test.environment_modifications.test_inspect_path) `spack.test.environment_modifications.``test_path_manipulation`(*env*)[¶](#spack.test.environment_modifications.test_path_manipulation) Tests manipulating list of paths in the environment. 
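The path filtering exercised by `test_filter_system_paths` can be illustrated with a short sketch; the system-prefix list below is an assumption for demonstration, not Spack's actual list:

```python
# Illustrative system prefixes to strip from search paths (an assumption).
SYSTEM_DIRS = ("/usr/lib", "/usr/local/lib", "/lib", "/lib64")

def filter_system_paths(paths):
    """Drop paths that live under a standard system prefix."""
    return [p for p in paths if not p.startswith(SYSTEM_DIRS)]

kept = filter_system_paths(["/opt/spack/lib", "/usr/lib", "/home/user/lib"])
```

Filtering these prefixes keeps package-specific directories in environment variables like `LD_LIBRARY_PATH` while leaving system defaults to the linker.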
`spack.test.environment_modifications.``test_preserve_environment`(*prepare_environment_for_tests*)[¶](#spack.test.environment_modifications.test_preserve_environment) `spack.test.environment_modifications.``test_set`(*env*)[¶](#spack.test.environment_modifications.test_set) Tests setting values in the environment. `spack.test.environment_modifications.``test_set_path`(*env*)[¶](#spack.test.environment_modifications.test_set_path) Tests setting paths in an environment variable. `spack.test.environment_modifications.``test_source_files`(*files_to_be_sourced*)[¶](#spack.test.environment_modifications.test_source_files) Tests the construction of a list of environment modifications that are the result of sourcing a file. `spack.test.environment_modifications.``test_unset`(*env*)[¶](#spack.test.environment_modifications.test_unset) Tests unsetting values in the environment. ##### spack.test.flag_handlers module[¶](#module-spack.test.flag_handlers) *class* `spack.test.flag_handlers.``TestFlagHandlers`[¶](#spack.test.flag_handlers.TestFlagHandlers) Bases: `object` `pytestmark` *= [Mark(name='usefixtures', args=('config', 'mock_packages'), kwargs={})]*[¶](#spack.test.flag_handlers.TestFlagHandlers.pytestmark) `test_add_build_system_flags_autotools`(*temp_env*)[¶](#spack.test.flag_handlers.TestFlagHandlers.test_add_build_system_flags_autotools) `test_add_build_system_flags_cmake`(*temp_env*)[¶](#spack.test.flag_handlers.TestFlagHandlers.test_add_build_system_flags_cmake) `test_build_system_flags_autotools`(*temp_env*)[¶](#spack.test.flag_handlers.TestFlagHandlers.test_build_system_flags_autotools) `test_build_system_flags_cmake`(*temp_env*)[¶](#spack.test.flag_handlers.TestFlagHandlers.test_build_system_flags_cmake) `test_build_system_flags_not_implemented`(*temp_env*)[¶](#spack.test.flag_handlers.TestFlagHandlers.test_build_system_flags_not_implemented) `test_env_flags`(*temp_env*)[¶](#spack.test.flag_handlers.TestFlagHandlers.test_env_flags) 
`test_inject_flags`(*temp_env*)[¶](#spack.test.flag_handlers.TestFlagHandlers.test_inject_flags) `test_ld_flags_cmake`(*temp_env*)[¶](#spack.test.flag_handlers.TestFlagHandlers.test_ld_flags_cmake) `test_ld_libs_cmake`(*temp_env*)[¶](#spack.test.flag_handlers.TestFlagHandlers.test_ld_libs_cmake) `test_no_build_system_flags`(*temp_env*)[¶](#spack.test.flag_handlers.TestFlagHandlers.test_no_build_system_flags) `test_unbound_method`(*temp_env*)[¶](#spack.test.flag_handlers.TestFlagHandlers.test_unbound_method) `spack.test.flag_handlers.``add_o3_to_build_system_cflags`(*pkg*, *name*, *flags*)[¶](#spack.test.flag_handlers.add_o3_to_build_system_cflags) `spack.test.flag_handlers.``temp_env`()[¶](#spack.test.flag_handlers.temp_env) ##### spack.test.git_fetch module[¶](#module-spack.test.git_fetch) `spack.test.git_fetch.``git_version`(*request*)[¶](#spack.test.git_fetch.git_version) Tests GitFetchStrategy behavior for different git versions. GitFetchStrategy tries to optimize using features of newer git versions, but needs to work with older git versions. To ensure code paths for old versions still work, we fake it out here and make it use the backward-compatibility code paths with newer git versions. `spack.test.git_fetch.``test_fetch`(*type_of_test*, *secure*, *mock_git_repository*, *config*, *mutable_mock_packages*, *git_version*)[¶](#spack.test.git_fetch.test_fetch) Tries to: 1. Fetch the repo using a fetch strategy constructed with supplied args (they depend on type_of_test). 2. Check if the test_file is in the checked out repository. 3. Assert that the repository is at the revision supplied. 4. Add and remove some files, then reset the repo, and ensure it’s all there again. ##### spack.test.graph module[¶](#module-spack.test.graph) `spack.test.graph.``test_ascii_graph_mpileaks`(*mock_packages*)[¶](#spack.test.graph.test_ascii_graph_mpileaks) Test dynamically graphing the mpileaks package. 
`spack.test.graph.``test_dynamic_dot_graph_mpileaks`(*mock_packages*)[¶](#spack.test.graph.test_dynamic_dot_graph_mpileaks) Test dynamically graphing the mpileaks package.
`spack.test.graph.``test_static_graph_mpileaks`(*mock_packages*)[¶](#spack.test.graph.test_static_graph_mpileaks) Test a static spack graph for a simple package.
`spack.test.graph.``test_topo_sort`(*mock_packages*)[¶](#spack.test.graph.test_topo_sort) Test topo sort gives correct order.
##### spack.test.hg_fetch module[¶](#module-spack.test.hg_fetch)
`spack.test.hg_fetch.``test_fetch`(*type_of_test*, *secure*, *mock_hg_repository*, *config*, *mutable_mock_packages*)[¶](#spack.test.hg_fetch.test_fetch) Tries to:
1. Fetch the repo using a fetch strategy constructed with supplied args (they depend on type_of_test).
2. Check if the test_file is in the checked out repository.
3. Assert that the repository is at the revision supplied.
4. Add and remove some files, then reset the repo, and ensure it’s all there again.
##### spack.test.install module[¶](#module-spack.test.install)
*exception* `spack.test.install.``MockInstallError`(*message*, *long_message=None*)[¶](#spack.test.install.MockInstallError) Bases: [`spack.error.SpackError`](index.html#spack.error.SpackError)
*class* `spack.test.install.``MockStage`(*wrapped_stage*)[¶](#spack.test.install.MockStage) Bases: `object`
`create`()[¶](#spack.test.install.MockStage.create)
`destroy`()[¶](#spack.test.install.MockStage.destroy)
*class* `spack.test.install.``RemovePrefixChecker`(*wrapped_rm_prefix*)[¶](#spack.test.install.RemovePrefixChecker) Bases: `object`
`remove_prefix`()[¶](#spack.test.install.RemovePrefixChecker.remove_prefix)
`spack.test.install.``mock_remove_prefix`(**args*)[¶](#spack.test.install.mock_remove_prefix)
`spack.test.install.``test_dont_add_patches_to_installed_package`(*install_mockery*, *mock_fetch*)[¶](#spack.test.install.test_dont_add_patches_to_installed_package)
`spack.test.install.``test_failing_build`(*install_mockery*, *mock_fetch*)[¶](#spack.test.install.test_failing_build)
`spack.test.install.``test_install_and_uninstall`(*install_mockery*, *mock_fetch*)[¶](#spack.test.install.test_install_and_uninstall)
`spack.test.install.``test_installed_dependency_request_conflicts`(*install_mockery*, *mock_fetch*, *mutable_mock_packages*)[¶](#spack.test.install.test_installed_dependency_request_conflicts)
`spack.test.install.``test_partial_install_delete_prefix_and_stage`(*install_mockery*, *mock_fetch*)[¶](#spack.test.install.test_partial_install_delete_prefix_and_stage)
`spack.test.install.``test_partial_install_keep_prefix`(*install_mockery*, *mock_fetch*)[¶](#spack.test.install.test_partial_install_keep_prefix)
`spack.test.install.``test_second_install_no_overwrite_first`(*install_mockery*, *mock_fetch*)[¶](#spack.test.install.test_second_install_no_overwrite_first)
`spack.test.install.``test_store`(*install_mockery*, *mock_fetch*)[¶](#spack.test.install.test_store)
##### spack.test.make_executable module[¶](#module-spack.test.make_executable)
Tests for Spack’s built-in parallel make support. This just tests whether the right args are getting passed to make.
*class* `spack.test.make_executable.``MakeExecutableTest`(*methodName='runTest'*)[¶](#spack.test.make_executable.MakeExecutableTest) Bases: `unittest.case.TestCase`
`setUp`()[¶](#spack.test.make_executable.MakeExecutableTest.setUp) Hook method for setting up the test fixture before exercising it.
`tearDown`()[¶](#spack.test.make_executable.MakeExecutableTest.tearDown) Hook method for deconstructing the test fixture after testing it.
`test_make_explicit`()[¶](#spack.test.make_executable.MakeExecutableTest.test_make_explicit) `test_make_jobs_env`()[¶](#spack.test.make_executable.MakeExecutableTest.test_make_jobs_env) `test_make_normal`()[¶](#spack.test.make_executable.MakeExecutableTest.test_make_normal) `test_make_one_job`()[¶](#spack.test.make_executable.MakeExecutableTest.test_make_one_job) `test_make_parallel_disabled`()[¶](#spack.test.make_executable.MakeExecutableTest.test_make_parallel_disabled) `test_make_parallel_false`()[¶](#spack.test.make_executable.MakeExecutableTest.test_make_parallel_false) `test_make_parallel_precedence`()[¶](#spack.test.make_executable.MakeExecutableTest.test_make_parallel_precedence) ##### spack.test.mirror module[¶](#module-spack.test.mirror) `spack.test.mirror.``check_mirror`()[¶](#spack.test.mirror.check_mirror) `spack.test.mirror.``set_up_package`(*name*, *repository*, *url_attr*)[¶](#spack.test.mirror.set_up_package) Set up a mock package to be mirrored. Each package needs us to: 1. Set up a mock repo/archive to fetch from. 2. Point the package’s version args at that repo. 
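Steps 1 and 2 above amount to redirecting a package's fetch URL at a local archive, the same idea behind the `mock_fetch` fixture and `fake_fetchify`. A hedged sketch with a stand-in package class (not Spack's real `Package`, and the archive path is illustrative):

```python
class MockPackage:
    """Stand-in for a Spack package; only the url attribute matters here."""
    def __init__(self, name):
        self.name = name
        self.url = "https://example.com/%s-1.0.tar.gz" % name

def fake_fetchify(local_archive, pkg):
    """Point the package's fetch URL at a local archive (file:// scheme)."""
    pkg.url = "file://" + local_archive

pkg = MockPackage("mpileaks")
fake_fetchify("/tmp/mock-archive.tar.gz", pkg)
```

With the URL rewritten, the ordinary fetch machinery runs unmodified but never touches the network, which keeps the mirror tests fast and deterministic.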
`spack.test.mirror.``test_all_mirror`(*mock_git_repository*, *mock_svn_repository*, *mock_hg_repository*, *mock_archive*)[¶](#spack.test.mirror.test_all_mirror)
`spack.test.mirror.``test_git_mirror`(*mock_git_repository*)[¶](#spack.test.mirror.test_git_mirror)
`spack.test.mirror.``test_hg_mirror`(*mock_hg_repository*)[¶](#spack.test.mirror.test_hg_mirror)
`spack.test.mirror.``test_svn_mirror`(*mock_svn_repository*)[¶](#spack.test.mirror.test_svn_mirror)
`spack.test.mirror.``test_url_mirror`(*mock_archive*)[¶](#spack.test.mirror.test_url_mirror)
##### spack.test.module_parsing module[¶](#module-spack.test.module_parsing)
`spack.test.module_parsing.``save_env`()[¶](#spack.test.module_parsing.save_env)
`spack.test.module_parsing.``test_get_argument_from_module_line`()[¶](#spack.test.module_parsing.test_get_argument_from_module_line)
`spack.test.module_parsing.``test_get_module_cmd_fails`(*save_env*)[¶](#spack.test.module_parsing.test_get_module_cmd_fails)
`spack.test.module_parsing.``test_get_module_cmd_from_bash_parens`(*save_env*)[¶](#spack.test.module_parsing.test_get_module_cmd_from_bash_parens)
`spack.test.module_parsing.``test_get_module_cmd_from_bash_ticks`(*save_env*)[¶](#spack.test.module_parsing.test_get_module_cmd_from_bash_ticks)
`spack.test.module_parsing.``test_get_module_cmd_from_bash_using_modules`()[¶](#spack.test.module_parsing.test_get_module_cmd_from_bash_using_modules)
`spack.test.module_parsing.``test_get_module_cmd_from_which`(*tmpdir*, *save_env*)[¶](#spack.test.module_parsing.test_get_module_cmd_from_which)
`spack.test.module_parsing.``test_get_path_from_module`(*save_env*)[¶](#spack.test.module_parsing.test_get_path_from_module)
`spack.test.module_parsing.``test_get_path_from_module_contents`()[¶](#spack.test.module_parsing.test_get_path_from_module_contents)
`spack.test.module_parsing.``test_pkg_dir_from_module_name`()[¶](#spack.test.module_parsing.test_pkg_dir_from_module_name)
##### spack.test.multimethod module[¶](#module-spack.test.multimethod)
Test for multi_method dispatch.
`spack.test.multimethod.``test_default_works`(*mock_packages*)[¶](#spack.test.multimethod.test_default_works)
`spack.test.multimethod.``test_dependency_match`(*mock_packages*)[¶](#spack.test.multimethod.test_dependency_match)
`spack.test.multimethod.``test_mpi_version`(*mock_packages*)[¶](#spack.test.multimethod.test_mpi_version)
`spack.test.multimethod.``test_multimethod_with_base_class`(*mock_packages*)[¶](#spack.test.multimethod.test_multimethod_with_base_class)
`spack.test.multimethod.``test_no_version_match`(*mock_packages*)[¶](#spack.test.multimethod.test_no_version_match)
`spack.test.multimethod.``test_one_version_match`(*mock_packages*)[¶](#spack.test.multimethod.test_one_version_match)
`spack.test.multimethod.``test_target_match`(*mock_packages*)[¶](#spack.test.multimethod.test_target_match)
`spack.test.multimethod.``test_undefined_mpi_version`(*mock_packages*)[¶](#spack.test.multimethod.test_undefined_mpi_version)
`spack.test.multimethod.``test_version_overlap`(*mock_packages*)[¶](#spack.test.multimethod.test_version_overlap)
`spack.test.multimethod.``test_virtual_dep_match`(*mock_packages*)[¶](#spack.test.multimethod.test_virtual_dep_match)
##### spack.test.namespace_trie module[¶](#module-spack.test.namespace_trie)
`spack.test.namespace_trie.``test_add_multiple`(*trie*)[¶](#spack.test.namespace_trie.test_add_multiple)
`spack.test.namespace_trie.``test_add_none_multiple`(*trie*)[¶](#spack.test.namespace_trie.test_add_none_multiple)
`spack.test.namespace_trie.``test_add_none_single`(*trie*)[¶](#spack.test.namespace_trie.test_add_none_single)
`spack.test.namespace_trie.``test_add_single`(*trie*)[¶](#spack.test.namespace_trie.test_add_single)
`spack.test.namespace_trie.``test_add_three`(*trie*)[¶](#spack.test.namespace_trie.test_add_three)
`spack.test.namespace_trie.``trie`()[¶](#spack.test.namespace_trie.trie)
##### spack.test.optional_deps module[¶](#module-spack.test.optional_deps)
`spack.test.optional_deps.``spec_and_expected`(*request*)[¶](#spack.test.optional_deps.spec_and_expected) Parameters for the normalization test. `spack.test.optional_deps.``test_default_variant`(*config*, *mock_packages*)[¶](#spack.test.optional_deps.test_default_variant) `spack.test.optional_deps.``test_normalize`(*spec_and_expected*, *config*, *mock_packages*)[¶](#spack.test.optional_deps.test_normalize) ##### spack.test.package_hash module[¶](#module-spack.test.package_hash) `spack.test.package_hash.``compare_sans_name`(*eq*, *spec1*, *spec2*)[¶](#spack.test.package_hash.compare_sans_name) `spack.test.package_hash.``test_all_same_but_archive_hash`(*tmpdir*, *mock_packages*, *config*)[¶](#spack.test.package_hash.test_all_same_but_archive_hash) Archive hash is not intended to be reflected in Package hash. `spack.test.package_hash.``test_all_same_but_install`(*tmpdir*, *mock_packages*, *config*)[¶](#spack.test.package_hash.test_all_same_but_install) `spack.test.package_hash.``test_all_same_but_name`(*tmpdir*, *mock_packages*, *config*)[¶](#spack.test.package_hash.test_all_same_but_name) `spack.test.package_hash.``test_all_same_but_patch_contents`(*tmpdir*, *mock_packages*, *config*)[¶](#spack.test.package_hash.test_all_same_but_patch_contents) `spack.test.package_hash.``test_all_same_but_patches_to_apply`(*tmpdir*, *mock_packages*, *config*)[¶](#spack.test.package_hash.test_all_same_but_patches_to_apply) `spack.test.package_hash.``test_different_variants`(*tmpdir*, *mock_packages*, *config*)[¶](#spack.test.package_hash.test_different_variants) `spack.test.package_hash.``test_hash`(*tmpdir*, *mock_packages*, *config*)[¶](#spack.test.package_hash.test_hash) ##### spack.test.package_sanity module[¶](#module-spack.test.package_sanity) This test does sanity checks on Spack’s builtin package database. `spack.test.package_sanity.``check_repo`()[¶](#spack.test.package_sanity.check_repo) Get all packages in the builtin repo to make sure they work. 
`spack.test.package_sanity.``test_all_versions_are_lowercase`()[¶](#spack.test.package_sanity.test_all_versions_are_lowercase) Spack package names must be lowercase, and use - instead of _. `spack.test.package_sanity.``test_all_virtual_packages_have_default_providers`()[¶](#spack.test.package_sanity.test_all_virtual_packages_have_default_providers) All virtual packages must have a default provider explicitly set. `spack.test.package_sanity.``test_get_all_mock_packages`()[¶](#spack.test.package_sanity.test_get_all_mock_packages) Get the mock packages once each too. `spack.test.package_sanity.``test_get_all_packages`()[¶](#spack.test.package_sanity.test_get_all_packages) Get all packages once and make sure that works. `spack.test.package_sanity.``test_no_fixme`()[¶](#spack.test.package_sanity.test_no_fixme) Packages should not contain any boilerplate such as FIXME or example.com. `spack.test.package_sanity.``test_package_version_consistency`()[¶](#spack.test.package_sanity.test_package_version_consistency) Make sure all versions on builtin packages can produce a fetcher. 
##### spack.test.packages module[¶](#module-spack.test.packages) *class* `spack.test.packages.``TestPackage`[¶](#spack.test.packages.TestPackage) Bases: `object` `pytestmark` *= [Mark(name='usefixtures', args=('config', 'mock_packages'), kwargs={})]*[¶](#spack.test.packages.TestPackage.pytestmark) `test_all_same_but_archive_hash`()[¶](#spack.test.packages.TestPackage.test_all_same_but_archive_hash) `test_content_hash_all_same_but_patch_contents`()[¶](#spack.test.packages.TestPackage.test_content_hash_all_same_but_patch_contents) `test_content_hash_different_variants`()[¶](#spack.test.packages.TestPackage.test_content_hash_different_variants) `test_dependency_extensions`()[¶](#spack.test.packages.TestPackage.test_dependency_extensions) `test_import_class_from_package`()[¶](#spack.test.packages.TestPackage.test_import_class_from_package) `test_import_module_from_package`()[¶](#spack.test.packages.TestPackage.test_import_module_from_package) `test_import_namespace_container_modules`()[¶](#spack.test.packages.TestPackage.test_import_namespace_container_modules) `test_import_package`()[¶](#spack.test.packages.TestPackage.test_import_package) `test_import_package_as`()[¶](#spack.test.packages.TestPackage.test_import_package_as) `test_inheritance_of_diretives`()[¶](#spack.test.packages.TestPackage.test_inheritance_of_diretives) `test_load_package`()[¶](#spack.test.packages.TestPackage.test_load_package) `test_nonexisting_package_filename`()[¶](#spack.test.packages.TestPackage.test_nonexisting_package_filename) `test_package_class_names`()[¶](#spack.test.packages.TestPackage.test_package_class_names) `test_package_filename`()[¶](#spack.test.packages.TestPackage.test_package_filename) `test_package_name`()[¶](#spack.test.packages.TestPackage.test_package_name) `spack.test.packages.``test_git_top_level`(*mock_packages*, *config*)[¶](#spack.test.packages.test_git_top_level) Ensure that top-level git attribute can be used as a default. 
`spack.test.packages.``test_git_url_top_level_conflicts`(*mock_packages*, *config*)[¶](#spack.test.packages.test_git_url_top_level_conflicts) Test git fetch strategy inference when url is specified with git. `spack.test.packages.``test_git_url_top_level_git_versions`(*mock_packages*, *config*)[¶](#spack.test.packages.test_git_url_top_level_git_versions) Test git fetch strategy inference when url is specified with git. `spack.test.packages.``test_git_url_top_level_url_versions`(*mock_packages*, *config*)[¶](#spack.test.packages.test_git_url_top_level_url_versions) Test URL fetch strategy inference when url is specified with git. `spack.test.packages.``test_hg_top_level`(*mock_packages*, *config*)[¶](#spack.test.packages.test_hg_top_level) Ensure that top-level hg attribute can be used as a default. `spack.test.packages.``test_no_extrapolate_without_url`(*mock_packages*, *config*)[¶](#spack.test.packages.test_no_extrapolate_without_url) Verify that we can’t extrapolate versions for non-URL packages. `spack.test.packages.``test_svn_top_level`(*mock_packages*, *config*)[¶](#spack.test.packages.test_svn_top_level) Ensure that top-level svn attribute can be used as a default. `spack.test.packages.``test_two_vcs_fetchers_top_level`(*mock_packages*, *config*)[¶](#spack.test.packages.test_two_vcs_fetchers_top_level) Verify conflict when two VCS strategies are specified together. 
`spack.test.packages.``test_url_for_version_with_no_urls`()[¶](#spack.test.packages.test_url_for_version_with_no_urls) `spack.test.packages.``test_url_for_version_with_only_overrides`(*mock_packages*, *config*)[¶](#spack.test.packages.test_url_for_version_with_only_overrides) `spack.test.packages.``test_url_for_version_with_only_overrides_with_gaps`(*mock_packages*, *config*)[¶](#spack.test.packages.test_url_for_version_with_only_overrides_with_gaps) `spack.test.packages.``test_urls_for_versions`(*mock_packages*, *config*)[¶](#spack.test.packages.test_urls_for_versions) Version directive without a ‘url’ argument should use default url. ##### spack.test.packaging module[¶](#module-spack.test.packaging) This test checks the binary packaging infrastructure `spack.test.packaging.``fake_fetchify`(*url*, *pkg*)[¶](#spack.test.packaging.fake_fetchify) Fake the URL for a package so it downloads from a file. `spack.test.packaging.``has_gnupg2`()[¶](#spack.test.packaging.has_gnupg2) `spack.test.packaging.``test_buildcache`(*mock_archive*, *tmpdir*)[¶](#spack.test.packaging.test_buildcache) `spack.test.packaging.``test_elf_paths`()[¶](#spack.test.packaging.test_elf_paths) `spack.test.packaging.``test_macho_paths`()[¶](#spack.test.packaging.test_macho_paths) `spack.test.packaging.``test_needs_relocation`()[¶](#spack.test.packaging.test_needs_relocation) `spack.test.packaging.``test_relocate_macho`(*tmpdir*)[¶](#spack.test.packaging.test_relocate_macho) `spack.test.packaging.``test_relocate_text`(*tmpdir*)[¶](#spack.test.packaging.test_relocate_text) `spack.test.packaging.``testing_gpg_directory`(*tmpdir*)[¶](#spack.test.packaging.testing_gpg_directory) ##### spack.test.patch module[¶](#module-spack.test.patch) `spack.test.patch.``mock_stage`(*tmpdir*, *monkeypatch*)[¶](#spack.test.patch.mock_stage) `spack.test.patch.``test_conditional_patched_dependencies`(*mock_packages*, *config*)[¶](#spack.test.patch.test_conditional_patched_dependencies) Test whether conditional patched 
dependencies work. `spack.test.patch.``test_conditional_patched_deps_with_conditions`(*mock_packages*, *config*)[¶](#spack.test.patch.test_conditional_patched_deps_with_conditions) Test whether conditional patched dependencies with conditions work. `spack.test.patch.``test_multiple_patched_dependencies`(*mock_packages*, *config*)[¶](#spack.test.patch.test_multiple_patched_dependencies) Test whether multiple patched dependencies work. `spack.test.patch.``test_patch_in_spec`(*mock_packages*, *config*)[¶](#spack.test.patch.test_patch_in_spec) Test whether patches in a package appear in the spec. `spack.test.patch.``test_patched_dependency`(*mock_packages*, *config*, *install_mockery*, *mock_fetch*)[¶](#spack.test.patch.test_patched_dependency) Test whether patched dependencies work. `spack.test.patch.``test_url_patch`(*mock_stage*, *filename*, *sha256*, *archive_sha256*)[¶](#spack.test.patch.test_url_patch) ##### spack.test.pattern module[¶](#module-spack.test.pattern) `spack.test.pattern.``composite`(*interface*, *implementation*, *request*)[¶](#spack.test.pattern.composite) Returns a composite that contains an instance of implementation(1) and one of implementation(2). `spack.test.pattern.``implementation`(*interface*)[¶](#spack.test.pattern.implementation) Returns an implementation of the interface `spack.test.pattern.``interface`()[¶](#spack.test.pattern.interface) Returns the interface class for the composite. `spack.test.pattern.``test_composite_interface_calls`(*interface*, *composite*)[¶](#spack.test.pattern.test_composite_interface_calls) `spack.test.pattern.``test_composite_no_methods`()[¶](#spack.test.pattern.test_composite_no_methods) `spack.test.pattern.``test_composite_wrong_container`(*interface*)[¶](#spack.test.pattern.test_composite_wrong_container) ##### spack.test.provider_index module[¶](#module-spack.test.provider_index) Tests for provider index cache files. 
Tests assume that mock packages provide this:

```
{'blas':   {blas: set([netlib-blas, openblas, openblas-with-lapack])},
 'lapack': {lapack: set([netlib-lapack, openblas-with-lapack])},
 'mpi':    {mpi@:1: set([mpich@:1]),
            mpi@:2.0: set([mpich2]),
            mpi@:2.1: set([mpich2@1.1:]),
            mpi@:2.2: set([mpich2@1.2:]),
            mpi@:3: set([mpich@3:]),
            mpi@:10.0: set([zmpi])},
 'stuff':  {stuff: set([externalvirtual])}}
```

`spack.test.provider_index.``test_copy`(*mock_packages*)[¶](#spack.test.provider_index.test_copy) `spack.test.provider_index.``test_equal`(*mock_packages*)[¶](#spack.test.provider_index.test_equal) `spack.test.provider_index.``test_mpi_providers`(*mock_packages*)[¶](#spack.test.provider_index.test_mpi_providers) `spack.test.provider_index.``test_providers_for_simple`(*mock_packages*)[¶](#spack.test.provider_index.test_providers_for_simple) `spack.test.provider_index.``test_yaml_round_trip`(*mock_packages*)[¶](#spack.test.provider_index.test_yaml_round_trip) ##### spack.test.python_version module[¶](#module-spack.test.python_version) Check that Spack complies with minimum supported python versions. We ensure that all Spack files work with Python2 >= 2.6 and Python3 >= 3.0. We’d like to drop 2.6 support at some point, but there are still many HPC systems that ship with RHEL6/CentOS 6, which have Python 2.6 as the default version. Once those go away, we can likely drop 2.6 and increase the minimum supported Python 3 version, as well. `spack.test.python_version.``check_python_versions`(*files*)[¶](#spack.test.python_version.check_python_versions) Check that a set of Python files works with supported Python versions `spack.test.python_version.``pyfiles`(*search_paths*, *exclude=()*)[¶](#spack.test.python_version.pyfiles) Generator that yields all the python files in the search paths.

Parameters:

* **search_paths** (*list of str*) – list of paths to search for python files
* **exclude** (*list of str*) – file paths to exclude from search

Yields: python files in the search path.

`spack.test.python_version.``test_core_module_compatibility`()[¶](#spack.test.python_version.test_core_module_compatibility) Test that all core spack modules work with supported Python versions. `spack.test.python_version.``test_package_module_compatibility`()[¶](#spack.test.python_version.test_package_module_compatibility) Test that all spack packages work with supported Python versions. ##### spack.test.repo module[¶](#module-spack.test.repo) `spack.test.repo.``extra_repo`(*tmpdir_factory*)[¶](#spack.test.repo.extra_repo) `spack.test.repo.``repo_for_test`()[¶](#spack.test.repo.repo_for_test) `spack.test.repo.``test_repo_getpkg`(*repo_for_test*)[¶](#spack.test.repo.test_repo_getpkg) `spack.test.repo.``test_repo_multi_getpkg`(*repo_for_test*, *extra_repo*)[¶](#spack.test.repo.test_repo_multi_getpkg) `spack.test.repo.``test_repo_multi_getpkgclass`(*repo_for_test*, *extra_repo*)[¶](#spack.test.repo.test_repo_multi_getpkgclass) `spack.test.repo.``test_repo_pkg_with_unknown_namespace`(*repo_for_test*)[¶](#spack.test.repo.test_repo_pkg_with_unknown_namespace) `spack.test.repo.``test_repo_unknown_pkg`(*repo_for_test*)[¶](#spack.test.repo.test_repo_unknown_pkg) ##### spack.test.sbang module[¶](#module-spack.test.sbang) Test that Spack’s shebang filtering works correctly. *class* `spack.test.sbang.``ScriptDirectory`[¶](#spack.test.sbang.ScriptDirectory) Bases: `object` Directory full of test scripts to run sbang instrumentation on.
`destroy`()[¶](#spack.test.sbang.ScriptDirectory.destroy) `spack.test.sbang.``script_dir`()[¶](#spack.test.sbang.script_dir) `spack.test.sbang.``test_shebang_handles_non_writable_files`(*script_dir*)[¶](#spack.test.sbang.test_shebang_handles_non_writable_files) `spack.test.sbang.``test_shebang_handling`(*script_dir*)[¶](#spack.test.sbang.test_shebang_handling) ##### spack.test.spack_yaml module[¶](#module-spack.test.spack_yaml) Test Spack’s custom YAML format. `spack.test.spack_yaml.``data`()[¶](#spack.test.spack_yaml.data) Returns the data loaded from a test file `spack.test.spack_yaml.``test_dict_order`(*data*)[¶](#spack.test.spack_yaml.test_dict_order) `spack.test.spack_yaml.``test_line_numbers`(*data*)[¶](#spack.test.spack_yaml.test_line_numbers) `spack.test.spack_yaml.``test_parse`(*data*)[¶](#spack.test.spack_yaml.test_parse) `spack.test.spack_yaml.``test_yaml_aliases`()[¶](#spack.test.spack_yaml.test_yaml_aliases) ##### spack.test.spec_dag module[¶](#module-spack.test.spec_dag) These tests check Spec DAG operations using dummy packages. *class* `spack.test.spec_dag.``TestSpecDag`[¶](#spack.test.spec_dag.TestSpecDag) Bases: `object` `check_diamond_deptypes`(*spec*)[¶](#spack.test.spec_dag.TestSpecDag.check_diamond_deptypes) Validate deptypes in dt-diamond spec. This ensures that concretization works properly when two packages depend on the same dependency in different ways. `check_diamond_normalized_dag`(*spec*)[¶](#spack.test.spec_dag.TestSpecDag.check_diamond_normalized_dag) `pytestmark` *= [Mark(name='usefixtures', args=('mutable_mock_packages',), kwargs={})]*[¶](#spack.test.spec_dag.TestSpecDag.pytestmark) `test_canonical_deptype`()[¶](#spack.test.spec_dag.TestSpecDag.test_canonical_deptype) `test_concretize_deptypes`()[¶](#spack.test.spec_dag.TestSpecDag.test_concretize_deptypes) Ensure that dependency types are preserved after concretization. 
`test_conflicting_package_constraints`(*set_dependency*)[¶](#spack.test.spec_dag.TestSpecDag.test_conflicting_package_constraints) `test_conflicting_spec_constraints`()[¶](#spack.test.spec_dag.TestSpecDag.test_conflicting_spec_constraints) `test_construct_spec_with_deptypes`()[¶](#spack.test.spec_dag.TestSpecDag.test_construct_spec_with_deptypes) Ensure that it is possible to construct a spec with explicit dependency types. `test_contains`()[¶](#spack.test.spec_dag.TestSpecDag.test_contains) `test_copy_concretized`()[¶](#spack.test.spec_dag.TestSpecDag.test_copy_concretized) `test_copy_dependencies`()[¶](#spack.test.spec_dag.TestSpecDag.test_copy_dependencies) `test_copy_deptypes`()[¶](#spack.test.spec_dag.TestSpecDag.test_copy_deptypes) Ensure that dependency types are preserved by spec copy. `test_copy_normalized`()[¶](#spack.test.spec_dag.TestSpecDag.test_copy_normalized) `test_copy_simple`()[¶](#spack.test.spec_dag.TestSpecDag.test_copy_simple) `test_dependents_and_dependencies_are_correct`()[¶](#spack.test.spec_dag.TestSpecDag.test_dependents_and_dependencies_are_correct) `test_deptype_traversal`()[¶](#spack.test.spec_dag.TestSpecDag.test_deptype_traversal) `test_deptype_traversal_full`()[¶](#spack.test.spec_dag.TestSpecDag.test_deptype_traversal_full) `test_deptype_traversal_run`()[¶](#spack.test.spec_dag.TestSpecDag.test_deptype_traversal_run) `test_deptype_traversal_with_builddeps`()[¶](#spack.test.spec_dag.TestSpecDag.test_deptype_traversal_with_builddeps) `test_edge_traversals`()[¶](#spack.test.spec_dag.TestSpecDag.test_edge_traversals) Make sure child and parent traversals of specs work. 
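The deptype-filtered traversals above (`test_deptype_traversal*`) follow only edges whose dependency types intersect the requested set. A minimal sketch of that idea, using plain dicts rather than Spack's Spec objects:

```python
def traverse(graph, root, deptypes):
    """Preorder traversal that follows only edges carrying a requested
    dependency type. `graph` maps node -> list of (child, types) edges."""
    visited, order = set(), []

    def visit(node):
        if node in visited:
            return
        visited.add(node)
        order.append(node)
        for child, types in graph.get(node, []):
            if types & deptypes:  # edge matches at least one requested type
                visit(child)

    visit(root)
    return order
```

With the w/x/y/z DAG used by `test_test_deptype`, a (link, build) traversal never reaches the test-only nodes x and z.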
`test_equal`()[¶](#spack.test.spec_dag.TestSpecDag.test_equal) `test_getitem_exceptional_paths`()[¶](#spack.test.spec_dag.TestSpecDag.test_getitem_exceptional_paths) `test_getitem_query`()[¶](#spack.test.spec_dag.TestSpecDag.test_getitem_query) `test_hash_bits`()[¶](#spack.test.spec_dag.TestSpecDag.test_hash_bits) Ensure getting first n bits of a base32-encoded DAG hash works. `test_invalid_dep`()[¶](#spack.test.spec_dag.TestSpecDag.test_invalid_dep) `test_invalid_literal_spec`()[¶](#spack.test.spec_dag.TestSpecDag.test_invalid_literal_spec) `test_normalize_a_lot`()[¶](#spack.test.spec_dag.TestSpecDag.test_normalize_a_lot) `test_normalize_diamond_deptypes`()[¶](#spack.test.spec_dag.TestSpecDag.test_normalize_diamond_deptypes) Ensure that dependency types are preserved even if the same thing is depended on in two different ways. `test_normalize_mpileaks`()[¶](#spack.test.spec_dag.TestSpecDag.test_normalize_mpileaks) `test_normalize_twice`()[¶](#spack.test.spec_dag.TestSpecDag.test_normalize_twice) Make sure normalize can be run twice on the same spec, and that it is idempotent. 
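`test_hash_bits` concerns taking the first n bits of a base32-encoded hash. Since each RFC 4648 base32 character encodes 5 bits, one way to do this (an illustrative sketch, not Spack's code) is:

```python
# Lowercase RFC 4648 base32 alphabet.
B32_ALPHABET = "abcdefghijklmnopqrstuvwxyz234567"

def base32_prefix_bits(b32_string, bits):
    """Return the first `bits` bits of a base32 string as an integer."""
    nchars = -(-bits // 5)  # ceil(bits / 5): characters needed
    value = 0
    for char in b32_string[:nchars]:
        value = (value << 5) | B32_ALPHABET.index(char)
    # Drop extra low-order bits contributed by the last character.
    return value >> (nchars * 5 - bits)
```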
`test_normalize_with_virtual_package`()[¶](#spack.test.spec_dag.TestSpecDag.test_normalize_with_virtual_package) `test_normalize_with_virtual_spec`()[¶](#spack.test.spec_dag.TestSpecDag.test_normalize_with_virtual_spec) `test_postorder_edge_traversal`()[¶](#spack.test.spec_dag.TestSpecDag.test_postorder_edge_traversal) `test_postorder_node_traversal`()[¶](#spack.test.spec_dag.TestSpecDag.test_postorder_node_traversal) `test_postorder_path_traversal`()[¶](#spack.test.spec_dag.TestSpecDag.test_postorder_path_traversal) `test_preorder_edge_traversal`()[¶](#spack.test.spec_dag.TestSpecDag.test_preorder_edge_traversal) `test_preorder_node_traversal`()[¶](#spack.test.spec_dag.TestSpecDag.test_preorder_node_traversal) `test_preorder_path_traversal`()[¶](#spack.test.spec_dag.TestSpecDag.test_preorder_path_traversal) `test_traversal_directions`()[¶](#spack.test.spec_dag.TestSpecDag.test_traversal_directions) Make sure child and parent traversals of specs work. `test_unsatisfiable_architecture`(*set_dependency*)[¶](#spack.test.spec_dag.TestSpecDag.test_unsatisfiable_architecture) `test_unsatisfiable_compiler`(*set_dependency*)[¶](#spack.test.spec_dag.TestSpecDag.test_unsatisfiable_compiler) `test_unsatisfiable_compiler_version`(*set_dependency*)[¶](#spack.test.spec_dag.TestSpecDag.test_unsatisfiable_compiler_version) `test_unsatisfiable_version`(*set_dependency*)[¶](#spack.test.spec_dag.TestSpecDag.test_unsatisfiable_version) `spack.test.spec_dag.``check_links`(*spec_to_check*)[¶](#spack.test.spec_dag.check_links) `spack.test.spec_dag.``saved_deps`()[¶](#spack.test.spec_dag.saved_deps) Returns a dictionary to save the dependencies. `spack.test.spec_dag.``set_dependency`(*saved_deps*)[¶](#spack.test.spec_dag.set_dependency) Returns a function that alters the dependency information for a package in the `saved_deps` fixture. 
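The `saved_deps`/`set_dependency` pair follows a common save-and-restore fixture pattern. A hedged sketch, with a plain dict standing in for Spack's package repository:

```python
def make_set_dependency(registry, saved_deps):
    """Return a setter that records the original dependency entry in
    `saved_deps` before overwriting it, so it can be restored later."""
    def set_dependency(pkg_name, dep_spec):
        dep_name = dep_spec.split("@")[0]
        # Save the original entry only once per (package, dependency) pair.
        saved_deps.setdefault((pkg_name, dep_name),
                              registry[pkg_name].get(dep_name))
        registry[pkg_name][dep_name] = dep_spec
    return set_dependency

def restore_deps(registry, saved_deps):
    """Undo every change recorded by the setter above."""
    for (pkg_name, dep_name), original in saved_deps.items():
        if original is None:
            del registry[pkg_name][dep_name]  # dependency was added
        else:
            registry[pkg_name][dep_name] = original
```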
`spack.test.spec_dag.``test_conditional_dep_with_user_constraints`()[¶](#spack.test.spec_dag.test_conditional_dep_with_user_constraints) This sets up packages X->Y such that X depends on Y conditionally. It then constructs a Spec with X but with no constraints on X, so that the initial normalization pass cannot determine whether the constraints are met to add the dependency; this checks whether a user-specified constraint on Y is applied properly. `spack.test.spec_dag.``test_test_deptype`()[¶](#spack.test.spec_dag.test_test_deptype) Ensure that test-only dependencies are only included for specified packages in the following spec DAG:

```
  w
 /|
x y
  |
  z
```

w->y deptypes are (link, build), w->x and y->z deptypes are (test) ##### spack.test.spec_semantics module[¶](#module-spack.test.spec_semantics) *class* `spack.test.spec_semantics.``TestSpecSematics`[¶](#spack.test.spec_semantics.TestSpecSematics) Bases: `object` This tests satisfies(), constrain() and other semantic operations on specs.
`pytestmark` *= [Mark(name='usefixtures', args=('config', 'mock_packages'), kwargs={})]*[¶](#spack.test.spec_semantics.TestSpecSematics.pytestmark) `test_constrain_architecture`()[¶](#spack.test.spec_semantics.TestSpecSematics.test_constrain_architecture) `test_constrain_changed`()[¶](#spack.test.spec_semantics.TestSpecSematics.test_constrain_changed) `test_constrain_compiler`()[¶](#spack.test.spec_semantics.TestSpecSematics.test_constrain_compiler) `test_constrain_compiler_flags`()[¶](#spack.test.spec_semantics.TestSpecSematics.test_constrain_compiler_flags) `test_constrain_dependency_changed`()[¶](#spack.test.spec_semantics.TestSpecSematics.test_constrain_dependency_changed) `test_constrain_dependency_not_changed`()[¶](#spack.test.spec_semantics.TestSpecSematics.test_constrain_dependency_not_changed) `test_constrain_multi_value_variant`()[¶](#spack.test.spec_semantics.TestSpecSematics.test_constrain_multi_value_variant) `test_constrain_not_changed`()[¶](#spack.test.spec_semantics.TestSpecSematics.test_constrain_not_changed) `test_constrain_variants`()[¶](#spack.test.spec_semantics.TestSpecSematics.test_constrain_variants) `test_copy_satisfies_transitive`()[¶](#spack.test.spec_semantics.TestSpecSematics.test_copy_satisfies_transitive) `test_dep_index`()[¶](#spack.test.spec_semantics.TestSpecSematics.test_dep_index) `test_exceptional_paths_for_constructor`()[¶](#spack.test.spec_semantics.TestSpecSematics.test_exceptional_paths_for_constructor) `test_indirect_unsatisfied_single_valued_variant`()[¶](#spack.test.spec_semantics.TestSpecSematics.test_indirect_unsatisfied_single_valued_variant) `test_invalid_constraint`()[¶](#spack.test.spec_semantics.TestSpecSematics.test_invalid_constraint) `test_satisfies`()[¶](#spack.test.spec_semantics.TestSpecSematics.test_satisfies) `test_satisfies_architecture`()[¶](#spack.test.spec_semantics.TestSpecSematics.test_satisfies_architecture) 
`test_satisfies_compiler`()[¶](#spack.test.spec_semantics.TestSpecSematics.test_satisfies_compiler) `test_satisfies_compiler_version`()[¶](#spack.test.spec_semantics.TestSpecSematics.test_satisfies_compiler_version) `test_satisfies_dependencies`()[¶](#spack.test.spec_semantics.TestSpecSematics.test_satisfies_dependencies) `test_satisfies_dependency_versions`()[¶](#spack.test.spec_semantics.TestSpecSematics.test_satisfies_dependency_versions) `test_satisfies_matching_compiler_flag`()[¶](#spack.test.spec_semantics.TestSpecSematics.test_satisfies_matching_compiler_flag) `test_satisfies_matching_variant`()[¶](#spack.test.spec_semantics.TestSpecSematics.test_satisfies_matching_variant) `test_satisfies_multi_value_variant`()[¶](#spack.test.spec_semantics.TestSpecSematics.test_satisfies_multi_value_variant) `test_satisfies_namespace`()[¶](#spack.test.spec_semantics.TestSpecSematics.test_satisfies_namespace) `test_satisfies_namespaced_dep`()[¶](#spack.test.spec_semantics.TestSpecSematics.test_satisfies_namespaced_dep) Ensure spec from same or unspecified namespace satisfies namespace constraint. `test_satisfies_same_spec_with_different_hash`()[¶](#spack.test.spec_semantics.TestSpecSematics.test_satisfies_same_spec_with_different_hash) Ensure that concrete specs are matched *exactly* by hash. `test_satisfies_single_valued_variant`()[¶](#spack.test.spec_semantics.TestSpecSematics.test_satisfies_single_valued_variant) Tests that the case reported in <https://github.com/spack/spack/pull/2386#issuecomment-282147639> is handled correctly. 
`test_satisfies_unconstrained_compiler_flag`()[¶](#spack.test.spec_semantics.TestSpecSematics.test_satisfies_unconstrained_compiler_flag) `test_satisfies_unconstrained_variant`()[¶](#spack.test.spec_semantics.TestSpecSematics.test_satisfies_unconstrained_variant) `test_satisfies_virtual`()[¶](#spack.test.spec_semantics.TestSpecSematics.test_satisfies_virtual) `test_satisfies_virtual_dep_with_virtual_constraint`()[¶](#spack.test.spec_semantics.TestSpecSematics.test_satisfies_virtual_dep_with_virtual_constraint) Ensure we can satisfy virtual constraints when there are multiple vdep providers in the specs. `test_satisfies_virtual_dependencies`()[¶](#spack.test.spec_semantics.TestSpecSematics.test_satisfies_virtual_dependencies) `test_satisfies_virtual_dependency_versions`()[¶](#spack.test.spec_semantics.TestSpecSematics.test_satisfies_virtual_dependency_versions) `test_self_index`()[¶](#spack.test.spec_semantics.TestSpecSematics.test_self_index) `test_spec_contains_deps`()[¶](#spack.test.spec_semantics.TestSpecSematics.test_spec_contains_deps) `test_spec_formatting`()[¶](#spack.test.spec_semantics.TestSpecSematics.test_spec_formatting) `test_unsatisfiable_compiler_flag`()[¶](#spack.test.spec_semantics.TestSpecSematics.test_unsatisfiable_compiler_flag) `test_unsatisfiable_compiler_flag_mismatch`()[¶](#spack.test.spec_semantics.TestSpecSematics.test_unsatisfiable_compiler_flag_mismatch) `test_unsatisfiable_multi_value_variant`()[¶](#spack.test.spec_semantics.TestSpecSematics.test_unsatisfiable_multi_value_variant) `test_unsatisfiable_variant_mismatch`()[¶](#spack.test.spec_semantics.TestSpecSematics.test_unsatisfiable_variant_mismatch) `test_unsatisfiable_variant_types`()[¶](#spack.test.spec_semantics.TestSpecSematics.test_unsatisfiable_variant_types) `test_unsatisfiable_variants`()[¶](#spack.test.spec_semantics.TestSpecSematics.test_unsatisfiable_variants) 
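The satisfies()/constrain() semantics exercised above can be illustrated with a toy model in which a spec is a dict of pinned attributes (a deliberate simplification of Spack's Spec class):

```python
def satisfies(spec, constraint):
    """A spec satisfies a constraint when it pins every constrained
    attribute to the same value."""
    return all(spec.get(key) == value for key, value in constraint.items())

def constrain(spec, constraint):
    """Tighten `spec` in place. Return True if anything changed; raise
    ValueError on a contradiction (cf. check_constrain_changed /
    check_invalid_constraint)."""
    changed = False
    for key, value in constraint.items():
        if key in spec and spec[key] != value:
            raise ValueError("unsatisfiable constraint on %r" % key)
        if key not in spec:
            spec[key] = value
            changed = True
    return changed
```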
`test_unsatisfied_single_valued_variant`()[¶](#spack.test.spec_semantics.TestSpecSematics.test_unsatisfied_single_valued_variant) `test_virtual_index`()[¶](#spack.test.spec_semantics.TestSpecSematics.test_virtual_index) `spack.test.spec_semantics.``argument_factory`(*argument_spec*, *left*)[¶](#spack.test.spec_semantics.argument_factory) `spack.test.spec_semantics.``check_constrain`(*expected*, *spec*, *constraint*)[¶](#spack.test.spec_semantics.check_constrain) `spack.test.spec_semantics.``check_constrain_changed`(*spec*, *constraint*)[¶](#spack.test.spec_semantics.check_constrain_changed) `spack.test.spec_semantics.``check_constrain_not_changed`(*spec*, *constraint*)[¶](#spack.test.spec_semantics.check_constrain_not_changed) `spack.test.spec_semantics.``check_invalid_constraint`(*spec*, *constraint*)[¶](#spack.test.spec_semantics.check_invalid_constraint) `spack.test.spec_semantics.``check_satisfies`(*target_spec*, *argument_spec*, *target_concrete=False*)[¶](#spack.test.spec_semantics.check_satisfies) `spack.test.spec_semantics.``check_unsatisfiable`(*target_spec*, *argument_spec*, *target_concrete=False*)[¶](#spack.test.spec_semantics.check_unsatisfiable) `spack.test.spec_semantics.``target_factory`(*spec_string*, *target_concrete*)[¶](#spack.test.spec_semantics.target_factory) ##### spack.test.spec_syntax module[¶](#module-spack.test.spec_syntax) *class* `spack.test.spec_syntax.``TestSpecSyntax`[¶](#spack.test.spec_syntax.TestSpecSyntax) Bases: `object` `check_lex`(*tokens*, *spec*)[¶](#spack.test.spec_syntax.TestSpecSyntax.check_lex) Check that the provided spec parses to the provided token list. `check_parse`(*expected*, *spec=None*)[¶](#spack.test.spec_syntax.TestSpecSyntax.check_parse) Assert that the provided spec is able to be parsed. If this is called with one argument, it assumes that the string is canonical (i.e., no spaces and ~ instead of - for variants) and that it will convert back to the string it came from. 
If this is called with two arguments, the first argument is the expected canonical form and the second is a non-canonical input to be parsed. `test_ambiguous`()[¶](#spack.test.spec_syntax.TestSpecSyntax.test_ambiguous) `test_ambiguous_hash`(*database*)[¶](#spack.test.spec_syntax.TestSpecSyntax.test_ambiguous_hash) `test_anonymous_specs`()[¶](#spack.test.spec_syntax.TestSpecSyntax.test_anonymous_specs) `test_anonymous_specs_with_multiple_parts`()[¶](#spack.test.spec_syntax.TestSpecSyntax.test_anonymous_specs_with_multiple_parts) `test_canonicalize`()[¶](#spack.test.spec_syntax.TestSpecSyntax.test_canonicalize) `test_dep_spec_by_hash`(*database*)[¶](#spack.test.spec_syntax.TestSpecSyntax.test_dep_spec_by_hash) `test_dependencies_with_versions`()[¶](#spack.test.spec_syntax.TestSpecSyntax.test_dependencies_with_versions) `test_duplicate_architecture`()[¶](#spack.test.spec_syntax.TestSpecSyntax.test_duplicate_architecture) `test_duplicate_architecture_component`()[¶](#spack.test.spec_syntax.TestSpecSyntax.test_duplicate_architecture_component) `test_duplicate_compiler`()[¶](#spack.test.spec_syntax.TestSpecSyntax.test_duplicate_compiler) `test_duplicate_dependency`()[¶](#spack.test.spec_syntax.TestSpecSyntax.test_duplicate_dependency) `test_duplicate_variant`()[¶](#spack.test.spec_syntax.TestSpecSyntax.test_duplicate_variant) `test_full_specs`()[¶](#spack.test.spec_syntax.TestSpecSyntax.test_full_specs) `test_invalid_hash`(*database*)[¶](#spack.test.spec_syntax.TestSpecSyntax.test_invalid_hash) `test_kv_with_quotes`()[¶](#spack.test.spec_syntax.TestSpecSyntax.test_kv_with_quotes) `test_kv_with_spaces`()[¶](#spack.test.spec_syntax.TestSpecSyntax.test_kv_with_spaces) `test_kv_without_quotes`()[¶](#spack.test.spec_syntax.TestSpecSyntax.test_kv_without_quotes) `test_minimal_spaces`()[¶](#spack.test.spec_syntax.TestSpecSyntax.test_minimal_spaces) `test_multiple_specs`()[¶](#spack.test.spec_syntax.TestSpecSyntax.test_multiple_specs) 
`test_multiple_specs_after_kv`()[¶](#spack.test.spec_syntax.TestSpecSyntax.test_multiple_specs_after_kv) `test_multiple_specs_long_second`()[¶](#spack.test.spec_syntax.TestSpecSyntax.test_multiple_specs_long_second) `test_multiple_specs_with_hash`(*database*)[¶](#spack.test.spec_syntax.TestSpecSyntax.test_multiple_specs_with_hash) `test_nonexistent_hash`(*database*)[¶](#spack.test.spec_syntax.TestSpecSyntax.test_nonexistent_hash) Ensure we get errors for nonexistent hashes. `test_package_names`()[¶](#spack.test.spec_syntax.TestSpecSyntax.test_package_names) `test_parse_errors`()[¶](#spack.test.spec_syntax.TestSpecSyntax.test_parse_errors) `test_redundant_spec`(*database*)[¶](#spack.test.spec_syntax.TestSpecSyntax.test_redundant_spec) Check that redundant spec constraints raise errors. TODO (TG): does this need to be an error? Or should concrete specs only raise errors if constraints cause a contradiction? `test_simple_dependence`()[¶](#spack.test.spec_syntax.TestSpecSyntax.test_simple_dependence) `test_spaces_between_dependences`()[¶](#spack.test.spec_syntax.TestSpecSyntax.test_spaces_between_dependences) `test_spaces_between_options`()[¶](#spack.test.spec_syntax.TestSpecSyntax.test_spaces_between_options) `test_spec_by_hash`(*database*)[¶](#spack.test.spec_syntax.TestSpecSyntax.test_spec_by_hash) `test_way_too_many_spaces`()[¶](#spack.test.spec_syntax.TestSpecSyntax.test_way_too_many_spaces) `spack.test.spec_syntax.``test_parse_anonymous_specs`(*spec*, *anon_spec*, *spec_name*)[¶](#spack.test.spec_syntax.test_parse_anonymous_specs) ##### spack.test.spec_yaml module[¶](#module-spack.test.spec_yaml) Test YAML serialization for specs. YAML format preserves DAG information in the spec.
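`test_ordered_read_not_required_for_consistent_dag_hash` and the `reverse_all_dicts` helper below rest on hashing a canonical re-serialization, so key order on disk cannot affect the hash. A minimal illustration with json (not Spack's code):

```python
import hashlib
import json

def reverse_all_dicts(data):
    """Rebuild every dict with its keys in reversed insertion order,
    leaving non-dict values untouched."""
    if isinstance(data, dict):
        return {k: reverse_all_dicts(data[k]) for k in reversed(list(data))}
    if isinstance(data, list):
        return [reverse_all_dicts(v) for v in data]
    return data

def content_hash(data):
    """Hash a canonical (sorted-key) serialization, so the order in which
    a document was written or read is irrelevant to its hash."""
    canonical = json.dumps(data, sort_keys=True, separators=(",", ":"))
    return hashlib.sha1(canonical.encode("utf-8")).hexdigest()
```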
`spack.test.spec_yaml.``check_yaml_round_trip`(*spec*)[¶](#spack.test.spec_yaml.check_yaml_round_trip) `spack.test.spec_yaml.``reverse_all_dicts`(*data*)[¶](#spack.test.spec_yaml.reverse_all_dicts) Descend into data and reverse all the dictionaries `spack.test.spec_yaml.``test_ambiguous_version_spec`(*mock_packages*)[¶](#spack.test.spec_yaml.test_ambiguous_version_spec) `spack.test.spec_yaml.``test_concrete_spec`(*config*, *mock_packages*)[¶](#spack.test.spec_yaml.test_concrete_spec) `spack.test.spec_yaml.``test_external_spec`(*config*, *mock_packages*)[¶](#spack.test.spec_yaml.test_external_spec) `spack.test.spec_yaml.``test_normal_spec`(*mock_packages*)[¶](#spack.test.spec_yaml.test_normal_spec) `spack.test.spec_yaml.``test_ordered_read_not_required_for_consistent_dag_hash`(*config*, *mock_packages*)[¶](#spack.test.spec_yaml.test_ordered_read_not_required_for_consistent_dag_hash) Make sure ordered serialization isn’t required to preserve hashes. For consistent hashes, we require that YAML and json documents have their keys serialized in a deterministic order. However, we don’t want to require them to be serialized in order. This ensures that is not required. `spack.test.spec_yaml.``test_simple_spec`()[¶](#spack.test.spec_yaml.test_simple_spec) `spack.test.spec_yaml.``test_using_ordered_dict`(*mock_packages*)[¶](#spack.test.spec_yaml.test_using_ordered_dict) Checks that dicts are ordered Necessary to make sure that dag_hash is stable across python versions and processes. `spack.test.spec_yaml.``test_yaml_multivalue`()[¶](#spack.test.spec_yaml.test_yaml_multivalue) `spack.test.spec_yaml.``test_yaml_subdag`(*config*, *mock_packages*)[¶](#spack.test.spec_yaml.test_yaml_subdag) ##### spack.test.stage module[¶](#module-spack.test.stage) Test that the Stage class works correctly. 
*class* `spack.test.stage.``TestStage`[¶](#spack.test.stage.TestStage) Bases: `object` `pytestmark` *= [Mark(name='usefixtures', args=('mock_packages',), kwargs={})]*[¶](#spack.test.stage.TestStage.pytestmark) `stage_name` *= 'spack-test-stage'*[¶](#spack.test.stage.TestStage.stage_name) `test_expand_archive`(*mock_archive*)[¶](#spack.test.stage.TestStage.test_expand_archive) `test_fetch`(*mock_archive*)[¶](#spack.test.stage.TestStage.test_fetch) `test_keep_exceptions`(*mock_archive*)[¶](#spack.test.stage.TestStage.test_keep_exceptions) `test_keep_without_exceptions`(*mock_archive*)[¶](#spack.test.stage.TestStage.test_keep_without_exceptions) `test_no_keep_with_exceptions`(*mock_archive*)[¶](#spack.test.stage.TestStage.test_no_keep_with_exceptions) `test_no_keep_without_exceptions`(*mock_archive*)[¶](#spack.test.stage.TestStage.test_no_keep_without_exceptions) `test_no_search_if_default_succeeds`(*mock_archive*, *failing_search_fn*)[¶](#spack.test.stage.TestStage.test_no_search_if_default_succeeds) `test_no_search_mirror_only`(*failing_fetch_strategy*, *failing_search_fn*)[¶](#spack.test.stage.TestStage.test_no_search_mirror_only) `test_restage`(*mock_archive*)[¶](#spack.test.stage.TestStage.test_restage) `test_search_if_default_fails`(*failing_fetch_strategy*, *search_fn*)[¶](#spack.test.stage.TestStage.test_search_if_default_fails) `test_setup_and_destroy_name_with_tmp`(*mock_archive*)[¶](#spack.test.stage.TestStage.test_setup_and_destroy_name_with_tmp) `test_setup_and_destroy_name_without_tmp`(*mock_archive*)[¶](#spack.test.stage.TestStage.test_setup_and_destroy_name_without_tmp) `test_setup_and_destroy_no_name_with_tmp`(*mock_archive*)[¶](#spack.test.stage.TestStage.test_setup_and_destroy_no_name_with_tmp) `test_setup_and_destroy_no_name_without_tmp`(*mock_archive*)[¶](#spack.test.stage.TestStage.test_setup_and_destroy_no_name_without_tmp) `spack.test.stage.``check_destroy`(*stage*, *stage_name*)[¶](#spack.test.stage.check_destroy) Figure out whether a stage 
was destroyed correctly. `spack.test.stage.``check_expand_archive`(*stage*, *stage_name*, *mock_archive*)[¶](#spack.test.stage.check_expand_archive) `spack.test.stage.``check_fetch`(*stage*, *stage_name*)[¶](#spack.test.stage.check_fetch) `spack.test.stage.``check_setup`(*stage*, *stage_name*, *archive*)[¶](#spack.test.stage.check_setup) Figure out whether a stage was set up correctly. `spack.test.stage.``failing_fetch_strategy`()[¶](#spack.test.stage.failing_fetch_strategy) Returns a fetch strategy that fails. `spack.test.stage.``failing_search_fn`()[¶](#spack.test.stage.failing_search_fn) Returns a search function that fails! Always! `spack.test.stage.``get_stage_path`(*stage*, *stage_name*)[¶](#spack.test.stage.get_stage_path) Figure out where a stage should be living. This depends on whether it’s named. `spack.test.stage.``mock_archive`(*tmpdir*, *monkeypatch*, *mutable_config*)[¶](#spack.test.stage.mock_archive) Creates a mock archive with the structure expected by the tests `spack.test.stage.``search_fn`()[¶](#spack.test.stage.search_fn) Returns a search function that always succeeds. `spack.test.stage.``tmpdir_for_stage`(*mock_archive*, *mutable_config*)[¶](#spack.test.stage.tmpdir_for_stage) Uses a temporary directory for staging ##### spack.test.svn_fetch module[¶](#module-spack.test.svn_fetch) `spack.test.svn_fetch.``test_fetch`(*type_of_test*, *secure*, *mock_svn_repository*, *config*, *mutable_mock_packages*)[¶](#spack.test.svn_fetch.test_fetch) Tries to: 1. Fetch the repo using a fetch strategy constructed with supplied args (they depend on type_of_test). 2. Check if the test_file is in the checked out repository. 3. Assert that the repository is at the revision supplied. 4. Add and remove some files, then reset the repo, and ensure it’s all there again. 
##### spack.test.tengine module[¶](#module-spack.test.tengine) *class* `spack.test.tengine.``TestContext`[¶](#spack.test.tengine.TestContext) Bases: `object` *class* `A`[¶](#spack.test.tengine.TestContext.A) Bases: [`spack.tengine.Context`](index.html#spack.tengine.Context) `context_properties` *= ['foo']*[¶](#spack.test.tengine.TestContext.A.context_properties) `foo`[¶](#spack.test.tengine.TestContext.A.foo) *class* `B`[¶](#spack.test.tengine.TestContext.B) Bases: [`spack.tengine.Context`](index.html#spack.tengine.Context) `bar`[¶](#spack.test.tengine.TestContext.B.bar) `context_properties` *= ['bar']*[¶](#spack.test.tengine.TestContext.B.context_properties) *class* `C`[¶](#spack.test.tengine.TestContext.C) Bases: `spack.test.tengine.A`, `spack.test.tengine.B` `context_properties` *= ['foobar', 'foo', 'bar']*[¶](#spack.test.tengine.TestContext.C.context_properties) `foo`[¶](#spack.test.tengine.TestContext.C.foo) `foobar`[¶](#spack.test.tengine.TestContext.C.foobar) `test_to_dict`()[¶](#spack.test.tengine.TestContext.test_to_dict) Tests that all the context properties in a hierarchy are considered when building the context dictionary. 
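`test_to_dict` checks that context properties from the whole class hierarchy are collected. One possible way to get that effect (illustrative; `spack.tengine.Context` aggregates `context_properties` through its own machinery) is to walk the MRO:

```python
class Context:
    """Toy stand-in for a template context base class."""
    context_properties = []

    def to_dict(self):
        # Collect every property declared anywhere in the hierarchy,
        # nearest declaration first.
        seen, result = set(), {}
        for cls in type(self).__mro__:
            for name in getattr(cls, "context_properties", []):
                if name not in seen:
                    seen.add(name)
                    result[name] = getattr(self, name)
        return result

class A(Context):
    context_properties = ["foo"]
    foo = 1

class B(Context):
    context_properties = ["bar"]
    bar = 2

class C(A, B):
    context_properties = ["foobar"]
    foobar = 3
```

Here `C().to_dict()` contains foo, bar, and foobar, matching the aggregated property list shown for class C above.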
*class* `spack.test.tengine.``TestTengineEnvironment`[¶](#spack.test.tengine.TestTengineEnvironment) Bases: `object` `pytestmark` *= [Mark(name='usefixtures', args=('config',), kwargs={})]*[¶](#spack.test.tengine.TestTengineEnvironment.pytestmark) `test_template_retrieval`()[¶](#spack.test.tengine.TestTengineEnvironment.test_template_retrieval) Tests the template retrieval mechanism hooked into config files ##### spack.test.test_activations module[¶](#module-spack.test.test_activations) *class* `spack.test.test_activations.``FakeExtensionPackage`(*name*, *prefix*)[¶](#spack.test.test_activations.FakeExtensionPackage) Bases: [`spack.package.PackageViewMixin`](index.html#spack.package.PackageViewMixin) *class* `spack.test.test_activations.``FakePythonExtensionPackage`(*name*, *prefix*, *py_namespace*, *python_spec*)[¶](#spack.test.test_activations.FakePythonExtensionPackage) Bases: [`spack.test.test_activations.FakeExtensionPackage`](#spack.test.test_activations.FakeExtensionPackage) `add_files_to_view`(*view*, *merge_map*)[¶](#spack.test.test_activations.FakePythonExtensionPackage.add_files_to_view) Given a map of package files to destination paths in the view, add the files to the view. By default this adds all files. Alternative implementations may skip some files, for example if other packages linked into the view already include the file. `remove_files_from_view`(*view*, *merge_map*)[¶](#spack.test.test_activations.FakePythonExtensionPackage.remove_files_from_view) Given a map of package files to files currently linked in the view, remove the files from the view. The default implementation removes all files. Alternative implementations may not remove all files. For example if two packages include the same file, it should only be removed when both packages are removed. `view_file_conflicts`(*view*, *merge_map*)[¶](#spack.test.test_activations.FakePythonExtensionPackage.view_file_conflicts) Report any files which prevent adding this package to the view. 
The default implementation looks for any files which already exist. Alternative implementations may allow some of the files to exist in the view (in this case they would be omitted from the results). *class* `spack.test.test_activations.``FakeSpec`(*package*)[¶](#spack.test.test_activations.FakeSpec) Bases: `object` `dag_hash`()[¶](#spack.test.test_activations.FakeSpec.dag_hash) `spack.test.test_activations.``create_dir_structure`(*tmpdir*, *dir_structure*)[¶](#spack.test.test_activations.create_dir_structure) `spack.test.test_activations.``namespace_extensions`(*tmpdir*)[¶](#spack.test.test_activations.namespace_extensions) `spack.test.test_activations.``perl_and_extension_dirs`(*tmpdir*)[¶](#spack.test.test_activations.perl_and_extension_dirs) `spack.test.test_activations.``python_and_extension_dirs`(*tmpdir*)[¶](#spack.test.test_activations.python_and_extension_dirs) `spack.test.test_activations.``test_perl_activation`(*tmpdir*)[¶](#spack.test.test_activations.test_perl_activation) `spack.test.test_activations.``test_perl_activation_view`(*tmpdir*, *perl_and_extension_dirs*)[¶](#spack.test.test_activations.test_perl_activation_view) `spack.test.test_activations.``test_perl_activation_with_files`(*tmpdir*, *perl_and_extension_dirs*)[¶](#spack.test.test_activations.test_perl_activation_with_files) `spack.test.test_activations.``test_python_activation_view`(*tmpdir*, *python_and_extension_dirs*)[¶](#spack.test.test_activations.test_python_activation_view) `spack.test.test_activations.``test_python_activation_with_files`(*tmpdir*, *python_and_extension_dirs*)[¶](#spack.test.test_activations.test_python_activation_with_files) `spack.test.test_activations.``test_python_ignore_namespace_init_conflict`(*tmpdir*, *namespace_extensions*)[¶](#spack.test.test_activations.test_python_ignore_namespace_init_conflict) Test the view update logic in PythonPackage ignores conflicting instances of __init__ for packages which are in the same namespace. 
`spack.test.test_activations.``test_python_keep_namespace_init`(*tmpdir*, *namespace_extensions*)[¶](#spack.test.test_activations.test_python_keep_namespace_init) Test the view update logic in PythonPackage keeps the namespace __init__ file as long as one package in the namespace still exists. `spack.test.test_activations.``test_python_namespace_conflict`(*tmpdir*, *namespace_extensions*)[¶](#spack.test.test_activations.test_python_namespace_conflict) Test the view update logic in PythonPackage reports an error when two python extensions with different namespaces have a conflicting __init__ file. ##### spack.test.url_fetch module[¶](#module-spack.test.url_fetch) `spack.test.url_fetch.``checksum_type`(*request*)[¶](#spack.test.url_fetch.checksum_type) `spack.test.url_fetch.``test_fetch`(*mock_archive*, *secure*, *checksum_type*, *config*, *mutable_mock_packages*)[¶](#spack.test.url_fetch.test_fetch) Fetch an archive and make sure we can checksum it. `spack.test.url_fetch.``test_from_list_url`(*mock_packages*, *config*)[¶](#spack.test.url_fetch.test_from_list_url) `spack.test.url_fetch.``test_hash_detection`(*checksum_type*)[¶](#spack.test.url_fetch.test_hash_detection) `spack.test.url_fetch.``test_unknown_hash`(*checksum_type*)[¶](#spack.test.url_fetch.test_unknown_hash) ##### spack.test.url_parse module[¶](#module-spack.test.url_parse) Tests Spack’s ability to parse the name and version of a package based on its URL. `spack.test.url_parse.``test_no_version`(*not_detectable_url*)[¶](#spack.test.url_parse.test_no_version) `spack.test.url_parse.``test_url_parse_name_and_version`(*name*, *version*, *url*)[¶](#spack.test.url_parse.test_url_parse_name_and_version) `spack.test.url_parse.``test_url_parse_offset`(*name*, *noffset*, *ver*, *voffset*, *path*)[¶](#spack.test.url_parse.test_url_parse_offset) Tests that the name, version and offsets are computed correctly. 
| Parameters: | * **name** (*str*) – expected name * **noffset** (*int*) – name offset * **ver** (*str*) – expected version * **voffset** (*int*) – version offset * **path** (*str*) – url to be parsed | `spack.test.url_parse.``test_url_strip_name_suffixes`(*url*, *version*, *expected*)[¶](#spack.test.url_parse.test_url_strip_name_suffixes) `spack.test.url_parse.``test_url_strip_version_suffixes`(*url*, *expected*)[¶](#spack.test.url_parse.test_url_strip_version_suffixes) ##### spack.test.url_substitution module[¶](#module-spack.test.url_substitution) Tests Spack’s ability to substitute a different version into a URL. `spack.test.url_substitution.``test_url_substitution`(*base_url*, *version*, *expected*)[¶](#spack.test.url_substitution.test_url_substitution) ##### spack.test.variant module[¶](#module-spack.test.variant) *class* `spack.test.variant.``TestBoolValuedVariant`[¶](#spack.test.variant.TestBoolValuedVariant) Bases: `object` `test_compatible`()[¶](#spack.test.variant.TestBoolValuedVariant.test_compatible) `test_constrain`()[¶](#spack.test.variant.TestBoolValuedVariant.test_constrain) `test_initialization`()[¶](#spack.test.variant.TestBoolValuedVariant.test_initialization) `test_satisfies`()[¶](#spack.test.variant.TestBoolValuedVariant.test_satisfies) `test_yaml_entry`()[¶](#spack.test.variant.TestBoolValuedVariant.test_yaml_entry) *class* `spack.test.variant.``TestMultiValuedVariant`[¶](#spack.test.variant.TestMultiValuedVariant) Bases: `object` `test_compatible`()[¶](#spack.test.variant.TestMultiValuedVariant.test_compatible) `test_constrain`()[¶](#spack.test.variant.TestMultiValuedVariant.test_constrain) `test_initialization`()[¶](#spack.test.variant.TestMultiValuedVariant.test_initialization) `test_satisfies`()[¶](#spack.test.variant.TestMultiValuedVariant.test_satisfies) `test_yaml_entry`()[¶](#spack.test.variant.TestMultiValuedVariant.test_yaml_entry) *class* `spack.test.variant.``TestSingleValuedVariant`[¶](#spack.test.variant.TestSingleValuedVariant) 
Bases: `object` `test_compatible`()[¶](#spack.test.variant.TestSingleValuedVariant.test_compatible) `test_constrain`()[¶](#spack.test.variant.TestSingleValuedVariant.test_constrain) `test_initialization`()[¶](#spack.test.variant.TestSingleValuedVariant.test_initialization) `test_satisfies`()[¶](#spack.test.variant.TestSingleValuedVariant.test_satisfies) `test_yaml_entry`()[¶](#spack.test.variant.TestSingleValuedVariant.test_yaml_entry) *class* `spack.test.variant.``TestVariant`[¶](#spack.test.variant.TestVariant) Bases: `object` `test_callable_validator`()[¶](#spack.test.variant.TestVariant.test_callable_validator) `test_representation`()[¶](#spack.test.variant.TestVariant.test_representation) `test_validation`()[¶](#spack.test.variant.TestVariant.test_validation) *class* `spack.test.variant.``TestVariantMapTest`[¶](#spack.test.variant.TestVariantMapTest) Bases: `object` `test_copy`()[¶](#spack.test.variant.TestVariantMapTest.test_copy) `test_invalid_values`()[¶](#spack.test.variant.TestVariantMapTest.test_invalid_values) `test_satisfies_and_constrain`()[¶](#spack.test.variant.TestVariantMapTest.test_satisfies_and_constrain) `test_set_item`()[¶](#spack.test.variant.TestVariantMapTest.test_set_item) `test_str`()[¶](#spack.test.variant.TestVariantMapTest.test_str) `test_substitute`()[¶](#spack.test.variant.TestVariantMapTest.test_substitute) `spack.test.variant.``test_from_node_dict`()[¶](#spack.test.variant.test_from_node_dict) ##### spack.test.versions module[¶](#module-spack.test.versions) These version tests were taken from the RPM source code. We try to maintain compatibility with RPM’s version semantics where it makes sense. `spack.test.versions.``assert_canonical`(*canonical_list*, *version_list*)[¶](#spack.test.versions.assert_canonical) Asserts that a redundant list is reduced to canonical form. `spack.test.versions.``assert_does_not_satisfy`(*v1*, *v2*)[¶](#spack.test.versions.assert_does_not_satisfy) Asserts that ‘v1’ does not satisfy ‘v2’. 
`spack.test.versions.``assert_in`(*needle*, *haystack*)[¶](#spack.test.versions.assert_in) Asserts that ‘needle’ is in ‘haystack’. `spack.test.versions.``assert_no_overlap`(*v1*, *v2*)[¶](#spack.test.versions.assert_no_overlap) Asserts that two version ranges do not overlap. `spack.test.versions.``assert_not_in`(*needle*, *haystack*)[¶](#spack.test.versions.assert_not_in) Asserts that ‘needle’ is not in ‘haystack’. `spack.test.versions.``assert_overlaps`(*v1*, *v2*)[¶](#spack.test.versions.assert_overlaps) Asserts that two version ranges overlap. `spack.test.versions.``assert_satisfies`(*v1*, *v2*)[¶](#spack.test.versions.assert_satisfies) Asserts that ‘v1’ satisfies ‘v2’. `spack.test.versions.``assert_ver_eq`(*a*, *b*)[¶](#spack.test.versions.assert_ver_eq) Asserts the results of comparisons when ‘a’ is equal to ‘b’. `spack.test.versions.``assert_ver_gt`(*a*, *b*)[¶](#spack.test.versions.assert_ver_gt) Asserts the results of comparisons when ‘a’ is greater than ‘b’. `spack.test.versions.``assert_ver_lt`(*a*, *b*)[¶](#spack.test.versions.assert_ver_lt) Asserts the results of comparisons when ‘a’ is less than ‘b’. `spack.test.versions.``check_intersection`(*expected*, *a*, *b*)[¶](#spack.test.versions.check_intersection) Asserts that ‘a’ intersect ‘b’ == ‘expected’. `spack.test.versions.``check_union`(*expected*, *a*, *b*)[¶](#spack.test.versions.check_union) Asserts that ‘a’ union ‘b’ == ‘expected’.
`spack.test.versions.``test_alpha`()[¶](#spack.test.versions.test_alpha) `spack.test.versions.``test_alpha_beta`()[¶](#spack.test.versions.test_alpha_beta) `spack.test.versions.``test_alpha_with_dots`()[¶](#spack.test.versions.test_alpha_with_dots) `spack.test.versions.``test_basic_version_satisfaction`()[¶](#spack.test.versions.test_basic_version_satisfaction) `spack.test.versions.``test_basic_version_satisfaction_in_lists`()[¶](#spack.test.versions.test_basic_version_satisfaction_in_lists) `spack.test.versions.``test_canonicalize_list`()[¶](#spack.test.versions.test_canonicalize_list) `spack.test.versions.``test_close_numbers`()[¶](#spack.test.versions.test_close_numbers) `spack.test.versions.``test_contains`()[¶](#spack.test.versions.test_contains) `spack.test.versions.``test_date_stamps`()[¶](#spack.test.versions.test_date_stamps) `spack.test.versions.``test_double_alpha`()[¶](#spack.test.versions.test_double_alpha) `spack.test.versions.``test_formatted_strings`()[¶](#spack.test.versions.test_formatted_strings) `spack.test.versions.``test_get_item`()[¶](#spack.test.versions.test_get_item) `spack.test.versions.``test_in_list`()[¶](#spack.test.versions.test_in_list) `spack.test.versions.``test_intersect_with_containment`()[¶](#spack.test.versions.test_intersect_with_containment) `spack.test.versions.``test_intersection`()[¶](#spack.test.versions.test_intersection) `spack.test.versions.``test_len`()[¶](#spack.test.versions.test_len) `spack.test.versions.``test_lists_overlap`()[¶](#spack.test.versions.test_lists_overlap) `spack.test.versions.``test_num_alpha_with_no_separator`()[¶](#spack.test.versions.test_num_alpha_with_no_separator) `spack.test.versions.``test_nums_and_patch`()[¶](#spack.test.versions.test_nums_and_patch) `spack.test.versions.``test_overlap_with_containment`()[¶](#spack.test.versions.test_overlap_with_containment) `spack.test.versions.``test_padded_numbers`()[¶](#spack.test.versions.test_padded_numbers) 
`spack.test.versions.``test_patch`()[¶](#spack.test.versions.test_patch) `spack.test.versions.``test_ranges_overlap`()[¶](#spack.test.versions.test_ranges_overlap) `spack.test.versions.``test_rc_versions`()[¶](#spack.test.versions.test_rc_versions) `spack.test.versions.``test_repr_and_str`()[¶](#spack.test.versions.test_repr_and_str) `spack.test.versions.``test_rpm_oddities`()[¶](#spack.test.versions.test_rpm_oddities) `spack.test.versions.``test_satisfaction_with_lists`()[¶](#spack.test.versions.test_satisfaction_with_lists) `spack.test.versions.``test_three_segments`()[¶](#spack.test.versions.test_three_segments) `spack.test.versions.``test_two_segments`()[¶](#spack.test.versions.test_two_segments) `spack.test.versions.``test_underscores`()[¶](#spack.test.versions.test_underscores) `spack.test.versions.``test_union_with_containment`()[¶](#spack.test.versions.test_union_with_containment) `spack.test.versions.``test_up_to`()[¶](#spack.test.versions.test_up_to) `spack.test.versions.``test_version_range_satisfaction`()[¶](#spack.test.versions.test_version_range_satisfaction) `spack.test.versions.``test_version_range_satisfaction_in_lists`()[¶](#spack.test.versions.test_version_range_satisfaction_in_lists) `spack.test.versions.``test_version_ranges`()[¶](#spack.test.versions.test_version_ranges) ##### spack.test.views module[¶](#module-spack.test.views) `spack.test.views.``test_global_activation`(*install_mockery*, *mock_fetch*)[¶](#spack.test.views.test_global_activation) This test ensures that views which are maintained inside of an extendee package’s prefix are maintained as expected and are compatible with global activations prior to #7152. ##### spack.test.web module[¶](#module-spack.test.web) Tests for web.py. 
`spack.test.web.``test_find_exotic_versions_of_archive_2`()[¶](#spack.test.web.test_find_exotic_versions_of_archive_2) `spack.test.web.``test_find_exotic_versions_of_archive_3`()[¶](#spack.test.web.test_find_exotic_versions_of_archive_3) `spack.test.web.``test_find_versions_of_archive_0`()[¶](#spack.test.web.test_find_versions_of_archive_0) `spack.test.web.``test_find_versions_of_archive_1`()[¶](#spack.test.web.test_find_versions_of_archive_1) `spack.test.web.``test_find_versions_of_archive_2`()[¶](#spack.test.web.test_find_versions_of_archive_2) `spack.test.web.``test_find_versions_of_archive_3`()[¶](#spack.test.web.test_find_versions_of_archive_3) `spack.test.web.``test_spider_0`()[¶](#spack.test.web.test_spider_0) `spack.test.web.``test_spider_1`()[¶](#spack.test.web.test_spider_1) `spack.test.web.``test_spider_2`()[¶](#spack.test.web.test_spider_2) `spack.test.web.``test_spider_3`()[¶](#spack.test.web.test_spider_3) ##### Module contents[¶](#module-spack.test) #### spack.util package[¶](#spack-util-package) ##### Subpackages[¶](#subpackages) ###### spack.util.imp package[¶](#spack-util-imp-package) ####### Submodules[¶](#submodules) ####### spack.util.imp.imp_importer module[¶](#module-spack.util.imp.imp_importer) Implementation of Spack imports that uses imp underneath. `imp` is deprecated in newer versions of Python, but is the only option in Python 2.6. `spack.util.imp.imp_importer.``import_lock`()[¶](#spack.util.imp.imp_importer.import_lock) `spack.util.imp.imp_importer.``load_source`(*full_name*, *path*, *prepend=None*)[¶](#spack.util.imp.imp_importer.load_source) Import a Python module from source. Load the source file and add it to `sys.modules`. 
| Parameters: | * **full_name** (*str*) – full name of the module to be loaded * **path** (*str*) – path to the file that should be loaded * **prepend** (*str**,* *optional*) – some optional code to prepend to the loaded module; e.g., can be used to inject import statements | | Returns: | the loaded module | | Return type: | (ModuleType) | `spack.util.imp.imp_importer.``prepend_open`(*f*, **args*, ***kwargs*)[¶](#spack.util.imp.imp_importer.prepend_open) Open a file for reading, but with some text prepended. Arguments are the same as for `open()`, with one keyword argument, `text`, specifying the text to prepend. We have to write and read a tempfile for the `imp`-based importer, as the `file` argument to `imp.load_source()` requires a low-level file handle. See the `importlib`-based importer for a faster way to do this in later versions of Python. ####### spack.util.imp.importlib_importer module[¶](#module-spack.util.imp.importlib_importer) Implementation of Spack imports that uses importlib underneath. `importlib` is only fully implemented in Python 3. *class* `spack.util.imp.importlib_importer.``PrependFileLoader`(*full_name*, *path*, *prepend=None*)[¶](#spack.util.imp.importlib_importer.PrependFileLoader) Bases: `_frozen_importlib_external.SourceFileLoader` `get_data`(*path*)[¶](#spack.util.imp.importlib_importer.PrependFileLoader.get_data) Return the data from path as raw bytes. `spack.util.imp.importlib_importer.``load_source`(*full_name*, *path*, *prepend=None*)[¶](#spack.util.imp.importlib_importer.load_source) Import a Python module from source. Load the source file and add it to `sys.modules`.
| Parameters: | * **full_name** (*str*) – full name of the module to be loaded * **path** (*str*) – path to the file that should be loaded * **prepend** (*str**,* *optional*) – some optional code to prepend to the loaded module; e.g., can be used to inject import statements | | Returns: | the loaded module | | Return type: | (ModuleType) | ####### Module contents[¶](#module-spack.util.imp) Consolidated module for all imports done by Spack. Many parts of Spack have to import Python code. This utility package wraps Spack’s interface with Python’s import system. We do this because Python’s import system is confusing and changes from Python version to Python version, and we should be able to adapt our approach to the underlying implementation. Currently, this uses `importlib.machinery` where available and `imp` when `importlib` is not completely usable. ##### Submodules[¶](#submodules) ##### spack.util.compression module[¶](#module-spack.util.compression) `spack.util.compression.``allowed_archive`(*path*)[¶](#spack.util.compression.allowed_archive) `spack.util.compression.``decompressor_for`(*path*, *extension=None*)[¶](#spack.util.compression.decompressor_for) Get the appropriate decompressor for a path. `spack.util.compression.``extension`(*path*)[¶](#spack.util.compression.extension) Get the archive extension for a path. `spack.util.compression.``strip_extension`(*path*)[¶](#spack.util.compression.strip_extension) Get the part of a path that does not include its compressed type extension. ##### spack.util.crypto module[¶](#module-spack.util.crypto) *class* `spack.util.crypto.``Checker`(*hexdigest*, ***kwargs*)[¶](#spack.util.crypto.Checker) Bases: `object` A checker checks files against one particular hex digest. It will automatically determine what hashing algorithm to use based on the length of the digest it’s initialized with. e.g., if the digest is 32 hex characters long this will use md5. Example: know your tarball should hash to ‘abc123’.
You want to check files against this. You would use this class like so: ``` hexdigest = 'abc123' checker = Checker(hexdigest) success = checker.check('downloaded.tar.gz') ``` After the call to check, the actual checksum is available in checker.sum, in case it’s needed for error output. You can trade read performance and memory usage by adjusting the block_size optional arg. By default it’s a 1MB (2**20 bytes) buffer. `check`(*filename*)[¶](#spack.util.crypto.Checker.check) Read the file with the specified name and check its checksum against self.hexdigest. Return True if they match, False otherwise. Actual checksum is stored in self.sum. `hash_name`[¶](#spack.util.crypto.Checker.hash_name) Get the name of the hash function this Checker is using. *class* `spack.util.crypto.``DeprecatedHash`(*hash_alg*, *alert_fn*, *disable_security_check*)[¶](#spack.util.crypto.DeprecatedHash) Bases: `object` `spack.util.crypto.``bit_length`(*num*)[¶](#spack.util.crypto.bit_length) Number of bits required to represent an integer in binary. `spack.util.crypto.``checksum`(*hashlib_algo*, *filename*, ***kwargs*)[¶](#spack.util.crypto.checksum) Returns a hex digest of the filename generated using an algorithm from hashlib. `spack.util.crypto.``hash_algo_for_digest`(*hexdigest*)[¶](#spack.util.crypto.hash_algo_for_digest) Gets name of the hash algorithm for a hex digest. `spack.util.crypto.``hash_fun_for_algo`(*algo*)[¶](#spack.util.crypto.hash_fun_for_algo) Get a function that can perform the specified hash algorithm. `spack.util.crypto.``hash_fun_for_digest`(*hexdigest*)[¶](#spack.util.crypto.hash_fun_for_digest) Gets a hash function corresponding to a hex digest. 
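The digest-length dispatch and streaming check described above can be sketched with `hashlib` alone. This is an illustrative re-implementation, not the Spack code: `DIGEST_BYTES` mirrors the `spack.util.crypto.hashes` table, and `check_file` is a hypothetical stand-in for `Checker.check` (the real class also stores the computed checksum in `checker.sum`).

```python
import hashlib

# Digest sizes in bytes, mirroring the spack.util.crypto.hashes table.
DIGEST_BYTES = {'md5': 16, 'sha1': 20, 'sha224': 28, 'sha256': 32,
                'sha384': 48, 'sha512': 64}

def hash_algo_for_digest(hexdigest):
    """Pick the hash algorithm by digest length (two hex chars per byte)."""
    nbytes = len(hexdigest) // 2
    for algo, size in DIGEST_BYTES.items():
        if size == nbytes:
            return algo
    raise ValueError('no known hash algorithm for digest of length %d'
                     % len(hexdigest))

def check_file(filename, hexdigest, block_size=2**20):
    """Sketch of Checker.check(): stream the file through the matching hash
    in block_size chunks (1MB by default, as in the docstring above)."""
    hasher = hashlib.new(hash_algo_for_digest(hexdigest))
    with open(filename, 'rb') as f:
        for block in iter(lambda: f.read(block_size), b''):
            hasher.update(block)
    return hasher.hexdigest() == hexdigest
```

A 32-character digest dispatches to md5 and a 64-character digest to sha256, which is exactly the length-based selection the class documents.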
`spack.util.crypto.``hashes` *= {'md5': 16, 'sha1': 20, 'sha224': 28, 'sha256': 32, 'sha384': 48, 'sha512': 64}*[¶](#spack.util.crypto.hashes) Set of hash algorithms that Spack can use, mapped to digest size in bytes `spack.util.crypto.``prefix_bits`(*byte_array*, *bits*)[¶](#spack.util.crypto.prefix_bits) Return the first <bits> bits of a byte array as an integer. ##### spack.util.debug module[¶](#module-spack.util.debug) Debug signal handler: prints a stack trace and enters interpreter. `register_interrupt_handler()` enables a ctrl-C handler that prints a stack trace and drops the user into an interpreter. `spack.util.debug.``debug_handler`(*sig*, *frame*)[¶](#spack.util.debug.debug_handler) Interrupt running process, and provide a python prompt for interactive debugging. `spack.util.debug.``register_interrupt_handler`()[¶](#spack.util.debug.register_interrupt_handler) Print traceback and enter an interpreter on Ctrl-C ##### spack.util.editor module[¶](#module-spack.util.editor) Module for finding the user’s preferred text editor. Defines one variable: `editor`, which is a `spack.util.executable.Executable` object that can be called to invoke the editor. If no `editor` is found, an `EnvironmentError` is raised when `editor` is invoked. ##### spack.util.environment module[¶](#module-spack.util.environment) Utilities for setting and modifying environment variables. 
*class* `spack.util.environment.``AppendFlagsEnv`(*name*, *value*, ***kwargs*)[¶](#spack.util.environment.AppendFlagsEnv) Bases: [`spack.util.environment.NameValueModifier`](#spack.util.environment.NameValueModifier) `execute`()[¶](#spack.util.environment.AppendFlagsEnv.execute) *class* `spack.util.environment.``AppendPath`(*name*, *value*, ***kwargs*)[¶](#spack.util.environment.AppendPath) Bases: [`spack.util.environment.NameValueModifier`](#spack.util.environment.NameValueModifier) `execute`()[¶](#spack.util.environment.AppendPath.execute) *class* `spack.util.environment.``EnvironmentModifications`(*other=None*)[¶](#spack.util.environment.EnvironmentModifications) Bases: `object` Keeps track of requests to modify the current environment. Each call to a method to modify the environment stores the extra information on the caller in the request: > * ‘filename’ : filename of the module where the caller is defined > * ‘lineno’: line number where the request occurred > * ‘context’ : line of code that issued the request that failed `append_flags`(*name*, *value*, *sep=' '*, ***kwargs*)[¶](#spack.util.environment.EnvironmentModifications.append_flags) Stores in the current object a request to append to an env variable | Parameters: | * **name** – name of the environment variable to be appended to * **value** – value to append to the environment variable | Appends with spaces separating different additions to the variable `append_path`(*name*, *path*, ***kwargs*)[¶](#spack.util.environment.EnvironmentModifications.append_path) Stores a request to append a path to a path list. | Parameters: | * **name** – name of the path list in the environment * **path** – path to be appended | `apply_modifications`()[¶](#spack.util.environment.EnvironmentModifications.apply_modifications) Applies the modifications and clears the list. 
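The deferred-modification pattern above — each method stores a request object whose `execute()` applies it later, and `apply_modifications()` runs them in order and clears the list — can be sketched minimally. `MiniEnvMods` and the two modifier classes here are illustrative stand-ins, not the Spack classes; the real ones also record caller context (filename, lineno) and accept extra kwargs.

```python
import os

class SetEnv:
    """Sketch of a NameValueModifier: store a request now, execute later."""
    def __init__(self, name, value):
        self.name, self.value = name, value
    def execute(self):
        os.environ[self.name] = self.value

class PrependPath(SetEnv):
    def execute(self):
        existing = os.environ.get(self.name, '')
        parts = [self.value] + ([existing] if existing else [])
        os.environ[self.name] = os.pathsep.join(parts)

class MiniEnvMods:
    """Minimal sketch of the deferred-modification pattern."""
    def __init__(self):
        self.env_modifications = []
    def set(self, name, value):
        self.env_modifications.append(SetEnv(name, value))
    def prepend_path(self, name, path):
        self.env_modifications.append(PrependPath(name, path))
    def apply_modifications(self):
        # Apply requests in the order they were recorded, then clear.
        for modification in self.env_modifications:
            modification.execute()
        self.env_modifications = []
```

The benefit of recording requests instead of mutating `os.environ` directly is that the whole batch can be inspected, validated, or grouped by variable name (as `group_by_name` does) before anything is applied.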
`clear`()[¶](#spack.util.environment.EnvironmentModifications.clear) Clears the current list of modifications `extend`(*other*)[¶](#spack.util.environment.EnvironmentModifications.extend) *static* `from_sourcing_file`(**args*, ***kwargs*)[¶](#spack.util.environment.EnvironmentModifications.from_sourcing_file) Returns modifications that would be made by sourcing a file. | Parameters: | * **filename** (*str*) – The file to source * ***args** (*list of str*) – Arguments to pass on the command line | | Keyword Arguments: | | | * **shell** (*str*) – The shell to use (default: `bash`) * **shell_options** (*str*) – Options passed to the shell (default: `-c`) * **source_command** (*str*) – The command to run (default: `source`) * **suppress_output** (*str*) – Redirect used to suppress output of command (default: `&> /dev/null`) * **concatenate_on_success** (*str*) – Operator used to execute a command only when the previous command succeeds (default: `&&`) * **blacklist** (*[**str* *or* *re**]*) – Ignore any modifications of these variables (default: []) * **whitelist** (*[**str* *or* *re**]*) – Always respect modifications of these variables (default: []). Has precedence over blacklist. * **clean** (*bool*) – In addition to removing empty entries, also remove duplicate entries (default: False). | | Returns: | an object that, if executed, has the same effect on the environment as sourcing the file | | Return type: | EnvironmentModifications | `group_by_name`()[¶](#spack.util.environment.EnvironmentModifications.group_by_name) Returns a dict of the modifications grouped by variable name. | Returns: | dict mapping the environment variable name to the modifications to be done on it | `prepend_path`(*name*, *path*, ***kwargs*)[¶](#spack.util.environment.EnvironmentModifications.prepend_path) Same as append_path, but the path is pre-pended. 
| Parameters: | * **name** – name of the path list in the environment * **path** – path to be pre-pended | `remove_path`(*name*, *path*, ***kwargs*)[¶](#spack.util.environment.EnvironmentModifications.remove_path) Stores a request to remove a path from a path list. | Parameters: | * **name** – name of the path list in the environment * **path** – path to be removed | `set`(*name*, *value*, ***kwargs*)[¶](#spack.util.environment.EnvironmentModifications.set) Stores a request to set an environment variable. | Parameters: | * **name** – name of the environment variable to be set * **value** – value of the environment variable | `set_path`(*name*, *elements*, ***kwargs*)[¶](#spack.util.environment.EnvironmentModifications.set_path) Stores a request to set a path generated from a list. | Parameters: | * **name** – name of the environment variable to be set. * **elements** – elements of the path to set. | `unset`(*name*, ***kwargs*)[¶](#spack.util.environment.EnvironmentModifications.unset) Stores a request to unset an environment variable.
| Parameters: | **name** – name of the environment variable to be set | *class* `spack.util.environment.``NameModifier`(*name*, ***kwargs*)[¶](#spack.util.environment.NameModifier) Bases: `object` `update_args`(***kwargs*)[¶](#spack.util.environment.NameModifier.update_args) *class* `spack.util.environment.``NameValueModifier`(*name*, *value*, ***kwargs*)[¶](#spack.util.environment.NameValueModifier) Bases: `object` `update_args`(***kwargs*)[¶](#spack.util.environment.NameValueModifier.update_args) *class* `spack.util.environment.``PrependPath`(*name*, *value*, ***kwargs*)[¶](#spack.util.environment.PrependPath) Bases: [`spack.util.environment.NameValueModifier`](#spack.util.environment.NameValueModifier) `execute`()[¶](#spack.util.environment.PrependPath.execute) *class* `spack.util.environment.``RemovePath`(*name*, *value*, ***kwargs*)[¶](#spack.util.environment.RemovePath) Bases: [`spack.util.environment.NameValueModifier`](#spack.util.environment.NameValueModifier) `execute`()[¶](#spack.util.environment.RemovePath.execute) *class* `spack.util.environment.``SetEnv`(*name*, *value*, ***kwargs*)[¶](#spack.util.environment.SetEnv) Bases: [`spack.util.environment.NameValueModifier`](#spack.util.environment.NameValueModifier) `execute`()[¶](#spack.util.environment.SetEnv.execute) *class* `spack.util.environment.``SetPath`(*name*, *value*, ***kwargs*)[¶](#spack.util.environment.SetPath) Bases: [`spack.util.environment.NameValueModifier`](#spack.util.environment.NameValueModifier) `execute`()[¶](#spack.util.environment.SetPath.execute) *class* `spack.util.environment.``UnsetEnv`(*name*, ***kwargs*)[¶](#spack.util.environment.UnsetEnv) Bases: [`spack.util.environment.NameModifier`](#spack.util.environment.NameModifier) `execute`()[¶](#spack.util.environment.UnsetEnv.execute) `spack.util.environment.``concatenate_paths`(*paths*, *separator=':'*)[¶](#spack.util.environment.concatenate_paths) Concatenates an iterable of paths into a string of paths separated by separator, 
defaulting to colon. | Parameters: | * **paths** – iterable of paths * **separator** – the separator to use, default ‘:’ | | Returns: | string | `spack.util.environment.``dump_environment`(*path*)[¶](#spack.util.environment.dump_environment) Dump the current environment out to a file. `spack.util.environment.``env_flag`(*name*)[¶](#spack.util.environment.env_flag) `spack.util.environment.``filter_environment_blacklist`(*env*, *variables*)[¶](#spack.util.environment.filter_environment_blacklist) Generator that filters out any change to environment variables present in the input list. | Parameters: | * **env** – list of environment modifications * **variables** – list of variable names to be filtered | | Returns: | items in env if they are not in variables | `spack.util.environment.``filter_system_paths`(*paths*)[¶](#spack.util.environment.filter_system_paths) `spack.util.environment.``get_path`(*name*)[¶](#spack.util.environment.get_path) `spack.util.environment.``inspect_path`(*root*, *inspections*, *exclude=None*)[¶](#spack.util.environment.inspect_path) Inspects `root` to search for the subdirectories in `inspections`. Adds every path found to a list of prepend-path commands and returns it. | Parameters: | * **root** (*str*) – absolute path where to search for subdirectories * **inspections** (*dict*) – maps relative paths to a list of environment variables that will be modified if the path exists. The modifications are not performed immediately, but stored in a command object that is returned to client * **exclude** (*callable*) – optional callable. If present it must accept an absolute path and return True if it should be excluded from the inspection | Examples: The following lines execute an inspection in `/usr` to search for `/usr/include` and `/usr/lib64`. If found we want to prepend `/usr/include` to `CPATH` and `/usr/lib64` to `MY_LIB64_PATH`. 
> ``` > # Set up the dictionary containing the inspection > inspections = { > 'include': ['CPATH'], > 'lib64': ['MY_LIB64_PATH'] > } > # Get back the list of command needed to modify the environment > env = inspect_path('/usr', inspections) > # Eventually execute the commands > env.apply_modifications() > ``` | Returns: | instance of EnvironmentModifications containing the requested modifications | `spack.util.environment.``is_system_path`(*path*)[¶](#spack.util.environment.is_system_path) Predicate that given a path returns True if it is a system path, False otherwise. | Parameters: | **path** (*str*) – path to a directory | | Returns: | True or False | `spack.util.environment.``path_put_first`(*var_name*, *directories*)[¶](#spack.util.environment.path_put_first) Puts the provided directories first in the path, adding them if they’re not already there. `spack.util.environment.``path_set`(*var_name*, *directories*)[¶](#spack.util.environment.path_set) `spack.util.environment.``preserve_environment`(**variables*)[¶](#spack.util.environment.preserve_environment) Ensures that the value of the environment variables passed as arguments is the same before entering to the context manager and after exiting it. Variables that are unset before entering the context manager will be explicitly unset on exit. | Parameters: | **variables** (*list of str*) – list of environment variables to be preserved | `spack.util.environment.``set_env`(***kwargs*)[¶](#spack.util.environment.set_env) Temporarily sets and restores environment variables. Variables can be set as keyword arguments to this function. `spack.util.environment.``set_or_unset_not_first`(*variable*, *changes*, *errstream*)[¶](#spack.util.environment.set_or_unset_not_first) Check if we are going to set or unset something after other modifications have already been requested. 
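The save-and-restore behavior documented for `preserve_environment` — including explicitly unsetting on exit any variable that was unset on entry — can be sketched as a small context manager. This is an illustrative re-implementation under those stated semantics, not the Spack code:

```python
import os
from contextlib import contextmanager

@contextmanager
def preserve_environment(*variables):
    """Save the listed variables on entry and restore them on exit.
    A variable that was unset before entry is unset again afterwards."""
    saved = {name: os.environ.get(name) for name in variables}
    try:
        yield
    finally:
        for name, value in saved.items():
            if value is None:
                os.environ.pop(name, None)
            else:
                os.environ[name] = value
```

Using a `finally` block means the environment is restored even if the body raises, which is the property that makes this safe to wrap around code that mutates the environment freely.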
`spack.util.environment.``validate`(*env*, *errstream*)[¶](#spack.util.environment.validate) Validates the environment modifications to check for the presence of suspicious patterns. Emits a warning for everything that was found. Current checks: - set or unset variables after other changes on the same variable | Parameters: | **env** – list of environment modifications | ##### spack.util.executable module[¶](#module-spack.util.executable) *class* `spack.util.executable.``Executable`(*name*)[¶](#spack.util.executable.Executable) Bases: `object` Class representing a program that can be run on the command line. `add_default_arg`(*arg*)[¶](#spack.util.executable.Executable.add_default_arg) Add a default argument to the command. `add_default_env`(*key*, *value*)[¶](#spack.util.executable.Executable.add_default_env) Set an environment variable when the command is run. | Parameters: | * **key** – The environment variable to set * **value** – The value to set it to | `command`[¶](#spack.util.executable.Executable.command) The command-line string. | Returns: | The executable and default arguments | | Return type: | str | `name`[¶](#spack.util.executable.Executable.name) The executable name. | Returns: | The basename of the executable | | Return type: | str | `path`[¶](#spack.util.executable.Executable.path) The path to the executable. | Returns: | The path to the executable | | Return type: | str | `spack.util.executable.``which`(**args*, ***kwargs*)[¶](#spack.util.executable.which) Finds an executable in the path like command-line which. If given multiple executables, returns the first one that is found. If no executables are found, returns None. | Parameters: | ***args** (*str*) – One or more executables to search for | | Keyword Arguments: | | | * **path** (`list()` or str) – The path to search.
Defaults to `PATH` * **required** (*bool*) – If set to True, raise an error if executable not found | | Returns: | The first executable that is found in the path | | Return type: | Executable | *exception* `spack.util.executable.``ProcessError`(*message*, *long_message=None*)[¶](#spack.util.executable.ProcessError) Bases: [`spack.error.SpackError`](index.html#spack.error.SpackError) ProcessErrors are raised when Executables exit with an error code. ##### spack.util.file_cache module[¶](#module-spack.util.file_cache) *exception* `spack.util.file_cache.``CacheError`(*message*, *long_message=None*)[¶](#spack.util.file_cache.CacheError) Bases: [`spack.error.SpackError`](index.html#spack.error.SpackError) *class* `spack.util.file_cache.``FileCache`(*root*, *timeout=120*)[¶](#spack.util.file_cache.FileCache) Bases: `object` This class manages cached data in the filesystem. * Cache files are fetched and stored by unique keys. Keys can be relative paths, so that there can be some hierarchy in the cache. * The FileCache handles locking cache files for reading and writing, so client code need not manage locks for cache entries. `cache_path`(*key*)[¶](#spack.util.file_cache.FileCache.cache_path) Path to the file in the cache for a particular key. `destroy`()[¶](#spack.util.file_cache.FileCache.destroy) Remove all files under the cache root. `init_entry`(*key*)[¶](#spack.util.file_cache.FileCache.init_entry) Ensure we can access a cache file. Create a lock for it if needed. Return whether the cache file exists yet or not. `mtime`(*key*)[¶](#spack.util.file_cache.FileCache.mtime) Return modification time of cache file, or 0 if it does not exist. Time is in units returned by os.stat in the mtime field, which is platform-dependent. `read_transaction`(*key*)[¶](#spack.util.file_cache.FileCache.read_transaction) Get a read transaction on a file cache item. Returns a ReadTransaction context manager and opens the cache file for reading. 
You can use it like this: > with file_cache_object.read_transaction(key) as cache_file: > cache_file.read() `remove`(*key*)[¶](#spack.util.file_cache.FileCache.remove) `write_transaction`(*key*)[¶](#spack.util.file_cache.FileCache.write_transaction) Get a write transaction on a file cache item. Returns a WriteTransaction context manager that opens a temporary file for writing. Once the context manager finishes, if nothing went wrong, moves the file into place on top of the old file atomically. ##### spack.util.gpg module[¶](#module-spack.util.gpg) *class* `spack.util.gpg.``Gpg`[¶](#spack.util.gpg.Gpg) Bases: `object` *classmethod* `create`(***kwargs*)[¶](#spack.util.gpg.Gpg.create) *classmethod* `export_keys`(*location*, **keys*)[¶](#spack.util.gpg.Gpg.export_keys) *static* `gpg`()[¶](#spack.util.gpg.Gpg.gpg) *classmethod* `list`(*trusted*, *signing*)[¶](#spack.util.gpg.Gpg.list) *classmethod* `sign`(*key*, *file*, *output*, *clearsign=False*)[¶](#spack.util.gpg.Gpg.sign) *classmethod* `signing_keys`()[¶](#spack.util.gpg.Gpg.signing_keys) *classmethod* `trust`(*keyfile*)[¶](#spack.util.gpg.Gpg.trust) *classmethod* `untrust`(*signing*, **keys*)[¶](#spack.util.gpg.Gpg.untrust) *classmethod* `verify`(*signature*, *file*)[¶](#spack.util.gpg.Gpg.verify) ##### spack.util.lock module[¶](#module-spack.util.lock) Wrapper for `llnl.util.lock` allows locking to be enabled/disabled. *class* `spack.util.lock.``Lock`(**args*, ***kwargs*)[¶](#spack.util.lock.Lock) Bases: [`llnl.util.lock.Lock`](index.html#llnl.util.lock.Lock) Lock that can be disabled. This overrides the `_lock()` and `_unlock()` methods from `llnl.util.lock` so that all the lock API calls will succeed, but the actual locking mechanism can be disabled via `_enable_locks`. `spack.util.lock.``check_lock_safety`(*path*)[¶](#spack.util.lock.check_lock_safety) Do some extra checks to ensure disabling locks is safe. 
This will raise an error if `path` is group- or world-writable AND the current user can write to the directory (i.e., if this user AND others could write to the path). This is intended to run on the Spack prefix, but can be run on any path for testing. ##### spack.util.log_parse module[¶](#module-spack.util.log_parse) `spack.util.log_parse.``parse_log_events`(*stream*, *context=6*, *jobs=None*, *profile=False*)[¶](#spack.util.log_parse.parse_log_events) Extract interesting events from a log file as a list of LogEvent. | Parameters: | * **stream** (*str* *or* *fileobject*) – build log name or file object * **context** (*int*) – lines of context to extract around each log event * **jobs** (*int*) – number of jobs to parse with; default ncpus * **profile** (*bool*) – print out profile information for parsing | | Returns: | two lists containing `BuildError` and `BuildWarning` objects. | | Return type: | (tuple) | This is a wrapper around `ctest_log_parser.CTestLogParser` that lazily constructs a single `CTestLogParser` object. This ensures that all the regex compilation is only done once. `spack.util.log_parse.``make_log_context`(*log_events*, *width=None*)[¶](#spack.util.log_parse.make_log_context) Get error context from a log file. | Parameters: | * **log_events** (*list of LogEvent*) – list of events created by `ctest_log_parser.parse()` * **width** (*int* *or* *None*) – wrap width; `0` for no limit; `None` to auto-size for terminal | | Returns: | context from the build log with errors highlighted | | Return type: | str | Parses the log file for lines containing errors, and prints them out with line numbers and context. Errors are highlighted with ‘>>’ and with red highlighting (if color is enabled). Events are sorted by line number before they are displayed. ##### spack.util.module_cmd module[¶](#module-spack.util.module_cmd) This module contains routines related to the module command for accessing and parsing environment modules.
*exception* `spack.util.module_cmd.``ModuleError`[¶](#spack.util.module_cmd.ModuleError) Bases: `Exception` Raised by the module_cmd utility to indicate errors. `spack.util.module_cmd.``get_module_cmd`(*bashopts=''*)[¶](#spack.util.module_cmd.get_module_cmd) `spack.util.module_cmd.``get_module_cmd_from_bash`(*bashopts=''*)[¶](#spack.util.module_cmd.get_module_cmd_from_bash) `spack.util.module_cmd.``get_module_cmd_from_which`()[¶](#spack.util.module_cmd.get_module_cmd_from_which) `spack.util.module_cmd.``get_path_arg_from_module_line`(*line*)[¶](#spack.util.module_cmd.get_path_arg_from_module_line) `spack.util.module_cmd.``get_path_from_module`(*mod*)[¶](#spack.util.module_cmd.get_path_from_module) Inspects a TCL module for entries that indicate the absolute path at which the library supported by said module can be found. `spack.util.module_cmd.``get_path_from_module_contents`(*text*, *module_name*)[¶](#spack.util.module_cmd.get_path_from_module_contents) `spack.util.module_cmd.``load_module`(*mod*)[¶](#spack.util.module_cmd.load_module) Takes a module name and removes modules until it is possible to load that module. It then loads the provided module. Depends on the modulecmd implementation of modules used in cray and lmod. `spack.util.module_cmd.``unload_module`(*mod*)[¶](#spack.util.module_cmd.unload_module) Takes a module name and unloads the module from the environment. It does not check whether conflicts arise from the unloaded module. ##### spack.util.naming module[¶](#module-spack.util.naming) `spack.util.naming.``mod_to_class`(*mod_name*)[¶](#spack.util.naming.mod_to_class) Convert a name from module style to class name style. Spack mostly follows [PEP-8](http://legacy.python.org/dev/peps/pep-0008/): > * Module and package names use lowercase_with_underscores. > * Class names use the CapWords convention. Regular source code follows these conventions.
Spack is a bit more liberal with its Package names and Compiler names: > * They can contain ‘-‘ as well as ‘_’, but cannot start with ‘-‘. > * They can start with numbers, e.g. “3proxy”. This function converts from the module convention to the class convention by removing _ and - and converting surrounding lowercase text to CapWords. If mod_name starts with a number, the class name returned will be prepended with ‘_’ to make a valid Python identifier. `spack.util.naming.``spack_module_to_python_module`(*mod_name*)[¶](#spack.util.naming.spack_module_to_python_module) Given a Spack module name, returns the name by which it can be imported in Python. `spack.util.naming.``valid_module_name`(*mod_name*)[¶](#spack.util.naming.valid_module_name) Return whether mod_name is valid for use in Spack. `spack.util.naming.``valid_fully_qualified_module_name`(*mod_name*)[¶](#spack.util.naming.valid_fully_qualified_module_name) Return whether mod_name is a valid namespaced module name. `spack.util.naming.``validate_fully_qualified_module_name`(*mod_name*)[¶](#spack.util.naming.validate_fully_qualified_module_name) Raise an exception if mod_name is not a valid namespaced module name. `spack.util.naming.``validate_module_name`(*mod_name*)[¶](#spack.util.naming.validate_module_name) Raise an exception if mod_name is not valid. `spack.util.naming.``possible_spack_module_names`(*python_mod_name*)[¶](#spack.util.naming.possible_spack_module_names) Given a Python module name, return a list of all possible spack module names that could correspond to it. `spack.util.naming.``simplify_name`(*name*)[¶](#spack.util.naming.simplify_name) Simplify package name to only lowercase, digits, and dashes. Simplifies a name which may include uppercase letters, periods, underscores, and pluses. In general, we want our package names to only contain lowercase letters, digits, and dashes. 
| Parameters: | **name** (*str*) – The original name of the package | | Returns: | The new name of the package | | Return type: | str | *class* `spack.util.naming.``NamespaceTrie`(*separator='.'*)[¶](#spack.util.naming.NamespaceTrie) Bases: `object` *class* `Element`(*value*)[¶](#spack.util.naming.NamespaceTrie.Element) Bases: `object` `has_value`(*namespace*)[¶](#spack.util.naming.NamespaceTrie.has_value) True if there is a value set for the given namespace. `is_leaf`(*namespace*)[¶](#spack.util.naming.NamespaceTrie.is_leaf) True if this namespace has no children in the trie. `is_prefix`(*namespace*)[¶](#spack.util.naming.NamespaceTrie.is_prefix) True if the namespace has a value, or if it’s the prefix of one that does. ##### spack.util.package_hash module[¶](#module-spack.util.package_hash) *exception* `spack.util.package_hash.``PackageHashError`(*message*, *long_message=None*)[¶](#spack.util.package_hash.PackageHashError) Bases: [`spack.error.SpackError`](index.html#spack.error.SpackError) Raised for all errors encountered during package hashing. *class* `spack.util.package_hash.``RemoveDirectives`(*spec*)[¶](#spack.util.package_hash.RemoveDirectives) Bases: `ast.NodeTransformer` Remove Spack directives from a package AST. `is_directive`(*node*)[¶](#spack.util.package_hash.RemoveDirectives.is_directive) `is_spack_attr`(*node*)[¶](#spack.util.package_hash.RemoveDirectives.is_spack_attr) `visit_ClassDef`(*node*)[¶](#spack.util.package_hash.RemoveDirectives.visit_ClassDef) *class* `spack.util.package_hash.``RemoveDocstrings`[¶](#spack.util.package_hash.RemoveDocstrings) Bases: `ast.NodeTransformer` Transformer that removes docstrings from a Python AST. 
`remove_docstring`(*node*)[¶](#spack.util.package_hash.RemoveDocstrings.remove_docstring) `visit_ClassDef`(*node*)[¶](#spack.util.package_hash.RemoveDocstrings.visit_ClassDef) `visit_FunctionDef`(*node*)[¶](#spack.util.package_hash.RemoveDocstrings.visit_FunctionDef) `visit_Module`(*node*)[¶](#spack.util.package_hash.RemoveDocstrings.visit_Module) *class* `spack.util.package_hash.``ResolveMultiMethods`(*methods*)[¶](#spack.util.package_hash.ResolveMultiMethods) Bases: `ast.NodeTransformer` Remove methods which do not exist if their @when is not satisfied. `resolve`(*node*)[¶](#spack.util.package_hash.ResolveMultiMethods.resolve) `visit_FunctionDef`(*node*)[¶](#spack.util.package_hash.ResolveMultiMethods.visit_FunctionDef) *class* `spack.util.package_hash.``TagMultiMethods`(*spec*)[¶](#spack.util.package_hash.TagMultiMethods) Bases: `ast.NodeVisitor` Tag @when-decorated methods in a spec. `visit_FunctionDef`(*node*)[¶](#spack.util.package_hash.TagMultiMethods.visit_FunctionDef) `spack.util.package_hash.``package_ast`(*spec*)[¶](#spack.util.package_hash.package_ast) `spack.util.package_hash.``package_content`(*spec*)[¶](#spack.util.package_hash.package_content) `spack.util.package_hash.``package_hash`(*spec*, *content=None*)[¶](#spack.util.package_hash.package_hash) ##### spack.util.path module[¶](#module-spack.util.path) Utilities for managing paths in Spack. TODO: this is really part of spack.config. Consolidate it. `spack.util.path.``substitute_config_variables`(*path*)[¶](#spack.util.path.substitute_config_variables) Substitute placeholders into paths. Spack allows paths in configs to have some placeholders, as follows: * $spack The Spack instance’s prefix * $user The current user’s username * $tempdir Default temporary directory returned by tempfile.gettempdir() These are substituted case-insensitively into the path, and users can use either `$var` or `${var}` syntax for the variables. 
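The placeholder substitution that `substitute_config_variables` describes (`$var` or `${var}`, matched case-insensitively) can be sketched with a regular expression. The replacement values below are illustrative stand-ins; Spack computes the real ones at runtime:

```python
import re

# Hypothetical values; Spack derives these from its own prefix,
# the current user, and tempfile.gettempdir().
_replacements = {
    'spack': '/opt/spack',
    'user': 'alice',
    'tempdir': '/tmp',
}


def substitute_config_variables(path):
    """Replace $var or ${var} placeholders, case-insensitively."""
    def repl(match):
        # One of the two groups matched, depending on the syntax used.
        name = (match.group(1) or match.group(2)).lower()
        return _replacements.get(name, match.group(0))

    return re.sub(r'\$(\w+)|\$\{(\w+)\}', repl, path)
```

Unknown placeholders are left untouched, so a path like `$unknown/dir` passes through unchanged.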
`spack.util.path.``canonicalize_path`(*path*)[¶](#spack.util.path.canonicalize_path) Substitute config vars, expand environment vars, expand user home, take abspath. ##### spack.util.pattern module[¶](#module-spack.util.pattern) *class* `spack.util.pattern.``Args`(**flags*, ***kwargs*)[¶](#spack.util.pattern.Args) Bases: [`spack.util.pattern.Bunch`](#spack.util.pattern.Bunch) Subclass of Bunch to write argparse args more naturally. *class* `spack.util.pattern.``Bunch`(***kwargs*)[¶](#spack.util.pattern.Bunch) Bases: `object` Carries a bunch of named attributes (from <NAME>elli bunch) `spack.util.pattern.``composite`(*interface=None*, *method_list=None*, *container=<class 'list'>*)[¶](#spack.util.pattern.composite) Decorator implementing the GoF composite pattern. | Parameters: | * **interface** (*type*) – class exposing the interface to which the composite object must conform. Only non-private and non-special methods will be taken into account * **method_list** (*list of str*) – names of methods that should be part of the composite * **container** (*MutableSequence*) – container for the composite object (default = list). Must fulfill the MutableSequence contract. The composite class will expose the container API to manage object composition | | Returns: | a class decorator that patches a class adding all the methods it needs to be a composite for a given interface. | ##### spack.util.prefix module[¶](#module-spack.util.prefix) This file contains utilities for managing the installation prefix of a package. *class* `spack.util.prefix.``Prefix`[¶](#spack.util.prefix.Prefix) Bases: `str` This class represents an installation prefix, but provides useful attributes for referring to directories inside the prefix. 
Attributes of this object are created on the fly when you request them, so any of the following is valid:

```
>>> prefix = Prefix('/usr')
>>> prefix.bin
/usr/bin
>>> prefix.lib64
/usr/lib64
>>> prefix.share.man
/usr/share/man
>>> prefix.foo.bar.baz
/usr/foo/bar/baz
>>> prefix.join('dashed-directory').bin64
/usr/dashed-directory/bin64
```

Prefix objects behave identically to strings. In fact, they subclass `str`. So operators like `+` are legal:

```
print('foobar ' + prefix)
```

This prints `foobar /usr`. All of this is meant to make custom installs easy. `join`(*string*)[¶](#spack.util.prefix.Prefix.join) Concatenates a string to a prefix. | Parameters: | **string** (*str*) – the string to append to the prefix | | Returns: | the newly created installation prefix | | Return type: | Prefix | ##### spack.util.spack_json module[¶](#module-spack.util.spack_json) Simple wrapper around JSON to guarantee consistent use of load/dump. `spack.util.spack_json.``load`(*stream*)[¶](#spack.util.spack_json.load) Spack JSON needs to be ordered to support specs. `spack.util.spack_json.``dump`(*data*, *stream=None*)[¶](#spack.util.spack_json.dump) Dump JSON with a reasonable amount of indentation and separation. *exception* `spack.util.spack_json.``SpackJSONError`(*msg*, *json_error*)[¶](#spack.util.spack_json.SpackJSONError) Bases: [`spack.error.SpackError`](index.html#spack.error.SpackError) Raised when there are issues with JSON parsing. ##### spack.util.spack_yaml module[¶](#module-spack.util.spack_yaml) Enhanced YAML parsing for Spack. * `load()` preserves YAML Marks on returned objects – this allows us to access file and line information later. * Our load methods use the `OrderedDict` class instead of YAML’s default unordered dict. `spack.util.spack_yaml.``load`(**args*, ***kwargs*)[¶](#spack.util.spack_yaml.load) Load but modify the loader instance so that it will add __line__ attributes to the returned object.
`spack.util.spack_yaml.``dump`(**args*, ***kwargs*)[¶](#spack.util.spack_yaml.dump) *exception* `spack.util.spack_yaml.``SpackYAMLError`(*msg*, *yaml_error*)[¶](#spack.util.spack_yaml.SpackYAMLError) Bases: [`spack.error.SpackError`](index.html#spack.error.SpackError) Raised when there are issues with YAML parsing. ##### spack.util.string module[¶](#module-spack.util.string) `spack.util.string.``comma_and`(*sequence*)[¶](#spack.util.string.comma_and) `spack.util.string.``comma_list`(*sequence*, *article=''*)[¶](#spack.util.string.comma_list) `spack.util.string.``comma_or`(*sequence*)[¶](#spack.util.string.comma_or) `spack.util.string.``plural`(*n*, *singular*, *plural=None*, *show_n=True*)[¶](#spack.util.string.plural) Pluralize <singular> word by adding an s if n != 1. | Parameters: | * **n** (*int*) – number of things there are * **singular** (*str*) – singular form of word * **plural** (*str**,* *optional*) – optional plural form, for when it’s not just singular + ‘s’ * **show_n** (*bool*) – whether to include n in the result string (default True) | | Returns: | “1 thing” if n == 1 or “n things” if n != 1 | | Return type: | (str) | `spack.util.string.``quote`(*sequence*, *q="'"*)[¶](#spack.util.string.quote) ##### spack.util.web module[¶](#module-spack.util.web) *exception* `spack.util.web.``HTMLParseError`[¶](#spack.util.web.HTMLParseError) Bases: `Exception` *class* `spack.util.web.``LinkParser`[¶](#spack.util.web.LinkParser) Bases: `html.parser.HTMLParser` This parser just takes an HTML page and strips out the hrefs on the links. Good enough for a really simple spider. `handle_starttag`(*tag*, *attrs*)[¶](#spack.util.web.LinkParser.handle_starttag) *exception* `spack.util.web.``NoNetworkConnectionError`(*message*, *url*)[¶](#spack.util.web.NoNetworkConnectionError) Bases: [`spack.util.web.SpackWebError`](#spack.util.web.SpackWebError) Raised when an operation can’t get an internet connection. 
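A link-stripping parser like the `LinkParser` described above can be sketched with the standard library's `html.parser`. This is a minimal version of the idea, not necessarily identical to Spack's:

```python
from html.parser import HTMLParser


class LinkParser(HTMLParser):
    """Collect the href attribute of every <a> tag on a page."""

    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        # attrs is a list of (name, value) pairs for the tag's attributes.
        if tag == 'a':
            for name, value in attrs:
                if name == 'href' and value is not None:
                    self.links.append(value)
```

Feeding a page to `LinkParser.feed()` leaves the collected hrefs in `links`, which is all a simple spider needs to decide what to fetch next.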
*class* `spack.util.web.``NonDaemonContext`[¶](#spack.util.web.NonDaemonContext) Bases: `multiprocessing.context.ForkContext` `Process`[¶](#spack.util.web.NonDaemonContext.Process) alias of [`NonDaemonProcess`](#spack.util.web.NonDaemonProcess) *class* `spack.util.web.``NonDaemonPool`(**args*, ***kwargs*)[¶](#spack.util.web.NonDaemonPool) Bases: `multiprocessing.pool.Pool` Pool that uses non-daemon processes *class* `spack.util.web.``NonDaemonProcess`(*group=None*, *target=None*, *name=None*, *args=()*, *kwargs={}*, ***, *daemon=None*)[¶](#spack.util.web.NonDaemonProcess) Bases: `multiprocessing.context.Process` Process that allows sub-processes, so pools can have sub-pools. `daemon`[¶](#spack.util.web.NonDaemonProcess.daemon) Return whether process is a daemon *exception* `spack.util.web.``SpackWebError`(*message*, *long_message=None*)[¶](#spack.util.web.SpackWebError) Bases: [`spack.error.SpackError`](index.html#spack.error.SpackError) Superclass for Spack web spidering errors. *exception* `spack.util.web.``VersionFetchError`(*message*, *long_message=None*)[¶](#spack.util.web.VersionFetchError) Bases: [`spack.util.web.SpackWebError`](#spack.util.web.SpackWebError) Raised when we can’t determine a URL to fetch a package. `spack.util.web.``find_versions_of_archive`(*archive_urls*, *list_url=None*, *list_depth=0*)[¶](#spack.util.web.find_versions_of_archive) Scrape web pages for new versions of a tarball. | Parameters: | **archive_urls** – URL or sequence of URLs for different versions of a package. Typically these are just the tarballs from the package file itself. By default, this searches the parent directories of archives. | | Keyword Arguments: | | | * **list_url** – URL for a listing of archives. Spack will scrape these pages for download links that look like the archive URL. * **list_depth** – Max depth to follow links on list_url pages. Default 0.
| `spack.util.web.``get_checksums_for_versions`(*url_dict*, *name*, *first_stage_function=None*, *keep_stage=False*)[¶](#spack.util.web.get_checksums_for_versions) Fetches and checksums archives from URLs. This function is called by both `spack checksum` and `spack create`. The `first_stage_function` argument allows the caller to inspect the first downloaded archive, e.g., to determine the build system. | Parameters: | * **url_dict** (*dict*) – A dictionary of the form: version -> URL * **name** (*str*) – The name of the package * **first_stage_function** (*callable*) – function that takes a Stage and a URL; this is run on the stage of the first URL downloaded * **keep_stage** (*bool*) – whether to keep staging area when command completes | | Returns: | A multi-line string containing versions and corresponding hashes | | Return type: | (str) | `spack.util.web.``spider`(*root_url*, *depth=0*)[¶](#spack.util.web.spider) Gets web pages from a root URL. If depth is specified (e.g., depth=2), then this will also follow up to <depth> levels of links from the root. This will spawn processes to fetch the children, for much improved performance over a sequential fetch. ##### Module contents[¶](#module-spack.util) ### Submodules[¶](#submodules) ### spack.abi module[¶](#module-spack.abi) *class* `spack.abi.``ABI`[¶](#spack.abi.ABI) Bases: `object` This class provides methods to test ABI compatibility between specs. The current implementation is rather rough and could be improved. `architecture_compatible`(*parent*, *child*)[¶](#spack.abi.ABI.architecture_compatible) Return true if parent and child have ABI compatible targets. `compatible`(*parent*, *child*, ***kwargs*)[¶](#spack.abi.ABI.compatible) Returns true iff a parent and child spec are ABI compatible `compiler_compatible`(*parent*, *child*, ***kwargs*)[¶](#spack.abi.ABI.compiler_compatible) Return true if compilers for parent and child are ABI compatible. 
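The non-daemon trick used by `NonDaemonProcess`/`NonDaemonPool` above relies on the fact that daemonic processes are not allowed to have children, so pool workers that need to spawn sub-pools (as `spider` does) must never be marked as daemons. A minimal sketch of the pattern, not Spack's exact code:

```python
import multiprocessing


class NonDaemonProcess(multiprocessing.Process):
    """Process whose daemon flag is always False, so it may have children."""

    @property
    def daemon(self):
        return False

    @daemon.setter
    def daemon(self, value):
        # Pool marks its workers as daemons; silently ignore that.
        pass
```

Because `multiprocessing.pool.Pool` sets `worker.daemon = True` on each worker it starts, overriding the property (rather than the attribute) is what keeps the flag off.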
### spack.architecture module[¶](#module-spack.architecture) This module contains all the elements that are required to create an architecture object. These include the target processor, the operating system, and the architecture platform (i.e. cray, darwin, linux, bgq, etc) classes. On a multiple architecture machine, the architecture spec field can be set to build a package against any target and operating system that is present on the platform. On Cray platforms or any other architecture that has different front and back end environments, the operating system will determine the method of compiler detection. There are two different types of compiler detection: 1. Through the $PATH env variable (front-end detection) 2. Through the tcl module system. (back-end detection) Depending on which operating system is specified, the compiler will be detected using one of those methods. For platforms such as linux and darwin, the operating system is autodetected and the target is set to be x86_64. The command line syntax for specifying an architecture is as follows: > target=<Target name> os=<OperatingSystem name> If the user wishes to use the defaults, either target or os can be left out of the command line and Spack will concretize using the default. These defaults are set in the ‘platforms/’ directory which contains the different subclasses for platforms. If the machine has multiple architectures, the user can also enter front-end (or fe) or back-end (or be). These settings will concretize to their respective front-end and back-end targets and operating systems. Additional platforms can be added by creating a subclass of Platform and adding it inside the platform directory. Platforms are an abstract class that are extended by subclasses. If the user wants to add a new type of platform (such as cray_xe), they can create a subclass and set all the class attributes such as priority, front_target, back_target, front_os, back_os. Platforms also contain a priority class attribute.
A lower number signifies higher priority. These numbers are arbitrarily set and can be changed though often there isn’t much need unless a new platform is added and the user wants that to be detected first. Targets are created inside the platform subclasses. Most architectures (like linux and darwin) will have only one target (x86_64) but in the case of Cray machines, there is both a frontend and backend processor. The user can specify which targets are present on front-end and back-end architecture. Depending on the platform, operating systems are either auto-detected or are set. The user can set the front-end and back-end operating systems via the class attributes front_os and back_os. The operating system, as described earlier, will be responsible for compiler detection. *class* `spack.architecture.``Arch`(*plat=None*, *os=None*, *target=None*)[¶](#spack.architecture.Arch) Bases: `object` Architecture is now a class to help with setting attributes. TODO: refactor so that we don’t need this class. `concrete`[¶](#spack.architecture.Arch.concrete) *static* `from_dict`()[¶](#spack.architecture.Arch.from_dict) `to_dict`()[¶](#spack.architecture.Arch.to_dict) *exception* `spack.architecture.``NoPlatformError`[¶](#spack.architecture.NoPlatformError) Bases: [`spack.error.SpackError`](#spack.error.SpackError) *class* `spack.architecture.``OperatingSystem`(*name*, *version*)[¶](#spack.architecture.OperatingSystem) Bases: `object` Operating System will be like a class similar to platform extended by subclasses for the specifics. Operating System will contain the compiler finding logic. Instead of calling two separate methods to find compilers we call the find_compilers method for each operating system `find_compiler`(*cmp_cls*, **path*)[¶](#spack.architecture.OperatingSystem.find_compiler) Try to find the given type of compiler in the user’s environment. For each set of compilers found, this returns compiler objects with the cc, cxx, f77, fc paths and the version filled in.
This will search for compilers with the names in cc_names, cxx_names, etc. and it will group them if they have common prefixes, suffixes, and versions. e.g., gcc-mp-4.7 would be grouped with g++-mp-4.7 and gfortran-mp-4.7. `find_compilers`(**paths*)[¶](#spack.architecture.OperatingSystem.find_compilers) Return a list of compilers found in the supplied paths. This invokes the find() method for each Compiler class, and appends the compilers detected to a list. `to_dict`()[¶](#spack.architecture.OperatingSystem.to_dict) *class* `spack.architecture.``Platform`(*name*)[¶](#spack.architecture.Platform) Bases: `object` Abstract class that each type of Platform will subclass. Will return an instance of it once it is returned. `add_operating_system`(*name*, *os_class*)[¶](#spack.architecture.Platform.add_operating_system) Add the operating_system class object into the platform.operating_sys dictionary `add_target`(*name*, *target*)[¶](#spack.architecture.Platform.add_target) Used by the platform specific subclass to list available targets. Raises an error if the platform specifies a name that is reserved by spack as an alias. `back_end` *= None*[¶](#spack.architecture.Platform.back_end) `back_os` *= None*[¶](#spack.architecture.Platform.back_os) `default` *= None*[¶](#spack.architecture.Platform.default) `default_os` *= None*[¶](#spack.architecture.Platform.default_os) *classmethod* `detect`()[¶](#spack.architecture.Platform.detect) Subclass is responsible for implementing this method. Returns True if the Platform class detects that it is the current platform and False if it’s not.
`front_end` *= None*[¶](#spack.architecture.Platform.front_end) `front_os` *= None*[¶](#spack.architecture.Platform.front_os) `operating_system`(*name*)[¶](#spack.architecture.Platform.operating_system) `priority` *= None*[¶](#spack.architecture.Platform.priority) `reserved_oss` *= ['default_os', 'frontend', 'fe', 'backend', 'be']*[¶](#spack.architecture.Platform.reserved_oss) `reserved_targets` *= ['default_target', 'frontend', 'fe', 'backend', 'be']*[¶](#spack.architecture.Platform.reserved_targets) *classmethod* `setup_platform_environment`(*pkg*, *env*)[¶](#spack.architecture.Platform.setup_platform_environment) Subclass can override this method if it requires any platform-specific build environment modifications. `target`(*name*)[¶](#spack.architecture.Platform.target) This is a getter method for the target dictionary that handles defaulting based on the values provided by default, front-end, and back-end. This can be overwritten by a subclass for which we want to provide further aliasing options. *class* `spack.architecture.``Target`(*name*, *module_name=None*)[¶](#spack.architecture.Target) Bases: `object` Target is the processor of the host machine. The host machine may have different front-end and back-end targets, especially if it is a Cray machine. The target will have a name and also the module_name (e.g. craype-compiler). Targets will also recognize which platform they came from using the set_platform method. Targets will have compiler finding strategies. `spack.architecture.``arch_for_spec`(*arch_spec*)[¶](#spack.architecture.arch_for_spec) Transforms the given architecture spec into an architecture object. `spack.architecture.``get_platform`(*platform_name*)[¶](#spack.architecture.get_platform) Returns a platform object that corresponds to the given name. `spack.architecture.``verify_platform`(*platform_name*)[¶](#spack.architecture.verify_platform) Determines whether or not the platform with the given name is supported in Spack.
For more information, see the ‘spack.platforms’ submodule. ### spack.binary_distribution module[¶](#module-spack.binary_distribution) *exception* `spack.binary_distribution.``NewLayoutException`(*message*, *long_message=None*)[¶](#spack.binary_distribution.NewLayoutException) Bases: [`spack.error.SpackError`](#spack.error.SpackError) Raised if directory layout is different from buildcache. *exception* `spack.binary_distribution.``NoChecksumException`(*message*, *long_message=None*)[¶](#spack.binary_distribution.NoChecksumException) Bases: [`spack.error.SpackError`](#spack.error.SpackError) Raised if file fails checksum verification. *exception* `spack.binary_distribution.``NoGpgException`(*message*, *long_message=None*)[¶](#spack.binary_distribution.NoGpgException) Bases: [`spack.error.SpackError`](#spack.error.SpackError) Raised when gpg2 is not in PATH *exception* `spack.binary_distribution.``NoKeyException`(*message*, *long_message=None*)[¶](#spack.binary_distribution.NoKeyException) Bases: [`spack.error.SpackError`](#spack.error.SpackError) Raised when gpg has no default key added. *exception* `spack.binary_distribution.``NoOverwriteException`(*file_path*)[¶](#spack.binary_distribution.NoOverwriteException) Bases: `Exception` Raised when a file exists and must be overwritten. *exception* `spack.binary_distribution.``NoVerifyException`(*message*, *long_message=None*)[¶](#spack.binary_distribution.NoVerifyException) Bases: [`spack.error.SpackError`](#spack.error.SpackError) Raised if file fails signature verification. *exception* `spack.binary_distribution.``PickKeyException`(*keys*)[¶](#spack.binary_distribution.PickKeyException) Bases: [`spack.error.SpackError`](#spack.error.SpackError) Raised when multiple keys can be used to sign. 
`spack.binary_distribution.``build_tarball`(*spec*, *outdir*, *force=False*, *rel=False*, *unsigned=False*, *allow_root=False*, *key=None*)[¶](#spack.binary_distribution.build_tarball) Build a tarball from given spec and put it into the directory structure used at the mirror (following <tarball_directory_name>). `spack.binary_distribution.``buildinfo_file_name`(*prefix*)[¶](#spack.binary_distribution.buildinfo_file_name) Filename of the binary package meta-data file `spack.binary_distribution.``checksum_tarball`(*file*)[¶](#spack.binary_distribution.checksum_tarball) `spack.binary_distribution.``download_tarball`(*spec*)[¶](#spack.binary_distribution.download_tarball) Download binary tarball for given package into stage area Return True if successful `spack.binary_distribution.``extract_tarball`(*spec*, *filename*, *allow_root=False*, *unsigned=False*, *force=False*)[¶](#spack.binary_distribution.extract_tarball) extract binary tarball for given package into install area `spack.binary_distribution.``generate_index`(*outdir*, *indexfile_path*)[¶](#spack.binary_distribution.generate_index) `spack.binary_distribution.``get_keys`(*install=False*, *trust=False*, *force=False*)[¶](#spack.binary_distribution.get_keys) Get pgp public keys available on mirror `spack.binary_distribution.``get_specs`(*force=False*)[¶](#spack.binary_distribution.get_specs) Get spec.yaml’s for build caches available on mirror `spack.binary_distribution.``has_gnupg2`()[¶](#spack.binary_distribution.has_gnupg2) `spack.binary_distribution.``make_package_placeholder`(*workdir*, *allow_root*)[¶](#spack.binary_distribution.make_package_placeholder) Change paths in binaries to placeholder paths `spack.binary_distribution.``make_package_relative`(*workdir*, *prefix*, *allow_root*)[¶](#spack.binary_distribution.make_package_relative) Change paths in binaries to relative paths `spack.binary_distribution.``read_buildinfo_file`(*prefix*)[¶](#spack.binary_distribution.read_buildinfo_file) Read buildinfo 
file `spack.binary_distribution.``relocate_package`(*workdir*, *allow_root*)[¶](#spack.binary_distribution.relocate_package) Relocate the given package `spack.binary_distribution.``sign_tarball`(*key*, *force*, *specfile_path*)[¶](#spack.binary_distribution.sign_tarball) `spack.binary_distribution.``tarball_directory_name`(*spec*)[¶](#spack.binary_distribution.tarball_directory_name) Return name of the tarball directory according to the convention <os>-<architecture>/<compiler>/<package>-<version>/ `spack.binary_distribution.``tarball_name`(*spec*, *ext*)[¶](#spack.binary_distribution.tarball_name) Return the name of the tarfile according to the convention <os>-<architecture>-<package>-<dag_hash><ext> `spack.binary_distribution.``tarball_path_name`(*spec*, *ext*)[¶](#spack.binary_distribution.tarball_path_name) Return the full path+name for a given spec according to the convention <tarball_directory_name>/<tarball_name> `spack.binary_distribution.``write_buildinfo_file`(*prefix*, *workdir*, *rel=False*)[¶](#spack.binary_distribution.write_buildinfo_file) Create a cache file containing information required for the relocation ### spack.build_environment module[¶](#module-spack.build_environment) This module contains all routines related to setting up the package build environment. All of this is set up by package.py just before install() is called. There are two parts to the build environment: 1. Python build environment (i.e. install() method) This is how things are set up when install() is called. Spack takes advantage of each package being in its own module by adding a bunch of command-like functions (like configure(), make(), etc.) in the package’s module scope. This allows package writers to call them all directly in Package.install() without writing ‘self.’ everywhere. No, this isn’t Pythonic. Yes, it makes the code more readable and more like the shell script from which someone is likely porting. 2.
Build execution environment This is the set of environment variables, like PATH, CC, CXX, etc. that control the build. There are also a number of environment variables used to pass information (like RPATHs and other information about dependencies) to Spack’s compiler wrappers. All of these env vars are also set up here. Skimming this module is a nice way to get acquainted with the types of calls you can make from within the install() function. *exception* `spack.build_environment.``ChildError`(*msg*, *module*, *classname*, *traceback_string*, *build_log*, *context*)[¶](#spack.build_environment.ChildError) Bases: [`spack.build_environment.InstallError`](#spack.build_environment.InstallError) Special exception class for wrapping exceptions from child processes in Spack’s build environment. The main features of a ChildError are: 1. They’re serializable, so when a child build fails, we can send one of these to the parent and let the parent report what happened. 2. They have a `traceback` field containing a traceback generated on the child immediately after failure. Spack will print this on failure in lieu of trying to run sys.excepthook on the parent process, so users will see the correct stack trace from a child. 3. They also contain context, which shows context in the Package implementation where the error happened. This helps people debug Python code in their packages. To get it, Spack searches the stack trace for the deepest frame where `self` is in scope and is an instance of PackageBase. This will generally find a useful spot in the `package.py` file. The long_message of a ChildError displays one of two things: > 1. If the original error was a ProcessError, indicating a command > died during the build, we’ll show context from the build log. > 2. If the original error was any other type of error, we’ll show > context from the Python code. SpackError handles displaying the special traceback if we’re in debug mode with spack -d. 
`build_errors` *= [('spack.util.executable', 'ProcessError')]*[¶](#spack.build_environment.ChildError.build_errors) `long_message`[¶](#spack.build_environment.ChildError.long_message) *exception* `spack.build_environment.``InstallError`(*message*, *long_message=None*)[¶](#spack.build_environment.InstallError) Bases: [`spack.error.SpackError`](#spack.error.SpackError) Raised by packages when a package fails to install. Any subclass of InstallError will be annotated by Spack with a `pkg` attribute on failure, which the caller can use to get the package for which the exception was raised. *class* `spack.build_environment.``MakeExecutable`(*name*, *jobs*)[¶](#spack.build_environment.MakeExecutable) Bases: [`spack.util.executable.Executable`](index.html#spack.util.executable.Executable) Special callable executable object for make so the user can specify parallelism options on a per-invocation basis. Specifying ‘parallel’ to the call will override whatever the package’s global setting is, so you can either default to true or false and override particular calls. Specifying ‘jobs_env’ to a particular call will name an environment variable which will be set to the parallelism level (without affecting the normal invocation with -j). Note that if the SPACK_NO_PARALLEL_MAKE env var is set it overrides everything. `spack.build_environment.``clean_environment`()[¶](#spack.build_environment.clean_environment) `spack.build_environment.``fork`(*pkg*, *function*, *dirty*, *fake*)[¶](#spack.build_environment.fork) Fork a child process to do part of a spack build. | Parameters: | * **pkg** (*PackageBase*) – package whose environment we should set up the forked process for. * **function** (*callable*) – argless function to run in the child process. * **dirty** (*bool*) – If True, do NOT clean the environment before building.
* **fake** (*bool*) – If True, skip package setup b/c it’s not a real build | Usage: ``` def child_fun(): # do stuff build_env.fork(pkg, child_fun) ``` Forked processes are run with the build environment set up by spack.build_environment. This allows package authors to have full control over the environment, etc. without affecting other builds that might be executed in the same spack call. If something goes wrong, the child process catches the error and passes it to the parent wrapped in a ChildError. The parent is expected to handle (or re-raise) the ChildError. `spack.build_environment.``get_package_context`(*traceback*, *context=3*)[¶](#spack.build_environment.get_package_context) Return some context for an error message when the build fails. | Parameters: | * **traceback** (*traceback*) – A traceback from some exception raised during install * **context** (*int*) – Lines of context to show before and after the line where the error happened | This function inspects the stack to find where we failed in the package file, and it adds detailed context to the long_message from there. `spack.build_environment.``get_rpath_deps`(*pkg*)[¶](#spack.build_environment.get_rpath_deps) Return immediate or transitive RPATHs depending on the package. `spack.build_environment.``get_rpaths`(*pkg*)[¶](#spack.build_environment.get_rpaths) Get a list of all the rpaths for a package. `spack.build_environment.``get_std_cmake_args`(*pkg*)[¶](#spack.build_environment.get_std_cmake_args) List of standard arguments used if a package is a CMakePackage. | Returns: | standard arguments that would be used if this package were a CMakePackage instance. | | Return type: | list of str | | Parameters: | **pkg** (*PackageBase*) – package under consideration | | Returns: | arguments for cmake | | Return type: | list of str | `spack.build_environment.``get_std_meson_args`(*pkg*)[¶](#spack.build_environment.get_std_meson_args) List of standard arguments used if a package is a MesonPackage. 
| Returns: | standard arguments that would be used if this package were a MesonPackage instance. | | Return type: | list of str | | Parameters: | **pkg** (*PackageBase*) – package under consideration | | Returns: | arguments for meson | | Return type: | list of str | `spack.build_environment.``load_external_modules`(*pkg*)[¶](#spack.build_environment.load_external_modules) Traverse a package’s spec DAG and load any external modules. Traverse a package’s dependencies and load any external modules associated with them. | Parameters: | **pkg** (*PackageBase*) – package to load deps for | `spack.build_environment.``parent_class_modules`(*cls*)[¶](#spack.build_environment.parent_class_modules) Get list of superclass modules that descend from spack.package.PackageBase `spack.build_environment.``set_build_environment_variables`(*pkg*, *env*, *dirty*)[¶](#spack.build_environment.set_build_environment_variables) Ensure a clean install environment when we build packages. This involves unsetting pesky environment variables that may affect the build. It also involves setting environment variables used by Spack’s compiler wrappers. | Parameters: | * **pkg** – The package we are building * **env** – The build environment * **dirty** (*bool*) – Skip unsetting the user’s environment settings | `spack.build_environment.``set_compiler_environment_variables`(*pkg*, *env*)[¶](#spack.build_environment.set_compiler_environment_variables) `spack.build_environment.``set_module_variables_for_package`(*pkg*, *module*)[¶](#spack.build_environment.set_module_variables_for_package) Populate the module scope of install() with some useful functions. This makes things easier for package writers. `spack.build_environment.``setup_package`(*pkg*, *dirty*)[¶](#spack.build_environment.setup_package) Execute all environment setup routines. 
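The per-invocation parallelism rules documented for `MakeExecutable` above (a `parallel` keyword that overrides the global jobs setting, an optional `jobs_env` variable, and `SPACK_NO_PARALLEL_MAKE` overriding everything) can be sketched in plain Python. This is an illustrative model, not Spack's actual implementation; `make_argv` and its parameters are hypothetical names chosen to mirror the documented behavior:

```python
import os


def make_argv(jobs, parallel=True, jobs_env=None, env=None):
    """Build a hypothetical `make` command line following the
    MakeExecutable rules described above (illustrative only)."""
    env = os.environ if env is None else env
    # SPACK_NO_PARALLEL_MAKE overrides everything; for simplicity
    # we treat any non-empty value as "set".
    disabled = bool(env.get("SPACK_NO_PARALLEL_MAKE"))
    use_jobs = jobs if (parallel and jobs > 1 and not disabled) else 1

    argv = ["make"]
    extra_env = {}
    if use_jobs > 1:
        argv.append("-j{0}".format(use_jobs))
    if jobs_env:
        # Name an environment variable set to the parallelism level,
        # without affecting the normal -j invocation.
        extra_env[jobs_env] = str(use_jobs)
    return argv, extra_env


# 'parallel' on a particular call overrides the global jobs setting:
print(make_argv(16, env={}))                  # (['make', '-j16'], {})
print(make_argv(16, parallel=False, env={}))  # (['make'], {})
```

Passing `env={}` in the demo calls just makes the output independent of whatever is in the caller's real environment.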
### spack.caches module[¶](#module-spack.caches) Caches used by Spack to store data `spack.caches.``fetch_cache` *= <spack.fetch_strategy.FsCache object>*[¶](#spack.caches.fetch_cache) Spack’s local cache for downloaded source archives `spack.caches.``misc_cache` *= <spack.util.file_cache.FileCache object>*[¶](#spack.caches.misc_cache) Spack’s cache for small data ### spack.compiler module[¶](#module-spack.compiler) *class* `spack.compiler.``Compiler`(*cspec*, *operating_system*, *target*, *paths*, *modules=[]*, *alias=None*, *environment=None*, *extra_rpaths=None*, ***kwargs*)[¶](#spack.compiler.Compiler) Bases: `object` This class encapsulates a Spack “compiler”, which includes C, C++, and Fortran compilers. Subclasses should implement support for specific compilers, their possible names, arguments, and how to identify the particular type of compiler. `PrgEnv` *= None*[¶](#spack.compiler.Compiler.PrgEnv) `PrgEnv_compiler` *= None*[¶](#spack.compiler.Compiler.PrgEnv_compiler) `cc_names` *= []*[¶](#spack.compiler.Compiler.cc_names) `cc_rpath_arg`[¶](#spack.compiler.Compiler.cc_rpath_arg) *classmethod* `cc_version`(*cc*)[¶](#spack.compiler.Compiler.cc_version) `cxx11_flag`[¶](#spack.compiler.Compiler.cxx11_flag) `cxx14_flag`[¶](#spack.compiler.Compiler.cxx14_flag) `cxx17_flag`[¶](#spack.compiler.Compiler.cxx17_flag) `cxx98_flag`[¶](#spack.compiler.Compiler.cxx98_flag) `cxx_names` *= []*[¶](#spack.compiler.Compiler.cxx_names) `cxx_rpath_arg`[¶](#spack.compiler.Compiler.cxx_rpath_arg) *classmethod* `cxx_version`(*cxx*)[¶](#spack.compiler.Compiler.cxx_version) *classmethod* `default_version`(*cc*)[¶](#spack.compiler.Compiler.default_version) Override just this to override all compiler version functions. 
`f77_names` *= []*[¶](#spack.compiler.Compiler.f77_names) `f77_rpath_arg`[¶](#spack.compiler.Compiler.f77_rpath_arg) *classmethod* `f77_version`(*f77*)[¶](#spack.compiler.Compiler.f77_version) `fc_names` *= []*[¶](#spack.compiler.Compiler.fc_names) `fc_rpath_arg`[¶](#spack.compiler.Compiler.fc_rpath_arg) *classmethod* `fc_version`(*fc*)[¶](#spack.compiler.Compiler.fc_version) `openmp_flag`[¶](#spack.compiler.Compiler.openmp_flag) `prefixes` *= []*[¶](#spack.compiler.Compiler.prefixes) `setup_custom_environment`(*pkg*, *env*)[¶](#spack.compiler.Compiler.setup_custom_environment) Set any environment variables necessary to use the compiler. `suffixes` *= ['-.*']*[¶](#spack.compiler.Compiler.suffixes) `version`[¶](#spack.compiler.Compiler.version) `spack.compiler.``get_compiler_version`(*compiler_path*, *version_arg*, *regex='(.*)'*)[¶](#spack.compiler.get_compiler_version) ### spack.concretize module[¶](#module-spack.concretize) Functions here are used to take abstract specs and make them concrete. For example, if a spec asks for a version between 1.8 and 1.9, these functions will take the most recent 1.9 version of the package available. Or, if the user didn’t specify a compiler for a spec, then this will assign a compiler to the spec based on defaults or user preferences. TODO: make this customizable and allow users to configure concretization policies. *class* `spack.concretize.``Concretizer`[¶](#spack.concretize.Concretizer) Bases: `object` You can subclass this class to override some of the default concretization strategies, or you can override all of them. `choose_virtual_or_external`(*spec*)[¶](#spack.concretize.Concretizer.choose_virtual_or_external) Given a list of candidate virtual and external packages, try to find one that is most ABI compatible. `concretize_architecture`(*spec*)[¶](#spack.concretize.Concretizer.concretize_architecture) If the spec is empty, provide the defaults of the platform.
If the architecture is not a string type, then check if either the platform, target or operating system are concretized. If any of the fields are changed then return True. If everything is concretized (i.e. the architecture attribute is a namedtuple of classes) then return False. If the target is a string type, then convert the string into a concretized architecture. If it has no architecture and the root of the DAG has an architecture, then use the root; otherwise use the defaults on the platform. `concretize_compiler`(*spec*)[¶](#spack.concretize.Concretizer.concretize_compiler) If the spec already has a compiler, we’re done. If not, then take the compiler used for the nearest ancestor with a compiler spec and use that. If the ancestor’s compiler is not concrete, then use the preferred compiler as specified in spackconfig. Intuition: Use the spackconfig default if no package that depends on this one has a strict compiler requirement. Otherwise, try to build with the compiler that will be used by libraries that link to this one, to maximize compatibility. `concretize_compiler_flags`(*spec*)[¶](#spack.concretize.Concretizer.concretize_compiler_flags) The compiler flags are updated to match those of the spec whose compiler is used, defaulting to no compiler flags in the spec. Default specs set at the compiler level will still be added later. `concretize_variants`(*spec*)[¶](#spack.concretize.Concretizer.concretize_variants) If the spec already has variants filled in, return. Otherwise, add the user preferences from packages.yaml or the default variants from the package specification. `concretize_version`(*spec*)[¶](#spack.concretize.Concretizer.concretize_version) If the spec is already concrete, return. Otherwise take the preferred version from spackconfig, and default to the package’s version if there are no available versions. TODO: In many cases we probably want to look for installed versions of each package and use an installed version if we can link to it.
The policy implemented here will tend to rebuild a lot of stuff because it will prefer a compiler in the spec to any compiler that already-installed things were built with. There is likely some better policy that finds some middle ground between these two extremes. `disable_compiler_existence_check`()[¶](#spack.concretize.Concretizer.disable_compiler_existence_check) *exception* `spack.concretize.``InsufficientArchitectureInfoError`(*spec*, *archs*)[¶](#spack.concretize.InsufficientArchitectureInfoError) Bases: [`spack.error.SpackError`](#spack.error.SpackError) Raised when details on architecture cannot be collected from the system *exception* `spack.concretize.``NoBuildError`(*spec*)[¶](#spack.concretize.NoBuildError) Bases: [`spack.error.SpackError`](#spack.error.SpackError) Raised when a package is configured with the buildable option False, but no satisfactory external versions can be found *exception* `spack.concretize.``NoCompilersForArchError`(*arch*, *available_os_targets*)[¶](#spack.concretize.NoCompilersForArchError) Bases: [`spack.error.SpackError`](#spack.error.SpackError) *exception* `spack.concretize.``NoValidVersionError`(*spec*)[¶](#spack.concretize.NoValidVersionError) Bases: [`spack.error.SpackError`](#spack.error.SpackError) Raised when there is no way to have a concrete version for a particular spec. *exception* `spack.concretize.``UnavailableCompilerVersionError`(*compiler_spec*, *arch=None*)[¶](#spack.concretize.UnavailableCompilerVersionError) Bases: [`spack.error.SpackError`](#spack.error.SpackError) Raised when there is no available compiler that satisfies a compiler spec.
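The version selection sketched in the module introduction (a spec asking for a version between 1.8 and 1.9 takes the most recent 1.9 release available) can be modeled in a few lines. This is a simplified illustration with hypothetical version strings, not Spack's real `concretize_version`, which also consults `packages.yaml` preferences and the package definition:

```python
def concretize_version(available, lo, hi, preferred=None):
    """Pick a concrete version from `available` within [lo, hi],
    taking `preferred` when it satisfies the range, otherwise the
    most recent satisfying version (illustrative only)."""
    def key(v):
        # Compare dotted versions numerically, e.g. "1.9.4" -> (1, 9, 4).
        return tuple(int(x) for x in v.split("."))

    candidates = [v for v in available if key(lo) <= key(v) <= key(hi)]
    if not candidates:
        # cf. NoValidVersionError above
        raise ValueError("no concrete version satisfies the range")
    if preferred in candidates:
        return preferred
    return max(candidates, key=key)


# A request for a version between 1.8 and 1.9 takes the most recent
# 1.9 release available ("1.9.99" stands in for an open upper bound):
versions = ["1.7.0", "1.8.2", "1.9.1", "1.9.4", "2.0.0"]
print(concretize_version(versions, "1.8", "1.9.99"))  # 1.9.4
```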
`spack.concretize.``concretizer` *= <spack.concretize.Concretizer object>*[¶](#spack.concretize.concretizer) Concretizer singleton `spack.concretize.``find_spec`(*spec*, *condition*, *default=None*)[¶](#spack.concretize.find_spec) Searches the dag from spec in an intelligent order and looks for a spec that matches a condition ### spack.config module[¶](#module-spack.config) This module implements Spack’s configuration file handling. This implements Spack’s configuration system, which handles merging multiple scopes with different levels of precedence. See the documentation on [Configuration Scopes](index.html#configuration-scopes) for details on how Spack’s configuration system behaves. The scopes are: > 1. `default` > 2. `system` > 3. `site` > 4. `user` And corresponding [per-platform scopes](index.html#platform-scopes). Important functions in this module are: * `get_config()` * `update_config()` `get_config` reads in YAML data for a particular scope and returns it. Callers can then modify the data and write it back with `update_config`. When read in, Spack validates configurations with jsonschemas. The schemas are in submodules of [`spack.schema`](index.html#module-spack.schema). *exception* `spack.config.``ConfigError`(*message*, *long_message=None*)[¶](#spack.config.ConfigError) Bases: [`spack.error.SpackError`](#spack.error.SpackError) Superclass for all Spack config related errors. *exception* `spack.config.``ConfigFileError`(*message*, *long_message=None*)[¶](#spack.config.ConfigFileError) Bases: [`spack.config.ConfigError`](#spack.config.ConfigError) Issue reading or accessing a configuration file. *exception* `spack.config.``ConfigFormatError`(*validation_error*, *data*, *filename=None*, *line=None*)[¶](#spack.config.ConfigFormatError) Bases: [`spack.config.ConfigError`](#spack.config.ConfigError) Raised when a configuration format does not match its schema. 
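The scope precedence described in the module introduction (`default` < `system` < `site` < `user`) amounts to a layered merge where later scopes win. A minimal sketch, using plain dicts in place of real `ConfigScope` objects (Spack's actual merge logic is more involved, handling lists and schema validation):

```python
def merge_scopes(scopes):
    """Merge configuration scopes in precedence order: scopes later
    in the list (e.g. 'user') override earlier ones (e.g. 'default').
    Nested dicts are merged recursively (illustrative only)."""
    merged = {}
    for scope in scopes:
        for key, value in scope.items():
            if isinstance(value, dict) and isinstance(merged.get(key), dict):
                # Both sides are sections: merge key by key.
                merged[key] = merge_scopes([merged[key], value])
            else:
                # Higher-precedence scope wins outright.
                merged[key] = value
    return merged


default = {"config": {"build_jobs": 4, "verify_ssl": True}}
user = {"config": {"build_jobs": 16}}
print(merge_scopes([default, user]))
# {'config': {'build_jobs': 16, 'verify_ssl': True}}
```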
*class* `spack.config.``ConfigScope`(*name*, *path*)[¶](#spack.config.ConfigScope) Bases: `object` This class represents a configuration scope. A scope is one directory containing named configuration files. Each file is a config “section” (e.g., mirrors, compilers, etc). `clear`()[¶](#spack.config.ConfigScope.clear) Empty cached config information. `get_section`(*section*)[¶](#spack.config.ConfigScope.get_section) `get_section_filename`(*section*)[¶](#spack.config.ConfigScope.get_section_filename) `write_section`(*section*)[¶](#spack.config.ConfigScope.write_section) *exception* `spack.config.``ConfigSectionError`(*message*, *long_message=None*)[¶](#spack.config.ConfigSectionError) Bases: [`spack.config.ConfigError`](#spack.config.ConfigError) Error for referring to a bad config section name in a configuration. *class* `spack.config.``Configuration`(**scopes*)[¶](#spack.config.Configuration) Bases: `object` A full Spack configuration, from a hierarchy of config files. This class makes it easy to add a new scope on top of an existing one. `clear_caches`()[¶](#spack.config.Configuration.clear_caches) Clears the caches for configuration files. This will cause files to be re-read upon the next request. `file_scopes`[¶](#spack.config.Configuration.file_scopes) List of writable scopes with an associated file. `get`(*path*, *default=None*, *scope=None*)[¶](#spack.config.Configuration.get) Get a config section or a single value from one. Accepts a path syntax that allows us to grab nested config map entries. Getting the ‘config’ section would look like: ``` spack.config.get('config') ``` and the `dirty` section in the `config` scope would be: ``` spack.config.get('config:dirty') ``` We use `:` as the separator, like YAML objects. `get_config`(*section*, *scope=None*)[¶](#spack.config.Configuration.get_config) Get configuration settings for a section. If `scope` is `None` or not provided, return the merged contents of all of Spack’s configuration scopes.
If `scope` is provided, return only the configuration as specified in that scope. This strips off the top-level name from the YAML section. That is, for a YAML config file that looks like this: ``` config: install_tree: $spack/opt/spack module_roots: lmod: $spack/share/spack/lmod ``` `get_config('config')` will return: ``` { 'install_tree': '$spack/opt/spack', 'module_roots': { 'lmod': '$spack/share/spack/lmod' } } ``` `get_config_filename`(*scope*, *section*)[¶](#spack.config.Configuration.get_config_filename) For some scope and section, get the name of the configuration file. `highest_precedence_scope`()[¶](#spack.config.Configuration.highest_precedence_scope) Non-internal scope with highest precedence. `pop_scope`()[¶](#spack.config.Configuration.pop_scope) Remove the highest precedence scope and return it. `print_section`(*section*, *blame=False*)[¶](#spack.config.Configuration.print_section) Print a configuration to stdout. `push_scope`(*scope*)[¶](#spack.config.Configuration.push_scope) Add a higher precedence scope to the Configuration. `remove_scope`(*scope_name*)[¶](#spack.config.Configuration.remove_scope) `set`(*path*, *value*, *scope=None*)[¶](#spack.config.Configuration.set) Convenience function for setting single values in config files. Accepts the path syntax described in `get()`. `update_config`(*section*, *update_data*, *scope=None*)[¶](#spack.config.Configuration.update_config) Update the configuration file for a particular scope. Overwrites contents of a section in a scope with update_data, then writes out the config file. update_data should have the top-level section name stripped off (it will be re-added). Data itself can be a list, dict, or any other yaml-ish structure. *class* `spack.config.``ImmutableConfigScope`(*name*, *path*)[¶](#spack.config.ImmutableConfigScope) Bases: [`spack.config.ConfigScope`](#spack.config.ConfigScope) A configuration scope that cannot be written to. This is used for ConfigScopes passed on the command line.
`write_section`(*section*)[¶](#spack.config.ImmutableConfigScope.write_section) *class* `spack.config.``InternalConfigScope`(*name*, *data=None*)[¶](#spack.config.InternalConfigScope) Bases: [`spack.config.ConfigScope`](#spack.config.ConfigScope) An internal configuration scope that is not persisted to a file. This is for spack internal use so that command-line options and config file settings are accessed the same way, and Spack can easily override settings from files. `get_section`(*section*)[¶](#spack.config.InternalConfigScope.get_section) Just reads from an internal dictionary. `get_section_filename`(*section*)[¶](#spack.config.InternalConfigScope.get_section_filename) `write_section`(*section*)[¶](#spack.config.InternalConfigScope.write_section) This only validates, as the data is already in memory. *class* `spack.config.``SingleFileScope`(*name*, *path*, *schema*, *yaml_path=None*)[¶](#spack.config.SingleFileScope) Bases: [`spack.config.ConfigScope`](#spack.config.ConfigScope) This class represents a configuration scope in a single YAML file. `get_section`(*section*)[¶](#spack.config.SingleFileScope.get_section) `get_section_filename`(*section*)[¶](#spack.config.SingleFileScope.get_section_filename) `write_section`(*section*)[¶](#spack.config.SingleFileScope.write_section) `spack.config.``command_line_scopes` *= []*[¶](#spack.config.command_line_scopes) configuration scopes added on the command line set by `spack.main.main()`. `spack.config.``config` *= <spack.config.Configuration object>*[¶](#spack.config.config) This is the singleton configuration instance for Spack. `spack.config.``config_defaults` *= {'config': {'checksum': True, 'debug': False, 'verify_ssl': True, 'dirty': False, 'build_jobs': 4}}*[¶](#spack.config.config_defaults) Hard-coded default values for some key configuration options. This ensures that Spack will still work even if config.yaml in the defaults scope is removed. 
`spack.config.``configuration_paths` *= (('defaults', '/home/docs/checkouts/readthedocs.org/user_builds/spack/checkouts/v0.12.1/etc/spack/defaults'), ('system', '/etc/spack'), ('site', '/home/docs/checkouts/readthedocs.org/user_builds/spack/checkouts/v0.12.1/etc/spack'), ('user', '/home/docs/.spack'))*[¶](#spack.config.configuration_paths) Builtin paths to configuration files in Spack `spack.config.``default_list_scope`()[¶](#spack.config.default_list_scope) Return the config scope that is listed by default. Commands that list configuration list *all* scopes (merged) by default. `spack.config.``default_modify_scope`()[¶](#spack.config.default_modify_scope) Return the config scope that commands should modify by default. Commands that modify configuration by default modify the *highest* priority scope. `spack.config.``first_existing`(*dictionary*, *keys*)[¶](#spack.config.first_existing) Get the value of the first key in keys that is in the dictionary. `spack.config.``get`(*path*, *default=None*, *scope=None*)[¶](#spack.config.get) Module-level wrapper for `Configuration.get()`. `spack.config.``override`(*path_or_scope*, *value=None*)[¶](#spack.config.override) Simple way to override config settings within a context. | Parameters: | * **path_or_scope** (*ConfigScope* *or* *str*) – scope or single option to override * **value** (*object**,* *optional*) – value for the single option | Temporarily push a scope on the current configuration, then remove it after the context completes. If a single option is provided, create an internal config scope for it and push/pop that scope. `spack.config.``scopes`()[¶](#spack.config.scopes) Convenience function to get list of configuration scopes. 
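`Configuration.get`'s colon-separated path syntax and `override`'s push/pop behavior described above can be sketched together. This is an illustrative toy, with plain dicts standing in for real `ConfigScope` objects and hypothetical class names:

```python
from contextlib import contextmanager


class Config:
    """A toy layered configuration supporting 'a:b:c' path lookups,
    modeled on the API documented above (illustrative only)."""

    def __init__(self):
        self.scopes = []  # later scopes take precedence

    def push_scope(self, scope):
        self.scopes.append(scope)

    def pop_scope(self):
        return self.scopes.pop()

    def get(self, path, default=None):
        # Walk scopes from highest to lowest precedence.
        for scope in reversed(self.scopes):
            value = scope
            for part in path.split(":"):
                if not isinstance(value, dict) or part not in value:
                    break
                value = value[part]
            else:
                return value
        return default


@contextmanager
def override(config, path, value):
    """Temporarily push a scope holding a single option, then pop it
    when the context completes, mirroring spack.config.override."""
    parts = path.split(":")
    scope = current = {}
    for part in parts[:-1]:
        current[part] = {}
        current = current[part]
    current[parts[-1]] = value
    config.push_scope(scope)
    try:
        yield config
    finally:
        config.pop_scope()


cfg = Config()
cfg.push_scope({"config": {"dirty": False}})
with override(cfg, "config:dirty", True):
    print(cfg.get("config:dirty"))  # True
print(cfg.get("config:dirty"))      # False
```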
`spack.config.``scopes_metavar` *= '{defaults,system,site,user}[/PLATFORM]'*[¶](#spack.config.scopes_metavar) metavar to use for commands that accept scopes this is shorter and more readable than listing all choices `spack.config.``section_schemas` *= {'compilers': {'$schema': 'http://json-schema.org/schema#', 'title': 'Spack compiler configuration file schema', 'properties': {'compilers': {'items': [{'additionalProperties': False, 'properties': {'compiler': {'additionalProperties': False, 'properties': {'modules': {'anyOf': [{'type': 'string'}, {'type': 'null'}, {'type': 'array'}]}, 'extra_rpaths': {'items': {'type': 'string'}, 'default': [], 'type': 'array'}, 'operating_system': {'type': 'string'}, 'spec': {'type': 'string'}, 'environment': {'properties': {'append-path': {'patternProperties': {'\\w[\\w-]*': {'anyOf': [{'type': 'string'}, {'type': 'number'}]}}, 'type': 'object'}, 'prepend-path': {'patternProperties': {'\\w[\\w-]*': {'anyOf': [{'type': 'string'}, {'type': 'number'}]}}, 'type': 'object'}, 'unset': {'patternProperties': {'\\w[\\w-]*': {'type': 'null'}}, 'type': 'object'}, 'set': {'patternProperties': {'\\w[\\w-]*': {'anyOf': [{'type': 'string'}, {'type': 'number'}]}}, 'type': 'object'}}, 'additionalProperties': False, 'default': {}, 'type': 'object'}, 'target': {'type': 'string'}, 'flags': {'additionalProperties': False, 'properties': {'fflags': {'anyOf': [{'type': 'string'}, {'type': 'null'}]}, 'ldflags': {'anyOf': [{'type': 'string'}, {'type': 'null'}]}, 'cppflags': {'anyOf': [{'type': 'string'}, {'type': 'null'}]}, 'cflags': {'anyOf': [{'type': 'string'}, {'type': 'null'}]}, 'ldlibs': {'anyOf': [{'type': 'string'}, {'type': 'null'}]}, 'cxxflags': {'anyOf': [{'type': 'string'}, {'type': 'null'}]}}, 'type': 'object'}, 'paths': {'additionalProperties': False, 'properties': {'fc': {'anyOf': [{'type': 'string'}, {'type': 'null'}]}, 'cc': {'anyOf': [{'type': 'string'}, {'type': 'null'}]}, 'cxx': {'anyOf': [{'type': 'string'}, {'type': 'null'}]}, 'f77': 
{'anyOf': [{'type': 'string'}, {'type': 'null'}]}}, 'required': ['cc', 'cxx', 'f77', 'fc'], 'type': 'object'}, 'alias': {'anyOf': [{'type': 'string'}, {'type': 'null'}]}}, 'required': ['paths', 'spec', 'modules', 'operating_system'], 'type': 'object'}}, 'type': 'object'}], 'type': 'array'}}, 'additionalProperties': False, 'type': 'object'}, 'config': {'$schema': 'http://json-schema.org/schema#', 'title': 'Spack core configuration file schema', 'properties': {'config': {'properties': {'install_path_scheme': {'type': 'string'}, 'locks': {'type': 'boolean'}, 'db_lock_timeout': {'minimum': 1, 'type': 'integer'}, 'module_roots': {'additionalProperties': False, 'properties': {'tcl': {'type': 'string'}, 'lmod': {'type': 'string'}, 'dotkit': {'type': 'string'}}, 'type': 'object'}, 'template_dirs': {'items': {'type': 'string'}, 'type': 'array'}, 'ccache': {'type': 'boolean'}, 'install_hash_length': {'minimum': 1, 'type': 'integer'}, 'source_cache': {'type': 'string'}, 'checksum': {'type': 'boolean'}, 'verify_ssl': {'type': 'boolean'}, 'install_tree': {'type': 'string'}, 'build_jobs': {'minimum': 1, 'type': 'integer'}, 'debug': {'type': 'boolean'}, 'build_stage': {'oneOf': [{'type': 'string'}, {'items': {'type': 'string'}, 'type': 'array'}]}, 'misc_cache': {'type': 'string'}, 'build_language': {'type': 'string'}, 'dirty': {'type': 'boolean'}, 'package_lock_timeout': {'anyOf': [{'minimum': 1, 'type': 'integer'}, {'type': 'null'}]}}, 'default': {}, 'type': 'object'}}, 'additionalProperties': False, 'type': 'object'}, 'mirrors': {'$schema': 'http://json-schema.org/schema#', 'title': 'Spack mirror configuration file schema', 'properties': {'mirrors': {'additionalProperties': False, 'patternProperties': {'\\w[\\w-]*': {'type': 'string'}}, 'default': {}, 'type': 'object'}}, 'additionalProperties': False, 'type': 'object'}, 'modules': {'$schema': 'http://json-schema.org/schema#', 'title': 'Spack module file configuration file schema', 'definitions': {'module_type_configuration': 
{'anyOf': [{'properties': {'blacklist_implicits': {'default': False, 'type': 'boolean'}, 'naming_scheme': {'type': 'string'}, 'verbose': {'default': False, 'type': 'boolean'}, 'hash_length': {'minimum': 0, 'default': 7, 'type': 'integer'}, 'blacklist': {'$ref': '#/definitions/array_of_strings'}, 'whitelist': {'$ref': '#/definitions/array_of_strings'}}}, {'patternProperties': {'\\w[\\w-]*': {'$ref': '#/definitions/module_file_configuration'}}}], 'default': {}, 'type': 'object'}, 'module_file_configuration': {'properties': {'conflict': {'$ref': '#/definitions/array_of_strings'}, 'filter': {'properties': {'environment_blacklist': {'items': {'type': 'string'}, 'default': [], 'type': 'array'}}, 'additionalProperties': False, 'default': {}, 'type': 'object'}, 'suffixes': {'$ref': '#/definitions/dictionary_of_strings'}, 'environment': {'properties': {'append_path': {'$ref': '#/definitions/dictionary_of_strings'}, 'prepend_path': {'$ref': '#/definitions/dictionary_of_strings'}, 'unset': {'$ref': '#/definitions/array_of_strings'}, 'set': {'$ref': '#/definitions/dictionary_of_strings'}}, 'additionalProperties': False, 'default': {}, 'type': 'object'}, 'load': {'$ref': '#/definitions/array_of_strings'}, 'prerequisites': {'$ref': '#/definitions/dependency_selection'}, 'autoload': {'$ref': '#/definitions/dependency_selection'}, 'template': {'type': 'string'}}, 'additionalProperties': False, 'default': {}, 'type': 'object'}, 'array_of_strings': {'items': {'type': 'string'}, 'default': [], 'type': 'array'}, 'dictionary_of_strings': {'patternProperties': {'\\w[\\w-]*': {'type': 'string'}}, 'type': 'object'}, 'dependency_selection': {'enum': ['none', 'direct', 'all'], 'type': 'string'}}, 'additionalProperties': False, 'properties': {'modules': {'properties': {'dotkit': {'allOf': [{'$ref': '#/definitions/module_type_configuration'}, {}]}, 'prefix_inspections': {'patternProperties': {'\\w[\\w-]*': {'$ref': '#/definitions/array_of_strings'}}, 'type': 'object'}, 'lmod': {'allOf': 
[{'$ref': '#/definitions/module_type_configuration'}, {'hierarchical_scheme': {'$ref': '#/definitions/array_of_strings'}, 'core_compilers': {'$ref': '#/definitions/array_of_strings'}}]}, 'tcl': {'allOf': [{'$ref': '#/definitions/module_type_configuration'}, {}]}, 'enable': {'items': {'enum': ['tcl', 'dotkit', 'lmod'], 'type': 'string'}, 'default': [], 'type': 'array'}}, 'additionalProperties': False, 'default': {}, 'type': 'object'}}, 'type': 'object'}, 'packages': {'$schema': 'http://json-schema.org/schema#', 'title': 'Spack package configuration file schema', 'properties': {'packages': {'additionalProperties': False, 'patternProperties': {'\\w[\\w-]*': {'properties': {'providers': {'additionalProperties': False, 'patternProperties': {'\\w[\\w-]*': {'items': {'type': 'string'}, 'default': [], 'type': 'array'}}, 'default': {}, 'type': 'object'}, 'compiler': {'items': {'type': 'string'}, 'default': [], 'type': 'array'}, 'variants': {'oneOf': [{'type': 'string'}, {'items': {'type': 'string'}, 'type': 'array'}]}, 'modules': {'default': {}, 'type': 'object'}, 'paths': {'default': {}, 'type': 'object'}, 'permissions': {'additionalProperties': False, 'properties': {'group': {'type': 'string'}, 'read': {'enum': ['user', 'group', 'world'], 'type': 'string'}, 'write': {'enum': ['user', 'group', 'world'], 'type': 'string'}}, 'type': 'object'}, 'version': {'items': {'anyOf': [{'type': 'string'}, {'type': 'number'}]}, 'default': [], 'type': 'array'}, 'buildable': {'default': True, 'type': 'boolean'}}, 'additionalProperties': False, 'default': {}, 'type': 'object'}}, 'default': {}, 'type': 'object'}}, 'additionalProperties': False, 'type': 'object'}, 'repos': {'$schema': 'http://json-schema.org/schema#', 'title': 'Spack repository configuration file schema', 'properties': {'repos': {'items': {'type': 'string'}, 'default': [], 'type': 'array'}}, 'additionalProperties': False, 'type': 'object'}}*[¶](#spack.config.section_schemas) Dict from section names -> schema for that section 
`spack.config.``set`(*path*, *value*, *scope=None*)[¶](#spack.config.set) Convenience function for setting single values in config files. Accepts the path syntax described in `get()`.
### spack.database module[¶](#module-spack.database)
Spack’s installation tracking database. The database serves two purposes:
> 1. It implements a cache on top of a potentially very large Spack directory hierarchy, speeding up many operations that would otherwise require filesystem access.
> 2. It will allow us to track external installations as well as lost packages and their dependencies.
Prior to the implementation of this store, a directory layout served as the authoritative database of packages in Spack. This module provides a cache and a sanity checking mechanism for what is in the filesystem.
*exception* `spack.database.``CorruptDatabaseError`(*message*, *long_message=None*)[¶](#spack.database.CorruptDatabaseError) Bases: [`spack.error.SpackError`](#spack.error.SpackError) Raised when errors are found while reading the database.
*class* `spack.database.``Database`(*root*, *db_dir=None*)[¶](#spack.database.Database) Bases: `object` Per-process lock objects for each install prefix.
`activated_extensions_for`(*spec_like*, **args*, ***kwargs*)[¶](#spack.database.Database.activated_extensions_for)
`add`(*spec_like*, **args*, ***kwargs*)[¶](#spack.database.Database.add)
`get_record`(*spec_like*, **args*, ***kwargs*)[¶](#spack.database.Database.get_record)
`installed_extensions_for`(*spec_like*, **args*, ***kwargs*)[¶](#spack.database.Database.installed_extensions_for)
`installed_relatives`(*spec_like*, **args*, ***kwargs*)[¶](#spack.database.Database.installed_relatives)
`missing`(*spec*)[¶](#spack.database.Database.missing)
`prefix_lock`(*spec*)[¶](#spack.database.Database.prefix_lock) Get a lock on a particular spec’s installation directory. NOTE: The installation directory **does not** need to exist. A prefix lock is a byte-range lock on the nth byte of a file.
The lock file is `spack.store.db.prefix_lock` – the DB tells us what to call it and it lives alongside the install DB. n is the sys.maxsize-bit prefix of the DAG hash. This makes the likelihood of collision very low, and it gives us readers-writer lock semantics with just a single lockfile, so no cleanup is required.
`prefix_read_lock`(*spec*)[¶](#spack.database.Database.prefix_read_lock)
`prefix_write_lock`(*spec*)[¶](#spack.database.Database.prefix_write_lock)
`query`(*query_spec=<built-in function any>*, *known=<built-in function any>*, *installed=True*, *explicit=<built-in function any>*, *start_date=None*, *end_date=None*, *hashes=None*)[¶](#spack.database.Database.query) Run a query on the database
| Parameters: |
* **query_spec** – queries iterate through specs in the database and return those that satisfy the supplied `query_spec`. If query_spec is `any`, this will match all specs in the database. If it is a spec, we’ll evaluate `spec.satisfies(query_spec)`
* **known** (*bool* *or* *any**,* *optional*) – Specs that are “known” are those for which Spack can locate a `package.py` file – i.e., Spack “knows” how to install them. Specs that are unknown may represent packages that existed in a previous version of Spack, but have since either changed their name or been removed
* **installed** (*bool* *or* *any**,* *optional*) – Specs for which a prefix exists are “installed”. A spec that is NOT installed will be in the database if some other spec depends on it but its installation has gone away since Spack installed it.
* **explicit** (*bool* *or* *any**,* *optional*) – A spec that was installed following a specific user request is marked as explicit. If instead it was pulled in as a dependency of a user-requested spec, it’s considered implicit.
* **start_date** (*datetime**,* *optional*) – filters the query, discarding specs that were installed before `start_date`.
* **end_date** (*datetime**,* *optional*) – filters the query discarding specs that have been installed after `end_date`. * **hashes** (*container*) – list or set of hashes that we can use to restrict the search | | Returns: | list of specs that match the query | `query_one`(*query_spec*, *known=<built-in function any>*, *installed=True*)[¶](#spack.database.Database.query_one) Query for exactly one spec that matches the query spec. Raises an assertion error if more than one spec matches the query. Returns None if no installed package matches. `read_transaction`()[¶](#spack.database.Database.read_transaction) Get a read lock context manager for use in a with block. `reindex`(*directory_layout*)[¶](#spack.database.Database.reindex) Build database index from scratch based on a directory layout. Locks the DB if it isn’t locked already. `remove`(*spec_like*, **args*, ***kwargs*)[¶](#spack.database.Database.remove) `write_transaction`()[¶](#spack.database.Database.write_transaction) Get a write lock context manager for use in a with block. *class* `spack.database.``InstallRecord`(*spec*, *path*, *installed*, *ref_count=0*, *explicit=False*, *installation_time=None*)[¶](#spack.database.InstallRecord) Bases: `object` A record represents one installation in the DB. The record keeps track of the spec for the installation, its install path, AND whether or not it is installed. We need the installed flag in case a user either: > 1. blew away a directory, or > 2. used spack uninstall -f to get rid of it If, in either case, the package was removed but others still depend on it, we still need to track its spec, so we don’t actually remove from the database until a spec has no installed dependents left. 
| Parameters: |
* **spec** (*Spec*) – spec tracked by the install record
* **path** (*str*) – path where the spec has been installed
* **installed** (*bool*) – whether or not the spec is currently installed
* **ref_count** (*int*) – number of specs that depend on this one
* **explicit** (*bool**,* *optional*) – whether or not this spec was explicitly installed, or pulled in as a dependency of something else
* **installation_time** (*time**,* *optional*) – time of the installation
|
*classmethod* `from_dict`(*spec*, *dictionary*)[¶](#spack.database.InstallRecord.from_dict)
`to_dict`()[¶](#spack.database.InstallRecord.to_dict)
*exception* `spack.database.``InvalidDatabaseVersionError`(*expected*, *found*)[¶](#spack.database.InvalidDatabaseVersionError) Bases: [`spack.error.SpackError`](#spack.error.SpackError)
*exception* `spack.database.``NonConcreteSpecAddError`(*message*, *long_message=None*)[¶](#spack.database.NonConcreteSpecAddError) Bases: [`spack.error.SpackError`](#spack.error.SpackError) Raised when attempting to add a non-concrete spec to the DB.
### spack.dependency module[¶](#module-spack.dependency)
Data structures that represent Spack’s dependency relationships.
*class* `spack.dependency.``Dependency`(*pkg*, *spec*, *type=('build'*, *'link')*)[¶](#spack.dependency.Dependency) Bases: `object` Class representing metadata for a dependency on a package. This class differs from `spack.spec.DependencySpec` because it represents metadata at the `Package` level. `spack.spec.DependencySpec` is a descriptor for an actual package configuration, while `Dependency` is a descriptor for a package’s dependency *requirements*. A dependency is a requirement for a configuration of another package that satisfies a particular spec. The dependency can have *types*, which determine *how* that package configuration is required, e.g.
whether it is required for building the package, whether it needs to be linked to, or whether it is needed at runtime so that Spack can call commands from it.
A package can also depend on another package with *patches*. This is for cases where the maintainers of one package also maintain special patches for their dependencies. If one package depends on another with patches, a special version of that dependency with patches applied will be built for use by the dependent package. The patches are included in the new version’s spec hash to differentiate it from unpatched versions of the same package, so that unpatched versions of the dependency package can coexist with the patched version.
`merge`(*other*)[¶](#spack.dependency.Dependency.merge) Merge constraints, deptypes, and patches of other into self.
`name`[¶](#spack.dependency.Dependency.name) Get the name of the dependency package.
`spack.dependency.``all_deptypes` *= ('build', 'link', 'run', 'test')*[¶](#spack.dependency.all_deptypes) The types of dependency relationships that Spack understands.
`spack.dependency.``canonical_deptype`(*deptype*)[¶](#spack.dependency.canonical_deptype) Convert deptype to a canonical sorted tuple, or raise ValueError.
| Parameters: | **deptype** (*str* *or* *list* *or* *tuple*) – string representing dependency type, or a list/tuple of such strings. Can also be the builtin function `all` or the string ‘all’, which result in a tuple of all dependency types known to Spack. |
`spack.dependency.``default_deptype` *= ('build', 'link')*[¶](#spack.dependency.default_deptype) Default dependency type if none is specified
### spack.directives module[¶](#module-spack.directives)
This package contains directives that can be used within a package. Directives are functions that can be called inside a package definition to modify the package, for example:

```
class OpenMpi(Package):
    depends_on("hwloc")
    provides("mpi")
```
 `provides` and `depends_on` are spack directives. The available directives are: > * `version` > * `depends_on` > * `provides` > * `extends` > * `patch` > * `variant` > * `resource` `spack.directives.``version`(*ver*, *checksum=None*, ***kwargs*)[¶](#spack.directives.version) Adds a version and metadata describing how to fetch its source code. Metadata is stored as a dict of `kwargs` in the package class’s `versions` dictionary. The `dict` of arguments is turned into a valid fetch strategy later. See `spack.fetch_strategy.for_package_version()`. `spack.directives.``conflicts`(*conflict_spec*, *when=None*, *msg=None*)[¶](#spack.directives.conflicts) Allows a package to define a conflict. Currently, a “conflict” is a concretized configuration that is known to be non-valid. For example, a package that is known not to be buildable with intel compilers can declare: ``` conflicts('%intel') ``` To express the same constraint only when the ‘foo’ variant is activated: ``` conflicts('%intel', when='+foo') ``` | Parameters: | * **conflict_spec** (*Spec*) – constraint defining the known conflict * **when** (*Spec*) – optional constraint that triggers the conflict * **msg** (*str*) – optional user defined message | `spack.directives.``depends_on`(*spec*, *when=None*, *type=('build'*, *'link')*, *patches=None*)[¶](#spack.directives.depends_on) Creates a dict of deps with specs defining when they apply. | Parameters: | * **spec** (*Spec* *or* *str*) – the package and constraints depended on * **when** (*Spec* *or* *str*) – when the dependent satisfies this, it has the dependency represented by `spec` * **type** (*str* *or* *tuple of str*) – str or tuple of legal Spack deptypes * **patches** (*obj* *or* *list*) – single result of `patch()` directive, a `str` to be passed to `patch`, or a list of these | This directive is to be used inside a Package definition to declare that the package requires other packages to be built first. 
@see The section “Dependency specs” in the Spack Packaging Guide. `spack.directives.``extends`(*spec*, ***kwargs*)[¶](#spack.directives.extends) Same as depends_on, but allows symlinking into dependency’s prefix tree. This is for Python and other language modules where the module needs to be installed into the prefix of the Python installation. Spack handles this by installing modules into their own prefix, but allowing ONE module version to be symlinked into a parent Python install at a time, using `spack activate`. keyword arguments can be passed to extends() so that extension packages can pass parameters to the extendee’s extension mechanism. `spack.directives.``provides`(**specs*, ***kwargs*)[¶](#spack.directives.provides) Allows packages to provide a virtual dependency. If a package provides ‘mpi’, other packages can declare that they depend on “mpi”, and spack can use the providing package to satisfy the dependency. `spack.directives.``patch`(*url_or_filename*, *level=1*, *when=None*, *working_dir='.'*, ***kwargs*)[¶](#spack.directives.patch) Packages can declare patches to apply to source. You can optionally provide a when spec to indicate that a particular patch should only be applied when the package’s spec meets certain conditions (e.g. a particular version). 
| Parameters: | * **url_or_filename** (*str*) – url or filename of the patch * **level** (*int*) – patch level (as in the patch shell command) * **when** (*Spec*) – optional anonymous spec that specifies when to apply the patch * **working_dir** (*str*) – dir to change to before applying | | Keyword Arguments: | | | * **sha256** (*str*) – sha256 sum of the patch, used to verify the patch (only required for URL patches) * **archive_sha256** (*str*) – sha256 sum of the *archive*, if the patch is compressed (only required for compressed URL patches) | `spack.directives.``variant`(*name*, *default=None*, *description=''*, *values=None*, *multi=False*, *validator=None*)[¶](#spack.directives.variant) Define a variant for the package. Packager can specify a default value as well as a text description. | Parameters: | * **name** (*str*) – name of the variant * **default** (*str* *or* *bool*) – default value for the variant, if not specified otherwise the default will be False for a boolean variant and ‘nothing’ for a multi-valued variant * **description** (*str*) – description of the purpose of the variant * **values** (*tuple* *or* *callable*) – either a tuple of strings containing the allowed values, or a callable accepting one value and returning True if it is valid * **multi** (*bool*) – if False only one value per spec is allowed for this variant * **validator** (*callable*) – optional group validator to enforce additional logic. It receives a tuple of values and should raise an instance of SpackError if the group doesn’t meet the additional constraints | `spack.directives.``resource`(***kwargs*)[¶](#spack.directives.resource) Define an external resource to be fetched and staged when building the package. Based on the keywords present in the dictionary the appropriate FetchStrategy will be used for the resource. Resources are fetched and staged in their own folder inside spack stage area, and then moved into the stage area of the package that needs them. 
List of recognized keywords:
* ‘when’ : (optional) represents the condition upon which the resource is needed
* ‘destination’ : (optional) path where to move the resource. This path must be relative to the main package stage area.
* ‘placement’ : (optional) gives the possibility to fine-tune how the resource is moved into the main package stage area.
### spack.directory_layout module[¶](#module-spack.directory_layout)
*class* `spack.directory_layout.``DirectoryLayout`(*root*)[¶](#spack.directory_layout.DirectoryLayout) Bases: `object` A directory layout is used to associate unique paths with specs. Different installations are going to want different layouts for their install, and they can use this to customize the nesting structure of spack installs.
`all_specs`()[¶](#spack.directory_layout.DirectoryLayout.all_specs) To be implemented by subclasses to traverse all specs for which there is a directory within the root.
`check_installed`(*spec*)[¶](#spack.directory_layout.DirectoryLayout.check_installed) Checks whether a spec is installed. Return the spec’s prefix, if it is installed, None otherwise. Raise an exception if the install is inconsistent or corrupt.
`create_install_directory`(*spec*)[¶](#spack.directory_layout.DirectoryLayout.create_install_directory) Creates the installation directory for a spec.
`hidden_file_paths`[¶](#spack.directory_layout.DirectoryLayout.hidden_file_paths) Return a list of hidden files used by the directory layout. Paths are relative to the root of an install directory. If the directory layout uses no hidden files to maintain state, this should return an empty container, e.g. [] or (,).
`path_for_spec`(*spec*)[¶](#spack.directory_layout.DirectoryLayout.path_for_spec) Return absolute path from the root to a directory for the spec.
`relative_path_for_spec`(*spec*)[¶](#spack.directory_layout.DirectoryLayout.relative_path_for_spec) Implemented by subclasses to return a relative path from the install root to a unique location for the provided spec.
`remove_install_directory`(*spec*)[¶](#spack.directory_layout.DirectoryLayout.remove_install_directory) Removes a prefix and any empty parent directories from the root. Raises RemoveFailedError if something goes wrong.
*exception* `spack.directory_layout.``DirectoryLayoutError`(*message*, *long_msg=None*)[¶](#spack.directory_layout.DirectoryLayoutError) Bases: [`spack.error.SpackError`](#spack.error.SpackError) Superclass for directory layout errors.
*exception* `spack.directory_layout.``ExtensionAlreadyInstalledError`(*spec*, *ext_spec*)[¶](#spack.directory_layout.ExtensionAlreadyInstalledError) Bases: [`spack.directory_layout.DirectoryLayoutError`](#spack.directory_layout.DirectoryLayoutError) Raised when an extension is added to a package that already has it.
*exception* `spack.directory_layout.``ExtensionConflictError`(*spec*, *ext_spec*, *conflict*)[¶](#spack.directory_layout.ExtensionConflictError) Bases: [`spack.directory_layout.DirectoryLayoutError`](#spack.directory_layout.DirectoryLayoutError) Raised when an extension is added to a package that already has it.
*class* `spack.directory_layout.``ExtensionsLayout`(*root*, ***kwargs*)[¶](#spack.directory_layout.ExtensionsLayout) Bases: `object` A directory layout is used to associate unique paths with specs for package extensions. Keeps track of which extensions are activated for what package. Depending on the use case, this can mean globally activated extensions directly in the installation folder - or extensions activated in filesystem views.
`add_extension`(*spec*, *ext_spec*)[¶](#spack.directory_layout.ExtensionsLayout.add_extension) Add to the list of currently installed extensions.
`check_activated`(*spec*, *ext_spec*)[¶](#spack.directory_layout.ExtensionsLayout.check_activated) Ensure that ext_spec can be removed from spec. If not, raise NoSuchExtensionError. `check_extension_conflict`(*spec*, *ext_spec*)[¶](#spack.directory_layout.ExtensionsLayout.check_extension_conflict) Ensure that ext_spec can be activated in spec. If not, raise ExtensionAlreadyInstalledError or ExtensionConflictError. `extendee_target_directory`(*extendee*)[¶](#spack.directory_layout.ExtensionsLayout.extendee_target_directory) Specify to which full path extendee should link all files from extensions. `extension_map`(*spec*)[¶](#spack.directory_layout.ExtensionsLayout.extension_map) Get a dict of currently installed extension packages for a spec. Dict maps { name : extension_spec } Modifying dict does not affect internals of this layout. `remove_extension`(*spec*, *ext_spec*)[¶](#spack.directory_layout.ExtensionsLayout.remove_extension) Remove from the list of currently installed extensions. *exception* `spack.directory_layout.``InconsistentInstallDirectoryError`(*message*, *long_msg=None*)[¶](#spack.directory_layout.InconsistentInstallDirectoryError) Bases: [`spack.directory_layout.DirectoryLayoutError`](#spack.directory_layout.DirectoryLayoutError) Raised when a package seems to be installed to the wrong place. *exception* `spack.directory_layout.``InstallDirectoryAlreadyExistsError`(*path*)[¶](#spack.directory_layout.InstallDirectoryAlreadyExistsError) Bases: [`spack.directory_layout.DirectoryLayoutError`](#spack.directory_layout.DirectoryLayoutError) Raised when create_install_directory is called unnecessarily. 
*exception* `spack.directory_layout.``InvalidDirectoryLayoutParametersError`(*message*, *long_msg=None*)[¶](#spack.directory_layout.InvalidDirectoryLayoutParametersError) Bases: [`spack.directory_layout.DirectoryLayoutError`](#spack.directory_layout.DirectoryLayoutError) Raised when invalid directory layout parameters are supplied.
*exception* `spack.directory_layout.``InvalidExtensionSpecError`(*message*, *long_msg=None*)[¶](#spack.directory_layout.InvalidExtensionSpecError) Bases: [`spack.directory_layout.DirectoryLayoutError`](#spack.directory_layout.DirectoryLayoutError) Raised when an extension file has a bad spec in it.
*exception* `spack.directory_layout.``NoSuchExtensionError`(*spec*, *ext_spec*)[¶](#spack.directory_layout.NoSuchExtensionError) Bases: [`spack.directory_layout.DirectoryLayoutError`](#spack.directory_layout.DirectoryLayoutError) Raised when an extension isn’t there on deactivate.
*exception* `spack.directory_layout.``RemoveFailedError`(*installed_spec*, *prefix*, *error*)[¶](#spack.directory_layout.RemoveFailedError) Bases: [`spack.directory_layout.DirectoryLayoutError`](#spack.directory_layout.DirectoryLayoutError) Raised when a DirectoryLayout cannot remove an install prefix.
*exception* `spack.directory_layout.``SpecHashCollisionError`(*installed_spec*, *new_spec*)[¶](#spack.directory_layout.SpecHashCollisionError) Bases: [`spack.directory_layout.DirectoryLayoutError`](#spack.directory_layout.DirectoryLayoutError) Raised when there is a hash collision in an install layout.
*exception* `spack.directory_layout.``SpecReadError`(*message*, *long_msg=None*)[¶](#spack.directory_layout.SpecReadError) Bases: [`spack.directory_layout.DirectoryLayoutError`](#spack.directory_layout.DirectoryLayoutError) Raised when directory layout can’t read a spec.
*class* `spack.directory_layout.``YamlDirectoryLayout`(*root*, ***kwargs*)[¶](#spack.directory_layout.YamlDirectoryLayout) Bases: [`spack.directory_layout.DirectoryLayout`](#spack.directory_layout.DirectoryLayout) By default lays out installation directories like this:

```
<install root>/
    <platform-os-target>/
        <compiler>-<compiler version>/
            <name>-<version>-<hash>
```

The hash here is a SHA-1 hash for the full DAG plus the build spec. TODO: implement the build spec. The installation directory scheme can be modified with the arguments hash_len and path_scheme.
`all_specs`()[¶](#spack.directory_layout.YamlDirectoryLayout.all_specs) To be implemented by subclasses to traverse all specs for which there is a directory within the root.
`build_env_path`(*spec*)[¶](#spack.directory_layout.YamlDirectoryLayout.build_env_path)
`build_log_path`(*spec*)[¶](#spack.directory_layout.YamlDirectoryLayout.build_log_path)
`build_packages_path`(*spec*)[¶](#spack.directory_layout.YamlDirectoryLayout.build_packages_path)
`check_installed`(*spec*)[¶](#spack.directory_layout.YamlDirectoryLayout.check_installed) Checks whether a spec is installed. Return the spec’s prefix, if it is installed, None otherwise. Raise an exception if the install is inconsistent or corrupt.
`create_install_directory`(*spec*)[¶](#spack.directory_layout.YamlDirectoryLayout.create_install_directory) Creates the installation directory for a spec.
`hidden_file_paths`[¶](#spack.directory_layout.YamlDirectoryLayout.hidden_file_paths) Return a list of hidden files used by the directory layout. Paths are relative to the root of an install directory. If the directory layout uses no hidden files to maintain state, this should return an empty container, e.g. [] or (,).
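The default path scheme described above can be made concrete with a small helper that joins the three components. The function and its flat argument list are hypothetical (the real layout derives these fields from a spec object), shown only to illustrate the scheme:

```python
import os.path

def relative_path_for_spec(platform, operating_system, target,
                           compiler, compiler_version,
                           name, version, dag_hash, hash_len=7):
    """Sketch of <platform-os-target>/<compiler>-<version>/<name>-<version>-<hash>.

    Hypothetical helper, not Spack's API; hash_len=7 mirrors the
    truncated-hash convention seen elsewhere in the docs.
    """
    return os.path.join(
        "%s-%s-%s" % (platform, operating_system, target),
        "%s-%s" % (compiler, compiler_version),
        "%s-%s-%s" % (name, version, dag_hash[:hash_len]),
    )

print(relative_path_for_spec("linux", "ubuntu18.04", "x86_64",
                             "gcc", "8.3.0",
                             "zlib", "1.2.11", "abcdef1234567890"))
```

Truncating the DAG hash keeps prefixes readable while still making each installation's directory unique with high probability; collisions are what SpecHashCollisionError guards against.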
`metadata_path`(*spec*)[¶](#spack.directory_layout.YamlDirectoryLayout.metadata_path) `read_spec`(*path*)[¶](#spack.directory_layout.YamlDirectoryLayout.read_spec) Read the contents of a file and parse them as a spec `relative_path_for_spec`(*spec*)[¶](#spack.directory_layout.YamlDirectoryLayout.relative_path_for_spec) Implemented by subclasses to return a relative path from the install root to a unique location for the provided spec. `spec_file_path`(*spec*)[¶](#spack.directory_layout.YamlDirectoryLayout.spec_file_path) Gets full path to spec file `specs_by_hash`()[¶](#spack.directory_layout.YamlDirectoryLayout.specs_by_hash) `write_spec`(*spec*, *path*)[¶](#spack.directory_layout.YamlDirectoryLayout.write_spec) Write a spec out to a file. *class* `spack.directory_layout.``YamlViewExtensionsLayout`(*root*, *layout*)[¶](#spack.directory_layout.YamlViewExtensionsLayout) Bases: [`spack.directory_layout.ExtensionsLayout`](#spack.directory_layout.ExtensionsLayout) Maintain extensions within a view. `add_extension`(*spec*, *ext_spec*)[¶](#spack.directory_layout.YamlViewExtensionsLayout.add_extension) Add to the list of currently installed extensions. `check_activated`(*spec*, *ext_spec*)[¶](#spack.directory_layout.YamlViewExtensionsLayout.check_activated) Ensure that ext_spec can be removed from spec. If not, raise NoSuchExtensionError. `check_extension_conflict`(*spec*, *ext_spec*)[¶](#spack.directory_layout.YamlViewExtensionsLayout.check_extension_conflict) Ensure that ext_spec can be activated in spec. If not, raise ExtensionAlreadyInstalledError or ExtensionConflictError. `extension_file_path`(*spec*)[¶](#spack.directory_layout.YamlViewExtensionsLayout.extension_file_path) Gets full path to an installed package’s extension file, which keeps track of all the extensions for that package which have been added to this view. 
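`extension_map` is documented (both here and on ExtensionsLayout above) as returning a copy so that callers cannot corrupt the layout’s internal state. That defensive-copying pattern, sketched generically with a hypothetical class rather than Spack’s code:

```python
import copy

class ViewExtensions:
    """Sketch: track active extensions per package, exposing only copies."""

    def __init__(self):
        # extendee name -> {extension name: extension spec string}
        self._extensions = {}

    def add_extension(self, pkg, ext_name, ext_spec):
        self._extensions.setdefault(pkg, {})[ext_name] = ext_spec

    def extension_map(self, pkg):
        # Deep-copy so mutating the returned dict can't affect internals.
        return copy.deepcopy(self._extensions.get(pkg, {}))

view = ViewExtensions()
view.add_extension("python", "py-six", "[email protected]")
m = view.extension_map("python")
m["py-six"] = "tampered"          # caller mutates its copy...
print(view.extension_map("python"))  # ...internal state is unaffected
```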
`extension_map`(*spec*)[¶](#spack.directory_layout.YamlViewExtensionsLayout.extension_map) Defensive copying version of _extension_map() for external API. `remove_extension`(*spec*, *ext_spec*)[¶](#spack.directory_layout.YamlViewExtensionsLayout.remove_extension) Remove from the list of currently installed extensions. ### spack.environment module[¶](#module-spack.environment) *class* `spack.environment.``Environment`(*path*, *init_file=None*)[¶](#spack.environment.Environment) Bases: `object` `active`[¶](#spack.environment.Environment.active) True if this environment is currently active. `add`(*user_spec*)[¶](#spack.environment.Environment.add) Add a single user_spec (non-concretized) to the Environment | Returns: | True if the spec was added, False if it was already present and did not need to be added | | Return type: | (bool) | `added_specs`()[¶](#spack.environment.Environment.added_specs) Specs that are not yet installed. Yields the user spec for non-concretized specs, and the concrete spec for already concretized but not yet installed specs. `all_hashes`()[¶](#spack.environment.Environment.all_hashes) Return all specs, even those a user spec would shadow. `all_specs`()[¶](#spack.environment.Environment.all_specs) Return all specs, even those a user spec would shadow. `all_specs_by_hash`()[¶](#spack.environment.Environment.all_specs_by_hash) Map of hashes to spec for all specs in this environment. `clear`()[¶](#spack.environment.Environment.clear) `concretize`(*force=False*)[¶](#spack.environment.Environment.concretize) Concretize user_specs in this environment. Only concretizes specs that haven’t been concretized yet unless force is `True`. This only modifies the environment in memory. `write()` will write out a lockfile containing concretized specs. 
| Parameters: | **force** (*bool*) – re-concretize ALL specs, even those that were already concretized | `concretized_specs`()[¶](#spack.environment.Environment.concretized_specs) Tuples of (user spec, concrete spec) for all concrete specs. `config_scopes`()[¶](#spack.environment.Environment.config_scopes) A list of all configuration scopes for this environment. `destroy`()[¶](#spack.environment.Environment.destroy) Remove this environment from Spack entirely. `env_file_config_scope`()[¶](#spack.environment.Environment.env_file_config_scope) Get the configuration scope for the environment’s manifest file. `env_file_config_scope_name`()[¶](#spack.environment.Environment.env_file_config_scope_name) Name of the config scope of this environment’s manifest file. `env_subdir_path`[¶](#spack.environment.Environment.env_subdir_path) Path to directory where the env stores repos, logs, views. `included_config_scopes`()[¶](#spack.environment.Environment.included_config_scopes) List of included configuration scopes from the environment. Scopes are listed in the YAML file in order from highest to lowest precedence, so configuration from earlier scope will take precedence over later ones. This routine returns them in the order they should be pushed onto the internal scope stack (so, in reverse, from lowest to highest). `install`(*user_spec*, *concrete_spec=None*, ***install_args*)[¶](#spack.environment.Environment.install) Install a single spec into an environment. This will automatically concretize the single spec, but it won’t affect other as-yet unconcretized specs. `install_all`(*args=None*)[¶](#spack.environment.Environment.install_all) Install all concretized specs in an environment. `internal`[¶](#spack.environment.Environment.internal) Whether this environment is managed by Spack. `lock_path`[¶](#spack.environment.Environment.lock_path) Path to spack.lock file in this environment. 
`log_path`[¶](#spack.environment.Environment.log_path) `manifest_path`[¶](#spack.environment.Environment.manifest_path) Path to spack.yaml file in this environment. `name`[¶](#spack.environment.Environment.name) Human-readable representation of the environment. This is the path for directory environments, and just the name for named environments. `remove`(*query_spec*, *force=False*)[¶](#spack.environment.Environment.remove) Remove specs from an environment that match a query_spec. `removed_specs`()[¶](#spack.environment.Environment.removed_specs) Tuples of (user spec, concrete spec) for all specs that will be removed on next concretize. `repo`[¶](#spack.environment.Environment.repo) `repos_path`[¶](#spack.environment.Environment.repos_path) `roots`()[¶](#spack.environment.Environment.roots) Specs explicitly requested by the user *in this environment*. Yields both added and installed specs that have user specs in spack.yaml. `write`()[¶](#spack.environment.Environment.write) Writes an in-memory environment to its location on disk. This will also write out package files for each newly concretized spec. *exception* `spack.environment.``SpackEnvironmentError`(*message*, *long_message=None*)[¶](#spack.environment.SpackEnvironmentError) Bases: [`spack.error.SpackError`](#spack.error.SpackError) Superclass for all errors to do with Spack environments. `spack.environment.``activate`(*env*, *use_env_repo=False*)[¶](#spack.environment.activate) Activate an environment. To activate an environment, we add its configuration scope to the existing Spack configuration, and we set active to the current environment. | Parameters: | * **env** (*Environment*) – the environment to activate * **use_env_repo** (*bool*) – use the packages exactly as they appear in the environment’s repository | TODO: Add support for views here. Activation should set up the shell environment to use the activated spack environment.
`spack.environment.``active`(*name*)[¶](#spack.environment.active) True if the named environment is active. `spack.environment.``all_environment_names`()[¶](#spack.environment.all_environment_names) List the names of environments that currently exist. `spack.environment.``all_environments`()[¶](#spack.environment.all_environments) Generator for all named Environments. `spack.environment.``config_dict`(*yaml_data*)[¶](#spack.environment.config_dict) Get the configuration scope section out of a spack.yaml. `spack.environment.``create`(*name*, *init_file=None*)[¶](#spack.environment.create) Create a named environment in Spack. `spack.environment.``deactivate`()[¶](#spack.environment.deactivate) Undo any configuration or repo settings modified by `activate()`. | Returns: | True if an environment was deactivated, False if no environment was active. | | Return type: | (bool) | `spack.environment.``deactivate_config_scope`(*env*)[¶](#spack.environment.deactivate_config_scope) Remove any scopes from env from the global config path.
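`config_dict()` pulls the configuration section out of parsed spack.yaml data, whose legal first keys are `'spack'` and `'env'` (see `env_schema_keys`). A simplified sketch of that behavior, with assumed error handling:

```python
# Sketch of config_dict(): return the section under the first legal
# top-level key of a parsed spack.yaml manifest.

ENV_SCHEMA_KEYS = ("spack", "env")  # legal first keys per env_schema_keys

def config_dict(yaml_data):
    for key in ENV_SCHEMA_KEYS:
        if key in yaml_data:
            return yaml_data[key]
    raise KeyError("manifest has no 'spack' or 'env' section")

data = {"spack": {"specs": ["zlib", "hdf5"]}}
assert config_dict(data) == {"specs": ["zlib", "hdf5"]}
```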
`spack.environment.``default_manifest_yaml` *= '# This is a Spack Environment file.\n#\n# It describes a set of packages to be installed, along with\n# configuration settings.\nspack:\n # add package specs to the `specs` list\n specs:\n -\n'*[¶](#spack.environment.default_manifest_yaml) default spack.yaml file to put in new environments `spack.environment.``env_path` *= '/home/docs/checkouts/readthedocs.org/user_builds/spack/checkouts/v0.12.1/var/spack/environments'*[¶](#spack.environment.env_path) path where environments are stored in the spack tree `spack.environment.``env_schema_keys` *= ('spack', 'env')*[¶](#spack.environment.env_schema_keys) legal first keys in the spack.yaml manifest file `spack.environment.``env_subdir_name` *= '.spack-env'*[¶](#spack.environment.env_subdir_name) Name of the directory where environments store repos, logs, views `spack.environment.``exists`(*name*)[¶](#spack.environment.exists) Whether an environment with this name exists or not. `spack.environment.``find_environment`(*args*)[¶](#spack.environment.find_environment) Find active environment from args, spack.yaml, or environment variable. This is called in `spack.main` to figure out which environment to activate. Check for an environment in this order: 1. via `spack -e ENV` or `spack -D DIR` (arguments) 2. as a spack.yaml file in the current directory, or 3. via a path in the SPACK_ENV environment variable. If an environment is found, read it in. If not, return None. | Parameters: | **args** (*Namespace*) – argparse namespace with command arguments | | Returns: | a found environment, or `None` | | Return type: | (Environment) | `spack.environment.``get_env`(*args*, *cmd_name*, *required=True*)[¶](#spack.environment.get_env) Used by commands to get the active environment. This first checks for an `env` argument, then looks at the `active` environment. We check args first because Spack’s subcommand arguments are parsed *after* the `-e` and `-D` arguments to `spack`.
So there may be an `env` argument that is *not* the active environment, and we give it precedence. This is used by a number of commands for determining whether there is an active environment. If an environment is not found *and* is required, print an error message that says the calling command *needs* an active environment. | Parameters: | * **args** (*Namespace*) – argparse namespace with command arguments * **cmd_name** (*str*) – name of calling command * **required** (*bool*) – if `False`, return `None` if no environment is found instead of raising an exception. | | Returns: | if there is an arg or active environment | | Return type: | (Environment) | `spack.environment.``is_env_dir`(*path*)[¶](#spack.environment.is_env_dir) Whether a directory contains a spack environment. `spack.environment.``lockfile_format_version` *= 1*[¶](#spack.environment.lockfile_format_version) version of the lockfile format. Must increase monotonically. `spack.environment.``lockfile_name` *= 'spack.lock'*[¶](#spack.environment.lockfile_name) Name of the lock file for an environment `spack.environment.``make_repo_path`(*root*)[¶](#spack.environment.make_repo_path) Make a RepoPath from the repo subdirectories in an environment. `spack.environment.``manifest_name` *= 'spack.yaml'*[¶](#spack.environment.manifest_name) Name of the input yaml file for an environment `spack.environment.``prepare_config_scope`(*env*)[¶](#spack.environment.prepare_config_scope) Add env’s scope to the global configuration search path. `spack.environment.``read`(*name*)[¶](#spack.environment.read) Get an environment with the supplied name. `spack.environment.``root`(*name*)[¶](#spack.environment.root) Get the root directory for an environment by name.
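The lookup order documented for `find_environment()` — command-line arguments, then a spack.yaml in the current directory, then the `SPACK_ENV` variable — can be sketched as a pure function (the parameter names here are illustrative stand-ins for the real argparse/filesystem/os.environ checks):

```python
def find_environment(arg_env=None, cwd_has_manifest=False, spack_env_var=None):
    """Sketch of the documented precedence for locating the environment:
    1. `spack -e ENV` / `spack -D DIR` arguments,
    2. a spack.yaml in the current directory,
    3. a path in the SPACK_ENV environment variable,
    else None."""
    if arg_env is not None:
        return arg_env
    if cwd_has_manifest:
        return "."
    if spack_env_var:
        return spack_env_var
    return None

assert find_environment(arg_env="dev") == "dev"
assert find_environment(cwd_has_manifest=True, spack_env_var="/envs/x") == "."
assert find_environment(spack_env_var="/envs/x") == "/envs/x"
assert find_environment() is None
```

`get_env()` then layers command semantics on top: an explicit `env` argument beats the active environment, and a missing environment is an error only when `required` is true.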
`spack.environment.``spack_env_var` *= 'SPACK_ENV'*[¶](#spack.environment.spack_env_var) environment variable used to indicate the active environment `spack.environment.``valid_env_name`(*name*)[¶](#spack.environment.valid_env_name) `spack.environment.``valid_environment_name_re` *= '^\\w[\\w-]*$'*[¶](#spack.environment.valid_environment_name_re) regex for validating environment names `spack.environment.``validate`(*data*, *filename=None*)[¶](#spack.environment.validate) `spack.environment.``validate_env_name`(*name*)[¶](#spack.environment.validate_env_name) ### spack.error module[¶](#module-spack.error) *exception* `spack.error.``SpackError`(*message*, *long_message=None*)[¶](#spack.error.SpackError) Bases: `Exception` This is the superclass for all Spack errors. Subclasses can be found in the modules they have to do with. `die`()[¶](#spack.error.SpackError.die) `long_message`[¶](#spack.error.SpackError.long_message) `print_context`()[¶](#spack.error.SpackError.print_context) Print extended debug information about this exception. This is usually printed when the top-level Spack error handler calls `die()`, but it can be called separately beforehand if a lower-level error handler needs to print error context and continue without raising the exception to the top level. *exception* `spack.error.``SpecError`(*message*, *long_message=None*)[¶](#spack.error.SpecError) Bases: [`spack.error.SpackError`](#spack.error.SpackError) Superclass for all errors that occur while constructing specs. *exception* `spack.error.``UnsatisfiableSpecError`(*provided*, *required*, *constraint_type*)[¶](#spack.error.UnsatisfiableSpecError) Bases: [`spack.error.SpecError`](#spack.error.SpecError) Raised when a spec conflicts with package constraints. Provide the requirement that was violated when raising.
*exception* `spack.error.``UnsupportedPlatformError`(*message*)[¶](#spack.error.UnsupportedPlatformError) Bases: [`spack.error.SpackError`](#spack.error.SpackError) Raised by packages when a platform is not supported. `spack.error.``debug` *= False*[¶](#spack.error.debug) whether we should write stack traces or short error messages. This is module-scoped because it needs to be set very early ### spack.fetch_strategy module[¶](#module-spack.fetch_strategy) Fetch strategies are used to download source code into a staging area in order to build it. They need to define the following methods: > * fetch() > This should attempt to download/check out source from somewhere. > * check() > Apply a checksum to the downloaded source code, e.g. for an archive. > May not do anything if the fetch method was safe to begin with. > * expand() > Expand the downloaded file (e.g., an archive) to source. > * reset() > Restore original state of downloaded code. Used by clean commands. > This may just remove the expanded source and re-expand an archive, > or it may run something like git reset --hard. > * archive() > Archive a source directory, e.g. for creating a mirror. *class* `spack.fetch_strategy.``CacheURLFetchStrategy`(*url=None*, *checksum=None*, ***kwargs*)[¶](#spack.fetch_strategy.CacheURLFetchStrategy) Bases: [`spack.fetch_strategy.URLFetchStrategy`](#spack.fetch_strategy.URLFetchStrategy) The resource associated with a cache URL may be out of date. `fetch`()[¶](#spack.fetch_strategy.CacheURLFetchStrategy.fetch) Fetch source code archive or repo. | Returns: | True on success, False on failure. | | Return type: | bool | *exception* `spack.fetch_strategy.``ChecksumError`(*message*, *long_message=None*)[¶](#spack.fetch_strategy.ChecksumError) Bases: [`spack.fetch_strategy.FetchError`](#spack.fetch_strategy.FetchError) Raised when an archive fails to checksum.
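The interface described above — fetch, check, expand, reset, archive — can be condensed into a self-contained sketch. The registry shown here imitates the documented behavior of the `FSMeta` metaclass ("registers all fetch strategies in a list"); the class bodies are illustrative stand-ins, not Spack's code:

```python
# Sketch of the fetch-strategy interface plus metaclass registration.
all_strategies = []

class FSMeta(type):
    def __init__(cls, name, bases, dct):
        super().__init__(name, bases, dct)
        if bases:  # register concrete subclasses, not the base itself
            all_strategies.append(cls)

class FetchStrategy(metaclass=FSMeta):
    url_attr = None  # e.g. 'url', 'git', 'hg', 'svn', 'go' in subclasses

    def fetch(self):
        raise NotImplementedError  # download / check out source

    def check(self):   pass  # verify a checksum, if any
    def expand(self):  pass  # unpack an archive into source
    def reset(self):   pass  # restore freshly-downloaded state
    def archive(self, destination): pass  # tar up source for a mirror

class URLFetchStrategy(FetchStrategy):
    url_attr = "url"

    def fetch(self):
        return True  # stand-in for an actual curl download

assert URLFetchStrategy in all_strategies
assert FetchStrategy not in all_strategies
assert URLFetchStrategy().fetch() is True
```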
*exception* `spack.fetch_strategy.``ExtrapolationError`(*message*, *long_message=None*)[¶](#spack.fetch_strategy.ExtrapolationError) Bases: [`spack.fetch_strategy.FetchError`](#spack.fetch_strategy.FetchError) Raised when we can’t extrapolate a version for a package. *class* `spack.fetch_strategy.``FSMeta`(*name*, *bases*, *dict*)[¶](#spack.fetch_strategy.FSMeta) Bases: `type` This metaclass registers all fetch strategies in a list. *exception* `spack.fetch_strategy.``FailedDownloadError`(*url*, *msg=''*)[¶](#spack.fetch_strategy.FailedDownloadError) Bases: [`spack.fetch_strategy.FetchError`](#spack.fetch_strategy.FetchError) Raised when a download fails. *exception* `spack.fetch_strategy.``FetchError`(*message*, *long_message=None*)[¶](#spack.fetch_strategy.FetchError) Bases: [`spack.error.SpackError`](#spack.error.SpackError) Superclass for fetcher errors. *class* `spack.fetch_strategy.``FetchStrategy`[¶](#spack.fetch_strategy.FetchStrategy) Bases: `object` Superclass of all fetch strategies. `archive`(*destination*)[¶](#spack.fetch_strategy.FetchStrategy.archive) Create an archive of the downloaded data for a mirror. For downloaded files, this should preserve the checksum of the original file. For repositories, it should just create an expandable tarball out of the downloaded repository. `cachable`[¶](#spack.fetch_strategy.FetchStrategy.cachable) Whether fetcher is capable of caching the resource it retrieves. This generally is determined by whether the resource is identifiably associated with a specific package version. | Returns: | True if can cache, False otherwise. | | Return type: | bool | `check`()[¶](#spack.fetch_strategy.FetchStrategy.check) Checksum the archive fetched by this FetchStrategy. `enabled` *= False*[¶](#spack.fetch_strategy.FetchStrategy.enabled) `expand`()[¶](#spack.fetch_strategy.FetchStrategy.expand) Expand the downloaded archive. `fetch`()[¶](#spack.fetch_strategy.FetchStrategy.fetch) Fetch source code archive or repo.
| Returns: | True on success, False on failure. | | Return type: | bool | *classmethod* `matches`(*args*)[¶](#spack.fetch_strategy.FetchStrategy.matches) `optional_attrs` *= []*[¶](#spack.fetch_strategy.FetchStrategy.optional_attrs) Optional attributes can be used to distinguish fetchers when classes have multiple `url_attrs` at the top-level. `reset`()[¶](#spack.fetch_strategy.FetchStrategy.reset) Revert to freshly downloaded state. For archive files, this may just re-expand the archive. `set_stage`(*stage*)[¶](#spack.fetch_strategy.FetchStrategy.set_stage) This is called by Stage before any of the fetching methods are called on the stage. `source_id`()[¶](#spack.fetch_strategy.FetchStrategy.source_id) A unique ID for the source. The returned value is added to the content which determines the full hash for a package using str(). `url_attr` *= None*[¶](#spack.fetch_strategy.FetchStrategy.url_attr) The URL attribute must be specified either at the package class level, or as a keyword argument to `version()`. It is used to distinguish fetchers for different versions in the package DSL. *exception* `spack.fetch_strategy.``FetcherConflict`(*message*, *long_message=None*)[¶](#spack.fetch_strategy.FetcherConflict) Bases: [`spack.fetch_strategy.FetchError`](#spack.fetch_strategy.FetchError) Raised for packages with invalid fetch attributes. *class* `spack.fetch_strategy.``FsCache`(*root*)[¶](#spack.fetch_strategy.FsCache) Bases: `object` `destroy`()[¶](#spack.fetch_strategy.FsCache.destroy) `fetcher`(*target_path*, *digest*, ***kwargs*)[¶](#spack.fetch_strategy.FsCache.fetcher) `store`(*fetcher*, *relative_dest*)[¶](#spack.fetch_strategy.FsCache.store) *class* `spack.fetch_strategy.``GitFetchStrategy`(***kwargs*)[¶](#spack.fetch_strategy.GitFetchStrategy) Bases: [`spack.fetch_strategy.VCSFetchStrategy`](#spack.fetch_strategy.VCSFetchStrategy) Fetch strategy that gets source code from a git repository.
Use like this in a package: > version(‘name’, git=’<https://github.com/project/repo.git>’) Optionally, you can provide a branch, or commit to check out, e.g.: > version(‘1.1’, git=’<https://github.com/project/repo.git>’, tag=’v1.1’) You can use these three optional attributes in addition to `git`: > * `branch`: Particular branch to build from (default is master) > * `tag`: Particular tag to check out > * `commit`: Particular commit hash in the repo `archive`(*destination*)[¶](#spack.fetch_strategy.GitFetchStrategy.archive) Create an archive of the downloaded data for a mirror. For downloaded files, this should preserve the checksum of the original file. For repositories, it should just create an expandable tarball out of the downloaded repository. `cachable`[¶](#spack.fetch_strategy.GitFetchStrategy.cachable) Whether fetcher is capable of caching the resource it retrieves. This generally is determined by whether the resource is identifiably associated with a specific package version. | Returns: | True if can cache, False otherwise. | | Return type: | bool | `enabled` *= True*[¶](#spack.fetch_strategy.GitFetchStrategy.enabled) `fetch`()[¶](#spack.fetch_strategy.GitFetchStrategy.fetch) Fetch source code archive or repo. | Returns: | True on success, False on failure. | | Return type: | bool | `get_source_id`()[¶](#spack.fetch_strategy.GitFetchStrategy.get_source_id) `git`[¶](#spack.fetch_strategy.GitFetchStrategy.git) `git_version`[¶](#spack.fetch_strategy.GitFetchStrategy.git_version) `optional_attrs` *= ['tag', 'branch', 'commit', 'submodules']*[¶](#spack.fetch_strategy.GitFetchStrategy.optional_attrs) `reset`()[¶](#spack.fetch_strategy.GitFetchStrategy.reset) Revert to freshly downloaded state. For archive files, this may just re-expand the archive. `source_id`()[¶](#spack.fetch_strategy.GitFetchStrategy.source_id) A unique ID for the source. The returned value is added to the content which determines the full hash for a package using str(). 
`url_attr` *= 'git'*[¶](#spack.fetch_strategy.GitFetchStrategy.url_attr) *class* `spack.fetch_strategy.``GoFetchStrategy`(***kwargs*)[¶](#spack.fetch_strategy.GoFetchStrategy) Bases: [`spack.fetch_strategy.VCSFetchStrategy`](#spack.fetch_strategy.VCSFetchStrategy) Fetch strategy that employs the go get infrastructure. Use like this in a package: > version(‘name’, > go=’github.com/monochromegane/the_platinum_searcher/
’) Go get does not natively support versions, they can be faked with git `archive`(*destination*)[¶](#spack.fetch_strategy.GoFetchStrategy.archive) Create an archive of the downloaded data for a mirror. For downloaded files, this should preserve the checksum of the original file. For repositories, it should just create an expandable tarball out of the downloaded repository. `enabled` *= True*[¶](#spack.fetch_strategy.GoFetchStrategy.enabled) `fetch`()[¶](#spack.fetch_strategy.GoFetchStrategy.fetch) Fetch source code archive or repo. | Returns: | True on success, False on failure. | | Return type: | bool | `go`[¶](#spack.fetch_strategy.GoFetchStrategy.go) `go_version`[¶](#spack.fetch_strategy.GoFetchStrategy.go_version) `reset`()[¶](#spack.fetch_strategy.GoFetchStrategy.reset) Revert to freshly downloaded state. For archive files, this may just re-expand the archive. `url_attr` *= 'go'*[¶](#spack.fetch_strategy.GoFetchStrategy.url_attr) *class* `spack.fetch_strategy.``HgFetchStrategy`(***kwargs*)[¶](#spack.fetch_strategy.HgFetchStrategy) Bases: [`spack.fetch_strategy.VCSFetchStrategy`](#spack.fetch_strategy.VCSFetchStrategy) Fetch strategy that gets source code from a Mercurial repository. Use like this in a package: > version(‘name’, hg=’<https://jay.grs.rwth-aachen.de/hg/lwm2>’) Optionally, you can provide a branch, or revision to check out, e.g.: > version(‘torus’, > hg=’<https://jay.grs.rwth-aachen.de/hg/lwm2>’, branch=’torus’) You can use the optional ‘revision’ attribute to check out a branch, tag, or particular revision in hg. To prevent non-reproducible builds, using a moving target like a branch is discouraged. > * `revision`: Particular revision, branch, or tag. `archive`(*destination*)[¶](#spack.fetch_strategy.HgFetchStrategy.archive) Create an archive of the downloaded data for a mirror. For downloaded files, this should preserve the checksum of the original file. 
For repositories, it should just create an expandable tarball out of the downloaded repository. `cachable`[¶](#spack.fetch_strategy.HgFetchStrategy.cachable) Whether fetcher is capable of caching the resource it retrieves. This generally is determined by whether the resource is identifiably associated with a specific package version. | Returns: | True if can cache, False otherwise. | | Return type: | bool | `enabled` *= True*[¶](#spack.fetch_strategy.HgFetchStrategy.enabled) `fetch`()[¶](#spack.fetch_strategy.HgFetchStrategy.fetch) Fetch source code archive or repo. | Returns: | True on success, False on failure. | | Return type: | bool | `get_source_id`()[¶](#spack.fetch_strategy.HgFetchStrategy.get_source_id) `hg`[¶](#spack.fetch_strategy.HgFetchStrategy.hg) | Returns: | The hg executable | | Return type: | Executable | `optional_attrs` *= ['revision']*[¶](#spack.fetch_strategy.HgFetchStrategy.optional_attrs) `reset`()[¶](#spack.fetch_strategy.HgFetchStrategy.reset) Revert to freshly downloaded state. For archive files, this may just re-expand the archive. `source_id`()[¶](#spack.fetch_strategy.HgFetchStrategy.source_id) A unique ID for the source. The returned value is added to the content which determines the full hash for a package using str(). `url_attr` *= 'hg'*[¶](#spack.fetch_strategy.HgFetchStrategy.url_attr) *exception* `spack.fetch_strategy.``InvalidArgsError`(*pkg=None*, *version=None*, ***args*)[¶](#spack.fetch_strategy.InvalidArgsError) Bases: [`spack.fetch_strategy.FetchError`](#spack.fetch_strategy.FetchError) Raised when a version can’t be deduced from a set of arguments. *exception* `spack.fetch_strategy.``NoArchiveFileError`(*message*, *long_message=None*)[¶](#spack.fetch_strategy.NoArchiveFileError) Bases: [`spack.fetch_strategy.FetchError`](#spack.fetch_strategy.FetchError) Raised when an archive file is expected but none exists.
*exception* `spack.fetch_strategy.``NoCacheError`(*message*, *long_message=None*)[¶](#spack.fetch_strategy.NoCacheError) Bases: [`spack.fetch_strategy.FetchError`](#spack.fetch_strategy.FetchError) Raised when there is no cached archive for a package. *exception* `spack.fetch_strategy.``NoDigestError`(*message*, *long_message=None*)[¶](#spack.fetch_strategy.NoDigestError) Bases: [`spack.fetch_strategy.FetchError`](#spack.fetch_strategy.FetchError) Raised after attempt to checksum when URL has no digest. *exception* `spack.fetch_strategy.``NoStageError`(*method*)[¶](#spack.fetch_strategy.NoStageError) Bases: [`spack.fetch_strategy.FetchError`](#spack.fetch_strategy.FetchError) Raised when fetch operations are called before set_stage(). *class* `spack.fetch_strategy.``SvnFetchStrategy`(***kwargs*)[¶](#spack.fetch_strategy.SvnFetchStrategy) Bases: [`spack.fetch_strategy.VCSFetchStrategy`](#spack.fetch_strategy.VCSFetchStrategy) Fetch strategy that gets source code from a subversion repository. Use like this in a package: > version(‘name’, svn=’<http://www.example.com/svn/trunk>’) Optionally, you can provide a revision for the URL: > version(‘name’, svn=’<http://www.example.com/svn/trunk>’, > revision=‘1641’) `archive`(*destination*)[¶](#spack.fetch_strategy.SvnFetchStrategy.archive) Create an archive of the downloaded data for a mirror. For downloaded files, this should preserve the checksum of the original file. For repositories, it should just create an expandable tarball out of the downloaded repository. `cachable`[¶](#spack.fetch_strategy.SvnFetchStrategy.cachable) Whether fetcher is capable of caching the resource it retrieves. This generally is determined by whether the resource is identifiably associated with a specific package version. | Returns: | True if can cache, False otherwise. 
| | Return type: | bool | `enabled` *= True*[¶](#spack.fetch_strategy.SvnFetchStrategy.enabled) `fetch`()[¶](#spack.fetch_strategy.SvnFetchStrategy.fetch) Fetch source code archive or repo. | Returns: | True on success, False on failure. | | Return type: | bool | `get_source_id`()[¶](#spack.fetch_strategy.SvnFetchStrategy.get_source_id) `optional_attrs` *= ['revision']*[¶](#spack.fetch_strategy.SvnFetchStrategy.optional_attrs) `reset`()[¶](#spack.fetch_strategy.SvnFetchStrategy.reset) Revert to freshly downloaded state. For archive files, this may just re-expand the archive. `source_id`()[¶](#spack.fetch_strategy.SvnFetchStrategy.source_id) A unique ID for the source. The returned value is added to the content which determines the full hash for a package using str(). `svn`[¶](#spack.fetch_strategy.SvnFetchStrategy.svn) `url_attr` *= 'svn'*[¶](#spack.fetch_strategy.SvnFetchStrategy.url_attr) *class* `spack.fetch_strategy.``URLFetchStrategy`(*url=None*, *checksum=None*, ***kwargs*)[¶](#spack.fetch_strategy.URLFetchStrategy) Bases: [`spack.fetch_strategy.FetchStrategy`](#spack.fetch_strategy.FetchStrategy) FetchStrategy that pulls source code from a URL for an archive, checks the archive against a checksum, and decompresses the archive. `archive`(*destination*)[¶](#spack.fetch_strategy.URLFetchStrategy.archive) Just moves this archive to the destination. `archive_file`[¶](#spack.fetch_strategy.URLFetchStrategy.archive_file) Path to the source archive within this stage directory. `cachable`[¶](#spack.fetch_strategy.URLFetchStrategy.cachable) Whether fetcher is capable of caching the resource it retrieves. This generally is determined by whether the resource is identifiably associated with a specific package version. | Returns: | True if can cache, False otherwise. | | Return type: | bool | `check`()[¶](#spack.fetch_strategy.URLFetchStrategy.check) Check the downloaded archive against a checksum digest. No-op if this stage checks code out of a repository.
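The checksum verification that `check()` performs can be illustrated with the standard library. This is a simplified sketch of the idea (hash the downloaded bytes, compare to the expected digest), not Spack's implementation, which also supports the other digest algorithms listed in `optional_attrs`:

```python
import hashlib

def check_digest(data: bytes, expected_sha256: str) -> bool:
    """Return True if the sha256 of `data` matches the expected digest."""
    return hashlib.sha256(data).hexdigest() == expected_sha256

payload = b"example archive contents"
good = hashlib.sha256(payload).hexdigest()
assert check_digest(payload, good) is True
assert check_digest(payload, "0" * 64) is False  # mismatch raises ChecksumError in Spack
```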
`curl`[¶](#spack.fetch_strategy.URLFetchStrategy.curl) `enabled` *= True*[¶](#spack.fetch_strategy.URLFetchStrategy.enabled) `expand`()[¶](#spack.fetch_strategy.URLFetchStrategy.expand) Expand the downloaded archive. `fetch`()[¶](#spack.fetch_strategy.URLFetchStrategy.fetch) Fetch source code archive or repo. | Returns: | True on success, False on failure. | | Return type: | bool | `optional_attrs` *= ['sha384', 'sha224', 'sha1', 'sha512', 'sha256', 'md5', 'checksum']*[¶](#spack.fetch_strategy.URLFetchStrategy.optional_attrs) `reset`()[¶](#spack.fetch_strategy.URLFetchStrategy.reset) Removes the source path if it exists, then re-expands the archive. `source_id`()[¶](#spack.fetch_strategy.URLFetchStrategy.source_id) A unique ID for the source. The returned value is added to the content which determines the full hash for a package using str(). `url_attr` *= 'url'*[¶](#spack.fetch_strategy.URLFetchStrategy.url_attr) *class* `spack.fetch_strategy.``VCSFetchStrategy`(***kwargs*)[¶](#spack.fetch_strategy.VCSFetchStrategy) Bases: [`spack.fetch_strategy.FetchStrategy`](#spack.fetch_strategy.FetchStrategy) Superclass for version control system fetch strategies. Like all fetchers, VCS fetchers are identified by the attributes passed to the `version` directive. The optional_attrs for a VCS fetch strategy represent types of revisions, e.g. tags, branches, commits, etc. The required attributes (git, svn, etc.) are used to specify the URL and to distinguish a VCS fetch strategy from a URL fetch strategy. `archive`(*destination*, ***kwargs*)[¶](#spack.fetch_strategy.VCSFetchStrategy.archive) Create an archive of the downloaded data for a mirror. For downloaded files, this should preserve the checksum of the original file. For repositories, it should just create an expandable tarball out of the downloaded repository. `check`()[¶](#spack.fetch_strategy.VCSFetchStrategy.check) Checksum the archive fetched by this FetchStrategy. 
`expand`()[¶](#spack.fetch_strategy.VCSFetchStrategy.expand) Expand the downloaded archive. `spack.fetch_strategy.``all_strategies` *= [<class 'spack.fetch_strategy.URLFetchStrategy'>, <class 'spack.fetch_strategy.CacheURLFetchStrategy'>, <class 'spack.fetch_strategy.GoFetchStrategy'>, <class 'spack.fetch_strategy.GitFetchStrategy'>, <class 'spack.fetch_strategy.SvnFetchStrategy'>, <class 'spack.fetch_strategy.HgFetchStrategy'>]*[¶](#spack.fetch_strategy.all_strategies) List of all fetch strategies, created by FetchStrategy metaclass. `spack.fetch_strategy.``check_pkg_attributes`(*pkg*)[¶](#spack.fetch_strategy.check_pkg_attributes) Find ambiguous top-level fetch attributes in a package. Currently this only ensures that two or more VCS fetch strategies are not specified at once. `spack.fetch_strategy.``for_package_version`(*pkg*, *version*)[¶](#spack.fetch_strategy.for_package_version) Determine a fetch strategy based on the arguments supplied to version() in the package description. `spack.fetch_strategy.``from_kwargs`(***kwargs*)[¶](#spack.fetch_strategy.from_kwargs) Construct an appropriate FetchStrategy from the given keyword arguments. | Parameters: | ****kwargs** – dictionary of keyword arguments, e.g. from a `version()` directive in a package. | | Returns: | The fetch strategy that matches the args, based on attribute names (e.g., `git`, `hg`, etc.) | | Return type: | fetch_strategy | | Raises: | [`FetchError`](#spack.fetch_strategy.FetchError) – If no `fetch_strategy` matches the args. | `spack.fetch_strategy.``from_list_url`(*pkg*)[¶](#spack.fetch_strategy.from_list_url) If a package provides a URL which lists URLs for resources by version, this can create a fetcher for a URL discovered for the specified package’s version. `spack.fetch_strategy.``from_url`(*url*)[¶](#spack.fetch_strategy.from_url) Given a URL, find an appropriate fetch strategy for it. Currently just gives you a URLFetchStrategy that uses curl.
TODO: make this return appropriate fetch strategies for other types of URLs. ### spack.filesystem_view module[¶](#module-spack.filesystem_view) *class* `spack.filesystem_view.``FilesystemView`(*root*, *layout*, ***kwargs*)[¶](#spack.filesystem_view.FilesystemView) Bases: `object` Governs a filesystem view that is located at a certain root directory. Packages are linked from their install directories into a common file hierarchy. In distributed filesystems, loading each installed package separately can lead to slow-downs due to too many directories being traversed. This can be circumvented by loading all needed modules into a common directory structure. `add_extension`(*spec*)[¶](#spack.filesystem_view.FilesystemView.add_extension) Add (link) an extension in this view. Does not add dependencies. `add_specs`(**specs*, ***kwargs*)[¶](#spack.filesystem_view.FilesystemView.add_specs) Add given specs to view. The supplied specs might be standalone packages or extensions of other packages. Should accept with_dependencies as keyword argument (default True) to indicate whether or not dependencies should be activated as well. Should accept an exclude keyword argument containing a list of regexps that filter out matching spec names. This method should make use of activate_{extension,standalone}. `add_standalone`(*spec*)[¶](#spack.filesystem_view.FilesystemView.add_standalone) Add (link) a standalone package into this view. `check_added`(*spec*)[¶](#spack.filesystem_view.FilesystemView.check_added) Check if the given concrete spec is active in this view. `get_all_specs`()[¶](#spack.filesystem_view.FilesystemView.get_all_specs) Get all specs currently active in this view. `get_spec`(*spec*)[¶](#spack.filesystem_view.FilesystemView.get_spec) Return the actual spec linked in this view (i.e. do not look it up in the database by name). spec can be a name or a spec from which the name is extracted.
As there can only be a single version active for any spec, the name is enough to identify the spec in the view. If no spec is present, returns None. `print_status`(**specs*, ***kwargs*)[¶](#spack.filesystem_view.FilesystemView.print_status) Print a short summary about the given specs, detailing whether.. * ..they are active in the view. * ..they are active but the activated version differs. * ..they are not active in the view. Takes with_dependencies keyword argument so that the status of dependencies is printed as well. `remove_extension`(*spec*)[¶](#spack.filesystem_view.FilesystemView.remove_extension) Remove (unlink) an extension from this view. `remove_specs`(**specs*, ***kwargs*)[¶](#spack.filesystem_view.FilesystemView.remove_specs) Removes given specs from view. The supplied spec might be a standalone package or an extension of another package. Should accept with_dependencies as keyword argument (default True) to indicate whether or not dependencies should be deactivated as well. Should accept with_dependents as keyword argument (default True) to indicate whether or not dependents on the deactivated specs should be removed as well. Should accept an exclude keyword argument containing a list of regexps that filter out matching spec names. This method should make use of deactivate_{extension,standalone}. `remove_standalone`(*spec*)[¶](#spack.filesystem_view.FilesystemView.remove_standalone) Remove (unlink) a standalone package from this view. *class* `spack.filesystem_view.``YamlFilesystemView`(*root*, *layout*, ***kwargs*)[¶](#spack.filesystem_view.YamlFilesystemView) Bases: [`spack.filesystem_view.FilesystemView`](#spack.filesystem_view.FilesystemView) Filesystem view to work with a yaml based directory layout. `add_extension`(*spec*)[¶](#spack.filesystem_view.YamlFilesystemView.add_extension) Add (link) an extension in this view. Does not add dependencies.
`add_specs`(**specs*, ***kwargs*)[¶](#spack.filesystem_view.YamlFilesystemView.add_specs) Add given specs to view. The supplied specs might be standalone packages or extensions of other packages. Should accept with_dependencies as keyword argument (default True) to indicate whether or not dependencies should be activated as well. Should accept an exclude keyword argument containing a list of regexps that filter out matching spec names. This method should make use of activate_{extension,standalone}. `add_standalone`(*spec*)[¶](#spack.filesystem_view.YamlFilesystemView.add_standalone) Add (link) a standalone package into this view. `check_added`(*spec*)[¶](#spack.filesystem_view.YamlFilesystemView.check_added) Check if the given concrete spec is active in this view. `get_all_specs`()[¶](#spack.filesystem_view.YamlFilesystemView.get_all_specs) Get all specs currently active in this view. `get_conflicts`(**specs*)[¶](#spack.filesystem_view.YamlFilesystemView.get_conflicts) Return list of tuples (<spec>, <spec in view>) where the spec active in the view differs from the one to be activated. `get_path_meta_folder`(*spec*)[¶](#spack.filesystem_view.YamlFilesystemView.get_path_meta_folder) Get path to meta folder for either spec or spec name. `get_spec`(*spec*)[¶](#spack.filesystem_view.YamlFilesystemView.get_spec) Return the actual spec linked in this view (i.e. do not look it up in the database by name). spec can be a name or a spec from which the name is extracted. As there can only be a single version active for any spec the name is enough to identify the spec in the view. If no spec is present, returns None. `link_meta_folder`(*spec*)[¶](#spack.filesystem_view.YamlFilesystemView.link_meta_folder) `merge`(*spec*, *ignore=None*)[¶](#spack.filesystem_view.YamlFilesystemView.merge) `print_conflict`(*spec_active*, *spec_specified*, *level='error'*)[¶](#spack.filesystem_view.YamlFilesystemView.print_conflict) Singular print function for spec conflicts.
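The linking these views perform can be sketched without Spack: each installed package's tree is symlinked, file by file, into a common root, so a single directory (and a single `PATH` entry) reaches every package. A minimal illustration; all paths and package names here are invented for the example:

```python
import os
import tempfile

def link_tree(install_dir, view_root):
    """Symlink every file under install_dir into view_root, mirroring the
    directory layout (the core idea behind a filesystem view)."""
    for dirpath, _dirnames, filenames in os.walk(install_dir):
        rel = os.path.relpath(dirpath, install_dir)
        target_dir = view_root if rel == "." else os.path.join(view_root, rel)
        os.makedirs(target_dir, exist_ok=True)
        for name in filenames:
            src = os.path.join(dirpath, name)
            dest = os.path.join(target_dir, name)
            if not os.path.lexists(dest):  # skip files already in the view
                os.symlink(src, dest)

# Two "installed packages", each with its own bin/ directory, merged into
# one view so both executables are found under a single bin/.
root = tempfile.mkdtemp()
for pkg in ("libelf", "mpileaks"):
    bindir = os.path.join(root, "installs", pkg, "bin")
    os.makedirs(bindir)
    open(os.path.join(bindir, pkg), "w").close()

view = os.path.join(root, "view")
for pkg in ("libelf", "mpileaks"):
    link_tree(os.path.join(root, "installs", pkg), view)

merged = sorted(os.listdir(os.path.join(view, "bin")))
print(merged)  # ['libelf', 'mpileaks']
```

Removal (`unmerge`) is the inverse: delete the symlinks that point into a package's install tree, then prune directories left empty.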
`print_status`(**specs*, ***kwargs*)[¶](#spack.filesystem_view.YamlFilesystemView.print_status) Print a short summary about the given specs, detailing whether.. * ..they are active in the view. * ..they are active but the activated version differs. * ..they are not active in the view. Takes with_dependencies keyword argument so that the status of dependencies is printed as well. `purge_empty_directories`()[¶](#spack.filesystem_view.YamlFilesystemView.purge_empty_directories) Ascend up from the leaves accessible from path and remove empty directories. `remove_extension`(*spec*, *with_dependents=True*)[¶](#spack.filesystem_view.YamlFilesystemView.remove_extension) Remove (unlink) an extension from this view. `remove_file`(*src*, *dest*)[¶](#spack.filesystem_view.YamlFilesystemView.remove_file) `remove_specs`(**specs*, ***kwargs*)[¶](#spack.filesystem_view.YamlFilesystemView.remove_specs) Removes given specs from view. The supplied spec might be a standalone package or an extension of another package. Should accept with_dependencies as keyword argument (default True) to indicate whether or not dependencies should be deactivated as well. Should accept with_dependents as keyword argument (default True) to indicate whether or not dependents on the deactivated specs should be removed as well. Should accept an exclude keyword argument containing a list of regexps that filter out matching spec names. This method should make use of deactivate_{extension,standalone}. `remove_standalone`(*spec*)[¶](#spack.filesystem_view.YamlFilesystemView.remove_standalone) Remove (unlink) a standalone package from this view. `unlink_meta_folder`(*spec*)[¶](#spack.filesystem_view.YamlFilesystemView.unlink_meta_folder) `unmerge`(*spec*, *ignore=None*)[¶](#spack.filesystem_view.YamlFilesystemView.unmerge) ### spack.graph module[¶](#module-spack.graph) Functions for graphing DAGs of dependencies. This file contains code for graphing DAGs of software packages (i.e. Spack specs).
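Before anything is drawn, the module's `topological_sort()` (documented below) orders specs so that dependencies precede dependents. A simplified, Spack-free sketch of that idea, using Kahn's algorithm over a plain dependency dict; the DAG below is a toy version of the mpileaks example:

```python
def topological_sort(deps, reverse=False):
    """Kahn's algorithm over {node: [dependencies]}. Returns nodes ordered
    so every node appears after its dependencies (before, if reverse)."""
    # Count unsatisfied dependencies per node.
    remaining = {n: len(ds) for n, ds in deps.items()}
    # Invert the edges: dependency -> nodes that depend on it.
    dependents = {n: [] for n in deps}
    for node, ds in deps.items():
        for d in ds:
            dependents[d].append(node)
    ready = [n for n, count in remaining.items() if count == 0]
    order = []
    while ready:
        node = ready.pop()
        order.append(node)
        for parent in dependents[node]:
            remaining[parent] -= 1
            if remaining[parent] == 0:
                ready.append(parent)
    if len(order) != len(deps):
        raise ValueError("dependency cycle detected")
    return list(reversed(order)) if reverse else order

dag = {
    "mpileaks": ["callpath", "mpi"],
    "callpath": ["dyninst", "mpi"],
    "dyninst": ["libdwarf", "libelf"],
    "libdwarf": ["libelf"],
    "libelf": [],
    "mpi": [],
}
order = topological_sort(dag)
print(order)
```

The real function walks an actual Spec DAG and honors `deptype`, but the ordering guarantee is the same.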
There are two main functions you probably care about: graph_ascii() will output a colored graph of a spec in ascii format, kind of like the graph git shows with “git log --graph”, e.g.: ``` o mpileaks | | | | o | callpath |/| | | |\| | |\ | | |\ | | | | o adept-utils | |_|_|/| |/| | | | o | | | | mpi / / / / | | o | dyninst | |/| | |/|/| | | | |/ | o | libdwarf |/ / o | libelf / o boost ``` graph_dot() will output a graph of a spec (or multiple specs) in dot format. Note that `graph_ascii` assumes a single spec while `graph_dot` can take a number of specs as input. `spack.graph.``topological_sort`(*spec*, *reverse=False*, *deptype='all'*)[¶](#spack.graph.topological_sort) Topological sort for specs. Return a list of dependency specs sorted topologically. The spec argument is not modified in the process. `spack.graph.``graph_ascii`(*spec*, *node='o'*, *out=None*, *debug=False*, *indent=0*, *color=None*, *deptype='all'*)[¶](#spack.graph.graph_ascii) *class* `spack.graph.``AsciiGraph`[¶](#spack.graph.AsciiGraph) Bases: `object` `write`(*spec*, *color=None*, *out=None*)[¶](#spack.graph.AsciiGraph.write) Write out an ascii graph of the provided spec. Arguments: spec – spec to graph. This only handles one spec at a time. Optional arguments: out – file object to write out to (default is sys.stdout) color – whether to write in color. Default is to autodetect based on output file. `spack.graph.``graph_dot`(*specs*, *deptype='all'*, *static=False*, *out=None*)[¶](#spack.graph.graph_dot) Generate a graph in dot format of all provided specs. Print out a dot formatted graph of all the dependencies between packages. Output can be passed to graphviz, e.g.: > spack graph --dot qt | dot -Tpdf > spack-graph.pdf ### spack.main module[¶](#module-spack.main) This is the implementation of the Spack command line executable. In a normal Spack installation, this is invoked from the bin/spack script after the system path is set up.
*class* `spack.main.``SpackArgumentParser`(*prog=None*, *usage=None*, *description=None*, *epilog=None*, *parents=[]*, *formatter_class=<class 'argparse.HelpFormatter'>*, *prefix_chars='-'*, *fromfile_prefix_chars=None*, *argument_default=None*, *conflict_handler='error'*, *add_help=True*, *allow_abbrev=True*)[¶](#spack.main.SpackArgumentParser) Bases: `argparse.ArgumentParser` `add_command`(*cmd_name*)[¶](#spack.main.SpackArgumentParser.add_command) Add one subcommand to this parser. `add_subparsers`(***kwargs*)[¶](#spack.main.SpackArgumentParser.add_subparsers) Ensure that sensible defaults are propagated to subparsers `format_help`(*level='short'*)[¶](#spack.main.SpackArgumentParser.format_help) `format_help_sections`(*level*)[¶](#spack.main.SpackArgumentParser.format_help_sections) Format help on sections for a particular verbosity level. | Parameters: | **level** (*str*) – ‘short’ or ‘long’ (more commands shown for long) | *class* `spack.main.``SpackCommand`(*command_name*)[¶](#spack.main.SpackCommand) Bases: `object` Callable object that invokes a spack command (for testing). Example usage:

```
install = SpackCommand('install')
install('-v', 'mpich')
```

Use this to invoke Spack commands directly from Python and check their output. *exception* `spack.main.``SpackCommandError`[¶](#spack.main.SpackCommandError) Bases: `Exception` Raised when SpackCommand execution fails. *class* `spack.main.``SpackHelpFormatter`(*prog*, *indent_increment=2*, *max_help_position=24*, *width=None*)[¶](#spack.main.SpackHelpFormatter) Bases: `argparse.RawTextHelpFormatter` `spack.main.``add_all_commands`(*parser*)[¶](#spack.main.add_all_commands) Add all spack subcommands to the parser. `spack.main.``aliases` *= {'rm': 'remove'}*[¶](#spack.main.aliases) top-level aliases for Spack commands `spack.main.``allows_unknown_args`(*command*)[¶](#spack.main.allows_unknown_args) Implements really simple argument injection for unknown arguments.
Commands may add an optional argument called “unknown args” to indicate they can handle unknown args, and we’ll pass the unknown args in. `spack.main.``index_commands`()[¶](#spack.main.index_commands) create an index of commands by section for this help level `spack.main.``intro_by_level` *= {'long': 'Complete list of spack commands:', 'short': 'These are common spack commands:'}*[¶](#spack.main.intro_by_level) intro text for help at different levels `spack.main.``levels` *= ['short', 'long']*[¶](#spack.main.levels) help levels in order of detail (i.e., number of commands shown) `spack.main.``main`(*argv=None*)[¶](#spack.main.main) This is the entry point for the Spack command. | Parameters: | **argv** (*list of str* *or* *None*) – command line arguments, NOT including the executable name. If None, parses from sys.argv. | `spack.main.``make_argument_parser`(***kwargs*)[¶](#spack.main.make_argument_parser) Create a basic argument parser without any subcommands added. `spack.main.``options_by_level` *= {'long': 'all', 'short': ['h', 'k', 'V', 'color']}*[¶](#spack.main.options_by_level) control top-level spack options shown in basic vs. advanced help `spack.main.``print_setup_info`(**info*)[¶](#spack.main.print_setup_info) Print basic information needed by setup-env.[c]sh. | Parameters: | **info** (*list of str*) – list of things to print: comma-separated list of ‘csh’, ‘sh’, or ‘modules’ | This is in `main.py` to make it fast; the setup scripts need to invoke spack in login scripts, and it needs to be quick. `spack.main.``required_command_properties` *= ['level', 'section', 'description']*[¶](#spack.main.required_command_properties) Properties that commands are required to set.
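The dispatch scheme above — named subcommands plus the top-level `aliases` table — can be sketched with plain `argparse`. Everything in this snippet is a toy stand-in, not Spack's actual parser or commands:

```python
import argparse

# Alias table in the same spirit as spack.main.aliases.
aliases = {"rm": "remove"}

parser = argparse.ArgumentParser(prog="toy")
subparsers = parser.add_subparsers(dest="command")

install = subparsers.add_parser("install", help="build and install packages")
install.add_argument("spec")
remove = subparsers.add_parser("remove", help="remove installed packages")
remove.add_argument("spec")

def parse(argv):
    # Resolve top-level aliases before argparse sees the argv.
    if argv and argv[0] in aliases:
        argv = [aliases[argv[0]]] + argv[1:]
    return parser.parse_args(argv)

args = parse(["rm", "mpich"])
print(args.command, args.spec)  # remove mpich
```

The real parser adds subcommands lazily (`add_command`) so that `spack --help` stays fast, but the alias-then-dispatch flow is the same.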
`spack.main.``section_descriptions` *= {'admin': 'administration', 'basic': 'query packages', 'build': 'build packages', 'config': 'configuration', 'developer': 'developer', 'environment': 'environment', 'extensions': 'extensions', 'help': 'more help', 'packaging': 'create packages', 'system': 'system'}*[¶](#spack.main.section_descriptions) Longer text for each section, to show in help `spack.main.``section_order` *= {'basic': ['list', 'info', 'find'], 'build': ['fetch', 'stage', 'patch', 'configure', 'build', 'restage', 'install', 'uninstall', 'clean'], 'packaging': ['create', 'edit']}*[¶](#spack.main.section_order) preferential command order for some sections (e.g., build pipeline is in execution order, not alphabetical) `spack.main.``set_working_dir`()[¶](#spack.main.set_working_dir) Change the working directory to getcwd, or spack prefix if no cwd. `spack.main.``setup_main_options`(*args*)[¶](#spack.main.setup_main_options) Configure spack globals based on the basic options. `spack.main.``spack_working_dir` *= None*[¶](#spack.main.spack_working_dir) Recorded directory where spack command was originally invoked `spack.main.``stat_names` *= {'calls': (((1, -1),), 'call count'), 'cumtime': (((3, -1),), 'cumulative time'), 'cumulative': (((3, -1),), 'cumulative time'), 'file': (((4, 1),), 'file name'), 'filename': (((4, 1),), 'file name'), 'line': (((5, 1),), 'line number'), 'module': (((4, 1),), 'file name'), 'name': (((6, 1),), 'function name'), 'ncalls': (((1, -1),), 'call count'), 'nfl': (((6, 1), (4, 1), (5, 1)), 'name/file/line'), 'pcalls': (((0, -1),), 'primitive call count'), 'stdname': (((7, 1),), 'standard name'), 'time': (((2, -1),), 'internal time'), 'tottime': (((2, -1),), 'internal time')}*[¶](#spack.main.stat_names) names of profile statistics ### spack.mirror module[¶](#module-spack.mirror) This file contains code for creating spack mirror directories. A mirror is an organized hierarchy containing specially named archive files. 
This enables spack to know where to find files in a mirror if the main server for a particular package is down. Or, if the computer where spack is run is not connected to the internet, it allows spack to download packages directly from a mirror (e.g., on an intranet). *exception* `spack.mirror.``MirrorError`(*msg*, *long_msg=None*)[¶](#spack.mirror.MirrorError) Bases: [`spack.error.SpackError`](#spack.error.SpackError) Superclass of all mirror-creation related errors. `spack.mirror.``add_single_spec`(*spec*, *mirror_root*, *categories*, ***kwargs*)[¶](#spack.mirror.add_single_spec) `spack.mirror.``create`(*path*, *specs*, ***kwargs*)[¶](#spack.mirror.create) Create a directory to be used as a spack mirror, and fill it with package archives. | Parameters: | * **path** – Path to create a mirror directory hierarchy in. * **specs** – Any package versions matching these specs will be added to the mirror. | | Keyword Arguments: | | | * **no_checksum** – If True, do not checksum when fetching (default False) * **num_versions** – Max number of versions to fetch per spec, if spec is ambiguous (default is 0 for all of them) | Return Value: Returns a tuple of lists: (present, mirrored, error) * present: Package specs that were already present. * mirrored: Package specs that were successfully mirrored. * error: Package specs that failed to mirror due to some error. This routine iterates through all known package versions, and it creates specs for those versions. If the version satisfies any spec in the specs list, it is downloaded and added to the mirror. `spack.mirror.``get_matching_versions`(*specs*, ***kwargs*)[¶](#spack.mirror.get_matching_versions) Get a spec for EACH known version matching any spec in the list. `spack.mirror.``mirror_archive_filename`(*spec*, *fetcher*, *resource_id=None*)[¶](#spack.mirror.mirror_archive_filename) Get the name of the spec’s archive in the mirror.
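The “specially named archive files” follow a convention of roughly `<name>/<name>-<version>.<extension>`, which is what lets spack locate an archive in a mirror without contacting the upstream server. A simplified stand-in for illustration only; the real function takes a spec and a fetcher (deriving the extension from the fetcher), so this signature is invented:

```python
import os

def mirror_archive_filename(name, version, ext="tar.gz"):
    # Archive file name within a package's mirror subdirectory.
    return "{0}-{1}.{2}".format(name, version, ext)

def mirror_archive_path(name, version, ext="tar.gz"):
    # Relative path inside the mirror: one subdirectory per package.
    return os.path.join(name, mirror_archive_filename(name, version, ext))

print(mirror_archive_path("zlib", "1.2.11"))  # zlib/zlib-1.2.11.tar.gz
```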
`spack.mirror.``mirror_archive_path`(*spec*, *fetcher*, *resource_id=None*)[¶](#spack.mirror.mirror_archive_path) Get the relative path to the spec’s archive within a mirror. `spack.mirror.``suggest_archive_basename`(*resource*)[¶](#spack.mirror.suggest_archive_basename) Return a tentative basename for an archive. | Raises: | `RuntimeError` – if the name is not an allowed archive type. | ### spack.mixins module[¶](#module-spack.mixins) This module contains additional behavior that can be attached to any given package. `spack.mixins.``filter_compiler_wrappers`(**files*, ***kwargs*)[¶](#spack.mixins.filter_compiler_wrappers) Substitutes any path referring to a Spack compiler wrapper with the path of the underlying compiler that has been used. If this isn’t done, the files will have CC, CXX, F77, and FC set to Spack’s generic cc, c++, f77, and f90. We want them to be bound to whatever compiler they were built with. | Parameters: | * ***files** – files to be filtered relative to the search root (which is, by default, the installation prefix) * ****kwargs** – allowed keyword arguments: *after* specifies after which phase the files should be filtered (defaults to ‘install’); *relative_root* is a path relative to prefix where to start searching for the files to be filtered (if not set, the install prefix will be used as the search root; **it is highly recommended to set this, as searching from the installation prefix may affect performance severely in some cases**); *ignore_absent* and *backup*, if present, will be forwarded to `filter_file` (see its documentation for more information on their behavior); *recursive*, if present, will be forwarded to `find` (see its documentation for more information on the behavior) | ### spack.multimethod module[¶](#module-spack.multimethod) This module contains utilities for using multi-methods in spack.
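A rough, Spack-free sketch of the multi-method idea: register (predicate, method) pairs, and at call time pick the first registered method whose predicate matches, falling back to a default. The real `SpecMultiMethod` registers Spec constraints rather than plain Python predicates, so this is an analogy, not the actual implementation:

```python
class SimpleMultiMethod:
    """Toy multimethod: first registered predicate that matches wins;
    otherwise fall back to the default implementation."""

    def __init__(self, default):
        self.default = default
        self.method_list = []  # (predicate, method) pairs

    def register(self, predicate, method):
        self.method_list.append((predicate, method))

    def __call__(self, pkg):
        for predicate, method in self.method_list:
            if predicate(pkg):
                return method(pkg)
        return self.default(pkg)

def default_install(pkg):
    return "generic install"

def openmpi_install(pkg):
    return "install tuned for openmpi"

install = SimpleMultiMethod(default_install)
install.register(lambda pkg: "openmpi" in pkg["deps"], openmpi_install)

print(install({"deps": ["openmpi"]}))  # install tuned for openmpi
print(install({"deps": ["mpich"]}))    # generic install
```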
You can think of multi-methods like overloaded methods – they’re methods with the same name, and we need to select a version of the method based on some criteria. e.g., for overloaded methods, you would select a version of the method to call based on the types of its arguments. In spack, multi-methods are used to ease the life of package authors. They allow methods like install() (or other methods called by install()) to declare multiple versions to be called when the package is instantiated with different specs. e.g., if the package is built with OpenMPI on x86_64, you might want to call a different install method than if it was built for mpich2 on BlueGene/Q. Likewise, you might want to do a different type of install for different versions of the package. Multi-methods provide a simple decorator-based syntax for this that avoids overly complicated rat’s nests of if statements. Obviously, depending on the scenario, regular old conditionals might be clearer, so package authors should use their judgement. *exception* `spack.multimethod.``MultiMethodError`(*message*)[¶](#spack.multimethod.MultiMethodError) Bases: [`spack.error.SpackError`](#spack.error.SpackError) Superclass for multimethod dispatch errors *exception* `spack.multimethod.``NoSuchMethodError`(*cls*, *method_name*, *spec*, *possible_specs*)[¶](#spack.multimethod.NoSuchMethodError) Bases: [`spack.error.SpackError`](#spack.error.SpackError) Raised when we can’t find a version of a multi-method. *class* `spack.multimethod.``SpecMultiMethod`(*default=None*)[¶](#spack.multimethod.SpecMultiMethod) Bases: `object` This implements a multi-method for Spack specs. Packages are instantiated with a particular spec, and you may want to execute different versions of methods based on what the spec looks like. For example, you might want to call a different version of install() for one platform than you call on another. The SpecMultiMethod class implements a callable object that handles method dispatch.
When it is called, it looks through registered methods and their associated specs, and it tries to find one that matches the package’s spec. If it finds one (and only one), it will call that method. The package author is responsible for ensuring that only one condition on multi-methods ever evaluates to true. If multiple methods evaluate to true, this will raise an exception. This is intended for use with decorators (see below). The decorator (see docs below) creates SpecMultiMethods and registers method versions with them. To register a method, you can do something like this: mm = SpecMultiMethod() mm.register("^chaos_5_x86_64_ib", some_method) The object registered needs to be a Spec or some string that will parse to be a valid spec. When the mm is actually called, it selects a version of the method to call based on the sys_type of the object it is called on. See the docs for decorators below for more details. `register`(*spec*, *method*)[¶](#spack.multimethod.SpecMultiMethod.register) Register a version of a method for a particular sys_type. *class* `spack.multimethod.``when`(*spec*)[¶](#spack.multimethod.when) Bases: `object` This annotation lets packages declare multiple versions of methods like install() that depend on the package’s spec. For example:

```
class SomePackage(Package):
    ...

    def install(self, prefix):
        # Do default install

    @when('arch=chaos_5_x86_64_ib')
    def install(self, prefix):
        # This will be executed instead of the default install if
        # the package's platform() is chaos_5_x86_64_ib.

    @when('arch=bgqos_0')
    def install(self, prefix):
        # This will be executed if the package's sys_type is bgqos_0
```

This allows each package to have a default version of install() AND specialized versions for particular platforms. The version that is called depends on the architecture of the instantiated package. Note that this works for methods other than install, as well.
So, if you only have part of the install that is platform specific, you could do this:

```
class SomePackage(Package):
    ...

    # virtual dependence on MPI.
    # could resolve to mpich, mpich2, OpenMPI
    depends_on('mpi')

    def setup(self):
        # do nothing in the default case
        pass

    @when('^openmpi')
    def setup(self):
        # do something special when this is built with OpenMPI for
        # its MPI implementations.

    def install(self, prefix):
        # Do common install stuff
        self.setup()
        # Do more common install stuff
```

There must be one (and only one) @when clause that matches the package’s spec. If there is more than one, or if none match, then the method will raise an exception when it’s called. Note that the default version of decorated methods must *always* come first. Otherwise it will override all of the platform-specific versions. There’s not much we can do to get around this because of the way decorators work. ### spack.package module[¶](#module-spack.package) This is where most of the action happens in Spack. The spack package class structure is based strongly on Homebrew (<http://brew.sh/>), mainly because Homebrew makes it very easy to create packages. *exception* `spack.package.``ActivationError`(*msg*, *long_msg=None*)[¶](#spack.package.ActivationError) Bases: [`spack.package.ExtensionError`](#spack.package.ExtensionError) Raised when there are problems activating an extension. *exception* `spack.package.``DependencyConflictError`(*conflict*)[¶](#spack.package.DependencyConflictError) Bases: [`spack.error.SpackError`](#spack.error.SpackError) Raised when the dependencies cannot be flattened as asked for. *exception* `spack.package.``ExtensionError`(*message*, *long_msg=None*)[¶](#spack.package.ExtensionError) Bases: [`spack.package.PackageError`](#spack.package.PackageError) Superclass for all errors having to do with extension packages.
*exception* `spack.package.``ExternalPackageError`(*message*, *long_msg=None*)[¶](#spack.package.ExternalPackageError) Bases: [`spack.package.InstallError`](#spack.package.InstallError) Raised by install() when a package is only for external use. *exception* `spack.package.``FetchError`(*message*, *long_msg=None*)[¶](#spack.package.FetchError) Bases: [`spack.error.SpackError`](#spack.error.SpackError) Raised when something goes wrong during fetch. *exception* `spack.package.``InstallError`(*message*, *long_msg=None*)[¶](#spack.package.InstallError) Bases: [`spack.error.SpackError`](#spack.error.SpackError) Raised when something goes wrong during install or uninstall. *class* `spack.package.``InstallPhase`(*name*)[¶](#spack.package.InstallPhase) Bases: `object` Manages a single phase of the installation. This descriptor stores at creation time the name of the method it should search for execution. The method is retrieved at __get__ time, so that it can be overridden by subclasses of whatever class declared the phases. It also provides hooks to execute arbitrary callbacks before and after the phase. `copy`()[¶](#spack.package.InstallPhase.copy) *exception* `spack.package.``NoURLError`(*cls*)[¶](#spack.package.NoURLError) Bases: [`spack.package.PackageError`](#spack.package.PackageError) Raised when someone tries to build a URL for a package with no URLs. *class* `spack.package.``Package`(*spec*)[¶](#spack.package.Package) Bases: [`spack.package.PackageBase`](#spack.package.PackageBase) General purpose class with a single `install` phase that needs to be coded by packagers. 
`build_system_class` *= 'Package'*[¶](#spack.package.Package.build_system_class) This attribute is used in UI queries that require to know which build-system class we are using `phases` *= ['install']*[¶](#spack.package.Package.phases) The one and only phase *class* `spack.package.``PackageBase`(*spec*)[¶](#spack.package.PackageBase) Bases: [`spack.package.PackageViewMixin`](#spack.package.PackageViewMixin), `object` This is the superclass for all spack packages. ***The Package class*** A package defines how to fetch, verify (via, e.g., sha256), build, and install a piece of software. A Package also defines what other packages it depends on, so that dependencies can be installed along with the package itself. Packages are written in pure python by users of Spack. There are two main parts of a Spack package: > 1. **The package class**. Classes contain `directives`, which are > special functions, that add metadata (versions, patches, > dependencies, and other information) to packages (see > `directives.py`). Directives provide the constraints that are > used as input to the concretizer. > 2. **Package instances**. Once instantiated, a package is > essentially an installer for a particular piece of > software. Spack calls methods like `do_install()` on the > `Package` object, and it uses those to drive user-implemented > methods like `patch()`, `install()`, and other build steps. > To install software, an instantiated package needs a *concrete* > spec, which guides the behavior of the various install methods. Packages are imported from repos (see `repo.py`). **Package DSL** Look in `lib/spack/docs` or check <https://spack.readthedocs.io> for the full documentation of the package domain-specific language. That used to be partially documented here, but as it grew, the docs here became increasingly out of date.
**Package Lifecycle** A package’s lifecycle over a run of Spack looks something like this:

```
p = Package()         # Done for you by spack
p.do_fetch()          # downloads tarball from a URL
p.do_stage()          # expands tarball in a temp directory
p.do_patch()          # applies patches to expanded source
p.do_install()        # calls package's install() function
p.do_uninstall()      # removes install directory
```

There are also some other commands that clean the build area:

```
p.do_clean()          # removes the stage directory entirely
p.do_restage()        # removes the build directory and
                      # re-expands the archive.
```

The convention used here is that a `do_*` function is intended to be called internally by Spack commands (in spack.cmd). These aren’t for package writers to override, and doing so may break the functionality of the Package class. Package creators have a lot of freedom, and they could technically override anything in this class. That is not usually required. For most use cases, package creators typically just add attributes like `url` and `homepage`, or functions like `install()`. There are many custom `Package` subclasses in the `spack.build_systems` package that make things even easier for specific build systems. `activate`(*extension*, *view*, ***kwargs*)[¶](#spack.package.PackageBase.activate) Add the extension to the specified view. Package authors can override this function to maintain some centralized state related to the set of activated extensions for a package. Spack internals (commands, hooks, etc.) should call do_activate() method so that proper checks are always executed. `all_urls`[¶](#spack.package.PackageBase.all_urls) A list of all URLs in a package. Check both class-level and version-specific URLs. | Returns: | a list of URLs | | Return type: | list | `architecture`[¶](#spack.package.PackageBase.architecture) Get the spack.architecture.Arch object that represents the environment in which this package will be built.
`archive_files` *= []*[¶](#spack.package.PackageBase.archive_files) List of glob expressions. Each expression must either be absolute or relative to the package source path. Matching artifacts found at the end of the build process will be copied in the same directory tree as build.env and build.out. `build_log_path`[¶](#spack.package.PackageBase.build_log_path) *classmethod* `build_system_flags`(*name*, *flags*)[¶](#spack.package.PackageBase.build_system_flags) flag_handler that passes flags to the build system arguments. Any package using build_system_flags must also implement flags_to_build_system_args, or derive from a class that implements it. Currently, AutotoolsPackage and CMakePackage implement it. `build_time_test_callbacks` *= None*[¶](#spack.package.PackageBase.build_time_test_callbacks) `check_for_unfinished_installation`(*keep_prefix=False*, *restage=False*)[¶](#spack.package.PackageBase.check_for_unfinished_installation) Check for leftover files from partially-completed prior install to prepare for a new install attempt. Options control whether these files are reused (vs. destroyed). | Parameters: | * **keep_prefix** (*bool*) – True if the installation prefix needs to be kept, False otherwise * **restage** (*bool*) – False if the stage has to be kept, True otherwise | | Returns: | True if the prefix exists but the install is not complete, False otherwise. | `compiler`[¶](#spack.package.PackageBase.compiler) Get the spack.compiler.Compiler object used to build this package `content_hash`(*content=None*)[¶](#spack.package.PackageBase.content_hash) Create a hash based on the sources and logic used to build the package. This includes the contents of all applied patches and the contents of applicable functions in the package subclass. `deactivate`(*extension*, *view*, ***kwargs*)[¶](#spack.package.PackageBase.deactivate) Remove all extension files from the specified view. Package authors can override this method to support other extension mechanisms. 
Spack internals (commands, hooks, etc.) should call do_deactivate() method so that proper checks are always executed. `dependencies_of_type`(**deptypes*)[¶](#spack.package.PackageBase.dependencies_of_type) Get dependencies that can possibly have these deptypes. This analyzes the package and determines which dependencies *can* be a certain kind of dependency. Note that they may not *always* be this kind of dependency, since dependencies can be optional, so something may be a build dependency in one configuration and a run dependency in another. `dependency_activations`()[¶](#spack.package.PackageBase.dependency_activations) `do_activate`(*view=None*, *with_dependencies=True*, *verbose=True*)[¶](#spack.package.PackageBase.do_activate) Called on an extension to invoke the extendee’s activate method. Commands should call this routine, and should not call activate() directly. `do_clean`()[¶](#spack.package.PackageBase.do_clean) Removes the package’s build stage and source tarball. `do_deactivate`(*view=None*, ***kwargs*)[¶](#spack.package.PackageBase.do_deactivate) Remove this extension package from the specified view. Called on the extension to invoke extendee’s deactivate() method. remove_dependents=True deactivates extensions depending on this package instead of raising an error. `do_fake_install`()[¶](#spack.package.PackageBase.do_fake_install) Make a fake install directory containing fake executables, headers, and libraries. `do_fetch`(*mirror_only=False*)[¶](#spack.package.PackageBase.do_fetch) Creates a stage directory and downloads the tarball for this package. Working directory will be set to the stage directory. `do_install`(*keep_prefix=False*, *keep_stage=False*, *install_source=False*, *install_deps=True*, *skip_patch=False*, *verbose=False*, *make_jobs=None*, *fake=False*, *explicit=False*, *tests=False*, *dirty=None*, ***kwargs*)[¶](#spack.package.PackageBase.do_install) Called by commands to install a package and its dependencies. 
Package implementations should override install() to describe their build process. | Parameters: | * **keep_prefix** (*bool*) – Keep install prefix on failure. By default, destroys it. * **keep_stage** (*bool*) – By default, stage is destroyed only if there are no exceptions during build. Set to True to keep the stage even with exceptions. * **install_source** (*bool*) – By default, source is not installed, but for debugging it might be useful to keep it around. * **install_deps** (*bool*) – Install dependencies before installing this package * **skip_patch** (*bool*) – Skip patch stage of build if True. * **verbose** (*bool*) – Display verbose build output (by default, suppresses it) * **make_jobs** (*int*) – Number of make jobs to use for install. Default is ncpus * **fake** (*bool*) – Don’t really build; install fake stub files instead. * **explicit** (*bool*) – True if package was explicitly installed, False if package was implicitly installed (as a dependency). * **tests** (*bool* *or* *list* *or* *set*) – False to run no tests, True to test all packages, or a list of package names to run tests for some * **dirty** (*bool*) – Don’t clean the build environment before installing. * **force** (*bool*) – Install again, even if already installed. | `do_patch`()[¶](#spack.package.PackageBase.do_patch) Applies patches if they haven’t been applied already. `do_restage`()[¶](#spack.package.PackageBase.do_restage) Reverts expanded/checked out source to a pristine state. `do_stage`(*mirror_only=False*)[¶](#spack.package.PackageBase.do_stage) Unpacks and expands the fetched tarball. `do_uninstall`(*force=False*)[¶](#spack.package.PackageBase.do_uninstall) Uninstall this package by spec. *classmethod* `env_flags`(*name*, *flags*)[¶](#spack.package.PackageBase.env_flags) flag_handler that adds all flags to canonical environment variables. 
`env_path`[¶](#spack.package.PackageBase.env_path) `extendable` *= False*[¶](#spack.package.PackageBase.extendable) Most packages are NOT extendable. Set to True if you want extensions. `extendee_args`[¶](#spack.package.PackageBase.extendee_args) Spec of the extendee of this package, or None if it is not an extension `extendee_spec`[¶](#spack.package.PackageBase.extendee_spec) Spec of the extendee of this package, or None if it is not an extension `extends`(*spec*)[¶](#spack.package.PackageBase.extends) Returns True if this package extends the given spec. If `self.spec` is concrete, this returns whether this package extends the given spec. If `self.spec` is not concrete, this returns whether this package may extend the given spec. `fetch_remote_versions`()[¶](#spack.package.PackageBase.fetch_remote_versions) Find remote versions of this package. Uses `list_url` and any other URLs listed in the package file. | Returns: | a dictionary mapping versions to URLs | | Return type: | dict | `fetcher`[¶](#spack.package.PackageBase.fetcher) *classmethod* `flag_handler`(*name*, *flags*)[¶](#spack.package.PackageBase.flag_handler) flag_handler that injects all flags through the compiler wrapper. `flags_to_build_system_args`(*flags*)[¶](#spack.package.PackageBase.flags_to_build_system_args) `format_doc`(***kwargs*)[¶](#spack.package.PackageBase.format_doc) Wrap doc string at 72 characters and format nicely `global_license_dir`[¶](#spack.package.PackageBase.global_license_dir) Returns the directory where global license files for all packages are stored. `global_license_file`[¶](#spack.package.PackageBase.global_license_file) Returns the path where a global license file for this particular package should be stored. *classmethod* `inject_flags`(*name*, *flags*)[¶](#spack.package.PackageBase.inject_flags) flag_handler that injects all flags through the compiler wrapper. 
`install_time_test_callbacks` *= None*[¶](#spack.package.PackageBase.install_time_test_callbacks) `installed`[¶](#spack.package.PackageBase.installed) Installation status of a package. | Returns: | True if the package has been installed, False otherwise. | `is_activated`(*view*)[¶](#spack.package.PackageBase.is_activated) Return True if package is activated. `is_extension`[¶](#spack.package.PackageBase.is_extension) `license_comment` *= '#'*[¶](#spack.package.PackageBase.license_comment) String. Contains the symbol used by the license manager to denote a comment. Defaults to `#`. `license_files` *= []*[¶](#spack.package.PackageBase.license_files) List of strings. These are files that the software searches for when looking for a license. All file paths must be relative to the installation directory. More complex packages like Intel may require multiple licenses for individual components. Defaults to the empty list. `license_required` *= False*[¶](#spack.package.PackageBase.license_required) Boolean. If set to `True`, this software requires a license. If set to `False`, all of the `license_*` attributes will be ignored. Defaults to `False`. `license_url` *= ''*[¶](#spack.package.PackageBase.license_url) String. A URL pointing to license setup instructions for the software. Defaults to the empty string. `license_vars` *= []*[¶](#spack.package.PackageBase.license_vars) List of strings. Environment variables that can be set to tell the software where to look for a license if it is not in the usual location. Defaults to the empty list. `log`()[¶](#spack.package.PackageBase.log) `log_path`[¶](#spack.package.PackageBase.log_path) *classmethod* `lookup_patch`(*sha256*)[¶](#spack.package.PackageBase.lookup_patch) Look up a patch associated with this package by its sha256 sum. | Parameters: | **sha256** (*str*) – sha256 sum of the patch to look up | | Returns: | `Patch` object with the given hash, or `None` if not found. 
| | Return type: | (Patch) | To do the lookup, we build an index lazily. This allows us to avoid computing a sha256 for *every* patch and on every package load. With lazy hashing, we only compute hashes on lookup, which usually happens at build time. `maintainers` *= []*[¶](#spack.package.PackageBase.maintainers) List of strings containing GitHub usernames of package maintainers. Do not include @ here in order not to unnecessarily ping the users. `make_jobs` *= None*[¶](#spack.package.PackageBase.make_jobs) # jobs to use for parallel make. If set, overrides default of ncpus. `metadata_attrs` *= ['homepage', 'url', 'list_url', 'extendable', 'parallel', 'make_jobs']*[¶](#spack.package.PackageBase.metadata_attrs) List of attributes which do not affect a package’s content. `module`[¶](#spack.package.PackageBase.module) Use this to add variables to the class’s module’s scope. This lets us use custom syntax in the install method. `namespace`[¶](#spack.package.PackageBase.namespace) `nearest_url`(*version*)[¶](#spack.package.PackageBase.nearest_url) Finds the URL with the “closest” version to `version`. This uses the following precedence order: > 1. Find the next lowest or equal version with a URL. > 2. If no lower URL, return the next *higher* URL. > 3. If no higher URL, return None. `package_dir`[¶](#spack.package.PackageBase.package_dir) Return the directory where the package.py file lives. `parallel` *= True*[¶](#spack.package.PackageBase.parallel) By default we build in parallel. Subclasses can override this. `possible_dependencies`(*transitive=True*, *expand_virtuals=True*, *visited=None*)[¶](#spack.package.PackageBase.possible_dependencies) Return set of possible dependencies of this package. Note: the set returned *includes* the package itself. | Parameters: | * **transitive** (*bool*) – return all transitive dependencies if True, only direct dependencies if False.
* **expand_virtuals** (*bool*) – expand virtual dependencies into all possible implementations. * **visited** (*set*) – set of names of dependencies visited so far. | `prefix`[¶](#spack.package.PackageBase.prefix) Get the prefix into which this package should be installed. `provides`(*vpkg_name*)[¶](#spack.package.PackageBase.provides) True if this package provides a virtual package with the specified name `remove_prefix`()[¶](#spack.package.PackageBase.remove_prefix) Removes the prefix for a package along with any empty parent directories `rpath`[¶](#spack.package.PackageBase.rpath) Get the rpath this package links with, as a list of paths. `rpath_args`[¶](#spack.package.PackageBase.rpath_args) Get the rpath args as a string, with -Wl,-rpath, for each element `run_tests` *= False*[¶](#spack.package.PackageBase.run_tests) By default do not run tests within package’s install() `sanity_check_is_dir` *= []*[¶](#spack.package.PackageBase.sanity_check_is_dir) List of prefix-relative directory paths (or a single path). If these do not exist after install, or if they exist but are not directories, sanity checks will fail. `sanity_check_is_file` *= []*[¶](#spack.package.PackageBase.sanity_check_is_file) List of prefix-relative file paths (or a single path). If these do not exist after install, or if they exist but are not files, sanity checks fail. `sanity_check_prefix`()[¶](#spack.package.PackageBase.sanity_check_prefix) This function checks whether install succeeded. `setup_dependent_environment`(*spack_env*, *run_env*, *dependent_spec*)[¶](#spack.package.PackageBase.setup_dependent_environment) Set up the environment of packages that depend on this one. This is similar to `setup_environment`, but it is used to modify the compile and runtime environments of packages that *depend* on this one. This gives packages like Python and others that follow the extension model a way to implement common environment or compile-time settings for dependencies. 
This is useful if there are some common steps to installing all extensions for a certain package. Example: 1. Installing python modules generally requires `PYTHONPATH` to point to the `lib/pythonX.Y/site-packages` directory in the module’s install prefix. This method could be used to set that variable. | Parameters: | * **spack_env** (*EnvironmentModifications*) – List of environment modifications to be applied when the dependent package is built within Spack. * **run_env** (*EnvironmentModifications*) – List of environment modifications to be applied when the dependent package is run outside of Spack. These are added to the resulting module file. * **dependent_spec** (*Spec*) – The spec of the dependent package about to be built. This allows the extendee (self) to query the dependent’s state. Note that *this* package’s spec is available as `self.spec`. | `setup_dependent_package`(*module*, *dependent_spec*)[¶](#spack.package.PackageBase.setup_dependent_package) Set up Python module-scope variables for dependent packages. Called before the install() method of dependents. Default implementation does nothing, but this can be overridden by an extendable package to set up the module of its extensions. This is useful if there are some common steps to installing all extensions for a certain package. Examples: 1. Extensions often need to invoke the `python` interpreter from the Python installation being extended. This routine can put a `python()` Executable object in the module scope for the extension package to simplify extension installs. 2. MPI compilers could set some variables in the dependent’s scope that point to `mpicc`, `mpicxx`, etc., allowing them to be called by common name regardless of which MPI is used. 3. BLAS/LAPACK implementations can set some variables indicating the path to their libraries, since these paths differ by BLAS/LAPACK implementation. 
| Parameters: | * **module** ([*spack.package.PackageBase.module*](index.html#spack.package.PackageBase.module)) – The Python `module` object of the dependent package. Packages can use this to set module-scope variables for the dependent to use. * **dependent_spec** (*Spec*) – The spec of the dependent package about to be built. This allows the extendee (self) to query the dependent’s state. Note that *this* package’s spec is available as `self.spec`. | `setup_environment`(*spack_env*, *run_env*)[¶](#spack.package.PackageBase.setup_environment) Set up the compile and runtime environments for a package. `spack_env` and `run_env` are `EnvironmentModifications` objects. Package authors can call methods on them to alter the environment within Spack and at runtime. Both `spack_env` and `run_env` are applied within the build process, before this package’s `install()` method is called. Modifications in `run_env` will *also* be added to the generated environment modules for this package. Default implementation does nothing, but this can be overridden if the package needs a particular environment. Example: 1. Qt extensions need `QTDIR` set. | Parameters: | * **spack_env** (*EnvironmentModifications*) – List of environment modifications to be applied when this package is built within Spack. * **run_env** (*EnvironmentModifications*) – List of environment modifications to be applied when this package is run outside of Spack. These are added to the resulting module file. | `stage`[¶](#spack.package.PackageBase.stage) Get the build staging area for this package. This automatically instantiates a `Stage` object if the package doesn’t have one yet, but it does not create the Stage directory on the filesystem. `transitive_rpaths` *= True*[¶](#spack.package.PackageBase.transitive_rpaths) When True, add RPATHs for the entire DAG. When False, add RPATHs only for immediate dependencies. 
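The `setup_environment` contract above takes two `EnvironmentModifications` recorders: one applied during the Spack build and one emitted into the generated module file. A hedged sketch of that contract, with a tiny stand-in `EnvMods` class and illustrative paths (not Spack's real API):

```python
# Hedged sketch of the setup_environment contract: spack_env applies at
# build time, run_env also ends up in the module file. EnvMods is a
# simplified stand-in for spack's EnvironmentModifications.
class EnvMods:
    def __init__(self):
        self.ops = []               # recorded modifications, in order

    def set(self, name, value):
        self.ops.append(("set", name, value))

    def prepend_path(self, name, path):
        self.ops.append(("prepend-path", name, path))

def setup_environment(spack_env, run_env, prefix):
    # e.g. a Qt-style package exporting QTDIR for builds and at runtime
    spack_env.set("QTDIR", prefix)
    run_env.set("QTDIR", prefix)
    run_env.prepend_path("PATH", prefix + "/bin")

spack_env, run_env = EnvMods(), EnvMods()
setup_environment(spack_env, run_env, "/opt/qt")
```

`setup_dependent_environment` follows the same pattern, except the modifications apply to packages that depend on this one (the `PYTHONPATH` example above).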
`try_install_from_binary_cache`(*explicit*)[¶](#spack.package.PackageBase.try_install_from_binary_cache) *static* `uninstall_by_spec`(*force=False*)[¶](#spack.package.PackageBase.uninstall_by_spec) `unit_test_check`()[¶](#spack.package.PackageBase.unit_test_check) Hook for unit tests to assert things about package internals. Unit tests can override this function to perform checks after `Package.install` and all post-install hooks run, but before the database is updated. The overridden function may indicate that the install procedure should terminate early (before updating the database) by returning `False` (or any value such that `bool(result)` is `False`). | Returns: | `True` to continue, `False` to skip `install()` | | Return type: | (bool) | `url_for_version`(*version*)[¶](#spack.package.PackageBase.url_for_version) Returns a URL from which the specified version of this package may be downloaded. version: class Version The version for which a URL is sought. See Class Version (version.py) `url_version`(*version*)[¶](#spack.package.PackageBase.url_version) Given a version, this returns a string that should be substituted into the package’s URL to download that version. By default, this just returns the version string. Subclasses may need to override this, e.g. for boost versions, where you need to ensure that there are _’s in the download URL. `use_xcode` *= False*[¶](#spack.package.PackageBase.use_xcode) By default, do not set up a mock Xcode on macOS with Clang `version`[¶](#spack.package.PackageBase.version) `version_urls` *= functools.partial(<bound method memoized.__call__ of <llnl.util.lang.memoized object>>, None)*[¶](#spack.package.PackageBase.version_urls) `view`()[¶](#spack.package.PackageBase.view) Create a view with the prefix of this package as the root. Extensions added to this view will modify the installation prefix of this package.
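The `url_version` override described above can be sketched in plain Python. This is an illustration of the boost-style case, not Spack's real `PackageBase` code; the template URLs are made up:

```python
# Hedged sketch of url_version/url_for_version: by default the version
# string is substituted into the URL as-is; a package like boost overrides
# url_version so "1.68.0" becomes "1_68_0" in the download URL.
def default_url_version(version):
    return str(version)

def boost_url_version(version):
    return str(version).replace(".", "_")

def url_for_version(version, template, url_version=default_url_version):
    return template.format(version=url_version(version))

plain = url_for_version("1.2.3", "https://example.org/pkg-{version}.tar.gz")
boost = url_for_version(
    "1.68.0",
    "https://example.org/boost_{version}.tar.bz2",
    url_version=boost_url_version,
)
```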
*exception* `spack.package.``PackageError`(*message*, *long_msg=None*)[¶](#spack.package.PackageError) Bases: [`spack.error.SpackError`](#spack.error.SpackError) Raised when something is wrong with a package definition. *class* `spack.package.``PackageMeta`(*name*, *bases*, *attr_dict*)[¶](#spack.package.PackageMeta) Bases: `spack.directives.DirectiveMeta`, `spack.mixins.PackageMixinsMeta` Conveniently transforms attributes to permit extensible phases. Iterates over the attribute ‘phases’ and creates / updates private InstallPhase attributes in the class that is being initialized. `phase_fmt` *= '_InstallPhase_{0}'*[¶](#spack.package.PackageMeta.phase_fmt) *static* `register_callback`(**phases*)[¶](#spack.package.PackageMeta.register_callback) *exception* `spack.package.``PackageStillNeededError`(*spec*, *dependents*)[¶](#spack.package.PackageStillNeededError) Bases: [`spack.package.InstallError`](#spack.package.InstallError) Raised when a package is still needed by another on uninstall. *exception* `spack.package.``PackageVersionError`(*version*)[¶](#spack.package.PackageVersionError) Bases: [`spack.package.PackageError`](#spack.package.PackageError) Raised when a version URL cannot automatically be determined. *class* `spack.package.``PackageViewMixin`[¶](#spack.package.PackageViewMixin) Bases: `object` This collects all functionality related to adding installed Spack packages to views. Packages can customize how they are added to views by overriding these functions. `add_files_to_view`(*view*, *merge_map*)[¶](#spack.package.PackageViewMixin.add_files_to_view) Given a map of package files to destination paths in the view, add the files to the view. By default this adds all files. Alternative implementations may skip some files, for example if other packages linked into the view already include the file.
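The `merge_map` contract of `add_files_to_view` and `view_file_conflicts` can be sketched with dicts standing in for the filesystem. This is a simplified illustration of the default behavior, not the real mixin:

```python
# Hedged sketch of the PackageViewMixin contract: merge_map maps package
# files to their destinations in the view; conflicts are destinations
# that already exist; the default merge adds everything. A dict stands in
# for the view's filesystem.
def view_file_conflicts(view_files, merge_map):
    """Destinations that already exist in the view."""
    return {dst for dst in merge_map.values() if dst in view_files}

def add_files_to_view(view_files, merge_map):
    """Default behavior: link every destination back to its source."""
    for src, dst in merge_map.items():
        view_files[dst] = src

view = {}                                    # destination -> source
merge = {"/pkg/bin/tool": "/view/bin/tool"}
conflicts = view_file_conflicts(view, merge)     # empty: view is empty
add_files_to_view(view, merge)
later = view_file_conflicts(view, merge)         # now the file conflicts
```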
`remove_files_from_view`(*view*, *merge_map*)[¶](#spack.package.PackageViewMixin.remove_files_from_view) Given a map of package files to files currently linked in the view, remove the files from the view. The default implementation removes all files. Alternative implementations may not remove all files. For example if two packages include the same file, it should only be removed when both packages are removed. `view_destination`(*view*)[¶](#spack.package.PackageViewMixin.view_destination) The target root directory: each file is added relative to this directory. `view_file_conflicts`(*view*, *merge_map*)[¶](#spack.package.PackageViewMixin.view_file_conflicts) Report any files which prevent adding this package to the view. The default implementation looks for any files which already exist. Alternative implementations may allow some of the files to exist in the view (in this case they would be omitted from the results). `view_source`()[¶](#spack.package.PackageViewMixin.view_source) The source root directory that will be added to the view: files are added such that their path relative to the view destination matches their path relative to the view source. `spack.package.``dump_packages`(*spec*, *path*)[¶](#spack.package.dump_packages) Dump all package information for a spec and its dependencies. This creates a package repository within path for every namespace in the spec DAG, and fills the repos with package files and patch files for every node in the DAG. `spack.package.``flatten_dependencies`(*spec*, *flat_dir*)[¶](#spack.package.flatten_dependencies) Make each dependency of spec present in dir via symlink. `spack.package.``install_dependency_symlinks`(*pkg*, *spec*, *prefix*)[¶](#spack.package.install_dependency_symlinks) Execute a dummy install and flatten dependencies. `spack.package.``on_package_attributes`(***attr_dict*)[¶](#spack.package.on_package_attributes) Decorator: executes the instance function only if the object has the given attribute values.
Executes the decorated method only if at the moment of calling the instance has attributes that are equal to certain values. | Parameters: | **attr_dict** (*dict*) – dictionary mapping attribute names to their required values | `spack.package.``print_pkg`(*message*)[¶](#spack.package.print_pkg) Outputs a message with a package icon. `spack.package.``run_after`(**phases*)[¶](#spack.package.run_after) Registers a method of a package to be run after a given phase. `spack.package.``run_before`(**phases*)[¶](#spack.package.run_before) Registers a method of a package to be run before a given phase. `spack.package.``use_cray_compiler_names`()[¶](#spack.package.use_cray_compiler_names) Compiler names for builds that rely on Cray compiler names. ### spack.package_prefs module[¶](#module-spack.package_prefs) *class* `spack.package_prefs.``PackagePrefs`(*pkgname*, *component*, *vpkg=None*)[¶](#spack.package_prefs.PackagePrefs) Bases: `object` Defines the sort order for a set of specs. Spack’s package preference implementation uses PackagePrefs objects to define sort order. The PackagePrefs class looks at Spack’s packages.yaml configuration and, when called on a spec, returns a key that can be used to sort that spec in order of the user’s preferences. You can use it like this: > # key function sorts CompilerSpecs for mpich in order of preference > kf = PackagePrefs(‘mpich’, ‘compiler’) > compiler_list.sort(key=kf) Or like this: > # key function to sort VersionLists for OpenMPI in order of preference. > kf = PackagePrefs(‘openmpi’, ‘version’) > version_list.sort(key=kf) Optionally, you can sort in order of preferred virtual dependency providers.
To do that, provide ‘providers’ and a third argument denoting the virtual package (e.g., `mpi`): > kf = PackagePrefs(‘trilinos’, ‘providers’, ‘mpi’) > provider_spec_list.sort(key=kf) *classmethod* `clear_caches`()[¶](#spack.package_prefs.PackagePrefs.clear_caches) *classmethod* `has_preferred_providers`(*pkgname*, *vpkg*)[¶](#spack.package_prefs.PackagePrefs.has_preferred_providers) Whether a specific package has preferred vpkg providers. *classmethod* `preferred_variants`(*pkg_name*)[¶](#spack.package_prefs.PackagePrefs.preferred_variants) Return a VariantMap of preferred variants/values for a spec. *exception* `spack.package_prefs.``VirtualInPackagesYAMLError`(*message*, *long_message=None*)[¶](#spack.package_prefs.VirtualInPackagesYAMLError) Bases: [`spack.error.SpackError`](#spack.error.SpackError) Raised when a disallowed virtual is found in packages.yaml `spack.package_prefs.``get_package_dir_permissions`(*spec*)[¶](#spack.package_prefs.get_package_dir_permissions) Return the permissions configured for the spec. Include the GID bit if group permissions are on. This makes the group attribute sticky for the directory. Package-specific settings take precedence over settings for `all` `spack.package_prefs.``get_package_group`(*spec*)[¶](#spack.package_prefs.get_package_group) Return the unix group associated with the spec. Package-specific settings take precedence over settings for `all` `spack.package_prefs.``get_package_permissions`(*spec*)[¶](#spack.package_prefs.get_package_permissions) Return the permissions configured for the spec. Package-specific settings take precedence over settings for `all` `spack.package_prefs.``get_packages_config`()[¶](#spack.package_prefs.get_packages_config) Wrapper around get_packages_config() to validate semantics.
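The key-function pattern that PackagePrefs implements can be sketched without Spack: a callable that maps each candidate to its rank in a preference list, so `sort(key=...)` orders candidates by user preference. The spec strings here are illustrative; real PackagePrefs reads the ranking from packages.yaml:

```python
# Hedged sketch of the PackagePrefs key-function pattern: candidates
# found in the preference list sort by their position; everything else
# sorts last. Simplified from Spack's real implementation.
class PrefKey:
    def __init__(self, preferred):
        self.preferred = list(preferred)

    def __call__(self, candidate):
        try:
            return self.preferred.index(candidate)
        except ValueError:
            return len(self.preferred)   # unlisted candidates sort last

kf = PrefKey(["gcc@9.2.0", "clang@9.0.0"])
compilers = ["clang@9.0.0", "intel@19", "gcc@9.2.0"]
compilers.sort(key=kf)
```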
`spack.package_prefs.``is_spec_buildable`(*spec*)[¶](#spack.package_prefs.is_spec_buildable) Return true if the spec pkgspec is configured as buildable `spack.package_prefs.``spec_externals`(*spec*)[¶](#spack.package_prefs.spec_externals) Return a list of external specs (w/external directory path filled in), one for each known external installation. ### spack.package_test module[¶](#module-spack.package_test) `spack.package_test.``compare_output`(*current_output*, *blessed_output*)[¶](#spack.package_test.compare_output) Compare blessed and current output of executables. `spack.package_test.``compare_output_file`(*current_output*, *blessed_output_file*)[¶](#spack.package_test.compare_output_file) Same as above, but when the blessed output is given as a file. `spack.package_test.``compile_c_and_execute`(*source_file*, *include_flags*, *link_flags*)[¶](#spack.package_test.compile_c_and_execute) Compile the C source_file with include_flags and link_flags, run the result, and return its output. ### spack.parse module[¶](#module-spack.parse) *exception* `spack.parse.``LexError`(*message*, *string*, *pos*)[¶](#spack.parse.LexError) Bases: [`spack.parse.ParseError`](#spack.parse.ParseError) Raised when we don’t know how to lex something. *class* `spack.parse.``Lexer`(*lexicon0*, *mode_switches_01=[]*, *lexicon1=[]*, *mode_switches_10=[]*)[¶](#spack.parse.Lexer) Bases: `object` Base class for Lexers that keep track of line numbers. `lex`(*text*)[¶](#spack.parse.Lexer.lex) `lex_word`(*word*)[¶](#spack.parse.Lexer.lex_word) `token`(*type*, *value=''*)[¶](#spack.parse.Lexer.token) *exception* `spack.parse.``ParseError`(*message*, *string*, *pos*)[¶](#spack.parse.ParseError) Bases: [`spack.error.SpackError`](#spack.error.SpackError) Raised when we hit an error while parsing. *class* `spack.parse.``Parser`(*lexer*)[¶](#spack.parse.Parser) Bases: `object` Base class for simple recursive descent parsers.
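The accept()/expect() protocol of such a recursive descent parser can be sketched with a plain token cursor: accept() consumes on match and reports success, expect() raises when the grammar requires a token that is not next. The token names are made up; Spack's real Parser works on Token objects from its Lexer:

```python
# Hedged sketch of the Parser accept()/expect() pattern, simplified to
# plain string tokens instead of spack.parse.Token objects.
class TinyParser:
    def __init__(self, tokens):
        self.tokens = list(tokens)
        self.pos = 0

    def next_token(self):
        return self.tokens[self.pos] if self.pos < len(self.tokens) else None

    def accept(self, kind):
        """Consume the next token if it matches; return whether it did."""
        if self.next_token() == kind:
            self.pos += 1
            return True
        return False

    def expect(self, kind):
        """Like accept(), but a mismatch is a syntax error."""
        if not self.accept(kind):
            raise SyntaxError("expected %r, got %r" % (kind, self.next_token()))

p = TinyParser(["NAME", "AT", "VERSION"])
got_name = p.accept("NAME")   # True: consumed
got_dep = p.accept("DEP")     # False: cursor unchanged
p.expect("AT")                # consumes, or raises SyntaxError
```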
`accept`(*id*)[¶](#spack.parse.Parser.accept) Put the next symbol in self.token if accepted, then call gettok() `expect`(*id*)[¶](#spack.parse.Parser.expect) Like accept(), but fails if we don’t like the next token. `gettok`()[¶](#spack.parse.Parser.gettok) Puts the next token in the input stream into self.next. `last_token_error`(*message*)[¶](#spack.parse.Parser.last_token_error) Raise an error about the previous token in the stream. `next_token_error`(*message*)[¶](#spack.parse.Parser.next_token_error) Raise an error about the next token in the stream. `parse`(*text*)[¶](#spack.parse.Parser.parse) `push_tokens`(*iterable*)[¶](#spack.parse.Parser.push_tokens) Adds all tokens in some iterable to the token stream. `setup`(*text*)[¶](#spack.parse.Parser.setup) `unexpected_token`()[¶](#spack.parse.Parser.unexpected_token) *class* `spack.parse.``Token`(*type*, *value=''*, *start=0*, *end=0*)[¶](#spack.parse.Token) Bases: `object` Represents tokens; generated from input by lexer and fed to parse(). `is_a`(*type*)[¶](#spack.parse.Token.is_a) ### spack.patch module[¶](#module-spack.patch) *class* `spack.patch.``FilePatch`(*pkg*, *path_or_url*, *level*, *working_dir*)[¶](#spack.patch.FilePatch) Bases: [`spack.patch.Patch`](#spack.patch.Patch) Describes a patch that is retrieved from a file in the repository `sha256`[¶](#spack.patch.FilePatch.sha256) *exception* `spack.patch.``NoSuchPatchError`(*message*, *long_message=None*)[¶](#spack.patch.NoSuchPatchError) Bases: [`spack.error.SpackError`](#spack.error.SpackError) Raised when a patch file doesn’t exist. *class* `spack.patch.``Patch`(*path_or_url*, *level*, *working_dir*)[¶](#spack.patch.Patch) Bases: `object` Base class to describe a patch that needs to be applied to some expanded source code. 
`apply`(*stage*)[¶](#spack.patch.Patch.apply) Apply the patch at self.path to the source code in the supplied stage | Parameters: | **stage** – stage for the package that needs to be patched | *static* `create`(*path_or_url*, *level=1*, *working_dir='.'*, ***kwargs*)[¶](#spack.patch.Patch.create) Factory method that creates an instance of some class derived from Patch | Parameters: | * **pkg** – package that needs to be patched * **path_or_url** – path or url where the patch is found * **level** – patch level (default 1) * **working_dir** (*str*) – dir to change to before applying (default ‘.’) | | Returns: | instance of some Patch class | *exception* `spack.patch.``PatchDirectiveError`(*message*, *long_message=None*)[¶](#spack.patch.PatchDirectiveError) Bases: [`spack.error.SpackError`](#spack.error.SpackError) Raised when the wrong arguments are supplied to the patch directive. *class* `spack.patch.``UrlPatch`(*path_or_url*, *level*, *working_dir*, ***kwargs*)[¶](#spack.patch.UrlPatch) Bases: [`spack.patch.Patch`](#spack.patch.Patch) Describes a patch that is retrieved from a URL `apply`(*stage*)[¶](#spack.patch.UrlPatch.apply) Retrieves the patch in a temporary stage, computes self.path, and calls super().apply(stage) | Parameters: | **stage** – stage for the package that needs to be patched | `spack.patch.``absolute_path_for_package`(*pkg*)[¶](#spack.patch.absolute_path_for_package) Returns the absolute path to the `package.py` file implementing the recipe for the package passed as argument. | Parameters: | **pkg** – a valid package object, or a Dependency object. | ### spack.paths module[¶](#module-spack.paths) Defines paths that are part of Spack’s directory structure. Do not import other `spack` modules here. This module is used throughout Spack and should bring in a minimal number of external dependencies.
`spack.paths.``bin_path` *= '/home/docs/checkouts/readthedocs.org/user_builds/spack/checkouts/v0.12.1/bin'*[¶](#spack.paths.bin_path) bin directory in the spack prefix `spack.paths.``prefix` *= '/home/docs/checkouts/readthedocs.org/user_builds/spack/checkouts/v0.12.1'*[¶](#spack.paths.prefix) This file lives in $prefix/lib/spack/spack/__file__ `spack.paths.``spack_root` *= '/home/docs/checkouts/readthedocs.org/user_builds/spack/checkouts/v0.12.1'*[¶](#spack.paths.spack_root) synonym for prefix `spack.paths.``spack_script` *= '/home/docs/checkouts/readthedocs.org/user_builds/spack/checkouts/v0.12.1/bin/spack'*[¶](#spack.paths.spack_script) The spack script itself `spack.paths.``user_config_path` *= '/home/docs/.spack'*[¶](#spack.paths.user_config_path) User configuration location ### spack.pkgkit module[¶](#module-spack.pkgkit) pkgkit is a set of useful build tools and directives for packages. Everything in this module is automatically imported into Spack package files. ### spack.provider_index module[¶](#module-spack.provider_index) The `virtual` module contains utility classes for virtual dependencies. *class* `spack.provider_index.``ProviderIndex`(*specs=None*, *restrict=False*)[¶](#spack.provider_index.ProviderIndex) Bases: `object` This is a dict of dicts used for finding providers of particular virtual dependencies. The dict of dicts looks like: { vpkg name : { full vpkg spec : set(packages providing spec) } } Callers can use this to first find which packages provide a vpkg, then find a matching full spec. e.g., in this scenario: { ‘mpi’ : { mpi@:1.1 : set([mpich]), mpi@:2.3 : set([[mpich2@1.9](mailto:mpich2%401.9):]) } } Calling providers_for(spec) will find specs that provide a matching implementation of MPI. `copy`()[¶](#spack.provider_index.ProviderIndex.copy) Deep copy of this ProviderIndex. 
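The dict-of-dicts layout described for ProviderIndex can be sketched directly. Constraint matching is reduced to exact string equality here for illustration; the real index uses spec satisfaction:

```python
# Hedged sketch of the ProviderIndex layout:
#   { vpkg name : { vpkg constraint : set(provider names) } }
# Exact string comparison stands in for real spec satisfaction.
index = {
    "mpi": {
        "mpi@:1.1": {"mpich"},
        "mpi@:2.3": {"mpich2"},
    }
}

def providers_for(index, vpkg_name, constraint):
    """First find the vpkg, then the providers of a matching constraint."""
    providers = set()
    for spec, pkgs in index.get(vpkg_name, {}).items():
        if spec == constraint:       # stand-in for spec.satisfies(...)
            providers |= pkgs
    return providers

mpi1 = providers_for(index, "mpi", "mpi@:1.1")   # providers of old MPI
none = providers_for(index, "lapack", "lapack")  # no such vpkg: empty set
```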
*static* `from_yaml`()[¶](#spack.provider_index.ProviderIndex.from_yaml) `merge`(*other*)[¶](#spack.provider_index.ProviderIndex.merge) Merge other ProviderIndex into this one. `providers_for`(**vpkg_specs*)[¶](#spack.provider_index.ProviderIndex.providers_for) Gives specs of all packages that provide virtual packages with the supplied specs. `remove_provider`(*pkg_name*)[¶](#spack.provider_index.ProviderIndex.remove_provider) Remove a provider from the ProviderIndex. `satisfies`(*other*)[¶](#spack.provider_index.ProviderIndex.satisfies) Check that providers of virtual specs are compatible. `to_yaml`(*stream=None*)[¶](#spack.provider_index.ProviderIndex.to_yaml) `update`(*spec*)[¶](#spack.provider_index.ProviderIndex.update) *exception* `spack.provider_index.``ProviderIndexError`(*message*, *long_message=None*)[¶](#spack.provider_index.ProviderIndexError) Bases: [`spack.error.SpackError`](#spack.error.SpackError) Raised when there is a problem with a ProviderIndex. ### spack.relocate module[¶](#module-spack.relocate) *exception* `spack.relocate.``InstallRootStringException`(*file_path*, *root_path*)[¶](#spack.relocate.InstallRootStringException) Bases: [`spack.error.SpackError`](#spack.error.SpackError) Raised when the relocated binary still has the install root string. `spack.relocate.``get_existing_elf_rpaths`(*path_name*)[¶](#spack.relocate.get_existing_elf_rpaths) Return the RPATHS returned by patchelf –print-rpath path_name as a list of strings. `spack.relocate.``get_patchelf`()[¶](#spack.relocate.get_patchelf) Builds and installs spack patchelf package on linux platforms using the first concretized spec. Returns the full patchelf binary path. `spack.relocate.``get_placeholder_rpaths`(*path_name*, *orig_rpaths*)[¶](#spack.relocate.get_placeholder_rpaths) Replaces original layout root dir with a placeholder string in all rpaths. 
`spack.relocate.``get_relative_rpaths`(*path_name*, *orig_dir*, *orig_rpaths*)[¶](#spack.relocate.get_relative_rpaths) Replaces orig_dir with a relative path from dirname(path_name) if an rpath in orig_rpaths contains orig_path. Prefixes $ORIGIN to relative paths and returns the replacement rpaths. `spack.relocate.``macho_get_paths`(*path_name*)[¶](#spack.relocate.macho_get_paths) Examines the output of otool -l path_name for these three fields: LC_ID_DYLIB, LC_LOAD_DYLIB, LC_RPATH and parses out the rpaths, dependencies and library id. Returns these values. `spack.relocate.``macho_make_paths_placeholder`(*rpaths*, *deps*, *idpath*)[¶](#spack.relocate.macho_make_paths_placeholder) Replaces old_dir with a placeholder of the same length in rpaths, deps, and idpath as needed; the replacements are returned. `spack.relocate.``macho_make_paths_relative`(*path_name*, *old_dir*, *rpaths*, *deps*, *idpath*)[¶](#spack.relocate.macho_make_paths_relative) Replaces old_dir with a relative path from dirname(path_name) in rpaths and deps; idpaths are replaced with @rpath/libname as needed; the replacements are returned.
`spack.relocate.``macho_replace_paths`(*old_dir*, *new_dir*, *rpaths*, *deps*, *idpath*)[¶](#spack.relocate.macho_replace_paths) Replace old_dir with new_dir in rpaths, deps and idpath and return replacements `spack.relocate.``make_binary_placeholder`(*cur_path_names*, *allow_root*)[¶](#spack.relocate.make_binary_placeholder) Replace old install root in RPATHs with placeholder in binary files `spack.relocate.``make_binary_relative`(*cur_path_names*, *orig_path_names*, *old_dir*, *allow_root*)[¶](#spack.relocate.make_binary_relative) Replace old RPATHs with paths relative to old_dir in binary files `spack.relocate.``modify_elf_object`(*path_name*, *new_rpaths*)[¶](#spack.relocate.modify_elf_object) Replace orig_rpath with new_rpath in RPATH of elf object path_name `spack.relocate.``modify_macho_object`(*cur_path*, *rpaths*, *deps*, *idpath*, *new_rpaths*, *new_deps*, *new_idpath*)[¶](#spack.relocate.modify_macho_object) Modify MachO binary path_name by replacing old_dir with new_dir or the relative path to spack install root. The old install dir in LC_ID_DYLIB is replaced with the new install dir using install_name_tool -id newid binary The old install dir in LC_LOAD_DYLIB is replaced with the new install dir using install_name_tool -change old new binary The old install dir in LC_RPATH is replaced with the new install dir using install_name_tool -rpath old new binary `spack.relocate.``needs_binary_relocation`(*filetype*, *os_id=None*)[¶](#spack.relocate.needs_binary_relocation) Check whether the given filetype is a binary that may need relocation. `spack.relocate.``needs_text_relocation`(*filetype*)[¶](#spack.relocate.needs_text_relocation) Check whether the given filetype is text that may need relocation. 
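The placeholder technique used by these relocation helpers can be sketched in plain Python: the old install root is replaced by a same-length string of `@`'s so that embedded string offsets are preserved, and later swapped for the new root. The paths are illustrative, and real binaries need the patchelf/install_name_tool machinery above, not `str.replace`:

```python
# Hedged sketch of set_placeholder plus text relocation: same-length
# substitution keeps offsets stable inside binaries; here we only touch
# an in-memory string.
def set_placeholder(dirname):
    return "@" * len(dirname)

def relocate_text(text, old_dir, new_dir):
    return text.replace(old_dir, new_dir)

old = "/old/install/root"
placeholder = set_placeholder(old)            # same length as old
script = "#!%s/bin/python" % old
staged = relocate_text(script, old, placeholder)
final = relocate_text(staged, placeholder, "/new/root")
```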
`spack.relocate.``relocate_binary`(*path_names*, *old_dir*, *new_dir*, *allow_root*)[¶](#spack.relocate.relocate_binary) Change old_dir to new_dir in the RPATHs of ELF or Mach-O files, accounting for the case where old_dir is now a placeholder. `spack.relocate.``relocate_text`(*path_names*, *old_dir*, *new_dir*)[¶](#spack.relocate.relocate_text) Replace the old path with the new path in the text files path_names. `spack.relocate.``set_placeholder`(*dirname*)[¶](#spack.relocate.set_placeholder) Return a string of ‘@’ characters with the same length as dirname. `spack.relocate.``strings_contains_installroot`(*path_name*, *root_dir*)[¶](#spack.relocate.strings_contains_installroot) Check if the file contains the install root string. `spack.relocate.``substitute_rpath`(*orig_rpath*, *topdir*, *new_root_path*)[¶](#spack.relocate.substitute_rpath) Replace topdir with new_root_path in the RPATH list orig_rpath. ### spack.repo module[¶](#module-spack.repo) *exception* `spack.repo.``BadRepoError`(*message*, *long_message=None*)[¶](#spack.repo.BadRepoError) Bases: [`spack.repo.RepoError`](#spack.repo.RepoError) Raised when repo layout is invalid. *exception* `spack.repo.``FailedConstructorError`(*name*, *exc_type*, *exc_obj*, *exc_tb*)[¶](#spack.repo.FailedConstructorError) Bases: [`spack.repo.RepoError`](#spack.repo.RepoError) Raised when a package’s class constructor fails. *class* `spack.repo.``FastPackageChecker`(*packages_path*)[¶](#spack.repo.FastPackageChecker) Bases: `collections.abc.Mapping` Cache that maps package names to the stats obtained on the ‘package.py’ files associated with them. For each repository a cache is maintained at class level, and shared among all instances referring to it. Update of the global cache is done lazily during instance initialization. *exception* `spack.repo.``InvalidNamespaceError`(*message*, *long_message=None*)[¶](#spack.repo.InvalidNamespaceError) Bases: [`spack.repo.RepoError`](#spack.repo.RepoError) Raised when an invalid namespace is encountered. 
`spack.repo.``NOT_PROVIDED` *= <object object>*[¶](#spack.repo.NOT_PROVIDED) Guaranteed unused default value for some functions. *exception* `spack.repo.``NoRepoConfiguredError`(*message*, *long_message=None*)[¶](#spack.repo.NoRepoConfiguredError) Bases: [`spack.repo.RepoError`](#spack.repo.RepoError) Raised when there are no repositories configured. *class* `spack.repo.``Repo`(*root*, *namespace='spack.pkg'*)[¶](#spack.repo.Repo) Bases: `object` Class representing a package repository in the filesystem. Each package repository must have a top-level configuration file called repo.yaml. Currently, repo.yaml must define: namespace: A Python namespace where the repository’s packages should live. `all_package_names`()[¶](#spack.repo.Repo.all_package_names) Returns a sorted list of all package names in the Repo. `all_packages`()[¶](#spack.repo.Repo.all_packages) Iterator over all packages in the repository. Use this with care, because loading packages is slow. `dirname_for_package_name`(*spec_like*, **args*, ***kwargs*)[¶](#spack.repo.Repo.dirname_for_package_name) `dump_provenance`(*spec_like*, **args*, ***kwargs*)[¶](#spack.repo.Repo.dump_provenance) `exists`(*pkg_name*)[¶](#spack.repo.Repo.exists) Whether a package with the supplied name exists. `extensions_for`(*spec_like*, **args*, ***kwargs*)[¶](#spack.repo.Repo.extensions_for) `filename_for_package_name`(*spec_like*, **args*, ***kwargs*)[¶](#spack.repo.Repo.filename_for_package_name) `find_module`(*fullname*, *path=None*)[¶](#spack.repo.Repo.find_module) Python find_module import hook. Returns this Repo if it can load the module; None if not. `get`(*spec_like*, **args*, ***kwargs*)[¶](#spack.repo.Repo.get) `get_pkg_class`(*pkg_name*)[¶](#spack.repo.Repo.get_pkg_class) Get the class for the package out of its module. First loads (or fetches from cache) a module for the package. Then extracts the package class from the module according to Spack’s naming convention. 
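The naming convention mentioned for `get_pkg_class` maps a package name to a CamelCase class name. The helper below is an illustrative sketch of that convention as we understand it (Spack's real helper lives elsewhere, in its naming utilities), including the leading-underscore rule for names that start with a digit:

```python
import re


def pkg_name_to_class_name(pkg_name):
    """Sketch of the package-name to class-name convention:
    CamelCase at '-', '_' and '.' boundaries, and prefix '_'
    if the result would start with a digit."""
    parts = re.split(r'[-_.]', pkg_name)
    class_name = ''.join(part.capitalize() for part in parts if part)
    if class_name and class_name[0].isdigit():
        class_name = '_' + class_name
    return class_name
```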
`is_prefix`(*fullname*)[¶](#spack.repo.Repo.is_prefix) True if fullname is a prefix of this Repo’s namespace. `is_virtual`(*pkg_name*)[¶](#spack.repo.Repo.is_virtual) True if the package with this name is virtual, False otherwise. `load_module`(*fullname*)[¶](#spack.repo.Repo.load_module) Python importer load hook. Tries to load the module; raises an ImportError if it can’t. `packages_with_tags`(**tags*)[¶](#spack.repo.Repo.packages_with_tags) `provider_index`[¶](#spack.repo.Repo.provider_index) A provider index with names *specific* to this repo. `providers_for`(*spec_like*, **args*, ***kwargs*)[¶](#spack.repo.Repo.providers_for) `purge`()[¶](#spack.repo.Repo.purge) Clear the entire package instance cache. `real_name`(*import_name*)[¶](#spack.repo.Repo.real_name) Allow users to import Spack packages using Python identifiers. A Python identifier might map to many different Spack package names due to hyphen/underscore ambiguity. Easy example: num3proxy -> 3proxy Ambiguous: foo_bar -> foo_bar, foo-bar More ambiguous: foo_bar_baz -> foo_bar_baz, foo-bar-baz, foo_bar-baz, foo-bar_baz `tag_index`[¶](#spack.repo.Repo.tag_index) A tag index with names *specific* to this repo. *exception* `spack.repo.``RepoError`(*message*, *long_message=None*)[¶](#spack.repo.RepoError) Bases: [`spack.error.SpackError`](#spack.error.SpackError) Superclass for repository-related errors. *class* `spack.repo.``RepoPath`(**repos*, ***kwargs*)[¶](#spack.repo.RepoPath) Bases: `object` A RepoPath is a list of repos that function as one. It functions exactly like a Repo, but it operates on the combined results of the Repos in its list instead of on a single package repository. 
| Parameters: | **repos** (*list*) – a list of Repo objects or paths to put in this RepoPath | Optional Args: repo_namespace (str): super-namespace for all packages in this RepoPath (used when importing repos as modules) `all_package_names`()[¶](#spack.repo.RepoPath.all_package_names) Return all unique package names in all repositories. `all_packages`()[¶](#spack.repo.RepoPath.all_packages) `dirname_for_package_name`(*pkg_name*)[¶](#spack.repo.RepoPath.dirname_for_package_name) `dump_provenance`(*spec_like*, **args*, ***kwargs*)[¶](#spack.repo.RepoPath.dump_provenance) `exists`(*pkg_name*)[¶](#spack.repo.RepoPath.exists) Whether a package with the given name exists in the path’s repos. Note that virtual packages do not “exist”. `extensions_for`(*spec_like*, **args*, ***kwargs*)[¶](#spack.repo.RepoPath.extensions_for) `filename_for_package_name`(*pkg_name*)[¶](#spack.repo.RepoPath.filename_for_package_name) `find_module`(*fullname*, *path=None*)[¶](#spack.repo.RepoPath.find_module) Implements precedence for overlaid namespaces. The loop checks each namespace in self.repos for packages, and also handles loading empty containing namespaces. `first_repo`()[¶](#spack.repo.RepoPath.first_repo) Get the first repo in precedence order. `get`(*spec_like*, **args*, ***kwargs*)[¶](#spack.repo.RepoPath.get) `get_pkg_class`(*pkg_name*)[¶](#spack.repo.RepoPath.get_pkg_class) Find a class for the spec’s package and return the class object. `get_repo`(*namespace*, *default=<object object>*)[¶](#spack.repo.RepoPath.get_repo) Get a repository by namespace. | Parameters: | **namespace** – Look up this namespace in the RepoPath, and return it if found. | Optional Arguments: > default: > > > If default is provided, return it when the namespace > > isn’t found. If not, raise an UnknownNamespaceError. `is_virtual`(*pkg_name*)[¶](#spack.repo.RepoPath.is_virtual) True if the package with this name is virtual, False otherwise. 
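The `default=<object object>` in `get_repo`'s signature is the NOT_PROVIDED sentinel documented above: a plain `object()` that no caller could accidentally pass, so `None` remains a usable default. A sketch of the pattern (a dict stands in for the RepoPath's repo list; the names are illustrative):

```python
NOT_PROVIDED = object()  # guaranteed-unused default, as in spack.repo


def get_repo(repos, namespace, default=NOT_PROVIDED):
    """Look up a repo by namespace; fall back to default only
    if the caller explicitly supplied one."""
    if namespace in repos:
        return repos[namespace]
    if default is NOT_PROVIDED:
        # Stands in for UnknownNamespaceError in the real code
        raise KeyError('unknown namespace: %s' % namespace)
    return default
```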
`load_module`(*fullname*)[¶](#spack.repo.RepoPath.load_module) Handles loading container namespaces when necessary. See `Repo` for how actual package modules are loaded. `packages_with_tags`(**tags*)[¶](#spack.repo.RepoPath.packages_with_tags) `provider_index`[¶](#spack.repo.RepoPath.provider_index) Merged ProviderIndex from all Repos in the RepoPath. `providers_for`(*spec_like*, **args*, ***kwargs*)[¶](#spack.repo.RepoPath.providers_for) `put_first`(*repo*)[¶](#spack.repo.RepoPath.put_first) Add repo first in the search path. `put_last`(*repo*)[¶](#spack.repo.RepoPath.put_last) Add repo last in the search path. `remove`(*repo*)[¶](#spack.repo.RepoPath.remove) Remove a repo from the search path. `repo_for_pkg`(*spec*)[¶](#spack.repo.RepoPath.repo_for_pkg) Given a spec, get the repository for its package. *class* `spack.repo.``SpackNamespace`(*namespace*)[¶](#spack.repo.SpackNamespace) Bases: `module` Allow lazy loading of modules. *class* `spack.repo.``TagIndex`[¶](#spack.repo.TagIndex) Bases: `collections.abc.Mapping` Maps tags to list of packages. *static* `from_json`()[¶](#spack.repo.TagIndex.from_json) `to_json`(*stream*)[¶](#spack.repo.TagIndex.to_json) `update_package`(*pkg_name*)[¶](#spack.repo.TagIndex.update_package) Updates a package in the tag index. | Parameters: | **pkg_name** (*str*) – name of the package to be updated in the index | *exception* `spack.repo.``UnknownEntityError`(*message*, *long_message=None*)[¶](#spack.repo.UnknownEntityError) Bases: [`spack.repo.RepoError`](#spack.repo.RepoError) Raised when we encounter a package spack doesn’t have. 
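`TagIndex` above is a `Mapping` from tags to package lists. A toy version of the idea (the `tags` argument here is hypothetical — the real `update_package` takes only `pkg_name` and reads tags from the package class):

```python
import collections.abc


class ToyTagIndex(collections.abc.Mapping):
    """Minimal mapping from tag names to lists of package names."""

    def __init__(self):
        self._tag_dict = collections.defaultdict(list)

    def __getitem__(self, tag):
        return self._tag_dict[tag]

    def __iter__(self):
        return iter(self._tag_dict)

    def __len__(self):
        return len(self._tag_dict)

    def update_package(self, pkg_name, tags):
        # Drop stale entries for the package, then re-add it
        # under each of its current tags.
        for pkgs in self._tag_dict.values():
            if pkg_name in pkgs:
                pkgs.remove(pkg_name)
        for tag in tags:
            self._tag_dict[tag].append(pkg_name)
```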
*exception* `spack.repo.``UnknownNamespaceError`(*namespace*)[¶](#spack.repo.UnknownNamespaceError) Bases: [`spack.repo.UnknownEntityError`](#spack.repo.UnknownEntityError) Raised when we encounter an unknown namespace *exception* `spack.repo.``UnknownPackageError`(*name*, *repo=None*)[¶](#spack.repo.UnknownPackageError) Bases: [`spack.repo.UnknownEntityError`](#spack.repo.UnknownEntityError) Raised when we encounter a package spack doesn’t have. `spack.repo.``all_package_names`()[¶](#spack.repo.all_package_names) Convenience wrapper around `spack.repo.all_package_names()`. `spack.repo.``create_or_construct`(*path*, *namespace=None*)[¶](#spack.repo.create_or_construct) Create a repository, or just return a Repo if it already exists. `spack.repo.``create_repo`(*root*, *namespace=None*)[¶](#spack.repo.create_repo) Create a new repository in root with the specified namespace. If the namespace is not provided, use basename of root. Return the canonicalized path and namespace of the created repository. `spack.repo.``get`(*spec*)[¶](#spack.repo.get) Convenience wrapper around `spack.repo.get()`. `spack.repo.``path` *= <spack.repo.RepoPath object>*[¶](#spack.repo.path) Singleton repo path instance `spack.repo.``repo_namespace` *= 'spack.pkg'*[¶](#spack.repo.repo_namespace) Super-namespace for all packages. Package modules are imported as spack.pkg.<namespace>.<pkg-name>. `spack.repo.``set_path`(*repo*)[¶](#spack.repo.set_path) Set the path singleton to a specific value. Overwrite `path` and register it as an importer in `sys.meta_path` if it is a `Repo` or `RepoPath`. `spack.repo.``swap`(*repo_path*)[¶](#spack.repo.swap) Temporarily use another RepoPath. 
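The hyphen/underscore ambiguity that `Repo.real_name` resolves can be illustrated with a small candidate generator. This is a sketch: `possible_spack_names` is a hypothetical helper, not part of Spack, and it does not cover the num-prefix trick for names starting with digits (num3proxy -> 3proxy):

```python
import itertools


def possible_spack_names(import_name):
    """All package names a Python identifier might map to: each '_'
    in the import name may stand for either '_' or '-'."""
    parts = import_name.split('_')
    if len(parts) == 1:
        return [import_name]
    candidates = []
    for seps in itertools.product('_-', repeat=len(parts) - 1):
        name = parts[0]
        for sep, part in zip(seps, parts[1:]):
            name += sep + part
        candidates.append(name)
    return candidates
```

For `foo_bar_baz` this yields the four candidates listed in the docstring above.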
### spack.report module[¶](#module-spack.report) Tools to produce reports of spec installations `spack.report.``valid_formats` *= ['junit', None, 'cdash']*[¶](#spack.report.valid_formats) Allowed report formats *class* `spack.report.``collect_info`(*format_name*, *install_command*, *cdash_upload_url*)[¶](#spack.report.collect_info) Bases: `object` Collects information to build a report while installing and dumps it on exit. If the format name is not `None`, this context manager decorates PackageBase.do_install when entering the context and unrolls the change when exiting. Within the context, only the specs that are passed to it on initialization will be recorded for the report. Data from other specs will be discarded. Examples ``` # The file 'junit.xml' is written when exiting # the context specs = [Spec('hdf5').concretized()] with collect_info(specs, 'junit', 'junit.xml'): # A report will be generated for these specs... for spec in specs: spec.do_install() # ...but not for this one Spec('zlib').concretized().do_install() ``` | Parameters: | * **format_name** (*str* *or* *None*) – one of the supported formats * **install_command** (*str*) – the command line passed to spack * **cdash_upload_url** (*str* *or* *None*) – where to upload the report | | Raises: | `ValueError` – when `format_name` is not in `valid_formats` | `concretization_report`(*msg*)[¶](#spack.report.collect_info.concretization_report) ### spack.reporter module[¶](#module-spack.reporter) *class* `spack.reporter.``Reporter`(*install_command*, *cdash_upload_url*)[¶](#spack.reporter.Reporter) Bases: `object` Base class for report writers. `build_report`(*filename*, *report_data*)[¶](#spack.reporter.Reporter.build_report) `concretization_report`(*filename*, *msg*)[¶](#spack.reporter.Reporter.concretization_report) ### spack.resource module[¶](#module-spack.resource) Describes an optional resource needed for a build. 
Typically a bunch of sources that can be built in-tree within another package to enable optional features. *class* `spack.resource.``Resource`(*name*, *fetcher*, *destination*, *placement*)[¶](#spack.resource.Resource) Bases: `object` Represents an optional resource to be fetched by a package. Aggregates a name, a fetcher, a destination and a placement. ### spack.spec module[¶](#module-spack.spec) Spack allows very fine-grained control over how packages are installed and over how they are built and configured. To make this easy, it has its own syntax for declaring a dependence. We call a descriptor of a particular package configuration a “spec”. The syntax looks like this: ``` $ spack install mpileaks ^openmpi @1.2:1.4 +debug %intel @12.1 =bgqos_0 0 1 2 3 4 5 6 ``` The first part of this is the command, ‘spack install’. The rest of the line is a spec for a particular installation of the mpileaks package. 0. The package to install 1. A dependency of the package, prefixed by ^ 2. A version descriptor for the package. This can either be a specific version, like “1.2”, or it can be a range of versions, e.g. “1.2:1.4”. If multiple specific versions or multiple ranges are acceptable, they can be separated by commas, e.g. if a package will only build with versions 1.0, 1.2-1.4, and 1.6-1.8 of mvapich, you could say: > depends_on(“mvapich@1.0,1.2:1.4,1.6:1.8”) > 3. A compile-time variant of the package. If you need openmpi to be built in debug mode for your package to work, you can require it by adding +debug to the openmpi spec when you depend on it. If you do NOT want the debug option to be enabled, then replace this with -debug. 4. The name of the compiler to build with. 5. The versions of the compiler to build with. Note that the identifier for a compiler version is the same ‘@’ that is used for a package version. A version list denoted by ‘@’ is associated with the compiler only if it comes immediately after the compiler name. 
Otherwise it will be associated with the current package spec. 6. The architecture to build with. This is needed on machines where cross-compilation is required. Here is the EBNF grammar for a spec: ``` spec-list = { spec [ dep-list ] } dep_list = { ^ spec } spec = id [ options ] options = { @version-list | +variant | -variant | ~variant | %compiler | arch=architecture | [ flag ]=value} flag = { cflags | cxxflags | fcflags | fflags | cppflags | ldflags | ldlibs } variant = id architecture = id compiler = id [ version-list ] version-list = version [ { , version } ] version = id | id: | :id | id:id id = [A-Za-z0-9_][A-Za-z0-9_.-]* ``` Identifiers using the <name>=<value> command, such as architectures and compiler flags, require a space before the name. There is one context-sensitive part: ids in versions may contain ‘.’, while other ids may not. There is one ambiguity: since ‘-‘ is allowed in an id, you need to put whitespace before -variant for it to be tokenized properly. You can either use whitespace, or you can just use ~variant since it means the same thing. Spack uses ~variant in directory names and in the canonical form of specs to avoid ambiguity. Both are provided because ~ can cause shell expansion when it is the first character in an id typed on the command line. *class* `spack.spec.``Spec`(*spec_like*, ***kwargs*)[¶](#spack.spec.Spec) Bases: `object` `cformat`(**args*, ***kwargs*)[¶](#spack.spec.Spec.cformat) Same as format, but color defaults to auto instead of False. `colorized`()[¶](#spack.spec.Spec.colorized) `common_dependencies`(*other*)[¶](#spack.spec.Spec.common_dependencies) Return names of dependencies that self and other have in common. `concrete`[¶](#spack.spec.Spec.concrete) A spec is concrete if it describes a single build of a package. More formally, a spec is concrete if concretize() has been called on it and it has been marked _concrete. Concrete specs either can be or have been built. 
All constraints have been resolved, optional dependencies have been added or removed, a compiler has been chosen, and all variants have values. `concretize`(*tests=False*)[¶](#spack.spec.Spec.concretize) A spec is concrete if it describes one build of a package uniquely. This will ensure that this spec is concrete. | Parameters: | **tests** (*list* *or* *bool*) – list of packages that will need test dependencies, or True/False for test all/none | If this spec could describe more than one version, variant, or build of a package, this will add constraints to make it concrete. Some rigorous validation and checks are also performed on the spec. Concretizing ensures that it is self-consistent and that it’s consistent with requirements of its packages. See flatten() and normalize() for more details on this. `concretized`()[¶](#spack.spec.Spec.concretized) This is a non-destructive version of concretize(). First clones, then returns a concrete version of this package without modifying this package. `constrain`(*other*, *deps=True*)[¶](#spack.spec.Spec.constrain) Merge the constraints of other with self. Returns True if the spec changed as a result, False if not. `constrained`(*other*, *deps=True*)[¶](#spack.spec.Spec.constrained) Return a constrained copy without modifying this spec. `copy`(*deps=True*, ***kwargs*)[¶](#spack.spec.Spec.copy) Make a copy of this spec. | Parameters: | * **deps** (*bool* *or* *tuple*) – Defaults to True. If boolean, controls whether dependencies are copied (copied if True). If a tuple is provided, *only* dependencies of types matching those in the tuple are copied. * **kwargs** – additional arguments for internal use (passed to `_dup`). | | Returns: | A copy of this spec. 
| Examples Deep copy with dependencies: ``` spec.copy() spec.copy(deps=True) ``` Shallow copy (no dependencies): ``` spec.copy(deps=False) ``` Only build and run dependencies: ``` deps=('build', 'run'): ``` `cshort_spec`[¶](#spack.spec.Spec.cshort_spec) Returns an auto-colorized version of `self.short_spec`. `dag_hash`(*length=None*)[¶](#spack.spec.Spec.dag_hash) Return a hash of the entire spec DAG, including connectivity. `dag_hash_bit_prefix`(*bits*)[¶](#spack.spec.Spec.dag_hash_bit_prefix) Get the first <bits> bits of the DAG hash as an integer type. `dep_difference`(*other*)[¶](#spack.spec.Spec.dep_difference) Returns dependencies in self that are not in other. `dep_string`()[¶](#spack.spec.Spec.dep_string) `dependencies`(*deptype='all'*)[¶](#spack.spec.Spec.dependencies) `dependencies_dict`(*deptype='all'*)[¶](#spack.spec.Spec.dependencies_dict) *static* `dependencies_from_node_dict`()[¶](#spack.spec.Spec.dependencies_from_node_dict) `dependents`(*deptype='all'*)[¶](#spack.spec.Spec.dependents) `dependents_dict`(*deptype='all'*)[¶](#spack.spec.Spec.dependents_dict) `eq_dag`(*other*, *deptypes=True*)[¶](#spack.spec.Spec.eq_dag) True if the full dependency DAGs of specs are equal. `eq_node`(*other*)[¶](#spack.spec.Spec.eq_node) Equality with another spec, not including dependencies. `external`[¶](#spack.spec.Spec.external) `flat_dependencies`(***kwargs*)[¶](#spack.spec.Spec.flat_dependencies) Return a DependencyMap containing all of this spec’s dependencies with their constraints merged. If copy is True, returns merged copies of its dependencies without modifying the spec it’s called on. If copy is False, clears this spec’s dependencies and returns them. `format`(*format_string='$_$@$%@+$+$='*, ***kwargs*)[¶](#spack.spec.Spec.format) Prints out particular pieces of a spec, depending on what is in the format string. The format strings you can provide are: ``` $_ Package name $. 
Full package name (with namespace) $@ Version with '@' prefix $% Compiler with '%' prefix $%@ Compiler with '%' prefix & compiler version with '@' prefix $%+ Compiler with '%' prefix & compiler flags prefixed by name $%@+ Compiler, compiler version, and compiler flags with same prefixes as above $+ Options $= Architecture prefixed by 'arch=' $/ 7-char prefix of DAG hash with '-' prefix $$ $ ``` You can also use full-string versions, which elide the prefixes: ``` ${PACKAGE} Package name ${FULLPACKAGE} Full package name (with namespace) ${VERSION} Version ${COMPILER} Full compiler string ${COMPILERNAME} Compiler name ${COMPILERVER} Compiler version ${COMPILERFLAGS} Compiler flags ${OPTIONS} Options ${ARCHITECTURE} Architecture ${PLATFORM} Platform ${OS} Operating System ${TARGET} Target ${SHA1} Dependencies 8-char sha1 prefix ${HASH:len} DAG hash with optional length specifier ${SPACK_ROOT} The spack root directory ${SPACK_INSTALL} The default spack install directory, ${SPACK_PREFIX}/opt ${PREFIX} The package prefix ${NAMESPACE} The package namespace ``` Note these are case-insensitive: for example you can specify either `${PACKAGE}` or `${package}`. Optionally you can provide a width, e.g. `$20_` for a 20-wide name. Like printf, you can provide ‘-‘ for left justification, e.g. `$-20_` for a left-justified name. Anything else is copied verbatim into the output stream. | Parameters: | **format_string** (*str*) – string containing the format to be expanded | | Keyword Arguments: | | | * **color** (*bool*) – True if returned string is colored * **transform** (*dict*) – maps full-string formats to a callable that accepts a string and returns another one | Examples The following line: ``` s = spec.format('$_$@$+') ``` translates to the name, version, and options of the package, but no dependencies, arch, or compiler. TODO: allow, e.g., `$6#` to customize short hash length TODO: allow, e.g., `$//` for full hash. 
*static* `from_dict`()[¶](#spack.spec.Spec.from_dict) Construct a spec from YAML. Parameters: data – a nested dict/list data structure read from YAML or JSON. *static* `from_json`()[¶](#spack.spec.Spec.from_json) Construct a spec from JSON. Parameters: stream – string or file object to read from. *static* `from_literal`(*normal=True*)[¶](#spack.spec.Spec.from_literal) Builds a Spec from a dictionary containing the spec literal. The dictionary must have a single top level key, representing the root, and as many secondary level keys as needed in the spec. The keys can be either a string or a Spec or a tuple containing the Spec and the dependency types. | Parameters: | * **spec_dict** (*dict*) – the dictionary containing the spec literal * **normal** (*bool*) – if True the same key appearing at different levels of the `spec_dict` will map to the same object in memory. | Examples A simple spec `foo` with no dependencies: ``` {'foo': None} ``` A spec `foo` with a `(build, link)` dependency `bar`: ``` {'foo': {'bar:build,link': None}} ``` A spec with a diamond dependency and various build types: ``` {'dt-diamond': { 'dt-diamond-left:build,link': { 'dt-diamond-bottom:build': None }, 'dt-diamond-right:build,link': { 'dt-diamond-bottom:build,link,run': None } }} ``` The same spec with a double copy of `dt-diamond-bottom` and no diamond structure: ``` {'dt-diamond': { 'dt-diamond-left:build,link': { 'dt-diamond-bottom:build': None }, 'dt-diamond-right:build,link': { 'dt-diamond-bottom:build,link,run': None } }, normal=False} ``` Constructing a spec using a Spec object as key: ``` mpich = Spec('mpich') libelf = Spec('libelf@1.8.11') expected_normalized = Spec.from_literal({ 'mpileaks': { 'callpath': { 'dyninst': { 'libdwarf': {libelf: None}, libelf: None }, mpich: None }, mpich: None }, }) ``` *static* `from_node_dict`()[¶](#spack.spec.Spec.from_node_dict) *static* `from_yaml`()[¶](#spack.spec.Spec.from_yaml) Construct a spec from YAML. 
Parameters: stream – string or file object to read from. `full_hash`(*length=None*)[¶](#spack.spec.Spec.full_hash) `fullname`[¶](#spack.spec.Spec.fullname) `get_dependency`(*name*)[¶](#spack.spec.Spec.get_dependency) `index`(*deptype='all'*)[¶](#spack.spec.Spec.index) Return DependencyMap that points to all the dependencies in this spec. *static* `is_virtual`()[¶](#spack.spec.Spec.is_virtual) Test if a name is virtual without requiring a Spec. `ne_dag`(*other*, *deptypes=True*)[¶](#spack.spec.Spec.ne_dag) True if the full dependency DAGs of specs are not equal. `ne_node`(*other*)[¶](#spack.spec.Spec.ne_node) Inequality with another spec, not including dependencies. `normalize`(*force=False*, *tests=False*, *user_spec_deps=None*)[¶](#spack.spec.Spec.normalize) When specs are parsed, any dependencies specified are hanging off the root, and ONLY the ones that were explicitly provided are there. Normalization turns a partial flat spec into a DAG, where: 1. Known dependencies of the root package are in the DAG. 2. Each node’s dependencies dict only contains its known direct deps. 3. There is only ONE unique spec for each package in the DAG. * This includes virtual packages. If there is a non-virtual package that provides a virtual package that is in the spec, then we replace the virtual package with the non-virtual one. TODO: normalize should probably implement some form of cycle detection, to ensure that the spec is actually a DAG. `normalized`()[¶](#spack.spec.Spec.normalized) Return a normalized copy of this spec without modifying this spec. `package`[¶](#spack.spec.Spec.package) `package_class`[¶](#spack.spec.Spec.package_class) Internal package call gets only the class object for a package. Use this to just get package metadata. `patches`[¶](#spack.spec.Spec.patches) Return patch objects for any patch sha256 sums on this Spec. This is for use after concretization to iterate over any patches associated with this spec. 
TODO: this only checks in the package; it doesn’t resurrect old patches from install directories, but it probably should. `prefix`[¶](#spack.spec.Spec.prefix) *static* `read_yaml_dep_specs`()[¶](#spack.spec.Spec.read_yaml_dep_specs) Read the DependencySpec portion of a YAML-formatted Spec. This needs to be backward-compatible with older spack spec formats so that reindex will work on old specs/databases. `root`[¶](#spack.spec.Spec.root) Follow dependent links and find the root of this spec’s DAG. Spack specs have a single root (the package being installed). `satisfies`(*other*, *deps=True*, *strict=False*, *strict_deps=False*)[¶](#spack.spec.Spec.satisfies) Determine if this spec satisfies all constraints of another. There are two senses for satisfies: > * loose (default): the absence of a constraint in self > implies that it *could* be satisfied by other, so we only > check that there are no conflicts with other for > constraints that this spec actually has. > * strict: strict means that we *must* meet all the > constraints specified on other. `satisfies_dependencies`(*other*, *strict=False*)[¶](#spack.spec.Spec.satisfies_dependencies) This checks constraints on common dependencies against each other. `short_spec`[¶](#spack.spec.Spec.short_spec) Returns a version of the spec with the dependencies hashed instead of completely enumerated. `sorted_deps`()[¶](#spack.spec.Spec.sorted_deps) Return a list of all dependencies sorted by name. `to_dict`(*all_deps=False*)[¶](#spack.spec.Spec.to_dict) `to_json`(*stream=None*)[¶](#spack.spec.Spec.to_json) `to_node_dict`(*hash_function=None*, *all_deps=False*)[¶](#spack.spec.Spec.to_node_dict) `to_yaml`(*stream=None*)[¶](#spack.spec.Spec.to_yaml) `traverse`(***kwargs*)[¶](#spack.spec.Spec.traverse) `traverse_edges`(*visited=None*, *d=0*, *deptype='all'*, *dep_spec=None*, ***kwargs*)[¶](#spack.spec.Spec.traverse_edges) Generic traversal of the DAG represented by this spec. This will yield each node in the spec. 
Options:

* order [=pre|post] – Order to traverse spec nodes. Defaults to pre-order traversal. Options are: ‘pre’: pre-order traversal; each node is yielded before its children in the dependency DAG. ‘post’: post-order traversal; each node is yielded after its children in the dependency DAG.
* cover [=nodes|edges|paths] – Determines how extensively to cover the DAG. Possible values: ‘nodes’: visit each node in the DAG only once; every node yielded by this function will be unique. ‘edges’: if a node has been visited once but is reached along a new path from the root, yield it but do not descend into it; this traverses each ‘edge’ in the DAG once. ‘paths’: explore every unique path reachable from the root; this descends into visited subtrees and will yield nodes twice if they’re reachable by multiple paths.
* depth [=False] – Defaults to False. When True, yields not just nodes in the spec, but also their depth from the root in a (depth, node) tuple.
* key [=id] – Allow a custom key function to track the identity of nodes in the traversal.
* root [=True] – If False, this won’t yield the root node, just its descendants.
* direction [=children|parents] – If ‘children’, does a traversal of this spec’s children. If ‘parents’, traverses upwards in the DAG towards the root.

`tree`(***kwargs*)[¶](#spack.spec.Spec.tree) Prints out this spec and its dependencies, tree-formatted with indentation. `validate_or_raise`()[¶](#spack.spec.Spec.validate_or_raise) Checks that names and values in this spec are real. If they’re not, it will raise an appropriate exception. `version`[¶](#spack.spec.Spec.version) `virtual`[¶](#spack.spec.Spec.virtual) Right now, a spec is virtual if no package exists with its name. TODO: revisit this – might need to use a separate namespace and be more explicit about this. Possible idea: just use convention and make virtual deps all caps, e.g., MPI vs mpi. `virtual_dependencies`()[¶](#spack.spec.Spec.virtual_dependencies) Return list of any virtual deps in this spec. 
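The three `cover` modes of `traverse_edges` can be illustrated with a toy pre-order traversal over a plain dict-based DAG (a sketch, not Spack's implementation):

```python
def traverse(graph, root, cover='nodes', _visited=None):
    """Pre-order DAG traversal illustrating the three cover modes.

    graph maps each node to a list of its children.
    """
    if _visited is None:
        _visited = set()
    seen = root in _visited
    _visited.add(root)
    if cover == 'nodes' and seen:
        return  # visit each node at most once
    yield root
    if cover == 'edges' and seen:
        return  # yield the revisited node, but do not descend into it
    # 'paths' mode falls through: descend even into visited subtrees
    for child in graph.get(root, []):
        yield from traverse(graph, child, cover, _visited)
```

On a diamond DAG (root depends on a and b, which both depend on c, which depends on d), ‘nodes’ yields each package once, ‘edges’ additionally yields c when it is reached again through b, and ‘paths’ also re-descends into c’s subtree.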
`spack.spec.``parse`(*string*)[¶](#spack.spec.parse) Returns a list of specs from an input string. For creating one spec, see the Spec() constructor. `spack.spec.``parse_anonymous_spec`(*spec_like*, *pkg_name*)[¶](#spack.spec.parse_anonymous_spec) Allow the user to omit the package name part of a spec if they know what it has to be already. e.g., provides(‘mpi@2’, when=’@1.9:’) says that this package provides MPI-3 when its version is higher than 1.9. *exception* `spack.spec.``SpecError`(*message*, *long_message=None*)[¶](#spack.spec.SpecError) Bases: [`spack.error.SpackError`](#spack.error.SpackError) Superclass for all errors that occur while constructing specs. *exception* `spack.spec.``SpecParseError`(*parse_error*)[¶](#spack.spec.SpecParseError) Bases: [`spack.error.SpecError`](#spack.error.SpecError) Wrapper for ParseError for when we’re parsing specs. *exception* `spack.spec.``DuplicateDependencyError`(*message*, *long_message=None*)[¶](#spack.spec.DuplicateDependencyError) Bases: [`spack.error.SpecError`](#spack.error.SpecError) Raised when the same dependency occurs in a spec twice. *exception* `spack.spec.``DuplicateVariantError`(*message*, *long_message=None*)[¶](#spack.spec.DuplicateVariantError) Bases: [`spack.error.SpecError`](#spack.error.SpecError) Raised when the same variant occurs in a spec twice. *exception* `spack.spec.``DuplicateCompilerSpecError`(*message*, *long_message=None*)[¶](#spack.spec.DuplicateCompilerSpecError) Bases: [`spack.error.SpecError`](#spack.error.SpecError) Raised when the same compiler occurs in a spec twice. *exception* `spack.spec.``UnsupportedCompilerError`(*compiler_name*)[¶](#spack.spec.UnsupportedCompilerError) Bases: [`spack.error.SpecError`](#spack.error.SpecError) Raised when the user asks for a compiler spack doesn’t know about. 
*exception* `spack.spec.``UnknownVariantError`(*pkg*, *variant*)[¶](#spack.spec.UnknownVariantError) Bases: [`spack.error.SpecError`](#spack.error.SpecError) Raised when an unknown variant occurs in a spec. *exception* `spack.spec.``DuplicateArchitectureError`(*message*, *long_message=None*)[¶](#spack.spec.DuplicateArchitectureError) Bases: [`spack.error.SpecError`](#spack.error.SpecError) Raised when the same architecture occurs in a spec twice. *exception* `spack.spec.``InconsistentSpecError`(*message*, *long_message=None*)[¶](#spack.spec.InconsistentSpecError) Bases: [`spack.error.SpecError`](#spack.error.SpecError) Raised when two nodes in the same spec DAG have inconsistent constraints. *exception* `spack.spec.``InvalidDependencyError`(*message*, *long_message=None*)[¶](#spack.spec.InvalidDependencyError) Bases: [`spack.error.SpecError`](#spack.error.SpecError) Raised when a dependency in a spec is not actually a dependency of the package. *exception* `spack.spec.``NoProviderError`(*vpkg*)[¶](#spack.spec.NoProviderError) Bases: [`spack.error.SpecError`](#spack.error.SpecError) Raised when there is no package that provides a particular virtual dependency. *exception* `spack.spec.``MultipleProviderError`(*vpkg*, *providers*)[¶](#spack.spec.MultipleProviderError) Bases: [`spack.error.SpecError`](#spack.error.SpecError) Raised when more than one package provides a particular virtual dependency. *exception* `spack.spec.``UnsatisfiableSpecError`(*provided*, *required*, *constraint_type*)[¶](#spack.spec.UnsatisfiableSpecError) Bases: [`spack.error.SpecError`](#spack.error.SpecError) Raised when a spec conflicts with package constraints. Provide the requirement that was violated when raising. *exception* `spack.spec.``UnsatisfiableSpecNameError`(*provided*, *required*)[¶](#spack.spec.UnsatisfiableSpecNameError) Bases: [`spack.error.UnsatisfiableSpecError`](#spack.error.UnsatisfiableSpecError) Raised when two specs aren’t even for the same package. 
*exception* `spack.spec.UnsatisfiableVersionSpecError`(*provided*, *required*)[¶](#spack.spec.UnsatisfiableVersionSpecError)
Bases: [`spack.error.UnsatisfiableSpecError`](#spack.error.UnsatisfiableSpecError)
Raised when a spec version conflicts with package constraints.

*exception* `spack.spec.UnsatisfiableCompilerSpecError`(*provided*, *required*)[¶](#spack.spec.UnsatisfiableCompilerSpecError)
Bases: [`spack.error.UnsatisfiableSpecError`](#spack.error.UnsatisfiableSpecError)
Raised when a spec compiler conflicts with package constraints.

*exception* `spack.spec.UnsatisfiableVariantSpecError`(*provided*, *required*)[¶](#spack.spec.UnsatisfiableVariantSpecError)
Bases: [`spack.error.UnsatisfiableSpecError`](#spack.error.UnsatisfiableSpecError)
Raised when a spec variant conflicts with package constraints.

*exception* `spack.spec.UnsatisfiableCompilerFlagSpecError`(*provided*, *required*)[¶](#spack.spec.UnsatisfiableCompilerFlagSpecError)
Bases: [`spack.error.UnsatisfiableSpecError`](#spack.error.UnsatisfiableSpecError)
Raised when a spec compiler flag conflicts with package constraints.

*exception* `spack.spec.UnsatisfiableArchitectureSpecError`(*provided*, *required*)[¶](#spack.spec.UnsatisfiableArchitectureSpecError)
Bases: [`spack.error.UnsatisfiableSpecError`](#spack.error.UnsatisfiableSpecError)
Raised when a spec architecture conflicts with package constraints.
*exception* `spack.spec.UnsatisfiableProviderSpecError`(*provided*, *required*)[¶](#spack.spec.UnsatisfiableProviderSpecError)
Bases: [`spack.error.UnsatisfiableSpecError`](#spack.error.UnsatisfiableSpecError)
Raised when a provider is supplied but constraints don't match a vpkg requirement.

*exception* `spack.spec.UnsatisfiableDependencySpecError`(*provided*, *required*)[¶](#spack.spec.UnsatisfiableDependencySpecError)
Bases: [`spack.error.UnsatisfiableSpecError`](#spack.error.UnsatisfiableSpecError)
Raised when a dependency of constrained specs is incompatible.

*exception* `spack.spec.AmbiguousHashError`(*msg*, **specs*)[¶](#spack.spec.AmbiguousHashError)
Bases: [`spack.error.SpecError`](#spack.error.SpecError)

*exception* `spack.spec.InvalidHashError`(*spec*, *hash*)[¶](#spack.spec.InvalidHashError)
Bases: [`spack.error.SpecError`](#spack.error.SpecError)

*exception* `spack.spec.NoSuchHashError`(*hash*)[¶](#spack.spec.NoSuchHashError)
Bases: [`spack.error.SpecError`](#spack.error.SpecError)

*exception* `spack.spec.RedundantSpecError`(*spec*, *addition*)[¶](#spack.spec.RedundantSpecError)
Bases: [`spack.error.SpecError`](#spack.error.SpecError)

### spack.stage module[¶](#module-spack.stage)

*class* `spack.stage.DIYStage`(*path*)[¶](#spack.stage.DIYStage)
Bases: `object`
Simple class that allows any directory to be a spack stage.
`cache_local`()[¶](#spack.stage.DIYStage.cache_local)

`check`()[¶](#spack.stage.DIYStage.check)

`create`()[¶](#spack.stage.DIYStage.create)

`destroy`()[¶](#spack.stage.DIYStage.destroy)

`expand_archive`()[¶](#spack.stage.DIYStage.expand_archive)

`fetch`(**args*, ***kwargs*)[¶](#spack.stage.DIYStage.fetch)

`restage`()[¶](#spack.stage.DIYStage.restage)

*class* `spack.stage.ResourceStage`(*url_or_fetch_strategy*, *root*, *resource*, ***kwargs*)[¶](#spack.stage.ResourceStage)
Bases: [`spack.stage.Stage`](#spack.stage.Stage)

`expand_archive`()[¶](#spack.stage.ResourceStage.expand_archive)
Changes to the stage directory and attempts to expand the downloaded archive. Fails if the stage is not set up or if the archive is not yet downloaded.

`restage`()[¶](#spack.stage.ResourceStage.restage)
Removes the expanded archive path if it exists, then re-expands the archive.

*exception* `spack.stage.RestageError`(*message*, *long_message=None*)[¶](#spack.stage.RestageError)
Bases: [`spack.stage.StageError`](#spack.stage.StageError)
Error encountered during restaging.

*class* `spack.stage.Stage`(*url_or_fetch_strategy*, *name=None*, *mirror_path=None*, *keep=False*, *path=None*, *lock=True*, *search_fn=None*)[¶](#spack.stage.Stage)
Bases: `object`
Manages a temporary stage directory for building.
A Stage object is a context manager that handles a directory where some source code is downloaded and built before being installed. It handles fetching the source code, either as an archive to be expanded or by checking it out of a repository.
A stage's lifecycle looks like this:

```
with Stage() as stage:      # Context manager creates and destroys the
                            # stage directory
    stage.fetch()           # Fetch a source archive into the stage.
    stage.expand_archive()  # Expand the source archive.
    <install>               # Build and install the archive.
                            # (handled by user of Stage)
```

When used as a context manager, the stage is automatically destroyed if no exception is raised by the context.
If an exception is raised, the stage is left in the filesystem and NOT destroyed, for potential reuse later.
You can also use the stage's create/destroy functions manually, like this:

```
stage = Stage()
try:
    stage.create()          # Explicitly create the stage directory.
    stage.fetch()           # Fetch a source archive into the stage.
    stage.expand_archive()  # Expand the source archive.
    <install>               # Build and install the archive.
                            # (handled by user of Stage)
finally:
    stage.destroy()         # Explicitly destroy the stage directory.
```

There are two kinds of stages: named and unnamed. Named stages can persist between runs of spack, e.g. if you fetched a tarball but didn't finish building it, you won't have to fetch it again. Unnamed stages are created using standard mkdtemp mechanisms or similar, and are intended to persist for only one run of spack.

`archive_file`[¶](#spack.stage.Stage.archive_file)
Path to the source archive within this stage directory.

`cache_local`()[¶](#spack.stage.Stage.cache_local)

`check`()[¶](#spack.stage.Stage.check)
Check the downloaded archive against a checksum digest. No-op if this stage checks code out of a repository.

`create`()[¶](#spack.stage.Stage.create)
Creates the stage directory. If get_tmp_root() is None, the stage directory is created directly under spack.paths.stage_path; otherwise this will attempt to create a stage in a temporary directory and link it into spack.paths.stage_path.

`destroy`()[¶](#spack.stage.Stage.destroy)
Removes this stage directory.

`expand_archive`()[¶](#spack.stage.Stage.expand_archive)
Changes to the stage directory and attempts to expand the downloaded archive. Fails if the stage is not set up or if the archive is not yet downloaded.

`expected_archive_files`[¶](#spack.stage.Stage.expected_archive_files)
Possible archive file paths.

`fetch`(*mirror_only=False*)[¶](#spack.stage.Stage.fetch)
Downloads an archive or checks out code from a repository.
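The keep-on-failure behavior described above can be illustrated with a minimal stand-in for `Stage`. The class `MiniStage` below is invented for this sketch; the real class also handles fetching, locking, mirrors, and named stages.

```python
import os
import shutil
import tempfile

class MiniStage(object):
    """Toy context manager mimicking Stage's create/destroy lifecycle.
    On a clean exit the directory is removed; on an exception it is
    left in the filesystem for potential reuse, as described above."""

    def __init__(self):
        self.path = None

    def create(self):
        # Unnamed stages use standard mkdtemp mechanisms.
        self.path = tempfile.mkdtemp(prefix='mini-stage-')

    def destroy(self):
        if self.path and os.path.isdir(self.path):
            shutil.rmtree(self.path)
        self.path = None

    def __enter__(self):
        self.create()
        return self

    def __exit__(self, exc_type, exc_value, traceback):
        if exc_type is None:   # keep the stage when the build failed
            self.destroy()
        return False           # never swallow the exception
```

With this sketch, `with MiniStage() as s: ...` removes the directory on success, while a raised exception leaves it behind for inspection.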
`restage`()[¶](#spack.stage.Stage.restage)
Removes the expanded archive path if it exists, then re-expands the archive.

`save_filename`[¶](#spack.stage.Stage.save_filename)

`source_path`[¶](#spack.stage.Stage.source_path)
Returns the path to the expanded/checked out source code.
To find the source code, this method searches for the first subdirectory of the stage that it can find, and returns it. This assumes nothing besides the archive file will be in the stage path, but it has the advantage that we don't need to know the name of the archive or its contents.
If the fetch strategy is not supposed to expand the downloaded file, it will just return the stage path. If the archive needs to be expanded, it will return None when no archive is found.

`stage_locks` *= {}*[¶](#spack.stage.Stage.stage_locks)

*exception* `spack.stage.StageError`(*message*, *long_message=None*)[¶](#spack.stage.StageError)
Bases: [`spack.error.SpackError`](#spack.error.SpackError)
Superclass for all errors encountered during staging.

`spack.stage.ensure_access`(*file='/home/docs/checkouts/readthedocs.org/user_builds/spack/checkouts/v0.12.1/var/spack/stage'*)[¶](#spack.stage.ensure_access)
Ensure we can access a directory and die with an error if we can't.

`spack.stage.get_tmp_root`()[¶](#spack.stage.get_tmp_root)

`spack.stage.purge`()[¶](#spack.stage.purge)
Remove all build directories in the top-level stage path.

### spack.store module[¶](#module-spack.store)

Components that manage Spack's installation tree.
An install tree, or "build store", consists of two parts:

1. A package database that tracks what is installed.
2. A directory layout that determines how the installations are laid out.

The store contains all the install prefixes for packages installed by Spack. The simplest store could just contain prefixes named by DAG hash, but we use a fancier directory layout to make browsing the store and debugging easier.
The directory layout is currently hard-coded to be a YAMLDirectoryLayout, so called because it stores build metadata within each prefix, in spec.yaml files. In future versions of Spack we may consider allowing install trees to define their own layouts with some per-tree configuration.

*class* `spack.store.Store`(*root*, *path_scheme=None*, *hash_length=None*)[¶](#spack.store.Store)
Bases: `object`
A store is a path full of installed Spack packages.
Stores consist of packages installed according to a `DirectoryLayout`, along with an index, or _database_ of their contents. The directory layout controls what paths look like and how Spack ensures that each unique spec gets its own unique directory (or not, though we don't recommend that). The database is a single file that caches metadata for the entire Spack installation. It prevents us from having to spider the install tree to figure out what's there.

| Parameters: | * **root** (*str*) – path to the root of the install tree * **path_scheme** (*str*) – expression according to guidelines in `spack.util.path` that describes how to construct a path to a package prefix in this store * **hash_length** (*int*) – length of the hashes used in the directory layout; spec hash suffixes will be truncated to this length |

`reindex`()[¶](#spack.store.Store.reindex)
Convenience function to reindex the store DB with its own layout.

`spack.store.default_root` *= '/home/docs/checkouts/readthedocs.org/user_builds/spack/checkouts/v0.12.1/opt/spack'*[¶](#spack.store.default_root)
Default installation root, relative to the Spack install path.

`spack.store.store` *= <spack.store.Store object>*[¶](#spack.store.store)
Singleton store instance.

### spack.tengine module[¶](#module-spack.tengine)

*class* `spack.tengine.Context`[¶](#spack.tengine.Context)
Bases: `object`
Base class for context classes that are used with the template engine.
`context_properties` *= []*[¶](#spack.tengine.Context.context_properties)

`to_dict`()[¶](#spack.tengine.Context.to_dict)
Returns a dictionary containing all the context properties.

*class* `spack.tengine.ContextMeta`[¶](#spack.tengine.ContextMeta)
Bases: `type`
Meta class for Context. It helps reduce the boilerplate in client code.

*classmethod* `context_property`(*func*)[¶](#spack.tengine.ContextMeta.context_property)
Decorator that adds a function name to the list of new context properties, and then returns a property.

`spack.tengine.context_property` *= <bound method ContextMeta.context_property of <class 'spack.tengine.ContextMeta'>>*[¶](#spack.tengine.context_property)
A saner way to use the decorator.

`spack.tengine.make_environment`(*dirs=None*)[¶](#spack.tengine.make_environment)
Returns a configured environment for template rendering.

`spack.tengine.prepend_to_line`(*text*, *token*)[¶](#spack.tengine.prepend_to_line)
Prepends a token to each line in text.

`spack.tengine.quote`(*text*)[¶](#spack.tengine.quote)
Quotes each line in text.

### spack.url module[¶](#module-spack.url)

This module has methods for parsing names and versions of packages from URLs. The idea is to allow package creators to supply nothing more than the download location of the package, and figure out version and name information from there.
**Example:** when spack is given the following URL:
> <https://www.hdfgroup.org/ftp/HDF/releases/HDF4.2.12/src/hdf-4.2.12.tar.gz>
It can figure out that the package name is `hdf`, and that it is at version `4.2.12`. This is useful for making the creation of packages simple: a user just supplies a URL and skeleton code is generated automatically.
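A rough sketch of that extraction idea, using a single hypothetical regex against the trailing `name-1.2.3.tar.gz` pattern (the real module tries a long battery of patterns; `guess_name_and_version` is invented for this sketch):

```python
import re

def guess_name_and_version(url):
    """Guess (name, version) from a tarball URL by matching the
    trailing 'name-1.2.3.tar.gz' pattern. Illustrative only."""
    filename = url.rsplit('/', 1)[-1]
    m = re.match(r'(?P<name>[A-Za-z][\w+]*?)[-_.v]+'   # name, then separator
                 r'(?P<version>\d[\w.]*?)'             # version starts with a digit
                 r'(?:\.tar\.gz|\.tar\.bz2|\.tgz|\.zip)$',  # archive extension
                 filename)
    if m is None:
        raise ValueError('cannot parse: %s' % url)
    return m.group('name'), m.group('version')
```

On the hdf URL above, this sketch recovers `('hdf', '4.2.12')`; a real implementation has to cope with far messier URLs, which is why the module exposes offset-based variants like `parse_name_offset()` as well.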
Spack can also figure out that it can most likely download 4.2.6 at this URL:
> <https://www.hdfgroup.org/ftp/HDF/releases/HDF4.2.6/src/hdf-4.2.6.tar.gz>
This is useful if a user asks for a package at a particular version number; spack doesn't need anyone to tell it where to get the tarball even though it's never been told about that version before.

*exception* `spack.url.UndetectableNameError`(*path*)[¶](#spack.url.UndetectableNameError)
Bases: [`spack.url.UrlParseError`](#spack.url.UrlParseError)
Raised when we can't parse a package name from a string.

*exception* `spack.url.UndetectableVersionError`(*path*)[¶](#spack.url.UndetectableVersionError)
Bases: [`spack.url.UrlParseError`](#spack.url.UrlParseError)
Raised when we can't parse a version from a string.

*exception* `spack.url.UrlParseError`(*msg*, *path*)[¶](#spack.url.UrlParseError)
Bases: [`spack.error.SpackError`](#spack.error.SpackError)
Raised when the URL module can't parse something correctly.

`spack.url.color_url`(*path*, ***kwargs*)[¶](#spack.url.color_url)
Color the parts of the url according to Spack's parsing.
Colors are:
Cyan: The version found by [`parse_version_offset()`](#spack.url.parse_version_offset).
Red: The name found by [`parse_name_offset()`](#spack.url.parse_name_offset).
Green: Instances of version string from [`substitute_version()`](#spack.url.substitute_version).
Magenta: Instances of the name (protected from substitution).

| Parameters: | * **path** (*str*) – The filename or URL for the package * **errors** (*bool*) – Append parse errors at end of string. * **subs** (*bool*) – Color substitutions as well as parsed name/version. |

`spack.url.cumsum`(*elts*, *init=0*, *fn=<function <lambda>>*)[¶](#spack.url.cumsum)
Return cumulative sum of result of fn on each element in elts.

`spack.url.determine_url_file_extension`(*path*)[¶](#spack.url.determine_url_file_extension)
This returns the type of archive a URL refers to. This is sometimes confusing because of URLs like:
> <https://github.com/petdance/ack/tarball/1.93_02>
where the URL doesn't actually contain the filename. We need to know what type it is so that we can appropriately name files in mirrors.

`spack.url.find_all`(*substring*, *string*)[¶](#spack.url.find_all)
Returns a list containing the indices of every occurrence of substring in string.

`spack.url.find_list_url`(*url*)[¶](#spack.url.find_list_url)
Finds a good list URL for the supplied URL.
By default, returns the dirname of the archive path. Provides special treatment for the following websites, which have a unique list URL different from the dirname of the download URL:

| GitHub | <https://github.com>/<repo>/<name>/releases |
| GitLab | [https://gitlab.*](https://gitlab.*)/<repo>/<name>/tags |
| BitBucket | <https://bitbucket.org>/<repo>/<name>/downloads/?tab=tags |
| CRAN | [https://*.r-project.org/src/contrib/Archive](https://*.r-project.org/src/contrib/Archive)/<name> |

| Parameters: | **url** (*str*) – The download URL for the package |
| Returns: | The list URL for the package |
| Return type: | str |

`spack.url.insensitize`(*string*)[¶](#spack.url.insensitize)
Change upper and lowercase letters to be case insensitive in the provided string. e.g., 'a' becomes '[Aa]', 'B' becomes '[bB]', etc. Use for building regexes.

`spack.url.parse_name`(*path*, *ver=None*)[¶](#spack.url.parse_name)
Try to determine the name of a package from its filename or URL.

| Parameters: | * **path** (*str*) – The filename or URL for the package * **ver** (*str*) – The version of the package |
| Returns: | The name of the package |
| Return type: | str |
| Raises: | [`UndetectableNameError`](#spack.url.UndetectableNameError) – If the URL does not match any regexes |

`spack.url.parse_name_and_version`(*path*)[¶](#spack.url.parse_name_and_version)
Try to determine the name of a package and extract its version from its filename or URL.
| Parameters: | **path** (*str*) – The filename or URL for the package |
| Returns: | A tuple containing the name of the package and the version of the package |
| Return type: | tuple of (str, Version) |
| Raises: | * [`UndetectableVersionError`](#spack.url.UndetectableVersionError) – If the URL does not match any regexes * [`UndetectableNameError`](#spack.url.UndetectableNameError) – If the URL does not match any regexes |

`spack.url.parse_name_offset`(*path*, *v=None*)[¶](#spack.url.parse_name_offset)
Try to determine the name of a package from its filename or URL.

| Parameters: | * **path** (*str*) – The filename or URL for the package * **v** (*str*) – The version of the package |
| Returns: | A tuple containing: the name of the package, the first index of the name, the length of the name, the index of the matching regex, and the matching regex |
| Return type: | tuple of (str, int, int, int, str) |
| Raises: | [`UndetectableNameError`](#spack.url.UndetectableNameError) – If the URL does not match any regexes |

`spack.url.parse_version`(*path*)[¶](#spack.url.parse_version)
Try to extract a version string from a filename or URL.

| Parameters: | **path** (*str*) – The filename or URL for the package |
| Returns: | The version of the package |
| Return type: | [spack.version.Version](index.html#spack.version.Version) |
| Raises: | [`UndetectableVersionError`](#spack.url.UndetectableVersionError) – If the URL does not match any regexes |

`spack.url.parse_version_offset`(*path*)[¶](#spack.url.parse_version_offset)
Try to extract a version string from a filename or URL.
| Parameters: | **path** (*str*) – The filename or URL for the package |
| Returns: | A tuple containing: the version of the package, the first index of the version, the length of the version string, the index of the matching regex, and the matching regex |
| Return type: | tuple of (Version, int, int, int, str) |
| Raises: | [`UndetectableVersionError`](#spack.url.UndetectableVersionError) – If the URL does not match any regexes |

`spack.url.split_url_extension`(*path*)[¶](#spack.url.split_url_extension)
Some URLs have a query string, e.g.:

1. <https://github.com/losalamos/CLAMR/blob/packages/PowerParser_v2.0.7.tgz?raw=true>
2. <http://www.apache.org/dyn/closer.cgi?path=/cassandra/1.2.0/apache-cassandra-1.2.0-rc2-bin.tar.gz>
3. <https://gitlab.kitware.com/vtk/vtk/repository/archive.tar.bz2?ref=v7.0.0>

In (1), the query string needs to be stripped to get at the extension, but in (2) & (3), the filename is IN a single final query argument.
This strips the URL into three pieces: `prefix`, `ext`, and `suffix`. The suffix contains anything that was stripped off the URL to get at the file extension. In (1), it will be `'?raw=true'`, but in (2), it will be empty. In (3) the suffix is a parameter that follows after the file extension, e.g.:

1. `('https://github.com/losalamos/CLAMR/blob/packages/PowerParser_v2.0.7', '.tgz', '?raw=true')`
2. `('http://www.apache.org/dyn/closer.cgi?path=/cassandra/1.2.0/apache-cassandra-1.2.0-rc2-bin', '.tar.gz', None)`
3. `('https://gitlab.kitware.com/vtk/vtk/repository/archive', '.tar.bz2', '?ref=v7.0.0')`

`spack.url.strip_name_suffixes`(*path*, *version*)[¶](#spack.url.strip_name_suffixes)
Most tarballs contain a package name followed by a version number. However, some also contain extraneous information in-between the name and version:

* `rgb-1.0.6`
* `converge_install_2.3.16`
* `jpegsrc.v9b`

These strings are not part of the package name and should be ignored.
This function strips the version number and any extraneous suffixes off and returns the remaining string. The goal is that the name is always the last thing in `path`: * `rgb` * `converge` * `jpeg` | Parameters: | * **path** (*str*) – The filename or URL for the package * **version** (*str*) – The version detected for this URL | | Returns: | The `path` with any extraneous suffixes removed | | Return type: | str | `spack.url.``strip_query_and_fragment`(*path*)[¶](#spack.url.strip_query_and_fragment) `spack.url.``strip_version_suffixes`(*path*)[¶](#spack.url.strip_version_suffixes) Some tarballs contain extraneous information after the version: * `bowtie2-2.2.5-source` * `libevent-2.0.21-stable` * `cuda_8.0.44_linux.run` These strings are not part of the version number and should be ignored. This function strips those suffixes off and returns the remaining string. The goal is that the version is always the last thing in `path`: * `bowtie2-2.2.5` * `libevent-2.0.21` * `cuda_8.0.44` | Parameters: | **path** (*str*) – The filename or URL for the package | | Returns: | The `path` with any extraneous suffixes removed | | Return type: | str | `spack.url.``substitute_version`(*path*, *new_version*)[¶](#spack.url.substitute_version) Given a URL or archive name, find the version in the path and substitute the new version for it. Replace all occurrences of the version *if* they don’t overlap with the package name. Simple example: ``` substitute_version('http://www.mr511.de/software/libelf-0.8.13.tar.gz', '2.9.3') >>> 'http://www.mr511.de/software/libelf-2.9.3.tar.gz' ``` Complex example: ``` substitute_version('https://www.hdfgroup.org/ftp/HDF/releases/HDF4.2.12/src/hdf-4.2.12.tar.gz', '2.3') >>> 'https://www.hdfgroup.org/ftp/HDF/releases/HDF2.3/src/hdf-2.3.tar.gz' ``` `spack.url.``substitution_offsets`(*path*)[¶](#spack.url.substitution_offsets) This returns offsets for substituting versions and names in the provided path. 
It is a helper for [`substitute_version()`](#spack.url.substitute_version). `spack.url.``wildcard_version`(*path*)[¶](#spack.url.wildcard_version) Find the version in the supplied path, and return a regular expression that will match this path with any version in its place. ### spack.variant module[¶](#module-spack.variant) The variant module contains data structures that are needed to manage variants both in packages and in specs. *class* `spack.variant.``AbstractVariant`(*name*, *value*)[¶](#spack.variant.AbstractVariant) Bases: `object` A variant that has not yet decided who it wants to be. It behaves like a multi valued variant which **could** do things. This kind of variant is generated during parsing of expressions like `foo=bar` and differs from multi valued variants because it will satisfy any other variant with the same name. This is because it **could** do it if it grows up to be a multi valued variant with the right set of values. `compatible`(*other*)[¶](#spack.variant.AbstractVariant.compatible) Returns True if self and other are compatible, False otherwise. As there is no semantic check, two VariantSpec are compatible if either they contain the same value or they are both multi-valued. | Parameters: | **other** – instance against which we test compatibility | | Returns: | True or False | | Return type: | bool | `constrain`(*other*)[¶](#spack.variant.AbstractVariant.constrain) Modify self to match all the constraints for other if both instances are multi-valued. Returns True if self changed, False otherwise. 
| Parameters: | **other** – instance against which we constrain self | | Returns: | True or False | | Return type: | bool | `copy`()[¶](#spack.variant.AbstractVariant.copy) Returns an instance of a variant equivalent to self | Returns: | a copy of self | | Return type: | any variant type | ``` >>> a = MultiValuedVariant('foo', True) >>> b = a.copy() >>> assert a == b >>> assert a is not b ``` *static* `from_node_dict`(*value*)[¶](#spack.variant.AbstractVariant.from_node_dict) Reconstruct a variant from a node dict. `satisfies`(*other*)[¶](#spack.variant.AbstractVariant.satisfies) Returns true if `other.name == self.name`, because any value that other holds and is not in self yet **could** be added. | Parameters: | **other** – constraint to be met for the method to return True | | Returns: | True or False | | Return type: | bool | `value`[¶](#spack.variant.AbstractVariant.value) Returns a tuple of strings containing the values stored in the variant. | Returns: | values stored in the variant | | Return type: | tuple of str | `yaml_entry`()[¶](#spack.variant.AbstractVariant.yaml_entry) Returns a key, value tuple suitable to be an entry in a yaml dict. | Returns: | (name, value_representation) | | Return type: | tuple | *class* `spack.variant.``BoolValuedVariant`(*name*, *value*)[¶](#spack.variant.BoolValuedVariant) Bases: [`spack.variant.SingleValuedVariant`](#spack.variant.SingleValuedVariant) A variant that can hold either True or False. *exception* `spack.variant.``DuplicateVariantError`(*message*, *long_message=None*)[¶](#spack.variant.DuplicateVariantError) Bases: [`spack.error.SpecError`](#spack.error.SpecError) Raised when the same variant occurs in a spec twice. *exception* `spack.variant.``InconsistentValidationError`(*vspec*, *variant*)[¶](#spack.variant.InconsistentValidationError) Bases: [`spack.error.SpecError`](#spack.error.SpecError) Raised if the wrong validator is used to validate a variant. 
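The asymmetric `satisfies`/`compatible` semantics of these variant classes can be sketched with plain sets. The class `SketchVariant` below is invented for illustration; the real classes also handle validation, YAML round-tripping, and single/bool-valued specializations.

```python
class SketchVariant(object):
    """Toy multi-valued variant: a name plus a set of string values."""

    def __init__(self, name, *values):
        self.name = name
        self.values = frozenset(values)

    def satisfies(self, other):
        # A variant satisfies a constraint when the names match and the
        # constraint's values are a subset of what this variant holds.
        return self.name == other.name and other.values <= self.values

    def compatible(self, other):
        # With no semantic check, same-named multi-valued variants can
        # always be merged, mirroring the docstrings above.
        return self.name == other.name
```

Note the asymmetry: `foo=bar,baz` satisfies the constraint `foo=bar`, but not the other way around, while both directions report as compatible.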
*exception* `spack.variant.InvalidVariantValueError`(*variant*, *invalid_values*, *pkg*)[¶](#spack.variant.InvalidVariantValueError)
Bases: [`spack.error.SpecError`](#spack.error.SpecError)
Raised when a valid variant has at least one invalid value.

*class* `spack.variant.MultiValuedVariant`(*name*, *value*)[¶](#spack.variant.MultiValuedVariant)
Bases: [`spack.variant.AbstractVariant`](#spack.variant.AbstractVariant)
A variant that can hold multiple values at once.

`satisfies`(*other*)[¶](#spack.variant.MultiValuedVariant.satisfies)
Returns true if `other.name == self.name` and `other.value` is a strict subset of self. Does not try to validate.

| Parameters: | **other** – constraint to be met for the method to return True |
| Returns: | True or False |
| Return type: | bool |

*exception* `spack.variant.MultipleValuesInExclusiveVariantError`(*variant*, *pkg*)[¶](#spack.variant.MultipleValuesInExclusiveVariantError)
Bases: [`spack.error.SpecError`](#spack.error.SpecError), `ValueError`
Raised when multiple values are present in a variant that wants only one.

*class* `spack.variant.SingleValuedVariant`(*name*, *value*)[¶](#spack.variant.SingleValuedVariant)
Bases: [`spack.variant.MultiValuedVariant`](#spack.variant.MultiValuedVariant)
A variant that can hold multiple values, but only one at a time.

`compatible`(*other*)[¶](#spack.variant.SingleValuedVariant.compatible)
Returns True if self and other are compatible, False otherwise.
As there is no semantic check, two VariantSpec are compatible if either they contain the same value or they are both multi-valued.

| Parameters: | **other** – instance against which we test compatibility |
| Returns: | True or False |
| Return type: | bool |

`constrain`(*other*)[¶](#spack.variant.SingleValuedVariant.constrain)
Modify self to match all the constraints for other if both instances are multi-valued. Returns True if self changed, False otherwise.
| Parameters: | **other** – instance against which we constrain self | | Returns: | True or False | | Return type: | bool | `satisfies`(*other*)[¶](#spack.variant.SingleValuedVariant.satisfies) Returns true if `other.name == self.name` and `other.value` is a strict subset of self. Does not try to validate. | Parameters: | **other** – constraint to be met for the method to return True | | Returns: | True or False | | Return type: | bool | `yaml_entry`()[¶](#spack.variant.SingleValuedVariant.yaml_entry) Returns a key, value tuple suitable to be an entry in a yaml dict. | Returns: | (name, value_representation) | | Return type: | tuple | *exception* `spack.variant.``UnknownVariantError`(*pkg*, *variant*)[¶](#spack.variant.UnknownVariantError) Bases: [`spack.error.SpecError`](#spack.error.SpecError) Raised when an unknown variant occurs in a spec. *exception* `spack.variant.``UnsatisfiableVariantSpecError`(*provided*, *required*)[¶](#spack.variant.UnsatisfiableVariantSpecError) Bases: [`spack.error.UnsatisfiableSpecError`](#spack.error.UnsatisfiableSpecError) Raised when a spec variant conflicts with package constraints. *class* `spack.variant.``Variant`(*name*, *default*, *description*, *values=(True*, *False)*, *multi=False*, *validator=None*)[¶](#spack.variant.Variant) Bases: `object` Represents a variant in a package, as declared in the variant directive. `allowed_values`[¶](#spack.variant.Variant.allowed_values) Returns a string representation of the allowed values for printing purposes | Returns: | representation of the allowed values | | Return type: | str | `make_default`()[¶](#spack.variant.Variant.make_default) Factory that creates a variant holding the default value. | Returns: | instance of the proper variant | | Return type: | MultiValuedVariant or SingleValuedVariant or BoolValuedVariant | `make_variant`(*value*)[¶](#spack.variant.Variant.make_variant) Factory that creates a variant holding the value passed as a parameter. 
| Parameters: | **value** – value that will be held by the variant |
| Returns: | instance of the proper variant |
| Return type: | MultiValuedVariant or SingleValuedVariant or BoolValuedVariant |

`validate_or_raise`(*vspec*, *pkg=None*)[¶](#spack.variant.Variant.validate_or_raise)
Validate a variant spec against this package variant. Raises an exception if any error is found.

| Parameters: | * **vspec** (*VariantSpec*) – instance to be validated * **pkg** (*Package*) – the package that required the validation, if available |
| Raises: | * [`InconsistentValidationError`](#spack.variant.InconsistentValidationError) – if `vspec.name != self.name` * [`MultipleValuesInExclusiveVariantError`](#spack.variant.MultipleValuesInExclusiveVariantError) – if `vspec` has multiple values but `self.multi == False` * [`InvalidVariantValueError`](#spack.variant.InvalidVariantValueError) – if `vspec.value` contains invalid values |

`variant_cls`[¶](#spack.variant.Variant.variant_cls)
Proper variant class to be used for this configuration.

*class* `spack.variant.VariantMap`(*spec*)[¶](#spack.variant.VariantMap)
Bases: [`llnl.util.lang.HashableMap`](index.html#llnl.util.lang.HashableMap)
Map containing variant instances. New values can be added only if the key is not already present.

`concrete`[¶](#spack.variant.VariantMap.concrete)
Returns True if the spec is concrete in terms of variants.

| Returns: | True or False |
| Return type: | bool |

`constrain`(*other*)[¶](#spack.variant.VariantMap.constrain)
Add all variants in other that aren't in self to self. Also constrain all multi-valued variants that are already present. Return True if self changed, False otherwise.

| Parameters: | **other** (*VariantMap*) – instance against which we constrain self |
| Returns: | True or False |
| Return type: | bool |

`copy`()[¶](#spack.variant.VariantMap.copy)
Return an instance of VariantMap equivalent to self.
| Returns: | a copy of self | | Return type: | VariantMap | `satisfies`(*other*, *strict=False*)[¶](#spack.variant.VariantMap.satisfies) Returns True if this VariantMap is more constrained than other, False otherwise. | Parameters: | * **other** (*VariantMap*) – VariantMap instance to satisfy * **strict** (*bool*) – if True return False if a key is in other and not in self, otherwise discard that key and proceed with evaluation | | Returns: | True or False | | Return type: | bool | `substitute`(*vspec*)[¶](#spack.variant.VariantMap.substitute) Substitutes the entry under `vspec.name` with `vspec`. | Parameters: | **vspec** – variant spec to be substituted | `spack.variant.``implicit_variant_conversion`(*method*)[¶](#spack.variant.implicit_variant_conversion) Converts other to type(self) and calls method(self, other) | Parameters: | **method** – any predicate method that takes another variant as an argument | Returns: decorated method `spack.variant.``substitute_abstract_variants`(*spec*)[¶](#spack.variant.substitute_abstract_variants) Uses the information in spec.package to turn any variant that needs it into a SingleValuedVariant. | Parameters: | **spec** – spec on which to operate the substitution | ### spack.version module[¶](#module-spack.version) This module implements Version and version-ish objects. These are: Version A single version of a package. VersionRange A range of versions of a package. VersionList A list of Versions and VersionRanges. All of these types support the following operations, which can be called on any of the types: ``` __eq__, __ne__, __lt__, __gt__, __ge__, __le__, __hash__ __contains__ satisfies overlaps union intersection concrete ``` *class* `spack.version.``Version`(*string*)[¶](#spack.version.Version) Bases: `object` Class to represent versions `concrete`[¶](#spack.version.Version.concrete) `dashed`[¶](#spack.version.Version.dashed) The dashed representation of the version. 
Example:

```
>>> version = Version('1.2.3b')
>>> version.dashed
Version('1-2-3b')
```

| Returns: | The version with separator characters replaced by dashes | | Return type: | Version | `dotted`[¶](#spack.version.Version.dotted) The dotted representation of the version. Example:

```
>>> version = Version('1-2-3b')
>>> version.dotted
Version('1.2.3b')
```

| Returns: | The version with separator characters replaced by dots | | Return type: | Version | `highest`()[¶](#spack.version.Version.highest) `intersection`(*other*)[¶](#spack.version.Version.intersection) `is_predecessor`(*other*)[¶](#spack.version.Version.is_predecessor) True if the other version is the immediate predecessor of this one. That is, NO versions v exist such that: (self < v < other and v not in self). `is_successor`(*other*)[¶](#spack.version.Version.is_successor) `isdevelop`()[¶](#spack.version.Version.isdevelop) Triggers on the special case of the @develop version. `isnumeric`()[¶](#spack.version.Version.isnumeric) Tells if this version is numeric (vs. a non-numeric version). A version is numeric as long as its first section is, even if it contains non-numeric portions. Some numeric versions: `1`, `1.1`, `1.1a`, `1.a.1b`. Some non-numeric versions: `develop`, `system`, `myfavoritebranch`. `joined`[¶](#spack.version.Version.joined) The joined representation of the version. Example:

```
>>> version = Version('1.2.3b')
>>> version.joined
Version('123b')
```

| Returns: | The version with separator characters removed | | Return type: | Version | `lowest`()[¶](#spack.version.Version.lowest) `overlaps`(*other*)[¶](#spack.version.Version.overlaps) `satisfies`(*other*)[¶](#spack.version.Version.satisfies) A Version ‘satisfies’ another if it is at least as specific and has a common prefix. e.g., we want `gcc@4.7.3` to satisfy a request for `gcc@4.7` so that when a user asks to build with `gcc@4.7`, we can find a suitable compiler. 
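The prefix rule described for `satisfies` can be sketched in plain Python. This is an illustrative stand-in, not Spack’s actual implementation, which also handles ranges and non-numeric components:

```python
# Illustrative sketch of the prefix rule behind Version.satisfies:
# a candidate version satisfies a request if it is at least as
# specific as the request and the requested components form a
# common prefix. Not Spack's actual implementation.

def satisfies(candidate, request):
    cand = candidate.split(".")
    req = request.split(".")
    return len(cand) >= len(req) and cand[: len(req)] == req

print(satisfies("4.7.3", "4.7"))  # True: gcc@4.7.3 can serve a gcc@4.7 request
print(satisfies("4.8.1", "4.7"))  # False: the requested prefix does not match
```

Note that by this rule the candidate must be at least as long as the request, so `4` would not satisfy a request for `4.7`.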
`underscored`[¶](#spack.version.Version.underscored) The underscored representation of the version. Example:

```
>>> version = Version('1.2.3b')
>>> version.underscored
Version('1_2_3b')
```

| Returns: | The version with separator characters replaced by underscores | | Return type: | Version | `union`(*other*)[¶](#spack.version.Version.union) `up_to`(*index*)[¶](#spack.version.Version.up_to) The version up to the specified component. Examples:

```
>>> version = Version('1.23-4b')
>>> version.up_to(1)
Version('1')
>>> version.up_to(2)
Version('1.23')
>>> version.up_to(3)
Version('1.23-4')
>>> version.up_to(4)
Version('1.23-4b')
>>> version.up_to(-1)
Version('1.23-4')
>>> version.up_to(-2)
Version('1.23')
>>> version.up_to(-3)
Version('1')
```

| Returns: | The first index components of the version | | Return type: | Version | *class* `spack.version.``VersionRange`(*start*, *end*)[¶](#spack.version.VersionRange) Bases: `object` `concrete`[¶](#spack.version.VersionRange.concrete) `highest`()[¶](#spack.version.VersionRange.highest) `intersection`(*other*)[¶](#spack.version.VersionRange.intersection) `lowest`()[¶](#spack.version.VersionRange.lowest) `overlaps`(*other*)[¶](#spack.version.VersionRange.overlaps) `satisfies`(*other*)[¶](#spack.version.VersionRange.satisfies) A VersionRange satisfies another if some version in this range would satisfy some version in the other range. To do this it must either: 1. Overlap with the other range 2. Have its start satisfy the end of the other range. This is essentially the same as overlaps(), but overlaps assumes that its arguments are specific. That is, 4.7 is interpreted as 4.7.0.0.0.0. This function assumes that 4.7 would be satisfied by 4.7.3.5, etc. Rationale: If a user asks for `gcc@4.5:4.7`, and a package is only compatible with `gcc@4.7.3:4.8`, then that package should be able to build under the constraints. Just using overlaps() would not work here. Note that we don’t need to check whether the end of this range would satisfy the start of the other range, because overlaps() already covers that case. Note further that overlaps() is a symmetric operation, while satisfies() is not. `union`(*other*)[¶](#spack.version.VersionRange.union) *class* `spack.version.``VersionList`(*vlist=None*)[¶](#spack.version.VersionList) Bases: `object` Sorted, non-redundant list of Versions and VersionRanges. `add`(*version*)[¶](#spack.version.VersionList.add) `concrete`[¶](#spack.version.VersionList.concrete) `copy`()[¶](#spack.version.VersionList.copy) *static* `from_dict`()[¶](#spack.version.VersionList.from_dict) Parse dict from to_dict. `highest`()[¶](#spack.version.VersionList.highest) Get the highest version in the list. `intersect`(*other*)[¶](#spack.version.VersionList.intersect) Intersect this spec’s list with other. Return True if the spec changed as a result; False otherwise. `intersection`(*other*)[¶](#spack.version.VersionList.intersection) `lowest`()[¶](#spack.version.VersionList.lowest) Get the lowest version in the list. `overlaps`(*other*)[¶](#spack.version.VersionList.overlaps) `satisfies`(*other*, *strict=False*)[¶](#spack.version.VersionList.satisfies) A VersionList satisfies another if some version in the list would satisfy some version in the other list. This uses essentially the same algorithm as overlaps() does for VersionList, but it calls satisfies() on member Versions and VersionRanges. If strict is specified, this version list must lie entirely *within* the other in order to satisfy it. `to_dict`()[¶](#spack.version.VersionList.to_dict) Generate a human-readable dict for YAML. 
`union`(*other*)[¶](#spack.version.VersionList.union) `update`(*other*)[¶](#spack.version.VersionList.update) `spack.version.``ver`(*obj*)[¶](#spack.version.ver) Parses a Version, VersionRange, or VersionList from a string or list of strings. ### Module contents[¶](#module-spack) `spack.``spack_version_info` *= (0, 12, 1)*[¶](#spack.spack_version_info) major, minor, patch version for Spack, in a tuple `spack.``spack_version` *= '0.12.1'*[¶](#spack.spack_version) String containing Spack version joined with .’s llnl package[¶](#llnl-package) --- ### Subpackages[¶](#subpackages) #### llnl.util package[¶](#llnl-util-package) ##### Subpackages[¶](#subpackages) ###### llnl.util.tty package[¶](#llnl-util-tty-package) ####### Submodules[¶](#submodules) ####### llnl.util.tty.colify module[¶](#module-llnl.util.tty.colify) Routines for printing columnar output. See `colify()` for more information. *class* `llnl.util.tty.colify.``ColumnConfig`(*cols*)[¶](#llnl.util.tty.colify.ColumnConfig) Bases: `object` `llnl.util.tty.colify.``colified`(*elts*, ***options*)[¶](#llnl.util.tty.colify.colified) Invokes the `colify()` function but returns the result as a string instead of writing it to an output stream. `llnl.util.tty.colify.``colify`(*elts*, ***options*)[¶](#llnl.util.tty.colify.colify) Takes a list of elements as input and finds a good columnization of them, similar to how GNU ls does. This supports both uniform-width and variable-width (tighter) columns. If elts is not a list of strings, each element is first converted using `str()`. | Keyword Arguments: | | --- | | | * **output** (*stream*) – A file object to write to. Default is `sys.stdout` * **indent** (*int*) – Optionally indent all columns by some number of spaces * **padding** (*int*) – Spaces between columns. Default is 2 * **width** (*int*) – Width of the output. Default is 80 if tty not detected * **cols** (*int*) – Force number of columns. 
Default is to size to terminal, or single-column if no tty * **tty** (*bool*) – Whether to attempt to write to a tty. Default is to autodetect a tty. Set to False to force single-column output * **method** (*str*) – Method to use to fit columns. Options are variable or uniform. Variable-width columns are tighter, uniform columns are all the same width and fit less data on the screen | `llnl.util.tty.colify.``colify_table`(*table*, ***options*)[¶](#llnl.util.tty.colify.colify_table) Version of `colify()` for data expressed in rows, (list of lists). Same as regular colify but takes a list of lists, where each sub-list must be the same length, and each is interpreted as a row in a table. Regular colify displays a sequential list of values in columns. `llnl.util.tty.colify.``config_uniform_cols`(*elts*, *console_width*, *padding*, *cols=0*)[¶](#llnl.util.tty.colify.config_uniform_cols) Uniform-width column fitting algorithm. Determines the longest element in the list, and determines how many columns of that width will fit on screen. Returns a corresponding column config. `llnl.util.tty.colify.``config_variable_cols`(*elts*, *console_width*, *padding*, *cols=0*)[¶](#llnl.util.tty.colify.config_variable_cols) Variable-width column fitting algorithm. This function determines the most columns that can fit in the screen width. Unlike uniform fitting, where all columns take the width of the longest element in the list, each column takes the width of its own longest element. This packs elements more efficiently on screen. If cols is nonzero, force the fit to use that many columns. ####### llnl.util.tty.color module[¶](#module-llnl.util.tty.color) This file implements an expression syntax, similar to `printf`, for adding ANSI colors to text. See `colorize()`, `cwrite()`, and `cprint()` for routines that can generate colored output. `colorize` will take a string and replace all color expressions with ANSI control codes. 
If the `isatty` keyword arg is set to False, then the color expressions will be converted to null strings, and the returned string will have no color. `cwrite` and `cprint` are equivalent to `write()` and `print()` calls in python, but they colorize their output. If the `stream` argument is not supplied, they write to `sys.stdout`. Here are some example color expressions: | Expression | Meaning | | --- | --- | | @r | Turn on red coloring | | @R | Turn on bright red coloring | | @*{foo} | Bold foo, but don’t change text color | | @_{bar} | Underline bar, but don’t change text color | | @*b | Turn on bold, blue text | | @_B | Turn on bright blue text with an underline | | @. | Revert to plain formatting | | @*g{green} | Print out ‘green’ in bold, green text, then reset to plain. | | @*ggreen@. | Print out ‘green’ in bold, green text, then reset to plain. | The syntax consists of: | color-expr | ‘@’ [style] color-code ‘{‘ text ‘}’ | ‘@.’ | ‘@@’ | | style | ‘*’ | ‘_’ | | color-code | [krgybmcwKRGYBMCW] | | text | .* | ‘@’ indicates the start of a color expression. It can be followed by an optional * or _ that indicates whether the font should be bold or underlined. If * or _ is not provided, the text will be plain. Then an optional color code is supplied. This can be [krgybmcw] or [KRGYBMCW], where the letters map to black(k), red(r), green(g), yellow(y), blue(b), magenta(m), cyan(c), and white(w). Lowercase letters denote normal ANSI colors and capital letters denote bright ANSI colors. Finally, the color expression can be followed by text enclosed in {}. If braces are present, only the text in braces is colored. If the braces are NOT present, then just the control codes to enable the color will be output. The console can be reset later to plain text with ‘@.’. To output an @, use ‘@@’. To output a } inside braces, use ‘}}’. 
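A minimal sketch of how a subset of this syntax maps to ANSI codes may make the grammar concrete. This is illustrative only: the real `colorize()` in `llnl.util.tty.color` also handles `@.`, `@@`, and expressions without braces, and the exact bright-color encoding used here (SGR codes 90-97) is an assumption:

```python
import re

# Illustrative translation of a subset of the syntax above:
# '@' [*|_] color-letter '{text}'. Not the real implementation.
COLORS = "krgybmcw"  # black, red, green, yellow, blue, magenta, cyan, white

def tiny_colorize(string):
    def to_ansi(m):
        style, letter, text = m.groups()
        attrs = []
        if style == "*":
            attrs.append("1")  # bold
        elif style == "_":
            attrs.append("4")  # underline
        # lowercase -> normal colors (30-37); uppercase -> bright (90-97),
        # an assumption about how "bright" is encoded
        base = 30 if letter.islower() else 90
        attrs.append(str(base + COLORS.index(letter.lower())))
        return "\033[%sm%s\033[0m" % (";".join(attrs), text)

    return re.sub(r"@([*_]?)([krgybmcwKRGYBMCW])\{([^}]*)\}", to_ansi, string)

print(repr(tiny_colorize("@*g{green}")))  # '\x1b[1;32mgreen\x1b[0m'
```

So `@*g{green}` becomes bold green text followed by a reset, matching the table entry above.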
*exception* `llnl.util.tty.color.``ColorParseError`(*message*)[¶](#llnl.util.tty.color.ColorParseError) Bases: `Exception` Raised when a color format fails to parse. *class* `llnl.util.tty.color.``ColorStream`(*stream*, *color=None*)[¶](#llnl.util.tty.color.ColorStream) Bases: `object` `write`(*string*, ***kwargs*)[¶](#llnl.util.tty.color.ColorStream.write) `writelines`(*sequence*, ***kwargs*)[¶](#llnl.util.tty.color.ColorStream.writelines) `llnl.util.tty.color.``cescape`(*string*)[¶](#llnl.util.tty.color.cescape) Escapes special characters needed for color codes. Replaces the following symbols with their equivalent literal forms: | `@` | `@@` | | `}` | `}}` | | Parameters: | **string** (*str*) – the string to escape | | Returns: | the string with color codes escaped | | Return type: | (str) | `llnl.util.tty.color.``cextra`(*string*)[¶](#llnl.util.tty.color.cextra) Length of extra color characters in a string `llnl.util.tty.color.``clen`(*string*)[¶](#llnl.util.tty.color.clen) Return the length of a string, excluding ansi color sequences. `llnl.util.tty.color.``color_when`(*value*)[¶](#llnl.util.tty.color.color_when) Context manager to temporarily use a particular color setting. `llnl.util.tty.color.``colorize`(*string*, ***kwargs*)[¶](#llnl.util.tty.color.colorize) Replace all color expressions in a string with ANSI control codes. | Parameters: | **string** (*str*) – The string to replace | | Returns: | The filtered string | | Return type: | str | | Keyword Arguments: | | | **color** (*bool*) – If False, output will be plain text without control codes, for output to non-console devices. | `llnl.util.tty.color.``cprint`(*string*, *stream=<_io.TextIOWrapper name='<stdout>' mode='w' encoding='UTF-8'>*, *color=None*)[¶](#llnl.util.tty.color.cprint) Same as cwrite, but writes a trailing newline to the stream. 
`llnl.util.tty.color.``cwrite`(*string*, *stream=<_io.TextIOWrapper name='<stdout>' mode='w' encoding='UTF-8'>*, *color=None*)[¶](#llnl.util.tty.color.cwrite) Replace all color expressions in string with ANSI control codes and write the result to the stream. If color is False, this will write plain text with no color. If True, then it will always write colored output. If not supplied, then it will be set based on stream.isatty(). `llnl.util.tty.color.``get_color_when`()[¶](#llnl.util.tty.color.get_color_when) Return whether commands should print color or not. *class* `llnl.util.tty.color.``match_to_ansi`(*color=True*)[¶](#llnl.util.tty.color.match_to_ansi) Bases: `object` `escape`(*s*)[¶](#llnl.util.tty.color.match_to_ansi.escape) Returns a TTY escape sequence for a color `llnl.util.tty.color.``set_color_when`(*when*)[¶](#llnl.util.tty.color.set_color_when) Set when color should be applied. Options are: * True or ‘always’: always print color * False or ‘never’: never print color * None or ‘auto’: only print color if sys.stdout is a tty. ####### llnl.util.tty.log module[¶](#module-llnl.util.tty.log) Utility classes for logging the output of blocks of code. *class* `llnl.util.tty.log.``Unbuffered`(*stream*)[¶](#llnl.util.tty.log.Unbuffered) Bases: `object` Wrapper for Python streams that forces them to be unbuffered. This is implemented by forcing a flush after each write. `write`(*data*)[¶](#llnl.util.tty.log.Unbuffered.write) `writelines`(*datas*)[¶](#llnl.util.tty.log.Unbuffered.writelines) *class* `llnl.util.tty.log.``keyboard_input`(*stream*)[¶](#llnl.util.tty.log.keyboard_input) Bases: `object` Context manager to disable line editing and echoing. Use this with `sys.stdin` for keyboard input, e.g.: ``` with keyboard_input(sys.stdin): r, w, x = select.select([sys.stdin], [], []) # ... do something with keypresses ... ``` This disables canonical input so that keypresses are available on the stream immediately. 
Typically standard input allows line editing, which means keypresses won’t be sent until the user hits return. It also disables echoing, so that keys pressed aren’t printed to the terminal. So, the user can hit, e.g., ‘v’, and it’s read on the other end of the pipe immediately but not printed. When the with block completes, prior TTY settings are restored. Note: this depends on termios support. If termios isn’t available, or if the stream isn’t a TTY, this context manager has no effect. *class* `llnl.util.tty.log.``log_output`(*file_like=None*, *echo=False*, *debug=False*, *buffer=False*)[¶](#llnl.util.tty.log.log_output) Bases: `object` Context manager that logs its output to a file. In the simplest case, the usage looks like this: ``` with log_output('logfile.txt'): # do things ... output will be logged ``` Any output from the with block will be redirected to `logfile.txt`. If you also want the output to be echoed to `stdout`, use the `echo` parameter: ``` with log_output('logfile.txt', echo=True): # do things ... output will be logged and printed out ``` And, if you just want to echo *some* stuff from the parent, use `force_echo`: ``` with log_output('logfile.txt', echo=False) as logger: # do things ... output will be logged with logger.force_echo(): # things here will be echoed *and* logged ``` Under the hood, we spawn a daemon and set up a pipe between this process and the daemon. The daemon writes our output to both the file and to stdout (if echoing). The parent process can communicate with the daemon to tell it when and when not to echo; this is what force_echo does. You can also enable/disable echoing by typing ‘v’. We try to use OS-level file descriptors to do the redirection, but if stdout or stderr has been set to some Python-level file object, we use Python-level redirection instead. This allows the redirection to work within test frameworks like nose and pytest. 
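The OS-level redirection mentioned above boils down to duplicating file descriptor 1 with `os.dup2`. Here is a minimal single-process sketch of that idea; the real `log_output` additionally spawns a logger daemon, tees output to the terminal when echoing, and falls back to Python-level redirection:

```python
import os
import sys
from contextlib import contextmanager

# Minimal sketch of OS-level stdout redirection via dup2.
# Illustrative only: log_output does much more (daemon, echo, etc.).
@contextmanager
def stdout_to_file(path):
    sys.stdout.flush()
    saved = os.dup(1)                # keep a handle on the real stdout
    with open(path, "w") as log:
        os.dup2(log.fileno(), 1)     # fd 1 now points at the log file
        try:
            yield
        finally:
            sys.stdout.flush()
            os.dup2(saved, 1)        # restore the original stdout
            os.close(saved)
```

Because the swap happens at the descriptor level, anything written to fd 1 inside the block, including output from child processes, lands in the log file.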
`force_echo`()[¶](#llnl.util.tty.log.log_output.force_echo) Context manager to force local echo, even if echo is off. ####### Module contents[¶](#module-llnl.util.tty) `llnl.util.tty.``debug`(*message*, **args*, ***kwargs*)[¶](#llnl.util.tty.debug) `llnl.util.tty.``die`(*message*, **args*, ***kwargs*)[¶](#llnl.util.tty.die) `llnl.util.tty.``error`(*message*, **args*, ***kwargs*)[¶](#llnl.util.tty.error) `llnl.util.tty.``get_number`(*prompt*, ***kwargs*)[¶](#llnl.util.tty.get_number) `llnl.util.tty.``get_yes_or_no`(*prompt*, ***kwargs*)[¶](#llnl.util.tty.get_yes_or_no) `llnl.util.tty.``hline`(*label=None*, ***kwargs*)[¶](#llnl.util.tty.hline) Draw a labeled horizontal line. | Keyword Arguments: | | --- | | | * **char** (*str*) – Char to draw the line with. Default ‘-‘ * **max_width** (*int*) – Maximum width of the line. Default is 64 chars. | `llnl.util.tty.``info`(*message*, **args*, ***kwargs*)[¶](#llnl.util.tty.info) `llnl.util.tty.``is_debug`()[¶](#llnl.util.tty.is_debug) `llnl.util.tty.``is_stacktrace`()[¶](#llnl.util.tty.is_stacktrace) `llnl.util.tty.``is_verbose`()[¶](#llnl.util.tty.is_verbose) `llnl.util.tty.``msg`(*message*, **args*, ***kwargs*)[¶](#llnl.util.tty.msg) `llnl.util.tty.``process_stacktrace`(*countback*)[¶](#llnl.util.tty.process_stacktrace) Gives file and line frame ‘countback’ frames from the bottom `llnl.util.tty.``set_debug`(*flag*)[¶](#llnl.util.tty.set_debug) `llnl.util.tty.``set_stacktrace`(*flag*)[¶](#llnl.util.tty.set_stacktrace) `llnl.util.tty.``set_verbose`(*flag*)[¶](#llnl.util.tty.set_verbose) `llnl.util.tty.``terminal_size`()[¶](#llnl.util.tty.terminal_size) Gets the dimensions of the console: (rows, cols). 
`llnl.util.tty.``verbose`(*message*, **args*, ***kwargs*)[¶](#llnl.util.tty.verbose) `llnl.util.tty.``warn`(*message*, **args*, ***kwargs*)[¶](#llnl.util.tty.warn) ##### Submodules[¶](#submodules) ##### llnl.util.argparsewriter module[¶](#module-llnl.util.argparsewriter) *class* `llnl.util.argparsewriter.``ArgparseRstWriter`(*out=<_io.TextIOWrapper name='<stdout>' mode='w' encoding='UTF-8'>, rst_levels=['=', '-', '^', '~', ':', '`'], strip_root_prog=True*)[¶](#llnl.util.argparsewriter.ArgparseRstWriter) Bases: [`llnl.util.argparsewriter.ArgparseWriter`](#llnl.util.argparsewriter.ArgparseWriter) Write argparse output as rst sections. `begin_command`(*prog*)[¶](#llnl.util.argparsewriter.ArgparseRstWriter.begin_command) `begin_optionals`()[¶](#llnl.util.argparsewriter.ArgparseRstWriter.begin_optionals) `begin_positionals`()[¶](#llnl.util.argparsewriter.ArgparseRstWriter.begin_positionals) `begin_subcommands`(*subcommands*)[¶](#llnl.util.argparsewriter.ArgparseRstWriter.begin_subcommands) `description`(*description*)[¶](#llnl.util.argparsewriter.ArgparseRstWriter.description) `line`(*string=''*)[¶](#llnl.util.argparsewriter.ArgparseRstWriter.line) `optional`(*opts*, *help*)[¶](#llnl.util.argparsewriter.ArgparseRstWriter.optional) `positional`(*name*, *help*)[¶](#llnl.util.argparsewriter.ArgparseRstWriter.positional) `usage`(*usage*)[¶](#llnl.util.argparsewriter.ArgparseRstWriter.usage) *class* `llnl.util.argparsewriter.``ArgparseWriter`[¶](#llnl.util.argparsewriter.ArgparseWriter) Bases: `object` Analyzes an argparse ArgumentParser for easy generation of help. 
`begin_command`(*prog*)[¶](#llnl.util.argparsewriter.ArgparseWriter.begin_command) `begin_optionals`()[¶](#llnl.util.argparsewriter.ArgparseWriter.begin_optionals) `begin_positionals`()[¶](#llnl.util.argparsewriter.ArgparseWriter.begin_positionals) `begin_subcommands`(*subcommands*)[¶](#llnl.util.argparsewriter.ArgparseWriter.begin_subcommands) `description`(*description*)[¶](#llnl.util.argparsewriter.ArgparseWriter.description) `end_command`(*prog*)[¶](#llnl.util.argparsewriter.ArgparseWriter.end_command) `end_optionals`()[¶](#llnl.util.argparsewriter.ArgparseWriter.end_optionals) `end_positionals`()[¶](#llnl.util.argparsewriter.ArgparseWriter.end_positionals) `end_subcommands`(*subcommands*)[¶](#llnl.util.argparsewriter.ArgparseWriter.end_subcommands) `optional`(*option*, *help*)[¶](#llnl.util.argparsewriter.ArgparseWriter.optional) `positional`(*name*, *help*)[¶](#llnl.util.argparsewriter.ArgparseWriter.positional) `usage`(*usage*)[¶](#llnl.util.argparsewriter.ArgparseWriter.usage) `write`(*parser*, *root=True*)[¶](#llnl.util.argparsewriter.ArgparseWriter.write) Write out details about an ArgumentParser. | Parameters: | * **parser** (*ArgumentParser*) – an `argparse` parser * **root** (*bool* *or* *int*) – if bool, whether to include the root parser; or `1` to flatten the root parser with first-level subcommands | ##### llnl.util.filesystem module[¶](#module-llnl.util.filesystem) *class* `llnl.util.filesystem.``FileFilter`(**filenames*)[¶](#llnl.util.filesystem.FileFilter) Bases: `object` Convenience class for calling `filter_file` a lot. `filter`(*regex*, *repl*, ***kwargs*)[¶](#llnl.util.filesystem.FileFilter.filter) *class* `llnl.util.filesystem.``FileList`(*files*)[¶](#llnl.util.filesystem.FileList) Bases: `collections.abc.Sequence` Sequence of absolute paths to files. Provides a few convenience methods to manipulate file paths. 
`basenames`[¶](#llnl.util.filesystem.FileList.basenames) Stable de-duplication of the base-names in the list ``` >>> l = LibraryList(['/dir1/liba.a', '/dir2/libb.a', '/dir3/liba.a']) >>> l.basenames ['liba.a', 'libb.a'] >>> h = HeaderList(['/dir1/a.h', '/dir2/b.h', '/dir3/a.h']) >>> h.basenames ['a.h', 'b.h'] ``` | Returns: | A list of base-names | | Return type: | list of strings | `directories`[¶](#llnl.util.filesystem.FileList.directories) Stable de-duplication of the directories where the files reside. ``` >>> l = LibraryList(['/dir1/liba.a', '/dir2/libb.a', '/dir1/libc.a']) >>> l.directories ['/dir1', '/dir2'] >>> h = HeaderList(['/dir1/a.h', '/dir1/b.h', '/dir2/c.h']) >>> h.directories ['/dir1', '/dir2'] ``` | Returns: | A list of directories | | Return type: | list of strings | `joined`(*separator=' '*)[¶](#llnl.util.filesystem.FileList.joined) *class* `llnl.util.filesystem.``HeaderList`(*files*)[¶](#llnl.util.filesystem.HeaderList) Bases: [`llnl.util.filesystem.FileList`](#llnl.util.filesystem.FileList) Sequence of absolute paths to headers. Provides a few convenience methods to manipulate header paths and get commonly used compiler flags or names. `add_macro`(*macro*)[¶](#llnl.util.filesystem.HeaderList.add_macro) Add a macro definition | Parameters: | **macro** (*str*) – The macro to add | `cpp_flags`[¶](#llnl.util.filesystem.HeaderList.cpp_flags) Include flags + macro definitions ``` >>> h = HeaderList(['/dir1/a.h', '/dir1/b.h', '/dir2/c.h']) >>> h.cpp_flags '-I/dir1 -I/dir2' >>> h.add_macro('-DBOOST_DYN_LINK') >>> h.cpp_flags '-I/dir1 -I/dir2 -DBOOST_DYN_LINK' ``` | Returns: | A joined list of include flags and macro definitions | | Return type: | str | `headers`[¶](#llnl.util.filesystem.HeaderList.headers) Stable de-duplication of the headers. 
| Returns: | A list of header files | | Return type: | list of strings | `include_flags`[¶](#llnl.util.filesystem.HeaderList.include_flags) Include flags ``` >>> h = HeaderList(['/dir1/a.h', '/dir1/b.h', '/dir2/c.h']) >>> h.include_flags '-I/dir1 -I/dir2' ``` | Returns: | A joined list of include flags | | Return type: | str | `macro_definitions`[¶](#llnl.util.filesystem.HeaderList.macro_definitions) Macro definitions ``` >>> h = HeaderList(['/dir1/a.h', '/dir1/b.h', '/dir2/c.h']) >>> h.add_macro('-DBOOST_LIB_NAME=boost_regex') >>> h.add_macro('-DBOOST_DYN_LINK') >>> h.macro_definitions '-DBOOST_LIB_NAME=boost_regex -DBOOST_DYN_LINK' ``` | Returns: | A joined list of macro definitions | | Return type: | str | `names`[¶](#llnl.util.filesystem.HeaderList.names) Stable de-duplication of header names in the list without extensions ``` >>> h = HeaderList(['/dir1/a.h', '/dir2/b.h', '/dir3/a.h']) >>> h.names ['a', 'b'] ``` | Returns: | A list of files without extensions | | Return type: | list of strings | *class* `llnl.util.filesystem.``LibraryList`(*files*)[¶](#llnl.util.filesystem.LibraryList) Bases: [`llnl.util.filesystem.FileList`](#llnl.util.filesystem.FileList) Sequence of absolute paths to libraries Provides a few convenience methods to manipulate library paths and get commonly used compiler flags or names `ld_flags`[¶](#llnl.util.filesystem.LibraryList.ld_flags) Search flags + link flags ``` >>> l = LibraryList(['/dir1/liba.a', '/dir2/libb.a', '/dir1/liba.so']) >>> l.ld_flags '-L/dir1 -L/dir2 -la -lb' ``` | Returns: | A joined list of search flags and link flags | | Return type: | str | `libraries`[¶](#llnl.util.filesystem.LibraryList.libraries) Stable de-duplication of library files. 
| Returns: | A list of library files | | Return type: | list of strings | `link_flags`[¶](#llnl.util.filesystem.LibraryList.link_flags) Link flags for the libraries ``` >>> l = LibraryList(['/dir1/liba.a', '/dir2/libb.a', '/dir1/liba.so']) >>> l.link_flags '-la -lb' ``` | Returns: | A joined list of link flags | | Return type: | str | `names`[¶](#llnl.util.filesystem.LibraryList.names) Stable de-duplication of library names in the list ``` >>> l = LibraryList(['/dir1/liba.a', '/dir2/libb.a', '/dir3/liba.so']) >>> l.names ['a', 'b'] ``` | Returns: | A list of library names | | Return type: | list of strings | `search_flags`[¶](#llnl.util.filesystem.LibraryList.search_flags) Search flags for the libraries ``` >>> l = LibraryList(['/dir1/liba.a', '/dir2/libb.a', '/dir1/liba.so']) >>> l.search_flags '-L/dir1 -L/dir2' ``` | Returns: | A joined list of search flags | | Return type: | str | `llnl.util.filesystem.``ancestor`(*dir*, *n=1*)[¶](#llnl.util.filesystem.ancestor) Get the nth ancestor of a directory. `llnl.util.filesystem.``can_access`(*file_name*)[¶](#llnl.util.filesystem.can_access) True if we have read/write access to the file. `llnl.util.filesystem.``change_sed_delimiter`(*old_delim*, *new_delim*, **filenames*)[¶](#llnl.util.filesystem.change_sed_delimiter) Find all sed search/replace commands and change the delimiter. e.g., if the file contains seds that look like `'s///'`, you can call `change_sed_delimiter('/', '@', file)` to change the delimiter to `'@'`. Note that this routine will fail if the delimiter is `'` or `"`. Handling those is left for future work. | Parameters: | * **old_delim** (*str*) – The delimiter to search for * **new_delim** (*str*) – The delimiter to replace with * ***filenames** – One or more files to search and replace | `llnl.util.filesystem.``copy_mode`(*src*, *dest*)[¶](#llnl.util.filesystem.copy_mode) Set the mode of dest to that of src unless it is a link. 
`llnl.util.filesystem.``filter_file`(*regex*, *repl*, **filenames*, ***kwargs*)[¶](#llnl.util.filesystem.filter_file) Like sed, but uses python regular expressions. Filters every line of each file through regex and replaces the file with a filtered version. Preserves mode of filtered files. As with re.sub, `repl` can be either a string or a callable. If it is a callable, it is passed the match object and should return a suitable replacement string. If it is a string, it can contain `\1`, `\2`, etc. to represent back-substitution as sed would allow. | Parameters: | * **regex** (*str*) – The regular expression to search for * **repl** (*str*) – The string to replace matches with * ***filenames** – One or more files to search and replace | | Keyword Arguments: | | | * **string** (*bool*) – Treat regex as a plain string. Default is False * **backup** (*bool*) – Make backup file(s) suffixed with `~`. Default is True * **ignore_absent** (*bool*) – Ignore any files that don’t exist. Default is False | `llnl.util.filesystem.``find`(*root*, *files*, *recursive=True*)[¶](#llnl.util.filesystem.find) Search for `files` starting from the `root` directory. Like GNU/BSD find but written entirely in Python. Examples: ``` $ find /usr -name python ``` is equivalent to: ``` >>> find('/usr', 'python') ``` ``` $ find /usr/local/bin -maxdepth 1 -name python ``` is equivalent to: ``` >>> find('/usr/local/bin', 'python', recursive=False) ``` Accepts any glob characters accepted by fnmatch: | Pattern | Meaning | | --- | --- | | * | matches everything | | ? | matches any single character | | [seq] | matches any character in `seq` | | [!seq] | matches any character not in `seq` | | Parameters: | * **root** (*str*) – The root directory to start searching from * **files** (*str* *or* *collections.Sequence*) – Library name(s) to search for * **recursive** (*bool**,* *optional*) – if False search only root folder, if True descends top-down from the root. Defaults to True. 
| | Returns: | The files that have been found | | Return type: | list of strings | `llnl.util.filesystem.``find_headers`(*headers*, *root*, *recursive=False*)[¶](#llnl.util.filesystem.find_headers) Returns an iterable object containing a list of full paths to headers if found. Accepts any glob characters accepted by fnmatch: | Pattern | Meaning | | --- | --- | | * | matches everything | | ? | matches any single character | | [seq] | matches any character in `seq` | | [!seq] | matches any character not in `seq` | | Parameters: | * **headers** (*str* *or* *list of str*) – Header name(s) to search for * **root** (*str*) – The root directory to start searching from * **recursive** (*bool**,* *optional*) – if False search only root folder, if True descends top-down from the root. Defaults to False. | | Returns: | The headers that have been found | | Return type: | HeaderList | `llnl.util.filesystem.``find_libraries`(*libraries*, *root*, *shared=True*, *recursive=False*)[¶](#llnl.util.filesystem.find_libraries) Returns an iterable of full paths to libraries found in a root dir. Accepts any glob characters accepted by fnmatch: | Pattern | Meaning | | --- | --- | | * | matches everything | | ? | matches any single character | | [seq] | matches any character in `seq` | | [!seq] | matches any character not in `seq` | | Parameters: | * **libraries** (*str* *or* *list of str*) – Library name(s) to search for * **root** (*str*) – The root directory to start searching from * **shared** (*bool**,* *optional*) – if True searches for shared libraries, otherwise for static. Defaults to True. * **recursive** (*bool**,* *optional*) – if False search only root folder, if True descends top-down from the root. Defaults to False. 
| | Returns: | The libraries that have been found | | Return type: | LibraryList | `llnl.util.filesystem.``find_system_libraries`(*libraries*, *shared=True*)[¶](#llnl.util.filesystem.find_system_libraries) Searches the usual system library locations for `libraries`. Search order is as follows: 1. `/lib64` 2. `/lib` 3. `/usr/lib64` 4. `/usr/lib` 5. `/usr/local/lib64` 6. `/usr/local/lib` Accepts any glob characters accepted by fnmatch: | Pattern | Meaning | | --- | --- | | * | matches everything | | ? | matches any single character | | [seq] | matches any character in `seq` | | [!seq] | matches any character not in `seq` | | Parameters: | * **libraries** (*str* *or* *list of str*) – Library name(s) to search for * **shared** (*bool**,* *optional*) – if True searches for shared libraries, otherwise for static. Defaults to True. | | Returns: | The libraries that have been found | | Return type: | LibraryList | `llnl.util.filesystem.``fix_darwin_install_name`(*path*)[¶](#llnl.util.filesystem.fix_darwin_install_name) Fix install name of dynamic libraries on Darwin to have full path. There are two parts of this task: 1. Use `install_name('-id', ...)` to change install name of a single lib 2. Use `install_name('-change', ...)` to change the cross linking between libs. The function assumes that all libraries are in one folder and currently won’t follow subfolders. | Parameters: | **path** (*str*) – directory in which .dylib files are located | `llnl.util.filesystem.``force_remove`(**paths*)[¶](#llnl.util.filesystem.force_remove) Remove files without printing errors. Like `rm -f`, does NOT remove directories. `llnl.util.filesystem.``force_symlink`(*src*, *dest*)[¶](#llnl.util.filesystem.force_symlink) `llnl.util.filesystem.``copy`(*src*, *dest*, *_permissions=False*)[¶](#llnl.util.filesystem.copy) Copies the file *src* to the file or directory *dest*. If *dest* specifies a directory, the file will be copied into *dest* using the base filename from *src*. 
| Parameters: | * **src** (*str*) – the file to copy * **dest** (*str*) – the destination file or directory * **_permissions** (*bool*) – for internal use only | `llnl.util.filesystem.``install`(*src*, *dest*)[¶](#llnl.util.filesystem.install) Installs the file *src* to the file or directory *dest*. Same as [`copy()`](#llnl.util.filesystem.copy) with the addition of setting proper permissions on the installed file. | Parameters: | * **src** (*str*) – the file to install * **dest** (*str*) – the destination file or directory | `llnl.util.filesystem.``copy_tree`(*src*, *dest*, *symlinks=True*, *ignore=None*, *_permissions=False*)[¶](#llnl.util.filesystem.copy_tree) Recursively copy an entire directory tree rooted at *src*. If the destination directory *dest* does not already exist, it will be created as well as missing parent directories. If *symlinks* is true, symbolic links in the source tree are represented as symbolic links in the new tree and the metadata of the original links will be copied as far as the platform allows; if false, the contents and metadata of the linked files are copied to the new tree. If *ignore* is set, then each path relative to *src* will be passed to this function; the function returns whether that path should be skipped. | Parameters: | * **src** (*str*) – the directory to copy * **dest** (*str*) – the destination directory * **symlinks** (*bool*) – whether or not to preserve symlinks * **ignore** (*function*) – function indicating which files to ignore * **_permissions** (*bool*) – for internal use only | `llnl.util.filesystem.``install_tree`(*src*, *dest*, *symlinks=True*, *ignore=None*)[¶](#llnl.util.filesystem.install_tree) Recursively install an entire directory tree rooted at *src*. Same as [`copy_tree()`](#llnl.util.filesystem.copy_tree) with the addition of setting proper permissions on the installed files and directories. 
| Parameters: | * **src** (*str*) – the directory to install * **dest** (*str*) – the destination directory * **symlinks** (*bool*) – whether or not to preserve symlinks * **ignore** (*function*) – function indicating which files to ignore | `llnl.util.filesystem.``is_exe`(*path*)[¶](#llnl.util.filesystem.is_exe) True if path is an executable file. `llnl.util.filesystem.``join_path`(*prefix*, **args*)[¶](#llnl.util.filesystem.join_path) `llnl.util.filesystem.``mkdirp`(**paths*, ***kwargs*)[¶](#llnl.util.filesystem.mkdirp) Creates a directory, as well as parent directories if needed. | Parameters: | **paths** (*str*) – paths to create with mkdirp | Keyword Arguments: mode (permission bits or None, optional): optional permissions to set on the created directory – use OS default if not provided `llnl.util.filesystem.``remove_dead_links`(*root*)[¶](#llnl.util.filesystem.remove_dead_links) Removes any dead link that is present in root. | Parameters: | **root** (*str*) – path where to search for dead links | `llnl.util.filesystem.``remove_if_dead_link`(*path*)[¶](#llnl.util.filesystem.remove_if_dead_link) Removes the argument if it is a dead link. | Parameters: | **path** (*str*) – The potential dead link | `llnl.util.filesystem.``remove_linked_tree`(*path*)[¶](#llnl.util.filesystem.remove_linked_tree) Removes a directory and its contents. If the directory is a symlink, follows the link and removes the real directory before removing the link. | Parameters: | **path** (*str*) – Directory to be removed | `llnl.util.filesystem.``set_executable`(*path*)[¶](#llnl.util.filesystem.set_executable) `llnl.util.filesystem.``set_install_permissions`(*path*)[¶](#llnl.util.filesystem.set_install_permissions) Set appropriate permissions on the installed file. `llnl.util.filesystem.``touch`(*path*)[¶](#llnl.util.filesystem.touch) Creates an empty file at the specified path. 
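The behavior documented for `mkdirp` and `touch` can be sketched with the standard library alone. This is an illustrative reimplementation consistent with the descriptions above, not the Spack source:

```python
import os


def mkdirp(*paths, mode=None):
    """Create each directory, including parents; existing dirs are fine."""
    for path in paths:
        if mode is None:
            os.makedirs(path, exist_ok=True)
        else:
            os.makedirs(path, mode=mode, exist_ok=True)


def touch(path):
    """Create an empty file at path, or update its mtime if it exists."""
    with open(path, "ab"):
        os.utime(path, None)
```

Opening the file in append mode means an existing file's contents are never truncated, matching the usual `touch` semantics.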
`llnl.util.filesystem.``touchp`(*path*)[¶](#llnl.util.filesystem.touchp) Like `touch`, but creates any parent directories needed for the file. `llnl.util.filesystem.``traverse_tree`(*source_root*, *dest_root*, *rel_path=''*, ***kwargs*)[¶](#llnl.util.filesystem.traverse_tree) Traverse two filesystem trees simultaneously. Walks the LinkTree directory in pre or post order. Yields each file in the source directory with a matching path from the dest directory, along with whether the file is a directory. e.g., for this tree: ``` root/ a/ file1 file2 b/ file3 ``` When called on dest, this yields: ``` ('root', 'dest') ('root/a', 'dest/a') ('root/a/file1', 'dest/a/file1') ('root/a/file2', 'dest/a/file2') ('root/b', 'dest/b') ('root/b/file3', 'dest/b/file3') ``` | Keyword Arguments: | | --- | | | * **order** (*str*) – Whether to do pre- or post-order traversal. Accepted values are ‘pre’ and ‘post’ * **ignore** (*function*) – function indicating which files to ignore * **follow_nonexisting** (*bool*) – Whether to descend into directories in `src` that do not exist in `dest`. Default is True * **follow_links** (*bool*) – Whether to descend into symlinks in `src` | `llnl.util.filesystem.``unset_executable_mode`(*path*)[¶](#llnl.util.filesystem.unset_executable_mode) `llnl.util.filesystem.``working_dir`(*dirname*, ***kwargs*)[¶](#llnl.util.filesystem.working_dir) ##### llnl.util.lang module[¶](#module-llnl.util.lang) *class* `llnl.util.lang.``HashableMap`[¶](#llnl.util.lang.HashableMap) Bases: `collections.abc.MutableMapping` This is a hashable, comparable dictionary. Hash is performed on a tuple of the values in the dictionary. `copy`()[¶](#llnl.util.lang.HashableMap.copy) Type-agnostic clone method. Preserves subclass type. *class* `llnl.util.lang.``LazyReference`(*ref_function*)[¶](#llnl.util.lang.LazyReference) Bases: `object` Lazily evaluated reference to part of a singleton. 
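The `HashableMap` contract described above — a mutable mapping that hashes over a tuple of its values and whose `copy()` preserves the subclass type — can be sketched like this. The sketch is an assumption-laden illustration, not the actual `llnl.util.lang` code:

```python
from collections.abc import MutableMapping


class HashableMap(MutableMapping):
    """Sketch of a hashable, comparable dictionary."""

    def __init__(self, **kwargs):
        self._data = dict(**kwargs)

    def __getitem__(self, key):
        return self._data[key]

    def __setitem__(self, key, value):
        self._data[key] = value

    def __delitem__(self, key):
        del self._data[key]

    def __iter__(self):
        return iter(self._data)

    def __len__(self):
        return len(self._data)

    def __hash__(self):
        # MutableMapping sets __hash__ to None; restore it, hashing
        # a tuple of the values as the docs describe.
        return hash(tuple(self._data.values()))

    def copy(self):
        """Type-agnostic clone: preserves the subclass type."""
        clone = type(self)()
        clone._data = dict(self._data)
        return clone
```

Equality comes for free from `collections.abc.Mapping`, so two maps with equal contents compare equal and hash alike.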
*class* `llnl.util.lang.``ObjectWrapper`(*wrapped_object*)[¶](#llnl.util.lang.ObjectWrapper) Bases: `object` Base class that wraps an object. Derived classes can add new behavior while staying undercover. This class is modeled after the stackoverflow answer: * <http://stackoverflow.com/a/1445289/771663> *exception* `llnl.util.lang.``RequiredAttributeError`(*message*)[¶](#llnl.util.lang.RequiredAttributeError) Bases: `ValueError` *class* `llnl.util.lang.``Singleton`(*factory*)[¶](#llnl.util.lang.Singleton) Bases: `object` Simple wrapper for lazily initialized singleton objects. `instance`[¶](#llnl.util.lang.Singleton.instance) `llnl.util.lang.``attr_required`(*obj*, *attr_name*)[¶](#llnl.util.lang.attr_required) Ensure that a class has a required attribute. `llnl.util.lang.``attr_setdefault`(*obj*, *name*, *value*)[¶](#llnl.util.lang.attr_setdefault) Like dict.setdefault, but for objects. `llnl.util.lang.``caller_locals`()[¶](#llnl.util.lang.caller_locals) This will return the locals of the *parent* of the caller. This allows a function to insert variables into its caller’s scope. Yes, this is some black magic, and yes it’s useful for implementing things like depends_on and provides. `llnl.util.lang.``check_kwargs`(*kwargs*, *fun*)[¶](#llnl.util.lang.check_kwargs) Helper for making functions with kwargs. Checks whether the kwargs are empty after all of them have been popped off. If they’re not, raises an error describing which kwargs are invalid. Example: ``` def foo(self, **kwargs): x = kwargs.pop('x', None) y = kwargs.pop('y', None) z = kwargs.pop('z', None) check_kwargs(kwargs, self.foo) # This raises a TypeError: foo(w='bad kwarg') ``` *class* `llnl.util.lang.``classproperty`[¶](#llnl.util.lang.classproperty) Bases: `property` classproperty decorator: like property but for classmethods. 
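The `classproperty` idea — a property whose getter receives the class rather than an instance — can be illustrated with a minimal read-only descriptor. This simplified sketch does not subclass `property` as the real decorator does, and the `Package` class is a made-up example:

```python
class classproperty:
    """Minimal sketch of a property-like descriptor for classes."""

    def __init__(self, fget):
        self.fget = fget

    def __get__(self, instance, owner):
        # Invoked for both Cls.attr and instance.attr; the getter
        # always receives the owning class.
        return self.fget(owner)


class Package:
    _name = "libelf"

    @classproperty
    def name(cls):
        return cls._name.upper()
```

`Package.name` and `Package().name` both evaluate the getter against the class, so no instance state is needed.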
`llnl.util.lang.``dedupe`(*sequence*)[¶](#llnl.util.lang.dedupe) Yields a stable de-duplication of a hashable sequence | Parameters: | **sequence** – hashable sequence to be de-duplicated | | Returns: | stable de-duplication of the sequence | `llnl.util.lang.``get_calling_module_name`()[¶](#llnl.util.lang.get_calling_module_name) Make sure that the caller is a class definition, and return the enclosing module’s name. `llnl.util.lang.``has_method`(*cls*, *name*)[¶](#llnl.util.lang.has_method) `llnl.util.lang.``in_function`(*function_name*)[¶](#llnl.util.lang.in_function) True if the caller was called from some function with the supplied name, False otherwise. `llnl.util.lang.``index_by`(*objects*, **funcs*)[¶](#llnl.util.lang.index_by) Create a hierarchy of dictionaries by splitting the supplied set of objects on unique values of the supplied functions. Values are used as keys. For example, suppose you have four objects with attributes that look like this: ``` a = Spec(name="boost", compiler="gcc", arch="bgqos_0") b = Spec(name="mrnet", compiler="intel", arch="chaos_5_x86_64_ib") c = Spec(name="libelf", compiler="xlc", arch="bgqos_0") d = Spec(name="libdwarf", compiler="intel", arch="chaos_5_x86_64_ib") list_of_specs = [a,b,c,d] index1 = index_by(list_of_specs, lambda s: s.arch, lambda s: s.compiler) index2 = index_by(list_of_specs, lambda s: s.compiler) ``` `index1` now has two levels of dicts, with lists at the leaves, like this: ``` { 'bgqos_0' : { 'gcc' : [a], 'xlc' : [c] }, 'chaos_5_x86_64_ib' : { 'intel' : [b, d] } } ``` And `index2` is a single level dictionary of lists that looks like this: ``` { 'gcc' : [a], 'intel' : [b,d], 'xlc' : [c] } ``` If any element in funcs is a string, it is treated as the name of an attribute, and acts like getattr(object, name). 
So shorthand for the above two indexes would be: ``` index1 = index_by(list_of_specs, 'arch', 'compiler') index2 = index_by(list_of_specs, 'compiler') ``` You can also index by tuples by passing tuples: ``` index1 = index_by(list_of_specs, ('arch', 'compiler')) ``` Keys in the resulting dict will look like (‘gcc’, ‘bgqos_0’). `llnl.util.lang.``key_ordering`(*cls*)[¶](#llnl.util.lang.key_ordering) Decorates a class with extra methods that implement rich comparison operations and `__hash__`. The decorator assumes that the class implements a function called `_cmp_key()`. The rich comparison operations will compare objects using this key, and the `__hash__` function will return the hash of this key. If a class already has `__eq__`, `__ne__`, `__lt__`, `__le__`, `__gt__`, or `__ge__` defined, this decorator will overwrite them. | Raises: | `TypeError` – If the class does not have a `_cmp_key` method | `llnl.util.lang.``list_modules`(*directory*, ***kwargs*)[¶](#llnl.util.lang.list_modules) Lists all of the modules, excluding `__init__.py`, in a particular directory. Listed packages have no particular order. `llnl.util.lang.``match_predicate`(**args*)[¶](#llnl.util.lang.match_predicate) Utility function for making string matching predicates. Each arg can be a: * regex * list or tuple of regexes * predicate that takes a string. This returns a predicate that is true if: * any arg regex matches * any regex in a list or tuple of regexes matches. * any predicate in args matches. *class* `llnl.util.lang.``memoized`(*func*)[¶](#llnl.util.lang.memoized) Bases: `object` Decorator that caches the results of a function, storing them in an attribute of that function. `clear`()[¶](#llnl.util.lang.memoized.clear) Expunge cache so that self.func will be called again. `llnl.util.lang.``pretty_date`(*time*, *now=None*)[¶](#llnl.util.lang.pretty_date) Convert a datetime or timestamp to a pretty, relative date. 
| Parameters: | * **time** (*datetime* *or* *int*) – date to print prettily * **now** (*datetime*) – datetime for ‘now’, i.e. the date the pretty date is relative to (default is datetime.now()) | | Returns: | pretty string like ‘an hour ago’, ‘Yesterday’, ‘3 months ago’, ‘just now’, etc. | | Return type: | (str) | Adapted from <https://stackoverflow.com/questions/1551382>. `llnl.util.lang.``pretty_string_to_date`(*date_str*, *now=None*)[¶](#llnl.util.lang.pretty_string_to_date) Parses a string representing a date and returns a datetime object. | Parameters: | **date_str** (*str*) – string representing a date. This string might be in different format (like `YYYY`, `YYYY-MM`, `YYYY-MM-DD`) or be a *pretty date* (like `yesterday` or `two months ago`) | | Returns: | datetime object corresponding to `date_str` | | Return type: | (datetime) | `llnl.util.lang.``union_dicts`(**dicts*)[¶](#llnl.util.lang.union_dicts) Use update() to combine all dicts into one. This builds a new dictionary, into which we `update()` each element of `dicts` in order. Items from later dictionaries will override items from earlier dictionaries. | Parameters: | **dicts** (*list*) – list of dictionaries | Return: (dict): a merged dictionary containing combined keys and values from `dicts`. ##### llnl.util.link_tree module[¶](#module-llnl.util.link_tree) LinkTree class for setting up trees of symbolic links. *class* `llnl.util.link_tree.``LinkTree`(*source_root*)[¶](#llnl.util.link_tree.LinkTree) Bases: `object` Class to create trees of symbolic links from a source directory. LinkTree objects are constructed with a source root. Their methods allow you to create and delete trees of symbolic links back to the source tree in specific destination directories. Trees comprise symlinks only to files; directories are never symlinked to, to prevent the source directory from ever being modified. 
`find_conflict`(*dest_root*, *ignore=None*, *ignore_file_conflicts=False*)[¶](#llnl.util.link_tree.LinkTree.find_conflict) Returns the first file in dest that conflicts with src `find_dir_conflicts`(*dest_root*, *ignore*)[¶](#llnl.util.link_tree.LinkTree.find_dir_conflicts) `get_file_map`(*dest_root*, *ignore*)[¶](#llnl.util.link_tree.LinkTree.get_file_map) `merge`(*dest_root*, ***kwargs*)[¶](#llnl.util.link_tree.LinkTree.merge) Link all files in src into dest, creating directories if necessary. If ignore_conflicts is True, do not break when the target exists but rather return a list of files that could not be linked. Note that files blocking directories will still cause an error. `merge_directories`(*dest_root*, *ignore*)[¶](#llnl.util.link_tree.LinkTree.merge_directories) `unmerge`(*dest_root*, ***kwargs*)[¶](#llnl.util.link_tree.LinkTree.unmerge) Unlink all files in dest that exist in src. Unlinks directories in dest if they are empty. `unmerge_directories`(*dest_root*, *ignore*)[¶](#llnl.util.link_tree.LinkTree.unmerge_directories) ##### llnl.util.lock module[¶](#module-llnl.util.lock) *class* `llnl.util.lock.``Lock`(*path*, *start=0*, *length=0*, *debug=False*, *default_timeout=None*)[¶](#llnl.util.lock.Lock) Bases: `object` This is an implementation of a filesystem lock using Python’s lockf. In Python, `lockf` actually calls `fcntl`, so this should work with any filesystem implementation that supports locking through the fcntl calls. This includes distributed filesystems like Lustre (when flock is enabled) and recent NFS versions. Note that this is for managing contention over resources *between* processes and not for managing contention between threads in a process: the functions of this object are not thread-safe. A process also must not maintain multiple locks on the same file. `acquire_read`(*timeout=None*)[¶](#llnl.util.lock.Lock.acquire_read) Acquires a recursive, shared lock for reading. 
Read and write locks can be acquired and released in arbitrary order, but the POSIX lock is held until all local read and write locks are released. Returns True if it is the first acquire and actually acquires the POSIX lock, False if it is a nested transaction. `acquire_write`(*timeout=None*)[¶](#llnl.util.lock.Lock.acquire_write) Acquires a recursive, exclusive lock for writing. Read and write locks can be acquired and released in arbitrary order, but the POSIX lock is held until all local read and write locks are released. Returns True if it is the first acquire and actually acquires the POSIX lock, False if it is a nested transaction. `release_read`()[¶](#llnl.util.lock.Lock.release_read) Releases a read lock. Returns True if the last recursive lock was released, False if there are still outstanding locks. Does limited correctness checking: if a read lock is released when none are held, this will raise an assertion error. `release_write`()[¶](#llnl.util.lock.Lock.release_write) Releases a write lock. Returns True if the last recursive lock was released, False if there are still outstanding locks. Does limited correctness checking: if a write lock is released when none are held, this will raise an assertion error. *class* `llnl.util.lock.``LockTransaction`(*lock*, *acquire_fn=None*, *release_fn=None*, *timeout=None*)[¶](#llnl.util.lock.LockTransaction) Bases: `object` Simple nested transaction context manager that uses a file lock. This class can trigger actions when the lock is acquired for the first time and released for the last. If the `acquire_fn` returns a value, it is used as the return value for `__enter__`, allowing it to be passed as the `as` argument of a `with` statement. If `acquire_fn` returns a context manager, *its* `__enter__` function will be called in `__enter__` after `acquire_fn`, and its `__exit__` function will be called before `release_fn` in `__exit__`, allowing you to nest a context manager to be used along with the lock. 
Timeout for lock is customizable. *class* `llnl.util.lock.``WriteTransaction`(*lock*, *acquire_fn=None*, *release_fn=None*, *timeout=None*)[¶](#llnl.util.lock.WriteTransaction) Bases: [`llnl.util.lock.LockTransaction`](#llnl.util.lock.LockTransaction) LockTransaction context manager that does a write and releases it. *class* `llnl.util.lock.``ReadTransaction`(*lock*, *acquire_fn=None*, *release_fn=None*, *timeout=None*)[¶](#llnl.util.lock.ReadTransaction) Bases: [`llnl.util.lock.LockTransaction`](#llnl.util.lock.LockTransaction) LockTransaction context manager that does a read and releases it. *exception* `llnl.util.lock.``LockError`[¶](#llnl.util.lock.LockError) Bases: `Exception` Raised for any errors related to locks. *exception* `llnl.util.lock.``LockTimeoutError`[¶](#llnl.util.lock.LockTimeoutError) Bases: [`llnl.util.lock.LockError`](#llnl.util.lock.LockError) Raised when an attempt to acquire a lock times out. *exception* `llnl.util.lock.``LockPermissionError`[¶](#llnl.util.lock.LockPermissionError) Bases: [`llnl.util.lock.LockError`](#llnl.util.lock.LockError) Raised when there are permission issues with a lock. *exception* `llnl.util.lock.``LockROFileError`(*path*)[¶](#llnl.util.lock.LockROFileError) Bases: [`llnl.util.lock.LockPermissionError`](#llnl.util.lock.LockPermissionError) Tried to take an exclusive lock on a read-only file. *exception* `llnl.util.lock.``CantCreateLockError`(*path*)[¶](#llnl.util.lock.CantCreateLockError) Bases: [`llnl.util.lock.LockPermissionError`](#llnl.util.lock.LockPermissionError) Attempt to create a lock in an unwritable location. ##### llnl.util.multiproc module[¶](#module-llnl.util.multiproc) This implements a parallel map operation but it can accept more values than multiprocessing.Pool.apply() can. For example, apply() will fail to pickle functions if they’re passed indirectly as parameters. 
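The parallel map just described — one worker process per element, sidestepping `Pool.apply()`'s pickling limits — can be sketched with `multiprocessing.Process` and pipes. This is an assumed minimal version (it relies on fork-based process start, where closures need not be pickled), not the `llnl.util.multiproc` source:

```python
from multiprocessing import Pipe, Process


def spawn(f):
    """Wrap f so the child sends its result back through a pipe."""
    def fun(pipe, x):
        pipe.send(f(x))
        pipe.close()
    return fun


def parmap(f, elements):
    """Apply f to each element in its own process, preserving order."""
    pipes = [Pipe() for _ in elements]
    procs = [
        Process(target=spawn(f), args=(child, x))
        for (parent, child), x in zip(pipes, elements)
    ]
    for p in procs:
        p.start()
    # Receive before joining so a large result cannot fill the pipe
    # buffer and deadlock the child.
    results = [parent.recv() for parent, child in pipes]
    for p in procs:
        p.join()
    return results
```

Because each element gets a dedicated `Process`, the function object travels to the child via fork rather than via pickling, which is exactly the limitation of `Pool.apply()` the module intro mentions.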
`llnl.util.multiproc.``spawn`(*f*)[¶](#llnl.util.multiproc.spawn) `llnl.util.multiproc.``parmap`(*f*, *elements*)[¶](#llnl.util.multiproc.parmap) *class* `llnl.util.multiproc.``Barrier`(*n*, *timeout=None*)[¶](#llnl.util.multiproc.Barrier) Bases: `object` Simple reusable semaphore barrier. Python 2.6 doesn’t have multiprocessing barriers so we implement this. See <http://greenteapress.com/semaphores/downey08semaphores.pdf>, p. 41. `wait`()[¶](#llnl.util.multiproc.Barrier.wait) ##### Module contents[¶](#module-llnl.util) ### Module contents[¶](#module-llnl) Indices and tables[¶](#indices-and-tables) === * [Index](genindex.html) * [Module Index](py-modindex.html) * [Search Page](search.html)
**Wiener Linguistische Gazette**

Review

**Title of the reviewed work**

_Reviewed by: <NAME>/<NAME>_

Offprint from: _Wiener Linguistische Gazette_ (WLG) 80 (2017): 1-4

**Owner, editor, and publisher:** Universität Wien, Institut für Sprachwissenschaft, Sensengasse 3a, 1090 Wien, Österreich

**Editorial team:** <NAME> (General Linguistics), <NAME>, <NAME> & <NAME> (Applied Linguistics), <NAME> (Historical Linguistics)

**Contact:** <EMAIL>

**Homepage:** [http://wlg.univie.ac.at](http://wlg.univie.ac.at)

**ISSN:** 2224-1876

**NBN:** BI,078,1063

The Wiener Linguistische Gazette appears at irregular intervals in open-access format. All issues from no. 72 (2005) onward are available online. This work is licensed under the Creative Commons license CC BY-NC-ND 4.0 (Attribution - NonCommercial - NoDerivatives).

Review

**Title of the reviewed work**

_Reviewed by: <NAME>*/<NAME>+_

Template: [http://wlg.univie.ac.at/fileadmin/user_upload/p_wlg/802017/template-wlg-review.pdf](http://wlg.univie.ac.at/fileadmin/user_upload/p_wlg/802017/template-wlg-review.pdf)

_Wiener Linguistische Gazette_ (WLG), Institut für Sprachwissenschaft, Universität Wien, issue 80 (2017): 1-4

A stylish motto (source)

This is dummy text ("Blindtext") for testing text output. Whoever reads this text has only themselves to blame. The text merely indicates the grey value of the typeface. Is that really so? Is it all the same whether I write "Dies ist ein Blindtext" or "Huardest gefburn"? Kjift - not at all! A dummy text gives me important information: against it I measure the legibility of a typeface and its overall impression, see how harmoniously the letterforms stand next to one another, and check how wide or narrow the face runs. A dummy text should contain as many different letters as possible and be set in the original language. It need not make sense, but it should be readable. Foreign-language filler such as "Lorem ipsum" does not serve the actual purpose, since it conveys a false impression.

Published on 31 March 2023

* First list item, level 1
  * First list item, level 2
    * First list item, level 3
    * Second list item, level 3
    * Third list item, level 3
    * Fourth list item, level 3
    * Fifth list item, level 3
  * Second list item, level 2
  * Third list item, level 2
  * Fourth list item, level 2
  * Fifth list item, level 2
* Second list item, level 1
* Third list item, level 1
* Fourth list item, level 1
* Fifth list item, level 1
Chanjo Documentation Release 3.2.0 <NAME> December 22, 2015

Contents
2.1 Installation
2.2 Introduction & Demo
2.3 Interface Walkthrough
2.4 Public API
2.5 Developer’s Guide
2.6 Release notes
2.7 FAQ

CHAPTER 1 Simple Sambamba integration There’s a new kid on the BAM processing block: Sambamba! You can easily load output into a Chanjo database for further coverage exploration: $ sambamba depth region -L exons.bed -t 10 -t 20 alignment.bam > exons.coverage.bed $ chanjo load exons.coverage.bed $ chanjo calculate region 1 861321 871275 --sample ADM980A2 {"completeness_10": 100, "completeness_20": 87.123, "mean_coverage": 47.24234} To learn more about Chanjo and how you can use it to gain a better understanding of sequencing coverage you can do no better than to read on. CHAPTER 2 Contents 2.1 Installation 2.1.1 Guide Chanjo targets Unix and Python 2.7+/3.2+. Since chanjo relies on Sambamba for BAM processing, it’s now very simple to install. I do still recommend Miniconda, it’s a slim distribution of Python with a very nice package manager. $ pip install chanjo Vagrant dev/testing environment You can also set up a local development environment in a virtual machine through Vagrant. This will handle the install for you automatically through Ansible provisioning! After downloading/cloning the repo: $ vagrant up $ vagrant ssh 2.1.2 Sambamba You will also need a copy of Sambamba which you can simply grab from their GitHub repo where they serve up static binaries - just drop the latest in your path and you are good to go! $ wget -P /tmp/ https://github.com/lomereiter/sambamba/releases/download/v0.5.8/sambamba_v0.5.8_linu $ tar xjfv /tmp/sambamba_v0.5.8_linux.tar.bz2 -C /tmp/ $ mv /tmp/sambamba_v0.5.8 ~/bin/sambamba $ chmod +x ~/bin/sambamba 2.1.3 Optional dependencies If you plan on using MySQL for your SQL database you also need a SQL adapter. 
My recommendation is ‘pymysql’. It’s written in pure Python and works on both version 2.x and 3.x. $ pip install pymysql 2.1.4 Getting the code Would you like to take part in the development or tweak the code for any other reason? In that case, you should follow these simple steps and download the code from the GitHub repo. $ git clone https://github.com/robinandeer/chanjo.git $ cd chanjo $ pip install --editable . 2.2 Introduction & Demo 2.2.1 Concise overview Current release: Radical Red Panda (3.2.0) The goals of Chanjo are quite simple: 1. Integrate seamlessly with sambamba depth region output 2. Break down coverage to intuitive levels of abstraction: exons, transcripts, and genes 3. Enable explorative coverage investigations across samples and genomic regions 2.2.2 Demo The rest of this document will guide you through a short demo that will cover how to use Chanjo from the command line. Demo files First we need some files to work with. Chanjo comes with some pre-installed demo files we can use. $ chanjo demo && cd chanjo-demo This will create a new folder (chanjo-demo) in your current directory and fill it with the example files you need. Note: You can name the new folder anything you like but it must not already exist! Setup and configuration The first task is to create a config file (chanjo.yaml) and prepare the database. Chanjo will walk you through setting it up by running: $ chanjo init $ chanjo db setup Note: Chanjo uses project-level config files by default. This means that it will look for a chanjo.yaml file in the current directory where you execute your command. You can also point to a different config file using the chanjo -c /path/to/chanjo.yaml option. Defining interesting regions One important thing to note is that Chanjo considers coverage across exonic regions of the genome. It’s perfectly possible to compose your own list of intervals. 
Just make sure to follow the BED conventions (http://genome.ucsc.edu/FAQ/FAQformat.html#format1). You then add a couple of additional columns that define relationships between exons and transcripts and transcripts and genes: If an exon belongs to multiple transcripts you define a list of ids and an equal number of gene identifiers to match. Linking exons/transcripts/genes Let’s tell Chanjo which exons belong to which transcripts and which transcripts belong to which genes. It’s fine to use the output from Sambamba as long as the two columns after “strand” are present in the file. $ chanjo link sample1.coverage.bed Loading annotations After running sambamba depth region you can take the output and load it into the database. Let’s also add a group identifier to indicate that the sample is related to some other samples. $ for file in *.coverage.bed; do echo "${file}"; chanjo load "${file}"; done Extracting information We now have some information loaded for a few samples and we can now start exploring what coverage looks like! The output will be in the handy JSON lines format. $ chanjo calculate mean sample1 { "metrics": { "completeness_20": 90.38, "completeness_10": 90.92, "completeness_100": 70.62, "mean_coverage": 193.85 }, "sample_id": "sample1" } $ chanjo calculate gene FAP MUSK { "genes": { "MUSK": { "completeness_20": 100.0, "completeness_10": 100.0, "completeness_100": 99.08, "mean_coverage": 370.36 }, "FAP": { "completeness_20": 97.24, "completeness_10": 100.0, "completeness_100": 50.19, "mean_coverage": 151.63 } }, "sample_id": "sample5" } [...] 
$ chanjo calculate region 11 619304 619586 { "completeness_20": 100.0, "completeness_10": 100.0, "completeness_100": 100.0, "mean_coverage": 258.24 } $ chanjo calculate region 11 619304 619586 --per exon { "metrics": { "completeness_20": 100.0, "completeness_10": 100.0, "completeness_100": 100.0, "mean_coverage": 223.3904 }, "exon_id": "11-619305-619389" } { "metrics": { "completeness_20": 100.0, "completeness_10": 100.0, "completeness_100": 100.0, "mean_coverage": 284.23 }, "exon_id": "11-619473-619586" } Note: So what is this “completeness”? Well, it’s pretty simple; the percentage of bases with at least “sufficient” (say, 10x) coverage. 2.2.3 What’s next? The SQL schema has been designed to be a powerful tool on its own for studying coverage. It lets you quickly aggregate metrics across multiple samples and can be used as a general coverage API for accompanying tools. One example of such a tool is Chanjo-Report, a coverage report generator for Chanjo output. A report could look something like this (click for the full PDF): 2.3 Interface Walkthrough 2.3.1 Table of Contents This page provides an extended look into each of the six subcommands that make up the command line interface of Chanjo. In accordance with UNIX philosophy, they each try to do only one thing really well. 1. chanjo 2. init 3. load 4. link 5. calculate 6. db 2.3.2 chanjo The base command doesn’t do anything on its own. However, there are a few global options accessible at this level. For example, to log debug information to a file called ./chanjo.log use this command: $ chanjo -v --log ./chanjo.log [SUBCOMMAND HERE] This is also where you can define what database to connect to using the -d/--database option. Chanjo will otherwise use the database defined in the config. To learn more about the global Chanjo options, run chanjo --help. 
2.3.3 chanjo init

Walks you through the setup of a Chanjo config file and optionally initializes a new database. With a config you won't have to worry about forgetting to specify default options on the command line. The format of the config is .yaml.

$ chanjo init

The generated config file will be stored in the current working directory. This is also where Chanjo will automatically look for it. If you want to share a config file between projects it's possible to point to a global file with the --config option.

2.3.4 chanjo load

Chanjo takes advantage of the power behind SQL databases by providing a way to store coverage annotations in SQL. You load coverage annotations from sambamba depth region output. It's possible to pipe directly to this command:

$ sambamba depth region -L exons.bed alignment.bam | chanjo load

Each line is added independently so it doesn't really matter if the file is sorted. Most of the information is already stored in the BED output file from Sambamba, but to link multiple samples into logically related groups you can additionally specify a group identifier when calling the load command:

$ chanjo load --group group1 exons.coverage.bed

2.3.5 chanjo link

Another main benefit of Chanjo is the ability to get coverage for related genomic elements: exons, transcripts, and genes. The "link" step only needs to be run once (at any time) and accepts a similar BED file to "load".

$ chanjo link exons.coverage.bed

Each line is added independently so it doesn't really matter if the file is sorted.

2.3.6 chanjo calculate

This is where the exciting exploration happens! Chanjo exposes a few powerful ways to investigate coverage once it has been loaded into the database.

mean

Extract basic coverage stats across all exons for samples.
$ chanjo calculate mean
{"completeness_10": 78.93, "mean_coverage": 114.21, "sample_id": "sample1"}
{"completeness_10": 45.92, "mean_coverage": 78.42, "sample_id": "sample2"}

gene

Extract metrics for particular genes. This requires that the exons have been linked using the "link" command to the related transcripts and genes. It should be noted that this information is only an approximation since we don't take overlapping exons into consideration.

$ chanjo calculate gene ADK
{"ADK": {"completeness_10": 78.93, "mean_coverage": 114.21}, "sample_id": "sample1"}

The calculation is based on simple averages across all exons related to the gene.

region

Metrics can also be extracted for a continuous interval of exons. This enables some interesting opportunities for exploration. The base command reports average metrics for all included exons across all samples:

$ chanjo calculate region 1 122544 185545
{"completeness_10": 50.45, "mean_coverage": 56.12}

We can split this up into each individual exon as well for more detail:

$ chanjo calculate region 1 122544 185545 --per exon
{"completeness_10": 10.12, "mean_coverage": 12.00, "exon_id": "exon1"}
{"completeness_10": 90.76, "mean_coverage": 114.98, "exon_id": "exon2"}

We can of course also filter the results down to individual samples as well:

$ chanjo calculate region 1 122544 185545 --per exon --sample sample1
{"completeness_10": 23.56, "mean_coverage": 34.05, "exon_id": "exon1"}
{"completeness_10": 91.86, "mean_coverage": 157.02, "exon_id": "exon2"}

2.3.7 chanjo db

Enables you to quickly perform housekeeping tasks on the database.

setup

Set up and tear down a Chanjo database.

remove

Remove all traces of a sample from the database.

$ chanjo db remove sample1

2.3.8 Closing words

The command line interface is really just a bunch of shortcuts that simplifies the use of Chanjo in a UNIX environment.
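Because every calculate command prints one JSON object per line, its output slots neatly into UNIX pipelines and scripts. A minimal Python sketch of post-processing such output (the record shapes copy the "calculate mean" output above; the "best sample" selection is only an illustration, not a Chanjo feature):

```python
import json

def parse_json_lines(text):
    """Parse JSON-lines output (one JSON object per line) into a list of dicts."""
    return [json.loads(line) for line in text.splitlines() if line.strip()]

# Two records mirroring the "chanjo calculate mean" output shown above.
output = "\n".join([
    '{"completeness_10": 78.93, "mean_coverage": 114.21, "sample_id": "sample1"}',
    '{"completeness_10": 45.92, "mean_coverage": 78.42, "sample_id": "sample2"}',
])
records = parse_json_lines(output)

# Pick the sample with the highest mean coverage.
best = max(records, key=lambda record: record["mean_coverage"])
print(best["sample_id"])  # → sample1
```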
To customize your particular use of Chanjo you would probably want to look into the API Reference.

2.4 Public API

This is the public API documentation. The functions and classes listed below exclusively make up the code scope which is guaranteed to stay consistent. If you plan to make use of the Public API it would be a good idea to also check out the Developer's Guide that covers some additional implementation details.

Chanjo exclusively uses unicode strings throughout the interface. It is therefore important to always specify 'utf-8' encoding when e.g. reading files from the OS. In Python 2:

>>> import io
>>> handle = io.open('./LICENSE', encoding='utf-8')
>>> next(handle)
u'The MIT License (MIT)\n'

2.4.1 Chanjo coverage store module

The central API for Chanjo SQL databases. Built on SQLAlchemy. From here you have access to the contents of the database (models) and the query interface that SQLAlchemy exposes.

class chanjo.store.Store(uri=None, debug=False)
    SQLAlchemy-based database object. Bundles functionality required to set up and interact with various related genomic interval elements.

    Changed in version 2.1.0: Lazy-loadable, all "init" arguments optional.

    Examples

    >>> chanjo_db = Store('data/elements.sqlite3')
    >>> chanjo_db.set_up()

    Note: For testing purposes use :memory: as the path argument to set up an in-memory (temporary) database.

    Parameters
        • uri (Optional[str]) – path/URI to the database to connect to
        • debug (Optional[bool]) – whether to output logging information

    uri
        str – path/URI to the database to connect to
    engine
        class – SQLAlchemy engine, defines what database to use
    session
        class – SQLAlchemy ORM session, manages persistence
    query
        method – SQLAlchemy ORM query builder method
    classes
        dict – bound ORM classes

    add(elements)
        Add one or more new elements and commit the changes. Chainable.
        Parameters elements (orm/list) – new ORM object instance or list of such
        Returns self for chainability
        Return type Store

    connect(db_uri, debug=False)
        Configure connection to a SQL database.

        New in version 2.1.0.

        Parameters
            • db_uri (str) – path/URI to the database to connect to
            • debug (Optional[bool]) – whether to output logging information

    create(class_id, *args, **kwargs)
        Create a new instance of an ORM element. If attributes are supplied as a tuple they must be in the correct order. Supplying a dict doesn't require the attributes to be in any particular order.

        Parameters
            • class_id (str) – choice between "superblock", "block", "interval"
            • *args (tuple) – list the element attributes in the correct order
            • **kwargs (dict) – element attributes in whatever order you like

        Returns new ORM instance object
        Return type orm

    dialect
        Return database dialect name used for the current connection. Dynamic attribute.

        Returns name of dialect used for database connection
        Return type str

    find(klass_id, query=None, attrs=None)
        Fetch one or more elements based on the query. If the 'query' parameter is a string, find() will fetch one element, just like get. If query is a list it will match element ids to items in that list and return a list of elements. If 'query' is None all elements of that class will be returned.

        Parameters
            • klass_id (str) – type of element to find
            • query (str/list, optional) – element id(s)
            • attrs (list, optional) – list of columns to fetch

        Returns element(s) from the database
        Return type object/list

    get(typ, type_id)
        Fetch a specific element or ORM class. Calls itself recursively when asked to fetch an element.

        Parameters
            • typ (str) – element key or 'class'
            • type_id (str) – element id or ORM model id

        Returns element or ORM class
        Return type model

        Examples

        >>> gene = db.get('gene', 'GIT1')

    get_or_create(model, **kwargs)
        Get or create a record in the database.

    save()
        Manually persist changes made to various elements.
        Chainable.

        Changed in version 2.1.2: Flush session before commit.

        Returns self for chainability
        Return type Store

    set_up()
        Initialize a new database with the default tables and columns.

        Returns self
        Return type Store

    tear_down()
        Tear down a database (tables and columns).

        Returns self
        Return type Store

2.5 Developer's Guide

The developer guide is directed at fellow coders. You should read this if:

• you want to contribute to the development of Chanjo
• you want to develop a Chanjo plugin that hooks into one of the entry points

2.5.1 Contributing

Currently the best resource on this topic is available at GitHub, in the CONTRIBUTING.md file.

2.5.2 Installation/dev environment

Check out the installation guide to learn how you can set up a Vagrant environment which is ready to start development in no time!

2.5.3 Develop a plugin

Chanjo exposes a couple of plugin interfaces using setuptools entry points. When publishing a new Chanjo plugin you should register with the corresponding entry point in your setup.py script. It might look something like this:

setup(
    name='Chanjo-PluginName',
    ...
    entry_points={
        'chanjo.subcommands.3': [
            'plugin_name = chanjo_plugin.cli:main',
        ],
    },
    ...
)

The setup above would register a new subcommand to the command line interface as chanjo plugin. When you write a Chanjo plugin, name it something like "Chanjo-MyPlugin" to make it easy to find using pip search.

Note: It's absolutely OK for plugins to depend on Chanjo itself or any Chanjo dependencies.

2.5.4 New subcommand

Setuptools entry point: chanjo.subcommands.3

You can write a plugin that will show up as an additional subcommand when you type chanjo on the command line. The implementation should use the Click command line framework. As long as you stick to Click, you can do pretty much whatever you want. Let your imagination run free!
The only requirement is that it should tie into some form of Chanjo functionality, like generating a report from a populated SQL database.

2.5.5 License

The MIT License (MIT)

Copyright (c) 2014, <NAME>

Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the "Software"), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions:

The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software.

THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.

2.6 Release notes

Current release: Radical Red Panda (3.2.0)

# Change Log
All notable changes to this project will be documented in this file. This project adheres to [Semantic Versioning](http://semver.org/).
## [3.2.0] - 2015-12-22
### Changed
- diagnostic yield function now accepts "exon_ids" explicitly and requires "sample_id" to be given

## [3.1.1] - 2015-11-19
### Fixed
- fixed bug in SQL relationship between gene and transcript

## [3.1.0] - 2015-11-16
### Added
- sex subcommand for guessing sex from BAM alignment, see #158

## [3.0.2] - 2015-11-04
### Fixed
- chanjo API: restrict converter queries to distinct/unique rows

## [3.0.1] - 2015-10-26
### Fixed
- import from root init, use root logger

## [3.0.0] - 2015-10-19
Code name: "Radical Red Panda"

This is a major new release. Please refer to the documentation for more details on what has been updated.

### Added
- Add functionality to run sambamba from chanjo
- Add calculate command to get basic statistics from database
- Add link command to specifically link genomic elements
- Add db command to interface and perform maintenance on database

### Removed
- Support for older versions of Chanjo

### Fixed
- Changed way of logging
- Added proper logstream handler

## [2.3.2] - 2015-03-21
### Fixed
- Refactor EntryPointsCLI to allow for external subcommands
- Updated documentation to reflect Chanjo 2.x CLI

## [2.3.1] - 2015-03-05
### Changed
- use custom "multi command class" to load dynamic entry point plugins

### Fixed
- some pylint improvements

## [2.3.0]
### Added
- New logging module accessible from the command line (-vvv)
- Add SQL schema drawing to "Code walkthrough"

## [2.2.1]
### Fixed
- Fix incorrectly referenced ID field in join statement (block_stats)

## [2.2.0]
### Added
- Read sample ID from BAM header
- Validate BED format in "annotate"
- Enable getting config values from "chanjo.toml" (chanjo config annotate.cutoff)

### Fixed
- Fix issue with hardlinks in Vagrant shared folders (setup.py)
- Change Travis CI setup using official guidelines
- Fix typography in docs
- Use io.open instead of codecs.open
- Use .sqlite3 extension for SQLite databases
- Better error message when overwriting existing databases
## [2.1.3]
### Fixed
- Fix mistake in "import" subcommand so it's finally working!

## [2.1.2]
### Fixed
- Fix bug in "import" where the program didn't flush the session before committing.
- Change "build_interval_data" to only create model without adding to the session.
- Use "scoped_session" with "sessionmaker".
- Flush session before each commit call in chanjo.Store.save()

## [2.1.1]
### Fixed
- Fix interval assertion that didn't allow intervals to start and end on the same chromosomal position.

## [2.1.0]
### Added
- Add lazy loading of ``chanjo.Store`` through new ``chanjo.Store.connect`` method
- Much improved documentation of changes between releases

### Fixed
- Fix case where "demo" subcommand fails (``__package__`` not set)

## [2.0.2]
### Fixed
- Rename misspelled method (non-breaking): ``chanjo.Store.tare_down`` to ``chanjo.Store.tear_down``
- Fix some CSS selectors in theme
- Reorder API references in API docs

## [2.0.1]
### Fixed
- Fixes broken symlinked demo/fixture files
- Adds validation to check that stdin isn't empty
- Fixes link to logo on front page
- Adds <NAME> as collaborator
- Adds link to Master's Thesis paper for reference in README
- Adds more FAQ

## [2.0.0]
Code name: "<NAME>"

Being a major release, expect previous scripts written for Chanjo 1.x to be incompatible with Chanjo 2.x.

### Added
- New built-in "demo" subcommand in the CLI
- New public setuptools entry point for Chanjo plugins (CLI subcommands)
- New official public Python API (stable until 3.x release). Read more in the new [API documentation][api-docs].
- New "sex-checker" bonus command to guess gender from BAM alignment.

• Command line interface updates
  – --out option removed across CLI. Use >| to redirect STDOUT instead.
  – --prepend is now known as --prefix
  – --db and --dialect must be supplied directly after "chanjo" on the command line (not after the subcommand). Like: chanjo --db ./coverage.sqlite import annotations.bed.
  – --extend-by is now --extendby

• Config file format has changed from JSON to [TOML][toml]. It's a more readable format (think INI) that also supports comments!
• Improves BED-format compliance. Chanjo will now expect the "score" field to be in position 5 (and strand in position 6). The Chanjo specific fields start from position 7.
• Major internal code restructuring. Essentially everything is built as plugins/self-contained components. Since no official Python API existed pre Chanjo 2, I won't go into any details here.
• Improves documentation.
• Last but not least, Chanjo will now code name releases according to animals in the Musteloidea superfamily :)
• Introduces a new compat module to better support Python 2+3.
• Trades command line framework from "docopt" to "click" to build more flexible nested commands.
• Adds a first hand BaseInterval object to unify handling of intervals inside Chanjo.
• BamFile no longer requires numpy as a hard dependency. You still likely want to keep it though for performance reasons.

## [1.0.0]
Code name: "<NAME>"

First and current stable version of Chanjo.

## [0.6.0]
### Added
- BREAKING: changes group_id field to string instead of int.
- Exposes the threshold option to the CLI for optimizing BAM-file reading with SAMTools, fixes #58

## [0.5.0]
### Fixed
- UPDATE: Small updates to the command line interface
- UPDATE: New tests for new functions

### Added
- NEW: MySQL support added
- CHANGE: A lot of internal restructuring from classes to functions
- IMPROVEMENT: New structure seems to significantly improve speed
- UPDATE: New documentation covering new features/structure

## [0.4.0]
- NEW: Table with Sample meta-data
- UPDATE: CLI creates sample entries
- UPDATE: SQL structure in docs
- UPDATE: Updated tests
- UPDATE: included test data (MANIFEST.in)
- more on this later...
## [0.3.0]
- NEW: API - annotate: splice sites option
- NEW: CLI - annotate: splice sites option
- UPDATE: Much improved documentation
- UPDATE: Modern setuptools only installation
- UPDATE: New cleaner banner
- NEW: travis integration

## [0.2.0]
New CLI!

• New Command Line: "chanjo" replaces "chanjo-autopilot"
• Ability to save a temporary JSON file when running Chanjo in parallel (avoids writing to SQLite in several instances)
• New command line option: peeking into a database
• New command line option: building a new SQLite database skeleton
• New command line option: import temporary JSON files
• New command line option: reading coverage from any interval from BAM-file
• Many small bugfixes and minor improvements
• New dependency: path.py

[api-docs]: https://chanjo.readthedocs.org/en/latest/api.html
[toml]: https://github.com/toml-lang/toml

2.7 FAQ

2.7.1 Why doesn't Chanjo work on my remote server?

Chanjo complains when trying to import annotations to my SQLite database. To get to the point, it has to do with the NFS filesystem (Network File System). Some more complex SQL queries simply aren't compatible with NFS. So what can you do?

1. Switch to a MySQL database, which should work fine.
2. Only process the data to JSON files with the "--json" flag. Then on a different computer you could import the annotations.

I'm afraid I can't offer any other solutions at this point.

2.7.2 Error: (2006, 'MySQL server has gone away')

You need to change a setting in the "my.cnf" file. More information here.

• Known limitations
• Parallelization (postpone import) -> Usage examples
• NFS file systems -> Usage examples
• Troubleshooting

2.7.3 I can't overwrite existing files using Chanjo!

As of the 2.0 release, Chanjo completely relies on UNIX style output redirection for writing to files. You might, wisely, be using set -o noclobber in your .bashrc.
This raises an error in UNIX if you try to overwrite existing files by output redirection. The way to force an overwrite is by using a special syntax:

$ echo two >| afile
$ cat afile
two

2.7.4 How does Chanjo handle genes in pseudoautosomal regions (X+Y)?

A few genes are present on both sex chromosomes (X+Y). Because the chromosomes are treated as separate entities, Chanjo also treats these genes as separate entities. To keep them separated in the SQL database, the default "ccds" converter adapter will prefix their names with "X-" and "Y-" respectively.

Note: It's important that all converter adapters find some consistent way of handling elements in these tricky regions.
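The prefixing scheme described above is simple to mirror in your own post-processing code. A sketch (the function name is ours, not Chanjo's; SHOX is used here merely as a well-known example of a gene found on both X and Y):

```python
def disambiguate(chromosome, gene_name):
    """Prefix gene names on the sex chromosomes, mirroring the "ccds"
    adapter's "X-"/"Y-" naming scheme for pseudoautosomal genes."""
    if chromosome in ("X", "Y"):
        return "{}-{}".format(chromosome, gene_name)
    return gene_name

print(disambiguate("X", "SHOX"))  # → X-SHOX
print(disambiguate("Y", "SHOX"))  # → Y-SHOX
print(disambiguate("10", "ADK"))  # → ADK (autosomal genes are left alone)
```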
Package ‘fairsubset’

October 13, 2022

Type Package
Title Choose Representative Subsets
Version 1.0
Date 2020-08-14
Author <NAME>
Maintainer <NAME> <<EMAIL>>
Description Allows user to obtain subsets of columns of data or vectors within a list. These subsets will match the original data in terms of average and variation, but have a consistent length of data per column. It is intended for use on automated data generation which may not always output the same N per replicate or sample.
URL https://pubmed.ncbi.nlm.nih.gov/31583263/
License GPL-3
Imports matrixStats, stats
RoxygenNote 7.1.1
Encoding UTF-8
Suggests knitr, rmarkdown
VignetteBuilder knitr
NeedsCompilation no
Repository CRAN
Date/Publication 2020-09-17 08:50:08 UTC

fairsubset

Description

Allows user to obtain subsets of columns of data or vectors within a list. These subsets will match the original data in terms of average and variation, but have a consistent length of data per column. It is intended for use on automated data generation which may not always output the same N per replicate or sample.

Usage

fairSubset(
  input_list,
  subset_setting = "mean",
  manual_N = NULL,
  random_subsets = 1000
)

Arguments

input_list      A list, data frame, or matrix. If matrix or data frame, columns should represent each sample's data.
subset_setting  Choose from c("mean", "median", "ks"). Mean or median will use these averages to choose the best subset. "ks" will use the Kolmogorov-Smirnov test to choose the best subset. Defaults to "mean".
manual_N        To manually choose how many data points should be in each sample, enter an integer value here. Otherwise, fairSubset chooses the length of the sample with the most data. Defaults to NULL.
random_subsets  To manually choose how many random subsets should be used to choose the best subset, enter an integer value here. Defaults to 1000.

Value

Returns a list.
$best_subset is a data.frame containing data best representative of original data, given the parameters chosen for fairsubset.

$worst_subset is a data.frame containing data as far from the original as observed in all randomly chosen subsets. It is used solely as a comparator for the worst case scenario from randomly choosing subsets.

$report is a data.frame of averages and variation regarding original data, best subset, and worst subset.

$warning is a character string. If != "", it represents known errors.

Author(s)

<NAME>

Examples

input_list <- list(a = stats::rnorm(100, mean = 3, sd = 2),
                   b = stats::rnorm(50, mean = 5, sd = 5),
                   c = stats::rnorm(75, mean = 2, sd = 0.5))
fairSubset(input_list, subset_setting = "mean", manual_N = 10, random_subsets = 1000)$report
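fairsubset itself is an R package, but the selection strategy it describes (score many random candidate subsets and keep the one whose average best matches the full data) is easy to illustrate. A rough, language-neutral sketch in Python of the "mean" setting for a single vector (all names here are ours; this is the idea, not the package's actual implementation):

```python
import random

def fair_subset(values, n, random_subsets=1000, seed=0):
    """Pick the n-point subset whose mean best matches the full data's mean,
    by scoring many random candidate subsets (cf. subset_setting = "mean")."""
    rng = random.Random(seed)  # fixed seed for reproducibility in this sketch
    target = sum(values) / len(values)
    best, best_error = None, float("inf")
    for _ in range(random_subsets):
        candidate = rng.sample(values, n)
        error = abs(sum(candidate) / n - target)
        if error < best_error:
            best, best_error = candidate, error
    return best

data = [1, 2, 3, 4, 5, 6, 7, 8, 9, 10]  # mean 5.5
subset = fair_subset(data, n=4)          # a size-4 subset with mean near 5.5
```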
* some easier ways for formatting list and theorem environments are provided while people can still use amsthm.sty package;
* natbib.sty is the main citation processing package which can comprehensively handle all kinds of citations and works perfectly with hyperref.sty in combination with hypernat.sty;
* long title pages are processed correctly in preprint and final formats.

## 3 Installation

The package is available at the author resources page at Elsevier ([http://www.elsevier.com/locate/latex](http://www.elsevier.com/locate/latex)). It can also be found in any of the nodes of the Comprehensive TeX Archive Network (ctan), one of the primary nodes being [http://tug.ctan.org/tex-archive/macros/latex/contrib/elsarticle/](http://tug.ctan.org/tex-archive/macros/latex/contrib/elsarticle/). Please download elsarticle.dtx, which is a composite class with documentation, and elsarticle.ins, which is the LaTeX installer file. When we compile elsarticle.ins with LaTeX it provides the class file elsarticle.cls by stripping off all the documentation from the *.dtx file. The class may be moved or copied to a place, usually $TEXMF/tex/latex/elsevier/, or a folder which will be read by LaTeX during document compilation. The TeX file database needs to be updated after moving/copying the class file. Usually, we use commands like mktexlsr or texhash depending upon the distribution and operating system.

## 4 Usage

The class should be loaded with the command:

\documentclass[<options>]{elsarticle}

where the options can be the following:

preprint  default option which formats the document for submission to Elsevier journals.

review  similar to the preprint option, but increases the baselineskip to facilitate easier review process.

1p  formats the article to the look and feel of the final format of model 1+ journals. This is always single column style.

3p  formats the article to the look and feel of the final format of model 3+ journals.
If the journal is a two column model, use twocolumn option in combination.

5p  formats for model 5+ journals. This is always of two column style.

authoryear  author-year citation style of natbib.sty. If you want to add extra options of natbib.sty, you may use the options as comma delimited strings as arguments to the \biboptions command. An example would be:

\biboptions{longnamesfirst,angle,semicolon}

number  numbered citation style. Extra options can be loaded with the \biboptions command.

sort&compress  sorts and compresses the numbered citations. For example, citation [1,2,3] will become [1-3].

longtitle  if front matter is unusually long, use this option to split the title page across pages with the correct placement of title and author footnotes in the first page.

times  loads txfonts.sty, if available in the system, to use Times and compatible math fonts.

reversenotenum  Use alphabets as author-affiliation linking labels and use numbers for author footnotes. By default, numbers will be used as author-affiliation linking labels and alphabets for author footnotes.

lefttitle  To move title and author/affiliation block to flushleft. centertitle is the default option which produces center alignment.

endfloat  To place all floats at the end of the document.

nonatbib  To unload natbib.sty.

doubleblind  To hide author name, affiliation, email address etc. for double blind refereeing purpose.

All options of article.cls can be used with this document class. The default options loaded are a4paper, 10pt, oneside, onecolumn and preprint.

## 5 Frontmatter

There are two types of frontmatter coding: (1) each author is connected to an affiliation with a footnote marker, hence all authors are grouped together and affiliations follow; (2) authors of the same affiliation are grouped together and the relevant affiliation follows this group. An example of coding the first type is provided below.
\title{This is a specimen title\tnoteref{t1,t2}}
\tnotetext[t1]{This document is the results of the research
  project funded by the National Science Foundation.}
\tnotetext[t2]{The second title footnote which is a longer
  text matter to fill through the whole text width and
  overflow into another line in the footnotes area of the
  first page.}

\author[1]{<NAME>\corref{cor1}%
  \fnref{fn1}}
\ead{<EMAIL>}

\author[2]{CV Radhakrishnan\fnref{fn2}}
\ead{<EMAIL>}

\author[3]{CV Rajagopal\fnref{fn1,fn3}}
\ead[url]{www.stmdocs.in}

\cortext[cor1]{Corresponding author}
\fntext[fn1]{This is the first author footnote.}
\fntext[fn2]{Another author footnote, this is a very long
  footnote and it should be a really long footnote. But this
  footnote is not yet sufficiently long enough to make two
  lines of footnote text.}
\fntext[fn3]{Yet another author footnote.}

\affiliation[1]{organization={Elsevier B.V.},
  addressline={Radarweg 29},
  postcode={1043 NX},
  city={Amsterdam},
  country={The Netherlands}}

\affiliation[2]{organization={Sayahna Foundation},
  addressline={JWRA 34, Jagathy},
  city={Trivandrum},
  postcode={695014},
  country={India}}

\affiliation[3]{organization={STM Document Engineering Pvt Ltd.},
  addressline={Mepukada, Malayinkil},
  city={Trivandrum},
  postcode={695571},
  country={India}}

The output of the above TeX source is given in Clips 1 and 2. The header portion or title area is given in Clip 1 and the footer area is given in Clip 2.

Most of the commands such as \title, \author, \affiliation are self explanatory.
Various components are linked to each other by a label-reference mechanism; for instance, the title footnote is linked to the title with a footnote mark generated by referring to the \label string of the \tnotetext. We have used similar commands such as \tnoteref (to link title note to title), \corref (to link corresponding author text to corresponding author) and \fnref (to link footnote text to the relevant author names). TeX needs two compilations to resolve the footnote marks in the preamble part.

Given below are the syntax of the various note marks and note texts, where <label(s)> can be either one or more comma delimited label strings. The optional arguments to the \author command hold the ref label(s) of the address(es) to which the author is affiliated, while each \affiliation command can have an optional argument of a label. In the same manner, \tnotetext, \fntext and \cortext take optional arguments as their respective labels and note text as their mandatory argument.

The following example code provides the markup of the second type of author-affiliation.

\author{<NAME>\corref{cor1}%
  \fnref{fn1}}
\ead{<EMAIL>}
\affiliation[1]{organization={Elsevier B.V.},
  addressline={Radarweg 29},
  postcode={1043 NX},
  city={Amsterdam},
  country={The Netherlands}}

\author{<NAME>\fnref{fn2}}
\ead{<EMAIL>}
\affiliation[2]{organization={Sayahna Foundation},
  addressline={JWRA 34, Jagathy},
  city={Trivandrum},
  postcode={695014},
  country={India}}

\author{CV Rajagopal\fnref{fn1,fn3}}
\ead[url]{www.stmdocs.in}
\affiliation[3]{organization={STM Document Engineering Pvt Ltd.},
  addressline={Mepukada, Malayinkil},
  city={Trivandrum},
  postcode={695571},
  country={India}}

The output of the above TeX source is given in Clip 3.
The frontmatter part has further environments such as abstracts and keywords. These can be marked up in the following manner:

\begin{abstract}
In this work we demonstrate the formation of a
new type of polariton on the interface between a ....
\end{abstract}

\begin{keyword}
quadruple exiton \sep polariton \sep WGM
\end{keyword}

Each keyword shall be separated by a \sep command. MSC classifications shall be provided in the keyword environment with the command \MSC. \MSC accepts an optional argument to accommodate future revisions, e.g., \MSC[2008]. The default is 2000.

### New page

Sometimes you may need to give a page-break and start a new page after the title, author or abstract. The following commands can be used for this purpose:

\newpageafter{title}
\newpageafter{author}
\newpageafter{abstract}

\newpageafter{title} typesets the title alone on one page. \newpageafter{author} typesets the title and author details on one page. \newpageafter{abstract} typesets the title, author details and abstract & keywords on one page.

## 6 Floats

Figures may be included using the command \includegraphics in combination with or without its several options to further control the graphic. \includegraphics is provided by graphic[s,x].sty which is part of any standard LaTeX distribution. graphicx.sty is loaded by default. LaTeX accepts figures in the postscript format while pdfLaTeX accepts *.pdf, *.mps (metapost), *.jpg and *.png formats. pdfLaTeX does not accept graphic files in the postscript format.

The table environment is handy for marking up tabular material.
If users want to use multirow.sty, array.sty, etc., to fine control/enhance the tables, they are welcome to load any package of their choice and elsarticle.cls will work in combination with all loaded packages.

Further, the enhanced list environment allows one to prefix a string like 'Step' to all the item numbers:

\begin{enumerate}[Step 1.]
\item This is the first step of the example list.
\item Obviously this is the second step.
\item The final step to wind up this example.
\end{enumerate}

Step 1. This is the first step of the example list.
Step 2. Obviously this is the second step.
Step 3. The final step to wind up this example.

## 9 Cross-references

In electronic publications, articles may be internally hyperlinked. Hyperlinks are generated from proper cross-references in the article. For example, the words "Fig. 1" will never be more than simple text, whereas the proper cross-reference \ref{tiger} may be turned into a hyperlink to the figure itself: Fig. 1. In the same way, the words "Ref. [1]" will fail to turn into a hyperlink; the proper cross-reference is \cite{Knuth96}. Cross-referencing is possible in LaTeX for sections, subsections, formulae, figures, tables, and literature references.

## 10 Mathematical symbols and formulae

Many physical/mathematical sciences authors require more mathematical symbols than the few that are provided in standard LaTeX. A useful package for additional symbols is the amssymb package, developed by the American Mathematical Society. This package includes such oft-used symbols as \lesssim, \gtrsim or \hbar. Note that your TeX system should have the msam and msbm fonts installed. If you need only a few symbols, such as \Box, you might try the package latexsym.

Another point which would require authors' attention is the breaking up of long equations.
When you use elsarticle.cls for formatting your submissions in the preprint mode, the document is formatted in single column style with a text width of 384pt or 5.3in. When this document is formatted for final print, and if the journal happens to be a double column journal, the text width will be reduced to 224pt for double column 3+ and 5+ journals. All the nifty fine-tuning in equation breaking done by the author goes to waste in such cases. Therefore, authors are requested to check for this problem by typesetting their submissions in the final format as well, just to see whether their equations are broken at appropriate places, by changing the appropriate options in the document class loading command, which is explained in section 4, Usage. This allows authors to fix any equation breaking problems before submission for publication. elsarticle.cls supports formatting the author submission in different types of final format; this is further discussed in section 13, Final print.

### Displayed equations and double column journals

Many Elsevier journals print their text in two columns. Since the preprint layout uses a larger line width than such columns, the formulae are too wide for the line width in print. An example is equation 6 of the original document, which is perfect in a single column preprint format but too wide for a double column. In the normal course, articles are prepared and submitted in single column format even if the final printed article will come in a double column journal. The problem is that when the article is typeset by the typesetters for pagination, they have to break the lengthy equations and align them properly to fit within the column width. Even if most of the tasks in preparing your proof are automated, equation breaking and aligning requires manual judgement, and wherever there is a manual operation, errors can creep in, so authors need to check such equations carefully.
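For instance, an author can pre-break a long equation with the amsmath split environment so that it also fits a double column measure (the equation itself is only illustrative):

```
\begin{equation}
  \begin{split}
    F(x) = {} & a_0 + a_1 x + a_2 x^2 + a_3 x^3 \\
              & + a_4 x^4 + a_5 x^5 + a_6 x^6
  \end{split}
\end{equation}
```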
However, if authors themselves break the equations to the single column width, the typesetters need not touch those areas and the proofs the authors receive will be without any errors.

## 11 Bibliography

Three bibliographic style files (*.bst) are provided: elsarticle-num.bst, elsarticle-num-names.bst and elsarticle-harv.bst. The first one can be used for the numbered scheme, the second one for the numbered scheme with the new options of natbib.sty, and the third one for the author-year scheme.

In LaTeX literature, references are listed in the bibliography environment. Each reference is a \bibitem and each \bibitem is identified by a label, by which it can be cited in the text: \bibitem[Elson et al.(1996)]{ESG96} is cited as \citet{ESG96}. In connection with cross-referencing and possible future hyperlinking it is not a good idea to collect more than one literature item in one \bibitem. The so-called Harvard or author-year style of referencing is enabled by the LaTeX package natbib. With this package the literature can be cited as follows:

* Parenthetical: \citep{WB96} produces (Wettig & Brown, 1996).
* Textual: \citet{ESG96} produces Elson et al. (1996).
* An affix and part of a reference: \citep[e.g.][Ch. 2]{Gea97} produces (e.g. Governato et al., 1997, Ch. 2).

In the numbered scheme of citation, \cite{<label>} is used, since \citep or \citet has no relevance in the numbered scheme. The natbib package is loaded by elsarticle with numbers as the default option. You can change this to the author-year or Harvard scheme by adding the option authoryear in the class loading command. If you want to use more options of the natbib package, you can do so with the \biboptions command, which is described in section 4, Usage. For details of the various options of the natbib package, please take a look at the natbib documentation, which is part of any standard LaTeX installation. In addition to the above standard .bst files, there are 10 journal-specific .bst files also available.
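Putting the above together, a minimal sketch of the two citation setups (the bibliography file name refs is a placeholder):

```
%% numbered scheme (default)
\documentclass[preprint,12pt]{elsarticle}
\bibliographystyle{elsarticle-num}
\begin{document}
... as shown in \cite{Knuth96}.
\bibliography{refs}
\end{document}

%% author-year (Harvard) scheme: add the authoryear class option instead
%% \documentclass[preprint,12pt,authoryear]{elsarticle}
%% \bibliographystyle{elsarticle-harv}
%% ... then cite with \citet{ESG96} and \citep{WB96}.
```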
Instructions for using these .bst files can be found at [http://support.stmdocs.in](http://support.stmdocs.in)

## 12 Graphical abstract and highlights

A template for adding a graphical abstract and highlights is available now. These will appear as the first two pages of the PDF before the article content begins.

## 13 Final print

The authors can format their submission to the page size and margins of their preferred journal. elsarticle provides four class options for this purpose, but using these options does not mean you can emulate the exact page layout of the final print copy.

* 1p: 1+ journals with a text area of 384pt x 562pt or 13.5cm x 19.75cm or 5.3in x 7.78in, single column style only.
* 3p: 3+ journals with a text area of 468pt x 622pt or 16.45cm x 21.9cm or 6.5in x 8.6in, single column style.
* twocolumn: should be used along with the 3p option if the journal is 3+ with the same text area as above, but double column style.
* 5p: 5+ journals with a text area of 522pt x 682pt or 18.35cm x 24cm or 7.22in x 9.45in, double column style only.

The following pages have clippings of different parts of the title page of different journal models typeset in final format. Models 1+ and 3+ will have the same look and feel in the typeset copy when presented in this document. That is also the case with the double column 3+ and 5+ journal article pages; the only difference will be the wider text width of the higher models. Here are the specimen single and double column journal pages.

*(Specimen journal page clippings: title, author addresses, abstract and keywords of a sample article on quadrupole-exciton polaritons appear here in the original.)*
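The four class options above are given in the document class loading command; pick one of the following lines (they are alternatives, not to be combined):

```
\documentclass[1p]{elsarticle}            % 1+ journals, single column only
\documentclass[3p]{elsarticle}            % 3+ journals, single column
\documentclass[3p,twocolumn]{elsarticle}  % 3+ journals, double column
\documentclass[5p]{elsarticle}            % 5+ journals, double column only
```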
@greguintow/apollo-server-plugin-operation-registry (npm, JavaScript)
Operation Registry Plugin
===

The operation registry plugin is the interface into the Apollo Platform's **operation registry** and enables operation **safelisting**, which allows selective execution based on the operation. Safelisting eliminates the risk of unexpected operations that could cause downtime from being run against a graph. In order to enable safelisting, follow the [step by step guide in the Apollo docs](https://www.apollographql.com/docs/studio/operation-registry/). These steps describe how to extract and upload operations defined within client applications to [Apollo Studio](https://studio.apollographql.com) using the Apollo CLI.

Once operations have been registered, this plugin for Apollo Server fetches the manifest of these operations from [Apollo Studio](https://studio.apollographql.com) and forbids the execution of any operations that are not in that manifest.

### Usage

The following example shows basic usage of the plugin with Apollo Server. First, add the plugin to your project's `package.json`:

```
npm install apollo-server-plugin-operation-registry
```

Then, ensure Apollo Server has access to an [API key](https://www.apollographql.com/docs/studio/operation-registry/#6-start-apollo-server-with-apollo-studio-enabled), for example as the `APOLLO_KEY` environment variable:

```
APOLLO_KEY=<API_KEY> npm start
```

Next, enable the plugin by adding it to the `plugins` parameter to the Apollo Server options:

```
const server = new ApolloServer({
  typeDefs,
  resolvers,
  subscriptions: false,
  plugins: [
    require("apollo-server-plugin-operation-registry")({
      forbidUnregisteredOperations: true
    })
  ]
});
```

With federation, the setup follows the same `plugins` configuration:

```
const { ApolloServer } = require("apollo-server");
const { ApolloGateway } = require("@apollo/gateway");

const gateway = new ApolloGateway({
  serviceList: [ /* services */ ],
});

const server = new ApolloServer({
  gateway,
  subscriptions: false,
  plugins: [
    require("apollo-server-plugin-operation-registry")({
      forbidUnregisteredOperations: true
    })
  ]
});

server.listen().then(({ url }) => {
  console.log(`🚀 Server ready at ${url}`);
});
```

#### Variant

Clients can register their operations to a specific variant, so the plugin contains the `graphVariant` field to specify which variant to pull operation manifests from.

```
const server = new ApolloServer({
  plugins: [
    require("apollo-server-plugin-operation-registry")({
      graphVariant: "production"
    })
  ]
});
```

### Metrics

The plugin will transmit metrics regarding unregistered operations, which can be viewed within [Apollo Studio](https://studio.apollographql.com), for example the unregistered operations sent by a particular client.
model4you (CRAN, R)
Package ‘model4you’ October 13, 2022

Title Stratified and Personalised Models Based on Model-Based Trees and Forests
Date 2020-12-23
Version 0.9-7
Description Model-based trees for subgroup analyses in clinical trials and model-based forests for the estimation and prediction of personalised treatment effects (personalised models). Currently partitioning of linear models, lm(), generalised linear models, glm(), and Weibull models, survreg(), is supported. Advanced plotting functionality is supported for the trees and a test for parameter heterogeneity is provided for the personalised models. For details on model-based trees for subgroup analyses see Seibold, Zeileis and Hothorn (2016) <doi:10.1515/ijb-2015-0032>; for details on model-based forests for estimation of individual treatment effects see Seibold, Zeileis and Hothorn (2017) <doi:10.1177/0962280217693034>.
Depends R (>= 3.1.0), partykit (>= 1.2-6), grid
Imports sandwich, stats, methods, ggplot2, Formula, gridExtra, survival
Suggests mvtnorm, TH.data, psychotools, strucchange, plyr, knitr, ggbeeswarm, MASS
License GPL-2 | GPL-3
Encoding UTF-8
RoxygenNote 7.1.1
NeedsCompilation no
Author <NAME> [aut, cre], <NAME> [aut], <NAME> [aut]
Maintainer <NAME> <<EMAIL>>
Repository CRAN
Date/Publication 2021-01-20 16:10:02 UTC

R topics documented: .add_modelinfo, .modelfit, .prepare_args, binomial_glm_plot, coeftable.survreg, coxph_plot, lm_plot, logLik.pmtree, node_pmterminal, objfun, objfun.pmodel_identity, objfun.pmtree, one_factor, pmforest, pmodel, pmtest, pmtree, predict.pmtree, print.pmtree, rs..., survreg_plot, varimp.pmforest

.add_modelinfo Add model information to a personalised-model-ctree

Description For internal use.
Usage .add_modelinfo(x, nodeids, data, model, coeffun)
Arguments
x constparty object.
nodeids node ids, usually the terminal ids.
data data.
model model.
coeffun function that takes the model object and returns the coefficients. Useful when coef() does not return all coefficients (e.g. survreg).
Value tree with added info. Class still to be added.

.modelfit Fit function when model object is given

Description Use the update function to refit the model and extract info such as coef, logLik and estfun.
Usage .modelfit(model, data, coeffun = coef, weights, control, parm = NULL)
Arguments
model model object.
data data.
coeffun function that takes the model object and returns the coefficients. Useful when coef() does not return all coefficients (e.g. survreg).
weights weights.
control control options from ctree_control.
parm which parameters should be used for the instability test?
Value A function returning a list with elements coef (the coefficients), objfun (logLik), object (the model object), converged (did the model converge?) and estfun.

.prepare_args Prepare input for ctree/cforest from input of pmtree/pmforest

Description Prepare input for ctree/cforest from input of pmtree/pmforest.
Usage .prepare_args(model, data, zformula, control, ...)
Arguments
model model.
data an optional data frame.
zformula formula describing which variable should be used for partitioning.
control control parameters, see ctree_control.
... other arguments.
Value args to be passed to ctree/cforest.

binomial_glm_plot Plot for a given logistic regression model (glm with binomial family) with one binary covariate.

Description Can be used on its own but is also usable as plotfun in node_pmterminal.
Usage binomial_glm_plot( mod, data = NULL, plot_data = FALSE, theme = theme_classic(), ... )
Arguments
mod A model of class glm with binomial family.
data optional data frame. If NULL the data stored in mod is used.
plot_data should the data be plotted in the form of a mosaic-type plot?
theme A ggplot2 theme.
... ignored at the moment.
Examples
set.seed(2017)
# number of observations
n <- 1000
# balanced binary treatment
# trt <- factor(rep(c("C", "A"), each = n/2),
#               levels = c("C", "A"))
# unbalanced binary treatment
trt <- factor(c(rep("C", n/4), rep("A", 3*n/4)), levels = c("C", "A"))
# some continuous variables
x1 <- rnorm(n)
x2 <- rnorm(n)
# linear predictor
lp <- -0.5 + 0.5*I(trt == "A") + 1*I(trt == "A")*I(x1 > 0)
# compute probability with inverse logit function
invlogit <- function(x) 1/(1 + exp(-x))
pr <- invlogit(lp)
# bernoulli response variable
y <- rbinom(n, 1, pr)
dat <- data.frame(y, trt, x1, x2)
# logistic regression model
mod <- glm(y ~ trt, data = dat, family = "binomial")
binomial_glm_plot(mod, plot_data = TRUE)
# logistic regression model tree
ltr <- pmtree(mod)
plot(ltr, terminal_panel = node_pmterminal(ltr, plotfun = binomial_glm_plot, confint = TRUE, plot_data = TRUE))

coeftable.survreg Table of coefficients for survreg model

Description This function is mostly useful for plotting a pmtree. The generic plotting does not show the estimate and confidence interval of the scale parameter; this one does.
Usage coeftable.survreg(model, confint = TRUE, digits = 2, intree = FALSE)
Arguments
model model of class survreg
confint should a confidence interval be computed? Default: TRUE
digits integer, used for formatting numbers. Default: 2
intree is the table plotted within a tree? Default: FALSE
Value None.
Examples
if(require("survival") & require("TH.data")) {
  ## Load data
  data(GBSG2, package = "TH.data")
  ## Weibull model
  bmod <- survreg(Surv(time, cens) ~ horTh, data = GBSG2, model = TRUE)
  ## Coefficient table
  grid.newpage()
  coeftable.survreg(bmod)
  ## partitioned model
  tr <- pmtree(bmod)
  ## plot
  plot(tr, terminal_panel = node_pmterminal(tr, plotfun = survreg_plot, confint = TRUE, coeftable = coeftable.survreg))
}

coxph_plot Survival plot for a given coxph model with one binary covariate.

Description Can be used on its own but is also usable as plotfun in node_pmterminal.
Usage coxph_plot(mod, data = NULL, theme = theme_classic(), yrange = NULL)
Arguments
mod A model of class coxph.
data optional data frame. If NULL the data stored in mod is used.
theme A ggplot2 theme.
yrange Range of the y variable to be used for plotting. If NULL it will be 0 to max(y).
Examples
if(require("survival")) {
  coxph_plot(coxph(Surv(futime, fustat) ~ factor(rx), ovarian))
}

lm_plot Density plot for a given lm model with one binary covariate.

Description Can be used on its own but is also usable as plotfun in node_pmterminal.
Usage lm_plot( mod, data = NULL, densest = FALSE, theme = theme_classic(), yrange = NULL )
Arguments
mod A model of class lm.
data optional data frame. If NULL the data stored in mod is used.
densest should kernel density estimates (see geom_density) be computed in addition to the model density?
theme A ggplot2 theme.
yrange Range of the y variable to be used for plotting. If NULL the range in the data will be used.
Details In case of an offset, the value of the offset variable will be set to the median of the values in the data.
Examples
## example taken from ?lm
ctl <- c(4.17,5.58,5.18,6.11,4.50,4.61,5.17,4.53,5.33,5.14)
trt <- c(4.81,4.17,4.41,3.59,5.87,3.83,6.03,4.89,4.32,4.69)
group <- gl(2, 10, 20, labels = c("Ctl","Trt"))
weight <- c(ctl, trt)
data <- data.frame(weight, group)
lm.D9 <- lm(weight ~ group, data = data)
lm_plot(lm.D9)
## example taken from ?glm (modified version)
data(anorexia, package = "MASS")
anorexia$treatment <- factor(anorexia$Treat != "Cont")
anorex.1 <- glm(Postwt ~ treatment + offset(Prewt), family = gaussian, data = anorexia)
lm_plot(anorex.1)

logLik.pmtree Extract log-Likelihood

Description Extract the sum of log-Likelihood contributions of all terminal nodes. By default the degrees of freedom from the models are used but optionally degrees of freedom for splits can be incorporated.
Usage
## S3 method for class 'pmtree'
logLik(object, dfsplit = 0, newdata = NULL, weights = NULL, perm = NULL, ...)
Arguments
object pmtree object.
dfsplit degrees of freedom per selected split.
newdata an optional new data frame for which to compute the sum of objective functions.
weights weights.
perm the number of permutations performed (see varimp).
... ignored.
Value Returns an object of class logLik.
See Also objfun.pmtree for the sum of contributions to the objective function (not the same when partitioning linear models lm)

node_pmterminal Panel-Generator for Visualization of pmtrees

Description The plot methods for party and constparty objects are rather flexible and can be extended by panel functions. The pre-defined panel-generating function of class grapcon_generator for pmtrees is documented here.
Usage node_pmterminal( obj, coeftable = TRUE, digits = 2, confint = TRUE, plotfun, nid = function(node) paste0(nam[id_node(node)], ", n = ", node$info$nobs), ... )
Arguments
obj an object of class party.
coeftable logical or function. If logical: should a table with coefficients be added to the plot (TRUE/FALSE)? If function: a function comparable to coeftable.survreg.
digits integer, used for formatting numbers.
confint Should a confidence interval be computed?
plotfun Plotting function to be used. Needs to be of the format function(mod, data) where mod is the model object. See examples for more details.
nid function to retrieve info on what is plotted as node ids.
... arguments passed on to plotfun.
Examples
if(require("survival")) {
  ## compute survreg model
  mod_surv <- survreg(Surv(futime, fustat) ~ factor(rx), ovarian, dist = 'weibull')
  survreg_plot(mod_surv)
  ## partition model and plot
  tr_surv <- pmtree(mod_surv)
  plot(tr_surv, terminal_panel = node_pmterminal(tr_surv, plotfun = survreg_plot, confint = TRUE))
}
if(require("survival") & require("TH.data")) {
  ## Load data
  data(GBSG2, package = "TH.data")
  ## Weibull model
  bmod <- survreg(Surv(time, cens) ~ horTh, data = GBSG2, model = TRUE)
  ## Coefficient table
  grid.newpage()
  coeftable.survreg(bmod)
  ## partitioned model
  tr <- pmtree(bmod)
  ## plot with specific coeftable
  plot(tr, terminal_panel = node_pmterminal(tr, plotfun = survreg_plot, confint = TRUE, coeftable = coeftable.survreg))
}

objfun Objective function

Description Get the contributions of an objective function. For glm these are the (weighted) log-likelihood contributions, for lm the negative (weighted) squared error.
Usage
objfun(x, ...)
## S3 method for class 'survreg'
objfun(x, newdata = NULL, weights = NULL, ...)
## S3 method for class 'lm'
objfun(x, newdata = NULL, weights = NULL, ...)
## S3 method for class 'glm'
objfun(x, newdata = NULL, weights = NULL, log = TRUE, ...)
Arguments
x model object.
... further arguments passed on to objfun methods.
newdata optional. New data frame. Can be useful for model evaluation / benchmarking.
weights optional. Prior weights. See glm or lm.
log should the log-Likelihood contributions or the Likelihood contributions be returned?
Value vector of objective function contributions.
Examples
## Example taken from ?stats::glm
## Dobson (1990) Page 93: Randomized Controlled Trial :
counts <- c(18,17,15,20,10,20,25,13,12)
outcome <- gl(3,1,9)
treatment <- gl(3,3)
print(d.AD <- data.frame(treatment, outcome, counts))
glm.D93 <- glm(counts ~ outcome + treatment, family = poisson())
logLik_contributions <- objfun(glm.D93)
sum(logLik_contributions)
logLik(glm.D93)
if(require("survival")) {
  x <- survreg(Surv(futime, fustat) ~ rx, ovarian, dist = "weibull")
  newdata <- ovarian[3:5, ]
  sum(objfun(x))
  x$loglik
  objfun(x, newdata = newdata)
}

objfun.pmodel_identity Objective function of personalised models

Description Get the contributions of an objective function (e.g. likelihood contributions) and the sum thereof (e.g. log-Likelihood).
Usage
## S3 method for class 'pmodel_identity'
objfun(x, ...)
## S3 method for class 'pmodel_identity'
logLik(object, add_df = 0, ...)
Arguments
x, object object of class pmodel_identity (obtained by pmodel(..., fun = identity)).
... additional parameters passed on to objfun.
add_df it is not very clear what the degrees of freedom are in personalised models. With this argument you can add/subtract degrees of freedom at your convenience. Default is 0, which means adding up the degrees of freedom of all individual models.
For examples see pmodel.

objfun.pmtree Objective function of a given pmtree

Description Returns the contributions to the objective function or the sum thereof (if sum = TRUE).
Usage
## S3 method for class 'pmtree'
objfun(x, newdata = NULL, weights = NULL, perm = NULL, sum = FALSE, ...)
Arguments
x pmtree object.
newdata an optional new data frame for which to compute the sum of objective functions.
weights weights.
perm the number of permutations performed (see varimp).
sum should the sum of objective functions be computed?
... passed on to predict.party.
Note that objfun.pmtree(x, sum = TRUE) is much faster than sum(objfun.pmtree(x)).
Value objective function or the sum thereof
Examples
## generate data
set.seed(2)
n <- 1000
trt <- factor(rep(1:2, each = n/2))
age <- sample(40:60, size = n, replace = TRUE)
eff <- -1 + I(trt == 2) + 1 * I(trt == 2) * I(age > 50)
expit <- function(x) 1/(1 + exp(-x))
success <- rbinom(n = n, size = 1, prob = expit(eff))
dat <- data.frame(success, trt, age)
## compute base model
bmod1 <- glm(success ~ trt, data = dat, family = binomial)
## compute tree
(tr1 <- pmtree(bmod1, data = dat))
## compute log-Likelihood
logLik(tr1)
objfun(tr1, newdata = dat, sum = TRUE)
objfun(tr1, sum = TRUE)
## log-Likelihood contributions of first
## 5 observations
nd <- dat[1:5, ]
objfun(tr1, newdata = nd)

one_factor Check if model has only one factor covariate.

Description See https://stackoverflow.com/questions/50504386/check-that-model-has-only-one-factor-covariate/50514499#50514499
Usage one_factor(object)
Arguments object model.
Value Returns TRUE if model has a single factor covariate, FALSE otherwise.

pmforest Compute model-based forest from model.

Description Input a parametric model and get a forest.
Usage
pmforest(
  model,
  data = NULL,
  zformula = ~.,
  ntree = 500L,
  perturb = list(replace = FALSE, fraction = 0.632),
  mtry = NULL,
  applyfun = NULL,
  cores = NULL,
  control = ctree_control(teststat = "quad", testtype = "Univ", mincriterion = 0, saveinfo = FALSE, lookahead = TRUE, ...),
  trace = FALSE,
  ...
)
## S3 method for class 'pmforest'
gettree(object, tree = 1L, saveinfo = TRUE, coeffun = coef, ...)
Arguments
model a model object. The model can be a parametric model with a single binary covariate.
data data. If NULL the data from the model object are used.
zformula formula describing which variable should be used for partitioning. Default is to use all variables in data that are not in the model (i.e. ~ .).
ntree number of trees.
perturb a list with arguments replace and fraction determining the type of resampling, with replace = TRUE referring to the n-out-of-n bootstrap and replace = FALSE to sample splitting. fraction is the number of observations to draw without replacement.
mtry number of input variables randomly sampled as candidates at each node (Default NULL corresponds to ceiling(sqrt(nvar))). Bagging, as a special case of a random forest without random input variable sampling, can be performed by setting mtry either equal to Inf or equal to the number of input variables.
applyfun see cforest.
cores see cforest.
control control parameters, see ctree_control.
trace a logical indicating if a progress bar shall be printed while the forest grows.
... additional parameters passed on to model fit such as weights.
object an object returned by pmforest.
tree an integer, the number of the tree to extract from the forest.
saveinfo logical. Should the model info be stored in terminal nodes?
coeffun function that takes the model object and returns the coefficients. Useful when coef() does not return all coefficients (e.g. survreg).
Value cforest object See Also gettree Examples library("model4you") if(require("mvtnorm") & require("survival")) { ## function to simulate the data sim_data <- function(n = 500, p = 10, beta = 3, sd = 1){ ## treatment lev <- c("C", "A") a <- rep(factor(lev, labels = lev, levels = lev), length = n) ## correlated z variables sigma <- diag(p) sigma[sigma == 0] <- 0.2 ztemp <- rmvnorm(n, sigma = sigma) z <- (pnorm(ztemp) * 2 * pi) - pi colnames(z) <- paste0("z", 1:ncol(z)) z1 <- z[,1] ## outcome y <- 7 + 0.2 * (a %in% "A") + beta * cos(z1) * (a %in% "A") + rnorm(n, 0, sd) data.frame(y = y, a = a, z) } ## simulate data set.seed(123) beta <- 3 ntrain <- 500 ntest <- 50 simdata <- simdata_s <- sim_data(p = 5, beta = beta, n = ntrain) tsimdata <- tsimdata_s <- sim_data(p = 5, beta = beta, n = ntest) simdata_s$cens <- rep(1, ntrain) tsimdata_s$cens <- rep(1, ntest) ## base model basemodel_lm <- lm(y ~ a, data = simdata) ## forest frst_lm <- pmforest(basemodel_lm, ntree = 20, perturb = list(replace = FALSE, fraction = 0.632), control = ctree_control(mincriterion = 0)) ## personalised models # (1) return the model objects pmodels_lm <- pmodel(x = frst_lm, newdata = tsimdata, fun = identity) class(pmodels_lm) # (2) return coefficients only (default) coefs_lm <- pmodel(x = frst_lm, newdata = tsimdata) # compare predictive objective functions of personalised models versus # base model sum(objfun(pmodels_lm)) # -RSS personalised models sum(objfun(basemodel_lm, newdata = tsimdata)) # -RSS base model if(require("ggplot2")) { ## dependence plot dp_lm <- cbind(coefs_lm, tsimdata) ggplot(tsimdata) + stat_function(fun = function(z1) 0.2 + beta * cos(z1), aes(color = "true treatment\neffect")) + geom_point(data = dp_lm, aes(y = aA, x = z1, color = "estimates lm"), alpha = 0.5) + ylab("treatment effect") + xlab("patient characteristic z1") } } pmodel Personalised model Description Compute personalised models from cforest object. 
Usage pmodel( x = NULL, model = NULL, newdata = NULL, OOB = TRUE, fun = coef, return_attr = c("modelcall", "data", "similarity") ) Arguments x cforest object or matrix of weights. model model object. If NULL the model in x$info$model is used. newdata new data. If NULL cforest learning data is used. Ignored if x is a matrix. OOB In case of using the learning data, should patient similarities be computed out of bag? fun function to apply on the personalised model before returning. The default coef returns a matrix of personalised coefficients. For returning the model objects use identity. return_attr which attributes to add to the object returned. If it contains "modelcall" the call of the base model is returned, if it contains "data" the data, and if it contains "similarity" the matrix of similarity weights is added. Value depends on fun. Examples library("model4you") if(require("mvtnorm") & require("survival")) { ## function to simulate the data sim_data <- function(n = 500, p = 10, beta = 3, sd = 1){ ## treatment lev <- c("C", "A") a <- rep(factor(lev, labels = lev, levels = lev), length = n) ## correlated z variables sigma <- diag(p) sigma[sigma == 0] <- 0.2 ztemp <- rmvnorm(n, sigma = sigma) z <- (pnorm(ztemp) * 2 * pi) - pi colnames(z) <- paste0("z", 1:ncol(z)) z1 <- z[,1] ## outcome y <- 7 + 0.2 * (a %in% "A") + beta * cos(z1) * (a %in% "A") + rnorm(n, 0, sd) data.frame(y = y, a = a, z) } ## simulate data set.seed(123) beta <- 3 ntrain <- 500 ntest <- 50 simdata <- simdata_s <- sim_data(p = 5, beta = beta, n = ntrain) tsimdata <- tsimdata_s <- sim_data(p = 5, beta = beta, n = ntest) simdata_s$cens <- rep(1, ntrain) tsimdata_s$cens <- rep(1, ntest) ## base model basemodel_lm <- lm(y ~ a, data = simdata) ## forest frst_lm <- pmforest(basemodel_lm, ntree = 20, perturb = list(replace = FALSE, fraction = 0.632), control = ctree_control(mincriterion = 0)) ## personalised models # (1) return the model objects pmodels_lm <- pmodel(x = frst_lm, newdata = tsimdata, fun 
= identity)
class(pmodels_lm)
# (2) return coefficients only (default)
coefs_lm <- pmodel(x = frst_lm, newdata = tsimdata)
# compare predictive objective functions of personalised models versus
# base model
sum(objfun(pmodels_lm)) # -RSS personalised models
sum(objfun(basemodel_lm, newdata = tsimdata)) # -RSS base model
if(require("ggplot2")) {
  ## dependence plot
  dp_lm <- cbind(coefs_lm, tsimdata)
  ggplot(tsimdata) +
    stat_function(fun = function(z1) 0.2 + beta * cos(z1), aes(color = "true treatment\neffect")) +
    geom_point(data = dp_lm, aes(y = aA, x = z1, color = "estimates lm"), alpha = 0.5) +
    ylab("treatment effect") +
    xlab("patient characteristic z1")
}
}

pmtest Test if personalised models improve upon base model.

Description This is a rudimentary test of whether there is heterogeneity in the model parameters. The null hypothesis is: the base model is the correct model.
Usage
pmtest(forest, pmodels = NULL, data = NULL, B = 100)
## S3 method for class 'heterogeneity_test'
plot(x, ...)
Arguments
forest pmforest object.
pmodels pmodel_identity object (pmodel(..., fun = identity)).
data data.
B number of bootstrap samples.
x object of class heterogeneity_test.
... ignored.
Value list where the first element is the p-value and the second element is a data.frame with all necessary information to compute the p-value.
The test statistic is the difference in objective function between the base model and the personalised models. To compute the distribution under the Null we draw parametric bootstrap samples from the base model. For each bootstrap sample we again compute the difference in objective function between the base model and the personalised models. If the difference in the original data is greater than the difference in the bootstrap samples, we reject the null hypothesis.
Examples
## Not run:
set.seed(123)
n <- 160
trt <- factor(rep(0:1, each = n/2))
y <- 4 + (trt == 1) + rnorm(n)
z <- matrix(rnorm(n * 2), ncol = 2)
dat <- data.frame(y, trt, z)
mod <- lm(y ~ trt, data = dat)
## Note that ntree should usually be higher
frst <- pmforest(mod, ntree = 20)
pmods <- pmodel(frst, fun = identity)
## Note that B should be at least 100
## The low B is just for demonstration
## purposes.
tst <- pmtest(forest = frst, pmodels = pmods, B = 10)
tst$pvalue
tst
plot(tst)
## End(Not run)

pmtree Compute model-based tree from model.

Description Input a parametric model and get a model-based tree.
Usage
pmtree(
  model,
  data = NULL,
  zformula = ~.,
  control = ctree_control(),
  coeffun = coef,
  ...
)
Arguments
model a model object. The model can be a parametric model with a binary covariate.
data data. If NULL (default) the data from the model object are used.
zformula formula describing which variable should be used for partitioning. Default is to use all variables in data that are not in the model (i.e. ~ .).
control control parameters, see ctree_control.
coeffun function that takes the model object and returns the coefficients. Useful when coef() does not return all coefficients (e.g. survreg).
... additional parameters passed on to model fit such as weights.
Details Sometimes the number of participants in each treatment group needs to be of a certain size. This can be accomplished by setting control$converged. See example below.
Value ctree object
Examples
if(require("TH.data") & require("survival")) {
  ## base model
  bmod <- survreg(Surv(time, cens) ~ horTh, data = GBSG2, model = TRUE)
  survreg_plot(bmod)
  ## partitioned model
  tr <- pmtree(bmod)
  plot(tr, terminal_panel = node_pmterminal(tr, plotfun = survreg_plot, confint = TRUE))
  summary(tr)
  summary(tr, node = 1:2)
  logLik(bmod)
  logLik(tr)
  ## Sometimes the number of participants in each treatment group needs to
  ## be of a certain size.
## This can be accomplished using converged.
  ## Each treatment group should have more than 33 observations
  ctrl <- ctree_control(lookahead = TRUE)
  ctrl$converged <- function(mod, data, subset) {
    all(table(data$horTh[subset]) > 33)
  }
  tr2 <- pmtree(bmod, control = ctrl)
  plot(tr2, terminal_panel = node_pmterminal(tr, plotfun = survreg_plot,
                                             confint = TRUE))
  summary(tr2[[5]]$data$horTh)
}

if(require("psychotools")) {
  data("MathExam14W", package = "psychotools")

  ## scale points achieved to [0, 100] percent
  MathExam14W$tests <- 100 * MathExam14W$tests/26
  MathExam14W$pcorrect <- 100 * MathExam14W$nsolved/13

  ## select variables to be used
  MathExam <- MathExam14W[ , c("pcorrect", "group", "tests", "study",
                               "attempt", "semester", "gender")]

  ## compute base model
  bmod_math <- lm(pcorrect ~ group, data = MathExam)
  lm_plot(bmod_math, densest = TRUE)

  ## compute tree
  (tr_math <- pmtree(bmod_math, control = ctree_control(maxdepth = 2)))
  plot(tr_math, terminal_panel = node_pmterminal(tr_math, plotfun = lm_plot,
                                                 confint = FALSE))
  plot(tr_math, terminal_panel = node_pmterminal(tr_math, plotfun = lm_plot,
                                                 densest = TRUE, confint = TRUE))

  ## predict
  newdat <- MathExam[1:5, ]
  # terminal nodes
  (nodes <- predict(tr_math, type = "node", newdata = newdat))
  # response
  (pr <- predict(tr_math, type = "pass", newdata = newdat))
  # response including confidence intervals, see ?predict.lm
  (pr1 <- predict(tr_math, type = "pass", newdata = newdat,
                  predict_args = list(interval = "confidence")))
}

predict.pmtree          pmtree predictions

Description
Compute predictions from a pmtree object.

Usage
## S3 method for class 'pmtree'
predict(
  object,
  newdata = NULL,
  type = "node",
  predict_args = list(),
  perm = NULL,
  ...
)

Arguments
object        pmtree object.
newdata       an optional data frame in which to look for variables with which to predict; if omitted, object$data is used.
type          character denoting the type of predicted value. The terminal node is returned for "node".
If type = "pass" the model predict method is used and arguments can be passed to it via predict_args. If type = "coef" the model coefficients are returned.
predict_args  If type = "pass", arguments can be passed on to the model predict function.
perm          an optional character vector of variable names (or integer vector of variable locations in newdata). Splits of nodes with a primary split in any of these variables will be permuted (after dealing with surrogates). Note that surrogate splits in the perm variables will not be permuted.
...           passed on to predict.party (e.g. perm).

Value
predictions

Examples
if(require("psychotools")) {
  data("MathExam14W", package = "psychotools")

  ## scale points achieved to [0, 100] percent
  MathExam14W$tests <- 100 * MathExam14W$tests/26
  MathExam14W$pcorrect <- 100 * MathExam14W$nsolved/13

  ## select variables to be used
  MathExam <- MathExam14W[ , c("pcorrect", "group", "tests", "study",
                               "attempt", "semester", "gender")]

  ## compute base model
  bmod_math <- lm(pcorrect ~ group, data = MathExam)
  lm_plot(bmod_math, densest = TRUE)

  ## compute tree
  (tr_math <- pmtree(bmod_math, control = ctree_control(maxdepth = 2)))
  plot(tr_math, terminal_panel = node_pmterminal(tr_math, plotfun = lm_plot,
                                                 confint = FALSE))
  plot(tr_math, terminal_panel = node_pmterminal(tr_math, plotfun = lm_plot,
                                                 densest = TRUE, confint = TRUE))

  ## predict
  newdat <- MathExam[1:5, ]
  # terminal nodes
  (nodes <- predict(tr_math, type = "node", newdata = newdat))
  # response
  (pr <- predict(tr_math, type = "pass", newdata = newdat))
  # response including confidence intervals, see ?predict.lm
  (pr1 <- predict(tr_math, type = "pass", newdata = newdat,
                  predict_args = list(interval = "confidence")))
}

print.pmtree            Methods for pmtree

Description
Print and summary methods for pmtree objects.

Usage
## S3 method for class 'pmtree'
print(
  x,
  node = NULL,
  FUN = NULL,
  digits = getOption("digits") - 4L,
  footer = TRUE,
  ...
)

## S3 method for class 'pmtree'
summary(object, node = NULL, ...)
## S3 method for class 'summary.pmtree'
print(x, digits = 4, ...)

## S3 method for class 'pmtree'
coef(object, node = NULL, ...)

Arguments
x        object.
node     node number, if any.
FUN      formatinfo function.
digits   number of digits.
footer   should footer be included?
...      further arguments passed on to print.party.
object   object.

Value
print

rss                     Residual sum of squares

Description
Returns the sum of the squared residuals for a given object.

Usage
rss(object, ...)

## Default S3 method:
rss(object, ...)

Arguments
object   model object.
...      passed on to specific methods.

Value
sum of the squared residuals.

Examples
## example from ?lm
ctl <- c(4.17,5.58,5.18,6.11,4.50,4.61,5.17,4.53,5.33,5.14)
trt <- c(4.81,4.17,4.41,3.59,5.87,3.83,6.03,4.89,4.32,4.69)
group <- gl(2, 10, 20, labels = c("Ctl","Trt"))
weight <- c(ctl, trt)
lm.D9 <- lm(weight ~ group)
rss(lm.D9)

survreg_plot            Survival plot for a given survreg model with one binary covariate.

Description
Can be used on its own but is also usable as plotfun in node_pmterminal.

Usage
survreg_plot(mod, data = NULL, theme = theme_classic(), yrange = NULL)

Arguments
mod      A model of class survreg.
data     optional data frame. If NULL the data stored in mod is used.
theme    A ggplot2 theme.
yrange   Range of the y variable to be used for plotting. If NULL it will be 0 to max(y).

Examples
if(require("survival")) {
  survreg_plot(survreg(Surv(futime, fustat) ~ factor(rx), ovarian))
}

varimp.pmforest         Variable Importance for pmforest

Description
See varimp.cforest.

Usage
## S3 method for class 'pmforest'
varimp(
  object,
  nperm = 1L,
  OOB = TRUE,
  risk = function(x, ...) -objfun(x, sum = TRUE, ...),
  conditional = FALSE,
  threshold = 0.2,
  ...
)

Arguments
object   pmforest object.
nperm    the number of permutations performed.
OOB      a logical determining whether the importance is computed from the out-of-bag sample or the learning sample (not suggested).
risk     the risk to be evaluated. By default the objective function (e.g. log-likelihood) is used.
conditional  a logical determining whether unconditional or conditional computation of the importance is performed.
threshold    the value of the test statistic or 1 - p-value of the association between the variable of interest and a covariate that must be exceeded in order to include the covariate in the conditioning scheme for the variable of interest (only relevant if conditional = TRUE).
...          passed on to objfun.

Value
A vector of 'mean decrease in accuracy' importance scores.
Tornado Documentation
Release 6.3.3
The Tornado Authors
Aug 11, 2023

CONTENTS
6.1 User's guide
6.2 Web framework
6.3 HTTP servers and clients
6.4 Asynchronous networking
6.5 Coroutines and concurrency
6.6 Integration with other services
6.7 Utilities
6.8 Frequently Asked Questions
6.9 Release notes

Tornado is a Python web framework and asynchronous networking library, originally developed at FriendFeed. By using non-blocking network I/O, Tornado can scale to tens of thousands of open connections, making it ideal for long polling, WebSockets, and other applications that require a long-lived connection to each user.

CHAPTER ONE: QUICK LINKS

• Current version: 6.3.3 (download from PyPI, release notes)
• Source (GitHub)
• Mailing lists: discussion and announcements
• Stack Overflow
• Wiki

CHAPTER TWO: HELLO, WORLD

Here is a simple "Hello, world" example web app for Tornado:

    import asyncio
    import tornado

    class MainHandler(tornado.web.RequestHandler):
        def get(self):
            self.write("Hello, world")

    def make_app():
        return tornado.web.Application([
            (r"/", MainHandler),
        ])

    async def main():
        app = make_app()
        app.listen(8888)
        await asyncio.Event().wait()

    if __name__ == "__main__":
        asyncio.run(main())

This example does not use any of Tornado's asynchronous features; for that see this simple chat room.

CHAPTER THREE: THREADS AND WSGI

Tornado is different from most Python web frameworks. It is not based on WSGI, and it is typically run with only one thread per process. See the User's guide for more on Tornado's approach to asynchronous programming.
While some support of WSGI is available in the tornado.wsgi module, it is not a focus of development and most applications should be written to use Tornado's own interfaces (such as tornado.web) directly instead of using WSGI.

In general, Tornado code is not thread-safe. The only method in Tornado that is safe to call from other threads is IOLoop.add_callback. You can also use IOLoop.run_in_executor to asynchronously run a blocking function on another thread, but note that the function passed to run_in_executor should avoid referencing any Tornado objects. run_in_executor is the recommended way to interact with blocking code.

CHAPTER FOUR: ASYNCIO INTEGRATION

Tornado is integrated with the standard library asyncio module and shares the same event loop (by default since Tornado 5.0). In general, libraries designed for use with asyncio can be mixed freely with Tornado.

CHAPTER FIVE: INSTALLATION

    pip install tornado

Tornado is listed in PyPI and can be installed with pip. Note that the source distribution includes demo applications that are not present when Tornado is installed in this way, so you may wish to download a copy of the source tarball or clone the git repository as well.

Prerequisites: Tornado 6.3 requires Python 3.8 or newer. The following optional packages may be useful:
• pycurl is used by the optional tornado.curl_httpclient. Libcurl version 7.22 or higher is required.
• pycares is an alternative non-blocking DNS resolver that can be used when threads are not appropriate.

Platforms: Tornado is designed for Unix-like platforms, with best performance and scalability on systems supporting epoll (Linux), kqueue (BSD/macOS), or /dev/poll (Solaris).

Tornado will also run on Windows, although this configuration is not officially supported or recommended for production use.
Some features are missing on Windows (including multi-process mode) and scalability is limited (even though Tornado is built on asyncio, which supports Windows, Tornado does not use the APIs that are necessary for scalable networking on Windows).

CHAPTER SIX: DOCUMENTATION

This documentation is also available in PDF and Epub formats.

6.1 User's guide

6.1.1 Introduction

Tornado is a Python web framework and asynchronous networking library, originally developed at FriendFeed. By using non-blocking network I/O, Tornado can scale to tens of thousands of open connections, making it ideal for long polling, WebSockets, and other applications that require a long-lived connection to each user.

Tornado can be roughly divided into three major components:
• A web framework (including RequestHandler which is subclassed to create web applications, and various supporting classes).
• Client- and server-side implementations of HTTP (HTTPServer and AsyncHTTPClient).
• An asynchronous networking library including the classes IOLoop and IOStream, which serve as the building blocks for the HTTP components and can also be used to implement other protocols.

The Tornado web framework and HTTP server together offer a full-stack alternative to WSGI. While it is possible to use the Tornado HTTP server as a container for other WSGI frameworks (WSGIContainer), this combination has limitations and to take full advantage of Tornado you will need to use Tornado's web framework and HTTP server together.

6.1.2 Asynchronous and non-Blocking I/O

Real-time web features require a long-lived, mostly-idle connection per user. In a traditional synchronous web server, this implies devoting one thread to each user, which can be very expensive.

To minimize the cost of concurrent connections, Tornado uses a single-threaded event loop.
This means that all application code should aim to be asynchronous and non-blocking, because only one operation can be active at a time.

The terms asynchronous and non-blocking are closely related and are often used interchangeably, but they are not quite the same thing.

Blocking

A function blocks when it waits for something to happen before returning. A function may block for many reasons: network I/O, disk I/O, mutexes, etc. In fact, every function blocks, at least a little bit, while it is running and using the CPU (for an extreme example that demonstrates why CPU blocking must be taken as seriously as other kinds of blocking, consider password hashing functions like bcrypt, which by design use hundreds of milliseconds of CPU time, far more than a typical network or disk access).

A function can be blocking in some respects and non-blocking in others. In the context of Tornado we generally talk about blocking in the context of network I/O, although all kinds of blocking are to be minimized.

Asynchronous

An asynchronous function returns before it is finished, and generally causes some work to happen in the background before triggering some future action in the application (as opposed to normal synchronous functions, which do everything they are going to do before returning). There are many styles of asynchronous interfaces:
• Callback argument
• Return a placeholder (Future, Promise, Deferred)
• Deliver to a queue
• Callback registry (e.g. POSIX signals)

Regardless of which type of interface is used, asynchronous functions by definition interact differently with their callers; there is no free way to make a synchronous function asynchronous in a way that is transparent to its callers (systems like gevent use lightweight threads to offer performance comparable to asynchronous systems, but they do not actually make things asynchronous).
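The difference between blocking and non-blocking waiting can be seen with only the standard library. The sketch below (not from the Tornado docs; it uses plain asyncio) runs two non-blocking sleeps concurrently on a single thread, then two blocking sleeps sequentially:

```python
import asyncio
import time

def blocking_wait(seconds):
    # Blocks the whole thread: nothing else can run meanwhile.
    time.sleep(seconds)

async def non_blocking_wait(seconds):
    # Suspends only this coroutine; the event loop keeps running others.
    await asyncio.sleep(seconds)

async def demo():
    start = time.monotonic()
    # Two non-blocking waits overlap on one thread...
    await asyncio.gather(non_blocking_wait(0.1), non_blocking_wait(0.1))
    concurrent = time.monotonic() - start

    start = time.monotonic()
    # ...while two blocking waits must run one after the other.
    blocking_wait(0.1)
    blocking_wait(0.1)
    sequential = time.monotonic() - start
    return concurrent, sequential

concurrent, sequential = asyncio.run(demo())
```

The concurrent pair finishes in roughly the time of one sleep; the blocking pair takes the sum of both.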
Asynchronous operations in Tornado generally return placeholder objects (Futures), with the exception of some low-level components like the IOLoop that use callbacks. Futures are usually transformed into their result with the await or yield keywords.

Examples

Here is a sample synchronous function:

    from tornado.httpclient import HTTPClient

    def synchronous_fetch(url):
        http_client = HTTPClient()
        response = http_client.fetch(url)
        return response.body

And here is the same function rewritten asynchronously as a native coroutine:

    from tornado.httpclient import AsyncHTTPClient

    async def asynchronous_fetch(url):
        http_client = AsyncHTTPClient()
        response = await http_client.fetch(url)
        return response.body

Or for compatibility with older versions of Python, using the tornado.gen module:

    from tornado.httpclient import AsyncHTTPClient
    from tornado import gen

    @gen.coroutine
    def async_fetch_gen(url):
        http_client = AsyncHTTPClient()
        response = yield http_client.fetch(url)
        raise gen.Return(response.body)

Coroutines are a little magical, but what they do internally is something like this:

    from tornado.concurrent import Future

    def async_fetch_manual(url):
        http_client = AsyncHTTPClient()
        my_future = Future()
        fetch_future = http_client.fetch(url)

        def on_fetch(f):
            my_future.set_result(f.result().body)

        fetch_future.add_done_callback(on_fetch)
        return my_future

Notice that the coroutine returns its Future before the fetch is done. This is what makes coroutines asynchronous.

Anything you can do with coroutines you can also do by passing callback objects around, but coroutines provide an important simplification by letting you organize your code in the same way you would if it were synchronous. This is especially important for error handling, since try/except blocks work as you would expect in coroutines while this is difficult to achieve with callbacks. Coroutines will be discussed in depth in the next section of this guide.
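The async_fetch_manual pattern above can be reproduced with only the standard library; the callback-style API here (fetch_with_callback) is a hypothetical stand-in for illustration, and asyncio's loop.create_future plays the role of tornado.concurrent.Future:

```python
import asyncio

def fetch_with_callback(loop, callback):
    # Hypothetical callback-style async API: it schedules work and
    # invokes callback(result) on a later event-loop iteration.
    loop.call_soon(callback, b"response body")

def fetch_manual():
    # Returns a Future immediately; the result arrives later.
    loop = asyncio.get_running_loop()
    my_future = loop.create_future()

    def on_done(body):
        my_future.set_result(body)

    fetch_with_callback(loop, on_done)
    return my_future  # not yet resolved when returned

async def main():
    fut = fetch_manual()
    done_before_await = fut.done()  # False: the "fetch" hasn't finished
    body = await fut                # suspends until the callback fires
    return done_before_await, body

done_before_await, body = asyncio.run(main())
```

The future is returned unresolved, exactly the property that makes such a function asynchronous.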
6.1.3 Coroutines

Coroutines are the recommended way to write asynchronous code in Tornado. Coroutines use the Python await keyword to suspend and resume execution instead of a chain of callbacks (cooperative lightweight threads as seen in frameworks like gevent are sometimes called coroutines as well, but in Tornado all coroutines use explicit context switches and are called as asynchronous functions).

Coroutines are almost as simple as synchronous code, but without the expense of a thread. They also make concurrency easier to reason about by reducing the number of places where a context switch can happen.

Example:

    async def fetch_coroutine(url):
        http_client = AsyncHTTPClient()
        response = await http_client.fetch(url)
        return response.body

Native vs decorated coroutines

Python 3.5 introduced the async and await keywords (functions using these keywords are also called "native coroutines"). For compatibility with older versions of Python, you can use "decorated" or "yield-based" coroutines using the tornado.gen.coroutine decorator.

Native coroutines are the recommended form whenever possible. Only use decorated coroutines when compatibility with older versions of Python is required. Examples in the Tornado documentation will generally use the native form.

Translation between the two forms is generally straightforward:

    # Decorated:                     # Native:

    # Normal function declaration
    # with decorator                 # "async def" keywords
    @gen.coroutine
    def a():                         async def a():
        # "yield" all async funcs        # "await" all async funcs
        b = yield c()                    b = await c()
        # "return" and "yield"
        # cannot be mixed in
        # Python 2, so raise a
        # special exception.             # Return normally
        raise gen.Return(b)              return b

Other differences between the two forms of coroutine are outlined below.
• Native coroutines:
  – are generally faster.
  – can use async for and async with statements which make some patterns much simpler.
  – do not run at all unless you await or yield them.
Decorated coroutines can start running "in the background" as soon as they are called. Note that for both kinds of coroutines it is important to use await or yield so that any exceptions have somewhere to go.
• Decorated coroutines:
  – have additional integration with the concurrent.futures package, allowing the result of executor.submit to be yielded directly. For native coroutines, use IOLoop.run_in_executor instead.
  – support some shorthand for waiting on multiple objects by yielding a list or dict. Use tornado.gen.multi to do this in native coroutines.
  – can support integration with other packages including Twisted via a registry of conversion functions. To access this functionality in native coroutines, use tornado.gen.convert_yielded.
  – always return a Future object. Native coroutines return an awaitable object that is not a Future. In Tornado the two are mostly interchangeable.

How it works

This section explains the operation of decorated coroutines. Native coroutines are conceptually similar, but a little more complicated because of the extra integration with the Python runtime.

A function containing yield is a generator. All generators are asynchronous; when called they return a generator object instead of running to completion. The @gen.coroutine decorator communicates with the generator via the yield expressions, and with the coroutine's caller by returning a Future. Here is a simplified version of the coroutine decorator's inner loop:

    # Simplified inner loop of tornado.gen.Runner
    def run(self):
        # send(x) makes the current yield return x.
        # It returns when the next yield is reached.
        future = self.gen.send(self.next)

        def callback(f):
            self.next = f.result()
            self.run()
        future.add_done_callback(callback)

The decorator receives a Future from the generator, waits (without blocking) for that Future to complete, then "unwraps" the Future and sends the result back into the generator as the result of the yield expression.
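That send/callback loop can be made concrete with a runnable toy. The sketch below is not Tornado code; MiniFuture and MiniRunner are stripped-down stand-ins built for illustration, driving a generator exactly as described: each yielded future's result is sent back in as the value of the yield expression.

```python
class MiniFuture:
    # A stripped-down Future: holds a result, runs callbacks when set.
    def __init__(self):
        self._result, self._done, self._callbacks = None, False, []

    def set_result(self, value):
        self._result, self._done = value, True
        for cb in self._callbacks:
            cb(self)

    def result(self):
        return self._result

    def add_done_callback(self, cb):
        cb(self) if self._done else self._callbacks.append(cb)

class MiniRunner:
    # Drives a generator the way the simplified Runner above does.
    def __init__(self, gen):
        self.gen = gen
        self.result = MiniFuture()
        self._step(None)

    def _step(self, value):
        try:
            future = self.gen.send(value)   # run to the next yield
        except StopIteration as e:
            self.result.set_result(e.value)  # generator returned
            return
        future.add_done_callback(lambda f: self._step(f.result()))

pending = []

def async_double(x):
    # Stand-in for an async operation: a future resolved by the
    # trivial "event loop" below.
    f = MiniFuture()
    pending.append((f, x * 2))
    return f

def coroutine_body():
    a = yield async_double(10)   # suspends until the future resolves
    b = yield async_double(a)
    return a + b

runner = MiniRunner(coroutine_body())
# A trivial "event loop": resolve pending futures until none remain.
while pending:
    fut, value = pending.pop(0)
    fut.set_result(value)
```

After the loop drains, runner.result holds 20 + 40 = 60.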
Most asynchronous code never touches the Future class directly except to immediately pass the Future returned by an asynchronous function to a yield expression.

How to call a coroutine

Coroutines do not raise exceptions in the normal way: any exception they raise will be trapped in the awaitable object until it is yielded. This means it is important to call coroutines in the right way, or you may have errors that go unnoticed:

    async def divide(x, y):
        return x / y

    def bad_call():
        # This should raise a ZeroDivisionError, but it won't because
        # the coroutine is called incorrectly.
        divide(1, 0)

In nearly all cases, any function that calls a coroutine must be a coroutine itself, and use the await or yield keyword in the call. When you are overriding a method defined in a superclass, consult the documentation to see if coroutines are allowed (the documentation should say that the method "may be a coroutine" or "may return a Future"):

    async def good_call():
        # await will unwrap the object returned by divide() and raise
        # the exception.
        await divide(1, 0)

Sometimes you may want to "fire and forget" a coroutine without waiting for its result. In this case it is recommended to use IOLoop.spawn_callback, which makes the IOLoop responsible for the call. If it fails, the IOLoop will log a stack trace:

    # The IOLoop will catch the exception and print a stack trace in
    # the logs. Note that this doesn't look like a normal call, since
    # we pass the function object to be called by the IOLoop.
    IOLoop.current().spawn_callback(divide, 1, 0)

Using IOLoop.spawn_callback in this way is recommended for functions using @gen.coroutine, but it is required for functions using async def (otherwise the coroutine runner will not start).
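The "trapped exception" behaviour of a badly-called coroutine can be demonstrated with plain asyncio (no Tornado needed): calling the coroutine function only creates a coroutine object, and the ZeroDivisionError surfaces only when something actually drives it.

```python
import asyncio

async def divide(x, y):
    return x / y

# Calling a coroutine function does NOT run its body or raise its
# exceptions; it merely creates a coroutine object.
coro = divide(1, 0)
called_without_error = True
coro.close()  # discard it cleanly to avoid a "never awaited" warning

# The exception only appears when the coroutine is driven to
# completion, e.g. by asyncio.run() or an await expression.
try:
    asyncio.run(divide(1, 0))
    raised = False
except ZeroDivisionError:
    raised = True
```

This is exactly why bad_call above silently swallows the error while good_call raises it.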
Finally, at the top level of a program, if the IOLoop is not yet running, you can start the IOLoop, run the coroutine, and then stop the IOLoop with the IOLoop.run_sync method. This is often used to start the main function of a batch-oriented program:

    # run_sync() doesn't take arguments, so we must wrap the
    # call in a lambda.
    IOLoop.current().run_sync(lambda: divide(1, 0))

Coroutine patterns

Calling blocking functions

The simplest way to call a blocking function from a coroutine is to use IOLoop.run_in_executor, which returns Futures that are compatible with coroutines:

    async def call_blocking():
        await IOLoop.current().run_in_executor(None, blocking_func, args)

Parallelism

The multi function accepts lists and dicts whose values are Futures, and waits for all of those Futures in parallel:

    from tornado.gen import multi

    async def parallel_fetch(url1, url2):
        resp1, resp2 = await multi([http_client.fetch(url1),
                                    http_client.fetch(url2)])

    async def parallel_fetch_many(urls):
        responses = await multi([http_client.fetch(url) for url in urls])
        # responses is a list of HTTPResponses in the same order

    async def parallel_fetch_dict(urls):
        responses = await multi({url: http_client.fetch(url)
                                 for url in urls})
        # responses is a dict {url: HTTPResponse}

In decorated coroutines, it is possible to yield the list or dict directly:

    @gen.coroutine
    def parallel_fetch_decorated(url1, url2):
        resp1, resp2 = yield [http_client.fetch(url1),
                              http_client.fetch(url2)]

Interleaving

Sometimes it is useful to save a Future instead of yielding it immediately, so you can start another operation before waiting.

    from tornado.gen import convert_yielded

    async def get(self):
        # convert_yielded() starts the native coroutine in the background.
        # This is equivalent to asyncio.ensure_future() (both work in Tornado).
        fetch_future = convert_yielded(self.fetch_next_chunk())
        while True:
            chunk = await fetch_future
            if chunk is None:
                break
            self.write(chunk)
            fetch_future = convert_yielded(self.fetch_next_chunk())
            await self.flush()

This is a little easier to do with decorated coroutines, because they start immediately when called:

    @gen.coroutine
    def get(self):
        fetch_future = self.fetch_next_chunk()
        while True:
            chunk = yield fetch_future
            if chunk is None:
                break
            self.write(chunk)
            fetch_future = self.fetch_next_chunk()
            yield self.flush()

Looping

In native coroutines, async for can be used. In older versions of Python, looping is tricky with coroutines since there is no way to yield on every iteration of a for or while loop and capture the result of the yield. Instead, you'll need to separate the loop condition from accessing the results, as in this example from Motor:

    import motor
    db = motor.MotorClient().test

    @gen.coroutine
    def loop_example(collection):
        cursor = db.collection.find()
        while (yield cursor.fetch_next):
            doc = cursor.next_object()

Running in the background

As an alternative to PeriodicCallback, a coroutine can contain a while True: loop and use tornado.gen.sleep:

    async def minute_loop():
        while True:
            await do_something()
            await gen.sleep(60)

    # Coroutines that loop forever are generally started with
    # spawn_callback().
    IOLoop.current().spawn_callback(minute_loop)

Sometimes a more complicated loop may be desirable. For example, the previous loop runs every 60+N seconds, where N is the running time of do_something(). To run exactly every 60 seconds, use the interleaving pattern from above:

    async def minute_loop2():
        while True:
            nxt = gen.sleep(60)      # Start the clock.
            await do_something()     # Run while the clock is ticking.
            await nxt                # Wait for the timer to run out.
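The interleaving pattern is not Tornado-specific; with plain asyncio, ensure_future plays the role of convert_yielded. In this sketch the chunk source (fetch_next_chunk) is a hypothetical stand-in for illustration; each iteration kicks off the next fetch before handling the current chunk:

```python
import asyncio

chunks = ["one", "two", "three", None]  # None signals end of stream

async def fetch_next_chunk():
    # Hypothetical chunk source standing in for a real fetch.
    await asyncio.sleep(0)
    return chunks.pop(0)

async def get():
    received = []
    # ensure_future() starts the coroutine running in the background,
    # like convert_yielded() in the Tornado version.
    fetch_future = asyncio.ensure_future(fetch_next_chunk())
    while True:
        chunk = await fetch_future
        if chunk is None:
            break
        # Start the next fetch before processing the current chunk.
        fetch_future = asyncio.ensure_future(fetch_next_chunk())
        received.append(chunk)
    return received

received = asyncio.run(get())
```

The next fetch overlaps with whatever "processing" happens between the two awaits, which is the whole point of saving the future instead of awaiting it immediately.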
6.1.4 Queue example - a concurrent web spider

Tornado's tornado.queues module (and the very similar Queue classes in asyncio) implements an asynchronous producer/consumer pattern for coroutines, analogous to the pattern implemented for threads by the Python standard library's queue module.

A coroutine that yields Queue.get pauses until there is an item in the queue. If the queue has a maximum size set, a coroutine that yields Queue.put pauses until there is room for another item.

A Queue maintains a count of unfinished tasks, which begins at zero. put increments the count; task_done decrements it.

In the web-spider example here, the queue begins containing only base_url. When a worker fetches a page it parses the links and puts new ones in the queue, then calls task_done to decrement the counter once. Eventually, a worker fetches a page whose URLs have all been seen before, and there is also no work left in the queue. Thus that worker's call to task_done decrements the counter to zero. The main coroutine, which is waiting for join, is unpaused and finishes.

    #!/usr/bin/env python3

    import asyncio
    import time
    from datetime import timedelta
    from html.parser import HTMLParser
    from urllib.parse import urljoin, urldefrag

    from tornado import gen, httpclient, queues

    base_url = "http://www.tornadoweb.org/en/stable/"
    concurrency = 10

    async def get_links_from_url(url):
        """Download the page at `url` and parse it for links.

        Returned links have had the fragment after `#` removed, and have been
        made absolute so, e.g. the URL 'gen.html#tornado.gen.coroutine'
        becomes 'http://www.tornadoweb.org/en/stable/gen.html'.
        """
        response = await httpclient.AsyncHTTPClient().fetch(url)
        print("fetched %s" % url)

        html = response.body.decode(errors="ignore")
        return [urljoin(url, remove_fragment(new_url))
                for new_url in get_links(html)]

    def remove_fragment(url):
        pure_url, frag = urldefrag(url)
        return pure_url

    def get_links(html):
        class URLSeeker(HTMLParser):
            def __init__(self):
                HTMLParser.__init__(self)
                self.urls = []

            def handle_starttag(self, tag, attrs):
                href = dict(attrs).get("href")
                if href and tag == "a":
                    self.urls.append(href)

        url_seeker = URLSeeker()
        url_seeker.feed(html)
        return url_seeker.urls

    async def main():
        q = queues.Queue()
        start = time.time()
        fetching, fetched, dead = set(), set(), set()

        async def fetch_url(current_url):
            if current_url in fetching:
                return

            print("fetching %s" % current_url)
            fetching.add(current_url)
            urls = await get_links_from_url(current_url)
            fetched.add(current_url)

            for new_url in urls:
                # Only follow links beneath the base URL
                if new_url.startswith(base_url):
                    await q.put(new_url)

        async def worker():
            async for url in q:
                if url is None:
                    return
                try:
                    await fetch_url(url)
                except Exception as e:
                    print("Exception: %s %s" % (e, url))
                    dead.add(url)
                finally:
                    q.task_done()

        await q.put(base_url)

        # Start workers, then wait for the work queue to be empty.
        workers = gen.multi([worker() for _ in range(concurrency)])
        await q.join(timeout=timedelta(seconds=300))
        assert fetching == (fetched | dead)
        print("Done in %d seconds, fetched %s URLs." %
              (time.time() - start, len(fetched)))
        print("Unable to fetch %s URLs." % len(dead))

        # Signal all the workers to exit.
        for _ in range(concurrency):
            await q.put(None)
        await workers

    if __name__ == "__main__":
        asyncio.run(main())

6.1.5 Structure of a Tornado web application

A Tornado web application generally consists of one or more RequestHandler subclasses, an Application object which routes incoming requests to handlers, and a main() function to start the server.

A minimal "hello world" example looks something like this:

    import asyncio
    import tornado

    class MainHandler(tornado.web.RequestHandler):
        def get(self):
            self.write("Hello, world")

    def make_app():
        return tornado.web.Application([
            (r"/", MainHandler),
        ])

    async def main():
        app = make_app()
        app.listen(8888)
        shutdown_event = asyncio.Event()
        await shutdown_event.wait()

    if __name__ == "__main__":
        asyncio.run(main())

The main coroutine

Beginning with Tornado 6.2 and Python 3.10, the recommended pattern for starting a Tornado application is to create a main coroutine to be run with asyncio.run. (In older versions, it was common to do initialization in a regular function and then start the event loop with IOLoop.current().start(). However, this pattern produces deprecation warnings starting in Python 3.10 and will break in some future version of Python.)

When the main function returns, the program exits, so most of the time for a web server main should run forever. Waiting on an asyncio.Event whose set() method is never called is a convenient way to make an asynchronous function run forever. (And if you wish to have main exit early as part of a graceful shutdown procedure, you can call shutdown_event.set() to make it exit.)

The Application object

The Application object is responsible for global configuration, including the routing table that maps requests to handlers.

The routing table is a list of URLSpec objects (or tuples), each of which contains (at least) a regular expression and a handler class.
Order matters; the first matching rule is used.

If the regular expression contains capturing groups, these groups are the path arguments and will be passed to the handler's HTTP method. If a dictionary is passed as the third element of the URLSpec, it supplies the initialization arguments which will be passed to RequestHandler.initialize. Finally, the URLSpec may have a name, which will allow it to be used with RequestHandler.reverse_url.

For example, in this fragment the root URL / is mapped to MainHandler and URLs of the form /story/ followed by a number are mapped to StoryHandler. That number is passed (as a string) to StoryHandler.get.

    class MainHandler(RequestHandler):
        def get(self):
            self.write('<a href="%s">link to story 1</a>' %
                       self.reverse_url("story", "1"))

    class StoryHandler(RequestHandler):
        def initialize(self, db):
            self.db = db

        def get(self, story_id):
            self.write("this is story %s" % story_id)

    app = Application([
        url(r"/", MainHandler),
        url(r"/story/([0-9]+)", StoryHandler, dict(db=db), name="story")
    ])

The Application constructor takes many keyword arguments that can be used to customize the behavior of the application and enable optional features; see Application.settings for the complete list.

Subclassing RequestHandler

Most of the work of a Tornado web application is done in subclasses of RequestHandler. The main entry point for a handler subclass is a method named after the HTTP method being handled: get(), post(), etc. Each handler may define one or more of these methods to handle different HTTP actions. As described above, these methods will be called with arguments corresponding to the capturing groups of the routing rule that matched.

Within a handler, call methods such as RequestHandler.render or RequestHandler.write to produce a response. render() loads a Template by name and renders it with the given arguments.
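The routing rules ("first match wins", capturing groups becoming handler arguments) can be illustrated with a toy dispatcher built on the standard re module. This is not Tornado's implementation; the handlers here are plain functions invented for the sketch:

```python
import re

def main_handler():
    return "main page"

def story_handler(story_id):
    # Capturing groups arrive as strings, just as in Tornado.
    return "this is story %s" % story_id

# A toy routing table in the same shape as Tornado's: (regex, handler).
routes = [
    (r"/", main_handler),
    (r"/story/([0-9]+)", story_handler),
]

def dispatch(path):
    # First matching rule wins; groups become positional arguments.
    for pattern, handler in routes:
        match = re.fullmatch(pattern, path)
        if match:
            return handler(*match.groups())
    return "404"

main = dispatch("/")
story = dispatch("/story/42")
missing = dispatch("/story/not-a-number")
```

A path that matches no pattern falls through to the 404 case, mirroring Tornado's default error handling.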
write() is used for non-template-based output; it accepts strings, bytes, and dictionaries (dicts will be encoded as JSON). Many methods in RequestHandler are designed to be overridden in subclasses and be used throughout the application. It is common to define a BaseHandler class that overrides methods such as write_error and get_current_user and then subclass your own BaseHandler instead of RequestHandler for all your specific handlers. Handling request input The request handler can access the object representing the current request with self.request. See the class definition for HTTPServerRequest for a complete list of attributes. Request data in the formats used by HTML forms will be parsed for you and is made available in methods like get_query_argument and get_body_argument. class MyFormHandler(tornado.web.RequestHandler): def get(self): self.write('<html><body><form action="/myform" method="POST">' '<input type="text" name="message">' '<input type="submit" value="Submit">' '</form></body></html>') def post(self): self.set_header("Content-Type", "text/plain") self.write("You wrote " + self.get_body_argument("message")) Since the HTML form encoding is ambiguous as to whether an argument is a single value or a list with one element, RequestHandler has distinct methods to allow the application to indicate whether or not it expects a list. For lists, use get_query_arguments and get_body_arguments instead of their singular counterparts. Files uploaded via a form are available in self.request.files, which maps names (the name of the HTML <input type="file"> element) to a list of files. Each file is a dictionary of the form {"filename":..., "content_type":..., "body":...}. The files object is only present if the files were uploaded with a form wrapper (i.e. a multipart/form-data Content-Type); if this format was not used the raw uploaded data is available in self.request.body. 
By default uploaded files are fully buffered in memory; if you need to handle files that are too large to comfortably keep in memory see the stream_request_body class decorator. In the demos directory, file_receiver.py shows both methods of receiving file uploads.

Due to the quirks of the HTML form encoding (e.g. the ambiguity around singular versus plural arguments), Tornado does not attempt to unify form arguments with other types of input. In particular, we do not parse JSON request bodies. Applications that wish to use JSON instead of form-encoding may override prepare to parse their requests:

def prepare(self):
    if self.request.headers.get("Content-Type", "").startswith("application/json"):
        self.json_args = json.loads(self.request.body)
    else:
        self.json_args = None

Overriding RequestHandler methods

In addition to get()/post()/etc, certain other methods in RequestHandler are designed to be overridden by subclasses when necessary. On every request, the following sequence of calls takes place:

1. A new RequestHandler object is created on each request.
2. initialize() is called with the initialization arguments from the Application configuration. initialize should typically just save the arguments passed into member variables; it may not produce any output or call methods like send_error.
3. prepare() is called. This is most useful in a base class shared by all of your handler subclasses, as prepare is called no matter which HTTP method is used. prepare may produce output; if it calls finish (or redirect, etc), processing stops here.
4. One of the HTTP methods is called: get(), post(), put(), etc. If the URL regular expression contains capturing groups, they are passed as arguments to this method.
5. When the request is finished, on_finish() is called. This is generally after get() or another HTTP method returns.

All methods designed to be overridden are noted as such in the RequestHandler documentation.
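As a minimal sketch of the sequence above, each hook can simply record that it ran (the handler name and route are made up for illustration):

```python
import tornado.web

# Hypothetical handler illustrating the per-request hook sequence:
# initialize() -> prepare() -> get() -> on_finish().
class TracingHandler(tornado.web.RequestHandler):
    def initialize(self, label=None):
        # Called first, with the arguments from the routing table entry.
        self.label = label
        self.calls = ["initialize"]

    def prepare(self):
        # Called before the HTTP verb method, whichever verb it is.
        self.calls.append("prepare")

    def get(self):
        self.calls.append("get")
        self.write("traced %s" % self.label)

    def on_finish(self):
        # Called after the response has been sent.
        self.calls.append("on_finish")

app = tornado.web.Application([
    (r"/", TracingHandler, dict(label="demo")),
])
```

The dict in the routing tuple is what Tornado passes as keyword arguments to initialize(), as described in step 2 above.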
Some of the most commonly overridden methods include:

• write_error - outputs HTML for use on error pages.
• on_connection_close - called when the client disconnects; applications may choose to detect this case and halt further processing. Note that there is no guarantee that a closed connection can be detected promptly.
• get_current_user - see User authentication.
• get_user_locale - returns Locale object to use for the current user.
• set_default_headers - may be used to set additional headers on the response (such as a custom Server header).

Error Handling

If a handler raises an exception, Tornado will call RequestHandler.write_error to generate an error page. tornado.web.HTTPError can be used to generate a specified status code; all other exceptions return a 500 status.

The default error page includes a stack trace in debug mode and a one-line description of the error (e.g. "500: Internal Server Error") otherwise. To produce a custom error page, override RequestHandler.write_error (probably in a base class shared by all your handlers). This method may produce output normally via methods such as write and render. If the error was caused by an exception, an exc_info triple will be passed as a keyword argument (note that this exception is not guaranteed to be the current exception in sys.exc_info, so write_error must use e.g. traceback.format_exception instead of traceback.format_exc).

It is also possible to generate an error page from regular handler methods instead of write_error by calling set_status, writing a response, and returning. The special exception tornado.web.Finish may be raised to terminate the handler without calling write_error in situations where simply returning is not convenient.

For 404 errors, use the default_handler_class Application setting. This handler should override prepare instead of a more specific method like get() so it works with any HTTP method.
It should produce its error page as described above: either by raising a HTTPError(404) and overriding write_error, or calling self.set_status(404) and producing the response directly in prepare().

Redirection

There are two main ways you can redirect requests in Tornado: RequestHandler.redirect and with the RedirectHandler.

You can use self.redirect() within a RequestHandler method to redirect users elsewhere. There is also an optional parameter permanent which you can use to indicate that the redirection is considered permanent. The default value of permanent is False, which generates a 302 Found HTTP response code and is appropriate for things like redirecting users after successful POST requests. If permanent is True, the 301 Moved Permanently HTTP response code is used, which is useful for e.g. redirecting to a canonical URL for a page in an SEO-friendly manner.

RedirectHandler lets you configure redirects directly in your Application routing table. For example, to configure a single static redirect:

app = tornado.web.Application([
    url(r"/app", tornado.web.RedirectHandler,
        dict(url="http://itunes.apple.com/my-app-id")),
])

RedirectHandler also supports regular expression substitutions. The following rule redirects all requests beginning with /pictures/ to the prefix /photos/ instead:

app = tornado.web.Application([
    url(r"/photos/(.*)", MyPhotoHandler),
    url(r"/pictures/(.*)", tornado.web.RedirectHandler,
        dict(url=r"/photos/{0}")),
])

Unlike RequestHandler.redirect, RedirectHandler uses permanent redirects by default. This is because the routing table does not change at runtime and is presumed to be permanent, while redirects found in handlers are likely to be the result of other logic that may change. To send a temporary redirect with a RedirectHandler, add permanent=False to the RedirectHandler initialization arguments.
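The permanent=False variant described above can be sketched as follows (the /old-path and /new-path routes are hypothetical):

```python
import tornado.web
from tornado.web import url

# Hypothetical routes: a temporary (302) redirect configured entirely in
# the routing table by passing permanent=False to RedirectHandler,
# overriding its permanent-by-default behavior.
app = tornado.web.Application([
    url(r"/old-path", tornado.web.RedirectHandler,
        dict(url="/new-path", permanent=False)),
])
```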
Asynchronous handlers Certain handler methods (including prepare() and the HTTP verb methods get()/post()/etc) may be overridden as coroutines to make the handler asynchronous. For example, here is a simple handler using a coroutine: class MainHandler(tornado.web.RequestHandler): async def get(self): http = tornado.httpclient.AsyncHTTPClient() response = await http.fetch("http://friendfeed-api.com/v2/feed/bret") json = tornado.escape.json_decode(response.body) self.write("Fetched " + str(len(json["entries"])) + " entries " "from the FriendFeed API") Tornado Documentation, Release 6.3.3 For a more advanced asynchronous example, take a look at the chat example application, which implements an AJAX chat room using long polling. Users of long polling may want to override on_connection_close() to clean up after the client closes the connection (but see that method’s docstring for caveats). 6.1.6 Templates and UI Tornado includes a simple, fast, and flexible templating language. This section describes that language as well as related issues such as internationalization. Tornado can also be used with any other Python template language, although there is no provision for integrating these systems into RequestHandler.render. Simply render the template to a string and pass it to RequestHandler. write Configuring templates By default, Tornado looks for template files in the same directory as the .py files that refer to them. To put your tem- plate files in a different directory, use the template_path Application setting (or override RequestHandler. get_template_path if you have different template paths for different handlers). To load templates from a non-filesystem location, subclass tornado.template.BaseLoader and pass an instance as the template_loader application setting. 
Compiled templates are cached by default; to turn off this caching and reload templates so changes to the underlying files are always visible, use the application settings compiled_template_cache=False or debug=True.

Template syntax

A Tornado template is just HTML (or any other text-based format) with Python control sequences and expressions embedded within the markup:

<html>
  <head>
    <title>{{ title }}</title>
  </head>
  <body>
    <ul>
      {% for item in items %}
        <li>{{ escape(item) }}</li>
      {% end %}
    </ul>
  </body>
</html>

If you saved this template as "template.html" and put it in the same directory as your Python file, you could render this template with:

class MainHandler(tornado.web.RequestHandler):
    def get(self):
        items = ["Item 1", "Item 2", "Item 3"]
        self.render("template.html", title="My title", items=items)

Tornado templates support control statements and expressions. Control statements are surrounded by {% and %}, e.g. {% if len(items) > 2 %}. Expressions are surrounded by {{ and }}, e.g. {{ items[0] }}.

Control statements more or less map exactly to Python statements. We support if, for, while, and try, all of which are terminated with {% end %}. We also support template inheritance using the extends and block statements, which are described in detail in the documentation for the tornado.template module.

Expressions can be any Python expression, including function calls. Template code is executed in a namespace that includes the following objects and functions. (Note that this list applies to templates rendered using RequestHandler.render and render_string. If you're using the tornado.template module directly outside of a RequestHandler many of these entries are not present).
• escape: alias for tornado.escape.xhtml_escape • xhtml_escape: alias for tornado.escape.xhtml_escape • url_escape: alias for tornado.escape.url_escape • json_encode: alias for tornado.escape.json_encode • squeeze: alias for tornado.escape.squeeze • linkify: alias for tornado.escape.linkify • datetime: the Python datetime module • handler: the current RequestHandler object • request: alias for handler.request • current_user: alias for handler.current_user • locale: alias for handler.locale • _: alias for handler.locale.translate • static_url: alias for handler.static_url • xsrf_form_html: alias for handler.xsrf_form_html • reverse_url: alias for Application.reverse_url • All entries from the ui_methods and ui_modules Application settings • Any keyword arguments passed to render or render_string When you are building a real application, you are going to want to use all of the features of Tornado templates, espe- cially template inheritance. Read all about those features in the tornado.template section (some features, including UIModules are implemented in the tornado.web module) Under the hood, Tornado templates are translated directly to Python. The expressions you include in your template are copied verbatim into a Python function representing your template. We don’t try to prevent anything in the template language; we created it explicitly to provide the flexibility that other, stricter templating systems prevent. Consequently, if you write random stuff inside of your template expressions, you will get random Python errors when you execute the template. All template output is escaped by default, using the tornado.escape.xhtml_escape function. This behavior can be changed globally by passing autoescape=None to the Application or tornado.template.Loader constructors, for a template file with the {% autoescape None %} directive, or for a single expression by replacing {{ ... }} with {% raw ...%}. 
Additionally, in each of these places the name of an alternative escaping function may be used instead of None.

Note that while Tornado's automatic escaping is helpful in avoiding XSS vulnerabilities, it is not sufficient in all cases. Expressions that appear in certain locations, such as in JavaScript or CSS, may need additional escaping. Additionally, either care must be taken to always use double quotes and xhtml_escape in HTML attributes that may contain untrusted content, or a separate escaping function must be used for attributes (see e.g. this blog post).

Internationalization

The locale of the current user (whether they are logged in or not) is always available as self.locale in the request handler and as locale in templates. The name of the locale (e.g., en_US) is available as locale.name, and you can translate strings with the Locale.translate method. Templates also have the global function call _() available for string translation. The translate function has two forms:

_("Translate this string")

which translates the string directly based on the current locale, and:

_("A person liked this", "%(num)d people liked this", len(people)) % {"num": len(people)}

which translates a string that can be singular or plural based on the value of the third argument. In the example above, a translation of the first string will be returned if len(people) is 1, or a translation of the second string will be returned otherwise.

The most common pattern for translations is to use Python named placeholders for variables (the %(num)d in the example above) since placeholders can move around on translation.
Here is a properly internationalized template:

<html>
  <head>
    <title>FriendFeed - {{ _("Sign in") }}</title>
  </head>
  <body>
    <form action="{{ request.path }}" method="post">
      <div>{{ _("Username") }} <input type="text" name="username"/></div>
      <div>{{ _("Password") }} <input type="password" name="password"/></div>
      <div><input type="submit" value="{{ _("Sign in") }}"/></div>
      {% module xsrf_form_html() %}
    </form>
  </body>
</html>

By default, we detect the user's locale using the Accept-Language header sent by the user's browser. We choose en_US if we can't find an appropriate Accept-Language value. If you let users set their locale as a preference, you can override this default locale selection by overriding RequestHandler.get_user_locale:

class BaseHandler(tornado.web.RequestHandler):
    def get_current_user(self):
        user_id = self.get_signed_cookie("user")
        if not user_id:
            return None
        return self.backend.get_user_by_id(user_id)

    def get_user_locale(self):
        if "locale" not in self.current_user.prefs:
            # Use the Accept-Language header
            return None
        return self.current_user.prefs["locale"]

If get_user_locale returns None, we fall back on the Accept-Language header.

The tornado.locale module supports loading translations in two formats: the .mo format used by gettext and related tools, and a simple .csv format. An application will generally call either tornado.locale.load_translations or tornado.locale.load_gettext_translations once at startup; see those methods for more details on the supported formats. You can get the list of supported locales in your application with tornado.locale.get_supported_locales(). The user's locale is chosen to be the closest match based on the supported locales. For example, if the user's locale is es_GT, and the es locale is supported, self.locale will be es for that request. We fall back on en_US if no close match can be found.
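As a sketch of the .csv workflow described above, assuming a hypothetical fr_FR translation file written to a temporary directory (the French strings are made up for illustration):

```python
import os
import tempfile

import tornado.locale

# Hypothetical fr_FR.csv: each row maps a source string to its translation.
tmpdir = tempfile.mkdtemp()
with open(os.path.join(tmpdir, "fr_FR.csv"), "w", encoding="utf-8") as f:
    f.write('"Sign in","Se connecter"\n')

# Load all *.csv files in the directory; the locale code is the file name.
tornado.locale.load_translations(tmpdir)

# After loading, fr_FR is a supported locale and get() returns it directly.
locale = tornado.locale.get("fr_FR")
translated = locale.translate("Sign in")
```

A request for an unsupported code such as es_GT would instead fall back to the closest supported match, as described above.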
UI modules Tornado supports UI modules to make it easy to support standard, reusable UI widgets across your application. UI modules are like special function calls to render components of your page, and they can come packaged with their own CSS and JavaScript. For example, if you are implementing a blog, and you want to have blog entries appear on both the blog home page and on each blog entry page, you can make an Entry module to render them on both pages. First, create a Python module for your UI modules, e.g. uimodules.py: class Entry(tornado.web.UIModule): def render(self, entry, show_comments=False): return self.render_string( "module-entry.html", entry=entry, show_comments=show_comments) Tell Tornado to use uimodules.py using the ui_modules setting in your application: from . import uimodules class HomeHandler(tornado.web.RequestHandler): def get(self): entries = self.db.query("SELECT * FROM entries ORDER BY date DESC") self.render("home.html", entries=entries) class EntryHandler(tornado.web.RequestHandler): def get(self, entry_id): entry = self.db.get("SELECT * FROM entries WHERE id = %s", entry_id) if not entry: raise tornado.web.HTTPError(404) self.render("entry.html", entry=entry) settings = { "ui_modules": uimodules, } application = tornado.web.Application([ (r"/", HomeHandler), (r"/entry/([0-9]+)", EntryHandler), ], **settings) Within a template, you can call a module with the {% module %} statement. 
For example, you could call the Entry module from both home.html: {% for entry in entries %} {% module Entry(entry) %} {% end %} and entry.html: Tornado Documentation, Release 6.3.3 {% module Entry(entry, show_comments=True) %} Modules can include custom CSS and JavaScript functions by overriding the embedded_css, embedded_javascript, javascript_files, or css_files methods: class Entry(tornado.web.UIModule): def embedded_css(self): return ".entry { margin-bottom: 1em; }" def render(self, entry, show_comments=False): return self.render_string( "module-entry.html", show_comments=show_comments) Module CSS and JavaScript will be included once no matter how many times a module is used on a page. CSS is always included in the <head> of the page, and JavaScript is always included just before the </body> tag at the end of the page. When additional Python code is not required, a template file itself may be used as a module. For example, the preceding example could be rewritten to put the following in module-entry.html: {{ set_resources(embedded_css=".entry { margin-bottom: 1em; }") }} <!-- more template html... --> This revised template module would be invoked with: {% module Template("module-entry.html", show_comments=True) %} The set_resources function is only available in templates invoked via {% module Template(...) %}. Unlike the {% include ... %} directive, template modules have a distinct namespace from their containing template - they can only see the global template namespace and their own keyword arguments. 6.1.7 Authentication and security Cookies and signed cookies You can set cookies in the user’s browser with the set_cookie method: class MainHandler(tornado.web.RequestHandler): def get(self): if not self.get_cookie("mycookie"): self.set_cookie("mycookie", "myvalue") self.write("Your cookie was not set yet!") else: self.write("Your cookie was set!") Cookies are not secure and can easily be modified by clients. 
If you need to set cookies to, e.g., identify the cur- rently logged in user, you need to sign your cookies to prevent forgery. Tornado supports signed cookies with the set_signed_cookie and get_signed_cookie methods. To use these methods, you need to specify a secret key named cookie_secret when you create your application. You can pass in application settings as keyword arguments to your application: application = tornado.web.Application([ (r"/", MainHandler), ], cookie_secret="__TODO:_GENERATE_YOUR_OWN_RANDOM_VALUE_HERE__") Tornado Documentation, Release 6.3.3 Signed cookies contain the encoded value of the cookie in addition to a timestamp and an HMAC signature. If the cookie is old or if the signature doesn’t match, get_signed_cookie will return None just as if the cookie isn’t set. The secure version of the example above: class MainHandler(tornado.web.RequestHandler): def get(self): if not self.get_signed_cookie("mycookie"): self.set_signed_cookie("mycookie", "myvalue") self.write("Your cookie was not set yet!") else: self.write("Your cookie was set!") Tornado’s signed cookies guarantee integrity but not confidentiality. That is, the cookie cannot be modified but its contents can be seen by the user. The cookie_secret is a symmetric key and must be kept secret – anyone who obtains the value of this key could produce their own signed cookies. By default, Tornado’s signed cookies expire after 30 days. To change this, use the expires_days keyword argument to set_signed_cookie and the max_age_days argument to get_signed_cookie. These two values are passed separately so that you may e.g. have a cookie that is valid for 30 days for most purposes, but for certain sensitive actions (such as changing billing information) you use a smaller max_age_days when reading the cookie. Tornado also supports multiple signing keys to enable signing key rotation. cookie_secret then must be a dict with integer key versions as keys and the corresponding secrets as values. 
The currently used signing key must then be set as key_version application setting but all other keys in the dict are allowed for cookie signature validation, if the correct key version is set in the cookie. To implement cookie updates, the current signing key version can be queried via get_signed_cookie_key_version. User authentication The currently authenticated user is available in every request handler as self.current_user, and in every template as current_user. By default, current_user is None. To implement user authentication in your application, you need to override the get_current_user() method in your request handlers to determine the current user based on, e.g., the value of a cookie. Here is an example that lets users log into the application simply by specifying a nickname, which is then saved in a cookie: class BaseHandler(tornado.web.RequestHandler): def get_current_user(self): return self.get_signed_cookie("user") class MainHandler(BaseHandler): def get(self): if not self.current_user: self.redirect("/login") return name = tornado.escape.xhtml_escape(self.current_user) self.write("Hello, " + name) class LoginHandler(BaseHandler): def get(self): self.write('<html><body><form action="/login" method="post">' 'Name: <input type="text" name="name">' '<input type="submit" value="Sign in">' '</form></body></html>') (continues on next page) Tornado Documentation, Release 6.3.3 (continued from previous page) def post(self): self.set_signed_cookie("user", self.get_argument("name")) self.redirect("/") application = tornado.web.Application([ (r"/", MainHandler), (r"/login", LoginHandler), ], cookie_secret="__TODO:_GENERATE_YOUR_OWN_RANDOM_VALUE_HERE__") You can require that the user be logged in using the Python decorator tornado.web.authenticated. If a request goes to a method with this decorator, and the user is not logged in, they will be redirected to login_url (another application setting). 
The example above could be rewritten: class MainHandler(BaseHandler): @tornado.web.authenticated def get(self): name = tornado.escape.xhtml_escape(self.current_user) self.write("Hello, " + name) settings = { "cookie_secret": "__TODO:_GENERATE_YOUR_OWN_RANDOM_VALUE_HERE__", "login_url": "/login", } application = tornado.web.Application([ (r"/", MainHandler), (r"/login", LoginHandler), ], **settings) If you decorate post() methods with the authenticated decorator, and the user is not logged in, the server will send a 403 response. The @authenticated decorator is simply shorthand for if not self.current_user: self. redirect() and may not be appropriate for non-browser-based login schemes. Check out the Tornado Blog example application for a complete example that uses authentication (and stores user data in a PostgreSQL database). Third party authentication The tornado.auth module implements the authentication and authorization protocols for a number of the most pop- ular sites on the web, including Google/Gmail, Facebook, Twitter, and FriendFeed. The module includes methods to log users in via these sites and, where applicable, methods to authorize access to the service so you can, e.g., download a user’s address book or publish a Twitter message on their behalf. Here is an example handler that uses Google for authentication, saving the Google credentials in a cookie for later access: class GoogleOAuth2LoginHandler(tornado.web.RequestHandler, tornado.auth.GoogleOAuth2Mixin): async def get(self): if self.get_argument('code', False): user = await self.get_authenticated_user( redirect_uri='http://your.site.com/auth/google', code=self.get_argument('code')) (continues on next page) Tornado Documentation, Release 6.3.3 (continued from previous page) # Save the user with e.g. 
set_signed_cookie else: await self.authorize_redirect( redirect_uri='http://your.site.com/auth/google', client_id=self.settings['google_oauth']['key'], scope=['profile', 'email'], response_type='code', extra_params={'approval_prompt': 'auto'}) See the tornado.auth module documentation for more details. Cross-site request forgery protection Cross-site request forgery, or XSRF, is a common problem for personalized web applications. The generally accepted solution to prevent XSRF is to cookie every user with an unpredictable value and include that value as an additional argument with every form submission on your site. If the cookie and the value in the form submission do not match, then the request is likely forged. Tornado comes with built-in XSRF protection. To include it in your site, include the application setting xsrf_cookies: settings = { "cookie_secret": "__TODO:_GENERATE_YOUR_OWN_RANDOM_VALUE_HERE__", "login_url": "/login", "xsrf_cookies": True, } application = tornado.web.Application([ (r"/", MainHandler), (r"/login", LoginHandler), ], **settings) If xsrf_cookies is set, the Tornado web application will set the _xsrf cookie for all users and reject all POST, PUT, and DELETE requests that do not contain a correct _xsrf value. If you turn this setting on, you need to instrument all forms that submit via POST to contain this field. You can do this with the special UIModule xsrf_form_html(), available in all templates: <form action="/new_message" method="post"> {% module xsrf_form_html() %} <input type="text" name="message"/> <input type="submit" value="Post"/> </form> If you submit AJAX POST requests, you will also need to instrument your JavaScript to include the _xsrf value with each request. This is the jQuery function we use at FriendFeed for AJAX POST requests that automatically adds the _xsrf value to all requests: function getCookie(name) { var r = document.cookie.match("\\b" + name + "=([^;]*)\\b"); return r ? 
r[1] : undefined;
}

jQuery.postJSON = function(url, args, callback) {
    args._xsrf = getCookie("_xsrf");
    $.ajax({url: url, data: $.param(args), dataType: "text", type: "POST",
            success: function(response) {
        callback(eval("(" + response + ")"));
    }});
};

For PUT and DELETE requests (as well as POST requests that do not use form-encoded arguments), the XSRF token may also be passed via an HTTP header named X-XSRFToken. The XSRF cookie is normally set when xsrf_form_html is used, but in a pure-JavaScript application that does not use any regular forms you may need to access self.xsrf_token manually (just reading the property is enough to set the cookie as a side effect).

If you need to customize XSRF behavior on a per-handler basis, you can override RequestHandler.check_xsrf_cookie(). For example, if you have an API whose authentication does not use cookies, you may want to disable XSRF protection by making check_xsrf_cookie() do nothing. However, if you support both cookie and non-cookie-based authentication, it is important that XSRF protection be used whenever the current request is authenticated with a cookie.

DNS Rebinding

DNS rebinding is an attack that can bypass the same-origin policy and allow external sites to access resources on private networks. This attack involves a DNS name (with a short TTL) that alternates between returning an IP address controlled by the attacker and one controlled by the victim (often a guessable private IP address such as 127.0.0.1 or 192.168.1.1).

Applications that use TLS are not vulnerable to this attack (because the browser will display certificate mismatch warnings that block automated access to the target site).
Applications that cannot use TLS and rely on network-level access controls (for example, assuming that a server on 127.0.0.1 can only be accessed by the local machine) should guard against DNS rebinding by validating the Host HTTP header. This means passing a restrictive hostname pattern to either a HostMatches router or the first argument of Application.add_handlers:

# BAD: uses a default host pattern of r'.*'
app = Application([('/foo', FooHandler)])

# GOOD: only matches localhost or its ip address.
app = Application()
app.add_handlers(r'(localhost|127\.0\.0\.1)',
                 [('/foo', FooHandler)])

# GOOD: same as previous example using tornado.routing.
app = Application([
    (HostMatches(r'(localhost|127\.0\.0\.1)'),
     [('/foo', FooHandler)]),
])

In addition, the default_host argument to Application and the DefaultHostMatches router must not be used in applications that may be vulnerable to DNS rebinding, because it has a similar effect to a wildcard host pattern.

6.1.8 Running and deploying

Since Tornado supplies its own HTTPServer, running and deploying it is a little different from other Python web frameworks. Instead of configuring a WSGI container to find your application, you write a main() function that starts the server:

import asyncio

async def main():
    app = make_app()
    app.listen(8888)
    await asyncio.Event().wait()

if __name__ == '__main__':
    asyncio.run(main())

Configure your operating system or process manager to run this program to start the server. Please note that it may be necessary to increase the number of open files per process (to avoid "Too many open files" errors). To raise this limit (setting it to 50000 for example) you can use the ulimit command, modify /etc/security/limits.conf or set minfds in your supervisord config.

Processes and ports

Due to the Python GIL (Global Interpreter Lock), it is necessary to run multiple Python processes to take full advantage of multi-CPU machines.
Typically it is best to run one process per CPU. The simplest way to do this is to add reuse_port=True to your listen() calls and then simply run multiple copies of your application. Tornado also has the ability to start multiple processes from a single parent process (note that this does not work on Windows). This requires some alterations to application startup. def main(): sockets = bind_sockets(8888) tornado.process.fork_processes(0) async def post_fork_main(): server = TCPServer() server.add_sockets(sockets) await asyncio.Event().wait() asyncio.run(post_fork_main()) This is another way to start multiple processes and have them all share the same port, although it has some limitations. First, each child process will have its own IOLoop, so it is important that nothing touches the global IOLoop instance (even indirectly) before the fork. Second, it is difficult to do zero-downtime updates in this model. Finally, since all the processes share the same port it is more difficult to monitor them individually. For more sophisticated deployments, it is recommended to start the processes independently, and have each one listen on a different port. The “process groups” feature of supervisord is one good way to arrange this. When each process uses a different port, an external load balancer such as HAProxy or nginx is usually needed to present a single address to outside visitors. Tornado Documentation, Release 6.3.3 Running behind a load balancer When running behind a load balancer like nginx, it is recommended to pass xheaders=True to the HTTPServer constructor. This will tell Tornado to use headers like X-Real-IP to get the user’s IP address instead of attributing all traffic to the balancer’s IP address. This is a barebones nginx config file that is structurally similar to the one we use at FriendFeed. 
It assumes nginx and the Tornado servers are running on the same machine, and the four Tornado servers are running on ports 8000 - 8003:

    user nginx;
    worker_processes 1;

    error_log /var/log/nginx/error.log;
    pid /var/run/nginx.pid;

    events {
        worker_connections 1024;
        use epoll;
    }

    http {
        # Enumerate all the Tornado servers here
        upstream frontends {
            server 127.0.0.1:8000;
            server 127.0.0.1:8001;
            server 127.0.0.1:8002;
            server 127.0.0.1:8003;
        }

        include /etc/nginx/mime.types;
        default_type application/octet-stream;

        access_log /var/log/nginx/access.log;

        keepalive_timeout 65;
        proxy_read_timeout 200;
        sendfile on;
        tcp_nopush on;
        tcp_nodelay on;
        gzip on;
        gzip_min_length 1000;
        gzip_proxied any;
        gzip_types text/plain text/html text/css text/xml
                   application/x-javascript application/xml
                   application/atom+xml text/javascript;

        # Only retry if there was a communication error, not a timeout
        # on the Tornado server (to avoid propagating "queries of death"
        # to all frontends)
        proxy_next_upstream error;

        server {
            listen 80;

            # Allow file uploads
            client_max_body_size 50M;

            location ^~ /static/ {
                root /var/www;
                if ($query_string) {
                    expires max;
                }
            }
            location = /favicon.ico {
                rewrite (.*) /static/favicon.ico;
            }
            location = /robots.txt {
                rewrite (.*) /static/robots.txt;
            }

            location / {
                proxy_pass_header Server;
                proxy_set_header Host $http_host;
                proxy_redirect off;
                proxy_set_header X-Real-IP $remote_addr;
                proxy_set_header X-Scheme $scheme;
                proxy_pass http://frontends;
            }
        }
    }

Static files and aggressive file caching

You can serve static files from Tornado by specifying the static_path setting in your application:

    settings = {
        "static_path": os.path.join(os.path.dirname(__file__), "static"),
        "cookie_secret": "__TODO:_GENERATE_YOUR_OWN_RANDOM_VALUE_HERE__",
        "login_url": "/login",
        "xsrf_cookies": True,
    }
    application = tornado.web.Application([
        (r"/", MainHandler),
        (r"/login", LoginHandler),
        (r"/(apple-touch-icon\.png)", tornado.web.StaticFileHandler,
         dict(path=settings['static_path'])),
    ], **settings)

This setting will automatically make all requests that start with /static/ serve from that static directory, e.g. http://localhost:8888/static/foo.png will serve the file foo.png from the specified static directory. We also automatically serve /robots.txt and /favicon.ico from the static directory (even though they don't start with the /static/ prefix).

In the above settings, we have explicitly configured Tornado to serve apple-touch-icon.png from the root with the StaticFileHandler, though it is physically in the static file directory. (The capturing group in that regular expression is necessary to tell StaticFileHandler the requested filename; recall that capturing groups are passed to handlers as method arguments.) You could do the same thing to serve e.g. sitemap.xml from the site root. Of course, you can also avoid faking a root apple-touch-icon.png by using the appropriate <link /> tag in your HTML.

To improve performance, it is generally a good idea for browsers to cache static resources aggressively so browsers won't send unnecessary If-Modified-Since or Etag requests that might block the rendering of the page. Tornado supports this out of the box with static content versioning.

To use this feature, use the static_url method in your templates rather than typing the URL of the static file directly in your HTML:

    <html>
       <head>
          <title>FriendFeed - {{ _("Home") }}</title>
       </head>
       <body>
         <div><img src="{{ static_url("images/logo.png") }}"/></div>
       </body>
     </html>

The static_url() function will translate that relative path to a URI that looks like /static/images/logo.png?v=aae54. The v argument is a hash of the content in logo.png, and its presence makes the Tornado server send cache headers to the user's browser that will make the browser cache the content indefinitely.
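The core idea behind the ?v= tag can be sketched with the standard library (the hash algorithm, tag length, and helper names here are illustrative assumptions, not Tornado's exact implementation):

```python
import hashlib

def version_tag(content: bytes, length: int = 5) -> str:
    # The tag is derived from the file's bytes, so it changes if and only if
    # the content changes.
    return hashlib.sha512(content).hexdigest()[:length]

def versioned_url(path: str, content: bytes) -> str:
    # Hypothetical stand-in for static_url(): append the content hash as ?v=.
    return "/static/%s?v=%s" % (path, version_tag(content))

print(versioned_url("images/logo.png", b"old logo bytes"))
print(versioned_url("images/logo.png", b"new logo bytes"))  # different v
```

Because the URL itself changes when the file changes, the server can safely tell browsers to cache each versioned URL forever.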
Since the v argument is based on the content of the file, if you update a file and restart your server, it will start sending a new v value, so the user's browser will automatically fetch the new file. If the file's contents don't change, the browser will continue to use a locally cached copy without ever checking for updates on the server, significantly improving rendering performance.

In production, you probably want to serve static files from a more optimized static file server like nginx. You can configure almost any web server to recognize the version tags used by static_url() and set caching headers accordingly. Here is the relevant portion of the nginx configuration we use at FriendFeed:

    location /static/ {
        root /var/friendfeed/static;
        if ($query_string) {
            expires max;
        }
    }

Debug mode and automatic reloading

If you pass debug=True to the Application constructor, the app will be run in debug/development mode. In this mode, several features intended for convenience while developing will be enabled (each of which is also available as an individual flag; if both are specified the individual flag takes precedence):

• autoreload=True: The app will watch for changes to its source files and reload itself when anything changes. This reduces the need to manually restart the server during development. However, certain failures (such as syntax errors at import time) can still take the server down in a way that debug mode cannot currently recover from.

• compiled_template_cache=False: Templates will not be cached.

• static_hash_cache=False: Static file hashes (used by the static_url function) will not be cached.

• serve_traceback=True: When an exception in a RequestHandler is not caught, an error page including a stack trace will be generated.

Autoreload mode is not compatible with the multi-process mode of HTTPServer. You must not give HTTPServer.start an argument other than 1 (or call tornado.process.fork_processes) if you are using autoreload mode.

The automatic reloading feature of debug mode is available as a standalone module in tornado.autoreload. The two can be used in combination to provide extra robustness against syntax errors: set autoreload=True within the app to detect changes while it is running, and start it with python -m tornado.autoreload myserver.py to catch any syntax errors or other errors at startup.

Reloading loses any Python interpreter command-line arguments (e.g. -u) because it re-executes Python using sys.executable and sys.argv. Additionally, modifying these variables will cause reloading to behave incorrectly.

On some platforms (including Windows and Mac OSX prior to 10.6), the process cannot be updated "in-place", so when a code change is detected the old server exits and a new one starts. This has been known to confuse some IDEs.

6.2 Web framework

6.2.1 tornado.web — RequestHandler and Application classes

tornado.web provides a simple web framework with asynchronous features that allow it to scale to large numbers of open connections, making it ideal for long polling. Here is a simple "Hello, world" example app:

    import asyncio
    import tornado

    class MainHandler(tornado.web.RequestHandler):
        def get(self):
            self.write("Hello, world")

    async def main():
        application = tornado.web.Application([
            (r"/", MainHandler),
        ])
        application.listen(8888)
        await asyncio.Event().wait()

    if __name__ == "__main__":
        asyncio.run(main())

See the User's guide for additional information.

Thread-safety notes

In general, methods on RequestHandler and elsewhere in Tornado are not thread-safe. In particular, methods such as write(), finish(), and flush() must only be called from the main thread.
If you use multiple threads it is important to use IOLoop.add_callback to transfer control back to the main thread before finishing the request, or to limit your use of other threads to IOLoop.run_in_executor and ensure that your callbacks running in the executor do not refer to Tornado objects.

Request handlers

class tornado.web.RequestHandler(...)
    Base class for HTTP request handlers.
    Subclasses must define at least one of the methods defined in the "Entry points" section below. Applications should not construct RequestHandler objects directly and subclasses should not override __init__ (override initialize instead).

Entry points

RequestHandler.initialize() → None
    Hook for subclass initialization. Called for each request.
    A dictionary passed as the third argument of a URLSpec will be supplied as keyword arguments to initialize().
    Example:

        class ProfileHandler(RequestHandler):
            def initialize(self, database):
                self.database = database

            def get(self, username):
                ...

        app = Application([
            (r'/user/(.*)', ProfileHandler, dict(database=database)),
        ])

RequestHandler.prepare() → Optional[Awaitable[None]]
    Called at the beginning of a request before get/post/etc.
    Override this method to perform common initialization regardless of the request method.
    Asynchronous support: Use async def or decorate this method with gen.coroutine to make it asynchronous. If this method returns an Awaitable execution will not proceed until the Awaitable is done.
    New in version 3.1: Asynchronous support.

RequestHandler.on_finish() → None
    Called after the end of a request.
    Override this method to perform cleanup, logging, etc. This method is a counterpart to prepare. on_finish may not produce any output, as it is called after the response has been sent to the client.

Implement any of the following methods (collectively known as the HTTP verb methods) to handle the corresponding HTTP method.
These methods can be made asynchronous with the async def keyword or gen.coroutine decorator.

The arguments to these methods come from the URLSpec: Any capturing groups in the regular expression become arguments to the HTTP verb methods (keyword arguments if the group is named, positional arguments if it's unnamed).

To support a method not on this list, override the class variable SUPPORTED_METHODS:

    class WebDAVHandler(RequestHandler):
        SUPPORTED_METHODS = RequestHandler.SUPPORTED_METHODS + ('PROPFIND',)

        def propfind(self):
            pass

RequestHandler.get(*args: str, **kwargs: str) → None
RequestHandler.head(*args: str, **kwargs: str) → None
RequestHandler.post(*args: str, **kwargs: str) → None
RequestHandler.delete(*args: str, **kwargs: str) → None
RequestHandler.patch(*args: str, **kwargs: str) → None
RequestHandler.put(*args: str, **kwargs: str) → None
RequestHandler.options(*args: str, **kwargs: str) → None

Input

The argument methods provide support for HTML form-style arguments. These methods are available in both singular and plural forms because HTML forms are ambiguous and do not distinguish between a singular argument and a list containing one entry. If you wish to use other formats for arguments (for example, JSON), parse self.request.body yourself:

    def prepare(self):
        if self.request.headers['Content-Type'] == 'application/x-json':
            self.args = json_decode(self.request.body)
        # Access self.args directly instead of using self.get_argument.

RequestHandler.get_argument(name: str, default: str, strip: bool = True) → str
RequestHandler.get_argument(name: str, default: _ArgDefaultMarker = _ARG_DEFAULT, strip: bool = True) → str
RequestHandler.get_argument(name: str, default: None, strip: bool = True) → Optional[str]
    Returns the value of the argument with the given name.
    If default is not provided, the argument is considered to be required, and we raise a MissingArgumentError if it is missing.
    If the argument appears in the request more than once, we return the last value.
    This method searches both the query and body arguments.

RequestHandler.get_arguments(name: str, strip: bool = True) → List[str]
    Returns a list of the arguments with the given name.
    If the argument is not present, returns an empty list.
    This method searches both the query and body arguments.

RequestHandler.get_query_argument(name: str, default: Union[None, str, RAISE] = RAISE, strip: bool = True) → Optional[str]
    Returns the value of the argument with the given name from the request query string.
    If default is not provided, the argument is considered to be required, and we raise a MissingArgumentError if it is missing.
    If the argument appears in the url more than once, we return the last value.
    New in version 3.2.

RequestHandler.get_query_arguments(name: str, strip: bool = True) → List[str]
    Returns a list of the query arguments with the given name.
    If the argument is not present, returns an empty list.
    New in version 3.2.

RequestHandler.get_body_argument(name: str, default: Union[None, str, RAISE] = RAISE, strip: bool = True) → Optional[str]
    Returns the value of the argument with the given name from the request body.
    If default is not provided, the argument is considered to be required, and we raise a MissingArgumentError if it is missing.
    If the argument appears in the url more than once, we return the last value.
    New in version 3.2.

RequestHandler.get_body_arguments(name: str, strip: bool = True) → List[str]
    Returns a list of the body arguments with the given name.
    If the argument is not present, returns an empty list.
    New in version 3.2.

RequestHandler.decode_argument(value: bytes, name: Optional[str] = None) → str
    Decodes an argument from the request.
    The argument has been percent-decoded and is now a byte string.
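The "last value wins" rule for repeated arguments can be mirrored with the standard library alone (these helpers are illustrative stand-ins for the Tornado methods, operating on a raw query string; the simplified default handling is an assumption of this sketch):

```python
from urllib.parse import parse_qsl

def get_arguments(query: str, name: str) -> list:
    # All values for a repeated argument, in order of appearance.
    return [v for k, v in parse_qsl(query) if k == name]

def get_argument(query: str, name: str, default=None):
    values = get_arguments(query, name)
    if not values:
        if default is None:
            raise KeyError(name)  # Tornado raises MissingArgumentError here
        return default
    return values[-1]  # repeated argument: the last value wins

print(get_arguments("a=1&a=2&b=3", "a"))  # ['1', '2']
print(get_argument("a=1&a=2&b=3", "a"))   # '2'
```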
    By default, this method decodes the argument as utf-8 and returns a unicode string, but this may be overridden in subclasses.
    This method is used as a filter for both get_argument() and for values extracted from the url and passed to get()/post()/etc.
    The name of the argument is provided if known, but may be None (e.g. for unnamed groups in the url regex).

RequestHandler.request
    The tornado.httputil.HTTPServerRequest object containing additional request parameters including e.g. headers and body data.

RequestHandler.path_args
RequestHandler.path_kwargs
    The path_args and path_kwargs attributes contain the positional and keyword arguments that are passed to the HTTP verb methods. These attributes are set before those methods are called, so the values are available during prepare.

RequestHandler.data_received(chunk: bytes) → Optional[Awaitable[None]]
    Implement this method to handle streamed request data.
    Requires the stream_request_body decorator.
    May be a coroutine for flow control.

Output

RequestHandler.set_status(status_code: int, reason: Optional[str] = None) → None
    Sets the status code for our response.
    Parameters
        • status_code (int) – Response status code.
        • reason (str) – Human-readable reason phrase describing the status code. If None, it will be filled in from http.client.responses or "Unknown".
    Changed in version 5.0: No longer validates that the response code is in http.client.responses.

RequestHandler.set_header(name: str, value: Union[bytes, str, int, Integral, datetime]) → None
    Sets the given response header name and value.
    All header values are converted to strings (datetime objects are formatted according to the HTTP specification for the Date header).

RequestHandler.add_header(name: str, value: Union[bytes, str, int, Integral, datetime]) → None
    Adds the given response header and value.
    Unlike set_header, add_header may be called multiple times to return multiple values for the same header.
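A short sketch of what a decode_argument override receives and returns, using only the standard library (the free function here is a stand-in for the method; Tornado passes the percent-decoded bytes in for you):

```python
from urllib.parse import unquote_to_bytes

def decode_argument(value: bytes, name=None) -> str:
    # Default behaviour described above: the value has already been
    # percent-decoded to a byte string; decode it as utf-8.
    return value.decode("utf-8")

raw = unquote_to_bytes("caf%C3%A9")  # b'caf\xc3\xa9'
print(decode_argument(raw))          # 'café'
```

A subclass override could instead decode as latin-1, reject invalid sequences, or return the raw bytes wrapped differently.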
RequestHandler.clear_header(name: str) → None
    Clears an outgoing header, undoing a previous set_header call.
    Note that this method does not apply to multi-valued headers set by add_header.

RequestHandler.set_default_headers() → None
    Override this to set HTTP headers at the beginning of the request.
    For example, this is the place to set a custom Server header. Note that setting such headers in the normal flow of request processing may not do what you want, since headers may be reset during error handling.

RequestHandler.write(chunk: Union[str, bytes, dict]) → None
    Writes the given chunk to the output buffer.
    To write the output to the network, use the flush() method below.
    If the given chunk is a dictionary, we write it as JSON and set the Content-Type of the response to be application/json. (if you want to send JSON as a different Content-Type, call set_header after calling write()).
    Note that lists are not converted to JSON because of a potential cross-site security vulnerability. All JSON output should be wrapped in a dictionary. More details at http://haacked.com/archive/2009/06/25/json-hijacking.aspx/ and https://github.com/facebook/tornado/issues/1009

RequestHandler.flush(include_footers: bool = False) → Future[None]
    Flushes the current output buffer to the network.
    Changed in version 4.0: Now returns a Future if no callback is given.
    Changed in version 6.0: The callback argument was removed.

RequestHandler.finish(chunk: Optional[Union[str, bytes, dict]] = None) → Future[None]
    Finishes this response, ending the HTTP request.
    Passing a chunk to finish() is equivalent to passing that chunk to write() and then calling finish() with no arguments.
    Returns a Future which may optionally be awaited to track the sending of the response to the client. This Future resolves when all the response data has been sent, and raises an error if the connection is closed before all data can be sent.
    Changed in version 5.1: Now returns a Future instead of None.

RequestHandler.render(template_name: str, **kwargs: Any) → Future[None]
    Renders the template with the given arguments as the response.
    render() calls finish(), so no other output methods can be called after it.
    Returns a Future with the same semantics as the one returned by finish. Awaiting this Future is optional.
    Changed in version 5.1: Now returns a Future instead of None.

RequestHandler.render_string(template_name: str, **kwargs: Any) → bytes
    Generate the given template with the given arguments.
    We return the generated byte string (in utf8). To generate and write a template as a response, use render() above.

RequestHandler.get_template_namespace() → Dict[str, Any]
    Returns a dictionary to be used as the default template namespace.
    May be overridden by subclasses to add or modify values.
    The results of this method will be combined with additional defaults in the tornado.template module and keyword arguments to render or render_string.

RequestHandler.redirect(url: str, permanent: bool = False, status: Optional[int] = None) → None
    Sends a redirect to the given (optionally relative) URL.
    If the status argument is specified, that value is used as the HTTP status code; otherwise either 301 (permanent) or 302 (temporary) is chosen based on the permanent argument. The default is 302 (temporary).

RequestHandler.send_error(status_code: int = 500, **kwargs: Any) → None
    Sends the given HTTP error code to the browser.
    If flush() has already been called, it is not possible to send an error, so this method will simply terminate the response. If output has been written but not yet flushed, it will be discarded and replaced with the error page.
    Override write_error() to customize the error page that is returned. Additional keyword arguments are passed through to write_error.

RequestHandler.write_error(status_code: int, **kwargs: Any) → None
    Override to implement custom error pages.
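The status-code rule described for redirect() reduces to a small pure function, sketched here for clarity (the helper name is illustrative; Tornado performs this choice inside the method):

```python
def redirect_status(permanent: bool = False, status=None) -> int:
    # Explicit status wins; otherwise permanent selects 301, else 302.
    if status is not None:
        assert isinstance(status, int) and 300 <= status <= 399
        return status
    return 301 if permanent else 302

print(redirect_status())                # 302 (temporary, the default)
print(redirect_status(permanent=True))  # 301 (permanent)
print(redirect_status(status=307))      # 307 (explicit override)
```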
    write_error may call write, render, set_header, etc to produce output as usual.
    If this error was caused by an uncaught exception (including HTTPError), an exc_info triple will be available as kwargs["exc_info"]. Note that this exception may not be the "current" exception for purposes of methods like sys.exc_info() or traceback.format_exc.

RequestHandler.clear() → None
    Resets all headers and content for this response.

RequestHandler.render_linked_js(js_files: Iterable[str]) → str
    Default method used to render the final js links for the rendered webpage.
    Override this method in a sub-classed controller to change the output.

RequestHandler.render_embed_js(js_embed: Iterable[bytes]) → bytes
    Default method used to render the final embedded js for the rendered webpage.
    Override this method in a sub-classed controller to change the output.

RequestHandler.render_linked_css(css_files: Iterable[str]) → str
    Default method used to render the final css links for the rendered webpage.
    Override this method in a sub-classed controller to change the output.

RequestHandler.render_embed_css(css_embed: Iterable[bytes]) → bytes
    Default method used to render the final embedded css for the rendered webpage.
    Override this method in a sub-classed controller to change the output.

Cookies

RequestHandler.cookies
    An alias for self.request.cookies.

RequestHandler.get_cookie(name: str, default: Optional[str] = None) → Optional[str]
    Returns the value of the request cookie with the given name.
    If the named cookie is not present, returns default.
    This method only returns cookies that were present in the request. It does not see the outgoing cookies set by set_cookie in this handler.
RequestHandler.set_cookie(name: str, value: Union[str, bytes], domain: Optional[str] = None, expires: Optional[Union[float, Tuple, datetime]] = None, path: str = '/', expires_days: Optional[float] = None, *, max_age: Optional[int] = None, httponly: bool = False, secure: bool = False, samesite: Optional[str] = None, **kwargs: Any) → None
    Sets an outgoing cookie name/value with the given options.
    Newly-set cookies are not immediately visible via get_cookie; they are not present until the next request.
    Most arguments are passed directly to http.cookies.Morsel. See https://developer.mozilla.org/en-US/docs/Web/HTTP/Headers/Set-Cookie for more information.
    expires may be a numeric timestamp as returned by time.time, a time tuple as returned by time.gmtime, or a datetime.datetime object. expires_days is provided as a convenience to set an expiration time in days from today (if both are set, expires is used).
    Deprecated since version 6.3: Keyword arguments are currently accepted case-insensitively. In Tornado 7.0 this will be changed to only accept lowercase arguments.

RequestHandler.clear_cookie(name: str, **kwargs: Any) → None
    Deletes the cookie with the given name.
    This method accepts the same arguments as set_cookie, except for expires and max_age. Clearing a cookie requires the same domain and path arguments as when it was set. In some cases the samesite and secure arguments are also required to match. Other arguments are ignored.
    Similar to set_cookie, the effect of this method will not be seen until the following request.
    Changed in version 6.3: Now accepts all keyword arguments that set_cookie does. The samesite and secure flags have recently become required for clearing samesite="none" cookies.

RequestHandler.clear_all_cookies(**kwargs: Any) → None
    Attempt to delete all the cookies the user sent with this request.
    See clear_cookie for more information on keyword arguments.
    Due to limitations of the cookie protocol, it is impossible to determine on the server side which values are necessary for the domain, path, samesite, or secure arguments, so this method can only succeed if you consistently use the same values for these arguments when setting cookies.
    Similar to set_cookie, the effect of this method will not be seen until the following request.
    Changed in version 3.2: Added the path and domain parameters.
    Changed in version 6.3: Now accepts all keyword arguments that set_cookie does.
    Deprecated since version 6.3: The increasingly complex rules governing cookies have made it impossible for a clear_all_cookies method to work reliably since all we know about cookies are their names. Applications should generally use clear_cookie one at a time instead.

RequestHandler.get_signed_cookie(name: str, value: Optional[str] = None, max_age_days: float = 31, min_version: Optional[int] = None) → Optional[bytes]
    Returns the given signed cookie if it validates, or None.
    The decoded cookie value is returned as a byte string (unlike get_cookie).
    Similar to get_cookie, this method only returns cookies that were present in the request. It does not see outgoing cookies set by set_signed_cookie in this handler.
    Changed in version 3.2.1: Added the min_version argument. Introduced cookie version 2; both versions 1 and 2 are accepted by default.
    Changed in version 6.3: Renamed from get_secure_cookie to get_signed_cookie to avoid confusion with other uses of "secure" in cookie attributes and prefixes. The old name remains as an alias.

RequestHandler.get_signed_cookie_key_version(name: str, value: Optional[str] = None) → Optional[int]
    Returns the signing key version of the secure cookie.
    The version is returned as int.
    Changed in version 6.3: Renamed from get_secure_cookie_key_version to get_signed_cookie_key_version to avoid confusion with other uses of "secure" in cookie attributes and prefixes. The old name remains as an alias.
RequestHandler.set_signed_cookie(name: str, value: Union[str, bytes], expires_days: Optional[float] = 30, version: Optional[int] = None, **kwargs: Any) → None
    Signs and timestamps a cookie so it cannot be forged.
    You must specify the cookie_secret setting in your Application to use this method. It should be a long, random sequence of bytes to be used as the HMAC secret for the signature.
    To read a cookie set with this method, use get_signed_cookie().
    Note that the expires_days parameter sets the lifetime of the cookie in the browser, but is independent of the max_age_days parameter to get_signed_cookie. A value of None limits the lifetime to the current browser session.
    Secure cookies may contain arbitrary byte values, not just unicode strings (unlike regular cookies).
    Similar to set_cookie, the effect of this method will not be seen until the following request.
    Changed in version 3.2.1: Added the version argument. Introduced cookie version 2 and made it the default.
    Changed in version 6.3: Renamed from set_secure_cookie to set_signed_cookie to avoid confusion with other uses of "secure" in cookie attributes and prefixes. The old name remains as an alias.

RequestHandler.get_secure_cookie()
    Deprecated alias for get_signed_cookie.
    Deprecated since version 6.3.

RequestHandler.get_secure_cookie_key_version()
    Deprecated alias for get_signed_cookie_key_version.
    Deprecated since version 6.3.

RequestHandler.set_secure_cookie()
    Deprecated alias for set_signed_cookie.
    Deprecated since version 6.3.

RequestHandler.create_signed_value(name: str, value: Union[str, bytes], version: Optional[int] = None) → bytes
    Signs and timestamps a string so it cannot be forged.
    Normally used via set_signed_cookie, but provided as a separate method for non-cookie uses. To decode a value not stored as a cookie use the optional value argument to get_signed_cookie.
    Changed in version 3.2.1: Added the version argument.
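The sign-and-timestamp idea behind set_signed_cookie/create_signed_value can be sketched with hmac and hashlib. This is a greatly simplified illustration, not Tornado's versioned wire format; use the real methods in applications:

```python
import base64
import hashlib
import hmac
import time

SECRET = b"a long random secret"  # corresponds to the cookie_secret setting

def sign(name: str, value: bytes, clock=time.time) -> bytes:
    # Payload: name | base64(value) | timestamp, then an HMAC over it all.
    payload = b"|".join([name.encode(), base64.b64encode(value),
                         str(int(clock())).encode()])
    sig = hmac.new(SECRET, payload, hashlib.sha256).hexdigest().encode()
    return payload + b"|" + sig

def verify(name: str, signed: bytes, max_age: float, clock=time.time):
    payload, _, sig = signed.rpartition(b"|")
    expected = hmac.new(SECRET, payload, hashlib.sha256).hexdigest().encode()
    if not hmac.compare_digest(sig, expected):
        return None  # forged or corrupted
    n, v, ts = payload.split(b"|")
    if n != name.encode() or clock() - int(ts) > max_age:
        return None  # wrong cookie name, or timestamp too old
    return base64.b64decode(v)

token = sign("user", b"alice")
print(verify("user", token, max_age=3600))   # b'alice'
print(verify("user", token + b"x", 3600))    # None: tampering detected
```

Binding the cookie name into the signed payload prevents a value signed for one cookie from being replayed under another name, and the timestamp is what max_age_days checks against.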
    Introduced cookie version 2 and made it the default.

tornado.web.MIN_SUPPORTED_SIGNED_VALUE_VERSION = 1
    The oldest signed value version supported by this version of Tornado.
    Signed values older than this version cannot be decoded.
    New in version 3.2.1.

tornado.web.MAX_SUPPORTED_SIGNED_VALUE_VERSION = 2
    The newest signed value version supported by this version of Tornado.
    Signed values newer than this version cannot be decoded.
    New in version 3.2.1.

tornado.web.DEFAULT_SIGNED_VALUE_VERSION = 2
    The signed value version produced by RequestHandler.create_signed_value.
    May be overridden by passing a version keyword argument.
    New in version 3.2.1.

tornado.web.DEFAULT_SIGNED_VALUE_MIN_VERSION = 1
    The oldest signed value accepted by RequestHandler.get_signed_cookie.
    May be overridden by passing a min_version keyword argument.
    New in version 3.2.1.

Other

RequestHandler.application
    The Application object serving this request

RequestHandler.check_etag_header() → bool
    Checks the Etag header against the request's If-None-Match.
    Returns True if the request's Etag matches and a 304 should be returned. For example:

        self.set_etag_header()
        if self.check_etag_header():
            self.set_status(304)
            return

    This method is called automatically when the request is finished, but may be called earlier for applications that override compute_etag and want to do an early check for If-None-Match before completing the request. The Etag header should be set (perhaps with set_etag_header) before calling this method.

RequestHandler.check_xsrf_cookie() → None
    Verifies that the _xsrf cookie matches the _xsrf argument.
    To prevent cross-site request forgery, we set an _xsrf cookie and include the same value as a non-cookie field with all POST requests. If the two do not match, we reject the form submission as a potential forgery.
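The default etag scheme described above (a hash of the content written so far, compared against If-None-Match) can be sketched with the standard library; the comparison here is deliberately simplified and does not handle `*` or comma-separated candidate lists:

```python
import hashlib

def compute_etag(body: bytes) -> str:
    # Default-style etag: a quoted hash of the response body.
    return '"%s"' % hashlib.sha1(body).hexdigest()

def check_etag_header(if_none_match: str, etag: str) -> bool:
    # Simplified check: the client's cached etag matches the current one.
    return if_none_match == etag

body = b"<h1>hello</h1>"
etag = compute_etag(body)
print(check_etag_header(etag, etag))       # True  -> respond 304 Not Modified
print(check_etag_header('"stale"', etag))  # False -> send the full body
```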
    The _xsrf value may be set as either a form field named _xsrf or in a custom HTTP header named X-XSRFToken or X-CSRFToken (the latter is accepted for compatibility with Django).
    See http://en.wikipedia.org/wiki/Cross-site_request_forgery
    Changed in version 3.2.2: Added support for cookie version 2. Both versions 1 and 2 are supported.

RequestHandler.compute_etag() → Optional[str]
    Computes the etag header to be used for this request.
    By default uses a hash of the content written so far. May be overridden to provide custom etag implementations, or may return None to disable tornado's default etag support.

RequestHandler.create_template_loader(template_path: str) → BaseLoader
    Returns a new template loader for the given path.
    May be overridden by subclasses. By default returns a directory-based loader on the given path, using the autoescape and template_whitespace application settings. If a template_loader application setting is supplied, uses that instead.

RequestHandler.current_user
    The authenticated user for this request.
    This is set in one of two ways:

    • A subclass may override get_current_user(), which will be called automatically the first time self.current_user is accessed. get_current_user() will only be called once per request, and is cached for future access:

        def get_current_user(self):
            user_cookie = self.get_signed_cookie("user")
            if user_cookie:
                return json.loads(user_cookie)
            return None

    • It may be set as a normal variable, typically from an overridden prepare():

        @gen.coroutine
        def prepare(self):
            user_id_cookie = self.get_signed_cookie("user_id")
            if user_id_cookie:
                self.current_user = yield load_user(user_id_cookie)

    Note that prepare() may be a coroutine while get_current_user() may not, so the latter form is necessary if loading the user requires asynchronous operations.
    The user object may be any type of the application's choosing.
RequestHandler.detach() → IOStream
    Take control of the underlying stream.
    Returns the underlying IOStream object and stops all further HTTP processing. Intended for implementing protocols like websockets that tunnel over an HTTP handshake.
    This method is only supported when HTTP/1.1 is used.
    New in version 5.1.

RequestHandler.get_browser_locale(default: str = 'en_US') → Locale
    Determines the user's locale from Accept-Language header.
    See http://www.w3.org/Protocols/rfc2616/rfc2616-sec14.html#sec14.4

RequestHandler.get_current_user() → Any
    Override to determine the current user from, e.g., a cookie.
    This method may not be a coroutine.

RequestHandler.get_login_url() → str
    Override to customize the login URL based on the request.
    By default, we use the login_url application setting.

RequestHandler.get_status() → int
    Returns the status code for our response.

RequestHandler.get_template_path() → Optional[str]
    Override to customize template path for each handler.
    By default, we use the template_path application setting. Return None to load templates relative to the calling file.

RequestHandler.get_user_locale() → Optional[Locale]
    Override to determine the locale from the authenticated user.
    If None is returned, we fall back to get_browser_locale().
    This method should return a tornado.locale.Locale object, most likely obtained via a call like tornado.locale.get("en")

RequestHandler.locale
    The locale for the current session.
    Determined by either get_user_locale, which you can override to set the locale based on, e.g., a user preference stored in a database, or get_browser_locale, which uses the Accept-Language header.

RequestHandler.log_exception(typ: Optional[Type[BaseException]], value: Optional[BaseException], tb: Optional[TracebackType]) → None
    Override to customize logging of uncaught exceptions.
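The kind of Accept-Language negotiation get_browser_locale performs can be sketched in plain Python: parse "lang;q=..." pairs, sort by quality value, and pick the first supported language (the helper name and the exact tie-breaking are assumptions of this sketch, not Tornado's implementation):

```python
def pick_locale(accept_language: str, supported, default="en_US"):
    # Parse "da, en-gb;q=0.8, en;q=0.7" into (q, lang) pairs.
    candidates = []
    for part in accept_language.split(","):
        pieces = part.strip().split(";")
        lang = pieces[0].strip()
        q = 1.0  # omitted q-value defaults to 1.0
        for param in pieces[1:]:
            param = param.strip()
            if param.startswith("q="):
                q = float(param[2:])
        candidates.append((q, lang))
    # Highest quality first; return the first language we support.
    for _, lang in sorted(candidates, reverse=True):
        if lang in supported:
            return lang
    return default

print(pick_locale("da, en-gb;q=0.8, en;q=0.7", {"en", "en_US"}))  # 'en'
```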
By default logs instances of HTTPError as warnings without stack traces (on the tornado.general logger), and all other exceptions as errors with stack traces (on the tornado.application logger). New in version 3.1. RequestHandler.on_connection_close() → None Called in async handlers if the client closed the connection. Override this to clean up resources associated with long-lived connections. Note that this method is called only if the connection was closed during asynchronous processing; if you need to do cleanup after every request override on_finish instead. Proxies may keep a connection open for a time (perhaps indefinitely) after the client has gone away, so this method may not be called promptly after the end user closes their connection. RequestHandler.require_setting(name: str, feature: str = 'this feature') → None Raises an exception if the given app setting is not defined. RequestHandler.reverse_url(name: str, *args: Any) → str Alias for Application.reverse_url. RequestHandler.set_etag_header() → None Sets the response’s Etag header using self.compute_etag(). Note: no header will be set if compute_etag() returns None. This method is called automatically when the request is finished. RequestHandler.settings An alias for self.application.settings. RequestHandler.static_url(path: str, include_host: Optional[bool] = None, **kwargs: Any) → str Returns a static URL for the given relative static file path. This method requires you set the static_path setting in your application (which specifies the root directory of your static files). This method returns a versioned url (by default appending ?v=<signature>), which allows the static files to be cached indefinitely. This can be disabled by passing include_version=False (in the default implementation; other static file implementations are not required to support this, but they may support other options). 
By default this method returns URLs relative to the current host, but if include_host is true the URL returned will be absolute. If this handler has an include_host attribute, that value will be used as the default for all static_url calls that do not pass include_host as a keyword argument.

RequestHandler.xsrf_form_html() → str
An HTML <input/> element to be included with all POST forms. It defines the _xsrf input value, which we check on all POST requests to prevent cross-site request forgery. If you have set the xsrf_cookies application setting, you must include this HTML within all of your HTML forms.
In a template, this method should be called with {% module xsrf_form_html() %}
See check_xsrf_cookie() above for more information.

RequestHandler.xsrf_token
The XSRF-prevention token for the current user/session. To prevent cross-site request forgery, we set an '_xsrf' cookie and include the same '_xsrf' value as an argument with all POST requests. If the two do not match, we reject the form submission as a potential forgery. See http://en.wikipedia.org/wiki/Cross-site_request_forgery
This property is of type bytes, but it contains only ASCII characters. If a character string is required, there is no need to base64-encode it; just decode the byte string as UTF-8.
Changed in version 3.2.2: The xsrf token will now have a random mask applied in every request, which makes it safe to include the token in pages that are compressed. See http://breachattack.com for more information on the issue fixed by this change. Old (version 1) cookies will be converted to version 2 when this method is called unless the xsrf_cookie_version Application setting is set to 1.
Changed in version 4.3: The xsrf_cookie_kwargs Application setting may be used to supply additional cookie options (which will be passed directly to set_cookie).
For example, xsrf_cookie_kwargs=dict(httponly=True, secure=True) will set the secure and httponly flags on the _xsrf cookie.

Application configuration

class tornado.web.Application(handlers: Optional[List[Union[Rule, Tuple]]] = None, default_host: Optional[str] = None, transforms: Optional[List[Type[OutputTransform]]] = None, **settings)
A collection of request handlers that make up a web application. Instances of this class are callable and can be passed directly to HTTPServer to serve the application:

    application = web.Application([
        (r"/", MainPageHandler),
    ])
    http_server = httpserver.HTTPServer(application)
    http_server.listen(8080)

The constructor for this class takes in a list of Rule objects or tuples of values corresponding to the arguments of Rule constructor: (matcher, target, [target_kwargs], [name]), the values in square brackets being optional. The default matcher is PathMatches, so (regexp, target) tuples can also be used instead of (PathMatches(regexp), target).

A common routing target is a RequestHandler subclass, but you can also use lists of rules as a target, which create a nested routing configuration:

    application = web.Application([
        (HostMatches("example.com"), [
            (r"/", MainPageHandler),
            (r"/feed", FeedHandler),
        ]),
    ])

In addition to this you can use nested Router instances, HTTPMessageDelegate subclasses and callables as routing targets (see routing module docs for more information).

When we receive requests, we iterate over the list in order and instantiate an instance of the first request class whose regexp matches the request path. The request class can be specified as either a class object or a (fully-qualified) name.

A dictionary may be passed as the third element (target_kwargs) of the tuple, which will be used as keyword arguments to the handler's constructor and initialize method.
This pattern is used for the StaticFileHandler in this example (note that a StaticFileHandler can be installed automatically with the static_path setting described below):

    application = web.Application([
        (r"/static/(.*)", web.StaticFileHandler, {"path": "/var/www"}),
    ])

We support virtual hosts with the add_handlers method, which takes in a host regular expression as the first argument:

    application.add_handlers(r"www\.myhost\.com", [
        (r"/article/([0-9]+)", ArticleHandler),
    ])

If there's no match for the current request's host, then default_host parameter value is matched against host regular expressions.

Warning: Applications that do not use TLS may be vulnerable to DNS rebinding attacks. This attack is especially relevant to applications that only listen on 127.0.0.1 or other private networks. Appropriate host patterns must be used (instead of the default of r'.*') to prevent this risk. The default_host argument must not be used in applications that may be vulnerable to DNS rebinding.

You can serve static files by sending the static_path setting as a keyword argument. We will serve those files from the /static/ URI (this is configurable with the static_url_prefix setting), and we will serve /favicon.ico and /robots.txt from the same directory. A custom subclass of StaticFileHandler can be specified with the static_handler_class setting.

Changed in version 4.5: Integration with the new tornado.routing module.

settings
Additional keyword arguments passed to the constructor are saved in the settings dictionary, and are often referred to in documentation as "application settings". Settings are used to customize various aspects of Tornado (although in some cases richer customization is possible by overriding methods in a subclass of RequestHandler). Some applications also like to use the settings dictionary as a way to make application-specific settings available to handlers without using global variables. Settings used in Tornado are described below.
General settings:
• autoreload: If True, the server process will restart when any source files change, as described in Debug mode and automatic reloading. This option is new in Tornado 3.2; previously this functionality was controlled by the debug setting.
• debug: Shorthand for several debug mode settings, described in Debug mode and automatic reloading. Setting debug=True is equivalent to autoreload=True, compiled_template_cache=False, static_hash_cache=False, serve_traceback=True.
• default_handler_class and default_handler_args: This handler will be used if no other match is found; use this to implement custom 404 pages (new in Tornado 3.2).
• compress_response: If True, responses in textual formats will be compressed automatically. New in Tornado 4.0.
• gzip: Deprecated alias for compress_response since Tornado 4.0.
• log_function: This function will be called at the end of every request to log the result (with one argument, the RequestHandler object). The default implementation writes to the logging module's root logger. May also be customized by overriding Application.log_request.
• serve_traceback: If True, the default error page will include the traceback of the error. This option is new in Tornado 3.2; previously this functionality was controlled by the debug setting.
• ui_modules and ui_methods: May be set to a mapping of UIModule or UI methods to be made available to templates. May be set to a module, dictionary, or a list of modules and/or dicts. See UI modules for more details.
• websocket_ping_interval: If set to a number, all websockets will be pinged every n seconds. This can help keep the connection alive through certain proxy servers which close idle connections, and it can detect if the websocket has failed without being properly closed.
• websocket_ping_timeout: If the ping interval is set, and the server doesn't receive a 'pong' in this many seconds, it will close the websocket.
The default is three times the ping interval, with a minimum of 30 seconds. Ignored if the ping interval is not set.

Authentication and security settings:
• cookie_secret: Used by RequestHandler.get_signed_cookie and set_signed_cookie to sign cookies.
• key_version: Used by RequestHandler.set_signed_cookie to sign cookies with a specific key when cookie_secret is a key dictionary.
• login_url: The authenticated decorator will redirect to this url if the user is not logged in. Can be further customized by overriding RequestHandler.get_login_url
• xsrf_cookies: If True, Cross-site request forgery protection will be enabled.
• xsrf_cookie_version: Controls the version of new XSRF cookies produced by this server. Should generally be left at the default (which will always be the highest supported version), but may be set to a lower value temporarily during version transitions. New in Tornado 3.2.2, which introduced XSRF cookie version 2.
• xsrf_cookie_kwargs: May be set to a dictionary of additional arguments to be passed to RequestHandler.set_cookie for the XSRF cookie.
• xsrf_cookie_name: Controls the name used for the XSRF cookie (default _xsrf). The intended use is to take advantage of cookie prefixes. Note that cookie prefixes interact with other cookie flags, so they must be combined with xsrf_cookie_kwargs, such as {"xsrf_cookie_name": "__Host-xsrf", "xsrf_cookie_kwargs": {"secure": True}}
• twitter_consumer_key, twitter_consumer_secret, friendfeed_consumer_key, friendfeed_consumer_secret, google_consumer_key, google_consumer_secret, facebook_api_key, facebook_secret: Used in the tornado.auth module to authenticate to various APIs.

Template settings:
• autoescape: Controls automatic escaping for templates. May be set to None to disable escaping, or to the name of a function that all output should be passed through. Defaults to "xhtml_escape". Can be changed on a per-template basis with the {% autoescape %} directive.
• compiled_template_cache: Default is True; if False templates will be recompiled on every request. This option is new in Tornado 3.2; previously this functionality was controlled by the debug setting.
• template_path: Directory containing template files. Can be further customized by overriding RequestHandler.get_template_path
• template_loader: Assign to an instance of tornado.template.BaseLoader to customize template loading. If this setting is used the template_path and autoescape settings are ignored. Can be further customized by overriding RequestHandler.create_template_loader.
• template_whitespace: Controls handling of whitespace in templates; see tornado.template.filter_whitespace for allowed values. New in Tornado 4.3.

Static file settings:
• static_hash_cache: Default is True; if False static urls will be recomputed on every request. This option is new in Tornado 3.2; previously this functionality was controlled by the debug setting.
• static_path: Directory from which static files will be served.
• static_url_prefix: Url prefix for static files, defaults to "/static/".
• static_handler_class, static_handler_args: May be set to use a different handler for static files instead of the default tornado.web.StaticFileHandler. static_handler_args, if set, should be a dictionary of keyword arguments to be passed to the handler's initialize method.

Application.listen(port: int, address: Optional[str] = None, *, family: AddressFamily = AddressFamily.AF_UNSPEC, backlog: int = 128, flags: Optional[int] = None, reuse_port: bool = False, **kwargs: Any) → HTTPServer
Starts an HTTP server for this application on the given port. This is a convenience alias for creating an HTTPServer object and calling its listen method. Keyword arguments not supported by HTTPServer.listen are passed to the HTTPServer constructor. For advanced uses (e.g.
multi-process mode), do not use this method; create an HTTPServer and call its TCPServer.bind/TCPServer.start methods directly. Note that after calling this method you still need to call IOLoop.current().start() (or run within asyncio.run) to start the server. Returns the HTTPServer object.
Changed in version 4.3: Now returns the HTTPServer object.
Changed in version 6.2: Added support for new keyword arguments in TCPServer.listen, including reuse_port.

Application.add_handlers(handlers: List[Union[Rule, Tuple]])
Appends the given handlers to our handler list. Host patterns are processed sequentially in the order they were added. All matching patterns will be considered.

Application.get_handler_delegate(request: HTTPServerRequest, target_class: Type[RequestHandler], target_kwargs: Optional[Dict[str, Any]] = None, path_args: Optional[List[bytes]] = None, path_kwargs: Optional[Dict[str, bytes]] = None) → _HandlerDelegate
Returns HTTPMessageDelegate that can serve a request for application and RequestHandler subclass.
Parameters
• request (httputil.HTTPServerRequest) – current HTTP request.
• target_class (RequestHandler) – a RequestHandler class.
• target_kwargs (dict) – keyword arguments for target_class constructor.
• path_args (list) – positional arguments for target_class HTTP method that will be executed while handling a request (get, post or any other).
• path_kwargs (dict) – keyword arguments for target_class HTTP method.

Application.reverse_url(name: str, *args: Any) → str
Returns a URL path for handler named name
The handler must be added to the application as a named URLSpec. Args will be substituted for capturing groups in the URLSpec regex. They will be converted to strings if necessary, encoded as utf8, and url-escaped.

Application.log_request(handler: RequestHandler) → None
Writes a completed HTTP request to the logs. By default writes to the python root logger.
To change this behavior either subclass Application and override this method, or pass a function in the application settings dictionary as log_function. class tornado.web.URLSpec(pattern: Union[str, Pattern], handler: Any, kwargs: Optional[Dict[str, Any]] = None, name: Optional[str] = None) Specifies mappings between URLs and handlers. Parameters: • pattern: Regular expression to be matched. Any capturing groups in the regex will be passed in to the handler’s get/post/etc methods as arguments (by keyword if named, by position if unnamed. Named and unnamed capturing groups may not be mixed in the same rule). • handler: RequestHandler subclass to be invoked. • kwargs (optional): A dictionary of additional arguments to be passed to the handler’s constructor. • name (optional): A name for this handler. Used by reverse_url. The URLSpec class is also available under the name tornado.web.url. Decorators tornado.web.authenticated(method: Callable[[...], Optional[Awaitable[None]]]) → Callable[[...], Optional[Awaitable[None]]] Decorate methods with this to require that the user be logged in. If the user is not logged in, they will be redirected to the configured login url. If you configure a login url with a query parameter, Tornado will assume you know what you’re doing and use it as-is. If not, it will add a next parameter so the login page knows where to send you once you’re logged in. tornado.web.addslash(method: Callable[[...], Optional[Awaitable[None]]]) → Callable[[...], Optional[Awaitable[None]]] Use this decorator to add a missing trailing slash to the request path. For example, a request to /foo would redirect to /foo/ with this decorator. Your request handler mapping should use a regular expression like r'/foo/?' in conjunction with using the decorator. 
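The redirect built by the authenticated decorator can be sketched as a small helper. This is a hedged illustration of the behavior described above, not Tornado's code: login_redirect_url is a hypothetical name, and the real decorator reads login_url from the application settings and the current request URI from the handler.

```python
from urllib.parse import urlencode, urlparse

def login_redirect_url(login_url, request_uri):
    # If the configured login url already carries a query string, Tornado
    # "assumes you know what you're doing" and uses it as-is; otherwise it
    # appends ?next=<current URI> so the login page knows where to return.
    if urlparse(login_url).query:
        return login_url
    return login_url + "?" + urlencode({"next": request_uri})
```

For example, login_redirect_url("/login", "/secret") yields a /login URL with a url-escaped next parameter, while a login URL that already has a query string is left untouched.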
tornado.web.removeslash(method: Callable[[...], Optional[Awaitable[None]]]) → Callable[[...], Optional[Awaitable[None]]]
Use this decorator to remove trailing slashes from the request path. For example, a request to /foo/ would redirect to /foo with this decorator. Your request handler mapping should use a regular expression like r'/foo/*' in conjunction with using the decorator.

tornado.web.stream_request_body(cls: Type[_RequestHandlerType]) → Type[_RequestHandlerType]
Apply to RequestHandler subclasses to enable streaming body support. This decorator implies the following changes:
• HTTPServerRequest.body is undefined, and body arguments will not be included in RequestHandler.get_argument.
• RequestHandler.prepare is called when the request headers have been read instead of after the entire body has been read.
• The subclass must define a method data_received(self, data):, which will be called zero or more times as data is available. Note that if the request has an empty body, data_received may not be called.
• prepare and data_received may return Futures (such as via @gen.coroutine), in which case the next method will not be called until those futures have completed.
• The regular HTTP method (post, put, etc) will be called after the entire body has been read.
See the file receiver demo for example usage.

Everything else

exception tornado.web.HTTPError(status_code: int = 500, log_message: Optional[str] = None, *args: Any, **kwargs: Any)
An exception that will turn into an HTTP error response. Raising an HTTPError is a convenient alternative to calling RequestHandler.send_error since it automatically ends the current function. To customize the response sent with an HTTPError, override RequestHandler.write_error.
Parameters
• status_code (int) – HTTP status code. Must be listed in httplib.responses unless the reason keyword argument is given.
• log_message (str) – Message to be written to the log for this error (will not be shown to the user unless the Application is in debug mode). May contain %s-style placeholders, which will be filled in with remaining positional parameters.
• reason (str) – Keyword-only argument. The HTTP "reason" phrase to pass in the status line along with status_code. Normally determined automatically from status_code, but can be set to use a non-standard numeric code.

exception tornado.web.Finish
An exception that ends the request without producing an error response. When Finish is raised in a RequestHandler, the request will end (calling RequestHandler.finish if it hasn't already been called), but the error-handling methods (including RequestHandler.write_error) will not be called. If Finish() was created with no arguments, the pending response will be sent as-is. If Finish() was given an argument, that argument will be passed to RequestHandler.finish().
This can be a more convenient way to implement custom error pages than overriding write_error (especially in library code):

    if self.current_user is None:
        self.set_status(401)
        self.set_header('WWW-Authenticate', 'Basic realm="something"')
        raise Finish()

Changed in version 4.3: Arguments passed to Finish() will be passed on to RequestHandler.finish.

exception tornado.web.MissingArgumentError(arg_name: str)
Exception raised by RequestHandler.get_argument. This is a subclass of HTTPError, so if it is uncaught a 400 response code will be used instead of 500 (and a stack trace will not be logged).
New in version 3.1.

class tornado.web.UIModule(handler: RequestHandler)
A re-usable, modular UI unit on a page. UI modules often execute additional queries, and they can include additional CSS and JavaScript that will be included in the output page, which is automatically inserted on page render. Subclasses of UIModule must override the render method.
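To make the override contract concrete, here is a toy stand-in for the UIModule interface. Both classes below are illustrative only (they do not subclass Tornado's UIModule, which requires a handler), and Entry is a hypothetical module name.

```python
class UIModule:
    # Minimal stand-in mirroring the interface described above: subclasses
    # must override render; the CSS/JS hooks default to "nothing extra".
    def render(self, *args, **kwargs):
        raise NotImplementedError()

    def embedded_css(self):
        return None  # no extra CSS by default

class Entry(UIModule):
    # Hypothetical module that renders a blog entry heading and
    # contributes a CSS snippet to the output page.
    def render(self, entry):
        return "<h2>%s</h2>" % entry["title"]

    def embedded_css(self):
        return ".entry h2 { margin: 0; }"
```

In a real application the module would be registered through the ui_modules setting and invoked from a template with {% module Entry(entry) %}.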
render(*args: Any, **kwargs: Any) → str
Override in subclasses to return this module's output.

embedded_javascript() → Optional[str]
Override to return a JavaScript string to be embedded in the page.

javascript_files() → Optional[Iterable[str]]
Override to return a list of JavaScript files needed by this module. If the return values are relative paths, they will be passed to RequestHandler.static_url; otherwise they will be used as-is.

embedded_css() → Optional[str]
Override to return a CSS string that will be embedded in the page.

css_files() → Optional[Iterable[str]]
Override to return a list of CSS files required by this module. If the return values are relative paths, they will be passed to RequestHandler.static_url; otherwise they will be used as-is.

html_head() → Optional[str]
Override to return an HTML string that will be put in the <head/> element.

html_body() → Optional[str]
Override to return an HTML string that will be put at the end of the <body/> element.

render_string(path: str, **kwargs: Any) → bytes
Renders a template and returns it as a string.

class tornado.web.ErrorHandler(application: Application, request: HTTPServerRequest, **kwargs: Any)
Generates an error response with status_code for all requests.

class tornado.web.FallbackHandler(application: Application, request: HTTPServerRequest, **kwargs: Any)
A RequestHandler that wraps another HTTP server callback. The fallback is a callable object that accepts an HTTPServerRequest, such as an Application or tornado.wsgi.WSGIContainer. This is most useful to use both Tornado RequestHandlers and WSGI in the same server.
Typical usage:

    wsgi_app = tornado.wsgi.WSGIContainer(
        django.core.handlers.wsgi.WSGIHandler())
    application = tornado.web.Application([
        (r"/foo", FooHandler),
        (r".*", FallbackHandler, dict(fallback=wsgi_app)),
    ])

class tornado.web.RedirectHandler(application: Application, request: HTTPServerRequest, **kwargs: Any)
Redirects the client to the given URL for all GET requests. You should provide the keyword argument url to the handler, e.g.:

    application = web.Application([
        (r"/oldpath", web.RedirectHandler, {"url": "/newpath"}),
    ])

RedirectHandler supports regular expression substitutions. E.g., to swap the first and second parts of a path while preserving the remainder:

    application = web.Application([
        (r"/(.*?)/(.*?)/(.*)", web.RedirectHandler, {"url": "/{1}/{0}/{2}"}),
    ])

The final URL is formatted with str.format and the substrings that match the capturing groups. In the above example, a request to "/a/b/c" would be formatted like:

    str.format("/{1}/{0}/{2}", "a", "b", "c")  # -> "/b/a/c"

Use Python's format string syntax to customize how values are substituted.

Changed in version 4.5: Added support for substitutions into the destination URL.
Changed in version 5.0: If any query arguments are present, they will be copied to the destination URL.

class tornado.web.StaticFileHandler(application: Application, request: HTTPServerRequest, **kwargs: Any)
A simple handler that can serve static content from a directory. A StaticFileHandler is configured automatically if you pass the static_path keyword argument to Application. This handler can be customized with the static_url_prefix, static_handler_class, and static_handler_args settings.
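The version 5.0 query-copying behavior noted above amounts to appending the incoming request's query string to the destination URL. The helper below is a hypothetical sketch of that rule, not Tornado API:

```python
from urllib.parse import urlsplit

def redirect_with_query(target, original_uri):
    # Copy any query arguments from the incoming request URI onto the
    # destination URL, choosing "?" or "&" depending on whether the
    # target already has a query string.
    query = urlsplit(original_uri).query
    if not query:
        return target
    sep = "&" if urlsplit(target).query else "?"
    return target + sep + query
```

So a request to /oldpath?a=1 redirected to /newpath would land on /newpath?a=1.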
To map an additional path to this handler for a static data directory you would add a line to your application like:

    application = web.Application([
        (r"/content/(.*)", web.StaticFileHandler, {"path": "/var/www"}),
    ])

The handler constructor requires a path argument, which specifies the local root directory of the content to be served. Note that a capture group in the regex is required to parse the value for the path argument to the get() method (different than the constructor argument above); see URLSpec for details.

To serve a file like index.html automatically when a directory is requested, set static_handler_args=dict(default_filename="index.html") in your application settings, or add default_filename as an initializer argument for your StaticFileHandler.

To maximize the effectiveness of browser caching, this class supports versioned urls (by default using the argument ?v=). If a version is given, we instruct the browser to cache this file indefinitely. make_static_url (also available as RequestHandler.static_url) can be used to construct a versioned url.

This handler is intended primarily for use in development and light-duty file serving; for heavy traffic it will be more efficient to use a dedicated static file server (such as nginx or Apache).

We support the HTTP Accept-Ranges mechanism to return partial content (because some browsers require this functionality to be present to seek in HTML5 audio or video).

Subclassing notes

This class is designed to be extensible by subclassing, but because of the way static urls are generated with class methods rather than instance methods, the inheritance patterns are somewhat unusual. Be sure to use the @classmethod decorator when overriding a class method. Instance methods may use the attributes self.path, self.absolute_path, and self.modified. Subclasses should only override methods discussed in this section; overriding other methods is error-prone.
Overriding StaticFileHandler.get is particularly problematic due to the tight coupling with compute_etag and other methods. To change the way static urls are generated (e.g. to match the behavior of another server or CDN), override make_static_url, parse_url_path, get_cache_time, and/or get_version. To replace all interaction with the filesystem (e.g. to serve static content from a database), override get_content, get_content_size, get_modified_time, get_absolute_path, and validate_absolute_path.
Changed in version 3.1: Many of the methods for subclasses were added in Tornado 3.1.

compute_etag() → Optional[str]
Sets the Etag header based on static url version. This allows efficient If-None-Match checks against cached versions, and sends the correct Etag for a partial response (i.e. the same Etag as the full file).
New in version 3.1.

set_headers() → None
Sets the content and caching headers on the response.
New in version 3.1.

should_return_304() → bool
Returns True if the headers indicate that we should return 304.
New in version 3.1.

classmethod get_absolute_path(root: str, path: str) → str
Returns the absolute location of path relative to root. root is the path configured for this StaticFileHandler (in most cases the static_path Application setting).
This class method may be overridden in subclasses. By default it returns a filesystem path, but other strings may be used as long as they are unique and understood by the subclass's overridden get_content.
New in version 3.1.

validate_absolute_path(root: str, absolute_path: str) → Optional[str]
Validate and return the absolute path. root is the configured path for the StaticFileHandler, and path is the result of get_absolute_path
This is an instance method called during request processing, so it may raise HTTPError or use methods like RequestHandler.redirect (return None after redirecting to halt further processing). This is where 404 errors for missing files are generated.
This method may modify the path before returning it, but note that any such modifications will not be understood by make_static_url. In instance methods, this method's result is available as self.absolute_path.
New in version 3.1.

classmethod get_content(abspath: str, start: Optional[int] = None, end: Optional[int] = None) → Generator[bytes, None, None]
Retrieve the content of the requested resource which is located at the given absolute path. This class method may be overridden by subclasses. Note that its signature is different from other overridable class methods (no settings argument); this is deliberate to ensure that abspath is able to stand on its own as a cache key. This method should either return a byte string or an iterator of byte strings. The latter is preferred for large files as it helps reduce memory fragmentation.
New in version 3.1.

classmethod get_content_version(abspath: str) → str
Returns a version string for the resource at the given path. This class method may be overridden by subclasses. The default implementation is a SHA-512 hash of the file's contents.
New in version 3.1.

get_content_size() → int
Retrieve the total size of the resource at the given path. This method may be overridden by subclasses.
New in version 3.1.
Changed in version 4.0: This method is now always called, instead of only when partial results are requested.

get_modified_time() → Optional[datetime]
Returns the time that self.absolute_path was last modified. May be overridden in subclasses. Should return a datetime object or None.
New in version 3.1.

get_content_type() → str
Returns the Content-Type header to be used for this request.
New in version 3.1.

set_extra_headers(path: str) → None
For subclass to add extra headers to the response

get_cache_time(path: str, modified: Optional[datetime], mime_type: str) → int
Override to customize cache control behavior.
Return a positive number of seconds to make the result cacheable for that amount of time or 0 to mark resource as cacheable for an unspecified amount of time (subject to browser heuristics). By default returns cache expiry of 10 years for resources requested with v argument. classmethod make_static_url(settings: Dict[str, Any], path: str, include_version: bool = True) → str Constructs a versioned url for the given path. This method may be overridden in subclasses (but note that it is a class method rather than an instance method). Subclasses are only required to implement the signature make_static_url(cls, settings, path); other keyword arguments may be passed through static_url but are not standard. settings is the Application.settings dictionary. path is the static path being requested. The url returned should be relative to the current host. include_version determines whether the generated URL should include the query string containing the version hash of the file corresponding to the given path. parse_url_path(url_path: str) → str Converts a static URL path into a filesystem path. url_path is the path component of the URL with static_url_prefix removed. The return value should be filesystem path relative to static_path. This is the inverse of make_static_url. classmethod get_version(settings: Dict[str, Any], path: str) → Optional[str] Generate the version string to be used in static URLs. settings is the Application.settings dictionary and path is the relative location of the requested asset on the filesystem. The returned value should be a string, or None if no version could be determined. Changed in version 3.1: This method was previously recommended for subclasses to override; get_content_version is now preferred as it allows the base class to handle caching of the result. 6.2.2 tornado.template — Flexible output generation A simple template system that compiles templates to Python code. 
Basic usage looks like:

    t = template.Template("<html>{{ myvalue }}</html>")
    print(t.generate(myvalue="XXX"))

Loader is a class that loads templates from a root directory and caches the compiled templates:

    loader = template.Loader("/home/btaylor")
    print(loader.load("test.html").generate(myvalue="XXX"))

We compile all templates to raw Python. Error-reporting is currently... uh, interesting.

Syntax for the templates:

    ### base.html
    <html>
      <head>
        <title>{% block title %}Default title{% end %}</title>
      </head>
      <body>
        <ul>
          {% for student in students %}
            {% block student %}
              <li>{{ escape(student.name) }}</li>
            {% end %}
          {% end %}
        </ul>
      </body>
    </html>

    ### bold.html
    {% extends "base.html" %}

    {% block title %}A bolder title{% end %}

    {% block student %}
      <li><span style="bold">{{ escape(student.name) }}</span></li>
    {% end %}

Unlike most other template systems, we do not put any restrictions on the expressions you can include in your statements. if and for blocks get translated exactly into Python, so you can do complex expressions like:

    {% for student in [p for p in people if p.student and p.age > 23] %}
      <li>{{ escape(student.name) }}</li>
    {% end %}

Translating directly to Python means you can apply functions to expressions easily, like the escape() function in the examples above. You can pass functions in to your template just like any other variable (in a RequestHandler, override RequestHandler.get_template_namespace):

    ### Python code
    def add(x, y):
        return x + y
    template.execute(add=add)

    ### The template
    {{ add(1, 2) }}

We provide the functions escape(), url_escape(), json_encode(), and squeeze() to all templates by default.

Typical applications do not create Template or Loader instances by hand, but instead use the render and render_string methods of tornado.web.RequestHandler, which load templates automatically based on the template_path Application setting.
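The basic usage above can be condensed into a small self-contained sketch; the add helper is passed in as a plain lambda rather than via get_template_namespace, which works the same way:

```python
from tornado import template

# Compile a template string and render it with keyword arguments.
# generate() returns bytes, not str.
t = template.Template("<html>{{ myvalue }}</html>")
print(t.generate(myvalue="XXX"))  # b'<html>XXX</html>'

# Functions can be passed in like any other template variable.
t2 = template.Template("{{ add(1, 2) }}")
print(t2.generate(add=lambda x, y: x + y))  # b'3'
```

Because template strings compiled without a loader default to whitespace="all", the output bytes match the template text exactly.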
Variable names beginning with _tt_ are reserved by the template system and should not be used by application code.

Syntax Reference

Template expressions are surrounded by double curly braces: {{ ... }}. The contents may be any python expression, which will be escaped according to the current autoescape setting and inserted into the output. Other template directives use {% %}.

To comment out a section so that it is omitted from the output, surround it with {# ... #}.

To include a literal {{, {%, or {# in the output, escape them as {{!, {%!, and {#!, respectively.

{% apply *function* %}...{% end %}
Applies a function to the output of all template code between apply and end:

    {% apply linkify %}{{name}} said: {{message}}{% end %}

Note that as an implementation detail apply blocks are implemented as nested functions and thus may interact strangely with variables set via {% set %}, or the use of {% break %} or {% continue %} within loops.

{% autoescape *function* %}
Sets the autoescape mode for the current file. This does not affect other files, even those referenced by {% include %}. Note that autoescaping can also be configured globally, at the Application or Loader level:

    {% autoescape xhtml_escape %}
    {% autoescape None %}

{% block *name* %}...{% end %}
Indicates a named, replaceable block for use with {% extends %}. Blocks in the parent template will be replaced with the contents of the same-named block in a child template:

    <!-- base.html -->
    <title>{% block title %}Default title{% end %}</title>

    <!-- mypage.html -->
    {% extends "base.html" %}
    {% block title %}My page title{% end %}

{% comment ... %}
A comment which will be removed from the template output. Note that there is no {% end %} tag; the comment goes from the word comment to the closing %} tag.

{% extends *filename* %}
Inherit from another template. Templates that use extends should contain one or more block tags to replace content from the parent template.
Anything in the child template not contained in a block tag will be ignored. For an example, see the {% block %} tag.

{% for *var* in *expr* %}...{% end %}
Same as the python for statement. {% break %} and {% continue %} may be used inside the loop.

{% from *x* import *y* %}
Same as the python import statement.

{% if *condition* %}...{% elif *condition* %}...{% else %}...{% end %}
Conditional statement - outputs the first section whose condition is true. (The elif and else sections are optional.)

{% import *module* %}
Same as the python import statement.

{% include *filename* %}
Includes another template file. The included file can see all the local variables as if it were copied directly to the point of the include directive (the {% autoescape %} directive is an exception). Alternately, {% module Template(filename, **kwargs) %} may be used to include another template with an isolated namespace.

{% module *expr* %}
Renders a UIModule. The output of the UIModule is not escaped:

    {% module Template("foo.html", arg=42) %}

UIModules are a feature of the tornado.web.RequestHandler class (and specifically its render method) and will not work when the template system is used on its own in other contexts.

{% raw *expr* %}
Outputs the result of the given expression without autoescaping.

{% set *x* = *y* %}
Sets a local variable.

{% try %}...{% except %}...{% else %}...{% finally %}...{% end %}
Same as the python try statement.

{% while *condition* %}...{% end %}
Same as the python while statement. {% break %} and {% continue %} may be used inside the loop.

{% whitespace *mode* %}
Sets the whitespace mode for the remainder of the current file (or until the next {% whitespace %} directive). See filter_whitespace for available options. New in Tornado 4.3.

Class reference

class tornado.template.Template(template_string, name='<string>', loader=None, compress_whitespace=None, autoescape='xhtml_escape', whitespace=None)
A compiled template.
We compile into Python from the given template_string. You can generate the template from variables with generate().

Construct a Template.

Parameters
• template_string (str) – the contents of the template file.
• name (str) – the filename from which the template was loaded (used for error messages).
• loader (tornado.template.BaseLoader) – the BaseLoader responsible for this template, used to resolve {% include %} and {% extends %} directives.
• compress_whitespace (bool) – Deprecated since Tornado 4.3. Equivalent to whitespace="single" if true and whitespace="all" if false.
• autoescape (str) – The name of a function in the template namespace, or None to disable escaping by default.
• whitespace (str) – A string specifying treatment of whitespace; see filter_whitespace for options.

Changed in version 4.3: Added whitespace parameter; deprecated compress_whitespace.

generate(**kwargs: Any) → bytes
Generate this template with the given arguments.

class tornado.template.BaseLoader(autoescape: str = 'xhtml_escape', namespace: Optional[Dict[str, Any]] = None, whitespace: Optional[str] = None)
Base class for template loaders. You must use a template loader to use template constructs like {% extends %} and {% include %}. The loader caches all templates after they are loaded the first time.

Construct a template loader.

Parameters
• autoescape (str) – The name of a function in the template namespace, such as "xhtml_escape", or None to disable autoescaping by default.
• namespace (dict) – A dictionary to be added to the default template namespace, or None.
• whitespace (str) – A string specifying default behavior for whitespace in templates; see filter_whitespace for options. Default is "single" for files ending in ".html" and ".js" and "all" for other files.

Changed in version 4.3: Added whitespace parameter.

reset() → None
Resets the cache of compiled templates.
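A loader is what resolves the names in {% extends %} and {% include %}. As a small sketch (the template names here are made up), DictLoader from this module can stand in for a directory-based Loader, which behaves the same way against files:

```python
from tornado import template

# DictLoader resolves {% extends %}/{% include %} from an in-memory dict,
# which is convenient for tests; Loader works the same way against files.
loader = template.DictLoader({
    "base.html": "<title>{% block title %}Default title{% end %}</title>",
    "page.html": '{% extends "base.html" %}{% block title %}My page title{% end %}',
})

# The child block replaces the parent's same-named block.
print(loader.load("page.html").generate())  # b'<title>My page title</title>'

# The loader caches compiled templates; reset() clears that cache.
loader.reset()
```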
resolve_path(name: str, parent_path: Optional[str] = None) → str
Converts a possibly-relative path to absolute (used internally).

load(name: str, parent_path: Optional[str] = None) → Template
Loads a template.

class tornado.template.Loader(root_directory: str, **kwargs: Any)
A template loader that loads from a single root directory.

class tornado.template.DictLoader(dict: Dict[str, str], **kwargs: Any)
A template loader that loads from a dictionary.

exception tornado.template.ParseError(message: str, filename: Optional[str] = None, lineno: int = 0)
Raised for template syntax errors.
ParseError instances have filename and lineno attributes indicating the position of the error.
Changed in version 4.3: Added filename and lineno attributes.

tornado.template.filter_whitespace(mode: str, text: str) → str
Transform whitespace in text according to mode. Available modes are:
• all: Return all whitespace unmodified.
• single: Collapse consecutive whitespace with a single whitespace character, preserving newlines.
• oneline: Collapse all runs of whitespace into a single space character, removing all newlines in the process.
New in version 4.3.

6.2.3 tornado.routing — Basic routing implementation

Flexible routing implementation.

Tornado routes HTTP requests to appropriate handlers using Router class implementations. The tornado.web.Application class is a Router implementation and may be used directly, or the classes in this module may be used for additional flexibility. The RuleRouter class can match on more criteria than Application, or the Router interface can be subclassed for maximum customization.

The Router interface extends HTTPServerConnectionDelegate to provide additional routing capabilities. This also means that any Router implementation can be used directly as a request_callback for the HTTPServer constructor.
A Router subclass must implement a find_handler method to provide a suitable HTTPMessageDelegate instance to handle the request:

    class CustomRouter(Router):
        def find_handler(self, request, **kwargs):
            # some routing logic providing a suitable HTTPMessageDelegate instance
            return MessageDelegate(request.connection)

    class MessageDelegate(HTTPMessageDelegate):
        def __init__(self, connection):
            self.connection = connection

        def finish(self):
            self.connection.write_headers(
                ResponseStartLine("HTTP/1.1", 200, "OK"),
                HTTPHeaders({"Content-Length": "2"}),
                b"OK")
            self.connection.finish()

    router = CustomRouter()
    server = HTTPServer(router)

The main responsibility of a Router implementation is to provide a mapping from a request to an HTTPMessageDelegate instance that will handle this request. In the example above we can see that routing is possible even without instantiating an Application.

For routing to RequestHandler implementations we need an Application instance. get_handler_delegate provides a convenient way to create an HTTPMessageDelegate for a given request and RequestHandler.

Here is a simple example of how we can route to RequestHandler subclasses by HTTP method:

    resources = {}

    class GetResource(RequestHandler):
        def get(self, path):
            if path not in resources:
                raise HTTPError(404)
            self.finish(resources[path])

    class PostResource(RequestHandler):
        def post(self, path):
            resources[path] = self.request.body

    class HTTPMethodRouter(Router):
        def __init__(self, app):
            self.app = app

        def find_handler(self, request, **kwargs):
            handler = GetResource if request.method == "GET" else PostResource
            return self.app.get_handler_delegate(request, handler, path_args=[request.path])

    router = HTTPMethodRouter(Application())
    server = HTTPServer(router)

The ReversibleRouter interface adds the ability to distinguish between the routes and reverse them to the original urls using a route's name and additional arguments.
Application is itself an implementation of the ReversibleRouter class.

RuleRouter and ReversibleRuleRouter are implementations of the Router and ReversibleRouter interfaces and can be used for creating rule-based routing configurations.

Rules are instances of the Rule class. They contain a Matcher, which provides the logic for determining whether the rule is a match for a particular request, and a target, which can be one of the following.

1) An instance of HTTPServerConnectionDelegate:

    router = RuleRouter([
        Rule(PathMatches("/handler"), ConnectionDelegate()),
        # ... more rules
    ])

    class ConnectionDelegate(HTTPServerConnectionDelegate):
        def start_request(self, server_conn, request_conn):
            return MessageDelegate(request_conn)

2) A callable accepting a single argument of HTTPServerRequest type:

    router = RuleRouter([
        Rule(PathMatches("/callable"), request_callable)
    ])

    def request_callable(request):
        request.write(b"HTTP/1.1 200 OK\r\nContent-Length: 2\r\n\r\nOK")
        request.finish()

3) Another Router instance:

    router = RuleRouter([
        Rule(PathMatches("/router.*"), CustomRouter())
    ])

Of course a nested RuleRouter or an Application is allowed:

    router = RuleRouter([
        Rule(HostMatches("example.com"), RuleRouter([
            Rule(PathMatches("/app1/.*"), Application([(r"/app1/handler", Handler)])),
        ]))
    ])

    server = HTTPServer(router)

In the example below RuleRouter is used to route between applications:

    app1 = Application([
        (r"/app1/handler", Handler1),
        # other handlers ...
    ])

    app2 = Application([
        (r"/app2/handler", Handler2),
        # other handlers ...
    ])

    router = RuleRouter([
        Rule(PathMatches("/app1.*"), app1),
        Rule(PathMatches("/app2.*"), app2)
    ])

    server = HTTPServer(router)

For more information on application-level routing see the docs for Application.

New in version 4.5.

class tornado.routing.Router
Abstract router interface.
find_handler(request: HTTPServerRequest, **kwargs: Any) → Optional[HTTPMessageDelegate]
Must be implemented to return an appropriate instance of HTTPMessageDelegate that can serve the request.
Routing implementations may pass additional kwargs to extend the routing logic.
Parameters
• request (httputil.HTTPServerRequest) – current HTTP request.
• kwargs – additional keyword arguments passed by routing implementation.
Returns an instance of HTTPMessageDelegate that will be used to process the request.

class tornado.routing.ReversibleRouter
Abstract router interface for routers that can handle named routes and support reversing them to original urls.

reverse_url(name: str, *args: Any) → Optional[str]
Returns url string for a given route name and arguments or None if no match is found.
Parameters
• name (str) – route name.
• args – url parameters.
Returns parametrized url string for a given route name (or None).

class tornado.routing.RuleRouter(rules: Optional[List[Union[Rule, List[Any], Tuple[Union[str, Matcher], Any], Tuple[Union[str, Matcher], Any, Dict[str, Any]], Tuple[Union[str, Matcher], Any, Dict[str, Any], str]]]] = None)
Rule-based router implementation.

Constructs a router from an ordered list of rules:

    RuleRouter([
        Rule(PathMatches("/handler"), Target),
        # ... more rules
    ])

You can also omit the explicit Rule constructor and use tuples of arguments:

    RuleRouter([
        (PathMatches("/handler"), Target),
    ])

PathMatches is a default matcher, so the example above can be simplified:

    RuleRouter([
        ("/handler", Target),
    ])

In the examples above, Target can be a nested Router instance, an instance of HTTPServerConnectionDelegate or an old-style callable, accepting a request argument.

Parameters
• rules – a list of Rule instances or tuples of Rule constructor arguments.
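As a sketch of the rule construction described above (the target here is a placeholder callable, not a real handler), tuples are expanded into Rule instances, and a rule's matcher can also be exercised on its own against a request:

```python
from tornado.httputil import HTTPServerRequest
from tornado.routing import RuleRouter, PathMatches

def dummy_target(request):
    # Placeholder old-style callable target; a real one would write a response.
    pass

# Tuples are converted to Rule instances; a bare string becomes a PathMatches.
router = RuleRouter([
    ("/handler", dummy_target),
    (PathMatches(r"/post/(?P<id>[0-9]+)"), dummy_target),
])
print(len(router.rules))  # 2

# Captured named groups come back in path_kwargs; values are raw bytes.
request = HTTPServerRequest(method="GET", uri="/post/42")
params = router.rules[1].matcher.match(request)
print(params["path_kwargs"])  # {'id': b'42'}
```

Constructing HTTPServerRequest by hand like this is only for illustration; in a running server the request object is supplied by HTTPServer.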
add_rules(rules: List[Union[Rule, List[Any], Tuple[Union[str, Matcher], Any], Tuple[Union[str, Matcher], Any, Dict[str, Any]], Tuple[Union[str, Matcher], Any, Dict[str, Any], str]]]) → None
Appends new rules to the router.
Parameters
• rules – a list of Rule instances (or tuples of arguments, which are passed to the Rule constructor).

process_rule(rule: Rule) → Rule
Override this method for additional preprocessing of each rule.
Parameters
• rule (Rule) – a rule to be processed.
Returns the same or a modified Rule instance.

get_target_delegate(target: Any, request: HTTPServerRequest, **target_params: Any) → Optional[HTTPMessageDelegate]
Returns an instance of HTTPMessageDelegate for a Rule's target. This method is called by find_handler and can be extended to provide additional target types.
Parameters
• target – a Rule's target.
• request (httputil.HTTPServerRequest) – current request.
• target_params – additional parameters that can be useful for HTTPMessageDelegate creation.

class tornado.routing.ReversibleRuleRouter(rules: Optional[List[Union[Rule, List[Any], Tuple[Union[str, Matcher], Any], Tuple[Union[str, Matcher], Any, Dict[str, Any]], Tuple[Union[str, Matcher], Any, Dict[str, Any], str]]]] = None)
A rule-based router that implements the reverse_url method.
Each rule added to this router may have a name attribute that can be used to reconstruct an original uri. The actual reconstruction takes place in a rule's matcher (see Matcher.reverse).

class tornado.routing.Rule(matcher: Matcher, target: Any, target_kwargs: Optional[Dict[str, Any]] = None, name: Optional[str] = None)
A routing rule.

Constructs a Rule instance.
Parameters
• matcher (Matcher) – a Matcher instance used for determining whether the rule should be considered a match for a specific request.
• target – a Rule's target (typically a RequestHandler or HTTPServerConnectionDelegate subclass, or even a nested Router, depending on the routing implementation).
• target_kwargs (dict) – a dict of parameters that can be useful at the moment of target instantiation (for example, status_code for a RequestHandler subclass). They end up in target_params['target_kwargs'] of the RuleRouter.get_target_delegate method.
• name (str) – the name of the rule that can be used to find it in the ReversibleRouter.reverse_url implementation.

class tornado.routing.Matcher
Represents a matcher for request features.

match(request: HTTPServerRequest) → Optional[Dict[str, Any]]
Matches current instance against the request.
Parameters
• request (httputil.HTTPServerRequest) – current HTTP request
Returns a dict of parameters to be passed to the target handler (for example, handler_kwargs, path_args, path_kwargs can be passed for proper RequestHandler instantiation). An empty dict is a valid (and common) return value to indicate a match when the argument-passing features are not used. None must be returned to indicate that there is no match.

reverse(*args: Any) → Optional[str]
Reconstructs full url from matcher instance and additional arguments.

class tornado.routing.AnyMatches
Matches any request.

class tornado.routing.HostMatches(host_pattern: Union[str, Pattern])
Matches requests from hosts specified by the host_pattern regex.

class tornado.routing.DefaultHostMatches(application: Any, host_pattern: Pattern)
Matches requests from a host that is equal to the application's default_host. Always returns no match if the X-Real-Ip header is present.

class tornado.routing.PathMatches(path_pattern: Union[str, Pattern])
Matches requests with paths specified by the path_pattern regex.

class tornado.routing.URLSpec(pattern: Union[str, Pattern], handler: Any, kwargs: Optional[Dict[str, Any]] = None, name: Optional[str] = None)
Specifies mappings between URLs and handlers.

Parameters:
• pattern: Regular expression to be matched.
Any capturing groups in the regex will be passed in to the handler's get/post/etc methods as arguments (by keyword if named, by position if unnamed. Named and unnamed capturing groups may not be mixed in the same rule).
• handler: RequestHandler subclass to be invoked.
• kwargs (optional): A dictionary of additional arguments to be passed to the handler's constructor.
• name (optional): A name for this handler. Used by reverse_url.

6.2.4 tornado.escape — Escaping and string manipulation

Escaping/unescaping methods for HTML, JSON, URLs, and others.

Also includes a few other miscellaneous string manipulation functions that have crept in over time.

Escaping functions

tornado.escape.xhtml_escape(value: Union[str, bytes]) → str
Escapes a string so it is valid within HTML or XML.
Escapes the characters <, >, ", ', and &. When used in attribute values the escaped strings must be enclosed in quotes.
Changed in version 3.2: Added the single quote to the list of escaped characters.

tornado.escape.xhtml_unescape(value: Union[str, bytes]) → str
Un-escapes an XML-escaped string.

tornado.escape.url_escape(value: Union[str, bytes], plus: bool = True) → str
Returns a URL-encoded version of the given value.
If plus is true (the default), spaces will be represented as "+" instead of "%20". This is appropriate for query strings but not for the path component of a URL. Note that this default is the reverse of Python's urllib module.
New in version 3.1: The plus argument.

tornado.escape.url_unescape(value: Union[str, bytes], encoding: None, plus: bool = True) → bytes
tornado.escape.url_unescape(value: Union[str, bytes], encoding: str = 'utf-8', plus: bool = True) → str
Decodes the given value from a URL.
The argument may be either a byte or unicode string.
If encoding is None, the result will be a byte string. Otherwise, the result is a unicode string in the specified encoding.
If plus is true (the default), plus signs will be interpreted as spaces (literal plus signs must be represented as "%2B"). This is appropriate for query strings and form-encoded values but not for the path component of a URL. Note that this default is the reverse of Python's urllib module.
New in version 3.1: The plus argument.

tornado.escape.json_encode(value: Any) → str
JSON-encodes the given Python object.

tornado.escape.json_decode(value: Union[str, bytes]) → Any
Returns Python objects for the given JSON string.
Supports both str and bytes inputs.

Byte/unicode conversions

tornado.escape.utf8(value: bytes) → bytes
tornado.escape.utf8(value: str) → bytes
tornado.escape.utf8(value: None) → None
Converts a string argument to a byte string.
If the argument is already a byte string or None, it is returned unchanged. Otherwise it must be a unicode string and is encoded as utf8.

tornado.escape.to_unicode(value: str) → str
tornado.escape.to_unicode(value: bytes) → str
tornado.escape.to_unicode(value: None) → None
Converts a string argument to a unicode string.
If the argument is already a unicode string or None, it is returned unchanged. Otherwise it must be a byte string and is decoded as utf8.

tornado.escape.native_str()
tornado.escape.to_basestring()
Converts a byte or unicode string into type str. These functions were used to help transition from Python 2 to Python 3 but are now deprecated aliases for to_unicode.

tornado.escape.recursive_unicode(obj: Any) → Any
Walks a simple data structure, converting byte strings to unicode. Supports lists, tuples, and dictionaries.

Miscellaneous functions

tornado.escape.linkify(text: Union[str, bytes], shorten: bool = False, extra_params: Union[str, Callable[[str], str]] = '', require_protocol: bool = False, permitted_protocols: List[str] = ['http', 'https']) → str
Converts plain text into HTML with links.
For example: linkify("Hello http://tornadoweb.org!") would return Hello <a href="http://tornadoweb.org">http://tornadoweb.org</a>!

Parameters:
• shorten: Long urls will be shortened for display.
• extra_params: Extra text to include in the link tag, or a callable taking the link as an argument and returning the extra text, e.g. linkify(text, extra_params='rel="nofollow" class="external"'), or:

    def extra_params_cb(url):
        if url.startswith("http://example.com"):
            return 'class="internal"'
        else:
            return 'class="external" rel="nofollow"'

    linkify(text, extra_params=extra_params_cb)

• require_protocol: Only linkify urls which include a protocol. If this is False, urls such as www.facebook.com will also be linkified.
• permitted_protocols: List (or set) of protocols which should be linkified, e.g. linkify(text, permitted_protocols=["http", "ftp", "mailto"]). It is very unsafe to include protocols such as javascript.

tornado.escape.squeeze(value: str) → str
Replace all sequences of whitespace chars with a single space.

6.2.5 tornado.locale — Internationalization support

Translation methods for generating localized strings.

To load a locale and generate a translated string:

    user_locale = tornado.locale.get("es_LA")
    print(user_locale.translate("Sign out"))

tornado.locale.get() returns the closest matching locale, not necessarily the specific locale you requested. You can support pluralization with additional arguments to translate(), e.g.:

    people = [...]
    message = user_locale.translate(
        "%(list)s is online", "%(list)s are online", len(people))
    print(message % {"list": user_locale.list(people)})

The first string is chosen if len(people) == 1, otherwise the second string is chosen.

Applications should call one of load_translations (which uses a simple CSV format) or load_gettext_translations (which uses the .mo format supported by gettext and related tools).
If neither method is called, the Locale.translate method will simply return the original string.

tornado.locale.get(*locale_codes: str) → Locale
Returns the closest match for the given locale codes.
We iterate over all given locale codes in order. If we have a tight or a loose match for the code (e.g., "en" for "en_US"), we return the locale. Otherwise we move to the next code in the list.
By default we return en_US if no translations are found for any of the specified locales. You can change the default locale with set_default_locale().

tornado.locale.set_default_locale(code: str) → None
Sets the default locale.
The default locale is assumed to be the language used for all strings in the system. The translations loaded from disk are mappings from the default locale to the destination locale. Consequently, you don't need to create a translation file for the default locale.

tornado.locale.load_translations(directory: str, encoding: Optional[str] = None) → None
Loads translations from CSV files in a directory.
Translations are strings with optional Python-style named placeholders (e.g., My name is %(name)s) and their associated translations.
The directory should have translation files of the form LOCALE.csv, e.g. es_GT.csv. The CSV files should have two or three columns: string, translation, and an optional plural indicator. Plural indicators should be one of "plural" or "singular". A given string can have both singular and plural forms. For example %(name)s liked this may have a different verb conjugation depending on whether %(name)s is one name or a list of names. There should be two rows in the CSV file for that string, one with plural indicator "singular", and one "plural". For strings with no verbs that would change on translation, simply use "unknown" or the empty string (or don't include the column at all).
The file is read using the csv module in the default "excel" dialect.
In this format there should not be spaces after the commas.
If no encoding parameter is given, the encoding will be detected automatically (among UTF-8 and UTF-16) if the file contains a byte-order marker (BOM), defaulting to UTF-8 if no BOM is present.
Example translation es_LA.csv:

    "I love you","Te amo"
    "%(name)s liked this","A %(name)s les gustó esto","plural"
    "%(name)s liked this","A %(name)s le gustó esto","singular"

Changed in version 4.3: Added encoding parameter. Added support for BOM-based encoding detection, UTF-16, and UTF-8-with-BOM.

tornado.locale.load_gettext_translations(directory: str, domain: str) → None
Loads translations from gettext's locale tree.
The locale tree is similar to the system's /usr/share/locale, like:

    {directory}/{lang}/LC_MESSAGES/{domain}.mo

Three steps are required to have your app translated:
1. Generate a POT translation file:

    xgettext --language=Python --keyword=_:1,2 -d mydomain file1.py file2.html etc

2. Merge against an existing POT file:

    msgmerge old.po mydomain.po > new.po

3. Compile:

    msgfmt mydomain.po -o {directory}/pt_BR/LC_MESSAGES/mydomain.mo

tornado.locale.get_supported_locales() → Iterable[str]
Returns a list of all the supported locale codes.

class tornado.locale.Locale(code: str)
Object representing a locale.
After calling one of load_translations or load_gettext_translations, call get or get_closest to get a Locale object.

classmethod get_closest(*locale_codes: str) → Locale
Returns the closest match for the given locale code.

classmethod get(code: str) → Locale
Returns the Locale for the given locale code.
If it is not supported, we raise an exception.

translate(message: str, plural_message: Optional[str] = None, count: Optional[int] = None) → str
Returns the translation for the given message for this locale.
If plural_message is given, you must also provide count. We return plural_message when count != 1, and we return the singular form for the given message when count == 1.
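The CSV workflow above can be tied together in a minimal end-to-end sketch; the es_LA locale name comes from the docs' example, and the temporary directory is illustrative:

```python
import os
import tempfile

import tornado.locale

# Write the example translation file from the docs into a temp directory.
csv_rows = (
    '"I love you","Te amo"\n'
    '"%(name)s liked this","A %(name)s les gustó esto","plural"\n'
    '"%(name)s liked this","A %(name)s le gustó esto","singular"\n'
)
with tempfile.TemporaryDirectory() as directory:
    with open(os.path.join(directory, "es_LA.csv"), "w", encoding="utf-8") as f:
        f.write(csv_rows)
    # No BOM, so the file is read as UTF-8.
    tornado.locale.load_translations(directory)

user_locale = tornado.locale.get("es_LA")
print(user_locale.translate("I love you"))  # Te amo
# plural_message plus count selects the plural or singular row.
print(user_locale.translate("%(name)s liked this", "%(name)s liked this", 2))
print(user_locale.translate("%(name)s liked this", "%(name)s liked this", 1))
```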
format_date(date: Union[int, float, datetime], gmt_offset: int = 0, relative: bool = True, shorter: bool = False, full_format: bool = False) → str
Formats the given date (which should be GMT).
By default, we return a relative time (e.g., "2 minutes ago"). You can return an absolute date string with relative=False. You can force a full format date ("July 10, 1980") with full_format=True.
This method is primarily intended for dates in the past. For dates in the future, we fall back to full format.

format_day(date: datetime, gmt_offset: int = 0, dow: bool = True) → bool
Formats the given date as a day of week.
Example: "Monday, January 22". You can remove the day of week with dow=False.

list(parts: Any) → str
Returns a comma-separated list for the given list of parts.
The format is, e.g., "A, B and C", "A and B" or just "A" for lists of size 1.

friendly_number(value: int) → str
Returns a comma-separated number for the given integer.

class tornado.locale.CSVLocale(code: str, translations: Dict[str, Dict[str, str]])
Locale implementation using tornado's CSV translation format.

class tornado.locale.GettextLocale(code: str, translations: NullTranslations)
Locale implementation using the gettext module.

pgettext(context: str, message: str, plural_message: Optional[str] = None, count: Optional[int] = None) → str
Allows setting a context for the translation; accepts plural forms.
Usage example:

    pgettext("law", "right")
    pgettext("good", "right")

Plural message example:

    pgettext("organization", "club", "clubs", len(clubs))
    pgettext("stick", "club", "clubs", len(clubs))

To generate a POT file with context, add the following options to step 1 of the load_gettext_translations sequence:

    xgettext [basic options] --keyword=pgettext:1c,2 --keyword=pgettext:1c,2,3

New in version 4.2.

6.2.6 tornado.websocket — Bidirectional communication to the browser

Implementation of the WebSocket protocol.
WebSockets allow for bidirectional communication between the browser and server.
WebSockets are supported in the current versions of all major browsers.
This module implements the final version of the WebSocket protocol as defined in RFC 6455.
Changed in version 4.0: Removed support for the draft 76 protocol version.

class tornado.websocket.WebSocketHandler(application: Application, request: HTTPServerRequest, **kwargs: Any)
Subclass this class to create a basic WebSocket handler.
Override on_message to handle incoming messages, and use write_message to send messages to the client. You can also override open and on_close to handle opened and closed connections.
Custom upgrade response headers can be sent by overriding set_default_headers or prepare.
See http://dev.w3.org/html5/websockets/ for details on the JavaScript interface. The protocol is specified at http://tools.ietf.org/html/rfc6455.
Here is an example WebSocket handler that echoes all received messages back to the client:

    class EchoWebSocket(tornado.websocket.WebSocketHandler):
        def open(self):
            print("WebSocket opened")

        def on_message(self, message):
            self.write_message(u"You said: " + message)

        def on_close(self):
            print("WebSocket closed")

WebSockets are not standard HTTP connections. The "handshake" is HTTP, but after the handshake, the protocol is message-based. Consequently, most of the Tornado HTTP facilities are not available in handlers of this type. The only communication methods available to you are write_message(), ping(), and close(). Likewise, your request handler class should implement the open() method rather than get() or post().
If you map the handler above to /websocket in your application, you can invoke it in JavaScript with:

    var ws = new WebSocket("ws://localhost:8888/websocket");
    ws.onopen = function() {
        ws.send("Hello, world");
    };
    ws.onmessage = function (evt) {
        alert(evt.data);
    };

This script pops up an alert box that says "You said: Hello, world".
Web browsers allow any site to open a websocket connection to any other, instead of using the same-origin policy that governs other network access from JavaScript. This can be surprising and is a potential security hole, so since Tornado 4.0 WebSocketHandler requires applications that wish to receive cross-origin websockets to opt in by overriding the check_origin method (see that method’s docs for details). Failure to do so is the most likely cause of 403 errors when making a websocket connection.
When using a secure websocket connection (wss://) with a self-signed certificate, the connection from a browser may fail because it wants to show the “accept this certificate” dialog but has nowhere to show it. You must first visit a regular HTML page using the same certificate to accept it before the websocket connection will succeed.
If the application setting websocket_ping_interval has a non-zero value, a ping will be sent periodically, and the connection will be closed if a response is not received before the websocket_ping_timeout.
Messages larger than the websocket_max_message_size application setting (default 10MiB) will not be accepted.
Changed in version 4.5: Added websocket_ping_interval, websocket_ping_timeout, and websocket_max_message_size.
Event handlers
WebSocketHandler.open(*args: str, **kwargs: str) → Optional[Awaitable[None]]
Invoked when a new WebSocket is opened.
The arguments to open are extracted from the tornado.web.URLSpec regular expression, just like the arguments to tornado.web.RequestHandler.get.
open may be a coroutine. on_message will not be called until open has returned.
Changed in version 5.1: open may be a coroutine.
WebSocketHandler.on_message(message: Union[str, bytes]) → Optional[Awaitable[None]]
Handle incoming messages on the WebSocket.
This method must be overridden.
Changed in version 4.5: on_message can be a coroutine.
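Since both open and on_message may be coroutines, a handler that performs asynchronous work per connection or per message might look like the following sketch; the asyncio.sleep calls are stand-in assumptions for real async work such as a database lookup:

```python
import asyncio

import tornado.websocket


class SlowEchoWebSocket(tornado.websocket.WebSocketHandler):
    async def open(self):
        # on_message will not be invoked until this coroutine returns.
        await asyncio.sleep(0)  # stand-in for asynchronous setup work

    async def on_message(self, message):
        # Coroutine message handlers (Tornado 4.5+) may await other
        # async operations before replying.
        await asyncio.sleep(0)  # stand-in for an asynchronous lookup
        await self.write_message(u"You said: " + message)
```

While a coroutine on_message is running, additional incoming frames are buffered; keep per-message work short if you need low latency.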
WebSocketHandler.on_close() → None
Invoked when the WebSocket is closed.
If the connection was closed cleanly and a status code or reason phrase was supplied, these values will be available as the attributes self.close_code and self.close_reason.
Changed in version 4.0: Added close_code and close_reason attributes.
WebSocketHandler.select_subprotocol(subprotocols: List[str]) → Optional[str]
Override to implement subprotocol negotiation.
subprotocols is a list of strings identifying the subprotocols proposed by the client. This method may be overridden to return one of those strings to select it, or None to not select a subprotocol.
Failure to select a subprotocol does not automatically abort the connection, although clients may close the connection if none of their proposed subprotocols was selected.
The list may be empty, in which case this method must return None. This method is always called exactly once even if no subprotocols were proposed so that the handler can be advised of this fact.
Changed in version 5.1: Previously, this method was called with a list containing an empty string instead of an empty list if no subprotocols were proposed by the client.
WebSocketHandler.selected_subprotocol
The subprotocol returned by select_subprotocol.
New in version 5.1.
WebSocketHandler.on_ping(data: bytes) → None
Invoked when a ping frame is received.
Output
WebSocketHandler.write_message(message: Union[bytes, str, Dict[str, Any]], binary: bool = False) → Future[None]
Sends the given message to the client of this Web Socket. The message may be either a string or a dict (which will be encoded as json). If the binary argument is false, the message will be sent as utf8; in binary mode any byte string is allowed.
If the connection is already closed, raises WebSocketClosedError. Returns a Future which can be used for flow control.
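Because write_message returns a Future and raises WebSocketClosedError once the connection is gone, a sending loop can both apply backpressure and handle disconnects. A sketch, where the `messages` iterable and the `send_all` helper are hypothetical names, not part of the WebSocketHandler API:

```python
import tornado.websocket


class BroadcastingHandler(tornado.websocket.WebSocketHandler):
    async def send_all(self, messages):
        # `messages` is a hypothetical iterable of outgoing payloads.
        for msg in messages:
            try:
                # Awaiting the returned Future gives flow control: the next
                # message is not queued until this one has been flushed.
                await self.write_message(msg)
            except tornado.websocket.WebSocketClosedError:
                # The client went away; stop sending.
                break
```

Without the await, messages accumulate in Tornado's write buffer for slow clients; awaiting each write keeps memory bounded.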
Changed in version 3.2: WebSocketClosedError was added (previously a closed connection would raise an AttributeError)
Changed in version 4.3: Returns a Future which can be used for flow control.
Changed in version 5.0: Consistently raises WebSocketClosedError. Previously could sometimes raise StreamClosedError.
WebSocketHandler.close(code: Optional[int] = None, reason: Optional[str] = None) → None
Closes this Web Socket.
Once the close handshake is successful the socket will be closed.
code may be a numeric status code, taken from the values defined in RFC 6455 section 7.4.1. reason may be a textual message about why the connection is closing. These values are made available to the client, but are not otherwise interpreted by the websocket protocol.
Changed in version 4.0: Added the code and reason arguments.
Configuration
WebSocketHandler.check_origin(origin: str) → bool
Override to enable support for allowing alternate origins.
The origin argument is the value of the Origin HTTP header, the url responsible for initiating this request. This method is not called for clients that do not send this header; such requests are always allowed (because all browsers that implement WebSockets support this header, and non-browser clients do not have the same cross-site security concerns).
Should return True to accept the request or False to reject it. By default, rejects all requests with an origin on a host other than this one.
This is a security protection against cross site scripting attacks on browsers, since WebSockets are allowed to bypass the usual same-origin policies and don’t use CORS headers.
Warning: This is an important security measure; don’t disable it without understanding the security implications. In particular, if your authentication is cookie-based, you must either restrict the origins allowed by check_origin() or implement your own XSRF-like protection for websocket connections. See these articles for more.
To accept all cross-origin traffic (which was the default prior to Tornado 4.0), simply override this method to always return True:
def check_origin(self, origin):
    return True
To allow connections from any subdomain of your site, you might do something like:
def check_origin(self, origin):
    parsed_origin = urllib.parse.urlparse(origin)
    return parsed_origin.netloc.endswith(".mydomain.com")
New in version 4.0.
WebSocketHandler.get_compression_options() → Optional[Dict[str, Any]]
Override to return compression options for the connection.
If this method returns None (the default), compression will be disabled. If it returns a dict (even an empty one), it will be enabled. The contents of the dict may be used to control the following compression options:
compression_level specifies the compression level.
mem_level specifies the amount of memory used for the internal compression state.
These parameters are documented in detail here: https://docs.python.org/3.6/library/zlib.html#zlib.compressobj
New in version 4.1.
Changed in version 4.5: Added compression_level and mem_level.
WebSocketHandler.set_nodelay(value: bool) → None
Set the no-delay flag for this stream.
By default, small messages may be delayed and/or combined to minimize the number of packets sent. This can sometimes cause 200-500ms delays due to the interaction between Nagle’s algorithm and TCP delayed ACKs. To reduce this delay (at the expense of possibly increasing bandwidth usage), call self.set_nodelay(True) once the websocket connection is established.
See BaseIOStream.set_nodelay for additional details.
New in version 3.1.
Other
WebSocketHandler.ping(data: Union[str, bytes] = b'') → None
Send ping frame to the remote end.
The data argument allows a small amount of data (up to 125 bytes) to be sent as a part of the ping message. Note that not all websocket implementations expose this data to applications.
Consider using the websocket_ping_interval application setting instead of sending pings manually.
Changed in version 5.1: The data argument is now optional.
WebSocketHandler.on_pong(data: bytes) → None
Invoked when the response to a ping frame is received.
exception tornado.websocket.WebSocketClosedError
Raised by operations on a closed connection.
New in version 3.2.
Client-side support
tornado.websocket.websocket_connect(url: Union[str, HTTPRequest], callback: Optional[Callable[[Future[WebSocketClientConnection]], None]] = None, connect_timeout: Optional[float] = None, on_message_callback: Optional[Callable[[Union[None, str, bytes]], None]] = None, compression_options: Optional[Dict[str, Any]] = None, ping_interval: Optional[float] = None, ping_timeout: Optional[float] = None, max_message_size: int = 10485760, subprotocols: Optional[List[str]] = None, resolver: Optional[Resolver] = None) → Awaitable[WebSocketClientConnection]
Client-side websocket support.
Takes a url and returns a Future whose result is a WebSocketClientConnection.
compression_options is interpreted in the same way as the return value of WebSocketHandler.get_compression_options.
The connection supports two styles of operation. In the coroutine style, the application typically calls read_message in a loop:
conn = yield websocket_connect(url)
while True:
    msg = yield conn.read_message()
    if msg is None:
        break
    # Do something with msg
In the callback style, pass an on_message_callback to websocket_connect. In both styles, a message of None indicates that the connection has been closed.
subprotocols may be a list of strings specifying proposed subprotocols. The selected protocol may be found on the selected_subprotocol attribute of the connection object when the connection is complete.
Changed in version 3.2: Also accepts HTTPRequest objects in place of urls.
Changed in version 4.1: Added compression_options and on_message_callback.
Changed in version 4.5: Added the ping_interval, ping_timeout, and max_message_size arguments, which have the same meaning as in WebSocketHandler.
Changed in version 5.0: The io_loop argument (deprecated since version 4.1) has been removed.
Changed in version 5.1: Added the subprotocols argument.
Changed in version 6.3: Added the resolver argument.
class tornado.websocket.WebSocketClientConnection(request: HTTPRequest, on_message_callback: Optional[Callable[[Union[None, str, bytes]], None]] = None, compression_options: Optional[Dict[str, Any]] = None, ping_interval: Optional[float] = None, ping_timeout: Optional[float] = None, max_message_size: int = 10485760, resolver: Optional[Resolver] = None)
WebSocket client connection.
This class should not be instantiated directly; use the websocket_connect function instead.
close(code: Optional[int] = None, reason: Optional[str] = None) → None
Closes the websocket connection.
code and reason are documented under WebSocketHandler.close.
New in version 3.2.
Changed in version 4.0: Added the code and reason arguments.
write_message(message: Union[str, bytes, Dict[str, Any]], binary: bool = False) → Future[None]
Sends a message to the WebSocket server.
If the stream is closed, raises WebSocketClosedError. Returns a Future which can be used for flow control.
Changed in version 5.0: Exception raised on a closed stream changed from StreamClosedError to WebSocketClosedError.
read_message(callback: Optional[Callable[[Future[Union[None, str, bytes]]], None]] = None) → Awaitable[Union[None, str, bytes]]
Reads a message from the WebSocket server.
If on_message_callback was specified at WebSocket initialization, this function will never return messages.
Returns a future whose result is the message, or None if the connection is closed. If a callback argument is given it will be called with the future when it is ready.
ping(data: bytes = b'') → None
Send ping frame to the remote end.
The data argument allows a small amount of data (up to 125 bytes) to be sent as a part of the ping message. Note that not all websocket implementations expose this data to applications.
Consider using the ping_interval argument to websocket_connect instead of sending pings manually.
New in version 5.1.
property selected_subprotocol: Optional[str]
The subprotocol selected by the server.
New in version 5.1.
6.3 HTTP servers and clients
6.3.1 tornado.httpserver — Non-blocking HTTP server
A non-blocking, single-threaded HTTP server.
Typical applications have little direct interaction with the HTTPServer class except to start a server at the beginning of the process (and even that is often done indirectly via tornado.web.Application.listen).
Changed in version 4.0: The HTTPRequest class that used to live in this module has been moved to tornado.httputil.HTTPServerRequest. The old name remains as an alias.
HTTP Server
class tornado.httpserver.HTTPServer(request_callback: Union[httputil.HTTPServerConnectionDelegate, Callable[[httputil.HTTPServerRequest], None]], no_keep_alive: bool = False, xheaders: bool = False, ssl_options: Union[Dict[str, Any], ssl.SSLContext] = None, protocol: Optional[str] = None, decompress_request: bool = False, chunk_size: Optional[int] = None, max_header_size: Optional[int] = None, idle_connection_timeout: Optional[float] = None, body_timeout: Optional[float] = None, max_body_size: Optional[int] = None, max_buffer_size: Optional[int] = None, trusted_downstream: Optional[List[str]] = None)
A non-blocking, single-threaded HTTP server.
A server is defined by a subclass of HTTPServerConnectionDelegate, or, for backwards compatibility, a callback that takes an HTTPServerRequest as an argument. The delegate is usually a tornado.web.Application.
HTTPServer supports keep-alive connections by default (automatically for HTTP/1.1, or for HTTP/1.0 when the client requests Connection: keep-alive).
If xheaders is True, we support the X-Real-Ip/X-Forwarded-For and X-Scheme/X-Forwarded-Proto headers, which override the remote IP and URI scheme/protocol for all requests. These headers are useful when running Tornado behind a reverse proxy or load balancer. The protocol argument can also be set to https if Tornado is run behind an SSL-decoding proxy that does not set one of the supported xheaders.
By default, when parsing the X-Forwarded-For header, Tornado will select the last (i.e., the closest) address on the list of hosts as the remote host IP address. To select the next server in the chain, a list of trusted downstream hosts may be passed as the trusted_downstream argument. These hosts will be skipped when parsing the X-Forwarded-For header.
To make this server serve SSL traffic, send the ssl_options keyword argument with an ssl.SSLContext object. For compatibility with older versions of Python ssl_options may also be a dictionary of keyword arguments for the ssl.wrap_socket method:
ssl_ctx = ssl.create_default_context(ssl.Purpose.CLIENT_AUTH)
ssl_ctx.load_cert_chain(os.path.join(data_dir, "mydomain.crt"),
                        os.path.join(data_dir, "mydomain.key"))
HTTPServer(application, ssl_options=ssl_ctx)
HTTPServer initialization follows one of three patterns (the initialization methods are defined on tornado.tcpserver.TCPServer):
1. listen: single-process:
async def main():
    server = HTTPServer()
    server.listen(8888)
    await asyncio.Event().wait()

asyncio.run(main())
In many cases, tornado.web.Application.listen can be used to avoid the need to explicitly create the HTTPServer.
While this example does not create multiple processes on its own, when the reuse_port=True argument is passed to listen() you can run the program multiple times to create a multi-process service.
2. add_sockets: multi-process:
sockets = bind_sockets(8888)
tornado.process.fork_processes(0)

async def post_fork_main():
    server = HTTPServer()
    server.add_sockets(sockets)
    await asyncio.Event().wait()

asyncio.run(post_fork_main())
The add_sockets interface is more complicated, but it can be used with tornado.process.fork_processes to run a multi-process service with all worker processes forked from a single parent. add_sockets can also be used in single-process servers if you want to create your listening sockets in some way other than bind_sockets.
Note that when using this pattern, nothing that touches the event loop can be run before fork_processes.
3. bind/start: simple deprecated multi-process:
server = HTTPServer()
server.bind(8888)
server.start(0)  # Forks multiple sub-processes
IOLoop.current().start()
This pattern is deprecated because it requires interfaces in the asyncio module that have been deprecated since Python 3.10. Support for creating multiple processes in the start method will be removed in a future version of Tornado.
Changed in version 4.0: Added decompress_request, chunk_size, max_header_size, idle_connection_timeout, body_timeout, max_body_size arguments. Added support for HTTPServerConnectionDelegate instances as request_callback.
Changed in version 4.1: HTTPServerConnectionDelegate.start_request is now called with two arguments (server_conn, request_conn) (in accordance with the documentation) instead of one (request_conn).
Changed in version 4.2: HTTPServer is now a subclass of tornado.util.Configurable.
Changed in version 4.5: Added the trusted_downstream argument.
Changed in version 5.0: The io_loop argument has been removed.
The public interface of this class is mostly inherited from TCPServer and is documented under that class.
coroutine close_all_connections() → None
Close all open connections and asynchronously wait for them to finish.
This method is used in combination with stop to support clean shutdowns (especially for unittests). Typical usage would call stop() first to stop accepting new connections, then await close_all_connections() to wait for existing connections to finish.
This method does not currently close open websocket connections.
Note that this method is a coroutine and must be called with await.
6.3.2 tornado.httpclient — Asynchronous HTTP client
Blocking and non-blocking HTTP client interfaces.
This module defines a common interface shared by two implementations, simple_httpclient and curl_httpclient. Applications may either instantiate their chosen implementation class directly or use the AsyncHTTPClient class from this module, which selects an implementation that can be overridden with the AsyncHTTPClient.configure method.
The default implementation is simple_httpclient, and this is expected to be suitable for most users’ needs. However, some applications may wish to switch to curl_httpclient for reasons such as the following:
• curl_httpclient has some features not found in simple_httpclient, including support for HTTP proxies and the ability to use a specified network interface.
• curl_httpclient is more likely to be compatible with sites that are not-quite-compliant with the HTTP spec, or sites that use little-exercised features of HTTP.
• curl_httpclient is faster.
Note that if you are using curl_httpclient, it is highly recommended that you use a recent version of libcurl and pycurl. Currently the minimum supported version of libcurl is 7.22.0, and the minimum version of pycurl is 7.18.2. It is highly recommended that your libcurl installation is built with asynchronous DNS resolver (threaded or c-ares), otherwise you may encounter various problems with request timeouts (for more information, see http://curl.haxx.se/libcurl/c/curl_easy_setopt.html#CURLOPTCONNECTTIMEOUTMS and comments in curl_httpclient.py).
To select curl_httpclient, call AsyncHTTPClient.configure at startup:
AsyncHTTPClient.configure("tornado.curl_httpclient.CurlAsyncHTTPClient")
HTTP client interfaces
class tornado.httpclient.HTTPClient(async_client_class: Optional[Type[AsyncHTTPClient]] = None, **kwargs: Any)
A blocking HTTP client.
This interface is provided to make it easier to share code between synchronous and asynchronous applications. Applications that are running an IOLoop must use AsyncHTTPClient instead.
Typical usage looks like this:
http_client = httpclient.HTTPClient()
try:
    response = http_client.fetch("http://www.google.com/")
    print(response.body)
except httpclient.HTTPError as e:
    # HTTPError is raised for non-200 responses; the response
    # can be found in e.response.
    print("Error: " + str(e))
except Exception as e:
    # Other errors are possible, such as IOError.
    print("Error: " + str(e))
http_client.close()
Changed in version 5.0: Due to limitations in asyncio, it is no longer possible to use the synchronous HTTPClient while an IOLoop is running. Use AsyncHTTPClient instead.
close() → None
Closes the HTTPClient, freeing any resources used.
fetch(request: Union[HTTPRequest, str], **kwargs: Any) → HTTPResponse
Executes a request, returning an HTTPResponse.
The request may be either a string URL or an HTTPRequest object. If it is a string, we construct an HTTPRequest using any additional kwargs: HTTPRequest(request, **kwargs)
If an error occurs during the fetch, we raise an HTTPError unless the raise_error keyword argument is set to False.
class tornado.httpclient.AsyncHTTPClient(force_instance: bool = False, **kwargs: Any)
A non-blocking HTTP client.
Example usage:
async def f():
    http_client = AsyncHTTPClient()
    try:
        response = await http_client.fetch("http://www.google.com")
    except Exception as e:
        print("Error: %s" % e)
    else:
        print(response.body)
The constructor for this class is magic in several respects: It actually creates an instance of an implementation-specific subclass, and instances are reused as a kind of pseudo-singleton (one per IOLoop). The keyword argument force_instance=True can be used to suppress this singleton behavior. Unless force_instance=True is used, no arguments should be passed to the AsyncHTTPClient constructor. The implementation subclass as well as arguments to its constructor can be set with the static method configure().
All AsyncHTTPClient implementations support a defaults keyword argument, which can be used to set default values for HTTPRequest attributes. For example:
AsyncHTTPClient.configure(
    None, defaults=dict(user_agent="MyUserAgent"))
# or with force_instance:
client = AsyncHTTPClient(force_instance=True,
    defaults=dict(user_agent="MyUserAgent"))
Changed in version 5.0: The io_loop argument (deprecated since version 4.1) has been removed.
close() → None
Destroys this HTTP client, freeing any file descriptors used.
This method is not needed in normal use due to the way that AsyncHTTPClient objects are transparently reused. close() is generally only necessary when either the IOLoop is also being closed, or the force_instance=True argument was used when creating the AsyncHTTPClient.
No other methods may be called on the AsyncHTTPClient after close().
fetch(request: Union[str, HTTPRequest], raise_error: bool = True, **kwargs: Any) → Future[HTTPResponse]
Executes a request, asynchronously returning an HTTPResponse.
The request may be either a string URL or an HTTPRequest object.
If it is a string, we construct an HTTPRequest using any additional kwargs: HTTPRequest(request, **kwargs)
This method returns a Future whose result is an HTTPResponse. By default, the Future will raise an HTTPError if the request returned a non-200 response code (other errors may also be raised if the server could not be contacted). If raise_error is set to False, the response will always be returned regardless of the response code.
If a callback is given, it will be invoked with the HTTPResponse. In the callback interface, HTTPError is not automatically raised. Instead, you must check the response’s error attribute or call its rethrow method.
Changed in version 6.0: The callback argument was removed. Use the returned Future instead. The raise_error=False argument only affects the HTTPError raised when a non-200 response code is used, instead of suppressing all errors.
classmethod configure(impl: Union[None, str, Type[Configurable]], **kwargs: Any) → None
Configures the AsyncHTTPClient subclass to use.
AsyncHTTPClient() actually creates an instance of a subclass. This method may be called with either a class object or the fully-qualified name of such a class (or None to use the default, SimpleAsyncHTTPClient).
If additional keyword arguments are given, they will be passed to the constructor of each subclass instance created. The keyword argument max_clients determines the maximum number of simultaneous fetch() operations that can execute in parallel on each IOLoop. Additional arguments may be supported depending on the implementation class in use.
Example:
AsyncHTTPClient.configure("tornado.curl_httpclient.CurlAsyncHTTPClient")
Request objects
class tornado.httpclient.HTTPRequest(url: str, method: str = 'GET', headers: Optional[Union[Dict[str, str], HTTPHeaders]] = None, body: Optional[Union[bytes, str]] = None, auth_username: Optional[str] = None, auth_password: Optional[str] = None, auth_mode: Optional[str] = None, connect_timeout: Optional[float] = None, request_timeout: Optional[float] = None, if_modified_since: Optional[Union[float, datetime]] = None, follow_redirects: Optional[bool] = None, max_redirects: Optional[int] = None, user_agent: Optional[str] = None, use_gzip: Optional[bool] = None, network_interface: Optional[str] = None, streaming_callback: Optional[Callable[[bytes], None]] = None, header_callback: Optional[Callable[[str], None]] = None, prepare_curl_callback: Optional[Callable[[Any], None]] = None, proxy_host: Optional[str] = None, proxy_port: Optional[int] = None, proxy_username: Optional[str] = None, proxy_password: Optional[str] = None, proxy_auth_mode: Optional[str] = None, allow_nonstandard_methods: Optional[bool] = None, validate_cert: Optional[bool] = None, ca_certs: Optional[str] = None, allow_ipv6: Optional[bool] = None, client_key: Optional[str] = None, client_cert: Optional[str] = None, body_producer: Optional[Callable[[Callable[[bytes], None]], Future[None]]] = None, expect_100_continue: bool = False, decompress_response: Optional[bool] = None, ssl_options: Optional[Union[Dict[str, Any], SSLContext]] = None)
HTTP client request object.
All parameters except url are optional.
Parameters
• url (str) – URL to fetch
• method (str) – HTTP method, e.g. “GET” or “POST”
• headers (HTTPHeaders or dict) – Additional HTTP headers to pass on the request
• body (str or bytes) – HTTP request body as a string (byte or unicode; if unicode the utf-8 encoding will be used)
• body_producer (collections.abc.Callable) – Callable used for lazy/asynchronous request bodies. It is called with one argument, a write function, and should return a Future. It should call the write function with new data as it becomes available. The write function returns a Future which can be used for flow control. Only one of body and body_producer may be specified. body_producer is not supported on curl_httpclient. When using body_producer it is recommended to pass a Content-Length in the headers as otherwise chunked encoding will be used, and many servers do not support chunked encoding on requests. New in Tornado 4.0
• auth_username (str) – Username for HTTP authentication
• auth_password (str) – Password for HTTP authentication
• auth_mode (str) – Authentication mode; default is “basic”. Allowed values are implementation-defined; curl_httpclient supports “basic” and “digest”; simple_httpclient only supports “basic”
• connect_timeout (float) – Timeout for initial connection in seconds, default 20 seconds (0 means no timeout)
• request_timeout (float) – Timeout for entire request in seconds, default 20 seconds (0 means no timeout)
• if_modified_since (datetime or float) – Timestamp for If-Modified-Since header
• follow_redirects (bool) – Should redirects be followed automatically or return the 3xx response? Default True.
• max_redirects (int) – Limit for follow_redirects, default 5.
• user_agent (str) – String to send as User-Agent header
• decompress_response (bool) – Request a compressed response from the server and decompress it after downloading. Default is True. New in Tornado 4.0.
• use_gzip (bool) – Deprecated alias for decompress_response since Tornado 4.0.
• network_interface (str) – Network interface or source IP to use for request. See curl_httpclient note below.
• streaming_callback (collections.abc.Callable) – If set, streaming_callback will be run with each chunk of data as it is received, and HTTPResponse.body and HTTPResponse.buffer will be empty in the final response.
• header_callback (collections.abc.Callable) – If set, header_callback will be run with each header line as it is received (including the first line, e.g. HTTP/1.0 200 OK\r\n, and a final line containing only \r\n. All lines include the trailing newline characters). HTTPResponse.headers will be empty in the final response. This is most useful in conjunction with streaming_callback, because it’s the only way to get access to header data while the request is in progress.
• prepare_curl_callback (collections.abc.Callable) – If set, will be called with a pycurl.Curl object to allow the application to make additional setopt calls.
• proxy_host (str) – HTTP proxy hostname. To use proxies, proxy_host and proxy_port must be set; proxy_username, proxy_password and proxy_auth_mode are optional. Proxies are currently only supported with curl_httpclient.
• proxy_port (int) – HTTP proxy port
• proxy_username (str) – HTTP proxy username
• proxy_password (str) – HTTP proxy password
• proxy_auth_mode (str) – HTTP proxy authentication mode; default is “basic”. Supports “basic” and “digest”.
• allow_nonstandard_methods (bool) – Allow unknown values for method argument? Default is False.
• validate_cert (bool) – For HTTPS requests, validate the server’s certificate? Default is True.
• ca_certs (str) – filename of CA certificates in PEM format, or None to use defaults. See note below when used with curl_httpclient.
• client_key (str) – Filename for client SSL key, if any. See note below when used with curl_httpclient.
• client_cert (str) – Filename for client SSL certificate, if any. See note below when used with curl_httpclient.
• ssl_options (ssl.SSLContext) – ssl.SSLContext object for use in simple_httpclient (unsupported by curl_httpclient). Overrides validate_cert, ca_certs, client_key, and client_cert.
• allow_ipv6 (bool) – Use IPv6 when available? Default is True.
• expect_100_continue (bool) – If true, send the Expect: 100-continue header and wait for a continue response before sending the request body. Only supported with simple_httpclient.
Note: When using curl_httpclient certain options may be inherited by subsequent fetches because pycurl does not allow them to be cleanly reset. This applies to the ca_certs, client_key, client_cert, and network_interface arguments. If you use these options, you should pass them on every request (you don’t have to always use the same values, but it’s not possible to mix requests that specify these options with ones that use the defaults).
New in version 3.1: The auth_mode argument.
New in version 4.0: The body_producer and expect_100_continue arguments.
New in version 4.2: The ssl_options argument.
New in version 4.5: The proxy_auth_mode argument.
Response objects
class tornado.httpclient.HTTPResponse(request: HTTPRequest, code: int, headers: Optional[HTTPHeaders] = None, buffer: Optional[BytesIO] = None, effective_url: Optional[str] = None, error: Optional[BaseException] = None, request_time: Optional[float] = None, time_info: Optional[Dict[str, float]] = None, reason: Optional[str] = None, start_time: Optional[float] = None)
HTTP Response object.
Attributes:
• request: HTTPRequest object
• code: numeric HTTP status code, e.g. 200 or 404
• reason: human-readable reason phrase describing the status code
• headers: tornado.httputil.HTTPHeaders object
• effective_url: final location of the resource after following any redirects
• buffer: cStringIO object for response body
• body: response body as bytes (created on demand from self.buffer)
• error: Exception object, if any
• request_time: seconds from request start to finish. Includes all network operations from DNS resolution to receiving the last byte of data. Does not include time spent in the queue (due to the max_clients option). If redirects were followed, only includes the final request.
• start_time: Time at which the HTTP operation started, based on time.time (not the monotonic clock used by IOLoop.time). May be None if the request timed out while in the queue.
• time_info: dictionary of diagnostic timing information from the request. Available data are subject to change, but currently uses timings available from http://curl.haxx.se/libcurl/c/curl_easy_getinfo.html, plus queue, which is the delay (if any) introduced by waiting for a slot under AsyncHTTPClient’s max_clients setting.
New in version 5.1: Added the start_time attribute.
Changed in version 5.1: The request_time attribute previously included time spent in the queue for simple_httpclient, but not in curl_httpclient. Now queueing time is excluded in both implementations. request_time is now more accurate for curl_httpclient because it uses a monotonic clock when available.
rethrow() → None
If there was an error on the request, raise an HTTPError.
Exceptions
exception tornado.httpclient.HTTPClientError(code: int, message: Optional[str] = None, response: Optional[HTTPResponse] = None)
Exception thrown for an unsuccessful HTTP request.
Attributes:
• code - HTTP error integer error code, e.g. 404. Error code 599 is used when no HTTP response was received, e.g. for a timeout.
• response - HTTPResponse object, if any.
Note that if follow_redirects is False, redirects become HTTPErrors, and you can look at error.response.headers['Location'] to see the destination of the redirect. Changed in version 5.1: Renamed from HTTPError to HTTPClientError to avoid collisions with tornado.web.HTTPError. The name tornado.httpclient.HTTPError remains as an alias. exception tornado.httpclient.HTTPError Alias for HTTPClientError. Command-line interface This module provides a simple command-line interface to fetch a url using Tornado’s HTTP client. Example usage: # Fetch the url and print its body python -m tornado.httpclient http://www.google.com # Just print the headers python -m tornado.httpclient --print_headers --print_body=false http://www.google.com Implementations class tornado.simple_httpclient.SimpleAsyncHTTPClient(force_instance: bool = False, **kwargs: Any) Non-blocking HTTP client with no external dependencies. This class implements an HTTP 1.1 client on top of Tornado’s IOStreams. Some features found in the curl-based AsyncHTTPClient are not yet supported. In particular, proxies are not supported, connections are not reused, and callers cannot select the network interface to be used. This implementation supports the following arguments, which can be passed to configure() to control the global singleton, or to the constructor when force_instance=True. max_clients is the number of concurrent requests that can be in progress; when this limit is reached additional requests will be queued. Note that time spent waiting in this queue still counts against the request_timeout. defaults is a dict of parameters that will be used as defaults on all HTTPRequest objects submitted to this client. hostname_mapping is a dictionary mapping hostnames to IP addresses. It can be used to make local DNS changes when modifying system-wide settings like /etc/hosts is not possible or desirable (e.g. in unittests).
resolver is similar, but using the Resolver interface instead of a simple mapping. max_buffer_size (default 100MB) is the number of bytes that can be read into memory at once. max_body_size (defaults to max_buffer_size) is the largest response body that the client will accept. Without a streaming_callback, the smaller of these two limits applies; with a streaming_callback only max_body_size does. Changed in version 4.2: Added the max_body_size argument. class tornado.curl_httpclient.CurlAsyncHTTPClient(max_clients=10, defaults=None) libcurl-based HTTP client. This implementation supports the following arguments, which can be passed to configure() to control the global singleton, or to the constructor when force_instance=True. max_clients is the number of concurrent requests that can be in progress; when this limit is reached additional requests will be queued. defaults is a dict of parameters that will be used as defaults on all HTTPRequest objects submitted to this client. Example Code • A simple webspider shows how to fetch URLs concurrently. • The file uploader demo uses either HTTP POST or HTTP PUT to upload files to a server. 6.3.3 tornado.httputil — Manipulate HTTP headers and URLs HTTP utility code shared by clients and servers. This module also defines the HTTPServerRequest class which is exposed via tornado.web.RequestHandler.request. class tornado.httputil.HTTPHeaders(__arg: Mapping[str, List[str]]) class tornado.httputil.HTTPHeaders(__arg: Mapping[str, str]) class tornado.httputil.HTTPHeaders(*args: Tuple[str, str]) class tornado.httputil.HTTPHeaders(**kwargs: str) A dictionary that maintains Http-Header-Case for all keys. Supports multiple values per key via a pair of new methods, add() and get_list(). The regular dictionary interface returns a single value per key, with multiple values joined by a comma.
>>> h = HTTPHeaders({"content-type": "text/html"}) >>> list(h.keys()) ['Content-Type'] >>> h["Content-Type"] 'text/html' >>> h.add("Set-Cookie", "A=B") >>> h.add("Set-Cookie", "C=D") >>> h["set-cookie"] 'A=B,C=D' >>> h.get_list("set-cookie") ['A=B', 'C=D'] >>> for (k,v) in sorted(h.get_all()): ... print('%s: %s' % (k,v)) ... Content-Type: text/html Set-Cookie: A=B Set-Cookie: C=D add(name: str, value: str) → None Adds a new value for the given key. get_list(name: str) → List[str] Returns all values for the given header as a list. get_all() → Iterable[Tuple[str, str]] Returns an iterable of all (name, value) pairs. If a header has multiple values, multiple pairs will be returned with the same name. parse_line(line: str) → None Updates the dictionary with a single header line. >>> h = HTTPHeaders() >>> h.parse_line("Content-Type: text/html") >>> h.get('content-type') 'text/html' classmethod parse(headers: str) → HTTPHeaders Returns a dictionary from HTTP header text. >>> h = HTTPHeaders.parse("Content-Type: text/html\r\nContent-Length: 42\r\n") >>> sorted(h.items()) [('Content-Length', '42'), ('Content-Type', 'text/html')] Changed in version 5.1: Raises HTTPInputError on malformed headers instead of a mix of KeyError and ValueError. class tornado.httputil.HTTPServerRequest(method: Optional[str] = None, uri: Optional[str] = None, version: str = 'HTTP/1.0', headers: Optional[HTTPHeaders] = None, body: Optional[bytes] = None, host: Optional[str] = None, files: Optional[Dict[str, List[HTTPFile]]] = None, connection: Optional[HTTPConnection] = None, start_line: Optional[RequestStartLine] = None, server_connection: Optional[object] = None) A single HTTP request. All attributes are type str unless otherwise noted. method HTTP request method, e.g. “GET” or “POST” uri The requested uri. path The path portion of uri query The query portion of uri version HTTP version specified in request, e.g. “HTTP/1.1” headers HTTPHeaders dictionary-like object for request headers.
Acts like a case-insensitive dictionary with additional methods for repeated headers. body Request body, if present, as a byte string. remote_ip Client’s IP address as a string. If HTTPServer.xheaders is set, will pass along the real IP address provided by a load balancer in the X-Real-Ip or X-Forwarded-For header. Changed in version 3.1: The list format of X-Forwarded-For is now supported. protocol The protocol used, either “http” or “https”. If HTTPServer.xheaders is set, will pass along the protocol used by a load balancer if reported via an X-Scheme header. host The requested hostname, usually taken from the Host header. arguments GET/POST arguments are available in the arguments property, which maps argument names to lists of values (to support multiple values for individual names). Names are of type str, while arguments are byte strings. Note that this is different from RequestHandler.get_argument, which returns argument values as unicode strings. query_arguments Same format as arguments, but contains only arguments extracted from the query string. New in version 3.2. body_arguments Same format as arguments, but contains only arguments extracted from the request body. New in version 3.2. files File uploads are available in the files property, which maps file names to lists of HTTPFile. connection An HTTP request is attached to a single HTTP connection, which can be accessed through the “connection” attribute. Since connections are typically kept open in HTTP/1.1, multiple requests can be handled sequentially on a single connection. Changed in version 4.0: Moved from tornado.httpserver.HTTPRequest. property cookies: Dict[str, Morsel] A dictionary of http.cookies.Morsel objects. full_url() → str Reconstructs the full URL for this request. request_time() → float Returns the amount of time it took for this request to execute.
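full_url() conceptually rejoins the protocol, host, and uri attributes described above. A minimal stand-in (a hypothetical helper for illustration, not Tornado's actual implementation) might look like:

```python
# Hypothetical helper sketching how a full URL can be rebuilt from the
# protocol, host, and uri attributes of an HTTPServerRequest-like object.
def rebuild_full_url(protocol: str, host: str, uri: str) -> str:
    # protocol is "http" or "https"; uri already carries path and query.
    return f"{protocol}://{host}{uri}"

print(rebuild_full_url("https", "example.com", "/foo?a=b&a=c"))
# https://example.com/foo?a=b&a=c
```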
get_ssl_certificate(binary_form: bool = False) → Union[None, Dict, bytes] Returns the client’s SSL certificate, if any. To use client certificates, the HTTPServer’s ssl.SSLContext.verify_mode field must be set, e.g.: ssl_ctx = ssl.create_default_context(ssl.Purpose.CLIENT_AUTH) ssl_ctx.load_cert_chain("foo.crt", "foo.key") ssl_ctx.load_verify_locations("cacerts.pem") ssl_ctx.verify_mode = ssl.CERT_REQUIRED server = HTTPServer(app, ssl_options=ssl_ctx) By default, the return value is a dictionary (or None, if no client certificate is present). If binary_form is true, a DER-encoded form of the certificate is returned instead. See SSLSocket.getpeercert() in the standard library for more details. http://docs.python.org/library/ssl.html#sslsocket-objects exception tornado.httputil.HTTPInputError Exception class for malformed HTTP requests or responses from remote sources. New in version 4.0. exception tornado.httputil.HTTPOutputError Exception class for errors in HTTP output. New in version 4.0. class tornado.httputil.HTTPServerConnectionDelegate Implement this interface to handle requests from HTTPServer. New in version 4.0. start_request(server_conn: object, request_conn: HTTPConnection) → HTTPMessageDelegate This method is called by the server when a new request has started. Parameters • server_conn – is an opaque object representing the long-lived (e.g. tcp-level) connection. • request_conn – is a HTTPConnection object for a single request/response exchange. This method should return a HTTPMessageDelegate. on_close(server_conn: object) → None This method is called when a connection has been closed. Parameters server_conn – is a server connection that has previously been passed to start_request. class tornado.httputil.HTTPMessageDelegate Implement this interface to handle an HTTP request or response. New in version 4.0.
headers_received(start_line: Union[RequestStartLine, ResponseStartLine], headers: HTTPHeaders) → Optional[Awaitable[None]] Called when the HTTP headers have been received and parsed. Parameters • start_line – a RequestStartLine or ResponseStartLine depending on whether this is a client or server message. • headers – a HTTPHeaders instance. Some HTTPConnection methods can only be called during headers_received. May return a Future; if it does the body will not be read until it is done. data_received(chunk: bytes) → Optional[Awaitable[None]] Called when a chunk of data has been received. May return a Future for flow control. finish() → None Called after the last chunk of data has been received. on_connection_close() → None Called if the connection is closed without finishing the request. If headers_received is called, either finish or on_connection_close will be called, but not both. class tornado.httputil.HTTPConnection Applications use this interface to write their responses. New in version 4.0. write_headers(start_line: Union[RequestStartLine, ResponseStartLine], headers: HTTPHeaders, chunk: Optional[bytes] = None) → Future[None] Write an HTTP header block. Parameters • start_line – a RequestStartLine or ResponseStartLine. • headers – a HTTPHeaders instance. • chunk – the first (optional) chunk of data. This is an optimization so that small responses can be written in the same call as their headers. The version field of start_line is ignored. Returns a future for flow control. Changed in version 6.0: The callback argument was removed. write(chunk: bytes) → Future[None] Writes a chunk of body data. Returns a future for flow control. Changed in version 6.0: The callback argument was removed. finish() → None Indicates that the last body data has been written.
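The delegate lifecycle described above (headers_received, then zero or more data_received calls, then finish or on_connection_close) can be sketched with a plain class. This is an illustrative, stdlib-only driver that simulates what a connection would do while parsing a message; it is not wired to a real HTTPConnection:

```python
# Illustrative delegate following the HTTPMessageDelegate call order:
# headers_received -> data_received* -> finish (or on_connection_close).
class BodyCollector:
    def __init__(self):
        self.chunks = []
        self.finished = False

    def headers_received(self, start_line, headers):
        # Stash the parsed start line and headers for later use.
        self.start_line = start_line
        self.headers = headers

    def data_received(self, chunk: bytes):
        # Each body chunk arrives separately; accumulate them.
        self.chunks.append(chunk)

    def finish(self):
        # Last chunk seen: assemble the complete body.
        self.body = b"".join(self.chunks)
        self.finished = True

# Simulate the calls a connection would make while reading one message:
d = BodyCollector()
d.headers_received(("GET", "/", "HTTP/1.1"), {"Content-Length": "11"})
d.data_received(b"hello ")
d.data_received(b"world")
d.finish()
print(d.body)  # b'hello world'
```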
tornado.httputil.url_concat(url: str, args: Union[None, Dict[str, str], List[Tuple[str, str]], Tuple[Tuple[str, str], ...]]) → str Concatenate url and arguments regardless of whether url has existing query parameters. args may be either a dictionary or a list of key-value pairs (the latter allows for multiple values with the same key). >>> url_concat("http://example.com/foo", dict(c="d")) 'http://example.com/foo?c=d' >>> url_concat("http://example.com/foo?a=b", dict(c="d")) 'http://example.com/foo?a=b&c=d' >>> url_concat("http://example.com/foo?a=b", [("c", "d"), ("c", "d2")]) 'http://example.com/foo?a=b&c=d&c=d2' class tornado.httputil.HTTPFile Represents a file uploaded via a form. For backwards compatibility, its instance attributes are also accessible as dictionary keys. • filename • body • content_type tornado.httputil.parse_body_arguments(content_type: str, body: bytes, arguments: Dict[str, List[bytes]], files: Dict[str, List[HTTPFile]], headers: Optional[HTTPHeaders] = None) → None Parses a form request body. Supports application/x-www-form-urlencoded and multipart/form-data. The content_type parameter should be a string and body should be a byte string. The arguments and files parameters are dictionaries that will be updated with the parsed contents. tornado.httputil.parse_multipart_form_data(boundary: bytes, data: bytes, arguments: Dict[str, List[bytes]], files: Dict[str, List[HTTPFile]]) → None Parses a multipart/form-data body. The boundary and data parameters are both byte strings. The dictionaries given in the arguments and files parameters will be updated with the contents of the body. Changed in version 5.1: Now recognizes non-ASCII filenames in RFC 2231/5987 (filename*=) format. tornado.httputil.format_timestamp(ts: Union[int, float, tuple, struct_time, datetime]) → str Formats a timestamp in the format used by HTTP.
The argument may be a numeric timestamp as returned by time.time, a time tuple as returned by time.gmtime, or a datetime.datetime object. >>> format_timestamp(1359312200) 'Sun, 27 Jan 2013 18:43:20 GMT' class tornado.httputil.RequestStartLine(method, path, version) RequestStartLine(method, path, version) Create new instance of RequestStartLine(method, path, version) method Alias for field number 0 path Alias for field number 1 version Alias for field number 2 tornado.httputil.parse_request_start_line(line: str) → RequestStartLine Returns a (method, path, version) tuple for an HTTP 1.x request line. The response is a collections.namedtuple. >>> parse_request_start_line("GET /foo HTTP/1.1") RequestStartLine(method='GET', path='/foo', version='HTTP/1.1') class tornado.httputil.ResponseStartLine(version, code, reason) ResponseStartLine(version, code, reason) Create new instance of ResponseStartLine(version, code, reason) code Alias for field number 1 reason Alias for field number 2 version Alias for field number 0 tornado.httputil.parse_response_start_line(line: str) → ResponseStartLine Returns a (version, code, reason) tuple for an HTTP 1.x response line. The response is a collections.namedtuple. >>> parse_response_start_line("HTTP/1.1 200 OK") ResponseStartLine(version='HTTP/1.1', code=200, reason='OK') tornado.httputil.encode_username_password(username: Union[str, bytes], password: Union[str, bytes]) → bytes Encodes a username/password pair in the format used by HTTP auth. The return value is a byte string in the form username:password. New in version 5.1. tornado.httputil.split_host_and_port(netloc: str) → Tuple[str, Optional[int]] Returns (host, port) tuple from netloc. Returned port will be None if not present. New in version 4.1. tornado.httputil.qs_to_qsl(qs: Dict[str, List]) → Iterable[Tuple[str, AnyStr]] Generator converting a result of parse_qs back to name-value pairs.
New in version 5.0. tornado.httputil.parse_cookie(cookie: str) → Dict[str, str] Parse a Cookie HTTP header into a dict of name/value pairs. This function attempts to mimic browser cookie parsing behavior; it specifically does not follow any of the cookie-related RFCs (because browsers don’t either). The algorithm used is identical to that used by Django version 1.9.10. New in version 4.4.2. 6.3.4 tornado.http1connection – HTTP/1.x client/server implementation Client and server implementations of HTTP/1.x. New in version 4.0. class tornado.http1connection.HTTP1ConnectionParameters(no_keep_alive: bool = False, chunk_size: Optional[int] = None, max_header_size: Optional[int] = None, header_timeout: Optional[float] = None, max_body_size: Optional[int] = None, body_timeout: Optional[float] = None, decompress: bool = False) Parameters for HTTP1Connection and HTTP1ServerConnection. Parameters • no_keep_alive (bool) – If true, always close the connection after one request. • chunk_size (int) – how much data to read into memory at once • max_header_size (int) – maximum amount of data for HTTP headers • header_timeout (float) – how long to wait for all headers (seconds) • max_body_size (int) – maximum amount of data for body • body_timeout (float) – how long to wait while reading body (seconds) • decompress (bool) – if true, decode incoming Content-Encoding: gzip class tornado.http1connection.HTTP1Connection(stream: IOStream, is_client: bool, params: Optional[HTTP1ConnectionParameters] = None, context: Optional[object] = None) Implements the HTTP/1.x protocol. This class can be used on its own for clients, or via HTTP1ServerConnection for servers. Parameters • stream – an IOStream • is_client (bool) – client or server • params – a HTTP1ConnectionParameters instance or None • context – an opaque application-defined object that can be accessed as connection.context. read_response(delegate: HTTPMessageDelegate) → Awaitable[bool] Read a single HTTP response.
Typical client-mode usage is to write a request using write_headers, write, and finish, and then call read_response. Parameters delegate – a HTTPMessageDelegate Returns a Future that resolves to a bool after the full response has been read. The result is true if the stream is still open. set_close_callback(callback: Optional[Callable[[], None]]) → None Sets a callback that will be run when the connection is closed. Note that this callback is slightly different from HTTPMessageDelegate.on_connection_close: The HTTPMessageDelegate method is called when the connection is closed while receiving a message. This callback is used when there is not an active delegate (for example, on the server side this callback is used if the client closes the connection after sending its request but before receiving all the response). detach() → IOStream Take control of the underlying stream. Returns the underlying IOStream object and stops all further HTTP processing. May only be called during HTTPMessageDelegate.headers_received. Intended for implementing protocols like websockets that tunnel over an HTTP handshake. set_body_timeout(timeout: float) → None Sets the body timeout for a single request. Overrides the value from HTTP1ConnectionParameters. set_max_body_size(max_body_size: int) → None Sets the body size limit for a single request. Overrides the value from HTTP1ConnectionParameters. write_headers(start_line: Union[RequestStartLine, ResponseStartLine], headers: HTTPHeaders, chunk: Optional[bytes] = None) → Future[None] Implements HTTPConnection.write_headers. write(chunk: bytes) → Future[None] Implements HTTPConnection.write. For backwards compatibility it is allowed but deprecated to skip write_headers and instead call write() with a pre-encoded header block. finish() → None Implements HTTPConnection.finish. class tornado.http1connection.HTTP1ServerConnection(stream: IOStream, params: Optional[HTTP1ConnectionParameters] = None, context: Optional[object] = None) An HTTP/1.x server.
Parameters • stream – an IOStream • params – a HTTP1ConnectionParameters or None • context – an opaque application-defined object that is accessible as connection.context coroutine close() → None Closes the connection. Returns a Future that resolves after the serving loop has exited. start_serving(delegate: HTTPServerConnectionDelegate) → None Starts serving requests on this connection. Parameters delegate – a HTTPServerConnectionDelegate tornado.http1connection.parse_int(s: str) → int Parse a non-negative integer from a string. tornado.http1connection.parse_hex_int(s: str) → int Parse a non-negative hexadecimal integer from a string. 6.4 Asynchronous networking 6.4.1 tornado.ioloop — Main event loop An I/O event loop for non-blocking sockets. In Tornado 6.0, IOLoop is a wrapper around the asyncio event loop, with a slightly different interface. The IOLoop interface is now provided primarily for backwards compatibility; new code should generally use the asyncio event loop interface directly. The IOLoop.current class method provides the IOLoop instance corresponding to the running asyncio event loop. IOLoop objects class tornado.ioloop.IOLoop(*args: Any, **kwargs: Any) An I/O event loop. As of Tornado 6.0, IOLoop is a wrapper around the asyncio event loop.
Example usage for a simple TCP server: import asyncio import errno import functools import socket import tornado from tornado.iostream import IOStream async def handle_connection(connection, address): stream = IOStream(connection) message = await stream.read_until_close() print("message from client:", message.decode().strip()) def connection_ready(sock, fd, events): while True: try: connection, address = sock.accept() except BlockingIOError: return connection.setblocking(0) io_loop = tornado.ioloop.IOLoop.current() io_loop.spawn_callback(handle_connection, connection, address) async def main(): sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM, 0) sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1) sock.setblocking(0) sock.bind(("", 8888)) sock.listen(128) io_loop = tornado.ioloop.IOLoop.current() callback = functools.partial(connection_ready, sock) io_loop.add_handler(sock.fileno(), callback, io_loop.READ) await asyncio.Event().wait() if __name__ == "__main__": asyncio.run(main()) Most applications should not attempt to construct an IOLoop directly, and instead initialize the asyncio event loop and use IOLoop.current(). In some cases, such as in test frameworks when initializing an IOLoop to be run in a secondary thread, it may be appropriate to construct an IOLoop with IOLoop(make_current=False). In general, an IOLoop cannot survive a fork or be shared across processes in any way. When multiple processes are being used, each process should create its own IOLoop, which also implies that any objects which depend on the IOLoop (such as AsyncHTTPClient) must also be created in the child processes.
As a guideline, anything that starts processes (including the tornado.process and multiprocessing modules) should do so as early as possible, ideally the first thing the application does after loading its configuration, and before any calls to IOLoop.start or asyncio.run. Changed in version 4.2: Added the make_current keyword argument to the IOLoop constructor. Changed in version 5.0: Uses the asyncio event loop by default. The IOLoop.configure method cannot be used on Python 3 except to redundantly specify the asyncio event loop. Changed in version 6.3: make_current=True is now the default when creating an IOLoop - previously the default was to make the event loop current if there wasn’t already a current one. Running an IOLoop static IOLoop.current() → IOLoop static IOLoop.current(instance: bool = True) → Optional[IOLoop] Returns the current thread’s IOLoop. If an IOLoop is currently running or has been marked as current by make_current, returns that instance. If there is no current IOLoop and instance is true, creates one. Changed in version 4.1: Added instance argument to control the fallback to IOLoop.instance(). Changed in version 5.0: On Python 3, control of the current IOLoop is delegated to asyncio, with this and other methods as pass-through accessors. The instance argument now controls whether an IOLoop is created automatically when there is none, instead of whether we fall back to IOLoop.instance() (which is now an alias for this method). instance=False is deprecated, since even if we do not create an IOLoop, this method may initialize the asyncio loop. Deprecated since version 6.2: It is deprecated to call IOLoop.current() when no asyncio event loop is running. IOLoop.make_current() → None Makes this the IOLoop for the current thread.
An IOLoop automatically becomes current for its thread when it is started, but it is sometimes useful to call make_current explicitly before starting the IOLoop, so that code run at startup time can find the right instance. Changed in version 4.1: An IOLoop created while there is no current IOLoop will automatically become current. Changed in version 5.0: This method also sets the current asyncio event loop. Deprecated since version 6.2: Setting and clearing the current event loop through Tornado is deprecated. Use asyncio.set_event_loop instead if you need this. static IOLoop.clear_current() → None Clears the IOLoop for the current thread. Intended primarily for use by test frameworks in between tests. Changed in version 5.0: This method also clears the current asyncio event loop. Deprecated since version 6.2. IOLoop.start() → None Starts the I/O loop. The loop will run until one of the callbacks calls stop(), which will make the loop stop after the current event iteration completes. IOLoop.stop() → None Stop the I/O loop. If the event loop is not currently running, the next call to start() will return immediately. Note that even after stop has been called, the IOLoop is not completely stopped until IOLoop.start has also returned. Some work that was scheduled before the call to stop may still be run before the IOLoop shuts down. IOLoop.run_sync(func: Callable, timeout: Optional[float] = None) → Any Starts the IOLoop, runs the given function, and stops the loop. The function must return either an awaitable object or None. If the function returns an awaitable object, the IOLoop will run until the awaitable is resolved (and run_sync() will return the awaitable’s result). If it raises an exception, the IOLoop will stop and the exception will be re-raised to the caller. The keyword-only argument timeout may be used to set a maximum duration for the function. If the timeout expires, an asyncio.TimeoutError is raised.
This method is useful to allow asynchronous calls in a main() function: async def main(): # do stuff... if __name__ == '__main__': IOLoop.current().run_sync(main) Changed in version 4.3: Returning a non-None, non-awaitable value is now an error. Changed in version 5.0: If a timeout occurs, the func coroutine will be cancelled. Changed in version 6.2: tornado.util.TimeoutError is now an alias to asyncio.TimeoutError. IOLoop.close(all_fds: bool = False) → None Closes the IOLoop, freeing any resources used. If all_fds is true, all file descriptors registered on the IOLoop will be closed (not just the ones created by the IOLoop itself). Many applications will only use a single IOLoop that runs for the entire lifetime of the process. In that case closing the IOLoop is not necessary since everything will be cleaned up when the process exits. IOLoop.close is provided mainly for scenarios such as unit tests, which create and destroy a large number of IOLoops. An IOLoop must be completely stopped before it can be closed. This means that IOLoop.stop() must be called and IOLoop.start() must be allowed to return before attempting to call IOLoop.close(). Therefore the call to close will usually appear just after the call to start rather than near the call to stop. Changed in version 3.1: If the IOLoop implementation supports non-integer objects for “file descriptors”, those objects will have their close method called when all_fds is true. static IOLoop.instance() → IOLoop Deprecated alias for IOLoop.current(). Changed in version 5.0: Previously, this method returned a global singleton IOLoop, in contrast with the per-thread IOLoop returned by current(). In nearly all cases the two were the same (when they differed, it was generally used from non-Tornado threads to communicate back to the main thread’s IOLoop). This distinction is not present in asyncio, so in order to facilitate integration with that package instance() was changed to be an alias to current().
Applications using the cross-thread communications aspect of instance() should instead set their own global variable to point to the IOLoop they want to use. Deprecated since version 5.0. IOLoop.install() → None Deprecated alias for make_current(). Changed in version 5.0: Previously, this method would set this IOLoop as the global singleton used by IOLoop.instance(). Now that instance() is an alias for current(), install() is an alias for make_current(). Deprecated since version 5.0. static IOLoop.clear_instance() → None Deprecated alias for clear_current(). Changed in version 5.0: Previously, this method would clear the IOLoop used as the global singleton by IOLoop.instance(). Now that instance() is an alias for current(), clear_instance() is an alias for clear_current(). Deprecated since version 5.0. I/O events IOLoop.add_handler(fd: int, handler: Callable[[int, int], None], events: int) → None IOLoop.add_handler(fd: _S, handler: Callable[[_S, int], None], events: int) → None Registers the given handler to receive the given events for fd. The fd argument may either be an integer file descriptor or a file-like object with a fileno() and close() method. The events argument is a bitwise or of the constants IOLoop.READ, IOLoop.WRITE, and IOLoop.ERROR. When an event occurs, handler(fd, events) will be run. Changed in version 4.0: Added the ability to pass file-like objects in addition to raw file descriptors. IOLoop.update_handler(fd: Union[int, _Selectable], events: int) → None Changes the events we listen for on fd. Changed in version 4.0: Added the ability to pass file-like objects in addition to raw file descriptors. IOLoop.remove_handler(fd: Union[int, _Selectable]) → None Stop listening for events on fd. Changed in version 4.0: Added the ability to pass file-like objects in addition to raw file descriptors.
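Since IOLoop is now a wrapper over asyncio, the add_handler/remove_handler pattern maps onto asyncio's add_reader/remove_reader. A minimal, stdlib-only sketch using a socketpair (no Tornado required; illustrates the idea rather than the IOLoop API itself):

```python
import asyncio
import socket

async def main() -> bytes:
    loop = asyncio.get_running_loop()
    r, w = socket.socketpair()
    r.setblocking(False)
    got = loop.create_future()

    def on_readable():
        # Invoked by the event loop when r has data, much like an
        # IOLoop.READ handler registered with add_handler.
        data = r.recv(1024)
        loop.remove_reader(r.fileno())  # analogous to remove_handler
        got.set_result(data)

    loop.add_reader(r.fileno(), on_readable)  # analogous to add_handler
    w.send(b"ping")
    data = await got
    r.close()
    w.close()
    return data

print(asyncio.run(main()))  # b'ping'
```

Note that add_reader is only available on selector-based event loops; on Windows' proactor loop it raises NotImplementedError.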
Callbacks and timeouts IOLoop.add_callback(callback: Callable, *args: Any, **kwargs: Any) → None Calls the given callback on the next I/O loop iteration. It is safe to call this method from any thread at any time, except from a signal handler. Note that this is the only method in IOLoop that makes this thread-safety guarantee; all other interaction with the IOLoop must be done from that IOLoop’s thread. add_callback() may be used to transfer control from other threads to the IOLoop’s thread. To add a callback from a signal handler, see add_callback_from_signal. IOLoop.add_callback_from_signal(callback: Callable, *args: Any, **kwargs: Any) → None Calls the given callback on the next I/O loop iteration. Safe for use from a Python signal handler; should not be used otherwise. IOLoop.add_future(future: Union[Future[_T], concurrent.futures.Future[_T]], callback: Callable[[Future[_T]], None]) → None Schedules a callback on the IOLoop when the given Future is finished. The callback is invoked with one argument, the Future. This method only accepts Future objects and not other awaitables (unlike most of Tornado where the two are interchangeable). IOLoop.add_timeout(deadline: Union[float, timedelta], callback: Callable, *args: Any, **kwargs: Any) → object Runs the callback at the time deadline from the I/O loop. Returns an opaque handle that may be passed to remove_timeout to cancel. deadline may be a number denoting a time (on the same scale as IOLoop.time, normally time.time), or a datetime.timedelta object for a deadline relative to the current time. Since Tornado 4.0, call_later is a more convenient alternative for the relative case since it does not require a timedelta object. Note that it is not safe to call add_timeout from other threads. Instead, you must use add_callback to transfer control to the IOLoop’s thread, and then call add_timeout from there.
Subclasses of IOLoop must implement either add_timeout or call_at; the default implementations of each will call the other. call_at is usually easier to implement, but subclasses that wish to maintain compatibility with Tornado versions prior to 4.0 must use add_timeout instead. Changed in version 4.0: Now passes through *args and **kwargs to the callback. IOLoop.call_at(when: float, callback: Callable, *args: Any, **kwargs: Any) → object Runs the callback at the absolute time designated by when. when must be a number using the same reference point as IOLoop.time. Returns an opaque handle that may be passed to remove_timeout to cancel. Note that unlike the asyncio method of the same name, the returned object does not have a cancel() method. See add_timeout for comments on thread-safety and subclassing. New in version 4.0. IOLoop.call_later(delay: float, callback: Callable, *args: Any, **kwargs: Any) → object Runs the callback after delay seconds have passed. Returns an opaque handle that may be passed to remove_timeout to cancel. Note that unlike the asyncio method of the same name, the returned object does not have a cancel() method. See add_timeout for comments on thread-safety and subclassing. New in version 4.0. IOLoop.remove_timeout(timeout: object) → None Cancels a pending timeout. The argument is a handle as returned by add_timeout. It is safe to call remove_timeout even if the callback has already been run. IOLoop.spawn_callback(callback: Callable, *args: Any, **kwargs: Any) → None Calls the given callback on the next IOLoop iteration. As of Tornado 6.0, this method is equivalent to add_callback. New in version 4.0. IOLoop.run_in_executor(executor: Optional[Executor], func: Callable[[...], _T], *args: Any) → Awaitable[_T] Runs a function in a concurrent.futures.Executor. If executor is None, the IO loop’s default executor will be used. Use functools.partial to pass keyword arguments to func. New in version 5.0. 
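run_in_executor and the functools.partial advice above map directly onto the underlying asyncio API. A stdlib-only sketch (no Tornado required):

```python
import asyncio
import functools
from concurrent.futures import ThreadPoolExecutor

def slow_add(a: int, *, b: int) -> int:
    # Stand-in for blocking work (file I/O, blocking DNS, etc.).
    return a + b

async def main() -> int:
    loop = asyncio.get_running_loop()
    with ThreadPoolExecutor(max_workers=1) as pool:
        # run_in_executor does not accept keyword arguments directly,
        # so wrap them with functools.partial, as the docs advise.
        result = await loop.run_in_executor(
            pool, functools.partial(slow_add, 2, b=3)
        )
    return result

print(asyncio.run(main()))  # 5
```

Passing None instead of a pool uses the loop's default executor, matching the behavior described for IOLoop.run_in_executor.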
IOLoop.set_default_executor(executor: Executor) → None Sets the default executor to use with run_in_executor(). New in version 5.0. IOLoop.time() → float Returns the current time according to the IOLoop's clock. The return value is a floating-point number relative to an unspecified time in the past. Historically, the IOLoop could be customized to use e.g. time.monotonic instead of time.time, but this is not currently supported and so this method is equivalent to time.time. class tornado.ioloop.PeriodicCallback(callback: Callable[[], Optional[Awaitable]], callback_time: Union[datetime.timedelta, float], jitter: float = 0) Schedules the given callback to be called periodically. The callback is called every callback_time milliseconds when callback_time is a float. Note that the timeout is given in milliseconds, while most other time-related functions in Tornado use seconds. callback_time may alternatively be given as a datetime.timedelta object. If jitter is specified, each callback time will be randomly selected within a window of jitter * callback_time milliseconds. Jitter can be used to reduce alignment of events with similar periods. A jitter of 0.1 means allowing a 10% variation in callback time. The window is centered on callback_time so the total number of calls within a given interval should not be significantly affected by adding jitter. If the callback runs for longer than callback_time milliseconds, subsequent invocations will be skipped to get back on schedule. start must be called after the PeriodicCallback is created. Changed in version 5.0: The io_loop argument (deprecated since version 4.1) has been removed. Changed in version 5.1: The jitter argument is added. Changed in version 6.2: If the callback argument is a coroutine, and a callback runs for longer than callback_time, subsequent invocations will be skipped. Previously this was only true for regular functions, not coroutines, which were "fire-and-forget" for PeriodicCallback.
The callback_time argument now accepts datetime.timedelta objects, in addition to the previous numeric milliseconds. start() → None Starts the timer. stop() → None Stops the timer. is_running() → bool Returns True if this PeriodicCallback has been started. New in version 4.1. 6.4.2 tornado.iostream — Convenient wrappers for non-blocking sockets Utility classes to write to and read from non-blocking files and sockets. Contents:
• BaseIOStream: Generic interface for reading and writing.
• IOStream: Implementation of BaseIOStream using non-blocking sockets.
• SSLIOStream: SSL-aware version of IOStream.
• PipeIOStream: Pipe-based IOStream implementation.
Base class class tornado.iostream.BaseIOStream(max_buffer_size: Optional[int] = None, read_chunk_size: Optional[int] = None, max_write_buffer_size: Optional[int] = None) A utility class to write to and read from a non-blocking file or socket. We support a non-blocking write() and a family of read_*() methods. When the operation completes, the Awaitable will resolve with the data read (or None for write()). All outstanding Awaitables will resolve with a StreamClosedError when the stream is closed; BaseIOStream.set_close_callback can also be used to be notified of a closed stream. When a stream is closed due to an error, the IOStream's error attribute contains the exception object. Subclasses must implement fileno, close_fd, write_to_fd, read_from_fd, and optionally get_fd_error. BaseIOStream constructor. Parameters
• max_buffer_size – Maximum amount of incoming data to buffer; defaults to 100MB.
• read_chunk_size – Amount of data to read at one time from the underlying transport; defaults to 64KB.
• max_write_buffer_size – Amount of outgoing data to buffer; defaults to unlimited.
Changed in version 4.0: Add the max_write_buffer_size parameter. Changed default read_chunk_size to 64KB. Changed in version 5.0: The io_loop argument (deprecated since version 4.1) has been removed.
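Before moving to the stream interface, here is the PeriodicCallback lifecycle (start, is_running, stop) from the previous section in use. A sketch with illustrative interval values; remember callback_time is in milliseconds:

    import asyncio

    from tornado.ioloop import PeriodicCallback

    ticks = []

    async def main() -> bool:
        # callback_time is in milliseconds, unlike most Tornado APIs.
        pc = PeriodicCallback(lambda: ticks.append(1), callback_time=10)
        pc.start()
        running = pc.is_running()
        await asyncio.sleep(0.1)
        pc.stop()
        return running

    was_running = asyncio.run(main())

With a 10 ms period over a 100 ms sleep, the callback fires several times before stop is called.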
Main interface BaseIOStream.write(data: Union[bytes, memoryview]) → Future[None] Asynchronously write the given data to this stream. This method returns a Future that resolves (with a result of None) when the write has been completed. The data argument may be of type bytes or memoryview. Changed in version 4.0: Now returns a Future if no callback is given. Changed in version 4.5: Added support for memoryview arguments. Changed in version 6.0: The callback argument was removed. Use the returned Future instead. BaseIOStream.read_bytes(num_bytes: int, partial: bool = False) → Awaitable[bytes] Asynchronously read a number of bytes. If partial is true, data is returned as soon as we have any bytes to return (but never more than num_bytes) Changed in version 4.0: Added the partial argument. The callback argument is now optional and a Future will be returned if it is omitted. Changed in version 6.0: The callback and streaming_callback arguments have been removed. Use the returned Future (and partial=True for streaming_callback) instead. BaseIOStream.read_into(buf: bytearray, partial: bool = False) → Awaitable[int] Asynchronously read a number of bytes. buf must be a writable buffer into which data will be read. If partial is true, the callback is run as soon as any bytes have been read. Otherwise, it is run when the buf has been entirely filled with read data. New in version 5.0. Changed in version 6.0: The callback argument was removed. Use the returned Future instead. BaseIOStream.read_until(delimiter: bytes, max_bytes: Optional[int] = None) → Awaitable[bytes] Asynchronously read until we have found the given delimiter. The result includes all the data read including the delimiter. If max_bytes is not None, the connection will be closed if more than max_bytes bytes have been read and the delimiter is not found. Changed in version 4.0: Added the max_bytes argument.
The callback argument is now optional and a Future will be returned if it is omitted. Changed in version 6.0: The callback argument was removed. Use the returned Future instead. BaseIOStream.read_until_regex(regex: bytes, max_bytes: Optional[int] = None) → Awaitable[bytes] Asynchronously read until we have matched the given regex. The result includes the data that matches the regex and anything that came before it. If max_bytes is not None, the connection will be closed if more than max_bytes bytes have been read and the regex is not satisfied. Changed in version 4.0: Added the max_bytes argument. The callback argument is now optional and a Future will be returned if it is omitted. Changed in version 6.0: The callback argument was removed. Use the returned Future instead. BaseIOStream.read_until_close() → Awaitable[bytes] Asynchronously reads all data from the socket until it is closed. This will buffer all available data until max_buffer_size is reached. If flow control or cancellation are desired, use a loop with read_bytes(partial=True) instead. Changed in version 4.0: The callback argument is now optional and a Future will be returned if it is omitted. Changed in version 6.0: The callback and streaming_callback arguments have been removed. Use the returned Future (and read_bytes with partial=True for streaming_callback) instead. BaseIOStream.close(exc_info: Union[None, bool, BaseException, Tuple[Optional[Type[BaseException]], Optional[BaseException], Optional[TracebackType]]] = False) → None Close this stream. If exc_info is true, set the error attribute to the current exception from sys.exc_info (or if exc_info is a tuple, use that instead of sys.exc_info). BaseIOStream.set_close_callback(callback: Optional[Callable[[], None]]) → None Call the given callback when the stream is closed. This mostly is not necessary for applications that use the Future interface; all outstanding Futures will resolve with a StreamClosedError when the stream is closed. 
However, it is still useful as a way to signal that the stream has been closed while no other read or write is in progress. Unlike other callback-based interfaces, set_close_callback was not removed in Tornado 6.0. BaseIOStream.closed() → bool Returns True if the stream has been closed. BaseIOStream.reading() → bool Returns True if we are currently reading from the stream. BaseIOStream.writing() → bool Returns True if we are currently writing to the stream. BaseIOStream.set_nodelay(value: bool) → None Sets the no-delay flag for this stream. By default, data written to TCP streams may be held for a time to make the most efficient use of bandwidth (according to Nagle's algorithm). The no-delay flag requests that data be written as soon as possible, even if doing so would consume additional bandwidth. This flag is currently defined only for TCP-based IOStreams. New in version 3.1. Methods for subclasses BaseIOStream.fileno() → Union[int, _Selectable] Returns the file descriptor for this stream. BaseIOStream.close_fd() → None Closes the file underlying this stream. close_fd is called by BaseIOStream and should not be called elsewhere; other users should call close instead. BaseIOStream.write_to_fd(data: memoryview) → int Attempts to write data to the underlying file. Returns the number of bytes written. BaseIOStream.read_from_fd(buf: Union[bytearray, memoryview]) → Optional[int] Attempts to read from the underlying file. Reads up to len(buf) bytes, storing them in the buffer. Returns the number of bytes read. Returns None if there was nothing to read (the socket returned EWOULDBLOCK or equivalent), and zero on EOF. Changed in version 5.0: Interface redesigned to take a buffer and return a number of bytes instead of a freshly-allocated object. BaseIOStream.get_fd_error() → Optional[Exception] Returns information about any error on the underlying file.
This method is called after the IOLoop has signaled an error on the file descriptor, and should return an Exception (such as socket.error) with additional information, or None if no such information is available. Implementations class tornado.iostream.IOStream(socket: socket, *args: Any, **kwargs: Any) Socket-based IOStream implementation. This class supports the read and write methods from BaseIOStream plus a connect method. The socket parameter may either be connected or unconnected. For server operations the socket is the result of calling socket.accept. For client operations the socket is created with socket.socket, and may either be connected before passing it to the IOStream or connected with IOStream.connect. A very simple (and broken) HTTP client using this class:

    import asyncio
    import socket
    import tornado.iostream

    async def main():
        s = socket.socket(socket.AF_INET, socket.SOCK_STREAM, 0)
        stream = tornado.iostream.IOStream(s)
        await stream.connect(("friendfeed.com", 80))
        await stream.write(b"GET / HTTP/1.0\r\nHost: friendfeed.com\r\n\r\n")
        header_data = await stream.read_until(b"\r\n\r\n")
        headers = {}
        for line in header_data.split(b"\r\n"):
            parts = line.split(b":")
            if len(parts) == 2:
                headers[parts[0].strip()] = parts[1].strip()
        body_data = await stream.read_bytes(int(headers[b"Content-Length"]))
        print(body_data)
        stream.close()

    if __name__ == '__main__':
        asyncio.run(main())

connect(address: Any, server_hostname: Optional[str] = None) → Future[_IOStreamType] Connects the socket to a remote address without blocking. May only be called if the socket passed to the constructor was not previously connected. The address parameter is in the same format as for socket.connect for the type of socket passed to the IOStream constructor, e.g. an (ip, port) tuple. Hostnames are accepted here, but will be resolved synchronously and block the IOLoop.
If you have a hostname instead of an IP address, the TCPClient class is recommended instead of calling this method directly. TCPClient will do asynchronous DNS resolution and handle both IPv4 and IPv6. If callback is specified, it will be called with no arguments when the connection is completed; if not this method returns a Future (whose result after a successful connection will be the stream itself). In SSL mode, the server_hostname parameter will be used for certificate validation (unless disabled in the ssl_options) and SNI (if supported; requires Python 2.7.9+). Note that it is safe to call IOStream.write while the connection is pending, in which case the data will be written as soon as the connection is ready. Calling IOStream read methods before the socket is connected works on some platforms but is non-portable. Changed in version 4.0: If no callback is given, returns a Future. Changed in version 4.2: SSL certificates are validated by default; pass ssl_options=dict(cert_reqs=ssl.CERT_NONE) or a suitably-configured ssl.SSLContext to the SSLIOStream constructor to disable. Changed in version 6.0: The callback argument was removed. Use the returned Future instead. start_tls(server_side: bool, ssl_options: Optional[Union[Dict[str, Any], SSLContext]] = None, server_hostname: Optional[str] = None) → Awaitable[SSLIOStream] Convert this IOStream to an SSLIOStream. This enables protocols that begin in clear-text mode and switch to SSL after some initial negotiation (such as the STARTTLS extension to SMTP and IMAP). This method cannot be used if there are outstanding reads or writes on the stream, or if there is any data in the IOStream's buffer (data in the operating system's socket buffer is allowed). This means it must generally be used immediately after reading or writing the last clear-text data. It can also be used immediately after connecting, before any reads or writes.
The ssl_options argument may be either an ssl.SSLContext object or a dictionary of keyword arguments for the ssl.wrap_socket function. The server_hostname argument will be used for certificate validation unless disabled in the ssl_options. This method returns a Future whose result is the new SSLIOStream. After this method has been called, any other operation on the original stream is undefined. If a close callback is defined on this stream, it will be transferred to the new stream. New in version 4.0. Changed in version 4.2: SSL certificates are validated by default; pass ssl_options=dict(cert_reqs=ssl.CERT_NONE) or a suitably-configured ssl.SSLContext to disable. class tornado.iostream.SSLIOStream(*args: Any, **kwargs: Any) A utility class to write to and read from a non-blocking SSL socket. If the socket passed to the constructor is already connected, it should be wrapped with:

    ssl.wrap_socket(sock, do_handshake_on_connect=False, **kwargs)

before constructing the SSLIOStream. Unconnected sockets will be wrapped when IOStream.connect is finished. The ssl_options keyword argument may either be an ssl.SSLContext object or a dictionary of keyword arguments for ssl.wrap_socket. wait_for_handshake() → Future[SSLIOStream] Wait for the initial SSL handshake to complete. If a callback is given, it will be called with no arguments once the handshake is complete; otherwise this method returns a Future which will resolve to the stream itself after the handshake is complete. Once the handshake is complete, information such as the peer's certificate and NPN/ALPN selections may be accessed on self.socket. This method is intended for use on server-side streams or after using IOStream.start_tls; it should not be used with IOStream.connect (which already waits for the handshake to complete). It may only be called once per stream. New in version 4.2. Changed in version 6.0: The callback argument was removed. Use the returned Future instead.
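The read/write family described above can be exercised without a network server by wrapping a connected socket pair in IOStreams. A sketch; the variable names are illustrative:

    import asyncio
    import socket

    from tornado.iostream import IOStream

    async def main() -> bytes:
        # socket.socketpair gives two connected sockets; IOStream puts
        # them into non-blocking mode itself.
        a, b = socket.socketpair()
        left, right = IOStream(a), IOStream(b)
        await left.write(b"hello\nworld")
        line = await right.read_until(b"\n")  # result includes the delimiter
        rest = await right.read_bytes(5)
        left.close()
        right.close()
        return line + rest

    data = asyncio.run(main())

Note that read_until returns b"hello\n" including the delimiter, as documented, and the subsequent read_bytes picks up exactly the remaining five bytes.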
class tornado.iostream.PipeIOStream(fd: int, *args: Any, **kwargs: Any) Pipe-based IOStream implementation. The constructor takes an integer file descriptor (such as one returned by os.pipe) rather than an open file object. Pipes are generally one-way, so a PipeIOStream can be used for reading or writing but not both. PipeIOStream is only available on Unix-based platforms. Exceptions exception tornado.iostream.StreamBufferFullError Exception raised by IOStream methods when the buffer is full. exception tornado.iostream.StreamClosedError(real_error: Optional[BaseException] = None) Exception raised by IOStream methods when the stream is closed. Note that the close callback is scheduled to run after other callbacks on the stream (to allow for buffered data to be processed), so you may see this error before you see the close callback. The real_error attribute contains the underlying error that caused the stream to close (if any). Changed in version 4.3: Added the real_error attribute. exception tornado.iostream.UnsatisfiableReadError Exception raised when a read cannot be satisfied. Raised by read_until and read_until_regex with a max_bytes argument. 6.4.3 tornado.netutil — Miscellaneous network utilities Miscellaneous network utility code. tornado.netutil.bind_sockets(port: int, address: Optional[str] = None, family: AddressFamily = AddressFamily.AF_UNSPEC, backlog: int = 128, flags: Optional[int] = None, reuse_port: bool = False) → List[socket] Creates listening sockets bound to the given port and address. Returns a list of socket objects (multiple sockets are returned if the given address maps to multiple IP addresses, which is most common for mixed IPv4 and IPv6 use). Address may be either an IP address or hostname. If it's a hostname, the server will listen on all IP addresses associated with the name. Address may be an empty string or None to listen on all available interfaces.
Family may be set to either socket.AF_INET or socket.AF_INET6 to restrict to IPv4 or IPv6 addresses, otherwise both will be used if available. The backlog argument has the same meaning as for socket.listen(). flags is a bitmask of AI_* flags to getaddrinfo, like socket.AI_PASSIVE | socket.AI_NUMERICHOST. reuse_port option sets SO_REUSEPORT option for every socket in the list. If your platform doesn't support this option ValueError will be raised. tornado.netutil.bind_unix_socket(file: str, mode: int = 384, backlog: int = 128) → socket Creates a listening unix socket. If a socket with the given name already exists, it will be deleted. If any other file with that name exists, an exception will be raised. Returns a socket object (not a list of socket objects like bind_sockets) tornado.netutil.add_accept_handler(sock: socket, callback: Callable[[socket, Any], None]) → Callable[[], None] Adds an IOLoop event handler to accept new connections on sock. When a connection is accepted, callback(connection, address) will be run (connection is a socket object, and address is the address of the other end of the connection). Note that this signature is different from the callback(fd, events) signature used for IOLoop handlers. A callable is returned which, when called, will remove the IOLoop event handler and stop processing further incoming connections. Changed in version 5.0: The io_loop argument (deprecated since version 4.1) has been removed. Changed in version 5.0: A callable is returned (None was returned before). tornado.netutil.is_valid_ip(ip: str) → bool Returns True if the given string is a well-formed IP address. Supports IPv4 and IPv6. class tornado.netutil.Resolver(*args: Any, **kwargs: Any) Configurable asynchronous DNS resolver interface. By default, a blocking implementation is used (which simply calls socket.getaddrinfo).
An alternative implementation can be chosen with the Resolver.configure class method:

    Resolver.configure('tornado.netutil.ThreadedResolver')

The implementations of this interface included with Tornado are
• tornado.netutil.DefaultLoopResolver
• tornado.netutil.DefaultExecutorResolver (deprecated)
• tornado.netutil.BlockingResolver (deprecated)
• tornado.netutil.ThreadedResolver (deprecated)
• tornado.netutil.OverrideResolver
• tornado.platform.twisted.TwistedResolver (deprecated)
• tornado.platform.caresresolver.CaresResolver (deprecated)
Changed in version 5.0: The default implementation has changed from BlockingResolver to DefaultExecutorResolver. Changed in version 6.2: The default implementation has changed from DefaultExecutorResolver to DefaultLoopResolver. resolve(host: str, port: int, family: AddressFamily = AddressFamily.AF_UNSPEC) → Awaitable[List[Tuple[int, Any]]] Resolves an address. The host argument is a string which may be a hostname or a literal IP address. Returns a Future whose result is a list of (family, address) pairs, where address is a tuple suitable to pass to socket.connect (i.e. a (host, port) pair for IPv4; additional fields may be present for IPv6). If a callback is passed, it will be run with the result as an argument when it is complete. Raises IOError – if the address cannot be resolved. Changed in version 4.4: Standardized all implementations to raise IOError. Changed in version 6.0: The callback argument was removed. Use the returned awaitable object instead. close() → None Closes the Resolver, freeing any resources used. New in version 3.1. class tornado.netutil.DefaultExecutorResolver(*args: Any, **kwargs: Any) Resolver implementation using IOLoop.run_in_executor. New in version 5.0. Deprecated since version 6.2: Use DefaultLoopResolver instead. class tornado.netutil.DefaultLoopResolver(*args: Any, **kwargs: Any) Resolver implementation using asyncio.loop.getaddrinfo.
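The (family, address) result shape of resolve described above can be seen with the default resolver. A sketch; resolving localhost avoids a dependency on external DNS:

    import asyncio
    import socket

    from tornado.netutil import Resolver

    async def main():
        resolver = Resolver()  # default implementation
        # Returns a list of (family, address) pairs.
        return await resolver.resolve("localhost", 80, socket.AF_INET)

    pairs = asyncio.run(main())

Each address tuple is suitable to pass directly to socket.connect, e.g. (socket.AF_INET, ("127.0.0.1", 80)).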
class tornado.netutil.ExecutorResolver(*args: Any, **kwargs: Any) Resolver implementation using a concurrent.futures.Executor. Use this instead of ThreadedResolver when you require additional control over the executor being used. The executor will be shut down when the resolver is closed unless close_resolver=False; use this if you want to reuse the same executor elsewhere. Changed in version 5.0: The io_loop argument (deprecated since version 4.1) has been removed. Deprecated since version 5.0: The default Resolver now uses asyncio.loop.getaddrinfo; use that instead of this class. class tornado.netutil.BlockingResolver(*args: Any, **kwargs: Any) Default Resolver implementation, using socket.getaddrinfo. The IOLoop will be blocked during the resolution, although the callback will not be run until the next IOLoop iteration. Deprecated since version 5.0: The default Resolver now uses IOLoop.run_in_executor; use that instead of this class. class tornado.netutil.ThreadedResolver(*args: Any, **kwargs: Any) Multithreaded non-blocking Resolver implementation. Requires the concurrent.futures package to be installed (available in the standard library since Python 3.2, installable with pip install futures in older versions). The thread pool size can be configured with: Resolver.configure('tornado.netutil.ThreadedResolver', num_threads=10) Changed in version 3.1: All ThreadedResolvers share a single thread pool, whose size is set by the first one to be created. Deprecated since version 5.0: The default Resolver now uses IOLoop.run_in_executor; use that instead of this class. class tornado.netutil.OverrideResolver(*args: Any, **kwargs: Any) Wraps a resolver with a mapping of overrides. This can be used to make local DNS changes (e.g. for testing) without modifying system-wide settings. 
The mapping can be in three formats:

    {
        # Hostname to host or ip
        "example.com": "127.0.1.1",
        # Host+port to host+port
        ("login.example.com", 443): ("localhost", 1443),
        # Host+port+address family to host+port
        ("login.example.com", 443, socket.AF_INET6): ("::1", 1443),
    }

Changed in version 5.0: Added support for host-port-family triplets. tornado.netutil.ssl_options_to_context(ssl_options: Union[Dict[str, Any], SSLContext], server_side: Optional[bool] = None) → SSLContext Try to convert an ssl_options dictionary to an SSLContext object. The ssl_options dictionary contains keywords to be passed to ssl.wrap_socket. In Python 2.7.9+, ssl.SSLContext objects can be used instead. This function converts the dict form to its SSLContext equivalent, and may be used when a component which accepts both forms needs to upgrade to the SSLContext version to use features like SNI or NPN. Changed in version 6.2: Added server_side argument. Omitting this argument will result in a DeprecationWarning on Python 3.10. tornado.netutil.ssl_wrap_socket(socket: socket, ssl_options: Union[Dict[str, Any], SSLContext], server_hostname: Optional[str] = None, server_side: Optional[bool] = None, **kwargs: Any) → SSLSocket Returns an ssl.SSLSocket wrapping the given socket. ssl_options may be either an ssl.SSLContext object or a dictionary (as accepted by ssl_options_to_context). Additional keyword arguments are passed to wrap_socket (either the SSLContext method or the ssl module function as appropriate). Changed in version 6.2: Added server_side argument. Omitting this argument will result in a DeprecationWarning on Python 3.10. 6.4.4 tornado.tcpclient — IOStream connection factory A non-blocking TCP connection factory. class tornado.tcpclient.TCPClient(resolver: Optional[Resolver] = None) A non-blocking TCP connection factory. Changed in version 5.0: The io_loop argument (deprecated since version 4.1) has been removed.
coroutine connect(host: str, port: int, af: AddressFamily = AddressFamily.AF_UNSPEC, ssl_options: Optional[Union[Dict[str, Any], SSLContext]] = None, max_buffer_size: Optional[int] = None, source_ip: Optional[str] = None, source_port: Optional[int] = None, timeout: Optional[Union[float, timedelta]] = None) → IOStream Connect to the given host and port. Asynchronously returns an IOStream (or SSLIOStream if ssl_options is not None). Using the source_ip kwarg, one can specify the source IP address to use when establishing the connection. In case the user needs to resolve and use a specific interface, it has to be handled outside of Tornado as this depends very much on the platform. Raises TimeoutError if the input future does not complete before timeout, which may be specified in any form allowed by IOLoop.add_timeout (i.e. a datetime.timedelta or an absolute time relative to IOLoop.time) Similarly, when the user requires a certain source port, it can be specified using the source_port arg. Changed in version 4.5: Added the source_ip and source_port arguments. Changed in version 5.0: Added the timeout argument. 6.4.5 tornado.tcpserver — Basic IOStream-based TCP server A non-blocking, single-threaded TCP server. class tornado.tcpserver.TCPServer(ssl_options: Optional[Union[Dict[str, Any], SSLContext]] = None, max_buffer_size: Optional[int] = None, read_chunk_size: Optional[int] = None) A non-blocking, single-threaded TCP server. To use TCPServer, define a subclass which overrides the handle_stream method.
For example, a simple echo server could be defined like this:

    from tornado.tcpserver import TCPServer
    from tornado.iostream import StreamClosedError

    class EchoServer(TCPServer):
        async def handle_stream(self, stream, address):
            while True:
                try:
                    data = await stream.read_until(b"\n")
                    await stream.write(data)
                except StreamClosedError:
                    break

To make this server serve SSL traffic, send the ssl_options keyword argument with an ssl.SSLContext object. For compatibility with older versions of Python ssl_options may also be a dictionary of keyword arguments for the ssl.wrap_socket method:

    ssl_ctx = ssl.create_default_context(ssl.Purpose.CLIENT_AUTH)
    ssl_ctx.load_cert_chain(os.path.join(data_dir, "mydomain.crt"),
                            os.path.join(data_dir, "mydomain.key"))
    TCPServer(ssl_options=ssl_ctx)

TCPServer initialization follows one of three patterns: 1. listen: single-process:

    async def main():
        server = TCPServer()
        server.listen(8888)
        await asyncio.Event().wait()

    asyncio.run(main())

While this example does not create multiple processes on its own, when the reuse_port=True argument is passed to listen() you can run the program multiple times to create a multi-process service. 2. add_sockets: multi-process:

    sockets = bind_sockets(8888)
    tornado.process.fork_processes(0)

    async def post_fork_main():
        server = TCPServer()
        server.add_sockets(sockets)
        await asyncio.Event().wait()

    asyncio.run(post_fork_main())

The add_sockets interface is more complicated, but it can be used with tornado.process.fork_processes to run a multi-process service with all worker processes forked from a single parent. add_sockets can also be used in single-process servers if you want to create your listening sockets in some way other than bind_sockets. Note that when using this pattern, nothing that touches the event loop can be run before fork_processes. 3.
bind/start: simple deprecated multi-process:

    server = TCPServer()
    server.bind(8888)
    server.start(0)  # Forks multiple sub-processes
    IOLoop.current().start()

This pattern is deprecated because it requires interfaces in the asyncio module that have been deprecated since Python 3.10. Support for creating multiple processes in the start method will be removed in a future version of Tornado. New in version 3.1: The max_buffer_size argument. Changed in version 5.0: The io_loop argument has been removed. listen(port: int, address: Optional[str] = None, family: AddressFamily = AddressFamily.AF_UNSPEC, backlog: int = 128, flags: Optional[int] = None, reuse_port: bool = False) → None Starts accepting connections on the given port. This method may be called more than once to listen on multiple ports. listen takes effect immediately; it is not necessary to call TCPServer.start afterwards. It is, however, necessary to start the event loop if it is not already running. All arguments have the same meaning as in tornado.netutil.bind_sockets. Changed in version 6.2: Added family, backlog, flags, and reuse_port arguments to match tornado.netutil.bind_sockets. add_sockets(sockets: Iterable[socket]) → None Makes this server start accepting connections on the given sockets. The sockets parameter is a list of socket objects such as those returned by bind_sockets. add_sockets is typically used in combination with that method and tornado.process.fork_processes to provide greater control over the initialization of a multi-process server. add_socket(socket: socket) → None Singular version of add_sockets. Takes a single socket object. bind(port: int, address: Optional[str] = None, family: AddressFamily = AddressFamily.AF_UNSPEC, backlog: int = 128, flags: Optional[int] = None, reuse_port: bool = False) → None Binds this server to the given port on the given address. To start the server, call start.
If you want to run this server in a single process, you can call listen as a shortcut to the sequence of bind and start calls. Address may be either an IP address or hostname. If it's a hostname, the server will listen on all IP addresses associated with the name. Address may be an empty string or None to listen on all available interfaces. Family may be set to either socket.AF_INET or socket.AF_INET6 to restrict to IPv4 or IPv6 addresses, otherwise both will be used if available. The backlog argument has the same meaning as for socket.listen. The reuse_port argument has the same meaning as for bind_sockets. This method may be called multiple times prior to start to listen on multiple ports or interfaces. Changed in version 4.4: Added the reuse_port argument. Changed in version 6.2: Added the flags argument to match bind_sockets. Deprecated since version 6.2: Use either listen() or add_sockets() instead of bind() and start(). start(num_processes: Optional[int] = 1, max_restarts: Optional[int] = None) → None Starts this server in the IOLoop. By default, we run the server in this process and do not fork any additional child process. If num_processes is None or <= 0, we detect the number of cores available on this machine and fork that number of child processes. If num_processes is given and > 1, we fork that specific number of sub-processes. Since we use processes and not threads, there is no shared memory between any server code. Note that multiple processes are not compatible with the autoreload module (or the autoreload=True option to tornado.web.Application which defaults to True when debug=True). When using multiple processes, no IOLoops can be created or referenced until after the call to TCPServer.start(n). Values of num_processes other than 1 are not supported on Windows. The max_restarts argument is passed to fork_processes. Changed in version 6.0: Added max_restarts argument.
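Putting the server patterns above together with TCPClient gives a self-contained round trip. A sketch: the UpperEchoServer subclass is hypothetical, port 0 asks the OS for a free port, and add_sockets lets us discover which port was chosen:

    import asyncio

    from tornado.iostream import StreamClosedError
    from tornado.netutil import bind_sockets
    from tornado.tcpclient import TCPClient
    from tornado.tcpserver import TCPServer

    class UpperEchoServer(TCPServer):
        # Hypothetical server: echoes each line upper-cased.
        async def handle_stream(self, stream, address):
            try:
                while True:
                    data = await stream.read_until(b"\n")
                    await stream.write(data.upper())
            except StreamClosedError:
                pass

    async def main() -> bytes:
        sockets = bind_sockets(0, address="127.0.0.1")
        port = sockets[0].getsockname()[1]
        server = UpperEchoServer()
        server.add_sockets(sockets)

        stream = await TCPClient().connect("127.0.0.1", port)
        await stream.write(b"ping\n")
        reply = await stream.read_until(b"\n")
        stream.close()
        server.stop()
        return reply

    reply = asyncio.run(main())

Because everything runs inside one asyncio.run call, no explicit IOLoop management is needed.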
Deprecated since version 6.2: Use either listen() or add_sockets() instead of bind() and start().

stop() → None
Stops listening for new connections.
Requests currently in progress may still continue after the server is stopped.

handle_stream(stream: IOStream, address: tuple) → Optional[Awaitable[None]]
Override to handle a new IOStream from an incoming connection.
This method may be a coroutine; if so any exceptions it raises asynchronously will be logged. Accepting of incoming connections will not be blocked by this coroutine.
If this TCPServer is configured for SSL, handle_stream may be called before the SSL handshake has completed. Use SSLIOStream.wait_for_handshake if you need to verify the client's certificate or use NPN/ALPN.
Changed in version 4.2: Added the option for this method to be a coroutine.

6.5 Coroutines and concurrency

6.5.1 tornado.gen — Generator-based coroutines

tornado.gen implements generator-based coroutines.
Note: The "decorator and generator" approach in this module is a precursor to native coroutines (using async def and await) which were introduced in Python 3.5. Applications that do not require compatibility with older versions of Python should use native coroutines instead. Some parts of this module are still useful with native coroutines, notably multi, sleep, WaitIterator, and with_timeout. Some of these functions have counterparts in the asyncio module which may be used as well, although the two may not necessarily be 100% compatible.
Coroutines provide an easier way to work in an asynchronous environment than chaining callbacks. Code using coroutines is technically asynchronous, but it is written as a single generator instead of a collection of separate functions.
For example, here's a coroutine-based handler:

    class GenAsyncHandler(RequestHandler):
        @gen.coroutine
        def get(self):
            http_client = AsyncHTTPClient()
            response = yield http_client.fetch("http://example.com")
            do_something_with_response(response)
            self.render("template.html")

Asynchronous functions in Tornado return an Awaitable or Future; yielding this object returns its result.
You can also yield a list or dict of other yieldable objects, which will be started at the same time and run in parallel; a list or dict of results will be returned when they are all finished:

    @gen.coroutine
    def get(self):
        http_client = AsyncHTTPClient()
        response1, response2 = yield [http_client.fetch(url1),
                                      http_client.fetch(url2)]
        response_dict = yield dict(response3=http_client.fetch(url3),
                                   response4=http_client.fetch(url4))
        response3 = response_dict['response3']
        response4 = response_dict['response4']

If tornado.platform.twisted is imported, it is also possible to yield Twisted's Deferred objects. See the convert_yielded function to extend this mechanism.
Changed in version 3.2: Dict support added.
Changed in version 4.1: Support added for yielding asyncio Futures and Twisted Deferreds via singledispatch.

Decorators

tornado.gen.coroutine(func: Callable[[...], Generator[Any, Any, _T]]) → Callable[[...], Future[_T]]
tornado.gen.coroutine(func: Callable[[...], _T]) → Callable[[...], Future[_T]]
Decorator for asynchronous generators.
For compatibility with older versions of Python, coroutines may also "return" by raising the special exception Return(value).
Functions with this decorator return a Future.
Warning: When exceptions occur inside a coroutine, the exception information will be stored in the Future object. You must examine the result of the Future object, or the exception may go unnoticed by your code.
This means yielding the function if called from another coroutine, using something like IOLoop.run_sync for top-level calls, or passing the Future to IOLoop.add_future.
Changed in version 6.0: The callback argument was removed. Use the returned awaitable object instead.

exception tornado.gen.Return(value: Optional[Any] = None)
Special exception to return a value from a coroutine.
If this exception is raised, its value argument is used as the result of the coroutine:

    @gen.coroutine
    def fetch_json(url):
        response = yield AsyncHTTPClient().fetch(url)
        raise gen.Return(json_decode(response.body))

In Python 3.3, this exception is no longer necessary: the return statement can be used directly to return a value (previously yield and return with a value could not be combined in the same function).
By analogy with the return statement, the value argument is optional, but it is never necessary to raise gen.Return(). The return statement can be used with no arguments instead.

Utility functions

tornado.gen.with_timeout(timeout: Union[float, datetime.timedelta], future: Yieldable, quiet_exceptions: Union[Type[Exception], Tuple[Type[Exception], ...]] = ())
Wraps a Future (or other yieldable object) in a timeout.
Raises tornado.util.TimeoutError if the input future does not complete before timeout, which may be specified in any form allowed by IOLoop.add_timeout (i.e. a datetime.timedelta or an absolute time relative to IOLoop.time).
If the wrapped Future fails after it has timed out, the exception will be logged unless it is either of a type contained in quiet_exceptions (which may be an exception type or a sequence of types), or an asyncio.CancelledError.
The wrapped Future is not canceled when the timeout expires, permitting it to be reused. asyncio.wait_for is similar to this function but it does cancel the wrapped Future on timeout.
New in version 4.0.
Changed in version 4.1: Added the quiet_exceptions argument and the logging of unhandled exceptions.
Changed in version 4.4: Added support for yieldable objects other than Future.
Changed in version 6.0.3: asyncio.CancelledError is now always considered "quiet".
Changed in version 6.2: tornado.util.TimeoutError is now an alias to asyncio.TimeoutError.

tornado.gen.sleep(duration: float) → Future[None]
Return a Future that resolves after the given number of seconds.
When used with yield in a coroutine, this is a non-blocking analogue to time.sleep (which should not be used in coroutines because it is blocking):

    yield gen.sleep(0.5)

Note that calling this function on its own does nothing; you must wait on the Future it returns (usually by yielding it).
New in version 4.1.

class tornado.gen.WaitIterator(*args: Future, **kwargs: Future)
Provides an iterator to yield the results of awaitables as they finish.
Yielding a set of awaitables like this:

    results = yield [awaitable1, awaitable2]

pauses the coroutine until both awaitable1 and awaitable2 return, and then restarts the coroutine with the results of both awaitables. If either awaitable raises an exception, the expression will raise that exception and all the results will be lost.
If you need to get the result of each awaitable as soon as possible, or if you need the result of some awaitables even if others produce errors, you can use WaitIterator:

    wait_iterator = gen.WaitIterator(awaitable1, awaitable2)
    while not wait_iterator.done():
        try:
            result = yield wait_iterator.next()
        except Exception as e:
            print("Error {} from {}".format(e, wait_iterator.current_future))
        else:
            print("Result {} received from {} at {}".format(
                result, wait_iterator.current_future,
                wait_iterator.current_index))

Because results are returned as soon as they are available the output from the iterator will not be in the same order as the input arguments.
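For native coroutines, a comparable results-as-they-finish pattern is available in the standard library via asyncio.as_completed. The sketch below uses only stdlib names; the fast/slow helper coroutines are illustrative assumptions, not part of Tornado:

    import asyncio

    async def slow():
        # Simulates a longer-running awaitable.
        await asyncio.sleep(0.05)
        return "slow"

    async def fast():
        await asyncio.sleep(0.001)
        return "fast"

    async def main():
        results = []
        # Results arrive in completion order, not submission order,
        # much like iterating a WaitIterator.
        for fut in asyncio.as_completed([slow(), fast()]):
            results.append(await fut)
        return results

    print(asyncio.run(main()))  # ['fast', 'slow']

Note that, unlike WaitIterator, as_completed does not expose which input produced each result; if you need that, keep a mapping from the wrapping tasks to their inputs.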
If you need to know which future produced the current result, you can use the attributes WaitIterator.current_future, or WaitIterator.current_index to get the index of the awaitable from the input list. (If keyword arguments were used in the construction of the WaitIterator, current_index will use the corresponding keyword.)
On Python 3.5, WaitIterator implements the async iterator protocol, so it can be used with the async for statement (note that in this version the entire iteration is aborted if any value raises an exception, while the previous example can continue past individual errors):

    async for result in gen.WaitIterator(future1, future2):
        print("Result {} received from {} at {}".format(
            result, wait_iterator.current_future,
            wait_iterator.current_index))

New in version 4.1.
Changed in version 4.3: Added async for support in Python 3.5.

done() → bool
Returns True if this iterator has no more results.

next() → Future
Returns a Future that will yield the next available result.
Note that this Future will not be the same object as any of the inputs.

tornado.gen.multi(children: Union[List[Yieldable], Dict[Any, Yieldable]], quiet_exceptions: Union[Type[Exception], Tuple[Type[Exception], ...]] = ())
Runs multiple asynchronous operations in parallel.
children may either be a list or a dict whose values are yieldable objects. multi() returns a new yieldable object that resolves to a parallel structure containing their results. If children is a list, the result is a list of results in the same order; if it is a dict, the result is a dict with the same keys. That is,

    results = yield multi(list_of_futures)

is equivalent to:

    results = []
    for future in list_of_futures:
        results.append(yield future)

If any children raise exceptions, multi() will raise the first one. All others will be logged, unless they are of types contained in the quiet_exceptions argument.
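The first-exception behavior described above also holds for the stdlib counterpart asyncio.gather, which additionally offers return_exceptions=True to collect failures as results. A hedged stdlib-only sketch; the ok/boom helpers are hypothetical:

    import asyncio

    async def ok():
        await asyncio.sleep(0)
        return "ok"

    async def boom():
        raise ValueError("boom")

    async def main():
        # Like multi(), gather() raises the first child exception.
        try:
            await asyncio.gather(ok(), boom())
        except ValueError as e:
            first = str(e)
        # With return_exceptions=True, exceptions are returned in place
        # of results instead of being raised.
        mixed = await asyncio.gather(ok(), boom(), return_exceptions=True)
        return first, mixed

    first, mixed = asyncio.run(main())
    print(first)     # boom
    print(mixed[0])  # ok
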
In a yield-based coroutine, it is not normally necessary to call this function directly, since the coroutine runner will do it automatically when a list or dict is yielded. However, it is necessary in await-based coroutines, or to pass the quiet_exceptions argument.
This function is available under the names multi() and Multi() for historical reasons.
Cancelling a Future returned by multi() does not cancel its children. asyncio.gather is similar to multi(), but it does cancel its children.
Changed in version 4.2: If multiple yieldables fail, any exceptions after the first (which is raised) will be logged. Added the quiet_exceptions argument to suppress this logging for selected exception types.
Changed in version 4.3: Replaced the class Multi and the function multi_future with a unified function multi. Added support for yieldables other than YieldPoint and Future.

tornado.gen.multi_future(children: Union[List[Yieldable], Dict[Any, Yieldable]], quiet_exceptions: Union[Type[Exception], Tuple[Type[Exception], ...]] = ())
Wait for multiple asynchronous futures in parallel.
Since Tornado 6.0, this function is exactly the same as multi.
New in version 4.0.
Changed in version 4.2: If multiple Futures fail, any exceptions after the first (which is raised) will be logged. Added the quiet_exceptions argument to suppress this logging for selected exception types.
Deprecated since version 4.3: Use multi instead.

tornado.gen.convert_yielded(yielded: Union[None, Awaitable, List[Awaitable], Dict[Any, Awaitable], Future]) → Future
Convert a yielded object into a Future.
The default implementation accepts lists, dictionaries, and Futures. This has the side effect of starting any coroutines that did not start themselves, similar to asyncio.ensure_future.
If the singledispatch library is available, this function may be extended to support additional types.
For example:

    @convert_yielded.register(asyncio.Future)
    def _(asyncio_future):
        return tornado.platform.asyncio.to_tornado_future(asyncio_future)

New in version 4.1.

tornado.gen.maybe_future(x: Any) → Future
Converts x into a Future.
If x is already a Future, it is simply returned; otherwise it is wrapped in a new Future. This is suitable for use as result = yield gen.maybe_future(f()) when you don't know whether f() returns a Future or not.
Deprecated since version 4.3: This function only handles Futures, not other yieldable objects. Instead of maybe_future, check for the non-future result types you expect (often just None), and yield anything unknown.

tornado.gen.is_coroutine_function(func: Any) → bool
Return whether func is a coroutine function, i.e. a function wrapped with coroutine.
New in version 4.5.

tornado.gen.moment
A special object which may be yielded to allow the IOLoop to run for one iteration. This is not needed in normal use but it can be helpful in long-running coroutines that are likely to yield Futures that are ready instantly.
Usage:

    yield gen.moment

In native coroutines, the equivalent of yield gen.moment is await asyncio.sleep(0).
New in version 4.0.
Deprecated since version 4.5: yield None (or yield with no argument) is now equivalent to yield gen.moment.

6.5.2 tornado.locks – Synchronization primitives

New in version 4.2.
Coordinate coroutines with synchronization primitives analogous to those the standard library provides to threads.
These classes are very similar to those provided in the standard library's asyncio package.
Warning: Note that these primitives are not actually thread-safe and cannot be used in place of those from the standard library's threading module – they are meant to coordinate Tornado coroutines in a single-threaded app, not to protect shared objects in a multithreaded app.
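The stdlib equivalence noted above for gen.moment (await asyncio.sleep(0)) can be seen in a small cooperative-scheduling sketch; the spinner coroutine and the order list are hypothetical names, not Tornado API:

    import asyncio

    order = []

    async def spinner(name, steps):
        for i in range(steps):
            order.append((name, i))
            # Equivalent of `yield gen.moment`: suspend so the event
            # loop can run other ready tasks for one iteration.
            await asyncio.sleep(0)

    async def main():
        await asyncio.gather(spinner("a", 2), spinner("b", 2))

    asyncio.run(main())
    print(order)  # the two spinners interleave step by step

Without the sleep(0), each spinner would run all of its steps before yielding control, which is exactly the situation gen.moment is meant to avoid in long-running coroutines.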
Condition

class tornado.locks.Condition
A condition allows one or more coroutines to wait until notified.
Like a standard threading.Condition, but does not need an underlying lock that is acquired and released.
With a Condition, coroutines can wait to be notified by other coroutines:

    import asyncio
    from tornado import gen
    from tornado.locks import Condition

    condition = Condition()

    async def waiter():
        print("I'll wait right here")
        await condition.wait()
        print("I'm done waiting")

    async def notifier():
        print("About to notify")
        condition.notify()
        print("Done notifying")

    async def runner():
        # Wait for waiter() and notifier() in parallel
        await gen.multi([waiter(), notifier()])

    asyncio.run(runner())

    I'll wait right here
    About to notify
    Done notifying
    I'm done waiting

wait takes an optional timeout argument, which is either an absolute timestamp:

    io_loop = IOLoop.current()
    # Wait up to 1 second for a notification.
    await condition.wait(timeout=io_loop.time() + 1)

. . . or a datetime.timedelta for a timeout relative to the current time:

    # Wait up to 1 second.
    await condition.wait(timeout=datetime.timedelta(seconds=1))

The method returns False if there's no notification before the deadline.
Changed in version 5.0: Previously, waiters could be notified synchronously from within notify. Now, the notification will always be received on the next iteration of the IOLoop.

wait(timeout: Optional[Union[float, timedelta]] = None) → Awaitable[bool]
Wait for notify.
Returns a Future that resolves True if the condition is notified, or False after a timeout.

notify(n: int = 1) → None
Wake n waiters.

notify_all() → None
Wake all waiters.

Event

class tornado.locks.Event
An event blocks coroutines until its internal flag is set to True. Similar to threading.Event.
A coroutine can wait for an event to be set.
Once it is set, calls to yield event.wait() will not block unless the event has been cleared:

    import asyncio
    from tornado import gen
    from tornado.locks import Event

    event = Event()

    async def waiter():
        print("Waiting for event")
        await event.wait()
        print("Not waiting this time")
        await event.wait()
        print("Done")

    async def setter():
        print("About to set the event")
        event.set()

    async def runner():
        await gen.multi([waiter(), setter()])

    asyncio.run(runner())

    Waiting for event
    About to set the event
    Not waiting this time
    Done

is_set() → bool
Return True if the internal flag is true.

set() → None
Set the internal flag to True. All waiters are awakened.
Calling wait once the flag is set will not block.

clear() → None
Reset the internal flag to False.
Calls to wait will block until set is called.

wait(timeout: Optional[Union[float, timedelta]] = None) → Awaitable[None]
Block until the internal flag is true.
Returns an awaitable, which raises tornado.util.TimeoutError after a timeout.

Semaphore

class tornado.locks.Semaphore(value: int = 1)
A lock that can be acquired a fixed number of times before blocking.
A Semaphore manages a counter representing the number of release calls minus the number of acquire calls, plus an initial value. The acquire method blocks if necessary until it can return without making the counter negative.
Semaphores limit access to a shared resource. To allow access for two workers at a time:

    import asyncio
    from tornado import gen
    from tornado.locks import Semaphore

    sem = Semaphore(2)

    async def worker(worker_id):
        await sem.acquire()
        try:
            print("Worker %d is working" % worker_id)
            await use_some_resource()
        finally:
            print("Worker %d is done" % worker_id)
            sem.release()

    async def runner():
        # Join all workers.
        await gen.multi([worker(i) for i in range(3)])

    asyncio.run(runner())

    Worker 0 is working
    Worker 1 is working
    Worker 0 is done
    Worker 2 is working
    Worker 1 is done
    Worker 2 is done

Workers 0 and 1 are allowed to run concurrently, but worker 2 waits until the semaphore has been released once, by worker 0.
The semaphore can be used as an async context manager:

    async def worker(worker_id):
        async with sem:
            print("Worker %d is working" % worker_id)
            await use_some_resource()
        # Now the semaphore has been released.
        print("Worker %d is done" % worker_id)

For compatibility with older versions of Python, acquire is a context manager, so worker could also be written as:

    @gen.coroutine
    def worker(worker_id):
        with (yield sem.acquire()):
            print("Worker %d is working" % worker_id)
            yield use_some_resource()
        # Now the semaphore has been released.
        print("Worker %d is done" % worker_id)

Changed in version 4.3: Added async with support in Python 3.5.

release() → None
Increment the counter and wake one waiter.

acquire(timeout: Optional[Union[float, timedelta]] = None) → Awaitable[_ReleasingContextManager]
Decrement the counter. Returns an awaitable.
Block if the counter is zero and wait for a release. The awaitable raises TimeoutError after the deadline.

BoundedSemaphore

class tornado.locks.BoundedSemaphore(value: int = 1)
A semaphore that prevents release() being called too many times.
If release would increment the semaphore's value past the initial value, it raises ValueError. Semaphores are mostly used to guard resources with limited capacity, so a semaphore released too many times is a sign of a bug.

release() → None
Increment the counter and wake one waiter.

acquire(timeout: Optional[Union[float, timedelta]] = None) → Awaitable[_ReleasingContextManager]
Decrement the counter. Returns an awaitable.
Block if the counter is zero and wait for a release.
The awaitable raises TimeoutError after the deadline.

Lock

class tornado.locks.Lock
A lock for coroutines.
A Lock begins unlocked, and acquire locks it immediately. While it is locked, a coroutine that yields acquire waits until another coroutine calls release.
Releasing an unlocked lock raises RuntimeError.
A Lock can be used as an async context manager with the async with statement:

    >>> from tornado import locks
    >>> lock = locks.Lock()
    >>>
    >>> async def f():
    ...     async with lock:
    ...         # Do something holding the lock.
    ...         pass
    ...
    ...     # Now the lock is released.

For compatibility with older versions of Python, the acquire method asynchronously returns a regular context manager:

    >>> async def f2():
    ...     with (yield lock.acquire()):
    ...         # Do something holding the lock.
    ...         pass
    ...
    ...     # Now the lock is released.

Changed in version 4.3: Added async with support in Python 3.5.

acquire(timeout: Optional[Union[float, timedelta]] = None) → Awaitable[_ReleasingContextManager]
Attempt to lock. Returns an awaitable, which raises tornado.util.TimeoutError after a timeout.

release() → None
Unlock.
The first coroutine in line waiting for acquire gets the lock.
If not locked, raise a RuntimeError.

6.5.3 tornado.queues – Queues for coroutines

New in version 4.2.
Asynchronous queues for coroutines. These classes are very similar to those provided in the standard library's asyncio package.
Warning: Unlike the standard library's queue module, the classes defined here are not thread-safe. To use these queues from another thread, use IOLoop.add_callback to transfer control to the IOLoop thread before calling any queue methods.

Classes

Queue

class tornado.queues.Queue(maxsize: int = 0)
Coordinate producer and consumer coroutines.
If maxsize is 0 (the default) the queue size is unbounded.
    import asyncio
    from tornado.ioloop import IOLoop
    from tornado.queues import Queue

    q = Queue(maxsize=2)

    async def consumer():
        async for item in q:
            try:
                print('Doing work on %s' % item)
                await asyncio.sleep(0.01)
            finally:
                q.task_done()

    async def producer():
        for item in range(5):
            await q.put(item)
            print('Put %s' % item)

    async def main():
        # Start consumer without waiting (since it never finishes).
        IOLoop.current().spawn_callback(consumer)
        await producer()  # Wait for producer to put all tasks.
        await q.join()  # Wait for consumer to finish all tasks.
        print('Done')

    asyncio.run(main())

    Put 0
    Put 1
    Doing work on 0
    Put 2
    Doing work on 1
    Put 3
    Doing work on 2
    Put 4
    Doing work on 3
    Doing work on 4
    Done

In versions of Python without native coroutines (before 3.5), consumer() could be written as:

    @gen.coroutine
    def consumer():
        while True:
            item = yield q.get()
            try:
                print('Doing work on %s' % item)
                yield gen.sleep(0.01)
            finally:
                q.task_done()

Changed in version 4.3: Added async for support in Python 3.5.

property maxsize: int
Number of items allowed in the queue.

qsize() → int
Number of items in the queue.

put(item: _T, timeout: Optional[Union[float, timedelta]] = None) → Future[None]
Put an item into the queue, perhaps waiting until there is room.
Returns a Future, which raises tornado.util.TimeoutError after a timeout.
timeout may be a number denoting a time (on the same scale as tornado.ioloop.IOLoop.time, normally time.time), or a datetime.timedelta object for a deadline relative to the current time.

put_nowait(item: _T) → None
Put an item into the queue without blocking.
If no free slot is immediately available, raise QueueFull.

get(timeout: Optional[Union[float, timedelta]] = None) → Awaitable[_T]
Remove and return an item from the queue.
Returns an awaitable which resolves once an item is available, or raises tornado.util.TimeoutError after a timeout.
timeout may be a number denoting a time (on the same scale as tornado.ioloop.IOLoop.time, normally time.time), or a datetime.timedelta object for a deadline relative to the current time.
Note: The timeout argument of this method differs from that of the standard library's queue.Queue.get. That method interprets numeric values as relative timeouts; this one interprets them as absolute deadlines and requires timedelta objects for relative timeouts (consistent with other timeouts in Tornado).

get_nowait() → _T
Remove and return an item from the queue without blocking.
Return an item if one is immediately available, else raise QueueEmpty.

task_done() → None
Indicate that a formerly enqueued task is complete.
Used by queue consumers. For each get used to fetch a task, a subsequent call to task_done tells the queue that the processing on the task is complete.
If a join is blocking, it resumes when all items have been processed; that is, when every put is matched by a task_done.
Raises ValueError if called more times than put.

join(timeout: Optional[Union[float, timedelta]] = None) → Awaitable[None]
Block until all items in the queue are processed.
Returns an awaitable, which raises tornado.util.TimeoutError after a timeout.

PriorityQueue

class tornado.queues.PriorityQueue(maxsize: int = 0)
A Queue that retrieves entries in priority order, lowest first.
Entries are typically tuples like (priority number, data).

    import asyncio
    from tornado.queues import PriorityQueue

    async def main():
        q = PriorityQueue()
        q.put((1, 'medium-priority item'))
        q.put((0, 'high-priority item'))
        q.put((10, 'low-priority item'))

        print(await q.get())
        print(await q.get())
        print(await q.get())

    asyncio.run(main())

    (0, 'high-priority item')
    (1, 'medium-priority item')
    (10, 'low-priority item')

LifoQueue

class tornado.queues.LifoQueue(maxsize: int = 0)
A Queue that retrieves the most recently put items first.
    import asyncio
    from tornado.queues import LifoQueue

    async def main():
        q = LifoQueue()
        q.put(3)
        q.put(2)
        q.put(1)

        print(await q.get())
        print(await q.get())
        print(await q.get())

    asyncio.run(main())

    1
    2
    3

Exceptions

QueueEmpty

exception tornado.queues.QueueEmpty
Raised by Queue.get_nowait when the queue has no items.

QueueFull

exception tornado.queues.QueueFull
Raised by Queue.put_nowait when a queue is at its maximum size.

6.5.4 tornado.process — Utilities for multiple processes

Utilities for working with multiple processes, including both forking the server into multiple processes and managing subprocesses.

exception tornado.process.CalledProcessError
An alias for subprocess.CalledProcessError.

tornado.process.cpu_count() → int
Returns the number of processors on this machine.

tornado.process.fork_processes(num_processes: Optional[int], max_restarts: Optional[int] = None) → int
Starts multiple worker processes.
If num_processes is None or <= 0, we detect the number of cores available on this machine and fork that number of child processes. If num_processes is given and > 0, we fork that specific number of sub-processes.
Since we use processes and not threads, there is no shared memory between any server code.
Note that multiple processes are not compatible with the autoreload module (or the autoreload=True option to tornado.web.Application which defaults to True when debug=True). When using multiple processes, no IOLoops can be created or referenced until after the call to fork_processes.
In each child process, fork_processes returns its task id, a number between 0 and num_processes. Processes that exit abnormally (due to a signal or non-zero exit status) are restarted with the same id (up to max_restarts times). In the parent process, fork_processes calls sys.exit(0) after all child processes have exited normally.
max_restarts defaults to 100.
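The num_processes rule described above (None or <= 0 means one child per detected core) can be sketched as a small pure helper. _resolve_num_processes is a hypothetical name using the stdlib os.cpu_count, not part of Tornado's API:

    import os

    def _resolve_num_processes(num_processes):
        # Mirrors the rule fork_processes documents: None or <= 0
        # means fork one child per detected CPU core.
        if num_processes is None or num_processes <= 0:
            return os.cpu_count() or 1
        return num_processes

    print(_resolve_num_processes(4))  # 4
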
Availability: Unix

tornado.process.task_id() → Optional[int]
Returns the current task id, if any.
Returns None if this process was not created by fork_processes.

class tornado.process.Subprocess(*args: Any, **kwargs: Any)
Wraps subprocess.Popen with IOStream support.
The constructor is the same as subprocess.Popen with the following additions:
• stdin, stdout, and stderr may have the value tornado.process.Subprocess.STREAM, which will make the corresponding attribute of the resulting Subprocess a PipeIOStream. If this option is used, the caller is responsible for closing the streams when done with them.
The Subprocess.STREAM option and the set_exit_callback and wait_for_exit methods do not work on Windows. There is therefore no reason to use this class instead of subprocess.Popen on that platform.
Changed in version 5.0: The io_loop argument (deprecated since version 4.1) has been removed.

set_exit_callback(callback: Callable[[int], None]) → None
Runs callback when this process exits.
The callback takes one argument, the return code of the process.
This method uses a SIGCHLD handler, which is a global setting and may conflict if you have other libraries trying to handle the same signal. If you are using more than one IOLoop it may be necessary to call Subprocess.initialize first to designate one IOLoop to run the signal handlers.
In many cases a close callback on the stdout or stderr streams can be used as an alternative to an exit callback if the signal handler is causing a problem.
Availability: Unix

wait_for_exit(raise_error: bool = True) → Future[int]
Returns a Future which resolves when the process exits. Usage:

    ret = yield proc.wait_for_exit()

This is a coroutine-friendly alternative to set_exit_callback (and a replacement for the blocking subprocess.Popen.wait).
By default, raises subprocess.CalledProcessError if the process has a non-zero exit status.
Use wait_for_exit(raise_error=False) to suppress this behavior and return the exit status without raising.
New in version 4.2.
Availability: Unix

classmethod initialize() → None
Initializes the SIGCHLD handler.
The signal handler is run on an IOLoop to avoid locking issues. Note that the IOLoop used for signal handling need not be the same one used by individual Subprocess objects (as long as the IOLoops are each running in separate threads).
Changed in version 5.0: The io_loop argument (deprecated since version 4.1) has been removed.
Availability: Unix

classmethod uninitialize() → None
Removes the SIGCHLD handler.

6.6 Integration with other services

6.6.1 tornado.auth — Third-party login with OpenID and OAuth

This module contains implementations of various third-party authentication schemes.
All the classes in this file are class mixins designed to be used with the tornado.web.RequestHandler class. They are used in two ways:
• On a login handler, use methods such as authenticate_redirect(), authorize_redirect(), and get_authenticated_user() to establish the user's identity and store authentication tokens to your database and/or cookies.
• In non-login handlers, use methods such as facebook_request() or twitter_request() to use the authentication tokens to make requests to the respective services.
They all take slightly different arguments due to the fact all these services implement authentication and authorization slightly differently. See the individual service classes below for complete documentation.
Example usage for Google OAuth:

    class GoogleOAuth2LoginHandler(tornado.web.RequestHandler,
                                   tornado.auth.GoogleOAuth2Mixin):
        async def get(self):
            if self.get_argument('code', False):
                user = await self.get_authenticated_user(
                    redirect_uri='http://your.site.com/auth/google',
                    code=self.get_argument('code'))
                # Save the user with e.g.
                # set_signed_cookie
            else:
                self.authorize_redirect(
                    redirect_uri='http://your.site.com/auth/google',
                    client_id=self.settings['google_oauth']['key'],
                    scope=['profile', 'email'],
                    response_type='code',
                    extra_params={'approval_prompt': 'auto'})

Common protocols

These classes implement the OpenID and OAuth standards. They will generally need to be subclassed to use them with any particular site. The degree of customization required will vary, but in most cases overriding the class attributes (which are named beginning with underscores for historical reasons) should be sufficient.

class tornado.auth.OpenIdMixin
Abstract implementation of OpenID and Attribute Exchange.
Class attributes:
• _OPENID_ENDPOINT: the identity provider's URI.

authenticate_redirect(callback_uri: Optional[str] = None, ax_attrs: List[str] = ['name', 'email', 'language', 'username']) → None
Redirects to the authentication URL for this service.
After authentication, the service will redirect back to the given callback URI with additional parameters including openid.mode.
We request the given attributes for the authenticated user by default (name, email, language, and username). If you don't need all those attributes for your app, you can request fewer with the ax_attrs keyword argument.
Changed in version 6.0: The callback argument was removed and this method no longer returns an awaitable object. It is now an ordinary synchronous function.

coroutine get_authenticated_user(http_client: Optional[AsyncHTTPClient] = None) → Dict[str, Any]
Fetches the authenticated user data upon redirect.
This method should be called by the handler that receives the redirect from the authenticate_redirect() method (which is often the same as the one that calls it; in that case you would call get_authenticated_user if the openid.mode parameter is present and authenticate_redirect if it is not). The result of this method will generally be used to set a cookie.
Changed in version 6.0: The callback argument was removed. Use the returned awaitable object instead.

get_auth_http_client() → AsyncHTTPClient
Returns the AsyncHTTPClient instance to be used for auth requests.

May be overridden by subclasses to use an HTTP client other than the default.

class tornado.auth.OAuthMixin
Abstract implementation of OAuth 1.0 and 1.0a.

See TwitterMixin below for an example implementation.

Class attributes:

• _OAUTH_AUTHORIZE_URL: The service's OAuth authorization url.
• _OAUTH_ACCESS_TOKEN_URL: The service's OAuth access token url.
• _OAUTH_VERSION: May be either "1.0" or "1.0a".
• _OAUTH_NO_CALLBACKS: Set this to True if the service requires advance registration of callbacks.

Subclasses must also override the _oauth_get_user_future and _oauth_consumer_token methods.

async authorize_redirect(callback_uri: Optional[str] = None, extra_params: Optional[Dict[str, Any]] = None, http_client: Optional[AsyncHTTPClient] = None) → None
Redirects the user to obtain OAuth authorization for this service.

The callback_uri may be omitted if you have previously registered a callback URI with the third-party service. For some services, you must use a previously-registered callback URI and cannot specify a callback via this method.

This method sets a cookie called _oauth_request_token which is subsequently used (and cleared) in get_authenticated_user for security purposes.

This method is asynchronous and must be called with await or yield (this is different from other auth*_redirect methods defined in this module). It calls RequestHandler.finish for you so you should not write any other response after it returns.

Changed in version 3.1: Now returns a Future and takes an optional callback, for compatibility with gen.coroutine.

Changed in version 6.0: The callback argument was removed. Use the returned awaitable object instead.
async get_authenticated_user(http_client: Optional[AsyncHTTPClient] = None) → Dict[str, Any]
Gets the OAuth authorized user and access token.

This method should be called from the handler for your OAuth callback URL to complete the registration process. We run the callback with the authenticated user dictionary. This dictionary will contain an access_key which can be used to make authorized requests to this service on behalf of the user. The dictionary will also contain other fields such as name, depending on the service used.

Changed in version 6.0: The callback argument was removed. Use the returned awaitable object instead.

_oauth_consumer_token() → Dict[str, Any]
Subclasses must override this to return their OAuth consumer keys.

The return value should be a dict with keys key and secret.

async _oauth_get_user_future(access_token: Dict[str, Any]) → Dict[str, Any]
Subclasses must override this to get basic information about the user.

Should be a coroutine whose result is a dictionary containing information about the user, which may have been retrieved by using access_token to make a request to the service.

The access token will be added to the returned dictionary to make the result of get_authenticated_user.

Changed in version 5.1: Subclasses may also define this method with async def.

Changed in version 6.0: A synchronous fallback to _oauth_get_user was removed.

get_auth_http_client() → AsyncHTTPClient
Returns the AsyncHTTPClient instance to be used for auth requests.

May be overridden by subclasses to use an HTTP client other than the default.

class tornado.auth.OAuth2Mixin
Abstract implementation of OAuth 2.0.

See FacebookGraphMixin or GoogleOAuth2Mixin below for example implementations.

Class attributes:

• _OAUTH_AUTHORIZE_URL: The service's authorization url.
• _OAUTH_ACCESS_TOKEN_URL: The service's access token url.
authorize_redirect(redirect_uri: Optional[str] = None, client_id: Optional[str] = None, client_secret: Optional[str] = None, extra_params: Optional[Dict[str, Any]] = None, scope: Optional[List[str]] = None, response_type: str = 'code') → None
Redirects the user to obtain OAuth authorization for this service.

Some providers require that you register a redirect URL with your application instead of passing one via this method. You should call this method to log the user in, and then call get_authenticated_user in the handler for your redirect URL to complete the authorization process.

Changed in version 6.0: The callback argument and returned awaitable were removed; this is now an ordinary synchronous function.

coroutine oauth2_request(url: str, access_token: Optional[str] = None, post_args: Optional[Dict[str, Any]] = None, **args: Any) → Any
Fetches the given URL with an OAuth2 access token.

If the request is a POST, post_args should be provided. Query string arguments should be given as keyword arguments.

Example usage:

    class MainHandler(tornado.web.RequestHandler,
                      tornado.auth.FacebookGraphMixin):
        @tornado.web.authenticated
        async def get(self):
            new_entry = await self.oauth2_request(
                "https://graph.facebook.com/me/feed",
                post_args={"message": "I am posting from my Tornado application!"},
                access_token=self.current_user["access_token"])

            if not new_entry:
                # Call failed; perhaps missing permission?
                self.authorize_redirect()
                return
            self.finish("Posted a message!")

New in version 4.3.

get_auth_http_client() → AsyncHTTPClient
Returns the AsyncHTTPClient instance to be used for auth requests.

May be overridden by subclasses to use an HTTP client other than the default.

New in version 4.3.

Google

class tornado.auth.GoogleOAuth2Mixin
Google authentication using OAuth2.

In order to use, register your application with Google and copy the relevant parameters to your application settings.
• Go to the Google Dev Console at http://console.developers.google.com
• Select a project, or create a new one.
• In the sidebar on the left, select Credentials.
• Click CREATE CREDENTIALS and click OAuth client ID.
• Under Application type, select Web application.
• Name OAuth 2.0 client and click Create.
• Copy the "Client secret" and "Client ID" to the application settings as {"google_oauth": {"key": CLIENT_ID, "secret": CLIENT_SECRET}}

New in version 3.2.

get_google_oauth_settings() → Dict[str, str]
Return the Google OAuth 2.0 credentials that you created with Google Cloud Platform (https://console.cloud.google.com/apis/credentials). The dict format is:

    {
        "key": "your_client_id",
        "secret": "your_client_secret"
    }

If your credentials are stored differently (e.g. in a db) you can override this method for custom provision.

coroutine get_authenticated_user(redirect_uri: str, code: str, client_id: Optional[str] = None, client_secret: Optional[str] = None) → Dict[str, Any]
Handles the login for the Google user, returning an access token.

The result is a dictionary containing an access_token field (among others; see https://developers.google.com/identity/protocols/OAuth2WebServer#handlingtheresponse). Unlike other get_authenticated_user methods in this package, this method does not return any additional information about the user. The returned access token can be used with OAuth2Mixin.oauth2_request to request additional information (perhaps from https://www.googleapis.com/oauth2/v2/userinfo).

Example usage:

    class GoogleOAuth2LoginHandler(tornado.web.RequestHandler,
                                   tornado.auth.GoogleOAuth2Mixin):
        async def get(self):
            if self.get_argument('code', False):
                access = await self.get_authenticated_user(
                    redirect_uri='http://your.site.com/auth/google',
                    code=self.get_argument('code'))
                user = await self.oauth2_request(
                    "https://www.googleapis.com/oauth2/v1/userinfo",
                    access_token=access["access_token"])
                # Save the user and access token with
                # e.g. set_signed_cookie.
            else:
                self.authorize_redirect(
                    redirect_uri='http://your.site.com/auth/google',
                    client_id=self.get_google_oauth_settings()['key'],
                    scope=['profile', 'email'],
                    response_type='code',
                    extra_params={'approval_prompt': 'auto'})

Changed in version 6.0: The callback argument was removed. Use the returned awaitable object instead.

Facebook

class tornado.auth.FacebookGraphMixin
Facebook authentication using the new Graph API and OAuth2.

coroutine get_authenticated_user(redirect_uri: str, client_id: str, client_secret: str, code: str, extra_fields: Optional[Dict[str, Any]] = None) → Optional[Dict[str, Any]]
Handles the login for the Facebook user, returning a user object.

Example usage:

    class FacebookGraphLoginHandler(tornado.web.RequestHandler,
                                    tornado.auth.FacebookGraphMixin):
        async def get(self):
            if self.get_argument("code", False):
                user = await self.get_authenticated_user(
                    redirect_uri='/auth/facebookgraph/',
                    client_id=self.settings["facebook_api_key"],
                    client_secret=self.settings["facebook_secret"],
                    code=self.get_argument("code"))
                # Save the user with e.g. set_signed_cookie
            else:
                self.authorize_redirect(
                    redirect_uri='/auth/facebookgraph/',
                    client_id=self.settings["facebook_api_key"],
                    extra_params={"scope": "read_stream,offline_access"})

This method returns a dictionary which may contain the following fields:

• access_token, a string which may be passed to facebook_request
• session_expires, an integer encoded as a string representing the time until the access token expires in seconds. This field should be used like int(user['session_expires']); in a future version of Tornado it will change from a string to an integer.
• id, name, first_name, last_name, locale, picture, link, plus any fields named in the extra_fields argument.
These fields are copied from the Facebook graph API user object.

Changed in version 4.5: The session_expires field was updated to support changes made to the Facebook API in March 2017.

Changed in version 6.0: The callback argument was removed. Use the returned awaitable object instead.

coroutine facebook_request(path: str, access_token: Optional[str] = None, post_args: Optional[Dict[str, Any]] = None, **args: Any) → Any
Fetches the given relative API path, e.g., "/btaylor/picture"

If the request is a POST, post_args should be provided. Query string arguments should be given as keyword arguments.

An introduction to the Facebook Graph API can be found at http://developers.facebook.com/docs/api

Many methods require an OAuth access token which you can obtain through authorize_redirect and get_authenticated_user. The user returned through that process includes an access_token attribute that can be used to make authenticated requests via this method.

Example usage:

    class MainHandler(tornado.web.RequestHandler,
                      tornado.auth.FacebookGraphMixin):
        @tornado.web.authenticated
        async def get(self):
            new_entry = await self.facebook_request(
                "/me/feed",
                post_args={"message": "I am posting from my Tornado application!"},
                access_token=self.current_user["access_token"])

            if not new_entry:
                # Call failed; perhaps missing permission?
                self.authorize_redirect()
                return
            self.finish("Posted a message!")

The given path is relative to self._FACEBOOK_BASE_URL, by default "https://graph.facebook.com".

This method is a wrapper around OAuth2Mixin.oauth2_request; the only difference is that this method takes a relative path, while oauth2_request takes a complete url.

Changed in version 3.1: Added the ability to override self._FACEBOOK_BASE_URL.

Changed in version 6.0: The callback argument was removed. Use the returned awaitable object instead.

Twitter

class tornado.auth.TwitterMixin
Twitter OAuth authentication.
To authenticate with Twitter, register your application with Twitter at http://twitter.com/apps. Then copy your Consumer Key and Consumer Secret to the application settings twitter_consumer_key and twitter_consumer_secret. Use this mixin on the handler for the URL you registered as your application's callback URL.

When your application is set up, you can use this mixin like this to authenticate the user with Twitter and get access to their stream:

    class TwitterLoginHandler(tornado.web.RequestHandler,
                              tornado.auth.TwitterMixin):
        async def get(self):
            if self.get_argument("oauth_token", None):
                user = await self.get_authenticated_user()
                # Save the user using e.g. set_signed_cookie()
            else:
                await self.authorize_redirect()

The user object returned by get_authenticated_user includes the attributes username, name, access_token, and all of the custom Twitter user attributes described at https://dev.twitter.com/docs/api/1.1/get/users/show

coroutine authenticate_redirect(callback_uri: Optional[str] = None) → None
Just like authorize_redirect, but auto-redirects if authorized.

This is generally the right interface to use if you are using Twitter for single-sign on.

Changed in version 3.1: Now returns a Future and takes an optional callback, for compatibility with gen.coroutine.

Changed in version 6.0: The callback argument was removed. Use the returned awaitable object instead.

coroutine twitter_request(path: str, access_token: Dict[str, Any], post_args: Optional[Dict[str, Any]] = None, **args: Any) → Any
Fetches the given API path, e.g., statuses/user_timeline/btaylor

The path should not include the format or API version number. (We automatically use JSON format and API version 1.)

If the request is a POST, post_args should be provided. Query string arguments should be given as keyword arguments.
All the Twitter methods are documented at http://dev.twitter.com/

Many methods require an OAuth access token which you can obtain through authorize_redirect and get_authenticated_user. The user returned through that process includes an 'access_token' attribute that can be used to make authenticated requests via this method.

Example usage:

    class MainHandler(tornado.web.RequestHandler,
                      tornado.auth.TwitterMixin):
        @tornado.web.authenticated
        async def get(self):
            new_entry = await self.twitter_request(
                "/statuses/update",
                post_args={"status": "Testing Tornado Web Server"},
                access_token=self.current_user["access_token"])

            if not new_entry:
                # Call failed; perhaps missing permission?
                await self.authorize_redirect()
                return
            self.finish("Posted a message!")

Changed in version 6.0: The callback argument was removed. Use the returned awaitable object instead.

6.6.2 tornado.wsgi — Interoperability with other Python frameworks and servers

WSGI support for the Tornado web framework.

WSGI is the Python standard for web servers, and allows for interoperability between Tornado and other Python web frameworks and servers. This module provides WSGI support via the WSGIContainer class, which makes it possible to run applications using other WSGI frameworks on the Tornado HTTP server. The reverse is not supported; the Tornado Application and RequestHandler classes are designed for use with the Tornado HTTPServer and cannot be used in a generic WSGI container.

class tornado.wsgi.WSGIContainer(wsgi_application: WSGIAppType, executor: Optional[Executor] = None)
Makes a WSGI-compatible application runnable on Tornado's HTTP server.

Warning: WSGI is a synchronous interface, while Tornado's concurrency model is based on single-threaded asynchronous execution. Many of Tornado's distinguishing features are not available in WSGI mode, including efficient long-polling and websockets.
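The warning above hinges on WSGI being a plain synchronous calling convention. That convention can be exercised directly, with no server at all. A stdlib-only sketch (the function names follow the PEP 3333 conventions; nothing here is Tornado-specific):

```python
# A minimal WSGI application, driven by hand the way a WSGI server would
# drive it. This illustrates the synchronous interface: the application
# is an ordinary callable that returns the response body all at once.

def simple_app(environ, start_response):
    # environ: a dict describing the request.
    # start_response: a callable taking the status line and a list of
    # (name, value) header tuples.
    status = "200 OK"
    response_headers = [("Content-Type", "text/plain")]
    start_response(status, response_headers)
    return [b"Hello world!\n"]

# Invoke the application directly, capturing what start_response receives.
captured = {}

def start_response(status, headers):
    captured["status"] = status
    captured["headers"] = headers

body = b"".join(simple_app({"REQUEST_METHOD": "GET"}, start_response))
```

Because the call blocks until the body is fully produced, any server running it on an event loop thread (as WSGIContainer does by default) can serve only one such request at a time.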
The primary purpose of WSGIContainer is to support both WSGI applications and native Tornado RequestHandlers in a single process. WSGI-only applications are likely to be better off with a dedicated WSGI server such as gunicorn or uwsgi.

Wrap a WSGI application in a WSGIContainer to make it implement the Tornado HTTPServer request_callback interface. The WSGIContainer object can then be passed to classes from the tornado.routing module, tornado.web.FallbackHandler, or to HTTPServer directly. This class is intended to let other frameworks (Django, Flask, etc) run on the Tornado HTTP server and I/O loop.

Realistic usage will be more complicated, but the simplest possible example uses a hand-written WSGI application with HTTPServer:

    def simple_app(environ, start_response):
        status = "200 OK"
        response_headers = [("Content-type", "text/plain")]
        start_response(status, response_headers)
        return [b"Hello world!\n"]

    async def main():
        container = tornado.wsgi.WSGIContainer(simple_app)
        http_server = tornado.httpserver.HTTPServer(container)
        http_server.listen(8888)
        await asyncio.Event().wait()

    asyncio.run(main())

The recommended pattern is to use the tornado.routing module to set up routing rules between your WSGI application and, typically, a tornado.web.Application. Alternatively, tornado.web.Application can be used as the top-level router and tornado.web.FallbackHandler can embed a WSGIContainer within it.

If the executor argument is provided, the WSGI application will be executed on that executor. This must be an instance of concurrent.futures.Executor, typically a ThreadPoolExecutor (ProcessPoolExecutor is not supported). If no executor is given, the application will run on the event loop thread in Tornado 6.3; this will change to use an internal thread pool by default in Tornado 7.0.

Warning: By default, the WSGI application is executed on the event loop's thread. This limits the server to one request at a time (per process), making it less scalable than most other WSGI servers. It is therefore highly recommended that you pass a ThreadPoolExecutor when constructing the WSGIContainer, after verifying that your application is thread-safe. The default will change to use a ThreadPoolExecutor in Tornado 7.0.

New in version 6.3: The executor parameter.

Deprecated since version 6.3: The default behavior of running the WSGI application on the event loop thread is deprecated and will change in Tornado 7.0 to use a thread pool by default.

environ(request: HTTPServerRequest) → Dict[str, Any]
Converts a tornado.httputil.HTTPServerRequest to a WSGI environment.

Changed in version 6.3: No longer a static method.

6.6.3 tornado.platform.caresresolver — Asynchronous DNS Resolver using C-Ares

This module contains a DNS resolver using the c-ares library (and its wrapper pycares).

class tornado.platform.caresresolver.CaresResolver
Name resolver based on the c-ares library.

This is a non-blocking and non-threaded resolver. It may not produce the same results as the system resolver, but can be used for non-blocking resolution when threads cannot be used.

c-ares fails to resolve some names when family is AF_UNSPEC, so it is only recommended for use in AF_INET (i.e. IPv4). This is the default for tornado.simple_httpclient, but other libraries may default to AF_UNSPEC.

Deprecated since version 6.2: This class is deprecated and will be removed in Tornado 7.0. Use the default thread-based resolver instead.

6.6.4 tornado.platform.twisted — Bridges between Twisted and Tornado

Deprecated since version 6.0: This module is no longer recommended for new code. Instead of using direct integration between Tornado and Twisted, new applications should rely on the integration with asyncio provided by both packages.
Importing this module has the side effect of registering Twisted's Deferred class with Tornado's @gen.coroutine so that Deferred objects can be used with yield in coroutines using this decorator (importing this module has no effect on native coroutines using async def).

tornado.platform.twisted.install()
Install AsyncioSelectorReactor as the default Twisted reactor.

Deprecated since version 5.1: This function is provided for backwards compatibility; code that does not require compatibility with older versions of Tornado should use twisted.internet.asyncioreactor.install() directly.

Changed in version 6.0.3: In Tornado 5.x and before, this function installed a reactor based on the Tornado IOLoop. When that reactor implementation was removed in Tornado 6.0.0, this function was removed as well. It was restored in Tornado 6.0.3 using the asyncio reactor instead.

Twisted DNS resolver

class tornado.platform.twisted.TwistedResolver
Twisted-based asynchronous resolver.

This is a non-blocking and non-threaded resolver. It is recommended only when threads cannot be used, since it has limitations compared to the standard getaddrinfo-based Resolver and DefaultExecutorResolver. Specifically, it returns at most one result, and arguments other than host and family are ignored. It may fail to resolve when family is not socket.AF_UNSPEC.

Requires Twisted 12.1 or newer.

Changed in version 5.0: The io_loop argument (deprecated since version 4.1) has been removed.

Deprecated since version 6.2: This class is deprecated and will be removed in Tornado 7.0. Use the default thread-based resolver instead.

6.6.5 tornado.platform.asyncio — Bridge between asyncio and Tornado

Bridges between the asyncio module and Tornado IOLoop.

New in version 3.2.

This module integrates Tornado with the asyncio module introduced in Python 3.4. This makes it possible to combine the two libraries on the same event loop.
Deprecated since version 5.0: While the code in this module is still used, it is now enabled automatically when asyncio is available, so applications should no longer need to refer to this module directly.

Note: Tornado is designed to use a selector-based event loop. On Windows, where a proactor-based event loop has been the default since Python 3.8, a selector event loop is emulated by running select on a separate thread. Configuring asyncio to use a selector event loop may improve performance of Tornado (but may reduce performance of other asyncio-based libraries in the same process).

class tornado.platform.asyncio.AsyncIOMainLoop(*args: Any, **kwargs: Any)
AsyncIOMainLoop creates an IOLoop that corresponds to the current asyncio event loop (i.e. the one returned by asyncio.get_event_loop()).

Deprecated since version 5.0: Now used automatically when appropriate; it is no longer necessary to refer to this class directly.

Changed in version 5.0: Closing an AsyncIOMainLoop now closes the underlying asyncio loop.

class tornado.platform.asyncio.AsyncIOLoop(*args: Any, **kwargs: Any)
AsyncIOLoop is an IOLoop that runs on an asyncio event loop. This class follows the usual Tornado semantics for creating new IOLoops; these loops are not necessarily related to the asyncio default event loop.

Each AsyncIOLoop creates a new asyncio.EventLoop; this object can be accessed with the asyncio_loop attribute.

Changed in version 6.2: Support explicit asyncio_loop argument for specifying the asyncio loop to attach to, rather than always creating a new one with the default policy.

Changed in version 5.0: When an AsyncIOLoop becomes the current IOLoop, it also sets the current asyncio event loop.

Deprecated since version 5.0: Now used automatically when appropriate; it is no longer necessary to refer to this class directly.
tornado.platform.asyncio.to_tornado_future(asyncio_future: Future) → Future
Convert an asyncio.Future to a tornado.concurrent.Future.

New in version 4.1.

Deprecated since version 5.0: Tornado Futures have been merged with asyncio.Future, so this method is now a no-op.

tornado.platform.asyncio.to_asyncio_future(tornado_future: Future) → Future
Convert a Tornado yieldable object to an asyncio.Future.

New in version 4.1.

Changed in version 4.3: Now accepts any yieldable object, not just tornado.concurrent.Future.

Deprecated since version 5.0: Tornado Futures have been merged with asyncio.Future, so this method is now equivalent to tornado.gen.convert_yielded.

class tornado.platform.asyncio.AnyThreadEventLoopPolicy
Event loop policy that allows loop creation on any thread.

The default asyncio event loop policy only automatically creates event loops in the main threads. Other threads must create event loops explicitly or asyncio.get_event_loop (and therefore IOLoop.current) will fail. Installing this policy allows event loops to be created automatically on any thread, matching the behavior of Tornado versions prior to 5.0 (or 5.0 on Python 2).

Usage:

    asyncio.set_event_loop_policy(AnyThreadEventLoopPolicy())

New in version 5.0.

Deprecated since version 6.2: AnyThreadEventLoopPolicy affects the implicit creation of an event loop, which is deprecated in Python 3.10 and will be removed in a future version of Python. At that time AnyThreadEventLoopPolicy will no longer be useful. If you are relying on it, use asyncio.new_event_loop or asyncio.run explicitly in any non-main threads that need event loops.

class tornado.platform.asyncio.AddThreadSelectorEventLoop(real_loop: AbstractEventLoop)
Wrap an event loop to add implementations of the add_reader method family.

Instances of this class start a second thread to run a selector. This thread is completely hidden from the user; all callbacks are run on the wrapped event loop's thread.
This class is used automatically by Tornado; applications should not need to refer to it directly.

It is safe to wrap any event loop with this class, although it only makes sense for event loops that do not implement the add_reader family of methods themselves (i.e. WindowsProactorEventLoop).

Closing the AddThreadSelectorEventLoop also closes the wrapped event loop.

6.7 Utilities

6.7.1 tornado.autoreload — Automatically detect code changes in development

Automatically restart the server when a source file is modified. Most applications should not access this module directly. Instead, pass the keyword argument autoreload=True to the tornado.web.Application constructor (or debug=True, which enables this setting and several others). This will enable autoreload mode as well as checking for changes to templates and static resources. Note that restarting is a destructive operation and any requests in progress will be aborted when the process restarts. (If you want to disable autoreload while using other debug-mode features, pass both debug=True and autoreload=False.)

This module can also be used as a command-line wrapper around scripts such as unit test runners. See the main method for details.

The command-line wrapper and Application debug modes can be used together. This combination is encouraged as the wrapper catches syntax errors and other import-time failures, while debug mode catches changes once the server has started.

This module will not work correctly when HTTPServer's multi-process mode is used.

Reloading loses any Python interpreter command-line arguments (e.g. -u) because it re-executes Python using sys.executable and sys.argv. Additionally, modifying these variables will cause reloading to behave incorrectly.

tornado.autoreload.start(check_time: int = 500) → None
Begins watching source files for changes.

Changed in version 5.0: The io_loop argument (deprecated since version 4.1) has been removed.
tornado.autoreload.wait() → None
Wait for a watched file to change, then restart the process.

Intended to be used at the end of scripts like unit test runners, to run the tests again after any source file changes (but see also the command-line interface in main).

tornado.autoreload.watch(filename: str) → None
Add a file to the watch list. All imported modules are watched by default.

tornado.autoreload.add_reload_hook(fn: Callable[[], None]) → None
Add a function to be called before reloading the process.

Note that for open file and socket handles it is generally preferable to set the FD_CLOEXEC flag (using fcntl or os.set_inheritable) instead of using a reload hook to close them.

tornado.autoreload.main() → None
Command-line wrapper to re-run a script whenever its source changes. Scripts may be specified by filename or module name:

    python -m tornado.autoreload -m tornado.test.runtests
    python -m tornado.autoreload tornado/test/runtests.py

Running a script with this wrapper is similar to calling tornado.autoreload.wait at the end of the script, but this wrapper can catch import-time problems like syntax errors that would otherwise prevent the script from reaching its call to wait.

6.7.2 tornado.concurrent — Work with Future objects

Utilities for working with Future objects.

Tornado previously provided its own Future class, but now uses asyncio.Future. This module contains utility functions for working with asyncio.Future in a way that is backwards-compatible with Tornado's old Future implementation.

While this module is an important part of Tornado's internal implementation, applications rarely need to interact with it directly.

class tornado.concurrent.Future
tornado.concurrent.Future is an alias for asyncio.Future.

In Tornado, the main way in which applications interact with Future objects is by awaiting or yielding them in coroutines, instead of calling methods on the Future objects themselves.
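Because tornado.concurrent.Future is just asyncio.Future, the await-instead-of-callbacks style described above can be shown with the standard library alone; a minimal sketch:

```python
import asyncio


async def main() -> int:
    loop = asyncio.get_running_loop()
    fut = loop.create_future()
    # Resolve the future from "elsewhere"; real code would set the result
    # when some I/O completes.
    loop.call_soon(fut.set_result, 42)
    # Awaiting the future suspends this coroutine until the result is set;
    # no add_done_callback is needed.
    return await fut


result = asyncio.run(main())
```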
For more information on the available methods, see the asyncio.Future docs.

Changed in version 5.0: Tornado's implementation of Future has been replaced by the version from asyncio when available.

• Future objects can only be created while there is a current IOLoop
• The timing of callbacks scheduled with Future.add_done_callback has changed.
• Cancellation is now partially supported (only on Python 3)
• The exc_info and set_exc_info methods are no longer available on Python 3.

tornado.concurrent.run_on_executor(*args: Any, **kwargs: Any) → Callable
Decorator to run a synchronous method asynchronously on an executor. Returns a future.

The executor to be used is determined by the executor attribute of self. To use a different attribute name, pass a keyword argument to the decorator:

    @run_on_executor(executor='_thread_pool')
    def foo(self):
        pass

This decorator should not be confused with the similarly-named IOLoop.run_in_executor. In general, using run_in_executor when calling a blocking method is recommended instead of using this decorator when defining a method. If compatibility with older versions of Tornado is required, consider defining an executor and using executor.submit() at the call site.

Changed in version 4.2: Added keyword arguments to use alternative attributes.

Changed in version 5.0: Always uses the current IOLoop instead of self.io_loop.

Changed in version 5.1: Returns a Future compatible with await instead of a concurrent.futures.Future.

Deprecated since version 5.1: The callback argument is deprecated and will be removed in 6.0. The decorator itself is discouraged in new code but will not be removed in 6.0.

Changed in version 6.0: The callback argument was removed.

tornado.concurrent.chain_future(a: Future[_T], b: Future[_T]) → None
Chain two futures together so that when one completes, so does the other.
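The run_in_executor call-site pattern recommended above (in place of the run_on_executor decorator) can be sketched with the standard library alone; blocking_sum is an illustrative placeholder for any blocking function such as a synchronous database query:

```python
import asyncio
from concurrent.futures import ThreadPoolExecutor


def blocking_sum(a: int, b: int) -> int:
    # Stand-in for a blocking call that would stall the event loop
    # if run directly on it.
    return a + b


async def main() -> int:
    loop = asyncio.get_running_loop()
    with ThreadPoolExecutor(max_workers=2) as pool:
        # Run the blocking function on the pool and await its result at
        # the call site, instead of decorating a method with run_on_executor.
        return await loop.run_in_executor(pool, blocking_sum, 2, 3)


result = asyncio.run(main())
```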
The result (success or failure) of a will be copied to b, unless b has already been completed or cancelled by the time a finishes.

Changed in version 5.0: Now accepts both Tornado/asyncio Future objects and concurrent.futures.Future.

tornado.concurrent.future_set_result_unless_cancelled(future: Union[futures.Future[_T], Future[_T]], value: _T) → None
Set the given value as the Future's result, if not cancelled.

Avoids asyncio.InvalidStateError when calling set_result() on a cancelled asyncio.Future.

New in version 5.0.

tornado.concurrent.future_set_exception_unless_cancelled(future: Union[futures.Future[_T], Future[_T]], exc: BaseException) → None
Set the given exc as the Future's exception.

If the Future is already canceled, logs the exception instead. If this logging is not desired, the caller should explicitly check the state of the Future and call Future.set_exception instead of this wrapper.

Avoids asyncio.InvalidStateError when calling set_exception() on a cancelled asyncio.Future.

New in version 6.0.

tornado.concurrent.future_set_exc_info(future: Union[futures.Future[_T], Future[_T]], exc_info: Tuple[Optional[type], Optional[BaseException], Optional[TracebackType]]) → None
Set the given exc_info as the Future's exception.

Understands both asyncio.Future and the extensions in older versions of Tornado to enable better tracebacks on Python 2.

New in version 5.0.

Changed in version 6.0: If the future is already cancelled, this function is a no-op. (Previously asyncio.InvalidStateError would be raised.)

tornado.concurrent.future_add_done_callback(future: futures.Future[_T], callback: Callable[[futures.Future[_T]], None]) → None
tornado.concurrent.future_add_done_callback(future: Future[_T], callback: Callable[[Future[_T]], None]) → None
Arrange to call callback when future is complete.

callback is invoked with one argument, the future. If future is already done, callback is invoked immediately. This may differ from the behavior of Future.add_done_callback, which makes no such guarantee.

New in version 5.0.

6.7.3 tornado.log — Logging support

Logging support for Tornado.

Tornado uses three logger streams:

• tornado.access: Per-request logging for Tornado's HTTP servers (and potentially other servers in the future)
• tornado.application: Logging of errors from application code (i.e. uncaught exceptions from callbacks)
• tornado.general: General-purpose logging, including any errors or warnings from Tornado itself.

These streams may be configured independently using the standard library's logging module. For example, you may wish to send tornado.access logs to a separate file for analysis.

class tornado.log.LogFormatter(fmt: str = '%(color)s[%(levelname)1.1s %(asctime)s %(module)s:%(lineno)d]%(end_color)s %(message)s', datefmt: str = '%y%m%d %H:%M:%S', style: str = '%', color: bool = True, colors: Dict[int, int] = {10: 4, 20: 2, 30: 3, 40: 1, 50: 5})
Log formatter used in Tornado.

Key features of this formatter are:

• Color support when logging to a terminal that supports it.
• Timestamps on every log line.
• Robust against str/bytes encoding problems.

This formatter is enabled automatically by tornado.options.parse_command_line or tornado.options.parse_config_file (unless --logging=none is used).

Color support on Windows versions that do not support ANSI color codes is enabled by use of the colorama library. Applications that wish to use this must first initialize colorama with a call to colorama.init. See the colorama documentation for details.

Changed in version 4.5: Added support for colorama. Changed the constructor signature to be compatible with logging.config.dictConfig.

Parameters

• color (bool) – Enables color support.
• fmt (str) – Log message format. It will be applied to the attributes dict of log records. The text between %(color)s and %(end_color)s will be colored depending on the level if color support is on.
• colors (dict) – color mappings from logging level to terminal color code • datefmt (str) – Datetime format. Used for formatting (asctime) placeholder in prefix_fmt. Changed in version 3.2: Added fmt and datefmt arguments. tornado.log.enable_pretty_logging(options: Optional[Any] = None, logger: Optional[Logger] = None) → None Turns on formatted logging output as configured. This is called automatically by tornado.options.parse_command_line and tornado.options. parse_config_file. Tornado Documentation, Release 6.3.3 tornado.log.define_logging_options(options: Optional[Any] = None) → None Add logging-related flags to options. These options are present automatically on the default options instance; this method is only necessary if you have created your own OptionParser. New in version 4.2: This function existed in prior versions but was broken and undocumented until 4.2. 6.7.4 tornado.options — Command-line parsing A command line parsing module that lets modules define their own options. This module is inspired by Google’s gflags. The primary difference with libraries such as argparse is that a global registry is used so that options may be defined in any module (it also enables tornado.log by default). The rest of Tornado does not depend on this module, so feel free to use argparse or other configuration libraries if you prefer them. Options must be defined with tornado.options.define before use, generally at the top level of a module. The options are then accessible as attributes of tornado.options.options: # myapp/db.py from tornado.options import define, options define("mysql_host", default="127.0.0.1:3306", help="Main user DB") define("memcache_hosts", default="127.0.0.1:11011", multiple=True, help="Main user memcache servers") def connect(): db = database.Connection(options.mysql_host) ... 
# myapp/server.py
from tornado.options import define, options

define("port", default=8080, help="port to listen on")

def start_server():
    app = make_app()
    app.listen(options.port)

The main() method of your application does not need to be aware of all of the options used throughout your program; they are all automatically loaded when the modules are loaded. However, all modules that define options must have been imported before the command line is parsed. Your main() method can parse the command line or parse a config file with either parse_command_line or parse_config_file:

import myapp.db, myapp.server
import tornado

if __name__ == '__main__':
    tornado.options.parse_command_line()
    # or
    tornado.options.parse_config_file("/etc/server.conf")

Note: When using multiple parse_* functions, pass final=False to all but the last one, or side effects may occur twice (in particular, this can result in log messages being doubled). tornado.options.options is a singleton instance of OptionParser, and the top-level functions in this module (define, parse_command_line, etc) simply call methods on it. You may create additional OptionParser instances to define isolated sets of options, such as for subcommands. Note: By default, several options are defined that will configure the standard logging module when parse_command_line or parse_config_file are called. If you want Tornado to leave the logging configuration alone so you can manage it yourself, either pass --logging=none on the command line or do the following to disable it in code:

from tornado.options import options, parse_command_line
options.logging = None
parse_command_line()

Note: parse_command_line or parse_config_file should be called after logging is configured and after any user-defined command line flags that use the callback option definition, or these configurations will not take effect.
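The final=False rule from the note above can be seen with an isolated OptionParser instance (so the global tornado.options.options singleton is left alone). This is a minimal sketch; the option names "port" and "debug" are purely illustrative, not from any real application:

```python
from tornado.options import OptionParser

# Isolated parser; does not touch the global tornado.options.options.
parser = OptionParser()
parser.define("port", default=8080, type=int, help="port to listen on")
parser.define("debug", default=False, type=bool)

# When combining several parse_* calls, pass final=False to all but the
# last one so parse callbacks (side effects) run only once.
parser.parse_command_line(["prog", "--port=9000"], final=False)
parser.parse_command_line(["prog", "--debug"], final=True)

# Options are readable as attributes of the parser.
print(parser.port, parser.debug)
```

Values set in an earlier non-final pass survive later passes, which is what makes combining a config file with command-line overrides work.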
Changed in version 4.3: Dashes and underscores are fully interchangeable in option names; options can be defined, set, and read with any mix of the two. Dashes are typical for command-line usage while config files require underscores. Global functions tornado.options.define(name: str, default: Optional[Any] = None, type: Optional[type] = None, help: Optional[str] = None, metavar: Optional[str] = None, multiple: bool = False, group: Optional[str] = None, callback: Optional[Callable[[Any], None]] = None) → None Defines an option in the global namespace. See OptionParser.define. tornado.options.options Global options object. All defined options are available as attributes on this object. tornado.options.parse_command_line(args: Optional[List[str]] = None, final: bool = True) → List[str] Parses global options from the command line. See OptionParser.parse_command_line. tornado.options.parse_config_file(path: str, final: bool = True) → None Parses global options from a config file. See OptionParser.parse_config_file. tornado.options.print_help(file=sys.stderr) Prints all the command line options to stderr (or another file). See OptionParser.print_help. tornado.options.add_parse_callback(callback: Callable[[], None]) → None Adds a parse callback, to be invoked when option parsing is done. See OptionParser.add_parse_callback Tornado Documentation, Release 6.3.3 exception tornado.options.Error Exception raised by errors in the options module. OptionParser class class tornado.options.OptionParser A collection of options, a dictionary with object-like access. Normally accessed via static functions in the tornado.options module, which reference a global instance. OptionParser.define(name: str, default: Optional[Any] = None, type: Optional[type] = None, help: Optional[str] = None, metavar: Optional[str] = None, multiple: bool = False, group: Optional[str] = None, callback: Optional[Callable[[Any], None]] = None) → None Defines a new command line option. 
type can be any of str, int, float, bool, datetime, or timedelta. If no type is given but a default is, type is the type of default. Otherwise, type defaults to str. If multiple is True, the option value is a list of type instead of an instance of type. help and metavar are used to construct the automatically generated command line help string. The help message is formatted like: --name=METAVAR help string group is used to group the defined options in logical groups. By default, command line options are grouped by the file in which they are defined. Command line option names must be unique globally. If a callback is given, it will be run with the new value whenever the option is changed. This can be used to combine command-line and file-based options: define("config", type=str, help="path to config file", callback=lambda path: parse_config_file(path, final=False)) With this definition, options in the file specified by --config will override options set earlier on the command line, but can be overridden by later flags. OptionParser.parse_command_line(args: Optional[List[str]] = None, final: bool = True) → List[str] Parses all options given on the command line (defaults to sys.argv). Options look like --option=value and are parsed according to their type. For boolean options, --option is equivalent to --option=true If the option has multiple=True, comma-separated values are accepted. For multi-value integer options, the syntax x:y is also accepted and equivalent to range(x, y). Note that args[0] is ignored since it is the program name in sys.argv. We return a list of all arguments that are not parsed as options. If final is False, parse callbacks will not be run. This is useful for applications that wish to combine configu- rations from multiple sources. OptionParser.parse_config_file(path: str, final: bool = True) → None Parses and loads the config file at the given path. 
The config file contains Python code that will be executed (so it is not safe to use untrusted config files). Anything in the global namespace that matches a defined option will be used to set that option’s value. Tornado Documentation, Release 6.3.3 Options may either be the specified type for the option or strings (in which case they will be parsed the same way as in parse_command_line) Example (using the options defined in the top-level docs of this module): port = 80 mysql_host = 'mydb.example.com:3306' # Both lists and comma-separated strings are allowed for # multiple=True. memcache_hosts = ['cache1.example.com:11011', 'cache2.example.com:11011'] memcache_hosts = 'cache1.example.com:11011,cache2.example.com:11011' If final is False, parse callbacks will not be run. This is useful for applications that wish to combine configu- rations from multiple sources. Note: tornado.options is primarily a command-line library. Config file support is provided for applications that wish to use it, but applications that prefer config files may wish to look at other libraries instead. Changed in version 4.1: Config files are now always interpreted as utf-8 instead of the system default encoding. Changed in version 4.4: The special variable __file__ is available inside config files, specifying the absolute path to the config file itself. Changed in version 5.1: Added the ability to set options via strings in config files. OptionParser.print_help(file: Optional[TextIO] = None) → None Prints all the command line options to stderr (or another file). OptionParser.add_parse_callback(callback: Callable[[], None]) → None Adds a parse callback, to be invoked when option parsing is done. OptionParser.mockable() → _Mockable Returns a wrapper around self that is compatible with mock.patch. 
The mock.patch function (included in the standard library unittest.mock package since Python 3.3, or in the third-party mock package for older versions of Python) is incompatible with objects like options that override __getattr__ and __setattr__. This function returns an object that can be used with mock.patch.object to modify option values: with mock.patch.object(options.mockable(), 'name', value): assert options.name == value OptionParser.items() → Iterable[Tuple[str, Any]] An iterable of (name, value) pairs. New in version 3.1. OptionParser.as_dict() → Dict[str, Any] The names and values of all options. New in version 3.1. OptionParser.groups() → Set[str] The set of option-groups created by define. New in version 3.1. Tornado Documentation, Release 6.3.3 OptionParser.group_dict(group: str) → Dict[str, Any] The names and values of options in a group. Useful for copying options into Application settings: from tornado.options import define, parse_command_line, options define('template_path', group='application') define('static_path', group='application') parse_command_line() application = Application( handlers, **options.group_dict('application')) New in version 3.1. 6.7.5 tornado.testing — Unit testing support for asynchronous code Support classes for automated testing. • AsyncTestCase and AsyncHTTPTestCase: Subclasses of unittest.TestCase with additional support for testing asynchronous (IOLoop-based) code. • ExpectLog: Make test logs less spammy. • main(): A simple test runner (wrapper around unittest.main()) with support for the tornado.autoreload module to rerun the tests when code changes. Asynchronous test cases class tornado.testing.AsyncTestCase(methodName: str = 'runTest') TestCase subclass for testing IOLoop-based asynchronous code. The unittest framework is synchronous, so the test must be complete by the time the test method returns. This means that asynchronous code cannot be used in quite the same way as usual and must be adapted to fit. 
To write your tests with coroutines, decorate your test methods with tornado.testing.gen_test instead of tornado.gen.coroutine. This class also provides the (deprecated) stop() and wait() methods for a more manual style of testing. The test method itself must call self.wait(), and asynchronous callbacks should call self.stop() to signal completion. By default, a new IOLoop is constructed for each test and is available as self.io_loop. If the code being tested requires a reused global IOLoop, subclasses should override get_new_ioloop to return it, although this is deprecated as of Tornado 6.3. The IOLoop's start and stop methods should not be called directly. Instead, use self.stop and self.wait. Arguments passed to self.stop are returned from self.wait. It is possible to have multiple wait/stop cycles in the same test. Example:

# This test uses coroutine style.
class MyTestCase(AsyncTestCase):
    @tornado.testing.gen_test
    def test_http_fetch(self):
        client = AsyncHTTPClient()
        response = yield client.fetch("http://www.tornadoweb.org")
        # Test contents of response
        self.assertIn("FriendFeed", response.body)

# This test uses argument passing between self.stop and self.wait.
class MyTestCase2(AsyncTestCase):
    def test_http_fetch(self):
        client = AsyncHTTPClient()
        client.fetch("http://www.tornadoweb.org/", self.stop)
        response = self.wait()
        # Test contents of response
        self.assertIn("FriendFeed", response.body)

get_new_ioloop() → IOLoop Returns the IOLoop to use for this test. By default, a new IOLoop is created for each test. Subclasses may override this method to return IOLoop.current() if it is not appropriate to use a new IOLoop in each test (for example, if there are global singletons using the default IOLoop) or if a per-test event loop is being provided by another system (such as pytest-asyncio). Deprecated since version 6.3: This method will be removed in Tornado 7.0.
stop(_arg: Optional[Any] = None, **kwargs: Any) → None Stops the IOLoop, causing one pending (or future) call to wait() to return. Keyword arguments or a single positional argument passed to stop() are saved and will be returned by wait(). Deprecated since version 5.1: stop and wait are deprecated; use @gen_test instead. wait(condition: Optional[Callable[[...], bool]] = None, timeout: Optional[float] = None) → Any Runs the IOLoop until stop is called or timeout has passed. In the event of a timeout, an exception will be thrown. The default timeout is 5 seconds; it may be overridden with a timeout keyword argument or globally with the ASYNC_TEST_TIMEOUT environment variable. If condition is not None, the IOLoop will be restarted after stop() until condition() returns True. Changed in version 3.1: Added the ASYNC_TEST_TIMEOUT environment variable. Deprecated since version 5.1: stop and wait are deprecated; use @gen_test instead. class tornado.testing.AsyncHTTPTestCase(methodName: str = 'runTest') A test case that starts up an HTTP server. Subclasses must override get_app(), which returns the tornado.web.Application (or other HTTPServer callback) to be tested. Tests will typically use the provided self.http_client to fetch URLs from this server. Example, assuming the "Hello, world" example from the user guide is in hello.py:

import hello

class TestHelloApp(AsyncHTTPTestCase):
    def get_app(self):
        return hello.make_app()

    def test_homepage(self):
        response = self.fetch('/')
        self.assertEqual(response.code, 200)
        self.assertEqual(response.body, 'Hello, world')

That call to self.fetch() is equivalent to

self.http_client.fetch(self.get_url('/'), self.stop)
response = self.wait()

which illustrates how AsyncTestCase can turn an asynchronous operation, like http_client.fetch(), into a synchronous operation.
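The fetch/stop/wait equivalence shown above is the general pattern of driving an event loop until a single asynchronous operation finishes. A minimal sketch of the same idea with plain asyncio (fake_fetch is a hypothetical stand-in for http_client.fetch, not a Tornado API):

```python
import asyncio

async def fake_fetch(url):
    # Hypothetical stand-in for an asynchronous HTTP fetch.
    await asyncio.sleep(0)
    return "response for " + url

# Run the loop until the coroutine completes -- the same job that
# self.wait() does for the result delivered via self.stop.
response = asyncio.run(fake_fetch("http://example.com/"))
print(response)
```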
If you need to do other asynchronous operations in tests, you'll probably need to use stop() and wait() yourself. get_app() → Application Should be overridden by subclasses to return a tornado.web.Application or other HTTPServer callback. fetch(path: str, raise_error: bool = False, **kwargs: Any) → HTTPResponse Convenience method to synchronously fetch a URL. The given path will be appended to the local server's host and port. Any additional keyword arguments will be passed directly to AsyncHTTPClient.fetch (and so could be used to pass method="POST", body="...", etc). If the path begins with http:// or https://, it will be treated as a full URL and will be fetched as-is. If raise_error is True, a tornado.httpclient.HTTPError will be raised if the response code is not 200. This is the same behavior as the raise_error argument to AsyncHTTPClient.fetch, but the default is False here (it's True in AsyncHTTPClient) because tests often need to deal with non-200 response codes. Changed in version 5.0: Added support for absolute URLs. Changed in version 5.1: Added the raise_error argument. Deprecated since version 5.1: This method currently turns any exception into an HTTPResponse with status code 599. In Tornado 6.0, errors other than tornado.httpclient.HTTPError will be passed through, and raise_error=False will only suppress errors that would be raised due to non-200 response codes. get_httpserver_options() → Dict[str, Any] May be overridden by subclasses to return additional keyword arguments for the server. get_http_port() → int Returns the port used by the server. A new port is chosen for each test. get_url(path: str) → str Returns an absolute url for the given path on the test server. class tornado.testing.AsyncHTTPSTestCase(methodName: str = 'runTest') A test case that starts an HTTPS server. Interface is generally the same as AsyncHTTPTestCase.
Tornado Documentation, Release 6.3.3 get_ssl_options() → Dict[str, Any] May be overridden by subclasses to select SSL options. By default includes a self-signed testing certificate. tornado.testing.gen_test(*, timeout: Optional[float] = None) → Callable[[Callable[[...], Union[Generator, Coroutine]]], Callable[[...], None]] tornado.testing.gen_test(func: Callable[[...], Union[Generator, Coroutine]]) → Callable[[...], None] Testing equivalent of @gen.coroutine, to be applied to test methods. @gen.coroutine cannot be used on tests because the IOLoop is not already running. @gen_test should be applied to test methods on subclasses of AsyncTestCase. Example: class MyTest(AsyncHTTPTestCase): @gen_test def test_something(self): response = yield self.http_client.fetch(self.get_url('/')) By default, @gen_test times out after 5 seconds. The timeout may be overridden globally with the ASYNC_TEST_TIMEOUT environment variable, or for each test with the timeout keyword argument: class MyTest(AsyncHTTPTestCase): @gen_test(timeout=10) def test_something_slow(self): response = yield self.http_client.fetch(self.get_url('/')) Note that @gen_test is incompatible with AsyncTestCase.stop, AsyncTestCase.wait, and AsyncHTTPTestCase.fetch . Use yield self.http_client.fetch(self.get_url()) as shown above instead. New in version 3.1: The timeout argument and ASYNC_TEST_TIMEOUT environment variable. Changed in version 4.0: The wrapper now passes along *args, **kwargs so it can be used on functions with arguments. Controlling log output class tornado.testing.ExpectLog(logger: Union[Logger, str], regex: str, required: bool = True, level: Optional[int] = None) Context manager to capture and suppress expected log output. Useful to make tests of error conditions less noisy, while still leaving unexpected log entries visible. Not thread safe. The attribute logged_stack is set to True if any exception stack trace was logged. 
Usage:

with ExpectLog('tornado.application', "Uncaught exception"):
    error_response = self.fetch("/some_page")

Changed in version 4.3: Added the logged_stack attribute. Constructs an ExpectLog context manager. Parameters • logger – Logger object (or name of logger) to watch. Pass an empty string to watch the root logger. • regex – Regular expression to match. Any log entries on the specified logger that match this regex will be suppressed. • required – If true, an exception will be raised if the end of the with statement is reached without matching any log entries. • level – A constant from the logging module indicating the expected log level. If this parameter is provided, only log messages at this level will be considered to match. Additionally, the supplied logger will have its level adjusted if necessary (for the duration of the ExpectLog) to enable the expected message. Changed in version 6.1: Added the level parameter. Deprecated since version 6.3: In Tornado 7.0, only WARNING and higher logging levels will be matched by default. To match INFO and lower levels, the level argument must be used. This is changing to minimize differences between tornado.testing.main (which enables INFO logs by default) and most other test runners (including those in IDEs) which have INFO logs disabled by default. Test runner tornado.testing.main(**kwargs: Any) → None A simple test runner. This test runner is essentially equivalent to unittest.main from the standard library, but adds support for Tornado-style option parsing and log formatting. It is not necessary to use this main function to run tests using AsyncTestCase; these tests are self-contained and can run with any test runner. The easiest way to run a test is via the command line:

python -m tornado.testing tornado.test.web_test

See the standard library unittest module for ways in which tests can be specified.
Projects with many tests may wish to define a test script like tornado/test/runtests.py. This script should define a method all() which returns a test suite and then call tornado.testing.main(). Note that even when a test script is used, the all() test suite may be overridden by naming a single test on the command line: # Runs all tests python -m tornado.test.runtests # Runs one test python -m tornado.test.runtests tornado.test.web_test Additional keyword arguments passed through to unittest.main(). For example, use tornado.testing. main(verbosity=2) to show many test details as they are run. See http://docs.python.org/library/unittest.html# unittest.main for full argument list. Changed in version 5.0: This function produces no output of its own; only that produced by the unittest module (previously it would add a PASS or FAIL log message). Tornado Documentation, Release 6.3.3 Helper functions tornado.testing.bind_unused_port(reuse_port: bool = False, address: str = '127.0.0.1') → Tuple[socket, int] Binds a server socket to an available port on localhost. Returns a tuple (socket, port). Changed in version 4.4: Always binds to 127.0.0.1 without resolving the name localhost. Changed in version 6.2: Added optional address argument to override the default “127.0.0.1”. tornado.testing.get_async_test_timeout() → float Get the global timeout setting for async tests. Returns a float, the timeout in seconds. New in version 3.1. 6.7.6 tornado.util — General-purpose utilities Miscellaneous utility functions and classes. This module is used internally by Tornado. It is not necessarily expected that the functions and classes defined here will be useful to other applications, but they are documented here in case they are. The one public-facing part of this module is the Configurable class and its configure method, which becomes a part of the interface of its subclasses, including AsyncHTTPClient, IOLoop, and Resolver. 
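The Configurable factory pattern named above can be sketched as follows. This is an illustrative example, not code from Tornado itself: the Transport, TCPTransport, and QuietTransport names and the nodelay argument are made up for the sketch:

```python
from tornado.util import Configurable

class Transport(Configurable):
    # Hypothetical configurable interface: instantiating Transport()
    # actually builds the configured implementation subclass.
    @classmethod
    def configurable_base(cls):
        return Transport

    @classmethod
    def configurable_default(cls):
        return TCPTransport  # used when nothing has been configured

    def initialize(self, nodelay=False):
        # Configurable subclasses use initialize instead of __init__.
        self.nodelay = nodelay

class TCPTransport(Transport):
    pass

class QuietTransport(Transport):
    pass

t1 = Transport()  # default implementation: TCPTransport
Transport.configure(QuietTransport, nodelay=True)
t2 = Transport()  # now QuietTransport, with the saved kwarg applied
print(type(t1).__name__, type(t2).__name__, t2.nodelay)
```

Note that isinstance(t2, Transport) still holds, which is the point of using the constructor as the factory.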
class tornado.util.TimeoutError Exception raised by gen.with_timeout and IOLoop.run_sync. Changed in version 5.0: Unified tornado.gen.TimeoutError and tornado.ioloop.TimeoutError as tornado.util.TimeoutError. Both former names remain as aliases. Changed in version 6.2: tornado.util.TimeoutError is an alias to asyncio.TimeoutError class tornado.util.ObjectDict Makes a dictionary behave like an object, with attribute-style access. class tornado.util.GzipDecompressor Streaming gzip decompressor. The interface is like that of zlib.decompressobj (without some of the optional arguments, but it understands gzip headers and checksums. decompress(value: bytes, max_length: int = 0) → bytes Decompress a chunk, returning newly-available data. Some data may be buffered for later processing; flush must be called when there is no more input data to ensure that all data was processed. If max_length is given, some input data may be left over in unconsumed_tail; you must retrieve this value and pass it back to a future call to decompress if it is not empty. property unconsumed_tail: bytes Returns the unconsumed portion left over flush() → bytes Return any remaining buffered data not yet returned by decompress. Also checks for errors such as truncated input. No other methods may be called on this object after flush . Tornado Documentation, Release 6.3.3 tornado.util.import_object(name: str) → Any Imports an object by name. import_object('x') is equivalent to import x. import_object('x.y.z') is equivalent to from x.y import z. >>> import tornado.escape >>> import_object('tornado.escape') is tornado.escape True >>> import_object('tornado.escape.utf8') is tornado.escape.utf8 True >>> import_object('tornado') is tornado True >>> import_object('tornado.missing_module') Traceback (most recent call last): ... ImportError: No module named missing_module tornado.util.errno_from_exception(e: BaseException) → Optional[int] Provides the errno from an Exception object. 
There are cases that the errno attribute was not set so we pull the errno out of the args but if someone instantiates an Exception without any args you will get a tuple error. So this function abstracts all that behavior to give you a safe way to get the errno. tornado.util.re_unescape(s: str) → str Unescape a string escaped by re.escape. May raise ValueError for regular expressions which could not have been produced by re.escape (for example, strings containing \d cannot be unescaped). New in version 4.4. class tornado.util.Configurable(*args: Any, **kwargs: Any) Base class for configurable interfaces. A configurable interface is an (abstract) class whose constructor acts as a factory function for one of its imple- mentation subclasses. The implementation subclass as well as optional keyword arguments to its initializer can be set globally at runtime with configure. By using the constructor as the factory method, the interface looks like a normal class, isinstance works as usual, etc. This pattern is most useful when the choice of implementation is likely to be a global decision (e.g. when epoll is available, always use it instead of select), or when a previously-monolithic class has been split into specialized subclasses. Configurable subclasses must define the class methods configurable_base and configurable_default, and use the instance method initialize instead of __init__. Changed in version 5.0: It is now possible for configuration to be specified at multiple levels of a class hierarchy. classmethod configurable_base() → Type[Configurable] Returns the base class of a configurable hierarchy. This will normally return the class in which it is defined. (which is not necessarily the same as the cls classmethod parameter). classmethod configurable_default() → Type[Configurable] Returns the implementation class to be used if none is configured. Tornado Documentation, Release 6.3.3 initialize() → None Initialize a Configurable subclass instance. 
Configurable classes should use initialize instead of __init__. Changed in version 4.2: Now accepts positional arguments in addition to keyword arguments. classmethod configure(impl: Union[None, str, Type[Configurable]], **kwargs: Any) → None Sets the class to use when the base class is instantiated. Keyword arguments will be saved and added to the arguments passed to the constructor. This can be used to set global defaults for some parameters. classmethod configured_class() → Type[Configurable] Returns the currently configured class. class tornado.util.ArgReplacer(func: Callable, name: str) Replaces one value in an args, kwargs pair. Inspects the function signature to find an argument by name whether it is passed by position or keyword. For use in decorators and similar wrappers. get_old_value(args: Sequence[Any], kwargs: Dict[str, Any], default: Optional[Any] = None) → Any Returns the old value of the named argument without replacing it. Returns default if the argument is not present. replace(new_value: Any, args: Sequence[Any], kwargs: Dict[str, Any]) → Tuple[Any, Sequence[Any], Dict[str, Any]] Replace the named argument in args, kwargs with new_value. Returns (old_value, args, kwargs). The returned args and kwargs objects may not be the same as the input objects, or the input objects may be mutated. If the named argument was not found, new_value will be added to kwargs and None will be returned as old_value. tornado.util.timedelta_to_seconds(td: datetime.timedelta) → float Equivalent to td.total_seconds() (introduced in Python 2.7). 6.8 Frequently Asked Questions • Why isn’t this example with time.sleep() running in parallel? • My code is asynchronous. Why is it not running in parallel in two browser tabs? Tornado Documentation, Release 6.3.3 6.8.1 Why isn’t this example with time.sleep() running in parallel? 
Many people's first foray into Tornado's concurrency looks something like this:

class BadExampleHandler(RequestHandler):
    def get(self):
        for i in range(5):
            print(i)
            time.sleep(1)

Fetch this handler twice at the same time and you'll see that the second five-second countdown doesn't start until the first one has completely finished. The reason for this is that time.sleep is a blocking function: it doesn't allow control to return to the IOLoop so that other handlers can be run. Of course, time.sleep is really just a placeholder in these examples; the point is to show what happens when something in a handler gets slow. No matter what the real code is doing, to achieve concurrency blocking code must be replaced with non-blocking equivalents. This means one of three things: 1. Find a coroutine-friendly equivalent. For time.sleep, use tornado.gen.sleep (or asyncio.sleep) instead:

class CoroutineSleepHandler(RequestHandler):
    async def get(self):
        for i in range(5):
            print(i)
            await gen.sleep(1)

When this option is available, it is usually the best approach. See the Tornado wiki for links to asynchronous libraries that may be useful. 2. Find a callback-based equivalent. Similar to the first option, callback-based libraries are available for many tasks, although they are slightly more complicated to use than a library designed for coroutines. Adapt the callback-based function into a future:

class CoroutineTimeoutHandler(RequestHandler):
    async def get(self):
        io_loop = IOLoop.current()
        for i in range(5):
            print(i)
            f = tornado.concurrent.Future()
            do_something_with_callback(f.set_result)
            result = await f

Again, the Tornado wiki can be useful to find suitable libraries. 3. Run the blocking code on another thread. When asynchronous libraries are not available, concurrent.futures.ThreadPoolExecutor can be used to run any blocking code on another thread.
This is a universal solution that can be used for any blocking function, whether an asynchronous counterpart exists or not:

class ThreadPoolHandler(RequestHandler):
    async def get(self):
        for i in range(5):
            print(i)
            await IOLoop.current().run_in_executor(None, time.sleep, 1)

See the Asynchronous I/O chapter of the Tornado user’s guide for more on blocking and asynchronous functions.

6.8.2 My code is asynchronous. Why is it not running in parallel in two browser tabs?

Even when a handler is asynchronous and non-blocking, it can be surprisingly tricky to verify this. Browsers will recognize that you are trying to load the same page in two different tabs and delay the second request until the first has finished. To work around this and see that the server is in fact working in parallel, do one of two things:

• Add something to your urls to make them unique. Instead of http://localhost:8888 in both tabs, load http://localhost:8888/?x=1 in one and http://localhost:8888/?x=2 in the other.
• Use two different browsers. For example, Firefox will be able to load a url even while that same url is being loaded in a Chrome tab.

6.9 Release notes

6.9.1 What’s new in Tornado 6.3.3

Aug 11, 2023

Security improvements

• The Content-Length header and chunked Transfer-Encoding sizes are now parsed more strictly (according to the relevant RFCs) to avoid potential request-smuggling vulnerabilities when deployed behind certain proxies.

6.9.2 What’s new in Tornado 6.3.2

May 13, 2023

Security improvements

• Fixed an open redirect vulnerability in StaticFileHandler under certain configurations.

6.9.3 What’s new in Tornado 6.3.1

Apr 21, 2023

tornado.web

• RequestHandler.set_cookie once again accepts capitalized keyword arguments for backwards compatibility. This is deprecated and in Tornado 7.0 only lowercase arguments will be accepted.
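The thread-pool approach can also be demonstrated with the standard library alone. This is a minimal sketch (not Tornado code) showing that two blocking sleeps offloaded via run_in_executor overlap instead of running back to back, which is exactly why the handler above stays responsive:

```python
import asyncio
import time

# Run two blocking time.sleep calls on worker threads; the event loop stays
# free, so they overlap. This mirrors what the ThreadPoolHandler does with
# IOLoop.current().run_in_executor(None, time.sleep, 1).
async def main() -> float:
    loop = asyncio.get_running_loop()
    start = time.monotonic()
    await asyncio.gather(
        loop.run_in_executor(None, time.sleep, 0.2),
        loop.run_in_executor(None, time.sleep, 0.2),
    )
    return time.monotonic() - start

elapsed = asyncio.run(main())
# two parallel 0.2s sleeps finish in about 0.2s, not 0.4s
print(elapsed < 0.35)
```

Passing None as the executor uses the loop's default thread pool; a dedicated concurrent.futures.ThreadPoolExecutor can be supplied instead to bound the number of worker threads.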
6.9.4 What’s new in Tornado 6.3.0

Apr 17, 2023

Highlights

• The new Application setting xsrf_cookie_name can now be used to take advantage of the __Host cookie prefix for improved security. To use it, add {"xsrf_cookie_name": "__Host-xsrf", "xsrf_cookie_kwargs": {"secure": True}} to your Application settings. Note that this feature currently only works when HTTPS is used.
• WSGIContainer now supports running the application in a ThreadPoolExecutor so the event loop is no longer blocked.
• AsyncTestCase and AsyncHTTPTestCase, which were deprecated in Tornado 6.2, are no longer deprecated.
• WebSockets are now much faster at receiving large messages split into many fragments.

General changes

• Python 3.7 is no longer supported; the minimum supported Python version is 3.8. Python 3.12 is now supported.
• To avoid spurious deprecation warnings, users of Python 3.10 should upgrade to at least version 3.10.9, and users of Python 3.11 should upgrade to at least version 3.11.1.
• Tornado submodules are now imported automatically on demand. This means it is now possible to use a single import tornado statement and refer to objects in submodules such as tornado.web.RequestHandler.

Deprecation notices

• In Tornado 7.0, tornado.testing.ExpectLog will match WARNING and above regardless of the current logging configuration, unless the level argument is used.
• RequestHandler.get_secure_cookie is now a deprecated alias for RequestHandler.get_signed_cookie. RequestHandler.set_secure_cookie is now a deprecated alias for RequestHandler.set_signed_cookie.
• RequestHandler.clear_all_cookies is deprecated. No direct replacement is provided; RequestHandler.clear_cookie should be used on individual cookies.
• Calling the IOLoop constructor without a make_current argument, which was deprecated in Tornado 6.2, is no longer deprecated.
• AsyncTestCase and AsyncHTTPTestCase, which were deprecated in Tornado 6.2, are no longer deprecated.
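As a sketch, the highlighted __Host- prefix settings could be wired into an application like this (a hedged example, assuming Tornado 6.3+ is installed; the empty handler list is a placeholder):

```python
import tornado.web

# Illustrative only: enables signed XSRF cookies under the __Host- prefix.
# The __Host- prefix requires the Secure attribute, so as noted above this
# only works when the site is served over HTTPS.
app = tornado.web.Application(
    [],  # your URL handlers go here
    xsrf_cookies=True,
    xsrf_cookie_name="__Host-xsrf",
    xsrf_cookie_kwargs={"secure": True},
)
```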
• AsyncTestCase.get_new_ioloop is deprecated. Tornado Documentation, Release 6.3.3 tornado.auth • New method GoogleOAuth2Mixin.get_google_oauth_settings can now be overridden to get credentials from a source other than the Application settings. tornado.gen • contextvars now work properly when a @gen.coroutine calls a native coroutine. tornado.options • parse_config_file now recognizes single comma-separated strings (in addition to lists of strings) for options with multiple=True. tornado.web • New Application setting xsrf_cookie_name can be used to change the name of the XSRF cookie. This is most useful to take advantage of the __Host- cookie prefix. • RequestHandler.get_secure_cookie and RequestHandler.set_secure_cookie (and related methods and attributes) have been renamed to get_signed_cookie and set_signed_cookie. This makes it more explicit what kind of security is provided, and avoids confusion with the Secure cookie attribute and __Secure- cookie prefix. The old names remain supported as deprecated aliases. • RequestHandler.clear_cookie now accepts all keyword arguments accepted by set_cookie. In some cases clearing a cookie requires certain arguments to be passed the same way in which it was set. • RequestHandler.clear_all_cookies now accepts additional keyword arguments for the same reason as clear_cookie. However, since the requirements for additional arguments mean that it cannot reliably clear all cookies, this method is now deprecated. tornado.websocket • It is now much faster (no longer quadratic) to receive large messages that have been split into many fragments. • websocket_connect now accepts a resolver parameter. tornado.wsgi • WSGIContainer now accepts an executor parameter which can be used to run the WSGI application on a thread pool. Tornado Documentation, Release 6.3.3 6.9.5 What’s new in Tornado 6.2.0 Jul 3, 2022 Deprecation notice • April 2023 update: Python 3.12 reversed some of the changes described below. 
In Tornado 6.3, AsyncTestCase, AsyncHTTPTestCase, and the behavior of the IOLoop constructor related to the make_current parameter are no longer deprecated. • Python 3.10 has begun the process of significant changes to the APIs for managing the event loop. Calls to methods such as asyncio.get_event_loop may now raise DeprecationWarning if no event loop is running. This has significant impact on the patterns for initializing applications, and in particular invalidates patterns that have long been the norm in Tornado’s documentation and actual usage. In the future (with some as-yet- unspecified future version of Python), the old APIs will be removed. The new recommended pattern is to start the event loop with asyncio.run. More detailed migration guides will be coming in the future. – The IOLoop constructor is deprecated unless the make_current=False argument is used. Use IOLoop. current when the loop is already running instead. – AsyncTestCase (and AsyncHTTPTestCase) are deprecated. Use unittest. IsolatedAsyncioTestCase instead. – Multi-process TCPServer.bind/TCPServer.start is deprecated. See TCPServer docs for supported alternatives. – AnyThreadEventLoopPolicy is deprecated. This class controls the creation of the “current” event loop so it will be removed when that concept is no longer supported. – IOLoop.make_current and IOLoop.clear_current are deprecated. In the future the concept of a “current” event loop as distinct from one that is currently running will be removed. • TwistedResolver and CaresResolver are deprecated and will be removed in Tornado 7.0. General changes • The minimum supported Python version is now 3.7. • Wheels are now published with the Python stable ABI (abi3) for compatibility across versions of Python. • SSL certificate verification and hostname checks are now enabled by default in more places (primarily in client- side usage of SSLIOStream). • Various improvements to type hints throughout the package. 
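The recommended asyncio.run startup pattern mentioned above can be sketched with the standard library alone. The names here are illustrative; in a real Tornado app, main() would build the Application, call listen(), and then wait until shutdown:

```python
import asyncio

# One asyncio.run call owns the event loop for the program's lifetime,
# replacing older patterns built around a "current" event loop.
async def serve(shutdown: asyncio.Event) -> str:
    # stand-in for app.listen(...) followed by waiting until shutdown
    await shutdown.wait()
    return "stopped"

async def main() -> str:
    shutdown = asyncio.Event()
    task = asyncio.create_task(serve(shutdown))
    shutdown.set()  # simulate a shutdown signal arriving
    return await task

result = asyncio.run(main())
print(result)  # stopped
```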
• CI has moved from Travis and Appveyor to Github Actions.

tornado.gen

• Fixed a bug in which WaitIterator.current_index could be incorrect.
• tornado.gen.TimeoutError is now an alias for asyncio.TimeoutError.

tornado.http1connection

• max_body_size may now be set to zero to disallow a non-empty body.
• Content-Encoding: gzip is now recognized case-insensitively.

tornado.httpclient

• curl_httpclient now supports non-ASCII (ISO-8859-1) header values, same as simple_httpclient.

tornado.ioloop

• PeriodicCallback now understands coroutines and will not start multiple copies if a previous invocation runs too long.
• PeriodicCallback now accepts datetime.timedelta objects in addition to numbers of milliseconds.
• Avoid logging “Event loop is closed” during shutdown-related race conditions.
• Tornado no longer calls logging.basicConfig when starting an IOLoop; this has been unnecessary since Python 3.2 added a logger of last resort.
• The IOLoop constructor now accepts an asyncio_loop keyword argument to initialize with a specified asyncio event loop.
• It is now possible to construct an IOLoop on one thread (with make_current=False) and start it on a different thread.

tornado.iostream

• SSLIOStream now supports reading more than 2GB at a time.
• IOStream.write now supports typed memoryview objects.

tornado.locale

• load_gettext_translations no longer logs errors when language directories exist but do not contain the expected file.

tornado.netutil

• is_valid_ip no longer raises exceptions when the input is too long.
• The default resolver now uses the same methods (and thread pool) as asyncio.

tornado.tcpserver

• TCPServer.listen now supports more arguments to pass through to netutil.bind_sockets.

tornado.testing

• bind_unused_port now takes an optional address argument.
• Wrapped test methods now include the __wrapped__ attribute.
tornado.web • When using a custom StaticFileHandler subclass, the reset() method is now called on this subclass instead of the base class. • Improved handling of the Accept-Language header. • Application.listen now supports more arguments to pass through to netutil.bind_sockets. tornado.websocket • WebSocketClientConnection.write_message now accepts dict arguments for consistency with WebSocketHandler.write_message. • WebSocketClientConnection.write_message now raises an exception as documented if the connection is already closed. 6.9.6 What’s new in Tornado 6.1.0 Oct 30, 2020 Deprecation notice • This is the last release of Tornado to support Python 3.5. Future versions will require Python 3.6 or newer. General changes • Windows support has been improved. Tornado is now compatible with the proactor event loop (which became the default in Python 3.8) by automatically falling back to running a selector in a second thread. This means that it is no longer necessary to explicitly configure a selector event loop, although doing so may improve performance. This does not change the fact that Tornado is significantly less scalable on Windows than on other platforms. • Binary wheels are now provided for Windows, MacOS, and Linux (amd64 and arm64). Tornado Documentation, Release 6.3.3 tornado.gen • coroutine now has better support for the Python 3.7+ contextvars module. In particular, the ContextVar. reset method is now supported. tornado.http1connection • HEAD requests to handlers that used chunked encoding no longer produce malformed output. • Certain kinds of malformed gzip data no longer cause an infinite loop. tornado.httpclient • Setting decompress_response=False now works correctly with curl_httpclient. • Mixing requests with and without proxies works correctly in curl_httpclient (assuming the version of pycurl is recent enough). • A default User-Agent of Tornado/$VERSION is now used if the user_agent parameter is not specified. 
• After a 303 redirect, tornado.simple_httpclient always uses GET. Previously this would use GET if the original request was a POST and would otherwise reuse the original request method. For curl_httpclient, the behavior depends on the version of libcurl (with the most recent versions using GET after 303 regardless of the original method). • Setting request_timeout and/or connect_timeout to zero is now supported to disable the timeout. tornado.httputil • Header parsing is now faster. • parse_body_arguments now accepts incompletely-escaped non-ASCII inputs. tornado.iostream • ssl.CertificateError during the SSL handshake is now handled correctly. • Reads that are resolved while the stream is closing are now handled correctly. tornado.log • When colored logging is enabled, logging.CRITICAL messages are now recognized and colored magenta. Tornado Documentation, Release 6.3.3 tornado.netutil • EADDRNOTAVAIL is now ignored when binding to localhost with IPv6. This error is common in docker. tornado.platform.asyncio • AnyThreadEventLoopPolicy now also configures a selector event loop for these threads (the proactor event loop only works on the main thread) tornado.platform.auto • The set_close_exec function has been removed. tornado.testing • ExpectLog now has a level argument to ensure that the given log level is enabled. tornado.web • RedirectHandler.get now accepts keyword arguments. • When sending 304 responses, more headers (including Allow) are now preserved. • reverse_url correctly handles escaped characters in the regex route. • Default Etag headers are now generated with SHA-512 instead of MD5. tornado.websocket • The ping_interval timer is now stopped when the connection is closed. • websocket_connect now raises an error when it encounters a redirect instead of hanging. 6.9.7 What’s new in Tornado 6.0.4 Mar 3, 2020 General changes • Binary wheels are now available for Python 3.8 on Windows. Note that it is still necessary to use asyncio. 
set_event_loop_policy(asyncio.WindowsSelectorEventLoopPolicy()) for this platform/version. Tornado Documentation, Release 6.3.3 Bug fixes • Fixed an issue in IOStream (introduced in 6.0.0) that resulted in StreamClosedError being incorrectly raised if a stream is closed mid-read but there is enough buffered data to satisfy the read. • AnyThreadEventLoopPolicy now always uses the selector event loop on Windows. 6.9.8 What’s new in Tornado 6.0.3 Jun 22, 2019 Bug fixes • gen.with_timeout always treats asyncio.CancelledError as a quiet_exception (this improves com- patibility with Python 3.8, which changed CancelledError to a BaseException). • IOStream now checks for closed streams earlier, avoiding spurious logged errors in some situations (mainly with websockets). 6.9.9 What’s new in Tornado 6.0.2 Mar 23, 2019 Bug fixes • WebSocketHandler.set_nodelay works again. • Accessing HTTPResponse.body now returns an empty byte string instead of raising ValueError for error responses that don’t have a body (it returned None in this case in Tornado 5). 6.9.10 What’s new in Tornado 6.0.1 Mar 3, 2019 Bug fixes • Fixed issues with type annotations that caused errors while importing Tornado on Python 3.5.2. 6.9.11 What’s new in Tornado 6.0 Mar 1, 2019 Backwards-incompatible changes • Python 2.7 and 3.4 are no longer supported; the minimum supported Python version is 3.5.2. • APIs deprecated in Tornado 5.1 have been removed. This includes the tornado.stack_context module and most callback arguments throughout the package. All removed APIs emitted DeprecationWarning when used in Tornado 5.1, so running your application with the -Wd Python command-line flag or the environment variable PYTHONWARNINGS=d should tell you whether your application is ready to move to Tornado 6.0. 
Tornado Documentation, Release 6.3.3 • .WebSocketHandler.get is now a coroutine and must be called accordingly in any subclasses that override this method (but note that overriding get is not recommended; either prepare or open should be used instead). General changes • Tornado now includes type annotations compatible with mypy. These annotations will be used when type- checking your application with mypy, and may be usable in editors and other tools. • Tornado now uses native coroutines internally, improving performance. tornado.auth • All callback arguments in this package have been removed. Use the coroutine interfaces instead. • The OAuthMixin._oauth_get_user method has been removed. Override _oauth_get_user_future in- stead. tornado.concurrent • The callback argument to run_on_executor has been removed. • return_future has been removed. tornado.gen • Some older portions of this module have been removed. This includes engine, YieldPoint, Callback, Wait, WaitAll, MultiYieldPoint, and Task. • Functions decorated with @gen.coroutine no longer accept callback arguments. tornado.httpclient • The behavior of raise_error=False has changed. Now only suppresses the errors raised due to completed responses with non-200 status codes (previously it suppressed all errors). • The callback argument to AsyncHTTPClient.fetch has been removed. tornado.httputil • HTTPServerRequest.write has been removed. Use the methods of request.connection instead. • Unrecognized Content-Encoding values now log warnings only for content types that we would otherwise attempt to parse. Tornado Documentation, Release 6.3.3 tornado.ioloop • IOLoop.set_blocking_signal_threshold, IOLoop.set_blocking_log_threshold, IOLoop. log_stack, and IOLoop.handle_callback_exception have been removed. • Improved performance of IOLoop.add_callback. tornado.iostream • All callback arguments in this module have been removed except for BaseIOStream.set_close_callback. 
• streaming_callback arguments to BaseIOStream.read_bytes and BaseIOStream.read_until_close have been removed. • Eliminated unnecessary logging of “Errno 0”. tornado.log • Log files opened by this module are now explicitly set to UTF-8 encoding. tornado.netutil • The results of getaddrinfo are now sorted by address family to avoid partial failures and deadlocks. tornado.platform.twisted • TornadoReactor and TwistedIOLoop have been removed. tornado.simple_httpclient • The default HTTP client now supports the network_interface request argument to specify the source IP for the connection. • If a server returns a 3xx response code without a Location header, the response is raised or returned directly instead of trying and failing to follow the redirect. • When following redirects, methods other than POST will no longer be transformed into GET requests. 301 (per- manent) redirects are now treated the same way as 302 (temporary) and 303 (see other) redirects in this respect. • Following redirects now works with body_producer. tornado.stack_context • The tornado.stack_context module has been removed. Tornado Documentation, Release 6.3.3 tornado.tcpserver • TCPServer.start now supports a max_restarts argument (same as fork_processes). tornado.testing • AsyncHTTPTestCase now drops all references to the Application during tearDown, allowing its memory to be reclaimed sooner. • AsyncTestCase now cancels all pending coroutines in tearDown, in an effort to reduce warnings from the python runtime about coroutines that were not awaited. Note that this may cause asyncio.CancelledError to be logged in other places. Coroutines that expect to be running at test shutdown may need to catch this exception. tornado.web • The asynchronous decorator has been removed. • The callback argument to RequestHandler.flush has been removed. • StaticFileHandler now supports large negative values for the Range header and returns an appropriate error for end > start. 
• It is now possible to set expires_days in xsrf_cookie_kwargs.

tornado.websocket

• Pings and other messages sent while the connection is closing are now silently dropped instead of logging exceptions.
• Errors raised by open() are now caught correctly when this method is a coroutine.

tornado.wsgi

• WSGIApplication and WSGIAdapter have been removed.

6.9.12 What’s new in Tornado 5.1.1

Sep 16, 2018

Bug fixes

• Fixed a case in which the Future returned by RequestHandler.finish could fail to resolve.
• The TwitterMixin.authenticate_redirect method works again.
• Improved error handling in the tornado.auth module, fixing hanging requests when a network or other error occurs.

6.9.13 What’s new in Tornado 5.1

July 12, 2018

Deprecation notice

• Tornado 6.0 will drop support for Python 2.7 and 3.4. The minimum supported Python version will be 3.5.2.
• The tornado.stack_context module is deprecated and will be removed in Tornado 6.0. The reason for this is that it is not feasible to provide this module’s semantics in the presence of async def native coroutines. ExceptionStackContext is mainly obsolete thanks to coroutines. StackContext lacks a direct replacement although the new contextvars package (in the Python standard library beginning in Python 3.7) may be an alternative.
• Callback-oriented code often relies on ExceptionStackContext to handle errors and prevent leaked connections. In order to avoid the risk of silently introducing subtle leaks (and to consolidate all of Tornado’s interfaces behind the coroutine pattern), callback arguments throughout the package are deprecated and will be removed in version 6.0. All functions that had a callback argument removed now return a Future which should be used instead.
• Where possible, deprecation warnings are emitted when any of these deprecated interfaces is used. However, Python does not display deprecation warnings by default.
To prepare your application for Tornado 6.0, run Python with the -Wd argument or set the environment variable PYTHONWARNINGS to d. If your application runs on Python 3 without deprecation warnings, it should be able to move to Tornado 6.0 without disruption. tornado.auth • OAuthMixin._oauth_get_user_future may now be a native coroutine. • All callback arguments in this package are deprecated and will be removed in 6.0. Use the coroutine interfaces instead. • The OAuthMixin._oauth_get_user method is deprecated and will be removed in 6.0. Override _oauth_get_user_future instead. tornado.autoreload • The command-line autoreload wrapper is now preserved if an internal autoreload fires. • The command-line wrapper no longer starts duplicated processes on windows when combined with internal autoreload. tornado.concurrent • run_on_executor now returns Future objects that are compatible with await. • The callback argument to run_on_executor is deprecated and will be removed in 6.0. • return_future is deprecated and will be removed in 6.0. Tornado Documentation, Release 6.3.3 tornado.gen • Some older portions of this module are deprecated and will be removed in 6.0. This includes engine, YieldPoint, Callback, Wait, WaitAll, MultiYieldPoint, and Task. • Functions decorated with @gen.coroutine will no longer accept callback arguments in 6.0. tornado.httpclient • The behavior of raise_error=False is changing in 6.0. Currently it suppresses all errors; in 6.0 it will only suppress the errors raised due to completed responses with non-200 status codes. • The callback argument to AsyncHTTPClient.fetch is deprecated and will be removed in 6.0. • tornado.httpclient.HTTPError has been renamed to HTTPClientError to avoid ambiguity in code that also has to deal with tornado.web.HTTPError. The old name remains as an alias. • tornado.curl_httpclient now supports non-ASCII characters in username and password arguments. 
• .HTTPResponse.request_time now behaves consistently across simple_httpclient and curl_httpclient, excluding time spent in the max_clients queue in both cases (previously this time was included in simple_httpclient but excluded in curl_httpclient). In both cases the time is now computed using a monotonic clock where available. • HTTPResponse now has a start_time attribute recording a wall-clock (time.time) timestamp at which the request started (after leaving the max_clients queue if applicable). tornado.httputil • parse_multipart_form_data now recognizes non-ASCII filenames in RFC 2231/5987 (filename*=) for- mat. • HTTPServerRequest.write is deprecated and will be removed in 6.0. Use the methods of request. connection instead. • Malformed HTTP headers are now logged less noisily. tornado.ioloop • PeriodicCallback now supports a jitter argument to randomly vary the timeout. • IOLoop.set_blocking_signal_threshold, IOLoop.set_blocking_log_threshold, IOLoop. log_stack, and IOLoop.handle_callback_exception are deprecated and will be removed in 6.0. • Fixed a KeyError in IOLoop.close when IOLoop objects are being opened and closed in multiple threads. Tornado Documentation, Release 6.3.3 tornado.iostream • All callback arguments in this module are deprecated except for BaseIOStream.set_close_callback. They will be removed in 6.0. • streaming_callback arguments to BaseIOStream.read_bytes and BaseIOStream.read_until_close are deprecated and will be removed in 6.0. tornado.netutil • Improved compatibility with GNU Hurd. tornado.options • tornado.options.parse_config_file now allows setting options to strings (which will be parsed the same way as tornado.options.parse_command_line) in addition to the specified type for the option. tornado.platform.twisted • TornadoReactor and TwistedIOLoop are deprecated and will be removed in 6.0. Instead, Tornado will always use the asyncio event loop and twisted can be configured to do so as well. 
tornado.stack_context

• The tornado.stack_context module is deprecated and will be removed in 6.0.

tornado.testing

• AsyncHTTPTestCase.fetch now takes a raise_error argument. This argument has the same semantics as AsyncHTTPClient.fetch, but defaults to false because tests often need to deal with non-200 responses (and for backwards-compatibility).
• The AsyncTestCase.stop and AsyncTestCase.wait methods are deprecated.

tornado.web

• New method RequestHandler.detach can be used from methods that are not decorated with @asynchronous (previously the decorator was required in order to use self.request.connection.detach()).
• RequestHandler.finish and RequestHandler.render now return Futures that can be used to wait for the last part of the response to be sent to the client.
• FallbackHandler now calls on_finish for the benefit of subclasses that may have overridden it.
• The asynchronous decorator is deprecated and will be removed in 6.0.
• The callback argument to RequestHandler.flush is deprecated and will be removed in 6.0.

tornado.websocket

• When compression is enabled, memory limits now apply to the post-decompression size of the data, protecting against DoS attacks.
• websocket_connect now supports subprotocols.
• WebSocketHandler and WebSocketClientConnection now have selected_subprotocol attributes to see the subprotocol in use.
• The WebSocketHandler.select_subprotocol method is now called with an empty list instead of a list containing an empty string if no subprotocols were requested by the client.
• WebSocketHandler.open may now be a coroutine.
• The data argument to WebSocketHandler.ping is now optional.
• Client-side websocket connections no longer buffer more than one message in memory at a time.
• Exception logging now uses RequestHandler.log_exception.

tornado.wsgi

• WSGIApplication and WSGIAdapter are deprecated and will be removed in Tornado 6.0.
6.9.14 What’s new in Tornado 5.0.2 Apr 7, 2018 Bug fixes • Fixed a memory leak when IOLoop objects are created and destroyed. • If AsyncTestCase.get_new_ioloop returns a reference to a preexisting event loop (typically when it has been overridden to return IOLoop.current()), the test’s tearDown method will not close this loop. • Fixed a confusing error message when the synchronous HTTPClient fails to initialize because an event loop is already running. • PeriodicCallback no longer executes twice in a row due to backwards clock adjustments. 6.9.15 What’s new in Tornado 5.0.1 Mar 18, 2018 Bug fix • This release restores support for versions of Python 3.4 prior to 3.4.4. This is important for compatibility with Debian Jessie which has 3.4.2 as its version of Python 3. Tornado Documentation, Release 6.3.3 6.9.16 What’s new in Tornado 5.0 Mar 5, 2018 Highlights • The focus of this release is improving integration with asyncio. On Python 3, the IOLoop is always a wrapper around the asyncio event loop, and asyncio.Future and asyncio.Task are used instead of their Tornado counterparts. This means that libraries based on asyncio can be mixed relatively seamlessly with those using Tornado. While care has been taken to minimize the disruption from this change, code changes may be required for compatibility with Tornado 5.0, as detailed in the following section. • Tornado 5.0 supports Python 2.7.9+ and 3.4+. Python 2.7 and 3.4 are deprecated and support for them will be removed in Tornado 6.0, which will require Python 3.5+. Backwards-compatibility notes • Python 3.3 is no longer supported. • Versions of Python 2.7 that predate the ssl module update are no longer supported. (The ssl module was updated in version 2.7.9, although in some distributions the updates are present in builds with a lower version number. 
Tornado requires ssl.SSLContext, ssl.create_default_context, and ssl.match_hostname) • Versions of Python 3.5 prior to 3.5.2 are no longer supported due to a change in the async iterator protocol in that version. • The trollius project (asyncio backported to Python 2) is no longer supported. • tornado.concurrent.Future is now an alias for asyncio.Future when running on Python 3. This results in a number of minor behavioral changes: – Future objects can only be created while there is a current IOLoop – The timing of callbacks scheduled with Future.add_done_callback has changed. tornado. concurrent.future_add_done_callback can be used to make the behavior more like older versions of Tornado (but not identical). Some of these changes are also present in the Python 2 version of tornado. concurrent.Future to minimize the difference between Python 2 and 3. – Cancellation is now partially supported, via asyncio.Future.cancel. A canceled Future can no longer have its result set. Applications that handle Future objects directly may want to use tornado. concurrent.future_set_result_unless_cancelled. In native coroutines, cancellation will cause an exception to be raised in the coroutine. – The exc_info and set_exc_info methods are no longer present. Use tornado.concurrent. future_set_exc_info to replace the latter, and raise the exception with result to replace the former. • io_loop arguments to many Tornado functions have been removed. Use IOLoop.current() instead of passing IOLoop objects explicitly. • On Python 3, IOLoop is always a wrapper around the asyncio event loop. IOLoop.configure is effectively removed on Python 3 (for compatibility, it may be called to redundantly specify the asyncio-backed IOLoop) • IOLoop.instance is now a deprecated alias for IOLoop.current. Applications that need the cross-thread communication behavior facilitated by IOLoop.instance should use their own global variable instead. 
Tornado Documentation, Release 6.3.3 Other notes • The futures (concurrent.futures backport) package is now required on Python 2.7. • The certifi and backports.ssl-match-hostname packages are no longer required on Python 2.7. • Python 3.6 or higher is recommended, because it features more efficient garbage collection of asyncio.Future objects. tornado.auth • GoogleOAuth2Mixin now uses a newer set of URLs. tornado.autoreload • On Python 3, uses __main__.__spec to more reliably reconstruct the original command line and avoid modi- fying PYTHONPATH. • The io_loop argument to tornado.autoreload.start has been removed. tornado.concurrent • tornado.concurrent.Future is now an alias for asyncio.Future when running on Python 3. See “Backwards-compatibility notes” for more. • Setting the result of a Future no longer blocks while callbacks are being run. Instead, the callbacks are scheduled on the next IOLoop iteration. • The deprecated alias tornado.concurrent.TracebackFuture has been removed. • tornado.concurrent.chain_future now works with all three kinds of Futures (Tornado, asyncio, and concurrent.futures) • The io_loop argument to tornado.concurrent.run_on_executor has been removed. • New functions future_set_result_unless_cancelled, future_set_exc_info, and future_add_done_callback help mask the difference between asyncio.Future and Tornado’s previ- ous Future implementation. tornado.curl_httpclient • Improved debug logging on Python 3. • The time_info response attribute now includes appconnect in addition to other measurements. • Closing a CurlAsyncHTTPClient now breaks circular references that could delay garbage collection. • The io_loop argument to the CurlAsyncHTTPClient constructor has been removed. Tornado Documentation, Release 6.3.3 tornado.gen • tornado.gen.TimeoutError is now an alias for tornado.util.TimeoutError. • Leak detection for Futures created by this module now attributes them to their proper caller instead of the coroutine machinery. 
• Several circular references that could delay garbage collection have been broken up.
• On Python 3, asyncio.Task is used instead of the Tornado coroutine runner. This improves compatibility with some asyncio libraries and adds support for cancellation.
• The io_loop arguments to YieldFuture and with_timeout have been removed.

tornado.httpclient
• The io_loop argument to all AsyncHTTPClient constructors has been removed.

tornado.httpserver
• It is now possible for a client to reuse a connection after sending a chunked request.
• If a client sends a malformed request, the server now responds with a 400 error instead of simply closing the connection.
• Content-Length and Transfer-Encoding headers are no longer sent with 1xx or 204 responses (this was already true of 304 responses).
• When closing a connection to a HTTP/1.1 client, the Connection: close header is sent with the response.
• The io_loop argument to the HTTPServer constructor has been removed.
• If more than one X-Scheme or X-Forwarded-Proto header is present, only the last is used.

tornado.httputil
• The string representation of HTTPServerRequest objects (which are sometimes used in log messages) no longer includes the request headers.
• New function qs_to_qsl converts the result of urllib.parse.parse_qs to name-value pairs.

tornado.ioloop
• tornado.ioloop.TimeoutError is now an alias for tornado.util.TimeoutError.
• IOLoop.instance is now a deprecated alias for IOLoop.current.
• IOLoop.install and IOLoop.clear_instance are deprecated.
• The IOLoop.initialized method has been removed.
• On Python 3, the asyncio-backed IOLoop is always used and alternative IOLoop implementations cannot be configured. IOLoop.current and related methods pass through to asyncio.get_event_loop.
• run_sync cancels its argument on a timeout. This results in better stack traces (and avoids log messages about leaks) in native coroutines.
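The qs_to_qsl helper mentioned under tornado.httputil above can be approximated with the standard library alone. This is a minimal re-implementation of the described behavior, not Tornado's actual code:

```python
from urllib.parse import parse_qs

def qs_to_qsl(qs):
    # Flatten the parse_qs output ({name: [v1, v2, ...]}) into
    # (name, value) pairs, as the qs_to_qsl note above describes.
    for name, values in qs.items():
        for value in values:
            yield name, value

pairs = list(qs_to_qsl(parse_qs("a=1&b=2&a=3")))
print(pairs)  # [('a', '1'), ('a', '3'), ('b', '2')]
```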
• New methods IOLoop.run_in_executor and IOLoop.set_default_executor make it easier to run functions in other threads from native coroutines (since concurrent.futures.Future does not support await).
• PollIOLoop (the default on Python 2) attempts to detect misuse of IOLoop instances across os.fork.
• The io_loop argument to PeriodicCallback has been removed.
• It is now possible to create a PeriodicCallback in one thread and start it in another without passing an explicit event loop.
• The IOLoop.set_blocking_signal_threshold and IOLoop.set_blocking_log_threshold methods are deprecated because they are not implemented for the asyncio event loop. Use the PYTHONASYNCIODEBUG=1 environment variable instead.
• IOLoop.clear_current now works if it is called before any current loop is established.

tornado.iostream
• The io_loop argument to the IOStream constructor has been removed.
• New method BaseIOStream.read_into provides a minimal-copy alternative to BaseIOStream.read_bytes.
• BaseIOStream.write is now much more efficient for very large amounts of data.
• Fixed some cases in which IOStream.error could be inaccurate.
• Writing a memoryview can no longer result in “BufferError: Existing exports of data: object cannot be re-sized”.

tornado.locks
• As a side effect of the Future changes, waiters are always notified asynchronously with respect to Condition.notify.

tornado.netutil
• The default Resolver now uses IOLoop.run_in_executor. ExecutorResolver, BlockingResolver, and ThreadedResolver are deprecated.
• The io_loop arguments to add_accept_handler, ExecutorResolver, and ThreadedResolver have been removed.
• add_accept_handler returns a callable which can be used to remove all handlers that were added.
• OverrideResolver now accepts per-family overrides.

tornado.options
• Duplicate option names are now detected properly whether they use hyphens or underscores.
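The IOLoop.run_in_executor method noted above mirrors the executor machinery asyncio exposes directly. A self-contained sketch of the pattern using asyncio's loop.run_in_executor in place of Tornado's wrapper:

```python
import asyncio
import concurrent.futures

def blocking_work(n):
    # Stand-in for a blocking function that must not run on the event loop.
    return n * n

async def main():
    loop = asyncio.get_running_loop()
    # The callable runs on a thread pool and the result is awaitable,
    # which is the behavior IOLoop.run_in_executor provides in Tornado.
    with concurrent.futures.ThreadPoolExecutor() as pool:
        return await loop.run_in_executor(pool, blocking_work, 7)

print(asyncio.run(main()))  # 49
```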
tornado.platform.asyncio
• AsyncIOLoop and AsyncIOMainLoop are now used automatically when appropriate; referencing them explicitly is no longer recommended.
• Starting an IOLoop or making it current now also sets the asyncio event loop for the current thread. Closing an IOLoop closes the corresponding asyncio event loop.
• to_tornado_future and to_asyncio_future are deprecated since they are now no-ops.
• AnyThreadEventLoopPolicy can now be used to easily allow the creation of event loops on any thread (similar to Tornado’s prior policy).

tornado.platform.caresresolver
• The io_loop argument to CaresResolver has been removed.

tornado.platform.twisted
• The io_loop arguments to TornadoReactor, TwistedResolver, and tornado.platform.twisted.install have been removed.

tornado.process
• The io_loop argument to the Subprocess constructor and Subprocess.initialize has been removed.

tornado.routing
• A default 404 response is now generated if no delegate is found for a request.

tornado.simple_httpclient
• The io_loop argument to SimpleAsyncHTTPClient has been removed.
• TLS is now configured according to ssl.create_default_context by default.

tornado.tcpclient
• The io_loop argument to the TCPClient constructor has been removed.
• TCPClient.connect has a new timeout argument.

tornado.tcpserver
• The io_loop argument to the TCPServer constructor has been removed.
• TCPServer no longer logs EBADF errors during shutdown.

tornado.testing
• The deprecated tornado.testing.get_unused_port and tornado.testing.LogTrapTestCase have been removed.
• AsyncHTTPTestCase.fetch now supports absolute URLs.
• AsyncHTTPTestCase.fetch now connects to 127.0.0.1 instead of localhost to be more robust against faulty ipv6 configurations.

tornado.util
• tornado.util.TimeoutError replaces tornado.gen.TimeoutError and tornado.ioloop.TimeoutError.
• Configurable now supports configuration at multiple levels of an inheritance hierarchy.

tornado.web
• RequestHandler.set_status no longer requires that the given status code appear in http.client.responses.
• It is no longer allowed to send a body with 1xx or 204 responses.
• Exception handling now breaks up reference cycles that could delay garbage collection.
• RedirectHandler now copies any query arguments from the request to the redirect location.
• If both If-None-Match and If-Modified-Since headers are present in a request to StaticFileHandler, the latter is now ignored.

tornado.websocket
• The C accelerator now operates on multiple bytes at a time to improve performance.
• Requests with invalid websocket headers now get a response with status code 400 instead of a closed connection.
• WebSocketHandler.write_message now raises WebSocketClosedError if the connection closes while the write is in progress.
• The io_loop argument to websocket_connect has been removed.

6.9.17 What’s new in Tornado 4.5.3

Jan 6, 2018

tornado.curl_httpclient
• Improved debug logging on Python 3.

tornado.httpserver
• Content-Length and Transfer-Encoding headers are no longer sent with 1xx or 204 responses (this was already true of 304 responses).
• Reading chunked requests no longer leaves the connection in a broken state.

tornado.iostream
• Writing a memoryview can no longer result in “BufferError: Existing exports of data: object cannot be re-sized”.

tornado.options
• Duplicate option names are now detected properly whether they use hyphens or underscores.

tornado.testing
• AsyncHTTPTestCase.fetch now uses 127.0.0.1 instead of localhost, improving compatibility with systems that have partially-working ipv6 stacks.

tornado.web
• It is no longer allowed to send a body with 1xx or 204 responses.

tornado.websocket
• Requests with invalid websocket headers now get a response with status code 400 instead of a closed connection.
6.9.18 What’s new in Tornado 4.5.2

Aug 27, 2017

Bug Fixes
• Tornado now sets the FD_CLOEXEC flag on all file descriptors it creates. This prevents hanging client connections and resource leaks when the tornado.autoreload module (or Application(debug=True)) is used.

6.9.19 What’s new in Tornado 4.5.1

Apr 20, 2017

tornado.log
• Improved detection of libraries for colorized logging.

tornado.httputil
• url_concat once again treats None as equivalent to an empty sequence.

6.9.20 What’s new in Tornado 4.5

Apr 16, 2017

Backwards-compatibility warning
• The tornado.websocket module now imposes a limit on the size of incoming messages, which defaults to 10MiB.

New module
• tornado.routing provides a more flexible routing system than the one built in to Application.

General changes
• Reduced the number of circular references, reducing memory usage and improving performance.

tornado.auth
• The tornado.auth module has been updated for compatibility with a change to Facebook’s access_token endpoint. This includes both the changes initially released in Tornado 4.4.3 and an additional change to support the session_expires field in the new format. The session_expires field is currently a string; it should be accessed as int(user['session_expires']) because it will change from a string to an int in Tornado 5.0.

tornado.autoreload
• Autoreload is now compatible with the asyncio event loop.
• Autoreload no longer attempts to close the IOLoop and all registered file descriptors before restarting; it relies on the CLOEXEC flag being set instead.

tornado.concurrent
• Suppressed some “‘NoneType’ object not callable” messages that could be logged at shutdown.

tornado.gen
• yield None is now equivalent to yield gen.moment. moment is deprecated. This improves compatibility with asyncio.
• Fixed an issue in which a generator object could be garbage collected prematurely (most often when weak references are used).
• New function is_coroutine_function identifies functions wrapped by coroutine or engine.

tornado.http1connection
• The Transfer-Encoding header is now parsed case-insensitively.

tornado.httpclient
• SimpleAsyncHTTPClient now follows 308 redirects.
• CurlAsyncHTTPClient will no longer accept protocols other than http and https. To override this, set pycurl.PROTOCOLS and pycurl.REDIR_PROTOCOLS in a prepare_curl_callback.
• CurlAsyncHTTPClient now supports digest authentication for proxies (in addition to basic auth) via the new proxy_auth_mode argument.
• The minimum supported version of libcurl is now 7.22.0.

tornado.httpserver
• HTTPServer now accepts the keyword argument trusted_downstream which controls the parsing of X-Forwarded-For headers. This header may be a list or set of IP addresses of trusted proxies which will be skipped in the X-Forwarded-For list.
• The no_keep_alive argument works again.

tornado.httputil
• url_concat correctly handles fragments and existing query arguments.

tornado.ioloop
• Fixed 100% CPU usage after a callback returns an empty list or dict.
• IOLoop.add_callback now uses a lockless implementation which makes it safe for use from __del__ methods. This improves performance of calls to add_callback from the IOLoop thread, and slightly decreases it for calls from other threads.

tornado.iostream
• memoryview objects are now permitted as arguments to write.
• The internal memory buffers used by IOStream now use bytearray instead of a list of bytes, improving performance.
• Futures returned by write are no longer orphaned if a second call to write occurs before the previous one is finished.

tornado.log
• Colored log output is now supported on Windows if the colorama library is installed and the application calls colorama.init() at startup.
• The signature of the LogFormatter constructor has been changed to make it compatible with logging.config.dictConfig.
tornado.netutil
• Worked around an issue that caused “LookupError: unknown encoding: latin1” errors on Solaris.

tornado.process
• Subprocess no longer causes “subprocess still running” warnings on Python 3.6.
• Improved error handling in cpu_count.

tornado.tcpclient
• TCPClient now supports a source_ip and source_port argument.
• Improved error handling for environments where IPv6 support is incomplete.

tornado.tcpserver
• TCPServer.handle_stream implementations may now be native coroutines.
• Stopping a TCPServer twice no longer raises an exception.

tornado.web
• RedirectHandler now supports substituting parts of the matched URL into the redirect location using str.format syntax.
• New methods RequestHandler.render_linked_js, RequestHandler.render_embed_js, RequestHandler.render_linked_css, and RequestHandler.render_embed_css can be overridden to customize the output of UIModule.

tornado.websocket
• WebSocketHandler.on_message implementations may now be coroutines. New messages will not be processed until the previous on_message coroutine has finished.
• The websocket_ping_interval and websocket_ping_timeout application settings can now be used to enable a periodic ping of the websocket connection, allowing dropped connections to be detected and closed.
• The new websocket_max_message_size setting defaults to 10MiB. The connection will be closed if messages larger than this are received.
• Headers set by RequestHandler.prepare or RequestHandler.set_default_headers are now sent as a part of the websocket handshake.
• Return values from WebSocketHandler.get_compression_options may now include the keys compression_level and mem_level to set gzip parameters. The default compression level is now 6 instead of 9.

Demos
• A new file upload demo is available in the file_upload directory.
• A new TCPClient and TCPServer demo is available in the tcpecho directory.
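The RedirectHandler substitution described above (captured URL groups fed into the redirect target with str.format) can be sketched outside Tornado with the standard re module. The pattern and target below are hypothetical examples, not Tornado's internal code:

```python
import re

# A hypothetical route regex and redirect target, using the same
# str.format substitution style the RedirectHandler note describes.
pattern = re.compile(r"/photos/(\d+)")
target = "/images/{0}"

match = pattern.match("/photos/42")
redirect_to = target.format(*match.groups())
print(redirect_to)  # /images/42
```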
• Minor updates have been made to several existing demos, including updates to more recent versions of jquery.

Credits
The following people contributed commits to this release:
• <NAME>
• <NAME>
• <NAME>
• Alexander
• <NAME>
• <NAME>
• <NAME>
• <NAME>
• <NAME>
• <NAME>
• <NAME>
• <NAME>
• Dario
• <NAME>
• <NAME>
• <NAME>
• JZQT
• <NAME>
• <NAME>
• Leynos
• <NAME>
• <NAME>
• Min RK
• <NAME>
• Ping
• <NAME>
• <NAME>
• <NAME>
• <NAME>
• <NAME>
• <NAME>
• TaoBeier
• <NAME>
• <NAME>
• matee
• mike820324
• stiletto
• zhimin

6.9.21 What’s new in Tornado 4.4.3

Mar 30, 2017

Bug fixes
• The tornado.auth module has been updated for compatibility with a change to Facebook’s access_token endpoint.

6.9.22 What’s new in Tornado 4.4.2

Oct 1, 2016

Security fixes
• A difference in cookie parsing between Tornado and web browsers (especially when combined with Google Analytics) could allow an attacker to set arbitrary cookies and bypass XSRF protection. The cookie parser has been rewritten to fix this attack.

Backwards-compatibility notes
• Cookies containing certain special characters (in particular semicolon and square brackets) are now parsed differently.
• If the cookie header contains a combination of valid and invalid cookies, the valid ones will be returned (older versions of Tornado would reject the entire header for a single invalid cookie).

6.9.23 What’s new in Tornado 4.4.1

Jul 23, 2016

tornado.web
• Fixed a regression in Tornado 4.4 which caused URL regexes containing backslash escapes outside capturing groups to be rejected.

6.9.24 What’s new in Tornado 4.4

Jul 15, 2016

General
• Tornado now requires Python 2.7 or 3.3+; versions 2.6 and 3.2 are no longer supported. Pypy3 is still supported even though its latest release is mainly based on Python 3.2.
• The monotonic package is now supported as an alternative to Monotime for monotonic clock support on Python 2.
tornado.curl_httpclient
• Failures in _curl_setup_request no longer cause the max_clients pool to be exhausted.
• Non-ascii header values are now handled correctly.

tornado.gen
• with_timeout now accepts any yieldable object (except YieldPoint), not just tornado.concurrent.Future.

tornado.httpclient
• The errors raised by timeouts now indicate what state the request was in; the error message is no longer simply “599 Timeout”.
• Calling repr on a tornado.httpclient.HTTPError no longer raises an error.

tornado.httpserver
• Int-like enums (including http.HTTPStatus) can now be used as status codes.
• Responses with status code 204 No Content no longer emit a Content-Length: 0 header.

tornado.ioloop
• Improved performance when there are large numbers of active timeouts.

tornado.netutil
• All included Resolver implementations raise IOError (or a subclass) for any resolution failure.

tornado.options
• Options can now be modified with subscript syntax in addition to attribute syntax.
• The special variable __file__ is now available inside config files.

tornado.simple_httpclient
• HTTP/1.0 (not 1.1) responses without a Content-Length header now work correctly.

tornado.tcpserver
• TCPServer.bind now accepts a reuse_port argument.

tornado.testing
• Test sockets now always use 127.0.0.1 instead of localhost. This avoids conflicts when the automatically-assigned port is available on IPv4 but not IPv6, or in unusual network configurations when localhost has multiple IP addresses.

tornado.web
• image/svg+xml is now on the list of compressible mime types.
• Fixed an error on Python 3 when compression is used with multiple Vary headers.

tornado.websocket
• WebSocketHandler.__init__ now uses super, which improves support for multiple inheritance.

6.9.25 What’s new in Tornado 4.3

Nov 6, 2015

Highlights
• The new async/await keywords in Python 3.5 are supported.
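The "int-like enums" note under tornado.httpserver above works because http.HTTPStatus members are IntEnum values, so they compare and convert like plain integers. A quick stdlib illustration:

```python
from http import HTTPStatus

# HTTPStatus members behave as integers, which is what allows them
# to be passed wherever a numeric status code is expected.
status = HTTPStatus.NO_CONTENT
print(int(status), status == 204, status.phrase)  # 204 True No Content
```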
In most cases, async def can be used in place of the @gen.coroutine decorator. Inside a function defined with async def, use await instead of yield to wait on an asynchronous operation. Coroutines defined with async/await will be faster than those defined with @gen.coroutine and yield, but do not support some features including Callback/Wait or the ability to yield a Twisted Deferred. See the users’ guide for more.
• The async/await keywords are also available when compiling with Cython in older versions of Python.

Deprecation notice
• This will be the last release of Tornado to support Python 2.6 or 3.2. Note that PyPy3 will continue to be supported even though it implements a mix of Python 3.2 and 3.3 features.

Installation
• Tornado has several new dependencies: ordereddict on Python 2.6, singledispatch on all Python versions prior to 3.4 (This was an optional dependency in prior versions of Tornado, and is now mandatory), and backports_abc>=0.4 on all versions prior to 3.5. These dependencies will be installed automatically when installing with pip or setup.py install. These dependencies will not be required when running on Google App Engine.
• Binary wheels are provided for Python 3.5 on Windows (32 and 64 bit).

tornado.auth
• New method OAuth2Mixin.oauth2_request can be used to make authenticated requests with an access token.
• Now compatible with callbacks that have been compiled with Cython.

tornado.autoreload
• Fixed an issue with the autoreload command-line wrapper in which imports would be incorrectly interpreted as relative.

tornado.curl_httpclient
• Fixed parsing of multi-line headers.
• allow_nonstandard_methods=True now bypasses body sanity checks, in the same way as in simple_httpclient.
• The PATCH method now allows a body without allow_nonstandard_methods=True.

tornado.gen
• WaitIterator now supports the async for statement on Python 3.5.
• @gen.coroutine can be applied to functions compiled with Cython.
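The async def style described above can be sketched with asyncio's runner standing in for Tornado's IOLoop, so the example is self-contained; fetch_value and caller are hypothetical names:

```python
import asyncio

async def fetch_value():
    # Stands in for any awaitable asynchronous operation.
    await asyncio.sleep(0)
    return "done"

async def caller():
    # In the native-coroutine style, 'await' replaces the 'yield'
    # used inside @gen.coroutine functions.
    return await fetch_value()

print(asyncio.run(caller()))  # done
```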
On python versions prior to 3.5, the backports_abc package must be installed for this functionality.
• Multi and multi_future are deprecated and replaced by a unified function multi.

tornado.httpclient
• tornado.httpclient.HTTPError is now copyable with the copy module.

tornado.httpserver
• Requests containing both Content-Length and Transfer-Encoding will be treated as an error.

tornado.httputil
• HTTPHeaders can now be pickled and unpickled.

tornado.ioloop
• IOLoop(make_current=True) now works as intended instead of raising an exception.
• The Twisted and asyncio IOLoop implementations now clear current() when they exit, like the standard IOLoops.
• IOLoop.add_callback is faster in the single-threaded case.
• IOLoop.add_callback no longer raises an error when called on a closed IOLoop, but the callback will not be invoked.

tornado.iostream
• Coroutine-style usage of IOStream now converts most errors into StreamClosedError, which has the effect of reducing log noise from exceptions that are outside the application’s control (especially SSL errors).
• StreamClosedError now has a real_error attribute which indicates why the stream was closed. It is the same as the error attribute of IOStream but may be more easily accessible than the IOStream itself.
• Improved error handling in read_until_close.
• Logging is less noisy when an SSL server is port scanned.
• EINTR is now handled on all reads.

tornado.locale
• tornado.locale.load_translations now accepts encodings other than UTF-8. UTF-16 and UTF-8 will be detected automatically if a BOM is present; for other encodings load_translations has an encoding parameter.

tornado.locks
• Lock and Semaphore now support the async with statement on Python 3.5.

tornado.log
• A new time-based log rotation mode is available with --log_rotate_mode=time, --log-rotate-when, and --log-rotate-interval.
tornado.netutil
• bind_sockets now supports SO_REUSEPORT with the reuse_port=True argument.

tornado.options
• Dashes and underscores are now fully interchangeable in option names.

tornado.queues
• Queue now supports the async for statement on Python 3.5.

tornado.simple_httpclient
• When following redirects, streaming_callback and header_callback will no longer be run on the redirect responses (only the final non-redirect).
• Responses containing both Content-Length and Transfer-Encoding will be treated as an error.

tornado.template
• tornado.template.ParseError now includes the filename in addition to line number.
• Whitespace handling has become more configurable. The Loader constructor now has a whitespace argument, there is a new template_whitespace Application setting, and there is a new {% whitespace %} template directive. All of these options take a mode name defined in the tornado.template.filter_whitespace function. The default mode is single, which is the same behavior as prior versions of Tornado.
• Non-ASCII filenames are now supported.

tornado.testing
• ExpectLog objects now have a boolean logged_stack attribute to make it easier to test whether an exception stack trace was logged.

tornado.web
• The hard limit of 4000 bytes per outgoing header has been removed.
• StaticFileHandler returns the correct Content-Type for files with .gz, .bz2, and .xz extensions.
• Responses smaller than 1000 bytes will no longer be compressed.
• The default gzip compression level is now 6 (was 9).
• Fixed a regression in Tornado 4.2.1 that broke StaticFileHandler with a path of /.
• tornado.web.HTTPError is now copyable with the copy module.
• The exception Finish now accepts an argument which will be passed to the method RequestHandler.finish.
• New Application setting xsrf_cookie_kwargs can be used to set additional attributes such as secure or httponly on the XSRF cookie.
• Application.listen now returns the HTTPServer it created.
tornado.websocket
• Fixed handling of continuation frames when compression is enabled.

6.9.26 What’s new in Tornado 4.2.1

Jul 17, 2015

Security fix
• This release fixes a path traversal vulnerability in StaticFileHandler, in which files whose names started with the static_path directory but were not actually in that directory could be accessed.

6.9.27 What’s new in Tornado 4.2

May 26, 2015

Backwards-compatibility notes
• SSLIOStream.connect and IOStream.start_tls now validate certificates by default.
• Certificate validation will now use the system CA root certificates instead of certifi when possible (i.e. Python 2.7.9+ or 3.4+). This includes IOStream and simple_httpclient, but not curl_httpclient.
• The default SSL configuration has become stricter, using ssl.create_default_context where available on the client side. (On the server side, applications are encouraged to migrate from the ssl_options dict-based API to pass an ssl.SSLContext instead.)
• The deprecated classes in the tornado.auth module, GoogleMixin, FacebookMixin, and FriendFeedMixin, have been removed.

New modules: tornado.locks and tornado.queues
These modules provide classes for coordinating coroutines, merged from Toro.
To port your code from Toro’s queues to Tornado 4.2, import Queue, PriorityQueue, or LifoQueue from tornado.queues instead of from toro. Use Queue instead of Toro’s JoinableQueue. In Tornado the methods join and task_done are available on all queues, not on a special JoinableQueue.
Tornado queues raise exceptions specific to Tornado instead of reusing exceptions from the Python standard library. Therefore instead of catching the standard queue.Empty exception from Queue.get_nowait, catch the special tornado.queues.QueueEmpty exception, and instead of catching the standard queue.Full from Queue.put_nowait, catch tornado.queues.QueueFull.
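The queue exceptions described above follow the same pattern as asyncio's queues, which makes for a self-contained illustration (asyncio.QueueEmpty and asyncio.QueueFull here play the role of tornado.queues.QueueEmpty and QueueFull):

```python
import asyncio

# get_nowait on an empty queue raises QueueEmpty (not queue.Empty),
# and put_nowait on a full queue raises QueueFull (not queue.Full).
q = asyncio.Queue(maxsize=1)
try:
    q.get_nowait()
except asyncio.QueueEmpty:
    print("empty")

q.put_nowait("item")
try:
    q.put_nowait("overflow")
except asyncio.QueueFull:
    print("full")
```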
To port from Toro’s locks to Tornado 4.2, import Condition, Event, Semaphore, BoundedSemaphore, or Lock from tornado.locks instead of from toro.
Toro’s Semaphore.wait allowed a coroutine to wait for the semaphore to be unlocked without acquiring it. This encouraged unorthodox patterns; in Tornado, just use acquire.
Toro’s Event.wait raised a Timeout exception after a timeout. In Tornado, Event.wait raises tornado.gen.TimeoutError.
Toro’s Condition.wait also raised Timeout, but in Tornado, the Future returned by Condition.wait resolves to False after a timeout:

    @gen.coroutine
    def await_notification():
        if not (yield condition.wait(timeout=timedelta(seconds=1))):
            print('timed out')
        else:
            print('condition is true')

In lock and queue methods, wherever Toro accepted deadline as a keyword argument, Tornado names the argument timeout instead.
Toro’s AsyncResult is not merged into Tornado, nor its exceptions NotReady and AlreadySet. Use a Future instead. If you wrote code like this:

    from tornado import gen
    import toro

    result = toro.AsyncResult()

    @gen.coroutine
    def setter():
        result.set(1)

    @gen.coroutine
    def getter():
        value = yield result.get()
        print(value)  # Prints "1".

Then the Tornado equivalent is:

    from tornado import gen
    from tornado.concurrent import Future

    result = Future()

    @gen.coroutine
    def setter():
        result.set_result(1)

    @gen.coroutine
    def getter():
        value = yield result
        print(value)  # Prints "1".

tornado.autoreload
• Improved compatibility with Windows.
• Fixed a bug in Python 3 if a module was imported during a reload check.

tornado.concurrent
• run_on_executor now accepts arguments to control which attributes it uses to find the IOLoop and executor.

tornado.curl_httpclient
• Fixed a bug that would cause the client to stop processing requests if an exception occurred in certain places while there is a queue.
tornado.escape
• xhtml_escape now supports numeric character references in hex format (&#x20;).

tornado.gen
• WaitIterator no longer uses weak references, which fixes several garbage-collection-related bugs.
• tornado.gen.Multi and tornado.gen.multi_future (which are used when yielding a list or dict in a coroutine) now log any exceptions after the first if more than one Future fails (previously they would be logged when the Future was garbage-collected, but this is more reliable). Both have a new keyword argument quiet_exceptions to suppress logging of certain exception types; to use this argument you must call Multi or multi_future directly instead of simply yielding a list.
• multi_future now works when given multiple copies of the same Future.
• On Python 3, catching an exception in a coroutine no longer leads to leaks via Exception.__context__.

tornado.httpclient
• The raise_error argument now works correctly with the synchronous HTTPClient.
• The synchronous HTTPClient no longer interferes with IOLoop.current().

tornado.httpserver
• HTTPServer is now a subclass of tornado.util.Configurable.

tornado.httputil
• HTTPHeaders can now be copied with copy.copy and copy.deepcopy.

tornado.ioloop
• The IOLoop constructor now has a make_current keyword argument to control whether the new IOLoop becomes IOLoop.current().
• Third-party implementations of IOLoop should accept **kwargs in their IOLoop.initialize methods and pass them to the superclass implementation.
• PeriodicCallback is now more efficient when the clock jumps forward by a large amount.

tornado.iostream
• SSLIOStream.connect and IOStream.start_tls now validate certificates by default.
• New method SSLIOStream.wait_for_handshake allows server-side applications to wait for the handshake to complete in order to verify client certificates or use NPN/ALPN.
• The Future returned by SSLIOStream.connect now resolves after the handshake is complete instead of as soon as the TCP connection is established.
• Reduced logging of SSL errors.
• BaseIOStream.read_until_close now works correctly when a streaming_callback is given but callback is None (i.e. when it returns a Future).

tornado.locale
• New method GettextLocale.pgettext allows additional context to be supplied for gettext translations.

tornado.log
• define_logging_options now works correctly when given a non-default options object.

tornado.process
• New method Subprocess.wait_for_exit is a coroutine-friendly version of Subprocess.set_exit_callback.

tornado.simple_httpclient
• Improved performance on Python 3 by reusing a single ssl.SSLContext.
• New constructor argument max_body_size controls the maximum response size the client is willing to accept. It may be bigger than max_buffer_size if streaming_callback is used.

tornado.tcpserver
• TCPServer.handle_stream may be a coroutine (so that any exceptions it raises will be logged).

tornado.util
• import_object now supports unicode strings on Python 2.
• Configurable.initialize now supports positional arguments.

tornado.web
• Key versioning support for cookie signing. The cookie_secret application setting can now contain a dict of valid keys with version as key. The current signing key then must be specified via the key_version setting.
• Parsing of the If-None-Match header now follows the RFC and supports weak validators.
• Passing secure=False or httponly=False to RequestHandler.set_cookie now works as expected (previously only the presence of the argument was considered and its value was ignored).
• RequestHandler.get_arguments now requires that its strip argument be of type bool. This helps prevent errors caused by the slightly dissimilar interfaces between the singular and plural methods.
• Errors raised in _handle_request_exception are now logged more reliably.
• RequestHandler.redirect now works correctly when called from a handler whose path begins with two slashes.
• Passing messages containing % characters to tornado.web.HTTPError no longer causes broken error messages.

tornado.websocket
• The on_close method will no longer be called more than once.
• When the other side closes a connection, we now echo the received close code back instead of sending an empty close frame.

6.9.28 What’s new in Tornado 4.1

Feb 7, 2015

Highlights
• If a Future contains an exception but that exception is never examined or re-raised (e.g. by yielding the Future), a stack trace will be logged when the Future is garbage-collected.
• New class tornado.gen.WaitIterator provides a way to iterate over Futures in the order they resolve.
• The tornado.websocket module now supports compression via the “permessage-deflate” extension. Override WebSocketHandler.get_compression_options to enable on the server side, and use the compression_options keyword argument to websocket_connect on the client side.
• When the appropriate packages are installed, it is possible to yield asyncio.Future or Twisted Deferred objects in Tornado coroutines.

Backwards-compatibility notes
• HTTPServer now calls start_request with the correct arguments. This change is backwards-incompatible, affecting any application which implemented HTTPServerConnectionDelegate by following the example of Application instead of the documented method signatures.

tornado.concurrent
• If a Future contains an exception but that exception is never examined or re-raised (e.g. by yielding the Future), a stack trace will be logged when the Future is garbage-collected.
• Future now catches and logs exceptions in its callbacks.

tornado.curl_httpclient
• tornado.curl_httpclient now supports request bodies for PATCH and custom methods.
• tornado.curl_httpclient now supports resubmitting bodies after following redirects for methods other than POST.
• curl_httpclient now runs the streaming and header callbacks on the IOLoop.
• tornado.curl_httpclient now uses its own logger for debug output so it can be filtered more easily.
tornado.gen
• New class tornado.gen.WaitIterator provides a way to iterate over Futures in the order they resolve.
• When the singledispatch library is available (standard on Python 3.4, available via pip install singledispatch on older versions), the convert_yielded function can be used to make other kinds of objects yieldable in coroutines.
• New function tornado.gen.sleep is a coroutine-friendly analogue to time.sleep.
• gen.engine now correctly captures the stack context for its callbacks.
tornado.httpclient
• tornado.httpclient.HTTPRequest accepts a new argument raise_error=False to suppress the default behavior of raising an error for non-200 response codes.
tornado.httpserver
• HTTPServer now calls start_request with the correct arguments. This change is backwards-incompatible, affecting any application which implemented HTTPServerConnectionDelegate by following the example of Application instead of the documented method signatures.
• HTTPServer now tolerates extra newlines which are sometimes inserted between requests on keep-alive connections.
• HTTPServer can now use keep-alive connections after a request with a chunked body.
• HTTPServer now always reports HTTP/1.1 instead of echoing the request version.
tornado.httputil
• New function tornado.httputil.split_host_and_port for parsing the netloc portion of URLs.
• The context argument to HTTPServerRequest is now optional, and if a context is supplied the remote_ip attribute is also optional.
• HTTPServerRequest.body is now always a byte string (previously the default empty body would be a unicode string on Python 3).
• Header parsing now works correctly when newline-like unicode characters are present.
• Header parsing again supports both CRLF and bare LF line separators.
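The new split_host_and_port helper separates the host and optional port out of a URL's netloc. A minimal re-implementation sketch of the idea (not the library's exact code) makes the contract clear:

```python
import re


def split_host_and_port(netloc):
    # Returns (host, port); port is None when the netloc has no port.
    # Greedy matching keeps bracketed IPv6 literals like "[::1]" intact.
    match = re.match(r"^(.+):(\d+)$", netloc)
    if match:
        return match.group(1), int(match.group(2))
    return netloc, None
```

Usage: `split_host_and_port("example.com:8080")` yields `("example.com", 8080)`, while a bare `"example.com"` yields `("example.com", None)`.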
• Malformed multipart/form-data bodies will always be logged quietly instead of raising an unhandled ex- ception; previously the behavior was inconsistent depending on the exact error. Tornado Documentation, Release 6.3.3 tornado.ioloop • The kqueue and select IOLoop implementations now report writeability correctly, fixing flow control in IOStream. • When a new IOLoop is created, it automatically becomes “current” for the thread if there is not already a current instance. • New method PeriodicCallback.is_running can be used to see whether the PeriodicCallback has been started. tornado.iostream • IOStream.start_tls now uses the server_hostname parameter for certificate validation. • SSLIOStream will no longer consume 100% CPU after certain error conditions. • SSLIOStream no longer logs EBADF errors during the handshake as they can result from nmap scans in certain modes. tornado.options • parse_config_file now always decodes the config file as utf8 on Python 3. • tornado.options.define more accurately finds the module defining the option. tornado.platform.asyncio • It is now possible to yield asyncio.Future objects in coroutines when the singledispatch library is available and tornado.platform.asyncio has been imported. • New methods tornado.platform.asyncio.to_tornado_future and to_asyncio_future convert be- tween the two libraries’ Future classes. tornado.platform.twisted • It is now possible to yield Deferred objects in coroutines when the singledispatch library is available and tornado.platform.twisted has been imported. tornado.tcpclient • TCPClient will no longer raise an exception due to an ill-timed timeout. Tornado Documentation, Release 6.3.3 tornado.tcpserver • TCPServer no longer ignores its read_chunk_size argument. tornado.testing • AsyncTestCase has better support for multiple exceptions. Previously it would silently swallow all but the last; now it raises the first and logs all the rest. 
• AsyncTestCase now cleans up Subprocess state on tearDown when necessary. tornado.web • The asynchronous decorator now understands concurrent.futures.Future in addition to tornado. concurrent.Future. • StaticFileHandler no longer logs a stack trace if the connection is closed while sending the file. • RequestHandler.send_error now supports a reason keyword argument, similar to tornado.web. HTTPError. • RequestHandler.locale now has a property setter. • Application.add_handlers hostname matching now works correctly with IPv6 literals. • Redirects for the Application default_host setting now match the request protocol instead of redirecting HTTPS to HTTP. • Malformed _xsrf cookies are now ignored instead of causing uncaught exceptions. • Application.start_request now has the same signature as HTTPServerConnectionDelegate. start_request. tornado.websocket • The tornado.websocket module now supports compression via the “permessage-deflate” extension. Override WebSocketHandler.get_compression_options to enable on the server side, and use the compression_options keyword argument to websocket_connect on the client side. • WebSocketHandler no longer logs stack traces when the connection is closed. • WebSocketHandler.open now accepts *args, **kw for consistency with RequestHandler.get and related methods. • The Sec-WebSocket-Version header now includes all supported versions. • websocket_connect now has a on_message_callback keyword argument for callback-style use without read_message(). Tornado Documentation, Release 6.3.3 6.9.29 What’s new in Tornado 4.0.2 Sept 10, 2014 Bug fixes • Fixed a bug that could sometimes cause a timeout to fire after being cancelled. • AsyncTestCase once again passes along arguments to test methods, making it compatible with extensions such as Nose’s test generators. • StaticFileHandler can again compress its responses when gzip is enabled. • simple_httpclient passes its max_buffer_size argument to the underlying stream. 
• Fixed a reference cycle that can lead to increased memory consumption. • add_accept_handler will now limit the number of times it will call accept per IOLoop iteration, addressing a potential starvation issue. • Improved error handling in IOStream.connect (primarily for FreeBSD systems) 6.9.30 What’s new in Tornado 4.0.1 Aug 12, 2014 • The build will now fall back to pure-python mode if the C extension fails to build for any reason (previously it would fall back for some errors but not others). • IOLoop.call_at and IOLoop.call_later now always return a timeout handle for use with IOLoop. remove_timeout. • If any callback of a PeriodicCallback or IOStream returns a Future, any error raised in that future will now be logged (similar to the behavior of IOLoop.add_callback). • Fixed an exception in client-side websocket connections when the connection is closed. • simple_httpclient once again correctly handles 204 status codes with no content-length header. • Fixed a regression in simple_httpclient that would result in timeouts for certain kinds of errors. 6.9.31 What’s new in Tornado 4.0 July 15, 2014 Highlights • The tornado.web.stream_request_body decorator allows large files to be uploaded with limited memory usage. • Coroutines are now faster and are used extensively throughout Tornado itself. More methods now return Futures, including most IOStream methods and RequestHandler.flush . • Many user-overridden methods are now allowed to return a Future for flow control. Tornado Documentation, Release 6.3.3 • HTTP-related code is now shared between the tornado.httpserver, tornado.simple_httpclient and tornado.wsgi modules, making support for features such as chunked and gzip encoding more consistent. HTTPServer now uses new delegate interfaces defined in tornado.httputil in addition to its old single- callback interface. • New module tornado.tcpclient creates TCP connections with non-blocking DNS, SSL handshaking, and support for IPv6. 
Backwards-compatibility notes
• tornado.concurrent.Future is no longer thread-safe; use concurrent.futures.Future when thread-safety is needed.
• Tornado now depends on the certifi package instead of bundling its own copy of the Mozilla CA list. This will be installed automatically when using pip or easy_install.
• This version includes the changes to the secure cookie format first introduced in version 3.2.1, and the xsrf token change in version 3.2.2. If you are upgrading from an earlier version, see those versions' release notes.
• WebSocket connections from other origin sites are now rejected by default. To accept cross-origin websocket connections, override the new method WebSocketHandler.check_origin.
• WebSocketHandler no longer supports the old draft 76 protocol (this mainly affects Safari 5.x browsers). Applications should use non-websocket workarounds for these browsers.
• Authors of alternative IOLoop implementations should see the changes to IOLoop.add_handler in this release.
• The RequestHandler.async_callback and WebSocketHandler.async_callback wrapper functions have been removed; they have been obsolete for a long time due to stack contexts (and more recently coroutines).
• curl_httpclient now requires a minimum of libcurl version 7.21.1 and pycurl 7.18.2.
• Support for RequestHandler.get_error_html has been removed; override RequestHandler.write_error instead.
Other notes
• The git repository has moved to https://github.com/tornadoweb/tornado. All old links should be redirected to the new location.
• An announcement mailing list is now available.
• All Tornado modules are now importable on Google App Engine (although the App Engine environment does not allow the system calls used by IOLoop so many modules are still unusable).
tornado.auth
• Fixed a bug in FacebookMixin on Python 3.
• When using the Future interface, exceptions are more reliably delivered to the caller.
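Since cross-origin WebSocket connections are now rejected by default, applications that legitimately need them override check_origin. The origin-matching logic itself is plain URL parsing; the sketch below shows the shape of a typical override, with "example.com" as a hypothetical trusted domain (the real method lives on WebSocketHandler and also receives the request's Host header via self.request.headers).

```python
from urllib.parse import urlparse


def check_origin(origin, host):
    # Accept same-host connections (the 4.0 default) plus any subdomain
    # of a hypothetical trusted domain. Return False for everything else.
    parsed = urlparse(origin)
    return parsed.netloc == host or parsed.netloc.endswith(".example.com")
```

The endswith check must include the leading dot; matching on a bare "example.com" suffix would also accept "evilexample.com".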
Tornado Documentation, Release 6.3.3 tornado.concurrent • tornado.concurrent.Future is now always thread-unsafe (previously it would be thread-safe if the concurrent.futures package was available). This improves performance and provides more consistent se- mantics. The parts of Tornado that accept Futures will accept both Tornado’s thread-unsafe Futures and the thread-safe concurrent.futures.Future. • tornado.concurrent.Future now includes all the functionality of the old TracebackFuture class. TracebackFuture is now simply an alias for Future. tornado.curl_httpclient • curl_httpclient now passes along the HTTP “reason” string in response.reason. tornado.gen • Performance of coroutines has been improved. • Coroutines no longer generate StackContexts by default, but they will be created on demand when needed. • The internals of the tornado.gen module have been rewritten to improve performance when using Futures, at the expense of some performance degradation for the older YieldPoint interfaces. • New function with_timeout wraps a Future and raises an exception if it doesn’t complete in a given amount of time. • New object moment can be yielded to allow the IOLoop to run for one iteration before resuming. • Task is now a function returning a Future instead of a YieldPoint subclass. This change should be transparent to application code, but allows Task to take advantage of the newly-optimized Future handling. tornado.http1connection • New module contains the HTTP implementation shared by tornado.httpserver and tornado. simple_httpclient. tornado.httpclient • The command-line HTTP client (python -m tornado.httpclient $URL) now works on Python 3. • Fixed a memory leak in AsyncHTTPClient shutdown that affected applications that created many HTTP clients and IOLoops. • New client request parameter decompress_response replaces the existing use_gzip parameter; both names are accepted. 
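The new with_timeout function wraps a Future and raises if it does not resolve in time. Modern asyncio's wait_for behaves analogously, so a stdlib sketch of the same pattern (hypothetical coroutine names, illustrative timeouts) looks like this:

```python
import asyncio


async def slow_operation():
    # Stand-in for a future that takes longer than we are willing to wait.
    await asyncio.sleep(0.2)
    return "done"


async def main():
    # tornado.gen.with_timeout raises gen.TimeoutError in the same situation;
    # here we catch asyncio's equivalent and fall back.
    try:
        return await asyncio.wait_for(slow_operation(), timeout=0.05)
    except asyncio.TimeoutError:
        return "timed out"


print(asyncio.run(main()))
```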
tornado.httpserver
• tornado.httpserver.HTTPRequest has moved to tornado.httputil.HTTPServerRequest.
• The HTTP implementation has been unified with tornado.simple_httpclient in tornado.http1connection.
• Now supports Transfer-Encoding: chunked for request bodies.
• Now supports Content-Encoding: gzip for request bodies if decompress_request=True is passed to the HTTPServer constructor.
• The connection attribute of HTTPServerRequest is now documented for public use; applications are expected to write their responses via the HTTPConnection interface.
• The HTTPServerRequest.write and HTTPServerRequest.finish methods are now deprecated. (RequestHandler.write and RequestHandler.finish are not deprecated; this only applies to the methods on HTTPServerRequest.)
• HTTPServer now supports HTTPServerConnectionDelegate in addition to the old request_callback interface. The delegate interface supports streaming of request bodies.
• HTTPServer now detects the error of an application sending a Content-Length that is inconsistent with the actual content.
• New constructor arguments max_header_size and max_body_size allow separate limits to be set for different parts of the request. max_body_size is applied even in streaming mode.
• New constructor argument chunk_size can be used to limit the amount of data read into memory at one time per request.
• New constructor arguments idle_connection_timeout and body_timeout allow time limits to be placed on the reading of requests.
• Form-encoded message bodies are now parsed for all HTTP methods, not just POST, PUT, and PATCH.
tornado.httputil
• HTTPServerRequest was moved to this module from tornado.httpserver.
• New base classes HTTPConnection, HTTPServerConnectionDelegate, and HTTPMessageDelegate define the interaction between applications and the HTTP implementation.
tornado.ioloop
• IOLoop.add_handler and related methods now accept file-like objects in addition to raw file descriptors.
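The Transfer-Encoding: chunked support above means request bodies arrive as a series of hex-length-prefixed chunks. A minimal decoder sketch (ignoring chunk extensions and trailers, and not Tornado's actual implementation) shows the wire format:

```python
def decode_chunked(body: bytes) -> bytes:
    # Each chunk is "<hex size>\r\n<data>\r\n"; a zero-size chunk ends the body.
    out = b""
    while True:
        size_line, _, body = body.partition(b"\r\n")
        size = int(size_line, 16)
        if size == 0:
            return out
        out += body[:size]
        body = body[size + 2:]  # skip the chunk data plus its trailing CRLF
```

For example, `decode_chunked(b"4\r\nWiki\r\n5\r\npedia\r\n0\r\n\r\n")` reassembles `b"Wikipedia"`.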
Passing the objects is recommended (when possible) to avoid a garbage-collection-related problem in unit tests.
• New method IOLoop.clear_instance makes it possible to uninstall the singleton instance.
• Timeout scheduling is now more robust against slow callbacks.
• IOLoop.add_timeout is now a bit more efficient.
• When a function run by the IOLoop returns a Future and that Future has an exception, the IOLoop will log the exception.
• New method IOLoop.spawn_callback simplifies the process of launching a fire-and-forget callback that is separated from the caller's stack context.
• New methods IOLoop.call_later and IOLoop.call_at simplify the specification of relative or absolute timeouts (as opposed to add_timeout, which used the type of its argument).
tornado.iostream
• The callback argument to most IOStream methods is now optional. When called without a callback the method will return a Future for use with coroutines.
• New method IOStream.start_tls converts an IOStream to an SSLIOStream.
• No longer gets confused when an IOError or OSError without an errno attribute is raised.
• BaseIOStream.read_bytes now accepts a partial keyword argument, which can be used to return before the full amount has been read. This is a more coroutine-friendly alternative to streaming_callback.
• BaseIOStream.read_until and read_until_regex now accept a max_bytes keyword argument which will cause the request to fail if it cannot be satisfied within the given number of bytes.
• IOStream no longer reads from the socket into memory if it does not need data to satisfy a pending read. As a side effect, the close callback will not be run immediately if the other side closes the connection while there is unconsumed data in the buffer.
• The default chunk_size has been increased to 64KB (from 4KB).
• The IOStream constructor takes a new keyword argument max_write_buffer_size (defaults to unlimited).
Calls to BaseIOStream.write will raise StreamBufferFullError if the amount of unsent buffered data exceeds this limit. • ETIMEDOUT errors are no longer logged. If you need to distinguish timeouts from other forms of closed connec- tions, examine stream.error from a close callback. tornado.netutil • When bind_sockets chooses a port automatically, it will now use the same port for IPv4 and IPv6. • TLS compression is now disabled by default on Python 3.3 and higher (it is not possible to change this option in older versions). tornado.options • It is now possible to disable the default logging configuration by setting options.logging to None instead of the string "none". tornado.platform.asyncio • Now works on Python 2.6. • Now works with Trollius version 0.3. Tornado Documentation, Release 6.3.3 tornado.platform.twisted • TwistedIOLoop now works on Python 3.3+ (with Twisted 14.0.0+). tornado.simple_httpclient • simple_httpclient has better support for IPv6, which is now enabled by default. • Improved default cipher suite selection (Python 2.7+). • HTTP implementation has been unified with tornado.httpserver in tornado.http1connection • Streaming request bodies are now supported via the body_producer keyword argument to tornado. httpclient.HTTPRequest. • The expect_100_continue keyword argument to tornado.httpclient.HTTPRequest allows the use of the HTTP Expect: 100-continue feature. • simple_httpclient now raises the original exception (e.g. an IOError) in more cases, instead of converting everything to HTTPError. tornado.stack_context • The stack context system now has less performance overhead when no stack contexts are active. tornado.tcpclient • New module which creates TCP connections and IOStreams, including name resolution, connecting, and SSL handshakes. 
tornado.testing • AsyncTestCase now attempts to detect test methods that are generators but were not run with @gen_test or any similar decorator (this would previously result in the test silently being skipped). • Better stack traces are now displayed when a test times out. • The @gen_test decorator now passes along *args, **kwargs so it can be used on functions with arguments. • Fixed the test suite when unittest2 is installed on Python 3. tornado.web • It is now possible to support streaming request bodies with the stream_request_body decorator and the new RequestHandler.data_received method. • RequestHandler.flush now returns a Future if no callback is given. • New exception Finish may be raised to finish a request without triggering error handling. • When gzip support is enabled, all text/* mime types will be compressed, not just those on a whitelist. • Application now implements the HTTPMessageDelegate interface. • HEAD requests in StaticFileHandler no longer read the entire file. Tornado Documentation, Release 6.3.3 • StaticFileHandler now streams response bodies to the client. • New setting compress_response replaces the existing gzip setting; both names are accepted. • XSRF cookies that were not generated by this module (i.e. strings without any particular formatting) are once again accepted (as long as the cookie and body/header match). This pattern was common for testing and non- browser clients but was broken by the changes in Tornado 3.2.2. tornado.websocket • WebSocket connections from other origin sites are now rejected by default. Browsers do not use the same- origin policy for WebSocket connections as they do for most other browser-initiated communications. This can be surprising and a security risk, so we disallow these connections on the server side by default. To accept cross-origin websocket connections, override the new method WebSocketHandler.check_origin. 
• WebSocketHandler.close and WebSocketClientConnection.close now support code and reason argu- ments to send a status code and message to the other side of the connection when closing. Both classes also have close_code and close_reason attributes to receive these values when the other side closes. • The C speedup module now builds correctly with MSVC, and can support messages larger than 2GB on 64-bit systems. • The fallback mechanism for detecting a missing C compiler now works correctly on Mac OS X. • Arguments to WebSocketHandler.open are now decoded in the same way as arguments to RequestHandler. get and similar methods. • It is now allowed to override prepare in a WebSocketHandler, and this method may generate HTTP responses (error pages) in the usual way. The HTTP response methods are still not allowed once the WebSocket handshake has completed. tornado.wsgi • New class WSGIAdapter supports running a Tornado Application on a WSGI server in a way that is more compatible with Tornado’s non-WSGI HTTPServer. WSGIApplication is deprecated in favor of using WSGIAdapter with a regular Application. • WSGIAdapter now supports gzipped output. 6.9.32 What’s new in Tornado 3.2.2 June 3, 2014 Security fixes • The XSRF token is now encoded with a random mask on each request. This makes it safe to include in compressed pages without being vulnerable to the BREACH attack. This applies to most applications that use both the xsrf_cookies and gzip options (or have gzip applied by a proxy). Tornado Documentation, Release 6.3.3 Backwards-compatibility notes • If Tornado 3.2.2 is run at the same time as older versions on the same domain, there is some potential for issues with the differing cookie versions. The Application setting xsrf_cookie_version=1 can be used for a transitional period to generate the older cookie format on newer servers. Other changes • tornado.platform.asyncio is now compatible with trollius version 0.3. 
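The 3.2.2 XSRF fix above works by XORing the token with a fresh random mask on every request, so the value on the wire never repeats even though the underlying token is constant. This defeats compression-ratio attacks like BREACH. The sketch below illustrates the masking idea only; it is not Tornado's actual token encoding.

```python
import os


def mask_token(token: bytes) -> bytes:
    # Prepend a fresh random mask and store token XOR mask after it,
    # so every serialization of the same token looks different.
    mask = os.urandom(len(token))
    masked = bytes(a ^ b for a, b in zip(mask, token))
    return mask + masked


def unmask_token(value: bytes) -> bytes:
    # Split off the mask and XOR again to recover the original token.
    half = len(value) // 2
    mask, masked = value[:half], value[half:]
    return bytes(a ^ b for a, b in zip(mask, masked))
```

Two maskings of the same token compare unequal on the wire but unmask to the same value, which is exactly the property the server needs for comparison.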
6.9.33 What’s new in Tornado 3.2.1 May 5, 2014 Security fixes • The signed-value format used by RequestHandler.set_secure_cookie and RequestHandler. get_secure_cookie has changed to be more secure. This is a disruptive change. The secure_cookie functions take new version parameters to support transitions between cookie formats. • The new cookie format fixes a vulnerability that may be present in applications that use multiple cookies where the name of one cookie is a prefix of the name of another. • To minimize disruption, cookies in the older format will be accepted by default until they expire. Applications that may be vulnerable can reject all cookies in the older format by passing min_version=2 to RequestHandler. get_secure_cookie. • Thanks to <NAME> of Certified Secure for reporting this issue. Backwards-compatibility notes • Signed cookies issued by RequestHandler.set_secure_cookie in Tornado 3.2.1 cannot be read by older releases. If you need to run 3.2.1 in parallel with older releases, you can pass version=1 to RequestHandler. set_secure_cookie to issue cookies that are backwards-compatible (but have a known weakness, so this option should only be used for a transitional period). Other changes • The C extension used to speed up the websocket module now compiles correctly on Windows with MSVC and 64-bit mode. The fallback to the pure-Python alternative now works correctly on Mac OS X machines with no C compiler installed. Tornado Documentation, Release 6.3.3 6.9.34 What’s new in Tornado 3.2 Jan 14, 2014 Installation • Tornado now depends on the backports.ssl_match_hostname when running on Python 2. This will be installed automatically when using pip or easy_install • Tornado now includes an optional C extension module, which greatly improves performance of websockets. This extension will be built automatically if a C compiler is found at install time. 
New modules
• The tornado.platform.asyncio module provides integration with the asyncio module introduced in Python 3.4 (also available for Python 3.3 with pip install asyncio).
tornado.auth
• New GoogleOAuth2Mixin supports authentication to Google services with OAuth 2 instead of OpenID and OAuth 1.
• FacebookGraphMixin has been updated to use the current Facebook login URL, which saves a redirect.
tornado.concurrent
• TracebackFuture now accepts a timeout keyword argument (although it is still incorrect to use a non-zero timeout in non-blocking code).
tornado.curl_httpclient
• tornado.curl_httpclient now works on Python 3 with the soon-to-be-released pycurl 7.19.3, which will officially support Python 3 for the first time. Note that there are some unofficial Python 3 ports of pycurl (Ubuntu has included one for its past several releases); these are not supported for use with Tornado.
tornado.escape
• xhtml_escape now escapes apostrophes as well.
• tornado.escape.utf8, to_unicode, and native_str now raise TypeError instead of AssertionError when given an invalid value.
tornado.gen
• Coroutines may now yield dicts in addition to lists to wait for multiple tasks in parallel.
• Improved performance of tornado.gen when yielding a Future that is already done.
tornado.httpclient
• tornado.httpclient.HTTPRequest now uses property setters so that setting attributes after construction applies the same conversions as __init__ (e.g. converting the body attribute to bytes).
tornado.httpserver
• Malformed x-www-form-urlencoded request bodies will now log a warning and continue instead of causing the request to fail (similar to the existing handling of malformed multipart/form-data bodies). This is done mainly because some libraries send this content type by default even when the data is not form-encoded.
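Yielding a dict of futures in a Tornado coroutine runs the tasks in parallel and hands back a dict of results keyed the same way. An analogous stdlib sketch using asyncio.gather (hypothetical fetch coroutine) shows the same pattern:

```python
import asyncio


async def fetch(n):
    # Stand-in for a parallel task, e.g. an HTTP request.
    await asyncio.sleep(0)
    return n * 2


async def main():
    # Tornado: results = yield {"a": fut_a, "b": fut_b}
    # asyncio analogue: gather the values, then re-key them.
    keys = ["a", "b"]
    values = await asyncio.gather(fetch(1), fetch(2))
    return dict(zip(keys, values))
```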
• Fix some error messages for unix sockets (and other non-IP sockets) tornado.ioloop • IOLoop now uses IOLoop.handle_callback_exception consistently for error logging. • IOLoop now frees callback objects earlier, reducing memory usage while idle. • IOLoop will no longer call logging.basicConfig if there is a handler defined for the root logger or for the tornado or tornado.application loggers (previously it only looked at the root logger). tornado.iostream • IOStream now recognizes ECONNABORTED error codes in more places (which was mainly an issue on Windows). • IOStream now frees memory earlier if a connection is closed while there is data in the write buffer. • PipeIOStream now handles EAGAIN error codes correctly. • SSLIOStream now initiates the SSL handshake automatically without waiting for the application to try and read or write to the connection. • Swallow a spurious exception from set_nodelay when a connection has been reset. tornado.locale • Locale.format_date no longer forces the use of absolute dates in Russian. Tornado Documentation, Release 6.3.3 tornado.log • Fix an error from tornado.log.enable_pretty_logging when sys.stderr does not have an isatty method. • tornado.log.LogFormatter now accepts keyword arguments fmt and datefmt. tornado.netutil • is_valid_ip (and therefore HTTPRequest.remote_ip) now rejects empty strings. • Synchronously using ThreadedResolver at import time to resolve a unicode hostname no longer deadlocks. tornado.platform.twisted • TwistedResolver now has better error handling. tornado.process • Subprocess no longer leaks file descriptors if subprocess.Popen fails. tornado.simple_httpclient • simple_httpclient now applies the connect_timeout to requests that are queued and have not yet started. • On Python 2.6, simple_httpclient now uses TLSv1 instead of SSLv3. • simple_httpclient now enforces the connect timeout during DNS resolution. • The embedded ca-certificates.crt file has been updated with the current Mozilla CA list. 
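tornado.log.LogFormatter's new fmt and datefmt keyword arguments mirror the stdlib logging.Formatter interface, sketched here with illustrative format strings:

```python
import logging

# Illustrative format strings; LogFormatter accepts the same keyword names.
formatter = logging.Formatter(
    fmt="%(levelname)s %(name)s: %(message)s",
    datefmt="%Y-%m-%d %H:%M:%S",
)

# Build a record by hand just to show the formatted output.
record = logging.LogRecord("demo", logging.INFO, "app.py", 1, "hello", None, None)
print(formatter.format(record))
```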
tornado.web
• StaticFileHandler no longer fails if the client requests a Range that is larger than the entire file (Facebook has a crawler that does this).
• RequestHandler.on_connection_close now works correctly on subsequent requests of a keep-alive connection.
• New application setting default_handler_class can be used to easily set up custom 404 pages.
• New application settings autoreload, compiled_template_cache, static_hash_cache, and serve_traceback can be used to control individual aspects of debug mode.
• New methods RequestHandler.get_query_argument and RequestHandler.get_body_argument and new attributes HTTPRequest.query_arguments and HTTPRequest.body_arguments allow access to arguments without intermingling those from the query string with those from the request body.
• RequestHandler.decode_argument and related methods now raise an HTTPError(400) instead of UnicodeDecodeError when the argument could not be decoded.
• RequestHandler.clear_all_cookies now accepts domain and path arguments, just like clear_cookie.
• It is now possible to specify handlers by name when using the tornado.web.URLSpec class.
• Application now accepts 4-tuples to specify the name parameter (which previously required constructing a tornado.web.URLSpec object instead of a tuple).
• Fixed an incorrect error message when handler methods return a value other than None or a Future.
• Exceptions will no longer be logged twice when using both @asynchronous and @gen.coroutine.
tornado.websocket
• WebSocketHandler.write_message now raises WebSocketClosedError instead of AttributeError when the connection has been closed.
• websocket_connect now accepts preconstructed HTTPRequest objects.
• Fix a bug with WebSocketHandler when used with some proxies that unconditionally modify the Connection header.
• websocket_connect now returns an error immediately for refused connections instead of waiting for the timeout.
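The separate query/body accessors matter because the same argument name can appear in both places with different values. A stdlib sketch with urllib.parse.parse_qs (hypothetical argument strings) shows the ambiguity the new methods resolve:

```python
from urllib.parse import parse_qs

# The same "user" name appears in the query string and the form body;
# get_query_argument and get_body_argument let a handler pick the right one
# instead of getting whichever get_argument happened to prefer.
query_arguments = parse_qs("user=alice&page=2")
body_arguments = parse_qs("user=bob")
```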
• WebSocketClientConnection now has a close method. tornado.wsgi • WSGIContainer now calls the iterable’s close() method even if an error is raised, in compliance with the spec. 6.9.35 What’s new in Tornado 3.1.1 Sep 1, 2013 • StaticFileHandler no longer fails if the client requests a Range that is larger than the entire file (Facebook has a crawler that does this). • RequestHandler.on_connection_close now works correctly on subsequent requests of a keep-alive con- nection. 6.9.36 What’s new in Tornado 3.1 Jun 15, 2013 Multiple modules • Many reference cycles have been broken up throughout the package, allowing for more efficient garbage collection on CPython. • Silenced some log messages when connections are opened and immediately closed (i.e. port scans), or other situations related to closed connections. • Various small speedups: HTTPHeaders case normalization, UIModule proxy objects, precompile some regexes. Tornado Documentation, Release 6.3.3 tornado.auth • OAuthMixin always sends oauth_version=1.0 in its request as required by the spec. • FacebookGraphMixin now uses self._FACEBOOK_BASE_URL in facebook_request to allow the base url to be overridden. • The authenticate_redirect and authorize_redirect methods in the tornado.auth mixin classes all now return Futures. These methods are asynchronous in OAuthMixin and derived classes, although they do not take a callback. The Future these methods return must be yielded if they are called from a function decorated with gen.coroutine (but not gen.engine). • TwitterMixin now uses /account/verify_credentials to get information about the logged-in user, which is more robust against changing screen names. • The demos directory (in the source distribution) has a new twitter demo using TwitterMixin. tornado.escape • url_escape and url_unescape have a new plus argument (defaulting to True for consistency with the previous behavior) which specifies whether they work like urllib.parse.unquote or urllib.parse. unquote_plus. 
tornado.gen
• Fixed a potential memory leak with long chains of tornado.gen coroutines.
tornado.httpclient
• tornado.httpclient.HTTPRequest takes a new argument auth_mode, which can be either basic or digest. Digest authentication is only supported with tornado.curl_httpclient.
• tornado.curl_httpclient no longer goes into an infinite loop when pycurl returns a negative timeout.
• curl_httpclient now supports the PATCH and OPTIONS methods without the use of allow_nonstandard_methods=True.
• Worked around a class of bugs in libcurl that would result in errors from IOLoop.update_handler in various scenarios including digest authentication and socks proxies.
• The TCP_NODELAY flag is now set when appropriate in simple_httpclient.
• simple_httpclient no longer logs exceptions, since those exceptions are made available to the caller as HTTPResponse.error.
tornado.httpserver
• tornado.httpserver.HTTPServer handles malformed HTTP headers more gracefully.
• HTTPServer now supports lists of IPs in X-Forwarded-For (it chooses the last, i.e. nearest one).
• Memory is now reclaimed promptly on CPython when an HTTP request fails because it exceeded the maximum upload size.
• The TCP_NODELAY flag is now set when appropriate in HTTPServer.
• The HTTPServer no_keep_alive option is now respected with HTTP 1.0 connections that explicitly pass Connection: keep-alive.
• The Connection: keep-alive check for HTTP 1.0 connections is now case-insensitive.
• The str and repr of tornado.httpserver.HTTPRequest no longer include the request body, reducing log spam on errors (and potential exposure/retention of private data).
tornado.httputil
• The cache used in HTTPHeaders will no longer grow without bound.
tornado.ioloop
• Some IOLoop implementations (such as pyzmq) accept objects other than integer file descriptors; these objects will now have their .close() method called when the IOLoop is closed with all_fds=True.
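Choosing the last entry of X-Forwarded-For is deliberate: each proxy appends the address it saw, so the last entry is the nearest hop and the only one the server's own proxy vouches for. A minimal sketch of that selection (hypothetical helper name):

```python
def remote_ip_from_xff(header_value):
    # X-Forwarded-For accumulates "client, proxy1, proxy2, ..."; take the
    # last (nearest) entry, as HTTPServer now does.
    return header_value.split(",")[-1].strip()
```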
• The stub handles left behind by IOLoop.remove_timeout will now get cleaned up instead of waiting to expire.

tornado.iostream

• Fixed a bug in BaseIOStream.read_until_close that would sometimes cause data to be passed to the final callback instead of the streaming callback.
• The IOStream close callback is now run more reliably if there is an exception in _try_inline_read.
• New method BaseIOStream.set_nodelay can be used to set the TCP_NODELAY flag.
• Fixed a case where errors in SSLIOStream.connect (and SimpleAsyncHTTPClient) were not being reported correctly.

tornado.locale

• Locale.format_date now works on Python 3.

tornado.netutil

• The default Resolver implementation now works on Solaris.
• Resolver now has a close method.
• Fixed a potential CPU DoS when tornado.netutil.ssl_match_hostname is used on certificates with an abusive wildcard pattern.
• All instances of ThreadedResolver now share a single thread pool, whose size is set by the first one to be created (or the static Resolver.configure method).
• ExecutorResolver is now documented for public use.
• bind_sockets now works in configurations with incomplete IPv6 support.

tornado.options

• tornado.options.define with multiple=True now works on Python 3.
• tornado.options.options and other OptionParser instances support some new dict-like methods: items(), iteration over keys, and (read-only) access to options with square bracket syntax. OptionParser.group_dict returns all options with a given group name, and OptionParser.as_dict returns all options.

tornado.process

• tornado.process.Subprocess no longer leaks file descriptors into the child process, which fixes a problem in which the child could not detect that the parent process had closed its stdin pipe.
• Subprocess.set_exit_callback now works for subprocesses created without an explicit io_loop parameter.

tornado.stack_context

• tornado.stack_context has been rewritten and is now much faster.
• New function run_with_stack_context facilitates the use of stack contexts with coroutines.

tornado.tcpserver

• The constructors of TCPServer and HTTPServer now take a max_buffer_size keyword argument.

tornado.template

• Some internal names used by the template system have been changed; now all “reserved” names in templates start with _tt_.

tornado.testing

• tornado.testing.AsyncTestCase.wait now raises the correct exception when it has been modified by tornado.stack_context.
• tornado.testing.gen_test can now be called as @gen_test(timeout=60) to give some tests a longer timeout than others.
• The environment variable ASYNC_TEST_TIMEOUT can now be set to override the default timeout for AsyncTestCase.wait and gen_test.
• bind_unused_port now passes None instead of 0 as the port to getaddrinfo, which works better with some unusual network configurations.

tornado.util

• tornado.util.import_object now works with top-level module names that do not contain a dot.
• tornado.util.import_object now consistently raises ImportError instead of AttributeError when it fails.

tornado.web

• The handlers list passed to the tornado.web.Application constructor and add_handlers methods can now contain lists in addition to tuples and URLSpec objects.
• tornado.web.StaticFileHandler now works on Windows when the client passes an If-Modified-Since timestamp before 1970.
• New method RequestHandler.log_exception can be overridden to customize the logging behavior when an exception is uncaught. Most apps that currently override _handle_request_exception can now use a combination of RequestHandler.log_exception and write_error.
• RequestHandler.get_argument now raises MissingArgumentError (a subclass of tornado.web.HTTPError, which is what it raised previously) if the argument cannot be found.
• Application.reverse_url now uses url_escape with plus=False, i.e. spaces are encoded as %20 instead of +.
• Arguments extracted from the url path are now decoded with url_unescape with plus=False, so plus signs are left as-is instead of being turned into spaces.
• RequestHandler.send_error will now only be called once per request, even if multiple exceptions are caught by the stack context.
• The tornado.web.asynchronous decorator is no longer necessary for methods that return a Future (i.e. those that use the gen.coroutine or return_future decorators).
• RequestHandler.prepare may now be asynchronous if it returns a Future. The tornado.web.asynchronous decorator is not used with prepare; one of the Future-related decorators should be used instead.
• RequestHandler.current_user may now be assigned to normally.
• RequestHandler.redirect no longer silently strips control characters and whitespace. It is now an error to pass control characters, newlines or tabs.
• StaticFileHandler has been reorganized internally and now has additional extension points that can be overridden in subclasses.
• StaticFileHandler now supports HTTP Range requests. StaticFileHandler is still not suitable for files too large to comfortably fit in memory, but Range support is necessary in some browsers to enable seeking of HTML5 audio and video.
• StaticFileHandler now uses longer hashes by default, and uses the same hashes for Etag as it does for versioned urls.
• StaticFileHandler.make_static_url and RequestHandler.static_url now have an additional keyword argument include_version to suppress the url versioning.
• StaticFileHandler now reads its file in chunks, which will reduce memory fragmentation.
• Fixed a problem with the Date header and cookie expiration dates when the system locale is set to a non-English configuration.

tornado.websocket

• WebSocketHandler now catches StreamClosedError and runs on_close immediately instead of logging a stack trace.
• New method WebSocketHandler.set_nodelay can be used to set the TCP_NODELAY flag.
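The Range support mentioned above comes down to parsing a bytes=start-end header into a slice of the file. A minimal sketch of that parsing follows; parse_byte_range is a hypothetical helper written for illustration, not Tornado’s internal API, and it deliberately ignores suffix ranges and multi-range requests.

```python
import re

def parse_byte_range(header, file_size):
    """Parse a single 'bytes=start-end' Range header into a (start, end)
    half-open slice, clamping end to the file size. Returns None for
    forms this sketch does not handle (suffix ranges, multiple ranges)."""
    m = re.fullmatch(r"bytes=(\d+)-(\d*)", header)
    if m is None:
        return None
    start = int(m.group(1))
    # An open-ended range ("bytes=500-") runs to the end of the file.
    end = int(m.group(2)) + 1 if m.group(2) else file_size
    # A Range larger than the file is clamped rather than treated as an
    # error (the 3.1.1 note above fixes exactly this case).
    end = min(end, file_size)
    if start >= end:
        return None
    return (start, end)

assert parse_byte_range("bytes=0-499", 1000) == (0, 500)
assert parse_byte_range("bytes=500-", 1000) == (500, 1000)
assert parse_byte_range("bytes=0-99999", 1000) == (0, 1000)
```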
tornado.wsgi

• Fixed an exception in WSGIContainer when the connection is closed while output is being written.

6.9.37 What’s new in Tornado 3.0.2

Jun 2, 2013

• tornado.auth.TwitterMixin now defaults to version 1.1 of the Twitter API, instead of version 1.0 which is being discontinued on June 11. It also now uses HTTPS when talking to Twitter.
• Fixed a potential memory leak with a long chain of gen.coroutine or gen.engine functions.

6.9.38 What’s new in Tornado 3.0.1

Apr 8, 2013

• The interface of tornado.auth.FacebookGraphMixin is now consistent with its documentation and the rest of the module. The get_authenticated_user and facebook_request methods return a Future and the callback argument is optional.
• The tornado.testing.gen_test decorator will no longer be recognized as a (broken) test by nose.
• Work around a bug in Ubuntu 13.04 betas involving an incomplete backport of the ssl.match_hostname function.
• tornado.websocket.websocket_connect now fails cleanly when it attempts to connect to a non-websocket url.
• tornado.testing.LogTrapTestCase once again works with byte strings on Python 2.
• The request attribute of tornado.httpclient.HTTPResponse is now always an HTTPRequest, never a _RequestProxy.
• Exceptions raised by the tornado.gen module now have better messages when tuples are used as callback keys.

6.9.39 What’s new in Tornado 3.0

Mar 29, 2013

Highlights

• The callback argument to many asynchronous methods is now optional, and these methods return a Future. The tornado.gen module now understands Futures, and these methods can be used directly without a gen.Task wrapper.
• New function IOLoop.current returns the IOLoop that is running on the current thread (as opposed to IOLoop.instance, which returns a specific thread’s (usually the main thread’s) IOLoop).
• New class tornado.netutil.Resolver provides an asynchronous interface to DNS resolution.
The default implementation is still blocking, but non-blocking implementations are available using one of three optional dependencies: ThreadedResolver using the concurrent.futures thread pool, tornado.platform.caresresolver.CaresResolver using the pycares library, or tornado.platform.twisted.TwistedResolver using Twisted.
• Tornado’s logging is now less noisy, and it no longer goes directly to the root logger, allowing for finer-grained configuration.
• New class tornado.process.Subprocess wraps subprocess.Popen with PipeIOStream access to the child’s file descriptors.
• IOLoop now has a static configure method like the one on AsyncHTTPClient, which can be used to select an IOLoop implementation other than the default.
• IOLoop can now optionally use a monotonic clock if available (see below for more details).

Backwards-incompatible changes

• Python 2.5 is no longer supported. Python 3 is now supported in a single codebase instead of using 2to3.
• The tornado.database module has been removed. It is now available as a separate package, torndb.
• Functions that take an io_loop parameter now default to IOLoop.current() instead of IOLoop.instance().
• Empty HTTP request arguments are no longer ignored. This applies to HTTPRequest.arguments and RequestHandler.get_argument[s] in WSGI and non-WSGI modes.
• On Python 3, tornado.escape.json_encode no longer accepts byte strings.
• On Python 3, the get_authenticated_user methods in tornado.auth now return character strings instead of byte strings.
• tornado.netutil.TCPServer has moved to its own module, tornado.tcpserver.
• The Tornado test suite now requires unittest2 when run on Python 2.6.
• tornado.options.options is no longer a subclass of dict; attribute-style access is now required.

Detailed changes by module

Multiple modules

• Tornado no longer logs to the root logger. Details on the new logging scheme can be found under the tornado.log module.
Note that in some cases this will require that you add an explicit logging configuration in order to see any output (perhaps just calling logging.basicConfig()), although both IOLoop.start() and tornado.options.parse_command_line will do this for you.
• On Python 3.2+, methods that take an ssl_options argument (on SSLIOStream, TCPServer, and HTTPServer) now accept either a dictionary of options or an ssl.SSLContext object.
• New optional dependency on concurrent.futures to provide better support for working with threads. concurrent.futures is in the standard library for Python 3.2+, and can be installed on older versions with pip install futures.

tornado.autoreload

• tornado.autoreload is now more reliable when there are errors at import time.
• Calling tornado.autoreload.start (or creating an Application with debug=True) twice on the same IOLoop now does nothing (instead of creating multiple periodic callbacks). Starting autoreload on more than one IOLoop in the same process now logs a warning.
• Scripts run by autoreload no longer inherit __future__ imports used by Tornado.

tornado.auth

• On Python 3, the get_authenticated_user method family now returns character strings instead of byte strings.
• Asynchronous methods defined in tornado.auth now return a Future, and their callback argument is optional. The Future interface is preferred as it offers better error handling (the previous interface just logged a warning and returned None).
• The tornado.auth mixin classes now define a method get_auth_http_client, which can be overridden to use a non-default AsyncHTTPClient instance (e.g. to use a different IOLoop).
• Subclasses of OAuthMixin are encouraged to override OAuthMixin._oauth_get_user_future instead of _oauth_get_user, although both methods are still supported.

tornado.concurrent

• New module tornado.concurrent contains code to support working with concurrent.futures, or to emulate a future-based interface when that module is not available.
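The concurrent.futures interface that tornado.concurrent builds on can be exercised directly from the standard library. This stdlib-only sketch shows the two ways a Future gets resolved: by an executor, or by hand, which is roughly what an emulation layer does when concurrent.futures is unavailable.

```python
from concurrent.futures import ThreadPoolExecutor, Future

# Submitting work to an executor returns a Future whose result can be
# collected later; result() blocks until the value is ready.
with ThreadPoolExecutor(max_workers=2) as pool:
    fut = pool.submit(pow, 2, 10)
    assert isinstance(fut, Future)
    assert fut.result() == 1024

# A Future can also be driven by hand: create it, then set its result
# (or exception) when the asynchronous operation completes.
manual = Future()
manual.set_result("done")
assert manual.result() == "done"
```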
tornado.curl_httpclient

• Preliminary support for tornado.curl_httpclient on Python 3. The latest official release of pycurl only supports Python 2, but Ubuntu has a port available in 12.10 (apt-get install python3-pycurl). This port currently has bugs that prevent it from handling arbitrary binary data, but it should work for textual (utf8) resources.
• Fix a crash with libcurl 7.29.0 if a curl object is created and closed without being used.

tornado.escape

• On Python 3, json_encode no longer accepts byte strings. This mirrors the behavior of the underlying json module. Python 2 behavior is unchanged but should be faster.

tornado.gen

• New decorator @gen.coroutine is available as an alternative to @gen.engine. It automatically returns a Future, and within the function, instead of calling a callback, you return a value with raise gen.Return(value) (or simply return value in Python 3.3).
• Generators may now yield Future objects.
• Callbacks produced by gen.Callback and gen.Task are now automatically stack-context-wrapped, to minimize the risk of context leaks when used with asynchronous functions that don’t do their own wrapping.
• Fixed a memory leak involving generators, RequestHandler.flush, and clients closing connections while output is being written.
• Yielding a large list no longer has quadratic performance.

tornado.httpclient

• AsyncHTTPClient.fetch now returns a Future and its callback argument is optional. When the future interface is used, any error will be raised automatically, as if HTTPResponse.rethrow was called.
• AsyncHTTPClient.configure and all AsyncHTTPClient constructors now take a defaults keyword argument. This argument should be a dictionary, and its values will be used in place of corresponding attributes of HTTPRequest that are not set.
• All unset attributes of tornado.httpclient.HTTPRequest are now None.
The default values of some attributes (connect_timeout, request_timeout, follow_redirects, max_redirects, use_gzip, proxy_password, allow_nonstandard_methods, and validate_cert) have been moved from HTTPRequest to the client implementations.
• The max_clients argument to AsyncHTTPClient is now a keyword-only argument.
• Keyword arguments to AsyncHTTPClient.configure are no longer used when instantiating an implementation subclass directly.
• Secondary AsyncHTTPClient callbacks (streaming_callback, header_callback, and prepare_curl_callback) now respect StackContext.

tornado.httpserver

• HTTPServer no longer logs an error when it is unable to read a second request from an HTTP 1.1 keep-alive connection.
• HTTPServer now takes a protocol keyword argument which can be set to https if the server is behind an SSL-decoding proxy that does not set any supported X-headers.
• tornado.httpserver.HTTPConnection now has a set_close_callback method that should be used instead of reaching into its stream attribute.
• Empty HTTP request arguments are no longer ignored. This applies to HTTPRequest.arguments and RequestHandler.get_argument[s] in WSGI and non-WSGI modes.

tornado.ioloop

• New function IOLoop.current returns the IOLoop that is running on the current thread (as opposed to IOLoop.instance, which returns a specific thread’s (usually the main thread’s) IOLoop).
• New method IOLoop.add_future to run a callback on the IOLoop when an asynchronous Future finishes.
• IOLoop now has a static configure method like the one on AsyncHTTPClient, which can be used to select an IOLoop implementation other than the default.
• The IOLoop poller implementations (select, epoll, kqueue) are now available as distinct subclasses of IOLoop. Instantiating IOLoop will continue to automatically choose the best available implementation.
• The IOLoop constructor has a new keyword argument time_func, which can be used to set the time function used when scheduling callbacks. This is most useful with the time.monotonic function, introduced in Python 3.3 and backported to older versions via the monotime module. Using a monotonic clock here avoids problems when the system clock is changed.
• New function IOLoop.time returns the current time according to the IOLoop. To use the new monotonic clock functionality, all calls to IOLoop.add_timeout must either pass a datetime.timedelta or a time relative to IOLoop.time, not time.time. (time.time will continue to work only as long as the IOLoop’s time_func argument is not used.)
• New convenience method IOLoop.run_sync can be used to start an IOLoop just long enough to run a single coroutine.
• New method IOLoop.add_callback_from_signal is safe to use in a signal handler (the regular add_callback method may deadlock).
• IOLoop now uses signal.set_wakeup_fd where available (Python 2.6+ on Unix) to avoid a race condition that could result in Python signal handlers being delayed.
• Method IOLoop.running() has been removed.
• IOLoop has been refactored to better support subclassing.
• IOLoop.add_callback and add_callback_from_signal now take *args, **kwargs to pass along to the callback.

tornado.iostream

• IOStream.connect now has an optional server_hostname argument which will be used for SSL certificate validation when applicable. Additionally, when supported (on Python 3.2+), this hostname will be sent via SNI (and this is supported by tornado.simple_httpclient).
• Much of IOStream has been refactored into a separate class BaseIOStream.
• New class tornado.iostream.PipeIOStream provides the IOStream interface on pipe file descriptors.
• IOStream now raises a new exception tornado.iostream.StreamClosedError when you attempt to read or write after the stream has been closed (by either side).
• IOStream now simply closes the connection when it gets an ECONNRESET error, rather than logging it as an error.
• IOStream.error no longer picks up unrelated exceptions.
• BaseIOStream.close now has an exc_info argument (similar to the one used in the logging module) that can be used to set the stream’s error attribute when closing it.
• BaseIOStream.read_until_close now works correctly when it is called while there is buffered data.
• Fixed a major performance regression when run on PyPy (introduced in Tornado 2.3).

tornado.log

• New module containing enable_pretty_logging and LogFormatter, moved from the options module.
• LogFormatter now handles non-ascii data in messages and tracebacks better.

tornado.netutil

• New class tornado.netutil.Resolver provides an asynchronous interface to DNS resolution. The default implementation is still blocking, but non-blocking implementations are available using one of three optional dependencies: ThreadedResolver using the concurrent.futures thread pool, tornado.platform.caresresolver.CaresResolver using the pycares library, or tornado.platform.twisted.TwistedResolver using Twisted.
• New function tornado.netutil.is_valid_ip returns true if a given string is a valid IP (v4 or v6) address.
• tornado.netutil.bind_sockets has a new flags argument that can be used to pass additional flags to getaddrinfo.
• tornado.netutil.bind_sockets no longer sets AI_ADDRCONFIG; this will cause it to bind to both ipv4 and ipv6 more often than before.
• tornado.netutil.bind_sockets now works when Python was compiled with --disable-ipv6 but IPv6 DNS resolution is available on the system.
• tornado.netutil.TCPServer has moved to its own module, tornado.tcpserver.

tornado.options

• The class underlying the functions in tornado.options is now public (tornado.options.OptionParser). This can be used to create multiple independent option sets, such as for subcommands.
• tornado.options.parse_config_file now configures logging automatically by default, in the same way that parse_command_line does.
• New function tornado.options.add_parse_callback schedules a callback to be run after the command line or config file has been parsed. The keyword argument final=False can be used on either parsing function to suppress these callbacks.
• tornado.options.define now takes a callback argument. This callback will be run with the new value whenever the option is changed. This is especially useful for options that set other options, such as by reading from a config file.
• tornado.options.parse_command_line --help output now goes to stderr rather than stdout.
• tornado.options.options is no longer a subclass of dict; attribute-style access is now required.
• tornado.options.options (and OptionParser instances generally) now have a mockable() method that returns a wrapper object compatible with mock.patch.
• Function tornado.options.enable_pretty_logging has been moved to the tornado.log module.

tornado.platform.caresresolver

• New module containing an asynchronous implementation of the Resolver interface, using the pycares library.

tornado.platform.twisted

• New class tornado.platform.twisted.TwistedIOLoop allows Tornado code to be run on the Twisted reactor (as opposed to the existing TornadoReactor, which bridges the gap in the other direction).
• New class tornado.platform.twisted.TwistedResolver is an asynchronous implementation of the Resolver interface.

tornado.process

• New class tornado.process.Subprocess wraps subprocess.Popen with PipeIOStream access to the child’s file descriptors.

tornado.simple_httpclient

• SimpleAsyncHTTPClient now takes a resolver keyword argument (which may be passed to either the constructor or configure), to allow it to use the new non-blocking tornado.netutil.Resolver.
• When following redirects, SimpleAsyncHTTPClient now treats a 302 response code the same as a 303.
This is contrary to the HTTP spec but consistent with all browsers and other major HTTP clients (including CurlAsyncHTTPClient).
• The behavior of header_callback with SimpleAsyncHTTPClient has changed and is now the same as that of CurlAsyncHTTPClient. The header callback now receives the first line of the response (e.g. HTTP/1.0 200 OK) and the final empty line.
• tornado.simple_httpclient now accepts responses with a 304 status code that include a Content-Length header.
• Fixed a bug in which SimpleAsyncHTTPClient callbacks were being run in the client’s stack_context.

tornado.stack_context

• stack_context.wrap now runs the wrapped callback in a more consistent environment by recreating contexts even if they already exist on the stack.
• Fixed a bug in which stack contexts could leak from one callback chain to another.
• Yield statements inside a with statement can cause stack contexts to become inconsistent; an exception will now be raised when this case is detected.

tornado.template

• Errors while rendering templates no longer log the generated code, since the enhanced stack traces (from version 2.1) should make this unnecessary.
• The {% apply %} directive now works properly with functions that return both unicode strings and byte strings (previously only byte strings were supported).
• Code in templates is no longer affected by Tornado’s __future__ imports (which previously included absolute_import and division).

tornado.testing

• New function tornado.testing.bind_unused_port both chooses a port and binds a socket to it, so there is no risk of another process using the same port. get_unused_port is now deprecated.
• New decorator tornado.testing.gen_test can be used to allow for yielding tornado.gen objects in tests, as an alternative to the stop and wait methods of AsyncTestCase.
• tornado.testing.AsyncTestCase and friends now extend unittest2.TestCase when it is available (and continue to use the standard unittest module when unittest2 is not available).
• tornado.testing.ExpectLog can be used as a finer-grained alternative to tornado.testing.LogTrapTestCase.
• The command-line interface to tornado.testing.main now supports additional arguments from the underlying unittest module: verbose, quiet, failfast, catch, buffer.
• The deprecated --autoreload option of tornado.testing.main has been removed. Use python -m tornado.autoreload as a prefix command instead.
• The --httpclient option of tornado.testing.main has been moved to tornado.test.runtests so as not to pollute the application option namespace. The tornado.options module’s new callback support now makes it easy to add options from a wrapper script instead of putting all possible options in tornado.testing.main.
• AsyncHTTPTestCase no longer calls AsyncHTTPClient.close for tests that use the singleton IOLoop.instance.
• LogTrapTestCase no longer fails when run in unknown logging configurations. This allows tests to be run under nose, which does its own log buffering (LogTrapTestCase doesn’t do anything useful in this case, but at least it doesn’t break things any more).

tornado.util

• tornado.util.b (which was only intended for internal use) is gone.

tornado.web

• RequestHandler.set_header now overwrites previous header values case-insensitively.
• tornado.web.RequestHandler has new attributes path_args and path_kwargs, which contain the positional and keyword arguments that are passed to the get/post/etc method. These attributes are set before those methods are called, so they are available during prepare().
• tornado.web.ErrorHandler no longer requires XSRF tokens on POST requests, so posts to an unknown url will always return 404 instead of complaining about XSRF tokens.
• Several methods related to HTTP status codes now take a reason keyword argument to specify an alternate “reason” string (i.e. the “Not Found” in “HTTP/1.1 404 Not Found”). It is now possible to set status codes other than those defined in the spec, as long as a reason string is given.
• The Date HTTP header is now set by default on all responses.
• Etag/If-None-Match requests now work with StaticFileHandler.
• StaticFileHandler no longer sets Cache-Control: public unnecessarily.
• When gzip is enabled in a tornado.web.Application, appropriate Vary: Accept-Encoding headers are now sent.
• It is no longer necessary to pass all handlers for a host in a single Application.add_handlers call. Now the request will be matched against the handlers for any host_pattern that includes the request’s Host header.

tornado.websocket

• Client-side WebSocket support is now available: tornado.websocket.websocket_connect.
• WebSocketHandler has new methods ping and on_pong to send pings to the browser (not supported on the draft76 protocol).

6.9.40 What’s new in Tornado 2.4.1

Nov 24, 2012

Bug fixes

• Fixed a memory leak in tornado.stack_context that was especially likely with long-running @gen.engine functions.
• tornado.auth.TwitterMixin now works on Python 3.
• Fixed a bug in which IOStream.read_until_close with a streaming callback would sometimes pass the last chunk of data to the final callback instead of the streaming callback.

6.9.41 What’s new in Tornado 2.4

Sep 4, 2012

General

• Fixed Python 3 bugs in tornado.auth, tornado.locale, and tornado.wsgi.

HTTP clients

• Removed the max_simultaneous_connections argument from tornado.httpclient (both implementations). This argument hasn’t been useful for some time (if you were using it you probably want max_clients instead).
• tornado.simple_httpclient now accepts and ignores HTTP 1xx status responses.
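The reason keyword described in the tornado.web notes above can be pictured as a fallback: use the standard reason phrase when the code has one, and require an explicit string otherwise. The helper below is a hypothetical illustration built on the stdlib http.client.responses table, not Tornado’s implementation.

```python
from http.client import responses

def status_line(code, reason=None):
    """Build an HTTP/1.1 status line, falling back to the standard
    reason phrase when one exists for the code. Codes outside the
    table require an explicit reason (hypothetical helper)."""
    if reason is None:
        reason = responses.get(code)
    if reason is None:
        raise ValueError("unknown status code %d needs an explicit reason" % code)
    return "HTTP/1.1 %d %s" % (code, reason)

assert status_line(404) == "HTTP/1.1 404 Not Found"
assert status_line(599, "Hypothetical Error") == "HTTP/1.1 599 Hypothetical Error"
```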
tornado.ioloop and tornado.iostream

• Fixed a bug introduced in 2.3 that would cause IOStream close callbacks to not run if there were pending reads.
• Improved error handling in SSLIOStream and SSL-enabled TCPServer.
• SSLIOStream.get_ssl_certificate now has a binary_form argument which is passed to SSLSocket.getpeercert.
• SSLIOStream.write can now be called while the connection is in progress, same as non-SSL IOStream (but be careful not to send sensitive data until the connection has completed and the certificate has been verified).
• IOLoop.add_handler cannot be called more than once with the same file descriptor. This was always true for epoll, but now the other implementations enforce it too.
• On Windows, TCPServer uses SO_EXCLUSIVEADDRUSER instead of SO_REUSEADDR.

tornado.template

• {% break %} and {% continue %} can now be used in looping constructs in templates.
• It is no longer an error for an if/else/for/etc block in a template to have an empty body.

tornado.testing

• New class tornado.testing.AsyncHTTPSTestCase is like AsyncHTTPTestCase, but enables SSL for the testing server (by default using a self-signed testing certificate).
• tornado.testing.main now accepts additional keyword arguments and forwards them to unittest.main.

tornado.web

• New method RequestHandler.get_template_namespace can be overridden to add additional variables without modifying keyword arguments to render_string.
• RequestHandler.add_header now works with WSGIApplication.
• RequestHandler.get_secure_cookie now handles a potential error case.
• RequestHandler.__init__ now calls super().__init__ to ensure that all constructors are called when multiple inheritance is used.
• Docs have been updated with a description of all available Application settings.

Other modules

• OAuthMixin now accepts "oob" as a callback_uri.
• OpenIdMixin now also returns the claimed_id field for the user.
• The tornado.platform.twisted shutdown sequence is now more compatible.
• The logging configuration used in tornado.options is now more tolerant of non-ascii byte strings.

6.9.42 What’s new in Tornado 2.3

May 31, 2012

HTTP clients

• tornado.httpclient.HTTPClient now supports the same constructor keyword arguments as AsyncHTTPClient.
• The max_clients keyword argument to AsyncHTTPClient.configure now works.
• tornado.simple_httpclient now supports the OPTIONS and PATCH HTTP methods.
• tornado.simple_httpclient is better about closing its sockets instead of leaving them for garbage collection.
• tornado.simple_httpclient correctly verifies SSL certificates for URLs containing IPv6 literals (this bug affected Python 2.5 and 2.6).
• tornado.simple_httpclient no longer includes basic auth credentials in the Host header when those credentials are extracted from the URL.
• tornado.simple_httpclient no longer modifies the caller-supplied header dictionary, which caused problems when following redirects.
• tornado.curl_httpclient now supports client SSL certificates (using the same client_cert and client_key arguments as tornado.simple_httpclient).

HTTP Server

• HTTPServer now works correctly with paths starting with //.
• HTTPHeaders.copy (inherited from dict.copy) now works correctly.
• HTTPConnection.address is now always the socket address, even for non-IP sockets. HTTPRequest.remote_ip is still always an IP-style address (fake data is used for non-IP sockets).
• Extra data at the end of multipart form bodies is now ignored, which fixes a compatibility problem with an iOS HTTP client library.

IOLoop and IOStream

• IOStream now has an error attribute that can be used to determine why a socket was closed.
• tornado.iostream.IOStream.read_until and read_until_regex are much faster with large input.
• IOStream.write performs better when given very large strings.
• IOLoop.instance() is now thread-safe.
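Making a singleton accessor like IOLoop.instance() thread-safe amounts to guarding lazy creation with a lock so two threads cannot each construct an instance. The class below is an illustrative stand-in showing the double-checked locking pattern, not Tornado’s actual implementation.

```python
import threading

class Loop:
    """Minimal stand-in for a thread-safe instance() singleton
    (illustrative only; not Tornado's IOLoop)."""
    _instance = None
    _lock = threading.Lock()

    @classmethod
    def instance(cls):
        if cls._instance is None:
            with cls._lock:
                if cls._instance is None:  # re-check under the lock
                    cls._instance = cls()
        return cls._instance

# Hammer the accessor from several threads; every thread must see
# the same object.
seen = set()

def worker():
    seen.add(id(Loop.instance()))

threads = [threading.Thread(target=worker) for _ in range(8)]
for t in threads:
    t.start()
for t in threads:
    t.join()
assert len(seen) == 1
```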
tornado.options

• tornado.options options with multiple=True that are set more than once now overwrite rather than append. This makes it possible to override values set in parse_config_file with parse_command_line.
• tornado.options --help output is now prettier.
• tornado.options.options now supports attribute assignment.

tornado.template

• Template files containing non-ASCII (utf8) characters now work on Python 3 regardless of the locale environment variables.
• Templates now support else clauses in try/except/finally/else blocks.

tornado.web

• tornado.web.RequestHandler now supports the PATCH HTTP method. Note that this means any existing methods named patch in RequestHandler subclasses will need to be renamed.
• The tornado.web.addslash and removeslash decorators now send permanent redirects (301) instead of temporary (302).
• RequestHandler.flush now invokes its callback whether there was any data to flush or not.
• Repeated calls to RequestHandler.set_cookie with the same name now overwrite the previous cookie instead of producing additional copies.
• tornado.web.OutputTransform.transform_first_chunk now takes and returns a status code in addition to the headers and chunk. This is a backwards-incompatible change to an interface that was never technically private, but was not included in the documentation and does not appear to have been used outside Tornado itself.
• Fixed a bug on Python versions before 2.6.5 when tornado.web.URLSpec regexes are constructed from unicode strings and keyword arguments are extracted.
• The reverse_url function in the template namespace now comes from the RequestHandler rather than the Application. (Unless overridden, RequestHandler.reverse_url is just an alias for the Application method.)
• The Etag header is now returned on 304 responses to an If-None-Match request, improving compatibility with some caches.
• tornado.web will no longer produce responses with status code 304 that also have entity headers such as Content-Length.

Other modules
• tornado.auth.FacebookGraphMixin no longer sends post_args redundantly in the url.
• The extra_params argument to tornado.escape.linkify may now be a callable, to allow parameters to be chosen separately for each link.
• tornado.gen no longer leaks StackContexts when a @gen.engine wrapped function is called repeatedly.
• tornado.locale.get_supported_locales no longer takes a meaningless cls argument.
• StackContext instances now have a deactivation callback that can be used to prevent further propagation.
• tornado.testing.AsyncTestCase.wait now resets its timeout on each call.
• tornado.wsgi.WSGIApplication now parses arguments correctly on Python 3.
• Exception handling on Python 3 has been improved; previously some exceptions such as UnicodeDecodeError would generate TypeErrors.

6.9.43 What's new in Tornado 2.2.1

Apr 23, 2012

Security fixes
• tornado.web.RequestHandler.set_header now properly sanitizes input values to protect against header injection, response splitting, etc. (it has always attempted to do this, but the check was incorrect). Note that redirects, the most likely source of such bugs, are protected by a separate check in RequestHandler.redirect.

Bug fixes
• Colored logging configuration in tornado.options is compatible with Python 3.2.3 (and 3.3).

6.9.44 What's new in Tornado 2.2

Jan 30, 2012

Highlights
• Updated and expanded WebSocket support.
• Improved compatibility in the Twisted/Tornado bridge.
• Template errors now generate better stack traces.
• Better exception handling in tornado.gen.

Security fixes
• tornado.simple_httpclient now disables SSLv2 in all cases. Previously SSLv2 would be allowed if the Python interpreter was linked against a pre-1.0 version of OpenSSL.
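The set_header fix in 2.2.1 guards against CRLF injection: a header value containing a carriage return or newline can terminate the header line and smuggle in extra headers or a response body. The check can be sketched in pure Python (Tornado's real validation differs in detail):

```python
import re

# Reject control characters that could terminate a header line and
# let an attacker inject additional headers (response splitting).
_UNSAFE = re.compile(r"[\x00-\x1f\x7f]")

def set_header(headers, name, value):
    value = str(value)
    if _UNSAFE.search(value):
        raise ValueError("Unsafe header value %r" % value)
    headers[name] = value

h = {}
set_header(h, "Location", "/safe/path")
assert h["Location"] == "/safe/path"
try:
    set_header(h, "X-Evil", "ok\r\nSet-Cookie: pwned=1")
except ValueError:
    pass
else:
    raise AssertionError("injection not caught")
```

Rejecting the whole value (rather than silently stripping the control characters) makes the bug visible to the application instead of masking it.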
Backwards-incompatible changes
• tornado.process.fork_processes now raises SystemExit if all child processes exit cleanly rather than returning None. The old behavior was surprising and inconsistent with most of the documented examples of this function (which did not check the return value).
• On Python 2.6, tornado.simple_httpclient only supports SSLv3. This is because Python 2.6 does not expose a way to support both SSLv3 and TLSv1 without also supporting the insecure SSLv2.
• tornado.websocket no longer supports the older "draft 76" version of the websocket protocol by default, although this version can be enabled by overriding tornado.websocket.WebSocketHandler.allow_draft76.

tornado.httpclient
• SimpleAsyncHTTPClient no longer hangs on HEAD requests, responses with no content, or empty POST/PUT response bodies.
• SimpleAsyncHTTPClient now supports 303 and 307 redirect codes.
• tornado.curl_httpclient now accepts non-integer timeouts.
• tornado.curl_httpclient now supports basic authentication with an empty password.

tornado.httpserver
• HTTPServer with xheaders=True will no longer accept X-Real-IP headers that don't look like valid IP addresses.
• HTTPServer now treats the Connection request header as case-insensitive.

tornado.ioloop and tornado.iostream
• IOStream.write now works correctly when given an empty string.
• IOStream.read_until (and read_until_regex) now perform better when there is a lot of buffered data, which improves performance of SimpleAsyncHTTPClient when downloading files with lots of chunks.
• SSLIOStream now works correctly when ssl_version is set to a value other than SSLv23.
• Idle IOLoops no longer wake up several times a second.
• tornado.ioloop.PeriodicCallback no longer triggers duplicate callbacks when stopped and started repeatedly.

tornado.template
• Exceptions in template code will now show better stack traces that reference lines from the original template file.
• {# and #} can now be used for comments (and unlike the old {% comment %} directive, these can wrap other template directives).
• Template directives may now span multiple lines.

tornado.web
• Now behaves better when given malformed Cookie headers.
• RequestHandler.redirect now has a status argument to send status codes other than 301 and 302.
• New method RequestHandler.on_finish may be overridden for post-request processing (as a counterpart to RequestHandler.prepare).
• StaticFileHandler now outputs Content-Length and Etag headers on HEAD requests.
• StaticFileHandler now has overridable get_version and parse_url_path methods for use in subclasses.
• RequestHandler.static_url now takes an include_host parameter (in addition to the old support for the RequestHandler.include_host attribute).

tornado.websocket
• Updated to support the latest version of the protocol, as finalized in RFC 6455.
• Many bugs were fixed in all supported protocol versions.
• tornado.websocket no longer supports the older "draft 76" version of the websocket protocol by default, although this version can be enabled by overriding tornado.websocket.WebSocketHandler.allow_draft76.
• WebSocketHandler.write_message now accepts a binary argument to send binary messages.
• Subprotocols (i.e. the Sec-WebSocket-Protocol header) are now supported; see the WebSocketHandler.select_subprotocol method for details.
• WebSocketHandler.get_websocket_scheme can be used to select the appropriate url scheme (ws:// or wss://) in cases where HTTPRequest.protocol is not set correctly.

Other modules
• tornado.auth.TwitterMixin.authenticate_redirect now takes a callback_uri parameter.
• tornado.auth.TwitterMixin.twitter_request now accepts both URLs and partial paths (complete URLs are useful for the search API which follows different patterns).
• Exception handling in tornado.gen has been improved. It is now possible to catch exceptions thrown by a Task.
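Subprotocol negotiation, as described in the websocket notes above, is simple at its core: the server picks one entry from the client's Sec-WebSocket-Protocol list, or declines. A sketch as a standalone helper (hypothetical protocol names, not Tornado's handler method):

```python
def select_subprotocol(client_protocols, server_supported=("chat.v2", "chat.v1")):
    """Return the first server-preferred protocol the client offered,
    or None to proceed without a subprotocol."""
    for proto in server_supported:
        if proto in client_protocols:
            return proto
    return None

assert select_subprotocol(["chat.v1", "binary"]) == "chat.v1"
assert select_subprotocol(["chat.v2", "chat.v1"]) == "chat.v2"
assert select_subprotocol(["unknown"]) is None
```

Iterating over the server's preference list (rather than the client's) lets the server upgrade clients that offer both an old and a new protocol version.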
• tornado.netutil.bind_sockets now works when getaddrinfo returns duplicate addresses.
• tornado.platform.twisted compatibility has been significantly improved. Twisted version 11.1.0 is now supported in addition to 11.0.0.
• tornado.process.fork_processes correctly reseeds the random module even when os.urandom is not implemented.
• tornado.testing.main supports a new flag --exception_on_interrupt, which can be set to false to make Ctrl-C kill the process more reliably (at the expense of stack traces when it does so).
• tornado.version_info is now a four-tuple so official releases can be distinguished from development branches.

6.9.45 What's new in Tornado 2.1.1

Oct 4, 2011

Bug fixes
• Fixed handling of closed connections with the epoll (i.e. Linux) IOLoop. Previously, closed connections could be shut down too early, which most often manifested as "Stream is closed" exceptions in SimpleAsyncHTTPClient.
• Fixed a case in which chunked responses could be closed prematurely, leading to truncated output.
• IOStream.connect now reports errors more consistently via logging and the close callback (this affects e.g. connections to localhost on FreeBSD).
• IOStream.read_bytes again accepts both int and long arguments.
• PeriodicCallback no longer runs repeatedly when IOLoop iterations complete faster than the resolution of time.time() (mainly a problem on Windows).

Backwards-compatibility note
• Listening for IOLoop.ERROR alone is no longer sufficient for detecting closed connections on an otherwise unused socket. IOLoop.ERROR must always be used in combination with READ or WRITE.

6.9.46 What's new in Tornado 2.1

Sep 20, 2011

Backwards-incompatible changes
• Support for secure cookies written by pre-1.0 releases of Tornado has been removed. The RequestHandler.get_secure_cookie method no longer takes an include_name parameter.
• The debug application setting now causes stack traces to be displayed in the browser on uncaught exceptions. Since this may leak sensitive information, debug mode is not recommended for public-facing servers.

Security fixes
• Diginotar has been removed from the default CA certificates file used by SimpleAsyncHTTPClient.

New modules
• tornado.gen: A generator-based interface to simplify writing asynchronous functions.
• tornado.netutil: Parts of tornado.httpserver have been extracted into a new module for use with non-HTTP protocols.
• tornado.platform.twisted: A bridge between the Tornado IOLoop and the Twisted Reactor, allowing code written for Twisted to be run on Tornado.
• tornado.process: Multi-process mode has been improved, and can now restart crashed child processes. A new entry point has been added at tornado.process.fork_processes, although tornado.httpserver.HTTPServer.start is still supported.

tornado.web
• tornado.web.RequestHandler.write_error replaces get_error_html as the preferred way to generate custom error pages (get_error_html is still supported, but deprecated).
• In tornado.web.Application, handlers may be specified by (fully-qualified) name instead of importing and passing the class object itself.
• It is now possible to use a custom subclass of StaticFileHandler with the static_handler_class application setting, and this subclass can override the behavior of the static_url method.
• StaticFileHandler subclasses can now override get_cache_time to customize cache control behavior.
• tornado.web.RequestHandler.get_secure_cookie now has a max_age_days parameter to allow applications to override the default one-month expiration.
• set_cookie now accepts a max_age keyword argument to set the max-age cookie attribute (note underscore vs dash).
• tornado.web.RequestHandler.set_default_headers may be overridden to set headers in a way that does not get reset during error handling.
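The idea behind the new tornado.gen module listed above is a decorator that drives a generator, resuming it each time a result becomes available. A synchronous toy version of that trampoline (not Tornado's actual API, where the yielded tasks are asynchronous):

```python
def engine(func):
    """Run a generator-based 'coroutine': each yielded value is a
    zero-argument callable whose result is sent back into the generator."""
    def wrapper(*args, **kwargs):
        gen = func(*args, **kwargs)
        try:
            task = next(gen)
            while True:
                # Run the task, feed its result back at the yield point.
                task = gen.send(task())
        except StopIteration:
            pass
    return wrapper

results = []

@engine
def fetch_twice():
    a = yield (lambda: "first response")
    results.append(a)
    b = yield (lambda: "second response")
    results.append(b)

fetch_twice()
assert results == ["first response", "second response"]
```

The payoff in the real module is that code reads top-to-bottom even though each yield point suspends until a callback fires; exceptions raised by a task propagate back into the generator at the yield, which is what makes them catchable with an ordinary try/except.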
• RequestHandler.add_header can now be used to set a header that can appear multiple times in the response.
• RequestHandler.flush can now take a callback for flow control.
• The application/json content type can now be gzipped.
• The cookie-signing functions are now accessible as static functions tornado.web.create_signed_value and tornado.web.decode_signed_value.

tornado.httpserver
• To facilitate some advanced multi-process scenarios, HTTPServer has a new method add_sockets, and socket-opening code is available separately as tornado.netutil.bind_sockets.
• The cookies property is now available on tornado.httpserver.HTTPRequest (it is also available in its old location as a property of RequestHandler).
• tornado.httpserver.HTTPServer.bind now takes a backlog argument with the same meaning as socket.listen.
• HTTPServer can now be run on a unix socket as well as TCP.
• Fixed exception at startup when socket.AI_ADDRCONFIG is not available, as on Windows XP.

IOLoop and IOStream
• IOStream performance has been improved, especially for small synchronous requests.
• New methods tornado.iostream.IOStream.read_until_close and tornado.iostream.IOStream.read_until_regex.
• IOStream.read_bytes and IOStream.read_until_close now take a streaming_callback argument to return data as it is received rather than all at once.
• IOLoop.add_timeout now accepts datetime.timedelta objects in addition to absolute timestamps.
• PeriodicCallback now sticks to the specified period instead of creeping later due to accumulated errors.
• tornado.ioloop.IOLoop and tornado.httpclient.HTTPClient now have close() methods that should be used in applications that create and destroy many of these objects.
• IOLoop.install can now be used to use a custom subclass of IOLoop as the singleton without monkey-patching.
• IOStream should now always call the close callback instead of the connect callback on a connection error.
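The cookie-signing functions mentioned above pair a value with a timestamp and an HMAC. A simplified sketch of that scheme (the field layout and secret are illustrative; Tornado's actual wire format differs):

```python
import base64
import hashlib
import hmac
import time

SECRET = b"application secret"  # stand-in for the app's cookie_secret

def _signature(*parts):
    h = hmac.new(SECRET, digestmod=hashlib.sha256)
    for part in parts:
        h.update(part)
    return h.hexdigest().encode()

def create_signed_value(name, value, clock=time.time):
    payload = base64.b64encode(value.encode())
    timestamp = str(int(clock())).encode()
    sig = _signature(name.encode(), payload, timestamp)
    return b"|".join([payload, timestamp, sig])

def decode_signed_value(name, signed, max_age_days=31, clock=time.time):
    payload, timestamp, sig = signed.split(b"|")
    expected = _signature(name.encode(), payload, timestamp)
    if not hmac.compare_digest(sig, expected):
        return None  # tampered or signed under a different name
    if int(timestamp) < clock() - max_age_days * 86400:
        return None  # expired
    return base64.b64decode(payload).decode()

cookie = create_signed_value("session", "user=42")
assert decode_signed_value("session", cookie) == "user=42"
assert decode_signed_value("session", b"x" + cookie[1:]) is None
```

Signing over the cookie name as well as the value is what prevents a signed value from being replayed under a different cookie, and hmac.compare_digest avoids timing side channels in the comparison.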
• The IOStream close callback will no longer be called while there are pending read callbacks that can be satisfied with buffered data.

tornado.simple_httpclient
• Now supports client SSL certificates with the client_key and client_cert parameters to tornado.httpclient.HTTPRequest.
• Now takes a maximum buffer size, to allow reading files larger than 100MB.
• Now works with HTTP 1.0 servers that don't send a Content-Length header.
• The allow_nonstandard_methods flag on HTTP client requests now permits methods other than POST and PUT to contain bodies.
• Fixed file descriptor leaks and multiple callback invocations in SimpleAsyncHTTPClient.
• No longer consumes extra connection resources when following redirects.
• Now works with buggy web servers that separate headers with \n instead of \r\n\r\n.
• Now sets response.request_time correctly.
• Connect timeouts now work correctly.

Other modules
• tornado.auth.OpenIdMixin now uses the correct realm when the callback URI is on a different domain.
• tornado.autoreload has a new command-line interface which can be used to wrap any script. This replaces the --autoreload argument to tornado.testing.main and is more robust against syntax errors.
• tornado.autoreload.watch can be used to watch files other than the sources of imported modules.
• tornado.database.Connection has new variants of execute and executemany that return the number of rows affected instead of the last inserted row id.
• tornado.locale.load_translations now accepts any properly-formatted locale name, not just those in the predefined LOCALE_NAMES list.
• tornado.options.define now takes a group parameter to group options in --help output.
• Template loaders now take a namespace constructor argument to add entries to the template namespace.
• tornado.websocket now supports the latest ("hybi-10") version of the protocol (the old version, "hixie-76", is still supported; the correct version is detected automatically).
• tornado.websocket now works on Python 3.

Bug fixes
• Windows support has been improved. Windows is still not an officially supported platform, but the test suite now passes and tornado.autoreload works.
• Uploading files whose names contain special characters will now work.
• Cookie values containing special characters are now properly quoted and unquoted.
• Multi-line headers are now supported.
• Repeated Content-Length headers (which may be added by certain proxies) are now supported in HTTPServer.
• Unicode string literals now work in template expressions.
• The template {% module %} directive now works even if applications use a template variable named modules.
• Requests with "Expect: 100-continue" now work on Python 3.

6.9.47 What's new in Tornado 2.0

Jun 21, 2011

Major changes:
* Template output is automatically escaped by default; see backwards compatibility note below.
* The default AsyncHTTPClient implementation is now simple_httpclient.
* Python 3.2 is now supported.

Backwards compatibility:
* Template autoescaping is enabled by default. Applications upgrading from a previous release of Tornado must either disable autoescaping or adapt their templates to work with it. For most applications, the simplest way to do this is to pass autoescape=None to the Application constructor. Note that this affects certain built-in methods, e.g. xsrf_form_html and linkify, which must now be called with {% raw %} instead of {{ }}.
* Applications that wish to continue using curl_httpclient instead of simple_httpclient may do so by calling AsyncHTTPClient.configure("tornado.curl_httpclient.CurlAsyncHTTPClient") at the beginning of the process. Users of Python 2.5 will probably want to use curl_httpclient as simple_httpclient only supports ssl on Python 2.6+.
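Template autoescaping, the headline 2.0 change above, can be sketched with html.escape: every interpolated value is escaped unless explicitly marked raw. This is a toy renderer illustrating the concept, not tornado.template:

```python
import html

class Raw(str):
    """Marker for values that must bypass autoescaping (like {% raw %})."""

def render(template, **variables):
    # Substitute {{ name }} placeholders, escaping by default.
    out = template
    for name, value in variables.items():
        rendered = value if isinstance(value, Raw) else html.escape(str(value))
        out = out.replace("{{ %s }}" % name, rendered)
    return out

assert render("<p>{{ user }}</p>", user="<script>") == "<p>&lt;script&gt;</p>"
assert render("{{ widget }}", widget=Raw("<b>ok</b>")) == "<b>ok</b>"
```

Escaping by default turns XSS from an opt-out problem into an opt-in one: only values the developer explicitly marks raw can carry markup into the page.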
* Python 3 compatibility involved many changes throughout the codebase, so users are encouraged to test their applications more thoroughly than usual when upgrading to this release.

Other changes in this release:
* Templates support several new directives:
  - {% autoescape ... %} to control escaping behavior
  - {% raw ... %} for unescaped output
  - {% module ... %} for calling UIModules
* {% module Template(path, **kwargs) %} may now be used to call another template with an independent namespace
* All IOStream callbacks are now run directly on the IOLoop via add_callback.
* HTTPServer now supports IPv6 where available. To disable, pass family=socket.AF_INET to HTTPServer.bind().
* HTTPClient now supports IPv6, configurable via allow_ipv6=bool on the HTTPRequest. allow_ipv6 defaults to false on simple_httpclient and true on curl_httpclient.
* RequestHandlers can use an encoding other than utf-8 for query parameters by overriding decode_argument()
* Performance improvements, especially for applications that use a lot of IOLoop timeouts
* HTTP OPTIONS method no longer requires an XSRF token.
* JSON output (RequestHandler.write(dict)) now sets Content-Type to application/json
* Etag computation can now be customized or disabled by overriding RequestHandler.compute_etag
* USE_SIMPLE_HTTPCLIENT environment variable is no longer supported. Use AsyncHTTPClient.configure instead.

6.9.48 What's new in Tornado 1.2.1

Mar 3, 2011

We are pleased to announce the release of Tornado 1.2.1, available from
https://github.com/downloads/facebook/tornado/tornado-1.2.1.tar.gz

This release contains only two small changes relative to version 1.2:
* FacebookGraphMixin has been updated to work with a recent change to the Facebook API.
* Running "setup.py install" will no longer attempt to automatically install pycurl. This wasn't working well on platforms where the best way to install pycurl is via something like apt-get instead of easy_install.
This is an important upgrade if you are using FacebookGraphMixin, but otherwise it can be safely ignored.

6.9.49 What's new in Tornado 1.2

Feb 20, 2011

We are pleased to announce the release of Tornado 1.2, available from
https://github.com/downloads/facebook/tornado/tornado-1.2.tar.gz

Backwards compatibility notes:
* This release includes the backwards-incompatible security change from version 1.1.1. Users upgrading from 1.1 or earlier should read the release notes from that release:
  http://groups.google.com/group/python-tornado/browse_thread/thread/b36191c781580cde
* StackContexts that do something other than catch exceptions may need to be modified to be reentrant.
  https://github.com/tornadoweb/tornado/commit/7a7e24143e77481d140fb5579bc67e4c45cbcfad
* When XSRF tokens are used, the token must also be present on PUT and DELETE requests (anything but GET and HEAD)

New features:
* A new HTTP client implementation is available in the module tornado.simple_httpclient. This HTTP client does not depend on pycurl. It has not yet been tested extensively in production, but is intended to eventually replace the pycurl-based HTTP client in a future release of Tornado. To transparently replace tornado.httpclient.AsyncHTTPClient with this new implementation, you can set the environment variable USE_SIMPLE_HTTPCLIENT=1 (note that the next release of Tornado will likely include a different way to select HTTP client implementations)
* Request logging is now done by the Application rather than the RequestHandler.
  Logging behavior may be customized by either overriding Application.log_request in a subclass or by passing log_function as an Application setting
* Application.listen(port): Convenience method as an alternative to explicitly creating an HTTPServer
* tornado.escape.linkify(): Wrap urls in <a> tags
* RequestHandler.create_signed_value(): Create signatures like the secure_cookie methods without setting cookies.
* tornado.testing.get_unused_port(): Returns a port selected in the same way as in AsyncHTTPTestCase
* AsyncHTTPTestCase.fetch(): Convenience method for synchronous fetches
* IOLoop.set_blocking_signal_threshold(): Set a callback to be run when the IOLoop is blocked.
* IOStream.connect(): Asynchronously connect a client socket
* AsyncHTTPClient.handle_callback_exception(): May be overridden in subclass for custom error handling
* httpclient.HTTPRequest has two new keyword arguments, validate_cert and ca_certs. Setting validate_cert=False will disable all certificate checks when fetching https urls. ca_certs may be set to a filename containing trusted certificate authorities (defaults will be used if this is unspecified)
* HTTPRequest.get_ssl_certificate(): Returns the client's SSL certificate (if client certificates were requested in the server's ssl_options)
* StaticFileHandler can be configured to return a default file (e.g. index.html) when a directory is requested
* Template directives of the form "{% from x import y %}" are now supported (in addition to the existing support for "{% import x %}")
* FacebookGraphMixin.get_authenticated_user now accepts a new parameter 'extra_fields' which may be used to request additional information about the user

Bug fixes:
* auth: Fixed KeyError with Facebook offline_access
* auth: Uses request.uri instead of request.path as the default redirect so that parameters are preserved.
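tornado.escape.linkify(), listed above, wraps URLs in anchor tags. A minimal regex-based sketch of the idea (Tornado's real URL pattern and escaping are far more thorough):

```python
import html
import re

_URL_RE = re.compile(r'https?://[^\s<>"]+')

def linkify(text):
    """HTML-escape text, then wrap each http(s) URL in an <a> tag."""
    def repl(match):
        url = match.group(0)
        return '<a href="%s">%s</a>' % (url, url)
    return _URL_RE.sub(repl, html.escape(text))

out = linkify("see https://www.tornadoweb.org for docs")
assert out == ('see <a href="https://www.tornadoweb.org">'
               'https://www.tornadoweb.org</a> for docs')
```

Escaping before linkifying is the important ordering: it prevents user-supplied markup from surviving into the output while still letting the generated anchor tags through.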
* escape: xhtml_escape() now returns a unicode string, not utf8-encoded bytes
* ioloop: Callbacks added with add_callback are now run in the order they were added
* ioloop: PeriodicCallback.stop can now be called from inside the callback.
* iostream: Fixed several bugs in SSLIOStream
* iostream: Detect when the other side has closed the connection even with the select()-based IOLoop
* iostream: read_bytes(0) now works as expected
* iostream: Fixed bug when writing large amounts of data on windows
* iostream: Fixed infinite loop that could occur with unhandled exceptions
* httpclient: Fix bugs when some requests use proxies and others don't
* httpserver: HTTPRequest.protocol is now set correctly when using the built-in SSL support
* httpserver: When using multiple processes, the standard library's random number generator is re-seeded in each child process
* httpserver: With xheaders enabled, X-Forwarded-Proto is supported as an alternative to X-Scheme
* httpserver: Fixed bugs in multipart/form-data parsing
* locale: format_date() now behaves sanely with dates in the future
* locale: Updates to the language list
* stack_context: Fixed bug with contexts leaking through reused IOStreams
* stack_context: Simplified semantics and improved performance
* web: The order of css_files from UIModules is now preserved
* web: Fixed error with default_host redirect
* web: StaticFileHandler works when os.path.sep != '/' (i.e. on Windows)
* web: Fixed a caching-related bug in StaticFileHandler when a file's timestamp has changed but its contents have not.
* web: Fixed bugs with HEAD requests and e.g.
  Etag headers
* web: Fix bugs when different handlers have different static_paths
* web: @removeslash will no longer cause a redirect loop when applied to the root path
* websocket: Now works over SSL
* websocket: Improved compatibility with proxies

Many thanks to everyone who contributed patches, bug reports, and feedback that went into this release!

-Ben

6.9.50 What's new in Tornado 1.1.1

Feb 8, 2011

Tornado 1.1.1 is a BACKWARDS-INCOMPATIBLE security update that fixes an XSRF vulnerability. It is available at
https://github.com/downloads/facebook/tornado/tornado-1.1.1.tar.gz

This is a backwards-incompatible change. Applications that previously relied on a blanket exception for XMLHTTPRequest may need to be modified to explicitly include the XSRF token when making ajax requests. The tornado chat demo application demonstrates one way of adding this token (specifically the function postJSON in demos/chat/static/chat.js).

More information about this change and its justification can be found at
http://www.djangoproject.com/weblog/2011/feb/08/security/
http://weblog.rubyonrails.org/2011/2/8/csrf-protection-bypass-in-ruby-on-rails

6.9.51 What's new in Tornado 1.1

Sep 7, 2010

We are pleased to announce the release of Tornado 1.1, available from
https://github.com/downloads/facebook/tornado/tornado-1.1.tar.gz

Changes in this release:
* RequestHandler.async_callback and related functions in other classes are no longer needed in most cases (although it's harmless to continue using them). Uncaught exceptions will now cause the request to be closed even in a callback. If you're curious how this works, see the new tornado.stack_context module.
* The new tornado.testing module contains support for unit testing asynchronous IOLoop-based code.
* AsyncHTTPClient has been rewritten (the new implementation was available as AsyncHTTPClient2 in Tornado 1.0; both names are supported for backwards compatibility).
* The tornado.auth module has had a number of updates, including support for OAuth 2.0 and the Facebook Graph API, and upgrading Twitter and Google support to OAuth 1.0a.
* The websocket module is back and supports the latest version (76) of the websocket protocol. Note that this module's interface is different from the websocket module that appeared in pre-1.0 versions of Tornado.
* New method RequestHandler.initialize() can be overridden in subclasses to simplify handling arguments from URLSpecs. The sequence of methods called during initialization is documented at http://tornadoweb.org/documentation#overriding-requesthandler-methods
* get_argument() and related methods now work on PUT requests in addition to POST.
* The httpclient module now supports HTTP proxies.
* When HTTPServer is run in SSL mode, the SSL handshake is now non-blocking.
* Many smaller bug fixes and documentation updates

Backwards-compatibility notes:
* While most users of Tornado should not have to deal with the stack_context module directly, users of worker thread pools and similar constructs may need to use stack_context.wrap and/or NullContext to avoid memory leaks.
* The new AsyncHTTPClient still works with libcurl version 7.16.x, but it performs better when both libcurl and pycurl are at least version 7.18.2.
* OAuth transactions started under previous versions of the auth module cannot be completed under the new module. This applies only to the initial authorization process; once an authorized token is issued that token works with either version.

Many thanks to everyone who contributed patches, bug reports, and feedback that went into this release!
-Ben

6.9.52 What's new in Tornado 1.0.1

Aug 13, 2010

This release fixes a bug with RequestHandler.get_secure_cookie, which would in some circumstances allow an attacker to tamper with data stored in the cookie.

6.9.53 What's new in Tornado 1.0

July 22, 2010

We are pleased to announce the release of Tornado 1.0, available from
https://github.com/downloads/facebook/tornado/tornado-1.0.tar.gz.
There have been many changes since version 0.2; here are some of the highlights:

New features:
* Improved support for running other WSGI applications in a Tornado server (tested with Django and CherryPy)
* Improved performance on Mac OS X and BSD (kqueue-based IOLoop), and experimental support for win32
* Rewritten AsyncHTTPClient available as tornado.httpclient.AsyncHTTPClient2 (this will become the default in a future release)
* Support for standard .mo files in addition to .csv in the locale module
* Pre-forking support for running multiple Tornado processes at once (see HTTPServer.start())
* SSL and gzip support in HTTPServer
* reverse_url() function refers to urls from the Application config by name from templates and RequestHandlers
* RequestHandler.on_connection_close() callback is called when the client has closed the connection (subject to limitations of the underlying network stack, any proxies, etc)
* Static files can now be served somewhere other than /static/ via the static_url_prefix application setting
* URL regexes can now use named groups ("(?P<name>)") to pass arguments to get()/post() via keyword instead of position
* HTTP header dictionary-like objects now support multiple values for the same header via the get_all() and add() methods.
* Several new options in the httpclient module, including prepare_curl_callback and header_callback
* Improved logging configuration in tornado.options.
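Named groups in URL regexes, listed in the 1.0 features above, map matched path segments to keyword arguments. The mechanism underneath is just re.match plus groupdict; a dispatch sketch (not Tornado's URLSpec):

```python
import re

# A hypothetical route table: compiled pattern -> handler callable.
routes = [
    (re.compile(r"^/entry/(?P<year>\d{4})/(?P<slug>[\w-]+)$"),
     lambda year, slug: "entry %s from %s" % (slug, year)),
]

def dispatch(path):
    for pattern, handler in routes:
        match = pattern.match(path)
        if match:
            # groupdict() turns named groups into keyword arguments,
            # so handlers no longer depend on argument position.
            return handler(**match.groupdict())
    return None

assert dispatch("/entry/2010/hello-world") == "entry hello-world from 2010"
assert dispatch("/missing") is None
```

Keyword dispatch is the reason named groups are attractive: reordering the groups in the regex no longer silently swaps the arguments handed to get()/post().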
* UIModule.html_body() can be used to return html to be inserted at the end of the document body.

Backwards-incompatible changes:
* RequestHandler.get_error_html() now receives the exception object as a keyword argument if the error was caused by an uncaught exception.
* Secure cookies are now more secure, but incompatible with cookies set by Tornado 0.2. To read cookies set by older versions of Tornado, pass include_name=False to RequestHandler.get_secure_cookie()
* Parameters passed to RequestHandler.get/post() by extraction from the path now have %-escapes decoded, for consistency with the processing that was already done with other query parameters.

Many thanks to everyone who contributed patches, bug reports, and feedback that went into this release!

-Ben

CHAPTER SEVEN

DISCUSSION AND SUPPORT

You can discuss Tornado on the Tornado developer mailing list, and report bugs on the GitHub issue tracker. Links to additional resources can be found on the Tornado wiki. New releases are announced on the announcements mailing list.

Tornado is available under the Apache License, Version 2.0.

This web site and all documentation is licensed under Creative Commons 3.0.
Discussion and support PYTHON MODULE INDEX t tornado.auth, 135 tornado.autoreload, 146 tornado.concurrent, 147 tornado.curl_httpclient, 92 tornado.escape, 72 tornado.gen, 120 tornado.http1connection, 99 tornado.httpclient, 85 tornado.httpserver, 83 tornado.httputil, 92 tornado.ioloop, 101 tornado.iostream, 107 tornado.locale, 74 tornado.locks, 124 tornado.log, 149 tornado.netutil, 113 tornado.options, 150 tornado.platform.asyncio, 144 tornado.platform.caresresolver, 143 tornado.platform.twisted, 144 tornado.process, 133 tornado.queues, 129 tornado.routing, 67 tornado.simple_httpclient, 92 tornado.tcpclient, 116 tornado.tcpserver, 117 tornado.template, 62 tornado.testing, 154 tornado.util, 159 tornado.web, 40 tornado.websocket, 77 tornado.wsgi, 142 Tornado Documentation, Release 6.3.3 250 Python Module Index INDEX Symbols application (tornado.web.RequestHandler attribute), _oauth_consumer_token() (tornado.auth.OAuthMixin 49 method), 137 ArgReplacer (class in tornado.util), 161 _oauth_get_user_future() (tor- arguments (tornado.httputil.HTTPServerRequest at- nado.auth.OAuthMixin method), 137 tribute), 94 acquire() (tornado.locks.BoundedSemaphore method), 128 acquire() (tornado.locks.Lock method), 129 AsyncIOMainLoop (class in tornado.platform.asyncio), acquire() (tornado.locks.Semaphore method), 128 add() (tornado.httputil.HTTPHeaders method), 93 add_accept_handler() (in module tornado.netutil), authenticate_redirect() (tor- 113 add_callback() (tornado.ioloop.IOLoop method), 105 authenticate_redirect() (tor- add_callback_from_signal() (tor- nado.ioloop.IOLoop method), 105 add_future() (tornado.ioloop.IOLoop method), 105 add_handler() (tornado.ioloop.IOLoop method), 105 add_handlers() (tornado.web.Application method), 55 authorize_redirect() (tornado.auth.OAuthMixin add_header() (tornado.web.RequestHandler method), 44 add_parse_callback() (in module tornado.options), 151 B add_parse_callback() (tor- BaseIOStream (class in tornado.iostream), 108 
Package ‘ollggamma’

October 14, 2022

Title Odd Log-Logistic Generalized Gamma Probability Distribution
Version 1.0.2
Description Density, distribution function, quantile function and random generation for the Odd Log-Logistic Generalized Gamma proposed in Prataviera, F. et al (2017) <doi:10.1080/00949655.2016.1238088>.
License MIT + file LICENSE
Encoding UTF-8
LazyData true
URL https://mjsaldanha.com/posts/ollggamma
BugReports https://github.com/matheushjs/ollggamma
RoxygenNote 7.0.2
Depends R (>= 3.1.0)
Suggests testthat, ggamma
NeedsCompilation no
Author <NAME> [aut, cre], <NAME> [aut]
Maintainer <NAME> <<EMAIL>>
Repository CRAN
Date/Publication 2020-02-20 09:30:03 UTC

R topics documented:

OLL-G.Gamma

OLL-G.Gamma          Odd Log-Logistic Generalized Gamma Probability Distribution

Description

Density, distribution function, quantile function and random generation for the Odd Log-Logistic Generalized Gamma probability distribution. Fast implementation of density, distribution function, quantile function and random generation for the Odd Log-Logistic Generalized Gamma probability distribution.

Usage

dollggamma(x, a, b, k, lambda, log = F)
pollggamma(q, a, b, k, lambda, lower.tail = TRUE, log.p = FALSE)
qollggamma(p, a, b, k, lambda, lower.tail = TRUE, log.p = FALSE)
rollggamma(n, a, b, k, lambda)

Arguments

x, q             vector of quantiles.
a, b, k, lambda  Parameters of the distribution, all of which must be positive.
log, log.p       logical; if TRUE, probabilities p are given as log(p).
lower.tail       logical; if TRUE (default), probabilities are P[X <= x]; otherwise, P[X > x].
p                vector of probabilities.
n                number of observations. If length(n) > 1, the length is taken to be the number required.

Details

This package follows a naming convention consistent with base R, where the names of the density (or probability mass) function, distribution function, quantile function and random generation function are prefixed with d, p, q, and r, respectively.
Behaviour of the functions is consistent with base R: invalid parameter values yield NaN, while values outside the function's support yield 0 (e.g. for non-integers in discrete distributions, or for negative values in functions with non-negative support).

The Odd Log-Logistic Generalized Gamma (OLL-GG) distribution (Prataviera et al., 2017) is generated by applying a transformation to the GG cumulative distribution, defining a new cdf F(t) as follows:

    F(t) = G(t)^lambda / ( G(t)^lambda + [1 - G(t)]^lambda )

where G(t) is the cdf of the GG distribution (given below), and lambda is the new parameter introduced by this transformation. The probability density function is then:

    f(t) = lambda * g(t) * [ G(t) * (1 - G(t)) ]^(lambda - 1) / ( G(t)^lambda + [1 - G(t)]^lambda )^2

where g(t) is the pdf of the GG distribution. The quantile function is:

    F^-1(q) = G^-1( q^(1/lambda) / ( q^(1/lambda) + (1 - q)^(1/lambda) ) )

where G^-1 is the GG quantile function.

The generalized gamma distribution proposed by Stacy (1962) has parameters a, d, p, but here we adopt the reparametrization

    a = a,   b = p,   k = d/p

as is used by the R package *ggamma*.

Probability density function:

    f(x) = b * x^(b*k - 1) * exp[ -(x/a)^b ] / ( a^(b*k) * Gamma(k) )

Cumulative distribution function:

    F(x) = gamma(k, (x/a)^b) / Gamma(k)

where gamma(., .) is the lower incomplete gamma function. The above can be written in terms of a Gamma(alpha, beta) distribution. Let T ~ Gamma(k, 1) and denote its cumulative distribution by F_T(t); then the cumulative distribution function of the generalized gamma can be written as

    F(x) = F_T( (x/a)^b )

which allows us to write the quantile function of the generalized gamma in terms of the gamma one (Q_T(u) being the quantile function of T):

    Q(u) = a * Q_T(u)^(1/b)

from which random numbers can be drawn.

Value

‘dollggamma’ gives the density, ‘pollggamma’ gives the distribution function, ‘qollggamma’ gives the quantile function, and ‘rollggamma’ generates random samples.

The length of the result is determined by ‘n’ for ‘rollggamma’, and is the maximum of the lengths of the numerical arguments for the other functions.
The distribution support is x > 0; any value outside this range leads to ‘dollggamma’ returning 0.

References

<NAME>., <NAME>., <NAME>., & <NAME>. (2017). The odd log-logistic generalized gamma model: properties, applications, classical and bayesian approach. Biom Biostat Int J, 6(4), 00174.

<NAME>. (1962). A generalization of the gamma distribution. The Annals of Mathematical Statistics, 33(3), 1187-1192.

Examples

x = seq(0.001, 5, length=1000);
plot(x, dollggamma(x, 0.5, 1.2, 5, 0.3), col=2, type="l", lwd=4, ylim=c(0, 1));
lines(x, pollggamma(x, 0.5, 1.2, 5, 0.3), col=4, lwd=4);
legend("right", c("PDF", "CDF"), col=c(2, 4), lwd=4);

r = rgamma(n = 100, 2, 2);
lik = function(params) -sum(dollggamma(r, params[1], params[2], params[3], params[4], log=TRUE));
optPar = optim(par=c(1, 1, 1, 1), fn=lik, method="L-BFGS-B", lower=0.00001, upper=Inf)$par;

x = seq(0.001, 5, length=1000);
plot(x, dgamma(x, 2, 2), type="l", col=2, lwd=4, ylim=c(0, 1));
lines(x, dollggamma(x, optPar[1], optPar[2], optPar[3], optPar[4]), col=4, lwd=4);
legend("topright", c("Gamma(shape=2, rate=2)", "MLE OLL-Gen.Gamma"), col=c(2, 4), lwd=4);
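The formulas in the Details section lend themselves to a quick numeric sanity check. The sketch below uses Python rather than R, purely for illustration; the helper names (dgg, oll_cdf, oll_quantile) are invented, and an Exponential(1) cdf stands in for the generalized-gamma G(t) in the round-trip test, since the odd log-logistic transform and its quantile inverse hold for any continuous base cdf.

```python
import math

def dgg(x, a, b, k):
    """Generalized gamma density in the (a, b, k) parametrization:
    b * x^(b*k - 1) * exp(-(x/a)^b) / (a^(b*k) * Gamma(k))."""
    return b * x ** (b * k - 1.0) * math.exp(-((x / a) ** b)) / (a ** (b * k) * math.gamma(k))

def oll_cdf(t, lam, G):
    """Odd log-logistic transform of a base cdf G:
    F(t) = G^lam / (G^lam + (1 - G)^lam)."""
    g = G(t)
    return g ** lam / (g ** lam + (1.0 - g) ** lam)

def oll_quantile(u, lam, Ginv):
    """Inverse of the transform:
    Q(u) = Ginv( u^(1/lam) / (u^(1/lam) + (1 - u)^(1/lam)) )."""
    num = u ** (1.0 / lam)
    den = num + (1.0 - u) ** (1.0 / lam)
    return Ginv(num / den)

# The GG density should integrate to 1; a crude Riemann sum over (0, 10] checks this
# for a = 1, b = 2, k = 1.5.
step = 0.001
area = sum(dgg(i * step, 1.0, 2.0, 1.5) for i in range(1, 10001)) * step

# Round trip through the OLL transform, with an Exponential(1) cdf standing in for G.
G = lambda t: 1.0 - math.exp(-t)
Ginv = lambda p: -math.log(1.0 - p)
u = oll_cdf(oll_quantile(0.7, 0.3, Ginv), 0.3, G)
```

The density integral approximates 1, and the transform round trip recovers the input probability 0.7, mirroring the relationship between pollggamma and qollggamma.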
Package ‘hms’

March 21, 2023

Title Pretty Time of Day
Date 2023-03-21
Version 1.1.3
Description Implements an S3 class for storing and formatting time-of-day values, based on the 'difftime' class.
Imports lifecycle, methods, pkgconfig, rlang (>= 1.0.2), vctrs (>= 0.3.8)
Suggests crayon, lubridate, pillar (>= 1.1.0), testthat (>= 3.0.0)
License MIT + file LICENSE
Encoding UTF-8
URL https://hms.tidyverse.org/, https://github.com/tidyverse/hms
BugReports https://github.com/tidyverse/hms/issues
RoxygenNote 7.2.3
Config/testthat/edition 3
Config/autostyle/scope line_breaks
Config/autostyle/strict false
Config/Needs/website tidyverse/tidytemplate
NeedsCompilation no
Author <NAME> [aut, cre] (<https://orcid.org/0000-0002-1416-3412>), R Consortium [fnd], RStudio [fnd]
Maintainer <NAME> <<EMAIL>>
Repository CRAN
Date/Publication 2023-03-21 18:10:02 UTC

R topics documented:

hms-package
hms
parse_hms
round_hms
vec_cast.hms
vec_ptype2.hms

hms-package          hms: Pretty Time of Day

Description

Implements an S3 class for storing and formatting time-of-day values, based on the ’difftime’ class.

Details

[Stable]

Author(s)

Maintainer: <NAME> <<EMAIL>> (ORCID)

Other contributors:
• R Consortium [funder]
• RStudio [funder]

See Also

Useful links:
• https://hms.tidyverse.org/
• https://github.com/tidyverse/hms
• Report bugs at https://github.com/tidyverse/hms/issues

hms          A simple class for storing time-of-day values

Description

The values are stored as a difftime vector with a custom class, always with "seconds" as the unit for robust coercion to numeric. The class supports construction from time values, coercion to and from various data types, and formatting, and can be used as a regular column in a data frame.

hms() is a high-level constructor that accepts second, minute, hour and day components as numeric vectors.

new_hms() is a low-level constructor that only checks that its input has the correct base type, numeric.

is_hms() checks if an object is of class hms.
as_hms() is a generic that supports conversions beyond casting. The default method forwards to vec_cast().

Usage

hms(seconds = NULL, minutes = NULL, hours = NULL, days = NULL)
new_hms(x = numeric())
is_hms(x)
as_hms(x, ...)

## S3 method for class 'hms'
as.POSIXct(x, ...)
## S3 method for class 'hms'
as.POSIXlt(x, ...)
## S3 method for class 'hms'
as.character(x, ...)
## S3 method for class 'hms'
format(x, ...)
## S3 method for class 'hms'
print(x, ...)

Arguments

seconds, minutes, hours, days
             Time since midnight. No bounds checking is performed.
x            An object.
...          additional arguments to be passed to or from methods.

Details

For hms(), all arguments must have the same length or be NULL. Odd combinations (e.g., passing only seconds and hours but not minutes) are rejected.

For arguments of type POSIXct and POSIXlt, as_hms() does not perform timezone conversion. Use lubridate::with_tz() and lubridate::force_tz() as necessary.

Examples

hms(56, 34, 12)
hms()
new_hms(as.numeric(1:3))

# Supports numeric only!
try(new_hms(1:3))

as_hms(1)
as_hms("12:34:56")
as_hms(Sys.time())
as.POSIXct(hms(1))
data.frame(a = hms(1))
d <- data.frame(hours = 1:3)
d$hours <- hms(hours = d$hours)
d

parse_hms          Parsing hms values

Description

These functions convert character vectors to objects of the hms class. NA values are supported.

parse_hms() accepts values of the form "HH:MM:SS", with optional fractional seconds.

parse_hm() accepts values of the form "HH:MM".

Usage

parse_hms(x)
parse_hm(x)

Arguments

x            A character vector

Value

An object of class hms.

Examples

parse_hms("12:34:56")
parse_hms("12:34:56.789")
parse_hm("12:34")

round_hms          Round or truncate to a multiple of seconds

Description

Convenience functions to round or truncate to a multiple of seconds.

Usage

round_hms(x, secs = NULL, digits = NULL)
trunc_hms(x, secs = NULL, digits = NULL)

Arguments

x            A vector of class hms
secs         Multiple of seconds, a positive numeric. Values less than one are supported.
digits       Number of digits, a whole number.
Negative numbers are supported. Value The input, rounded or truncated to the nearest multiple of secs (or number of digits) Examples round_hms(as_hms("12:34:56"), 5) round_hms(as_hms("12:34:56"), 60) round_hms(as_hms("12:34:56.78"), 0.25) round_hms(as_hms("12:34:56.78"), digits = 1) round_hms(as_hms("12:34:56.78"), digits = -2) trunc_hms(as_hms("12:34:56"), 60) vec_cast.hms Casting Description Double dispatch methods to support vctrs::vec_cast(). Usage ## S3 method for class 'hms' vec_cast(x, to, ...) Arguments x Vectors to cast. to Type to cast to. If NULL, x will be returned as is. ... For vec_cast_common(), vectors to cast. For vec_cast(), vec_cast_default(), and vec_restore(), these dots are only for future extensions and should be empty. vec_ptype2.hms Coercion Description Double dispatch methods to support vctrs::vec_ptype2(). Usage ## S3 method for class 'hms' vec_ptype2(x, y, ..., x_arg = "", y_arg = "") Arguments x, y Vector types. ... These dots are for future extensions and must be empty. x_arg, y_arg Argument names for x and y. These are used in error messages to inform the user about the locations of incompatible types (see stop_incompatible_type()).
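To make the parsing and rounding semantics above concrete for readers outside R, here is a rough Python sketch. It is purely illustrative: the hms package itself is R, and these helper names merely mimic parse_hms(), round_hms(), and trunc_hms().

```python
# Illustrative Python mimicry of hms::parse_hms(), hms::round_hms(),
# and hms::trunc_hms(); the real package operates on R vectors.

def parse_hms(text: str) -> float:
    """Parse 'HH:MM:SS' (with optional fractional seconds) into seconds."""
    hours, minutes, seconds = text.split(":")
    return int(hours) * 3600 + int(minutes) * 60 + float(seconds)

def round_hms(seconds: float, secs: float) -> float:
    """Round to the nearest multiple of `secs` (values below one are fine)."""
    return round(seconds / secs) * secs

def trunc_hms(seconds: float, secs: float) -> float:
    """Truncate toward zero to a multiple of `secs`."""
    return int(seconds / secs) * secs

print(parse_hms("12:34:56"))                 # 45296.0
print(round_hms(parse_hms("12:34:56"), 60))  # 45300, i.e. 12:35:00
print(trunc_hms(parse_hms("12:34:56"), 60))  # 45240, i.e. 12:34:00
```

The `digits` variant of round_hms() in the real package is equivalent to using `secs = 10^-digits`, which is why negative digits round to tens or hundreds of seconds.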
ex_figures
hex
Elixir
ExFigures
===

Elixir wrapper for the Appfigures [API](https://docs.appfigures.com), based on the [Tesla](https://github.com/teamon/tesla) library. Appfigures provides a RESTful way to interact with reports and account data.

* [API Overview](https://docs.appfigures.com/) of what it can do
* [Getting started guide](https://docs.appfigures.com/api/getting-started) covering API key registration and first requests

Appfigures provides two ways of authentication:

* *basic auth*
* *OAuth 1.0*

Right now the client supports only the *basic auth* method. Appfigures requires that any publicly available client use OAuth 1.0, so make sure your client is for internal/team use only.

[Link to this section](#summary)

Summary
===

[Types](#types)
---

[option()](#t:option/0) Represents ExFigures client init options

[result()](#t:result/0) Represents an ExFigures API call result wrapped in an `:ok/:error` tuple

[t()](#t:t/0) Represents an ExFigures client

[Functions](#functions)
---

[client(opts \\ [])](#client/1) Creates an ExFigures client. The options passed affect authentication and middleware extension.
[Link to this section](#types)

Types
===

```
option() ::
  {:username, [binary](https://hexdocs.pm/elixir/typespecs.html#built-in-types)() | {:system, [binary](https://hexdocs.pm/elixir/typespecs.html#built-in-types)()}} |
  {:password, [binary](https://hexdocs.pm/elixir/typespecs.html#built-in-types)() | {:system, [binary](https://hexdocs.pm/elixir/typespecs.html#built-in-types)()}} |
  {:client_key, [binary](https://hexdocs.pm/elixir/typespecs.html#built-in-types)() | {:system, [binary](https://hexdocs.pm/elixir/typespecs.html#built-in-types)()}} |
  {:middleware, [[any](https://hexdocs.pm/elixir/typespecs.html#basic-types)()]}
```

Represents ExFigures client init options

```
result() :: [Tesla.Env.result](https://hexdocs.pm/tesla/1.2.1/Tesla.Env.html#t:result/0)()
```

Represents an ExFigures API call result wrapped in an `:ok/:error` tuple

```
t() :: [Tesla.Client.t](https://hexdocs.pm/tesla/1.2.1/Tesla.Client.html#t:t/0)()
```

Represents an ExFigures client

[Link to this section](#functions)

Functions
===

```
client([[option](#t:option/0)()]) :: [ExFigures.t](ExFigures.html#t:t/0)()
```

Creates an ExFigures client. The options passed affect authentication and middleware extension.

Options:

* `username` - the Appfigures account's username or email address. If empty, the client uses the `:ex_figures, :username` config variable.
* `password` - the Appfigures account's password. If empty, the client uses the `:ex_figures, :password` config variable.
* `client_key` - the Appfigures API client key. If empty, the client uses the `:ex_figures, :client_key` config variable.
* `middleware` - a list of additional Tesla middleware. Useful for extending the client with additional functionality like instrumentation or logging.

ExFigures.Products
===

The [/products](https://docs.appfigures.com/products) resource provides access to product meta data in a variety of ways.
[Link to this section](#summary)

Summary
===

[Types](#types)
---

[meta_opt()](#t:meta_opt/0) Wherever you can get products, you can also ask for additional data by appending `meta: true` to your request.

[mine_opt()](#t:mine_opt/0) Represents user products query arguments

[search_opt()](#t:search_opt/0) Represents products searching query arguments

[update_attrs()](#t:update_attrs/0) Represents update product query argument

[Functions](#functions)
---

[get_by_appfigures_id(client, appfigures_id)](#get_by_appfigures_id/2) Getting a product by its Appfigures ID

[get_by_store_id(client, store, store_id)](#get_by_store_id/3) Getting a product by its id in the store

[list_mine(client, query \\ [])](#list_mine/2) Listing all of your products

[search(client, term, query \\ [])](#search/3) Searching for products. The Public Data API add-on is required for this route. Scope: This resource requires the public:read scope.

[update(client, appfigures_id, attrs)](#update/3) Updating products

[Link to this section](#types)

Types
===

```
meta_opt() :: {:meta, [boolean](https://hexdocs.pm/elixir/typespecs.html#built-in-types)()}
```

Wherever you can get products, you can also ask for additional data by appending `meta: true` to your request.
```
mine_opt() :: {:store, [binary](https://hexdocs.pm/elixir/typespecs.html#built-in-types)()} | [meta_opt](#t:meta_opt/0)()
```

Represents user products query arguments

```
search_opt() ::
  {:filter, [binary](https://hexdocs.pm/elixir/typespecs.html#built-in-types)()} |
  {:page, [integer](https://hexdocs.pm/elixir/typespecs.html#basic-types)()} |
  {:count, [integer](https://hexdocs.pm/elixir/typespecs.html#basic-types)()} |
  [meta_opt](#t:meta_opt/0)()
```

Represents products searching query arguments

```
update_attrs() :: %{
  source: %{optional(:active) => [boolean](https://hexdocs.pm/elixir/typespecs.html#built-in-types)(), optional(:hidden) => [boolean](https://hexdocs.pm/elixir/typespecs.html#built-in-types)()}
}
```

Represents update product query argument

[Link to this section](#functions)

Functions
===

```
get_by_appfigures_id([ExFigures.t](ExFigures.html#t:t/0)(), [binary](https://hexdocs.pm/elixir/typespecs.html#built-in-types)()) :: [ExFigures.result](ExFigures.html#t:result/0)()
```

Getting a product by its Appfigures ID

```
get_by_store_id([ExFigures.t](ExFigures.html#t:t/0)(), [binary](https://hexdocs.pm/elixir/typespecs.html#built-in-types)(), [binary](https://hexdocs.pm/elixir/typespecs.html#built-in-types)()) :: [ExFigures.result](ExFigures.html#t:result/0)()
```

Getting a product by its id in the store

```
list_mine([ExFigures.t](ExFigures.html#t:t/0)(), [[mine_opt](#t:mine_opt/0)()]) :: [ExFigures.result](ExFigures.html#t:result/0)()
```

Listing all of your products

Query options:

* `:store` - Filter to only show products from a given store. `apple`, `google_play`, `amazon_appstore` are some valid examples.

```
search([ExFigures.t](ExFigures.html#t:t/0)(), [binary](https://hexdocs.pm/elixir/typespecs.html#built-in-types)(), [[search_opt](#t:search_opt/0)()]) :: [ExFigures.result](ExFigures.html#t:result/0)()
```

Searching for products. The Public Data API add-on is required for this route. Scope: This resource requires the public:read scope.
Query options:

* `:term` - The name of an app or a developer. Prefix with @name= or @developer= if you'd like to search that field only.
* `:filter` - Filter to apply to the results: one of `ios`, `mac`, `google`, `amazon`. Defaults to showing all results.
* `:page` - Page of results to show. Defaults to the first page.
* `:count` - Number of results to show in a page. Defaults to 25.

```
update([ExFigures.t](ExFigures.html#t:t/0)(), [binary](https://hexdocs.pm/elixir/typespecs.html#built-in-types)(), [update_attrs](#t:update_attrs/0)()) :: [ExFigures.result](ExFigures.html#t:result/0)()
```

Updating products

Attributes:

* `:source`
  + `:active` - `true/false`
  + `:hidden` - `true/false`

ExFigures.Root
===

The [/](https://docs.appfigures.com/api/reference/v2/root) (root) resource provides basic information about the currently authenticated user.

[Link to this section](#summary)

Summary
===

[Functions](#functions)
---

[get(client)](#get/1) The root resource provides basic information about the currently authenticated user.

[Link to this section](#functions)

Functions
===

```
get([ExFigures.t](ExFigures.html#t:t/0)()) :: [ExFigures.result](ExFigures.html#t:result/0)()
```

The root resource provides basic information about the currently authenticated user.
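For readers outside Elixir, the basic-auth setup the client performs through Tesla can be sketched in Python. This is an illustrative sketch only: the Authorization header below is standard HTTP basic auth, while the client-key header name, example credentials, and any base URL are assumptions, not taken from these docs.

```python
import base64

def basic_auth_header(username: str, password: str) -> str:
    # Standard HTTP basic auth: base64("user:pass") behind a "Basic " prefix.
    token = base64.b64encode(f"{username}:{password}".encode()).decode()
    return f"Basic {token}"

# Hypothetical request headers; "X-Client-Key" is an assumed header name
# standing in for however the API expects the client key to be sent.
headers = {
    "Authorization": basic_auth_header("alice@example.com", "s3cret"),
    "X-Client-Key": "your-client-key",
}
print(headers["Authorization"])
```

The ExFigures client builds the equivalent of these headers for you from the `username`, `password`, and `client_key` options (or their config-variable fallbacks).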
spack-tutorial
readthedoc
YAML
Spack Tutorial documentation [Spack Tutorial](index.html#document-index) --- Tutorial: Spack 101[¶](#tutorial-spack-101) === This is a full day introduction to Spack with lectures and live demos. It was presented as a tutorial at [Supercomputing 2017](http://sc17.supercomputing.org) on November 13, 2017. You can use these materials to teach a course on Spack at your own site, or you can just skip ahead and read the live demo scripts to see how Spack is used in practice. Slides [`Download Slides`](_downloads/5864a405c3982062b617b887ae2e9df0/Spack-SC17-Tutorial.pdf). **Full citation:** <NAME>, <NAME>, <NAME>, <NAME>, <NAME>, <NAME>, and <NAME>. [Managing HPC Software Complexity with Spack](https://sc17.supercomputing.org/index.html%3Fpost_type=page&p=5407&id=tut151&sess=sess233.html). Tutorial presented at Supercomputing 2017. November 13, 2017, Denver, CO, USA. Live Demos These scripts will take you step-by-step through basic Spack tasks. They correspond to sections in the slides above. > 1. [Basic Installation Tutorial](index.html#basics-tutorial) > 2. [Configuration Tutorial](index.html#configs-tutorial) > 3. [Package Creation Tutorial](index.html#packaging-tutorial) > 4. [Environments Tutorial](index.html#environments-tutorial) > 5. [Module Files](index.html#modules-tutorial) > 6. [Spack Package Build Systems](index.html#build-systems-tutorial) > 7. [Advanced Topics in Packaging](index.html#advanced-packaging-tutorial) Full contents: Basic Installation Tutorial[¶](#basic-installation-tutorial) --- This tutorial will guide you through the process of installing software using Spack. We will first cover the spack install command, focusing on the power of the spec syntax and the flexibility it gives to users. We will also cover the spack find command for viewing installed packages and the spack uninstall command. Finally, we will touch on how Spack manages compilers, especially as it relates to using Spack-built compilers within Spack. 
We will include full output from all of the commands demonstrated, although we will frequently call attention to only small portions of that output (or merely to the fact that it succeeded). The provided output is all from an AWS instance running Ubuntu 16.04. ### Installing Spack[¶](#installing-spack) Spack works out of the box. Simply clone Spack and get going. We will clone Spack and immediately check out the most recent release, v0.12. ``` $ git clone https://github.com/spack/spack Cloning into 'spack'... remote: Enumerating objects: 68, done. remote: Counting objects: 100% (68/68), done. remote: Compressing objects: 100% (56/56), done. remote: Total 135389 (delta 40), reused 16 (delta 9), pack-reused 135321 Receiving objects: 100% (135389/135389), 47.31 MiB | 1.01 MiB/s, done. Resolving deltas: 100% (64414/64414), done. Checking connectivity... done. $ cd spack $ git checkout releases/v0.12 Branch releases/v0.12 set up to track remote branch releases/v0.12 from origin. Switched to a new branch 'releases/v0.12' ``` Next, add Spack to your path. Spack has some nice command line integration tools, so instead of simply appending to your `PATH` variable, source the spack setup script. ``` $ . share/spack/setup-env.sh ``` You’re good to go! ### What is in Spack?[¶](#what-is-in-spack) The `spack list` command shows available packages. ``` $ spack list ==> 2907 packages. abinit libgpuarray py-espresso r-mlrmbo abyss libgridxc py-espressopp r-mmwrweek accfft libgtextutils py-et-xmlfile r-mnormt ... ``` The `spack list` command can also take a query string. Spack automatically adds wildcards to both ends of the string. For example, we can view all available Python packages. ``` $ spack list py- ==> 479 packages. lumpy-sv py-funcsigs py-numpydoc py-utililib perl-file-copy-recursive py-functools32 py-olefile py-pywavelets py-3to2 py-future py-ont-fast5-api py-pyyaml ... 
``` ### Installing Packages[¶](#installing-packages) Installing a package with Spack is very simple. To install a piece of software, simply type `spack install <package_name>`. ``` $ spack install zlib ==> Installing zlib ==> Searching for binary cache of zlib ==> Warning: No Spack mirrors are currently configured ==> No binary for zlib found: installing from source ==> Fetching http://zlib.net/fossils/zlib-1.2.11.tar.gz ######################################################################## 100.0% ==> Staging archive: /home/spack1/spack/var/spack/stage/zlib-1.2.11-5nus6knzumx4ik2yl44jxtgtsl7d54xb/zlib-1.2.11.tar.gz ==> Created stage in /home/spack1/spack/var/spack/stage/zlib-1.2.11-5nus6knzumx4ik2yl44jxtgtsl7d54xb ==> No patches needed for zlib ==> Building zlib [Package] ==> Executing phase: 'install' ==> Successfully installed zlib Fetch: 3.27s. Build: 2.18s. Total: 5.44s. [+] /home/spack1/spack/opt/spack/linux-ubuntu16.04-x86_64/gcc-5.4.0/zlib-1.2.11-5nus6knzumx4ik2yl44jxtgtsl7d54xb ``` Spack can install software either from source or from a binary cache. Packages in the binary cache are signed with GPG for security. For the tutorial we have prepared a binary cache so you don’t have to wait on slow compilation from source. To be able to install from the binary cache, we will need to configure Spack with the location of the binary cache and trust the GPG key that the binary cache was signed with. ``` $ spack mirror add tutorial /mirror $ spack gpg trust /mirror/public.key gpg: keybox '/home/spack1/spack/opt/spack/gpg/pubring.kbx' created gpg: /home/spack1/spack/opt/spack/gpg/trustdb.gpg: trustdb created gpg: key 3B7C69B2: public key "sc-tutorial (GPG created for Spack) <<EMAIL>>" imported gpg: Total number processed: 1 gpg: imported: 1 ``` You’ll learn more about configuring Spack later in the tutorial, but for now you will be able to install the rest of the packages in the tutorial from a binary cache using the same `spack install` command. 
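That cache-first strategy (use a configured binary mirror when a matching, signed package exists, otherwise build from source) can be sketched as follows. This is an illustrative Python sketch with hypothetical names, not Spack's internals; the real logic also verifies the GPG signature of each fetched binary.

```python
# Illustrative sketch of cache-first package resolution. `cache` maps a
# mirror path to the set of specs it has prebuilt binaries for.

def resolve_install_source(spec, mirrors, cache):
    """Return a description of where `spec` would be installed from."""
    for mirror in mirrors:
        if spec in cache.get(mirror, set()):
            return f"binary cache at {mirror}"
    return "source build"

cache = {"/mirror": {"zlib@1.2.11", "tcl@8.6.8"}}
print(resolve_install_source("zlib@1.2.11", ["/mirror"], cache))  # binary cache at /mirror
print(resolve_install_source("hdf5@1.10.4", ["/mirror"], cache))  # source build
```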
By default this will install the binary cached version if it exists and fall back on installing from source. Spack’s spec syntax is the interface by which we can request specific configurations of the package. The `%` sigil is used to specify compilers. ``` $ spack install zlib %clang ==> Installing zlib ==> Searching for binary cache of zlib ==> Finding buildcaches in /mirror/build_cache ==> Fetching file:///mirror/build_cache/linux-ubuntu16.04-x86_64-gcc-7.2.0-texinfo-6.5-cuqnfgfhhmudqp5f7upmld6ax7pratzw.spec.yaml ######################################################################## 100.0% ==> Fetching file:///mirror/build_cache/linux-ubuntu16.04-x86_64-gcc-4.7-zlib-1.2.11-bq2wtdxakpjytk2tjr7qu23i4py2fi2r.spec.yaml ######################################################################## 100.0% ==> Fetching file:///mirror/build_cache/linux-ubuntu16.04-x86_64-gcc-5.4.0-dyninst-9.3.2-bu6s2jzievsjkwtcnrtimc5b625j5omf.spec.yaml ######################################################################## 100.0% ==> Fetching file:///mirror/build_cache/linux-ubuntu16.04-x86_64-gcc-7.2.0-openmpi-3.1.3-do5xfer2whhk7gc26atgs3ozr3ljbvs4.spec.yaml ... ==> Installing zlib from binary cache ==> Fetching file:///mirror/build_cache/linux-ubuntu16.04-x86_64/clang-3.8.0-2ubuntu4/zlib-1.2.11/linux-ubuntu16.04-x86_64-clang-3.8.0-2ubuntu4-zlib-1.2.11-4pt75q7qq6lygf3hgnona4lyc2uwedul.spack ######################################################################## 100.0% gpg: Signature made Sat Nov 10 05:08:01 2018 UTC using RSA key ID 3B7C69B2 gpg: Good signature from "sc-tutorial (GPG created for Spack) <<EMAIL>>" [unknown] gpg: WARNING: This key is not certified with a trusted signature! gpg: There is no indication that the signature belongs to the owner. 
Primary key fingerprint: 95C7 1787 7AC0 0FFD AA8F D6E9 9CFA 4A45 3B7C 69B2 ==> Successfully installed zlib from binary cache [+] /home/spack1/spack/opt/spack/linux-ubuntu16.04-x86_64/clang-3.8.0-2ubuntu4/zlib-1.2.11-4pt75q7qq6lygf3hgnona4lyc2uwedul ``` Note that this installation is located separately from the previous one. We will discuss this in more detail later, but this is part of what allows Spack to support arbitrarily versioned software. You can check for particular versions before requesting them. We will use the `spack versions` command to see the available versions, and then install a different version of `zlib`. ``` $ spack versions zlib ==> Safe versions (already checksummed): 1.2.11 1.2.8 1.2.3 ==> Remote versions (not yet checksummed): 1.2.10 1.2.7 1.2.5.1 1.2.4.2 1.2.3.7 ... ``` The `@` sigil is used to specify versions, both of packages and of compilers. ``` $ spack install zlib@1.2.8 ==> Installing zlib ==> Searching for binary cache of zlib ==> Finding buildcaches in /mirror/build_cache ==> Installing zlib from binary cache ==> Fetching file:///mirror/build_cache/linux-ubuntu16.04-x86_64/gcc-5.4.0/zlib-1.2.8/linux-ubuntu16.04-x86_64-gcc-5.4.0-zlib-1.2.8-bkyl5bhuep6fmhuxzkmhqy25qefjcvzc.spack ######################################################################## 100.0% gpg: Signature made Sat Nov 10 05:18:30 2018 UTC using RSA key ID 3B7C69B2 gpg: Good signature from "sc-tutorial (GPG created for Spack) <<EMAIL>>" [unknown] gpg: WARNING: This key is not certified with a trusted signature! gpg: There is no indication that the signature belongs to the owner. 
Primary key fingerprint: <KEY> ==> Successfully installed zlib from binary cache [+] /home/spack1/spack/opt/spack/linux-ubuntu16.04-x86_64/gcc-5.4.0/zlib-1.2.8-bkyl5bhuep6fmhuxzkmhqy25qefjcvzc $ spack install zlib %gcc@4.7 ==> Installing zlib ==> Searching for binary cache of zlib ==> Finding buildcaches in /mirror/build_cache ==> Installing zlib from binary cache ==> Fetching file:///mirror/build_cache/linux-ubuntu16.04-x86_64/gcc-4.7/zlib-1.2.11/linux-ubuntu16.04-x86_64-gcc-4.7-zlib-1.2.11-bq2wtdxakpjytk2tjr7qu23i4py2fi2r.spack ######################################################################## 100.0% gpg: Signature made Sat Nov 10 04:55:30 2018 UTC using RSA key ID 3B7C69B2 gpg: Good signature from "sc-tutorial (GPG created for Spack) <<EMAIL>>" [unknown] gpg: WARNING: This key is not certified with a trusted signature! gpg: There is no indication that the signature belongs to the owner. Primary key fingerprint: <KEY> ==> Successfully installed zlib from binary cache [+] /home/spack1/spack/opt/spack/linux-ubuntu16.04-x86_64/gcc-4.7/zlib-1.2.11-bq2wtdxakpjytk2tjr7qu23i4py2fi2r ``` The spec syntax also includes compiler flags. Spack accepts `cppflags`, `cflags`, `cxxflags`, `fflags`, `ldflags`, and `ldlibs` parameters. The values of these fields must be quoted on the command line if they include spaces. These values are injected into the compile line automatically by the Spack compiler wrappers. 
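The sigils introduced so far (`@` for versions, `%` for compilers, `name=value` for compiler flags) can be illustrated with a toy parser. This Python sketch only illustrates the grammar; it is not Spack's actual spec parser, and it handles only space-separated spec parts like the examples in this tutorial.

```python
# Toy parser for a fragment of the spec syntax (illustration only;
# Spack's real grammar is far richer and handles unspaced sigils).

def parse_spec(spec: str) -> dict:
    result = {"name": None, "version": None, "compiler": None, "flags": {}}
    for part in spec.split():
        if part.startswith("%"):
            result["compiler"] = part[1:]    # e.g. %gcc@4.7
        elif part.startswith("@"):
            result["version"] = part[1:]     # e.g. @1.2.8
        elif "=" in part:
            key, value = part.split("=", 1)  # e.g. cppflags=-O3
            result["flags"][key] = value
        else:
            result["name"] = part
    return result

print(parse_spec("zlib @1.2.8 cppflags=-O3 %gcc@4.7"))
```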
``` $ spack install zlib @1.2.8 cppflags=-O3 ==> Installing zlib ==> Searching for binary cache of zlib ==> Finding buildcaches in /mirror/build_cache ==> Installing zlib from binary cache ==> Fetching file:///mirror/build_cache/linux-ubuntu16.04-x86_64/gcc-5.4.0/zlib-1.2.8/linux-ubuntu16.04-x86_64-gcc-5.4.0-zlib-1.2.8-64mns5mvdacqvlashkf7v6lqrxixhmxu.spack ######################################################################## 100.0% gpg: Signature made Sat Nov 10 05:31:54 2018 UTC using RSA key ID 3B7C69B2 gpg: Good signature from "sc-tutorial (GPG created for Spack) <<EMAIL>>" [unknown] gpg: WARNING: This key is not certified with a trusted signature! gpg: There is no indication that the signature belongs to the owner. Primary key fingerprint: <KEY> ==> Successfully installed zlib from binary cache [+] /home/spack1/spack/opt/spack/linux-ubuntu16.04-x86_64/gcc-5.4.0/zlib-1.2.8-64mns5mvdacqvlashkf7v6lqrxixhmxu ``` The `spack find` command is used to query installed packages. Note that some packages appear identical with the default output. The `-l` flag shows the hash of each package, and the `-f` flag shows any non-empty compiler flags of those packages. ``` $ spack find ==> 5 installed packages. -- linux-ubuntu16.04-x86_64 / clang@3.8.0-2ubuntu4 --- zlib@1.2.11 -- linux-ubuntu16.04-x86_64 / gcc@4.7 --- zlib@1.2.11 -- linux-ubuntu16.04-x86_64 / gcc@5.4.0 --- zlib@1.2.8 zlib@1.2.8 zlib@1.2.11 $ spack find -lf ==> 5 installed packages. -- linux-ubuntu16.04-x86_64 / clang@3.8.0-2ubuntu4 --- 4pt75q7 zlib@1.2.11%clang -- linux-ubuntu16.04-x86_64 / gcc@4.7 --- bq2wtdx zlib@1.2.11%gcc -- linux-ubuntu16.04-x86_64 / gcc@5.4.0 --- bkyl5bh zlib@1.2.8%gcc 64mns5m zlib@1.2.8%gcc cppflags="-O3" 5nus6kn zlib@1.2.11%gcc ``` Spack generates a hash for each spec. This hash is a function of the full provenance of the package, so any change to the spec affects the hash. 
Spack uses this value to compare specs and to generate unique installation directories for every combinatorial version. As we move into more complicated packages with software dependencies, we can see that Spack reuses existing packages to satisfy a dependency only when the existing package’s hash matches the desired spec. ``` $ spack install tcl ==> zlib is already installed in /home/spack1/spack/opt/spack/linux-ubuntu16.04-x86_64/gcc-5.4.0/zlib-1.2.11-5nus6knzumx4ik2yl44jxtgtsl7d54xb ==> Installing tcl ==> Searching for binary cache of tcl ==> Finding buildcaches in /mirror/build_cache ==> Installing tcl from binary cache ==> Fetching file:///mirror/build_cache/linux-ubuntu16.04-x86_64/gcc-5.4.0/tcl-8.6.8/linux-ubuntu16.04-x86_64-gcc-5.4.0-tcl-8.6.8-qhwyccywhx2i6s7ob2gvjrjtj3rnfuqt.spack ######################################################################## 100.0% gpg: Signature made Sat Nov 10 05:07:15 2018 UTC using RSA key ID 3B7C69B2 gpg: Good signature from "sc-tutorial (GPG created for Spack) <<EMAIL>>" [unknown] gpg: WARNING: This key is not certified with a trusted signature! gpg: There is no indication that the signature belongs to the owner. Primary key fingerprint: <KEY> ==> Successfully installed tcl from binary cache [+] /home/spack1/spack/opt/spack/linux-ubuntu16.04-x86_64/gcc-5.4.0/tcl-8.6.8-qhwyccywhx2i6s7ob2gvjrjtj3rnfuqt ``` Dependencies can be explicitly requested using the `^` sigil. Note that the spec syntax is recursive. Anything we could specify about the top-level package, we can also specify about a dependency using `^`. 
``` $ spack install tcl ^zlib @1.2.8 %clang ==> Installing zlib ==> Searching for binary cache of zlib ==> Finding buildcaches in /mirror/build_cache ==> Installing zlib from binary cache ==> Fetching file:///mirror/build_cache/linux-ubuntu16.04-x86_64/clang-3.8.0-2ubuntu4/zlib-1.2.8/linux-ubuntu16.04-x86_64-clang-3.8.0-2ubuntu4-zlib-1.2.8-i426yu3o6lyau5fv5ljwsajfkqxj5rl5.spack ######################################################################## 100.0% gpg: Signature made Sat Nov 10 05:09:01 2018 UTC using RSA key ID 3B7C69B2 gpg: Good signature from "sc-tutorial (GPG created for Spack) <<EMAIL>>" [unknown] gpg: WARNING: This key is not certified with a trusted signature! gpg: There is no indication that the signature belongs to the owner. Primary key fingerprint: <KEY> ==> Successfully installed zlib from binary cache [+] /home/spack1/spack/opt/spack/linux-ubuntu16.04-x86_64/clang-3.8.0-2ubuntu4/zlib-1.2.8-i426yu3o6lyau5fv5ljwsajfkqxj5rl5 ==> Installing tcl ==> Searching for binary cache of tcl ==> Installing tcl from binary cache ==> Fetching file:///mirror/build_cache/linux-ubuntu16.04-x86_64/clang-3.8.0-2ubuntu4/tcl-8.6.8/linux-ubuntu16.04-x86_64-clang-3.8.0-2ubuntu4-tcl-8.6.8-6wc66etr7y6hgibp2derrdkf763exwvc.spack ######################################################################## 100.0% gpg: Signature made Sat Nov 10 05:10:21 2018 UTC using RSA key ID 3B7C69B2 gpg: Good signature from "sc-tutorial (GPG created for Spack) <<EMAIL>>" [unknown] gpg: WARNING: This key is not certified with a trusted signature! gpg: There is no indication that the signature belongs to the owner. Primary key fingerprint: <KEY> ==> Successfully installed tcl from binary cache [+] /home/spack1/spack/opt/spack/linux-ubuntu16.04-x86_64/clang-3.8.0-2ubuntu4/tcl-8.6.8-6wc66etr7y6hgibp2derrdkf763exwvc ``` Packages can also be referred to from the command line by their package hash. 
Using the `spack find -lf` command earlier we saw that the hash of our optimized installation of zlib (`cppflags="-O3"`) began with `64mns5m`. We can now explicitly build with that package without typing the entire spec, by using the `/` sigil to refer to it by hash. As with other tools like git, you do not need to specify an *entire* hash on the command line. You can specify just enough digits to identify a hash uniquely. If a hash prefix is ambiguous (i.e., two or more installed packages share the prefix) then spack will report an error. ``` $ spack install tcl ^/64mn ==> zlib is already installed in /home/spack1/spack/opt/spack/linux-ubuntu16.04-x86_64/gcc-5.4.0/zlib-1.2.8-64mns5mvdacqvlashkf7v6lqrxixhmxu ==> Installing tcl ==> Searching for binary cache of tcl ==> Finding buildcaches in /mirror/build_cache ==> Installing tcl from binary cache ==> Fetching file:///mirror/build_cache/linux-ubuntu16.04-x86_64/gcc-5.4.0/tcl-8.6.8/linux-ubuntu16.04-x86_64-gcc-5.4.0-tcl-8.6.8-am4pbatrtga3etyusg2akmsvrswwxno2.spack ######################################################################## 100.0% gpg: Signature made Sat Nov 10 05:11:53 2018 UTC using RSA key ID 3B7C69B2 gpg: Good signature from "sc-tutorial (GPG created for Spack) <<EMAIL>>" [unknown] gpg: WARNING: This key is not certified with a trusted signature! gpg: There is no indication that the signature belongs to the owner. Primary key fingerprint: <KEY> ==> Successfully installed tcl from binary cache [+] /home/spack1/spack/opt/spack/linux-ubuntu16.04-x86_64/gcc-5.4.0/tcl-8.6.8-am4pbatrtga3etyusg2akmsvrswwxno2 ``` The `spack find` command can also take a `-d` flag, which can show dependency information. Note that each package has a top-level entry, even if it also appears as a dependency. 
``` $ spack find -ldf ==> 9 installed packages -- linux-ubuntu16.04-x86_64 / clang@3.8.0-2ubuntu4 --- 6wc66et tcl@8.6.8%clang i426yu3 ^zlib@1.2.8%clang i426yu3 zlib@1.2.8%clang 4pt75q7 zlib@1.2.11%clang -- linux-ubuntu16.04-x86_64 / gcc@4.7 --- bq2wtdx zlib@1.2.11%gcc -- linux-ubuntu16.04-x86_64 / gcc@5.4.0 --- am4pbat tcl@8.6.8%gcc 64mns5m ^zlib@1.2.8%gcc cppflags="-O3" qhwyccy tcl@8.6.8%gcc 5nus6kn ^zlib@1.2.11%gcc bkyl5bh zlib@1.2.8%gcc 64mns5m zlib@1.2.8%gcc cppflags="-O3" 5nus6kn zlib@1.2.11%gcc ``` Let’s move on to slightly more complicated packages. `HDF5` is a good example of a more complicated package, with an MPI dependency. If we install it “out of the box,” it will build with `openmpi`. ``` $ spack install hdf5 ==> Installing libsigsegv ==> Searching for binary cache of libsigsegv ==> Finding buildcaches in /mirror/build_cache ==> Installing libsigsegv from binary cache ==> Fetching file:///mirror/build_cache/linux-ubuntu16.04-x86_64/gcc-5.4.0/libsigsegv-2.11/linux-ubuntu16.04-x86_64-gcc-5.4.0-libsigsegv-2.11-fypapcprssrj3nstp6njprskeyynsgaz.spack ######################################################################## 100.0% gpg: Signature made Sat Nov 10 05:08:01 2018 UTC using RSA key ID 3B7C69B2 gpg: Good signature from "sc-tutorial (GPG created for Spack) <<EMAIL>>" [unknown] gpg: WARNING: This key is not certified with a trusted signature! gpg: There is no indication that the signature belongs to the owner. 
Primary key fingerprint: 95C7 1787 7AC0 0FFD AA8F D6E9 9CFA 4A45 3B7C 69B2 ==> Successfully installed libsigsegv from binary cache [+] /home/spack1/spack/opt/spack/linux-ubuntu16.04-x86_64/gcc-5.4.0/libsigsegv-2.11-fypapcprssrj3nstp6njprskeyynsgaz ==> Installing m4 ==> Searching for binary cache of m4 ==> Installing m4 from binary cache ==> Fetching file:///mirror/build_cache/linux-ubuntu16.04-x86_64/gcc-5.4.0/m4-1.4.18/linux-ubuntu16.04-x86_64-gcc-5.4.0-m4-1.4.18-suf5jtcfehivwfesrc5hjy72r4nukyel.spack ######################################################################## 100.0% gpg: Signature made Sat Nov 10 05:24:11 2018 UTC using RSA key ID 3B7C69B2 gpg: Good signature from "sc-tutorial (GPG created for Spack) <<EMAIL>>" [unknown] gpg: WARNING: This key is not certified with a trusted signature! gpg: There is no indication that the signature belongs to the owner. Primary key fingerprint: 95C7 1787 7AC0 0FFD AA8F D6E9 9CFA 4A45 3B7C 69B2 ==> Successfully installed m4 from binary cache [+] /home/spack1/spack/opt/spack/linux-ubuntu16.04-x86_64/gcc-5.4.0/m4-1.4.18-suf5jtcfehivwfesrc5hjy72r4nukyel ==> Installing libtool ==> Searching for binary cache of libtool ==> Installing libtool from binary cache ==> Fetching file:///mirror/build_cache/linux-ubuntu16.04-x86_64/gcc-5.4.0/libtool-2.4.6/linux-ubuntu16.04-x86_64-gcc-5.4.0-libtool-2.4.6-o2pfwjf44353ajgr42xqtvzyvqsazkgu.spack ######################################################################## 100.0% gpg: Signature made Sat Nov 10 05:12:47 2018 UTC using RSA key ID 3B7C69B2 gpg: Good signature from "sc-tutorial (GPG created for Spack) <<EMAIL>>" [unknown] gpg: WARNING: This key is not certified with a trusted signature! gpg: There is no indication that the signature belongs to the owner. 
Primary key fingerprint: <KEY> ==> Successfully installed libtool from binary cache [+] /home/spack1/spack/opt/spack/linux-ubuntu16.04-x86_64/gcc-5.4.0/libtool-2.4.6-o2pfwjf44353ajgr42xqtvzyvqsazkgu ==> Installing pkgconf ==> Searching for binary cache of pkgconf ==> Installing pkgconf from binary cache ==> Fetching file:///mirror/build_cache/linux-ubuntu16.04-x86_64/gcc-5.4.0/pkgconf-1.4.2/linux-ubuntu16.04-x86_64-gcc-5.4.0-pkgconf-1.4.2-fovrh7alpft646n6mhis5mml6k6e5f4v.spack ######################################################################## 100.0% gpg: Signature made Sat Nov 10 05:00:47 2018 UTC using RSA key ID 3B7C69B2 gpg: Good signature from "sc-tutorial (GPG created for Spack) <<EMAIL>>" [unknown] gpg: WARNING: This key is not certified with a trusted signature! gpg: There is no indication that the signature belongs to the owner. Primary key fingerprint: <KEY> ==> Successfully installed pkgconf from binary cache [+] /home/spack1/spack/opt/spack/linux-ubuntu16.04-x86_64/gcc-5.4.0/pkgconf-1.4.2-fovrh7alpft646n6mhis5mml6k6e5f4v ==> Installing util-macros ==> Searching for binary cache of util-macros ==> Installing util-macros from binary cache ==> Fetching file:///mirror/build_cache/linux-ubuntu16.04-x86_64/gcc-5.4.0/util-macros-1.19.1/linux-ubuntu16.04-x86_64-gcc-5.4.0-util-macros-1.19.1-milz7fmttmptcic2qdk5cnel7ll5sybr.spack ######################################################################## 100.0% gpg: Signature made Sat Nov 10 05:31:54 2018 UTC using RSA key ID 3B7C69B2 gpg: Good signature from "sc-tutorial (GPG created for Spack) <<EMAIL>>" [unknown] gpg: WARNING: This key is not certified with a trusted signature! gpg: There is no indication that the signature belongs to the owner. 
Primary key fingerprint: <KEY> ==> Successfully installed util-macros from binary cache [+] /home/spack1/spack/opt/spack/linux-ubuntu16.04-x86_64/gcc-5.4.0/util-macros-1.19.1-milz7fmttmptcic2qdk5cnel7ll5sybr ==> Installing libpciaccess ==> Searching for binary cache of libpciaccess ==> Installing libpciaccess from binary cache ==> Fetching file:///mirror/build_cache/linux-ubuntu16.04-x86_64/gcc-5.4.0/libpciaccess-0.13.5/linux-ubuntu16.04-x86_64-gcc-5.4.0-libpciaccess-0.13.5-5urc6tcjae26fbbd2wyfohoszhgxtbmc.spack ######################################################################## 100.0% gpg: Signature made Sat Nov 10 05:09:34 2018 UTC using RSA key ID 3B7C69B2 gpg: Good signature from "sc-tutorial (GPG created for Spack) <<EMAIL>>" [unknown] gpg: WARNING: This key is not certified with a trusted signature! gpg: There is no indication that the signature belongs to the owner. Primary key fingerprint: <KEY> ==> Successfully installed libpciaccess from binary cache [+] /home/spack1/spack/opt/spack/linux-ubuntu16.04-x86_64/gcc-5.4.0/libpciaccess-0.13.5-5urc6tcjae26fbbd2wyfohoszhgxtbmc ==> Installing xz ==> Searching for binary cache of xz ==> Installing xz from binary cache ==> Fetching file:///mirror/build_cache/linux-ubuntu16.04-x86_64/gcc-5.4.0/xz-5.2.4/linux-ubuntu16.04-x86_64-gcc-5.4.0-xz-5.2.4-teneqii2xv5u6zl5r6qi3pwurc6pmypz.spack ######################################################################## 100.0% gpg: Signature made Sat Nov 10 05:05:03 2018 UTC using RSA key ID 3B7C69B2 gpg: Good signature from "sc-tutorial (GPG created for Spack) <<EMAIL>>" [unknown] gpg: WARNING: This key is not certified with a trusted signature! gpg: There is no indication that the signature belongs to the owner. 
Primary key fingerprint: <KEY> ==> Successfully installed xz from binary cache [+] /home/spack1/spack/opt/spack/linux-ubuntu16.04-x86_64/gcc-5.4.0/xz-5.2.4-teneqii2xv5u6zl5r6qi3pwurc6pmypz ==> zlib is already installed in /home/spack1/spack/opt/spack/linux-ubuntu16.04-x86_64/gcc-5.4.0/zlib-1.2.11-5nus6knzumx4ik2yl44jxtgtsl7d54xb ==> Installing libxml2 ==> Searching for binary cache of libxml2 ==> Installing libxml2 from binary cache ==> Fetching file:///mirror/build_cache/linux-ubuntu16.04-x86_64/gcc-5.4.0/libxml2-2.9.8/linux-ubuntu16.04-x86_64-gcc-5.4.0-libxml2-2.9.8-wpexsphdmfayxqxd4up5vgwuqgu5woo7.spack ######################################################################## 100.0% gpg: Signature made Sat Nov 10 04:56:04 2018 UTC using RSA key ID 3B7C69B2 gpg: Good signature from "sc-tutorial (GPG created for Spack) <<EMAIL>>" [unknown] gpg: WARNING: This key is not certified with a trusted signature! gpg: There is no indication that the signature belongs to the owner. Primary key fingerprint: <KEY> ==> Successfully installed libxml2 from binary cache [+] /home/spack1/spack/opt/spack/linux-ubuntu16.04-x86_64/gcc-5.4.0/libxml2-2.9.8-wpexsphdmfayxqxd4up5vgwuqgu5woo7 ==> Installing ncurses ==> Searching for binary cache of ncurses ==> Installing ncurses from binary cache ==> Fetching file:///mirror/build_cache/linux-ubuntu16.04-x86_64/gcc-5.4.0/ncurses-6.1/linux-ubuntu16.04-x86_64-gcc-5.4.0-ncurses-6.1-3o765ourmesfrji6yeclb4wb5w54aqbh.spack ######################################################################## 100.0% gpg: Signature made Sat Nov 10 05:04:49 2018 UTC using RSA key ID 3B7C69B2 gpg: Good signature from "sc-tutorial (GPG created for Spack) <<EMAIL>>" [unknown] gpg: WARNING: This key is not certified with a trusted signature! gpg: There is no indication that the signature belongs to the owner. 
Primary key fingerprint: <KEY> ==> Successfully installed ncurses from binary cache [+] /home/spack1/spack/opt/spack/linux-ubuntu16.04-x86_64/gcc-5.4.0/ncurses-6.1-3o765ourmesfrji6yeclb4wb5w54aqbh ==> Installing readline ==> Searching for binary cache of readline ==> Installing readline from binary cache ==> Fetching file:///mirror/build_cache/linux-ubuntu16.04-x86_64/gcc-5.4.0/readline-7.0/linux-ubuntu16.04-x86_64-gcc-5.4.0-readline-7.0-nxhwrg7xwc6nbsm2v4ezwe63l6nfidbi.spack ######################################################################## 100.0% gpg: Signature made Sat Nov 10 05:04:56 2018 UTC using RSA key ID 3B7C69B2 gpg: Good signature from "sc-tutorial (GPG created for Spack) <<EMAIL>>" [unknown] gpg: WARNING: This key is not certified with a trusted signature! gpg: There is no indication that the signature belongs to the owner. Primary key fingerprint: <KEY> ==> Successfully installed readline from binary cache [+] /home/spack1/spack/opt/spack/linux-ubuntu16.04-x86_64/gcc-5.4.0/readline-7.0-nxhwrg7xwc6nbsm2v4ezwe63l6nfidbi ==> Installing gdbm ==> Searching for binary cache of gdbm ==> Installing gdbm from binary cache ==> Fetching file:///mirror/build_cache/linux-ubuntu16.04-x86_64/gcc-5.4.0/gdbm-1.14.1/linux-ubuntu16.04-x86_64-gcc-5.4.0-gdbm-1.14.1-q4fpyuo7ouhkeq6d3oabtrppctpvxmes.spack ######################################################################## 100.0% gpg: Signature made Sat Nov 10 05:18:34 2018 UTC using RSA key ID 3B7C69B2 gpg: Good signature from "sc-tutorial (GPG created for Spack) <<EMAIL>>" [unknown] gpg: WARNING: This key is not certified with a trusted signature! gpg: There is no indication that the signature belongs to the owner. 
Primary key fingerprint: <KEY> ==> Successfully installed gdbm from binary cache [+] /home/spack1/spack/opt/spack/linux-ubuntu16.04-x86_64/gcc-5.4.0/gdbm-1.14.1-q4fpyuo7ouhkeq6d3oabtrppctpvxmes ==> Installing perl ==> Searching for binary cache of perl ==> Installing perl from binary cache ==> Fetching file:///mirror/build_cache/linux-ubuntu16.04-x86_64/gcc-5.4.0/perl-5.26.2/linux-ubuntu16.04-x86_64-gcc-5.4.0-perl-5.26.2-ic2kyoadgp3dxfejcbllyplj2wf524fo.spack ######################################################################## 100.0% gpg: Signature made Sat Nov 10 05:12:45 2018 UTC using RSA key ID 3B7C69B2 gpg: Good signature from "sc-tutorial (GPG created for Spack) <<EMAIL>>" [unknown] gpg: WARNING: This key is not certified with a trusted signature! gpg: There is no indication that the signature belongs to the owner. Primary key fingerprint: <KEY> ==> Successfully installed perl from binary cache [+] /home/spack1/spack/opt/spack/linux-ubuntu16.04-x86_64/gcc-5.4.0/perl-5.26.2-ic2kyoadgp3dxfejcbllyplj2wf524fo ==> Installing autoconf ==> Searching for binary cache of autoconf ==> Installing autoconf from binary cache ==> Fetching file:///mirror/build_cache/linux-ubuntu16.04-x86_64/gcc-5.4.0/autoconf-2.69/linux-ubuntu16.04-x86_64-gcc-5.4.0-autoconf-2.69-3sx2gxeibc4oasqd4o5h6lnwpcpsgd2q.spack ######################################################################## 100.0% gpg: Signature made Sat Nov 10 05:24:03 2018 UTC using RSA key ID 3B7C69B2 gpg: Good signature from "sc-tutorial (GPG created for Spack) <<EMAIL>>" [unknown] gpg: WARNING: This key is not certified with a trusted signature! gpg: There is no indication that the signature belongs to the owner. 
Primary key fingerprint: <KEY> ==> Successfully installed autoconf from binary cache [+] /home/spack1/spack/opt/spack/linux-ubuntu16.04-x86_64/gcc-5.4.0/autoconf-2.69-3sx2gxeibc4oasqd4o5h6lnwpcpsgd2q ==> Installing automake ==> Searching for binary cache of automake ==> Installing automake from binary cache ==> Fetching file:///mirror/build_cache/linux-ubuntu16.04-x86_64/gcc-5.4.0/automake-1.16.1/linux-ubuntu16.04-x86_64-gcc-5.4.0-automake-1.16.1-rymw7imfehycqxzj4nuy2oiw3abegooy.spack ######################################################################## 100.0% gpg: Signature made Sat Nov 10 05:12:03 2018 UTC using RSA key ID 3B7C69B2 gpg: Good signature from "sc-tutorial (GPG created for Spack) <<EMAIL>>" [unknown] gpg: WARNING: This key is not certified with a trusted signature! gpg: There is no indication that the signature belongs to the owner. Primary key fingerprint: <KEY> ==> Successfully installed automake from binary cache [+] /home/spack1/spack/opt/spack/linux-ubuntu16.04-x86_64/gcc-5.4.0/automake-1.16.1-rymw7imfehycqxzj4nuy2oiw3abegooy ==> Installing numactl ==> Searching for binary cache of numactl ==> Installing numactl from binary cache ==> Fetching file:///mirror/build_cache/linux-ubuntu16.04-x86_64/gcc-5.4.0/numactl-2.0.11/linux-ubuntu16.04-x86_64-gcc-5.4.0-numactl-2.0.11-ft463odrombnxlc3qew4omckhlq7tqgc.spack ######################################################################## 100.0% gpg: Signature made Sat Nov 10 05:30:34 2018 UTC using RSA key ID 3B7C69B2 gpg: Good signature from "sc-tutorial (GPG created for Spack) <<EMAIL>>" [unknown] gpg: WARNING: This key is not certified with a trusted signature! gpg: There is no indication that the signature belongs to the owner. 
Primary key fingerprint: <KEY> ==> Successfully installed numactl from binary cache [+] /home/spack1/spack/opt/spack/linux-ubuntu16.04-x86_64/gcc-5.4.0/numactl-2.0.11-ft463odrombnxlc3qew4omckhlq7tqgc ==> Installing hwloc ==> Searching for binary cache of hwloc ==> Installing hwloc from binary cache ==> Fetching file:///mirror/build_cache/linux-ubuntu16.04-x86_64/gcc-5.4.0/hwloc-1.11.9/linux-ubuntu16.04-x86_64-gcc-5.4.0-hwloc-1.11.9-43tkw5mt6huhv37vqnybqgxtkodbsava.spack ######################################################################## 100.0% gpg: Signature made Sat Nov 10 05:08:00 2018 UTC using RSA key ID 3B7C69B2 gpg: Good signature from "sc-tutorial (GPG created for Spack) <<EMAIL>>" [unknown] gpg: WARNING: This key is not certified with a trusted signature! gpg: There is no indication that the signature belongs to the owner. Primary key fingerprint: <KEY> ==> Successfully installed hwloc from binary cache [+] /home/spack1/spack/opt/spack/linux-ubuntu16.04-x86_64/gcc-5.4.0/hwloc-1.11.9-43tkw5mt6huhv37vqnybqgxtkodbsava ==> Installing openmpi ==> Searching for binary cache of openmpi ==> Installing openmpi from binary cache ==> Fetching file:///mirror/build_cache/linux-ubuntu16.04-x86_64/gcc-5.4.0/openmpi-3.1.3/linux-ubuntu16.04-x86_64-gcc-5.4.0-openmpi-3.1.3-3njc4q5pqdpptq6jvqjrezkffwokv2sx.spack ######################################################################## 100.0% gpg: Signature made Sat Nov 10 05:01:54 2018 UTC using RSA key ID 3B7C69B2 gpg: Good signature from "sc-tutorial (GPG created for Spack) <<EMAIL>>" [unknown] gpg: WARNING: This key is not certified with a trusted signature! gpg: There is no indication that the signature belongs to the owner. 
Primary key fingerprint: <KEY> ==> Successfully installed openmpi from binary cache [+] /home/spack1/spack/opt/spack/linux-ubuntu16.04-x86_64/gcc-5.4.0/openmpi-3.1.3-3njc4q5pqdpptq6jvqjrezkffwokv2sx ==> Installing hdf5 ==> Searching for binary cache of hdf5 ==> Installing hdf5 from binary cache ==> Fetching file:///mirror/build_cache/linux-ubuntu16.04-x86_64/gcc-5.4.0/hdf5-1.10.4/linux-ubuntu16.04-x86_64-gcc-5.4.0-hdf5-1.10.4-ozyvmhzdew66byarohm4p36ep7wtcuiw.spack ######################################################################## 100.0% gpg: Signature made Sat Nov 10 05:23:04 2018 UTC using RSA key ID 3B7C69B2 gpg: Good signature from "sc-tutorial (GPG created for Spack) <<EMAIL>>" [unknown] gpg: WARNING: This key is not certified with a trusted signature! gpg: There is no indication that the signature belongs to the owner. Primary key fingerprint: <KEY> ==> Successfully installed hdf5 from binary cache [+] /home/spack1/spack/opt/spack/linux-ubuntu16.04-x86_64/gcc-5.4.0/hdf5-1.10.4-ozyvmhzdew66byarohm4p36ep7wtcuiw ``` Spack packages can also have build options, called variants. Boolean variants can be specified using the `+` and `~` or `-` sigils. There are two sigils for `False` to avoid conflicts with shell parsing in different situations. Variants (boolean or otherwise) can also be specified using the same syntax as compiler flags. Here we can install HDF5 without MPI support. 
```
$ spack install hdf5~mpi
==> zlib is already installed in /home/spack1/spack/opt/spack/linux-ubuntu16.04-x86_64/gcc-5.4.0/zlib-1.2.11-5nus6knzumx4ik2yl44jxtgtsl7d54xb
==> Installing hdf5
==> Searching for binary cache of hdf5
==> Finding buildcaches in /mirror/build_cache
==> Installing hdf5 from binary cache
==> Fetching file:///mirror/build_cache/linux-ubuntu16.04-x86_64/gcc-5.4.0/hdf5-1.10.4/linux-ubuntu16.04-x86_64-gcc-5.4.0-hdf5-1.10.4-5vcv5r67vpjzenq4apyebshclelnzuja.spack
######################################################################## 100.0%
gpg: Signature made Sat Nov 10 05:23:24 2018 UTC using RSA key ID 3B7C69B2
gpg: Good signature from "sc-tutorial (GPG created for Spack) <<EMAIL>>" [unknown]
gpg: WARNING: This key is not certified with a trusted signature!
gpg: There is no indication that the signature belongs to the owner.
Primary key fingerprint: <KEY>
==> Successfully installed hdf5 from binary cache
[+] /home/spack1/spack/opt/spack/linux-ubuntu16.04-x86_64/gcc-5.4.0/hdf5-1.10.4-5vcv5r67vpjzenq4apyebshclelnzuja
```

We might also want to install HDF5 with a different MPI implementation. While MPI is not a package itself, packages can depend on abstract interfaces like MPI. Spack handles these through "virtual dependencies." A package, such as HDF5, can depend on the MPI interface. Other packages (`openmpi`, `mpich`, `mvapich`, etc.) provide the MPI interface. Any of these providers can be requested for an MPI dependency. For example, we can build HDF5 with MPI support provided by mpich by specifying a dependency on `mpich`. Spack also supports versioning of virtual dependencies. A package can depend on the MPI interface at version 3, and provider packages specify what version of the interface *they* provide. The partial spec `^mpi@3` can be satisfied by any of several providers.
``` $ spack install hdf5+hl+mpi ^mpich ==> libsigsegv is already installed in /home/spack1/spack/opt/spack/linux-ubuntu16.04-x86_64/gcc-5.4.0/libsigsegv-2.11-fypapcprssrj3nstp6njprskeyynsgaz ==> m4 is already installed in /home/spack1/spack/opt/spack/linux-ubuntu16.04-x86_64/gcc-5.4.0/m4-1.4.18-suf5jtcfehivwfesrc5hjy72r4nukyel ==> pkgconf is already installed in /home/spack1/spack/opt/spack/linux-ubuntu16.04-x86_64/gcc-5.4.0/pkgconf-1.4.2-fovrh7alpft646n6mhis5mml6k6e5f4v ==> ncurses is already installed in /home/spack1/spack/opt/spack/linux-ubuntu16.04-x86_64/gcc-5.4.0/ncurses-6.1-3o765ourmesfrji6yeclb4wb5w54aqbh ==> readline is already installed in /home/spack1/spack/opt/spack/linux-ubuntu16.04-x86_64/gcc-5.4.0/readline-7.0-nxhwrg7xwc6nbsm2v4ezwe63l6nfidbi ==> gdbm is already installed in /home/spack1/spack/opt/spack/linux-ubuntu16.04-x86_64/gcc-5.4.0/gdbm-1.14.1-q4fpyuo7ouhkeq6d3oabtrppctpvxmes ==> perl is already installed in /home/spack1/spack/opt/spack/linux-ubuntu16.04-x86_64/gcc-5.4.0/perl-5.26.2-ic2kyoadgp3dxfejcbllyplj2wf524fo ==> autoconf is already installed in /home/spack1/spack/opt/spack/linux-ubuntu16.04-x86_64/gcc-5.4.0/autoconf-2.69-3sx2gxeibc4oasqd4o5h6lnwpcpsgd2q ==> automake is already installed in /home/spack1/spack/opt/spack/linux-ubuntu16.04-x86_64/gcc-5.4.0/automake-1.16.1-rymw7imfehycqxzj4nuy2oiw3abegooy ==> libtool is already installed in /home/spack1/spack/opt/spack/linux-ubuntu16.04-x86_64/gcc-5.4.0/libtool-2.4.6-o2pfwjf44353ajgr42xqtvzyvqsazkgu ==> Installing texinfo ==> Searching for binary cache of texinfo ==> Finding buildcaches in /mirror/build_cache ==> Installing texinfo from binary cache ==> Fetching file:///mirror/build_cache/linux-ubuntu16.04-x86_64/gcc-5.4.0/texinfo-6.5/linux-ubuntu16.04-x86_64-gcc-5.4.0-texinfo-6.5-zs7a2pcwhq6ho2cj2x26uxfktwkpyucn.spack ######################################################################## 100.0% gpg: Signature made Sat Nov 10 05:18:29 2018 UTC using RSA key ID 3B7C69B2 gpg: Good signature 
from "sc-tutorial (GPG created for Spack) <<EMAIL>>" [unknown] gpg: WARNING: This key is not certified with a trusted signature! gpg: There is no indication that the signature belongs to the owner. Primary key fingerprint: <KEY> ==> Successfully installed texinfo from binary cache [+] /home/spack1/spack/opt/spack/linux-ubuntu16.04-x86_64/gcc-5.4.0/texinfo-6.5-zs7a2pcwhq6ho2cj2x26uxfktwkpyucn ==> Installing findutils ==> Searching for binary cache of findutils ==> Installing findutils from binary cache ==> Fetching file:///mirror/build_cache/linux-ubuntu16.04-x86_64/gcc-5.4.0/findutils-4.6.0/linux-ubuntu16.04-x86_64-gcc-5.4.0-findutils-4.6.0-d4iajxsopzrlcjtasahxqeyjkjv5jx4v.spack ######################################################################## 100.0% gpg: Signature made Sat Nov 10 05:07:17 2018 UTC using RSA key ID 3B7C69B2 gpg: Good signature from "sc-tutorial (GPG created for Spack) <<EMAIL>>" [unknown] gpg: WARNING: This key is not certified with a trusted signature! gpg: There is no indication that the signature belongs to the owner. Primary key fingerprint: <KEY> ==> Successfully installed findutils from binary cache [+] /home/spack1/spack/opt/spack/linux-ubuntu16.04-x86_64/gcc-5.4.0/findutils-4.6.0-d4iajxsopzrlcjtasahxqeyjkjv5jx4v ==> Installing mpich ==> Searching for binary cache of mpich ==> Installing mpich from binary cache ==> Fetching file:///mirror/build_cache/linux-ubuntu16.04-x86_64/gcc-5.4.0/mpich-3.2.1/linux-ubuntu16.04-x86_64-gcc-5.4.0-mpich-3.2.1-p3f7p2r5ntrynqibosglxvhwyztiwqs5.spack ######################################################################## 100.0% gpg: Signature made Sat Nov 10 05:23:57 2018 UTC using RSA key ID 3B7C69B2 gpg: Good signature from "sc-tutorial (GPG created for Spack) <<EMAIL>>" [unknown] gpg: WARNING: This key is not certified with a trusted signature! gpg: There is no indication that the signature belongs to the owner. 
Primary key fingerprint: <KEY>
==> Successfully installed mpich from binary cache
[+] /home/spack1/spack/opt/spack/linux-ubuntu16.04-x86_64/gcc-5.4.0/mpich-3.2.1-p3f7p2r5ntrynqibosglxvhwyztiwqs5
==> zlib is already installed in /home/spack1/spack/opt/spack/linux-ubuntu16.04-x86_64/gcc-5.4.0/zlib-1.2.11-5nus6knzumx4ik2yl44jxtgtsl7d54xb
==> Installing hdf5
==> Searching for binary cache of hdf5
==> Installing hdf5 from binary cache
==> Fetching file:///mirror/build_cache/linux-ubuntu16.04-x86_64/gcc-5.4.0/hdf5-1.10.4/linux-ubuntu16.04-x86_64-gcc-5.4.0-hdf5-1.10.4-xxd7syhgej6onpyfyewxqcqe7ltkt7ob.spack
######################################################################## 100.0%
gpg: Signature made Sat Nov 10 05:07:32 2018 UTC using RSA key ID 3B7C69B2
gpg: Good signature from "sc-tutorial (GPG created for Spack) <<EMAIL>>" [unknown]
gpg: WARNING: This key is not certified with a trusted signature!
gpg: There is no indication that the signature belongs to the owner.
Primary key fingerprint: <KEY>
==> Successfully installed hdf5 from binary cache
[+] /home/spack1/spack/opt/spack/linux-ubuntu16.04-x86_64/gcc-5.4.0/hdf5-1.10.4-xxd7syhgej6onpyfyewxqcqe7ltkt7ob
```

We'll do a quick check-in on what we have installed so far.
``` $ spack find -ldf ==> 32 installed packages -- linux-ubuntu16.04-x86_64 / clang@3.8.0-2ubuntu4 --- 6wc66et tcl@8.6.8%clang i426yu3 ^zlib@1.2.8%clang i426yu3 zlib@1.2.8%clang 4pt75q7 zlib@1.2.11%clang -- linux-ubuntu16.04-x86_64 / gcc@4.7 --- bq2wtdx zlib@1.2.11%gcc -- linux-ubuntu16.04-x86_64 / gcc@5.4.0 --- 3sx2gxe autoconf@2.69%gcc suf5jtc ^m4@1.4.18%gcc fypapcp ^libsigsegv@2.11%gcc ic2kyoa ^perl@5.26.2%gcc q4fpyuo ^gdbm@1.14.1%gcc nxhwrg7 ^readline@7.0%gcc 3o765ou ^ncurses@6.1%gcc rymw7im automake@1.16.1%gcc ic2kyoa ^perl@5.26.2%gcc q4fpyuo ^gdbm@1.14.1%gcc nxhwrg7 ^readline@7.0%gcc 3o765ou ^ncurses@6.1%gcc d4iajxs findutils@4.6.0%gcc q4fpyuo gdbm@1.14.1%gcc nxhwrg7 ^readline@7.0%gcc 3o765ou ^ncurses@6.1%gcc 5vcv5r6 hdf5@1.10.4%gcc 5nus6kn ^zlib@1.2.11%gcc ozyvmhz hdf5@1.10.4%gcc 3njc4q5 ^openmpi@3.1.3%gcc 43tkw5m ^hwloc@1.11.9%gcc 5urc6tc ^libpciaccess@0.13.5%gcc wpexsph ^libxml2@2.9.8%gcc teneqii ^xz@5.2.4%gcc 5nus6kn ^zlib@1.2.11%gcc ft463od ^numactl@2.0.11%gcc xxd7syh hdf5@1.10.4%gcc p3f7p2r ^mpich@3.2.1%gcc 5nus6kn ^zlib@1.2.11%gcc 43tkw5m hwloc@1.11.9%gcc 5urc6tc ^libpciaccess@0.13.5%gcc wpexsph ^libxml2@2.9.8%gcc teneqii ^xz@5.2.4%gcc 5nus6kn ^zlib@1.2.11%gcc ft463od ^numactl@2.0.11%gcc 5urc6tc libpciaccess@0.13.5%gcc fypapcp libsigsegv@2.11%gcc o2pfwjf libtool@2.4.6%gcc wpexsph libxml2@2.9.8%gcc teneqii ^xz@5.2.4%gcc 5nus6kn ^zlib@1.2.11%gcc suf5jtc m4@1.4.18%gcc fypapcp ^libsigsegv@2.11%gcc p3f7p2r mpich@3.2.1%gcc 3o765ou ncurses@6.1%gcc ft463od numactl@2.0.11%gcc 3njc4q5 openmpi@3.1.3%gcc 43tkw5m ^hwloc@1.11.9%gcc 5urc6tc ^libpciaccess@0.13.5%gcc wpexsph ^libxml2@2.9.8%gcc teneqii ^xz@5.2.4%gcc 5nus6kn ^zlib@1.2.11%gcc ft463od ^numactl@2.0.11%gcc ic2kyoa perl@5.26.2%gcc q4fpyuo ^gdbm@1.14.1%gcc nxhwrg7 ^readline@7.0%gcc 3o765ou ^ncurses@6.1%gcc fovrh7a pkgconf@1.4.2%gcc nxhwrg7 readline@7.0%gcc 3o765ou ^ncurses@6.1%gcc am4pbat tcl@8.6.8%gcc 64mns5m ^zlib@1.2.8%gcc cppflags="-O3" qhwyccy tcl@8.6.8%gcc 5nus6kn ^zlib@1.2.11%gcc zs7a2pc texinfo@6.5%gcc 
ic2kyoa ^perl@5.26.2%gcc q4fpyuo ^gdbm@1.14.1%gcc nxhwrg7 ^readline@7.0%gcc 3o765ou ^ncurses@6.1%gcc milz7fm util-macros@1.19.1%gcc teneqii xz@5.2.4%gcc bkyl5bh zlib@1.2.8%gcc 64mns5m zlib@1.2.8%gcc cppflags="-O3" 5nus6kn zlib@1.2.11%gcc ``` Spack models the dependencies of packages as a directed acyclic graph (DAG). The `spack find -d` command shows the tree representation of that graph. We can also use the `spack graph` command to view the entire DAG as a graph. ``` $ spack graph hdf5+hl+mpi ^mpich o hdf5 |\ o | zlib / o mpich o findutils |\ | |\ | | |\ | | | |\ o | | | | texinfo | | | o | automake | |_|/| | |/| | | | | | | |/ | | | o autoconf | |_|/| |/| |/ | |/| o | | perl o | | gdbm o | | readline o | | ncurses o | | pkgconf / / | o libtool |/ o m4 o libsigsegv ``` You may also have noticed that there are some packages shown in the `spack find -d` output that we didn’t install explicitly. These are dependencies that were installed implicitly. A few packages installed implicitly are not shown as dependencies in the `spack find -d` output. These are build dependencies. For example, `libpciaccess` is a dependency of openmpi and requires `m4` to build. Spack will build `m4` as part of the installation of `openmpi`, but it does not become a part of the DAG because it is not linked in at run time. Spack handles build dependencies differently because of their different (less strict) consistency requirements. It is entirely possible to have two packages using different versions of a dependency to build, which obviously cannot be done with linked dependencies. `HDF5` is more complicated than our basic example of zlib and openssl, but it’s still within the realm of software that an experienced HPC user could reasonably expect to install given a bit of time. Now let’s look at an even more complicated package. 
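As an aside, the install ordering Spack derives from a dependency DAG is a topological sort, which can be sketched with the standard `tsort` utility. The package names below are borrowed from the hdf5 example above, but the edge list is a hand-written simplification for illustration, not actual Spack output.

```shell
# Sketch only: model a few edges of the hdf5 dependency DAG as
# "dependency dependent" pairs, meaning the first package must be
# installed before the second. tsort then prints one valid install
# order; since everything here is (transitively) a dependency of
# hdf5, hdf5 always comes out last.
printf '%s\n' 'zlib hdf5' 'mpich hdf5' 'findutils mpich' | tsort
```

Build dependencies like `m4` would not appear in such a run-time graph at all, which is exactly why they are absent from the `spack find -d` output.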
``` $ spack install trilinos ==> Installing diffutils ==> Searching for binary cache of diffutils ==> Finding buildcaches in /mirror/build_cache ==> Installing diffutils from binary cache ==> Fetching file:///mirror/build_cache/linux-ubuntu16.04-x86_64/gcc-5.4.0/diffutils-3.6/linux-ubuntu16.04-x86_64-gcc-5.4.0-diffutils-3.6-2rhuivgjrna2nrxhntyde6md2khcvs34.spack ######################################################################## 100.0% gpg: Signature made Sat Nov 10 05:30:17 2018 UTC using RSA key ID 3B7C69B2 gpg: Good signature from "sc-tutorial (GPG created for Spack) <<EMAIL>>" [unknown] gpg: WARNING: This key is not certified with a trusted signature! gpg: There is no indication that the signature belongs to the owner. Primary key fingerprint: <KEY> ==> Successfully installed diffutils from binary cache [+] /home/spack1/spack/opt/spack/linux-ubuntu16.04-x86_64/gcc-5.4.0/diffutils-3.6-2rhuivgjrna2nrxhntyde6md2khcvs34 ==> Installing bzip2 ==> Searching for binary cache of bzip2 ==> Installing bzip2 from binary cache ==> Fetching file:///mirror/build_cache/linux-ubuntu16.04-x86_64/gcc-5.4.0/bzip2-1.0.6/linux-ubuntu16.04-x86_64-gcc-5.4.0-bzip2-1.0.6-ufczdvsqt6edesm36xiucyry7myhj7e7.spack ######################################################################## 100.0% gpg: Signature made Sat Nov 10 05:34:37 2018 UTC using RSA key ID 3B7C69B2 gpg: Good signature from "sc-tutorial (GPG created for Spack) <<EMAIL>>" [unknown] gpg: WARNING: This key is not certified with a trusted signature! gpg: There is no indication that the signature belongs to the owner. 
Primary key fingerprint: <KEY> ==> Successfully installed bzip2 from binary cache [+] /home/spack1/spack/opt/spack/linux-ubuntu16.04-x86_64/gcc-5.4.0/bzip2-1.0.6-ufczdvsqt6edesm36xiucyry7myhj7e7 ==> zlib is already installed in /home/spack1/spack/opt/spack/linux-ubuntu16.04-x86_64/gcc-5.4.0/zlib-1.2.11-5nus6knzumx4ik2yl44jxtgtsl7d54xb ==> Installing boost ==> Searching for binary cache of boost ==> Installing boost from binary cache ==> Fetching file:///mirror/build_cache/linux-ubuntu16.04-x86_64/gcc-5.4.0/boost-1.68.0/linux-ubuntu16.04-x86_64-gcc-5.4.0-boost-1.68.0-zbgfxapchxa4awxdwpleubfuznblxzvt.spack ######################################################################## 100.0% gpg: Signature made Sat Nov 10 04:58:55 2018 UTC using RSA key ID 3B7C69B2 gpg: Good signature from "sc-tutorial (GPG created for Spack) <<EMAIL>>" [unknown] gpg: WARNING: This key is not certified with a trusted signature! gpg: There is no indication that the signature belongs to the owner. Primary key fingerprint: <KEY> ==> Successfully installed boost from binary cache [+] /home/spack1/spack/opt/spack/linux-ubuntu16.04-x86_64/gcc-5.4.0/boost-1.68.0-zbgfxapchxa4awxdwpleubfuznblxzvt ==> pkgconf is already installed in /home/spack1/spack/opt/spack/linux-ubuntu16.04-x86_64/gcc-5.4.0/pkgconf-1.4.2-fovrh7alpft646n6mhis5mml6k6e5f4v ==> ncurses is already installed in /home/spack1/spack/opt/spack/linux-ubuntu16.04-x86_64/gcc-5.4.0/ncurses-6.1-3o765ourmesfrji6yeclb4wb5w54aqbh ==> readline is already installed in /home/spack1/spack/opt/spack/linux-ubuntu16.04-x86_64/gcc-5.4.0/readline-7.0-nxhwrg7xwc6nbsm2v4ezwe63l6nfidbi ==> gdbm is already installed in /home/spack1/spack/opt/spack/linux-ubuntu16.04-x86_64/gcc-5.4.0/gdbm-1.14.1-q4fpyuo7ouhkeq6d3oabtrppctpvxmes ==> perl is already installed in /home/spack1/spack/opt/spack/linux-ubuntu16.04-x86_64/gcc-5.4.0/perl-5.26.2-ic2kyoadgp3dxfejcbllyplj2wf524fo ==> Installing openssl ==> Searching for binary cache of openssl ==> Installing openssl from 
binary cache ==> Fetching file:///mirror/build_cache/linux-ubuntu16.04-x86_64/gcc-5.4.0/openssl-1.0.2o/linux-ubuntu16.04-x86_64-gcc-5.4.0-openssl-1.0.2o-b4y3w3bsyvjla6eesv4vt6aplpfrpsha.spack ######################################################################## 100.0% gpg: Signature made Sat Nov 10 05:24:10 2018 UTC using RSA key ID 3B7C69B2 gpg: Good signature from "sc-tutorial (GPG created for Spack) <<EMAIL>>" [unknown] gpg: WARNING: This key is not certified with a trusted signature! gpg: There is no indication that the signature belongs to the owner. Primary key fingerprint: <KEY> ==> Successfully installed openssl from binary cache [+] /home/spack1/spack/opt/spack/linux-ubuntu16.04-x86_64/gcc-5.4.0/openssl-1.0.2o-b4y3w3bsyvjla6eesv4vt6aplpfrpsha ==> Installing cmake ==> Searching for binary cache of cmake ==> Installing cmake from binary cache ==> Fetching file:///mirror/build_cache/linux-ubuntu16.04-x86_64/gcc-5.4.0/cmake-3.12.3/linux-ubuntu16.04-x86_64-gcc-5.4.0-cmake-3.12.3-otafqzhh4xnlq2mpakch7dr3tjfsrjnx.spack ######################################################################## 100.0% gpg: Signature made Sat Nov 10 05:33:15 2018 UTC using RSA key ID 3B7C69B2 gpg: Good signature from "sc-tutorial (GPG created for Spack) <<EMAIL>>" [unknown] gpg: WARNING: This key is not certified with a trusted signature! gpg: There is no indication that the signature belongs to the owner. 
Primary key fingerprint: <KEY> ==> Successfully installed cmake from binary cache [+] /home/spack1/spack/opt/spack/linux-ubuntu16.04-x86_64/gcc-5.4.0/cmake-3.12.3-otafqzhh4xnlq2mpakch7dr3tjfsrjnx ==> Installing glm ==> Searching for binary cache of glm ==> Installing glm from binary cache ==> Fetching file:///mirror/build_cache/linux-ubuntu16.04-x86_64/gcc-5.4.0/glm-0.9.7.1/linux-ubuntu16.04-x86_64-gcc-5.4.0-glm-0.9.7.1-jnw622jwcbsymzj2fsx22omjl7tmvaws.spack ######################################################################## 100.0% gpg: Signature made Sat Nov 10 05:30:33 2018 UTC using RSA key ID 3B7C69B2 gpg: Good signature from "sc-tutorial (GPG created for Spack) <<EMAIL>>" [unknown] gpg: WARNING: This key is not certified with a trusted signature! gpg: There is no indication that the signature belongs to the owner. Primary key fingerprint: 95C7 1787 7AC0 0FFD AA8F D6E9 9CFA 4A45 3B7C 69B2 ==> Successfully installed glm from binary cache [+] /home/spack1/spack/opt/spack/linux-ubuntu16.04-x86_64/gcc-5.4.0/glm-0.9.7.1-jnw622jwcbsymzj2fsx22omjl7tmvaws ==> libsigsegv is already installed in /home/spack1/spack/opt/spack/linux-ubuntu16.04-x86_64/gcc-5.4.0/libsigsegv-2.11-fypapcprssrj3nstp6njprskeyynsgaz ==> m4 is already installed in /home/spack1/spack/opt/spack/linux-ubuntu16.04-x86_64/gcc-5.4.0/m4-1.4.18-suf5jtcfehivwfesrc5hjy72r4nukyel ==> libtool is already installed in /home/spack1/spack/opt/spack/linux-ubuntu16.04-x86_64/gcc-5.4.0/libtool-2.4.6-o2pfwjf44353ajgr42xqtvzyvqsazkgu ==> util-macros is already installed in /home/spack1/spack/opt/spack/linux-ubuntu16.04-x86_64/gcc-5.4.0/util-macros-1.19.1-milz7fmttmptcic2qdk5cnel7ll5sybr ==> libpciaccess is already installed in /home/spack1/spack/opt/spack/linux-ubuntu16.04-x86_64/gcc-5.4.0/libpciaccess-0.13.5-5urc6tcjae26fbbd2wyfohoszhgxtbmc ==> xz is already installed in /home/spack1/spack/opt/spack/linux-ubuntu16.04-x86_64/gcc-5.4.0/xz-5.2.4-teneqii2xv5u6zl5r6qi3pwurc6pmypz ==> libxml2 is already installed in 
/home/spack1/spack/opt/spack/linux-ubuntu16.04-x86_64/gcc-5.4.0/libxml2-2.9.8-wpexsphdmfayxqxd4up5vgwuqgu5woo7 ==> autoconf is already installed in /home/spack1/spack/opt/spack/linux-ubuntu16.04-x86_64/gcc-5.4.0/autoconf-2.69-3sx2gxeibc4oasqd4o5h6lnwpcpsgd2q ==> automake is already installed in /home/spack1/spack/opt/spack/linux-ubuntu16.04-x86_64/gcc-5.4.0/automake-1.16.1-rymw7imfehycqxzj4nuy2oiw3abegooy ==> numactl is already installed in /home/spack1/spack/opt/spack/linux-ubuntu16.04-x86_64/gcc-5.4.0/numactl-2.0.11-ft463odrombnxlc3qew4omckhlq7tqgc ==> hwloc is already installed in /home/spack1/spack/opt/spack/linux-ubuntu16.04-x86_64/gcc-5.4.0/hwloc-1.11.9-43tkw5mt6huhv37vqnybqgxtkodbsava ==> openmpi is already installed in /home/spack1/spack/opt/spack/linux-ubuntu16.04-x86_64/gcc-5.4.0/openmpi-3.1.3-3njc4q5pqdpptq6jvqjrezkffwokv2sx ==> Installing hdf5 ==> Searching for binary cache of hdf5 ==> Installing hdf5 from binary cache ==> Fetching file:///mirror/build_cache/linux-ubuntu16.04-x86_64/gcc-5.4.0/hdf5-1.10.4/linux-ubuntu16.04-x86_64-gcc-5.4.0-hdf5-1.10.4-oqwnui7wtovuf2id4vjwcxfmxlzjus6y.spack ######################################################################## 100.0% gpg: Signature made Sat Nov 10 05:09:10 2018 UTC using RSA key ID 3B7C69B2 gpg: Good signature from "sc-tutorial (GPG created for Spack) <<EMAIL>>" [unknown] gpg: WARNING: This key is not certified with a trusted signature! gpg: There is no indication that the signature belongs to the owner. 
Primary key fingerprint: <KEY> ==> Successfully installed hdf5 from binary cache [+] /home/spack1/spack/opt/spack/linux-ubuntu16.04-x86_64/gcc-5.4.0/hdf5-1.10.4-oqwnui7wtovuf2id4vjwcxfmxlzjus6y ==> Installing openblas ==> Searching for binary cache of openblas ==> Installing openblas from binary cache ==> Fetching file:///mirror/build_cache/linux-ubuntu16.04-x86_64/gcc-5.4.0/openblas-0.3.3/linux-ubuntu16.04-x86_64-gcc-5.4.0-openblas-0.3.3-cyeg2yiitpuqglhvbox5gtbgsim2v5vn.spack ######################################################################## 100.0% gpg: Signature made Sat Nov 10 05:32:04 2018 UTC using RSA key ID 3B7C69B2 gpg: Good signature from "sc-tutorial (GPG created for Spack) <<EMAIL>>" [unknown] gpg: WARNING: This key is not certified with a trusted signature! gpg: There is no indication that the signature belongs to the owner. Primary key fingerprint: <KEY> ==> Successfully installed openblas from binary cache [+] /home/spack1/spack/opt/spack/linux-ubuntu16.04-x86_64/gcc-5.4.0/openblas-0.3.3-cyeg2yiitpuqglhvbox5gtbgsim2v5vn ==> Installing hypre ==> Searching for binary cache of hypre ==> Installing hypre from binary cache ==> Fetching file:///mirror/build_cache/linux-ubuntu16.04-x86_64/gcc-5.4.0/hypre-2.15.1/linux-ubuntu16.04-x86_64-gcc-5.4.0-hypre-2.15.1-fshksdpecwiq7r6vawfswpboedhbisju.spack ######################################################################## 100.0% gpg: Signature made Sat Nov 10 05:07:34 2018 UTC using RSA key ID 3B7C69B2 gpg: Good signature from "sc-tutorial (GPG created for Spack) <<EMAIL>>" [unknown] gpg: WARNING: This key is not certified with a trusted signature! gpg: There is no indication that the signature belongs to the owner. 
Primary key fingerprint: <KEY> ==> Successfully installed hypre from binary cache [+] /home/spack1/spack/opt/spack/linux-ubuntu16.04-x86_64/gcc-5.4.0/hypre-2.15.1-fshksdpecwiq7r6vawfswpboedhbisju ==> Installing matio ==> Searching for binary cache of matio ==> Installing matio from binary cache ==> Fetching file:///mirror/build_cache/linux-ubuntu16.04-x86_64/gcc-5.4.0/matio-1.5.9/linux-ubuntu16.04-x86_64-gcc-5.4.0-matio-1.5.9-lmzdgssvobdljw52mtahelu2ju7osh6h.spack ######################################################################## 100.0% gpg: Signature made Sat Nov 10 05:05:13 2018 UTC using RSA key ID 3B7C69B2 gpg: Good signature from "sc-tutorial (GPG created for Spack) <<EMAIL>>" [unknown] gpg: WARNING: This key is not certified with a trusted signature! gpg: There is no indication that the signature belongs to the owner. Primary key fingerprint: <KEY> ==> Successfully installed matio from binary cache [+] /home/spack1/spack/opt/spack/linux-ubuntu16.04-x86_64/gcc-5.4.0/matio-1.5.9-lmzdgssvobdljw52mtahelu2ju7osh6h ==> Installing metis ==> Searching for binary cache of metis ==> Installing metis from binary cache ==> Fetching file:///mirror/build_cache/linux-ubuntu16.04-x86_64/gcc-5.4.0/metis-5.1.0/linux-ubuntu16.04-x86_64-gcc-5.4.0-metis-5.1.0-3wnvp4ji3wwu4v4vymszrhx6naehs6jc.spack ######################################################################## 100.0% gpg: Signature made Sat Nov 10 05:31:42 2018 UTC using RSA key ID 3B7C69B2 gpg: Good signature from "sc-tutorial (GPG created for Spack) <<EMAIL>>" [unknown] gpg: WARNING: This key is not certified with a trusted signature! gpg: There is no indication that the signature belongs to the owner. 
Primary key fingerprint: <KEY> ==> Successfully installed metis from binary cache [+] /home/spack1/spack/opt/spack/linux-ubuntu16.04-x86_64/gcc-5.4.0/metis-5.1.0-3wnvp4ji3wwu4v4vymszrhx6naehs6jc ==> Installing netlib-scalapack ==> Searching for binary cache of netlib-scalapack ==> Installing netlib-scalapack from binary cache ==> Fetching file:///mirror/build_cache/linux-ubuntu16.04-x86_64/gcc-5.4.0/netlib-scalapack-2.0.2/linux-ubuntu16.04-x86_64-gcc-5.4.0-netlib-scalapack-2.0.2-wotpfwfctgfkzzn2uescucxvvbg3tm6b.spack ######################################################################## 100.0% gpg: Signature made Sat Nov 10 05:07:22 2018 UTC using RSA key ID 3B7C69B2 gpg: Good signature from "sc-tutorial (GPG created for Spack) <<EMAIL>>" [unknown] gpg: WARNING: This key is not certified with a trusted signature! gpg: There is no indication that the signature belongs to the owner. Primary key fingerprint: <KEY> ==> Successfully installed netlib-scalapack from binary cache [+] /home/spack1/spack/opt/spack/linux-ubuntu16.04-x86_64/gcc-5.4.0/netlib-scalapack-2.0.2-wotpfwfctgfkzzn2uescucxvvbg3tm6b ==> Installing mumps ==> Searching for binary cache of mumps ==> Installing mumps from binary cache ==> Fetching file:///mirror/build_cache/linux-ubuntu16.04-x86_64/gcc-5.4.0/mumps-5.1.1/linux-ubuntu16.04-x86_64-gcc-5.4.0-mumps-5.1.1-acsg2dzroox2swssgc5cwgkvdy6jcm5q.spack ######################################################################## 100.0% gpg: Signature made Sat Nov 10 05:18:32 2018 UTC using RSA key ID 3B7C69B2 gpg: Good signature from "sc-tutorial (GPG created for Spack) <<EMAIL>>" [unknown] gpg: WARNING: This key is not certified with a trusted signature! gpg: There is no indication that the signature belongs to the owner. 
Primary key fingerprint: <KEY> ==> Successfully installed mumps from binary cache [+] /home/spack1/spack/opt/spack/linux-ubuntu16.04-x86_64/gcc-5.4.0/mumps-5.1.1-acsg2dzroox2swssgc5cwgkvdy6jcm5q ==> Installing netcdf ==> Searching for binary cache of netcdf ==> Installing netcdf from binary cache ==> Fetching file:///mirror/build_cache/linux-ubuntu16.04-x86_64/gcc-5.4.0/netcdf-4.6.1/linux-ubuntu16.04-x86_64-gcc-5.4.0-netcdf-4.6.1-mhm4izpogf4mrjidyskb6ewtzxdi7t6g.spack ######################################################################## 100.0% gpg: Signature made Sat Nov 10 05:11:57 2018 UTC using RSA key ID 3B7C69B2 gpg: Good signature from "sc-tutorial (GPG created for Spack) <<EMAIL>>" [unknown] gpg: WARNING: This key is not certified with a trusted signature! gpg: There is no indication that the signature belongs to the owner. Primary key fingerprint: <KEY> ==> Successfully installed netcdf from binary cache [+] /home/spack1/spack/opt/spack/linux-ubuntu16.04-x86_64/gcc-5.4.0/netcdf-4.6.1-mhm4izpogf4mrjidyskb6ewtzxdi7t6g ==> Installing parmetis ==> Searching for binary cache of parmetis ==> Installing parmetis from binary cache ==> Fetching file:///mirror/build_cache/linux-ubuntu16.04-x86_64/gcc-5.4.0/parmetis-4.0.3/linux-ubuntu16.04-x86_64-gcc-5.4.0-parmetis-4.0.3-uv6h3sqx6quqg22hxesi2mw2un3kw6b7.spack ######################################################################## 100.0% gpg: Signature made Sat Nov 10 05:12:04 2018 UTC using RSA key ID 3B7C69B2 gpg: Good signature from "sc-tutorial (GPG created for Spack) <<EMAIL>>" [unknown] gpg: WARNING: This key is not certified with a trusted signature! gpg: There is no indication that the signature belongs to the owner. 
Primary key fingerprint: <KEY> ==> Successfully installed parmetis from binary cache [+] /home/spack1/spack/opt/spack/linux-ubuntu16.04-x86_64/gcc-5.4.0/parmetis-4.0.3-uv6h3sqx6quqg22hxesi2mw2un3kw6b7 ==> Installing suite-sparse ==> Searching for binary cache of suite-sparse ==> Installing suite-sparse from binary cache ==> Fetching file:///mirror/build_cache/linux-ubuntu16.04-x86_64/gcc-5.4.0/suite-sparse-5.3.0/linux-ubuntu16.04-x86_64-gcc-5.4.0-suite-sparse-5.3.0-zaau4kifha2enpdcn3mjlrqym7hm7yon.spack ######################################################################## 100.0% gpg: Signature made Sat Nov 10 05:22:54 2018 UTC using RSA key ID 3B7C69B2 gpg: Good signature from "sc-tutorial (GPG created for Spack) <<EMAIL>>" [unknown] gpg: WARNING: This key is not certified with a trusted signature! gpg: There is no indication that the signature belongs to the owner. Primary key fingerprint: <KEY> ==> Successfully installed suite-sparse from binary cache [+] /home/spack1/spack/opt/spack/linux-ubuntu16.04-x86_64/gcc-5.4.0/suite-sparse-5.3.0-zaau4kifha2enpdcn3mjlrqym7hm7yon ==> Installing trilinos ==> Searching for binary cache of trilinos ==> Installing trilinos from binary cache ==> Fetching file:///mirror/build_cache/linux-ubuntu16.04-x86_64/gcc-5.4.0/trilinos-12.12.1/linux-ubuntu16.04-x86_64-gcc-5.4.0-trilinos-12.12.1-rlsruavxqvwk2tgxzxboclbo6ykjf54r.spack ######################################################################## 100.0% gpg: Signature made Sat Nov 10 05:18:10 2018 UTC using RSA key ID 3B7C69B2 gpg: Good signature from "sc-tutorial (GPG created for Spack) <<EMAIL>>" [unknown] gpg: WARNING: This key is not certified with a trusted signature! gpg: There is no indication that the signature belongs to the owner. Primary key fingerprint: <KEY> ==> Successfully installed trilinos from binary cache ``` Now we’re starting to see the power of Spack. 
Trilinos in its default configuration has 23 top-level dependencies, many of which have dependencies of their own. Installing more complex packages can take days or weeks even for an experienced user. Although we’ve done a binary installation for the tutorial, a source installation of trilinos using Spack takes about 3 hours (depending on the system), but only 20 seconds of programmer time. Spack manages consistency of the entire DAG. Every MPI dependency will be satisfied by the same configuration of MPI, etc. If we install `trilinos` again specifying a dependency on our previous HDF5 built with `mpich`: ``` $ spack install trilinos +hdf5 ^hdf5+hl+mpi ^mpich ==> diffutils is already installed in /home/spack1/spack/opt/spack/linux-ubuntu16.04-x86_64/gcc-5.4.0/diffutils-3.6-2rhuivgjrna2nrxhntyde6md2khcvs34 ==> bzip2 is already installed in /home/spack1/spack/opt/spack/linux-ubuntu16.04-x86_64/gcc-5.4.0/bzip2-1.0.6-ufczdvsqt6edesm36xiucyry7myhj7e7 ==> zlib is already installed in /home/spack1/spack/opt/spack/linux-ubuntu16.04-x86_64/gcc-5.4.0/zlib-1.2.11-5nus6knzumx4ik2yl44jxtgtsl7d54xb ==> boost is already installed in /home/spack1/spack/opt/spack/linux-ubuntu16.04-x86_64/gcc-5.4.0/boost-1.68.0-zbgfxapchxa4awxdwpleubfuznblxzvt ==> pkgconf is already installed in /home/spack1/spack/opt/spack/linux-ubuntu16.04-x86_64/gcc-5.4.0/pkgconf-1.4.2-fovrh7alpft646n6mhis5mml6k6e5f4v ==> ncurses is already installed in /home/spack1/spack/opt/spack/linux-ubuntu16.04-x86_64/gcc-5.4.0/ncurses-6.1-3o765ourmesfrji6yeclb4wb5w54aqbh ==> readline is already installed in /home/spack1/spack/opt/spack/linux-ubuntu16.04-x86_64/gcc-5.4.0/readline-7.0-nxhwrg7xwc6nbsm2v4ezwe63l6nfidbi ==> gdbm is already installed in /home/spack1/spack/opt/spack/linux-ubuntu16.04-x86_64/gcc-5.4.0/gdbm-1.14.1-q4fpyuo7ouhkeq6d3oabtrppctpvxmes ==> perl is already installed in /home/spack1/spack/opt/spack/linux-ubuntu16.04-x86_64/gcc-5.4.0/perl-5.26.2-ic2kyoadgp3dxfejcbllyplj2wf524fo ==> openssl is already
installed in /home/spack1/spack/opt/spack/linux-ubuntu16.04-x86_64/gcc-5.4.0/openssl-1.0.2o-b4y3w3bsyvjla6eesv4vt6aplpfrpsha ==> cmake is already installed in /home/spack1/spack/opt/spack/linux-ubuntu16.04-x86_64/gcc-5.4.0/cmake-3.12.3-otafqzhh4xnlq2mpakch7dr3tjfsrjnx ==> glm is already installed in /home/spack1/spack/opt/spack/linux-ubuntu16.04-x86_64/gcc-5.4.0/glm-0.9.7.1-jnw622jwcbsymzj2fsx22omjl7tmvaws ==> libsigsegv is already installed in /home/spack1/spack/opt/spack/linux-ubuntu16.04-x86_64/gcc-5.4.0/libsigsegv-2.11-fypapcprssrj3nstp6njprskeyynsgaz ==> m4 is already installed in /home/spack1/spack/opt/spack/linux-ubuntu16.04-x86_64/gcc-5.4.0/m4-1.4.18-suf5jtcfehivwfesrc5hjy72r4nukyel ==> autoconf is already installed in /home/spack1/spack/opt/spack/linux-ubuntu16.04-x86_64/gcc-5.4.0/autoconf-2.69-3sx2gxeibc4oasqd4o5h6lnwpcpsgd2q ==> automake is already installed in /home/spack1/spack/opt/spack/linux-ubuntu16.04-x86_64/gcc-5.4.0/automake-1.16.1-rymw7imfehycqxzj4nuy2oiw3abegooy ==> libtool is already installed in /home/spack1/spack/opt/spack/linux-ubuntu16.04-x86_64/gcc-5.4.0/libtool-2.4.6-o2pfwjf44353ajgr42xqtvzyvqsazkgu ==> texinfo is already installed in /home/spack1/spack/opt/spack/linux-ubuntu16.04-x86_64/gcc-5.4.0/texinfo-6.5-zs7a2pcwhq6ho2cj2x26uxfktwkpyucn ==> findutils is already installed in /home/spack1/spack/opt/spack/linux-ubuntu16.04-x86_64/gcc-5.4.0/findutils-4.6.0-d4iajxsopzrlcjtasahxqeyjkjv5jx4v ==> mpich is already installed in /home/spack1/spack/opt/spack/linux-ubuntu16.04-x86_64/gcc-5.4.0/mpich-3.2.1-p3f7p2r5ntrynqibosglxvhwyztiwqs5 ==> hdf5 is already installed in /home/spack1/spack/opt/spack/linux-ubuntu16.04-x86_64/gcc-5.4.0/hdf5-1.10.4-xxd7syhgej6onpyfyewxqcqe7ltkt7ob ==> openblas is already installed in /home/spack1/spack/opt/spack/linux-ubuntu16.04-x86_64/gcc-5.4.0/openblas-0.3.3-cyeg2yiitpuqglhvbox5gtbgsim2v5vn ==> Installing hypre ==> Searching for binary cache of hypre ==> Finding buildcaches in /mirror/build_cache ==> Installing 
hypre from binary cache ==> Fetching file:///mirror/build_cache/linux-ubuntu16.04-x86_64/gcc-5.4.0/hypre-2.15.1/linux-ubuntu16.04-x86_64-gcc-5.4.0-hypre-2.15.1-obewuozolon7tkdg4cfxc6ae2tzkronb.spack ######################################################################## 100.0% gpg: Signature made Sat Nov 10 05:34:36 2018 UTC using RSA key ID 3B7C69B2 gpg: Good signature from "sc-tutorial (GPG created for Spack) <<EMAIL>>" [unknown] gpg: WARNING: This key is not certified with a trusted signature! gpg: There is no indication that the signature belongs to the owner. Primary key fingerprint: <KEY> ==> Successfully installed hypre from binary cache [+] /home/spack1/spack/opt/spack/linux-ubuntu16.04-x86_64/gcc-5.4.0/hypre-2.15.1-obewuozolon7tkdg4cfxc6ae2tzkronb ==> Installing matio ==> Searching for binary cache of matio ==> Installing matio from binary cache ==> Fetching file:///mirror/build_cache/linux-ubuntu16.04-x86_64/gcc-5.4.0/matio-1.5.9/linux-ubuntu16.04-x86_64-gcc-5.4.0-matio-1.5.9-gvyqldhifflmvcrtui3b6s64jcczsxxh.spack ######################################################################## 100.0% gpg: Signature made Sat Nov 10 05:25:11 2018 UTC using RSA key ID 3B7C69B2 gpg: Good signature from "sc-tutorial (GPG created for Spack) <<EMAIL>>" [unknown] gpg: WARNING: This key is not certified with a trusted signature! gpg: There is no indication that the signature belongs to the owner. 
Primary key fingerprint: <KEY> ==> Successfully installed matio from binary cache [+] /home/spack1/spack/opt/spack/linux-ubuntu16.04-x86_64/gcc-5.4.0/matio-1.5.9-gvyqldhifflmvcrtui3b6s64jcczsxxh ==> metis is already installed in /home/spack1/spack/opt/spack/linux-ubuntu16.04-x86_64/gcc-5.4.0/metis-5.1.0-3wnvp4ji3wwu4v4vymszrhx6naehs6jc ==> Installing netlib-scalapack ==> Searching for binary cache of netlib-scalapack ==> Installing netlib-scalapack from binary cache ==> Fetching file:///mirror/build_cache/linux-ubuntu16.04-x86_64/gcc-5.4.0/netlib-scalapack-2.0.2/linux-ubuntu16.04-x86_64-gcc-5.4.0-netlib-scalapack-2.0.2-p7iln2pcosw2ipyqoyr7ie6lpva2oj7r.spack ######################################################################## 100.0% gpg: Signature made Sat Nov 10 05:32:20 2018 UTC using RSA key ID 3B7C69B2 gpg: Good signature from "sc-tutorial (GPG created for Spack) <<EMAIL>>" [unknown] gpg: WARNING: This key is not certified with a trusted signature! gpg: There is no indication that the signature belongs to the owner. Primary key fingerprint: <KEY> ==> Successfully installed netlib-scalapack from binary cache [+] /home/spack1/spack/opt/spack/linux-ubuntu16.04-x86_64/gcc-5.4.0/netlib-scalapack-2.0.2-p7iln2pcosw2ipyqoyr7ie6lpva2oj7r ==> Installing mumps ==> Searching for binary cache of mumps ==> Installing mumps from binary cache ==> Fetching file:///mirror/build_cache/linux-ubuntu16.04-x86_64/gcc-5.4.0/mumps-5.1.1/linux-ubuntu16.04-x86_64-gcc-5.4.0-mumps-5.1.1-cumcj5a75cagsznpjrgretxdg6okxaur.spack ######################################################################## 100.0% gpg: Signature made Sat Nov 10 05:33:18 2018 UTC using RSA key ID 3B7C69B2 gpg: Good signature from "sc-tutorial (GPG created for Spack) <<EMAIL>>" [unknown] gpg: WARNING: This key is not certified with a trusted signature! gpg: There is no indication that the signature belongs to the owner. 
Primary key fingerprint: <KEY> ==> Successfully installed mumps from binary cache [+] /home/spack1/spack/opt/spack/linux-ubuntu16.04-x86_64/gcc-5.4.0/mumps-5.1.1-cumcj5a75cagsznpjrgretxdg6okxaur ==> Installing netcdf ==> Searching for binary cache of netcdf ==> Installing netcdf from binary cache ==> Fetching file:///mirror/build_cache/linux-ubuntu16.04-x86_64/gcc-5.4.0/netcdf-4.6.1/linux-ubuntu16.04-x86_64-gcc-5.4.0-netcdf-4.6.1-wmmx5sgwfds34v7bkkhiduar5yecrnnd.spack ######################################################################## 100.0% gpg: Signature made Sat Nov 10 05:24:01 2018 UTC using RSA key ID 3B7C69B2 gpg: Good signature from "sc-tutorial (GPG created for Spack) <<EMAIL>>" [unknown] gpg: WARNING: This key is not certified with a trusted signature! gpg: There is no indication that the signature belongs to the owner. Primary key fingerprint: <KEY> ==> Successfully installed netcdf from binary cache [+] /home/spack1/spack/opt/spack/linux-ubuntu16.04-x86_64/gcc-5.4.0/netcdf-4.6.1-wmmx5sgwfds34v7bkkhiduar5yecrnnd ==> Installing parmetis ==> Searching for binary cache of parmetis ==> Installing parmetis from binary cache ==> Fetching file:///mirror/build_cache/linux-ubuntu16.04-x86_64/gcc-5.4.0/parmetis-4.0.3/linux-ubuntu16.04-x86_64-gcc-5.4.0-parmetis-4.0.3-jehtatan4y2lcobj6waoqv66jj4libtz.spack ######################################################################## 100.0% gpg: Signature made Sat Nov 10 05:07:41 2018 UTC using RSA key ID 3B7C69B2 gpg: Good signature from "sc-tutorial (GPG created for Spack) <<EMAIL>>" [unknown] gpg: WARNING: This key is not certified with a trusted signature! gpg: There is no indication that the signature belongs to the owner. 
Primary key fingerprint: <KEY> ==> Successfully installed parmetis from binary cache [+] /home/spack1/spack/opt/spack/linux-ubuntu16.04-x86_64/gcc-5.4.0/parmetis-4.0.3-jehtatan4y2lcobj6waoqv66jj4libtz ==> suite-sparse is already installed in /home/spack1/spack/opt/spack/linux-ubuntu16.04-x86_64/gcc-5.4.0/suite-sparse-5.3.0-zaau4kifha2enpdcn3mjlrqym7hm7yon ==> Installing trilinos ==> Searching for binary cache of trilinos ==> Installing trilinos from binary cache ==> Fetching file:///mirror/build_cache/linux-ubuntu16.04-x86_64/gcc-5.4.0/trilinos-12.12.1/linux-ubuntu16.04-x86_64-gcc-5.4.0-trilinos-12.12.1-kqc52moweigxqxzwzfqajc6ocxwdwn4w.spack ######################################################################## 100.0% gpg: Signature made Sat Nov 10 05:30:15 2018 UTC using RSA key ID 3B7C69B2 gpg: Good signature from "sc-tutorial (GPG created for Spack) <<EMAIL>>" [unknown] gpg: WARNING: This key is not certified with a trusted signature! gpg: There is no indication that the signature belongs to the owner. Primary key fingerprint: <KEY> ==> Successfully installed trilinos from binary cache [+] /home/spack1/spack/opt/spack/linux-ubuntu16.04-x86_64/gcc-5.4.0/trilinos-12.12.1-kqc52moweigxqxzwzfqajc6ocxwdwn4w ``` We see that every package in the trilinos DAG that depends on MPI now uses `mpich`. 
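This consistency property — one MPI configuration shared by the whole DAG — can be sketched as a simple invariant check. The dependency table below is hypothetical, made up for illustration (it is not real Spack output), and the script only demonstrates the invariant Spack enforces during concretization:

```
#!/bin/sh
# Hypothetical (package, MPI provider) pairs standing in for a concretized DAG.
deps="hdf5 mpich
netlib-scalapack mpich
mumps mpich
trilinos mpich"

# Collect the distinct MPI providers used anywhere in the DAG.
providers=$(printf '%s\n' "$deps" | awk '{print $2}' | sort -u)
count=$(printf '%s\n' "$providers" | wc -l)

# A consistent DAG has exactly one provider for the mpi virtual dependency.
if [ "$count" -eq 1 ]; then
    echo "consistent: all packages use $providers"
else
    echo "inconsistent: $count MPI providers found"
fi
```

If one of the entries were built against `openmpi` instead, the check would report an inconsistency — the situation Spack's concretizer rules out by construction.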
``` $ spack find -d trilinos ==> 2 installed packages -- linux-ubuntu16.04-x86_64 / gcc@5.4.0 --- trilinos@12.12.1 ^boost@1.68.0 ^bzip2@1.0.6 ^zlib@1.2.11 ^glm@0.9.7.1 ^hdf5@1.10.4 ^openmpi@3.1.3 ^hwloc@1.11.9 ^libpciaccess@0.13.5 ^libxml2@2.9.8 ^xz@5.2.4 ^numactl@2.0.11 ^hypre@2.15.1 ^openblas@0.3.3 ^matio@1.5.9 ^metis@5.1.0 ^mumps@5.1.1 ^netlib-scalapack@2.0.2 ^netcdf@4.6.1 ^parmetis@4.0.3 ^suite-sparse@5.3.0 trilinos@12.12.1 ^boost@1.68.0 ^bzip2@1.0.6 ^zlib@1.2.11 ^glm@0.9.7.1 ^hdf5@1.10.4 ^mpich@3.2.1 ^hypre@2.15.1 ^openblas@0.3.3 ^matio@1.5.9 ^metis@5.1.0 ^mumps@5.1.1 ^netlib-scalapack@2.0.2 ^netcdf@4.6.1 ^parmetis@4.0.3 ^suite-sparse@5.3.0 ``` As we discussed before, the `spack find -d` command shows the dependency information as a tree. While that is often sufficient, many complicated packages, including trilinos, have dependencies that cannot be fully represented as a tree. Again, the `spack graph` command shows the full DAG of the dependency information. ``` $ spack graph trilinos o trilinos |\ | |\ | | |\ | | | |\ | | | | |\ | | | | | |\ | | | | | | |\ | | | | | | | |\ | | | | | | | | |\ | | | | | | | | | |\ | | | | | | | | | | |\ | | | | | | | | | | | |\ | | | | | | | | | | | | |\ o | | | | | | | | | | | | | suite-sparse |\ \ \ \ \ \ \ \ \ \ \ \ \ \ | |_|_|/ / / / / / / / / / / |/| | | | | | | | | | | | | | |\ \ \ \ \ \ \ \ \ \ \ \ \ | | |_|_|_|_|_|/ / / / / / / | |/| | | | | | | | | | | | | | | |_|_|_|_|_|_|_|/ / / | | |/| | | | | | | | | | | | | o | | | | | | | | | parmetis | | |/| | | | | | | | | | | |/|/| | | | | | | | | | | | | |/ / / / / / / / / | | | | | | o | | | | | mumps | |_|_|_|_|/| | | | | | |/| | | |_|/| | | | | | | | | |/| |/ / / / / / | | | | |/| | | | | | | | | | o | | | | | | netlib-scalapack | |_|_|/| | | | | | | |/| | |/| | | | | | | | | |/|/ / / / / / / | o | | | | | | | | metis | |/ / / / / / / / | | | | | | | o | glm | | |_|_|_|_|/ / | |/| | | | | | | o | | | | | | cmake | |\ \ \ \ \ \ \ | o | | | | | | | openssl | |\ \ \ \ \ \ \ \ 
| | | | | o | | | | netcdf | | |_|_|/| | | | | | |/| | |/| | | | | | | | | | |\ \ \ \ \ | | | | | | | |_|/ / | | | | | | |/| | | | | | | | | | o | | matio | | |_|_|_|_|/| | | | |/| | | | |/ / / | | | | | | | o | hypre | |_|_|_|_|_|/| | |/| | | | |_|/ / | | | | |/| | | | | | | | | o | hdf5 | | |_|_|_|/| | | |/| | | |/ / | | | | |/| | | | | | o | | openmpi | | |_|/| | | | |/| | | | | | | | | |\ \ \ | | | | | o | | hwloc | | | | |/| | | | | | | | |\ \ \ | | | | | | |\ \ \ | | | | | | o | | | libxml2 | | |_|_|_|/| | | | | |/| | | |/| | | | | | | | | | | | | o boost | | |_|_|_|_|_|_|/| | |/| | | | | | | | | o | | | | | | | | zlib | / / / / / / / / | | | | | o | | | xz | | | | | / / / | | | | | o | | libpciaccess | | | | |/| | | | | | | | |\ \ \ | | | | | o | | | util-macros | | | | | / / / | | | o | | | | numactl | | | |\ \ \ \ \ | | | | |_|_|/ / | | | |/| | | | | | | | |\ \ \ \ | | | | | |_|/ / | | | | |/| | | | | | | | |\ \ \ | | | | | o | | | automake | | |_|_|/| | | | | |/| | | | | | | | | | | | |/ / / | | | | | o | | autoconf | | |_|_|/| | | | |/| | |/ / / | | | |/| | | | o | | | | | perl | o | | | | | gdbm | o | | | | | readline | |/ / / / / | o | | | | ncurses | | |_|/ / | |/| | | | o | | | pkgconf | / / / o | | | openblas / / / | o | libtool |/ / o | m4 o | libsigsegv / o bzip2 o diffutils ``` You can control how the output is displayed with a number of options. The ASCII output from `spack graph` can be difficult to parse for complicated packages. The output can be changed to the `graphviz` `.dot` format using the `--dot` flag. ``` $ spack graph --dot trilinos | dot -Tpdf trilinos_graph.pdf ``` ### Uninstalling Packages[¶](#uninstalling-packages) Earlier we installed many configurations each of zlib and tcl. Now we will go through and uninstall some of those packages that we didn’t really need. 
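Uninstallation matches installed packages using the same spec syntax as installation. As a rough sketch of that matching — over a made-up list of installed specs, not real `spack find` output — a spec like `zlib %gcc@4.7` acts as a filter:

```
#!/bin/sh
# Hypothetical installed specs (name@version%compiler), for illustration only.
installed="zlib@1.2.8%clang@3.8.0
zlib@1.2.11%gcc@4.7
zlib@1.2.8%gcc@5.4.0
tcl@8.6.8%gcc@5.4.0"

# The spec "zlib %gcc@4.7" selects packages named zlib built with gcc@4.7;
# uninstall then operates on exactly these matches.
matches=$(printf '%s\n' "$installed" | grep '^zlib@' | grep '%gcc@4.7')
echo "$matches"
```

Here only one of the four hypothetical installations satisfies both constraints, so the filter is unambiguous; when multiple installations match, Spack asks for a more specific spec, as shown in the examples that follow.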
``` $ spack find -d tcl ==> 3 installed packages -- linux-ubuntu16.04-x86_64 / clang@3.8.0-2ubuntu4 --- tcl@8.6.8 ^zlib@1.2.8 -- linux-ubuntu16.04-x86_64 / gcc@5.4.0 --- tcl@8.6.8 ^zlib@1.2.8 tcl@8.6.8 ^zlib@1.2.11 $ spack find zlib ==> 6 installed packages. -- linux-ubuntu16.04-x86_64 / clang@3.8.0-2ubuntu4 --- zlib@1.2.8 zlib@1.2.11 -- linux-ubuntu16.04-x86_64 / gcc@4.7 --- zlib@1.2.11 -- linux-ubuntu16.04-x86_64 / gcc@5.4.0 --- zlib@1.2.8 zlib@1.2.8 zlib@1.2.11 ``` We can uninstall packages by spec using the same syntax as install. ``` $ spack uninstall zlib %gcc@4.7 ==> The following packages will be uninstalled: -- linux-ubuntu16.04-x86_64 / gcc@4.7 --- bq2wtdx zlib@1.2.11%gcc+optimize+pic+shared ==> Do you want to proceed? [y/N] y ==> Successfully uninstalled zlib@1.2.11%gcc@4.7+optimize+pic+shared arch=linux-ubuntu16.04-x86_64 /bq2wtdx $ spack find -lf zlib ==> 5 installed packages. -- linux-ubuntu16.04-x86_64 / clang@3.8.0-2ubuntu4 --- i426yu3 zlib@1.2.8%clang 4pt75q7 zlib@1.2.11%clang -- linux-ubuntu16.04-x86_64 / gcc@5.4.0 --- bkyl5bh zlib@1.2.8%gcc 64mns5m zlib@1.2.8%gcc cppflags="-O3" 5nus6kn zlib@1.2.11%gcc ``` We can also uninstall packages by referring only to their hash. We can use either `-f` (force) or `-R` (remove dependents as well) to remove packages that are required by another installed package. ``` $ spack uninstall zlib/i426 ==> Error: Will not uninstall zlib@1.2.8%clang@3.8.0-2ubuntu4/i426yu3 The following packages depend on it: -- linux-ubuntu16.04-x86_64 / clang@3.8.0-2ubuntu4 --- 6wc66et tcl@8.6.8%clang ==> Error: Use \`spack uninstall --dependents\` to uninstall these dependencies as well. $ spack uninstall -R zlib/i426 ==> The following packages will be uninstalled: -- linux-ubuntu16.04-x86_64 / clang@3.8.0-2ubuntu4 --- 6wc66et tcl@8.6.8%clang i426yu3 zlib@1.2.8%clang+optimize+pic+shared ==> Do you want to proceed? 
[y/N] y ==> Successfully uninstalled tcl@8.6.8%clang@3.8.0-2ubuntu4 arch=linux-ubuntu16.04-x86_64 /6wc66et ==> Successfully uninstalled zlib@1.2.8%clang@3.8.0-2ubuntu4+optimize+pic+shared arch=linux-ubuntu16.04-x86_64 /i426yu3 ``` Spack will not uninstall packages that are not sufficiently specified. The `-a` (all) flag can be used to uninstall multiple packages at once. ``` $ spack uninstall trilinos ==> Error: trilinos matches multiple packages: -- linux-ubuntu16.04-x86_64 / gcc@5.4.0 --- rlsruav trilinos@12.12.1%gcc~alloptpkgs+amesos+amesos2+anasazi+aztec+belos+boost build_type=RelWithDebInfo ~cgns~complex~dtk+epetra+epetraext+exodus+explicit_template_instantiation~float+fortran~fortrilinos+gtest+hdf5+hypre+ifpack+ifpack2~intrepid~intrepid2~isorropia+kokkos+metis~minitensor+ml+muelu+mumps~nox~openmp~phalanx~piro~pnetcdf~python~rol~rythmos+sacado~shards+shared~stk+suite-sparse~superlu~superlu-dist~teko~tempus+teuchos+tpetra~x11~xsdkflags~zlib+zoltan+zoltan2 kqc52mo trilinos@12.12.1%gcc~alloptpkgs+amesos+amesos2+anasazi+aztec+belos+boost build_type=RelWithDebInfo ~cgns~complex~dtk+epetra+epetraext+exodus+explicit_template_instantiation~float+fortran~fortrilinos+gtest+hdf5+hypre+ifpack+ifpack2~intrepid~intrepid2~isorropia+kokkos+metis~minitensor+ml+muelu+mumps~nox~openmp~phalanx~piro~pnetcdf~python~rol~rythmos+sacado~shards+shared~stk+suite-sparse~superlu~superlu-dist~teko~tempus+teuchos+tpetra~x11~xsdkflags~zlib+zoltan+zoltan2 ==> Error: You can either: a) use a more specific spec, or b) use `spack uninstall --all` to uninstall ALL matching specs. 
$ spack uninstall /rlsr ==> The following packages will be uninstalled: -- linux-ubuntu16.04-x86_64 / gcc@5.4.0 --- rlsruav trilinos@12.12.1%gcc~alloptpkgs+amesos+amesos2+anasazi+aztec+belos+boost build_type=RelWithDebInfo ~cgns~complex~dtk+epetra+epetraext+exodus+explicit_template_instantiation~float+fortran~fortrilinos+gtest+hdf5+hypre+ifpack+ifpack2~intrepid~intrepid2~isorropia+kokkos+metis~minitensor+ml+muelu+mumps~nox~openmp~phalanx~piro~pnetcdf~python~rol~rythmos+sacado~shards+shared~stk+suite-sparse~superlu~superlu-dist~teko~tempus+teuchos+tpetra~x11~xsdkflags~zlib+zoltan+zoltan2 ==> Do you want to proceed? [y/N] y ==> Successfully uninstalled trilinos@12.12.1%gcc@5.4.0~alloptpkgs+amesos+amesos2+anasazi+aztec+belos+boost build_type=RelWithDebInfo ~cgns~complex~dtk+epetra+epetraext+exodus+explicit_template_instantiation~float+fortran~fortrilinos+gtest+hdf5+hypre+ifpack+ifpack2~intrepid~intrepid2~isorropia+kokkos+metis~minitensor+ml+muelu+mumps~nox~openmp~phalanx~piro~pnetcdf~python~rol~rythmos+sacado~shards+shared~stk+suite-sparse~superlu~superlu-dist~teko~tempus+teuchos+tpetra~x11~xsdkflags~zlib+zoltan+zoltan2 arch=linux-ubuntu16.04-x86_64 /rlsruav ``` ### Advanced `spack find` Usage[¶](#advanced-spack-find-usage) We will go over some additional uses for the `spack find` command not already covered in the [Installing Spack](#basics-tutorial-install) and [Uninstalling Packages](#basics-tutorial-uninstall) sections. The `spack find` command can accept what we call “anonymous specs.” These are expressions in spec syntax that do not contain a package name. For example, `spack find ^mpich` will return every installed package that depends on mpich, and `spack find cppflags="-O3"` will return every package which was built with `cppflags="-O3"`. 
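Conceptually, an anonymous spec is just a predicate over installed specs: with no package name given, it matches any installation satisfying the stated constraint. A minimal sketch with hypothetical records rather than real Spack data:

```
#!/bin/sh
# Hypothetical "name@version cppflags" records for installed packages;
# "-" marks packages built without custom cppflags.
installed="zlib@1.2.8 -O3
zlib@1.2.11 -
hdf5@1.10.4 -"

# The anonymous spec cppflags="-O3" names no package: it selects every
# installed spec whose build used that flag.
matches=$(printf '%s\n' "$installed" | awk '$2 == "-O3" {print $1}')
echo "$matches"
```

The same idea applies to `^mpich`: the predicate is "depends on mpich" rather than "was built with these flags".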
``` $ spack find ^mpich ==> 8 installed packages -- linux-ubuntu16.04-x86_64 / gcc@5.4.0 --- hdf5@1.10.4 matio@1.5.9 netcdf@4.6.1 parmetis@4.0.3 hypre@2.15.1 mumps@5.1.1 netlib-scalapack@2.0.2 trilinos@12.12.1 $ spack find cppflags=-O3 ==> 1 installed packages. -- linux-ubuntu16.04-x86_64 / gcc@5.4.0 --- zlib@1.2.8 ``` The `find` command can also show which packages were installed explicitly (rather than pulled in as a dependency) using the `-x` flag. The `-X` flag shows implicit installs only. The `find` command can also show the path to which a Spack package was installed using the `-p` flag. ``` $ spack find -px ==> 10 installed packages -- linux-ubuntu16.04-x86_64 / clang@3.8.0-2ubuntu4 --- zlib@1.2.11 /home/spack1/spack/opt/spack/linux-ubuntu16.04-x86_64/clang-3.8.0-2ubuntu4/zlib-1.2.11-4pt75q7qq6lygf3hgnona4lyc2uwedul -- linux-ubuntu16.04-x86_64 / gcc@5.4.0 --- hdf5@1.10.4 /home/spack1/spack/opt/spack/linux-ubuntu16.04-x86_64/gcc-5.4.0/hdf5-1.10.4-5vcv5r67vpjzenq4apyebshclelnzuja hdf5@1.10.4 /home/spack1/spack/opt/spack/linux-ubuntu16.04-x86_64/gcc-5.4.0/hdf5-1.10.4-ozyvmhzdew66byarohm4p36ep7wtcuiw hdf5@1.10.4 /home/spack1/spack/opt/spack/linux-ubuntu16.04-x86_64/gcc-5.4.0/hdf5-1.10.4-xxd7syhgej6onpyfyewxqcqe7ltkt7ob tcl@8.6.8 /home/spack1/spack/opt/spack/linux-ubuntu16.04-x86_64/gcc-5.4.0/tcl-8.6.8-am4pbatrtga3etyusg2akmsvrswwxno2 tcl@8.6.8 /home/spack1/spack/opt/spack/linux-ubuntu16.04-x86_64/gcc-5.4.0/tcl-8.6.8-qhwyccywhx2i6s7ob2gvjrjtj3rnfuqt trilinos@12.12.1 /home/spack1/spack/opt/spack/linux-ubuntu16.04-x86_64/gcc-5.4.0/trilinos-12.12.1-kqc52moweigxqxzwzfqajc6ocxwdwn4w zlib@1.2.8 /home/spack1/spack/opt/spack/linux-ubuntu16.04-x86_64/gcc-5.4.0/zlib-1.2.8-bkyl5bhuep6fmhuxzkmhqy25qefjcvzc zlib@1.2.8 /home/spack1/spack/opt/spack/linux-ubuntu16.04-x86_64/gcc-5.4.0/zlib-1.2.8-64mns5mvdacqvlashkf7v6lqrxixhmxu zlib@1.2.11 /home/spack1/spack/opt/spack/linux-ubuntu16.04-x86_64/gcc-5.4.0/zlib-1.2.11-5nus6knzumx4ik2yl44jxtgtsl7d54xb ``` ### Customizing 
Compilers[¶](#customizing-compilers) Spack manages a list of available compilers on the system, detected automatically from the user’s `PATH` variable. The `spack compilers` command is an alias for the command `spack compiler list`. ``` $ spack compilers ==> Available compilers -- clang ubuntu16.04-x86_64 --- clang@3.8.0-2ubuntu4 clang@3.7.1-2ubuntu2 -- gcc ubuntu16.04-x86_64 --- gcc@5.4.0 gcc@4.7 ``` The compilers are maintained in a YAML file. Later in the tutorial you will learn how to configure compilers by hand for special cases. Spack also has tools to add compilers, and compilers built with Spack can be added to the configuration. ``` $ spack install gcc @7.2.0 ==> libsigsegv is already installed in /home/spack1/spack/opt/spack/linux-ubuntu16.04-x86_64/gcc-5.4.0/libsigsegv-2.11-fypapcprssrj3nstp6njprskeyynsgaz ==> m4 is already installed in /home/spack1/spack/opt/spack/linux-ubuntu16.04-x86_64/gcc-5.4.0/m4-1.4.18-suf5jtcfehivwfesrc5hjy72r4nukyel ==> pkgconf is already installed in /home/spack1/spack/opt/spack/linux-ubuntu16.04-x86_64/gcc-5.4.0/pkgconf-1.4.2-fovrh7alpft646n6mhis5mml6k6e5f4v ==> ncurses is already installed in /home/spack1/spack/opt/spack/linux-ubuntu16.04-x86_64/gcc-5.4.0/ncurses-6.1-3o765ourmesfrji6yeclb4wb5w54aqbh ==> readline is already installed in /home/spack1/spack/opt/spack/linux-ubuntu16.04-x86_64/gcc-5.4.0/readline-7.0-nxhwrg7xwc6nbsm2v4ezwe63l6nfidbi ==> gdbm is already installed in /home/spack1/spack/opt/spack/linux-ubuntu16.04-x86_64/gcc-5.4.0/gdbm-1.14.1-q4fpyuo7ouhkeq6d3oabtrppctpvxmes ==> perl is already installed in /home/spack1/spack/opt/spack/linux-ubuntu16.04-x86_64/gcc-5.4.0/perl-5.26.2-ic2kyoadgp3dxfejcbllyplj2wf524fo ==> autoconf is already installed in /home/spack1/spack/opt/spack/linux-ubuntu16.04-x86_64/gcc-5.4.0/autoconf-2.69-3sx2gxeibc4oasqd4o5h6lnwpcpsgd2q ==> automake is already installed in /home/spack1/spack/opt/spack/linux-ubuntu16.04-x86_64/gcc-5.4.0/automake-1.16.1-rymw7imfehycqxzj4nuy2oiw3abegooy ==> 
libtool is already installed in /home/spack1/spack/opt/spack/linux-ubuntu16.04-x86_64/gcc-5.4.0/libtool-2.4.6-o2pfwjf44353ajgr42xqtvzyvqsazkgu ==> Installing gmp ==> Searching for binary cache of gmp ==> Finding buildcaches in /mirror/build_cache ==> Installing gmp from binary cache ==> Fetching file:///mirror/build_cache/linux-ubuntu16.04-x86_64/gcc-5.4.0/gmp-6.1.2/linux-ubuntu16.04-x86_64-gcc-5.4.0-gmp-6.1.2-qc4qcfz4monpllc3nqupdo7vwinf73sw.spack ######################################################################## 100.0% gpg: Signature made Sat Nov 10 05:18:16 2018 UTC using RSA key ID 3B7C69B2 gpg: Good signature from "sc-tutorial (GPG created for Spack) <<EMAIL>>" [unknown] gpg: WARNING: This key is not certified with a trusted signature! gpg: There is no indication that the signature belongs to the owner. Primary key fingerprint: <KEY> ==> Successfully installed gmp from binary cache [+] /home/spack1/spack/opt/spack/linux-ubuntu16.04-x86_64/gcc-5.4.0/gmp-6.1.2-qc4qcfz4monpllc3nqupdo7vwinf73sw ==> Installing isl ==> Searching for binary cache of isl ==> Installing isl from binary cache ==> Fetching file:///mirror/build_cache/linux-ubuntu16.04-x86_64/gcc-5.4.0/isl-0.18/linux-ubuntu16.04-x86_64-gcc-5.4.0-isl-0.18-vttqoutnsmjpm3ogb52rninksc7hq5ax.spack ######################################################################## 100.0% gpg: Signature made Sat Nov 10 05:05:19 2018 UTC using RSA key ID 3B7C69B2 gpg: Good signature from "sc-tutorial (GPG created for Spack) <<EMAIL>>" [unknown] gpg: WARNING: This key is not certified with a trusted signature! gpg: There is no indication that the signature belongs to the owner. 
Primary key fingerprint: 95C7 1787 7AC0 0FFD AA8F D6E9 9CFA 4A45 3B7C 69B2 ==> Successfully installed isl from binary cache [+] /home/spack1/spack/opt/spack/linux-ubuntu16.04-x86_64/gcc-5.4.0/isl-0.18-vttqoutnsmjpm3ogb52rninksc7hq5ax ==> Installing mpfr ==> Searching for binary cache of mpfr ==> Installing mpfr from binary cache ==> Fetching file:///mirror/build_cache/linux-ubuntu16.04-x86_64/gcc-5.4.0/mpfr-3.1.6/linux-ubuntu16.04-x86_64-gcc-5.4.0-mpfr-3.1.6-jnt2nnp5pmvikbw7opueajlbwbhmjxyv.spack ######################################################################## 100.0% gpg: Signature made Sat Nov 10 05:32:07 2018 UTC using RSA key ID 3B7C69B2 gpg: Good signature from "sc-tutorial (GPG created for Spack) <<EMAIL>>" [unknown] gpg: WARNING: This key is not certified with a trusted signature! gpg: There is no indication that the signature belongs to the owner. Primary key fingerprint: 95C7 1787 7AC0 0FFD AA8F D6E9 9CFA 4A45 3B7C 69B2 ==> Successfully installed mpfr from binary cache [+] /home/spack1/spack/opt/spack/linux-ubuntu16.04-x86_64/gcc-5.4.0/mpfr-3.1.6-jnt2nnp5pmvikbw7opueajlbwbhmjxyv ==> Installing mpc ==> Searching for binary cache of mpc ==> Installing mpc from binary cache ==> Fetching file:///mirror/build_cache/linux-ubuntu16.04-x86_64/gcc-5.4.0/mpc-1.1.0/linux-ubuntu16.04-x86_64-gcc-5.4.0-mpc-1.1.0-iuf3gc3zpgr4n4mditnxhff6x3joxi27.spack ######################################################################## 100.0% gpg: Signature made Sat Nov 10 05:30:35 2018 UTC using RSA key ID 3B7C69B2 gpg: Good signature from "sc-tutorial (GPG created for Spack) <<EMAIL>>" [unknown] gpg: WARNING: This key is not certified with a trusted signature! gpg: There is no indication that the signature belongs to the owner. 
Primary key fingerprint: <KEY> ==> Successfully installed mpc from binary cache [+] /home/spack1/spack/opt/spack/linux-ubuntu16.04-x86_64/gcc-5.4.0/mpc-1.1.0-iuf3gc3zpgr4n4mditnxhff6x3joxi27 ==> zlib is already installed in /home/spack1/spack/opt/spack/linux-ubuntu16.04-x86_64/gcc-5.4.0/zlib-1.2.11-5nus6knzumx4ik2yl44jxtgtsl7d54xb ==> Installing gcc ==> Searching for binary cache of gcc ==> Finding buildcaches in /mirror/build_cache ==> Installing gcc from binary cache ==> Fetching file:///mirror/build_cache/linux-ubuntu16.04-x86_64/gcc-5.4.0/gcc-7.2.0/linux-ubuntu16.04-x86_64-gcc-5.4.0-gcc-7.2.0-b7smjjcsmwe5u5fcsvjmonlhlzzctnfs.spack ######################################################################## 100.0% gpg: Signature made Sat Nov 10 05:22:47 2018 UTC using RSA key ID 3B7C69B2 gpg: Good signature from "sc-tutorial (GPG created for Spack) <<EMAIL>>" [unknown] gpg: WARNING: This key is not certified with a trusted signature! gpg: There is no indication that the signature belongs to the owner. Primary key fingerprint: <KEY> ==> Successfully installed gcc from binary cache [+] /home/spack1/spack/opt/spack/linux-ubuntu16.04-x86_64/gcc-5.4.0/gcc-7.2.0-b7smjjcsmwe5u5fcsvjmonlhlzzctnfs $ spack find -p gcc ==> 1 installed package -- linux-ubuntu16.04-x86_64 / gcc@5.4.0 --- gcc@7.2.0 /home/spack1/spack/opt/spack/linux-ubuntu16.04-x86_64/gcc-5.4.0/gcc-7.2.0-b7smjjcsmwe5u5fcsvjmonlhlzzctnfs ``` We can add gcc to Spack as an available compiler using the `spack compiler add` command. This will allow future packages to build with gcc@7.2.0.
``` $ spack compiler add `spack location -i gcc@7.2.0` ==> Added 1 new compiler to /home/ubuntu/.spack/linux/compilers.yaml gcc@7.2.0 ==> Compilers are defined in the following files: /home/ubuntu/.spack/linux/compilers.yaml ``` We can also remove compilers from our configuration using `spack compiler remove <compiler_spec>` ``` $ spack compiler remove gcc@7.2.0 ==> Removed compiler gcc@7.2.0 ``` Configuration Tutorial[¶](#configuration-tutorial) --- This tutorial will guide you through various configuration options that allow you to customize Spack’s behavior with respect to software installation. We will first cover the configuration file hierarchy. Then, we will cover configuration options for compilers, focusing on how they can be used to extend Spack’s compiler auto-detection. Next, we will cover the packages configuration file, focusing on how it can be used to override default build options as well as specify external package installations to use. Finally, we will briefly touch on the config configuration file, which manages more high-level Spack configuration options. For all of these features, we will demonstrate how we build up a full configuration file. For some, we will then demonstrate how the configuration affects the install command, and for others we will use the `spack spec` command to demonstrate how the configuration changes have affected Spack’s concretization algorithm. The provided output is all from a server running Ubuntu version 16.04. ### Configuration Scopes[¶](#configuration-scopes) Depending on your use case, you may want to provide configuration settings common to everyone on your team, or you may want to set default behaviors specific to a single user account. Spack provides six configuration *scopes* to handle this customization. 
These scopes, in order of decreasing priority, are: | Scope | Directory | | --- | --- | | Command-line | N/A | | Custom | Custom directory, specified with `--config-scope` | | User | `~/.spack/` | | Site | `$SPACK_ROOT/etc/spack/` | | System | `/etc/spack/` | | Defaults | `$SPACK_ROOT/etc/spack/defaults/` | Spack’s default configuration settings reside in `$SPACK_ROOT/etc/spack/defaults`. These are useful for reference, but should never be directly edited. To override these settings, create new configuration files in any of the higher-priority configuration scopes. A particular cluster may have multiple Spack installations associated with different projects. To provide settings common to all Spack installations, put your configuration files in `/etc/spack`. To provide settings specific to a particular Spack installation, you can use the `$SPACK_ROOT/etc/spack` directory. For settings specific to a particular user, you will want to add configuration files to the `~/.spack` directory. When Spack first checked for compilers on your system, you may have noticed that it placed your compiler configuration in this directory. Configuration settings can also be placed in a custom location, which is then specified on the command line via `--config-scope`. An example use case is managing two sets of configurations, one for development and another for production preferences. Settings specified on the command line have precedence over all other configuration scopes. You can also use `spack config blame <config>` for displaying the effective configuration. Spack will show from which scopes the configuration has been assembled. #### Platform-specific scopes[¶](#platform-specific-scopes) Some facilities manage multiple platforms from a single shared file system. In order to handle this, each of the configuration scopes listed above has two *sub-scopes*: platform-specific and platform-independent. 
For example, compiler settings can be stored in `compilers.yaml` configuration files in the following locations: 1. `~/.spack/<platform>/compilers.yaml` 2. `~/.spack/compilers.yaml` 3. `$SPACK_ROOT/etc/spack/<platform>/compilers.yaml` 4. `$SPACK_ROOT/etc/spack/compilers.yaml` 5. `/etc/spack/<platform>/compilers.yaml` 6. `/etc/spack/compilers.yaml` 7. `$SPACK_ROOT/etc/defaults/<platform>/compilers.yaml` 8. `$SPACK_ROOT/etc/defaults/compilers.yaml` These files are listed in decreasing order of precedence, so files in `~/.spack/<platform>` will override settings in `~/.spack`. ### YAML Format[¶](#yaml-format) Spack configurations are YAML dictionaries. Every configuration file begins with a top-level dictionary that tells Spack which configuration set it modifies. When Spack checks its configuration, the configuration scopes are updated as dictionaries in increasing order of precedence, allowing higher precedence files to override lower. YAML dictionaries use a colon “:” to specify key-value pairs. Spack extends YAML syntax slightly to allow a double-colon “::” to specify a key-value pair. When a double-colon is used to specify a key-value pair, instead of adding that section, Spack replaces what was in that section with the new value. For example, consider a user’s compilers configuration file as follows: ``` compilers:: - compiler: environment: {} extra_rpaths: [] flags: {} modules: [] operating_system: ubuntu16.04 paths: cc: /usr/bin/gcc cxx: /usr/bin/g++ f77: /usr/bin/gfortran fc: /usr/bin/gfortran spec: gcc@5.4.0 target: x86_64 ``` This ensures that no other compilers are used, as the user configuration scope is the last scope searched and the `compilers::` line replaces all previous configuration files information. If the same configuration file had a single colon instead of the double colon, it would add the GCC version 5.4.0 compiler to whatever other compilers were listed in other configuration files. 
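The difference between `:` and `::` can be sketched in a few lines of Python. This is a hypothetical illustration, not Spack's actual merging code: a single colon merges a scope's values into the section assembled from lower-precedence scopes, while a double colon replaces that section outright.

```python
# Hypothetical sketch (not Spack's real implementation) of how one scope's
# compilers section combines with the configuration assembled from
# lower-precedence scopes.

def apply_scope(config, section, value, replace=False):
    """Apply one scope's entry on top of `config`.

    replace=False corresponds to "section:" in the YAML file;
    replace=True corresponds to "section::".
    """
    config = {key: list(val) for key, val in config.items()}  # keep scopes independent
    if replace:
        config[section] = list(value)   # "::" wipes out lower-precedence entries
    else:
        config.setdefault(section, [])
        config[section].extend(value)   # ":" adds to them instead
    return config

# Lower-precedence scopes already defined two compilers.
lower = {"compilers": ["gcc@5.4.0", "clang@3.8.0-2ubuntu4"]}

merged = apply_scope(lower, "compilers", ["gcc@7.2.0"])                  # "compilers:"
replaced = apply_scope(lower, "compilers", ["gcc@7.2.0"], replace=True)  # "compilers::"

print(merged["compilers"])    # all three compilers remain visible
print(replaced["compilers"])  # only the user's compiler survives
```

This is why the `compilers::` example above guarantees that no other compilers are used: the replacement semantics discard everything the lower-precedence scopes contributed to that section.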
### Compiler Configuration[¶](#compiler-configuration) For most tasks, we can use Spack with the compilers auto-detected the first time Spack runs on a system. As discussed in the basic installation tutorial, we can also tell Spack where compilers are located using the `spack compiler add` command. However, in some circumstances we want even more fine-grained control over the compilers available. This section will teach you how to exercise that control using the compilers configuration file. We will start by opening the compilers configuration file: ``` $ spack config edit compilers ``` ``` compilers: - compiler: environment: {} extra_rpaths: [] flags: {} modules: [] operating_system: ubuntu16.04 paths: cc: /usr/bin/clang-3.7 cxx: /usr/bin/clang++-3.7 f77: null fc: null spec: clang@3.7.1-2ubuntu2 target: x86_64 - compiler: environment: {} extra_rpaths: [] flags: {} modules: [] operating_system: ubuntu16.04 paths: cc: /usr/bin/clang cxx: /usr/bin/clang++ f77: null fc: null spec: clang@3.8.0-2ubuntu4 target: x86_64 - compiler: environment: {} extra_rpaths: [] flags: {} modules: [] operating_system: ubuntu16.04 paths: cc: /usr/bin/gcc-4.7 cxx: /usr/bin/g++-4.7 f77: /usr/bin/gfortran-4.7 fc: /usr/bin/gfortran-4.7 spec: gcc@4.7 target: x86_64 - compiler: environment: {} extra_rpaths: [] flags: {} modules: [] operating_system: ubuntu16.04 paths: cc: /usr/bin/gcc cxx: /usr/bin/g++ f77: /usr/bin/gfortran fc: /usr/bin/gfortran spec: gcc@5.4.0 target: x86_64 ``` This specifies two versions of the GCC compiler and two versions of the Clang compiler with no Flang compiler. Now suppose we have a code that we want to compile with the Clang compiler for C/C++ code, but with gfortran for Fortran components. We can do this by adding another entry to the `compilers.yaml` file. 
``` - compiler: environment: {} extra_rpaths: [] flags: {} modules: [] operating_system: ubuntu16.04 paths: cc: /usr/bin/clang cxx: /usr/bin/clang++ f77: /usr/bin/gfortran fc: /usr/bin/gfortran spec: clang@3.8.0-gfortran target: x86_64 ``` Let’s talk about the sections of this compiler entry that we’ve changed. The biggest change we’ve made is to the `paths` section. This lists the paths to the compilers to use for each language/specification. In this case, we point to the Clang compiler for C/C++ and the gfortran compiler for both specifications of Fortran. We’ve also changed the `spec` entry for this compiler. The `spec` entry is effectively the name of the compiler for Spack. It consists of a name and a version number, separated by the `@` sigil. The name must be one of the supported compiler names in Spack (gcc, intel, pgi, xl, xl_r, clang, nag, cce, arm). The version number can be an arbitrary string of alphanumeric characters, as well as `-`, `.`, and `_`. The `target` and `operating_system` sections we leave unchanged. These sections specify when Spack can use different compilers, and are primarily useful for configuration files that will be used across multiple systems. We can verify that our new compiler works by invoking it now: ``` $ spack install --no-cache zlib %clang@3.8.0-gfortran ... ``` This new compiler also works on Fortran codes: ``` $ spack install --no-cache cfitsio~bzip2 %clang@3.8.0-gfortran ... ``` #### Compiler flags[¶](#compiler-flags) Some compilers may require specific compiler flags to work properly in a particular computing environment. Spack provides configuration options for setting compiler flags every time a specific compiler is invoked. These flags become part of the package spec and therefore of the build provenance. As on the command line, the flags are set through the implicit build variables `cflags`, `cxxflags`, `cppflags`, `fflags`, `ldflags`, and `ldlibs`. 
Let’s open our compilers configuration file again and add a compiler flag: ``` - compiler: environment: {} extra_rpaths: [] flags: cppflags: -g modules: [] operating_system: ubuntu16.04 paths: cc: /usr/bin/clang cxx: /usr/bin/clang++ f77: /usr/bin/gfortran fc: /usr/bin/gfortran spec: clang@3.8.0-gfortran target: x86_64 ``` We can test this out using the `spack spec` command to show how the spec is concretized: ``` $ spack spec cfitsio %clang@3.8.0-gfortran Input spec --- cfitsio%clang@3.8.0-gfortran Normalized --- cfitsio%clang@3.8.0-gfortran Concretized --- cfitsio@3.410%clang@3.8.0-gfortran cppflags="-g" +bzip2+shared arch=linux-ubuntu16.04-x86_64 ^bzip2@1.0.6%clang@3.8.0-gfortran cppflags="-g" +shared arch=linux-ubuntu16.04-x86_64 ``` We can see that `cppflags="-g"` has been added to every node in the DAG. #### Advanced compiler configuration[¶](#advanced-compiler-configuration) There are three fields of the compiler configuration entry that we have not yet talked about. The `modules` field of the compiler is used primarily on Cray systems, but can be useful on any system that has compilers that are only useful when a particular module is loaded. Any modules in the `modules` field of the compiler configuration will be loaded as part of the build environment for packages using that compiler. The `extra_rpaths` field of the compiler configuration is used for compilers that do not rpath all of their dependencies by default. Since compilers are often installed externally to Spack, Spack is unable to manage compiler dependencies and enforce rpath usage. This can lead to packages not finding link dependencies imposed by the compiler properly. For compilers that impose link dependencies on the resulting executables that are not rpath’ed into the executable automatically, the `extra_rpaths` field of the compiler configuration tells Spack which dependencies to rpath into every executable created by that compiler. 
The executables will then be able to find the link dependencies imposed by the compiler. As an example, this field can be set by: ``` - compiler: ... extra_rpaths: - /apps/intel/ComposerXE2017/compilers_and_libraries_2017.5.239/linux/compiler/lib/intel64_lin ... ``` The `environment` field of the compiler configuration is used for compilers that require environment variables to be set during build time. For example, if your Intel compiler suite requires the `INTEL_LICENSE_FILE` environment variable to point to the proper license server, you can set this in `compilers.yaml` as follows: ``` - compiler: environment: set: INTEL_LICENSE_FILE: 1713@license4 ... ``` In addition to `set`, `environment` also supports `unset`, `prepend-path`, and `append-path`. ### Configuring Package Preferences[¶](#configuring-package-preferences) Package preferences in Spack are managed through the `packages.yaml` configuration file. First, we will look at the default `packages.yaml` file. ``` $ spack config --scope defaults edit packages ``` ``` # --- # This file controls default concretization preferences for Spack. # # Settings here are versioned with Spack and are intended to provide # sensible defaults out of the box. Spack maintainers should edit this # file to keep it current. # # Users can override these settings by editing the following files. 
# # Per-spack-instance settings (overrides defaults): # $SPACK_ROOT/etc/spack/packages.yaml # # Per-user settings (overrides default and site settings): # ~/.spack/packages.yaml # --- packages: all: compiler: [gcc, intel, pgi, clang, xl, nag, fj] providers: D: [ldc] awk: [gawk] blas: [openblas] daal: [intel-daal] elf: [elfutils] fftw-api: [fftw] gl: [mesa+opengl, opengl] glx: [mesa+glx, opengl] glu: [mesa-glu, openglu] golang: [gcc] ipp: [intel-ipp] java: [openjdk, jdk, ibm-java] jpeg: [libjpeg-turbo, libjpeg] lapack: [openblas] mariadb-client: [mariadb-c-client, mariadb] mkl: [intel-mkl] mpe: [mpe2] mpi: [openmpi, mpich] mysql-client: [mysql, mariadb-c-client] opencl: [pocl] pil: [py-pillow] pkgconfig: [pkgconf, pkg-config] scalapack: [netlib-scalapack] szip: [libszip, libaec] tbb: [intel-tbb] unwind: [libunwind] permissions: read: world write: user ``` This sets the default preferences for compilers and for providers of virtual packages. To illustrate how this works, suppose we want to change the preferences to prefer the Clang compiler and to prefer MPICH over OpenMPI. Currently, we prefer GCC and OpenMPI. 
``` $ spack spec hdf5 Input spec --- hdf5 Concretized --- hdf5@1.10.4%gcc@5.4.0~cxx~debug~fortran~hl+mpi+pic+shared~szip~threadsafe arch=linux-ubuntu16.04-x86_64 ^openmpi@3.1.3%gcc@5.4.0~cuda+cxx_exceptions fabrics= ~java~legacylaunchers~memchecker~pmi schedulers= ~sqlite3~thread_multiple+vt arch=linux-ubuntu16.04-x86_64 ^hwloc@1.11.9%gcc@5.4.0~cairo~cuda+libxml2+pci+shared arch=linux-ubuntu16.04-x86_64 ^libpciaccess@0.13.5%gcc@5.4.0 arch=linux-ubuntu16.04-x86_64 ^libtool@2.4.6%gcc@5.4.0 arch=linux-ubuntu16.04-x86_64 ^m4@1.4.18%gcc@5.4.0 patches=3877ab548f88597ab2327a2230ee048d2d07ace1062efe81fc92e91b7f39cd00,c0a408fbffb7255fcc75e26bd8edab116fc81d216bfd18b473668b7739a4158e,fc9b61654a3ba1a8d6cd78ce087e7c96366c290bc8d2c299f09828d793b853c8 +sigsegv arch=linux-ubuntu16.04-x86_64 ^libsigsegv@2.11%gcc@5.4.0 arch=linux-ubuntu16.04-x86_64 ^pkgconf@1.4.2%gcc@5.4.0 arch=linux-ubuntu16.04-x86_64 ^util-macros@1.19.1%gcc@5.4.0 arch=linux-ubuntu16.04-x86_64 ^libxml2@2.9.8%gcc@5.4.0~python arch=linux-ubuntu16.04-x86_64 ^xz@5.2.4%gcc@5.4.0 arch=linux-ubuntu16.04-x86_64 ^zlib@1.2.11%gcc@5.4.0+optimize+pic+shared arch=linux-ubuntu16.04-x86_64 ^numactl@2.0.11%gcc@5.4.0 patches=592f30f7f5f757dfc239ad0ffd39a9a048487ad803c26b419e0f96b8cda08c1a arch=linux-ubuntu16.04-x86_64 ^autoconf@2.69%gcc@5.4.0 arch=linux-ubuntu16.04-x86_64 ^perl@5.26.2%gcc@5.4.0+cpanm patches=0eac10ed90aeb0459ad8851f88081d439a4e41978e586ec743069e8b059370ac +shared+threads arch=linux-ubuntu16.04-x86_64 ^gdbm@1.14.1%gcc@5.4.0 arch=linux-ubuntu16.04-x86_64 ^readline@7.0%gcc@5.4.0 arch=linux-ubuntu16.04-x86_64 ^ncurses@6.1%gcc@5.4.0~symlinks~termlib arch=linux-ubuntu16.04-x86_64 ^automake@1.16.1%gcc@5.4.0 arch=linux-ubuntu16.04-x86_64 ``` Now we will open the packages configuration file and update our preferences. 
``` $ spack config edit packages ``` ``` packages: all: compiler: [clang, gcc, intel, pgi, xl, nag] providers: mpi: [mpich, openmpi] ``` Because of the configuration scoping we discussed earlier, this overrides the default settings just for these two items. ``` $ spack spec hdf5 Input spec --- hdf5 Concretized --- hdf5@1.10.4%clang@3.8.0-2ubuntu4~cxx~debug~fortran~hl+mpi+pic+shared~szip~threadsafe arch=linux-ubuntu16.04-x86_64 ^mpich@3.2.1%clang@3.8.0-2ubuntu4 device=ch3 +hydra netmod=tcp +pmi+romio~verbs arch=linux-ubuntu16.04-x86_64 ^findutils@4.6.0%clang@3.8.0-2ubuntu4 patches=84b916c0bf8c51b7e7b28417692f0ad3e7030d1f3c248ba77c42ede5c1c5d11e,bd9e4e5cc280f9753ae14956c4e4aa17fe7a210f55dd6c84aa60b12d106d47a2 arch=linux-ubuntu16.04-x86_64 ^autoconf@2.69%clang@3.8.0-2ubuntu4 arch=linux-ubuntu16.04-x86_64 ^m4@1.4.18%clang@3.8.0-2ubuntu4 patches=3877ab548f88597ab2327a2230ee048d2d07ace1062efe81fc92e91b7f39cd00,c0a408fbffb7255fcc75e26bd8edab116fc81d216bfd18b473668b7739a4158e,fc9b61654a3ba1a8d6cd78ce087e7c96366c290bc8d2c299f09828d793b853c8 +sigsegv arch=linux-ubuntu16.04-x86_64 ^libsigsegv@2.11%clang@3.8.0-2ubuntu4 arch=linux-ubuntu16.04-x86_64 ^perl@5.26.2%clang@3.8.0-2ubuntu4+cpanm patches=0eac10ed90aeb0459ad8851f88081d439a4e41978e586ec743069e8b059370ac +shared+threads arch=linux-ubuntu16.04-x86_64 ^gdbm@1.14.1%clang@3.8.0-2ubuntu4 arch=linux-ubuntu16.04-x86_64 ^readline@7.0%clang@3.8.0-2ubuntu4 arch=linux-ubuntu16.04-x86_64 ^ncurses@6.1%clang@3.8.0-2ubuntu4~symlinks~termlib arch=linux-ubuntu16.04-x86_64 ^pkgconf@1.4.2%clang@3.8.0-2ubuntu4 arch=linux-ubuntu16.04-x86_64 ^automake@1.16.1%clang@3.8.0-2ubuntu4 arch=linux-ubuntu16.04-x86_64 ^libtool@2.4.6%clang@3.8.0-2ubuntu4 arch=linux-ubuntu16.04-x86_64 ^texinfo@6.5%clang@3.8.0-2ubuntu4 arch=linux-ubuntu16.04-x86_64 ^zlib@1.2.11%clang@3.8.0-2ubuntu4+optimize+pic+shared arch=linux-ubuntu16.04-x86_64 ``` #### Variant preferences[¶](#variant-preferences) The packages configuration file can also set variant preferences for 
package variants. For example, let’s change our preferences to build all packages without shared libraries. We will accomplish this by turning off the `shared` variant on all packages that have one. ``` packages: all: compiler: [clang, gcc, intel, pgi, xl, nag] providers: mpi: [mpich, openmpi] variants: ~shared ``` We can check the effect of this command with `spack spec hdf5` again. ``` $ spack spec hdf5 Input spec --- hdf5 Concretized --- hdf5@1.10.4%clang@3.8.0-2ubuntu4~cxx~debug~fortran~hl+mpi+pic~shared~szip~threadsafe arch=linux-ubuntu16.04-x86_64 ^mpich@3.2.1%clang@3.8.0-2ubuntu4 device=ch3 +hydra netmod=tcp +pmi+romio~verbs arch=linux-ubuntu16.04-x86_64 ^findutils@4.6.0%clang@3.8.0-2ubuntu4 patches=84b916c0bf8c51b7e7b28417692f0ad3e7030d1f3c248ba77c42ede5c1c5d11e,bd9e4e5cc280f9753ae14956c4e4aa17fe7a210f55dd6c84aa60b12d106d47a2 arch=linux-ubuntu16.04-x86_64 ^autoconf@2.69%clang@3.8.0-2ubuntu4 arch=linux-ubuntu16.04-x86_64 ^m4@1.4.18%clang@3.8.0-2ubuntu4 patches=3877ab548f88597ab2327a2230ee048d2d07ace1062efe81fc92e91b7f39cd00,c0a408fbffb7255fcc75e26bd8edab116fc81d216bfd18b473668b7739a4158e,fc9b61654a3ba1a8d6cd78ce087e7c96366c290bc8d2c299f09828d793b853c8 +sigsegv arch=linux-ubuntu16.04-x86_64 ^libsigsegv@2.11%clang@3.8.0-2ubuntu4 arch=linux-ubuntu16.04-x86_64 ^perl@5.26.2%clang@3.8.0-2ubuntu4+cpanm patches=0eac10ed90aeb0459ad8851f88081d439a4e41978e586ec743069e8b059370ac ~shared+threads arch=linux-ubuntu16.04-x86_64 ^gdbm@1.14.1%clang@3.8.0-2ubuntu4 arch=linux-ubuntu16.04-x86_64 ^readline@7.0%clang@3.8.0-2ubuntu4 arch=linux-ubuntu16.04-x86_64 ^ncurses@6.1%clang@3.8.0-2ubuntu4~symlinks~termlib arch=linux-ubuntu16.04-x86_64 ^pkgconf@1.4.2%clang@3.8.0-2ubuntu4 arch=linux-ubuntu16.04-x86_64 ^automake@1.16.1%clang@3.8.0-2ubuntu4 arch=linux-ubuntu16.04-x86_64 ^libtool@2.4.6%clang@3.8.0-2ubuntu4 arch=linux-ubuntu16.04-x86_64 ^texinfo@6.5%clang@3.8.0-2ubuntu4 arch=linux-ubuntu16.04-x86_64 ^zlib@1.2.11%clang@3.8.0-2ubuntu4+optimize+pic~shared 
arch=linux-ubuntu16.04-x86_64 ``` So far we have only made global changes to the package preferences. As we’ve seen throughout this tutorial, HDF5 builds with MPI enabled by default in Spack. If we were working on a project that would routinely need serial HDF5, that might get annoying quickly, having to type `hdf5~mpi` all the time. Instead, we’ll update our preferences for HDF5. ``` packages: all: compiler: [clang, gcc, intel, pgi, xl, nag] providers: mpi: [mpich, openmpi] variants: ~shared hdf5: variants: ~mpi ``` Now hdf5 will concretize without an MPI dependency by default. ``` $ spack spec hdf5 Input spec --- hdf5 Concretized --- hdf5@1.10.4%clang@3.8.0-2ubuntu4~cxx~debug~fortran~hl~mpi+pic+shared~szip~threadsafe arch=linux-ubuntu16.04-x86_64 ^zlib@1.2.11%clang@3.8.0-2ubuntu4+optimize+pic~shared arch=linux-ubuntu16.04-x86_64 ``` In general, every attribute that we can set for all packages we can set separately for an individual package. #### External packages[¶](#external-packages) The packages configuration file also controls when Spack will build against an externally installed package. On these systems we have a pre-installed zlib. ``` packages: all: compiler: [clang, gcc, intel, pgi, xl, nag] providers: mpi: [mpich, openmpi] variants: ~shared hdf5: variants: ~mpi zlib: paths: zlib@1.2.8%gcc@5.4.0 arch=linux-ubuntu16.04-x86_64: /usr ``` Here, we’ve told Spack that zlib 1.2.8 is installed on our system. We’ve also told it the installation prefix where zlib can be found. We don’t know exactly which variants it was built with, but that’s okay. ``` $ spack spec hdf5 Input spec --- hdf5 Concretized --- hdf5@1.10.4%gcc@5.4.0~cxx~debug~fortran~hl~mpi+pic+shared~szip~threadsafe arch=linux-ubuntu16.04-x86_64 ^zlib@1.2.8%gcc@5.4.0+optimize+pic~shared arch=linux-ubuntu16.04-x86_64 ``` You’ll notice that Spack is now using the external zlib installation, but the compiler used to build zlib is now overriding our compiler preference of clang. 
If we explicitly specify Clang: ``` $ spack spec hdf5 %clang Input spec --- hdf5%clang Concretized --- hdf5@1.10.4%clang@3.8.0-2ubuntu4~cxx~debug~fortran~hl~mpi+pic+shared~szip~threadsafe arch=linux-ubuntu16.04-x86_64 ^zlib@1.2.11%clang@3.8.0-2ubuntu4+optimize+pic~shared arch=linux-ubuntu16.04-x86_64 ``` Spack concretizes to both HDF5 and zlib being built with Clang. This has a side-effect of rebuilding zlib. If we want to force Spack to use the system zlib, we have two choices. We can either specify it on the command line, or we can tell Spack that it’s not allowed to build its own zlib. We’ll go with the latter. ``` packages: all: compiler: [clang, gcc, intel, pgi, xl, nag] providers: mpi: [mpich, openmpi] variants: ~shared hdf5: variants: ~mpi zlib: paths: zlib@1.2.8%gcc@5.4.0 arch=linux-ubuntu16.04-x86_64: /usr buildable: False ``` Now Spack will be forced to choose the external zlib. ``` $ spack spec hdf5 %clang Input spec --- hdf5%clang Concretized --- hdf5@1.10.4%clang@3.8.0-2ubuntu4~cxx~debug~fortran~hl~mpi+pic+shared~szip~threadsafe arch=linux-ubuntu16.04-x86_64 ^zlib@1.2.8%gcc@5.4.0+optimize+pic~shared arch=linux-ubuntu16.04-x86_64 ``` This gets slightly more complicated with virtual dependencies. Suppose we don’t want to build our own MPI, but we now want a parallel version of HDF5? Well, fortunately we have MPICH installed on these systems. ``` packages: all: compiler: [clang, gcc, intel, pgi, xl, nag] providers: mpi: [mpich, openmpi] variants: ~shared hdf5: variants: ~mpi zlib: paths: zlib@1.2.8%gcc@5.4.0 arch=linux-ubuntu16.04-x86_64: /usr buildable: False mpich: paths: mpich@3.2%gcc@5.4.0 device=ch3 +hydra netmod=tcp +pmi+romio~verbs arch=linux-ubuntu16.04-x86_64: /usr buildable: False ``` If we concretize `hdf5+mpi` with this configuration file, we will just build with an alternate MPI implementation. 
``` $ spack spec hdf5+mpi %clang Input spec --- hdf5%clang+mpi Concretized --- hdf5@1.10.4%clang@3.8.0-2ubuntu4~cxx~debug~fortran~hl+mpi+pic+shared~szip~threadsafe arch=linux-ubuntu16.04-x86_64 ^openmpi@3.1.3%clang@3.8.0-2ubuntu4~cuda+cxx_exceptions fabrics= ~java~legacylaunchers~memchecker~pmi schedulers= ~sqlite3~thread_multiple+vt arch=linux-ubuntu16.04-x86_64 ^hwloc@1.11.9%clang@3.8.0-2ubuntu4~cairo~cuda+libxml2+pci~shared arch=linux-ubuntu16.04-x86_64 ^libpciaccess@0.13.5%clang@3.8.0-2ubuntu4 arch=linux-ubuntu16.04-x86_64 ^libtool@2.4.6%clang@3.8.0-2ubuntu4 arch=linux-ubuntu16.04-x86_64 ^m4@1.4.18%clang@3.8.0-2ubuntu4 patches=3877ab548f88597ab2327a2230ee048d2d07ace1062efe81fc92e91b7f39cd00,c0a408fbffb7255fcc75e26bd8edab116fc81d216bfd18b473668b7739a4158e,fc9b61654a3ba1a8d6cd78ce087e7c96366c290bc8d2c299f09828d793b853c8 +sigsegv arch=linux-ubuntu16.04-x86_64 ^libsigsegv@2.11%clang@3.8.0-2ubuntu4 arch=linux-ubuntu16.04-x86_64 ^pkgconf@1.4.2%clang@3.8.0-2ubuntu4 arch=linux-ubuntu16.04-x86_64 ^util-macros@1.19.1%clang@3.8.0-2ubuntu4 arch=linux-ubuntu16.04-x86_64 ^libxml2@2.9.8%clang@3.8.0-2ubuntu4~python arch=linux-ubuntu16.04-x86_64 ^xz@5.2.4%clang@3.8.0-2ubuntu4 arch=linux-ubuntu16.04-x86_64 ^zlib@1.2.8%gcc@5.4.0+optimize+pic~shared arch=linux-ubuntu16.04-x86_64 ^numactl@2.0.11%clang@3.8.0-2ubuntu4 patches=592f30f7f5f757dfc239ad0ffd39a9a048487ad803c26b419e0f96b8cda08c1a arch=linux-ubuntu16.04-x86_64 ^autoconf@2.69%clang@3.8.0-2ubuntu4 arch=linux-ubuntu16.04-x86_64 ^perl@5.26.2%clang@3.8.0-2ubuntu4+cpanm patches=0eac10ed90aeb0459ad8851f88081d439a4e41978e586ec743069e8b059370ac ~shared+threads arch=linux-ubuntu16.04-x86_64 ^gdbm@1.14.1%clang@3.8.0-2ubuntu4 arch=linux-ubuntu16.04-x86_64 ^readline@7.0%clang@3.8.0-2ubuntu4 arch=linux-ubuntu16.04-x86_64 ^ncurses@6.1%clang@3.8.0-2ubuntu4~symlinks~termlib arch=linux-ubuntu16.04-x86_64 ^automake@1.16.1%clang@3.8.0-2ubuntu4 arch=linux-ubuntu16.04-x86_64 ``` We have only expressed a preference for MPICH over other MPI 
implementations, and Spack will happily build with one we haven't forbidden it from building. We could resolve this by requesting `hdf5+mpi%clang^mpich` explicitly, or we can configure Spack not to use any other MPI implementation. Since we're focused on configurations here and the former can get tedious, we'll need to modify our `packages.yaml` file again. While we're at it, we can configure HDF5 to build with MPI by default again. ``` packages: all: compiler: [clang, gcc, intel, pgi, xl, nag] providers: mpi: [mpich, openmpi] variants: ~shared zlib: paths: zlib@1.2.8%gcc@5.4.0 arch=linux-ubuntu16.04-x86_64: /usr buildable: False mpich: paths: mpich@3.2%gcc@5.4.0 device=ch3 +hydra netmod=tcp +pmi+romio~verbs arch=linux-ubuntu16.04-x86_64: /usr buildable: False openmpi: buildable: False mvapich2: buildable: False intel-mpi: buildable: False intel-parallel-studio: buildable: False spectrum-mpi: buildable: False mpilander: buildable: False charm: buildable: False charmpp: buildable: False ``` Now that we have configured Spack not to build any of the possible providers for MPI, we can try again. ``` $ spack spec hdf5 %clang Input spec --- hdf5%clang Concretized --- hdf5@1.10.4%clang@3.8.0-2ubuntu4~cxx~debug~fortran~hl+mpi+pic~shared~szip~threadsafe arch=linux-ubuntu16.04-x86_64 ^mpich@3.2%gcc@5.4.0 device=ch3 +hydra netmod=tcp +pmi+romio~verbs arch=linux-ubuntu16.04-x86_64 ^zlib@1.2.8%gcc@5.4.0+optimize+pic~shared arch=linux-ubuntu16.04-x86_64 ``` By configuring most of our package preferences in `packages.yaml`, we can cut down on the amount of work we need to do when specifying a spec on the command line. In addition to compiler and variant preferences, we can specify version preferences as well. Except for selecting providers via `^`, anything that you can specify on the command line can be specified in `packages.yaml` with the exact same spec syntax.
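The per-package override rule used throughout this section (as with the earlier `hdf5: variants: ~mpi` entry shadowing the global `~shared` preference) can be sketched as a simple fallback lookup. This is a hypothetical illustration of the rule, not Spack internals:

```python
# Hypothetical sketch (not Spack's actual code) of the lookup rule described
# above: a per-package entry in packages.yaml shadows the `all` defaults.
packages_yaml = {
    "all":  {"variants": "~shared", "providers": {"mpi": ["mpich", "openmpi"]}},
    "hdf5": {"variants": "~mpi"},
}

def preference(package, attribute):
    """Return the package-specific setting, falling back to `all`."""
    specific = packages_yaml.get(package, {})
    if attribute in specific:
        return specific[attribute]
    return packages_yaml["all"].get(attribute)

print(preference("hdf5", "variants"))  # the hdf5-specific entry wins
print(preference("zlib", "variants"))  # zlib falls back to the global default
```

Note that the per-package entry shadows the whole attribute rather than merging with it, which is why the earlier concretization showed `hdf5 ... +shared` once `hdf5: variants: ~mpi` was set, even though the global preference was `~shared`.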
#### Installation permissions[¶](#installation-permissions) The `packages.yaml` file also controls the default permissions to use when installing a package. You’ll notice that by default, the installation prefix will be world readable but only user writable. Let’s say we need to install `converge`, a licensed software package. Since a specific research group, `fluid_dynamics`, pays for this license, we want to ensure that only members of this group can access the software. We can do this like so: ``` packages: converge: permissions: read: group group: fluid_dynamics ``` Now, only members of the `fluid_dynamics` group can use any `converge` installations. Warning Make sure to delete or move the `packages.yaml` you have been editing up to this point. Otherwise, it will change the hashes of your packages, leading to differences in the output of later tutorial sections. ### High-level Config[¶](#high-level-config) In addition to compiler and package settings, Spack allows customization of several high-level settings. These settings are stored in the generic `config.yaml` configuration file. You can see the default settings by running: ``` $ spack config --scope defaults edit config ``` ``` # --- # This is the default spack configuration file. # # Settings here are versioned with Spack and are intended to provide # sensible defaults out of the box. Spack maintainers should edit this # file to keep it current. # # Users can override these settings by editing the following files. # # Per-spack-instance settings (overrides defaults): # $SPACK_ROOT/etc/spack/config.yaml # # Per-user settings (overrides default and site settings): # ~/.spack/config.yaml # --- config: # This is the path to the root of the Spack install tree. # You can use $spack here to refer to the root of the spack instance. 
install_tree: $spack/opt/spack # Locations where templates should be found template_dirs: - $spack/share/spack/templates # Default directory layout install_path_scheme: "${ARCHITECTURE}/${COMPILERNAME}-${COMPILERVER}/${PACKAGE}-${VERSION}-${HASH}" # Locations where different types of modules should be installed. module_roots: tcl: $spack/share/spack/modules lmod: $spack/share/spack/lmod # Temporary locations Spack can try to use for builds. # # Recommended options are given below. # # Builds can be faster in temporary directories on some (e.g., HPC) systems. # Specifying `$tempdir` will ensure use of the default temporary directory # (i.e., ``$TMP` or ``$TMPDIR``). # # Another option that prevents conflicts and potential permission issues is # to specify `~/.spack/stage`, which ensures each user builds in their home # directory. # # A more traditional path uses the value of `$spack/var/spack/stage`, which # builds directly inside Spack's instance without staging them in a # temporary space. Problems with specifying a path inside a Spack instance # are that it precludes its use as a system package and its ability to be # pip installable. # # In any case, if the username is not already in the path, Spack will append # the value of `$user` in an attempt to avoid potential conflicts between # users in shared temporary spaces. # # The build stage can be purged with `spack clean --stage` and # `spack clean -a`, so it is important that the specified directory uniquely # identifies Spack staging to avoid accidentally wiping out non-Spack work. build_stage: - $tempdir/$user/spack-stage - ~/.spack/stage # - $spack/var/spack/stage # Cache directory for already downloaded source tarballs and archived # repositories. This can be purged with `spack clean --downloads`. source_cache: $spack/var/spack/cache # Cache directory for miscellaneous files, like the package index. 
# This can be purged with `spack clean --misc-cache` misc_cache: ~/.spack/cache # If this is false, tools like curl that use SSL will not verify # certificates. (e.g., curl will use the -k option) verify_ssl: true # If set to true, Spack will attempt to build any compiler on the spec # that is not already available. If set to False, Spack will only use # compilers already configured in compilers.yaml install_missing_compilers: False # If set to true, Spack will always check checksums after downloading # archives. If false, Spack skips the checksum step. checksum: true # If set to true, `spack install` and friends will NOT clean # potentially harmful variables from the build environment. Use wisely. dirty: false # The language the build environment will use. This will produce English # compiler messages by default, so the log parser can highlight errors. # If set to C, it will use English (see man locale). # If set to the empty string (''), it will use the language from the # user's environment. build_language: C # When set to true, concurrent instances of Spack will use locks to # avoid modifying the install tree, database file, etc. If false, Spack # will disable all locking, but you must NOT run concurrent instances # of Spack. For filesystems that don't support locking, you should set # this to false and run one Spack at a time, but otherwise we recommend # enabling locks. locks: true # The maximum number of jobs to use when running `make` in parallel, # always limited by the number of cores available. For instance: # - If set to 16 on a 4-core machine `spack install` will run `make -j4` # - If set to 16 on an 18-core machine `spack install` will run `make -j16` # If not set, Spack will use all available cores up to 16. # build_jobs: 16 # If set to true, Spack will use ccache to cache C compiles. ccache: false # How long to wait to lock the Spack installation database. 
This lock is used # when Spack needs to manage its own package metadata and all operations are # expected to complete within the default time limit. The timeout should # therefore generally be left untouched. db_lock_timeout: 120 # How long to wait when attempting to modify a package (e.g. to install it). # This value should typically be 'null' (never time out) unless the Spack # instance only ever has a single user at a time, and only if the user # anticipates that a significant delay indicates that the lock attempt will # never succeed. package_lock_timeout: null ``` As you can see, many of the directories Spack uses can be customized. For example, you can tell Spack to install packages to a prefix outside of the `$SPACK_ROOT` hierarchy. Module files can be written to a central location if you are using multiple Spack instances. If you have a fast scratch file system, you can run builds from this file system with the following `config.yaml`: ``` config: build_stage: - /scratch/$user/spack-stage ``` Note It is important to distinguish the build stage directory from other directories in your scratch space to ensure `spack clean` does not inadvertently remove unrelated files. This can be accomplished by including a combination of `spack` and/or `stage` in each path as shown in the default settings and documented examples. See [Basic Settings](https://spack.readthedocs.io/en/latest/config_yaml.html#config-yaml) for details. On systems with compilers that absolutely *require* environment variables like `LD_LIBRARY_PATH`, it is possible to prevent Spack from cleaning the build environment with the `dirty` setting: ``` config: dirty: true ``` However, this is strongly discouraged, as it can pull unwanted libraries into the build. One last setting that may be of interest to many users is the ability to customize the parallelism of Spack builds. 
By default, Spack installs all packages in parallel with the number of jobs equal to the number of cores on the node (up to a maximum of 16). For example, on a node with 16 cores, this will look like: ``` $ spack install --no-cache --verbose --overwrite zlib ==> Installing zlib ==> Using cached archive: /home/user/spack/var/spack/cache/zlib/zlib-1.2.11.tar.gz ==> Staging archive: /home/user/spack/var/spack/stage/zlib-1.2.11-5nus6knzumx4ik2yl44jxtgtsl7d54xb/zlib-1.2.11.tar.gz ==> Created stage in /home/user/spack/var/spack/stage/zlib-1.2.11-5nus6knzumx4ik2yl44jxtgtsl7d54xb ==> No patches needed for zlib ==> Building zlib [Package] ==> Executing phase: 'install' ==> './configure' '--prefix=/home/user/spack/opt/spack/linux-ubuntu16.04-x86_64/gcc-5.4.0/zlib-1.2.11-5nus6knzumx4ik2yl44jxtgtsl7d54xb' ... ==> 'make' '-j16' ... ==> 'make' '-j16' 'install' ... ==> Successfully installed zlib Fetch: 0.00s. Build: 1.03s. Total: 1.03s. [+] /home/user/spack/opt/spack/linux-ubuntu16.04-x86_64/gcc-5.4.0/zlib-1.2.11-5nus6knzumx4ik2yl44jxtgtsl7d54xb ``` As you can see, we are building with all 16 cores on the node. If you are on a shared login node, this can slow down the system for other users. If you have a strict ulimit or restriction on the number of available licenses, you may not be able to build at all with this many cores. On nodes with 64+ cores, you may not see a significant speedup of the build anyway. 
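The clamping behavior described above can be sketched in a few lines of Python. This is a conceptual illustration only, not Spack's actual implementation; the function name `make_jobs` is made up:

```python
import os

def make_jobs(build_jobs=None, cores=None):
    """Return the -jN value make would be given (illustrative only)."""
    cores = cores or os.cpu_count()
    if build_jobs is None:
        return min(cores, 16)      # default: all available cores, up to 16
    return min(build_jobs, cores)  # explicit setting, still capped by cores

print(make_jobs(build_jobs=16, cores=4))   # make -j4
print(make_jobs(build_jobs=16, cores=18))  # make -j16
print(make_jobs(cores=18))                 # default cap of 16
```

In other words, an explicit `build_jobs` setting is only an upper bound; Spack never asks `make` for more jobs than there are cores.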
To limit the number of cores our build uses, set `build_jobs` like so: ``` config: build_jobs: 2 ``` If we uninstall and reinstall zlib, we see that it now uses only 2 cores: ``` $ spack install --no-cache --verbose --overwrite zlib ==> Installing zlib ==> Using cached archive: /home/user/spack/var/spack/cache/zlib/zlib-1.2.11.tar.gz ==> Staging archive: /home/user/spack/var/spack/stage/zlib-1.2.11-5nus6knzumx4ik2yl44jxtgtsl7d54xb/zlib-1.2.11.tar.gz ==> Created stage in /home/user/spack/var/spack/stage/zlib-1.2.11-5nus6knzumx4ik2yl44jxtgtsl7d54xb ==> No patches needed for zlib ==> Building zlib [Package] ==> Executing phase: 'install' ==> './configure' '--prefix=/home/user/spack/opt/spack/linux-ubuntu16.04-x86_64/gcc-5.4.0/zlib-1.2.11-5nus6knzumx4ik2yl44jxtgtsl7d54xb' ... ==> 'make' '-j2' ... ==> 'make' '-j2' 'install' ... ==> Successfully installed zlib Fetch: 0.00s. Build: 1.03s. Total: 1.03s. [+] /home/user/spack/opt/spack/linux-ubuntu16.04-x86_64/gcc-5.4.0/zlib-1.2.11-5nus6knzumx4ik2yl44jxtgtsl7d54xb ``` Obviously, if you want to build everything in serial for whatever reason, you would set `build_jobs` to 1. ### Examples[¶](#examples) For examples of how other sites configure Spack, see <https://github.com/spack/spack-configs>. If you use Spack at your site and want to share your config files, feel free to submit a pull request! Package Creation Tutorial[¶](#package-creation-tutorial) --- This tutorial will walk you through the steps behind building a simple package installation script. We’ll focus on writing a package for mpileaks, an MPI debugging tool. By creating a package file we’re essentially giving Spack a recipe for how to build a particular piece of software. We’re describing some of the software’s dependencies, where to find the package, what commands and options are used to build the package from source, and more. Once we’ve specified a package’s recipe, we can ask Spack to build that package in many different ways. 
This tutorial assumes you have a basic familiarity with some of the Spack commands, and that you have a working version of Spack installed. If not, we suggest looking at Spack’s [Getting Started](https://spack.readthedocs.io/en/latest/getting_started.html#getting-started) guide. This tutorial also assumes you have at least a beginner’s-level familiarity with Python. Also note that this document is a tutorial. It can help you get started with packaging, but is not intended to be complete. See Spack’s [Packaging Guide](https://spack.readthedocs.io/en/latest/packaging_guide.html#packaging-guide) for more complete documentation on this topic. ### Getting Started[¶](#id1) A few things before we get started: * We’ll refer to the Spack installation location via the environment variable `SPACK_ROOT`. You should point `SPACK_ROOT` at wherever you have Spack installed. * Add `$SPACK_ROOT/bin` to your `PATH` before you start. * Make sure your `EDITOR` environment variable is set to your preferred text editor. * We’ll be writing Python code as part of this tutorial. You can find successive versions of the Python code in `$SPACK_ROOT/lib/spack/docs/tutorial/examples`. ### Creating the Package File[¶](#creating-the-package-file) We will use a separate package repository for the tutorial. Package repositories allow you to separate sets of packages that take precedence over one another. We will use the tutorial repo that ships with Spack to avoid breaking the builtin Spack packages. ``` $ spack repo add $SPACK_ROOT/var/spack/repos/tutorial/ ==> Added repo with namespace 'tutorial'. ``` Spack comes with a handy command to create a new package: `spack create`. This command is given the location of a package’s source code, downloads the code, and sets up some basic packaging infrastructure for you. 
The mpileaks source code can be found on GitHub, and here’s what happens when we run `spack create` on it: ``` $ spack create -t generic https://github.com/LLNL/mpileaks/releases/download/v1.0/mpileaks-1.0.tar.gz ==> This looks like a URL for mpileaks ==> Found 1 version of mpileaks: 1.0 https://github.com/LLNL/mpileaks/releases/download/v1.0/mpileaks-1.0.tar.gz ==> How many would you like to checksum? (default is 1, q to abort) 1 ==> Downloading... ==> Fetching https://github.com/LLNL/mpileaks/releases/download/v1.0/mpileaks-1.0.tar.gz ############################################################################# 100.0% ==> Checksummed 1 version of mpileaks ==> Using specified package template: 'generic' ==> Created template for mpileaks package ==> Created package file: ~/spack/var/spack/repos/tutorial/packages/mpileaks/package.py ``` Spack should spawn a text editor with this file: ``` # Copyright 2013-2019 Lawrence Livermore National Security, LLC and other # Spack Project Developers. See the top-level COPYRIGHT file for details. # # SPDX-License-Identifier: (Apache-2.0 OR MIT) # # This is a template package file for Spack. We've put "FIXME" # next to all the things you'll want to change. Once you've handled # them, you can save this file and test your package like this: # # spack install mpileaks # # You can edit this file again by typing: # # spack edit mpileaks # # See the Spack documentation for more information on packaging. # If you submit this package back to Spack as a pull request, # please first remove this boilerplate and all FIXME comments. # from spack import * class Mpileaks(Package): """FIXME: Put a proper description of your package here.""" # FIXME: Add a proper url for your package's homepage here. homepage = "http://www.example.com" url = "https://github.com/LLNL/mpileaks/releases/download/v1.0/mpileaks-1.0.tar.gz" version('1.0', sha256='2e34cc4505556d1c1f085758e26f2f8eea0972db9382f051b2dcfb1d7d9e1825') # FIXME: Add dependencies if required. 
# depends_on('foo') def install(self, spec, prefix): # FIXME: Unknown build system make() make('install') ``` Spack has created this file in `$SPACK_ROOT/var/spack/repos/tutorial/packages/mpileaks/package.py`. Take a moment to look over the file. There are a few placeholders that Spack has created, which we’ll fill in as part of this tutorial: * We’ll document some information about this package in the comments. * We’ll fill in the dependency list for this package. * We’ll fill in some of the configuration arguments needed to build this package. For the moment, exit your editor and let’s see what happens when we try to build this package: ``` $ spack install mpileaks ==> Installing mpileaks ==> Searching for binary cache of mpileaks ==> Warning: No Spack mirrors are currently configured ==> No binary for mpileaks found: installing from source ==> Fetching https://github.com/LLNL/mpileaks/releases/download/v1.0/mpileaks-1.0.tar.gz ############################################################################# 100.0% ==> Staging archive: ~/spack/var/spack/stage/mpileaks-1.0-sv75n3u5ev6mljwcezisz3slooozbbxu/mpileaks-1.0.tar.gz ==> Created stage in ~/spack/var/spack/stage/mpileaks-1.0-sv75n3u5ev6mljwcezisz3slooozbbxu ==> No patches needed for mpileaks ==> Building mpileaks [Package] ==> Executing phase: 'install' ==> Error: ProcessError: Command exited with status 2: 'make' '-j16' 1 error found in build log: 1 ==> Executing phase: 'install' 2 ==> 'make' '-j16' >> 3 make: *** No targets specified and no makefile found. Stop. See build log for details: ~/spack/var/spack/stage/mpileaks-1.0-sv75n3u5ev6mljwcezisz3slooozbbxu/spack-build-out.txt ``` This obviously didn’t work; we need to fill in the package-specific information. Specifically, Spack didn’t try to build any of mpileaks’ dependencies, nor did it use the proper configure arguments. Let’s start fixing things. 
### Package Documentation[¶](#package-documentation) We can bring the `package.py` file back into our `EDITOR` with the `spack edit` command: ``` $ spack edit mpileaks ``` Let’s remove some of the `FIXME` comments, add links to the mpileaks homepage, and document what mpileaks does. I’m also going to cut out the Copyright clause at this point to keep this tutorial document shorter, but you shouldn’t do that normally. The results of these changes can be found in `$SPACK_ROOT/lib/spack/docs/tutorial/examples/1.package.py` and are displayed below. Make these changes to your `package.py`: ``` from spack import * class Mpileaks(Package): """Tool to detect and report MPI objects like MPI_Requests and MPI_Datatypes.""" homepage = "https://github.com/LLNL/mpileaks" url = "https://github.com/LLNL/mpileaks/releases/download/v1.0/mpileaks-1.0.tar.gz" version('1.0', sha256='2e34cc4505556d1c1f085758e26f2f8eea0972db9382f051b2dcfb1d7d9e1825') # FIXME: Add dependencies if required. # depends_on('foo') def install(self, spec, prefix): # FIXME: Unknown build system make() make('install') ``` We’ve filled in the comment that describes what this package does and added a link to its website. That won’t help us build yet, but it will allow Spack to provide some documentation on this package to other users: ``` $ spack info mpileaks Package: mpileaks Description: Tool to detect and report MPI objects like MPI_Requests and MPI_Datatypes. Homepage: https://github.com/LLNL/mpileaks Tags: None Preferred version: 1.0 https://github.com/LLNL/mpileaks/releases/download/v1.0/mpileaks-1.0.tar.gz Safe versions: 1.0 https://github.com/LLNL/mpileaks/releases/download/v1.0/mpileaks-1.0.tar.gz Variants: None Installation Phases: install Build Dependencies: None Link Dependencies: None Run Dependencies: None Virtual Packages: None ``` As we fill in more information about this package the `spack info` command will become more informative. Now let’s start making this package build. 
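Before moving on, it is worth noting *why* the docstring matters: Spack derives the description shown by `spack info` from the package class's docstring. A minimal sketch of the idea, in plain Python rather than Spack's real `Package` class:

```python
# Plain-Python sketch (not Spack's real Package class): Spack takes the
# description shown by `spack info` from the package class's docstring.
class Mpileaks:
    """Tool to detect and report MPI objects like
    MPI_Requests and MPI_Datatypes."""

# Collapse internal whitespace, roughly what a doc formatter would do
# before displaying the description.
description = " ".join(Mpileaks.__doc__.split())
print(description)
```

This is why the `spack create` template puts a `FIXME` in the docstring rather than in a separate `description` attribute.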
### Dependencies[¶](#dependencies) The mpileaks package depends on three other packages: `mpi`, `adept-utils`, and `callpath`. Let’s add those via the `depends_on` command in our `package.py` (this version is in `$SPACK_ROOT/lib/spack/docs/tutorial/examples/2.package.py`): ``` from spack import * class Mpileaks(Package): """Tool to detect and report MPI objects like MPI_Requests and MPI_Datatypes.""" homepage = "https://github.com/LLNL/mpileaks" url = "https://github.com/LLNL/mpileaks/releases/download/v1.0/mpileaks-1.0.tar.gz" version('1.0', sha256='2e34cc4505556d1c1f085758e26f2f8eea0972db9382f051b2dcfb1d7d9e1825') depends_on('mpi') depends_on('adept-utils') depends_on('callpath') def install(self, spec, prefix): # FIXME: Unknown build system make() make('install') ``` Now when we go to build mpileaks, Spack will fetch and build these dependencies before building mpileaks. Note that the mpi dependency is a different kind of beast than the adept-utils and callpath dependencies; there is no mpi package available in Spack. Instead mpi is a *virtual dependency*. Spack may satisfy that dependency by installing packages such as `openmpi` or `mvapich2`. See the [Packaging Guide](https://spack.readthedocs.io/en/latest/packaging_guide.html#packaging-guide) for more information on virtual dependencies. Now when we try to install this package, a lot more happens: ``` $ spack install mpileaks ... 
==> Successfully installed libdwarf from binary cache [+] ~/spack/opt/spack/linux-ubuntu16.04-x86_64/gcc-5.4.0/libdwarf-20180129-p4jeflorwlnkoq2vpuyocwrbcht2ayak ==> Installing callpath ==> Searching for binary cache of callpath ==> Installing callpath from binary cache ==> Fetching file:///mirror/build_cache/linux-ubuntu16.04-x86_64/gcc-5.4.0/callpath-1.0.4/linux-ubuntu16.04-x86_64-gcc-5.4.0-callpath-1.0.4-empvyxdkc4j4pwg7gznwhbiumruey66x.spack ######################################################################## 100.0% ==> Successfully installed callpath from binary cache [+] ~/spack/opt/spack/linux-ubuntu16.04-x86_64/gcc-5.4.0/callpath-1.0.4-empvyxdkc4j4pwg7gznwhbiumruey66x ==> Installing mpileaks ==> Searching for binary cache of mpileaks ==> No binary for mpileaks found: installing from source ==> Using cached archive: ~/spack/var/spack/cache/mpileaks/mpileaks-1.0.tar.gz ==> Staging archive: ~/spack/var/spack/stage/mpileaks-1.0-csoikctsalli4cdkkdk377gprkc472rb/mpileaks-1.0.tar.gz ==> Created stage in ~/spack/var/spack/stage/mpileaks-1.0-csoikctsalli4cdkkdk377gprkc472rb ==> No patches needed for mpileaks ==> Building mpileaks [Package] ==> Executing phase: 'install' ==> Error: ProcessError: Command exited with status 2: 'make' '-j16' 1 error found in build log: 1 ==> Executing phase: 'install' 2 ==> 'make' '-j16' >> 3 make: *** No targets specified and no makefile found. Stop. See build log for details: ~/spack/var/spack/stage/mpileaks-1.0-csoikctsalli4cdkkdk377gprkc472rb/mpileaks-1.0/spack-build-out.txt ``` Note that this command may take a while to run and produce more output if you don’t have an MPI already installed or configured in Spack. Now Spack has identified and made sure all of our dependencies have been built. It found the `openmpi` package that will satisfy our `mpi` dependency, and the `callpath` and `adept-utils` packages to satisfy our concrete dependencies. 
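The idea behind virtual dependencies can be sketched with a toy resolver. The `PROVIDERS` table and the `resolve` function below are made up for illustration; Spack's real concretizer is far more sophisticated:

```python
# Toy sketch of virtual-dependency resolution: a virtual name like 'mpi'
# is satisfied by any configured provider, while concrete package names
# resolve to themselves. Hypothetical data, not Spack's API.
PROVIDERS = {"mpi": ["openmpi", "mvapich2", "mpich"]}

def resolve(dep, preferred=None):
    """Map a (possibly virtual) dependency name to a concrete package."""
    if dep in PROVIDERS:
        choices = PROVIDERS[dep]
        return preferred if preferred in choices else choices[0]
    return dep  # already a concrete package

print(resolve("mpi"))                     # falls back to a default provider
print(resolve("mpi", preferred="mpich"))  # an explicitly requested provider
print(resolve("adept-utils"))             # concrete names pass through
```

The important point is that `depends_on('mpi')` never names a particular MPI implementation; the choice of provider is deferred until install time.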
### Debugging Package Builds[¶](#debugging-package-builds) Our `mpileaks` package is still not building. It may be obvious to many of you that we never ran the configure script. Let’s add a call to `configure()` to the top of the install routine. The resulting `package.py` is in `$SPACK_ROOT/lib/spack/docs/tutorial/examples/3.package.py`: ``` from spack import * class Mpileaks(Package): """Tool to detect and report MPI objects like MPI_Requests and MPI_Datatypes.""" homepage = "https://github.com/LLNL/mpileaks" url = "https://github.com/LLNL/mpileaks/releases/download/v1.0/mpileaks-1.0.tar.gz" version('1.0', sha256='2e34cc4505556d1c1f085758e26f2f8eea0972db9382f051b2dcfb1d7d9e1825') depends_on('mpi') depends_on('adept-utils') depends_on('callpath') def install(self, spec, prefix): configure() make() make('install') ``` If we re-run we still get errors: ``` $ spack install mpileaks ... ==> Installing mpileaks ==> Searching for binary cache of mpileaks ==> Finding buildcaches in /mirror/build_cache ==> No binary for mpileaks found: installing from source ==> Using cached archive: ~/spack/var/spack/cache/mpileaks/mpileaks-1.0.tar.gz ==> Staging archive: ~/spack/var/spack/stage/mpileaks-1.0-csoikctsalli4cdkkdk377gprkc472rb/mpileaks-1.0.tar.gz ==> Created stage in ~/spack/var/spack/stage/mpileaks-1.0-csoikctsalli4cdkkdk377gprkc472rb ==> No patches needed for mpileaks ==> Building mpileaks [Package] ==> Executing phase: 'install' ==> Error: ProcessError: Command exited with status 1: './configure' 1 error found in build log: 25 checking for ~/spack/opt/spack/linux-ubuntu16.04-x86_64/gcc-5.4.0/openmpi-3.1.3-3 njc4q5pqdpptq6jvqjrezkffwokv2sx/bin/mpicc... ~/spack/opt/spack/linux-ubuntu16.04- x86_64/gcc-5.4.0/openmpi-3.1.3-3njc4q5pqdpptq6jvqjrezkffwokv2sx/bin/mpicc 26 Checking whether ~/spack/opt/spack/linux-ubuntu16.04-x86_64/gcc-5.4.0/openmpi-3.1 .3-3njc4q5pqdpptq6jvqjrezkffwokv2sx/bin/mpicc responds to '-showme:compile'... 
no 27 Checking whether ~/spack/opt/spack/linux-ubuntu16.04-x86_64/gcc-5.4.0/openmpi-3.1 .3-3njc4q5pqdpptq6jvqjrezkffwokv2sx/bin/mpicc responds to '-showme'... no 28 Checking whether ~/spack/opt/spack/linux-ubuntu16.04-x86_64/gcc-5.4.0/openmpi-3.1 .3-3njc4q5pqdpptq6jvqjrezkffwokv2sx/bin/mpicc responds to '-compile-info'... no 29 Checking whether ~/spack/opt/spack/linux-ubuntu16.04-x86_64/gcc-5.4.0/openmpi-3.1 .3-3njc4q5pqdpptq6jvqjrezkffwokv2sx/bin/mpicc responds to '-show'... no 30 ./configure: line 4809: Echo: command not found >> 31 configure: error: unable to locate adept-utils installation See build log for details: ~/spack/var/spack/stage/mpileaks-1.0-csoikctsalli4cdkkdk377gprkc472rb/mpileaks-1.0/spack-build-out.txt ``` Again, the problem may be obvious. But let’s pretend we’re not all experienced Autotools developers and use this opportunity to spend some time debugging. We have a few options that can tell us about what’s going wrong: As per the error message, Spack has given us a `spack-build-out.txt` debug log: ``` ==> Executing phase: 'install' ==> './configure' checking metadata... no checking installation directory variables... yes checking for a BSD-compatible install... /usr/bin/install -c checking whether build environment is sane... yes checking for a thread-safe mkdir -p... /bin/mkdir -p checking for gawk... gawk checking whether make sets $(MAKE)... yes checking for gcc... /home/spack1/spack/lib/spack/env/gcc/gcc checking for C compiler default output file name... a.out checking whether the C compiler works... yes checking whether we are cross compiling... no checking for suffix of executables... checking for suffix of object files... o checking whether we are using the GNU C compiler... yes checking whether /home/spack1/spack/lib/spack/env/gcc/gcc accepts -g... yes checking for /home/spack1/spack/lib/spack/env/gcc/gcc option to accept ISO C89... none needed checking for style of include used by make... 
GNU checking dependency style of /home/spack1/spack/lib/spack/env/gcc/gcc... gcc3 checking whether /home/spack1/spack/lib/spack/env/gcc/gcc and cc understand -c and -o together... yes checking whether we are using the GNU C++ compiler... yes checking whether /home/spack1/spack/lib/spack/env/gcc/g++ accepts -g... yes checking dependency style of /home/spack1/spack/lib/spack/env/gcc/g++... gcc3 checking for /home/spack1/spack/opt/spack/linux-ubuntu16.04-x86_64/gcc-5.4.0/openmpi-3.0.0-yo5qkfvumpmgmvlbalqcadu46j5bd52f/bin/mpicc... /home/spack1/spack/opt/spack/linux-ubuntu16.04-x86_64/gcc-5.4.0/openmpi-3.0.0-yo5qkfvumpmgmvlbalqcadu46j5bd52f/bin/mpicc Checking whether /home/spack1/spack/opt/spack/linux-ubuntu16.04-x86_64/gcc-5.4.0/openmpi-3.0.0-yo5qkfvumpmgmvlbalqcadu46j5bd52f/bin/mpicc responds to '-showme:compile'... yes configure: error: unable to locate adept-utils installation ``` This gives us the output from the build, and mpileaks isn’t finding its `adept-utils` package. Spack has automatically added the include and library directories of `adept-utils` to the compiler’s search path, but some packages like mpileaks can sometimes be picky and still want things spelled out on their command line. But let’s continue to pretend we’re not experienced developers, and explore some other debugging paths: We can also enter the build area and try to manually run the build: ``` $ spack build-env mpileaks bash $ spack cd mpileaks ``` The `spack build-env` command spawned a new shell that contains the same environment that Spack used to build the mpileaks package (you can substitute bash for your favorite shell). The `spack cd` command changed our working directory to the last attempted build for mpileaks. From here we can manually re-run the build: ``` $ ./configure checking metadata... no checking installation directory variables... yes checking for a BSD-compatible install... /usr/bin/install -c checking whether build environment is sane... 
yes checking for a thread-safe mkdir -p... /bin/mkdir -p checking for gawk... gawk checking whether make sets $(MAKE)... yes checking for gcc... /home/spack1/spack/lib/spack/env/gcc/gcc checking for C compiler default output file name... a.out checking whether the C compiler works... yes checking whether we are cross compiling... no checking for suffix of executables... checking for suffix of object files... o checking whether we are using the GNU C compiler... yes checking whether /home/spack1/spack/lib/spack/env/gcc/gcc accepts -g... yes checking for /home/spack1/spack/lib/spack/env/gcc/gcc option to accept ISO C89... none needed checking for style of include used by make... GNU checking dependency style of /home/spack1/spack/lib/spack/env/gcc/gcc... gcc3 checking whether /home/spack1/spack/lib/spack/env/gcc/gcc and cc understand -c and -o together... yes checking whether we are using the GNU C++ compiler... yes checking whether /home/spack1/spack/lib/spack/env/gcc/g++ accepts -g... yes checking dependency style of /home/spack1/spack/lib/spack/env/gcc/g++... gcc3 checking for /home/spack1/spack/opt/spack/linux-ubuntu16.04-x86_64/gcc-5.4.0/openmpi-3.0.0-yo5qkfvumpmgmvlbalqcadu46j5bd52f/bin/mpicc... /home/spack1/spack/opt/spack/linux-ubuntu16.04-x86_64/gcc-5.4.0/openmpi-3.0.0-yo5qkfvumpmgmvlbalqcadu46j5bd52f/bin/mpicc Checking whether /home/spack1/spack/opt/spack/linux-ubuntu16.04-x86_64/gcc-5.4.0/openmpi-3.0.0-yo5qkfvumpmgmvlbalqcadu46j5bd52f/bin/mpicc responds to '-showme:compile'... yes configure: error: unable to locate adept-utils installation ``` We’re seeing the same error, but now we’re in a shell where we can run the command ourselves and debug as needed. We could, for example, run `./configure --help` to see what options we can use to specify dependencies. We can use the `exit` command to leave the shell spawned by `spack build-env`. 
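The "1 error found in build log" summary we saw earlier comes from Spack's log parser, which scans the build output for error lines so you don't have to. The gist of what such a parser does can be sketched as follows (a simplified illustration, not Spack's actual parser):

```python
import re

# A build log much like the spack-build-out.txt excerpt above, trimmed.
LOG = """checking for gcc... yes
checking whether the C compiler works... yes
configure: error: unable to locate adept-utils installation"""

def find_errors(text):
    """Return (line_number, line) pairs for lines that mention an error."""
    pattern = re.compile(r"\berror\b", re.IGNORECASE)
    return [(i + 1, line)
            for i, line in enumerate(text.splitlines())
            if pattern.search(line)]

for lineno, line in find_errors(LOG):
    print(">> {0} {1}".format(lineno, line))
```

Simple pattern matching like this is why the summary points you straight at the `configure: error:` line instead of making you scroll through hundreds of `checking ...` lines.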
### Specifying Configure Arguments[¶](#specifying-configure-arguments) Let’s add the configure arguments to the mpileaks’ `package.py`. This version can be found in `$SPACK_ROOT/lib/spack/docs/tutorial/examples/4.package.py`: ``` from spack import * class Mpileaks(Package): """Tool to detect and report MPI objects like MPI_Requests and MPI_Datatypes.""" homepage = "https://github.com/LLNL/mpileaks" url = "https://github.com/LLNL/mpileaks/releases/download/v1.0/mpileaks-1.0.tar.gz" version('1.0', sha256='2e34cc4505556d1c1f085758e26f2f8eea0972db9382f051b2dcfb1d7d9e1825') depends_on('mpi') depends_on('adept-utils') depends_on('callpath') def install(self, spec, prefix): configure('--prefix={0}'.format(prefix), '--with-adept-utils={0}'.format(spec['adept-utils'].prefix), '--with-callpath={0}'.format(spec['callpath'].prefix)) make() make('install') ``` This is all we need for a working mpileaks package! If we install now we’ll see: ``` $ spack install mpileaks ... ==> Installing mpileaks ==> Searching for binary cache of mpileaks ==> Finding buildcaches in /mirror/build_cache ==> No binary for mpileaks found: installing from source ==> Using cached archive: ~/spack/var/spack/cache/mpileaks/mpileaks-1.0.tar.gz ==> Staging archive: ~/spack/var/spack/stage/mpileaks-1.0-csoikctsalli4cdkkdk377gprkc472rb/mpileaks-1.0.tar.gz ==> Created stage in ~/spack/var/spack/stage/mpileaks-1.0-csoikctsalli4cdkkdk377gprkc472rb ==> No patches needed for mpileaks ==> Building mpileaks [Package] ==> Executing phase: 'install' ==> Successfully installed mpileaks Fetch: 0.00s. Build: 9.41s. Total: 9.41s. [+] ~/spack/opt/spack/linux-ubuntu16.04-x86_64/gcc-5.4.0/mpileaks-1.0-csoikctsalli4cdkkdk377gprkc472rb ``` There are some special circumstances in this package that are worth highlighting. 
Normally, Spack would have automatically detected that mpileaks was an Autotools-based package when we ran `spack create` and made it an `AutotoolsPackage` class (except we added the `-t generic` option to skip this). Instead of a full install routine we would have just written: ``` def configure_args(self): return [ '--with-adept-utils={0}'.format(self.spec['adept-utils'].prefix), '--with-callpath={0}'.format(self.spec['callpath'].prefix) ] ``` Similarly, if this had been a CMake-based package we would have been filling in a `cmake_args` function instead of `configure_args`. There are similar default package types for many build environments that will be discussed later in the tutorial. ### Variants[¶](#variants) We have a successful mpileaks build, but let’s take some time to improve it. `mpileaks` has a build-time option to truncate parts of the stack that it walks. Let’s add a variant to allow users to set this when they build mpileaks with Spack. To do this, we’ll add a variant to our package, as per the following (see `$SPACK_ROOT/lib/spack/docs/tutorial/examples/5.package.py`): ``` from spack import * class Mpileaks(Package): """Tool to detect and report MPI objects like MPI_Requests and MPI_Datatypes.""" homepage = "https://github.com/LLNL/mpileaks" url = "https://github.com/LLNL/mpileaks/releases/download/v1.0/mpileaks-1.0.tar.gz" version('1.0', sha256='2e34cc4505556d1c1f085758e26f2f8eea0972db9382f051b2dcfb1d7d9e1825') variant('stackstart', values=int, default=0, description='Specify the number of stack frames to truncate') depends_on('mpi') depends_on('adept-utils') depends_on('callpath') def install(self, spec, prefix): stackstart = int(spec.variants['stackstart'].value) args = [ '--prefix={0}'.format(prefix), '--with-adept-utils={0}'.format(spec['adept-utils'].prefix), '--with-callpath={0}'.format(spec['callpath'].prefix), ] if stackstart: args.extend([ '--with-stack-start-c={0}'.format(stackstart), '--with-stack-start-fortran={0}'.format(stackstart) ]) 
configure(*args) make() make('install') ``` We’ve added the variant `stackstart`, and given it a default value of `0`. If we install now we can see the stackstart variant added to the configure line (output truncated for length): ``` $ spack install --verbose mpileaks stackstart=4 ... ==> Installing mpileaks ==> Searching for binary cache of mpileaks ==> Finding buildcaches in /mirror/build_cache ==> No binary for mpileaks found: installing from source ==> Using cached archive: ~/spack/var/spack/cache/mpileaks/mpileaks-1.0.tar.gz ==> Staging archive: ~/spack/var/spack/stage/mpileaks-1.0-meufjojkxve3l7rci2mbud3faidgplto/mpileaks-1.0.tar.gz ==> Created stage in ~/spack/var/spack/stage/mpileaks-1.0-meufjojkxve3l7rci2mbud3faidgplto ==> No patches needed for mpileaks ==> Building mpileaks [Package] ==> Executing phase: 'install' ==> './configure' '--prefix=~/spack/opt/spack/linux-ubuntu16.04-x86_64/gcc-5.4.0/mpileaks-1.0-meufjojkxve3l7rci2mbud3faidgplto' '--with-adept-utils=~/spack/opt/spack/linux-ubuntu16.04-x86_64/gcc-5.4.0/adept-utils-1.0.1-7tippnvo5g76wpijk7x5kwfpr3iqiaen' '--with-callpath=~/spack/opt/spack/linux-ubuntu16.04-x86_64/gcc-5.4.0/callpath-1.0.4-empvyxdkc4j4pwg7gznwhbiumruey66x' '--with-stack-start-c=4' '--with-stack-start-fortran=4' ``` ### The Spec Object[¶](#the-spec-object) This tutorial has glossed over a few important features, which weren’t too relevant for mpileaks but may be useful for other packages. There were several places we reference the `self.spec` object. This is a powerful class for querying information about what we’re building. For example, you could use the spec to query information about how a package’s dependencies were built, or what compiler was being used, or what version of a package is being installed. 
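It may help to see what a version-range test like `@1.1:` actually means before reading the snippets. Here is a toy sketch of open-ended range matching (illustration only; Spack's real `Spec.satisfies` handles compilers, variants, and much more):

```python
# Toy version-range matcher for specs like '@1.1:' (1.1 or newer) and
# '@:5.0.0' (5.0.0 or older). Not Spack's actual Version/Spec classes.
def satisfies(version, spec):
    """Check a dotted version string against a range spec."""
    lo, _, hi = spec.lstrip("@").partition(":")
    as_tuple = lambda v: tuple(int(x) for x in v.split("."))
    if lo and as_tuple(version) < as_tuple(lo):
        return False
    if hi and as_tuple(version) > as_tuple(hi):
        return False
    return True

print(satisfies("1.2", "@1.1:"))    # 1.2 is at least 1.1
print(satisfies("1.0", "@1.1:"))    # 1.0 is too old
print(satisfies("4.9", "@:5.0.0"))  # 4.9 is at most 5.0.0
```

The colon marks the open end of the range: a bound before the colon is a minimum, a bound after it is a maximum, and both ends are inclusive.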
Full documentation can be found in the [Packaging Guide](https://spack.readthedocs.io/en/latest/packaging_guide.html#packaging-guide), but here are some quick snippets with common queries: * Am I building `mpileaks` version `1.1` or greater? ``` if self.spec.satisfies('@1.1:'): # Do things needed for 1.1+ ``` * Is `openmpi` the MPI I’m building with? ``` if self.spec['mpi'].name == 'openmpi': # Do openmpi things ``` * Am I building with `gcc` version less than `5.0.0`? ``` if self.spec.satisfies('%gcc@:5.0.0'): # Add arguments specific to gcc versions earlier than 5.0.0 ``` * Am I building with the `debug` variant? ``` if self.spec.satisfies('+debug'): # Add -g option to configure flags ``` * Is my `dyninst` dependency greater than version `8.0`? ``` if self.spec['dyninst'].satisfies('@8.0:'): # Use newest dyninst options ``` More examples can be found in the thousands of packages already added to Spack in `$SPACK_ROOT/var/spack/repos/builtin/packages`. Good Luck! To ensure that future sections of the tutorial run properly, please uninstall mpileaks and remove the tutorial repo from your configuration. ``` $ spack uninstall -ay mpileaks $ spack repo remove tutorial $ rm -rf $SPACK_ROOT/var/spack/repos/tutorial/packages/mpileaks ``` Environments Tutorial[¶](#environments-tutorial) --- We’ve shown you how to install and remove packages with Spack. You can use [spack install](https://spack.readthedocs.io/en/latest/basic_usage.html#cmd-spack-install) to install packages, [spack uninstall](https://spack.readthedocs.io/en/latest/basic_usage.html#cmd-spack-uninstall) to remove them, and [spack find](https://spack.readthedocs.io/en/latest/basic_usage.html#cmd-spack-find) to look at and query what is installed. We’ve also shown you how to customize Spack’s installation with configuration files like [packages.yaml](https://spack.readthedocs.io/en/latest/build_settings.html#build-settings). 
If you build a lot of software, or if you work on multiple projects, managing everything in one place can be overwhelming. The default `spack find` output may contain many packages, but you may want to *just* focus on packages for a particular project. Moreover, you may want to include special configuration with your package groups, e.g., to build all the packages in the same group the same way. Spack **environments** provide a way to handle these problems. ### Environment Basics[¶](#environment-basics) Let’s look at the output of `spack find` at this point in the tutorial. ``` $ spack find ==> 70 installed packages -- linux-ubuntu16.04-x86_64 / clang@3.8.0-2ubuntu4 --- tcl@8.6.8 zlib@1.2.8 zlib@1.2.11 -- linux-ubuntu16.04-x86_64 / gcc@4.7 --- zlib@1.2.11 -- linux-ubuntu16.04-x86_64 / gcc@5.4.0 --- adept-utils@1.0.1 hdf5@1.10.4 mpc@1.1.0 perl@5.26.2 autoconf@2.69 hdf5@1.10.4 mpfr@3.1.6 pkgconf@1.4.2 automake@1.16.1 hdf5@1.10.4 mpich@3.2.1 readline@7.0 boost@1.68.0 hwloc@1.11.9 mpileaks@1.0 suite-sparse@5.3.0 bzip2@1.0.6 hypre@2.15.1 mumps@5.1.1 tar@1.30 callpath@1.0.4 hypre@2.15.1 mumps@5.1.1 tcl@8.6.8 cmake@3.12.3 isl@0.18 ncurses@6.1 tcl@8.6.8 diffutils@3.6 libdwarf@20180129 netcdf@4.6.1 texinfo@6.5 dyninst@9.3.2 libiberty@2.31.1 netcdf@4.6.1 trilinos@12.12.1 elfutils@0.173 libpciaccess@0.13.5 netlib-scalapack@2.0.2 trilinos@12.12.1 findutils@4.6.0 libsigsegv@2.11 netlib-scalapack@2.0.2 util-macros@1.19.1 gcc@7.2.0 libtool@2.4.6 numactl@2.0.11 xz@5.2.4 gdbm@1.14.1 libxml2@2.9.8 openblas@0.3.3 zlib@1.2.8 gettext@0.19.8.1 m4@1.4.18 openmpi@3.1.3 zlib@1.2.8 glm@0.9.7.1 matio@1.5.9 openssl@1.0.2o zlib@1.2.11 gmp@6.1.2 matio@1.5.9 parmetis@4.0.3 hdf5@1.10.4 metis@5.1.0 parmetis@4.0.3 ``` This is a complete, but cluttered view. There are packages built with both `openmpi` and `mpich`, as well as multiple variants of other packages, like `zlib`. 
The query mechanism we learned about in `spack find` can help, but it would be nice if we could start from a clean slate without losing what we’ve already done. #### Creating and activating environments[¶](#creating-and-activating-environments) The `spack env` command can help. Let’s create a new environment: ``` $ spack env create myproject ==> Created environment 'myproject' in ~/spack/var/spack/environments/myproject ``` An environment is a virtualized `spack` instance that you can use for a specific purpose. You can see the environments we’ve created so far like this: ``` $ spack env list ==> 1 environments myproject ``` And you can **activate** an environment with `spack env activate`: ``` $ spack env activate myproject ``` Once you enter an environment, `spack find` shows only what is in the current environment. That’s nothing, so far: ``` $ spack find ==> In environment myproject ==> No root specs ==> 0 installed packages ``` The `spack find` output is still *slightly* different. It tells you that you’re in the `myproject` environment, so that you don’t panic when you see that there is nothing installed. It also says that there are *no root specs*. We’ll get back to what that means later. If you *only* want to check what environment you are in, you can use `spack env status`: ``` $ spack env status ==> In environment myproject ``` And, if you want to leave this environment and go back to normal Spack, you can use `spack env deactivate`. 
We like to use the `despacktivate` alias (which Spack sets up automatically) for short: ``` $ despacktivate # short alias for `spack env deactivate` $ spack env status ==> No active environment $ spack find netcdf@4.6.1 readline@7.0 zlib@1.2.11 diffutils@3.6 hdf5@1.10.4 m4@1.4.18 netcdf@4.6.1 suite-sparse@5.3.0 dyninst@10.0.0 hwloc@1.11.9 matio@1.5.9 netlib-scalapack@2.0.2 tar@1.30 elfutils@0.173 hypre@2.15.1 matio@1.5.9 netlib-scalapack@2.0.2 tcl@8.6.8 findutils@4.6.0 hypre@2.15.1 metis@5.1.0 numactl@2.0.11 tcl@8.6.8 gcc@7.2.0 intel-tbb@2019 mpc@1.1.0 openblas@0.3.3 texinfo@6.5~ ``` #### Installing packages[¶](#installing-packages) Ok, now that we understand how creation and activation work, let’s go back to `myproject` and *install* a few packages: ``` $ spack env activate myproject $ spack install tcl ==> tcl is already installed in ~/spack/opt/spack/linux-ubuntu16.04-x86_64/gcc-5.4.0/tcl-8.6.8-qhwyccywhx2i6s7ob2gvjrjtj3rnfuqt $ spack install trilinos ==> trilinos is already installed in ~/spack/opt/spack/linux-ubuntu16.04-x86_64/gcc-5.4.0/trilinos-12.12.1-rlsruavxqvwk2tgxzxboclbo6ykjf54r $ spack find ==> In environment myproject ==> Root specs tcl trilinos ==> 22 installed packages -- linux-ubuntu16.04-x86_64 / gcc@5.4.0 --- boost@1.68.0 hwloc@1.11.9 matio@1.5.9 netlib-scalapack@2.0.2 parmetis@4.0.3 xz@5.2.4 bzip2@1.0.6 hypre@2.15.1 metis@5.1.0 numactl@2.0.11 suite-sparse@5.3.0 zlib@1.2.11 glm@0.9.7.1 libpciaccess@0.13.5 mumps@5.1.1 openblas@0.3.3 tcl@8.6.8 hdf5@1.10.4 libxml2@2.9.8 netcdf@4.6.1 openmpi@3.1.3 trilinos@12.12.1 ``` We’ve installed `tcl` and `trilinos` in our environment, along with all of their dependencies. We call `tcl` and `trilinos` the **roots** because we asked for them explicitly. The other 20 packages listed under “installed packages” are present because they were needed as dependencies. So, these are the roots of the packages’ dependency graph. 
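The split between the roots and the rest of the installed packages is just a dependency-graph closure: the environment contains its roots plus every transitive dependency. The sketch below models that with a hypothetical, hard-coded dependency table (the edges are illustrative stand-ins, not Spack's real package metadata):

```python
# Illustrative dependency edges (not the real Spack dependency graph).
DEPS = {
    'trilinos': ['boost', 'openmpi', 'hdf5'],
    'hdf5': ['openmpi', 'zlib'],
    'openmpi': ['hwloc'],
    'boost': ['zlib'],
    'tcl': ['zlib'],
    'hwloc': [],
    'zlib': [],
}

def closure(roots):
    """Return the roots plus every transitive dependency."""
    seen = set()
    stack = list(roots)
    while stack:
        pkg = stack.pop()
        if pkg not in seen:
            seen.add(pkg)
            stack.extend(DEPS.get(pkg, []))
    return seen

print(sorted(closure(['tcl', 'trilinos'])))
# → ['boost', 'hdf5', 'hwloc', 'openmpi', 'tcl', 'trilinos', 'zlib']
```

In this model, `spack find` would report two root specs but seven installed packages, for the same reason the real environment reports 2 roots and 22 packages.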
The “<package> is already installed” messages above are generated because we already installed these packages in previous steps of the tutorial, and we don’t have to rebuild them to put them in an environment. Now let’s create *another* project. We’ll call this one `myproject2`: ``` $ spack env create myproject2 ==> Created environment 'myproject2' in ~/spack/var/spack/environments/myproject2 $ spack env activate myproject2 $ spack install hdf5 ==> hdf5 is already installed in ~/spack/opt/spack/linux-ubuntu16.04-x86_64/gcc-5.4.0/hdf5-1.10.4-ozyvmhzdew66byarohm4p36ep7wtcuiw $ spack install trilinos ==> trilinos is already installed in ~/spack/opt/spack/linux-ubuntu16.04-x86_64/gcc-5.4.0/trilinos-12.12.1-rlsruavxqvwk2tgxzxboclbo6ykjf54r $ spack find ==> In environment myproject2 ==> Root specs hdf5 trilinos ==> 22 installed packages -- linux-ubuntu16.04-x86_64 / gcc@5.4.0 --- boost@1.68.0 hdf5@1.10.4 libxml2@2.9.8 netcdf@4.6.1 openmpi@3.1.3 xz@5.2.4 bzip2@1.0.6 hwloc@1.11.9 matio@1.5.9 netlib-scalapack@2.0.2 parmetis@4.0.3 zlib@1.2.11 glm@0.9.7.1 hypre@2.15.1 metis@5.1.0 numactl@2.0.11 suite-sparse@5.3.0 hdf5@1.10.4 libpciaccess@0.13.5 mumps@5.1.1 openblas@0.3.3 trilinos@12.12.1 ``` Now we have two environments: one with `tcl` and `trilinos`, and another with `hdf5` and `trilinos`. We can uninstall trilinos from `myproject2` as you would expect: ``` $ spack uninstall trilinos ==> The following packages will be uninstalled: -- linux-ubuntu16.04-x86_64 / gcc@5.4.0 --- rlsruav trilinos@12.12.1%gcc~alloptpkgs+amesos+amesos2+anasazi+aztec+belos+boost build_type=RelWithDebInfo ~cgns~complex~dtk+epetra+epetraext+exodus+explicit_template_instantiation~float+fortran~fortrilinos+gtest+hdf5+hypre+ifpack+ifpack2~intrepid~intrepid2~isorropia+kokkos+metis~minitensor+ml+muelu+mumps~nox~openmp~phalanx~piro~pnetcdf~python~rol~rythmos+sacado~shards+shared~stk+suite-sparse~superlu~superlu-dist~teko~tempus+teuchos+tpetra~x11~xsdkflags~zlib+zoltan+zoltan2 ==> Do you want to proceed? 
[y/N] y $ spack find ==> In environment myproject2 ==> Root specs hdf5 ==> 8 installed packages -- linux-ubuntu16.04-x86_64 / gcc@5.4.0 --- hdf5@1.10.4 libpciaccess@0.13.5 numactl@2.0.11 xz@5.2.4 hwloc@1.11.9 libxml2@2.9.8 openmpi@3.1.3 zlib@1.2.11 ``` Now there is only one root spec, `hdf5`, which requires fewer additional dependencies. However, we still needed `trilinos` for the `myproject` environment! What happened to it? Let’s switch back and see. ``` $ despacktivate $ spack env activate myproject $ spack find ==> In environment myproject ==> Root specs tcl trilinos ==> 22 installed packages -- linux-ubuntu16.04-x86_64 / gcc@5.4.0 --- boost@1.68.0 hwloc@1.11.9 matio@1.5.9 netlib-scalapack@2.0.2 parmetis@4.0.3 xz@5.2.4 bzip2@1.0.6 hypre@2.15.1 metis@5.1.0 numactl@2.0.11 suite-sparse@5.3.0 zlib@1.2.11 glm@0.9.7.1 libpciaccess@0.13.5 mumps@5.1.1 openblas@0.3.3 tcl@8.6.8 hdf5@1.10.4 libxml2@2.9.8 netcdf@4.6.1 openmpi@3.1.3 trilinos@12.12.1 ``` Spack is smart enough to realize that `trilinos` is still present in the other environment. Trilinos won’t *actually* be uninstalled unless it is no longer needed by any environments or packages. If it is still needed, it is only removed from the environment. ### Dealing with Many Specs at Once[¶](#dealing-with-many-specs-at-once) In the above examples, we just used `install` and `uninstall`. There are other ways to deal with groups of packages, as well. 
#### Adding specs[¶](#adding-specs) Let's go back to our first `myproject` environment and *add* a few specs instead of installing them: ``` $ spack add hdf5 ==> Adding hdf5 to environment myproject $ spack add gmp ==> Adding gmp to environment myproject $ spack find ==> In environment myproject ==> Root specs gmp hdf5 tcl trilinos ==> 22 installed packages -- linux-ubuntu16.04-x86_64 / gcc@5.4.0 --- boost@1.68.0 hwloc@1.11.9 matio@1.5.9 netlib-scalapack@2.0.2 parmetis@4.0.3 xz@5.2.4 bzip2@1.0.6 hypre@2.15.1 metis@5.1.0 numactl@2.0.11 suite-sparse@5.3.0 zlib@1.2.11 glm@0.9.7.1 libpciaccess@0.13.5 mumps@5.1.1 openblas@0.3.3 tcl@8.6.8 hdf5@1.10.4 libxml2@2.9.8 netcdf@4.6.1 openmpi@3.1.3 trilinos@12.12.1 ``` Let's take a close look at what happened. The two packages we added, `hdf5` and `gmp`, are present, but they're not installed in the environment yet. `spack add` just adds *roots* to the environment, but it does not automatically install them. We can install *all* the as-yet uninstalled packages in an environment by simply running `spack install` with no arguments: ``` $ spack install ==> Concretizing hdf5 [+] ozyvmhz hdf5@1.10.4%gcc@5.4.0~cxx~debug~fortran~hl+mpi+pic+shared~szip~threadsafe arch=linux-ubuntu16.04-x86_64 [+] 3njc4q5 ^openmpi@3.1.3%gcc@5.4.0~cuda+cxx_exceptions fabrics= ~java~legacylaunchers~memchecker~pmi schedulers= ~sqlite3~thread_multiple+vt arch=linux-ubuntu16.04-x86_64 [+] 43tkw5m ^hwloc@1.11.9%gcc@5.4.0~cairo~cuda+libxml2+pci+shared arch=linux-ubuntu16.04-x86_64 [+] 5urc6tc ^libpciaccess@0.13.5%gcc@5.4.0 arch=linux-ubuntu16.04-x86_64 [+] o2pfwjf ^libtool@2.4.6%gcc@5.4.0 arch=linux-ubuntu16.04-x86_64 [+] suf5jtc ^m4@1.4.18%gcc@5.4.0 patches=3877ab548f88597ab2327a2230ee048d2d07ace1062efe81fc92e91b7f39cd00,c0a408fbffb7255fcc75e26bd8edab116fc81d216bfd18b473668b7739a4158e,fc9b61654a3ba1a8d6cd78ce087e7c96366c290bc8d2c299f09828d793b853c8 +sigsegv arch=linux-ubuntu16.04-x86_64 [+] fypapcp ^libsigsegv@2.11%gcc@5.4.0 arch=linux-ubuntu16.04-x86_64 [+]
fovrh7a ^pkgconf@1.4.2%gcc@5.4.0 arch=linux-ubuntu16.04-x86_64 [+] milz7fm ^util-macros@1.19.1%gcc@5.4.0 arch=linux-ubuntu16.04-x86_64 [+] wpexsph ^libxml2@2.9.8%gcc@5.4.0~python arch=linux-ubuntu16.04-x86_64 [+] teneqii ^xz@5.2.4%gcc@5.4.0 arch=linux-ubuntu16.04-x86_64 [+] 5nus6kn ^zlib@1.2.11%gcc@5.4.0+optimize+pic+shared arch=linux-ubuntu16.04-x86_64 [+] ft463od ^numactl@2.0.11%gcc@5.4.0 patches=592f30f7f5f757dfc239ad0ffd39a9a048487ad803c26b419e0f96b8cda08c1a arch=linux-ubuntu16.04-x86_64 [+] 3sx2gxe ^autoconf@2.69%gcc@5.4.0 arch=linux-ubuntu16.04-x86_64 [+] ic2kyoa ^perl@5.26.2%gcc@5.4.0+cpanm patches=0eac10ed90aeb0459ad8851f88081d439a4e41978e586ec743069e8b059370ac +shared+threads arch=linux-ubuntu16.04-x86_64 [+] q4fpyuo ^gdbm@1.14.1%gcc@5.4.0 arch=linux-ubuntu16.04-x86_64 [+] nxhwrg7 ^readline@7.0%gcc@5.4.0 arch=linux-ubuntu16.04-x86_64 [+] 3o765ou ^ncurses@6.1%gcc@5.4.0~symlinks~termlib arch=linux-ubuntu16.04-x86_64 [+] rymw7im ^automake@1.16.1%gcc@5.4.0 arch=linux-ubuntu16.04-x86_64 ==> Concretizing gmp [+] qc4qcfz gmp@6.1.2%gcc@5.4.0 arch=linux-ubuntu16.04-x86_64 [+] 3sx2gxe ^autoconf@2.69%gcc@5.4.0 arch=linux-ubuntu16.04-x86_64 [+] suf5jtc ^m4@1.4.18%gcc@5.4.0 patches=3877ab548f88597ab2327a2230ee048d2d07ace1062efe81fc92e91b7f39cd00,c0a408fbffb7255fcc75e26bd8edab116fc81d216bfd18b473668b7739a4158e,fc9b61654a3ba1a8d6cd78ce087e7c96366c290bc8d2c299f09828d793b853c8 +sigsegv arch=linux-ubuntu16.04-x86_64 [+] fypapcp ^libsigsegv@2.11%gcc@5.4.0 arch=linux-ubuntu16.04-x86_64 [+] ic2kyoa ^perl@5.26.2%gcc@5.4.0+cpanm patches=0eac10ed90aeb0459ad8851f88081d439a4e41978e586ec743069e8b059370ac +shared+threads arch=linux-ubuntu16.04-x86_64 [+] q4fpyuo ^gdbm@1.14.1%gcc@5.4.0 arch=linux-ubuntu16.04-x86_64 [+] nxhwrg7 ^readline@7.0%gcc@5.4.0 arch=linux-ubuntu16.04-x86_64 [+] 3o765ou ^ncurses@6.1%gcc@5.4.0~symlinks~termlib arch=linux-ubuntu16.04-x86_64 [+] fovrh7a ^pkgconf@1.4.2%gcc@5.4.0 arch=linux-ubuntu16.04-x86_64 [+] rymw7im ^automake@1.16.1%gcc@5.4.0 
arch=linux-ubuntu16.04-x86_64 [+] o2pfwjf ^libtool@2.4.6%gcc@5.4.0 arch=linux-ubuntu16.04-x86_64 ==> Installing environment myproject ==> tcl is already installed in ~/spack/opt/spack/linux-ubuntu16.04-x86_64/gcc-5.4.0/tcl-8.6.8-qhwyccywhx2i6s7ob2gvjrjtj3rnfuqt ==> trilinos is already installed in ~/spack/opt/spack/linux-ubuntu16.04-x86_64/gcc-5.4.0/trilinos-12.12.1-rlsruavxqvwk2tgxzxboclbo6ykjf54r ==> hdf5 is already installed in ~/spack/opt/spack/linux-ubuntu16.04-x86_64/gcc-5.4.0/hdf5-1.10.4-ozyvmhzdew66byarohm4p36ep7wtcuiw ==> gmp is already installed in ~/spack/opt/spack/linux-ubuntu16.04-x86_64/gcc-5.4.0/gmp-6.1.2-qc4qcfz4monpllc3nqupdo7vwinf73sw ``` Spack will concretize the new roots, and install everything you added to the environment. Now we can see the installed roots in the output of `spack find`: ``` $ spack find ==> In environment myproject ==> Root specs gmp hdf5 tcl trilinos ==> 24 installed packages -- linux-ubuntu16.04-x86_64 / gcc@5.4.0 --- boost@1.68.0 hdf5@1.10.4 libpciaccess@0.13.5 mumps@5.1.1 openblas@0.3.3 tcl@8.6.8 bzip2@1.0.6 hdf5@1.10.4 libxml2@2.9.8 netcdf@4.6.1 openmpi@3.1.3 trilinos@12.12.1 glm@0.9.7.1 hwloc@1.11.9 matio@1.5.9 netlib-scalapack@2.0.2 parmetis@4.0.3 xz@5.2.4 gmp@6.1.2 hypre@2.15.1 metis@5.1.0 numactl@2.0.11 suite-sparse@5.3.0 zlib@1.2.11 ``` We can build whole environments this way, by adding specs and installing all at once, or we can install them with the usual `install` and `uninstall` portions. The advantage to doing them all at once is that we don’t have to write a script outside of Spack to automate this, and we can kick off a large build of many packages easily. #### Configuration[¶](#configuration) So far, `myproject` does not have any special configuration associated with it. 
The specs concretize using Spack’s defaults: ``` $ spack spec hypre Input spec --- hypre Concretized --- hypre@2.15.1%gcc@5.4.0~debug~int64+internal-superlu+mpi+shared arch=linux-ubuntu16.04-x86_64 ^openblas@0.3.3%gcc@5.4.0 cpu_target= ~ilp64 patches=47cfa7a952ac7b2e4632c73ae199d69fb54490627b66a62c681e21019c4ddc9d,714aea33692304a50bd0ccde42590c176c82ded4a8ac7f06e573dc8071929c33 +pic+shared threads=none ~virtual_machine arch=linux-ubuntu16.04-x86_64 ^openmpi@3.1.3%gcc@5.4.0~cuda+cxx_exceptions fabrics= ~java~legacylaunchers~memchecker~pmi schedulers= ~sqlite3~thread_multiple+vt arch=linux-ubuntu16.04-x86_64 ^hwloc@1.11.9%gcc@5.4.0~cairo~cuda+libxml2+pci+shared arch=linux-ubuntu16.04-x86_64 ^libpciaccess@0.13.5%gcc@5.4.0 arch=linux-ubuntu16.04-x86_64 ^libtool@2.4.6%gcc@5.4.0 arch=linux-ubuntu16.04-x86_64 ^m4@1.4.18%gcc@5.4.0 patches=3877ab548f88597ab2327a2230ee048d2d07ace1062efe81fc92e91b7f39cd00,c0a408fbffb7255fcc75e26bd8edab116fc81d216bfd18b473668b7739a4158e,fc9b61654a3ba1a8d6cd78ce087e7c96366c290bc8d2c299f09828d793b853c8 +sigsegv arch=linux-ubuntu16.04-x86_64 ^libsigsegv@2.11%gcc@5.4.0 arch=linux-ubuntu16.04-x86_64 ^pkgconf@1.4.2%gcc@5.4.0 arch=linux-ubuntu16.04-x86_64 ^util-macros@1.19.1%gcc@5.4.0 arch=linux-ubuntu16.04-x86_64 ^libxml2@2.9.8%gcc@5.4.0~python arch=linux-ubuntu16.04-x86_64 ^xz@5.2.4%gcc@5.4.0 arch=linux-ubuntu16.04-x86_64 ^zlib@1.2.11%gcc@5.4.0+optimize+pic+shared arch=linux-ubuntu16.04-x86_64 ^numactl@2.0.11%gcc@5.4.0 patches=592f30f7f5f757dfc239ad0ffd39a9a048487ad803c26b419e0f96b8cda08c1a arch=linux-ubuntu16.04-x86_64 ^autoconf@2.69%gcc@5.4.0 arch=linux-ubuntu16.04-x86_64 ^perl@5.26.2%gcc@5.4.0+cpanm patches=0eac10ed90aeb0459ad8851f88081d439a4e41978e586ec743069e8b059370ac +shared+threads arch=linux-ubuntu16.04-x86_64 ^gdbm@1.14.1%gcc@5.4.0 arch=linux-ubuntu16.04-x86_64 ^readline@7.0%gcc@5.4.0 arch=linux-ubuntu16.04-x86_64 ^ncurses@6.1%gcc@5.4.0~symlinks~termlib arch=linux-ubuntu16.04-x86_64 ^automake@1.16.1%gcc@5.4.0 arch=linux-ubuntu16.04-x86_64 
``` You may want to add extra configuration to your environment. You can see how your environment is configured using `spack config get`: ``` $ spack config get # This is a Spack Environment file. # # It describes a set of packages to be installed, along with # configuration settings. spack: # add package specs to the `specs` list specs: [tcl, trilinos, hdf5, gmp] ``` It turns out that this is a special configuration format where Spack stores the state for the environment. Currently, the file is just a `spack:` header and a list of `specs`. These are the roots. You can edit this file to add your own custom configuration. Spack provides a shortcut to do that: ``` spack config edit ``` You should now see the same file, and edit it to look like this: ``` # This is a Spack Environment file. # # It describes a set of packages to be installed, along with # configuration settings. spack: packages: all: providers: mpi: [mpich] # add package specs to the `specs` list specs: [tcl, trilinos, hdf5, gmp] ``` Now if we run `spack spec` again in the environment, specs will concretize with `mpich` as the MPI implementation: ``` $ spack spec hypre Input spec --- hypre Concretized --- hypre@2.15.1%gcc@5.4.0~debug~int64+internal-superlu+mpi+shared arch=linux-ubuntu16.04-x86_64 ^mpich@3.2.1%gcc@5.4.0 device=ch3 +hydra netmod=tcp +pmi+romio~verbs arch=linux-ubuntu16.04-x86_64 ^findutils@4.6.0%gcc@5.4.0 patches=84b916c0bf8c51b7e7b28417692f0ad3e7030d1f3c248ba77c42ede5c1c5d11e,bd9e4e5cc280f9753ae14956c4e4aa17fe7a210f55dd6c84aa60b12d106d47a2 arch=linux-ubuntu16.04-x86_64 ^autoconf@2.69%gcc@5.4.0 arch=linux-ubuntu16.04-x86_64 ^m4@1.4.18%gcc@5.4.0 patches=3877ab548f88597ab2327a2230ee048d2d07ace1062efe81fc92e91b7f39cd00,c0a408fbffb7255fcc75e26bd8edab116fc81d216bfd18b473668b7739a4158e,fc9b61654a3ba1a8d6cd78ce087e7c96366c290bc8d2c299f09828d793b853c8 +sigsegv arch=linux-ubuntu16.04-x86_64 ^libsigsegv@2.11%gcc@5.4.0 arch=linux-ubuntu16.04-x86_64 ^perl@5.26.2%gcc@5.4.0+cpanm 
patches=0eac10ed90aeb0459ad8851f88081d439a4e41978e586ec743069e8b059370ac +shared+threads arch=linux-ubuntu16.04-x86_64 ^gdbm@1.14.1%gcc@5.4.0 arch=linux-ubuntu16.04-x86_64 ^readline@7.0%gcc@5.4.0 arch=linux-ubuntu16.04-x86_64 ^ncurses@6.1%gcc@5.4.0~symlinks~termlib arch=linux-ubuntu16.04-x86_64 ^pkgconf@1.4.2%gcc@5.4.0 arch=linux-ubuntu16.04-x86_64 ^automake@1.16.1%gcc@5.4.0 arch=linux-ubuntu16.04-x86_64 ^libtool@2.4.6%gcc@5.4.0 arch=linux-ubuntu16.04-x86_64 ^texinfo@6.5%gcc@5.4.0 arch=linux-ubuntu16.04-x86_64 ^openblas@0.3.3%gcc@5.4.0 cpu_target= ~ilp64 patches=47cfa7a952ac7b2e4632c73ae199d69fb54490627b66a62c681e21019c4ddc9d,714aea33692304a50bd0ccde42590c176c82ded4a8ac7f06e573dc8071929c33 +pic+shared threads=none ~virtual_machine arch=linux-ubuntu16.04-x86_64 ``` In addition to the `specs` section, an environment’s configuration can contain any of the configuration options from Spack’s various config sections. You can add custom repositories, a custom install location, custom compilers, or custom external packages, in addition to the `package` preferences we show here. But now we have a problem. We already installed part of this environment with openmpi, but now we want to install it with `mpich`. You can run `spack concretize` inside of an environment to concretize all of its specs. 
We can run it here: ``` $ spack concretize -f ==> Concretizing tcl [+] qhwyccy tcl@8.6.8%gcc@5.4.0 arch=linux-ubuntu16.04-x86_64 [+] 5nus6kn ^zlib@1.2.11%gcc@5.4.0+optimize+pic+shared arch=linux-ubuntu16.04-x86_64 ==> Concretizing trilinos [+] kqc52mo trilinos@12.12.1%gcc@5.4.0~alloptpkgs+amesos+amesos2+anasazi+aztec+belos+boost build_type=RelWithDebInfo ~cgns~complex~dtk+epetra+epetraext+exodus+explicit_template_instantiation~float+fortran~fortrilinos+gtest+hdf5+hypre+ifpack+ifpack2~intrepid~intrepid2~isorropia+kokkos+metis~minitensor+ml+muelu+mumps~nox~openmp~phalanx~piro~pnetcdf~python~rol~rythmos+sacado~shards+shared~stk+suite-sparse~superlu~superlu-dist~teko~tempus+teuchos+tpetra~x11~xsdkflags~zlib+zoltan+zoltan2 arch=linux-ubuntu16.04-x86_64 [+] zbgfxap ^boost@1.68.0%gcc@5.4.0+atomic+chrono~clanglibcpp cxxstd=default +date_time~debug+exception+filesystem+graph~icu+iostreams+locale+log+math~mpi+multithreaded~numpy patches=2ab6c72d03dec6a4ae20220a9dfd5c8c572c5294252155b85c6874d97c323199 +program_options~python+random+regex+serialization+shared+signals~singlethreaded+system~taggedlayout+test+thread+timer~versionedlayout+wave arch=linux-ubuntu16.04-x86_64 [+] ufczdvs ^bzip2@1.0.6%gcc@5.4.0+shared arch=linux-ubuntu16.04-x86_64 [+] 2rhuivg ^diffutils@3.6%gcc@5.4.0 arch=linux-ubuntu16.04-x86_64 [+] 5nus6kn ^zlib@1.2.11%gcc@5.4.0+optimize+pic+shared arch=linux-ubuntu16.04-x86_64 [+] otafqzh ^cmake@3.12.3%gcc@5.4.0~doc+ncurses+openssl+ownlibs patches=dd3a40d4d92f6b2158b87d6fb354c277947c776424aa03f6dc8096cf3135f5d0 ~qt arch=linux-ubuntu16.04-x86_64 [+] 3o765ou ^ncurses@6.1%gcc@5.4.0~symlinks~termlib arch=linux-ubuntu16.04-x86_64 [+] fovrh7a ^pkgconf@1.4.2%gcc@5.4.0 arch=linux-ubuntu16.04-x86_64 [+] b4y3w3b ^openssl@1.0.2o%gcc@5.4.0+systemcerts arch=linux-ubuntu16.04-x86_64 [+] ic2kyoa ^perl@5.26.2%gcc@5.4.0+cpanm patches=0eac10ed90aeb0459ad8851f88081d439a4e41978e586ec743069e8b059370ac +shared+threads arch=linux-ubuntu16.04-x86_64 [+] q4fpyuo ^gdbm@1.14.1%gcc@5.4.0 
arch=linux-ubuntu16.04-x86_64 [+] nxhwrg7 ^readline@7.0%gcc@5.4.0 arch=linux-ubuntu16.04-x86_64 [+] jnw622j ^glm@0.9.7.1%gcc@5.4.0 build_type=RelWithDebInfo arch=linux-ubuntu16.04-x86_64 [+] xxd7syh ^hdf5@1.10.4%gcc@5.4.0~cxx~debug~fortran+hl+mpi+pic+shared~szip~threadsafe arch=linux-ubuntu16.04-x86_64 [+] p3f7p2r ^mpich@3.2.1%gcc@5.4.0 device=ch3 +hydra netmod=tcp +pmi+romio~verbs arch=linux-ubuntu16.04-x86_64 [+] d4iajxs ^findutils@4.6.0%gcc@5.4.0 patches=84b916c0bf8c51b7e7b28417692f0ad3e7030d1f3c248ba77c42ede5c1c5d11e,bd9e4e5cc280f9753ae14956c4e4aa17fe7a210f55dd6c84aa60b12d106d47a2 arch=linux-ubuntu16.04-x86_64 [+] 3sx2gxe ^autoconf@2.69%gcc@5.4.0 arch=linux-ubuntu16.04-x86_64 [+] suf5jtc ^m4@1.4.18%gcc@5.4.0 patches=3877ab548f88597ab2327a2230ee048d2d07ace1062efe81fc92e91b7f39cd00,c0a408fbffb7255fcc75e26bd8edab116fc81d216bfd18b473668b7739a4158e,fc9b61654a3ba1a8d6cd78ce087e7c96366c290bc8d2c299f09828d793b853c8 +sigsegv arch=linux-ubuntu16.04-x86_64 [+] fypapcp ^libsigsegv@2.11%gcc@5.4.0 arch=linux-ubuntu16.04-x86_64 [+] rymw7im ^automake@1.16.1%gcc@5.4.0 arch=linux-ubuntu16.04-x86_64 [+] o2pfwjf ^libtool@2.4.6%gcc@5.4.0 arch=linux-ubuntu16.04-x86_64 [+] zs7a2pc ^texinfo@6.5%gcc@5.4.0 arch=linux-ubuntu16.04-x86_64 [+] obewuoz ^hypre@2.15.1%gcc@5.4.0~debug~int64~internal-superlu+mpi+shared arch=linux-ubuntu16.04-x86_64 [+] cyeg2yi ^openblas@0.3.3%gcc@5.4.0 cpu_target= ~ilp64 patches=47cfa7a952ac7b2e4632c73ae199d69fb54490627b66a62c681e21019c4ddc9d,714aea33692304a50bd0ccde42590c176c82ded4a8ac7f06e573dc8071929c33 +pic+shared threads=none ~virtual_machine arch=linux-ubuntu16.04-x86_64 [+] gvyqldh ^matio@1.5.9%gcc@5.4.0+hdf5+shared+zlib arch=linux-ubuntu16.04-x86_64 [+] 3wnvp4j ^metis@5.1.0%gcc@5.4.0 build_type=Release ~gdb~int64 patches=4991da938c1d3a1d3dea78e49bbebecba00273f98df2a656e38b83d55b281da1 ~real64+shared arch=linux-ubuntu16.04-x86_64 [+] cumcj5a ^mumps@5.1.1%gcc@5.4.0+complex+double+float~int64~metis+mpi~parmetis~ptscotch~scotch+shared 
arch=linux-ubuntu16.04-x86_64 [+] p7iln2p ^netlib-scalapack@2.0.2%gcc@5.4.0 build_type=RelWithDebInfo ~pic+shared arch=linux-ubuntu16.04-x86_64 [+] wmmx5sg ^netcdf@4.6.1%gcc@5.4.0~dap~hdf4 maxdims=1024 maxvars=8192 +mpi~parallel-netcdf+shared arch=linux-ubuntu16.04-x86_64 [+] jehtata ^parmetis@4.0.3%gcc@5.4.0 build_type=RelWithDebInfo ~gdb patches=4f892531eb0a807eb1b82e683a416d3e35154a455274cf9b162fb02054d11a5b,50ed2081bc939269689789942067c58b3e522c269269a430d5d34c00edbc5870,704b84f7c7444d4372cb59cca6e1209df4ef3b033bc4ee3cf50f369bce972a9d +shared arch=linux-ubuntu16.04-x86_64 [+] zaau4ki ^suite-sparse@5.3.0%gcc@5.4.0~cuda~openmp+pic~tbb arch=linux-ubuntu16.04-x86_64 ==> Concretizing hdf5 - zjgyn3w hdf5@1.10.4%gcc@5.4.0~cxx~debug~fortran~hl+mpi+pic+shared~szip~threadsafe arch=linux-ubuntu16.04-x86_64 [+] p3f7p2r ^mpich@3.2.1%gcc@5.4.0 device=ch3 +hydra netmod=tcp +pmi+romio~verbs arch=linux-ubuntu16.04-x86_64 [+] d4iajxs ^findutils@4.6.0%gcc@5.4.0 patches=84b916c0bf8c51b7e7b28417692f0ad3e7030d1f3c248ba77c42ede5c1c5d11e,bd9e4e5cc280f9753ae14956c4e4aa17fe7a210f55dd6c84aa60b12d106d47a2 arch=linux-ubuntu16.04-x86_64 [+] 3sx2gxe ^autoconf@2.69%gcc@5.4.0 arch=linux-ubuntu16.04-x86_64 [+] suf5jtc ^m4@1.4.18%gcc@5.4.0 patches=3877ab548f88597ab2327a2230ee048d2d07ace1062efe81fc92e91b7f39cd00,c0a408fbffb7255fcc75e26bd8edab116fc81d216bfd18b473668b7739a4158e,fc9b61654a3ba1a8d6cd78ce087e7c96366c290bc8d2c299f09828d793b853c8 +sigsegv arch=linux-ubuntu16.04-x86_64 [+] fypapcp ^libsigsegv@2.11%gcc@5.4.0 arch=linux-ubuntu16.04-x86_64 [+] ic2kyoa ^perl@5.26.2%gcc@5.4.0+cpanm patches=0eac10ed90aeb0459ad8851f88081d439a4e41978e586ec743069e8b059370ac +shared+threads arch=linux-ubuntu16.04-x86_64 [+] q4fpyuo ^gdbm@1.14.1%gcc@5.4.0 arch=linux-ubuntu16.04-x86_64 [+] nxhwrg7 ^readline@7.0%gcc@5.4.0 arch=linux-ubuntu16.04-x86_64 [+] 3o765ou ^ncurses@6.1%gcc@5.4.0~symlinks~termlib arch=linux-ubuntu16.04-x86_64 [+] fovrh7a ^pkgconf@1.4.2%gcc@5.4.0 arch=linux-ubuntu16.04-x86_64 [+] rymw7im 
^automake@1.16.1%gcc@5.4.0 arch=linux-ubuntu16.04-x86_64 [+] o2pfwjf ^libtool@2.4.6%gcc@5.4.0 arch=linux-ubuntu16.04-x86_64 [+] zs7a2pc ^texinfo@6.5%gcc@5.4.0 arch=linux-ubuntu16.04-x86_64 [+] 5nus6kn ^zlib@1.2.11%gcc@5.4.0+optimize+pic+shared arch=linux-ubuntu16.04-x86_64 ==> Concretizing gmp [+] qc4qcfz gmp@6.1.2%gcc@5.4.0 arch=linux-ubuntu16.04-x86_64 [+] 3sx2gxe ^autoconf@2.69%gcc@5.4.0 arch=linux-ubuntu16.04-x86_64 [+] suf5jtc ^m4@1.4.18%gcc@5.4.0 patches=3877ab548f88597ab2327a2230ee048d2d07ace1062efe81fc92e91b7f39cd00,c0a408fbffb7255fcc75e26bd8edab116fc81d216bfd18b473668b7739a4158e,fc9b61654a3ba1a8d6cd78ce087e7c96366c290bc8d2c299f09828d793b853c8 +sigsegv arch=linux-ubuntu16.04-x86_64 [+] fypapcp ^libsigsegv@2.11%gcc@5.4.0 arch=linux-ubuntu16.04-x86_64 [+] ic2kyoa ^perl@5.26.2%gcc@5.4.0+cpanm patches=0eac10ed90aeb0459ad8851f88081d439a4e41978e586ec743069e8b059370ac +shared+threads arch=linux-ubuntu16.04-x86_64 [+] q4fpyuo ^gdbm@1.14.1%gcc@5.4.0 arch=linux-ubuntu16.04-x86_64 [+] nxhwrg7 ^readline@7.0%gcc@5.4.0 arch=linux-ubuntu16.04-x86_64 [+] 3o765ou ^ncurses@6.1%gcc@5.4.0~symlinks~termlib arch=linux-ubuntu16.04-x86_64 [+] fovrh7a ^pkgconf@1.4.2%gcc@5.4.0 arch=linux-ubuntu16.04-x86_64 [+] rymw7im ^automake@1.16.1%gcc@5.4.0 arch=linux-ubuntu16.04-x86_64 [+] o2pfwjf ^libtool@2.4.6%gcc@5.4.0 arch=linux-ubuntu16.04-x86_64 ``` Now, all the specs in the environment are concrete and ready to be installed with `mpich` as the MPI implementation. Normally, we could just run `spack config edit`, edit the environment configuration, `spack add` some specs, and `spack install`. But, when we already have installed packages in the environment, we have to force everything in the environment to be re-concretized using `spack concretize -f`. *Then* we can re-run `spack install`. 
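To see why the `providers` preference changed what the environment concretizes, here is a toy model of virtual-package resolution. The provider table and the function are hypothetical illustrations only; Spack's real concretizer weighs many more constraints:

```python
# Illustrative provider table: virtual package -> known providers,
# listed in default-preference order (not Spack's real data).
PROVIDERS = {'mpi': ['openmpi', 'mpich']}

def resolve_virtual(virtual, preferences):
    """Pick a provider for a virtual package, honoring preferences first."""
    candidates = PROVIDERS[virtual]
    for preferred in preferences.get(virtual, []):
        if preferred in candidates:
            return preferred
    return candidates[0]  # fall back to the default provider

# Without any environment configuration, the default provider wins.
print(resolve_virtual('mpi', {}))                  # openmpi
# With the environment's `providers: mpi: [mpich]` setting, mpich wins.
print(resolve_virtual('mpi', {'mpi': ['mpich']}))  # mpich
```

This is also why a forced re-concretization is needed: specs concretized before the preference was added still record the old provider, and only `spack concretize -f` re-runs the resolution for every root.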
### `spack.yaml` and `spack.lock`[¶](#spack-yaml-and-spack-lock) So far we’ve shown you how to interact with environments from the command line, but they also have a file-based interface that can be used by developers and admins to manage workflows for projects. In this section we’ll dive a little deeper to see how environments are implemented, and how you could use this in your day-to-day development. #### `spack.yaml`[¶](#spack-yaml) Earlier, we changed an environment’s configuration using `spack config edit`. We were actually editing a special file called `spack.yaml`. Let’s take a look. We can get directly to the current environment’s location using `spack cd`: ``` $ spack cd -e myproject $ pwd ~/spack/var/spack/environments/myproject $ ls spack.lock spack.yaml ``` We notice two things here. First, the environment is just a directory inside of `var/spack/environments` within the Spack installation. Second, it contains two important files: `spack.yaml` and `spack.lock`. `spack.yaml` is the configuration file for environments that we’ve already seen, but it does not *have* to live inside Spack. If you create an environment using `spack env create`, it is *managed* by Spack in the `var/spack/environments` directory, and you can refer to it by name. You can actually put a `spack.yaml` file *anywhere*, and you can use it to bundle an environment, or a list of dependencies to install, with your project. Let’s make a simple project: ``` $ cd $ mkdir code $ cd code $ spack env create -d . ==> Created environment in ~/code ``` Here, we made a new directory called *code*, and we used the `-d` option to create an environment in it. What really happened? ``` $ ls spack.yaml $ cat spack.yaml # This is a Spack Environment file. # # It describes a set of packages to be installed, along with # configuration settings. spack: # add package specs to the `specs` list specs: [] ``` Spack just created a `spack.yaml` file in the code directory, with an empty list of root specs. 
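Because `spack.yaml` is ordinary YAML, other tooling can read it as well. As a minimal sketch — assuming only the simple flow-style `specs: [...]` form shown in this tutorial, and deliberately avoiding a YAML library dependency — the root specs can be pulled out with the standard library alone:

```python
def root_specs(spack_yaml_text):
    """Extract root specs from a flow-style `specs: [...]` line.

    A toy parser for the minimal spack.yaml files shown in this tutorial;
    a real tool should use a proper YAML library instead.
    """
    for line in spack_yaml_text.splitlines():
        stripped = line.strip()
        if stripped.startswith('specs:') and '[' in stripped:
            inner = stripped[stripped.index('[') + 1:stripped.rindex(']')]
            return [s.strip() for s in inner.split(',') if s.strip()]
    return []

text = """spack:
  specs: [tcl, trilinos, hdf5, gmp]
"""
print(root_specs(text))  # ['tcl', 'trilinos', 'hdf5', 'gmp']
```

The freshly created environment's `specs: []` line parses to an empty list, matching the "no root specs" state that `spack find` reports.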
Now we have a Spack environment, *in a directory*, that we can use to manage dependencies. Suppose your project depends on `boost`, `trilinos`, and `openmpi`. You can add these to your spec list: ``` # This is a Spack Environment file. # # It describes a set of packages to be installed, along with # configuration settings. spack: # add package specs to the `specs` list specs: - boost - trilinos - openmpi ``` And now *anyone* who uses the *code* repository can use this format to install the project’s dependencies. They need only clone the repository, `cd` into it, and type `spack install`: ``` $ spack install ==> Concretizing boost [+] zbgfxap boost@1.68.0%gcc@5.4.0+atomic+chrono~clanglibcpp cxxstd=default +date_time~debug+exception+filesystem+graph~icu+iostreams+locale+log+math~mpi+multithreaded~numpy patches=2ab6c72d03dec6a4ae20220a9dfd5c8c572c5294252155b85c6874d97c323199 +program_options~python+random+regex+serialization+shared+signals~singlethreaded+system~taggedlayout+test+thread+timer~versionedlayout+wave arch=linux-ubuntu16.04-x86_64 [+] ufczdvs ^bzip2@1.0.6%gcc@5.4.0+shared arch=linux-ubuntu16.04-x86_64 [+] 2rhuivg ^diffutils@3.6%gcc@5.4.0 arch=linux-ubuntu16.04-x86_64 [+] 5nus6kn ^zlib@1.2.11%gcc@5.4.0+optimize+pic+shared arch=linux-ubuntu16.04-x86_64 ==> Concretizing trilinos [+] rlsruav trilinos@12.12.1%gcc@5.4.0~alloptpkgs+amesos+amesos2+anasazi+aztec+belos+boost build_type=RelWithDebInfo ~cgns~complex~dtk+epetra+epetraext+exodus+explicit_template_instantiation~float+fortran~fortrilinos+gtest+hdf5+hypre+ifpack+ifpack2~intrepid~intrepid2~isorropia+kokkos+metis~minitensor+ml+muelu+mumps~nox~openmp~phalanx~piro~pnetcdf~python~rol~rythmos+sacado~shards+shared~stk+suite-sparse~superlu~superlu-dist~teko~tempus+teuchos+tpetra~x11~xsdkflags~zlib+zoltan+zoltan2 arch=linux-ubuntu16.04-x86_64 [+] zbgfxap ^boost@1.68.0%gcc@5.4.0+atomic+chrono~clanglibcpp cxxstd=default +date_time~debug+exception+filesystem+graph~icu+iostreams+locale+log+math~mpi+multithreaded~numpy 
patches=2ab6c72d03dec6a4ae20220a9dfd5c8c572c5294252155b85c6874d97c323199 +program_options~python+random+regex+serialization+shared+signals~singlethreaded+system~taggedlayout+test+thread+timer~versionedlayout+wave arch=linux-ubuntu16.04-x86_64 [+] ufczdvs ^bzip2@1.0.6%gcc@5.4.0+shared arch=linux-ubuntu16.04-x86_64 [+] 2rhuivg ^diffutils@3.6%gcc@5.4.0 arch=linux-ubuntu16.04-x86_64 [+] 5nus6kn ^zlib@1.2.11%gcc@5.4.0+optimize+pic+shared arch=linux-ubuntu16.04-x86_64 [+] otafqzh ^cmake@3.12.3%gcc@5.4.0~doc+ncurses+openssl+ownlibs patches=dd3a40d4d92f6b2158b87d6fb354c277947c776424aa03f6dc8096cf3135f5d0 ~qt arch=linux-ubuntu16.04-x86_64 [+] 3o765ou ^ncurses@6.1%gcc@5.4.0~symlinks~termlib arch=linux-ubuntu16.04-x86_64 [+] fovrh7a ^pkgconf@1.4.2%gcc@5.4.0 arch=linux-ubuntu16.04-x86_64 [+] b4y3w3b ^openssl@1.0.2o%gcc@5.4.0+systemcerts arch=linux-ubuntu16.04-x86_64 [+] ic2kyoa ^perl@5.26.2%gcc@5.4.0+cpanm patches=0eac10ed90aeb0459ad8851f88081d439a4e41978e586ec743069e8b059370ac +shared+threads arch=linux-ubuntu16.04-x86_64 [+] q4fpyuo ^gdbm@1.14.1%gcc@5.4.0 arch=linux-ubuntu16.04-x86_64 [+] nxhwrg7 ^readline@7.0%gcc@5.4.0 arch=linux-ubuntu16.04-x86_64 [+] jnw622j ^glm@0.9.7.1%gcc@5.4.0 build_type=RelWithDebInfo arch=linux-ubuntu16.04-x86_64 [+] oqwnui7 ^hdf5@1.10.4%gcc@5.4.0~cxx~debug~fortran+hl+mpi+pic+shared~szip~threadsafe arch=linux-ubuntu16.04-x86_64 [+] 3njc4q5 ^openmpi@3.1.3%gcc@5.4.0~cuda+cxx_exceptions fabrics= ~java~legacylaunchers~memchecker~pmi schedulers= ~sqlite3~thread_multiple+vt arch=linux-ubuntu16.04-x86_64 [+] 43tkw5m ^hwloc@1.11.9%gcc@5.4.0~cairo~cuda+libxml2+pci+shared arch=linux-ubuntu16.04-x86_64 [+] 5urc6tc ^libpciaccess@0.13.5%gcc@5.4.0 arch=linux-ubuntu16.04-x86_64 [+] o2pfwjf ^libtool@2.4.6%gcc@5.4.0 arch=linux-ubuntu16.04-x86_64 [+] suf5jtc ^m4@1.4.18%gcc@5.4.0 
patches=3877ab548f88597ab2327a2230ee048d2d07ace1062efe81fc92e91b7f39cd00,c0a408fbffb7255fcc75e26bd8edab116fc81d216bfd18b473668b7739a4158e,fc9b61654a3ba1a8d6cd78ce087e7c96366c290bc8d2c299f09828d793b853c8 +sigsegv arch=linux-ubuntu16.04-x86_64 [+] fypapcp ^libsigsegv@2.11%gcc@5.4.0 arch=linux-ubuntu16.04-x86_64 [+] milz7fm ^util-macros@1.19.1%gcc@5.4.0 arch=linux-ubuntu16.04-x86_64 [+] wpexsph ^libxml2@2.9.8%gcc@5.4.0~python arch=linux-ubuntu16.04-x86_64 [+] teneqii ^xz@5.2.4%gcc@5.4.0 arch=linux-ubuntu16.04-x86_64 [+] ft463od ^numactl@2.0.11%gcc@5.4.0 patches=592f30f7f5f757dfc239ad0ffd39a9a048487ad803c26b419e0f96b8cda08c1a arch=linux-ubuntu16.04-x86_64 [+] 3sx2gxe ^autoconf@2.69%gcc@5.4.0 arch=linux-ubuntu16.04-x86_64 [+] rymw7im ^automake@1.16.1%gcc@5.4.0 arch=linux-ubuntu16.04-x86_64 [+] fshksdp ^hypre@2.15.1%gcc@5.4.0~debug~int64~internal-superlu+mpi+shared arch=linux-ubuntu16.04-x86_64 [+] cyeg2yi ^openblas@0.3.3%gcc@5.4.0 cpu_target= ~ilp64 patches=47cfa7a952ac7b2e4632c73ae199d69fb54490627b66a62c681e21019c4ddc9d,714aea33692304a50bd0ccde42590c176c82ded4a8ac7f06e573dc8071929c33 +pic+shared threads=none ~virtual_machine arch=linux-ubuntu16.04-x86_64 [+] lmzdgss ^matio@1.5.9%gcc@5.4.0+hdf5+shared+zlib arch=linux-ubuntu16.04-x86_64 [+] 3wnvp4j ^metis@5.1.0%gcc@5.4.0 build_type=Release ~gdb~int64 patches=4991da938c1d3a1d3dea78e49bbebecba00273f98df2a656e38b83d55b281da1 ~real64+shared arch=linux-ubuntu16.04-x86_64 [+] acsg2dz ^mumps@5.1.1%gcc@5.4.0+complex+double+float~int64~metis+mpi~parmetis~ptscotch~scotch+shared arch=linux-ubuntu16.04-x86_64 [+] wotpfwf ^netlib-scalapack@2.0.2%gcc@5.4.0 build_type=RelWithDebInfo ~pic+shared arch=linux-ubuntu16.04-x86_64 [+] mhm4izp ^netcdf@4.6.1%gcc@5.4.0~dap~hdf4 maxdims=1024 maxvars=8192 +mpi~parallel-netcdf+shared arch=linux-ubuntu16.04-x86_64 [+] uv6h3sq ^parmetis@4.0.3%gcc@5.4.0 build_type=RelWithDebInfo ~gdb 
patches=4f892531eb0a807eb1b82e683a416d3e35154a455274cf9b162fb02054d11a5b,50ed2081bc939269689789942067c58b3e522c269269a430d5d34c00edbc5870,704b84f7c7444d4372cb59cca6e1209df4ef3b033bc4ee3cf50f369bce972a9d +shared arch=linux-ubuntu16.04-x86_64 [+] zaau4ki ^suite-sparse@5.3.0%gcc@5.4.0~cuda~openmp+pic~tbb arch=linux-ubuntu16.04-x86_64 ==> Concretizing openmpi [+] 3njc4q5 openmpi@3.1.3%gcc@5.4.0~cuda+cxx_exceptions fabrics= ~java~legacylaunchers~memchecker~pmi schedulers= ~sqlite3~thread_multiple+vt arch=linux-ubuntu16.04-x86_64 [+] 43tkw5m ^hwloc@1.11.9%gcc@5.4.0~cairo~cuda+libxml2+pci+shared arch=linux-ubuntu16.04-x86_64 [+] 5urc6tc ^libpciaccess@0.13.5%gcc@5.4.0 arch=linux-ubuntu16.04-x86_64 [+] o2pfwjf ^libtool@2.4.6%gcc@5.4.0 arch=linux-ubuntu16.04-x86_64 [+] suf5jtc ^m4@1.4.18%gcc@5.4.0 patches=3877ab548f88597ab2327a2230ee048d2d07ace1062efe81fc92e91b7f39cd00,c0a408fbffb7255fcc75e26bd8edab116fc81d216bfd18b473668b7739a4158e,fc9b61654a3ba1a8d6cd78ce087e7c96366c290bc8d2c299f09828d793b853c8 +sigsegv arch=linux-ubuntu16.04-x86_64 [+] fypapcp ^libsigsegv@2.11%gcc@5.4.0 arch=linux-ubuntu16.04-x86_64 [+] fovrh7a ^pkgconf@1.4.2%gcc@5.4.0 arch=linux-ubuntu16.04-x86_64 [+] milz7fm ^util-macros@1.19.1%gcc@5.4.0 arch=linux-ubuntu16.04-x86_64 [+] wpexsph ^libxml2@2.9.8%gcc@5.4.0~python arch=linux-ubuntu16.04-x86_64 [+] teneqii ^xz@5.2.4%gcc@5.4.0 arch=linux-ubuntu16.04-x86_64 [+] 5nus6kn ^zlib@1.2.11%gcc@5.4.0+optimize+pic+shared arch=linux-ubuntu16.04-x86_64 [+] ft463od ^numactl@2.0.11%gcc@5.4.0 patches=592f30f7f5f757dfc239ad0ffd39a9a048487ad803c26b419e0f96b8cda08c1a arch=linux-ubuntu16.04-x86_64 [+] 3sx2gxe ^autoconf@2.69%gcc@5.4.0 arch=linux-ubuntu16.04-x86_64 [+] ic2kyoa ^perl@5.26.2%gcc@5.4.0+cpanm patches=0eac10ed90aeb0459ad8851f88081d439a4e41978e586ec743069e8b059370ac +shared+threads arch=linux-ubuntu16.04-x86_64 [+] q4fpyuo ^gdbm@1.14.1%gcc@5.4.0 arch=linux-ubuntu16.04-x86_64 [+] nxhwrg7 ^readline@7.0%gcc@5.4.0 arch=linux-ubuntu16.04-x86_64 [+] 3o765ou 
^ncurses@6.1%gcc@5.4.0~symlinks~termlib arch=linux-ubuntu16.04-x86_64 [+] rymw7im ^automake@1.16.1%gcc@5.4.0 arch=linux-ubuntu16.04-x86_64 ==> Installing environment ~/code ==> boost is already installed in ~/spack/opt/spack/linux-ubuntu16.04-x86_64/gcc-5.4.0/boost-1.68.0-zbgfxapchxa4awxdwpleubfuznblxzvt ==> trilinos is already installed in ~/spack/opt/spack/linux-ubuntu16.04-x86_64/gcc-5.4.0/trilinos-12.12.1-rlsruavxqvwk2tgxzxboclbo6ykjf54r ==> openmpi is already installed in ~/spack/opt/spack/linux-ubuntu16.04-x86_64/gcc-5.4.0/openmpi-3.1.3-3njc4q5pqdpptq6jvqjrezkffwokv2sx ``` Spack concretizes the specs in the `spack.yaml` file and installs them. What happened here? If you `cd` into a directory that has a `spack.yaml` file in it, Spack considers this directory’s environment to be activated. The directory does not have to live within Spack; it can be anywhere. So, from `~/code`, we can actually manipulate `spack.yaml` using `spack add` and `spack remove` (just like managed environments): ``` $ spack add hdf5@5.5.1 ==> Adding hdf5 to environment ~/code $ cat spack.yaml # This is a Spack Environment file. # # It describes a set of packages to be installed, along with # configuration settings. spack: # add package specs to the `specs` list specs: - boost - trilinos - openmpi - hdf5@5.5.1 $ spack remove hdf5 ==> Removing hdf5 from environment ~/code $ cat spack.yaml # This is a Spack Environment file. # # It describes a set of packages to be installed, along with # configuration settings. spack: # add package specs to the `specs` list specs: - boost - trilinos - openmpi ``` #### `spack.lock`[¶](#spack-lock) Okay, we’ve covered managed environments, environments in directories, and the last thing we’ll cover is `spack.lock`. You may remember that when we ran `spack install`, Spack concretized all the specs in the `spack.yaml` file and installed them. 
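The concrete specs produced by that concretization are recorded in a `spack.lock` file, which is plain JSON and therefore easy to inspect programmatically. A minimal sketch (the `installed_packages` helper is hypothetical, not part of Spack; it assumes the `concrete_specs` mapping keyed by spec hash that `spack.lock` uses):

```python
import json

def installed_packages(lock_text):
    """List (name, version) pairs from a spack.lock's concrete_specs.

    Hypothetical helper: assumes each entry in 'concrete_specs' is keyed
    by the spec hash and maps a package name to its attributes.
    """
    lock = json.loads(lock_text)
    pkgs = []
    for spec_hash, entry in lock.get("concrete_specs", {}).items():
        for name, attrs in entry.items():
            pkgs.append((name, attrs.get("version")))
    return sorted(pkgs)

# A fragment in the shape spack.lock uses:
example = """
{
  "concrete_specs": {
    "teneqii2xv5u6zl5r6qi3pwurc6pmypz": {
      "xz": {
        "version": "5.2.4",
        "arch": {"platform": "linux", "platform_os": "ubuntu16.04", "target": "x86_64"}
      }
    }
  }
}
"""
print(installed_packages(example))  # [('xz', '5.2.4')]
```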
Whenever we concretize Specs in an environment, all concrete specs in the environment are written out to a `spack.lock` file *alongside* `spack.yaml`. The `spack.lock` file is not really human-readable like the `spack.yaml` file. It is a `json` format that contains all the information that we need to *reproduce* the build of an environment: ``` $ head spack.lock { "concrete_specs": { "teneqii2xv5u6zl5r6qi3pwurc6pmypz": { "xz": { "version": "5.2.4", "arch": { "platform": "linux", "platform_os": "ubuntu16.04", "target": "x86_64" }, ... ``` `spack.yaml` and `spack.lock` correspond to two fundamental concepts in Spack, but for environments: > * `spack.yaml` is the set of *abstract* specs and configuration that > you want to install. > * `spack.lock` is the set of all fully *concretized* specs generated > from concretizing `spack.yaml` Using either of these, you can recreate an environment that someone else built. `spack env create` takes an extra optional argument, which can be either a `spack.yaml` or a `spack.lock` file: ``` $ spack env create my-project spack.yaml $ spack env create my-project spack.lock ``` Both of these create a new environment called `my-project`, but which one you choose to use depends on your needs: 1. copying the yaml file allows someone else to build your *requirements*, potentially a different way. 2. copying the lock file allows someone else to rebuild your *installation* exactly as you built it. The first use case can *re-concretize* the same specs on new platforms in order to build, but it will preserve the abstract requirements. The second use case (currently) requires you to be on the same machine, but it retains all decisions made during concretization and is faithful to a prior install. Module Files[¶](#module-files) --- In this tutorial, we’ll introduce a few concepts that are fundamental to the generation of module files with Spack, and we’ll guide you through the customization of both module files content and their layout on disk. 
In the end you should have a clear understanding of:

> * What module files are and how they work
> * How Spack generates them
> * Which commands are available to ease their maintenance
> * How to customize them in every aspect

### Modules at a Glance[¶](#modules-at-a-glance)

Let's start by summarizing what module files are and how you can use them to modify your environment. The idea is to give enough information so that people without any previous exposure to them will be able to follow the tutorial later on. We'll also give a high-level view of how module files are generated in Spack. If you are already familiar with these topics you can quickly skim through this section or move directly to [Setup for the Tutorial](#module-file-tutorial-prerequisites).

#### What are module files?[¶](#what-are-module-files)

Module files are an easy way to modify your environment in a controlled manner during a shell session. In general, they contain the information needed to run an application or use a library, and they work in conjunction with a tool that interprets them. Typical module files instruct this tool to modify environment variables when a module file is loaded: > ``` > $ module show zlib > --- > /home/mculpo/PycharmProjects/spack/share/spack/modules/linux-ubuntu14.04-x86_64/zlib/1.2.11-gcc-7.2.0-linux-ubuntu14.04-x86_64-co2px3k: > module-whatis A free, general-purpose, legally unencumbered lossless data-compression library.
> prepend-path MANPATH /home/mculpo/PycharmProjects/spack/opt/spack/linux-ubuntu14.04-x86_64/gcc-7.2.0/zlib-1.2.11-co2px3k53m76lm6tofylh2mur2hnicux/share/man > prepend-path LIBRARY_PATH /home/mculpo/PycharmProjects/spack/opt/spack/linux-ubuntu14.04-x86_64/gcc-7.2.0/zlib-1.2.11-co2px3k53m76lm6tofylh2mur2hnicux/lib > prepend-path LD_LIBRARY_PATH /home/mculpo/PycharmProjects/spack/opt/spack/linux-ubuntu14.04-x86_64/gcc-7.2.0/zlib-1.2.11-co2px3k53m76lm6tofylh2mur2hnicux/lib > prepend-path CPATH /home/mculpo/PycharmProjects/spack/opt/spack/linux-ubuntu14.04-x86_64/gcc-7.2.0/zlib-1.2.11-co2px3k53m76lm6tofylh2mur2hnicux/include > prepend-path PKG_CONFIG_PATH /home/mculpo/PycharmProjects/spack/opt/spack/linux-ubuntu14.04-x86_64/gcc-7.2.0/zlib-1.2.11-co2px3k53m76lm6tofylh2mur2hnicux/lib/pkgconfig > prepend-path CMAKE_PREFIX_PATH /home/mculpo/PycharmProjects/spack/opt/spack/linux-ubuntu14.04-x86_64/gcc-7.2.0/zlib-1.2.11-co2px3k53m76lm6tofylh2mur2hnicux/ > --- > $ echo $LD_LIBRARY_PATH > $ module load zlib > $ echo $LD_LIBRARY_PATH > /home/mculpo/PycharmProjects/spack/opt/spack/linux-ubuntu14.04-x86_64/gcc-7.2.0/zlib-1.2.11-co2px3k53m76lm6tofylh2mur2hnicux/lib > ``` and to undo the modifications when the same module file is unloaded: > ``` > $ module unload zlib > $ echo $LD_LIBRARY_PATH > $ > ``` Different formats exist for module files, and different tools provide various levels of support for them. Spack can natively generate:

1. Non-hierarchical module files written in TCL
2. Hierarchical module files written in Lua

and can build [environment-modules](http://modules.sourceforge.net/) and [lmod](http://lmod.readthedocs.io/en/latest) as support tools. Which of the formats or tools best suits one's needs depends on each particular use case. For the sake of illustration, we'll be working with both formats using `lmod`.

See also: Environment Modules. This is the original tool that provided modules support.
Its first version was coded in C in the early '90s and was later superseded by a version completely coded in TCL, the one Spack is distributing. More details on its features are given in the [homepage of the project](http://modules.sourceforge.net/) or in its [github page](https://github.com/cea-hpc/modules). The tool is able to interpret the non-hierarchical TCL modulefiles written by Spack.

Lmod is a module system written in Lua, designed to easily handle hierarchies of module files. It's a drop-in replacement for Environment Modules and works with both of the module file formats generated by Spack. Despite being fully compatible with Environment Modules, Lmod has many unique features. These features are either [targeted towards safety](http://lmod.readthedocs.io/en/latest/010_user.html#safety-features) or meant to [extend the module system functionality](http://lmod.readthedocs.io/en/latest/010_user.html#module-hierarchy).

#### How do we generate module files?[¶](#how-do-we-generate-module-files)

Before we dive into the hands-on sections, it's worth spending a few words on how module files are generated by Spack. The following diagram provides a high-level view of the process (the red dashed line represents Spack's boundaries, the blue one Spack's dependencies [1](#f1)). Module files are generated by combining:

> * the configuration details in `config.yaml` and `modules.yaml`
> * the information contained in Spack packages (and processed by the module subpackage)
> * a set of template files with [Jinja2](http://jinja.pocoo.org/docs/2.9/), an external template engine that stamps out each particular module file.

As Spack serves very diverse needs, this process has many points of customization, and we'll explore most of them in the next sections.

[1](#id1) Spack vendors its dependencies! This means that Spack comes with a copy of each one of its dependencies, including `Jinja2`, and is already configured to use them.
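The template-stamping step can be made concrete with a toy example. Spack's real templates use Jinja2; the sketch below substitutes Python's stdlib `string.Template` so it stays dependency-free, and the template text and install prefix are invented for illustration:

```python
from string import Template

# Toy stand-in for Spack's Jinja2 templates: a module file is "stamped
# out" by filling a template with data gathered from the concrete spec
# and the configuration files.
MODULE_TEMPLATE = Template("""\
module-whatis $description
prepend-path PATH $prefix/bin
prepend-path LD_LIBRARY_PATH $prefix/lib
""")

spec_data = {
    "description": "A free, general-purpose, legally unencumbered "
                   "lossless data-compression library.",
    "prefix": "/opt/spack/linux-ubuntu16.04-x86_64/gcc-5.4.0/zlib-1.2.11-5nus6kn",
}

print(MODULE_TEMPLATE.substitute(spec_data))
```

Swapping the template (or the inspection rules that feed `spec_data`) changes every generated module file at once, which is why the rest of this tutorial is mostly about configuration rather than editing individual files.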
### Setup for the Tutorial[¶](#setup-for-the-tutorial)

In order to showcase the capabilities of Spack's module file generation, we need a representative set of software to work with. This set must include different flavors of the same packages installed alongside each other and some [external packages](https://spack.readthedocs.io/en/latest/build_settings.html#sec-external-packages). The purpose of this setup is not to make our lives harder, but to demonstrate how Spack can help in situations like these, which do arise on real HPC clusters. For instance, it's often preferable for Spack to use vendor-provided MPI implementations rather than to build one itself. To keep the set of software we're dealing with manageable, we're going to uninstall everything from earlier in the tutorial.

#### Build a module tool[¶](#build-a-module-tool)

The first thing that we need is the module tool. In this case we choose `lmod` as it can work with both hierarchical and non-hierarchical module file layouts.

```
$ bin/spack install lmod
```

Once the module tool is installed we need to have it available in the current shell. As the installation directories are not easy to remember, we'll use the command `spack location` to retrieve the `lmod` prefix directly from Spack:

```
$ . $(spack location -i lmod)/lmod/lmod/init/bash
```

Now we can re-source the setup file, and Spack modules will be put in our module path:

```
$ . share/spack/setup-env.sh
```

#### Add a new compiler[¶](#add-a-new-compiler)

The second step is to build a recent compiler. On first use, Spack scans the environment and automatically locates the compiler(s) already available on the system. For this tutorial, however, we want to use `gcc@7.2.0`.

```
$ spack install gcc@7.2.0
... Wait a long time ...
```

Once `gcc` is installed we can use shell support to load it and make it readily available:

```
$ spack load gcc@7.2.0
```

It may not be apparent, but the last command employed the module files generated automatically by Spack. What happens under the hood when you use the `spack load` command is:

1. the spec passed as an argument is translated into a module file name
2. the current module tool is used to load that module file

You can use this command to double-check:

```
$ module list

Currently Loaded Modules:
  1) gcc-7.2.0-gcc-5.4.0-b7smjjc
```

Note that the 7-digit hash at the end of the generated module may vary depending on architecture or package version. Now that we have `gcc@7.2.0` in `PATH` we can finally add it to the list of compilers known to Spack:

```
$ spack compiler add
==> Added 1 new compiler to /home/spack1/.spack/linux/compilers.yaml
    gcc@7.2.0
==> Compilers are defined in the following files:
    /home/spack1/.spack/linux/compilers.yaml
$ spack compiler list
==> Available compilers
-- clang ubuntu16.04-x86_64 ---
clang@3.8.0-2ubuntu4  clang@3.7.1-2ubuntu2
-- gcc ubuntu16.04-x86_64 ---
gcc@7.2.0  gcc@5.4.0  gcc@4.7
```

#### Build the software that will be used in the tutorial[¶](#build-the-software-that-will-be-used-in-the-tutorial)

Finally, we should use Spack to install the packages used in the examples:

```
$ spack install netlib-scalapack ^openmpi ^openblas
$ spack install netlib-scalapack ^mpich ^openblas
$ spack install netlib-scalapack ^openmpi ^netlib-lapack
$ spack install netlib-scalapack ^mpich ^netlib-lapack
$ spack install py-scipy ^openblas
```

### Non-hierarchical Module Files[¶](#non-hierarchical-module-files)

If you have arrived at this point you should have an environment that looks similar to: ``` $ module avail --- /home/spack1/spack/share/spack/modules/linux-ubuntu16.04-x86_64 --- autoconf-2.69-gcc-5.4.0-3sx2gxe libsigsegv-2.11-gcc-7.2.0-g67xpfd openssl-1.0.2o-gcc-5.4.0-b4y3w3b autoconf-2.69-gcc-7.2.0-yb2makb
libtool-2.4.6-gcc-5.4.0-o2pfwjf openssl-1.0.2o-gcc-7.2.0-cvldq3v automake-1.16.1-gcc-5.4.0-rymw7im libtool-2.4.6-gcc-7.2.0-kt2udm6 pcre-8.42-gcc-5.4.0-gt5lgzi automake-1.16.1-gcc-7.2.0-qoowd5q libxml2-2.9.8-gcc-5.4.0-wpexsph perl-5.26.2-gcc-5.4.0-ic2kyoa bzip2-1.0.6-gcc-5.4.0-ufczdvs libxml2-2.9.8-gcc-7.2.0-47gf5kk perl-5.26.2-gcc-7.2.0-fdwz5yu bzip2-1.0.6-gcc-7.2.0-mwamumj lmod-7.8-gcc-5.4.0-kmhks3p pkgconf-1.4.2-gcc-5.4.0-fovrh7a cmake-3.12.3-gcc-7.2.0-obqgn2v lua-5.3.4-gcc-5.4.0-cpfeo2w pkgconf-1.4.2-gcc-7.2.0-yoxwmgb curl-7.60.0-gcc-5.4.0-vzqreb2 lua-luafilesystem-1_6_3-gcc-5.4.0-alakjim py-numpy-1.15.2-gcc-7.2.0-wbwtcxf diffutils-3.6-gcc-5.4.0-2rhuivg lua-luaposix-33.4.0-gcc-5.4.0-7wqhwoc py-scipy-1.1.0-gcc-7.2.0-d5n3cph diffutils-3.6-gcc-7.2.0-eauxwi7 m4-1.4.18-gcc-5.4.0-suf5jtc py-setuptools-40.4.3-gcc-7.2.0-5dbwfwn expat-2.2.5-gcc-5.4.0-emyv67q m4-1.4.18-gcc-7.2.0-wdzvagl python-2.7.15-gcc-7.2.0-ucmr2mn findutils-4.6.0-gcc-7.2.0-ca4b7zq mpc-1.1.0-gcc-5.4.0-iuf3gc3 readline-7.0-gcc-5.4.0-nxhwrg7 gcc-7.2.0-gcc-5.4.0-b7smjjc (L) mpfr-3.1.6-gcc-5.4.0-jnt2nnp readline-7.0-gcc-7.2.0-ccruj2i gdbm-1.14.1-gcc-5.4.0-q4fpyuo mpich-3.2.1-gcc-7.2.0-vt5xcat sqlite-3.23.1-gcc-7.2.0-5ltus3a gdbm-1.14.1-gcc-7.2.0-zk5lhob ncurses-6.1-gcc-5.4.0-3o765ou tar-1.30-gcc-5.4.0-dk7lrpo gettext-0.19.8.1-gcc-5.4.0-tawgous ncurses-6.1-gcc-7.2.0-xcgzqdv tcl-8.6.8-gcc-5.4.0-qhwyccy git-2.19.1-gcc-5.4.0-p3gjnfa netlib-lapack-3.8.0-gcc-7.2.0-fj7nayd texinfo-6.5-gcc-7.2.0-cuqnfgf gmp-6.1.2-gcc-5.4.0-qc4qcfz netlib-scalapack-2.0.2-gcc-7.2.0-67nmj7g unzip-6.0-gcc-5.4.0-ba23fbg hwloc-1.11.9-gcc-7.2.0-gbyc65s netlib-scalapack-2.0.2-gcc-7.2.0-6jgjbyg util-macros-1.19.1-gcc-7.2.0-t62kozq isl-0.18-gcc-5.4.0-vttqout netlib-scalapack-2.0.2-gcc-7.2.0-prgo67d xz-5.2.4-gcc-5.4.0-teneqii libbsd-0.8.6-gcc-5.4.0-f4qkkwm netlib-scalapack-2.0.2-gcc-7.2.0-zxpt252 xz-5.2.4-gcc-7.2.0-rql5kog libiconv-1.15-gcc-5.4.0-u2x3umv numactl-2.0.11-gcc-7.2.0-rifwktk zlib-1.2.11-gcc-5.4.0-5nus6kn 
libpciaccess-0.13.5-gcc-7.2.0-riipwi2 openblas-0.3.3-gcc-7.2.0-xxoxfh4 zlib-1.2.11-gcc-7.2.0-ezuwp4p libsigsegv-2.11-gcc-5.4.0-fypapcp openmpi-3.1.3-gcc-7.2.0-do5xfer Where: L: Module is loaded Use "module spider" to find all possible modules. Use "module keyword key1 key2 ..." to search for all possible modules matching any of the "keys". ``` The non-hierarchical module files that have been generated so far follow [the default rules for module generation](https://spack.readthedocs.io/en/latest/module_file_support.html#modules-yaml). Taking a look at the `gcc` module you’ll see, for example: ``` $ module show gcc-7.2.0-gcc-5.4.0-b7smjjc --- /home/spack1/spack/share/spack/modules/linux-ubuntu16.04-x86_64/gcc-7.2.0-gcc-5.4.0-b7smjjc: --- whatis("The GNU Compiler Collection includes front ends for C, C++, Objective-C, Fortran, Ada, and Go, as well as libraries for these languages. ") prepend_path("PATH","/home/spack1/spack/opt/spack/linux-ubuntu16.04-x86_64/gcc-5.4.0/gcc-7.2.0-b7smjjcsmwe5u5fcsvjmonlhlzzctnfs/bin") prepend_path("MANPATH","/home/spack1/spack/opt/spack/linux-ubuntu16.04-x86_64/gcc-5.4.0/gcc-7.2.0-b7smjjcsmwe5u5fcsvjmonlhlzzctnfs/share/man") prepend_path("LD_LIBRARY_PATH","/home/spack1/spack/opt/spack/linux-ubuntu16.04-x86_64/gcc-5.4.0/gcc-7.2.0-b7smjjcsmwe5u5fcsvjmonlhlzzctnfs/lib") prepend_path("LIBRARY_PATH","/home/spack1/spack/opt/spack/linux-ubuntu16.04-x86_64/gcc-5.4.0/gcc-7.2.0-b7smjjcsmwe5u5fcsvjmonlhlzzctnfs/lib") prepend_path("LD_LIBRARY_PATH","/home/spack1/spack/opt/spack/linux-ubuntu16.04-x86_64/gcc-5.4.0/gcc-7.2.0-b7smjjcsmwe5u5fcsvjmonlhlzzctnfs/lib64") prepend_path("LIBRARY_PATH","/home/spack1/spack/opt/spack/linux-ubuntu16.04-x86_64/gcc-5.4.0/gcc-7.2.0-b7smjjcsmwe5u5fcsvjmonlhlzzctnfs/lib64") prepend_path("CPATH","/home/spack1/spack/opt/spack/linux-ubuntu16.04-x86_64/gcc-5.4.0/gcc-7.2.0-b7smjjcsmwe5u5fcsvjmonlhlzzctnfs/include") 
prepend_path("CMAKE_PREFIX_PATH","/home/spack1/spack/opt/spack/linux-ubuntu16.04-x86_64/gcc-5.4.0/gcc-7.2.0-b7smjjcsmwe5u5fcsvjmonlhlzzctnfs/") setenv("CC","/home/spack1/spack/opt/spack/linux-ubuntu16.04-x86_64/gcc-5.4.0/gcc-7.2.0-b7smjjcsmwe5u5fcsvjmonlhlzzctnfs/bin/gcc") setenv("CXX","/home/spack1/spack/opt/spack/linux-ubuntu16.04-x86_64/gcc-5.4.0/gcc-7.2.0-b7smjjcsmwe5u5fcsvjmonlhlzzctnfs/bin/g++") setenv("FC","/home/spack1/spack/opt/spack/linux-ubuntu16.04-x86_64/gcc-5.4.0/gcc-7.2.0-b7smjjcsmwe5u5fcsvjmonlhlzzctnfs/bin/gfortran") setenv("F77","/home/spack1/spack/opt/spack/linux-ubuntu16.04-x86_64/gcc-5.4.0/gcc-7.2.0-b7smjjcsmwe5u5fcsvjmonlhlzzctnfs/bin/gfortran") setenv("F90","/home/spack1/spack/opt/spack/linux-ubuntu16.04-x86_64/gcc-5.4.0/gcc-7.2.0-b7smjjcsmwe5u5fcsvjmonlhlzzctnfs/bin/gfortran") help([[The GNU Compiler Collection includes front ends for C, C++, Objective-C, Fortran, Ada, and Go, as well as libraries for these languages. ]]) ``` As expected, a few environment variables representing paths will be modified by the module file according to the default prefix inspection rules. #### Filter unwanted modifications to the environment[¶](#filter-unwanted-modifications-to-the-environment) Now consider the case that your site has decided that `CPATH` and `LIBRARY_PATH` modifications should not be present in module files. 
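Conceptually, the goal is a filter applied to the environment modifications a module file would otherwise make. A toy sketch of the idea (hypothetical code, not Spack's implementation; the paths are invented):

```python
def filter_env_ops(ops, blacklist):
    """Drop environment modifications whose variable name is blacklisted.

    Toy illustration only: Spack applies this kind of filter while
    rendering a module file, driven by `environment_blacklist`.
    """
    return [op for op in ops if op[1] not in blacklist]

ops = [
    ("prepend-path", "PATH", "/opt/zlib/bin"),
    ("prepend-path", "CPATH", "/opt/zlib/include"),
    ("prepend-path", "LIBRARY_PATH", "/opt/zlib/lib"),
]
print(filter_env_ops(ops, {"CPATH", "LIBRARY_PATH"}))
# [('prepend-path', 'PATH', '/opt/zlib/bin')]
```

In Spack this policy is expressed in configuration rather than code, as the next step shows.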
What you can do to abide by the rules is to create a configuration file `~/.spack/modules.yaml` with the following content: ``` modules: tcl: all: filter: environment_blacklist: ['CPATH', 'LIBRARY_PATH'] ``` Next you should regenerate all the module files: ``` $ spack module tcl refresh ==> You are about to regenerate tcl module files for: -- linux-ubuntu16.04-x86_64 / gcc@5.4.0 --- 3sx2gxe autoconf@2.69 b7smjjc gcc@7.2.0 f4qkkwm libbsd@0.8.6 cpfeo2w lua@5.3.4 3o765ou ncurses@6.1 dk7lrpo tar@1.30 rymw7im automake@1.16.1 q4fpyuo gdbm@1.14.1 u2x3umv libiconv@1.15 alakjim lua-luafilesystem@1_6_3 b4y3w3b openssl@1.0.2o qhwyccy tcl@8.6.8 ufczdvs bzip2@1.0.6 tawgous gettext@0.19.8.1 fypapcp libsigsegv@2.11 7wqhwoc lua-luaposix@33.4.0 gt5lgzi pcre@8.42 ba23fbg unzip@6.0 vzqreb2 curl@7.60.0 p3gjnfa git@2.19.1 o2pfwjf libtool@2.4.6 suf5jtc m4@1.4.18 ic2kyoa perl@5.26.2 teneqii xz@5.2.4 2rhuivg diffutils@3.6 qc4qcfz gmp@6.1.2 wpexsph libxml2@2.9.8 iuf3gc3 mpc@1.1.0 fovrh7a pkgconf@1.4.2 5nus6kn zlib@1.2.11 emyv67q expat@2.2.5 vttqout isl@0.18 kmhks3p lmod@7.8 jnt2nnp mpfr@3.1.6 nxhwrg7 readline@7.0 -- linux-ubuntu16.04-x86_64 / gcc@7.2.0 --- yb2makb autoconf@2.69 riipwi2 libpciaccess@0.13.5 6jgjbyg netlib-scalapack@2.0.2 fdwz5yu perl@5.26.2 cuqnfgf texinfo@6.5 qoowd5q automake@1.16.1 g67xpfd libsigsegv@2.11 zxpt252 netlib-scalapack@2.0.2 yoxwmgb pkgconf@1.4.2 t62kozq util-macros@1.19.1 mwamumj bzip2@1.0.6 kt2udm6 libtool@2.4.6 67nmj7g netlib-scalapack@2.0.2 wbwtcxf py-numpy@1.15.2 rql5kog xz@5.2.4 obqgn2v cmake@3.12.3 47gf5kk libxml2@2.9.8 prgo67d netlib-scalapack@2.0.2 d5n3cph py-scipy@1.1.0 ezuwp4p zlib@1.2.11 eauxwi7 diffutils@3.6 wdzvagl m4@1.4.18 rifwktk numactl@2.0.11 5dbwfwn py-setuptools@40.4.3 ca4b7zq findutils@4.6.0 vt5xcat mpich@3.2.1 xxoxfh4 openblas@0.3.3 ucmr2mn python@2.7.15 zk5lhob gdbm@1.14.1 xcgzqdv ncurses@6.1 do5xfer openmpi@3.1.3 ccruj2i readline@7.0 gbyc65s hwloc@1.11.9 fj7nayd netlib-lapack@3.8.0 cvldq3v openssl@1.0.2o 5ltus3a sqlite@3.23.1 ==> Do you 
want to proceed? [y/n] y ==> Regenerating tcl module files ``` If you take a look now at the module for `gcc` you’ll see that the unwanted paths have disappeared: ``` $ module show gcc-7.2.0-gcc-5.4.0-b7smjjc --- /home/spack1/spack/share/spack/modules/linux-ubuntu16.04-x86_64/gcc-7.2.0-gcc-5.4.0-b7smjjc: --- whatis("The GNU Compiler Collection includes front ends for C, C++, Objective-C, Fortran, Ada, and Go, as well as libraries for these languages. ") prepend_path("PATH","/home/spack1/spack/opt/spack/linux-ubuntu16.04-x86_64/gcc-5.4.0/gcc-7.2.0-b7smjjcsmwe5u5fcsvjmonlhlzzctnfs/bin") prepend_path("MANPATH","/home/spack1/spack/opt/spack/linux-ubuntu16.04-x86_64/gcc-5.4.0/gcc-7.2.0-b7smjjcsmwe5u5fcsvjmonlhlzzctnfs/share/man") prepend_path("LD_LIBRARY_PATH","/home/spack1/spack/opt/spack/linux-ubuntu16.04-x86_64/gcc-5.4.0/gcc-7.2.0-b7smjjcsmwe5u5fcsvjmonlhlzzctnfs/lib") prepend_path("LD_LIBRARY_PATH","/home/spack1/spack/opt/spack/linux-ubuntu16.04-x86_64/gcc-5.4.0/gcc-7.2.0-b7smjjcsmwe5u5fcsvjmonlhlzzctnfs/lib64") prepend_path("CMAKE_PREFIX_PATH","/home/spack1/spack/opt/spack/linux-ubuntu16.04-x86_64/gcc-5.4.0/gcc-7.2.0-b7smjjcsmwe5u5fcsvjmonlhlzzctnfs/") setenv("CC","/home/spack1/spack/opt/spack/linux-ubuntu16.04-x86_64/gcc-5.4.0/gcc-7.2.0-b7smjjcsmwe5u5fcsvjmonlhlzzctnfs/bin/gcc") setenv("CXX","/home/spack1/spack/opt/spack/linux-ubuntu16.04-x86_64/gcc-5.4.0/gcc-7.2.0-b7smjjcsmwe5u5fcsvjmonlhlzzctnfs/bin/g++") setenv("FC","/home/spack1/spack/opt/spack/linux-ubuntu16.04-x86_64/gcc-5.4.0/gcc-7.2.0-b7smjjcsmwe5u5fcsvjmonlhlzzctnfs/bin/gfortran") setenv("F77","/home/spack1/spack/opt/spack/linux-ubuntu16.04-x86_64/gcc-5.4.0/gcc-7.2.0-b7smjjcsmwe5u5fcsvjmonlhlzzctnfs/bin/gfortran") setenv("F90","/home/spack1/spack/opt/spack/linux-ubuntu16.04-x86_64/gcc-5.4.0/gcc-7.2.0-b7smjjcsmwe5u5fcsvjmonlhlzzctnfs/bin/gfortran") help([[The GNU Compiler Collection includes front ends for C, C++, Objective-C, Fortran, Ada, and Go, as well as libraries for these languages. 
]]) ``` #### Prevent some module files from being generated[¶](#prevent-some-module-files-from-being-generated) Another common request at many sites is to avoid exposing software that is only needed as an intermediate step when building a newer stack. Let’s try to prevent the generation of module files for anything that is compiled with `gcc@5.4.0` (the OS provided compiler). To do this you should add a `blacklist` keyword to `~/.spack/modules.yaml`: ``` modules: tcl: blacklist: - '%gcc@5.4.0' all: filter: environment_blacklist: ['CPATH', 'LIBRARY_PATH'] ``` and regenerate the module files: This time it is convenient to pass the option `--delete-tree` to the command that regenerates the module files to instruct it to delete the existing tree and regenerate a new one instead of overwriting the files in the existing directory. ``` $ spack module tcl refresh --delete-tree ==> You are about to regenerate tcl module files for: -- linux-ubuntu16.04-x86_64 / gcc@5.4.0 --- 3sx2gxe autoconf@2.69 b7smjjc gcc@7.2.0 f4qkkwm libbsd@0.8.6 cpfeo2w lua@5.3.4 3o765ou ncurses@6.1 dk7lrpo tar@1.30 rymw7im automake@1.16.1 q4fpyuo gdbm@1.14.1 u2x3umv libiconv@1.15 alakjim lua-luafilesystem@1_6_3 b4y3w3b openssl@1.0.2o qhwyccy tcl@8.6.8 ufczdvs bzip2@1.0.6 tawgous gettext@0.19.8.1 fypapcp libsigsegv@2.11 7wqhwoc lua-luaposix@33.4.0 gt5lgzi pcre@8.42 ba23fbg unzip@6.0 vzqreb2 curl@7.60.0 p3gjnfa git@2.19.1 o2pfwjf libtool@2.4.6 suf5jtc m4@1.4.18 ic2kyoa perl@5.26.2 teneqii xz@5.2.4 2rhuivg diffutils@3.6 qc4qcfz gmp@6.1.2 wpexsph libxml2@2.9.8 iuf3gc3 mpc@1.1.0 fovrh7a pkgconf@1.4.2 5nus6kn zlib@1.2.11 emyv67q expat@2.2.5 vttqout isl@0.18 kmhks3p lmod@7.8 jnt2nnp mpfr@3.1.6 nxhwrg7 readline@7.0 -- linux-ubuntu16.04-x86_64 / gcc@7.2.0 --- yb2makb autoconf@2.69 riipwi2 libpciaccess@0.13.5 6jgjbyg netlib-scalapack@2.0.2 fdwz5yu perl@5.26.2 cuqnfgf texinfo@6.5 qoowd5q automake@1.16.1 g67xpfd libsigsegv@2.11 zxpt252 netlib-scalapack@2.0.2 yoxwmgb pkgconf@1.4.2 t62kozq util-macros@1.19.1 
mwamumj bzip2@1.0.6 kt2udm6 libtool@2.4.6 67nmj7g netlib-scalapack@2.0.2 wbwtcxf py-numpy@1.15.2 rql5kog xz@5.2.4 obqgn2v cmake@3.12.3 47gf5kk libxml2@2.9.8 prgo67d netlib-scalapack@2.0.2 d5n3cph py-scipy@1.1.0 ezuwp4p zlib@1.2.11 eauxwi7 diffutils@3.6 wdzvagl m4@1.4.18 rifwktk numactl@2.0.11 5dbwfwn py-setuptools@40.4.3 ca4b7zq findutils@4.6.0 vt5xcat mpich@3.2.1 xxoxfh4 openblas@0.3.3 ucmr2mn python@2.7.15 zk5lhob gdbm@1.14.1 xcgzqdv ncurses@6.1 do5xfer openmpi@3.1.3 ccruj2i readline@7.0 gbyc65s hwloc@1.11.9 fj7nayd netlib-lapack@3.8.0 cvldq3v openssl@1.0.2o 5ltus3a sqlite@3.23.1 ==> Do you want to proceed? [y/n] y ==> Regenerating tcl module files $ module avail --- /home/spack1/spack/share/spack/modules/linux-ubuntu16.04-x86_64 --- autoconf-2.69-gcc-7.2.0-yb2makb m4-1.4.18-gcc-7.2.0-wdzvagl perl-5.26.2-gcc-7.2.0-fdwz5yu automake-1.16.1-gcc-7.2.0-qoowd5q mpich-3.2.1-gcc-7.2.0-vt5xcat pkgconf-1.4.2-gcc-7.2.0-yoxwmgb bzip2-1.0.6-gcc-7.2.0-mwamumj ncurses-6.1-gcc-7.2.0-xcgzqdv py-numpy-1.15.2-gcc-7.2.0-wbwtcxf cmake-3.12.3-gcc-7.2.0-obqgn2v netlib-lapack-3.8.0-gcc-7.2.0-fj7nayd py-scipy-1.1.0-gcc-7.2.0-d5n3cph diffutils-3.6-gcc-7.2.0-eauxwi7 netlib-scalapack-2.0.2-gcc-7.2.0-67nmj7g py-setuptools-40.4.3-gcc-7.2.0-5dbwfwn findutils-4.6.0-gcc-7.2.0-ca4b7zq netlib-scalapack-2.0.2-gcc-7.2.0-6jgjbyg python-2.7.15-gcc-7.2.0-ucmr2mn gdbm-1.14.1-gcc-7.2.0-zk5lhob netlib-scalapack-2.0.2-gcc-7.2.0-prgo67d readline-7.0-gcc-7.2.0-ccruj2i hwloc-1.11.9-gcc-7.2.0-gbyc65s netlib-scalapack-2.0.2-gcc-7.2.0-zxpt252 sqlite-3.23.1-gcc-7.2.0-5ltus3a libpciaccess-0.13.5-gcc-7.2.0-riipwi2 numactl-2.0.11-gcc-7.2.0-rifwktk texinfo-6.5-gcc-7.2.0-cuqnfgf libsigsegv-2.11-gcc-7.2.0-g67xpfd openblas-0.3.3-gcc-7.2.0-xxoxfh4 util-macros-1.19.1-gcc-7.2.0-t62kozq libtool-2.4.6-gcc-7.2.0-kt2udm6 openmpi-3.1.3-gcc-7.2.0-do5xfer xz-5.2.4-gcc-7.2.0-rql5kog libxml2-2.9.8-gcc-7.2.0-47gf5kk openssl-1.0.2o-gcc-7.2.0-cvldq3v zlib-1.2.11-gcc-7.2.0-ezuwp4p Use "module spider" to find all possible modules. 
Use "module keyword key1 key2 ..." to search for all possible modules matching any of the "keys".
```

If you look closely, though, you'll see that we went too far in blacklisting modules: the module for `gcc@7.2.0` disappeared because it was bootstrapped with `gcc@5.4.0`. To specify exceptions to the blacklist rules you can use `whitelist`:

```
modules:
  tcl:
    whitelist:
    - gcc
    blacklist:
    - '%gcc@5.4.0'
    all:
      filter:
        environment_blacklist: ['CPATH', 'LIBRARY_PATH']
```

`whitelist` rules always take precedence over `blacklist` rules. If you regenerate the modules again:

```
$ spack module tcl refresh -y
==> Regenerating tcl module files
```

you'll see that the module for `gcc@7.2.0` has reappeared:

```
$ module avail gcc-7.2.0-gcc-5.4.0-b7smjjc

--- /home/spack1/spack/share/spack/modules/linux-ubuntu16.04-x86_64 ---
gcc-7.2.0-gcc-5.4.0-b7smjjc

Use "module spider" to find all possible modules.
Use "module keyword key1 key2 ..." to search for all possible modules matching any of the "keys".
```

Another way to unclutter the environment is to prevent the generation of module files for implicitly installed packages. All you need to do is add `blacklist_implicits: true` to `modules.yaml`:

```
modules:
  tcl:
    blacklist_implicits: true
    whitelist:
    - gcc
    blacklist:
    - '%gcc@5.4.0'
    all:
      filter:
        environment_blacklist: ['CPATH', 'LIBRARY_PATH']
```

and regenerate the module file tree as above.

#### Change module file naming[¶](#change-module-file-naming)

The next step in making module files more user-friendly is to improve their naming scheme.
To reduce the length of the hash or remove it altogether you can use the `hash_length` keyword in the configuration file:

```
modules:
  tcl:
    hash_length: 0
    whitelist:
    - gcc
    blacklist:
    - '%gcc@5.4.0'
    all:
      filter:
        environment_blacklist: ['CPATH', 'LIBRARY_PATH']
```

If you try to regenerate the module files now you will get an error:

```
$ spack module tcl refresh --delete-tree -y
==> Error: Name clashes detected in module files:

file: /home/spack1/spack/share/spack/modules/linux-ubuntu16.04-x86_64/netlib-scalapack-2.0.2-gcc-7.2.0
spec: netlib-scalapack@2.0.2%gcc@7.2.0 build_type=RelWithDebInfo ~pic+shared arch=linux-ubuntu16.04-x86_64
spec: netlib-scalapack@2.0.2%gcc@7.2.0 build_type=RelWithDebInfo ~pic+shared arch=linux-ubuntu16.04-x86_64
spec: netlib-scalapack@2.0.2%gcc@7.2.0 build_type=RelWithDebInfo ~pic+shared arch=linux-ubuntu16.04-x86_64
spec: netlib-scalapack@2.0.2%gcc@7.2.0 build_type=RelWithDebInfo ~pic+shared arch=linux-ubuntu16.04-x86_64

==> Error: Operation aborted
```

Note: We try to check for errors upfront! In Spack we check for errors upfront whenever possible, so don't worry about your module files: since a name clash was detected, nothing has been changed on disk.

The problem here is that without the hashes the four different flavors of `netlib-scalapack` map to the same module file name. We can add suffixes to differentiate them:

```
modules:
  tcl:
    hash_length: 0
    whitelist:
    - gcc
    blacklist:
    - '%gcc@5.4.0'
    all:
      suffixes:
        '^openblas': openblas
        '^netlib-lapack': netlib
      filter:
        environment_blacklist: ['CPATH', 'LIBRARY_PATH']
    netlib-scalapack:
      suffixes:
        '^openmpi': openmpi
        '^mpich': mpich
```

As you can see, it is possible to specify rules that apply only to a restricted set of packages using [anonymous specs](https://spack.readthedocs.io/en/latest/module_file_support.html#anonymous-specs).
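The way `hash_length` and `suffixes` combine into a module file name can be sketched in a few lines (a hypothetical re-implementation for illustration only; Spack's actual naming logic is more involved):

```python
def tcl_module_name(name, version, compiler, compiler_version,
                    full_hash, suffixes=(), hash_length=7):
    """Sketch: default non-hierarchical name, plus suffixes, plus hash."""
    parts = [f"{name}-{version}-{compiler}-{compiler_version}", *suffixes]
    if hash_length > 0:
        parts.append(full_hash[:hash_length])
    return "-".join(parts)

# With the default hash_length the hash disambiguates the flavors; with
# hash_length: 0 only the suffixes tell them apart. (Hash is made up.)
print(tcl_module_name("py-numpy", "1.15.2", "gcc", "7.2.0",
                      "wbwtcxfexample", suffixes=("openblas",), hash_length=0))
# py-numpy-1.15.2-gcc-7.2.0-openblas
```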
Regenerating module files now we obtain: ``` $ spack module tcl refresh --delete-tree -y ==> Regenerating tcl module files $ module avail --- /home/spack1/spack/share/spack/modules/linux-ubuntu16.04-x86_64 --- autoconf-2.69-gcc-7.2.0 m4-1.4.18-gcc-7.2.0 pkgconf-1.4.2-gcc-7.2.0 automake-1.16.1-gcc-7.2.0 mpich-3.2.1-gcc-7.2.0 py-numpy-1.15.2-gcc-7.2.0-openblas bzip2-1.0.6-gcc-7.2.0 ncurses-6.1-gcc-7.2.0 py-scipy-1.1.0-gcc-7.2.0-openblas cmake-3.12.3-gcc-7.2.0 netlib-lapack-3.8.0-gcc-7.2.0 py-setuptools-40.4.3-gcc-7.2.0 diffutils-3.6-gcc-7.2.0 netlib-scalapack-2.0.2-gcc-7.2.0-netlib-mpich python-2.7.15-gcc-7.2.0 findutils-4.6.0-gcc-7.2.0 netlib-scalapack-2.0.2-gcc-7.2.0-netlib-openmpi readline-7.0-gcc-7.2.0 gcc-7.2.0-gcc-5.4.0 netlib-scalapack-2.0.2-gcc-7.2.0-openblas-mpich sqlite-3.23.1-gcc-7.2.0 gdbm-1.14.1-gcc-7.2.0 netlib-scalapack-2.0.2-gcc-7.2.0-openblas-openmpi texinfo-6.5-gcc-7.2.0 hwloc-1.11.9-gcc-7.2.0 numactl-2.0.11-gcc-7.2.0 util-macros-1.19.1-gcc-7.2.0 libpciaccess-0.13.5-gcc-7.2.0 openblas-0.3.3-gcc-7.2.0 xz-5.2.4-gcc-7.2.0 libsigsegv-2.11-gcc-7.2.0 openmpi-3.1.3-gcc-7.2.0 zlib-1.2.11-gcc-7.2.0 libtool-2.4.6-gcc-7.2.0 openssl-1.0.2o-gcc-7.2.0 libxml2-2.9.8-gcc-7.2.0 perl-5.26.2-gcc-7.2.0 Use "module spider" to find all possible modules. Use "module keyword key1 key2 ..." to search for all possible modules matching any of the "keys". 
``` Finally we can set a `naming_scheme` to prevent users from loading modules that refer to different flavors of the same library/application: ``` modules: tcl: hash_length: 0 naming_scheme: '{name}/{version}-{compiler.name}-{compiler.version}' whitelist: - gcc blacklist: - '%gcc@5.4.0' all: conflict: - '{name}' suffixes: '^openblas': openblas '^netlib-lapack': netlib filter: environment_blacklist: ['CPATH', 'LIBRARY_PATH'] netlib-scalapack: suffixes: '^openmpi': openmpi '^mpich': mpich ``` The final result should look like: ``` $ spack module tcl refresh --delete-tree -y ==> Regenerating tcl module files $ module avail --- /home/spack1/spack/share/spack/modules/linux-ubuntu16.04-x86_64 --- autoconf/2.69-gcc-7.2.0 m4/1.4.18-gcc-7.2.0 pkgconf/1.4.2-gcc-7.2.0 automake/1.16.1-gcc-7.2.0 mpich/3.2.1-gcc-7.2.0 py-numpy/1.15.2-gcc-7.2.0-openblas bzip2/1.0.6-gcc-7.2.0 ncurses/6.1-gcc-7.2.0 py-scipy/1.1.0-gcc-7.2.0-openblas cmake/3.12.3-gcc-7.2.0 netlib-lapack/3.8.0-gcc-7.2.0 py-setuptools/40.4.3-gcc-7.2.0 diffutils/3.6-gcc-7.2.0 netlib-scalapack/2.0.2-gcc-7.2.0-netlib-mpich python/2.7.15-gcc-7.2.0 findutils/4.6.0-gcc-7.2.0 netlib-scalapack/2.0.2-gcc-7.2.0-netlib-openmpi readline/7.0-gcc-7.2.0 gcc/7.2.0-gcc-5.4.0 netlib-scalapack/2.0.2-gcc-7.2.0-openblas-mpich sqlite/3.23.1-gcc-7.2.0 gdbm/1.14.1-gcc-7.2.0 netlib-scalapack/2.0.2-gcc-7.2.0-openblas-openmpi (D) texinfo/6.5-gcc-7.2.0 hwloc/1.11.9-gcc-7.2.0 numactl/2.0.11-gcc-7.2.0 util-macros/1.19.1-gcc-7.2.0 libpciaccess/0.13.5-gcc-7.2.0 openblas/0.3.3-gcc-7.2.0 xz/5.2.4-gcc-7.2.0 libsigsegv/2.11-gcc-7.2.0 openmpi/3.1.3-gcc-7.2.0 zlib/1.2.11-gcc-7.2.0 libtool/2.4.6-gcc-7.2.0 openssl/1.0.2o-gcc-7.2.0 libxml2/2.9.8-gcc-7.2.0 perl/5.26.2-gcc-7.2.0 Where: D: Default Module Use "module spider" to find all possible modules. Use "module keyword key1 key2 ..." to search for all possible modules matching any of the "keys". 
```

Note: TCL-specific directives. The directives `naming_scheme` and `conflict` are TCL-specific and can't be used in the `lmod` section of the configuration file.

#### Add custom environment modifications[¶](#add-custom-environment-modifications)

At many sites it is customary to set an environment variable in a package's module file that points to the folder in which the package is installed. You can achieve this with Spack by adding an `environment` directive to the configuration file:

```
modules:
  tcl:
    hash_length: 0
    naming_scheme: '{name}/{version}-{compiler.name}-{compiler.version}'
    whitelist:
    - gcc
    blacklist:
    - '%gcc@5.4.0'
    all:
      conflict:
      - '{name}'
      suffixes:
        '^openblas': openblas
        '^netlib-lapack': netlib
      filter:
        environment_blacklist: ['CPATH', 'LIBRARY_PATH']
      environment:
        set:
          '{name}_ROOT': '{prefix}'
    netlib-scalapack:
      suffixes:
        '^openmpi': openmpi
        '^mpich': mpich
```

Under the hood Spack uses the `format()` API to substitute tokens in either environment variable names or values. There are two caveats though:

* The set of allowed tokens in variable names is restricted to `name`, `version`, `compiler`, `compiler.name`, `compiler.version`, `architecture`
* Any token expanded in a variable name is made uppercase, but other than that case sensitivity is preserved

Regenerating the module files results in something like:

```
$ spack module tcl refresh -y
==> Regenerating tcl module files

$ module show gcc
--- /home/spack1/spack/share/spack/modules/linux-ubuntu16.04-x86_64/gcc/7.2.0-gcc-5.4.0: ---
whatis("The GNU Compiler Collection includes front ends for C, C++, Objective-C, Fortran, Ada, and Go, as well as libraries for these languages.
") conflict("gcc") prepend_path("PATH","/home/spack1/spack/opt/spack/linux-ubuntu16.04-x86_64/gcc-5.4.0/gcc-7.2.0-b7smjjcsmwe5u5fcsvjmonlhlzzctnfs/bin") prepend_path("MANPATH","/home/spack1/spack/opt/spack/linux-ubuntu16.04-x86_64/gcc-5.4.0/gcc-7.2.0-b7smjjcsmwe5u5fcsvjmonlhlzzctnfs/share/man") prepend_path("LD_LIBRARY_PATH","/home/spack1/spack/opt/spack/linux-ubuntu16.04-x86_64/gcc-5.4.0/gcc-7.2.0-b7smjjcsmwe5u5fcsvjmonlhlzzctnfs/lib") prepend_path("LD_LIBRARY_PATH","/home/spack1/spack/opt/spack/linux-ubuntu16.04-x86_64/gcc-5.4.0/gcc-7.2.0-b7smjjcsmwe5u5fcsvjmonlhlzzctnfs/lib64") prepend_path("CMAKE_PREFIX_PATH","/home/spack1/spack/opt/spack/linux-ubuntu16.04-x86_64/gcc-5.4.0/gcc-7.2.0-b7smjjcsmwe5u5fcsvjmonlhlzzctnfs/") setenv("CC","/home/spack1/spack/opt/spack/linux-ubuntu16.04-x86_64/gcc-5.4.0/gcc-7.2.0-b7smjjcsmwe5u5fcsvjmonlhlzzctnfs/bin/gcc") setenv("CXX","/home/spack1/spack/opt/spack/linux-ubuntu16.04-x86_64/gcc-5.4.0/gcc-7.2.0-b7smjjcsmwe5u5fcsvjmonlhlzzctnfs/bin/g++") setenv("FC","/home/spack1/spack/opt/spack/linux-ubuntu16.04-x86_64/gcc-5.4.0/gcc-7.2.0-b7smjjcsmwe5u5fcsvjmonlhlzzctnfs/bin/gfortran") setenv("F77","/home/spack1/spack/opt/spack/linux-ubuntu16.04-x86_64/gcc-5.4.0/gcc-7.2.0-b7smjjcsmwe5u5fcsvjmonlhlzzctnfs/bin/gfortran") setenv("F90","/home/spack1/spack/opt/spack/linux-ubuntu16.04-x86_64/gcc-5.4.0/gcc-7.2.0-b7smjjcsmwe5u5fcsvjmonlhlzzctnfs/bin/gfortran") setenv("GCC_ROOT","/home/spack1/spack/opt/spack/linux-ubuntu16.04-x86_64/gcc-5.4.0/gcc-7.2.0-b7smjjcsmwe5u5fcsvjmonlhlzzctnfs") help([[The GNU Compiler Collection includes front ends for C, C++, Objective-C, Fortran, Ada, and Go, as well as libraries for these languages. ]]) ``` As you can see, the `gcc` module has the environment variable `GCC_ROOT` set. Sometimes it’s also useful to apply environment modifications selectively and target only certain packages. You can, for instance set the common variables `CC`, `CXX`, etc. 
in the `gcc` module file and apply other custom modifications to the `openmpi` modules as follows: ``` modules: tcl: hash_length: 0 naming_scheme: '{name}/{version}-{compiler.name}-{compiler.version}' whitelist: - gcc blacklist: - '%gcc@5.4.0' all: conflict: - '{name}' suffixes: '^openblas': openblas '^netlib-lapack': netlib filter: environment_blacklist: ['CPATH', 'LIBRARY_PATH'] environment: set: '{name}_ROOT': '{prefix}' gcc: environment: set: CC: gcc CXX: g++ FC: gfortran F90: gfortran F77: gfortran openmpi: environment: set: SLURM_MPI_TYPE: pmi2 OMPI_MCA_btl_openib_warn_default_gid_prefix: '0' netlib-scalapack: suffixes: '^openmpi': openmpi '^mpich': mpich ``` This time we will be more selective and regenerate only the `gcc` and `openmpi` module files: ``` $ spack module tcl refresh -y gcc ==> Regenerating tcl module files $ spack module tcl refresh -y openmpi ==> Regenerating tcl module files $ module show gcc --- /home/spack1/spack/share/spack/modules/linux-ubuntu16.04-x86_64/gcc/7.2.0-gcc-5.4.0: --- whatis("The GNU Compiler Collection includes front ends for C, C++, Objective-C, Fortran, Ada, and Go, as well as libraries for these languages. 
") conflict("gcc") prepend_path("PATH","/home/spack1/spack/opt/spack/linux-ubuntu16.04-x86_64/gcc-5.4.0/gcc-7.2.0-b7smjjcsmwe5u5fcsvjmonlhlzzctnfs/bin") prepend_path("MANPATH","/home/spack1/spack/opt/spack/linux-ubuntu16.04-x86_64/gcc-5.4.0/gcc-7.2.0-b7smjjcsmwe5u5fcsvjmonlhlzzctnfs/share/man") prepend_path("LD_LIBRARY_PATH","/home/spack1/spack/opt/spack/linux-ubuntu16.04-x86_64/gcc-5.4.0/gcc-7.2.0-b7smjjcsmwe5u5fcsvjmonlhlzzctnfs/lib") prepend_path("LD_LIBRARY_PATH","/home/spack1/spack/opt/spack/linux-ubuntu16.04-x86_64/gcc-5.4.0/gcc-7.2.0-b7smjjcsmwe5u5fcsvjmonlhlzzctnfs/lib64") prepend_path("CMAKE_PREFIX_PATH","/home/spack1/spack/opt/spack/linux-ubuntu16.04-x86_64/gcc-5.4.0/gcc-7.2.0-b7smjjcsmwe5u5fcsvjmonlhlzzctnfs/") setenv("CC","/home/spack1/spack/opt/spack/linux-ubuntu16.04-x86_64/gcc-5.4.0/gcc-7.2.0-b7smjjcsmwe5u5fcsvjmonlhlzzctnfs/bin/gcc") setenv("CXX","/home/spack1/spack/opt/spack/linux-ubuntu16.04-x86_64/gcc-5.4.0/gcc-7.2.0-b7smjjcsmwe5u5fcsvjmonlhlzzctnfs/bin/g++") setenv("FC","/home/spack1/spack/opt/spack/linux-ubuntu16.04-x86_64/gcc-5.4.0/gcc-7.2.0-b7smjjcsmwe5u5fcsvjmonlhlzzctnfs/bin/gfortran") setenv("F77","/home/spack1/spack/opt/spack/linux-ubuntu16.04-x86_64/gcc-5.4.0/gcc-7.2.0-b7smjjcsmwe5u5fcsvjmonlhlzzctnfs/bin/gfortran") setenv("F90","/home/spack1/spack/opt/spack/linux-ubuntu16.04-x86_64/gcc-5.4.0/gcc-7.2.0-b7smjjcsmwe5u5fcsvjmonlhlzzctnfs/bin/gfortran") setenv("GCC_ROOT","/home/spack1/spack/opt/spack/linux-ubuntu16.04-x86_64/gcc-5.4.0/gcc-7.2.0-b7smjjcsmwe5u5fcsvjmonlhlzzctnfs") setenv("CC","gcc") setenv("CXX","g++'") setenv("FC","gfortran") setenv("F77","gfortran") setenv("F90","gfortran") help([[The GNU Compiler Collection includes front ends for C, C++, Objective-C, Fortran, Ada, and Go, as well as libraries for these languages. ]]) $ module show openmpi --- /home/spack1/spack/share/spack/modules/linux-ubuntu16.04-x86_64/openmpi/3.1.3-gcc-7.2.0: --- whatis("An open source Message Passing Interface implementation. 
") conflict("openmpi") prepend_path("PATH","/home/spack1/spack/opt/spack/linux-ubuntu16.04-x86_64/gcc-7.2.0/openmpi-3.1.3-do5xfer2whhk7gc26atgs3ozr3ljbvs4/bin") prepend_path("MANPATH","/home/spack1/spack/opt/spack/linux-ubuntu16.04-x86_64/gcc-7.2.0/openmpi-3.1.3-do5xfer2whhk7gc26atgs3ozr3ljbvs4/share/man") prepend_path("LD_LIBRARY_PATH","/home/spack1/spack/opt/spack/linux-ubuntu16.04-x86_64/gcc-7.2.0/openmpi-3.1.3-do5xfer2whhk7gc26atgs3ozr3ljbvs4/lib") prepend_path("PKG_CONFIG_PATH","/home/spack1/spack/opt/spack/linux-ubuntu16.04-x86_64/gcc-7.2.0/openmpi-3.1.3-do5xfer2whhk7gc26atgs3ozr3ljbvs4/lib/pkgconfig") prepend_path("CMAKE_PREFIX_PATH","/home/spack1/spack/opt/spack/linux-ubuntu16.04-x86_64/gcc-7.2.0/openmpi-3.1.3-do5xfer2whhk7gc26atgs3ozr3ljbvs4/") setenv("OPENMPI_ROOT","/home/spack1/spack/opt/spack/linux-ubuntu16.04-x86_64/gcc-7.2.0/openmpi-3.1.3-do5xfer2whhk7gc26atgs3ozr3ljbvs4") setenv("SLURM_MPI_TYPE","pmi2") setenv("OMPI_MCA_btl_openib_warn_default_gid_prefix","0") help([[An open source Message Passing Interface implementation. The Open MPI Project is an open source Message Passing Interface implementation that is developed and maintained by a consortium of academic, research, and industry partners. Open MPI is therefore able to combine the expertise, technologies, and resources from all across the High Performance Computing community in order to build the best MPI library available. Open MPI offers advantages for system and software vendors, application developers and computer science researchers. ]]) ``` #### Autoload dependencies[¶](#autoload-dependencies) Spack can also generate module files that contain code to load the dependencies automatically. 
You can, for instance generate python modules that load their dependencies by adding the `autoload` directive and assigning it the value `direct`: ``` modules: tcl: verbose: True hash_length: 0 naming_scheme: '{name}/{version}-{compiler.name}-{compiler.version}' whitelist: - gcc blacklist: - '%gcc@5.4.0' all: conflict: - '{name}' suffixes: '^openblas': openblas '^netlib-lapack': netlib filter: environment_blacklist: ['CPATH', 'LIBRARY_PATH'] environment: set: '{name}_ROOT': '{prefix}' gcc: environment: set: CC: gcc CXX: g++ FC: gfortran F90: gfortran F77: gfortran openmpi: environment: set: SLURM_MPI_TYPE: pmi2 OMPI_MCA_btl_openib_warn_default_gid_prefix: '0' netlib-scalapack: suffixes: '^openmpi': openmpi '^mpich': mpich ^python: autoload: 'direct' ``` and regenerating the module files for every package that depends on `python`: ``` root@module-file-tutorial:/# spack module tcl refresh -y ^python ==> Regenerating tcl module files ``` Now the `py-scipy` module will be: ``` #%Module1.0 ## Module file created by spack (https://github.com/spack/spack) on 2018-11-11 22:10:48.834221 ## ## py-scipy@1.1.0%gcc@7.2.0 arch=linux-ubuntu16.04-x86_64 /d5n3cph ## module-whatis "SciPy (pronounced 'Sigh Pie') is a Scientific Library for Python. It provides many user-friendly and efficient numerical routines such as routines for numerical integration and optimization." proc ModulesHelp { } { puts stderr "SciPy (pronounced "Sigh Pie") is a Scientific Library for Python. It" puts stderr "provides many user-friendly and efficient numerical routines such as" puts stderr "routines for numerical integration and optimization." 
}

if { [ module-info mode load ] && ![ is-loaded python/2.7.15-gcc-7.2.0 ] } {
    puts stderr "Autoloading python/2.7.15-gcc-7.2.0"
    module load python/2.7.15-gcc-7.2.0
}

if { [ module-info mode load ] && ![ is-loaded openblas/0.3.3-gcc-7.2.0 ] } {
    puts stderr "Autoloading openblas/0.3.3-gcc-7.2.0"
    module load openblas/0.3.3-gcc-7.2.0
}

if { [ module-info mode load ] && ![ is-loaded py-numpy/1.15.2-gcc-7.2.0-openblas ] } {
    puts stderr "Autoloading py-numpy/1.15.2-gcc-7.2.0-openblas"
    module load py-numpy/1.15.2-gcc-7.2.0-openblas
}

conflict py-scipy

prepend-path LD_LIBRARY_PATH "/home/spack1/spack/opt/spack/linux-ubuntu16.04-x86_64/gcc-7.2.0/py-scipy-1.1.0-d5n3cphk2lx2v74ypwb6h7tna7vvgdyn/lib"
prepend-path CMAKE_PREFIX_PATH "/home/spack1/spack/opt/spack/linux-ubuntu16.04-x86_64/gcc-7.2.0/py-scipy-1.1.0-d5n3cphk2lx2v74ypwb6h7tna7vvgdyn/"
prepend-path PYTHONPATH "/home/spack1/spack/opt/spack/linux-ubuntu16.04-x86_64/gcc-7.2.0/py-scipy-1.1.0-d5n3cphk2lx2v74ypwb6h7tna7vvgdyn/lib/python2.7/site-packages"
setenv PY_SCIPY_ROOT "/home/spack1/spack/opt/spack/linux-ubuntu16.04-x86_64/gcc-7.2.0/py-scipy-1.1.0-d5n3cphk2lx2v74ypwb6h7tna7vvgdyn"
```

and will contain code to autoload all the dependencies:

```
$ module load py-scipy
Autoloading python/2.7.15-gcc-7.2.0
Autoloading openblas/0.3.3-gcc-7.2.0
Autoloading py-numpy/1.15.2-gcc-7.2.0-openblas
```

If you don't want these messages printed during autoloading, simply omit the `verbose: True` line from the configuration file above.

### Hierarchical Module Files[¶](#hierarchical-module-files)

So far we have worked with non-hierarchical module files, i.e. module files that are all generated in the same root directory and don't attempt to dynamically modify the `MODULEPATH`.
This results in a flat module structure where all the software is visible at the same time: ``` $ module avail --- /home/spack1/spack/share/spack/modules/linux-ubuntu16.04-x86_64 --- autoconf/2.69-gcc-7.2.0 m4/1.4.18-gcc-7.2.0 pkgconf/1.4.2-gcc-7.2.0 automake/1.16.1-gcc-7.2.0 mpich/3.2.1-gcc-7.2.0 py-numpy/1.15.2-gcc-7.2.0-openblas (L) bzip2/1.0.6-gcc-7.2.0 ncurses/6.1-gcc-7.2.0 py-scipy/1.1.0-gcc-7.2.0-openblas (L) cmake/3.12.3-gcc-7.2.0 netlib-lapack/3.8.0-gcc-7.2.0 py-setuptools/40.4.3-gcc-7.2.0 diffutils/3.6-gcc-7.2.0 netlib-scalapack/2.0.2-gcc-7.2.0-netlib-mpich python/2.7.15-gcc-7.2.0 (L) findutils/4.6.0-gcc-7.2.0 netlib-scalapack/2.0.2-gcc-7.2.0-netlib-openmpi readline/7.0-gcc-7.2.0 gcc/7.2.0-gcc-5.4.0 netlib-scalapack/2.0.2-gcc-7.2.0-openblas-mpich sqlite/3.23.1-gcc-7.2.0 gdbm/1.14.1-gcc-7.2.0 netlib-scalapack/2.0.2-gcc-7.2.0-openblas-openmpi (D) texinfo/6.5-gcc-7.2.0 hwloc/1.11.9-gcc-7.2.0 numactl/2.0.11-gcc-7.2.0 util-macros/1.19.1-gcc-7.2.0 libpciaccess/0.13.5-gcc-7.2.0 openblas/0.3.3-gcc-7.2.0 (L) xz/5.2.4-gcc-7.2.0 libsigsegv/2.11-gcc-7.2.0 openmpi/3.1.3-gcc-7.2.0 zlib/1.2.11-gcc-7.2.0 libtool/2.4.6-gcc-7.2.0 openssl/1.0.2o-gcc-7.2.0 libxml2/2.9.8-gcc-7.2.0 perl/5.26.2-gcc-7.2.0 Where: L: Module is loaded D: Default Module Use "module spider" to find all possible modules. Use "module keyword key1 key2 ..." to search for all possible modules matching any of the "keys". 
```

This layout is quite simple to deploy, but you can see from the above snippet that nothing prevents users from loading incompatible sets of modules:

```
$ module purge
$ module load netlib-lapack/3.8.0-gcc-7.2.0 openblas/0.3.3-gcc-7.2.0
$ module list

Currently Loaded Modules:
  1) netlib-lapack/3.8.0-gcc-7.2.0   2) openblas/0.3.3-gcc-7.2.0
```

Even if `conflicts` directives are carefully placed in module files, they:

* won't enforce a consistent environment, but will just report an error
* need constant updates, for instance as soon as a new compiler or MPI library is installed

[Hierarchical module files](http://lmod.readthedocs.io/en/latest/080_hierarchy.html) try to overcome these shortcomings by showing at start-up only a restricted view of what is available on the system: more specifically, only the software that has been installed with OS-provided compilers. Among this software there will be other - usually more recent - compilers that, once loaded, will prepend new directories to `MODULEPATH`, unlocking all the software that was compiled with them. This "unlocking" idea can then be extended arbitrarily to virtual dependencies, as we'll see in the following section.

#### Core/Compiler/MPI[¶](#core-compiler-mpi)

The most widely used hierarchy is the so-called `Core/Compiler/MPI`, where, on top of the compilers, different MPI libraries also unlock software linked to them. There are just a few steps needed to adapt the `modules.yaml` file we used previously:

1. enable the `lmod` file generator
2. change the `tcl` tag to `lmod`
3. remove `tcl` specific directives (`naming_scheme` and `conflict`)
4. declare which compilers are considered `core_compilers`
5.
remove the `mpi` related suffixes (as they will be substituted by hierarchies)

After these modifications your configuration file should look like:

```
modules:
  enable::
  - lmod
  lmod:
    core_compilers:
    - 'gcc@5.4.0'
    hierarchy:
    - mpi
    hash_length: 0
    whitelist:
    - gcc
    blacklist:
    - '%gcc@5.4.0'
    all:
      suffixes:
        '^openblas': openblas
        '^netlib-lapack': netlib
      filter:
        environment_blacklist: ['CPATH', 'LIBRARY_PATH']
      environment:
        set:
          '{name}_ROOT': '{prefix}'
    gcc:
      environment:
        set:
          CC: gcc
          CXX: g++
          FC: gfortran
          F90: gfortran
          F77: gfortran
    openmpi:
      environment:
        set:
          SLURM_MPI_TYPE: pmi2
          OMPI_MCA_btl_openib_warn_default_gid_prefix: '0'
```

Note: Double colon in configuration files. The double colon after `enable` is intentional: it overrides the default list of enabled generators so that only `lmod` will be active (see [Overriding entire sections](https://spack.readthedocs.io/en/latest/configuration.html#config-overrides) for more details).

The directive `core_compilers` accepts a list of compilers. Everything built using these compilers will create a module in the `Core` part of the hierarchy, which is the entry point for hierarchical module files. It is common practice to put the OS-provided compilers in the list and only build common utilities and other compilers with them.

If we now regenerate the module files:

```
$ spack module lmod refresh --delete-tree -y
==> Regenerating lmod module files
```

and update `MODULEPATH` to point to the `Core`:

```
$ module purge
$ module unuse $HOME/spack/share/spack/modules/linux-ubuntu16.04-x86_64
$ module use $HOME/spack/share/spack/lmod/linux-ubuntu16.04-x86_64/Core
```

asking for the available modules will return:

```
$ module avail

--- share/spack/lmod/linux-ubuntu16.04-x86_64/Core ---
gcc/7.2.0

Use "module spider" to find all possible modules.
Use "module keyword key1 key2 ..." to search for all possible modules matching any of the "keys".
```

Unsurprisingly, the only visible module is `gcc`.
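The unlocking mechanism can be sketched in a few lines (illustrative Python, not Lmod internals; the directory names mirror the tutorial's):

```python
# Illustrative sketch of Lmod's unlocking mechanism: loading a compiler
# module prepends a compiler-specific directory to MODULEPATH, making
# everything built with that compiler visible to `module avail`.
root = "share/spack/lmod/linux-ubuntu16.04-x86_64"
modulepath = [f"{root}/Core"]  # at start-up only Core is visible

def load_compiler(name_version):
    """Prepend the directory holding modules built with this compiler."""
    modulepath.insert(0, f"{root}/{name_version}")

load_compiler("gcc/7.2.0")
print(modulepath[0])  # share/spack/lmod/linux-ubuntu16.04-x86_64/gcc/7.2.0
```

In the real setup this prepending happens automatically inside the `gcc` module file when it is loaded, as the next listing demonstrates.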
Loading that we’ll unlock the `Compiler` part of the hierarchy: ``` $ module load gcc $ module avail --- /home/spack1/spack/share/spack/lmod/linux-ubuntu16.04-x86_64/gcc/7.2.0 --- autoconf/2.69 findutils/4.6.0 libtool/2.4.6 netlib-lapack/3.8.0 perl/5.26.2 python/2.7.15 xz/5.2.4 automake/1.16.1 gdbm/1.14.1 libxml2/2.9.8 numactl/2.0.11 pkgconf/1.4.2 readline/7.0 zlib/1.2.11 bzip2/1.0.6 hwloc/1.11.9 m4/1.4.18 openblas/0.3.3 py-numpy/1.15.2-openblas sqlite/3.23.1 cmake/3.12.3 libpciaccess/0.13.5 mpich/3.2.1 openmpi/3.1.3 py-scipy/1.1.0-openblas texinfo/6.5 diffutils/3.6 libsigsegv/2.11 ncurses/6.1 openssl/1.0.2o py-setuptools/40.4.3 util-macros/1.19.1 --- share/spack/lmod/linux-ubuntu16.04-x86_64/Core --- gcc/7.2.0 (L) Where: L: Module is loaded Use "module spider" to find all possible modules. Use "module keyword key1 key2 ..." to search for all possible modules matching any of the "keys". ``` The same holds true also for the `MPI` part, that you can enable by loading either `mpich` or `openmpi`. Let’s start by loading `mpich`: ``` $ module load mpich $ module avail --- /home/spack1/spack/share/spack/lmod/linux-ubuntu16.04-x86_64/mpich/3.2.1-vt5xcat/gcc/7.2.0 --- netlib-scalapack/2.0.2-netlib netlib-scalapack/2.0.2-openblas (D) --- /home/spack1/spack/share/spack/lmod/linux-ubuntu16.04-x86_64/gcc/7.2.0 --- autoconf/2.69 findutils/4.6.0 libtool/2.4.6 netlib-lapack/3.8.0 perl/5.26.2 python/2.7.15 xz/5.2.4 automake/1.16.1 gdbm/1.14.1 libxml2/2.9.8 numactl/2.0.11 pkgconf/1.4.2 readline/7.0 zlib/1.2.11 bzip2/1.0.6 hwloc/1.11.9 m4/1.4.18 openblas/0.3.3 py-numpy/1.15.2-openblas sqlite/3.23.1 cmake/3.12.3 libpciaccess/0.13.5 mpich/3.2.1 (L) openmpi/3.1.3 py-scipy/1.1.0-openblas texinfo/6.5 diffutils/3.6 libsigsegv/2.11 ncurses/6.1 openssl/1.0.2o py-setuptools/40.4.3 util-macros/1.19.1 --- share/spack/lmod/linux-ubuntu16.04-x86_64/Core --- gcc/7.2.0 (L) Where: L: Module is loaded D: Default Module Use "module spider" to find all possible modules. 
Use "module keyword key1 key2 ..." to search for all possible modules matching any of the "keys". root@module-file-tutorial:/# module load openblas netlib-scalapack/2.0.2-openblas root@module-file-tutorial:/# module list Currently Loaded Modules: 1) gcc/7.2.0 2) mpich/3.2.1 3) openblas/0.3.3 4) netlib-scalapack/2.0.2-openblas ``` At this point we can showcase the improved consistency that a hierarchical layout provides over a non-hierarchical one: ``` $ module load openmpi Lmod is automatically replacing "mpich/3.2.1" with "openmpi/3.1.3". Due to MODULEPATH changes, the following have been reloaded: 1) netlib-scalapack/2.0.2-openblas ``` `Lmod` took care of swapping the MPI provider for us, and it also substituted the `netlib-scalapack` module to conform to the change in the MPI. In this way we can’t accidentally pull-in two different MPI providers at the same time or load a module file for a package linked to `openmpi` when `mpich` is also loaded. Consistency for compilers and MPI is ensured by the tool. #### Add LAPACK to the hierarchy[¶](#add-lapack-to-the-hierarchy) The hierarchy just shown is already a great improvement over non-hierarchical layouts, but it still has an asymmetry: `LAPACK` providers cover the same semantic role as `MPI` providers, but yet they are not part of the hierarchy. 
To be more practical, this means that although we have gained an improved consistency in our environment when it comes to `MPI`, we still have the same problems as we had before for `LAPACK` implementations: ``` root@module-file-tutorial:/# module list Currently Loaded Modules: 1) gcc/7.2.0 2) openblas/0.3.3 3) openmpi/3.1.3 4) netlib-scalapack/2.0.2-openblas root@module-file-tutorial:/# module load netlib-scalapack/2.0.2-netlib The following have been reloaded with a version change: 1) netlib-scalapack/2.0.2-openblas => netlib-scalapack/2.0.2-netlib root@module-file-tutorial:/# module list Currently Loaded Modules: 1) gcc/7.2.0 2) openblas/0.3.3 3) openmpi/3.1.3 4) netlib-scalapack/2.0.2-netlib ``` Hierarchies that are deeper than `Core`/`Compiler`/`MPI` are probably still considered “unusual” or “impractical” at many sites, mainly because module files are written manually and keeping track of the combinations among multiple providers quickly becomes quite involved. For instance, having both `MPI` and `LAPACK` in the hierarchy means we must classify software into one of four categories: > 1. Software that doesn’t depend on `MPI` or `LAPACK` > 2. Software that depends only on `MPI` > 3. Software that depends only on `LAPACK` > 4. Software that depends on both to decide when to show it to the user. The situation becomes more involved as the number of virtual dependencies in the hierarchy increases. We can take advantage of the DAG that Spack maintains for the installed software and solve this combinatorial problem in a clean and automated way. In some sense Spack’s ability to manage this combinatorial complexity makes deeper hierarchies feasible. 
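The bookkeeping Spack automates here can be sketched as follows (illustrative Python, not Spack code): a package's category is simply the subset of hierarchy virtuals found among its dependencies, and with `n` virtuals in the hierarchy there are `2**n` such subsets:

```python
# With n virtual dependencies in the hierarchy, each package falls into
# one of 2**n categories, determined by which providers appear in its DAG.
from itertools import combinations

hierarchy = ["mpi", "lapack"]

def category(dependencies):
    """The subset of hierarchy virtuals a package depends on; this decides
    where in the module tree its file must be generated."""
    return tuple(v for v in hierarchy if v in dependencies)

# The four categories listed above, for a two-level hierarchy:
all_categories = [c for r in range(len(hierarchy) + 1)
                  for c in combinations(hierarchy, r)]
print(all_categories)  # [(), ('mpi',), ('lapack',), ('mpi', 'lapack')]
print(category({"mpi", "lapack"}))  # ('mpi', 'lapack')
```

Maintaining this classification by hand for every package is what makes deep hierarchies impractical at sites that write module files manually; Spack derives it from the dependency DAG instead.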
Coming back to our example, let’s add `lapack` to the hierarchy and remove any remaining suffix: ``` modules: enable:: - lmod lmod: core_compilers: - 'gcc@5.4.0' hierarchy: - mpi - lapack hash_length: 0 whitelist: - gcc blacklist: - '%gcc@5.4.0' all: filter: environment_blacklist: ['CPATH', 'LIBRARY_PATH'] environment: set: '{name}_ROOT': '{prefix}' gcc: environment: set: CC: gcc CXX: g++ FC: gfortran F90: gfortran F77: gfortran openmpi: environment: set: SLURM_MPI_TYPE: pmi2 OMPI_MCA_btl_openib_warn_default_gid_prefix: '0' ``` After module files have been regenerated as usual: ``` root@module-file-tutorial:/# module purge root@module-file-tutorial:/# spack module lmod refresh --delete-tree -y ==> Regenerating lmod module files ``` we can see that now we have additional components in the hierarchy: ``` $ module load gcc $ module load openblas $ module avail --- /home/spack1/spack/share/spack/lmod/linux-ubuntu16.04-x86_64/openblas/0.3.3-xxoxfh4/gcc/7.2.0 --- py-numpy/1.15.2 py-scipy/1.1.0 --- /home/spack1/spack/share/spack/lmod/linux-ubuntu16.04-x86_64/gcc/7.2.0 --- autoconf/2.69 findutils/4.6.0 libtool/2.4.6 netlib-lapack/3.8.0 perl/5.26.2 sqlite/3.23.1 automake/1.16.1 gdbm/1.14.1 libxml2/2.9.8 numactl/2.0.11 pkgconf/1.4.2 texinfo/6.5 bzip2/1.0.6 hwloc/1.11.9 m4/1.4.18 openblas/0.3.3 (L) py-setuptools/40.4.3 util-macros/1.19.1 cmake/3.12.3 libpciaccess/0.13.5 mpich/3.2.1 openmpi/3.1.3 python/2.7.15 xz/5.2.4 diffutils/3.6 libsigsegv/2.11 ncurses/6.1 openssl/1.0.2o readline/7.0 zlib/1.2.11 --- share/spack/lmod/linux-ubuntu16.04-x86_64/Core --- gcc/7.2.0 (L) Where: L: Module is loaded Use "module spider" to find all possible modules. Use "module keyword key1 key2 ..." to search for all possible modules matching any of the "keys". 
$ module load openmpi $ module avail --- /home/spack1/spack/share/spack/lmod/linux-ubuntu16.04-x86_64/openmpi/3.1.3-do5xfer/openblas/0.3.3-xxoxfh4/gcc/7.2.0 --- netlib-scalapack/2.0.2 --- /home/spack1/spack/share/spack/lmod/linux-ubuntu16.04-x86_64/openblas/0.3.3-xxoxfh4/gcc/7.2.0 --- py-numpy/1.15.2 py-scipy/1.1.0 --- /home/spack1/spack/share/spack/lmod/linux-ubuntu16.04-x86_64/gcc/7.2.0 --- autoconf/2.69 findutils/4.6.0 libtool/2.4.6 netlib-lapack/3.8.0 perl/5.26.2 sqlite/3.23.1 automake/1.16.1 gdbm/1.14.1 libxml2/2.9.8 numactl/2.0.11 pkgconf/1.4.2 texinfo/6.5 bzip2/1.0.6 hwloc/1.11.9 m4/1.4.18 openblas/0.3.3 (L) py-setuptools/40.4.3 util-macros/1.19.1 cmake/3.12.3 libpciaccess/0.13.5 mpich/3.2.1 openmpi/3.1.3 (L) python/2.7.15 xz/5.2.4 diffutils/3.6 libsigsegv/2.11 ncurses/6.1 openssl/1.0.2o readline/7.0 zlib/1.2.11 --- /home/spack1/spack/share/spack/lmod/linux-ubuntu16.04-x86_64/Core --- gcc/7.2.0 (L) Where: L: Module is loaded Use "module spider" to find all possible modules. Use "module keyword key1 key2 ..." to search for all possible modules matching any of the "keys". ``` Both `MPI` and `LAPACK` providers will now benefit from the same safety features: ``` $ module load py-numpy netlib-scalapack $ module load mpich Lmod is automatically replacing "openmpi/3.1.3" with "mpich/3.2.1". Due to MODULEPATH changes, the following have been reloaded: 1) netlib-scalapack/2.0.2 $ module load netlib-lapack Lmod is automatically replacing "openblas/0.3.3" with "netlib-lapack/3.8.0". Inactive Modules: 1) py-numpy Due to MODULEPATH changes, the following have been reloaded: 1) netlib-scalapack/2.0.2 ``` Because we only compiled `py-numpy` with `openblas` the module is made inactive when we switch the `LAPACK` provider. The user environment is now consistent by design! ### Working with Templates[¶](#working-with-templates) As briefly mentioned in the introduction, Spack uses [Jinja2](http://jinja.pocoo.org/docs/2.9/) to generate each individual module file. 
This means that you have all of its flexibility and power when it comes to customizing what gets generated! #### Module file templates[¶](#module-file-templates) The templates that Spack uses to generate module files are stored in the `share/spack/templates/module` directory within the Spack prefix, and they all share the same common structure. Usually, they start with a header that identifies the type of module being generated. In the case of hierarchical module files it’s: ``` -- -*- lua -*- -- Module file created by spack (https://github.com/spack/spack) on {{ timestamp }} -- -- {{ spec.short_spec }} -- ``` The statements within double curly brackets `{{ ... }}` denote [expressions](http://jinja.pocoo.org/docs/2.9/templates/#expressions) that will be evaluated and substituted at module generation time. The rest of the file is then divided into [blocks](http://jinja.pocoo.org/docs/2.9/templates/#template-inheritance) that can be overridden or extended by users, if need be. [Control structures](http://jinja.pocoo.org/docs/2.9/templates/#list-of-control-structures) , delimited by `{% ... 
%}`, are also permitted in the template language:

```
{% block environment %}
{% for command_name, cmd in environment_modifications %}
{% if command_name == 'PrependPath' %}
prepend_path("{{ cmd.name }}", "{{ cmd.value }}", "{{ cmd.separator }}")
{% elif command_name == 'AppendPath' %}
append_path("{{ cmd.name }}", "{{ cmd.value }}", "{{ cmd.separator }}")
{% elif command_name == 'RemovePath' %}
remove_path("{{ cmd.name }}", "{{ cmd.value }}", "{{ cmd.separator }}")
{% elif command_name == 'SetEnv' %}
setenv("{{ cmd.name }}", "{{ cmd.value }}")
{% elif command_name == 'UnsetEnv' %}
unsetenv("{{ cmd.name }}")
{% endif %}
{% endfor %}
{% endblock %}
```

The locations where Spack looks for templates are specified in `config.yaml`:

```
# Locations where templates should be found
template_dirs:
  - $spack/share/spack/templates
```

and can be extended by users to employ custom templates, as we'll see next.

#### Extend the default templates[¶](#extend-the-default-templates)

Let's assume one of our software packages is protected by group membership: allowed users belong to the same Linux group, and access is granted at the group level. Wouldn't it be nice if people who are not yet entitled to use it could receive a helpful message at module load time, telling them whom to contact in your organization to be added to the group? To automate the generation of module files with such site-specific behavior, we'll start by extending the list of locations where Spack looks for templates. Let's create the file `~/.spack/config.yaml` with the content:

```
config:
  template_dirs:
    - $HOME/.spack/templates
```

This tells Spack to also search another location when looking for template files.
Next, we need to create our custom template extension in the folder listed above: ``` {% extends "modules/modulefile.lua" %} {% block footer %} -- Access is granted only to specific groups if not isDir("{{ spec.prefix }}") then LmodError ( "You don't have the necessary rights to run \"{{ spec.name }}\".\n\n", "\tPlease write an e-mail to <EMAIL> if you need further information on how to get access to it.\n" ) end {% endblock %} ``` Let’s name this file `group-restricted.lua`. The line: ``` {% extends "modules/modulefile.lua" %} ``` tells Jinja2 that we are reusing the standard template for hierarchical module files. The section: ``` {% block footer %} -- Access is granted only to specific groups if not isDir("{{ spec.prefix }}") then LmodError ( "You don't have the necessary rights to run \"{{ spec.name }}\".\n\n", "\tPlease write an e-mail to <EMAIL> if you need further information on how to get access to it.\n" ) end {% endblock %} ``` overrides the `footer` block. Finally, we need to add a couple of lines in `modules.yaml` to tell Spack which specs need to use the new custom template. 
For the sake of illustration let’s assume it’s `netlib-scalapack`: ``` modules: enable: - lmod lmod: core_compilers: - 'gcc@5.4.0' hierarchy: - mpi - lapack hash_length: 0 whitelist: - gcc blacklist: - '%gcc@5.4.0' - readline all: filter: environment_blacklist: ['CPATH', 'LIBRARY_PATH'] environment: set: '{name}_ROOT': '{prefix}' gcc: environment: set: CC: gcc CXX: g++ FC: gfortran F90: gfortran F77: gfortran openmpi: environment: set: SLURM_MPI_TYPE: pmi2 OMPI_MCA_btl_openib_warn_default_gid_prefix: '0' netlib-scalapack: template: 'group-restricted.lua' ``` If we regenerate the module files one last time: ``` root@module-file-tutorial:/# spack module lmod refresh -y netlib-scalapack ==> Regenerating lmod module files ``` we’ll find the following at the end of each `netlib-scalapack` module file: ``` -- Access is granted only to specific groups if not isDir("/usr/local/opt/spack/linux-ubuntu16.04-x86_64/gcc-7.2.0/netlib-scalapack-2.0.2-d3lertflood3twaor44eam2kcr4l72ag") then LmodError ( "You don't have the necessary rights to run \"netlib-scalapack\".\n\n", "\tPlease write an e-mail to <EMAIL> if you need further information on how to get access to it.\n" ) end ``` and every user who doesn’t have access to the software will now be directed to the right e-mail address to ask for it! Spack Package Build Systems[¶](#spack-package-build-systems) --- After writing a couple of package files, you may begin to notice a pattern emerge for some packages. For example, you may find yourself writing an `install()` method that invokes: `configure`, `cmake`, `make`, `make install`. You may also find yourself writing `"prefix=" + prefix` as an argument to `configure` or `cmake`. Rather than having you repeat these lines for all packages, Spack has classes that can take care of these patterns. In addition, these package files allow for finer-grained control of these build systems.
In this section, we will describe each build system and give examples of how these can be manipulated to install a package. ### Package Class Hierarchy[¶](#package-class-hierarchy) digraph G { node [ shape = "record" ] edge [ arrowhead = "empty" ] PackageBase -> Package [dir=back] PackageBase -> MakefilePackage [dir=back] PackageBase -> AutotoolsPackage [dir=back] PackageBase -> CMakePackage [dir=back] PackageBase -> PythonPackage [dir=back] } The above diagram gives a high-level view of the class hierarchy and how each package relates. Each subclass inherits from the `PackageBase` super class. The bulk of the work is done in this super class, which includes fetching, extracting to a staging directory, and installing. Each subclass then adds additional build-system-specific functionality. In the following sections, we will go over examples of how to utilize each subclass and see how powerful these abstractions are when packaging. ### Package[¶](#package) We’ve already seen examples of a `Package` class in our walkthrough for writing package files, so we won’t be spending much time with them here. Briefly, the `Package` class allows for arbitrary control over the build process, whereas subclasses rely on certain patterns (e.g. `configure`, `make`, `make install`) to be useful. `Package` classes are particularly useful for packages that have a non-conventional way of being built, since the packager can utilize some of Spack’s helper functions to customize the building and installing of a package. ### Autotools[¶](#autotools) As we have seen earlier, packages using `Autotools` use `configure`, `make` and `make install` commands to execute the build and install process. In our `Package` class, your typical build incantation will consist of the following: ``` def install(self, spec, prefix): configure("--prefix=" + prefix) make() make("install") ``` You’ll see that this looks similar to what we wrote in our packaging tutorial.
The `Autotools` subclass aims to simplify writing package files and provides convenience methods to manipulate each of the different phases of an `Autotools` build system. `Autotools` packages consist of four phases: 1. `autoreconf()` 2. `configure()` 3. `build()` 4. `install()` Each of these phases has sensible defaults. Let’s take a quick look at some of the internals of the `Autotools` class: ``` $ spack edit --build-system autotools ``` This will open the `AutotoolsPackage` file in your text editor. Note The examples showing code for these classes are abridged to avoid having long examples. We only show what is relevant to the packager. | | | | --- | --- | | ``` 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27 28 29 30 31 32 33 34 35 36 37 38 39 40 41 42 43 44 45 46 47 48 49 50 51 52 53 54 55 56 ``` | ``` They all have sensible defaults and for many packages the only thing necessary will be to override the helper method :py:meth:`~.AutotoolsPackage.configure_args`.
For a finer tuning you may also override: +---+---+ | **Method** | **Purpose** | +===+===+ | :py:attr:`~.AutotoolsPackage.build_targets` | Specify ``make`` | | | targets for the | | | build phase | +---+---+ | :py:attr:`~.AutotoolsPackage.install_targets` | Specify ``make`` | | | targets for the | | | install phase | +---+---+ | :py:meth:`~.AutotoolsPackage.check` | Run build time | | | tests if required | +---+---+ """ #: Phases of a GNU Autotools package phases = ['autoreconf', 'configure', 'build', 'install'] #: This attribute is used in UI queries that need to know the build #: system base class build_system_class = 'AutotoolsPackage' #: Whether or not to update ``config.guess`` on old architectures patch_config_guess = True #: Targets for ``make`` during the :py:meth:`~.AutotoolsPackage.build` #: phase build_targets = [] #: Targets for ``make`` during the :py:meth:`~.AutotoolsPackage.install` #: phase install_targets = ['install'] #: Callback names for build-time test build_time_test_callbacks = ['check'] #: Callback names for install-time test install_time_test_callbacks = ['installcheck'] #: Set to true to force the autoreconf step even if configure is present force_autoreconf = False #: Options to be passed to autoreconf when using the default implementation autoreconf_extra_args = [] self.configure_flag_args.append(values_str) def configure(self, spec, prefix): """Runs configure with the arguments specified in :py:meth:`~.AutotoolsPackage.configure_args` and an appropriately set prefix. """ options = getattr(self, 'configure_flag_args', []) options += ['--prefix={0}'.format(prefix)] ``` | Important to note are the highlighted lines. These properties allow the packager to set what build targets and install targets they want for their package. 
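The tail of the excerpt shows how `configure()` builds its option list; the same assembly can be sketched as a stand-alone function (an illustration, not Spack’s actual code; `assemble_configure_options` is a made-up name):

```python
def assemble_configure_options(prefix, configure_args=None, flag_args=None):
    """Mimic AutotoolsPackage.configure: start from any collected flag
    arguments, always add --prefix, then append the package's own
    configure_args() output."""
    options = list(flag_args or [])
    options += ['--prefix={0}'.format(prefix)]
    options += list(configure_args or [])
    return options
```

This is why a package only needs to return its extra flags from `configure_args()`: the prefix and any collected flag arguments are supplied for free.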
If, for example, we wanted to add `foo` as our build target, then we can append to our `build_targets` property: ``` build_targets = ["foo"] ``` This is similar to invoking `make` in our `Package` class: ``` make("foo") ``` This is useful if we have packages that ignore environment variables and need a command-line argument. Another thing to take note of is in the `configure()` method. Here we see that the `prefix` argument is already included since it is a common pattern amongst packages using `Autotools`. We then only have to override `configure_args()`, which will return its output to `configure()`. Then, `configure()` will append the common arguments. Packagers also have the option to run `autoreconf` in case a package needs to update the build system and generate a new `configure`. For the most part, though, this will be unnecessary. Let’s look at the `mpileaks` package.py file that we worked on earlier: ``` $ spack edit mpileaks ``` Notice that mpileaks is a `Package` class but uses the `Autotools` build system. Although this package is acceptable, let’s make this into an `AutotoolsPackage` class and simplify it further. | | | | --- | --- | | ``` 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27 ``` | ``` # Copyright 2013-2019 Lawrence Livermore National Security, LLC and other # Spack Project Developers. See the top-level COPYRIGHT file for details.
# # SPDX-License-Identifier: (Apache-2.0 OR MIT) from spack import * class Mpileaks(AutotoolsPackage): """Tool to detect and report leaked MPI objects like MPI_Requests and MPI_Datatypes.""" homepage = "https://github.com/hpc/mpileaks" url = "https://github.com/hpc/mpileaks/releases/download/v1.0/mpileaks-1.0.tar.gz" version('1.0', '8838c574b39202a57d7c2d68692718aa') depends_on("mpi") depends_on("adept-utils") depends_on("callpath") def install(self, spec, prefix): configure("--prefix=" + prefix, "--with-adept-utils=" + spec['adept-utils'].prefix, "--with-callpath=" + spec['callpath'].prefix) make() make("install") ``` | We first inherit from the `AutotoolsPackage` class. Although we could keep the `install()` method, most of it can be handled by the `AutotoolsPackage` base class. In fact, the only thing that needs to be overridden is `configure_args()`. | | | | --- | --- | | ``` 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27 28 29 30 31 32 ``` | ``` # Copyright 2013-2019 Lawrence Livermore National Security, LLC and other # Spack Project Developers. See the top-level COPYRIGHT file for details. 
# # SPDX-License-Identifier: (Apache-2.0 OR MIT) from spack import * class Mpileaks(AutotoolsPackage): """Tool to detect and report leaked MPI objects like MPI_Requests and MPI_Datatypes.""" homepage = "https://github.com/hpc/mpileaks" url = "https://github.com/hpc/mpileaks/releases/download/v1.0/mpileaks-1.0.tar.gz" version('1.0', '8838c574b39202a57d7c2d68692718aa') variant("stackstart", values=int, default=0, description="Specify the number of stack frames to truncate") depends_on("mpi") depends_on("adept-utils") depends_on("callpath") def configure_args(self): stackstart = int(self.spec.variants['stackstart'].value) args = ["--with-adept-utils=" + self.spec['adept-utils'].prefix, "--with-callpath=" + self.spec['callpath'].prefix] if stackstart: args.extend(['--with-stack-start-c=%s' % stackstart, '--with-stack-start-fortran=%s' % stackstart]) return args ``` | Since Spack takes care of setting the prefix for us, we can exclude that as an argument to `configure`. Our packages look simpler, and the packager does not need to worry about whether they have properly included `configure` and `make`. This version of the `mpileaks` package installs the same as the previous, but the `AutotoolsPackage` class lets us do it with a cleaner-looking package file. ### Makefile[¶](#makefile) Packages that utilize `Make` or a `Makefile` usually require you to edit a `Makefile` to set up platform- and compiler-specific variables. These packages are handled by the `MakefilePackage` subclass, which provides convenience methods to help write these types of packages. A `MakefilePackage` class has three phases that can be overridden. These include: > 1. `edit()` > 2. `build()` > 3. `install()` Packagers then have the ability to control how a `Makefile` is edited, and what targets to include for the build phase or install phase.
Let’s also take a look inside the `MakefilePackage` class: ``` $ spack edit --build-system makefile ``` Take note of the following: | | | | --- | --- | | ``` 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27 28 29 30 31 32 33 34 35 36 37 38 39 ``` | ``` class MakefilePackage(PackageBase): #: Phases of a package that is built with a hand-written Makefile phases = ['edit', 'build', 'install'] #: This attribute is used in UI queries that need to know the build #: system base class build_system_class = 'MakefilePackage' #: Targets for ``make`` during the :py:meth:`~.MakefilePackage.build` #: phase build_targets = [] #: Targets for ``make`` during the :py:meth:`~.MakefilePackage.install` #: phase install_targets = ['install'] #: Callback names for build-time test build_time_test_callbacks = ['check'] #: Callback names for install-time test install_time_test_callbacks = ['installcheck'] def edit(self, spec, prefix): """Edits the Makefile before calling make. This phase cannot be defaulted. """ tty.msg('Using default implementation: skipping edit phase.') def build(self, spec, prefix): """Calls make, passing :py:attr:`~.MakefilePackage.build_targets` as targets. """ with working_dir(self.build_directory): inspect.getmodule(self).make(*self.build_targets) def install(self, spec, prefix): """Calls make, passing :py:attr:`~.MakefilePackage.install_targets` as targets. """ with working_dir(self.build_directory): inspect.getmodule(self).make(*self.install_targets) ``` | Similar to `Autotools`, the `MakefilePackage` class has properties that can be set by the packager. We can also override the different methods highlighted.
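The `build()` and `install()` phases above simply forward their target lists to `make`; that pattern can be sketched with a toy class (illustrative only: `MakefileLike` is made up, and a plain callable stands in for Spack’s `make` executable):

```python
class MakefileLike:
    """Toy model of MakefilePackage's build/install phases: each phase
    just forwards its target list to make."""
    build_targets = []
    install_targets = ['install']

    def __init__(self, make):
        self.make = make  # stand-in for Spack's `make` command object

    def build(self):
        self.make(*self.build_targets)

    def install(self):
        self.make(*self.install_targets)
```

Overriding `build_targets` or `install_targets` in a package therefore only changes what gets passed to `make`; the phases themselves stay untouched.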
Let’s try to recreate the [Bowtie](http://bowtie-bio.sourceforge.net/index.shtml) package: ``` $ spack create -f https://downloads.sourceforge.net/project/bowtie-bio/bowtie/1.2.1.1/bowtie-1.2.1.1-src.zip ==> This looks like a URL for bowtie ==> Found 1 version of bowtie: 1.2.1.1 https://downloads.sourceforge.net/project/bowtie-bio/bowtie/1.2.1.1/bowtie-1.2.1.1-src.zip ==> How many would you like to checksum? (default is 1, q to abort) 1 ==> Downloading... ==> Fetching https://downloads.sourceforge.net/project/bowtie-bio/bowtie/1.2.1.1/bowtie-1.2.1.1-src.zip ######################################################################## 100.0% ==> Checksummed 1 version of bowtie ==> This package looks like it uses the makefile build system ==> Created template for bowtie package ==> Created package file: /Users/mamelara/spack/var/spack/repos/builtin/packages/bowtie/package.py ``` Once the fetching is completed, Spack will open up your text editor in the usual fashion and create a template of a `MakefilePackage` package.py. | | | | --- | --- | | ``` 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 ``` | ``` # Copyright 2013-2019 Lawrence Livermore National Security, LLC and other # Spack Project Developers. See the top-level COPYRIGHT file for details. # # SPDX-License-Identifier: (Apache-2.0 OR MIT) from spack import * class Bowtie(MakefilePackage): """FIXME: Put a proper description of your package here.""" # FIXME: Add a proper url for your package's homepage here. homepage = "http://www.example.com" url = "https://downloads.sourceforge.net/project/bowtie-bio/bowtie/1.2.1.1/bowtie-1.2.1.1-src.zip" version('1.2.1.1', 'ec06265730c5f587cd58bcfef6697ddf') # FIXME: Add dependencies if required. 
# depends_on('foo') def edit(self, spec, prefix): # FIXME: Edit the Makefile if necessary # FIXME: If not needed delete this function # makefile = FileFilter('Makefile') # makefile.filter('CC = .*', 'CC = cc') return ``` | Spack was successfully able to detect that `Bowtie` uses `Make`. Let’s add in the rest of our details for our package: | | | | --- | --- | | ``` 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27 ``` | ``` # Copyright 2013-2019 Lawrence Livermore National Security, LLC and other # Spack Project Developers. See the top-level COPYRIGHT file for details. # # SPDX-License-Identifier: (Apache-2.0 OR MIT) from spack import * class Bowtie(MakefilePackage): """Bowtie is an ultrafast, memory efficient short read aligner for short DNA sequences (reads) from next-gen sequencers.""" homepage = "https://sourceforge.net/projects/bowtie-bio/" url = "https://downloads.sourceforge.net/project/bowtie-bio/bowtie/1.2.1.1/bowtie-1.2.1.1-src.zip" version('1.2.1.1', 'ec06265730c5f587cd58bcfef6697ddf') variant("tbb", default=False, description="Use Intel thread building block") depends_on("tbb", when="+tbb") def edit(self, spec, prefix): # FIXME: Edit the Makefile if necessary # FIXME: If not needed delete this function # makefile = FileFilter('Makefile') # makefile.filter('CC = .*', 'CC = cc') return ``` | As we mentioned earlier, most packages using a `Makefile` have hard-coded variables that must be edited. These hard-coded values are fine if you never change your setup or compilers, but Spack is designed to work with any compiler, so we need a way to edit them. The `MakefilePackage` subclass makes it easy to edit these `Makefiles` by having an `edit()` method that can be overridden. Let’s take a look at the default `Makefile` that `Bowtie` provides.
If we look inside, we see that `CC` and `CXX` point to our GNU compiler: ``` $ spack stage bowtie ``` Note As usual, make sure you have shell support activated: `source /path/to/spack_root/spack/share/spack/setup-env.sh` ``` $ spack cd -s bowtie $ cd bowtie-1.2 $ vim Makefile ``` ``` CPP = g++ -w CXX = $(CPP) CC = gcc LIBS = $(LDFLAGS) -lz HEADERS = $(wildcard *.h) ``` To fix this, we need to use the `edit()` method to write our custom `Makefile`. | | | | --- | --- | | ``` 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 ``` | ``` # Copyright 2013-2019 Lawrence Livermore National Security, LLC and other # Spack Project Developers. See the top-level COPYRIGHT file for details. # # SPDX-License-Identifier: (Apache-2.0 OR MIT) from spack import * class Bowtie(MakefilePackage): """Bowtie is an ultrafast, memory efficient short read aligner for short DNA sequences (reads) from next-gen sequencers.""" homepage = "https://sourceforge.net/projects/bowtie-bio/" url = "https://downloads.sourceforge.net/project/bowtie-bio/bowtie/1.2.1.1/bowtie-1.2.1.1-src.zip" version('1.2.1.1', 'ec06265730c5f587cd58bcfef6697ddf') variant("tbb", default=False, description="Use Intel thread building block") depends_on("tbb", when="+tbb") def edit(self, spec, prefix): makefile = FileFilter("Makefile") makefile.filter('CC = .*', 'CC = ' + env['CC']) makefile.filter('CXX = .*', 'CXX = ' + env['CXX']) ``` | Here we use a `FileFilter` object to edit our `Makefile`. It takes in a regular expression and then replaces the `CC` and `CXX` assignments with whatever Spack sets the `CC` and `CXX` environment variables to. This allows us to build `Bowtie` with whatever compiler we specify through Spack’s `spec` syntax.
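Conceptually, `FileFilter.filter` is just per-line regular-expression substitution; a rough stand-in (not Spack’s implementation, and working on a string rather than a file) could look like:

```python
import re

def filter_text(pattern, replacement, text):
    """Apply re.sub to each line, roughly like FileFilter.filter does
    to each line of a file."""
    return '\n'.join(re.sub(pattern, replacement, line)
                     for line in text.splitlines())
```

So `filter('CC = .*', 'CC = ' + env['CC'])` rewrites the entire `CC = gcc` line to point at Spack’s compiler wrapper, leaving other lines alone.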
Let’s change the build and install phases of our package: | | | | --- | --- | | ``` 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27 28 29 30 31 32 33 34 35 36 ``` | ``` # Copyright 2013-2019 Lawrence Livermore National Security, LLC and other # Spack Project Developers. See the top-level COPYRIGHT file for details. # # SPDX-License-Identifier: (Apache-2.0 OR MIT) from spack import * class Bowtie(MakefilePackage): """Bowtie is an ultrafast, memory efficient short read aligner for short DNA sequences (reads) from next-gen sequencers.""" homepage = "https://sourceforge.net/projects/bowtie-bio/" url = "https://downloads.sourceforge.net/project/bowtie-bio/bowtie/1.2.1.1/bowtie-1.2.1.1-src.zip" version('1.2.1.1', 'ec06265730c5f587cd58bcfef6697ddf') variant("tbb", default=False, description="Use Intel thread building block") depends_on("tbb", when="+tbb") def edit(self, spec, prefix): makefile = FileFilter("Makefile") makefile.filter('CC = .*', 'CC = ' + env['CC']) makefile.filter('CXX = .*', 'CXX = ' + env['CXX']) @property def build_targets(self): if "+tbb" in self.spec: return [] else: return ["NO_TBB=1"] @property def install_targets(self): return ['prefix={0}'.format(self.prefix), 'install'] ``` | Here we demonstrate another strategy that we can use to manipulate our package: we can provide command-line arguments to `make()`. Since `Bowtie` can use `tbb`, we can either add `NO_TBB=1` as an argument to prevent `tbb` support or we can just invoke `make` with no arguments. `Bowtie` requires our `install_targets` to provide a path to the install directory. We can do this by providing `prefix=` as a command-line argument to `make()`. Let’s look at a couple of other examples and go through them: ``` $ spack edit esmf ``` Some packages allow environment variables to be set and will honor them. Variables assigned with `?=` in a `Makefile` can be overridden through the environment.
In our `esmf` example we set two environment variables in our `edit()` method: ``` def edit(self, spec, prefix): for var in os.environ: if var.startswith('ESMF_'): os.environ.pop(var) # More code ... if self.compiler.name == 'gcc': os.environ['ESMF_COMPILER'] = 'gfortran' elif self.compiler.name == 'intel': os.environ['ESMF_COMPILER'] = 'intel' elif self.compiler.name == 'clang': os.environ['ESMF_COMPILER'] = 'gfortranclang' elif self.compiler.name == 'nag': os.environ['ESMF_COMPILER'] = 'nag' elif self.compiler.name == 'pgi': os.environ['ESMF_COMPILER'] = 'pgi' else: msg = "The compiler you are building with, " msg += "'{0}', is not supported by ESMF." raise InstallError(msg.format(self.compiler.name)) ``` As you may have noticed, we didn’t really write anything to the `Makefile` but rather we set environment variables that will override variables set in the `Makefile`. Some packages include a configuration file that sets certain compiler variables, platform specific variables, and the location of dependencies or libraries. If the file is simple and only requires a couple of changes, we can overwrite those entries with our `FileFilter` object. If the configuration involves complex changes, we can write a new configuration file from scratch. 
Let’s look at an example of this in the `elk` package: ``` $ spack edit elk ``` ``` def edit(self, spec, prefix): # Dictionary of configuration options config = { 'MAKE': 'make', 'AR': 'ar' } # Compiler-specific flags flags = '' if self.compiler.name == 'intel': flags = '-O3 -ip -unroll -no-prec-div' elif self.compiler.name == 'gcc': flags = '-O3 -ffast-math -funroll-loops' elif self.compiler.name == 'pgi': flags = '-O3 -lpthread' elif self.compiler.name == 'g95': flags = '-O3 -fno-second-underscore' elif self.compiler.name == 'nag': flags = '-O4 -kind=byte -dusty -dcfuns' elif self.compiler.name == 'xl': flags = '-O3' config['F90_OPTS'] = flags config['F77_OPTS'] = flags # BLAS/LAPACK support # Note: BLAS/LAPACK must be compiled with OpenMP support # if the +openmp variant is chosen blas = 'blas.a' lapack = 'lapack.a' if '+blas' in spec: blas = spec['blas'].libs.joined() if '+lapack' in spec: lapack = spec['lapack'].libs.joined() # lapack must come before blas config['LIB_LPK'] = ' '.join([lapack, blas]) # FFT support if '+fft' in spec: config['LIB_FFT'] = join_path(spec['fftw'].prefix.lib, 'libfftw3.so') config['SRC_FFT'] = 'zfftifc_fftw.f90' else: config['LIB_FFT'] = 'fftlib.a' config['SRC_FFT'] = 'zfftifc.f90' # MPI support if '+mpi' in spec: config['F90'] = spec['mpi'].mpifc config['F77'] = spec['mpi'].mpif77 else: config['F90'] = spack_fc config['F77'] = spack_f77 config['SRC_MPI'] = 'mpi_stub.f90' # OpenMP support if '+openmp' in spec: config['F90_OPTS'] += ' ' + self.compiler.openmp_flag config['F77_OPTS'] += ' ' + self.compiler.openmp_flag else: config['SRC_OMP'] = 'omp_stub.f90' # Libxc support if '+libxc' in spec: config['LIB_libxc'] = ' '.join([ join_path(spec['libxc'].prefix.lib, 'libxcf90.so'), join_path(spec['libxc'].prefix.lib, 'libxc.so') ]) config['SRC_libxc'] = ' '.join([ 'libxc_funcs.f90', 'libxc.f90', 'libxcifc.f90' ]) else: config['SRC_libxc'] = 'libxcifc_stub.f90' # Write configuration options to include file with open('make.inc', 'w') as 
inc: for key in config: inc.write('{0} = {1}\n'.format(key, config[key])) ``` `config` is just a dictionary that we can add key-value pairs to. By the end of the `edit()` method we write the contents of our dictionary to `make.inc`. ### CMake[¶](#cmake) [CMake](https://cmake.org) is another common build system that has been gaining popularity. It works in a similar manner to `Autotools` but with differences in variable names, the number of configuration options available, and the handling of shared libraries. Typical build incantations look like this: ``` def install(self, spec, prefix): cmake("-DCMAKE_INSTALL_PREFIX:PATH=/path/to/install_dir ..") make() make("install") ``` As you can see from the example above, it’s very similar to invoking `configure` and `make` in an `Autotools` build system. However, the variable names and options differ. Most options in CMake are prefixed with a `'-D'` flag to indicate a configuration setting. In the `CMakePackage` class we can override the following phases: 1. `cmake()` 2. `build()` 3. `install()` The `CMakePackage` class also provides sensible defaults so we only need to override `cmake_args()`. Let’s look at these defaults in the `CMakePackage` class in the `_std_args()` method: ``` $ spack edit --build-system cmake ``` | | | | --- | --- | | ``` 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27 28 29 30 31 32 33 34 35 36 37 38 39 40 41 42 43 44 45 46 ``` | ``` return [os.path.join(self.build_directory, 'CMakeCache.txt')] @property def root_cmakelists_dir(self): """The relative path to the directory containing CMakeLists.txt This path is relative to the root of the extracted tarball, not to the ``build_directory``. Defaults to the current directory. 
:return: directory containing CMakeLists.txt """ return self.stage.source_path @property def std_cmake_args(self): """Standard cmake arguments provided as a property for convenience of package writers :return: standard cmake arguments """ # standard CMake arguments std_cmake_args = CMakePackage._std_args(self) std_cmake_args += getattr(self, 'cmake_flag_args', []) return std_cmake_args @staticmethod def _std_args(pkg): """Computes the standard cmake arguments for a generic package""" try: generator = pkg.generator except AttributeError: generator = 'Unix Makefiles' # Make sure a valid generator was chosen valid_primary_generators = ['Unix Makefiles', 'Ninja'] primary_generator = _extract_primary_generator(generator) if primary_generator not in valid_primary_generators: msg = "Invalid CMake generator: '{0}'\n".format(generator) msg += "CMakePackage currently supports the following " msg += "primary generators: '{0}'".\ format("', '".join(valid_primary_generators)) raise InstallError(msg) try: build_type = pkg.spec.variants['build_type'].value except KeyError: ``` | Some `CMake` packages use different generators. Spack is able to support [Unix-Makefile](https://cmake.org/cmake/help/v3.4/generator/Unix%20Makefiles.html) generators as well as [Ninja](https://cmake.org/cmake/help/v3.4/generator/Ninja.html) generators. If no generator is specified, Spack will default to `Unix Makefiles`. Next we set up the build type. In `CMake` you can specify the build type that you want. Options include: 1. `empty` 2. `Debug` 3. `Release` 4. `RelWithDebInfo` 5. `MinSizeRel` With these options you can specify whether you want a plain build, a debug build, an optimized release build, a release build with debug information, or a size-optimized release build. Release executables tend to be more optimized than Debug executables. In Spack, we set the default to `RelWithDebInfo` unless otherwise specified through a variant.
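The defaults just described amount to assembling a standard argument list; a simplified sketch (not the full `_std_args`, and omitting the rpath and prefix-path handling) might look like:

```python
def std_cmake_args(prefix, build_type='RelWithDebInfo',
                   generator='Unix Makefiles'):
    """Validate the generator, then emit the arguments every
    CMake-based package receives."""
    if generator not in ('Unix Makefiles', 'Ninja'):
        raise ValueError("Invalid CMake generator: '{0}'".format(generator))
    return ['-G', generator,
            '-DCMAKE_INSTALL_PREFIX:PATH={0}'.format(prefix),
            '-DCMAKE_BUILD_TYPE:STRING={0}'.format(build_type)]
```

A package never writes these arguments itself; `cmake_args()` only supplies whatever needs to be appended to them.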
Spack then automatically sets up the `-DCMAKE_INSTALL_PREFIX` path, appends the build type (`RelWithDebInfo` default), and then specifies a verbose `Makefile`. Next we add the `rpaths` to `-DCMAKE_INSTALL_RPATH:STRING`. Finally we add to `-DCMAKE_PREFIX_PATH:STRING` the locations of all our dependencies so that `CMake` can find them. In the end our `cmake` line will look like this (example is `xrootd`): ``` $ cmake $HOME/spack/var/spack/stage/xrootd-4.6.0-4ydm74kbrp4xmcgda5upn33co5pwddyk/xrootd-4.6.0 -G Unix Makefiles -DCMAKE_INSTALL_PREFIX:PATH=$HOME/spack/opt/spack/darwin-sierra-x86_64/clang-9.0.0-apple/xrootd-4.6.0-4ydm74kbrp4xmcgda5upn33co5pwddyk -DCMAKE_BUILD_TYPE:STRING=RelWithDebInfo -DCMAKE_VERBOSE_MAKEFILE:BOOL=ON -DCMAKE_FIND_FRAMEWORK:STRING=LAST -DCMAKE_INSTALL_RPATH_USE_LINK_PATH:BOOL=FALSE -DCMAKE_INSTALL_RPATH:STRING=$HOME/spack/opt/spack/darwin-sierra-x86_64/clang-9.0.0-apple/xrootd-4.6.0-4ydm74kbrp4xmcgda5upn33co5pwddyk/lib:$HOME/spack/opt/spack/darwin-sierra-x86_64/clang-9.0.0-apple/xrootd-4.6.0-4ydm74kbrp4xmcgda5upn33co5pwddyk/lib64 -DCMAKE_PREFIX_PATH:STRING=$HOME/spack/opt/spack/darwin-sierra-x86_64/clang-9.0.0-apple/cmake-3.9.4-hally3vnbzydiwl3skxcxcbzsscaasx5 ``` We can see now how `CMake` takes care of a lot of the boilerplate code that would have to be otherwise typed in. Let’s try to recreate [callpath](https://github.com/LLNL/callpath.git): ``` $ spack create -f https://github.com/llnl/callpath/archive/v1.0.3.tar.gz ==> This looks like a URL for callpath ==> Found 4 versions of callpath: 1.0.3 https://github.com/LLNL/callpath/archive/v1.0.3.tar.gz 1.0.2 https://github.com/LLNL/callpath/archive/v1.0.2.tar.gz 1.0.1 https://github.com/LLNL/callpath/archive/v1.0.1.tar.gz 1.0 https://github.com/LLNL/callpath/archive/v1.0.tar.gz ==> How many would you like to checksum? (default is 1, q to abort) 1 ==> Downloading... 
==> Fetching https://github.com/LLNL/callpath/archive/v1.0.3.tar.gz ######################################################################## 100.0% ==> Checksummed 1 version of callpath ==> This package looks like it uses the cmake build system ==> Created template for callpath package ==> Created package file: /Users/mamelara/spack/var/spack/repos/builtin/packages/callpath/package.py ``` which then produces the following template: | | | | --- | --- | | ``` 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27 28 29 30 31 32 33 34 35 36 37 38 39 40 41 ``` | ``` # Copyright 2013-2019 Lawrence Livermore National Security, LLC and other # Spack Project Developers. See the top-level COPYRIGHT file for details. # # SPDX-License-Identifier: (Apache-2.0 OR MIT) # # This is a template package file for Spack. We've put "FIXME" # next to all the things you'll want to change. Once you've handled # them, you can save this file and test your package like this: # # spack install callpath # # You can edit this file again by typing: # # spack edit callpath # # See the Spack documentation for more information on packaging. # If you submit this package back to Spack as a pull request, # please first remove this boilerplate and all FIXME comments. # from spack import * class Callpath(CMakePackage): """FIXME: Put a proper description of your package here.""" # FIXME: Add a proper url for your package's homepage here. homepage = "http://www.example.com" url = "https://github.com/llnl/callpath/archive/v1.0.1.tar.gz" version('1.0.3', 'c89089b3f1c1ba47b09b8508a574294a') # FIXME: Add dependencies if required. 
# depends_on('foo') def cmake_args(self): # FIXME: Add arguments other than # FIXME: CMAKE_INSTALL_PREFIX and CMAKE_BUILD_TYPE # FIXME: If not needed delete this function args = [] return args ``` | Again we fill in the details: | | | | --- | --- | | ``` 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 ``` | ``` # Copyright 2013-2019 Lawrence Livermore National Security, LLC and other # Spack Project Developers. See the top-level COPYRIGHT file for details. # # SPDX-License-Identifier: (Apache-2.0 OR MIT) from spack import * class Callpath(CMakePackage): """Library for representing callpaths consistently in distributed-memory performance tools.""" homepage = "https://github.com/llnl/callpath" url = "https://github.com/llnl/callpath/archive/v1.0.3.tar.gz" version('1.0.3', 'c89089b3f1c1ba47b09b8508a574294a') depends_on("elf", type="link") depends_on("libdwarf") depends_on("dyninst") depends_on("adept-utils") depends_on("mpi") depends_on("cmake@2.8:", type="build") ``` | As mentioned earlier, Spack will use sensible defaults to prevent repeated code and to make writing `CMake` package files simpler. In callpath, we want to add options to `CALLPATH_WALKER` as well as add compiler flags. We add the following options like so: | | | | --- | --- | | ``` 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27 28 29 30 31 32 33 ``` | ``` # Copyright 2013-2019 Lawrence Livermore National Security, LLC and other # Spack Project Developers. See the top-level COPYRIGHT file for details. 
# # SPDX-License-Identifier: (Apache-2.0 OR MIT) from spack import * class Callpath(CMakePackage): """Library for representing callpaths consistently in distributed-memory performance tools.""" homepage = "https://github.com/llnl/callpath" url = "https://github.com/llnl/callpath/archive/v1.0.3.tar.gz" version('1.0.3', 'c89089b3f1c1ba47b09b8508a574294a') depends_on("elf", type="link") depends_on("libdwarf") depends_on("dyninst") depends_on("adept-utils") depends_on("mpi") depends_on("cmake@2.8:", type="build") def cmake_args(self): args = ["-DCALLPATH_WALKER=dyninst"] if self.spec.satisfies("^dyninst@9.3.0:"): std_flag = self.compiler.cxx_flag args.append("-DCMAKE_CXX_FLAGS='{0} -fpermissive'".format( std_flag)) return args ``` | Now we can control our build options using `cmake_args()`. If the defaults are sufficient for the package, we can leave this method out. `CMakePackage` classes allow for control of other features in the build system. For example, you can specify the path to the “out of source” build directory and also point to the root of the `CMakeLists.txt` file if it is placed in a non-standard location. A good example of a package that has its `CMakeLists.txt` file in a non-standard location is `spades`. ``` $ spack edit spades ``` ``` root_cmakelists_dir = "src" ``` Here `root_cmakelists_dir` tells Spack where to find `CMakeLists.txt`. In this example, it is located one directory level down, in the `src` directory. Some `CMake` packages also require the `install` phase to be overridden. For example, let’s take a look at `sniffles`.
``` $ spack edit sniffles ``` The build process does not actually install anything, so we override the `install()` method to copy the targets into place by hand: ``` # the build process doesn't actually install anything, do it by hand def install(self, spec, prefix): mkdir(prefix.bin) src = "bin/sniffles-core-{0}".format(spec.version.dotted) binaries = ['sniffles', 'sniffles-debug'] for b in binaries: install(join_path(src, b), join_path(prefix.bin, b)) ``` ### PythonPackage[¶](#pythonpackage) Python extensions and modules are built differently from source than most applications. Python uses a `setup.py` script to install Python modules. The script consists of a call to `setup()` which provides the information required to build a module to Distutils. If you’re familiar with pip or easy_install, setup.py does the same thing. These modules are usually installed using the following line: ``` $ python setup.py install ``` There is also a list of commands and phases that you can call. To see the full list you can run: ``` $ python setup.py --help-commands Standard commands: build build everything needed to install build_py "build" pure Python modules (copy to build directory) build_ext build C/C++ extensions (compile/link to build directory) build_clib build C/C++ libraries used by Python extensions build_scripts "build" scripts (copy and fixup #! line) clean (no description available) install install everything from build directory install_lib install all Python modules (extensions and pure Python) install_headers install C/C++ header files install_scripts install scripts (Python or otherwise) install_data install data files sdist create a source distribution (tarball, zip file, etc.)
register register the distribution with the Python package index bdist create a built (binary) distribution bdist_dumb create a "dumb" built distribution bdist_rpm create an RPM distribution bdist_wininst create an executable installer for MS Windows upload upload binary package to PyPI check perform some checks on the package ``` We can write package files for Python packages using the `Package` class, but the class brings with it a lot of methods that are useless for Python packages. Instead, Spack has a `PythonPackage` subclass that allows packagers of Python modules to invoke `setup.py` and use `Distutils`, which is much more familiar to a typical Python user. To see the defaults that Spack has for each of these methods, we will take a look at the `PythonPackage` class: ``` $ spack edit --build-system python ``` We see the following: | | | | --- | --- | | ``` 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27 28 29 30 31 32 33 34 35 36 37 38 39 40 41 42 43 44 45 46 47 48 49 50 51 52 53 54 55 56 57 58 59 60 61 62 63 64 65 66 67 68 69 70 71 72 73 74 75 76 77 78 79 80 81 82 83 84 85 86 87 88 89 90 91 92 93 94 95 96 97 98 99 100 101 102 103 104 105 106 107 108 109 110 111 112 113 114 115 116 117 118 119 120 121 122 123 124 125 126 127 128 129 130 131 132 133 134 135 136 137 138 139 140 141 142 143 144 145 146 147 148 149 150 151 152 153 154 155 156 157 158 159 160 161 162 163 164 165 166 167 168 169 170 171 172 173 174 175 176 177 178 179 180 181 182 183 184 185 186 187 188 189 190 191 192 193 194 195 196 197 198 199 200 201 202 203 204 205 206 207 208 209 210 211 212 213 ``` | ``` class PythonPackage(PackageBase): # Standard commands def build(self, spec, prefix): """Build everything needed to install.""" args = self.build_args(spec, prefix) self.setup_py('build', *args) def build_args(self, spec, prefix): """Arguments to pass to build.""" return [] def build_py(self, spec, prefix): '''"Build" pure Python modules (copy to build
directory).''' args = self.build_py_args(spec, prefix) self.setup_py('build_py', *args) def build_py_args(self, spec, prefix): """Arguments to pass to build_py.""" return [] def build_ext(self, spec, prefix): """Build C/C++ extensions (compile/link to build directory).""" args = self.build_ext_args(spec, prefix) self.setup_py('build_ext', *args) def build_ext_args(self, spec, prefix): """Arguments to pass to build_ext.""" return [] def build_clib(self, spec, prefix): """Build C/C++ libraries used by Python extensions.""" args = self.build_clib_args(spec, prefix) self.setup_py('build_clib', *args) def build_clib_args(self, spec, prefix): """Arguments to pass to build_clib.""" return [] def build_scripts(self, spec, prefix): '''"Build" scripts (copy and fixup #! line).''' args = self.build_scripts_args(spec, prefix) self.setup_py('build_scripts', *args) def clean(self, spec, prefix): """Clean up temporary files from 'build' command.""" args = self.clean_args(spec, prefix) self.setup_py('clean', *args) def clean_args(self, spec, prefix): """Arguments to pass to clean.""" return [] def install(self, spec, prefix): """Install everything from build directory.""" args = self.install_args(spec, prefix) self.setup_py('install', *args) def install_args(self, spec, prefix): """Arguments to pass to install.""" args = ['--prefix={0}'.format(prefix)] # This option causes python packages (including setuptools) NOT # to create eggs or easy-install.pth files. Instead, they # install naturally into $prefix/pythonX.Y/site-packages. # # Eggs add an extra level of indirection to sys.path, slowing # down large HPC runs. They are also deprecated in favor of # wheels, which use a normal layout when installed. # # Spack manages the package directory on its own by symlinking # extensions into the site-packages directory, so we don't really # need the .pth files or egg directories, anyway. # # We need to make sure this is only for build dependencies. 
A package # such as py-basemap will not build properly with this flag since # it does not use setuptools to build and those does not recognize # the --single-version-externally-managed flag if ('py-setuptools' == spec.name or # this is setuptools, or 'py-setuptools' in spec._dependencies and # it's an immediate dep 'build' in spec._dependencies['py-setuptools'].deptypes): args += ['--single-version-externally-managed', '--root=/'] return args def install_lib(self, spec, prefix): """Install all Python modules (extensions and pure Python).""" args = self.install_lib_args(spec, prefix) self.setup_py('install_lib', *args) def install_lib_args(self, spec, prefix): """Arguments to pass to install_lib.""" return [] def install_headers(self, spec, prefix): """Install C/C++ header files.""" args = self.install_headers_args(spec, prefix) self.setup_py('install_headers', *args) def install_headers_args(self, spec, prefix): """Arguments to pass to install_headers.""" return [] def install_scripts(self, spec, prefix): """Install scripts (Python or otherwise).""" args = self.install_scripts_args(spec, prefix) self.setup_py('install_scripts', *args) def install_scripts_args(self, spec, prefix): """Arguments to pass to install_scripts.""" return [] def install_data(self, spec, prefix): """Install data files.""" args = self.install_data_args(spec, prefix) self.setup_py('install_data', *args) def install_data_args(self, spec, prefix): """Arguments to pass to install_data.""" return [] def sdist(self, spec, prefix): """Create a source distribution (tarball, zip file, etc.).""" args = self.sdist_args(spec, prefix) self.setup_py('sdist', *args) def sdist_args(self, spec, prefix): """Arguments to pass to sdist.""" return [] def register(self, spec, prefix): """Register the distribution with the Python package index.""" args = self.register_args(spec, prefix) self.setup_py('register', *args) def register_args(self, spec, prefix): """Arguments to pass to register.""" return [] def 
bdist(self, spec, prefix): """Create a built (binary) distribution.""" args = self.bdist_args(spec, prefix) self.setup_py('bdist', *args) def bdist_args(self, spec, prefix): """Arguments to pass to bdist.""" return [] def bdist_dumb(self, spec, prefix): '''Create a "dumb" built distribution.''' args = self.bdist_dumb_args(spec, prefix) self.setup_py('bdist_dumb', *args) def bdist_dumb_args(self, spec, prefix): """Arguments to pass to bdist_dumb.""" return [] def bdist_rpm(self, spec, prefix): """Create an RPM distribution.""" args = self.bdist_rpm_args(spec, prefix) self.setup_py('bdist_rpm', *args) def bdist_rpm_args(self, spec, prefix): """Arguments to pass to bdist_rpm.""" return [] def bdist_wininst(self, spec, prefix): """Create an executable installer for MS Windows.""" args = self.bdist_wininst_args(spec, prefix) self.setup_py('bdist_wininst', *args) def bdist_wininst_args(self, spec, prefix): """Arguments to pass to bdist_wininst.""" return [] def upload(self, spec, prefix): """Upload binary package to PyPI.""" args = self.upload_args(spec, prefix) self.setup_py('upload', *args) def upload_args(self, spec, prefix): """Arguments to pass to upload.""" return [] def check(self, spec, prefix): """Perform some checks on the package.""" args = self.check_args(spec, prefix) self.setup_py('check', *args) def check_args(self, spec, prefix): """Arguments to pass to check.""" return [] ``` | Each of these methods has a sensible default, or it can be overridden. We will write a package file for [Pandas](https://pandas.pydata.org): ``` $ spack create -f https://pypi.io/packages/source/p/pandas/pandas-0.19.0.tar.gz ==> This looks like a URL for pandas ==> Warning: Spack was unable to fetch url list due to a certificate verification problem. You can try running spack -k, which will not check SSL certificates. Use this at your own risk. ==> Found 1 version of pandas: 0.19.0 https://pypi.io/packages/source/p/pandas/pandas-0.19.0.tar.gz ==> How many would you like to checksum?
(default is 1, q to abort) 1 ==> Downloading... ==> Fetching https://pypi.io/packages/source/p/pandas/pandas-0.19.0.tar.gz ######################################################################## 100.0% ==> Checksummed 1 version of pandas ==> This package looks like it uses the python build system ==> Changing package name from pandas to py-pandas ==> Created template for py-pandas package ==> Created package file: /Users/mamelara/spack/var/spack/repos/builtin/packages/py-pandas/package.py ``` And we are left with the following template: | | | | --- | --- | | ``` 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27 28 29 30 31 32 33 34 35 36 37 38 39 40 41 ``` | ``` # Copyright 2013-2019 Lawrence Livermore National Security, LLC and other # Spack Project Developers. See the top-level COPYRIGHT file for details. # # SPDX-License-Identifier: (Apache-2.0 OR MIT) # # This is a template package file for Spack. We've put "FIXME" # next to all the things you'll want to change. Once you've handled # them, you can save this file and test your package like this: # # spack install py-pandas # # You can edit this file again by typing: # # spack edit py-pandas # # See the Spack documentation for more information on packaging. # If you submit this package back to Spack as a pull request, # please first remove this boilerplate and all FIXME comments. # from spack import * class PyPandas(PythonPackage): """FIXME: Put a proper description of your package here.""" # FIXME: Add a proper url for your package's homepage here. homepage = "http://www.example.com" url = "https://pypi.io/packages/source/p/pandas/pandas-0.19.0.tar.gz" version('0.19.0', 'bc9bb7188e510b5d44fbdd249698a2c3') # FIXME: Add dependencies if required. 
# depends_on('py-setuptools', type='build') # depends_on('py-foo', type=('build', 'run')) def build_args(self, spec, prefix): # FIXME: Add arguments other than --prefix # FIXME: If not needed delete this function args = [] return args ``` | As you can see, this is no different from any other package template we have written. We have the choice of providing build options or using the sensible defaults. Luckily for us, there is no need to provide build args. Next we need to find the dependencies of a package. Dependencies are usually listed in `setup.py`. You can find the dependencies by searching for the `install_requires` keyword in that file. Here it is for `Pandas`: ``` # ... code if sys.version_info[0] >= 3: setuptools_kwargs = { 'zip_safe': False, 'install_requires': ['python-dateutil >= 2', 'pytz >= 2011k', 'numpy >= %s' % min_numpy_ver], 'setup_requires': ['numpy >= %s' % min_numpy_ver], } if not _have_setuptools: sys.exit("need setuptools/distribute for Py3k" "\n$ pip install distribute") # ... more code ``` You can find a more comprehensive list at the Pandas [documentation](https://pandas.pydata.org/pandas-docs/stable/install.html). By reading the documentation and `setup.py` we found that `Pandas` depends on `python-dateutil`, `pytz`, `numpy`, `numexpr`, and `bottleneck`. Here is the completed `Pandas` script: | | | | --- | --- | | ``` 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27 28 29 30 31 32 ``` | ``` # Copyright 2013-2019 Lawrence Livermore National Security, LLC and other # Spack Project Developers. See the top-level COPYRIGHT file for details.
It aims to be the fundamental high-level building block for doing practical, real world data analysis in Python. Additionally, it has the broader goal of becoming the most powerful and flexible open source data analysis / manipulation tool available in any language. """ homepage = "http://pandas.pydata.org/" url = "https://pypi.io/packages/source/p/pandas/pandas-0.19.0.tar.gz" version('0.19.0', 'bc9bb7188e510b5d44fbdd249698a2c3') version('0.18.0', 'f143762cd7a59815e348adf4308d2cf6') version('0.16.1', 'fac4f25748f9610a3e00e765474bdea8') version('0.16.0', 'bfe311f05dc0c351f8955fbd1e296e73') depends_on('py-dateutil', type=('build', 'run')) depends_on('py-numpy', type=('build', 'run')) depends_on('py-setuptools', type='build') depends_on('py-cython', type='build') depends_on('py-pytz', type=('build', 'run')) depends_on('py-numexpr', type=('build', 'run')) depends_on('py-bottleneck', type=('build', 'run')) ``` | It is quite important to declare all the dependencies of a Python package. Spack can “activate” Python packages to prevent the user from having to load each dependency module explicitly. If a dependency is missed, Spack will be unable to properly activate the package, and this will cause problems at use time. To learn more about extensions go to [spack extensions](https://spack.readthedocs.io/en/latest/basic_usage.html#cmd-spack-extensions). From this example, you can see that building Python modules is made easy through the `PythonPackage` class. ### Other Build Systems[¶](#other-build-systems) Although we won’t go into depth on any of the other build systems that Spack supports, it is worth mentioning that Spack does provide subclasses for the following build systems: 1. `IntelPackage` 2. `SconsPackage` 3. `WafPackage` 4. `RPackage` 5. `PerlPackage` 6. `QMakePackage` Each of these classes has its own abstractions to assist in writing package files. For whatever doesn’t fit nicely into the other build systems, you can use the `Package` class.
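To round out the picture, here is a minimal sketch of what a generic `Package` looks like: there are no build-system phases, so a single `install()` method drives the whole build by hand. The package name, URLs, checksum, dependency, and Makefile targets below are hypothetical, for illustration only:

```python
# Hypothetical sketch of a generic Package; not a real Spack package.
from spack import *


class Mytool(Package):
    """Hypothetical tool built with a plain, hand-written Makefile."""

    homepage = "https://example.com/mytool"
    url      = "https://example.com/mytool-1.0.tar.gz"

    version('1.0', '00000000000000000000000000000000')

    depends_on('zlib')

    def install(self, spec, prefix):
        # No configure/build phases here: call make directly, then
        # install into the prefix Spack assigns to this package.
        make()
        make('install', 'PREFIX={0}'.format(prefix))
```

Because `Package` imposes no phase structure, anything the build needs must be spelled out in `install()`, as in the `sniffles` example earlier.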
Hopefully by now you can see how we aim to make packaging simple and robust through these classes. If you want to learn more about these build systems, check out [Implementing the installation procedure](https://spack.readthedocs.io/en/latest/packaging_guide.html#installation-procedure) in the Packaging Guide. Advanced Topics in Packaging[¶](#advanced-topics-in-packaging) --- Spack tries to automatically configure packages with information from dependencies such that all you need to do is to list the dependencies (i.e., with the `depends_on` directive) and the build system (for example by deriving from `CMakePackage`). However, there are many special cases. Often you need to retrieve details about dependencies to set package-specific configuration options, or to define package-specific environment variables used by the package’s build system. This tutorial covers how to retrieve build information from dependencies, and how you can automatically provide important information to dependents in your package. ### Setup for the Tutorial[¶](#setup-for-the-tutorial) Note We do not recommend doing this section of the tutorial in a production Spack instance. The tutorial uses custom package definitions with missing sections that will be filled in during the tutorial. These package definitions are stored in a separate package repository, which can be enabled with: ``` $ spack repo add --scope=site var/spack/repos/tutorial ``` This section of the tutorial may also require a newer version of gcc. If you have not already installed gcc@7.2.0 and added it to your configuration, you can do so with: ``` $ spack install gcc@7.2.0 %gcc@5.4.0 $ spack compiler add --scope=site `spack location -i gcc@7.2.0 %gcc@5.4.0` ``` If you are using the tutorial Docker image, all dependency packages will have been installed.
Otherwise, to install these packages you can use the following commands: ``` $ spack install openblas $ spack install netlib-lapack $ spack install mpich ``` Now, you are ready to set your preferred `EDITOR` and continue with the rest of the tutorial. Note Several of these packages depend on an MPI implementation. You can use OpenMPI if you install it from scratch, but this is slow (>10 min.). A binary cache of MPICH may be provided, in which case you can force the package to use it and install quickly. All tutorial examples with packages that depend on MPICH include the spec syntax for building with it. ### Modifying a Package’s Build Environment[¶](#modifying-a-package-s-build-environment) Spack sets up several environment variables like `PATH` by default to aid in building a package, but many packages make use of environment variables which convey specific information about their dependencies (e.g., `MPICC`). This section covers how to update your Spack packages so that package-specific environment variables are defined at build-time. #### Set environment variables in dependent packages at build-time[¶](#set-environment-variables-in-dependent-packages-at-build-time) Dependencies can set environment variables that are required when their dependents build. For example, when a package depends on a Python extension like `py-numpy`, Spack’s `python` package will add it to `PYTHONPATH` so it is available at build time; this is required because the default setup that Spack performs is not sufficient for Python to import modules. To provide environment setup for a dependent, a package can implement the `setup_dependent_environment` function. This function takes an `EnvironmentModifications` object as a parameter, which includes convenience methods to update the environment.
For example, an MPI implementation can set `MPICC` for packages that depend on it: ``` def setup_dependent_environment(self, spack_env, run_env, dependent_spec): spack_env.set('MPICC', join_path(self.prefix.bin, 'mpicc')) ``` In this case packages that depend on `mpi` will have `MPICC` defined in their environment when they build. This section is focused on modifying the build-time environment represented by `spack_env`, but it’s worth noting that modifications to `run_env` are included in Spack’s automatically-generated module files. We can practice by editing the `mpich` package to set the `MPICC` environment variable in the build-time environment of dependent packages. ``` root@advanced-packaging-tutorial:/# spack edit mpich ``` Once you’re finished, the method should look like this: ``` def setup_dependent_environment(self, spack_env, run_env, dependent_spec): spack_env.set('MPICC', join_path(self.prefix.bin, 'mpicc')) spack_env.set('MPICXX', join_path(self.prefix.bin, 'mpic++')) spack_env.set('MPIF77', join_path(self.prefix.bin, 'mpif77')) spack_env.set('MPIF90', join_path(self.prefix.bin, 'mpif90')) spack_env.set('MPICH_CC', spack_cc) spack_env.set('MPICH_CXX', spack_cxx) spack_env.set('MPICH_F77', spack_f77) spack_env.set('MPICH_F90', spack_fc) spack_env.set('MPICH_FC', spack_fc) ``` At this point we can, for instance, install `netlib-scalapack` with `mpich`: ``` root@advanced-packaging-tutorial:/# spack install netlib-scalapack ^mpich ... ==> Created stage in /usr/local/var/spack/stage/netlib-scalapack-2.0.2-km7tsbgoyyywonyejkjoojskhc5knz3z ==> No patches needed for netlib-scalapack ==> Building netlib-scalapack [CMakePackage] ==> Executing phase: 'cmake' ==> Executing phase: 'build' ==> Executing phase: 'install' ==> Successfully installed netlib-scalapack Fetch: 0.01s. Build: 3m 59.86s. Total: 3m 59.87s. 
[+] /usr/local/opt/spack/linux-ubuntu16.04-x86_64/gcc-5.4.0/netlib-scalapack-2.0.2-km7tsbgoyyywonyejkjoojskhc5knz3z ``` and double check the environment logs to verify that every variable was set to the correct value. #### Set environment variables in your own package[¶](#set-environment-variables-in-your-own-package) Packages can modify their own build-time environment by implementing the `setup_environment` function. For `qt` this looks like: ``` def setup_environment(self, spack_env, run_env): spack_env.set('MAKEFLAGS', '-j{0}'.format(make_jobs)) run_env.set('QTDIR', self.prefix) ``` When `qt` builds, `MAKEFLAGS` will be defined in the environment. To contrast with `qt`’s `setup_dependent_environment` function: ``` def setup_dependent_environment(self, spack_env, run_env, dependent_spec): spack_env.set('QTDIR', self.prefix) ``` Let’s see how it works by completing the `elpa` package: ``` root@advanced-packaging-tutorial:/# spack edit elpa ``` In the end your method should look like: ``` def setup_environment(self, spack_env, run_env): spec = self.spec spack_env.set('CC', spec['mpi'].mpicc) spack_env.set('FC', spec['mpi'].mpifc) spack_env.set('CXX', spec['mpi'].mpicxx) spack_env.set('SCALAPACK_LDFLAGS', spec['scalapack'].libs.joined()) spack_env.append_flags('LDFLAGS', spec['lapack'].libs.search_flags) spack_env.append_flags('LIBS', spec['lapack'].libs.link_flags) ``` At this point it’s possible to proceed with the installation of `elpa ^mpich` ### Retrieving Library Information[¶](#retrieving-library-information) Although Spack attempts to help packages locate their dependency libraries automatically (e.g. by setting `PKG_CONFIG_PATH` and `CMAKE_PREFIX_PATH`), a package may have unique configuration options that are required to locate libraries. When a package needs information about dependency libraries, the general approach in Spack is to query the dependencies for the locations of their libraries and set configuration options accordingly. 
By default most Spack packages know how to automatically locate their libraries. This section covers how to retrieve library information from dependencies and how to locate libraries when the default logic doesn’t work. #### Accessing dependency libraries[¶](#accessing-dependency-libraries) If you need to access the libraries of a dependency, you can do so via the `libs` property of the spec, for example in the `arpack-ng` package: ``` def install(self, spec, prefix): lapack_libs = spec['lapack'].libs.joined(';') blas_libs = spec['blas'].libs.joined(';') cmake(*[ '-DLAPACK_LIBRARIES={0}'.format(lapack_libs), '-DBLAS_LIBRARIES={0}'.format(blas_libs) ], '.') ``` Note that `arpack-ng` is querying virtual dependencies, which Spack automatically resolves to the installed implementation (e.g. `openblas` for `blas`). We’ve started work on a package for `armadillo`. You should open it, read through the comment that starts with `# TUTORIAL:` and complete the `cmake_args` section: ``` root@advanced-packaging-tutorial:/# spack edit armadillo ``` If you followed the instructions in the package, when you are finished your `cmake_args` method should look like: ``` def cmake_args(self): spec = self.spec return [ # ARPACK support '-DARPACK_LIBRARY={0}'.format(spec['arpack-ng'].libs.joined(";")), # BLAS support '-DBLAS_LIBRARY={0}'.format(spec['blas'].libs.joined(";")), # LAPACK support '-DLAPACK_LIBRARY={0}'.format(spec['lapack'].libs.joined(";")), # SuperLU support '-DSuperLU_INCLUDE_DIR={0}'.format(spec['superlu'].prefix.include), '-DSuperLU_LIBRARY={0}'.format(spec['superlu'].libs.joined(";")), # HDF5 support '-DDETECT_HDF5={0}'.format('ON' if '+hdf5' in spec else 'OFF') ] ``` As you can see, getting the list of libraries that your dependencies provide is as easy as accessing their `libs` attribute. Furthermore, the interface remains the same whether you are querying regular or virtual dependencies.
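To make the `joined`, `search_flags`, and `link_flags` calls used in these examples concrete, here is a toy stand-in for the library list object that `spec['lapack'].libs` returns. This is plain illustrative Python, not Spack's actual `LibraryList` implementation, and the paths are made up:

```python
import os


class ToyLibraryList:
    """Plain-Python stand-in for Spack's LibraryList (illustration only)."""

    def __init__(self, paths):
        self.paths = list(paths)

    def joined(self, separator=' '):
        # What arpack-ng and armadillo pass to CMake: one string of full paths
        return separator.join(self.paths)

    @property
    def directories(self):
        return sorted({os.path.dirname(p) for p in self.paths})

    @property
    def search_flags(self):
        # -L flags, as appended to LDFLAGS in the elpa example above
        return ' '.join('-L' + d for d in self.directories)

    @property
    def link_flags(self):
        # -l flags: strip the 'lib' prefix and the file extension
        names = (os.path.basename(p) for p in self.paths)
        return ' '.join('-l' + n[3:].split('.')[0] for n in names)


libs = ToyLibraryList(['/opt/openblas/lib/libopenblas.so'])
print(libs.joined(';'))   # /opt/openblas/lib/libopenblas.so
print(libs.search_flags)  # -L/opt/openblas/lib
print(libs.link_flags)    # -lopenblas
```

The real object offers the same kind of accessors, which is why a package can turn a dependency query into CMake arguments or linker flags with a single attribute access.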
At this point you can complete the installation of `armadillo` using `openblas` as a LAPACK provider (`armadillo ^openblas ^mpich`): ``` root@advanced-packaging-tutorial:/# spack install armadillo ^openblas ^mpich ==> pkg-config is already installed in /usr/local/opt/spack/linux-ubuntu16.04-x86_64/gcc-5.4.0/pkg-config-0.29.2-ae2hwm7q57byfbxtymts55xppqwk7ecj ... ==> superlu is already installed in /usr/local/opt/spack/linux-ubuntu16.04-x86_64/gcc-5.4.0/superlu-5.2.1-q2mbtw2wo4kpzis2e2n227ip2fquxrno ==> Installing armadillo ==> Using cached archive: /usr/local/var/spack/cache/armadillo/armadillo-8.100.1.tar.xz ==> Staging archive: /usr/local/var/spack/stage/armadillo-8.100.1-n2eojtazxbku6g4l5izucwwgnpwz77r4/armadillo-8.100.1.tar.xz ==> Created stage in /usr/local/var/spack/stage/armadillo-8.100.1-n2eojtazxbku6g4l5izucwwgnpwz77r4 ==> Applied patch undef_linux.patch ==> Building armadillo [CMakePackage] ==> Executing phase: 'cmake' ==> Executing phase: 'build' ==> Executing phase: 'install' ==> Successfully installed armadillo Fetch: 0.01s. Build: 3.96s. Total: 3.98s. [+] /usr/local/opt/spack/linux-ubuntu16.04-x86_64/gcc-5.4.0/armadillo-8.100.1-n2eojtazxbku6g4l5izucwwgnpwz77r4 ``` Hopefully the installation went fine and the code we added expanded to the right list of semicolon separated libraries (you are encouraged to open `armadillo`’s build logs to double check). #### Providing libraries to dependents[¶](#providing-libraries-to-dependents) Spack provides a default implementation for `libs` which often works out of the box. A user can write a package definition without having to implement a `libs` property and dependents can retrieve its libraries as shown in the above section. However, the default implementation assumes that libraries follow the naming scheme `lib<package name>.so` (or e.g. `lib<package name>.a` for static libraries). Packages which don’t follow this naming scheme must implement this function themselves, e.g. 
`opencv`: ``` @property def libs(self): shared = "+shared" in self.spec return find_libraries( "libopencv_*", root=self.prefix, shared=shared, recurse=True ) ``` This issue is common for packages which implement an interface (i.e. virtual package providers in Spack). If we try to build another version of `armadillo` tied to `netlib-lapack` (`armadillo ^netlib-lapack ^mpich`) we’ll notice that this time the installation won’t complete: ``` root@advanced-packaging-tutorial:/# spack install armadillo ^netlib-lapack ^mpich ==> pkg-config is already installed in /usr/local/opt/spack/linux-ubuntu16.04-x86_64/gcc-5.4.0/pkg-config-0.29.2-ae2hwm7q57byfbxtymts55xppqwk7ecj ... ==> openmpi is already installed in /usr/local/opt/spack/linux-ubuntu16.04-x86_64/gcc-5.4.0/openmpi-3.0.0-yo5qkfvumpmgmvlbalqcadu46j5bd52f ==> Installing arpack-ng ==> Using cached archive: /usr/local/var/spack/cache/arpack-ng/arpack-ng-3.5.0.tar.gz ==> Already staged arpack-ng-3.5.0-bloz7cqirpdxj33pg7uj32zs5likz2un in /usr/local/var/spack/stage/arpack-ng-3.5.0-bloz7cqirpdxj33pg7uj32zs5likz2un ==> No patches needed for arpack-ng ==> Building arpack-ng [Package] ==> Executing phase: 'install' ==> Error: RuntimeError: Unable to recursively locate netlib-lapack libraries in /usr/local/opt/spack/linux-ubuntu16.04-x86_64/gcc-5.4.0/netlib-lapack-3.6.1-jjfe23wgt7nkjnp2adeklhseg3ftpx6z RuntimeError: RuntimeError: Unable to recursively locate netlib-lapack libraries in /usr/local/opt/spack/linux-ubuntu16.04-x86_64/gcc-5.4.0/netlib-lapack-3.6.1-jjfe23wgt7nkjnp2adeklhseg3ftpx6z /usr/local/var/spack/repos/builtin/packages/arpack-ng/package.py:105, in install: 5 options.append('-DCMAKE_INSTALL_NAME_DIR:PATH=%s/lib' % prefix) 6 7 # Make sure we use Spack's blas/lapack: >> 8 lapack_libs = spec['lapack'].libs.joined(';') 9 blas_libs = spec['blas'].libs.joined(';') 10 11 options.extend([ See build log for details: 
/usr/local/var/spack/stage/arpack-ng-3.5.0-bloz7cqirpdxj33pg7uj32zs5likz2un/arpack-ng-3.5.0/spack-build-out.txt ``` Unlike `openblas` which provides a library named `libopenblas.so`, `netlib-lapack` provides `liblapack.so`, so it needs to implement customized library search logic. Let’s edit it: ``` root@advanced-packaging-tutorial:/# spack edit netlib-lapack ``` and follow the instructions in the `# TUTORIAL:` comment as before. What we need to implement is: ``` @property def lapack_libs(self): shared = True if '+shared' in self.spec else False return find_libraries( 'liblapack', root=self.prefix, shared=shared, recursive=True ) ``` i.e., a property that returns the correct list of libraries for the LAPACK interface. We use the name `lapack_libs` rather than `libs` because `netlib-lapack` can also provide `blas`, and when it does it is provided as a separate library file. Using this name ensures that when dependents ask for `lapack` libraries, `netlib-lapack` will retrieve only the libraries associated with the `lapack` interface. Now we can finally install `armadillo ^netlib-lapack ^mpich`: ``` root@advanced-packaging-tutorial:/# spack install armadillo ^netlib-lapack ^mpich ... ==> Building armadillo [CMakePackage] ==> Executing phase: 'cmake' ==> Executing phase: 'build' ==> Executing phase: 'install' ==> Successfully installed armadillo Fetch: 0.01s. Build: 3.75s. Total: 3.76s. [+] /usr/local/opt/spack/linux-ubuntu16.04-x86_64/gcc-5.4.0/armadillo-8.100.1-sxmpu5an4dshnhickh6ykchyfda7jpyn ``` Since each implementation of a virtual package is responsible for locating the libraries associated with the interfaces it provides, dependents do not need to include special-case logic for different implementations and for example need only ask for `spec['blas'].libs`. 
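The same pattern extends naturally to a package that provides both `blas` and `lapack`: one property per interface, so each query returns only the matching libraries. The following is only a sketch of how such a provider might look (not the real `netlib-lapack` code; the library names are illustrative):

```python
# Sketch: interface-specific library properties for a hypothetical
# package providing both blas and lapack, following the pattern above.
@property
def blas_libs(self):
    shared = '+shared' in self.spec
    return find_libraries(
        'libblas', root=self.prefix, shared=shared, recursive=True
    )

@property
def lapack_libs(self):
    shared = '+shared' in self.spec
    return find_libraries(
        'liblapack', root=self.prefix, shared=shared, recursive=True
    )
```

With this in place, `spec['blas'].libs` and `spec['lapack'].libs` each resolve to the property for the interface actually being queried.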
### Other Packaging Topics[¶](#other-packaging-topics) #### Attach attributes to other packages[¶](#attach-attributes-to-other-packages) Build tools usually also provide a set of executables that can be used when another package is being installed. Spack gives you the opportunity to monkey-patch dependent modules and attach attributes to them. This helps make the packager experience as similar as possible to what would have been the manual installation of the same package. An example here is the `automake` package, which overrides `setup_dependent_package`: ``` def setup_dependent_package(self, module, dependent_spec): # Automake is very likely to be a build dependency, # so we add the tools it provides to the dependent module executables = ['aclocal', 'automake'] for name in executables: setattr(module, name, self._make_executable(name)) ``` so that every other package that depends on it can use directly `aclocal` and `automake` with the usual function call syntax of `Executable`: ``` aclocal('--force') ``` #### Extra query parameters[¶](#extra-query-parameters) An advanced feature of the Spec’s build-interface protocol is the support for extra parameters after the subscript key. In fact, any of the keys used in the query can be followed by a comma-separated list of extra parameters which can be inspected by the package receiving the request to fine-tune a response. Let’s look at an example and try to install `netcdf ^mpich`: ``` root@advanced-packaging-tutorial:/# spack install netcdf ^mpich ==> libsigsegv is already installed in /usr/local/opt/spack/linux-ubuntu16.04-x86_64/gcc-5.4.0/libsigsegv-2.11-fypapcprssrj3nstp6njprskeyynsgaz ==> m4 is already installed in /usr/local/opt/spack/linux-ubuntu16.04-x86_64/gcc-5.4.0/m4-1.4.18-r5envx3kqctwwflhd4qax4ahqtt6x43a ... 
==> Error: AttributeError: 'list' object has no attribute 'search_flags'
AttributeError: AttributeError: 'list' object has no attribute 'search_flags'
/usr/local/var/spack/repos/builtin/packages/netcdf/package.py:207, in configure_args:
     50    # used instead.
     51    hdf5_hl = self.spec['hdf5:hl']
     52    CPPFLAGS.append(hdf5_hl.headers.cpp_flags)
  >> 53    LDFLAGS.append(hdf5_hl.libs.search_flags)
     54
     55    if '+parallel-netcdf' in self.spec:
     56        config_args.append('--enable-pnetcdf')

See build log for details:
  /usr/local/var/spack/stage/netcdf-4.4.1.1-gk2xxhbqijnrdwicawawcll4t3c7dvoj/netcdf-4.4.1.1/spack-build-out.txt
```

We can see from the error that `netcdf` needs to know how to link the *high-level interface* of `hdf5`, and thus passes the extra parameter `hl` after the request to retrieve it. Clearly the implementation in the `hdf5` package is not complete, and we need to fix it:

```
root@advanced-packaging-tutorial:/# spack edit hdf5
```

If you followed the instructions correctly, the code added to the `libs` property should be similar to:

```
query_parameters = self.spec.last_query.extra_parameters
key = tuple(sorted(query_parameters))
libraries = query2libraries[key]
shared = '+shared' in self.spec
return find_libraries(
    libraries, root=self.prefix, shared=shared, recursive=True
)
```

Note the first line, which retrieves the extra parameters of the query. Now we can successfully complete the installation of `netcdf ^mpich`:

```
root@advanced-packaging-tutorial:/# spack install netcdf ^mpich
==> libsigsegv is already installed in /usr/local/opt/spack/linux-ubuntu16.04-x86_64/gcc-5.4.0/libsigsegv-2.11-fypapcprssrj3nstp6njprskeyynsgaz
==> m4 is already installed in /usr/local/opt/spack/linux-ubuntu16.04-x86_64/gcc-5.4.0/m4-1.4.18-r5envx3kqctwwflhd4qax4ahqtt6x43a
...
==> Installing netcdf ==> Using cached archive: /usr/local/var/spack/cache/netcdf/netcdf-4.4.1.1.tar.gz ==> Already staged netcdf-4.4.1.1-gk2xxhbqijnrdwicawawcll4t3c7dvoj in /usr/local/var/spack/stage/netcdf-4.4.1.1-gk2xxhbqijnrdwicawawcll4t3c7dvoj ==> Already patched netcdf ==> Building netcdf [AutotoolsPackage] ==> Executing phase: 'autoreconf' ==> Executing phase: 'configure' ==> Executing phase: 'build' ==> Executing phase: 'install' ==> Successfully installed netcdf Fetch: 0.01s. Build: 24.61s. Total: 24.62s. [+] /usr/local/opt/spack/linux-ubuntu16.04-x86_64/gcc-5.4.0/netcdf-4.4.1.1-gk2xxhbqijnrdwicawawcll4t3c7dvoj ```
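The `spec['hdf5:hl']` subscript used above can be pictured with a short Python sketch of the key-parsing and table-lookup idea. The `parse_query` helper and `query2libraries` table are hypothetical stand-ins for illustration, not Spack's actual implementation:

```python
# Illustrative sketch of the build-interface "extra parameters" protocol:
# a subscript key like 'hdf5:hl' splits into the package name and a
# comma-separated list of parameters, which the provider maps to a
# library list. The query2libraries table below is hypothetical.

def parse_query(key):
    """Split 'name:p1,p2' into ('name', ['p1', 'p2'])."""
    name, _, extra = key.partition(":")
    extra_parameters = [p for p in extra.split(",") if p]
    return name, extra_parameters


# How an hdf5-like provider might map sorted parameter tuples to libraries
query2libraries = {
    (): ["libhdf5"],
    ("hl",): ["libhdf5_hl", "libhdf5"],
}

name, extra = parse_query("hdf5:hl")
libraries = query2libraries[tuple(sorted(extra))]
print(name, libraries)
```

Sorting the parameters before the lookup means the same set of parameters always hits the same table entry regardless of the order the caller wrote them in.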
Package ‘SDMPlay’ October 12, 2022

Type Package
Title Species Distribution Modelling Playground
Version 2.0
Date 2021-09-14
Maintainer <NAME> <<EMAIL>>

Description Species distribution modelling (SDM) has been developed for several years to address conservation issues, assess the direct impact of human activities on ecosystems and predict the potential distribution shifts of invasive species (see Elith et al. 2006, Pearson 2007, Elith and Leathwick 2009). SDM relates species occurrences with environmental information and can predict species distribution on their entire occupied space. This approach has been increasingly applied to Southern Ocean case studies, but requires corrections in such a context, due to the broad-scale area, the limited number of presence records available and the spatial and temporal aggregations of these datasets. SDMPlay is a pedagogic package that will allow you to compute SDMs, to understand the overall method, and to produce model outputs. The package, along with its associated vignettes, highlights the different steps of model calibration and describes how to choose the best methods to generate accurate and relevant outputs. SDMPlay proposes codes to apply a popular machine learning approach, BRT (Boosted Regression Trees), and introduces MaxEnt (Maximum Entropy). It contains occurrences of marine species and environmental descriptor datasets as examples, associated with several vignette tutorials available at <https://github.com/charleneguillaumot/THESIS/tree/master/SDMPLAY_R_PACKAGE>.
License GPL-3
LazyData TRUE
Depends R (>= 3.5.0)
Encoding UTF-8
Imports raster, dismo, stats, base
Suggests maptools, testthat, grDevices, knitr, rmarkdown, markdown, ncdf4, rgdal, graphics, sp, rJava, gbm
RoxygenNote 7.1.1
VignetteBuilder knitr
NeedsCompilation no
Author <NAME> [aut, cre], <NAME> [aut], <NAME> [aut], <NAME> [aut], <NAME> [aut]
Repository CRAN
Date/Publication 2021-09-15 06:40:05 UTC

R topics documented:
brisaster.antarcticus
clock2
clock3
clock4
clock6
compute.brt
compute.maxent
ctenocidaris.nutrix
delim.area
depth_SO
Glabraster.antarctica
ice_cover_mean_SO
null.model
Odontaster.validus
predictors1965_1974
predictors2005_2012
predictors2200AIB
SDMdata.quality
SDMeval
SDMtab
seafloor_temp_2005_2012_mean_SO
worldmap

brisaster.antarcticus   Presence-only records of the echinoid Brisaster antarcticus (Kerguelen Plateau)

Description
Dataset that contains the presence records of the echinoid species Brisaster antarcticus reported from several oceanographic campaigns, including RV Marion Dufresne MD03 1974 & MD04 1975, POKER 2 (2010) and PROTEKER 2013, 2014, 2015. Brisaster antarcticus (Doderlein 1906) is distributed from 3.5 to 75.6W and -53.35 to -45.95S in the Southern Ocean. The species is mainly found around Kerguelen and Crozet Islands. Brisaster antarcticus commonly lives from 100 to 600 meters depth. It is a detritivorous species for which reproduction includes dispersal (David et al. 2005). See Guillaumot et al. (2016) for more details.
Usage
data('Brisaster.antarcticus')

Format
A data frame containing 43 occurrences and 13 descriptive variables of the associated environmental conditions:
• id Occurrence number indicator
• scientific.name Species scientific name
• scientific.name.authorship Author of the species description
• genus Genus scientific name and its associated author
• family Family scientific name and its associated author
• order.and.higher.taxonomic.range Order scientific name and its associated author
• decimal.Longitude Longitude in decimal degrees
• decimal.Latitude Latitude in decimal degrees
• depth Depth in meters
• campaign Campaign origin of the data
• reference Campaign reference
• vessel Campaign vessel

References
<NAME>, <NAME>, <NAME>, <NAME> (2005) Antarctic Echinoidea. Synopses of the Antarctic Benthos 10.
Doderlein L (1906) Die Echinoiden der Deutschen Tiefsee-Expedition. Deutsche Tiefsee-Expedition 1898-1899. 5: 63-290.
Guillaumot C, <NAME>, <NAME>, <NAME> & <NAME> (2016). Echinoids of the Kerguelen Plateau: Occurrence data and environmental setting for past, present, and future species distribution modelling. Zookeys, 630: 1-17.
Examples
data('Brisaster.antarcticus')
x <- brisaster.antarcticus # (be careful of the capital letter distinction)

# plot of the occurrences:
# selecting the species according to the campaigns
brisaster7475 <- subset(x, x$year==1974 | x$year==1975)
brisaster20102015 <- subset(x, x$campaign=='POKER II' | x$campaign=='PROTEKER')

# drawing the background (depth)
library(grDevices)
blue.palette <- colorRampPalette(c('blue','deepskyblue','azure'))(100)
data('predictors1965_1974')
depth <- raster::subset(predictors1965_1974, 1)
raster::plot(depth, col=blue.palette, main="Brisaster antarcticus occurrences")
data('worldmap')

# adding the occurrence data to the background
points(worldmap, type="l")
points(brisaster7475[,c('decimal.Longitude','decimal.Latitude')], col='orange', pch=16)
points(brisaster20102015[,c('decimal.Longitude','decimal.Latitude')], col='darkgreen', pch=16)
legend('bottomleft',
       legend=c('Brisaster antarcticus 1974-1975','Brisaster antarcticus 2010-2015'),
       col=c('orange','darkgreen'), pch=c(15, 15), cex=0.9)

clock2                  Spatial cross-validation procedure, CLOCK-2 method

Description
Cross-validation procedures aim at splitting the initial occurrence dataset into a training subset that is used to build the model; the remaining data can later be used to test model predictions. Spatially splitting training and test datasets helps reduce the influence of data spatial aggregation on model evaluation performance (Guillaumot et al. 2019, 2021).

Usage
clock2(occ, bg.coords)

Arguments
occ         Dataframe with longitude (column 1) and latitude (column 2) of the presence-only data. Decimal longitude and latitude are required.
bg.coords   Dataframe with longitude (column 1) and latitude (column 2) of the sampled background records. Decimal longitude and latitude are required.

Details
See Guillaumot et al. (2019) and vignette tutorial #4 "Spatial cross-validation" for complete examples and details.
Value
A list that details the group to which each data point (presence or background record) belongs, and the random longitude value that was sampled to initiate the CLOCK scheme. list(occ.grp=occ.grp, bg.coords.grp=bg.coords.grp, tirage)

References
<NAME>, <NAME>, <NAME>, <NAME>, <NAME>, <NAME>, <NAME>, <NAME> (2019). Broad-scale species distribution models applied to data-poor areas. Progress in Oceanography, 175, 198-207.
<NAME>, <NAME>, <NAME> (2021). Species Distribution Modelling of the Southern Ocean: methods, main limits and some solutions. Antarctic Science.

Examples
#See Tutorial #4 "Spatial cross-validation"

clock3                  Spatial cross-validation procedure, CLOCK-3 method

Description
Cross-validation procedures aim at splitting the initial occurrence dataset into a training subset that is used to build the model; the remaining data can later be used to test model predictions. Spatially splitting training and test datasets helps reduce the influence of data spatial aggregation on model evaluation performance (Guillaumot et al. 2019, 2021).

Usage
clock3(occ, bg.coords)

Arguments
occ         Dataframe with longitude (column 1) and latitude (column 2) of the presence-only data. Decimal longitude and latitude are required.
bg.coords   Dataframe with longitude (column 1) and latitude (column 2) of the sampled background records. Decimal longitude and latitude are required.

Details
See Guillaumot et al. (2019) and vignette tutorial #4 "Spatial cross-validation" for complete examples and details.

Value
A list that details the group to which each data point (presence or background record) belongs, and the random longitude value that was sampled to initiate the CLOCK scheme. list(occ.grp=occ.grp, bg.coords.grp=bg.coords.grp, tirage)

References
<NAME>, <NAME>, <NAME>, <NAME>, <NAME>, <NAME>, <NAME>, <NAME> (2019). Broad-scale species distribution models applied to data-poor areas. Progress in Oceanography, 175, 198-207.
<NAME>, <NAME>, <NAME> (2021). Species Distribution Modelling of the Southern Ocean: methods, main limits and some solutions. Antarctic Science.

Examples
#See Tutorial #4 "Spatial cross-validation"

clock4                  Spatial cross-validation procedure, CLOCK-4 method

Description
Cross-validation procedures aim at splitting the initial occurrence dataset into a training subset that is used to build the model; the remaining data can later be used to test model predictions. Spatially splitting training and test datasets helps reduce the influence of data spatial aggregation on model evaluation performance (Guillaumot et al. 2019, 2021).

Usage
clock4(occ, bg.coords)

Arguments
occ         Dataframe with longitude (column 1) and latitude (column 2) of the presence-only data. Decimal longitude and latitude are required.
bg.coords   Dataframe with longitude (column 1) and latitude (column 2) of the sampled background records. Decimal longitude and latitude are required.

Details
See Guillaumot et al. (2019) and vignette tutorial #4 "Spatial cross-validation" for complete examples and details.

Value
A list that details the group to which each data point (presence or background record) belongs, and the random longitude value that was sampled to initiate the CLOCK scheme. list(occ.grp=occ.grp, bg.coords.grp=bg.coords.grp, tirage)

References
<NAME>, <NAME>, <NAME>, <NAME>, <NAME>, <NAME>, <NAME>, <NAME> (2019). Broad-scale species distribution models applied to data-poor areas. Progress in Oceanography, 175, 198-207.
<NAME>, <NAME>, <NAME> (2021). Species Distribution Modelling of the Southern Ocean: methods, main limits and some solutions. Antarctic Science.

Examples
#See Tutorial #4 "Spatial cross-validation"

clock6                  Spatial cross-validation procedure, CLOCK-6 method

Description
Cross-validation procedures aim at splitting the initial occurrence dataset into a training subset that is used to build the model; the remaining data can later be used to test model predictions.
Spatially splitting training and test datasets helps reduce the influence of data spatial aggregation on model evaluation performance (Guillaumot et al. 2019, 2021).

Usage
clock6(occ, bg.coords)

Arguments
occ         Dataframe with longitude (column 1) and latitude (column 2) of the presence-only data. Decimal longitude and latitude are required.
bg.coords   Dataframe with longitude (column 1) and latitude (column 2) of the sampled background records. Decimal longitude and latitude are required.

Details
See Guillaumot et al. (2019) and vignette tutorial #4 "Spatial cross-validation" for complete examples and details.

Value
A list that details the group to which each data point (presence or background record) belongs, and the random longitude value that was sampled to initiate the CLOCK scheme. list(occ.grp=occ.grp, bg.coords.grp=bg.coords.grp, tirage)

References
<NAME>, <NAME>, <NAME>, <NAME>, <NAME>, <NAME>, <NAME>, <NAME> (2019). Broad-scale species distribution models applied to data-poor areas. Progress in Oceanography, 175, 198-207.
<NAME>, <NAME>, <NAME> (2021). Species Distribution Modelling of the Southern Ocean: methods, main limits and some solutions. Antarctic Science.

Examples
#See Tutorial #4 "Spatial cross-validation"

compute.brt             Compute BRT (Boosted Regression Trees) model

Description
Compute species distribution models with Boosted Regression Trees

Usage
compute.brt(x, proj.predictors, tc = 2, lr = 0.001, bf = 0.75, n.trees = 50,
            step.size = n.trees, n.folds = 10, fold.vector = NULL)

Arguments
x                SDMtab object or dataframe that contains id, longitude, latitude and values of environmental descriptors at corresponding locations
proj.predictors  RasterStack of environmental descriptors on which the model will be projected
tc               Integer. Tree complexity. Sets the complexity of individual trees
lr               Learning rate. Sets the weight applied to individual trees
bf               Bag fraction.
Sets the proportion of observations used in selecting variables
n.trees          Number of initial trees to fit. Set at 50 by default
step.size        Number of trees to add at each cycle, set equal to n.trees by default
n.folds          Number of subsets into which the initial dataset (x) is divided for model evaluation procedures (cross-validation). Set to 10 by default.
fold.vector      Vector indicating the fold group to which each data point belongs.

Details
The function realises a BRT model according to the gbm.step function provided by Elith et al. (2008). See the publication for further information about setting decisions. The map produced provides species presence probability on the projected area.

Value
A list of 5:
• model$algorithm "brt" character string
• model$data x dataframe that was used to implement the model
• model$response Parameters returned by the model object: list of 41, see gbm.step for more info
• model$raster.prediction Raster layer that predicts the potential species distribution
• model$eval.stats List of elements to evaluate the model: AUC, maxSSS, COR, pCOR, TSS, ntrees, residuals

Note
See Barbet-Massin et al. (2012) for information about background selection to implement BRT models.

References
<NAME>, <NAME> & <NAME> (2008) A working guide to boosted regression trees. Journal of Animal Ecology, 77(4): 802-813.
<NAME>, <NAME>, <NAME> & <NAME> (2012) Selecting pseudo-absences for species distribution models: how, where and how many? Methods in Ecology and Evolution, 3(2): 327-338.
See Also
gbm.step

Examples
## Not run:
#Download the presence data
data('ctenocidaris.nutrix')
occ <- ctenocidaris.nutrix
# select longitude and latitude coordinates among all the information
occ <- ctenocidaris.nutrix[,c('decimal.Longitude','decimal.Latitude')]
#Download some environmental predictors
data(predictors2005_2012)
envi <- predictors2005_2012
envi
#Create a SDMtab matrix
SDMtable_ctenocidaris <- SDMPlay:::SDMtab(xydata=occ, unique.data=FALSE, same=TRUE)
#Run the model
model <- SDMPlay:::compute.brt(x=SDMtable_ctenocidaris, proj.predictors=envi, lr=0.0005)
#Plot the partial dependence plots
dismo::gbm.plot(model$response)
#Get the contribution of each variable to the model
model$response$contributions
#Get the interactions between variables
dismo::gbm.interactions(model$response)
#Plot some interactions
int <- dismo::gbm.interactions(model$response)
dismo::gbm.perspec(model$response, int$rank.list[1,1], int$rank.list[1,3])
#Plot the map prediction
library(grDevices)
# add nice colors
palet.col <- colorRampPalette(c('deepskyblue','green','yellow','red'))(80)
raster::plot(model$raster.prediction, col=palet.col,
             main="Prediction map of Ctenocidaris nutrix distribution")
data('worldmap')
#add data
points(worldmap, type="l")
points(occ, col='black', pch=16)
# REMARK: see more examples in the vignette tutorials
## End(Not run)

compute.maxent          Compute MaxEnt model

Description
Compute species distribution models with MaxEnt (Maximum Entropy)

Usage
compute.maxent(x, proj.predictors)

Arguments
x                SDMtab object or dataframe that contains id, longitude, latitude and values of environmental descriptors at corresponding locations.
proj.predictors  RasterStack of environmental descriptors on which the model will be projected

Details
The MaxEnt species distribution model minimizes the relative entropy between environmental descriptors and presence data. Further information is provided in the references below. compute.maxent uses the functionalities of the maxent function.
This function uses the MaxEnt species distribution software, a Java program that can be downloaded at https://github.com/charleneguillaumot/SDMPlay. In order to run compute.maxent, put the 'maxent.jar' file downloaded at this address in the 'java' folder of the dismo package (path obtained with the system.file('java', package='dismo') command).

Value
A list of 4:
• model$algorithm "maxent" character string
• model$data x dataframe that was used to implement the model
• model$response Parameters returned by the model object
• model$raster.prediction Raster layer that predicts the potential species distribution

Note
To implement MaxEnt models, Phillips & Dudik (2008) advise a large number of background data. You can also find further information about background selection in Barbet-Massin et al. (2012).

References
<NAME>, <NAME>, <NAME> & <NAME> (2012) Selecting pseudo-absences for species distribution models: how, where and how many? Methods in Ecology and Evolution, 3(2): 327-338.
<NAME>, <NAME>, <NAME>, <NAME>, <NAME> & <NAME> (2011) A statistical explanation of MaxEnt for ecologists. Diversity and Distributions 17:43-57.
<NAME>, <NAME> & <NAME> (2004) A maximum entropy approach to species distribution modeling. Proceedings of the Twenty-First International Conference on Machine Learning: 655-662.
<NAME>, <NAME> & <NAME> (2006) Maximum entropy modeling of species geographic distributions. Ecological Modelling 190:231-259.
<NAME> and <NAME> (2008) Modeling of species distributions with MaxEnt: new extensions and a comprehensive evaluation. Ecography 31(2): 161-175.
See Also
maxent

Examples
#Download the presence data
data('ctenocidaris.nutrix')
occ <- ctenocidaris.nutrix
# select longitude and latitude coordinates among all the information
occ <- ctenocidaris.nutrix[,c('decimal.Longitude','decimal.Latitude')]
#Download some environmental predictors
data(predictors2005_2012)
envi <- predictors2005_2012
envi
#Create a SDMtab matrix
SDMtable_ctenocidaris <- SDMPlay:::SDMtab(xydata=occ, unique.data=FALSE, same=TRUE)
#only run if the maxent.jar file is available, in the right folder
#jar <- paste(system.file(package="dismo"), "/java/maxent.jar", sep='')
# Check first if maxent can be run (normally not part of your script)
# (file.exists(jar) & require(rJava)) == TRUE ??
# rJava may pose a problem to load automatically within this package
# please load it manually, eventually using the archives available from CRAN
# Run the model
#model <- SDMPlay:::compute.maxent(x=SDMtable_ctenocidaris, proj.predictors=envi)
# Plot the map prediction
library(grDevices)
# add nice colors
#palet.col <- colorRampPalette(c('deepskyblue','green','yellow','red'))(80)
#raster::plot(model$raster.prediction, col=palet.col)
data('worldmap')
# add data
points(worldmap, type="l")
#points(occ, col='black', pch=16)
# Get the partial dependence curves
#dismo::response(model$response)
# Get the percentage of contribution of each variable to the model
#plot(model$response)
# Get all the information provided by the model on an html document
#model$response

ctenocidaris.nutrix     Presence-only records of the echinoid Ctenocidaris nutrix (Kerguelen Plateau)

Description
Dataset that contains the presence records of the echinoid species Ctenocidaris nutrix reported from several campaigns, including RV Marion Dufresne MD03 1974 & MD04 1975, POKER 2 (2010) and PROTEKER 2013, 2014, 2015. Ctenocidaris nutrix (Thomson 1876) is a broad-range species, distributed from -70.5W to 143.7E and -76.13 to -47.18S in the Southern Ocean.
The species is mainly found around the Kerguelen Plateau, and near the Weddell Sea and Scotia Ridge regions. The species is known from littoral waters down to 800 m. It is a carnivorous, direct-developing species that broods its young (David et al. 2005). Ctenocidaris nutrix is considered an indicator species of Vulnerable Marine Ecosystems (VME) by the CCAMLR. See Guillaumot et al. (2016) for more details.

Usage
data('ctenocidaris.nutrix')

Format
A data frame containing 125 occurrences and 13 descriptive variables:
• id Occurrence number indicator
• scientific.name Species scientific name
• scientific.name.authorship Author of the species description
• genus Genus scientific name and its associated author
• family Family scientific name and its associated author
• order.and.higher.taxonomic.range Order scientific name and its associated author
• decimal.Longitude Longitude in decimal degrees
• decimal.Latitude Latitude in decimal degrees
• depth Depth in meters
• campaign Campaign origin of the data
• reference Campaign reference
• vessel Campaign vessel

References
<NAME>, <NAME>, <NAME>, <NAME> (2005) Antarctic Echinoidea. Synopses of the Antarctic Benthos 10.
<NAME>, <NAME>, <NAME>, <NAME> & <NAME> (2016). Echinoids of the Kerguelen Plateau: Occurrence data and environmental setting for past, present, and future species distribution modelling. Zookeys, 630: 1-17.
<NAME> (1876) Notice of some peculiarities in the mode of propagation of certain echinoderms of the southern seas. J. Linn. Soc. London 13: 55-79.
Examples
data('ctenocidaris.nutrix')
x <- ctenocidaris.nutrix

# plot of the occurrences:
# selecting the species according to the campaigns
ctenocidaris7475 <- base::subset(x, x$year==1974 | x$year==1975)
ctenocidaris20102015 <- base::subset(x, x$campaign=='POKER II' | x$campaign=='PROTEKER')

# drawing the background (depth)
library(grDevices)
blue.palette <- colorRampPalette(c('blue','deepskyblue','azure'))(100)
data('predictors1965_1974')
depth <- raster::subset(predictors1965_1974, 1)
raster::plot(depth, col=blue.palette, main="Ctenocidaris nutrix occurrences")

# adding the occurrence data to the background
points(ctenocidaris7475[,c('decimal.Longitude','decimal.Latitude')], col='orange', pch=16)
points(ctenocidaris20102015[,c('decimal.Longitude','decimal.Latitude')], col='darkgreen', pch=16)
legend('bottomleft',
       legend=c('Ctenocidaris nutrix 1974-1975','Ctenocidaris nutrix 2010-2015'),
       col=c('orange','darkgreen'), pch=c(15, 15), cex=0.9)

delim.area              RasterStack preparation for modelling

Description
Delimit the RasterStack of environmental descriptors to a precise extent (latitude, longitude, maximum depth...) before computing species distribution modelling

Usage
delim.area(predictors, longmin, longmax, latmin, latmax, interval=NULL,
           crslayer = raster::crs(predictors))

Arguments
predictors  RasterStack object that contains the environmental predictors used for species distribution models
longmin     Expected minimum longitude of the RasterStack
longmax     Expected maximum longitude of the RasterStack
latmin      Expected minimum latitude of the RasterStack
latmax      Expected maximum latitude of the RasterStack
interval    Vector of 2. Minimum and maximum values outside of which the pixel values of the RasterStack first layer will be assigned NA values. Set as NULL by default (no treatment).
crslayer    CRS object or character string describing a projection and datum.
The crs of the original RasterStack is set by default.

Details
interval enables the user to delimit the RasterStack according to an interval of values applied on the first layer of the RasterStack. It is often applied to depth in SDM studies. Missing values contained in the provided RasterStack must be set as NA values.

Value
RasterLayer object

See Also
stack, raster, origin, extent

Examples
data('predictors2005_2012')
envi <- predictors2005_2012
r <- SDMPlay:::delim.area(predictors = envi, longmin = 70, longmax = 75,
                          latmin = -50, latmax = -40, interval = c(0,-1000))
r
library(grDevices)
# plot the result with nice colors
palet.col <- colorRampPalette(c('deepskyblue','green','yellow','red'))(80)
raster::plot(r, col=palet.col)

depth_SO                Environmental descriptor example (depth, Southern Ocean)

Description
Depth layer at the scale of the Southern Ocean at 0.1° resolution

Usage
data("depth_SO")

Format
RasterLayer. Grid: nrow= 350, ncol= 3600, ncells= 1260000 pixels. Spatial resolution: 0.1. Spatial extent: -180, 180, -80, -45 (longmin, longmax, latmin, latmax); Crs: +proj=longlat +datum=WGS84 +no_defs +ellps=WGS84 +towgs84=0,0,0. Origin=0

Source
[ADD website](https://data.aad.gov.au/metadata/records/fulldisplay/environmental_layers)

Examples
library(raster)
data("depth_SO")
data("ice_cover_mean_SO")
data("seafloor_temp_2005_2012_mean_SO")
predictors_stack_SO <- raster::stack(depth_SO, ice_cover_mean_SO, seafloor_temp_2005_2012_mean_SO)
names(predictors_stack_SO) <- c("depth","ice_cover_mean","seafloor_temp_mean")
predictors_stack_SO

Glabraster.antarctica   Presence-only records of the sea star Glabraster antarctica (Southern Ocean)

Description
Dataset that contains the presence data of the sea star species Glabraster antarctica reported in the Southern Ocean. The detailed description of the dataset is available in Moreau et al.
(2018).

Usage
data(Glabraster.antarctica)

Format
A two-column table (longitude, latitude)

Source
<NAME>., <NAME>., <NAME>., <NAME>., <NAME>., <NAME>., ... & <NAME>. (2018). Antarctic and sub-Antarctic Asteroidea database. ZooKeys, (747), 141.

Examples
library(SDMPlay)
data(Glabraster.antarctica)
head(Glabraster.antarctica)

ice_cover_mean_SO       Environmental descriptor example (ice cover, Southern Ocean)

Description
Average ice cover layer at the scale of the Southern Ocean at 0.1° resolution

Usage
data("ice_cover_mean_SO")

Format
RasterLayer. Grid: nrow= 350, ncol= 3600, ncells= 1260000 pixels. Spatial resolution: 0.1. Spatial extent: -180, 180, -80, -45 (longmin, longmax, latmin, latmax); Crs: +proj=longlat +datum=WGS84 +no_defs +ellps=WGS84 +towgs84=0,0,0. Origin=0

Source
[ADD website](https://data.aad.gov.au/metadata/records/fulldisplay/environmental_layers)

Examples
library(raster)
data("depth_SO")
data("ice_cover_mean_SO")
data("seafloor_temp_2005_2012_mean_SO")
predictors_stack_SO <- raster::stack(depth_SO, ice_cover_mean_SO, seafloor_temp_2005_2012_mean_SO)
names(predictors_stack_SO) <- c("depth","ice_cover_mean","seafloor_temp_mean")
predictors_stack_SO

null.model              Compute null model

Description
Compute null model. Null models are useful tools to provide an a priori evaluation of the influence of the spatial structure of presence records (i.e. the influence of aggregated sampling) on model predictions. Null model type #1 performs a model by randomly sampling locations from the ensemble of visited stations, therefore simulating the influence of sampling effort on model predictions. Null model type #2 samples data in the entire study area, and reflects what should be predicted if occurrences were randomly distributed in the area. Models should be replicated nb.rep times in order to estimate statistical scores.
Usage
null.model(predictors, xy = NULL, type = c(1, 2), algorithm = c("brt", "maxent"),
           nb, unique.data = T, same = T, background.nb = nb, nb.rep = 10,
           tc = 2, lr = 0.001, bf = 0.75, n.trees = 50, step.size = n.trees)

Arguments
predictors     Rasterstack object that contains the predictors that will be used for species distribution models
xy             Dataframe that contains the longitude and latitude of the visited pixels. Information required to perform a type 1 null model. Default= NULL
type           Null model type to perform. type=1 to perform a null model based on visited areas, type=2 to predict a random model
algorithm      Algorithm to compute the null model. 'brt' or 'maxent'
nb             Number of points to randomly sample (among the matrix of visited pixels for 'type=1' models or in the entire geographic space for 'type=2')
unique.data    If TRUE (default), pixel duplicates contained in 'xy' are removed
same           If TRUE (default), the number of background data sampled in the area will be 'nb'
background.nb  Number of background data to sample. If this argument is filled, 'same' is set FALSE.
nb.rep         Number of null model replicates. See compute.brt
tc             BRT parameter. Integer. Tree complexity. Sets the complexity of individual trees. See compute.brt
lr             BRT parameter. Learning rate. Sets the weight applied to individual trees. See compute.brt
bf             BRT parameter. Bag fraction. Sets the proportion of observations used in selecting variables. See compute.brt
n.trees        BRT parameter. Number of initial trees to fit. Set at 50 by default. See compute.brt
step.size      BRT parameter. Number of trees to add at each cycle. See compute.brt

Details
Data are sampled without replacement. Each time the model is run, new data (presence and background data) are sampled.

Value
List of 6:
• $inputs Remembers the arguments used to implement the null.model function
• $eval Evaluation parameters of each model that composes the null model. See SDMeval for further information
• $eval.null Evaluation of the mean null model.
See SDMeval for further information
• $pred.stack RasterStack of all the models produced to build the null model
• $pred.mean Raster layer. Null model prediction. Mean of the $pred.stack RasterStack
• $correlation Spearman rank test value between the different maps produced

Note
Increasing the number of replicates will enhance null model relevance (we advise nb.rep=100 as a minimum). Please note that processing may take a few minutes to hours.
If you want to build a MaxEnt model, compute.maxent uses the functionalities of the maxent function. This function uses the MaxEnt species distribution software, a Java program that can be downloaded at https://github.com/charleneguillaumot/SDMPlay. In order to run compute.maxent, put the 'maxent.jar' file downloaded at this address in the 'java' folder of the dismo package (path obtained with the system.file('java', package='dismo') command). MaxEnt 3.3.3b version or higher is required.

See Also
nicheOverlap: compare prediction maps
.jpackage: initialize dismo for Java

Examples
## Not run:
# Load environmental predictors
data(predictors2005_2012)
envi <- predictors2005_2012
envi

# Realise a null model type #2 with BRT
#--------------------------------------
modelN2 <- SDMPlay:::null.model(xy=NULL, predictors=envi, type=2, algorithm='brt',
                                nb=300, unique.data=TRUE, same=TRUE, nb.rep=2, lr=0.0005)
# Look at the inputs used to implement the model
modelN2$input
# Get the evaluation of the models produced
modelN2$eval
# Get the evaluation of the mean of all these produced models (i.e.
evaluation # of the null model) modelN2$eval.null # Get the values of Spearman correlations between the all the prediction maps produced modelN2$correlation # Plot the mean null model map with nice colors library(grDevices) palet.col <- colorRampPalette(c('deepskyblue','green','yellow', 'red'))(80) data('worldmap') raster::plot(modelN2$pred.mean, col=palet.col) points(worldmap, type="l") ## End(Not run) Odontaster.validus Presence-only records of the sea star Odontaster validus (Southern Ocean) Description Dataset that contains the presence data of the sea star species Odontaster validus reported in the Southern Ocean. The detailed description of the dataset is available in Moreau et al. (2018) Usage data(Odontaster.validus) Format A two columns table (longitude, latitude) Source <NAME>., <NAME>., <NAME>., <NAME>., <NAME>., <NAME>., ... & Jażdż<NAME>. (2018). Antarctic and sub-Antarctic Asteroidea database. ZooKeys, (747), 141. Examples data(Odontaster.validus) head(Odontaster.validus) predictors1965_1974 Environmental descriptors for 1965-1974 (Kerguelen Plateau) Description RasterStack that compiles 15 environmental descriptors on the Kerguelen Plateau (63/81W; -46/- 56S). See Guillaumot et al. (2016) for more information Usage data('predictors1965_1974') Format RasterStack of 15 environmental descriptors. Grid: nrow= 100, ncol= 179, ncells= 17900 pixels. Spatial resolution: 0.1. Spatial extent: 63/81W; -46/-56S. Crs : +proj=longlat +datum=WGS84 +no_defs +ellps=WGS84 +towgs84=0,0,0. Origin=0 • depth Bathymetric grid around the Kerguelen Plateau Unit=meter. Reference=Guillaumot et al. (2016), derived from Smith & Sandwell (1997) https://topex.ucsd.edu/WWW_html/mar_topo.html • seasurface_temperature_mean_1965_1974 Mean sea surface temperature over 1965-1974 Unit=Celsius degrees. 
Reference= World Ocean Circulation Experiment 2013 https://www.nodc.noaa.gov/OC5/woa13/woa13data.html • seasurface_temperature_amplitude_1965_1974 Amplitude between mean summer and mean winter sea surface temperature over 1965-1974 Unit=Celsius degrees. Reference= World Ocean Circulation Experiment 2013 https://www.nodc.noaa.gov/OC5/woa13/woa13data.html • seafloor_temperature_mean_1965_1974 Mean seafloor temperature over 1965-1974 Unit=Celsius degrees. Reference=Guillaumot et al. (2016), derived from World Ocean Circu- lation Experiment 2013 sea surface temperature layers https://www.nodc.noaa.gov/OC5/woa13/woa13data.html • seafloor_temperature_amplitude_1965_1974 Amplitude between mean summer and mean winter seafloor temperature over 1965-1974 Unit=Celsius degrees. Reference=Guillaumot et al. (2016), derived from World Ocean Circu- lation Experiment 2013 sea surface temperature layers https://www.nodc.noaa.gov/OC5/woa13/woa13data.html • seasurface_salinity_mean_1965_1974 Mean sea surface salinity over 1965-1974 Unit=PSS. Reference= World Ocean Circulation Experiment 2013 https://www.nodc.noaa.gov/OC5/woa13/woa13data.html • seasurface_salinity_amplitude_1965_1974 Amplitude between mean summer and mean winter sea surface salinity over 1965-1974 Unit=PSS. Reference= World Ocean Circulation Experiment 2013 https://www.nodc.noaa.gov/OC5/woa13/woa13data.html • seafloor_salinity_mean_1965_1974 Mean seafloor salinity over 1965-1974 Unit=PSS. Reference= Guillaumot et al. (2016), derived from World Ocean Circulation Ex- periment 2013 sea surface salinity layers https://www.nodc.noaa.gov/OC5/woa13/woa13data.html • seafloor_salinity_amplitude_1965_1974 Amplitude between mean summer and mean winter seafloor salinity over 1965-1974 Unit=PSS. Reference= Guillaumot et al. 
(2016, submitted), derived from World Ocean Circulation Experiment 2013 sea surface salinity layers https://www.nodc.noaa.gov/OC5/woa13/woa13data.html
• chlorophyla_summer_mean_2002_2009 Surface chlorophyll a concentration. Summer mean over 2002-2009 Unit=mg/m3. Reference=MODIS AQUA (NASA) 2010 https://oceandata.sci.gsfc.nasa.gov/
• geomorphology Geomorphologic features Unit= 27 categories. Reference= ATLAS ETOPO2 2014 (Douglass et al. 2014)
• sediments Sediment features Unit= 14 categories. Reference= McCoy (1991), updated by Griffiths 2014 (unpublished)
• slope Bathymetric slope Unitless. Reference= Smith & Sandwell (1997)
• seafloor_oxygen_mean_1955_2012 Mean seafloor oxygen concentration over 1955-2012 Unit=mL/L. Reference= Guillaumot et al. (2016), derived from World Ocean Circulation Experiment 2013 sea surface oxygen concentration layers https://www.nodc.noaa.gov/OC5/woa13/woa13data.html
• roughness Rugosity index (difference between minimal and maximal depth values of the 8 neighbour pixels) Unit= meters. Reference=Guillaumot et al. (2016), derived from bathymetric layer

References

Douglass L, <NAME>, <NAME>, <NAME>, <NAME>, <NAME>, <NAME>, Post A, Brandt A, <NAME> (2014) A hierarchical classification of benthic biodiversity and assessment of protected areas in the Southern Ocean. PloS one 9(7): e100551. doi: 10.1371/journal.pone.0100551.

<NAME>., Martin, A., <NAME>., <NAME>. and <NAME>. (2016) Environmental parameters (1955-2012) for echinoids distribution modelling on the Kerguelen Plateau. Australian Antarctic Data Centre - doi:10.4225/15/578ED5A08050F

McCoy FW (1991) Southern Ocean sediments: circum-Antarctic to 30S. Marine Geological and Geophysical Atlas of the circum-Antarctic to 30S. (ed. by <NAME>). Antarctic Research Series.

<NAME>, Sandwell D (1997) Global seafloor topography from satellite altimetry and ship depth soundings. Science 277(5334): 1957-1962. doi: 10.1126/science.277.5334.1956.
Examples

data('predictors1965_1974')
raster::plot(predictors1965_1974)

predictors2005_2012 Environmental descriptors for 2005-2012 (Kerguelen Plateau)

Description

RasterStack that compiles 15 environmental descriptors on the Kerguelen Plateau (63/81W; -46/-56S). See Guillaumot et al. (2016) for more information

Usage

data('predictors2005_2012')

Format

RasterStack of 15 environmental descriptors. Grid: nrow= 100, ncol= 179, ncells= 17900 pixels. Spatial resolution: 0.1. Spatial extent: 63/81W; -46/-56S. Crs : +proj=longlat +datum=WGS84 +no_defs +ellps=WGS84 +towgs84=0,0,0. Origin=0

• depth Bathymetric grid around the Kerguelen Plateau Unit=meter. Reference=Guillaumot et al. (2016), derived from Smith & Sandwell (1997) https://topex.ucsd.edu/WWW_html/mar_topo.html
• seasurface_temperature_mean_2005_2012 Mean sea surface temperature over 2005-2012 Unit=Celsius degrees. Reference= World Ocean Circulation Experiment 2013 https://www.nodc.noaa.gov/OC5/woa13/woa13data.html
• seasurface_temperature_amplitude_2005_2012 Amplitude between mean summer and mean winter sea surface temperature over 2005-2012 Unit=Celsius degrees. Reference= World Ocean Circulation Experiment 2013 https://www.nodc.noaa.gov/OC5/woa13/woa13data.html
• seafloor_temperature_mean_2005_2012 Mean seafloor temperature over 2005-2012 Unit=Celsius degrees. Reference=Guillaumot et al. (2016), derived from World Ocean Circulation Experiment 2013 sea surface temperature layers https://www.nodc.noaa.gov/OC5/woa13/woa13data.html
• seafloor_temperature_amplitude_2005_2012 Amplitude between mean summer and mean winter seafloor temperature over 2005-2012 Unit=Celsius degrees. Reference=Guillaumot et al. (2016, submitted), derived from World Ocean Circulation Experiment 2013 sea surface temperature layers https://www.nodc.noaa.gov/OC5/woa13/woa13data.html
• seasurface_salinity_mean_2005_2012 Mean sea surface salinity over 2005-2012 Unit=PSS.
Reference= World Ocean Circulation Experiment 2013 https://www.nodc.noaa.gov/OC5/woa13/woa13data.html
• seasurface_salinity_amplitude_2005_2012 Amplitude between mean summer and mean winter sea surface salinity over 2005-2012 Unit=PSS. Reference= World Ocean Circulation Experiment 2013 https://www.nodc.noaa.gov/OC5/woa13/woa13data.html
• seafloor_salinity_mean_2005_2012 Mean seafloor salinity over 2005-2012 Unit=PSS. Reference= Guillaumot et al. (2016), derived from World Ocean Circulation Experiment 2013 sea surface salinity layers. https://www.nodc.noaa.gov/OC5/woa13/woa13data.html
• seafloor_salinity_amplitude_2005_2012 Amplitude between mean summer and mean winter seafloor salinity over 2005-2012 Unit=PSS. Reference= Guillaumot et al. (2016), derived from World Ocean Circulation Experiment 2013 sea surface salinity layers https://www.nodc.noaa.gov/OC5/woa13/woa13data.html
• chlorophyla_summer_mean_2002_2009 Surface chlorophyll a concentration. Summer mean over 2002-2009 Unit=mg/m3. Reference=MODIS AQUA (NASA) 2010 https://oceandata.sci.gsfc.nasa.gov/
• geomorphology Geomorphologic features Unit= 27 categories. Reference= ATLAS ETOPO2 2014 (Douglass et al. 2014)
• sediments Sediment features Unit= 14 categories. Reference= McCoy (1991), updated by Griffiths 2014 (unpublished).
• slope Bathymetric slope Unitless. Reference= Smith & Sandwell (1997)
• seafloor_oxygen_mean_1955_2012 Mean seafloor oxygen concentration over 1955-2012 Unit=mL/L. Reference= Guillaumot et al. (2016), derived from World Ocean Circulation Experiment 2013 sea surface oxygen concentration layers https://www.nodc.noaa.gov/OC5/woa13/woa13data.html
• roughness Rugosity index (difference between minimal and maximal depth values of the 8 neighbour pixels) Unit= meters.
Reference=Guillaumot et al. (2016), derived from bathymetric layer

References

<NAME>, <NAME>, <NAME>, <NAME>, <NAME>, <NAME>, <NAME>, <NAME>, Brandt A, <NAME> (2014) A hierarchical classification of benthic biodiversity and assessment of protected areas in the Southern Ocean. PloS one 9(7): e100551. doi: 10.1371/journal.pone.0100551.

<NAME>., Martin, A., <NAME>., <NAME>. and <NAME>. (2016) Environmental parameters (1955-2012) for echinoids distribution modelling on the Kerguelen Plateau. Australian Antarctic Data Centre - doi:10.4225/15/578ED5A08050F

McCoy FW (1991) Southern Ocean sediments: circum-Antarctic to 30S. Marine Geological and Geophysical Atlas of the circum-Antarctic to 30S. (ed. by <NAME>). Antarctic Research Series.

<NAME>, <NAME> (1997) Global seafloor topography from satellite altimetry and ship depth soundings. Science 277(5334): 1957-1962. doi: 10.1126/science.277.5334.1956.

Examples

data('predictors2005_2012')
raster::plot(predictors2005_2012)

predictors2200AIB Environmental descriptors for future A1B scenario for 2200 (Kerguelen Plateau)

Description

RasterStack of 10 environmental descriptors modelled by IPCC (scenario A1B, 4th report, 2007) for 2187 to 2196 (described as 2200), on the extent of the Kerguelen Plateau (63/81W; -46/-56S)

Usage

data('predictors2200AIB')

Format

RasterStack of 10 environmental descriptors. Grid: nrow= 100, ncol= 179, ncells= 17900 pixels. Spatial resolution: 0.1. Spatial extent: 63/81W; -46/-56S. Crs : +proj=longlat +datum=WGS84 +no_defs +ellps=WGS84 +towgs84=0,0,0. Origin=0. See Guillaumot et al. (2016) for more information

• depth Bathymetric grid around the Kerguelen Plateau Unit=meter. Reference=Guillaumot et al. (2016), derived from Smith & Sandwell (1997) https://topex.ucsd.edu/WWW_html/mar_topo.html
• seasurface_salinity_mean_2200_A1B Mean sea surface salinity over 2187 to 2196, A1B scenario Unit= PSS. Reference= BIO ORACLE (Tyberghein et al.
2012) https://www.bio-oracle.org/
• seasurface_temperature_mean_2200_A1B Mean sea surface temperature over 2187-2196, A1B scenario Unit=Celsius degrees. Reference= BIO ORACLE (Tyberghein et al. 2012) https://www.bio-oracle.org/
• seasurface_temperature_amplitude_2200_A1B Amplitude between mean summer and mean winter sea surface temperature. Absolute value interpolated over 2187-2196, scenario A1B Unit=Celsius degrees. Reference= BIO ORACLE (Tyberghein et al. 2012) https://www.bio-oracle.org/
• chlorophyla_summer_mean_2002_2009 Surface chlorophyll a concentration. Summer mean over 2002-2009 Unit=mg/m3. Reference=MODIS AQUA (NASA) 2010 https://oceandata.sci.gsfc.nasa.gov/
• geomorphology Geomorphologic features Unit= 27 categories. Reference= ATLAS ETOPO2 2014 (Douglass et al. 2014)
• sediments Sediment features Unit= 14 categories. Reference= McCoy (1991), updated by Griffiths 2014 (unpublished)
• slope Bathymetric slope Unitless. Reference= Smith & Sandwell (1997)
• seafloor_oxygen_mean_1955_2012 Mean seafloor oxygen concentration over 1955-2012 Unit=mL/L. Reference= Guillaumot et al. (2016), derived from World Ocean Circulation Experiment 2013 sea surface oxygen concentration layers https://www.nodc.noaa.gov/OC5/woa13/woa13data.html
• roughness Rugosity index (difference between minimal and maximal depth values of the 8 neighbour pixels) Unit= meters. Reference=Guillaumot et al. (2016), derived from bathymetric layer.

References

<NAME>, <NAME>, <NAME>, <NAME>, <NAME>, <NAME>, <NAME>, <NAME>, <NAME>, <NAME> (2014) A hierarchical classification of benthic biodiversity and assessment of protected areas in the Southern Ocean. PloS one 9(7): e100551. doi: 10.1371/journal.pone.0100551.

<NAME>., <NAME>., <NAME>., <NAME> <NAME>. (2016) Environmental parameters (1955-2012) for echinoids distribution modelling on the Kerguelen Plateau.
Australian Antarctic Data Centre - doi:10.4225/15/578ED5A08050F

<NAME>, <NAME>, <NAME>, <NAME>, <NAME>, <NAME> (2013) Climate change impact on seaweed meadow distribution in the North Atlantic rocky intertidal. Ecology and Evolution 3(5): 1356-1373. doi: 10.1002/ece3.541.

McCoy FW (1991) Southern Ocean sediments: circum-Antarctic to 30S. Marine Geological and Geophysical Atlas of the circum-Antarctic to 30S. (ed. by D.E. Hayes). Antarctic Research Series.

<NAME>, <NAME>, <NAME>, <NAME>, <NAME>, <NAME> (2012) Bio ORACLE: a global environmental dataset for marine species distribution modelling. Global Ecology and Biogeography 21(2): 272-28. doi: 10.1111/j.1466-8238.2011.00656.x.

<NAME>, <NAME> (1997) Global seafloor topography from satellite altimetry and ship depth soundings. Science 277(5334): 1957-1962. doi: 10.1126/science.277.5334.1956.

Examples

data('predictors2200AIB')
raster::plot(predictors2200AIB)

SDMdata.quality Evaluate dataset quality

Description

Evaluates the percentage of occurrences that fall on pixels assigned NA values in the environmental RasterStack. It may provide interesting information to interpret model robustness.
Usage

SDMdata.quality(data)

Arguments

data SDMtab object or dataframe that contains id, longitude, latitude and values of environmental descriptors at corresponding locations

Value

prop Dataframe that provides the proportion of NA values on which the presence data fall, for each environmental predictor

See Also

SDMeval

Examples

# Generate a SDMtab
data('ctenocidaris.nutrix')
occ <- ctenocidaris.nutrix
# select longitude and latitude coordinates among all the information
occ <- ctenocidaris.nutrix[,c('decimal.Longitude','decimal.Latitude')]
library(raster)
data("predictors2005_2012")
envi <- predictors2005_2012
envi
# Create the SDMtab matrix
SDMtable_ctenocidaris <- SDMPlay:::SDMtab(xydata=occ, predictors=envi, unique.data=FALSE, same=TRUE)
# Evaluate the matrix quality
SDMPlay:::SDMdata.quality(data=SDMtable_ctenocidaris)

SDMeval Evaluate species distribution models

Description

Performs model evaluation. Measure of AUC (Area Under the Curve) value, confusion matrix, maxSSS threshold (Maximum Sensitivity plus Specificity), percentage of predicted preferential area based on the MaxSSS value, and model stability (standard deviation of pixel values)

Usage

SDMeval(model)

Arguments

model Model produced with compute.maxent or compute.brt functions

Details

The Area Under the Curve is a parameter widely referred to in the literature and used to test the performance of species distribution models (Fielding & Bell, 1997). It evaluates the area under the Receiver Operating Curve (ROC), which draws the relationship between 1-specificity (False Positive Rate) and sensitivity (True Positive Rate). AUC values close to 1 indicate models with a high True Positive Rate, values around 0.5 indicate random prediction, and values close to 0 indicate models with a strong False Positive Rate.

The MaxSSS threshold value maximizes the sum of True Positive Rate and True Negative Rate. See Liu et al. (2013) for more information.
Modelling performance can also be evaluated with the omission rate, the proportion of occurrences that fall outside the area predicted as preferential by the MaxSSS threshold (False Negative Rate).

Model stability is evaluated with the mean standard deviation value of the pixel values of the grid predicted by the model.

Value

Dataframe with the following information

• AUC.value Returns the AUC (Area Under the Curve) value of the model
• maxSSS Maximum Sensitivity plus Specificity threshold of the model
• preferential.area Pixel proportion for which the predicted value is greater than the MaxSSS threshold
• omission.rate Proportion of data that fall outside the area predicted as preferential
• nb.omission Corresponding number of data that fall outside the predicted preferential area
• SD.value Mean standard deviation of the predicted grid

References

<NAME>, & <NAME> (1997) A review of methods for the assessment of prediction errors in conservation presence-absence models. Environmental Conservation, 24(1): 38-49.

<NAME>, <NAME> & <NAME> (2013) Selecting thresholds for the prediction of species occurrence with presence-only data. Journal of Biogeography, 40(4): 778-789.
Examples

# Generate a SDMtab and launch a model
data('ctenocidaris.nutrix')
occ <- ctenocidaris.nutrix
occ <- ctenocidaris.nutrix[,c('decimal.Longitude','decimal.Latitude')]
data(predictors2005_2012)
envi <- predictors2005_2012
envi
SDMtable_ctenocidaris <- SDMPlay:::SDMtab(xydata=occ, predictors=envi, unique.data=FALSE, same=TRUE)
model <- SDMPlay:::compute.brt(x=SDMtable_ctenocidaris, proj.predictors=envi, lr=0.005)
# Evaluate modelling performance
SDMPlay:::SDMeval(model)

SDMtab Compile species distribution dataset for modelling

Description

Create a dataframe that contains the required information to implement species distribution models

Usage

SDMtab(xydata, predictors, unique.data = TRUE, same = TRUE, background.nb=NULL, KDE_layer=NULL)

Arguments

xydata Dataframe with longitude (column 1) and latitude (column 2) of the presence-only data. Decimal longitude and latitude are required.
predictors RasterStack of environmental descriptors. Used to extract values of the presence location
unique.data If TRUE (by default), duplicate presence points, that fall in the same grid cell, will be removed
same If TRUE (by default), the number of background data sampled in the area equals the number of presence data
background.nb Number of background data to sample. Set as NULL if same= TRUE.
KDE_layer Rasterlayer that describes the frequency of visits in the area (i.e. the spatial bias that could be present in the occurrence dataset)

Details

Background data are sampled randomly (without replacement) among the entire area, on pixels that are not assigned NA. Background data provide a summary of the environmental descriptors available in the area, which improves modelling performance. See Barbet Massin et al. (2012) for further information about background selection.

Value

A dataframe that contains the id (1 for presence, 0 for background data) of the data, their longitude, latitude and extracted values of environmental descriptors at the corresponding locations. xydata for which coordinates fall out of the RasterStack extent are removed from the analysis.
References

<NAME>, <NAME>, <NAME> & <NAME> (2012) Selecting pseudo-absences for species distribution models: how, where and how many? Methods in Ecology and Evolution, 3(2): 327-338.

See Also

delim.area to refine the environmental RasterStack before using this function

Examples

# Open occurrence data
data('ctenocidaris.nutrix')
occ <- ctenocidaris.nutrix
# Open environmental descriptors RasterStack
data(predictors2005_2012)
envi <- predictors2005_2012
envi
# create the dataframe for modelling
z <- SDMPlay:::SDMtab(xydata=occ[,c('decimal.Longitude','decimal.Latitude')], predictors=envi)
head(z)

seafloor_temp_2005_2012_mean_SO Environmental descriptor example (seafloor temperatures, Southern Ocean)

Description

Average seafloor temperature layer at the scale of the Southern Ocean at 0.1° resolution

Usage

data("seafloor_temp_2005_2012_mean_SO")

Format

Three RasterLayers. Grid: nrow= 350, ncol= 3600, ncells= 1260000 pixels. Spatial resolution: 0.1. Spatial extent: -180, 180, -80, -45 (longmin, longmax, latmin, latmax); Crs : +proj=longlat +datum=WGS84 +no_defs +ellps=WGS84 +towgs84=0,0,0. Origin=0

Source

[ADD website](https://data.aad.gov.au/metadata/records/fulldisplay/environmental_layers)

Examples

library(raster)
data("depth_SO")
data("ice_cover_mean_SO")
data("seafloor_temp_2005_2012_mean_SO")
predictors_stack_SO <- raster::stack(depth_SO, ice_cover_mean_SO, seafloor_temp_2005_2012_mean_SO)
names(predictors_stack_SO) <- c("depth", "ice_cover_mean", "seafloor_temp_mean")
predictors_stack_SO

worldmap Worldmap

Description

CSV file used to draw the world coastline on maps

Usage

data("worldmap")

Format

csv file

Source

[ADD website](https://data.aad.gov.au/metadata/records/fulldisplay/environmental_layers)

Examples

data("worldmap")
Crate atomic_fn
===

A small, no_std crate that adds atomic function pointers. See `AtomicFnPtr` for examples.

Structs
---

* AtomicFnPtr — A function pointer type which can be safely shared between threads.

Traits
---

* FnPtr

Struct atomic_fn::AtomicFnPtr
===

```
#[repr(C, align(8))]
pub struct AtomicFnPtr<T: FnPtr> { /* private fields */ }
```

A function pointer type which can be safely shared between threads.

This type has the same in-memory representation as a `fn()`.

**Note**: This type is only available on platforms that support atomic loads and stores of u8, u16, u32, u64, usize, or pointers. Its size depends on the target's function pointer size.

Implementations
---

### impl<T: FnPtr> AtomicFnPtr<T>

#### pub fn new(fn_ptr: T) -> AtomicFnPtr<T>

Creates a new `AtomicFnPtr`.

#### pub fn into_inner(self) -> T

Consumes the atomic and returns the contained value. This is safe because passing `self` by value guarantees that no other threads are concurrently accessing the atomic data.

#### pub fn get_mut(&mut self) -> &mut T

Returns a mutable reference to the underlying pointer. This is safe because the mutable reference guarantees that no other threads are concurrently accessing the atomic data.

### impl<T: FnPtr> AtomicFnPtr<T>

#### pub fn load(&self, order: Ordering) -> T

Loads a value from the pointer.

`load` takes an `Ordering` argument which describes the memory ordering of this operation. Possible values are `Ordering::SeqCst`, `Ordering::Acquire` and `Ordering::Relaxed`.

##### Panics

Panics if `order` is `Ordering::Release` or `Ordering::AcqRel`.

#### pub fn store(&self, fn_ptr: T, order: Ordering)

Stores a value into the pointer.

`store` takes an `Ordering` argument which describes the memory ordering of this operation. Possible values are `Ordering::SeqCst`, `Ordering::Release` and `Ordering::Relaxed`.

##### Panics

Panics if `order` is `Ordering::Acquire` or `Ordering::AcqRel`.
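The load/store API above mirrors the standard library's atomic integer types. As a rough, hypothetical illustration of the underlying idea (not the crate's actual implementation): on platforms where a function pointer and a `usize` have the same size, the pointer's bits can be kept in an `AtomicUsize`. `NaiveAtomicFn`, `one`, and `two` below are made-up names for this sketch.

```rust
use std::mem;
use std::sync::atomic::{AtomicUsize, Ordering};

// Hypothetical sketch: store a `fn() -> i32`'s bits in an `AtomicUsize`.
// Assumes size_of::<fn() -> i32>() == size_of::<usize>(), which holds on
// common platforms (this is what the platform note above is about).
struct NaiveAtomicFn(AtomicUsize);

impl NaiveAtomicFn {
    fn new(f: fn() -> i32) -> Self {
        NaiveAtomicFn(AtomicUsize::new(f as usize))
    }

    fn load(&self, order: Ordering) -> fn() -> i32 {
        // SAFETY: the stored bits always originate from a valid `fn() -> i32`.
        unsafe { mem::transmute(self.0.load(order)) }
    }

    fn store(&self, f: fn() -> i32, order: Ordering) {
        self.0.store(f as usize, order);
    }
}

fn one() -> i32 { 1 }
fn two() -> i32 { 2 }

fn main() {
    let a = NaiveAtomicFn::new(one);
    assert_eq!(a.load(Ordering::SeqCst)(), 1);
    a.store(two, Ordering::SeqCst);
    assert_eq!(a.load(Ordering::SeqCst)(), 2);
}
```

In practice, `AtomicFnPtr` should be preferred over a hand-rolled transmute like this, since the crate encodes the size and platform-availability requirements for you.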
#### pub fn swap(&self, fn_ptr: T, order: Ordering) -> T

Stores a value into the pointer, returning the previous value.

`swap` takes an `Ordering` argument which describes the memory ordering of this operation. All ordering modes are possible. Note that using `Ordering::Acquire` makes the store part of this operation `Ordering::Relaxed`, and using `Ordering::Release` makes the load part `Ordering::Relaxed`.

**Note:** This method is only available on platforms that support atomic operations on pointers.

#### pub fn compare_and_swap(&self, current: T, new: T, order: Ordering) -> T

👎 Deprecated since 0.1.0: Use `compare_exchange` or `compare_exchange_weak` instead. Only exists for compatibility with applications that use `compare_and_swap` on the `core` atomic types.

Stores a value into the pointer if the current value is the same as the `current` value.

The return value is always the previous value. If it is equal to `current`, then the value was updated.

`compare_and_swap` also takes an `Ordering` argument which describes the memory ordering of this operation. Notice that even when using `Ordering::AcqRel`, the operation might fail and hence just perform an `Ordering::Acquire` load, but not have `Ordering::Release` semantics. Using `Ordering::Acquire` makes the store part of this operation `Ordering::Relaxed` if it happens, and using `Ordering::Release` makes the load part `Ordering::Relaxed`.

**Note:** This method is only available on platforms that support atomic operations on pointers.
##### Migrating to `compare_exchange` and `compare_exchange_weak`

`compare_and_swap` is equivalent to `compare_exchange` with the following mapping for memory orderings:

| Original | Success | Failure |
| --- | --- | --- |
| `Relaxed` | `Relaxed` | `Relaxed` |
| `Acquire` | `Acquire` | `Acquire` |
| `Release` | `Release` | `Relaxed` |
| `AcqRel` | `AcqRel` | `Acquire` |
| `SeqCst` | `SeqCst` | `SeqCst` |

`compare_exchange_weak` is allowed to fail spuriously even when the comparison succeeds, which allows the compiler to generate better assembly code when the compare and swap is used in a loop.

##### Examples

```
use atomic_fn::AtomicFnPtr;
use std::sync::atomic::Ordering;

fn a_fn() { println!("Called `a_fn`") }
fn another_fn() { println!("Called `another_fn`") }

let ptr = a_fn;
let some_ptr = AtomicFnPtr::new(ptr);
let other_ptr = another_fn;

(some_ptr.load(Ordering::SeqCst))();
let value = some_ptr.compare_and_swap(ptr, other_ptr, Ordering::Relaxed);
(some_ptr.load(Ordering::SeqCst))();
```

#### pub fn compare_exchange(&self, current: T, new: T, success: Ordering, failure: Ordering) -> Result<T, T>

Stores a value into the pointer if the current value is the same as the `current` value.

The return value is a result indicating whether the new value was written and containing the previous value. On success this value is guaranteed to be equal to `current`.

`compare_exchange` takes two `Ordering` arguments to describe the memory ordering of this operation. `success` describes the required ordering for the read-modify-write operation that takes place if the comparison with `current` succeeds. `failure` describes the required ordering for the load operation that takes place when the comparison fails. Using `Ordering::Acquire` as success ordering makes the store part of this operation `Ordering::Relaxed`, and using `Ordering::Release` makes the successful load `Ordering::Relaxed`.
The failure ordering can only be `Ordering::SeqCst`, `Ordering::Acquire` or `Ordering::Relaxed` and must be equivalent to or weaker than the success ordering.

**Note:** This method is only available on platforms that support atomic operations on pointers.

##### Examples

```
use std::sync::atomic::Ordering;
use atomic_fn::AtomicFnPtr;

fn a_fn() { println!("Called `a_fn`") }
fn another_fn() { println!("Called `another_fn`") }

let ptr = a_fn;
let some_ptr = AtomicFnPtr::new(ptr);
let other_ptr = another_fn;

(some_ptr.load(Ordering::SeqCst))();
let value = some_ptr.compare_exchange(
    ptr,
    other_ptr,
    Ordering::SeqCst,
    Ordering::Relaxed
);
(some_ptr.load(Ordering::SeqCst))();
```

#### pub fn compare_exchange_weak(&self, current: T, new: T, success: Ordering, failure: Ordering) -> Result<T, T>

Stores a value into the pointer if the current value is the same as the `current` value.

Unlike `AtomicFnPtr::compare_exchange`, this function is allowed to spuriously fail even when the comparison succeeds, which can result in more efficient code on some platforms. The return value is a result indicating whether the new value was written and containing the previous value.

`compare_exchange_weak` takes two `Ordering` arguments to describe the memory ordering of this operation. `success` describes the required ordering for the read-modify-write operation that takes place if the comparison with `current` succeeds. `failure` describes the required ordering for the load operation that takes place when the comparison fails. Using `Ordering::Acquire` as success ordering makes the store part of this operation `Ordering::Relaxed`, and using `Ordering::Release` makes the successful load `Ordering::Relaxed`.

The failure ordering can only be `Ordering::SeqCst`, `Ordering::Acquire` or `Ordering::Relaxed` and must be equivalent to or weaker than the success ordering.

**Note:** This method is only available on platforms that support atomic operations on pointers.
##### Examples

```
use atomic_fn::AtomicFnPtr;
use std::sync::atomic::Ordering;

fn a_fn() { println!("Called `a_fn`") }
fn another_fn() { println!("Called `another_fn`") }

let some_ptr = AtomicFnPtr::new(a_fn);
let new = another_fn;

let mut old = some_ptr.load(Ordering::Relaxed);
old();
loop {
    match some_ptr.compare_exchange_weak(old, new, Ordering::SeqCst, Ordering::Relaxed) {
        Ok(x) => {
            x();
            break;
        }
        Err(x) => {
            x();
            old = x
        }
    }
}
```

#### pub fn fetch_update<F>(&self, set_order: Ordering, fetch_order: Ordering, func: F) -> Result<T, T> where F: FnMut(T) -> Option<T>

Fetches the value, and applies a function to it that returns an optional new value. Returns a `Result` of `Ok(previous_value)` if the function returned `Some(_)`, else `Err(previous_value)`.

Note: This may call the function multiple times if the value has been changed from other threads in the meantime, as long as the function returns `Some(_)`, but the function will have been applied only once to the stored value.

`fetch_update` takes two `Ordering` arguments to describe the memory ordering of this operation. The first describes the required ordering for when the operation finally succeeds while the second describes the required ordering for loads. These correspond to the success and failure orderings of `AtomicFnPtr::compare_exchange` respectively.

Using `Ordering::Acquire` as success ordering makes the store part of this operation `Ordering::Relaxed`, and using `Ordering::Release` makes the final successful load `Ordering::Relaxed`. The (failed) load ordering can only be `Ordering::SeqCst`, `Ordering::Acquire` or `Ordering::Relaxed` and must be equivalent to or weaker than the success ordering.

**Note:** This method is only available on platforms that support atomic operations on pointers.
##### Examples

```
use atomic_fn::AtomicFnPtr;
use std::sync::atomic::Ordering;

fn a_fn() { println!("Called `a_fn`") }
fn another_fn() { println!("Called `another_fn`") }

let ptr: fn() = a_fn;
let some_ptr = AtomicFnPtr::new(ptr);
let new: fn() = another_fn;

assert_eq!(some_ptr.fetch_update(Ordering::SeqCst, Ordering::SeqCst, |_| None), Err(ptr));
(some_ptr.load(Ordering::SeqCst))();

let result = some_ptr.fetch_update(Ordering::SeqCst, Ordering::SeqCst, |x| {
    if x == ptr {
        Some(new)
    } else {
        None
    }
});
assert_eq!(result, Ok(ptr));
(some_ptr.load(Ordering::SeqCst))();

assert_eq!(some_ptr.load(Ordering::SeqCst), new);
(some_ptr.load(Ordering::SeqCst))();
```

Trait Implementations
---

### impl<T: FnPtr + Debug> Debug for AtomicFnPtr<T>

#### fn fmt(&self, f: &mut Formatter<'_>) -> Result

Formats the value using the given formatter. Read more

### impl<T: FnPtr> From<T> for AtomicFnPtr<T>

#### fn from(fn_ptr: T) -> AtomicFnPtr<T>

Converts to this type from the input type.

### impl<T: FnPtr + Pointer> Pointer for AtomicFnPtr<T>

#### fn fmt(&self, f: &mut Formatter<'_>) -> Result

Formats the value using the given formatter.

### impl<T: FnPtr + RefUnwindSafe> RefUnwindSafe for AtomicFnPtr<T>

### impl<T: FnPtr + Sync> Sync for AtomicFnPtr<T>

Auto Trait Implementations
---

### impl<T> Send for AtomicFnPtr<T> where T: Send

### impl<T> Unpin for AtomicFnPtr<T> where T: Unpin

### impl<T> UnwindSafe for AtomicFnPtr<T> where T: UnwindSafe

Blanket Implementations
---

### impl<T> Any for T where T: 'static + ?Sized

#### fn type_id(&self) -> TypeId

Gets the `TypeId` of `self`. Read more

### impl<T> Borrow<T> for T where T: ?Sized

#### fn borrow(&self) -> &T

Immutably borrows from an owned value. Read more

### impl<T> BorrowMut<T> for T where T: ?Sized

#### fn borrow_mut(&mut self) -> &mut T

Mutably borrows from an owned value.
Read more source### impl<T> From<!> for T const: unstable · source#### fn from(t: !) -> T Converts to this type from the input type. source### impl<T> From<T> for T const: unstable · source#### fn from(t: T) -> T Returns the argument unchanged. source### impl<T, U> Into<U> for T where    U: From<T>, const: unstable · source#### fn into(self) -> U Calls `U::from(self)`. That is, this conversion is whatever the implementation of `From<T> for U` chooses to do. source### impl<T, U> TryFrom<U> for T where    U: Into<T>, #### type Error = Infallible The type returned in the event of a conversion error. const: unstable · source#### fn try_from(value: U) -> Result<T, <T as TryFrom<U>>::Error> Performs the conversion. source### impl<T, U> TryInto<U> for T where    U: TryFrom<T>, #### type Error = <U as TryFrom<T>>::Error The type returned in the event of a conversion error. const: unstable · source#### fn try_into(self) -> Result<U, <U as TryFrom<T>>::Error> Performs the conversion. Trait atomic_fn::FnPtr === ``` pub trait FnPtr: Copy + FnPtrSealed { } ``` Implementations on Foreign Types --- source### impl<Ret> FnPtr for fn() -> Ret source### impl<Ret> FnPtr for unsafe fn() -> Ret source### impl<Ret> FnPtr for extern "C" fn() -> Ret source### impl<Ret> FnPtr for unsafe extern "C" fn() -> Ret source### impl<Ret, A, B> FnPtr for fn(_: A, _: B) -> Ret source### impl<Ret, A, B> FnPtr for unsafe fn(_: A, _: B) -> Ret source### impl<Ret, A, B> FnPtr for extern "C" fn(_: A, _: B) -> Ret source### impl<Ret, A, B> FnPtr for unsafe extern "C" fn(_: A, _: B) -> Ret source### impl<Ret, A, B> FnPtr for extern "C" fn(_: A, _: B, ...) -> Ret source### impl<Ret, A, B> FnPtr for unsafe extern "C" fn(_: A, _: B, ...)
-> Ret source### impl<Ret, A, B, C> FnPtr for fn(_: A, _: B, _: C) -> Ret source### impl<Ret, A, B, C> FnPtr for unsafe fn(_: A, _: B, _: C) -> Ret source### impl<Ret, A, B, C> FnPtr for extern "C" fn(_: A, _: B, _: C) -> Ret source### impl<Ret, A, B, C> FnPtr for unsafe extern "C" fn(_: A, _: B, _: C) -> Ret source### impl<Ret, A, B, C> FnPtr for extern "C" fn(_: A, _: B, _: C, ...) -> Ret source### impl<Ret, A, B, C> FnPtr for unsafe extern "C" fn(_: A, _: B, _: C, ...) -> Ret source### impl<Ret, A, B, C, D> FnPtr for fn(_: A, _: B, _: C, _: D) -> Ret source### impl<Ret, A, B, C, D> FnPtr for unsafe fn(_: A, _: B, _: C, _: D) -> Ret source### impl<Ret, A, B, C, D> FnPtr for extern "C" fn(_: A, _: B, _: C, _: D) -> Ret source### impl<Ret, A, B, C, D> FnPtr for unsafe extern "C" fn(_: A, _: B, _: C, _: D) -> Ret source### impl<Ret, A, B, C, D> FnPtr for extern "C" fn(_: A, _: B, _: C, _: D, ...) -> Ret source### impl<Ret, A, B, C, D> FnPtr for unsafe extern "C" fn(_: A, _: B, _: C, _: D, ...) -> Ret source### impl<Ret, A, B, C, D, E> FnPtr for fn(_: A, _: B, _: C, _: D, _: E) -> Ret source### impl<Ret, A, B, C, D, E> FnPtr for unsafe fn(_: A, _: B, _: C, _: D, _: E) -> Ret source### impl<Ret, A, B, C, D, E> FnPtr for extern "C" fn(_: A, _: B, _: C, _: D, _: E) -> Ret source### impl<Ret, A, B, C, D, E> FnPtr for unsafe extern "C" fn(_: A, _: B, _: C, _: D, _: E) -> Ret source### impl<Ret, A, B, C, D, E> FnPtr for extern "C" fn(_: A, _: B, _: C, _: D, _: E, ...) -> Ret source### impl<Ret, A, B, C, D, E> FnPtr for unsafe extern "C" fn(_: A, _: B, _: C, _: D, _: E, ...) 
-> Ret source### impl<Ret, A, B, C, D, E, F> FnPtr for fn(_: A, _: B, _: C, _: D, _: E, _: F) -> Ret source### impl<Ret, A, B, C, D, E, F> FnPtr for unsafe fn(_: A, _: B, _: C, _: D, _: E, _: F) -> Ret source### impl<Ret, A, B, C, D, E, F> FnPtr for extern "C" fn(_: A, _: B, _: C, _: D, _: E, _: F) -> Ret source### impl<Ret, A, B, C, D, E, F> FnPtr for unsafe extern "C" fn(_: A, _: B, _: C, _: D, _: E, _: F) -> Ret source### impl<Ret, A, B, C, D, E, F> FnPtr for extern "C" fn(_: A, _: B, _: C, _: D, _: E, _: F, ...) -> Ret source### impl<Ret, A, B, C, D, E, F> FnPtr for unsafe extern "C" fn(_: A, _: B, _: C, _: D, _: E, _: F, ...) -> Ret source### impl<Ret, A, B, C, D, E, F, G> FnPtr for fn(_: A, _: B, _: C, _: D, _: E, _: F, _: G) -> Ret source### impl<Ret, A, B, C, D, E, F, G> FnPtr for unsafe fn(_: A, _: B, _: C, _: D, _: E, _: F, _: G) -> Ret source### impl<Ret, A, B, C, D, E, F, G> FnPtr for extern "C" fn(_: A, _: B, _: C, _: D, _: E, _: F, _: G) -> Ret source### impl<Ret, A, B, C, D, E, F, G> FnPtr for unsafe extern "C" fn(_: A, _: B, _: C, _: D, _: E, _: F, _: G) -> Ret source### impl<Ret, A, B, C, D, E, F, G> FnPtr for extern "C" fn(_: A, _: B, _: C, _: D, _: E, _: F, _: G, ...) -> Ret source### impl<Ret, A, B, C, D, E, F, G> FnPtr for unsafe extern "C" fn(_: A, _: B, _: C, _: D, _: E, _: F, _: G, ...) -> Ret source### impl<Ret, A, B, C, D, E, F, G, H> FnPtr for fn(_: A, _: B, _: C, _: D, _: E, _: F, _: G, _: H) -> Ret source### impl<Ret, A, B, C, D, E, F, G, H> FnPtr for unsafe fn(_: A, _: B, _: C, _: D, _: E, _: F, _: G, _: H) -> Ret source### impl<Ret, A, B, C, D, E, F, G, H> FnPtr for extern "C" fn(_: A, _: B, _: C, _: D, _: E, _: F, _: G, _: H) -> Ret source### impl<Ret, A, B, C, D, E, F, G, H> FnPtr for unsafe extern "C" fn(_: A, _: B, _: C, _: D, _: E, _: F, _: G, _: H) -> Ret source### impl<Ret, A, B, C, D, E, F, G, H> FnPtr for extern "C" fn(_: A, _: B, _: C, _: D, _: E, _: F, _: G, _: H, ...) 
-> Ret source### impl<Ret, A, B, C, D, E, F, G, H> FnPtr for unsafe extern "C" fn(_: A, _: B, _: C, _: D, _: E, _: F, _: G, _: H, ...) -> Ret source### impl<Ret, A, B, C, D, E, F, G, H, I> FnPtr for fn(_: A, _: B, _: C, _: D, _: E, _: F, _: G, _: H, _: I) -> Ret source### impl<Ret, A, B, C, D, E, F, G, H, I> FnPtr for unsafe fn(_: A, _: B, _: C, _: D, _: E, _: F, _: G, _: H, _: I) -> Ret source### impl<Ret, A, B, C, D, E, F, G, H, I> FnPtr for extern "C" fn(_: A, _: B, _: C, _: D, _: E, _: F, _: G, _: H, _: I) -> Ret source### impl<Ret, A, B, C, D, E, F, G, H, I> FnPtr for unsafe extern "C" fn(_: A, _: B, _: C, _: D, _: E, _: F, _: G, _: H, _: I) -> Ret source### impl<Ret, A, B, C, D, E, F, G, H, I> FnPtr for extern "C" fn(_: A, _: B, _: C, _: D, _: E, _: F, _: G, _: H, _: I, ...) -> Ret source### impl<Ret, A, B, C, D, E, F, G, H, I> FnPtr for unsafe extern "C" fn(_: A, _: B, _: C, _: D, _: E, _: F, _: G, _: H, _: I, ...) -> Ret source### impl<Ret, A, B, C, D, E, F, G, H, I, J> FnPtr for fn(_: A, _: B, _: C, _: D, _: E, _: F, _: G, _: H, _: I, _: J) -> Ret source### impl<Ret, A, B, C, D, E, F, G, H, I, J> FnPtr for unsafe fn(_: A, _: B, _: C, _: D, _: E, _: F, _: G, _: H, _: I, _: J) -> Ret source### impl<Ret, A, B, C, D, E, F, G, H, I, J> FnPtr for extern "C" fn(_: A, _: B, _: C, _: D, _: E, _: F, _: G, _: H, _: I, _: J) -> Ret source### impl<Ret, A, B, C, D, E, F, G, H, I, J> FnPtr for unsafe extern "C" fn(_: A, _: B, _: C, _: D, _: E, _: F, _: G, _: H, _: I, _: J) -> Ret source### impl<Ret, A, B, C, D, E, F, G, H, I, J> FnPtr for extern "C" fn(_: A, _: B, _: C, _: D, _: E, _: F, _: G, _: H, _: I, _: J, ...) -> Ret source### impl<Ret, A, B, C, D, E, F, G, H, I, J> FnPtr for unsafe extern "C" fn(_: A, _: B, _: C, _: D, _: E, _: F, _: G, _: H, _: I, _: J, ...) 
-> Ret source### impl<Ret, A, B, C, D, E, F, G, H, I, J, K> FnPtr for fn(_: A, _: B, _: C, _: D, _: E, _: F, _: G, _: H, _: I, _: J, _: K) -> Ret source### impl<Ret, A, B, C, D, E, F, G, H, I, J, K> FnPtr for unsafe fn(_: A, _: B, _: C, _: D, _: E, _: F, _: G, _: H, _: I, _: J, _: K) -> Ret source### impl<Ret, A, B, C, D, E, F, G, H, I, J, K> FnPtr for extern "C" fn(_: A, _: B, _: C, _: D, _: E, _: F, _: G, _: H, _: I, _: J, _: K) -> Ret source### impl<Ret, A, B, C, D, E, F, G, H, I, J, K> FnPtr for unsafe extern "C" fn(_: A, _: B, _: C, _: D, _: E, _: F, _: G, _: H, _: I, _: J, _: K) -> Ret source### impl<Ret, A, B, C, D, E, F, G, H, I, J, K> FnPtr for extern "C" fn(_: A, _: B, _: C, _: D, _: E, _: F, _: G, _: H, _: I, _: J, _: K, ...) -> Ret source### impl<Ret, A, B, C, D, E, F, G, H, I, J, K> FnPtr for unsafe extern "C" fn(_: A, _: B, _: C, _: D, _: E, _: F, _: G, _: H, _: I, _: J, _: K, ...) -> Ret source### impl<Ret, A, B, C, D, E, F, G, H, I, J, K, L> FnPtr for fn(_: A, _: B, _: C, _: D, _: E, _: F, _: G, _: H, _: I, _: J, _: K, _: L) -> Ret source### impl<Ret, A, B, C, D, E, F, G, H, I, J, K, L> FnPtr for unsafe fn(_: A, _: B, _: C, _: D, _: E, _: F, _: G, _: H, _: I, _: J, _: K, _: L) -> Ret source### impl<Ret, A, B, C, D, E, F, G, H, I, J, K, L> FnPtr for extern "C" fn(_: A, _: B, _: C, _: D, _: E, _: F, _: G, _: H, _: I, _: J, _: K, _: L) -> Ret source### impl<Ret, A, B, C, D, E, F, G, H, I, J, K, L> FnPtr for unsafe extern "C" fn(_: A, _: B, _: C, _: D, _: E, _: F, _: G, _: H, _: I, _: J, _: K, _: L) -> Ret source### impl<Ret, A, B, C, D, E, F, G, H, I, J, K, L> FnPtr for extern "C" fn(_: A, _: B, _: C, _: D, _: E, _: F, _: G, _: H, _: I, _: J, _: K, _: L, ...) -> Ret source### impl<Ret, A, B, C, D, E, F, G, H, I, J, K, L> FnPtr for unsafe extern "C" fn(_: A, _: B, _: C, _: D, _: E, _: F, _: G, _: H, _: I, _: J, _: K, _: L, ...) 
-> Ret source### impl<Ret, A, B, C, D, E, F, G, H, I, J, K, L, M> FnPtr for fn(_: A, _: B, _: C, _: D, _: E, _: F, _: G, _: H, _: I, _: J, _: K, _: L, _: M) -> Ret source### impl<Ret, A, B, C, D, E, F, G, H, I, J, K, L, M> FnPtr for unsafe fn(_: A, _: B, _: C, _: D, _: E, _: F, _: G, _: H, _: I, _: J, _: K, _: L, _: M) -> Ret source### impl<Ret, A, B, C, D, E, F, G, H, I, J, K, L, M> FnPtr for extern "C" fn(_: A, _: B, _: C, _: D, _: E, _: F, _: G, _: H, _: I, _: J, _: K, _: L, _: M) -> Ret source### impl<Ret, A, B, C, D, E, F, G, H, I, J, K, L, M> FnPtr for unsafe extern "C" fn(_: A, _: B, _: C, _: D, _: E, _: F, _: G, _: H, _: I, _: J, _: K, _: L, _: M) -> Ret source### impl<Ret, A, B, C, D, E, F, G, H, I, J, K, L, M> FnPtr for extern "C" fn(_: A, _: B, _: C, _: D, _: E, _: F, _: G, _: H, _: I, _: J, _: K, _: L, _: M, ...) -> Ret source### impl<Ret, A, B, C, D, E, F, G, H, I, J, K, L, M> FnPtr for unsafe extern "C" fn(_: A, _: B, _: C, _: D, _: E, _: F, _: G, _: H, _: I, _: J, _: K, _: L, _: M, ...) -> Ret source### impl<Ret, A, B, C, D, E, F, G, H, I, J, K, L, M, N> FnPtr for fn(_: A, _: B, _: C, _: D, _: E, _: F, _: G, _: H, _: I, _: J, _: K, _: L, _: M, _: N) -> Ret source### impl<Ret, A, B, C, D, E, F, G, H, I, J, K, L, M, N> FnPtr for unsafe fn(_: A, _: B, _: C, _: D, _: E, _: F, _: G, _: H, _: I, _: J, _: K, _: L, _: M, _: N) -> Ret source### impl<Ret, A, B, C, D, E, F, G, H, I, J, K, L, M, N> FnPtr for extern "C" fn(_: A, _: B, _: C, _: D, _: E, _: F, _: G, _: H, _: I, _: J, _: K, _: L, _: M, _: N) -> Ret source### impl<Ret, A, B, C, D, E, F, G, H, I, J, K, L, M, N> FnPtr for unsafe extern "C" fn(_: A, _: B, _: C, _: D, _: E, _: F, _: G, _: H, _: I, _: J, _: K, _: L, _: M, _: N) -> Ret source### impl<Ret, A, B, C, D, E, F, G, H, I, J, K, L, M, N> FnPtr for extern "C" fn(_: A, _: B, _: C, _: D, _: E, _: F, _: G, _: H, _: I, _: J, _: K, _: L, _: M, _: N, ...) 
-> Ret source### impl<Ret, A, B, C, D, E, F, G, H, I, J, K, L, M, N> FnPtr for unsafe extern "C" fn(_: A, _: B, _: C, _: D, _: E, _: F, _: G, _: H, _: I, _: J, _: K, _: L, _: M, _: N, ...) -> Ret source### impl<Ret, A, B, C, D, E, F, G, H, I, J, K, L, M, N, O> FnPtr for fn(_: A, _: B, _: C, _: D, _: E, _: F, _: G, _: H, _: I, _: J, _: K, _: L, _: M, _: N, _: O) -> Ret source### impl<Ret, A, B, C, D, E, F, G, H, I, J, K, L, M, N, O> FnPtr for unsafe fn(_: A, _: B, _: C, _: D, _: E, _: F, _: G, _: H, _: I, _: J, _: K, _: L, _: M, _: N, _: O) -> Ret source### impl<Ret, A, B, C, D, E, F, G, H, I, J, K, L, M, N, O> FnPtr for extern "C" fn(_: A, _: B, _: C, _: D, _: E, _: F, _: G, _: H, _: I, _: J, _: K, _: L, _: M, _: N, _: O) -> Ret source### impl<Ret, A, B, C, D, E, F, G, H, I, J, K, L, M, N, O> FnPtr for unsafe extern "C" fn(_: A, _: B, _: C, _: D, _: E, _: F, _: G, _: H, _: I, _: J, _: K, _: L, _: M, _: N, _: O) -> Ret source### impl<Ret, A, B, C, D, E, F, G, H, I, J, K, L, M, N, O> FnPtr for extern "C" fn(_: A, _: B, _: C, _: D, _: E, _: F, _: G, _: H, _: I, _: J, _: K, _: L, _: M, _: N, _: O, ...) -> Ret source### impl<Ret, A, B, C, D, E, F, G, H, I, J, K, L, M, N, O> FnPtr for unsafe extern "C" fn(_: A, _: B, _: C, _: D, _: E, _: F, _: G, _: H, _: I, _: J, _: K, _: L, _: M, _: N, _: O, ...) 
-> Ret source### impl<Ret, A, B, C, D, E, F, G, H, I, J, K, L, M, N, O, P> FnPtr for fn(_: A, _: B, _: C, _: D, _: E, _: F, _: G, _: H, _: I, _: J, _: K, _: L, _: M, _: N, _: O, _: P) -> Ret source### impl<Ret, A, B, C, D, E, F, G, H, I, J, K, L, M, N, O, P> FnPtr for unsafe fn(_: A, _: B, _: C, _: D, _: E, _: F, _: G, _: H, _: I, _: J, _: K, _: L, _: M, _: N, _: O, _: P) -> Ret source### impl<Ret, A, B, C, D, E, F, G, H, I, J, K, L, M, N, O, P> FnPtr for extern "C" fn(_: A, _: B, _: C, _: D, _: E, _: F, _: G, _: H, _: I, _: J, _: K, _: L, _: M, _: N, _: O, _: P) -> Ret source### impl<Ret, A, B, C, D, E, F, G, H, I, J, K, L, M, N, O, P> FnPtr for unsafe extern "C" fn(_: A, _: B, _: C, _: D, _: E, _: F, _: G, _: H, _: I, _: J, _: K, _: L, _: M, _: N, _: O, _: P) -> Ret source### impl<Ret, A, B, C, D, E, F, G, H, I, J, K, L, M, N, O, P> FnPtr for extern "C" fn(_: A, _: B, _: C, _: D, _: E, _: F, _: G, _: H, _: I, _: J, _: K, _: L, _: M, _: N, _: O, _: P, ...) -> Ret source### impl<Ret, A, B, C, D, E, F, G, H, I, J, K, L, M, N, O, P> FnPtr for unsafe extern "C" fn(_: A, _: B, _: C, _: D, _: E, _: F, _: G, _: H, _: I, _: J, _: K, _: L, _: M, _: N, _: O, _: P, ...) -> Ret Implementors ---
Crate nintendo64_pac === Nintendo 64 PAC --- Provides access to low-level Nintendo 64 memory in a type- and memory-safe way. Modules --- * aiAudio Interface Wrapper * dpcDP Command Wrapper * dpsDP Span Wrapper * mi * pc * pi * rdram * ri * si * sp * vi Structs --- * HardwareContains references to each hardware peripheral on the system. * PeripheralAn ownership wrapper for peripherals. Enums --- * HardwareErrorErrors related to the `Hardware` mechanism. Statics --- * HARDWAREA global, static reference to the hardware peripherals of the Nintendo 64. Module nintendo64_pac::ai === Audio Interface Wrapper --- This module wraps the Nintendo 64’s AI registers and provides type- and memory-safe ways of interacting with them. Structs --- * AudioInterfaceA zero-size wrapper around the Nintendo 64’s audio interface registers. Module nintendo64_pac::dpc === DP Command Wrapper --- This module wraps the Nintendo 64’s DPC registers and provides type- and memory-safe ways of interacting with them. Structs --- * DpcA zero-size wrapper around the Nintendo 64’s DPC registers. Module nintendo64_pac::dps === DP Span Wrapper --- This module wraps the Nintendo 64’s DPS registers and provides type- and memory-safe ways of interacting with them. Structs --- * DpsA zero-size wrapper around the Nintendo 64’s DPS registers.
Struct nintendo64_pac::Hardware === ``` pub struct Hardware { pub peripheral_interface: Peripheral<PeripheralInterface>, pub serial_interface: Peripheral<SerialInterface>, pub audio_interface: Peripheral<AudioInterface>, pub program_counter: Peripheral<ProgramCounter>, pub rdram_interface: Peripheral<RdramInterface>, pub video_interface: Peripheral<VideoInterface>, pub mips_interface: Peripheral<MipsInterface>, pub stack_pointer: Peripheral<StackPointer>, pub rdram: Peripheral<Rdram>, pub dpc: Peripheral<Dpc>, pub dps: Peripheral<Dps>, } ``` Contains references to each hardware peripheral on the system. Fields --- `peripheral_interface: Peripheral<PeripheralInterface>` Controlled reference to the peripheral interface. `serial_interface: Peripheral<SerialInterface>` Controlled reference to the serial interface. `audio_interface: Peripheral<AudioInterface>` Controlled reference to the audio interface. `program_counter: Peripheral<ProgramCounter>` Controlled reference to the program counter. `rdram_interface: Peripheral<RdramInterface>` Controlled reference to the RDRAM interface. `video_interface: Peripheral<VideoInterface>` Controlled reference to the video interface. `mips_interface: Peripheral<MipsInterface>` Controlled reference to the MIPS interface. `stack_pointer: Peripheral<StackPointer>` Controlled reference to the stack pointer. `rdram: Peripheral<Rdram>` Controlled reference to the RDRAM system. `dpc: Peripheral<Dpc>` Controlled reference to the DPC system. `dps: Peripheral<Dps>` Controlled reference to the DPS system. Auto Trait Implementations --- ### impl RefUnwindSafe for Hardware ### impl Send for Hardware ### impl Sync for Hardware ### impl Unpin for Hardware ### impl UnwindSafe for Hardware Blanket Implementations --- ### impl<T> Any for T where T: 'static + ?Sized, #### fn type_id(&self) -> TypeId Gets the `TypeId` of `self`. ### impl<T> Borrow<T> for T where T: ?Sized, #### fn borrow(&self) -> &T Immutably borrows from an owned value.
### impl<T> BorrowMut<T> for T where T: ?Sized, #### fn borrow_mut(&mut self) -> &mut T Mutably borrows from an owned value. ### impl<T> From<T> for T #### fn from(t: T) -> T Returns the argument unchanged. ### impl<T, U> Into<U> for T where U: From<T>, #### fn into(self) -> U Calls `U::from(self)`. That is, this conversion is whatever the implementation of `From<T> for U` chooses to do. ### impl<T, U> TryFrom<U> for T where U: Into<T>, #### type Error = Infallible The type returned in the event of a conversion error. #### fn try_from(value: U) -> Result<T, <T as TryFrom<U>>::Error> Performs the conversion. ### impl<T, U> TryInto<U> for T where U: TryFrom<T>, #### type Error = <U as TryFrom<T>>::Error The type returned in the event of a conversion error. #### fn try_into(self) -> Result<U, <U as TryFrom<T>>::Error> Performs the conversion. Struct nintendo64_pac::Peripheral === ``` pub struct Peripheral<T>(_); ``` An ownership wrapper for peripherals. ``` let mut foo = Peripheral::new(12345); let bar = foo.take(); // Ok(12345) let baz = foo.take(); // Err(HardwareError::TakePeripheralError) ``` Implementations --- ### impl<T> Peripheral<T> #### pub const fn new(peripheral: T) -> Self Creates a new wrapper around the given peripheral. #### pub fn take(&mut self) -> Result<T, HardwareError> Surrenders ownership of this peripheral to the caller. #### pub fn drop(&mut self, peripheral: T) Takes ownership of the given peripheral back from the caller. Auto Trait Implementations --- ### impl<T> RefUnwindSafe for Peripheral<T> where T: RefUnwindSafe, ### impl<T> Send for Peripheral<T> where T: Send, ### impl<T> Sync for Peripheral<T> where T: Sync, ### impl<T> Unpin for Peripheral<T> where T: Unpin, ### impl<T> UnwindSafe for Peripheral<T> where T: UnwindSafe, Blanket Implementations --- ### impl<T> Any for T where T: 'static + ?Sized, #### fn type_id(&self) -> TypeId Gets the `TypeId` of `self`. ### impl<T> Borrow<T> for T where T: ?Sized, #### fn borrow(&self) -> &T Immutably borrows from an owned value.
### impl<T> BorrowMut<T> for T where T: ?Sized, #### fn borrow_mut(&mut self) -> &mut T Mutably borrows from an owned value. ### impl<T> From<T> for T #### fn from(t: T) -> T Returns the argument unchanged. ### impl<T, U> Into<U> for T where U: From<T>, #### fn into(self) -> U Calls `U::from(self)`. That is, this conversion is whatever the implementation of `From<T> for U` chooses to do. ### impl<T, U> TryFrom<U> for T where U: Into<T>, #### type Error = Infallible The type returned in the event of a conversion error. #### fn try_from(value: U) -> Result<T, <T as TryFrom<U>>::Error> Performs the conversion. ### impl<T, U> TryInto<U> for T where U: TryFrom<T>, #### type Error = <U as TryFrom<T>>::Error The type returned in the event of a conversion error. #### fn try_into(self) -> Result<U, <U as TryFrom<T>>::Error> Performs the conversion. Enum nintendo64_pac::HardwareError === ``` pub enum HardwareError { TakePeripheralError, } ``` Errors related to the `Hardware` mechanism. Variants --- ### TakePeripheralError Occurs when an attempt is made to take a peripheral which has already been taken. Auto Trait Implementations --- ### impl RefUnwindSafe for HardwareError ### impl Send for HardwareError ### impl Sync for HardwareError ### impl Unpin for HardwareError ### impl UnwindSafe for HardwareError Blanket Implementations --- ### impl<T> Any for T where T: 'static + ?Sized, #### fn type_id(&self) -> TypeId Gets the `TypeId` of `self`. ### impl<T> Borrow<T> for T where T: ?Sized, #### fn borrow(&self) -> &T Immutably borrows from an owned value. ### impl<T> BorrowMut<T> for T where T: ?Sized, #### fn borrow_mut(&mut self) -> &mut T Mutably borrows from an owned value. ### impl<T> From<T> for T #### fn from(t: T) -> T Returns the argument unchanged. ### impl<T, U> Into<U> for T where U: From<T>, #### fn into(self) -> U Calls `U::from(self)`. That is, this conversion is whatever the implementation of `From<T> for U` chooses to do.
### impl<T, U> TryFrom<U> for T where U: Into<T>, #### type Error = Infallible The type returned in the event of a conversion error. #### fn try_from(value: U) -> Result<T, <T as TryFrom<U>>::Error> Performs the conversion. ### impl<T, U> TryInto<U> for T where U: TryFrom<T>, #### type Error = <U as TryFrom<T>>::Error The type returned in the event of a conversion error. #### fn try_into(self) -> Result<U, <U as TryFrom<T>>::Error> Performs the conversion. Static nintendo64_pac::HARDWARE === ``` pub static mut HARDWARE: Hardware ``` A global, static reference to the hardware peripherals of the Nintendo 64.
gold v0.16.2 API Reference === Modules --- [Gold](Gold.html) Opinionated interface to Bitcoin core JSON-RPC API. Currently in MVP mode: architecture is ready and stable, doesn’t fully implement all of the RPC commands yet [Gold.Transaction](Gold.Transaction.html) Struct which holds all transaction data [PoisonedDecimal](PoisonedDecimal.html) Hacky wrapper for Decimal library to use different encoding than the one prepared for Decimal in Ecto (or different conflicting libraries) gold v0.16.2 Gold === Opinionated interface to Bitcoin core JSON-RPC API. Currently in MVP mode: architecture is ready and stable, doesn’t fully implement all of the RPC commands yet. [Link to this section](#summary) Summary === [Functions](#functions) --- [btc\_to\_decimal(btc)](#btc_to_decimal/1) Converts a float BTC amount to a Decimal [call(name, method)](#call/2) Call generic RPC command [generate(name, amount)](#generate/2) Mine block immediately. Blocks are mined before RPC call returns [generate!(name, amount)](#generate!/2) Mine block immediately. Blocks are mined before RPC call returns.
Raises an exception on failure [getaccount(name, address)](#getaccount/2) Returns the account associated with the given address [getaccount!(name, address)](#getaccount!/2) Returns the account associated with the given address, raising an exception on failure [getbalance(name, account \\ nil)](#getbalance/2) Returns wallet’s total available balance [getbalance!(name, account \\ nil)](#getbalance!/2) Returns wallet’s total available balance, raising an exception on failure [getblock(name, hash)](#getblock/2) https://bitcoin.org/en/developer-reference#getblock [getblock!(name, hash)](#getblock!/2) [getblockchaininfo(name)](#getblockchaininfo/1) https://bitcoin.org/en/developer-reference#getblockchaininfo [getblockchaininfo!(name)](#getblockchaininfo!/1) https://bitcoin.org/en/developer-reference#getblockchaininfo [getblockcount(name)](#getblockcount/1) https://bitcoin.org/en/developer-reference#getblockcount [getblockcount!(name)](#getblockcount!/1) [getblockhash(name, index)](#getblockhash/2) The header hash of a block at the given height in the local best block chain [getblockhash!(name, index)](#getblockhash!/2) [getinfo(name)](#getinfo/1) https://bitcoin.org/en/developer-reference#getinfo [getinfo!(name)](#getinfo!/1) https://bitcoin.org/en/developer-reference#getinfo [getmemoryinfo(name)](#getmemoryinfo/1) https://bitcoin.org/en/developer-reference#getmemoryinfo [getmemoryinfo!(name)](#getmemoryinfo!/1) https://bitcoin.org/en/developer-reference#getmemoryinfo [getmempoolinfo(name)](#getmempoolinfo/1) https://bitcoin.org/en/developer-reference#getmempoolinfo [getmempoolinfo!(name)](#getmempoolinfo!/1) https://bitcoin.org/en/developer-reference#getmempoolinfo [getmininginfo(name)](#getmininginfo/1) https://bitcoin.org/en/developer-reference#getmininginfo [getmininginfo!(name)](#getmininginfo!/1) https://bitcoin.org/en/developer-reference#getmininginfo [getnetworkinfo(name)](#getnetworkinfo/1)
https://bitcoin.org/en/developer-reference#getnetworkinfo [getnetworkinfo!(name)](#getnetworkinfo!/1) https://bitcoin.org/en/developer-reference#getnetworkinfo [getnewaddress(name, account \\ "")](#getnewaddress/2) Returns a new bitcoin address for receiving payments [getnewaddress!(name, account \\ "")](#getnewaddress!/2) Returns a new bitcoin address for receiving payments, raising an exception on failure [getpeerinfo(name)](#getpeerinfo/1) https://bitcoin.org/en/developer-reference#getpeerinfo [getpeerinfo!(name)](#getpeerinfo!/1) https://bitcoin.org/en/developer-reference#getpeerinfo [getrawtransaction(name, txid, verbose \\ 1)](#getrawtransaction/3) https://bitcoin.org/en/developer-reference#getrawtransaction [getrawtransaction!(name, txid, verbose \\ 1)](#getrawtransaction!/3) [gettransaction(name, txid)](#gettransaction/2) Get detailed information about in-wallet transaction [gettransaction!(name, txid)](#gettransaction!/2) Get detailed information about in-wallet transaction, raising an exception on failure [gettxout(name, txid, n \\ 1)](#gettxout/3) https://bitcoin.org/en/developer-reference#gettxout [gettxout!(name, txid, n \\ 1)](#gettxout!/3) [gettxoutsetinfo(name)](#gettxoutsetinfo/1) https://bitcoin.org/en/developer-reference#gettxoutsetinfo [gettxoutsetinfo!(name)](#gettxoutsetinfo!/1) https://bitcoin.org/en/developer-reference#gettxoutsetinfo [getwalletinfo(name)](#getwalletinfo/1) https://bitcoin.org/en/developer-reference#getwalletinfo [getwalletinfo!(name)](#getwalletinfo!/1) https://bitcoin.org/en/developer-reference#getwalletinfo [importaddress(name, address, account \\ "", rescan \\ true)](#importaddress/4) Add an address or pubkey script to the wallet without the associated private key [importaddress!(name, address, account \\ "", rescan \\ true)](#importaddress!/4) Add an address or pubkey script to the wallet without the associated private key, raising an exception on failure [listsinceblock(name, header\_hash,
target\_confirmations, watchonly)](#listsinceblock/4) Returns all transactions affecting the wallet which have occurred since a particular block [listsinceblock!(name, header\_hash, target\_confirmations, watchonly)](#listsinceblock!/4) Returns all transactions affecting the wallet which have occurred since a particular block [listtransactions(name, account \\ "\*", limit \\ 10, offset \\ 0)](#listtransactions/4) Returns most recent transactions in wallet [listtransactions!(name, account \\ "\*", limit \\ 10, offset \\ 0)](#listtransactions!/4) Returns most recent transactions in wallet, raising an exception on failure [sendtoaddress(name, address, amount)](#sendtoaddress/3) Send an amount to a given address [sendtoaddress!(name, address, amount)](#sendtoaddress!/3) Send an amount to a given address, raising an exception on failure [start(type, args)](#start/2) Called when an application is started [Link to this section](#functions) Functions === [Link to this function](#btc_to_decimal/1 "Link to this function") btc\_to\_decimal(btc) Converts a float BTC amount to a Decimal. [Link to this function](#call/2 "Link to this function") call(name, method) Call generic RPC command [Link to this function](#generate/2 "Link to this function") generate(name, amount) Mine block immediately. Blocks are mined before RPC call returns. [Link to this function](#generate!/2 "Link to this function") generate!(name, amount) Mine block immediately. Blocks are mined before RPC call returns. Raises an exception on failure. [Link to this function](#getaccount/2 "Link to this function") getaccount(name, address) Returns the account associated with the given address. [Link to this function](#getaccount!/2 "Link to this function") getaccount!(name, address) Returns the account associated with the given address, raising an exception on failure.
[Link to this function](#getbalance/2 "Link to this function") getbalance(name, account \\ nil) Returns wallet’s total available balance. [Link to this function](#getbalance!/2 "Link to this function") getbalance!(name, account \\ nil) Returns wallet’s total available balance, raising an exception on failure. [Link to this function](#getblock/2 "Link to this function") getblock(name, hash) https://bitcoin.org/en/developer-reference#getblock [Link to this function](#getblock!/2 "Link to this function") getblock!(name, hash) [Link to this function](#getblockchaininfo/1 "Link to this function") getblockchaininfo(name) https://bitcoin.org/en/developer-reference#getblockchaininfo [Link to this function](#getblockchaininfo!/1 "Link to this function") getblockchaininfo!(name) https://bitcoin.org/en/developer-reference#getblockchaininfo [Link to this function](#getblockcount/1 "Link to this function") getblockcount(name) https://bitcoin.org/en/developer-reference#getblockcount [Link to this function](#getblockcount!/1 "Link to this function") getblockcount!(name) [Link to this function](#getblockhash/2 "Link to this function") getblockhash(name, index) The header hash of a block at the given height in the local best block chain https://bitcoin.org/en/developer-reference#getblockhash [Link to this function](#getblockhash!/2 "Link to this function") getblockhash!(name, index) [Link to this function](#getinfo/1 "Link to this function") getinfo(name) https://bitcoin.org/en/developer-reference#getinfo [Link to this function](#getinfo!/1 "Link to this function") getinfo!(name) https://bitcoin.org/en/developer-reference#getinfo [Link to this function](#getmemoryinfo/1 "Link to this function") getmemoryinfo(name) https://bitcoin.org/en/developer-reference#getmemoryinfo [Link to this function](#getmemoryinfo!/1 "Link to this function") getmemoryinfo!(name) https://bitcoin.org/en/developer-reference#getmemoryinfo [Link to this
function](#getmempoolinfo/1 "Link to this function") getmempoolinfo(name) https://bitcoin.org/en/developer-reference#getmempoolinfo [Link to this function](#getmempoolinfo!/1 "Link to this function") getmempoolinfo!(name) https://bitcoin.org/en/developer-reference#getmempoolinfo [Link to this function](#getmininginfo/1 "Link to this function") getmininginfo(name) https://bitcoin.org/en/developer-reference#getmininginfo [Link to this function](#getmininginfo!/1 "Link to this function") getmininginfo!(name) https://bitcoin.org/en/developer-reference#getmininginfo [Link to this function](#getnetworkinfo/1 "Link to this function") getnetworkinfo(name) https://bitcoin.org/en/developer-reference#getnetworkinfo [Link to this function](#getnetworkinfo!/1 "Link to this function") getnetworkinfo!(name) https://bitcoin.org/en/developer-reference#getnetworkinfo [Link to this function](#getnewaddress/2 "Link to this function") getnewaddress(name, account \\ "") Returns a new bitcoin address for receiving payments. [Link to this function](#getnewaddress!/2 "Link to this function") getnewaddress!(name, account \\ "") Returns a new bitcoin address for receiving payments, raising an exception on failure. [Link to this function](#getpeerinfo/1 "Link to this function") getpeerinfo(name) https://bitcoin.org/en/developer-reference#getpeerinfo [Link to this function](#getpeerinfo!/1 "Link to this function") getpeerinfo!(name) https://bitcoin.org/en/developer-reference#getpeerinfo [Link to this function](#getrawtransaction/3 "Link to this function") getrawtransaction(name, txid, verbose \\ 1) https://bitcoin.org/en/developer-reference#getrawtransaction [Link to this function](#getrawtransaction!/3 "Link to this function") getrawtransaction!(name, txid, verbose \\ 1) [Link to this function](#gettransaction/2 "Link to this function") gettransaction(name, txid) Get detailed information about in-wallet transaction.
gettransaction!(name, txid)
Get detailed information about an in-wallet transaction, raising an exception on failure.

gettxout(name, txid, n \\ 1)
https://bitcoin.org/en/developer-reference#gettxout

gettxout!(name, txid, n \\ 1)

gettxoutsetinfo(name)
https://bitcoin.org/en/developer-reference#gettxoutsetinfo

gettxoutsetinfo!(name)
https://bitcoin.org/en/developer-reference#gettxoutsetinfo

getwalletinfo(name)
https://bitcoin.org/en/developer-reference#getwalletinfo

getwalletinfo!(name)
https://bitcoin.org/en/developer-reference#getwalletinfo

importaddress(name, address, account \\ "", rescan \\ true)
Add an address or pubkey script to the wallet without the associated private key.

importaddress!(name, address, account \\ "", rescan \\ true)
Add an address or pubkey script to the wallet without the associated private key, raising an exception on failure.
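Each of the wrappers above maps one-to-one onto a bitcoind JSON-RPC method of the same name. As a rough illustration of what such a call sends over the wire (the `rpcRequest` struct, `encodeCall` helper, and the `"gold"` id are hypothetical stand-ins, not Gold's actual code; endpoint and credentials are omitted):

```go
package main

import (
	"encoding/json"
	"fmt"
)

// rpcRequest models a bitcoind-style JSON-RPC 1.0 request body.
type rpcRequest struct {
	Jsonrpc string        `json:"jsonrpc"`
	ID      string        `json:"id"`
	Method  string        `json:"method"`
	Params  []interface{} `json:"params"`
}

// encodeCall builds the JSON body for one RPC method invocation.
func encodeCall(method string, params ...interface{}) string {
	req := rpcRequest{Jsonrpc: "1.0", ID: "gold", Method: method, Params: params}
	b, _ := json.Marshal(req)
	return string(b)
}

func main() {
	// A call like getblockhash(name, 0) ultimately posts a body of this shape:
	fmt.Println(encodeCall("getblockhash", 0))
}
```

The `!`-suffixed variants differ only in error handling on the client side: they unwrap the `{:ok, result}` tuple and raise instead of returning an error tuple.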
listsinceblock(name, header_hash, target_confirmations, watchonly)
Returns all transactions affecting the wallet which have occurred since a particular block.

listsinceblock!(name, header_hash, target_confirmations, watchonly)
Returns all transactions affecting the wallet which have occurred since a particular block, raising an exception on failure.

listtransactions(name, account \\ "*", limit \\ 10, offset \\ 0)
Returns the most recent transactions in the wallet.

listtransactions!(name, account \\ "*", limit \\ 10, offset \\ 0)
Returns the most recent transactions in the wallet, raising an exception on failure.

sendtoaddress(name, address, amount)
Send an amount to a given address.

sendtoaddress!(name, address, amount)
Send an amount to a given address, raising an exception on failure.

start(type, args)
Called when an application is started.

This function is called when an application is started using [`Application.start/2`](https://hexdocs.pm/elixir/Application.html#start/2) (and functions on top of that, such as [`Application.ensure_started/2`](https://hexdocs.pm/elixir/Application.html#ensure_started/2)). This function should start the top-level process of the application (which should be the top supervisor of the application’s supervision tree if the application follows the OTP design principles around supervision).
`start_type` defines how the application is started:

* `:normal` - used if the startup is a normal startup or if the application is distributed and is started on the current node because of a failover from another node and the application specification key `:start_phases` is `:undefined`.
* `{:takeover, node}` - used if the application is distributed and is started on the current node because of a takeover from node `node`.
* `{:failover, node}` - used if the application is distributed and is started on the current node because of a failover on node `node`, and the application specification key `:start_phases` is not `:undefined`.

`start_args` are the arguments passed to the application in the `:mod` specification key (e.g., `mod: {MyApp, [:my_args]}`).

This function should either return `{:ok, pid}` or `{:ok, pid, state}` if startup is successful. `pid` should be the PID of the top supervisor. `state` can be an arbitrary term, and if omitted will default to `[]`; if the application is later stopped, `state` is passed to the `stop/1` callback (see the documentation for the `c:stop/1` callback for more information).

`use Application` provides no default implementation for the [`start/2`](#start/2) callback.

Callback implementation for [`Application.start/2`](https://hexdocs.pm/elixir/Application.html#c:start/2).

gold v0.16.2

Gold.Transaction
===

Struct which holds all transaction data.

Summary
===

Functions
---

[from_json(tx)](#from_json/1) - Creates a Transaction struct from a JSON transaction object.
[transaction?(arg1)](#transaction?/1) - Returns `true` if the argument is a bitcoin transaction; otherwise `false`.

Functions
===

from_json(tx)
Creates a Transaction struct from a JSON transaction object.
transaction?(arg1)
Returns `true` if the argument is a bitcoin transaction; otherwise `false`.

gold v0.16.2

PoisonedDecimal
===

Hacky wrapper around the Decimal library, used to apply a different JSON encoding than the one prepared for Decimal in Ecto (or in other conflicting libraries).

Summary
===

Functions
---

[new(decimal)](#new/1)
[poison_params(params)](#poison_params/1)

Functions
===

new(decimal)

poison_params(params)
github.com/go-gem/gem
README [¶](#section-readme)
---
### Gem Web Framework

[![GoDoc](https://godoc.org/github.com/go-gem/gem?status.svg)](https://godoc.org/github.com/go-gem/gem) [![Build Status](https://travis-ci.org/go-gem/gem.svg?branch=master)](https://travis-ci.org/go-gem/gem) [![Go Report Card](https://goreportcard.com/badge/github.com/go-gem/gem)](https://goreportcard.com/report/github.com/go-gem/gem) [![Coverage Status](https://coveralls.io/repos/github/go-gem/gem/badge.svg?branch=master)](https://coveralls.io/github/go-gem/gem?branch=master) [![Sourcegraph](https://sourcegraph.com/github.com/go-gem/gem/-/badge.svg)](https://sourcegraph.com/github.com/go-gem/gem?badge)

Gem is an easy-to-use, high-performance web framework written in Go (golang). It supports HTTP/2, and provides a leveled logger and frequently used middlewares.

> **Note**: requires `go1.8` or above.

[Starter Kit](https://github.com/go-gem/StarterKit) is available; it provides a convenient way to create an application.

#### Features

* High performance
* Friendly to REST APIs
* Full test of all APIs [![Coverage Status](https://coveralls.io/repos/github/go-gem/gem/badge.svg?branch=master)](https://coveralls.io/github/go-gem/gem?branch=master)
* Pretty and fast router - the router is a custom version of [httprouter](https://github.com/julienschmidt/httprouter)
* HTTP/2 support - HTTP/2 server push has been supported since `go1.8`
* Leveled logging - includes the four levels `debug`, `info`, `error` and `fatal`; the following packages are compatible with Gem
  + [logrus](https://github.com/Sirupsen/logrus) - structured, pluggable logging for Go
  + [go-logging](https://github.com/op/go-logging) - golang logging library
  + [gem-log](https://github.com/go-gem/log) - default logger
* Frequently used [middlewares](#readme-middlewares)
  + [CORS Middleware](https://github.com/go-gem/middleware-cors) - Cross-Origin Resource Sharing
  + [AUTH Middleware](https://github.com/go-gem/middleware-auth) - HTTP Basic and HTTP Digest authentication
  +
[JWT Middleware](https://github.com/go-gem/middleware-jwt) - JSON WEB TOKEN authentication
  + [Compress Middleware](https://github.com/go-gem/middleware-compress) - Compress response body
  + [Request Body Limit Middleware](https://github.com/go-gem/middleware-body-limit) - limit request body maximum size
  + [Rate Limiting Middleware](https://github.com/go-gem/middleware-rate-limit) - limit API usage of each user
  + [CSRF Middleware](https://github.com/go-gem/middleware-csrf) - Cross-Site Request Forgery protection
* Frozen APIs
* Hardly any third-party dependencies
* Compatible with third-party `net/http` packages, such as [gorilla sessions](https://github.com/gorilla/sessions), [gorilla websocket](https://github.com/gorilla/websocket) etc.

#### Getting Started

##### Install

```
$ go get -u github.com/go-gem/gem
```

##### Quick Start

```
package main

import (
	"log"

	"github.com/go-gem/gem"
)

func index(ctx *gem.Context) {
	ctx.HTML(200, "hello world")
}

func main() {
	// Create server.
	srv := gem.New(":8080")

	// Create router.
	router := gem.NewRouter()
	// Register handler
	router.GET("/", index)

	// Start server.
	log.Println(srv.ListenAndServe(router.Handler()))
}
```

##### Context

[Context](https://godoc.org/github.com/go-gem/gem#Context) wraps `http.ResponseWriter` and `*http.Request`, and provides some useful APIs and shortcuts; see <https://godoc.org/github.com/go-gem/gem#Context>.

##### Logger

AFAIK, the following leveled logging packages are compatible with the Gem web framework:

* [logrus](https://github.com/Sirupsen/logrus) - structured, pluggable logging for Go
* [go-logging](https://github.com/op/go-logging) - golang logging library
* [gem-log](https://github.com/go-gem/log) - default logger
* Please let me know if I missed any other logging packages :)

[Logger](https://godoc.org/github.com/go-gem/gem#Logger) includes four levels: `debug`, `info`, `error` and `fatal`.
**APIs**

* `Debug` and `Debugf`
* `Info` and `Infof`
* `Error` and `Errorf`
* `Fatal` and `Fatalf`

For example:

```
// set logrus logger as server's logger.
srv.SetLogger(logrus.New())

// we can use it in a handler.
router.GET("/logger", func(ctx *gem.Context) {
	ctx.Logger().Debug("debug")
	ctx.Logger().Info("info")
	ctx.Logger().Error("error")
})
```

##### Static Files

```
router.ServeFiles("/tmp/*filepath", http.Dir(os.TempDir()))
```

Note: the first parameter must end with `*filepath`.

##### REST APIs

The router is friendly to REST APIs.

```
// user list
router.GET("/users", func(ctx *gem.Context) {
	ctx.JSON(200, userlist)
})

// add user
router.POST("/users", func(ctx *gem.Context) {
	ctx.Request.ParseForm()
	name := ctx.Request.FormValue("name")

	// add user
	ctx.JSON(200, msg)
})

// user profile.
router.GET("/users/:name", func(ctx *gem.Context) {
	// first, get the username from the route parameters.
	name, err := gem.String(ctx.UserValue("name"))
	if err != nil {
		ctx.JSON(404, userNotFound)
		return
	}

	// return user profile.
	ctx.JSON(200, userProfileByName(name))
})

// update user profile
router.PUT("/users/:name", func(ctx *gem.Context) {
	// first, get the username from the route parameters.
	name, err := gem.String(ctx.UserValue("name"))
	if err != nil {
		ctx.JSON(404, userNotFound)
		return
	}

	// get nickname
	ctx.Request.ParseForm()
	nickname := ctx.Request.FormValue("nickname")

	// update user nickname.
	ctx.JSON(200, msg)
})

// delete user
router.DELETE("/users/:name", func(ctx *gem.Context) {
	// first, get the username from the route parameters.
	name, err := gem.String(ctx.UserValue("name"))
	if err != nil {
		ctx.JSON(404, userNotFound)
		return
	}

	// delete user.
	ctx.JSON(200, msg)
})
```

##### HTTP/2 Server Push

See <https://github.com/go-gem/examples/tree/master/http2>.
```
router.GET("/", func(ctx *gem.Context) {
	if err := ctx.Push("/images/logo.png", nil); err != nil {
		ctx.Logger().Info(err)
	}

	ctx.HTML(200, `<html><head></head><body><img src="/images/logo.png"/></body></html>`)
})

router.ServeFiles("/images/*filepath", http.Dir(imagesDir))
```

##### Use Middleware

It is easy to implement a middleware: see the [Middleware](https://godoc.org/github.com/go-gem/gem#Middleware) interface; you only need to implement the `Wrap` function.

```
type Middleware interface {
	Wrap(next Handler) Handler
}
```

For example, we define a simple debug middleware:

```
type Debug struct{}

// Wrap implements the Middleware interface.
func (d *Debug) Wrap(next gem.Handler) gem.Handler {
	// gem.HandlerFunc is an adapter like http.HandlerFunc.
	return gem.HandlerFunc(func(ctx *gem.Context) {
		// print request info.
		log.Println(ctx.Request.URL, ctx.Request.Method)

		// call the next handler.
		next.Handle(ctx)
	})
}
```

Then register it. To register the middleware for all handlers, use [Router.Use](https://godoc.org/github.com/go-gem/gem#Router.Use):

```
router.Use(&Debug{})
```

We can also set up the middleware for a specific handler via [HandlerOption](https://godoc.org/github.com/go-gem/gem#HandlerOption):

```
router.GET("/specific", specificHandler, &gem.HandlerOption{Middlewares: []gem.Middleware{&Debug{}}})
```

Gem also provides some frequently used middlewares, see [Middlewares](#readme-middlewares).

##### Share data between middlewares

Context provides two useful methods, `SetUserValue` and `UserValue`, to share data between middlewares.
```
// Store data into context in one middleware
ctx.SetUserValue("name", "foo")

// Get data from context in another middleware or handler
ctx.UserValue("name")
```

#### Middlewares

**Please let me know if you have composed a middleware; I will mention it here. I believe it would be helpful to users.**

* [CORS Middleware](https://github.com/go-gem/middleware-cors) - Cross-Origin Resource Sharing
* [AUTH Middleware](https://github.com/go-gem/middleware-auth) - HTTP Basic and HTTP Digest authentication
* [JWT Middleware](https://github.com/go-gem/middleware-jwt) - JSON WEB TOKEN authentication
* [Compress Middleware](https://github.com/go-gem/middleware-compress) - compress response body
* [Request Body Limit Middleware](https://github.com/go-gem/middleware-body-limit) - limit request body maximum size
* [Rate Limiting Middleware](https://github.com/go-gem/middleware-rate-limit) - limit API usage of each user
* [CSRF Middleware](https://github.com/go-gem/middleware-csrf) - Cross-Site Request Forgery protection

#### Semantic Versioning

Gem follows [semantic versioning 2.0.0](http://semver.org/) managed through GitHub releases.

#### Support Us

* ⭐ the project.
* Spread the word.
* [Contribute](#readme-contribute) to the project.

#### Contribute

* [Report issues](https://github.com/go-gem/gem/issues/new)
* Send PRs.
* Improve/fix documentation.

**We’re always looking for help, so if you would like to contribute, we’d love to have you!**

#### Changes

The `v2` and `v1` are totally different:

* `v2` is built on top of `net/http` instead of `fasthttp`.
* `v2` requires `go1.8` or above.
* `v2` is compatible with `Windows`.

#### FAQ

* Why choose `net/http` instead of `fasthttp`?
  1. `net/http` has many more third-party packages than `fasthttp`.
  2. `fasthttp` doesn't support `HTTP/2` yet.

#### LICENSE

BSD 3-Clause License, see [LICENSE](https://github.com/go-gem/gem/blob/a6812000e965/LICENSE) and [AUTHORS](https://github.com/go-gem/gem/blob/a6812000e965/AUTHORS.md).
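The `Wrap` interface shown in the Use Middleware section above is the classic decorator pattern. As a self-contained sketch of the same chaining, with minimal stand-ins for gem's `Handler` and `Middleware` types (illustrative only, not the library's definitions):

```go
package main

import "fmt"

// Minimal stand-ins for gem.Handler / gem.Middleware, for illustration only.
type Handler interface{ Handle(req string) }

type HandlerFunc func(req string)

func (f HandlerFunc) Handle(req string) { f(req) }

type Middleware interface{ Wrap(next Handler) Handler }

// Debug records each request before delegating to the next handler,
// mirroring the Debug middleware in the README above.
type Debug struct{ log *[]string }

func (d *Debug) Wrap(next Handler) Handler {
	return HandlerFunc(func(req string) {
		*d.log = append(*d.log, "debug: "+req)
		next.Handle(req)
	})
}

func main() {
	var log []string
	var h Handler = HandlerFunc(func(req string) { log = append(log, "handled: "+req) })

	// Same shape as router.Use(&Debug{}): wrap the final handler.
	h = (&Debug{log: &log}).Wrap(h)
	h.Handle("GET /")

	fmt.Println(log) // the middleware entry appears before the handler entry
}
```

Because each middleware only sees a `Handler`, wrapping composes: `m1.Wrap(m2.Wrap(h))` runs `m1` first, then `m2`, then the handler.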
**Inspiration & Credits**

Out of respect for the third-party packages, I added their authors to [AUTHORS](https://github.com/go-gem/gem/blob/a6812000e965/AUTHORS.md) and listed those packages here.

* [**httprouter**](https://github.com/julienschmidt/httprouter) - [LICENSE](https://github.com/julienschmidt/httprouter/raw/master/LICENSE). Gem's router is a custom version of `httprouter`; thanks to `httprouter`.

Documentation [¶](#section-documentation)
---

### Overview [¶](#pkg-overview)

* [Features](#hdr-Features)
* [Install](#hdr-Install)
* [Quick Start](#hdr-Quick_Start)
* [Logger](#hdr-Logger)
* [Static Files](#hdr-Static_Files)
* [REST APIs](#hdr-REST_APIs)
* [HTTP2 Server Push](#hdr-HTTP2_Server_Push)
* [Use Middleware](#hdr-Use_Middleware)
* [Share data between middlewares](#hdr-Share_data_between_middlewares)

Package gem is a high performance web framework that is friendly to REST APIs.

Note: This package requires go1.8 or above.

#### Features [¶](#hdr-Features)

1. High performance
2. Friendly to REST APIs
3. Full test of all APIs
4. Pretty and fast router
5. HTTP/2 support
6. Leveled logging - includes the four levels `debug`, `info`, `error` and `fatal`; several third-party packages implement the Logger:

```
logrus - Structured, pluggable logging for Go - https://github.com/Sirupsen/logrus
go-logging - Golang logging library - https://github.com/op/go-logging
gem-log - default logger - https://github.com/go-gem/log
```

7.
Middlewares

```
CSRF Middleware - Cross-Site Request Forgery protection - https://github.com/go-gem/middleware-csrf
CORS Middleware - Cross-Origin Resource Sharing - https://github.com/go-gem/middleware-cors
AUTH Middleware - HTTP Basic and HTTP Digest authentication - https://github.com/go-gem/middleware-auth
JWT Middleware - JSON WEB TOKEN authentication - https://github.com/go-gem/middleware-jwt
Compress Middleware - compress response body - https://github.com/go-gem/middleware-compress
Request Body Limit Middleware - limit request body size - https://github.com/go-gem/middleware-body-limit
```

8. Frozen APIs since the stable version `2.0.0` was released

#### Install [¶](#hdr-Install)

Use the go get command to install:

```
$ go get -u github.com/go-gem/gem
```

#### Quick Start [¶](#hdr-Quick_Start)

A simple HTTP server:

```
package main

import (
	"log"

	"github.com/go-gem/gem"
)

func index(ctx *gem.Context) {
	ctx.HTML(200, "hello world")
}

func main() {
	// Create server.
	srv := gem.New(":8080")

	// Create router.
	router := gem.NewRouter()
	// Register handler
	router.GET("/", index)

	// Start server.
	log.Println(srv.ListenAndServe(router.Handler()))
}
```

#### Logger [¶](#hdr-Logger)

AFAIK, the following leveled logging packages are compatible with the Gem web framework:

1. logrus - structured, pluggable logging for Go - <https://github.com/Sirupsen/logrus>
2. go-logging - golang logging library - <https://github.com/op/go-logging>
3. gem-log - default logger, maintained by Gem Authors - <https://github.com/go-gem/log>

Logger (<https://godoc.org/github.com/go-gem/gem#Logger>) includes four levels: debug, info, error and fatal; their APIs are Debug and Debugf, Info and Infof, Error and Errorf, Fatal and Fatalf.

We take logrus as an example to show how to set and use a logger.

```
// set logrus logger as server's logger.
srv.SetLogger(logrus.New())

// we can use it in a handler.
router.GET("/logger", func(ctx *gem.Context) {
	ctx.Logger().Debug("debug")
	ctx.Logger().Info("info")
	ctx.Logger().Error("error")
})
```

#### Static Files [¶](#hdr-Static_Files)

An example that serves static files:

```
router.ServeFiles("/tmp/*filepath", http.Dir(os.TempDir()))
```

Note: the path (first parameter) must end with `*filepath`.

#### REST APIs [¶](#hdr-REST_APIs)

The router is friendly to REST APIs.

```
// user list
router.GET("/users", func(ctx *gem.Context) {
	ctx.JSON(200, userlist)
})

// add user
router.POST("/users", func(ctx *gem.Context) {
	ctx.Request.ParseForm()
	name := ctx.Request.FormValue("name")

	// add user
	ctx.JSON(200, msg)
})

// user profile.
router.GET("/users/:name", func(ctx *gem.Context) {
	// first, get the username from the route parameters.
	name, ok := ctx.UserValue("name").(string)
	if !ok {
		ctx.JSON(404, userNotFound)
		return
	}

	// return user profile.
	ctx.JSON(200, userProfileByName(name))
})

// update user profile
router.PUT("/users/:name", func(ctx *gem.Context) {
	// first, get the username from the route parameters.
	name, ok := ctx.UserValue("name").(string)
	if !ok {
		ctx.JSON(404, userNotFound)
		return
	}

	// get nickname
	ctx.Request.ParseForm()
	nickname := ctx.Request.FormValue("nickname")

	// update user nickname.
	ctx.JSON(200, msg)
})

// delete user
router.DELETE("/users/:name", func(ctx *gem.Context) {
	// first, get the username from the route parameters.
	name, ok := ctx.UserValue("name").(string)
	if !ok {
		ctx.JSON(404, userNotFound)
		return
	}

	// delete user.
	ctx.JSON(200, msg)
})
```

#### HTTP2 Server Push [¶](#hdr-HTTP2_Server_Push)

See <https://github.com/go-gem/examples/tree/master/http2>.

```
router.GET("/", func(ctx *gem.Context) {
	if err := ctx.Push("/images/logo.png", nil); err != nil {
		ctx.Logger().Info(err)
	}

	ctx.HTML(200, `<html><head></head><body><img src="/images/logo.png"/></body></html>`)
})

router.ServeFiles("/images/*filepath", http.Dir(imagesDir))
```

#### Use Middleware [¶](#hdr-Use_Middleware)

It is easy to implement a middleware: see the Middleware (<https://godoc.org/github.com/go-gem/gem#Middleware>) interface; you only need to implement the `Wrap` function.

```
type Middleware interface {
	Wrap(next Handler) Handler
}
```

For example, we define a simple debug middleware:

```
type Debug struct{}

// Wrap implements the Middleware interface.
func (d *Debug) Wrap(next gem.Handler) gem.Handler {
	// gem.HandlerFunc is an adapter like http.HandlerFunc.
	return gem.HandlerFunc(func(ctx *gem.Context) {
		// print request info.
		log.Println(ctx.Request.URL, ctx.Request.Method)

		// call the next handler.
		next.Handle(ctx)
	})
}
```

Then register it. To register the middleware for all handlers, use Router.Use (<https://godoc.org/github.com/go-gem/gem#Router.Use>):

```
router.Use(&Debug{})
```

We can also register the middleware for a specific handler via HandlerOption (<https://godoc.org/github.com/go-gem/gem#HandlerOption>):

```
router.GET("/specific", specificHandler, &gem.HandlerOption{Middlewares: []gem.Middleware{&Debug{}}})
```

Gem also provides some frequently used middlewares, such as:

1. CSRF Middleware - Cross-Site Request Forgery protection - <https://github.com/go-gem/middleware-csrf>
2. CORS Middleware - Cross-Origin Resource Sharing - <https://github.com/go-gem/middleware-cors>
3. AUTH Middleware - HTTP Basic and HTTP Digest authentication - <https://github.com/go-gem/middleware-auth>
4. JWT Middleware - JSON WEB TOKEN authentication - <https://github.com/go-gem/middleware-jwt>
5.
Compress Middleware - Compress response body - <https://github.com/go-gem/middleware-compress>
6. Request Body Limit Middleware - limit request body maximum size - <https://github.com/go-gem/middleware-body-limit>

#### Share data between middlewares [¶](#hdr-Share_data_between_middlewares)

Context provides two useful methods, `SetUserValue` and `UserValue`, to share data between middlewares.

```
// Store data into context in one middleware
ctx.SetUserValue("name", "foo")

// Get data from context in another middleware or handler
ctx.UserValue("name")
```

### Index [¶](#pkg-index)

* [Constants](#pkg-constants)
* [func CleanPath(p string) string](#CleanPath)
* [func Int(v interface{}) (int, error)](#Int)
* [func ListenAndServe(addr string, handler Handler) error](#ListenAndServe)
* [func ListenAndServeTLS(addr, certFile, keyFile string, handler Handler) error](#ListenAndServeTLS)
* [func String(v interface{}) (string, error)](#String)
* [func Version() string](#Version)
* [type Application](#Application)
  + [func NewApplication(filename string) (*Application, error)](#NewApplication)
  + [func (app *Application) Close() (errs []error)](#Application.Close)
  + [func (app *Application) Component(name string) interface{}](#Application.Component)
  + [func (app *Application) Init() (err error)](#Application.Init)
  + [func (app *Application) InitControllers() (err error)](#Application.InitControllers)
  + [func (app *Application) Router() *Router](#Application.Router)
  + [func (app *Application) SetCloseCallback(callback ApplicationCallback)](#Application.SetCloseCallback)
  + [func (app *Application) SetComponent(name string, component interface{}) error](#Application.SetComponent)
  + [func (app *Application) SetController(path string, controller Controller)](#Application.SetController)
  + [func (app *Application) SetInitCallback(callback ApplicationCallback)](#Application.SetInitCallback)
  + [func (app *Application) Templates() *Templates](#Application.Templates)
* [type
ApplicationCallback](#ApplicationCallback) * [type AssetsOption](#AssetsOption) * [type Context](#Context) * + [func (ctx *Context) Error(error string, code int)](#Context.Error) + [func (ctx *Context) FormFile(key string) (multipart.File, *multipart.FileHeader, error)](#Context.FormFile) + [func (ctx *Context) FormValue(key string) string](#Context.FormValue) + [func (ctx *Context) HTML(code int, body string)](#Context.HTML) + [func (ctx *Context) Host() string](#Context.Host) + [func (ctx *Context) IsAjax() bool](#Context.IsAjax) + [func (ctx *Context) IsDelete() bool](#Context.IsDelete) + [func (ctx *Context) IsGet() bool](#Context.IsGet) + [func (ctx *Context) IsHead() bool](#Context.IsHead) + [func (ctx *Context) IsPost() bool](#Context.IsPost) + [func (ctx *Context) IsPut() bool](#Context.IsPut) + [func (ctx *Context) JSON(code int, v interface{})](#Context.JSON) + [func (ctx *Context) Logger() Logger](#Context.Logger) + [func (ctx *Context) NotFound()](#Context.NotFound) + [func (ctx *Context) ParseForm() error](#Context.ParseForm) + [func (ctx *Context) PostFormValue(key string) string](#Context.PostFormValue) + [func (ctx *Context) Push(target string, opts *http.PushOptions) error](#Context.Push) + [func (ctx *Context) Redirect(url string, code int)](#Context.Redirect) + [func (ctx *Context) Referer() string](#Context.Referer) + [func (ctx *Context) SetContentType(v string)](#Context.SetContentType) + [func (ctx *Context) SetServer(server *Server)](#Context.SetServer) + [func (ctx *Context) SetStatusCode(code int)](#Context.SetStatusCode) + [func (ctx *Context) SetUserValue(key string, value interface{})](#Context.SetUserValue) + [func (ctx *Context) URL() *url.URL](#Context.URL) + [func (ctx *Context) UserValue(key string) interface{}](#Context.UserValue) + [func (ctx *Context) Write(p []byte) (n int, err error)](#Context.Write) + [func (ctx *Context) XML(code int, v interface{}, headers ...string)](#Context.XML) * [type Controller](#Controller) * [type 
Handler](#Handler) * [type HandlerFunc](#HandlerFunc) * + [func (f HandlerFunc) Handle(ctx *Context)](#HandlerFunc.Handle) * [type HandlerOption](#HandlerOption) * + [func NewHandlerOption(middlewares ...Middleware) *HandlerOption](#NewHandlerOption) * [type Logger](#Logger) * [type Middleware](#Middleware) * [type Router](#Router) * + [func NewRouter() *Router](#NewRouter) * + [func (r *Router) DELETE(path string, handle HandlerFunc, opts ...*HandlerOption)](#Router.DELETE) + [func (r *Router) GET(path string, handle HandlerFunc, opts ...*HandlerOption)](#Router.GET) + [func (r *Router) HEAD(path string, handle HandlerFunc, opts ...*HandlerOption)](#Router.HEAD) + [func (r *Router) Handle(method, path string, handle HandlerFunc, opts ...*HandlerOption)](#Router.Handle) + [func (r *Router) Handler() Handler](#Router.Handler) + [func (r *Router) Lookup(method, path string, ctx *Context) (Handler, bool)](#Router.Lookup) + [func (r *Router) OPTIONS(path string, handle HandlerFunc, opts ...*HandlerOption)](#Router.OPTIONS) + [func (r *Router) PATCH(path string, handle HandlerFunc, opts ...*HandlerOption)](#Router.PATCH) + [func (r *Router) POST(path string, handle HandlerFunc, opts ...*HandlerOption)](#Router.POST) + [func (r *Router) PUT(path string, handle HandlerFunc, opts ...*HandlerOption)](#Router.PUT) + [func (r *Router) ServeFiles(path string, root http.FileSystem, opts ...*HandlerOption)](#Router.ServeFiles) + [func (r *Router) Use(middleware Middleware)](#Router.Use) * [type Server](#Server) * + [func New(addr string) *Server](#New) * + [func (srv *Server) ListenAndServe(handler Handler) error](#Server.ListenAndServe) + [func (srv *Server) ListenAndServeTLS(certFile, keyFile string, handler Handler) error](#Server.ListenAndServeTLS) + [func (srv *Server) SetLogger(logger Logger)](#Server.SetLogger) * [type ServerOption](#ServerOption) * [type Templates](#Templates) * + [func NewTemplates(path string) *Templates](#NewTemplates) * + [func (ts *Templates) 
Filenames(filenames ...string) []string](#Templates.Filenames) + [func (ts *Templates) Layout(name string) (*template.Template, error)](#Templates.Layout) + [func (ts *Templates) New(filenames ...string) (*template.Template, error)](#Templates.New) + [func (ts *Templates) Render(layoutName string, filenames ...string) (*template.Template, error)](#Templates.Render) + [func (ts *Templates) SetLayout(filenames ...string) error](#Templates.SetLayout) * [type TemplatesOption](#TemplatesOption) * [type WebController](#WebController) * + [func (wc *WebController) DELETE(ctx *Context)](#WebController.DELETE) + [func (wc *WebController) GET(ctx *Context)](#WebController.GET) + [func (wc *WebController) HEAD(ctx *Context)](#WebController.HEAD) + [func (wc *WebController) HandlerOptions() map[string]*HandlerOption](#WebController.HandlerOptions) + [func (wc *WebController) Init(app *Application) error](#WebController.Init) + [func (wc *WebController) Methods() []string](#WebController.Methods) + [func (wc *WebController) OPTIONS(ctx *Context)](#WebController.OPTIONS) + [func (wc *WebController) PATCH(ctx *Context)](#WebController.PATCH) + [func (wc *WebController) POST(ctx *Context)](#WebController.POST) + [func (wc *WebController) PUT(ctx *Context)](#WebController.PUT) ### Constants [¶](#pkg-constants) ``` const ( MIMEHTML = "text/html" MIMEJSON = "application/json" MIMEXML = "application/xml" ) ``` MIME types ``` const ( MethodGet = "GET" MethodPost = "POST" MethodPut = "PUT" MethodDelete = "DELETE" MethodHead = "HEAD" MethodConnect = "CONNECT" MethodOptions = "OPTIONS" MethodPatch = "PATCH" ) ``` Request methods. ### Variables [¶](#pkg-variables) This section is empty. ### Functions [¶](#pkg-functions) #### func [CleanPath](https://github.com/go-gem/gem/blob/a6812000e965/path.go#L21) [¶](#CleanPath) ``` func CleanPath(p [string](/builtin#string)) [string](/builtin#string) ``` CleanPath is the URL version of path.Clean, it returns a canonical URL path for p, eliminating . 
and .. elements. The following rules are applied iteratively until no further processing can be done:

1. Replace multiple slashes with a single slash.
2. Eliminate each . path name element (the current directory).
3. Eliminate each inner .. path name element (the parent directory) along with the non-.. element that precedes it.
4. Eliminate .. elements that begin a rooted path: that is, replace "/.." by "/" at the beginning of a path.

If the result of this process is an empty string, "/" is returned.

#### func [Int](https://github.com/go-gem/gem/blob/a6812000e965/utils.go#L33) [¶](#Int)

```
func Int(v interface{}) ([int](/builtin#int), [error](/builtin#error))
```

Int converts v to an int. On failure, zero and a non-nil error are returned.

#### func [ListenAndServe](https://github.com/go-gem/gem/blob/a6812000e965/server.go#L66) [¶](#ListenAndServe)

```
func ListenAndServe(addr [string](/builtin#string), handler [Handler](#Handler)) [error](/builtin#error)
```

ListenAndServe listens on the TCP network address addr and then calls Serve with handler to handle requests on incoming connections.

#### func [ListenAndServeTLS](https://github.com/go-gem/gem/blob/a6812000e965/server.go#L77) [¶](#ListenAndServeTLS)

```
func ListenAndServeTLS(addr, certFile, keyFile [string](/builtin#string), handler [Handler](#Handler)) [error](/builtin#error)
```

ListenAndServeTLS acts identically to ListenAndServe, except that it expects HTTPS connections. Additionally, files containing a certificate and matching private key for the server must be provided. If the certificate is signed by a certificate authority, the certFile should be the concatenation of the server's certificate, any intermediates, and the CA's certificate.

#### func [String](https://github.com/go-gem/gem/blob/a6812000e965/utils.go#L16) [¶](#String)

```
func String(v interface{}) ([string](/builtin#string), [error](/builtin#error))
```

String converts v to a string. On failure, an empty string and a non-nil error are returned.
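Since CleanPath is described as the URL version of path.Clean, the standard library exhibits the same canonicalization rules; the only documented difference is that an empty result becomes "/" rather than ".". The sketch below (hypothetical helpers, not the library's code) also mirrors the documented Int/String contract of returning the zero value and a non-nil error on a type mismatch:

```go
package main

import (
	"fmt"
	"path"
)

// cleanURLPath approximates the documented CleanPath behavior using the
// standard library: path.Clean plus the "empty result becomes /" rule.
func cleanURLPath(p string) string {
	if p == "" {
		return "/"
	}
	return path.Clean(p)
}

// toInt mirrors the documented contract of gem.Int: on a type mismatch
// it returns zero and a non-nil error.
func toInt(v interface{}) (int, error) {
	if i, ok := v.(int); ok {
		return i, nil
	}
	return 0, fmt.Errorf("unable to convert %T to int", v)
}

// toString mirrors the documented contract of gem.String.
func toString(v interface{}) (string, error) {
	if s, ok := v.(string); ok {
		return s, nil
	}
	return "", fmt.Errorf("unable to convert %T to string", v)
}

func main() {
	fmt.Println(cleanURLPath("/users//alice/./../bob")) // rules 1-3 applied
	fmt.Println(cleanURLPath("/../users"))              // rule 4 applied
	fmt.Println(cleanURLPath(""))                       // empty input

	fmt.Println(toInt(42))
	fmt.Println(toString(7))
}
```

This is why the REST examples above check the error from `gem.String(ctx.UserValue("name"))`: a missing route parameter yields a non-string value, so the conversion fails.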
#### func [Version](https://github.com/go-gem/gem/blob/a6812000e965/server.go#L14) [¶](#Version)

```
func Version() [string](/builtin#string)
```

Version returns the version number.

### Types [¶](#pkg-types)

#### type [Application](https://github.com/go-gem/gem/blob/a6812000e965/application.go#L58) [¶](#Application)

```
type Application struct {
	ServerOpt [ServerOption](#ServerOption) `json:"server"`
	AssetsOpt [AssetsOption](#AssetsOption) `json:"assets"`
	TemplatesOpt [TemplatesOption](#TemplatesOption) `json:"templates"`
	// contains filtered or unexported fields
}
```

#### func [NewApplication](https://github.com/go-gem/gem/blob/a6812000e965/application.go#L21) [¶](#NewApplication)

```
func NewApplication(filename [string](/builtin#string)) (*[Application](#Application), [error](/builtin#error))
```

NewApplication

#### func (*Application) [Close](https://github.com/go-gem/gem/blob/a6812000e965/application.go#L132) [¶](#Application.Close)

```
func (app *[Application](#Application)) Close() (errs [][error](/builtin#error))
```

Close closes the application; all of the close callbacks will be invoked.

#### func (*Application) [Component](https://github.com/go-gem/gem/blob/a6812000e965/application.go#L153) [¶](#Application.Component)

```
func (app *[Application](#Application)) Component(name [string](/builtin#string)) interface{}
```

Component returns a component by the given name.

#### func (*Application) [Init](https://github.com/go-gem/gem/blob/a6812000e965/application.go#L81) [¶](#Application.Init)

```
func (app *[Application](#Application)) Init() (err [error](/builtin#error))
```

Init initializes the application; all of the init callbacks will be invoked.

#### func (*Application) [InitControllers](https://github.com/go-gem/gem/blob/a6812000e965/application.go#L188) [¶](#Application.InitControllers)

```
func (app *[Application](#Application)) InitControllers() (err [error](/builtin#error))
```

InitControllers initializes the controllers.
#### func (*Application) [Router](https://github.com/go-gem/gem/blob/a6812000e965/application.go#L143) [¶](#Application.Router) ``` func (app *[Application](#Application)) Router() *[Router](#Router) ``` Router returns the application's router instance. #### func (*Application) [SetCloseCallback](https://github.com/go-gem/gem/blob/a6812000e965/application.go#L127) [¶](#Application.SetCloseCallback) ``` func (app *[Application](#Application)) SetCloseCallback(callback [ApplicationCallback](#ApplicationCallback)) ``` SetCloseCallback sets a user-defined close callback. #### func (*Application) [SetComponent](https://github.com/go-gem/gem/blob/a6812000e965/application.go#L160) [¶](#Application.SetComponent) ``` func (app *[Application](#Application)) SetComponent(name [string](/builtin#string), component interface{}) [error](/builtin#error) ``` SetComponent registers a component under the given name. If the component already exists, a non-nil error is returned. #### func (*Application) [SetController](https://github.com/go-gem/gem/blob/a6812000e965/application.go#L175) [¶](#Application.SetController) ``` func (app *[Application](#Application)) SetController(path [string](/builtin#string), controller [Controller](#Controller)) ``` SetController registers a controller instance at the given route path. #### func (*Application) [SetInitCallback](https://github.com/go-gem/gem/blob/a6812000e965/application.go#L76) [¶](#Application.SetInitCallback) ``` func (app *[Application](#Application)) SetInitCallback(callback [ApplicationCallback](#ApplicationCallback)) ``` SetInitCallback sets a user-defined init callback. #### func (*Application) [Templates](https://github.com/go-gem/gem/blob/a6812000e965/application.go#L148) [¶](#Application.Templates) ``` func (app *[Application](#Application)) Templates() *[Templates](#Templates) ``` Templates returns the templates manager instance. 
#### type [ApplicationCallback](https://github.com/go-gem/gem/blob/a6812000e965/application.go#L18) [¶](#ApplicationCallback) ``` type ApplicationCallback func() [error](/builtin#error) ``` ApplicationCallback is the type of function used for application init and close callbacks. #### type [AssetsOption](https://github.com/go-gem/gem/blob/a6812000e965/options.go#L13) [¶](#AssetsOption) ``` type AssetsOption struct { Root [string](/builtin#string) `json:"root"` Dirs map[[string](/builtin#string)][string](/builtin#string) `json:"dirs"` HandlerOption *[HandlerOption](#HandlerOption) } ``` #### type [Context](https://github.com/go-gem/gem/blob/a6812000e965/context.go#L51) [¶](#Context) ``` type Context struct { Request *[http](/net/http).[Request](/net/http#Request) Response [http](/net/http).[ResponseWriter](/net/http#ResponseWriter) // contains filtered or unexported fields } ``` Context contains the *http.Request and the http.ResponseWriter. #### func (*Context) [Error](https://github.com/go-gem/gem/blob/a6812000e965/context.go#L142) [¶](#Context.Error) ``` func (ctx *[Context](#Context)) Error(error [string](/builtin#string), code [int](/builtin#int)) ``` Error is a shortcut of http.Error. #### func (*Context) [FormFile](https://github.com/go-gem/gem/blob/a6812000e965/context.go#L182) [¶](#Context.FormFile) ``` func (ctx *[Context](#Context)) FormFile(key [string](/builtin#string)) ([multipart](/mime/multipart).[File](/mime/multipart#File), *[multipart](/mime/multipart).[FileHeader](/mime/multipart#FileHeader), [error](/builtin#error)) ``` FormFile is a shortcut of http.Request.FormFile. #### func (*Context) [FormValue](https://github.com/go-gem/gem/blob/a6812000e965/context.go#L167) [¶](#Context.FormValue) ``` func (ctx *[Context](#Context)) FormValue(key [string](/builtin#string)) [string](/builtin#string) ``` FormValue is a shortcut of http.Request.FormValue. 
#### func (*Context) [HTML](https://github.com/go-gem/gem/blob/a6812000e965/context.go#L216) [¶](#Context.HTML) ``` func (ctx *[Context](#Context)) HTML(code [int](/builtin#int), body [string](/builtin#string)) ``` HTML sends an HTML response body to the client with the given status code. #### func (*Context) [Host](https://github.com/go-gem/gem/blob/a6812000e965/context.go#L187) [¶](#Context.Host) ``` func (ctx *[Context](#Context)) Host() [string](/builtin#string) ``` Host returns the value of http.Request.Host. #### func (*Context) [IsAjax](https://github.com/go-gem/gem/blob/a6812000e965/context.go#L137) [¶](#Context.IsAjax) ``` func (ctx *[Context](#Context)) IsAjax() [bool](/builtin#bool) ``` IsAjax returns true if the request is an AJAX (XMLHttpRequest) request. #### func (*Context) [IsDelete](https://github.com/go-gem/gem/blob/a6812000e965/context.go#L112) [¶](#Context.IsDelete) ``` func (ctx *[Context](#Context)) IsDelete() [bool](/builtin#bool) ``` IsDelete returns true if the request method is DELETE. #### func (*Context) [IsGet](https://github.com/go-gem/gem/blob/a6812000e965/context.go#L117) [¶](#Context.IsGet) ``` func (ctx *[Context](#Context)) IsGet() [bool](/builtin#bool) ``` IsGet returns true if the request method is GET. #### func (*Context) [IsHead](https://github.com/go-gem/gem/blob/a6812000e965/context.go#L122) [¶](#Context.IsHead) ``` func (ctx *[Context](#Context)) IsHead() [bool](/builtin#bool) ``` IsHead returns true if the request method is HEAD. #### func (*Context) [IsPost](https://github.com/go-gem/gem/blob/a6812000e965/context.go#L127) [¶](#Context.IsPost) ``` func (ctx *[Context](#Context)) IsPost() [bool](/builtin#bool) ``` IsPost returns true if the request method is POST. #### func (*Context) [IsPut](https://github.com/go-gem/gem/blob/a6812000e965/context.go#L132) [¶](#Context.IsPut) ``` func (ctx *[Context](#Context)) IsPut() [bool](/builtin#bool) ``` IsPut returns true if the request method is PUT. 
#### func (*Context) [JSON](https://github.com/go-gem/gem/blob/a6812000e965/context.go#L223) [¶](#Context.JSON) ``` func (ctx *[Context](#Context)) JSON(code [int](/builtin#int), v interface{}) ``` JSON sends a JSON-encoded response to the client with the given status code. #### func (*Context) [Logger](https://github.com/go-gem/gem/blob/a6812000e965/context.go#L96) [¶](#Context.Logger) ``` func (ctx *[Context](#Context)) Logger() [Logger](#Logger) ``` Logger returns the server's logger. #### func (*Context) [NotFound](https://github.com/go-gem/gem/blob/a6812000e965/context.go#L147) [¶](#Context.NotFound) ``` func (ctx *[Context](#Context)) NotFound() ``` NotFound is a shortcut of http.NotFound. #### func (*Context) [ParseForm](https://github.com/go-gem/gem/blob/a6812000e965/context.go#L177) [¶](#Context.ParseForm) ``` func (ctx *[Context](#Context)) ParseForm() [error](/builtin#error) ``` ParseForm is a shortcut of http.Request.ParseForm. #### func (*Context) [PostFormValue](https://github.com/go-gem/gem/blob/a6812000e965/context.go#L172) [¶](#Context.PostFormValue) ``` func (ctx *[Context](#Context)) PostFormValue(key [string](/builtin#string)) [string](/builtin#string) ``` PostFormValue is a shortcut of http.Request.PostFormValue. #### func (*Context) [Push](https://github.com/go-gem/gem/blob/a6812000e965/context.go#L207) [¶](#Context.Push) ``` func (ctx *[Context](#Context)) Push(target [string](/builtin#string), opts *[http](/net/http).[PushOptions](/net/http#PushOptions)) [error](/builtin#error) ``` Push initiates an HTTP/2 server push. If the http.ResponseWriter does not implement http.Pusher, errNotSupportHTTP2ServerPush is returned. #### func (*Context) [Redirect](https://github.com/go-gem/gem/blob/a6812000e965/context.go#L107) [¶](#Context.Redirect) ``` func (ctx *[Context](#Context)) Redirect(url [string](/builtin#string), code [int](/builtin#int)) ``` Redirect replies to the request with a redirect to url, which may be a path relative to the request path. 
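Context.Redirect is a thin shortcut over the stdlib's http.Redirect, so its behavior can be observed with net/http/httptest. A stdlib-only sketch (the `redirectStatus` helper is illustrative, not gem's own API):

```go
package main

import (
	"fmt"
	"net/http"
	"net/http/httptest"
)

// redirectStatus issues a redirect through a recorder and reports the
// status code and Location header the client would see.
func redirectStatus(url string, code int) (int, string) {
	rec := httptest.NewRecorder()
	req := httptest.NewRequest("GET", "/old", nil)
	http.Redirect(rec, req, url, code)
	return rec.Code, rec.Header().Get("Location")
}

func main() {
	code, loc := redirectStatus("/new", http.StatusMovedPermanently)
	fmt.Println(code, loc) // 301 /new
}
```

The recorder pattern is also how the other Context shortcuts (HTML, JSON, NotFound) can be unit-tested without opening a socket.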
#### func (*Context) [Referer](https://github.com/go-gem/gem/blob/a6812000e965/context.go#L192) [¶](#Context.Referer) ``` func (ctx *[Context](#Context)) Referer() [string](/builtin#string) ``` Referer is a shortcut of http.Request.Referer. #### func (*Context) [SetContentType](https://github.com/go-gem/gem/blob/a6812000e965/context.go#L152) [¶](#Context.SetContentType) ``` func (ctx *[Context](#Context)) SetContentType(v [string](/builtin#string)) ``` SetContentType sets the response Content-Type. #### func (*Context) [SetServer](https://github.com/go-gem/gem/blob/a6812000e965/context.go#L101) [¶](#Context.SetServer) ``` func (ctx *[Context](#Context)) SetServer(server *[Server](#Server)) ``` SetServer is intended for testing; do not use it elsewhere. #### func (*Context) [SetStatusCode](https://github.com/go-gem/gem/blob/a6812000e965/context.go#L157) [¶](#Context.SetStatusCode) ``` func (ctx *[Context](#Context)) SetStatusCode(code [int](/builtin#int)) ``` SetStatusCode is a shortcut of http.ResponseWriter.WriteHeader. #### func (*Context) [SetUserValue](https://github.com/go-gem/gem/blob/a6812000e965/context.go#L60) [¶](#Context.SetUserValue) ``` func (ctx *[Context](#Context)) SetUserValue(key [string](/builtin#string), value interface{}) ``` SetUserValue stores the given value under the given key in ctx. #### func (*Context) [URL](https://github.com/go-gem/gem/blob/a6812000e965/context.go#L162) [¶](#Context.URL) ``` func (ctx *[Context](#Context)) URL() *[url](/net/url).[URL](/net/url#URL) ``` URL is a shortcut of http.Request.URL. #### func (*Context) [UserValue](https://github.com/go-gem/gem/blob/a6812000e965/context.go#L80) [¶](#Context.UserValue) ``` func (ctx *[Context](#Context)) UserValue(key [string](/builtin#string)) interface{} ``` UserValue returns the value stored via SetUserValue under the given key. 
#### func (*Context) [Write](https://github.com/go-gem/gem/blob/a6812000e965/context.go#L197) [¶](#Context.Write) ``` func (ctx *[Context](#Context)) Write(p [][byte](/builtin#byte)) (n [int](/builtin#int), err [error](/builtin#error)) ``` Write is a shortcut of http.ResponseWriter.Write. #### func (*Context) [XML](https://github.com/go-gem/gem/blob/a6812000e965/context.go#L236) [¶](#Context.XML) ``` func (ctx *[Context](#Context)) XML(code [int](/builtin#int), v interface{}, headers ...[string](/builtin#string)) ``` XML sends an XML-encoded response to the client with the given status code. #### type [Controller](https://github.com/go-gem/gem/blob/a6812000e965/controller.go#L7) [¶](#Controller) ``` type Controller interface { Init(app *[Application](#Application)) [error](/builtin#error) Methods() [][string](/builtin#string) HandlerOptions() map[[string](/builtin#string)]*[HandlerOption](#HandlerOption) GET(ctx *[Context](#Context)) POST(ctx *[Context](#Context)) DELETE(ctx *[Context](#Context)) PUT(ctx *[Context](#Context)) HEAD(ctx *[Context](#Context)) OPTIONS(ctx *[Context](#Context)) PATCH(ctx *[Context](#Context)) } ``` #### type [Handler](https://github.com/go-gem/gem/blob/a6812000e965/handler.go#L8) [¶](#Handler) ``` type Handler interface { Handle(*[Context](#Context)) } ``` Handler for processing incoming requests. #### type [HandlerFunc](https://github.com/go-gem/gem/blob/a6812000e965/handler.go#L16) [¶](#HandlerFunc) ``` type HandlerFunc func(*[Context](#Context)) ``` The HandlerFunc type is an adapter to allow the use of ordinary functions as HTTP handlers. If f is a function with the appropriate signature, HandlerFunc(f) is a Handler that calls f. #### func (HandlerFunc) [Handle](https://github.com/go-gem/gem/blob/a6812000e965/handler.go#L19) [¶](#HandlerFunc.Handle) ``` func (f [HandlerFunc](#HandlerFunc)) Handle(ctx *[Context](#Context)) ``` Handle calls f(ctx). 
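The HandlerFunc adapter above is the same trick as the stdlib's http.HandlerFunc: a named function type with a method that calls itself. A minimal self-contained sketch, with a stub Context standing in for gem's:

```go
package main

import "fmt"

// Context is a stub standing in for the package's *Context.
type Context struct{ Path string }

// Handler processes an incoming request.
type Handler interface{ Handle(*Context) }

// HandlerFunc adapts an ordinary function to the Handler interface.
type HandlerFunc func(*Context)

// Handle calls f(ctx), satisfying Handler.
func (f HandlerFunc) Handle(ctx *Context) { f(ctx) }

func main() {
	// A plain closure becomes a Handler with a single conversion.
	var h Handler = HandlerFunc(func(ctx *Context) {
		fmt.Println("handled", ctx.Path)
	})
	h.Handle(&Context{Path: "/hello"}) // prints "handled /hello"
}
```

The conversion costs nothing at runtime; it only attaches the Handle method to the function value.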
#### type [HandlerOption](https://github.com/go-gem/gem/blob/a6812000e965/handler.go#L24) [¶](#HandlerOption) ``` type HandlerOption struct { Middlewares [][Middleware](#Middleware) } ``` HandlerOption option for handler. #### func [NewHandlerOption](https://github.com/go-gem/gem/blob/a6812000e965/handler.go#L30) [¶](#NewHandlerOption) ``` func NewHandlerOption(middlewares ...[Middleware](#Middleware)) *[HandlerOption](#HandlerOption) ``` NewHandlerOption returns HandlerOption instance by the given middlewares. #### type [Logger](https://github.com/go-gem/gem/blob/a6812000e965/logger.go#L16) [¶](#Logger) ``` type Logger interface { Debug(v ...interface{}) Debugf(format [string](/builtin#string), v ...interface{}) Info(v ...interface{}) Infof(format [string](/builtin#string), v ...interface{}) Error(v ...interface{}) Errorf(format [string](/builtin#string), v ...interface{}) Fatal(v ...interface{}) Fatalf(format [string](/builtin#string), v ...interface{}) } ``` Logger defines a logging interface. #### type [Middleware](https://github.com/go-gem/gem/blob/a6812000e965/middleware.go#L8) [¶](#Middleware) ``` type Middleware interface { Wrap(next [Handler](#Handler)) [Handler](#Handler) } ``` Middleware interface. #### type [Router](https://github.com/go-gem/gem/blob/a6812000e965/router.go#L13) [¶](#Router) ``` type Router struct { // Enables automatic redirection if the current route can't be matched but a // handler for the path with (without) the trailing slash exists. // For example if /foo/ is requested but a route only exists for /foo, the // client is redirected to /foo with http status code 301 for GET requests // and 307 for all other request methods. RedirectTrailingSlash [bool](/builtin#bool) // If enabled, the router tries to fix the current request path, if no // handle is registered for it. // First superfluous path elements like ../ or // are removed. // Afterwards the router does a case-insensitive lookup of the cleaned path. 
// If a handle can be found for this route, the router makes a redirection // to the corrected path with status code 301 for GET requests and 307 for // all other request methods. // For example /FOO and /..//Foo could be redirected to /foo. // RedirectTrailingSlash is independent of this option. RedirectFixedPath [bool](/builtin#bool) // If enabled, the router checks if another method is allowed for the // current route, if the current request can not be routed. // If this is the case, the request is answered with 'Method Not Allowed' // and HTTP status code 405. // If no other Method is allowed, the request is delegated to the NotFound // handler. HandleMethodNotAllowed [bool](/builtin#bool) // If enabled, the router automatically replies to OPTIONS requests. // Custom OPTIONS handlers take priority over automatic replies. HandleOPTIONS [bool](/builtin#bool) // Configurable http.Handler which is called when no matching route is // found. If it is not set, http.NotFound is used. NotFound [Handler](#Handler) // Configurable http.Handler which is called when a request // cannot be routed and HandleMethodNotAllowed is true. // If it is not set, http.Error with http.StatusMethodNotAllowed is used. // The "Allow" header with allowed request methods is set before the handler // is called. MethodNotAllowed [Handler](#Handler) // Function to handle panics recovered from http handlers. // It should be used to generate a error page and return the http error code // 500 (Internal Server Error). // The handler can be used to keep your server from crashing because of // unrecovered panics. 
PanicHandler func(*[Context](#Context), interface{}) // contains filtered or unexported fields } ``` Router is a http.Handler which can be used to dispatch requests to different handler functions via configurable routes #### func [NewRouter](https://github.com/go-gem/gem/blob/a6812000e965/router.go#L69) [¶](#NewRouter) ``` func NewRouter() *[Router](#Router) ``` NewRouter returns a new initialized Router. Path auto-correction, including trailing slashes, is enabled by default. #### func (*Router) [DELETE](https://github.com/go-gem/gem/blob/a6812000e965/router.go#L114) [¶](#Router.DELETE) ``` func (r *[Router](#Router)) DELETE(path [string](/builtin#string), handle [HandlerFunc](#HandlerFunc), opts ...*[HandlerOption](#HandlerOption)) ``` DELETE is a shortcut for router.Handle("DELETE", path, handle) #### func (*Router) [GET](https://github.com/go-gem/gem/blob/a6812000e965/router.go#L84) [¶](#Router.GET) ``` func (r *[Router](#Router)) GET(path [string](/builtin#string), handle [HandlerFunc](#HandlerFunc), opts ...*[HandlerOption](#HandlerOption)) ``` GET is a shortcut for router.Handle("GET", path, handle) #### func (*Router) [HEAD](https://github.com/go-gem/gem/blob/a6812000e965/router.go#L89) [¶](#Router.HEAD) ``` func (r *[Router](#Router)) HEAD(path [string](/builtin#string), handle [HandlerFunc](#HandlerFunc), opts ...*[HandlerOption](#HandlerOption)) ``` HEAD is a shortcut for router.Handle("HEAD", path, handle) #### func (*Router) [Handle](https://github.com/go-gem/gem/blob/a6812000e965/router.go#L126) [¶](#Router.Handle) ``` func (r *[Router](#Router)) Handle(method, path [string](/builtin#string), handle [HandlerFunc](#HandlerFunc), opts ...*[HandlerOption](#HandlerOption)) ``` Handle registers a new request handle with the given path and method. For GET, POST, PUT, PATCH and DELETE requests the respective shortcut functions can be used. 
This function is intended for bulk loading and to allow the usage of less frequently used, non-standardized or custom methods (e.g. for internal communication with a proxy). #### func (*Router) [Handler](https://github.com/go-gem/gem/blob/a6812000e965/router.go#L238) [¶](#Router.Handler) ``` func (r *[Router](#Router)) Handler() [Handler](#Handler) ``` Handler returns a Handler wrapped by the registered middlewares. #### func (*Router) [Lookup](https://github.com/go-gem/gem/blob/a6812000e965/router.go#L190) [¶](#Router.Lookup) ``` func (r *[Router](#Router)) Lookup(method, path [string](/builtin#string), ctx *[Context](#Context)) ([Handler](#Handler), [bool](/builtin#bool)) ``` Lookup allows the manual lookup of a method + path combo. This is e.g. useful to build a framework around this router. If the path was found, it returns the handle function and the path parameter values. Otherwise the third return value indicates whether a redirection to the same path with an extra / without the trailing slash should be performed. 
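Router.Handler composes the registered middlewares around the router in the usual onion pattern: each Middleware's Wrap receives the next Handler and returns a new one. A self-contained sketch of that composition with stub types (not gem's own code; `tag` and `chain` are illustrative names):

```go
package main

import "fmt"

type Context struct{ Log []string }

type Handler interface{ Handle(*Context) }
type HandlerFunc func(*Context)

func (f HandlerFunc) Handle(ctx *Context) { f(ctx) }

// Middleware wraps a Handler, returning a new Handler.
type Middleware interface{ Wrap(next Handler) Handler }

// tag records entry and exit around the next handler.
type tag struct{ name string }

func (t tag) Wrap(next Handler) Handler {
	return HandlerFunc(func(ctx *Context) {
		ctx.Log = append(ctx.Log, t.name+" before")
		next.Handle(ctx)
		ctx.Log = append(ctx.Log, t.name+" after")
	})
}

// chain wraps h so the first middleware listed runs outermost.
func chain(h Handler, ms ...Middleware) Handler {
	for i := len(ms) - 1; i >= 0; i-- {
		h = ms[i].Wrap(h)
	}
	return h
}

func main() {
	h := chain(HandlerFunc(func(ctx *Context) {
		ctx.Log = append(ctx.Log, "handler")
	}), tag{"A"}, tag{"B"})
	ctx := &Context{}
	h.Handle(ctx)
	fmt.Println(ctx.Log) // [A before B before handler B after A after]
}
```

Wrapping in reverse registration order is what makes the first-registered middleware see the request first and the response last.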
#### func (*Router) [OPTIONS](https://github.com/go-gem/gem/blob/a6812000e965/router.go#L94) [¶](#Router.OPTIONS) ``` func (r *[Router](#Router)) OPTIONS(path [string](/builtin#string), handle [HandlerFunc](#HandlerFunc), opts ...*[HandlerOption](#HandlerOption)) ``` OPTIONS is a shortcut for router.Handle("OPTIONS", path, handle) #### func (*Router) [PATCH](https://github.com/go-gem/gem/blob/a6812000e965/router.go#L109) [¶](#Router.PATCH) ``` func (r *[Router](#Router)) PATCH(path [string](/builtin#string), handle [HandlerFunc](#HandlerFunc), opts ...*[HandlerOption](#HandlerOption)) ``` PATCH is a shortcut for router.Handle("PATCH", path, handle) #### func (*Router) [POST](https://github.com/go-gem/gem/blob/a6812000e965/router.go#L99) [¶](#Router.POST) ``` func (r *[Router](#Router)) POST(path [string](/builtin#string), handle [HandlerFunc](#HandlerFunc), opts ...*[HandlerOption](#HandlerOption)) ``` POST is a shortcut for router.Handle("POST", path, handle) #### func (*Router) [PUT](https://github.com/go-gem/gem/blob/a6812000e965/router.go#L104) [¶](#Router.PUT) ``` func (r *[Router](#Router)) PUT(path [string](/builtin#string), handle [HandlerFunc](#HandlerFunc), opts ...*[HandlerOption](#HandlerOption)) ``` PUT is a shortcut for router.Handle("PUT", path, handle) #### func (*Router) [ServeFiles](https://github.com/go-gem/gem/blob/a6812000e965/router.go#L163) [¶](#Router.ServeFiles) ``` func (r *[Router](#Router)) ServeFiles(path [string](/builtin#string), root [http](/net/http).[FileSystem](/net/http#FileSystem), opts ...*[HandlerOption](#HandlerOption)) ``` ServeFiles serves files from the given file system root. The path must end with "/*filepath", files are then served from the local path /defined/root/dir/*filepath. For example if root is "/etc" and *filepath is "passwd", the local file "/etc/passwd" would be served. Internally a http.FileServer is used, therefore http.NotFound is used instead of the Router's NotFound handler. 
To use the operating system's file system implementation, use http.Dir: ``` router.ServeFiles("/src/*filepath", http.Dir("/var/www")) ``` #### func (*Router) [Use](https://github.com/go-gem/gem/blob/a6812000e965/router.go#L79) [¶](#Router.Use) ``` func (r *[Router](#Router)) Use(middleware [Middleware](#Middleware)) ``` Use registers a middleware. #### type [Server](https://github.com/go-gem/gem/blob/a6812000e965/server.go#L29) [¶](#Server) ``` type Server struct { Server *[http](/net/http).[Server](/net/http#Server) // contains filtered or unexported fields } ``` Server contains *http.Server. #### func [New](https://github.com/go-gem/gem/blob/a6812000e965/server.go#L19) [¶](#New) ``` func New(addr [string](/builtin#string)) *[Server](#Server) ``` New returns a Server instance for the given address. #### func (*Server) [ListenAndServe](https://github.com/go-gem/gem/blob/a6812000e965/server.go#L41) [¶](#Server.ListenAndServe) ``` func (srv *[Server](#Server)) ListenAndServe(handler [Handler](#Handler)) [error](/builtin#error) ``` ListenAndServe listens on the TCP network address srv.Addr and then calls Serve to handle requests on incoming connections. #### func (*Server) [ListenAndServeTLS](https://github.com/go-gem/gem/blob/a6812000e965/server.go#L50) [¶](#Server.ListenAndServeTLS) ``` func (srv *[Server](#Server)) ListenAndServeTLS(certFile, keyFile [string](/builtin#string), handler [Handler](#Handler)) [error](/builtin#error) ``` ListenAndServeTLS listens on the TCP network address srv.Addr and then calls Serve to handle requests on incoming TLS connections. Accepted connections are configured to enable TCP keep-alives. #### func (*Server) [SetLogger](https://github.com/go-gem/gem/blob/a6812000e965/server.go#L35) [¶](#Server.SetLogger) ``` func (srv *[Server](#Server)) SetLogger(logger [Logger](#Logger)) ``` SetLogger sets the logger. 
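The Logger interface that SetLogger accepts can be satisfied by a thin wrapper over the stdlib log package. A sketch showing a subset of the interface (the `stdLogger` and `captureInfof` names are illustrative, not part of gem):

```go
package main

import (
	"bytes"
	"fmt"
	"log"
)

// Logger is a subset of the interface described above.
type Logger interface {
	Info(v ...interface{})
	Infof(format string, v ...interface{})
	Error(v ...interface{})
	Errorf(format string, v ...interface{})
}

// stdLogger adapts *log.Logger to the Logger interface.
type stdLogger struct{ l *log.Logger }

func (s stdLogger) Info(v ...interface{})  { s.l.Println(append([]interface{}{"INFO:"}, v...)...) }
func (s stdLogger) Error(v ...interface{}) { s.l.Println(append([]interface{}{"ERROR:"}, v...)...) }
func (s stdLogger) Infof(format string, v ...interface{}) {
	s.l.Printf("INFO: "+format, v...)
}
func (s stdLogger) Errorf(format string, v ...interface{}) {
	s.l.Printf("ERROR: "+format, v...)
}

// captureInfof logs through the adapter into a buffer and returns
// the written text, which makes the adapter easy to test.
func captureInfof(format string, v ...interface{}) string {
	var buf bytes.Buffer
	var lg Logger = stdLogger{log.New(&buf, "", 0)}
	lg.Infof(format, v...)
	return buf.String()
}

func main() {
	fmt.Print(captureInfof("listening on %s", ":8080"))
}
```

The same adapter shape extends to the Debug and Fatal methods of the full interface.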
#### type [ServerOption](https://github.com/go-gem/gem/blob/a6812000e965/options.go#L7) [¶](#ServerOption) ``` type ServerOption struct { Addr [string](/builtin#string) `json:"addr"` CertFile [string](/builtin#string) `json:"cert_file"` KeyFile [string](/builtin#string) `json:"key_file"` } ``` #### type [Templates](https://github.com/go-gem/gem/blob/a6812000e965/templates.go#L28) [¶](#Templates) ``` type Templates struct { Path [string](/builtin#string) Suffix [string](/builtin#string) Delims [][string](/builtin#string) FuncMap [template](/html/template).[FuncMap](/html/template#FuncMap) LayoutDir [string](/builtin#string) // contains filtered or unexported fields } ``` Templates is a templates manager. #### func [NewTemplates](https://github.com/go-gem/gem/blob/a6812000e965/templates.go#L17) [¶](#NewTemplates) ``` func NewTemplates(path [string](/builtin#string)) *[Templates](#Templates) ``` NewTemplates returns a Templates instance with the given path and default options. #### func (*Templates) [Filenames](https://github.com/go-gem/gem/blob/a6812000e965/templates.go#L73) [¶](#Templates.Filenames) ``` func (ts *[Templates](#Templates)) Filenames(filenames ...[string](/builtin#string)) [][string](/builtin#string) ``` Filenames converts relative paths to absolute paths. #### func (*Templates) [Layout](https://github.com/go-gem/gem/blob/a6812000e965/templates.go#L64) [¶](#Templates.Layout) ``` func (ts *[Templates](#Templates)) Layout(name [string](/builtin#string)) (*[template](/html/template).[Template](/html/template#Template), [error](/builtin#error)) ``` Layout returns the layout template of the given name; if the layout does not exist, a non-nil error is returned. 
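The layout/content split that Templates manages can be sketched with the stdlib html/template package, parsing from strings instead of files. This is an illustrative stand-in for what Render automates, not gem's implementation (the `render` helper and template names are assumptions):

```go
package main

import (
	"fmt"
	"html/template"
	"strings"
)

// render executes a layout that delegates its body to an associated
// "content" template — the pattern a layout manager automates.
func render(layout, content string, data interface{}) (string, error) {
	t, err := template.New("layout").Parse(layout)
	if err != nil {
		return "", err
	}
	// Register the page body as an associated template named "content".
	if _, err := t.New("content").Parse(content); err != nil {
		return "", err
	}
	var b strings.Builder
	if err := t.Execute(&b, data); err != nil {
		return "", err
	}
	return b.String(), nil
}

func main() {
	out, err := render(
		`<html><body>{{template "content" .}}</body></html>`,
		`<h1>{{.Title}}</h1>`,
		map[string]string{"Title": "Home"},
	)
	fmt.Println(out, err)
}
```

With files on disk, the equivalent is template.ParseFiles over the layout and page files, which is what the Note under Templates.New refers to.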
#### func (*Templates) [New](https://github.com/go-gem/gem/blob/a6812000e965/templates.go#L86) [¶](#Templates.New) ``` func (ts *[Templates](#Templates)) New(filenames ...[string](/builtin#string)) (*[template](/html/template).[Template](/html/template#Template), [error](/builtin#error)) ``` New returns a template.Template instance built with the Templates' options and template.ParseFiles. Note: filenames should be absolute paths; either use Templates.Filenames or specify them manually. #### func (*Templates) [Render](https://github.com/go-gem/gem/blob/a6812000e965/templates.go#L99) [¶](#Templates.Render) ``` func (ts *[Templates](#Templates)) Render(layoutName [string](/builtin#string), filenames ...[string](/builtin#string)) (*[template](/html/template).[Template](/html/template#Template), [error](/builtin#error)) ``` Render renders the given template files within the named layout. #### func (*Templates) [SetLayout](https://github.com/go-gem/gem/blob/a6812000e965/templates.go#L40) [¶](#Templates.SetLayout) ``` func (ts *[Templates](#Templates)) SetLayout(filenames ...[string](/builtin#string)) [error](/builtin#error) ``` SetLayout sets the layout from the given filenames. #### type [TemplatesOption](https://github.com/go-gem/gem/blob/a6812000e965/options.go#L19) [¶](#TemplatesOption) ``` type TemplatesOption struct { Root [string](/builtin#string) `json:"root"` Suffix [string](/builtin#string) `json:"suffix"` LayoutDir [string](/builtin#string) `json:"layout_dir"` Layouts [][string](/builtin#string) `json:"layouts"` } ``` #### type [WebController](https://github.com/go-gem/gem/blob/a6812000e965/controller.go#L22) [¶](#WebController) ``` type WebController struct{} ``` WebController is an empty controller that implements the Controller interface. #### func (*WebController) [DELETE](https://github.com/go-gem/gem/blob/a6812000e965/controller.go#L59) [¶](#WebController.DELETE) ``` func (wc *[WebController](#WebController)) DELETE(ctx *[Context](#Context)) ``` DELETE implements Controller's DELETE method. 
#### func (*WebController) [GET](https://github.com/go-gem/gem/blob/a6812000e965/controller.go#L53) [¶](#WebController.GET) ``` func (wc *[WebController](#WebController)) GET(ctx *[Context](#Context)) ``` GET implements Controller's GET method. #### func (*WebController) [HEAD](https://github.com/go-gem/gem/blob/a6812000e965/controller.go#L65) [¶](#WebController.HEAD) ``` func (wc *[WebController](#WebController)) HEAD(ctx *[Context](#Context)) ``` HEAD implements Controller's HEAD method. #### func (*WebController) [HandlerOptions](https://github.com/go-gem/gem/blob/a6812000e965/controller.go#L48) [¶](#WebController.HandlerOptions) ``` func (wc *[WebController](#WebController)) HandlerOptions() map[[string](/builtin#string)]*[HandlerOption](#HandlerOption) ``` HandlerOptions defines per-method handler options. The key should be the name of the request method, case-sensitive, such as GET, POST. #### func (*WebController) [Init](https://github.com/go-gem/gem/blob/a6812000e965/controller.go#L27) [¶](#WebController.Init) ``` func (wc *[WebController](#WebController)) Init(app *[Application](#Application)) [error](/builtin#error) ``` Init initializes the controller. It is invoked when the controller is registered. #### func (*WebController) [Methods](https://github.com/go-gem/gem/blob/a6812000e965/controller.go#L38) [¶](#WebController.Methods) ``` func (wc *[WebController](#WebController)) Methods() [][string](/builtin#string) ``` Methods defines which handlers should be registered. The value should be the name of the request method, case-sensitive, such as GET, POST. By default, GET and POST's handlers will be registered. #### func (*WebController) [OPTIONS](https://github.com/go-gem/gem/blob/a6812000e965/controller.go#L68) [¶](#WebController.OPTIONS) ``` func (wc *[WebController](#WebController)) OPTIONS(ctx *[Context](#Context)) ``` OPTIONS implements Controller's OPTIONS method. 
#### func (*WebController) [PATCH](https://github.com/go-gem/gem/blob/a6812000e965/controller.go#L71) [¶](#WebController.PATCH) ``` func (wc *[WebController](#WebController)) PATCH(ctx *[Context](#Context)) ``` PATCH implements Controller's PATCH method. #### func (*WebController) [POST](https://github.com/go-gem/gem/blob/a6812000e965/controller.go#L56) [¶](#WebController.POST) ``` func (wc *[WebController](#WebController)) POST(ctx *[Context](#Context)) ``` POST implements Controller's POST method. #### func (*WebController) [PUT](https://github.com/go-gem/gem/blob/a6812000e965/controller.go#L62) [¶](#WebController.PUT) ``` func (wc *[WebController](#WebController)) PUT(ctx *[Context](#Context)) ``` PUT implements Controller's PUT method.
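The point of the empty WebController is Go struct embedding: a concrete controller embeds it, inherits no-op defaults for every Controller method, and overrides only the methods it actually serves. A self-contained sketch of that pattern with stub types (a trimmed interface, not gem's own):

```go
package main

import "fmt"

type Context struct{ Out string }

// Controller is a trimmed version of the interface above.
type Controller interface {
	Methods() []string
	GET(ctx *Context)
	POST(ctx *Context)
}

// WebController provides no-op defaults, so embedders override
// only what they need.
type WebController struct{}

func (wc *WebController) Methods() []string { return []string{"GET", "POST"} }
func (wc *WebController) GET(ctx *Context)  {}
func (wc *WebController) POST(ctx *Context) {}

// HomeController overrides GET and inherits everything else.
type HomeController struct{ WebController }

func (h *HomeController) GET(ctx *Context) { ctx.Out = "home page" }

func main() {
	var c Controller = &HomeController{}
	ctx := &Context{}
	c.GET(ctx)
	fmt.Println(ctx.Out, c.Methods()) // home page [GET POST]
}
```

Because the promoted Methods still reports GET and POST, the router would register both; a controller that only serves GET would also override Methods.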
Welcome to Adhocracy's documentation! — Adhocracy 2.0dev documentation Welcome to Adhocracy's documentation![¶](#welcome-to-adhocracy-s-documentation) === Adhocracy is a web-based platform that allows dispersed groups to engage in a constructive process to draft proposals which will then represent the group's opinions and eventually its decisions regarding a given subject. Contents: Development documentation[¶](#development-documentation) --- * [Architecture](development/architecture.html) + [Database and model classes](development/architecture.html#database-and-model-classes) + [Indexing/Searching](development/architecture.html#indexing-searching) + [Authentication and Permissions](development/architecture.html#authentication-and-permissions) * [Internal API Documentation](development/api.html) + [Core polling logic](development/api.html#core-polling-logic) + [Delegation management and traversal](development/api.html#delegation-management-and-traversal) + [Database models and helper classes](development/api.html#database-models-and-helper-classes) + [Template Variables](development/api.html#template-variables) * [Update translations](development/translations.html) + [Translations for contributors](development/translations.html#translations-for-contributors) + [Translations for developers](development/translations.html#translations-for-developers) * [Add and run tests](development/tests.html) + [Add a new test](development/tests.html#add-a-new-test) + [Run all tests](development/tests.html#run-all-tests) + [Run one test file](development/tests.html#run-one-test-file) REST interface (outdated)[¶](#rest-interface-outdated) --- Warning This is the documentation for the REST interface. Unfortunately it is outdated and parts of the interface may not work anymore. 
* [REST API Conventions](rest/intro.html) + [Data Submission](rest/intro.html#data-submission) + [Authentication and Security](rest/intro.html#authentication-and-security) + [Pagination](rest/intro.html#pagination) * [REST Resources](rest/resources.html) + [/instance - Group/Organization Instances](rest/resources.html#instance-group-organization-instances) + [/user - User Management](rest/resources.html#user-user-management) + [/proposal - Proposal drafting](rest/resources.html#proposal-proposal-drafting) + [/poll - Poll data and voting](rest/resources.html#poll-poll-data-and-voting) + [/comment - Commenting and comment history](rest/resources.html#comment-commenting-and-comment-history) + [/delegation - Vote delegation management](rest/resources.html#delegation-vote-delegation-management) Indices and tables[¶](#indices-and-tables) === * [*Index*](genindex.html) * [*Module Index*](py-modindex.html) * [*Search Page*](search.html) 
REST Resources[¶](#rest-resources) === /instance - Group/Organization Instances[¶](#instance-group-organization-instances) --- ### index[¶](#index) * List all existing and non-hidden instances. * URL: http://[instance].adhocracy.cc/instance[.format] * Method: GET * Formats: html, json * Authentication: no * Pager prefix: instances_ ### create[¶](#create) * Create a new instance for a group or organization. * URL: http://[instance].adhocracy.cc/instance[.format] * Method: POST * Formats: html, json * Authentication: yes * Parameters: + key: A unique identifier for the instance. 
Short lower-case alpha-numeric text. This cannot be edited after the instance creation.
  + label: A title for the instance.
  + description: Short description for the instance, e.g. a mission statement.

### show[¶](#show)

* View an instance’s home page or base data.
* URL: http://[instance].adhocracy.cc/instance/[key][.format]
* Method: GET
* Formats: html, json
* Authentication: no
* *Note*: If no instance subdomain has been specified, this will 302 to the actual instance.

### update[¶](#update)

* Update some of an instance’s properties.
* URL: http://[instance].adhocracy.cc/instance/[key][.format]
* Method: PUT
* Formats: html, json
* Authentication: yes
* Parameters:
  + label: A title for the instance.
  + description: Short description for the instance, e.g. a mission statement.
  + required_majority: The percentage of voters required for the adoption of a proposal (e.g. 0.66 for 66%).
  + activation_delay: Delay (in days) that a proposal needs to maintain a majority to be adopted.
  + allow_adopt: Whether to allow adoption polls on proposals (bool).
  + allow_delegate: Whether to enable delegated voting (bool).
  + allow_index: Allow search engine indexing (via robots.txt, bool).
  + hidden: Whether to hide the instance in listings (bool).
  + default_group: Default group for newly joined members (one of: observer, advisor, voter, supervisor).

### delete[¶](#delete)

* Delete an instance and all contained entities.
* URL: http://[instance].adhocracy.cc/instance/[key][.format]
* Method: DELETE
* Formats: html, json
* Authentication: yes *(requires global admin rights)*
* *Note*: This will also delete all contained items, such as proposals, delegations, polls or comments.

### activity[¶](#activity)

* Retrieve a list of the latest events relating to this instance.
* URL: http://[instance].adhocracy.cc/instance/[key]/activity[.format]
* Method: GET
* Formats: html, rss
* Authentication: no
* Pager prefix: events_

### join[¶](#join)

* Make the authenticated user a member of this instance.
* URL: http://[instance].adhocracy.cc/instance/[key]/join[.format]
* Method: GET
* Formats: html
* Authentication: yes
* *Note*: Fails if the user is already a member of the instance.

### leave[¶](#leave)

* Terminate the authenticated user’s membership in this instance.
* URL: http://[instance].adhocracy.cc/instance/[key]/leave[.format]
* Method: POST
* Formats: html
* Authentication: yes
* *Note*: Fails if the user is not a member of the instance.

/user - User Management[¶](#user-user-management)
---

### index[¶](#id1)

* List all users with an active membership in the specified instance.
* URL: http://[instance].adhocracy.cc/user[.format]
* Method: GET
* Formats: html, json
* Authentication: no
* Pager prefix: users_
* Parameters:
  + users_q: A search query to filter with.
  + users_filter: Filter by membership group (only in an instance context).
* *Note*: If no instance is specified, all registered users will be returned.

### create[¶](#id2)

* Create a new user.
* URL: http://[instance].adhocracy.cc/user[.format]
* Method: POST
* Formats: html, json
* Authentication: no
* Parameters:
  + user_name: A unique user name for the new user.
  + email: An email address, which must be validated.
  + password: A password, min. 3 characters.
  + password_confirm: Must be identical to password.
* *Note*: Does not require an instance to be specified. If an instance is selected, the user will also become a member of that instance.

### show[¶](#id3)

* View a user’s home page and activity stream.
* URL: http://[instance].adhocracy.cc/user/[user_name][.format]
* Method: GET
* Formats: html, json, rss
* Authentication: no
* *Note*: Also available outside of instance contexts.

### update[¶](#id4)

* Update the user’s profile and settings.
* URL: http://[instance].adhocracy.cc/user/[user_name][.format]
* Method: PUT
* Formats: html, json
* Authentication: yes *(either to own user or with user management permissions)*
* Parameters:
  + display_name: Display name, i.e.
the real name to be shown in the application.
  + email: E-Mail address. Must be re-validated when changed.
  + locale: A locale, currently: de_DE, en_US or fr_FR.
  + password: A password, min. 3 characters.
  + password_confirm: Must be identical to password.
  + bio: A short bio, markdown-formatted.
  + email_priority: Minimum priority level for E-Mail notifications to be sent (0-6).
  + twitter_priority: Minimum priority level for Twitter direct message notifications to be sent (0-6).

### delete[¶](#id5)

* Delete a user. **Not implemented**

### votes[¶](#votes)

* Retrieve a list of the decisions that were made by this user.
* URL: http://[instance].adhocracy.cc/user/[user_name]/votes[.format]
* Method: GET
* Formats: html, json
* Authentication: no
* Pager prefix: decisions_
* *Note*: Does not include rating polls; limited to adoption polls.

### delegations[¶](#delegations)

* Retrieve a list of the delegations that were created by this user.
* URL: http://[instance].adhocracy.cc/user/[user_name]/delegations[.format]
* Method: GET
* Formats: html, json
* Authentication: no
* Pager prefix: delegations_ *(``json`` view only)*
* *Note*: In html, this lists both incoming and outgoing delegations. When rendered as json, it only includes outgoing delegations.

### instances[¶](#instances)

* A list of all non-hidden instances in which the user is a member.
* URL: http://[instance].adhocracy.cc/user/[user_name]/instances[.format]
* Method: GET
* Formats: html, json
* Authentication: no
* Pager prefix: instances_

### proposals[¶](#proposals)

* A list of all proposals that the user has introduced.
* URL: http://[instance].adhocracy.cc/user/[user_name]/proposals[.format]
* Method: GET
* Formats: html, json
* Authentication: no
* Pager prefix: proposals_

### groupmod[¶](#groupmod)

* Modify a user’s membership in the current instance.
* URL: http://[instance].adhocracy.cc/user/[user_name]/groupmod[.format]
* Method: GET
* Formats: html
* Authentication: yes *(requires instance admin privileges)*
* Parameters:
  + to_group: Target group (one of: observer, advisor, voter, supervisor).

### kick[¶](#kick)

* Terminate a user’s membership in the current instance.
* URL: http://[instance].adhocracy.cc/user/[user_name]/kick[.format]
* Method: GET
* Formats: html
* Authentication: yes *(requires instance admin privileges)*
* *Note*: Since the user can re-join at any time, this is largely a symbolic action.

/proposal - Proposal drafting[¶](#proposal-proposal-drafting)
---

### index[¶](#id6)

* List all existing proposals in the given instance.
* URL: http://[instance].adhocracy.cc/proposal[.format]
* Method: GET
* Formats: html, json
* Authentication: no
* Pager prefix: proposals_
* Parameters:
  + proposals_q: A search query to filter with.
  + proposals_state: Filter by state (one of: draft, polling, adopted). Only available if adoption polling is enabled in the selected instance.

### create[¶](#id7)

* Create a new proposal.
* URL: http://[instance].adhocracy.cc/proposal[.format]
* Method: POST
* Formats: html, json
* Authentication: yes
* Parameters:
  + label: A title for the proposal.
  + text: Goals of the proposal.
  + tags: Comma-separated or space-separated tag list to be applied to the proposal.
  + alternative (multiple values): IDs of any proposals that should be marked as an alternative to this proposal.

### show[¶](#id8)

* View a proposal’s goal page.
* URL: http://[instance].adhocracy.cc/proposal/[id][.format]
* Method: GET
* Formats: html, json
* Authentication: no

### update[¶](#id9)

* Update some of a proposal’s properties.
* URL: http://[instance].adhocracy.cc/proposal/[id][.format]
* Method: PUT
* Formats: html, json
* Authentication: yes
* Parameters:
  + label: A title for the proposal.
  + alternative (multiple values): IDs of any proposals that should be marked as an alternative to this proposal.
* *Note*: The goal description and tag list are edited separately.

### delete[¶](#id10)

* Delete a proposal and any contained entities.
* URL: http://[instance].adhocracy.cc/proposal/[id][.format]
* Method: DELETE
* Formats: html, json
* Authentication: yes *(requires instance admin rights)*
* *Note*: This will also delete all contained items, such as comments and delegations.

### delegations[¶](#id11)

* Retrieve a list of the delegations that exist regarding this proposal.
* URL: http://[instance].adhocracy.cc/proposal/[id]/delegations[.format]
* Method: GET
* Formats: html, json
* Authentication: no
* Pager prefix: delegations_

### canonicals[¶](#canonicals)

* Retrieve a list of canonical comments regarding the proposal. Canonical comments are listed as “provisions” in the UI.
* URL: http://[instance].adhocracy.cc/proposal/[id]/canonicals[.format]
* Method: GET
* Formats: html, json
* Authentication: no
* *Note*: No pager.

### alternatives[¶](#alternatives)

* Retrieve a list of the alternatives that exist regarding this proposal.
* URL: http://[instance].adhocracy.cc/proposal/[id]/alternatives[.format]
* Method: GET
* Formats: html, json
* Authentication: no
* Pager prefix: proposals_

### activity[¶](#id12)

* Retrieve a list of events within the scope of the given proposal.
* URL: http://[instance].adhocracy.cc/proposal/[id]/activity[.format]
* Method: GET
* Formats: html, rss
* Authentication: no
* Pager prefix: events_

### adopt[¶](#adopt)

* Trigger an adoption poll regarding this proposal.
* URL: http://[instance].adhocracy.cc/proposal/[id]/adopt[.format]
* Method: POST
* Formats: html
* Authentication: yes
* *Note*: Requires at least one canonical comment.
Adoption polls must be enabled on the instance level.

### tag[¶](#tag)

* Apply an additional tag to a proposal (or support an existing tag).
* URL: http://[instance].adhocracy.cc/proposal/[id]/tag[.format]
* Method: GET
* Formats: html
* Authentication: yes
* Parameters:
  + text: Comma-separated or space-separated tag list to be applied to the proposal.

### untag[¶](#untag)

* Remove a tag association (tagging) from a proposal.
* URL: http://[instance].adhocracy.cc/proposal/[id]/untag[.format]
* Method: GET
* Formats: html
* Authentication: yes
* Parameters:
  + tagging: ID of the tagging association to be removed.
* *Note*: Only taggings created by the user can be removed.

/poll - Poll data and voting[¶](#poll-poll-data-and-voting)
---

### show[¶](#id13)

* View a poll, listing the current decisions and offering a chance to vote.
* URL: http://[instance].adhocracy.cc/poll/[id][.format]
* Method: GET
* Formats: html, json
* Authentication: no

### delete[¶](#id14)

* End a poll and close voting.
* URL: http://[instance].adhocracy.cc/poll/[id][.format]
* Method: DELETE
* Formats: html, json
* Authentication: yes
* *Note*: This will only work for adoption polls; rating polls cannot be terminated.

### votes[¶](#id15)

* Retrieve a list of the decisions that were made regarding this poll.
* URL: http://[instance].adhocracy.cc/poll/[id]/votes[.format]
* Method: GET
* Formats: html, json
* Authentication: no
* Pager prefix: decisions_
* Parameters:
  + result: Filter for a specific decision, i.e. -1 (No), 1 (Yes), 0 (Abstained).

### rate[¶](#rate)

* Vote in the poll via rating.
* URL: http://[instance].adhocracy.cc/poll/[id]/rate[.format]
* Method: POST
* Formats: html, json
* Authentication: yes
* *Note*: This implements relative voting, i.e. if a user has previously voted -1 and now votes 1, the result will be 0 (a relative change). Used for comment up-/downvoting. Unlike vote, this will also trigger an automated tallying of the poll.
It is thus slower, especially for large polls.

### vote[¶](#vote)

* Vote in the poll.
* URL: http://[instance].adhocracy.cc/poll/[id]/vote[.format]
* Method: POST
* Formats: html, json
* Authentication: yes
* *Note*: This does not trigger tallying. Thus a subsequent call to show might yield an incorrect tally until a server background job has run.

/comment - Commenting and comment history[¶](#comment-commenting-and-comment-history)
---

### index[¶](#id16)

* List all existing comments.
* URL: http://[instance].adhocracy.cc/comment[.format]
* Method: GET
* Formats: json
* Authentication: no
* Pager prefix: comments_

### create[¶](#id17)

* Create a new comment within a specified context.
* URL: http://[instance].adhocracy.cc/comment[.format]
* Method: POST
* Formats: html, json
* Authentication: yes
* Parameters:
  + topic: ID of the Delegateable to which this comment is associated.
  + reply: A parent comment ID, if applicable.
  + canonical (bool): Specify whether this is part of the implementation description of the proposal to which it will be associated.
  + text: The comment text, markdown-formatted.
  + sentiment: General tendency of the comment, i.e. -1 for negative, 0 for neutral and 1 for a supporting argument.

### show[¶](#id18)

* View a comment separated out of its context.
* URL: http://[instance].adhocracy.cc/comment/[id][.format]
* Method: GET
* Formats: html, json
* Authentication: no

### update[¶](#id19)

* Create a new revision of the given comment.
* URL: http://[instance].adhocracy.cc/comment/[id][.format]
* Method: PUT
* Formats: html, json
* Authentication: yes
* Parameters:
  + text: The comment text, markdown-formatted.
  + sentiment: General tendency of the comment, i.e. -1 for negative, 0 for neutral and 1 for a supporting argument.

### delete[¶](#id20)

* Delete a comment.
* URL: http://[instance].adhocracy.cc/comment/[id][.format]
* Method: DELETE
* Formats: html, json
* Authentication: yes
* *Note*: Comments can only be deleted by non-admins if they have not yet been edited.

### history[¶](#history)

* List all revisions of the specified comment.
* URL: http://[instance].adhocracy.cc/comment/[id]/history[.format]
* Method: GET
* Formats: html, json
* Authentication: yes
* Pager prefix: revisions_

### revert[¶](#revert)

* Revert to an earlier revision of the specified comment.
* URL: http://[instance].adhocracy.cc/comment/[id]/revert[.format]
* Method: GET
* Formats: html, json
* Authentication: yes
* Parameters:
  + to: Revision ID to revert to.
* *Note*: This will actually create a new revision containing the specified revision’s text.

/delegation - Vote delegation management[¶](#delegation-vote-delegation-management)
---

### index[¶](#id21)

* List all existing delegations (instance-wide).
* URL: http://[instance].adhocracy.cc/delegation[.format]
* Method: GET
* Formats: json, dot
* Authentication: no
* Pager prefix: delegations_
* *Note*: The dot format produces a graphviz file.

### create[¶](#id22)

* Create a new delegation to a specified principal in a given scope.
* URL: http://[instance].adhocracy.cc/delegation[.format]
* Method: POST
* Formats: html, json
* Authentication: yes
* Parameters:
  + scope: ID of the Delegateable which will be the delegation’s scope.
  + agent: User name of the delegation recipient.
  + replay: Whether or not to re-play all of the agent’s previous decisions within the scope.

### show[¶](#id23)

* View the delegation.
* URL: http://[instance].adhocracy.cc/delegation/[id][.format]
* Method: GET
* Formats: html, json
* Authentication: no
* Pager prefix: decisions_
* *Note*: For json this will return a tuple of the actual serialized delegation and a list of decisions.

### delete[¶](#id24)

* Revoke the delegation.
* URL: http://[instance].adhocracy.cc/delegation/[id][.format]
* Method: DELETE
* Formats: html, json
* Authentication: yes
* *Note*: Can only be performed by the delegation’s principal.
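The endpoints above can be driven from any HTTP client. The sketch below only builds the URL and the form-encoded body following the conventions documented in this API; the helper names, the instance name `demo` and proposal id `42` are illustrative placeholders, not part of the API:

```python
from urllib.parse import urlencode

def resource_url(instance, path, fmt="json"):
    """Build an endpoint URL such as http://demo.adhocracy.cc/proposal/42.json."""
    return "http://%s.adhocracy.cc%s.%s" % (instance, path, fmt)

def form_body(params, method=None):
    """Encode application/x-www-form-urlencoded data; the optional
    _method parameter fakes PUT/DELETE for limited HTTP clients."""
    data = dict(params)
    if method is not None:
        data["_method"] = method
    return urlencode(sorted(data.items()))

# Example: update proposal 42 via a faked PUT (sent as a POST body).
url = resource_url("demo", "/proposal/42")
body = form_body({"label": "New title"}, method="PUT")
```

The body would be sent with HTTP Basic credentials attached, as recommended in the conventions section.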
REST API Conventions[¶](#rest-api-conventions)
===

Adhocracy provides a REST-inspired API for client applications to remotely gather or submit data, or to synchronize Adhocracy sign-ups and voting processes with other sites. While at the moment only JSON and RSS are produced and only JSON is processed by the software, future support for other formats, such as XML (i.e. StratML, EML), is planned.

Data Submission[¶](#data-submission)
---

All submitted data is expected to be either URL-encoded (for GET requests) or application/x-www-form-urlencoded (i.e. formatted as an HTML form, for both POST and PUT requests). Accept/Content-type based submission of JSON/XML data will be implemented in a later release.

A meta parameter called _method is evaluated for each request to fake a request method if needed. This is useful for older HTTP libraries or JavaScript clients which cannot actually perform any of the more exotic HTTP methods, such as PUT and DELETE.

Authentication and Security[¶](#authentication-and-security)
---

Authentication can take place either via form-based cookie creation (POST login and password to /perform_login) or via HTTP Basic authentication (i.e. via HTTP headers). Please note that for any write action using a cookie-based session, the site will expect an additional request parameter, _tok, containing a session ID.
This is part of Adhocracy’s CSRF filter and it will not apply to requests made using HTTP Basic authentication. Since the value of _tok is not returned by the API, it is recommended to use HTTP Basic authentication for any API applications.

OAuth-based authorization is planned for a future release and will allow for token-based access to specific resources or operations.

Pagination[¶](#pagination)
---

Many listings in Adhocracy are powered by a common pager system. Each pager has a specific prefix (e.g. proposals_) and a set of request parameters that can be used to influence the pager:

* [prefix]_page: The page number to retrieve, i.e. page offset.
* [prefix]_count: Number of items to retrieve per page.
* [prefix]_sort: Sorting key. These are somewhat erratically numbered and need to be redone in the future.
* [prefix]_q (in some cases): A search query used to filter the items.
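As an illustration, the prefixed pager parameters can be assembled like this; the helper function is made up for this sketch, only the [prefix]_* parameter names come from the list above:

```python
from urllib.parse import urlencode

def pager_params(prefix, page=None, count=None, sort=None, q=None):
    """Build the prefixed pager parameters, e.g. proposals_page=2."""
    values = {"page": page, "count": count, "sort": sort, "q": q}
    return {"%s_%s" % (prefix, key): value
            for key, value in values.items() if value is not None}

# Query string for the second page of a proposal listing, 10 items per page,
# to be appended to e.g. http://[instance].adhocracy.cc/proposal.json
query = urlencode(sorted(pager_params("proposals", page=2, count=10).items()))
```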
Update translations[¶](#update-translations)
===

Translations for contributors[¶](#translations-for-contributors)
---

We manage our translations in a [Transifex project](https://www.transifex.net/projects/p/adhocracy/). If you want to change a translation, go to the project page, choose your language and click on the resource “Adhocracy”. You will get a menu where you can download the .po file (“Download for use”) to edit it on your computer with an application like [poedit](http://www.poedit.net/). After you have translated the file, you can go to the menu and upload it again. From the menu you can also use the Transifex online editor (button: “✔ Translate now”).

It would be nice to drop us a note before you start to translate, to [<EMAIL>](mailto:<EMAIL>%<EMAIL>) or [<EMAIL>](mailto:info%40liqd.net). You can also contact us to set up a new language on Transifex.

Translations for developers[¶](#translations-for-developers)
---

Adhocracy uses [Babel](http://babel.edgewall.org/) together with Transifex to manage translations. Both are preconfigured in setup.cfg and .tx/config.

### Preparations[¶](#preperations)

Caution: If you’re using the [adhocracy.buildout](https://bitbucket.org/liqd/adhocracy.buildout) (highly recommended) you need to use the --distribute option to bootstrap.py to work with the preconfigured Babel commands. This document assumes that you installed the buildout in a virtualenv “adhocracy”.
Install the [transifex client](http://pypi.python.org/pypi/transifex-client) on your system. Then add your username and password for transifex.net to ~/.transifexrc:

```
[https://www.transifex.net]
hostname = https://www.transifex.net
username = <your transifex username>
password = <your transifex password>
token =
```

### Translation workflow[¶](#translation-workflow)

All .po and .pot files should go through Transifex before they are committed. This might be annoying, but it unifies the formatting and makes it easier to review commits.

#### Extract new messages[¶](#extract-new-messages)

1. Extract new messages with extract_messages. This will update adhocracy/i18n/adhocracy.pot:

   ```
   (adhocracy)/src/adhocracy$ ../../bin/adhocpy setup.py extract_messages
   ```

2. Push that to Transifex:

   ```
   (adhocracy)/src/adhocracy$ tx push --source
   ```

3. Pull all files from Transifex:

   ```
   (adhocracy)/src/adhocracy$ tx pull
   ```

   If it skips languages, the files on Transifex are older than the files on your system. See Troubleshooting.

4. Commit adhocracy.pot:

   ```
   (adhocracy)/src/adhocracy$ hg ci adhocracy/i18n/adhocracy.pot \
   > -m 'i18n: extract new messages'
   ```

#### Update the translations[¶](#update-the-translations)

1. Go to the Transifex project and use the online translation editor to translate, then continue with step 4. Or translate locally; to do that, make sure you have pulled the most recent translations from Transifex:

   ```
   (adhocracy)/src/adhocracy$ tx pull               # pulls all languages
   (adhocracy)/src/adhocracy$ tx pull -l <language>
   ```

2. Edit the .po files for your language(s).

   Note: The preferred way to edit .po files is to use an application like [poedit](http://www.poedit.net/). It will highlight untranslated messages and messages that were created with fuzzy matching, and will automatically update or remove markers like #, fuzzy and update the header of the .po file.

3. Push the translation to Transifex:

   ```
   (adhocracy)/src/adhocracy$ tx push -l <language>
   ```

4.
Pull the translation back:

   ```
   (adhocracy)/src/adhocracy$ tx pull -l <language>
   ```

5. Compile the catalogs with compile_catalog:

   ```
   (adhocracy)/src/adhocracy$ ../../bin/adhocpy setup.py compile_catalog
   ```

   This will also show you errors in the .po files and statistics about the translation.

6. Commit the .po and .mo files of the language(s) you translated, e.g.:

   ```
   (adhocracy)/src/adhocracy$ hg ci adhocracy/i18n/de -m 'i18n: ...'
   ```

#### Troubleshooting[¶](#troubleshooting)

If tx skips the languages you want to pull from the server, the local file is most likely newer than the file on transifex.net. You can add -d to the command to get debug output, e.g.:

```
tx -d pull -l de
```

Then you have to check which of the files to use. Copy the local file and pull the language (with -f/--force) from Transifex...:

```
(adhocracy)/src/adhocracy$ cd adhocracy/i18n/de/LC_MESSAGES
(adhocracy) .../de/LC_MESSAGES$ cp adhocracy.po local.po
(adhocracy) .../de/LC_MESSAGES$ tx pull -f -l de
```

...and compare them. A good tool to compare is podiff from the [Python GetText Translation Toolkit](https://launchpad.net/pyg3t), which you can install from source or from their Ubuntu PPA. It contains several other tools to work with .po files. You might have to give the -r (relax) option to podiff.

```
(adhocracy) .../de/LC_MESSAGES$ podiff local.po adhocracy.po
```

(There is also another [podiff package](http://pypi.python.org/pypi/podiff) on PyPI.)

#### Babel commands[¶](#babel-command)

* (adhocracy)/src/adhocracy$ ../../bin/adhocpy setup.py extract_messages: Extract the messages from the Python files and templates into adhocracy/i18n/adhocracy.pot.
* (adhocracy)/src/adhocracy$ ../../bin/adhocpy setup.py compile_catalog: Compile the .po files for all languages to .mo files.

The Babel command update_catalog should not be used anymore. Use the tx client instead.
Add and run tests[¶](#add-and-run-tests)
===

Before every commit you have to run all tests, and every new feature needs good test coverage. You should use Test Driven Development (<http://en.wikipedia.org/wiki/Test-driven_development>) and Acceptance Test Driven Development. Acceptance tests correspond to user stories (<http://en.wikipedia.org/wiki/User_story>); they use TestBrowser sessions and reside inside the functional tests directory.

Add a new test[¶](#add-a-new-test)
---

Go to (adhocracy)/src/adhocracy/adhocracy/tests and add your test (<http://pylonsbook.com/en/1.1/testing.html>).

Run all tests[¶](#run-all-tests)
---

In an [adhocracy.buildout](https://bitbucket.org/liqd/adhocracy.buildout) you have bin/test.
Alternatively you can call:

```
(adhocracy)$ bin/nosetests --with-pylons=src/adhocracy/test.ini src/adhocracy/adhocracy/tests
```

Run one test file[¶](#run-one-test-file)
---

```
(adhocracy)/src/adhocracy/$ ../../bin/nosetests -s adhocracy.tests.test_module
```

The -s option enables stdout, so you can use pdb/ipdb statements in your code.

Internal API Documentation[¶](#internal-api-documentation)
===

Adhocracy does not have a public programming API. There is a plan to develop a common *Kernel* interface for *Liquid Democracy* projects. Once such an API becomes available, Adhocracy will be modified to implement that API.
Core polling logic[¶](#core-polling-logic)
---

Delegation management and traversal[¶](#delegation-management-and-traversal)
---

Database models and helper classes[¶](#database-models-and-helper-classes)
---

### Badge[¶](#badge)
### Comment[¶](#comment)
### Delegateable[¶](#delegateable)
### Delegation[¶](#delegation)
### Event[¶](#event)
### Group[¶](#group)
### Instance[¶](#instance)
### Membership[¶](#membership)
### Milestone[¶](#milestone)
### Openid[¶](#openid)
### Page[¶](#page)
### Permission[¶](#permission)
### Poll[¶](#poll)
### Proposal[¶](#proposal)
### Revision[¶](#revision)
### Selection[¶](#selection)
### Tag[¶](#tag)
### Tagging[¶](#tagging)
### Tally[¶](#tally)
### Text[¶](#text)
### Twitter[¶](#twitter)
### User[¶](#user)
### Vote[¶](#vote)
### Watch[¶](#watch)

Template Variables[¶](#template-variables)
---

Pylons provides a thread-local variable pylons.tmpl_context that is available in templates as c. The following variables are commonly or always available in templates:

* c.instance: An adhocracy.model.Instance object or None. It is set by adhocracy.lib.base.BaseController from a value determined by adhocracy.lib.instance.DescriminatorMiddleware from the host name.
* c.user: An adhocracy.model.User object, or None if unauthenticated. It is set by adhocracy.lib.base.BaseController from a value determined by the repoze.who middleware.
* c.active_global_nav: A str naming the currently active top navigation item. It is set to ‘instance’ in adhocracy.lib.base.BaseController if the request is made to an instance, and can be overridden in any controller.
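A minimal stand-in for the context object (not the real Pylons thread-local, just a sketch of the attributes listed above; the class and function names are invented for illustration) shows how a base controller might populate c:

```python
class TemplateContext:
    """Sketch of pylons.tmpl_context, which templates see as `c`."""

    def __init__(self):
        self.instance = None            # adhocracy.model.Instance or None
        self.user = None                # adhocracy.model.User or None
        self.active_global_nav = None   # name of the active top nav item

def setup_context(c, instance=None, user=None):
    """Roughly what adhocracy.lib.base.BaseController derives from the
    host name and the repoze.who identity (illustrative, not the real code)."""
    c.instance = instance
    c.user = user
    if instance is not None:
        c.active_global_nav = "instance"
    return c

# A request to an instance subdomain by an unauthenticated visitor.
c = setup_context(TemplateContext(), instance="some-instance", user=None)
```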
Architecture[¶](#architecture)
===

Database and model classes[¶](#database-and-model-classes)
---

### Overview[¶](#overview)

We have four top-level content classes that users work with:

page.Page
  When the user creates a *Norm* (respectively Paper, or German “Papier”) through the web interface, he is internally creating a Page with page.Page.function set to page.Page.NORM.
The text of the page is saved in Text objects in page.Page.texts, which may have different variants. Page is a subclass of delegateable.Delegateable.

proposal.Proposal
    The actual text of the proposal is saved as a Page object with the Page.function page.Page.DESCRIPTION in an attribute page.Page.description. In a *Proposal* a user can propose changes to a *Page (Norm)*. These are saved as selection.Selection objects (see below). Proposal is a subclass of delegateable.Delegateable.

milestone.Milestone
    A Milestone is used to represent a date or a deadline. Each Delegateable has a relation to one *Milestone*.

comment.Comment
    Comments can be created for delegateable.Delegateable objects. The reference to the *Delegateable* is available as comment.Comment.topic. Each comment is associated with a poll.Poll to rate content. If it is an answer to another *Comment*, this is referenced with comment.Comment.reply_id. The text of a comment is stored as a revision.Revision. If a comment is edited, a new *Revision* is created. *Revisions* serve a similar function for *Comments* as *Texts* do for *Pages*.

Some of the more interesting models and support classes are:

selection.Selection
    A *Selection* is a "reference" object for the relation between *Pages (Norms)* and *Proposals*, storing additional information, especially when the relation was created or removed.

text.Text
    Text objects are only used to save text for page.Page objects. See *Page*.

revision.Revision
    Stores the text of a Comment. Similar to *Text*. See *Comment*.

poll.Poll
    A *Poll* is an object representing a particular poll process for a Comment, Proposal or Page (Norm). Single votes relate to the Poll object.

vote.Vote
    A Vote is a single vote from a user. It knows about the user whose vote it is and, if it was a delegated vote, the user id of the delegatee that voted for the user.

delegation.Delegation
    Created when a user delegates the voting to a delegatee.
It knows about the user who delegated (principal_id), who the delegatee is (agent_id), and for which scope (scope_id - the id of the delegateable object).

delegateable.Delegateable
    Base class for Pages and Proposals, for which a user can delegate the voting. SQLAlchemy's joined table inheritance is used.

tally.Tally
    A Tally saves the linear history of the Poll, noting which vote occurred and what the running totals of for/against/abstain votes are.

watch.Watch
    Users can subscribe to content changes and will be notified depending on the settings in their preferences. A *Watch* object is created in these cases.

meta.Indexable
    Mixin class for content that should be indexed in Solr. It defines only one method, meta.Indexable.to_index(), that collects data from the models and will be used automatically when a model object participates in a transaction.

Almost all model classes have a classmethod .create() to create a new instance of the model and set up the necessary data or relationships. Furthermore, methods like .find() and .all() serve as convenient query methods that support limiting the query to the current model.instance.Instance or in-/excluding deleted model instances.

### Diagrams

(Diagram of the mapped tables; diagram of all model classes.)

#### Updating the diagrams

To update the diagrams, install [graphviz](http://www.graphviz.org/) and easy_install [sqlalchemy_schemadisplay](http://pypi.python.org/pypi/sqlalchemy_schemadisplay/) into the environment adhocracy is installed in. Then run python /adhocracy/scripts/generate-db-diagrams.py. It will create the diagrams as GIF files. Finally, replace the GIF files in adhocracy/docs/development with the new versions.

### Delegateables

A user can delegate his vote to another user for comment.Comment, proposal.Proposal and page.Page.
This functionality is enabled by inheriting from Delegateable. Inheritance is done with SQLAlchemy's [joined table inheritance](http://www.sqlalchemy.org/docs/orm/inheritance.html#joined-table-inheritance), where the *delegateable* table is polymorphic on delegateable.Delegateable.type.

### Page Models

The model class page.Page has 2 uses in adhocracy that are differentiated by the value of page.Page.function:

* A *Page* represents a *Norm* if page.Page.function is page.Page.NORM. This is the primary usage of Page.
* For every proposal.Proposal a page is created and available as proposal.Proposal.description to manage the text and text versions of the *proposal*. page.Page.function is page.Page.DESCRIPTION in this case.

Pages are *delegateable* and inherit from delegateable.Delegateable.

#### Variants and Versions

The text of the *Page* is not saved in the page table but created as a text.Text object. A page can contain different variants of the text, and for each variant an arbitrary number of versions, e.g.:

* initial text
  + version 1
  + version 2
  + ...
* other text variant
  + version 1
  + version 2
  + ...
* ...

Text variants are used for Norms. For the initial text, the variant is set to text.Text.HEAD. This text variant is handled specially in the UI and labeled Status Quo in Norms. Other variants can be freely named. All text variants are listed in Page.variants. Each variant can have an arbitrary number of versions. The newest version of a text is called its head (not to be confused with the default text variant Text.HEAD). You can get the newest version of a specific variant with Page.variant_head(). The newest versions of all variants are available as Page.heads. A shortcut to obtain the newest version of the HEAD variant is Page.head.

Text variants are not used for pages that serve as the description of proposal.Proposal objects (proposal.Proposal.description).
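The variant/version structure just described can be sketched with plain Python containers. This is an illustration only — the real Page and Text classes are SQLAlchemy models; the method names mirror the attributes mentioned above:

```python
# Sketch of the Page variant/version structure: one Page holds several
# named variants, each with an ordered list of versions. Illustrative
# only; not the real adhocracy models.
HEAD = "HEAD"  # corresponds to text.Text.HEAD, the "Status Quo" variant

class PageSketch:
    def __init__(self):
        self.texts = {}  # variant name -> list of versions, oldest first

    def add_text(self, variant, text):
        self.texts.setdefault(variant, []).append(text)

    @property
    def variants(self):
        return list(self.texts)

    def variant_head(self, variant):
        # newest version of one variant (cf. Page.variant_head())
        return self.texts[variant][-1]

    @property
    def heads(self):
        # newest version of every variant (cf. Page.heads)
        return {v: versions[-1] for v, versions in self.texts.items()}

    @property
    def head(self):
        # shortcut for the newest version of the HEAD variant (cf. Page.head)
        return self.variant_head(HEAD)
```
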
The poll tally of one variant or of all variants can be obtained with Page.variant_tally() or Page.variant_tallies(). Polls are set up per variant, not for the Page object.

#### Page Hierarchies

Page objects (that have the function *Norm*) can be organized in a tree structure by setting another *Page (Norm)* object as one of the Page.parents of the current page. *Parents* can be an arbitrary number of delegateable.Delegateable objects, but only one non-deleted Page with the Page.function Page.NORM is allowed. *Parents* are taken into account when we compute a *delegation graph*. The subpages of a page are available as Page.subpages.

#### Other functionality

Besides that, Pages have functions and attributes to handle purging, deleting, renaming, Selections (Page (Norm) <-> Proposal relationships) and other things. See the API documentation for Page.

Indexing/Searching
---

Indexing and searching is done with SQL(Alchemy) and Solr. Indexing with Solr is done asynchronously most of the time, while updates of the RDBMS are done synchronously most of the time. The asynchronous indexing is done through a rabbitmq job queue.

### Types of search indexes

Besides RDBMS-internal indexes, adhocracy maintains application-specific indexes (that partly act as audit trails too):

* Solr is used for full-text and tag searches. It is a document-oriented index. The index schema can be found in `adhocracy/solr/schema.xml`.
* New tally.Tally objects are created with every new or changed vote and provide the current total of votes.

### Update application layer indexes

adhocracy implements an SQLAlchemy MapperExtension with hooks.HookExtension that provides hook methods to SQLAlchemy that will be called before and after insert, update and delete operations for every model instance that is part of a commit.
To determine what to do, it will inspect the model instance for fitting hook methods. The asynchronous system roughly works like this:

* hooks defines a list of event hook method names that are also used as event identifiers (:const:.PREINSERT, :const:.PREDELETE, :const:.PREUPDATE, :const:.POSTINSERT, :const:.POSTDELETE, :const:.POSTUPDATE).
* All model classes defined in adhocracy.models.refs.TYPES are patched by init_queue_hooks(). A function that posts a message to the job queue (_handle_event) is patched in as each of the method names listed above. The post to the job queue contains the entity (model class) and the event identifier.
* A number of callback functions is registered by adhocracy.lib.democracy.init_democracy() with the help of register_queue_callback() in the hooks REGISTRY.
* Every time one of the patched models is inserted, updated, or deleted, a generic job is inserted into the job queue that contains the changed model instance and the event identifier.
* The background thread (paster background <ini-file>) picks up the jobs and calls handle_queue_message(), which calls all registered callbacks.

To have indexing and searching working properly you need:

* a working rabbitmq
* a working solr
* a running background process to process the jobs pushed into the rabbitmq job queue (paster background <ini-file>)

Read the install documentation for setup information.

Authentication and Permissions
---
testbrowser example — Adhocracy 2.0dev documentation

testbrowser example
===

Make (reasonably) sure that we have a clean environment:

```
>>> model.User.all()
[<User(1,admin)>]
```

We have a testbrowser browser set up that we can use to browse through the site:

```
>>> browser.open(app_url + "/login")
>>> browser.dc()
>>> browser.status
'200 OK'
>>> '<html class="no-js">' in browser.contents
True
```

browser.dc('/path/to/file') would dump the current html to a file.
We can also instantiate a new browser and log in as a certain user:

```
>>> admin_browser = make_browser()
>>> admin_browser.open(app_url)
>>> 'http://test.lan/user/admin/dashboard' in admin_browser.contents
False
>>> admin_browser.login('admin')
>>> admin_browser.open(app_url)
>>> 'http://test.lan/user/admin/dashboard' in admin_browser.contents
True
```

And we can log out:

```
>>> admin_browser.logout()
>>> admin_browser.open(app_url)
>>> 'http://test.lan/user/admin/dashboard' in admin_browser.contents
False
```

This won't affect our first browser:

```
>>> browser.url
'http://test.lan/login'
```

Mass import users — Adhocracy 2.0dev documentation

Mass import users
===

As a global administrator:

```
>>> admin = make_browser()
>>> admin.login('admin')
>>> admin.open('http://test.lan/admin')
```

Open and fill out the *Import Users* form:

```
>>> admin.follow('Import Users')
>>> admin.url
'http://test.lan/admin/users/import'
>>> csv = admin.getControl(name='users_csv')
>>> csv.value = ('testuser,"Test User",<EMAIL>\n'
...              'testuser2,"Test User2",<EMAIL>')
>>> admin.getControl(name='email_subject').value = 'hello new user'
>>> template = admin.getControl(name='email_template')
>>> template.value = ('{user_name}\n{password}\n{url}\n'
...               '{display_name}\n{email}\nFree Text')
>>> admin.dc('/tmp/saved-user-import-form')
>>> admin.getControl('save').click()
```

As a result we have two new users and have sent out emails to them:

```
>>> model.User.all()
[<User(1,admin)>, <User(2,testuser)>, <User(3,testuser2)>]
>>> self.mocked_mail_send.assert_any_call(mock.ANY, '<EMAIL>', mock.ANY)
>>> self.mocked_mail_send.assert_any_call(mock.ANY, '<EMAIL>', mock.ANY)
```

Anonymous users cannot open the form:

```
>>> anon = make_browser()
>>> anon.handleErrors = True
>>> anon.raiseHttpErrors = False
>>> anon.open('http://test.lan/admin/users/import')
>>> anon.status
'401 Unauthorized'
```

Test basic functionality in the page root — Adhocracy 2.0dev documentation

Test basic functionality in the page root
===

Make (reasonably) sure that we have a clean environment:

```
>>> model.User.all()
[<User(1,admin)>]
```

Call the root:

```
>>> browser.open(app_url)
>>> browser.status
'200 OK'
```

Login Form
===

We have a login link on the start page:

```
>>> '%s/login' % app_url in browser.contents
True
>>> browser.getLink('Login').click()
>>> browser.getControl(name='login')
<Control name='login' type='text'>
>>> browser.getControl(name='password')
<Control name='password' type='password'>
```

RSS feed
===

Adhocracy has a global rss feed
showing events in all instances:

```
>>> browser.open(app_url)
>>> browser.xpath("//link[@href='http://test.lan/feed.rss']")
[<Element link at ...>]
>>> browser.open('http://test.lan/feed.rss')
>>> browser.headers['Content-Type']
'application/rss+xml; charset=utf-8'
```

We have no items in the rss feed yet:

```
>>> len(browser.xpath('//item'))
0
```

If we add content in the test instance, then the feed contains an item for the event:

```
>>> admin = make_browser()
>>> admin.login('admin')
>>> admin.open(instance_url)
>>> admin.follow('new proposal')
>>> form = admin.getForm(name='create_proposal')
>>> form.getControl(name='label').value = u'Test Proposal'
>>> form.getControl(name='text').value = u'Test Description'
>>> form.submit()
>>> browser.open('http://test.lan/feed.rss')
>>> browser.url
'http://test.lan/feed.rss'
>>> len(browser.xpath('//item'))
1
>>> admin.open(instance_url)
>>> admin.follow('new proposal')
>>> form = admin.getForm(name='create_proposal')
>>> form.getControl(name='label').value = u'Test Proposal 2'
>>> form.getControl(name='text').value = u'Test Description 2'
>>> form.submit()
>>> browser.open('http://test.lan/feed.rss')
>>> browser.url
'http://test.lan/feed.rss'
>>> len(browser.xpath('//item'))
2
```
AdvPiStepper 0.9.0.alpha documentation

Welcome to AdvPiStepper's documentation!
===

AdvPiStepper is a driver for all kinds of stepper motors, written in Python for the Raspberry Pi, using the pigpio library.

Features
---

> "Here comes the Hotstepper"
> – <NAME>

* Uses acceleration and deceleration ramps.
* Fairly tight timing up to approx. 1500 steps per second (on Raspberry Pi 4) [[1]](#id2).
* Complete API for relative and absolute moves, rotations and continuous running.
* Runs in the background. Motor movements can be blocking or non-blocking.
* Support for microstepping (depending on the driver).
* Support for any unipolar stepper motors, like:
  + 28BYJ-48 (very cheap geared stepper)
* {TODO} Support for bipolar stepper drivers / dual H-bridges like the
  + L293(D)
  + DRV8833
* {TODO} Support for step/direction controllers like
  + A4988
  + DRV8825
  + STSPIN220 / 820
* Other drivers should be easy to implement.
* Licensed under the very permissive MIT license.
* 100% Python, no dependencies except pigpio.

[[1]](#id1) At high step rates occasional stutters may occur when some Python / Linux background tasks run.

### Uses

AdvPiStepper is suitable for

* Python projects that need to accurately control a single stepper motor at reasonable speeds.
* Stepper motor experiments and prototyping.

It is not suitable for

* multi-axis stepper motor projects
* high speeds (> 1500 steps per second)

### Caveats

* Currently no support for multiple motors. Single motor only.
* 100% Python, therefore no realtime performance - jitter and stutters may occur.

Requirements
---

> "One small step for [a] man"
> – <NAME>

AdvPiStepper uses the [pigpio](http://abyz.me.uk/rpi/pigpio/) library to access the Raspberry Pi GPIO pins.
It requires at least V76 of the library, which at the time of writing has not yet been uploaded to [PyPI.org](https://pypi.org/project/pigpio/) and therefore has to be [installed manually](http://abyz.me.uk/rpi/pigpio/download.html).

A multicore Raspberry Pi (Model 2/3/4) is recommended so that the stepper engine with its critical timings can run on a separate core. Single-core Pi models (or heavy load on more than one core) will have timing jitter - neither Linux nor Python is really suited for these realtime uses.

Usage
---

> "A journey of a thousand miles begins with a single step"
> – Laozi

### Installation

#### pigpio

AdvPiStepper requires the [pigpio](http://abyz.me.uk/rpi/pigpio/) library to work. If the [Remote GPIO](https://gpiozero.readthedocs.io/en/stable/remote_gpio.html) has been enabled in the Raspberry Pi configuration tool, then the pigpio daemon should already be installed and running. Run the following to check if the pigpio daemon is installed, and its version number:

```
$ pigpiod -v
76
```

If pigpio is either not installed or has a version smaller than 76 (the minimum version required by AdvPiStepper), then refer to the pigpio [download & install](http://abyz.me.uk/rpi/pigpio/download.html) page on how to install pigpio.

#### AdvPiStepper

AdvPiStepper can simply be installed with

```
$ pip install advpistepper
```

### Usage

AdvPiStepper is very simple to use. Here is a small example using the [`28BYJ-48`](index.html#module-advpistepper.driver_unipolar_28byj48) driver:

```
import advpistepper

driver = advpistepper.Driver28BYJ48(pink=23, orange=25, yellow=24, blue=22)
stepper = advpistepper.AdvPiStepper(driver)
stepper.move(100, block=True)
stepper.move(-100)
```

This example will move the stepper motor 100 steps forward, waiting for it to finish, then move it 100 steps backward without waiting.
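The blocking vs. non-blocking behaviour of such moves can be mimicked with plain threading. This is only a sketch of the idea — AdvPiStepper's real engine times its pulses with pigpio in a separate background process:

```python
# Sketch: layering a blocking/non-blocking move API on a background
# worker thread. Purely illustrative; not AdvPiStepper code.
import threading
import time

class FakeStepper:
    def __init__(self):
        self.position = 0

    def _run(self, steps):
        for _ in range(abs(steps)):
            time.sleep(0.001)            # stand-in for one step pulse
            self.position += 1 if steps > 0 else -1

    def move(self, steps, block=False):
        worker = threading.Thread(target=self._run, args=(steps,))
        worker.start()
        if block:                        # blocking move: wait for the worker
            worker.join()
        return worker

s = FakeStepper()
s.move(100, block=True)                  # returns only after 100 steps
s.move(-100).join()                      # background move; join when needed
```
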
Besides the obvious import of advpistepper, using it requires instantiating a driver. AdvPiStepper comes with multiple generic and specific drivers; refer to the Drivers section of the documentation for more details. In this example the 28BYJ-48 driver is used, which needs four arguments: the GPIO numbers that the motor is connected to.

When using a motor with a step & direction interface, the driver can be instantiated like this, with the step signal on pin 22 and the direction signal on pin 23:

```
driver = advpistepper.DriverStepDirGeneric(step=22, direction=23)
```

The next line of the example initializes the stepper engine. It needs the driver as an argument (without it, it defaults to a no-GPIO driver). It can take an optional argument with a dict containing parameters to overwrite the built-in default parameters.

The last two lines of the example first move the stepper 100 steps forward, waiting for the move to finish, then 100 steps backwards without waiting, that is, with the move running in the background.

For all commands of AdvPiStepper refer to [`the API`](index.html#module-advpistepper.stepper).

### Tuning

To get the best performance from AdvPiStepper there should be as few background processes running as possible. For example, on the AdvPiStepper development system (Raspberry Pi 4) the desktop process interferes with the AdvPiStepper process about every 500 ms, causing step delays of a few milliseconds - enough to cause late step pulses at high speeds (> 500 steps per second).

If AdvPiStepper is called with root privileges (sudo), it will decrease the niceness of the backend process to -10. This improves the timing at high speeds somewhat, due to less interference by normal user processes.

API
---

`advpistepper.stepper` - alias of [`advpistepper.stepper`](index.html#module-advpistepper.stepper)

Warning: This program is not finished. It was uploaded to GitHub as a backup.
Feel free to look at the source code and give feedback, but do not expect it to work in any shape or form.

Theory of Operation
---

> "Step by step, Heart to heart, Left, right, left"
> – Martika

History
---

AdvPiStepper was started for a Raspberry Pi project where I needed to move a stepper motor for 400 +/- a few steps. I wanted acceleration and deceleration ramps because an early Arduino based prototype had them.
As I could not find any suitable RPi library, I started this program, which quickly spiraled out of control and became this multipurpose stepper motor controller.

V0.9 Work in Progress - Not officially released

Indices and tables
===

* [Index](genindex.html)
* [Module Index](py-modindex.html)
* [Search Page](search.html)
Package 'aprof' - October 12, 2022

Type: Package
Title: Amdahl's Profiler, Directed Optimization Made Easy
Version: 0.4.1
Date: 2018-05-17
Author: <NAME>
Maintainer: <NAME> <<EMAIL>>
Description: Assists the evaluation of whether and where to focus code optimization, using Amdahl's law and visual aids based on line profiling. Amdahl's profiler organizes profiling output files (including memory profiling) in a visually appealing way. It is meant to help to balance development vs. execution time by helping to identify the most promising sections of code to optimize and projecting potential gains. The package is an addition to R's standard profiling tools and is not a wrapper for them.
License: GPL (>= 2)
URL: http://github.com/MarcoDVisser/aprof
BugReports: http://github.com/MarcoDVisser/aprof/issues
Imports: graphics, grDevices, stats, testthat
Repository: CRAN
RoxygenNote: 5.0.1
NeedsCompilation: no
Date/Publication: 2018-05-22 18:23:45 UTC

R topics documented: aprof, is.aprof, plot.aprof, print.aprof, profileplot, readLineDensity, summary.aprof, targetedSummary

aprof - Create an 'aprof' object for usage in the package 'aprof'

Description:
Create 'aprof' objects for usage with 'aprof' functions.

Usage:
aprof(src = NULL, output = NULL)

Arguments:
src: The name of the source code file (and path if not in the working directory). The source code file is expected to be a plain text file (e.g. .txt, .R) containing the code of the previously profiled program. If left empty, some "aprof" functions (e.g. readLineDensity) will attempt to extract this information from the call stack, but this is not recommended (as the success of file name detection operations varies). Note that functions that require a defined source file will fail if the source file is not defined or detected in the call stack.
output: The file name (and path if not in the working directory) of a previously created profiling exercise.
Details:
Creates an "aprof" object from the R profiler's output and a source file. The objects created through "aprof" can be used by the standard functions plot, summary and print (more specifically: plot.aprof, summary.aprof and print.aprof). See the example below for more details.

Using aprof with knitr and within .Rmd or .Rnw documents is not yet supported by the R profiler. Note that setting the chunk option engine="Rscript" disables line profiling. Line profiling only works in an interactive session (Oct 2015). In these cases users are advised to use the standard Rprof functions or "profr" (while setting engine="Rscript") and not to rely on line-profiling based packages (for the time being).

Value:
An aprof object.

Author(s):
<NAME>

See Also:
plot.aprof, summary.aprof, print.aprof, Rprof and summaryRprof.

Examples:

## Not run:
## create function to profile
foo <- function(N){
  preallocate <- numeric(N)
  grow <- NULL
  for(i in 1:N){
    preallocate[i] <- N/(i+1)
    grow <- c(grow, N/(i+1))
  }
}

## save function to a source file and reload
dump("foo", file="foo.R")
source("foo.R")

## create file to save profiler output
tmp <- tempfile()

## Profile the function
Rprof(tmp, line.profiling=TRUE)
foo(1e4)
Rprof(append=FALSE)

## Create an aprof object
fooaprof <- aprof("foo.R", tmp)

## display basic information, summarize and plot the object
fooaprof
summary(fooaprof)
plot(fooaprof)
profileplot(fooaprof)

## To continue with memory profiling:
## enable memory.profiling=TRUE
Rprof(tmp, line.profiling=TRUE, memory.profiling=TRUE)
foo(1e4)
Rprof(append=FALSE)

## Create an aprof object
fooaprof <- aprof("foo.R", tmp)

## display basic information, and plot memory usage
fooaprof
plot(fooaprof)
## End(Not run)

is.aprof - is.aprof

Description:
Generic lower-level function to test whether an object is an aprof object.
Usage

is.aprof(object)

Arguments

object  Object to test

plot.aprof    plot.aprof

Description

Plot execution time, or total MB usage when memory profiling, per line of code from a previously profiled source file. The plot visually shows bottlenecks in a program's execution time, shown directly next to the code of the source file.

Usage

## S3 method for class 'aprof'
plot(x, y, ...)

Arguments

x  An aprof object as returned by aprof(). If this object contains both memory and time profiling information, both will be plotted (as proportions of total time and total memory allocations).
y  Unused and ignored at current.
...  Additional printing arguments. Unused at current.

Author(s)

<NAME>

Examples

## Not run:
# create function to profile
foo <- function(N){
  preallocate <- numeric(N)
  grow <- NULL
  for(i in 1:N){
    preallocate[i] <- N/(i+1)
    grow <- c(grow, N/(i+1))
  }
}

## save function to a source file and reload
dump("foo", file="foo.R")
source("foo.R")

## create file to save profiler output
tmp <- tempfile()

## Profile the function
Rprof(tmp, line.profiling=TRUE)
foo(1e4)
Rprof(append=FALSE)

## Create an aprof object
fooaprof <- aprof("foo.R", tmp)
plot(fooaprof)
## End(Not run)

print.aprof    Generic print method for aprof objects

Description

Function that makes a pretty table, and returns some basic information.

Usage

## S3 method for class 'aprof'
print(x, ...)

Arguments

x  An aprof object returned by the function aprof.
...  Additional printing arguments. Unused.

profileplot    Line progression plot

Description

A profile plot describing the progression through each code line during the execution of the program.

Usage

profileplot(aprofobject)

Arguments

aprofobject  An aprof object returned by the function aprof

Details

Given that a source code file was specified in an "aprof" object, this function will estimate when each line was executed. It identifies the largest bottleneck and indicates this on the plot with red markings (y-axis).
R uses a statistical profiler which, using system interrupts, temporarily stops execution of a program at fixed intervals. This is a profiling technique that results in samples of "the call stack" every time the system was stopped. The function profileplot uses these samples to reconstruct the progression through the program. Note that the best results are obtained when a decent number of samples has been taken (relative to the length of the source code). Use print.aprof to see how many samples (termed "Calls") of the call stack were taken.

Author(s)

<NAME>

See Also

plot.aprof

Examples

## Not run:
# create function to profile
foo <- function(N){
  preallocate <- numeric(N)
  grow <- NULL
  for(i in 1:N){
    preallocate[i] <- N/(i+1)
    grow <- c(grow, N/(i+1))
  }
}

# save function to a source file and reload
dump("foo", file="foo.R")
source("foo.R")

# create file to save profiler output
tmp <- tempfile()

# Profile the function
Rprof(tmp, line.profiling=TRUE)
foo(1e4)
Rprof(append=FALSE)

# Create an aprof object
fooaprof <- aprof("foo.R", tmp)
profileplot(fooaprof)
## End(Not run)

readLineDensity    readLineDensity

Description

Reads and calculates the line density (in execution time or memory) of an aprof object returned by the aprof function. If a source file was not specified in the aprof object, then the first file within the profiling information is assumed to be the source.

Usage

readLineDensity(aprofobject = NULL, Memprof = FALSE)

Arguments

aprofobject  An object returned by aprof, which contains the stack calls sampled by the R profiler.
Memprof  Logical. Should the function return information specific to memory profiling, with memory use per line in MB? Otherwise, the default is to return line call density and execution time per line.

Author(s)

<NAME>

summary.aprof    Projected optimization gains using Amdahl's law.

Description

summary.aprof, projections of code optimization gains.

Usage

## S3 method for class 'aprof'
summary(object, ...)
Arguments

object  An object returned by the function aprof.
...  Additional [and unused] arguments.

Details

Summarizes an "aprof" object and returns a table with the theoretical maximal improvement in execution time for the entire profiled program when a given line of code is sped up by a factor (called S in the output). Calculations are done using R's profiler output, and require line profiling to be switched on. Expected improvements are estimated for the entire program using Amdahl's law (Amdahl 1967); note that calculations are subject to the scaling of the problem at profiling. The table output aims to answer whether it is worthwhile to spend hours of time optimizing bits of code (e.g. refactoring in C) and, additionally, identifies where these efforts should be focused. Using aprof one can get estimates of the maximum possible gain. Such considerations are important when one wishes to balance development time vs. execution time. All predictions are subject to the scaling of the problem.

Author(s)

<NAME>

References

Amdahl, Gene (1967). Validity of the Single Processor Approach to Achieving Large-Scale Computing Capabilities. AFIPS Conference Proceedings (30): 483-485.

targetedSummary    targetedSummary

Description

Allows a detailed look into certain lines of code, which have previously been identified as bottlenecks, in combination with a source file.

Usage

targetedSummary(target = NULL, aprofobject = NULL, findParent = FALSE)

Arguments

target  The specific line of code to take a detailed look at. This can be identified using summary.aprof.
aprofobject  object of class "aprof" returned by the function aprof.
findParent  Logical, should an attempt be made to find the parent of a function call? E.g. "lm" would be a parent call of "lm.fit" or "mean" a parent call of "mean.default". Note that currently, the option only returns the most frequently associated parent call when multiple unique parents exist.

Author(s)

<NAME>
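The Amdahl projection behind summary.aprof fits in a few lines: if a line of code accounts for a fraction f of total runtime and is made S times faster, the whole program speeds up by at most 1 / ((1 - f) + f/S). A minimal sketch of that formula follows (the function name and interface are illustrative, not part of the package):

```python
def amdahl_speedup(f, s):
    """Projected whole-program speedup when the profiled fraction f
    of total runtime is made s times faster (Amdahl 1967)."""
    if not 0.0 <= f <= 1.0 or s <= 0:
        raise ValueError("f must be in [0, 1] and s > 0")
    return 1.0 / ((1.0 - f) + f / s)

# A line consuming half the runtime, sped up 4x, yields at most
# amdahl_speedup(0.5, 4) == 1.6, i.e. a 1.6x faster program overall.
```

As f shrinks, even an infinite local speedup buys little, which is exactly the argument the summary table makes for focusing optimization effort on the few hottest lines.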
# 𝓫𝓲𝓵𝓲𝓵𝓲

🍻 bilibili video and danmaku downloader | B 站视频、匹幕䞋蜜噚

Ready to go!

* Multi-threaded + chunked downloading. In short: fast!
* Danmaku is downloaded automatically and can be converted to ass subtitles
* Even a download that did not finish in one go can be resumed

## URLs I support

Well, I am rather picky. At the moment I only support the following kinds of video URLs:

* Uploaded video main page: `https://www.bilibili.com/video/avxxxxxx`, av-number URLs are supported
* `https://b23.tv/avxxxxxx`, short links can be considered too
* `https://www.bilibili.com/video/BVxxxxxx`, the newer BV numbers are fine as well
* `https://b23.tv/BVxxxxxx`, and of course their short links
* Bangumi main page: `https://www.bilibili.com/bangumi/media/mdxxxxxx`, bangumi main pages are naturally fine
* `https://www.bilibili.com/bangumi/play/ssxxxxxx`, bangumi playback pages with ss numbers also work
* `https://b23.tv/ssxxxxxx`, including their short links
* `https://www.bilibili.com/bangumi/play/epxxxxxx`, bangumi playback pages with ep numbers are fine too
* `https://b23.tv/epxxxxxx`, and naturally their short links as well

## My interpreter: Python 3.8+

To communicate with me properly, you first need to install Python. It must definitely be version 3.8 or above, otherwise she may not understand what I am saying.

If you are on Windows, please download and install it yourself from the official Python website. Remember to tick the "Add to PATH" option during installation, otherwise you may need to add it to your environment variables by hand.

macOS and Linux distributions generally ship with a Python environment, but mind the version.

## My dependency: FFmpeg

Since most of the videos B 站 gives me need to be merged, I need the help of my little friend FFmpeg. You need to install her on your computer beforehand.

If your operating system is Windows, it is a bit more troublesome: you need to download her manually and put her on your PATH.

Detailed steps: after opening the download link, in the "Get packages & executable files" section choose the Windows icon, find "Windows builds by BtbN" under "Windows EXE Files" and click it. This jumps to a GitHub Releases page, where the latest build is listed under "Latest release".

After downloading, unzip it and put it somewhere safe, then find `ffmpeg.exe` in the folder and copy the path of the folder it lives in.

Right-click "This PC", choose Properties, find "Advanced system settings" → "Environment Variables", double-click PATH and add the path you just copied (on systems other than Win10 the steps differ slightly; please look up "setting environment variables" yourself).

Save, save, and done!

Of course, if you are on macOS or a Linux distribution, your own package manager finishes the whole process in one step. For example, macOS can use `brew install ffmpeg`, Ubuntu can use `apt install ffmpeg`, and Manjaro and other Arch-likes can use `pacman -S ffmpeg`. Most of them are very simple, so I will not list the rest one by one.

You can now test the installation with the `ffmpeg -version` command in a terminal. As long as it does not print something like `Command not found`, that means... success!

## Summoning 𝓫𝓲𝓵𝓲𝓵𝓲

Time for my grand entrance! You can summon me in either of the following two ways:

### Copying my image via pip

Since I have already placed a copy of myself on PyPI, you can copy that image onto your own computer via pip:

`pip install bilili`

### Copying my true form via git

If you want to meet the latest version of me, you need to clone me from GitHub:

```
git clone <EMAIL>:yutto-dev/bilili.git
cd bilili/
pip install .
```

Whichever way you install, the `bilili -v` command should no longer print anything like `Command not found` afterwards.

## Getting to work

Everything is ready. Please assign me a task!

Of course, you can only assign tasks I am able to complete, that is, URLs in the formats I support. Assigning work is very simple: `bilili <url>`, where `<url>` is replaced by one of the URLs described earlier.

For example, to download my demo video, you only need:

```
bilili https://www.bilibili.com/video/BV1vZ4y1M7mQ
```

To download the bangumi "That Time I Got Reincarnated as a Slime", you only need:

```
bilili https://www.bilibili.com/bangumi/media/md139252/
```

If everything is configured correctly, I should now start working normally. And if you want to learn about more of my abilities, please consult the section on command-line options.

## Using PotPlayer

PotPlayer is a very powerful player for Windows, and the playlist format I generate by default is PotPlayer's own `dpl` playlist format; you can open it directly with PotPlayer.

Of course, I will not force you to use PotPlayer (other systems do not even have it), so on other systems please change the format with the `--playlist-type` option.

## Choosing a terminal

Please use a terminal that supports emoji whenever possible. Otherwise, the messages I send you may come out garbled ("mojibake"), although this will not affect the download itself.

On Windows, "Windows Terminal" is rather recommended; or, if you have an editor with a built-in terminal such as VS Code, you can use its terminal directly.

## Using resumable downloads

Since I support resuming interrupted downloads, you do not need to worry about the process being cut short. You can press `Ctrl + C` at any moment, and the next time you start you only need to rerun the previous command.

You may also change some of the options when restarting, but because resuming relies on the size of the already-downloaded portion and simply continues from there, if you stop a download halfway and change the `type`, `block-size` or `quality` options, the content of the second download will be completely different from the first, yet the resume mechanism will still force the pieces to be stitched together. To avoid this problem, delete the downloaded portion before changing those options, or simply add the `overwrite` option to have that done automatically.

## Upgrading

If you installed me via pip, you only need:

```
pip install --upgrade bilili
```

If you installed via git, simply rerun the command you used when installing.

## Defining a command alias

You probably do not want to type all sorts of options every time you run bilili, so I suggest recording your usual options in an alias. For example, this is what Nyakku does:

```
alias bll='bilili -d ~/Movies/bilili/ -c `cat ~/.sessdata` --disable-proxy --danmaku=ass --playlist-type=m3u -y --use-mirrors'
```

Since Nyakku uses zsh, saving it in `~/.zshrc` is enough; if you use bash, save it in `~/.bashrc`. And Nyakku stores the SESSDATA from her own cookie in `~/.sessdata`, so each time she only needs to run `bll`, saving the options that define the download directory, the cookie, and so on.

It is time you learned how I work.

## URL parsing

At the very beginning, I parse the type of your URL and accordingly feed it into the bangumi API parser or the acg_video API parser.

## List fetching

Each parser extracts the key information from the current URL and fetches the whole playlist through the relevant Bilibili APIs. For bangumi this is naturally every episode of that season, while for uploaded videos it is the information of each part (P).

## Video link fetching

Going further, the links of each video are obtained through the relevant Bilibili APIs.

Several formats are currently available for the video links. B 站 originally used a Flash player, which naturally used `flv`-format video; B 站's `flv` videos are mostly segmented, so they need to be merged after downloading. Later, when B 站 adopted the HTML5 player, it apparently still used the flv format, presumably via the flv API. The current HTML5 player returns `m4s`-format files organized in the dash fashion: one is the audio file, and the other is naturally the video file. Besides these, an `mp4`-format file can also be requested for uploaded videos, but its quality is generally not very high and cannot be chosen, so it is rather limited.

## Danmaku and subtitle fetching

Danmaku is of course indispensable when watching B 站 videos, so I automatically download danmaku in xml format for you. Some videos have subtitles, and those are downloaded as well.

## Video downloading

At this point the real URL of every video has been obtained, so downloading can start right away!

To increase download speed, I conjure up several clones (worker threads) at once. In addition, I cut each video into small blocks and hand each block to a clone to download. Of course, once all blocks of a video have been downloaded they need to be merged, and that is done by the clone that downloaded the last block. I also assign three more clones to the merging of video segments: when all segments of a video have been downloaded, they are notified to merge them.

What am I doing in the meantime, you ask? I supervise them from the sidelines, and report their progress to you. Hehe!

## A video fails to resolve during parsing

When some video turns out to be undownloadable during parsing, and the reason shown is "got nothing at all", simply restarting the program usually solves it. If the reason is something else, please investigate that specific case; if you genuinely do not have permission to access the video, use the episode-selection option to skip it, and the remaining videos can then be downloaded.

## It always crashes after parsing for a while

The network is probably poor. You can add the `-p` option to download one episode at a time, and run it a few more times.

## Getting `requests.exceptions.ProxyError`

Your system proxy is enabled, which causes some problems, so please bypass the system proxy with the `--disable-proxy` option.

## Can interactive videos be downloaded?

Not for the moment, and it is not planned for the current stage, because the local experience is bound to be worse than watching directly on B 站. I suggest simply watching them on B 站!

## Can the `- bilibili` directory be omitted?

Currently not either, but bilili v2 (yutto) provides a much more flexible way to configure paths. Give v2 a try if you are interested!

Before raising a question, please make sure you have roughly skimmed the documentation once; it saves some time for both of us. Of course, if you have any questions about the documentation's organization, content, and so on, you are welcome to raise them or contribute.

If you have some new ideas, you can start a discussion on GitHub. If you are sure you have found a bug, or want to request a new feature, you can also open an Issue directly. Besides that, you can reach Nyakku by email or on Telegram; it is advisable to get straight to the point and ask for as little unnecessary information as possible.

My job is merely to carry B 站 videos onto your computer, nothing more! But there are some things you should know.

I will not help you download anything you have no permission to access, so I am not some cracking program. If something needs a premium membership, go and subscribe properly; that is a way of supporting B 站. So please do not make impossible demands of me, now or ever.

What I download only means that you have permission to fetch it. Please do not share what I fetch casually and damage the interests of the platform and its creators. If you do, do not blame me for being heartless: you alone bear all the consequences.

Also, my own open-source license is GPL-3.0, partly because I need the help of my two seniors FFmpeg and danmaku2ass, and partly to preserve B 站's own interests as much as possible. My documentation, however, uses the CC0-1.0 license, and the api submodule uses the MIT license.

## The platform and its creators

Thanks to the bilibili platform and the countless creators of excellent content on it.

## Dependencies

My normal operation is inseparable from the support of the following projects:

* FFmpeg, used to merge videos
* danmaku2ass, used to convert xml danmaku into ass danmaku
* VuePress, the generator of this documentation
* vuepress-theme-vt, this documentation's theme

## Reference projects

I received help from the following projects while exploring:

* Bilibili - 1033020837, a reference project during early exploration
* BilibiliVideoDownload - blogwy, where I learned about more quality levels (120, 112, etc.)
* BiliUtil, referenced for the extra parameters needed to fetch 4k quality, and for the way of packaging for PyPI

## Cloud services

## Contributors

Thanks to every contributor for their hard work!

## Sponsors

Thanks to every sponsor for their financial support!

First of all, since I merely carry B 站 videos onto your computer, which is a very simple job, please give your greatest gratitude to the platform and the creators.

If you want to support me, giving me a "Star" on the GitHub project page is the greatest encouragement.

Beyond that, if you would like to give Nyakku some financial support to motivate Nyakku's development, you can do so in the following ways:

## One-time sponsorship

You can provide Nyakku with a development fund via Alipay or WeChat.

## Recurring sponsorship

You can provide Nyakku with monthly financial support via Patreon or Afdian (爱发电), to encourage Nyakku to create more interesting and useful open-source projects.

I will treasure your sponsorship of any amount, and will credit your GitHub ID in the project acknowledgements (please note your GitHub ID when sponsoring; if you forget to leave an ID after sponsoring, you can contact me).
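The chunked, multi-threaded download scheme described earlier (cut each video into blocks, hand each block to a worker, then reassemble) can be sketched roughly like this. This is a simplified illustration, not bilili's actual code; `fetch_range` stands in for an HTTP Range request (`bytes=start-end`) and here just slices an in-memory source:

```python
from concurrent.futures import ThreadPoolExecutor

def split_chunks(total_size, block_size):
    """Plan inclusive (start, end) byte ranges covering total_size bytes."""
    return [(s, min(s + block_size, total_size) - 1)
            for s in range(0, total_size, block_size)]

def fetch_range(source, start, end):
    # Stand-in for an HTTP Range request; a real client would send
    # a "Range: bytes=start-end" header here.
    return source[start:end + 1]

def download(source, block_size=4, workers=3):
    """Fetch all blocks concurrently and reassemble them in order."""
    chunks = split_chunks(len(source), block_size)
    buf = bytearray(len(source))

    def work(rng):
        start, end = rng
        # Each worker writes its own non-overlapping slice of the buffer.
        buf[start:end + 1] = fetch_range(source, start, end)

    with ThreadPoolExecutor(max_workers=workers) as pool:
        list(pool.map(work, chunks))
    return bytes(buf)
```

Because each worker owns a disjoint byte range, no locking is needed when writing into the shared buffer, and resuming simply means skipping ranges that are already on disk.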
Struct deno_webidl::deno_webidl
===

```
pub struct deno_webidl {}
```

An extension for use with the Deno JS runtime. To use it, provide it as an argument when instantiating your runtime:

```
use deno_core::{ JsRuntime, RuntimeOptions };

let mut extensions = vec![deno_webidl::init_ops_and_esm()];
let mut js_runtime = JsRuntime::new(RuntimeOptions {
  extensions,
  ..Default::default()
});
```

Implementations
---

### impl deno_webidl

#### pub fn init_js_only() -> Extension

👎Deprecated since 0.216.0: please use `init_ops_and_esm` or `init_ops` instead

Legacy function for extension instantiation. Please use `init_ops_and_esm` or `init_ops` instead.

##### Returns an Extension object that can be used during instantiation of a JsRuntime

#### pub fn init_ops_and_esm() -> Extension

Initialize this extension for runtime or snapshot creation. Use this function if the runtime or snapshot is not created from a (separate) snapshot, or that snapshot does not contain this extension. Otherwise use `init_ops()` instead.

##### Returns an Extension object that can be used during instantiation of a JsRuntime

#### pub fn init_ops() -> Extension

Initialize this extension for runtime or snapshot creation, excluding its JavaScript sources and evaluation. This is used when the runtime or snapshot is created from a (separate) snapshot which includes this extension, in order to avoid evaluating the JavaScript twice.

##### Returns an Extension object that can be used during instantiation of a JsRuntime

Auto Trait Implementations
---

### impl RefUnwindSafe for deno_webidl

### impl Send for deno_webidl

### impl Sync for deno_webidl

### impl Unpin for deno_webidl

### impl UnwindSafe for deno_webidl

Blanket Implementations
---

### impl<T> Any for T where T: 'static + ?Sized,

#### fn type_id(&self) -> TypeId

Gets the `TypeId` of `self`.

### impl<T> Borrow<T> for T where T: ?Sized,

#### fn borrow(&self) -> &T

Immutably borrows from an owned value.

### impl<T> BorrowMut<T> for T where T: ?Sized,

#### fn borrow_mut(&mut self) -> &mut T

Mutably borrows from an owned value.

### impl<T> From<T> for T

#### fn from(t: T) -> T

Returns the argument unchanged.

### impl<T, U> Into<U> for T where U: From<T>,

#### fn into(self) -> U

Calls `U::from(self)`. That is, this conversion is whatever the implementation of `From<T> for U` chooses to do.

### impl<T, U> TryFrom<U> for T where U: Into<T>,

#### type Error = Infallible

The type returned in the event of a conversion error.

#### fn try_from(value: U) -> Result<T, <T as TryFrom<U>>::Error>

Performs the conversion.

### impl<T, U> TryInto<U> for T where U: TryFrom<T>,

#### type Error = <U as TryFrom<T>>::Error

The type returned in the event of a conversion error.

#### fn try_into(self) -> Result<U, <U as TryFrom<T>>::Error>

Performs the conversion.
[![Gem Version](https://badge.fury.io/rb/gravity_mailbox.svg)](https://badge.fury.io/rb/gravity_mailbox) [![Ruby](https://github.com/petalmd/gravity_mailbox/actions/workflows/main.yml/badge.svg)](https://github.com/petalmd/gravity_mailbox/actions/workflows/main.yml)

![](https://user-images.githubusercontent.com/7858787/213794938-f55aef73-ce49-45b5-a388-d16f2435de15.png)

A development tool that makes it simple to visualize mail sent by your Rails app, directly through your Rails app. It works in development and also in a staging environment by using the `Rails.cache` to store the mails.

Installation
---

Add this line to your application's Gemfile:

```
gem 'gravity_mailbox'
```

And then execute:

```
$ bundle install
```

Or install it yourself as:

```
$ gem install gravity_mailbox
```

Usage
---

* Configure ActionMailer to use the RailsCacheDeliveryMethod:

```
# config/environments/(development|staging).rb
config.action_mailer.delivery_method = :gravity_mailbox_rails_cache
config.action_mailer.perform_deliveries = true
```

* Mount the Engine:

```
# config/routes.rb
mount GravityMailbox::Engine => "/gravity_mailbox"
```

* Send mails
* Go to <http://localhost:3000/gravity_mailbox> to see the mails.

**Note** You can use routing constraints to restrict access to GravityMailbox. More details and examples in the [Rails Guides](https://guides.rubyonrails.org/routing.html#advanced-constraints).

Screenshots
---

![image](https://user-images.githubusercontent.com/7858787/213796119-a22ac9da-3943-4cd0-95e6-2fb724de999a.png)

Development
---

After checking out the repo, run `bin/setup` to install dependencies. Then, run `rake spec` to run the tests. You can also run `bin/console` for an interactive prompt that will allow you to experiment.

To install this gem onto your local machine, run `bundle exec rake install`.
To release a new version, update the version number in `version.rb`, and then run `bundle exec rake release`, which will create a git tag for the version, push git commits and the created tag, and push the `.gem` file to [rubygems.org](https://rubygems.org).

### Standalone app

Use `bundle exec rackup` to test the gem locally. Go to <http://localhost:9292>.

Use the button to send a test mail and to see those mails in GravityMailbox.

Contributing
---

Bug reports and pull requests are welcome on GitHub at <https://github.com/petalmd/gravity_mailbox>.

License
---

The gem is available as open source under the terms of the [MIT License](https://opensource.org/licenses/MIT).
This project is part of the [@thi.ng/umbrella](https://github.com/thi-ng/umbrella/) monorepo.

* [About](#about)
  + [Status](#status)
    - [HIC SUNT DRACONES](#hic-sunt-dracones)
* [Installation](#installation)
* [Dependencies](#dependencies)
* [Usage examples](#usage-examples)
* [API](#api)
* [Authors](#authors)
* [License](#license)

About
---

Experimental exploration for a new [@thi.ng/hdom](https://github.com/thi-ng/umbrella/tree/feature/hdom2020/packages/hdom) w/ entirely new, largely reactive & diff-less approach. WARNING: Your existing code WILL break!

### Status

**ALPHA** - bleeding edge / work-in-progress

#### HIC SUNT DRACONES

Pretty much **everything** here is still in a state of flux (without warning!) and merely shared for those brave souls who'd like to be part of the journey, even if just to provide early feedback and such... :)

Installation
---

```
yarn add @thi.ng/hdom2020
```

```
// ES module
<script type="module" src="https://unpkg.com/@thi.ng/hdom2020?module" crossorigin></script>

// UMD
<script src="https://unpkg.com/@thi.ng/hdom2020/lib/index.umd.js" crossorigin></script>
```

Package sizes (gzipped, pre-treeshake): ESM: 3.78 KB / CJS: 3.93 KB / UMD: 3.92 KB

Dependencies
---

* [@thi.ng/adapt-dpi](https://github.com/thi-ng/umbrella/tree/feature/hdom2020/packages/adapt-dpi)
* [@thi.ng/api](https://github.com/thi-ng/umbrella/tree/feature/hdom2020/packages/api)
* [@thi.ng/atom](https://github.com/thi-ng/umbrella/tree/feature/hdom2020/packages/atom)
* [@thi.ng/checks](https://github.com/thi-ng/umbrella/tree/feature/hdom2020/packages/checks)
* [@thi.ng/errors](https://github.com/thi-ng/umbrella/tree/feature/hdom2020/packages/errors)
* [@thi.ng/hiccup](https://github.com/thi-ng/umbrella/tree/feature/hdom2020/packages/hiccup)
* [@thi.ng/hiccup-canvas](https://github.com/thi-ng/umbrella/tree/feature/hdom2020/packages/hiccup-canvas)
* [@thi.ng/paths](https://github.com/thi-ng/umbrella/tree/feature/hdom2020/packages/paths)
* [@thi.ng/rstream](https://github.com/thi-ng/umbrella/tree/feature/hdom2020/packages/rstream)

Usage examples
---

Several demos in this repo's [/examples](https://github.com/thi-ng/umbrella/tree/feature/hdom2020/examples) directory are using this package. A selection:

| Description | Live demo | Source |
| --- | --- | --- |
| hdom2020 test sandbox / POC | [Demo](https://demo.thi.ng/umbrella/hdom2020-basics/) | [Source](https://github.com/thi-ng/umbrella/tree/feature/hdom2020/examples/hdom2020-basics) |
| hdom2020 drag & drop example | [Demo](https://demo.thi.ng/umbrella/hdom2020-dnd/) | [Source](https://github.com/thi-ng/umbrella/tree/feature/hdom2020/examples/hdom2020-dnd) |
| hdom2020 & hiccup-canvas interop test | [Demo](https://demo.thi.ng/umbrella/hdom2020-lissajous/) | [Source](https://github.com/thi-ng/umbrella/tree/feature/hdom2020/examples/hdom2020-lissajous) |
| Full umbrella repo doc string search w/ paginated results | [Demo](https://demo.thi.ng/umbrella/hdom2020-search-docs/) | [Source](https://github.com/thi-ng/umbrella/tree/feature/hdom2020/examples/hdom2020-search-docs) |

API
---

[Generated API docs](https://docs.thi.ng/umbrella/hdom2020/)

TODO

Authors
---

<NAME>

License
---

© 2020 <NAME> // Apache Software License 2.0

Readme
---

### Keywords

* aot
* async
* canvas
* css
* dom
* es6
* components
* frontend
* hiccup
* html
* reactive
* rstream
* svg
* typescript
* ui
Package ‘spBayes’ October 14, 2022

Version 0.4-6
Date 2022-05-12
Title Univariate and Multivariate Spatial-Temporal Modeling
Maintainer <NAME> <<EMAIL>>
Author <NAME> [aut, cre], <NAME> [aut]
Depends R (>= 1.8.0)
Imports coda, sp, magic, Formula, Matrix
Suggests MBA
Description Fits univariate and multivariate spatio-temporal random effects models for point-referenced data using Markov chain Monte Carlo (MCMC). Details are given in Finley, Banerjee, and Gelfand (2015) <doi:10.18637/jss.v063.i13> and Finley and Banerjee <doi:10.1016/j.envsoft.2019.104608>.
License GPL (>= 2)
URL https://www.finley-lab.com
Repository CRAN
NeedsCompilation yes
Date/Publication 2022-05-17 17:00:02 UTC

R topics documented:
adaptMetropGibbs
bayesGeostatExact
bayesLMConjugate
bayesLMRef
BEF.dat
FBC07.dat
FORMGMT.dat
iDist
mkMvX
mkSpCov
NETemp.dat
NYOzone.dat
PM10.dat
PM10.poly
pointsInPoly
spDiag
spDynLM
spGLM
spLM
spMisalignGLM
spMisalignLM
spMvGLM
spMvLM
spPredict
spRecover
spSVC
SVCMvData.dat
WEF.dat
Zurich.dat

adaptMetropGibbs    Adaptive Metropolis within Gibbs algorithm

Description

Markov chain Monte Carlo for a continuous random vector using an adaptive Metropolis within Gibbs algorithm.

Usage

adaptMetropGibbs(ltd, starting, tuning=1, accept.rate=0.44,
                 batch=1, batch.length=25, report=100,
                 verbose=TRUE, ...)

Arguments

ltd  an R function that evaluates the log target density of the desired equilibrium distribution of the Markov chain. The first argument is the starting value vector of the Markov chain. Pass variables used in the ltd via the ... argument of adaptMetropGibbs.
starting  a real vector of parameter starting values.
tuning  a scalar or vector of initial Metropolis tuning values. The vector must be of length(starting). If a scalar is passed then it is expanded to length(starting).
accept.rate  a scalar or vector of target Metropolis acceptance rates. The vector must be of length(starting). If a scalar is passed then it is expanded to length(starting).
batch  the number of batches.
batch.length  the number of sampler iterations in each batch.
report  the number of batches between acceptance rate reports.
verbose  if TRUE, progress of the sampler is printed to the screen. Otherwise, nothing is printed to the screen.
...  currently no additional arguments.

Value

A list with the following tags:

p.theta.samples  a coda object of posterior samples for the parameters.
acceptance  the Metropolis acceptance rate at the end of each batch.
ltd  ltd
accept.rate  accept.rate
batch  batch
batch.length  batch.length
proc.time  the elapsed CPU and wall time (in seconds).

Note

This function is a rework of Rosenthal (2007) with some added niceties.

Author(s)

<NAME> <<EMAIL>>, <NAME> <<EMAIL>>

References

<NAME>. and <NAME>. (2006). Examples of Adaptive MCMC. http://probability.ca/jeff/ftpdir/adaptex.pdf Preprint.

Rosenthal J.S. (2007). AMCMC: An R interface for adaptive MCMC. Computational Statistics and Data Analysis. 51:5467-5470.

Examples

## Not run:
rmvn <- function(n, mu=0, V = matrix(1)){
  p <- length(mu)
  if(any(is.na(match(dim(V),p)))) stop("Dimension problem!")
  D <- chol(V)
  t(matrix(rnorm(n*p), ncol=p)%*%D + rep(mu,rep(n,p)))
}

###########################
##Fit a spatial regression
###########################
set.seed(1)
n <- 50
x <- runif(n, 0, 100)
y <- runif(n, 0, 100)
D <- as.matrix(dist(cbind(x, y)))
phi <- 3/50
sigmasq <- 50
tausq <- 20
mu <- 150
s <- (sigmasq*exp(-phi*D))
w <- rmvn(1, rep(0, n), s)
Y <- rmvn(1, rep(mu, n) + w, tausq*diag(n))
X <- as.matrix(rep(1, length(Y)))

##Priors
##IG sigma^2 and tau^2
a.sig <- 2
b.sig <- 100
a.tau <- 2
b.tau <- 100

##Unif phi
a.phi <- 3/100
b.phi <- 3/1

##Functions used to transform phi to continuous support.
logit <- function(theta, a, b){log((theta-a)/(b-theta))} logit.inv <- function(z, a, b){b-(b-a)/(1+exp(z))} ##Metrop. target target <- function(theta){ mu.cand <- theta[1] sigmasq.cand <- exp(theta[2]) tausq.cand <- exp(theta[3]) phi.cand <- logit.inv(theta[4], a.phi, b.phi) Sigma <- sigmasq.cand*exp(-phi.cand*D)+tausq.cand*diag(n) SigmaInv <- chol2inv(chol(Sigma)) logDetSigma <- determinant(Sigma, log=TRUE)$modulus[1] out <- ( ##Priors -(a.sig+1)*log(sigmasq.cand) - b.sig/sigmasq.cand -(a.tau+1)*log(tausq.cand) - b.tau/tausq.cand ##Jacobians +log(sigmasq.cand) + log(tausq.cand) +log(phi.cand - a.phi) + log(b.phi -phi.cand) ##Likelihood -0.5*logDetSigma-0.5*(t(Y-X%*%mu.cand)%*%SigmaInv%*%(Y-X%*%mu.cand)) ) return(out) } ##Run a couple chains n.batch <- 500 batch.length <- 25 inits <- c(0, log(1), log(1), logit(3/10, a.phi, b.phi)) chain.1 <- adaptMetropGibbs(ltd=target, starting=inits, batch=n.batch, batch.length=batch.length, report=100) inits <- c(500, log(100), log(100), logit(3/90, a.phi, b.phi)) chain.2 <- adaptMetropGibbs(ltd=target, starting=inits, batch=n.batch, batch.length=batch.length, report=100) ##Check out acceptance rate just for fun plot(mcmc.list(mcmc(chain.1$acceptance), mcmc(chain.2$acceptance))) ##Back transform chain.1$p.theta.samples[,2] <- exp(chain.1$p.theta.samples[,2]) chain.1$p.theta.samples[,3] <- exp(chain.1$p.theta.samples[,3]) chain.1$p.theta.samples[,4] <- 3/logit.inv(chain.1$p.theta.samples[,4], a.phi, b.phi) chain.2$p.theta.samples[,2] <- exp(chain.2$p.theta.samples[,2]) chain.2$p.theta.samples[,3] <- exp(chain.2$p.theta.samples[,3]) chain.2$p.theta.samples[,4] <- 3/logit.inv(chain.2$p.theta.samples[,4], a.phi, b.phi) par.names <- c("mu", "sigma.sq", "tau.sq", "effective range (-log(0.05)/phi)") colnames(chain.1$p.theta.samples) <- par.names colnames(chain.2$p.theta.samples) <- par.names ##Discard burn.in and plot and do some convergence diagnostics chains <- mcmc.list(mcmc(chain.1$p.theta.samples), mcmc(chain.2$p.theta.samples)) 
plot(window(chains, start=as.integer(0.5*n.batch*batch.length))) gelman.diag(chains) ########################## ##Example of fitting a ##a non-linear model ########################## ##Example of fitting a non-linear model set.seed(1) ######################################################## ##Simulate some data. ######################################################## a <- 0.1 #-Inf < a < Inf b <- 0.1 #b > 0 c <- 0.2 #c > 0 tau.sq <- 0.1 #tau.sq > 0 fn <- function(a,b,c,x){ a+b*exp(x/c) } n <- 200 x <- seq(0,1,0.01) y <- rnorm(length(x), fn(a,b,c,x), sqrt(tau.sq)) ##check out your data plot(x, y) ######################################################## ##The log target density ######################################################## ##Define the log target density used in the Metrop. ltd <- function(theta){ ##extract and transform as needed a <- theta[1] b <- exp(theta[2]) c <- exp(theta[3]) tau.sq <- exp(theta[4]) y.hat <- fn(a, b, c, x) ##likelihood logl <- sum(dnorm(y, y.hat, sqrt(tau.sq), log=TRUE)) ##priors IG on tau.sq and normal on a and transformed b, c, d logl <- (logl -(IG.a+1)*log(tau.sq)-IG.b/tau.sq +sum(dnorm(theta[1:3], N.mu, N.v, log=TRUE)) ) ##Jacobian adjustment for tau.sq logl <- logl+log(tau.sq) return(logl) } ######################################################## ##The rest ######################################################## ##Priors IG.a <- 2 IG.b <- 0.01 N.mu <- 0 N.v <- 10 theta.tuning <- c(0.01, 0.01, 0.005, 0.01) ##Run three chains with different starting values n.batch <- 1000 batch.length <- 25 theta.starting <- c(0, log(0.01), log(0.6), log(0.01)) run.1 <- adaptMetropGibbs(ltd=ltd, starting=theta.starting, tuning=theta.tuning, batch=n.batch, batch.length=batch.length, report=100) theta.starting <- c(1.5, log(0.05), log(0.5), log(0.05)) run.2 <- adaptMetropGibbs(ltd=ltd, starting=theta.starting, tuning=theta.tuning, batch=n.batch, batch.length=batch.length, report=100) theta.starting <- c(-1.5, log(0.1), log(0.4), log(0.1)) run.3 
<- adaptMetropGibbs(ltd=ltd, starting=theta.starting, tuning=theta.tuning, batch=n.batch, batch.length=batch.length, report=100)

##Back transform
samples.1 <- cbind(run.1$p.theta.samples[,1], exp(run.1$p.theta.samples[,2:4]))
samples.2 <- cbind(run.2$p.theta.samples[,1], exp(run.2$p.theta.samples[,2:4]))
samples.3 <- cbind(run.3$p.theta.samples[,1], exp(run.3$p.theta.samples[,2:4]))
samples <- mcmc.list(mcmc(samples.1), mcmc(samples.2), mcmc(samples.3))

##Summary
plot(samples, density=FALSE)
gelman.plot(samples)
burn.in <- 5000
fn.pred <- function(theta,x){
  a <- theta[1]
  b <- theta[2]
  c <- theta[3]
  tau.sq <- theta[4]
  rnorm(length(x), fn(a,b,c,x), sqrt(tau.sq))
}
post.curves <- t(apply(samples.1[burn.in:nrow(samples.1),], 1, fn.pred, x))
post.curves.quants <- summary(mcmc(post.curves))$quantiles
plot(x, y, pch=19, xlab="x", ylab="f(x)")
lines(x, post.curves.quants[,1], lty="dashed", col="blue")
lines(x, post.curves.quants[,3])
lines(x, post.curves.quants[,5], lty="dashed", col="blue")
## End(Not run)

bayesGeostatExact    Simple Bayesian spatial linear model with fixed semivariogram parameters

Description

Given observation coordinates and fixed semivariogram parameters, the bayesGeostatExact function fits a simple Bayesian spatial linear model.

Usage

bayesGeostatExact(formula, data = parent.frame(), n.samples,
                  beta.prior.mean, beta.prior.precision,
                  coords, cov.model="exponential", phi, nu, alpha,
                  sigma.sq.prior.shape, sigma.sq.prior.rate,
                  sp.effects=TRUE, verbose=TRUE, ...)

Arguments

formula  for a univariate model, this is a symbolic description of the regression model to be fit. See example below.
data  an optional data frame containing the variables in the model. If not found in data, the variables are taken from environment(formula), typically the environment from which spLM is called.
n.samples  the number of posterior samples to collect.
beta.prior.mean  β multivariate normal mean vector hyperprior.
beta.prior.precision  β multivariate normal precision matrix hyperprior.
coords  an n × 2 matrix of the observation coordinates in R^2 (e.g., easting and northing).
cov.model  a quoted key word that specifies the covariance function used to model the spatial dependence structure among the observations. Supported covariance model key words are: "exponential", "matern", "spherical", and "gaussian". See below for details.
phi  the fixed value of the spatial decay.
nu  if cov.model is "matern" then the fixed value of the spatial process smoothness must be specified.
alpha  the fixed value of the ratio between the nugget τ^2 and partial-sill σ^2 parameters from the specified cov.model.
sigma.sq.prior.shape  σ^2 (i.e., partial-sill) inverse-Gamma shape hyperprior.
sigma.sq.prior.rate  σ^2 (i.e., partial-sill) inverse-Gamma 1/scale hyperprior.
sp.effects  a logical value indicating if spatial random effects should be recovered.
verbose  if TRUE, model specification and progress of the sampler is printed to the screen. Otherwise, nothing is printed to the screen.
...  currently no additional arguments.

Value

An object of class bayesGeostatExact, which is a list with the following tags:

p.samples  a coda object of posterior samples for the defined parameters.
sp.effects  a matrix that holds samples from the posterior distribution of the spatial random effects. The rows of this matrix correspond to the n point observations and the columns are the posterior samples.
args  a list with the initial function arguments.
Author(s) <NAME> <<EMAIL>>, <NAME> <<EMAIL>> Examples ## Not run: data(FBC07.dat) Y <- FBC07.dat[1:150,"Y.2"] coords <- as.matrix(FBC07.dat[1:150,c("coord.X", "coord.Y")]) n.samples <- 500 n = length(Y) p = 1 phi <- 0.15 nu <- 0.5 beta.prior.mean <- as.matrix(rep(0, times=p)) beta.prior.precision <- matrix(0, nrow=p, ncol=p) alpha <- 5/5 sigma.sq.prior.shape <- 2.0 sigma.sq.prior.rate <- 5.0 ############################## ##Simple linear model with ##the default exponential ##spatial decay function ############################## set.seed(1) m.1 <- bayesGeostatExact(Y~1, n.samples=n.samples, beta.prior.mean=beta.prior.mean, beta.prior.precision=beta.prior.precision, coords=coords, phi=phi, alpha=alpha, sigma.sq.prior.shape=sigma.sq.prior.shape, sigma.sq.prior.rate=sigma.sq.prior.rate) print(summary(m.1$p.samples)) ##Requires MBA package to ##make surfaces library(MBA) par(mfrow=c(1,2)) obs.surf <- mba.surf(cbind(coords, Y), no.X=100, no.Y=100, extend=T)$xyz.est image(obs.surf, xaxs = "r", yaxs = "r", main="Observed response") points(coords) contour(obs.surf, add=T) w.hat <- rowMeans(m.1$sp.effects) w.surf <- mba.surf(cbind(coords, w.hat), no.X=100, no.Y=100, extend=T)$xyz.est image(w.surf, xaxs = "r", yaxs = "r", main="Estimated random effects") points(coords) contour(w.surf, add=T) ############################## ##Simple linear model with ##the matern spatial decay ##function. 
Note, nu=0.5 so ##should produce the same ##estimates as m.1 ############################## set.seed(1) m.2 <- bayesGeostatExact(Y~1, n.samples=n.samples, beta.prior.mean=beta.prior.mean, beta.prior.precision=beta.prior.precision, coords=coords, cov.model="matern", phi=phi, nu=nu, alpha=alpha, sigma.sq.prior.shape=sigma.sq.prior.shape, sigma.sq.prior.rate=sigma.sq.prior.rate) print(summary(m.2$p.samples)) ############################## ##This time with the ##spherical just for fun ############################## m.3 <- bayesGeostatExact(Y~1, n.samples=n.samples, beta.prior.mean=beta.prior.mean, beta.prior.precision=beta.prior.precision, coords=coords, cov.model="spherical", phi=phi, alpha=alpha, sigma.sq.prior.shape=sigma.sq.prior.shape, sigma.sq.prior.rate=sigma.sq.prior.rate) print(summary(m.3$p.samples)) ############################## ##Another example but this ##time with covariates ############################## data(FORMGMT.dat) n = nrow(FORMGMT.dat) p = 5 ##an intercept an four covariates n.samples <- 50 phi <- 0.0012 coords <- cbind(FORMGMT.dat$Longi, FORMGMT.dat$Lat) coords <- coords*(pi/180)*6378 beta.prior.mean <- rep(0, times=p) beta.prior.precision <- matrix(0, nrow=p, ncol=p) alpha <- 1/1.5 sigma.sq.prior.shape <- 2.0 sigma.sq.prior.rate <- 10.0 m.4 <- bayesGeostatExact(Y~X1+X2+X3+X4, data=FORMGMT.dat, n.samples=n.samples, beta.prior.mean=beta.prior.mean, beta.prior.precision=beta.prior.precision, coords=coords, phi=phi, alpha=alpha, sigma.sq.prior.shape=sigma.sq.prior.shape, sigma.sq.prior.rate=sigma.sq.prior.rate) print(summary(m.4$p.samples)) ##Requires MBA package to ##make surfaces library(MBA) par(mfrow=c(1,2)) obs.surf <- mba.surf(cbind(coords, resid(lm(Y~X1+X2+X3+X4, data=FORMGMT.dat))), no.X=100, no.Y=100, extend=TRUE)$xyz.est image(obs.surf, xaxs = "r", yaxs = "r", main="Observed response") points(coords) contour(obs.surf, add=T) w.hat <- rowMeans(m.4$sp.effects) w.surf <- mba.surf(cbind(coords, w.hat), no.X=100, no.Y=100, 
extend=TRUE)$xyz.est
image(w.surf, xaxs = "r", yaxs = "r", main="Estimated random effects")
contour(w.surf, add=T)
points(coords, pch=1, cex=1)

## End(Not run)

bayesLMConjugate        Simple Bayesian linear model via the Normal/inverse-Gamma conjugate

Description

The bayesLMConjugate function fits a simple Bayesian linear model with Normal and inverse-Gamma priors.

Usage

bayesLMConjugate(formula, data = parent.frame(), n.samples,
                 beta.prior.mean, beta.prior.precision,
                 prior.shape, prior.rate, ...)

Arguments

formula   for a univariate model, this is a symbolic description of the regression model to be fit. See example below.

data      an optional data frame containing the variables in the model. If not found in data, the variables are taken from environment(formula), typically the environment from which bayesLMConjugate is called.

n.samples the number of posterior samples to collect.

beta.prior.mean
          β multivariate normal mean vector hyperprior.

beta.prior.precision
          β multivariate normal precision matrix hyperprior.

prior.shape
          τ^2 inverse-Gamma shape hyperprior.

prior.rate
          τ^2 inverse-Gamma 1/scale hyperprior.

...       currently no additional arguments.

Value

An object of class bayesLMConjugate, which is a list with at least the following tag:

p.beta.tauSq.samples
          a coda object of posterior samples for the defined parameters.

Author(s)

<NAME> <<EMAIL>>, <NAME> <<EMAIL>>

Examples

## Not run: 
data(FORMGMT.dat)

n <- nrow(FORMGMT.dat)
p <- 7 ##an intercept and six covariates

n.samples <- 500

## Below we demonstrate the conjugate function in the special case
## with improper priors. The results are the same as for the above,
## up to MC error.
beta.prior.mean <- rep(0, times=p) beta.prior.precision <- matrix(0, nrow=p, ncol=p) prior.shape <- -p/2 prior.rate <- 0 m.1 <- bayesLMConjugate(Y ~ X1+X2+X3+X4+X5+X6, data = FORMGMT.dat, n.samples, beta.prior.mean, beta.prior.precision, prior.shape, prior.rate) summary(m.1$p.beta.tauSq.samples) ## End(Not run) bayesLMRef Simple Bayesian linear model with non-informative priors Description Given a lm object, the bayesLMRef function fits a simple Bayesian linear model with reference (non-informative) priors. Usage bayesLMRef(lm.obj, n.samples, ...) Arguments lm.obj an object returned by lm. n.samples the number of posterior samples to collect. ... currently no additional arguments. Details See page 355 in Gelman et al. (2004). Value An object of class bayesLMRef, which is a list with at least the following tag: p.beta.tauSq.samples a coda object of posterior samples for the defined parameters. Author(s) <NAME> <<EMAIL>>, <NAME> <<EMAIL>> References <NAME>., <NAME>., <NAME>., and <NAME>. (2004). Bayesian Data Analysis. 2nd ed. Boca Raton, FL: Chapman and Hall/CRC Press. Examples ## Not run: set.seed(1) n <- 100 X <- as.matrix(cbind(1, rnorm(n))) B <- as.matrix(c(1,5)) tau.sq <- 0.1 y <- rnorm(n, X%*%B, sqrt(tau.sq)) lm.obj <- lm(y ~ X-1) summary(lm.obj) ##Now with bayesLMRef n.samples <- 500 m.1 <- bayesLMRef(lm.obj, n.samples) summary(m.1$p.beta.tauSq.samples) ## End(Not run) BEF.dat Bartlett Experimental Forest inventory data Description Data generated in long-term research studies on the Bartlett Experimental Forest, Bartlett, NH funded by the U.S. Department of Agriculture, Forest Service, Northeastern Research Station. This dataset holds 1991 and 2002 forest inventory data for 437 points on the BEF.dat. Variables include species specific basal area and biomass; inventory plot coordinates; slope; elevation; and tasseled cap brightness (TC1), greenness (TC2), and wetness (TC3) components from spring, sum- mer, and fall 2002 Landsat images. 
Species specific basal area and biomass are recorded as a fraction of totals. Usage data(BEF.dat) Format A data frame containing 437 rows and 208 columns. Source BEF.dat inventory data provided by: <NAME> USDA Forest Service Northeastern Research Station <<EMAIL>> Additional variables provided by: <NAME> USDA Forest Service Northeastern Research Station <<EMAIL>> FBC07.dat Synthetic multivariate data with spatial and non-spatial variance structures Description The synthetic dataset describes a stationary and isotropic bivariate process. Please refer to the vignette Section 4.2 for specifics. Usage data(FBC07.dat) Format A data frame of 250 rows and 4 columns. Columns 1 and 2 are coordinates and columns 3 and 4 are response variables. Source <NAME>., <NAME>, and <NAME> (2007) spBayes: R package for Univariate and Multivari- ate Hierarchical Point-referenced Spatial Models. Journal of Statistical Software. FORMGMT.dat Data used for illustrations Description Data used for illustrations. Usage data(FORMGMT.dat) iDist Euclidean distance matrix Description Computes the inter-site Euclidean distance matrix for one or two sets of points. Usage iDist(coords.1, coords.2, ...) Arguments coords.1 an n × p matrix with each row corresponding to a point in p dimensional space. coords.2 an m × p matrix with each row corresponding to a point in p dimensional space. If this is missing then coords.1 is used. ... currently no additional arguments. Value The n × n or n × m inter-site Euclidean distance matrix. Author(s) <NAME> <<EMAIL>>, <NAME> <<EMAIL>>, Examples ## Not run: n <- 10 p1 <- cbind(runif(n),runif(n)) m <- 5 p2 <- cbind(runif(m),runif(m)) D <- iDist(p1, p2) ## End(Not run) mkMvX Make a multivariate design matrix Description Given q univariate design matrices, the function mkMvX creates a multivariate design matrix suitable for use in spPredict. Usage mkMvX(X) Arguments X a list of q univariate design matrices. 
The matrices must have the same number of rows (i.e., observations) but may have different numbers of columns (i.e., regressors).

Value

A multivariate design matrix suitable for use in spPredict.

Author(s)

<NAME> <<EMAIL>>, <NAME> <<EMAIL>>.

See Also

spPredict

Examples

## Not run: 
##Define some univariate model design matrices
##with intercepts.
X.1 <- cbind(rep(1, 10), matrix(rnorm(50), nrow=10))
X.2 <- cbind(rep(1, 10), matrix(rnorm(20), nrow=10))
X.3 <- cbind(rep(1, 10), matrix(rnorm(30), nrow=10))

##Make a multivariate design matrix suitable
##for use in spPredict.
X.mv <- mkMvX(list(X.1, X.2, X.3))

## End(Not run)

mkSpCov         Function for calculating univariate and multivariate covariance matrices

Description

The function mkSpCov calculates a spatial covariance matrix given spatial locations and spatial covariance parameters.

Usage

mkSpCov(coords, K, Psi, theta, cov.model)

Arguments

coords    an n×2 matrix of the observation coordinates in R^2 (e.g., easting and northing).

K         the q×q spatial cross-covariance matrix. For a univariate model this corresponds to the partial sill, σ^2.

Psi       the q×q non-spatial covariance matrix. For a univariate model this corresponds to the nugget, τ^2.

theta     a vector of q spatial decay parameters. If cov.model is "matern" then theta is a vector of length 2×q with the spatial decay parameters in the first q elements and the spatial smoothness parameters in the last q elements.

cov.model a quoted keyword that specifies the covariance function used to model the spatial dependence structure among the observations. Supported covariance model keywords are: "exponential", "matern", "spherical", and "gaussian". See below for details.

Details

Covariance functions return the covariance C(h) between a pair of locations separated by distance h. The covariance function can be written as a product of a variance parameter σ^2 and a positive definite correlation function ρ(h): C(h) = σ^2 ρ(h); see, e.g., Banerjee et al. (2004) p. 27 for more details. The expressions of the correlation functions available in spBayes are given below. More will be added upon request.

For all correlation functions, φ is the spatial decay parameter. Some of the correlation functions have an extra parameter ν, the smoothness parameter. K_ν(x) denotes the modified Bessel function of the third kind of order ν. See documentation of the function besselK for further details. The following functions are valid for φ > 0 and ν > 0, unless stated otherwise.

gaussian
    ρ(h) = exp[−(φh)^2]

exponential
    ρ(h) = exp(−φh)

matern
    ρ(h) = (φh)^ν K_ν(φh) / (2^(ν−1) Γ(ν))

spherical
    ρ(h) = 1 − 1.5 φh + 0.5 (φh)^3,  if h < 1/φ
    ρ(h) = 0,                        otherwise

Value

C         the nq×nq spatial covariance matrix.

Author(s)

<NAME> <<EMAIL>>, <NAME> <<EMAIL>>

Examples

## Not run: 
##A bivariate spatial covariance matrix

n <- 2 ##number of locations
q <- 2 ##number of responses at each location
nltr <- q*(q+1)/2 ##number of triangular elements in the cross-covariance matrix

coords <- cbind(runif(n,0,1), runif(n,0,1))

##spatial decay parameters
theta <- rep(6,q)

A <- matrix(0,q,q)
A[lower.tri(A,TRUE)] <- rnorm(nltr, 5, 1)
K <- A%*%t(A)

Psi <- diag(1,q)

C <- mkSpCov(coords, K, Psi, theta, cov.model="exponential")

## End(Not run)

NETemp.dat      Monthly weather station temperature data across the Northeastern US

Description

Monthly temperature data (Celsius) recorded across the Northeastern US starting in January 2000. Station UTM coordinates and elevation are also included.

Usage

data(NETemp.dat)

Format

A data frame containing 356 rows (weather stations) and 132 columns.

NYOzone.dat     Observations of ozone concentration levels.

Description

These data and subsequent description are drawn from the spTimer package (version 0.7).
This data set contains values of daily 8-hour maximum average ozone concentrations (ppb; O3.8HRMAX), maximum temperature (degree Celsius; cMAXTMP), wind speed (knots; WDSP), and relative hu- midity (RM), obtained from 28 monitoring sites in New York, USA, between July 1 and August 31 in 2006. Each row represents a station and columns hold consecutive daily values. Usage data(NYOzone.dat) Format Columns for NYdata: • 1st col = Longitude • 2nd col = Latitude • 3rd col = X coordinates in UTM projection • 4th col = Y coordinates in UTM projection • 5th col = Ozone July 1 (O3.8HRMAX.1) • 6th col = Ozone July 2 (O3.8HRMAX.2) • ... • 66th col = Ozone August 31 (O3.8HRMAX.62) • remaining columns organize cMAXTMP, WDSP, and RH identical to the 62 O3.8HRMAX measurements References spTimer Bakar, K.S. and <NAME>. http://www.southampton.ac.uk/~sks/research/papers/ spTimeRpaper.pdf Sahu, S.K. and <NAME>. (2012) A Comparison of Bayesian Models for Daily Ozone Concentra- tion Levels. Statistical Methodology, 9, 144–157. PM10.dat Observed and modeled PM10 concentrations across Europe Description The PM10.dat data frame is a subset of data analyzed in Hamm et al. (2015) and Datta et al. (2016). Data comprise April 6, 2010 square root transformed PM10 measurements across central Europe with corresponding output from the LOTOS-EUROS Schaap et al. (2008) chemistry trans- port model (CTM). CTM data may differ slightly from that considered in the studies noted above due to LOTOS-EUROS CTM code updates. A NA value is given at CTM output locations were PM10 is not observed. Point coordinates are in "+proj=laea +lat_0=52 +lon_0=10 +x_0=4321000 +y_0=3210000 +ellps=GRS80 +units=km +no_defs". 
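As described above, pm10.obs is NA at CTM output locations where no monitoring station exists, so a common first step is to split observed from prediction-only locations. A minimal base-R sketch on mock rows laid out like PM10.dat (the values and the pm10 data frame are fabricated for illustration, not the actual data):

```r
## Mock rows in the column layout described for PM10.dat; values are fabricated.
pm10 <- data.frame(x.coord  = c(4200, 4300, 4400),
                   y.coord  = c(3100, 3150, 3200),
                   pm10.obs = c(4.2, NA, 5.1),  ##NA = no station at this location
                   pm10.ctm = c(4.0, 4.5, 4.9))

obs  <- pm10[!is.na(pm10$pm10.obs), ]  ##stations: usable for model fitting
pred <- pm10[ is.na(pm10$pm10.obs), ]  ##CTM-only locations: prediction targets
```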
Usage data(PM10.dat) Format Columns for PM10.dat: • x.coord = x coordinate (see projection information in the description) • y.coord = y coordinate (see projection information in the description) • pm10.obs = square root transformed PM10 measurements at monitoring stations (NA means there is not a station at the given location) • pm10.ctm = square root transformed PM10 from CTM References <NAME>., <NAME>, <NAME>, <NAME>, and <NAME> (2016). Nonseparable dynamic nearest neighbor Gaussian process models for large spatio-temporal data with an application to particulate matter analysis. Annals of Applied Statistics, 10(3), 1286–1316. ISSN 1932-6157. doi:10.1214/16-AOAS931. <NAME>, <NAME>, <NAME> (2015). A Spatially Varying Coefficient Model for Mapping PM10 Air Quality at the European scale. Atmospheric Environment, 102, 393–405. <NAME>., <NAME>, <NAME>, <NAME>, <NAME>, <NAME>, <NAME>, <NAME> (2008). The LOTOS-EUROS Model: Description, Validation and Latest Developments. International Journal of Environment and Pollution, 32(2), 270–290. PM10.poly European countries used in PM10.dat Description European countries corresponding to PM10.dat locations and used in Hamm et al. (2015) and Datta et al. (2016). Polygon projection is "+proj=laea +lat_0=52 +lon_0=10 +x_0=4321000 +y_0=3210000 +ellps=GRS80 +units=km +no_defs". Usage data(PM10.poly) Format List of polygons. See example below to convert to a SpatialPolygons object. References <NAME>., <NAME>, <NAME>, <NAME>, and <NAME> (2016). Nonseparable dynamic nearest neighbor Gaussian process models for large spatio-temporal data with an application to particulate matter analysis. Annals of Applied Statistics, 10(3), 1286–1316. ISSN 1932-6157. doi:10.1214/16-AOAS931. <NAME>, <NAME>, <NAME> (2015). A Spatially Varying Coefficient Model for Mapping PM10 Air Quality at the European scale. Atmospheric Environment, 102, 393–405. 
Examples ## Not run: library(sp) prj <- "+proj=laea +lat_0=52 +lon_0=10 +x_0=4321000 +y_0=3210000 +ellps=GRS80 +units=km +no_defs" pm10.poly <- SpatialPolygons(PM10.poly, pO = 1:length(PM10.poly), proj4string=CRS(prj)) ## End(Not run) pointsInPoly Finds points in a polygon Description Given a polygon and a set of points this function returns the subset of points that are within the polygon. Usage pointsInPoly(poly, points, ...) Arguments poly an n × 2 matrix of polygon vertices. Matrix columns correspond to vertices’ x and y coordinates, respectively. points an m × 2 matrix of points. Matrix columns correspond to points’ x and y coor- dinates, respectively. ... currently no additional arguments. Details It is assumed that the polygon is to be closed by joining the last vertex to the first vertex. Value If points are found with the polygon, then a vector is returned with elements corresponding to the row indices of points, otherwise NA is returned. Author(s) <NAME> <<EMAIL>>, <NAME> <<EMAIL>>, Examples ## Not run: ##Example 1 points <- cbind(runif(1000, 0, 10),runif(1000, 0, 10)) poly <- cbind(c(1:9,8:1), c(1,2*(5:3),2,-1,17,9,8,2:9)) point.indx <- pointsInPoly(poly, points) plot(points, pch=19, cex=0.5, xlab="x", ylab="y", col="red") points(points[point.indx,], pch=19, cex=0.5, col="blue") polygon(poly) ##Example 2 ##a function to partition the domain tiles <- function(points, x.cnt, y.cnt, tol = 1.0e-10){ x.min <- min(points[,1])-tol x.max <- max(points[,1])+tol y.min <- min(points[,2])-tol y.max <- max(points[,2])+tol x.cnt <- x.cnt+1 y.cnt <- y.cnt+1 x <- seq(x.min, x.max, length.out=x.cnt) y <- seq(y.min, y.max, length.out=y.cnt) tile.list <- vector("list", (length(y)-1)*(length(x)-1)) l <- 1 for(i in 1:(length(y)-1)){ for(j in 1:(length(x)-1)){ tile.list[[l]] <- rbind(c(x[j], y[i]), c(x[j+1], y[i]), c(x[j+1], y[i+1]), c(x[j], y[i+1])) l <- l+1 } } tile.list } n <- 1000 points <- cbind(runif(n, 0, 10), runif(n, 0, 10)) grd <- tiles(points, x.cnt=10, y.cnt=10) 
plot(points, pch=19, cex=0.5, xlab="x", ylab="y")

sum.points <- 0
for(i in 1:length(grd)){
  polygon(grd[[i]], border="red")
  point.indx <- pointsInPoly(grd[[i]], points)
  if(!is.na(point.indx[1])){
    sum.points <- length(point.indx)+sum.points
    text(mean(grd[[i]][,1]), mean(grd[[i]][,2]), length(point.indx), col="red")
  }
}

sum.points

## End(Not run)

spDiag          Model fit diagnostics

Description

The function spDiag calculates DIC, GP, GRS, and associated statistics given a spLM, spMvLM, spGLM, spMvGLM, or spSVC object.

Usage

spDiag(sp.obj, start=1, end, thin=1, verbose=TRUE, n.report=100, ...)

Arguments

sp.obj    an object returned by spLM, spMvLM, spGLM, or spMvGLM. For spSVC, sp.obj is an object from spRecover.

start     specifies the first sample included in the computation. The start, end, and thin arguments only apply to spGLM or spMvGLM objects. Sub-sampling for spLM and spMvLM is controlled using spRecover, which must be called prior to spDiag.

end       specifies the last sample included in the computation. The default is to use all posterior samples in sp.obj. See start argument description.

thin      a sample thinning factor. The default of 1 considers all samples between start and end. For example, if thin = 10 then 1 in 10 samples are considered between start and end.

verbose   if TRUE calculation progress is printed to the screen; otherwise, nothing is printed to the screen.

n.report  the interval to report progress.

...       currently no additional arguments.

Value

A list with some of the following tags:

DIC       a matrix holding DIC and associated statistics, see Banerjee et al. (2004) for details.

GP        a matrix holding GP and associated statistics, see Gelfand and Ghosh (1998) for details.

GRS       a scoring rule, see Equation 27 in Gneiting and Raftery (2007) for details.

Author(s)

<NAME> <<EMAIL>>, <NAME> <<EMAIL>>

References

<NAME>., <NAME>., and <NAME>. (2004). Hierarchical modeling and analysis for spatial data. Chapman and Hall/CRC Press, Boca Raton, Fla.

<NAME>.
and <NAME> (2019) Efficient implementation of spatially-varying coefficients mod- els. Gelfand A.E. and <NAME>. (1998). Model choice: a minimum posterior predictive loss ap- proach. Biometrika. 85:1-11. <NAME>. and <NAME>. (2007). Strictly proper scoring rules, prediction, and estimation. Journal of the American Statistical Association. 102:359-378. Examples ## Not run: rmvn <- function(n, mu=0, V = matrix(1)){ p <- length(mu) if(any(is.na(match(dim(V),p)))) stop("Dimension problem!") D <- chol(V) t(matrix(rnorm(n*p), ncol=p)%*%D + rep(mu,rep(n,p))) } set.seed(1) n <- 100 coords <- cbind(runif(n,0,1), runif(n,0,1)) X <- as.matrix(cbind(1, rnorm(n))) B <- as.matrix(c(1,5)) p <- length(B) sigma.sq <- 2 tau.sq <- 0.1 phi <- 3/0.5 D <- as.matrix(dist(coords)) R <- exp(-phi*D) w <- rmvn(1, rep(0,n), sigma.sq*R) y <- rnorm(n, X%*%B + w, sqrt(tau.sq)) n.samples <- 1000 starting <- list("phi"=3/0.5, "sigma.sq"=50, "tau.sq"=1) tuning <- list("phi"=0.1, "sigma.sq"=0.1, "tau.sq"=0.1) ##too restrictive of prior on beta priors.1 <- list("beta.Norm"=list(rep(0,p), diag(1,p)), "phi.Unif"=c(3/1, 3/0.1), "sigma.sq.IG"=c(2, 2), "tau.sq.IG"=c(2, 0.1)) ##more reasonable prior for beta priors.2 <- list("beta.Norm"=list(rep(0,p), diag(1000,p)), "phi.Unif"=c(3/1, 3/0.1), "sigma.sq.IG"=c(2, 2), "tau.sq.IG"=c(2, 0.1)) cov.model <- "exponential" n.report <- 500 verbose <- TRUE m.1 <- spLM(y~X-1, coords=coords, starting=starting, tuning=tuning, priors=priors.1, cov.model=cov.model, n.samples=n.samples, verbose=verbose, n.report=n.report) m.2 <- spLM(y~X-1, coords=coords, starting=starting, tuning=tuning, priors=priors.2, cov.model=cov.model, n.samples=n.samples, verbose=verbose, n.report=n.report) ##non-spatial model m.3 <- spLM(y~X-1, n.samples=n.samples, verbose=verbose, n.report=n.report) burn.in <- 0.5*n.samples ##recover beta and spatial random effects m.1 <- spRecover(m.1, start=burn.in, verbose=FALSE) m.2 <- spRecover(m.2, start=burn.in, verbose=FALSE) ##lower is better for DIC, GPD, 
and GRS
print(spDiag(m.1))
print(spDiag(m.2))
print(spDiag(m.3))

## End(Not run)

spDynLM         Function for fitting univariate Bayesian dynamic space-time regression models

Description

The function spDynLM fits Gaussian univariate Bayesian dynamic space-time regression models for settings where space is viewed as continuous but time is taken to be discrete.

Usage

spDynLM(formula, data = parent.frame(), coords, knots,
        starting, tuning, priors, cov.model, get.fitted=FALSE,
        n.samples, verbose=TRUE, n.report=100, ...)

Arguments

formula   a list of N_t symbolic regression models to be fit. Each model represents a time step. See example below.

data      an optional data frame containing the variables in the model. If not found in data, the variables are taken from environment(formula), typically the environment from which spDynLM is called.

coords    an n×2 matrix of the observation coordinates in R^2 (e.g., easting and northing).

starting  a list with each tag corresponding to a parameter name. Valid tags are beta, sigma.sq, tau.sq, phi, nu, and sigma.eta. The value portion of each tag is the parameter's starting value.

knots     either a m×2 matrix of the predictive process knot coordinates in R^2 (e.g., easting and northing) or a vector of length two or three with the first and second elements recording the number of columns and rows in the desired knot grid. The third, optional, element sets the offset of the outermost knots from the extent of the coords.

tuning    a list with each tag corresponding to a parameter name. Valid tags are phi and nu. The value portion of each tag defines the variance of the Metropolis sampler Normal proposal distribution.

priors    a list with tags beta.0.norm, sigma.sq.ig, tau.sq.ig, phi.unif, nu.unif, and sigma.eta.iw. Variance parameters, sigma.sq and tau.sq, are assumed to follow an inverse-Gamma distribution, whereas the spatial decay phi and smoothness nu parameters are assumed to follow Uniform distributions.
The beta.0.norm is a multivariate Normal distribution with hyperparameters passed as a list of length two with the first and second elements corresponding to the mean vector and positive definite covariance matrix, respectively. The hyperparameters of the inverse-Wishart, sigma.eta.iw, are passed as a list of length two, with the first and second elements corresponding to the df and p×p scale matrix, respectively. The inverse-Gamma hyperparameters are passed in a list with two vectors that hold the shape and scale, respectively. The Uniform hyperparameters are passed in a list with two vectors that hold the lower and upper support values, respectively.

cov.model a quoted keyword that specifies the covariance function used to model the spatial dependence structure among the observations. Supported covariance model keywords are: "exponential", "matern", "spherical", and "gaussian". See below for details.

get.fitted
          if TRUE, posterior predicted and fitted values are collected. Note, posterior predicted samples are only collected for those y_t(s) that are NA.

n.samples the number of MCMC iterations.

verbose   if TRUE, model specification and progress of the sampler is printed to the screen. Otherwise, nothing is printed to the screen.

n.report  the interval to report Metropolis sampler acceptance and MCMC progress.

...       currently no additional arguments.

Details

Suppose y_t(s) denotes the observation at location s and time t. We model y_t(s) through a measurement equation that provides a regression specification with a space-time varying intercept and serially and spatially uncorrelated zero-centered Gaussian disturbances as measurement error ε_t(s). Next a transition equation introduces a p×1 coefficient vector, say β_t, which is a purely temporal component (i.e., time-varying regression parameters), and a spatio-temporal component u_t(s). Both of these are generated through transition equations, capturing their Markovian dependence in time. While the transition equation of the purely temporal component is akin to usual state-space modeling, the spatio-temporal component is generated using Gaussian spatial processes. The overall model is written as

    y_t(s) = x_t(s)'β_t + u_t(s) + ε_t(s),  t = 1, 2, ..., N_t
    β_t = β_{t−1} + η_t;  η_t ~ N(0, Σ_η)
    u_t(s) = u_{t−1}(s) + w_t(s);  w_t(s) ~ GP(0, C_t(·, θ_t))

Here x_t(s) is a p×1 vector of predictors and β_t is a p×1 vector of coefficients. In addition to an intercept, x_t(s) can include location specific variables useful for explaining the variability in y_t(s). The GP(0, C_t(·, θ_t)) denotes a spatial Gaussian process with covariance function C_t(·; θ_t). We specify C_t(s_1, s_2; θ_t) = σ_t^2 ρ(s_1, s_2; φ_t), where θ_t = {σ_t^2, φ_t, ν_t} and ρ(·; φ) is a correlation function with φ controlling the correlation decay and σ_t^2 representing the spatial variance component. The spatial smoothness parameter, ν, is used if the Matern spatial correlation function is chosen. We further assume β_0 ~ N(m_0, Σ_0) and u_0(s) ≡ 0, which completes the prior specification, leading to a well-identified Bayesian hierarchical model that also yields reasonable dependence structures.

Value

An object of class spDynLM, which is a list with the following tags:

coords    the n×2 matrix specified by coords.

p.theta.samples
          a coda object of posterior samples for τ_t^2, σ_t^2, φ_t, ν_t.

p.beta.0.samples
          a coda object of posterior samples for β at t = 0.

p.beta.samples
          a coda object of posterior samples for β_t.

p.sigma.eta.samples
          a coda object of posterior samples for Σ_η.

p.u.samples
          a coda object of posterior samples for spatio-temporal random effects u. Samples are over the columns and time steps increase in blocks of n down the columns, e.g., the first n rows correspond to locations 1, 2, ..., n in t = 1 and the last n rows correspond to locations 1, 2, ..., n in t = N_t.

p.y.samples
          a coda object of posterior samples for y.
If yt (s) is specified as NA the p.y.samples hold the associated posterior predictive samples. Samples are over the columns and time steps increase in blocks of n down the columns, e.g., the first n rows correspond to locations 1, 2, . . . , n in t = 1 and the last n rows correspond to locations 1, 2, . . . , n in t = Nt . The return object might include additional data used for subsequent prediction and/or model fit evaluation. Author(s) <NAME> <<EMAIL>>, <NAME> <<EMAIL>> References <NAME>., <NAME>, and <NAME>. (2012) Bayesian dynamic modeling for large space- time datasets using Gaussian predictive processes. Journal of Geographical Systems, 14:29–47. <NAME>., <NAME>, and <NAME>. (2015) spBayes for large univariate and multivariate point-referenced spatio-temporal data models. Journal of Statistical Software, 63:1–28. https: //www.jstatsoft.org/article/view/v063i13. Gelfand, A.E., <NAME>, and <NAME> (2005) Spatial Process Modelling for Univariate and Multivariate Dynamic Spatial Data, Environmetrics, 16:465–479. See Also spLM Examples ## Not run: data("NETemp.dat") ne.temp <- NETemp.dat set.seed(1) ##take a chunk of New England ne.temp <- ne.temp[ne.temp[,"UTMX"] > 5500000 & ne.temp[,"UTMY"] > 3000000,] ##subset first 2 years (Jan 2000 - Dec. 
2002) y.t <- ne.temp[,4:27] N.t <- ncol(y.t) ##number of months n <- nrow(y.t) ##number of observation per months ##add some missing observations to illistrate prediction miss <- sample(1:N.t, 10) holdout.station.id <- 5 y.t.holdout <- y.t[holdout.station.id, miss] y.t[holdout.station.id, miss] <- NA ##scale to km coords <- as.matrix(ne.temp[,c("UTMX", "UTMY")]/1000) max.d <- max(iDist(coords)) ##set starting and priors p <- 2 #number of regression parameters in each month starting <- list("beta"=rep(0,N.t*p), "phi"=rep(3/(0.5*max.d), N.t), "sigma.sq"=rep(2,N.t), "tau.sq"=rep(1, N.t), "sigma.eta"=diag(rep(0.01, p))) tuning <- list("phi"=rep(5, N.t)) priors <- list("beta.0.Norm"=list(rep(0,p), diag(1000,p)), "phi.Unif"=list(rep(3/(0.9*max.d), N.t), rep(3/(0.05*max.d), N.t)), "sigma.sq.IG"=list(rep(2,N.t), rep(10,N.t)), "tau.sq.IG"=list(rep(2,N.t), rep(5,N.t)), "sigma.eta.IW"=list(2, diag(0.001,p))) ##make symbolic model formula statement for each month mods <- lapply(paste(colnames(y.t),'elev',sep='~'), as.formula) n.samples <- 2000 m.1 <- spDynLM(mods, data=cbind(y.t,ne.temp[,"elev",drop=FALSE]), coords=coords, starting=starting, tuning=tuning, priors=priors, get.fitted =TRUE, cov.model="exponential", n.samples=n.samples, n.report=25) burn.in <- floor(0.75*n.samples) quant <- function(x){quantile(x, prob=c(0.5, 0.025, 0.975))} beta <- apply(m.1$p.beta.samples[burn.in:n.samples,], 2, quant) beta.0 <- beta[,grep("Intercept", colnames(beta))] beta.1 <- beta[,grep("elev", colnames(beta))] plot(m.1$p.beta.0.samples) par(mfrow=c(2,1)) plot(1:N.t, beta.0[1,], pch=19, cex=0.5, xlab="months", ylab="beta.0", ylim=range(beta.0)) arrows(1:N.t, beta.0[1,], 1:N.t, beta.0[3,], length=0.02, angle=90) arrows(1:N.t, beta.0[1,], 1:N.t, beta.0[2,], length=0.02, angle=90) plot(1:N.t, beta.1[1,], pch=19, cex=0.5, xlab="months", ylab="beta.1", ylim=range(beta.1)) arrows(1:N.t, beta.1[1,], 1:N.t, beta.1[3,], length=0.02, angle=90) arrows(1:N.t, beta.1[1,], 1:N.t, beta.1[2,], length=0.02, 
            angle=90)

theta <- apply(m.1$p.theta.samples[burn.in:n.samples,], 2, quant)

sigma.sq <- theta[,grep("sigma.sq", colnames(theta))]
tau.sq <- theta[,grep("tau.sq", colnames(theta))]
phi <- theta[,grep("phi", colnames(theta))]

par(mfrow=c(3,1))
plot(1:N.t, sigma.sq[1,], pch=19, cex=0.5, xlab="months", ylab="sigma.sq",
     ylim=range(sigma.sq))
arrows(1:N.t, sigma.sq[1,], 1:N.t, sigma.sq[3,], length=0.02, angle=90)
arrows(1:N.t, sigma.sq[1,], 1:N.t, sigma.sq[2,], length=0.02, angle=90)

plot(1:N.t, tau.sq[1,], pch=19, cex=0.5, xlab="months", ylab="tau.sq",
     ylim=range(tau.sq))
arrows(1:N.t, tau.sq[1,], 1:N.t, tau.sq[3,], length=0.02, angle=90)
arrows(1:N.t, tau.sq[1,], 1:N.t, tau.sq[2,], length=0.02, angle=90)

plot(1:N.t, 3/phi[1,], pch=19, cex=0.5, xlab="months", ylab="eff. range (km)",
     ylim=range(3/phi))
arrows(1:N.t, 3/phi[1,], 1:N.t, 3/phi[3,], length=0.02, angle=90)
arrows(1:N.t, 3/phi[1,], 1:N.t, 3/phi[2,], length=0.02, angle=90)

y.hat <- apply(m.1$p.y.samples[,burn.in:n.samples], 1, quant)
y.hat.med <- matrix(y.hat[1,], ncol=N.t)
y.hat.up <- matrix(y.hat[3,], ncol=N.t)
y.hat.low <- matrix(y.hat[2,], ncol=N.t)

y.obs <- as.vector(as.matrix(y.t[-holdout.station.id, -miss]))
y.obs.hat.med <- as.vector(y.hat.med[-holdout.station.id, -miss])
y.obs.hat.up <- as.vector(y.hat.up[-holdout.station.id, -miss])
y.obs.hat.low <- as.vector(y.hat.low[-holdout.station.id, -miss])

y.ho <- as.matrix(y.t.holdout)
y.ho.hat.med <- as.vector(y.hat.med[holdout.station.id, miss])
y.ho.hat.up <- as.vector(y.hat.up[holdout.station.id, miss])
y.ho.hat.low <- as.vector(y.hat.low[holdout.station.id, miss])

par(mfrow=c(2,1))
plot(y.obs, y.obs.hat.med, pch=19, cex=0.5, xlab="observed", ylab="fitted",
     main="Observed vs. fitted")
arrows(y.obs, y.obs.hat.med, y.obs, y.obs.hat.up, length=0.02, angle=90)
arrows(y.obs, y.obs.hat.med, y.obs, y.obs.hat.low, length=0.02, angle=90)
lines(-50:50, -50:50, col="blue")

plot(y.ho, y.ho.hat.med, pch=19, cex=0.5, xlab="observed", ylab="predicted",
     main="Observed vs. predicted")
arrows(y.ho, y.ho.hat.med, y.ho, y.ho.hat.up, length=0.02, angle=90)
arrows(y.ho, y.ho.hat.med, y.ho, y.ho.hat.low, length=0.02, angle=90)
lines(-50:50, -50:50, col="blue")

## End(Not run)

spGLM    Function for fitting univariate Bayesian generalized linear spatial
         regression models

Description

The function spGLM fits univariate Bayesian generalized linear spatial regression models.
Given a set of knots, spGLM will also fit a predictive process model (see references
below).

Usage

spGLM(formula, family="binomial", weights, data = parent.frame(),
      coords, knots, starting, tuning, priors, cov.model,
      amcmc, n.samples, verbose=TRUE, n.report=100, ...)

Arguments

formula    a symbolic description of the regression model to be fit. See example below.

family     currently only supports binomial and poisson data using the logit and log
           link functions, respectively.

weights    an optional vector of weights to be used in the fitting process. Weights
           correspond to the number of trials and offset for each location for the
           binomial and poisson family, respectively.

data       an optional data frame containing the variables in the model. If not found
           in data, the variables are taken from environment(formula), typically the
           environment from which spGLM is called.

coords     an n × 2 matrix of the observation coordinates in R^2 (e.g., easting and
           northing).

knots      either an m × 2 matrix of the predictive process knot coordinates in R^2
           (e.g., easting and northing) or a vector of length two or three with the
           first and second elements recording the number of columns and rows in the
           desired knot grid. The third, optional, element sets the offset of the
           outermost knots from the extent of the coords.

starting   a list with each tag corresponding to a parameter name. Valid tags are beta,
           sigma.sq, phi, nu, and w. The value portion of each tag is the parameter's
           starting value. If the predictive process is used then w must be of length
           m; otherwise, it must be of length n.
           Alternatively, w can be set as a scalar, in which case the value is
           repeated.

tuning     a list with each tag corresponding to a parameter name. Valid tags are beta,
           sigma.sq, phi, nu, and w. The value portion of each tag defines the variance
           of the Metropolis sampler Normal proposal distribution. The tuning value for
           beta can be a vector of length p (where p is the number of regression
           coefficients) or, if an adaptive MCMC is not used, i.e., amcmc is not
           specified, the lower-triangle of the p × p Cholesky square-root of the
           desired proposal covariance matrix. If the predictive process is used then
           w must be of length m; otherwise, it must be of length n. Alternatively,
           w can be set as a scalar, in which case the value is repeated.

priors     a list with each tag corresponding to a parameter name. Valid tags are
           sigma.sq.ig, phi.unif, nu.unif, beta.norm, and beta.flat. The variance
           parameter sigma.sq is assumed to follow an inverse-Gamma distribution,
           whereas the spatial decay phi and smoothness nu parameters are assumed to
           follow Uniform distributions. The hyperparameters of the inverse-Gamma are
           passed as a vector of length two, with the first and second elements
           corresponding to the shape and scale, respectively. The hyperparameters of
           the Uniform are also passed as a vector of length two with the first and
           second elements corresponding to the lower and upper support, respectively.
           If the regression coefficients are each assumed to follow a Normal
           distribution, i.e., beta.norm, then the mean and variance hyperparameters
           are passed as the first and second list elements, respectively. If beta is
           assumed flat then no arguments are passed. The default is a flat prior.

cov.model  a quoted keyword that specifies the covariance function used to model the
           spatial dependence structure among the observations. Supported covariance
           model keywords are: "exponential", "matern", "spherical", and "gaussian".
           See below for details.
amcmc      a list with tags n.batch, batch.length, and accept.rate. Specifying this
           argument invokes an adaptive MCMC sampler, see Roberts and Rosenthal (2007)
           for an explanation.

n.samples  the number of MCMC iterations. This argument is ignored if amcmc is
           specified.

verbose    if TRUE, model specification and progress of the sampler is printed to the
           screen. Otherwise, nothing is printed to the screen.

n.report   the interval to report Metropolis sampler acceptance and MCMC progress.

...        currently no additional arguments.

Details

If a binomial model is specified the response vector is the number of successful trials
at each location and weights is the total number of trials at each location.

For a poisson specification, the weights vector is the count offset, e.g., population,
at each location. This differs from the glm offset argument, which is passed as the log
of this value.

A non-spatial model is fit when coords is not specified. See example below.

Value

An object of class spGLM, which is a list with the following tags:

coords     the n × 2 matrix specified by coords.

knot.coords
           the m × 2 matrix as specified by knots.

p.beta.theta.samples
           a coda object of posterior samples for the defined parameters.

acceptance the Metropolis sampler acceptance rate. If amcmc is used then this will be a
           matrix of each parameter's acceptance rate at the end of each batch.
           Otherwise, the sampler is a Metropolis with a joint proposal of all
           parameters.

acceptance.w
           if this is a non-predictive process model and amcmc is used then this will
           be a matrix of the Metropolis sampler acceptance rate for each location's
           spatial random effect.

acceptance.w.knots
           if this is a predictive process model and amcmc is used then this will be a
           matrix of the Metropolis sampler acceptance rate for each knot's spatial
           random effect.

p.w.knots.samples
           a matrix that holds samples from the posterior distribution of the knots'
           spatial random effects.
           The rows of this matrix correspond to the m knot locations and the columns
           are the posterior samples. This is only returned if a predictive process
           model is used.

p.w.samples
           a matrix that holds samples from the posterior distribution of the
           locations' spatial random effects. The rows of this matrix correspond to
           the n point observations and the columns are the posterior samples.

The return object might include additional data used for subsequent prediction and/or
model fit evaluation.

Author(s)

<NAME> <<EMAIL>>,
<NAME> <<EMAIL>>

References

<NAME>., <NAME>, <NAME>, and <NAME>. (2008) Gaussian Predictive Process Models for
Large Spatial Datasets. Journal of the Royal Statistical Society Series B, 70:825–848.

<NAME>., <NAME>., and <NAME>. (2004) Hierarchical modeling and analysis for spatial
data. Chapman and Hall/CRC Press, Boca Raton, Fla.

<NAME>., <NAME>, and <NAME>. (2015) spBayes for large univariate and multivariate
point-referenced spatio-temporal data models. Journal of Statistical Software, 63:1–28.
https://www.jstatsoft.org/article/view/v063i13.

<NAME>., <NAME>, and <NAME>. (2008) A Bayesian approach to quantifying uncertainty in
multi-source forest area estimates. Environmental and Ecological Statistics, 15:241–258.

<NAME>. and <NAME>. (2006) Examples of Adaptive MCMC.
http://probability.ca/jeff/ftpdir/adaptex.pdf Preprint.
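To make the starting, tuning, and priors lists described above concrete, here is a
minimal sketch of the three lists for a binomial spGLM with an exponential covariance;
all numeric values (and the location count n) are illustrative assumptions, not
recommendations:

```r
## Illustrative spGLM specification lists (hypothetical values).
## priors: sigma.sq ~ IG(2, 1), phi ~ Unif(0.03, 0.3), beta ~ N(0, 10).
n <- 64  # assumed number of locations (without knots, w must be length n)
starting <- list("beta"=0, "phi"=0.06, "sigma.sq"=1, "w"=rep(0, n))
tuning   <- list("beta"=0.1, "phi"=0.5, "sigma.sq"=0.5, "w"=0.5)
priors   <- list("beta.Normal"=list(0, 10),
                 "phi.Unif"=c(0.03, 0.3),
                 "sigma.sq.IG"=c(2, 1))
```

These lists would then be passed directly to the starting, tuning, and priors arguments
of spGLM, as in the Examples below.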
See Also

spMvGLM

Examples

## Not run: 
library(MBA)
library(coda)

set.seed(1)

rmvn <- function(n, mu=0, V = matrix(1)){
  p <- length(mu)
  if(any(is.na(match(dim(V),p)))) stop("Dimension problem!")
  D <- chol(V)
  t(matrix(rnorm(n*p), ncol=p) %*% D + rep(mu,rep(n,p)))
}

################################
##Spatial binomial
################################

##Generate binary data
coords <- as.matrix(expand.grid(seq(0,100,length.out=8), seq(0,100,length.out=8)))
n <- nrow(coords)

phi <- 3/50
sigma.sq <- 2

R <- sigma.sq*exp(-phi*as.matrix(dist(coords)))
w <- rmvn(1, rep(0,n), R)

x <- as.matrix(rep(1,n))
beta <- 0.1
p <- 1/(1+exp(-(x%*%beta+w)))

weights <- rep(1, n)
weights[coords[,1]>mean(coords[,1])] <- 10

y <- rbinom(n, size=weights, prob=p)

##Collect samples
fit <- glm((y/weights)~x-1, weights=weights, family="binomial")
beta.starting <- coefficients(fit)
beta.tuning <- t(chol(vcov(fit)))

n.batch <- 200
batch.length <- 50
n.samples <- n.batch*batch.length

m.1 <- spGLM(y~1, family="binomial", coords=coords, weights=weights,
             starting=list("beta"=beta.starting, "phi"=0.06,"sigma.sq"=1, "w"=0),
             tuning=list("beta"=beta.tuning, "phi"=0.5, "sigma.sq"=0.5, "w"=0.5),
             priors=list("beta.Normal"=list(0,10), "phi.Unif"=c(0.03, 0.3),
                         "sigma.sq.IG"=c(2, 1)),
             amcmc=list("n.batch"=n.batch, "batch.length"=batch.length,
                        "accept.rate"=0.43),
             cov.model="exponential", verbose=TRUE, n.report=10)

burn.in <- 0.9*n.samples
sub.samps <- burn.in:n.samples

print(summary(window(m.1$p.beta.theta.samples, start=burn.in)))

beta.hat <- m.1$p.beta.theta.samples[sub.samps,"(Intercept)"]
w.hat <- m.1$p.w.samples[,sub.samps]

p.hat <- 1/(1+exp(-(x%*%beta.hat+w.hat)))

y.hat <- apply(p.hat, 2, function(x){rbinom(n, size=weights, prob=x)})

y.hat.mu <- apply(y.hat, 1, mean)
y.hat.var <- apply(y.hat, 1, var)

##Take a look
par(mfrow=c(1,2))
surf <- mba.surf(cbind(coords,y.hat.mu),no.X=100, no.Y=100, extend=TRUE)$xyz.est
image(surf, main="Interpolated mean of posterior rate\n(observed rate)")
contour(surf, add=TRUE)
text(coords, label=paste("(",y,")",sep=""))

surf <- mba.surf(cbind(coords,y.hat.var),no.X=100, no.Y=100, extend=TRUE)$xyz.est
image(surf, main="Interpolated variance of posterior rate\n(observed # of trials)")
contour(surf, add=TRUE)
text(coords, label=paste("(",weights,")",sep=""))

###########################
##Spatial poisson
###########################

##Generate count data
set.seed(1)

n <- 100

coords <- cbind(runif(n,1,100),runif(n,1,100))

phi <- 3/50
sigma.sq <- 2

R <- sigma.sq*exp(-phi*as.matrix(dist(coords)))
w <- rmvn(1, rep(0,n), R)

x <- as.matrix(rep(1,n))
beta <- 0.1
y <- rpois(n, exp(x%*%beta+w))

##Collect samples
beta.starting <- coefficients(glm(y~x-1, family="poisson"))
beta.tuning <- t(chol(vcov(glm(y~x-1, family="poisson"))))

n.batch <- 500
batch.length <- 50
n.samples <- n.batch*batch.length

##Note tuning list is now optional
m.1 <- spGLM(y~1, family="poisson", coords=coords,
             starting=list("beta"=beta.starting, "phi"=0.06,"sigma.sq"=1, "w"=0),
             tuning=list("beta"=0.1, "phi"=0.5, "sigma.sq"=0.5, "w"=0.5),
             priors=list("beta.Flat", "phi.Unif"=c(0.03, 0.3),
                         "sigma.sq.IG"=c(2, 1)),
             amcmc=list("n.batch"=n.batch, "batch.length"=batch.length,
                        "accept.rate"=0.43),
             cov.model="exponential", verbose=TRUE, n.report=10)

##Just for fun check out the progression of the acceptance
##as it moves to 43% (same can be seen for the random spatial effects).
plot(mcmc(t(m.1$acceptance)), density=FALSE, smooth=FALSE)

##Now parameter summaries, etc.
burn.in <- 0.9*n.samples
sub.samps <- burn.in:n.samples

m.1$p.beta.theta.samples[,"phi"] <- 3/m.1$p.beta.theta.samples[,"phi"]

plot(m.1$p.beta.theta.samples)
print(summary(window(m.1$p.beta.theta.samples, start=burn.in)))

beta.hat <- m.1$p.beta.theta.samples[sub.samps,"(Intercept)"]
w.hat <- m.1$p.w.samples[,sub.samps]

y.hat <- apply(exp(x%*%beta.hat+w.hat), 2, function(x){rpois(n, x)})

y.hat.mu <- apply(y.hat, 1, mean)

##Take a look
par(mfrow=c(1,2))
surf <- mba.surf(cbind(coords,y),no.X=100, no.Y=100, extend=TRUE)$xyz.est
image(surf, main="Observed counts")
contour(surf, add=TRUE)
text(coords, labels=y, cex=1)

surf <- mba.surf(cbind(coords,y.hat.mu),no.X=100, no.Y=100, extend=TRUE)$xyz.est
image(surf, main="Fitted counts")
contour(surf, add=TRUE)
text(coords, labels=round(y.hat.mu,0), cex=1)

## End(Not run)

spLM    Function for fitting univariate Bayesian spatial regression models

Description

The function spLM fits Gaussian univariate Bayesian spatial regression models. Given a
set of knots, spLM will also fit a predictive process model (see references below).

Usage

spLM(formula, data = parent.frame(), coords, knots,
     starting, tuning, priors, cov.model, modified.pp = TRUE, amcmc,
     n.samples, verbose=TRUE, n.report=100, ...)

Arguments

formula    a symbolic description of the regression model to be fit. See example below.

data       an optional data frame containing the variables in the model. If not found
           in data, the variables are taken from environment(formula), typically the
           environment from which spLM is called.

coords     an n × 2 matrix of the observation coordinates in R^2 (e.g., easting and
           northing).

knots      either an m × 2 matrix of the predictive process knot coordinates in R^2
           (e.g., easting and northing) or a vector of length two or three with the
           first and second elements recording the number of columns and rows in the
           desired knot grid. The third, optional, element sets the offset of the
           outermost knots from the extent of the coords.
starting   a list with each tag corresponding to a parameter name. Valid tags are beta,
           sigma.sq, tau.sq, phi, and nu. The value portion of each tag is the
           parameter's starting value.

tuning     a list with each tag corresponding to a parameter name. Valid tags are
           sigma.sq, tau.sq, phi, and nu. The value portion of each tag defines the
           variance of the Metropolis sampler Normal proposal distribution.

priors     a list with each tag corresponding to a parameter name. Valid tags are
           sigma.sq.ig, tau.sq.ig, phi.unif, nu.unif, beta.norm, and beta.flat. The
           variance parameters, sigma.sq and tau.sq, are assumed to follow an
           inverse-Gamma distribution, whereas the spatial decay phi and smoothness nu
           parameters are assumed to follow Uniform distributions. The hyperparameters
           of the inverse-Gamma are passed as a vector of length two, with the first
           and second elements corresponding to the shape and scale, respectively. The
           hyperparameters of the Uniform are also passed as a vector of length two
           with the first and second elements corresponding to the lower and upper
           support, respectively. If the regression coefficients, i.e., the beta
           vector, are assumed to follow a multivariate Normal distribution then pass
           the hyperparameters as a list of length two with the first and second
           elements corresponding to the mean vector and positive definite covariance
           matrix, respectively. If beta is assumed flat then no arguments are passed.
           The default is a flat prior.

cov.model  a quoted keyword that specifies the covariance function used to model the
           spatial dependence structure among the observations. Supported covariance
           model keywords are: "exponential", "matern", "spherical", and "gaussian".
           See below for details.

modified.pp
           a logical value indicating if the modified predictive process should be
           used (see references below for details). Note, if a predictive process
           model is not used (i.e., knots is not specified) then this argument is
           ignored.
amcmc      a list with tags n.batch, batch.length, and accept.rate. Specifying this
           argument invokes an adaptive MCMC sampler, see Roberts and Rosenthal (2007)
           for an explanation.

n.samples  the number of MCMC iterations. This argument is ignored if amcmc is
           specified.

verbose    if TRUE, model specification and progress of the sampler is printed to the
           screen. Otherwise, nothing is printed to the screen.

n.report   the interval to report Metropolis sampler acceptance and MCMC progress.

...        currently no additional arguments.

Details

Model parameters can be fixed at their starting values by setting their tuning values to
zero.

The no nugget model is specified by removing tau.sq from the starting list.

Value

An object of class spLM, which is a list with the following tags:

coords     the n × 2 matrix specified by coords.

knot.coords
           the m × 2 matrix as specified by knots.

p.theta.samples
           a coda object of posterior samples for the defined parameters.

acceptance the Metropolis sampling acceptance percent. Reported at batch.length or
           n.report intervals for amcmc specified and non-specified, respectively.

The return object might include additional data used for subsequent prediction and/or
model fit evaluation.

Author(s)

<NAME> <<EMAIL>>,
<NAME> <<EMAIL>>

References

<NAME>., <NAME>, <NAME>, and <NAME>. (2008) Gaussian Predictive Process Models for
Large Spatial Datasets. Journal of the Royal Statistical Society Series B, 70:825–848.

<NAME>., <NAME>., and <NAME>. (2004). Hierarchical modeling and analysis for spatial
data. Chapman and Hall/CRC Press, Boca Raton, FL.

<NAME>., <NAME>, and <NAME>. (2015) spBayes for large univariate and multivariate
point-referenced spatio-temporal data models. Journal of Statistical Software, 63:1–28.
https://www.jstatsoft.org/article/view/v063i13.

<NAME>., <NAME>, <NAME>, and <NAME>. (2009) Improving the performance of predictive
process modeling for large datasets. Computational Statistics and Data Analysis,
53:2873–2884.

<NAME>. and <NAME>. (2006). Examples of Adaptive MCMC.
http://probability.ca/jeff/ftpdir/adaptex.pdf.

See Also

spMvLM spSVC

Examples

library(coda)

## Not run: 
rmvn <- function(n, mu=0, V = matrix(1)){
  p <- length(mu)
  if(any(is.na(match(dim(V),p)))) stop("Dimension problem!")
  D <- chol(V)
  t(matrix(rnorm(n*p), ncol=p)%*%D + rep(mu,rep(n,p)))
}

set.seed(1)

n <- 100
coords <- cbind(runif(n,0,1), runif(n,0,1))
X <- as.matrix(cbind(1, rnorm(n)))

B <- as.matrix(c(1,5))
p <- length(B)

sigma.sq <- 2
tau.sq <- 0.1
phi <- 3/0.5

D <- as.matrix(dist(coords))
R <- exp(-phi*D)
w <- rmvn(1, rep(0,n), sigma.sq*R)
y <- rnorm(n, X%*%B + w, sqrt(tau.sq))

n.samples <- 2000

starting <- list("phi"=3/0.5, "sigma.sq"=50, "tau.sq"=1)
tuning <- list("phi"=0.1, "sigma.sq"=0.1, "tau.sq"=0.1)

priors.1 <- list("beta.Norm"=list(rep(0,p), diag(1000,p)),
                 "phi.Unif"=c(3/1, 3/0.1), "sigma.sq.IG"=c(2, 2),
                 "tau.sq.IG"=c(2, 0.1))

priors.2 <- list("beta.Flat", "phi.Unif"=c(3/1, 3/0.1),
                 "sigma.sq.IG"=c(2, 2), "tau.sq.IG"=c(2, 0.1))

cov.model <- "exponential"

n.report <- 500
verbose <- TRUE

m.1 <- spLM(y~X-1, coords=coords, starting=starting,
            tuning=tuning, priors=priors.1, cov.model=cov.model,
            n.samples=n.samples, verbose=verbose, n.report=n.report)

m.2 <- spLM(y~X-1, coords=coords, starting=starting,
            tuning=tuning, priors=priors.2, cov.model=cov.model,
            n.samples=n.samples, verbose=verbose, n.report=n.report)

burn.in <- 0.5*n.samples

##recover beta and spatial random effects
m.1 <- spRecover(m.1, start=burn.in, verbose=FALSE)
m.2 <- spRecover(m.2, start=burn.in, verbose=FALSE)

round(summary(m.1$p.theta.recover.samples)$quantiles[,c(3,1,5)],2)
round(summary(m.2$p.theta.recover.samples)$quantiles[,c(3,1,5)],2)

round(summary(m.1$p.beta.recover.samples)$quantiles[,c(3,1,5)],2)
round(summary(m.2$p.beta.recover.samples)$quantiles[,c(3,1,5)],2)

m.1.w.summary <- summary(mcmc(t(m.1$p.w.recover.samples)))$quantiles[,c(3,1,5)]
m.2.w.summary <- summary(mcmc(t(m.2$p.w.recover.samples)))$quantiles[,c(3,1,5)]

plot(w, m.1.w.summary[,1], xlab="Observed w", ylab="Fitted w",
     xlim=range(w), ylim=range(m.1.w.summary), main="Spatial random effects")
arrows(w, m.1.w.summary[,1], w, m.1.w.summary[,2], length=0.02, angle=90)
arrows(w, m.1.w.summary[,1], w, m.1.w.summary[,3], length=0.02, angle=90)
lines(range(w), range(w))

points(w, m.2.w.summary[,1], col="blue", pch=19, cex=0.5)
arrows(w, m.2.w.summary[,1], w, m.2.w.summary[,2], col="blue", length=0.02, angle=90)
arrows(w, m.2.w.summary[,1], w, m.2.w.summary[,3], col="blue", length=0.02, angle=90)

###########################
##Predictive process model
###########################
m.1 <- spLM(y~X-1, coords=coords, knots=c(6,6,0.1), starting=starting,
            tuning=tuning, priors=priors.1, cov.model=cov.model,
            n.samples=n.samples, verbose=verbose, n.report=n.report)

m.2 <- spLM(y~X-1, coords=coords, knots=c(6,6,0.1), starting=starting,
            tuning=tuning, priors=priors.2, cov.model=cov.model,
            n.samples=n.samples, verbose=verbose, n.report=n.report)

burn.in <- 0.5*n.samples

round(summary(window(m.1$p.beta.samples, start=burn.in))$quantiles[,c(3,1,5)],2)
round(summary(window(m.2$p.beta.samples, start=burn.in))$quantiles[,c(3,1,5)],2)

round(summary(window(m.1$p.theta.samples, start=burn.in))$quantiles[,c(3,1,5)],2)
round(summary(window(m.2$p.theta.samples, start=burn.in))$quantiles[,c(3,1,5)],2)

## End(Not run)

spMisalignGLM    Function for fitting multivariate generalized linear Bayesian spatial
                 regression models to misaligned data

Description

The function spMisalignGLM fits multivariate Bayesian generalized linear spatial
regression models to misaligned data.

Usage

spMisalignGLM(formula, family="binomial", weights, data = parent.frame(), coords,
              starting, tuning, priors, cov.model, amcmc, n.samples,
              verbose=TRUE, n.report=100, ...)

Arguments

formula    a list of q symbolic regression models to be fit. See example below.

family     currently only supports binomial and poisson data using the logit and log
           link functions, respectively.
weights    an optional list of weight vectors associated with each model in the formula
           list. Weights correspond to the number of trials and offset for each
           location for the binomial and poisson family, respectively.

data       an optional data frame containing the variables in the model. If not found
           in data, the variables are taken from environment(formula), typically the
           environment from which spMisalignGLM is called.

coords     a list of q n_i × 2 matrices of the observation coordinates in R^2 (e.g.,
           easting and northing) where i = (1, 2, ..., q).

starting   a list with tags corresponding to A, phi, and nu. The value portion of each
           tag is a vector that holds the parameter's starting values. A is of length
           q(q+1)/2 and holds the lower-triangle elements in column major ordering of
           the Cholesky square root of the spatial cross-covariance matrix K = AA'.
           phi and nu are of length q.

tuning     a list with tags A, phi, and nu. The value portion of each tag defines the
           variance of the Metropolis sampler Normal proposal distribution. A is of
           length q(q+1)/2 and phi and nu are of length q.

priors     a list with each tag corresponding to a parameter name. Valid tags are
           beta.flat, beta.norm, K.iw, phi.unif, and nu.unif. If the regression
           coefficients are each assumed to follow a Normal distribution, i.e.,
           beta.norm, then the mean and variance hyperparameters are passed as the
           first and second list elements, respectively. If beta is assumed flat then
           no arguments are passed. The default is a flat prior. The spatial
           cross-covariance matrix K = AA' is assumed to follow an inverse-Wishart
           distribution, whereas the spatial decay phi and smoothness nu parameters
           are assumed to follow Uniform distributions. The hyperparameters of the
           inverse-Wishart are passed as a list of length two, with the first and
           second elements corresponding to the df and q × q scale matrix,
           respectively.
           The hyperparameters of the Uniform are also passed as a list of vectors
           with the first and second list elements corresponding to the lower and
           upper support, respectively.

cov.model  a quoted keyword that specifies the covariance function used to model the
           spatial dependence structure among the observations. Supported covariance
           model keywords are: "exponential", "matern", "spherical", and "gaussian".
           See below for details.

amcmc      a list with tags n.batch, batch.length, and accept.rate. Specifying this
           argument invokes an adaptive MCMC sampler, see Roberts and Rosenthal (2007)
           for an explanation.

n.samples  the number of MCMC iterations. This argument is ignored if amcmc is
           specified.

verbose    if TRUE, model specification and progress of the sampler is printed to the
           screen. Otherwise, nothing is printed to the screen.

n.report   the interval to report Metropolis acceptance and MCMC progress.

...        currently no additional arguments.

Details

If a binomial model is specified the response vector is the number of successful trials
at each location and weights is the total number of trials at each location.

For a poisson specification, the weights vector is the count offset, e.g., population,
at each location. This differs from the glm offset argument, which is passed as the log
of this value.

Value

An object of class spMisalignGLM, which is a list with the following tags:

p.beta.theta.samples
           a coda object of posterior samples for the defined parameters.

acceptance the Metropolis sampler acceptance rate. If amcmc is used then this will be
           a matrix of each parameter's acceptance rate at the end of each batch.
           Otherwise, the sampler is a Metropolis with a joint proposal of all
           parameters.

acceptance.w
           if amcmc is used then this will be a matrix of the Metropolis sampler
           acceptance rate for each location's spatial random effect.

p.w.samples
           a matrix that holds samples from the posterior distribution of the
           locations' spatial random effects.
           Posterior samples are organized with the first response variable's n_1
           locations held in rows 1, ..., n_1, followed by the next response
           variable's samples in rows (n_1 + 1), ..., (n_1 + n_2), etc.

The return object might include additional data used for subsequent prediction and/or
model fit evaluation.

Author(s)

<NAME> <<EMAIL>>,
<NAME> <<EMAIL>>

References

<NAME>., <NAME>, <NAME>, and <NAME>. (2008) Gaussian Predictive Process Models for
Large Spatial Datasets. Journal of the Royal Statistical Society Series B, 70:825–848.

<NAME>., <NAME>., and <NAME>. (2004). Hierarchical modeling and analysis for spatial
data. Chapman and Hall/CRC Press, Boca Raton, Fla.

<NAME>., <NAME>, and <NAME>. (2014) Bayesian hierarchical models for spatially
misaligned data. Methods in Ecology and Evolution, 5:514–523.

<NAME>., <NAME>, <NAME>, and <NAME>. (2009) Improving the performance of predictive
process modeling for large datasets. Computational Statistics and Data Analysis,
53:2873–2884.

<NAME>., <NAME>, <NAME>, and <NAME>. (2008) Bayesian multivariate process modeling
for prediction of forest attributes. Journal of Agricultural, Biological, and
Environmental Statistics, 13:60–83.
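The stacked row layout of p.w.samples described in the Value section can be unstacked by
simple row indexing; a minimal sketch, using placeholder random draws in place of a
fitted model object and hypothetical sizes n_1 = 50, n_2 = 51, n_3 = 51:

```r
## Sketch of unstacking the (n_1+n_2+n_3) x n.samples p.w.samples matrix.
n1 <- 50; n2 <- 51; n3 <- 51
## w.samps <- m.1$p.w.samples  # from a fitted spMisalignGLM object
w.samps <- matrix(rnorm((n1 + n2 + n3) * 10), ncol = 10)  # placeholder draws

w.1 <- w.samps[1:n1, ]                         # first response variable
w.2 <- w.samps[(n1 + 1):(n1 + n2), ]           # second response variable
w.3 <- w.samps[(n1 + n2 + 1):(n1 + n2 + n3), ] # third response variable
```

The same indexing applies to the stacked rows of spPredict's p.y.predictive.samples, as
the Examples below illustrate.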
See Also

spMvGLM spMisalignLM

Examples

## Not run: 
rmvn <- function(n, mu=0, V = matrix(1)){
  p <- length(mu)
  if(any(is.na(match(dim(V),p)))){stop("Dimension problem!")}
  D <- chol(V)
  t(matrix(rnorm(n*p), ncol=p)%*%D + rep(mu,rep(n,p)))
}

set.seed(1)

##generate some data
n <- 100 ##number of locations
q <- 3 ##number of outcomes at each location
nltr <- q*(q+1)/2 ##number of triangular elements in the cross-covariance matrix

coords <- cbind(runif(n,0,1), runif(n,0,1))

##parameters for generating a multivariate spatial GP covariance matrix
theta <- rep(3/0.5,q) ##spatial decay

A <- matrix(0,q,q)
A[lower.tri(A,TRUE)] <- c(1,1,-1,1,0.5,0.25)
K <- A%*%t(A)
K ##spatial cross-covariance
cov2cor(K) ##spatial cross-correlation

C <- mkSpCov(coords, K, diag(0,q), theta, cov.model="exponential")

w <- rmvn(1, rep(0,nrow(C)), C) ##spatial random effects

w.a <- w[seq(1,length(w),q)]
w.b <- w[seq(2,length(w),q)]
w.c <- w[seq(3,length(w),q)]

##covariate portion of the mean
x.a <- cbind(1, rnorm(n))
x.b <- cbind(1, rnorm(n))
x.c <- cbind(1, rnorm(n))
x <- mkMvX(list(x.a, x.b, x.c))

B.1 <- c(1,-1)
B.2 <- c(-1,1)
B.3 <- c(-1,-1)
B <- c(B.1, B.2, B.3)

y <- rpois(nrow(C), exp(x%*%B+w))

y.a <- y[seq(1,length(y),q)]
y.b <- y[seq(2,length(y),q)]
y.c <- y[seq(3,length(y),q)]

##subsample to make spatially misaligned data
sub.1 <- 1:50
y.1 <- y.a[sub.1]
w.1 <- w.a[sub.1]
coords.1 <- coords[sub.1,]
x.1 <- x.a[sub.1,]

sub.2 <- 25:75
y.2 <- y.b[sub.2]
w.2 <- w.b[sub.2]
coords.2 <- coords[sub.2,]
x.2 <- x.b[sub.2,]

sub.3 <- 50:100
y.3 <- y.c[sub.3]
w.3 <- w.c[sub.3]
coords.3 <- coords[sub.3,]
x.3 <- x.c[sub.3,]

##call spMisalignGLM
q <- 3

A.starting <- diag(1,q)[lower.tri(diag(1,q), TRUE)]

n.batch <- 200
batch.length <- 25
n.samples <- n.batch*batch.length

starting <- list("beta"=rep(0,length(B)), "phi"=rep(3/0.5,q),
                 "A"=A.starting, "w"=0)

tuning <- list("beta"=rep(0.1,length(B)), "phi"=rep(1,q),
               "A"=rep(0.1,length(A.starting)), "w"=1)

priors <- list("phi.Unif"=list(rep(3/0.75,q), rep(3/0.25,q)),
               "K.IW"=list(q+1, diag(0.1,q)))

m.1 <- spMisalignGLM(list(y.1~x.1-1, y.2~x.2-1, y.3~x.3-1), family="poisson",
                     coords=list(coords.1, coords.2, coords.3),
                     starting=starting, tuning=tuning, priors=priors,
                     amcmc=list("n.batch"=n.batch, "batch.length"=batch.length,
                                "accept.rate"=0.43),
                     cov.model="exponential", n.report=10)

burn.in <- floor(0.75*n.samples)

plot(m.1$p.beta.theta.samples, density=FALSE)

##predict for all locations, i.e., observed and not observed
out <- spPredict(m.1, start=burn.in, thin=10,
                 pred.covars=list(x.a, x.b, x.c),
                 pred.coords=list(coords, coords, coords))

##summary and check
quants <- function(x){quantile(x, prob=c(0.5,0.025,0.975))}

y.hat <- apply(out$p.y.predictive.samples, 1, quants)

##unstack and plot
y.a.hat <- y.hat[,1:n]
y.b.hat <- y.hat[,(n+1):(2*n)]
y.c.hat <- y.hat[,(2*n+1):(3*n)]

par(mfrow=c(1,3))
plot(y.a, y.a.hat[1,], xlab="Observed y.a", ylab="Fitted & predicted y.a")
plot(y.b, y.b.hat[1,], xlab="Observed y.b", ylab="Fitted & predicted y.b")
plot(y.c, y.c.hat[1,], xlab="Observed y.c", ylab="Fitted & predicted y.c")

## End(Not run)

spMisalignLM    Function for fitting multivariate Bayesian spatial regression models
                to misaligned data

Description

The function spMisalignLM fits Gaussian multivariate Bayesian spatial regression models
to misaligned data.

Usage

spMisalignLM(formula, data = parent.frame(), coords, starting, tuning, priors,
             cov.model, amcmc, n.samples, verbose=TRUE, n.report=100, ...)

Arguments

formula    a list of q symbolic regression models to be fit. See example below.

data       an optional data frame containing the variables in the model. If not found
           in data, the variables are taken from environment(formula), typically the
           environment from which spMisalignLM is called.

coords     a list of q n_i × 2 matrices of the observation coordinates in R^2 (e.g.,
           easting and northing) where i = (1, 2, ..., q).

starting   a list with tags corresponding to A, phi, nu, and Psi.
           The value portion of each tag is a vector that holds the parameter's
           starting values. A is of length q(q+1)/2 and holds the lower-triangle
           elements in column major ordering of the Cholesky square root of the
           spatial cross-covariance matrix. phi and nu are of length q. The vector of
           residual variances Psi is also of length q.

tuning     a list with tags A, phi, nu, and Psi. The value portion of each tag defines
           the variance of the Metropolis sampler Normal proposal distribution. A is
           of length q(q+1)/2 and Psi, phi, and nu are of length q.

priors     a list with tags beta.flat, K.iw, Psi.ig, phi.unif and nu.unif. The
           hyperparameters of the inverse-Wishart for the cross-covariance matrix
           K = AA' are passed as a list of length two, with the first and second
           elements corresponding to the df and q × q scale matrix, respectively. The
           inverse-Gamma hyperparameters for the non-spatial residual variances are
           specified as a list Psi.ig of length two with the first and second list
           elements consisting of vectors of the q shape and scale hyperparameters,
           respectively. The hyperparameters of the Uniform phi.unif and nu.unif are
           also passed as a list of vectors with the first and second list elements
           corresponding to the lower and upper support, respectively.

cov.model  a quoted keyword that specifies the covariance function used to model the
           spatial dependence structure among the observations. Supported covariance
           model keywords are: "exponential", "matern", "spherical", and "gaussian".
           See below for details.

amcmc      a list with tags n.batch, batch.length, and accept.rate. Specifying this
           argument invokes an adaptive MCMC sampler, see Roberts and Rosenthal (2007)
           for an explanation.

n.samples  the number of MCMC iterations. This argument is ignored if amcmc is
           specified.

verbose    if TRUE, model specification and progress of the sampler is printed to the
           screen. Otherwise, nothing is printed to the screen.

n.report   the interval to report Metropolis acceptance and MCMC progress.

...
           currently no additional arguments.

Details

Model parameters can be fixed at their starting values by setting their tuning values to
zero.

Value

An object of class spMisalignLM, which is a list with the following tags:

p.theta.samples
           a coda object of posterior samples for the defined parameters.

acceptance the Metropolis sampling acceptance percent. Reported at batch.length or
           n.report intervals for amcmc specified and non-specified, respectively.

The return object might include additional data used for subsequent prediction and/or
model fit evaluation.

Author(s)

<NAME> <<EMAIL>>,
<NAME> <<EMAIL>>

References

<NAME>., <NAME>, <NAME>, and <NAME>. (2008) Gaussian Predictive Process Models for
Large Spatial Datasets. Journal of the Royal Statistical Society Series B, 70:825–848.

<NAME>., <NAME>., and <NAME>. (2004). Hierarchical modeling and analysis for spatial
data. Chapman and Hall/CRC Press, Boca Raton, Fla.

<NAME>., <NAME>, and <NAME>. (2014) Bayesian hierarchical models for spatially
misaligned data. Methods in Ecology and Evolution, 5:514–523.

<NAME>., <NAME>, <NAME>, and <NAME>. (2009) Improving the performance of predictive
process modeling for large datasets. Computational Statistics and Data Analysis,
53:2873–2884.

<NAME>., <NAME>, <NAME>, and <NAME>. (2008) Bayesian multivariate process modeling
for prediction of forest attributes. Journal of Agricultural, Biological, and
Environmental Statistics, 13:60–83.
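The relationship between the length-q(q+1)/2 vector A used in the starting and tuning
lists and the cross-covariance matrix K = AA' can be sketched directly in base R; this
reuses the illustrative lower-triangle values from the Examples below:

```r
## Sketch: building K = A %*% t(A) from the length-q(q+1)/2 vector of
## lower-triangle elements (column major order) used in starting/tuning lists.
q <- 3
A.vec <- c(1, 1, -1, 1, 0.5, 0.25)  # illustrative values from the example
A <- matrix(0, q, q)
A[lower.tri(A, diag = TRUE)] <- A.vec  # fills column-major, diagonal included
K <- A %*% t(A)                        # spatial cross-covariance matrix
```

Because A is lower-triangular with a positive diagonal, K is symmetric and positive
definite by construction, which is what makes this parameterization convenient for MCMC.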
See Also

spMvLM, spMisalignGLM

Examples

## Not run: 
rmvn <- function(n, mu=0, V = matrix(1)){
  p <- length(mu)
  if(any(is.na(match(dim(V),p)))){stop("Dimension problem!")}
  D <- chol(V)
  t(matrix(rnorm(n*p), ncol=p)%*%D + rep(mu,rep(n,p)))
}

set.seed(1)

##generate some data
n <- 100 ##number of locations
q <- 3 ##number of outcomes at each location
nltr <- q*(q+1)/2 ##number of triangular elements in the cross-covariance matrix

coords <- cbind(runif(n,0,1), runif(n,0,1))

##parameters for generating a multivariate spatial GP covariance matrix
theta <- rep(3/0.5,q) ##spatial decay

A <- matrix(0,q,q)
A[lower.tri(A,TRUE)] <- c(1,1,-1,1,0.5,0.25)
K <- A%*%t(A)
K ##spatial cross-covariance
cov2cor(K) ##spatial cross-correlation

C <- mkSpCov(coords, K, diag(0,q), theta, cov.model="exponential")

w <- rmvn(1, rep(0,nrow(C)), C) ##spatial random effects
w.a <- w[seq(1,length(w),q)]
w.b <- w[seq(2,length(w),q)]
w.c <- w[seq(3,length(w),q)]

##covariate portion of the mean
x.a <- cbind(1, rnorm(n))
x.b <- cbind(1, rnorm(n))
x.c <- cbind(1, rnorm(n))
x <- mkMvX(list(x.a, x.b, x.c))

B.1 <- c(1,-1)
B.2 <- c(-1,1)
B.3 <- c(-1,-1)
B <- c(B.1, B.2, B.3)

Psi <- c(0.1, 0.1, 0.1) ##non-spatial residual variance, i.e., nugget

y <- rnorm(n*q, x%*%B+w, rep(sqrt(Psi),n))

y.a <- y[seq(1,length(y),q)]
y.b <- y[seq(2,length(y),q)]
y.c <- y[seq(3,length(y),q)]

##subsample to make spatially misaligned data
sub.1 <- 1:50
y.1 <- y.a[sub.1]
w.1 <- w.a[sub.1]
coords.1 <- coords[sub.1,]
x.1 <- x.a[sub.1,]

sub.2 <- 25:75
y.2 <- y.b[sub.2]
w.2 <- w.b[sub.2]
coords.2 <- coords[sub.2,]
x.2 <- x.b[sub.2,]

sub.3 <- 50:100
y.3 <- y.c[sub.3]
w.3 <- w.c[sub.3]
coords.3 <- coords[sub.3,]
x.3 <- x.c[sub.3,]

##call spMisalignLM
q <- 3

A.starting <- diag(1,q)[lower.tri(diag(1,q), TRUE)]

n.samples <- 5000

starting <- list("phi"=rep(3/0.5,q), "A"=A.starting, "Psi"=rep(1,q))

tuning <- list("phi"=rep(0.5,q), "A"=rep(0.01,length(A.starting)), "Psi"=rep(0.1,q))

priors <- list("phi.Unif"=list(rep(3/0.75,q), rep(3/0.25,q)),
"K.IW"=list(q+1, diag(0.1,q)), "Psi.ig"=list(rep(2,q), rep(0.1,q))) m.1 <- spMisalignLM(list(y.1~x.1-1, y.2~x.2-1, y.3~x.3-1), coords=list(coords.1, coords.2, coords.3), starting=starting, tuning=tuning, priors=priors, n.samples=n.samples, cov.model="exponential", n.report=100) burn.in <- floor(0.75*n.samples) plot(m.1$p.theta.samples, density=FALSE) ##recover regression coefficients and random effects m.1 <- spRecover(m.1, start=burn.in) round(summary(m.1$p.theta.recover.samples)$quantiles[,c(3,1,5)],2) round(summary(m.1$p.beta.recover.samples)$quantiles[,c(3,1,5)],2) ##predict for all locations, i.e., observed and not observed out <- spPredict(m.1, start=burn.in, thin=10, pred.covars=list(x.a, x.b, x.c), pred.coords=list(coords, coords, coords)) ##summary and check quants <- function(x){quantile(x, prob=c(0.5,0.025,0.975))} y.hat <- apply(out$p.y.predictive.samples, 1, quants) ##unstack and plot y.a.hat <- y.hat[,1:n] y.b.hat <- y.hat[,(n+1):(2*n)] y.c.hat <- y.hat[,(2*n+1):(3*n)] par(mfrow=c(1,3)) plot(y.a, y.a.hat[1,], xlab="Observed y.a", ylab="Fitted & predicted y.a", xlim=range(y), ylim=range(y.hat), main="") arrows(y.a[-sub.1], y.a.hat[1,-sub.1], y.a[-sub.1], y.a.hat[2,-sub.1], length=0.02, angle=90) arrows(y.a[-sub.1], y.a.hat[1,-sub.1], y.a[-sub.1], y.a.hat[3,-sub.1], length=0.02, angle=90) lines(range(y.a), range(y.a)) plot(y.b, y.b.hat[1,], xlab="Observed y.b", ylab="Fitted & predicted y.b", xlim=range(y), ylim=range(y.hat), main="") arrows(y.b[-sub.2], y.b.hat[1,-sub.2], y.b[-sub.2], y.b.hat[2,-sub.2], length=0.02, angle=90) arrows(y.b[-sub.2], y.b.hat[1,-sub.2], y.b[-sub.2], y.b.hat[3,-sub.2], length=0.02, angle=90) lines(range(y.b), range(y.b)) plot(y.c, y.c.hat[1,], xlab="Observed y.c", ylab="Fitted & predicted y.c", xlim=range(y), ylim=range(y.hat), main="") arrows(y.c[-sub.3], y.c.hat[1,-sub.3], y.c[-sub.3], y.c.hat[2,-sub.3], length=0.02, angle=90) arrows(y.c[-sub.3], y.c.hat[1,-sub.3], y.c[-sub.3], y.c.hat[3,-sub.3], length=0.02, angle=90) 
lines(range(y.c), range(y.c))

## End(Not run)

spMvGLM Function for fitting multivariate Bayesian generalized linear spatial regression models

Description

The function spMvGLM fits multivariate Bayesian generalized linear spatial regression models. Given a set of knots, spMvGLM will also fit a predictive process model (see references below).

Usage

spMvGLM(formula, family="binomial", weights, data = parent.frame(), coords, knots,
        starting, tuning, priors, cov.model, amcmc, n.samples,
        verbose=TRUE, n.report=100, ...)

Arguments

formula a list of q symbolic regression model descriptions to be fit. See example below.

family currently only supports binomial and poisson data using the logit and log link functions, respectively.

weights an optional n × q matrix of weights to be used in the fitting process. The order of the columns corresponds to the univariate models in the formula list. Weights correspond to the number of trials and offset for each location for the binomial and poisson family, respectively.

data an optional data frame containing the variables in the model. If not found in data, the variables are taken from environment(formula), typically the environment from which spMvGLM is called.

coords an n × 2 matrix of the observation coordinates in R^2 (e.g., easting and northing).

knots either a m × 2 matrix of the predictive process knot coordinates in R^2 (e.g., easting and northing) or a vector of length two or three with the first and second elements recording the number of columns and rows in the desired knot grid. The third, optional, element sets the offset of the outermost knots from the extent of the coords.

starting a list with each tag corresponding to a parameter name. Valid tags are beta, A, phi, nu, and w. The value portion of each tag is a vector that holds the parameter's starting values and are of length p for beta (where p is the total number of regression coefficients in the multivariate model), q(q+1)/2 for A, and q for phi and nu.
Here, A holds the lower-triangle elements in column major ordering of the Cholesky square root of the spatial cross-covariance matrix. If the predictive process is used then w must be of length qm; otherwise, it must be of length qn. Alternatively, w can be set as a scalar, in which case the value is repeated.

tuning a list with tags beta, A, phi, nu, and w. The value portion of each tag defines the variance of the Metropolis sampler Normal proposal distribution. The value portion of these tags is of length p for beta, q(q+1)/2 for A, and q for phi and nu. Here, A holds the tuning values corresponding to the lower-triangle elements in column major ordering of the Cholesky square root of the spatial cross-covariance matrix. If the predictive process is used then w must be of length qm; otherwise, it must be of length qn. Alternatively, w can be set as a scalar, in which case the value is repeated. The tuning value for beta can be a vector of length p or, if an adaptive MCMC is not used, i.e., amcmc is not specified, the lower-triangle of the p × p Cholesky square-root of the desired proposal covariance matrix.

priors a list with each tag corresponding to a parameter name. Valid tags are beta.flat, beta.norm, K.iw, phi.unif, and nu.unif. If the regression coefficients are each assumed to follow a Normal distribution, i.e., beta.norm, then mean and variance hyperparameters are passed as the first and second list elements, respectively. If beta is assumed flat then no arguments are passed. The default is a flat prior. The spatial cross-covariance matrix K is assumed to follow an inverse-Wishart distribution, whereas the spatial decay phi and smoothness nu parameters are assumed to follow Uniform distributions. The hyperparameters of the inverse-Wishart are passed as a list of length two, with the first and second elements corresponding to the df and q × q scale matrix, respectively.
The hyperparameters of the Uniform are also passed as a list of vectors with the first and second list elements corresponding to the lower and upper support, respectively.

cov.model a quoted keyword that specifies the covariance function used to model the spatial dependence structure among the observations. Supported covariance model key words are: "exponential", "matern", "spherical", and "gaussian". See below for details.

amcmc a list with tags n.batch, batch.length, and accept.rate. Specifying this argument invokes an adaptive MCMC sampler; see Roberts and Rosenthal (2007) for an explanation.

n.samples the number of MCMC iterations. This argument is ignored if amcmc is specified.

verbose if TRUE, model specification and progress of the sampler is printed to the screen. Otherwise, nothing is printed to the screen.

n.report the interval to report Metropolis sampler acceptance and MCMC progress.

... currently no additional arguments.

Details

If a binomial model is specified the response vector is the number of successful trials at each location and weights is the total number of trials at each location.

For a poisson specification, the weights vector is the count offset, e.g., population, at each location. This differs from the glm offset argument which is passed as the log of this value.

A non-spatial model is fit when coords is not specified. See example below.

Value

An object of class spMvGLM, which is a list with the following tags:

coords the n × 2 matrix specified by coords.

knot.coords the m × 2 matrix as specified by knots.

p.beta.theta.samples a coda object of posterior samples for the defined parameters.

acceptance the Metropolis sampler acceptance rate. If amcmc is used then this will be a matrix of each parameter's acceptance rate at the end of each batch. Otherwise, the sampler is a Metropolis with a joint proposal of all parameters.
acceptance.w if this is a non-predictive process model and amcmc is used then this will be a matrix of the Metropolis sampler acceptance rate for each location's spatial random effect.

acceptance.w.knots if this is a predictive process model and amcmc is used then this will be a matrix of the Metropolis sampler acceptance rate for each knot's spatial random effect.

p.w.knots.samples a matrix that holds samples from the posterior distribution of the knots' spatial random effects. The rows of this matrix correspond to the q × m knot locations and the columns are the posterior samples. This is only returned if a predictive process model is used.

p.w.samples a matrix that holds samples from the posterior distribution of the locations' spatial random effects. The rows of this matrix correspond to the q × n point observations and the columns are the posterior samples.

The return object might include additional data used for subsequent prediction and/or model fit evaluation.

Author(s)

<NAME> <<EMAIL>>, <NAME> <<EMAIL>>

References

<NAME>., <NAME>, and <NAME>. (2008) A Bayesian approach to quantifying uncertainty in multi-source forest area estimates. Environmental and Ecological Statistics, 15:241–258.

<NAME>., <NAME>, <NAME>, and <NAME>. (2008) Gaussian Predictive Process Models for Large Spatial Datasets. Journal of the Royal Statistical Society Series B, 70:825–848.

<NAME>., <NAME>, <NAME>, and <NAME>. (2009) Improving the performance of predictive process modeling for large datasets. Computational Statistics and Data Analysis, 53:2873–2884.

<NAME>., <NAME>, and <NAME>. (2015) spBayes for large univariate and multivariate point-referenced spatio-temporal data models. Journal of Statistical Software, 63:1–28. https://www.jstatsoft.org/article/view/v063i13.

<NAME>., <NAME>., and <NAME>. (2004). Hierarchical modeling and analysis for spatial data. Chapman and Hall/CRC Press, Boca Raton, FL.

<NAME>. and <NAME>. (2006) Examples of Adaptive MCMC.
http://probability.ca/ jeff/ftpdir/adaptex.pdf Preprint. See Also spGLM Examples ## Not run: library(MBA) ##Some useful functions rmvn <- function(n, mu=0, V = matrix(1)){ p <- length(mu) if(any(is.na(match(dim(V),p)))){stop("Dimension problem!")} D <- chol(V) t(matrix(rnorm(n*p), ncol=p)%*%D + rep(mu,rep(n,p))) } set.seed(1) ##Generate some data n <- 25 ##number of locations q <- 2 ##number of outcomes at each location nltr <- q*(q+1)/2 ##number of triangular elements in the cross-covariance matrix coords <- cbind(runif(n,0,1), runif(n,0,1)) ##Parameters for the bivariate spatial random effects theta <- rep(3/0.5,q) A <- matrix(0,q,q) A[lower.tri(A,TRUE)] <- c(1,-1,0.25) K <- A%*%t(A) Psi <- diag(0,q) C <- mkSpCov(coords, K, Psi, theta, cov.model="exponential") w <- rmvn(1, rep(0,nrow(C)), C) w.1 <- w[seq(1,length(w),q)] w.2 <- w[seq(2,length(w),q)] ##Covariate portion of the mean x.1 <- cbind(1, rnorm(n)) x.2 <- cbind(1, rnorm(n)) x <- mkMvX(list(x.1, x.2)) B.1 <- c(1,-1) B.2 <- c(-1,1) B <- c(B.1, B.2) weight <- 10 ##i.e., trials p <- 1/(1+exp(-(x%*%B+w))) y <- rbinom(n*q, size=rep(weight,n*q), prob=p) y.1 <- y[seq(1,length(y),q)] y.2 <- y[seq(2,length(y),q)] ##Call spMvLM fit <- glm((y/weight)~x-1, weights=rep(weight, n*q), family="binomial") beta.starting <- coefficients(fit) beta.tuning <- t(chol(vcov(fit))) A.starting <- diag(1,q)[lower.tri(diag(1,q), TRUE)] n.batch <- 100 batch.length <- 50 n.samples <- n.batch*batch.length starting <- list("beta"=beta.starting, "phi"=rep(3/0.5,q), "A"=A.starting, "w"=0) tuning <- list("beta"=beta.tuning, "phi"=rep(1,q), "A"=rep(0.1,length(A.starting)), "w"=0.5) priors <- list("beta.Flat", "phi.Unif"=list(rep(3/0.75,q), rep(3/0.25,q)), "K.IW"=list(q+1, diag(0.1,q))) m.1 <- spMvGLM(list(y.1~x.1-1, y.2~x.2-1), coords=coords, weights=matrix(weight,n,q), starting=starting, tuning=tuning, priors=priors, amcmc=list("n.batch"=n.batch,"batch.length"=batch.length,"accept.rate"=0.43), cov.model="exponential", n.report=25) burn.in <- 
0.75*n.samples sub.samps <- burn.in:n.samples print(summary(window(m.1$p.beta.theta.samples, start=burn.in))$quantiles[,c(3,1,5)]) beta.hat <- t(m.1$p.beta.theta.samples[sub.samps,1:length(B)]) w.hat <- m.1$p.w.samples[,sub.samps] p.hat <- 1/(1+exp(-(x%*%beta.hat+w.hat))) y.hat <- apply(p.hat, 2, function(x){rbinom(n*q, size=rep(weight, n*q), prob=p)}) y.hat.mu <- apply(y.hat, 1, mean) ##Unstack to get each response variable fitted values y.hat.mu.1 <- y.hat.mu[seq(1,length(y.hat.mu),q)] y.hat.mu.2 <- y.hat.mu[seq(2,length(y.hat.mu),q)] ##Take a look par(mfrow=c(2,2)) surf <- mba.surf(cbind(coords,y.1),no.X=100, no.Y=100, extend=TRUE)$xyz.est image(surf, main="Observed y.1 positive trials") contour(surf, add=TRUE) points(coords) zlim <- range(surf[["z"]], na.rm=TRUE) surf <- mba.surf(cbind(coords,y.hat.mu.1),no.X=100, no.Y=100, extend=TRUE)$xyz.est image(surf, zlim=zlim, main="Fitted y.1 positive trials") contour(surf, add=TRUE) points(coords) surf <- mba.surf(cbind(coords,y.2),no.X=100, no.Y=100, extend=TRUE)$xyz.est image(surf, main="Observed y.2 positive trials") contour(surf, add=TRUE) points(coords) zlim <- range(surf[["z"]], na.rm=TRUE) surf <- mba.surf(cbind(coords,y.hat.mu.2),no.X=100, no.Y=100, extend=TRUE)$xyz.est image(surf, zlim=zlim, main="Fitted y.2 positive trials") contour(surf, add=TRUE) points(coords) ## End(Not run) spMvLM Function for fitting multivariate Bayesian spatial regression models Description The function spMvLM fits Gaussian multivariate Bayesian spatial regression models. Given a set of knots, spMvLM will also fit a predictive process model (see references below). Usage spMvLM(formula, data = parent.frame(), coords, knots, starting, tuning, priors, cov.model, modified.pp = TRUE, amcmc, n.samples, verbose=TRUE, n.report=100, ...) Arguments formula a list of q symbolic regression model descriptions to be fit. See example below. data an optional data frame containing the variables in the model. 
If not found in data, the variables are taken from environment(formula), typically the environment from which spMvLM is called.

coords an n × 2 matrix of the observation coordinates in R^2 (e.g., easting and northing).

knots either a m × 2 matrix of the predictive process knot coordinates in R^2 (e.g., easting and northing) or a vector of length two or three with the first and second elements recording the number of columns and rows in the desired knot grid. The third, optional, element sets the offset of the outermost knots from the extent of the coords.

starting a list with tags corresponding to beta, A, phi, and nu. Depending on the specification of the non-spatial residual, tags are L or Psi for a block diagonal or diagonal covariance matrix, respectively. The value portion of each tag is a vector that holds the parameter's starting values and are of length p for beta (where p is the total number of regression coefficients in the multivariate model), q(q+1)/2 for A and L, and q for phi and nu. Here, A and L hold the lower-triangle elements in column major ordering of the Cholesky square root of the spatial and non-spatial cross-covariance matrices, respectively.

tuning a list with tags A, phi, and nu. Depending on the specification of the non-spatial residual, tags are L or Psi for a block diagonal or diagonal covariance matrix, respectively. The value portion of each tag defines the variance of the Metropolis sampler Normal proposal distribution. For A and L the vectors are of length q(q+1)/2, and of length q for Psi, phi, and nu.

priors a list with tags beta.flat, beta.norm, K.iw, Psi.iw, Psi.ig, phi.unif, and nu.unif. If the regression coefficients, i.e., the beta vector, are assumed to follow a multivariate Normal distribution then pass the hyperparameters as a list of length two with the first and second elements corresponding to the mean vector and positive definite covariance matrix, respectively. If beta is assumed flat then no arguments are passed. The default is a flat prior.
Use Psi.iw if the non-spatial residual covariance matrix is assumed block diagonal. Otherwise, if the non-spatial residual covariance matrix is assumed diagonal then each of the q diagonal elements is assumed to follow an inverse-Gamma, in which case use Psi.ig. The hyperparameters of the inverse-Wishart, i.e., for the cross-covariance matrices AA' (K.iw) and LL' (Psi.iw), are passed as a list of length two, with the first and second elements corresponding to the df and q × q scale matrix, respectively. If Psi.ig is specified, the inverse-Gamma hyperparameters of the diagonal variance elements are passed using a list of length two with the first and second list elements consisting of vectors of the q shape and scale hyperparameters, respectively. The hyperparameters of the Uniform phi.unif and nu.unif are also passed as a list of vectors with the first and second list elements corresponding to the lower and upper support, respectively.

cov.model a quoted keyword that specifies the covariance function used to model the spatial dependence structure among the observations. Supported covariance model key words are: "exponential", "matern", "spherical", and "gaussian". See below for details.

modified.pp a logical value indicating if the modified predictive process should be used (see references below for details). Note, if a predictive process model is not used (i.e., knots is not specified) then this argument is ignored.

amcmc a list with tags n.batch, batch.length, and accept.rate. Specifying this argument invokes an adaptive MCMC sampler; see Roberts and Rosenthal (2007) for an explanation.

n.samples the number of MCMC iterations. This argument is ignored if amcmc is specified.

verbose if TRUE, model specification and progress of the sampler is printed to the screen. Otherwise, nothing is printed to the screen.

n.report the interval to report Metropolis acceptance and MCMC progress.

... currently no additional arguments.
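The choice between a block diagonal and a diagonal non-spatial residual covariance shows up in which tags appear in the starting list and the matching prior. A brief sketch with illustrative values (not taken from the package examples):

```r
q <- 2
nltr <- q*(q+1)/2  ## number of lower-triangle elements

## Diagonal residual covariance: supply Psi (q variances) and pair it
## with an inverse-Gamma prior via the Psi.ig tag.
starting.diag <- list("phi"=rep(3/0.5, q), "A"=rep(1, nltr), "Psi"=rep(1, q))

## Block diagonal residual covariance: supply L (lower-triangle of the
## Cholesky square root of the residual cross-covariance) and pair it
## with an inverse-Wishart prior via the Psi.iw tag.
starting.block <- list("phi"=rep(3/0.5, q), "A"=rep(1, nltr), "L"=rep(1, nltr))
```

Either list would be passed to spMvLM's starting argument; omitting both Psi and L gives the no nugget model described in the Details.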
Details Model parameters can be fixed at their starting values by setting their tuning values to zero. The no nugget model is specified by removing Psi and L from the starting list. Value An object of class spMvLM, which is a list with the following tags: coords the n × 2 matrix specified by coords. knot.coords the m × 2 matrix as specified by knots. p.theta.samples a coda object of posterior samples for the defined parameters. acceptance the Metropolis sampling acceptance percent. Reported at batch.length or n.report intervals for amcmc specified and non-specified, respectively The return object might include additional data used for subsequent prediction and/or model fit evaluation. Author(s) <NAME> <<EMAIL>>, <NAME> <<EMAIL>> References <NAME>., <NAME>, <NAME>, and <NAME>. (2008) Gaussian Predictive Process Models for Large Spatial Datasets. Journal of the Royal Statistical Society Series B, 70:825–848. <NAME>., <NAME>., and <NAME>. (2004). Hierarchical modeling and analysis for spatial data. Chapman and Hall/CRC Press, Boca Raton, Fla. <NAME>., <NAME>, and <NAME>. (2015) spBayes for large univariate and multivariate point-referenced spatio-temporal data models. Journal of Statistical Software, 63:1–28. https: //www.jstatsoft.org/article/view/v063i13. <NAME>., <NAME>, <NAME>, and <NAME>. (2009) Improving the performance of pre- dictive process modeling for large datasets. Computational Statistics and Data Analysis, 53:2873– 2884. <NAME>., <NAME>, <NAME>, and <NAME>. (2008) Bayesian multivariate process modeling for prediction of forest attributes. Journal of Agricultural, Biological, and Environmental Statistics, 13:60–83. 
See Also spLM Examples ## Not run: rmvn <- function(n, mu=0, V = matrix(1)){ p <- length(mu) if(any(is.na(match(dim(V),p)))){stop("Dimension problem!")} D <- chol(V) t(matrix(rnorm(n*p), ncol=p)%*%D + rep(mu,rep(n,p))) } set.seed(1) ##Generate some data n <- 25 ##number of locations q <- 2 ##number of outcomes at each location nltr <- q*(q+1)/2 ##number of triangular elements in the cross-covariance matrix coords <- cbind(runif(n,0,1), runif(n,0,1)) ##Parameters for the bivariate spatial random effects theta <- rep(3/0.5,q) A <- matrix(0,q,q) A[lower.tri(A,TRUE)] <- c(1,-1,0.25) K <- A%*%t(A) Psi <- diag(0,q) C <- mkSpCov(coords, K, Psi, theta, cov.model="exponential") w <- rmvn(1, rep(0,nrow(C)), C) w.1 <- w[seq(1,length(w),q)] w.2 <- w[seq(2,length(w),q)] ##Covariate portion of the mean x.1 <- cbind(1, rnorm(n)) x.2 <- cbind(1, rnorm(n)) x <- mkMvX(list(x.1, x.2)) B.1 <- c(1,-1) B.2 <- c(-1,1) B <- c(B.1, B.2) Psi <- diag(c(0.1, 0.5)) y <- rnorm(n*q, x%*%B+w, diag(n)%x%Psi) y.1 <- y[seq(1,length(y),q)] y.2 <- y[seq(2,length(y),q)] ##Call spMvLM A.starting <- diag(1,q)[lower.tri(diag(1,q), TRUE)] n.samples <- 1000 starting <- list("phi"=rep(3/0.5,q), "A"=A.starting, "Psi"=rep(1,q)) tuning <- list("phi"=rep(1,q), "A"=rep(0.01,length(A.starting)), "Psi"=rep(0.01,q)) priors <- list("beta.Flat", "phi.Unif"=list(rep(3/0.75,q), rep(3/0.25,q)), "K.IW"=list(q+1, diag(0.1,q)), "Psi.ig"=list(c(2,2), c(0.1,0.1))) m.1 <- spMvLM(list(y.1~x.1-1, y.2~x.2-1), coords=coords, starting=starting, tuning=tuning, priors=priors, n.samples=n.samples, cov.model="exponential", n.report=100) burn.in <- 0.75*n.samples m.1 <- spRecover(m.1, start=burn.in) round(summary(m.1$p.theta.recover.samples)$quantiles[,c(3,1,5)],2) round(summary(m.1$p.beta.recover.samples)$quantiles[,c(3,1,5)],2) m.1.w.hat <- summary(mcmc(t(m.1$p.w.recover.samples)))$quantiles[,c(3,1,5)] m.1.w.1.hat <- m.1.w.hat[seq(1, nrow(m.1.w.hat), q),] m.1.w.2.hat <- m.1.w.hat[seq(2, nrow(m.1.w.hat), q),] par(mfrow=c(1,2)) 
plot(w.1, m.1.w.1.hat[,1], xlab="Observed w.1", ylab="Fitted w.1", xlim=range(w), ylim=range(m.1.w.hat), main="Spatial random effects w.1") arrows(w.1, m.1.w.1.hat[,1], w.1, m.1.w.1.hat[,2], length=0.02, angle=90) arrows(w.1, m.1.w.1.hat[,1], w.1, m.1.w.1.hat[,3], length=0.02, angle=90) lines(range(w), range(w)) plot(w.2, m.1.w.2.hat[,1], xlab="Observed w.2", ylab="Fitted w.2", xlim=range(w), ylim=range(m.1.w.hat), main="Spatial random effects w.2") arrows(w.2, m.1.w.2.hat[,1], w.2, m.1.w.2.hat[,2], length=0.02, angle=90) arrows(w.2, m.1.w.2.hat[,1], w.2, m.1.w.2.hat[,3], length=0.02, angle=90) lines(range(w), range(w)) ## End(Not run) spPredict Function for new locations given a model object Description The function spPredict collects posterior predictive samples for a set of new locations given a spLM, spMvLM, spGLM, spMvGLM, spMisalignLM, spMisalignGLM, bayesGeostatExact, bayesLMConjugate bayesLMRef or spSVC object. Usage spPredict(sp.obj, pred.coords, pred.covars, joint=FALSE, start=1, end, thin=1, verbose=TRUE, n.report=100, n.omp.threads=1, ...) Arguments sp.obj an object returned by spLM, spMvLM, spGLM, spMvGLM, spMisalignLM, spMisalignGLM, bayesGeostatExact, bayesLMConjugate or bayesLMRef. For spSVC, sp.obj is an object from spRecover. pred.coords for spLM, spMvLM, spGLM, spMvGLM, and bayesGeostatExact pred.coords is a n∗ × 2 matrix of n∗ prediction location coordinates in R2 (e.g., easting and northing). For spMisalignLM and spMisalignGLM pred.coords is a list of q n∗i × 2 matrices of prediction location coordinates where i = (1, 2, . . . , q). For spSVC pred.coords is an n∗ × m matrix of n∗ prediction location coordinates in Rm . pred.covars for spLM, spMvLM, spGLM, spMvGLM, bayesGeostatExact, bayesLMConjugate, bayesLMRef, and spSVC pred.covars is a n∗ × p design matrix associated with the new locations (including the intercept if one is specified in sp.obj’s formula argument). 
If this is a multivariate prediction defined by q models, i.e., for spMvLM or spMvGLM, the multivariate design matrix can be created by passing a list of the q univariate design matrices to the mkMvX function. For spMisalignLM and spMisalignGLM pred.covars is a list of q n∗i × pi design matrices where i = (1, 2, . . . , q) joint specifies whether posterior samples should be drawn from the joint or point-wise predictive distribution. This argument is only implemented for spSVC. Predic- tion for all other models uses the point-wise posterior predictive distribution. start specifies the first sample included in the composition sampling. end specifies the last sample included in the composition. The default is to use all posterior samples in sp.obj. thin a sample thinning factor. The default of 1 considers all samples between start and end. For example, if thin = 10 then 1 in 10 samples are considered between start and end. verbose if TRUE, model specification and progress of the sampler is printed to the screen. Otherwise, nothing is printed to the screen. n.report the interval to report sampling progress. n.omp.threads a positive integer indicating the number of threads to use for SMP parallel pro- cessing. The package must be compiled for OpenMP support. For most Intel- based machines, we recommend setting n.omp.threads up to the number of hyperthreaded cores. This argument is only implemented for spSVC. ... currently no additional arguments. Value p.y.predictive.samples a matrix that holds the response variable(s) posterior predictive samples. For multivariate models spMvLM or spMvGLM the rows of this matrix correspond to the predicted locations and the columns are the posterior predictive samples. If prediction is for q response variables the p.y.predictive.samples matrix has qn∗ rows, where n∗ is the number of prediction locations. The predictions for locations are held in rows 1 : q, (q + 1) : 2q, . . . 
, ((n∗−1)q + 1) : qn∗ (i.e., the samples for the first location's q response variables are in rows 1 : q, second location in rows (q + 1) : 2q, etc.).

For spMisalignLM and spMisalignGLM the posterior predictive samples are organized differently in p.y.predictive.samples, with the first response variable's n∗1 locations held in rows 1, . . . , n∗1, then the next response variable's samples held in rows (n∗1 + 1), . . . , (n∗1 + n∗2), etc.

For spSVC given the r space-varying coefficients, p.y.predictive.samples has rn∗ rows and the columns are the posterior predictive samples. The predictions for coefficients are held in rows 1 : r, (r + 1) : 2r, . . . , ((n∗−1)r + 1) : rn∗ (i.e., the samples for the first location's r regression coefficients are in rows 1 : r, second location in rows (r + 1) : 2r, etc.).

For spGLM and spMisalignGLM the p.y.predictive.samples matrix holds posterior predictive samples of 1/(1+exp(−x(s)'β − w(s))) and exp(x(s)'β + w(s)) for family binomial and poisson, respectively. Here s indexes the prediction location, β is the vector of regression coefficients, and w is the associated spatial random effect. These values can be fed directly into rbinom or rpois to generate the realization from the respective distribution.

p.w.predictive.samples a matrix organized the same as p.y.predictive.samples, that holds the spatial random effects posterior predictive samples.

p.w.predictive.samples.list only returned for spSVC. This provides p.w.predictive.samples in a different (more convenient) form. Elements in this list hold samples for each of the r coefficients. List element names indicate either the coefficient index or name specified in spSVC's svc.cols argument. The sample matrices are n∗ rows and predictive samples along the columns.

p.tilde.beta.predictive.samples.list only returned for spSVC. Like p.w.predictive.samples.list but with the addition of the corresponding β posterior samples (i.e., β + w(s)).
center.scale.pred.covars only returned for the spSVC when its center.scale argument is TRUE. This is the prediction design matrix centered and scaled with respect to column means and variances of the design matrix used to estimate model parameters, i.e., the one defined in spSVC’s formula argument. Author(s) <NAME> <<EMAIL>>, <NAME> <<EMAIL>> References <NAME>., <NAME>., and <NAME>. (2004). Hierarchical modeling and analysis for spatial data. Chapman and Hall/CRC Press, Boca Raton, FL. <NAME>., <NAME>, and <NAME>. (2015) spBayes for large univariate and multivariate point-referenced spatio-temporal data models. Journal of Statistical Software, 63:1–28. https: //www.jstatsoft.org/article/view/v063i13. <NAME>. and <NAME> (2019) Bayesian spatially varying coefficient models in the spBayes R package. https://arxiv.org/abs/1903.03028. Examples ## Not run: rmvn <- function(n, mu=0, V = matrix(1)){ p <- length(mu) if(any(is.na(match(dim(V),p)))) stop("Dimension problem!") D <- chol(V) t(matrix(rnorm(n*p), ncol=p)%*%D + rep(mu,rep(n,p))) } set.seed(1) n <- 200 coords <- cbind(runif(n,0,1), runif(n,0,1)) X <- as.matrix(cbind(1, rnorm(n))) B <- as.matrix(c(1,5)) p <- length(B) sigma.sq <- 10 tau.sq <- 0.01 phi <- 3/0.5 D <- as.matrix(dist(coords)) R <- exp(-phi*D) w <- rmvn(1, rep(0,n), sigma.sq*R) y <- rnorm(n, X%*%B + w, sqrt(tau.sq)) ##partition the data for out of sample prediction mod <- 1:100 y.mod <- y[mod] X.mod <- X[mod,] coords.mod <- coords[mod,] n.samples <- 1000 starting <- list("phi"=3/0.5, "sigma.sq"=50, "tau.sq"=1) tuning <- list("phi"=0.1, "sigma.sq"=0.1, "tau.sq"=0.1) priors <- list("beta.Flat", "phi.Unif"=c(3/1, 3/0.1), "sigma.sq.IG"=c(2, 5), "tau.sq.IG"=c(2, 0.01)) cov.model <- "exponential" m.1 <- spLM(y.mod~X.mod-1, coords=coords.mod, starting=starting, tuning=tuning, priors=priors, cov.model=cov.model, n.samples=n.samples) m.1.pred <- spPredict(m.1, pred.covars=X, pred.coords=coords, start=0.5*n.samples) y.hat <- apply(m.1.pred$p.y.predictive.samples, 
1, mean) quant <- function(x){quantile(x, prob=c(0.025, 0.5, 0.975))} y.hat <- apply(m.1.pred$p.y.predictive.samples, 1, quant) plot(y, y.hat[2,], pch=19, cex=0.5, xlab="observed y", ylab="predicted y") arrows(y[-mod], y.hat[2,-mod], y[-mod], y.hat[1,-mod], angle=90, length=0.05) arrows(y[-mod], y.hat[2,-mod], y[-mod], y.hat[3,-mod], angle=90, length=0.05) ## End(Not run) spRecover Function for recovering regression coefficients and spatial random ef- fects for spLM, spMvLM, spMisalignLM, spSVC using composition sam- pling Description Function for recovering regression coefficients and spatial random effects for spLM, spMvLM, and spMisalignLM using composition sampling. Usage spRecover(sp.obj, get.beta=TRUE, get.w=TRUE, start=1, end, thin=1, verbose=TRUE, n.report=100, n.omp.threads=1, ...) Arguments sp.obj an object returned by spLM, spMvLM, spMisalignLM, or spSVC. get.beta if TRUE, regression coefficients will be recovered. get.w if TRUE, spatial random effects will be recovered. start specifies the first sample included in the composition sampling. end specifies the last sample included in the composition. The default is to use all posterior samples in sp.obj. thin a sample thinning factor. The default of 1 considers all samples between start and end. For example, if thin = 10 then 1 in 10 samples are considered between start and end. verbose if TRUE, model specification and progress of the sampler is printed to the screen. Otherwise, nothing is printed to the screen. n.report the interval to report sampling progress. n.omp.threads a positive integer indicating the number of threads to use for SMP parallel pro- cessing. The package must be compiled for OpenMP support. For most Intel- based machines, we recommend setting n.omp.threads up to the number of hyperthreaded cores. This argument is only implemented for spSVC. ... currently no additional arguments. 
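The recovered sample matrices for multivariate models are stacked location-major, so rows 1 : q belong to the first location, rows (q + 1) : 2q to the second, and so on. A toy base-R sketch of pulling such a stacked matrix apart with seq (toy numbers, not actual spBayes output):

```r
## Stand-in for a recovered spatial random effects matrix with q
## responses stacked location-major: rows 1:q are location 1, etc.
q <- 2; n <- 3
w.samples <- matrix(1:(q*n*4), nrow=q*n)  ## q*n rows, 4 posterior draws

w.1 <- w.samples[seq(1, q*n, q), ]  ## first response at each location
w.2 <- w.samples[seq(2, q*n, q), ]  ## second response at each location
```

The same seq-based indexing is used throughout the package's Examples sections to unstack fitted values and random effects.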
Value

The input sp.obj with posterior samples of regression coefficients and/or spatial random effects appended. Tags:

p.theta.recover.samples  those p.theta.samples used in the composition sampling.
p.beta.recover.samples  a coda object of regression coefficients posterior samples.
p.w.recover.samples  a coda object of spatial random effects posterior samples. Rows correspond to locations' random effects and columns are posterior samples. Given q responses, the p.w.recover.samples matrix for spMvLM has qn rows. The recovered random effects for locations are held in rows 1:q, (q+1):2q, ..., ((n-1)q+1):qn (i.e., the samples for the first location's q response variables are in rows 1:q, the second location's in rows (q+1):2q, etc.). For spSVC, given the r space-varying coefficients, p.w.recover.samples has rn rows. The recovered random effects for locations are held in rows 1:r, (r+1):2r, ..., ((n-1)r+1):rn (i.e., the samples for the first location's r regression coefficients are in rows 1:r, the second location's in rows (r+1):2r, etc.).
p.w.recover.samples.list  only returned for spSVC. This provides p.w.recover.samples in a different (more convenient) form. Elements in this list hold samples for each of the r coefficients. List element names indicate either the coefficient index or the name specified in spSVC's svc.cols argument. The sample matrices have n rows with posterior samples along the columns.
p.tilde.beta.recover.samples.list  only returned for spSVC. Like p.w.recover.samples.list but with the addition of the corresponding β posterior samples (i.e., β + w(s)).
p.y.samples  only returned for spSVC. These posterior samples are the fitted values, with locations on the rows and samples on the columns. For a given sample, the fitted value for the i-th location is N(x(s_i)′β + z(s_i)′w(s_i), τ²).

Author(s)

<NAME>. Finley <<EMAIL>>, <NAME> <<EMAIL>>

References

<NAME>., <NAME>., and <NAME>. (2004). Hierarchical modeling and analysis for spatial data. Chapman and Hall/CRC Press, Boca Raton, FL.

<NAME>., <NAME>, and <NAME>. (2015) spBayes for large univariate and multivariate point-referenced spatio-temporal data models. Journal of Statistical Software, 63:1-28. https://www.jstatsoft.org/article/view/v063i13.

<NAME>. and <NAME> (2019) Bayesian spatially varying coefficient models in the spBayes R package. https://arxiv.org/abs/1903.03028.

Examples

## Not run: 
rmvn <- function(n, mu=0, V = matrix(1)){
  p <- length(mu)
  if(any(is.na(match(dim(V),p)))) stop("Dimension problem!")
  D <- chol(V)
  t(matrix(rnorm(n*p), ncol=p)%*%D + rep(mu,rep(n,p)))
}

set.seed(1)
n <- 50
coords <- cbind(runif(n,0,1), runif(n,0,1))
X <- as.matrix(cbind(1, rnorm(n)))
B <- as.matrix(c(1,5))
p <- length(B)

sigma.sq <- 10
tau.sq <- 0.01
phi <- 3/0.5

D <- as.matrix(dist(coords))
R <- exp(-phi*D)
w <- rmvn(1, rep(0,n), sigma.sq*R)
y <- rnorm(n, X%*%B + w, sqrt(tau.sq))

n.samples <- 1000
starting <- list("phi"=3/0.5, "sigma.sq"=50, "tau.sq"=1)
tuning <- list("phi"=0.1, "sigma.sq"=0.1, "tau.sq"=0.1)
priors <- list("beta.Flat", "phi.Unif"=c(3/1, 3/0.1),
               "sigma.sq.IG"=c(2, 5), "tau.sq.IG"=c(2, 0.01))
cov.model <- "exponential"

m.1 <- spLM(y~X-1, coords=coords, starting=starting, tuning=tuning,
            priors=priors, cov.model=cov.model, n.samples=n.samples)

m.1 <- spRecover(m.1, start=0.5*n.samples, thin=2)

summary(window(m.1$p.beta.recover.samples))

w.hat <- apply(m.1$p.w.recover.samples, 1, mean)
plot(w, w.hat, xlab="Observed w", ylab="Fitted w")

## End(Not run)

spSVC    Function for fitting univariate Bayesian spatially-varying coefficient regression models

Description

The function spSVC fits Gaussian univariate Bayesian spatially-varying coefficient regression models.

Usage

spSVC(formula, data = parent.frame(), svc.cols=1, coords, priors, starting,
      tuning, cov.model, center.scale=FALSE, amcmc, n.samples,
      n.omp.threads = 1, verbose=TRUE, n.report=100, ...)

Arguments

formula  a symbolic description of the regression model to be fit. See example below.
data  an optional data frame containing the variables in the model. If not found in data, the variables are taken from environment(formula), typically the environment from which spSVC is called.

svc.cols  a vector indicating which columns of the regression design matrix X should be space-varying. svc.cols can be an integer vector with values indicating X columns, or a character vector with values corresponding to X column names. The svc.cols default argument of 1 results in a space-varying intercept model (assuming an intercept is specified in the first column of the design matrix).

coords  an n×m matrix of the observation coordinates in R^m (e.g., R^2 might be easting and northing).

priors  a list with each tag corresponding to a parameter name. Valid tags are sigma.sq.ig, k.iw, tau.sq.ig, phi.unif, nu.unif, beta.norm, and beta.flat. Scalar variance parameters sigma.sq and tau.sq are assumed to follow an inverse-Gamma distribution. The cross-covariance matrix parameter K is assumed to follow an inverse-Wishart. The spatial decay phi and smoothness nu parameters are assumed to follow Uniform distributions. The regression coefficient priors can be either flat or multivariate Normal.

There are two specifications for the Gaussian Process (GP) on the svc.cols columns: 1) univariate GPs on each column; 2) a multivariate GP on the r columns (i.e., where r equals length(svc.cols)). If univariate GPs are desired, specify sigma.sq.ig as a list of length two with the first and second elements corresponding to the length-r shape and scale hyperparameter vectors, respectively. If a multivariate GP is desired, specify k.iw as a list of length two with the first and second elements corresponding to the degrees-of-freedom df and the r×r scale matrix, respectively. This inverse-Wishart prior is on the r×r multivariate GP cross-covariance matrix defined as K = AA′, where A is the lower-triangle Cholesky square root of K.

If the regression coefficients, i.e., the beta vector, are assumed to follow a multivariate Normal distribution, then pass the hyperparameters as a list of length two with the first and second elements corresponding to the mean vector and positive definite covariance matrix, respectively. If beta is assumed flat then no arguments are passed. The default is a flat prior. Similarly, phi and nu are specified as lists of length two with the first and second elements holding length-r vectors of lower and upper bounds of the Uniforms' support, respectively.

starting  a list with each tag corresponding to a parameter name. Valid tags are beta, sigma.sq, A, tau.sq, phi, and nu. The value portion of each tag is the parameter's starting value(s). Starting values must be set for the r univariate or multivariate GP phi and nu. For univariate GPs, sigma.sq is specified as a vector of length r; for a multivariate GP, A is specified as a vector of length r(r+1)/2 that gives the lower-triangle elements, in column-major ordering, of the Cholesky square root of the cross-covariance matrix K = AA′. tau.sq is a single value. See Finley and Banerjee (2019) for more details.

tuning  a list with each tag corresponding to a parameter name. Valid tags are sigma.sq, A, tau.sq, phi, and nu. The value portion of each tag defines the variance of the Metropolis sampler's Normal proposal distribution. For sigma.sq, phi, and nu the tuning value vectors are of length r, and for A the vector is of length r(r+1)/2. Tuning vector elements correspond to starting vector elements. tau.sq is a single value.

cov.model  a quoted keyword that specifies the covariance function used to model the spatial dependence structure among the observations. Supported covariance model keywords are: "exponential", "matern", "spherical", and "gaussian". See below for details.

center.scale  if TRUE, non-constant columns of X are centered on zero and scaled to have variance one.
If spPredict is subsequently called, this centering and scaling is applied to pred.covars.

amcmc  a list with tags n.batch, batch.length, and accept.rate. Specifying this argument invokes an adaptive MCMC sampler; see Roberts and Rosenthal (2007) for an explanation.

n.samples  the number of MCMC iterations. This argument is ignored if amcmc is specified.

n.omp.threads  a positive integer indicating the number of threads to use for SMP parallel processing. The package must be compiled for OpenMP support. For most Intel-based machines, we recommend setting n.omp.threads up to the number of hyperthreaded cores.

verbose  if TRUE, model specification and progress of the sampler is printed to the screen. Otherwise, nothing is printed to the screen.

n.report  the interval to report Metropolis sampler acceptance and MCMC progress.

...  currently no additional arguments.

Details

Model parameters can be fixed at their starting values by setting their tuning values to zero.

The no-nugget model is specified by removing tau.sq from the starting list.

Value

An object of class spSVC, which is a list comprising:

coords  the n×m matrix specified by coords.
p.theta.samples  a coda object of posterior samples for the defined parameters.
acceptance  the Metropolis sampling acceptance percent. Reported at batch.length or n.report intervals when amcmc is specified and non-specified, respectively.

The return object will include additional objects used for subsequent parameter recovery, prediction, and model fit evaluation using spRecover, spPredict, and spDiag, respectively.

Author(s)

<NAME> <<EMAIL>>, <NAME> <<EMAIL>>

References

<NAME>., <NAME>, and <NAME>. (2015) spBayes for large univariate and multivariate point-referenced spatio-temporal data models. Journal of Statistical Software, 63:1-28. https://www.jstatsoft.org/article/view/v063i13.

<NAME>. and <NAME>. (2006). Examples of Adaptive MCMC. http://probability.ca/jeff/ftpdir/adaptex.pdf.

<NAME>. and <NAME> (2019) Bayesian spatially varying coefficient models in the spBayes R package. https://arxiv.org/abs/1903.03028.

See Also

spRecover, spDiag, spPredict

Examples

## Not run: 
library(Matrix)

rmvn <- function(n, mu=0, V = matrix(1)){
  p <- length(mu)
  if(any(is.na(match(dim(V),p)))) stop("Dimension problem!")
  D <- chol(V)
  t(matrix(rnorm(n*p), ncol=p)%*%D + rep(mu,rep(n,p)))
}

##Assume both columns of X are space-varying and the two GPs don't covary
set.seed(1)
n <- 200
coords <- cbind(runif(n,0,1), runif(n,0,1))
X <- as.matrix(cbind(1, rnorm(n)))
colnames(X) <- c("x.1", "x.2")
Z <- t(bdiag(as.list(as.data.frame(t(X)))))
B <- as.matrix(c(1,5))
p <- length(B)

sigma.sq <- c(1,5)
tau.sq <- 1
phi <- 3/0.5

D <- as.matrix(dist(coords))
C <- exp(-phi*D)%x%diag(sigma.sq)

w <- rmvn(1, rep(0,p*n), C)
mu <- as.vector(X%*%B + Z%*%w)
y <- rnorm(n, mu, sqrt(tau.sq))

##fit a model to the simulated data
starting <- list("phi"=rep(3/0.5, p), "sigma.sq"=rep(1, p), "tau.sq"=1)
tuning <- list("phi"=rep(0.1, p), "sigma.sq"=rep(0.1, p), "tau.sq"=0.1)
cov.model <- "exponential"
priors <- list("phi.Unif"=list(rep(3/2, p), rep(3/0.0001, p)),
               "sigma.sq.IG"=list(rep(2, p), rep(2, p)),
               "tau.sq.IG"=c(2, 1))

##fit model
n.samples <- 2000

m.1 <- spSVC(y~X-1, coords=coords, starting=starting, svc.cols=c(1,2),
             tuning=tuning, priors=priors, cov.model=cov.model,
             n.samples=n.samples, n.omp.threads=4)

plot(m.1$p.theta.samples, density=FALSE)

##recover posterior samples
m.1 <- spRecover(m.1, start=floor(0.75*n.samples), thin=2, n.omp.threads=4)

summary(m.1$p.beta.recover.samples)
summary(m.1$p.theta.recover.samples)

##check fitted values
quant <- function(x){quantile(x, prob=c(0.025, 0.5, 0.975))}

##fitted y
y.hat <- apply(m.1$p.y.samples, 1, quant)

rng <- c(-15, 20)
plot(y, y.hat[2,], pch=19, cex=0.5, xlab="Fitted y", ylab="Observed y",
     xlim=rng, ylim=rng)
arrows(y, y.hat[2,], y, y.hat[1,], angle=90, length=0.05)
arrows(y, y.hat[2,], y, y.hat[3,], angle=90, length=0.05)
lines(rng, rng, col="blue")
##recovered w
w.hat <- apply(m.1$p.w.recover.samples, 1, quant)

w.1.indx <- seq(1, p*n, p)
w.2.indx <- seq(2, p*n, p)

par(mfrow=c(1,2))

rng <- c(-5,5)
plot(w[w.1.indx], w.hat[2,w.1.indx], pch=19, cex=0.5, xlab="Fitted w.1",
     ylab="Observed w.1", xlim=rng, ylim=rng)
arrows(w[w.1.indx], w.hat[2,w.1.indx], w[w.1.indx], w.hat[1,w.1.indx],
       angle=90, length=0.05)
arrows(w[w.1.indx], w.hat[2,w.1.indx], w[w.1.indx], w.hat[3,w.1.indx],
       angle=90, length=0.05)
lines(rng, rng, col="blue")

rng <- c(-10,10)
plot(w[w.2.indx], w.hat[2,w.2.indx], pch=19, cex=0.5, xlab="Fitted w.2",
     ylab="Observed w.2", xlim=rng, ylim=rng)
arrows(w[w.2.indx], w.hat[2,w.2.indx], w[w.2.indx], w.hat[1,w.2.indx],
       angle=90, length=0.05)
arrows(w[w.2.indx], w.hat[2,w.2.indx], w[w.2.indx], w.hat[3,w.2.indx],
       angle=90, length=0.05)
lines(rng, rng, col="blue")

## End(Not run)

SVCMvData.dat    Synthetic data from a space-varying coefficients model

Description

Data simulated from a space-varying coefficients model.

Usage

data(SVCMvData.dat)

Format

The data frame generated from the code in the example section below.

Examples

## Not run: 
##The dataset was generated with the code below.
library(Matrix)

rmvn <- function(n, mu=0, V = matrix(1)){
  p <- length(mu)
  if(any(is.na(match(dim(V),p)))) stop("Dimension problem!")
  D <- chol(V)
  t(matrix(rnorm(n*p), ncol=p)%*%D + rep(mu,rep(n,p)))
}

set.seed(1)
n <- 200
coords <- cbind(runif(n,0,1), runif(n,0,1))
colnames(coords) <- c("x.coords","y.coords")
X <- as.matrix(cbind(1, rnorm(n), rnorm(n)))
colnames(X) <- c("intercept","a","b")
Z <- t(bdiag(as.list(as.data.frame(t(X)))))

beta <- c(1, 10, -10)
p <- length(beta)

q <- 3
A <- matrix(0, q, q)
A[lower.tri(A, T)] <- c(1, -1, 0, 1, 1, 0.1)
K <- A%*%t(A)
K
cov2cor(K)

phi <- c(3/0.75, 3/0.5, 3/0.5)

Psi <- diag(0,q)
C <- mkSpCov(coords, K, Psi, phi, cov.model="exponential")

tau.sq <- 0.1

w <- rmvn(1, rep(0,q*n), C)
y <- rnorm(n, as.vector(X%*%beta + Z%*%w), sqrt(tau.sq))

w.0 <- w[seq(1, length(w), by=q)]
w.a <- w[seq(2, length(w), by=q)]
w.b <- w[seq(3, length(w), by=q)]

SVCMvData <- data.frame(cbind(coords, y, X[,2:3], w.0, w.a, w.b))

## End(Not run)

WEF.dat    Western Experimental Forest inventory data

Description

Data generated as part of a long-term research study on an experimental forest in central Oregon. This dataset holds the coordinates for all trees in the experimental forest. The typical stem measurements are recorded for each tree. Crown radius was measured at the cardinal directions for a subset of trees. Mean crown radius was calculated for all trees using a simple relationship between DBH, Height, and observed crown dimension.

Usage

data(WEF.dat)

Format

A data frame containing 2422 rows and 15 columns.

Zurich.dat    Zurichberg Forest inventory data

Description

Inventory data of the Zurichberg Forest, Switzerland (see Mandallaz 2008 for details). These data are provided with the kind authorization of the Forest Service of the Canton of Zurich. This dataset holds the coordinates for all trees in the Zurichberg Forest. Species (SPP), basal area (BAREA), diameter at breast height (DBH), and volume (VOL) are recorded for each tree. See species codes below.

Usage

data(Zurich.dat)

Format

A data frame containing 4954 rows and 6 columns.

Examples

## Not run: 
data(Zurich.dat)

coords <- Zurich.dat[,c("X_TREE", "Y_TREE")]

spp.name <- c("beech","maple","ash","other broadleaves",
              "spruce","silver fir", "larch", "other coniferous")

spp.col <- c("yellow","red","orange","pink",
             "green","dark green","black","gray")

plot(coords, col=spp.col[Zurich.dat$SPP+1],
     pch=19, cex=0.5, ylab="Northing", xlab="Easting")

legend.coords <- c(23,240)

legend(legend.coords, pch=19, legend=spp.name,
       col=spp.col, bty="n")

## End(Not run)
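As a hypothetical follow-up to the example above (it relies only on the SPP and BAREA columns documented in the Description, and the species codes 0-7 implied by spp.name), one could tabulate the stand by species:

```
## Not run: 
## count trees and total basal area per species code (0-7);
## spp.name is the species label vector from the example above
table(Zurich.dat$SPP)
tapply(Zurich.dat$BAREA, Zurich.dat$SPP, sum)
## End(Not run)
```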
Package ‘rAmCharts4’    October 14, 2022

Title Interface to the JavaScript Library 'amCharts 4'
Version 1.6.0
Maintainer <NAME> <<EMAIL>>
Description Creates JavaScript charts. The charts can be included in 'Shiny' apps and R markdown documents, or viewed from the R console and 'RStudio' viewer. Based on the JavaScript library 'amCharts 4' and the R packages 'htmlwidgets' and 'reactR'. Currently available types of chart are: vertical and horizontal bar chart, radial bar chart, stacked bar chart, vertical and horizontal Dumbbell chart, line chart, scatter chart, range area chart, gauge chart, boxplot chart, pie chart, and 100% stacked bar chart.
URL https://github.com/stla/rAmCharts4
BugReports https://github.com/stla/rAmCharts4/issues
License GPL-3
Encoding UTF-8
Imports htmltools, htmlwidgets (>= 1.5.3), reactR, shiny, jsonlite, lubridate, minpack.lm, tools, base64enc, xml2, stringr, stats, grDevices
Suggests reshape2
RoxygenNote 7.2.1
NeedsCompilation no
Author <NAME> [aut, cre],
  <NAME> [ctb, cph] ('amCharts' library (https://www.amcharts.com/)),
  <NAME> [ctb, cph] ('SuperTinyIcons' library (https://github.com/edent/SuperTinyIcons/)),
  <NAME> [ctb, cph] ('regression-js' library (https://github.com/Tom-Alexander/regression-js))
Repository CRAN
Date/Publication 2022-09-22 10:10:02 UTC

R topics documented:

amAxisBreaks, amAxisLabels, amBarChart, amBoxplotChart, amButton, amColumn,
amDateAxisFormatter, amDumbbellChart, amFont, amGaugeChart, amHand,
amHorizontalBarChart, amHorizontalDumbbellChart, amImage, amLegend, amLine,
amLineChart, amPercentageBarChart, amPieChart, amRadialBarChart,
amRangeAreaChart, amScatterChart, amSegment, amStackedBarChart, amText,
amTooltip, amZoomButtons, rAmCharts4-adapters, rAmCharts4-imports,
rAmCharts4-shapes, rAmCharts4-shiny, tinyIcon, updateAmBarChart,
updateAmGaugeChart, updateAmPercentageBarChart, updateAmPieChart

amAxisBreaks    Axis breaks

Description

Create an object defining the breaks on an axis.

Usage

amAxisBreaks(
  values = NULL,
  labels = NULL,
  interval = NULL,
  timeInterval = NULL
)

Arguments

values  positions of the breaks, a vector of values; for a date axis, this must be a vector of dates
labels  if values is given, the labels of the breaks; if NULL, the labels are set to the values
interval  for equally spaced breaks, the number of pixels between two consecutive breaks; ignored if values is given
timeInterval  for equally spaced breaks on a date axis, this option defines the interval between two consecutive breaks; it must be a string like "1 day", "7 days", "1 week", "2 months", ...; ignored if values or interval is given

amAxisLabels    Axis labels

Description

Create a list of settings for the labels of an axis.

Usage

amAxisLabels(
  color = NULL,
  fontSize = 18,
  fontWeight = "normal",
  fontFamily = NULL,
  rotation = 0,
  formatter = NULL
)

amAxisLabelsCircular(
  color = NULL,
  fontSize = 14,
  fontWeight = "normal",
  fontFamily = NULL,
  radius = NULL,
  relativeRotation = NULL
)

Arguments

color  color of the labels
fontSize  size of the labels
fontWeight  font weight of the labels; it can be "normal", "bold", "bolder", "lighter", or a number in seq(100, 900, by = 100)
fontFamily  font family of the labels
rotation  rotation angle
formatter  this option defines the format of the axis labels; this should be a number formatting string for a numeric axis, and a list created with amDateAxisFormatter for a date axis
radius  radius in percentage
relativeRotation  relative rotation angle

Value

A list of settings for the labels of an axis.

Note

A color can be given by the name of an R color, the name of a CSS color, e.g. "silver" or "fuchsia", a HEX code like "#ff009a", a RGB code like "rgb(255,100,39)", or a HSL code like "hsl(360,11,255)".

amBarChart    HTML widget displaying a bar chart

Description

Create a HTML widget displaying a bar chart.
Usage amBarChart( data, data2 = NULL, category, values, valueNames = NULL, showValues = TRUE, hline = NULL, yLimits = NULL, expandY = 5, valueFormatter = "#.", chartTitle = NULL, theme = NULL, animated = TRUE, draggable = FALSE, tooltip = NULL, columnStyle = NULL, threeD = FALSE, bullets = NULL, alwaysShowBullets = FALSE, backgroundColor = NULL, cellWidth = NULL, columnWidth = NULL, xAxis = NULL, yAxis = NULL, scrollbarX = FALSE, scrollbarY = FALSE, legend = NULL, caption = NULL, image = NULL, button = NULL, cursor = FALSE, width = NULL, height = NULL, export = FALSE, chartId = NULL, elementId = NULL ) Arguments data a dataframe data2 NULL or a dataframe used to update the data with the button; its column names must include the column names of data given in values, it must have the same number of rows as data and its rows must be in the same order as those of data category name of the column of data to be used on the category axis values name(s) of the column(s) of data to be used on the value axis valueNames names of the values variables, to appear in the legend; NULL to use values as names, otherwise a named list of the form list(value1 = "ValueName1", value2 = "ValueName2", ...) where value1, value2, ... are the column names given in values and "ValueName1", "ValueName2", ... 
are the desired names to appear in the legend; these names can also appear in the tooltips: they are substituted to the string {name} in the formatting string passed on to the tooltip (see the second example) showValues logical, whether to display the values on the chart hline an optional horizontal line to add to the chart; it must be a named list of the form list(value = h, line = settings) where h is the "intercept" and settings is a list of settings created with amLine yLimits range of the y-axis, a vector of two values specifying the lower and the upper limits of the y-axis; NULL for default values expandY if yLimits = NULL, a percentage of the range of the y-axis used to expand this range valueFormatter a number formatting string; it is used to format the values displayed on the chart if showValues = TRUE, the values displayed in the cursor tooltips if cursor = TRUE, the labels of the y-axis unless you specify your own formatter in the labels field of the list passed on to the yAxis option, and the values displayed in the tooltips unless you specify your own tooltip text (see the first example for the way to set a number formatter in the tooltip text) chartTitle chart title, it can be NULL or FALSE for no title, a character string, a list of settings created with amText, or a list with two fields: text, a list of settings created with amText, and align, can be "left", "right" or "center" theme theme, NULL or one of "dataviz", "material", "kelly", "dark", "moonrisekingdom", "frozen", "spiritedaway", "patterns", "microchart" animated Boolean, whether to animate the rendering of the graphic draggable TRUE/FALSE to enable/disable dragging of all bars, otherwise a named list of the form list(value1 = TRUE, value2 = FALSE, ...) 
to enable/disable the drag- ging for each bar corresponding to a column given in values tooltip settings of the tooltips; NULL for default, FALSE for no tooltip, otherwise a named list of the form list(value1 = settings1, value2 = settings2, ...) where settings1, settings2, ... are lists created with amTooltip; this can also be a single list of settings that will be applied to each series, or a just a string for the text to display in the tooltip columnStyle settings of the columns (the bars); NULL for default, otherwise a named list of the form list(value1 = settings1, value2 = settings2, ...) where settings1, settings2, ... are lists created with amColumn; this can also be a single list of settings that will be applied to each column threeD logical, whether to render the columns in 3D bullets settings of the bullets; NULL for default, otherwise a named list of the form list(value1 = settings1, value2 = settings2, ...) where settings1, settings2, ... are lists created with amCircle, amTriangle or amRectangle; this can also be a single list of settings that will be applied to each series alwaysShowBullets logical, whether to always show the bullets; if FALSE, the bullets are shown only on hovering a column backgroundColor a color for the chart background; a color can be given by the name of a R color, the name of a CSS color, e.g. 
"rebeccapurple" or "fuchsia", an HEX code like "#ff009a", a RGB code like "rgb(255,100,39)", or a HSL code like "hsl(360,11,255)" cellWidth cell width in percent; for a simple bar chart, this is the width of the columns; for a grouped bar chart, this is the width of the clusters of columns; NULL for the default value columnWidth column width, a percentage of the cell width; set to 100 for a simple bar chart and use cellWidth to control the width of the columns; for a grouped bar chart, this controls the spacing between the columns within a cluster of columns; NULL for the default value xAxis settings of the category axis given as a list, or just a string for the axis title; the list of settings has three possible fields: a field title, a list of settings for the axis title created with amText, a field labels, a list of settings for the axis labels created with amAxisLabels, and a field adjust, a number defining the vertical adjustment of the axis (in pixels) yAxis settings of the value axis given as a list, or just a string for the axis title; the list of settings has five possible fields: a field title, a list of settings for the axis title created with amText, a field labels, a list of settings for the axis labels created with amAxisLabels, a field adjust, a number defining the horizontal adjustment of the axis (in pixels), a field gridLines, a list of settings for the grid lines created with amLine and a field breaks to control the axis breaks, an R object created with amAxisBreaks scrollbarX logical, whether to add a scrollbar for the category axis scrollbarY logical, whether to add a scrollbar for the value axis legend either a logical value, whether to display the legend, or a list of settings for the legend created with amLegend caption NULL or FALSE for no caption, a formatted text created with amText, or a list with two fields: text, a list created with amText, and align, can be "left", "right" or "center" image option to include an image at a corner of 
the chart; NULL or FALSE for no image, otherwise a named list with four possible fields: the field image (required) is a list created with amImage, the field position can be "topleft", "topright", "bottomleft" or "bottomright", the field hjust defines the horizontal adjust- ment, and the field vjust defines the vertical adjustment button NULL for the default, FALSE for no button, or a list of settings created with amButton; this button is used to replace the current data with data2 cursor option to add a cursor on the chart; FALSE for no cursor, TRUE for a cursor with default settings for the tooltips, or a list of settings created with amTooltip to set the style of the tooltips, or a list with three possible fields: a field tooltip, a list of tooltip settings created with amTooltip, a field extraTooltipPrecision, an integer, the number of additional decimals to display in the tooltips, and a field modifier, which defines a modifier for the values displayed in the tooltips; a modifier is some JavaScript code given as a string, which performs a modifica- tion of a string named text, e.g. modifier = "text = '>>>' + text;" width the width of the chart, e.g. "600px" or "80%"; ignored if the chart is displayed in Shiny, in which case the width is given in amChart4Output height the height of the chart, e.g. 
"400px"; ignored if the chart is displayed in Shiny, in which case the height is given in amChart4Output export logical, whether to enable the export menu chartId a HTML id for the chart elementId a HTML id for the container of the chart; ignored if the chart is displayed in Shiny, in which case the id is given by the Shiny id Examples # a simple bar chart #### dat <- data.frame( country = c("USA", "China", "Japan", "Germany", "UK", "France"), visits = c(3025, 1882, 1809, 1322, 1122, 1114) ) amBarChart( data = dat, data2 = dat, width = "600px", category = "country", values = "visits", draggable = TRUE, tooltip = "[bold font-style:italic #ffff00]{valueY.value.formatNumber('#,###.')}[/]", chartTitle = amText(text = "Visits per country", fontSize = 22, color = "orangered"), xAxis = list(title = amText(text = "Country", color = "maroon")), yAxis = list( title = amText(text = "Visits", color = "maroon"), gridLines = amLine(color = "orange", width = 1, opacity = 0.4) ), yLimits = c(0, 4000), valueFormatter = "#,###.", caption = amText(text = "Year 2018", color = "red"), theme = "material") # bar chart with individual images in the bullets #### dat <- data.frame( language = c("Python", "Julia", "Java"), users = c(10000, 2000, 5000), href = c( tinyIcon("python", "transparent"), tinyIcon("julia", "transparent"), tinyIcon("java", "transparent") ) ) amBarChart( data = dat, width = "700px", category = "language", values = "users", valueNames = list(users = "#users"), showValues = FALSE, tooltip = amTooltip( text = "{name}: [bold]valueY[/]", textColor = "white", backgroundColor = "#101010", borderColor = "silver" ), draggable = FALSE, backgroundColor = "seashell", bullets = amCircle( radius = 30, color = "white", strokeWidth = 4, image = amImage( href = "inData:href", width = 50, height = 50 ) ), alwaysShowBullets = TRUE, xAxis = list(title = amText(text = "Programming language")), yAxis = list( title = amText(text = "# users"), gridLines = amLine(color = "orange", width = 1, 
opacity = 0.4) ), yLimits = c(0, 12000), valueFormatter = "#.", theme = "material") # a grouped bar chart #### set.seed(666) dat <- data.frame( country = c("USA", "China", "Japan", "Germany", "UK", "France"), visits = c(3025, 1882, 1809, 1322, 1122, 1114), income = rpois(6, 25), expenses = rpois(6, 20) ) amBarChart( data = dat, width = "700px", category = "country", values = c("income", "expenses"), valueNames = list(income = "Income", expenses = "Expenses"), tooltip = amTooltip( textColor = "white", backgroundColor = "#101010", borderColor = "silver" ), draggable = list(income = TRUE, expenses = FALSE), backgroundColor = "#30303d", columnStyle = list( income = amColumn( color = "darkmagenta", strokeColor = "#cccccc", strokeWidth = 2 ), expenses = amColumn( color = "darkred", strokeColor = "#cccccc", strokeWidth = 2 ) ), chartTitle = amText(text = "Income and expenses per country"), xAxis = list(title = amText(text = "Country")), yAxis = list( title = amText(text = "Income and expenses"), gridLines = amLine(color = "whitesmoke", width = 1, opacity = 0.4), breaks = amAxisBreaks(values = seq(0, 45, by = 5)) ), yLimits = c(0, 45), valueFormatter = "#.#", caption = amText(text = "Year 2018"), theme = "dark") amBoxplotChart HTML widget displaying a boxplot chart Description Create a HTML widget displaying a boxplot chart. 
Usage amBoxplotChart( data, category, value, color = NULL, hline = NULL, yLimits = NULL, expandY = 5, valueFormatter = "#.", chartTitle = NULL, theme = NULL, animated = TRUE, tooltip = TRUE, bullets = NULL, backgroundColor = NULL, xAxis = NULL, yAxis = NULL, scrollbarX = FALSE, scrollbarY = FALSE, caption = NULL, image = NULL, cursor = FALSE, width = NULL, height = NULL, export = FALSE, chartId = NULL, elementId = NULL ) Arguments data a dataframe category name of the column of data to be used for the category axis; this can be a date column value name of the column of data to be used for the value axis color the color of the boxplots; it can be given by the name of a R color, the name of a CSS color, e.g. "crimson" or "fuchsia", a HEX code like "#FF009A", a RGB code like "rgb(255,100,39)", or a HSL code like "hsl(360,11,255)" hline an optional horizontal line to add to the chart; it must be a named list of the form list(value = h, line = settings) where h is the "intercept" and settings is a list of settings created with amLine yLimits range of the y-axis, a vector of two values specifying the lower and the upper limits of the y-axis; NULL for default values expandY if yLimits = NULL, a percentage of the range of the y-axis used to expand this range valueFormatter a number formatting string; it is used to format the values displayed in the cursor tooltips, the labels of the y-axis unless you specify your own formatter in the labels field of the list passed on to the yAxis option, and the values displayed in the tooltips unless you specify your own tooltip text (see the first example of amBarChart for the way to set a number formatter in the tooltip text) chartTitle chart title, it can be NULL or FALSE for no title, a character string, a list of settings created with amText, or a list with two fields: text, a list of settings created with amText, and align, can be "left", "right" or "center" theme theme, NULL or one of "dataviz", "material", "kelly", "dark", 
"moonrisekingdom", "frozen", "spiritedaway", "patterns", "microchart"
animated: Boolean, whether to animate the rendering of the graphic
tooltip: TRUE for the default tooltips, FALSE for no tooltip, otherwise a string for the text to display in the tooltip
bullets: settings of the bullets representing the outliers; NULL for default, otherwise a list created with amCircle, amTriangle or amRectangle
backgroundColor: a color for the chart background; it can be given by the name of an R color, the name of a CSS color, e.g. "lime" or "olive", a HEX code like "#ff009a", an RGB code like "rgb(255,100,39)", or an HSL code like "hsl(360,11,255)"
xAxis: settings of the category axis given as a list, or just a string for the axis title; the list of settings has four possible fields: a field title, a list of settings for the axis title created with amText, a field labels, a list of settings for the axis labels created with amAxisLabels, a field adjust, a number defining the vertical adjustment of the axis (in pixels), and a field gridLines, a list of settings for the grid lines created with amLine
yAxis: settings of the value axis given as a list, or just a string for the axis title; the list of settings has five possible fields: a field title, a list of settings for the axis title created with amText, a field labels, a list of settings for the axis labels created with amAxisLabels, a field adjust, a number defining the horizontal adjustment of the axis (in pixels), a field gridLines, a list of settings for the grid lines created with amLine, and a field breaks to control the axis breaks, an R object created with amAxisBreaks
scrollbarX: logical, whether to add a scrollbar for the category axis
scrollbarY: logical, whether to add a scrollbar for the value axis
caption: NULL or FALSE for no caption, a formatted text created with amText, or a list with two fields: text, a list created with amText, and align, which can be "left", "right" or "center"
image: option to include an image at a corner
of the chart; NULL or FALSE for no image, otherwise a named list with four possible fields: the field image (required) is a list created with amImage, the field position can be "topleft", "topright", "bottomleft" or "bottomright", the field hjust defines the horizontal adjustment, and the field vjust defines the vertical adjustment
cursor: option to add a cursor on the chart; FALSE for no cursor, TRUE for a cursor with default settings for the tooltips, or a list of settings created with amTooltip to set the style of the tooltips, or a list with three possible fields: a field tooltip, a list of tooltip settings created with amTooltip, a field extraTooltipPrecision, an integer, the number of additional decimals to display in the tooltips, and a field modifier, which defines a modifier for the values displayed in the tooltips; a modifier is some JavaScript code given as a string, which performs a modification of a string named text, e.g. modifier = "text = '>>>' + text;"
width: the width of the chart, e.g. "600px" or "80%"; ignored if the chart is displayed in Shiny, in which case the width is given in amChart4Output
height: the height of the chart, e.g. "400px"; ignored if the chart is displayed in Shiny, in which case the height is given in amChart4Output
export: logical, whether to enable the export menu
chartId: an HTML id for the chart
elementId: an HTML id for the container of the chart; ignored if the chart is displayed in Shiny, in which case the id is given by the Shiny id

Examples

library(rAmCharts4)
set.seed(666)
dat <- data.frame(
  group = gl(4, 50, labels = c("A", "B", "C", "D")),
  y = rt(200, df = 3)
)
amBoxplotChart(
  dat,
  category = "group",
  value = "y",
  color = "maroon",
  valueFormatter = "#.#",
  theme = "moonrisekingdom"
)

amButton          Button

Description

Create a list of settings for a button.
Usage

amButton(label, color = NULL, position = 0.9, marginRight = 10)

Arguments

label: label of the button, a character string or a list created with amText for a formatted label
color: button color
position: the vertical position of the button: 0 for bottom, 1 for top
marginRight: right margin in pixels

Value

A list of settings for a button.

amColumn          Columns style

Description

Create a list of settings for the columns of a bar chart.

Usage

amColumn(
  color = NULL, opacity = NULL, strokeColor = NULL,
  strokeWidth = 4, cornerRadius = 8
)

Arguments

color: color of the columns; this can be a color adapter
opacity: opacity of the columns, a number between 0 and 1
strokeColor: color of the border of the columns; this can be a color adapter
strokeWidth: width of the border of the columns
cornerRadius: radius of the corners of the columns

Value

A list of settings for usage in amBarChart or amHorizontalBarChart.

Note

A color can be given by the name of an R color, the name of a CSS color, e.g. "transparent" or "fuchsia", a HEX code like "#ff009a", an RGB code like "rgb(255,100,39)", or an HSL code like "hsl(360,11,255)".

amDateAxisFormatter          Date axis formatter

Description

Create a list of settings for formatting the labels of a date axis, to be passed on to the formatter argument of amAxisLabels.

Usage

amDateAxisFormatter(
  day = c("dd", "MMM dd"),
  week = c("dd", "MMM dd"),
  month = c("MMM", "MMM yyyy")
)

Arguments

day, week, month: vectors of length two; the first component is a formatting string for the dates within a period, and the second one is a formatting string for the dates at a period change; see Formatting date and time

Value

A list of settings for formatting the labels of a date axis.

amDumbbellChart          HTML widget displaying a Dumbbell chart

Description

Create an HTML widget displaying a Dumbbell chart.
Usage

amDumbbellChart(
  data, data2 = NULL, category, values, seriesNames = NULL,
  hline = NULL, yLimits = NULL, expandY = 5, valueFormatter = "#.",
  chartTitle = NULL, theme = NULL, animated = TRUE, draggable = FALSE,
  tooltip = NULL, segmentsStyle = NULL, bullets = NULL,
  backgroundColor = NULL, xAxis = NULL, yAxis = NULL,
  scrollbarX = FALSE, scrollbarY = FALSE, legend = NULL,
  caption = NULL, image = NULL, button = NULL, cursor = FALSE,
  width = NULL, height = NULL, export = FALSE,
  chartId = NULL, elementId = NULL
)

Arguments

data: a dataframe
data2: NULL or a dataframe used to update the data with the button; its column names must include the column names of data given in values, it must have the same number of rows as data and its rows must be in the same order as those of data
category: name of the column of data to be used for the category axis
values: a character matrix with two columns; each row corresponds to a series and provides the names of two columns of data to be used as the limits of the segments
seriesNames: a character vector providing the names of the series to appear in the legend; its length must equal the number of rows of the values matrix: the n-th component corresponds to the n-th row of the values matrix
hline: an optional horizontal line to add to the chart; it must be a named list of the form list(value = h, line = settings) where h is the "intercept" and settings is a list of settings created with amLine
yLimits: range of the y-axis, a vector of two values specifying the lower and the upper limits of the y-axis; NULL for default values
expandY: if yLimits = NULL, a percentage of the range of the y-axis used to expand this range
valueFormatter: a number formatting string; it is used to format the values displayed in the cursor tooltips, the labels of the y-axis unless you specify your own formatter in the labels field of the list passed on to the yAxis option, and the values displayed in the tooltips unless you specify your own tooltip text (see the
first example of amBarChart for the way to set a number formatter in the tooltip text)
chartTitle: chart title; it can be NULL or FALSE for no title, a character string, a list of settings created with amText, or a list with two fields: text, a list of settings created with amText, and align, which can be "left", "right" or "center"
theme: theme, NULL or one of "dataviz", "material", "kelly", "dark", "moonrisekingdom", "frozen", "spiritedaway", "patterns", "microchart"
animated: Boolean, whether to animate the rendering of the graphic
draggable: TRUE/FALSE to enable/disable dragging of all bullets, otherwise a named list of the form list(value1 = TRUE, value2 = FALSE, ...)
tooltip: settings of the tooltips; NULL for default, FALSE for no tooltip, otherwise a named list of the form list(value1 = settings1, value2 = settings2, ...) where settings1, settings2, ... are lists created with amTooltip; this can also be a single list of settings that will be applied to each series, or just a string for the text to display in the tooltip
segmentsStyle: settings of the segments; NULL for default, otherwise a named list of the form list(series1 = settings1, series2 = settings2, ...) where series1, series2, ... are the names of the series provided in seriesNames and settings1, settings2, ... are lists created with amSegment; this can also be a single list of settings that will be applied to each series
bullets: settings of the bullets; NULL for default, otherwise a named list of the form list(value1 = settings1, value2 = settings2, ...) where settings1, settings2, ... are lists created with amCircle, amTriangle or amRectangle; this can also be a single list of settings that will be applied to each series
backgroundColor: a color for the chart background; it can be given by the name of an R color, the name of a CSS color, e.g.
"lime" or "olive", a HEX code like "#ff009a", an RGB code like "rgb(255,100,39)", or an HSL code like "hsl(360,11,255)"
xAxis: settings of the category axis given as a list, or just a string for the axis title; the list of settings has four possible fields: a field title, a list of settings for the axis title created with amText, a field labels, a list of settings for the axis labels created with amAxisLabels, a field adjust, a number defining the vertical adjustment of the axis (in pixels), and a field gridLines, a list of settings for the grid lines created with amLine
yAxis: settings of the value axis given as a list, or just a string for the axis title; the list of settings has five possible fields: a field title, a list of settings for the axis title created with amText, a field labels, a list of settings for the axis labels created with amAxisLabels, a field adjust, a number defining the horizontal adjustment of the axis (in pixels), a field gridLines, a list of settings for the grid lines created with amLine, and a field breaks to control the axis breaks, an R object created with amAxisBreaks
scrollbarX: logical, whether to add a scrollbar for the category axis
scrollbarY: logical, whether to add a scrollbar for the value axis
legend: either a logical value, whether to display the legend, or a list of settings for the legend created with amLegend
caption: NULL or FALSE for no caption, a formatted text created with amText, or a list with two fields: text, a list created with amText, and align, which can be "left", "right" or "center"
image: option to include an image at a corner of the chart; NULL or FALSE for no image, otherwise a named list with four possible fields: the field image (required) is a list created with amImage, the field position can be "topleft", "topright", "bottomleft" or "bottomright", the field hjust defines the horizontal adjustment, and the field vjust defines the vertical adjustment
button: NULL for the default, FALSE for no button, or a list of
settings created with amButton; this button is used to replace the current data with data2
cursor: option to add a cursor on the chart; FALSE for no cursor, TRUE for a cursor with default settings for the tooltips, or a list of settings created with amTooltip to set the style of the tooltips, or a list with three possible fields: a field tooltip, a list of tooltip settings created with amTooltip, a field extraTooltipPrecision, an integer, the number of additional decimals to display in the tooltips, and a field modifier, which defines a modifier for the values displayed in the tooltips; a modifier is some JavaScript code given as a string, which performs a modification of a string named text, e.g. modifier = "text = '>>>' + text;"
width: the width of the chart, e.g. "600px" or "80%"; ignored if the chart is displayed in Shiny, in which case the width is given in amChart4Output
height: the height of the chart, e.g. "400px"; ignored if the chart is displayed in Shiny, in which case the height is given in amChart4Output
export: logical, whether to enable the export menu
chartId: an HTML id for the chart
elementId: an HTML id for the container of the chart; ignored if the chart is displayed in Shiny, in which case the id is given by the Shiny id

Examples

dat <- data.frame(
  x = c("T0", "T1", "T2"),
  y1 = c(7, 15, 10),
  y2 = c(20, 25, 23),
  z1 = c(5, 10, 5),
  z2 = c(25, 20, 15)
)
amDumbbellChart(
  width = "500px",
  data = dat,
  draggable = TRUE,
  category = "x",
  values = rbind(c("y1","y2"), c("z1","z2")),
  seriesNames = c("Control", "Treatment"),
  yLimits = c(0, 30),
  segmentsStyle = list(
    "Control" = amSegment(width = 2),
    "Treatment" = amSegment(width = 2)
  ),
  bullets = list(
    y1 = amTriangle(strokeWidth = 0),
    y2 = amTriangle(rotation = 180, strokeWidth = 0),
    z1 = amTriangle(strokeWidth = 0),
    z2 = amTriangle(rotation = 180, strokeWidth = 0)
  ),
  tooltip = amTooltip("upper: {openValueY}\nlower: {valueY}", scale = 0.75),
  xAxis = list(
    title = amText(
      "timepoint", fontSize = 17, fontWeight =
"bold", fontFamily = "Helvetica"
    )
  ),
  yAxis = list(
    title = amText(
      "response", fontSize = 17, fontWeight = "bold",
      fontFamily = "Helvetica"
    ),
    gridLines = amLine("silver", width = 1, opacity = 0.4)
  ),
  legend = amLegend(position = "right", itemsWidth = 15, itemsHeight = 15),
  backgroundColor = "lightyellow",
  theme = "dataviz"
)

amFont          Font

Description

Create a list of settings for a font.

Usage

amFont(fontSize = NULL, fontWeight = "normal", fontFamily = NULL)

Arguments

fontSize: font size, given as a character string like "10px" or "2em", or a numeric value, the font size in pixels
fontWeight: font weight, it can be "normal", "bold", "bolder", "lighter", or a number in seq(100, 900, by = 100)
fontFamily: font family

Value

A list of settings for a font.

Note

There is no option for the font style.

amGaugeChart          HTML widget displaying a gauge chart

Description

Create an HTML widget displaying a gauge chart.

Usage

amGaugeChart(
  score, minScore, maxScore, scorePrecision = 0, gradingData,
  innerRadius = 70, labelsRadius = (100 - innerRadius)/2,
  axisLabelsRadius = 19, chartFontSize = 11,
  labelsFont = amFont(fontSize = "2em", fontWeight = "bold"),
  axisLabelsFont = amFont(fontSize = "1.2em"),
  scoreFont = amFont(fontSize = "6em"),
  scoreLabelFont = amFont(fontSize = "2em"),
  hand = amHand(innerRadius = 45, width = 8, color = "slategray",
                strokeColor = "black"),
  gridLines = FALSE, chartTitle = NULL, theme = NULL, animated = TRUE,
  backgroundColor = NULL, caption = NULL, image = NULL,
  width = NULL, height = NULL, export = FALSE,
  chartId = NULL, elementId = NULL
)

Arguments

score: gauge value, a number between minScore and maxScore
minScore: minimal score
maxScore: maximal score
scorePrecision: an integer, the number of decimals of the score to be displayed
gradingData: data for the gauge, a dataframe with three required columns: label, lowScore, and highScore, and an optional column color; if the column color is not present, then the colors will be derived from the theme
innerRadius:
inner radius of the gauge given as a percentage, between 0 (the gauge has no width) and 100 (the gauge is a semi-disk)
labelsRadius: radius for the labels given as a percentage; use the default value to get centered labels
axisLabelsRadius: radius for the axis labels given as a percentage
chartFontSize: reference font size, a numeric value, the font size in pixels; this font size has an effect only if you use the relative CSS unit em for other font sizes
labelsFont: a list of settings for the font of the labels created with amFont, but the font size must be given in pixels or in em CSS units (no other units are accepted)
axisLabelsFont: a list of settings for the font of the axis labels created with amFont
scoreFont: a list of settings for the font of the score created with amFont
scoreLabelFont: a list of settings for the font of the score label created with amFont
hand: a list of settings for the hand of the gauge created with amHand
gridLines: a list of settings for the grid lines created with amLine, or a logical value: FALSE for no grid lines, TRUE for default grid lines
chartTitle: chart title; it can be NULL or FALSE for no title, a character string, a list of settings created with amText, or a list with two fields: text, a list of settings created with amText, and align, which can be "left", "right" or "center"
theme: theme, NULL or one of "dataviz", "material", "kelly", "dark", "moonrisekingdom", "frozen", "spiritedaway", "patterns", "microchart"
animated: Boolean, whether to animate the rendering of the graphic
backgroundColor: a color for the chart background; it can be given by the name of an R color, the name of a CSS color, e.g.
"aqua" or "indigo", a HEX code like "#ff009a", an RGB code like "rgb(255,100,39)", or an HSL code like "hsl(360,11,255)"
caption: NULL or FALSE for no caption, a formatted text created with amText, or a list with two fields: text, a list created with amText, and align, which can be "left", "right" or "center"
image: option to include an image at a corner of the chart; NULL or FALSE for no image, otherwise a named list with four possible fields: the field image (required) is a list created with amImage, the field position can be "topleft", "topright", "bottomleft" or "bottomright", the field hjust defines the horizontal adjustment, and the field vjust defines the vertical adjustment
width: the width of the chart, e.g. "600px" or "80%"; ignored if the chart is displayed in Shiny, in which case the width is given in amChart4Output
height: the height of the chart, e.g. "400px"; ignored if the chart is displayed in Shiny, in which case the height is given in amChart4Output
export: logical, whether to enable the export menu
chartId: an HTML id for the chart
elementId: an HTML id for the container of the chart; ignored if the chart is displayed in Shiny, in which case the id is given by the Shiny id

Note

In Shiny, you can change the score of a gauge chart with the help of updateAmGaugeChart.

Examples

library(rAmCharts4)
gradingData <- data.frame(
  label = c("Slow", "Moderate", "Fast"),
  color = c("blue", "green", "red"),
  lowScore = c(0, 100/3, 200/3),
  highScore = c(100/3, 200/3, 100)
)
amGaugeChart(
  score = 40, minScore = 0, maxScore = 100, gradingData = gradingData
)

amHand          Gauge hand

Description

Create a list of settings for the hand of a gauge chart.

Usage

amHand(innerRadius, width, color, strokeColor)

Arguments

innerRadius: inner radius of the hand, given as a percentage
width: width of the base of the hand in pixels, a positive number
color: color of the hand
strokeColor: stroke color of the hand

Value

A list of settings for the hand of a gauge chart.
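As a minimal sketch of how amHand is typically used (the radius, width, colors, and grading values below are chosen arbitrarily for illustration), a customized hand can be passed to amGaugeChart via its hand argument:

```r
library(rAmCharts4)

# grading data as required by amGaugeChart: label, lowScore, highScore
gradingData <- data.frame(
  label = c("Low", "High"),
  lowScore = c(0, 50),
  highScore = c(50, 100)
)

# a custom hand, shorter and thinner than the default
hand <- amHand(
  innerRadius = 30,    # inner radius of the hand, as a percentage
  width = 6,           # width of the base of the hand in pixels
  color = "firebrick",
  strokeColor = "black"
)

amGaugeChart(
  score = 66, minScore = 0, maxScore = 100,
  gradingData = gradingData,
  hand = hand
)
```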
amHorizontalBarChart          HTML widget displaying a horizontal bar chart

Description

Create an HTML widget displaying a horizontal bar chart.

Usage

amHorizontalBarChart(
  data, data2 = NULL, category, values, valueNames = NULL,
  showValues = TRUE, vline = NULL, xLimits = NULL, expandX = 5,
  valueFormatter = "#.", chartTitle = NULL, theme = NULL,
  animated = TRUE, draggable = FALSE, tooltip = NULL,
  columnStyle = NULL, threeD = FALSE, bullets = NULL,
  alwaysShowBullets = FALSE, backgroundColor = NULL,
  cellWidth = NULL, columnWidth = NULL, xAxis = NULL, yAxis = NULL,
  scrollbarX = FALSE, scrollbarY = FALSE, legend = NULL,
  caption = NULL, image = NULL, button = NULL, cursor = FALSE,
  width = NULL, height = NULL, export = FALSE,
  chartId = NULL, elementId = NULL
)

Arguments

data: a dataframe
data2: NULL or a dataframe used to update the data with the button; its column names must include the column names of data given in values, it must have the same number of rows as data and its rows must be in the same order as those of data
category: name of the column of data to be used on the category axis
values: name(s) of the column(s) of data to be used on the value axis
valueNames: names of the values variables, to appear in the legend; NULL to use values as names, otherwise a named list of the form list(value1 = "ValueName1", value2 = "ValueName2", ...) where value1, value2, ... are the column names given in values and "ValueName1", "ValueName2", ...
are the desired names to appear in the legend; these names can also appear in the tooltips: they are substituted for the string {name} in the formatting string passed on to the tooltip (see the second example)
showValues: logical, whether to display the values on the chart
vline: an optional vertical line to add to the chart; it must be a named list of the form list(value = v, line = settings) where v is the "intercept" and settings is a list of settings created with amLine
xLimits: range of the x-axis, a vector of two values specifying the left and the right limits of the x-axis; NULL for default values
expandX: if xLimits = NULL, a percentage of the range of the x-axis used to expand this range
valueFormatter: a number formatting string; it is used to format the values displayed on the chart if showValues = TRUE, the values displayed in the cursor tooltips if cursor = TRUE, the labels of the x-axis unless you specify your own formatter in the labels field of the list passed on to the xAxis option, and the values displayed in the tooltips unless you specify your own tooltip text (see the first example of amBarChart for the way to set a number formatter in the tooltip text)
chartTitle: chart title; it can be NULL or FALSE for no title, a character string, a list of settings created with amText, or a list with two fields: text, a list of settings created with amText, and align, which can be "left", "right" or "center"
theme: theme, NULL or one of "dataviz", "material", "kelly", "dark", "moonrisekingdom", "frozen", "spiritedaway", "patterns", "microchart"
animated: Boolean, whether to animate the rendering of the graphic
draggable: TRUE/FALSE to enable/disable dragging of all bars, otherwise a named list of the form list(value1 = TRUE, value2 = FALSE, ...)
to enable/disable the dragging for each bar corresponding to a column given in values
tooltip: settings of the tooltips; NULL for default, FALSE for no tooltip, otherwise a named list of the form list(value1 = settings1, value2 = settings2, ...) where settings1, settings2, ... are lists created with amTooltip; this can also be a single list of settings that will be applied to each series, or just a string for the text to display in the tooltip
columnStyle: settings of the columns; NULL for default, otherwise a named list of the form list(value1 = settings1, value2 = settings2, ...) where settings1, settings2, ... are lists created with amColumn; this can also be a single list of settings that will be applied to each column
threeD: logical, whether to render the columns in 3D
bullets: settings of the bullets; NULL for default, otherwise a named list of the form list(value1 = settings1, value2 = settings2, ...) where settings1, settings2, ... are lists created with amCircle, amTriangle or amRectangle; this can also be a single list of settings that will be applied to each series
alwaysShowBullets: logical, whether to always show the bullets; if FALSE, the bullets are shown only on hovering a column
backgroundColor: a color for the chart background; a color can be given by the name of an R color, the name of a CSS color, e.g.
"aqua" or "indigo", a HEX code like "#ff009a", an RGB code like "rgb(255,100,39)", or an HSL code like "hsl(360,11,255)"
cellWidth: cell width in percent; for a simple bar chart, this is the width of the columns; for a grouped bar chart, this is the width of the clusters of columns; NULL for the default value
columnWidth: column width, a percentage of the cell width; set to 100 for a simple bar chart and use cellWidth to control the width of the columns; for a grouped bar chart, this controls the spacing between the columns within a cluster of columns; NULL for the default value
xAxis: settings of the value axis given as a list, or just a string for the axis title; the list of settings has five possible fields: a field title, a list of settings for the axis title created with amText, a field labels, a list of settings for the axis labels created with amAxisLabels, a field adjust, a number defining the vertical adjustment of the axis (in pixels), a field gridLines, a list of settings for the grid lines created with amLine, and a field breaks to control the axis breaks, an R object created with amAxisBreaks
yAxis: settings of the category axis given as a list, or just a string for the axis title; the list of settings has three possible fields: a field title, a list of settings for the axis title created with amText, a field labels, a list of settings for the axis labels created with amAxisLabels, and a field adjust, a number defining the horizontal adjustment of the axis (in pixels)
scrollbarX: logical, whether to add a scrollbar for the value axis
scrollbarY: logical, whether to add a scrollbar for the category axis
legend: FALSE for no legend, TRUE for a legend with default settings, or a list of settings created with amLegend
caption: NULL or FALSE for no caption, a formatted text created with amText, or a list with two fields: text, a list created with amText, and align, which can be "left", "right" or "center"
image: option to include an image at a corner of the chart; NULL or
FALSE for no image, otherwise a named list with four possible fields: the field image (required) is a list created with amImage, the field position can be "topleft", "topright", "bottomleft" or "bottomright", the field hjust defines the horizontal adjustment, and the field vjust defines the vertical adjustment
button: NULL for the default, FALSE for no button, or a list of settings created with amButton; this button is used to replace the current data with data2
cursor: option to add a cursor on the chart; FALSE for no cursor, TRUE for a cursor with default settings for the tooltips, or a list of settings created with amTooltip to set the style of the tooltips, or a list with three possible fields: a field tooltip, a list of tooltip settings created with amTooltip, a field extraTooltipPrecision, an integer, the number of additional decimals to display in the tooltips, and a field modifier, which defines a modifier for the values displayed in the tooltips; a modifier is some JavaScript code given as a string, which performs a modification of a string named text, e.g. modifier = "text = '>>>' + text;"
width: the width of the chart, e.g. "600px" or "80%"; ignored if the chart is displayed in Shiny, in which case the width is given in amChart4Output
height: the height of the chart, e.g.
"400px"; ignored if the chart is displayed in Shiny, in which case the height is given in amChart4Output
export: logical, whether to enable the export menu
chartId: an HTML id for the chart
elementId: an HTML id for the container of the chart; ignored if the chart is displayed in Shiny, in which case the id is given by the Shiny id

Examples

# a simple horizontal bar chart ####
dat <- data.frame(
  country = c("USA", "China", "Japan", "Germany", "UK", "France"),
  visits = c(3025, 1882, 1809, 1322, 1122, 1114)
)
amHorizontalBarChart(
  data = dat, data2 = dat,
  width = "600px", height = "550px",
  category = "country", values = "visits",
  draggable = TRUE,
  tooltip = "[font-style:italic #ffff00]{valueX}[/]",
  chartTitle = amText(text = "Visits per country", fontSize = 22,
                      color = "orangered"),
  xAxis = list(
    title = amText(text = "Country", color = "maroon"),
    gridLines = amLine(opacity = 0.4, width = 1, dash = "3,1")
  ),
  yAxis = list(title = amText(text = "Visits", color = "maroon")),
  xLimits = c(0, 4000),
  valueFormatter = "#,###",
  caption = amText(text = "Year 2018", color = "red"),
  theme = "moonrisekingdom")

# a grouped horizontal bar chart ####
set.seed(666)
dat <- data.frame(
  country = c("USA", "China", "Japan", "Germany", "UK", "France"),
  visits = c(3025, 1882, 1809, 1322, 1122, 1114),
  income = rpois(6, 25),
  expenses = rpois(6, 20)
)
amHorizontalBarChart(
  data = dat, width = "700px",
  category = "country", values = c("income", "expenses"),
  valueNames = list(income = "Income", expenses = "Expenses"),
  tooltip = amTooltip(
    text = "[bold]{name}:\n{valueX}[/]",
    textColor = "white",
    backgroundColor = "#101010",
    borderColor = "silver"
  ),
  draggable = list(income = TRUE, expenses = FALSE),
  backgroundColor = "#30303d",
  columnStyle = list(
    income = amColumn(
      color = "darkmagenta",
      strokeColor = "#cccccc",
      strokeWidth = 2
    ),
    expenses = amColumn(
      color = "darkred",
      strokeColor = "#cccccc",
      strokeWidth = 2
    )
  ),
  chartTitle = amText(text = "Income and expenses per country"),
  yAxis = list(title =
amText(text = "Country")),
  xAxis = list(
    title = amText(text = "Income and expenses"),
    gridLines = amLine(color = "whitesmoke", width = 1, opacity = 0.4)
  ),
  xLimits = c(0, 41),
  valueFormatter = "#.#",
  caption = amText(text = "Year 2018"),
  theme = "dark")

amHorizontalDumbbellChart          HTML widget displaying a horizontal Dumbbell chart

Description

Create an HTML widget displaying a horizontal Dumbbell chart.

Usage

amHorizontalDumbbellChart(
  data, data2 = NULL, category, values, seriesNames = NULL,
  vline = NULL, xLimits = NULL, expandX = 5, valueFormatter = "#.",
  chartTitle = NULL, theme = NULL, animated = TRUE, draggable = FALSE,
  tooltip = NULL, segmentsStyle = NULL, bullets = NULL,
  backgroundColor = NULL, xAxis = NULL, yAxis = NULL,
  scrollbarX = FALSE, scrollbarY = FALSE, legend = NULL,
  caption = NULL, image = NULL, button = NULL, cursor = FALSE,
  width = NULL, height = NULL, export = FALSE,
  chartId = NULL, elementId = NULL
)

Arguments

data: a dataframe
data2: NULL or a dataframe used to update the data with the button; its column names must include the column names of data given in values, it must have the same number of rows as data and its rows must be in the same order as those of data
category: name of the column of data to be used for the category axis
values: a character matrix with two columns; each row corresponds to a series and provides the names of two columns of data to be used as the limits of the segments
seriesNames: a character vector providing the names of the series to appear in the legend; its length must equal the number of rows of the values matrix: the n-th component corresponds to the n-th row of the values matrix
vline: an optional vertical line to add to the chart; it must be a named list of the form list(value = v, line = settings) where v is the "intercept" and settings is a list of settings created with amLine
xLimits: range of the x-axis, a vector of two values specifying the left and right limits of the x-axis; NULL for default values
expandX: if
xLimits = NULL, a percentage of the range of the x-axis used to expand this range
valueFormatter: a number formatting string; it is used to format the values displayed in the cursor tooltips, the labels of the x-axis unless you specify your own formatter in the labels field of the list passed on to the xAxis option, and the values displayed in the tooltips unless you specify your own tooltip text (see the first example of amBarChart for the way to set a number formatter in the tooltip text)
chartTitle: chart title; it can be NULL or FALSE for no title, a character string, a list of settings created with amText, or a list with two fields: text, a list of settings created with amText, and align, which can be "left", "right" or "center"
theme: theme, NULL or one of "dataviz", "material", "kelly", "dark", "moonrisekingdom", "frozen", "spiritedaway", "patterns", "microchart"
animated: Boolean, whether to animate the rendering of the graphic
draggable: TRUE/FALSE to enable/disable dragging of all bullets, otherwise a named list of the form list(value1 = TRUE, value2 = FALSE, ...)
tooltip: settings of the tooltips; NULL for default, FALSE for no tooltip, otherwise a named list of the form list(value1 = settings1, value2 = settings2, ...) where settings1, settings2, ... are lists created with amTooltip; this can also be a single list of settings that will be applied to each series, or just a string for the text to display in the tooltip
segmentsStyle: settings of the segments; NULL for default, otherwise a named list of the form list(series1 = settings1, series2 = settings2, ...) where series1, series2, ... are the names of the series provided in seriesNames and settings1, settings2, ... are lists created with amSegment; this can also be a single list of settings that will be applied to each series
bullets: settings of the bullets; NULL for default, otherwise a named list of the form list(value1 = settings1, value2 = settings2, ...) where settings1, settings2, ...
are lists created with amCircle, amTriangle or amRectangle; this can also be a single list of settings that will be applied to each series backgroundColor a color for the chart background; it can be given by the name of a R color, the name of a CSS color, e.g. "lime" or "olive", an HEX code like "#ff009a", a RGB code like "rgb(255,100,39)", or a HSL code like "hsl(360,11,255)" xAxis settings of the value axis given as a list, or just a string for the axis title; the list of settings has five possible fields: a field title, a list of settings for the axis title created with amText, a field labels, a list of settings for the axis labels created with amAxisLabels, a field adjust, a number defining the horizontal adjustment of the axis (in pixels), a field gridLines, a list of settings for the grid lines created with amLine and a field breaks to control the axis breaks, an R object created with amAxisBreaks yAxis settings of the category axis given as a list, or just a string for the axis title; the list of settings has four possible fields: a field title, a list of settings for the axis title created with amText, a field labels, a list of settings for the axis labels created with amAxisLabels, a field adjust, a number defining the vertical adjustment of the axis (in pixels), and a field gridLines, a list of settings for the grid lines created with amLine scrollbarX logical, whether to add a scrollbar for the value axis scrollbarY logical, whether to add a scrollbar for the category axis legend either a logical value, whether to display the legend, or a list of settings for the legend created with amLegend caption NULL or FALSE for no caption, a formatted text created with amText, or a list with two fields: text, a list created with amText, and align, can be "left", "right" or "center" image option to include an image at a corner of the chart; NULL or FALSE for no image, otherwise a named list with four possible fields: the field image (required) is a list created
with amImage, the field position can be "topleft", "topright", "bottomleft" or "bottomright", the field hjust defines the horizontal adjustment, and the field vjust defines the vertical adjustment button NULL for the default, FALSE for no button, or a list of settings created with amButton; this button is used to replace the current data with data2 cursor option to add a cursor on the chart; FALSE for no cursor, TRUE for a cursor with default settings for the tooltips, or a list of settings created with amTooltip to set the style of the tooltips, or a list with three possible fields: a field tooltip, a list of tooltip settings created with amTooltip, a field extraTooltipPrecision, an integer, the number of additional decimals to display in the tooltips, and a field modifier, which defines a modifier for the values displayed in the tooltips; a modifier is some JavaScript code given as a string, which performs a modification of a string named text, e.g. modifier = "text = '>>>' + text;" width the width of the chart, e.g. "600px" or "80%"; ignored if the chart is displayed in Shiny, in which case the width is given in amChart4Output height the height of the chart, e.g. "400px"; ignored if the chart is displayed in Shiny, in which case the height is given in amChart4Output export logical, whether to enable the export menu chartId a HTML id for the chart elementId a HTML id for the container of the chart; ignored if the chart is displayed in Shiny, in which case the id is given by the Shiny id Examples set.seed(666) lwr <- rpois(20, 5) dat <- data.frame( comparison = paste0("Ctrl vs.
", LETTERS[1:20]), lwr = lwr, upr = lwr + rpois(20, 10) ) amHorizontalDumbbellChart( width = "500px", height = "450px", data = dat, draggable = TRUE, category = "comparison", values = rbind(c("lwr", "upr")), xLimits = c(0, 30), segmentsStyle = amSegment(width = 1, color = "darkred"), bullets = amCircle(strokeWidth = 0, color = "darkred"), tooltip = amTooltip("left: {valueX}\nright: {openValueX}", scale = 0.75), xAxis = list( title = amText( "difference", fontSize = 17, fontWeight = "bold", fontFamily = "Helvetica" ), gridLines = amLine("darkblue", width = 2, opacity = 0.8, dash = "2,2"), breaks = amAxisBreaks(c(0,10,20,30)) ), yAxis = list( title = amText( "comparison", fontSize = 17, fontWeight = "bold", fontFamily = "Helvetica" ), labels = amAxisLabels(fontSize = 15), gridLines = amLine(color = "red", width = 1, opacity = 0.6, dash = "1,3") ), backgroundColor = "lightsalmon" ) amImage Image Description Create a list of settings for an image. Usage amImage(href, width, height, opacity = 1) Arguments href a link to an image file or a base64 string representing an image; you can get such a string with tinyIcon, or you can create it from a file with base64enc::dataURI; this option can also be a string of the form "inData:DATAFIELD" where DATAFIELD is the name of a column of the data - this is useful to have different images in the bullets width, height dimensions of the image opacity opacity of the image, a number between 0 and 1 Value A list of settings for an image. amLegend Legend Description Create a list of settings for a legend. 
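The amImage helper above only builds a list of settings; the list is then passed on, wrapped in the image option, to any chart function. A minimal sketch (not taken from the manual; "logo.png" is a placeholder file name, and the rAmCharts4 and base64enc packages are assumed to be installed):

```r
library(rAmCharts4)
# encode an image file as a base64 data URI, as suggested in the href description
img <- amImage(
  href = base64enc::dataURI(file = "logo.png", mime = "image/png"),
  width = 40, height = 40,
  opacity = 0.8
)
# then, in a chart function, place it at a corner:
#   image = list(image = img, position = "bottomright")
```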
Usage amLegend( position = "bottom", maxHeight = NULL, scrollable = FALSE, maxWidth = 220, itemsWidth = 20, itemsHeight = 20 ) Arguments position legend position maxHeight maximum height for a horizontal legend (position = "bottom" or position = "top") scrollable whether a vertical legend should be scrollable maxWidth maximum width for a vertical legend (position = "left" or position = "right"); set it to NULL for no limit itemsWidth width of the legend items itemsHeight height of the legend items Value A list of settings for a legend. amLine Line style Description Create a list of settings for a line. Usage amLine( color = NULL, opacity = 1, width = 3, dash = NULL, tensionX = NULL, tensionY = NULL ) Arguments color line color opacity line opacity, a number between 0 and 1 width line width dash string defining a dashed/dotted line; see Dotted and dashed lines tensionX, tensionY parameters for the smoothing; see Smoothed lines for the meaning of these parameters Value A list of settings for a line. Note A color can be given by the name of a R color, the name of a CSS color, e.g. "transparent" or "fuchsia", an HEX code like "#ff009a", a RGB code like "rgb(255,100,39)", or a HSL code like "hsl(360,11,255)". amLineChart HTML widget displaying a line chart Description Create a HTML widget displaying a line chart.
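Like amImage, the amLine and amLegend helpers documented above return plain lists of settings that are passed on to the chart functions. A minimal sketch of how they compose (assuming the rAmCharts4 package is attached; the data is made up):

```r
library(rAmCharts4)
dat <- data.frame(x = 1:10, y = rnorm(10))
amLineChart(
  data = dat, xValue = "x", yValues = "y",
  # amLine as a line style: dashed crimson line
  lineStyle = amLine(color = "crimson", width = 2, dash = "3,2"),
  # amLine reused for the grid lines of the y-axis
  yAxis = list(gridLines = amLine(color = "gray", opacity = 0.3, width = 1)),
  legend = amLegend(position = "top", itemsWidth = 25, itemsHeight = 25)
)
```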
Usage amLineChart( data, data2 = NULL, xValue, yValues, yValueNames = NULL, hline = NULL, vline = NULL, xLimits = NULL, yLimits = NULL, expandX = 0, expandY = 5, Xformatter = ifelse(isDate, "yyyy-MM-dd", "#."), Yformatter = "#.", trend = FALSE, chartTitle = NULL, theme = NULL, animated = TRUE, draggable = FALSE, tooltip = NULL, bullets = NULL, alwaysShowBullets = FALSE, lineStyle = NULL, backgroundColor = NULL, xAxis = NULL, yAxis = NULL, scrollbarX = FALSE, scrollbarY = FALSE, legend = NULL, caption = NULL, image = NULL, button = NULL, cursor = FALSE, zoomButtons = FALSE, width = NULL, height = NULL, export = FALSE, chartId = NULL, elementId = NULL ) Arguments data a dataframe data2 NULL or a dataframe used to update the data with the button; its column names must include the column names of data given in yValues as well as the column name given in xValue; moreover it must have the same number of rows as data and its rows must be in the same order as those of data xValue name of the column of data to be used on the x-axis yValues name(s) of the column(s) of data to be used on the y-axis yValueNames names of the variables on the y-axis, to appear in the legend; NULL to use yValues as names, otherwise a named list of the form list(yvalue1 = "ValueName1", yvalue2 = "ValueName2", ...) where yvalue1, yvalue2, ... are the column names given in yValues and "ValueName1", "ValueName2", ... 
are the desired names to appear in the legend hline an optional horizontal line to add to the chart; it must be a named list of the form list(value = h, line = settings) where h is the "intercept" and settings is a list of settings created with amLine vline an optional vertical line to add to the chart; it must be a named list of the form list(value = v, line = settings) where v is the "intercept" and settings is a list of settings created with amLine xLimits range of the x-axis, a vector of two values specifying the left and right limits of the x-axis; NULL for default values yLimits range of the y-axis, a vector of two values specifying the lower and the upper limits of the y-axis; NULL for default values expandX if xLimits = NULL, a percentage of the range of the x-axis used to expand this range expandY if yLimits = NULL, a percentage of the range of the y-axis used to expand this range Xformatter a number formatting string if xValue is set to a numeric column of data; it is used to format the values displayed in the cursor tooltips if cursor = TRUE, the labels of the x-axis unless you specify your own formatter in the labels field of the list passed on to the xAxis option, and the values displayed in the tooltips unless you specify your own tooltip text; if xValue is set to a date column of data, this option should be set to a date formatting string, and it has an effect only on the values displayed in the tooltips (unless you specify your own tooltip text); formatting the dates on the x-axis is done via the labels field of the list passed on to the xAxis option Yformatter a number formatting string; it is used to format the values displayed in the cursor tooltips if cursor = TRUE, the labels of the y-axis unless you specify your own formatter in the labels field of the list passed on to the yAxis option, and the values displayed in the tooltips unless you specify your own tooltip text (see the first example of amBarChart for the way to set a number formatter 
in the tooltip text) trend option to request trend lines and to set their settings; FALSE for no trend line, otherwise a named list of the form list(yvalue1 = trend1, yvalue2 = trend2, ...) where trend1, trend2, ... are lists with the following fields: method the modelling method, can be "lm", "lm.js", "nls", "nlsLM", or "loess"; "lm.js" performs a polynomial regression in JavaScript, its advantage is that the fitted regression line is updated when the points of the line are dragged formula a formula passed on to the modelling function for methods "lm", "nls" or "nlsLM"; the left-hand side of this formula must always be y, and its right-hand side must be a symbolic expression depending on x only, e.g. y ~ x, y ~ x + I(x^2), y ~ poly(x,2) interval effective for methods "lm" and "lm.js" only; a list with five possible fields: type can be "confidence" or "prediction", level is the confidence or prediction level (number between 0 and 1), color is the color of the shaded area, opacity is the opacity of the shaded area (number between 0 and 1), tensionX and tensionY to control the smoothing (see amLine) order the order of the polynomial regression when method = "lm.js" method.args a list of additional arguments passed on to the modelling function defined by method for methods "nls", "nlsLM" or "loess", e.g. method.args = list(span = 0.3) for method "loess" style a list of settings for the trend line created with amLine it is also possible to request the same kind of trend lines for all series given by the yValues argument, by passing a list of the form list("_all" = trendconfig), e.g.
list("_all" = list(method = "lm", formula = y ~ 0+x, style = amLine())) chartTitle chart title, it can be NULL or FALSE for no title, a character string, a list of settings created with amText, or a list with two fields: text, a list of settings created with amText, and align, can be "left", "right" or "center" theme theme, NULL or one of "dataviz", "material", "kelly", "dark", "moonrisekingdom", "frozen", "spiritedaway", "patterns", "microchart" animated Boolean, whether to animate the rendering of the graphic draggable TRUE/FALSE to enable/disable dragging of all lines, otherwise a named list of the form list(yvalue1 = TRUE, yvalue2 = FALSE, ...) to enable/disable the dragging for each series corresponding to a column given in yValues tooltip settings of the tooltips; NULL for default, FALSE for no tooltip, otherwise a named list of the form list(yvalue1 = settings1, yvalue2 = settings2, ...) where settings1, settings2, ... are lists created with amTooltip; this can also be a single list of settings that will be applied to each series, or just a string for the text to display in the tooltip bullets settings of the bullets; NULL for default, otherwise a named list of the form list(yvalue1 = settings1, yvalue2 = settings2, ...) where settings1, settings2, ... are lists created with amCircle, amTriangle or amRectangle; this can also be a single list of settings that will be applied to each series alwaysShowBullets logical, whether the bullets should always be visible, or visible on hover only lineStyle settings of the lines; NULL for default, otherwise a named list of the form list(yvalue1 = settings1, yvalue2 = settings2, ...) where settings1, settings2, ... are lists created with amLine; this can also be a single list of settings that will be applied to each line backgroundColor a color for the chart background; it can be given by the name of a R color, the name of a CSS color, e.g.
"teal" or "fuchsia", an HEX code like "#ff009a", a RGB code like "rgb(255,100,39)", or a HSL code like "hsl(360,11,255)" xAxis settings of the x-axis given as a list, or just a string for the axis title; the list of settings has five possible fields: a field title, a list of settings for the axis title created with amText, a field labels, a list of settings for the axis labels created with amAxisLabels, a field adjust, a number defining the vertical adjustment of the axis (in pixels), a field gridLines, a list of settings for the grid lines created with amLine, and a field breaks to control the axis breaks, an R object created with amAxisBreaks yAxis settings of the y-axis given as a list, or just a string for the axis title; the list of settings has five possible fields: a field title, a list of settings for the axis title created with amText, a field labels, a list of settings for the axis labels created with amAxisLabels, a field adjust, a number defining the horizontal adjustment of the axis (in pixels), a field gridLines, a list of settings for the grid lines created with amLine, and a field breaks to control the axis breaks, an R object created with amAxisBreaks scrollbarX logical, whether to add a scrollbar for the x-axis scrollbarY logical, whether to add a scrollbar for the y-axis legend FALSE for no legend, TRUE for a legend with default settings, or a list of settings created with amLegend caption NULL or FALSE for no caption, a formatted text created with amText, or a list with two fields: text, a list created with amText, and align, can be "left", "right" or "center" image option to include an image at a corner of the chart; NULL or FALSE for no image, otherwise a named list with four possible fields: the field image (required) is a list created with amImage, the field position can be "topleft", "topright", "bottomleft" or "bottomright", the field hjust defines the horizontal adjustment, and the field vjust defines the vertical adjustment button NULL
for the default, FALSE for no button, or a list of settings created with amButton; this button is used to replace the current data with data2 cursor option to add a cursor on the chart; FALSE for no cursor, TRUE for a cursor for both axes with default settings for the axes tooltips, otherwise a named list with four possible fields: a field axes to specify the axes for which the cursor is requested, can be "x", "y", or "xy", a field tooltip to set the style of the axes tooltips, this must be a list of settings created with amTooltip, a field extraTooltipPrecision, a named list of the form list(x = i, y = j) where i and j are the desired numbers of additional decimals for the tooltips on the x-axis and on the y-axis respectively, and a field modifier, a list with two possible fields, x and y, which defines modifiers for the values displayed in the tooltips; a modifier is some JavaScript code given as a string, which performs a modification of a string named text, e.g. "text = '[font-style:italic]' + text + '[/]';"; see the first example for an example of modifier zoomButtons a Boolean value, or a list created with amZoomButtons width the width of the chart, e.g. "600px" or "80%"; ignored if the chart is displayed in Shiny, in which case the width is given in amChart4Output height the height of the chart, e.g.
"400px"; ignored if the chart is displayed in Shiny, in which case the height is given in amChart4Output export logical, whether to enable the export menu chartId a HTML id for the chart elementId a HTML id for the container of the chart; ignored if the chart is displayed in Shiny, in which case the id is given by the Shiny id Examples # a line chart with a numeric x-axis #### set.seed(666) dat <- data.frame( x = 1:10, y1 = rnorm(10), y2 = rnorm(10) ) amLineChart( data = dat, width = "700px", xValue = "x", yValues = c("y1", "y2"), yValueNames = list(y1 = "Sample 1", y2 = "Sample 2"), trend = list( y1 = list( method = "lm.js", order = 3, style = amLine(color = "lightyellow", dash = "3,2") ), y2 = list( method = "loess", style = amLine(color = "palevioletred", dash = "3,2") ) ), draggable = list(y1 = TRUE, y2 = FALSE), backgroundColor = "#30303d", tooltip = amTooltip( text = "[bold]({valueX},{valueY})[/]", textColor = "white", backgroundColor = "#101010", borderColor = "whitesmoke" ), bullets = list( y1 = amCircle(color = "yellow", strokeColor = "olive"), y2 = amCircle(color = "orangered", strokeColor = "darkred") ), alwaysShowBullets = TRUE, cursor = list( extraTooltipPrecision = list(x = 0, y = 2), modifier = list( y = c( "var value = parseFloat(text);", "var style = value > 0 ? 
'[#0000ff]' : '[#ff0000]';", "text = style + text + '[/]';" ) ) ), lineStyle = list( y1 = amLine(color = "yellow", width = 4), y2 = amLine(color = "orangered", width = 4) ), chartTitle = amText( text = "Gaussian samples", color = "whitesmoke", fontWeight = "bold" ), xAxis = list(title = amText(text = "Observation", fontSize = 21, color = "silver", fontWeight = "bold"), labels = amAxisLabels(fontSize = 17), breaks = amAxisBreaks( values = 1:10, labels = sprintf("[bold %s]%d[/]", rainbow(10), 1:10))), yAxis = list(title = amText(text = "Value", fontSize = 21, color = "silver", fontWeight = "bold"), labels = amAxisLabels(color = "whitesmoke", fontSize = 14), gridLines = amLine(color = "whitesmoke", opacity = 0.4, width = 1)), yLimits = c(-3, 3), Yformatter = "#.00", caption = amText(text = "[font-style:italic]try to drag the yellow line![/]", color = "yellow"), theme = "dark") # line chart with a date x-axis #### library(lubridate) set.seed(666) dat <- data.frame( date = ymd(180101) + days(0:60), visits = rpois(61, 20) ) amLineChart( data = dat, width = "750px", xValue = "date", yValues = "visits", draggable = TRUE, chartTitle = "Number of visits", xAxis = list( title = "Date", labels = amAxisLabels( formatter = amDateAxisFormatter( day = c("dt", "[bold]MMM[/] dt"), week = c("dt", "[bold]MMM[/] dt") ) ), breaks = amAxisBreaks(timeInterval = "7 days") ), yAxis = "Visits", xLimits = range(dat$date) + c(0,7), yLimits = c(0, 35), backgroundColor = "whitesmoke", tooltip = paste0( "[bold][font-style:italic]{dateX.value.formatDate('yyyy/MM/dd')}[/]", "\nvisits: {valueY}[/]" ), caption = amText(text = "Year 2018"), theme = "material") # smoothed lines #### x <- seq(-4, 4, length.out = 100) dat <- data.frame( x = x, Gauss = dnorm(x), Cauchy = dcauchy(x) ) amLineChart( data = dat, width = "700px", xValue = "x", yValues = c("Gauss", "Cauchy"), yValueNames = list( Gauss = "Standard normal distribution", Cauchy = "Cauchy distribution" ), draggable = FALSE, tooltip = FALSE, 
lineStyle = amLine( width = 4, tensionX = 0.8, tensionY = 0.8 ), xAxis = list(title = amText(text = "x", fontSize = 21, color = "navyblue"), labels = amAxisLabels( color = "midnightblue", fontSize = 17)), yAxis = list(title = amText(text = "density", fontSize = 21, color = "navyblue"), labels = FALSE), theme = "dataviz") amPercentageBarChart HTML widget displaying a 100% stacked bar chart Description Create a HTML widget displaying a 100% stacked bar chart. Usage amPercentageBarChart( data, category, values, valueNames = NULL, hline = NULL, chartTitle = NULL, theme = NULL, animated = TRUE, backgroundColor = NULL, xAxis = NULL, yAxis = NULL, scrollbarX = FALSE, scrollbarY = FALSE, legend = TRUE, caption = NULL, image = NULL, width = NULL, height = NULL, export = FALSE, chartId = NULL, elementId = NULL ) Arguments data a dataframe category name of the column of data to be used on the category axis values names of the columns of data to be used on the value axis valueNames names of the values variables, to appear in the legend; NULL to use values as names, otherwise a named list of the form list(value1 = "ValueName1", value2 = "ValueName2", ...) where value1, value2, ... are the column names given in values and "ValueName1", "ValueName2", ... are the desired names to appear in the legend; these names also appear in the tooltips. 
hline an optional horizontal line to add to the chart; it must be a named list of the form list(value = h, line = settings) where h is the "intercept" and settings is a list of settings created with amLine chartTitle chart title, it can be NULL or FALSE for no title, a character string, a list of settings created with amText, or a list with two fields: text, a list of settings created with amText, and align, can be "left", "right" or "center" theme theme, NULL or one of "dataviz", "material", "kelly", "dark", "moonrisekingdom", "frozen", "spiritedaway", "patterns", "microchart" animated Boolean, whether to animate the rendering of the graphic backgroundColor a color for the chart background; a color can be given by the name of a R color, the name of a CSS color, e.g. "rebeccapurple" or "fuchsia", an HEX code like "#ff009a", a RGB code like "rgb(255,100,39)", or a HSL code like "hsl(360,11,255)" xAxis settings of the category axis given as a list, or just a string for the axis title; the list of settings has three possible fields: a field title, a list of settings for the axis title created with amText, a field labels, a list of settings for the axis labels created with amAxisLabels, and a field adjust, a number defining the vertical adjustment of the axis (in pixels) yAxis settings of the value axis given as a list, or just a string for the axis title; the list of settings has five possible fields: a field title, a list of settings for the axis title created with amText, a field labels, a list of settings for the axis labels created with amAxisLabels, a field adjust, a number defining the horizontal adjustment of the axis (in pixels), a field gridLines, a list of settings for the grid lines created with amLine and a field breaks to control the axis breaks, an R object created with amAxisBreaks scrollbarX logical, whether to add a scrollbar for the category axis scrollbarY logical, whether to add a scrollbar for the value axis legend either a logical value, whether 
to display the legend, or a list of settings for the legend created with amLegend caption NULL or FALSE for no caption, a formatted text created with amText, or a list with two fields: text, a list created with amText, and align, can be "left", "right" or "center" image option to include an image at a corner of the chart; NULL or FALSE for no image, otherwise a named list with four possible fields: the field image (required) is a list created with amImage, the field position can be "topleft", "topright", "bottomleft" or "bottomright", the field hjust defines the horizontal adjustment, and the field vjust defines the vertical adjustment width the width of the chart, e.g. "600px" or "80%"; ignored if the chart is displayed in Shiny, in which case the width is given in amChart4Output height the height of the chart, e.g. "400px"; ignored if the chart is displayed in Shiny, in which case the height is given in amChart4Output export logical, whether to enable the export menu chartId a HTML id for the chart elementId a HTML id for the container of the chart; ignored if the chart is displayed in Shiny, in which case the id is given by the Shiny id Examples library(rAmCharts4) dat <- data.frame( category = c("A", "B", "C"), v1 = c(1, 2, 3), v2 = c(9, 5, 7) ) amPercentageBarChart( dat, category = "category", values = c("v1", "v2"), valueNames = c("Value1", "Value2"), yAxis = "Percentage", theme = "dataviz", legend = amLegend(position = "right") ) amPieChart HTML widget displaying a pie chart Description Create a HTML widget displaying a pie chart.
Usage amPieChart( data, category, value, innerRadius = 0, threeD = FALSE, depth = ifelse(variableDepth, 100, 10), colorStep = 3, variableRadius = FALSE, variableDepth = FALSE, chartTitle = NULL, theme = NULL, animated = TRUE, backgroundColor = NULL, legend = TRUE, caption = NULL, image = NULL, width = NULL, height = NULL, export = FALSE, chartId = NULL, elementId = NULL ) Arguments data a dataframe category name of the column of data to be used as the category value name of the column of data to be used as the value innerRadius the inner radius of the pie chart in percent threeD whether to render a 3D pie chart depth for a 3D chart, this parameter controls the height of the slices colorStep the step in the color palette variableRadius whether to render slices with variable radius variableDepth for a 3D chart, whether to render slices with variable depth chartTitle chart title, it can be NULL or FALSE for no title, a character string, a list of settings created with amText, or a list with two fields: text, a list of settings created with amText, and align, can be "left", "right" or "center" theme theme, NULL or one of "dataviz", "material", "kelly", "dark", "moonrisekingdom", "frozen", "spiritedaway", "patterns", "microchart" animated Boolean, whether to animate the rendering of the graphic backgroundColor a color for the chart background; it can be given by the name of a R color, the name of a CSS color, e.g. 
"lime" or "olive", an HEX code like "#ff009a", a RGB code like "rgb(255,100,39)", or a HSL code like "hsl(360,11,255)" legend either a logical value, whether to display the legend, or a list of settings for the legend created with amLegend caption NULL or FALSE for no caption, a formatted text created with amText, or a list with two fields: text, a list created with amText, and align, can be "left", "right" or "center" image option to include an image at a corner of the chart; NULL or FALSE for no image, otherwise a named list with four possible fields: the field image (required) is a list created with amImage, the field position can be "topleft", "topright", "bottomleft" or "bottomright", the field hjust defines the horizontal adjustment, and the field vjust defines the vertical adjustment width the width of the chart, e.g. "600px" or "80%"; ignored if the chart is displayed in Shiny, in which case the width is given in amChart4Output height the height of the chart, e.g. "400px"; ignored if the chart is displayed in Shiny, in which case the height is given in amChart4Output export logical, whether to enable the export menu chartId a HTML id for the chart elementId a HTML id for the container of the chart; ignored if the chart is displayed in Shiny, in which case the id is given by the Shiny id Examples library(rAmCharts4) dat <- data.frame( country = c( "Lithuania", "Czechia", "Ireland", "Germany", "Australia", "Austria" ), value = c(260, 230, 200, 165, 139, 128) ) amPieChart( data = dat, category = "country", value = "value", variableRadius = TRUE ) # shiny app demonstrating the options #### library(rAmCharts4) library(shiny) dat <- data.frame( country = c( "Lithuania", "Czechia", "Ireland", "Germany", "Australia", "Austria" ), value = c(260, 230, 200, 165, 139, 128) ) ui <- fluidPage( sidebarLayout( sidebarPanel( sliderInput( "innerRadius", "Inner radius", min = 0, max = 60, value = 0, step = 20 ), checkboxInput("variableRadius", "Variable radius", TRUE),
checkboxInput("threeD", "3D"), conditionalPanel( "input.threeD", checkboxInput("variableDepth", "Variable depth") ) ), mainPanel( amChart4Output("piechart", height = "500px") ) ) ) server <- function(input, output, session){ piechart <- reactive({ amPieChart( data = dat, category = "country", value = "value", innerRadius = input[["innerRadius"]], threeD = input[["threeD"]], variableDepth = input[["variableDepth"]], depth = ifelse(input[["variableDepth"]], 300, 10), variableRadius = input[["variableRadius"]], theme = "dark" ) }) output[["piechart"]] <- renderAmChart4({ piechart() }) } if(interactive()){ shinyApp(ui, server) } amRadialBarChart HTML widget displaying a radial bar chart Description Create a HTML widget displaying a radial bar chart. Usage amRadialBarChart( data, data2 = NULL, category, values, valueNames = NULL, showValues = TRUE, innerRadius = 50, yLimits = NULL, expandY = 5, valueFormatter = "#.", chartTitle = NULL, theme = NULL, animated = TRUE, draggable = FALSE, tooltip = NULL, columnStyle = NULL, bullets = NULL, alwaysShowBullets = FALSE, backgroundColor = NULL, cellWidth = NULL, columnWidth = NULL, xAxis = NULL, yAxis = NULL, scrollbarX = FALSE, scrollbarY = FALSE, legend = NULL, caption = NULL, image = NULL, button = NULL, cursor = FALSE, width = NULL, height = NULL, export = FALSE, chartId = NULL, elementId = NULL ) Arguments data a dataframe data2 NULL or a dataframe used to update the data with the button; its column names must include the column names of data given in values, it must have the same number of rows as data and its rows must be in the same order as those of data category name of the column of data to be used on the category axis values name(s) of the column(s) of data to be used on the value axis valueNames names of the values variables, to appear in the legend; NULL to use values as names, otherwise a named list of the form list(value1 = "ValueName1", value2 = "ValueName2", ...) where value1, value2, ... 
are the column names given in values and "ValueName1", "ValueName2", ... are the desired names to appear in the legend; these names can also appear in the tooltips: they are substituted to the string {name} in the formatting string passed on to the tooltip (see the second example of amBarChart) showValues logical, whether to display the values on the chart innerRadius inner radius of the chart, a percentage (between 0 and 100 theoretically, but in practice it should be between 30 and 70) yLimits range of the y-axis, a vector of two values specifying the lower and the upper limits of the y-axis; NULL for default values expandY if yLimits = NULL, a percentage of the range of the y-axis used to expand this range valueFormatter a number formatting string; it is used to format the values displayed on the chart if showValues = TRUE, the values displayed in the cursor tooltips if cursor = TRUE, the labels of the y-axis unless you specify your own formatter in the labels field of the list passed on to the yAxis option, and the values displayed in the tooltips unless you specify your own tooltip text (see the first example for the way to set a number formatter in the tooltip text) chartTitle chart title, it can be NULL or FALSE for no title, a character string, a list of settings created with amText, or a list with two fields: text, a list of settings created with amText, and align, can be "left", "right" or "center" theme theme, NULL or one of "dataviz", "material", "kelly", "dark", "moonrisekingdom", "frozen", "spiritedaway", "patterns", "microchart" animated Boolean, whether to animate the rendering of the graphic draggable TRUE/FALSE to enable/disable dragging of all bars, otherwise a named list of the form list(value1 = TRUE, value2 = FALSE, ...) 
to enable/disable the dragging for each bar corresponding to a column given in values tooltip settings of the tooltips; NULL for default, FALSE for no tooltip, otherwise a named list of the form list(value1 = settings1, value2 = settings2, ...) where settings1, settings2, ... are lists created with amTooltip; this can also be a single list of settings that will be applied to each series, or just a string for the text to display in the tooltip columnStyle settings of the columns; NULL for default, otherwise a named list of the form list(value1 = settings1, value2 = settings2, ...) where settings1, settings2, ... are lists created with amColumn; this can also be a single list of settings that will be applied to each column bullets settings of the bullets; NULL for default, otherwise a named list of the form list(value1 = settings1, value2 = settings2, ...) where settings1, settings2, ... are lists created with amCircle, amTriangle or amRectangle; this can also be a single list of settings that will be applied to each series alwaysShowBullets logical, whether to always show the bullets; if FALSE, the bullets are shown only on hovering a column backgroundColor a color for the chart background; a color can be given by the name of a R color, the name of a CSS color, e.g.
"lime" or "fuchsia", an HEX code like "#ff009a", a RGB code like "rgb(255,100,39)", or a HSL code like "hsl(360,11,255)" cellWidth cell width in percent; for a simple bar chart, this is the width of the columns; for a grouped bar chart, this is the width of the clusters of columns; NULL for the default value columnWidth column width, a percentage of the cell width; set to 100 for a simple bar chart and use cellWidth to control the width of the columns; for a grouped bar chart, this controls the spacing between the columns within a cluster of columns; NULL for the default value xAxis settings of the category axis given as a list, or just a string for the axis title; the list of settings has three possible fields: a field title, a list of settings for the axis title created with amText, a field labels, a list of settings for the axis labels created with amAxisLabelsCircular, and a field adjust, a number defining the vertical adjustment of the axis (in pixels) yAxis settings of the value axis given as a list, or just a string for the axis title; the list of settings has five possible fields: a field title, a list of settings for the axis title created with amText, a field labels, a list of settings for the axis labels created with amAxisLabels, a field adjust, a number defining the horizontal adjustment of the axis (in pixels), a field gridLines, a list of settings for the grid lines created with amLine and a field breaks to control the axis breaks, an R object created with amAxisBreaks scrollbarX logical, whether to add a scrollbar for the category axis scrollbarY logical, whether to add a scrollbar for the value axis legend either a logical value, whether to display the legend, or a list of settings for the legend created with amLegend caption NULL or FALSE for no caption, a formatted text created with amText, or a list with two fields: text, a list created with amText, and align, can be "left", "right" or "center" image option to include an image at a corner of the 
chart; NULL or FALSE for no image, otherwise a named list with four possible fields: the field image (required) is a list created with amImage, the field position can be "topleft", "topright", "bottomleft" or "bottomright", the field hjust defines the horizontal adjustment, and the field vjust defines the vertical adjustment button NULL for the default, FALSE for no button, or a list of settings created with amButton; this button is used to replace the current data with data2 cursor option to add a cursor on the chart; FALSE for no cursor, TRUE for a cursor with default settings for the tooltips, or a list of settings created with amTooltip to set the style of the tooltips, or a list with three possible fields: a field tooltip, a list of tooltip settings created with amTooltip, a field extraTooltipPrecision, an integer, the number of additional decimals to display in the tooltips, and a field modifier, which defines a modifier for the values displayed in the tooltips; a modifier is some JavaScript code given as a string, which performs a modification of a string named text, e.g. modifier = "text = '>>>' + text;" width the width of the chart, e.g. "600px" or "80%"; ignored if the chart is displayed in Shiny, in which case the width is given in amChart4Output height the height of the chart, e.g.
"400px"; ignored if the chart is displayed in Shiny, in which case the height is given in amChart4Output export logical, whether to enable the export menu chartId a HTML id for the chart elementId a HTML id for the container of the chart; ignored if the chart is displayed in Shiny, in which case the id is given by the Shiny id Examples # a grouped radial bar chart #### set.seed(666) dat <- data.frame( country = c("USA", "China", "Japan", "Germany", "UK", "France"), visits = c(3025, 1882, 1809, 1322, 1122, 1114), income = rpois(6, 25), expenses = rpois(6, 20) ) amRadialBarChart( data = dat, data2 = dat, width = "600px", height = "600px", category = "country", values = c("income", "expenses"), valueNames = list(income = "Income", expenses = "Expenses"), showValues = FALSE, tooltip = amTooltip( textColor = "white", backgroundColor = "#101010", borderColor = "silver" ), draggable = TRUE, backgroundColor = "#30303d", columnStyle = list( income = amColumn( color = "darkmagenta", strokeColor = "#cccccc", strokeWidth = 2 ), expenses = amColumn( color = "darkred", strokeColor = "#cccccc", strokeWidth = 2 ) ), chartTitle = "Income and expenses per country", xAxis = list( labels = amAxisLabelsCircular( radius = -82, relativeRotation = 90 ) ), yAxis = list( labels = amAxisLabels(color = "orange"), gridLines = amLine(color = "whitesmoke", width = 1, opacity = 0.4), breaks = amAxisBreaks(values = seq(0, 40, by = 10)) ), yLimits = c(0, 40), valueFormatter = "#.#", caption = amText( text = "Year 2018", fontFamily = "Impact", fontSize = 18 ), theme = "dark") # just for fun #### dat <- data.frame( cluster = letters[1:6], y1 = rep(10, 6), y2 = rep(8, 6), y3 = rep(6, 6), y4 = rep(4, 6), y5 = rep(2, 6), y6 = rep(4, 6), y7 = rep(6, 6), y8 = rep(8, 6), y9 = rep(10, 6) ) amRadialBarChart( data = dat, width = "500px", height = "500px", innerRadius = 10, category = "cluster", values = paste0("y", 1:9), showValues = FALSE, tooltip = FALSE, draggable = FALSE, backgroundColor = "black", 
columnStyle = amColumn(strokeWidth = 1, strokeColor = "white"), cellWidth = 96, xAxis = list(labels = FALSE), yAxis = list(labels = FALSE, gridLines = FALSE), yLimits = c(0, 10), legend = FALSE, theme = "kelly") amRangeAreaChart HTML widget displaying a range area chart Description Create a HTML widget displaying a range area chart. Usage amRangeAreaChart( data, data2 = NULL, xValue, yValues, areas = NULL, hline = NULL, vline = NULL, xLimits = NULL, yLimits = NULL, expandX = 0, expandY = 5, Xformatter = ifelse(isDate, "yyyy-MM-dd", "#."), Yformatter = "#.", chartTitle = NULL, theme = NULL, animated = TRUE, draggable = FALSE, tooltip = NULL, bullets = NULL, alwaysShowBullets = FALSE, lineStyle = NULL, backgroundColor = NULL, xAxis = NULL, yAxis = NULL, scrollbarX = FALSE, scrollbarY = FALSE, legend = NULL, caption = NULL, image = NULL, button = NULL, cursor = FALSE, width = NULL, height = NULL, export = FALSE, chartId = NULL, elementId = NULL ) Arguments data a dataframe data2 NULL or a dataframe used to update the data with the button; its column names must include the column names of data given in yValues, it must have the same number of rows as data and its rows must be in the same order as those of data xValue name of the column of data to be used on the x-axis yValues a character matrix with two columns; each row corresponds to a range area and provides the names of two columns of data to be used as the limits of the range area areas an unnamed list of list of settings for the range areas; the n-th inner list of settings corresponds to the n-th row of the yValues matrix; each list of settings has three possible fields: name for the legend label, color for the color of the range area, and opacity for the opacity of the range area, a number between 0 and 1 hline an optional horizontal line to add to the chart; it must be a named list of the form list(value = h, line = settings) where h is the "intercept" and settings is a list of settings created with amLine 
vline an optional vertical line to add to the chart; it must be a named list of the form list(value = v, line = settings) where v is the "intercept" and settings is a list of settings created with amLine xLimits range of the x-axis, a vector of two values specifying the left and right limits of the x-axis; NULL for default values yLimits range of the y-axis, a vector of two values specifying the lower and upper limits of the y-axis; NULL for default values expandX if xLimits = NULL, a percentage of the range of the x-axis used to expand this range expandY if yLimits = NULL, a percentage of the range of the y-axis used to expand this range Xformatter a number formatting string if xValue is set to a numeric column of data; it is used to format the values displayed in the cursor tooltips if cursor = TRUE, the labels of the x-axis unless you specify your own formatter in the labels field of the list passed on to the xAxis option, and the values displayed in the tooltips unless you specify your own tooltip text; if xValue is set to a date column of data, this option should be set to a date formatting string, and it has an effect only on the values displayed in the tooltips (unless you specify your own tooltip text); formatting the dates on the x-axis is done via the labels field of the list passed on to the xAxis option Yformatter a number formatting string; it is used to format the values displayed in the cursor tooltips if cursor = TRUE, the labels of the y-axis unless you specify your own formatter in the labels field of the list passed on to the yAxis option, and the values displayed in the tooltips unless you specify your own tooltip text (see the first example of amBarChart for the way to set a number formatter in the tooltip text) chartTitle chart title, it can be NULL or FALSE for no title, a character string, a list of settings created with amText, or a list with two fields: text, a list of settings created with amText, and align, can be "left", "right" or 
"center" theme theme, NULL or one of "dataviz", "material", "kelly", "dark", "moonrisekingdom", "frozen", "spiritedaway", "patterns", "microchart" animated Boolean, whether to animate the rendering of the graphic draggable TRUE/FALSE to enable/disable dragging of all lines, otherwise a named list of the form list(yvalue1 = TRUE, yvalue2 = FALSE, ...) to enable/disable the dragging for each series corresponding to a column given in yValues tooltip settings of the tooltips; NULL for default, FALSE for no tooltip, otherwise a named list of the form list(yvalue1 = settings1, yvalue2 = settings2, ...) where settings1, settings2, ... are lists created with amTooltip; this can also be a single list of settings that will be applied to each series, or just a string for the text to display in the tooltip bullets settings of the bullets; NULL for default, otherwise a named list of the form list(yvalue1 = settings1, yvalue2 = settings2, ...) where settings1, settings2, ... are lists created with amCircle, amTriangle or amRectangle; this can also be a single list of settings that will be applied to each series alwaysShowBullets logical, whether the bullets should always be visible, or visible on hover only lineStyle settings of the lines; NULL for default, otherwise a named list of the form list(yvalue1 = settings1, yvalue2 = settings2, ...) where settings1, settings2, ...
are lists created with amLine; this can also be a single list of settings that will be applied to each line backgroundColor a color for the chart background xAxis settings of the x-axis given as a list, or just a string for the axis title; the list of settings has five possible fields: a field title, a list of settings for the axis title created with amText, a field labels, a list of settings for the axis labels created with amAxisLabels, a field adjust, a number defining the vertical adjustment of the axis (in pixels), a field gridLines, a list of settings for the grid lines created with amLine, and a field breaks to control the axis breaks, an R object created with amAxisBreaks yAxis settings of the y-axis given as a list, or just a string for the axis title; the list of settings has five possible fields: a field title, a list of settings for the axis title created with amText, a field labels, a list of settings for the axis labels created with amAxisLabels, a field adjust, a number defining the horizontal adjustment of the axis (in pixels), a field gridLines, a list of settings for the grid lines created with amLine, and a field breaks to control the axis breaks, an R object created with amAxisBreaks scrollbarX logical, whether to add a scrollbar for the x-axis scrollbarY logical, whether to add a scrollbar for the y-axis legend FALSE for no legend, TRUE for a legend with default settings, or a list of settings created with amLegend caption NULL or FALSE for no caption, a formatted text created with amText, or a list with two fields: text, a list created with amText, and align, can be "left", "right" or "center" image option to include an image at a corner of the chart; NULL or FALSE for no image, otherwise a named list with four possible fields: the field image (required) is a list created with amImage, the field position can be "topleft", "topright", "bottomleft" or "bottomright", the field hjust defines the horizontal adjustment, and the field vjust defines
the vertical adjustment button NULL for the default, FALSE for no button, or a list of settings created with amButton; this button is used to replace the current data with data2 cursor option to add a cursor on the chart; FALSE for no cursor, TRUE for a cursor for both axes with default settings for the axes tooltips, otherwise a named list with four possible fields: a field axes to specify the axes for which the cursor is requested, can be "x", "y", or "xy", a field tooltip to set the style of the axes tooltips, this must be a list of settings created with amTooltip, a field extraTooltipPrecision, a named list of the form list(x = i, y = j) where i and j are the desired numbers of additional decimals for the tooltips on the x-axis and on the y-axis respectively, and a field modifier, a list with two possible fields, x and y, which defines modifiers for the values displayed in the tooltips; a modifier is some JavaScript code given as a string, which performs a modification of a string named text, e.g. "text = '[font-style:italic]' + text + '[/]';"; see the example for an example of modifier width the width of the chart, e.g. "600px" or "80%"; ignored if the chart is displayed in Shiny, in which case the width is given in amChart4Output height the height of the chart, e.g. "400px"; ignored if the chart is displayed in Shiny, in which case the height is given in amChart4Output export logical, whether to enable the export menu chartId a HTML id for the chart elementId a HTML id for the container of the chart; ignored if the chart is displayed in Shiny, in which case the id is given by the Shiny id Note A color can be given by the name of a R color, the name of a CSS color, e.g. "crimson" or "silver", an HEX code like "#ff009a", a RGB code like "rgb(255,100,39)", or a HSL code like "hsl(360,11,255)".
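The yValues matrix, the areas list and the cursor list described in the Arguments above are all plain R objects. The sketch below builds them in base R; the column names y1, y2, z1, z2 are illustrative placeholders, and the tooltip field of cursor (normally created with amTooltip) is omitted so the sketch has no package dependency.

```r
# yValues: one row per range area; each row names the two columns of
# `data` that bound the area (y1/y2/z1/z2 are placeholder names).
yValues <- rbind(
  c("y1", "y2"),
  c("z1", "z2")
)

# areas: an unnamed list with one inner list per row of yValues,
# in the same order; opacity is a number between 0 and 1.
areas <- list(
  list(name = "y1-y2", color = "blue", opacity = 0.2),
  list(name = "z1-z2", color = "red",  opacity = 0.2)
)

# cursor: in its most general form, a named list with the fields
# described above; modifier holds JavaScript code as a string.
cursor <- list(
  axes = "xy",
  extraTooltipPrecision = list(x = 0, y = 2),
  modifier = list(y = "text = '[font-style:italic]' + text + '[/]';")
)
```

These objects would then be passed on to amRangeAreaChart through its yValues, areas and cursor arguments.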
Examples set.seed(666) x <- 1:20 dat <- data.frame( x = x, y1 = rnorm(20, sd = 1.5), y2 = rnorm(20, 10, sd = 1.5), z1 = rnorm(20, x+5, sd = 1.5), z2 = rnorm(20, x+15, sd = 1.5) ) amRangeAreaChart( data = dat, width = "700px", xValue = "x", yValues = rbind(c("y1", "y2"), c("z1", "z2")), xLimits = c(1, 20), draggable = TRUE, backgroundColor = "#30303d", tooltip = list( y1 = amTooltip( text = "[bold]upper: {openValueY}\nlower: {valueY}[/]", textColor = "yellow", backgroundColor = "darkmagenta", backgroundOpacity = 0.8, borderColor = "rebeccapurple", scale = 0.9 ), y2 = amTooltip( text = "[bold]upper: {valueY}\nlower: {openValueY}[/]", textColor = "yellow", backgroundColor = "darkmagenta", backgroundOpacity = 0.8, borderColor = "rebeccapurple", scale = 0.9 ), z1 = amTooltip( text = "[bold]upper: {openValueY}\nlower: {valueY}[/]", textColor = "white", backgroundColor = "darkred", backgroundOpacity = 0.8, borderColor = "crimson", scale = 0.9 ), z2 = amTooltip( text = "[bold]upper: {valueY}\nlower: {openValueY}[/]", textColor = "white", backgroundColor = "darkred", backgroundOpacity = 0.8, borderColor = "crimson", scale = 0.9 ) ), bullets = list( y1 = amCircle(color = "yellow", strokeColor = "olive"), y2 = amCircle(color = "yellow", strokeColor = "olive"), z1 = amCircle(color = "orangered", strokeColor = "darkred"), z2 = amCircle(color = "orangered", strokeColor = "darkred") ), alwaysShowBullets = FALSE, lineStyle = list( y1 = amLine(color = "yellow", width = 3, tensionX = 0.8, tensionY = 0.8), y2 = amLine(color = "yellow", width = 3, tensionX = 0.8, tensionY = 0.8), z1 = amLine(color = "orangered", width = 3, tensionX = 0.8, tensionY = 0.8), z2 = amLine(color = "orangered", width = 3, tensionX = 0.8, tensionY = 0.8) ), areas = list( list(name = "y1-y2", color = "blue", opacity = 0.2), list(name = "z1-z2", color = "red", opacity = 0.2) ), cursor = list( tooltip = amTooltip( backgroundColor = "silver" ), extraTooltipPrecision = list(x = 0, y = 2), modifier = list(y = "text 
= parseFloat(text).toFixed(2);") ), chartTitle = amText(text = "Range area chart", color = "whitesmoke", fontWeight = "bold"), xAxis = list(title = amText(text = "Observation", fontSize = 20, color = "silver"), labels = amAxisLabels(color = "whitesmoke", fontSize = 17), adjust = 5), yAxis = list(title = amText(text = "Value", fontSize = 20, color = "silver"), labels = amAxisLabels(color = "whitesmoke", fontSize = 17), gridLines = amLine(color = "antiquewhite", opacity = 0.4, width = 1)), Xformatter = "#", Yformatter = "#.00", image = list( image = amImage( href = tinyIcon("react", backgroundColor = "transparent"), width = 40, height = 40 ), position = "bottomleft", hjust = 2, vjust = -2 ), theme = "dark") amScatterChart HTML widget displaying a scatter chart Description Create a HTML widget displaying a scatter chart. Usage amScatterChart( data, data2 = NULL, xValue, yValues, yValueNames = NULL, hline = NULL, vline = NULL, xLimits = NULL, yLimits = NULL, expandX = 0, expandY = 5, Xformatter = ifelse(isDate, "yyyy-MM-dd", "#."), Yformatter = "#.", trend = FALSE, chartTitle = NULL, theme = NULL, animated = TRUE, draggable = FALSE, tooltip = NULL, pointsStyle = NULL, backgroundColor = NULL, xAxis = NULL, yAxis = NULL, scrollbarX = FALSE, scrollbarY = FALSE, legend = NULL, caption = NULL, image = NULL, button = NULL, cursor = FALSE, zoomButtons = FALSE, width = NULL, height = NULL, export = FALSE, chartId = NULL, elementId = NULL ) Arguments data a dataframe data2 NULL or a dataframe used to update the data with the button; its column names must include the column names of data given in yValues as well as the column name given in xValue; moreover it must have the same number of rows as data and its rows must be in the same order as those of data xValue name of the column of data to be used on the x-axis yValues name(s) of the column(s) of data to be used on the y-axis yValueNames names of the variables on the y-axis, to appear in the legend; NULL to use yValues as 
names, otherwise a named list of the form list(yvalue1 = "ValueName1", yvalue2 = "ValueName2", ...) where yvalue1, yvalue2, ... are the column names given in yValues and "ValueName1", "ValueName2", ... are the desired names to appear in the legend hline an optional horizontal line to add to the chart; it must be a named list of the form list(value = h, line = settings) where h is the "intercept" and settings is a list of settings created with amLine vline an optional vertical line to add to the chart; it must be a named list of the form list(value = v, line = settings) where v is the "intercept" and settings is a list of settings created with amLine xLimits range of the x-axis, a vector of two values specifying the left and the right limits of the x-axis; NULL for default values yLimits range of the y-axis, a vector of two values specifying the lower and the upper limits of the y-axis; NULL for default values expandX if xLimits = NULL, a percentage of the range of the x-axis used to expand this range expandY if yLimits = NULL, a percentage of the range of the y-axis used to expand this range Xformatter a number formatting string if xValue is set to a numeric column of data; it is used to format the values displayed in the cursor tooltips if cursor = TRUE, the labels of the x-axis unless you specify your own formatter in the labels field of the list passed on to the xAxis option, and the values displayed in the tooltips unless you specify your own tooltip text; if xValue is set to a date column of data, this option should be set to a date formatting string, and it has an effect only on the values displayed in the tooltips (unless you specify your own tooltip text); formatting the dates on the x-axis is done via the labels field of the list passed on to the xAxis option Yformatter a number formatting string; it is used to format the values displayed in the cursor tooltips if cursor = TRUE, the labels of the y-axis unless you specify your own formatter in the labels 
field of the list passed on to the yAxis option, and the values displayed in the tooltips unless you specify your own tooltip text (see the first example of amBarChart for the way to set a number formatter in the tooltip text) trend option to request trend lines and to set their settings; FALSE for no trend line, otherwise a named list of the form list(yvalue1 = trend1, yvalue2 = trend2, ...) where trend1, trend2, ... are lists with the following fields: method the modelling method, can be "lm", "lm.js", "nls", "nlsLM", or "loess"; "lm.js" performs a polynomial regression in JavaScript, its advantage is that the fitted regression line is updated when the points are dragged formula a formula passed on to the modelling function for methods "lm", "nls" or "nlsLM"; the left-hand side of this formula must always be y, and its right-hand side must be a symbolic expression depending on x only, e.g. y ~ x, y ~ x + I(x^2), y ~ poly(x,2) interval effective for methods "lm" and "lm.js" only; a list with five possible fields: type can be "confidence" or "prediction", level is the confidence or prediction level (number between 0 and 1), color is the color of the shaded area, opacity is the opacity of the shaded area (number between 0 and 1), tensionX and tensionY to control the smoothing (see amLine) order the order of the polynomial regression when method = "lm.js" method.args a list of additional arguments passed on to the modelling function defined by method for methods "nls", "nlsLM" or "loess", e.g. method.args = list(span = 0.3) for method "loess" style a list of settings for the trend line created with amLine. It is also possible to request the same kind of trend lines for all series given by the yValues argument, by passing a list of the form list("_all" = trendconfig), e.g.
list("_all" = list(method = "lm", formula = y ~ 0+x, style = amLine())) chartTitle chart title, it can be NULL or FALSE for no title, a character string, a list of settings created with amText, or a list with two fields: text, a list of settings created with amText, and align, can be "left", "right" or "center" theme theme, NULL or one of "dataviz", "material", "kelly", "dark", "moonrisekingdom", "frozen", "spiritedaway", "patterns", "microchart" animated Boolean, whether to animate the rendering of the graphic draggable TRUE/FALSE to enable/disable dragging of all lines, otherwise a named list of the form list(yvalue1 = TRUE, yvalue2 = FALSE, ...) to enable/disable the dragging for each series corresponding to a column given in yValues tooltip settings of the tooltips; NULL for default, FALSE for no tooltip, otherwise a named list of the form list(yvalue1 = settings1, yvalue2 = settings2, ...) where settings1, settings2, ... are lists created with amTooltip; this can also be a single list of settings that will be applied to each series, or just a string for the text to display in the tooltip pointsStyle settings of the points style; NULL for default, otherwise a named list of the form list(yvalue1 = settings1, yvalue2 = settings2, ...) where settings1, settings2, ... are lists created with amCircle, amTriangle or amRectangle; this can also be a single list of settings that will be applied to each series backgroundColor a color for the chart background; it can be given by the name of a R color, the name of a CSS color, e.g.
"aqua" or "indigo", an HEX code like "#ff009a", a RGB code like "rgb(255,100,39)", or a HSL code like "hsl(360,11,255)" xAxis settings of the x-axis given as a list, or just a string for the axis title; the list of settings has five possible fields: a field title, a list of settings for the axis title created with amText, a field labels, a list of settings for the axis labels created with amAxisLabels, a field adjust, a number defining the vertical adjustment of the axis (in pixels), a field gridLines, a list of settings for the grid lines created with amLine, and a field breaks to control the axis breaks, an R object created with amAxisBreaks yAxis settings of the y-axis given as a list, or just a string for the axis title; the list of settings has five possible fields: a field title, a list of settings for the axis title created with amText, a field labels, a list of settings for the axis labels created with amAxisLabels, a field adjust, a number defining the horizontal adjustment of the axis (in pixels), a field gridLines, a list of settings for the grid lines created with amLine, and a field breaks to control the axis breaks, an R object created with amAxisBreaks scrollbarX logical, whether to add a scrollbar for the x-axis scrollbarY logical, whether to add a scrollbar for the y-axis legend FALSE for no legend, TRUE for a legend with default settings, or a list of settings created with amLegend caption NULL or FALSE for no caption, a formatted text created with amText, or a list with two fields: text, a list created with amText, and align, can be "left", "right" or "center" image option to include an image at a corner of the chart; NULL or FALSE for no image, otherwise a named list with four possible fields: the field image (required) is a list created with amImage, the field position can be "topleft", "topright", "bottomleft" or "bottomright", the field hjust defines the horizontal adjustment, and the field vjust defines the vertical adjustment button NULL
for the default, FALSE for no button, or a list of settings created with amButton; this button is used to replace the current data with data2 cursor option to add a cursor on the chart; FALSE for no cursor, TRUE for a cursor for both axes with default settings for the axes tooltips, otherwise a named list with four possible fields: a field axes to specify the axes for which the cursor is requested, can be "x", "y", or "xy", a field tooltip to set the style of the axes tooltips, this must be a list of settings created with amTooltip, a field extraTooltipPrecision, a named list of the form list(x = i, y = j) where i and j are the desired numbers of additional decimals for the tooltips on the x-axis and on the y-axis respectively, and a field modifier, a list with two possible fields, x and y, which defines modifiers for the values displayed in the tooltips; a modifier is some JavaScript code given as a string, which performs a modification of a string named text; see the first example of amLineChart for an example of modifier zoomButtons a Boolean value, or a list created with amZoomButtons width the width of the chart, e.g. "600px" or "80%"; ignored if the chart is displayed in Shiny, in which case the width is given in amChart4Output height the height of the chart, e.g.
"400px"; ignored if the chart is displayed in Shiny, in which case the height is given in amChart4Output export logical, whether to enable the export menu chartId a HTML id for the chart elementId a HTML id for the container of the chart; ignored if the chart is displayed in Shiny, in which case the id is given by the Shiny id Examples # iris data: petal widths #### dat <- iris dat$obs <- rep(1:50, 3) dat <- reshape2::dcast(dat, obs ~ Species, value.var = "Petal.Width") amScatterChart( data = dat, width = "700px", xValue = "obs", yValues = c("setosa", "versicolor", "virginica"), draggable = FALSE, backgroundColor = "#30303d", pointsStyle = list( setosa = amCircle(color = "orange", strokeColor = "red"), versicolor = amCircle(color = "cyan", strokeColor = "blue"), virginica = amCircle(color = "palegreen", strokeColor = "darkgreen") ), tooltip = "obs: {valueX}\nvalue: {valueY}", chartTitle = amText(text = "Iris data", color = "whitesmoke"), xAxis = list(title = amText(text = "Observation", fontSize = 21, color = "silver"), labels = amAxisLabels(color = "whitesmoke", fontSize = 17)), yAxis = list(title = amText(text = "Petal width", fontSize = 21, color = "silver"), labels = amAxisLabels(color = "whitesmoke", fontSize = 14), gridLines = amLine(color = "whitesmoke", opacity = 0.4, width = 1)), Xformatter = "#", Yformatter = "#.0", caption = amText(text = "[font-style:italic]rAmCharts4[/]", color = "yellow"), theme = "dark") # iris data: petal widths vs petal lengths dat <- iris dat$obs <- rep(1:50, 3) dat <- reshape2::dcast(dat, obs + Petal.Length ~ Species, value.var = "Petal.Width") amScatterChart( data = dat, width = "700px", xValue = "Petal.Length", yValues = c("setosa", "versicolor", "virginica"), draggable = FALSE, backgroundColor = "#30303d", pointsStyle = list( setosa = amCircle(color = "orange", strokeColor = "red"), versicolor = amCircle(color = "cyan", strokeColor = "blue"), virginica = amCircle(color = "palegreen", strokeColor = "darkgreen") ), tooltip = 
list( setosa = amTooltip( text = "length: {valueX}\nwidth: {valueY}", backgroundColor = "orange", borderColor = "red", textColor = "black" ), versicolor = amTooltip( text = "length: {valueX}\nwidth: {valueY}", backgroundColor = "cyan", borderColor = "blue", textColor = "black" ), virginica = amTooltip( text = "length: {valueX}\nwidth: {valueY}", backgroundColor = "palegreen", borderColor = "darkgreen", textColor = "black" ) ), chartTitle = amText(text = "Iris data", color = "silver"), xAxis = list(title = amText(text = "Petal length", fontSize = 19, color = "gold"), labels = amAxisLabels(color = "whitesmoke", fontSize = 17)), yAxis = list(title = amText(text = "Petal width", fontSize = 19, color = "gold"), labels = amAxisLabels(color = "whitesmoke", fontSize = 17), gridLines = amLine(color = "whitesmoke", opacity = 0.4, width = 1)), cursor = list( tooltip = amTooltip(backgroundColor = "lightgray"), extraTooltipPrecision = list(x = 1, y = 1) ), caption = amText(text = "[font-style:italic]rAmCharts4[/]", color = "yellow"), theme = "dark") # scatter chart with trend lines #### Asym = 5; R0 = 1; lrc = -3/4 x <- seq(-.3, 5, len = 101) y0 <- Asym + (R0-Asym) * exp(-exp(lrc)* x) dat <- data.frame( x = x, y1 = y0 + rnorm(101, sd = 0.33), y2 = y0 + rnorm(101, sd = 0.33) + 2 ) amScatterChart( data = dat, width = "700px", xValue = "x", yValues = c("y1", "y2"), trend = list("_all" = list( method = "nls", formula = y ~ SSasymp(x, Asym, R0, lrc), style = amLine() )), draggable = FALSE, pointsStyle = list( y1 = amTriangle( width = 8, height = 8, strokeColor = "yellow", strokeWidth = 1 ), y2 = amTriangle( width = 8, height = 8, strokeColor = "chartreuse", strokeWidth = 1, rotation = 180 ) ), chartTitle = amText(text = "Asymptotic regression model"), xAxis = "x", yAxis = "y", Xformatter = "#.###", Yformatter = "#.", theme = "kelly", zoomButtons = TRUE) amSegment Segment style Description Create a list of settings for a segment. 
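As a minimal sketch of the call documented below, a segment style is created in a single line; the color name here is an illustrative CSS color, not a default.

```r
# A hedged example: a custom segment style, two pixels wide.
# "crimson" is an illustrative CSS color name.
seg <- amSegment(color = "crimson", width = 2)
```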
Usage amSegment(color = NULL, width = 1) Arguments color color of the segment; this can be a color adapter width width of the segment Value A list of settings for a segment. Note A color can be given by the name of a R color, the name of a CSS color, e.g. "lime" or "indigo", an HEX code like "#ff009a", a RGB code like "rgb(255,100,39)", or a HSL code like "hsl(360,11,255)". amStackedBarChart HTML widget displaying a stacked bar chart Description Create an HTML widget displaying a stacked bar chart. Usage amStackedBarChart( data, data2 = NULL, category, stacks, seriesNames = NULL, colors = NULL, hline = NULL, yLimits = NULL, expandY = 5, valueFormatter = "#.", chartTitle = NULL, theme = NULL, animated = TRUE, tooltip = NULL, threeD = FALSE, backgroundColor = NULL, cellWidth = NULL, columnWidth = NULL, xAxis = NULL, yAxis = NULL, scrollbarX = FALSE, scrollbarY = FALSE, legend = NULL, caption = NULL, image = NULL, button = NULL, cursor = FALSE, width = NULL, height = NULL, export = FALSE, chartId = NULL, elementId = NULL ) Arguments data a dataframe data2 NULL or a dataframe used to update the data with the button; its column names must include the column names of data given in series, it must have the same number of rows as data and its rows must be in the same order as those of data category name of the column of data to be used on the category axis stacks a list of stacks; a stack is a character vector of the form c("series3", "series1", "series2"), and the first element of a stack corresponds to the bottom of the column seriesNames names of the series variables (the variables which appear in the stacks), to appear in the legend; NULL to use the variables given in stacks as names, otherwise a named list of the form list(series1 = "SeriesName1", series2 = "SeriesName2", ...) where series1, series2, ... are the column names given in stacks and "SeriesName1", "SeriesName2", ...
are the desired names to appear in the legend; these names can also appear in the tooltips: they are substituted to the string {name} in the formatting string passed on to the tooltip colors colors of the bars; NULL for automatic colors based on the theme, otherwise a named list of the form list(series1 = Color1, series2 = Color2, ...) where series1, series2, ... are the column names given in stacks hline an optional horizontal line to add to the chart; it must be a named list of the form list(value = h, line = settings) where h is the "intercept" and settings is a list of settings created with amLine yLimits range of the y-axis, a vector of two values specifying the lower and the upper limits of the y-axis; NULL for default values expandY if yLimits = NULL, a percentage of the range of the y-axis used to expand this range valueFormatter a number formatting string; it is used to format the values displayed in the cursor tooltips if cursor = TRUE, the labels of the y-axis unless you specify your own formatter in the labels field of the list passed on to the yAxis option, and the values displayed in the tooltips unless you specify your own tooltip text chartTitle chart title, it can be NULL or FALSE for no title, a character string, a list of settings created with amText, or a list with two fields: text, a list of settings created with amText, and align, can be "left", "right" or "center" theme theme, NULL or one of "dataviz", "material", "kelly", "dark", "moonrisekingdom", "frozen", "spiritedaway", "patterns", "microchart" animated Boolean, whether to animate the rendering of the graphic tooltip settings of the tooltips; NULL for default, FALSE for no tooltip, otherwise a named list of the form list(series1 = settings1, series2 = settings2, ...) where settings1, settings2, ...
are lists created with amTooltip; this can also be a single list of settings that will be applied to each series, or just a string for the text to display in the tooltip threeD logical, whether to render the columns in 3D backgroundColor a color for the chart background; a color can be given by the name of a R color, the name of a CSS color, e.g. "rebeccapurple" or "fuchsia", an HEX code like "#ff009a", a RGB code like "rgb(255,100,39)", or a HSL code like "hsl(360,11,255)" cellWidth cell width in percent; for a simple bar chart, this is the width of the columns; for a grouped bar chart, this is the width of the clusters of columns; NULL for the default value columnWidth column width, a percentage of the cell width; set to 100 for a simple bar chart and use cellWidth to control the width of the columns; for a grouped bar chart, this controls the spacing between the columns within a cluster of columns; NULL for the default value xAxis settings of the category axis given as a list, or just a string for the axis title; the list of settings has three possible fields: a field title, a list of settings for the axis title created with amText, a field labels, a list of settings for the axis labels created with amAxisLabels, and a field adjust, a number defining the vertical adjustment of the axis (in pixels) yAxis settings of the value axis given as a list, or just a string for the axis title; the list of settings has five possible fields: a field title, a list of settings for the axis title created with amText, a field labels, a list of settings for the axis labels created with amAxisLabels, a field adjust, a number defining the horizontal adjustment of the axis (in pixels), a field gridLines, a list of settings for the grid lines created with amLine and a field breaks to control the axis breaks, an R object created with amAxisBreaks scrollbarX logical, whether to add a scrollbar for the category axis scrollbarY logical, whether to add a scrollbar for the value axis
legend either a logical value, whether to display the legend, or a list of settings for the legend created with amLegend caption NULL or FALSE for no caption, a formatted text created with amText, or a list with two fields: text, a list created with amText, and align, can be "left", "right" or "center" image option to include an image at a corner of the chart; NULL or FALSE for no image, otherwise a named list with four possible fields: the field image (required) is a list created with amImage, the field position can be "topleft", "topright", "bottomleft" or "bottomright", the field hjust defines the horizontal adjustment, and the field vjust defines the vertical adjustment button NULL for the default, FALSE for no button, or a list of settings created with amButton; this button is used to replace the current data with data2 cursor option to add a cursor on the chart; FALSE for no cursor, TRUE for a cursor with default settings for the tooltips, or a list of settings created with amTooltip to set the style of the tooltips, or a list with three possible fields: a field tooltip, a list of tooltip settings created with amTooltip, a field extraTooltipPrecision, an integer, the number of additional decimals to display in the tooltips, and a field modifier, which defines a modifier for the values displayed in the tooltips; a modifier is some JavaScript code given as a string, which performs a modification of a string named text, e.g. modifier = "text = '>>>' + text;" width the width of the chart, e.g. "600px" or "80%"; ignored if the chart is displayed in Shiny, in which case the width is given in amChart4Output height the height of the chart, e.g.
"400px"; ignored if the chart is displayed in Shiny, in which case the height is given in amChart4Output export logical, whether to enable the export menu chartId a HTML id for the chart elementId a HTML id for the container of the chart; ignored if the chart is displayed in Shiny, in which case the id is given by the Shiny id Examples library(rAmCharts4) dat <- data.frame( year = c("2004", "2005", "2006"), europe = c(10, 15, 20), asia = c( 9, 10, 13), africa = c( 5, 6, 8), meast = c( 7, 8, 12), namerica = c(12, 15, 19), samerica = c(10, 16, 14) ) dat2 <- data.frame( year = c("2004", "2005", "2006"), europe = c( 7, 12, 16), asia = c( 8, 13, 10), africa = c( 7, 7, 10), meast = c( 8, 6, 14), namerica = c(10, 17, 17), samerica = c(12, 18, 17) ) stacks <- list( c("europe", "namerica"), c("asia", "africa", "meast", "samerica") ) seriesNames <- list( europe = "Europe", namerica = "North America", asia = "Asia", africa = "Africa", meast = "Middle East", samerica = "South America" ) amStackedBarChart( dat, data2 = dat2, category = "year", stacks = stacks, seriesNames = seriesNames, yLimits = c(0, 60), chartTitle = amText( "Stacked bar chart", fontFamily = "Trebuchet MS", fontSize = 30, fontWeight = "bold" ), xAxis = "Year", yAxis = "A quantity...", theme = "kelly", button = amButton("Update", position = 1), height = 450 ) amText Text Description Create a list of settings for a text. Usage amText( text, color = NULL, fontSize = NULL, fontWeight = "normal", fontFamily = NULL ) Arguments text the text to display, a character string color color of the text; it can be given by the name of a R color, the name of a CSS color, e.g. "crimson", an HEX code like "#ff009a", a RGB code like "rgb(255,100,39)", or a HSL code like "hsl(360,11,255)" fontSize size of the text fontWeight font weight of the text, it can be "normal", "bold", "bolder", "lighter", or a number in seq(100, 900, by = 100) fontFamily font family Value A list of settings for a text. 
Note There is no option for the font style; you can get an italicized text by entering text = "[font-style:italic]Your text[/]". amTooltip Tooltip Description Create a list of settings for a tooltip. Usage amTooltip( text, textColor = NULL, textAlign = "middle", backgroundColor = NULL, backgroundOpacity = 0.6, borderColor = NULL, borderWidth = 2, pointerLength = 10, scale = 1, auto = FALSE ) Arguments text text to display in the tooltip; this should be a formatting string textColor text color textAlign alignment of the text, can be "start", "middle", or "end" backgroundColor background color of the tooltip backgroundOpacity background opacity borderColor color of the border of the tooltip borderWidth width of the border of the tooltip pointerLength length of the pointer scale scale factor auto logical, whether to use automatic background color and text color Value A list of settings for a tooltip. Note A color can be given by the name of a R color, the name of a CSS color, e.g. "transparent" or "fuchsia", an HEX code like "#ff009a", a RGB code like "rgb(255,100,39)", or a HSL code like "hsl(360,11,255)". amZoomButtons Zoom buttons Description Zoom buttons. Usage amZoomButtons( halign = "left", valign = "top", marginH = 5, marginV = 5, zoomFactor = 0.1 ) Arguments halign "left" or "right" valign "top" or "bottom" marginH horizontal margin marginV vertical margin zoomFactor zoom factor Value A list of parameters for zoom buttons, for usage in amLineChart or amScatterChart. rAmCharts4-adapters Adapters Description Adapters allow finer control of settings such as the colors of the columns of a bar chart or the colors of the points of a scatter chart.
Usage amColorAdapterFromVector(colors) amColorAdapterFromCuts(cuts, colors, value) Arguments colors a vector of colors cuts a vector of cut points (sorted increasingly) value a mathematical expression of the variables X and Y given as JavaScript code; the simplest examples are "X" and "Y", a more elaborate example is "Math.sqrt(X**2+Y**2)" (don’t forget that the power in JavaScript is ’**’, not ’^’!); see the examples Examples # bar chart with individual colors #### dat <- data.frame( country = c("USA", "China", "Japan", "Germany", "UK", "France"), visits = c(3025, 1882, 1809, 1322, 1122, 1114) ) amBarChart( data = dat, width = "600px", category = "country", values = "visits", showValues = FALSE, tooltip = FALSE, columnStyle = amColumn( color = amColorAdapterFromVector(hcl.colors(6, "Viridis")), opacity = 0.7, strokeColor = amColorAdapterFromVector(hcl.colors(6, "Cividis")), strokeWidth = 4 ), bullets = amCircle( color = amColorAdapterFromVector(hcl.colors(6, "Viridis")), opacity = 1, strokeColor = amColorAdapterFromVector(hcl.colors(6, "Cividis")), strokeWidth = 4, radius = 12 ), alwaysShowBullets = TRUE, chartTitle = amText(text = "Visits per country", fontSize = 22, color = "orangered"), backgroundColor = "rgb(164,167,174)", xAxis = list(title = amText(text = "Country", color = "maroon")), yAxis = list( title = amText(text = "Visits", color = "maroon"), gridLines = amLine(color = "white", width = 1, dash = "3,3") ), yLimits = c(0, 4000), valueFormatter = "#,###.", caption = amText(text = "Year 2018", color = "red") ) # usage example of amColorAdapterFromCuts #### set.seed(314159) dat <- data.frame( x = rnorm(200), y = rnorm(200) ) amScatterChart( data = dat, width = "500px", height = "500px", xValue = "x", yValues = "y", xLimits = c(-3,3), yLimits = c(-3,3), draggable = FALSE, backgroundColor = "#30303d", pointsStyle = amCircle( color = amColorAdapterFromCuts( cuts = c(-2, -1, 1, 2), colors = c("red", "green", "blue", "green", "red"), value = "Y" ), opacity = 
0.5, strokeColor = amColorAdapterFromCuts( cuts = c(-2, -1, 1, 2), colors = c("darkred", "darkgreen", "darkblue", "darkgreen", "darkred"), value = "Y" ) ), xAxis = list( breaks = amAxisBreaks(seq(-3, 3, by=1)), gridLines = amLine(opacity = 0.3, width = 1) ), yAxis = list( breaks = amAxisBreaks(seq(-3, 3, by=1)), gridLines = amLine(opacity = 0.3, width = 1) ), tooltip = FALSE, caption = amText(text = "[font-style:italic]rAmCharts4[/]", color = "yellow"), theme = "dark") # other usage example of amColorAdapterFromCuts: linear gradient #### set.seed(314159) dat <- data.frame( x = rnorm(500), y = rnorm(500) ) amScatterChart( data = dat, width = "500px", height = "500px", xValue = "x", yValues = "y", xLimits = c(-3,3), yLimits = c(-3,3), draggable = FALSE, backgroundColor = "#30303d", pointsStyle = amCircle( radius = 4, strokeWidth = 1, color = amColorAdapterFromCuts( cuts = seq(-3, 3, length.out = 121), colors = colorRampPalette( c("red","orangered","blue","white","blue","orangered","red") )(122), value = "X" ), opacity = 0.75, strokeColor = amColorAdapterFromCuts( cuts = seq(-3, 3, length.out = 121), colors = colorRampPalette( c("red","orangered","blue","white","blue","orangered","red") )(122), value = "X" ) ), xAxis = list( breaks = amAxisBreaks(seq(-3, 3, by=1)), gridLines = amLine(opacity = 0.3, width = 1) ), yAxis = list( breaks = amAxisBreaks(seq(-3, 3, by=1)), gridLines = amLine(opacity = 0.3, width = 1) ), tooltip = FALSE, caption = amText(text = "[font-style:italic]rAmCharts4[/]", color = "yellow"), theme = "dark") # yet another usage example of amColorAdapterFromCuts: radial gradient set.seed(314159) dat <- data.frame( x = rnorm(1000), y = rnorm(1000) ) amScatterChart( data = dat, width = "500px", height = "500px", xValue = "x", yValues = "y", xLimits = c(-3,3), yLimits = c(-3,3), draggable = FALSE, backgroundColor = "#30303d", pointsStyle = amCircle( radius = 4, strokeWidth = 1, color = amColorAdapterFromCuts( cuts = seq(0, 3, length.out = 121), colors = 
colorRampPalette( c("white","blue","orangered","red") )(122), value = "Math.sqrt(X**2+Y**2)" ), opacity = 0.75, strokeColor = amColorAdapterFromCuts( cuts = seq(0, 3, length.out = 121), colors = colorRampPalette( c("white","blue","orangered","red") )(122), value = "Math.sqrt(X**2+Y**2)" ) ), xAxis = list( breaks = amAxisBreaks(seq(-3, 3, by=1)), gridLines = amLine(opacity = 0.3, width = 1) ), yAxis = list( breaks = amAxisBreaks(seq(-3, 3, by=1)), gridLines = amLine(opacity = 0.3, width = 1) ), tooltip = FALSE, caption = amText(text = "[font-style:italic]rAmCharts4[/]", color = "yellow"), theme = "dark") rAmCharts4-imports Objects imported from other packages Description These objects are imported from other packages. Follow the links to their documentation: JS, saveWidget rAmCharts4-shapes Bullets Description Create a list of settings for bullets, their shape and their style. Usage amTriangle( color = NULL, opacity = 1, width = 10, height = 10, strokeColor = NULL, strokeOpacity = 1, strokeWidth = 2, direction = "top", rotation = 0, image = NULL ) amCircle( color = NULL, opacity = 1, radius = 6, strokeColor = NULL, strokeOpacity = 1, strokeWidth = 2, image = NULL ) amRectangle( color = NULL, opacity = 1, width = 10, height = 10, strokeColor = NULL, strokeOpacity = 1, strokeWidth = 2, rotation = 0, cornerRadius = 3, image = NULL ) Arguments color bullet color; this can be a color adapter opacity bullet opacity, a number between 0 and 1 width bullet width height bullet height strokeColor stroke color of the bullet; this can be a color adapter strokeOpacity stroke opacity of the bullet, a number between 0 and 1 strokeWidth stroke width of the bullet direction triangle direction rotation rotation angle image option to include an image in the bullet, a list created with amImage radius circle radius cornerRadius radius of the rectangle corners Value A list of settings for the bullets. Note A color can be given by the name of a R color, the name of a CSS color, e.g. 
"transparent" or "fuchsia", an HEX code like "#ff009a", a RGB code like "rgb(255,100,39)", or a HSL code like "hsl(360,11,255)". rAmCharts4-shiny Shiny bindings for using rAmCharts4 in Shiny Description Output and render functions for using the rAmCharts4 widgets within Shiny applications and inter- active Rmd documents. Usage amChart4Output(outputId, width = "100%", height = "400px") renderAmChart4(expr, env = parent.frame(), quoted = FALSE) Arguments outputId output variable to read from width, height must be a valid CSS unit (like "100%", "400px", "auto") or a number, which will be coerced to a string and have "px" appended expr an expression that generates a chart with amBarChart, amHorizontalBarChart, amLineChart, amScatterChart, amRangeAreaChart, amRadialBarChart, amDumbbellChart, amHorizontalDumbbellChart, amGaugeChart, amPieChart, or amPercentageBarChart env the environment in which to evaluate expr quoted whether expr is a quoted expression Examples library(rAmCharts4) library(shiny) library(lubridate) ui <- fluidPage( br(), fluidRow( column( width = 8, amChart4Output("linechart", height = "500px") ), column( width = 4, tags$fieldset( tags$legend("Chart data"), verbatimTextOutput("chartData"), ), tags$fieldset( tags$legend("Change"), verbatimTextOutput("chartChange") ) ) ) ) server <- function(input, output){ set.seed(666) dat <- data.frame( date = ymd(180101) + months(0:11), visits = rpois(12, 20), x = 1:12 ) output[["linechart"]] <- renderAmChart4({ amLineChart( data = dat, data2 = dat, xValue = "date", yValues = "visits", draggable = TRUE, chartTitle = amText( text = "Number of visits", color = "crimson", fontWeight = "bold", fontFamily = "cursive" ), xAxis = list( title = "Date", labels = amAxisLabels(rotation = -45), breaks = amAxisBreaks(timeInterval = "1 month") ), yAxis = "Visits", yLimits = c(0, 35), backgroundColor = "whitesmoke", tooltip = "[bold][font-style:italic]{dateX}[/]\nvisits: {valueY}[/]", Yformatter = "#", caption = amText( text = 
"[bold font-size:22]Year 2018[/]", color = "fuchsia" ), button = amButton( label = amText("Reset data", color = "black"), color = "seashell", position = 0.95 ), theme = "dataviz") }) output[["chartData"]] <- renderPrint({ input[["linechart"]] }) output[["chartChange"]] <- renderPrint({ input[["linechart_change"]] }) } if(interactive()) { shinyApp(ui, server) } tinyIcon Icons Description Icons for usage in amImage. Usage tinyIcon(icon, backgroundColor = NULL) tinyIcons() shinyAppTinyIcons() Arguments icon name of an icon; tinyIcons() returns the list of available icons, and shinyAppTinyIcons() runs a Shiny app which displays the available icons backgroundColor background color of the icon (possibly "transparent") Value A base64 string that can be used in the href argument of amImage. Note A color can be given by the name of a R color, the name of a CSS color, e.g. "transparent" or "fuchsia", an HEX code like "#ff009a", a RGB code like "rgb(255,100,39)", or a HSL code like "hsl(360,11,255)". updateAmBarChart Update the data of a bar chart Description Update the data of a bar chart in a Shiny app (vertical, horizontal, radial, or stacked bar chart). 
Usage updateAmBarChart(session, outputId, data) Arguments session the Shiny session object outputId the output id passed on to amChart4Output data new data; if it is not valid, then nothing will happen (in order to be valid it must have the same structure as the data passed on to amBarChart / amHorizontalBarChart / amRadialBarChart / amStackedBarChart); in this case check the JavaScript console, it will report the encountered issue Examples library(rAmCharts4) library(shiny) ui <- fluidPage( br(), actionButton("update", "Update", class = "btn-primary"), br(), br(), amChart4Output("barchart", width = "650px", height = "470px") ) server <- function(input, output, session){ set.seed(666) dat <- data.frame( country = c("USA", "China", "Japan", "Germany", "UK", "France"), visits = c(3025, 1882, 1809, 1322, 1122, 1114), income = rpois(6, 25), expenses = rpois(6, 20) ) newdat <- data.frame( country = c("USA", "China", "Japan", "Germany", "UK", "France"), income = rpois(6, 25), expenses = rpois(6, 20) ) output[["barchart"]] <- renderAmChart4({ amBarChart( data = dat, category = "country", values = c("income", "expenses"), valueNames = list(income = "Income", expenses = "Expenses"), draggable = TRUE, backgroundColor = "#30303d", columnStyle = list( income = amColumn( color = "darkmagenta", strokeColor = "#cccccc", strokeWidth = 2 ), expenses = amColumn( color = "darkred", strokeColor = "#cccccc", strokeWidth = 2 ) ), chartTitle = list(text = "Income and expenses per country"), xAxis = "Country", yAxis = "Income and expenses", yLimits = c(0, 41), valueFormatter = "#.#", caption = "Year 2018", theme = "dark") }) observeEvent(input[["update"]], { updateAmBarChart(session, "barchart", newdat) }) } if(interactive()){ shinyApp(ui, server) } # Survival probabilities #### library(shiny) library(rAmCharts4) probs <- c(control = 30, treatment = 75) # initial probabilities ui <- fluidPage( br(), sidebarLayout( sidebarPanel( wellPanel( tags$fieldset( tags$legend("Survival 
probability"), sliderInput( "control", "Control group", min = 0, max = 100, value = probs[["control"]], step = 1 ), sliderInput( "treatment", "Treatment group", min = 0, max = 100, value = probs[["treatment"]], step = 1 ) ) ) ), mainPanel( amChart4Output("barchart", width = "500px", height = "400px") ) ) ) server <- function(input, output, session){ dat <- data.frame( group = c("Control", "Treatment"), alive = c(probs[["control"]], probs[["treatment"]]), dead = 100 - c(probs[["control"]], probs[["treatment"]]) ) stacks <- list( c("alive", "dead") ) seriesNames <- list( alive = "Alive", dead = "Dead" ) output[["barchart"]] <- renderAmChart4({ amStackedBarChart( dat, category = "group", stacks = stacks, seriesNames = seriesNames, yLimits = c(0, 100), chartTitle = amText( "Survival probabilities", fontFamily = "Trebuchet MS", fontSize = 30, fontWeight = "bold" ), xAxis = "Group", yAxis = "Probability", theme = "dataviz" ) }) observeEvent(list(input[["control"]], input[["treatment"]]), { newdat <- data.frame( group = c("Control", "Treatment"), alive = c(input[["control"]], input[["treatment"]]), dead = 100 - c(input[["control"]], input[["treatment"]]) ) updateAmBarChart(session, "barchart", newdat) }) } if(interactive()){ shinyApp(ui, server) } updateAmGaugeChart Update the score of a gauge chart Description Update the score of a gauge chart in a Shiny app Usage updateAmGaugeChart(session, outputId, score) Arguments session the Shiny session object outputId the output id passed on to amChart4Output score new value of the score Examples library(rAmCharts4) library(shiny) gradingData <- data.frame( label = c("Slow", "Moderate", "Fast"), lowScore = c(0, 100/3, 200/3), highScore = c(100/3, 200/3, 100) ) ui <- fluidPage( sidebarLayout( sidebarPanel( sliderInput( "slider", "Score", min = 0, max = 100, value = 30 ) ), mainPanel( amChart4Output("gauge", height = "500px") ) ) ) server <- function(input, output, session){ output[["gauge"]] <- renderAmChart4({ amGaugeChart( score 
= isolate(input[["slider"]]), minScore = 0, maxScore = 100, gradingData = gradingData, theme = "dataviz" ) }) observeEvent(input[["slider"]], { updateAmGaugeChart(session, "gauge", score = input[["slider"]]) }) } if(interactive()){ shinyApp(ui, server) } updateAmPercentageBarChart Update the data of a 100% stacked bar chart Description Update the data of a 100% staced bar chart in a Shiny app (amPercentageBarChart). Usage updateAmPercentageBarChart(session, outputId, data) Arguments session the Shiny session object outputId the output id passed on to amChart4Output data new data; if it is not valid, then nothing will happen (in order to be valid it must have the same structure as the data passed on to amPercentageBarChart); in this case check the JavaScript console, it will report the encountered issue Examples library(rAmCharts4) library(shiny) dat <- data.frame( country = c("Australia", "Canada", "France", "Germany"), "35-44" = c(2, 2, 3, 3), "45-54" = c(9, 5, 7, 6), "55+" = c(8, 4, 6, 5), check.names = FALSE ) newdat <- data.frame( country = c("Australia", "Canada", "France", "Germany"), "35-44" = c(3, 2, 3, 4), "45-54" = c(7, 3, 5, 5), "55+" = c(7, 4, 5, 3), check.names = FALSE ) ui <- fluidPage( br(), actionButton("update", "Update", class = "btn-primary"), br(), br(), amChart4Output("pbarchart", width = "650px", height = "470px") ) server <- function(input, output, session){ output[["pbarchart"]] <- renderAmChart4({ amPercentageBarChart( dat, category = "country", values = c("35-44", "45-54", "55+"), chartTitle = "Profit by country and age breakdowns", xAxis = "Country", yAxis = "Profit", theme = "moonrisekingdom", legend = amLegend(position = "right") ) }) observeEvent(input[["update"]], { updateAmPercentageBarChart(session, "pbarchart", newdat) }) } if(interactive()){ shinyApp(ui, server) } updateAmPieChart Update the data of a pie chart Description Update the data of a pie chart in a Shiny app. 
Usage updateAmPieChart(session, outputId, data) Arguments session the Shiny session object outputId the output id passed on to amChart4Output data new data; if it is not valid, then nothing will happen (in order to be valid it must have the same structure as the data passed on to amPieChart); in this case check the JavaScript console, it will report the encountered issue
github.com/grpc/grpc-go/examples
README --- ### gRPC in 3 minutes (Go) #### BACKGROUND For this sample, we've already generated the server and client stubs from [helloworld.proto](https://github.com/grpc/grpc-go/blob/v1.14.0/examples/helloworld/helloworld/helloworld.proto). #### PREREQUISITES * This requires Go 1.6 or later * Requires that [GOPATH is set](https://golang.org/doc/code.html#GOPATH) ``` $ go help gopath $ # ensure the PATH contains $GOPATH/bin $ export PATH=$PATH:$GOPATH/bin ``` #### INSTALL ``` $ go get -u google.golang.org/grpc/examples/helloworld/greeter_client $ go get -u google.golang.org/grpc/examples/helloworld/greeter_server ``` #### TRY IT! * Run the server ``` $ greeter_server & ``` * Run the client ``` $ greeter_client ``` #### OPTIONAL - Rebuilding the generated code 1. Install [protobuf compiler](https://github.com/google/protobuf/blob/master/README.md#protocol-compiler-installation) 2. Install the protoc Go plugin ``` $ go get -u github.com/golang/protobuf/protoc-gen-go ``` 3. Rebuild the generated Go code ``` $ go generate google.golang.org/grpc/examples/helloworld/... ``` Or run the `protoc` command (with the grpc plugin) ``` $ protoc -I helloworld/ helloworld/helloworld.proto --go_out=plugins=grpc:helloworld ```
mxkssd
Package ‘mxkssd’ October 13, 2022 Version 1.2 Date 2022-02-21 Title Efficient Mixed-Level k-Circulant Supersaturated Designs Author <NAME> <<EMAIL>> Maintainer <NAME> <<EMAIL>> Depends R(>= 2.13.0) Description Generates efficient balanced mixed-level k-circulant supersaturated designs by interchanging the elements of the generator vector. Attempts to generate a supersaturated design that has EfNOD efficiency more than user specified efficiency level (mef). Displays the progress of generation of an efficient mixed-level k-circulant design through a progress bar. The progress of 100 per cent means that one full round of interchange is completed. More than one full round (typically 4-5 rounds) of interchange may be required for larger designs. For more details, please see <NAME>., <NAME>. and <NAME>. (2011). Construction of Efficient Mixed-Level k-Circulant Supersaturated Designs, Journal of Statistical Theory and Practice, 5:4, 627-648, <doi:10.1080/15598608.2011.10483735>. License GPL (>= 2) NeedsCompilation no Repository CRAN Date/Publication 2022-02-23 12:50:15 UTC R topics documented: mxkssd mxkssd Efficient mixed-level k-circulant supersaturated designs Description mxkssd is a package that generates efficient balanced mixed-level k-circulant supersaturated designs by interchanging the elements of the generator vector. The package tries to generate a supersaturated design that has EfNOD efficiency more than user specified efficiency level (mef). The package also displays the progress of generation of an efficient mixed-level k-circulant design through a progress bar. The progress of 100 per cent means that one full round of interchange is completed. More than one full round (typically 4-5 rounds) of interchange may be required for larger designs.
Usage mxkssd(m,n,level_vec,k,mef) Arguments m number of factors n number of runs level_vec level vector containing the levels of the factors such that (n-1) factors have each of these levels k order of circulation mef minimum efficiency required, should be between 0 to 1 Value A list containing following items m number of factors n number of runs level_vec level vector containing the levels of the factors such that (n-1) factors have each of these levels k order of circulation generator.vector generator vector design design EfNOD.efficiency EfNOD efficiency max.fNOD maximum fNOD time.taken time taken to generate the design number.aliased.pairs number of aliased pairs of columns Author(s) <NAME> References <NAME>, <NAME> & <NAME> (2011) Construction of Efficient Mixed-Level k-Circulant Supersaturated Designs, Journal of Statistical Theory and Practice, 5:4, 627-648, DOI: 10.1080/15598608.2011.10483735 Examples ##To generate an efficient mixed level 2-circulant supersaturated design #with 8 runs and 14 factors such that 7 factors have number of levels 2 and #another 7 factors have number of levels 4. So the level_vec is c(2,4). #The required minimum efficiency is 1. mxkssd(14,8,c(2,4),2,1)
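The circulant construction behind these designs is easy to picture: each subsequent row of the design is a cyclic shift of the generator vector by k positions. The sketch below is purely illustrative (the function name `k_circulant` is ours, and this is the shifting rule only, not the package's interchange/search algorithm):

```rust
// Illustrative sketch of a k-circulant construction: each row is the
// previous one cyclically shifted by k positions. This mirrors the general
// idea behind mxkssd's designs, not its exact interchange algorithm.
fn k_circulant(generator: &[i32], rows: usize, k: usize) -> Vec<Vec<i32>> {
    let m = generator.len();
    (0..rows)
        .map(|r| (0..m).map(|j| generator[(j + r * k) % m]).collect())
        .collect()
}

fn main() {
    let generator = vec![0, 1, 1, 0, 1, 0];
    // Three rows, shifting by k = 2 each time.
    for row in k_circulant(&generator, 3, 2) {
        println!("{:?}", row);
    }
}
```

The package's job is then to search over generator vectors (by interchanging their elements) until the resulting circulant design reaches the requested EfNOD efficiency.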
viz-router
Crate viz_router === Router for Viz Web Framework Structs --- * PathMatched route path information. * PathTreeA path tree. * ResourcesA resourceful route provides a mapping between HTTP verbs and URLs to handlers. * RouteA collection of verb-handler pairs. * RouterA routes collection. * TreeStore all final routes. Functions --- * anyCreates a route with a handler and any HTTP verbs. * connectCreates a route with a handler and HTTP `CONNECT` verb pair. * deleteCreates a route with a handler and HTTP `DELETE` verb pair. * getCreates a route with a handler and HTTP `GET` verb pair. * headCreates a route with a handler and HTTP `HEAD` verb pair. * onCreates a route with a handler and HTTP verb pair. * optionsCreates a route with a handler and HTTP `OPTIONS` verb pair. * patchCreates a route with a handler and HTTP `PATCH` verb pair. * postCreates a route with a handler and HTTP `POST` verb pair. * putCreates a route with a handler and HTTP `PUT` verb pair. * traceCreates a route with a handler and HTTP `TRACE` verb pair.
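The verb-handler pairing that `get`, `post`, `on`, and the other functions build up can be pictured as a map from a (method, path) pair to a handler. The stdlib-only sketch below is a simplified stand-in, not the actual viz_router types (which wrap async services); the `Router`, `on`, and `dispatch` names here are ours:

```rust
use std::collections::HashMap;

// Simplified sketch of verb-handler routing: viz_router's `get`, `post`,
// etc. pair an HTTP verb with a handler, and a `Router` collects those
// pairs. Here handlers are plain functions instead of async services.
type Handler = fn(&str) -> String;

struct Router {
    routes: HashMap<(String, String), Handler>,
}

impl Router {
    fn new() -> Self {
        Router { routes: HashMap::new() }
    }

    // Analogous in spirit to registering a verb-handler pair with `on`.
    fn on(mut self, method: &str, path: &str, handler: Handler) -> Self {
        self.routes.insert((method.to_string(), path.to_string()), handler);
        self
    }

    // Looks up the handler for a (method, path) pair and invokes it.
    fn dispatch(&self, method: &str, path: &str) -> Option<String> {
        self.routes
            .get(&(method.to_string(), path.to_string()))
            .map(|h| h(path))
    }
}

fn main() {
    let router = Router::new()
        .on("GET", "/users", |_: &str| "list users".to_string())
        .on("POST", "/users", |_: &str| "create user".to_string());
    assert_eq!(router.dispatch("GET", "/users").unwrap(), "list users");
    assert!(router.dispatch("DELETE", "/users").is_none());
}
```

The real crate routes through a path tree rather than an exact-match map, which is what makes parameterised paths possible.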
* traceCreates a route with a handler and HTTP `TRACE` verb pair. Struct viz_router::Path === ``` pub struct Path<'a, 'b> { pub id: &'a usize, pub pieces: &'a [Piece], pub raws: SmallVec<[&'b str; 4]>, } ``` Matched route path infomation. Fields --- `id: &'a usize``pieces: &'a [Piece]``raws: SmallVec<[&'b str; 4]>`Implementations --- ### impl<'a, 'b> Path<'a, 'b#### pub fn pattern(&self) -> String Gets current path pattern. #### pub fn params(&self) -> Vec<(&str, &str), GlobalReturns the parameters of the current path. #### pub fn params_iter(&self) -> impl Iterator<Item = (&str, &str)Returns the parameters iterator of the current path. Trait Implementations --- ### impl<'a, 'b> Clone for Path<'a, 'b#### fn clone(&self) -> Path<'a, 'bReturns a copy of the value. Read more1.0.0 · source#### fn clone_from(&mut self, source: &Self) Performs copy-assignment from `source`. This method tests for `self` and `other` values to be equal, and is used by `==`.1.0.0 · source#### fn ne(&self, other: &Rhs) -> bool This method tests for `!=`. The default implementation is almost always sufficient, and should not be overridden without very good reason.### impl<'a, 'b> Eq for Path<'a, 'b### impl<'a, 'b> StructuralEq for Path<'a, 'b### impl<'a, 'b> StructuralPartialEq for Path<'a, 'bAuto Trait Implementations --- ### impl<'a, 'b> RefUnwindSafe for Path<'a, 'b### impl<'a, 'b> Send for Path<'a, 'b### impl<'a, 'b> Sync for Path<'a, 'b### impl<'a, 'b> Unpin for Path<'a, 'b### impl<'a, 'b> UnwindSafe for Path<'a, 'bBlanket Implementations --- ### impl<T> Any for Twhere T: 'static + ?Sized, #### fn type_id(&self) -> TypeId Gets the `TypeId` of `self`. T: ?Sized, #### fn borrow(&self) -> &T Immutably borrows from an owned value. T: ?Sized, #### fn borrow_mut(&mut self) -> &mut T Mutably borrows from an owned value. T: Clone, #### fn __clone_box(&self, _: Private) -> *mut() ### impl<T> From<T> for T #### fn from(t: T) -> T Returns the argument unchanged. 
### impl<T> Instrument for T #### fn instrument(self, span: Span) -> Instrumented<SelfInstruments this type with the provided [`Span`], returning an `Instrumented` wrapper. `Instrumented` wrapper. U: From<T>, #### fn into(self) -> U Calls `U::from(self)`. That is, this conversion is whatever the implementation of `From<T> for U` chooses to do. ### impl<T> Same<T> for T #### type Output = T Should always be `Self`### impl<T> ToOwned for Twhere T: Clone, #### type Owned = T The resulting type after obtaining ownership.#### fn to_owned(&self) -> T Creates owned data from borrowed data, usually by cloning. Uses borrowed data to replace owned data, usually by cloning. U: Into<T>, #### type Error = Infallible The type returned in the event of a conversion error.#### fn try_from(value: U) -> Result<T, <T as TryFrom<U>>::ErrorPerforms the conversion.### impl<T, U> TryInto<U> for Twhere U: TryFrom<T>, #### type Error = <U as TryFrom<T>>::Error The type returned in the event of a conversion error.#### fn try_into(self) -> Result<U, <U as TryFrom<T>>::ErrorPerforms the conversion.### impl<V, T> VZip<V> for Twhere V: MultiLane<T>, #### fn vzip(self) -> V ### impl<T> WithSubscriber for T #### fn with_subscriber<S>(self, subscriber: S) -> WithDispatch<Self>where S: Into<Dispatch>, Attaches the provided `Subscriber` to this type, returning a [`WithDispatch`] wrapper. [`WithDispatch`] wrapper. Read more Struct viz_router::PathTree === ``` pub struct PathTree<T> { pub node: Node<usize>, /* private fields */ } ``` A path tree. Fields --- `node: Node<usize>`Implementations --- ### impl<T> PathTree<T#### pub fn new() -> PathTree<TCreates a new `PathTree`. #### pub fn insert(&mut self, path: &str, value: T) -> usize Inserts a part path-value to the tree and returns the id. #### pub fn find<'a, 'b>(&'a self, path: &'b str) -> Option<(&'a T, Path<'a, 'b>)Returns the Path by the given path. #### pub fn get_route(&self, index: usize) -> Option<&(T, Vec<Piece, Global>)Gets the route by id. 
#### pub fn url_for(&self, index: usize, params: &[&str]) -> Option<StringGenerates URL with the params. Trait Implementations --- ### impl<T> Debug for PathTree<T>where T: Debug, #### fn fmt(&self, f: &mut Formatter<'_>) -> Result<(), ErrorFormats the value using the given formatter. --- ### impl<T> RefUnwindSafe for PathTree<T>where T: RefUnwindSafe, ### impl<T> Send for PathTree<T>where T: Send, ### impl<T> Sync for PathTree<T>where T: Sync, ### impl<T> Unpin for PathTree<T>where T: Unpin, ### impl<T> UnwindSafe for PathTree<T>where T: UnwindSafe, Blanket Implementations --- ### impl<T> Any for Twhere T: 'static + ?Sized, #### fn type_id(&self) -> TypeId Gets the `TypeId` of `self`. T: ?Sized, #### fn borrow(&self) -> &T Immutably borrows from an owned value. T: ?Sized, #### fn borrow_mut(&mut self) -> &mut T Mutably borrows from an owned value. #### fn from(t: T) -> T Returns the argument unchanged. ### impl<T> Instrument for T #### fn instrument(self, span: Span) -> Instrumented<SelfInstruments this type with the provided [`Span`], returning an `Instrumented` wrapper. `Instrumented` wrapper. U: From<T>, #### fn into(self) -> U Calls `U::from(self)`. That is, this conversion is whatever the implementation of `From<T> for U` chooses to do. 
### impl<T> Same<T> for T #### type Output = T Should always be `Self`### impl<T, U> TryFrom<U> for Twhere U: Into<T>, #### type Error = Infallible The type returned in the event of a conversion error.#### fn try_from(value: U) -> Result<T, <T as TryFrom<U>>::ErrorPerforms the conversion.### impl<T, U> TryInto<U> for Twhere U: TryFrom<T>, #### type Error = <U as TryFrom<T>>::Error The type returned in the event of a conversion error.#### fn try_into(self) -> Result<U, <U as TryFrom<T>>::ErrorPerforms the conversion.### impl<V, T> VZip<V> for Twhere V: MultiLane<T>, #### fn vzip(self) -> V ### impl<T> WithSubscriber for T #### fn with_subscriber<S>(self, subscriber: S) -> WithDispatch<Self>where S: Into<Dispatch>, Attaches the provided `Subscriber` to this type, returning a [`WithDispatch`] wrapper. [`WithDispatch`] wrapper. Read more Struct viz_router::Resources === ``` pub struct Resources { /* private fields */ } ``` A resourceful route provides a mapping between HTTP verbs and URLs to handlers. Implementations --- ### impl Resources #### pub fn named<S>(self, name: S) -> Selfwhere S: AsRef<str>, Names the resources. #### pub fn singular(self) -> Self Without referencing an ID for a resource. #### pub fn route<S>(self, path: S, route: Route) -> Selfwhere S: AsRef<str>, Inserts a path-route pair into the resources. #### pub fn index<H, O>(self, handler: H) -> Selfwhere H: Handler<Request, Output = Result<O>> + Clone, O: IntoResponse + Send + Sync + 'static, Displays a list of the resources. #### pub fn new<H, O>(self, handler: H) -> Selfwhere H: Handler<Request, Output = Result<O>> + Clone, O: IntoResponse + Send + Sync + 'static, Returns an HTML form for creating the resources. #### pub fn create<H, O>(self, handler: H) -> Selfwhere H: Handler<Request, Output = Result<O>> + Clone, O: IntoResponse + Send + Sync + 'static, Creates the resources.
#### pub fn show<H, O>(self, handler: H) -> Selfwhere H: Handler<Request, Output = Result<O>> + Clone, O: IntoResponse + Send + Sync + 'static, Displays the resources. #### pub fn edit<H, O>(self, handler: H) -> Selfwhere H: Handler<Request, Output = Result<O>> + Clone, O: IntoResponse + Send + Sync + 'static, Returns an HTML form for editing the resources. #### pub fn update<H, O>(self, handler: H) -> Selfwhere H: Handler<Request, Output = Result<O>> + Clone, O: IntoResponse + Send + Sync + 'static, Updates the resources, using the `PUT` verb by default. #### pub fn update_with_patch<H, O>(self, handler: H) -> Selfwhere H: Handler<Request, Output = Result<O>> + Clone, O: IntoResponse + Send + Sync + 'static, Updates the resources, using the `PATCH` verb. #### pub fn destroy<H, O>(self, handler: H) -> Selfwhere H: Handler<Request, Output = Result<O>> + Clone, O: IntoResponse + Send + Sync + 'static, Deletes the resources. #### pub fn map_handler<F>(self, f: F) -> Selfwhere F: Fn(BoxHandler) -> BoxHandler, Takes a closure and creates an iterator which calls that closure on each handler. #### pub fn with<T>(self, t: T) -> Selfwhere T: Transform<BoxHandler>, T::Output: Handler<Request, Output = Result<Response>>, Transforms the types to a middleware and adds it. #### pub fn with_handler<F>(self, f: F) -> Selfwhere F: Handler<Next<Request, BoxHandler>, Output = Result<Response>> + Clone, Adds a middleware for the resources. Trait Implementations --- ### impl Clone for Resources #### fn clone(&self) -> Resources Returns a copy of the value. Read more1.0.0 · source#### fn clone_from(&mut self, source: &Self) Performs copy-assignment from `source`. #### fn fmt(&self, f: &mut Formatter<'_>) -> Result Formats the value using the given formatter. #### fn default() -> Resources Returns the “default value” for a type.
#### type Item = (String, Route) The type of the elements being iterated over.#### type IntoIter = IntoIter<<Resources as IntoIterator>::Item, GlobalWhich kind of iterator are we turning this into?#### fn into_iter(self) -> Self::IntoIter Creates an iterator from a value. Read moreAuto Trait Implementations --- ### impl !RefUnwindSafe for Resources ### impl Send for Resources ### impl Sync for Resources ### impl Unpin for Resources ### impl !UnwindSafe for Resources Blanket Implementations --- ### impl<T> Any for Twhere T: 'static + ?Sized, #### fn type_id(&self) -> TypeId Gets the `TypeId` of `self`. T: ?Sized, #### fn borrow(&self) -> &T Immutably borrows from an owned value. T: ?Sized, #### fn borrow_mut(&mut self) -> &mut T Mutably borrows from an owned value. T: Clone, #### fn __clone_box(&self, _: Private) -> *mut() ### impl<T> From<T> for T #### fn from(t: T) -> T Returns the argument unchanged. ### impl<T> Instrument for T #### fn instrument(self, span: Span) -> Instrumented<SelfInstruments this type with the provided [`Span`], returning an `Instrumented` wrapper. `Instrumented` wrapper. U: From<T>, #### fn into(self) -> U Calls `U::from(self)`. That is, this conversion is whatever the implementation of `From<T> for U` chooses to do. ### impl<T> Same<T> for T #### type Output = T Should always be `Self`### impl<T> ToOwned for Twhere T: Clone, #### type Owned = T The resulting type after obtaining ownership.#### fn to_owned(&self) -> T Creates owned data from borrowed data, usually by cloning. Uses borrowed data to replace owned data, usually by cloning. 
U: Into<T>, #### type Error = Infallible The type returned in the event of a conversion error.#### fn try_from(value: U) -> Result<T, <T as TryFrom<U>>::ErrorPerforms the conversion.### impl<T, U> TryInto<U> for Twhere U: TryFrom<T>, #### type Error = <U as TryFrom<T>>::Error The type returned in the event of a conversion error.#### fn try_into(self) -> Result<U, <U as TryFrom<T>>::ErrorPerforms the conversion.### impl<V, T> VZip<V> for Twhere V: MultiLane<T>, #### fn vzip(self) -> V ### impl<T> WithSubscriber for T #### fn with_subscriber<S>(self, subscriber: S) -> WithDispatch<Self>where S: Into<Dispatch>, Attaches the provided `Subscriber` to this type, returning a [`WithDispatch`] wrapper. [`WithDispatch`] wrapper. Read more Struct viz_router::Route === ``` pub struct Route { /* private fields */ } ``` A collection of verb-handler pairs. Implementations --- ### impl Route #### pub fn new() -> Self Creates a new route. #### pub fn push(self, method: Method, handler: BoxHandler) -> Self Appends an HTTP verb and handler pair into the route. #### pub fn on<H, O>(self, method: Method, handler: H) -> Selfwhere H: Handler<Request, Output = Result<O>> + Clone, O: IntoResponse + Send + Sync + 'static, Appends a handler by the specified HTTP verb into the route. #### pub fn any<H, O>(self, handler: H) -> Selfwhere H: Handler<Request, Output = Result<O>> + Clone, O: IntoResponse + Send + Sync + 'static, Appends a handler for any HTTP verb into the route. #### pub fn get<H, O>(self, handler: H) -> Selfwhere H: Handler<Request, Output = Result<O>> + Clone, O: IntoResponse + Send + Sync + 'static, Appends a handler for the HTTP `GET` verb into the route. #### pub fn post<H, O>(self, handler: H) -> Selfwhere H: Handler<Request, Output = Result<O>> + Clone, O: IntoResponse + Send + Sync + 'static, Appends a handler for the HTTP `POST` verb into the route.
#### pub fn put<H, O>(self, handler: H) -> Selfwhere H: Handler<Request, Output = Result<O>> + Clone, O: IntoResponse + Send + Sync + 'static, Appends a handler for the HTTP `PUT` verb into the route. #### pub fn delete<H, O>(self, handler: H) -> Selfwhere H: Handler<Request, Output = Result<O>> + Clone, O: IntoResponse + Send + Sync + 'static, Appends a handler for the HTTP `DELETE` verb into the route. #### pub fn head<H, O>(self, handler: H) -> Selfwhere H: Handler<Request, Output = Result<O>> + Clone, O: IntoResponse + Send + Sync + 'static, Appends a handler for the HTTP `HEAD` verb into the route. #### pub fn options<H, O>(self, handler: H) -> Selfwhere H: Handler<Request, Output = Result<O>> + Clone, O: IntoResponse + Send + Sync + 'static, Appends a handler for the HTTP `OPTIONS` verb into the route. #### pub fn connect<H, O>(self, handler: H) -> Selfwhere H: Handler<Request, Output = Result<O>> + Clone, O: IntoResponse + Send + Sync + 'static, Appends a handler for the HTTP `CONNECT` verb into the route. #### pub fn patch<H, O>(self, handler: H) -> Selfwhere H: Handler<Request, Output = Result<O>> + Clone, O: IntoResponse + Send + Sync + 'static, Appends a handler for the HTTP `PATCH` verb into the route. #### pub fn trace<H, O>(self, handler: H) -> Selfwhere H: Handler<Request, Output = Result<O>> + Clone, O: IntoResponse + Send + Sync + 'static, Appends a handler for the HTTP `TRACE` verb into the route. #### pub fn map_handler<F>(self, f: F) -> Selfwhere F: Fn(BoxHandler) -> BoxHandler, Takes a closure and creates an iterator which calls that closure on each handler. #### pub fn with<T>(self, t: T) -> Selfwhere T: Transform<BoxHandler>, T::Output: Handler<Request, Output = Result<Response>>, Transforms the types to a middleware and adds it. #### pub fn with_handler<F>(self, f: F) -> Selfwhere F: Handler<Next<Request, BoxHandler>, Output = Result<Response>> + Clone, Adds a middleware for the routes.
Trait Implementations --- ### impl Clone for Route #### fn clone(&self) -> Route Returns a copy of the value. Read more1.0.0 · source#### fn clone_from(&mut self, source: &Self) Performs copy-assignment from `source`. #### fn fmt(&self, f: &mut Formatter<'_>) -> Result Formats the value using the given formatter. #### fn default() -> Route Returns the “default value” for a type. #### fn from_iter<T>(iter: T) -> Selfwhere T: IntoIterator<Item = (Method, BoxHandler)>, Creates a value from an iterator. #### type Item = (Method, Box<dyn Handler<Request<Body>, Output = Result<Response<Body>, Error>>, Global>) The type of the elements being iterated over.#### type IntoIter = IntoIter<(Method, Box<dyn Handler<Request<Body>, Output = Result<Response<Body>, Error>>, Global>), GlobalWhich kind of iterator are we turning this into?#### fn into_iter(self) -> Self::IntoIter Creates an iterator from a value. Read moreAuto Trait Implementations --- ### impl !RefUnwindSafe for Route ### impl Send for Route ### impl Sync for Route ### impl Unpin for Route ### impl !UnwindSafe for Route Blanket Implementations --- ### impl<T> Any for Twhere T: 'static + ?Sized, #### fn type_id(&self) -> TypeId Gets the `TypeId` of `self`. T: ?Sized, #### fn borrow(&self) -> &T Immutably borrows from an owned value. T: ?Sized, #### fn borrow_mut(&mut self) -> &mut T Mutably borrows from an owned value. T: Clone, #### fn __clone_box(&self, _: Private) -> *mut() ### impl<T> From<T> for T #### fn from(t: T) -> T Returns the argument unchanged. ### impl<T> Instrument for T #### fn instrument(self, span: Span) -> Instrumented<SelfInstruments this type with the provided [`Span`], returning an `Instrumented` wrapper. `Instrumented` wrapper. U: From<T>, #### fn into(self) -> U Calls `U::from(self)`. That is, this conversion is whatever the implementation of `From<T> for U` chooses to do. 
### impl<T> Same<T> for T #### type Output = T Should always be `Self`### impl<T> ToOwned for Twhere T: Clone, #### type Owned = T The resulting type after obtaining ownership.#### fn to_owned(&self) -> T Creates owned data from borrowed data, usually by cloning. Uses borrowed data to replace owned data, usually by cloning. U: Into<T>, #### type Error = Infallible The type returned in the event of a conversion error.#### fn try_from(value: U) -> Result<T, <T as TryFrom<U>>::ErrorPerforms the conversion.### impl<T, U> TryInto<U> for Twhere U: TryFrom<T>, #### type Error = <U as TryFrom<T>>::Error The type returned in the event of a conversion error.#### fn try_into(self) -> Result<U, <U as TryFrom<T>>::ErrorPerforms the conversion.### impl<V, T> VZip<V> for Twhere V: MultiLane<T>, #### fn vzip(self) -> V ### impl<T> WithSubscriber for T #### fn with_subscriber<S>(self, subscriber: S) -> WithDispatch<Self>where S: Into<Dispatch>, Attaches the provided `Subscriber` to this type, returning a [`WithDispatch`] wrapper. [`WithDispatch`] wrapper. Read more Struct viz_router::Router === ``` pub struct Router { /* private fields */ } ``` A routes collection. Implementations --- ### impl Router #### pub fn new() -> Self Creates an empty `Router`. #### pub fn route<S>(self, path: S, route: Route) -> Selfwhere S: AsRef<str>, Inserts a path-route pair into the router. #### pub fn resources<S>(self, path: S, resource: Resources) -> Selfwhere S: AsRef<str>, Nested resources with a path. #### pub fn nest<S>(self, path: S, router: Self) -> Selfwhere S: AsRef<str>, Nested sub-router with a path. #### pub fn get<S, H, O>(self, path: S, handler: H) -> Selfwhere S: AsRef<str>, H: Handler<Request, Output = Result<O>> + Clone, O: IntoResponse + Send + Sync + 'static, Adds a handler with a path and HTTP `GET` verb pair. 
#### pub fn post<S, H, O>(self, path: S, handler: H) -> Selfwhere S: AsRef<str>, H: Handler<Request, Output = Result<O>> + Clone, O: IntoResponse + Send + Sync + 'static, Adds a handler with a path and HTTP `POST` verb pair. #### pub fn put<S, H, O>(self, path: S, handler: H) -> Selfwhere S: AsRef<str>, H: Handler<Request, Output = Result<O>> + Clone, O: IntoResponse + Send + Sync + 'static, Adds a handler with a path and HTTP `PUT` verb pair. #### pub fn delete<S, H, O>(self, path: S, handler: H) -> Selfwhere S: AsRef<str>, H: Handler<Request, Output = Result<O>> + Clone, O: IntoResponse + Send + Sync + 'static, Adds a handler with a path and HTTP `DELETE` verb pair. #### pub fn head<S, H, O>(self, path: S, handler: H) -> Selfwhere S: AsRef<str>, H: Handler<Request, Output = Result<O>> + Clone, O: IntoResponse + Send + Sync + 'static, Adds a handler with a path and HTTP `HEAD` verb pair. #### pub fn options<S, H, O>(self, path: S, handler: H) -> Selfwhere S: AsRef<str>, H: Handler<Request, Output = Result<O>> + Clone, O: IntoResponse + Send + Sync + 'static, Adds a handler with a path and HTTP `OPTIONS` verb pair. #### pub fn connect<S, H, O>(self, path: S, handler: H) -> Selfwhere S: AsRef<str>, H: Handler<Request, Output = Result<O>> + Clone, O: IntoResponse + Send + Sync + 'static, Adds a handler with a path and HTTP `CONNECT` verb pair. #### pub fn patch<S, H, O>(self, path: S, handler: H) -> Selfwhere S: AsRef<str>, H: Handler<Request, Output = Result<O>> + Clone, O: IntoResponse + Send + Sync + 'static, Adds a handler with a path and HTTP `PATCH` verb pair. #### pub fn trace<S, H, O>(self, path: S, handler: H) -> Selfwhere S: AsRef<str>, H: Handler<Request, Output = Result<O>> + Clone, O: IntoResponse + Send + Sync + 'static, Adds a handler with a path and HTTP `TRACE` verb pair. 
#### pub fn any<S, H, O>(self, path: S, handler: H) -> Selfwhere S: AsRef<str>, H: Handler<Request, Output = Result<O>> + Clone, O: IntoResponse + Send + Sync + 'static, Adds a handler with a path and any HTTP verbs. #### pub fn map_handler<F>(self, f: F) -> Selfwhere F: Fn(BoxHandler) -> BoxHandler, Takes a closure and creates an iterator which calls that closure on each handler. #### pub fn with<T>(self, t: T) -> Selfwhere T: Transform<BoxHandler>, T::Output: Handler<Request, Output = Result<Response>>, Transforms the types to a middleware and adds it. #### pub fn with_handler<F>(self, f: F) -> Selfwhere F: Handler<Next<Request, BoxHandler>, Output = Result<Response>> + Clone, Adds a middleware for the routes. Trait Implementations --- ### impl Clone for Router #### fn clone(&self) -> Router Returns a copy of the value. Read more1.0.0 · source#### fn clone_from(&mut self, source: &Self) Performs copy-assignment from `source`. #### fn fmt(&self, f: &mut Formatter<'_>) -> Result Formats the value using the given formatter. #### fn default() -> Router Returns the “default value” for a type. #### fn from(router: Router) -> Self Converts to this type from the input type.Auto Trait Implementations --- ### impl !RefUnwindSafe for Router ### impl Send for Router ### impl Sync for Router ### impl Unpin for Router ### impl !UnwindSafe for Router Blanket Implementations --- ### impl<T> Any for Twhere T: 'static + ?Sized, #### fn type_id(&self) -> TypeId Gets the `TypeId` of `self`. T: ?Sized, #### fn borrow(&self) -> &T Immutably borrows from an owned value. T: ?Sized, #### fn borrow_mut(&mut self) -> &mut T Mutably borrows from an owned value. T: Clone, #### fn __clone_box(&self, _: Private) -> *mut() ### impl<T> From<T> for T #### fn from(t: T) -> T Returns the argument unchanged. ### impl<T> Instrument for T #### fn instrument(self, span: Span) -> Instrumented<SelfInstruments this type with the provided [`Span`], returning an `Instrumented` wrapper.
`Instrumented` wrapper. U: From<T>, #### fn into(self) -> U Calls `U::from(self)`. That is, this conversion is whatever the implementation of `From<T> for U` chooses to do. ### impl<T> Same<T> for T #### type Output = T Should always be `Self`### impl<T> ToOwned for Twhere T: Clone, #### type Owned = T The resulting type after obtaining ownership.#### fn to_owned(&self) -> T Creates owned data from borrowed data, usually by cloning. Uses borrowed data to replace owned data, usually by cloning. U: Into<T>, #### type Error = Infallible The type returned in the event of a conversion error.#### fn try_from(value: U) -> Result<T, <T as TryFrom<U>>::ErrorPerforms the conversion.### impl<T, U> TryInto<U> for Twhere U: TryFrom<T>, #### type Error = <U as TryFrom<T>>::Error The type returned in the event of a conversion error.#### fn try_into(self) -> Result<U, <U as TryFrom<T>>::ErrorPerforms the conversion.### impl<V, T> VZip<V> for Twhere V: MultiLane<T>, #### fn vzip(self) -> V ### impl<T> WithSubscriber for T #### fn with_subscriber<S>(self, subscriber: S) -> WithDispatch<Self>where S: Into<Dispatch>, Attaches the provided `Subscriber` to this type, returning a [`WithDispatch`] wrapper. [`WithDispatch`] wrapper. Read more Struct viz_router::Tree === ``` pub struct Tree(/* private fields */); ``` Store all final routes. Implementations --- ### impl Tree #### pub fn find<'a, 'b>( &'a self, method: &'b Method, path: &'b str ) -> Option<(&'a BoxHandler, Path<'a, 'b>)Find a handler by the HTTP method and the URI’s path. #### pub fn into_inner(self) -> Vec<(Method, PathTree<BoxHandler>)Consumes the Tree, returning the wrapped value. 
Trait Implementations --- ### impl AsMut<Vec<(Method, PathTree<Box<dyn Handler<Request<Body>, Output = Result<Response<Body>, Error>>, Global>>), Global>> for Tree #### fn as_mut(&mut self) -> &mut Vec<(Method, PathTree<BoxHandler>)Converts this type into a mutable reference of the (usually inferred) input type.### impl AsRef<Vec<(Method, PathTree<Box<dyn Handler<Request<Body>, Output = Result<Response<Body>, Error>>, Global>>), Global>> for Tree #### fn as_ref(&self) -> &Vec<(Method, PathTree<BoxHandler>)Converts this type into a shared reference of the (usually inferred) input type.### impl Debug for Tree #### fn fmt(&self, f: &mut Formatter<'_>) -> Result Formats the value using the given formatter. #### fn default() -> Tree Returns the “default value” for a type. #### fn from(router: Router) -> Self Converts to this type from the input type.Auto Trait Implementations --- ### impl !RefUnwindSafe for Tree ### impl Send for Tree ### impl Sync for Tree ### impl Unpin for Tree ### impl !UnwindSafe for Tree Blanket Implementations --- ### impl<T> Any for Twhere T: 'static + ?Sized, #### fn type_id(&self) -> TypeId Gets the `TypeId` of `self`. T: ?Sized, #### fn borrow(&self) -> &T Immutably borrows from an owned value. T: ?Sized, #### fn borrow_mut(&mut self) -> &mut T Mutably borrows from an owned value. #### fn from(t: T) -> T Returns the argument unchanged. ### impl<T> Instrument for T #### fn instrument(self, span: Span) -> Instrumented<SelfInstruments this type with the provided [`Span`], returning an `Instrumented` wrapper. `Instrumented` wrapper. U: From<T>, #### fn into(self) -> U Calls `U::from(self)`. That is, this conversion is whatever the implementation of `From<T> for U` chooses to do. 
### impl<T> Same<T> for T #### type Output = T Should always be `Self`### impl<T, U> TryFrom<U> for Twhere U: Into<T>, #### type Error = Infallible The type returned in the event of a conversion error.#### fn try_from(value: U) -> Result<T, <T as TryFrom<U>>::ErrorPerforms the conversion.### impl<T, U> TryInto<U> for Twhere U: TryFrom<T>, #### type Error = <U as TryFrom<T>>::Error The type returned in the event of a conversion error.#### fn try_into(self) -> Result<U, <U as TryFrom<T>>::ErrorPerforms the conversion.### impl<V, T> VZip<V> for Twhere V: MultiLane<T>, #### fn vzip(self) -> V ### impl<T> WithSubscriber for T #### fn with_subscriber<S>(self, subscriber: S) -> WithDispatch<Self>where S: Into<Dispatch>, Attaches the provided `Subscriber` to this type, returning a [`WithDispatch`] wrapper. [`WithDispatch`] wrapper. Read more Function viz_router::any === ``` pub fn any<H, O>(handler: H) -> Routewhere H: Handler<Request, Output = Result<O>> + Clone, O: IntoResponse + Send + Sync + 'static, ``` Creates a route with a handler and any HTTP verbs. Function viz_router::connect === ``` pub fn connect<H, O>(handler: H) -> Routewhere H: Handler<Request, Output = Result<O>> + Clone, O: IntoResponse + Send + Sync + 'static, ``` Creates a route with a handler and HTTP `CONNECT` verb pair. Function viz_router::delete === ``` pub fn delete<H, O>(handler: H) -> Routewhere H: Handler<Request, Output = Result<O>> + Clone, O: IntoResponse + Send + Sync + 'static, ``` Creates a route with a handler and HTTP `DELETE` verb pair. Function viz_router::get === ``` pub fn get<H, O>(handler: H) -> Routewhere H: Handler<Request, Output = Result<O>> + Clone, O: IntoResponse + Send + Sync + 'static, ``` Creates a route with a handler and HTTP `GET` verb pair. Function viz_router::head === ``` pub fn head<H, O>(handler: H) -> Routewhere H: Handler<Request, Output = Result<O>> + Clone, O: IntoResponse + Send + Sync + 'static, ``` Creates a route with a handler and HTTP `HEAD` verb pair. 
Function viz_router::on === ``` pub fn on<H, O>(method: Method, handler: H) -> Routewhere H: Handler<Request, Output = Result<O>> + Clone, O: IntoResponse + Send + Sync + 'static, ``` Creates a route with a handler and HTTP verb pair. Function viz_router::options === ``` pub fn options<H, O>(handler: H) -> Routewhere H: Handler<Request, Output = Result<O>> + Clone, O: IntoResponse + Send + Sync + 'static, ``` Creates a route with a handler and HTTP `OPTIONS` verb pair. Function viz_router::patch === ``` pub fn patch<H, O>(handler: H) -> Routewhere H: Handler<Request, Output = Result<O>> + Clone, O: IntoResponse + Send + Sync + 'static, ``` Creates a route with a handler and HTTP `PATCH` verb pair. Function viz_router::post === ``` pub fn post<H, O>(handler: H) -> Routewhere H: Handler<Request, Output = Result<O>> + Clone, O: IntoResponse + Send + Sync + 'static, ``` Creates a route with a handler and HTTP `POST` verb pair. Function viz_router::put === ``` pub fn put<H, O>(handler: H) -> Routewhere H: Handler<Request, Output = Result<O>> + Clone, O: IntoResponse + Send + Sync + 'static, ``` Creates a route with a handler and HTTP `PUT` verb pair. Function viz_router::trace === ``` pub fn trace<H, O>(handler: H) -> Routewhere H: Handler<Request, Output = Result<O>> + Clone, O: IntoResponse + Send + Sync + 'static, ``` Creates a route with a handler and HTTP `TRACE` verb pair.
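Each of the free functions above pairs one HTTP verb with a handler; `Route` collects such pairs and `Router` mounts routes at paths. As a stdlib-only concept sketch of that chained builder style (the `ToyRoute`/`ToyRouter` types and plain `fn` handlers are invented for illustration, not the actual generic viz_router API):

```rust
use std::collections::HashMap;

// A simplified handler: takes a request body, returns a response string.
type Handler = fn(&str) -> String;

// A toy `Route`: a collection of verb-handler pairs, built with chained
// `on`-style calls, mirroring how `get`/`post`/... each add one pair.
#[derive(Default)]
struct ToyRoute {
    methods: HashMap<&'static str, Handler>,
}

impl ToyRoute {
    fn on(mut self, method: &'static str, handler: Handler) -> Self {
        self.methods.insert(method, handler);
        self
    }
    fn get(self, handler: Handler) -> Self { self.on("GET", handler) }
    fn post(self, handler: Handler) -> Self { self.on("POST", handler) }
}

// A toy `Router`: maps a path to a `ToyRoute` and dispatches on (method, path).
#[derive(Default)]
struct ToyRouter {
    routes: HashMap<&'static str, ToyRoute>,
}

impl ToyRouter {
    fn route(mut self, path: &'static str, route: ToyRoute) -> Self {
        self.routes.insert(path, route);
        self
    }
    fn call(&self, method: &str, path: &str, body: &str) -> Option<String> {
        self.routes.get(path)?.methods.get(method).map(|h| h(body))
    }
}

fn main() {
    let router = ToyRouter::default().route(
        "/todos",
        ToyRoute::default()
            .get(|_: &str| "list".to_string())
            .post(|body: &str| format!("created: {body}")),
    );
    println!("{:?}", router.call("GET", "/todos", ""));
}
```

In the real crate the same shape appears as `Router::new().route("/todos", get(list).post(create))`, with the handler and response types abstracted behind the `Handler`/`IntoResponse` bounds shown in the signatures above.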
README === ![Bonny](./assets/banner.png "Bonny") [![Module Version](https://img.shields.io/hexpm/v/bonny.svg)](https://hex.pm/packages/bonny) [![Coverage Status](https://coveralls.io/repos/github/coryodaniel/bonny/badge.svg?branch=master)](https://coveralls.io/github/coryodaniel/bonny?branch=master) [![Last Updated](https://img.shields.io/github/last-commit/coryodaniel/bonny.svg)](https://github.com/coryodaniel/bonny/commits/master) [![Build Status CI](https://github.com/coryodaniel/bonny/actions/workflows/ci.yaml/badge.svg)](https://github.com/coryodaniel/bonny/actions/workflows/ci.yaml) [![Build Status Elixir](https://github.com/coryodaniel/bonny/actions/workflows/elixir_matrix.yaml/badge.svg)](https://github.com/coryodaniel/bonny/actions/workflows/elixir_matrix.yaml) [![Build Status K8s](https://github.com/coryodaniel/bonny/actions/workflows/k8s_matrix.yaml/badge.svg)](https://github.com/coryodaniel/bonny/actions/workflows/k8s_matrix.yaml) [![Hex Docs](https://img.shields.io/badge/hex-docs-lightgreen.svg)](https://hexdocs.pm/bonny/) [![Total Download](https://img.shields.io/hexpm/dt/bonny.svg)](https://hex.pm/packages/bonny) [![License](https://img.shields.io/hexpm/l/bonny.svg)](https://github.com/coryodaniel/bonny/blob/master/LICENSE) Bonny: Kubernetes Development Framework === Extend the Kubernetes API with Elixir. Bonny makes it easy to create Kubernetes Operators, Controllers, and Custom [Schedulers](./lib/bonny/server/scheduler.ex). If Kubernetes CRDs and controllers are new to you, read up on the [terminology](#terminology). [Getting Started](#getting-started) --- Kickstarting your first controller with bonny is very straightforward. Bonny comes with some handy mix tasks to help you. ``` mix new your_operator ``` Now add bonny to your dependencies in `mix.exs`: ``` def deps do [ {:bonny, "~> 1.0"} ] end ``` Install dependencies and initialize bonny. This task will ask you to answer a few questions about your operator.
Refer to the [kubernetes docs](https://kubernetes.io/docs/concepts/overview/kubernetes-api/#api-groups-and-versioning) for API group and API version. ``` mix deps.get mix bonny.init ``` **Don't forget to add the generated operator module to your application supervisor.** ### [Configuration](#configuration) [`mix bonny.init`](Mix.Tasks.Bonny.Init.html) creates a configuration file `config/bonny.exs` and imports it to `config/config.exs` for you. #### Configuring Bonny Configuring bonny is necessary for the manifest generation through [`mix bonny.gen.manifest`](Mix.Tasks.Bonny.Gen.Manifest.html). ``` config :bonny, # Function to call to get a K8s.Conn object. # The function should return a %K8s.Conn{} struct or a {:ok, %K8s.Conn{}} tuple get_conn: {K8s.Conn, :from_file, ["~/.kube/config", [context: "docker-for-desktop"]]}, # Set the Kubernetes API group for this operator. # This can be overwritten using the @group attribute of a controller group: "your-operator.example.com", # Name must consist only of lowercase letters and hyphens. # Defaults to hyphenated mix app name operator_name: "your-operator", # Name must consist only of lowercase letters and hyphens. # Defaults to hyphenated mix app name service_account_name: "your-operator", # Labels to apply to the operator's resources. labels: %{ "kewl": "true" }, # Operator deployment resources. These are the defaults. resources: %{ limits: %{cpu: "200m", memory: "200Mi"}, requests: %{cpu: "200m", memory: "200Mi"} } ``` [Running outside of a cluster](#running-outside-of-a-cluster) --- Running an operator outside of Kubernetes is not recommended for production use, but can be very useful when testing. To start your operator and connect it to an existing cluster, you must first: 1. Have configured your operator. The above example is a good place to start. 2. Have some way of connecting to your cluster.
The most common way is to connect using your kubeconfig, as in the example:
```
# config.exs
config :bonny,
  get_conn: {K8s.Conn, :from_file, ["~/.kube/config", [context: "optional-alternate-context"]]}
```
If you've used [`mix bonny.init`](Mix.Tasks.Bonny.Init.html) to generate your config, it created a `YourOperator.Conn` module for you. You can edit that instead. 3. If RBAC is enabled, you must have permissions for creating and modifying `CustomResourceDefinition`, `ClusterRole`, `ClusterRoleBinding` and `ServiceAccount`. 4. Generate a manifest with [`mix bonny.gen.manifest`](Mix.Tasks.Bonny.Gen.Manifest.html) and install it using kubectl: `kubectl apply -f manifest.yaml` Now you are ready to run your operator:
```
iex -S mix
```
[Guides](#guides) --- Have a look at the guides that come with this repository. Some can even be opened as a livebook. * [Mix Tasks](mix_tasks.html) * [The Operator](the_operator.html) * [Controllers](controllers.html) * [Testing Controllers](testing.html) * [CRD Versions](crd_versions.html) * [Migrations](migrations.html) * [Contributing](contributing.html) [Talks](#talks) --- * Commandeering Kubernetes @ The Big Elixir 2019 + [slides](https://speakerdeck.com/coryodaniel/commandeering-kubernetes-with-elixir) + [source code](https://github.com/coryodaniel/talks/tree/master/commandeering) + [video](https://www.youtube.com/watch?v=0r9YmbH0xTY) [Example Operators built with this version of Bonny](#example-operators-built-with-this-version-of-bonny) --- * [Kompost](https://github.com/mruoss/kompost) - Providing self-service management of resources for devs [Example Operators built with an older version of Bonny](#example-operators-built-with-an-older-version-of-bonny) --- * [Eviction Operator](https://github.com/bonny-k8s/eviction_operator) - Bonny v0.4 * [Hello Operator](https://github.com/coryodaniel/hello_operator) - Bonny v0.4 * [Todo Operator](https://github.com/bonny-k8s/todo-operator) - Bonny v0.4 [Telemetry](#telemetry) --- Bonny uses the
`telemetry` library to emit event metrics. Events: `Bonny.Sys.Telemetry.events()`
```
[
  [:reconciler, :reconcile, :start],
  [:reconciler, :reconcile, :stop],
  [:reconciler, :reconcile, :exception],
  [:watcher, :watch, :start],
  [:watcher, :watch, :stop],
  [:watcher, :watch, :exception],
  [:scheduler, :binding, :start],
  [:scheduler, :binding, :stop],
  [:scheduler, :binding, :exception],
  [:task, :execution, :start],
  [:task, :execution, :stop],
  [:task, :execution, :exception],
]
```
[Terminology](#terminology) --- *[Custom Resource](https://kubernetes.io/docs/concepts/extend-kubernetes/api-extension/custom-resources/#custom-resources)*: > A custom resource is an extension of the Kubernetes API that is not necessarily available on every Kubernetes cluster. In other words, it represents a customization of a particular Kubernetes installation. *[CRD Custom Resource Definition](https://kubernetes.io/docs/concepts/extend-kubernetes/api-extension/custom-resources/#customresourcedefinitions)*: > The CustomResourceDefinition API resource allows you to define custom resources. Defining a CRD object creates a new custom resource with a name and schema that you specify. The Kubernetes API serves and handles the storage of your custom resource. *[Controller](https://kubernetes.io/docs/concepts/extend-kubernetes/api-extension/custom-resources/#custom-controllers)*: > A custom controller is a controller that users can deploy and update on a running cluster, independently of the cluster’s own lifecycle. Custom controllers can work with any kind of resource, but they are especially effective when combined with custom resources. The Operator pattern is one example of such a combination. It allows developers to encode domain knowledge for specific applications into an extension of the Kubernetes API. *[Operator](https://kubernetes.io/docs/concepts/extend-kubernetes/operator/)*: A set of application-specific controllers deployed on Kubernetes and managed via kubectl and the Kubernetes API.
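The events listed in the Telemetry section above follow the standard `:telemetry` conventions, so they can be consumed with `:telemetry.attach_many/4`. A minimal sketch (the handler id and module name are made up for illustration):
```
defmodule MyOperator.TelemetryLogger do
  require Logger

  # Attach a logging handler to a subset of Bonny's telemetry events.
  def attach do
    :telemetry.attach_many(
      "my-operator-reconcile-logger",
      [
        [:reconciler, :reconcile, :stop],
        [:reconciler, :reconcile, :exception]
      ],
      &__MODULE__.handle_event/4,
      nil
    )
  end

  def handle_event(event, measurements, metadata, _config) do
    Logger.info("#{inspect(event)}: #{inspect(measurements)} #{inspect(metadata)}")
  end
end
```
Call `MyOperator.TelemetryLogger.attach()` once at application start, e.g. from your application's `start/2` callback.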
[Contributing](#contributing) --- I'm thankful for any contribution to this project. Check out the [contribution guide](contributing.html). [Operator Blog Posts](#operator-blog-posts) --- * [Why Kubernetes Operators are a game changer](https://blog.couchbase.com/kubernetes-operators-game-changer/) Bonny === Extend the Kubernetes API and implement CustomResourceDefinition lifecycles in Elixir. Bonny.API.CRD === A Custom Resource Definition. The `%Bonny.API.CRD{}` struct contains the fields `group`, `resource_type`, `scope` and `version`. New definitions can be created directly using the [`new!/1`](#new!/1) function. [Summary](#summary) === [Types](#types) --- [names_t()](#t:names_t/0) Defines the names section of the CRD. [t()](#t:t/0) A Custom Resource Definition. [Functions](#functions) --- [fetch(term, key)](#fetch/2) See [`Map.fetch/2`](https://hexdocs.pm/elixir/Map.html#fetch/2). [get(term, key, default)](#get/3) See [`Map.get/3`](https://hexdocs.pm/elixir/Map.html#get/3). [get_and_update(term, key, fun)](#get_and_update/3) See [`Map.get_and_update/3`](https://hexdocs.pm/elixir/Map.html#get_and_update/3). [kind_to_names(kind, short_names \\ [])](#kind_to_names/2) Builds a map of names from the given kind. [new!(fields)](#new!/1) Creates a new %Bonny.API.CRD{} struct from the given values. `:scope` is optional and defaults to `:Namespaced`. [to_manifest(crd)](#to_manifest/1) Converts the internally used structure to a map representing a Kubernetes CRD manifest. [Types](#types) === [Functions](#functions) === Bonny.API.ResourceEndpoint === Defines the API Endpoint for a Kubernetes resource. The struct contains the fields `group`, `resource_type`, `scope` and `version`. New definitions can be created directly or using the [`new!/1`](#new!/1) function. [Summary](#summary) === [Types](#types) --- [t()](#t:t/0) A Resource API Definition.
Also see [Kubernetes API terminology](https://kubernetes.io/docs/reference/using-api/api-concepts/#standard-api-terminology). [Functions](#functions) --- [new!(fields)](#new!/1) Creates a new %Bonny.API.ResourceEndpoint{} struct from the given values. `:scope` is optional and defaults to `:Namespaced`. [new!(api_version, kind, scope \\ :Namespaced)](#new!/3) Creates a new %Bonny.API.ResourceEndpoint{} struct from `apiVersion` and `kind`. The scope can be passed as a third optional parameter. [resource_api_version(definition)](#resource_api_version/1) Gets the apiVersion of the actual resources. [Types](#types) === [Functions](#functions) === Bonny.API.Version behaviour === Describes an API version of a custom resource. The `%Bonny.API.Version{}` struct contains the fields required to build the manifest for this version. This module is meant to be `use`d by a module representing the API version of a custom resource. The using module has to define the function `manifest/0`. The macro `defaults/1` is imported into the using module. It can be used to simplify getting started. The first argument is the version's name (e.g. "v1"). If no name is passed, the macro will use the using module's name as the version name. **Note: The `:storage` flag has to be `true` for exactly one version of a CRD.**
```
defmodule MyOperator.API.V1.CronTab do
  use Bonny.API.Version

  def manifest() do
    struct!(defaults(), storage: true)
  end
end
```
Use the `manifest/0` callback to override the defaults, e.g. add a schema.
Pipe your struct into [`add_observed_generation_status/1`](#add_observed_generation_status/1) - which is imported into the using module - if you use the [`Bonny.Pluggable.SkipObservedGenerations`](Bonny.Pluggable.SkipObservedGenerations.html) step in your controller:
```
defmodule MyOperator.API.V1.CronTab do
  use Bonny.API.Version

  def manifest() do
    struct!(
      defaults(),
      storage: true,
      schema: %{
        openAPIV3Schema: %{
          type: :object,
          properties: %{
            spec: %{}
          }
        }
      }
    )
  end
end
```
[Summary](#summary) === [Types](#types) --- [printer_column_t()](#t:printer_column_t/0) Defines an [additional printer column](https://kubernetes.io/docs/tasks/extend-kubernetes/custom-resources/custom-resource-definitions/#additional-printer-columns). [schema_t()](#t:schema_t/0) Defines an [OpenAPI V3 Schema](https://kubernetes.io/docs/tasks/extend-kubernetes/custom-resources/custom-resource-definitions/#specifying-a-structural-schema). [subresources_t()](#t:subresources_t/0) Defines a version of a custom resource. Refer to the [CRD versioning documentation](https://kubernetes.io/docs/tasks/extend-kubernetes/custom-resources/custom-resource-definition-versioning/) [t()](#t:t/0) [Callbacks](#callbacks) --- [manifest()](#c:manifest/0) Return a `%Bonny.API.Version{}` struct representing the manifest for this version of the CRD API. [Functions](#functions) --- [add_conditions(version)](#add_conditions/1) Adds the status subresource if it hasn't been added before and adds the schema for the `.status.conditions` array. [add_observed_generation_status(version)](#add_observed_generation_status/1) Adds the status subresource if it hasn't been added before and adds a field .status.observedGeneration of type integer to the OpenAPIV3Schema. [defaults()](#defaults/0) Returns a [`Bonny.API.Version`](Bonny.API.Version.html#content) struct with default values. Use this and pipe it into `struct!()` to override the defaults in your `manifest/0` callback.
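Putting the helpers above together, a `manifest/0` that marks the version as the storage version and prepares the status subresource for observed generations could look like this sketch (the module name is hypothetical; `defaults/0`, `struct!/2` and `add_observed_generation_status/1` are the documented helpers):
```
defmodule MyOperator.API.V1.CronTab do
  use Bonny.API.Version

  def manifest() do
    defaults()
    # override the defaults returned by defaults/0
    |> struct!(storage: true)
    # adds the status subresource and .status.observedGeneration to the schema
    |> add_observed_generation_status()
  end
end
```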
[Types](#types) === [Callbacks](#callbacks) === [Functions](#functions) === Bonny.Axn === Describes a resource action event. This is the token passed to all steps of your operator and controller pipeline. This module gets imported to your controllers where you should use the functions [`register_descendant/3`](#register_descendant/3), [`update_status/2`](#update_status/2) and the ones to register events: [`success_event/2`](#success_event/2), [`failure_event/2`](#failure_event/2) and/or [`register_event/6`](#register_event/6). Note that these functions raise exceptions if those resources have already been applied to the cluster. The `register_before_*` functions can be used in [`Pluggable`](https://hexdocs.pm/pluggable/1.0.1/Pluggable.html) steps in order to register callbacks that are called before applying resources to the cluster. Have a look at [`Bonny.Pluggable.Logger`](Bonny.Pluggable.Logger.html) for a use case. [Action event fields](#module-action-event-fields) --- These fields contain information on the action event that occurred. 
* `action` - the action that triggered this event * `resource` - the resource the action was applied to * `conn` - the connection to the cluster where the event occurred * `operator` - the operator that discovered and dispatched the event * `controller` - the controller handling the event and its init opts [Reaction fields](#module-reaction-fields) --- * `descendants` - descending resources defined by the handling controller * `status` - the data to be applied to the status subresource * `events` - Kubernetes events regarding the resource to be applied to the cluster [Pipeline fields](#module-pipeline-fields) --- * `halted` - the boolean status on whether the pipeline was halted * `assigns` - shared user data as a map * `private` - shared library data as a map * `states` - the states for status, events and descendants [Summary](#summary) === [Types](#types) --- [assigns()](#t:assigns/0) [states()](#t:states/0) [t()](#t:t/0) [Functions](#functions) --- [apply_descendants(axn, opts \\ [])](#apply_descendants/2) Applies the descendants to the cluster in groups. If `:create_events` is true, an event is created for each successful apply. Always creates events upon failed applies. [apply_status(axn, apply_opts \\ [])](#apply_status/2) Applies the status to the resource's status subresource in the cluster. If no status was specified, :noop is returned. [are_descendants_applied(axn)](#are_descendants_applied/1) [are_events_emitted(axn)](#are_events_emitted/1) [clear_events(axn)](#clear_events/1) Empties the list of events without emitting them. [emit_events(axn)](#emit_events/1) Emits the events created for this Axn. [failure_event(axn, opts \\ [])](#failure_event/2) Registers a failure event to the `%Axn{}` token to be emitted by Bonny. [identifier(axn)](#identifier/1) Returns an identifier of an action event (resource and action) as a tuple. Can be used in logs and similar.
[is_status_applied(axn)](#is_status_applied/1) [new!(fields)](#new!/1) [register_after_processed(axn, callback)](#register_after_processed/2) Registers a callback to be invoked at the very end of an action event's processing by the operator. [register_before_apply_descendants(axn, callback)](#register_before_apply_descendants/2) Registers a callback to be invoked before descendants are applied to the cluster. [register_before_apply_status(axn, callback)](#register_before_apply_status/2) Registers a callback to be invoked before a status is applied to the status subresource. [register_before_emit_event(axn, callback)](#register_before_emit_event/2) Registers a callback to be invoked before events are emitted to the cluster. [register_descendant(axn, descendant, opts \\ [])](#register_descendant/3) Registers a descending object to be applied. An owner reference will be added automatically, unless disabled through the option `omit_owner_ref`. [register_event(axn, related \\ nil, event_type, reason, action, message)](#register_event/6) Registers a Kubernetes event to the `%Axn{}` token to be emitted by Bonny. [set_condition(axn, type, status, message \\ nil)](#set_condition/4) Sets the condition in the resource status. [success_event(axn, opts \\ [])](#success_event/2) Registers a success event to the `%Axn{}` token to be emitted by Bonny. [update_status(axn, fun)](#update_status/2) Executes `fun` for the resource status and applies the new status subresource. This can be called multiple times. [Types](#types) === [Functions](#functions) === Bonny.Axn.DescendantsAlreadyAppliedError exception === Error raised when trying to register a descendant or apply the descendants when already applied. Bonny.Axn.EventsAlreadyEmittedError exception === Error raised when trying to register an event or emit events when already emitted.
Bonny.Axn.StatusAlreadyAppliedError exception === Error raised when trying to update or apply an already applied status Bonny.Axn.Test === Conveniences for testing Axn steps. This module can be used in your test cases, like this: ``` use ExUnit.Case, async: true use Bonny.Axn.Test ``` Using this module will: * import all the functions from this module * import all the functions from the [`Bonny.Axn`](Bonny.Axn.html) module * import all the functions from the [`Pluggable.Token`](https://hexdocs.pm/pluggable/1.0.1/Pluggable.Token.html) module [Summary](#summary) === [Functions](#functions) --- [assigns(axn)](#assigns/1) [axn(action, fields \\ [])](#axn/2) [descendants(axn)](#descendants/1) [events(axn)](#events/1) [status(axn)](#status/1) [Functions](#functions) === Bonny.CRD === Represents the `spec` portion of a Kubernetes [CustomResourceDefinition](https://kubernetes.io/docs/tasks/access-kubernetes-api/custom-resources/custom-resource-definitions/) manifest. > The CustomResourceDefinition API resource allows you to define custom resources. Defining a CRD object creates a new custom resource with a name and schema that you specify. The Kubernetes API serves and handles the storage of your custom resource. [Summary](#summary) === [Types](#types) --- [t()](#t:t/0) CRD Spec [Functions](#functions) --- [api_version(crd)](#api_version/1) Gets group version from CRD spec [default_columns()](#default_columns/0) Default CLI printer columns. [kind(crd)](#kind/1) CRD Kind or plural name [to_manifest(crd, api_version \\ "apiextensions.k8s.io/v1beta1")](#to_manifest/2) Generates the map equivalent of the Kubernetes CRD YAML manifest [Types](#types) === [Functions](#functions) === Bonny.Config === Operator configuration interface [Summary](#summary) === [Functions](#functions) --- [api_version()](#api_version/0) Kubernetes APIVersion used. 
Defaults to `apiextensions.k8s.io/v1` [conn()](#conn/0) [`K8s.Conn`](https://hexdocs.pm/k8s/2.4.1/K8s.Conn.html) name used for this operator. [controllers()](#controllers/0) List of all controller modules to watch. [group()](#group/0) Kubernetes API Group of this operator [instance_name()](#instance_name/0) The name of the operator instance. [labels()](#labels/0) Labels to apply to all operator resources. [name()](#name/0) The name of the operator. [namespace()](#namespace/0) The namespace to watch for `Namespaced` CRDs. [service_account()](#service_account/0) Kubernetes service account name to run operator as. [versions()](#versions/0) Kubernetes API Versions of this operator [Functions](#functions) === Bonny.Controller behaviour === [`Bonny.Controller`](Bonny.Controller.html#content) defines controller behaviours and generates boilerplate for generating Kubernetes manifests. > A custom controller is a controller that users can deploy and update on a running cluster, independently of the cluster’s own lifecycle. Custom controllers can work with any kind of resource, but they are especially effective when combined with custom resources. The Operator pattern is one example of such a combination. It allows developers to encode domain knowledge for specific applications into an extension of the Kubernetes API. Controllers allow for simple `add`, `modify`, `delete`, and `reconcile` handling of custom resources in the Kubernetes API. [Summary](#summary) === [Callbacks](#callbacks) --- [add(map)](#c:add/1) [conn()](#c:conn/0) Bonny.Controller comes with a default implementation which returns Bonny.Config.config() [delete(map)](#c:delete/1) [list_operation()](#c:list_operation/0) Should return an operation to list resources for watching and reconciliation. 
[modify(map)](#c:modify/1) [reconcile(map)](#c:reconcile/1) [Functions](#functions) --- [ensure_list_query(op)](#ensure_list_query/1) [ensure_watch_query(op)](#ensure_watch_query/1) [list_operation(controller)](#list_operation/1) [Callbacks](#callbacks) === [Functions](#functions) === Bonny.ControllerV2 behaviour === Controllers handle action events observed by a resource watch query. Controllers must be registered with the operator together with the resource watch query. The operator will then delegate events observed by that query for processing to this controller. Controllers use the [`Pluggable.StepBuilder`](https://hexdocs.pm/pluggable/1.0.1/Pluggable.StepBuilder.html) to build a step in the processing pipeline. In order to use it, a step has to be defined and implemented in the controller. The step must have the following spec:
```
step_name(Bonny.Axn.t(), keyword()) :: Bonny.Axn.t()
```
The [`Bonny.Axn`](Bonny.Axn.html) module is imported into your controller. In your event handler step you should use the functions [`Bonny.Axn.register_descendant/3`](Bonny.Axn.html#register_descendant/3), [`Bonny.Axn.update_status/2`](Bonny.Axn.html#update_status/2) and the ones to register events: [`Bonny.Axn.success_event/2`](Bonny.Axn.html#success_event/2), [`Bonny.Axn.failure_event/2`](Bonny.Axn.html#failure_event/2) and/or [`Bonny.Axn.register_event/6`](Bonny.Axn.html#register_event/6). Note that these functions raise exceptions if those resources have already been applied to the cluster. [Example](#module-example) --- Match against the struct's `:action` field, which is one of `:add`, `:modify`, `:reconcile` or `:delete`, to provide an implementation for each case.
```
defmodule MyOperator.Controller.CronTabController do
  # other steps
  step :handle_event
  # other steps

  # apply the resource
  def handle_event(%Bonny.Axn{action: action, resource: resource} = axn, _opts)
      when action in [:add, :modify, :reconcile] do
    success_event(axn)
  end

  def handle_event(%Bonny.Axn{action: :delete, resource: resource} = axn, _opts) do
    axn
  end
end
```
Registering your descendants with the `%Bonny.Axn{}` token makes your controller easier to test. Be sure to add [`Bonny.Pluggable.ApplyDescendants`](Bonny.Pluggable.ApplyDescendants.html) as a step to your operator in order for the descendants to be applied to the cluster.
```
defmodule MyOperator.Controller.CronTabController do
  # other steps
  step :handle_event
  # other steps

  # apply the resource
  def handle_event(axn, _opts) do
    deployment = generate_deployment(axn.resource)

    axn
    |> register_descendant(deployment)
    |> success_event()
  end
end
```
Use [`Bonny.Axn.update_status/2`](Bonny.Axn.html#update_status/2) to store API responses or other status data in the resource status. Be sure to enable the status subresource in your CRD version module.
```
defmodule MyOperator.Controller.CronTabController do
  # other steps
  step :handle_event
  # other steps

  # apply the resource
  def handle_event(axn, _opts) do
    response = apply_state(axn.resource)

    axn
    |> update_status(fn status -> Map.put(status, "response", response) end)
    |> success_event()
  end
end
```
[Summary](#summary) === [Types](#types) --- [api()](#t:api/0) [rbac_rule()](#t:rbac_rule/0) [resource()](#t:resource/0) [verb()](#t:verb/0) [Callbacks](#callbacks) --- [rbac_rules()](#c:rbac_rules/0) [Functions](#functions) --- [child_spec(init_arg)](#child_spec/1) Returns a specification to start this module under a supervisor. [start_link(init_args)](#start_link/1) [to_rbac_rule(arg)](#to_rbac_rule/1) [Types](#types) === [Callbacks](#callbacks) === [Functions](#functions) === Bonny.Event === Represents a Kubernetes event.
Documentation: <https://kubernetes.io/docs/reference/kubernetes-api/cluster-resources/event-v1/> [Summary](#summary) === [Types](#types) --- [event_type()](#t:event_type/0) Kubernetes events currently support these types. [t()](#t:t/0) See <https://kubernetes.io/docs/reference/kubernetes-api/cluster-resources/event-v1/> for field explanations. [Functions](#functions) --- [new!(fields)](#new!/1) Creates an event. [new!(regarding, related \\ nil, event_type, reason, action, message, opts \\ [])](#new!/7) [Types](#types) === [Functions](#functions) === Bonny.EventRecorder === Records Kubernetes events regarding objects controlled by this operator. [Summary](#summary) === [Types](#types) --- [event_key()](#t:event_key/0) A map to identify an event. [Functions](#functions) --- [child_spec(arg)](#child_spec/1) Returns a specification to start this module under a supervisor. [emit(event, operator, conn)](#emit/3) Creates a Kubernetes event in the cluster. Documentation: <https://kubernetes.io/docs/reference/kubernetes-api/cluster-resources/event-v1/> [start_link(opts)](#start_link/1) [Types](#types) === [Functions](#functions) === Bonny.Mix.Operator === Encapsulates Kubernetes resource manifests for an operator [Summary](#summary) === [Functions](#functions) --- [cluster_role(operators)](#cluster_role/1) ClusterRole manifest [cluster_role_binding(namespace)](#cluster_role_binding/1) ClusterRoleBinding manifest [crds(operators)](#crds/1) CRD manifests [deployment(image, namespace)](#deployment/2) Deployment manifest [find_operators()](#find_operators/0) [rbac_rules(operators)](#rbac_rules/1) [service_account(namespace)](#service_account/1) ServiceAccount manifest [Functions](#functions) === Bonny.Naming === Naming functions [Summary](#summary) === [Functions](#functions) --- [module_to_kind(mod)](#module_to_kind/1) Converts an Elixir module name to a string for use as the CRD's `kind`. [module_version(mod)](#module_version/1) Extracts the CRD API version from the module name.
Defaults to `"v1"` [Functions](#functions) === Bonny.Operator behaviour === Defines a Bonny operator. The operator defines custom resources, watch queries and their controllers and serves as the entry point to the watching and handling processes. Overall, an operator has the following responsibilities: * to provide a wrapper for starting and stopping the operator as part of a supervision tree * to define the resources to be watched together with the controllers which handle action events on those resources * to define an initial pluggable pipeline for all action events to pass through * to define any custom resources ending up in the manifest generated by [`mix bonny.gen.manifest`](Mix.Tasks.Bonny.Gen.Manifest.html) [Operators](#module-operators) --- An operator is defined with the help of `Bonny.Operator`. The step `:delegate_to_controller` has to be part of the pipeline. It is the step that calls the handling controller for a given action event:
```
defmodule MyOperatorApp.Operator do
  use Bonny.Operator, default_watching_namespace: "default"

  # step ...
  step :delegate_to_controller
  # step ...

  def controllers(watching_namespace, _opts) do
    [
      %{
        query: K8s.Client.watch("my-controller.io", "MyCustomResource", namespace: nil),
        controller: MyOperator.Controller.MyCustomResourceController
      }
    ]
  end
end
```
[Summary](#summary) === [Types](#types) --- [controller_spec()](#t:controller_spec/0) [Callbacks](#callbacks) --- [controllers(binary, t)](#c:controllers/2) [crds()](#c:crds/0) [Types](#types) === [Callbacks](#callbacks) === Bonny.Operator.LeaderElector === The leader elector uses a [Kubernetes Lease](https://kubernetes.io/docs/concepts/architecture/leases/) to make sure the operator only runs on one single replica (the leader) at the same time. [Enabling the Leader Election](#module-enabling-the-leader-election) --- > #### Functionality still in Beta > The leader election is still being tested. Enable it for testing purposes > only and please report any issues on GitHub.
To enable leader election you have to pass the `enable_leader_election: true` option when [adding the operator to your Supervisor](#adding-the-operator-to-your-supervisor):
```
defmodule MyOperator.Application do
  use Application

  def start(_type, env: env) do
    children = [
      {MyOperator.Operator,
       conn: MyOperator.K8sConn.get!(env),
       watch_namespace: :all,
       enable_leader_election: true} # <-- starts the leader elector
    ]

    opts = [strategy: :one_for_one, name: MyOperator.Supervisor]
    Supervisor.start_link(children, opts)
  end
end
```
[Summary](#summary) === [Functions](#functions) --- [child_spec(init_arg)](#child_spec/1) Returns a specification to start this module under a supervisor. [start_link(controllers, operator, init_args)](#start_link/3) [Functions](#functions) === Bonny.PeriodicTask === Register periodically run tasks. Use for running tasks as a part of reconciling a CRD with a lifetime, duration, or interval field. **Note:** Must be started by your operator. Add `Bonny.PeriodicTask.start_link(:ok)` to your application. Functions are expected to return one of: * `:ok` - task will be passed to subsequent calls * `{:ok, new_state}` - the state field will be updated in the task and provided to the next call * `{:stop, reason}` - task will be removed from the execution loop. Use for tasks opting out of being re-run * `any()` - any other result is treated as an error, and the execution loop will be halted [Examples](#module-examples) --- Registering a task:
```
iex> Bonny.PeriodicTask.new(:pod_evictor, {PodEvictor, :evict, [reconcile_payload_map]}, 5000)
```
Unregistering a task:
```
iex> Bonny.PeriodicTask.unregister(:pod_evictor)
```
[Summary](#summary) === [Types](#types) --- [t()](#t:t/0) [Functions](#functions) --- [child_spec(arg)](#child_spec/1) Returns a specification to start this module under a supervisor.
[new(id, handler, interval \\ 5000)](#new/3) Registers and starts a new task given [`Bonny.PeriodicTask`](Bonny.PeriodicTask.html#content) attributes [register(task)](#register/1) Registers and starts a new [`Bonny.PeriodicTask`](Bonny.PeriodicTask.html#content) [start_link(any)](#start_link/1) [unregister(id)](#unregister/1) Unregisters and stops a [`Bonny.PeriodicTask`](Bonny.PeriodicTask.html#content) [Types](#types) === [Functions](#functions) === Bonny.Pluggable.AddManagedByLabelToDescendants === Adds the `app.kubernetes.io/managed-by` label to all descendants registered within the pipeline. Add this to your operator or controllers to set this label to a value of your choice. [Options](#module-options) --- * `:managed_by` - Required. The value the label should be set to. [Examples](#module-examples) ---
```
step Bonny.Pluggable.AddManagedByLabelToDescendants,
  managed_by: Bonny.Config.name()
```
Bonny.Pluggable.AddMissingGVK === The Kubernetes API sometimes doesn't set the fields `apiVersion` and `kind` on items of a list operation. E.g. if you use [`K8s.Client`](https://hexdocs.pm/k8s/2.4.1/K8s.Client.html) to get a list of deployments, the deployments won't contain those two fields. This is being discussed on <https://github.com/kubernetes/kubernetes/issues/3030>. Bonny sometimes depends on `apiVersion` and `kind` to be defined on the resource being handled. This is the case for setting the status, e.g. when using [`Bonny.Pluggable.SkipObservedGenerations`](Bonny.Pluggable.SkipObservedGenerations.html) or [`Bonny.Axn.update_status/2`](Bonny.Axn.html#update_status/2). Add this step to your controller in order to set those values on all resources being handled. [Examples](#module-examples) ---
```
step Bonny.Pluggable.AddMissingGVK,
  apiVersion: "apps/v1",
  kind: "Deployment"
```
Bonny.Pluggable.ApplyDescendants === Applies all the descendants added to the `%Bonny.Axn{}` struct.
[Options](#module-options) --- * `:events_for_actions` - List of actions for which events will be created upon successful apply. Defaults to `[:add, :modify]` (Reconcile actions are triggered regularly which would create lots of events for no actions.) * `:force` and `:field_manager` - Options forwarded to `K8s.Client.apply()`. [Examples](#module-examples) --- ``` step Bonny.Pluggable.ApplyDescendants, events_for_actions: [:add, :modify, :reconcile], field_manager: "MyOperator", force: true ``` Bonny.Pluggable.ApplyStatus === Applies the status of the given `%Bonny.Axn{}` struct to the status subresource. [Options](#module-options) --- * `:force` and `:field_manager` - Options forwarded to `K8s.Client.apply()`. [Examples](#module-examples) --- ``` step Bonny.Pluggable.ApplyStatus, field_manager: "MyOperator", force: true ``` Bonny.Pluggable.Finalizer === Declare a finalizer and its implementation. ### [Kubernetes Docs:](#module-kubernetes-docs) * <https://kubernetes.io/docs/concepts/overview/working-with-objects/finalizers/> * <https://kubernetes.io/blog/2021/05/14/using-finalizers-to-control-deletion/> #### Use with `SkipObservedGenerations` > This step is best used with and placed right after > [`Bonny.Pluggable.SkipObservedGenerations`](Bonny.Pluggable.SkipObservedGenerations.html) in your controller. Have a look at > the examples below. > #### Note for Testing > This step directly updates the resource on the kubernetes > cluster. In order to write unit tests, you therefore want to use > [`K8s.Client.DynamicHTTPProvider`](https://hexdocs.pm/k8s/2.4.1/K8s.Client.DynamicHTTPProvider.html) in order to mock the calls to Kubernetes. ### [Examples](#module-examples) See [`options/0`](#t:options/0), [`finalizer_impl/0`](#t:finalizer_impl/0) and [`add_to_resource/0`](#t:add_to_resource/0) for more infos on the options. By default, a missing finalizer is not added to the resource. 
```
step Bonny.Pluggable.SkipObservedGenerations

step Bonny.Pluggable.Finalizer,
  id: "example.com/cleanup",
  impl: &__MODULE__.cleanup/1
```
Set `add_to_resource` to true in order for Bonny to always add it.
```
step Bonny.Pluggable.SkipObservedGenerations

step Bonny.Pluggable.Finalizer,
  id: "example.com/cleanup",
  impl: &__MODULE__.cleanup/1,
  add_to_resource: true
```
Or make it depend on the event/resource and enable logs.
```
step Bonny.Pluggable.SkipObservedGenerations

step Bonny.Pluggable.Finalizer,
  id: "example.com/cleanup",
  impl: &__MODULE__.cleanup/1,
  add_to_resource: &__MODULE__.deletion_policy_not_abandon/1,
  log: :debug
```
[Summary](#summary) === [Types](#types) --- [add_to_resource()](#t:add_to_resource/0) Boolean or callback of arity 1 to tell Bonny whether or not to add the finalizer to the resource if it is missing. If it is a callback, it receives the `%Bonny.Axn{}` token and should return a `boolean`. [finalizer_impl()](#t:finalizer_impl/0) The implementation of the finalizer. This is a function of arity 1 which is called when the resource is deleted. It receives the `%Bonny.Axn{}` token as argument and should return the same. [options()](#t:options/0) * `id` - Fully qualified finalizer identifier * `impl` - The implementation of the finalizer. See [`finalizer_impl/0`](#t:finalizer_impl/0) * `add_to_resource` - (optional) whether Bonny should add the finalizer to the resource if it is missing. See [`add_to_resource/0`](#t:add_to_resource/0). Defaults to `false`. * `log_level` - (optional) Log level used for logging by this step. `:disable` for no logs.
Defaults to `:disable` [Types](#types) === Bonny.Pluggable.Logger === A pluggable step for logging basic action event information in the format: ``` {"NAMESPACE/OBJECT_NAME", API_VERSION, "Kind=KIND, Action=ACTION"} ``` Example: ``` {"default/my-object", "example.com/v1", "Kind=MyCustomResource, Action=:add"} - Processing event {"default/my-object", "example.com/v1", "Kind=MyCustomResource, Action=:add"} - Applying status {"default/my-object", "example.com/v1", "Kind=MyCustomResource, Action=:add"} - Emitting Normal event {"default/my-object", "example.com/v1", "Kind=MyCustomResource, Action=:add"} - Applying descendant {"default/nginx", "apps/v1", "Kind=Deployment"} ``` To use it, just add a step to the desired module. ``` step Bonny.Pluggable.Logger, level: :debug ``` [Options](#module-options) --- * `:level` - The log level at which this plug should log its request info. Default is `:info`. The [list of supported levels](https://hexdocs.pm/logger/Logger.html#module-levels) is available in the [`Logger`](https://hexdocs.pm/logger/Logger.html) documentation. [Summary](#summary) === [Functions](#functions) --- [call(axn, level)](#call/2) Callback implementation for [`Pluggable.call/2`](https://hexdocs.pm/pluggable/1.0.1/Pluggable.html#c:call/2). [init(opts)](#init/1) Callback implementation for [`Pluggable.init/1`](https://hexdocs.pm/pluggable/1.0.1/Pluggable.html#c:init/1). [Functions](#functions) === Bonny.Pluggable.SkipObservedGenerations === Halts the pipelines for a defined list of actions if the observed generation equals the resource's generation. It also sets the observed generation value before applying the resource status to the cluster. [Options](#module-options) --- * `:actions` - The actions for which this rule applies. Defaults to `[:add, :modify]`. * `:observed_generation_key` - The resource status key where the observed generation is stored. This will be passed to `Kernel.get_in()`. Defaults to `["status", "observedGeneration"]`. 
[Usage](#module-usage) --- ``` step Bonny.Pluggable.SkipObservedGenerations, actions: [:add, :modify, :reconcile], observed_generation_key: ~w(status internal observedGeneration) ``` Bonny.Resource === Helper functions for dealing with kubernetes resources. [Summary](#summary) === [Types](#types) --- [t()](#t:t/0) [Functions](#functions) --- [add_owner_reference(resource, owner, opts \\ [])](#add_owner_reference/3) Add an owner reference to the given resource. [apply(resource, conn, opts)](#apply/3) Applies the given resource to the cluster. [apply_async(resources, conn, opts \\ [])](#apply_async/3) Applies the given resource to the cluster. [apply_status(resource, conn, opts \\ [])](#apply_status/3) Applies the status subresource of the given resource to the cluster. If the given resource doesn't contain a status object, nothing is done and :noop is returned. [drop_managed_fields(resource)](#drop_managed_fields/1) Removes .metadata.managedFields from the resource. [drop_rv(resource)](#drop_rv/1) Removes .metadata.resourceVersion from the resource. [gvkn(resource)](#gvkn/1) Returns a tuple in the form [resource_reference(resource)](#resource_reference/1) Get a reference to the given resource [set_label(resource, label, value)](#set_label/3) Set a label on the given resource. [set_observed_generation(resource)](#set_observed_generation/1) Sets .status.observedGeneration to .metadata.generation [Types](#types) === [Functions](#functions) === Bonny.Server.AsyncStreamRunner === Runs the given stream in a separate process. Prepare your stream and add this Runner to your supervision tree in order to control it (e.g. restart after the stream ends). 
[Example](#module-example) --- ``` # prepare a stream stream = conn |> K8s.Client.stream(operation) |> Stream.filter(&filter_resources/1) |> Stream.map(&process_stream/1) children = [ {Bonny.Server.AsyncStreamRunner, name: ReconcileServer, stream: stream, termination_delay: 30_000} ] Supervisor.init(children, strategy: :one_for_one) ``` [Options](#module-options) --- * `:stream` - The (prepared) stream to run * `:name` (optional) - Register this process under the given name. * `:termination_delay` (optional) - After the stream ends, how many milliseconds to wait before the process terminates (and might be restarted by the Supervisor). By default there is no delay. [Summary](#summary) === [Functions](#functions) --- [child_spec(args)](#child_spec/1) [run(stream, termination_delay)](#run/2) [start_link(args)](#start_link/1) [Functions](#functions) === Bonny.Server.Reconciler behaviour === Creates a stream that, when run, streams a list of resources and calls `reconcile/1` on the given controller for each resource in the stream in parallel. [Example](#module-example) --- ``` reconciliation_stream = Bonny.Server.Reconciler.get_stream(controller) Task.async(fn -> Stream.run(reconciliation_stream) end) ``` [Summary](#summary) === [Callbacks](#callbacks) --- [reconcile(map)](#c:reconcile/1) [Functions](#functions) --- [get_raw_stream(conn, reconcile_operation, stream_opts \\ [])](#get_raw_stream/3) [get_stream(module, conn, reconcile_operation, stream_opts \\ [])](#get_stream/4) Prepares a stream which maps each resource returned by the `reconcile_operation` to a function `reconcile/1` on the given `module`. If given, the `stream_opts` are passed to `K8s.Client.stream/3`. [Callbacks](#callbacks) === [Functions](#functions) === Bonny.Server.Scheduler behaviour === Kubernetes custom scheduler interface. Built on top of `Reconciler`. The only function that needs to be implemented is `select_node_for_pod/2`. All other functions defined by the behaviour have default implementations. 
[Examples](#module-examples) --- Will schedule each unscheduled pod with `spec.schedulerName=cheap-node` to a node with a label `cheap=true`. `nodes` is a stream that can be lazily filtered: ``` defmodule CheapNodeScheduler do use Bonny.Server.Scheduler, name: "cheap-node" @impl Bonny.Server.Scheduler def select_node_for_pod(_pod, nodes) do nodes |> Stream.filter(fn(node) -> is_cheap = K8s.Resource.label(node, "cheap") is_cheap == "true" end) |> Enum.take(1) |> List.first end end CheapNodeScheduler.start_link() ``` Will schedule each unscheduled pod with `spec.schedulerName=random-node` to a random node: ``` defmodule RandomNodeScheduler do use Bonny.Server.Scheduler, name: "random-node" @impl Bonny.Server.Scheduler def select_node_for_pod(_pod, nodes) do Enum.random(nodes) end end RandomNodeScheduler.start_link() ``` Override `nodes/0` default implementation (`pods/0` can be overridden too). Schedules pods on a random GPU node: ``` defmodule GpuScheduler do use Bonny.Server.Scheduler, name: "gpu-node" @impl Bonny.Server.Scheduler def select_node_for_pod(_pod, nodes) do Enum.random(nodes) end @impl Bonny.Server.Scheduler def nodes() do label = "my.label.on.gpu.instances" conn = Bonny.Config.conn() op = K8s.Client.list("v1", :nodes) K8s.Client.stream(conn, op, params: %{labelSelector: label}) end end GpuScheduler.start_link() ``` [Summary](#summary) === [Callbacks](#callbacks) --- [conn()](#c:conn/0) [field_selector()](#c:field_selector/0) Field selector for selecting unscheduled pods waiting to be scheduled by this scheduler. [name()](#c:name/0) Name of the scheduler. [nodes(t)](#c:nodes/1) List of nodes available to this scheduler. [select_node_for_pod(map, list)](#c:select_node_for_pod/2) Selects the best node for the current `pod`. [Functions](#functions) --- [bind(conn, pod, node)](#bind/3) Binds a pod to a node [field_selector(scheduler_name)](#field_selector/1) Kubernetes API `fieldSelector` value for unbound pods waiting on the given scheduler. 
[nodes(conn)](#nodes/1) Returns a list of all nodes in the cluster. [reconcile(scheduler, pod)](#reconcile/2) [Callbacks](#callbacks) === [Functions](#functions) === Bonny.Server.Scheduler.Binding === Kubernetes binding interface. Currently [undocumented](https://github.com/kubernetes/kubernetes/issues/75749) in Kubernetes docs. [Links](#module-links) --- * [Example using curl](https://gist.github.com/kelseyhightower/2349c9c645d32a3fcbe385082de74668) * [Example using golang](https://banzaicloud.com/blog/k8s-custom-scheduler/) [Summary](#summary) === [Functions](#functions) --- [create(conn, pod, node)](#create/3) Creates the pod's /binding subresource through K8s. [new(pod, node)](#new/2) Returns a map representing a `Binding` kubernetes resource [Functions](#functions) === Bonny.Server.Watcher === Creates the stream for watching resources in kubernetes and prepares its processing. Watching a resource in kubernetes results in a stream of add/modify/delete events. This module uses [`K8s.Client.stream/3`](https://hexdocs.pm/k8s/2.4.1/K8s.Client.html#stream/3) to create such a stream and maps events to a controller's event handler. It is then up to the caller to run the resulting stream. [Example](#module-example) --- ``` watch_stream = Bonny.Server.Watcher.get_stream(controller) Task.async(fn -> Stream.run(watch_stream) end) ``` [Summary](#summary) === [Types](#types) --- [action()](#t:action/0) [watch_event()](#t:watch_event/0) [Functions](#functions) --- [get_raw_stream(conn, watch_operation)](#get_raw_stream/2) [get_stream(controller, conn, watch_operation)](#get_stream/3) [Types](#types) === [Functions](#functions) === Bonny.Sys.Event === Telemetry event definitions for this library [Summary](#summary) === [Functions](#functions) --- [events()](#events/0) deprecated See [`Bonny.Sys.Telemetry.events/0`](Bonny.Sys.Telemetry.html#events/0). 
[Functions](#functions) === Bonny.Sys.Logger === Attaches telemetry events to the Elixir Logger [Summary](#summary) === [Functions](#functions) --- [attach()](#attach/0) Attaches telemetry events to the Elixir Logger [Functions](#functions) === Bonny.Sys.Telemetry === Telemetry event definitions for this library [Summary](#summary) === [Functions](#functions) --- [events()](#events/0) [Functions](#functions) === DeploymentEventLogController === This is a goofy config, but it makes this work in dev w/o having to POST an Example CRD. This controller simply logs lifecycle events on Deployments. [Summary](#summary) === [Functions](#functions) --- [call(token, opts)](#call/2) Callback implementation for [`Pluggable.call/2`](https://hexdocs.pm/pluggable/1.0.1/Pluggable.html#c:call/2). [handle_event(axn, opts)](#handle_event/2) [init(opts)](#init/1) Callback implementation for [`Pluggable.init/1`](https://hexdocs.pm/pluggable/1.0.1/Pluggable.html#c:init/1). [rbac_rules()](#rbac_rules/0) Callback implementation for [`Bonny.ControllerV2.rbac_rules/0`](Bonny.ControllerV2.html#c:rbac_rules/0). [track_event(type, resource)](#track_event/2) [Functions](#functions) === Mix.Bonny === Mix task helpers [Summary](#summary) === [Functions](#functions) --- [add_or_create_with(mode, target, content_to_add, new_file_content, check)](#add_or_create_with/5) [app_dir_name()](#app_dir_name/0) [app_name()](#app_name/0) Get the OTP app name [append_or_create_with(target, content_to_append, new_file_content, check)](#append_or_create_with/4) Appends `content_to_append` to `target`. If `target` does not exist, a new file with `new_file_content` is created. [copy(source, target)](#copy/2) [ensure_module_name(string)](#ensure_module_name/1) Capitalizes the string if it does not begin with a capital letter. 
[error(message)](#error/1) [hyphenated_app_name()](#hyphenated_app_name/0) Get the OTP app name with dashes [no_umbrella!()](#no_umbrella!/0) [parse_args(args, defaults, cli_opts \\ [])](#parse_args/3) Parse CLI input [prepend_or_create_with(target, content_to_prepend, new_file_content, check)](#prepend_or_create_with/4) Prepends `content_to_prepend` to `target`. If `target` does not exist, a new file with `new_file_content` is created. [render(source, target)](#render/2) Render text to a file. [render_template(source, target, bindings)](#render_template/3) [template(name)](#template/1) [Functions](#functions) === TestScheduler === [Summary](#summary) === [Functions](#functions) --- [child_spec(args \\ [])](#child_spec/1) [conn()](#conn/0) Callback implementation for [`Bonny.Server.Scheduler.conn/0`](Bonny.Server.Scheduler.html#c:conn/0). [field_selector()](#field_selector/0) Kubernetes HTTP API `fieldSelector`. [name()](#name/0) Scheduler name [nodes(conn)](#nodes/1) List of nodes available to this scheduler. [select_node_for_pod(pod, nodes)](#select_node_for_pod/2) Callback implementation for [`Bonny.Server.Scheduler.select_node_for_pod/2`](Bonny.Server.Scheduler.html#c:select_node_for_pod/2). [Functions](#functions) === mix bonny.gen.controller === Generates a new CRD controller An operator can have multiple controllers. Each controller handles the lifecycle of a custom resource. ``` mix bonny.gen.controller ``` Open up your controller and add functionality for your resource's lifecycle: * Add * Modify * Delete * Reconcile If you selected to add a CRD, also edit the generated CRD version module. [Summary](#summary) === [Functions](#functions) --- [get_input(input)](#get_input/1) [run(args)](#run/1) Callback implementation for [`Mix.Task.run/1`](https://hexdocs.pm/mix/Mix.Task.html#c:run/1). 
[Functions](#functions) === mix bonny.gen.dockerfile === Generates a Dockerfile for this operator [Summary](#summary) === [Functions](#functions) --- [run(args)](#run/1) Callback implementation for [`Mix.Task.run/1`](https://hexdocs.pm/mix/Mix.Task.html#c:run/1). [Functions](#functions) === mix bonny.gen.manifest === Generates the Kubernetes YAML manifest for this operator. `mix bonny.gen.manifest` expects a docker image name if deploying to a cluster. You may optionally provide a namespace. [Examples](#module-examples) --- The `--image` switch is required when deploying to a cluster. Options: * --image (docker image to deploy) * --namespace (of service account and deployment; defaults to "default") * --out (path to save manifest; defaults to "manifest.yaml") *Deploying to kubernetes:* ``` docker build -t $(YOUR_IMAGE_URL) . docker push $(YOUR_IMAGE_URL) mix bonny.gen.manifest --image $(YOUR_IMAGE_URL):latest --namespace default kubectl apply -f manifest.yaml -n default ``` To skip the `deployment` for running an operator outside of the cluster (like in development) simply omit the `--image` flag: ``` mix bonny.gen.manifest ``` [Summary](#summary) === [Functions](#functions) --- [run(args)](#run/1) Callback implementation for [`Mix.Task.run/1`](https://hexdocs.pm/mix/Mix.Task.html#c:run/1). [Functions](#functions) === mix bonny.init === Initializes an operator with Bonny. * Initializes application configuration * Generates helper files for tests [Summary](#summary) === [Functions](#functions) --- [get_input(input \\ [])](#get_input/1) [run(args)](#run/1) Callback implementation for [`Mix.Task.run/1`](https://hexdocs.pm/mix/Mix.Task.html#c:run/1). [Functions](#functions) ===
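Taken together, the pluggable steps documented above are composed inside a controller module via the `step` macro. The following is a minimal, hedged sketch of such a composition — the module name, the finalizer id, the `:handle_event` step name, and the `cleanup/1` implementation are all hypothetical and only illustrate how the pieces from the modules above fit together:

```elixir
defmodule MyOperator.Controller.WidgetController do
  # Hypothetical controller wiring together the pluggable steps above.
  use Bonny.ControllerV2

  # Log each action event, skip already-observed generations, and
  # declare a finalizer (added to the resource automatically).
  step Bonny.Pluggable.Logger, level: :debug
  step Bonny.Pluggable.SkipObservedGenerations

  step Bonny.Pluggable.Finalizer,
    id: "example.com/cleanup",
    impl: &__MODULE__.cleanup/1,
    add_to_resource: true

  step :handle_event

  # Handle the lifecycle actions; register a success event on apply.
  def handle_event(%Bonny.Axn{action: action} = axn, _opts)
      when action in [:add, :modify, :reconcile] do
    Bonny.Axn.success_event(axn)
  end

  def handle_event(axn, _opts), do: axn

  # Finalizer implementation: receives and returns the %Bonny.Axn{} token.
  def cleanup(axn), do: axn
end
```

The ordering matters: `SkipObservedGenerations` halts the pipeline early for unchanged generations, so `Finalizer` is placed right after it, as recommended in the `Bonny.Pluggable.Finalizer` docs above.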
github.com/polycube-network/polycube ===
README [¶](#section-readme) --- ![Polycube](https://github.com/polycube-network/polycube/raw/v0.11.0/Documentation/images/polycube-logo.png) ### Polycube [![Build Status](http://130.192.225.104:8080/buildStatus/icon?job=polycube_netgroup/master)](http://130.192.225.104:8080/job/polycube_netgroup/) [![Tests Status](http://130.192.225.104:9000/tests/polycube-test/master)](http://130.192.225.104:8080/job/polycube-test/job/master/) [![License](https://img.shields.io/badge/License-Apache%202.0-blue.svg)](http://www.apache.org/licenses/LICENSE-2.0) `Polycube` is an **open source** software framework that provides **fast** and **lightweight** **network functions** such as bridges, routers, firewalls, and others. Polycube services, called `cubes`, can be composed to build arbitrary **service chains** and provide custom network connectivity to **namespaces**, **containers**, **virtual machines**, and **physical hosts**. For more information, jump to the project [Documentation](https://polycube-network.readthedocs.io/en/latest/). #### Quick links * [Introduction to Polycube](https://polycube-network.readthedocs.io/en/latest/intro.html) * [Quickstart](https://polycube-network.readthedocs.io/en/latest/quickstart.html) * [Documentation](https://polycube-network.readthedocs.io/en/latest/) * [pcn-k8s - The CNI network plugin for Kubernetes](https://polycube-network.readthedocs.io/en/latest/components/k8s/pcn-kubernetes.html) * [pcn-iptables - A clone of Iptables based on eBPF](https://polycube-network.readthedocs.io/en/latest/components/iptables/pcn-iptables.html) #### Licence Polycube is licensed under the Apache License, Version 2.0 (ALv2).
github.com/kubernetes-incubator/node-feature-discovery ===
README [¶](#section-readme) --- ### Node feature discovery for [Kubernetes](https://kubernetes.io) [![Build Status](https://api.travis-ci.org/kubernetes-sigs/node-feature-discovery.svg?branch=master)](https://travis-ci.org/kubernetes-sigs/node-feature-discovery) [![Go Report Card](https://goreportcard.com/badge/github.com/kubernetes-sigs/node-feature-discovery)](https://goreportcard.com/report/github.com/kubernetes-sigs/node-feature-discovery) * [Overview](#readme-overview) * [Command line interface](#readme-command-line-interface) * [Feature discovery](#readme-feature-discovery) + [Feature sources](#readme-feature-sources) + [Feature labels](#readme-feature-labels) * [Getting started](#readme-getting-started) + [System requirements](#readme-system-requirements) + [Usage](#readme-usage) * [Building from source](#readme-building-from-source) * [Targeting nodes with specific features](#readme-targeting-nodes-with-specific-features) * [References](#readme-references) * [License](#readme-license) * [Demo](#readme-demo) #### Overview This software enables node feature discovery for Kubernetes. It detects hardware features available on each node in a Kubernetes cluster, and advertises those features using node labels. NFD consists of two software components: 1. **nfd-master** is responsible for labeling Kubernetes node objects 2. **nfd-worker** detects features and communicates them to nfd-master. One instance of nfd-worker is supposed to be run on each node of the cluster #### Command line interface You can run NFD in stand-alone Docker containers e.g. for testing purposes. This is useful for checking feature detection. ##### NFD-Master When running as a standalone container, labeling is expected to fail because the Kubernetes API is not available. Thus, it is recommended to use the `--no-publish` command line flag. E.g. 
``` $ docker run --rm --name=nfd-test <NFD_CONTAINER_IMAGE> nfd-master --no-publish 2019/02/01 14:48:21 Node Feature Discovery Master <NFD_VERSION> 2019/02/01 14:48:21 gRPC server serving on port: 8080 ``` Command line flags of nfd-master: ``` $ docker run --rm <NFD_CONTAINER_IMAGE> nfd-master --help ... nfd-master. Usage: nfd-master [--no-publish] [--label-whitelist=<pattern>] [--port=<port>] [--ca-file=<path>] [--cert-file=<path>] [--key-file=<path>] [--verify-node-name] [--extra-label-ns=<list>] nfd-master -h | --help nfd-master --version Options: -h --help Show this screen. --version Output version and exit. --port=<port> Port on which to listen for connections. [Default: 8080] --ca-file=<path> Root certificate for verifying connections [Default: ] --cert-file=<path> Certificate used for authenticating connections [Default: ] --key-file=<path> Private key matching --cert-file [Default: ] --verify-node-name Verify worker node name against CN from the TLS certificate. Only has effect when TLS authentication has been enabled. --no-publish Do not publish feature labels --label-whitelist=<pattern> Regular expression to filter label names to publish to the Kubernetes API server. [Default: ] --extra-label-ns=<list> Comma separated list of allowed extra label namespaces [Default: ] ``` ##### NFD-Worker In order to run nfd-worker as a "stand-alone" container against your standalone nfd-master you need to run them in the same network namespace: ``` $ docker run --rm --network=container:nfd-test <NFD_CONTAINER_IMAGE> nfd-worker 2019/02/01 14:48:56 Node Feature Discovery Worker <NFD_VERSION> ... ``` If you just want to try out feature discovery without connecting to nfd-master, pass the `--no-publish` flag to nfd-worker. Command line flags of nfd-worker: ``` $ docker run --rm <CONTAINER_IMAGE_ID> nfd-worker --help ... nfd-worker. 
Usage: nfd-worker [--no-publish] [--sources=<sources>] [--label-whitelist=<pattern>] [--oneshot | --sleep-interval=<seconds>] [--config=<path>] [--options=<config>] [--server=<server>] [--server-name-override=<name>] [--ca-file=<path>] [--cert-file=<path>] [--key-file=<path>] nfd-worker -h | --help nfd-worker --version Options: -h --help Show this screen. --version Output version and exit. --config=<path> Config file to use. [Default: /etc/kubernetes/node-feature-discovery/nfd-worker.conf] --options=<config> Specify config options from command line. Config options are specified in the same format as in the config file (i.e. json or yaml). These options will override settings read from the config file. [Default: ] --ca-file=<path> Root certificate for verifying connections [Default: ] --cert-file=<path> Certificate used for authenticating connections [Default: ] --key-file=<path> Private key matching --cert-file [Default: ] --server=<server> NFD server address to connecto to. [Default: localhost:8080] --server-name-override=<name> Name (CN) expect from server certificate, useful in testing [Default: ] --sources=<sources> Comma separated list of feature sources. [Default: cpu,iommu,kernel,local,memory,network,pci,storage,system] --no-publish Do not publish discovered features to the cluster-local Kubernetes API server. --label-whitelist=<pattern> Regular expression to filter label names to publish to the Kubernetes API server. [Default: ] --oneshot Label once and exit. --sleep-interval=<seconds> Time to sleep between re-labeling. Non-positive value implies no re-labeling (i.e. infinite sleep). [Default: 60s] ``` **NOTE** Some feature sources need certain directories and/or files from the host mounted inside the NFD container. Thus, you need to provide Docker with the correct `--volume` options in order for them to work correctly when run stand-alone directly with `docker run`. 
See the [template spec](https://github.com/kubernetes-sigs/node-feature-discovery/raw/master/nfd-worker-daemonset.yaml.template) for up-to-date information about the required volume mounts. #### Feature discovery ##### Feature sources The current set of feature sources are the following: * CPU * IOMMU * Kernel * Memory * Network * PCI * Storage * System * Local (hooks for user-specific features) ##### Feature labels The published node labels encode a few pieces of information: * Namespace, i.e. `feature.node.kubernetes.io` * The source for each label (e.g. `cpu`). * The name of the discovered feature as it appears in the underlying source, (e.g. `cpuid.AESNI` from cpu). * The value of the discovered feature. Feature label names adhere to the following pattern: ``` <namespace>/<source name>-<feature name>[.<attribute name>] ``` The last component (i.e. `attribute-name`) is optional, and only used if a feature logically has sub-hierarchy, e.g. `sriov.capable` and `sriov.configure` from the `network` source. ``` { "feature.node.kubernetes.io/cpu-<feature-name>": "true", "feature.node.kubernetes.io/iommu-<feature-name>": "true", "feature.node.kubernetes.io/kernel-<feature name>": "<feature value>", "feature.node.kubernetes.io/memory-<feature-name>": "true", "feature.node.kubernetes.io/network-<feature-name>": "true", "feature.node.kubernetes.io/pci-<device label>.present": "true", "feature.node.kubernetes.io/storage-<feature-name>": "true", "feature.node.kubernetes.io/system-<feature name>": "<feature value>", "feature.node.kubernetes.io/<file name>-<feature name>": "<feature value>" } ``` The `--sources` flag controls which sources to use for discovery. *Note: Consecutive runs of nfd-worker will update the labels on a given node. If features are not discovered on a consecutive run, the corresponding label will be removed. 
This includes any restrictions placed on the consecutive run, such as restricting discovered features with the --label-whitelist option.* ##### CPU Features | Feature name | Attribute | Description | | --- | --- | --- | | cpuid | <cpuid flag> | CPU capability is supported | | hardware_multithreading | | Hardware multithreading, such as Intel HTT, enabled (number of logical CPUs is greater than physical CPUs) | | power | sst_bf.enabled | Intel SST-BF ([Intel Speed Select Technology](https://www.intel.com/content/www/us/en/architecture-and-technology/speed-select-technology-article.html) - Base frequency) enabled | | [pstate](https://www.kernel.org/doc/Documentation/cpu-freq/intel-pstate.txt) | turbo | Set to 'true' if turbo frequencies are enabled in Intel pstate driver, set to 'false' if they have been disabled. | | [rdt](http://www.intel.com/content/www/us/en/architecture-and-technology/resource-director-technology.html) | RDTMON | Intel RDT Monitoring Technology | | | RDTCMT | Intel Cache Monitoring (CMT) | | | RDTMBM | Intel Memory Bandwidth Monitoring (MBM) | | | RDTL3CA | Intel L3 Cache Allocation Technology | | | RDTL2CA | Intel L2 Cache Allocation Technology | | | RDTMBA | Intel Memory Bandwidth Allocation (MBA) Technology | The (sub-)set of CPUID attributes to publish is configurable via the `attributeBlacklist` and `attributeWhitelist` cpuid options of the cpu source. If whitelist is specified, only whitelisted attributes will be published. With blacklist, only blacklisted attributes are filtered out. `attributeWhitelist` has priority over `attributeBlacklist`. For examples and more information about configurability, see [Configuration Options](#readme-configuration-options). By default, the following CPUID flags have been blacklisted: BMI1, BMI2, CLMUL, CMOV, CX16, ERMS, F16C, HTT, LZCNT, MMX, MMXEXT, NX, POPCNT, RDRAND, RDSEED, RDTSCP, SGX, SSE, SSE2, SSE3, SSE4.1, SSE4.2 and SSSE3. 
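For instance, whitelisting a handful of CPUID attributes might look like the following in `nfd-worker.conf`. This is a sketch only: the `attributeWhitelist`/`attributeBlacklist` option names come from the text above, but the exact `sources.cpu.cpuid` nesting is an assumption based on the example configuration shipped with NFD, so verify it against your version:

```yaml
sources:
  cpu:
    cpuid:
      # attributeWhitelist has priority over attributeBlacklist
      attributeWhitelist:
        - "AVX"
        - "AVX2"
        - "AESNI"
```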
**NOTE** The cpuid features advertise *supported* CPU capabilities, that is, a capability might be supported but not enabled. ###### X86 CPUID Attributes (Partial List) | Attribute | Description | | --- | --- | | ADX | Multi-Precision Add-Carry Instruction Extensions (ADX) | | AESNI | Advanced Encryption Standard (AES) New Instructions (AES-NI) | | AVX | Advanced Vector Extensions (AVX) | | AVX2 | Advanced Vector Extensions 2 (AVX2) | ###### Arm64 CPUID Attribute (Partial List) | Attribute | Description | | --- | --- | | AES | Announcing the Advanced Encryption Standard | | EVSTRM | Event Stream Frequency Features | | FPHP | Half Precision(16bit) Floating Point Data Processing Instructions | | ASIMDHP | Half Precision(16bit) Asimd Data Processing Instructions | | ATOMICS | Atomic Instructions to the A64 | | ASIMRDM | Support for Rounding Double Multiply Add/Subtract | | PMULL | Optional Cryptographic and CRC32 Instructions | | JSCVT | Perform Conversion to Match Javascript | | DCPOP | Persistent Memory Support | ##### IOMMU Features | Feature name | Description | | --- | --- | | enabled | IOMMU is present and enabled in the kernel | ##### Kernel Features | Feature | Attribute | Description | | --- | --- | --- | | config | <option name> | Kernel config option is enabled (set 'y' or 'm'). Default options are `NO_HZ`, `NO_HZ_IDLE`, `NO_HZ_FULL` and `PREEMPT` | | selinux | enabled | Selinux is enabled on the node | | version | full | Full kernel version as reported by `/proc/sys/kernel/osrelease` (e.g. '4.5.6-7-g123abcde') | | | major | First component of the kernel version (e.g. '4') | | | minor | Second component of the kernel version (e.g. '5') | | | revision | Third component of the kernel version (e.g. '6') | Kernel config file to use, and, the set of config options to be detected are configurable. See [configuration options](#readme-configuration-options) for more information. 
##### Memory Features | Feature | Attribute | Description | | --- | --- | --- | | numa | | Multiple memory nodes i.e. NUMA architecture detected | | nv | present | NVDIMM device(s) are present | | nv | dax | NVDIMM region(s) configured in DAX mode are present | ##### Network Features | Feature | Attribute | Description | | --- | --- | --- | | sriov | capable | [Single Root Input/Output Virtualization](http://www.intel.com/content/www/us/en/pci-express/pci-sig-sr-iov-primer-sr-iov-technology-paper.html) (SR-IOV) enabled Network Interface Card(s) present | | | configured | SR-IOV virtual functions have been configured | ##### PCI Features | Feature | Attribute | Description | | --- | --- | --- | | <device label> | present | PCI device is detected | `<device label>` is composed of raw PCI IDs, separated by underscores. The set of fields used in `<device label>` is configurable, valid fields being `class`, `vendor`, `device`, `subsystem_vendor` and `subsystem_device`. Defaults are `class` and `vendor`. An example label using the default label fields: ``` feature.node.kubernetes.io/pci-1200_8086.present=true ``` Also the set of PCI device classes that the feature source detects is configurable. By default, device classes (0x)03, (0x)0b40 and (0x)12, i.e. GPUs, co-processors and accelerator cards are detected. See [configuration options](#readme-configuration-options) for more information on NFD config. ##### Storage Features | Feature name | Description | | --- | --- | | nonrotationaldisk | Non-rotational disk, like SSD, is present in the node | ##### System Features | Feature | Attribute | Description | | --- | --- | --- | | os_release | ID | Operating system identifier | | | VERSION_ID | Operating system version identifier (e.g. '6.7') | | | VERSION_ID.major | First component of the OS version id (e.g. '6') | | | VERSION_ID.minor | Second component of the OS version id (e.g. 
'7') | ##### Feature Detector Hooks (User-specific Features) NFD has a special feature source named *local* which is designed for getting labels from user-specific feature detectors. It provides a mechanism for users to implement custom feature sources in a pluggable way, without modifying nfd source code or Docker images. The local feature source can be used to advertise new user-specific features and for overriding labels created by the other feature sources. The *local* feature source gets its labels in two different ways: * It tries to execute files found under the `/etc/kubernetes/node-feature-discovery/source.d/` directory. The hook files must be executable. When executed, the hooks are supposed to print all discovered features to `stdout`, one per line. * It reads files found under the `/etc/kubernetes/node-feature-discovery/features.d/` directory. The file content is expected to be similar to the hook output (described above). These directories must be available inside the Docker image, so Volumes and VolumeMounts must be used if standard NFD images are used. The given template files mount by default the `source.d` and the `features.d` directories respectively from `/etc/kubernetes/node-feature-discovery/source.d/` and `/etc/kubernetes/node-feature-discovery/features.d/` from the host. You should update them to match your needs. In both cases, the labels can be binary or non-binary, using either `<name>` or `<name>=<value>` format. Unlike the other feature sources, the name of the file, instead of the name of the feature source (that would be `local` in this case), is normally used as a prefix in the label name. However, if the `<name>` of the label starts with a slash (`/`) it is used as the label name as is, without any additional prefix. This makes it possible for the user to fully control the feature label names, e.g. for overriding labels created by other feature sources. 
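The prefixing rules above can be illustrated with a small shell sketch. This is illustrative only, not NFD's actual implementation; the `line_to_label` helper name is made up, and the sketch omits the custom-namespace form:

```shell
# Sketch: map one line of hook output to a node label, following the rules
# described above (default namespace, file-name prefix, leading-slash escape).
line_to_label() {
  hook_name=$1; line=$2
  case $line in
    *=*) name=${line%%=*}; value=${line#*=} ;;  # <name>=<value> form
    *)   name=$line;       value=true ;;        # bare names become binary labels
  esac
  case $name in
    /*) echo "feature.node.kubernetes.io/${name#/}=$value" ;;  # leading slash: no file-name prefix
    *)  echo "feature.node.kubernetes.io/${hook_name}-${name}=$value" ;;
  esac
}

line_to_label my-source "MY_FEATURE_1"
# -> feature.node.kubernetes.io/my-source-MY_FEATURE_1=true
line_to_label my-source "MY_FEATURE_2=myvalue"
# -> feature.node.kubernetes.io/my-source-MY_FEATURE_2=myvalue
line_to_label my-source "/override_source-OVERRIDE_BOOL"
# -> feature.node.kubernetes.io/override_source-OVERRIDE_BOOL=true
```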
You can also override the default namespace of your labels using this format: `<namespace>/<name>[=<value>]`. You must whitelist your namespace using the `--extra-label-ns` option on the master. In this case, the name of the file will not be added to the label name. For example, if you want to add the label `my.namespace.org/my-label=value`, your hook output or file must contain `my.namespace.org/my-label=value` and you must add `--extra-label-ns=my.namespace.org` on the master command line. `stderr` output of the hooks is propagated to the NFD log so it can be used for debugging and logging. **A hook example:** User has a shell script `/etc/kubernetes/node-feature-discovery/source.d/my-source` which has the following `stdout` output: ``` MY_FEATURE_1 MY_FEATURE_2=myvalue /override_source-OVERRIDE_BOOL /override_source-OVERRIDE_VALUE=123 override.namespace/value=456 ``` which, in turn, will translate into the following node labels: ``` feature.node.kubernetes.io/my-source-MY_FEATURE_1=true feature.node.kubernetes.io/my-source-MY_FEATURE_2=myvalue feature.node.kubernetes.io/override_source-OVERRIDE_BOOL=true feature.node.kubernetes.io/override_source-OVERRIDE_VALUE=123 override.namespace/value=456 ``` **A file example:** User has a file `/etc/kubernetes/node-feature-discovery/features.d/my-source` which contains the following lines: ``` MY_FEATURE_1 MY_FEATURE_2=myvalue /override_source-OVERRIDE_BOOL /override_source-OVERRIDE_VALUE=123 override.namespace/value=456 ``` which, in turn, will translate into the following node labels: ``` feature.node.kubernetes.io/my-source-MY_FEATURE_1=true feature.node.kubernetes.io/my-source-MY_FEATURE_2=myvalue feature.node.kubernetes.io/override_source-OVERRIDE_BOOL=true feature.node.kubernetes.io/override_source-OVERRIDE_VALUE=123 override.namespace/value=456 ``` NFD tries to run any regular files found in the hooks directory. Any additional data files your hook might need (e.g. 
a configuration file) should be placed in a separate directory in order to avoid NFD unnecessarily trying to execute them. You can use a subdirectory under the hooks directory, for example `/etc/kubernetes/node-feature-discovery/source.d/conf/`.

**NOTE!** NFD will blindly run any executables placed/mounted in the hooks directory. It is the user's responsibility to review the hooks for e.g. possible security implications.

#### Getting started

For a stable version with ready-built images see the [latest released version](https://github.com/kubernetes-sigs/node-feature-discovery/tree/v0.5.0) ([release notes](https://github.com/kubernetes-sigs/node-feature-discovery/releases/latest)). If you want to use the latest development version (master branch) you need to [build your own custom image](#readme-building-from-source).

##### System requirements

1. Linux (x86_64/Arm64)
2. [kubectl](https://kubernetes.io/docs/tasks/tools/install-kubectl) (properly set up and configured to work with your Kubernetes cluster)
3. [Docker](https://docs.docker.com/install) (only required to build and push docker images)

##### Usage

###### nfd-master

Nfd-master runs as a DaemonSet, by default in the master node(s) only. You can use the template spec provided to deploy nfd-master, or use `nfd-master.yaml` generated by the `Makefile`. The latter includes `image:` and `namespace:` definitions that match the latest built image. Example:

```
make IMAGE_TAG=<IMAGE_TAG>
docker push <IMAGE_TAG>
kubectl create -f nfd-master.yaml
```

Nfd-master listens for connections from nfd-worker(s) and connects to the Kubernetes API server to add node labels advertised by them. If you have RBAC authorization enabled (as is the default e.g. with clusters initialized with kubeadm) you need to configure the appropriate ClusterRoles, ClusterRoleBindings and a ServiceAccount in order for NFD to create node labels. The provided template will configure these for you.
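The RBAC objects mentioned above might look roughly like the following. This is a sketch, not the exact template shipped with NFD; the object names are illustrative, and the key point is that nfd-master needs permission to modify Node objects:

```yaml
# Sketch of the RBAC objects the NFD template configures (names illustrative)
apiVersion: v1
kind: ServiceAccount
metadata:
  name: nfd-master
  namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: nfd-master
rules:
  - apiGroups: [""]
    resources: ["nodes"]
    verbs: ["get", "patch", "update"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: nfd-master
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: nfd-master
subjects:
  - kind: ServiceAccount
    name: nfd-master
    namespace: kube-system
```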
###### nfd-worker

Nfd-worker is preferably run as a Kubernetes DaemonSet. There is an example spec (`nfd-worker-daemonset.yaml.template`) that can be used as a template, or as-is when just trying out the service. Similarly to nfd-master above, the `Makefile` also generates `nfd-worker-daemonset.yaml` from the template that you can use to deploy the latest image. Example:

```
make IMAGE_TAG=<IMAGE_TAG>
docker push <IMAGE_TAG>
kubectl create -f nfd-worker-daemonset.yaml
```

Nfd-worker connects to the nfd-master service to advertise hardware features. When run as a daemonset, nodes are re-labeled at an interval specified using the `--sleep-interval` option. In the [template](https://github.com/kubernetes-sigs/node-feature-discovery/raw/master/nfd-worker-daemonset.yaml.template#L26) the default interval is set to 60s, which is also the default when no `--sleep-interval` is specified.

Feature discovery can alternatively be configured as a one-shot job. There is an example script in this repo that demonstrates how to deploy the job in the cluster.

```
./label-nodes.sh [<IMAGE_TAG>]
```

The label-nodes.sh script tries to launch as many jobs as there are Ready nodes. Note that this approach does not guarantee running once on every node. For example, if some node is tainted NoSchedule or fails to start a job for some other reason, then some other node will run extra job instance(s) to satisfy the request and the tainted/failed node does not get labeled.
###### nfd-master and nfd-worker in the same Pod

You can also run nfd-master and nfd-worker inside a single pod (skip the `sed` part if running the latest released version):

```
sed -E s',^(\s*)image:.+$,\1image: <YOUR_IMAGE_REPO>:<YOUR_IMAGE_TAG>,' nfd-daemonset-combined.yaml.template > nfd-daemonset-combined.yaml
kubectl apply -f nfd-daemonset-combined.yaml
```

Similar to the nfd-worker setup above, this creates a DaemonSet that schedules an NFD Pod on all worker nodes, with the difference that the Pod also contains an nfd-master instance. In this case no nfd-master service is run on the master node(s), but the worker nodes are able to label themselves. This may be desirable e.g. in single-node setups.

###### TLS authentication

NFD supports mutual TLS authentication between the nfd-master and nfd-worker instances. That is, nfd-worker and nfd-master both verify that the other end presents a valid certificate. TLS authentication is enabled by specifying the `--ca-file`, `--key-file` and `--cert-file` args on both the nfd-master and nfd-worker instances. The template specs provided with NFD contain (commented out) example configuration for enabling TLS authentication.

The Common Name (CN) of the nfd-master certificate must match the DNS name of the nfd-master Service of the cluster. By default, nfd-master only checks that the nfd-worker certificate has been signed by the specified root certificate (`--ca-file`). Additional hardening can be enabled by specifying `--verify-node-name` in the nfd-master args, in which case nfd-master verifies that the NodeName presented by nfd-worker matches the Common Name (CN) of its certificate. This means that each nfd-worker requires an individual node-specific TLS certificate.

###### Usage demo

[![asciicast](https://asciinema.org/a/247316.svg)](https://asciinema.org/a/247316)

##### Configuration options

Nfd-worker supports a configuration file.
The default location is `/etc/kubernetes/node-feature-discovery/nfd-worker.conf`, but this can be changed by specifying the `--config` command line flag. The file is read inside the container, and thus Volumes and VolumeMounts are needed to make your configuration available for NFD. The preferred method is to use a ConfigMap. For example, create a config map using the example config as a template:

```
cp nfd-worker.conf.example nfd-worker.conf
vim nfd-worker.conf  # edit the configuration
kubectl create configmap nfd-worker-config --from-file=nfd-worker.conf
```

Then, configure Volumes and VolumeMounts in the Pod spec (just the relevant snippets shown below):

```
...
  containers:
      volumeMounts:
        - name: nfd-worker-config
          mountPath: "/etc/kubernetes/node-feature-discovery/"
...
  volumes:
    - name: nfd-worker-config
      configMap:
        name: nfd-worker-config
...
```

You could also use other types of volumes, of course. For instance, a hostPath volume could be used if a different config is required for different nodes. The (empty-by-default) [example config](https://github.com/kubernetes-sigs/node-feature-discovery/raw/master/nfd-worker.conf.example) is used as a config in the NFD Docker image. Thus, this can be used as a default configuration in custom-built images.

Configuration options can also be specified via the `--options` command line flag, in which case no mounts need to be used. The same format as in the config file must be used, i.e. JSON (or YAML). For example:

```
--options='{"sources": { "pci": { "deviceClassWhitelist": ["12"] } } }'
```

Configuration options specified from the command line will override those read from the config file. Currently, the only available configuration options are related to the [CPU](#readme-cpu-features), [PCI](#readme-pci-features) and [Kernel](#readme-kernel-features) feature sources.
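For reference, the same `deviceClassWhitelist` setting shown in the `--options` example above would look like this as file content inside the ConfigMap. This is a sketch only; it uses no option names beyond those already mentioned in this document:

```yaml
# nfd-worker.conf -- sketch mirroring the --options example above
sources:
  pci:
    deviceClassWhitelist:
      - "12"
```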
#### Building from source

**Download the source code:**

```
git clone https://github.com/kubernetes-sigs/node-feature-discovery
```

**Build the container image:** See [customizing the build](#readme-customizing-the-build) below for altering the container image registry, for example.

```
cd <project-root>
make
```

**Push the container image:** Optional, this example with Docker.

```
docker push <IMAGE_TAG>
```

**Change the job spec to use your custom image (optional):** To use the image published in the step above instead of the `quay.io/kubernetes_incubator/node-feature-discovery` image, edit the `image` attribute in the spec template(s) to the new location (`<quay-domain-name>/<registry-user>/<image-name>[:<version>]`).

##### Customizing the Build

There are several Makefile variables that control the build process and the name of the resulting container image.

| Variable | Description | Default value |
| --- | --- | --- |
| IMAGE_BUILD_CMD | Command to build the image | docker build |
| IMAGE_BUILD_EXTRA_OPTS | Extra options to pass to build command | *empty* |
| IMAGE_PUSH_CMD | Command to push the image to remote registry | docker push |
| IMAGE_REGISTRY | Container image registry to use | quay.io/kubernetes_incubator |
| IMAGE_NAME | Container image name | node-feature-discovery |
| IMAGE_TAG_NAME | Container image tag name | <nfd version> |
| IMAGE_REPO | Container image repository to use | <IMAGE_REGISTRY>/<IMAGE_NAME> |
| IMAGE_TAG | Full image:tag to tag the image with | <IMAGE_REPO>:<IMAGE_TAG_NAME> |
| K8S_NAMESPACE | nfd-master and nfd-worker namespace | kube-system |
| KUBECONFIG | Kubeconfig for running e2e-tests | *empty* |

For example, to use a custom registry:

```
make IMAGE_REGISTRY=<my custom registry uri>
```

Or to specify a build tool different from Docker:

```
make IMAGE_BUILD_CMD="buildah bud"
```

##### Testing

Unit tests are automatically run as part of the container image build.
You can also run them manually in the source code tree by simply running:

```
make test
```

End-to-end tests are built on top of the e2e test framework of Kubernetes, and they require a cluster to run them on. For running the tests on your test cluster you need to specify the kubeconfig to be used:

```
make e2e-test KUBECONFIG=$HOME/.kube/config
```

#### Targeting Nodes with Specific Features

Nodes with specific features can be targeted using the `nodeSelector` field. The following example shows how to target nodes with Intel TurboBoost enabled.

```
apiVersion: v1
kind: Pod
metadata:
  labels:
    env: test
  name: golang-test
spec:
  containers:
    - image: golang
      name: go1
  nodeSelector:
    feature.node.kubernetes.io/cpu-pstate.turbo: 'true'
```

For more details on targeting nodes, see [node selection](http://kubernetes.io/docs/user-guide/node-selection).

#### References

Github issues

* [#28310](https://github.com/kubernetes/kubernetes/issues/28310)
* [#28311](https://github.com/kubernetes/kubernetes/issues/28311)
* [#28312](https://github.com/kubernetes/kubernetes/issues/28312)

[Design proposal](https://docs.google.com/document/d/1uulT2AjqXjc_pLtDu0Kw9WyvvXm-WAZZaSiUziKsr68/edit)

#### Governance

This is a [SIG-node](https://github.com/kubernetes/community/blob/master/sig-node/README.md) subproject, hosted under the [Kubernetes SIGs](https://github.com/kubernetes-sigs) organization in Github. The project was established in 2016 as a [Kubernetes Incubator](https://github.com/kubernetes/community/blob/master/incubator.md) project and migrated to Kubernetes SIGs in 2018.

#### License

This is open source software released under the [Apache 2.0 License](https://github.com/kubernetes-incubator/node-feature-discovery/blob/v0.5.0/LICENSE).

#### Demo

A demo on the benefits of using node feature discovery can be found in [demo](https://github.com/kubernetes-incubator/node-feature-discovery/blob/v0.5.0/demo).
Package ‘randnet’ May 20, 2023

Type Package
Title Random Network Model Estimation, Selection and Parameter Tuning
Version 0.7
Date 2023-05-11
Description Model selection and parameter tuning procedures for a class of random network models. The model selection can be done by a general cross-validation framework called ECV from Li et al. (2016) <arXiv:1612.04717>. Several other model-based and task-specific methods are also included, such as NCV from Chen and Lei (2016) <arXiv:1411.1715>, the likelihood ratio method from Wang and Bickel (2015) <arXiv:1502.02069>, and spectral methods from Le and Levina (2015) <arXiv:1507.00827>. Many network analysis methods are also implemented, such as the regularized spectral clustering (Amini et al. 2013 <doi:10.1214/13-AOS1138>) and its degree-corrected version, and graphon neighborhood smoothing (Zhang et al. 2015 <arXiv:1509.08588>). It also includes the consensus clustering of Gao et al. (2014) <arXiv:1410.5837>, the method of moments estimation of the nomination SBM of Li et al. (2020) <arXiv:2008.03652>, and the network mixing method of Li and Le (2021) <arXiv:2106.02803>. It also includes the informative core-periphery data processing method of Miao and Li (2021) <arXiv:2101.06388>. The work to build and improve this package is partially supported by the NSF grants DMS-2015298 and DMS-2015134.
License GPL (>= 2)
Depends Matrix, entropy, AUC, sparseFLMM, mgcv
Imports methods, stats, poweRlaw, RSpectra, irlba, pracma, nnls, data.table
NeedsCompilation no
Author <NAME> [aut, cre], <NAME> [aut], <NAME> [aut], <NAME> [aut]
Maintainer <NAME> <<EMAIL>>
Repository CRAN
Date/Publication 2023-05-20 07:30:02 UTC

R topics documented: randnet-package, BHMC.estimate, BlockModel.Gen, ConsensusClust, DCSBM.estimate, ECV.block, ECV.nSmooth.lowrank, ECV.Rank, InformativeCore, LRBIC, LSM.PGD, NCV.select, network.mixing, network.mixing.Bfold, NM..., NSBM.estimate,
NSBM.Gen, nSmooth, RDPG.Gen, reg.SP, reg.SSP, RightSC, SBM.estimate, smooth.oracle, USVT

randnet-package Statistical modeling of random networks with model estimation, selection and parameter tuning

Description The package provides model fitting and estimation functions for some popular random network models. More importantly, it implements a general cross-validation framework for model selection and parameter tuning called ECV. Several other model selection methods are also included. The work to build and improve this package is partially supported by the NSF grants DMS-2015298 and DMS-2015134.

Details
Package: randnet
Type: Package
Version: 0.7
Date: 2023-05-11
License: GPL (>= 2)

Author(s) <NAME>, <NAME>, <NAME>, <NAME> Maintainer: <NAME> <<EMAIL>>

References
<NAME>, <NAME>, and <NAME>. Network cross-validation by edge sampling. Biometrika, 107(2), pp.257-276, 2020.
<NAME> and <NAME>. Network cross-validation for determining the number of communities in network data. Journal of the American Statistical Association, 113(521):241-251, 2018.
<NAME>, <NAME>, and <NAME>. Spectral clustering and the high-dimensional stochastic blockmodel. The Annals of Statistics, pages 1878-1915, 2011.
<NAME>, <NAME>, <NAME>, and <NAME>. Pseudo-likelihood methods for community detection in large sparse networks. The Annals of Statistics, 41(4):2097-2122, 2013.
Qin, T. & Rohe, K. Regularized spectral clustering under the degree-corrected stochastic blockmodel. Advances in Neural Information Processing Systems, 2013, 3120-3128.
<NAME> and <NAME>. Consistency of spectral clustering in stochastic block models. The Annals of Statistics, 43(1):215-237, 2014.
<NAME>, <NAME>, and <NAME>. Concentration and regularization of random graphs. Random Structures & Algorithms, 2017.
<NAME> and <NAME>. Random dot product graph models for social networks. In International Workshop on Algorithms and Models for the Web-Graph, pages 138-149. Springer, 2007.
<NAME> and <NAME>. Estimating the number of communities in networks by spectral methods. arXiv preprint arXiv:1507.00827, 2015.
Zhang, Y.; Levina, E. & <NAME>. Estimating network edge probabilities by neighbourhood smoothing. Biometrika, Oxford University Press, 2017, 104, 771-783.
<NAME> and <NAME>. Stochastic blockmodels and community structure in networks. Physical Review E, 83(1):016107, 2011.
<NAME>. & <NAME>. Likelihood-based model selection for stochastic block models. The Annals of Statistics, Institute of Mathematical Statistics, 2017, 45, 500-528.
<NAME>.; <NAME>.; <NAME>. & <NAME>. Achieving optimal misclassification proportion in stochastic block models. The Journal of Machine Learning Research, JMLR.org, 2017, 18, 1980-2024.
<NAME>, <NAME>, and <NAME>. Community models for networks observed through edge nominations. arXiv preprint arXiv:2008.03652 (2020).
<NAME> and <NAME>. Network Estimation by Mixing: Adaptivity and More. arXiv preprint arXiv:2106.02803, 2021.
<NAME> and <NAME>. Informative core identification in complex networks. arXiv preprint arXiv:2101.06388, 2021.
<NAME>. and <NAME>., 2022. EM-based smooth graphon estimation using MCMC and spline-based approaches. Social Networks, 68, pp.279-295.

BHMC.estimate Estimates the number of communities under block models by spectral methods

Description Estimates the number of communities under block models by using the spectral properties of the network Bethe-Hessian matrix with moment correction.

Usage
BHMC.estimate(A, K.max = 15)

Arguments
A: adjacency matrix of the network
K.max: the maximum possible number of communities to check

Details Note that the method cannot distinguish SBM and DCSBM, but it works under either model.

Value A list of
K: estimated K
values: eigenvalues of the Bethe-Hessian matrix

Author(s) <NAME>, <NAME>, <NAME> Maintainer: <NAME> <<EMAIL>>

References <NAME> and <NAME>. Estimating the number of communities in networks by spectral methods.
arXiv preprint arXiv:1507.00827, 2015.

See Also LRBIC, ECV.block, NCV.select

Examples
dt <- BlockModel.Gen(30,300,K=3,beta=0.2,rho=0.9,simple=FALSE,power=TRUE)
A <- dt$A
bhmc <- BHMC.estimate(A,15)
bhmc

BlockModel.Gen Generates networks from the degree-corrected stochastic block model

Description Generates networks from the degree-corrected stochastic block model, with various options for the node degree distribution.

Usage
BlockModel.Gen(lambda, n, beta = 0, K = 3, w = rep(1, K), Pi = rep(1, K)/K, rho = 0, simple = TRUE, power = TRUE, alpha = 5, degree.seed = NULL)

Arguments
lambda: average node degree
n: size of network
beta: out-in ratio: the ratio of between-block edges over within-block edges
K: number of communities
w: not effective
Pi: a vector of community proportions
rho: proportion of small degrees within each community if the degrees are from a two-point mass distribution. rho > 0 gives the degree-corrected block model. If rho > 0 and simple=TRUE, the degrees are generated from a two-point mass distribution, with a rho proportion of 0.2 values and a 1-rho proportion of 1 for the degree parameters. If rho=0, generate from SBM.
simple: indicator of whether two-point mass degrees are used, if rho > 0. If rho=0, this is not effective.
power: whether or not to use a power-law distribution for degrees. If FALSE, generate theta from U(0.2,1); if TRUE, generate theta from a power law. Only effective if rho > 0 and simple=FALSE.
alpha: shape parameter for the power-law distribution
degree.seed: can be a vector of prespecified values for theta. The function will then sample with replacement from the vector to generate theta. It can be used to control the noise level between different configuration settings.

Value A list of
A: the generated network adjacency matrix
g: community membership
P: probability matrix of the network
theta: node degree parameter

Author(s) <NAME>, <NAME>, <NAME> Maintainer: <NAME> <<EMAIL>>

References <NAME> and <NAME>. Stochastic blockmodels and community structure in networks.
Physical Review E, 83(1):016107, 2011.
<NAME>, <NAME>, <NAME>, and <NAME>. Pseudo-likelihood methods for community detection in large sparse networks. The Annals of Statistics, 41(4):2097-2122, 2013.
<NAME>, <NAME>, and <NAME>. Network cross-validation by edge sampling. Biometrika, 107(2), pp.257-276, 2020.

Examples
dt <- BlockModel.Gen(30,300,K=3,beta=0.2,rho=0.9,simple=FALSE,power=TRUE)

ConsensusClust Clusters nodes by consensus (majority voting) initialized by regularized spectral clustering

Description Community detection by consensus (majority voting) initialized by regularized spectral clustering.

Usage
ConsensusClust(A,K,tau=0.25,lap=TRUE)

Arguments
A: adjacency matrix
K: number of communities
tau: regularization parameter for regularized spectral clustering. The default value is 0.25. Typically set between 0 and 1. If tau=0, no regularization is applied.
lap: indicator. If TRUE, the Laplacian matrix is used for the initial clustering. If FALSE, the adjacency matrix will be used.

Details Community detection by the majority voting algorithm of Gao et al. (2016). When initialized by regularized spectral clustering, the clustering accuracy of this algorithm is shown to achieve the minimax rate under the SBM. However, it can be slow compared with spectral clustering.

Value cluster labels

Author(s) <NAME>, <NAME>, <NAME> Maintainer: <NAME> <<EMAIL>>

References <NAME>.; <NAME>.; <NAME>. & <NAME>. Achieving optimal misclassification proportion in stochastic block models. The Journal of Machine Learning Research, JMLR.org, 2017, 18, 1980-2024.

See Also reg.SP

Examples
dt <- BlockModel.Gen(15,50,K=2,beta=0.2,rho=0)
A <- dt$A
cc <- ConsensusClust(A,K=2,lap=TRUE)

DCSBM.estimate Estimates the DCSBM model

Description Estimates the DCSBM model given community labels.

Usage
DCSBM.estimate(A, g)

Arguments
A: adjacency matrix
g: vector of community labels for the nodes

Details Estimation is based on maximum likelihood.
Value A list object of
Phat: estimated probability matrix
B: the B matrix with block connection probabilities, up to a scaling constant
Psi: vector of degree parameters theta, up to a scaling constant

Author(s) <NAME>, <NAME>, <NAME> Maintainer: <NAME> <<EMAIL>>

References <NAME> and <NAME>. Stochastic blockmodels and community structure in networks. Physical Review E, 83(1):016107, 2011.

See Also SBM.estimate

Examples
dt <- BlockModel.Gen(30,300,K=3,beta=0.2,rho=0.9,simple=FALSE,power=TRUE)
A <- dt$A
ssc <- reg.SSP(A,K=3,lap=TRUE)
est <- DCSBM.estimate(A,ssc$cluster)

ECV.block Selecting block models by ECV

Description Model selection by ECV for SBM and DCSBM. It can be used to select between the two models, or to take one given model (either SBM or DCSBM) and select K.

Usage
ECV.block(A, max.K, cv = NULL, B = 3, holdout.p = 0.1, tau = 0, dc.est = 2, kappa = NULL)

Arguments
A: adjacency matrix
max.K: largest possible K for the number of communities
cv: cross-validation fold. The default value is NULL. We recommend using the argument B instead, doing independent sampling.
B: number of replications
holdout.p: testing set proportion
tau: constant for numerical stability only. Not useful for the current version.
dc.est: estimation method for DCSBM. By default (dc.est=2), maximum likelihood is used. If dc.est=1, the method used by Chen and Lei (2016) is used, which is less stable according to our observation.
kappa: constant for numerical stability only. Not useful for the current version.

Details The current version is based on a simple matrix completion procedure, as described in the paper. The performance can be improved by a better matrix completion method, which will be implemented in the next version. Moreover, the current implementation is better in computational time but less efficient in memory. Another memory-efficient implementation will be added in the next version.
Value
impute.err: average validation imputation error
l2: average validation L_2 loss under SBM
dev: average validation binomial deviance loss under SBM
auc: average validation AUC
dc.l2: average validation L_2 loss under DCSBM
dc.dev: average validation binomial deviance loss under DCSBM
sse: average validation SSE
l2.model: selected model by L_2 loss
dev.model: selected model by binomial deviance loss
l2.mat, dc.l2.mat, ...: cross-validation loss matrices for the B replications

Author(s) <NAME>, <NAME>, <NAME> Maintainer: <NAME> <<EMAIL>>

References <NAME>, <NAME>, and <NAME>. Network cross-validation by edge sampling. Biometrika, 107(2), pp.257-276, 2020.

See Also NCV.select

Examples
dt <- BlockModel.Gen(30,300,K=3,beta=0.2,rho=0.9,simple=FALSE,power=TRUE)
A <- dt$A
ecv <- ECV.block(A,6,B=3)
ecv$l2.model
ecv$dev.model
which.min(ecv$l2)
which.min(ecv$dev)
which.min(ecv$dc.l2)
which.min(ecv$dc.dev)
which.max(ecv$auc)
which.min(ecv$sse)

ECV.nSmooth.lowrank Selecting the tuning parameter for neighborhood smoothing estimation of the graphon model

Description Selects the tuning parameter for neighborhood smoothing estimation of the graphon model, where the tuning parameter controls estimation smoothness.

Usage
ECV.nSmooth.lowrank(A, h.seq, K, cv = NULL, B = 3, holdout.p = 0.1)

Arguments
A: adjacency matrix
h.seq: a sequence of h values to tune. It is suggested that h should be in the order of sqrt(log(n)/n).
K: the optimal rank for approximation. Can be obtained by rank selection of ECV.
cv: cross-validation fold. Recommended to use the replication number B instead.
B: independent replication number of random splitting
holdout.p: proportion of the test sample

Details The neighborhood smoothing estimation can be slow, so the ECV may take long even for moderately large networks.

Value a list object with
err: average validation error for h.seq
min.index: index of the minimum error

Author(s) <NAME>, <NAME>, <NAME> Maintainer: <NAME> <<EMAIL>>

References <NAME>, <NAME>, and <NAME>.
Network cross-validation by edge sampling. Biometrika, 107(2), pp.257-276, 2020.

Examples
set.seed(500)
N <- 300
U = matrix(1:N,nrow=1) / (N+1)
V = matrix(1:N,nrow=1) / (N+1)
W = (t(U))^2
W = W/3*cos(1/(W + 1e-7)) + 0.15
upper.index <- which(upper.tri(W))
A <- matrix(0,N,N)
rand.ind <- runif(length(upper.index))
edge.index <- upper.index[rand.ind < W[upper.index]]
A[edge.index] <- 1
A <- A + t(A)
diag(A) <- 0
h.seq <- sqrt(log(N)/N)*seq(0.5,5,by=0.5)
ecv.nsmooth <- ECV.nSmooth.lowrank(A,h.seq,K=2,B=3)
h <- h.seq[ecv.nsmooth$min.index]

ECV.Rank Estimates the optimal low-rank model for a network

Description Estimates the optimal low-rank model for a network.

Usage
ECV.Rank(A, max.K, B = 3, holdout.p = 0.1, weighted = TRUE, mode="directed")

Arguments
A: adjacency matrix
max.K: maximum possible rank to check
B: number of replications in ECV
holdout.p: test set proportion
weighted: whether the network is weighted. If TRUE, only sums of squared errors are computed. If FALSE, the network is treated as binary and AUC will be computed along with SSE.
mode: selecting the mode of "directed" or "undirected" for cross-validation

Details AUC is believed to be more accurate in many simulations for binary networks. But the computation of AUC is much slower than SSE, even slower than the matrix completion steps. Note that we do not have to assume the true model is low rank. This function simply finds the best low-rank approximation to the true model.

Value A list of
sse.rank: rank selection by SSE loss
auc.rank: rank selection by AUC loss
auc: AUC sequence for each rank candidate
sse: SSE sequence for each rank candidate

Author(s) <NAME>, <NAME>, <NAME> Maintainer: <NAME> <<EMAIL>>

References <NAME>, <NAME>, and <NAME>. Network cross-validation by edge sampling. Biometrika, 107(2), pp.257-276, 2020.
See Also ECV.block

Examples
dt <- BlockModel.Gen(30,300,K=3,beta=0.2,rho=0.9,simple=FALSE,power=TRUE)
A <- dt$A
ecv.rank <- ECV.Rank(A,6,weighted=FALSE,mode="undirected")
ecv.rank

InformativeCore Identifies the informative core component of a network

Description Identifies the informative core component of a network based on the spectral method of Miao and Li (2021). It can be used as a general data processing function for any network modeling purpose.

Usage
InformativeCore(A,r=3)

Arguments
A: adjacency matrix. It does not have to be unweighted.
r: the rank for low-rank denoising. The rank can be selected by ECV or any other methods available in the package.

Details The function can be used as a general data processing function for any network modeling purpose. It automatically identifies an informative core component with interesting connection structure and a noninformative periphery component with uninteresting structure. Depending on the user's preference, the uninteresting structure can be either Erdos-Renyi-type connections or configuration-type connections, both of which are generally regarded as noninformative structures. Including these additional noninformative structures in network models can potentially lower the modeling efficiency. Therefore, it is preferable to remove them and only focus on the core structure. Details can be found in the reference.

Value A list of
er.score: an n-dimensional vector of informative scores for the ER-type periphery. A larger score indicates the corresponding node is more likely to be in the core.
config.score: an n-dimensional vector of informative scores for the configuration-type periphery. A larger score indicates the corresponding node is more likely to be in the core.
er.theory.core: the indices of the identified core structure in the ER-type model based on a theoretical threshold of the scores (for large sample size).
config.theory.core: the indices of the identified core structure in the configuration-type model based on a theoretical threshold of the scores (for large sample size).
er.kmeans.core: the indices of the identified core structure in the ER-type model based on kmeans clustering of the scores.
config.kmeans.core: the indices of the identified core structure in the configuration-type model based on kmeans clustering of the scores (for large sample size).

Author(s) <NAME>, <NAME>, <NAME>, <NAME> Maintainer: <NAME> <<EMAIL>>

References <NAME> and <NAME>. Informative core identification in complex networks. arXiv preprint arXiv:2101.06388, 2021.

See Also ECV.Rank

Examples
set.seed(100)
dt <- BlockModel.Gen(60,1000,K=3,beta=0.2,rho=0.9,simple=FALSE,power=TRUE)
### this is not an interesting case -- only for demonstration of the usage.
### The network has no periphery nodes, all nodes are in the core.
A <- dt$A
core.fit <- InformativeCore(A,r=3)
length(core.fit$er.theory.core)
### essentially all nodes are selected as the core.

LRBIC Selecting the number of communities by asymptotic likelihood ratio

Description Selects the number of communities by an asymptotic likelihood ratio, based on the method of Wang and Bickel (2015).

Usage
LRBIC(A, Kmax, lambda = NULL, model = "both")

Arguments
A: adjacency matrix
Kmax: the largest possible number of communities to check
lambda: a tuning parameter. By default, the number recommended in the paper will be used.
model: selecting K under which model. If set to "SBM", the calculation will be done under SBM. If set to "DCSBM", the calculation will be done under DCSBM. The default value is "both", which gives two selections under SBM and DCSBM respectively.

Details Note that the method cannot distinguish SBM and DCSBM, though different calculations are done under the two models. So it is not valid to compare across models. The theoretical analysis of the method is done under maximum likelihood and variational EM.
But as suggested in the paper, we use spectral clustering for community detection before fitting maximum likelihood.

Value a list of
SBM.K: estimated number of communities under SBM
DCSBM.K: estimated number of communities under DCSBM
SBM.BIC: the BIC values for the K sequence under SBM
DCSBM.BIC: the BIC values for the K sequence under DCSBM

Author(s) <NAME>, <NAME>, <NAME> Maintainer: <NAME> <<EMAIL>>

References Wang, <NAME>. & Bickel, <NAME>. Likelihood-based model selection for stochastic block models. The Annals of Statistics, Institute of Mathematical Statistics, 2017, 45, 500-528.

See Also BHMC.estimate, ECV.block, NCV.select

Examples
dt <- BlockModel.Gen(30,300,K=3,beta=0.2,rho=0.9,simple=FALSE,power=TRUE)
A <- dt$A
### test LRBIC
lrbic <- LRBIC(A,6,model="both")
lrbic$SBM.K
lrbic$DCSBM.K

LSM.PGD Estimates the inner-product latent space model by projected gradient descent

Description Estimates the inner-product latent space model by projected gradient descent, from the paper of Ma et al. (2020).

Usage
LSM.PGD(A, k, step.size=0.3, niter=500, trace=0)

Arguments
A: adjacency matrix
k: the dimension of the latent position
step.size: step size of gradient descent
niter: maximum number of iterations
trace: if trace > 0, the objective will be printed out after each iteration

Details The method is based on the gradient descent of Ma et al. (2020), with initialization by universal singular value thresholding as discussed there. The parameter identifiability constraint is the same as in the paper.

Value a list of
Z: latent positions
alpha: individual parameter alpha as in the paper
Phat: estimated probability matrix
obj: the objective of the gradient method

Author(s) <NAME> and <NAME> Maintainer: <NAME> <<EMAIL>>

References Z. Ma, Z. Ma, and <NAME>. Universal latent space model fitting for large networks with edge covariates. Journal of Machine Learning Research, 21(4):1-67, 2020.
See Also DCSBM.estimate Examples dt <- RDPG.Gen(n=600,K=2,directed=TRUE) A <- dt$A fit <- LSM.PGD(A,2,niter=50) NCV.select selecting block models by NCV Description Selects block models by the NCV of Chen and Lei (2016). Usage NCV.select(A, max.K, cv = 3) Arguments A adjacency matrix max.K largest number of communities to check cv fold of cross-validation Details Spectral clustering is used for fitting the block models. Value a list of dev the binomial deviance loss under SBM for each K l2 the L_2 loss under SBM for each K dc.dev the binomial deviance loss under DCSBM for each K dc.l2 the L_2 loss under DCSBM for each K dev.model the selected model by deviance loss l2.model the selected model by L_2 loss sbm.l2.mat, sbm.dev.mat,.... the corresponding matrices of loss for each fold (row) and each K value (column) Author(s) <NAME>, <NAME>, <NAME> Maintainer: <NAME> <<EMAIL>> References Chen, K. & Lei, J. Network cross-validation for determining the number of communities in network data. Journal of the American Statistical Association, Taylor & Francis, 2018, 113, 241-251 See Also ECV.block Examples dt <- BlockModel.Gen(30,300,K=3,beta=0.2,rho=0.9,simple=FALSE,power=TRUE) A <- dt$A ncv <- NCV.select(A,6,3) ncv$l2.model ncv$dev.model which.min(ncv$dev) which.min(ncv$l2) which.min(ncv$dc.dev) which.min(ncv$dc.l2) network.mixing estimates network connection probability by network mixing Description Estimates the network connection probability by the network mixing of Li and Le (2021). Usage network.mixing(A,index=NULL,rho = 0.1,max.K=15,dcsbm=TRUE, usvt=TRUE,ns=FALSE, lsm=FALSE,lsm.k=4,trace=FALSE) Arguments A adjacency matrix index a pre-specified hold-out set. If NULL, the set will be randomly generated according to rho. rho hold-out proportion as validation entries. Only effective when index is NULL. max.K the maximum number of blocks used for the block model approximation (see details). dcsbm whether to include the DCSBM as components, up to max.K.
By default, the method will include it. usvt whether to include the USVT as a component. By default, the method will include it. ns whether to include the neighborhood smoothing as a component. lsm whether to include the gradient estimator of the latent space model as a component. lsm.k the dimension of the latent space. Only effective if lsm is TRUE. trace whether to print the model fitting progress. Details The basic version of the mixing estimator will include SBM and DCSBM estimates with the number of blocks from 1 to max.K. Users can also specify whether to include additional USVT, neighborhood smoothing, and latent space model estimators. If NNL (non-negative linear), exponential, or ECV mixing is used, the mixing is usually robust for a reasonable range of max.K and to whether the other models are included. The linear mixing, however, is vulnerable to a large number of base estimates. NNL is our recommended method. USVT is also recommended. The neighborhood smoothing and latent space model estimators are slower, so they are not suitable for large networks. Details can be found in Li and Le (2021). Value a list of linear.Phat estimated probability matrix by linear mixing linear.weight the weights of the individual models in linear mixing nnl.Phat estimated probability matrix by NNL mixing nnl.weight the weights of the individual models in NNL mixing exp.Phat estimated probability matrix by exponential mixing exp.weight the weights of the individual models in exponential mixing ecv.Phat estimated probability matrix by ECV mixing (only one nonzero) ecv.weight the weights of the individual models in ECV mixing (only one nonzero) model.names the names of all individual models, in the same order as the weights Author(s) <NAME> and <NAME> Maintainer: <NAME> <<EMAIL>> References <NAME> and <NAME>, Network Estimation by Mixing: Adaptivity and More. arXiv preprint arXiv:2106.02803, 2021.
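The idea behind exponential mixing of base estimators can be sketched in Python: each candidate probability-matrix estimate is weighted by the exponential of its negative loss on held-out entries. This is a minimal sketch of the idea only; the function name `exp_mixing_weights`, the squared-error validation loss, and the temperature `tau` are assumptions, and the exact weighting scheme in Li and Le (2021) is more refined.

```python
import numpy as np

def exp_mixing_weights(A, hold_idx, Phats, tau=0.01):
    """Toy exponential mixing: weight each candidate estimator by
    exp(-validation loss / tau), normalised to sum to one.
    Hypothetical sketch; not the weighting used by network.mixing."""
    a = A.ravel()[hold_idx]                  # held-out adjacency entries
    losses = np.array(
        [np.mean((a - P.ravel()[hold_idx]) ** 2) for P in Phats]
    )
    w = np.exp(-(losses - losses.min()) / tau)   # shift for stability
    return w / w.sum()
```

A mixed estimate would then be the weighted sum of the candidate matrices; candidates with smaller hold-out loss dominate.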
Examples dt <- RDPG.Gen(n=500,K=5,directed=TRUE) A <- dt$A fit <- network.mixing(A) fit$model.names fit$nnl.weight network.mixing.Bfold estimates network connection probability by network mixing with B-fold averaging Description Estimates the network connection probability by the network mixing of Li and Le (2021) with B-fold averaging. Usage network.mixing.Bfold(A,B=10,rho = 0.1,max.K=15,dcsbm=TRUE,usvt=TRUE,ns=FALSE, lsm=FALSE,lsm.k=4) Arguments A adjacency matrix B number of random replications to average over rho hold-out proportion as validation entries. max.K the maximum number of blocks used for the block model approximation (see details). dcsbm whether to include the DCSBM as components, up to max.K. By default, the method will include it. usvt whether to include the USVT as a component. By default, the method will include it. ns whether to include the neighborhood smoothing as a component. lsm whether to include the gradient estimator of the latent space model as a component. lsm.k the dimension of the latent space. Only effective if lsm is TRUE. Details This is essentially the same procedure as network.mixing, but it repeats the procedure B times and returns the average. Use with caution. Though it can make the estimate more stable, it increases the computational cost by a factor of B. Based on our limited experience, this is usually not a great trade-off as the improvement might be marginal. Value a list of linear.Phat estimated probability matrix by linear mixing nnl.Phat estimated probability matrix by NNL mixing exp.Phat estimated probability matrix by exponential mixing ecv.Phat estimated probability matrix by ECV mixing (only one nonzero) model.names the names of all individual models, in the same order as the weights Author(s) <NAME> and <NAME> Maintainer: <NAME> <<EMAIL>> References <NAME> and <NAME>, Network Estimation by Mixing: Adaptivity and More. arXiv preprint arXiv:2106.02803, 2021.
See Also network.mixing Examples dt <- RDPG.Gen(n=200,K=3,directed=TRUE) A <- dt$A fit <- network.mixing.Bfold(A,B=2) NMI calculates normalized mutual information Description Calculates normalized mutual information, a metric that is commonly used to compare clustering results. Usage NMI(g1, g2) Arguments g1 a vector of cluster labels g2 a vector of cluster labels (same length as g1) Value NMI value Author(s) <NAME>, <NAME>, <NAME> Maintainer: <NAME> <<EMAIL>> Examples dt <- BlockModel.Gen(30,300,K=3,beta=0.2,rho=0.9,simple=FALSE,power=TRUE) A <- dt$A ssc <- reg.SSP(A,K=3,lap=TRUE) NMI(ssc$cluster,dt$g) NSBM.estimate estimates nomination SBM parameters given community labels by the method of moments Description Estimates NSBM parameters given community labels. Usage NSBM.estimate(A,K,g,reg.bound=-Inf) Arguments A adjacency matrix of a directed network where Aij = 1 iff i -> j K number of communities g a vector of community labels reg.bound the regularity lower bound of the lambda values. By default, -Inf, meaning no constraints. When the network is sparse, using certain constraints may improve stability. Details The method of moments is used for estimating the edge nomination SBM, so the strategy can be used for both unweighted and weighted networks. The details can be found in Li et al. (2020). Value a list of B estimated block connection probability matrix lambda estimated lambda values for nomination intensity theta estimated theta values for nomination preference P.tilde estimated composite probability matrix after nomination g community labels Author(s) <NAME>, <NAME>, <NAME> Maintainer: <NAME> <<EMAIL>> References <NAME>, <NAME>, and <NAME>. Community models for networks observed through edge nominations. arXiv preprint arXiv:2008.03652 (2020).
See Also SBM.estimate Examples dt <- NSBM.Gen(n=200,K=2,beta=0.2,avg.d=10) A <- dt$A sc <- RightSC(A,K=3) est <- NSBM.estimate(A,K=3,g=sc$cluster) NSBM.Gen Generates networks from nomination stochastic block model Description Generates networks from the nomination stochastic block model for community structure in edge nomination procedures, proposed in Li et al. (2020). Usage NSBM.Gen( n, K, avg.d,beta,theta.low=0.1, theta.p=0.2,lambda.scale=0.2,lambda.exp=FALSE) Arguments n size of network K number of communities avg.d expected average degree of the resulting network (after edge nomination) beta the out-in ratio of the original SBM theta.low the lower value of the theta’s. The theta’s are generated as a two-point mass at theta.low and 1. theta.p proportion of the lower value of the theta’s lambda.scale standard deviation of the lambda (before the exponential, see lambda.exp) lambda.exp if TRUE, the lambda’s are generated as the exponential of uniform random variables. Otherwise, they are normally distributed. Value A list of A the generated network adjacency matrix g community membership P probability matrix of the original SBM network P.tilde probability matrix of the observed network after nomination B B parameter lambda lambda parameter theta theta parameter Author(s) <NAME>, <NAME>, <NAME> Maintainer: <NAME> <<EMAIL>> References <NAME>, <NAME>, and <NAME>. Community models for networks observed through edge nominations. arXiv preprint arXiv:2008.03652 (2020). Examples dt <- NSBM.Gen(n=200,K=2,beta=0.2,avg.d=10) nSmooth estimates probability matrix by neighborhood smoothing Description Estimates the probability matrix by the neighborhood smoothing of Zhang et al. (2017). Usage nSmooth(A, h = NULL) Arguments A adjacency matrix h quantile value used for smoothing. Recommended to be of the order sqrt(log(n)/n), where n is the size of the network. The default value is sqrt(log(n)/n) from the paper.
Details The method assumes a graphon model where the underlying graphon function is piecewise Lipschitz. It may be slow for moderately large networks, though it is one of the fastest methods among graphon-model estimators. Value the probability matrix Author(s) <NAME>, <NAME>, <NAME> Maintainer: <NAME> <<EMAIL>> References Zhang, Y.; Levina, <NAME>. Estimating network edge probabilities by neighbourhood smoothing. Biometrika, Oxford University Press, 2017, 104, 771-783 Examples N <- 100 U = matrix(1:N,nrow=1) / (N+1) V = matrix(1:N,nrow=1) / (N+1) W = (t(U))^2 W = W/3*cos(1/(W + 1e-7)) + 0.15 upper.index <- which(upper.tri(W)) A <- matrix(0,N,N) rand.ind <- runif(length(upper.index)) edge.index <- upper.index[rand.ind < W[upper.index]] A[edge.index] <- 1 A <- A + t(A) diag(A) <- 0 What <- nSmooth(A) RDPG.Gen generates random networks from random dot product graph model Description Generates random networks from the random dot product graph model. Usage RDPG.Gen(n, K, directed = TRUE, avg.d = NULL) Arguments n size of the network K dimension of latent space directed whether the network is directed or not avg.d average node degree of the network (in expectation) Details The network is generated according to a special formulation mentioned in the ECV paper. Value a list of A the adjacency matrix P the probability matrix Author(s) <NAME>, <NAME>, <NAME> Maintainer: <NAME> <<EMAIL>> References <NAME> and <NAME>. Random dot product graph models for social networks. In International Workshop on Algorithms and Models for the Web-Graph, pages 138-149. Springer, 2007. <NAME>, <NAME>, and <NAME>. Network cross-validation by edge sampling. Biometrika, 107(2), pp.257-276, 2020.
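A generic undirected RDPG sampler of the kind RDPG.Gen implements can be sketched in Python. This is an illustrative sketch under assumed conventions (uniform latent positions, inner-product probabilities rescaled to a target expected degree); the exact formulation used by RDPG.Gen, and its directed variant, differ.

```python
import numpy as np

def rdpg_gen(n, k, avg_d=None, seed=0):
    """Toy undirected random dot product graph. Hypothetical sketch:
    latent positions uniform on [0,1]^k, P = X X^T / k, optionally
    rescaled so the expected average degree is avg_d."""
    rng = np.random.default_rng(seed)
    X = rng.uniform(size=(n, k))
    P = X @ X.T / k                    # inner-product probabilities in [0,1]
    np.fill_diagonal(P, 0.0)           # no self-loops
    if avg_d is not None:              # rescale to the target expected degree
        P = np.clip(P * avg_d / P.sum(axis=1).mean(), 0.0, 1.0)
    U = rng.uniform(size=(n, n))
    A = (np.triu(U, 1) < np.triu(P, 1)).astype(int)   # sample upper triangle
    return A + A.T, P
```

Sampling only the upper triangle and symmetrising keeps the graph simple and undirected.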
Examples dt <- RDPG.Gen(n=600,K=2,directed=TRUE) A <- dt$A reg.SP clusters nodes by regularized spectral clustering Description Community detection by regularized spectral clustering. Usage reg.SP(A, K, tau = 1, lap = FALSE,nstart=30,iter.max=100) Arguments A adjacency matrix K number of communities tau regularization parameter. Default value is one. Typically set between 0 and 1. If tau=0, no regularization is applied. lap indicator. If TRUE, the Laplacian matrix is used for clustering. If FALSE, the adjacency matrix will be used. nstart number of random initializations for K-means iter.max maximum number of iterations for K-means Details The regularization is done by adding a small constant to each element of the adjacency matrix. It has been shown that such a perturbation helps concentration in sparse networks, and the method gives consistent clustering under the SBM. Value a list of cluster cluster labels loss the loss of the Kmeans algorithm Author(s) <NAME>, <NAME>, <NAME> Maintainer: <NAME> <<EMAIL>> References <NAME>, <NAME>, and <NAME>. Spectral clustering and the high-dimensional stochastic blockmodel. The Annals of Statistics, pages 1878-1915, 2011. <NAME>, <NAME>, <NAME>, and <NAME>. Pseudo-likelihood methods for community detection in large sparse networks. The Annals of Statistics, 41(4):2097-2122, 2013. <NAME> and <NAME>. Consistency of spectral clustering in stochastic block models. The Annals of Statistics, 43(1):215-237, 2014. <NAME>, <NAME>, and <NAME>. Concentration and regularization of random graphs. Random Structures & Algorithms, 2017. See Also reg.SSP Examples dt <- BlockModel.Gen(30,300,K=3,beta=0.2,rho=0) A <- dt$A sc <- reg.SP(A,K=3,lap=TRUE) NMI(sc$cluster,dt$g) reg.SSP detects communities by regularized spherical spectral clustering Description Community detection by regularized spherical spectral clustering. Usage reg.SSP(A, K, tau = 1, lap = FALSE,nstart=30,iter.max=100) Arguments A adjacency matrix K number of communities tau regularization parameter.
Default value is one. Typically set between 0 and 1. If tau=0, no regularization is applied. lap indicator. If TRUE, the Laplacian matrix is used for clustering. If FALSE, the adjacency matrix will be used. nstart number of random initializations for K-means iter.max maximum number of iterations for K-means Details The regularization is done by adding a small constant to each element of the adjacency matrix. It has been shown that such a perturbation helps concentration in sparse networks. The difference from spectral clustering (reg.SP) comes from its extra step of normalizing the rows of the spectral vectors. It has been proved to give consistent clustering under the DCSBM. Value a list of cluster cluster labels loss the loss of the Kmeans algorithm Author(s) <NAME>, <NAME>, <NAME> Maintainer: <NAME> <<EMAIL>> References <NAME> and <NAME>. Regularized spectral clustering under the degree-corrected stochastic blockmodel. In Advances in Neural Information Processing Systems, pages 3120-3128, 2013. <NAME> and <NAME>. Consistency of spectral clustering in stochastic block models. The Annals of Statistics, 43(1):215-237, 2014. See Also reg.SP Examples dt <- BlockModel.Gen(30,300,K=3,beta=0.2,rho=0.9,simple=FALSE,power=TRUE) A <- dt$A ssc <- reg.SSP(A,K=3,lap=TRUE) NMI(ssc$cluster,dt$g) RightSC clusters nodes in a directed network by regularized spectral clustering on right singular vectors Description Community detection by regularized spectral clustering on the right singular vectors. Usage RightSC(A, K, normal = FALSE) Arguments A adjacency matrix of a directed network K number of communities normal indicator. If TRUE, normalization of the singular vector rows will be applied, similar to spherical spectral clustering. Details This is essentially spectral clustering applied to the right singular vectors. It can be used to handle directed networks where Aij = 1 if and only if i -> j, and where edges tend to be missing in a way that depends specifically on the sender i.
More details can be found in Li et al. (2020). Value a list of cluster cluster labels loss the loss of the Kmeans algorithm Author(s) <NAME>, <NAME>, <NAME> Maintainer: <NAME> <<EMAIL>> References <NAME>, <NAME>, and <NAME>. Community models for networks observed through edge nominations. arXiv preprint arXiv:2008.03652 (2020). See Also reg.SP Examples dt <- BlockModel.Gen(30,300,K=3,beta=0.2,rho=0) A <- dt$A sc <- RightSC(A,K=2) SBM.estimate estimates SBM parameters given community labels Description Estimates SBM parameters given community labels. Usage SBM.estimate(A, g) Arguments A adjacency matrix g a vector of community labels Details Maximum likelihood is used. Value a list of B estimated block connection probability matrix Phat estimated probability matrix g community labels Author(s) <NAME>, <NAME>, <NAME> Maintainer: <NAME> <<EMAIL>> References B. Karrer and <NAME>. Stochastic blockmodels and community structure in networks. Physical Review E, 83(1):016107, 2011. See Also DCSBM.estimate Examples dt <- BlockModel.Gen(30,300,K=3,beta=0.2,rho=0) A <- dt$A sc <- reg.SP(A,K=3,lap=TRUE) sbm <- SBM.estimate(A,sc$cluster) sbm$B smooth.oracle oracle smooth graphon estimation Description Oracle smooth graphon estimation according to given latent positions, based on smooth estimation. Usage smooth.oracle(Us,A) Arguments Us a vector whose elements are the latent positions of the network nodes under the graphon model. The dimension of the vector equals the number of nodes in the network. A adjacency matrix. It does not have to be unweighted. Details Note that the latent positions are unknown in practice, so this estimation is an oracle estimation mainly for evaluation purposes. However, if the latent positions can be approximately estimated, this function can also be used for estimating the edge probability matrix. This procedure is the M-step of the algorithm used in Sischka & Kauermann (2022).
Our implementation is modified from the contribution of an anonymous reviewer, leveraging the tools of the sparseFLMM package. Value The estimated probability matrix. Author(s) <NAME>, <NAME>, <NAME>, <NAME> Maintainer: <NAME> <<EMAIL>> References Sischka, B. and Kauermann, G., 2022. EM-based smooth graphon estimation using MCMC and spline-based approaches. Social Networks, 68, pp.279-295. Examples set.seed(100) dt <- BlockModel.Gen(10,50,K=2,beta=0.2) ## oracle order oracle.index <- sort(dt$g,index.return=TRUE)$ix A <- dt$A[oracle.index,oracle.index] oracle.est <- smooth.oracle(seq(0,1,length.out=50),A) USVT estimates the network probability matrix by the improved universal singular value thresholding Description Estimates the network probability matrix by the universal singular value thresholding of Chatterjee (2015), with the improvement mentioned in Zhang et al. (2017). Usage USVT(A) Arguments A adjacency matrix Details Instead of using the original threshold of Chatterjee (2015), the estimate is generated by taking the n^(1/3) leading spectral components. The method was mentioned in Zhang et al. (2017) and suggested by an anonymous reviewer. Value The estimated probability matrix. Author(s) <NAME> and <NAME> Maintainer: <NAME> <<EMAIL>> References S. Chatterjee. Matrix estimation by universal singular value thresholding. The Annals of Statistics, 43(1):177-214, 2015. <NAME>, <NAME>, and <NAME>. Estimating network edge probabilities by neighbourhood smoothing. Biometrika, 104(4):771-783, 2017. See Also LSM.PGD Examples dt <- RDPG.Gen(n=600,K=2,directed=TRUE) A <- dt$A fit <- USVT(A)
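The improved USVT rule described above, keeping the n^(1/3) leading spectral components and clipping into [0, 1], can be sketched in Python. This is a minimal illustration of the rank-truncation idea; the function name `usvt` and the exact rounding of n^(1/3) (here, the ceiling) are assumptions, not necessarily the package's choices.

```python
import numpy as np

def usvt(A):
    """Improved-USVT sketch: keep the leading ceil(n^(1/3)) spectral
    components of A and clip the reconstruction into [0, 1]."""
    n = A.shape[0]
    r = int(np.ceil(n ** (1.0 / 3.0)))          # number of components kept
    U, s, Vt = np.linalg.svd(A)
    Phat = (U[:, :r] * s[:r]) @ Vt[:r, :]       # rank-r reconstruction
    return np.clip(Phat, 0.0, 1.0)
```

On a low-rank probability matrix the truncated reconstruction is exact, which is what makes the rule adaptive to low-rank structure.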
Package ‘vvconverter’ February 1, 2023 Title Apply Transformations to Data Version 0.5.8 Description Provides a set of functions for data transformations. Transformations are performed on character and numeric data. As the scope of the package is within Student Analytics, there are functions focused around the academic year. License MIT + file LICENSE Encoding UTF-8 RoxygenNote 7.2.3 Imports dplyr, lubridate, stringr NeedsCompilation no Author <NAME> [aut, cre, cph] Maintainer <NAME> <<EMAIL>> Repository CRAN Date/Publication 2023-02-01 10:50:02 UTC R topics documented: academic_year, clean_multiple_underscores, destring, interval_round, ltrim, median_top_10, mode, rtrim, str_replace_all_in_file, sum_0_1, test_01, transform_01_to_ft, trim academic_year Academic year Description This function translates a date to the academic year in which it falls, based on the academic year starting on the 1st of September. Usage academic_year(x, start_1_oct = FALSE) Arguments x A date, or a vector with multiple dates. POSIXct is also accepted. start_1_oct Does the academic year start on the 1st of October? Default FALSE: based on September 1st. Value The academic year in which the specified date falls See Also Other vector calculations: clean_multiple_underscores(), interval_round(), sum_0_1(), transform_01_to_ft() Examples academic_year(lubridate::today()) clean_multiple_underscores Clean multiple underscores Description Replaces multiple consecutive underscores with a single underscore in a vector or string. Usage clean_multiple_underscores(x) Arguments x The vector or string to be cleaned. Value Cleaned vector or string. See Also Other vector calculations: academic_year(), interval_round(), sum_0_1(), transform_01_to_ft() Examples clean_multiple_underscores("hello___world") destring Convert character vector to numeric, ignoring irrelevant characters Description Converts a character vector to numeric, ignoring irrelevant characters.
Usage destring(x, keep = "0-9.-") Arguments x A vector to be operated on keep Characters to keep, in bracket regular expression form. Typically includes 0-9 as well as the decimal separator (. in the US and , in Europe). Value vector of type numeric Examples destring("24k") destring("5,5") interval_round Interval round Description Function to round numeric values in a vector to values from an interval sequence. Usage interval_round(x, interval) Arguments x The numeric vector to adjust interval The interval sequence Value The vector corrected for the given interval See Also Other vector calculations: academic_year(), clean_multiple_underscores(), sum_0_1(), transform_01_to_ft() Examples interval_round(c(5,4,2,6), interval = seq(1:4)) ltrim LTrim Description Trims leading whitespace from a string. Usage ltrim(x) Arguments x A text string. Value Cleaned string. Examples ltrim(" hello") median_top_10 Median top 10 percentage Description Calculates the median of the top ten percent of the values. Usage median_top_10(x, na.rm = FALSE) Arguments x A numerical vector na.rm Default FALSE: if TRUE, remove NAs before the calculation. Value A numerical value Examples median_top_10(mtcars$cyl) mode Mode (most common value) Description Determines the most common value in a vector. If two values have the same frequency, the first occurring value is used. Usage mode(x, na.rm = FALSE) Arguments x a vector na.rm If TRUE: remove NAs before the calculation is done Value the most common value in the vector x Examples mode(c(0,3,5,7,5,3,2)) rtrim RTrim Description Trims trailing whitespace from a string. Usage rtrim(x) Arguments x A text string. Value Cleaned string.
Examples rtrim("hello ") str_replace_all_in_file Replace all occurrences of a pattern in a file Description Replaces all occurrences of a pattern in a file. Usage str_replace_all_in_file( file, pattern, replacement = "[...]", only_comments = TRUE, collapse = FALSE ) Arguments file character, path of the file to be modified pattern character, pattern to be replaced replacement character, replacement text only_comments logical, should the replacement only be done in comments collapse logical, should the lines be collapsed into a single line before replacement Value NULL, the file is modified in place sum_0_1 Sum 0 1 Description This function is the same as sum(), with one exception: if the outcome value is higher than 1, it will always return 1. Usage sum_0_1(x) Arguments x a vector with numeric values Value 0 or 1, depending on whether the sum is greater than 0 or not. See Also Other vector calculations: academic_year(), clean_multiple_underscores(), interval_round(), transform_01_to_ft() test_01 Test 01 Description This function tests whether the vector is actually a boolean, but is encoded as a 0/1 variable. For numeric vectors, the function checks whether the only occurring values are 0, 1, or NA. For character and factor vectors, it checks whether the only occurring values are "0", "1", or NA. If it is a 0/1 variable, TRUE is returned; in all other cases FALSE. Usage test_01(x) Arguments x The vector to test Value A TRUE/FALSE value on the test See Also Other booleans: transform_01_to_ft() Examples vector <- c(0, 1, 0, 1, 1, 1, 0) test_01(vector) transform_01_to_ft Transform 01 to FT Description If the vector is a 0/1 vector, it is converted to a logical TRUE/FALSE vector. This transformation is performed only if the vector contains only the values 0, 1, or NA. If this is not the case, the original variable is returned. This transformation can be done on numeric, string, and factor vectors. Usage transform_01_to_ft(x) Arguments x the vector to be tested and transformed.
Value The transformed vector if a transformation is possible. If no transformation is possible, the original vector is returned. See Also Other vector calculations: academic_year(), clean_multiple_underscores(), interval_round(), sum_0_1() Other booleans: test_01() Examples vector <- c(0, 1, 0, 1, 1, 1, 0) transform_01_to_ft(vector) trim Trim Description Trims both leading and trailing whitespace from a string. Usage trim(x) Arguments x A text string. Value Cleaned string. Examples trim(" hello ")
ExDadata === [`ExDadata`](ExDadata.html#content) provides a wrapper for the DaData API. Example --- First we need to define a configuration module. ``` defmodule MyApp.Dadata do use ExDadata, otp_app: :my_app end ``` Then we can set our configuration in the `config.exs` file: ``` config :my_app, MyApp.Dadata, api_key: "<api_key>", secret_key: "<secret_key>", http_adapter: ExDadata.HTTPoisonHTTPAdapter, json_adapter: Jason ``` And then we can use it to make requests to the DaData API: ``` client = MyApp.Dadata.client() {:ok, result} = ExDadata.Address.clean_address(client, ["мск сухонска 11/-89"]) ``` ExDadata.Address === Entry point for the address DaData API. For more information see the [docs](https://dadata.ru/api/#address-clean). Summary === [Functions](#functions) --- [geocode_address(client, data)](#geocode_address/2) Determine coordinates for the address. [geolocate_address(client, data)](#geolocate_address/2) Search for addresses which are close to the given coordinates. [suggest_address(client, data)](#suggest_address/2) Search for an address by any part of it, from the region to the house. You can also search by zip code. Functions === ExDadata.Client === Keeps authorization information for DaData. Summary === [Types](#types) --- [t()](#t:t/0) [Functions](#functions) --- [new(api_key, secret_key, http_adapter, json_adapter)](#new/4) Create a new client. [to_headers(client)](#to_headers/1) Create the headers needed to authorize with this client. Types === Functions === ExDadata.HTTPAdapter behaviour === An interface to provide an HTTP client. Summary === [Types](#types) --- [body()](#t:body/0) JSON body for the request, provided as JSON-encodable data. [header()](#t:header/0) Headers provided for the request. [method()](#t:method/0) An HTTP method.
[opts()](#t:opts/0) Options provided for specific adapters. [url()](#t:url/0) Full URL for the request. [Callbacks](#callbacks) --- [request(t, method, url, list, body, opts)](#c:request/6) Provides the ability to make an arbitrary request with JSON data. Types === Callbacks === ExDadata.HTTPAdapter.Response === Response struct. Summary === [Types](#types) --- [body()](#t:body/0) JSON body of the response, provided as JSON-encodable data. [header()](#t:header/0) Headers of the response. [t()](#t:t/0) The response struct on which the main logic depends. Types === ExDadata.HTTPoisonHTTPAdapter === Default HTTP adapter for this library. ExDadata.NoConfigError exception === An error raised in case you did not provide a required config key. Summary === [Functions](#functions) --- [message(no_config_error)](#message/1) Callback implementation for [`Exception.message/1`](https://hexdocs.pm/elixir/Exception.html#c:message/1). Functions === ExDadata === ![](https://github.com/Elonsoft/ex_dadata/workflows/test/badge.svg) ![](https://github.com/Elonsoft/ex_dadata/workflows/lint/badge.svg) [`ExDadata`](ExDadata.html) provides a wrapper for the [DaData API](https://dadata.ru/api/). Installation --- Add the package to your list of dependencies in `mix.exs`: ``` def deps do [ {:ex_dadata, "~> 0.1.0"} ] end ``` Documentation is available on [HexDocs](https://hexdocs.pm/ex_dadata). Usage --- First we need to define a configuration module.
``` defmodule MyApp.Dadata do use ExDadata, otp_app: :my_app end ``` Then we can set our configuration in the `config.exs` file: ``` config :my_app, MyApp.Dadata, api_key: "<api_key>", secret_key: "<secret_key>", http_adapter: ExDadata.HTTPoisonHTTPAdapter, json_adapter: Jason ``` And then we can use it to make requests to the DaData API: ``` client = MyApp.Dadata.client() {:ok, result} = ExDadata.Address.clean_address(client, ["мск сухонска 11/-89"]) ``` Development --- In `dev` mode `ExDadata.DevClient` is available. It is initialized from the `config/dev.exs` config. You can create that file and put the following data into it: ``` use Mix.Config config :ex_dadata, api_key: "<your_api_key>", secret_key: "<your_secret_key>", http_adapter: ExDadata.HTTPoisonHTTPAdapter, json_adapter: Jason ``` After this you can create your client in `iex -S mix`: ``` iex(2)> client = ExDadata.DevClient.new() iex(3)> ExDadata.Address.suggest_address(client, %{query: "москва хабар"}) ``` > **Note:** You'll need to start the HTTPoison application: > ``` > iex(1)> Application.ensure_all_started(:httpoison) > ``` [API Reference](api-reference.html) API Reference === Modules --- [ExDadata](ExDadata.html) [`ExDadata`](ExDadata.html#content) provides a wrapper for the DaData API. [ExDadata.Address](ExDadata.Address.html) Entry point for the address DaData API. [ExDadata.Client](ExDadata.Client.html) Keeps authorization information for DaData. [ExDadata.HTTPAdapter](ExDadata.HTTPAdapter.html) An interface to provide an HTTP client. [ExDadata.HTTPAdapter.Response](ExDadata.HTTPAdapter.Response.html) Response struct. [ExDadata.HTTPoisonHTTPAdapter](ExDadata.HTTPoisonHTTPAdapter.html) Default HTTP adapter for this library. [ExDadata.NoConfigError](ExDadata.NoConfigError.html) An error raised in case you did not provide a required config key. [ExDadata](readme.html)
InforION Release 2.1 2021-05-03 Usage 2.1 Install from Package Managers 2.2 Build from Source 3.1 Python 3 Installer Download 3.2 Run the Installer 4.1 Installation 4.2 Upgrade 4.3 Show Version 6.1 Description 6.2 Parameters 6.3 Example 7.1 create 7.2 delete 8.1 list 8.2 get 8.3 upload 8.4 purge 9.1 Description 9.2 Parameters 9.3 Example 10.1 Description 10.2 Parameters 10.3 Example 11.1 Description 11.2 Parameters 11.3 Example 12.1 Big changes 12.2 Code style 12.3 Tests 12.4 New Python dependencies 12.5 New non-Python dependencies 12.6 Style guide: Is it InforION or inforion? 12.7 Known ports/packagers InforION is a Python 3 application and library that interacts with Infor OS. This library provides a series of functionalities to handle data via Infor ION: • Migration into M3 • Import data to/from the Infor Datalake • Manage and load data from external systems into e.g. Infor M3, incl. data verification, logging, reporting and workflow management via Infor ION. • Export data from Infor LN/M3/EAM CHAPTER 1 Introduction InforION is a Python 3 application and library that interacts with Infor OS. CHAPTER 2 Installation There are two ways to install the Python 3 application InforION. If you follow the steps below, you should have everything installed in no time. 2.1 Install from Package Managers First of all, you can install InforION from package managers; it is available as a package on PyPI: https://pypi.org/project/inforion/. Nevertheless, the preferred way to install it is often simply through pip in your Terminal (macOS) or Command Prompt (Windows): pip3 install inforion 2.2 Build from Source Another easy way is that InforION may also be consumed and built directly from source.
For this kind of installation we recommend using Visual Studio Code and its integrated terminal, so that following the steps below won't be a problem.

git clone https://github.com/Fellow-Consulting-AG/inforion.git

As soon as you are inside the new inforion directory, just run the following command:

make install

2.2.1 Install in 2 minutes

A short screencast of the installation: https://asciinema.org/a/347875.svg

CHAPTER 3 Python Download for Windows

In this guide we show you how to install Python 3.7 or higher on Windows.

3.1 Python 3 Installer Download

1. In order to install Python, download it from the official page; it is very important that you select Python 3.7 or higher: Python Releases for Windows
2. Then scroll down and select the x86-64 executable installer for 64-bit Windows 10, or the Windows x86 executable installer for 32-bit.

3.2 Run the Installer

Once you have downloaded an installer, run it by double-clicking the downloaded file. A pop-up window should appear.

3. Before installing Python, select the option: Add Python 3.8 to PATH
4. After the setup succeeds, select: Disable path length limit

CHAPTER 4 PIP

Pip is the de facto standard package-management system used to install and manage software packages written in Python. Many packages can be found in the default source for packages and their dependencies, the Python Package Index (PyPI).

4.1 Installation

pip3 install inforion

4.2 Upgrade

If you want to upgrade inforion, please use:

pip install inforion --upgrade

4.3 Show Version

If you want to see the version you have installed, please use:

pip show inforion
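The version check above can also be done programmatically with Python's standard library. This is a sketch independent of inforion itself; the helper name installed_version is hypothetical, not part of the inforion API:

```python
from importlib.metadata import PackageNotFoundError, version


def installed_version(package: str):
    """Return the installed version string, or None if the package is missing."""
    try:
        return version(package)
    except PackageNotFoundError:
        return None


if __name__ == "__main__":
    print(installed_version("inforion") or "inforion is not installed")
```

This mirrors what pip show reports, without shelling out to pip.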
CHAPTER 5 Jupyter Notebook on Windows 10

The last prerequisite is to install Jupyter Notebook using the following command:

python -m pip install jupyter

To run Jupyter Notebook, use the command:

jupyter notebook

Finally, start the notebook server and open the dashboard in your browser via the URL localhost:8888/tree; you now have access to Jupyter Notebook.

CHAPTER 6 Generate M3 Excel Mapping File

6.1 Description

The Excel mapping file generation step generates an Excel mapping file for a particular M3 API program. It populates the mapping file with all fields available for the specific M3 API program. The mapping file is then used by the Infor consultant or the customer to map the M3 fields to the source database fields. Once the mappings are specified, the mapping file is used to validate data from the source file (provided by the customer) and load it into M3. For more information and help, enter this command to see the necessary parameters:

inforion extract --help

6.2 Parameters

-p, --program      The API program for which the mapping file should be generated, e.g. MMS301MI, CRS610MI, AAS320MI.
-o, --outputfile   The output file path where the generated mapping file should be saved.

6.3 Example

inforion extract -p CRS610MI -o CRS610MI-Mappings.xlsx

Note: this command should always be executed in a terminal, as it needs user permissions, unless you use the DM_ION Workflow Manager.

CHAPTER 7 Data Catalog CLI

In this section we take a look at the following Data Catalog (datacatalog) commands:

7.1 create

7.1.1 Description

Add or update object metadata in the Data Catalog. The schema types JSON and DSV require that the name, type, and schema are provided; the properties are optional.
For object type ANY, only the name and type are required; the schema and properties should be omitted.

7.1.2 Parameters

--ionfile, -i       Infor IONAPI credentials file.
--name, -n          Object name.
--schema_type, -t   Object schema type, e.g. DSV or ANY.
--schema, -s        Schema file (JSON).
--properties, -p    Additional schema properties file (JSON).

7.1.3 Example

$ inforion catalog create --ionfile credentials/credentials.ionapi --name CSVSchema2 --schema_type DSV --schema data/catalog_schema.json --properties data/catalog_properties.json

7.2 delete

7.2.1 Description

Delete a schema object by name.

7.2.2 Parameters

--ionfile, -i   Infor IONAPI credentials file.
--name, -n      Name of the schema object.

7.2.3 Example

$ inforion catalog delete --ionfile credentials/credentials.ionapi --name CSVSchema2

CHAPTER 8 Data Lake CLI

Now it is time to look at the Data Lake (datalake) commands:

8.1 list

8.1.1 Description

List data object properties using a filter.

8.1.2 Parameters

--ionfile, -i       Infor IONAPI credentials file.
--list_filter, -f   The restrictions to be applied to the returned records.
--sort, -s          Field name, followed by a colon, followed by the direction (asc or desc; default asc). Example: 'event_date:desc'.
--page, -p          The page number from which to start returning records. Starts from 1.
--records, -r       The number of records that will be returned. Starts from 0.

8.1.3 Example

$ inforion datalake list --ionfile credentials/credentials.ionapi --list_filter "dl_document_name eq 'CSVSchema2'"

8.2 get

8.2.1 Description

Retrieve a payload from the Data Lake based on its id.

8.2.2 Parameters

--ionfile, -i     Infor IONAPI credentials file.
--stream_id, -i   Object ID.
8.2.3 Example

$ inforion datalake get --ionfile credentials/credentials.ionapi -id 1-7e476691-b17c-3e8d-8f0c-ea13222f56ef

8.3 upload

8.3.1 Description

This command uses the ION Messaging Service to send documents into ION.

8.3.2 Parameters

--ionfile, -i      Infor IONAPI credentials file.
--schema, -s       Schema name.
--logical_id, -l   The logical id of the sending application (fromLogicalId parameter).
--file, -f         File to upload.

8.3.3 Example

$ inforion datalake upload --ionfile credentials/credentials.ionapi --schema CSVSchema2 --logical_id lid://infor.ims.mongooseims --file data/sample.csv

8.4 purge

8.4.1 Description

Deletes data objects based on a given filter or a list of IDs. You cannot use the ids and purge_filter parameters together.

8.4.2 Parameters

--ionfile, -i        Infor IONAPI credentials file.
--ids, -id           Object ids.
--purge_filter, -f   The restrictions to be applied to purge the records.

8.4.3 Example

$ inforion datalake purge --ionfile credentials/credentials.ionapi --ids 1-dd6aa276-b34d-3905-b378-cdb5452ca17f,1-02d3ed52-5602-36ac-b3b1-fa670dbfeb72
$ inforion datalake purge --ionfile credentials/credentials.ionapi -f "dl_id eq '1-d358de11-4658-3c2d-a6ec-88c028f46315'"

CHAPTER 9 Export LN Data

9.1 Description

This section guides the user through exporting data from LN to Excel files using the Infor ION API. Before looking at the export parameters, it is important to mention that the Infor ION API only supports the following services:

• SalesOrder
• Business_Partner_v3

As a friendly reminder, whenever you need help, just enter the following command and you will see which parameters are needed for exporting:

inforion ln ExportData --help

9.2 Parameters

-u, --url            The full URL to the API is needed.
                     Please note that you need to enter the full URL, like .../LN/c4ws/services/SalesOrder. [required]
-i, --ionfile        An IONAPI file is needed to log in to Infor OS. Please go into ION and generate one. If not provided, a prompt will allow you to type the input text. [required]
-c, --company        Company for which you want to export data, e.g. 121. [required]
-s, --service_name   Service name for which you want to export data. See above for the currently supported service names. [required]
-o, --outputfile     Output file; data will be exported to this file.

9.3 Example

inforion ln ExportData -s BusinessPartner_v3 -u https://Xi2016.gdeinfor2.com:7443/infor/LN/c4ws/services/ -i LN.ionapi -c 121 -o BusinessPartners.xlsx

A short, self-explanatory video of the data extraction step: https://asciinema.org/a/347871.svg

CHAPTER 10 Transformation

10.1 Description

This step transforms the external data source into a dataset for M3, based on the mapping file.

10.2 Parameters

-a, --mappingfile   The mapping file on which the transformation is based.
-b, --mainsheet     The main sheet containing the mapping fields for the transformation.
-i, --inputfile     The input data.
-o, --outputfile    The file to which the transformed data is written.

10.3 Example

inforion transform -a sample_mapping.xlsx -b Sheet1 -i sample_data.xlsx -o output.xlsx

CHAPTER 11 Loading

11.1 Description

One of the last and most important steps is to load the data from your initial Excel sheet into the desired Infor application, for example M3.
Again, for any doubts or help, just enter this command to see the existing parameters for loading:

inforion load --help

11.2 Parameters

-u, --url          The full URL to the API is needed. Please note that you need to enter the full URL, like .../M3/m3api-rest/v2/execute/CRS610MI. [required]
-f, --ionfile      An IONAPI file is needed to log in to Infor OS. Please go into ION and generate one. If not provided, a prompt will allow you to type the input text. [required]
-p, --program      The program to use for the load. [required]
-m, --method       The methods, given as a list. [required]
-i, --inputfile    File with the data to load. Please use XLSX or CSV format. If not provided, the input text will just be printed. [required]
-o, --outputfile   Output file; the data of the load are saved here.
-s, --start        The data load can be started at 0 or at a given number.
-e, --end          The number at which the data load ends.
-z, --configfile   Use a config file instead of parameters.

11.3 Example

inforion load -u https://mingle-ionapi.eu1.inforcloudsuite.com/Tendat_DEV/M3/m3api-rest/v2/execute -f FellowKey.ionapi -p CRS610MI -m "Add,ChgBasicData,ChgOrderInfo,ChgFinancial" -i excel/T-KundenNeu1.xlsx -o load_full_200.xlsx -s 0 -e 2

CHAPTER 12 Contributing guidelines

Contributions are always welcome!

12.1 Big changes

Please open a new issue to discuss or propose a major change. Not only is it fun to discuss big ideas, but we might save each other's time too. Perhaps some of the work you're contemplating is already half-done in a development branch.

12.2 Code style

We use PEP 8, black for code formatting, and isort for import sorting. The settings for these programs are in pyproject.toml and setup.cfg. Pull requests should follow the style guide. One difference from "black" style is that strings shown to the user are always in double quotes (") and strings for internal use are in single quotes (').
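As a toy illustration of that quote convention (a hypothetical snippet, not taken from the inforion codebase):

```python
# Internal identifier: single quotes.
FILTER_FIELD = 'dl_document_name'


def upload_message(schema: str) -> str:
    # Text shown to the user: double quotes.
    return "Upload finished for schema " + schema + "."


print(upload_message(FILTER_FIELD))
```

Black itself would normalize all of these to double quotes, which is exactly why the project states this deviation explicitly.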
12.3 Tests

New features should come with tests that confirm their correctness.

12.4 New Python dependencies

If you are proposing a change that will require a new Python dependency, we prefer dependencies that are already packaged by Debian or Red Hat. This makes life much easier for our downstream package maintainers. Python dependencies must also be GPLv3 compatible.

12.5 New non-Python dependencies

InforION uses several external programs for its functionality. In general we prefer to avoid adding new external programs.

12.6 Style guide: Is it InforION or inforion?

The program/project is InforION, and the name of the executable or library is inforion.

12.7 Known ports/packagers

InforION has been ported to many platforms already. If you are interested in porting it to a new platform, contributions are welcome.

CHAPTER 13 Logging

In this section we describe how logging works in the InforION CLI and API. You can set the log level as follows:

export LOG_LEVEL='<one of the levels below>'

The currently available log levels are:

• CRITICAL
• ERROR
• WARNING
• INFO
• DEBUG

By default, only errors are logged.

CHAPTER 14 Indices and tables

• genindex
• modindex
• search
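The LOG_LEVEL variable from Chapter 13 can be honoured from Python code roughly like this. This is a sketch of the general pattern using only the standard library; the exact mechanism inside inforion may differ:

```python
import logging
import os


def configure_logging() -> logging.Logger:
    """Read LOG_LEVEL from the environment, defaulting to ERROR."""
    name = os.environ.get("LOG_LEVEL", "ERROR").upper()
    level = getattr(logging, name, logging.ERROR)  # unknown names fall back to ERROR
    logging.basicConfig(level=level)
    logger = logging.getLogger("inforion")
    logger.setLevel(level)
    return logger


log = configure_logging()
log.error("always visible at the default level")
log.debug("only visible when LOG_LEVEL=DEBUG")
```

With LOG_LEVEL unset or set to an unrecognized value, only ERROR and CRITICAL messages appear, matching the default behavior described above.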
Package ‘spc’ -- October 24, 2022

Version: 0.6.7
Date: 2022-10-24
Title: Statistical Process Control -- Calculation of ARL and Other Control Chart Performance Measures
Author: <NAME>
Maintainer: <NAME> <<EMAIL>>
Depends: R (>= 1.8.0)
Description: Evaluation of control charts by means of the zero-state and steady-state ARL (Average Run Length) and RL quantiles. Setting up control charts for a given in-control ARL. The control charts under consideration are one- and two-sided EWMA, CUSUM, and Shiryaev-Roberts schemes for monitoring the mean or variance of normally distributed independent data. ARL calculations for the same set of schemes under drift (in the mean) are included. Finally, all ARL measures for the multivariate EWMA (MEWMA) are provided.
License: GPL (>= 2)
URL: https://www.r-project.org
NeedsCompilation: yes
Repository: CRAN
Date/Publication: 2022-10-24 12:30:02 UTC
dphat -- Percent defective for normal samples

Description

Density, distribution function and quantile function for the sample percent defective calculated on normal samples with mean equal to mu and standard deviation equal to sigma.

Usage

dphat(x, n, mu=0, sigma=1, type="known", LSL=-3, USL=3, nodes=30)
pphat(q, n, mu=0, sigma=1, type="known", LSL=-3, USL=3, nodes=30)
qphat(p, n, mu=0, sigma=1, type="known", LSL=-3, USL=3, nodes=30)

Arguments

x, q       vector of quantiles.
p          vector of probabilities.
n          sample size.
mu, sigma  parameters of the underlying normal distribution.
type       choose whether the standard deviation is given and fixed ("known") or estimated and potentially monitored ("estimated").
LSL, USL   lower and upper specification limit, respectively.
nodes      number of quadrature nodes needed for type="estimated".

Details

Bruhn-Suhr/Krumbholz (1990) derived the cumulative distribution function of the sample percent defective calculated on normal samples in order to apply it to a new variables sampling plan. These results were heavily used in Krumbholz/Zöller (1995) for Shewhart and in Knoth/Steinmetz (2013) for EWMA control charts. For algorithmic details see, essentially, Bruhn-Suhr/Krumbholz (1990). Two design variants are treated: the simple case, type="known", with known normal variance, and the presumably much more relevant and considerably more intricate case, type="estimated", where both parameters of the normal distribution are unknown. Basically, given lower and upper specification limits and the normal distribution, one estimates the expected yield based on a normal sample of size n.

Value

Returns a vector of pdf, cdf or qf values for the statistic phat.
Author(s)

<NAME>

References

<NAME> and <NAME> (1990), A new variables sampling plan for normally distributed lots with unknown standard deviation and double specification limits, Statistical Papers 31(1), 195-207.
<NAME> and <NAME> (1995), p-Karten vom Shewhartschen Typ für die messende Prüfung, Allgemeines Statistisches Archiv 79, 347-360.
<NAME> and <NAME> (2013), EWMA p charts under sampling by variables, International Journal of Production Research 51(13), 3795-3807.

See Also

phat.ewma.arl for routines using the phat statistic considered here.

Examples

# Figures 1 (c) and (d) from Knoth/Steinmetz (2013)
n <- 5
LSL <- -3
USL <- 3
par(mar=c(5, 5, 1, 1) + 0.1)
p.star <- 2*pnorm( (LSL-USL)/2 ) # for p <= p.star, pdf and cdf vanish
p_ <- seq(p.star+1e-10, 0.07, 0.0001) # define support of Figure 1

# Figure 1 (c)
pp_ <- pphat(p_, n)
plot(p_, pp_, type="l", xlab="p", ylab=expression(P( hat(p) <= p )),
     xlim=c(0, 0.06), ylim=c(0,1), lwd=2)
abline(h=0:1, v=p.star, col="grey")

# Figure 1 (d)
dp_ <- dphat(p_, n)
plot(p_, dp_, type="l", xlab="p", ylab="f(p)", xlim=c(0, 0.06), ylim=c(0,50), lwd=2)
abline(h=0, v=p.star, col="grey")

euklid.ewma.arl -- Compute ARLs of Poisson NCS-EWMA control charts

Description

Computation of the (zero-state) Average Run Length (ARL) at a given Poisson mean mu.

Usage

euklid.ewma.arl(gX, gY, kL, kU, mu, y0, r0=0)

Arguments

gX, gY  first and second integer forming the rational lambda = gX/(gX+gY); lambda mimics the usual EWMA smoothing constant.
kL      lower control limit of the NCS-EWMA control chart, integer.
kU      upper control limit of the NCS-EWMA control chart, integer.
mu      mean value of the Poisson distribution.
y0      headstart-like value -- it is proposed to use the in-control mean.
r0      further element of the headstart -- deviate from the default only if you fully understand the scheme.

Details

A new idea of applying EWMA smoothing to count data, based on integer division with remainders.
It is highly recommended to read the corresponding paper (see below).

Value

Returns a single value resembling the ARL.

Author(s)

<NAME>

References

<NAME>, <NAME>, <NAME> (2015), A new memory-type monitoring technique for count data, Computers and Industrial Engineering 85, 235-247.

See Also

later.

Examples

# RCM (2015), Table 12, page 243, first NCS column
gX <- 5
gY <- 24
kL <- 16
kU <- 24
mu0 <- 20
#L0 <- euklid.ewma.arl(gX, gY, kL, kU, mu0, mu0) # should be 1219.2

imr.arl -- Compute ARLs and control limit factors for I-MR combos in case of normal data

Description

Computation of the (zero-state) Average Run Length (ARL) at given mean mu and sigma, etc.

Usage

imr.arl(M, Ru, mu, sigma, vsided="upper", Rl=0, cmode="coll", N=30, qm=30)
imr.Ru_Mgiven(M, L0, N=30, qm=30)
imr.Rl_Mgiven(M, L0, N=30, qm=30)
imr.MandRu(L0, N=30, qm=30)
imr.MandRuRl(L0, N=30, qm=30)

Arguments

M       control limit multiple for the mean chart.
Ru      upper control limit multiple for the moving range chart.
mu      actual mean.
sigma   actual standard deviation.
vsided  switches between the more common "upper" and the less popular "two"(-sided) case of the MR chart. Setting vsided to "two" and Ru sufficiently large (at least 2*M) creates an I-MR chart with a lower limit only for the MR part.
Rl      lower control limit multiple for the moving range chart (not needed in the upper case, i.e. if vsided="upper").
cmode   selects the numerical algorithm. The default "coll" picks piecewise collocation, which is the most accurate method. Selecting "Crowder", the algorithm from Crowder (1987b) is chosen (re-implemented in R). Choosing one of "gl", "rectangular", "trapezoid", "simpson" or "simpson3_8" selects the quite common Nystroem procedure to numerically solve the considered integral equation. It is astonishing that Crowder's modified Nystroem design with the trapezoidal quadrature works so well. However, it is clearly dominated by the piecewise collocation algorithm.
N       controls the dimension of the linear equation system and consequently the accuracy of the result. See Details.
qm      number of quadrature nodes (and weights) to determine the collocation definite integrals.
L0      pre-defined in-control ARL, that is, determine Ru, Rl, or M and Ru, or all of them (essentially ending in a lower-limit MR chart) so that the mean number of observations until a false alarm is L0.

Details

Crowder (1987a) provided some math to determine the ARL of the so-called individual moving range (IMR) chart. The given integral equation was approximated by a linear equation system applying trapezoidal quadratures. Interestingly, Crowder did not recognize the specific behavior of the solution for Ru >= M (which is the more common case), where the resulting function L() is constant in the central part of the considered domain. In addition, by performing collocation piecewise on two (Ru > M) or three (Ru < M) subintervals, one obtains highly accurate ARL numbers. Note that imr.MandRu yields M and Ru for the upper MR trace, whereas imr.MandRuRl provides in addition the lower factor Rl for an IMR scheme consisting of two two-sided control charts. Note that the underlying ARL-unbiasedness condition suppresses the upper limit Ru in all cases considered so far. This is not completely surprising, because the mean chart is already quite sensitive to increases in the variance. The two functions imr.Ru_Mgiven and imr.Rl_Mgiven deliver the single upper and lower limit, respectively, if a one-sided MR design is utilized and the control limit factor M of the mean chart is already set. Note that for Ru > 2*M, the upper MR limit is no longer operative. If it was initially an upper MR chart, then it reduces to a single mean chart. If it was originally a two-sided MR design, then it becomes a two-sided mean/lower variance chart combo.
Within the latter scheme, the mean chart signals variance increases (a well-known pattern), whereas the MR subchart delivers only decreasing-variance signals. However, these simple Shewhart charts exhibit weak variance-decrease detection behavior in all configurations. Eventually, we should note that the scientific control chart community mainly recommends ignoring MR charts, see, for example, Vardeman and Jobe (2016), whereas standards (such as ISO), commercial SPC software and many training manuals provide the IMR scheme with completely wrong upper limits for the MR chart.

Value

Returns either the ARL or control limit factors (alias multiples).

Author(s)

<NAME>

References

<NAME> (1987a) Computation of ARL for Combined Individual Measurement and Moving Range Charts, Journal of Quality Technology 19(2), 98-102.
<NAME> (1987b) A Program for the Computation of ARL for Combined Individual Measurement and Moving Range Charts, Journal of Quality Technology 19(2), 103-106.
<NAME>, <NAME>, <NAME>, Shewhart-Type Control Charts for Individual Observations, Journal of Quality Technology 25(3), 188-198.
<NAME>, <NAME>, <NAME> (1994) Design Strategies for Individuals and Moving Range Control Charts, Journal of Quality Technology 26(4), 274-287.
<NAME>, <NAME> (1995) Detecting Variance Reductions Using the Moving Range, Quality Engineering 8(1), 165-178.
<NAME>, <NAME> (1997) A Supplementary Test Based on the Control Chart for Individuals, Journal of Quality Technology 29(1), 16-20.
<NAME>, <NAME> (1998) A Note on Individual and Moving Range Control Charts, Journal of Quality Technology 30(1), 70-74.
<NAME>, <NAME> (2000) Monitoring process dispersion without subgrouping, Journal of Quality Technology 32(2), 89-102.
<NAME>, <NAME> (2011) Design And Application Of Individuals And Moving Range Control Charts, Journal of Applied Business Research (JABR) 25(5), 31-40.
<NAME> (2014) Comparison of Individual and Moving Range Chart Combinations to Individual Charts in Terms of ARL after Designing for a Common "All OK" ARL, Journal of Modern Applied Statistical Methods 13(2), 364-378.
<NAME>, <NAME> (2016) Statistical Methods for Quality Assurance, Springer, 2nd edition.

See Also

later.

Examples

## Crowder (1987b), Output Listing 1, trapezoidal quadrature (less accurate)
M <- 2
Ru <- 3
mu <- seq(0, 2, by=0.25)
LL <- LL2 <- rep(NA, length(mu))
for ( i in 1:length(mu) ) {
  LL[i] <- round( imr.arl(M, Ru, mu[i], 1), digits=4)
  LL2[i] <- round( imr.arl(M, Ru, mu[i], 1, cmode="Crowder", N=80), digits=4)
}
LL1987b <- c(18.2164, 16.3541, 12.4282, 8.7559, 6.1071, 4.3582, 3.2260, 2.4878, 1.9989)
print( data.frame(mu, LL2, LL1987b, LL) )

## Crowder (1987a), Table 1, trapezoidal quadrature (less accurate)
M <- 4
Ru <- 3
mu <- seq(0, 2, by=0.25)
LL <- rep(NA, length(mu))
for ( i in 1:length(mu) ) LL[i] <- round( imr.arl(M, Ru, mu[i], 1), digits=4)
LL1987a <- c(34.44, 34.28, 34.07, 33.81, 33.45, 32.82, 31.50, 28.85, 24.49)
print( data.frame(mu, LL1987a, LL) )

## Rigdon, Cruthis, Champ (1994), Table 1, Monte Carlo based
M <- 2.992
Ru <- 4.139
icARL <- imr.arl(M, Ru, 0, 1)
icARL1994 <- 200
print( data.frame(icARL1994, icARL) )
M <- 3.268
Ru <- 4.556
icARL <- imr.arl(M, Ru, 0, 1)
icARL1994 <- 500
print( data.frame(icARL1994, icARL) )

## ..., Table 2, Monte Carlo based
M <- 2.992
Ru <- 4.139
tau <- c(seq(1, 1.3, by=0.05), seq(1.4, 2, by=0.1))
LL <- rep(NA, length(tau))
for ( i in 1:length(tau) ) LL[i] <- round( imr.arl(M, Ru, 0, tau[i]), digits=2)
LL1994 <- c(200.54, 132.25, 90.84, 65.66, 49.35, 38.92, 31.11, 21.35, 15.47, 12.04, 9.81, 8.21, 7.03, 6.14)
print( data.frame(tau, LL1994, LL) )

## <NAME> (1995), Table 2 (Monte Carlo based), half-normal, known parameter case
## two-sided (!) MR-alone (!)
## chart, hence the ARL results have to be decreased by 1
## Here: a large M (=12) is deployed to mimic Inf
alpha <- 0.00915
Ru <- sqrt(2) * qnorm(1-alpha/4)
Rl <- sqrt(2) * qnorm(0.5+alpha/4)
k <- 1.5 - (0:7)/10
LL <- rep(NA, length(k))
for ( i in 1:length(k) )
  LL[i] <- round( imr.arl(12, Ru, 0, k[i], vsided="two", Rl=Rl), digits=2) - 1
RA1995 <- c(18.61, 24.51, 34.21, 49.74, 75.08, 113.14, 150.15, 164.54)
print( data.frame(k, RA1995, LL) )

## <NAME> (1998), Table 2, column sigma/sigma_0 = 1.00
M <- 3.27
Ru <- 4.56
#M <- 3.268
#Ru <- 4.556
mu <- seq(0, 2, by=0.25)
LL <- rep(NA, length(mu))
for ( i in 1:length(mu) ) LL[i] <- round( imr.arl(M, Ru, mu[i], 1), digits=1)
LL1998 <- c(505.3, 427.6, 276.7, 156.2, 85.0, 46.9, 26.9, 16.1, 10.1)
print( data.frame(mu, LL1998, LL) )

## ..., column sigma/sigma_0 = 1.05
for ( i in 1:length(mu) ) LL[i] <- round( imr.arl(M, Ru, mu[i], 1.05), digits=1)
LL1998 <- c(296.8, 251.6, 169.6, 101.6, 58.9, 34.5, 20.9, 13.2, 8.7)
print( data.frame(mu, LL1998, LL) )

## <NAME> (2000), Table 2
## AMP utilized a Markov chain approximation
## However, the MR series is not Markovian!
## MR-alone (!) chart, hence the ARL results have to be decreased by 1
## Here: a large M (=8) is deployed to mimic Inf
Ru <- 3.93
sigma <- c(1, 1.05, 1.1, 1.15, 1.2, 1.3, 1.4, 1.5, 1.75)
LL <- rep(NA, length(sigma))
for ( i in 1:length(sigma) )
  LL[i] <- round( imr.arl(8, Ru, 0, sigma[i], N=30), digits=1) - 1
AMP2000 <- c(201.0, 136.8, 97.9, 73.0, 56.3, 36.4, 25.6, 19.1, 11.0)
print( data.frame(sigma, AMP2000, LL) )

## <NAME> (2011), Table 2, deployment of Crowder (1987b), nominal ic ARL 500
M <- c(3.09, 3.20, 3.30, 3.50, 4.00)
Ru <- c(6.00, 4.67, 4.53, 4.42, 4.36)
LL0 <- rep(NA, length(M))
for ( i in 1:length(M) ) LL0[i] <- round( imr.arl(M[i], Ru[i], 0, 1), digits=1)
print( data.frame(M, Ru, LL0) )

imr.RuRl_alone -- Compute control limits of MR charts for normal data

Description

Computation of control limits of standalone MR charts.
Usage

imr.RuRl_alone(L0, N=30, qm=30, M0=12, eps=1e-3)
imr.RuRl_alone_s3(L0, N=30, qm=30, M0=12)
imr.RuRl_alone_tail(L0, N=30, qm=30, M0=12)
imr.Ru_Rlgiven(Rl, L0, N=30, qm=30, M0=12)
imr.Rl_Rugiven(Ru, L0, N=30, qm=30, M0=12)

Arguments

L0   pre-defined in-control ARL, that is, determine Ru and Rl so that the mean number of observations until a false alarm is L0.
N    controls the dimension of the linear equation system and consequently the accuracy of the result. See Details.
qm   number of quadrature nodes (and weights) to determine the definite collocation integrals.
M0   mimics Inf -- by setting M0 to some large value (having a standard normal distribution in mind), the algorithm for IMR charts can be used for the standalone MR chart as well.
eps  resolution parameter, which controls the approximation of the ARL slope at the in-control level of the monitored standard deviation. It ensures the pattern called ARL unbiasedness. A small value is recommended.
Rl   lower control limit multiple for the moving range chart.
Ru   upper control limit multiple for the moving range chart.

Details

Crowder (1987a) provided some math to determine the ARL of the so-called individual moving range (IMR) chart, which consists of the mean X chart and the standard deviation MR chart. Making the alarm threshold M0 huge (the default value here is 12) for the X chart allows us to utilize Crowder's setup for standalone MR charts. For details about the IMR numerics see imr.arl. The three versions of imr.RuRl_alone determine limits that form an ARL-unbiased design, follow the restriction Rl = 1/Ru^3, and feature equal probability tails for the MR's half-normal distribution, respectively (in the order given above). The other two functions are helper routines for imr.RuRl_alone. Note that the elegant approach given in Acosta-Mejia/Pignatiello (2000) is only an approximation, because the MR series is not Markovian.

Value

Returns control limit factors (alias multiples).
Author(s)

<NAME>

References

<NAME> (1987a) Computation of ARL for Combined Individual Measurement and Moving Range Charts, Journal of Quality Technology 19(2), 98-102.
<NAME> (1987b) A Program for the Computation of ARL for Combined Individual Measurement and Moving Range Charts, Journal of Quality Technology 19(2), 103-106.
<NAME>, <NAME> (1995) Detecting Variance Reductions Using the Moving Range, Quality Engineering 8(1), 165-178.
<NAME>, <NAME> (2000) Monitoring process dispersion without subgrouping, Journal of Quality Technology 32(2), 89-102.

See Also

later.

Examples

## <NAME> (1995), Table 2 (Monte Carlo based), half-normal, known parameter case
## two-sided MR-alone chart, hence the ARL results have to be decreased by 1
## Here: a large M0=12 (default of the functions above) is deployed to mimic Inf
alpha <- 0.00915
Ru <- sqrt(2) * qnorm(1-alpha/4)
Rl <- sqrt(2) * qnorm(0.5+alpha/4)
M0 <- 12
## Not run:
ARL0 <- imr.arl(M0, Ru, 0, 1, vsided="two", Rl=Rl)
RRR1995 <- imr.RuRl_alone_tail(ARL0)
RRRs <- imr.RuRl_alone_s3(ARL0)
RRR <- imr.RuRl_alone(ARL0)
results <- rbind(c(Rl, Ru), RRR1995, RRRs, RRR)
results
## End(Not run)

lns2ewma.arl -- Compute ARLs of EWMA ln S^2 control charts (variance charts)

Description

Computation of the (zero-state) Average Run Length (ARL) for different types of EWMA control charts (based on the log of the sample variance S^2) monitoring normal variance.

Usage

lns2ewma.arl(l, cl, cu, sigma, df, hs=NULL, sided="upper", r=40)

Arguments

l      smoothing parameter lambda of the EWMA control chart.
cl     lower control limit of the EWMA control chart.
cu     upper control limit of the EWMA control chart.
sigma  true standard deviation.
df     actual degrees of freedom; corresponds to the subsample size (for known mean it is equal to the subsample size, for unknown mean it is equal to the subsample size minus one).
hs     so-called headstart (enables fast initial response) -- the default value (hs=NULL) corresponds to the in-control mean of ln S^2.
sided  distinguishes between one- and two-sided EWMA-S^2 control charts by choosing "upper" (upper chart with reflection at cl), "lower" (lower chart with reflection at cu), and "two" (two-sided chart), respectively.
r      dimension of the resulting linear equation system: the larger, the better.

Details

lns2ewma.arl determines the Average Run Length (ARL) by numerically solving the related ARL integral equation by means of the Nystroem method based on Gauss-Legendre quadrature.

Value

Returns a single value which resembles the ARL.

Author(s)

<NAME>

References

<NAME> and <NAME> (1992), An EWMA for monitoring a process standard deviation, Journal of Quality Technology 24, 12-21.
<NAME> (2005), Accurate ARL computation for EWMA-S^2 control charts, Statistics and Computing 15, 341-352.

See Also

xewma.arl for zero-state ARL computation of EWMA control charts for monitoring the normal mean.

Examples

lns2ewma.ARL <- Vectorize("lns2ewma.arl", "sigma")

## Crowder/Hamilton (1992)
## moments of ln S^2
E_log_gamma <- function(df) log(2/df) + digamma(df/2)
V_log_gamma <- function(df) trigamma(df/2)
E_log_gamma_approx <- function(df) -1/df - 1/3/df^2 + 2/15/df^4
V_log_gamma_approx <- function(df) 2/df + 2/df^2 + 4/3/df^3 - 16/15/df^5

## results from Table 3 ( upper chart with reflection at 0 = log(sigma0=1) )
## original entries are (lambda = 0.05, K = 1.06, df=n-1=4)
# sigma  ARL
# 1      200
# 1.1    43
# 1.2    18
# 1.3    11
# 1.4    7.6
# 1.5    6.0
# 2      3.2
df <- 4
lambda <- .05
K <- 1.06
cu <- K * sqrt( lambda/(2-lambda) * V_log_gamma_approx(df) )
sigmas <- c(1 + (0:5)/10, 2)
arls <- round(lns2ewma.ARL(lambda, 0, cu, sigmas, df, hs=0, sided="upper"), digits=1)
data.frame(sigmas, arls)

## Knoth (2005)
## compare with Table 3 (p. 351)
lambda <- .05
df <- 4
K <- 1.05521
cu <- 1.05521 * sqrt( lambda/(2-lambda) * V_log_gamma_approx(df) )
## upper chart with reflection at sigma0=1 in Table 4
## original entries are
# sigma  ARL_0  ARL_-.267
# 1      200.0  200.0
# 1.1    43.04  41.55
# 1.2    18.10  19.92
# 1.3    10.75  13.11
# 1.4    7.63   9.93
# 1.5    5.97   8.11
# 2      3.17   4.67
M <- -0.267
cuM <- lns2ewma.crit(lambda, 200, df, cl=M, hs=M, r=60)[2]
arls1 <- round(lns2ewma.ARL(lambda, 0, cu, sigmas, df, hs=0, sided="upper"), digits=2)
arls2 <- round(lns2ewma.ARL(lambda, M, cuM, sigmas, df, hs=M, sided="upper", r=60), digits=2)
data.frame(sigmas, arls1, arls2)

lns2ewma.crit -- Compute critical values of EWMA ln S^2 control charts (variance charts)

Description

Computation of the critical values (similar to alarm limits) for different types of EWMA control charts (based on the log of the sample variance S^2) monitoring normal variance.

Usage

lns2ewma.crit(l, L0, df, sigma0=1, cl=NULL, cu=NULL, hs=NULL, sided="upper", mode="fixed", r=40)

Arguments

l       smoothing parameter lambda of the EWMA control chart.
L0      in-control ARL.
df      actual degrees of freedom; corresponds to the subsample size (for known mean it is equal to the subsample size, for unknown mean it is equal to the subsample size minus one).
sigma0  in-control standard deviation.
cl      deployed for sided="upper", that is, an upper variance control chart with lower reflecting barrier cl.
cu      for two-sided (sided="two") and fixed upper control limit (mode="fixed"); in all other cases cu is ignored.
hs      so-called headstart (enables fast initial response) -- the default value (hs=NULL) corresponds to the in-control mean of ln S^2.
sided   distinguishes between one- and two-sided EWMA-S^2 control charts by choosing "upper" (upper chart with reflection at cl), "lower" (lower chart with reflection at cu), and "two" (two-sided chart), respectively.
mode  only deployed for sided="two" – with "fixed" an upper control limit (see cu) is set and only the lower one is calculated to obtain the in-control ARL L0, while with "unbiased" a certain unbiasedness of the ARL function is guaranteed (here, both the lower and the upper control limit are calculated). With "vanilla", limits symmetric around the in-control mean of ln S^2 are determined, while for "eq.tails" the in-control ARL values of two single EWMA variance charts (decompose the two-sided scheme into one lower and one upper scheme) are matched.
r     dimension of the resulting linear equation system: the larger the more accurate.

Details
lns2ewma.crit determines the critical values (similar to alarm limits) for given in-control ARL L0 by applying the secant rule and using lns2ewma.arl(). In case of sided="two" and mode="unbiased" a two-dimensional secant rule is applied that also ensures that the maximum of the ARL function for given standard deviation is attained at sigma0. See Knoth (2010) and the related example.

Value
Returns the lower and upper control limits cl and cu.

Author(s)
<NAME>

References
<NAME> and <NAME> and <NAME> (1999), A comparison of control charting procedures for monitoring process dispersion, IIE Transactions 31, 569-579.
<NAME> and <NAME> (1992), An EWMA for monitoring a process standard deviation, Journal of Quality Technology 24, 12-21.
<NAME> (2005), Accurate ARL computation for EWMA-S^2 control charts, Statistics and Computing 15, 341-352.
<NAME> (2010), Control Charting Normal Variance – Reflections, Curiosities, and Recommendations, in Frontiers in Statistical Quality Control 9, <NAME> and <NAME> (Eds.), Physica Verlag, Heidelberg, Germany, 3-18.

See Also
lns2ewma.arl for calculation of the ARL of EWMA ln S^2 control charts.
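Note
The secant-rule idea can be illustrated with a hand-rolled search deploying lns2ewma.arl() directly. This is a sketch only – the starting values c1 and c2 are arbitrary choices for this setting, and the package's internal implementation may differ in its details:

## hedged sketch: secant search for the upper limit cu of an upper
## EWMA-ln S^2 chart with reflecting barrier cl = 0 and target L0 = 200
lambda <- 0.05
df <- 4
L0 <- 200
f <- function(cu) lns2ewma.arl(lambda, 0, cu, 1, df, hs=0, sided="upper") - L0
c1 <- 0.2  # starting values (assumption: reasonable for this setting)
c2 <- 0.4
for ( i in 1:50 ) {
  c3 <- c2 - f(c2) * (c2 - c1) / (f(c2) - f(c1))
  c1 <- c2
  c2 <- c3
  if ( abs(c2 - c1) < 1e-10 ) break
}
c2  # compare with lns2ewma.crit(lambda, L0, df, cl=0, hs=0)["cu"]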
Examples
## Knoth (2005)
## compare with 1.05521 mentioned on page 350, third line from below
L0 <- 200
lambda <- .05
df <- 4
limits <- lns2ewma.crit(lambda, L0, df, cl=0, hs=0)
limits["cu"]/sqrt( lambda/(2-lambda)*(2/df+2/df^2+4/3/df^3-16/15/df^5) )

mewma.arl               Compute ARLs of MEWMA control charts

Description
Computation of the (zero-state) Average Run Length (ARL) for multivariate exponentially weighted moving average (MEWMA) charts monitoring multivariate normal mean.

Usage
mewma.arl(l, cE, p, delta=0, hs=0, r=20, ntype=NULL, qm0=20, qm1=qm0)
mewma.arl.f(l, cE, p, delta=0, r=20, ntype=NULL, qm0=20, qm1=qm0)
mewma.ad(l, cE, p, delta=0, r=20, n=20, type="cond", hs=0, ntype=NULL, qm0=20, qm1=qm0)

Arguments
l      smoothing parameter lambda of the MEWMA control chart.
cE     alarm threshold of the MEWMA control chart.
p      dimension of multivariate normal distribution.
delta  magnitude of the potential change, delta=0 refers to the in-control state.
hs     so-called headstart (enables fast initial response) – must be non-negative.
r      number of quadrature nodes – dimension of the resulting linear equation system for delta = 0. For non-zero delta this dimension is mostly r^2 (the Markov chain approximation leads to somewhat larger values). Caution: If ntype is set to "co" (collocation), then values of r larger than 20 lead to large computing times. For the other selections this would happen for values larger than 40.
ntype  choose the numerical algorithm to solve the ARL integral equation. For delta=0 possible values are "gl", "gl2" (Gauss-Legendre, classic and with change of variables: square), "co" (collocation, for delta > 0 with sin transformation), "ra" (Radau), "cc" (Clenshaw-Curtis), "mc" (Markov chain), and "sr" (Simpson rule).
For delta larger than 0, some more values besides the others are possible: "gl3", "gl4", "gl5" (Gauss-Legendre with a further change of variables: sin, tan, sinh), "co2", "co3" (collocation with some trimming and tan as quadrature stabilizing transformations, respectively). If it is set to NULL (the default), then for delta=0 "gl2" is chosen. If delta is larger than 0, then for p equal to 2 or 4 "gl3", and for all other values "gl5" is taken. "ra" denotes the method used in Rigdon (1995a). "mc" denotes the Markov chain approximation.
type     switch between "cond" and "cycl" for differentiating between the conditional (no false alarm) and the cyclical (after false alarm re-start in hs) case, respectively.
n        number of quadrature nodes for calculating the steady-state ARL integral(s).
qm0,qm1  number of collocation quadrature nodes for the out-of-control case (qm0 for the inner integral, qm1 for the outer one), that is, for positive delta, and for the in-control case (now only qm0 is deployed) if via ntype the collocation procedure is requested.

Details
Basically, this is the implementation of different numerical algorithms for solving the integral equation for the MEWMA in-control (delta = 0) ARL introduced in Rigdon (1995a) and out-of-control (delta != 0) ARL in Rigdon (1995b). Most of them are nothing else than the Nystroem approach – the integral is replaced by a suitable quadrature. Here, the Gauss-Legendre (more powerful), Radau (used by Rigdon, 1995a), Clenshaw-Curtis, and Simpson rule (which is really bad) are provided. Additionally, the collocation approach is offered as well, because it is much better for small odd values of p. FORTRAN code for the Radau quadrature based Nystroem method of Rigdon (1995a) was published in Bodden and Rigdon (1999) – see also http://lib.stat.cmu.edu/jqt/31-1. Furthermore, FORTRAN code for the Markov chain approximation (in- and out-of-control) can be found at http://lib.stat.cmu.edu/jqt/33-4.
The related papers are Runger and Prabhu (1996) and Molnau et al. (2001). The idea of the Clenshaw-Curtis quadrature was taken from Capizzi and Masarotto (2010), who successfully deployed a modified Clenshaw-Curtis quadrature to calculate the ARL of combined (univariate) Shewhart-EWMA charts. It turns out that it works nicely for the MEWMA ARL as well. The version mewma.arl.f() without the argument hs provides the ARL as a function of one (in-control) or two (out-of-control) arguments.

Value
Returns a single value which is simply the zero-state ARL.

Author(s)
<NAME>

References
<NAME> and <NAME> (1999), A program for approximating the in-control ARL for the MEWMA chart, Journal of Quality Technology 31(1), 120-123.
<NAME> and <NAME> (2010), Evaluation of the run-length distribution for a combined Shewhart-EWMA control chart, Statistics and Computing 20(1), 23-33.
<NAME> (2017), ARL Numerics for MEWMA Charts, Journal of Quality Technology 49(1), 78-89.
<NAME> et al. (2001), A Program for ARL Calculation for Multivariate EWMA Charts, Journal of Quality Technology 33(4), 515-521.
<NAME> and <NAME> (1997), Designing a multivariate EWMA control chart, Journal of Quality Technology 29(1), 8-15.
<NAME> (1995a), An integral equation for the in-control average run length of a multivariate exponentially weighted moving average control chart, J. Stat. Comput. Simulation 52(4), 351-365.
<NAME> (1995b), A double-integral equation for the average run length of a multivariate exponentially weighted moving average control chart, Stat. Probab. Lett. 24(4), 365-373.
<NAME> and <NAME> (1996), A Markov Chain Model for the Multivariate Exponentially Weighted Moving Averages Control Chart, J. Amer. Statist. Assoc. 91(436), 1701-1706.

See Also
mewma.crit for getting the alarm threshold to attain a certain in-control ARL.

Examples
# Rigdon (1995a), p. 357, Tab. 1
p <- 2
r <- 0.25
h4 <- c(8.37, 9.90, 11.89, 13.36, 14.82, 16.72)
for ( i in 1:length(h4) ) cat(paste(h4[i], "\t", round(mewma.arl(r, h4[i], p, ntype="ra")), "\n"))

r <- 0.1
h4 <- c(6.98, 8.63, 10.77, 12.37, 13.88, 15.88)
for ( i in 1:length(h4) ) cat(paste(h4[i], "\t", round(mewma.arl(r, h4[i], p, ntype="ra")), "\n"))

# Rigdon (1995b), p. 372, Tab. 1
## Not run:
r <- 0.1
p <- 4
h <- 12.73
for ( sdelta in c(0, 0.125, 0.25, .5, 1, 2, 3) )
  cat(paste(sdelta, "\t", round(mewma.arl(r, h, p, delta=sdelta^2, ntype="ra", r=25), digits=2), "\n"))

p <- 5
h <- 14.56
for ( sdelta in c(0, 0.125, 0.25, .5, 1, 2, 3) )
  cat(paste(sdelta, "\t", round(mewma.arl(r, h, p, delta=sdelta^2, ntype="ra", r=25), digits=2), "\n"))

p <- 10
h <- 22.67
for ( sdelta in c(0, 0.125, 0.25, .5, 1, 2, 3) )
  cat(paste(sdelta, "\t", round(mewma.arl(r, h, p, delta=sdelta^2, ntype="ra", r=25), digits=2), "\n"))
## End(Not run)

# Runger/Prabhu (1996), p. 1704, Tab. 1
## Not run:
r <- 0.1
p <- 4
H <- 12.73
cat(paste(0, "\t", round(mewma.arl(r, H, p, delta=0, ntype="mc", r=50), digits=2), "\n"))
for ( delta in c(.5, 1, 1.5, 2, 3) )
  cat(paste(delta, "\t", round(mewma.arl(r, H, p, delta=delta, ntype="mc", r=25), digits=2), "\n"))
# compare with Fortran program (MEWMA-ARLs.f90) from Molnau et al. (2001) with m1 = m2 = 25
# H4    P   R    DEL  ARL
# 12.73 4.  0.10 0.00 199.78
# 12.73 4.  0.10 0.50 35.05
# 12.73 4.  0.10 1.00 12.17
# 12.73 4.  0.10 1.50 7.22
# 12.73 4.  0.10 2.00 5.19
# 12.73 4.  0.10 3.00 3.42

p <- 20
H <- 37.01
cat(paste(0, "\t", round(mewma.arl(r, H, p, delta=0, ntype="mc", r=50), digits=2), "\n"))
for ( delta in c(.5, 1, 1.5, 2, 3) )
  cat(paste(delta, "\t", round(mewma.arl(r, H, p, delta=delta, ntype="mc", r=25), digits=2), "\n"))
# compare with Fortran program (MEWMA-ARLs.f90) from Molnau et al. (2001) with m1 = m2 = 25
# H4    P    R    DEL  ARL
# 37.01 20.  0.10 0.00 199.09
# 37.01 20.  0.10 0.50 61.62
# 37.01 20.  0.10 1.00 20.17
# 37.01 20.  0.10 1.50 11.40
# 37.01 20.  0.10 2.00 8.03
# 37.01 20.  0.10 3.00 5.18
## End(Not run)

# Knoth (2017), p. 85, Tab. 3, rows with p=3
## Not run:
p <- 3
lambda <- 0.05
h4 <- mewma.crit(lambda, 200, p)
benchmark <- mewma.arl(lambda, h4, p, delta=1, r=50)
mc.arl <- mewma.arl(lambda, h4, p, delta=1, r=25, ntype="mc")
ra.arl <- mewma.arl(lambda, h4, p, delta=1, r=27, ntype="ra")
co.arl <- mewma.arl(lambda, h4, p, delta=1, r=12, ntype="co2")
gl3.arl <- mewma.arl(lambda, h4, p, delta=1, r=30, ntype="gl3")
gl5.arl <- mewma.arl(lambda, h4, p, delta=1, r=25, ntype="gl5")
abs( benchmark - data.frame(mc.arl, ra.arl, co.arl, gl3.arl, gl5.arl) )
## End(Not run)

# Prabhu/Runger (1997), p. 13, Tab. 3
## Not run:
p <- 2
r <- 0.1
H <- 8.64
cat(paste(0, "\t", round(mewma.ad(r, H, p, delta=0, type="cycl", ntype="mc", r=60), digits=2), "\n"))
for ( delta in c(.5, 1, 1.5, 2, 3) )
  cat(paste(delta, "\t", round(mewma.ad(r, H, p, delta=delta, type="cycl", ntype="mc", r=30), digits=2), "\n"))
# better accuracy
for ( delta in c(0, .5, 1, 1.5, 2, 3) )
  cat(paste(delta, "\t", round(mewma.ad(r, H, p, delta=delta^2, type="cycl", r=30), digits=2), "\n"))
## End(Not run)

mewma.crit              Compute alarm threshold of MEWMA control charts

Description
Computation of the alarm threshold for multivariate exponentially weighted moving average (MEWMA) charts monitoring multivariate normal mean.

Usage
mewma.crit(l, L0, p, hs=0, r=20)

Arguments
l   smoothing parameter lambda of the MEWMA control chart.
L0  in-control ARL.
p   dimension of multivariate normal distribution.
hs  so-called headstart (enables fast initial response) – must be non-negative.
r   number of quadrature nodes – dimension of the resulting linear equation system.

Details
mewma.crit determines the alarm threshold for given in-control ARL L0 by applying the secant rule and using mewma.arl() with ntype="gl2".

Value
Returns a single value which resembles the critical value c.

Author(s)
<NAME>

References
<NAME> (2017), ARL Numerics for MEWMA Charts, Journal of Quality Technology 49(1), 78-89.
<NAME> (1995), An integral equation for the in-control average run length of a multivariate exponentially weighted moving average control chart, J. Stat. Comput. Simulation 52(4), 351-365.

See Also
mewma.arl for zero-state ARL computation.

Examples
# Rigdon (1995), p. 358, Tab. 1
p <- 4
L0 <- 500
r <- .25
h4 <- mewma.crit(r, L0, p)
h4
## original value is 16.38.

# Knoth (2017), p. 82, Tab. 2
p <- 3
L0 <- 1e3
lambda <- c(0.25, 0.2, 0.15, 0.1, 0.05)
h4 <- rep(NA, length(lambda))
for ( i in 1:length(lambda) ) h4[i] <- mewma.crit(lambda[i], L0, p, r=20)
round(h4, digits=2)
## original values are
## 15.82 15.62 15.31 14.76 13.60

mewma.psi               Compute steady-state density of the MEWMA statistic

Description
Computation of the (zero-state) steady-state density function of the statistic deployed in multivariate exponentially weighted moving average (MEWMA) charts monitoring multivariate normal mean.

Usage
mewma.psi(l, cE, p, type="cond", hs=0, r=20)

Arguments
l     smoothing parameter lambda of the MEWMA control chart.
cE    alarm threshold of the MEWMA control chart.
p     dimension of multivariate normal distribution.
type  switch between "cond" and "cycl" for differentiating between the conditional (no false alarm) and the cyclical (after false alarm re-start in hs) case, respectively.
hs    the re-starting point for the cyclical steady-state framework.
r     number of quadrature nodes.

Details
Basically, ideas from Knoth (2017, MEWMA numerics) and Knoth (2016, steady-state ARL concepts) are merged. More details will follow.

Value
Returns a function.

Author(s)
<NAME>

References
<NAME> (2016), The Case Against the Use of Synthetic Control Charts, Journal of Quality Technology 48(2), 178-195.
<NAME> (2017), ARL Numerics for MEWMA Charts, Journal of Quality Technology 49(1), 78-89.
<NAME> (2018), The Steady-State Behavior of Multivariate Exponentially Weighted Moving Average Control Charts, Sequential Analysis 37(4), 511-529.

See Also
mewma.arl for calculating the in-control ARL of MEWMA.
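Note
Since mewma.psi() returns the density as an R function, a quick plausibility check is to integrate it numerically. This is a sketch only; it rests on the assumption that the conditional steady-state density is normalized over the continuation interval [0, cE*lambda/(2-lambda)]:

lambda <- 0.1
p <- 3
h4 <- mewma.crit(lambda, 200, p)
psi <- mewma.psi(lambda, h4, p)
## total mass over the continuation region -- should be close to 1
## if the normalization assumption holds
integrate(psi, 0, h4*lambda/(2-lambda))$value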
Examples
lambda <- 0.1
L0 <- 200
p <- 3
h4 <- mewma.crit(lambda, L0, p)
x_ <- seq(0, h4*lambda/(2-lambda), by=0.002)
psi <- mewma.psi(lambda, h4, p)
psi_ <- psi(x_)
# plot(x_, psi_, type="l", xlab="x", ylab=expression(psi(x)), xlim=c(0,1.2))
# cf. to Figure 1 in Knoth (2018), p. 514, p=3

p.ewma.arl              Compute ARLs of binomial EWMA p control charts

Description
Computation of the (zero-state) Average Run Length (ARL) at given rate p.

Usage
p.ewma.arl(lambda, ucl, n, p, z0, sided="upper", lcl=NULL, d.res=1, r.mode="ieee.round", i.mode="integer")

Arguments
lambda  smoothing parameter of the EWMA p control chart.
ucl     upper control limit of the EWMA p control chart.
n       subgroup size.
p       (failure/success) rate.
z0      so-called headstart (gives fast initial response).
sided   distinguishes between one- and two-sided EWMA control charts by choosing "upper", "lower", and "two", respectively.
lcl     lower control limit of the EWMA p control chart; needed for the two-sided design.
d.res   resolution (see details).
r.mode  round mode – allowed modes are "gan.floor", "floor", "ceil", "ieee.round", "round", "mix".
i.mode  type of interval center – "integer" or "half" integer.

Details
The monitored data follow a binomial distribution with size n and failure/success probability p. The ARL values of the resulting EWMA control chart are determined by Markov chain approximation. Here, the original EWMA values are approximated by multiples of one over d.res. Different ways of rounding (see r.mode) to the next multiple are implemented. Besides Gan's paper nothing is published about the numerical subtleties.

Value
Returns a single value which resembles the ARL.

Author(s)
<NAME>

References
<NAME> (1990), Monitoring observations generated from a binomial distribution using modified exponentially weighted moving average control chart, J. Stat. Comput. Simulation 37, 45-60.
<NAME> and <NAME> (2013), EWMA p charts under sampling by variables, International Journal of Production Research 51, 3795-3807.

See Also
later.
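Note
The Markov chain approximation can be cross-checked by brute-force simulation. The following Monte Carlo sketch assumes a one-sided (upper) design with the strict alarm rule z > ucl; whether the boundary itself already triggers an alarm is an assumption here, so small deviations are to be expected:

## Monte Carlo sketch for Gan's Table 1 setting (lambda = 0.51, ucl = 22)
set.seed(1)
n <- 150
p0 <- .1
lambda <- 0.51
ucl <- 22
z0 <- n*p0
one_run <- function() {
  z <- z0
  for ( t in 1:100000 ) { # run length truncated for safety
    z <- (1-lambda)*z + lambda*rbinom(1, n, p0)
    if ( z > ucl ) return(t) # alarm rule assumed: strict inequality
  }
  100000
}
mean(replicate(2000, one_run())) # compare with p.ewma.arl(lambda, ucl, n, p0, z0)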
Examples
## Gan (1990)
# Table 1
n <- 150
p0 <- .1
z0 <- n*p0
lambda <- c(1, .51, .165)
hu <- c(27, 22, 18)
p.value <- .1 + (0:20)/200
p.EWMA.arl <- Vectorize(p.ewma.arl, "p")
arl1.value <- round(p.EWMA.arl(lambda[1], hu[1], n, p.value, z0, r.mode="round"), digits=2)
arl2.value <- round(p.EWMA.arl(lambda[2], hu[2], n, p.value, z0, r.mode="round"), digits=2)
arl3.value <- round(p.EWMA.arl(lambda[3], hu[3], n, p.value, z0, r.mode="round"), digits=2)
arls <- matrix(c(arl1.value, arl2.value, arl3.value), ncol=length(lambda))
rownames(arls) <- p.value
colnames(arls) <- paste("lambda =", lambda)
arls

## Knoth/Steinmetz (2013)
n <- 5
p0 <- 0.02
z0 <- n*p0
lambda <- 0.3
ucl <- 0.649169922 ## in-control ARL 370.4 (determined with d.res = 2^14 = 16384)
res.list <- 2^(1:11)
arl.list <- NULL
for ( res in res.list ) {
  arl <- p.ewma.arl(lambda, ucl, n, p0, z0, d.res=res)
  arl.list <- c(arl.list, arl)
}
cbind(res.list, arl.list)

phat.ewma.arl           Compute ARLs of EWMA phat control charts

Description
Computation of the (zero-state) Average Run Length (ARL), the upper control limit (ucl) for given in-control ARL, and the lambda with minimal out-of-control ARL at given shift.

Usage
phat.ewma.arl(lambda, ucl, mu, n, z0, sigma=1, type="known", LSL=-3, USL=3, N=15, qm=25, ntype="coll")
phat.ewma.crit(lambda, L0, mu, n, z0, sigma=1, type="known", LSL=-3, USL=3, N=15, qm=25)
phat.ewma.lambda(L0, mu, n, z0, sigma=1, type="known", max_l=1, min_l=.001, LSL=-3, USL=3, qm=25)

Arguments
lambda  smoothing parameter of the EWMA control chart.
ucl     upper control limit of the EWMA phat control chart.
L0      pre-defined in-control ARL (Average Run Length).
mu      true mean or mean where the ARL should be minimized (then the in-control mean is simply 0).
n       subgroup size.
z0      so-called headstart (gives fast initial response).
type    choose whether the standard deviation is given and fixed ("known") or estimated and potentially monitored ("estimated").
sigma   actual standard deviation of the data – the in-control value is 1.
max_l, min_l  maximal and minimal value for the optimal lambda search.
LSL,USL       lower and upper specification limit, respectively.
N             size of the collocation base; the dimension of the resulting linear equation system is equal to N.
qm            number of nodes for collocation quadratures.
ntype         switch between the default method coll (collocation) and the classic one markov (Markov chain approximation) for calculating the ARL numerically.

Details
The three implemented functions allow one to apply a new type of control chart. Basically, lower and upper specification limits are given. The monitoring vehicle then is the empirical probability that an item will not conform to these specifications given the sequence of sample means. If the related EWMA sequence violates the control limits, then the alarm indicates a significant process deterioration. For details see the paper mentioned in the references. To be able to construct the control charts, see the first example.

Value
Returns single values which resemble the ARL, the critical value, and the optimal lambda, respectively.

Author(s)
<NAME>

References
<NAME> and <NAME> (2013), EWMA p charts under sampling by variables, International Journal of Production Research 51, 3795-3807.

See Also
sewma.arl for a further collocation based ARL calculation routine.

Examples
## Simple example to demonstrate the chart.
# some functions
h.mu <- function(mu) pnorm(LSL-mu) + pnorm(mu-USL)
ewma <- function(x, lambda=0.1, z0=0) filter(lambda*x, 1-lambda, m="r", init=z0)

# parameters
LSL <- -3      # lower specification limit
USL <- 3       # upper specification limit
n <- 5         # batch size
lambda <- 0.1  # EWMA smoothing parameter
L0 <- 1000     # in-control Average Run Length (ARL)
z0 <- h.mu(0)  # start at minimal defect level
ucl <- phat.ewma.crit(lambda, L0, 0, n, z0, LSL=LSL, USL=USL)

# data
x0 <- matrix(rnorm(50*n), ncol=5)           # in-control data
x1 <- matrix(rnorm(50*n, mean=0.5), ncol=5) # out-of-control data
x <- rbind(x0, x1)                          # all data

# create chart
xbar <- apply(x, 1, mean)
phat <- h.mu(xbar)
z <- ewma(phat, lambda=lambda, z0=z0)
plot(1:length(z), z, type="l", xlab="batch", ylim=c(0,.02))
abline(h=z0, col="grey", lwd=.7)
abline(h=ucl, col="red")

## <NAME>, <NAME> (2013)
# Table 1
lambdas <- c(.5, .25, .2, .1)
L0 <- 370.4
n <- 5
LSL <- -3
USL <- 3
phat.ewma.CRIT <- Vectorize("phat.ewma.crit", "lambda")
p.star <- pnorm( LSL ) + pnorm( -USL ) ## lower bound of the chart
ucls <- phat.ewma.CRIT(lambdas, L0, 0, n, p.star, LSL=LSL, USL=USL)
print(cbind(lambdas, ucls))

# Table 2
mus <- c((0:4)/4, 1.5, 2, 3)
phat.ewma.ARL <- Vectorize("phat.ewma.arl", "mu")
arls <- NULL
for ( i in 1:length(lambdas) ) {
  arls <- cbind(arls, round(phat.ewma.ARL(lambdas[i], ucls[i], mus, n, p.star, LSL=LSL, USL=USL), digits=2))
}
arls <- data.frame(arls, row.names=NULL)
names(arls) <- lambdas
print(arls)

# Table 3
## Not run:
mus <- c(.25, .5, 1, 2)
phat.ewma.LAMBDA <- Vectorize("phat.ewma.lambda", "mu")
lambdas <- phat.ewma.LAMBDA(L0, mus, n, p.star, LSL=LSL, USL=USL)
print(cbind(mus, lambdas))
## End(Not run)

pois.cusum.arl          Compute ARLs of Poisson CUSUM control charts

Description
Computation of the (zero-state) Average Run Length (ARL) at given mean mu.

Usage
pois.cusum.arl(mu, km, hm, m, i0=0, sided="upper", rando=FALSE, gamma=0, km2=0, hm2=0, m2=0, i02=0, gamma2=0)

Arguments
mu  actual mean.
km      enumerator of the rational approximation of the reference value k.
hm      enumerator of the rational approximation of the alarm threshold h.
m       denominator of the rational approximation of the reference value.
i0      head start value as integer multiple of 1/m; should be an element of 0:hm.
sided   distinguishes between different one- and two-sided CUSUM control charts by choosing "upper", "lower" and "two", respectively.
rando   switch for activating randomization in order to allow continuous ARL control.
gamma   randomization probability. If the CUSUM statistic is equal to the threshold h, a control chart alarm is triggered with probability gamma.
km2,hm2,m2,i02,gamma2  corresponding values of the second CUSUM chart (to build a two-sided CUSUM scheme).

Details
The monitored data follow a Poisson distribution with mean mu. The ARL values of the resulting CUSUM control chart are determined via Markov chain calculations. We follow the algorithm given in Lucas (1985), expanded with some arithmetic 'tricks' (e.g., by deploying Toeplitz matrix algebra). A paper explaining it is under preparation.

Value
Returns a single value which resembles the ARL.

Author(s)
<NAME>

References
<NAME> (1985), Counted data CUSUM's, Technometrics 27(2), 129-144.
<NAME> and <NAME> (1996), ARLs and Higher-Order Run-Length Moments for the Poisson CUSUM, Journal of Quality Technology 28(3), 363-369.
<NAME>, <NAME> and <NAME> (1997), Poisson CUSUM versus c chart for defect data, Quality Engineering 9(4), 673-679.
<NAME>, <NAME> and <NAME> (1999), An approximate CUSUM procedure for surveillance of health events, Statistics in Medicine 18(16), 2111-2122.
<NAME>, <NAME>, <NAME>, and <NAME> (2010), A comparison of CUSUM, EWMA, and temporal scan statistics for detection of increases in Poisson rates, Quality and Reliability Engineering International 26(3), 279-289.
<NAME> and <NAME> (2011), Estimating the time of step change with Poisson CUSUM and EWMA control charts, International Journal of Production Research 49(10), 2857-2871.
See Also
later.

Examples
## Lucas 1985, upper chart (Tables 2 and 3)
k <- .25
h <- 10
m <- 4
km <- m * k
hm <- m * h
mu0 <- 1 * k
ARL <- pois.cusum.arl(mu0, km, hm-1, m)
# Lucas reported 438 (in Table 2, first block, row 10.0 .25 .0 ..., column 1.0)
# Recall that Lucas and others trigger an alarm, if the CUSUM statistic is greater than
# or equal to the alarm threshold h
print(ARL)
ARL <- pois.cusum.arl(mu0, km, hm-1, m, i0=round((hm-1)/2))
# Lucas reported 333 (in Table 3, first block, row 10.0 .25 .0 ..., column 1.0)
print(ARL)

## Lucas 1985, lower chart (Tables 4 and 5)
ARL <- pois.cusum.arl(mu0, km, hm-1, m, sided="lower")
# Lucas reported 437 (in Table 4, first block, row 10.0 .25 .0 ..., column 1.0)
print(ARL)
ARL <- pois.cusum.arl(mu0, km, hm-1, m, i0=round((hm-1)/2), sided="lower")
# Lucas reported 318 (in Table 5, first block, row 10.0 .25 .0 ..., column 1.0)
print(ARL)

pois.cusum.crit         Compute alarm thresholds and randomization constants of Poisson CUSUM control charts

Description
Computation of the CUSUM upper limit and, if needed, of the randomization probability, given mean mu0.

Usage
pois.cusum.crit(mu0, km, A, m, i0=0, sided="upper", rando=FALSE)

Arguments
mu0    actual in-control mean.
km     enumerator of the rational approximation of the reference value k.
A      target in-control ARL (average run length).
m      denominator of the rational approximation of the reference value.
i0     head start value as integer multiple of 1/m; should be an element of 0:100 (a more reasonable upper limit will be established soon). It is planned to set i0 as a fraction of the final threshold.
sided  distinguishes between different one- and two-sided CUSUM control charts by choosing "upper", "lower" and "two", respectively.
rando  switch for activating randomization in order to allow continuous ARL control.

Details
The monitored data follow a Poisson distribution with mean mu (here the in-control level mu0). The ARL values of the resulting CUSUM control chart are determined via Markov chain calculations.
With some grid search, we obtain the smallest value for the integer threshold component hm so that the resulting ARL is not smaller than A. If equality is needed, then activating rando=TRUE yields the corresponding randomization probability gamma. More details will follow in a paper that will be submitted in 2020.

Value
Returns two single values: the integer threshold hm, resulting in the final alarm threshold h=hm/m, and the randomization probability.

Author(s)
<NAME>

References
<NAME> (1985), Counted data CUSUM's, Technometrics 27(2), 129-144.
<NAME> and <NAME> (1996), ARLs and Higher-Order Run-Length Moments for the Poisson CUSUM, Journal of Quality Technology 28(3), 363-369.
<NAME>, <NAME> and <NAME> (1997), Poisson CUSUM versus c chart for defect data, Quality Engineering 9(4), 673-679.
<NAME>, <NAME> and <NAME> (1999), An approximate CUSUM procedure for surveillance of health events, Statistics in Medicine 18(16), 2111-2122.
<NAME>, <NAME>, <NAME>, and <NAME> (2010), A comparison of CUSUM, EWMA, and temporal scan statistics for detection of increases in Poisson rates, Quality and Reliability Engineering International 26(3), 279-289.
<NAME> and <NAME> (2011), Estimating the time of step change with Poisson CUSUM and EWMA control charts, International Journal of Production Research 49(10), 2857-2871.

See Also
later.

Examples
## Lucas 1985
mu0 <- 0.25
km <- 1
A <- 430
m <- 4
#cv <- pois.cusum.crit(mu0, km, A, m)
cv <- c(40, 0)
# Lucas reported h = 10 alias hm = 40 (in Table 2, first block, row 10.0 .25 .0 ..., column 1.0)
# Recall that Lucas and others trigger an alarm, if the CUSUM statistic is greater than
# or equal to the alarm threshold h
print(cv)

pois.cusum.crit.L0L1    Compute the CUSUM k and h for given in-control ARL L0 and out-of-control ARL L1, Poisson case

Description
Computation of the reference value k and the alarm threshold h for one-sided CUSUM control charts monitoring Poisson data, if the in-control ARL L0 and the out-of-control ARL L1 are given.
Usage
pois.cusum.crit.L0L1(mu0, L0, L1, sided="upper", OUTPUT=FALSE)

Arguments
mu0     in-control Poisson mean.
L0      in-control ARL.
L1      out-of-control ARL.
sided   distinguishes between "upper" and "lower" CUSUM designs.
OUTPUT  controls whether iteration details are printed.

Details
pois.cusum.crit.L0L1 determines the reference value k and the alarm threshold h for given in-control ARL L0 and out-of-control ARL L1 by applying grid search and using pois.cusum.arl() and pois.cusum.crit(). These CUSUM design rules were firstly (and quite rarely afterwards) used by Ewan and Kemp. In the Poisson case, Rossi et al. applied them while analyzing three different normal approximations of the Poisson distribution. See the example which illustrates the validity of all these approaches.

Value
Returns a data frame with results for the denominator m of the rational approximation, km as (integer) enumerator of the reference value (approximation), the corresponding out-of-control mean mu1, the final approximation k of the reference value, the threshold values hm (integer) and h (=hm/m), and the randomization constant gamma (the target in-control ARL is exactly matched).

Author(s)
<NAME>

References
<NAME> and <NAME> (1960), Sampling inspection of continuous processes with no autocorrelation between successive results, Biometrika 47(3/4), 363-380.
<NAME> (1962), The Use of Cumulative Sums for Sampling Inspection Schemes, Journal of the Royal Statistical Society C, Applied Statistics 11(1), 16-31.
<NAME>, <NAME> and <NAME> (1999), An approximate CUSUM procedure for surveillance of health events, Statistics in Medicine 18(16), 2111-2122.

See Also
pois.cusum.arl for zero-state ARL and pois.cusum.crit for threshold h computation.

Examples
## Table 1 from Rossi et al. (1999) -- one-sided CUSUM
La <- 500   # in-control ARL
Lr <- 7     # out-of-control ARL
m_a <- 0.52 # in-control mean of the Poisson variate
## Not run: kh <- xcusum.crit.L0L1(La, Lr, sided="one")
# instead of deploying EK1960, one could use more accurate n
EK_k <- 0.60  # EK1960 results in
EK_h <- 3.80  # Table 2 on p. 372
eZR <- 2*EK_h # reproduce normal ooc mean from reference value k
m_r <- 1.58   # EK1960 Table 3 on p. 377 for m_a = 0.52
R1 <- round( eZR/sqrt(m_a) + 1, digits=2)
R2 <- round( ( eZR/2/sqrt(m_a) + 1 )^2, digits=2)
R3 <- round(( sqrt(4 + 2*eZR/sqrt(m_a)) - 1 )^2, digits=2)
RS <- round( m_r / m_a, digits=2 )
## Not run: K_hk <- pois.cusum.crit.L0L1(m_a, La, Lr) # 'our' 'exact' approach
K_hk <- data.frame(m=1000, km=948, mu1=1.563777, k=0.948, hm=3832, h=3.832, gamma=0.1201901)
# get k for competing means mu0 (m_a) and mu1 (m_r)
k_m01 <- function(mu0, mu1) (mu1 - mu0) / (log(mu1) - log(mu0))
# get ooc mean mu1 (m_r) for given mu0 (m_a) and reference value k
m1_km0 <- function(mu0, k) {
  zero <- function(x) k - k_m01(mu0, x)
  upper <- mu0 + .5
  while ( zero(upper) > 0 ) upper <- upper + 0.5
  mu1 <- uniroot(zero, c(mu0*1.00000001, upper), tol=1e-9)$root
  mu1
}
K_m_r <- m1_km0(m_a, K_hk$k)
RK <- round( K_m_r / m_a, digits=2 )
cat(paste(m_a, R1, R2, R3, RS, RK, "\n", sep="\t"))

pois.ewma.ad            Compute steady-state ARLs of Poisson EWMA control charts

Description
Computation of the steady-state Average Run Length (ARL) at given mean mu.

Usage
pois.ewma.ad(lambda, AL, AU, mu0, mu, sided="two", rando=FALSE, gL=0, gU=0, mcdesign="classic", N=101)

Arguments
lambda  smoothing parameter of the EWMA p control chart.
AL, AU  factors to build the lower and upper control limit, respectively, of the Poisson EWMA control chart.
mu0     in-control mean.
mu      actual mean.
sided   distinguishes between one- and two-sided EWMA control charts by choosing "upper", "lower", "two", and "zwei", respectively.
rando     switch between the standard limit treatment, FALSE, and an additional randomisation (to allow 'perfect' ARL calibration) by setting TRUE. If randomisation is used, then set the corresponding probabilities, gL and gU, appropriately.
gL, gU    if the EWMA statistic is at the limit (approximately), then an alarm is triggered with probability gL and gU for the lower and upper limit, respectively.
mcdesign  choose either "classic", which follows Borror, Champ and Rigdon (1998), or the more sophisticated "transfer", which considerably improves the accuracy.
N         number of states of the approximating Markov chain; equal to the dimension of the resulting linear equation system.

Details

The monitored data follow a Poisson distribution with mean mu. The ARL values of the resulting EWMA control chart are determined by Markov chain approximation. We follow the algorithm given in Borror, Champ and Rigdon (1998). The function is in an early development phase.

Value

Returns a single value, the steady-state ARL.

Author(s)

<NAME>

References

<NAME>, <NAME> and <NAME> (1998), Poisson EWMA control charts, Journal of Quality Technology 30(4), 352-361.
<NAME> and <NAME> (2020), Improving the ARL profile and the accuracy of its calculation for Poisson EWMA charts, Quality and Reliability Engineering International 36(3), 876-889.

See Also

later.
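Since the help text only names the underlying Markov chain approximation, a rough base-R sketch may help to fix ideas. Everything below is an illustrative assumption rather than the package's actual implementation: the state grid, the half-open cells, the limit construction mu0 -/+ A*sqrt(lambda*mu0/(2-lambda)), and the use of the conditional (left Perron eigenvector) distribution to turn zero-state ARLs into a steady-state figure.

```r
# Toy Markov chain approximation of a two-sided Poisson EWMA chart
# Z_t = (1-lambda)*Z_{t-1} + lambda*X_t,  X_t ~ Pois(mu)  (sketch only)
pewma_chain <- function(lambda, A, mu0, mu, N = 101) {
  half <- A * sqrt(lambda * mu0 / (2 - lambda))
  z <- seq(mu0 - half, mu0 + half, length.out = N)  # cell midpoints
  w <- z[2] - z[1]                                  # cell width
  Q <- matrix(0, N, N)                              # transient transitions
  for ( i in 1:N ) {
    # next state falls into cell j iff X lies in ((lo_j, hi_j], an interval
    # obtained by inverting the EWMA recursion at the cell borders
    lo <- ( (z - w/2) - (1 - lambda) * z[i] ) / lambda
    hi <- ( (z + w/2) - (1 - lambda) * z[i] ) / lambda
    Q[i, ] <- ppois(floor(hi), mu) - ppois(floor(lo), mu)
  }
  Q
}

lambda <- 0.27; A <- 3.319; mu0 <- 20
Q   <- pewma_chain(lambda, A, mu0, mu0)
arl <- solve(diag(nrow(Q)) - Q, rep(1, nrow(Q)))  # zero-state ARL per state
# conditional stationary distribution: left Perron eigenvector of Q
psi <- abs(Re(eigen(t(Q))$vectors[, 1])); psi <- psi / sum(psi)
AD  <- sum(psi * arl)                             # steady-state ARL (sketch)
```

The real routine's mcdesign options ("classic" vs. "transfer") differ in exactly how this discretization is set up, which is what drives the accuracy gains reported by Morais and Knoth (2020).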
Examples

## Borror, Champ and Rigdon (1998), Table 2, PEWMA column
mu0 <- 20
lambda <- 0.27
A <- 3.319
mu1 <- c(2*(3:15), 35)
ARL1 <- AD1 <- rep(NA, length(mu1))
for ( i in 1:length(mu1) ) {
  ARL1[i] <- round(pois.ewma.arl(lambda,A,A,mu0,mu0,mu1[i],mcdesign="classic"),digits=1)
  AD1[i] <- round(pois.ewma.ad(lambda,A,A,mu0,mu1[i],mcdesign="classic"),digits=1)
}
print( cbind(mu1, ARL1, AD1) )

## Morais and Knoth (2020), Table 2, lambda = 0.27 column
## randomisation not implemented for pois.ewma.ad()
lambda <- 0.27
AL <- 3.0870
AU <- 3.4870
gL <- 0.001029
gU <- 0.000765
mu2 <- c(16, 18, 19.99, mu0, 20.01, 22, 24)
ARL2 <- AD2 <- rep(NA, length(mu2))
for ( i in 1:length(mu2) ) {
  ARL2[i] <- round(pois.ewma.arl(lambda,AL,AU,mu0,mu0,mu2[i],rando=FALSE), digits=1)
  AD2[i] <- round(pois.ewma.ad(lambda,AL,AU,mu0,mu2[i],rando=FALSE), digits=1)
}
print( cbind(mu2, ARL2, AD2) )

pois.ewma.arl    Compute ARLs of Poisson EWMA control charts

Description

Computation of the (zero-state) Average Run Length (ARL) at given mean mu.

Usage

pois.ewma.arl(lambda, AL, AU, mu0, z0, mu, sided="two", rando=FALSE, gL=0, gU=0, mcdesign="transfer", N=101)

Arguments

lambda  smoothing parameter of the Poisson EWMA control chart.
AL, AU  factors to build the lower and upper control limit, respectively, of the Poisson EWMA control chart.
mu0     in-control mean.
z0      so-called headstart (gives fast initial response).
mu      actual mean.
sided   distinguishes between one- and two-sided EWMA control charts by choosing "upper", "lower", "two", and "zwei", respectively.
rando   switch between the standard limit treatment, FALSE, and an additional randomisation (to allow 'perfect' ARL calibration) by setting TRUE. If randomisation is used, then set the corresponding probabilities, gL and gU, appropriately.
gL, gU  if the EWMA statistic is at the limit (approximately), then an alarm is triggered with probability gL and gU for the lower and upper limit, respectively.
mcdesign  choose either "classic", which follows Borror, Champ and Rigdon (1998), or the more sophisticated "transfer", which considerably improves the accuracy.
N         number of states of the approximating Markov chain; equal to the dimension of the resulting linear equation system.

Details

The monitored data follow a Poisson distribution with mean mu. The ARL values of the resulting EWMA control chart are determined by Markov chain approximation. We follow the algorithm given in Borror, Champ and Rigdon (1998). However, by setting mcdesign="transfer" (now the default) from Morais and Knoth (2020), the accuracy is considerably improved.

Value

Returns a single value, the (zero-state) ARL.

Author(s)

<NAME>

References

<NAME>, <NAME> and <NAME> (1998), Poisson EWMA control charts, Journal of Quality Technology 30(4), 352-361.
<NAME> and <NAME> (2020), Improving the ARL profile and the accuracy of its calculation for Poisson EWMA charts, Quality and Reliability Engineering International 36(3), 876-889.

See Also

later.
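A plain Monte Carlo simulation of the run length offers an independent cross-check of such Markov-chain ARL values. The recursion below follows the chart description in this page; the limit construction mu0 -/+ A*sqrt(lambda*mu0/(2-lambda)) and the simulation settings (replicate count, seed, the mu = 24 shift) are illustrative assumptions, not package internals.

```r
# Monte Carlo run length of a two-sided Poisson EWMA chart:
# Z_t = (1-lambda)*Z_{t-1} + lambda*X_t, alarm once Z_t leaves [LCL, UCL]
pewma_rl_sim <- function(lambda, AL, AU, mu0, z0, mu, reps = 2000) {
  LCL <- mu0 - AL * sqrt(lambda * mu0 / (2 - lambda))
  UCL <- mu0 + AU * sqrt(lambda * mu0 / (2 - lambda))
  rl  <- numeric(reps)
  for ( r in 1:reps ) {
    z <- z0; n <- 0
    repeat {
      n <- n + 1
      z <- (1 - lambda) * z + lambda * rpois(1, mu)
      if ( z < LCL || z > UCL ) break
    }
    rl[r] <- n
  }
  mean(rl)  # simulated ARL (standard error shrinks with reps)
}

set.seed(1)
# out-of-control case mu = 24, limits as in the Morais/Knoth design above
pewma_rl_sim(0.27, 3.0870, 3.4870, 20, 20, 24)
```

For in-control means the ARL is large, so far more replicates would be needed for a tight estimate; the Markov chain approach avoids that cost entirely.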
Examples

## Borror, Champ and Rigdon (1998), Table 2, PEWMA column
mu0 <- 20
lambda <- 0.27
A <- 3.319
mu1 <- c(2*(3:15), 35)
ARL1 <- rep(NA, length(mu1))
for ( i in 1:length(mu1) )
  ARL1[i] <- pois.ewma.arl(lambda, A, A, mu0, mu0, mu1[i], mcdesign="classic")
print(cbind(mu1, round(ARL1, digits=1)))

## the same numbers with improved accuracy
ARL2 <- rep(NA, length(mu1))
for ( i in 1:length(mu1) )
  ARL2[i] <- pois.ewma.arl(lambda, A, A, mu0, mu0, mu1[i], mcdesign="transfer")
print(cbind(mu1, round(ARL2, digits=1)))

## Morais and Knoth (2020), Table 2, lambda = 0.27 column
lambda <- 0.27
AL <- 3.0870
AU <- 3.4870
gL <- 0.001029
gU <- 0.000765
mu0 <- 20
mu1 <- c(16, 18, 19.99, mu0, 20.01, 22, 24)
ARL3 <- rep(NA, length(mu1))
for ( i in 1:length(mu1) )
  ARL3[i] <- pois.ewma.arl(lambda,AL,AU,mu0,mu0,mu1[i],rando=TRUE,gL=gL,gU=gU, N=101)
print(cbind(mu1, round(ARL3, digits=1)))

pois.ewma.crit    Compute critical values of Poisson EWMA control charts

Description

Computation of the control limit factors of Poisson EWMA control charts for a given in-control ARL.

Usage

pois.ewma.crit(lambda, L0, mu0, z0, AU=3, sided="two", design="sym", rando=FALSE, mcdesign="transfer", N=101, jmax=4)

Arguments

lambda  smoothing parameter of the Poisson EWMA control chart.
L0      value of the so-called in-control Average Run Length (ARL) for the Poisson EWMA control chart.
mu0     in-control mean.
z0      so-called headstart (gives fast initial response).
AU      in case of the lower chart, deployed as reflecting upper barrier – might be increased step by step until the resulting lower limit does not change anymore.
sided   distinguishes between one- and two-sided EWMA control charts by choosing "upper", "lower", and "two", respectively.
design  distinguishes between limits symmetric to the in-control mean mu0 and an ARL-unbiased design (ARL maximum at mu0); use the shortcuts "sym" and "unb", respectively, please.
rando   switch between the standard limit treatment, FALSE, and an additional randomisation (to allow 'perfect' ARL calibration) by setting TRUE.
If randomisation is used, then the corresponding probabilities, gL and gU, are determined appropriately.
mcdesign  choose either "classic", which follows Borror, Champ and Rigdon (1998), or the more sophisticated "transfer", which considerably improves the accuracy.
N         number of states of the approximating Markov chain; equal to the dimension of the resulting linear equation system.
jmax      number of digits for the factors A to be calculated (sort of accuracy).

Details

The monitored data follow a Poisson distribution with mean mu. Here we solve the inverse task of the usual ARL calculation: determine the control limit factors so that the in-control ARL is (roughly) equal to L0. The ARL values underneath the routine are determined by Markov chain approximation. The algorithm is just a grid search that takes care of the discrete ARL behavior.

Value

Returns one or two values, the control limit factors.

Author(s)

<NAME>

References

<NAME>, <NAME> and <NAME> (1998), Poisson EWMA control charts, Journal of Quality Technology 30(4), 352-361.
<NAME> and <NAME> (2020), Improving the ARL profile and the accuracy of its calculation for Poisson EWMA charts, Quality and Reliability Engineering International 36(3), 876-889.

See Also

later.

Examples

## Borror, Champ and Rigdon (1998), page 30, original value is A = 2.8275
mu0 <- 4
lambda <- 0.2
L0 <- 351
A <- pois.ewma.crit(lambda, L0, mu0, mu0, mcdesign="classic")
print(round(A, digits=4))

## Morais and Knoth (2020), Table 2, lambda = 0.27 column
lambda <- 0.27
L0 <- 1233.4
ccgg <- pois.ewma.crit(lambda,1233.4,mu0,mu0,design="unb",rando=TRUE,mcdesign="transfer")
print(ccgg, digits=3)

quadrature.nodes.weights    Calculate quadrature nodes and weights

Description

Computation of the nodes and weights to enable numerical quadrature.

Usage

quadrature.nodes.weights(n, type="GL", x1=-1, x2=1)

Arguments

n     number of nodes (and weights).
type  quadrature type – currently Gauss-Legendre, "GL", and Radau, "Ra", are supported.
x1    lower limit of the integration interval.
x2    upper limit of the integration interval.

Details

A more detailed description will follow soon. The algorithm for the Gauss-Legendre quadrature was delivered to me by Knut Petras, while the one for the Radau quadrature was taken from <NAME>.

Value

Returns two vectors which hold the needed quadrature nodes and weights.

Author(s)

<NAME>

References

<NAME> and <NAME> (2011), Quadrature Theory. The Theory of Numerical Integration on a Compact Interval, Mathematical Surveys and Monographs, American Mathematical Society.
<NAME> (2015), https://people.sc.fsu.edu/~jburkardt/f_src/quadrule/quadrule.html

See Also

Many of the ARL routines use the Gauss-Legendre nodes.

Examples

# Gauss-Legendre
n <- 10
qnw <- quadrature.nodes.weights(n, type="GL")
qnw

# Radau
n <- 10
qnw <- quadrature.nodes.weights(n, type="Ra")
qnw

scusum.arl    Compute ARLs of CUSUM control charts (variance charts)

Description

Computation of the (zero-state) Average Run Length (ARL) for different types of CUSUM control charts (based on the sample variance S^2) monitoring normal variance.

Usage

scusum.arl(k, h, sigma, df, hs=0, sided="upper", k2=NULL, h2=NULL, hs2=0, r=40, qm=30, version=2)

Arguments

k      reference value of the CUSUM control chart.
h      decision interval (alarm limit, threshold) of the CUSUM control chart.
sigma  true standard deviation.
df     actual degrees of freedom, corresponds to subgroup size (for known mean it is equal to the subgroup size, for unknown mean it is equal to subgroup size minus one).
hs     so-called headstart (enables fast initial response).
sided  distinguishes between one- and two-sided CUSUM-S^2 control charts by choosing "upper" (upper chart), "lower" (lower chart), and "two" (two-sided chart), respectively. Note that for the two-sided chart the parameters "k2" and "h2" have to be set too.
k2     in case of a two-sided CUSUM chart for variance, the reference value of the lower chart.
h2       in case of a two-sided CUSUM chart for variance, the decision interval of the lower chart.
hs2      in case of a two-sided CUSUM chart for variance, the headstart of the lower chart.
r        dimension of the resulting linear equation system (highest order of the collocation polynomials times number of intervals – see Knoth 2006).
qm       number of quadrature nodes for calculating the collocation definite integrals.
version  distinguishes version numbers (1,2,...). For internal use only.

Details

scusum.arl determines the Average Run Length (ARL) by numerically solving the related ARL integral equation by means of collocation (piecewise Chebyshev polynomials).

Value

Returns a single value, the ARL.

Author(s)

<NAME>

References

<NAME> (2005), Accurate ARL computation for EWMA-S^2 control charts, Statistics and Computing 15, 341-352.
<NAME> (2006), Computation of the ARL for CUSUM-S^2 schemes, Computational Statistics & Data Analysis 51, 499-512.

See Also

xcusum.arl for zero-state ARL computation of CUSUM control charts for monitoring normal mean.

Examples

## Knoth (2006)
## compare with Table 1 (p. 507)
k <- 1.46 # sigma1 = 1.5
df <- 1
h <- 10 # original values
# sigma  coll63    BE        Hawkins  MC 10^9 (s.e.)
# 1      260.7369  260.7546  261.32   260.7399 (0.0081)
# 1.1     90.1319   90.1389   90.31    90.1319 (0.0027)
# 1.2     43.6867   43.6897   43.75    43.6845 (0.0013)
# 1.3     26.2916   26.2932   26.32    26.2929 (0.0007)
# 1.4     18.1231   18.1239   18.14    18.1235 (0.0005)
# 1.5     13.6268   13.6273   13.64    13.6272 (0.0003)
# 2        5.9904    5.9910    5.99     5.9903 (0.0001)
# replicate the column coll63
sigma <- c(1, 1.1, 1.2, 1.3, 1.4, 1.5, 2)
arl <- rep(NA, length(sigma))
for ( i in 1:length(sigma) )
  arl[i] <- round(scusum.arl(k, h, sigma[i], df, r=63, qm=20, version=2), digits=4)
data.frame(sigma, arl)

scusum.crit    Compute decision intervals of CUSUM control charts (variance charts)

Description

Computation of the decision intervals (alarm limits) for different types of CUSUM control charts (based on the sample variance S^2) monitoring normal variance.

Usage

scusum.crit(k, L0, sigma, df, hs=0, sided="upper", mode="eq.tails", k2=NULL, hs2=0, r=40, qm=30)

Arguments

k      reference value of the CUSUM control chart.
L0     in-control ARL.
sigma  true standard deviation.
df     actual degrees of freedom, corresponds to subgroup size (for known mean it is equal to the subgroup size, for unknown mean it is equal to subgroup size minus one).
hs     so-called headstart (enables fast initial response).
sided  distinguishes between one- and two-sided CUSUM-S^2 control charts by choosing "upper" (upper chart), "lower" (lower chart), and "two" (two-sided chart), respectively. Note that for the two-sided chart the parameter "k2" has to be set too.
mode   only deployed for sided="two" – with "eq.tails", two one-sided CUSUM charts (lower and upper) with the same in-control ARL are coupled; with "unbiased", a certain unbiasedness of the ARL function is guaranteed (here, both the lower and the upper control limit are calculated).
k2     in case of a two-sided CUSUM chart for variance, the reference value of the lower chart.
hs2    in case of a two-sided CUSUM chart for variance, the headstart of the lower chart.
r   dimension of the resulting linear equation system (highest order of the collocation polynomials times number of intervals – see Knoth 2006).
qm  number of quadrature nodes for calculating the collocation definite integrals.

Details

scusum.crit determines the decision interval (alarm limit) for given in-control ARL L0 by applying the secant rule and using scusum.arl().

Value

Returns a single value, the decision interval h.

Author(s)

<NAME>

References

<NAME> (2005), Accurate ARL computation for EWMA-S^2 control charts, Statistics and Computing 15, 341-352.
<NAME> (2006), Computation of the ARL for CUSUM-S^2 schemes, Computational Statistics & Data Analysis 51, 499-512.

See Also

xcusum.arl for zero-state ARL computation of CUSUM control charts monitoring normal mean.

Examples

## Knoth (2006)
## compare with Table 1 (p. 507)
k <- 1.46 # sigma1 = 1.5
df <- 1
L0 <- 260.74
h <- scusum.crit(k, L0, 1, df)
h # original value is 10

scusums.arl    Compute ARLs of CUSUM-Shewhart control charts (variance charts)

Description

Computation of the (zero-state) Average Run Length (ARL) for different types of CUSUM-Shewhart combo control charts (based on the sample variance S^2) monitoring normal variance.

Usage

scusums.arl(k, h, cS, sigma, df, hs=0, sided="upper", k2=NULL, h2=NULL, hs2=0, r=40, qm=30, version=2)

Arguments

k      reference value of the CUSUM control chart.
h      decision interval (alarm limit, threshold) of the CUSUM control chart.
cS     Shewhart limit.
sigma  true standard deviation.
df     actual degrees of freedom, corresponds to subgroup size (for known mean it is equal to the subgroup size, for unknown mean it is equal to subgroup size minus one).
hs     so-called headstart (enables fast initial response).
sided  distinguishes between one- and two-sided CUSUM-S^2 control charts by choosing "upper" (upper chart), "lower" (lower chart), and "two" (two-sided chart), respectively. Note that for the two-sided chart the parameters "k2" and "h2" have to be set too.
k2       in case of a two-sided CUSUM chart for variance, the reference value of the lower chart.
h2       in case of a two-sided CUSUM chart for variance, the decision interval of the lower chart.
hs2      in case of a two-sided CUSUM chart for variance, the headstart of the lower chart.
r        dimension of the resulting linear equation system (highest order of the collocation polynomials times number of intervals – see Knoth 2006).
qm       number of quadrature nodes for calculating the collocation definite integrals.
version  distinguishes version numbers (1,2,...). For internal use only.

Details

scusums.arl determines the Average Run Length (ARL) by numerically solving the related ARL integral equation by means of collocation (piecewise Chebyshev polynomials).

Value

Returns a single value, the ARL.

Author(s)

<NAME>

References

<NAME> (2006), Computation of the ARL for CUSUM-S^2 schemes, Computational Statistics & Data Analysis 51, 499-512.

See Also

scusum.arl for zero-state ARL computation of standalone CUSUM control charts for monitoring normal variance.

Examples

## will follow

sewma.arl    Compute ARLs of EWMA control charts (variance charts)

Description

Computation of the (zero-state) Average Run Length (ARL) for different types of EWMA control charts (based on the sample variance S^2) monitoring normal variance.

Usage

sewma.arl(l,cl,cu,sigma,df,s2.on=TRUE,hs=NULL,sided="upper",r=40,qm=30)

Arguments

l      smoothing parameter lambda of the EWMA control chart.
cl     lower control limit of the EWMA control chart.
cu     upper control limit of the EWMA control chart.
sigma  true standard deviation.
df     actual degrees of freedom, corresponds to subgroup size (for known mean it is equal to the subgroup size, for unknown mean it is equal to subgroup size minus one).
s2.on  distinguishes between S^2 and S chart.
hs     so-called headstart (enables fast initial response); the default (NULL) yields the expected in-control value of S^2 (1) and S (c4), respectively.
sided  distinguishes between one- and two-sided EWMA-S^2 control charts by choosing "upper" (upper chart without reflection at cl – the actual value of cl is not used), "Rupper" (upper chart with reflection at cl), "Rlower" (lower chart with reflection at cu), and "two" (two-sided chart), respectively.
r      dimension of the resulting linear equation system (highest order of the collocation polynomials).
qm     number of quadrature nodes for calculating the collocation definite integrals.

Details

sewma.arl determines the Average Run Length (ARL) by numerically solving the related ARL integral equation by means of collocation (Chebyshev polynomials).

Value

Returns a single value, the ARL.

Author(s)

<NAME>

References

S. Knoth (2005), Accurate ARL computation for EWMA-S^2 control charts, Statistics and Computing 15, 341-352.
<NAME> (2006), Computation of the ARL for CUSUM-S^2 schemes, Computational Statistics & Data Analysis 51, 499-512.

See Also

xewma.arl for zero-state ARL computation of EWMA control charts for monitoring normal mean.

Examples

## Knoth (2005)
## compare with Table 1 (p. 347): 249.9997
## Monte Carlo with 10^9 replicates: 249.9892 +/- 0.008
l <- .025
df <- 1
cu <- 1 + 1.661865*sqrt(l/(2-l))*sqrt(2/df)
sewma.arl(l,0,cu,1,df)

## ARL values for upper and lower EWMA charts with reflecting barriers
## (reflection at in-control level sigma0 = 1)
## examples from Knoth (2006), Tables 4 and 5
Ssewma.arl <- Vectorize("sewma.arl", "sigma")

## upper chart with reflection at sigma0=1 in Table 4
## original entries are
# sigma  ARL
# 1      100.0
# 1.01   85.3
# 1.02   73.4
# 1.03   63.5
# 1.04   55.4
# 1.05   48.7
# 1.1    27.9
# 1.2    12.9
# 1.3    7.86
# 1.4    5.57
# 1.5    4.30
# 2      2.11
## Not run:
l <- 0.15
df <- 4
cu <- 1 + 2.4831*sqrt(l/(2-l))*sqrt(2/df)
sigmas <- c(1 + (0:5)/100, 1 + (1:5)/10, 2)
arls <- round(Ssewma.arl(l, 1, cu, sigmas, df, sided="Rupper", r=100), digits=2)
data.frame(sigmas, arls)
## End(Not run)

## lower chart with reflection at sigma0=1 in Table 5
## original entries are
# sigma  ARL
# 1      200.04
# 0.9    38.47
# 0.8    14.63
# 0.7    8.65
# 0.6    6.31
## Not run:
l <- 0.115
df <- 5
cl <- 1 - 2.0613*sqrt(l/(2-l))*sqrt(2/df)
sigmas <- c((10:6)/10)
arls <- round(Ssewma.arl(l, cl, 1, sigmas, df, sided="Rlower", r=100), digits=2)
data.frame(sigmas, arls)
## End(Not run)

sewma.arl.prerun    Compute ARLs of EWMA control charts (variance charts) in case of estimated parameters

Description

Computation of the (zero-state) Average Run Length (ARL) for EWMA control charts (based on the sample variance S^2) monitoring normal variance with estimated parameters.

Usage

sewma.arl.prerun(l, cl, cu, sigma, df1, df2, hs=1, sided="upper", r=40, qm=30, qm.sigma=30, truncate=1e-10)

Arguments

l      smoothing parameter lambda of the EWMA control chart.
cl     lower control limit of the EWMA control chart.
cu     upper control limit of the EWMA control chart.
sigma  true standard deviation.
df1    actual degrees of freedom, corresponds to subgroup size (for known mean it is equal to the subgroup size, for unknown mean it is equal to subgroup size minus one).
df2       degrees of freedom of the pre-run variance estimator.
hs        so-called headstart (enables fast initial response).
sided     distinguishes between one- and two-sided EWMA-S^2 control charts by choosing "upper" (upper chart without reflection at cl – the actual value of cl is not used), "Rupper" (upper chart with reflection at cl), "Rlower" (lower chart with reflection at cu), and "two" (two-sided chart), respectively.
r         dimension of the resulting linear equation system (highest order of the collocation polynomials).
qm        number of quadrature nodes for calculating the collocation definite integrals.
qm.sigma  number of quadrature nodes for convoluting the standard deviation uncertainty.
truncate  size of truncated tail.

Details

Essentially, the ARL function sewma.arl is convoluted with the distribution of the sample standard deviation. For details see Jones/Champ/Rigdon (2001) and Knoth (2014?).

Value

Returns a single value, the ARL.

Author(s)

<NAME>

References

<NAME>, <NAME>, <NAME> (2001), The performance of exponentially weighted moving average charts with estimated parameters, Technometrics 43, 156-167.
<NAME> (2005), Accurate ARL computation for EWMA-S^2 control charts, Statistics and Computing 15, 341-352.
<NAME> (2006), Computation of the ARL for CUSUM-S^2 schemes, Computational Statistics & Data Analysis 51, 499-512.

See Also

sewma.arl for the zero-state ARL function of EWMA control charts w/o pre-run uncertainty.

Examples

## will follow

sewma.crit    Compute critical values of EWMA control charts (variance charts)

Description

Computation of the critical values (similar to alarm limits) for different types of EWMA control charts (based on the sample variance S^2) monitoring normal variance.

Usage

sewma.crit(l,L0,df,sigma0=1,cl=NULL,cu=NULL,hs=NULL,s2.on=TRUE, sided="upper",mode="fixed",ur=4,r=40,qm=30)

Arguments

l   smoothing parameter lambda of the EWMA control chart.
L0  in-control ARL.
df      actual degrees of freedom, corresponds to subgroup size (for known mean it is equal to the subgroup size, for unknown mean it is equal to subgroup size minus one).
sigma0  in-control standard deviation.
cl      deployed for sided="Rupper", that is, upper variance control chart with lower reflecting barrier cl.
cu      for two-sided (sided="two") and fixed upper control limit (mode="fixed") a value larger than sigma0 has to be given; for all other cases cu is ignored.
hs      so-called headstart (enables fast initial response); the default (NULL) yields the expected in-control value of S^2 (1) and S (c4), respectively.
s2.on   distinguishes between S^2 and S chart.
sided   distinguishes between one- and two-sided EWMA-S^2 control charts by choosing "upper" (upper chart without reflection at cl – the actual value of cl is not used), "Rupper" (upper chart with reflection at cl), "Rlower" (lower chart with reflection at cu), and "two" (two-sided chart), respectively.
mode    only deployed for sided="two" – with "fixed" an upper control limit (see cu) is set and only the lower one is calculated to obtain the in-control ARL L0, while with "unbiased" a certain unbiasedness of the ARL function is guaranteed (here, both the lower and the upper control limit are calculated). With "vanilla", limits symmetric around 1 (the in-control value of the variance) are determined, while for "eq.tails" the in-control ARL values of two single EWMA variance charts (decomposing the two-sided scheme into one lower and one upper scheme) are matched.
ur      truncation of the lower chart for the eq.tails mode.
r       dimension of the resulting linear equation system (highest order of the collocation polynomials).
qm      number of quadrature nodes for calculating the collocation definite integrals.

Details

sewma.crit determines the critical values (similar to alarm limits) for given in-control ARL L0 by applying the secant rule and using sewma.arl().
In case of sided="two" and mode="unbiased", a two-dimensional secant rule is applied that also ensures that the maximum of the ARL function for given standard deviation is attained at sigma0. See Knoth (2010) and the related example.

Value

Returns the lower and upper control limits cl and cu.

Author(s)

<NAME>

References

<NAME> and <NAME> and <NAME> (1998), EWMA-Karten zur \"Uberwachung der Streuung von Qualit\"atsmerkmalen, Allgemeines Statistisches Archiv 82, 327-338.
<NAME> and <NAME>. and <NAME> (1999), A comparison of control charting procedures for monitoring process dispersion, IIE Transactions 31, 569-579.
<NAME> (2005), Accurate ARL computation for EWMA-S^2 control charts, Statistics and Computing 15, 341-352.
<NAME> (2006a), Computation of the ARL for CUSUM-S^2 schemes, Computational Statistics & Data Analysis 51, 499-512.
<NAME> (2006b), The art of evaluating monitoring schemes – how to measure the performance of control charts? in Frontiers in Statistical Quality Control 8, <NAME> and <NAME> (Eds.), Physica Verlag, Heidelberg, Germany, 74-99.
<NAME> (2010), Control Charting Normal Variance – Reflections, Curiosities, and Recommendations, in Frontiers in Statistical Quality Control 9, <NAME> and <NAME> (Eds.), Physica Verlag, Heidelberg, Germany, 3-18.

See Also

sewma.arl for calculation of the ARL of variance charts.

Examples

## Mittag et al. (1998)
## compare their upper critical value 2.91 that
## leads to the upper control limit via the formula shown below
## (for the usual upper EWMA S^2 chart).
## See Knoth (2006b) for a discussion of this EWMA setup and its evaluation.
l <- 0.18
L0 <- 250
df <- 4
limits <- sewma.crit(l, L0, df)
limits["cu"]
limits.cu.mittag_et_al <- 1 + sqrt(l/(2-l))*sqrt(2/df)*2.91
limits.cu.mittag_et_al

## Knoth (2005)
## reproduce the critical value given in Figure 2 (c=1.661865) for
## an upper EWMA S^2 chart with df=1
l <- 0.025
L0 <- 250
df <- 1
limits <- sewma.crit(l, L0, df)
cv.Fig2 <- (limits["cu"]-1)/( sqrt(l/(2-l))*sqrt(2/df) )
cv.Fig2
## the small difference (sixth digit after the decimal point) stems from the
## tighter criterion in the secant rule implemented in the R package.

## demo of unbiased ARL curves
## Please do not deploy matrix dimensions smaller than 50 -- for the
## sake of accuracy, the value 80 was used.
## Additionally, this example needs between 1 and 2 minutes on a 1.6 GHz box.
## Not run:
l <- 0.1
L0 <- 500
df <- 4
limits <- sewma.crit(l, L0, df, sided="two", mode="unbiased", r=80)
SEWMA.arl <- Vectorize(sewma.arl, "sigma")
SEWMA.ARL <- function(sigma) SEWMA.arl(l, limits[1], limits[2], sigma, df, sided="two", r=80)
layout(matrix(1:2, nrow=1))
curve(SEWMA.ARL, .75, 1.25, log="y")
curve(SEWMA.ARL, .95, 1.05, log="y")
## End(Not run) # the above stuff needs about 1 minute

## control limits for upper and lower EWMA charts with reflecting barriers
## (reflection at in-control level sigma0 = 1)
## examples from Knoth (2006a), Tables 4 and 5
## Not run:
## upper chart with reflection at sigma0=1 in Table 4: c = 2.4831
l <- 0.15
L0 <- 100
df <- 4
limits <- sewma.crit(l, L0, df, cl=1, sided="Rupper", r=100)
cv.Tab4 <- (limits["cu"]-1)/( sqrt(l/(2-l))*sqrt(2/df) )
cv.Tab4
## lower chart with reflection at sigma0=1 in Table 5: c = 2.0613
l <- 0.115
L0 <- 200
df <- 5
limits <- sewma.crit(l, L0, df, cu=1, sided="Rlower", r=100)
cv.Tab5 <- -(limits["cl"]-1)/( sqrt(l/(2-l))*sqrt(2/df) )
cv.Tab5
## End(Not run)

sewma.crit.prerun    Compute critical values of EWMA control charts (variance charts) under pre-run uncertainty

Description

Computation of the critical values (similar to alarm limits) for EWMA control charts
monitoring normal variance under pre-run uncertainty.

Usage

sewma.crit.prerun(l,L0,df1,df2,sigma0=1,cl=NULL,cu=NULL,hs=1,sided="upper", mode="fixed",r=40,qm=30,qm.sigma=30,truncate=1e-10, tail_approx=TRUE,c.error=1e-10,a.error=1e-9)

Arguments

l             smoothing parameter lambda of the EWMA control chart.
L0            in-control ARL.
df1           actual degrees of freedom, corresponds to subgroup size (for known mean it is equal to the subgroup size, for unknown mean it is equal to subgroup size minus one).
df2           degrees of freedom of the pre-run variance estimator.
sigma,sigma0  true and in-control standard deviation, respectively.
cl            deployed for sided="Rupper", that is, upper variance control chart with lower reflecting barrier cl.
cu            for two-sided (sided="two") and fixed upper control limit (mode="fixed") a value larger than sigma0 has to be given; for all other cases cu is ignored.
hs            so-called headstart (enables fast initial response).
sided         distinguishes between one- and two-sided EWMA-S^2 control charts by choosing "upper" (upper chart without reflection at cl – the actual value of cl is not used), "Rupper" (upper chart with reflection at cl), "Rlower" (lower chart with reflection at cu), and "two" (two-sided chart), respectively.
mode          only deployed for sided="two" – with "fixed" an upper control limit (see cu) is set and only the lower one is calculated to obtain the in-control ARL L0, while with "unbiased" a certain unbiasedness of the ARL function is guaranteed (here, both the lower and the upper control limit are calculated).
r             dimension of the resulting linear equation system (highest order of the collocation polynomials).
qm            number of quadrature nodes for calculating the collocation definite integrals.
qm.sigma      number of quadrature nodes for convoluting the standard deviation uncertainty.
truncate      size of truncated tail.
tail_approx   controls whether the geometric tail approximation is used (faster) or not.
c.error  error bound for two succeeding values of the critical value during application of the secant rule.
a.error  error bound for the quantile level alpha during application of the secant rule.

Details

sewma.crit.prerun determines the critical values (similar to alarm limits) for given in-control ARL L0 by applying the secant rule and using sewma.arl.prerun(). In case of sided="two" and mode="unbiased", a two-dimensional secant rule is applied that also ensures that the maximum of the ARL function for given standard deviation is attained at sigma0. See Knoth (2010) for some details of the algorithm involved.

Value

Returns the lower and upper control limits cl and cu.

Author(s)

<NAME>

References

<NAME> and <NAME> and <NAME> (1998), EWMA-Karten zur \"Uberwachung der Streuung von Qualit\"atsmerkmalen, Allgemeines Statistisches Archiv 82, 327-338.
<NAME> (2005), Accurate ARL computation for EWMA-S^2 control charts, Statistics and Computing 15, 341-352.
<NAME> (2010), Control Charting Normal Variance – Reflections, Curiosities, and Recommendations, in Frontiers in Statistical Quality Control 9, <NAME> and <NAME> (Eds.), Physica Verlag, Heidelberg, Germany, 3-18.

See Also

sewma.arl.prerun for calculation of the ARL of variance charts under pre-run uncertainty and sewma.crit for the algorithm w/o pre-run uncertainty.

Examples

## will follow

sewma.q    Compute RL quantiles of EWMA (variance charts) control charts

Description

Computation of quantiles of the Run Length (RL) for EWMA control charts monitoring normal variance.

Usage

sewma.q(l, cl, cu, sigma, df, alpha, hs=1, sided="upper", r=40, qm=30)
sewma.q.crit(l,L0,alpha,df,sigma0=1,cl=NULL,cu=NULL,hs=1,sided="upper", mode="fixed",ur=4,r=40,qm=30,c.error=1e-12,a.error=1e-9)

Arguments

l   smoothing parameter lambda of the EWMA control chart.
cl  deployed for sided="Rupper", that is, upper variance control chart with lower reflecting barrier cl.
cu            for two-sided (sided="two") and fixed upper control limit (mode="fixed") a value larger than sigma0 has to be given; for all other cases cu is ignored.
sigma,sigma0  true and in-control standard deviation, respectively.
df            actual degrees of freedom, corresponds to subgroup size (for known mean it is equal to the subgroup size, for unknown mean it is equal to subgroup size minus one).
alpha         quantile level.
hs            so-called headstart (enables fast initial response).
sided         distinguishes between one- and two-sided EWMA-S^2 control charts by choosing "upper" (upper chart without reflection at cl – the actual value of cl is not used), "Rupper" (upper chart with reflection at cl), "Rlower" (lower chart with reflection at cu), and "two" (two-sided chart), respectively.
mode          only deployed for sided="two" – with "fixed" an upper control limit (see cu) is set and only the lower one is calculated to obtain the in-control ARL L0, while with "unbiased" a certain unbiasedness of the ARL function is guaranteed (here, both the lower and the upper control limit are calculated).
ur            truncation of the lower chart for the classic mode.
r             dimension of the resulting linear equation system (highest order of the collocation polynomials).
qm            number of quadrature nodes for calculating the collocation definite integrals.
L0            in-control quantile value.
c.error       error bound for two succeeding values of the critical value during application of the secant rule.
a.error       error bound for the quantile level alpha during application of the secant rule.

Details

Instead of the popular ARL (Average Run Length), quantiles of the EWMA stopping time (Run Length) are determined. The algorithm is based on Waldmann's survival function iteration procedure. Thereby the ideas presented in Knoth (2007) are used. sewma.q.crit determines the critical values (similar to alarm limits) for a given in-control RL quantile L0 at level alpha by applying the secant rule and using sewma.sf().
In case of sided="two" and mode="unbiased" a two-dimensional secant rule is applied that also ensures that the minimum of the cdf for given standard deviation is attained at sigma0.

Value

Returns a single value which resembles the RL quantile of order alpha and the lower and upper control limits cl and cu, respectively.

Author(s)

<NAME>

References

<NAME>, <NAME> and <NAME> (1998), EWMA-Karten zur \"Uberwachung der Streuung von Qualit\"atsmerkmalen, Allgemeines Statistisches Archiv 82, 327-338.
<NAME>, <NAME> and <NAME> (1999), A comparison of control charting procedures for monitoring process dispersion, IIE Transactions 31, 569-579.
<NAME> (2005), Accurate ARL computation for EWMA-S^2 control charts, Statistics and Computing 15, 341-352.
<NAME> (2007), Accurate ARL calculation for EWMA control charts monitoring simultaneously normal mean and variance, Sequential Analysis 26, 251-264.
<NAME> (2010), Control Charting Normal Variance – Reflections, Curiosities, and Recommendations, in Frontiers in Statistical Quality Control 9, <NAME> and <NAME> (Eds.), Physica Verlag, Heidelberg, Germany, 3-18.
<NAME> (1986), Bounds for the distribution of the run length of geometric moving average charts, Appl. Statist. 35, 151-158.

See Also

sewma.arl for the calculation of the ARL of variance charts and sewma.sf for the RL survival function.

Examples

## will follow

sewma.q.prerun    Compute RL quantiles of EWMA (variance charts) control charts under pre-run uncertainty

Description

Computation of quantiles of the Run Length (RL) for EWMA control charts monitoring normal variance.

Usage

sewma.q.prerun(l, cl, cu, sigma, df1, df2, alpha, hs=1, sided="upper", r=40, qm=30, qm.sigma=30, truncate=1e-10)
sewma.q.crit.prerun(l, L0, alpha, df1, df2, sigma0=1, cl=NULL, cu=NULL, hs=1, sided="upper", mode="fixed", r=40, qm=30, qm.sigma=30, truncate=1e-10, tail_approx=TRUE, c.error=1e-10, a.error=1e-9)

Arguments

l  smoothing parameter lambda of the EWMA control chart.
cl  deployed for sided="Rupper", that is, an upper variance control chart with lower reflecting barrier cl.
cu  for two-sided (sided="two") and fixed upper control limit (mode="fixed") a value larger than sigma0 has to be given; for all other cases cu is ignored.
sigma, sigma0  true and in-control standard deviation, respectively.
L0  in-control quantile value.
alpha  quantile level.
df1  actual degrees of freedom, corresponds to the subgroup size (for known mean it is equal to the subgroup size, for unknown mean it is equal to the subgroup size minus one).
df2  degrees of freedom of the pre-run variance estimator.
hs  so-called headstart (enables fast initial response).
sided  distinguishes between one- and two-sided EWMA-S^2 control charts by choosing "upper" (upper chart without reflection at cl – the actual value of cl is not used), "Rupper" (upper chart with reflection at cl), "Rlower" (lower chart with reflection at cu), and "two" (two-sided chart), respectively.
mode  only deployed for sided="two" – with "fixed" an upper control limit (see cu) is set and only the lower one is calculated to obtain the in-control ARL L0, while with "unbiased" a certain unbiasedness of the ARL function is guaranteed (here, both the lower and the upper control limit are calculated).
r  dimension of the resulting linear equation system (highest order of the collocation polynomials).
qm  number of quadrature nodes for calculating the collocation definite integrals.
qm.sigma  number of quadrature nodes for convoluting the standard deviation uncertainty.
truncate  size of the truncated tail.
tail_approx  controls whether the geometric tail approximation is used (is faster) or not.
c.error  error bound for two succeeding values of the critical value while applying the secant rule.
a.error  error bound for the quantile level alpha while applying the secant rule.

Details

Instead of the popular ARL (Average Run Length), quantiles of the EWMA stopping time (Run Length) are determined.
The algorithm is based on Waldmann's survival function iteration procedure. Thereby the ideas presented in Knoth (2007) are used. sewma.q.crit.prerun determines the critical values (similar to alarm limits) for given in-control RL quantile L0 at level alpha by applying the secant rule and using sewma.sf(). In case of sided="two" and mode="unbiased" a two-dimensional secant rule is applied that also ensures that the minimum of the cdf for given standard deviation is attained at sigma0.

Value

Returns a single value which resembles the RL quantile of order alpha and the lower and upper control limits cl and cu, respectively.

Author(s)

<NAME>

References

<NAME> (2007), Accurate ARL calculation for EWMA control charts monitoring simultaneously normal mean and variance, Sequential Analysis 26, 251-264.
<NAME> (1986), Bounds for the distribution of the run length of geometric moving average charts, Appl. Statist. 35, 151-158.

See Also

sewma.q and sewma.q.crit for the version w/o pre-run uncertainty.

Examples

## will follow

sewma.sf    Compute the survival function of EWMA run length

Description

Computation of the survival function of the Run Length (RL) for EWMA control charts monitoring normal variance.

Usage

sewma.sf(n, l, cl, cu, sigma, df, hs=1, sided="upper", r=40, qm=30)

Arguments

n  calculate the sf up to value n.
l  smoothing parameter lambda of the EWMA control chart.
cl  lower control limit of the EWMA control chart.
cu  upper control limit of the EWMA control chart.
sigma  true standard deviation.
df  actual degrees of freedom, corresponds to the subgroup size (for known mean it is equal to the subgroup size, for unknown mean it is equal to the subgroup size minus one).
hs  so-called headstart (enables fast initial response).
sided  distinguishes between one- and two-sided EWMA-S^2 control charts by choosing "upper" (upper chart without reflection at cl – the actual value of cl is not used), "Rupper" (upper chart with reflection at cl), "Rlower" (lower chart with reflection at cu), and "two" (two-sided chart), respectively.
r  dimension of the resulting linear equation system (highest order of the collocation polynomials).
qm  number of quadrature nodes for calculating the collocation definite integrals.

Details

The survival function P(L>n), and derived from it also the cdf P(L<=n) and the pmf P(L=n), illustrate the distribution of the EWMA run length. For large n the geometric tail could be exploited. That is, with reasonably large n the complete distribution is characterized. The algorithm is based on Waldmann's survival function iteration procedure and on results in Knoth (2007).

Value

Returns a vector which resembles the survival function up to a certain point.

Author(s)

<NAME>

References

<NAME> (2007), Accurate ARL calculation for EWMA control charts monitoring simultaneously normal mean and variance, Sequential Analysis 26, 251-264.
<NAME> (1986), Bounds for the distribution of the run length of geometric moving average charts, Appl. Statist. 35, 151-158.

See Also

sewma.arl for zero-state ARL computation of variance EWMA control charts.

Examples

## will follow

sewma.sf.prerun    Compute the survival function of EWMA run length

Description

Computation of the survival function of the Run Length (RL) for EWMA control charts monitoring normal variance.

Usage

sewma.sf.prerun(n, l, cl, cu, sigma, df1, df2, hs=1, sided="upper", qm=30, qm.sigma=30, truncate=1e-10, tail_approx=TRUE)

Arguments

n  calculate the sf up to value n.
l  smoothing parameter lambda of the EWMA control chart.
cl  lower control limit of the EWMA control chart.
cu  upper control limit of the EWMA control chart.
sigma  true standard deviation.
df1  actual degrees of freedom, corresponds to the subgroup size (for known mean it is equal to the subgroup size, for unknown mean it is equal to the subgroup size minus one).
df2  degrees of freedom of the pre-run variance estimator.
hs  so-called headstart (enables fast initial response).
sided  distinguishes between one- and two-sided EWMA-S^2 control charts by choosing "upper" (upper chart without reflection at cl – the actual value of cl is not used), "Rupper" (upper chart with reflection at cl), "Rlower" (lower chart with reflection at cu), and "two" (two-sided chart), respectively.
qm  number of quadrature nodes for calculating the collocation definite integrals.
qm.sigma  number of quadrature nodes for convoluting the standard deviation uncertainty.
truncate  size of the truncated tail.
tail_approx  controls whether the geometric tail approximation is used (is faster) or not.

Details

The survival function P(L>n), and derived from it also the cdf P(L<=n) and the pmf P(L=n), illustrate the distribution of the EWMA run length. For large n the geometric tail could be exploited. That is, with reasonably large n the complete distribution is characterized. The algorithm is based on Waldmann's survival function iteration procedure and on results in Knoth (2007).

Value

Returns a vector which resembles the survival function up to a certain point.

Author(s)

<NAME>

References

<NAME> (2007), Accurate ARL calculation for EWMA control charts monitoring simultaneously normal mean and variance, Sequential Analysis 26, 251-264.
<NAME> (1986), Bounds for the distribution of the run length of geometric moving average charts, Appl. Statist. 35, 151-158.

See Also

sewma.sf for the RL survival function of EWMA control charts w/o pre-run uncertainty.

Examples

## will follow

tewma.arl    Compute ARLs of Poisson TEWMA control charts

Description

Computation of the (zero-state) Average Run Length (ARL) at given Poisson mean mu.
Usage

tewma.arl(lambda, k, lk, uk, mu, z0, rando=FALSE, gl=0, gu=0)

Arguments

lambda  smoothing parameter of the TEWMA control chart.
k  resolution of the grid (natural number).
lk  lower control limit of the TEWMA control chart, integer.
uk  upper control limit of the TEWMA control chart, integer.
mu  mean value of the Poisson distribution.
z0  so-called headstart (gives a fast initial response) – it is proposed to use the in-control mean.
rando  distinguishes between a control chart design without or with randomisation. In the latter case some meaningful values for gl and gu should be provided.
gl  randomisation probability at the lower limit.
gu  randomisation probability at the upper limit.

Details

A new idea of applying EWMA smoothing to count data: here, the thinning operation is applied to independent Poisson variates. Moreover, the original thinning principle is expanded to multiples of one over k to allow finer grids and finally better detection performance. It is highly recommended to read the corresponding paper (see below).

Value

Returns a single value which resembles the ARL.

Author(s)

<NAME>

References

<NAME>, <NAME>, <NAME> (2019), A thinning-based EWMA chart to monitor counts, submitted.

See Also

later.

Examples

# MWK (2018)
lambda <- 0.1       # (T)EWMA smoothing constant
mu0 <- 5            # in-control mean
k <- 10             # resolution
z0 <- round(k*mu0)  # starting value of (T)EWMA sequence
# (i) without randomisation
lk <- 28
uk <- 75
L0 <- tewma.arl(lambda, k, lk, uk, mu0, z0)
# should be 501.9703
# (ii) with randomisation
uk <- 76 # lk is not changed
gl <- 0.5446310
gu <- 0.1375617
L0 <- tewma.arl(lambda, k, lk, uk, mu0, z0, rando=TRUE, gl=gl, gu=gu)
# should be 500

tol.lim.fac    Two-sided tolerance limit factors

Description

For constructing tolerance intervals, which cover a given proportion p of a normal distribution with unknown mean and variance with confidence 1 − α, one needs to calculate the so-called tolerance limit factors k.
These values are computed for a given sample size n.

Usage

tol.lim.fac(n, p, a, mode="WW", m=30)

Arguments

n  sample size.
p  coverage.
a  error probability α; the resulting interval covers at least proportion p with confidence of at least 1 − α.
mode  distinguishes between Wald/Wolfowitz' approximation method ("WW") and the more accurate approach ("exact") based on Gauss-Legendre quadrature.
m  number of abscissas for the quadrature (needed only for mode="exact"); of course, the larger the more accurate.

Details

tol.lim.fac determines tolerance limit factors k by means of the fast and simple approximation due to Wald/Wolfowitz (1946) and of Gauss-Legendre quadrature like Odeh/Owen (1980), respectively, who used in fact the Simpson rule. Then, by x̄ ± k · s one can build the tolerance intervals which cover at least proportion p of a normal distribution for given confidence level 1 − α. x̄ and s stand for the sample mean and the sample standard deviation, respectively.

Value

Returns a single value which resembles the tolerance limit factor.

Author(s)

<NAME>

References

<NAME>, <NAME> (1946), Tolerance limits for a normal distribution, Annals of Mathematical Statistics 17, 208-215.
<NAME>, <NAME> (1980), Tables for Normal Tolerance Limits, Sampling Plans, and Screening, <NAME>, New York.

See Also

qnorm for the "asymptotic" case – cf. second example.
Examples

n <- 2:10
p <- .95
a <- .05
kWW <- sapply(n, p=p, a=a, tol.lim.fac)
kEX <- sapply(n, p=p, a=a, mode="exact", tol.lim.fac)
print(cbind(n, kWW, kEX), digits=4)
## Odeh/Owen (1980), page 98, in Table 3.4.1
##  n  factor k
##  2    36.519
##  3     9.789
##  4     6.341
##  5     5.077
##  6     4.422
##  7     4.020
##  8     3.746
##  9     3.546
## 10     3.393
## n -> infty
n <- 10^{1:7}
p <- .95
a <- .05
kEX <- round(sapply(n, p=p, a=a, mode="exact", tol.lim.fac), digits=4)
kEXinf <- round(qnorm(1-a/2), digits=4)
print(rbind(cbind(n, kEX), c("infinity", kEXinf)), quote=FALSE)

x.res.ewma.arl    Compute ARLs of EWMA residual control charts

Description

Computation of the (zero-state) Average Run Length (ARL) for EWMA residual control charts monitoring normal mean, variance, or mean and variance simultaneously. Additionally, the probability of misleading signals (PMS) is calculated.

Usage

x.res.ewma.arl(l, c, mu, alpha=0, n=5, hs=0, r=40)
s.res.ewma.arl(l, cu, sigma, mu=0, alpha=0, n=5, hs=1, r=40, qm=30)
xs.res.ewma.arl(lx, cx, ls, csu, mu, sigma, alpha=0, n=5, hsx=0, rx=40, hss=1, rs=40, qm=30)
xs.res.ewma.pms(lx, cx, ls, csu, mu, sigma, type="3", alpha=0, n=5, hsx=0, rx=40, hss=1, rs=40, qm=30)

Arguments

l, lx, ls  smoothing parameter(s) lambda of the EWMA control chart.
c, cu, cx, csu  critical value (similar to alarm limit) of the EWMA control charts.
mu  true mean.
sigma  true standard deviation.
alpha  the AR(1) coefficient – first order autocorrelation of the original data.
n  batch size.
hs, hsx, hss  so-called headstart (enables fast initial response).
r, rx, rs  number of quadrature nodes or size of the collocation base; the dimension of the resulting linear equation system is equal to r (two-sided).
qm  number of nodes for the collocation quadratures.
type  PMS type; for type="3" (the default) the probability of getting a mean signal despite the variance changed, and for type="4" the opposite case is dealt with.

Details

The above list of functions provides the application of algorithms developed for iid data to the residual case.
To be more precise, the underlying model is a sequence of normally distributed batches of size n with autocorrelation within the batch and independence between the batches (see also the references below). It is restricted to the classical EWMA chart types, that is, two-sided for the mean, upper charts for the variance, and all equipped with fixed limits. The autocorrelation is modeled by an AR(1) process with parameter alpha. Additionally, with xs.res.ewma.pms the probability of misleading signals (PMS) of the given type is calculated. This is offered exclusively in this small collection, so that for iid data this function has to be used too (with alpha=0).

Value

Returns single values which resemble the ARL and the PMS, respectively.

Author(s)

<NAME>

References

<NAME>, <NAME>, <NAME>, <NAME> (2009), Misleading Signals in Simultaneous Residual Schemes for the Mean and Variance of a Stationary Process, Commun. Stat., Theory Methods 38, 2923-2943.
<NAME>, <NAME>, <NAME> (2001), Simultaneous Shewhart-Type Charts for the Mean and the Variance of a Time Series, Frontiers of Statistical Quality Control 6, Lenz, H.-J. and Wilrich, P.-T. (Eds.), 61-79.
<NAME>, <NAME> (2002), Monitoring the mean and the variance of a stationary process, Statistica Neerlandica 56, 77-100.

See Also

xewma.arl, sewma.arl, and xsewma.arl as more elaborated functions in the iid case.
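The batch model above can be illustrated with a small Monte Carlo experiment. The Python sketch below (illustrative only; the package itself uses integral equations, not simulation) simulates AR(1) batches, standardizes the batch means by their exact variance, and estimates the ARL of a two-sided EWMA of these means. All names are my own.

```python
import random
from math import sqrt

def arl_batch_ewma(lam, c, mu, alpha, n=5, reps=500, limit=100000, seed=1):
    """Monte Carlo ARL of a two-sided EWMA applied to means of AR(1) batches
    of size n; batches are independent, observations within a batch follow a
    stationary AR(1) with coefficient alpha and unit innovation variance."""
    rng = random.Random(seed)
    gamma0 = 1.0 / (1.0 - alpha * alpha)  # stationary Var(X_t)
    # exact variance of the batch mean: (1/n^2)[n*g0 + 2*sum (n-h)*g0*alpha^h]
    var_mean = gamma0 / n ** 2 * (
        n + 2 * sum((n - h) * alpha ** h for h in range(1, n)))
    crit = c * sqrt(lam / (2.0 - lam))    # asymptotic-variance EWMA limit
    total = 0
    for _ in range(reps):
        z, rl = 0.0, 0
        while rl < limit:
            rl += 1
            x = rng.gauss(0.0, sqrt(gamma0))  # stationary start of the batch
            batch = x
            for _ in range(n - 1):
                x = alpha * x + rng.gauss(0.0, 1.0)
                batch += x
            mean = batch / n + mu
            z = (1.0 - lam) * z + lam * mean / sqrt(var_mean)  # standardized
            if abs(z) > crit:
                break
        total += rl
    return total / reps
```

With a shift mu the ARL drops monotonically, mirroring the tables reproduced in the examples below; for iid data set alpha=0, exactly as the documentation advises for xs.res.ewma.pms.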
Examples

## Not run:
## <NAME>, <NAME> (2002)
cat("\nFragments of Table 2 (n=5, lambda.1=lambda.2)\n")
lambdas <- c(.5, .25, .1, .05)
L0 <- 500
n <- 5
crit <- NULL
for ( lambda in lambdas ) {
  cs <- xsewma.crit(lambda, lambda, L0, n-1)
  x.e <- round(cs[1], digits=4)
  names(x.e) <- NULL
  s.e <- round((cs[3]-1) * sqrt((2-lambda)/lambda)*sqrt((n-1)/2), digits=4)
  names(s.e) <- NULL
  crit <- rbind(crit, data.frame(lambda, x.e, s.e))
}
## original values are (Markov chain approximation with 50 states)
# lambda    x.e    s.e
#   0.50 3.2765 4.6439
#   0.25 3.2168 4.0149
#   0.10 3.0578 3.3376
#   0.05 2.8817 2.9103
print(crit)

cat("\nFragments of Table 4 (n=5, lambda.1=lambda.2=0.1)\n\n")
lambda <- .1
# the algorithm used in Knoth/Schmid is less accurate -- proceed with their values
cx <- x.e <- 3.0578
s.e <- 3.3376
csu <- 1 + s.e * sqrt(lambda/(2-lambda))*sqrt(2/(n-1))
alpha <- .3
a.values <- c((0:6)/4, 2)
d.values <- c(1 + (0:5)/10, 1.75, 2)
arls <- NULL
for ( delta in d.values ) {
  row <- NULL
  for ( mu in a.values ) {
    arl <- round(xs.res.ewma.arl(lambda, cx, lambda, csu, mu*sqrt(n), delta, alpha=alpha, n=n), digits=2)
    names(arl) <- NULL
    row <- c(row, arl)
  }
  arls <- rbind(arls, data.frame(t(row)))
}
names(arls) <- a.values
rownames(arls) <- d.values
## original values are (now Monte-Carlo with 10^6 replicates)
#          0  0.25   0.5  0.75    1 1.25  1.5    2
#1    502.44 49.50 14.21  7.93 5.53 4.28 3.53 2.65
#1.1   73.19 32.91 13.33  7.82 5.52 4.29 3.54 2.66
#1.2   24.42 18.88 11.37  7.44 5.42 4.27 3.54 2.67
#1.3   13.11 11.83  9.09  6.74 5.18 4.17 3.50 2.66
#1.4    8.74  8.31  7.19  5.89 4.81 4.00 3.41 2.64
#1.5    6.50  6.31  5.80  5.08 4.37 3.76 3.28 2.59
#1.75   3.94  3.90  3.78  3.59 3.35 3.09 2.83 2.40
#2      2.85  2.84  2.80  2.73 2.63 2.51 2.39 2.14
print(arls)

## <NAME>, <NAME>, <NAME>, <NAME> (2009)
cat("\nFragments of Table 5 (n=5, lambda=0.1)\n\n")
d.values <- c(1.02, 1 + (1:5)/10, 1.75, 2)
arl.x <- arl.s <- arl.xs <- PMS.3 <- NULL
for ( delta in d.values ) {
  arl.x <- c(arl.x, round(x.res.ewma.arl(lambda, cx/delta, 0, n=n), digits=3))
  arl.s <- c(arl.s, round(s.res.ewma.arl(lambda, csu, delta, n=n), digits=3))
  arl.xs <- c(arl.xs, round(xs.res.ewma.arl(lambda, cx, lambda, csu, 0, delta, n=n), digits=3))
  PMS.3 <- c(PMS.3, round(xs.res.ewma.pms(lambda, cx, lambda, csu, 0, delta, n=n), digits=6))
}
## original values are (Markov chain approximation)
# delta   arl.x   arl.s  arl.xs    PMS.3
#  1.02 833.086 518.935 323.324 0.381118
#  1.10 454.101  84.208  73.029 0.145005
#  1.20 250.665  25.871  24.432 0.071024
#  1.30 157.343  13.567  13.125 0.047193
#  1.40 108.112   8.941   8.734 0.035945
#  1.50  79.308   6.614   6.493 0.029499
#  1.75  44.128   3.995   3.942 0.021579
#  2.00  28.974   2.887   2.853 0.018220
print(cbind(delta=d.values, arl.x, arl.s, arl.xs, PMS.3))

cat("\nFragments of Table 6 (n=5, lambda=0.1)\n\n")
alphas <- c(-0.9, -0.5, -0.3, 0, 0.3, 0.5, 0.9)
deltas <- c(0.05, 0.25, 0.5, 0.75, 1, 1.25, 1.5, 2)
PMS.4 <- NULL
for ( ir in 1:length(deltas) ) {
  mu <- deltas[ir]*sqrt(n)
  pms <- NULL
  for ( alpha in alphas ) {
    pms <- c(pms, round(xs.res.ewma.pms(lambda, cx, lambda, csu, mu, 1, type="4", alpha=alpha, n=n), digits=6))
  }
  PMS.4 <- rbind(PMS.4, data.frame(delta=deltas[ir], t(pms)))
}
names(PMS.4) <- c("delta", alphas)
rownames(PMS.4) <- NULL
## original values are (Markov chain approximation)
# delta     -0.9     -0.5     -0.3        0      0.3      0.5      0.9
#  0.05 0.055789 0.224521 0.279842 0.342805 0.391299 0.418915 0.471386
#  0.25 0.003566 0.009522 0.014580 0.025786 0.044892 0.066584 0.192023
#  0.50 0.002994 0.001816 0.002596 0.004774 0.009259 0.015303 0.072945
#  0.75 0.006967 0.000703 0.000837 0.001529 0.003400 0.006424 0.046602
#  1.00 0.005098 0.000402 0.000370 0.000625 0.001589 0.003490 0.039978
#  1.25 0.000084 0.000266 0.000202 0.000300 0.000867 0.002220 0.039773
#  1.50 0.000000 0.000256 0.000120 0.000163 0.000531 0.001584 0.042734
#  2.00 0.000000 0.000311 0.000091 0.000056 0.000259 0.001029 0.054543
print(PMS.4)
## End(Not run)

xcusum.ad    Compute steady-state ARLs of CUSUM control charts

Description

Computation of the steady-state Average Run Length (ARL)
for different types of CUSUM control charts monitoring normal mean.

Usage

xcusum.ad(k, h, mu1, mu0 = 0, sided = "one", r = 30)

Arguments

k  reference value of the CUSUM control chart.
h  decision interval (alarm limit, threshold) of the CUSUM control chart.
mu1  out-of-control mean.
mu0  in-control mean.
sided  distinguishes between one-, two-sided and Crosier's modified two-sided CUSUM scheme by choosing "one", "two", and "Crosier", respectively.
r  number of quadrature nodes; the dimension of the resulting linear equation system is equal to r+1 (one-, two-sided) or 2r+1 (Crosier).

Details

xcusum.ad determines the steady-state Average Run Length (ARL) by numerically solving the related ARL integral equation by means of the Nystroem method based on Gauss-Legendre quadrature and using the power method for deriving the largest in magnitude eigenvalue and the related left eigenfunction.

Value

Returns a single value which resembles the steady-state ARL.

Note

Be cautious in increasing the dimension parameter r for two-sided CUSUM schemes. The resulting matrix dimension is r^2 times r^2. Thus, go beyond 30 only on fast machines. This is the only case where the package routines are based on the Markov chain approach. Moreover, the two-sided CUSUM scheme needs a two-dimensional Markov chain.

Author(s)

<NAME>

References

<NAME> (1986), A new two-sided cumulative quality control scheme, Technometrics 28, 187-194.

See Also

xcusum.arl for zero-state ARL computation and xewma.ad for the steady-state ARL of EWMA control charts.
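The power method mentioned in the Details is a generic device: repeatedly apply the (discretized) operator to a vector and renormalize until the scaling factor stabilizes at the largest-in-magnitude eigenvalue. A minimal Python sketch, applied to a plain matrix rather than the Nyström-discretized ARL kernel:

```python
def power_method(A, iters=500, tol=1e-12):
    """Dominant (largest in magnitude) eigenvalue and an unnormalized
    eigenvector of a square matrix via power iteration; for the *left*
    eigenfunction, as used by xcusum.ad, apply this to the transpose."""
    n = len(A)
    v = [1.0] * n
    lam = 0.0
    for _ in range(iters):
        w = [sum(A[i][j] * v[j] for j in range(n)) for i in range(n)]
        new_lam = max(w, key=abs)          # current eigenvalue estimate
        w = [x / new_lam for x in w]       # renormalize to avoid over/underflow
        if abs(new_lam - lam) < tol:
            v = w
            break
        lam, v = new_lam, w
    return new_lam, v
```

For the symmetric matrix [[2, 1], [1, 2]] (eigenvalues 3 and 1), the iteration converges to 3 with eigenvector components of equal size.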
Examples

## comparison of zero-state (= worst case) and steady-state performance
## for one-sided CUSUM control charts
k <- .5
h <- xcusum.crit(k, 500)
mu <- c(0, .5, 1, 1.5, 2)
arl <- sapply(mu, k=k, h=h, xcusum.arl)
ad <- sapply(mu, k=k, h=h, xcusum.ad)
round(cbind(mu, arl, ad), digits=2)

## Crosier (1986), Crosier's modified two-sided CUSUM
## He introduced the modification and evaluated it by means of
## Markov chain approximation
k <- .5
h2 <- 4
hC <- 3.73
mu <- c(0, .25, .5, .75, 1, 1.5, 2, 2.5, 3, 4, 5)
ad2 <- sapply(mu, k=k, h=h2, sided="two", r=20, xcusum.ad)
adC <- sapply(mu, k=k, h=hC, sided="Crosier", xcusum.ad)
round(cbind(mu, ad2, adC), digits=2)
## results in the original paper are (in Table 5)
## 0.00 163.  164.
## 0.25 71.6  69.0
## 0.50 25.2  24.3
## 0.75 12.3  12.1
## 1.00 7.68  7.69
## 1.50 4.31  4.39
## 2.00 3.03  3.12
## 2.50 2.38  2.46
## 3.00 2.00  2.07
## 4.00 1.55  1.60
## 5.00 1.22  1.29

xcusum.arl    Compute ARLs of CUSUM control charts

Description

Computation of the (zero-state) Average Run Length (ARL) for different types of CUSUM control charts monitoring normal mean.

Usage

xcusum.arl(k, h, mu, hs = 0, sided = "one", method = "igl", q = 1, r = 30)

Arguments

k  reference value of the CUSUM control chart.
h  decision interval (alarm limit, threshold) of the CUSUM control chart.
mu  true mean.
hs  so-called headstart (gives a fast initial response).
sided  distinguishes between one-, two-sided and Crosier's modified two-sided CUSUM scheme by choosing "one", "two", and "Crosier", respectively.
method  deploy the integral equation ("igl") or Markov chain approximation ("mc") method to calculate the ARL (currently implemented only for the two-sided CUSUM).
q  change point position. For q = 1 and µ = µ0 and µ = µ1, the usual zero-state ARLs for the in-control and out-of-control case, respectively, are calculated. For q > 1 and µ ≠ 0 conditional delays, that is, Eq(L − q + 1 | L ≥ q), will be determined. Note that mu0=0 is implicitly fixed.
r  number of quadrature nodes; the dimension of the resulting linear equation system is equal to r+1 (one-, two-sided) or 2r+1 (Crosier).

Details

xcusum.arl determines the Average Run Length (ARL) by numerically solving the related ARL integral equation by means of the Nystroem method based on Gauss-Legendre quadrature.

Value

Returns a vector of length q which resembles the ARL and the sequence of conditional expected delays for q=1 and q>1, respectively.

Author(s)

<NAME>

References

<NAME>, <NAME> (1971), Determination of A.R.L. and a contour nomogram for CUSUM charts to control normal mean, Technometrics 13, 221-230.
<NAME>, <NAME> (1972), An approach to the probability distribution of cusum run length, Biometrika 59, 539-548.
<NAME>, <NAME> (1982), Fast initial response for cusum quality-control schemes: Give your cusum a headstart, Technometrics 24, 199-205.
<NAME> (1986), Average run lengths of cumulative sum control charts for controlling normal means, Journal of Quality Technology 18, 189-193.
<NAME> (1986), Bounds for the distribution of the run length of one-sided and two-sided CUSUM quality control schemes, Technometrics 28, 61-67.
<NAME> (1986), A new two-sided cumulative quality control scheme, Technometrics 28, 187-194.

See Also

xewma.arl for zero-state ARL computation of EWMA control charts and xcusum.ad for the steady-state ARL.

Examples

## Brook/Evans (1972), one-sided CUSUM
## Their results are based on the less accurate Markov chain approach.
k <- .5
h <- 3
round(c( xcusum.arl(k,h,0), xcusum.arl(k,h,1.5) ), digits=2)
## results in the original paper are L0 = 117.59, L1 = 3.75 (in Subsection 4.3).

## <NAME> (1982)
## (one- and) two-sided CUSUM with possible headstarts
k <- .5
h <- 4
mu <- c(0, .25, .5, .75, 1, 1.5, 2, 2.5, 3, 4, 5)
arl1 <- sapply(mu, k=k, h=h, sided="two", xcusum.arl)
arl2 <- sapply(mu, k=k, h=h, hs=h/2, sided="two", xcusum.arl)
round(cbind(mu, arl1, arl2), digits=2)
## results in the original paper are (in Table 1)
## 0.00 168.  149.
## 0.25 74.2  62.7
## 0.50 26.6  20.1
## 0.75 13.3  8.97
## 1.00 8.38  5.29
## 1.50 4.75  2.86
## 2.00 3.34  2.01
## 2.50 2.62  1.59
## 3.00 2.19  1.32
## 4.00 1.71  1.07
## 5.00 1.31  1.01

## Vance (1986), one-sided CUSUM
## The first paper on using Nystroem method and Gauss-Legendre quadrature
## for solving the ARL integral equation (see as well Goel/Wu, 1971)
k <- 0
h <- 10
mu <- c(-.25, -.125, 0, .125, .25, .5, .75, 1)
round(cbind(mu, sapply(mu, k=k, h=h, xcusum.arl)), digits=2)
## results in the original paper are (in Table 1 incl. Goel/Wu (1971) results)
## -0.25  2071.51
## -0.125  400.28
##  0.0    124.66
##  0.125   59.30
##  0.25    36.71
##  0.50    20.37
##  0.75    14.06
##  1.00    10.75

## Waldmann (1986), one- and two-sided CUSUM
## one-sided case
k <- .5
h <- 3
mu <- c(-.5, 0, .5)
round(sapply(mu, k=k, h=h, xcusum.arl), digits=2)
## results in the original paper are 1963, 117.4, and 17.35, resp.
## (in Tables 3, 1, and 5, resp.).
## two-sided case
k <- .6
h <- 3
round(xcusum.arl(k, h, -.2, sided="two"), digits=1) # fits to Waldmann's setup
## result in the original paper is 65.4 (in Table 6).

## Crosier (1986), Crosier's modified two-sided CUSUM
## He introduced the modification and evaluated it by means of
## Markov chain approximation
k <- .5
h <- 3.73
mu <- c(0, .25, .5, .75, 1, 1.5, 2, 2.5, 3, 4, 5)
round(cbind(mu, sapply(mu, k=k, h=h, sided="Crosier", xcusum.arl)), digits=2)
## results in the original paper are (in Table 3)
## 0.00 168.
## 0.25 70.7
## 0.50 25.1
## 0.75 12.5
## 1.00 7.92
## 1.50 4.49
## 2.00 3.17
## 2.50 2.49
## 3.00 2.09
## 4.00 1.60
## 5.00 1.22

## SAS/QC manual 1999
## one- and two-sided CUSUM schemes
## one-sided
k <- .25
h <- 8
mu <- 2.5
print(xcusum.arl(k, h, mu), digits=12)
print(xcusum.arl(k, h, mu, hs=.1), digits=12)
## original results are 4.1500836225 and 4.1061588131.
## two-sided
print(xcusum.arl(k, h, mu, sided="two"), digits=12)
## original result is 4.1500826715.
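The Brook/Evans (1972) Markov chain approximation referenced in the examples above is compact enough to sketch. The Python code below (my own midpoint discretization and plain Gaussian elimination; a sketch, not the package's integral-equation routine) should land close to their in-control value L0 = 117.59 for k = 0.5, h = 3.

```python
from statistics import NormalDist

Phi = NormalDist().cdf  # standard normal cdf

def cusum_arl_mc(k, h, mu, m=100):
    """Zero-state ARL of the one-sided CUSUM S_t = max(0, S_{t-1} + X_t - k),
    X_t ~ N(mu, 1), alarm when S_t > h; Markov chain (midpoint) approximation
    with an atom at zero plus m interval states."""
    w = h / m
    # state 0: S = 0 exactly; state i (1..m): S in ((i-1)w, iw], at its midpoint
    val = [0.0] + [(i - 0.5) * w for i in range(1, m + 1)]
    P = [[0.0] * (m + 1) for _ in range(m + 1)]
    for i, v in enumerate(val):
        P[i][0] = Phi(k - v - mu)  # reset to zero: v + X - k <= 0
        for j in range(1, m + 1):
            P[i][j] = Phi(k - v + j * w - mu) - Phi(k - v + (j - 1) * w - mu)
    # solve (I - P) L = 1 by Gaussian elimination with partial pivoting
    n = m + 1
    A = [[(1.0 if i == j else 0.0) - P[i][j] for j in range(n)] + [1.0]
         for i in range(n)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(A[r][col]))
        A[col], A[piv] = A[piv], A[col]
        for r in range(col + 1, n):
            f = A[r][col] / A[col][col]
            for c in range(col, n + 1):
                A[r][c] -= f * A[col][c]
    L = [0.0] * n
    for r in range(n - 1, -1, -1):
        L[r] = (A[r][n] - sum(A[r][c] * L[c] for c in range(r + 1, n))) / A[r][r]
    return L[0]  # ARL started at S_0 = 0
```

With a moderate number of states the result is already within a fraction of a percent of the integral-equation values that xcusum.arl delivers.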
xcusum.crit    Compute decision intervals of CUSUM control charts

Description

Computation of the decision intervals (alarm limits) for different types of CUSUM control charts monitoring normal mean.

Usage

xcusum.crit(k, L0, mu0 = 0, hs = 0, sided = "one", r = 30)

Arguments

k  reference value of the CUSUM control chart.
L0  in-control ARL.
mu0  in-control mean.
hs  so-called headstart (enables fast initial response).
sided  distinguishes between one-, two-sided and Crosier's modified two-sided CUSUM scheme by choosing "one", "two", and "Crosier", respectively.
r  number of quadrature nodes; the dimension of the resulting linear equation system is equal to r+1 (one-, two-sided) or 2r+1 (Crosier).

Details

xcusum.crit determines the decision interval (alarm limit) for given in-control ARL L0 by applying the secant rule and using xcusum.arl().

Value

Returns a single value which resembles the decision interval h.

Author(s)

<NAME>

See Also

xcusum.arl for zero-state ARL computation.

Examples

k <- .5
incontrolARL <- c(500, 5000, 50000)
sapply(incontrolARL, k=k, xcusum.crit, r=10) # accuracy with 10 nodes
sapply(incontrolARL, k=k, xcusum.crit, r=20) # accuracy with 20 nodes
sapply(incontrolARL, k=k, xcusum.crit)       # accuracy with 30 nodes

xcusum.crit.L0h    Compute the CUSUM reference value k for given in-control ARL and threshold h

Description

Computation of the reference value k for one-sided CUSUM control charts monitoring normal mean, if the in-control ARL L0 and the alarm threshold h are given.

Usage

xcusum.crit.L0h(L0, h, hs=0, sided="one", r=30, L0.eps=1e-6, k.eps=1e-8)

Arguments

L0  in-control ARL.
h  alarm level of the CUSUM control chart.
hs  so-called headstart (enables fast initial response).
sided  distinguishes between one-, two-sided and Crosier's modified two-sided CUSUM scheme by choosing "one", "two", and "Crosier", respectively.
r  number of quadrature nodes; the dimension of the resulting linear equation system is equal to r+1 (one-, two-sided) or 2r+1 (Crosier).
L0.eps  error bound for the L0 error.
k.eps  bound for the difference of two successive values of k.

Details

xcusum.crit.L0h determines the reference value k for given in-control ARL L0 and alarm level h by applying the secant rule and using xcusum.arl(). Note that not for any combination of L0 and h a solution exists – for given L0 there is a maximal value for h to get a valid result k.

Value

Returns a single value which resembles the reference value k.

Author(s)

<NAME>

See Also

xcusum.arl for zero-state ARL computation.

Examples

L0 <- 100
h.max <- xcusum.crit(0, L0, 0)
hs <- (300:1)/100
hs <- hs[hs < h.max]
ks <- NULL
for ( h in hs ) ks <- c(ks, xcusum.crit.L0h(L0, h))
k.max <- qnorm( 1 - 1/L0 )
plot(hs, ks, type="l", ylim=c(0, max(k.max, ks)), xlab="h", ylab="k")
abline(h=c(0, k.max), col="red")

xcusum.crit.L0L1    Compute the CUSUM k and h for given in-control ARL L0 and out-of-control L1

Description

Computation of the reference value k and the alarm threshold h for one-sided CUSUM control charts monitoring normal mean, if the in-control ARL L0 and the out-of-control ARL L1 are given.

Usage

xcusum.crit.L0L1(L0, L1, hs=0, sided="one", r=30, L1.eps=1e-6, k.eps=1e-8)

Arguments

L0  in-control ARL.
L1  out-of-control ARL.
hs  so-called headstart (enables fast initial response).
sided  distinguishes between one-, two-sided and Crosier's modified two-sided CUSUM scheme by choosing "one", "two", and "Crosier", respectively.
r  number of quadrature nodes; the dimension of the resulting linear equation system is equal to r+1 (one-, two-sided) or 2r+1 (Crosier).
L1.eps  error bound for the L1 error.
k.eps  bound for the difference of two successive values of k.

Details

xcusum.crit.L0L1 determines the reference value k and the alarm threshold h for given in-control ARL L0 and out-of-control ARL L1 by applying the secant rule and using xcusum.arl() and xcusum.crit(). These CUSUM design rules were firstly (and quite rarely afterwards) used by Ewan and Kemp.
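The secant rule that all of the *.crit routines lean on is a one-line update. A minimal Python sketch, applied here to a toy stand-in for the monotone ARL-vs-threshold profile (the real routines call xcusum.arl() in place of f; the toy function and its constants are purely illustrative):

```python
def secant(f, target, x0, x1, tol=1e-10, iters=100):
    """Solve f(x) = target by the secant rule, as the *.crit routines do
    with the ARL profile standing in for f."""
    f0, f1 = f(x0) - target, f(x1) - target
    for _ in range(iters):
        if f1 == f0:               # flat step: cannot improve further
            break
        x0, x1, f0 = x1, x1 - f1 * (x1 - x0) / (f1 - f0), f1
        f1 = f(x1) - target
        if abs(f1) < tol:
            break
    return x1

# toy, roughly exponential "ARL vs. threshold" profile (illustration only)
toy_arl = lambda h: 2.0 * 10.0 ** h   # hypothetical stand-in for xcusum.arl
h500 = secant(toy_arl, 500.0, 1.0, 3.0)  # threshold for a target "ARL" of 500
```

xcusum.crit.L0L1 nests two such iterations: an inner one in h (for each trial k, via xcusum.crit) and an outer one in k until the out-of-control ARL matches L1.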
Value

Returns two values which resemble the reference value k and the threshold h.

Author(s)

<NAME>

References

<NAME> and <NAME> (1960), Sampling inspection of continuous processes with no autocorrelation between successive results, Biometrika 47, 363-380.
<NAME> (1962), The Use of Cumulative Sums for Sampling Inspection Schemes, Journal of the Royal Statistical Society C, Applied Statistics, 10, 16-31.

See Also

xcusum.arl for zero-state ARL and xcusum.crit for threshold h computation.

Examples

## Table 2 from Ewan/Kemp (1960) -- one-sided CUSUM
#
# A.R.L. at A.Q.L.  A.R.L. at R.Q.L.     k     h
#             1000                 3  1.12  2.40
#             1000                 7  0.65  4.06
#              500                 3  1.04  2.26
#              500                 7  0.60  3.80
#              250                 3  0.94  2.11
#              250                 7  0.54  3.51
#
L0.set <- c(1000, 500, 250)
L1.set <- c(3, 7)
cat("\nL0\tL1\tk\th\n")
for ( L0 in L0.set ) {
  for ( L1 in L1.set ) {
    result <- round(xcusum.crit.L0L1(L0, L1), digits=2)
    cat(paste(L0, L1, result[1], result[2], sep="\t"), "\n")
  }
}
#
# two confirmation runs
xcusum.arl(0.54, 3.51, 0)                     # Ewan/Kemp
xcusum.arl(result[1], result[2], 0)           # here
xcusum.arl(0.54, 3.51, 2*0.54)                # Ewan/Kemp
xcusum.arl(result[1], result[2], 2*result[1]) # here
#
## Table II from Kemp (1962) -- two-sided CUSUM
#
#   Lr             k
#       La=250 La=500 La=1000
#  2.5    1.05   1.17    1.27
#  3.0    0.94  1.035    1.13
#  4.0    0.78   0.85    0.92
#  5.0    0.68   0.74    0.80
#  6.0    0.60  0.655    0.71
#  7.5    0.52   0.57    0.62
# 10.0    0.43   0.48    0.52
#
L0.set <- c(250, 500, 1000)
L1.set <- c(2.5, 3:6, 7.5, 10)
cat("\nL1\tL0=250\tL0=500\tL0=1000\n")
for ( L1 in L1.set ) {
  cat(L1)
  for ( L0 in L0.set ) {
    result <- round(xcusum.crit.L0L1(L0, L1, sided="two"), digits=2)
    cat("\t", result[1])
  }
  cat("\n")
}

xcusum.q    Compute RL quantiles of CUSUM control charts

Description

Computation of quantiles of the Run Length (RL) for CUSUM control charts monitoring normal mean.

Usage

xcusum.q(k, h, mu, alpha, hs=0, sided="one", r=40)

Arguments

k  reference value of the CUSUM control chart.
h  decision interval (alarm limit, threshold) of the CUSUM control chart.
mu  true mean.
alpha quantile level.
hs so-called headstart (enables fast initial response).
sided distinguishes between one- and two-sided CUSUM control chart by choosing "one" and "two", respectively.
r number of quadrature nodes, dimension of the resulting linear equation system is equal to r+1.

Details
Instead of the popular ARL (Average Run Length), quantiles of the CUSUM stopping time (Run Length) are determined. The algorithm is based on Waldmann’s survival function iteration procedure.

Value
Returns a single value which resembles the RL quantile of order alpha.

Author(s)
<NAME>

References
<NAME> (1986), Bounds for the distribution of the run length of one-sided and two-sided CUSUM quality control schemes, Technometrics 28, 61-67.

See Also
xcusum.arl for zero-state ARL computation of CUSUM control charts.

Examples
## Waldmann (1986), one-sided CUSUM, Table 2
## original values are 345, 82, 9
XCUSUM.Q <- Vectorize("xcusum.q", "alpha")
k <- .5
h <- 3
mu <- 0 # corresponds to Waldmann's -0.5
a.list <- c(.95, .5, .05)
rl.quantiles <- ceiling(XCUSUM.Q(k, h, mu, a.list))
cbind(a.list, rl.quantiles)

xcusum.sf Compute the survival function of CUSUM run length

Description
Computation of the survival function of the Run Length (RL) for CUSUM control charts monitoring normal mean.

Usage
xcusum.sf(k, h, mu, n, hs=0, sided="one", r=40)

Arguments
k reference value of the CUSUM control chart.
h decision interval (alarm limit, threshold) of the CUSUM control chart.
mu true mean.
n calculate sf up to value n.
hs so-called headstart (enables fast initial response).
sided distinguishes between one- and two-sided CUSUM control chart by choosing "one" and "two", respectively.
r number of quadrature nodes, dimension of the resulting linear equation system is equal to r+1.

Details
The survival function P(L>n) and, derived from it, also the cdf P(L<=n) and the pmf P(L=n) illustrate the distribution of the CUSUM run length. For large n the geometric tail could be exploited.
That is, with reasonably large n the complete distribution is characterized. The algorithm is based on Waldmann’s survival function iteration procedure.

Value
Returns a vector which resembles the survival function up to a certain point.

Author(s)
<NAME>

References
<NAME> (1986), Bounds for the distribution of the run length of one-sided and two-sided CUSUM quality control schemes, Technometrics 28, 61-67.

See Also
xcusum.q for computation of CUSUM run length quantiles.

Examples
## Waldmann (1986), one-sided CUSUM, Table 2
k <- .5
h <- 3
mu <- 0 # corresponds to Waldmann's -0.5
SF <- xcusum.sf(k, h, 0, 1000)
plot(1:length(SF), SF, type="l", xlab="n", ylab="P(L>n)", ylim=c(0,1))
#

xDcusum.arl Compute ARLs of CUSUM control charts under drift

Description
Computation of the (zero-state and other) Average Run Length (ARL) under drift for one-sided CUSUM control charts monitoring normal mean.

Usage
xDcusum.arl(k, h, delta, hs = 0, sided = "one", mode = "Gan", m = NULL, q = 1, r = 30, with0 = FALSE)

Arguments
k reference value of the CUSUM control chart.
h decision interval (alarm limit, threshold) of the CUSUM control chart.
delta true drift parameter.
hs so-called headstart (enables fast initial response).
sided distinguishes between one- and two-sided CUSUM control chart by choosing "one" and "two", respectively. Currently, the two-sided scheme is not implemented.
mode decide whether Gan’s or Knoth’s approach is used. Use "Gan" and "Knoth", respectively.
m parameter used if mode="Gan". m is the design parameter of Gan’s approach. If m=NULL, then m will be increased until the resulting ARL does not change anymore.
q change point position. For q = 1 and µ = µ0 and µ = µ1, the usual zero-state ARLs for the in-control and out-of-control case, respectively, are calculated. For q > 1 and µ ≠ 0 conditional delays, that is, Eq(L − q + 1 | L ≥ q), will be determined. Note that mu0=0 is implicitly fixed. Deploy large q to mimic steady-state. It works only for mode="Knoth".
r number of quadrature nodes, dimension of the resulting linear equation system is equal to r+1 (one-sided) or r (two-sided).
with0 defines whether the first observation used for the RL calculation follows already 1*delta or still 0*delta. With q additional flexibility is given.

Details
Based on Gan (1992) or Knoth (2003), the ARL is calculated for one-sided CUSUM control charts under drift. In case of Gan’s framework, the usual ARL function with mu=m*delta is determined, and the drift ARL is obtained recursively via m-1, m-2, ..., 1 (or 0). The framework of Knoth allows one to calculate ARLs for varying parameters, such as control limits and distributional parameters. For details see the cited papers. Note that two-sided CUSUM charts under drift are difficult to treat.

Value
Returns a single value which resembles the ARL.

Author(s)
<NAME>

References
<NAME> (1992), CUSUM control charts under linear drift, Statistician 41, 71-84.
<NAME> (1996), Average Run Lengths for Cumulative Sum control chart under linear trend, Applied Statistics 45, 505-512.
<NAME> (2003), EWMA schemes with non-homogeneous transition kernels, Sequential Analysis 22, 241-255.
<NAME> (2012), More on Control Charting under Drift, in: Frontiers in Statistical Quality Control 10, <NAME>, <NAME> and <NAME> (Eds.), Physica Verlag, Heidelberg, Germany, 53-68.
<NAME>, <NAME> and <NAME> (2009), Comparisons of control schemes for monitoring the means of processes subject to drifts, Metrika 70, 141-163.

See Also
xcusum.arl and xcusum.ad for zero-state and steady-state ARL computation of CUSUM control charts for the classical step change model.
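The drift model treated here can be cross-checked by brute force. The following Monte Carlo sketch simulates a one-sided CUSUM where observation i is drawn from N(i*delta, 1) (the with0=FALSE convention is an assumption here); it is independent of the integral-equation machinery of xDcusum.arl() and serves only as an illustration.

```r
## Monte Carlo run length of a one-sided CUSUM under linear drift
## (illustrative cross-check, not the package's algorithm)
cusum.drift.rl <- function(k, h, delta) {
  s <- 0
  i <- 0
  repeat {
    i <- i + 1
    s <- max(0, s + rnorm(1, mean = i * delta) - k)  # CUSUM recursion
    if ( s > h ) return(i)
  }
}
set.seed(1)
## compare with xDcusum.arl(0.25, 8, 0.01, with0=FALSE)
mean(replicate(10^4, cusum.drift.rl(0.25, 8, 0.01)))
```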
Examples ## Gan (1992) ## Table 1 ## original values are # deltas arl # 0.0001 475 # 0.0005 261 # 0.0010 187 # 0.0020 129 # 0.0050 76.3 # 0.0100 52.0 # 0.0200 35.2 # 0.0500 21.4 # 0.1000 15.0 # 0.5000 6.95 # 1.0000 5.16 # 3.0000 3.30 ## Not run: k <- .25 h <- 8 r <- 50 DxDcusum.arl <- Vectorize(xDcusum.arl, "delta") deltas <- c(0.0001, 0.0005, 0.001, 0.002, 0.005, 0.01, 0.02, 0.05, 0.1, 0.5, 1, 3) arl.like.Gan <- round(DxDcusum.arl(k, h, deltas, r=r, with0=TRUE), digits=2) arl.like.Knoth <- round(DxDcusum.arl(k, h, deltas, r=r, mode="Knoth", with0=TRUE), digits=2) data.frame(deltas, arl.like.Gan, arl.like.Knoth) ## End(Not run) ## Zou et al. (2009) ## Table 1 ## original values are # delta arl1 arl2 arl3 # 0 ~ 1730 # 0.0005 345 412 470 # 0.001 231 275 317 # 0.005 86.6 98.6 112 # 0.01 56.9 61.8 69.3 # 0.05 22.6 21.6 22.7 # 0.1 15.4 14.7 14.2 # 0.5 6.60 5.54 5.17 # 1.0 4.63 3.80 3.45 # 2.0 3.17 2.67 2.32 # 3.0 2.79 2.04 1.96 # 4.0 2.10 1.98 1.74 ## Not run: k1 <- 0.25 k2 <- 0.5 k3 <- 0.75 h1 <- 9.660 h2 <- 5.620 h3 <- 3.904 deltas <- c(0.0005, 0.001, 0.005, 0.01, 0.05, 0.1, 0.5, 1:4) arl1 <- c(round(xcusum.arl(k1, h1, 0, r=r), digits=2), round(DxDcusum.arl(k1, h1, deltas, r=r), digits=2)) arl2 <- c(round(xcusum.arl(k2, h2, 0), digits=2), round(DxDcusum.arl(k2, h2, deltas, r=r), digits=2)) arl3 <- c(round(xcusum.arl(k3, h3, 0, r=r), digits=2), round(DxDcusum.arl(k3, h3, deltas, r=r), digits=2)) data.frame(delta=c(0, deltas), arl1, arl2, arl3) ## End(Not run) xDewma.arl Compute ARLs of EWMA control charts under drift Description Computation of the (zero-state and other) Average Run Length (ARL) under drift for different types of EWMA control charts monitoring normal mean. Usage xDewma.arl(l, c, delta, zr = 0, hs = 0, sided = "one", limits = "fix", mode = "Gan", m = NULL, q = 1, r = 40, with0 = FALSE) Arguments l smoothing parameter lambda of the EWMA control chart. c critical value (similar to alarm limit) of the EWMA control chart. delta true drift parameter. 
zr reflection border for the one-sided chart.
hs so-called headstart (enables fast initial response).
sided distinguishes between one- and two-sided EWMA control chart by choosing "one" and "two", respectively.
limits distinguishes between different control limits behavior.
mode decide whether Gan’s or Knoth’s approach is used. Use "Gan" and "Knoth", respectively.
m parameter used if mode="Gan". m is the design parameter of Gan’s approach. If m=NULL, then m will be increased until the resulting ARL does not change anymore.
q change point position. For q = 1 and µ = µ0 and µ = µ1, the usual zero-state ARLs for the in-control and out-of-control case, respectively, are calculated. For q > 1 and µ ≠ 0 conditional delays, that is, Eq(L − q + 1 | L ≥ q), will be determined. Note that mu0=0 is implicitly fixed. Deploy large q to mimic steady-state. It works only for mode="Knoth".
r number of quadrature nodes, dimension of the resulting linear equation system is equal to r+1 (one-sided) or r (two-sided).
with0 defines whether the first observation used for the RL calculation follows already 1*delta or still 0*delta. With q additional flexibility is given.

Details
Based on Gan (1991) or Knoth (2003), the ARL is calculated for EWMA control charts under drift. In case of Gan’s framework, the usual ARL function with mu=m*delta is determined, and the drift ARL is obtained recursively via m-1, m-2, ..., 1 (or 0). The framework of Knoth allows one to calculate ARLs for varying parameters, such as control limits and distributional parameters. For details see the cited papers.

Value
Returns a single value which resembles the ARL.

Author(s)
<NAME>

References
<NAME> (1991), EWMA control chart under linear drift, J. Stat. Comput. Simulation 38, 181-200.
<NAME>, <NAME> and <NAME> (1991), Evaluation of control charts under linear trend, Commun. Stat., Theory Methods 20, 3341-3349.
<NAME> (2003), EWMA schemes with non-homogeneous transition kernels, Sequential Analysis 22, 241-255.
<NAME> and <NAME> (2006), Detection of linear trends in process mean, International Journal of Production Research 44, 487-504. <NAME> (2012), More on Control Charting under Drift, in: Frontiers in Statistical Quality Control 10, <NAME>, <NAME> and <NAME> (Eds.), Physica Verlag, Heidelberg, Germany, 53-68. <NAME>, <NAME> and <NAME> (2009), Comparisons of control schemes for monitoring the means of processes subject to drifts, Metrika 70, 141-163. See Also xewma.arl and xewma.ad for zero-state and steady-state ARL computation of EWMA control charts for the classical step change model. Examples ## Not run: DxDewma.arl <- Vectorize(xDewma.arl, "delta") ## Gan (1991) ## Table 1 ## original values are # delta arlE1 arlE2 arlE3 # 0 500 500 500 # 0.0001 482 460 424 # 0.0010 289 231 185 # 0.0020 210 162 129 # 0.0050 126 94.6 77.9 # 0.0100 81.7 61.3 52.7 # 0.0500 27.5 21.8 21.9 # 0.1000 17.0 14.2 15.3 # 1.0000 4.09 4.28 5.25 # 3.0000 2.60 2.90 3.43 # lambda1 <- 0.676 lambda2 <- 0.242 lambda3 <- 0.047 h1 <- 2.204/sqrt(lambda1/(2-lambda1)) h2 <- 1.111/sqrt(lambda2/(2-lambda2)) h3 <- 0.403/sqrt(lambda3/(2-lambda3)) deltas <- c(.0001, .001, .002, .005, .01, .05, .1, 1, 3) arlE10 <- round(xewma.arl(lambda1, h1, 0, sided="two"), digits=2) arlE1 <- c(arlE10, round(DxDewma.arl(lambda1, h1, deltas, sided="two", with0=TRUE), digits=2)) arlE20 <- round(xewma.arl(lambda2, h2, 0, sided="two"), digits=2) arlE2 <- c(arlE20, round(DxDewma.arl(lambda2, h2, deltas, sided="two", with0=TRUE), digits=2)) arlE30 <- round(xewma.arl(lambda3, h3, 0, sided="two"), digits=2) arlE3 <- c(arlE30, round(DxDewma.arl(lambda3, h3, deltas, sided="two", with0=TRUE), digits=2)) data.frame(delta=c(0, deltas), arlE1, arlE2, arlE3) ## do the same with more digits for the alarm threshold L0 <- 500 h1 <- xewma.crit(lambda1, L0, sided="two") h2 <- xewma.crit(lambda2, L0, sided="two") h3 <- xewma.crit(lambda3, L0, sided="two") lambdas <- c(lambda1, lambda2, lambda3) hs <- c(h1, h2, h3) * sqrt(lambdas/(2-lambdas)) hs 
# compare with Gan's values 2.204, 1.111, 0.403 round(hs, digits=3) arlE10 <- round(xewma.arl(lambda1, h1, 0, sided="two"), digits=2) arlE1 <- c(arlE10, round(DxDewma.arl(lambda1, h1, deltas, sided="two", with0=TRUE), digits=2)) arlE20 <- round(xewma.arl(lambda2, h2, 0, sided="two"), digits=2) arlE2 <- c(arlE20, round(DxDewma.arl(lambda2, h2, deltas, sided="two", with0=TRUE), digits=2)) arlE30 <- round(xewma.arl(lambda3, h3, 0, sided="two"), digits=2) arlE3 <- c(arlE30, round(DxDewma.arl(lambda3, h3, deltas, sided="two", with0=TRUE), digits=2)) data.frame(delta=c(0, deltas), arlE1, arlE2, arlE3) ## Aerne et al. (1991) -- two-sided EWMA ## Table I (continued) ## original numbers are # delta arlE1 arlE2 arlE3 # 0.000000 465.0 465.0 465.0 # 0.005623 77.01 85.93 102.68 # 0.007499 64.61 71.78 85.74 # 0.010000 54.20 59.74 71.22 # 0.013335 45.20 49.58 58.90 # 0.017783 37.76 41.06 48.54 # 0.023714 31.54 33.95 39.87 # 0.031623 26.36 28.06 32.68 # 0.042170 22.06 23.19 26.73 # 0.056234 18.49 19.17 21.84 # 0.074989 15.53 15.87 17.83 # 0.100000 13.07 13.16 14.55 # 0.133352 11.03 10.94 11.88 # 0.177828 9.33 9.12 9.71 # 0.237137 7.91 7.62 7.95 # 0.316228 6.72 6.39 6.52 # 0.421697 5.72 5.38 5.37 # 0.562341 4.88 4.54 4.44 # 0.749894 4.18 3.84 3.68 # 1.000000 3.58 3.27 3.07 # lambda1 <- .133 lambda2 <- .25 lambda3 <- .5 cE1 <- 2.856 cE2 <- 2.974 cE3 <- 3.049 deltas <- 10^(-(18:0)/8) arlE10 <- round(xewma.arl(lambda1, cE1, 0, sided="two"), digits=2) arlE1 <- c(arlE10, round(DxDewma.arl(lambda1, cE1, deltas, sided="two"), digits=2)) arlE20 <- round(xewma.arl(lambda2, cE2, 0, sided="two"), digits=2) arlE2 <- c(arlE20, round(DxDewma.arl(lambda2, cE2, deltas, sided="two"), digits=2)) arlE30 <- round(xewma.arl(lambda3, cE3, 0, sided="two"), digits=2) arlE3 <- c(arlE30, round(DxDewma.arl(lambda3, cE3, deltas, sided="two"), digits=2)) data.frame(delta=c(0, round(deltas, digits=6)), arlE1, arlE2, arlE3) ## Fahmy/Elsayed (2006) -- two-sided EWMA ## Table 4 (Monte Carlo results, 10^4 
replicates, change point at t=51!) ## original numbers are # delta arl s.e. # 0.00 365.749 3.598 # 0.10 12.971 0.029 # 0.25 7.738 0.015 # 0.50 5.312 0.009 # 0.75 4.279 0.007 # 1.00 3.680 0.006 # 1.25 3.271 0.006 # 1.50 2.979 0.005 # 1.75 2.782 0.004 # 2.00 2.598 0.005 # lambda <- 0.1 cE <- 2.7 deltas <- c(.1, (1:8)/4) arlE1 <- c(round(xewma.arl(lambda, cE, 0, sided="two"), digits=3), round(DxDewma.arl(lambda, cE, deltas, sided="two"), digits=3)) arlE51 <- c(round(xewma.arl(lambda, cE, 0, sided="two", q=51)[51], digits=3), round(DxDewma.arl(lambda, cE, deltas, sided="two", mode="Knoth", q=51), digits=3)) data.frame(delta=c(0, deltas), arlE1, arlE51) ## additional Monte Carlo results with 10^8 replicates # delta arl.q=1 s.e. arl.q=51 s.e. # 0.00 368.910 0.036 361.714 0.038 # 0.10 12.986 0.000 12.781 0.000 # 0.25 7.758 0.000 7.637 0.000 # 0.50 5.318 0.000 5.235 0.000 # 0.75 4.285 0.000 4.218 0.000 # 1.00 3.688 0.000 3.628 0.000 # 1.25 3.274 0.000 3.233 0.000 # 1.50 2.993 0.000 2.942 0.000 # 1.75 2.808 0.000 2.723 0.000 # 2.00 2.616 0.000 2.554 0.000 ## Zou et al. 
(2009) -- one-sided EWMA
## Table 1
## original values are
# delta  arl1  arl2  arl3
# 0      ~ 1730
# 0.0005 317   377   440
# 0.001  215   253   297
# 0.005  83.6  92.6  106
# 0.01   55.6  58.8  66.1
# 0.05   22.6  21.1  22.0
# 0.1    15.5  13.9  13.8
# 0.5    6.65  5.56  5.09
# 1.0    4.67  3.83  3.43
# 2.0    3.21  2.74  2.32
# 3.0    2.86  2.06  1.98
# 4.0    2.14  2.00  1.83
l1 <- 0.03479
l2 <- 0.11125
l3 <- 0.23052
c1 <- 2.711
c2 <- 3.033
c3 <- 3.161
zr <- -6
r <- 50
deltas <- c(0.0005, 0.001, 0.005, 0.01, 0.05, 0.1, 0.5, 1:4)
arl1 <- c(round(xewma.arl(l1, c1, 0, zr=zr, r=r), digits=2),
          round(DxDewma.arl(l1, c1, deltas, zr=zr, r=r), digits=2))
arl2 <- c(round(xewma.arl(l2, c2, 0, zr=zr), digits=2),
          round(DxDewma.arl(l2, c2, deltas, zr=zr, r=r), digits=2))
arl3 <- c(round(xewma.arl(l3, c3, 0, zr=zr, r=r), digits=2),
          round(DxDewma.arl(l3, c3, deltas, zr=zr, r=r), digits=2))
data.frame(delta=c(0, deltas), arl1, arl2, arl3)
## End(Not run)

xDgrsr.arl Compute ARLs of Shiryaev-Roberts schemes under drift

Description
Computation of the (zero-state and other) Average Run Length (ARL) under drift for Shiryaev-Roberts schemes monitoring normal mean.

Usage
xDgrsr.arl(k, g, delta, zr = 0, hs = NULL, sided = "one", m = NULL, mode = "Gan", q = 1, r = 30, with0 = FALSE)

Arguments
k reference value of the Shiryaev-Roberts scheme.
g control limit (alarm threshold) of the Shiryaev-Roberts scheme.
delta true drift parameter.
zr reflection border for the one-sided chart.
hs so-called headstart (enables fast initial response).
sided distinguishes between one- and two-sided Shiryaev-Roberts schemes by choosing "one" and "two", respectively. Currently, the two-sided scheme is not implemented.
m parameter used if mode="Gan". m is the design parameter of Gan’s approach. If m=NULL, then m will be increased until the resulting ARL does not change anymore.
q change point position. For q = 1 and µ = µ0 and µ = µ1, the usual zero-state ARLs for the in-control and out-of-control case, respectively, are calculated. For q > 1 and µ ≠ 0 conditional delays, that is, Eq(L − q + 1 | L ≥ q), will be determined. Note that mu0=0 is implicitly fixed. Deploy large q to mimic steady-state. It works only for mode="Knoth".
mode decide whether Gan’s or Knoth’s approach is used. Use "Gan" and "Knoth", respectively. "Knoth" is not implemented yet.
r number of quadrature nodes, dimension of the resulting linear equation system is equal to r+1 (one-sided) or r (two-sided).
with0 defines whether the first observation used for the RL calculation follows already 1*delta or still 0*delta. With q additional flexibility is given.

Details
Based on Gan (1991) or Knoth (2003), the ARL is calculated for Shiryaev-Roberts schemes under drift. In case of Gan’s framework, the usual ARL function with mu=m*delta is determined, and the drift ARL is obtained recursively via m-1, m-2, ..., 1 (or 0). The framework of Knoth allows one to calculate ARLs for varying parameters, such as control limits and distributional parameters. For details see the cited papers.

Value
Returns a single value which resembles the ARL.

Author(s)
<NAME>

References
<NAME> (1991), EWMA control chart under linear drift, J. Stat. Comput. Simulation 38, 181-200.
<NAME> (2003), EWMA schemes with non-homogeneous transition kernels, Sequential Analysis 22, 241-255.
<NAME> (2012), More on Control Charting under Drift, in: Frontiers in Statistical Quality Control 10, <NAME>, <NAME> and <NAME> (Eds.), Physica Verlag, Heidelberg, Germany, 53-68.
<NAME>, <NAME> and <NAME> (2009), Comparisons of control schemes for monitoring the means of processes subject to drifts, Metrika 70, 141-163.

See Also
xewma.arl and xewma.ad for zero-state and steady-state ARL computation of EWMA control charts for the classical step change model.

Examples
## Not run:
## Monte Carlo example with 10^8 replicates
# delta arl s.e.
# 0.0001 381.8240 0.0304
# 0.0005 238.4630 0.0148
# 0.001  177.4061 0.0097
# 0.002  125.9055 0.0061
# 0.005  75.7574  0.0031
# 0.01   50.2203  0.0018
# 0.02   32.9458  0.0011
# 0.05   18.9213  0.0005
# 0.1    12.6054  0.0003
# 0.5    5.2157   0.0001
# 1      3.6537   0.0001
# 3      2.0289   0.0000
k <- .5
L0 <- 500
zr <- -7
r <- 50
g <- xgrsr.crit(k, L0, zr=zr, r=r)
DxDgrsr.arl <- Vectorize(xDgrsr.arl, "delta")
deltas <- c(0.0001, 0.0005, 0.001, 0.002, 0.005, 0.01, 0.02, 0.05, 0.1, 0.5, 1, 3)
arls <- round(DxDgrsr.arl(k, g, deltas, zr=zr, r=r), digits=4)
data.frame(deltas, arls)
## End(Not run)

xDshewhartrunsrules.arl Compute ARLs of Shewhart control charts with and without runs rules under drift

Description
Computation of the zero-state Average Run Length (ARL) under drift for Shewhart control charts with and without runs rules monitoring normal mean.

Usage
xDshewhartrunsrules.arl(delta, c = 1, m = NULL, type = "12")
xDshewhartrunsrulesFixedm.arl(delta, c = 1, m = 100, type = "12")

Arguments
delta true drift parameter.
c normalizing constant to ensure specific alarming behavior.
type controls the type of Shewhart chart used, see the details section.
m parameter of Gan’s approach. If m=NULL, then m will be increased until the resulting ARL does not change anymore.

Details
Based on Gan (1991), the ARL is calculated for Shewhart control charts with and without runs rules under drift. The usual ARL function with mu=m*delta is determined, and the drift ARL is obtained recursively via m-1, m-2, ..., 1 (or 0). xDshewhartrunsrulesFixedm.arl is the actual work horse, while xDshewhartrunsrules.arl provides a convenience wrapper. Note that Aerne et al. (1991) deployed a method that is quite similar to Gan’s algorithm. For type see the help page of xshewhartrunsrules.arl.

Value
Returns a single value which resembles the ARL.

Author(s)
<NAME>

References
<NAME> (1991), EWMA control chart under linear drift, J. Stat. Comput. Simulation 38, 181-200.
<NAME>, <NAME> and <NAME> (1991), Evaluation of control charts under linear trend, Commun. Stat., Theory Methods 20, 3341-3349. See Also xshewhartrunsrules.arl for zero-state ARL computation of Shewhart control charts with and without runs rules for the classical step change model. Examples ## Aerne et al. (1991) ## Table I (continued) ## original numbers are # delta arl1of1 arl2of3 arl4of5 arl10 # 0.005623 136.67 120.90 105.34 107.08 # 0.007499 114.98 101.23 88.09 89.94 # 0.010000 96.03 84.22 73.31 75.23 # 0.013335 79.69 69.68 60.75 62.73 # 0.017783 65.75 57.38 50.18 52.18 # 0.023714 53.99 47.06 41.33 43.35 # 0.031623 44.15 38.47 33.99 36.00 # 0.042170 35.97 31.36 27.91 29.90 # 0.056234 29.21 25.51 22.91 24.86 # 0.074989 23.65 20.71 18.81 20.70 # 0.100000 19.11 16.79 15.45 17.29 # 0.133352 15.41 13.61 12.72 14.47 # 0.177828 12.41 11.03 10.50 12.14 # 0.237137 9.98 8.94 8.71 10.18 # 0.316228 8.02 7.25 7.26 8.45 # 0.421697 6.44 5.89 6.09 6.84 # 0.562341 5.17 4.80 5.15 5.48 # 0.749894 4.16 3.92 4.36 4.39 # 1.000000 3.35 3.22 3.63 3.52 c1of1 <- 3.069/3 c2of3 <- 2.1494/2 c4of5 <- 1.14 c10 <- 3.2425/3 DxDshewhartrunsrules.arl <- Vectorize(xDshewhartrunsrules.arl, "delta") deltas <- 10^(-(18:0)/8) arl1of1 <- round(DxDshewhartrunsrules.arl(deltas, c=c1of1, type="1"), digits=2) arl2of3 <- round(DxDshewhartrunsrules.arl(deltas, c=c2of3, type="12"), digits=2) arl4of5 <- round(DxDshewhartrunsrules.arl(deltas, c=c4of5, type="13"), digits=2) arl10 <- round(DxDshewhartrunsrules.arl(deltas, c=c10, type="SameSide10"), digits=2) data.frame(delta=round(deltas, digits=6), arl1of1, arl2of3, arl4of5, arl10) xewma.ad Compute steady-state ARLs of EWMA control charts Description Computation of the steady-state Average Run Length (ARL) for different types of EWMA control charts monitoring normal mean. Usage xewma.ad(l, c, mu1, mu0=0, zr=0, z0=0, sided="one", limits="fix", steady.state.mode="conditional", r=40) Arguments l smoothing parameter lambda of the EWMA control chart. 
c critical value (similar to alarm limit) of the EWMA control chart.
mu1 out-of-control mean.
mu0 in-control mean.
zr reflection border for the one-sided chart.
z0 restarting value of the EWMA sequence in case of a false alarm in steady.state.mode="cyclical".
sided distinguishes between one- and two-sided EWMA control chart by choosing "one" and "two", respectively.
limits distinguishes between different control limits behavior.
steady.state.mode distinguishes between two steady-state modes – conditional and cyclical.
r number of quadrature nodes, dimension of the resulting linear equation system is equal to r+1 (one-sided) or r (two-sided).

Details
xewma.ad determines the steady-state Average Run Length (ARL) by numerically solving the related ARL integral equation by means of the Nystroem method based on Gauss-Legendre quadrature and using the power method for deriving the largest (in magnitude) eigenvalue and the related left eigenfunction.

Value
Returns a single value which resembles the steady-state ARL.

Author(s)
<NAME>

References
<NAME> (1986), A new two-sided cumulative quality control scheme, Technometrics 28, 187-194.
<NAME> (1987), A simple method for studying run-length distributions of exponentially weighted moving average charts, Technometrics 29, 401-407.
<NAME> and <NAME> (1990), Exponentially weighted moving average control schemes: Properties and enhancements, Technometrics 32, 1-12.

See Also
xewma.arl for zero-state ARL computation and xcusum.ad for the steady-state ARL of CUSUM control charts.
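The power-method step mentioned in the Details can be sketched generically in base R. The matrix A below is a toy example; in xewma.ad() the matrix arises from the Nystroem discretization of the ARL integral equation (that part is internal to the package), and applying the same iteration to t(A) would yield the left eigenvector.

```r
## power iteration for the largest (in magnitude) eigenvalue
power.method <- function(A, tol = 1e-10, maxit = 1000) {
  v <- rep(1, nrow(A))
  lambda <- 0
  for ( i in 1:maxit ) {
    w <- as.vector(A %*% v)
    lambda.new <- max(abs(w))   # current eigenvalue estimate
    v <- w / lambda.new         # renormalized eigenvector estimate
    if ( abs(lambda.new - lambda) < tol ) break
    lambda <- lambda.new
  }
  list(value = lambda.new, vector = v)
}
A <- matrix(c(2, 1, 1, 2), 2, 2)
power.method(A)$value  # dominant eigenvalue of this toy matrix is 3
```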
Examples
## comparison of zero-state (= worst case) and steady-state performance
## for two-sided EWMA control charts
l <- .1
c <- xewma.crit(l,500,sided="two")
mu <- c(0,.5,1,1.5,2)
arl <- sapply(mu,l=l,c=c,sided="two",xewma.arl)
ad <- sapply(mu,l=l,c=c,sided="two",xewma.ad)
round(cbind(mu,arl,ad),digits=2)

## Lucas/Saccucci (1990)
## two-sided EWMA
## with fixed limits
l1 <- .5
l2 <- .03
c1 <- 3.071
c2 <- 2.437
mu <- c(0,.25,.5,.75,1,1.5,2,2.5,3,3.5,4,5)
ad1 <- sapply(mu,l=l1,c=c1,sided="two",xewma.ad)
ad2 <- sapply(mu,l=l2,c=c2,sided="two",xewma.ad)
round(cbind(mu,ad1,ad2),digits=2)
## original results are (in Table 3)
## 0.00 499. 480.
## 0.25 254. 74.1
## 0.50 88.4 28.6
## 0.75 35.7 17.3
## 1.00 17.3 12.5
## 1.50 6.44 8.00
## 2.00 3.58 5.95
## 2.50 2.47 4.78
## 3.00 1.91 4.02
## 3.50 1.58 3.49
## 4.00 1.36 3.09
## 5.00 1.10 2.55

xewma.arl Compute ARLs of EWMA control charts

Description
Computation of the (zero-state) Average Run Length (ARL) for different types of EWMA control charts monitoring normal mean.

Usage
xewma.arl(l,c,mu,zr=0,hs=0,sided="one",limits="fix",q=1, steady.state.mode="conditional",r=40)

Arguments
l smoothing parameter lambda of the EWMA control chart.
c critical value (similar to alarm limit) of the EWMA control chart.
mu true mean.
zr reflection border for the one-sided chart.
hs so-called headstart (enables fast initial response).
sided distinguishes between one- and two-sided EWMA control chart by choosing "one" and "two", respectively.
limits distinguishes between different control limits behavior.
q change point position. For q = 1 and µ = µ0 and µ = µ1, the usual zero-state ARLs for the in-control and out-of-control case, respectively, are calculated. For q > 1 and µ ≠ 0 conditional delays, that is, Eq(L − q + 1 | L ≥ q), will be determined. Note that mu0=0 is implicitly fixed.
steady.state.mode distinguishes between two steady-state modes – conditional and cyclical (needed for q>1).
r number of quadrature nodes, dimension of the resulting linear equation system is equal to r+1 (one-sided) or r (two-sided).

Details
In case of the EWMA chart with fixed control limits, xewma.arl determines the Average Run Length (ARL) by numerically solving the related ARL integral equation by means of the Nystroem method based on Gauss-Legendre quadrature. If limits is not "fix", then the method presented in Knoth (2003) is utilized. Note that for one-sided EWMA charts (sided="one"), only "vacl" and "stat" are deployed, while for two-sided ones (sided="two") also "fir", "both" (combination of "fir" and "vacl"), and "Steiner" are implemented. For details see Knoth (2004).

Value
Except for the fixed limits EWMA charts it returns a single value which resembles the ARL. For fixed limits charts, it returns a vector of length q which resembles the ARL and the sequence of conditional expected delays for q=1 and q>1, respectively.

Author(s)
<NAME>

References
<NAME> (1986), Bounds for the distribution of the run length of geometric moving average charts, Appl. Statist. 35, 151-158.
<NAME> (1987), A simple method for studying run-length distributions of exponentially weighted moving average charts, Technometrics 29, 401-407.
<NAME> and <NAME> (1990), Exponentially weighted moving average control schemes: Properties and enhancements, Technometrics 32, 1-12.
<NAME>, <NAME> and <NAME> (1995), Modeling and analysis of EWMA control schemes with variance-adjusted control limits, IIE Transactions 27, 282-290.
<NAME>, <NAME> and <NAME> (1996), Fast initial response scheme for exponentially weighted moving average control chart, Quality Engineering 9, 317-327.
<NAME> (1999), EWMA control charts with time-varying control limits and fast initial response, Journal of Quality Technology 31, 75-86.
<NAME> (2003), EWMA schemes with non-homogeneous transition kernels, Sequential Analysis 22, 241-255.
<NAME> (2004), Fast initial response features for EWMA Control Charts, Statistical Papers 46, 47-64. See Also xcusum.arl for zero-state ARL computation of CUSUM control charts and xewma.ad for the steady-state ARL. Examples ## Waldmann (1986), one-sided EWMA l <- .75 round(xewma.arl(l,2*sqrt((2-l)/l),0,zr=-4*sqrt((2-l)/l)),digits=1) l <- .5 round(xewma.arl(l,2*sqrt((2-l)/l),0,zr=-4*sqrt((2-l)/l)),digits=1) ## original values are 209.3 and 3907.5 (in Table 2). ## Waldmann (1986), two-sided EWMA with fixed control limits l <- .75 round(xewma.arl(l,2*sqrt((2-l)/l),0,sided="two"),digits=1) l <- .5 round(xewma.arl(l,2*sqrt((2-l)/l),0,sided="two"),digits=1) ## original values are 104.0 and 1952 (in Table 1). ## Crowder (1987), two-sided EWMA with fixed control limits l1 <- .5 l2 <- .05 c <- 2 mu <- (0:16)/4 arl1 <- sapply(mu,l=l1,c=c,sided="two",xewma.arl) arl2 <- sapply(mu,l=l2,c=c,sided="two",xewma.arl) round(cbind(mu,arl1,arl2),digits=2) ## original results are (in Table 1) ## 0.00 26.45 127.53 ## 0.25 20.12 43.94 ## 0.50 11.89 18.97 ## 0.75 7.29 11.64 ## 1.00 4.91 8.38 ## 1.25 3.95* 6.56 ## 1.50 2.80 5.41 ## 1.75 2.29 4.62 ## 2.00 1.94 4.04 ## 2.25 1.70 3.61 ## 2.50 1.51 3.26 ## 2.75 1.37 2.99 ## 3.00 1.26 2.76 ## 3.25 1.18 2.56 ## 3.50 1.12 2.39 ## 3.75 1.08 2.26 ## 4.00 1.05 2.15 (* -- in Crowder (1987) typo!?) ## Lucas/Saccucci (1990) ## two-sided EWMA ## with fixed limits l1 <- .5 l2 <- .03 c1 <- 3.071 c2 <- 2.437 mu <- c(0,.25,.5,.75,1,1.5,2,2.5,3,3.5,4,5) arl1 <- sapply(mu,l=l1,c=c1,sided="two",xewma.arl) arl2 <- sapply(mu,l=l2,c=c2,sided="two",xewma.arl) round(cbind(mu,arl1,arl2),digits=2) ## original results are (in Table 3) ## 0.00 500. 500. ## 0.25 255.
76.7 ## 0.50 88.8 29.3 ## 0.75 35.9 17.6 ## 1.00 17.5 12.6 ## 1.50 6.53 8.07 ## 2.00 3.63 5.99 ## 2.50 2.50 4.80 ## 3.00 1.93 4.03 ## 3.50 1.58 3.49 ## 4.00 1.34 3.11 ## 5.00 1.07 2.55 ## Not run: ## with fir feature l1 <- .5 l2 <- .03 c1 <- 3.071 c2 <- 2.437 hs1 <- c1/2 hs2 <- c2/2 mu <- c(0,.5,1,2,3,5) arl1 <- sapply(mu,l=l1,c=c1,hs=hs1,sided="two",limits="fir",xewma.arl) arl2 <- sapply(mu,l=l2,c=c2,hs=hs2,sided="two",limits="fir",xewma.arl) round(cbind(mu,arl1,arl2),digits=2) ## original results are (in Table 5) ## 0.0 487. 406. ## 0.5 86.1 18.4 ## 1.0 15.9 7.36 ## 2.0 2.87 3.43 ## 3.0 1.45 2.34 ## 5.0 1.01 1.57 ## Chandrasekaran, English, Disney (1995) ## two-sided EWMA with fixed and variance adjusted limits (vacl) l1 <- .25 l2 <- .1 c1s <- 2.9985 c1n <- 3.0042 c2s <- 2.8159 c2n <- 2.8452 mu <- c(0,.25,.5,.75,1,2) arl1s <- sapply(mu,l=l1,c=c1s,sided="two",xewma.arl) arl1n <- sapply(mu,l=l1,c=c1n,sided="two",limits="vacl",xewma.arl) arl2s <- sapply(mu,l=l2,c=c2s,sided="two",xewma.arl) arl2n <- sapply(mu,l=l2,c=c2n,sided="two",limits="vacl",xewma.arl) round(cbind(mu,arl1s,arl1n,arl2s,arl2n),digits=2) ## original results are (in Table 2) ## 0.00 500. 500. 500. 500. ## 0.25 170.09 167.54 105.90 96.6 ## 0.50 48.14 45.65 31.08 24.35 ## 0.75 20.02 19.72 15.71 10.74 ## 1.00 11.07 9.37 10.23 6.35 ## 2.00 3.59 2.64 4.32 2.73 ## The results in Chandrasekaran, English, Disney (1995) are not ## that accurate. 
Let us consider the more appropriate comparison c1s <- xewma.crit(l1,500,sided="two") c1n <- xewma.crit(l1,500,sided="two",limits="vacl") c2s <- xewma.crit(l2,500,sided="two") c2n <- xewma.crit(l2,500,sided="two",limits="vacl") mu <- c(0,.25,.5,.75,1,2) arl1s <- sapply(mu,l=l1,c=c1s,sided="two",xewma.arl) arl1n <- sapply(mu,l=l1,c=c1n,sided="two",limits="vacl",xewma.arl) arl2s <- sapply(mu,l=l2,c=c2s,sided="two",xewma.arl) arl2n <- sapply(mu,l=l2,c=c2n,sided="two",limits="vacl",xewma.arl) round(cbind(mu,arl1s,arl1n,arl2s,arl2n),digits=2) ## which demonstrate the abilities of the variance-adjusted limits ## scheme more explicitely. ## Rhoads, Montgomery, Mastrangelo (1996) ## two-sided EWMA with fixed and variance adjusted limits (vacl), ## with fir and both features l <- .03 c <- 2.437 mu <- c(0,.5,1,1.5,2,3,4) sl <- sqrt(l*(2-l)) arlfix <- sapply(mu,l=l,c=c,sided="two",xewma.arl) arlvacl <- sapply(mu,l=l,c=c,sided="two",limits="vacl",xewma.arl) arlfir <- sapply(mu,l=l,c=c,hs=c/2,sided="two",limits="fir",xewma.arl) arlboth <- sapply(mu,l=l,c=c,hs=c/2*sl,sided="two",limits="both",xewma.arl) round(cbind(mu,arlfix,arlvacl,arlfir,arlboth),digits=1) ## original results are (in Table 1) ## 0.0 477.3* 427.9* 383.4* 286.2* ## 0.5 29.7 20.0 18.6 12.8 ## 1.0 12.5 6.5 7.4 3.6 ## 1.5 8.1 3.3 4.6 1.9 ## 2.0 6.0 2.2 3.4 1.4 ## 3.0 4.0 1.3 2.4 1.0 ## 4.0 3.1 1.1 1.9 1.0 ## * -- the in-control values differ sustainably from the true values! ## Steiner (1999) ## two-sided EWMA control charts with various modifications ## fixed vs. 
variance adjusted limits l <- .05 c <- 3 mu <- c(0,.25,.5,.75,1,1.5,2,2.5,3,3.5,4) arlfix <- sapply(mu,l=l,c=c,sided="two",xewma.arl) arlvacl <- sapply(mu,l=l,c=c,sided="two",limits="vacl",xewma.arl) round(cbind(mu,arlfix,arlvacl),digits=1) ## original results are (in Table 2) ## 0.00 1379.0 1353.0 ## 0.25 135.0 127.0 ## 0.50 37.4 32.5 ## 0.75 20.0 15.6 ## 1.00 13.5 9.0 ## 1.50 8.3 4.5 ## 2.00 6.0 2.8 ## 2.50 4.8 2.0 ## 3.00 4.0 1.6 ## 3.50 3.4 1.3 ## 4.00 3.0 1.1 ## fir, both, and Steiner's modification l <- .03 cfir <- 2.44 cboth <- 2.54 cstein <- 2.55 hsfir <- cfir/2 hsboth <- cboth/2*sqrt(l*(2-l)) mu <- c(0,.5,1,1.5,2,3,4) arlfir <- sapply(mu,l=l,c=cfir,hs=hsfir,sided="two",limits="fir",xewma.arl) arlboth <- sapply(mu,l=l,c=cboth,hs=hsboth,sided="two",limits="both",xewma.arl) arlstein <- sapply(mu,l=l,c=cstein,sided="two",limits="Steiner",xewma.arl) round(cbind(mu,arlfir,arlboth,arlstein),digits=1) ## original values are (in Table 5) ## 0.0 383.0 384.0 391.0 ## 0.5 18.6 14.9 13.8 ## 1.0 7.4 3.9 3.6 ## 1.5 4.6 2.0 1.8 ## 2.0 3.4 1.4 1.3 ## 3.0 2.4 1.1 1.0 ## 4.0 1.9 1.0 1.0 ## SAS/QC manual 1999 ## two-sided EWMA control charts with fixed limits l <- .25 c <- 3 mu <- 1 print(xewma.arl(l,c,mu,sided="two"),digits=11) # original value is 11.154267016. ## Some recent examples for one-sided EWMA charts ## with varying limits and in the so-called stationary mode # 1. varying limits = "vacl" lambda <- .1 L0 <- 500 ## Monte Carlo results (10^9 replicates) # mu ARL s.e. # 0 500.00 0.0160 # 0.5 21.637 0.0006 # 1 6.7596 0.0001 # 1.5 3.5398 0.0001 # 2 2.3038 0.0000 # 2.5 1.7004 0.0000 # 3 1.3675 0.0000 zr <- -6 r <- 50 c <- xewma.crit(lambda, L0, zr=zr, limits="vacl", r=r) Mxewma.arl <- Vectorize(xewma.arl, "mu") mus <- (0:6)/2 arls <- round(Mxewma.arl(lambda, c, mus, zr=zr, limits="vacl", r=r), digits=4) data.frame(mus, arls) # 2. stationary mode, i. e. limits = "stat" ## Monte Carlo results (10^9 replicates) # mu ARL s.e. 
# 0 500.00 0.0159
# 0.5 22.313 0.0006
# 1 7.2920 0.0001
# 1.5 3.9064 0.0001
# 2 2.5131 0.0000
# 2.5 1.7983 0.0000
# 3 1.4029 0.0000
c <- xewma.crit(lambda, L0, zr=zr, limits="stat", r=r)
arls <- round(Mxewma.arl(lambda, c, mus, zr=zr, limits="stat", r=r), digits=4)
data.frame(mus, arls)
## End(Not run)

xewma.arl.f Compute ARL function of EWMA control charts

Description
Computation of the (zero-state) Average Run Length (ARL) function for different types of EWMA control charts monitoring normal mean.

Usage
xewma.arl.f(l,c,mu,zr=0,sided="one",limits="fix",r=40)

Arguments
l smoothing parameter lambda of the EWMA control chart.
c critical value (similar to alarm limit) of the EWMA control chart.
mu true mean.
zr reflection border for the one-sided chart.
sided distinguishes between one- and two-sided EWMA control charts by choosing "one" and "two", respectively.
limits distinguishes between different control limits behavior.
r number of quadrature nodes; the dimension of the resulting linear equation system is equal to r+1 (one-sided) or r (two-sided).

Details
It is a convenience function to yield the ARL as a function of the head start hs. For more details see xewma.arl.

Value
It returns a function of a single argument, hs=x, which maps the head-start value hs to the ARL.

Author(s)
<NAME>

References
<NAME> (1987), A simple method for studying run-length distributions of exponentially weighted moving average charts, Technometrics 29, 401-407.

See Also
xewma.arl for the zero-state ARL for one specific head-start hs.

Examples
# will follow

xewma.arl.prerun Compute ARLs of EWMA control charts in case of estimated parameters

Description
Computation of the (zero-state) Average Run Length (ARL) for different types of EWMA control charts monitoring normal mean if the in-control mean, standard deviation, or both are estimated by a pre run.
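The Examples slot of xewma.arl.f above is still marked "# will follow". In the meantime, such ARL values can be cross-checked with a minimal base-R Monte Carlo sketch (not part of the package; ewma.rl.sim is a hypothetical helper name), here reproducing the SAS/QC case l=0.25, c=3, mu=1 with exact ARL 11.154 quoted earlier:

```r
# Hedged sketch, base R only: Monte Carlo estimate of the zero-state ARL of a
# two-sided EWMA chart with fixed limits and headstart hs = 0.
# 'ewma.rl.sim' is a hypothetical helper, not a function of the package.
ewma.rl.sim <- function(l, c, mu, n.max = 100000) {
  z <- 0                              # EWMA statistic, zero headstart
  limit <- c * sqrt(l / (2 - l))      # fixed (asymptotic) control limits
  for (n in seq_len(n.max)) {
    z <- (1 - l) * z + l * rnorm(1, mean = mu)
    if (abs(z) > limit) return(n)     # run length = first alarm time
  }
  n.max
}

set.seed(1)
rls <- replicate(10000, ewma.rl.sim(0.25, 3, 1))
mean(rls)   # should come out close to the exact value 11.154 quoted above
```

With 10^4 replications the standard error is roughly 0.1, so the agreement with the numerical value from xewma.arl is easy to see.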
Usage
xewma.arl.prerun(l, c, mu, zr=0, hs=0, sided="two", limits="fix", q=1,
size=100, df=NULL, estimated="mu", qm.mu=30, qm.sigma=30, truncate=1e-10)
xewma.crit.prerun(l, L0, mu, zr=0, hs=0, sided="two", limits="fix", size=100,
df=NULL, estimated="mu", qm.mu=30, qm.sigma=30, truncate=1e-10,
c.error=1e-12, L.error=1e-9, OUTPUT=FALSE)

Arguments
l smoothing parameter lambda of the EWMA control chart.
c critical value (similar to alarm limit) of the EWMA control chart.
mu true mean shift.
zr reflection border for the one-sided chart.
hs so-called headstart (enables fast initial response).
sided distinguishes between one- and two-sided EWMA control charts by choosing "one" and "two", respectively.
limits distinguishes between different control limits behavior.
q change point position. For q = 1 and µ = µ0 or µ = µ1, the usual zero-state ARLs for the in-control and out-of-control case, respectively, are calculated. For q > 1 and µ ≠ 0, conditional delays, that is, E_q(L − q + 1 | L ≥ q), will be determined. Note that mu0=0 is implicitly fixed.
size pre run sample size.
df degrees of freedom of the pre run variance estimator. Typically it is simply size - 1. If the pre run is collected in batches, then other values are needed as well.
estimated name of the parameter to be estimated: one of "mu", "sigma", "both".
qm.mu number of quadrature nodes for convoluting the mean uncertainty.
qm.sigma number of quadrature nodes for convoluting the standard deviation uncertainty.
truncate size of truncated tail.
L0 in-control ARL.
c.error error bound for two succeeding values of the critical value during application of the secant rule.
L.error error bound for the ARL level L0 during application of the secant rule.
OUTPUT activate or deactivate additional output.

Details
Essentially, the ARL function xewma.arl is convoluted with the distribution of the sample mean, standard deviation or both. For details see Jones/Champ/Rigdon (2001) and Knoth (2014?).

Value
Returns a single value which resembles the ARL.
Author(s) <NAME> References <NAME>, <NAME>, <NAME> (2001), The performance of exponentially weighted moving average charts with estimated parameters, Technometrics 43, 156-167. <NAME> (2003), EWMA schemes with non-homogeneous transition kernels, Sequential Analysis 22, 241-255. <NAME> (2004), Fast initial response features for EWMA Control Charts, Statistical Papers 46, 47-64. <NAME> (2014?), tbd, tbd, tbd-tbd. See Also xewma.arl for the usual zero-state ARL computation. Examples ## Jones/Champ/Rigdon (2001) c4m <- function(m, n) sqrt(2)*gamma( (m*(n-1)+1)/2 )/sqrt( m*(n-1) )/gamma( m*(n-1)/2 ) n <- 5 # sample size m <- 20 # pre run with 20 samples of size n = 5 C4m <- c4m(m, n) # needed for bias correction # Table 1, 3rd column lambda <- 0.2 L <- 2.636 xewma.ARL <- Vectorize("xewma.arl", "mu") xewma.ARL.prerun <- Vectorize("xewma.arl.prerun", "mu") mu <- c(0, .25, .5, 1, 1.5, 2) ARL <- round(xewma.ARL(lambda, L, mu, sided="two"), digits=2) p.ARL <- round(xewma.ARL.prerun(lambda, L/C4m, mu, sided="two", size=m, df=m*(n-1), estimated="both", qm.mu=70), digits=2) # Monte-Carlo with 10^8 repetitions: 200.325 (0.020) and 144.458 (0.022) cbind(mu, ARL, p.ARL) ## Not run: # Figure 5, subfigure r = 0.2 mu_ <- (0:85)/40 ARL_ <- round(xewma.ARL(lambda, L, mu_, sided="two"), digits=2) p.ARL_ <- round(xewma.ARL.prerun(lambda, L/C4m, mu_, sided="two", size=m, df=m*(n-1), estimated="both"), digits=2) plot(mu_, ARL_, type="l", xlab=expression(delta), ylab="ARL", xlim=c(0,2)) abline(v=0, h=0, col="grey", lwd=.7) points(mu, ARL, pch=5) lines(mu_, p.ARL_, col="blue") points(mu, p.ARL, pch=18, col ="blue") legend("topright", c("Known", "Estimated"), col=c("black", "blue"), lty=1, pch=c(5, 18)) ## End(Not run) xewma.crit Compute critical values of EWMA control charts Description Computation of the critical values (similar to alarm limits) for different types of EWMA control charts monitoring normal mean. 
Usage
xewma.crit(l,L0,mu0=0,zr=0,hs=0,sided="one",limits="fix",r=40,c0=NULL)

Arguments
l smoothing parameter lambda of the EWMA control chart.
L0 in-control ARL.
mu0 in-control mean.
zr reflection border for the one-sided chart.
hs so-called headstart (enables fast initial response).
sided distinguishes between one- and two-sided EWMA control charts by choosing "one" and "two", respectively.
limits distinguishes between different control limits behavior.
r number of quadrature nodes; the dimension of the resulting linear equation system is equal to r+1 (one-sided) or r (two-sided).
c0 starting value for the iteration rule.

Details
xewma.crit determines the critical values (similar to alarm limits) for given in-control ARL L0 by applying the secant rule and using xewma.arl().

Value
Returns a single value which resembles the critical value c.

Author(s)
<NAME>

References
<NAME> (1989), Design of exponentially weighted moving average schemes, Journal of Quality Technology 21, 155-162.

See Also
xewma.arl for zero-state ARL computation.

Examples
l <- .1
incontrolARL <- c(500,5000,50000)
sapply(incontrolARL,l=l,sided="two",xewma.crit,r=35) # accuracy with 35 nodes
sapply(incontrolARL,l=l,sided="two",xewma.crit) # accuracy with 40 nodes
sapply(incontrolARL,l=l,sided="two",xewma.crit,r=50) # accuracy with 50 nodes
## Crowder (1989)
## two-sided EWMA control charts with fixed limits
l <- c(.05,.1,.15,.2,.25)
L0 <- 250
round(sapply(l,L0=L0,sided="two",xewma.crit),digits=2)
## original values are 2.32, 2.55, 2.65, 2.72, and 2.76.

xewma.q Compute RL quantiles of EWMA control charts

Description
Computation of quantiles of the Run Length (RL) for EWMA control charts monitoring normal mean.

Usage
xewma.q(l, c, mu, alpha, zr=0, hs=0, sided="two", limits="fix", q=1, r=40)
xewma.q.crit(l, L0, mu, alpha, zr=0, hs=0, sided="two", limits="fix", r=40,
c.error=1e-12, a.error=1e-9, OUTPUT=FALSE)

Arguments
l smoothing parameter lambda of the EWMA control chart.
c critical value (similar to alarm limit) of the EWMA control chart.
mu true mean.
alpha quantile level.
zr reflection border for the one-sided chart.
hs so-called headstart (enables fast initial response).
sided distinguishes between one- and two-sided EWMA control charts by choosing "one" and "two", respectively.
limits distinguishes between different control limits behavior.
q change point position. For q = 1 and µ = µ0 or µ = µ1, the usual zero-state ARLs for the in-control and out-of-control case, respectively, are calculated. For q > 1 and µ ≠ 0, conditional delays, that is, E_q(L − q + 1 | L ≥ q), will be determined. Note that mu0=0 is implicitly fixed.
r number of quadrature nodes; the dimension of the resulting linear equation system is equal to r+1 (one-sided) or r (two-sided).
L0 in-control quantile value.
c.error error bound for two succeeding values of the critical value during application of the secant rule.
a.error error bound for the quantile level alpha during application of the secant rule.
OUTPUT activate or deactivate additional output.

Details
Instead of the popular ARL (Average Run Length), quantiles of the EWMA stopping time (Run Length) are determined. The algorithm is based on Waldmann's survival function iteration procedure. If limits is not "fix", then the method presented in Knoth (2003) is utilized. Note that for one-sided EWMA charts (sided="one"), only "vacl" and "stat" are deployed, while for two-sided ones (sided="two") also "fir", "both" (a combination of "fir" and "vacl"), and "Steiner" are implemented. For details see Knoth (2004).

Value
Returns a single value which resembles the RL quantile of order alpha.

Author(s)
<NAME>

References
<NAME> (1993), An optimal design of EWMA control charts based on the median run length, J. Stat. Comput. Simulation 45, 169-184.
<NAME> (2003), EWMA schemes with non-homogeneous transition kernels, Sequential Analysis 22, 241-255.
<NAME> (2004), Fast initial response features for EWMA Control Charts, Statistical Papers 46, 47-64.
<NAME> (2015), Run length quantiles of EWMA control charts monitoring normal mean or/and variance, International Journal of Production Research 53, 4629-4647.
<NAME> (1986), Bounds for the distribution of the run length of geometric moving average charts, Appl. Statist. 35, 151-158.

See Also
xewma.arl for zero-state ARL computation of EWMA control charts.

Examples
## Gan (1993), two-sided EWMA with fixed control limits
## some values of his Table 1 -- any median RL should be 500
XEWMA.Q <- Vectorize("xewma.q", c("l", "c"))
G.lambda <- c(.05, .1, .15, .2, .25)
G.h <- c(.441, .675, .863, 1.027, 1.177)
MEDIAN <- ceiling(XEWMA.Q(G.lambda, G.h/sqrt(G.lambda/(2-G.lambda)), 0, .5, sided="two"))
print(cbind(G.lambda, MEDIAN))
## increase accuracy of thresholds
# (i) calculate threshold for given in-control median value by
# deploying secant rule
XEWMA.q.crit <- Vectorize("xewma.q.crit", "l")
# (ii) re-calculate the thresholds and remove the standardization step
L0 <- 500
G.h.new <- XEWMA.q.crit(G.lambda, L0, 0, .5, sided="two")
G.h.new <- round(G.h.new * sqrt(G.lambda/(2-G.lambda)), digits=5)
# (iii) compare Gan's original values and the new ones with 5 digits
print(cbind(G.lambda, G.h.new, G.h))
# (iv) calculate the new medians
MEDIAN <- ceiling(XEWMA.Q(G.lambda, G.h.new/sqrt(G.lambda/(2-G.lambda)), 0, .5, sided="two"))
print(cbind(G.lambda, MEDIAN))

xewma.q.prerun Compute RL quantiles of EWMA control charts in case of estimated parameters

Description
Computation of quantiles of the Run Length (RL) for EWMA control charts monitoring normal mean if the in-control mean, standard deviation, or both are estimated by a pre run.
Usage
xewma.q.prerun(l, c, mu, p, zr=0, hs=0, sided="two", limits="fix", q=1,
size=100, df=NULL, estimated="mu", qm.mu=30, qm.sigma=30,
truncate=1e-10, bound=1e-10)
xewma.q.crit.prerun(l, L0, mu, p, zr=0, hs=0, sided="two", limits="fix",
size=100, df=NULL, estimated="mu", qm.mu=30, qm.sigma=30, truncate=1e-10,
bound=1e-10, c.error=1e-10, p.error=1e-9, OUTPUT=FALSE)

Arguments
l smoothing parameter lambda of the EWMA control chart.
c critical value (similar to alarm limit) of the EWMA control chart.
mu true mean shift.
p quantile level.
zr reflection border for the one-sided chart.
hs so-called headstart (enables fast initial response).
sided distinguishes between one- and two-sided EWMA control charts by choosing "one" and "two", respectively.
limits distinguishes between different control limits behavior.
q change point position. For q = 1 and µ = µ0 or µ = µ1, the usual zero-state ARLs for the in-control and out-of-control case, respectively, are calculated. For q > 1 and µ ≠ 0, conditional delays, that is, E_q(L − q + 1 | L ≥ q), will be determined. Note that mu0=0 is implicitly fixed.
size pre run sample size.
df degrees of freedom of the pre run variance estimator. Typically it is simply size - 1. If the pre run is collected in batches, then other values are needed as well.
estimated name of the parameter to be estimated: one of "mu", "sigma", "both".
qm.mu number of quadrature nodes for convoluting the mean uncertainty.
qm.sigma number of quadrature nodes for convoluting the standard deviation uncertainty.
truncate size of truncated tail.
bound controls when the geometric tail kicks in; the larger, the quicker and less accurate; bound should be larger than 0 and less than 0.001.
L0 in-control quantile value.
c.error error bound for two succeeding values of the critical value during application of the secant rule.
p.error error bound for the quantile level p during application of the secant rule.
OUTPUT activate or deactivate additional output.
Details
Essentially, the ARL function xewma.q is convoluted with the distribution of the sample mean, standard deviation or both. For details see Jones/Champ/Rigdon (2001) and Knoth (2014?).

Value
Returns a single value which resembles the RL quantile of order p.

Author(s)
<NAME>

References
<NAME>, <NAME>, <NAME> (2001), The performance of exponentially weighted moving average charts with estimated parameters, Technometrics 43, 156-167.
<NAME> (2003), EWMA schemes with non-homogeneous transition kernels, Sequential Analysis 22, 241-255.
<NAME> (2004), Fast initial response features for EWMA Control Charts, Statistical Papers 46, 47-64.
<NAME> (2014?), tbd, tbd, tbd-tbd.
<NAME> (1986), Bounds for the distribution of the run length of geometric moving average charts, Appl. Statist. 35, 151-158.

See Also
xewma.q for the usual RL quantiles computation of EWMA control charts.

Examples
## Jones/Champ/Rigdon (2001)
c4m <- function(m, n) sqrt(2)*gamma( (m*(n-1)+1)/2 )/sqrt( m*(n-1) )/gamma( m*(n-1)/2 )
n <- 5 # sample size
m <- 20 # pre run with 20 samples of size n = 5
C4m <- c4m(m, n) # needed for bias correction
# Table 1, 3rd column
lambda <- 0.2
L <- 2.636
xewma.Q <- Vectorize("xewma.q", "mu")
xewma.Q.prerun <- Vectorize("xewma.q.prerun", "mu")
mu <- c(0, .25, .5, 1, 1.5, 2)
Q1 <- ceiling(xewma.Q(lambda, L, mu, 0.1, sided="two"))
Q2 <- ceiling(xewma.Q(lambda, L, mu, 0.5, sided="two"))
Q3 <- ceiling(xewma.Q(lambda, L, mu, 0.9, sided="two"))
cbind(mu, Q1, Q2, Q3)
## Not run:
p.Q1 <- xewma.Q.prerun(lambda, L/C4m, mu, 0.1, sided="two", size=m, df=m*(n-1), estimated="both")
p.Q2 <- xewma.Q.prerun(lambda, L/C4m, mu, 0.5, sided="two", size=m, df=m*(n-1), estimated="both")
p.Q3 <- xewma.Q.prerun(lambda, L/C4m, mu, 0.9, sided="two", size=m, df=m*(n-1), estimated="both")
cbind(mu, p.Q1, p.Q2, p.Q3)
## End(Not run)
## original values are
# mu Q1 Q2 Q3 p.Q1 p.Q2 p.Q3
# 0.00 25 140 456 13 73 345
# 0.25 12 56 174 9 46 253
# 0.50 7 20 56 6 20 101
# 1.00 4 7 15 3 7 18
# 1.50 3 4 7 2
4 8
# 2.00 2 3 5 2 3 5

xewma.sf Compute the survival function of EWMA run length

Description
Computation of the survival function of the Run Length (RL) for EWMA control charts monitoring normal mean.

Usage
xewma.sf(l, c, mu, n, zr=0, hs=0, sided="one", limits="fix", q=1, r=40)

Arguments
l smoothing parameter lambda of the EWMA control chart.
c critical value (similar to alarm limit) of the EWMA control chart.
mu true mean.
n calculate sf up to value n.
zr reflection border for the one-sided chart.
hs so-called headstart (enables fast initial response).
sided distinguishes between one- and two-sided EWMA control charts by choosing "one" and "two", respectively.
limits distinguishes between different control limits behavior.
q change point position. For q = 1 and µ = µ0 or µ = µ1, the usual zero-state situations for the in-control and out-of-control case, respectively, are calculated. Note that mu0=0 is implicitly fixed.
r number of quadrature nodes; the dimension of the resulting linear equation system is equal to r+1 (one-sided) or r (two-sided).

Details
The survival function P(L>n), and derived from it also the cdf P(L<=n) and the pmf P(L=n), illustrate the distribution of the EWMA run length. For large n the geometric tail could be exploited. That is, with reasonably large n the complete distribution is characterized. The algorithm is based on Waldmann's survival function iteration procedure. For varying limits and for change points after 1, the algorithm from Knoth (2004) is applied. Note that for one-sided EWMA charts (sided="one"), only "vacl" and "stat" are deployed, while for two-sided ones (sided="two") also "fir", "both" (a combination of "fir" and "vacl"), and "Steiner" are implemented. For details see Knoth (2004).

Value
Returns a vector which resembles the survival function up to a certain point.

Author(s)
<NAME>

References
<NAME> (1993), An optimal design of EWMA control charts based on the median run length, J. Stat. Comput. Simulation 45, 169-184.
<NAME> (2003), EWMA schemes with non-homogeneous transition kernels, Sequential Analysis 22, 241-255.
<NAME> (2004), Fast initial response features for EWMA Control Charts, Statistical Papers 46, 47-64.
<NAME> (1986), Bounds for the distribution of the run length of geometric moving average charts, Appl. Statist. 35, 151-158.

See Also
xewma.arl for zero-state ARL computation of EWMA control charts.

Examples
## Gan (1993), two-sided EWMA with fixed control limits
## some values of his Table 1 -- any median RL should be 500
G.lambda <- c(.05, .1, .15, .2, .25)
G.h <- c(.441, .675, .863, 1.027, 1.177)/sqrt(G.lambda/(2-G.lambda))
for ( i in 1:length(G.lambda) ) {
  SF <- xewma.sf(G.lambda[i], G.h[i], 0, 1000)
  if (i==1) plot(1:length(SF), SF, type="l", xlab="n", ylab="P(L>n)")
  else lines(1:length(SF), SF, col=i)
}

xewma.sf.prerun Compute the survival function of EWMA run length in case of estimated parameters

Description
Computation of the survival function of the Run Length (RL) for EWMA control charts monitoring normal mean if the in-control mean, standard deviation, or both are estimated by a pre run.

Usage
xewma.sf.prerun(l, c, mu, n, zr=0, hs=0, sided="one", limits="fix", q=1,
size=100, df=NULL, estimated="mu", qm.mu=30, qm.sigma=30,
truncate=1e-10, tail_approx=TRUE, bound=1e-10)

Arguments
l smoothing parameter lambda of the EWMA control chart.
c critical value (similar to alarm limit) of the EWMA control chart.
mu true mean.
n calculate sf up to value n.
zr reflection border for the one-sided chart.
hs so-called headstart (enables fast initial response).
sided distinguishes between one- and two-sided EWMA control charts by choosing "one" and "two", respectively.
limits distinguishes between different control limits behavior.
q change point position. For q = 1 and µ = µ0 or µ = µ1, the usual zero-state situations for the in-control and out-of-control case, respectively, are calculated. Note that mu0=0 is implicitly fixed.
size pre run sample size.
df degrees of freedom of the pre run variance estimator. Typically it is simply size - 1. If the pre run is collected in batches, then also other values are needed. estimated name the parameter to be estimated within the "mu", "sigma", "both". qm.mu number of quadrature nodes for convoluting the mean uncertainty. qm.sigma number of quadrature nodes for convoluting the standard deviation uncertainty. truncate size of truncated tail. tail_approx Controls whether the geometric tail approximation is used (is faster) or not. bound control when the geometric tail kicks in; the larger the quicker and less accurate; bound should be larger than 0 and less than 0.001. Details The survival function P(L>n) and derived from it also the cdf P(L<=n) and the pmf P(L=n) illustrate the distribution of the EWMA run length... Value Returns a vector which resembles the survival function up to a certain point. Author(s) <NAME> References <NAME> (1993), An optimal design of EWMA control charts based on the median run length, J. Stat. Comput. Simulation 45, 169-184. <NAME> (2003), EWMA schemes with non-homogeneous transition kernels, Sequential Analysis 22, 241-255. <NAME> (2004), Fast initial response features for EWMA Control Charts, Statistical Papers 46, 47-64. <NAME>, <NAME>, <NAME> (2001), The performance of exponentially weighted moving average charts with estimated parameters, Technometrics 43, 156-167. <NAME> (1986), Bounds for the distribution of the run length of geometric moving average charts, Appl. Statist. 35, 151-158. See Also xewma.sf for the RL survival function of EWMA control charts w/o pre run uncertainty. 
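Before the package examples, the meaning of the survival-function output can be made concrete with a hedged base-R Monte Carlo sketch (independent of the package; sim.rl is our own hypothetical helper), using the known-parameter setting lambda=0.1, L=2.454 from the Jones/Champ/Rigdon example:

```r
# Hedged sketch, base R only: empirical survival function P(L > n) of a
# two-sided EWMA chart with fixed limits; 'sim.rl' is a hypothetical helper,
# not part of the package.
sim.rl <- function(l, c, mu, n.max = 5000) {
  z <- 0                            # zero headstart
  limit <- c * sqrt(l / (2 - l))    # fixed (asymptotic) control limits
  for (n in seq_len(n.max)) {
    z <- (1 - l) * z + l * rnorm(1, mean = mu)
    if (abs(z) > limit) return(n)
  }
  n.max
}

set.seed(1)
L <- replicate(2000, sim.rl(0.1, 2.454, 0))
SF <- sapply(1:600, function(n) mean(L > n))   # empirical P(L > n)
CDF <- 1 - SF                                  # empirical P(L <= n)
```

The empirical CDF should hug the exact curve computed via xewma.sf in the Jones/Champ/Rigdon example; plotting log(SF) makes the geometric tail mentioned in the Details visible as an (approximate) straight line.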
Examples
## Jones/Champ/Rigdon (2001)
c4m <- function(m, n) sqrt(2)*gamma( (m*(n-1)+1)/2 )/sqrt( m*(n-1) )/gamma( m*(n-1)/2 )
n <- 5 # sample size
# Figure 6, subfigure r=0.1
lambda <- 0.1
L <- 2.454
CDF0 <- 1 - xewma.sf(lambda, L, 0, 600, sided="two")
m <- 10 # pre run size
CDF1 <- 1 - xewma.sf.prerun(lambda, L/c4m(m,n), 0, 600, sided="two", size=m, df=m*(n-1), estimated="both")
m <- 20
CDF2 <- 1 - xewma.sf.prerun(lambda, L/c4m(m,n), 0, 600, sided="two", size=m, df=m*(n-1), estimated="both")
m <- 50
CDF3 <- 1 - xewma.sf.prerun(lambda, L/c4m(m,n), 0, 600, sided="two", size=m, df=m*(n-1), estimated="both")
plot(CDF0, type="l", xlab="t", ylab=expression(P(T<=t)), xlim=c(0,500), ylim=c(0,1))
abline(v=0, h=c(0,1), col="grey", lwd=.7)
points((1:5)*100, CDF0[(1:5)*100], pch=18)
lines(CDF1, col="blue")
points((1:5)*100, CDF1[(1:5)*100], pch=2, col="blue")
lines(CDF2, col="red")
points((1:5)*100, CDF2[(1:5)*100], pch=16, col="red")
lines(CDF3, col="green")
points((1:5)*100, CDF3[(1:5)*100], pch=5, col="green")
legend("bottomright", c("Known", "m=10, n=5", "m=20, n=5", "m=50, n=5"),
col=c("black", "blue", "red", "green"), pch=c(18, 2, 16, 5), lty=1)

xgrsr.ad Compute steady-state ARLs of Shiryaev-Roberts schemes

Description
Computation of the steady-state Average Run Length (ARL) for Shiryaev-Roberts schemes monitoring normal mean.

Usage
xgrsr.ad(k, g, mu1, mu0 = 0, zr = 0, sided = "one", MPT = FALSE, r = 30)

Arguments
k reference value of the Shiryaev-Roberts scheme.
g control limit (alarm threshold) of the Shiryaev-Roberts scheme.
mu1 out-of-control mean.
mu0 in-control mean.
zr reflection border to enable the numerical algorithms used here.
sided distinguishes between one- and two-sided schemes by choosing "one" and "two", respectively. Currently only one-sided schemes are implemented.
MPT switch between the old implementation (FALSE) and the new one (TRUE) that considers the completed likelihood ratio. MPT contains the initials of Moustakides, <NAME> and <NAME>.
r number of quadrature nodes; the dimension of the resulting linear equation system is equal to r+1.

Details
xgrsr.ad determines the steady-state Average Run Length (ARL) by numerically solving the related ARL integral equation by means of the Nystroem method based on Gauss-Legendre quadrature.

Value
Returns a single value which resembles the steady-state ARL.

Author(s)
<NAME>

References
<NAME> (2006), The art of evaluating monitoring schemes – how to measure the performance of control charts? <NAME>. & <NAME>. (ed.), Frontiers in Statistical Quality Control 8, Physica Verlag, Heidelberg, Germany, 74-99.
<NAME>, <NAME>, <NAME> (2009), Numerical comparison of CUSUM and Shiryaev-Roberts procedures for detecting changes in distributions, Communications in Statistics: Theory and Methods 38, 3225-3239.

See Also
xewma.arl and xcusum-arl for zero-state ARL computation of EWMA and CUSUM control charts, respectively, and xgrsr.arl for the zero-state ARL.

Examples
## Small study to identify appropriate reflection border to mimic unreflected schemes
k <- .5
g <- log(390)
zrs <- -(0:10)
ZRxgrsr.ad <- Vectorize(xgrsr.ad, "zr")
ads <- ZRxgrsr.ad(k, g, 0, zr=zrs)
data.frame(zrs, ads)
## Table 2 from Knoth (2006)
## original values are
# mu arl
# 0 689
# 0.5 30
# 1 8.9
# 1.5 5.1
# 2 3.6
# 2.5 2.8
# 3 2.4
#
k <- .5
g <- log(390)
zr <- -5 # see first example
mus <- (0:6)/2
Mxgrsr.ad <- Vectorize(xgrsr.ad, "mu1")
ads <- round(Mxgrsr.ad(k, g, mus, zr=zr), digits=1)
data.frame(mus, ads)
## Table 4 from Moustakides et al.
(2009)
## original values are
# gamma A STADD/steady-state ARL
# 50 28.02 4.37
# 100 56.04 5.46
# 500 280.19 8.33
# 1000 560.37 9.64
# 5000 2801.75 12.79
# 10000 5603.7 14.17
Gxgrsr.ad <- Vectorize("xgrsr.ad", "g")
As <- c(28.02, 56.04, 280.19, 560.37, 2801.75, 5603.7)
gs <- log(As)
theta <- 1
zr <- -6
ads <- round(Gxgrsr.ad(theta/2, gs, theta, zr=zr, r=100), digits=2)
data.frame(As, ads)

xgrsr.arl Compute (zero-state) ARLs of Shiryaev-Roberts schemes

Description
Computation of the (zero-state) Average Run Length (ARL) for Shiryaev-Roberts schemes monitoring normal mean.

Usage
xgrsr.arl(k, g, mu, zr = 0, hs=NULL, sided = "one", q = 1, MPT = FALSE, r = 30)

Arguments
k reference value of the Shiryaev-Roberts scheme.
g control limit (alarm threshold) of the Shiryaev-Roberts scheme.
mu true mean.
zr reflection border to enable the numerical algorithms used here.
hs so-called headstart (enables fast initial response). If hs=NULL, then the classical headstart -Inf is used (corresponds to 0 for the non-log scheme).
sided distinguishes between one- and two-sided schemes by choosing "one" and "two", respectively. Currently only one-sided schemes are implemented.
q change point position. For q = 1 and µ = µ0 or µ = µ1, the usual zero-state ARLs for the in-control and out-of-control case, respectively, are calculated. For q > 1 and µ ≠ 0, conditional delays, that is, E_q(L − q + 1 | L ≥ q), will be determined. Note that mu0=0 is implicitly fixed.
MPT switch between the old implementation (FALSE) and the new one (TRUE) that considers the complete likelihood ratio. MPT stands for the initials of Moustakides, <NAME> and <NAME>.
r number of quadrature nodes; the dimension of the resulting linear equation system is equal to r+1.

Details
xgrsr.arl determines the Average Run Length (ARL) by numerically solving the related ARL integral equation by means of the Nystroem method based on Gauss-Legendre quadrature.
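These ARL figures can also be cross-checked by simulating the log-scale Shiryaev-Roberts recursion directly. A hedged base-R sketch (our own simulator with a hypothetical name, not the package code; it assumes the usual normal likelihood-ratio increment for a shift of size 2k and the classical headstart R_0 = 0, i.e. z = -Inf on the log scale):

```r
# Hedged sketch, base R only: Monte Carlo run length of a one-sided
# Shiryaev-Roberts scheme on the log scale,
#   z_n = log(1 + exp(z_{n-1})) + 2k(x_n - k),
# alarming once z_n exceeds g. 'sr.rl.sim' is a hypothetical helper.
sr.rl.sim <- function(k, g, mu, n.max = 100000) {
  z <- -Inf                                 # classical headstart, R_0 = 0
  for (n in seq_len(n.max)) {
    x <- rnorm(1, mean = mu)
    z <- log1p(exp(z)) + 2 * k * (x - k)    # add log likelihood ratio
    if (z > g) return(n)
  }
  n.max
}

set.seed(1)
arl1 <- mean(replicate(4000, sr.rl.sim(0.5, log(390), 1)))
arl1   # roughly 10, cf. the mu = 1 entry of the Knoth (2006) Table 2 example
```

Since no reflection border is simulated, the comparison also illustrates the point of the "small study" above: for sufficiently small zr the reflected numerical scheme mimics the unreflected one.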
Value Returns a vector of length q which resembles the ARL and the sequence of conditional expected delays for q=1 and q>1, respectively. Author(s) <NAME> References <NAME> (2006), The art of evaluating monitoring schemes – how to measure the performance of control charts? <NAME>, H. & <NAME>. (ed.), Frontiers in Statistical Quality Control 8, Physica Verlag, Heidelberg, Germany, 74-99. <NAME>, <NAME>, <NAME> (2009), Numerical comparison of CUSUM and Shiryaev-Roberts procedures for detecting changes in distributions, Communications in Statistics: Theory and Methods 38, 3225-3239. See Also xewma.arl and xcusum-arl for zero-state ARL computation of EWMA and CUSUM control charts, respectively, and xgrsr.ad for the steady-state ARL. Examples ## Small study to identify appropriate reflection border to mimic unreflected schemes k <- .5 g <- log(390) zrs <- -(0:10) ZRxgrsr.arl <- Vectorize(xgrsr.arl, "zr") arls <- ZRxgrsr.arl(k, g, 0, zr=zrs) data.frame(zrs, arls) ## Table 2 from Knoth (2006) ## original values are # mu arl # 0 697 # 0.5 33 # 1 10.4 # 1.5 6.2 # 2 4.4 # 2.5 3.5 # 3 2.9 # k <- .5 g <- log(390) zr <- -5 # see first example mus <- (0:6)/2 Mxgrsr.arl <- Vectorize(xgrsr.arl, "mu") arls <- round(Mxgrsr.arl(k, g, mus, zr=zr), digits=1) data.frame(mus, arls) XGRSR.arl <- Vectorize("xgrsr.arl", "g") zr <- -6 ## Table 2 from Moustakides et al. (2009) ## original values are # gamma A ARL/E_infty(L) SADD/E_1(L) # 50 47.17 50.29 41.40 # 100 94.34 100.28 72.32 # 500 471.70 500.28 209.44 # 1000 943.41 1000.28 298.50 # 5000 4717.04 5000.24 557.87 #10000 9434.08 10000.17 684.17 theta <- .1 As2 <- c(47.17, 94.34, 471.7, 943.41, 4717.04, 9434.08) gs2 <- log(As2) arls0 <- round(XGRSR.arl(theta/2, gs2, 0, zr=-5, r=300, MPT=TRUE), digits=2) arls1 <- round(XGRSR.arl(theta/2, gs2, theta, zr=-5, r=300, MPT=TRUE), digits=2) data.frame(As2, arls0, arls1) ## Table 3 from Moustakides et al. 
(2009)
## original values are
# gamma A ARL/E_infty(L) SADD/E_1(L)
# 50 37.38 49.45 12.30
# 100 74.76 99.45 16.60
# 500 373.81 499.45 28.05
# 1000 747.62 999.45 33.33
# 5000 3738.08 4999.45 45.96
#10000 7476.15 9999.24 51.49
theta <- .5
As3 <- c(37.38, 74.76, 373.81, 747.62, 3738.08, 7476.15)
gs3 <- log(As3)
arls0 <- round(XGRSR.arl(theta/2, gs3, 0, zr=-5, r=70, MPT=TRUE), digits=2)
arls1 <- round(XGRSR.arl(theta/2, gs3, theta, zr=-5, r=70, MPT=TRUE), digits=2)
data.frame(As3, arls0, arls1)
## Table 4 from Moustakides et al. (2009)
## original values are
# gamma A ARL/E_infty(L) SADD/E_1(L)
# 50 28.02 49.78 4.98
# 100 56.04 99.79 6.22
# 500 280.19 499.79 9.30
# 1000 560.37 999.79 10.66
# 5000 2801.85 5000.93 13.86
#10000 5603.70 9999.87 15.24
theta <- 1
As4 <- c(28.02, 56.04, 280.19, 560.37, 2801.85, 5603.7)
gs4 <- log(As4)
arls0 <- round(XGRSR.arl(theta/2, gs4, 0, zr=-6, r=40, MPT=TRUE), digits=2)
arls1 <- round(XGRSR.arl(theta/2, gs4, theta, zr=-6, r=40, MPT=TRUE), digits=2)
data.frame(As4, arls0, arls1)

xgrsr.crit Compute alarm thresholds for Shiryaev-Roberts schemes

Description
Computation of the alarm thresholds (alarm limits) for Shiryaev-Roberts schemes monitoring normal mean.

Usage
xgrsr.crit(k, L0, mu0 = 0, zr = 0, hs = NULL, sided = "one", MPT = FALSE, r = 30)

Arguments
k reference value of the Shiryaev-Roberts scheme.
L0 in-control ARL.
mu0 in-control mean.
zr reflection border to enable the numerical algorithms used here.
hs so-called headstart (enables fast initial response). If hs=NULL, then the classical headstart -Inf is used (corresponds to 0 for the non-log scheme).
sided distinguishes between one- and two-sided schemes by choosing "one" and "two", respectively. Currently only one-sided schemes are implemented.
MPT switch between the old implementation (FALSE) and the new one (TRUE) that considers the completed likelihood ratio. MPT contains the initials of Moustakides, <NAME> and <NAME>.
r        number of quadrature nodes; the dimension of the resulting linear equation system is equal to r+1.
Details
xgrsr.crit determines the alarm threshold (alarm limit) for a given in-control ARL L0 by applying the secant rule and using xgrsr.arl().
Value
Returns a single value which resembles the alarm limit g.
Author(s)
<NAME>
References
<NAME>, <NAME>, <NAME> (2009), Numerical comparison of CUSUM and Shiryaev-Roberts procedures for detecting changes in distributions, Communications in Statistics: Theory and Methods 38, 3225-3239.
See Also
xgrsr.arl for zero-state ARL computation.
Examples
## Table 4 from Moustakides et al. (2009)
## original values are
# gamma/L0   A/exp(g)
#    50        28.02
#   100        56.04
#   500       280.19
#  1000       560.37
#  5000      2801.75
# 10000       5603.7
theta <- 1
zr <- -6
r <- 100
Lxgrsr.crit <- Vectorize("xgrsr.crit", "L0")
L0s <- c(50, 100, 500, 1000, 5000, 10000)
gs <- Lxgrsr.crit(theta/2, L0s, zr=zr, r=r)
data.frame(L0s, gs, A=round(exp(gs), digits=2))

xsewma.arl          Compute ARLs of simultaneous EWMA control charts (mean and variance charts)
Description
Computation of the (zero-state) Average Run Length (ARL) for different types of simultaneous EWMA control charts (based on the sample mean and the sample variance S^2) monitoring normal mean and variance.
Usage
xsewma.arl(lx, cx, ls, csu, df, mu, sigma, hsx=0, Nx=40, csl=0, hss=1, Ns=40, s2.on=TRUE, sided="upper", qm=30)
Arguments
lx       smoothing parameter lambda of the two-sided mean EWMA chart.
cx       control limit of the two-sided mean EWMA control chart.
ls       smoothing parameter lambda of the variance EWMA chart.
csu      upper control limit of the variance EWMA control chart.
df       actual degrees of freedom, corresponds to the subgroup size (for known mean it is equal to the subgroup size, for unknown mean it is equal to the subgroup size minus one).
mu       true mean.
sigma    true standard deviation.
hsx      so-called headstart (enables fast initial response) of the mean chart – do not confuse with the true FIR feature considered in xewma.arl; will be updated.
Nx       dimension of the approximating matrix of the mean chart.
csl      lower control limit of the variance EWMA control chart; default value is 0; not considered if sided is "upper".
hss      headstart (enables fast initial response) of the variance chart.
Ns       dimension of the approximating matrix of the variance chart.
s2.on    distinguishes between S^2 and S chart.
sided    distinguishes between one- and two-sided EWMA-S^2 control charts by choosing "upper" (upper chart without reflection at cl – the actual value of cl is not used), "Rupper" (upper chart with reflection at cl), "Rlower" (lower chart with reflection at cu), and "two" (two-sided chart), respectively.
qm       number of quadrature nodes used for the collocation integrals.
Details
xsewma.arl determines the Average Run Length (ARL) by an extension of Gan's algorithm (derived from ideas already published by Waldmann). The variance EWMA part is treated similarly to the ARL calculation method deployed for the single variance EWMA charts in Knoth (2005), that is, by means of collocation (Chebyshev polynomials). For more details see Knoth (2007).
Value
Returns a single value which resembles the ARL.
Author(s)
<NAME>
References
<NAME> (1986), Bounds for the distribution of the run length of geometric moving average charts, J. R. Stat. Soc., Ser. C, Appl. Stat. 35, 151-158.
<NAME> (1995), Joint monitoring of process mean and variance using exponentially weighted moving average control charts, Technometrics 37, 446-453.
<NAME> (2005), Accurate ARL computation for EWMA-S^2 control charts, Statistics and Computing 15, 341-352.
<NAME> (2007), Accurate ARL calculation for EWMA control charts monitoring simultaneously normal mean and variance, Sequential Analysis 26, 251-264.
See Also
xewma.arl and sewma.arl for zero-state ARL computation of single mean and variance EWMA control charts, respectively.
Examples
## Knoth (2007)
## collocation results in Table 1
## Monte Carlo with 10^9 replicates: 252.307 +/- 0.0078
# process parameters
mu <- 0
sigma <- 1
# subgroup size n=5, df=n-1
df <- 4
# lambda of mean chart
lx <- .134
# c_mu^* = .345476571 = cx/sqrt(n) * sqrt(lx/(2-lx))
cx <- .345476571*sqrt(df+1)/sqrt(lx/(2-lx))
# lambda of variance chart
ls <- .1
# c_sigma = .477977
csu <- 1 + .477977
# matrix dimensions for mean and variance part
Nx <- 25
Ns <- 25
# mode of variance chart
SIDED <- "upper"
arl <- xsewma.arl(lx, cx, ls, csu, df, mu, sigma, Nx=Nx, Ns=Ns, sided=SIDED)
arl

xsewma.crit          Compute critical values of simultaneous EWMA control charts (mean and variance charts)
Description
Computation of the critical values (similar to alarm limits) for different types of simultaneous EWMA control charts (based on the sample mean and the sample variance S^2) monitoring normal mean and variance.
Usage
xsewma.crit(lx, ls, L0, df, mu0=0, sigma0=1, cu=NULL, hsx=0, hss=1, s2.on=TRUE, sided="upper", mode="fixed", Nx=30, Ns=40, qm=30)
Arguments
lx       smoothing parameter lambda of the two-sided mean EWMA chart.
ls       smoothing parameter lambda of the variance EWMA chart.
L0       in-control ARL.
mu0      in-control mean.
sigma0   in-control standard deviation.
cu       for the two-sided case (sided="two") with fixed upper control limit (mode="fixed") a value larger than sigma0 has to be given; for all other cases cu is ignored.
hsx      so-called headstart (enables fast initial response) of the mean chart – do not confuse with the true FIR feature considered in xewma.arl; will be updated.
hss      headstart (enables fast initial response) of the variance chart.
df       actual degrees of freedom, corresponds to the subgroup size (for known mean it is equal to the subgroup size, for unknown mean it is equal to the subgroup size minus one).
s2.on    distinguishes between S^2 and S chart.
sided    distinguishes between one- and two-sided EWMA-S^2 control charts by choosing "upper" (upper chart without reflection at cl – the actual value of cl is not used), "Rupper" (upper chart with reflection at cl), "Rlower" (lower chart with reflection at cu), and "two" (two-sided chart), respectively.
mode     only deployed for sided="two" – with "fixed" an upper control limit (see cu) is set and only the lower one is determined to obtain the in-control ARL L0, while with "unbiased" a certain unbiasedness of the ARL function is guaranteed (here, both the lower and the upper control limit are calculated).
Nx       dimension of the approximating matrix of the mean chart.
Ns       dimension of the approximating matrix of the variance chart.
qm       number of quadrature nodes used for the collocation integrals.
Details
xsewma.crit determines the critical values (similar to alarm limits) for a given in-control ARL L0 by applying the secant rule and using xsewma.arl(). In case of sided="two" and mode="unbiased" a two-dimensional secant rule is applied that also ensures that the maximum of the ARL function for given standard deviation is attained at sigma0. See Knoth (2007) for details and application.
Value
Returns the critical value of the two-sided mean EWMA chart and the lower and upper control limits cl and cu of the variance EWMA chart.
Author(s)
<NAME>
References
<NAME> (2007), Accurate ARL calculation for EWMA control charts monitoring simultaneously normal mean and variance, Sequential Analysis 26, 251-264.
See Also
xsewma.arl for calculation of ARL of simultaneous EWMA charts.
Examples
## Knoth (2007)
## results in Table 2
# subgroup size n=5, df=n-1
df <- 4
# lambda of mean chart
lx <- .134
# lambda of variance chart
ls <- .1
# in-control ARL
L0 <- 252.3
# matrix dimensions for mean and variance part
Nx <- 25
Ns <- 25
# mode of variance chart
SIDED <- "upper"
crit <- xsewma.crit(lx, ls, L0, df, sided=SIDED, Nx=Nx, Ns=Ns)
crit
## output as used in Knoth (2007)
crit["cx"]/sqrt(df+1)*sqrt(lx/(2-lx))
crit["cu"] - 1

xsewma.q          Compute critical values of simultaneous EWMA control charts (mean and variance charts) for given RL quantile
Description
Computation of the critical values (similar to alarm limits) for different types of simultaneous EWMA control charts (based on the sample mean and the sample variance S^2) monitoring normal mean and variance.
Usage
xsewma.q(lx, cx, ls, csu, df, alpha, mu, sigma, hsx=0, Nx=40, csl=0, hss=1, Ns=40, sided="upper", qm=30)
xsewma.q.crit(lx, ls, L0, alpha, df, mu0=0, sigma0=1, csu=NULL, hsx=0, hss=1, sided="upper", mode="fixed", Nx=20, Ns=40, qm=30, c.error=1e-12, a.error=1e-9)
Arguments
lx       smoothing parameter lambda of the two-sided mean EWMA chart.
cx       control limit of the two-sided mean EWMA control chart.
ls       smoothing parameter lambda of the variance EWMA chart.
csu      for the two-sided case (sided="two") with fixed upper control limit (mode="fixed", only for xsewma.q.crit) a value larger than sigma0 has to be given; for all other cases csu is ignored. It is the upper control limit of the variance EWMA control chart.
L0       in-control RL quantile at level alpha.
df       actual degrees of freedom, corresponds to the subgroup size (for known mean it is equal to the subgroup size, for unknown mean it is equal to the subgroup size minus one).
alpha    quantile level.
mu       true mean.
sigma    true standard deviation.
mu0      in-control mean.
sigma0   in-control standard deviation.
hsx      so-called headstart (enables fast initial response) of the mean chart – do not confuse with the true FIR feature considered in xewma.arl; will be updated.
Nx       dimension of the approximating matrix of the mean chart.
csl      lower control limit of the variance EWMA control chart; default value is 0; not considered if sided is "upper".
hss      headstart (enables fast initial response) of the variance chart.
Ns       dimension of the approximating matrix of the variance chart.
sided    distinguishes between one- and two-sided EWMA-S^2 control charts by choosing "upper" (upper chart without reflection at cl – the actual value of cl is not used).
mode     only deployed for sided="two" – with "fixed" an upper control limit (see cu) is set and only the lower one is determined to obtain the in-control ARL L0, while with "unbiased" a certain unbiasedness of the ARL function is guaranteed (here, both the lower and the upper control limit are calculated).
qm       number of quadrature nodes used for the collocation integrals.
c.error  error bound for two succeeding values of the critical value during application of the secant rule.
a.error  error bound for the quantile level alpha during application of the secant rule.
Details
Instead of the popular ARL (Average Run Length), quantiles of the EWMA stopping time (Run Length) are determined. The algorithm is based on Waldmann's survival function iteration procedure and on Knoth (2007). xsewma.q.crit determines the critical values (similar to alarm limits) for a given in-control RL quantile L0 at level alpha by applying the secant rule and using xsewma.sf(). In case of sided="two" and mode="unbiased" a two-dimensional secant rule is applied that also ensures that the maximum of the RL cdf for given standard deviation is attained at sigma0.
Value
xsewma.q returns a single value which resembles the RL quantile of order alpha; xsewma.q.crit returns the critical value of the two-sided mean EWMA chart and the lower and upper control limits csl and csu of the variance EWMA chart, respectively.
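The difference between the ARL and an RL quantile is easiest to see for a chart whose run length is exactly geometric, such as the memoryless Shewhart chart. For an alarm probability p per sample, the RL quantile of order alpha is ceil(log(1-alpha)/log(1-p)), so the median run length is roughly ARL*log(2) and therefore clearly below the ARL. The following toy sketch (not the spc algorithm; helper names are invented) makes that concrete:

```python
from math import erf, sqrt, log, ceil

def Phi(x):
    # standard normal cdf
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

# two-sided Shewhart chart with 3-sigma limits: alarm probability per sample
p = 2.0 * (1.0 - Phi(3.0))
arl = 1.0 / p                      # about 370.4

def rl_quantile(alpha, p):
    # smallest n with P(L <= n) >= alpha for a geometric run length
    return ceil(log(1.0 - alpha) / log(1.0 - p))

median_rl = rl_quantile(0.5, p)
print(round(arl, 1), median_rl)    # the median is roughly arl*log(2)
```

For EWMA charts the run length is not geometric near the start, which is why the package computes the whole survival function instead of using such a shortcut; the geometric picture only describes the tail.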
Author(s)
<NAME>
References
<NAME> (2007), Accurate ARL calculation for EWMA control charts monitoring simultaneously normal mean and variance, Sequential Analysis 26, 251-264.
See Also
xsewma.arl for calculation of ARL of simultaneous EWMA charts and xsewma.sf for the RL survival function.
Examples
## will follow

xsewma.sf          Compute the survival function of simultaneous EWMA control charts (mean and variance charts)
Description
Computation of the survival function of the Run Length (RL) for EWMA control charts monitoring simultaneously normal mean and variance.
Usage
xsewma.sf(n, lx, cx, ls, csu, df, mu, sigma, hsx=0, Nx=40, csl=0, hss=1, Ns=40, sided="upper", qm=30)
Arguments
n        calculate sf up to value n.
lx       smoothing parameter lambda of the two-sided mean EWMA chart.
cx       control limit of the two-sided mean EWMA control chart.
ls       smoothing parameter lambda of the variance EWMA chart.
csu      upper control limit of the variance EWMA control chart.
df       actual degrees of freedom, corresponds to the subgroup size (for known mean it is equal to the subgroup size, for unknown mean it is equal to the subgroup size minus one).
mu       true mean.
sigma    true standard deviation.
hsx      so-called headstart (enables fast initial response) of the mean chart – do not confuse with the true FIR feature considered in xewma.arl; will be updated.
Nx       dimension of the approximating matrix of the mean chart.
csl      lower control limit of the variance EWMA control chart; default value is 0; not considered if sided is "upper".
hss      headstart (enables fast initial response) of the variance chart.
Ns       dimension of the approximating matrix of the variance chart.
sided    distinguishes between one- and two-sided EWMA-S^2 control charts by choosing "upper" (upper chart without reflection at cl – the actual value of cl is not used), "Rupper" (upper chart with reflection at cl), "Rlower" (lower chart with reflection at cu), and "two" (two-sided chart), respectively.
qm       number of quadrature nodes used for the collocation integrals.
Details
The survival function P(L>n), and derived from it also the cdf P(L<=n) and the pmf P(L=n), illustrate the distribution of the EWMA run length. For large n the geometric tail can be exploited. That is, with reasonably large n the complete distribution is characterized. The algorithm is based on Waldmann's survival function iteration procedure and on results in Knoth (2007).
Value
Returns a vector which resembles the survival function up to a certain point.
Author(s)
<NAME>
References
<NAME> (2007), Accurate ARL calculation for EWMA control charts monitoring simultaneously normal mean and variance, Sequential Analysis 26, 251-264.
<NAME> (1986), Bounds for the distribution of the run length of geometric moving average charts, Appl. Statist. 35, 151-158.
See Also
xsewma.arl for zero-state ARL computation of simultaneous EWMA control charts.
Examples
## will follow

xshewhart.ar1.arl          Compute ARLs of modified Shewhart control charts for AR(1) data
Description
Computation of the (zero-state) Average Run Length (ARL) for modified Shewhart charts deployed to the original AR(1) data.
Usage
xshewhart.ar1.arl(alpha, cS, delta=0, N1=50, N2=30)
Arguments
alpha    lag 1 correlation of the data.
cS       critical value (alias to alarm limit) of the Shewhart control chart.
delta    potential shift in the data (the in-control mean is zero).
N1       number of quadrature nodes for solving the ARL integral equation; the dimension of the resulting linear equation system is N1.
N2       second number of quadrature nodes for combining the probability density function of the first observation following the margin distribution and the solution of the ARL integral equation.
Details
Following the idea of Schmid (1995), 1-alpha times the data turns out to be an EWMA smoothing of the underlying AR(1) residuals.
Hence, by combining the solution of the EWMA ARL integral equation and the stationary distribution of the AR(1) data (a normal distribution is assumed) one easily gets the overall ARL.
Value
It returns a single value resembling the zero-state ARL of a modified Shewhart chart.
Author(s)
<NAME>
References
<NAME>, <NAME> (2004). Control charts for time series: A review. In Frontiers in Statistical Quality Control 7, edited by <NAME>, <NAME>, 210-236, Physica-Verlag.
<NAME>, <NAME> (2000). The influence of parameter estimation on the ARL of Shewhart type charts for time series. Statistical Papers 41(2), 173-196.
<NAME> (1995). On the run length of a Shewhart chart for correlated data. Statistical Papers 36(1), 111-130.
See Also
xewma.arl for zero-state ARL computation of EWMA control charts.
Examples
## Table 1 in Kramer/Schmid (2000)
cS <- 3.09023
a <- seq(0, 4, by=.5)
row1 <- row2 <- row3 <- NULL
for ( i in 1:length(a) ) {
  row1 <- c(row1, round(xshewhart.ar1.arl( 0.4, cS, delta=a[i]), digits=2))
  row2 <- c(row2, round(xshewhart.ar1.arl( 0.2, cS, delta=a[i]), digits=2))
  row3 <- c(row3, round(xshewhart.ar1.arl(-0.2, cS, delta=a[i]), digits=2))
}
results <- rbind(row1, row2, row3)
results
# original values are
# row1 515.44 215.48 61.85 21.63 9.19 4.58 2.61 1.71 1.29
# row2 502.56 204.97 56.72 19.13 7.95 3.97 2.33 1.59 1.25
# row3 502.56 201.41 54.05 17.42 6.89 3.37 2.03 1.46 1.20

xshewhartrunsrules.arl          Compute ARLs of Shewhart control charts with and without runs rules
Description
Computation of the (zero-state and steady-state) Average Run Length (ARL) for Shewhart control charts with and without runs rules monitoring normal mean.
Usage
xshewhartrunsrules.arl(mu, c = 1, type = "12")
xshewhartrunsrules.crit(L0, mu = 0, type = "12")
xshewhartrunsrules.ad(mu1, mu0 = 0, c = 1, type = "12")
xshewhartrunsrules.matrix(mu, c = 1, type = "12")
Arguments
mu       true mean.
L0       pre-defined in-control ARL, that is, determine c so that the mean number of observations until a false alarm is L0.
mu1, mu0 for the steady-state ARL two means are specified; mu0 is the in-control one and usually equal to 0, and mu1 must be given.
c        normalizing constant to ensure specific alarming behavior.
type     controls the type of Shewhart chart used, see details section.
Details
xshewhartrunsrules.arl determines the zero-state Average Run Length (ARL) based on the Markov chain approach given in Champ and Woodall (1987). xshewhartrunsrules.matrix provides the corresponding transition matrix that is also used in xDshewhartrunsrules.arl (ARL for control charting drift). xshewhartrunsrules.crit allows one to find the normalization constant c to attain a fixed in-control ARL. Typically this is needed to calibrate the chart. With xshewhartrunsrules.ad the steady-state ARL is calculated. With the argument type certain runs rules can be set. The following list gives an overview.
• "1" The classical Shewhart chart with +/- 3 c sigma control limits (c is typically equal to 1 and can be changed by the argument c).
• "12" The classic and the following runs rule: 2 of 3 are beyond +/- 2 c sigma on the same side of the chart.
• "13" The classic and the following runs rule: 4 of 5 are beyond +/- 1 c sigma on the same side of the chart.
• "14" The classic and the following runs rule: 8 of 8 are on the same side of the chart (referring to the center line).
Value
Returns a single value which resembles the zero-state or steady-state ARL. xshewhartrunsrules.matrix returns a matrix.
Author(s)
<NAME>
References
<NAME> and <NAME> (1987), Exact results for Shewhart control charts with supplementary runs rules, Technometrics 29, 393-399.
See Also
xDshewhartrunsrules.arl for zero-state ARL of Shewhart control charts with or without runs rules under drift.
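The Markov chain approach of Champ and Woodall can be sketched without the spc package for the "14" rule at mu=0. Exploiting the symmetry of the two sides, the chain state is just the length of the current same-side run; the sketch below solves the resulting small linear system in closed form and is a minimal illustration, not the package's implementation.

```python
from math import erf, sqrt

def Phi(x):
    # standard normal cdf
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def arl_rule14(c=1.0):
    # Rule "14": alarm if |X| > 3c or 8 consecutive points fall on one side of
    # the center line. At mu=0 both sides are symmetric, so it suffices to
    # track the length k = 1..7 of the current same-side run (k = 8 or
    # |X| > 3c is absorbing).
    p = Phi(3.0 * c) - Phi(0.0)   # P(point falls strictly between 0 and 3c)
    # a_k = expected remaining run length given a same-side run of length k:
    #   a_k = 1 + p*a_{k+1} + p*a_1   (k < 7),    a_7 = 1 + p*a_1
    # Unrolling gives a_1 = (1 + p*a_1)*S with S = 1 + p + ... + p^6,
    # hence a_1 = S / (1 - p*S).
    S = sum(p**j for j in range(7))
    a1 = S / (1.0 - p * S)
    return 1.0 + 2.0 * p * a1     # the first point starts a run on either side

print(round(arl_rule14(), 2))
```

With c=1 this reproduces the in-control value 152.73 reported for the "14" rule in Table 1 of Champ and Woodall (1987). For mu != 0 or the composite rules "12" and "13" the symmetry is lost and one has to set up and solve the full transition matrix, which is what xshewhartrunsrules.matrix delivers.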
Examples
## Champ/Woodall (1987)
## Table 1
mus <- (0:15)/5
Mxshewhartrunsrules.arl <- Vectorize(xshewhartrunsrules.arl, "mu")
# standard (1 of 1 beyond 3 sigma) Shewhart chart without runs rules
C1 <- round(Mxshewhartrunsrules.arl(mus, type="1"), digits=2)
# standard + runs rule: 2 of 3 beyond 2 sigma on the same side
C12 <- round(Mxshewhartrunsrules.arl(mus, type="12"), digits=2)
# standard + runs rule: 4 of 5 beyond 1 sigma on the same side
C13 <- round(Mxshewhartrunsrules.arl(mus, type="13"), digits=2)
# standard + runs rule: 8 of 8 on the same side of the center line
C14 <- round(Mxshewhartrunsrules.arl(mus, type="14"), digits=2)
## original results are
# mus      C1      C12     C13     C14
# 0.0   370.40  225.44  166.05  152.73
# 0.2   308.43  177.56  120.70  110.52
# 0.4   200.08  104.46   63.88   59.76
# 0.6   119.67   57.92   33.99   33.64
# 0.8    71.55   33.12   19.78   21.07
# 1.0    43.89   20.01   12.66   14.58
# 1.2    27.82   12.81    8.84   10.90
# 1.4    18.25    8.69    6.62    8.60
# 1.6    12.38    6.21    5.24    7.03
# 1.8     8.69    4.66    4.33    5.85
# 2.0     6.30    3.65    3.68    4.89
# 2.2     4.72    2.96    3.18    4.08
# 2.4     3.65    2.48    2.78    3.38
# 2.6     2.90    2.13    2.43    2.81
# 2.8     2.38    1.87    2.14    2.35
# 3.0     2.00    1.68    1.89    1.99
data.frame(mus, C1, C12, C13, C14)
## plus calibration, i.e. L0=250 (the maximal value for "14" is 255!)
L0 <- 250
c1 <- xshewhartrunsrules.crit(L0, type = "1")
c12 <- xshewhartrunsrules.crit(L0, type = "12")
c13 <- xshewhartrunsrules.crit(L0, type = "13")
c14 <- xshewhartrunsrules.crit(L0, type = "14")
C1 <- round(Mxshewhartrunsrules.arl(mus, c=c1, type="1"), digits=2)
C12 <- round(Mxshewhartrunsrules.arl(mus, c=c12, type="12"), digits=2)
C13 <- round(Mxshewhartrunsrules.arl(mus, c=c13, type="13"), digits=2)
C14 <- round(Mxshewhartrunsrules.arl(mus, c=c14, type="14"), digits=2)
data.frame(mus, C1, C12, C13, C14)
## and the steady-state ARL
Mxshewhartrunsrules.ad <- Vectorize(xshewhartrunsrules.ad, "mu1")
C1 <- round(Mxshewhartrunsrules.ad(mus, c=c1, type="1"), digits=2)
C12 <- round(Mxshewhartrunsrules.ad(mus, c=c12, type="12"), digits=2)
C13 <- round(Mxshewhartrunsrules.ad(mus, c=c13, type="13"), digits=2)
C14 <- round(Mxshewhartrunsrules.ad(mus, c=c14, type="14"), digits=2)
data.frame(mus, C1, C12, C13, C14)

xtcusum.arl          Compute ARLs of CUSUM control charts, t distributed data
Description
Computation of the (zero-state) Average Run Length (ARL) for different types of CUSUM control charts monitoring the mean of t distributed data.
Usage
xtcusum.arl(k, h, df, mu, hs = 0, sided="one", mode="tan", r=30)
Arguments
k        reference value of the CUSUM control chart.
h        decision interval (alarm limit, threshold) of the CUSUM control chart.
df       degrees of freedom – parameter of the t distribution.
mu       true mean.
hs       so-called headstart (gives fast initial response).
sided    distinguishes between one- and two-sided CUSUM schemes by choosing "one" and "two", respectively.
r        number of quadrature nodes; the dimension of the resulting linear equation system is equal to r+1.
mode     controls the type of variables substitution that might improve the numerical performance. Currently, "identity", "sin", "sinh", and "tan" (default) are provided.
Details
xtcusum.arl determines the Average Run Length (ARL) by numerically solving the related ARL integral equation by means of the Nystroem method based on Gauss-Legendre quadrature.
Value
Returns a single value which resembles the ARL.
Author(s)
<NAME>
References
<NAME>, <NAME> (1971), Determination of A.R.L. and a contour nomogram for CUSUM charts to control normal mean, Technometrics 13, 221-230.
<NAME>, <NAME> (1972), An approach to the probability distribution of cusum run length, Biometrika 59, 539-548.
<NAME>, <NAME> (1982), Fast initial response for cusum quality-control schemes: Give your cusum a headstart, Technometrics 24, 199-205.
<NAME> (1986), Average run lengths of cumulative sum control charts for controlling normal means, Journal of Quality Technology 18, 189-193.
<NAME> (1986), Bounds for the distribution of the run length of one-sided and two-sided CUSUM quality control schemes, Technometrics 28, 61-67.
<NAME> (1986), A new two-sided cumulative quality control scheme, Technometrics 28, 187-194.
See Also
xtewma.arl for zero-state ARL computation of EWMA control charts for t distributed data and xcusum.arl for the zero-state ARL of CUSUM for normal data.
Examples
## will follow

xtewma.ad          Compute steady-state ARLs of EWMA control charts, t distributed data
Description
Computation of the steady-state Average Run Length (ARL) for different types of EWMA control charts monitoring the mean of t distributed data.
Usage
xtewma.ad(l, c, df, mu1, mu0=0, zr=0, z0=0, sided="one", limits="fix", steady.state.mode="conditional", mode="tan", r=40)
Arguments
l        smoothing parameter lambda of the EWMA control chart.
c        critical value (similar to alarm limit) of the EWMA control chart.
df       degrees of freedom – parameter of the t distribution.
mu1      out-of-control mean.
mu0      in-control mean.
zr       reflection border for the one-sided chart.
z0       restarting value of the EWMA sequence in case of a false alarm in steady.state.mode="cyclical".
sided    distinguishes between one- and two-sided EWMA control charts by choosing "one" and "two", respectively.
limits   distinguishes between different control limits behavior.
steady.state.mode distinguishes between two steady-state modes – conditional and cyclical.
mode     controls the type of variables substitution that might improve the numerical performance. Currently, "identity", "sin", "sinh", and "tan" (default) are provided.
r        number of quadrature nodes; the dimension of the resulting linear equation system is equal to r+1 (one-sided) or r (two-sided).
Details
xtewma.ad determines the steady-state Average Run Length (ARL) by numerically solving the related ARL integral equation by means of the Nystroem method based on Gauss-Legendre quadrature and using the power method for deriving the largest in magnitude eigenvalue and the related left eigenfunction.
Value
Returns a single value which resembles the steady-state ARL.
Author(s)
<NAME>
References
<NAME> (1986), A new two-sided cumulative quality control scheme, Technometrics 28, 187-194.
<NAME> (1987), A simple method for studying run-length distributions of exponentially weighted moving average charts, Technometrics 29, 401-407.
<NAME> and <NAME> (1990), Exponentially weighted moving average control schemes: Properties and enhancements, Technometrics 32, 1-12.
See Also
xtewma.arl for zero-state ARL computation and xewma.ad for the steady-state ARL for normal data.
Examples
## will follow

xtewma.arl          Compute ARLs of EWMA control charts, t distributed data
Description
Computation of the (zero-state) Average Run Length (ARL) for different types of EWMA control charts monitoring the mean of t distributed data.
Usage
xtewma.arl(l, c, df, mu, zr=0, hs=0, sided="two", limits="fix", mode="tan", q=1, r=40)
Arguments
l        smoothing parameter lambda of the EWMA control chart.
c        critical value (similar to alarm limit) of the EWMA control chart.
df       degrees of freedom – parameter of the t distribution.
mu       true mean.
zr       reflection border for the one-sided chart.
hs       so-called headstart (enables fast initial response).
sided    distinguishes between one- and two-sided EWMA control charts by choosing "one" and "two", respectively.
limits   distinguishes between different control limits behavior.
mode     controls the type of variables substitution that might improve the numerical performance. Currently, "identity", "sin", "sinh", and "tan" (default) are provided.
q        change point position. For q = 1 and mu = mu0 or mu = mu1, the usual zero-state ARLs for the in-control and out-of-control case, respectively, are calculated. For q > 1 and mu != 0 conditional delays, that is, E_q(L - q + 1 | L >= q), will be determined. Note that mu0 = 0 is implicitly fixed.
r        number of quadrature nodes; the dimension of the resulting linear equation system is equal to r+1 (one-sided) or r (two-sided).
Details
In case of the EWMA chart with fixed control limits, xtewma.arl determines the Average Run Length (ARL) by numerically solving the related ARL integral equation by means of the Nystroem method based on Gauss-Legendre quadrature. If limits is "vacl", then the method presented in Knoth (2003) is utilized. Other values (normal case) for limits are not yet supported.
Value
Except for the fixed limits EWMA charts it returns a single value which resembles the ARL. For fixed limits charts, it returns a vector of length q which resembles the ARL and the sequence of conditional expected delays for q=1 and q>1, respectively.
Author(s)
<NAME>
References
<NAME> (1986), Bounds for the distribution of the run length of geometric moving average charts, Appl. Statist. 35, 151-158.
<NAME> (1987), A simple method for studying run-length distributions of exponentially weighted moving average charts, Technometrics 29, 401-407.
<NAME> and <NAME> (1990), Exponentially weighted moving average control schemes: Properties and enhancements, Technometrics 32, 1-12.
<NAME>, <NAME>, and <NAME> (1999), Robustness of the EWMA control chart to non-normality, Journal of Quality Technology 31, 309-316.
<NAME> (2003), EWMA schemes with non-homogeneous transition kernels, Sequential Analysis 22, 241-255.
<NAME> (2004), Fast initial response features for EWMA Control Charts, Statistical Papers 46, 47-64.
See Also
xewma.arl for zero-state ARL computation of EWMA control charts in the normal case.
Examples
## Borror/Montgomery/Runger (1999), Table 3
lambda <- 0.1
cE <- 2.703
df <- c(4, 6, 8, 10, 15, 20, 30, 40, 50)
L0 <- rep(NA, length(df))
for ( i in 1:length(df) ) {
  L0[i] <- round(xtewma.arl(lambda, cE*sqrt(df[i]/(df[i]-2)), df[i], 0), digits=0)
}
data.frame(df, L0)

xtewma.q          Compute RL quantiles of EWMA control charts, t distributed data
Description
Computation of quantiles of the Run Length (RL) for EWMA control charts monitoring the mean of t distributed data.
Usage
xtewma.q(l, c, df, mu, alpha, zr=0, hs=0, sided="two", limits="fix", mode="tan", q=1, r=40)
xtewma.q.crit(l, L0, df, mu, alpha, zr=0, hs=0, sided="two", limits="fix", mode="tan", r=40, c.error=1e-12, a.error=1e-9, OUTPUT=FALSE)
Arguments
l        smoothing parameter lambda of the EWMA control chart.
c        critical value (similar to alarm limit) of the EWMA control chart.
df       degrees of freedom – parameter of the t distribution.
mu       true mean.
alpha    quantile level.
zr       reflection border for the one-sided chart.
hs       so-called headstart (enables fast initial response).
sided    distinguishes between one- and two-sided EWMA control charts by choosing "one" and "two", respectively.
limits   distinguishes between different control limits behavior.
mode     controls the type of variables substitution that might improve the numerical performance. Currently, "identity", "sin", "sinh", and "tan" (default) are provided.
q        change point position. For q = 1 and mu = mu0 or mu = mu1, the usual zero-state ARLs for the in-control and out-of-control case, respectively, are calculated. For q > 1 and mu != 0 conditional delays, that is, E_q(L - q + 1 | L >= q), will be determined. Note that mu0 = 0 is implicitly fixed.
r        number of quadrature nodes; the dimension of the resulting linear equation system is equal to r+1 (one-sided) or r (two-sided).
L0       in-control quantile value.
c.error  error bound for two succeeding values of the critical value during application of the secant rule.
a.error  error bound for the quantile level alpha during application of the secant rule.
OUTPUT   activate or deactivate additional output.
Details
Instead of the popular ARL (Average Run Length), quantiles of the EWMA stopping time (Run Length) are determined. The algorithm is based on Waldmann's survival function iteration procedure. If limits is "vacl", then the method presented in Knoth (2003) is utilized. For details see Knoth (2004).
Value
Returns a single value which resembles the RL quantile of order alpha.
Author(s)
<NAME>
References
<NAME> (1993), An optimal design of EWMA control charts based on the median run length, J. Stat. Comput. Simulation 45, 169-184.
<NAME> (2003), EWMA schemes with non-homogeneous transition kernels, Sequential Analysis 22, 241-255.
<NAME> (2004), Fast initial response features for EWMA Control Charts, Statistical Papers 46, 47-64.
<NAME> (1986), Bounds for the distribution of the run length of geometric moving average charts, Appl. Statist. 35, 151-158.
See Also
xewma.q for RL quantile computation of EWMA control charts in the normal case.
Examples
## will follow

xtewma.sf          Compute the survival function of EWMA run length, t distributed data
Description
Computation of the survival function of the Run Length (RL) for EWMA control charts monitoring the mean of t distributed data.
Usage
xtewma.sf(l, c, df, mu, n, zr=0, hs=0, sided="two", limits="fix", mode="tan", q=1, r=40)
Arguments
l        smoothing parameter lambda of the EWMA control chart.
c        critical value (similar to alarm limit) of the EWMA control chart.
df       degrees of freedom – parameter of the t distribution.
mu       true mean.
n        calculate sf up to value n.
zr       reflection border for the one-sided chart.
hs       so-called headstart (enables fast initial response).
sided    distinguishes between one- and two-sided EWMA control charts by choosing "one" and "two", respectively.
limits   distinguishes between different control limits behavior.
mode     controls the type of variables substitution that might improve the numerical performance. Currently, "identity", "sin", "sinh", and "tan" (default) are provided.
q        change point position. For q = 1 and mu = mu0 or mu = mu1, the usual zero-state situation for the in-control and out-of-control case, respectively, is calculated. Note that mu0 = 0 is implicitly fixed.
r        number of quadrature nodes; the dimension of the resulting linear equation system is equal to r+1 (one-sided) or r (two-sided).
Details
The survival function P(L>n), and derived from it also the cdf P(L<=n) and the pmf P(L=n), illustrate the distribution of the EWMA run length. For large n the geometric tail can be exploited. That is, with reasonably large n the complete distribution is characterized. The algorithm is based on Waldmann's survival function iteration procedure. For varying limits and for change points after 1 the algorithm from Knoth (2004) is applied. For details see Knoth (2004).
Value
Returns a vector which resembles the survival function up to a certain point.
Author(s)
<NAME>
References
<NAME> (1993), An optimal design of EWMA control charts based on the median run length, J. Stat. Comput. Simulation 45, 169-184.
<NAME> (2003), EWMA schemes with non-homogeneous transition kernels, Sequential Analysis 22, 241-255.
<NAME> (2004), Fast initial response features for EWMA Control Charts, Statistical Papers 46, 47-64.
<NAME> (1986), Bounds for the distribution of the run length of geometric moving average charts, Appl. Statist. 35, 151-158.
See Also
xewma.sf for survival function computation of EWMA control charts in the normal case.
Examples
## will follow

xtshewhart.ar1.arl    Compute ARLs of modified Shewhart control charts for AR(1) data with Student t residuals

Description
Computation of the (zero-state) Average Run Length (ARL) for modified Shewhart charts deployed to the original AR(1) data where the residuals follow a Student t distribution.

Usage
xtshewhart.ar1.arl(alpha, cS, df, delta=0, N1=50, N2=30, N3=2*N2, INFI=10, mode="tan")

Arguments
alpha  lag 1 correlation of the data.
cS  critical value (alias to alarm limit) of the Shewhart control chart.
df  degrees of freedom – parameter of the t distribution.
delta  potential shift in the data (the in-control mean is zero).
N1  number of quadrature nodes for solving the ARL integral equation; the dimension of the resulting linear equation system is N1.
N2  second number of quadrature nodes for combining the probability density function of the first observation following the margin distribution and the solution of the ARL integral equation.
N3  third number of quadrature nodes for solving the left eigenfunction integral equation to determine the margin density (see Andel/Hrach, 2000); the dimension of the resulting linear equation system is N3.
INFI  substitute of Inf – the left eigenfunction integral equation is defined on the whole real axis; here it is reduced to (-INFI, INFI).
mode  controls the type of variables substitution that might improve the numerical performance. Currently, "identity", "sin", "sinh", and "tan" (default) are provided.

Details
Following the idea of Schmid (1995), 1-alpha times the data turns out to be an EWMA smoothing of the underlying AR(1) residuals. Hence, by combining the solution of the EWMA ARL integral equation and the stationary distribution of the AR(1) data (here a Student t distribution is assumed), one easily gets the overall ARL.

Value
It returns a single value resembling the zero-state ARL of a modified Shewhart chart.

Author(s)
<NAME>

References
<NAME>, <NAME> (2000).
On calculation of stationary density of autoregressive processes. Kybernetika, Institute of Information Theory and Automation AS CR 36(3), 311-319.
<NAME>, <NAME> (2000). The influence of parameter estimation on the ARL of Shewhart type charts for time series. Statistical Papers 41(2), 173-196.
<NAME> (1995). On the run length of a Shewhart chart for correlated data. Statistical Papers 36(1), 111-130.

See Also
xtewma.arl for zero-state ARL computation of EWMA control charts in case of Student t distributed data.

Examples
## will follow
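The quantities computed across these help pages are closely linked: the ARL is the mean of the run length L, while the RL quantile of order q is the smallest n with P(L <= n) >= q. A base-R sketch with a toy geometric run length makes the link concrete (illustration only, not output of the spc routines):

```r
# Toy run length: geometric with P(L > n) = rho^n
rho <- 0.98
sf  <- function(n) rho^n

# ARL = E[L] = sum_{n>=0} P(L > n) = 1/(1 - rho) for a geometric run length
arl <- 1 / (1 - rho)

# RL quantile of order q: smallest n with P(L <= n) >= q
rl_quantile <- function(q) which(1 - sf(1:10000) >= q)[1]
median_rl <- rl_quantile(0.5)

# The run-length distribution is right-skewed: median below the mean (ARL)
stopifnot(median_rl < arl)
```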
Package ‘NPRED’    June 21, 2023

Title Predictor Identifier: Nonparametric Prediction
Version 1.0.7
Author <NAME> [aut] (<https://orcid.org/0000-0002-6758-0519>), <NAME> [aut], <NAME> [aut], <NAME> [aut], <NAME> [aut, cre] (<https://orcid.org/0000-0002-3472-0829>)
Maintainer <NAME> <<EMAIL>>
Description Partial informational correlation (PIC) is used to identify the meaningful predictors to the response from a large set of potential predictors. Details of methodologies used in the package can be found in <NAME>., Mehrotra, R. (2014). <doi:10.1002/2013WR013845>, <NAME>., Mehrotra, R., <NAME>., & <NAME>. (2016). <doi:10.1016/j.envsoft.2016.05.021>, and Mehrotra, R., & <NAME>. (2006). <doi:10.1016/j.advwatres.2005.08.007>.
License GPL-3
Encoding UTF-8
LazyData true
Depends R (>= 3.4.0)
URL https://github.com/zejiang-unsw/NPRED#readme
BugReports https://github.com/zejiang-unsw/NPRED/issues
Imports stats
Suggests zoo, SPEI, WASP, knitr, ggplot2, synthesis, testthat, bookdown, rmarkdown
RoxygenNote 7.2.3
VignetteBuilder knitr
NeedsCompilation yes
Repository CRAN
Date/Publication 2023-06-21 12:50:02 UTC

R topics documented:
calc.PIC, calc.PW, calc.scaleSTDratio, data.gen.ar1, data.gen.ar4, data.gen.ar9, data1, data2, data3, knn, knnregl1cv, pic.calc, pw.calc, stepwise.PIC, Sydney

calc.PIC    Calculate PIC

Description
Calculate PIC

Usage
calc.PIC(x, y, z, nnmax = 10000, nvarmax = 100)

Arguments
x  A vector of response.
y  A vector of new predictor.
z  A matrix of pre-existing predictors; may be NULL if no prior predictors exist.
nnmax  The maximum sample size.
nvarmax  The maximum number of potential predictors.

Value
A list of 2 elements: the partial mutual information (pmi), and partial informational correlation (pic).

References
<NAME>., <NAME>., 2014. An information theoretic alternative to model a natural system using observational information alone. Water Resources Research, 50(1): 650-660.
calc.PW    Calculate Partial Weight

Description
Calculate Partial Weight

Usage
calc.PW(x, py, cpy, cpyPIC)

Arguments
x  A vector of response.
py  A matrix containing possible predictors of x.
cpy  The column numbers of the meaningful predictors (cpy).
cpyPIC  Partial informational correlation (cpyPIC).

Value
A vector of partial weights (pw) of the same length as z.

References
<NAME>., <NAME>., 2014. An information theoretic alternative to model a natural system using observational information alone. Water Resources Research, 50(1): 650-660.

calc.scaleSTDratio    Calculate the ratio of conditional error standard deviations

Description
Calculate the ratio of conditional error standard deviations

Usage
calc.scaleSTDratio(x, zin, zout)

Arguments
x  A vector of response.
zin  A matrix containing the meaningful predictors selected from a large set of possible predictors (z).
zout  A matrix containing the remaining possible predictors after taking out the meaningful predictors (zin).

Value
The STD ratio.

References
<NAME>., <NAME>., 2014. An information theoretic alternative to model a natural system using observational information alone. Water Resources Research, 50(1): 650-660.

data.gen.ar1    Generate predictor and response data.

Description
Generate predictor and response data.

Usage
data.gen.ar1(nobs, ndim = 9)

Arguments
nobs  The data length to be generated.
ndim  The number of potential predictors (default is 9).

Value
A list of 2 elements: a vector of response (x), and a matrix of potential predictors (dp) with each column containing one potential predictor.

Examples
# AR1 model from paper with 9 dummy variables
data.ar1 <- data.gen.ar1(500)
stepwise.PIC(data.ar1$x, data.ar1$dp)

data.gen.ar4    Generate predictor and response data.

Description
Generate predictor and response data.

Usage
data.gen.ar4(nobs, ndim = 9)

Arguments
nobs  The data length to be generated.
ndim  The number of potential predictors (default is 9).
Value
A list of 2 elements: a vector of response (x), and a matrix of potential predictors (dp) with each column containing one potential predictor.

Examples
# AR4 model from paper with total 9 dimensions
data.ar4 <- data.gen.ar4(500)
stepwise.PIC(data.ar4$x, data.ar4$dp)

data.gen.ar9    Generate predictor and response data.

Description
Generate predictor and response data.

Usage
data.gen.ar9(nobs, ndim = 9)

Arguments
nobs  The data length to be generated.
ndim  The number of potential predictors (default is 9).

Value
A list of 2 elements: a vector of response (x), and a matrix of potential predictors (dp) with each column containing one potential predictor.

Examples
# AR9 model from paper with total 9 dimensions
data.ar9 <- data.gen.ar9(500)
stepwise.PIC(data.ar9$x, data.ar9$dp)

data1    Sample data: AR9 model: x(i)=0.3*x(i-1)-0.6*x(i-4)-0.5*x(i-9)+eps

Description
A dataset containing 500 rows (data length) and 16 columns. The first column is the response data and the remaining columns are possible predictors.

Usage
data(data1)

data2    Sample data: AR4 model: x(i)=0.6*x(i-1)-0.4*x(i-4)+eps

Description
A dataset containing 500 rows (data length) and 16 columns. The first column is the response data and the remaining columns are possible predictors.

Usage
data(data2)

data3    Sample data: AR1 model: x(i)=0.9*x(i-1)+0.866*eps

Description
A dataset containing 500 rows (data length) and 16 columns. The first column is the response data and the remaining columns are possible predictors.

Usage
data(data3)

knn    Modified k-nearest neighbour conditional bootstrap or regression function estimation with extrapolation

Description
Modified k-nearest neighbour conditional bootstrap or regression function estimation with extrapolation

Usage
knn(
  x,
  z,
  zout,
  k = 0,
  pw,
  reg = TRUE,
  nensemble = 100,
  tailcorrection = TRUE,
  tailprob = 0.25,
  tailfac = 0.2,
  extrap = TRUE
)

Arguments
x  A vector of response.
z  A matrix of existing predictors.
zout  A matrix of predictor values the response is to be estimated at.
k  The number of nearest neighbours used. The default value is 0, indicating that the Lall and Sharma default is used.
pw  A vector of partial weights of the same length as z.
reg  A logical operator to inform whether a conditional expectation should be output or not.
nensemble  Used if reg=F; an integer that specifies the number of realisations (ensembles) that are generated. The default is 100.
tailcorrection  A logical value, T (default) or F, that denotes whether a reduced value of k (number of nearest neighbours) should be used in the tails of any conditioning plane. Whether one is in the tails or not is determined based on the nearest neighbour response value.
tailprob  A scalar that denotes the p-value of the cdf (on either extreme) at which the tailcorrection takes effect. The default value is 0.25.
tailfac  A scalar that specifies the lowest fraction of the default k that can be used in the tails. Depending on how extreme one is in the tails, the actual k decreases linearly from k (for a p-value greater than tailprob) to tailfac*k, proportional to the actual p-value of the nearest neighbour response divided by tailprob. The default value is 0.2.
extrap  A logical value, T (default) or F, that denotes whether a kernel extrapolation method is used to predict x.

Value
A matrix of responses having the same rows as zout if reg=T, or having nensemble columns if reg=F.

References
<NAME>., <NAME>. and <NAME>., 1997. Streamflow simulation: A nonparametric approach. Water resources research, 33(2), pp.291-308.
<NAME>. and <NAME>., 2002. A nonparametric approach for representing interannual dependence in monthly streamflow sequences. Water resources research, 38(7), pp.5-1.
Examples
data(data1) # AR9 model x(i)=0.3*x(i-1)-0.6*x(i-4)-0.5*x(i-9)+eps
x <- data1[, 1] # response
py <- data1[, -1] # possible predictors
ans.ar9 <- stepwise.PIC(x, py) # identify the meaningful predictors and estimate partial weights
z <- py[, ans.ar9$cpy] # predictor matrix
pw <- ans.ar9$wt # partial weights
# vector denoting where we want outputs, can be a matrix representing grid.
zout <- apply(z, 2, mean)
knn(x, z, zout, reg = TRUE, pw = pw) # knn regression estimate using partial weights.
knn(x, z, zout, reg = FALSE, pw = pw) # alternatively, knn conditional bootstrap (100 realisations).
# Mean of the conditional bootstrap estimate should be
# approximately the same as the regression estimate.
zout <- ts(data.gen.ar9(500, ndim = length(ans.ar9$cpy))$dp) # new input
xhat1 <- xhat2 <- x
xhat1 <- NPRED::knn(x, z, zout, k = 5, reg = TRUE, extrap = FALSE) # without extrapolation
xhat2 <- NPRED::knn(x, z, zout, k = 5, reg = TRUE, extrap = TRUE) # with extrapolation
ts.plot(ts(x), ts(xhat1), ts(xhat2),
  col = c("black", "red", "blue"),
  ylim = c(-5, 5), lwd = c(2, 2, 1)
)

knnregl1cv    Leave one out cross validation.

Description
Leave one out cross validation.

Usage
knnregl1cv(x, z, k = 0, pw)

Arguments
x  A vector of response.
z  A matrix of predictors.
k  The number of nearest neighbours used. The default is 0, indicating that the Lall and Sharma default is used.
pw  A vector of partial weights of the same length as z.

Value
A vector of L1CV estimates of the response.

References
<NAME>., <NAME>., 1996. A Nearest Neighbor Bootstrap For Resampling Hydrologic Time Series. Water Resources Research, 32(3): 679-693.
<NAME>., <NAME>., 2014. An information theoretic alternative to model a natural system using observational information alone. Water Resources Research, 50(1): 650-660.

pic.calc    Calculate PIC

Description
Calculate PIC

Usage
pic.calc(X, Y, Z = NULL)

Arguments
X  A vector of response.
Y  A matrix of new predictors.
Z  A matrix of pre-existing predictors; may be NULL if no prior predictors exist.

Value
A list of 2 elements: the partial mutual information (pmi), and partial informational correlation (pic).

References
<NAME>., <NAME>., 2014. An information theoretic alternative to model a natural system using observational information alone. Water Resources Research, 50(1): 650-660.
<NAME>., <NAME>., <NAME>., <NAME>., <NAME>. and <NAME>. (2014) An evaluation framework for input variable selection algorithms for environmental data-driven models, Environmental Modelling and Software, 62, 33-51, DOI: 10.1016/j.envsoft.2014.08.015.

pw.calc    Calculate Partial Weight

Description
Calculate Partial Weight

Usage
pw.calc(x, py, cpy, cpyPIC)

Arguments
x  A vector of response.
py  A matrix containing possible predictors of x.
cpy  The column numbers of the meaningful predictors (cpy).
cpyPIC  Partial informational correlation (cpyPIC).

Value
A vector of partial weights (pw) of the same length as z.

References
<NAME>., <NAME>., 2014. An information theoretic alternative to model a natural system using observational information alone. Water Resources Research, 50(1): 650-660.

stepwise.PIC    Calculate stepwise PIC

Description
Calculate stepwise PIC

Usage
stepwise.PIC(x, py, nvarmax = 100, alpha = 0.1)

Arguments
x  A vector of response.
py  A matrix containing possible predictors of x.
nvarmax  The maximum number of variables to be selected.
alpha  The significance level used to judge whether the sample estimate in Equation PIC is significant or not. The default alpha value is 0.1.

Value
A list of 2 elements: the column numbers of the meaningful predictors (cpy), and partial informational correlation (cpyPIC).

References
<NAME>., <NAME>., 2014. An information theoretic alternative to model a natural system using observational information alone. Water Resources Research, 50(1): 650-660.
Examples
data(data1) # AR9 model x(i)=0.3*x(i-1)-0.6*x(i-4)-0.5*x(i-9)+eps
x <- data1[, 1] # response
py <- data1[, -1] # possible predictors
stepwise.PIC(x, py)

data(data2) # AR4 model: x(i)=0.6*x(i-1)-0.4*x(i-4)+eps
x <- data2[, 1] # response
py <- data2[, -1] # possible predictors
stepwise.PIC(x, py)

data(data3) # AR1 model x(i)=0.9*x(i-1)+0.866*eps
x <- data3[, 1] # response
py <- data3[, -1] # possible predictors
stepwise.PIC(x, py)

Sydney    Sample data: Data over Sydney region

Description
A dataset containing Rainfall (15 stations), NCEP and CSIRO (7 atmospheric variables).

Usage
data(Sydney)
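To illustrate how partial weights enter a k-nearest-neighbour estimate, here is a minimal, self-contained sketch of a weighted kNN regression in base R. This is a simplified stand-in, not NPRED's actual implementation (which adds tail correction and extrapolation); the function name knn_sketch and the plain weighted Euclidean distance are assumptions for illustration:

```r
# Minimal weighted kNN regression sketch (NOT NPRED's implementation):
# predictors are scaled by partial weights pw before computing distances,
# so more informative predictors dominate the neighbourhood search.
knn_sketch <- function(x, z, zout, pw, k = 3) {
  z    <- as.matrix(z)
  zout <- as.matrix(zout)
  apply(zout, 1, function(q) {
    # weighted Euclidean distance from query q to every training point
    d <- sqrt(colSums((t(z) - q)^2 * pw^2))
    mean(x[order(d)[1:k]])  # average response of the k nearest neighbours
  })
}

set.seed(1)
z  <- matrix(rnorm(200), ncol = 2)
x  <- 2 * z[, 1] + rnorm(100, sd = 0.1)  # response driven by first predictor
pw <- c(1, 0)                            # zero weight removes the noise predictor
xhat <- knn_sketch(x, z, z, pw, k = 5)
```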
Package ‘multiROC’    October 13, 2022

Title Calculating and Visualizing ROC and PR Curves Across Multi-Class Classifications
Version 1.1.1
Description Tools to solve real-world problems with multi-class classifications by computing the areas under ROC and PR curves via micro-averaging and macro-averaging. The vignettes of this package can be found via <https://github.com/WandeRum/multiROC>. The methodology is described in V. <NAME> (2013) <https://www.clips.uantwerpen.be/~vincent/pdf/microaverage.pdf> and Pedregosa et al. (2011) <http://scikit-learn.org/stable/auto_examples/model_selection/plot_roc.html>.
License GPL-3
Encoding UTF-8
LazyData true
Imports zoo, magrittr, boot, stats
Suggests dplyr, ggplot2
NeedsCompilation no
Repository CRAN
Date/Publication 2018-06-26 20:24:05 UTC
RoxygenNote 6.0.1.9000
Author <NAME> [aut, cre], <NAME> [aut], <NAME> [ctb]
Maintainer <NAME> <<EMAIL>>

R topics documented:
cal_auc, cal_confus, multi_pr, multi_roc, plot_pr_data, plot_roc_data, pr_auc_with_ci, pr_ci, roc_auc_with_ci, roc_ci, test_data

cal_auc    Area under ROC curve

Description
This function calculates the area under the ROC curve.

Usage
cal_auc(X, Y)

Arguments
X  A vector of true positive rate
Y  A vector of false positive rate, the same length as TPR

Details
This function calculates the area under the ROC curve.

Value
A numeric value of AUC will be returned.

References
https://www.r-bloggers.com/calculating-auc-the-area-under-a-roc-curve/

See Also
cal_confus()

Examples
data(test_data)
true_vec <- test_data[, 1]
pred_vec <- test_data[, 5]
confus_res <- cal_confus(true_vec, pred_vec)
AUC_res <- cal_auc(confus_res$TPR, confus_res$FPR)

cal_confus    Calculate confusion matrices

Description
This function calculates the confusion matrices across different cutoff points.
Usage
cal_confus(true_vec, pred_vec, force_diag=TRUE)

Arguments
true_vec  A binary vector of real labels
pred_vec  A continuous vector of predicted scores (probabilities); must be the same length as true_vec
force_diag  If TRUE, TPR and FPR will be forced to cross (0, 0) and (1, 1)

Details
This function calculates the TP, FP, FN, TN, TPR, FPR and PPV across different cutoff points of pred_vec. TPR and FPR are forced to cross (0, 0) and (1, 1) if force_diag=TRUE.

Value
TP  True positive
FP  False positive
FN  False negative
TN  True negative
TPR  True positive rate
FPR  False positive rate
PPV  Positive predictive value

References
https://en.wikipedia.org/wiki/Confusion_matrix

Examples
data(test_data)
true_vec <- test_data[, 1]
pred_vec <- test_data[, 5]
confus_res <- cal_confus(true_vec, pred_vec)

multi_pr    Multi-class classification PR

Description
This function calculates the Precision, Recall and AUC of multi-class classifications.

Usage
multi_pr(data, force_diag=TRUE)

Arguments
data  A data frame containing true labels of multiple groups and corresponding predictive scores
force_diag  If TRUE, TPR and FPR will be forced to cross (0, 0) and (1, 1)

Details
A data frame is required as input for this function. This data frame should contain true label (0 - Negative, 1 - Positive) columns named as XX_true (e.g. S1_true, S2_true and S3_true) and predictive score (continuous) columns named as XX_pred_YY (e.g. S1_pred_SVM, S2_pred_RF); thus this function allows calculating ROC on multiple classifiers.
Predictive scores could be probabilities in [0, 1] or other continuous values. For each classifier, the number of columns should be equal to the number of groups of true labels. The order of columns won't affect the results.
Recall, Precision, and AUC for each group and each method will be calculated. Macro/micro-average AUC for all groups and each method will be calculated.
Micro-average ROC/AUC is calculated by stacking all groups together, thus converting the multi-class classification into a binary classification. Macro-average ROC/AUC is calculated by averaging the results of all groups (one vs rest), with linear interpolation used between points of the ROC. AUC will be calculated using the function cal_auc().

Value
Recall  A list of recalls for each group, each method and micro-/macro- average
Precision  A list of precisions for each group, each method and micro-/macro- average
AUC  A list of AUCs for each group, each method and micro-/macro- average
Methods  A vector containing the names of the different classifiers
Groups  A vector containing the names of the different groups

Examples
data(test_data)
pr_test <- multi_pr(test_data)
pr_test$AUC

multi_roc    Multi-class classification ROC

Description
This function calculates the Specificity, Sensitivity and AUC of multi-class classifications.

Usage
multi_roc(data, force_diag=TRUE)

Arguments
data  A data frame containing true labels of multiple groups and corresponding predictive scores
force_diag  If TRUE, TPR and FPR will be forced to cross (0, 0) and (1, 1)

Details
A data frame is required as input for this function. This data frame should contain true label (0 - Negative, 1 - Positive) columns named as XX_true (e.g. S1_true, S2_true and S3_true) and predictive score (continuous) columns named as XX_pred_YY (e.g. S1_pred_SVM, S2_pred_RF); thus this function allows calculating ROC on multiple classifiers.
Predictive scores could be probabilities in [0, 1] or other continuous values. For each classifier, the number of columns should be equal to the number of groups of true labels. The order of columns won't affect the results.
Specificity, Sensitivity, and AUC for each group and each method will be calculated. Macro/micro-average AUC for all groups and each method will be calculated.
Micro-average ROC/AUC is calculated by stacking all groups together, thus converting the multi-class classification into a binary classification. Macro-average ROC/AUC is calculated by averaging the results of all groups (one vs rest), with linear interpolation used between points of the ROC. AUC will be calculated using the function cal_auc().

Value
Specificity  A list of specificities for each group, each method and micro-/macro- average
Sensitivity  A list of sensitivities for each group, each method and micro-/macro- average
AUC  A list of AUCs for each group, each method and micro-/macro- average
Methods  A vector containing the names of the different classifiers
Groups  A vector containing the names of the different groups

References
http://scikit-learn.org/stable/auto_examples/model_selection/plot_roc.html

Examples
data(test_data)
roc_test <- multi_roc(test_data)
roc_test$AUC

plot_pr_data    Generate PR plotting data

Description
This function generates PR plotting data for subsequent data visualization.

Usage
plot_pr_data(pr_res)

Arguments
pr_res  A list of results from the multi_pr function.

Value
pr_res_df  The data frame of results from the multi_pr function, which can easily be visualized by ggplot2.

Examples
data(test_data)
pr_res <- multi_pr(test_data)
pr_res_df <- plot_pr_data(pr_res)

plot_roc_data    Generate ROC plotting data

Description
This function generates ROC plotting data for subsequent data visualization.

Usage
plot_roc_data(roc_res)

Arguments
roc_res  A list of results from the multi_roc function.

Value
roc_res_df  The data frame of results from the multi_roc function, which can easily be visualized by ggplot2.

Examples
data(test_data)
roc_res <- multi_roc(test_data)
roc_res_df <- plot_roc_data(roc_res)

pr_auc_with_ci    Output of PR bootstrap confidence intervals

Description
This function uses the bootstrap to generate five types of equi-tailed two-sided confidence intervals of the PR-AUC at different required percentages, and outputs a data frame with AUCs, lower CIs, and upper CIs of all methods and groups.
Usage
pr_auc_with_ci(data, conf= 0.95, type='bca', R = 100)

Arguments
data  A data frame containing true labels of multiple groups and corresponding predictive scores.
conf  A scalar containing the required level of the confidence intervals; the default is 0.95.
type  A vector of character strings giving five different types of equi-tailed two-sided nonparametric confidence intervals (e.g., "norm", "basic", "stud", "perc", "bca").
R  A scalar containing the number of bootstrap replicates; the default is 100.

Details
A data frame is required as input for this function. This data frame should contain true label (0 - Negative, 1 - Positive) columns named as XX_true (e.g. S1_true, S2_true and S3_true) and predictive score (continuous) columns named as XX_pred_YY (e.g. S1_pred_SVM, S2_pred_RF). Predictive scores could be probabilities in [0, 1] or other continuous values. For each classifier, the number of columns should be equal to the number of groups of true labels. The order of columns won't affect the results.

Value
norm  Using the normal approximation to calculate the confidence intervals.
basic  Using the basic bootstrap method to calculate the confidence intervals.
stud  Using the studentized bootstrap method to calculate the confidence intervals.
perc  Using the bootstrap percentile method to calculate the confidence intervals.
bca  Using the adjusted bootstrap percentile method to calculate the confidence intervals.

Examples
## Not run:
data(test_data)
pr_auc_with_ci_res <- pr_auc_with_ci(test_data, conf= 0.95, type='bca', R = 100)
## End(Not run)

pr_ci    PR bootstrap confidence intervals

Description
This function uses the bootstrap to generate five types of equi-tailed two-sided confidence intervals of the PR-AUC at different required percentages.

Usage
pr_ci(data, conf= 0.95, type='basic', R = 100, index = 4)

Arguments
data  A data frame containing true labels of multiple groups and corresponding predictive scores.
conf  A scalar containing the required level of the confidence intervals; the default is 0.95.
type  A vector of character strings giving five different types of equi-tailed two-sided nonparametric confidence intervals (e.g., "norm", "basic", "stud", "perc", "bca", "all").
R  A scalar containing the number of bootstrap replicates; the default is 100.
index  A scalar containing the position of the variable of interest.

Details
A data frame is required as input for this function. This data frame should contain true label (0 - Negative, 1 - Positive) columns named as XX_true (e.g. S1_true, S2_true and S3_true) and predictive score (continuous) columns named as XX_pred_YY (e.g. S1_pred_SVM, S2_pred_RF). Predictive scores could be probabilities in [0, 1] or other continuous values. For each classifier, the number of columns should be equal to the number of groups of true labels. The order of columns won't affect the results.

Value
norm  Using the normal approximation to calculate the confidence intervals.
basic  Using the basic bootstrap method to calculate the confidence intervals.
stud  Using the studentized bootstrap method to calculate the confidence intervals.
perc  Using the bootstrap percentile method to calculate the confidence intervals.
bca  Using the adjusted bootstrap percentile method to calculate the confidence intervals.
all  Using all previous bootstrap methods to calculate the confidence intervals.

Examples
## Not run:
data(test_data)
pr_ci_res <- pr_ci(test_data, conf= 0.95, type='basic', R = 1000, index = 4)
## End(Not run)

roc_auc_with_ci    Output of ROC bootstrap confidence intervals

Description
This function uses the bootstrap to generate five types of equi-tailed two-sided confidence intervals of the ROC-AUC at different required percentages, and outputs a data frame with AUCs, lower CIs, and upper CIs of all methods and groups.
Usage
roc_auc_with_ci(data, conf= 0.95, type='bca', R = 100)

Arguments
data  A data frame containing true labels of multiple groups and corresponding predictive scores.
conf  A scalar containing the required level of the confidence intervals; the default is 0.95.
type  A vector of character strings giving five different types of equi-tailed two-sided nonparametric confidence intervals (e.g., "norm", "basic", "stud", "perc", "bca").
R  A scalar containing the number of bootstrap replicates; the default is 100.

Details
A data frame is required as input for this function. This data frame should contain true label (0 - Negative, 1 - Positive) columns named as XX_true (e.g. S1_true, S2_true and S3_true) and predictive score (continuous) columns named as XX_pred_YY (e.g. S1_pred_SVM, S2_pred_RF). Predictive scores could be probabilities in [0, 1] or other continuous values. For each classifier, the number of columns should be equal to the number of groups of true labels. The order of columns won't affect the results.

Value
norm  Using the normal approximation to calculate the confidence intervals.
basic  Using the basic bootstrap method to calculate the confidence intervals.
stud  Using the studentized bootstrap method to calculate the confidence intervals.
perc  Using the bootstrap percentile method to calculate the confidence intervals.
bca  Using the adjusted bootstrap percentile method to calculate the confidence intervals.

Examples
## Not run:
data(test_data)
roc_auc_with_ci_res <- roc_auc_with_ci(test_data, conf= 0.95, type='bca', R = 100)
## End(Not run)

roc_ci    ROC bootstrap confidence intervals

Description
This function uses the bootstrap to generate five types of equi-tailed two-sided confidence intervals of the ROC-AUC at different required percentages.

Usage
roc_ci(data, conf= 0.95, type='basic', R = 100, index = 4)

Arguments
data  A data frame containing true labels of multiple groups and corresponding predictive scores.
conf  A scalar containing the required level of the confidence intervals; the default is 0.95.
type  A vector of character strings giving five different types of equi-tailed two-sided nonparametric confidence intervals (e.g., "norm", "basic", "stud", "perc", "bca", "all").
R  A scalar containing the number of bootstrap replicates; the default is 100.
index  A scalar containing the position of the variable of interest.

Details
A data frame is required as input for this function. This data frame should contain true label (0 - Negative, 1 - Positive) columns named as XX_true (e.g. S1_true, S2_true and S3_true) and predictive score (continuous) columns named as XX_pred_YY (e.g. S1_pred_SVM, S2_pred_RF). Predictive scores could be probabilities in [0, 1] or other continuous values. For each classifier, the number of columns should be equal to the number of groups of true labels. The order of columns won't affect the results.

Value
norm  Using the normal approximation to calculate the confidence intervals.
basic  Using the basic bootstrap method to calculate the confidence intervals.
stud  Using the studentized bootstrap method to calculate the confidence intervals.
perc  Using the bootstrap percentile method to calculate the confidence intervals.
bca  Using the adjusted bootstrap percentile method to calculate the confidence intervals.
all  Using all previous bootstrap methods to calculate the confidence intervals.

Examples
## Not run:
data(test_data)
roc_ci_res <- roc_ci(test_data, conf= 0.95, type='basic', R = 1000, index = 4)
## End(Not run)

test_data    Example dataset

Description
This example dataset contains two classifiers (m1, m2) and three groups (G1, G2, G3).

Usage
data("test_data")

Format
A data frame with 85 observations on the following 9 variables.
G1_true true labels of G1 (0 - Negative, 1 - Positive) G2_true true labels of G2 (0 - Negative, 1 - Positive) G3_true true labels of G3 (0 - Negative, 1 - Positive) G1_pred_m1 predictive scores of G1 in the classifier m1 G2_pred_m1 predictive scores of G2 in the classifier m1 G3_pred_m1 predictive scores of G3 in the classifier m1 G1_pred_m2 predictive scores of G1 in the classifier m2 G2_pred_m2 predictive scores of G2 in the classifier m2 G3_pred_m2 predictive scores of G3 in the classifier m2 Examples data(test_data)
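The micro- versus macro-averaging distinction described in the multi_roc and multi_pr details can be illustrated with a short, self-contained base-R sketch. The toy labels and scores and the helper names trap_auc and roc_points are invented for illustration and do not use multiROC itself:

```r
# Trapezoidal AUC from an ROC curve given as (FPR, TPR) points
trap_auc <- function(fpr, tpr) {
  o <- order(fpr)
  sum(diff(fpr[o]) * (head(tpr[o], -1) + tail(tpr[o], -1)) / 2)
}

# ROC points for one binary problem: sweep a threshold over the scores
roc_points <- function(true, score) {
  th <- sort(unique(c(-Inf, score, Inf)), decreasing = TRUE)
  tpr <- sapply(th, function(t) mean(score[true == 1] >= t))
  fpr <- sapply(th, function(t) mean(score[true == 0] >= t))
  list(fpr = fpr, tpr = tpr)
}

set.seed(42)
true  <- cbind(G1 = rbinom(50, 1, 0.5), G2 = rbinom(50, 1, 0.5))
score <- true + matrix(rnorm(100, sd = 0.8), ncol = 2)  # noisy scores

# Macro-average: mean of the per-group AUCs (one vs rest)
macro <- mean(sapply(1:2, function(g) {
  r <- roc_points(true[, g], score[, g])
  trap_auc(r$fpr, r$tpr)
}))

# Micro-average: stack all groups into one binary problem first
r_micro <- roc_points(as.vector(true), as.vector(score))
micro   <- trap_auc(r_micro$fpr, r_micro$tpr)
```

With balanced toy groups the two averages are close; they diverge when group sizes or per-group difficulties differ, which is why multi_roc reports both.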
github.com/monochromegane/the_platinum_searcher
README

---

### The Platinum Searcher

[Build Status](https://travis-ci.org/monochromegane/the_platinum_searcher) [wercker status](https://app.wercker.com/project/bykey/59ef90ac217537abc0994546958037f3)

A code search tool similar to `ack` and `the_silver_searcher(ag)`. It supports multiple platforms and multiple encodings.

#### Features

* It searches code about 3–5× faster than `ack`.
* It searches code as fast as `the_silver_searcher(ag)`.
* It ignores file patterns from your `.gitignore`.
* It ignores directories with names that start with `.`, e.g. `.config`. Use the `--hidden` option if you want to search them.
* It searches `UTF-8`, `EUC-JP` and `Shift_JIS` files.
* It provides binaries for multiple platforms (macOS, Windows, Linux).

##### Benchmarks

```
cd ~/src/github.com/torvalds/linux

ack EXPORT_SYMBOL_GPL  30.18s user 2.32s system 99% cpu 32.613 total # ack
ag EXPORT_SYMBOL_GPL  1.57s user 1.76s system 311% cpu 1.069 total   # ag: It's faster than ack.
pt EXPORT_SYMBOL_GPL  2.29s user 1.26s system 358% cpu 0.991 total   # pt: It's faster than ag!!
```

#### Usage

```
$ # Recursively searches for PATTERN in current directory.
$ pt PATTERN

$ # You can specify PATH and some OPTIONS.
$ pt OPTIONS PATTERN PATH
```

#### Configuration

If you put a configuration file in one of the following directories, pt uses the options in the file.

* $XDG_CONFIG_HOME/pt/config.toml
* $HOME/.ptconfig.toml
* .ptconfig.toml (current directory)

The file is in TOML format, like the following.

```
color = true
context = 3
ignore = ["dir1", "dir2"]
color-path = "1;34"
```

The options are the same as the command line options.

#### Editor Integration

##### Vim + Unite.vim

You can use pt with [Unite.vim](https://github.com/Shougo/unite.vim).

```
nnoremap <silent> ,g :<C-u>Unite grep:.
-buffer-name=search-buffer<CR> if executable('pt') let g:unite_source_grep_command = 'pt' let g:unite_source_grep_default_opts = '--nogroup --nocolor' let g:unite_source_grep_recursive_opt = '' let g:unite_source_grep_encoding = 'utf-8' endif ``` ##### Emacs + pt.el You can use pt with [pt.el](https://github.com/bling/pt.el), which can be installed from [MELPA](http://melpa.milkbox.net/). #### Installation ##### Developer ``` $ go get -u github.com/monochromegane/the_platinum_searcher/... ``` ##### User Download from the following URL: * <https://github.com/monochromegane/the_platinum_searcher/releases> Or you can use Homebrew (macOS only). ``` $ brew install pt ``` `pt` is an alias for `the_platinum_searcher` in Homebrew. #### Contribution 1. Fork it 2. Create a feature branch 3. Commit your changes 4. Rebase your local changes against the master branch 5. Run the test suite with the `go test ./...` command and confirm that it passes 6. Run `gofmt -s` 7. Create a new Pull Request #### License [MIT](https://github.com/monochromegane/the_platinum_searcher/raw/master/LICENSE) #### Author [monochromegane](https://github.com/monochromegane) Documentation [¶](#section-documentation) --- ### Index [¶](#pkg-index) * [Constants](#pkg-constants) * [Variables](#pkg-variables) * [type Option](#Option) * [type OutputOption](#OutputOption) * + [func (o *OutputOption) SetColorLineNumber(code string)](#OutputOption.SetColorLineNumber) + [func (o *OutputOption) SetColorMatch(code string)](#OutputOption.SetColorMatch) + [func (o *OutputOption) SetColorPath(code string)](#OutputOption.SetColorPath) + [func (o *OutputOption) SetDisableColor()](#OutputOption.SetDisableColor) + [func (o *OutputOption) SetDisableGroup()](#OutputOption.SetDisableGroup) + [func (o *OutputOption) SetDisableLineNumber()](#OutputOption.SetDisableLineNumber) + [func (o *OutputOption) 
SetEnableColor()](#OutputOption.SetEnableColor) + [func (o *OutputOption) SetEnableGroup()](#OutputOption.SetEnableGroup) + [func (o *OutputOption) SetEnableLineNumber()](#OutputOption.SetEnableLineNumber) * [type PlatinumSearcher](#PlatinumSearcher) * + [func (p PlatinumSearcher) Run(args []string) int](#PlatinumSearcher.Run) * [type SearchOption](#SearchOption) * + [func (o *SearchOption) SetFilesWithRegexp(p string)](#SearchOption.SetFilesWithRegexp) ### Constants [¶](#pkg-constants) ``` const ( ColorReset = "\x1b[0m\x1b[K" SeparatorColon = ":" SeparatorHyphen = "-" ) ``` ``` const ( UNKNOWN = [iota](/builtin#iota) ERROR BINARY ASCII UTF8 EUCJP SHIFTJIS ) ``` ``` const ( ExitCodeOK = [iota](/builtin#iota) ExitCodeError ) ``` ### Variables [¶](#pkg-variables) ``` var NewLineBytes = [][byte](/builtin#byte){10} ``` ### Functions [¶](#pkg-functions) This section is empty. ### Types [¶](#pkg-types) #### type [Option](https://github.com/monochromegane/the_platinum_searcher/blob/v2.2.0/option.go#L6) [¶](#Option) ``` type Option struct { Version [bool](/builtin#bool) `long:"version" description:"Show version"` OutputOption *[OutputOption](#OutputOption) `group:"Output Options"` SearchOption *[SearchOption](#SearchOption) `group:"Search Options"` } ``` Top level options #### type [OutputOption](https://github.com/monochromegane/the_platinum_searcher/blob/v2.2.0/option.go#L13) [¶](#OutputOption) ``` type OutputOption struct { Color func() `long:"color" description:"Print color codes in results (default: true)"` NoColor func() `long:"nocolor" description:"Don't print color codes in results (default: false)"` ForceColor [bool](/builtin#bool) // Force color. Not user option. EnableColor [bool](/builtin#bool) // Enable color. Not user option. 
ColorLineNumber func([string](/builtin#string)) `long:"color-line-number" description:"Color codes for line numbers (default: 1;33)"` ColorPath func([string](/builtin#string)) `long:"color-path" description:"Color codes for path names (default: 1;32)"` ColorMatch func([string](/builtin#string)) `long:"color-match" description:"Color codes for result matches (default: 30;43)"` ColorCodeLineNumber [string](/builtin#string) // Color line numbers. Not user option. ColorCodePath [string](/builtin#string) // Color path names. Not user option. ColorCodeMatch [string](/builtin#string) // Color result matches. Not user option. Group func() `long:"group" description:"Print file name at header (default: true)"` NoGroup func() `long:"nogroup" description:"Don't print file name at header (default: false)"` ForceGroup [bool](/builtin#bool) // Force group. Not user option. EnableGroup [bool](/builtin#bool) // Enable group. Not user option. Null [bool](/builtin#bool) `short:"0" long:"null" description:"Separate filenames with null (for 'xargs -0') (default: false)"` Column [bool](/builtin#bool) `long:"column" description:"Print column (default: false)"` LineNumber func() `long:"numbers" description:"Print Line number. (default: true)"` NoLineNumber func() `short:"N" long:"nonumbers" description:"Omit Line number. (default: false)"` ForceLineNumber [bool](/builtin#bool) // Force line number. Not user option. EnableLineNumber [bool](/builtin#bool) // Enable line number. Not user option. 
After [int](/builtin#int) `short:"A" long:"after" description:"Print lines after match"` Before [int](/builtin#int) `short:"B" long:"before" description:"Print lines before match"` Context [int](/builtin#int) `short:"C" long:"context" description:"Print lines before and after match"` FilesWithMatches [bool](/builtin#bool) `short:"l" long:"files-with-matches" description:"Only print filenames that contain matches"` Count [bool](/builtin#bool) `short:"c" long:"count" description:"Only print the number of matching lines for each input file."` OutputEncode [string](/builtin#string) `short:"o" long:"output-encode" description:"Specify output encoding (none, jis, sjis, euc)"` } ``` Output options. #### func (*OutputOption) [SetColorLineNumber](https://github.com/monochromegane/the_platinum_searcher/blob/v2.2.0/option.go#L94) [¶](#OutputOption.SetColorLineNumber) ``` func (o *[OutputOption](#OutputOption)) SetColorLineNumber(code [string](/builtin#string)) ``` #### func (*OutputOption) [SetColorMatch](https://github.com/monochromegane/the_platinum_searcher/blob/v2.2.0/option.go#L102) [¶](#OutputOption.SetColorMatch) ``` func (o *[OutputOption](#OutputOption)) SetColorMatch(code [string](/builtin#string)) ``` #### func (*OutputOption) [SetColorPath](https://github.com/monochromegane/the_platinum_searcher/blob/v2.2.0/option.go#L98) [¶](#OutputOption.SetColorPath) ``` func (o *[OutputOption](#OutputOption)) SetColorPath(code [string](/builtin#string)) ``` #### func (*OutputOption) [SetDisableColor](https://github.com/monochromegane/the_platinum_searcher/blob/v2.2.0/option.go#L72) [¶](#OutputOption.SetDisableColor) ``` func (o *[OutputOption](#OutputOption)) SetDisableColor() ``` #### func (*OutputOption) [SetDisableGroup](https://github.com/monochromegane/the_platinum_searcher/blob/v2.2.0/option.go#L90) [¶](#OutputOption.SetDisableGroup) ``` func (o *[OutputOption](#OutputOption)) SetDisableGroup() ``` #### func (*OutputOption) 
[SetDisableLineNumber](https://github.com/monochromegane/the_platinum_searcher/blob/v2.2.0/option.go#L81) [¶](#OutputOption.SetDisableLineNumber) ``` func (o *[OutputOption](#OutputOption)) SetDisableLineNumber() ``` #### func (*OutputOption) [SetEnableColor](https://github.com/monochromegane/the_platinum_searcher/blob/v2.2.0/option.go#L67) [¶](#OutputOption.SetEnableColor) ``` func (o *[OutputOption](#OutputOption)) SetEnableColor() ``` #### func (*OutputOption) [SetEnableGroup](https://github.com/monochromegane/the_platinum_searcher/blob/v2.2.0/option.go#L85) [¶](#OutputOption.SetEnableGroup) ``` func (o *[OutputOption](#OutputOption)) SetEnableGroup() ``` #### func (*OutputOption) [SetEnableLineNumber](https://github.com/monochromegane/the_platinum_searcher/blob/v2.2.0/option.go#L76) [¶](#OutputOption.SetEnableLineNumber) ``` func (o *[OutputOption](#OutputOption)) SetEnableLineNumber() ``` #### type [PlatinumSearcher](https://github.com/monochromegane/the_platinum_searcher/blob/v2.2.0/platinum_searcher.go#L25) [¶](#PlatinumSearcher) ``` type PlatinumSearcher struct { Out, Err [io](/io).[Writer](/io#Writer) } ``` #### func (PlatinumSearcher) [Run](https://github.com/monochromegane/the_platinum_searcher/blob/v2.2.0/platinum_searcher.go#L29) [¶](#PlatinumSearcher.Run) ``` func (p [PlatinumSearcher](#PlatinumSearcher)) Run(args [][string](/builtin#string)) [int](/builtin#int) ``` #### type [SearchOption](https://github.com/monochromegane/the_platinum_searcher/blob/v2.2.0/option.go#L107) [¶](#SearchOption) ``` type SearchOption struct { Regexp [bool](/builtin#bool) `` /* 168-byte string literal not displayed */ IgnoreCase [bool](/builtin#bool) `short:"i" long:"ignore-case" description:"Match case insensitively"` SmartCase [bool](/builtin#bool) `short:"S" long:"smart-case" description:"Match case insensitively unless PATTERN contains uppercase characters"` WordRegexp [bool](/builtin#bool) `short:"w" long:"word-regexp" description:"Only match whole words"` Ignore 
[][string](/builtin#string) `long:"ignore" description:"Ignore files/directories matching pattern"` VcsIgnore [][string](/builtin#string) `long:"vcs-ignore" description:"VCS ignore files" default:".gitignore"` GlobalGitIgnore [bool](/builtin#bool) `long:"global-gitignore" description:"Use git's global gitignore file for ignore patterns"` HomePtIgnore [bool](/builtin#bool) `long:"home-ptignore" description:"Use $Home/.ptignore file for ignore patterns"` SkipVcsIgnore [bool](/builtin#bool) `short:"U" long:"skip-vcs-ignores" description:"Don't use VCS ignore file for ignore patterns"` FilesWithRegexp func([string](/builtin#string)) `short:"g" description:"Print filenames matching PATTERN"` EnableFilesWithRegexp [bool](/builtin#bool) // Enable files with regexp. Not user option. PatternFilesWithRegexp [string](/builtin#string) // Pattern files with regexp. Not user option. FileSearchRegexp [string](/builtin#string) `short:"G" long:"file-search-regexp" description:"PATTERN Limit search to filenames matching PATTERN"` Depth [int](/builtin#int) `long:"depth" default:"25" description:"Search up to NUM directories deep"` Follow [bool](/builtin#bool) `short:"f" long:"follow" description:"Follow symlinks"` Hidden [bool](/builtin#bool) `long:"hidden" description:"Search hidden files and directories"` SearchStream [bool](/builtin#bool) // Input from pipe. Not user option. } ``` Search options. #### func (*SearchOption) [SetFilesWithRegexp](https://github.com/monochromegane/the_platinum_searcher/blob/v2.2.0/option.go#L127) [¶](#SearchOption.SetFilesWithRegexp) ``` func (o *[SearchOption](#SearchOption)) SetFilesWithRegexp(p [string](/builtin#string)) ```
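Since `PlatinumSearcher` and its `Run` method are exported, pt can be embedded in your own Go program. A minimal wrapper might look like the following sketch; the import alias and overall layout are assumptions, only the `PlatinumSearcher` fields, the `Run` signature and the exit-code constants come from the docs above:

```go
package main

import (
	"os"

	pt "github.com/monochromegane/the_platinum_searcher"
)

func main() {
	// Route the searcher's normal and error output to this process's streams.
	p := pt.PlatinumSearcher{Out: os.Stdout, Err: os.Stderr}
	// Run parses the arguments (options, PATTERN, PATH) and returns an exit
	// code (ExitCodeOK or ExitCodeError), which can be handed to os.Exit.
	os.Exit(p.Run(os.Args[1:]))
}
```

This is essentially what the `pt` command-line binary does: all searching, encoding detection and output formatting live in the library, and the entry point only wires up streams and arguments.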
rusda
cran
R
Package ‘rusda’ October 14, 2022 Type Package Title Interface to USDA Databases Version 1.0.8 Date 2016-03-18 Author <NAME> Maintainer <NAME> <<EMAIL>> Imports XML, httr (>= 0.6.1), plyr, foreach, stringr, testthat, taxize, RCurl Description An interface to the web service methods provided by the United States Department of Agriculture (USDA). The Agricultural Research Service (ARS) provides a large set of databases. The current version of the package holds interfaces to the Systematic Mycology and Microbiology Laboratory (SMML), which consists of four databases: Fungus-Host Distributions, Specimens, Literature and the Nomenclature database. It provides functions for querying these databases. The main function is \code{associations}, which allows searching for fungus-host combinations. License GPL (>= 2) URL http://www.usda.gov/wps/portal/usda/usdahome, http://nt.ars-grin.gov/fungaldatabases/index.cfm Repository CRAN NeedsCompilation no Date/Publication 2016-04-03 00:24:19 R topics documented: rusda-package, associations, literature, meta_smml, substrate, synonyms_smml rusda-package Interface to USDA Databases Description An interface to the web service methods provided by the United States Department of Agriculture (USDA). The Agricultural Research Service (ARS) provides a large set of databases. The current version of the package holds interfaces to the Systematic Mycology and Microbiology Laboratory (SMML), which consists of four databases: Fungus-Host Distributions, Specimens, Literature and the Nomenclature database. It provides functions for querying these databases. The main function is associations, which allows searching for fungus-host combinations. Details Package: rusda Type: Package Version: 1.0.7 Date: 2016-01-20 Author(s) <NAME> Maintainer: <NAME> <<EMAIL>> References Farr, D.F., & Rossman, A.Y. 
Fungal Databases, Systematic Mycology and Microbiology Laboratory, ARS, USDA http://nt.ars-grin.gov/sbmlweb/fungi/databases.cfm, http://www.usda.gov/wps/portal/usda/usdahome associations Downloads associations for input species from SMML Fungus-Host DB Description Searches and downloads associations from the SMML Fungus-Hosts Distributions and Specimens databases for a fungus or plant species input vector Usage associations(x, database = c("FH", "SP", "both"), spec_type = c("plant", "fungus"), clean = TRUE, syn_include = TRUE, process = TRUE) Arguments x a vector of class character containing fungal or plant species names or a genus name (see Details) database a character string specifying the databases that should be queried. Valid are "FH" (Fungus-Host Distributions), "SP" (Specimens) or "both" databases spec_type a character string specifying the type of x. Can be either "plant" or "fungus" clean logical, if TRUE a cleaning step is run on the resulting associations list syn_include logical, if TRUE associations for synonyms are searched and added. For a complete synonyms list check rusda::synonyms process logical, if TRUE the downloading and extraction process is displayed Details The Fungus-Hosts distributions database ’FH’ comprises data compiled from literature. In the uncleaned output all kinds of unspecified substrates are documented, like "submerged wood". Cleaned data displays Linnean names only and species names with either "subsp.", "f. sp.", "f." or "var.". The Specimens database comprises entries from field collections. If genera names are supplied, then species are derived from the NCBI taxonomy. Value an object of class list. First is synonyms, second is associations. Synonyms is a vector of mode list with synonyms for x. Notice: This is not a complete list of synonym data in the database. This is the list of synonyms that contain data for the input x. For a complete synonyms list check rusda::synonyms or (if needed) for fungi the R package rmycobank. 
Associations is a vector of mode list of associations for x Author(s) <NAME> Examples ## Not run: ## Example for species name(s) as input x <- "Fagus sylvatica" pathogens <- associations(x, database = "both", clean = TRUE, syn_include = TRUE, spec_type = "plant", process = TRUE) x <- "Rosellinia ligniaria" hosts <- associations(x, database = "both", clean = TRUE, syn_include = TRUE, spec_type = "fungus", process = TRUE) is.element("Rosellinia ligniaria", pathogens$association[[1]]) is.element("Fagus sylvatica", hosts$association[[1]]) ## Example for genus/genera name(s) as input x <- "Zehneria" # or x <- c("Zehneria", "Momordica") hosts <- associations(x, database = "both", clean = TRUE, syn_include = TRUE, spec_type = "plant", process = TRUE) ## End(Not run) literature Downloads literature from SMML Literature DB Description Searches and downloads literature entries from the SMML Literature database Usage literature(x, spec_type = c("plant", "fungus"), process = TRUE) Arguments x a vector of class character containing fungal or plant species names spec_type a character string specifying the type of x. Can be either "plant" or "fungus" process logical, if TRUE the downloading and extraction process is displayed Value an object of class list: a vector of mode list with literature entries for x Author(s) <NAME> Examples ## Not run: x <- "Polyporus badius" lit <- literature(x, process = TRUE, spec_type = "fungus") lit ## End(Not run) meta_smml Downloads and evaluates species presence in SMML DBs Description Searches, downloads and evaluates presence/absence of data in the SMML databases Usage meta_smml(x, spec_type = c("plant", "fungus"), process = TRUE) Arguments x a vector of class character containing fungal or plant species or genus names spec_type a character string specifying the type of x. 
Can be either "plant" or "fungus" process logical, if TRUE the downloading and extraction process is displayed Details Use this function before deriving data from one of the databases in order to prune your input species vector. With pruned species vectors the functions will run faster. This is important if x is several hundred species long. Value an object of class data.frame: presence/absence Author(s) <NAME> Examples ## Not run: fungus.meta <- meta_smml(x = "Picea abies", process = TRUE, spec_type = "plant") fungus.meta hosts.meta <- meta_smml(x = "Antrodiella citrinella", process = TRUE, spec_type = "fungus") hosts.meta ## End(Not run) substrate Downloads substrate data from SMML Nomenclature DB Description Searches and downloads substrate data from the SMML Nomenclature database Usage substrate(x, process = TRUE) Arguments x a vector of class character containing fungal or plant species names process logical, if TRUE the downloading and extraction process is displayed Details Don’t be disappointed if there is not much data there; it depends on the study group, so give it a try. Value an object of mode list containing substrates for fungus species Author(s) <NAME> Examples ## Not run: x <- c("Polyporus_rhizophilus", "Polyporus_squamosus") subs.poly <- substrate(x, process=TRUE) subs.poly ## End(Not run) synonyms_smml Downloads synonym data from SMML Nomenclature DB Description Searches and downloads synonym data from the SMML Nomenclature database Usage synonyms_smml(x, spec_type = c("plant", "fungus"), clean = TRUE, process = TRUE) Arguments x a vector of class character containing fungal or plant species or genus names spec_type a character string specifying the type of x. 
Can be either "plant" or "fungus" clean logical, if TRUE a cleaning step is run on the resulting synonyms list process logical, if TRUE the downloading and extraction process is displayed Value an object of class list containing synonyms for x Author(s) <NAME> Examples ## Not run: x <- "Solanum tuberosum" synonyms_smml(x, spec_type = "plant", process = TRUE, clean = TRUE) x <- c("Phytophthora infestans", "Polyporus badius") synonyms_smml(x, spec_type = "fungus", process = TRUE, clean = TRUE) ## End(Not run)
viralmodels
cran
R
Package ‘viralmodels’ September 29, 2023 Title Viral Load and CD4 Lymphocytes Regression Models Version 1.1.0 Description Provides a comprehensive framework for building, evaluating, and visualizing regression models for analyzing viral load and CD4 (Cluster of Differentiation 4) lymphocytes data. It leverages the principles of the tidymodels ecosystem of Max Kuhn and Hadley Wickham (2020) <https://www.tidymodels.org> to offer a user-friendly experience in model development. This package includes functions for data preprocessing, feature engineering, model training, tuning, and evaluation, along with visualization tools to enhance the interpretation of model results. It is specifically designed for researchers in biostatistics, computational biology, and HIV research who aim to perform reproducible and rigorous analyses to gain insights into disease dynamics. The main focus is on improving the understanding of the relationships between viral load, CD4 lymphocytes, and other relevant covariates to contribute to HIV research and the visibility of vulnerable seropositive populations. License GPL (>= 3) Encoding UTF-8 RoxygenNote 7.2.3 Imports dplyr, earth, kernlab, nnet, parsnip, recipes, rsample, tidyselect, tune, vdiffr, workflows, workflowsets Suggests testthat (>= 3.0.0) Config/testthat/edition 3 NeedsCompilation no Author <NAME> [aut, cre] (<https://orcid.org/0009-0003-6029-6560>) Maintainer <NAME> <<EMAIL>> Repository CRAN Date/Publication 2023-09-29 08:40:02 UTC R topics documented: viralmodel, viraltab, viralvis 
viralmodel Select best model Description viralmodel returns metrics for a selected model Usage viralmodel(x, semilla, target, pliegues, repeticiones, rejilla, modelo) Arguments x A data frame semilla A numeric value target A character value pliegues A numeric value repeticiones A numeric value rejilla A numeric value modelo A character value Value A table with a single model’s hyperparameters Examples cd_2019 <- c(824, 169, 342, 423, 441, 507, 559, 173, 764, 780, 244, 527, 417, 800, 602, 494, 345, 780, 780, 527, 556, 559, 238, 288, 244, 353, 169, 556, 824, 169, 342, 423, 441, 507, 559) vl_2019 <- c(40, 11388, 38961, 40, 75, 4095, 103, 11388, 46, 103, 11388, 40, 0, 11388, 0, 4095, 40, 93, 49, 49, 49, 4095, 6837, 38961, 38961, 0, 0, 93, 40, 11388, 38961, 40, 75, 4095, 103) cd_2021 <- c(992, 275, 331, 454, 479, 553, 496, 230, 605, 432, 170, 670, 238, 238, 634, 422, 429, 513, 327, 465, 479, 661, 382, 364, 109, 398, 209, 1960, 992, 275, 331, 454, 479, 553, 496) vl_2021 <- c(80, 1690, 5113, 71, 289, 3063, 0, 262, 0, 15089, 13016, 1513, 60, 60, 49248, 159308, 56, 0, 516675, 49, 237, 84, 292, 414, 26176, 62, 126, 93, 80, 1690, 5113, 71, 289, 3063, 0) cd_2022 <- c(700, 127, 127, 547, 547, 547, 777, 149, 628, 614, 253, 918, 326, 326, 574, 361, 253, 726, 659, 596, 427, 447, 326, 253, 248, 326, 260, 918, 700, 127, 127, 547, 547, 547, 777) vl_2022 <- c(0, 0, 53250, 0, 40, 1901, 0, 955, 0, 0, 0, 0, 40, 0, 49248, 159308, 56, 0, 516675, 49, 237, 0, 23601, 0, 40, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0) x <- cbind(cd_2019, vl_2019, cd_2021, vl_2021, cd_2022, vl_2022) |> as.data.frame() semilla <- 123 target <- "cd_2022" pliegues <- 2 repeticiones <- 1 rejilla <- 1 modelo <- "simple_MARS" viralmodel(x, semilla, target, pliegues, repeticiones, rejilla, modelo) viraltab Competing models table Description viraltab trains and optimizes a series of regression models for viral load or CD4 counts Usage viraltab(x, semilla, target, pliegues, repeticiones, rejilla) Arguments x A data frame semilla A 
numeric value target A character value pliegues A numeric value repeticiones A numeric value rejilla A numeric value Value A table of competing models Examples cd_2019 <- c(824, 169, 342, 423, 441, 507, 559, 173, 764, 780, 244, 527, 417, 800, 602, 494, 345, 780, 780, 527, 556, 559, 238, 288, 244, 353, 169, 556, 824, 169, 342, 423, 441, 507, 559) vl_2019 <- c(40, 11388, 38961, 40, 75, 4095, 103, 11388, 46, 103, 11388, 40, 0, 11388, 0, 4095, 40, 93, 49, 49, 49, 4095, 6837, 38961, 38961, 0, 0, 93, 40, 11388, 38961, 40, 75, 4095, 103) cd_2021 <- c(992, 275, 331, 454, 479, 553, 496, 230, 605, 432, 170, 670, 238, 238, 634, 422, 429, 513, 327, 465, 479, 661, 382, 364, 109, 398, 209, 1960, 992, 275, 331, 454, 479, 553, 496) vl_2021 <- c(80, 1690, 5113, 71, 289, 3063, 0, 262, 0, 15089, 13016, 1513, 60, 60, 49248, 159308, 56, 0, 516675, 49, 237, 84, 292, 414, 26176, 62, 126, 93, 80, 1690, 5113, 71, 289, 3063, 0) cd_2022 <- c(700, 127, 127, 547, 547, 547, 777, 149, 628, 614, 253, 918, 326, 326, 574, 361, 253, 726, 659, 596, 427, 447, 326, 253, 248, 326, 260, 918, 700, 127, 127, 547, 547, 547, 777) vl_2022 <- c(0, 0, 53250, 0, 40, 1901, 0, 955, 0, 0, 0, 0, 40, 0, 49248, 159308, 56, 0, 516675, 49, 237, 0, 23601, 0, 40, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0) x <- cbind(cd_2019, vl_2019, cd_2021, vl_2021, cd_2022, vl_2022) |> as.data.frame() semilla <- 123 target <- "cd_2022" pliegues <- 2 repeticiones <- 1 rejilla <- 1 viraltab(x, semilla, target, pliegues, repeticiones, rejilla) viralvis Competing models plot Description viralvis plots the rankings of a series of regression models for viral load or cd4 counts Usage viralvis(x, semilla, target, pliegues, repeticiones, rejilla) Arguments x A data frame semilla A numeric value target A character value pliegues A numeric value repeticiones A numeric value rejilla A numeric value Value A plot of ranking models Examples cd_2019 <- c(824, 169, 342, 423, 441, 507, 559, 173, 764, 780, 244, 527, 417, 800, 602, 494, 345, 780, 780, 527, 556, 559, 
238, 288, 244, 353, 169, 556, 824, 169, 342, 423, 441, 507, 559) vl_2019 <- c(40, 11388, 38961, 40, 75, 4095, 103, 11388, 46, 103, 11388, 40, 0, 11388, 0, 4095, 40, 93, 49, 49, 49, 4095, 6837, 38961, 38961, 0, 0, 93, 40, 11388, 38961, 40, 75, 4095, 103) cd_2021 <- c(992, 275, 331, 454, 479, 553, 496, 230, 605, 432, 170, 670, 238, 238, 634, 422, 429, 513, 327, 465, 479, 661, 382, 364, 109, 398, 209, 1960, 992, 275, 331, 454, 479, 553, 496) vl_2021 <- c(80, 1690, 5113, 71, 289, 3063, 0, 262, 0, 15089, 13016, 1513, 60, 60, 49248, 159308, 56, 0, 516675, 49, 237, 84, 292, 414, 26176, 62, 126, 93, 80, 1690, 5113, 71, 289, 3063, 0) cd_2022 <- c(700, 127, 127, 547, 547, 547, 777, 149, 628, 614, 253, 918, 326, 326, 574, 361, 253, 726, 659, 596, 427, 447, 326, 253, 248, 326, 260, 918, 700, 127, 127, 547, 547, 547, 777) vl_2022 <- c(0, 0, 53250, 0, 40, 1901, 0, 955, 0, 0, 0, 0, 40, 0, 49248, 159308, 56, 0, 516675, 49, 237, 0, 23601, 0, 40, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0) x <- cbind(cd_2019, vl_2019, cd_2021, vl_2021, cd_2022, vl_2022) |> as.data.frame() semilla <- 123 target <- "cd_2022" pliegues <- 2 repeticiones <- 1 rejilla <- 1 viralvis(x, semilla, target, pliegues, repeticiones, rejilla)
lithos
readthedoc
YAML
Lithos 0.18.4 documentation [Lithos](index.html#document-index) --- Welcome to Lithos’s documentation![¶](#welcome-to-lithos-s-documentation) === Contents: Configuration Overview[¶](#configuration-overview) --- Lithos has 4 configs: 1. `/etc/lithos/master.yaml` – global configuration for the whole lithos daemon. An empty config should work most of the time. [Master Configuration](index.html#master-config) 2. `/etc/lithos/sandboxes/<NAME>.yaml` – the allowed paths and other system limits for every sandbox. You may think of a sandbox as a single application. [Sandbox Config](index.html#sandbox-config) 3. `/etc/lithos/processes/<NAME>.yaml` – you may think of it as a list of pairs (image_name, num_of_processes_to_run). It’s only a tiny bit longer than that. [Process Config](index.html#process-config) 4. `<IMAGE>/config/<NAME>.yaml` – the configuration of the process to run. It’s where all the settings needed to run the process are. It’s stored inside the image (so it is updated with a new image), and limited by the limits in the sandbox config. [Container Configuration](index.html#container-config) It may look like too much. But note that in one real-world deployment I have, the first two configs contain 8 lines (5 unique settings). The third is simple. And the fourth has the essential info you need to run a process, like in any other supervisor. Master Configuration[¶](#master-configuration) --- The master configuration file is the one usually found at `/etc/lithos/master.yaml`; it defines a small subset of global configuration parameters. The minimal configuration is an *empty file*, but it **must exist** anyway. Here is the reference of the parameters along with the default values: `sandboxes-dir`[¶](#opt-sandboxes-dir) The directory for per-application configuration files which contain limits of what an application might use. If the path is relative, it’s relative to the directory where the configuration file is. Default is `./sandboxes`. 
`processes-dir`[¶](#opt-processes-dir) The directory for per-application configuration files which contain the name of the image directory, the instance number, etc., to run. If the path is relative, it’s relative to the directory where the configuration file is. Default is `./processes`. `runtime-dir`[¶](#opt-runtime-dir) The directory where the `pid` file of the master process is stored, and also the base directory for `state-dir` and `mount-dir`. The path must be absolute. It’s expected to be stored on `tmpfs`. Default `/run/lithos`. `state-dir`[¶](#opt-state-dir) The directory where containers’ state dirs are kept. If the path is relative, it’s relative to `runtime-dir`. Default `state` (i.e. `/run/lithos/state`). The path should be on `tmpfs`. `mount-dir`[¶](#opt-mount-dir) An empty directory to use for mounting. If the path is relative, it’s relative to `runtime-dir`. Default `mnt`. `devfs-dir`[¶](#opt-devfs-dir) The directory where the `/dev` filesystem for containers exists. If it’s not `/dev` (which is not recommended), you should create the directory with the `lithos_mkdev` script. Default `/var/lib/lithos/dev`. `cgroup-name`[¶](#opt-cgroup-name) The name of the root cgroup for all lithos processes. Specify `null` (or any other form of YAMLy null) to turn cgroups off completely. `cgroup-controllers`[¶](#opt-cgroup-controllers) List of cgroup controllers to initialize for each container. Note: the empty list is treated as the default. Default is `[name, cpu, cpuacct, memory, blkio]`. If you have some controllers joined together, like `cpu,cpuacct`, that’s ok. Use `cgroup-name: null` to turn cgroup tracking off (not an empty list here). And use `cgroup-controllers: [name]` to use cgroups only for naming processes but not for resource control. Note that turning off cgroups means resource limits do not work at all: 
lithos will not try to enforce them by polling or any other means. `default-log-dir`[¶](#opt-default-log-dir) (default `/var/log/lithos`) The directory where the master log and each of the application logs are created (unless overridden by the sandbox config). `config-log-dir`[¶](#opt-config-log-dir) (default `/var/log/lithos/config`) The directory where the configurations of the processes are stored. These are used by `lithos_clean` to find out when it’s safe to clean directories. You may also reconstruct the processes’ configuration at any point in time using this directory. Changed in version 0.10.2: Parameter can be `null`: ``` config-log-dir: null ``` In this case no configuration logging is done. This is mainly useful if you track configurations and versions by some other means. Note This is enabled by default for backwards-compatibility reasons. We consider resetting this value to `null` by default in `lithos 1.0`, as this parameter is not as useful as expected. `stdio-log-dir`[¶](#opt-stdio-log-dir) (default `/var/log/lithos/stderr`) The directory where the stderr of the processes is forwarded. One file per sandbox is created. These files are created by lithos and the file descriptor is passed to the application as both stdout and stderr. Lithos does not parse, copy or otherwise proxy the data; the operating system does all the work. This also means lithos can’t rotate or do any other magical things with the log. This should be used only to tackle critical errors. An application should send its log to syslog or write rotating log files on its own, because there are no good tools to group lines of stderr into solid log messages that include tracebacks and other fancy stuff. Good utilities to manage the files: * `logrotate` in `copytruncate` mode * `rsyslog` with the file input plugin This can be overridden per process by [`stdout-stderr-file`](index.html#opt-stdout-stderr-file). Note The path is reopened on process restart. 
If [`restart-process-only`](index.html#opt-restart-process-only) is true then it’s only reopened when the configuration changes. This is good to know if you remove or rename the file by hand. `log-file`[¶](#opt-log-file) (default `master.log`) Master log file. Relative paths are resolved relative to [`default-log-dir`](#opt-default-log-dir). `log-level`[¶](#opt-log-level) (default `warn`) Level of logging. Can be overridden on the command line. `syslog-facility`[¶](#opt-syslog-facility) (no default) Enables logging to syslog (with the specified facility) instead of a file. `syslog-name`[¶](#opt-syslog-name) (default `lithos`) Application name for the master process in syslog. The child processes are prefixed by this value, for example `lithos-django` (where `django` is a sandbox name). Sandbox Config[¶](#sandbox-config) --- This config resides in `/etc/lithos/sandboxes/NAME.yaml` (by default), where `NAME` is the name of a sandbox. The configuration file contains security and resource limits for the container, including: * The directory where the image resides * The set of directories that are mounted inside the container (i.e. all writable directories for the container, such as `/tmp`
)
* ulimit settings
* cgroup limits

### Reference[¶](#reference)

`config-file`[¶](#opt-config-file) The path of the [processes config](index.html#process-config). In most cases it should be left unset. The default is `null`, which results in `/etc/lithos/processes/NAME.yaml` with all other settings at their defaults.

`image-dir`[¶](#opt-image-dir) Directory where application images are. Every subdirectory of the `image-dir` may be mounted as a root file system in the container. **Required**.

`image-dir-levels`[¶](#opt-image-dir-levels) (default `1`) The number of directory components required for an image name in [`image-dir`](#opt-image-dir)

`log-file`[¶](#opt-log-file) The file name where the **supervisor** log of the container is put. Default is `/var/log/lithos/SANDBOX_NAME.yaml`.

`log-level`[¶](#opt-log-level) (default `warn`). The logging level of the supervisor.

`readonly-paths`[¶](#opt-readonly-paths) The mapping of `virtual_directory: host_system_directory` for folders which are visible to the container in read-only mode. (Note: currently, if you have submounts in the source directory, they may be available as writable.) See [Volumes](index.html#volumes) for more details.

`writable-paths`[¶](#opt-writable-paths) The mapping of `virtual_directory: host_system_directory` for folders which are visible to the container in writable mode. See [Volumes](index.html#volumes) for more details.

`allow-users`[¶](#opt-allow-users) List of ranges of user ids which can be used by the container. For containers without user namespaces, it's just a limit on the `user-id` setting. Example:

```
allow-users: [1, 99, 1000-2000]
```

For containers which have uid maps enabled **in the sandbox**, this is the list of users available *after* the uid mapping is applied. For example, the following maps uid 100000 as root in the namespace (e.g.
for file permissions), but doesn't allow starting a process as root (even if it's 100000 outside):

```
uid-map: [{outside: 100000, inside: 0, count: 65536}]
allow-users: [1-65535]
```

For containers which have uid maps enabled **in the container config**, it limits all the user ids available to the namespace (i.e. the outside setting of the uid map).

`default-user`[¶](#opt-default-user) (no default) A user id used in the container if no `user-id` is specified in the container config. By default `user-id` is required. Note: the `default-user` value must be contained in the `allow-users` range.

`allow-groups`[¶](#opt-allow-groups) List of ranges of group ids for the container. Works similarly to [`allow-users`](#opt-allow-users).

`default-group`[¶](#opt-default-group) (default `0`) A group id used in the container if no `group-id` is specified in the container config. Note: the `default-group` value must be contained in the `allow-groups` range.

`allow-tcp-ports`[¶](#opt-allow-tcp-ports) List of ranges of allowed TCP ports for the container. This is currently not enforced in any way except:

1. Ports < 1024 are restricted by the OS for non-root (but may be allowed here)
2. It restricts the `bind-port` setting in the container config

Note: if you have overlapping TCP ports for different sandboxes, only a single file descriptor will be used for each port. The config used to open the port is picked arbitrarily from a single config amongst all users, which has obvious security implications.

Warning: [`tcp-ports`](index.html#opt-tcp-ports) bind the port in the **host namespace**, i.e. they effectively bypass [`bridged-network`](#opt-bridged-network) for that port. This is both a feature and a potential pitfall, so most of the time you should keep [`allow-tcp-ports`](#opt-allow-tcp-ports) empty if using a bridged network.

`additional-hosts`[¶](#opt-additional-hosts) Mapping of `hostname: ip` for names that will be added to the `/etc/hosts` file. This is occasionally used for cheap but static service discovery.
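For instance, a sandbox might pin a couple of internal names this way (a sketch only; the hostnames and addresses below are invented for illustration):

```
additional-hosts:
  db.internal: 10.0.0.12
  cache.internal: 10.0.0.13
```

Every container in the sandbox then resolves `db.internal` via its `/etc/hosts` without any DNS involvement.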
`uid-map, gid-map`[¶](#opt-uid-map,gid-map) The list of mappings for uids (gids) in the user namespace of the container. If they are not specified, the user namespace is not used. This setting allows running processes with `uid` zero without the risk of being `root` on the host system. Here is an example of maps:

```
uid-map:
- {inside: 0, outside: 1000, count: 1}
- {inside: 1, outside: 1, count: 1}
gid-map:
- {inside: 0, outside: 100, count: 1}
```

Note: Currently you may have a uid-map either in a sandbox or in a container config, not both.

`used-images-list`[¶](#opt-used-images-list) (optional) A text file that is used by `lithos_clean` to keep images alive. It is not used for anything other than the `lithos_clean` utility. Each line of the file should contain an image name relative to the `image_dir`. It is expected that the list is kept up to date by some orchestration system, by deployment scripts, or by any other tool meaningful to the ops team. This setting is only useful if `auto-clean` is `true` (default).

`auto-clean`[¶](#opt-auto-clean) (default `true`) Clean images of this sandbox when running `lithos_clean`. This is subject to the following caveats:

1. `lithos_clean` is not run by lithos automatically; you have to run it from a cron tab
2. If the same `image-dir` is used for multiple sandboxes, it will be cleaned if at least one of them has a non-falsy `auto-clean`.

`resolv-conf`[¶](#opt-resolv-conf) (default `/etc/resolv.conf`) The default place to copy `resolv.conf` from for containers. Note: the container itself can override its own `resolv.conf` file, but can't read the original `/etc/resolv.conf` if this setting is changed.

`hosts-file`[¶](#opt-hosts-file) (default `/etc/hosts`) The default place to copy `hosts` from for containers. Note: the container itself can override its own `hosts` file, but can't read the original `/etc/hosts` if this setting is changed.
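Putting several of the options above together, a minimal sandbox config might look roughly like this (a sketch only; all paths, port ranges and the sandbox name are hypothetical):

```
# /etc/lithos/sandboxes/django.yaml -- hypothetical values
image-dir: /var/lib/lithos/images/django
image-dir-levels: 1
allow-users: [1-65535]
allow-groups: [1-65535]
allow-tcp-ports: [8000-8010]
writable-paths:
  /tmp: /var/lib/lithos/tmp/django
```

Only `image-dir` is required; everything else narrows what the processes config and container configs are allowed to request.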
`bridged-network`[¶](#opt-bridged-network) (default is absent) A network bridge configuration for all the containers in the bridge. Example:

```
bridged-network:
  bridge: br0
  network: 10.0.0.0/24
  default_gateway: 10.0.0.1
  after-setup-command: [/usr/bin/arping, -U, -c1, '@{container_ip}']
```

Note: when the bridged network is active, your [Process Config](index.html#process-config) should contain a list of ip addresses, one for each container.

Note: this setting does not affect `tcp-ports`. So usually you should keep the [`allow-tcp-ports`](#opt-allow-tcp-ports) setting empty when using a bridged network.

Options:

`after-setup-command`[¶](#bopt-after-setup-command) Command to run after setting up the container namespace but before running the actual container. The example shown above sends an unsolicited arp packet to notify the router and other machines on the network that the MAC address corresponding to the container's IP has changed. The command must have an absolute path and gets an almost empty environment, so don't assume `PATH` is there if you're writing a script. The command runs in the *container's network* namespace but with all other namespaces of the host system (in particular, in the *host filesystem* and with root permissions on the host system).

Replacement variables that work in the command line:

* `@{container_ip}` – replaced with the IP address of the container being set up

A few examples:

1. `[/usr/bin/arping, -U, -c1, '@{container_ip}']` – the default in v0.17.x. This notifies other peers that the MAC address for this IP has changed.
2. `[/usr/bin/arping, -c1, '10.0.0.1']` – another way to do that, which often achieves the same as (1) as a side effect (where 10.0.0.1 is the default gateway)
3.
`[/usr/bin/ping, -c1, '10.0.0.1']` – does the same as (2) but using ICMP instead of ARP directly

Most of the time containers should work with an empty `after-setup-command`, but because a container gets a new MAC address each time it starts, there might be a small delay (~ 5 sec) after the container's start during which packets going to that IP are lost (so it appears that the host is unavailable).

`secrets-private-key`[¶](#opt-secrets-private-key) (default is absent) Use the specified private key(s) to decode secrets in the container's [`secret-environ`](index.html#opt-secret-environ) setting. The key in this file is an openssh-compatible ed25519 private key (RSA keys are *not* supported). The file can contain multiple keys (concatenated); if a secret matches any of them, it will be decoded. To create a key, use normal `ssh-keygen` and leave the password empty (password-protected keys aren't supported):

```
ssh-keygen -t ed25519 -f /etc/lithos/keys/secret.key
```

Note: the key must be owned by root with permissions of 0600 (the default for ssh-keygen).

`secrets-namespaces`[¶](#opt-secrets-namespaces) (default is [""]) Allow only secrets with the listed namespaces. Useful only if `secrets-private-key` is set. For example:

```
secrets-namespaces:
- project1.web
- project1.celery
```

The idea is that you might want to use a single secret private key for a whole cluster, but with different services having different "namespaces". This means you can use a single public key for encryption and specify a different namespace for each service. With this setup a user can't just copy a key from one service to another if that other service isn't authorized to read the namespace via [`secrets-namespaces`](#opt-secrets-namespaces). To encrypt a secret for a specific namespace use:

```
lithos_crypt encrypt -k key.pub -d "secret" -n "project1.web"
```

By default both `lithos_crypt` and [`secrets-namespaces`](#opt-secrets-namespaces) specify the empty string as a namespace. This is good enough if you don't have multiple teams sharing the same cluster.
Currently namespaces are limited to the regexp `^[a-zA-Z0-9_.-]*$`. See [Encrypted Variables](index.html#encrypted-vars) for more info.

Process Config[¶](#process-config)
---

This config resides in `/etc/lithos/processes/NAME.yaml` (by default), where `NAME` is the name of a sandbox. It mainly contains three things:

* the `image` the process is run from
* the `config` file name inside the image that specifies the command line and other process execution parameters
* the number of `instances` of the process to run

For example:

```
django:
  image: django.v3.5.7
  config: /config/worker_process.yaml
  instances: 3
redis:
  image: redix.v1
  config: /config/redis.yaml
  instances: 1
```

This will start three python `django` worker processes and one redis.

Hint: Usually this config is generated by some tool like [ansible](http://www.ansible.com/) or [confd](https://github.com/kelseyhightower/confd).

There is also a way to create **ad-hoc** commands. For example:

```
manage:
  kind: Command
  image: django.v3.5.7
  config: /config/manage_py.yaml
```

This allows starting a `manage.py` command with:

```
$ lithos_cmd SANDBOX_NAME manage syncdb
```

This runs the command in the same sandbox as the worker process itself, but the command is attached to the current shell. Commands may be freely mixed with `Daemon` items (which is the default `kind`) in the same config. The only limitation is that names must not be duplicated.

The `Command` kind is occasionally useful, but should be used with care. To start a command you need root privileges on the host system, so it's only useful for SysOp tasks or **maybe** for cron tasks, but not for normal operation of an application.
### Options[¶](#options)

`instances`[¶](#popt-instances) Number of instances to run

`image`[¶](#popt-image) Identifier of the image to run the container from

`config`[¶](#popt-config) Configuration file name (an absolute name in the container) to run

`ip-addresses`[¶](#popt-ip-addresses) A list of ip addresses if [`bridged-network`](index.html#opt-bridged-network) is enforced in the sandbox. Note: the number of items in this list must match the [`instances`](#popt-instances) value.

`variables`[¶](#popt-variables) A mapping of `variable: value` for variables that can be used in the process config.

`extra-secrets-namespaces`[¶](#popt-extra-secrets-namespaces) Additional secrets namespaces allowed for this specific project, in addition to [`secrets-namespaces`](index.html#opt-secrets-namespaces). See [Encrypted Variables](index.html#encrypted-vars) for more info.

### Variables[¶](#variables)

You can also add variables for a specific config. For example:

```
django:
  image: django.v3.5.7
  config: /config/worker_process.yaml
  variables:
    tcp_port: 10001
  instances: 3
```

Only variables that are **declared** in the [container config](index.html#container-variables) can be substituted. Extra variables are ignored. If there is a declared variable but it's not present in the process config, it doesn't pass the configuration check.

Container Configuration[¶](#container-configuration)
---

Container configuration is a YAML file which is usually put at `/config/<service_name>.yaml` inside the container image itself.

Note: Currently the container configuration may be put into any folder inside the image, but we may fix this folder later. An arbitrary path for the container configuration may be a security vulnerability.
A somewhat minimal configuration looks like the following:

```
kind: Daemon
user-id: 1
volumes:
  /tmp: !Tmpfs { size: 100m }
executable: /bin/sleep
arguments: [60]
```

### Variables[¶](#variables)

A container can declare some things that can be changed in a specific instantiation of the service, for example:

```
variables:
  tcp_port: !TcpPort
kind: Daemon
user-id: 1
volumes:
  /tmp: !Tmpfs { size: 100m }
executable: /bin/some_program
arguments:
- "--listen=localhost:@{tcp_port}"
```

The `variables` key declares variable names and types. Values for these variables can be provided in `variables` in the [Process Config](index.html#process-config). There are the following types of variables:

TcpPort Allows a number between 1-65535 and ensures that the number matches the port range allowed in the sandbox (see [`allow-tcp-ports`](index.html#opt-allow-tcp-ports))

Changed in version 0.17.4: Added the `activation` parameter as a shortcut to support the systemd activation protocol. I.e. the following (showing two ports for a more comprehensive example):

```
variables:
  port1: !TcpPort { activation: systemd }
  port2: !TcpPort { activation: systemd }
```

means adding something like this:

```
variables:
  port1: !TcpPort
  port2: !TcpPort
tcp-ports:
  "@{port1}":
    fd: 3
  "@{port2}":
    fd: 4
environ:
  LISTEN_FDS: 1
  LISTEN_FDNAMES: "port1:port2"
  LISTEN_PID: "@{lithos:pid}"
```

This works for any number of sockets. It requires that `LISTEN_FDS`, `LISTEN_FDNAMES` and `LISTEN_PID` are absent in the `environ` as written in the file. It also doesn't allow fine-grained control over the parameters of the socket and the file descriptor numbers; use the full form if you need specific options.

Choice Allows a value from a fixed set of choices (example: `!Choice ["high-priority", "low-priority"]`)

Name Allows a value that matches the regex `^[0-9a-zA-Z_-]+$`. Useful for passing names of things into a script without the risk of the value staying unescaped when passed somewhere within a script or used as a filename.
New in version 0.10.3.

DottedName Allows an arbitrary DNS-like name. It is defined as a dot-separated name with only alphanumerics and underscores, where no component may start or end with a dash and no consecutive dots are allowed.

New in version 0.17.4.

All occurrences of `@{variable_name}` are substituted in the following fields:

1. [`arguments`](#opt-arguments)
2. The values of [`environ`](#opt-environ) (not in the keys yet)
3. The keys in [`tcp-ports`](#opt-tcp-ports) (i.e. the port number)

Expansion in any other place does not work yet, but may be implemented in the future. Only **declared** variables can be substituted. Trying to substitute an undeclared variable or a non-existing built-in variable results in a configuration syntax error.

There are a number of built-in variables that start with `lithos:`:

lithos:name Name of the process, same as inserted in the `LITHOS_NAME` environment variable

lithos:config_filename Full path of this configuration file as visible from within the container

lithos:pid Pid of the process as visible inside the container. Note: this variable can only be used in the environment and can only be the full value of a variable. I.e. `PID: "@{lithos:pid}"` is fine, but `PID: "pid is @{lithos:pid}"` is **not allowed**. (In most cases this variable is exactly `2`; this is expected but might not always be true.)

More built-in variables may be added in the future. Built-in variables don't have to be declared.

### Reference[¶](#reference)

`kind`[¶](#opt-kind) One of `Daemon` (default), `Command` or `CommandOrDaemon`. A `Daemon` is a long-running process that is monitored by the supervisor. `Command` entries are just one-off tasks, for example to initialize local file system data, or to check the health of a daemon process. `Command` entries are run by the `lithos_cmd` utility. A `CommandOrDaemon` may be used in both ways, based on how it was declared in the [Process Config](index.html#process-config).
In the command itself you can distinguish how it is run by `/cmd.` in `LITHOS_NAME` or the cgroup name, or better, you can pass a [variable](#container-variables) to a specific command and/or daemon.

New in version 0.10.3: `CommandOrDaemon` mode

`user-id`[¶](#opt-user-id) The numeric user identifier for the process. It must be one of the allowed values in the lithos configuration. Usually the value `0` is not allowed.

`group-id`[¶](#opt-group-id) The numeric group identifier for the process. It must be one of the allowed values in the lithos configuration. Usually the value `0` is not allowed.

`memory-limit`[¶](#opt-memory-limit) The memory limit for the process and its children. This is enforced by cgroups, so it needs the memory cgroup to be enabled (otherwise it's a no-op). See [`cgroup-controllers`](index.html#opt-cgroup-controllers) for more info. Default: no limit. You can use `ki`, `Mi` and `Gi` units for memory accounting. See [integer-units](http://rust-quire.readthedocs.io/en/latest/user.html#units).

Changed in version 0.14.0: Previously it only set `memory.limit_in_bytes`, but now it also sets `memory.memsw.limit_in_bytes` if the latter exists (otherwise skipping silently). This helps to kill processes earlier instead of swapping out to disk.

`cpu-shares`[¶](#opt-cpu-shares) The number of CPU shares for the process. Default is `1024`, which means all processes get an equal share. You may split them into different values like `768` for one process and `256` for another. This is enforced by cgroups, so it needs the cpu cgroup to be enabled (otherwise it's a no-op). See [`cgroup-controllers`](index.html#opt-cgroup-controllers) for more info.

`fileno-limit`[¶](#opt-fileno-limit) The limit on file descriptors for the process. Default `1024`.

`restart-timeout`[¶](#opt-restart-timeout) The minimum time to wait between subsequent restarts of failed processes, in seconds. This is to ensure that restarts don't bog down the CPU. Default is `1` second, which is enough so that lithos itself does not hang.
But it should be bigger for heavy-weight processes. Note: this is the time between restarts, i.e. if a process has been running for more than this number of seconds, it will be restarted immediately.

`kill-timeout`[¶](#opt-kill-timeout) (default `5` seconds) The time to wait for the application to die. If it is not dead within this number of seconds, we kill it with `KILL`. You should not rely on this timeout being precise, for multiple reasons:

1. Unidentified children are killed with the default timeout (5 sec). This includes children which are being killed when their configuration is removed.
2. When lithos is restarted (i.e. to reload a configuration) during the timeout, the timeout is reset. I.e. the process may hang around longer than this time.

`executable`[¶](#opt-executable) The path of the executable to run. Only absolute paths are allowed.

`arguments`[¶](#opt-arguments) The list of arguments for the command, excluding argument zero.

`environ`[¶](#opt-environ) The mapping of environment variables that are set for the process. You must set all needed environment variables here. The only variable that is propagated by default is `TERM`. Also a few special `LITHOS_` variables may be set. This means you must set all the basics like `LANG`, `HOME` and so on explicitly. This is to ensure that your environment is always the same regardless of where you run the process.

`secret-environ`[¶](#opt-secret-environ) Similar to `environ`, but contains encrypted environment variables. For example:

```
secret-environ:
  DB_PASSWORD: v2:ROit92I5:82HdsExJ:Gd3ocJsr:Hp3pngQZUos5b8ioKVUx40kegM1u<KEY>
```

Note: if an environment variable is present in both `environ` and `secret-environ`, which one takes precedence is not specified for now.

You can encrypt variables using `lithos_crypt`:

```
lithos_crypt encrypt -k key.pub -d "secret" -n "some.namespace"
```

You only need the public key for encryption. So the idea is that the public key is published somewhere, and anyone, even users having no access to the server/private key, can add a secret.
The `-n` / `--namespace` parameter must match one of the [`secrets-namespaces`](index.html#opt-secrets-namespaces) defined for the project's sandbox. Usually there is only one private key for each deployment (cluster), and a single namespace per project. But in some cases you might need a single lithos config for multiple destinations, or just want to rotate the private key smoothly. So you can put secret(s) encoded for multiple keys and/or namespaces:

```
secret-environ:
  DB_PASSWORD:
  - v2:h+M9Ue9x:82HdsExJ:Gd3ocJsr:/+f4ezLfKIP/mp0xdF7H6gfdM7onHWwbGFQX+M1aB+PoCNQidKyz/1yEGrwxD+i+qBGwLVBIXRqIc5FJ6/hw26CE
  - v2:ROit92I5:cX9ciQzf:Gd3ocJsr:LMHBRtPFpMRRrljNnkaU6Y9JyVvEukRiDs4mitnTksNGSX5xU/zADWDwEOCOtYoelbJeyDdPhM7Q1mEOSwjeyO317Q==
  - v2:h+M9Ue9x:82HdsExJ:Gd3ocJsr:/+f4ezLfKIP/mp0xdF7H6gfdM7onHWwbGFQX+M1aB+PoCNQidKyz/1yEGrwxD+i+qBGwLVBIXRqIc5FJ6/hw26CE
```

Note: technically you can encrypt different secrets here; we can't enforce that, but it's strongly discouraged.

The underlying encryption is curve25519xsalsa20poly1305, which is compatible with libnacl and libsodium. See [Encrypted Variables](index.html#encrypted-vars) for more info. This option conflicts with [`secret-environ-file`](#opt-secret-environ-file).

`secret-environ-file`[¶](#opt-secret-environ-file) Path of the file to read the secret environ from. Instead of including `secret-environ` in the container config itself, you can use a separate file where the data is contained. This is useful to keep a single set of secrets shared between multiple containers. The target file is also yaml, but it contains just a mapping of the names of the secrets to their values (or lists).
For example:

```
PASSWD1: v2:ROit92I5:82HdsExJ:Gd3ocJsr:Hp3pngQZUos5b8ioKVUx40kegM1uDsYWwsWqC1cJ1/1KmQPQQWJZe86xgl1EOIxbuLj6PUlBH8yz5qCnWp//Ofbc
PASSWD2:
- v2:h+M9Ue9x:82HdsExJ:Gd3ocJsr:/+f4ezLfKIP/mp0xdF7H6gfdM7onHWwbGFQX+M1aB+PoCNQidKyz/1yEGrwxD+i+qBGwLVBIXRqIc5FJ6/hw26CE
- v2:ROit92I5:cX9ciQzf:Gd3ocJsr:LMHBRtPFpMRRrljNnkaU6Y9JyVvEukRiDs4mitnTksNGSX5xU/zADWDwEOCOtYoelbJeyDdPhM7Q1mEOSwjeyO317Q==
```

Absolute paths here are interpreted relative to the container root, and relative paths are interpreted relative to the container config itself. Note: we currently support reading the file from the container's filesystem only; whether reading from a volume works or not is *unspecified* at the moment. This option conflicts with [`secret-environ`](#opt-secret-environ).

`workdir`[¶](#opt-workdir) The working directory for the target process. Default is `/`. The working directory must be absolute.

`resolv-conf`[¶](#opt-resolv-conf)

> Parameters of the `/etc/resolv.conf` file to generate. The default configuration is:
> ```
> resolv-conf:
>   mount: nil  # which basically means "auto"
>   copy-from-host: true
> ```
> This means `resolv.conf` from the host where lithos is running is copied to the "state" directory of the container. Then, if `/etc/resolv.conf` in the container is a file (and not a symlink), the resolv conf is mounted over `/etc/resolv.conf`. More options are expected to be added later.
> Changed in version 0.15.0: `mount` option added. Previously, to make use of `resolv.conf` you had to symlink `ln -s /state/resolv.conf /etc/resolv.conf` in the container's image. Another change is that `copy-from-host` copies the file specified in the sandbox's `resolv.conf`, which defaults to `/etc/resolv.conf` but may be different.

Parameters:

copy-from-host (default `true`) Copy the `resolv.conf` file from the host machine.
Note: even if `copy-from-host` is `true`, [`additional-hosts`](index.html#opt-additional-hosts) from the sandbox config still works, which may lead to duplicate or conflicting entries if some names are specified in both places.

Changed in version v0.11.0: The parameter used to be `false` by default, because we were thinking about better (perceived) isolation.

mount (default `nil`, which means "auto") Mount the copied `resolv.conf` file over `/etc/resolv.conf`. nil enables mounting if `/etc/resolv.conf` is present in the container and is a file (not a symlink), and also `copy-from-host` is true

New in version 0.15.0.

`hosts-file`[¶](#opt-hosts-file)

> Parameters of the `/etc/hosts` file to generate. The default configuration is:
> ```
> hosts-file:
>   mount: nil  # which basically means "auto"
>   localhost: true
>   public-hostname: true
>   copy-from-host: false
> ```
> Changed in version 0.15.0: `mount` option added. Previously, to make use of the generated `hosts` file you had to symlink it from the state directory in the container's image. Another change is that `copy-from-host` copies the file specified in the sandbox's `hosts-file`, which defaults to `/etc/hosts` but may be different.

Parameters:

copy-from-host (default `true`) Copy the hosts file from the host machine. Note: even if `copy-from-host` is `true`, [`additional-hosts`](index.html#opt-additional-hosts) from the sandbox config still works, which may lead to duplicate or conflicting entries if some names are specified in both places.

Changed in version v0.11.0: The parameter used to be `false` by default, because we were thinking about better (perceived) isolation, and also because the hostname on Ubuntu doesn't resolve to the real IP of the host. But we found the occasions where it matters to be quite rare in practice, and using the `hosts-file` as well as `resolv.conf` from the host system to be the most expected and intuitive behavior.

mount (default `nil`, which means "auto") Mount the produced `hosts` file over `/etc/hosts`.
nil enables mounting if `/etc/hosts` is present in the container and is a file (not a symlink). A value of `true` fails if `/etc/hosts` is not a file. A value of `false` leaves `/etc/hosts` intact.

New in version 0.15.0.

localhost (default is true when `copy-from-host` is false) A boolean which defines whether to add a `127.0.0.1 localhost` record to `hosts`

public-hostname (default is true when `copy-from-host` is false) Add to the `hosts` file the result of the `gethostname` system call along with the ip address that the name resolves to.

`uid-map, gid-map`[¶](#opt-uid-map,gid-map) The list of mappings for uids (gids) in the user namespace of the container. If they are not specified, the user namespace is not used. This setting allows running processes with `uid` zero without the risk of being `root` on the host system. Here is an example of maps:

```
uid-map:
- {inside: 0, outside: 1000, count: 1}
- {inside: 1, outside: 1, count: 1}
gid-map:
- {inside: 0, outside: 100, count: 1}
```

Note: Currently you may have a uid-map either in a sandbox or in a container config, not both.

`stdout-stderr-file`[¶](#opt-stdout-stderr-file) This redirects both stdout and stderr to a file. The path is opened inside the container, so it must reside on one of the mounted writable [Volumes](index.html#volumes). Probably you want a [`Persistent`](index.html#volume-Persistent) volume. While it can be on [`Tmpfs`](index.html#volume-Tmpfs) or [`Statedir`](index.html#volume-Statedir), the applicability of such a setup is very limited. Usually the log is put into the directory specified by [`stdio-log-dir`](index.html#opt-stdio-log-dir).

`interactive`[¶](#opt-interactive) (default `false`) Useful only for containers of kind `Command`. If `true`, lithos_cmd doesn't clobber stdin and doesn't redirect stdout and stderr to a log file, effectively allowing the command to be used interactively or as part of a pipeline.
Note: for certain use cases, like pipelines, it might be better to use fifos (see `man mkfifo`) and a `Daemon` instead, because daemons may be restarted on death or for a software upgrade, while a `Command` is not supervised by lithos.

New in version 0.6.3.

Changed in version ≥0.5: Commands were always interactive

`restart-process-only`[¶](#opt-restart-process-only) (default `false`) If true, when restarting a process (i.e. in case the process died or was killed), lithos restarts just the failed process. This means the container will not be recreated, volumes will not be remounted, tmpfs will not be cleaned and some daemon processes may be left running. By default `lithos_knot`, which is pid 1 in the container, exits when the process dies. This means all other processes will die on the `KILL` signal, and the container will be removed and created again. It's a little bit slower, but a safer default: it leaves no hanging daemons, no orphan files in the state dir and no tmpfs garbage.

`volumes`[¶](#opt-volumes) The mapping of mountpoint to volume definition. See [Volumes](index.html#volumes) for more info

`tcp-ports`[¶](#opt-tcp-ports) Binds an address and provides a file descriptor to the child process. All the children receive a dup of the same file descriptor, so they may all do `accept()` simultaneously. The configuration looks like:

```
tcp-ports:
  7777:
    fd: 3
    host: 0.0.0.0
    listen-backlog: 128
    reuse-addr: true
    reuse-port: false
```

All the fields except `fd` are optional.

Programs may require the listening file descriptor number to be passed by some means (usually the environment). For example, to run nginx with the port bound (so you don't need to start it as root) you need:

```
tcp-ports:
  80:
    fd: 3
    set-non-block: true
environ:
  NGINX: "3;"
```

To run gunicorn you may want:

```
tcp-ports:
  80:
    fd: 3
environ:
  GUNICORN_FD: "3"
```

More examples are in [Handling TCP Ports](index.html#tcp-ports-tips)

Parameters:

*key* TCP port number.
Warning:

* The parameters (except `fd`) do not change after the socket is bound, even if the configuration changes
* You can't bind the same port with different hostnames in a **single process** (previously there was a global limit of a single entry per port for the whole lithos master; currently this is limited just because `tcp-ports` is a mapping)

The port parameter should be unique amongst all containers. But sharing a port does work, which is useful if you are doing a smooth software upgrade (i.e. you have a few old processes and a few new processes running, both sharing the same port/file descriptor). *Running them on a single port is not the best practice for smooth software upgrades, but that topic is out of scope of this documentation.*

fd *Required*. File descriptor number

host (default is `0.0.0.0`, meaning all addresses) Host to bind to. It must be an IP address; a hostname is not supported.

listen-backlog (default `128`) The value to pass to the listen() system call. The value is capped by `net.core.somaxconn`

reuse-addr (default `true`) Sets the `SO_REUSEADDR` socket option

reuse-port (default `false`) If set to `true`, this changes the behavior of lithos with respect to the socket. In the default case, lithos binds the socket as quickly as possible and passes it to each child on start. When this is set to `true`, lithos creates a separate socket and calls bind for each process start. This has two consequences:

* The socket is not bound when no processes are started (i.e. they are failing)
* Each process gets a separate in-kernel queue of connections to accept

This should be set to `true` only on very high-performance servers that experience an asymmetric workload in the default case.

set-non-block (default `false`) Sets the socket into non-blocking mode. This is usually done by the application itself, but some of them (especially ones that don't expect the socket to be created by an external utility, e.g. nginx) don't do it themselves.
external (default `false`) If set to `true`, listen on the port in the external network (the host network of the system, not the bridged network). This is only effective if [`bridged-network`](index.html#opt-bridged-network) is enabled for the container.

Changed in version 0.17.0: Previously we only allowed external ports to be declared in the lithos config. It was expected that a container in a bridged network could listen on the port itself. But it turned out file descriptors are still convenient for some use cases even inside a bridge.

`metadata`[¶](#opt-metadata) (optional) Allows adding arbitrary metadata to a lithos configuration file. Lithos does not use and does not validate this data in any way (except that it must be valid YAML). The metadata can be used by other tools that inspect lithos configs and extract data from them. In particular, we use metadata for our deployment tools (to keep configuration files more consolidated instead of keeping them in small fragments).

`normal-exit-codes`[¶](#opt-normal-exit-codes) (optional) A list of exit codes which are considered normal for process death. This currently only improves the `failures` metric. See [Determining Failure](index.html#failures). Note: by default even a `0` exit code is considered an error for daemons, while for commands (`lithos_cmd`) `0` is considered successful. This setting is intended for daemons which may voluntarily exit for some reason (soft memory limit, version upgrade, configuration reload). It's not recommended to add 0 or 1 to the list, as some commands treat them pretty arbitrarily. For example, 0 is the exit code of most utilities run with `--help`, so such a mistake would not be detected, and 1 is used for arbitrary crashes in scripting languages. So a good idea is to define some specific code in the range 8..120 to signal a successful exit.

Metrics[¶](#metrics)
---

Lithos submits metrics via a [cantal-compatible protocol](http://cantal.readthedocs.io/en/latest/mmap.html).
All metrics usually belong to lithos's cgroup, so for example in graphite you can find them under `cantal.<cluster-name>.<hostname>.lithos.groups.*`. Or you can find them without this prefix in `http://hostname:22682/local/process_metrics`. In the following description we skip the common prefix and only show metric names. Metrics of the lithos master process: * `master.restarts` (counter) number of restarts of the master process. Usually a restart corresponds to a configuration reload via `lithos_switch` or any other way. * `master.sandboxes` (gauge) number of sandboxes configured * `master.containers` (gauge) number of containers (processes) configured * `master.queue` (gauge) length of the internal queue; the queue consists of processes to run and hanging processes to kill Per-process metrics: * `processes.<sandbox_name>.<process_name>.started` – (counter) number of times the process has been started * `processes.<sandbox_name>.<process_name>.deaths` – (counter) number of times the process has exited for any reason * `processes.<sandbox_name>.<process_name>.failures` – (counter) number of times the process has exited for a failure reason, for whatever reason lithos thinks it was a failure. 
See [Determining Failure](#determining-failure) * `processes.<sandbox_name>.<process_name>.running` – (gauge) number of processes that are currently running (started but not yet found to be exited) Global metrics for all sandboxes and containers: * `containers.started` – (counter) same as for `processes.*` but for all containers * `containers.deaths` – (counter) see above * `containers.failures` – (counter) see above * `containers.running` – (gauge) see above * `containers.unknown` – (gauge) number of child processes of lithos that are found to be running but do not belong to any of the process groups known to lithos (they are being killed, and they are probably from deleted configs) ### Determining Failure[¶](#determining-failure) Currently there are two kinds of process death that are considered non-failures: 1. Processes that have been sent the `SIGTERM` signal (with any exit status), or ones dead on the `SIGTERM` signal, are considered non-failed. 2. Processes exited with one of the exit codes specified in [`normal-exit-codes`](index.html#opt-normal-exit-codes) Volumes[¶](#volumes) --- Volumes in lithos are just a kind of mount point. The mount points are not created by `lithos` itself, so they must exist either in the original image or on the respective volume (if the mount point is inside a volume). There are the following kinds of volumes: `Readonly`[¶](#volume-Readonly) Example: `!Readonly "/path/to/dir"` A **read-only** bind mount of some dir. The directory is mounted with `ro,nosuid,noexec,nodev` `Persistent`[¶](#volume-Persistent) Example: `!Persistent { path: /path/to/dir, mkdir: false, mode: 0o700, user: 0, group: 0 }` A **writable** bind mount. The directory is mounted with `rw,nosuid,noexec,nodev`. If you need the directory to be created, set `mkdir` to `true`. You also probably need to customize either the user (to the one running the command, e.g. same as `user-id` of the container) or the mode (to something like `0o1777`, i.e. sticky, writable by anyone). 
`Statedir`[¶](#volume-Statedir) Example: `!Statedir { path: /, mode: 0o700, user: 0, group: 0 }` Mounts a subdir of the container's own state directory. This directory is used to store the generated `resolv.conf` and `hosts` files, as well as for other kinds of small state which is dropped when the container dies. If you mount something other than `/` you should customize the mode or owner similarly to `!Persistent` volumes (except that you can't create a statedir subdirectory by hand, because the statedir is created for each process at start) `Tmpfs`[¶](#volume-Tmpfs) Example: `!Tmpfs { size: 100Mi, mode: 0o766 }` A tmpfs mount point. Currently only the `size` and `mode` options are supported. Note that the syntax of size and mode is the generic syntax for numbers in our configuration library, not the syntax supported by the kernel. Tips and Conventions[¶](#tips-and-conventions) --- This document describes how to prepare images to run by lithos. You don't have to obey all the rules, and you are free to create your own rules within your organization. But hopefully this will help you a lot when you're confused. Contents: ### Handling TCP Ports[¶](#handing-tcp-ports) There are a couple of reasons you want `lithos` to open a tcp port on behalf of your application: 1. Running multiple instances of the application, each sharing the same port 2. Smooth upgrade of your app, where some of the processes run the old version of the software and some run the new one 3. Growing and shrinking the number of processes without any application code to support that 4. Using a port < 1024 without starting the process as root 5. Each process is in a separate cgroup, so monitoring tools can have fine-grained metrics over them Note While you could use the `SO_REUSEPORT` socket option for solving #1, it's not a universally available option. Forking inside the application doesn't work as well as running each process by lithos, because in the former case your memory limits apply to all the processes rather than being fine-grained. 
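As a reference, here is a hypothetical `tcp-ports` entry combining the parameters described earlier; the port number and flag values are made up for illustration, not taken from a real deployment:

```yaml
# Hypothetical process config fragment: all values are illustrative.
tcp-ports:
  8443:                  # port number (must be unique amongst containers)
    fd: 3                # file descriptor number the application expects
    host: 0.0.0.0        # bind on all addresses (IP only, hostnames unsupported)
    listen-backlog: 512  # capped by net.core.somaxconn
    reuse-addr: true     # sets SO_REUSEADDR
    reuse-port: false    # true = separate per-process sockets and queues
    set-non-block: false # true for apps (e.g. nginx) that expect it
    external: false      # only meaningful with bridged-network
```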
Following sections describe how to configure various software stacks and frameworks to use tcp-ports opened by lithos. It's possible to run any software that supports [systemd socket activation](http://0pointer.de/blog/projects/socket-activation.html) with [`tcp-ports`](index.html#opt-tcp-ports) of lithos, using a config similar to this:

```
environ:
  LISTEN_FDS: 1   # application receives single file descriptor
  # ... more env vars ...
tcp-ports:
  8080:           # port number
    fd: 3         # SD_LISTEN_FDS_START, first fd number systemd passes
    host: 0.0.0.0
    listen-backlog: 128  # application may change this on its own
    reuse-addr: true
# ... other process settings ...
```

#### Python3 + Asyncio[¶](#python3-asyncio)

For development purposes you probably have code like this:

```
async def init(app):
    ...
    handler = app.make_handler()
    srv = await loop.create_server(handler, host, port)
```

To use tcp-ports you should check the environment variable and pass the socket if it exists:

```
import os
import socket

async def init(app):
    ...
    handler = app.make_handler()
    if os.environ.get("LISTEN_FDS") == "1":
        srv = await loop.create_server(handler,
            sock=socket.fromfd(3, socket.AF_INET, socket.SOCK_STREAM))
    else:
        srv = await loop.create_server(handler, host, port)
```

This assumes you have configured `environ` and `tcp-ports` as [described above](#tp-systemd).

#### Python + Werkzeug (Flask)[¶](#python-werkzeug-flask)

Werkzeug supports this functionality out of the box, just configure the environment:

```
environ:
  WERKZEUG_SERVER_FD: 3
  # ... more env vars ...
tcp-ports:
  8080:           # port number
    fd: 3         # this corresponds to WERKZEUG_SERVER_FD
    host: 0.0.0.0
    listen-backlog: 128  # default in werkzeug
    reuse-addr: true
# ... other process settings ...
```

Or you can pass `fd=3` to `werkzeug.serving.BaseWSGIServer`. Another hint: **do not use processes != 1**. Better to use lithos's `instances` to control the number of processes. 
#### Python + Twisted[¶](#python-twisted)

Old code that looks like:

```
reactor.listenTCP(PORT, factory)
```

needs to be changed into something like this:

```
import os

if os.environ.get("LISTEN_FDS") == "1":
    import socket
    sock = socket.fromfd(3, socket.AF_INET, socket.SOCK_STREAM)
    sock.setblocking(False)
    reactor.adoptStreamPort(sock.fileno(), socket.AF_INET, factory)
    sock.close()
    os.close(3)
else:
    reactor.listenTCP(PORT, factory)
```

#### Golang + net/http[¶](#golang-net-http)

Previous code like this:

```
import "net/http"

srv := &http.Server{ .. }
if err := srv.ListenAndServe(); err != nil {
    log.Fatalf("Error listening")
}
```

should be wrapped into something like this:

```
import "os"
import "net"
import "net/http"

srv := &http.Server{ .. }
if os.Getenv("LISTEN_FDS") == "1" {
    listener, err := net.FileListener(os.NewFile(3, "fd 3"))
    if err != nil {
        log.Fatalf("Can't open fd 3")
    }
    if err := srv.Serve(listener); err != nil {
        log.Fatalf("Error listening on fd 3")
    }
} else {
    if err := srv.ListenAndServe(); err != nil {
        log.Fatalf("Error listening")
    }
}
```

#### Node.js with Express Framework[¶](#node-js-with-express-framework)

The normal way to run express:

```
let port = 3000
app.listen(port, function() {
    console.log('server is listening on', this.address().port);
})
```

turns into the following code:

```
let port = 3000;
if (process.env.LISTEN_FDS && parseInt(process.env.LISTEN_FDS, 10) === 1) {
    port = {fd: 3};
}
app.listen(port, function() {
    console.log('server is listening on', this.address().port);
})
```

### Deploying Vagga Containers[¶](#deploying-vagga-containers)

[Vagga](http://vagga.readthedocs.io/en/latest/) is a common way to develop applications for later deployment using lithos; it is also a common way to prepare a container image for use with lithos. Usually [vagga](http://vagga.readthedocs.io/en/latest/) does its best to make containers as close to production as possible. 
Still, vagga makes trade-offs to be easier to use for development, so there are a few small quirks that you may or may not notice when deploying. Here is a boring list; later sections describe some things in more detail: 1. Unsurprisingly, the `/work` directory is absent in the production container. Usually this means three things: 1. Your sources must be copied/installed into the container (e.g. using [Copy](http://vagga.readthedocs.io/en/latest/build_steps.html?highlight=Copy#step-Copy)) 2. There is no current working directory; unless you specify it explicitly, the current directory is the root `/` 3. You can't **write** into the working directory or `/work/somewhere` 2. All directories are read-only by default. Basic consequences are: 1. There is no writable `/tmp` unless you specify one. This also means there is no default for the temporary dir; you have to choose whether this is an in-memory [`Tmpfs`](index.html#volume-Tmpfs) or an on-disk [`Persistent`](index.html#volume-Persistent). 2. There is no `/dev/shm` by default. This is just another `tmpfs` volume in every system nowadays, so just measure how much you need and mount a [`Tmpfs`](index.html#volume-Tmpfs). Be aware that each container, even on the same machine, gets its own instance. 3. We can't even overwrite `/etc/resolv.conf` and `/etc/hosts`, see below. 3. There are a few environment variables that vagga sets in the container by default: 1. `TERM` – is propagated from the external environment. For daemons it should never matter. For [`interactive`](index.html#opt-interactive) commands it may matter. 2. `PATH` – in vagga is set to a hard-coded value. There is no default value in lithos. If your program runs any binaries (and usually lots of them do, even if you don't expect it), you want to set `PATH`. 3. Various `*_proxy` variables are propagated. They are almost never useful for daemons, but are listed here for completeness. 4. In vagga we don't update `/etc/resolv.conf` and `/etc/hosts`, but in lithos we have such a mechanism. 
The mechanism is as follows: 1. In the container you make the symlinks `/etc/resolv.conf -> /state/resolv.conf`, `/etc/hosts -> /state/hosts` 2. The `/state` directory is mounted as [`Statedir`](index.html#volume-Statedir) 3. Lithos automatically puts `resolv.conf` and `hosts` into the statedir when the container is created (respecting [`resolv-conf`](index.html#opt-resolv-conf) and [`hosts-file`](index.html#opt-hosts-file)) 4. Then the files can be updated by updating files in `/var/run/lithos/state/<sandbox>/<process>/` 5. Because by default neither vagga nor lithos have network isolation, some things that are accessible in the dev system may not be accessible in the server system. This includes both services on `localhost` and ones in the **abstract unix socket namespace**. Known examples are: 1. Dbus: for example if `DBUS_SESSION_BUS_ADDRESS` starts with `unix:abstract=` 2. Xorg: X Window System, the thing you configure with `DISPLAY` 3. nscd: name service cache daemon (this thing may resolve DNS names even if the TCP/IP network is absent for your container) 4. systemd-resolved: listens at `127.0.0.53:53` as well as on **dbus** ### Storing Secrets[¶](#storing-secrets) There are currently two ways to provide "secrets" for containers: 1. Encrypted values inserted into environment variables 2. Mounting a directory from the host system * [Encrypted Variables](#encrypted-variables) + [Guide](#guide) + [Anatomy of the Encrypted Key](#ananomy-of-the-encrypted-key) + [Security Notes](#security-notes) #### [Encrypted Variables](#id1)[¶](#encrypted-variables) ##### [Guide](#id2)[¶](#guide) Note: this guide covers both server setup and configuring specific containers. Usually setup (steps 1-3) is done once, and adding keys to a container (steps 4-5) is a more regular job. 1. Create a private key on the server: ``` ssh-keygen -f /etc/lithos/keys/main.key -t ed25519 -P "" ``` You can create a shared key or a per-project key, depending on your convenience. 
Synchronize the key across all the servers in the same cluster. This key should **never leave** that set of servers. 2. Add the reference to the key into your [Sandbox Config](index.html#sandbox-config) (e.g. `/etc/lithos/sandboxes/myapp.yaml`):

> ```
> secrets-private-key: /etc/lithos/keys/main.key
> secrets-namespaces: [myapp]
> ```
> You can omit `secrets-namespaces` if you're the sole owner of this
> server/cluster (it allows only an empty string as a namespace). You can also
> make per-process namespaces ([`extra-secrets-namespaces`](index.html#popt-extra-secrets-namespaces)).

3. Publish your public key `/etc/lithos/keys/main.key.pub` for your users. *(Cryptography guarantees that the system is safe even if this key is shared publicly, i.e. committed into a git repo, or accessible over a non-authorized web URL)* 4. Your users may now fetch the public key and encrypt their secrets with `lithos_crypt` (get a static binary on the [releases page](https://github.com/tailhook/lithos/releases)):

```
$ lithos_crypt encrypt -k main.key.pub -n myapp -d the_secret
v2:ROit92I5:KqWSX0BY:8MtOoWUX:nHcVCIWZG2hivi0rKa8MRnAIbt7TDTHB8YC8bBnac3IGMzk57R/HsBhxeqCdC7Ljyf8pszBBjIGD33f6lwBM7Q==
```

The important thing here is to encrypt with the right key **and** the right namespace. 5. Then put the secret into your [Container Configuration](index.html#container-config):

```
executable: /usr/bin/python3
environ:
  DATABASE_URL: postgresql://myappuser@db.example.com/myappdb
secret-environ:
  DATABASE_PASSWORD: v2:ROit92I5:KqWSX0BY:8MtOoWUX:nHcVCIWZG2hivi0rKa8MRnAIbt7TDTHB8YC8bBnac3IGMzk57R/HsBhxeqCdC7Ljyf8pszBBjIGD33f6lwBM7Q==
```

That's it. To add a new password to the same or another container, repeat steps 4-5. This scheme is specifically designed to be safe to store in a (public) git repository by using secure encryption. ##### [Anatomy of the Encrypted Key](#id3)[¶](#ananomy-of-the-encrypted-key) As you might see, there is a pattern in an encrypted key. 
Here is how it looks:

```
v2:ROit92I5:KqWSX0BY:8MtOoWUX:nHcVCIWZG2hivi0rKa8MRnAIbt7TDTHB8YC8bBnac3IGM‥wBM7Q==
                              ^-- encrypted "namespace:actual_secret"
                     ^^^^^^^^-- short hash of the password itself
            ^^^^^^^^-- short hash of the secrets namespace
   ^^^^^^^^-- short hash of the public key used for encryption
^^-- encryption version
```

Note the following things: 1. Only version `v2` is supported (`v1` was broken and dropped in 0.16.0) 2. The short hash is a base64-encoded 6-byte blake2b hash of the value. You can check it using the `b2sum` utility from a recent version of `coreutils`:

```
$ echo -n "the_secret" | b2sum -l48 | xxd -r -p | base64
8MtOoWUX
```

(Note: we need `xxd` because `b2sum` outputs hexadecimal bytes; also note the `-n` in the `echo` command, as omitting it is a common mistake: without the option `echo` outputs a newline at the end). 3. The encrypted payload contains a `<namespace>:` prefix. While we could check just the hash, the prefix allows providing better error messages. The underlying encryption is curve25519xsalsa20poly1305, which is compatible with libnacl and libsodium. Let's see how it might be helpful; here is a list of keys:

```
1  v2:h+M9Ue9x:82HdsExJ:Gd3ocJsr:/+f4ezLfKIP/mp0xdF7H6gfdM7onHWwbGFQX+M1aB+PoCNQidKyz/1yEGrwxD+i+qBGwLVBIXRqIc5FJ6/hw26CE
2  v2:ROit92I5:cX9ciQzf:Gd3ocJsr:LMHBRtPFpMRRrljNnkaU6Y9JyVvEukRiDs4mitnTksNGSX5xU/zADWDwEOCOtYoelbJeyDdPhM7Q1mEOSwjeyO317Q==
3  v2:ROit92I5:82HdsExJ:Gd3ocJsr:Hp3pngQZUos5b8ioKVUx40kegM1uDsYWwsWqC1cJ1/1KmQPQQWJZe86xgl1EOIxbuLj6PUlBH8yz5qCnWp//Ofbc
```

You can see that: 1. All of them have the same secret (3rd column) 2. The second and third ones have the same encryption key (1st column) 3. The first and third ones have the same namespace (2nd column) This is useful for versioning and debugging problems. You can't deduce the actual password from this data anyway, unless your password is very simple (dictionary attack) or you already know it. 
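The short-hash pipeline shown above can also be reproduced without shell tools. Here is a small Python sketch (standard library only; the function name is ours, not part of lithos) that computes the same 6-byte blake2b hash, base64-encoded:

```python
import base64
import hashlib

def short_hash(value: bytes) -> str:
    """Equivalent of `b2sum -l48 | xxd -r -p | base64`: an unkeyed
    6-byte (48-bit) BLAKE2b digest of the value, base64-encoded."""
    digest = hashlib.blake2b(value, digest_size=6).digest()
    return base64.b64encode(digest).decode("ascii")

# The password column of the example key above:
print(short_hash(b"the_secret"))  # 8MtOoWUX
```

This is handy for checking which column of an encrypted key corresponds to which secret or namespace without leaving your deployment scripts.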
Note: even if all three {encryption key, namespace, secret} match, the last part of the data (the encrypted payload) will be different each time you encode the same value. All of the outputs are equally valid. ##### [Security Notes](#id4)[¶](#security-notes) 1. Namespaces allow dividing security zones between many projects without the nightmare of generating, syncing and managing secret keys per project. 2. Namespaces match exactly; they aren't prefixes or any other kind of pattern 3. If you rely on `lithos_switch` to switch containers securely (with an untrusted [Process Config](index.html#process-config)), you need to use a different private key per project (as otherwise `extra-secrets-namespaces` can be used to steal keys) Frequently Asked Questions[¶](#frequently-asked-questions) --- ### How do I Start/Stop/Restart Processes Running By Lithos?[¶](#how-do-i-start-stop-restart-processes-running-by-lithos) Short answer: you can't. Long answer: lithos keeps running all the processes that it's configured to run. So: * To stop a process: remove it from the config * To start a process: add it to the config. Once it's added, it will be restarted indefinitely. Sometimes you may want to adjust [`restart-timeout`](index.html#opt-restart-timeout) * To restart a process: well, kill it (with whatever signal you want). The ergonomics of these operations are intentionally not very pleasing. This is because you are supposed to have a higher-level tool to manage lithos. At the very least you want to use [ansible](http://ansible.com/), [chef](http://chef.io/) or [puppet](http://puppetlabs.com/). ### Why is /run/lithos/mnt empty?[¶](#why-run-lithos-mnt-is-empty) This is a mount point. It's never mounted in the host system namespace (and it's not visible as a mount point in the guest namespace either). The containerization works as follows: 1. The mount namespace is *unshared* (which means no future mounts are visible in the host system) 2. The root filesystem image is mounted to `/run/lithos/mnt` 3. 
Other things are set up in the root file system (`/dev`, `/etc/hosts`, whatever) 4. Pivot root is done, which means that `/run/lithos/mnt` is now visible as the root dir, i.e. just plain `/` (you can think of it as good old `chroot`) All this means that if you get an error like this:

```
[2015-11-17T10:29:40Z][ERROR] Fatal error: Can't mount pseudofs /run/lithos/mnt/dev/pts (newinstance, options: devpts): No such file or directory (os error 2)
```

Or like this:

```
[2015-10-19T15:04:48Z][ERROR] Fatal error: Can't mount bind /whereever/external/storage/is to /run/lithos/mnt/storage: No such file or directory (os error 2)
```

It means that lithos has failed at step #3, i.e. it failed to mount the directory in the guest container file system (`/dev/pts` and `/storage` respectively) ### How to Organize Logging?[¶](#how-to-organize-logging) There is a variety of ways. Here are some hints
#### Syslog[¶](#syslog) You may accept logs by UDP. Since lithos has no network namespacing (yet), UDP syslog just works. To set up syslog using unix sockets, you may configure the syslog daemon on the host system to listen on a socket inside the container's `/dev`. For example, here is how to [configure rsyslog](http://www.rsyslog.com/doc/v8-stable/configuration/modules/imuxsock.html) for the default lithos config:

```
module(load="imuxsock") # needs to be done just once
input(type="imuxsock" Socket="/var/lib/lithos/dev/log")
```

Alternatively (but *not* recommended) you may configure [`devfs-dir`](index.html#opt-devfs-dir): ``` devfs-dir: /dev ``` #### Stdout/Stderr[¶](#stdout-stderr) It's recommended to use syslog or a similar solution for logs. But there are still reasons to write logs to a file: 1. You may want to log early start errors (when you have not yet initialized the logging subsystem of the application) 2. You have a single server and don't want additional daemons Starting with version `v0.5.0` lithos has a per-sandbox log file which contains all the stdout/stderr output of the processes. By default it's in `/var/log/lithos/stderr/<sandbox_name>.log`. See [`stdio-log-dir`](index.html#opt-stdio-log-dir) for more info. ### How to Update Configs?[¶](#how-to-update-configs) The best way to update the config of *processes* is to put it into a temporary file and run `lithos_switch` (see `lithos_switch --help` for more info). This is the main kind of config you update multiple times a day. In case you've already put the config in place, or for *master* and *sandbox* configs, you should first run `lithos_check` to verify that all configs are valid. Then just send the `QUIT` signal to the `lithos_tree` process. Usually the following command line is enough for manual operation: ``` pkill -QUIT lithos_tree ``` But for automation it's better to use `lithos_switch`. Note By sending the `QUIT` signal we effectively emulate a crash of the supervisor daemon. 
It's designed in a way that allows it to survive a crash and keep all fresh child processes alive. After an **in-place restart** it checks the configuration of the child processes, kills outdated ones and executes new configs. ### How to Run Commands in a Container?[¶](#how-to-run-commands-in-container) There are two common ways: 1. If you have the container already running, use `nsenter` 2. Prepare a special command for `lithos_cmd` #### Running `nsenter`[¶](#running-nsenter) This way only works if you have a running container. It's hard to get it working if your process crashes too fast after start. You must also have a working shell in the container; we use `/bin/sh` in the examples. You can use `nsenter` to join most namespaces, except the user namespace. For example, if you know the pid, the following command allows you to run a shell in the container and investigate files: ``` nsenter -m -p --target 12345 /bin/sh ``` If you don't know the PID, you may easily discover it with `lithos_ps` or automate it with `pgrep`:

```
nsenter -m -p \
    --target=$(pgrep -f 'lithos_knot --name sandbox-name/process-name.0') \
    /bin/sh
```

Warning This method is very insecure. It runs the command in the original user namespace with the host root user. While basic sandboxing (i.e. the filesystem root) is enabled by -m and -p, the program that you're trying to run (i.e. the shell itself) can still escape that sandbox. Because we do mount namespaces and user namespaces in different stages of container initialization, there is currently no way to join both the user namespace and the mount namespace. (You can join just the user namespace by running `nsenter -U --target=12345`, where `12345` is the pid of the process inside the container, not of lithos_knot. But this is probably useless) #### Running `lithos_cmd`[¶](#running-lithos-cmd) In some cases you may want to have a special container with a shell to run with `lithos_cmd`. This is just a normal lithos container configuration with `kind: Command` and `interactive: true`, and a shell specified as the command. 
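For illustration, a minimal sketch of such a config might look like the fragment below. The executable path is an assumption; in practice you would copy your daemon's container config (image, volumes, environment) and change only these keys:

```yaml
# Hypothetical shell.yaml sketch: same image/volume settings as the daemon,
# with kind and interactive changed and a shell as the command.
kind: Command
interactive: true
executable: /bin/sh   # assumed path; must actually exist in the image
```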
So you run your `shell.yaml` with: ``` lithos_cmd sandbox-name shell ``` There are three important points about this method: 1. If you're trying to investigate a problem with a daemon config, you copy the daemon config into this interactive command. It's your job to keep both configs in sync. This config must also be exposed in the *processes* config just like any other. 2. It will run another (although identical) container on each run. You will not see the processes running as daemons, or other shells, in `ps` or similar commands. 3. You must have a shell in the container to make use of it. Sometimes you just don't have one. But you may use any interactive interpreter, like `python`, or even non-interactive commands. ### How to Find Files Mounted in a Container?[¶](#how-to-find-files-mounted-in-container) Linux provides many great tools to introspect a running container. Here is a short overview: 1. `/proc/<pid>/root` is a directory which you can `cd` into and look at files 2. `/proc/<pid>/mountinfo` is a mapping between host system directories and the ones in the container 3. And you can [join the container's namespace](#running-commands) #### Example 1[¶](#example-1) Let's try to explore some common tasks. First, let's find the container's pid:

```
$ pgrep -f 'lithos_knot --name sandbox-name/process-name.0'
12345
```

Now we can find out the OS release used to build the container:

```
$ sudo cat /proc/12345/root/etc/alpine-release
3.4.6
```

Warning There is a caveat. Symlinks that point to absolute paths are resolved differently than in the container. So ensure that you're not accessing a symlink (and that no intermediate component is a symlink). #### Example 2[¶](#example-2) Now, let's find out which volume is mounted as `/app/data` inside the container. 
If you have a reasonably recent `findmnt` it's easy:

```
$ findmnt -N 12345 /app/data
TARGET     SOURCE                                         FSTYPE OPTIONS
/app/data  /dev/mapper/Disk-main[/all-storages/myproject] ext4   rw,noatime,discard,data=ordered
```

Here we can see that `/app/data` in the container is the LVM partition `main` in group `Disk`, with the path `all-storages/myproject` relative to the root of the partition. You can find out where this volume is mounted on the host system by inspecting the output of the `mount` or `findmnt` commands. The manual way is to look at `/proc/<pid>/mountinfo` (stripped output):

```
$ cat /proc/12345/mountinfo
347 107 9:1 /all-images/sandbox-name/myproject.c17cb162 / ro,relatime - ext4 /dev/md1 rw,data=ordered
356 347 0:267 / /tmp rw,nosuid,nodev,relatime - tmpfs tmpfs rw,size=102400k
360 347 9:1 /all-storages/myproject /app/data rw,relatime - ext4 /dev/mapper/Disk-main rw,data=ordered
```

Here you can observe the same info. The important parts are: * The fifth column is the mount point (but be careful: in complex cases there might be multiple overlapping mount points); * The fourth column is the path relative to the volume root; * And the 9th column (next to the last) is the volume name. Let's find out where it is on the host system:

```
$ mount | grep Disk-main
/dev/mapper/Disk-main on /srv type ext4 (rw,noatime,discard,data=ordered)
```

That's it; now you can look at `/srv/all-storages/myproject` to find the files seen by the application. Lithos Changes By Release[¶](#lithos-changes-by-release) --- ### v0.18.4[¶](#v0-18-4) * Bugfix: only send SIGTERM to the process once when upgrading or stopping it (this prevents certain issues with the applications themselves) * Bugfix: don't reset the kill timeout on SIGQUIT of lithos_tree * Bugfix: correctly wait for the kill timeout for retired children (ones no longer in the config) ### v0.18.3[¶](#v0-18-3) * Bugfix: it looks like reading through `/proc/` is inherently racy, i.e. some process may be skipped. This commit makes the walk faster and traverses the directory twice. 
A more elaborate fix will be implemented in the future. ### v0.18.2[¶](#v0-18-2) * Feature: add `secret-environ-file` which can be used to offload secrets to a separate (perhaps shared) file ### v0.18.1[¶](#v0-18-1) * Feature: add `set-non-block` option to tcp-ports ### v0.18.0[¶](#v0-18-0) * Breaking: we don't run `arping` after container setup by default, as it [doesn't work in certain environments](https://github.com/tailhook/lithos/issues/17). Use [`after-setup-command`](index.html#bopt-after-setup-command) instead. ### v0.17.8[¶](#v0-17-8) * Bugfix: fixes an issue with bridged networking when the host system is alpine ([#15](https://github.com/tailhook/lithos/issues/15)) ### v0.17.7[¶](#v0-17-7) * Bugfix: log the name of the process when lithos_knot fails * Bugfix: more robust parsing of process names by lithos_ps * Feature: add `@{lithos:pid}` magic variable ### v0.17.6[¶](#v0-17-6) * Bugfix: systemd protocol support fixed: LISTEN_FDNAMES and LISTEN_PID ### v0.17.5[¶](#v0-17-5) * Feature: check variable substitution with `lithos_check` even in `--check-container` (out of system) mode ### v0.17.4[¶](#v0-17-4) * Feature: Add `DottedName` [variable type](index.html#container-variables) * Feature: Add `activation` parameter to `TcpPort` variable ### v0.17.3[¶](#v0-17-3) * Bugfix: fix EADDRINUSE error when all children requiring a file descriptor were queued for restart (throttled); the bug was due to a duped socket lying in a scheduled command (where the main socket is closed to notify peers there are no listeners) ### v0.17.2[¶](#v0-17-2) * Bugfix: previously a lithos_tree process after fork but before execing lithos_knot could be recognized as an undefined child and killed. 
This race condition sometimes led to closing sockets prematurely and being unable to listen on them again ### v0.17.1[¶](#v0-17-1) * Bugfix: passing sockets as FDs in a non-bridged network was broken in v0.17.0 ### v0.17.0[¶](#v0-17-0) * Breaking: add `external` flag to [`tcp-ports`](index.html#opt-tcp-ports), which by default is `false` (previous behavior was equal to `external: true`) * Bugfix: `lithos_cmd` now returns exit code 0 if the underlying command exited successfully (was broken in 0.15.5) ### v0.16.0[¶](#v0-16-0) * Breaking: remove `v1` encryption for secrets (it was alive for a week) * Feature: add [`secrets-namespaces`](index.html#opt-secrets-namespaces) and `extra-secrets-namespaces` options to allow namespacing secrets on top of a single key * Feature: add `v2` key encryption scheme ### v0.15.6[¶](#v0-15-6) * Feature: add [`secret-environ`](index.html#opt-secret-environ) and `secrets-private-key` settings which allow passing decrypted environment variables to the application * Bugfix: when bridged network is enabled we use `arping` to update the ARP cache ### v0.15.5[¶](#v0-15-5) * Bugfix: add support for bridged-network and ip-addresses for lithos_cmd * Bugfix: initialize the loopback interface in the container when `bridged-network` is configured * Feature: allow `lithos_cmd` without `ip_addresses` (only loopback is initialized in this case) * Bugfix: return an error result from `lithos_cmd` if the inner process failed ### v0.15.4[¶](#v0-15-4) * First release that stops support of ubuntu precise and adds a repository for ubuntu bionic * Bugfix: passing a TCP port as fd < 3 didn't work before; now we allow `fd: 0` and fail gracefully on 1, 2. 
### v0.15.3[¶](#v0-15-3) * feature: Add [`default-user`](index.html#opt-default-user) and [`default-group`](index.html#opt-default-group) to simplify container config * bugfix: fix containers having symlinks at `/etc/{resolv.conf, hosts}` (broken in v0.15.0) ### v0.15.2[¶](#v0-15-2) * bugfix: containers without bridged network work again ### v0.15.1[¶](#v0-15-1) * nothing changed, fixed tests only ### v0.15.0[¶](#v0-15-0) * feature: Add [`normal-exit-codes`](index.html#opt-normal-exit-codes) setting * feature: Add [`resolv-conf`](index.html#opt-resolv-conf) and [`hosts-file`](index.html#opt-hosts-file) to sandbox config * feature: Add [`bridged-network`](index.html#opt-bridged-network) option to sandbox config * breaking: By default `/etc/hosts` and `/etc/resolv.conf` will be mounted if they are proper mount points (can be opted out of in container config) ### v0.14.3[¶](#v0-14-3) * Bugfix: when more than one variable was used, lithos was restarting the process every time (because of unstable serialization of a hashmap) ### v0.14.2[¶](#v0-14-2) * Bugfix: if `auto-clean` is different in several sandboxes looking at the same image directory, we skip cleaning the dir and print a warning * Add a timestamp to `lithos_clean` output (in `--delete-unused` mode) ### v0.14.1[¶](#v0-14-1) * Bugfix: variable substitution was broken in v0.14.0 ### v0.14.0[¶](#v0-14-0) * Sets `memory.memsw.limit_in_bytes` if it exists (usually requires `swapaccount=1` in kernel params) * Adds a warning-level message on process startup * Duplicates startup and death messages into the stderr log, so you can correlate them with application messages ### v0.13.2[¶](#v0-13-2) * Upgrades many dependencies, no significant changes or bugfixes ### v0.13.1[¶](#v0-13-1) * Adds [`auto-clean`](index.html#opt-auto-clean) setting ### v0.13.0[¶](#v0-13-0) * `/dev/pts/ptmx` is created with `ptmxmode=0666`, which makes it suitable for creating ptys by unprivileged users. 
We always used the `newinstance` option, so it should be safe enough. It also matches how `ptmx` is configured on most systems by default ### v0.12.1[¶](#v0-12-1) * Added `image-dir-levels` parameter, which allows using images in the form `xx/yy/zz` (for a value of `3`) instead of a bare name ### v0.12.0[¶](#v0-12-0) * Fixed order of `sandbox-name.process-name` in metrics * Dropped setting `cantal-appname` (it was never useful, because cantal actually uses the cgroup name, and the lithos master process actually has one) ### v0.11.0[¶](#v0-11-0) * Option `cantal-appname` added to the config * If no `CANTAL_PATH` is present in the environment we set it to some default, along with `CANTAL_APPNAME=lithos` unless `cantal-appname` is overridden. * Added default container environment variable `LITHOS_CONFIG`. It may be used to log the config name, read metadata, and for other purposes. ### v0.10.7[¶](#v0-10-7) * [Cantal](https://cantal.readthedocs.io) metrics added
Crate egui_extras === This is a crate that adds some features on top of `egui`. This crate is for experimental features, and for features that require big dependencies that do not belong in `egui`. ### Feature flags * **`all_loaders`** — Shorthand for enabling the different types of image loaders (`file`, `http`, `image`, `svg`). * **`datepicker`** — Enable the `DatePickerButton` widget. * **`file`** — Add support for loading images from `file://` URIs. * **`http`** — Add support for loading images via HTTP. * **`image`** — Add support for loading images with the `image` crate. You also need to opt in to the image formats you want to support, like so: ``` image = { version = "0.24", features = ["jpeg", "png"] } # Add the types you want support for ``` * **`puffin`** — Enable profiling with the `puffin` crate. Only enabled on native, because of the low resolution (1ms) of clocks in browsers. * **`svg`** — Support loading svg images. * **`syntect`** — Enable better syntax highlighting using `syntect`. #### Optional dependencies * **`document-features`** — Enable this when generating docs. Modules --- * syntax_highlightingSyntax highlighting for code. Structs --- * ColumnSpecifies the properties of a column, like its width range. * DatePickerButtonShows a date, and will open a date picker popup when clicked. * StripA Strip of cells which go in one direction. Each cell has a fixed size. In contrast to normal egui behavior, strip cells do *not* grow with their children! * StripBuilderBuilder for creating a new `Strip`. * TableTable struct which can construct a `TableBody`. * TableBodyThe body of a table. * TableBuilderBuilder for a `Table` with (optional) fixed header and scrolling body. * TableRowThe row of a table. Is created by `TableRow` for each created `TableBody::row` or each visible row in rows created by calling `TableBody::rows`. Enums --- * SizeSize hint for table column/strip cell. Functions --- * install_image_loadersInstalls a set of image loaders. 
Struct egui_extras::DatePickerButton === ``` pub struct DatePickerButton<'a> { /* private fields */ } ``` Shows a date, and will open a date picker popup when clicked. Implementations --- ### impl<'a> DatePickerButton<'a#### pub fn new(selection: &'a mut NaiveDate) -> Self #### pub fn id_source(self, id_source: &'a str) -> Self Add id source. Must be set if multiple date picker buttons are in the same Ui. #### pub fn combo_boxes(self, combo_boxes: bool) -> Self Show combo boxes in date picker popup. (Default: true) #### pub fn arrows(self, arrows: bool) -> Self Show arrows in date picker popup. (Default: true) #### pub fn calendar(self, calendar: bool) -> Self Show calendar in date picker popup. (Default: true) #### pub fn calendar_week(self, week: bool) -> Self Show calendar week in date picker popup. (Default: true) #### pub fn show_icon(self, show_icon: bool) -> Self Show the calendar icon on the button. (Default: true) Trait Implementations --- ### impl<'a> Widget for DatePickerButton<'a#### fn ui(self, ui: &mut Ui) -> Response Allocate space, interact, paint, and return a `Response`. Read moreAuto Trait Implementations --- ### impl<'a> RefUnwindSafe for DatePickerButton<'a### impl<'a> Send for DatePickerButton<'a### impl<'a> Sync for DatePickerButton<'a### impl<'a> Unpin for DatePickerButton<'a### impl<'a> !UnwindSafe for DatePickerButton<'aBlanket Implementations --- ### impl<T> Any for Twhere T: 'static + ?Sized, #### fn type_id(&self) -> TypeId Gets the `TypeId` of `self`. T: ?Sized, #### fn borrow(&self) -> &T Immutably borrows from an owned value. T: ?Sized, #### fn borrow_mut(&mut self) -> &mut T Mutably borrows from an owned value. #### fn from(t: T) -> T Returns the argument unchanged. ### impl<T, U> Into<U> for Twhere U: From<T>, #### fn into(self) -> U Calls `U::from(self)`. That is, this conversion is whatever the implementation of `From<T> for U` chooses to do. 
### impl<T, U> TryFrom<U> for Twhere U: Into<T>, #### type Error = Infallible The type returned in the event of a conversion error.#### fn try_from(value: U) -> Result<T, <T as TryFrom<U>>::ErrorPerforms the conversion.### impl<T, U> TryInto<U> for Twhere U: TryFrom<T>, #### type Error = <U as TryFrom<T>>::Error The type returned in the event of a conversion error.#### fn try_into(self) -> Result<U, <U as TryFrom<T>>::ErrorPerforms the conversion. Module egui_extras::syntax_highlighting === Syntax highlighting for code. Turn on the `syntect` feature for great syntax highlighting of any language. Otherwise, a very simple fallback will be used, that works okish for C, C++, Rust, and Python. Structs --- * CodeThemeA selected color theme. Functions --- * code_view_uiView some code with syntax highlighting and selection. * highlightAdd syntax highlighting to a code string. Struct egui_extras::Column === ``` pub struct Column { /* private fields */ } ``` Specifies the properties of a column, like its width range. Implementations --- ### impl Column #### pub fn auto() -> Self Automatically sized based on content. If you have many thousands of rows and are therefore using `TableBody::rows` or `TableBody::heterogeneous_rows`, then the automatic size will only be based on the currently visible rows. #### pub fn auto_with_initial_suggestion(suggested_width: f32) -> Self Automatically sized. The given fallback is a loose suggestion, that may be used to wrap cell contents, if they contain a wrapping layout. In most cases though, the given value is ignored. #### pub fn initial(width: f32) -> Self With this initial width. #### pub fn exact(width: f32) -> Self Always this exact width, never shrink or grow. #### pub fn remainder() -> Self Take all the space remaining after the other columns have been sized. If you have multiple `Column::remainder` they all share the remaining space equally. 
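As an illustrative sketch of how these constructors differ (assuming `ui: &mut egui::Ui` is in scope, as in the other examples in these docs, and that the `egui` and `egui_extras` crates are available):

```rust
use egui_extras::{Column, TableBuilder};

TableBuilder::new(ui)
    .column(Column::auto())                        // sized to fit its content
    .column(Column::initial(120.0).at_least(40.0)) // starts at 120 pt, won't shrink below 40 pt
    .column(Column::exact(60.0))                   // always exactly 60 pt
    .column(Column::remainder())                   // takes whatever width is left
    .body(|mut body| {
        body.row(18.0, |mut row| {
            for text in ["auto", "initial", "exact", "remainder"] {
                row.col(|ui| {
                    ui.label(text);
                });
            }
        });
    });
```

Combining `Column::remainder` with `Self::at_least` is a common pattern for a main content column that should fill the table but never collapse entirely.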
#### pub fn resizable(self, resizable: bool) -> Self Can this column be resized by dragging the column separator? If you don’t call this, the fallback value of `TableBuilder::resizable` is used (which by default is `false`). #### pub fn clip(self, clip: bool) -> Self If `true`: Allow the column to shrink enough to clip the contents. If `false`: The column will always be wide enough to contain all its content. Clipping can make sense if you expect a column to contain a lot of things, and you don’t want it to take up too much space. If you turn on clipping you should also consider calling `Self::at_least`. Default: `false`. #### pub fn at_least(self, minimum: f32) -> Self Won’t shrink below this width (in points). Default: 0.0 #### pub fn at_most(self, maximum: f32) -> Self Won’t grow above this width (in points). Default: `f32::INFINITY` #### pub fn range(self, range: impl Into<Rangef>) -> Self Allowed range of movement (in points), if in a resizable `Table`. Trait Implementations --- ### impl Clone for Column #### fn clone(&self) -> Column Returns a copy of the value. Read more1.0.0 · source#### fn clone_from(&mut self, source: &Self) Performs copy-assignment from `source`. #### fn fmt(&self, f: &mut Formatter<'_>) -> Result Formats the value using the given formatter. #### fn eq(&self, other: &Column) -> bool This method tests for `self` and `other` values to be equal, and is used by `==`.1.0.0 · source#### fn ne(&self, other: &Rhs) -> bool This method tests for `!=`. The default implementation is almost always sufficient, and should not be overridden without very good reason.### impl Copy for Column ### impl StructuralPartialEq for Column Auto Trait Implementations --- ### impl RefUnwindSafe for Column ### impl Send for Column ### impl Sync for Column ### impl Unpin for Column ### impl UnwindSafe for Column Blanket Implementations --- ### impl<T> Any for Twhere T: 'static + ?Sized, #### fn type_id(&self) -> TypeId Gets the `TypeId` of `self`. 
T: ?Sized, #### fn borrow(&self) -> &T Immutably borrows from an owned value. T: ?Sized, #### fn borrow_mut(&mut self) -> &mut T Mutably borrows from an owned value. #### fn from(t: T) -> T Returns the argument unchanged. ### impl<T, U> Into<U> for Twhere U: From<T>, #### fn into(self) -> U Calls `U::from(self)`. That is, this conversion is whatever the implementation of `From<T> for U` chooses to do. ### impl<T> ToOwned for Twhere T: Clone, #### type Owned = T The resulting type after obtaining ownership.#### fn to_owned(&self) -> T Creates owned data from borrowed data, usually by cloning. Uses borrowed data to replace owned data, usually by cloning. U: Into<T>, #### type Error = Infallible The type returned in the event of a conversion error.#### fn try_from(value: U) -> Result<T, <T as TryFrom<U>>::ErrorPerforms the conversion.### impl<T, U> TryInto<U> for Twhere U: TryFrom<T>, #### type Error = <U as TryFrom<T>>::Error The type returned in the event of a conversion error.#### fn try_into(self) -> Result<U, <U as TryFrom<T>>::ErrorPerforms the conversion.### impl<T> SerializableAny for Twhere T: 'static + Any + Clone + Send + Sync, Struct egui_extras::Strip === ``` pub struct Strip<'a, 'b> { /* private fields */ } ``` A Strip of cells which go in one direction. Each cell has a fixed size. In contrast to normal egui behavior, strip cells do *not* grow with its children! Implementations --- ### impl<'a, 'b> Strip<'a, 'b#### pub fn cell(&mut self, add_contents: impl FnOnce(&mut Ui)) Add cell contents. #### pub fn empty(&mut self) Add an empty cell. #### pub fn strip(&mut self, strip_builder: impl FnOnce(StripBuilder<'_>)) Add a strip as cell. Trait Implementations --- ### impl<'a, 'b> Drop for Strip<'a, 'b#### fn drop(&mut self) Executes the destructor for this type. 
Read moreAuto Trait Implementations --- ### impl<'a, 'b> !RefUnwindSafe for Strip<'a, 'b### impl<'a, 'b> Send for Strip<'a, 'b### impl<'a, 'b> Sync for Strip<'a, 'b### impl<'a, 'b> Unpin for Strip<'a, 'b### impl<'a, 'b> !UnwindSafe for Strip<'a, 'bBlanket Implementations --- ### impl<T> Any for Twhere T: 'static + ?Sized, #### fn type_id(&self) -> TypeId Gets the `TypeId` of `self`. T: ?Sized, #### fn borrow(&self) -> &T Immutably borrows from an owned value. T: ?Sized, #### fn borrow_mut(&mut self) -> &mut T Mutably borrows from an owned value. #### fn from(t: T) -> T Returns the argument unchanged. ### impl<T, U> Into<U> for Twhere U: From<T>, #### fn into(self) -> U Calls `U::from(self)`. That is, this conversion is whatever the implementation of `From<T> for U` chooses to do. ### impl<T, U> TryFrom<U> for Twhere U: Into<T>, #### type Error = Infallible The type returned in the event of a conversion error.#### fn try_from(value: U) -> Result<T, <T as TryFrom<U>>::ErrorPerforms the conversion.### impl<T, U> TryInto<U> for Twhere U: TryFrom<T>, #### type Error = <U as TryFrom<T>>::Error The type returned in the event of a conversion error.#### fn try_into(self) -> Result<U, <U as TryFrom<T>>::ErrorPerforms the conversion. Struct egui_extras::StripBuilder === ``` pub struct StripBuilder<'a> { /* private fields */ } ``` Builder for creating a new `Strip`. This can be used to do dynamic layouts. In contrast to normal egui behavior, strip cells do *not* grow with their children! First use `Self::size` and `Self::sizes` to allocate space for the rows or columns that will follow. Then build the strip with `Self::horizontal`/`Self::vertical`, and add ‘cells’ to it using `Strip::cell`. The number of cells MUST match the number of pre-allocated sizes. 
#### Example ``` use egui_extras::{StripBuilder, Size}; StripBuilder::new(ui) .size(Size::remainder().at_least(100.0)) // top cell .size(Size::exact(40.0)) // bottom cell .vertical(|mut strip| { // Add the top 'cell' strip.cell(|ui| { ui.label("Fixed"); }); // We add a nested strip in the bottom cell: strip.strip(|builder| { builder.sizes(Size::remainder(), 2).horizontal(|mut strip| { strip.cell(|ui| { ui.label("Top Left"); }); strip.cell(|ui| { ui.label("Top Right"); }); }); }); }); ``` Implementations --- ### impl<'a> StripBuilder<'a#### pub fn new(ui: &'a mut Ui) -> Self Create new strip builder. #### pub fn clip(self, clip: bool) -> Self Should we clip the contents of each cell? Default: `false`. #### pub fn cell_layout(self, cell_layout: Layout) -> Self What layout should we use for the individual cells? #### pub fn size(self, size: Size) -> Self Allocate space for one column/row. #### pub fn sizes(self, size: Size, count: usize) -> Self Allocate space for several columns/rows at once. #### pub fn horizontal<F>(self, strip: F) -> Responsewhere F: for<'b> FnOnce(Strip<'a, 'b>), Build horizontal strip: Cells are positioned from left to right. Takes the available horizontal width, so there can’t be anything right of the strip or the container will grow slowly! Returns an `egui::Response` for hover events. #### pub fn vertical<F>(self, strip: F) -> Responsewhere F: for<'b> FnOnce(Strip<'a, 'b>), Build vertical strip: Cells are positioned from top to bottom. Takes the full available vertical height, so there can’t be anything below the strip or the container will grow slowly! Returns an `egui::Response` for hover events. 
Auto Trait Implementations --- ### impl<'a> !RefUnwindSafe for StripBuilder<'a### impl<'a> Send for StripBuilder<'a### impl<'a> Sync for StripBuilder<'a### impl<'a> Unpin for StripBuilder<'a### impl<'a> !UnwindSafe for StripBuilder<'aBlanket Implementations --- ### impl<T> Any for Twhere T: 'static + ?Sized, #### fn type_id(&self) -> TypeId Gets the `TypeId` of `self`. T: ?Sized, #### fn borrow(&self) -> &T Immutably borrows from an owned value. T: ?Sized, #### fn borrow_mut(&mut self) -> &mut T Mutably borrows from an owned value. #### fn from(t: T) -> T Returns the argument unchanged. ### impl<T, U> Into<U> for Twhere U: From<T>, #### fn into(self) -> U Calls `U::from(self)`. That is, this conversion is whatever the implementation of `From<T> for U` chooses to do. ### impl<T, U> TryFrom<U> for Twhere U: Into<T>, #### type Error = Infallible The type returned in the event of a conversion error.#### fn try_from(value: U) -> Result<T, <T as TryFrom<U>>::ErrorPerforms the conversion.### impl<T, U> TryInto<U> for Twhere U: TryFrom<T>, #### type Error = <U as TryFrom<T>>::Error The type returned in the event of a conversion error.#### fn try_into(self) -> Result<U, <U as TryFrom<T>>::ErrorPerforms the conversion. Struct egui_extras::Table === ``` pub struct Table<'a> { /* private fields */ } ``` Table struct which can construct a `TableBody`. Is created by `TableBuilder` by either calling `TableBuilder::body` or after creating a header row with `TableBuilder::header`. Implementations --- ### impl<'a> Table<'a#### pub fn ui_mut(&mut self) -> &mut Ui Access the contained `egui::Ui`. You can use this to e.g. modify the `egui::Style` with `egui::Ui::style_mut`. 
#### pub fn body<F>(self, add_body_contents: F)where F: for<'b> FnOnce(TableBody<'b>), Create table body after adding a header row Auto Trait Implementations --- ### impl<'a> !RefUnwindSafe for Table<'a### impl<'a> Send for Table<'a### impl<'a> Sync for Table<'a### impl<'a> Unpin for Table<'a### impl<'a> !UnwindSafe for Table<'aBlanket Implementations --- ### impl<T> Any for Twhere T: 'static + ?Sized, #### fn type_id(&self) -> TypeId Gets the `TypeId` of `self`. T: ?Sized, #### fn borrow(&self) -> &T Immutably borrows from an owned value. T: ?Sized, #### fn borrow_mut(&mut self) -> &mut T Mutably borrows from an owned value. #### fn from(t: T) -> T Returns the argument unchanged. ### impl<T, U> Into<U> for Twhere U: From<T>, #### fn into(self) -> U Calls `U::from(self)`. That is, this conversion is whatever the implementation of `From<T> for U` chooses to do. ### impl<T, U> TryFrom<U> for Twhere U: Into<T>, #### type Error = Infallible The type returned in the event of a conversion error.#### fn try_from(value: U) -> Result<T, <T as TryFrom<U>>::ErrorPerforms the conversion.### impl<T, U> TryInto<U> for Twhere U: TryFrom<T>, #### type Error = <U as TryFrom<T>>::Error The type returned in the event of a conversion error.#### fn try_into(self) -> Result<U, <U as TryFrom<T>>::ErrorPerforms the conversion. Struct egui_extras::TableBody === ``` pub struct TableBody<'a> { /* private fields */ } ``` The body of a table. Is created by calling `body` on a `Table` (after adding a header row) or `TableBuilder` (without a header row). Implementations --- ### impl<'a> TableBody<'a#### pub fn ui_mut(&mut self) -> &mut Ui Access the contained `egui::Ui`. You can use this to e.g. modify the `egui::Style` with `egui::Ui::style_mut`. #### pub fn max_rect(&self) -> Rect Where in screen-space is the table body? #### pub fn widths(&self) -> &[f32] Return a vector containing all column widths for this table body. 
This is primarily meant for use with `TableBody::heterogeneous_rows` in cases where row heights are expected to vary according to the width of one or more cells – for example, if text is wrapped rather than clipped within the cell. #### pub fn row( &mut self, height: f32, add_row_content: impl FnOnce(TableRow<'a, '_>) ) Add a single row with the given height. If you have many thousands of rows it can be more performant to instead use `Self::rows` or `Self::heterogeneous_rows`. #### pub fn rows( self, row_height_sans_spacing: f32, total_rows: usize, add_row_content: impl FnMut(usize, TableRow<'_, '_>) ) Add many rows with the same height. This is a lot more performant than adding each individual row, since non-visible rows are not rendered. If you need many rows with different heights, use `Self::heterogeneous_rows` instead. ###### Example ``` use egui_extras::{TableBuilder, Column}; TableBuilder::new(ui) .column(Column::remainder().at_least(100.0)) .body(|mut body| { let row_height = 18.0; let num_rows = 10_000; body.rows(row_height, num_rows, |row_index, mut row| { row.col(|ui| { ui.label("First column"); }); }); }); ``` #### pub fn heterogeneous_rows( self, heights: impl Iterator<Item = f32>, add_row_content: impl FnMut(usize, TableRow<'_, '_>) ) Add rows with varying heights. This takes a very slight performance hit compared to `TableBody::rows` due to the need to iterate over all row heights in order to calculate the virtual table height above and below the visible region, but it is many orders of magnitude more performant than adding individual heterogeneously-sized rows using `TableBody::row`, at the cost of the additional complexity that comes with pre-calculating row heights and representing them as an iterator. 
###### Example ``` use egui_extras::{TableBuilder, Column}; TableBuilder::new(ui) .column(Column::remainder().at_least(100.0)) .body(|mut body| { let row_heights: Vec<f32> = vec![60.0, 18.0, 31.0, 240.0]; body.heterogeneous_rows(row_heights.into_iter(), |row_index, mut row| { let thick = row_index % 6 == 0; row.col(|ui| { ui.centered_and_justified(|ui| { ui.label(row_index.to_string()); }); }); }); }); ``` Trait Implementations --- ### impl<'a> Drop for TableBody<'a#### fn drop(&mut self) Executes the destructor for this type. Read moreAuto Trait Implementations --- ### impl<'a> !RefUnwindSafe for TableBody<'a### impl<'a> Send for TableBody<'a### impl<'a> Sync for TableBody<'a### impl<'a> Unpin for TableBody<'a### impl<'a> !UnwindSafe for TableBody<'aBlanket Implementations --- ### impl<T> Any for Twhere T: 'static + ?Sized, #### fn type_id(&self) -> TypeId Gets the `TypeId` of `self`. T: ?Sized, #### fn borrow(&self) -> &T Immutably borrows from an owned value. T: ?Sized, #### fn borrow_mut(&mut self) -> &mut T Mutably borrows from an owned value. #### fn from(t: T) -> T Returns the argument unchanged. ### impl<T, U> Into<U> for Twhere U: From<T>, #### fn into(self) -> U Calls `U::from(self)`. That is, this conversion is whatever the implementation of `From<T> for U` chooses to do. ### impl<T, U> TryFrom<U> for Twhere U: Into<T>, #### type Error = Infallible The type returned in the event of a conversion error.#### fn try_from(value: U) -> Result<T, <T as TryFrom<U>>::ErrorPerforms the conversion.### impl<T, U> TryInto<U> for Twhere U: TryFrom<T>, #### type Error = <U as TryFrom<T>>::Error The type returned in the event of a conversion error.#### fn try_into(self) -> Result<U, <U as TryFrom<T>>::ErrorPerforms the conversion. Struct egui_extras::TableBuilder === ``` pub struct TableBuilder<'a> { /* private fields */ } ``` Builder for a `Table` with (optional) fixed header and scrolling body. You must pre-allocate all columns with `Self::column`/`Self::columns`. 
If you have multiple `Table`:s in the same `Ui` you will need to give them unique id:s by surrounding them with `Ui::push_id`. #### Example ``` use egui_extras::{TableBuilder, Column}; TableBuilder::new(ui) .column(Column::auto().resizable(true)) .column(Column::remainder()) .header(20.0, |mut header| { header.col(|ui| { ui.heading("First column"); }); header.col(|ui| { ui.heading("Second column"); }); }) .body(|mut body| { body.row(30.0, |mut row| { row.col(|ui| { ui.label("Hello"); }); row.col(|ui| { ui.button("world!"); }); }); }); ``` Implementations --- ### impl<'a> TableBuilder<'a#### pub fn new(ui: &'a mut Ui) -> Self #### pub fn striped(self, striped: bool) -> Self Enable striped row background for improved readability. Default is whatever is in `egui::Visuals::striped`. #### pub fn resizable(self, resizable: bool) -> Self Make the columns resizable by dragging. You can set this for individual columns with `Column::resizable`. `Self::resizable` is used as a fallback for any column for which you don’t call `Column::resizable`. If the *last* column is `Column::remainder`, then it won’t be resizable (and instead use up the remainder). Default is `false`. #### pub fn vscroll(self, vscroll: bool) -> Self Enable vertical scrolling in body (default: `true`) #### pub fn scroll(self, vscroll: bool) -> Self 👎Deprecated: Renamed to vscroll#### pub fn drag_to_scroll(self, drag_to_scroll: bool) -> Self Enables scrolling the table’s contents using mouse drag (default: `true`). See `ScrollArea::drag_to_scroll` for more. #### pub fn stick_to_bottom(self, stick: bool) -> Self Should the scroll handle stick to the bottom position even as the content size changes dynamically? The scroll handle remains stuck until manually changed, and will become stuck once again when repositioned to the bottom. Default: `false`. #### pub fn scroll_to_row(self, row: usize, align: Option<Align>) -> Self Set a row to scroll to. 
`align` specifies if the row should be positioned in the top, center, or bottom of the view (using [`Align::TOP`], [`Align::Center`] or [`Align::BOTTOM`]). If `align` is `None`, the table will scroll just enough to bring the cursor into view. See also: `Self::vertical_scroll_offset`. #### pub fn vertical_scroll_offset(self, offset: f32) -> Self Set the vertical scroll offset position, in points. See also: `Self::scroll_to_row`. #### pub fn min_scrolled_height(self, min_scrolled_height: f32) -> Self The minimum height of a vertical scroll area which requires scroll bars. The scroll area will only become smaller than this if the content is smaller than this (and so we don’t require scroll bars). Default: `200.0`. #### pub fn max_scroll_height(self, max_scroll_height: f32) -> Self Don’t make the scroll area higher than this (add scroll-bars instead!). In other words: add scroll-bars when this height is reached. Default: `800.0`. #### pub fn auto_shrink(self, auto_shrink: [bool; 2]) -> Self For each axis (x,y): * If true, add blank space outside the table, keeping the table small. * If false, add blank space inside the table, expanding the table to fit the containing ui. Default: `[true; 2]`. See `ScrollArea::auto_shrink` for more. #### pub fn cell_layout(self, cell_layout: Layout) -> Self What layout should we use for the individual cells? #### pub fn column(self, column: Column) -> Self Allocate space for one column. #### pub fn columns(self, column: Column, count: usize) -> Self Allocate space for several columns at once. 
#### pub fn header( self, height: f32, add_header_row: impl FnOnce(TableRow<'_, '_>) ) -> Table<'aCreate a header row which always stays visible and at the top #### pub fn body<F>(self, add_body_contents: F)where F: for<'b> FnOnce(TableBody<'b>), Create table body without a header row Auto Trait Implementations --- ### impl<'a> !RefUnwindSafe for TableBuilder<'a### impl<'a> Send for TableBuilder<'a### impl<'a> Sync for TableBuilder<'a### impl<'a> Unpin for TableBuilder<'a### impl<'a> !UnwindSafe for TableBuilder<'aBlanket Implementations --- ### impl<T> Any for Twhere T: 'static + ?Sized, #### fn type_id(&self) -> TypeId Gets the `TypeId` of `self`. T: ?Sized, #### fn borrow(&self) -> &T Immutably borrows from an owned value. T: ?Sized, #### fn borrow_mut(&mut self) -> &mut T Mutably borrows from an owned value. #### fn from(t: T) -> T Returns the argument unchanged. ### impl<T, U> Into<U> for Twhere U: From<T>, #### fn into(self) -> U Calls `U::from(self)`. That is, this conversion is whatever the implementation of `From<T> for U` chooses to do. ### impl<T, U> TryFrom<U> for Twhere U: Into<T>, #### type Error = Infallible The type returned in the event of a conversion error.#### fn try_from(value: U) -> Result<T, <T as TryFrom<U>>::ErrorPerforms the conversion.### impl<T, U> TryInto<U> for Twhere U: TryFrom<T>, #### type Error = <U as TryFrom<T>>::Error The type returned in the event of a conversion error.#### fn try_into(self) -> Result<U, <U as TryFrom<T>>::ErrorPerforms the conversion. Struct egui_extras::TableRow === ``` pub struct TableRow<'a, 'b> { /* private fields */ } ``` The row of a table. Is created by `TableRow` for each created `TableBody::row` or each visible row in rows created by calling `TableBody::rows`. Implementations --- ### impl<'a, 'b> TableRow<'a, 'b#### pub fn col( &mut self, add_cell_contents: impl FnOnce(&mut Ui) ) -> (Rect, Response) Add the contents of a column. Return the used space (`min_rect`) plus the `Response` of the whole cell. 
Trait Implementations --- ### impl<'a, 'b> Drop for TableRow<'a, 'b#### fn drop(&mut self) Executes the destructor for this type. Read moreAuto Trait Implementations --- ### impl<'a, 'b> !RefUnwindSafe for TableRow<'a, 'b### impl<'a, 'b> Send for TableRow<'a, 'b### impl<'a, 'b> Sync for TableRow<'a, 'b### impl<'a, 'b> Unpin for TableRow<'a, 'b### impl<'a, 'b> !UnwindSafe for TableRow<'a, 'bBlanket Implementations --- ### impl<T> Any for Twhere T: 'static + ?Sized, #### fn type_id(&self) -> TypeId Gets the `TypeId` of `self`. T: ?Sized, #### fn borrow(&self) -> &T Immutably borrows from an owned value. T: ?Sized, #### fn borrow_mut(&mut self) -> &mut T Mutably borrows from an owned value. #### fn from(t: T) -> T Returns the argument unchanged. ### impl<T, U> Into<U> for Twhere U: From<T>, #### fn into(self) -> U Calls `U::from(self)`. That is, this conversion is whatever the implementation of `From<T> for U` chooses to do. ### impl<T, U> TryFrom<U> for Twhere U: Into<T>, #### type Error = Infallible The type returned in the event of a conversion error.#### fn try_from(value: U) -> Result<T, <T as TryFrom<U>>::ErrorPerforms the conversion.### impl<T, U> TryInto<U> for Twhere U: TryFrom<T>, #### type Error = <U as TryFrom<T>>::Error The type returned in the event of a conversion error.#### fn try_into(self) -> Result<U, <U as TryFrom<T>>::ErrorPerforms the conversion. Enum egui_extras::Size === ``` pub enum Size { Absolute { initial: f32, range: Rangef, }, Relative { fraction: f32, range: Rangef, }, Remainder { range: Rangef, }, } ``` Size hint for table column/strip cell. Variants --- ### Absolute #### Fields `initial: f32``range: Rangef`Absolute size in points, with a given range of allowed sizes to resize within. ### Relative #### Fields `fraction: f32``range: Rangef`Relative size relative to all available space. ### Remainder #### Fields `range: Rangef`Multiple remainders each get the same space. 
Implementations --- ### impl Size #### pub fn exact(points: f32) -> Self Exactly this big, with no room for resize. #### pub fn initial(points: f32) -> Self Initially this big, but can resize. #### pub fn relative(fraction: f32) -> Self Relative size relative to all available space. Values must be in range `0.0..=1.0`. #### pub fn remainder() -> Self Multiple remainders each get the same space. #### pub fn at_least(self, minimum: f32) -> Self Won’t shrink below this size (in points). #### pub fn at_most(self, maximum: f32) -> Self Won’t grow above this size (in points). #### pub fn range(self) -> Rangef Allowed range of movement (in points), if in a resizable `Table`. Trait Implementations --- ### impl Clone for Size #### fn clone(&self) -> Size Returns a copy of the value. Read more1.0.0 · source#### fn clone_from(&mut self, source: &Self) Performs copy-assignment from `source`. #### fn fmt(&self, f: &mut Formatter<'_>) -> Result Formats the value using the given formatter. Auto Trait Implementations --- ### impl RefUnwindSafe for Size ### impl Send for Size ### impl Sync for Size ### impl Unpin for Size ### impl UnwindSafe for Size Blanket Implementations --- ### impl<T> Any for Twhere T: 'static + ?Sized, #### fn type_id(&self) -> TypeId Gets the `TypeId` of `self`. T: ?Sized, #### fn borrow(&self) -> &T Immutably borrows from an owned value. T: ?Sized, #### fn borrow_mut(&mut self) -> &mut T Mutably borrows from an owned value. #### fn from(t: T) -> T Returns the argument unchanged. ### impl<T, U> Into<U> for Twhere U: From<T>, #### fn into(self) -> U Calls `U::from(self)`. That is, this conversion is whatever the implementation of `From<T> for U` chooses to do. ### impl<T> ToOwned for Twhere T: Clone, #### type Owned = T The resulting type after obtaining ownership.#### fn to_owned(&self) -> T Creates owned data from borrowed data, usually by cloning. Uses borrowed data to replace owned data, usually by cloning. 
### impl<T, U> TryFrom<U> for T where U: Into<T>,
#### type Error = Infallible
The type returned in the event of a conversion error.
#### fn try_from(value: U) -> Result<T, <T as TryFrom<U>>::Error>
Performs the conversion.
### impl<T, U> TryInto<U> for T where U: TryFrom<T>,
#### type Error = <U as TryFrom<T>>::Error
The type returned in the event of a conversion error.
#### fn try_into(self) -> Result<U, <U as TryFrom<T>>::Error>
Performs the conversion.
### impl<T> SerializableAny for T where T: 'static + Any + Clone + Send + Sync,

Function egui_extras::install_image_loaders
===

```
pub fn install_image_loaders(ctx: &Context)
```

Installs a set of image loaders. Calling this enables the use of `egui::Image` and `egui::Ui::image`.

⚠ This will do nothing and you won’t see any images unless you also enable some feature flags on `egui_extras`:

* `file` feature: `file://` loader on non-Wasm targets
* `http` feature: `http(s)://` loader
* `image` feature: Loader of png, jpeg etc using the `image` crate
* `svg` feature: `.svg` loader

Calling this multiple times on the same `egui::Context` is safe. It will never install duplicate loaders.

* If you just want to be able to load `file://` and `http://` URIs, enable the `all_loaders` feature.
* The supported set of image formats is configured by adding the `image` crate as your direct dependency, and enabling features on it:

```
egui_extras = { version = "*", features = ["all_loaders"] }
image = { version = "0.24", features = ["jpeg", "png"] } # Add the types you want support for
```

⚠ You have to configure both the supported loaders in `egui_extras` *and* the supported image formats in `image` to get any output!

### Loader-specific information

⚠ The exact way bytes, images, and textures are loaded is subject to change, but the supported protocols and file extensions are not.

The `file` loader is a `BytesLoader`. It will attempt to load `file://` URIs, and infer the content type from the extension.
The path will be passed to `std::fs::read` after trimming the `file://` prefix, and is resolved the same way as with `std::fs::read(path)`:

* Relative paths are relative to the current working directory
* Absolute paths are left as is.

The `http` loader is a `BytesLoader`. It will attempt to load `http://` and `https://` URIs, and infer the content type from the `Content-Type` header.

The `image` loader is an `ImageLoader`. It will attempt to load any URI with any extension other than `svg`. It will also try to load any URI without an extension. The content type specified by `BytesPoll::Ready::mime` always takes precedence. This means that even if the URI has a `png` extension, and the `png` image format is enabled, if the content type is not one of the supported and enabled image formats, the loader will return `LoadError::NotSupported`, allowing a different loader to attempt to load the image.

The `svg` loader is an `ImageLoader`. It will attempt to load any URI with an `svg` extension. It will *not* attempt to load a URI without an extension. The content type specified by `BytesPoll::Ready::mime` always takes precedence, and must include `svg` for it to be considered supported. For example, `image/svg+xml` would be loaded by the `svg` loader.

See `egui::load` for more information about how loaders work.
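Looping back to the `Size` hints documented above: their semantics (absolute points, a fraction of the available space, and equal shares of whatever is left for remainder cells) can be modeled with a small sketch. This is an illustration of the documented semantics only, not egui_extras' actual layout code; the `resolve` helper and its names are hypothetical.

```python
# Toy model of egui_extras-style size hints: Absolute sizes are taken
# as-is, Relative sizes are a fraction of the available space, and all
# Remainder cells split the leftover space equally.
from dataclasses import dataclass


@dataclass
class Absolute:
    points: float


@dataclass
class Relative:
    fraction: float  # must be in 0.0..=1.0


@dataclass
class Remainder:
    pass


def resolve(sizes, available):
    """Resolve a list of size hints into concrete widths in points."""
    widths = [None] * len(sizes)
    used = 0.0
    remainder_indices = []
    for i, s in enumerate(sizes):
        if isinstance(s, Absolute):
            widths[i] = s.points
            used += s.points
        elif isinstance(s, Relative):
            widths[i] = s.fraction * available
            used += widths[i]
        else:
            remainder_indices.append(i)
    # "Multiple remainders each get the same space."
    leftover = max(available - used, 0.0)
    for i in remainder_indices:
        widths[i] = leftover / len(remainder_indices)
    return widths


print(resolve([Absolute(100.0), Relative(0.25), Remainder(), Remainder()], 400.0))
# → [100.0, 100.0, 100.0, 100.0]
```

Note that the real crate also clamps each cell to its `range: Rangef` when a `Table` is resizable; that detail is omitted here for brevity.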
github.com/graphikDB/graphik
README
---
![graphik](https://github.com/graphikDB/graphik/raw/v1.4.1/assets/graphik-logo.jpg)

<https://graphikdb.github.io/graphik/>

[![GoDoc](https://godoc.org/github.com/graphikDB/graphik?status.svg)](https://godoc.org/github.com/graphikDB/graphik)

### Quick Start

Download sample .env & docker-compose file for configuration:

```
curl https://raw.githubusercontent.com/graphikDB/graphik/master/.sample.env >> .env
curl https://raw.githubusercontent.com/graphikDB/graphik/master/docker-compose.yml >> docker-compose.yml
```

Change `GRAPHIK_ROOT_USERS` in .env to your email address

Start the server:

```
docker-compose -f docker-compose.yml pull
docker-compose -f docker-compose.yml up -d
```

Visit localhost:7820/ui and log in to get started. See [Sample GraphQL Queries](#readme-sample-graphql-queries) for sample graphQL queries.

When you're done, you may shut down the server:

```
docker-compose -f docker-compose.yml down --remove-orphans
```

Graphik is a Backend as a Service implemented as an identity-aware, permissioned, persistent document/graph database & pubsub server written in Go.
Support: [<EMAIL>](mailto:<EMAIL>)

* [Problem Statement](#readme-problem-statement)
  + [Traditional relational databases are powerful but come with a number of issues that interfere with agile development methodologies:](#readme-traditional-relational-databases-are-powerful-but-come-with-a-number-of-issues-that-interfere-with-agile-development-methodologies-)
  + [Traditional non-relational databases are non-relational](#readme-traditional-non-relational-databases-are-non-relational)
  + [Traditional non-relational databases often don't have a declarative query language](#readme-traditional-non-relational-databases-often-don-t-have-a-declarative-query-language)
  + [Traditional non-relational databases often don't support custom constraints](#readme-traditional-non-relational-databases-often-don-t-support-custom-constraints)
  + [No awareness of origin/end user accessing the records(only the API/dba making the request)](#readme-no-awareness-of-origin-end-user-accessing-the-records-only-the-api-dba-making-the-request-)
  + [Solution](#readme-solution)
* [Features](#readme-features)
* [Key Dependencies](#readme-key-dependencies)
* [Flags](#readme-flags)
* [gRPC Client SDKs](#readme-grpc-client-sdks)
* [Implementation Details](#readme-implemenation-details)
  + [Primitives](#readme-primitives)
  + [Identity Graph](#readme-identity-graph)
  + [Login/Authorization/Authorizers](#readme-login-authorization-authorizers)
    - [Authorizers Examples](#readme-authorizers-examples)
  + [Secondary Indexes](#readme-secondary-indexes)
    - [Secondary Index Examples](#readme-secondary-index-examples)
  + [Constraints](#readme-constraints)
    - [Constraint Examples](#readme-constraint-examples)
  + [Triggers](#readme-triggers)
    - [Trigger Examples](#readme-trigger-examples)
  + [GraphQL vs gRPC API](#readme-graphql-vs-grpc-api)
  + [Streaming/PubSub](#readme-streaming-pubsub)
  + [Additional Details](#readme-additional-details)
* [Sample GraphQL Queries](#readme-sample-graphql-queries)
  + [Get Currently Logged In User(me)](#readme-get-currently-logged-in-user-me-)
  + [Get the Graph Schema](#readme-get-the-graph-schema)
  + [Set a Request Authorizer](#readme-set-a-request-authorizer)
  + [Create a Document](#readme-create-a-document)
  + [Traverse Documents](#readme-traverse-documents)
  + [Traverse Documents Related to Logged In User](#readme-traverse-documents-related-to-logged-in-user)
  + [Change Streaming](#readme-change-streaming)
  + [Broadcasting a Message](#readme-broadcasting-a-message)
  + [Filtered Streaming](#readme-filtered-streaming)
* [Deployment](#readme-deployment)
  + [Docker-Compose](#readme-docker-compose)
  + [Kubernetes](#readme-kubernetes)
  + [Mac/OSX (Homebrew)](#readme-mac-osx--homebrew-)
* [Open ID Connect Providers](#readme-open-id-connect-providers)
  + [Google](#readme-google)
  + [Microsoft](#readme-microsoft)
  + [Okta](#readme-okta)
  + [Auth0](#readme-auth0)
* [Glossary](#readme-glossary)
* [FAQ](#readme-faq)

### Problem Statement

##### Traditional relational databases are powerful but come with a number of issues that interfere with agile development methodologies:

* [database schema](https://en.wikipedia.org/wiki/Database_schema) setup requires application context & configuration overhead
* database [schema changes are often dangerous](https://wikitech.wikimedia.org/wiki/Schema_changes#Dangers_of_schema_changes) and require skilled administration to pull off without downtime
* [password rotation](https://www.beyondtrust.com/resources/glossary/password-rotation) is burdensome and leads to password sharing/leaks
* user/application passwords are generally stored in an external store(identity provider) causing duplication of passwords(password-sprawl)
* traditional database indexing requires application context in addition to database administration

Because of these reasons, proper database administration requires a skilled database administration team(DBA).
This is bad for the following reasons:

* hiring dba's is [costly](https://www.payscale.com/research/US/Job=Database_Administrator_(DBA)/Salary)
* dba's generally have little context of the API accessing the database(only the intricacies of the database itself)
* communication between developers & dba's is slow (meetings & JIRA tickets)

![sql-schema-change-workflow](https://github.com/graphikDB/graphik/raw/v1.4.1/assets/sql-schema-change.png)

##### Traditional non-relational databases are non-relational

* many times developers will utilize a NOSQL database in order to avoid the downsides of traditional relational databases
* this leads to relational APIs being built on non-relational databases

Because of these reasons, APIs often end up developing anti-patterns:

* references to related objects/records are embedded within the record itself instead of joined via foreign key
  + as an API scales, the number of relationships will often grow, causing embedded relationships and/or multi-request queries to grow
* references to related objects/records are stored as foreign keys & joined client-side via multiple requests(slow)

##### Traditional non-relational databases often don't have a declarative query language

* declarative query languages are much easier to build graphical tooling for, since a single query "console" is generally the only requirement.
* without a declarative query language, interaction with the database often involves complicated forms on a user-interface to gather user input.
* declarative query languages open up database querying to analysts, operators, managers and others with core competencies outside of software programming.
##### Traditional non-relational databases often don't support custom constraints

* constraints are important for ensuring data integrity
* for instance, you may want to apply a constraint to the "age" field of a user to ensure it's greater than 0 and less than 150
* this leads to developers enforcing constraints within the applications themselves, which leads to bugs

![nosql-constraints](https://github.com/graphikDB/graphik/raw/v1.4.1/assets/nosql-constraints.png)

##### No awareness of origin/end user accessing the records(only the API/dba making the request)

* database "users" are generally expected to be database administrators and/or another API.
* 3rd party [SSO](https://en.wikipedia.org/wiki/Single_sign-on) integrations are generally non-native
* databases may be secured properly by the dba team while the APIs connecting to them can be insecure depending on the "origin" user

This is bad for the following reasons:

* dba teams falsely assuming their database resources are secured due to insecure APIs
* api teams falsely assuming their api resources are secured due to insecure database administration

#### Solution

* a loosely typed Graph database with built in identity awareness via a configured identity provider(Google, Microsoft, Okta, Auth0, etc)
  + relational-oriented benefits of a SQL database
  + non-relational-oriented productivity benefits of a NOSQL database
* zero password management- this is delegated to the configured identity provider
* schema-optional for productivity gains - constraints enforce custom constraints when necessary
* "identity graph" which automatically creates connections between users & the database objects they create/modify
  + index-free-adjacency allows insanely fast relational lookups from the POV of the origin user
* fine-grained authorization model to support requests directly from the origin user/public client(user on browser, ios app, android app, etc)
  + enforce role-based-access-control based on attributes found on the profile of the user managed by the identity provider
* graphQL API to support a declarative query language for public clients(user on browser, ios app, android app, etc), data analysts, and database administrators
* gRPC API for api -> database requests - gRPC tooling server side is more performant & has better tooling
  + auto-generate client SDK's in most languages (python, javascript, csharp, java, go, etc)
* database schema operations managed via integration with state of the art change management/automation tooling - [terraform](https://terraform.io)

### Features

* 100% Go
* Native gRPC Support
* GraphQL Support
* User Interface
* Native Document & Graph Database
* [Index-free Adjacency](https://dzone.com/articles/the-secret-sauce-of-graph-databases)
* Native OAuth/OIDC Support & Single Sign On
* Embedded SSO protected GraphQL Playground
* Persistent(bbolt LMDB)
* Identity-Aware PubSub with Channels & Message Filtering(gRPC & graphQL)
* Change Streams
* [Extended Common Expression Language](https://github.com/graphikDB/trigger#standard-definitionslibrary) Query Filtering
* [Extended Common Expression Language](https://github.com/graphikDB/trigger#standard-definitionslibrary) Request Authorization
* [Extended Common Expression Language](https://github.com/graphikDB/trigger#standard-definitionslibrary) Constraints
* [Extended Common Expression Language](https://github.com/graphikDB/trigger#standard-definitionslibrary) Server Side Triggers
* Loosely-Typed(mongo-esque)
* Horizontal Scalability/HA via Raft Consensus Protocol
* [Prometheus Metrics](https://prometheus.io/)
* [Pprof Metrics](https://blog.golang.org/pprof)
* Safe to Deploy Publicly(with authorizers/tls/constraints/cors)
* Read-Optimized
* Full Text Search Expression Macros/Functions(`startsWith, endsWith, contains`)
* RegularExp Expression Macros/Functions(`matches`)
* Geographic Expression Macros/Functions(`geoDistance`)
* Cryptographic Expression Macros/Functions(`encrypt, decrypt, sha1, sha256, sha3`)
* JWT Expression Macros/Functions(`parseClaims, parseHeader, parseSignature`) * Collection Expression Macros/Functions(`in, map, filter, exists`) * String Manipulation Expression Macros/Functions(`replace, join, titleCase, lowerCase, upperCase, trimSpace, trimPrefix, trimSuffix, render`) * URL Introspection Expression Macros/Functions(`parseHost, parseScheme, parseQuery, parsePath`) * Client to Server streaming(gRPC only) * [Terraform Provider](https://github.com/graphikDB/terraform-provider-graphik) for Schema Operations & Change Automation * [Command Line Interface](https://github.com/graphikDB/graphikctl) * [Multi-Node Kubernetes Manifest](https://github.com/graphikDB/graphik/blob/v1.4.1/k8s.yaml) * Mutual TLS(optional) ### Key Dependencies * google.golang.org/grpc * github.com/google/cel-go/cel * go.etcd.io/bbolt * go.uber.org/zap * golang.org/x/oauth2 * github.com/99designs/gqlgen * github.com/autom8ter/machine * github.com/graphikDB/raft * github.com/graphikDB/generic * github.com/graphikDB/trigger ### Flags please note that the following flags are required: * `--root-users (env: GRAPHIK_ROOT_USERS)` * `--open-id (env: GRAPHIK_OPEN_ID)` ``` --allow-headers strings cors allow headers (env: GRAPHIK_ALLOW_HEADERS) (default [*]) --allow-methods strings cors allow methods (env: GRAPHIK_ALLOW_METHODS) (default [HEAD,GET,POST,PUT,PATCH,DELETE]) --allow-origins strings cors allow origins (env: GRAPHIK_ALLOW_ORIGINS) (default [*]) --ca-cert string client CA certificate path for establishing mtls (env: GRAPHIK_CA_CERT) --debug enable debug logs (env: GRAPHIK_DEBUG) --enable-ui enable user interface (env: GRAPHIK_ENABLE_UI) (default true) --environment string deployment environment (k8s) (env: GRAPHIK_ENVIRONMENT) --join-raft string join raft cluster at target address (env: GRAPHIK_JOIN_RAFT) --listen-port int serve gRPC & graphQL on this port (env: GRAPHIK_LISTEN_PORT) (default 7820) --mutual-tls require mutual tls (env: GRAPHIK_MUTUAL_TLS) --open-id string open id 
connect discovery uri ex: https://accounts.google.com/.well-known/openid-configuration (env: GRAPHIK_OPEN_ID) (required) (default "https://accounts.google.com/.well-known/openid-configuration") --raft-max-pool int max nodes in pool (env: GRAPHIK_RAFT_MAX_POOL) (default 5) --raft-peer-id string raft peer ID - one will be generated if not set (env: GRAPHIK_RAFT_PEER_ID) --raft-secret string raft cluster secret (so only authorized nodes may join cluster) (env: GRAPHIK_RAFT_SECRET) --require-request-authorizers require request authorizers for all methods/endpoints (env: GRAPHIK_REQUIRE_REQUEST_AUTHORIZERS) --require-response-authorizers require request authorizers for all methods/endpoints (env: GRAPHIK_REQUIRE_RESPONSE_AUTHORIZERS) --root-users strings a list of email addresses that bypass registered authorizers (env: GRAPHIK_ROOT_USERS) (required) (default [<EMAIL>]) --storage string persistant storage path (env: GRAPHIK_STORAGE_PATH) (default "/tmp/graphik") --tls-cert string path to tls certificate (env: GRAPHIK_TLS_CERT) --tls-key string path to tls key (env: GRAPHIK_TLS_KEY) --ui-oauth-authorization-url string user authentication: oauth authorization url (env: GRAPHIK_UI_OAUTH_AUTHORIZATION_URL) (default "https://accounts.google.com/o/oauth2/v2/auth") --ui-oauth-client-id string user authentication: oauth client id (env: GRAPHIK_UI_OAUTH_CLIENT_ID) (default "723941275880-6i69h7d27ngmcnq02p6t8lbbgenm26um.apps.googleusercontent.com") --ui-oauth-client-secret string user authentication: oauth client secret (env: GRAPHIK_UI_OAUTH_CLIENT_SECRET) (default "E2ru-iJAxijisJ9RzMbloe4c") --ui-oauth-redirect-url string user authentication: oauth redirect url (env: GRAPHIK_UI_OAUTH_REDIRECT_URL) (default "http://localhost:7820/ui/login") --ui-oauth-scopes strings user authentication: oauth scopes (env: GRAPHIK_UI_OAUTH_SCOPES) (default [openid,email,profile]) --ui-oauth-token-url string user authentication: token url (env: GRAPHIK_UI_OAUTH_TOKEN_URL) (default 
"https://oauth2.googleapis.com/token") --ui-session-secret string user authentication: session secret (env: GRAPHIK_UI_SESSION_SECRET) (default "change-me-xxxx-xxxx") ``` sample .env file: ``` # change to a list of root user emails that have full access to all database operations GRAPHIK_ROOT_USERS=<EMAIL> GRAPHIK_OPEN_ID=https://accounts.google.com/.well-known/openid-configuration GRAPHIK_ALLOW_HEADERS=* GRAPHIK_ALLOW_METHOD=* GRAPHIK_ALLOW_ORIGINS=* # use for testing only GRAPHIK_UI_OAUTH_CLIENT_ID=723941275880-6i69h7d27ngmcnq02p6t8lbbgenm26um.apps.googleusercontent.com # use for testing only GRAPHIK_UI_CLIENT_SECRET=E<KEY> # change localhost:7820 to your deployed domain name when not running locally GRAPHIK_UI_OAUTH_REDIRECT_URL=http://localhost:7820/ui/login GRAPHIK_UI_OAUTH_SCOPES=openid,email,profile GRAPHIK_UI_OAUTH_AUTHORIZATION_URL=https://accounts.google.com/o/oauth2/v2/auth GRAPHIK_UI_OAUTH_TOKEN_URL=https://oauth2.googleapis.com/token GRAPHIK_UI_SESSION_SECRET=changeme-xxxxx-xxxx ``` ### gRPC Client SDKs * [Go](https://godoc.org/github.com/graphikDB/graphik/graphik-client-go) * [Python](https://github.com/graphikDB/graphik/blob/v1.4.1/gen/grpc/python) * [PHP](https://github.com/graphikDB/graphik/blob/v1.4.1/gen/grpc/php) * [Javascript](https://github.com/graphikDB/graphik/blob/v1.4.1/gen/grpc/js) * [Java](https://github.com/graphikDB/graphik/blob/v1.4.1/gen/grpc/java) * [C#](https://github.com/graphikDB/graphik/blob/v1.4.1/gen/grpc/csharp) * [Ruby](https://github.com/graphikDB/graphik/blob/v1.4.1/gen/grpc/ruby) ### User Interface The flag --enable-ui controls whether the UI is enabled or not. By default, it is enabled and available at the path `:7820/ui`. By default, it is configured to use an oauth client for testing purposes only. 
UI related flags:

```
--enable-ui enable user interface (env: GRAPHIK_ENABLE_UI) (default true)
--ui-oauth-authorization-url string user authentication: oauth authorization url (env: GRAPHIK_UI_OAUTH_AUTHORIZATION_URL) (default "https://accounts.google.com/o/oauth2/v2/auth")
--ui-oauth-client-id string user authentication: oauth client id (env: GRAPHIK_UI_OAUTH_CLIENT_ID) (default "723941275880-6i69h7d27ngmcnq02p6t8lbbgenm26um.apps.googleusercontent.com")
--ui-oauth-client-secret string user authentication: oauth client secret (env: GRAPHIK_UI_OAUTH_CLIENT_SECRET) (default "E2ru-iJAxijisJ9RzMbloe4c")
--ui-oauth-redirect-url string user authentication: oauth redirect url (env: GRAPHIK_UI_OAUTH_REDIRECT_URL) (default "http://localhost:7820/ui/login")
--ui-oauth-scopes strings user authentication: oauth scopes (env: GRAPHIK_UI_OAUTH_SCOPES) (default [openid,email,profile])
--ui-oauth-token-url string user authentication: token url (env: GRAPHIK_UI_OAUTH_TOKEN_URL) (default "https://oauth2.googleapis.com/token")
--ui-session-secret string user authentication: session secret (env: GRAPHIK_UI_SESSION_SECRET) (default "change-me-xxxx-xxxx")
```

### Implementation Details

* [graphQL Schema](https://github.com/graphikDB/graphik/blob/v1.4.1/schema.graphql)
* [graphQL Schema Documentation](https://github.com/graphikDB/graphik/blob/v1.4.1/gen/gql/docs/index.html)
* [gRPC Schema](https://github.com/graphikDB/graphik/blob/v1.4.1/graphik.proto)
* [gRPC Schema Documentation](https://github.com/graphikDB/graphik/blob/v1.4.1/gen/grpc/docs/index.html)

#### Primitives

* `Ref` == direct pointer to a doc or connection.
```
message Ref {
    // gtype is the type of the doc/connection ex: pet
    string gtype =1 [(validator.field) = {regex : "^.{1,225}$"}];
    // gid is the unique id of the doc/connection within the context of it's type
    string gid =2 [(validator.field) = {regex : "^.{1,225}$"}];
}
```

* `Doc` == JSON document in document storage terms AND vertex/node in graph theory

```
message Doc {
    // ref is the ref to the doc
    Ref ref =1 [(validator.field) = {msg_exists : true}];
    // k/v pairs
    google.protobuf.Struct attributes =2;
}
```

* `Connection` == graph edge/relationship in graph theory. Connections relate Docs to one another.

```
message Connection {
    // ref is the ref to the connection
    Ref ref =1 [(validator.field) = {msg_exists : true}];
    // attributes are k/v pairs
    google.protobuf.Struct attributes =2;
    // directed is false if the connection is bi-directional
    bool directed =3;
    // from is the doc ref that is the source of the connection
    Ref from =4 [(validator.field) = {msg_exists : true}];
    // to is the doc ref that is the destination of the connection
    Ref to =5 [(validator.field) = {msg_exists : true}];
}
```

#### Identity Graph

* any time a document is created, a connection of type `created` from the origin user to the new document is also created
* any time a document is created, a connection of type `created_by` from the new document to the origin user is also created
* any time a document is edited, a connection of type `edited` from the origin user to the edited document is also created(if none exists)
* any time a document is edited, a connection of type `edited_by` from the edited document to the origin user is also created(if none exists)
* every document a user has ever interacted with may be queried via the Traverse method with the user as the root document of the traversal

#### Login/Authorization/Authorizers

* an access token `Authorization: Bearer ${token}` from the configured open-id connect identity provider is required for all database functionality
* the access token is
used to fetch the user's info from the oidc userinfo endpoint fetched from the oidc metadata url
* if a user is not present in the database, one will be automatically created under the gtype: `user` with their email address as their `gid`
* once the user is fetched, it is evaluated(along with the request & request method) against any registered authorizers(CEL expression) in the database.
  + if an authorizer evaluates to false, the request will be denied
  + authorizers may be used to restrict access to functionality by domain, role, email, etc
  + registered root users(see flags) bypass these authorizers
* authorizers are completely optional but highly recommended

please note:

* setAuthorizers method overwrites all authorizers in the database
* authorizers may be listed with the getSchema method

##### Authorizers Examples

1. only allow access to the GetSchema method if the user's email contains `coleman` AND their email is verified

```
mutation {
  setAuthorizers(input: {
    authorizers: [{
      name: "getSchema",
      method: "/api.DatabaseService/GetSchema",
      expression: "this.user.attributes.email.contains('coleman') && this.user.attributes.email_verified"
      target_requests:true,
      target_responses: true
    }]
  })
}
```

2.
only allow access to the CreateDoc method if the user's email endsWith acme.com AND the user's email is verified AND the doc to create is of type note

```
mutation {
  setAuthorizers(input: {
    authorizers: [{
      name: "createNote",
      method: "/api.DatabaseService/CreateDoc",
      expression: "this.user.attributes.email.endsWith('acme.com') && this.user.attributes.email_verified && this.target.ref.gtype == 'note'"
      target_requests:true,
      target_responses: false
    }]
  })
}
```

#### Secondary Indexes

* secondary indexes are CEL expressions evaluated against a particular type of Doc or Connection
* indexes may be used to speed up queries that iterate over a large number of elements
* secondary indexes are completely optional but recommended

please note:

* setIndexes method overwrites all indexes in the database
* indexes may be listed with the getSchema method

##### Secondary Index Examples

1. index documents of type `product` that have a price > 100

```
mutation {
  setIndexes(input: {
    indexes: [{
      name: "expensiveProducts"
      gtype: "product"
      expression: "int(this.attributes.price) > 100"
      target_docs: true
      target_connections: false
    }]
  })
}
```

you can search for the document within the new index like so:

```
query {
  searchDocs(where: {
    gtype: "product"
    limit: 1
    index: "expensiveProducts"
  }){
    docs {
      ref { gid gtype }
      attributes
    }
  }
}
```

```
{ "data": { "searchDocs": { "docs": [ { "ref": { "gid": "1lw7gcc5yQ01YbLcsgMX0iz0Sgx", "gtype": "product" }, "attributes": { "price": 101, "title": "this is a product" } } ] } }, "extensions": {} }
```

#### Constraints

* constraints are CEL expressions evaluated against a particular type of Doc or Connection to enforce custom constraints
* constraints are completely optional

please note:

* setConstraints overwrites all constraints in the database
* constraints may be listed with the getSchema method

##### Constraint Examples

1.
ensure all documents of type 'note' have a title

```
mutation {
  setConstraints(input: {
    constraints: [{
      name: "noteValidator"
      gtype: "note"
      expression: "this.attributes.title != ''"
      target_docs: true
      target_connections: false
    }]
  })
}
```

2. ensure all documents of type 'product' have a price greater than 0

```
mutation {
  setConstraints(input: {
    constraints: [{
      name: "productValidator"
      gtype: "product"
      expression: "int(this.attributes.price) > 0"
      target_docs: true
      target_connections: false
    }]
  })
}
```

#### Triggers

* triggers may be used to automatically mutate the attributes of documents/connections before they are committed to the database
* this is useful for automatically annotating your data without having to make additional client-side requests

##### Trigger Examples

1. automatically add updated_at & created_at timestamp to all documents & connections

```
mutation {
  setTriggers(input: {
    triggers: [
      {
        name: "updatedAt"
        gtype: "*"
        expression: "true"
        trigger: "{'updated_at': now()}"
        target_docs: true
        target_connections: true
      },
      {
        name: "createdAt"
        gtype: "*"
        expression: "!has(this.attributes.created_at)"
        trigger: "{'created_at': now()}"
        target_docs: true
        target_connections: true
      },
    ]
  })
}
```

```
{ "data": { "setTriggers": {} }, "extensions": {} }
```

### User Interface

Please take a look at the following options for graphik user-interface clients:

* [OAuth GraphQL Playground](https://github.com/autom8ter/oauth-graphql-playground): A graphQL IDE that may be used to connect & interact with the full functionality of the graphik graphQL API as an authenticated user

#### GraphQL vs gRPC API

In my opinion, gRPC is king for svc-svc communication & graphQL is king for developing user interfaces & exploring data.

In graphik the graphQL & gRPC are nearly identical, but every request flows through the gRPC server natively - the graphQL api is technically a wrapper that may be used for developing user interfaces & querying the database from the graphQL playground.
The gRPC server is more performant so it is advised that you import one of the gRPC client libraries as opposed to utilizing the graphQL endpoint when developing backend APIs.

The graphQL endpoint is particularly useful for developing public user interfaces against since it can be locked down to nearly any extent via authorizers, cors, constraints, & tls.

#### Streaming/PubSub

Graphik supports channel based pubsub as well as change-based streaming.

All server -> client stream/subscriptions are started via the Stream() endpoint in gRPC or graphQL. All messages received on this channel include the user that triggered/sent the message.

Messages on channels may be filtered via CEL expressions so that clients only receive the messages they want.

Messages may be sent directly to channels via the Broadcast() method in gRPC & graphQL.

All state changes in the graph are sent by graphik to the `state` channel which may be subscribed to just like any other channel.

#### Additional Details

* any time a Doc is deleted, so are all of its connections

### Sample GraphQL Queries

#### Get Currently Logged In User(me)

```
query {
  me(where: {}) {
    ref { gid gtype }
    attributes
  }
}
```

```
{ "data": { "me": { "ref": { "gid": "<EMAIL>", "gtype": "user" }, "attributes": { "email": "<EMAIL>", "email_verified": true, "family_name": "Word", "given_name": "Coleman", "hd": "graphikdb.io", "locale": "en", "name": "<NAME>", "picture": "https://lh3.googleusercontent.com/--LNU8XICB1A/AAAAAAAAAAI/AAAAAAAAAAA/AMZuuckp6gwH9JVkhlRkk-PTZdyDFctArg/s96-c/photo.jpg", "sub": "105138978122958973720" } } }, "extensions": {} }
```

#### Get the Graph Schema

```
query {
  getSchema(where: {}) {
    doc_types
    connection_types
    authorizers { authorizers { name expression } }
    constraints { constraints { name expression } }
    indexes { indexes { name expression } }
  }
}
```

```
{ "data": { "getSchema": { "doc_types": [ "dog", "human", "note", "user" ], "connection_types": [ "created", "created_by", "edited",
"edited_by", "owner" ], "authorizers": { "authorizers": [ { "name": "testing", "expression": "this.user.attributes.email.contains(\"coleman\")" } ] }, "constraints": { "constraints": [ { "name": "testing", "expression": "this.user.attributes.email.contains(\"coleman\")" } ] }, "indexes": { "indexes": [ { "name": "testing", "expression": "this.attributes.primary_owner" } ] } } }, "extensions": {} } ``` #### Set a Request Authorizer ``` mutation { setAuthorizers(input: { authorizers: [{ name: "testing", method: "/api.DatabaseService/GetSchema", expression: "this.user.attributes.email.contains('coleman') && this.user.attributes.email_verified" target_requests:true, target_responses: true }] }) } ``` ``` { "data": { "setAuthorizers": {} }, "extensions": {} } ``` #### Create a Document ``` mutation { createDoc(input: { ref: { gtype: "note" } attributes: { title: "do the dishes" } }){ ref { gid gtype } attributes } } ``` ``` { "data": { "createDoc": { "ref": { "gid": "1lU0w0QjiI0jnNL8XMzWJHqQmTd", "gtype": "note" }, "attributes": { "title": "do the dishes" } } }, "extensions": {} } ``` #### Traverse Documents ``` # Write your query or mutation here query { traverse(input: { root: { gid: "<EMAIL>" gtype: "user" } algorithm: BFS limit: 6 max_depth: 1 max_hops: 10 }){ traversals { doc { ref { gid gtype } } traversal_path { gid gtype } depth hops } } } ``` #### Traverse Documents Related to Logged In User ``` query { traverseMe(where: { max_hops: 100 max_depth:1 limit: 5 }){ traversals { traversal_path { gtype gid } depth hops doc { ref { gid gtype } } } } } ``` ``` { "data": { "traverseMe": { "traversals": [ { "traversal_path": null, "depth": 0, "hops": 0, "doc": { "ref": { "gid": "<EMAIL>", "gtype": "user" } } }, { "traversal_path": [ { "gtype": "user", "gid": "<EMAIL>" } ], "depth": 1, "hops": 1, "doc": { "ref": { "gid": "1lU0w0QjiI0jnNL8XMzWJHqQmTd", "gtype": "note" } } } ] } }, "extensions": {} } ``` #### Change Streaming ``` subscription { stream(where: { channel: 
"state" }){ data user { gid gtype } } } ``` ``` { "data": { "stream": { "data": { "attributes": { "title": "do the dishes" }, "ref": { "gid": "1lUAK3uwwmhQ503ByzC9nCvdH6W", "gtype": "note" } }, "user": { "gid": "<EMAIL>", "gtype": "user" } } }, "extensions": {} } ``` #### Broadcasting a Message ``` mutation { broadcast(input: { channel: "testing" data: { text: "hello world!" } }) } ``` ``` { "data": { "broadcast": {} }, "extensions": {} } ``` #### Filtered Streaming ``` subscription { stream(where: { channel: "testing" expression: "this.data.text == 'hello world!' && this.user.gid.endsWith('graphikdb.io')" }){ data user { gid gtype } } } ``` ``` { "data": { "stream": { "data": { "text": "hello world!" }, "user": { "gid": "<EMAIL>", "gtype": "user" } } }, "extensions": {} } ``` ### Deployment Regardless of deployment methodology, please set the following environmental variables or include them in a ${pwd}/.env file ``` GRAPHIK_OPEN_ID=${open_id_connect_metadata_url} #GRAPHIK_ALLOW_HEADERS=${cors_headers} #GRAPHIK_ALLOW_METHOD=${cors_methods} #GRAPHIK_ALLOW_ORIGINS=${cors_origins} #GRAPHIK_ROOT_USERS=${root_users} #GRAPHIK_TLS_CERT=${tls_cert_path} #GRAPHIK_TLS_KEY=${tls_key_path} ``` #### Docker-Compose add this docker-compose.yml to ${pwd}: ``` version: '3.7' services: graphik: image: graphikdb/graphik:v1.4.1 env_file: - .env ports: - "7820:7820" - "7821:7821" volumes: - default:/tmp/graphik networks: default: aliases: - graphikdb networks: default: volumes: default: ``` then run: ``` docker-compose -f docker-compose.yml pull docker-compose -f docker-compose.yml up -d ``` to shut down: ``` docker-compose -f docker-compose.yml down --remove-orphans ``` #### Kubernetes (Multi-Node) Given a running Kubernetes cluster, run: ``` curl https://raw.githubusercontent.com/graphikDB/graphik/master/k8s.yaml >> k8s.yaml && \ kubectl apply -f k8s.yaml ``` to view pods as they spin up, run: ``` kubectl get pods -n graphik -w ``` graphik plugs into kubernetes service discovery ####
Mac/OSX (Homebrew) ``` brew tap graphik/tools <EMAIL>:graphikDB/graphik-homebrew.git brew install graphik brew install graphikctl ``` ### Open ID Connect Providers When using an openid provider other than Google (the default configuration), please set the `GRAPHIK_OPEN_ID` environmental variable to your provider's metadata uri. #### Google * metadata uri: <https://accounts.google.com/.well-known/openid-configuration> #### Microsoft * metadata uri: [See More](https://docs.microsoft.com/en-us/azure/active-directory/develop/v2-protocols-oidc) #### Okta * metadata uri: https://${yourOktaOrg}/.well-known/openid-configuration [See More](https://developer.okta.com/docs/concepts/auth-servers/) #### Auth0 * metadata uri: https://${YOUR_DOMAIN}/.well-known/openid-configuration [See More](https://auth0.com/docs/protocols/configure-applications-with-oidc-discovery) ### Glossary | Term | Definition | Source | | --- | --- | --- | | Application Programming Interface(API) | An application programming interface (API) is a computing interface that defines interactions between multiple software intermediaries. It defines the kinds of calls or requests that can be made, how to make them, the data formats that should be used, the conventions to follow, etc. | <https://en.wikipedia.org/wiki/API> | | Structured Query Language(SQL) | A domain-specific language used in programming and designed for managing data held in a relational database management system (RDBMS), or for stream processing in a relational data stream management system (RDSMS). It is particularly useful in handling structured data, i.e. data incorporating relations among entities and variables.
| <https://en.wikipedia.org/wiki/SQL> | | NOSQL | A NoSQL (originally referring to "non-SQL" or "non-relational") database provides a mechanism for storage and retrieval of data that is modeled in means other than the tabular relations used in relational databases | <https://en.wikipedia.org/wiki/NoSQL> | | Graph Database | Very simply, a graph database is a database designed to treat the relationships between data as equally important to the data itself. It is intended to hold data without constricting it to a pre-defined model. Instead, the data is stored like we first draw it out - showing how each individual entity connects with or is related to others. | <https://neo4j.com/developer/graph-database/> | | Identity Provider | An identity provider (abbreviated IdP or IDP) is a system entity that creates, maintains, and manages identity information for principals and also provides authentication services to relying applications within a federation or distributed network. | <https://en.wikipedia.org/wiki/Identity_provider> | | OAuth2 | The OAuth 2.0 authorization framework enables a third-party application to obtain limited access to an HTTP service, either on behalf of a resource owner by orchestrating an approval interaction between the resource owner and the HTTP service, or by allowing the third-party application to obtain access on its own behalf. | <https://tools.ietf.org/html/rfc6749> | | Open ID Connect(OIDC) | OpenID Connect is a simple identity layer on top of the OAuth 2.0 protocol, which allows clients to verify the identity of the end-user based on the authentication performed by an authorization server. | <https://en.wikipedia.org/wiki/OpenID_Connect> | | Open ID Connect Provider Metadata | OpenID Providers supporting Discovery MUST make a JSON document available at the path formed by concatenating the string /.well-known/openid-configuration to the Issuer. | <https://openid.net/specs/openid-connect-discovery-1_0.html#ProviderMetadata> | | gRPC | gRPC is a modern open source high performance RPC framework that can run in any environment.
It can efficiently connect services in and across data centers with pluggable support for load balancing, tracing, health checking and authentication. It is also applicable in last mile of distributed computing to connect devices, mobile applications and browsers to backend services. | <https://grpc.io/> | | graphQL | GraphQL is a query language for APIs and a runtime for fulfilling those queries with your existing data. GraphQL provides a complete and understandable description of the data in your API, gives clients the power to ask for exactly what they need and nothing more, makes it easier to evolve APIs over time, and enables powerful developer tools. | <https://graphql.org/> | | Client-Server Model | Client–server model is a distributed application structure that partitions tasks or workloads between the providers of a resource or service, called servers, and service requesters, called clients. | <https://en.wikipedia.org/wiki/Client%E2%80%93server_model> | | Publish-Subscribe Architecture(PubSub) | In software architecture, publish–subscribe is a messaging pattern where senders of messages, called publishers, do not program the messages to be sent directly to specific receivers, called subscribers, but instead categorize published messages into classes without knowledge of which subscribers, if any, there may be. Similarly, subscribers express interest in one or more classes and only receive messages that are of interest, without knowledge of which publishers, if any, there are. | <https://en.wikipedia.org/wiki/Publish%E2%80%93subscribe_pattern> | | Database Administrator | Database administrators ensure databases run efficiently. Database administrators use specialized software to store and organize data, such as financial information and customer shipping records. They make sure that data are available to users and secure from unauthorized access. 
| <https://www.bls.gov/ooh/computer-and-information-technology/database-administrators.htm#:~:text=Database%20administrators%20ensure%20databases%20run,and%20secure%20from%20unauthorized%20access>. | | Raft | Raft is a consensus algorithm designed as an alternative to the Paxos family of algorithms. It was meant to be more understandable than Paxos by means of separation of logic, but it is also formally proven safe and offers some additional features.[1] Raft offers a generic way to distribute a state machine across a cluster of computing systems, ensuring that each node in the cluster agrees upon the same series of state transitions. | <https://en.wikipedia.org/wiki/Raft_(algorithm)> | | Raft Leader | At any given time, the peer set elects a single node to be the leader. The leader is responsible for ingesting new log entries, replicating to followers, and managing when an entry is considered committed. | <https://www.consul.io/docs/architecture/consensus> | | Raft Quorum | A quorum is a majority of members from a peer set: for a set of size N, quorum requires at least (N/2)+1 members. For example, if there are 5 members in the peer set, we would need 3 nodes to form a quorum. If a quorum of nodes is unavailable for any reason, the cluster becomes unavailable and no new logs can be committed. | <https://www.consul.io/docs/architecture/consensus> | | Raft Log | The primary unit of work in a Raft system is a log entry. The problem of consistency can be decomposed into a replicated log. A log is an ordered sequence of entries. Entries includes any cluster change: adding nodes, adding services, new key-value pairs, etc. We consider the log consistent if all members agree on the entries and their order. | <https://www.consul.io/docs/architecture/consensus> | | High Availability | High availability (HA) is a characteristic of a system which aims to ensure an agreed level of operational performance, usually uptime, for a higher than normal period. 
| <https://en.wikipedia.org/wiki/High_availability#:~:text=High%20availability%20(HA)%20is%20a,increased%20reliance%20on%20these%20systems>. | | Horizontal Scalability | Horizontal scaling means scaling by adding more machines to your pool of resources (also described as “scaling out”), whereas vertical scaling refers to scaling by adding more power (e.g. CPU, RAM) to an existing machine (also described as “scaling up”). | <https://www.section.io/blog/scaling-horizontally-vs-vertically/#:~:text=Horizontal%20scaling%20means%20scaling%20by,as%20%E2%80%9Cscaling%20up%E2%80%9D)>. | | Database Trigger | A database trigger is procedural code that is automatically executed in response to certain events on a particular table or view in a database | <https://en.wikipedia.org/wiki/Database_trigger> | | Secondary Index | A secondary index, put simply, is a way to efficiently access records in a database (the primary) by means of some piece of information other than the usual (primary) key. | <https://docs.oracle.com/cd/E17275_01/html/programmer_reference/am_second.html#:~:text=A%20secondary%20index%2C%20put%20simply,the%20usual%20(primary)%20key>. | | Authentication vs Authorization | Authentication and authorization might sound similar, but they are distinct security processes in the world of identity and access management (IAM). Authentication confirms that users are who they say they are. Authorization gives those users permission to access a resource. | <https://www.okta.com/identity-101/authentication-vs-authorization/#:~:text=Authentication%20and%20authorization%20might%20sound,permission%20to%20access%20a%20resource>. | | Role Based Access Control(RBAC) | Role-based access control (RBAC) is a method of restricting network access based on the roles of individual users within an enterprise. RBAC lets employees have access rights only to the information they need to do their jobs and prevents them from accessing information that doesn't pertain to them.
| <https://searchsecurity.techtarget.com/definition/role-based-access-control-RBAC#:~:text=Role%2Dbased%20access%20control%20(RBAC)%20is%20a%20method%20of,doesn't%20pertain%20to%20them>. | | Index Free Adjacency | Data lookup performance is dependent on the access speed from one particular node to another. Because index-free adjacency enforces the nodes to have direct physical RAM addresses and physically point to other adjacent nodes, it results in a fast retrieval. A native graph system with index-free adjacency does not have to move through any other type of data structures to find links between the nodes. | <https://en.wikipedia.org/wiki/Graph_database#Index-free_adjacency> | | Extended Common Expression Language | An extensive decision & trigger framework backed by Google's Common Expression Language | <https://github.com/graphikDB/trigger#standard-definitionslibrary> | ### FAQ 1. How can I log in with some type of local "Root" or "Admin" account without having to use some authenticator service? It is not possible to use the database without an authenticator service, though a default authenticator is configured for local use. To configure the root/admin user with the default configuration, please utilize the `--root-users` flag or the `GRAPHIK_ROOT_USERS` environmental variable with a list of email addresses that have full access to the database. 1. Does GraphikDb have a built-in admin web server UI like RethinkDB, etc. to serve some web pages for monitoring the database and cluster? The metrics server on port 7821 (by default) exposes a `/metrics` endpoint that may be used with Prometheus/Grafana for monitoring database operations & memory allocation. The ClusterState operation on the database server may be used to monitor the state of the raft cluster. ex: ``` query { clusterState(where: {}) { membership peers { addr node_id } stats } } ``` ![cluster-state](https://github.com/graphikDB/graphik/raw/v1.4.1/assets/cluster-state.png) 1.
How well does it cluster? Clustering is achieved via the raft protocol and is primarily used for redundancy purposes. The raft protocol does not scale particularly well for write operations - it may be replaced with a sharded, eventually-consistent clustering mechanism in order to scale past 10+ instances.
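The quorum arithmetic behind this answer (a majority of a peer set is (N/2)+1 members, as noted in the glossary above) is small enough to sketch; `quorum` is an illustrative helper, not part of graphik's API:

```rust
/// Minimum number of members that must agree for a Raft quorum: (N / 2) + 1.
fn quorum(peers: usize) -> usize {
    peers / 2 + 1
}

fn main() {
    assert_eq!(quorum(3), 2); // a 3-node cluster tolerates 1 failure
    assert_eq!(quorum(5), 3); // a 5-node cluster tolerates 2 failures
    assert_eq!(quorum(4), 3); // even sizes add no fault tolerance over N-1 nodes
}
```

Note that an even-sized cluster needs one more vote than the next smaller odd size while tolerating the same number of failures, which is why odd cluster sizes are generally recommended.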
Crate nucleide === Nucleide --- A crate to manipulate custom sections of a WebAssembly module to view/edit application metadata. Specification --- Nucleide specifies WASM metadata that is read by the Nucleic Desktop Environment. It includes the WebAssembly 2.0 and the Daku 1.0.0-beta.0 specifications. Daku programs are WebAssembly modules that must have the `daku` custom section, are compressed with ZStd, and should use the `.daku` file extension; thus the Nucleide specification, as an extension of Daku, shall follow. App data that can be displayed by a software manager, and where it comes from: * Non-Localized App Name: `name` section => Module Name Subsection * Programming Language: `producers` section => Language Field * Processed With: `producers` section => Processed-By Field * Generated With: `producers` section => SDK Field * Required Permissions: `daku` section => Portals Header * Localized App Names: `daku` section => Translations Subsection * App Description: `daku` section => Description Translations Subsection * App Icon Themes: `daku` section => App Icon Themes Subsection * App Screenshots: `daku` section => Description Assets Subsection * Searchable Tags: `daku` section => Tags Subsection * Categories: `daku` section => Categories Subsection * Organization: `daku` section => Organization Name Subsection ### Types Nucleide custom sections reuse WebAssembly types: ##### `Byte` Simply an 8-bit integer. ##### `Integer` An unsigned LEB128 variable-length encoded little-endian integer, with a maximum value of 2³²-1 (can be anywhere from 1-5 bytes).
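The `Integer` encoding described above can be sketched in a few lines of Rust; `encode_uleb128` and `decode_uleb128` are illustrative names, not part of the nucleide API:

```rust
/// Encode a value as unsigned LEB128: 7 data bits per byte, little-endian,
/// high bit set on every byte except the last (the continuation bit).
fn encode_uleb128(mut value: u32) -> Vec<u8> {
    let mut out = Vec::new();
    loop {
        let mut byte = (value & 0x7F) as u8;
        value >>= 7;
        if value != 0 {
            byte |= 0x80; // more bytes follow
        }
        out.push(byte);
        if value == 0 {
            return out;
        }
    }
}

/// Decode, returning the value and the number of bytes consumed, or `None`
/// if no terminating byte appears within the 5-byte maximum for u32.
fn decode_uleb128(bytes: &[u8]) -> Option<(u32, usize)> {
    let mut result: u32 = 0;
    for (i, &byte) in bytes.iter().enumerate().take(5) {
        result |= u32::from(byte & 0x7F) << (7 * i);
        if byte & 0x80 == 0 {
            return Some((result, i + 1));
        }
    }
    None
}

fn main() {
    // 624485 is the classic LEB128 example value: 0xE5 0x8E 0x26.
    let bytes = encode_uleb128(624485);
    assert_eq!(bytes, vec![0xE5, 0x8E, 0x26]);
    assert_eq!(decode_uleb128(&bytes), Some((624485, 3)));
    // Values below 0x80 fit in a single byte.
    assert_eq!(encode_uleb128(0x7F), vec![0x7F]);
}
```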
##### `Vector[T]` A sequence of the following: * `size: Integer` * `data: [T; size]` ##### `Name` Containing valid UTF-8 (no null termination); wrapper around: * `Vector[Byte]` ##### `NameMap` A `Vector`, with each element containing a sequence of the following: * `index: Integer` - Must be sorted in sequence * `name: Name` ##### `IndirectNameMap` A `Vector`, with each element containing a sequence of the following: * `index: Integer` - Must be sorted in sequence * `name_map: NameMap` ### Custom Sections #### Name (`name`) From the wasm spec, debug info. It is expected that apps are built with this module generated for easier debugging, but stripped away and put into a separate `.name` file for distribution. * `subsection: u8`: Each subsection is optional, and must be placed in this order: + 0 => Module Name + 1 => Function Names + 2 => Local Names + 3 => Ext: Label Names + 4 => Ext: Type Names + 5 => Ext: Table Names + 6 => Ext: Memory Names + 7 => Ext: Global Names + 8 => Ext: Element Names + 9 => Ext: Data Names * `size: u32`: Number of bytes ##### 0 => Module Name * `name: Name`: Name of the app ##### 1 => Function Names * `name_map: NameMap`: Names of each function ##### 2 => Local Names * `indirect_name_map: IndirectNameMap`: Names of each variable in each function ##### 3 => Ext: Label Names * `indirect_name_map: IndirectNameMap`: Names of each label in each function ##### 4 => Ext: Type Names * `name_map: NameMap`: Names of each type ##### 5 => Ext: Table Names * `name_map: NameMap`: Names of each table ##### 6 => Ext: Memory Names * `name_map: NameMap`: Names of each memory ##### 7 => Ext: Global Names * `name_map: NameMap`: Names of each global ##### 8 => Ext: Element Names * `name_map: NameMap`: Names of each element ##### 9 => Ext: Data Names * `name_map: NameMap`: Names of each data #### Producers (`producers`) From WebAssembly’s tool conventions, information on how the `.daku` WebAssembly file was generated. 
A `Vector`, with each element containing a sequence of the following: * `name: Name` - One of: + `"language"` + `"processed-by"` + `"sdk"` * `tool_version_pairs: Vector<(String, String)>` #### Daku (`daku`) * `portals: Vector<Integer>`: List of Portal IDs Following the Daku portals list, is the nucleide extension: * `subsection: u8`: Each subsection is optional, and must be placed in this order: + 0 => Reserved for potential breaking 2.0 version of Daku + 1 => App Name Translations + 2 => App Description Translations + 3 => App Icon Themes + 4 => App Description Assets + 5 => Searchable Tags + 6 => Searchable Categories + 7 => Organization Name * `size: u32`: Number of bytes ##### 1 => App Name Translations * `localized_names: NameMap` Integer representation of a 4-letter (2-letter lowercase language, 2-letter uppercase region) locale ASCII description: * `locale: b"enUS"` ``` locale[0] | locale[1] << 7 | locale[2] << 14 | locale[3] << 21 ``` ##### 2 => App Description Translations * `localized_mdfiles: NameMap`: Markdown file for each description Integer representation of a 4-letter (2-letter lowercase language, 2-letter uppercase region) locale ASCII description: * `locale: b"enUS"` ``` locale[0] | locale[1] << 7 | locale[2] << 14 | locale[3] << 21 ``` ##### 3 => App Icon Themes A `Vector`, with each element containing a sequence of the following: * `name: Name`: Theme name, `"default"` or `"reduced"`; reduced theme should be binary (on/off) RGBA. default is full 0-255 range for each. * `data: Vector<u8>`: Concatenated list of QOI (future: or RVG) files. Best resolution out of the files will be chosen. None can have the same resolution. ##### 4 => App Description Assets A `Vector`, with each element containing a sequence of the following: * `locale: Integer`: Set to 0 for non-localized assets. * `path: Name`: Markdown path * `data: Vector<u8>`: QOI (future: or RVG) file. 
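The locale integer used by the translation subsections above packs four ASCII bytes seven bits apart (matching the `locale[0] | locale[1] << 7 | locale[2] << 14 | locale[3] << 21` formula). A minimal sketch; `pack_locale` and `unpack_locale` are illustrative names, not part of the nucleide API:

```rust
/// Pack a 4-letter locale such as b"enUS" into its `Integer` form.
/// Lossless only for ASCII bytes (< 0x80), which locale letters always are.
fn pack_locale(locale: [u8; 4]) -> u32 {
    u32::from(locale[0])
        | u32::from(locale[1]) << 7
        | u32::from(locale[2]) << 14
        | u32::from(locale[3]) << 21
}

/// Recover the four ASCII bytes from the packed integer.
fn unpack_locale(packed: u32) -> [u8; 4] {
    [
        (packed & 0x7F) as u8,
        ((packed >> 7) & 0x7F) as u8,
        ((packed >> 14) & 0x7F) as u8,
        ((packed >> 21) & 0x7F) as u8,
    ]
}

fn main() {
    let packed = pack_locale(*b"enUS");
    assert_eq!(unpack_locale(packed), *b"enUS"); // round-trips losslessly
}
```

Because each component stays below 0x80, the packed value itself encodes compactly as the LEB128 `Integer` type described earlier.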
##### 5 => Searchable Tags A `Vector` (limit 8), with each element containing: * `tag: Name`: Name of the tag (all lowercase ASCII English words separated by spaces; no `-`, `_`, or other punctuation) ##### 6 => Searchable Categories A `Vector` (limit 2), with each element containing: * `tag: Byte`: App Category, one of: + 0 => Media - Applications for playing / recording / editing audio, video, drawing, photos, fonts, 3D-modeling + 1 => Office - Applications for viewing / editing / translating documents and spreadsheets + 2 => System - Applications for inspecting the operating system, tweaking, installing, and virtualization + 3 => Coding - Applications for software development, math, related tools + 4 => Internet - Applications for browsing the web, peer-to-peer file sharing, email, social media, etc. + 5 => Gaming - Applications for playing video games + 6 => Science - Applications for simulations, electrical/mechanical engineering, AI for inspecting data, robots + 7 => Education - Applications for education, learning + 8 => Life - Applications for to-do lists, calendar, wellbeing, fitness, directions, mapping, weather, smart home, etc. + 9 => Finance - Applications for coupons, buying/selling, trading, currency ##### 7 => Organization Name * `organization: Name`: Name of the organization that developed the software Modules --- * daku: Custom section for daku. * name: Custom standard name section. * parse: Utilities to help parse custom sections. * producers: Custom conventional producers section. * wasm: Parsing extensions for WebAssembly-specific format primitives. Structs --- * Error: Deserialization/serialization error * Module: Represents a WebAssembly module. Use new to build from a buffer. Enums --- * Section: Custom section Type Definitions --- * Result: Result type alias Module nucleide::daku === Custom section for daku.
Structs --- * Daku: Daku section * File: Metadata file (Nucleide extension) Enums --- * Category: App category (Nucleide extension) * Nucleide: Nucleide subsection extension for Daku (for use with Nucleic desktop) * Portal: A portal Traits --- * Read: Daku section reader. * Write: Daku section writer. Module nucleide::name === Custom standard name section. Enums --- * Name: Name subsection Traits --- * Read: Name section reader. * Write: Name section writer Module nucleide::parse === Utilities to help parse custom sections. Structs --- * Reader: Reads from a buffer. * Writer: Writes to a buffer. Traits --- * Number: Trait implemented for integer primitives * UInt: Trait implemented for unsigned integers Module nucleide::producers === Custom conventional producers section. Structs --- * Producer: Producer Field * VersionedSoftware: Versioned software name Enums --- * ProducerKind: Kind of producer Traits --- * Read: Producers section reader. * Write: Producers section writer Module nucleide::wasm === Parsing extensions for WebAssembly-specific format primitives. Traits --- * Read: WebAssembly primitive reader methods * Write: WebAssembly primitive writer methods Struct nucleide::Error === ``` pub struct Error(_); ``` Deserialization/serialization error Trait Implementations --- ### impl Debug for Error #### fn fmt(&self, f: &mut Formatter<'_>) -> Result Formats the value using the given formatter. ### impl Display for Error #### fn fmt(&self, f: &mut Formatter<'_>) -> Result Formats the value using the given formatter. Auto Trait Implementations --- ### impl RefUnwindSafe for Error ### impl Send for Error ### impl Sync for Error ### impl Unpin for Error ### impl UnwindSafe for Error Blanket Implementations --- ### impl<T> Any for T where T: 'static + ?Sized, #### fn type_id(&self) -> TypeId Gets the `TypeId` of `self`. ### impl<T> Borrow<T> for T where T: ?Sized, #### fn borrow(&self) -> &T Immutably borrows from an owned value. ### impl<T> BorrowMut<T> for T where T: ?Sized, #### fn borrow_mut(&mut self) -> &mut T Mutably borrows from an owned value.
### impl<T> From<T> for T #### fn from(t: T) -> T Returns the argument unchanged. ### impl<T, U> Into<U> for T where U: From<T>, #### fn into(self) -> U Calls `U::from(self)`. That is, this conversion is whatever the implementation of `From<T> for U` chooses to do. ### impl<T> ToString for T where T: Display + ?Sized, #### default fn to_string(&self) -> String Converts the given value to a `String`. ### impl<T, U> TryFrom<U> for T where U: Into<T>, #### type Error = Infallible The type returned in the event of a conversion error. #### fn try_from(value: U) -> Result<T, <T as TryFrom<U>>::Error> Performs the conversion. ### impl<T, U> TryInto<U> for T where U: TryFrom<T>, #### type Error = <U as TryFrom<T>>::Error The type returned in the event of a conversion error. #### fn try_into(self) -> Result<U, <U as TryFrom<T>>::Error> Performs the conversion. Struct nucleide::Module === ``` pub struct Module(_); ``` Represents WebAssembly module. Use new to build from buffer. Implementations --- ### impl Module #### pub fn new(buf: &[u8]) -> Result<Self> Creates a Module from a buffer. #### pub fn sections(&self) -> Result<impl Iterator<Item = Section<'_>>> Returns an iterator over the module’s custom sections. `Section`s are always yielded as the `Any` variant (borrowed). They can be parsed with `Section::to()`. #### pub fn set_section(&mut self, section: Section<'_>) -> Option<()> Sets the payload associated with the given custom section, or adds a new custom section, as appropriate. #### pub fn clear_section(&mut self, name: impl AsRef<str>) -> Option<Section<'static>> Removes the given custom section, if it exists. Returns the removed section if it existed, or None otherwise. #### pub fn into_buffer(self) -> Result<Vec<u8>> Write out module to a `Vec` of bytes. Trait Implementations --- ### impl Debug for Module #### fn fmt(&self, f: &mut Formatter<'_>) -> Result Formats the value using the given formatter.
Auto Trait Implementations
---
### impl RefUnwindSafe for Module
### impl Send for Module
### impl Sync for Module
### impl Unpin for Module
### impl UnwindSafe for Module

Blanket Implementations
---
### impl<T> Any for T where T: 'static + ?Sized,
#### fn type_id(&self) -> TypeId
Gets the `TypeId` of `self`.

### impl<T> Borrow<T> for T where T: ?Sized,
#### fn borrow(&self) -> &T
Immutably borrows from an owned value.

### impl<T> BorrowMut<T> for T where T: ?Sized,
#### fn borrow_mut(&mut self) -> &mut T
Mutably borrows from an owned value.

### impl<T> From<T> for T
#### fn from(t: T) -> T
Returns the argument unchanged.

### impl<T, U> Into<U> for T where U: From<T>,
#### fn into(self) -> U
Calls `U::from(self)`. That is, this conversion is whatever the implementation of `From<T> for U` chooses to do.

### impl<T, U> TryFrom<U> for T where U: Into<T>,
#### type Error = Infallible
The type returned in the event of a conversion error.
#### fn try_from(value: U) -> Result<T, <T as TryFrom<U>>::Error>
Performs the conversion.

### impl<T, U> TryInto<U> for T where U: TryFrom<T>,
#### type Error = <U as TryFrom<T>>::Error
The type returned in the event of a conversion error.
#### fn try_into(self) -> Result<U, <U as TryFrom<T>>::Error>
Performs the conversion.

Enum nucleide::Section
===
```
pub enum Section<'a> {
    Name(Vec<Name<'a>>),
    Producers(Vec<Producer<'a>>),
    Daku(Daku<'a>),
    Any {
        name: Cow<'a, str>,
        data: Cow<'a, [u8]>,
    },
}
```
Custom section

Variants
---
### Name(Vec<Name<'a>>)
The `name` section

### Producers(Vec<Producer<'a>>)
The `producers` section

### Daku(Daku<'a>)
The `daku` section

### Any
Any section
#### Fields
`name: Cow<'a, str>`: The name of the custom section
`data: Cow<'a, [u8]>`: Data in the custom section

Implementations
---
### impl Section<'_>
#### pub fn name(&self) -> &str
Get the name of the section.

#### pub fn to_any(&mut self) -> Option<(&str, &[u8])>
Convert the section to the `Any` variant, and return the `name` and `data`.

#### pub fn to(&self) -> Option<Self>
Convert to a non-`Any` variant if known; return `None` if it can't be converted.
##### Notes
Returns `None` if owned rather than borrowed, or if not the `Any` variant.

Trait Implementations
---
### impl<'a> Debug for Section<'a>
#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter.

Auto Trait Implementations
---
### impl<'a> RefUnwindSafe for Section<'a>
### impl<'a> Send for Section<'a>
### impl<'a> Sync for Section<'a>
### impl<'a> Unpin for Section<'a>
### impl<'a> UnwindSafe for Section<'a>

Blanket Implementations
---
### impl<T> Any for T where T: 'static + ?Sized,
#### fn type_id(&self) -> TypeId
Gets the `TypeId` of `self`.

### impl<T> Borrow<T> for T where T: ?Sized,
#### fn borrow(&self) -> &T
Immutably borrows from an owned value.

### impl<T> BorrowMut<T> for T where T: ?Sized,
#### fn borrow_mut(&mut self) -> &mut T
Mutably borrows from an owned value.

### impl<T> From<T> for T
#### fn from(t: T) -> T
Returns the argument unchanged.

### impl<T, U> Into<U> for T where U: From<T>,
#### fn into(self) -> U
Calls `U::from(self)`. That is, this conversion is whatever the implementation of `From<T> for U` chooses to do.

### impl<T, U> TryFrom<U> for T where U: Into<T>,
#### type Error = Infallible
The type returned in the event of a conversion error.
#### fn try_from(value: U) -> Result<T, <T as TryFrom<U>>::Error>
Performs the conversion.

### impl<T, U> TryInto<U> for T where U: TryFrom<T>,
#### type Error = <U as TryFrom<T>>::Error
The type returned in the event of a conversion error.
#### fn try_into(self) -> Result<U, <U as TryFrom<T>>::Error>
Performs the conversion.

Type Definition nucleide::Result
===
```
pub type Result<T = (), E = Error> = Result<T, E>;
```
Result type alias
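The `parse` module above works over the binary layout that the WebAssembly spec defines for custom sections: the payload starts with a LEB128-length-prefixed UTF-8 name, and everything after the name is the section's data. As a language-neutral illustration (the function names here are mine, not the crate's), that framing can be sketched in Python:

```python
def read_uleb128(buf, pos):
    """Decode an unsigned LEB128 integer; return (value, new_position)."""
    result = shift = 0
    while True:
        byte = buf[pos]
        pos += 1
        result |= (byte & 0x7F) << shift
        if byte & 0x80 == 0:
            return result, pos
        shift += 7

def split_custom_section(payload):
    """Split a custom-section payload into (name, data), per the WASM spec:
    a LEB128 name length, the UTF-8 name bytes, then the raw section data."""
    name_len, pos = read_uleb128(payload, 0)
    name = payload[pos:pos + name_len].decode("utf-8")
    return name, payload[pos + name_len:]

# A custom section named "daku" with two payload bytes.
name, data = split_custom_section(b"\x04daku\x01\x02")
```

This is the same split that lets `Module::sections()` yield each section as an `Any { name, data }` pair before `Section::to()` interprets the data.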
nemo-arethusa-plugin
readthedoc
Unknown
nemo_arethusa_plugin documentation
===

Arethusa for Nemo Plugin
===

Arethusa for CapiTainS Nemo
===
This repository contains a plugin for embedding Arethusa in [Flask Capitains Nemo](https://github.com/capitains/flask-capitains-nemo). It was produced during the Perseids Tufts Hackathon in May 2016.

The version of the Arethusa code used here can be found at <https://github.com/alpheios-project/arethusa/tree/nemo_arethusa_widget>. It is currently using commit 924f384.

Contributors
---
* <NAME> ( @balmas )
* <NAME> ( @fbaumgardt )
* <NAME> ( @ponteineptique )
* <NAME> ( @elijahjcooke )

### Arethusa for Nemo API

*class* `nemo_arethusa_plugin.Arethusa`(*queryinterface*, *\*args*, *\*\*kwargs*)

Arethusa plugin for Nemo.

Note: This class inherits some routes from the base [AnnotationsApiPlugin](http://flask-capitains-nemo.readthedocs.io/en/1.0.0b-dev/Nemo.api.html#flask.ext.nemo.plugins.annotations_api.AnnotationsApiPlugin)

| Parameters: | **queryinterface** (*flask_nemo.query.proto.QueryPrototype*): QueryInterface to use to retrieve annotations |
| Variables: | **interface**: QueryInterface used to retrieve annotations; **HAS_AUGMENT_RENDER**: (True) Adds a stack of render |

On top of AnnotationsAPIPlugin, the plugin adds three new routes:

> * `/arethusa.deps.json`, which serves information about Arethusa asset dependencies
> * `/arethusa-assets/<filename>`, which is a self-implemented assets route.
> * `/arethusa.config.json`, which is the config for the widget

It contains two new templates:

> * an `arethusa::text.html` template, which overrides the original when a treebank is available
> * an `arethusa::widget.tree.json` template, which provides the configuration for the widget

It contains a render function which uses `arethusa::text.html` instead of `main::text.html` if a treebank is found within the QueryInterface.

`Arethusa.render`(*\*\*kwargs*)

Render function stack. If the template called is `main::text.html`, it checks the annotations from its query interface and replaces the template with `arethusa::text.html` if there is a treebank annotation.

| Parameters: | **kwargs**: Dictionary of named arguments |
| Returns: | Dictionary of named arguments |

`Arethusa.r_arethusa_assets`(*filename*)

Route for assets.

| Parameters: | **filename**: Filename in data/assets to retrieve |
| Returns: | Content of the file |

`Arethusa.r_arethusa_dependencies`()

Returns the JSON config of the dependencies requested by Arethusa.

| Returns: | JSON with dependencies |

`Arethusa.r_arethusa_config`()

Config JSON route for Arethusa: defines the layout and other resources.

| Returns: | {"template"} |

### Text Template and Its Macros

The `arethusa::text.html` template contains a few macros to ensure the concept can be reused by other resources:

#### tb_macro()

Contains the container for the Arethusa widget.

#### tabs(text, treebank)
Creates a Bootstrap tab interface where the first tab is the text and the second is the treebank.

#### Blocks and extensions

* The template extends `main::container.html` and fills the blocks `additinalscript` and `article`.
* It imports `show_passage`, `header_passage`, `default_footer`, and `nav` from `main::text.html`.
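The render-stack behavior described above (swap the text template when a treebank annotation exists) reduces to a small transformation of the render kwargs. A minimal Python sketch, with an invented annotation shape since the real plugin queries its QueryInterface:

```python
def arethusa_render(annotations, **kwargs):
    """Sketch of the described render stack: if the template being rendered
    is main::text.html and a treebank annotation exists for the passage,
    substitute the Arethusa-enabled template. `annotations` is a list of
    dicts with a "type" key (hypothetical shape, for illustration only)."""
    if kwargs.get("template") == "main::text.html" and any(
        a.get("type") == "treebank" for a in annotations
    ):
        kwargs["template"] = "arethusa::text.html"
    return kwargs

# With a treebank present, the text template is overridden.
out = arethusa_render([{"type": "treebank"}], template="main::text.html")
```

Templates other than `main::text.html` pass through unchanged, which is what lets the plugin sit transparently in the render stack.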
simpleNeural
cran
R
Package ‘simpleNeural’
October 14, 2022

Version 0.1.3
Date 2020-02-18
Title An Easy to Use Multilayer Perceptron
Description Trains neural networks (multilayer perceptrons with one hidden layer) for bi- or multi-class classification.
Depends R (>= 3.6)
Suggests verification
License MIT + file LICENSE
LazyData true
Contact mailto:<EMAIL>
NeedsCompilation no
Author <NAME> [aut, cre] (LUNAM Universite, Universite Angers, Laboratoire d'ergonomie et d'epidemiologie en sante au travail (LEEST), w/ support from the French National Research Program for Environmental and Occupational Health of Anses (2012/18))
Maintainer <NAME> <<EMAIL>>
Repository CRAN
Date/Publication 2020-02-18 15:50:02 UTC

R topics documented:
sN.MLPpredict
sN.MLPtrain
sN.normalizeDF
UCI.BCD.Wisconsin
UCI.ISOLET.ABC
UCI.transfusion

sN.MLPpredict: Runs a multilayer perceptron

Description
Runs a multilayer perceptron.

Usage
sN.MLPpredict(nnModel, X, raw = FALSE)

Arguments
nnModel  A list containing the coefficients for the MLP (as produced with sN.MLPtrain())
X        Matrix of predictors
raw      If true, returns the score of each output option. If false, returns the output option with the highest value.

Value
The predicted values obtained by the MLP

Examples
data(UCI.transfusion);
X=as.matrix(sN.normalizeDF(as.data.frame(UCI.transfusion[,1:4])));
y=as.matrix(UCI.transfusion[,5]);
myMLP=sN.MLPtrain(X=X,y=y,hidden_layer_size=4,it=50,lambda=0.5,alpha=0.5);
myPrediction=sN.MLPpredict(nnModel=myMLP,X=X,raw=TRUE);
#library('verification');
#roc.area(y,myPrediction[,2]);

sN.MLPtrain: Trains a multilayer perceptron with 1 hidden layer

Description
Trains a multilayer perceptron with 1 hidden layer and a sigmoid activation function, using backpropagation and gradient descent. Don't forget to normalize the data first - sN.normalizeDF(), provided in the package, can be used to do so.
Usage
sN.MLPtrain(X, y, hidden_layer_size = 5, it = 50, lambda = 0.5, alpha = 0.5)

Arguments
X                  Matrix of predictors
y                  Vector of output (the ANN learns y=ANN(X)). Classes should be assigned an integer number, starting at 0 for the first class.
hidden_layer_size  Number of units in the hidden layer
it                 Number of iterations for the gradient descent. The default value of 50 may be a little low in some cases. 100 to 1000 are generally sensible values.
lambda             Penalization for model coefficients (regularization parameter)
alpha              Speed multiplier (learning rate) for gradient descent

Value
The coefficients of the MLP, in a list (Theta1 between input and hidden layers, Theta2 between hidden and output layers)

References
<NAME>, <NAME>, Artificial neural networks (the multilayer perceptron) - a review of applications in the atmospheric sciences, Atmospheric Environment, Volume 32, Issues 14-15, 1 August 1998, Pages 2627-2636, ISSN 1352-2310, doi: 10.1016/S1352-2310(97)00447-0 [http://www.sciencedirect.com/science/article/pii/S1352231097004470]
<NAME>.; <NAME>; <NAME>., "Artificial neural networks: a tutorial," Computer, vol. 29, no. 3, pp. 31-44, Mar 1996. doi: 10.1109/2.485891 [http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=485891&isnumber=10412]
Rumelhart, <NAME>., <NAME>, and <NAME>. "Learning Internal Representations by Error Propagation". <NAME>, <NAME>, and the PDP research group (editors). Parallel distributed processing: Explorations in the microstructure of cognition, Volume 1: Foundations. MIT Press, 1986.

Examples
# NB: the provided examples are just here to help use the package's functions.
# In real use cases you should perform a proper validation (cross-validation,
# external validation data...)
data(UCI.BCD.Wisconsin);
X=as.matrix(sN.normalizeDF(as.data.frame(UCI.BCD.Wisconsin[,3:32])));
y=as.matrix(UCI.BCD.Wisconsin[,2]);
myMLP=sN.MLPtrain(X=X,y=y,hidden_layer_size=20,it=50,lambda=0.5,alpha=0.5);
myPrediction=sN.MLPpredict(nnModel=myMLP,X=X,raw=TRUE);
#library('verification');
#roc.area(y,myPrediction[,2]);

sN.normalizeDF: Normalize data

Description
Normalize all columns of a dataframe so that all values are in [0;1] and for each column the maximum value is 1 and the minimum 0. newx=(x-min(X))/(max(X)-min(X))

Usage
sN.normalizeDF(dframe)

Arguments
dframe  The dataframe to be normalized

Value
The normalized dataframe

UCI.BCD.Wisconsin: Breast Cancer Wisconsin (Diagnostic) Data Set

Description
Features are computed from a digitized image of a fine needle aspirate (FNA) of a breast mass. They describe characteristics of the cell nuclei present in the image.

Usage
data(UCI.BCD.Wisconsin)

Format
A data frame with 569 rows and 32 variables

Details
The separating plane described above was obtained using Multisurface Method-Tree (MSM-T) [<NAME>. Bennett, "Decision Tree Construction Via Linear Programming." Proceedings of the 4th Midwest Artificial Intelligence and Cognitive Science Society, pp. 97-101, 1992], a classification method which uses linear programming to construct a decision tree. Relevant features were selected using an exhaustive search in the space of 1-4 features and 1-3 separating planes. The actual linear program used to obtain the separating plane in the 3-dimensional space is that described in: [<NAME> and <NAME>: "Robust Linear Programming Discrimination of Two Linearly Inseparable Sets", Optimization Methods and Software 1, 1992, 23-34].

The variables are as follows:
• ID number
• Diagnosis (1 = malignant, 0 = benign)
• Ten real-valued features are computed for each cell nucleus

Source
Dataset downloaded from the UCI Machine Learning Repository. http://archive.ics.uci.edu/ml/datasets/Breast+Cancer+Wisconsin+(Diagnostic)
Creators: 1. Dr.
<NAME>, General Surgery Dept., University of Wisconsin, Clinical Sciences Center, Madison, WI 53792; wolberg 'at' eagle.surgery.wisc.edu
2. <NAME>, Computer Sciences Dept., University of Wisconsin, 1210 West Dayton St., Madison, WI 53706; street 'at' cs.wisc.edu, 608-262-6619
3. <NAME>, Computer Sciences Dept., University of Wisconsin, 1210 West Dayton St., Madison, WI 53706; olvi 'at' cs.wisc.edu
Donor: <NAME>

References
<NAME>, <NAME> and <NAME>. Nuclear feature extraction for breast tumor diagnosis. IS&T/SPIE 1993 International Symposium on Electronic Imaging: Science and Technology, volume 1905, pages 861-870, San Jose, CA, 1993.
<NAME>. (2013). UCI Machine Learning Repository [http://archive.ics.uci.edu/ml]. Irvine, CA: University of California, School of Information and Computer Science.

UCI.ISOLET.ABC: ISOLET Data Set (ABC)

Description
This data set was generated as follows: 150 subjects spoke the name of each letter of the alphabet twice. Hence, we have 52 training examples from each speaker.

Usage
data(UCI.ISOLET.ABC)

Format
A data frame with 900 rows and 618 variables

Details
To reduce package size, only the first 3 letters are included here. The full dataset can be obtained from http://archive.ics.uci.edu/ml/datasets/ISOLET. The features are described in the paper by Cole and Fanty cited below. The features include spectral coefficients, contour features, sonorant features, pre-sonorant features, and post-sonorant features. The exact order of appearance of the features is not known.

Source
Dataset downloaded from the UCI Machine Learning Repository. http://archive.ics.uci.edu/ml/datasets/ISOLET
Creators: <NAME> and <NAME>, Department of Computer Science and Engineering, Oregon Graduate Institute, Beaverton, OR 97006; cole 'at' cse.ogi.edu, fanty 'at' cse.ogi.edu
Donor: <NAME>, Department of Computer Science, Oregon State University, Corvallis, OR 97331; tgd 'at' cs.orst.edu

References
<NAME>., <NAME>. (1991). Spoken letter recognition.
In <NAME>., <NAME>., and Touretzky, <NAME>. (Eds). Advances in Neural Information Processing Systems 3. San Mateo, CA: Morgan Kaufmann. [http://rexa.info/paper/bee6de062d0d168c5c5b5290b11cd6b12ca8472e]

Examples
# NB: 50 iterations isn't enough in this case,
# it was chosen so that the example runs fast enough on CRAN check farm
data(UCI.ISOLET.ABC);
X=as.matrix(sN.normalizeDF(as.data.frame(UCI.ISOLET.ABC[,1:617])));
y=as.matrix(UCI.ISOLET.ABC[,618]-1);
myMLP=sN.MLPtrain(X=X,y=y,hidden_layer_size=20,it=50,lambda=0.5,alpha=0.5);
myPrediction=sN.MLPpredict(nnModel=myMLP,X=X,raw=FALSE);
table(y,myPrediction);

UCI.transfusion: Blood Transfusion Service Center Data Set

Description
Data taken from the Blood Transfusion Service Center in Hsin-Chu City in Taiwan. To demonstrate the RFMTC marketing model (a modified version of RFM), this study adopted the donor database of the Blood Transfusion Service Center in Hsin-Chu City in Taiwan. The center passes its blood transfusion service bus to one university in Hsin-Chu City to gather donated blood about every three months. To build an RFMTC model, we selected 748 donors at random from the donor database. Each of these 748 donor records includes R (Recency: months since last donation), F (Frequency: total number of donations), M (Monetary: total blood donated in c.c.), T (Time: months since first donation), and a binary variable representing whether he/she donated blood in March 2007 (1 stands for donating blood; 0 stands for not donating blood).

Usage
data(UCI.transfusion)

Format
A data frame with 748 rows and 5 variables

Details
The variables are as follows:
• R. Recency: months since last donation
• F. Frequency: total number of donations
• M. Monetary: total blood donated in c.c. (mL)
• T. Time: months since first donation
• y. a binary variable representing whether he/she donated blood in March 2007 (1 = yes; 0 = no)

Source
Dataset downloaded from the UCI Machine Learning Repository. http://archive.ics.uci.edu/ml/datasets/Blood+Transfusion+Service+Center
Original Owner and Donor: Prof. <NAME>, Department of Information Management, Chung-Hua University, Hsin Chu, Taiwan 30067, R.O.C. e-mail: icyeh 'at' chu.edu.tw

References
Yeh, I-Cheng, Yang, King-Jang, and Ting, Tao-Ming, "Knowledge discovery on RFM model using Bernoulli sequence", Expert Systems with Applications, 2008. DOI: 10.1016/j.eswa.2008.07.018
<NAME>. (2013). UCI Machine Learning Repository [http://archive.ics.uci.edu/ml]. Irvine, CA: University of California, School of Information and Computer Science.
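The two building blocks of the simpleNeural workflow, min-max normalization (newx = (x - min(X)) / (max(X) - min(X))) and the forward pass of a one-hidden-layer sigmoid MLP, can be sketched outside of R as well. The following Python sketch mirrors sN.normalizeDF and sN.MLPpredict as documented (the weight layout with a leading bias column is an assumption; the toy weights are invented for illustration):

```python
import math

def normalize_columns(rows):
    """Min-max normalize each column to [0, 1], as sN.normalizeDF does:
    newx = (x - min(X)) / (max(X) - min(X))."""
    cols = list(zip(*rows))
    lo = [min(c) for c in cols]
    hi = [max(c) for c in cols]
    return [[(x - l) / (h - l) for x, l, h in zip(row, lo, hi)] for row in rows]

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def mlp_predict(theta1, theta2, x, raw=False):
    """Forward pass of a one-hidden-layer sigmoid MLP. theta1/theta2 are
    weight matrices whose first column multiplies a bias unit, mirroring
    the Theta1/Theta2 pair returned by sN.MLPtrain. With raw=True the
    per-class scores are returned; otherwise the index of the best class,
    matching the raw argument of sN.MLPpredict."""
    a1 = [1.0] + list(x)                      # prepend bias unit
    hidden = [1.0] + [sigmoid(sum(w * v for w, v in zip(row, a1)))
                      for row in theta1]
    scores = [sigmoid(sum(w * v for w, v in zip(row, hidden)))
              for row in theta2]
    return scores if raw else scores.index(max(scores))
```

Normalizing first matters because the sigmoid saturates for large inputs, which is exactly why the package documentation insists on sN.normalizeDF before training.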
fechner
cran
R
Package ‘fechner’
October 13, 2022

Version 1.0-3
Date 2016-06-05
Title Fechnerian Scaling of Discrete Object Sets
Description Functions and example datasets for Fechnerian scaling of discrete object sets. The user can compute Fechnerian distances among objects, representing subjective dissimilarities, and other related information. See package?fechner for an overview.
Author <NAME> [aut, cre], <NAME> [aut, trl] (Based on original MATLAB source by <NAME>. Dzhafarov.)
Maintainer <NAME> <<EMAIL>>
Depends R (>= 3.3.0)
Imports graphics, stats
LazyLoad yes
LazyData yes
License GPL (>= 2)
URL http://www.meb.edu.tum.de
NeedsCompilation yes
Repository CRAN
Date/Publication 2016-06-06 13:35:26

R topics documented:
fechner-package
check.data
check.regular
fechner
morse
noRegMin
plot.fechner
print.fechner
print.summary.fechner
regMin
summary.fechner
wish

fechner-package: Fechnerian Scaling of Discrete Object Sets: The R Package fechner

Description
Fechnerian scaling is a procedure for constructing a metric on a set of objects (e.g., symbols, X-ray films). The constructed Fechnerian metric represents subjective dissimilarities among the objects as perceived by a system (e.g., person, technical device). The package fechner provides functions and example datasets for performing and illustrating Fechnerian scaling of discrete object sets in R.

Details
Package: fechner
Type: Package
Version: 1.0-3
Date: 2016-06-05
License: GPL (>= 2)

Fechnerian scaling of discrete object (or stimulus) sets provides a theoretical framework for deriving, so-called Fechnerian, distances among objects representing subjective dissimilarities. A Fechnerian metric on a set of stimuli is constructed from the probabilities with which the objects are discriminated from each other by a perceiving system.
In addition to the oriented and overall Fechnerian distances, the package fechner also computes such related information as the points of subjective equality, the psychometric increments, the geodesic chains and loops with their corresponding lengths, and the generalized Shepardian dissimilarities (or S-index). Moreover, the package fechner provides functions for checking the required data format and the fundamental regular minimality/maximality condition. These concepts are explained in detail in the paper about the fechner package by Uenlue, <NAME> Dzhafarov (2009), and in the theoretical papers by Dzhafarov and Colonius (2006, 2007) (see 'References').

The package fechner is implemented based on the S3 system. It comes with a namespace, and consists of three external functions (functions the package exports): check.data, check.regular, and the main function of this package, fechner. It also contains six internal functions (functions not exported by the package), which are plot, print, and summary methods for objects of the class fechner, a print method for objects of the class summary.fechner, and two functions for computing intermediate graph-theoretic information: plot.fechner, print.fechner, summary.fechner, print.summary.fechner, and fechner-internal. The features of the package fechner are illustrated with two accompanying real datasets, morse and wish, and two artificial datasets, regMin and noRegMin.

Author(s)
<NAME>, <NAME>. Based on original MATLAB source by <NAME>.
Maintainer: <NAME> <<EMAIL>>

References
<NAME>. and <NAME>. (2006) Reconstructing distances among objects from their discriminability. Psychometrika, 71, 365–386.
Dzhafarov, <NAME>. and <NAME>. (2007) Dissimilarity cumulation theory and subjective metrics. Journal of Mathematical Psychology, 51, 290–304.
<NAME>. and <NAME>. and <NAME>. (2009) Fechnerian scaling in R: The package fechner. Journal of Statistical Software, 31(6), 1–24. URL http://www.jstatsoft.org/v31/i06/.
check.data: Check for Required Data Format

Description
check.data is used to check whether the data are of the required format.

Usage
check.data(X, format = c("probability.different", "percent.same", "general"))

Arguments
X       a required square matrix or data frame of numeric data. No NA, NaN, Inf, or -Inf values are allowed.
format  an optional character string giving the data format to be checked. This must be one of "probability.different", "percent.same", or "general", with default "probability.different", and may be abbreviated to a unique prefix.

Details
The data must be a matrix or a data frame, have the same number of rows and columns, and be numeric, consisting of real numbers. In particular, no infinite, undefined, or missing values are allowed. This is the general data format. The probability-different and percent-same formats, in addition, require that the data lie in the intervals [0, 1] and [0, 100], respectively. If all of the requirements for a data format are satisfied, the data are returned as a matrix with rows and columns labeled; otherwise the function produces respective messages.

The labeling is as follows.
1. If the data are entered without any labeling of the rows and columns: The function does the labeling automatically, as a1, b1, . . . , z1, a2, b2, . . . , z2, . . . , etc., up to a9, b9, . . . , z9 if the data are as large as 234 × 234; if the data are larger than 234 × 234, the labeling is v1, v2, . . . , vN, where N × N is the dimension of the data (and N > 234).
2. If the data are entered with either row or column labeling: In that case, the row or column labels are assigned to the columns or rows, respectively.
3. If the data are entered with row and column labeling: Since the labeling of both the rows and columns is now provided by the user manually, the same labeling must be used for both. If this is the case, the labeling is adopted. Otherwise the function produces a respective message.
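The automatic labeling scheme in rule 1 is a small deterministic algorithm: cycle the letters a-z with an incrementing digit suffix up to 26 × 9 = 234 labels, then fall back to v1, v2, .... A Python sketch of that scheme (the function name is mine, not the package's):

```python
def auto_labels(n):
    """Generate n row/column labels the way check.data is documented to:
    a1, b1, ..., z1, a2, ..., z9 for n <= 234, otherwise v1, ..., vN."""
    if n <= 234:
        return [f"{chr(ord('a') + i % 26)}{i // 26 + 1}" for i in range(n)]
    return [f"v{i + 1}" for i in range(n)]
```

The 234 cutoff is exactly where the letter-digit scheme runs out (z9 is the 234th label).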
Value
If the data are of the required format, check.data returns a matrix of the data with rows and columns labeled.

Author(s)
<NAME>, <NAME>. Based on original MATLAB source by <NAME>.

References
<NAME>. and <NAME>. (2006) Reconstructing distances among objects from their discriminability. Psychometrika, 71, 365–386.
<NAME>. and <NAME>. (2007) Dissimilarity cumulation theory and subjective metrics. Journal of Mathematical Psychology, 51, 290–304.
<NAME>. and <NAME>. and <NAME>. (2009) Fechnerian scaling in R: The package fechner. Journal of Statistical Software, 31(6), 1–24. URL http://www.jstatsoft.org/v31/i06/.

See Also
check.regular for checking regular minimality/maximality; fechner, the main function for Fechnerian scaling. See also fechner-package for general information about this package.

Examples
## dataset \link{wish} is of probability-different format
check.data(wish)
## dataset \link{morse} is of percent-same format
check.data(morse, format = "percent.same")
## a matrix without any labeling of rows and columns, of general format
## check.data does the labeling automatically
(X <- ((-1) * matrix(1:16, nrow = 4)))
check.data(X, format = "general")
## examples of data that are not of any of the three formats
## message: data must be matrix or data frame
check.data(as.character(matrix(1:16, nrow = 4)))
## message: data must have same number of rows and columns
check.data(matrix(1:12, nrow = 4))
## message: data must be numbers
check.data(matrix(LETTERS[1:16], nrow = 4))

check.regular: Check for Regular Minimality/Maximality

Description
check.regular is used to check whether the data satisfy regular minimality/maximality.

Usage
check.regular(X, type = c("probability.different", "percent.same", "reg.minimal", "reg.maximal"))

Arguments
X     a required square matrix or data frame of numeric data. No NA, NaN, Inf, or -Inf values are allowed.
type  an optional character string giving the type of check to be performed.
This must be one of "probability.different", "percent.same", "reg.minimal", or "reg.maximal", with default "probability.different", and may be abbreviated to a unique prefix.

Details
The type argument specifies whether regular minimality or regular maximality is to be checked. "probability.different" and "percent.same" are for datasets in the probability-different and percent-same formats, and imply regular minimality and regular maximality checks, respectively. "reg.minimal" and "reg.maximal" can be specified to force checking for regular minimality and regular maximality, respectively, independent of the used dataset. In particular, "reg.minimal" and "reg.maximal" are to be used for datasets that are properly in the general format.

check.regular calls check.data. In particular, the rows and columns of the canonical representation matrix (see 'Value') are canonically relabeled based on the labeling provided by check.data. That is, using the check.data labeling, the pairs of points of subjective equality (PSEs) are assigned identical labels, leaving intact the labeling of the rows and relabeling the columns with their corresponding PSEs. If the data X do not satisfy regular minimality/maximality, check.regular produces respective messages. The latter give information about the parts of X violating that condition.

Regular minimality/maximality is a fundamental property of discrimination and means that
1. every row contains a single minimal/maximal entry;
2. every column contains a single minimal/maximal entry;
3. an entry pij of X which is minimal/maximal in the ith row is also minimal/maximal in the jth column, and vice versa.

If pij is the entry which is minimal/maximal in the ith row and in the jth column, the ith row object (in one, the first, observation area) and the jth column object (in the other, the second, observation area) are called each other's PSEs.
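The three conditions above translate directly into a check over row and column minima. A minimal Python sketch of the regular minimality case (the maximality case is the same with max; this is an illustration of the definition, not the package's implementation):

```python
def check_regular_minimality(X):
    """Check the three conditions of regular minimality stated above for a
    square matrix X (list of equal-length rows): a unique minimum in every
    row, a unique minimum in every column, and coincidence of row and
    column minima. Returns the list of PSE pairs (i, j) if satisfied,
    or None otherwise."""
    n = len(X)
    pses = []
    for i, row in enumerate(X):
        m = min(row)
        if row.count(m) != 1:
            return None                      # row minimum not unique
        pses.append((i, row.index(m)))
    for i, j in pses:
        col = [X[r][j] for r in range(n)]
        m = min(col)
        if col.count(m) != 1 or col.index(m) != i:
            return None                      # column min not unique, or no coincidence
    # every column must host exactly one of the row minima
    return pses if len({j for _, j in pses}) == n else None

# Canonical form: the PSE pairs lie on the main diagonal.
pses = check_regular_minimality([[0.1, 0.8, 0.9],
                                 [0.7, 0.2, 0.6],
                                 [0.8, 0.9, 0.1]])
```

When the returned PSE pairs are not all on the diagonal, the data satisfy regular minimality in non-canonical form, which is exactly the case where check.regular permutes and relabels the columns.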
In psychophysical applications, for instance, observation area refers to the two fixed and perceptually distinct areas in which the stimuli are presented pairwise; for example, spatial arrangement (left versus right) or temporal order (first versus second).

Value
If the data do satisfy regular minimality/maximality, check.regular returns a named list consisting of the following four components:

canonical.representation
a matrix giving the representation of X in which regular minimality/maximality is satisfied in the canonical form. That is, the single minimal/maximal entries of the rows and columns lie on the main diagonal (of the canonical representation). In addition, the rows and columns are canonically relabeled.

canonical.transformation
a data frame giving the permutation of the columns of X used to produce the canonical representation of X. The first and second variables of this data frame, observation.area.1 and observation.area.2, respectively, represent the pairs of PSEs. The third variable, common.label, lists the identical labels assigned to the pairs of PSEs.

check
a character string giving the check that was performed. This is either "regular minimality" or "regular maximality".

in.canonical.form
logical. If TRUE, the permutation of the columns used to obtain the canonical representation of X is the identity; that is, the original data X are already in the canonical form.

Author(s)
<NAME>, <NAME>. Based on original MATLAB source by <NAME>.

References
<NAME>. and <NAME>. (2006) Reconstructing distances among objects from their discriminability. Psychometrika, 71, 365–386.
<NAME>. and <NAME>. (2007) Dissimilarity cumulation theory and subjective metrics. Journal of Mathematical Psychology, 51, 290–304.
<NAME>. and <NAME>. and <NAME>. (2009) Fechnerian scaling in R: The package fechner. Journal of Statistical Software, 31(6), 1–24. URL http://www.jstatsoft.org/v31/i06/.
See Also
check.data for checking data format; fechner, the main function for Fechnerian scaling. See also fechner-package for general information about this package.

Examples
## dataset \link{wish} satisfies regular minimality in canonical form
check.regular(wish)
## dataset \link{regMin} satisfies regular minimality in non-canonical
## form and so is canonically transformed and relabeled
regMin
check.regular(regMin)
## dataset \link{noRegMin} satisfies neither regular minimality nor
## regular maximality
check.regular(noRegMin, type = "probability.different")
check.regular(noRegMin, type = "reg.maximal")
## dataset \link{morse} satisfies regular maximality in canonical form
check.regular(morse, type = "percent.same")
## part of \link{morse} data satisfies regular maximality
check.regular(morse[c(2, 27:36), c(2, 27:36)], type = "reg.maximal")

fechner: Main Function for Fechnerian Scaling

Description
fechner provides the Fechnerian scaling computations. It is the main function of this package.

Usage
fechner(X, format = c("probability.different", "percent.same", "general"), compute.all = FALSE, check.computation = FALSE)

Arguments
X       a required square matrix or data frame of numeric data. No NA, NaN, Inf, or -Inf values are allowed.
format  an optional character string giving the data format that is used. This must be one of "probability.different", "percent.same", or "general", with default "probability.different", and may be abbreviated to a unique prefix.
compute.all  an optional logical. The default value FALSE corresponds to short computation, which yields the main Fechnerian scaling computations. The value TRUE corresponds to long computation, which additionally yields intermediate results and also allows for a check of computations if check.computation is set TRUE.
check.computation  an optional logical.
If TRUE, the check for whether the overall Fechnerian distance of the first kind (in the first observation area) is equal to the overall Fechnerian distance of the second kind (in the second observation area) is performed. The check requires compute.all to be set TRUE.

Details
The format argument specifies the data format that is used. "probability.different" and "percent.same" are for datasets in the probability-different and percent-same formats, and in the latter case, the data are automatically transformed prior to the analysis using the transformation (100 − X)/100. "general" is to be used for datasets that are properly in the general data format. Note that for "percent.same", the data must satisfy regular maximality; for "probability.different" and "general", regular minimality (otherwise the function fechner produces respective messages). In particular, data in the general format may possibly need to be transformed manually prior to calling the function fechner.

If compute.all = TRUE and check.computation = TRUE, the performed check computes the difference 'overall Fechnerian distance of the first kind minus overall Fechnerian distance of the second kind'. By theory, this difference is zero. The function fechner calculates that difference and checks for equality of these Fechnerian distances up to machine precision (see 'Value').

fechner calls check.regular, which in turn calls check.data. In particular, the specified data format and regular minimality/maximality are checked, and the rows and columns of the canonical representation matrix (see check.regular) are canonically relabeled based on the labeling provided by check.data.

The function fechner returns an object of the class fechner (see 'Value'), for which plot, print, and summary methods are provided: plot.fechner, print.fechner, and summary.fechner, respectively.
Moreover, objects of the class fechner are assigned the specific named attribute computation, which takes the value short or long, indicating whether short computation (compute.all = FALSE) or long computation (compute.all = TRUE) was performed, respectively.

Value

If the arguments X, format, compute.all, and check.computation are of required types, fechner returns a named list, of the class fechner and with the attribute computation, which consists of 6 or 18 components, depending on whether short computation (computation is then set short) or long computation (computation is then set long) was performed, respectively. The short computation list contains the following first 6 components, the long computation list the subsequent ones:

points.of.subjective.equality
    a data frame giving the permutation of the columns of X used to produce the canonical representation of X. The first and second variables of this data frame, observation.area.1 and observation.area.2, respectively, represent the pairs of points of subjective equality (PSEs). The third variable, common.label, lists the identical labels assigned to the pairs of PSEs. (first component of short computation list)

canonical.representation
    a matrix giving the representation of X in which regular minimality/maximality is satisfied in the canonical form. That is, the single minimal/maximal entries of the rows and columns lie on the main diagonal (of the canonical representation). In addition, the rows and columns are canonically relabeled.

overall.Fechnerian.distances
    a matrix of the overall Fechnerian distances (of the first kind); by theory, invariant across observation areas.

geodesic.loops
    a data frame of the geodesic loops of the first kind; must be read from left to right for the first kind, and from right to left for the second kind.

graph.lengths.of.geodesic.loops
    a matrix of the graph-theoretic (edge/link based) lengths of the geodesic loops (of the first kind).
S.index
    a matrix of the generalized 'Shepardian' dissimilarity (or S-index) values. An S-index value is defined as the psychometric length of the loop between a row stimulus and a column stimulus containing only these two stimuli. (last component of short computation list)

points.of.subjective.equality
    the same as in case of short computation; see above. (first component of long computation list)

canonical.representation
    the same as in case of short computation; see above.

psychometric.increments.1
    a matrix of the psychometric increments of the first kind.

psychometric.increments.2
    a matrix of the psychometric increments of the second kind.

oriented.Fechnerian.distances.1
    a matrix of the oriented Fechnerian distances of the first kind.

overall.Fechnerian.distances.1
    a matrix of the overall Fechnerian distances of the first kind.

oriented.Fechnerian.distances.2
    a matrix of the oriented Fechnerian distances of the second kind.

overall.Fechnerian.distances.2
    a matrix of the overall Fechnerian distances of the second kind.

check
    if check.computation = TRUE, a list of two components: difference and are.nearly.equal. The component difference is a matrix of the differences of the overall Fechnerian distances of the first and second kind; it ought to be a zero matrix. The component are.nearly.equal is a logical indicating whether this matrix of differences is equal to the zero matrix up to machine precision. If check.computation = FALSE, a character string saying "computation check was not requested".

geodesic.chains.1
    a data frame of the geodesic chains of the first kind.

geodesic.loops.1
    a data frame of the geodesic loops of the first kind.

graph.lengths.of.geodesic.chains.1
    a matrix of the graph-theoretic (edge/link based) lengths of the geodesic chains of the first kind.

graph.lengths.of.geodesic.loops.1
    a matrix of the graph-theoretic (edge/link based) lengths of the geodesic loops of the first kind.
geodesic.chains.2
    a data frame of the geodesic chains of the second kind.

geodesic.loops.2
    a data frame of the geodesic loops of the second kind.

graph.lengths.of.geodesic.chains.2
    a matrix of the graph-theoretic (edge/link based) lengths of the geodesic chains of the second kind.

graph.lengths.of.geodesic.loops.2
    a matrix of the graph-theoretic (edge/link based) lengths of the geodesic loops of the second kind.

S.index
    the same as in case of short computation; see above. (last component of long computation list)

Author(s)

<NAME>, <NAME>. Based on original MATLAB source by <NAME>.

References

<NAME>. and <NAME>. (2006) Reconstructing distances among objects from their discriminability. Psychometrika, 71, 365-386.

<NAME>. and <NAME>. (2007) Dissimilarity cumulation theory and subjective metrics. Journal of Mathematical Psychology, 51, 290-304.

<NAME>. and <NAME>. and <NAME>. (2009) Fechnerian scaling in R: The package fechner. Journal of Statistical Software, 31(6), 1-24. URL http://www.jstatsoft.org/v31/i06/.

See Also

check.data for checking data format; check.regular for checking regular minimality/maximality; plot.fechner, the S3 method for plotting objects of the class fechner; print.fechner, the S3 method for printing objects of the class fechner; summary.fechner, the S3 method for summarizing objects of the class fechner, which creates objects of the class summary.fechner; print.summary.fechner, the S3 method for printing objects of the class summary.fechner. See also fechner-package for general information about this package.
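The components above fit together as a simple graph construction: the psychometric increments define edge weights on a directed graph over the stimuli, each oriented Fechnerian distance is the length of the shortest chain between two stimuli, and the overall Fechnerian distance is the length of the shortest loop, G(a, b) = G1(a, b) + G1(b, a). The following is a hedged, language-neutral sketch of that construction (in Python, via Floyd-Warshall) based on the theory in Dzhafarov and Colonius (2006); it is an illustration, not the package's implementation:

```python
# Sketch (not the package's code): oriented and overall Fechnerian
# distances of the first kind, computed from a matrix psi of
# discrimination probabilities satisfying regular minimality in
# canonical form (row minima on the main diagonal).

def fechnerian_distances(psi):
    """psi[a][b]: probability of judging stimuli a and b 'different'."""
    n = len(psi)
    # Psychometric increments of the first kind:
    # Phi1(a, b) = psi(a, b) - psi(a, a)
    phi1 = [[psi[a][b] - psi[a][a] for b in range(n)] for a in range(n)]
    # Oriented distances: all-pairs shortest chains over the increments
    # (Floyd-Warshall).
    g1 = [row[:] for row in phi1]
    for k in range(n):
        for a in range(n):
            for b in range(n):
                if g1[a][k] + g1[k][b] < g1[a][b]:
                    g1[a][b] = g1[a][k] + g1[k][b]
    # Overall distance: length of the shortest loop through a and b.
    g = [[g1[a][b] + g1[b][a] for b in range(n)] for a in range(n)]
    return g1, g

# Toy 3x3 example with minima on the main diagonal:
psi = [[0.1, 0.8, 0.9],
       [0.7, 0.2, 0.6],
       [0.9, 0.5, 0.1]]
g1, g = fechnerian_distances(psi)
```

Note that g comes out symmetric with a zero diagonal, mirroring the documented invariance of the overall distances across observation areas.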
Examples

##
## (1) examples based on dataset \link{morse}
##

## dataset \link{morse} satisfies regular maximality in canonical form
morse
check.regular(morse, type = "percent.same")

## a self-contained 10-code subspace consisting of the codes for the
## letter B and the digits 0, 1, 2, 4, \ldots, 9
indices <- which(is.element(names(morse), c("B", c(0, 1, 2, 4:9))))
f.scal.morse <- fechner(morse, format = "percent.same")
f.scal.morse$geodesic.loops[indices, indices]
morse.subspace <- morse[indices, indices]
check.regular(morse.subspace, type = "percent.same")

## since the subspace is self-contained, results must be the same
f.scal.subspace.mo <- fechner(morse.subspace, format = "percent.same")
identical(f.scal.morse$geodesic.loops[indices, indices],
          f.scal.subspace.mo$geodesic.loops)
identical(f.scal.morse$overall.Fechnerian.distances[indices, indices],
          f.scal.subspace.mo$overall.Fechnerian.distances)

## Fechnerian scaling analysis using short computation
f.scal.subspace.mo
str(f.scal.subspace.mo)
attributes(f.scal.subspace.mo)

## for instance, the S-index
f.scal.subspace.mo$S.index

## Fechnerian scaling analysis using long computation
f.scal.subspace.long.mo <- fechner(morse.subspace, format = "percent.same",
                                   compute.all = TRUE, check.computation = TRUE)
f.scal.subspace.long.mo
str(f.scal.subspace.long.mo)
attributes(f.scal.subspace.long.mo)

## for instance, the geodesic chains of the first kind
f.scal.subspace.long.mo$geodesic.chains.1

## check whether the overall Fechnerian distance of the first kind is
## equal to the overall Fechnerian distance of the second kind
## the difference, by theory a zero matrix
f.scal.subspace.long.mo$check[1]
## or, up to machine precision
f.scal.subspace.long.mo$check[2]

## plot of the S-index versus the overall Fechnerian distance
## for all (off-diagonal) pairs of stimuli
plot(f.scal.subspace.long.mo)
## for all (off-diagonal) pairs of stimuli with geodesic loops
## containing at least 3 links
plot(f.scal.subspace.long.mo, level = 3)

## corresponding summaries, including Pearson correlation and C-index
summary(f.scal.subspace.long.mo)
## in particular, accessing detailed summary through assignment
detailed.summary.mo <- summary(f.scal.subspace.long.mo, level = 3)
str(detailed.summary.mo)

##
## (2) examples based on dataset \link{wish}
##

## dataset \link{wish} satisfies regular minimality in canonical form
wish
check.regular(wish, type = "probability.different")

## a self-contained 10-code subspace consisting of S, U, W, X,
## 0, 1, \ldots, 5
indices <- which(is.element(names(wish), c("S", "U", "W", "X", 0:5)))
f.scal.wish <- fechner(wish, format = "probability.different")
f.scal.wish$geodesic.loops[indices, indices]
wish.subspace <- wish[indices, indices]
check.regular(wish.subspace, type = "probability.different")

## since the subspace is self-contained, results must be the same
f.scal.subspace.wi <- fechner(wish.subspace, format = "probability.different")
identical(f.scal.wish$geodesic.loops[indices, indices],
          f.scal.subspace.wi$geodesic.loops)
identical(f.scal.wish$overall.Fechnerian.distances[indices, indices],
          f.scal.subspace.wi$overall.Fechnerian.distances)

## dataset \link{wish} transformed to percent-same format
check.data(100 - (wish * 100), format = "percent.same")

## Fechnerian scaling analysis using short computation
f.scal.subspace.wi
str(f.scal.subspace.wi)
attributes(f.scal.subspace.wi)

## for instance, the graph-theoretic lengths of geodesic loops
f.scal.subspace.wi$graph.lengths.of.geodesic.loops

## Fechnerian scaling analysis using long computation
f.scal.subspace.long.wi <- fechner(wish.subspace, format = "probability.different",
                                   compute.all = TRUE, check.computation = TRUE)
f.scal.subspace.long.wi
str(f.scal.subspace.long.wi)
attributes(f.scal.subspace.long.wi)

## for instance, the oriented Fechnerian distances of the first kind
f.scal.subspace.long.wi$oriented.Fechnerian.distances.1
## or, graph-theoretic lengths of chains and loops
identical(f.scal.subspace.long.wi$graph.lengths.of.geodesic.chains.1 +
          t(f.scal.subspace.long.wi$graph.lengths.of.geodesic.chains.1),
          f.scal.subspace.long.wi$graph.lengths.of.geodesic.loops.1)

## overall Fechnerian distances are not monotonically related to
## discrimination probabilities; however, there is a strong positive
## correlation
cor(as.vector(f.scal.wish$overall.Fechnerian.distances),
    as.vector(as.matrix(wish)))

## check whether the overall Fechnerian distance of the first kind is
## equal to the overall Fechnerian distance of the second kind
## the difference, by theory a zero matrix
f.scal.subspace.long.wi$check[1]
## or, up to machine precision
f.scal.subspace.long.wi$check[2]

## plot of the S-index versus the overall Fechnerian distance
## for all (off-diagonal) pairs of stimuli
plot(f.scal.subspace.long.wi)
## for all (off-diagonal) pairs of stimuli with geodesic loops
## containing at least 5 links
plot(f.scal.subspace.long.wi, level = 5)

## corresponding summaries, including Pearson correlation and C-index
summary(f.scal.subspace.long.wi)
## in particular, accessing detailed summary through assignment
detailed.summary.wi <- summary(f.scal.subspace.long.wi, level = 5)
str(detailed.summary.wi)

morse                   Rothkopf's Morse Code Data

Description

Rothkopf's (1957) Morse code data of discrimination probabilities among 36 auditory Morse code signals for the letters A, B, ..., Z and the digits 0, 1, ..., 9.

Usage

morse

Format

The morse data frame consists of 36 rows and 36 columns, representing the Morse code signals for the letters and digits A, ..., Z, 0, ..., 9 presented first and second, respectively. Each number, an integer, in the data frame gives the percentage of subjects who responded 'same' to the row signal followed by the column signal.

Details

Each signal consists of a sequence of dots and dashes. A chart of the Morse code letters and digits can be found at http://en.wikipedia.org/wiki/Morse_code.
Rothkopf's (1957) 36 x 36 Morse code data gives the same-different judgements of 598 subjects in response to the 36 x 36 auditorily presented pairs of Morse codes. Subjects who were not familiar with Morse code listened to a pair of signals constructed mechanically and separated by a pause of approximately 1.4 seconds. Each subject was required to state whether the two signals presented were the same or different. Each number in the morse data frame is a percentage based on roughly 150 subjects.

Note

The original Rothkopf's (1957) 36 x 36 dataset does not satisfy regular maximality. There are two maximal entries in row #2, of value 84, which are pBB and pBX. Following the argument in Dzhafarov and Colonius (2006), a statistically compatible dataset is obtained by replacing the value of pBX with 83 and leaving the rest of the data unchanged. The latter is the dataset accompanying the package fechner.

For typographic reasons, it may be useful to consider only a small subset of the stimulus set, best chosen to form a 'self-contained' subspace: a geodesic loop for any two of the subset's elements (computed using the complete dataset) is contained within the subset. For instance, a particular self-contained 10-code subspace of the 36 Morse codes consists of the codes for the letter B and the digits 0, 1, 2, 4, ..., 9 (see fechner).

Source

<NAME>. (1957) A measure of stimulus similarity and errors in some paired-associate learning tasks. Journal of Experimental Psychology, 53, 94-101.

References

Dzhafarov, <NAME>. and <NAME>. (2006) Reconstructing distances among objects from their discriminability. Psychometrika, 71, 365-386.

<NAME>. and <NAME>. (2007) Dissimilarity cumulation theory and subjective metrics. Journal of Mathematical Psychology, 51, 290-304.

<NAME>. and <NAME>. and <NAME>. (2009) Fechnerian scaling in R: The package fechner. Journal of Statistical Software, 31(6), 1-24. URL http://www.jstatsoft.org/v31/i06/.
See Also

check.data for checking data format; check.regular for checking regular minimality/maximality; fechner, the main function for Fechnerian scaling. See also wish for Wish's Morse-code-like data, and fechner-package for general information about this package.

noRegMin                Artificial Data: Regular Minimality Violated

Description

Artificial data of fictitious 'discrimination probabilities' among 10 fictitious stimuli.

Usage

noRegMin

Format

The noRegMin data frame consists of 10 rows and 10 columns, representing the fictitious stimuli presented in the first and second observation area, respectively. Each number, a numeric, in the data frame is assumed to give the relative frequency of perceivers scoring 'different' to the row stimulus 'followed' by the column stimulus.

Note

This dataset is artificial and included for illustrating regular minimality being violated. It differs from the artificial data regMin only in the entry in row #9 and column #10.

References

<NAME>. and <NAME>. (2006) Reconstructing distances among objects from their discriminability. Psychometrika, 71, 365-386.

<NAME>. and <NAME>. (2007) Dissimilarity cumulation theory and subjective metrics. Journal of Mathematical Psychology, 51, 290-304.

<NAME>. and <NAME>. and <NAME>. (2009) Fechnerian scaling in R: The package fechner. Journal of Statistical Software, 31(6), 1-24. URL http://www.jstatsoft.org/v31/i06/.

See Also

regMin for the other artificial data satisfying regular minimality in non-canonical form; check.data for checking data format; check.regular for checking regular minimality/maximality; fechner, the main function for Fechnerian scaling. See also morse for Rothkopf's Morse code data, wish for Wish's Morse-code-like data, and fechner-package for general information about this package.
Examples

## dataset noRegMin violates regular minimality
noRegMin
check.regular(noRegMin, type = "reg.minimal")

plot.fechner            Plot Method for Objects of Class fechner

Description

S3 method to plot objects of the class fechner.

Usage

## S3 method for class 'fechner'
plot(x, level = 2, ...)

Arguments

x       a required object of class fechner, obtained from a call to the function fechner.

level   an optional numeric, integer-valued and greater than or equal to 2, giving the level of comparison of the S-index and the overall Fechnerian distance G.

...     further arguments to be passed to or from other methods. They are ignored in this function.

Details

The plot method graphs the results obtained from Fechnerian scaling analyses. It produces a scatterplot of the overall Fechnerian distance G versus the S-index, with rugs added to the axes and jittered (amount = 0.01 of noise) to accommodate ties in the S-index and G values. The diagonal line y = x is for visual inspection of the deviations of the two types of values.

The level of comparison refers to the minimum number of links in geodesic loops. That is, choosing level n means that comparison involves only those S-index and G values that have geodesic loops containing not less than n links. If there are no (off-diagonal) pairs of stimuli with geodesic loops containing at least level links (in this case a plot is not possible), plot.fechner stops with an error message.

Value

If the arguments x and level are of required types, and if there are (off-diagonal) pairs of stimuli with geodesic loops containing at least level links, plot.fechner produces a plot, and invisibly returns NULL.

Author(s)

<NAME>, <NAME>. Based on original MATLAB source by <NAME>.

References

<NAME>. and <NAME>. (2006) Reconstructing distances among objects from their discriminability. Psychometrika, 71, 365-386.

<NAME>. and <NAME>. (2007) Dissimilarity cumulation theory and subjective metrics. Journal of Mathematical Psychology, 51, 290-304.

<NAME>.
and <NAME>. and <NAME>. (2009) Fechnerian scaling in R: The package fechner. Journal of Statistical Software, 31(6), 1-24. URL http://www.jstatsoft.org/v31/i06/.

See Also

print.fechner, the S3 method for printing objects of the class fechner; summary.fechner, the S3 method for summarizing objects of the class fechner, which creates objects of the class summary.fechner; print.summary.fechner, the S3 method for printing objects of the class summary.fechner; fechner, the main function for Fechnerian scaling, which creates objects of the class fechner. See also fechner-package for general information about this package.

Examples

## Fechnerian scaling of dataset \link{wish}
f.scal.wish <- fechner(wish)

## results are plotted for comparison levels 2 and 5
plot(f.scal.wish)
plot(f.scal.wish, level = 5)

print.fechner           Print Method for Objects of Class fechner

Description

S3 method to print objects of the class fechner.

Usage

## S3 method for class 'fechner'
print(x, ...)

Arguments

x     a required object of class fechner, obtained from a call to the function fechner.

...   further arguments to be passed to or from other methods. They are ignored in this function.

Details

The print method prints the main results obtained from Fechnerian scaling analyses, which are the overall Fechnerian distances and the geodesic loops.

Value

If the argument x is of required type, print.fechner prints the overall Fechnerian distances and the geodesic loops, and invisibly returns x.

Author(s)

<NAME>, <NAME>. Based on original MATLAB source by <NAME>.

References

<NAME>. and <NAME>. (2006) Reconstructing distances among objects from their discriminability. Psychometrika, 71, 365-386.

<NAME>. and <NAME>. (2007) Dissimilarity cumulation theory and subjective metrics. Journal of Mathematical Psychology, 51, 290-304.

<NAME>. and <NAME>. and <NAME>. (2009) Fechnerian scaling in R: The package fechner. Journal of Statistical Software, 31(6), 1-24. URL http://www.jstatsoft.org/v31/i06/.
See Also

plot.fechner, the S3 method for plotting objects of the class fechner; summary.fechner, the S3 method for summarizing objects of the class fechner, which creates objects of the class summary.fechner; print.summary.fechner, the S3 method for printing objects of the class summary.fechner; fechner, the main function for Fechnerian scaling, which creates objects of the class fechner. See also fechner-package for general information about this package.

Examples

## Fechnerian scaling of dataset \link{wish}
## overall Fechnerian distances and geodesic loops are printed
(f.scal.wish <- fechner(wish))

print.summary.fechner   Print Method for Objects of Class summary.fechner

Description

S3 method to print objects of the class summary.fechner.

Usage

## S3 method for class 'summary.fechner'
print(x, ...)

Arguments

x     a required object of class summary.fechner, obtained from a call to the function summary.fechner (through generic function summary).

...   further arguments to be passed to or from other methods. They are ignored in this function.

Details

The print method prints the summary information about objects of the class fechner computed by summary.fechner, which are the number of stimuli pairs used for comparison, a summary of the corresponding S-index values, a summary of the corresponding Fechnerian distance G values, the Pearson correlation, the C-index, and the comparison level. Specific summary information details such as individual stimuli pairs and their corresponding S-index and G values can be accessed through assignment (see 'Examples').

Value

If the argument x is of required type, print.summary.fechner prints the aforementioned summary information in 'Details', and invisibly returns x.

Author(s)

<NAME>, <NAME>. Based on original MATLAB source by <NAME>.

References

Dzhafarov, <NAME>. and <NAME>. (2006) Reconstructing distances among objects from their discriminability. Psychometrika, 71, 365-386.

Dzhafarov, <NAME>. and <NAME>.
(2007) Dissimilarity cumulation theory and subjective metrics. Journal of Mathematical Psychology, 51, 290-304.

<NAME>. and <NAME>. and <NAME>. (2009) Fechnerian scaling in R: The package fechner. Journal of Statistical Software, 31(6), 1-24. URL http://www.jstatsoft.org/v31/i06/.

See Also

plot.fechner, the S3 method for plotting objects of the class fechner; print.fechner, the S3 method for printing objects of the class fechner; summary.fechner, the S3 method for summarizing objects of the class fechner, which creates objects of the class summary.fechner; fechner, the main function for Fechnerian scaling, which creates objects of the class fechner. See also fechner-package for general information about this package.

Examples

## Fechnerian scaling of dataset \link{morse}
## summary information about the Fechnerian scaling object is printed
## accessing detailed summary through assignment
(detailed.summary <- summary(fechner(morse, format = "percent.same")))
str(detailed.summary)
detailed.summary$pairs.used.for.comparison[3, ]

regMin                  Artificial Data: Regular Minimality In Non-canonical Form

Description

Artificial data of fictitious 'discrimination probabilities' among 10 fictitious stimuli.

Usage

regMin

Format

The regMin data frame consists of 10 rows and 10 columns, representing the fictitious stimuli presented in the first and second observation area, respectively. Each number, a numeric, in the data frame is assumed to give the relative frequency of perceivers scoring 'different' to the row stimulus 'followed' by the column stimulus.

Note

This dataset is artificial and included for illustrating regular minimality in the non-canonical form. It differs from the artificial data noRegMin only in the entry in row #9 and column #10.

References

<NAME>. and <NAME>. (2006) Reconstructing distances among objects from their discriminability. Psychometrika, 71, 365-386.

<NAME>. and <NAME>. (2007) Dissimilarity cumulation theory and subjective metrics.
Journal of Mathematical Psychology, 51, 290-304.

<NAME>. and <NAME>. and <NAME>. (2009) Fechnerian scaling in R: The package fechner. Journal of Statistical Software, 31(6), 1-24. URL http://www.jstatsoft.org/v31/i06/.

See Also

noRegMin for the other artificial data violating regular minimality; check.data for checking data format; check.regular for checking regular minimality/maximality; fechner, the main function for Fechnerian scaling. See also morse for Rothkopf's Morse code data, wish for Wish's Morse-code-like data, and fechner-package for general information about this package.

Examples

## dataset regMin satisfies regular minimality in non-canonical form
regMin
check.regular(regMin, type = "reg.minimal")

summary.fechner         Summary Method for Objects of Class fechner

Description

S3 method to summarize objects of the class fechner.

Usage

## S3 method for class 'fechner'
summary(object, level = 2, ...)

Arguments

object   a required object of class fechner, obtained from a call to the function fechner.

level    an optional numeric, integer-valued and greater than or equal to 2, giving the level of comparison of the S-index and the overall Fechnerian distance G.

...      further arguments to be passed to or from other methods. They are ignored in this function.

Details

The summary method outlines the results obtained from Fechnerian scaling analyses. It computes the Pearson correlation coefficient and the C-index (see Uenlue, Kiefer, and Dzhafarov (2009))

    C = sum((S - G)^2) / (sum(S^2) + sum(G^2))

for specific (controlled by the argument level) stimuli pairs with their corresponding S-index and G values.

The level of comparison refers to the minimum number of links in geodesic loops. That is, choosing level n means that comparison involves only those S-index and G values that have geodesic loops containing not less than n links.
If there are no (off-diagonal) pairs of stimuli with geodesic loops containing at least level links (in this case a summary is not possible), summary.fechner stops with an error message.

The function summary.fechner returns an object of the class summary.fechner (see 'Value'), for which a print method, print.summary.fechner, is provided. Specific summary information details such as individual stimuli pairs and their corresponding S-index and G values can be accessed through assignment (see 'Examples').

Value

If the arguments object and level are of required types, and if there are (off-diagonal) pairs of stimuli with geodesic loops containing at least level links, summary.fechner returns a named list, of the class summary.fechner, consisting of the following four components:

pairs.used.for.comparison
    a data frame giving the pairs of stimuli (first variable stimuli.pairs) and their corresponding S-index (second variable S.index) and G (third variable Fechnerian.distance.G) values used for comparison.

Pearson.correlation
    a numeric giving the value of the Pearson correlation coefficient if it exists, or a character string saying "Pearson's correlation coefficient is not defined" if it does not exist.

C.index
    a numeric giving the value of the C-index.

comparison.level
    a numeric giving the level of comparison used.

Author(s)

<NAME>, <NAME>. Based on original MATLAB source by <NAME>.

References

Dzhafarov, <NAME>. and <NAME>. (2006) Reconstructing distances among objects from their discriminability. Psychometrika, 71, 365-386.

Dzhafarov, <NAME>. and <NAME>. (2007) Dissimilarity cumulation theory and subjective metrics. Journal of Mathematical Psychology, 51, 290-304.

<NAME>. and <NAME>. and <NAME>. (2009) Fechnerian scaling in R: The package fechner. Journal of Statistical Software, 31(6), 1-24. URL http://www.jstatsoft.org/v31/i06/.
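To make the two summary statistics concrete, here is a small sketch (in Python, purely illustrative). It assumes the C-index has the form C = sum((S - G)^2) / (sum(S^2) + sum(G^2)), so that C equals 0 exactly when the S-index and G values coincide:

```python
import math

def pearson(x, y):
    # Plain Pearson correlation coefficient.
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    num = sum((a - mx) * (b - my) for a, b in zip(x, y))
    den = (math.sqrt(sum((a - mx) ** 2 for a in x))
           * math.sqrt(sum((b - my) ** 2 for b in y)))
    return num / den

def c_index(s, g):
    # Assumed form: summed squared differences, normalized by the
    # summed squared magnitudes of both sets of values.
    num = sum((a - b) ** 2 for a, b in zip(s, g))
    den = sum(a ** 2 for a in s) + sum(b ** 2 for b in g)
    return num / den

s = [0.4, 0.9, 1.3, 2.0]   # S-index values for some stimulus pairs
g = [0.4, 0.9, 1.3, 2.0]   # matching overall Fechnerian distances G
```

With s and g identical as above, c_index(s, g) is 0 and the Pearson correlation is 1, the two endpoints the summary is designed to discriminate between.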
See Also

plot.fechner, the S3 method for plotting objects of the class fechner; print.fechner, the S3 method for printing objects of the class fechner; print.summary.fechner, the S3 method for printing objects of the class summary.fechner; fechner, the main function for Fechnerian scaling, which creates objects of the class fechner. See also fechner-package for general information about this package.

Examples

## Fechnerian scaling of dataset \link{wish}
f.scal.wish <- fechner(wish)

## results are summarized for comparison levels 2 and 5
summary(f.scal.wish)
summary(f.scal.wish, level = 5)

## accessing detailed summaries through assignment
str(detailed.summary.l1 <- summary(f.scal.wish))
detailed.summary.l5 <- summary(f.scal.wish, level = 5)
detailed.summary.l5$pairs.used.for.comparison[1, ]

## to verify the obtained summaries
f.scal.wish$geodesic.loops
f.scal.wish$S.index
f.scal.wish$overall.Fechnerian.distances

wish                    Wish's Morse-code-like Data

Description

Wish's (1967) Morse-code-like data of discrimination probabilities among 32 auditory Morse-code-like signals.

Usage

wish

Format

The wish data frame consists of 32 rows and 32 columns, representing the Morse-code-like signals (see 'Details') presented first and second, respectively. Each number, a numeric, in the data frame gives the relative frequency of subjects who responded 'different' to the row signal followed by the column signal.

Details

The 32 Morse-code-like signals in Wish's (1967) study were 5-element sequences T1 P1 T2 P2 T3, where T stands for a tone (short or long) and P stands for a pause (1 or 3 units long). As in Dzhafarov and Colonius (2006), the stimuli are labeled A, B, ..., Z, 0, 1, ..., 5, in the order they are presented in Wish's (1967) article. Wish's (1967) 32 x 32 Morse-code-like data gives the same-different judgements of subjects in response to the 32 x 32 auditorily presented pairs of codes.

Note

The original Wish's (1967) 32 x 32 dataset does not satisfy regular minimality.
There is the entry pTV = 0.03, which is the same as pVV and smaller than pTT = 0.06. Following the argument in Dzhafarov and Colonius (2006), a statistically compatible dataset is obtained by replacing the value of pTV with 0.07 and leaving the rest of the data unchanged. The latter is the dataset accompanying the package fechner.

For typographic reasons, it may be useful to consider only a small subset of the stimulus set, best chosen to form a 'self-contained' subspace: a geodesic loop for any two of the subset's elements (computed using the complete dataset) is contained within the subset. For instance, a particular self-contained 10-code subspace of the 32 Morse-code-like signals consists of S, U, W, X, 0, 1, ..., 5 (see fechner).

Source

<NAME>. (1967) A model for the perception of Morse code-like signals. Human Factors, 9, 529-540.

References

<NAME>. and <NAME>. (2006) Reconstructing distances among objects from their discriminability. Psychometrika, 71, 365-386.

<NAME>. and <NAME>. (2007) Dissimilarity cumulation theory and subjective metrics. Journal of Mathematical Psychology, 51, 290-304.

<NAME>. and <NAME>. and <NAME>. (2009) Fechnerian scaling in R: The package fechner. Journal of Statistical Software, 31(6), 1-24. URL http://www.jstatsoft.org/v31/i06/.

See Also

check.data for checking data format; check.regular for checking regular minimality/maximality; fechner, the main function for Fechnerian scaling. See also morse for Rothkopf's Morse code data, and fechner-package for general information about this package.
[@aws-sdk/credential-provider-node](#aws-sdkcredential-provider-node) === [AWS Credential Provider for Node.JS](#aws-credential-provider-for-nodejs) --- This module provides a factory function, `fromEnv`, that will attempt to source AWS credentials from a Node.JS environment. It will attempt to find credentials from the following sources (listed in order of precedence): * Environment variables exposed via `process.env` * SSO credentials from token cache * Web identity token credentials * Shared credentials and config ini files * The EC2/ECS Instance Metadata Service The default credential provider will invoke one provider at a time and only continue to the next if no credentials have been located. For example, if the process finds values defined via the `AWS_ACCESS_KEY_ID` and `AWS_SECRET_ACCESS_KEY` environment variables, the files at `~/.aws/credentials` and `~/.aws/config` will not be read, nor will any messages be sent to the Instance Metadata Service. If invalid configuration is encountered (such as a profile in `~/.aws/credentials` specifying as its `source_profile` the name of a profile that does not exist), then the chained provider will be rejected with an error and will not invoke the next provider in the list. *IMPORTANT*: if you intend to acquire credentials using EKS [IAM Roles for Service Accounts](https://docs.aws.amazon.com/eks/latest/userguide/iam-roles-for-service-accounts.html) then you must explicitly specify a value for `roleAssumerWithWebIdentity`. There is a default function available in `@aws-sdk/client-sts` package. 
An example of using this:

```
const { getDefaultRoleAssumerWithWebIdentity } = require("@aws-sdk/client-sts");
const { defaultProvider } = require("@aws-sdk/credential-provider-node");
const { S3Client, GetObjectCommand } = require("@aws-sdk/client-s3");

const provider = defaultProvider({
  roleAssumerWithWebIdentity: getDefaultRoleAssumerWithWebIdentity({
    // You must explicitly pass a region if you are not using us-east-1
    region: "eu-west-1",
  }),
});

const client = new S3Client({ credentialDefaultProvider: provider });
```

*IMPORTANT*: We provide a wrapper of this provider in the `@aws-sdk/credential-providers` package to save you from importing `getDefaultRoleAssumerWithWebIdentity()` or `getDefaultRoleAssumer()` from the STS package. Similarly, you can do:

```
const { fromNodeProviderChain } = require("@aws-sdk/credential-providers");
const credentials = fromNodeProviderChain();
const client = new S3Client({ credentials });
```

[Supported configuration](#supported-configuration)
---

You may customize how credentials are resolved by providing an options hash to the `defaultProvider` factory function. The following options are supported:

* `profile` - The configuration profile to use. If not specified, the provider will use the value in the `AWS_PROFILE` environment variable or a default of `default`.
* `filepath` - The path to the shared credentials file. If not specified, the provider will use the value in the `AWS_SHARED_CREDENTIALS_FILE` environment variable or a default of `~/.aws/credentials`.
* `configFilepath` - The path to the shared config file. If not specified, the provider will use the value in the `AWS_CONFIG_FILE` environment variable or a default of `~/.aws/config`.
* `mfaCodeProvider` - A function that returns a promise fulfilled with an MFA token code for the provided MFA serial code. If a profile requires an MFA code and `mfaCodeProvider` is not a valid function, the credential provider promise will be rejected.
* `roleAssumer` - A function that assumes a role and returns a promise fulfilled with credentials for the assumed role. If not specified, no role will be assumed, and an error will be thrown.
* `roleArn` - ARN to assume. If not specified, the provider will use the value in the `AWS_ROLE_ARN` environment variable.
* `webIdentityTokenFile` - File location of where the `OIDC` token is stored. If not specified, the provider will use the value in the `AWS_WEB_IDENTITY_TOKEN_FILE` environment variable.
* `roleAssumerWithWebIdentity` - A function that assumes a role with web identity and returns a promise fulfilled with credentials for the assumed role.
* `timeout` - The connection timeout (in milliseconds) to apply to any remote requests. If not specified, a default value of `1000` (one second) is used.
* `maxRetries` - The maximum number of times any HTTP connections should be retried. If not specified, a default value of `0` will be used.

[Related packages:](#related-packages)
---

* [AWS Credential Provider for Node.JS - Environment Variables](https://github.com/aws/aws-sdk-js-v3/blob/HEAD/packages/credential-provider-env)
* [AWS Credential Provider for Node.JS - SSO](https://github.com/aws/aws-sdk-js-v3/blob/HEAD/packages/credential-provider-sso)
* [AWS Credential Provider for Node.JS - Web Identity](https://github.com/aws/aws-sdk-js-v3/blob/HEAD/packages/credential-provider-web-identity)
* [AWS Credential Provider for Node.JS - Shared Configuration Files](https://github.com/aws/aws-sdk-js-v3/blob/HEAD/packages/credential-provider-ini)
* [AWS Credential Provider for Node.JS - Instance and Container Metadata](https://github.com/aws/aws-sdk-js-v3/blob/HEAD/packages/credential-provider-imds)
* [AWS Shared Configuration File Loader](https://github.com/aws/aws-sdk-js-v3/blob/HEAD/packages/shared-ini-file-loader)

Readme
---

### Keywords

* aws
* credentials
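The precedence behavior described above — invoke one provider at a time and stop at the first that yields credentials — can be sketched in a few lines of plain JavaScript. `chain` and `fromEnvSketch` are illustrative names for this sketch, not the SDK's actual exports:

```javascript
// Each provider is an async function that resolves with credentials or throws.
// The chain tries providers in order and returns the first successful result.
const chain = (...providers) => async () => {
  let lastError;
  for (const provider of providers) {
    try {
      return await provider();
    } catch (err) {
      lastError = err; // nothing found here; fall through to the next provider
    }
  }
  throw lastError ?? new Error("No credentials found in any provider");
};

// Example provider: environment variables, the highest-precedence source.
const fromEnvSketch = async () => {
  const { AWS_ACCESS_KEY_ID, AWS_SECRET_ACCESS_KEY } = process.env;
  if (!AWS_ACCESS_KEY_ID || !AWS_SECRET_ACCESS_KEY) {
    throw new Error("Env credentials not set");
  }
  return { accessKeyId: AWS_ACCESS_KEY_ID, secretAccessKey: AWS_SECRET_ACCESS_KEY };
};
```

Note that the real chain additionally short-circuits on invalid configuration (it rejects instead of falling through), which is how the `source_profile` error described above stops the chain.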
Package ‘bayesanova’ October 12, 2022
Type Package
Title Bayesian Inference in the Analysis of Variance via Markov Chain Monte Carlo in Gaussian Mixture Models
Version 1.5
Date 2021-10-27
Author <NAME>
Maintainer <NAME> <<EMAIL>>
Description Provides a Bayesian version of the analysis of variance based on a three-component Gaussian mixture for which a Gibbs sampler produces posterior draws. For details about the Bayesian ANOVA based on Gaussian mixtures, see Kelter (2019) <arXiv:1906.07524>.
License GPL-2
Imports MCMCpack
Suggests coda, knitr, MASS
NeedsCompilation no
Repository CRAN
Date/Publication 2021-10-28 13:40:09 UTC

R topics documented: bayesANOVA-package, anovaplot, assumption.check, bayes.anova, post.pred.check

bayesANOVA-package — Bayesian ANOVA

Description
Provides a Bayesian version of the analysis of variance (ANOVA) based on a three-component Gaussian mixture, for which a Gibbs sampler produces the posteriors of the means and standard deviation of each component. Also, model assumptions can be checked and results visualised.

Details
The core function is bayes.anova, which provides the Bayesian version of the ANOVA. Also, assumptions can be checked via assumption.check, and anovaplot produces visualizations of the results.

Author(s)
<NAME>
Maintainer: <NAME>

References
For details, see: https://arxiv.org/abs/1906.07524v1

Examples
set.seed(42)
x1=rnorm(75,0,1)
x2=rnorm(75,1,1)
x3=rnorm(75,2,1)
assumption.check(x1,x2,x3,conf.level = 0.95)
result=bayes.anova(n=1000,first=x1,second=x2,third=x3)
anovaplot(result)

anovaplot — anovaplot

Description
Plots the results of a Bayesian ANOVA

Usage
anovaplot(dataframe, type="rope", sd="sd", ci=0.95)

Arguments
dataframe A dataframe which is the result of a Bayesian ANOVA
type Selects the type of plot which should be produced.
The default is "rope", which produces a posterior region of practical equivalence (ROPE) analysis and posterior distributions for the three effect sizes of interest. Other options are to produce posteriors for each parameter via type="pars", or to produce posteriors for the difference of means and variances via type="diff".
sd Selects if the results include posterior draws for the standard deviation (default) or the variance. sd="sd" is the default. sd="var" assumes that the dataframe includes posterior draws of the variances of each group.
ci The credible level used for producing credible intervals. The default is ci=0.95.

Value
Produces plots of the results depending on which type is selected. In the default setting, the produced plots include horizontal colored lines which visualize the standard regions of practical equivalence (ROPEs) for Cohen's effect size δ. In particular, the ROPE of no effect, δ ∈ (−0.2, 0.2), is shown as a red horizontal line, the ROPE of a small effect, |δ| ∈ [0.2, 0.5), is shown as an orange horizontal line, the ROPE of a medium effect, |δ| ∈ [0.5, 0.8), is shown as a green horizontal line, and the ROPE of a large effect, |δ| ∈ [0.8, ∞), is shown as a purple horizontal line. Corresponding dashed vertical lines show the boundaries between these default ROPEs for effect sizes.

Author(s)
<NAME>

References
For details, see: https://arxiv.org/abs/1906.07524v1

Examples
set.seed(42)
x1=rnorm(75,0,1)
x2=rnorm(75,1,1)
x3=rnorm(75,2,1)
result=bayes.anova(n=1000,first=x1,second=x2,third=x3)
anovaplot(result)
anovaplot(result, type="effect")
x4=rnorm(75,3,1)
result2=bayes.anova(n=1000,first=x1,second=x2,third=x3,fourth=x4)
anovaplot(result2)

assumption.check — assumption.check

Description
This function checks the assumption of normality for each of the groups x1, x2, x3 (and optionally x4, x5 and x6) used in the Bayesian ANOVA via Shapiro-Wilk tests with confidence level conf.level.
Usage
assumption.check(x1,x2,x3,x4=NULL,x5=NULL,x6=NULL,conf.level=0.95)

Arguments
x1 Numerical vector containing the values for the first group
x2 Numerical vector containing the values for the second group
x3 Numerical vector containing the values for the third group
x4 Numerical vector containing the values for the fourth group. Defaults to NULL.
x5 Numerical vector containing the values for the fifth group. Defaults to NULL.
x6 Numerical vector containing the values for the sixth group. Defaults to NULL.
conf.level Confidence level of the Shapiro-Wilk test used. The significance level equals 1-conf.level.

Details
If a single Shapiro-Wilk test fails, the method returns a warning and recommends to use further diagnostics.

Value
Histograms and Quantile-Quantile plots for all groups are produced, and either a warning or a confirmation of normality in all three groups is printed to the console.

Author(s)
<NAME>

References
For details, see: https://arxiv.org/abs/1906.07524v1

Examples
set.seed(42)
x1=rnorm(75,0,1)
x2=rnorm(75,1,1)
x3=rnorm(75,2,1)
assumption.check(x1,x2,x3,conf.level = 0.95)

bayes.anova — bayes.anova

Description
This function runs a Bayesian analysis of variance (ANOVA) on the data. The Bayesian ANOVA model assumes normally distributed data in all three groups, and conducts inference based on a Gibbs sampler in a three-component Gaussian mixture with unknown parameters.

Usage
bayes.anova(n=10000,first,second,third,fourth=NULL,fifth=NULL,sixth=NULL, hyperpars="custom",burnin=n/2,sd="sd",q=0.1,ci=0.95)

Arguments
n Number of posterior draws the Gibbs sampler produces. The default is n=10000.
first Numerical vector containing the values of the first group
second Numerical vector containing the values of the second group
third Numerical vector containing the values of the third group
fourth Numerical vector containing the values of the fourth group. Default value is NULL.
fifth Numerical vector containing the values of the fifth group. Default value is NULL.
sixth Numerical vector containing the values of the sixth group. Default value is NULL.
hyperpars Sets the hyperparameters of the prior distributions. Two options are provided: the default is "custom", and the other is "rafterys". For details, see the references.
burnin Burn-in samples for the Gibbs sampler
sd Selects if posterior draws should be produced for the standard deviation (default) or the variance. The two options are "sd" and "var" respectively.
q Tuning parameter for the hyperparameters. The default is q=0.1 and it is recommended not to change this.
ci The credible level for the credible intervals produced. Default is ci=0.95.

Details
The Gibbs sampler is run with four Markov chains to run convergence diagnostics.

Value
Returns a dataframe which includes four columns for each parameter of interest. Each column corresponds to the posterior draws of a single Markov chain obtained by the Gibbs sampling algorithm.

Author(s)
<NAME>

References
For details, see: https://arxiv.org/abs/1906.07524v1

Examples
set.seed(42)
x1=rnorm(75,0,1)
x2=rnorm(75,1,1)
x3=rnorm(75,2,1)
x4=rnorm(75,-1,1)
result=bayes.anova(first=x1,second=x2,third=x3)
result=bayes.anova(n=1000,first=x1,second=x2,third=x3, hyperpars="custom",burnin=750,ci=0.99,sd="sd")
result2=bayes.anova(n=1000,first=x1,second=x2,third=x3, fourth=x4)

post.pred.check — post.pred.check

Description
Provides a posterior predictive check for a fitted Bayesian ANOVA model.

Usage
post.pred.check(anovafit, ngroups, out, reps = 50, eta)

Arguments
anovafit A dataframe returned by bayes.anova
ngroups An integer which is the number of groups used in the ANOVA
out A numerical vector containing the originally observed data in all groups
reps An integer which is the number of posterior predictive distributions sampled from the ANOVA model's posterior distribution. Defaults to 50 sampled parameters.
eta A numerical vector containing the weight values of the mixture.
Details
Provides a posterior predictive check for a fitted Bayesian ANOVA model.

Value
Produces a plot consisting of a density estimate of the original data and posterior predictive distributions sampled from the posterior of the Bayesian ANOVA model as density overlays.

Author(s)
<NAME>

Examples
set.seed(700)
x1=rnorm(1000,0,1)
x2=rnorm(1000,1,1)
x3=rnorm(1000,2,2)
result=bayes.anova(n=1000,first = x1, second=x2, third=x3)
post.pred.check(result, ngroups = 3, out = c(x1,x2,x3), reps = 25, eta = c(1/3,1/3,1/3))
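The default ROPE boundaries listed under anovaplot can be written as a small base-R helper. classify_rope is an illustrative name for this sketch, not a function exported by bayesanova:

```r
# Classify a posterior draw of the effect size into the default ROPEs:
# |delta| < 0.2 no effect, [0.2, 0.5) small, [0.5, 0.8) medium, >= 0.8 large.
classify_rope <- function(delta) {
  a <- abs(delta)
  if (a < 0.2) "no effect"
  else if (a < 0.5) "small effect"
  else if (a < 0.8) "medium effect"
  else "large effect"
}

sapply(c(0.1, -0.3, 0.6, 1.2), classify_rope)
# "no effect" "small effect" "medium effect" "large effect"
```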
Spring Introduction
Presented by <NAME>
Twitter: @zare88r
5 July 2016

Spring is a lightweight framework for building both JEE web applications and standalone applications. Spring Core grew out of the book Expert One-on-One: J2EE Design and Development by <NAME> (Wrox, 2002).

IoC (Inversion of Control): object creation and wiring are delegated to a container, in the same way a Java Servlet container manages servlets. The objects managed by the Spring container are called Spring beans.

DI (Dependency Injection):

//Hardcoded dependency
public class MyClass {
    private MyDependency myDependency = new MyDependency();
}

//Injected dependency
public class MyClass {
    private MyDependency myDependency;
    public MyClass(MyDependency myDependency) {
        this.myDependency = myDependency;
    }
}

With IoC and DI together, the Spring container injects each dependency as a bean:

@Component
public class MyClass {
    @Autowired
    private MyDependency myDependency;
}

Aspect-Oriented Programming: cross-cutting concerns can be implemented with AspectJ or Spring AOP.

SpEL (Spring Expression Language) evaluates expressions against beans:

parser.parseExpression("Officers['president'].PlaceOfBirth.City")
      .getValue(societyContext, String.class);

Validation in Spring: JSR-303 (Bean Validation).
Accessing data: JDBC, Hibernate, JDO, JPA, Mongo, Neo4j.
Object/XML mapping: XML binding with JAXB, Castor, XStream, JiBX, XMLBeans.
MVC view technologies: JSP, Velocity, FreeMarker, Thymeleaf, JSF, Struts.
WebSocket support (JSR-356).
Remoting support: RMI, JAX-WS, JMS, Advanced Message Queuing Protocol (AMQP).
Mail support (JavaMail API).
Job scheduling support.
Dynamic scripting support.
Spring Boot manages Spring dependencies and auto-configuration.
Alternatives to Spring: JBoss Seam Framework, Google Guice, PicoContainer, JEE 8 container.
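Spring AOP is proxy-based; the mechanism can be sketched with a plain JDK dynamic proxy, no Spring required. AopSketch, Greeter and withLogging are illustrative names for this sketch:

```java
import java.lang.reflect.Proxy;

public class AopSketch {
    public interface Greeter { String greet(String name); }

    // Wraps every interface call with before/after "advice", the way a
    // Spring AOP around-advice wraps a bean's methods.
    public static Greeter withLogging(Greeter target) {
        return (Greeter) Proxy.newProxyInstance(
            Greeter.class.getClassLoader(),
            new Class<?>[] { Greeter.class },
            (proxy, method, args) -> {
                System.out.println("before " + method.getName());
                Object result = method.invoke(target, args);
                System.out.println("after " + method.getName());
                return result;
            });
    }

    public static void main(String[] args) {
        Greeter plain = name -> "Hello, " + name;
        System.out.println(withLogging(plain).greet("Spring"));
    }
}
```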
Package ‘TSLSTM’ October 12, 2022
Type Package
Title Long Short Term Memory (LSTM) Model for Time Series Forecasting
Version 0.1.0
Author Dr. <NAME> [aut, cre], Dr. <NAME> [aut]
Maintainer Dr. <NAME> <<EMAIL>>
Description The LSTM (Long Short-Term Memory) model is a Recurrent Neural Network (RNN) based architecture that is widely used for time series forecasting. Min-Max transformation has been used for data preparation. Here, we have used one LSTM layer as a simple LSTM model and a Dense layer is used as the output layer. Then, compile the model using the loss function, optimizer and metrics. This package is based on the Keras and TensorFlow modules and the algorithm of Paul and Garai (2021) <doi:10.1007/s00500-021-06087-4>.
License GPL-3
Encoding UTF-8
RoxygenNote 7.1.2
Imports keras, tensorflow, tsutils, stats
NeedsCompilation no
Repository CRAN
Date/Publication 2022-01-13 19:12:41 UTC

R topics documented: ts.lstm

ts.lstm — Long Short Term Memory (LSTM) Model for Time Series Forecasting

Description
The LSTM (Long Short-Term Memory) model is a Recurrent Neural Network (RNN) based architecture that is widely used for time series forecasting. Min-Max transformation has been used for data preparation. Here, we have used one LSTM layer as a simple LSTM model and a Dense layer is used as the output layer. Then, compile the model using the loss function, optimizer and metrics. This package is based on the Keras and TensorFlow modules.
Usage
ts.lstm(
  ts,
  xreg = NULL,
  tsLag,
  xregLag = 0,
  LSTMUnits,
  DropoutRate = 0,
  Epochs = 10,
  CompLoss = "mse",
  CompMetrics = "mae",
  ActivationFn = "tanh",
  SplitRatio = 0.8,
  ValidationSplit = 0.1
)

Arguments
ts Time series data
xreg Exogenous variables
tsLag Lag of time series data
xregLag Lag of exogenous variables
LSTMUnits Number of units in the LSTM layer
DropoutRate Dropout rate
Epochs Number of epochs
CompLoss Loss function
CompMetrics Metrics
ActivationFn Activation function
SplitRatio Training and testing data split ratio
ValidationSplit Validation split ratio

Value
• TrainFittedValue: Fitted value of train data
• TestPredictedValue: Predicted value of test data
• AccuracyTable: RMSE and MAPE of train and test data

References
<NAME>. and <NAME>. (2021). Performance comparison of wavelets-based machine learning technique for forecasting agricultural commodity prices, Soft Computing, 25(20), 12857-12873

Examples
y<-rnorm(100,mean=100,sd=50)
x1<-rnorm(100,mean=50,sd=50)
x2<-rnorm(100, mean=50, sd=25)
x<-cbind(x1,x2)
TSLSTM<-ts.lstm(ts=y,xreg = x,tsLag=2,xregLag = 0,LSTMUnits=5, Epochs=2)
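The Min-Max transformation mentioned in the description maps each series to [0, 1] before training and is inverted afterwards; a base-R sketch follows. minmax and minmax_inv are illustrative names, not functions exported by the package:

```r
# Scale a series to [0, 1], and undo the scaling after forecasting.
minmax <- function(x) (x - min(x)) / (max(x) - min(x))
minmax_inv <- function(z, x) z * (max(x) - min(x)) + min(x)

y <- c(10, 20, 40)
z <- minmax(y)                   # 0.0000 0.3333 1.0000
all.equal(minmax_inv(z, y), y)   # TRUE
```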
NumPy documentation
===================

**Version**: 1.23

**Download documentation**: [PDF Version](https://numpy.org/doc/stable/numpy-user.pdf) | [Historical versions of documentation](https://numpy.org/doc/)

**Useful links**: [Installation](https://numpy.org/install/) | [Source Repository](https://github.com/numpy/numpy) | [Issue Tracker](https://github.com/numpy/numpy/issues) | [Q&A Support](https://numpy.org/gethelp/) | [Mailing List](https://mail.python.org/mailman/listinfo/numpy-discussion)

NumPy is the fundamental package for scientific computing in Python. It is a Python library that provides a multidimensional array object, various derived objects (such as masked arrays and matrices), and an assortment of routines for fast operations on arrays, including mathematical, logical, shape manipulation, sorting, selecting, I/O, discrete Fourier transforms, basic linear algebra, basic statistical operations, random simulation and much more.

© 2005–2022 NumPy Developers. Licensed under the 3-clause BSD License. <https://numpy.org/doc/1.23/index.html>

NumPy user guide
================

This guide is an overview and explains the important features; details are found in [NumPy Reference](../reference/index#reference).

* [What is NumPy?](whatisnumpy)
* [Installation](https://numpy.org/install/)
* [NumPy quickstart](quickstart)
* [NumPy: the absolute basics for beginners](absolute_beginners)
* [NumPy fundamentals](basics)
* [Miscellaneous](misc)
* [NumPy for MATLAB users](numpy-for-matlab-users)
* [Building from source](building)
* [Using NumPy C-API](c-info)
* [NumPy Tutorials](https://numpy.org/numpy-tutorials/features.html)
* [NumPy How Tos](howtos_index)
* [For downstream package authors](depending_on_numpy)

© 2005–2022 NumPy Developers. Licensed under the 3-clause BSD License.
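The multidimensional array object and vectorized routines described above can be seen in a minimal example (assumes NumPy is installed):

```python
import numpy as np

# A 2-D array: one object carries its shape, dtype, and data.
a = np.arange(6).reshape(2, 3)
print(a.shape)        # (2, 3)

# Vectorized routines operate on whole arrays without Python-level loops.
print(a.sum(axis=0))  # column sums: [3 5 7]
print(np.sort(np.array([3, 1, 2])))  # [1 2 3]
```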
<https://numpy.org/doc/1.23/user/index.html> NumPy Reference =============== Release 1.23 Date June 22, 2022 This reference manual details functions, modules, and objects included in NumPy, describing what they are and what they do. For learning how to use NumPy, see the [complete documentation](../index#numpy-docs-mainpage). * [Array objects](arrays) + [The N-dimensional array (`ndarray`)](arrays.ndarray) + [Scalars](arrays.scalars) + [Data type objects (`dtype`)](arrays.dtypes) + [Indexing routines](arrays.indexing) + [Iterating Over Arrays](arrays.nditer) + [Standard array subclasses](arrays.classes) + [Masked arrays](maskedarray) + [The array interface protocol](arrays.interface) + [Datetimes and Timedeltas](arrays.datetime) * [Array API Standard Compatibility](array_api) + [Table of Differences between `numpy.array_api` and `numpy`](array_api#table-of-differences-between-numpy-array-api-and-numpy) * [Constants](constants) * [Universal functions (`ufunc`)](ufuncs) + [`ufunc`](ufuncs#ufunc) + [Available ufuncs](ufuncs#available-ufuncs) * [Routines](routines) + [Array creation routines](routines.array-creation) + [Array manipulation routines](routines.array-manipulation) + [Binary operations](routines.bitwise) + [String operations](routines.char) + [C-Types Foreign Function Interface (`numpy.ctypeslib`)](routines.ctypeslib) + [Datetime Support Functions](routines.datetime) + [Data type routines](routines.dtype) + [Optionally SciPy-accelerated routines (`numpy.dual`)](routines.dual) + [Mathematical functions with automatic domain](routines.emath) + [Floating point error handling](routines.err) + [Discrete Fourier Transform (`numpy.fft`)](routines.fft) + [Functional programming](routines.functional) + [NumPy-specific help functions](routines.help) + [Input and output](routines.io) + [Linear algebra (`numpy.linalg`)](routines.linalg) + [Logic functions](routines.logic) + [Masked array operations](routines.ma) + [Mathematical functions](routines.math) + [Matrix library
(`numpy.matlib`)](routines.matlib) + [Miscellaneous routines](routines.other) + [Padding Arrays](routines.padding) + [Polynomials](routines.polynomials) + [Random sampling (`numpy.random`)](random/index) + [Set routines](routines.set) + [Sorting, searching, and counting](routines.sort) + [Statistics](routines.statistics) + [Test Support (`numpy.testing`)](routines.testing) + [Window functions](routines.window) * [Typing (`numpy.typing`)](typing) + [Mypy plugin](typing#mypy-plugin) + [Differences from the runtime NumPy API](typing#differences-from-the-runtime-numpy-api) + [API](typing#api) * [Global State](global_state) + [Performance-Related Options](global_state#performance-related-options) + [Interoperability-Related Options](global_state#interoperability-related-options) + [Debugging-Related Options](global_state#debugging-related-options) * [Packaging (`numpy.distutils`)](distutils) + [Modules in `numpy.distutils`](distutils#modules-in-numpy-distutils) + [Configuration class](distutils#configuration-class) + [Building Installable C libraries](distutils#building-installable-c-libraries) + [Conversion of `.src` files](distutils#conversion-of-src-files) * [NumPy Distutils - Users Guide](distutils_guide) + [SciPy structure](distutils_guide#scipy-structure) + [Requirements for SciPy packages](distutils_guide#requirements-for-scipy-packages) + [The `setup.py` file](distutils_guide#the-setup-py-file) + [The `__init__.py` file](distutils_guide#the-init-py-file) + [Extra features in NumPy Distutils](distutils_guide#extra-features-in-numpy-distutils) * [Status of `numpy.distutils` and migration advice](distutils_status_migration) + [Migration advice](distutils_status_migration#migration-advice) + [Interaction of `numpy.disutils` with `setuptools`](distutils_status_migration#interaction-of-numpy-disutils-with-setuptools) * [NumPy C-API](c-api/index) + [Python Types and C-Structures](c-api/types-and-structures) + [System configuration](c-api/config) + [Data Type 
API](c-api/dtype) + [Array API](c-api/array) + [Array Iterator API](c-api/iterator) + [UFunc API](c-api/ufunc) + [Generalized Universal Function API](c-api/generalized-ufuncs) + [NumPy core libraries](c-api/coremath) + [C API Deprecations](c-api/deprecations) + [Memory management in NumPy](c-api/data_memory) * [CPU/SIMD Optimizations](simd/index) + [CPU build options](simd/build-options) + [How does the CPU dispatcher work?](simd/how-it-works) * [NumPy and SWIG](swig) + [numpy.i: a SWIG Interface File for NumPy](swig.interface-file) + [Testing the numpy.i Typemaps](swig.testing) Acknowledgements ---------------- Large parts of this manual originate from <NAME>’s book [Guide to NumPy](https://archive.org/details/NumPyBook) (which generously entered Public Domain in August 2008). The reference documentation for many of the functions was written by numerous contributors and developers of NumPy. © 2005–2022 NumPy Developers Licensed under the 3-clause BSD License. <https://numpy.org/doc/1.23/reference/index.html> Contributing to NumPy ===================== Not a coder? Not a problem! NumPy is multi-faceted, and we can use a lot of help. These are all activities we’d like to get help with (they’re all important, so we list them in alphabetical order): * Code maintenance and development * Community coordination * DevOps * Developing educational content & narrative documentation * Fundraising * Marketing * Project management * Translating content * Website design and development * Writing technical documentation The rest of this document discusses working on the NumPy code base and documentation. We’re in the process of updating our descriptions of other activities and roles. If you are interested in these other activities, please contact us! You can do this via the [numpy-discussion mailing list](https://mail.python.org/mailman/listinfo/numpy-discussion), or on [GitHub](https://github.com/numpy/numpy) (open an issue or comment on a relevant issue). 
These are our preferred communication channels (open source is open by nature!), however if you prefer to discuss in private first, please reach out to our community coordinators at [<EMAIL>](mailto://numpy-team%40googlegroups.com) or [numpy-team.slack.com](https://numpy-team.slack.com) (send an email to [<EMAIL>](mailto://numpy-team%40googlegroups.com) for an invite the first time). Development process - summary ----------------------------- Here’s the short summary, complete TOC links are below: 1. If you are a first-time contributor: * Go to <https://github.com/numpy/numpy> and click the “fork” button to create your own copy of the project. * Clone the project to your local computer: ``` git clone https://github.com/your-username/numpy.git ``` * Change the directory: ``` cd numpy ``` * Add the upstream repository: ``` git remote add upstream https://github.com/numpy/numpy.git ``` * Now, `git remote -v` will show two remote repositories named: + `upstream`, which refers to the `numpy` repository + `origin`, which refers to your personal fork 2. Develop your contribution: * Pull the latest changes from upstream: ``` git checkout main git pull upstream main ``` * Create a branch for the feature you want to work on. Since the branch name will appear in the merge message, use a sensible name such as ‘linspace-speedups’: ``` git checkout -b linspace-speedups ``` * Commit locally as you progress (`git add` and `git commit`) Use a [properly formatted](development_workflow#writing-the-commit-message) commit message, write tests that fail before your change and pass afterward, run all the [tests locally](development_environment#development-environment). Be sure to document any changed behavior in docstrings, keeping to the NumPy docstring [standard](howto-docs#howto-document). 3. 
To submit your contribution: * Push your changes back to your fork on GitHub: ``` git push origin linspace-speedups ``` * Enter your GitHub username and password (repeat contributors or advanced users can remove this step by connecting to GitHub with [SSH](gitwash/development_setup#set-up-and-configure-a-github-account)). * Go to GitHub. The new branch will show up with a green Pull Request button. Make sure the title and message are clear, concise, and self-explanatory. Then click the button to submit it. * If your commit introduces a new feature or changes functionality, post on the [mailing list](https://mail.python.org/mailman/listinfo/numpy-discussion) to explain your changes. For bug fixes, documentation updates, etc., this is generally not necessary, though if you do not get any reaction, do feel free to ask for review. 4. Review process: * Reviewers (the other developers and interested community members) will write inline and/or general comments on your Pull Request (PR) to help you improve its implementation, documentation and style. Every single developer working on the project has their code reviewed, and we’ve come to see it as friendly conversation from which we all learn and the overall code quality benefits. Therefore, please don’t let the review discourage you from contributing: its only aim is to improve the quality of the project, not to criticize (we are, after all, very grateful for the time you’re donating!). See our [Reviewer Guidelines](reviewer_guidelines#reviewer-guidelines) for more information. * To update your PR, make your changes on your local repository, commit, **run tests, and only if they succeed** push to your fork. As soon as those changes are pushed up (to the same branch as before) the PR will update automatically. If you have no idea how to fix the test failures, you may push your changes anyway and ask for help in a PR comment. 
* Various continuous integration (CI) services are triggered after each PR update to build the code, run unit tests, measure code coverage and check coding style of your branch. The CI tests must pass before your PR can be merged. If CI fails, you can find out why by clicking on the “failed” icon (red cross) and inspecting the build and test log. To avoid overuse and waste of this resource, [test your work](development_environment#recommended-development-setup) locally before committing. * A PR must be **approved** by at least one core team member before merging. Approval means the core team member has carefully reviewed the changes, and the PR is ready for merging. 5. Document changes Beyond changes to a function's docstring and possible description in the general documentation, if your change introduces any user-facing modifications, they may need to be mentioned in the release notes. To add your change to the release notes, you need to create a short file with a summary and place it in `doc/release/upcoming_changes`. The file `doc/release/upcoming_changes/README.rst` details the format and filename conventions. If your change introduces a deprecation, make sure to discuss this first on GitHub or the mailing list. If agreement on the deprecation is reached, follow [NEP 23 deprecation policy](https://numpy.org/neps/nep-0023-backwards-compatibility.html#nep23 "(in NumPy Enhancement Proposals)") to add the deprecation. 6. Cross referencing issues If the PR relates to any issues, you can add the text `xref gh-xxxx` where `xxxx` is the number of the issue to github comments. Likewise, if the PR solves an issue, replace the `xref` with `closes`, `fixes` or any of the other flavors [github accepts](https://help.github.com/en/articles/closing-issues-using-keywords). In the source code, be sure to preface any issue or PR reference with `gh-xxxx`. For a more detailed discussion, read on and follow the links at the bottom of this page. 
### Divergence between `upstream/main` and your feature branch If GitHub indicates that the branch of your Pull Request can no longer be merged automatically, you have to incorporate changes that have been made since you started into your branch. Our recommended way to do this is to [rebase on main](development_workflow#rebasing-on-main). ### Guidelines * All code should have tests (see [test coverage](#test-coverage) below for more details). * All code should be [documented](https://numpydoc.readthedocs.io/en/latest/format.html#docstring-standard). * No changes are ever committed without review and approval by a core team member. Please ask politely on the PR or on the [mailing list](https://mail.python.org/mailman/listinfo/numpy-discussion) if you get no response to your pull request within a week. ### Stylistic Guidelines * Set up your editor to follow [PEP 8](https://www.python.org/dev/peps/pep-0008/) (remove trailing white space, no tabs, etc.). Check code with pyflakes / flake8. * Use NumPy data types instead of strings (`np.uint8` instead of `"uint8"`). * Use the following import conventions: ``` import numpy as np ``` * For C code, see [NEP 45](https://numpy.org/neps/nep-0045-c_style_guide.html#nep45 "(in NumPy Enhancement Proposals)"). ### Test coverage Pull requests (PRs) that modify code should either have new tests, or modify existing tests to fail before the PR and pass afterwards. You should [run the tests](development_environment#development-environment) before pushing a PR. Running NumPy’s test suite locally requires some additional packages, such as `pytest` and `hypothesis`. The additional testing dependencies are listed in `test_requirements.txt` in the top-level directory, and can conveniently be installed with: ``` pip install -r test_requirements.txt ``` Tests for a module should ideally cover all code in that module, i.e., statement coverage should be at 100%. 
To measure the test coverage, install [pytest-cov](https://pytest-cov.readthedocs.io/en/latest/) and then run: ``` $ python runtests.py --coverage ``` This will create a report in `build/coverage`, which can be viewed with: ``` $ firefox build/coverage/index.html ``` ### Building docs To build docs, run `make` from the `doc` directory. `make help` lists all targets. For example, to build the HTML documentation, you can run: ``` make html ``` To get the appropriate dependencies and other requirements, see [Building the NumPy API and reference docs](howto_build_docs#howto-build-docs). #### Fixing Warnings * “citation not found: R###” There is probably an underscore after a reference in the first line of a docstring (e.g. [1]_). Use this method to find the source file: $ cd doc/build; grep -rin R#### * “Duplicate citation R###, other instance in
…” There is probably a [2] without a [1] in one of the docstrings. Development process - details ----------------------------- The rest of the story * [Git Basics](gitwash/index) + [Install git](gitwash/git_intro) + [Get the local copy of the code](gitwash/following_latest) + [Updating the code](gitwash/following_latest#updating-the-code) + [Setting up git for NumPy development](gitwash/development_setup) + [Git configuration](gitwash/configure_git) + [Two and three dots in difference specs](gitwash/dot2_dot3) + [Additional Git Resources](gitwash/git_resources) * [Setting up and using your development environment](development_environment) + [Recommended development setup](development_environment#recommended-development-setup) + [Testing builds](development_environment#testing-builds) + [Building in-place](development_environment#building-in-place) + [Other build options](development_environment#other-build-options) + [Using virtual environments](development_environment#using-virtual-environments) + [Running tests](development_environment#running-tests) + [Running Linting](development_environment#running-linting) + [Rebuilding & cleaning the workspace](development_environment#rebuilding-cleaning-the-workspace) + [Debugging](development_environment#debugging) + [Understanding the code & getting started](development_environment#understanding-the-code-getting-started) * [Using Gitpod for NumPy development](development_gitpod) + [Gitpod](development_gitpod#gitpod) + [Gitpod GitHub integration](development_gitpod#gitpod-github-integration) + [Forking the NumPy repository](development_gitpod#forking-the-numpy-repository) + [Starting Gitpod](development_gitpod#starting-gitpod) + [Quick workspace tour](development_gitpod#quick-workspace-tour) + [Development workflow with Gitpod](development_gitpod#development-workflow-with-gitpod) + [Rendering the NumPy documentation](development_gitpod#rendering-the-numpy-documentation) + [FAQ’s and
troubleshooting](development_gitpod#faq-s-and-troubleshooting) * [Building the NumPy API and reference docs](howto_build_docs) + [Development environments](howto_build_docs#development-environments) + [Prerequisites](howto_build_docs#prerequisites) + [Instructions](howto_build_docs#instructions) * [Development workflow](development_workflow) + [Basic workflow](development_workflow#basic-workflow) + [Additional things you might want to do](development_workflow#additional-things-you-might-want-to-do) * [Advanced debugging tools](development_advanced_debugging) + [Finding C errors with additional tooling](development_advanced_debugging#finding-c-errors-with-additional-tooling) * [Reviewer Guidelines](reviewer_guidelines) + [Who can be a reviewer?](reviewer_guidelines#who-can-be-a-reviewer) + [Communication Guidelines](reviewer_guidelines#communication-guidelines) + [Reviewer Checklist](reviewer_guidelines#reviewer-checklist) + [Standard replies for reviewing](reviewer_guidelines#standard-replies-for-reviewing) * [NumPy benchmarks](../benchmarking) + [Usage](../benchmarking#usage) + [Writing benchmarks](../benchmarking#writing-benchmarks) * [NumPy C style guide](https://numpy.org/neps/nep-0045-c_style_guide.html) * [Releasing a version](releasing) + [How to Prepare a Release](releasing#how-to-prepare-a-release) + [Step-by-Step Directions](releasing#step-by-step-directions) * [NumPy governance](governance/index) + [NumPy project governance and decision-making](governance/governance) * [How to contribute to the NumPy documentation](howto-docs) + [Documentation team meetings](howto-docs#documentation-team-meetings) + [What’s needed](howto-docs#what-s-needed) + [Contributing fixes](howto-docs#contributing-fixes) + [Contributing new pages](howto-docs#contributing-new-pages) + [Contributing indirectly](howto-docs#contributing-indirectly) + [Documentation style](howto-docs#documentation-style) + [Documentation reading](howto-docs#documentation-reading) NumPy-specific workflow 
is in [numpy-development-workflow](development_workflow#development-workflow).

© 2005–2022 NumPy Developers. Licensed under the 3-clause BSD License. <https://numpy.org/doc/1.23/dev/index.html>

Release notes
=============

* [1.23.0](https://numpy.org/doc/1.23/release/1.23.0-notes.html) + [New functions](https://numpy.org/doc/1.23/release/1.23.0-notes.html#new-functions) + [Deprecations](https://numpy.org/doc/1.23/release/1.23.0-notes.html#deprecations) + [Expired deprecations](https://numpy.org/doc/1.23/release/1.23.0-notes.html#expired-deprecations) + [New Features](https://numpy.org/doc/1.23/release/1.23.0-notes.html#new-features) - [crackfortran has support for operator and assignment overloading](https://numpy.org/doc/1.23/release/1.23.0-notes.html#crackfortran-has-support-for-operator-and-assignment-overloading) - [f2py supports reading access type attributes from derived type statements](https://numpy.org/doc/1.23/release/1.23.0-notes.html#f2py-supports-reading-access-type-attributes-from-derived-type-statements) - [New parameter `ndmin` added to `genfromtxt`](https://numpy.org/doc/1.23/release/1.23.0-notes.html#new-parameter-ndmin-added-to-genfromtxt) - [`np.loadtxt` now supports quote character and single converter function](https://numpy.org/doc/1.23/release/1.23.0-notes.html#np-loadtxt-now-supports-quote-character-and-single-converter-function) - [Changing to dtype of a different size now requires contiguity of only the last axis](https://numpy.org/doc/1.23/release/1.23.0-notes.html#changing-to-dtype-of-a-different-size-now-requires-contiguity-of-only-the-last-axis) - [Deterministic output files for F2PY](https://numpy.org/doc/1.23/release/1.23.0-notes.html#deterministic-output-files-for-f2py) - [`keepdims` parameter for `average`](https://numpy.org/doc/1.23/release/1.23.0-notes.html#keepdims-parameter-for-average) - [New parameter `equal_nan` added to
`np.unique`](https://numpy.org/doc/1.23/release/1.23.0-notes.html#new-parameter-equal-nan-added-to-np-unique) + [Compatibility notes](https://numpy.org/doc/1.23/release/1.23.0-notes.html#compatibility-notes) - [1D `np.linalg.norm` preserves float input types, even for scalar results](https://numpy.org/doc/1.23/release/1.23.0-notes.html#d-np-linalg-norm-preserves-float-input-types-even-for-scalar-results) - [Changes to structured (void) dtype promotion and comparisons](https://numpy.org/doc/1.23/release/1.23.0-notes.html#changes-to-structured-void-dtype-promotion-and-comparisons) - [`NPY_RELAXED_STRIDES_CHECKING` has been removed](https://numpy.org/doc/1.23/release/1.23.0-notes.html#npy-relaxed-strides-checking-has-been-removed) - [`np.loadtxt` has received several changes](https://numpy.org/doc/1.23/release/1.23.0-notes.html#np-loadtxt-has-recieved-several-changes) + [Improvements](https://numpy.org/doc/1.23/release/1.23.0-notes.html#improvements) - [`ndarray.__array_finalize__` is now callable](https://numpy.org/doc/1.23/release/1.23.0-notes.html#ndarray-array-finalize-is-now-callable) - [Add support for VSX4/Power10](https://numpy.org/doc/1.23/release/1.23.0-notes.html#add-support-for-vsx4-power10) - [`np.fromiter` now accepts objects and subarrays](https://numpy.org/doc/1.23/release/1.23.0-notes.html#np-fromiter-now-accepts-objects-and-subarrays) - [Math C library feature detection now uses correct signatures](https://numpy.org/doc/1.23/release/1.23.0-notes.html#math-c-library-feature-detection-now-uses-correct-signatures) - [`np.kron` now maintains subclass information](https://numpy.org/doc/1.23/release/1.23.0-notes.html#np-kron-now-maintains-subclass-information) + [Performance improvements and changes](https://numpy.org/doc/1.23/release/1.23.0-notes.html#performance-improvements-and-changes) - [Faster `np.loadtxt`](https://numpy.org/doc/1.23/release/1.23.0-notes.html#faster-np-loadtxt) - [Faster reduction
operators](https://numpy.org/doc/1.23/release/1.23.0-notes.html#faster-reduction-operators) - [Faster `np.where`](https://numpy.org/doc/1.23/release/1.23.0-notes.html#faster-np-where) - [Faster operations on NumPy scalars](https://numpy.org/doc/1.23/release/1.23.0-notes.html#faster-operations-on-numpy-scalars) - [Faster `np.kron`](https://numpy.org/doc/1.23/release/1.23.0-notes.html#faster-np-kron) * [1.22.4](https://numpy.org/doc/1.23/release/1.22.4-notes.html) + [Contributors](https://numpy.org/doc/1.23/release/1.22.4-notes.html#contributors) + [Pull requests merged](https://numpy.org/doc/1.23/release/1.22.4-notes.html#pull-requests-merged) * [1.22.3](https://numpy.org/doc/1.23/release/1.22.3-notes.html) + [Contributors](https://numpy.org/doc/1.23/release/1.22.3-notes.html#contributors) + [Pull requests merged](https://numpy.org/doc/1.23/release/1.22.3-notes.html#pull-requests-merged) * [1.22.2](https://numpy.org/doc/1.23/release/1.22.2-notes.html) + [Contributors](https://numpy.org/doc/1.23/release/1.22.2-notes.html#contributors) + [Pull requests merged](https://numpy.org/doc/1.23/release/1.22.2-notes.html#pull-requests-merged) * [1.22.1](https://numpy.org/doc/1.23/release/1.22.1-notes.html) + [Contributors](https://numpy.org/doc/1.23/release/1.22.1-notes.html#contributors) + [Pull requests merged](https://numpy.org/doc/1.23/release/1.22.1-notes.html#pull-requests-merged) * [1.22.0](https://numpy.org/doc/1.23/release/1.22.0-notes.html) + [Expired deprecations](https://numpy.org/doc/1.23/release/1.22.0-notes.html#expired-deprecations) - [Deprecated numeric style dtype strings have been removed](https://numpy.org/doc/1.23/release/1.22.0-notes.html#deprecated-numeric-style-dtype-strings-have-been-removed) - [Expired deprecations for `loads`, `ndfromtxt`, and `mafromtxt` in npyio](https://numpy.org/doc/1.23/release/1.22.0-notes.html#expired-deprecations-for-loads-ndfromtxt-and-mafromtxt-in-npyio) + 
[Deprecations](https://numpy.org/doc/1.23/release/1.22.0-notes.html#deprecations) - [Use delimiter rather than delimitor as kwarg in mrecords](https://numpy.org/doc/1.23/release/1.22.0-notes.html#use-delimiter-rather-than-delimitor-as-kwarg-in-mrecords) - [Passing boolean `kth` values to (arg-)partition has been deprecated](https://numpy.org/doc/1.23/release/1.22.0-notes.html#passing-boolean-kth-values-to-arg-partition-has-been-deprecated) - [The `np.MachAr` class has been deprecated](https://numpy.org/doc/1.23/release/1.22.0-notes.html#the-np-machar-class-has-been-deprecated) + [Compatibility notes](https://numpy.org/doc/1.23/release/1.22.0-notes.html#compatibility-notes) - [Distutils forces strict floating point model on clang](https://numpy.org/doc/1.23/release/1.22.0-notes.html#distutils-forces-strict-floating-point-model-on-clang) - [Removed floor division support for complex types](https://numpy.org/doc/1.23/release/1.22.0-notes.html#removed-floor-division-support-for-complex-types) - [`numpy.vectorize` functions now produce the same output class as the base function](https://numpy.org/doc/1.23/release/1.22.0-notes.html#numpy-vectorize-functions-now-produce-the-same-output-class-as-the-base-function) - [Python 3.7 is no longer supported](https://numpy.org/doc/1.23/release/1.22.0-notes.html#python-3-7-is-no-longer-supported) - [str/repr of complex dtypes now include space after punctuation](https://numpy.org/doc/1.23/release/1.22.0-notes.html#str-repr-of-complex-dtypes-now-include-space-after-punctuation) - [Corrected `advance` in `PCG64DSXM` and `PCG64`](https://numpy.org/doc/1.23/release/1.22.0-notes.html#corrected-advance-in-pcg64dsxm-and-pcg64) - [Change in generation of random 32 bit floating point variates](https://numpy.org/doc/1.23/release/1.22.0-notes.html#change-in-generation-of-random-32-bit-floating-point-variates) + [C API changes](https://numpy.org/doc/1.23/release/1.22.0-notes.html#c-api-changes) - [Masked inner-loops cannot be customized 
anymore](https://numpy.org/doc/1.23/release/1.22.0-notes.html#masked-inner-loops-cannot-be-customized-anymore) - [Experimental exposure of future DType and UFunc API](https://numpy.org/doc/1.23/release/1.22.0-notes.html#experimental-exposure-of-future-dtype-and-ufunc-api) + [New Features](https://numpy.org/doc/1.23/release/1.22.0-notes.html#new-features) - [NEP 49 configurable allocators](https://numpy.org/doc/1.23/release/1.22.0-notes.html#nep-49-configurable-allocators) - [Implementation of the NEP 47 (adopting the array API standard)](https://numpy.org/doc/1.23/release/1.22.0-notes.html#implementation-of-the-nep-47-adopting-the-array-api-standard) - [Generate C/C++ API reference documentation from comments blocks is now possible](https://numpy.org/doc/1.23/release/1.22.0-notes.html#generate-c-c-api-reference-documentation-from-comments-blocks-is-now-possible) - [Assign the platform-specific `c_intp` precision via a mypy plugin](https://numpy.org/doc/1.23/release/1.22.0-notes.html#assign-the-platform-specific-c-intp-precision-via-a-mypy-plugin) - [Add NEP 47-compatible dlpack support](https://numpy.org/doc/1.23/release/1.22.0-notes.html#add-nep-47-compatible-dlpack-support) - [`keepdims` optional argument added to `numpy.argmin`, `numpy.argmax`](https://numpy.org/doc/1.23/release/1.22.0-notes.html#keepdims-optional-argument-added-to-numpy-argmin-numpy-argmax) - [`bit_count` to compute the number of 1-bits in an integer](https://numpy.org/doc/1.23/release/1.22.0-notes.html#bit-count-to-compute-the-number-of-1-bits-in-an-integer) - [The `ndim` and `axis` attributes have been added to `numpy.AxisError`](https://numpy.org/doc/1.23/release/1.22.0-notes.html#the-ndim-and-axis-attributes-have-been-added-to-numpy-axiserror) - [Preliminary support for `windows/arm64` target](https://numpy.org/doc/1.23/release/1.22.0-notes.html#preliminary-support-for-windows-arm64-target) - [Added support for 
LoongArch](https://numpy.org/doc/1.23/release/1.22.0-notes.html#added-support-for-loongarch) - [A `.clang-format` file has been added](https://numpy.org/doc/1.23/release/1.22.0-notes.html#a-clang-format-file-has-been-added) - [`is_integer` is now available to `numpy.floating` and `numpy.integer`](https://numpy.org/doc/1.23/release/1.22.0-notes.html#is-integer-is-now-available-to-numpy-floating-and-numpy-integer) - [Symbolic parser for Fortran dimension specifications](https://numpy.org/doc/1.23/release/1.22.0-notes.html#symbolic-parser-for-fortran-dimension-specifications) - [`ndarray`, `dtype` and `number` are now runtime-subscriptable](https://numpy.org/doc/1.23/release/1.22.0-notes.html#ndarray-dtype-and-number-are-now-runtime-subscriptable) + [Improvements](https://numpy.org/doc/1.23/release/1.22.0-notes.html#improvements) - [`ctypeslib.load_library` can now take any path-like object](https://numpy.org/doc/1.23/release/1.22.0-notes.html#ctypeslib-load-library-can-now-take-any-path-like-object) - [Add `smallest_normal` and `smallest_subnormal` attributes to `finfo`](https://numpy.org/doc/1.23/release/1.22.0-notes.html#add-smallest-normal-and-smallest-subnormal-attributes-to-finfo) - [`numpy.linalg.qr` accepts stacked matrices as inputs](https://numpy.org/doc/1.23/release/1.22.0-notes.html#numpy-linalg-qr-accepts-stacked-matrices-as-inputs) - [`numpy.fromregex` now accepts `os.PathLike` implementations](https://numpy.org/doc/1.23/release/1.22.0-notes.html#numpy-fromregex-now-accepts-os-pathlike-implementations) - [Add new methods for `quantile` and `percentile`](https://numpy.org/doc/1.23/release/1.22.0-notes.html#add-new-methods-for-quantile-and-percentile) - [Missing parameters have been added to the `nan<x>` functions](https://numpy.org/doc/1.23/release/1.22.0-notes.html#missing-parameters-have-been-added-to-the-nan-x-functions) - [Annotating the main Numpy namespace](https://numpy.org/doc/1.23/release/1.22.0-notes.html#annotating-the-main-numpy-namespace) - 
[Vectorize umath module using AVX-512](https://numpy.org/doc/1.23/release/1.22.0-notes.html#vectorize-umath-module-using-avx-512) - [OpenBLAS v0.3.18](https://numpy.org/doc/1.23/release/1.22.0-notes.html#openblas-v0-3-18) * [1.21.6](https://numpy.org/doc/1.23/release/1.21.6-notes.html) * [1.21.5](https://numpy.org/doc/1.23/release/1.21.5-notes.html) + [Contributors](https://numpy.org/doc/1.23/release/1.21.5-notes.html#contributors) + [Pull requests merged](https://numpy.org/doc/1.23/release/1.21.5-notes.html#pull-requests-merged) * [1.21.4](https://numpy.org/doc/1.23/release/1.21.4-notes.html) + [Contributors](https://numpy.org/doc/1.23/release/1.21.4-notes.html#contributors) + [Pull requests merged](https://numpy.org/doc/1.23/release/1.21.4-notes.html#pull-requests-merged) * [1.21.3](https://numpy.org/doc/1.23/release/1.21.3-notes.html) + [Contributors](https://numpy.org/doc/1.23/release/1.21.3-notes.html#contributors) + [Pull requests merged](https://numpy.org/doc/1.23/release/1.21.3-notes.html#pull-requests-merged) * [1.21.2](https://numpy.org/doc/1.23/release/1.21.2-notes.html) + [Contributors](https://numpy.org/doc/1.23/release/1.21.2-notes.html#contributors) + [Pull requests merged](https://numpy.org/doc/1.23/release/1.21.2-notes.html#pull-requests-merged) * [1.21.1](https://numpy.org/doc/1.23/release/1.21.1-notes.html) + [Contributors](https://numpy.org/doc/1.23/release/1.21.1-notes.html#contributors) + [Pull requests merged](https://numpy.org/doc/1.23/release/1.21.1-notes.html#pull-requests-merged) * [1.21.0](https://numpy.org/doc/1.23/release/1.21.0-notes.html) + [New functions](https://numpy.org/doc/1.23/release/1.21.0-notes.html#new-functions) - [Add `PCG64DXSM` `BitGenerator`](https://numpy.org/doc/1.23/release/1.21.0-notes.html#add-pcg64dxsm-bitgenerator) + [Expired deprecations](https://numpy.org/doc/1.23/release/1.21.0-notes.html#expired-deprecations) + [Deprecations](https://numpy.org/doc/1.23/release/1.21.0-notes.html#deprecations) - [The `.dtype` 
attribute must return a `dtype`](https://numpy.org/doc/1.23/release/1.21.0-notes.html#the-dtype-attribute-must-return-a-dtype) - [Inexact matches for `numpy.convolve` and `numpy.correlate` are deprecated](https://numpy.org/doc/1.23/release/1.21.0-notes.html#inexact-matches-for-numpy-convolve-and-numpy-correlate-are-deprecated) - [`np.typeDict` has been formally deprecated](https://numpy.org/doc/1.23/release/1.21.0-notes.html#np-typedict-has-been-formally-deprecated) - [Exceptions will be raised during array-like creation](https://numpy.org/doc/1.23/release/1.21.0-notes.html#exceptions-will-be-raised-during-array-like-creation) - [Four `ndarray.ctypes` methods have been deprecated](https://numpy.org/doc/1.23/release/1.21.0-notes.html#four-ndarray-ctypes-methods-have-been-deprecated) + [Expired deprecations](https://numpy.org/doc/1.23/release/1.21.0-notes.html#id2) - [Remove deprecated `PolyBase` and unused `PolyError` and `PolyDomainError`](https://numpy.org/doc/1.23/release/1.21.0-notes.html#remove-deprecated-polybase-and-unused-polyerror-and-polydomainerror) + [Compatibility notes](https://numpy.org/doc/1.23/release/1.21.0-notes.html#compatibility-notes) - [Error type changes in universal functions](https://numpy.org/doc/1.23/release/1.21.0-notes.html#error-type-changes-in-universal-functions) - [`__array_ufunc__` argument validation](https://numpy.org/doc/1.23/release/1.21.0-notes.html#array-ufunc-argument-validation) - [`__array_ufunc__` and additional positional arguments](https://numpy.org/doc/1.23/release/1.21.0-notes.html#array-ufunc-and-additional-positional-arguments) - [Validate input values in `Generator.uniform`](https://numpy.org/doc/1.23/release/1.21.0-notes.html#validate-input-values-in-generator-uniform) - [`/usr/include` removed from default include paths](https://numpy.org/doc/1.23/release/1.21.0-notes.html#usr-include-removed-from-default-include-paths) - [Changes to comparisons with 
`dtype=...`](https://numpy.org/doc/1.23/release/1.21.0-notes.html#changes-to-comparisons-with-dtype) - [Changes to `dtype` and `signature` arguments in ufuncs](https://numpy.org/doc/1.23/release/1.21.0-notes.html#changes-to-dtype-and-signature-arguments-in-ufuncs) - [Ufunc `signature=...` and `dtype=` generalization and `casting`](https://numpy.org/doc/1.23/release/1.21.0-notes.html#ufunc-signature-and-dtype-generalization-and-casting) - [Distutils forces strict floating point model on clang](https://numpy.org/doc/1.23/release/1.21.0-notes.html#distutils-forces-strict-floating-point-model-on-clang) + [C API changes](https://numpy.org/doc/1.23/release/1.21.0-notes.html#c-api-changes) - [Use of `ufunc->type_resolver` and “type tuple”](https://numpy.org/doc/1.23/release/1.21.0-notes.html#use-of-ufunc-type-resolver-and-type-tuple) + [New Features](https://numpy.org/doc/1.23/release/1.21.0-notes.html#new-features) - [Added a mypy plugin for handling platform-specific `numpy.number` precisions](https://numpy.org/doc/1.23/release/1.21.0-notes.html#added-a-mypy-plugin-for-handling-platform-specific-numpy-number-precisions) - [Let the mypy plugin manage extended-precision `numpy.number` subclasses](https://numpy.org/doc/1.23/release/1.21.0-notes.html#let-the-mypy-plugin-manage-extended-precision-numpy-number-subclasses) - [New `min_digits` argument for printing float values](https://numpy.org/doc/1.23/release/1.21.0-notes.html#new-min-digits-argument-for-printing-float-values) - [f2py now recognizes Fortran abstract interface blocks](https://numpy.org/doc/1.23/release/1.21.0-notes.html#f2py-now-recognizes-fortran-abstract-interface-blocks) - [BLAS and LAPACK configuration via environment variables](https://numpy.org/doc/1.23/release/1.21.0-notes.html#blas-and-lapack-configuration-via-environment-variables) - [A runtime-subcriptable alias has been added for 
`ndarray`](https://numpy.org/doc/1.23/release/1.21.0-notes.html#a-runtime-subcriptable-alias-has-been-added-for-ndarray) + [Improvements](https://numpy.org/doc/1.23/release/1.21.0-notes.html#improvements) - [Arbitrary `period` option for `numpy.unwrap`](https://numpy.org/doc/1.23/release/1.21.0-notes.html#arbitrary-period-option-for-numpy-unwrap) - [`np.unique` now returns single `NaN`](https://numpy.org/doc/1.23/release/1.21.0-notes.html#np-unique-now-returns-single-nan) - [`Generator.rayleigh` and `Generator.geometric` performance improved](https://numpy.org/doc/1.23/release/1.21.0-notes.html#generator-rayleigh-and-generator-geometric-performance-improved) - [Placeholder annotations have been improved](https://numpy.org/doc/1.23/release/1.21.0-notes.html#placeholder-annotations-have-been-improved) + [Performance improvements](https://numpy.org/doc/1.23/release/1.21.0-notes.html#performance-improvements) - [Improved performance in integer division of NumPy arrays](https://numpy.org/doc/1.23/release/1.21.0-notes.html#improved-performance-in-integer-division-of-numpy-arrays) - [Improve performance of `np.save` and `np.load` for small arrays](https://numpy.org/doc/1.23/release/1.21.0-notes.html#improve-performance-of-np-save-and-np-load-for-small-arrays) + [Changes](https://numpy.org/doc/1.23/release/1.21.0-notes.html#changes) - [`numpy.piecewise` output class now matches the input class](https://numpy.org/doc/1.23/release/1.21.0-notes.html#numpy-piecewise-output-class-now-matches-the-input-class) - [Enable Accelerate Framework](https://numpy.org/doc/1.23/release/1.21.0-notes.html#enable-accelerate-framework) * [1.20.3](https://numpy.org/doc/1.23/release/1.20.3-notes.html) + [Contributors](https://numpy.org/doc/1.23/release/1.20.3-notes.html#contributors) + [Pull requests merged](https://numpy.org/doc/1.23/release/1.20.3-notes.html#pull-requests-merged) * [1.20.2](https://numpy.org/doc/1.23/release/1.20.2-notes.html) + 
[Contributors](https://numpy.org/doc/1.23/release/1.20.2-notes.html#contributors) + [Pull requests merged](https://numpy.org/doc/1.23/release/1.20.2-notes.html#pull-requests-merged) * [1.20.1](https://numpy.org/doc/1.23/release/1.20.1-notes.html) + [Highlights](https://numpy.org/doc/1.23/release/1.20.1-notes.html#highlights) + [Contributors](https://numpy.org/doc/1.23/release/1.20.1-notes.html#contributors) + [Pull requests merged](https://numpy.org/doc/1.23/release/1.20.1-notes.html#pull-requests-merged) * [1.20.0](https://numpy.org/doc/1.23/release/1.20.0-notes.html) + [New functions](https://numpy.org/doc/1.23/release/1.20.0-notes.html#new-functions) - [The random.Generator class has a new `permuted` function.](https://numpy.org/doc/1.23/release/1.20.0-notes.html#the-random-generator-class-has-a-new-permuted-function) - [`sliding_window_view` provides a sliding window view for numpy arrays](https://numpy.org/doc/1.23/release/1.20.0-notes.html#sliding-window-view-provides-a-sliding-window-view-for-numpy-arrays) - [`numpy.broadcast_shapes` is a new user-facing function](https://numpy.org/doc/1.23/release/1.20.0-notes.html#numpy-broadcast-shapes-is-a-new-user-facing-function) + [Deprecations](https://numpy.org/doc/1.23/release/1.20.0-notes.html#deprecations) - [Using the aliases of builtin types like `np.int` is deprecated](https://numpy.org/doc/1.23/release/1.20.0-notes.html#using-the-aliases-of-builtin-types-like-np-int-is-deprecated) - [Passing `shape=None` to functions with a non-optional shape argument is deprecated](https://numpy.org/doc/1.23/release/1.20.0-notes.html#passing-shape-none-to-functions-with-a-non-optional-shape-argument-is-deprecated) - [Indexing errors will be reported even when index result is empty](https://numpy.org/doc/1.23/release/1.20.0-notes.html#indexing-errors-will-be-reported-even-when-index-result-is-empty) - [Inexact matches for `mode` and `searchside` are 
deprecated](https://numpy.org/doc/1.23/release/1.20.0-notes.html#inexact-matches-for-mode-and-searchside-are-deprecated) - [Deprecation of `numpy.dual`](https://numpy.org/doc/1.23/release/1.20.0-notes.html#deprecation-of-numpy-dual) - [`outer` and `ufunc.outer` deprecated for matrix](https://numpy.org/doc/1.23/release/1.20.0-notes.html#outer-and-ufunc-outer-deprecated-for-matrix) - [Further Numeric Style types Deprecated](https://numpy.org/doc/1.23/release/1.20.0-notes.html#further-numeric-style-types-deprecated) - [The `ndincr` method of `ndindex` is deprecated](https://numpy.org/doc/1.23/release/1.20.0-notes.html#the-ndincr-method-of-ndindex-is-deprecated) - [ArrayLike objects which do not define `__len__` and `__getitem__`](https://numpy.org/doc/1.23/release/1.20.0-notes.html#arraylike-objects-which-do-not-define-len-and-getitem) + [Future Changes](https://numpy.org/doc/1.23/release/1.20.0-notes.html#future-changes) - [Arrays cannot be using subarray dtypes](https://numpy.org/doc/1.23/release/1.20.0-notes.html#arrays-cannot-be-using-subarray-dtypes) + [Expired deprecations](https://numpy.org/doc/1.23/release/1.20.0-notes.html#expired-deprecations) - [Financial functions removed](https://numpy.org/doc/1.23/release/1.20.0-notes.html#financial-functions-removed) + [Compatibility notes](https://numpy.org/doc/1.23/release/1.20.0-notes.html#compatibility-notes) - [`isinstance(dtype, np.dtype)` and not `type(dtype) is not np.dtype`](https://numpy.org/doc/1.23/release/1.20.0-notes.html#isinstance-dtype-np-dtype-and-not-type-dtype-is-not-np-dtype) - [Same kind casting in concatenate with `axis=None`](https://numpy.org/doc/1.23/release/1.20.0-notes.html#same-kind-casting-in-concatenate-with-axis-none) - [NumPy Scalars are cast when assigned to arrays](https://numpy.org/doc/1.23/release/1.20.0-notes.html#numpy-scalars-are-cast-when-assigned-to-arrays) - [Array coercion changes when Strings and other types are 
mixed](https://numpy.org/doc/1.23/release/1.20.0-notes.html#array-coercion-changes-when-strings-and-other-types-are-mixed) - [Array coercion restructure](https://numpy.org/doc/1.23/release/1.20.0-notes.html#array-coercion-restructure) - [Writing to the result of `numpy.broadcast_arrays` will export readonly buffers](https://numpy.org/doc/1.23/release/1.20.0-notes.html#writing-to-the-result-of-numpy-broadcast-arrays-will-export-readonly-buffers) - [Numeric-style type names have been removed from type dictionaries](https://numpy.org/doc/1.23/release/1.20.0-notes.html#numeric-style-type-names-have-been-removed-from-type-dictionaries) - [The `operator.concat` function now raises TypeError for array arguments](https://numpy.org/doc/1.23/release/1.20.0-notes.html#the-operator-concat-function-now-raises-typeerror-for-array-arguments) - [`nickname` attribute removed from ABCPolyBase](https://numpy.org/doc/1.23/release/1.20.0-notes.html#nickname-attribute-removed-from-abcpolybase) - [`float->timedelta` and `uint64->timedelta` promotion will raise a TypeError](https://numpy.org/doc/1.23/release/1.20.0-notes.html#float-timedelta-and-uint64-timedelta-promotion-will-raise-a-typeerror) - [`numpy.genfromtxt` now correctly unpacks structured arrays](https://numpy.org/doc/1.23/release/1.20.0-notes.html#numpy-genfromtxt-now-correctly-unpacks-structured-arrays) - [`mgrid`, `r_`, etc. 
consistently return correct outputs for non-default precision input](https://numpy.org/doc/1.23/release/1.20.0-notes.html#mgrid-r-etc-consistently-return-correct-outputs-for-non-default-precision-input) - [Boolean array indices with mismatching shapes now properly give `IndexError`](https://numpy.org/doc/1.23/release/1.20.0-notes.html#boolean-array-indices-with-mismatching-shapes-now-properly-give-indexerror) - [Casting errors interrupt Iteration](https://numpy.org/doc/1.23/release/1.20.0-notes.html#casting-errors-interrupt-iteration) - [f2py generated code may return unicode instead of byte strings](https://numpy.org/doc/1.23/release/1.20.0-notes.html#f2py-generated-code-may-return-unicode-instead-of-byte-strings) - [The first element of the `__array_interface__["data"]` tuple must be an integer](https://numpy.org/doc/1.23/release/1.20.0-notes.html#the-first-element-of-the-array-interface-data-tuple-must-be-an-integer) - [poly1d respects the dtype of all-zero argument](https://numpy.org/doc/1.23/release/1.20.0-notes.html#poly1d-respects-the-dtype-of-all-zero-argument) - [The numpy.i file for swig is Python 3 only.](https://numpy.org/doc/1.23/release/1.20.0-notes.html#the-numpy-i-file-for-swig-is-python-3-only) - [Void dtype discovery in `np.array`](https://numpy.org/doc/1.23/release/1.20.0-notes.html#void-dtype-discovery-in-np-array) + [C API changes](https://numpy.org/doc/1.23/release/1.20.0-notes.html#c-api-changes) - [The `PyArray_DescrCheck` macro is modified](https://numpy.org/doc/1.23/release/1.20.0-notes.html#the-pyarray-descrcheck-macro-is-modified) - [Size of `np.ndarray` and `np.void_` changed](https://numpy.org/doc/1.23/release/1.20.0-notes.html#size-of-np-ndarray-and-np-void-changed) + [New Features](https://numpy.org/doc/1.23/release/1.20.0-notes.html#new-features) - [`where` keyword argument for `numpy.all` and `numpy.any` functions](https://numpy.org/doc/1.23/release/1.20.0-notes.html#where-keyword-argument-for-numpy-all-and-numpy-any-functions) - 
suppression](https://numpy.org/doc/1.23/release/1.16.0-notes.html#npy-no-deprecated-api-compiler-warning-suppression) - [`np.diff` Added kwargs prepend and append](https://numpy.org/doc/1.23/release/1.16.0-notes.html#np-diff-added-kwargs-prepend-and-append) - [ARM support updated](https://numpy.org/doc/1.23/release/1.16.0-notes.html#arm-support-updated) - [Appending to build flags](https://numpy.org/doc/1.23/release/1.16.0-notes.html#appending-to-build-flags) - [Generalized ufunc signatures now allow fixed-size dimensions](https://numpy.org/doc/1.23/release/1.16.0-notes.html#generalized-ufunc-signatures-now-allow-fixed-size-dimensions) - [Generalized ufunc signatures now allow flexible dimensions](https://numpy.org/doc/1.23/release/1.16.0-notes.html#generalized-ufunc-signatures-now-allow-flexible-dimensions) - [`np.clip` and the `clip` method check for memory overlap](https://numpy.org/doc/1.23/release/1.16.0-notes.html#np-clip-and-the-clip-method-check-for-memory-overlap) - [New value `unscaled` for option `cov` in `np.polyfit`](https://numpy.org/doc/1.23/release/1.16.0-notes.html#new-value-unscaled-for-option-cov-in-np-polyfit) - [Detailed docstrings for scalar numeric types](https://numpy.org/doc/1.23/release/1.16.0-notes.html#detailed-docstrings-for-scalar-numeric-types) - [`__module__` attribute now points to public modules](https://numpy.org/doc/1.23/release/1.16.0-notes.html#module-attribute-now-points-to-public-modules) - [Large allocations marked as suitable for transparent hugepages](https://numpy.org/doc/1.23/release/1.16.0-notes.html#large-allocations-marked-as-suitable-for-transparent-hugepages) - [Alpine Linux (and other musl c library distros) support](https://numpy.org/doc/1.23/release/1.16.0-notes.html#alpine-linux-and-other-musl-c-library-distros-support) - [Speedup `np.block` for large arrays](https://numpy.org/doc/1.23/release/1.16.0-notes.html#speedup-np-block-for-large-arrays) - [Speedup `np.take` for read-only 
arrays](https://numpy.org/doc/1.23/release/1.16.0-notes.html#speedup-np-take-for-read-only-arrays) - [Support path-like objects for more functions](https://numpy.org/doc/1.23/release/1.16.0-notes.html#support-path-like-objects-for-more-functions) - [Better behaviour of ufunc identities during reductions](https://numpy.org/doc/1.23/release/1.16.0-notes.html#better-behaviour-of-ufunc-identities-during-reductions) - [Improved conversion from ctypes objects](https://numpy.org/doc/1.23/release/1.16.0-notes.html#improved-conversion-from-ctypes-objects) - [A new `ndpointer.contents` member](https://numpy.org/doc/1.23/release/1.16.0-notes.html#a-new-ndpointer-contents-member) - [`matmul` is now a `ufunc`](https://numpy.org/doc/1.23/release/1.16.0-notes.html#matmul-is-now-a-ufunc) - [Start and stop arrays for `linspace`, `logspace` and `geomspace`](https://numpy.org/doc/1.23/release/1.16.0-notes.html#start-and-stop-arrays-for-linspace-logspace-and-geomspace) - [CI extended with additional services](https://numpy.org/doc/1.23/release/1.16.0-notes.html#ci-extended-with-additional-services) + [Changes](https://numpy.org/doc/1.23/release/1.16.0-notes.html#changes) - [Comparison ufuncs will now error rather than return NotImplemented](https://numpy.org/doc/1.23/release/1.16.0-notes.html#comparison-ufuncs-will-now-error-rather-than-return-notimplemented) - [Positive will now raise a deprecation warning for non-numerical arrays](https://numpy.org/doc/1.23/release/1.16.0-notes.html#positive-will-now-raise-a-deprecation-warning-for-non-numerical-arrays) - [`NDArrayOperatorsMixin` now implements matrix multiplication](https://numpy.org/doc/1.23/release/1.16.0-notes.html#ndarrayoperatorsmixin-now-implements-matrix-multiplication) - [The scaling of the covariance matrix in `np.polyfit` is different](https://numpy.org/doc/1.23/release/1.16.0-notes.html#the-scaling-of-the-covariance-matrix-in-np-polyfit-is-different) - [`maximum` and `minimum` no longer emit 
warnings](https://numpy.org/doc/1.23/release/1.16.0-notes.html#maximum-and-minimum-no-longer-emit-warnings) - [Umath and multiarray c-extension modules merged into a single module](https://numpy.org/doc/1.23/release/1.16.0-notes.html#umath-and-multiarray-c-extension-modules-merged-into-a-single-module) - [`getfield` validity checks extended](https://numpy.org/doc/1.23/release/1.16.0-notes.html#getfield-validity-checks-extended) - [NumPy functions now support overrides with `__array_function__`](https://numpy.org/doc/1.23/release/1.16.0-notes.html#numpy-functions-now-support-overrides-with-array-function) - [Arrays based off readonly buffers cannot be set `writeable`](https://numpy.org/doc/1.23/release/1.16.0-notes.html#arrays-based-off-readonly-buffers-cannot-be-set-writeable) * [1.15.4](https://numpy.org/doc/1.23/release/1.15.4-notes.html) + [Compatibility Note](https://numpy.org/doc/1.23/release/1.15.4-notes.html#compatibility-note) + [Contributors](https://numpy.org/doc/1.23/release/1.15.4-notes.html#contributors) + [Pull requests merged](https://numpy.org/doc/1.23/release/1.15.4-notes.html#pull-requests-merged) * [1.15.3](https://numpy.org/doc/1.23/release/1.15.3-notes.html) + [Compatibility Note](https://numpy.org/doc/1.23/release/1.15.3-notes.html#compatibility-note) + [Contributors](https://numpy.org/doc/1.23/release/1.15.3-notes.html#contributors) + [Pull requests merged](https://numpy.org/doc/1.23/release/1.15.3-notes.html#pull-requests-merged) * [1.15.2](https://numpy.org/doc/1.23/release/1.15.2-notes.html) + [Compatibility Note](https://numpy.org/doc/1.23/release/1.15.2-notes.html#compatibility-note) + [Contributors](https://numpy.org/doc/1.23/release/1.15.2-notes.html#contributors) + [Pull requests merged](https://numpy.org/doc/1.23/release/1.15.2-notes.html#pull-requests-merged) * [1.15.1](https://numpy.org/doc/1.23/release/1.15.1-notes.html) + [Compatibility Note](https://numpy.org/doc/1.23/release/1.15.1-notes.html#compatibility-note) + 
[Contributors](https://numpy.org/doc/1.23/release/1.15.1-notes.html#contributors) + [Pull requests merged](https://numpy.org/doc/1.23/release/1.15.1-notes.html#pull-requests-merged) * [1.15.0](https://numpy.org/doc/1.23/release/1.15.0-notes.html) + [Highlights](https://numpy.org/doc/1.23/release/1.15.0-notes.html#highlights) + [New functions](https://numpy.org/doc/1.23/release/1.15.0-notes.html#new-functions) + [Deprecations](https://numpy.org/doc/1.23/release/1.15.0-notes.html#deprecations) + [Future Changes](https://numpy.org/doc/1.23/release/1.15.0-notes.html#future-changes) + [Compatibility notes](https://numpy.org/doc/1.23/release/1.15.0-notes.html#compatibility-notes) - [Compiled testing modules renamed and made private](https://numpy.org/doc/1.23/release/1.15.0-notes.html#compiled-testing-modules-renamed-and-made-private) - [The `NpzFile` returned by `np.savez` is now a `collections.abc.Mapping`](https://numpy.org/doc/1.23/release/1.15.0-notes.html#the-npzfile-returned-by-np-savez-is-now-a-collections-abc-mapping) - [Under certain conditions, `nditer` must be used in a context manager](https://numpy.org/doc/1.23/release/1.15.0-notes.html#under-certain-conditions-nditer-must-be-used-in-a-context-manager) - [Numpy has switched to using pytest instead of nose for testing](https://numpy.org/doc/1.23/release/1.15.0-notes.html#numpy-has-switched-to-using-pytest-instead-of-nose-for-testing) - [Numpy no longer monkey-patches `ctypes` with `__array_interface__`](https://numpy.org/doc/1.23/release/1.15.0-notes.html#numpy-no-longer-monkey-patches-ctypes-with-array-interface) - [`np.ma.notmasked_contiguous` and `np.ma.flatnotmasked_contiguous` always return lists](https://numpy.org/doc/1.23/release/1.15.0-notes.html#np-ma-notmasked-contiguous-and-np-ma-flatnotmasked-contiguous-always-return-lists) - [`np.squeeze` restores old behavior of objects that cannot handle an `axis` 
argument](https://numpy.org/doc/1.23/release/1.15.0-notes.html#np-squeeze-restores-old-behavior-of-objects-that-cannot-handle-an-axis-argument) - [unstructured void array’s `.item` method now returns a bytes object](https://numpy.org/doc/1.23/release/1.15.0-notes.html#unstructured-void-array-s-item-method-now-returns-a-bytes-object) - [`copy.copy` and `copy.deepcopy` no longer turn `masked` into an array](https://numpy.org/doc/1.23/release/1.15.0-notes.html#copy-copy-and-copy-deepcopy-no-longer-turn-masked-into-an-array) - [Multifield Indexing of Structured Arrays will still return a copy](https://numpy.org/doc/1.23/release/1.15.0-notes.html#multifield-indexing-of-structured-arrays-will-still-return-a-copy) + [C API changes](https://numpy.org/doc/1.23/release/1.15.0-notes.html#c-api-changes) - [New functions `npy_get_floatstatus_barrier` and `npy_clear_floatstatus_barrier`](https://numpy.org/doc/1.23/release/1.15.0-notes.html#new-functions-npy-get-floatstatus-barrier-and-npy-clear-floatstatus-barrier) - [Changes to `PyArray_GetDTypeTransferFunction`](https://numpy.org/doc/1.23/release/1.15.0-notes.html#changes-to-pyarray-getdtypetransferfunction) + [New Features](https://numpy.org/doc/1.23/release/1.15.0-notes.html#new-features) - [`np.gcd` and `np.lcm` ufuncs added for integer and objects types](https://numpy.org/doc/1.23/release/1.15.0-notes.html#np-gcd-and-np-lcm-ufuncs-added-for-integer-and-objects-types) - [Support for cross-platform builds for iOS](https://numpy.org/doc/1.23/release/1.15.0-notes.html#support-for-cross-platform-builds-for-ios) - [`return_indices` keyword added for `np.intersect1d`](https://numpy.org/doc/1.23/release/1.15.0-notes.html#return-indices-keyword-added-for-np-intersect1d) - [`np.quantile` and `np.nanquantile`](https://numpy.org/doc/1.23/release/1.15.0-notes.html#np-quantile-and-np-nanquantile) - [Build system](https://numpy.org/doc/1.23/release/1.15.0-notes.html#build-system) + 
[Improvements](https://numpy.org/doc/1.23/release/1.15.0-notes.html#improvements) - [`np.einsum` updates](https://numpy.org/doc/1.23/release/1.15.0-notes.html#np-einsum-updates) - [`np.ufunc.reduce` and related functions now accept an initial value](https://numpy.org/doc/1.23/release/1.15.0-notes.html#np-ufunc-reduce-and-related-functions-now-accept-an-initial-value) - [`np.flip` can operate over multiple axes](https://numpy.org/doc/1.23/release/1.15.0-notes.html#np-flip-can-operate-over-multiple-axes) - [`histogram` and `histogramdd` functions have moved to `np.lib.histograms`](https://numpy.org/doc/1.23/release/1.15.0-notes.html#histogram-and-histogramdd-functions-have-moved-to-np-lib-histograms) - [`histogram` will accept NaN values when explicit bins are given](https://numpy.org/doc/1.23/release/1.15.0-notes.html#histogram-will-accept-nan-values-when-explicit-bins-are-given) - [`histogram` works on datetime types, when explicit bin edges are given](https://numpy.org/doc/1.23/release/1.15.0-notes.html#histogram-works-on-datetime-types-when-explicit-bin-edges-are-given) - [`histogram` “auto” estimator handles limited variance better](https://numpy.org/doc/1.23/release/1.15.0-notes.html#histogram-auto-estimator-handles-limited-variance-better) - [The edges returned by `histogram`` and `histogramdd` now match the data float type](https://numpy.org/doc/1.23/release/1.15.0-notes.html#the-edges-returned-by-histogram-and-histogramdd-now-match-the-data-float-type) - [`histogramdd` allows explicit ranges to be given in a subset of axes](https://numpy.org/doc/1.23/release/1.15.0-notes.html#histogramdd-allows-explicit-ranges-to-be-given-in-a-subset-of-axes) - [The normed arguments of `histogramdd` and `histogram2d` have been renamed](https://numpy.org/doc/1.23/release/1.15.0-notes.html#the-normed-arguments-of-histogramdd-and-histogram2d-have-been-renamed) - [`np.r_` works with 0d arrays, and `np.ma.mr_` works with 
`np.ma.masked`](https://numpy.org/doc/1.23/release/1.15.0-notes.html#np-r-works-with-0d-arrays-and-np-ma-mr-works-with-np-ma-masked) - [`np.ptp` accepts a `keepdims` argument, and extended axis tuples](https://numpy.org/doc/1.23/release/1.15.0-notes.html#np-ptp-accepts-a-keepdims-argument-and-extended-axis-tuples) - [`MaskedArray.astype` now is identical to `ndarray.astype`](https://numpy.org/doc/1.23/release/1.15.0-notes.html#maskedarray-astype-now-is-identical-to-ndarray-astype) - [Enable AVX2/AVX512 at compile time](https://numpy.org/doc/1.23/release/1.15.0-notes.html#enable-avx2-avx512-at-compile-time) - [`nan_to_num` always returns scalars when receiving scalar or 0d inputs](https://numpy.org/doc/1.23/release/1.15.0-notes.html#nan-to-num-always-returns-scalars-when-receiving-scalar-or-0d-inputs) - [`np.flatnonzero` works on numpy-convertible types](https://numpy.org/doc/1.23/release/1.15.0-notes.html#np-flatnonzero-works-on-numpy-convertible-types) - [`np.interp` returns numpy scalars rather than builtin scalars](https://numpy.org/doc/1.23/release/1.15.0-notes.html#np-interp-returns-numpy-scalars-rather-than-builtin-scalars) - [Allow dtype field names to be unicode in Python 2](https://numpy.org/doc/1.23/release/1.15.0-notes.html#allow-dtype-field-names-to-be-unicode-in-python-2) - [Comparison ufuncs accept `dtype=object`, overriding the default `bool`](https://numpy.org/doc/1.23/release/1.15.0-notes.html#comparison-ufuncs-accept-dtype-object-overriding-the-default-bool) - [`sort` functions accept `kind='stable'`](https://numpy.org/doc/1.23/release/1.15.0-notes.html#sort-functions-accept-kind-stable) - [Do not make temporary copies for in-place accumulation](https://numpy.org/doc/1.23/release/1.15.0-notes.html#do-not-make-temporary-copies-for-in-place-accumulation) - [`linalg.matrix_power` can now handle stacks of matrices](https://numpy.org/doc/1.23/release/1.15.0-notes.html#linalg-matrix-power-can-now-handle-stacks-of-matrices) - [Increased performance in 
`random.permutation` for multidimensional arrays](https://numpy.org/doc/1.23/release/1.15.0-notes.html#increased-performance-in-random-permutation-for-multidimensional-arrays) - [Generalized ufuncs now accept `axes`, `axis` and `keepdims` arguments](https://numpy.org/doc/1.23/release/1.15.0-notes.html#generalized-ufuncs-now-accept-axes-axis-and-keepdims-arguments) - [float128 values now print correctly on ppc systems](https://numpy.org/doc/1.23/release/1.15.0-notes.html#float128-values-now-print-correctly-on-ppc-systems) - [New `np.take_along_axis` and `np.put_along_axis` functions](https://numpy.org/doc/1.23/release/1.15.0-notes.html#new-np-take-along-axis-and-np-put-along-axis-functions) * [1.14.6](https://numpy.org/doc/1.23/release/1.14.6-notes.html) + [Contributors](https://numpy.org/doc/1.23/release/1.14.6-notes.html#contributors) + [Pull requests merged](https://numpy.org/doc/1.23/release/1.14.6-notes.html#pull-requests-merged) * [1.14.5](https://numpy.org/doc/1.23/release/1.14.5-notes.html) + [Contributors](https://numpy.org/doc/1.23/release/1.14.5-notes.html#contributors) + [Pull requests merged](https://numpy.org/doc/1.23/release/1.14.5-notes.html#pull-requests-merged) * [1.14.4](https://numpy.org/doc/1.23/release/1.14.4-notes.html) + [Contributors](https://numpy.org/doc/1.23/release/1.14.4-notes.html#contributors) + [Pull requests merged](https://numpy.org/doc/1.23/release/1.14.4-notes.html#pull-requests-merged) * [1.14.3](https://numpy.org/doc/1.23/release/1.14.3-notes.html) + [Contributors](https://numpy.org/doc/1.23/release/1.14.3-notes.html#contributors) + [Pull requests merged](https://numpy.org/doc/1.23/release/1.14.3-notes.html#pull-requests-merged) * [1.14.2](https://numpy.org/doc/1.23/release/1.14.2-notes.html) + [Contributors](https://numpy.org/doc/1.23/release/1.14.2-notes.html#contributors) + [Pull requests merged](https://numpy.org/doc/1.23/release/1.14.2-notes.html#pull-requests-merged) * 
[1.14.1](https://numpy.org/doc/1.23/release/1.14.1-notes.html) + [Contributors](https://numpy.org/doc/1.23/release/1.14.1-notes.html#contributors) + [Pull requests merged](https://numpy.org/doc/1.23/release/1.14.1-notes.html#pull-requests-merged) * [1.14.0](https://numpy.org/doc/1.23/release/1.14.0-notes.html) + [Highlights](https://numpy.org/doc/1.23/release/1.14.0-notes.html#highlights) + [New functions](https://numpy.org/doc/1.23/release/1.14.0-notes.html#new-functions) + [Deprecations](https://numpy.org/doc/1.23/release/1.14.0-notes.html#deprecations) + [Future Changes](https://numpy.org/doc/1.23/release/1.14.0-notes.html#future-changes) + [Compatibility notes](https://numpy.org/doc/1.23/release/1.14.0-notes.html#compatibility-notes) - [The mask of a masked array view is also a view rather than a copy](https://numpy.org/doc/1.23/release/1.14.0-notes.html#the-mask-of-a-masked-array-view-is-also-a-view-rather-than-a-copy) - [`np.ma.masked` is no longer writeable](https://numpy.org/doc/1.23/release/1.14.0-notes.html#np-ma-masked-is-no-longer-writeable) - [`np.ma` functions producing `fill_value` s have changed](https://numpy.org/doc/1.23/release/1.14.0-notes.html#np-ma-functions-producing-fill-value-s-have-changed) - [`a.flat.__array__()` returns non-writeable arrays when `a` is non-contiguous](https://numpy.org/doc/1.23/release/1.14.0-notes.html#a-flat-array-returns-non-writeable-arrays-when-a-is-non-contiguous) - [`np.tensordot` now returns zero array when contracting over 0-length dimension](https://numpy.org/doc/1.23/release/1.14.0-notes.html#np-tensordot-now-returns-zero-array-when-contracting-over-0-length-dimension) - [`numpy.testing` reorganized](https://numpy.org/doc/1.23/release/1.14.0-notes.html#numpy-testing-reorganized) - [`np.asfarray` no longer accepts non-dtypes through the `dtype` argument](https://numpy.org/doc/1.23/release/1.14.0-notes.html#np-asfarray-no-longer-accepts-non-dtypes-through-the-dtype-argument) - [1D `np.linalg.norm` preserves 
float input types, even for arbitrary orders](https://numpy.org/doc/1.23/release/1.14.0-notes.html#d-np-linalg-norm-preserves-float-input-types-even-for-arbitrary-orders) - [`count_nonzero(arr, axis=())` now counts over no axes, not all axes](https://numpy.org/doc/1.23/release/1.14.0-notes.html#count-nonzero-arr-axis-now-counts-over-no-axes-not-all-axes) - [`__init__.py` files added to test directories](https://numpy.org/doc/1.23/release/1.14.0-notes.html#init-py-files-added-to-test-directories) - [`.astype(bool)` on unstructured void arrays now calls `bool` on each element](https://numpy.org/doc/1.23/release/1.14.0-notes.html#astype-bool-on-unstructured-void-arrays-now-calls-bool-on-each-element) - [`MaskedArray.squeeze` never returns `np.ma.masked`](https://numpy.org/doc/1.23/release/1.14.0-notes.html#maskedarray-squeeze-never-returns-np-ma-masked) - [Renamed first parameter of `can_cast` from `from` to `from_`](https://numpy.org/doc/1.23/release/1.14.0-notes.html#renamed-first-parameter-of-can-cast-from-from-to-from) - [`isnat` raises `TypeError` when passed wrong type](https://numpy.org/doc/1.23/release/1.14.0-notes.html#isnat-raises-typeerror-when-passed-wrong-type) - [`dtype.__getitem__` raises `TypeError` when passed wrong type](https://numpy.org/doc/1.23/release/1.14.0-notes.html#dtype-getitem-raises-typeerror-when-passed-wrong-type) - [User-defined types now need to implement `__str__` and `__repr__`](https://numpy.org/doc/1.23/release/1.14.0-notes.html#user-defined-types-now-need-to-implement-str-and-repr) - [Many changes to array printing, disableable with the new “legacy” printing mode](https://numpy.org/doc/1.23/release/1.14.0-notes.html#many-changes-to-array-printing-disableable-with-the-new-legacy-printing-mode) + [C API changes](https://numpy.org/doc/1.23/release/1.14.0-notes.html#c-api-changes) - [PyPy compatible alternative to `UPDATEIFCOPY` 
arrays](https://numpy.org/doc/1.23/release/1.14.0-notes.html#pypy-compatible-alternative-to-updateifcopy-arrays) + [New Features](https://numpy.org/doc/1.23/release/1.14.0-notes.html#new-features) - [Encoding argument for text IO functions](https://numpy.org/doc/1.23/release/1.14.0-notes.html#encoding-argument-for-text-io-functions) - [External `nose` plugins are usable by `numpy.testing.Tester`](https://numpy.org/doc/1.23/release/1.14.0-notes.html#external-nose-plugins-are-usable-by-numpy-testing-tester) - [`parametrize` decorator added to `numpy.testing`](https://numpy.org/doc/1.23/release/1.14.0-notes.html#parametrize-decorator-added-to-numpy-testing) - [`chebinterpolate` function added to `numpy.polynomial.chebyshev`](https://numpy.org/doc/1.23/release/1.14.0-notes.html#chebinterpolate-function-added-to-numpy-polynomial-chebyshev) - [Support for reading lzma compressed text files in Python 3](https://numpy.org/doc/1.23/release/1.14.0-notes.html#support-for-reading-lzma-compressed-text-files-in-python-3) - [`sign` option added to `np.setprintoptions` and `np.array2string`](https://numpy.org/doc/1.23/release/1.14.0-notes.html#sign-option-added-to-np-setprintoptions-and-np-array2string) - [`hermitian` option added to``np.linalg.matrix_rank``](https://numpy.org/doc/1.23/release/1.14.0-notes.html#hermitian-option-added-to-np-linalg-matrix-rank) - [`threshold` and `edgeitems` options added to `np.array2string`](https://numpy.org/doc/1.23/release/1.14.0-notes.html#threshold-and-edgeitems-options-added-to-np-array2string) - [`concatenate` and `stack` gained an `out` argument](https://numpy.org/doc/1.23/release/1.14.0-notes.html#concatenate-and-stack-gained-an-out-argument) - [Support for PGI flang compiler on Windows](https://numpy.org/doc/1.23/release/1.14.0-notes.html#support-for-pgi-flang-compiler-on-windows) + [Improvements](https://numpy.org/doc/1.23/release/1.14.0-notes.html#improvements) - [Numerator degrees of freedom in `random.noncentral_f` need only be 
positive.](https://numpy.org/doc/1.23/release/1.14.0-notes.html#numerator-degrees-of-freedom-in-random-noncentral-f-need-only-be-positive) - [The GIL is released for all `np.einsum` variations](https://numpy.org/doc/1.23/release/1.14.0-notes.html#the-gil-is-released-for-all-np-einsum-variations) - [The `np.einsum` function will use BLAS when possible and optimize by default](https://numpy.org/doc/1.23/release/1.14.0-notes.html#the-np-einsum-function-will-use-blas-when-possible-and-optimize-by-default) - [`f2py` now handles arrays of dimension 0](https://numpy.org/doc/1.23/release/1.14.0-notes.html#f2py-now-handles-arrays-of-dimension-0) - [`numpy.distutils` supports using MSVC and mingw64-gfortran together](https://numpy.org/doc/1.23/release/1.14.0-notes.html#numpy-distutils-supports-using-msvc-and-mingw64-gfortran-together) - [`np.linalg.pinv` now works on stacked matrices](https://numpy.org/doc/1.23/release/1.14.0-notes.html#np-linalg-pinv-now-works-on-stacked-matrices) - [`numpy.save` aligns data to 64 bytes instead of 16](https://numpy.org/doc/1.23/release/1.14.0-notes.html#numpy-save-aligns-data-to-64-bytes-instead-of-16) - [NPZ files now can be written without using temporary files](https://numpy.org/doc/1.23/release/1.14.0-notes.html#npz-files-now-can-be-written-without-using-temporary-files) - [Better support for empty structured and string types](https://numpy.org/doc/1.23/release/1.14.0-notes.html#better-support-for-empty-structured-and-string-types) - [Support for `decimal.Decimal` in `np.lib.financial`](https://numpy.org/doc/1.23/release/1.14.0-notes.html#support-for-decimal-decimal-in-np-lib-financial) - [Float printing now uses “dragon4” algorithm for shortest decimal representation](https://numpy.org/doc/1.23/release/1.14.0-notes.html#float-printing-now-uses-dragon4-algorithm-for-shortest-decimal-representation) - [`void` datatype elements are now printed in hex 
notation](https://numpy.org/doc/1.23/release/1.14.0-notes.html#void-datatype-elements-are-now-printed-in-hex-notation) - [printing style for `void` datatypes is now independently customizable](https://numpy.org/doc/1.23/release/1.14.0-notes.html#printing-style-for-void-datatypes-is-now-independently-customizable) - [Reduced memory usage of `np.loadtxt`](https://numpy.org/doc/1.23/release/1.14.0-notes.html#reduced-memory-usage-of-np-loadtxt) + [Changes](https://numpy.org/doc/1.23/release/1.14.0-notes.html#changes) - [Multiple-field indexing/assignment of structured arrays](https://numpy.org/doc/1.23/release/1.14.0-notes.html#multiple-field-indexing-assignment-of-structured-arrays) - [Integer and Void scalars are now unaffected by `np.set_string_function`](https://numpy.org/doc/1.23/release/1.14.0-notes.html#integer-and-void-scalars-are-now-unaffected-by-np-set-string-function) - [0d array printing changed, `style` arg of array2string deprecated](https://numpy.org/doc/1.23/release/1.14.0-notes.html#d-array-printing-changed-style-arg-of-array2string-deprecated) - [Seeding `RandomState` using an array requires a 1-d array](https://numpy.org/doc/1.23/release/1.14.0-notes.html#seeding-randomstate-using-an-array-requires-a-1-d-array) - [`MaskedArray` objects show a more useful `repr`](https://numpy.org/doc/1.23/release/1.14.0-notes.html#maskedarray-objects-show-a-more-useful-repr) - [The `repr` of `np.polynomial` classes is more explicit](https://numpy.org/doc/1.23/release/1.14.0-notes.html#the-repr-of-np-polynomial-classes-is-more-explicit) * [1.13.3](https://numpy.org/doc/1.23/release/1.13.3-notes.html) + [Contributors](https://numpy.org/doc/1.23/release/1.13.3-notes.html#contributors) + [Pull requests merged](https://numpy.org/doc/1.23/release/1.13.3-notes.html#pull-requests-merged) * [1.13.2](https://numpy.org/doc/1.23/release/1.13.2-notes.html) + [Contributors](https://numpy.org/doc/1.23/release/1.13.2-notes.html#contributors) + [Pull requests 
[Improvements](https://numpy.org/doc/1.23/release/1.8.0-notes.html#improvements) - [IO performance improvements](https://numpy.org/doc/1.23/release/1.8.0-notes.html#io-performance-improvements) - [Performance improvements to `pad`](https://numpy.org/doc/1.23/release/1.8.0-notes.html#performance-improvements-to-pad) - [Performance improvements to `isnan`, `isinf`, `isfinite` and `byteswap`](https://numpy.org/doc/1.23/release/1.8.0-notes.html#performance-improvements-to-isnan-isinf-isfinite-and-byteswap) - [Performance improvements via SSE2 vectorization](https://numpy.org/doc/1.23/release/1.8.0-notes.html#performance-improvements-via-sse2-vectorization) - [Performance improvements to `median`](https://numpy.org/doc/1.23/release/1.8.0-notes.html#performance-improvements-to-median) - [Overridable operand flags in ufunc C-API](https://numpy.org/doc/1.23/release/1.8.0-notes.html#overridable-operand-flags-in-ufunc-c-api) + [Changes](https://numpy.org/doc/1.23/release/1.8.0-notes.html#changes) - [General](https://numpy.org/doc/1.23/release/1.8.0-notes.html#general) - [C-API Array Additions](https://numpy.org/doc/1.23/release/1.8.0-notes.html#c-api-array-additions) - [C-API Ufunc Additions](https://numpy.org/doc/1.23/release/1.8.0-notes.html#c-api-ufunc-additions) - [C-API Developer Improvements](https://numpy.org/doc/1.23/release/1.8.0-notes.html#c-api-developer-improvements) + [Deprecations](https://numpy.org/doc/1.23/release/1.8.0-notes.html#deprecations) - [General](https://numpy.org/doc/1.23/release/1.8.0-notes.html#id1) + [Authors](https://numpy.org/doc/1.23/release/1.8.0-notes.html#authors) * [1.7.2](https://numpy.org/doc/1.23/release/1.7.2-notes.html) + [Issues fixed](https://numpy.org/doc/1.23/release/1.7.2-notes.html#issues-fixed) * [1.7.1](https://numpy.org/doc/1.23/release/1.7.1-notes.html) + [Issues fixed](https://numpy.org/doc/1.23/release/1.7.1-notes.html#issues-fixed) * [1.7.0](https://numpy.org/doc/1.23/release/1.7.0-notes.html) + 
[Highlights](https://numpy.org/doc/1.23/release/1.7.0-notes.html#highlights) + [Compatibility notes](https://numpy.org/doc/1.23/release/1.7.0-notes.html#compatibility-notes) + [New features](https://numpy.org/doc/1.23/release/1.7.0-notes.html#new-features) - [Reduction UFuncs Generalize axis= Parameter](https://numpy.org/doc/1.23/release/1.7.0-notes.html#reduction-ufuncs-generalize-axis-parameter) - [Reduction UFuncs New keepdims= Parameter](https://numpy.org/doc/1.23/release/1.7.0-notes.html#reduction-ufuncs-new-keepdims-parameter) - [Datetime support](https://numpy.org/doc/1.23/release/1.7.0-notes.html#datetime-support) - [Custom formatter for printing arrays](https://numpy.org/doc/1.23/release/1.7.0-notes.html#custom-formatter-for-printing-arrays) - [New function numpy.random.choice](https://numpy.org/doc/1.23/release/1.7.0-notes.html#new-function-numpy-random-choice) - [New function isclose](https://numpy.org/doc/1.23/release/1.7.0-notes.html#new-function-isclose) - [Preliminary multi-dimensional support in the polynomial package](https://numpy.org/doc/1.23/release/1.7.0-notes.html#preliminary-multi-dimensional-support-in-the-polynomial-package) - [Ability to pad rank-n arrays](https://numpy.org/doc/1.23/release/1.7.0-notes.html#ability-to-pad-rank-n-arrays) - [New argument to searchsorted](https://numpy.org/doc/1.23/release/1.7.0-notes.html#new-argument-to-searchsorted) - [Build system](https://numpy.org/doc/1.23/release/1.7.0-notes.html#build-system) - [C API](https://numpy.org/doc/1.23/release/1.7.0-notes.html#c-api) + [Changes](https://numpy.org/doc/1.23/release/1.7.0-notes.html#changes) - [General](https://numpy.org/doc/1.23/release/1.7.0-notes.html#general) - [Casting Rules](https://numpy.org/doc/1.23/release/1.7.0-notes.html#casting-rules) + [Deprecations](https://numpy.org/doc/1.23/release/1.7.0-notes.html#deprecations) - [General](https://numpy.org/doc/1.23/release/1.7.0-notes.html#id1) - [C-API](https://numpy.org/doc/1.23/release/1.7.0-notes.html#id2) 
* [1.6.2](https://numpy.org/doc/1.23/release/1.6.2-notes.html) + [Issues fixed](https://numpy.org/doc/1.23/release/1.6.2-notes.html#issues-fixed) - [`numpy.core`](https://numpy.org/doc/1.23/release/1.6.2-notes.html#numpy-core) - [`numpy.lib`](https://numpy.org/doc/1.23/release/1.6.2-notes.html#numpy-lib) - [`numpy.distutils`](https://numpy.org/doc/1.23/release/1.6.2-notes.html#numpy-distutils) - [`numpy.random`](https://numpy.org/doc/1.23/release/1.6.2-notes.html#numpy-random) + [Changes](https://numpy.org/doc/1.23/release/1.6.2-notes.html#changes) - [`numpy.f2py`](https://numpy.org/doc/1.23/release/1.6.2-notes.html#numpy-f2py) - [`numpy.poly`](https://numpy.org/doc/1.23/release/1.6.2-notes.html#numpy-poly) * [1.6.1](https://numpy.org/doc/1.23/release/1.6.1-notes.html) + [Issues Fixed](https://numpy.org/doc/1.23/release/1.6.1-notes.html#issues-fixed) * [1.6.0](https://numpy.org/doc/1.23/release/1.6.0-notes.html) + [Highlights](https://numpy.org/doc/1.23/release/1.6.0-notes.html#highlights) + [New features](https://numpy.org/doc/1.23/release/1.6.0-notes.html#new-features) - [New 16-bit floating point type](https://numpy.org/doc/1.23/release/1.6.0-notes.html#new-16-bit-floating-point-type) - [New iterator](https://numpy.org/doc/1.23/release/1.6.0-notes.html#new-iterator) - [Legendre, Laguerre, Hermite, HermiteE polynomials in `numpy.polynomial`](https://numpy.org/doc/1.23/release/1.6.0-notes.html#legendre-laguerre-hermite-hermitee-polynomials-in-numpy-polynomial) - [Fortran assumed shape array and size function support in `numpy.f2py`](https://numpy.org/doc/1.23/release/1.6.0-notes.html#fortran-assumed-shape-array-and-size-function-support-in-numpy-f2py) - [Other new functions](https://numpy.org/doc/1.23/release/1.6.0-notes.html#other-new-functions) + [Changes](https://numpy.org/doc/1.23/release/1.6.0-notes.html#changes) - [`default error handling`](https://numpy.org/doc/1.23/release/1.6.0-notes.html#default-error-handling) - 
[`numpy.distutils`](https://numpy.org/doc/1.23/release/1.6.0-notes.html#numpy-distutils) - [`numpy.testing`](https://numpy.org/doc/1.23/release/1.6.0-notes.html#numpy-testing) - [`C API`](https://numpy.org/doc/1.23/release/1.6.0-notes.html#c-api) + [Deprecated features](https://numpy.org/doc/1.23/release/1.6.0-notes.html#deprecated-features) + [Removed features](https://numpy.org/doc/1.23/release/1.6.0-notes.html#removed-features) - [`numpy.fft`](https://numpy.org/doc/1.23/release/1.6.0-notes.html#numpy-fft) - [`numpy.memmap`](https://numpy.org/doc/1.23/release/1.6.0-notes.html#numpy-memmap) - [`numpy.lib`](https://numpy.org/doc/1.23/release/1.6.0-notes.html#numpy-lib) - [`numpy.ma`](https://numpy.org/doc/1.23/release/1.6.0-notes.html#numpy-ma) - [`numpy.distutils`](https://numpy.org/doc/1.23/release/1.6.0-notes.html#id1) * [1.5.0](https://numpy.org/doc/1.23/release/1.5.0-notes.html) + [Highlights](https://numpy.org/doc/1.23/release/1.5.0-notes.html#highlights) - [Python 3 compatibility](https://numpy.org/doc/1.23/release/1.5.0-notes.html#python-3-compatibility) - [**PEP 3118** compatibility](https://numpy.org/doc/1.23/release/1.5.0-notes.html#pep-3118-compatibility) + [New features](https://numpy.org/doc/1.23/release/1.5.0-notes.html#new-features) - [Warning on casting complex to real](https://numpy.org/doc/1.23/release/1.5.0-notes.html#warning-on-casting-complex-to-real) - [Dot method for ndarrays](https://numpy.org/doc/1.23/release/1.5.0-notes.html#dot-method-for-ndarrays) - [linalg.slogdet function](https://numpy.org/doc/1.23/release/1.5.0-notes.html#linalg-slogdet-function) - [new header](https://numpy.org/doc/1.23/release/1.5.0-notes.html#new-header) + [Changes](https://numpy.org/doc/1.23/release/1.5.0-notes.html#changes) - [polynomial.polynomial](https://numpy.org/doc/1.23/release/1.5.0-notes.html#polynomial-polynomial) - [polynomial.chebyshev](https://numpy.org/doc/1.23/release/1.5.0-notes.html#polynomial-chebyshev) - 
[histogram](https://numpy.org/doc/1.23/release/1.5.0-notes.html#histogram) - [correlate](https://numpy.org/doc/1.23/release/1.5.0-notes.html#correlate) * [1.4.0](https://numpy.org/doc/1.23/release/1.4.0-notes.html) + [Highlights](https://numpy.org/doc/1.23/release/1.4.0-notes.html#highlights) + [New features](https://numpy.org/doc/1.23/release/1.4.0-notes.html#new-features) - [Extended array wrapping mechanism for ufuncs](https://numpy.org/doc/1.23/release/1.4.0-notes.html#extended-array-wrapping-mechanism-for-ufuncs) - [Automatic detection of forward incompatibilities](https://numpy.org/doc/1.23/release/1.4.0-notes.html#automatic-detection-of-forward-incompatibilities) - [New iterators](https://numpy.org/doc/1.23/release/1.4.0-notes.html#new-iterators) - [New polynomial support](https://numpy.org/doc/1.23/release/1.4.0-notes.html#new-polynomial-support) - [New C API](https://numpy.org/doc/1.23/release/1.4.0-notes.html#new-c-api) - [New ufuncs](https://numpy.org/doc/1.23/release/1.4.0-notes.html#new-ufuncs) - [New defines](https://numpy.org/doc/1.23/release/1.4.0-notes.html#new-defines) - [Testing](https://numpy.org/doc/1.23/release/1.4.0-notes.html#testing) - [Reusing npymath](https://numpy.org/doc/1.23/release/1.4.0-notes.html#reusing-npymath) - [Improved set operations](https://numpy.org/doc/1.23/release/1.4.0-notes.html#improved-set-operations) + [Improvements](https://numpy.org/doc/1.23/release/1.4.0-notes.html#improvements) + [Deprecations](https://numpy.org/doc/1.23/release/1.4.0-notes.html#deprecations) + [Internal changes](https://numpy.org/doc/1.23/release/1.4.0-notes.html#internal-changes) - [Use C99 complex functions when available](https://numpy.org/doc/1.23/release/1.4.0-notes.html#use-c99-complex-functions-when-available) - [split multiarray and umath source code](https://numpy.org/doc/1.23/release/1.4.0-notes.html#split-multiarray-and-umath-source-code) - [Separate 
compilation](https://numpy.org/doc/1.23/release/1.4.0-notes.html#separate-compilation) - [Separate core math library](https://numpy.org/doc/1.23/release/1.4.0-notes.html#separate-core-math-library) * [1.3.0](https://numpy.org/doc/1.23/release/1.3.0-notes.html) + [Highlights](https://numpy.org/doc/1.23/release/1.3.0-notes.html#highlights) - [Python 2.6 support](https://numpy.org/doc/1.23/release/1.3.0-notes.html#python-2-6-support) - [Generalized ufuncs](https://numpy.org/doc/1.23/release/1.3.0-notes.html#generalized-ufuncs) - [Experimental Windows 64 bits support](https://numpy.org/doc/1.23/release/1.3.0-notes.html#experimental-windows-64-bits-support) + [New features](https://numpy.org/doc/1.23/release/1.3.0-notes.html#new-features) - [Formatting issues](https://numpy.org/doc/1.23/release/1.3.0-notes.html#formatting-issues) - [Nan handling in max/min](https://numpy.org/doc/1.23/release/1.3.0-notes.html#nan-handling-in-max-min) - [Nan handling in sign](https://numpy.org/doc/1.23/release/1.3.0-notes.html#nan-handling-in-sign) - [New ufuncs](https://numpy.org/doc/1.23/release/1.3.0-notes.html#new-ufuncs) - [Masked arrays](https://numpy.org/doc/1.23/release/1.3.0-notes.html#masked-arrays) - [gfortran support on windows](https://numpy.org/doc/1.23/release/1.3.0-notes.html#gfortran-support-on-windows) - [Arch option for windows binary](https://numpy.org/doc/1.23/release/1.3.0-notes.html#arch-option-for-windows-binary) + [Deprecated features](https://numpy.org/doc/1.23/release/1.3.0-notes.html#deprecated-features) - [Histogram](https://numpy.org/doc/1.23/release/1.3.0-notes.html#histogram) + [Documentation changes](https://numpy.org/doc/1.23/release/1.3.0-notes.html#documentation-changes) + [New C API](https://numpy.org/doc/1.23/release/1.3.0-notes.html#new-c-api) - [Multiarray API](https://numpy.org/doc/1.23/release/1.3.0-notes.html#multiarray-api) - [Ufunc API](https://numpy.org/doc/1.23/release/1.3.0-notes.html#ufunc-api) - [New 
defines](https://numpy.org/doc/1.23/release/1.3.0-notes.html#new-defines) - [Portable NAN, INFINITY, etc
](https://numpy.org/doc/1.23/release/1.3.0-notes.html#portable-nan-infinity-etc) + [Internal changes](https://numpy.org/doc/1.23/release/1.3.0-notes.html#internal-changes) - [numpy.core math configuration revamp](https://numpy.org/doc/1.23/release/1.3.0-notes.html#numpy-core-math-configuration-revamp) - [umath refactor](https://numpy.org/doc/1.23/release/1.3.0-notes.html#umath-refactor) - [Improvements to build warnings](https://numpy.org/doc/1.23/release/1.3.0-notes.html#improvements-to-build-warnings) - [Separate core math library](https://numpy.org/doc/1.23/release/1.3.0-notes.html#separate-core-math-library) - [CPU arch detection](https://numpy.org/doc/1.23/release/1.3.0-notes.html#cpu-arch-detection) © 2005–2022 NumPy Developers Licensed under the 3-clause BSD License. <https://numpy.org/doc/1.23/release.htmlNumPy: the absolute basics for beginners ======================================== Welcome to the absolute beginner’s guide to NumPy! If you have comments or suggestions, please don’t hesitate to [reach out](https://numpy.org/community/)! Welcome to NumPy! ----------------- NumPy (**Numerical Python**) is an open source Python library that’s used in almost every field of science and engineering. It’s the universal standard for working with numerical data in Python, and it’s at the core of the scientific Python and PyData ecosystems. NumPy users include everyone from beginning coders to experienced researchers doing state-of-the-art scientific and industrial research and development. The NumPy API is used extensively in Pandas, SciPy, Matplotlib, scikit-learn, scikit-image and most other data science and scientific Python packages. The NumPy library contains multidimensional array and matrix data structures (you’ll find more information about this in later sections). It provides **ndarray**, a homogeneous n-dimensional array object, with methods to efficiently operate on it. NumPy can be used to perform a wide variety of mathematical operations on arrays. 
It adds powerful data structures to Python that guarantee efficient calculations with arrays and matrices, and it supplies an enormous library of high-level mathematical functions that operate on these arrays and matrices.

Learn more about [NumPy here](whatisnumpy#whatisnumpy)!

Installing NumPy
----------------

To install NumPy, we strongly recommend using a scientific Python distribution. If you’re looking for the full instructions for installing NumPy on your operating system, see [Installing NumPy](https://numpy.org/install/).

If you already have Python, you can install NumPy with:

```
conda install numpy
```

or

```
pip install numpy
```

If you don’t have Python yet, you might want to consider using [Anaconda](https://www.anaconda.com/). It’s the easiest way to get started. The advantage of this distribution is that you don’t need to worry about separately installing NumPy or any of the major packages that you’ll be using for your data analyses, like pandas, Scikit-Learn, etc.

How to import NumPy
-------------------

To access NumPy and its functions, import it in your Python code like this:

```
import numpy as np
```

We shorten the imported name to `np` for better readability of code using NumPy. This is a widely adopted convention that you should follow so that anyone working with your code can easily understand it.

Reading the example code
------------------------

If you aren’t already comfortable with reading tutorials that contain a lot of code, you might not know how to interpret a code block that looks like this:

```
>>> a = np.arange(6)
>>> a2 = a[np.newaxis, :]
>>> a2.shape
(1, 6)
```

If you aren’t familiar with this style, it’s very easy to understand. If you see `>>>`, you’re looking at **input**, or the code that you would enter. Everything that doesn’t have `>>>` in front of it is **output**, or the results of running your code.
This is the style you see when you run `python` on the command line, but if you’re using IPython, you might see a different style. Note that the `>>>` prompt is not part of the code and will cause an error if typed or pasted into the Python shell. It can, however, be safely typed or pasted into the IPython shell, which ignores the `>>>`.

What’s the difference between a Python list and a NumPy array?
--------------------------------------------------------------

NumPy gives you an enormous range of fast and efficient ways of creating arrays and manipulating numerical data inside them. While a Python list can contain different data types within a single list, all of the elements in a NumPy array should be homogeneous. The mathematical operations that are meant to be performed on arrays would be extremely inefficient if the arrays weren’t homogeneous.

**Why use NumPy?**

NumPy arrays are faster and more compact than Python lists. An array consumes less memory and is convenient to use. NumPy uses much less memory to store data, and it provides a mechanism for specifying the data types. This allows the code to be optimized even further.

What is an array?
-----------------

An array is a central data structure of the NumPy library. An array is a grid of values, and it contains information about the raw data, how to locate an element, and how to interpret an element. It has a grid of elements that can be indexed in [various ways](quickstart#quickstart-indexing-slicing-and-iterating). The elements are all of the same type, referred to as the array `dtype`.

An array can be indexed by a tuple of nonnegative integers, by booleans, by another array, or by integers. The `rank` of the array is the number of dimensions. The `shape` of the array is a tuple of integers giving the size of the array along each dimension.

One way we can initialize NumPy arrays is from Python lists, using nested lists for two- or higher-dimensional data.
For example:

```
>>> a = np.array([1, 2, 3, 4, 5, 6])
```

or:

```
>>> a = np.array([[1, 2, 3, 4], [5, 6, 7, 8], [9, 10, 11, 12]])
```

We can access the elements in the array using square brackets. When you’re accessing elements, remember that indexing in NumPy starts at 0. That means that if you want to access the first element in your array, you’ll be accessing element “0”.

```
>>> print(a[0])
[1 2 3 4]
```

More information about arrays
-----------------------------

*This section covers* `1D array`, `2D array`, `ndarray`, `vector`, `matrix`

You might occasionally hear an array referred to as an “ndarray,” which is shorthand for “N-dimensional array.” An N-dimensional array is simply an array with any number of dimensions. You might also hear **1-D**, or one-dimensional array, **2-D**, or two-dimensional array, and so on. The NumPy `ndarray` class is used to represent both matrices and vectors. A **vector** is an array with a single dimension (there’s no difference between row and column vectors), while a **matrix** refers to an array with two dimensions. For **3-D** or higher dimensional arrays, the term **tensor** is also commonly used.

**What are the attributes of an array?**

An array is usually a fixed-size container of items of the same type and size. The number of dimensions and items in an array is defined by its shape. The shape of an array is a tuple of non-negative integers that specify the sizes of each dimension.

In NumPy, dimensions are called **axes**. This means that if you have a 2D array that looks like this:

```
[[0., 0., 0.],
 [1., 1., 1.]]
```

Your array has 2 axes. The first axis has a length of 2 and the second axis has a length of 3.

Just like in other Python container objects, the contents of an array can be accessed and modified by indexing or slicing the array. Unlike the typical container objects, different arrays can share the same data, so changes made on one array might be visible in another.
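To make that shared-data behavior concrete, here is a small sketch (the array values are illustrative): indexing out a row gives a *view* that shares the original array’s data, while `.copy()` gives independent data.

```python
import numpy as np

a = np.array([[0., 0., 0.],
              [1., 1., 1.]])

# Indexing a row gives a view, not a copy: it shares a's underlying data.
row = a[0]
row[0] = 99.0

# The change made through the view is visible in the original array.
print(a[0, 0])   # 99.0

# An explicit copy does not share data.
b = a.copy()
b[0, 0] = -1.0
print(a[0, 0])   # still 99.0
```

If you need an array that does not share data with the original, use `.copy()` as shown.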
Array **attributes** reflect information intrinsic to the array itself. If you need to get, or even set, properties of an array without creating a new array, you can often access an array through its attributes.

[Read more about array attributes here](../reference/arrays.ndarray#arrays-ndarray) and learn about [array objects here](../reference/arrays#arrays).

How to create a basic array
---------------------------

*This section covers* `np.array()`, `np.zeros()`, `np.ones()`, `np.empty()`, `np.arange()`, `np.linspace()`, `dtype`

To create a NumPy array, you can use the function `np.array()`. All you need to do to create a simple array is pass a list to it. If you choose to, you can also specify the type of data in your list. [You can find more information about data types here](../reference/arrays.dtypes#arrays-dtypes).

```
>>> import numpy as np
>>> a = np.array([1, 2, 3])
```

You can visualize your array this way:

![../_images/np_array.png](https://numpy.org/doc/1.23/_images/np_array.png)

*Be aware that these visualizations are meant to simplify ideas and give you a basic understanding of NumPy concepts and mechanics. Arrays and array operations are much more complicated than are captured here!*

Besides creating an array from a sequence of elements, you can easily create an array filled with `0`’s:

```
>>> np.zeros(2)
array([0., 0.])
```

Or an array filled with `1`’s:

```
>>> np.ones(2)
array([1., 1.])
```

Or even an empty array! The function `empty` creates an array whose initial content is random and depends on the state of the memory. The reason to use `empty` over `zeros` (or something similar) is speed - just make sure to fill every element afterwards!

```
>>> # Create an empty array with 2 elements
>>> np.empty(2)
array([3.14, 42. ])  # may vary
```

You can create an array with a range of elements:

```
>>> np.arange(4)
array([0, 1, 2, 3])
```

And even an array that contains a range of evenly spaced intervals. To do this, you will specify the **first number**, **last number**, and the **step size**.
```
>>> np.arange(2, 9, 2)
array([2, 4, 6, 8])
```

You can also use `np.linspace()` to create an array with values that are spaced linearly in a specified interval:

```
>>> np.linspace(0, 10, num=5)
array([ 0. ,  2.5,  5. ,  7.5, 10. ])
```

**Specifying your data type**

While the default data type is floating point (`np.float64`), you can explicitly specify which data type you want using the `dtype` keyword.

```
>>> x = np.ones(2, dtype=np.int64)
>>> x
array([1, 1])
```

[Learn more about creating arrays here](quickstart#quickstart-array-creation)

Adding, removing, and sorting elements
--------------------------------------

*This section covers* `np.sort()`, `np.concatenate()`

Sorting an array is simple with `np.sort()`. You can specify the axis, kind, and order when you call the function.

If you start with this array:

```
>>> arr = np.array([2, 1, 5, 3, 7, 4, 6, 8])
```

You can quickly sort the numbers in ascending order with:

```
>>> np.sort(arr)
array([1, 2, 3, 4, 5, 6, 7, 8])
```

In addition to sort, which returns a sorted copy of an array, you can use:

* [`argsort`](../reference/generated/numpy.argsort#numpy.argsort "numpy.argsort"), which is an indirect sort along a specified axis,
* [`lexsort`](../reference/generated/numpy.lexsort#numpy.lexsort "numpy.lexsort"), which is an indirect stable sort on multiple keys,
* [`searchsorted`](../reference/generated/numpy.searchsorted#numpy.searchsorted "numpy.searchsorted"), which will find elements in a sorted array, and
* [`partition`](../reference/generated/numpy.partition#numpy.partition "numpy.partition"), which is a partial sort.

To read more about sorting an array, see: [`sort`](../reference/generated/numpy.sort#numpy.sort "numpy.sort").

If you start with these arrays:

```
>>> a = np.array([1, 2, 3, 4])
>>> b = np.array([5, 6, 7, 8])
```

You can concatenate them with `np.concatenate()`.
```
>>> np.concatenate((a, b))
array([1, 2, 3, 4, 5, 6, 7, 8])
```

Or, if you start with these arrays:

```
>>> x = np.array([[1, 2], [3, 4]])
>>> y = np.array([[5, 6]])
```

You can concatenate them with:

```
>>> np.concatenate((x, y), axis=0)
array([[1, 2],
       [3, 4],
       [5, 6]])
```

In order to remove elements from an array, it’s simple to use indexing to select the elements that you want to keep.

To read more about concatenate, see: [`concatenate`](../reference/generated/numpy.concatenate#numpy.concatenate "numpy.concatenate").

How do you know the shape and size of an array?
-----------------------------------------------

*This section covers* `ndarray.ndim`, `ndarray.size`, `ndarray.shape`

`ndarray.ndim` will tell you the number of axes, or dimensions, of the array.

`ndarray.size` will tell you the total number of elements of the array. This is the *product* of the elements of the array’s shape.

`ndarray.shape` will display a tuple of integers that indicate the number of elements stored along each dimension of the array. If, for example, you have a 2-D array with 2 rows and 3 columns, the shape of your array is `(2, 3)`.

For example, if you create this array:

```
>>> array_example = np.array([[[0, 1, 2, 3],
...                            [4, 5, 6, 7]],
...
...                           [[0, 1, 2, 3],
...                            [4, 5, 6, 7]],
...
...                           [[0, 1, 2, 3],
...                            [4, 5, 6, 7]]])
```

To find the number of dimensions of the array, run:

```
>>> array_example.ndim
3
```

To find the total number of elements in the array, run:

```
>>> array_example.size
24
```

And to find the shape of your array, run:

```
>>> array_example.shape
(3, 2, 4)
```

Can you reshape an array?
-------------------------

*This section covers* `arr.reshape()`

**Yes!** Using `arr.reshape()` will give a new shape to an array without changing the data. Just remember that when you use the reshape method, the array you want to produce needs to have the same number of elements as the original array.
If you start with an array with 12 elements, you’ll need to make sure that your new array also has a total of 12 elements.

If you start with this array:

```
>>> a = np.arange(6)
>>> print(a)
[0 1 2 3 4 5]
```

You can use `reshape()` to reshape your array. For example, you can reshape this array to an array with three rows and two columns:

```
>>> b = a.reshape(3, 2)
>>> print(b)
[[0 1]
 [2 3]
 [4 5]]
```

With `np.reshape`, you can specify a few optional parameters:

```
>>> np.reshape(a, newshape=(1, 6), order='C')
array([[0, 1, 2, 3, 4, 5]])
```

`a` is the array to be reshaped.

`newshape` is the new shape you want. You can specify an integer or a tuple of integers. If you specify an integer, the result will be an array of that length. The shape should be compatible with the original shape.

`order:` `C` means to read/write the elements using C-like index order, `F` means to read/write the elements using Fortran-like index order, `A` means to read/write the elements in Fortran-like index order if `a` is Fortran contiguous in memory, C-like order otherwise. (This is an optional parameter and doesn’t need to be specified.)

If you want to learn more about C and Fortran order, you can [read more about the internal organization of NumPy arrays here](../dev/internals#numpy-internals). Essentially, C and Fortran orders have to do with how indices correspond to the order in which the array is stored in memory. In Fortran, when moving through the elements of a two-dimensional array as it is stored in memory, the **first** index is the most rapidly varying index. Since the first index moves to the next row as it changes, the matrix is stored one column at a time. This is why Fortran is thought of as a **Column-major language**. In C, on the other hand, the **last** index changes most rapidly. The matrix is stored by rows, making it a **Row-major language**. What you do for C or Fortran depends on whether it’s more important to preserve the indexing convention or not reorder the data.
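The difference between the two orders is easy to see by reshaping the same six elements both ways. A short sketch (the shapes and values here are just an example):

```python
import numpy as np

a = np.arange(6)   # array([0, 1, 2, 3, 4, 5])

# C-like (row-major) order: the last index varies fastest,
# so the result is filled one row at a time.
print(a.reshape(2, 3, order='C'))
# [[0 1 2]
#  [3 4 5]]

# Fortran-like (column-major) order: the first index varies fastest,
# so the result is filled one column at a time.
print(a.reshape(2, 3, order='F'))
# [[0 2 4]
#  [1 3 5]]
```

The same six values end up in different positions depending only on the order in which they are read into the new shape.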
[Learn more about shape manipulation here](quickstart#quickstart-shape-manipulation).

How to convert a 1D array into a 2D array (how to add a new axis to an array)
-----------------------------------------------------------------------------

*This section covers* `np.newaxis`, `np.expand_dims`

You can use `np.newaxis` and `np.expand_dims` to increase the dimensions of your existing array.

Using `np.newaxis` will increase the dimensions of your array by one dimension when used once. This means that a **1D** array will become a **2D** array, a **2D** array will become a **3D** array, and so on.

For example, if you start with this array:

```
>>> a = np.array([1, 2, 3, 4, 5, 6])
>>> a.shape
(6,)
```

You can use `np.newaxis` to add a new axis:

```
>>> a2 = a[np.newaxis, :]
>>> a2.shape
(1, 6)
```

You can explicitly convert a 1D array to either a row vector or a column vector using `np.newaxis`. For example, you can convert a 1D array to a row vector by inserting an axis along the first dimension:

```
>>> row_vector = a[np.newaxis, :]
>>> row_vector.shape
(1, 6)
```

Or, for a column vector, you can insert an axis along the second dimension:

```
>>> col_vector = a[:, np.newaxis]
>>> col_vector.shape
(6, 1)
```

You can also expand an array by inserting a new axis at a specified position with `np.expand_dims`.

For example, if you start with this array:

```
>>> a = np.array([1, 2, 3, 4, 5, 6])
>>> a.shape
(6,)
```

You can use `np.expand_dims` to add an axis at index position 1 with:

```
>>> b = np.expand_dims(a, axis=1)
>>> b.shape
(6, 1)
```

You can add an axis at index position 0 with:

```
>>> c = np.expand_dims(a, axis=0)
>>> c.shape
(1, 6)
```

Find more information about [newaxis here](../reference/arrays.indexing#arrays-indexing) and `expand_dims` at [`expand_dims`](../reference/generated/numpy.expand_dims#numpy.expand_dims "numpy.expand_dims").

Indexing and slicing
--------------------

You can index and slice NumPy arrays in the same ways you can slice Python lists.
``` >>> data = np.array([1, 2, 3]) >>> data[1] 2 >>> data[0:2] array([1, 2]) >>> data[1:] array([2, 3]) >>> data[-2:] array([2, 3]) ``` You can visualize it this way: ![../_images/np_indexing.png](https://numpy.org/doc/1.23/_images/np_indexing.png) You may want to take a section of your array or specific array elements to use in further analysis or additional operations. To do that, you’ll need to subset, slice, and/or index your arrays. If you want to select values from your array that fulfill certain conditions, it’s straightforward with NumPy. For example, if you start with this array: ``` >>> a = np.array([[1 , 2, 3, 4], [5, 6, 7, 8], [9, 10, 11, 12]]) ``` You can easily print all of the values in the array that are less than 5. ``` >>> print(a[a < 5]) [1 2 3 4] ``` You can also select, for example, numbers that are equal to or greater than 5, and use that condition to index an array. ``` >>> five_up = (a >= 5) >>> print(a[five_up]) [ 5 6 7 8 9 10 11 12] ``` You can select elements that are divisible by 2: ``` >>> divisible_by_2 = a[a%2==0] >>> print(divisible_by_2) [ 2 4 6 8 10 12] ``` Or you can select elements that satisfy two conditions using the `&` and `|` operators: ``` >>> c = a[(a > 2) & (a < 11)] >>> print(c) [ 3 4 5 6 7 8 9 10] ``` You can also make use of the logical operators **&** and **|** in order to return boolean values that specify whether or not the values in an array fulfill a certain condition. This can be useful with arrays that contain names or other categorical values. ``` >>> five_up = (a > 5) | (a == 5) >>> print(five_up) [[False False False False] [ True True True True] [ True True True True]] ``` You can also use `np.nonzero()` to select elements or indices from an array. 
Starting with this array: ``` >>> a = np.array([[1, 2, 3, 4], [5, 6, 7, 8], [9, 10, 11, 12]]) ``` You can use `np.nonzero()` to print the indices of elements that are, for example, less than 5: ``` >>> b = np.nonzero(a < 5) >>> print(b) (array([0, 0, 0, 0]), array([0, 1, 2, 3])) ``` In this example, a tuple of arrays was returned: one for each dimension. The first array represents the row indices where these values are found, and the second array represents the column indices where the values are found. If you want to generate a list of coordinates where the elements exist, you can zip the arrays, iterate over the list of coordinates, and print them. For example: ``` >>> list_of_coordinates= list(zip(b[0], b[1])) >>> for coord in list_of_coordinates: ... print(coord) (0, 0) (0, 1) (0, 2) (0, 3) ``` You can also use `np.nonzero()` to print the elements in an array that are less than 5 with: ``` >>> print(a[b]) [1 2 3 4] ``` If the element you’re looking for doesn’t exist in the array, then the returned array of indices will be empty. For example: ``` >>> not_there = np.nonzero(a == 42) >>> print(not_there) (array([], dtype=int64), array([], dtype=int64)) ``` Learn more about [indexing and slicing here](quickstart#quickstart-indexing-slicing-and-iterating) and [here](basics.indexing#basics-indexing). Read more about using the nonzero function at: [`nonzero`](../reference/generated/numpy.nonzero#numpy.nonzero "numpy.nonzero"). How to create an array from existing data ----------------------------------------- *This section covers* `slicing and indexing`, `np.vstack()`, `np.hstack()`, `np.hsplit()`, `.view()`, `copy()` You can easily create a new array from a section of an existing array. Let’s say you have this array: ``` >>> a = np.array([1, 2, 3, 4, 5, 6, 7, 8, 9, 10]) ``` You can create a new array from a section of your array any time by specifying where you want to slice your array. 
``` >>> arr1 = a[3:8] >>> arr1 array([4, 5, 6, 7, 8]) ``` Here, you grabbed a section of your array from index position 3 up to, but not including, index position 8. You can also stack two existing arrays, both vertically and horizontally. Let’s say you have two arrays, `a1` and `a2`: ``` >>> a1 = np.array([[1, 1], ... [2, 2]]) >>> a2 = np.array([[3, 3], ... [4, 4]]) ``` You can stack them vertically with `vstack`: ``` >>> np.vstack((a1, a2)) array([[1, 1], [2, 2], [3, 3], [4, 4]]) ``` Or stack them horizontally with `hstack`: ``` >>> np.hstack((a1, a2)) array([[1, 1, 3, 3], [2, 2, 4, 4]]) ``` You can split an array into several smaller arrays using `hsplit`. You can specify either the number of equally shaped arrays to return or the columns *after* which the division should occur. Let’s say you have this array: ``` >>> x = np.arange(1, 25).reshape(2, 12) >>> x array([[ 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12], [13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24]]) ``` If you wanted to split this array into three equally shaped arrays, you would run: ``` >>> np.hsplit(x, 3) [array([[ 1, 2, 3, 4], [13, 14, 15, 16]]), array([[ 5, 6, 7, 8], [17, 18, 19, 20]]), array([[ 9, 10, 11, 12], [21, 22, 23, 24]])] ``` If you wanted to split your array after the third and fourth column, you’d run: ``` >>> np.hsplit(x, (3, 4)) [array([[ 1, 2, 3], [13, 14, 15]]), array([[ 4], [16]]), array([[ 5, 6, 7, 8, 9, 10, 11, 12], [17, 18, 19, 20, 21, 22, 23, 24]])] ``` [Learn more about stacking and splitting arrays here](quickstart#quickstart-stacking-arrays). You can use the `view` method to create a new array object that looks at the same data as the original array (a *shallow copy*). Views are an important NumPy concept! NumPy functions, as well as operations like indexing and slicing, will return views whenever possible. This saves memory and is faster (no copy of the data has to be made). However, it’s important to be aware of this: modifying data in a view also modifies the original array!
Let’s say you create this array: ``` >>> a = np.array([[1, 2, 3, 4], [5, 6, 7, 8], [9, 10, 11, 12]]) ``` Now we create an array `b1` by slicing `a` and modify the first element of `b1`. This will modify the corresponding element in `a` as well! ``` >>> b1 = a[0, :] >>> b1 array([1, 2, 3, 4]) >>> b1[0] = 99 >>> b1 array([99, 2, 3, 4]) >>> a array([[99, 2, 3, 4], [ 5, 6, 7, 8], [ 9, 10, 11, 12]]) ``` Using the `copy` method will make a complete copy of the array and its data (a *deep copy*). To use this on your array, you could run: ``` >>> b2 = a.copy() ``` [Learn more about copies and views here](quickstart#quickstart-copies-and-views). Basic array operations ---------------------- *This section covers addition, subtraction, multiplication, division, and more* Once you’ve created your arrays, you can start to work with them. Let’s say, for example, that you’ve created two arrays, one called “data” and one called “ones” ![../_images/np_array_dataones.png] You can add the arrays together with the plus sign. ``` >>> data = np.array([1, 2]) >>> ones = np.ones(2, dtype=int) >>> data + ones array([2, 3]) ``` ![../_images/np_data_plus_ones.png] You can, of course, do more than just addition! ``` >>> data - ones array([0, 1]) >>> data * data array([1, 4]) >>> data / data array([1., 1.]) ``` ![../_images/np_sub_mult_divide.png] If you want to find the sum of the elements in an array, you’d use `sum()`. This works for 1D arrays, 2D arrays, and arrays in higher dimensions. ``` >>> a = np.array([1, 2, 3, 4]) >>> a.sum() 10 ``` To add the rows or the columns in a 2D array, you would specify the axis. If you start with this array: ``` >>> b = np.array([[1, 1], [2, 2]]) ``` You can sum over the axis of rows with: ``` >>> b.sum(axis=0) array([3, 3]) ``` You can sum over the axis of columns with: ``` >>> b.sum(axis=1) array([2, 4]) ``` [Learn more about basic operations here](quickstart#quickstart-basic-operations).
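The claim above that summing works for arrays in higher dimensions can be sketched with a 3-D array (an illustrative array, not one used elsewhere in this section):

```python
import numpy as np

t = np.arange(24).reshape(2, 3, 4)  # a 3-D array holding the integers 0..23

print(t.sum())               # 276: the aggregate of the entire array
print(t.sum(axis=0).shape)   # (3, 4): summed across the first axis
print(t.sum(axis=2).shape)   # (2, 3): summed across the last axis
```

As in the 2D case, each `axis` value collapses one dimension and leaves the others intact.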
Broadcasting ------------ There are times when you might want to carry out an operation between an array and a single number (also called *an operation between a vector and a scalar*) or between arrays of two different sizes. For example, your array (we’ll call it “data”) might contain information about distance in miles but you want to convert the information to kilometers. You can perform this operation with: ``` >>> data = np.array([1.0, 2.0]) >>> data * 1.6 array([1.6, 3.2]) ``` ![../_images/np_multiply_broadcasting.png] NumPy understands that the multiplication should happen with each cell. That concept is called **broadcasting**. Broadcasting is a mechanism that allows NumPy to perform operations on arrays of different shapes. The dimensions of your array must be compatible, for example, when the dimensions of both arrays are equal or when one of them is 1. If the dimensions are not compatible, you will get a `ValueError`. [Learn more about broadcasting here](basics.broadcasting#basics-broadcasting). More useful array operations ---------------------------- *This section covers maximum, minimum, sum, mean, product, standard deviation, and more* NumPy also performs aggregation functions. In addition to `min`, `max`, and `sum`, you can easily run `mean` to get the average, `prod` to get the result of multiplying the elements together, `std` to get the standard deviation, and more. ``` >>> data.max() 2.0 >>> data.min() 1.0 >>> data.sum() 3.0 ``` ![../_images/np_aggregation.png] Let’s start with this array, called “a” ``` >>> a = np.array([[0.45053314, 0.17296777, 0.34376245, 0.5510652], ... [0.54627315, 0.05093587, 0.40067661, 0.55645993], ... [0.12697628, 0.82485143, 0.26590556, 0.56917101]]) ``` It’s very common to want to aggregate along a row or column. By default, every NumPy aggregation function will return the aggregate of the entire array. 
To find the sum or the minimum of the elements in your array, run: ``` >>> a.sum() 4.8595784 ``` Or: ``` >>> a.min() 0.05093587 ``` You can specify on which axis you want the aggregation function to be computed. For example, you can find the minimum value within each column by specifying `axis=0`. ``` >>> a.min(axis=0) array([0.12697628, 0.05093587, 0.26590556, 0.5510652 ]) ``` The four values listed above correspond to the number of columns in your array. With a four-column array, you will get four values as your result. Read more about [array methods here](../reference/arrays.ndarray#array-ndarray-methods). Creating matrices ----------------- You can pass Python lists of lists to create a 2-D array (or “matrix”) to represent them in NumPy. ``` >>> data = np.array([[1, 2], [3, 4], [5, 6]]) >>> data array([[1, 2], [3, 4], [5, 6]]) ``` ![../_images/np_create_matrix.png] Indexing and slicing operations are useful when you’re manipulating matrices: ``` >>> data[0, 1] 2 >>> data[1:3] array([[3, 4], [5, 6]]) >>> data[0:2, 0] array([1, 3]) ``` ![../_images/np_matrix_indexing.png] You can aggregate matrices the same way you aggregated vectors: ``` >>> data.max() 6 >>> data.min() 1 >>> data.sum() 21 ``` ![../_images/np_matrix_aggregation.png] You can aggregate all the values in a matrix and you can aggregate them across columns or rows using the `axis` parameter. To illustrate this point, let’s look at a slightly modified dataset: ``` >>> data = np.array([[1, 2], [5, 3], [4, 6]]) >>> data array([[1, 2], [5, 3], [4, 6]]) >>> data.max(axis=0) array([5, 6]) >>> data.max(axis=1) array([2, 5, 6]) ``` ![../_images/np_matrix_aggregation_row.png] Once you’ve created your matrices, you can add and multiply them using arithmetic operators if you have two matrices that are the same size. 
``` >>> data = np.array([[1, 2], [3, 4]]) >>> ones = np.array([[1, 1], [1, 1]]) >>> data + ones array([[2, 3], [4, 5]]) ``` ![../_images/np_matrix_arithmetic.png] You can do these arithmetic operations on matrices of different sizes, but only if one matrix has only one column or one row. In this case, NumPy will use its broadcast rules for the operation. ``` >>> data = np.array([[1, 2], [3, 4], [5, 6]]) >>> ones_row = np.array([[1, 1]]) >>> data + ones_row array([[2, 3], [4, 5], [6, 7]]) ``` ![../_images/np_matrix_broadcasting.png] Be aware that when NumPy prints N-dimensional arrays, the last axis is looped over the fastest while the first axis is the slowest. For instance: ``` >>> np.ones((4, 3, 2)) array([[[1., 1.], [1., 1.], [1., 1.]], [[1., 1.], [1., 1.], [1., 1.]], [[1., 1.], [1., 1.], [1., 1.]], [[1., 1.], [1., 1.], [1., 1.]]]) ``` There are often instances where we want NumPy to initialize the values of an array. NumPy offers functions like `ones()` and `zeros()`, and the `random.Generator` class for random number generation for that. All you need to do is pass in the number of elements you want it to generate: ``` >>> np.ones(3) array([1., 1., 1.]) >>> np.zeros(3) array([0., 0., 0.]) >>> rng = np.random.default_rng() # the simplest way to generate random numbers >>> rng.random(3) array([0.63696169, 0.26978671, 0.04097352]) ``` ![../_images/np_ones_zeros_random.png] You can also use `ones()`, `zeros()`, and `random()` to create a 2D array if you give them a tuple describing the dimensions of the matrix: ``` >>> np.ones((3, 2)) array([[1., 1.], [1., 1.], [1., 1.]]) >>> np.zeros((3, 2)) array([[0., 0.], [0., 0.], [0., 0.]]) >>> rng.random((3, 2)) array([[0.01652764, 0.81327024], [0.91275558, 0.60663578], [0.72949656, 0.54362499]]) # may vary ``` ![../_images/np_ones_zeros_matrix.png] Generating random numbers ------------------------- The use of random number generation is an important part of the configuration and evaluation of many numerical and machine learning algorithms.
Whether you need to randomly initialize weights in an artificial neural network, split data into random sets, or randomly shuffle your dataset, being able to generate random numbers (actually, repeatable pseudo-random numbers) is essential. With `Generator.integers`, you can generate random integers from low (remember that this is inclusive with NumPy) to high (exclusive). You can set `endpoint=True` to make the high number inclusive. You can generate a 2 x 4 array of random integers between 0 and 4 with: ``` >>> rng.integers(5, size=(2, 4)) array([[2, 1, 1, 0], [0, 0, 0, 4]]) # may vary ``` [Read more about random number generation here](../reference/random/index#numpyrandom). How to get unique items and counts ---------------------------------- *This section covers* `np.unique()` You can find the unique elements in an array easily with `np.unique`. For example, if you start with this array: ``` >>> a = np.array([11, 11, 12, 13, 14, 15, 16, 17, 12, 13, 11, 14, 18, 19, 20]) ``` you can use `np.unique` to print the unique values in your array: ``` >>> unique_values = np.unique(a) >>> print(unique_values) [11 12 13 14 15 16 17 18 19 20] ``` To get the indices of unique values in a NumPy array (an array of first index positions of unique values in the array), just pass the `return_index` argument in `np.unique()` as well as your array. ``` >>> unique_values, indices_list = np.unique(a, return_index=True) >>> print(indices_list) [ 0 2 3 4 5 6 7 12 13 14] ``` You can pass the `return_counts` argument in `np.unique()` along with your array to get the frequency count of unique values in a NumPy array. ``` >>> unique_values, occurrence_count = np.unique(a, return_counts=True) >>> print(occurrence_count) [3 2 2 2 1 1 1 1 1 1] ``` This also works with 2D arrays! 
If you start with this array: ``` >>> a_2d = np.array([[1, 2, 3, 4], [5, 6, 7, 8], [9, 10, 11, 12], [1, 2, 3, 4]]) ``` You can find unique values with: ``` >>> unique_values = np.unique(a_2d) >>> print(unique_values) [ 1 2 3 4 5 6 7 8 9 10 11 12] ``` If the axis argument isn’t passed, your 2D array will be flattened. If you want to get the unique rows or columns, make sure to pass the `axis` argument. To find the unique rows, specify `axis=0` and for columns, specify `axis=1`. ``` >>> unique_rows = np.unique(a_2d, axis=0) >>> print(unique_rows) [[ 1 2 3 4] [ 5 6 7 8] [ 9 10 11 12]] ``` To get the unique rows, index position, and occurrence count, you can use: ``` >>> unique_rows, indices, occurrence_count = np.unique( ... a_2d, axis=0, return_counts=True, return_index=True) >>> print(unique_rows) [[ 1 2 3 4] [ 5 6 7 8] [ 9 10 11 12]] >>> print(indices) [0 1 2] >>> print(occurrence_count) [2 1 1] ``` To learn more about finding the unique elements in an array, see [`unique`](../reference/generated/numpy.unique#numpy.unique "numpy.unique"). Transposing and reshaping a matrix ---------------------------------- *This section covers* `arr.reshape()`, `arr.transpose()`, `arr.T` It’s common to need to transpose your matrices. NumPy arrays have the property `T` that allows you to transpose a matrix. ![../_images/np_transposing_reshaping.png] You may also need to switch the dimensions of a matrix. This can happen when, for example, you have a model that expects a certain input shape that is different from your dataset. This is where the `reshape` method can be useful. You simply need to pass in the new dimensions that you want for the matrix. ``` >>> data.reshape(2, 3) array([[1, 2, 3], [4, 5, 6]]) >>> data.reshape(3, 2) array([[1, 2], [3, 4], [5, 6]]) ``` ![../_images/np_reshape.png] You can also use `arr.transpose()` to reverse or change the axes of an array according to the values you specify.
If you start with this array: ``` >>> arr = np.arange(6).reshape((2, 3)) >>> arr array([[0, 1, 2], [3, 4, 5]]) ``` You can transpose your array with `arr.transpose()`. ``` >>> arr.transpose() array([[0, 3], [1, 4], [2, 5]]) ``` You can also use `arr.T`: ``` >>> arr.T array([[0, 3], [1, 4], [2, 5]]) ``` To learn more about transposing and reshaping arrays, see [`transpose`](../reference/generated/numpy.transpose#numpy.transpose "numpy.transpose") and [`reshape`](../reference/generated/numpy.reshape#numpy.reshape "numpy.reshape"). How to reverse an array ----------------------- *This section covers* `np.flip()` NumPy’s `np.flip()` function allows you to flip, or reverse, the contents of an array along an axis. When using `np.flip()`, specify the array you would like to reverse and the axis. If you don’t specify the axis, NumPy will reverse the contents along all of the axes of your input array. **Reversing a 1D array** If you begin with a 1D array like this one: ``` >>> arr = np.array([1, 2, 3, 4, 5, 6, 7, 8]) ``` You can reverse it with: ``` >>> reversed_arr = np.flip(arr) ``` If you want to print your reversed array, you can run: ``` >>> print('Reversed Array: ', reversed_arr) Reversed Array: [8 7 6 5 4 3 2 1] ``` **Reversing a 2D array** A 2D array works much the same way. If you start with this array: ``` >>> arr_2d = np.array([[1, 2, 3, 4], [5, 6, 7, 8], [9, 10, 11, 12]]) ``` You can reverse the content in all of the rows and all of the columns with: ``` >>> reversed_arr = np.flip(arr_2d) >>> print(reversed_arr) [[12 11 10 9] [ 8 7 6 5] [ 4 3 2 1]] ``` You can easily reverse only the *rows* with: ``` >>> reversed_arr_rows = np.flip(arr_2d, axis=0) >>> print(reversed_arr_rows) [[ 9 10 11 12] [ 5 6 7 8] [ 1 2 3 4]] ``` Or reverse only the *columns* with: ``` >>> reversed_arr_columns = np.flip(arr_2d, axis=1) >>> print(reversed_arr_columns) [[ 4 3 2 1] [ 8 7 6 5] [12 11 10 9]] ``` You can also reverse the contents of only one column or row. 
For example, you can reverse the contents of the row at index position 1 (the second row): ``` >>> arr_2d[1] = np.flip(arr_2d[1]) >>> print(arr_2d) [[ 1 2 3 4] [ 8 7 6 5] [ 9 10 11 12]] ``` You can also reverse the column at index position 1 (the second column): ``` >>> arr_2d[:,1] = np.flip(arr_2d[:,1]) >>> print(arr_2d) [[ 1 10 3 4] [ 8 7 6 5] [ 9 2 11 12]] ``` Read more about reversing arrays at [`flip`](../reference/generated/numpy.flip#numpy.flip "numpy.flip"). Reshaping and flattening multidimensional arrays ------------------------------------------------ *This section covers* `.flatten()`, `ravel()` There are two popular ways to flatten an array: `.flatten()` and `.ravel()`. The primary difference between the two is that the new array created using `ravel()` is actually a reference to the parent array (i.e., a “view”). This means that any changes to the new array will affect the parent array as well. Since `ravel` does not create a copy, it’s memory efficient. If you start with this array: ``` >>> x = np.array([[1 , 2, 3, 4], [5, 6, 7, 8], [9, 10, 11, 12]]) ``` You can use `flatten` to flatten your array into a 1D array. ``` >>> x.flatten() array([ 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12]) ``` When you use `flatten`, changes to your new array won’t change the parent array. For example: ``` >>> a1 = x.flatten() >>> a1[0] = 99 >>> print(x) # Original array [[ 1 2 3 4] [ 5 6 7 8] [ 9 10 11 12]] >>> print(a1) # New array [99 2 3 4 5 6 7 8 9 10 11 12] ``` But when you use `ravel`, the changes you make to the new array will affect the parent array. For example: ``` >>> a2 = x.ravel() >>> a2[0] = 98 >>> print(x) # Original array [[98 2 3 4] [ 5 6 7 8] [ 9 10 11 12]] >>> print(a2) # New array [98 2 3 4 5 6 7 8 9 10 11 12] ``` Read more about `flatten` at [`ndarray.flatten`](../reference/generated/numpy.ndarray.flatten#numpy.ndarray.flatten "numpy.ndarray.flatten") and `ravel` at [`ravel`](../reference/generated/numpy.ravel#numpy.ravel "numpy.ravel"). 
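The copy-versus-view distinction described above can also be checked directly. As a small sketch, `np.shares_memory` (a standard NumPy function not covered in this section) reports whether two arrays overlap in memory:

```python
import numpy as np

x = np.array([[1, 2, 3, 4], [5, 6, 7, 8], [9, 10, 11, 12]])

flat_copy = x.flatten()  # an independent deep copy
flat_view = x.ravel()    # a view into x's data (when the memory layout allows it)

print(np.shares_memory(x, flat_copy))  # False
print(np.shares_memory(x, flat_view))  # True
```

This is a quick way to predict whether writing to the flattened array will also change the parent.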
How to access the docstring for more information ------------------------------------------------ *This section covers* `help()`, `?`, `??` When it comes to the data science ecosystem, Python and NumPy are built with the user in mind. One of the best examples of this is the built-in access to documentation. Every object contains the reference to a string, which is known as the **docstring**. In most cases, this docstring contains a quick and concise summary of the object and how to use it. Python has a built-in `help()` function that can help you access this information. This means that nearly any time you need more information, you can use `help()` to quickly find the information that you need. For example: ``` >>> help(max) Help on built-in function max in module builtins: max(...) max(iterable, *[, default=obj, key=func]) -> value max(arg1, arg2, *args, *[, key=func]) -> value With a single iterable argument, return its biggest item. The default keyword-only argument specifies an object to return if the provided iterable is empty. With two or more arguments, return the largest argument. ``` Because access to additional information is so useful, IPython uses the `?` character as a shorthand for accessing this documentation along with other relevant information. IPython is a command shell for interactive computing in multiple languages. [You can find more information about IPython here](https://ipython.org/). For example: ``` In [0]: max? max(iterable, *[, default=obj, key=func]) -> value max(arg1, arg2, *args, *[, key=func]) -> value With a single iterable argument, return its biggest item. The default keyword-only argument specifies an object to return if the provided iterable is empty. With two or more arguments, return the largest argument. Type: builtin_function_or_method ``` You can even use this notation for object methods and objects themselves. 
Let’s say you create this array: ``` >>> a = np.array([1, 2, 3, 4, 5, 6]) ``` Then you can obtain a lot of useful information (first details about `a` itself, followed by the docstring of `ndarray` of which `a` is an instance): ``` In [1]: a? Type: ndarray String form: [1 2 3 4 5 6] Length: 6 File: ~/anaconda3/lib/python3.9/site-packages/numpy/__init__.py Docstring: <no docstring> Class docstring: ndarray(shape, dtype=float, buffer=None, offset=0, strides=None, order=None) An array object represents a multidimensional, homogeneous array of fixed-size items. An associated data-type object describes the format of each element in the array (its byte-order, how many bytes it occupies in memory, whether it is an integer, a floating point number, or something else, etc.) Arrays should be constructed using `array`, `zeros` or `empty` (refer to the See Also section below). The parameters given here refer to a low-level method (`ndarray(...)`) for instantiating an array. For more information, refer to the `numpy` module and examine the methods and attributes of an array. Parameters ---------- (for the __new__ method; see Notes below) shape : tuple of ints Shape of created array. ... ``` This also works for functions and other objects that **you** create. Just remember to include a docstring with your function using a string literal (`""" """` or `''' '''` around your documentation). For example, if you create this function: ``` >>> def double(a): ... '''Return a * 2''' ... return a * 2 ``` You can obtain information about the function: ``` In [2]: double? Signature: double(a) Docstring: Return a * 2 File: ~/Desktop/<ipython-input-23-b5adf20be596> Type: function ``` You can reach another level of information by reading the source code of the object you’re interested in. Using a double question mark (`??`) allows you to access the source code. For example: ``` In [3]: double?? 
Signature: double(a) Source: def double(a): '''Return a * 2''' return a * 2 File: ~/Desktop/<ipython-input-23-b5adf20be596> Type: function ``` If the object in question is compiled in a language other than Python, using `??` will return the same information as `?`. You’ll find this with a lot of built-in objects and types, for example: ``` In [4]: len? Signature: len(obj, /) Docstring: Return the number of items in a container. Type: builtin_function_or_method ``` and: ``` In [5]: len?? Signature: len(obj, /) Docstring: Return the number of items in a container. Type: builtin_function_or_method ``` have the same output because they were compiled in a programming language other than Python. Working with mathematical formulas ---------------------------------- The ease of implementing mathematical formulas that work on arrays is one of the things that make NumPy so widely used in the scientific Python community. For example, this is the mean square error formula (a central formula used in supervised machine learning models that deal with regression): ![../_images/np_MSE_formula.png] Implementing this formula is simple and straightforward in NumPy: ![../_images/np_MSE_implementation.png] What makes this work so well is that `predictions` and `labels` can contain one or a thousand values. They only need to be the same size. You can visualize it this way: ![../_images/np_mse_viz1.png] In this example, both the predictions and labels vectors contain three values, meaning `n` has a value of three. After we carry out subtractions, the values in the vector are squared. Then NumPy sums the values, and your result is the error value for that prediction and a score for the quality of the model.
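As a sketch of the implementation shown in the image above, with illustrative three-element `predictions` and `labels` vectors:

```python
import numpy as np

predictions = np.array([1, 1, 1])
labels = np.array([1, 2, 3])

# Mean square error: average of the squared differences
n = predictions.size
error = (1 / n) * np.sum(np.square(predictions - labels))
print(error)  # the mean of the squared differences 0, 1 and 4, i.e. 5/3
```

Because the subtraction, squaring, and sum are all array operations, the same line works unchanged for vectors of any (matching) length.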
![../_images/np_mse_viz2.png] How to save and load NumPy objects ---------------------------------- *This section covers* `np.save`, `np.savez`, `np.savetxt`, `np.load`, `np.loadtxt` You will, at some point, want to save your arrays to disk and load them back without having to re-run the code. Fortunately, there are several ways to save and load objects with NumPy. The ndarray objects can be saved to and loaded from the disk files with `loadtxt` and `savetxt` functions that handle normal text files, `load` and `save` functions that handle NumPy binary files with a **.npy** file extension, and a `savez` function that handles NumPy files with a **.npz** file extension. The **.npy** and **.npz** files store data, shape, dtype, and other information required to reconstruct the ndarray in a way that allows the array to be correctly retrieved, even when the file is on another machine with different architecture. If you want to store a single ndarray object, store it as a .npy file using `np.save`. If you want to store more than one ndarray object in a single file, save it as a .npz file using `np.savez`. You can also save several arrays into a single file in compressed npz format with [`savez_compressed`](../reference/generated/numpy.savez_compressed#numpy.savez_compressed "numpy.savez_compressed"). It’s easy to save and load an array with `np.save()`. Just make sure to specify the array you want to save and a file name. For example, if you create this array: ``` >>> a = np.array([1, 2, 3, 4, 5, 6]) ``` You can save it as “filename.npy” with: ``` >>> np.save('filename', a) ``` You can use `np.load()` to reconstruct your array. ``` >>> b = np.load('filename.npy') ``` If you want to check your array, you can run: ``` >>> print(b) [1 2 3 4 5 6] ``` You can save a NumPy array as a plain text file like a **.csv** or **.txt** file with `np.savetxt`.
For example, if you create this array: ``` >>> csv_arr = np.array([1, 2, 3, 4, 5, 6, 7, 8]) ``` You can easily save it as a .csv file with the name “new_file.csv” like this: ``` >>> np.savetxt('new_file.csv', csv_arr) ``` You can quickly and easily load your saved text file using `loadtxt()`: ``` >>> np.loadtxt('new_file.csv') array([1., 2., 3., 4., 5., 6., 7., 8.]) ``` The `savetxt()` and `loadtxt()` functions accept additional optional parameters such as header, footer, and delimiter. While text files can be easier for sharing, .npy and .npz files are smaller and faster to read. If you need more sophisticated handling of your text file (for example, if you need to work with lines that contain missing values), you will want to use the [`genfromtxt`](../reference/generated/numpy.genfromtxt#numpy.genfromtxt "numpy.genfromtxt") function. With [`savetxt`](../reference/generated/numpy.savetxt#numpy.savetxt "numpy.savetxt"), you can specify headers, footers, comments, and more. Learn more about [input and output routines here](../reference/routines.io#routines-io). Importing and exporting a CSV ----------------------------- It’s simple to read in a CSV that contains existing information. The best and easiest way to do this is to use [Pandas](https://pandas.pydata.org). ``` >>> import pandas as pd >>> # If all of your columns are the same type: >>> x = pd.read_csv('music.csv', header=0).values >>> print(x) [['Billie Holiday' 'Jazz' 1300000 27000000] ['<NAME>' 'Rock' 2700000 70000000] ['<NAME>' 'Jazz' 1500000 48000000] ['SIA' 'Pop' 2000000 74000000]] >>> # You can also simply select the columns you need: >>> x = pd.read_csv('music.csv', usecols=['Artist', 'Plays']).values >>> print(x) [['Billie Holiday' 27000000] ['<NAME>' 70000000] ['<NAME>' 48000000] ['SIA' 74000000]] ``` ![../_images/np_pandas.png](https://numpy.org/doc/1.23/_images/np_pandas.png) It’s simple to use Pandas in order to export your array as well. 
If you are new to NumPy, you may want to create a Pandas dataframe from the values in your array and then write the data frame to a CSV file with Pandas. If you created this array “a” ``` >>> a = np.array([[-2.58289208, 0.43014843, -1.24082018, 1.59572603], ... [ 0.99027828, 1.17150989, 0.94125714, -0.14692469], ... [ 0.76989341, 0.81299683, -0.95068423, 0.11769564], ... [ 0.20484034, 0.34784527, 1.96979195, 0.51992837]]) ``` You could create a Pandas dataframe ``` >>> df = pd.DataFrame(a) >>> print(df) 0 1 2 3 0 -2.582892 0.430148 -1.240820 1.595726 1 0.990278 1.171510 0.941257 -0.146925 2 0.769893 0.812997 -0.950684 0.117696 3 0.204840 0.347845 1.969792 0.519928 ``` You can easily save your dataframe with: ``` >>> df.to_csv('pd.csv') ``` And read your CSV with: ``` >>> data = pd.read_csv('pd.csv') ``` ![../_images/np_readcsv.png] You can also save your array with the NumPy `savetxt` method. ``` >>> np.savetxt('np.csv', a, fmt='%.2f', delimiter=',', header='1, 2, 3, 4') ``` If you’re using the command line, you can read your saved CSV any time with a command such as: ``` $ cat np.csv # 1, 2, 3, 4 -2.58,0.43,-1.24,1.60 0.99,1.17,0.94,-0.15 0.77,0.81,-0.95,0.12 0.20,0.35,1.97,0.52 ``` Or you can open the file any time with a text editor! If you’re interested in learning more about Pandas, take a look at the [official Pandas documentation](https://pandas.pydata.org/index.html). Learn how to install Pandas with the [official Pandas installation information](https://pandas.pydata.org/pandas-docs/stable/install.html). Plotting arrays with Matplotlib ------------------------------- If you need to generate a plot for your values, it’s very simple with [Matplotlib](https://matplotlib.org/). 
For example, you may have an array like this one: ``` >>> a = np.array([2, 1, 5, 7, 4, 6, 8, 14, 10, 9, 18, 20, 22]) ``` If you already have Matplotlib installed, you can import it with: ``` >>> import matplotlib.pyplot as plt # If you're using Jupyter Notebook, you may also want to run the following # line of code to display your code in the notebook: %matplotlib inline ``` All you need to do to plot your values is run: ``` >>> plt.plot(a) # If you are running from a command line, you may need to do this: # >>> plt.show() ``` ![../_images/matplotlib1.png] For example, you can plot a 1D array like this: ``` >>> x = np.linspace(0, 5, 20) >>> y = np.linspace(0, 10, 20) >>> plt.plot(x, y, 'purple') # line >>> plt.plot(x, y, 'o') # dots ``` ![../_images/matplotlib2.png] With Matplotlib, you have access to an enormous number of visualization options. ``` >>> fig = plt.figure() >>> ax = fig.add_subplot(projection='3d') >>> X = np.arange(-5, 5, 0.15) >>> Y = np.arange(-5, 5, 0.15) >>> X, Y = np.meshgrid(X, Y) >>> R = np.sqrt(X**2 + Y**2) >>> Z = np.sin(R) >>> ax.plot_surface(X, Y, Z, rstride=1, cstride=1, cmap='viridis') ``` ![../_images/matplotlib3.png] *Image credits: <NAME> http://jalammar.github.io/* © 2005–2022 NumPy Developers Licensed under the 3-clause BSD License. <https://numpy.org/doc/1.23/user/absolute_beginners.html> What is NumPy? ============== NumPy is the fundamental package for scientific computing in Python. It is a Python library that provides a multidimensional array object, various derived objects (such as masked arrays and matrices), and an assortment of routines for fast operations on arrays, including mathematical, logical, shape manipulation, sorting, selecting, I/O, discrete Fourier transforms, basic linear algebra, basic statistical operations, random simulation and much more. At the core of the NumPy package is the `ndarray` object.
This encapsulates *n*-dimensional arrays of homogeneous data types, with many operations being performed in compiled code for performance. There are several important differences between NumPy arrays and the standard Python sequences: * NumPy arrays have a fixed size at creation, unlike Python lists (which can grow dynamically). Changing the size of an `ndarray` will create a new array and delete the original. * The elements in a NumPy array are all required to be of the same data type, and thus will be the same size in memory. The exception: one can have arrays of (Python, including NumPy) objects, thereby allowing for arrays of different sized elements. * NumPy arrays facilitate advanced mathematical and other types of operations on large numbers of data. Typically, such operations are executed more efficiently and with less code than is possible using Python’s built-in sequences. * A growing plethora of scientific and mathematical Python-based packages are using NumPy arrays; though these typically support Python-sequence input, they convert such input to NumPy arrays prior to processing, and they often output NumPy arrays. In other words, in order to efficiently use much (perhaps even most) of today’s scientific/mathematical Python-based software, just knowing how to use Python’s built-in sequence types is insufficient - one also needs to know how to use NumPy arrays. The points about sequence size and speed are particularly important in scientific computing. As a simple example, consider the case of multiplying each element in a 1-D sequence with the corresponding element in another sequence of the same length. If the data are stored in two Python lists, `a` and `b`, we could iterate over each element: ``` c = [] for i in range(len(a)): c.append(a[i]*b[i]) ``` This produces the correct answer, but if `a` and `b` each contain millions of numbers, we will pay the price for the inefficiencies of looping in Python. 
We could accomplish the same task much more quickly in C by writing (for clarity we neglect variable declarations and initializations, memory allocation, etc.) ``` for (i = 0; i < rows; i++) { c[i] = a[i]*b[i]; } ``` This saves all the overhead involved in interpreting the Python code and manipulating Python objects, but at the expense of the benefits gained from coding in Python. Furthermore, the coding work required increases with the dimensionality of our data. In the case of a 2-D array, for example, the C code (abridged as before) expands to ``` for (i = 0; i < rows; i++) { for (j = 0; j < columns; j++) { c[i][j] = a[i][j]*b[i][j]; } } ``` NumPy gives us the best of both worlds: element-by-element operations are the “default mode” when an `ndarray` is involved, but the element-by-element operation is speedily executed by pre-compiled C code. In NumPy ``` c = a * b ``` does what the earlier examples do, at near-C speeds, but with the code simplicity we expect from something based on Python. Indeed, the NumPy idiom is even simpler! This last example illustrates two of NumPy’s features which are the basis of much of its power: vectorization and broadcasting. Why is NumPy Fast? ------------------ Vectorization describes the absence of any explicit looping, indexing, etc., in the code - these things are taking place, of course, just “behind the scenes” in optimized, pre-compiled C code. Vectorized code has many advantages, among which are: * vectorized code is more concise and easier to read * fewer lines of code generally means fewer bugs * the code more closely resembles standard mathematical notation (making it easier, typically, to correctly code mathematical constructs) * vectorization results in more “Pythonic” code. Without vectorization, our code would be littered with inefficient and difficult to read `for` loops. 
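To make the comparison concrete, here is a minimal sketch (an addition, not from the original text) that checks `c = a * b` against the explicit Python loop and times both. Absolute timings depend on your machine, but the two results are identical element for element:

```python
import time

import numpy as np

n = 1_000_000
a = list(range(n))
b = list(range(n))

# Explicit Python loop, as in the earlier example
t0 = time.perf_counter()
c_loop = [a[i] * b[i] for i in range(n)]
t_loop = time.perf_counter() - t0

# Vectorized NumPy equivalent of the loop: c = a * b
aa = np.array(a)
bb = np.array(b)
t0 = time.perf_counter()
c_vec = aa * bb
t_vec = time.perf_counter() - t0

# Same answer either way; the ndarray version runs in pre-compiled C code
print(np.array_equal(np.array(c_loop), c_vec))  # True
print(t_loop > t_vec)  # usually True; exact ratio is machine-dependent
```

The vectorized line is also the version that most closely resembles the mathematical statement of the operation, which is the readability point made above.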
Broadcasting is the term used to describe the implicit element-by-element behavior of operations; generally speaking, in NumPy all operations, not just arithmetic operations, but logical, bit-wise, functional, etc., behave in this implicit element-by-element fashion, i.e., they broadcast. Moreover, in the example above, `a` and `b` could be multidimensional arrays of the same shape, or a scalar and an array, or even two arrays with different shapes, provided that the smaller array is “expandable” to the shape of the larger in such a way that the resulting broadcast is unambiguous. For detailed “rules” of broadcasting see [Broadcasting](basics.broadcasting#basics-broadcasting). Who Else Uses NumPy? -------------------- NumPy fully supports an object-oriented approach, starting, once again, with `ndarray`. For example, `ndarray` is a class, possessing numerous methods and attributes. Many of its methods are mirrored by functions in the outer-most NumPy namespace, allowing the programmer to code in whichever paradigm they prefer. This flexibility has allowed the NumPy array dialect and NumPy `ndarray` class to become the *de-facto* language of multi-dimensional data interchange used in Python. <https://numpy.org/doc/1.23/user/whatisnumpy.html> NumPy quickstart ================ Prerequisites ------------- You’ll need to know a bit of Python. For a refresher, see the [Python tutorial](https://docs.python.org/tutorial/). To work the examples, you’ll need `matplotlib` installed in addition to NumPy. **Learner profile** This is a quick overview of arrays in NumPy. It demonstrates how n-dimensional (\(n \geq 2\)) arrays are represented and can be manipulated. In particular, if you don’t know how to apply common functions to n-dimensional arrays (without using for-loops), or if you want to understand axis and shape properties for n-dimensional arrays, this article might be of help.
**Learning Objectives** After reading, you should be able to: * Understand the difference between one-, two- and n-dimensional arrays in NumPy; * Understand how to apply some linear algebra operations to n-dimensional arrays without using for-loops; * Understand axis and shape properties for n-dimensional arrays. The Basics ---------- NumPy’s main object is the homogeneous multidimensional array. It is a table of elements (usually numbers), all of the same type, indexed by a tuple of non-negative integers. In NumPy dimensions are called *axes*. For example, the array for the coordinates of a point in 3D space, `[1, 2, 1]`, has one axis. That axis has 3 elements in it, so we say it has a length of 3. In the example pictured below, the array has 2 axes. The first axis has a length of 2, the second axis has a length of 3. ``` [[1., 0., 0.], [0., 1., 2.]] ``` NumPy’s array class is called `ndarray`. It is also known by the alias `array`. Note that `numpy.array` is not the same as the Standard Python Library class `array.array`, which only handles one-dimensional arrays and offers less functionality. The more important attributes of an `ndarray` object are: ndarray.ndim the number of axes (dimensions) of the array. ndarray.shape the dimensions of the array. This is a tuple of integers indicating the size of the array in each dimension. For a matrix with *n* rows and *m* columns, `shape` will be `(n,m)`. The length of the `shape` tuple is therefore the number of axes, `ndim`. ndarray.size the total number of elements of the array. This is equal to the product of the elements of `shape`. ndarray.dtype an object describing the type of the elements in the array. One can create or specify dtype’s using standard Python types. Additionally NumPy provides types of its own. numpy.int32, numpy.int16, and numpy.float64 are some examples. ndarray.itemsize the size in bytes of each element of the array. 
For example, an array of elements of type `float64` has `itemsize` 8 (=64/8), while one of type `int16` has `itemsize` 2 (=16/8). It is equivalent to `ndarray.dtype.itemsize`. ndarray.data the buffer containing the actual elements of the array. Normally, we won’t need to use this attribute because we will access the elements in an array using indexing facilities. ### An example ``` >>> import numpy as np >>> a = np.arange(15).reshape(3, 5) >>> a array([[ 0, 1, 2, 3, 4], [ 5, 6, 7, 8, 9], [10, 11, 12, 13, 14]]) >>> a.shape (3, 5) >>> a.ndim 2 >>> a.dtype.name 'int64' >>> a.itemsize 8 >>> a.size 15 >>> type(a) <class 'numpy.ndarray'> >>> b = np.array([6, 7, 8]) >>> b array([6, 7, 8]) >>> type(b) <class 'numpy.ndarray'> ``` ### Array Creation There are several ways to create arrays. For example, you can create an array from a regular Python list or tuple using the `array` function. The type of the resulting array is deduced from the type of the elements in the sequences. ``` >>> import numpy as np >>> a = np.array([2, 3, 4]) >>> a array([2, 3, 4]) >>> a.dtype dtype('int64') >>> b = np.array([1.2, 3.5, 5.1]) >>> b.dtype dtype('float64') ``` A frequent error consists in calling `array` with multiple arguments, rather than providing a single sequence as an argument. ``` >>> a = np.array(1, 2, 3, 4) # WRONG Traceback (most recent call last): ... TypeError: array() takes from 1 to 2 positional arguments but 4 were given >>> a = np.array([1, 2, 3, 4]) # RIGHT ``` `array` transforms sequences of sequences into two-dimensional arrays, sequences of sequences of sequences into three-dimensional arrays, and so on. ``` >>> b = np.array([(1.5, 2, 3), (4, 5, 6)]) >>> b array([[1.5, 2. , 3. ], [4. , 5. , 6. ]]) ``` The type of the array can also be explicitly specified at creation time: ``` >>> c = np.array([[1, 2], [3, 4]], dtype=complex) >>> c array([[1.+0.j, 2.+0.j], [3.+0.j, 4.+0.j]]) ``` Often, the elements of an array are originally unknown, but its size is known.
Hence, NumPy offers several functions to create arrays with initial placeholder content. These minimize the necessity of growing arrays, an expensive operation. The function `zeros` creates an array full of zeros, the function `ones` creates an array full of ones, and the function `empty` creates an array whose initial content is random and depends on the state of the memory. By default, the dtype of the created array is `float64`, but it can be specified via the keyword argument `dtype`. ``` >>> np.zeros((3, 4)) array([[0., 0., 0., 0.], [0., 0., 0., 0.], [0., 0., 0., 0.]]) >>> np.ones((2, 3, 4), dtype=np.int16) array([[[1, 1, 1, 1], [1, 1, 1, 1], [1, 1, 1, 1]], [[1, 1, 1, 1], [1, 1, 1, 1], [1, 1, 1, 1]]], dtype=int16) >>> np.empty((2, 3)) array([[3.73603959e-262, 6.02658058e-154, 6.55490914e-260], # may vary [5.30498948e-313, 3.14673309e-307, 1.00000000e+000]]) ``` To create sequences of numbers, NumPy provides the `arange` function which is analogous to the Python built-in `range`, but returns an array. ``` >>> np.arange(10, 30, 5) array([10, 15, 20, 25]) >>> np.arange(0, 2, 0.3) # it accepts float arguments array([0. , 0.3, 0.6, 0.9, 1.2, 1.5, 1.8]) ``` When `arange` is used with floating point arguments, it is generally not possible to predict the number of elements obtained, due to the finite floating point precision. For this reason, it is usually better to use the function `linspace` that receives as an argument the number of elements that we want, instead of the step: ``` >>> from numpy import pi >>> np.linspace(0, 2, 9) # 9 numbers from 0 to 2 array([0. , 0.25, 0.5 , 0.75, 1. , 1.25, 1.5 , 1.75, 2.
]) >>> x = np.linspace(0, 2 * pi, 100) # useful to evaluate function at lots of points >>> f = np.sin(x) ``` See also [`array`](../reference/generated/numpy.array#numpy.array "numpy.array"), [`zeros`](../reference/generated/numpy.zeros#numpy.zeros "numpy.zeros"), [`zeros_like`](../reference/generated/numpy.zeros_like#numpy.zeros_like "numpy.zeros_like"), [`ones`](../reference/generated/numpy.ones#numpy.ones "numpy.ones"), [`ones_like`](../reference/generated/numpy.ones_like#numpy.ones_like "numpy.ones_like"), [`empty`](../reference/generated/numpy.empty#numpy.empty "numpy.empty"), [`empty_like`](../reference/generated/numpy.empty_like#numpy.empty_like "numpy.empty_like"), [`arange`](../reference/generated/numpy.arange#numpy.arange "numpy.arange"), [`linspace`](../reference/generated/numpy.linspace#numpy.linspace "numpy.linspace"), `numpy.random.Generator.rand`, `numpy.random.Generator.randn`, [`fromfunction`](../reference/generated/numpy.fromfunction#numpy.fromfunction "numpy.fromfunction"), [`fromfile`](../reference/generated/numpy.fromfile#numpy.fromfile "numpy.fromfile") ### Printing Arrays When you print an array, NumPy displays it in a similar way to nested lists, but with the following layout: * the last axis is printed from left to right, * the second-to-last is printed from top to bottom, * the rest are also printed from top to bottom, with each slice separated from the next by an empty line. One-dimensional arrays are then printed as rows, bidimensionals as matrices and tridimensionals as lists of matrices. ``` >>> a = np.arange(6) # 1d array >>> print(a) [0 1 2 3 4 5] >>> >>> b = np.arange(12).reshape(4, 3) # 2d array >>> print(b) [[ 0 1 2] [ 3 4 5] [ 6 7 8] [ 9 10 11]] >>> >>> c = np.arange(24).reshape(2, 3, 4) # 3d array >>> print(c) [[[ 0 1 2 3] [ 4 5 6 7] [ 8 9 10 11]] [[12 13 14 15] [16 17 18 19] [20 21 22 23]]] ``` See [below](#quickstart-shape-manipulation) to get more details on `reshape`. 
If an array is too large to be printed, NumPy automatically skips the central part of the array and only prints the corners: ``` >>> print(np.arange(10000)) [ 0 1 2 ... 9997 9998 9999] >>> >>> print(np.arange(10000).reshape(100, 100)) [[ 0 1 2 ... 97 98 99] [ 100 101 102 ... 197 198 199] [ 200 201 202 ... 297 298 299] ... [9700 9701 9702 ... 9797 9798 9799] [9800 9801 9802 ... 9897 9898 9899] [9900 9901 9902 ... 9997 9998 9999]] ``` To disable this behaviour and force NumPy to print the entire array, you can change the printing options using `set_printoptions`. ``` >>> np.set_printoptions(threshold=sys.maxsize) # sys module should be imported ``` ### Basic Operations Arithmetic operators on arrays apply *elementwise*. A new array is created and filled with the result. ``` >>> a = np.array([20, 30, 40, 50]) >>> b = np.arange(4) >>> b array([0, 1, 2, 3]) >>> c = a - b >>> c array([20, 29, 38, 47]) >>> b**2 array([0, 1, 4, 9]) >>> 10 * np.sin(a) array([ 9.12945251, -9.88031624, 7.4511316 , -2.62374854]) >>> a < 35 array([ True, True, False, False]) ``` Unlike in many matrix languages, the product operator `*` operates elementwise in NumPy arrays. The matrix product can be performed using the `@` operator (in python >=3.5) or the `dot` function or method: ``` >>> A = np.array([[1, 1], ... [0, 1]]) >>> B = np.array([[2, 0], ... [3, 4]]) >>> A * B # elementwise product array([[2, 0], [0, 4]]) >>> A @ B # matrix product array([[5, 4], [3, 4]]) >>> A.dot(B) # another matrix product array([[5, 4], [3, 4]]) ``` Some operations, such as `+=` and `*=`, act in place to modify an existing array rather than create a new one. 
``` >>> rg = np.random.default_rng(1) # create instance of default random number generator >>> a = np.ones((2, 3), dtype=int) >>> b = rg.random((2, 3)) >>> a *= 3 >>> a array([[3, 3, 3], [3, 3, 3]]) >>> b += a >>> b array([[3.51182162, 3.9504637 , 3.14415961], [3.94864945, 3.31183145, 3.42332645]]) >>> a += b # b is not automatically converted to integer type Traceback (most recent call last): ... numpy.core._exceptions._UFuncOutputCastingError: Cannot cast ufunc 'add' output from dtype('float64') to dtype('int64') with casting rule 'same_kind' ``` When operating with arrays of different types, the type of the resulting array corresponds to the more general or precise one (a behavior known as upcasting). ``` >>> a = np.ones(3, dtype=np.int32) >>> b = np.linspace(0, pi, 3) >>> b.dtype.name 'float64' >>> c = a + b >>> c array([1. , 2.57079633, 4.14159265]) >>> c.dtype.name 'float64' >>> d = np.exp(c * 1j) >>> d array([ 0.54030231+0.84147098j, -0.84147098+0.54030231j, -0.54030231-0.84147098j]) >>> d.dtype.name 'complex128' ``` Many unary operations, such as computing the sum of all the elements in the array, are implemented as methods of the `ndarray` class. ``` >>> a = rg.random((2, 3)) >>> a array([[0.82770259, 0.40919914, 0.54959369], [0.02755911, 0.75351311, 0.53814331]]) >>> a.sum() 3.1057109529998157 >>> a.min() 0.027559113243068367 >>> a.max() 0.8277025938204418 ``` By default, these operations apply to the array as though it were a list of numbers, regardless of its shape. 
However, by specifying the `axis` parameter you can apply an operation along the specified axis of an array: ``` >>> b = np.arange(12).reshape(3, 4) >>> b array([[ 0, 1, 2, 3], [ 4, 5, 6, 7], [ 8, 9, 10, 11]]) >>> >>> b.sum(axis=0) # sum of each column array([12, 15, 18, 21]) >>> >>> b.min(axis=1) # min of each row array([0, 4, 8]) >>> >>> b.cumsum(axis=1) # cumulative sum along each row array([[ 0, 1, 3, 6], [ 4, 9, 15, 22], [ 8, 17, 27, 38]]) ``` ### Universal Functions NumPy provides familiar mathematical functions such as sin, cos, and exp. In NumPy, these are called “universal functions” (`ufunc`). Within NumPy, these functions operate elementwise on an array, producing an array as output. ``` >>> B = np.arange(3) >>> B array([0, 1, 2]) >>> np.exp(B) array([1. , 2.71828183, 7.3890561 ]) >>> np.sqrt(B) array([0. , 1. , 1.41421356]) >>> C = np.array([2., -1., 4.]) >>> np.add(B, C) array([2., 0., 6.]) ``` See also [`all`](../reference/generated/numpy.all#numpy.all "numpy.all"), [`any`](../reference/generated/numpy.any#numpy.any "numpy.any"), [`apply_along_axis`](../reference/generated/numpy.apply_along_axis#numpy.apply_along_axis "numpy.apply_along_axis"), [`argmax`](../reference/generated/numpy.argmax#numpy.argmax "numpy.argmax"), [`argmin`](../reference/generated/numpy.argmin#numpy.argmin "numpy.argmin"), [`argsort`](../reference/generated/numpy.argsort#numpy.argsort "numpy.argsort"), [`average`](../reference/generated/numpy.average#numpy.average "numpy.average"), [`bincount`](../reference/generated/numpy.bincount#numpy.bincount "numpy.bincount"), [`ceil`](../reference/generated/numpy.ceil#numpy.ceil "numpy.ceil"), [`clip`](../reference/generated/numpy.clip#numpy.clip "numpy.clip"), [`conj`](../reference/generated/numpy.conj#numpy.conj "numpy.conj"), [`corrcoef`](../reference/generated/numpy.corrcoef#numpy.corrcoef "numpy.corrcoef"), [`cov`](../reference/generated/numpy.cov#numpy.cov "numpy.cov"), [`cross`](../reference/generated/numpy.cross#numpy.cross 
"numpy.cross"), [`cumprod`](../reference/generated/numpy.cumprod#numpy.cumprod "numpy.cumprod"), [`cumsum`](../reference/generated/numpy.cumsum#numpy.cumsum "numpy.cumsum"), [`diff`](../reference/generated/numpy.diff#numpy.diff "numpy.diff"), [`dot`](../reference/generated/numpy.dot#numpy.dot "numpy.dot"), [`floor`](../reference/generated/numpy.floor#numpy.floor "numpy.floor"), [`inner`](../reference/generated/numpy.inner#numpy.inner "numpy.inner"), [`invert`](../reference/generated/numpy.invert#numpy.invert "numpy.invert"), [`lexsort`](../reference/generated/numpy.lexsort#numpy.lexsort "numpy.lexsort"), [`max`](https://docs.python.org/3/library/functions.html#max "(in Python v3.10)"), [`maximum`](../reference/generated/numpy.maximum#numpy.maximum "numpy.maximum"), [`mean`](../reference/generated/numpy.mean#numpy.mean "numpy.mean"), [`median`](../reference/generated/numpy.median#numpy.median "numpy.median"), [`min`](https://docs.python.org/3/library/functions.html#min "(in Python v3.10)"), [`minimum`](../reference/generated/numpy.minimum#numpy.minimum "numpy.minimum"), [`nonzero`](../reference/generated/numpy.nonzero#numpy.nonzero "numpy.nonzero"), [`outer`](../reference/generated/numpy.outer#numpy.outer "numpy.outer"), [`prod`](../reference/generated/numpy.prod#numpy.prod "numpy.prod"), [`re`](https://docs.python.org/3/library/re.html#module-re "(in Python v3.10)"), [`round`](https://docs.python.org/3/library/functions.html#round "(in Python v3.10)"), [`sort`](../reference/generated/numpy.sort#numpy.sort "numpy.sort"), [`std`](../reference/generated/numpy.std#numpy.std "numpy.std"), [`sum`](../reference/generated/numpy.sum#numpy.sum "numpy.sum"), [`trace`](../reference/generated/numpy.trace#numpy.trace "numpy.trace"), [`transpose`](../reference/generated/numpy.transpose#numpy.transpose "numpy.transpose"), [`var`](../reference/generated/numpy.var#numpy.var "numpy.var"), [`vdot`](../reference/generated/numpy.vdot#numpy.vdot "numpy.vdot"), 
[`vectorize`](../reference/generated/numpy.vectorize#numpy.vectorize "numpy.vectorize"), [`where`](../reference/generated/numpy.where#numpy.where "numpy.where") ### Indexing, Slicing and Iterating **One-dimensional** arrays can be indexed, sliced and iterated over, much like [lists](https://docs.python.org/tutorial/introduction.html#lists) and other Python sequences. ``` >>> a = np.arange(10)**3 >>> a array([ 0, 1, 8, 27, 64, 125, 216, 343, 512, 729]) >>> a[2] 8 >>> a[2:5] array([ 8, 27, 64]) >>> # equivalent to a[0:6:2] = 1000; >>> # from start to position 6, exclusive, set every 2nd element to 1000 >>> a[:6:2] = 1000 >>> a array([1000, 1, 1000, 27, 1000, 125, 216, 343, 512, 729]) >>> a[::-1] # reversed a array([ 729, 512, 343, 216, 125, 1000, 27, 1000, 1, 1000]) >>> for i in a: ... print(i**(1 / 3.)) ... 9.999999999999998 1.0 9.999999999999998 3.0 9.999999999999998 4.999999999999999 5.999999999999999 6.999999999999999 7.999999999999999 8.999999999999998 ``` **Multidimensional** arrays can have one index per axis. These indices are given in a tuple separated by commas: ``` >>> def f(x, y): ... return 10 * x + y ... >>> b = np.fromfunction(f, (5, 4), dtype=int) >>> b array([[ 0, 1, 2, 3], [10, 11, 12, 13], [20, 21, 22, 23], [30, 31, 32, 33], [40, 41, 42, 43]]) >>> b[2, 3] 23 >>> b[0:5, 1] # each row in the second column of b array([ 1, 11, 21, 31, 41]) >>> b[:, 1] # equivalent to the previous example array([ 1, 11, 21, 31, 41]) >>> b[1:3, :] # each column in the second and third row of b array([[10, 11, 12, 13], [20, 21, 22, 23]]) ``` When fewer indices are provided than the number of axes, the missing indices are considered complete slices`:` ``` >>> b[-1] # the last row. Equivalent to b[-1, :] array([40, 41, 42, 43]) ``` The expression within brackets in `b[i]` is treated as an `i` followed by as many instances of `:` as needed to represent the remaining axes. NumPy also allows you to write this using dots as `b[i, ...]`. 
The **dots** (`...`) represent as many colons as needed to produce a complete indexing tuple. For example, if `x` is an array with 5 axes, then * `x[1, 2, ...]` is equivalent to `x[1, 2, :, :, :]`, * `x[..., 3]` to `x[:, :, :, :, 3]` and * `x[4, ..., 5, :]` to `x[4, :, :, 5, :]`. ``` >>> c = np.array([[[ 0, 1, 2], # a 3D array (two stacked 2D arrays) ... [ 10, 12, 13]], ... [[100, 101, 102], ... [110, 112, 113]]]) >>> c.shape (2, 2, 3) >>> c[1, ...] # same as c[1, :, :] or c[1] array([[100, 101, 102], [110, 112, 113]]) >>> c[..., 2] # same as c[:, :, 2] array([[ 2, 13], [102, 113]]) ``` **Iterating** over multidimensional arrays is done with respect to the first axis: ``` >>> for row in b: ... print(row) ... [0 1 2 3] [10 11 12 13] [20 21 22 23] [30 31 32 33] [40 41 42 43] ``` However, if one wants to perform an operation on each element in the array, one can use the `flat` attribute which is an [iterator](https://docs.python.org/tutorial/classes.html#iterators) over all the elements of the array: ``` >>> for element in b.flat: ... print(element) ... 0 1 2 3 10 11 12 13 20 21 22 23 30 31 32 33 40 41 42 43 ``` See also [Indexing on ndarrays](basics.indexing#basics-indexing), [Indexing routines](../reference/arrays.indexing#arrays-indexing) (reference), [`newaxis`](../reference/constants#numpy.newaxis "numpy.newaxis"), [`ndenumerate`](../reference/generated/numpy.ndenumerate#numpy.ndenumerate "numpy.ndenumerate"), [`indices`](../reference/generated/numpy.indices#numpy.indices "numpy.indices") Shape Manipulation ------------------ ### Changing the shape of an array An array has a shape given by the number of elements along each axis: ``` >>> a = np.floor(10 * rg.random((3, 4))) >>> a array([[3., 7., 3., 4.], [1., 4., 2., 2.], [7., 2., 4., 9.]]) >>> a.shape (3, 4) ``` The shape of an array can be changed with various commands. 
Note that the following three commands all return a modified array, but do not change the original array: ``` >>> a.ravel() # returns the array, flattened array([3., 7., 3., 4., 1., 4., 2., 2., 7., 2., 4., 9.]) >>> a.reshape(6, 2) # returns the array with a modified shape array([[3., 7.], [3., 4.], [1., 4.], [2., 2.], [7., 2.], [4., 9.]]) >>> a.T # returns the array, transposed array([[3., 1., 7.], [7., 4., 2.], [3., 2., 4.], [4., 2., 9.]]) >>> a.T.shape (4, 3) >>> a.shape (3, 4) ``` The order of the elements in the array resulting from `ravel` is normally “C-style”, that is, the rightmost index “changes the fastest”, so the element after `a[0, 0]` is `a[0, 1]`. If the array is reshaped to some other shape, again the array is treated as “C-style”. NumPy normally creates arrays stored in this order, so `ravel` will usually not need to copy its argument, but if the array was made by taking slices of another array or created with unusual options, it may need to be copied. The functions `ravel` and `reshape` can also be instructed, using an optional argument, to use FORTRAN-style arrays, in which the leftmost index changes the fastest. 
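A brief sketch of that optional argument (this example is an addition to the text): passing `order='F'` to `ravel` or `reshape` makes the leftmost index change fastest, i.e., the array is read and written column by column:

```python
import numpy as np

a = np.array([[1, 2, 3],
              [4, 5, 6]])

print(a.ravel())             # C order (default): [1 2 3 4 5 6]
print(a.ravel(order='F'))    # Fortran order: [1 4 2 5 3 6]

# reshape with order='F' both reads the elements of `a` and fills the
# new shape in column-major order
print(a.reshape((3, 2), order='F'))
```

Note that neither call changes `a` itself; like the commands above, both return a new view or array.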
The [`reshape`](../reference/generated/numpy.reshape#numpy.reshape "numpy.reshape") function returns its argument with a modified shape, whereas the [`ndarray.resize`](../reference/generated/numpy.ndarray.resize#numpy.ndarray.resize "numpy.ndarray.resize") method modifies the array itself: ``` >>> a array([[3., 7., 3., 4.], [1., 4., 2., 2.], [7., 2., 4., 9.]]) >>> a.resize((2, 6)) >>> a array([[3., 7., 3., 4., 1., 4.], [2., 2., 7., 2., 4., 9.]]) ``` If a dimension is given as `-1` in a reshaping operation, the other dimensions are automatically calculated: ``` >>> a.reshape(3, -1) array([[3., 7., 3., 4.], [1., 4., 2., 2.], [7., 2., 4., 9.]]) ``` See also [`ndarray.shape`](../reference/generated/numpy.ndarray.shape#numpy.ndarray.shape "numpy.ndarray.shape"), [`reshape`](../reference/generated/numpy.reshape#numpy.reshape "numpy.reshape"), [`resize`](../reference/generated/numpy.resize#numpy.resize "numpy.resize"), [`ravel`](../reference/generated/numpy.ravel#numpy.ravel "numpy.ravel") ### Stacking together different arrays Several arrays can be stacked together along different axes: ``` >>> a = np.floor(10 * rg.random((2, 2))) >>> a array([[9., 7.], [5., 2.]]) >>> b = np.floor(10 * rg.random((2, 2))) >>> b array([[1., 9.], [5., 1.]]) >>> np.vstack((a, b)) array([[9., 7.], [5., 2.], [1., 9.], [5., 1.]]) >>> np.hstack((a, b)) array([[9., 7., 1., 9.], [5., 2., 5., 1.]]) ``` The function [`column_stack`](../reference/generated/numpy.column_stack#numpy.column_stack "numpy.column_stack") stacks 1D arrays as columns into a 2D array. 
It is equivalent to [`hstack`](../reference/generated/numpy.hstack#numpy.hstack "numpy.hstack") only for 2D arrays: ``` >>> from numpy import newaxis >>> np.column_stack((a, b)) # with 2D arrays array([[9., 7., 1., 9.], [5., 2., 5., 1.]]) >>> a = np.array([4., 2.]) >>> b = np.array([3., 8.]) >>> np.column_stack((a, b)) # returns a 2D array array([[4., 3.], [2., 8.]]) >>> np.hstack((a, b)) # the result is different array([4., 2., 3., 8.]) >>> a[:, newaxis] # view `a` as a 2D column vector array([[4.], [2.]]) >>> np.column_stack((a[:, newaxis], b[:, newaxis])) array([[4., 3.], [2., 8.]]) >>> np.hstack((a[:, newaxis], b[:, newaxis])) # the result is the same array([[4., 3.], [2., 8.]]) ``` On the other hand, the function [`row_stack`](../reference/generated/numpy.row_stack#numpy.row_stack "numpy.row_stack") is equivalent to [`vstack`](../reference/generated/numpy.vstack#numpy.vstack "numpy.vstack") for any input arrays. In fact, [`row_stack`](../reference/generated/numpy.row_stack#numpy.row_stack "numpy.row_stack") is an alias for [`vstack`](../reference/generated/numpy.vstack#numpy.vstack "numpy.vstack"): ``` >>> np.column_stack is np.hstack False >>> np.row_stack is np.vstack True ``` In general, for arrays with more than two dimensions, [`hstack`](../reference/generated/numpy.hstack#numpy.hstack "numpy.hstack") stacks along their second axes, [`vstack`](../reference/generated/numpy.vstack#numpy.vstack "numpy.vstack") stacks along their first axes, and [`concatenate`](../reference/generated/numpy.concatenate#numpy.concatenate "numpy.concatenate") allows for an optional argument giving the number of the axis along which the concatenation should happen. **Note** In complex cases, [`r_`](../reference/generated/numpy.r_#numpy.r_ "numpy.r_") and [`c_`](../reference/generated/numpy.c_#numpy.c_ "numpy.c_") are useful for creating arrays by stacking numbers along one axis. They allow the use of range literals `:`.
``` >>> np.r_[1:4, 0, 4] array([1, 2, 3, 0, 4]) ``` When used with arrays as arguments, [`r_`](../reference/generated/numpy.r_#numpy.r_ "numpy.r_") and [`c_`](../reference/generated/numpy.c_#numpy.c_ "numpy.c_") are similar to [`vstack`](../reference/generated/numpy.vstack#numpy.vstack "numpy.vstack") and [`hstack`](../reference/generated/numpy.hstack#numpy.hstack "numpy.hstack") in their default behavior, but allow for an optional argument giving the number of the axis along which to concatenate. See also [`hstack`](../reference/generated/numpy.hstack#numpy.hstack "numpy.hstack"), [`vstack`](../reference/generated/numpy.vstack#numpy.vstack "numpy.vstack"), [`column_stack`](../reference/generated/numpy.column_stack#numpy.column_stack "numpy.column_stack"), [`concatenate`](../reference/generated/numpy.concatenate#numpy.concatenate "numpy.concatenate"), [`c_`](../reference/generated/numpy.c_#numpy.c_ "numpy.c_"), [`r_`](../reference/generated/numpy.r_#numpy.r_ "numpy.r_") ### Splitting one array into several smaller ones Using [`hsplit`](../reference/generated/numpy.hsplit#numpy.hsplit "numpy.hsplit"), you can split an array along its horizontal axis, either by specifying the number of equally shaped arrays to return, or by specifying the columns after which the division should occur: ``` >>> a = np.floor(10 * rg.random((2, 12))) >>> a array([[6., 7., 6., 9., 0., 5., 4., 0., 6., 8., 5., 2.], [8., 5., 5., 7., 1., 8., 6., 7., 1., 8., 1., 0.]]) >>> # Split `a` into 3 >>> np.hsplit(a, 3) [array([[6., 7., 6., 9.], [8., 5., 5., 7.]]), array([[0., 5., 4., 0.], [1., 8., 6., 7.]]), array([[6., 8., 5., 2.], [1., 8., 1., 0.]])] >>> # Split `a` after the third and the fourth column >>> np.hsplit(a, (3, 4)) [array([[6., 7., 6.], [8., 5., 5.]]), array([[9.], [7.]]), array([[0., 5., 4., 0., 6., 8., 5., 2.], [1., 8., 6., 7., 1., 8., 1., 0.]])] ``` [`vsplit`](../reference/generated/numpy.vsplit#numpy.vsplit "numpy.vsplit") splits along the vertical axis, and 
[`array_split`](../reference/generated/numpy.array_split#numpy.array_split "numpy.array_split") allows one to specify along which axis to split. Copies and Views ---------------- When operating and manipulating arrays, their data is sometimes copied into a new array and sometimes not. This is often a source of confusion for beginners. There are three cases: ### No Copy at All Simple assignments make no copy of objects or their data. ``` >>> a = np.array([[ 0, 1, 2, 3], ... [ 4, 5, 6, 7], ... [ 8, 9, 10, 11]]) >>> b = a # no new object is created >>> b is a # a and b are two names for the same ndarray object True ``` Python passes mutable objects as references, so function calls make no copy. ``` >>> def f(x): ... print(id(x)) ... >>> id(a) # id is a unique identifier of an object 148293216 # may vary >>> f(a) 148293216 # may vary ``` ### View or Shallow Copy Different array objects can share the same data. The `view` method creates a new array object that looks at the same data. ``` >>> c = a.view() >>> c is a False >>> c.base is a # c is a view of the data owned by a True >>> c.flags.owndata False >>> >>> c = c.reshape((2, 6)) # a's shape doesn't change >>> a.shape (3, 4) >>> c[0, 4] = 1234 # a's data changes >>> a array([[ 0, 1, 2, 3], [1234, 5, 6, 7], [ 8, 9, 10, 11]]) ``` Slicing an array returns a view of it: ``` >>> s = a[:, 1:3] >>> s[:] = 10 # s[:] is a view of s. Note the difference between s = 10 and s[:] = 10 >>> a array([[ 0, 10, 10, 3], [1234, 10, 10, 7], [ 8, 10, 10, 11]]) ``` ### Deep Copy The `copy` method makes a complete copy of the array and its data. ``` >>> d = a.copy() # a new array object with new data is created >>> d is a False >>> d.base is a # d doesn't share anything with a False >>> d[0, 0] = 9999 >>> a array([[ 0, 10, 10, 3], [1234, 10, 10, 7], [ 8, 10, 10, 11]]) ``` Sometimes `copy` should be called after slicing if the original array is not required anymore. 
For example, suppose `a` is a huge intermediate result and the final result `b` only contains a small fraction of `a`, a deep copy should be made when constructing `b` with slicing: ``` >>> a = np.arange(int(1e8)) >>> b = a[:100].copy() >>> del a # the memory of ``a`` can be released. ``` If `b = a[:100]` is used instead, `a` is referenced by `b` and will persist in memory even if `del a` is executed. ### Functions and Methods Overview Here is a list of some useful NumPy functions and methods names ordered in categories. See [Routines](../reference/routines#routines) for the full list. Array Creation [`arange`](../reference/generated/numpy.arange#numpy.arange "numpy.arange"), [`array`](../reference/generated/numpy.array#numpy.array "numpy.array"), [`copy`](../reference/generated/numpy.copy#numpy.copy "numpy.copy"), [`empty`](../reference/generated/numpy.empty#numpy.empty "numpy.empty"), [`empty_like`](../reference/generated/numpy.empty_like#numpy.empty_like "numpy.empty_like"), [`eye`](../reference/generated/numpy.eye#numpy.eye "numpy.eye"), [`fromfile`](../reference/generated/numpy.fromfile#numpy.fromfile "numpy.fromfile"), [`fromfunction`](../reference/generated/numpy.fromfunction#numpy.fromfunction "numpy.fromfunction"), [`identity`](../reference/generated/numpy.identity#numpy.identity "numpy.identity"), [`linspace`](../reference/generated/numpy.linspace#numpy.linspace "numpy.linspace"), [`logspace`](../reference/generated/numpy.logspace#numpy.logspace "numpy.logspace"), [`mgrid`](../reference/generated/numpy.mgrid#numpy.mgrid "numpy.mgrid"), [`ogrid`](../reference/generated/numpy.ogrid#numpy.ogrid "numpy.ogrid"), [`ones`](../reference/generated/numpy.ones#numpy.ones "numpy.ones"), [`ones_like`](../reference/generated/numpy.ones_like#numpy.ones_like "numpy.ones_like"), [`r_`](../reference/generated/numpy.r_#numpy.r_ "numpy.r_"), [`zeros`](../reference/generated/numpy.zeros#numpy.zeros "numpy.zeros"), 
[`zeros_like`](../reference/generated/numpy.zeros_like#numpy.zeros_like "numpy.zeros_like") Conversions [`ndarray.astype`](../reference/generated/numpy.ndarray.astype#numpy.ndarray.astype "numpy.ndarray.astype"), [`atleast_1d`](../reference/generated/numpy.atleast_1d#numpy.atleast_1d "numpy.atleast_1d"), [`atleast_2d`](../reference/generated/numpy.atleast_2d#numpy.atleast_2d "numpy.atleast_2d"), [`atleast_3d`](../reference/generated/numpy.atleast_3d#numpy.atleast_3d "numpy.atleast_3d"), [`mat`](../reference/generated/numpy.mat#numpy.mat "numpy.mat") Manipulations [`array_split`](../reference/generated/numpy.array_split#numpy.array_split "numpy.array_split"), [`column_stack`](../reference/generated/numpy.column_stack#numpy.column_stack "numpy.column_stack"), [`concatenate`](../reference/generated/numpy.concatenate#numpy.concatenate "numpy.concatenate"), [`diagonal`](../reference/generated/numpy.diagonal#numpy.diagonal "numpy.diagonal"), [`dsplit`](../reference/generated/numpy.dsplit#numpy.dsplit "numpy.dsplit"), [`dstack`](../reference/generated/numpy.dstack#numpy.dstack "numpy.dstack"), [`hsplit`](../reference/generated/numpy.hsplit#numpy.hsplit "numpy.hsplit"), [`hstack`](../reference/generated/numpy.hstack#numpy.hstack "numpy.hstack"), [`ndarray.item`](../reference/generated/numpy.ndarray.item#numpy.ndarray.item "numpy.ndarray.item"), [`newaxis`](../reference/constants#numpy.newaxis "numpy.newaxis"), [`ravel`](../reference/generated/numpy.ravel#numpy.ravel "numpy.ravel"), [`repeat`](../reference/generated/numpy.repeat#numpy.repeat "numpy.repeat"), [`reshape`](../reference/generated/numpy.reshape#numpy.reshape "numpy.reshape"), [`resize`](../reference/generated/numpy.resize#numpy.resize "numpy.resize"), [`squeeze`](../reference/generated/numpy.squeeze#numpy.squeeze "numpy.squeeze"), [`swapaxes`](../reference/generated/numpy.swapaxes#numpy.swapaxes "numpy.swapaxes"), [`take`](../reference/generated/numpy.take#numpy.take "numpy.take"), 
[`transpose`](../reference/generated/numpy.transpose#numpy.transpose "numpy.transpose"), [`vsplit`](../reference/generated/numpy.vsplit#numpy.vsplit "numpy.vsplit"), [`vstack`](../reference/generated/numpy.vstack#numpy.vstack "numpy.vstack") Questions [`all`](../reference/generated/numpy.all#numpy.all "numpy.all"), [`any`](../reference/generated/numpy.any#numpy.any "numpy.any"), [`nonzero`](../reference/generated/numpy.nonzero#numpy.nonzero "numpy.nonzero"), [`where`](../reference/generated/numpy.where#numpy.where "numpy.where") Ordering [`argmax`](../reference/generated/numpy.argmax#numpy.argmax "numpy.argmax"), [`argmin`](../reference/generated/numpy.argmin#numpy.argmin "numpy.argmin"), [`argsort`](../reference/generated/numpy.argsort#numpy.argsort "numpy.argsort"), [`max`](https://docs.python.org/3/library/functions.html#max "(in Python v3.10)"), [`min`](https://docs.python.org/3/library/functions.html#min "(in Python v3.10)"), [`ptp`](../reference/generated/numpy.ptp#numpy.ptp "numpy.ptp"), [`searchsorted`](../reference/generated/numpy.searchsorted#numpy.searchsorted "numpy.searchsorted"), [`sort`](../reference/generated/numpy.sort#numpy.sort "numpy.sort") Operations [`choose`](../reference/generated/numpy.choose#numpy.choose "numpy.choose"), [`compress`](../reference/generated/numpy.compress#numpy.compress "numpy.compress"), [`cumprod`](../reference/generated/numpy.cumprod#numpy.cumprod "numpy.cumprod"), [`cumsum`](../reference/generated/numpy.cumsum#numpy.cumsum "numpy.cumsum"), [`inner`](../reference/generated/numpy.inner#numpy.inner "numpy.inner"), [`ndarray.fill`](../reference/generated/numpy.ndarray.fill#numpy.ndarray.fill "numpy.ndarray.fill"), [`imag`](../reference/generated/numpy.imag#numpy.imag "numpy.imag"), [`prod`](../reference/generated/numpy.prod#numpy.prod "numpy.prod"), [`put`](../reference/generated/numpy.put#numpy.put "numpy.put"), [`putmask`](../reference/generated/numpy.putmask#numpy.putmask "numpy.putmask"), 
[`real`](../reference/generated/numpy.real#numpy.real "numpy.real"), [`sum`](../reference/generated/numpy.sum#numpy.sum "numpy.sum") Basic Statistics [`cov`](../reference/generated/numpy.cov#numpy.cov "numpy.cov"), [`mean`](../reference/generated/numpy.mean#numpy.mean "numpy.mean"), [`std`](../reference/generated/numpy.std#numpy.std "numpy.std"), [`var`](../reference/generated/numpy.var#numpy.var "numpy.var") Basic Linear Algebra [`cross`](../reference/generated/numpy.cross#numpy.cross "numpy.cross"), [`dot`](../reference/generated/numpy.dot#numpy.dot "numpy.dot"), [`outer`](../reference/generated/numpy.outer#numpy.outer "numpy.outer"), [`linalg.svd`](../reference/generated/numpy.linalg.svd#numpy.linalg.svd "numpy.linalg.svd"), [`vdot`](../reference/generated/numpy.vdot#numpy.vdot "numpy.vdot") Less Basic ---------- ### Broadcasting rules Broadcasting allows universal functions to deal in a meaningful way with inputs that do not have exactly the same shape. The first rule of broadcasting is that if all input arrays do not have the same number of dimensions, a “1” will be repeatedly prepended to the shapes of the smaller arrays until all the arrays have the same number of dimensions. The second rule of broadcasting ensures that arrays with a size of 1 along a particular dimension act as if they had the size of the array with the largest shape along that dimension. The value of the array element is assumed to be the same along that dimension for the “broadcast” array. After application of the broadcasting rules, the sizes of all arrays must match. More details can be found in [Broadcasting](basics.broadcasting#basics-broadcasting). Advanced indexing and index tricks ---------------------------------- NumPy offers more indexing facilities than regular Python sequences. In addition to indexing by integers and slices, as we saw before, arrays can be indexed by arrays of integers and arrays of booleans. 
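The broadcasting rules described above can be checked directly before moving on to the indexing examples; a minimal, self-contained sketch (the array names here are only illustrative):

```python
import numpy as np

# Rule 1: the shape-(3,) array is treated as (1, 3) to match a's two dimensions.
# Rule 2: its length-1 leading axis then acts as if it had length 2.
a = np.arange(6).reshape(2, 3)   # shape (2, 3)
b = np.array([10, 20, 30])       # shape (3,)
print((a + b).shape)             # (2, 3)

# A (2, 1) column and a (1, 3) row broadcast against each other to (2, 3).
col = np.array([[0], [100]])
row = np.array([[1, 2, 3]])
print(col + row)                 # [[  1   2   3]
                                 #  [101 102 103]]
```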
### Indexing with Arrays of Indices ``` >>> a = np.arange(12)**2 # the first 12 square numbers >>> i = np.array([1, 1, 3, 8, 5]) # an array of indices >>> a[i] # the elements of `a` at the positions `i` array([ 1, 1, 9, 64, 25]) >>> >>> j = np.array([[3, 4], [9, 7]]) # a bidimensional array of indices >>> a[j] # the same shape as `j` array([[ 9, 16], [81, 49]]) ``` When the indexed array `a` is multidimensional, a single array of indices refers to the first dimension of `a`. The following example shows this behavior by converting an image of labels into a color image using a palette. ``` >>> palette = np.array([[0, 0, 0], # black ... [255, 0, 0], # red ... [0, 255, 0], # green ... [0, 0, 255], # blue ... [255, 255, 255]]) # white >>> image = np.array([[0, 1, 2, 0], # each value corresponds to a color in the palette ... [0, 3, 4, 0]]) >>> palette[image] # the (2, 4, 3) color image array([[[ 0, 0, 0], [255, 0, 0], [ 0, 255, 0], [ 0, 0, 0]], [[ 0, 0, 0], [ 0, 0, 255], [255, 255, 255], [ 0, 0, 0]]]) ``` We can also give indexes for more than one dimension. The arrays of indices for each dimension must have the same shape. ``` >>> a = np.arange(12).reshape(3, 4) >>> a array([[ 0, 1, 2, 3], [ 4, 5, 6, 7], [ 8, 9, 10, 11]]) >>> i = np.array([[0, 1], # indices for the first dim of `a` ... [1, 2]]) >>> j = np.array([[2, 1], # indices for the second dim ... [3, 3]]) >>> >>> a[i, j] # i and j must have equal shape array([[ 2, 5], [ 7, 11]]) >>> >>> a[i, 2] array([[ 2, 6], [ 6, 10]]) >>> >>> a[:, j] array([[[ 2, 1], [ 3, 3]], [[ 6, 5], [ 7, 7]], [[10, 9], [11, 11]]]) ``` In Python, `arr[i, j]` is exactly the same as `arr[(i, j)]`—so we can put `i` and `j` in a `tuple` and then do the indexing with that. ``` >>> l = (i, j) >>> # equivalent to a[i, j] >>> a[l] array([[ 2, 5], [ 7, 11]]) ``` However, we can not do this by putting `i` and `j` into an array, because this array will be interpreted as indexing the first dimension of `a`. 
``` >>> s = np.array([i, j]) >>> # not what we want >>> a[s] Traceback (most recent call last): File "<stdin>", line 1, in <module> IndexError: index 3 is out of bounds for axis 0 with size 3 >>> # same as `a[i, j]` >>> a[tuple(s)] array([[ 2, 5], [ 7, 11]]) ``` Another common use of indexing with arrays is the search of the maximum value of time-dependent series: ``` >>> time = np.linspace(20, 145, 5) # time scale >>> data = np.sin(np.arange(20)).reshape(5, 4) # 4 time-dependent series >>> time array([ 20. , 51.25, 82.5 , 113.75, 145. ]) >>> data array([[ 0. , 0.84147098, 0.90929743, 0.14112001], [-0.7568025 , -0.95892427, -0.2794155 , 0.6569866 ], [ 0.98935825, 0.41211849, -0.54402111, -0.99999021], [-0.53657292, 0.42016704, 0.99060736, 0.65028784], [-0.28790332, -0.96139749, -0.75098725, 0.14987721]]) >>> # index of the maxima for each series >>> ind = data.argmax(axis=0) >>> ind array([2, 0, 3, 1]) >>> # times corresponding to the maxima >>> time_max = time[ind] >>> >>> data_max = data[ind, range(data.shape[1])] # => data[ind[0], 0], data[ind[1], 1]... >>> time_max array([ 82.5 , 20. , 113.75, 51.25]) >>> data_max array([0.98935825, 0.84147098, 0.99060736, 0.6569866 ]) >>> np.all(data_max == data.max(axis=0)) True ``` You can also use indexing with arrays as a target to assign to: ``` >>> a = np.arange(5) >>> a array([0, 1, 2, 3, 4]) >>> a[[1, 3, 4]] = 0 >>> a array([0, 0, 2, 0, 0]) ``` However, when the list of indices contains repetitions, the assignment is done several times, leaving behind the last value: ``` >>> a = np.arange(5) >>> a[[0, 0, 2]] = [1, 2, 3] >>> a array([2, 1, 3, 3, 4]) ``` This is reasonable enough, but watch out if you want to use Python’s `+=` construct, as it may not do what you expect: ``` >>> a = np.arange(5) >>> a[[0, 0, 2]] += 1 >>> a array([1, 1, 3, 3, 4]) ``` Even though 0 occurs twice in the list of indices, the 0th element is only incremented once. This is because Python requires `a += 1` to be equivalent to `a = a + 1`. 
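If accumulation over repeated indices is what you actually want, the unbuffered `at` method of ufuncs applies the operation once per occurrence of each index; a small sketch:

```python
import numpy as np

a = np.arange(5)
# Unlike `a[[0, 0, 2]] += 1`, np.add.at performs unbuffered in-place
# addition: index 0 appears twice, so a[0] is incremented twice.
np.add.at(a, [0, 0, 2], 1)
print(a)  # [2 1 3 3 4]
```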
### Indexing with Boolean Arrays When we index arrays with arrays of (integer) indices we are providing the list of indices to pick. With boolean indices the approach is different; we explicitly choose which items in the array we want and which ones we don’t. The most natural way one can think of for boolean indexing is to use boolean arrays that have *the same shape* as the original array: ``` >>> a = np.arange(12).reshape(3, 4) >>> b = a > 4 >>> b # `b` is a boolean with `a`'s shape array([[False, False, False, False], [False, True, True, True], [ True, True, True, True]]) >>> a[b] # 1d array with the selected elements array([ 5, 6, 7, 8, 9, 10, 11]) ``` This property can be very useful in assignments: ``` >>> a[b] = 0 # All elements of `a` higher than 4 become 0 >>> a array([[0, 1, 2, 3], [4, 0, 0, 0], [0, 0, 0, 0]]) ``` You can look at the following example to see how to use boolean indexing to generate an image of the [Mandelbrot set](https://en.wikipedia.org/wiki/Mandelbrot_set): ``` >>> import numpy as np >>> import matplotlib.pyplot as plt >>> def mandelbrot(h, w, maxit=20, r=2): ... """Returns an image of the Mandelbrot fractal of size (h,w).""" ... x = np.linspace(-2.5, 1.5, 4*h+1) ... y = np.linspace(-1.5, 1.5, 3*w+1) ... A, B = np.meshgrid(x, y) ... C = A + B*1j ... z = np.zeros_like(C) ... divtime = maxit + np.zeros(z.shape, dtype=int) ... ... for i in range(maxit): ... z = z**2 + C ... diverge = abs(z) > r # who is diverging ... div_now = diverge & (divtime == maxit) # who is diverging now ... divtime[div_now] = i # note when ... z[diverge] = r # avoid diverging too much ... ... 
return divtime >>> plt.clf() >>> plt.imshow(mandelbrot(400, 400)) ``` (Figure: the resulting image of the Mandelbrot fractal.) The second way of indexing with booleans is more similar to integer indexing; for each dimension of the array we give a 1D boolean array selecting the slices we want: ``` >>> a = np.arange(12).reshape(3, 4) >>> b1 = np.array([False, True, True]) # first dim selection >>> b2 = np.array([True, False, True, False]) # second dim selection >>> >>> a[b1, :] # selecting rows array([[ 4, 5, 6, 7], [ 8, 9, 10, 11]]) >>> >>> a[b1] # same thing array([[ 4, 5, 6, 7], [ 8, 9, 10, 11]]) >>> >>> a[:, b2] # selecting columns array([[ 0, 2], [ 4, 6], [ 8, 10]]) >>> >>> a[b1, b2] # a weird thing to do array([ 4, 10]) ``` Note that the length of the 1D boolean array must coincide with the length of the dimension (or axis) you want to slice. In the previous example, `b1` has length 3 (the number of *rows* in `a`), and `b2` (of length 4) is suitable to index the 2nd axis (columns) of `a`. ### The ix_() function The [`ix_`](../reference/generated/numpy.ix_#numpy.ix_ "numpy.ix_") function can be used to combine different vectors so as to obtain the result for each n-uplet. 
For example, if you want to compute all the a+b*c for all the triplets taken from each of the vectors a, b and c: ``` >>> a = np.array([2, 3, 4, 5]) >>> b = np.array([8, 5, 4]) >>> c = np.array([5, 4, 6, 8, 3]) >>> ax, bx, cx = np.ix_(a, b, c) >>> ax array([[[2]], [[3]], [[4]], [[5]]]) >>> bx array([[[8], [5], [4]]]) >>> cx array([[[5, 4, 6, 8, 3]]]) >>> ax.shape, bx.shape, cx.shape ((4, 1, 1), (1, 3, 1), (1, 1, 5)) >>> result = ax + bx * cx >>> result array([[[42, 34, 50, 66, 26], [27, 22, 32, 42, 17], [22, 18, 26, 34, 14]], [[43, 35, 51, 67, 27], [28, 23, 33, 43, 18], [23, 19, 27, 35, 15]], [[44, 36, 52, 68, 28], [29, 24, 34, 44, 19], [24, 20, 28, 36, 16]], [[45, 37, 53, 69, 29], [30, 25, 35, 45, 20], [25, 21, 29, 37, 17]]]) >>> result[3, 2, 4] 17 >>> a[3] + b[2] * c[4] 17 ``` You could also implement the reduce as follows: ``` >>> def ufunc_reduce(ufct, *vectors): ... vs = np.ix_(*vectors) ... r = ufct.identity ... for v in vs: ... r = ufct(r, v) ... return r ``` and then use it as: ``` >>> ufunc_reduce(np.add, a, b, c) array([[[15, 14, 16, 18, 13], [12, 11, 13, 15, 10], [11, 10, 12, 14, 9]], [[16, 15, 17, 19, 14], [13, 12, 14, 16, 11], [12, 11, 13, 15, 10]], [[17, 16, 18, 20, 15], [14, 13, 15, 17, 12], [13, 12, 14, 16, 11]], [[18, 17, 19, 21, 16], [15, 14, 16, 18, 13], [14, 13, 15, 17, 12]]]) ``` The advantage of this version of reduce compared to the normal ufunc.reduce is that it makes use of the [broadcasting rules](#broadcasting-rules) in order to avoid creating an argument array the size of the output times the number of vectors. ### Indexing with strings See [Structured arrays](basics.rec#structured-arrays). Tricks and Tips --------------- Here we give a list of short and useful tips. 
### “Automatic” Reshaping To change the dimensions of an array, you can omit one of the sizes which will then be deduced automatically: ``` >>> a = np.arange(30) >>> b = a.reshape((2, -1, 3)) # -1 means "whatever is needed" >>> b.shape (2, 5, 3) >>> b array([[[ 0, 1, 2], [ 3, 4, 5], [ 6, 7, 8], [ 9, 10, 11], [12, 13, 14]], [[15, 16, 17], [18, 19, 20], [21, 22, 23], [24, 25, 26], [27, 28, 29]]]) ``` ### Vector Stacking How do we construct a 2D array from a list of equally-sized row vectors? In MATLAB this is quite easy: if `x` and `y` are two vectors of the same length you only need do `m=[x;y]`. In NumPy this works via the functions `column_stack`, `dstack`, `hstack` and `vstack`, depending on the dimension in which the stacking is to be done. For example: ``` >>> x = np.arange(0, 10, 2) >>> y = np.arange(5) >>> m = np.vstack([x, y]) >>> m array([[0, 2, 4, 6, 8], [0, 1, 2, 3, 4]]) >>> xy = np.hstack([x, y]) >>> xy array([0, 2, 4, 6, 8, 0, 1, 2, 3, 4]) ``` The logic behind those functions in more than two dimensions can be strange. See also [NumPy for MATLAB users](numpy-for-matlab-users) ### Histograms The NumPy `histogram` function applied to an array returns a pair of vectors: the histogram of the array and a vector of the bin edges. Beware: `matplotlib` also has a function to build histograms (called `hist`, as in Matlab) that differs from the one in NumPy. The main difference is that `pylab.hist` plots the histogram automatically, while `numpy.histogram` only generates the data. ``` >>> import numpy as np >>> rg = np.random.default_rng(1) >>> import matplotlib.pyplot as plt >>> # Build a vector of 10000 normal deviates with variance 0.5^2 and mean 2 >>> mu, sigma = 2, 0.5 >>> v = rg.normal(mu, sigma, 10000) >>> # Plot a normalized histogram with 50 bins >>> plt.hist(v, bins=50, density=True) # matplotlib version (plot) (array...) 
>>> # Compute the histogram with numpy and then plot it >>> (n, bins) = np.histogram(v, bins=50, density=True) # NumPy version (no plot) >>> plt.plot(.5 * (bins[1:] + bins[:-1]), n) ``` (Figure: the resulting histogram plot.) Further reading --------------- * The [Python tutorial](https://docs.python.org/tutorial/) * [NumPy Reference](../reference/index#reference) * [SciPy Tutorial](https://docs.scipy.org/doc/scipy/reference/tutorial/index.html) * [SciPy Lecture Notes](https://scipy-lectures.org) * A [matlab, R, IDL, NumPy/SciPy dictionary](http://mathesaurus.sf.net/) * [tutorial-svd](https://numpy.org/numpy-tutorials/content/tutorial-svd.html "(in NumPy tutorials)") © 2005–2022 NumPy Developers Licensed under the 3-clause BSD License. <https://numpy.org/doc/1.23/user/quickstart.html> NumPy fundamentals ================== These documents clarify concepts, design decisions, and technical constraints in NumPy. This is a great place to understand the fundamental NumPy ideas and philosophy. * [Array creation](basics.creation) * [Indexing on `ndarrays`](basics.indexing) * [I/O with NumPy](basics.io) * [Data types](basics.types) * [Broadcasting](basics.broadcasting) * [Byte-swapping](basics.byteswapping) * [Structured arrays](basics.rec) * [Writing custom array containers](basics.dispatch) * [Subclassing ndarray](basics.subclassing) * [Universal functions (`ufunc`) basics](basics.ufuncs) * [Copies and views](basics.copies) * [Interoperability with NumPy](basics.interoperability) Miscellaneous ============= IEEE 754 Floating Point Special Values -------------------------------------- Special values defined in numpy: nan, inf. NaNs can be used as a poor-man’s mask (if you don’t care what the original value was). Note: cannot use equality to test NaNs. 
E.g.: ``` >>> myarr = np.array([1., 0., np.nan, 3.]) >>> np.nonzero(myarr == np.nan) (array([], dtype=int64),) >>> np.nan == np.nan # is always False! Use special numpy functions instead. False >>> myarr[myarr == np.nan] = 0. # doesn't work >>> myarr array([ 1., 0., nan, 3.]) >>> myarr[np.isnan(myarr)] = 0. # use this instead >>> myarr array([1., 0., 0., 3.]) ``` Other related special value functions: ``` isinf(): True if value is inf isfinite(): True if not nan or inf nan_to_num(): Map nan to 0, inf to max float, -inf to min float ``` The following corresponds to the usual functions except that nans are excluded from the results: ``` nansum() nanmax() nanmin() nanargmax() nanargmin() >>> x = np.arange(10.) >>> x[3] = np.nan >>> x.sum() nan >>> np.nansum(x) 42.0 ``` How numpy handles numerical exceptions -------------------------------------- The default is to `'warn'` for `invalid`, `divide`, and `overflow` and `'ignore'` for `underflow`. But this can be changed, and it can be set individually for different kinds of exceptions. The different behaviors are: * ‘ignore’ : Take no action when the exception occurs. * ‘warn’ : Print a `RuntimeWarning` (via the Python [`warnings`](https://docs.python.org/3/library/warnings.html#module-warnings "(in Python v3.10)") module). * ‘raise’ : Raise a `FloatingPointError`. * ‘call’ : Call a function specified using the `seterrcall` function. * ‘print’ : Print a warning directly to `stdout`. * ‘log’ : Record error in a Log object specified by `seterrcall`. These behaviors can be set for all kinds of errors or specific ones: * all : apply to all numeric exceptions * invalid : when NaNs are generated * divide : divide by zero (for integers as well!) * overflow : floating point overflows * underflow : floating point underflows Note that integer divide-by-zero is handled by the same machinery. These behaviors are set on a per-thread basis. 
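Besides changing these settings globally with `seterr`, the behaviors can be changed temporarily with the `np.errstate` context manager; a minimal sketch:

```python
import numpy as np

# Within the block, divide-by-zero is silently ignored; the previous
# error-handling settings are restored automatically when the block exits.
with np.errstate(divide='ignore'):
    r = np.array([1.0, 2.0]) / 0.0
print(r)  # [inf inf]
```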
Examples -------- ``` >>> oldsettings = np.seterr(all='warn') >>> np.zeros(5,dtype=np.float32)/0. Traceback (most recent call last): ... RuntimeWarning: invalid value encountered in divide >>> j = np.seterr(under='ignore') >>> np.array([1.e-100])**10 array([0.]) >>> j = np.seterr(invalid='raise') >>> np.sqrt(np.array([-1.])) Traceback (most recent call last): ... FloatingPointError: invalid value encountered in sqrt >>> def errorhandler(errstr, errflag): ... print("saw stupid error!") >>> np.seterrcall(errorhandler) >>> j = np.seterr(all='call') >>> np.zeros(5, dtype=np.int32)/0 saw stupid error! array([nan, nan, nan, nan, nan]) >>> j = np.seterr(**oldsettings) # restore previous ... # error-handling settings ``` Interfacing to C ---------------- Only a survey of the choices. Little detail on how each works. 1. Bare metal, wrap your own C-code manually. * Plusses: + Efficient + No dependencies on other tools * Minuses: + Lots of learning overhead: - need to learn basics of Python C API - need to learn basics of numpy C API - need to learn how to handle reference counting and love it. + Reference counting often difficult to get right. - getting it wrong leads to memory leaks, and worse, segfaults 2. Cython * Plusses: + avoid learning C API’s + no dealing with reference counting + can code in pseudo python and generate C code + can also interface to existing C code + should shield you from changes to Python C api + has become the de-facto standard within the scientific Python community + fast indexing support for arrays * Minuses: + Can write code in non-standard form which may become obsolete + Not as flexible as manual wrapping 3. 
ctypes * Plusses: + part of Python standard library + good for interfacing to existing shareable libraries, particularly Windows DLLs + avoids API/reference counting issues + good numpy support: arrays have all these in their ctypes attribute: ``` a.ctypes.data a.ctypes.data_as a.ctypes.shape a.ctypes.shape_as a.ctypes.strides a.ctypes.strides_as ``` * Minuses: + can’t use for writing code to be turned into C extensions, only a wrapper tool. 4. SWIG (automatic wrapper generator) * Plusses: + around a long time + multiple scripting language support + C++ support + Good for wrapping large (many functions) existing C libraries * Minuses: + generates lots of code between Python and the C code + can cause performance problems that are nearly impossible to optimize out + interface files can be hard to write + doesn’t necessarily avoid reference counting issues or needing to know API’s 5. Psyco * Plusses: + Turns pure python into efficient machine code through jit-like optimizations + very fast when it optimizes well * Minuses: + Only on intel (windows?) + Doesn’t do much for numpy? Interfacing to Fortran: ----------------------- The clear choice to wrap Fortran code is [f2py](https://docs.scipy.org/doc/numpy/f2py/). Pyfort is an older alternative, but not supported any longer. Fwrap is a newer project that looked promising but isn’t being developed any longer. Interfacing to C++: ------------------- 1. Cython 2. CXX 3. Boost.python 4. SWIG 5. SIP (used mainly in PyQT) NumPy for MATLAB users ====================== Introduction ------------ MATLAB® and NumPy have a lot in common, but NumPy was created to work with Python, not to be a MATLAB clone. This guide will help MATLAB users get started with NumPy. Some key differences -------------------- | | | | --- | --- | | In MATLAB, the basic type, even for scalars, is a multidimensional array. 
Array assignments in MATLAB are stored as 2D arrays of double precision floating point numbers, unless you specify the number of dimensions and type. Operations on the 2D instances of these arrays are modeled on matrix operations in linear algebra. | In NumPy, the basic type is a multidimensional `array`. Array assignments in NumPy are usually stored as [n-dimensional arrays](../reference/arrays#arrays) with the minimum type required to hold the objects in sequence, unless you specify the number of dimensions and type. NumPy performs operations element-by-element, so multiplying 2D arrays with `*` is not a matrix multiplication – it’s an element-by-element multiplication. (The `@` operator, available since Python 3.5, can be used for conventional matrix multiplication.) | | MATLAB numbers indices from 1; `a(1)` is the first element. [See note INDEXING](#numpy-for-matlab-users-notes) | NumPy, like Python, numbers indices from 0; `a[0]` is the first element. | | MATLAB’s scripting language was created for linear algebra so the syntax for some array manipulations is more compact than NumPy’s. On the other hand, the API for adding GUIs and creating full-fledged applications is more or less an afterthought. | NumPy is based on Python, a general-purpose language. The advantage to NumPy is access to Python libraries including: [SciPy](https://www.scipy.org/), [Matplotlib](https://matplotlib.org/), [Pandas](https://pandas.pydata.org/), [OpenCV](https://opencv.org/), and more. In addition, Python is often [embedded as a scripting language](https://en.wikipedia.org/wiki/List_of_Python_software#Embedded_as_a_scripting_language) in other software, allowing NumPy to be used there too. | | MATLAB array slicing uses pass-by-value semantics, with a lazy copy-on-write scheme to prevent creating copies until they are needed. Slicing operations copy parts of the array. | NumPy array slicing uses pass-by-reference, that does not copy the arguments. 
Slicing operations are views into an array. | Rough equivalents ----------------- The table below gives rough equivalents for some common MATLAB expressions. These are similar expressions, not equivalents. For details, see the [documentation](../reference/index#reference). In the table below, it is assumed that you have executed the following commands in Python: ``` import numpy as np from scipy import io, integrate, linalg, signal from scipy.sparse.linalg import eigs ``` Also assume below that if the Notes talk about “matrix” that the arguments are two-dimensional entities. ### General purpose equivalents | MATLAB | NumPy | Notes | | --- | --- | --- | | `help func` | `info(func)` or `help(func)` or `func?` (in IPython) | get help on the function *func* | | `which func` | [see note HELP](#numpy-for-matlab-users-notes) | find out where *func* is defined | | `type func` | `np.source(func)` or `func??` (in IPython) | print source for *func* (if not a native function) | | `% comment` | `# comment` | comment a line of code with the text `comment` | | ``` for i=1:3 fprintf('%i\n',i) end ``` | ``` for i in range(1, 4): print(i) ``` | use a for-loop to print the numbers 1, 2, and 3 using [`range`](https://docs.python.org/3/library/stdtypes.html#range "(in Python v3.10)") | | `a && b` | `a and b` | short-circuiting logical AND operator ([Python native operator](https://docs.python.org/3/library/stdtypes.html#boolean "(in Python v3.10)")); scalar arguments only | | `a || b` | `a or b` | short-circuiting logical OR operator ([Python native operator](https://docs.python.org/3/library/stdtypes.html#boolean "(in Python v3.10)")); scalar arguments only | | ``` >> 4 == 4 ans = 1 >> 4 == 5 ans = 0 ``` | ``` >>> 4 == 4 True >>> 4 == 5 False ``` | The [boolean objects](https://docs.python.org/3/library/stdtypes.html#bltin-boolean-values "(in Python v3.10)") in Python are `True` and `False`, as opposed to MATLAB logical types of `1` and `0`. 
| | ``` a=4 if a==4 fprintf('a = 4\n') elseif a==5 fprintf('a = 5\n') end ``` | ``` a = 4 if a == 4: print('a = 4') elif a == 5: print('a = 5') ``` | create an if-else statement to check if `a` is 4 or 5 and print result | | `1*i`, `1*j`, `1i`, `1j` | `1j` | complex numbers | | `eps` | `np.finfo(float).eps` or `np.spacing(1)` | Upper bound to relative error due to rounding in 64-bit floating point arithmetic. | | `load data.mat` | `io.loadmat('data.mat')` | Load MATLAB variables saved to the file `data.mat`. (Note: When saving arrays to `data.mat` in MATLAB/Octave, use a recent binary format. [`scipy.io.loadmat`](https://docs.scipy.org/doc/scipy/reference/generated/scipy.io.loadmat.html#scipy.io.loadmat "(in SciPy v1.8.1)") will create a dictionary with the saved arrays and further information.) | | `ode45` | `integrate.solve_ivp(f)` | integrate an ODE with Runge-Kutta 4,5 | | `ode15s` | `integrate.solve_ivp(f, method='BDF')` | integrate an ODE with BDF method | ### Linear algebra equivalents | MATLAB | NumPy | Notes | | --- | --- | --- | | `ndims(a)` | `np.ndim(a)` or `a.ndim` | number of dimensions of array `a` | | `numel(a)` | `np.size(a)` or `a.size` | number of elements of array `a` | | `size(a)` | `np.shape(a)` or `a.shape` | “size” of array `a` | | `size(a,n)` | `a.shape[n-1]` | get the number of elements of the n-th dimension of array `a`. (Note that MATLAB uses 1 based indexing while Python uses 0 based indexing, See note [INDEXING](#numpy-for-matlab-users-notes)) | | `[ 1 2 3; 4 5 6 ]` | `np.array([[1. ,2. ,3.], [4. ,5. 
,6.]])` | define a 2x3 2D array | | `[ a b; c d ]` | `np.block([[a, b], [c, d]])` | construct a matrix from blocks `a`, `b`, `c`, and `d` | | `a(end)` | `a[-1]` | access last element in MATLAB vector (1xn or nx1) or 1D NumPy array `a` (length n) | | `a(2,5)` | `a[1, 4]` | access element in second row, fifth column in 2D array `a` | | `a(2,:)` | `a[1]` or `a[1, :]` | entire second row of 2D array `a` | | `a(1:5,:)` | `a[0:5]` or `a[:5]` or `a[0:5, :]` | first 5 rows of 2D array `a` | | `a(end-4:end,:)` | `a[-5:]` | last 5 rows of 2D array `a` | | `a(1:3,5:9)` | `a[0:3, 4:9]` | The first through third rows and fifth through ninth columns of a 2D array, `a`. | | `a([2,4,5],[1,3])` | `a[np.ix_([1, 3, 4], [0, 2])]` | rows 2,4 and 5 and columns 1 and 3. This allows the matrix to be modified, and doesn’t require a regular slice. | | `a(3:2:21,:)` | `a[2:21:2,:]` | every other row of `a`, starting with the third and going to the twenty-first | | `a(1:2:end,:)` | `a[ ::2,:]` | every other row of `a`, starting with the first | | `a(end:-1:1,:)` or `flipud(a)` | `a[::-1,:]` | `a` with rows in reverse order | | `a([1:end 1],:)` | `a[np.r_[:len(a),0]]` | `a` with copy of the first row appended to the end | | `a.'` | `a.transpose()` or `a.T` | transpose of `a` | | `a'` | `a.conj().transpose()` or `a.conj().T` | conjugate transpose of `a` | | `a * b` | `a @ b` | matrix multiply | | `a .* b` | `a * b` | element-wise multiply | | `a./b` | `a/b` | element-wise divide | | `a.^3` | `a**3` | element-wise exponentiation | | `(a > 0.5)` | `(a > 0.5)` | matrix whose i,jth element is (a_ij > 0.5). The MATLAB result is an array of logical values 0 and 1. The NumPy result is an array of the boolean values `False` and `True`. 
| | `find(a > 0.5)` | `np.nonzero(a > 0.5)` | find the indices where (`a` > 0.5) | | `a(:,find(v > 0.5))` | `a[:,np.nonzero(v > 0.5)[0]]` | extract the columns of `a` where vector v > 0.5 | | `a(:,find(v>0.5))` | `a[:, v.T > 0.5]` | extract the columns of `a` where column vector v > 0.5 | | `a(a<0.5)=0` | `a[a < 0.5]=0` | `a` with elements less than 0.5 zeroed out | | `a .* (a>0.5)` | `a * (a > 0.5)` | `a` with elements less than 0.5 zeroed out | | `a(:) = 3` | `a[:] = 3` | set all values to the same scalar value | | `y=x` | `y = x.copy()` | NumPy assigns by reference | | `y=x(2,:)` | `y = x[1, :].copy()` | NumPy slices are by reference | | `y=x(:)` | `y = x.flatten()` | turn array into vector (note that this forces a copy). To obtain the same data ordering as in MATLAB, use `x.flatten('F')`. | | `1:10` | `np.arange(1., 11.)` or `np.r_[1.:11.]` or `np.r_[1:10:10j]` | create an increasing vector (see note [RANGES](#numpy-for-matlab-users-notes)) | | `0:9` | `np.arange(10.)` or `np.r_[:10.]` or `np.r_[:9:10j]` | create an increasing vector (see note [RANGES](#numpy-for-matlab-users-notes)) | | `[1:10]'` | `np.arange(1.,11.)[:, np.newaxis]` | create a column vector | | `zeros(3,4)` | `np.zeros((3, 4))` | 3x4 two-dimensional array full of 64-bit floating point zeros | | `zeros(3,4,5)` | `np.zeros((3, 4, 5))` | 3x4x5 three-dimensional array full of 64-bit floating point zeros | | `ones(3,4)` | `np.ones((3, 4))` | 3x4 two-dimensional array full of 64-bit floating point ones | | `eye(3)` | `np.eye(3)` | 3x3 identity matrix | | `diag(a)` | `np.diag(a)` | returns a vector of the diagonal elements of 2D array, `a` | | `diag(v,0)` | `np.diag(v, 0)` | returns a square diagonal matrix whose nonzero values are the elements of vector, `v` | | ``` rng(42,'twister') rand(3,4) ``` | ``` from numpy.random import default_rng rng = default_rng(42) rng.random((3, 4)) ``` or older version: `np.random.rand(3, 4)` | generate a random 3x4 array with default random number generator and seed = 
42 | | `linspace(1,3,4)` | `np.linspace(1,3,4)` | 4 equally spaced samples between 1 and 3, inclusive | | `[x,y]=meshgrid(0:8,0:5)` | `np.mgrid[0:9.,0:6.]` or `np.meshgrid(np.r_[0:9.], np.r_[0:6.])` | two 2D arrays: one of x values, the other of y values | | | `np.ogrid[0:9.,0:6.]` or `np.ix_(np.r_[0:9.], np.r_[0:6.])` | the best way to eval functions on a grid | | `[x,y]=meshgrid([1,2,4],[2,4,5])` | `np.meshgrid([1,2,4],[2,4,5])` | | | | `np.ix_([1,2,4],[2,4,5])` | the best way to eval functions on a grid | | `repmat(a, m, n)` | `np.tile(a, (m, n))` | create m by n copies of `a` | | `[a b]` | `np.concatenate((a,b),1)` or `np.hstack((a,b))` or `np.column_stack((a,b))` or `np.c_[a,b]` | concatenate columns of `a` and `b` | | `[a; b]` | `np.concatenate((a,b))` or `np.vstack((a,b))` or `np.r_[a,b]` | concatenate rows of `a` and `b` | | `max(max(a))` | `a.max()` or `np.nanmax(a)` | maximum element of `a` (with ndims(a)<=2 for MATLAB; if there are NaN’s, `nanmax` will ignore these and return the largest value) | | `max(a)` | `a.max(0)` | maximum element of each column of array `a` | | `max(a,[],2)` | `a.max(1)` | maximum element of each row of array `a` | | `max(a,b)` | `np.maximum(a, b)` | compares `a` and `b` element-wise, and returns the maximum value from each pair | | `norm(v)` | `np.sqrt(v @ v)` or `np.linalg.norm(v)` | L2 norm of vector `v` | | `a & b` | `np.logical_and(a,b)` | element-by-element AND operator (NumPy ufunc) [See note LOGICOPS](#numpy-for-matlab-users-notes) | | `a | b` | `np.logical_or(a,b)` | element-by-element OR operator (NumPy ufunc) [See note LOGICOPS](#numpy-for-matlab-users-notes) | | `bitand(a,b)` | `a & b` | bitwise AND operator (Python native and NumPy ufunc) | | `bitor(a,b)` | `a | b` | bitwise OR operator (Python native and NumPy ufunc) | | `inv(a)` | `linalg.inv(a)` | inverse of square 2D array `a` | | `pinv(a)` | `linalg.pinv(a)` | pseudo-inverse of 2D array `a` | | `rank(a)` | `linalg.matrix_rank(a)` | matrix rank of a 2D array `a` | | `a\b` | 
`linalg.solve(a, b)` if `a` is square; `linalg.lstsq(a, b)` otherwise | solution of a x = b for x | | `b/a` | Solve `a.T x.T = b.T` instead | solution of x a = b for x | | `[U,S,V]=svd(a)` | `U, S, Vh = linalg.svd(a); V = Vh.T` | singular value decomposition of `a` | | `c=chol(a)` where `a==c'*c` | `c = linalg.cholesky(a)` where `a == c@c.T` | Cholesky factorization of a 2D array (`chol(a)` in MATLAB returns an upper triangular 2D array, but [`cholesky`](https://docs.scipy.org/doc/scipy/reference/generated/scipy.linalg.cholesky.html#scipy.linalg.cholesky "(in SciPy v1.8.1)") returns a lower triangular 2D array) | | `[V,D]=eig(a)` | `D,V = linalg.eig(a)` | eigenvalues \(\lambda\) and eigenvectors \(\bar{v}\) of `a`, where \(\lambda\bar{v}=\mathbf{a}\bar{v}\) | | `[V,D]=eig(a,b)` | `D,V = linalg.eig(a, b)` | eigenvalues \(\lambda\) and eigenvectors \(\bar{v}\) of `a`, `b` where \(\lambda\mathbf{b}\bar{v}=\mathbf{a}\bar{v}\) | | `[V,D]=eigs(a,3)` | `D,V = eigs(a, k = 3)` | find the `k=3` largest eigenvalues and eigenvectors of 2D array, `a` | | `[Q,R,P]=qr(a,0)` | `Q,R = linalg.qr(a)` | QR decomposition | | `[L,U,P]=lu(a)` where `a==P'*L*U` | `P,L,U = linalg.lu(a)` where `a == P@L@U` | LU decomposition (note: P(MATLAB) == transpose(P(NumPy))) | | `conjgrad` | `cg` | Conjugate gradients solver | | `fft(a)` | `np.fft.fft(a)` | Fourier transform of `a` | | `ifft(a)` | `np.fft.ifft(a)` | inverse Fourier transform of `a` | | `sort(a)` | `np.sort(a, axis=0)` or `a.sort(axis=0)` | sort each column of a 2D array, `a` | | `sort(a, 2)` | `np.sort(a, axis = 1)` or `a.sort(axis = 1)` | sort each row of 2D array, `a` | | `[b,I]=sortrows(a,1)` | `I = np.argsort(a[:, 0]); b = a[I,:]` | save the array `a` as array `b` with rows sorted by the first column | | `x = Z\y` | `x = linalg.lstsq(Z, y)` | perform a linear regression of the form \(\mathbf{Zx}=\mathbf{y}\) | | `decimate(x, q)` | `signal.resample(x, int(np.ceil(len(x)/q)))` | downsample with low-pass filtering | | `unique(a)` | `np.unique(a)` | 
a vector of unique values in array `a` | | `squeeze(a)` | `a.squeeze()` | remove singleton dimensions of array `a`. Note that MATLAB will always return arrays of 2D or higher while NumPy will return arrays of 0D or higher | Notes ----- **Submatrix**: Assignment to a submatrix can be done with lists of indices using the `ix_` command. E.g., for 2D array `a`, one might do: `ind=[1, 3]; a[np.ix_(ind, ind)] += 100`. **HELP**: There is no direct equivalent of MATLAB’s `which` command, but the commands [`help`](https://docs.python.org/3/library/functions.html#help "(in Python v3.10)") and [`numpy.source`](../reference/generated/numpy.source#numpy.source "numpy.source") will usually list the filename where the function is located. Python also has an `inspect` module (do `import inspect`) which provides a `getfile` function that often works. **INDEXING**: MATLAB uses one based indexing, so the initial element of a sequence has index 1. Python uses zero based indexing, so the initial element of a sequence has index 0. Confusion and flamewars arise because each has advantages and disadvantages. One based indexing is consistent with common human language usage, where the “first” element of a sequence has index 1. Zero based indexing [simplifies indexing](https://groups.google.com/group/comp.lang.python/msg/1bf4d925dfbf368?q=g:thl3498076713d&hl=en). See also [a text by prof.dr. Edsger W. Dijkstra](https://www.cs.utexas.edu/users/EWD/transcriptions/EWD08xx/EWD831.html). **RANGES**: In MATLAB, `0:5` can be used as both a range literal and a ‘slice’ index (inside parentheses); however, in Python, constructs like `0:5` can *only* be used as a slice index (inside square brackets). Thus the somewhat quirky `r_` object was created to allow NumPy to have a similarly terse range construction mechanism. Note that `r_` is not called like a function or a constructor, but rather *indexed* using square brackets, which allows the use of Python’s slice syntax in the arguments. 
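The Submatrix and RANGES notes can be illustrated with a short, self-contained sketch (the array contents below are arbitrary examples, not from the original text):

```python
import numpy as np

# Submatrix assignment with np.ix_, as in the Submatrix note:
# ind=[1, 3]; a[np.ix_(ind, ind)] += 100
a = np.zeros((4, 4))
ind = [1, 3]
a[np.ix_(ind, ind)] += 100   # touches only rows 1,3 x columns 1,3

# Terse range construction with np.r_, as in the RANGES note:
v = np.r_[1.:11.]            # like MATLAB 1:10 -> [1., 2., ..., 10.]
w = np.r_[:9:10j]            # imaginary step = number of points, inclusive
```

Note that `np.r_` is indexed with square brackets rather than called, which is what allows the slice syntax in its arguments.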
**LOGICOPS**: `&` or `|` in NumPy is bitwise AND/OR, while in MATLAB `&` and `|` are logical AND/OR. The two can appear to work the same, but there are important differences. If you would have used MATLAB’s `&` or `|` operators, you should use the NumPy ufuncs `logical_and`/`logical_or`. The notable differences between MATLAB’s and NumPy’s `&` and `|` operators are: * Non-logical {0,1} inputs: NumPy’s output is the bitwise AND of the inputs. MATLAB treats any non-zero value as 1 and returns the logical AND. For example `(3 & 4)` in NumPy is `0`, while in MATLAB both `3` and `4` are considered logical true and `(3 & 4)` returns `1`. * Precedence: NumPy’s `&` operator has higher precedence than comparison operators like `<` and `>`; MATLAB’s is the reverse. If you know you have boolean arguments, you can get away with using NumPy’s bitwise operators, but be careful with parentheses, like this: `z = (x > 1) & (x < 2)`. The absence of NumPy operator forms of `logical_and` and `logical_or` is an unfortunate consequence of Python’s design. **RESHAPE and LINEAR INDEXING**: MATLAB always allows multi-dimensional arrays to be accessed using scalar or linear indices; NumPy does not. Linear indices are common in MATLAB programs, e.g. `find()` on a matrix returns them, whereas NumPy’s `nonzero()` instead returns a tuple of per-axis indices (`np.flatnonzero` gives linear indices into the flattened array). When converting MATLAB code it might be necessary to first reshape a matrix to a linear sequence, perform some indexing operations and then reshape back. As reshape (usually) produces views onto the same storage, it should be possible to do this fairly efficiently. Note that the scan order used by reshape in NumPy defaults to the ‘C’ order, whereas MATLAB uses the Fortran order. If you are simply converting to a linear sequence and back this doesn’t matter. But if you are converting reshapes from MATLAB code which relies on the scan order, then this MATLAB code: `z = reshape(x,3,4);` should become `z = x.reshape(3,4,order='F').copy()` in NumPy. ‘array’ or ‘matrix’? 
Which should I use? ---------------------------------------- Historically, NumPy has provided a special matrix type, `np.matrix`, which is a subclass of ndarray which makes binary operations linear algebra operations. You may see it used in some existing code instead of `np.array`. So, which one to use? ### Short answer **Use arrays**. * They support multidimensional array algebra that is supported in MATLAB * They are the standard vector/matrix/tensor type of NumPy. Many NumPy functions return arrays, not matrices. * There is a clear distinction between element-wise operations and linear algebra operations. * You can have standard vectors or row/column vectors if you like. Until Python 3.5 the only disadvantage of using the array type was that you had to use `dot` instead of `*` to multiply (reduce) two tensors (scalar product, matrix vector multiplication etc.). Since Python 3.5 you can use the matrix multiplication `@` operator. Given the above, we intend to deprecate `matrix` eventually. ### Long answer NumPy contains both an `array` class and a `matrix` class. The `array` class is intended to be a general-purpose n-dimensional array for many kinds of numerical computing, while `matrix` is intended to facilitate linear algebra computations specifically. In practice there are only a handful of key differences between the two. * Operators `*` and `@`, functions `dot()`, and `multiply()`: + For `array`, **``*`` means element-wise multiplication**, while **``@`` means matrix multiplication**; they have associated functions `multiply()` and `dot()`. (Before Python 3.5, `@` did not exist and one had to use `dot()` for matrix multiplication). + For `matrix`, **``*`` means matrix multiplication**, and for element-wise multiplication one has to use the `multiply()` function. * Handling of vectors (one-dimensional arrays) + For `array`, the **vector shapes 1xN, Nx1, and N are all different things**. 
Operations like `A[:,1]` return a one-dimensional array of shape N, not a two-dimensional array of shape Nx1. Transpose on a one-dimensional `array` does nothing. + For `matrix`, **one-dimensional arrays are always upconverted to 1xN or Nx1 matrices** (row or column vectors). `A[:,1]` returns a two-dimensional matrix of shape Nx1. * Handling of higher-dimensional arrays (ndim > 2) + `array` objects **can have number of dimensions > 2**; + `matrix` objects **always have exactly two dimensions**. * Convenience attributes + `array` **has a .T attribute**, which returns the transpose of the data. + `matrix` **also has .H, .I, and .A attributes**, which return the conjugate transpose, inverse, and `asarray()` of the matrix, respectively. * Convenience constructor + The `array` constructor **takes (nested) Python sequences as initializers**. As in, `array([[1,2,3],[4,5,6]])`. + The `matrix` constructor additionally **takes a convenient string initializer**. As in `matrix("[1 2 3; 4 5 6]")`. There are pros and cons to using both: * `array` + `:)` Element-wise multiplication is easy: `A*B`. + `:(` You have to remember that matrix multiplication has its own operator, `@`. + `:)` You can treat one-dimensional arrays as *either* row or column vectors. `A @ v` treats `v` as a column vector, while `v @ A` treats `v` as a row vector. This can save you having to type a lot of transposes. + `:)` `array` is the “default” NumPy type, so it gets the most testing, and is the type most likely to be returned by 3rd party code that uses NumPy. + `:)` Is quite at home handling data of any number of dimensions. + `:)` Closer in semantics to tensor algebra, if you are familiar with that. + `:)` *All* operations (`*`, `/`, `+`, `-` etc.) are element-wise. + `:(` Sparse matrices from `scipy.sparse` do not interact as well with arrays. * `matrix` + `:\\` Behavior is more like that of MATLAB matrices. + `<:(` Maximum of two-dimensional. 
To hold three-dimensional data you need `array` or perhaps a Python list of `matrix`. + `<:(` Minimum of two-dimensional. You cannot have vectors. They must be cast as single-column or single-row matrices. + `<:(` Since `array` is the default in NumPy, some functions may return an `array` even if you give them a `matrix` as an argument. This shouldn’t happen with NumPy functions (if it does it’s a bug), but 3rd party code based on NumPy may not honor type preservation like NumPy does. + `:)` `A*B` is matrix multiplication, so it looks just like you write it in linear algebra (For Python >= 3.5 plain arrays have the same convenience with the `@` operator). + `<:(` Element-wise multiplication requires calling a function, `multiply(A,B)`. + `<:(` The use of operator overloading is a bit illogical: `*` does not work element-wise but `/` does. + Interaction with `scipy.sparse` is a bit cleaner. The `array` is thus much more advisable to use. Indeed, we intend to deprecate `matrix` eventually. Customizing your environment ---------------------------- In MATLAB the main tool available to you for customizing the environment is to modify the search path with the locations of your favorite functions. You can put such customizations into a startup script that MATLAB will run on startup. NumPy, or rather Python, has similar facilities. * To modify your Python search path to include the locations of your own modules, define the `PYTHONPATH` environment variable. * To have a particular script file executed when the interactive Python interpreter is started, define the `PYTHONSTARTUP` environment variable to contain the name of your startup script. Unlike MATLAB, where anything on your path can be called immediately, with Python you need to first do an ‘import’ statement to make functions in a particular file accessible. 
For example you might make a startup script that looks like this (Note: this is just an example, not a statement of “best practices”):

```
# Make all numpy available via shorter 'np' prefix
import numpy as np
#
# Make the SciPy linear algebra functions available as linalg.func()
# e.g. linalg.lu, linalg.eig (for general l*B@u==A@u solution)
from scipy import linalg
#
# Define a Hermitian function
def hermitian(A, **kwargs):
    return np.conj(A, **kwargs).T
# Make a shortcut for hermitian:
#    hermitian(A) --> H(A)
H = hermitian
```

To use the deprecated `matrix` and other `matlib` functions:

```
# Make all matlib functions accessible at the top level via M.func()
import numpy.matlib as M
# Make some matlib functions accessible directly at the top level via, e.g. rand(3,3)
from numpy.matlib import matrix, rand, zeros, ones, empty, eye
```

Links ----- Another somewhat outdated MATLAB/NumPy cross-reference can be found at <http://mathesaurus.sf.net/>. An extensive list of tools for scientific work with Python can be found in the [topical software page](https://scipy.org/topical-software.html). See [List of Python software: scripting](https://en.wikipedia.org/wiki/List_of_Python_software#Embedded_as_a_scripting_language) for a list of software that uses Python as a scripting language. MATLAB® and Simulink® are registered trademarks of The MathWorks, Inc. © 2005–2022 NumPy Developers Licensed under the 3-clause BSD License. <https://numpy.org/doc/1.23/user/numpy-for-matlab-users.html>

Building from source ==================== There are two options for building NumPy: building with Gitpod or locally from source. Your choice depends on your operating system and familiarity with the command line. Gitpod ------ Gitpod is an open-source platform that automatically creates the correct development environment right in your browser, reducing the need to install local development environments and deal with incompatible dependencies. 
If you are a Windows user, unfamiliar with using the command line or building NumPy for the first time, it is often faster to build with Gitpod. Here are the in-depth instructions for [building NumPy with Gitpod](https://numpy.org/devdocs/dev/development_gitpod.html). Building locally ---------------- Building locally on your machine gives you more granular control. If you are a macOS or Linux user familiar with using the command line, you can continue with building NumPy locally by following the instructions below. Prerequisites ------------- Building NumPy requires the following software installed: 1. Python 3.8.x or newer Please note that the Python development headers also need to be installed, e.g., on Debian/Ubuntu one needs to install both `python3` and `python3-dev`. On Windows and macOS this is normally not an issue. 2. Compilers Much of NumPy is written in C. You will need a C compiler that complies with the C99 standard. Part of NumPy is now written in C++. You will also need a C++ compiler that complies with the C++11 standard. While a FORTRAN 77 compiler is not necessary for building NumPy, it is needed to run the `numpy.f2py` tests. These tests are skipped if the compiler is not auto-detected. Note that NumPy is developed mainly using GNU compilers and tested on MSVC and Clang compilers. Compilers from other vendors such as Intel, Absoft, Sun, NAG, Compaq, Vast, Portland, Lahey, HP, IBM are only supported in the form of community feedback, and may not work out of the box. GCC 4.x (and later) compilers are recommended. On ARM64 (aarch64) GCC 8.x (and later) are recommended. 3. Linear Algebra libraries NumPy does not require any external linear algebra libraries to be installed. However, if these are available, NumPy’s setup script can detect them and use them for building. A number of different LAPACK library setups can be used, including optimized LAPACK libraries such as OpenBLAS or MKL. 
The choice and location of these libraries as well as include paths and other such build options can be specified in a `site.cfg` file located in the NumPy root repository or a `.numpy-site.cfg` file in your home directory. See the `site.cfg.example` example file included in the NumPy repository or sdist for documentation, and below for specifying search priority from environment variables. 4. Cython For building NumPy, you’ll need a recent version of Cython. Basic Installation ------------------ To install NumPy, run: ``` pip install . ``` To perform an in-place build that can be run from the source folder run: ``` python setup.py build_ext --inplace ``` *Note: for build instructions to do development work on NumPy itself, see* [Setting up and using your development environment](../dev/development_environment#development-environment). Testing ------- Make sure to test your builds and check that all tests pass. The test suite requires additional dependencies, which can easily be installed with: ``` $ python -m pip install -r test_requirements.txt ``` Run tests: ``` $ python runtests.py -v -m full ``` For detailed info on testing, see [Testing builds](../dev/development_environment#testing-builds). ### Parallel builds It’s possible to do a parallel build with: ``` python setup.py build -j 4 install --prefix $HOME/.local ``` This will compile NumPy on 4 CPUs and install it into the specified prefix. To perform a parallel in-place build, run: ``` python setup.py build_ext --inplace -j 4 ``` The number of build jobs can also be specified via the environment variable `NPY_NUM_BUILD_JOBS`. ### Choosing the Fortran compiler Compilers are auto-detected; building with a particular compiler can be done with `--fcompiler`. E.g. 
to select gfortran: ``` python setup.py build --fcompiler=gnu95 ``` For more information see: ``` python setup.py build --help-fcompiler ``` ### How to check the ABI of BLAS/LAPACK libraries One relatively simple and reliable way to check for the compiler used to build a library is to use `ldd` on the library. If libg2c.so is a dependency, this means that g77 has been used (note: g77 is no longer supported for building NumPy). If libgfortran.so is a dependency, gfortran has been used. If both are dependencies, this means both have been used, which is almost always a very bad idea. Accelerated BLAS/LAPACK libraries --------------------------------- NumPy searches for optimized linear algebra libraries such as BLAS and LAPACK. There are specific orders for searching these libraries, as described below and in the `site.cfg.example` file. ### BLAS Note that both BLAS and CBLAS interfaces are needed for a properly optimized build of NumPy. The default order for the libraries is: 1. MKL 2. BLIS 3. OpenBLAS 4. ATLAS 5. BLAS (NetLIB) The detection of BLAS libraries may be bypassed by defining the environment variable `NPY_BLAS_LIBS`, which should contain the exact linker flags you want to use (interface is assumed to be Fortran 77). Also define `NPY_CBLAS_LIBS` (even empty if CBLAS is contained in your BLAS library) to trigger use of CBLAS and avoid slow fallback code for matrix calculations. If you wish to build against OpenBLAS but also have BLIS available, you can predefine the order of searching via the environment variable `NPY_BLAS_ORDER`, which is a comma-separated list of the above names used to determine what to search for; for instance: ``` NPY_BLAS_ORDER=ATLAS,blis,openblas,MKL python setup.py build ``` will prefer to use ATLAS, then BLIS, then OpenBLAS and as a last resort MKL. If none of these is found the build will fail (names are compared lower case). 
Alternatively one may use `!` or `^` to negate all items: ``` NPY_BLAS_ORDER='^blas,atlas' python setup.py build ``` will allow using anything **but** NetLIB BLAS and ATLAS libraries; the order of the above list is retained. One cannot mix negation and positives, nor have multiple negations; such cases will raise an error. ### LAPACK The default order for the libraries is: 1. MKL 2. OpenBLAS 3. libFLAME 4. ATLAS 5. LAPACK (NetLIB) The detection of LAPACK libraries may be bypassed by defining the environment variable `NPY_LAPACK_LIBS`, which should contain the exact linker flags you want to use (language is assumed to be Fortran 77). If you wish to build against OpenBLAS but also have MKL available, you can predefine the order of searching via the environment variable `NPY_LAPACK_ORDER`, which is a comma-separated list of the above names; for instance: ``` NPY_LAPACK_ORDER=ATLAS,openblas,MKL python setup.py build ``` will prefer to use ATLAS, then OpenBLAS and as a last resort MKL. If none of these is found the build will fail (names are compared lower case). Alternatively one may use `!` or `^` to negate all items: ``` NPY_LAPACK_ORDER='^lapack' python setup.py build ``` will allow using anything **but** the NetLIB LAPACK library; the order of the above list is retained. One cannot mix negation and positives, nor have multiple negations; such cases will raise an error. Deprecated since version 1.20: The native libraries on macOS, provided by Accelerate, are not fit for use in NumPy since they have bugs that cause wrong output under easily reproducible conditions. If the vendor fixes those bugs, the library could be reinstated, but until then users compiling for themselves should use another linear algebra library or use the built-in (but slower) default, see the next section. 
### Disabling ATLAS and other accelerated libraries Usage of ATLAS and other accelerated libraries in NumPy can be disabled via: ``` NPY_BLAS_ORDER= NPY_LAPACK_ORDER= python setup.py build ``` or: ``` BLAS=None LAPACK=None ATLAS=None python setup.py build ``` ### 64-bit BLAS and LAPACK You can tell NumPy to use 64-bit BLAS/LAPACK libraries by setting the environment variable: ``` NPY_USE_BLAS_ILP64=1 ``` when building NumPy. The following 64-bit BLAS/LAPACK libraries are supported: 1. OpenBLAS ILP64 with `64_` symbol suffix (`openblas64_`) 2. OpenBLAS ILP64 without symbol suffix (`openblas_ilp64`) The order in which they are preferred is determined by the `NPY_BLAS_ILP64_ORDER` and `NPY_LAPACK_ILP64_ORDER` environment variables. The default value is `openblas64_,openblas_ilp64`. Note Using non-symbol-suffixed 64-bit BLAS/LAPACK in a program that also uses 32-bit BLAS/LAPACK can cause crashes under certain conditions (e.g. with embedded Python interpreters on Linux). The 64-bit OpenBLAS with `64_` symbol suffix is obtained by compiling OpenBLAS with settings: ``` make INTERFACE64=1 SYMBOLSUFFIX=64_ ``` The symbol suffix avoids the symbol name clashes between 32-bit and 64-bit BLAS/LAPACK libraries. Supplying additional compiler flags ----------------------------------- Additional compiler flags can be supplied by setting the `OPT`, `FOPT` (for Fortran), and `CC` environment variables. When providing options that should improve the performance of the code ensure that you also set `-DNDEBUG` so that debugging code is not executed. Cross compilation ----------------- Although `numpy.distutils` and `setuptools` do not directly support cross compilation, it is possible to build NumPy on one system for different architectures with minor modifications to the build environment. This may be desirable, for example, to use the power of a high-performance desktop to create a NumPy package for a low-power, single-board computer. 
Because the `setup.py` scripts are unaware of cross-compilation environments and tend to make decisions based on the environment detected on the build system, it is best to compile for the same type of operating system that runs on the builder. Attempting to compile a Mac version of NumPy on Windows, for example, is likely to be met with challenges not considered here. For the purpose of this discussion, the nomenclature adopted by [meson](https://mesonbuild.com/Cross-compilation.html#cross-compilation) will be used: the “build” system is that which will be running the NumPy build process, while the “host” is the platform on which the compiled package will be run. A native Python interpreter, the setuptools and Cython packages and the desired cross compiler must be available for the build system. In addition, a Python interpreter and its development headers as well as any external linear algebra libraries must be available for the host platform. For convenience, it is assumed that all host software is available under a separate prefix directory, here called `$CROSS_PREFIX`. When building and installing NumPy for a host system, the `CC` environment variable must provide the path to the cross compiler that will be used to build NumPy C extensions. It may also be necessary to set the `LDSHARED` environment variable to the path to the linker that can link compiled objects for the host system. The compiler must be told where it can find Python libraries and development headers. On Unix-like systems, this generally requires adding, *e.g.*, the following parameters to the `CFLAGS` environment variable: ``` -I${CROSS_PREFIX}/usr/include -I${CROSS_PREFIX}/usr/include/python3.y ``` for Python version 3.y. (Replace the “y” in this path with the actual minor number of the installed Python runtime.) 
Likewise, the linker should be told where to find host libraries by adding a parameter to the `LDFLAGS` environment variable: ``` -L${CROSS_PREFIX}/usr/lib ``` To make sure Python-specific system configuration options are provided for the intended host and not the build system, set: ``` _PYTHON_SYSCONFIGDATA_NAME=_sysconfigdata_${ARCH_TRIPLET} ``` where `${ARCH_TRIPLET}` is an architecture-dependent suffix appropriate for the host architecture. (This should be the name of a `_sysconfigdata` file, without the `.py` extension, found in the host Python library directory.) When using external linear algebra libraries, include and library directories should be provided for the desired libraries in `site.cfg` as described above and in the comments of the `site.cfg.example` file included in the NumPy repository or sdist. In this example, set: ``` include_dirs = ${CROSS_PREFIX}/usr/include library_dirs = ${CROSS_PREFIX}/usr/lib ``` under appropriate sections of the file to allow `numpy.distutils` to find the libraries. As of NumPy 1.22.0, a vendored copy of SVML will be built on `x86_64` Linux hosts to provide AVX-512 acceleration of floating-point operations. When using an `x86_64` Linux build system to cross compile NumPy for hosts other than `x86_64` Linux, set the environment variable `NPY_DISABLE_SVML` to prevent the NumPy build script from incorrectly attempting to cross-compile this platform-specific library: ``` NPY_DISABLE_SVML=1 ``` With the environment configured, NumPy may be built as it is natively: ``` python setup.py build ``` When the `wheel` package is available, the cross-compiled package may be packed into a wheel for installation on the host with: ``` python setup.py bdist_wheel ``` It may be possible to use `pip` to build a wheel, but `pip` configures its own environment; adapting the `pip` environment to cross-compilation is beyond the scope of this guide. 
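Gathering the environment configuration described in this section into one place, a minimal sketch might look like the following. The compiler and linker names are placeholder assumptions, and `$CROSS_PREFIX`, the Python minor version `y`, and `${ARCH_TRIPLET}` must be adapted to your actual host platform:

```shell
# Toolchain for the host platform (placeholder names -- substitute your own)
export CC=aarch64-linux-gnu-gcc
export LDSHARED="aarch64-linux-gnu-gcc -shared"

# Host Python headers and libraries under $CROSS_PREFIX
export CFLAGS="-I${CROSS_PREFIX}/usr/include -I${CROSS_PREFIX}/usr/include/python3.y"
export LDFLAGS="-L${CROSS_PREFIX}/usr/lib"

# Use the host's sysconfigdata, not the build system's
export _PYTHON_SYSCONFIGDATA_NAME=_sysconfigdata_${ARCH_TRIPLET}

# Only when cross compiling from x86_64 Linux to a non-x86_64 host:
export NPY_DISABLE_SVML=1

python setup.py build
python setup.py bdist_wheel   # requires the wheel package
```

This is an environment sketch rather than a turnkey script; each variable is explained in the surrounding text.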
The cross-compiled package may also be installed into the host prefix for cross-compilation of other packages using, *e.g.*, the command: ``` python setup.py install --prefix=${CROSS_PREFIX} ``` When cross compiling other packages that depend on NumPy, the host npy-pkg-config file must be made available. For further discussion, refer to [numpy distutils documentation](https://numpy.org/devdocs/reference/distutils.html#numpy.distutils.misc_util.Configuration.add_npy_pkg_config).

Using NumPy C-API ================= * [How to extend NumPy](c-info.how-to-extend) + [Writing an extension module](c-info.how-to-extend#writing-an-extension-module) + [Required subroutine](c-info.how-to-extend#required-subroutine) + [Defining functions](c-info.how-to-extend#defining-functions) - [Functions without keyword arguments](c-info.how-to-extend#functions-without-keyword-arguments) - [Functions with keyword arguments](c-info.how-to-extend#functions-with-keyword-arguments) - [Reference counting](c-info.how-to-extend#reference-counting) + [Dealing with array objects](c-info.how-to-extend#dealing-with-array-objects) - [Converting an arbitrary sequence object](c-info.how-to-extend#converting-an-arbitrary-sequence-object) - [Creating a brand-new ndarray](c-info.how-to-extend#creating-a-brand-new-ndarray) - [Getting at ndarray memory and accessing elements of the ndarray](c-info.how-to-extend#getting-at-ndarray-memory-and-accessing-elements-of-the-ndarray) + [Example](c-info.how-to-extend#example) * [Using Python as glue](c-info.python-as-glue) + [Calling other compiled libraries from Python](c-info.python-as-glue#calling-other-compiled-libraries-from-python) + [Hand-generated wrappers](c-info.python-as-glue#hand-generated-wrappers) + [f2py](c-info.python-as-glue#f2py) + [Cython](c-info.python-as-glue#cython) - [Complex addition in 
Cython](c-info.python-as-glue#complex-addition-in-cython) - [Image filter in Cython](c-info.python-as-glue#image-filter-in-cython) - [Conclusion](c-info.python-as-glue#conclusion) + [ctypes](c-info.python-as-glue#index-2) - [Having a shared library](c-info.python-as-glue#having-a-shared-library) - [Loading the shared library](c-info.python-as-glue#loading-the-shared-library) - [Converting arguments](c-info.python-as-glue#converting-arguments) - [Calling the function](c-info.python-as-glue#calling-the-function) - [Complete example](c-info.python-as-glue#complete-example) - [Conclusion](c-info.python-as-glue#id4) + [Additional tools you may find useful](c-info.python-as-glue#additional-tools-you-may-find-useful) - [SWIG](c-info.python-as-glue#swig) - [SIP](c-info.python-as-glue#sip) - [Boost Python](c-info.python-as-glue#boost-python) - [PyFort](c-info.python-as-glue#pyfort) * [Writing your own ufunc](c-info.ufunc-tutorial) + [Creating a new universal function](c-info.ufunc-tutorial#creating-a-new-universal-function) + [Example Non-ufunc extension](c-info.ufunc-tutorial#example-non-ufunc-extension) + [Example NumPy ufunc for one dtype](c-info.ufunc-tutorial#example-numpy-ufunc-for-one-dtype) + [Example NumPy ufunc with multiple dtypes](c-info.ufunc-tutorial#example-numpy-ufunc-with-multiple-dtypes) + [Example NumPy ufunc with multiple arguments/return values](c-info.ufunc-tutorial#example-numpy-ufunc-with-multiple-arguments-return-values) + [Example NumPy ufunc with structured array dtype arguments](c-info.ufunc-tutorial#example-numpy-ufunc-with-structured-array-dtype-arguments) * [Beyond the Basics](c-info.beyond-basics) + [Iterating over elements in the array](c-info.beyond-basics#iterating-over-elements-in-the-array) - [Basic Iteration](c-info.beyond-basics#basic-iteration) - [Iterating over all but one axis](c-info.beyond-basics#iterating-over-all-but-one-axis) - [Iterating over multiple arrays](c-info.beyond-basics#iterating-over-multiple-arrays) - [Broadcasting 
over multiple arrays](c-info.beyond-basics#broadcasting-over-multiple-arrays) + [User-defined data-types](c-info.beyond-basics#user-defined-data-types) - [Adding the new data-type](c-info.beyond-basics#adding-the-new-data-type) - [Registering a casting function](c-info.beyond-basics#registering-a-casting-function) - [Registering coercion rules](c-info.beyond-basics#registering-coercion-rules) - [Registering a ufunc loop](c-info.beyond-basics#registering-a-ufunc-loop) + [Subtyping the ndarray in C](c-info.beyond-basics#subtyping-the-ndarray-in-c) - [Creating sub-types](c-info.beyond-basics#creating-sub-types) - [Specific features of ndarray sub-typing](c-info.beyond-basics#specific-features-of-ndarray-sub-typing) * [The __array_finalize__ method](c-info.beyond-basics#the-array-finalize-method) * [The __array_priority__ attribute](c-info.beyond-basics#the-array-priority-attribute) * [The __array_wrap__ method](c-info.beyond-basics#the-array-wrap-method) © 2005–2022 NumPy Developers Licensed under the 3-clause BSD License. <https://numpy.org/doc/1.23/user/c-info.htmlNumPy How Tos ============= These documents are intended as recipes to common tasks using NumPy. For detailed reference documentation of the functions and classes contained in the package, see the [API reference](../reference/index#reference). * [How to write a NumPy how-to](how-to-how-to) * [Reading and writing files](how-to-io) * [How to index `ndarrays`](how-to-index) © 2005–2022 NumPy Developers Licensed under the 3-clause BSD License. <https://numpy.org/doc/1.23/user/howtos_index.htmlFor downstream package authors ============================== This document aims to explain some best practices for authoring a package that depends on NumPy. Understanding NumPy’s versioning and API/ABI stability ------------------------------------------------------ NumPy uses a standard, [**PEP 440**](https://peps.python.org/pep-0440/) compliant, versioning scheme: `major.minor.bugfix`. 
A *major* release is highly unusual (NumPy is still at version `1.xx`) and if it happens it will likely indicate an ABI break. *Minor* versions are released regularly, typically every 6 months. Minor versions contain new features, deprecations, and removals of previously deprecated code. *Bugfix* releases are made even more frequently; they do not contain any new features or deprecations.

It is important to know that NumPy, like Python itself and most other well-known scientific Python projects, does **not** use semantic versioning. Instead, backwards incompatible API changes require deprecation warnings for at least two releases. For more details, see [NEP 23 — Backwards compatibility and deprecation policy](https://numpy.org/neps/nep-0023-backwards-compatibility.html#nep23 "(in NumPy Enhancement Proposals)").

NumPy has both a Python API and a C API. The C API can be used directly or via Cython, f2py, or other such tools. If your package uses the C API, then ABI (application binary interface) stability of NumPy is important. NumPy’s ABI is forward but not backward compatible. This means: binaries compiled against a given version of NumPy will still run correctly with newer NumPy versions, but not with older versions.

Testing against the NumPy main branch or pre-releases
-----------------------------------------------------

For large, actively maintained packages that depend on NumPy, we recommend testing against the development version of NumPy in CI. To make this easy, nightly builds are provided as wheels at <https://anaconda.org/scipy-wheels-nightly/>. This helps detect regressions in NumPy that need fixing before the next NumPy release. Furthermore, we recommend raising errors on warnings in CI for this job, either for all warnings or at least for `DeprecationWarning` and `FutureWarning`. This gives you early warning of changes in NumPy, so you can adapt your code.
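The warning-escalation part of such a CI job can be sketched as follows; the nightly index URL form and the pytest invocation are assumptions to adapt to your project, not an official recipe:

```shell
# In CI, first install a nightly NumPy wheel (assumed index URL form):
#   pip install --pre --upgrade \
#       --extra-index-url https://pypi.anaconda.org/scipy-wheels-nightly/simple numpy
# Then run the test suite with the key warning categories escalated to errors,
# e.g.:
#   python -m pytest -W error::DeprecationWarning -W error::FutureWarning
# The same -W flags work for plain python; this one-liner shows a
# DeprecationWarning becoming a hard failure:
python -W error::DeprecationWarning -c \
    "import warnings; warnings.warn('deprecated API', DeprecationWarning)" \
    || echo "DeprecationWarning escalated to an error"
```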
Adding a dependency on NumPy
----------------------------

### Build-time dependency

If a package uses the NumPy C API directly, or uses some other tool that depends on it (such as Cython or Pythran), NumPy is a *build-time* dependency of the package. Because the NumPy ABI is only forward compatible, you must build your own binaries (wheels or other package formats) against the lowest NumPy version that you support (or an even older version).

Picking the correct NumPy version to build against for each Python version and platform can get complicated. There are a couple of ways to do this. Build-time dependencies are specified in `pyproject.toml` (see PEP 518), which is the file used to build wheels by PEP 517 compliant tools (e.g., when using `pip wheel`). You can specify everything manually in `pyproject.toml`, or you can instead rely on the [oldest-supported-numpy](https://github.com/scipy/oldest-supported-numpy/) metapackage. `oldest-supported-numpy` will specify the correct NumPy version at build time for wheels, taking into account Python version, Python implementation (CPython or PyPy), operating system and hardware platform. It will specify the oldest NumPy version that supports that combination of characteristics. Note: for platforms for which NumPy provides wheels on PyPI, it will be the first version with wheels (even if some older NumPy version happens to build).

For conda-forge it’s a little less complicated: there’s dedicated handling for NumPy in build-time and runtime dependencies, so typically this is enough (see [here](https://conda-forge.org/docs/maintainer/knowledge_base.html#building-against-numpy) for docs):

```
host:
  - numpy
run:
  - {{ pin_compatible('numpy') }}
```

Note that `pip` has `--no-use-pep517` and `--no-build-isolation` flags that may ignore `pyproject.toml` or treat it differently; if users use those flags, they are responsible for installing the correct build dependencies themselves.
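Putting this together, a minimal `pyproject.toml` build-requirements sketch using the metapackage might look like the following (written out with a heredoc so the example is self-contained; the backend choice is an assumption):

```shell
cat > pyproject.toml <<'EOF'
[build-system]
requires = [
    "setuptools",
    "wheel",
    "oldest-supported-numpy",
]
build-backend = "setuptools.build_meta"
EOF
grep 'numpy' pyproject.toml
```

At build time, `oldest-supported-numpy` then resolves to an exact `numpy==...` pin appropriate for the interpreter and platform doing the build.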
`conda` will always use `--no-build-isolation`; dependencies for conda builds are given in the conda recipe (`meta.yaml`), and the ones in `pyproject.toml` have no effect.

Please do not use `setup_requires` (it is deprecated and may invoke `easy_install`).

Because you have to care about ABI compatibility with NumPy, specify its version with `==` to the lowest supported version. For your other build dependencies you can probably be looser; however, it’s still important to set lower and upper bounds for each dependency. It’s fine to specify either a range or a specific version for a dependency like `wheel` or `setuptools`. It’s recommended to set the upper bound of the range to the latest already released version of `wheel` and `setuptools`; this prevents future releases from breaking your packages on PyPI.

### Runtime dependency & version ranges

NumPy itself and many core scientific Python packages have agreed on a schedule for dropping support for old Python and NumPy versions: [NEP 29 — Recommend Python and NumPy version support as a community policy standard](https://numpy.org/neps/nep-0029-deprecation_policy.html#nep29 "(in NumPy Enhancement Proposals)"). We recommend that all packages depending on NumPy follow the recommendations in NEP 29.

For *run-time dependencies*, specify version bounds using `install_requires` in `setup.py` (assuming you use `numpy.distutils` or `setuptools` to build). Most libraries that rely on NumPy will not need to set an upper version bound: NumPy is careful to preserve backward-compatibility. That said, if you (a) maintain a project that is guaranteed to release frequently, (b) use a large part of NumPy’s API surface, and (c) are worried that changes in NumPy may break your code, you can set an upper bound of `<MAJOR.MINOR + N` with N no less than 3, and `MAJOR.MINOR` being the current release of NumPy [*](#id3). If you use the NumPy C API (directly or via Cython), you can also pin the current major version to prevent ABI breakage.
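As a concrete sketch of these bounds, assume (hypothetically) that the lowest NumPy version the package supports is 1.19 and the current NumPy release is 1.23, so `N=3` gives an upper bound of `<1.26`; the package name and all version numbers are placeholders:

```shell
cat > setup.py <<'EOF'
from setuptools import setup

setup(
    name="mypkg",  # placeholder project
    version="0.1.0",
    install_requires=[
        # lower bound: lowest supported NumPy; optional upper bound:
        # current minor + 3 (see the caveats discussed in the text)
        "numpy>=1.19.0,<1.26.0",
    ],
)
EOF
grep 'numpy>=' setup.py
```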
Note that setting an upper bound on NumPy may [affect the ability of your library to be installed alongside other, newer packages](https://iscinumpy.dev/post/bound-version-constraints/).

[*](#id2) The reason for setting `N=3` is that NumPy will, on the rare occasion where it makes breaking changes, raise warnings for at least two releases. (NumPy releases about once every six months, so this translates to a window of at least a year; hence the subsequent requirement that your project releases at least on that cadence.)

Note that SciPy has more documentation on how it builds wheels and deals with its build-time and runtime dependencies [here](https://scipy.github.io/devdocs/dev/core-dev/index.html#distributing). NumPy and SciPy wheel build CI may also be useful as a reference; it can be found [here for NumPy](https://github.com/MacPython/numpy-wheels) and [here for SciPy](https://github.com/MacPython/scipy-wheels).

F2PY user guide and reference manual
====================================

The purpose of the `F2PY` –*Fortran to Python interface generator*– utility is to provide a connection between Python and Fortran. F2PY is a part of [NumPy](https://www.numpy.org/) (`numpy.f2py`) and is also available as a standalone command line tool.

F2PY facilitates creating/building Python C/API extension modules that make it possible

* to call Fortran 77/90/95 external subroutines and Fortran 90/95 module subroutines as well as C functions;
* to access Fortran 77 `COMMON` blocks and Fortran 90/95 module data, including allocatable arrays, from Python.

F2PY can be used either as a command line tool `f2py` or as a Python module `numpy.f2py`. While we try to provide the command line tool as part of the numpy setup, some platforms like Windows make it difficult to reliably put the executables on the `PATH`.
If the `f2py` command is not available in your system, you may have to run it as a module:

```
python -m numpy.f2py
```

If you run `f2py` with no arguments, and the line `numpy Version` at the end matches the NumPy version printed from `python -m numpy.f2py`, then you can use the shorter version. If not, or if you cannot run `f2py`, you should replace all calls to `f2py` mentioned in this guide with the longer version.

* [Three ways to wrap - getting started](f2py.getting-started)
  + [The quick way](f2py.getting-started#the-quick-way)
  + [The smart way](f2py.getting-started#the-smart-way)
  + [The quick and smart way](f2py.getting-started#the-quick-and-smart-way)
* [F2PY user guide](f2py-user)
  + [Three ways to wrap - getting started](f2py.getting-started)
    - [The quick way](f2py.getting-started#the-quick-way)
    - [The smart way](f2py.getting-started#the-smart-way)
    - [The quick and smart way](f2py.getting-started#the-quick-and-smart-way)
  + [Using F2PY](usage)
    - [Using `f2py` as a command-line tool](usage#using-f2py-as-a-command-line-tool)
    - [Python module `numpy.f2py`](usage#python-module-numpy-f2py)
    - [Automatic extension module generation](usage#automatic-extension-module-generation)
  + [F2PY examples](f2py-examples)
    - [F2PY walkthrough: a basic extension module](f2py-examples#f2py-walkthrough-a-basic-extension-module)
    - [A filtering example](f2py-examples#a-filtering-example)
    - [`depends` keyword example](f2py-examples#depends-keyword-example)
    - [Read more](f2py-examples#read-more)
* [F2PY reference manual](f2py-reference)
  + [Signature file](signature-file)
    - [Signature files syntax](signature-file#signature-files-syntax)
  + [Using F2PY bindings in Python](python-usage)
    - [Fortran type objects](python-usage#fortran-type-objects)
    - [Scalar arguments](python-usage#scalar-arguments)
    - [String arguments](python-usage#string-arguments)
    - [Array arguments](python-usage#array-arguments)
    - [Call-back arguments](python-usage#call-back-arguments)
    - [Common blocks](python-usage#common-blocks)
    - [Fortran 90 module data](python-usage#fortran-90-module-data)
    - [Allocatable arrays](python-usage#allocatable-arrays)
  + [F2PY and Build Systems](buildtools/index)
    - [Basic Concepts](buildtools/index#basic-concepts)
    - [Build Systems](buildtools/index#build-systems)
  + [Advanced F2PY use cases](advanced)
    - [Adding user-defined functions to F2PY generated modules](advanced#adding-user-defined-functions-to-f2py-generated-modules)
    - [Adding user-defined variables](advanced#adding-user-defined-variables)
    - [Dealing with KIND specifiers](advanced#dealing-with-kind-specifiers)
  + [F2PY test suite](f2py-testing)
    - [Adding a test](f2py-testing#adding-a-test)
* [Using F2PY](usage)
  + [Using `f2py` as a command-line tool](usage#using-f2py-as-a-command-line-tool)
    - [1. Signature file generation](usage#signature-file-generation)
    - [2. Extension module construction](usage#extension-module-construction)
    - [3. Building a module](usage#building-a-module)
    - [Other options](usage#other-options)
  + [Python module `numpy.f2py`](usage#python-module-numpy-f2py)
  + [Automatic extension module generation](usage#automatic-extension-module-generation)
* [Using F2PY bindings in Python](python-usage)
  + [Fortran type objects](python-usage#fortran-type-objects)
  + [Scalar arguments](python-usage#scalar-arguments)
  + [String arguments](python-usage#string-arguments)
  + [Array arguments](python-usage#array-arguments)
  + [Call-back arguments](python-usage#call-back-arguments)
    - [Resolving arguments to call-back functions](python-usage#resolving-arguments-to-call-back-functions)
  + [Common blocks](python-usage#common-blocks)
  + [Fortran 90 module data](python-usage#fortran-90-module-data)
  + [Allocatable arrays](python-usage#allocatable-arrays)
* [Signature file](signature-file)
  + [Signature files syntax](signature-file#signature-files-syntax)
    - [Python module block](signature-file#python-module-block)
    - [Fortran/C routine signatures](signature-file#fortran-c-routine-signatures)
    - [Type declarations](signature-file#type-declarations)
    - [Statements](signature-file#statements)
    - [Attributes](signature-file#attributes)
    - [Extensions](signature-file#extensions)
* [F2PY and Build Systems](buildtools/index)
  + [Basic Concepts](buildtools/index#basic-concepts)
  + [Build Systems](buildtools/index#build-systems)
    - [Using via `numpy.distutils`](buildtools/distutils)
    - [Using via `meson`](buildtools/meson)
    - [Using via `cmake`](buildtools/cmake)
    - [Using via `scikit-build`](buildtools/skbuild)
* [Advanced F2PY use cases](advanced)
  + [Adding user-defined functions to F2PY generated modules](advanced#adding-user-defined-functions-to-f2py-generated-modules)
  + [Adding user-defined variables](advanced#adding-user-defined-variables)
  + [Dealing with KIND specifiers](advanced#dealing-with-kind-specifiers)
* [F2PY and Windows](windows/index)
  + [Overview](windows/index#overview)
  + [Baseline](windows/index#baseline)
  + [Powershell and MSVC](windows/index#powershell-and-msvc)
  + [Windows Store Python Paths](windows/index#windows-store-python-paths)
    - [F2PY and Windows Intel Fortran](windows/intel)
    - [F2PY and Windows with MSYS2](windows/msys2)
    - [F2PY and Conda on Windows](windows/conda)
    - [F2PY and PGI Fortran on Windows](windows/pgi)

Glossary
========

(`n`,)
A parenthesized number followed by a comma denotes a tuple with one element. The trailing comma distinguishes a one-element tuple from a parenthesized `n`.

-1
* **In a dimension entry**, instructs NumPy to choose the length that will keep the total number of array elements the same.

```
>>> np.arange(12).reshape(4, -1).shape
(4, 3)
```

* **In an index**, any negative value [denotes](https://docs.python.org/dev/faq/programming.html#what-s-a-negative-index) indexing from the right.
`…`
An [`Ellipsis`](https://docs.python.org/3/library/constants.html#Ellipsis "(in Python v3.10)").

* **When indexing an array**, shorthand that the missing axes, if they exist, are full slices.

```
>>> a = np.arange(24).reshape(2,3,4)
```

```
>>> a[...].shape
(2, 3, 4)
```

```
>>> a[...,0].shape
(2, 3)
```

```
>>> a[0,...].shape
(3, 4)
```

```
>>> a[0,...,0].shape
(3,)
```

It can be used at most once; `a[...,0,...]` raises an [`IndexError`](https://docs.python.org/3/library/exceptions.html#IndexError "(in Python v3.10)").

* **In printouts**, NumPy substitutes `...` for the middle elements of large arrays. To see the entire array, use [`numpy.printoptions`](reference/generated/numpy.printoptions#numpy.printoptions "numpy.printoptions").

`:`
The Python [slice](https://docs.python.org/3/glossary.html#term-slice "(in Python v3.10)") operator. In ndarrays, slicing can be applied to every axis:

```
>>> a = np.arange(24).reshape(2,3,4)
>>> a
array([[[ 0,  1,  2,  3],
        [ 4,  5,  6,  7],
        [ 8,  9, 10, 11]],

       [[12, 13, 14, 15],
        [16, 17, 18, 19],
        [20, 21, 22, 23]]])
>>> a[1:,-2:,:-1]
array([[[16, 17, 18],
        [20, 21, 22]]])
```

Trailing slices can be omitted:

```
>>> a[1] == a[1,:,:]
array([[ True,  True,  True,  True],
       [ True,  True,  True,  True],
       [ True,  True,  True,  True]])
```

In contrast to Python, where slicing creates a copy, in NumPy slicing creates a [view](#term-view). For details, see [Combining advanced and basic indexing](user/basics.indexing#combining-advanced-and-basic-indexing).

`<`
In a dtype declaration, indicates that the data is [little-endian](#term-little-endian) (the bracket is big on the right).

```
>>> dt = np.dtype('<f')  # little-endian single-precision float
```

`>`
In a dtype declaration, indicates that the data is [big-endian](#term-big-endian) (the bracket is big on the left).
``` >>> dt = np.dtype('>H') # big-endian unsigned short ``` advanced indexing Rather than using a [scalar](reference/arrays.scalars) or slice as an index, an axis can be indexed with an array, providing fine-grained selection. This is known as [advanced indexing](user/basics.indexing#advanced-indexing) or “fancy indexing”. along an axis An operation `along axis n` of array `a` behaves as if its argument were an array of slices of `a` where each slice has a successive index of axis `n`. For example, if `a` is a 3 x `N` array, an operation along axis 0 behaves as if its argument were an array containing slices of each row: ``` >>> np.array((a[0,:], a[1,:], a[2,:])) ``` To make it concrete, we can pick the operation to be the array-reversal function [`numpy.flip`](reference/generated/numpy.flip#numpy.flip "numpy.flip"), which accepts an `axis` argument. We construct a 3 x 4 array `a`: ``` >>> a = np.arange(12).reshape(3,4) >>> a array([[ 0, 1, 2, 3], [ 4, 5, 6, 7], [ 8, 9, 10, 11]]) ``` Reversing along axis 0 (the row axis) yields ``` >>> np.flip(a,axis=0) array([[ 8, 9, 10, 11], [ 4, 5, 6, 7], [ 0, 1, 2, 3]]) ``` Recalling the definition of `along an axis`, `flip` along axis 0 is treating its argument as if it were ``` >>> np.array((a[0,:], a[1,:], a[2,:])) array([[ 0, 1, 2, 3], [ 4, 5, 6, 7], [ 8, 9, 10, 11]]) ``` and the result of `np.flip(a,axis=0)` is to reverse the slices: ``` >>> np.array((a[2,:],a[1,:],a[0,:])) array([[ 8, 9, 10, 11], [ 4, 5, 6, 7], [ 0, 1, 2, 3]]) ``` array Used synonymously in the NumPy docs with [ndarray](#term-ndarray). array_like Any [scalar](reference/arrays.scalars) or [sequence](https://docs.python.org/3/glossary.html#term-sequence "(in Python v3.10)") that can be interpreted as an ndarray. In addition to ndarrays and scalars this category includes lists (possibly nested and with different element types) and tuples. Any argument accepted by [numpy.array](reference/generated/numpy.array) is array_like. 
``` >>> a = np.array([[1, 2.0], [0, 0], (1+1j, 3.)]) >>> a array([[1.+0.j, 2.+0.j], [0.+0.j, 0.+0.j], [1.+1.j, 3.+0.j]]) ``` array scalar An [array scalar](reference/arrays.scalars) is an instance of the types/classes float32, float64, etc.. For uniformity in handling operands, NumPy treats a scalar as an array of zero dimension. In contrast, a 0-dimensional array is an [ndarray](reference/arrays.ndarray) instance containing precisely one value. axis Another term for an array dimension. Axes are numbered left to right; axis 0 is the first element in the shape tuple. In a two-dimensional vector, the elements of axis 0 are rows and the elements of axis 1 are columns. In higher dimensions, the picture changes. NumPy prints higher-dimensional vectors as replications of row-by-column building blocks, as in this three-dimensional vector: ``` >>> a = np.arange(12).reshape(2,2,3) >>> a array([[[ 0, 1, 2], [ 3, 4, 5]], [[ 6, 7, 8], [ 9, 10, 11]]]) ``` `a` is depicted as a two-element array whose elements are 2x3 vectors. From this point of view, rows and columns are the final two axes, respectively, in any shape. This rule helps you anticipate how a vector will be printed, and conversely how to find the index of any of the printed elements. For instance, in the example, the last two values of 8’s index must be 0 and 2. Since 8 appears in the second of the two 2x3’s, the first index must be 1: ``` >>> a[1,0,2] 8 ``` A convenient way to count dimensions in a printed vector is to count `[` symbols after the open-parenthesis. This is useful in distinguishing, say, a (1,2,3) shape from a (2,3) shape: ``` >>> a = np.arange(6).reshape(2,3) >>> a.ndim 2 >>> a array([[0, 1, 2], [3, 4, 5]]) ``` ``` >>> a = np.arange(6).reshape(1,2,3) >>> a.ndim 3 >>> a array([[[0, 1, 2], [3, 4, 5]]]) ``` .base If an array does not own its memory, then its [base](reference/generated/numpy.ndarray.base) attribute returns the object whose memory the array is referencing. 
That object may be referencing the memory from still another object, so the owning object may be `a.base.base.base...`. Some writers erroneously claim that testing `base` determines if arrays are [view](#term-view)s. For the correct way, see [`numpy.shares_memory`](reference/generated/numpy.shares_memory#numpy.shares_memory "numpy.shares_memory").

big-endian
See [Endianness](https://en.wikipedia.org/wiki/Endianness).

BLAS
[Basic Linear Algebra Subprograms](https://en.wikipedia.org/wiki/Basic_Linear_Algebra_Subprograms)

broadcast
*broadcasting* is NumPy’s ability to process ndarrays of different sizes as if all were the same size. It permits an elegant do-what-I-mean behavior where, for instance, adding a scalar to a vector adds the scalar value to every element.

```
>>> a = np.arange(3)
>>> a
array([0, 1, 2])
```

```
>>> a + [3, 3, 3]
array([3, 4, 5])
```

```
>>> a + 3
array([3, 4, 5])
```

Ordinarily, vector operands must all be the same size, because NumPy works element by element – for instance, `c = a * b` is

```
c[0,0,0] = a[0,0,0] * b[0,0,0]
c[0,0,1] = a[0,0,1] * b[0,0,1]
...
```

But in certain useful cases, NumPy can duplicate data along “missing” axes or “too-short” dimensions so shapes will match. The duplication costs no memory or time. For details, see [Broadcasting.](user/basics.broadcasting)

C order
Same as [row-major](#term-row-major).

column-major
See [Row- and column-major order](https://en.wikipedia.org/wiki/Row-_and_column-major_order).

contiguous
An array is contiguous if:

* it occupies an unbroken block of memory, and
* array elements with higher indexes occupy higher addresses (that is, no [stride](#term-stride) is negative).

There are two types of proper-contiguous NumPy arrays:

* Fortran-contiguous arrays refer to data that is stored column-wise, i.e. the indexing of data as stored in memory starts from the lowest dimension;
* C-contiguous, or simply contiguous arrays, refer to data that is stored row-wise, i.e.
the indexing of data as stored in memory starts from the highest dimension. For one-dimensional arrays these notions coincide. For example, a 2x2 array `A` is Fortran-contiguous if its elements are stored in memory in the following order: ``` A[0,0] A[1,0] A[0,1] A[1,1] ``` and C-contiguous if the order is as follows: ``` A[0,0] A[0,1] A[1,0] A[1,1] ``` To test whether an array is C-contiguous, use the `.flags.c_contiguous` attribute of NumPy arrays. To test for Fortran contiguity, use the `.flags.f_contiguous` attribute. copy See [view](#term-view). dimension See [axis](#term-axis). dtype The datatype describing the (identically typed) elements in an ndarray. It can be changed to reinterpret the array contents. For details, see [Data type objects (dtype).](reference/arrays.dtypes) fancy indexing Another term for [advanced indexing](#term-advanced-indexing). field In a [structured data type](#term-structured-data-type), each subtype is called a `field`. The `field` has a name (a string), a type (any valid dtype), and an optional `title`. See [Data type objects (dtype)](reference/arrays.dtypes#arrays-dtypes). Fortran order Same as [column-major](#term-column-major). flattened See [ravel](#term-ravel). homogeneous All elements of a homogeneous array have the same type. ndarrays, in contrast to Python lists, are homogeneous. The type can be complicated, as in a [structured array](#term-structured-array), but all elements have that type. NumPy [object arrays](#term-object-array), which contain references to Python objects, fill the role of heterogeneous arrays. itemsize The size of the dtype element in bytes. little-endian See [Endianness](https://en.wikipedia.org/wiki/Endianness). 
mask
A boolean array used to select only certain elements for an operation:

```
>>> x = np.arange(5)
>>> x
array([0, 1, 2, 3, 4])
```

```
>>> mask = (x > 2)
>>> mask
array([False, False, False,  True,  True])
```

```
>>> x[mask] = -1
>>> x
array([ 0,  1,  2, -1, -1])
```

masked array
Bad or missing data can be cleanly ignored by putting it in a masked array, which has an internal boolean array indicating invalid entries. Operations with masked arrays ignore these entries.

```
>>> a = np.ma.masked_array([np.nan, 2, np.nan], [True, False, True])
>>> a
masked_array(data=[--, 2.0, --],
             mask=[ True, False,  True],
       fill_value=1e+20)
>>> a + [1, 2, 3]
masked_array(data=[--, 4.0, --],
             mask=[ True, False,  True],
       fill_value=1e+20)
```

For details, see [Masked arrays.](reference/maskedarray)

matrix
NumPy’s two-dimensional [matrix class](reference/generated/numpy.matrix) should no longer be used; use regular ndarrays.

ndarray
[NumPy’s basic structure](reference/arrays).

object array
An array whose dtype is `object`; that is, it contains references to Python objects. Indexing the array dereferences the Python objects, so unlike other ndarrays, an object array has the ability to hold heterogeneous objects.

ravel
[numpy.ravel](reference/generated/numpy.ravel) and [numpy.flatten](reference/generated/numpy.ndarray.flatten) both flatten an ndarray. `ravel` will return a view if possible; `flatten` always returns a copy. Flattening collapses a multidimensional array to a single dimension; details of how this is done (for instance, whether `a[n+1]` should be the next row or next column) are controlled by function parameters.

record array
A [structured array](#term-structured-array) that also allows access in an attribute style (`a.field`) in addition to `a['field']`. For details, see [numpy.recarray.](reference/generated/numpy.recarray)

row-major
See [Row- and column-major order](https://en.wikipedia.org/wiki/Row-_and_column-major_order). NumPy creates arrays in row-major order by default.
scalar In NumPy, usually a synonym for [array scalar](#term-array-scalar). shape A tuple showing the length of each dimension of an ndarray. The length of the tuple itself is the number of dimensions ([numpy.ndim](reference/generated/numpy.ndarray.ndim)). The product of the tuple elements is the number of elements in the array. For details, see [numpy.ndarray.shape](reference/generated/numpy.ndarray.shape). stride Physical memory is one-dimensional; strides provide a mechanism to map a given index to an address in memory. For an N-dimensional array, its `strides` attribute is an N-element tuple; advancing from index `i` to index `i+1` on axis `n` means adding `a.strides[n]` bytes to the address. Strides are computed automatically from an array’s dtype and shape, but can be directly specified using [as_strided.](reference/generated/numpy.lib.stride_tricks.as_strided) For details, see [numpy.ndarray.strides](reference/generated/numpy.ndarray.strides). To see how striding underlies the power of NumPy views, see [The NumPy array: a structure for efficient numerical computation.](https://arxiv.org/pdf/1102.1523.pdf) structured array Array whose [dtype](#term-dtype) is a [structured data type](#term-structured-data-type). structured data type Users can create arbitrarily complex [dtypes](#term-dtype) that can include other arrays and dtypes. These composite dtypes are called [structured data types.](user/basics.rec) subarray An array nested in a [structured data type](#term-structured-data-type), as `b` is here: ``` >>> dt = np.dtype([('a', np.int32), ('b', np.float32, (3,))]) >>> np.zeros(3, dtype=dt) array([(0, [0., 0., 0.]), (0, [0., 0., 0.]), (0, [0., 0., 0.])], dtype=[('a', '<i4'), ('b', '<f4', (3,))]) ``` subarray data type An element of a structured datatype that behaves like an ndarray. title An alias for a field name in a structured datatype. type In NumPy, usually a synonym for [dtype](#term-dtype). 
For the more general Python meaning, [see here.](https://docs.python.org/3/glossary.html#term-type "(in Python v3.10)")

ufunc
NumPy’s fast element-by-element computation ([vectorization](#term-vectorization)) gives a choice of which function gets applied. The general term for the function is `ufunc`, short for `universal function`. NumPy routines have built-in ufuncs, but users can also [write their own.](reference/ufuncs)

vectorization
NumPy hands off array processing to C, where looping and computation are much faster than in Python. To exploit this, programmers using NumPy eliminate Python loops in favor of array-to-array operations. [vectorization](#term-vectorization) can refer both to the C offloading and to structuring NumPy code to leverage it.

view
Without touching underlying data, NumPy can make one array appear to change its datatype and shape. An array created this way is a `view`, and NumPy often exploits the performance gain of using a view versus making a new array. A potential drawback is that writing to a view can alter the original as well. If this is a problem, NumPy instead needs to create a physically distinct array – a [`copy`](https://docs.python.org/3/library/copy.html#module-copy "(in Python v3.10)"). Some NumPy routines always return views, some always return copies, some may return one or the other, and for some the choice can be specified. Responsibility for managing views and copies falls to the programmer. [`numpy.shares_memory`](reference/generated/numpy.shares_memory#numpy.shares_memory "numpy.shares_memory") will check whether `b` is a view of `a`, but an exact answer isn’t always feasible, as the documentation page explains.

```
>>> x = np.arange(5)
>>> x
array([0, 1, 2, 3, 4])
```

```
>>> y = x[::2]
>>> y
array([0, 2, 4])
```

```
>>> x[0] = 3  # changing x changes y as well, since y is a view on x
>>> y
array([3, 2, 4])
```
Under-the-hood Documentation for developers
===========================================

These documents are intended as a low-level look into NumPy, focused towards developers.

* [Internal organization of NumPy arrays](internals)
* [NumPy C code explanations](internals.code-explanations)
* [Memory Alignment](alignment)

Reporting bugs
==============

File bug reports or feature requests, and make contributions (e.g. code patches), by opening a “new issue” on GitHub:

* NumPy Issues: <https://github.com/numpy/numpy/issues>

Please give as much information as you can in the ticket. It is extremely useful if you can supply a small self-contained code snippet that reproduces the problem. Also specify the component, the version you are referring to, and the milestone.

Report bugs to the appropriate GitHub project (there is one for NumPy and a different one for SciPy). More information can be found on the <https://www.scipy.org/scipylib/dev-zone.html> website.

NumPy license
=============

```
Copyright (c) 2005-2022, NumPy Developers.
All rights reserved.

Redistribution and use in source and binary forms, with or without
modification, are permitted provided that the following conditions are
met:

* Redistributions of source code must retain the above copyright
  notice, this list of conditions and the following disclaimer.

* Redistributions in binary form must reproduce the above copyright
  notice, this list of conditions and the following disclaimer in the
  documentation and/or other materials provided with the distribution.

* Neither the name of the NumPy Developers nor the names of any
  contributors may be used to endorse or promote products derived
  from this software without specific prior written permission.
THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
"AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
(INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
```

Array objects
=============

NumPy provides an N-dimensional array type, the [ndarray](arrays.ndarray#arrays-ndarray), which describes a collection of “items” of the same type. The items can be [indexed](arrays.indexing#arrays-indexing) using for example N integers.

All ndarrays are [homogeneous](../glossary#term-homogeneous): every item takes up the same size block of memory, and all blocks are interpreted in exactly the same way. How each item in the array is to be interpreted is specified by a separate [data-type object](arrays.dtypes#arrays-dtypes), one of which is associated with every array. In addition to basic types (integers, floats, *etc.*), the data type objects can also represent data structures.

An item extracted from an array, *e.g.*, by indexing, is represented by a Python object whose type is one of the [array scalar types](arrays.scalars#arrays-scalars) built in NumPy. The array scalars allow easy manipulation of more complicated arrangements of data.

*Figure (threefundamental.png): the ndarray, the data-type object describing a single fixed-size element of the array, and the array-scalar Python object that is returned when a single element of the array is accessed.*
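The three objects described above can be seen directly in a short session; a sketch (the particular values and field names are illustrative only):

```python
import numpy as np

# One dtype per array describes how every fixed-size item is interpreted.
a = np.array([1.5, 2.5], dtype=np.float64)
assert a.dtype == np.float64
assert a.itemsize == 8            # every item occupies the same 8-byte block

# Extracting a single element returns an array scalar, a Python object
# whose type mirrors the array's dtype.
elem = a[0]
assert type(elem) is np.float64
assert isinstance(elem, np.floating)

# dtypes can also describe structures: a 32-bit int next to a pair of floats.
dt = np.dtype([('id', np.int32), ('pos', np.float64, (2,))])
rec = np.zeros(1, dtype=dt)
assert rec['pos'].shape == (1, 2)
```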
* [The N-dimensional array (`ndarray`)](arrays.ndarray)
  + [Constructing arrays](arrays.ndarray#constructing-arrays)
  + [Indexing arrays](arrays.ndarray#indexing-arrays)
  + [Internal memory layout of an ndarray](arrays.ndarray#internal-memory-layout-of-an-ndarray)
  + [Array attributes](arrays.ndarray#array-attributes)
  + [Array methods](arrays.ndarray#array-methods)
  + [Arithmetic, matrix multiplication, and comparison operations](arrays.ndarray#arithmetic-matrix-multiplication-and-comparison-operations)
  + [Special methods](arrays.ndarray#special-methods)
* [Scalars](arrays.scalars)
  + [Built-in scalar types](arrays.scalars#built-in-scalar-types)
  + [Attributes](arrays.scalars#attributes)
  + [Indexing](arrays.scalars#indexing)
  + [Methods](arrays.scalars#methods)
  + [Defining new types](arrays.scalars#defining-new-types)
* [Data type objects (`dtype`)](arrays.dtypes)
  + [Specifying and constructing data types](arrays.dtypes#specifying-and-constructing-data-types)
  + [`dtype`](arrays.dtypes#dtype)
* [Indexing routines](arrays.indexing)
  + [Generating index arrays](arrays.indexing#generating-index-arrays)
  + [Indexing-like operations](arrays.indexing#indexing-like-operations)
  + [Inserting data into arrays](arrays.indexing#inserting-data-into-arrays)
  + [Iterating over arrays](arrays.indexing#iterating-over-arrays)
* [Iterating Over Arrays](arrays.nditer)
  + [Single Array Iteration](arrays.nditer#single-array-iteration)
  + [Broadcasting Array Iteration](arrays.nditer#broadcasting-array-iteration)
  + [Putting the Inner Loop in Cython](arrays.nditer#putting-the-inner-loop-in-cython)
* [Standard array subclasses](arrays.classes)
  + [Special attributes and methods](arrays.classes#special-attributes-and-methods)
  + [Matrix objects](arrays.classes#matrix-objects)
  + [Memory-mapped file arrays](arrays.classes#memory-mapped-file-arrays)
  + [Character arrays (`numpy.char`)](arrays.classes#character-arrays-numpy-char)
  + [Record arrays (`numpy.rec`)](arrays.classes#record-arrays-numpy-rec)
  + [Masked arrays (`numpy.ma`)](arrays.classes#masked-arrays-numpy-ma)
  + [Standard container class](arrays.classes#standard-container-class)
  + [Array Iterators](arrays.classes#array-iterators)
* [Masked arrays](maskedarray)
  + [The `numpy.ma` module](maskedarray.generic)
  + [Using numpy.ma](maskedarray.generic#using-numpy-ma)
  + [Examples](maskedarray.generic#examples)
  + [Constants of the `numpy.ma` module](maskedarray.baseclass)
  + [The `MaskedArray` class](maskedarray.baseclass#the-maskedarray-class)
  + [`MaskedArray` methods](maskedarray.baseclass#maskedarray-methods)
  + [Masked array operations](routines.ma)
* [The array interface protocol](arrays.interface)
  + [Python side](arrays.interface#python-side)
  + [C-struct access](arrays.interface#c-struct-access)
  + [Type description examples](arrays.interface#type-description-examples)
  + [Differences with Array interface (Version 2)](arrays.interface#differences-with-array-interface-version-2)
* [Datetimes and Timedeltas](arrays.datetime)
  + [Datetime64 Conventions and Assumptions](arrays.datetime#datetime64-conventions-and-assumptions)
  + [Basic Datetimes](arrays.datetime#basic-datetimes)
  + [Datetime and Timedelta Arithmetic](arrays.datetime#datetime-and-timedelta-arithmetic)
  + [Datetime Units](arrays.datetime#datetime-units)
  + [Business Day Functionality](arrays.datetime#business-day-functionality)
  + [Datetime64 shortcomings](arrays.datetime#datetime64-shortcomings)

Array API Standard Compatibility
================================

Note

The `numpy.array_api` module is still experimental. See [NEP 47](https://numpy.org/neps/nep-0047-array-api-standard.html).

NumPy includes a reference implementation of the [array API standard](https://data-apis.org/array-api/latest/) in `numpy.array_api`.
[NEP 47](https://numpy.org/neps/nep-0047-array-api-standard.html) describes the motivation and scope for implementing the array API standard in NumPy.

The `numpy.array_api` module serves as a minimal, reference implementation of the array API standard. In being minimal, the module only implements those things that are explicitly required by the specification. Certain things are allowed by the specification but are explicitly disallowed in `numpy.array_api`. This is so that the module can serve as a reference implementation for users of the array API standard. Any consumer of the array API can test their code against `numpy.array_api` and be sure that they aren’t using any features that aren’t guaranteed by the spec, and which may not be present in other conforming libraries.

The `numpy.array_api` module is not documented here. For a listing of the functions present in the array API specification, refer to the [array API standard](https://data-apis.org/array-api/latest/). The `numpy.array_api` implementation is functionally complete, so all functionality described in the standard is implemented.

Table of Differences between `numpy.array_api` and `numpy`
----------------------------------------------------------

This table outlines the primary differences between `numpy.array_api` and the main `numpy` namespace. There are three types of differences:

1. **Strictness**. Things that are only done so that `numpy.array_api` is a strict, minimal implementation. They aren’t actually required by the spec, and other conforming libraries may not follow them. In most cases, the spec does not specify or require any behavior outside of the given domain. The main `numpy` namespace would not need to change in any way to be spec-compatible for these.
2. **Compatible**. Things that could be added to the main `numpy` namespace without breaking backwards compatibility.
3. **Breaking**. Things that would break backwards compatibility if implemented in the main `numpy` namespace.
### Name Differences

Many functions have been renamed in the spec from NumPy. These are otherwise identical in behavior, and are thus all **compatible** changes, unless otherwise noted.

#### Function Name Changes

The following functions are named differently in the array API:

| Array API name | NumPy namespace name | Notes |
| --- | --- | --- |
| `acos` | `arccos` | |
| `acosh` | `arccosh` | |
| `asin` | `arcsin` | |
| `asinh` | `arcsinh` | |
| `atan` | `arctan` | |
| `atan2` | `arctan2` | |
| `atanh` | `arctanh` | |
| `bitwise_left_shift` | `left_shift` | |
| `bitwise_invert` | `invert` | |
| `bitwise_right_shift` | `right_shift` | |
| `bool` | `bool_` | This is **breaking** because `np.bool` is currently a deprecated alias for the built-in `bool`. |
| `concat` | `concatenate` | |
| `matrix_norm` and `vector_norm` | `norm` | `matrix_norm` and `vector_norm` each do a limited subset of what `np.norm` does. |
| `permute_dims` | `transpose` | Unlike `np.transpose`, the `axis` keyword argument to `permute_dims` is required. |
| `pow` | `power` | |
| `unique_all`, `unique_counts`, `unique_inverse`, and `unique_values` | `unique` | Each is equivalent to `np.unique` with certain flags set. |

#### Function instead of method

* `astype` is a function in the array API, whereas it is a method on `ndarray` in `numpy`.

#### `linalg` Namespace Differences

These functions are in the `linalg` sub-namespace in the array API, but are only in the top-level namespace in NumPy:

* `cross`
* `diagonal`
* `matmul` (*)
* `outer`
* `tensordot` (*)
* `trace`

(*): These functions are also in the top-level namespace in the array API.

#### Keyword Argument Renames

The following functions have keyword arguments that have been renamed. The functionality of the keyword argument is identical unless otherwise stated. Each new keyword argument is not already present on the given function in `numpy`, so the changes are **compatible**.
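The name-changes table above notes that the `unique_*` functions are each equivalent to `np.unique` with certain flags set. A minimal sketch of that correspondence using only the main `numpy` namespace (the `unique_*` names themselves live in the experimental `numpy.array_api`):

```python
import numpy as np

x = np.array([2, 1, 2, 3, 1])

# unique_values(x)  ~  np.unique(x)
# unique_counts(x)  ~  np.unique(x, return_counts=True)
values, counts = np.unique(x, return_counts=True)
assert values.tolist() == [1, 2, 3]
assert counts.tolist() == [2, 2, 1]

# unique_inverse(x) ~  np.unique(x, return_inverse=True); the inverse
# indices rebuild the original array from the sorted unique values.
values, inverse = np.unique(x, return_inverse=True)
assert values[inverse].tolist() == x.tolist()
```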
Note, this page does not list function keyword arguments that are in the main `numpy` namespace but not in the array API. Such keyword arguments are omitted from `numpy.array_api` for **strictness**, as the spec allows functions to include additional keyword arguments from those required.

| Function | Array API keyword name | NumPy keyword name | Notes |
| --- | --- | --- | --- |
| `argsort` and `sort` | `stable` | `kind` | The definitions of `stable` and `kind` differ, as do the default values. The change of the default value makes this **breaking**. See [Sorting Functions Differences](#array-api-sorting-functions-differences). |
| `matrix_rank` | `rtol` | `tol` | The definitions of `rtol` and `tol` differ, as do the default values. The change of the default value makes this **breaking**. See [Linear Algebra Differences](#array-api-linear-algebra-differences). |
| `pinv` | `rtol` | `rcond` | The definitions of `rtol` and `rcond` are the same, but their default values differ, making this **breaking**. See [Linear Algebra Differences](#array-api-linear-algebra-differences). |
| `std` and `var` | `correction` | `ddof` | |

### Type Promotion Differences

Type promotion is the biggest area where NumPy deviates from the spec. The most notable difference is that NumPy does value-based casting in many cases. The spec explicitly disallows value-based casting. In the array API, the result type of any operation is always determined entirely by the input types, independently of values or shapes.

| Feature | Type | Notes |
| --- | --- | --- |
| Limited set of dtypes. | **Strictness** | `numpy.array_api` only implements those [dtypes that are required by the spec](https://data-apis.org/array-api/latest/API_specification/data_types.html). |
| Operators (like `+`) with Python scalars only accept matching scalar types. | **Strictness** | For example, `<int32 array> + 1.0` is not allowed. See [the spec rules for mixing arrays and Python scalars](https://data-apis.org/array-api/latest/API_specification/type_promotion.html#mixing-arrays-with-python-scalars). |
| Operators (like `+`) with Python scalars always return the same dtype as the array. | **Breaking** | For example, `numpy.array_api.asarray(0., dtype=float32) + 1e64` is a `float32` array. |
| In-place operators are disallowed when the left-hand side would be promoted. | **Breaking** | Example: `a = np.array(1, dtype=np.int8); a += np.array(1, dtype=np.int16)`. The spec explicitly disallows this. |
| `int` promotion for operators is only specified for integers within the bounds of the dtype. | **Strictness** | `numpy.array_api` falls back to `np.ndarray` behavior (either cast or raise `OverflowError`). |
| `__pow__` and `__rpow__` do not do value-based casting for 0-D arrays. | **Breaking** | For example, `np.array(0., dtype=float32)**np.array(0., dtype=float64)` is `float32`. Note that this is value-based casting on 0-D arrays, not scalars. |
| No cross-kind casting. | **Strictness** | Namely, boolean, integer, and floating-point data types do not cast to each other, except explicitly with `astype` (this is separate from the behavior with Python scalars). |
| No casting unsigned integer dtypes to floating dtypes (e.g., `int64 + uint64 -> float64`). | **Strictness** | |
| `can_cast` and `result_type` are restricted. | **Strictness** | The `numpy.array_api` implementations disallow cross-kind casting. |
| `sum` and `prod` always upcast `float32` to `float64` when `dtype=None`. | **Breaking** | |

### Indexing Differences

The spec requires only a subset of indexing, but all indexing rules in the spec are compatible with NumPy’s more broad indexing rules.

| Feature | Type | Notes |
| --- | --- | --- |
| No implicit ellipses (`...`). | **Strictness** | If an index does not include an ellipsis, all axes must be indexed. |
| The start and stop of a slice may not be out of bounds. | **Strictness** | For a slice `i:j:k`, only the following are allowed: `i` or `j` omitted (`None`); `-n <= i <= max(0, n - 1)`; for `k > 0` or `k` omitted (`None`), `-n <= j <= n`; for `k < 0`, `-n - 1 <= j <= max(0, n - 1)`. |
| Boolean array indices are only allowed as the sole index. | **Strictness** | |
| Integer array indices are not allowed at all. | **Strictness** | With the exception of 0-D arrays, which are treated like integers. |

### Type Strictness

Functions in `numpy.array_api` restrict their inputs to only those dtypes that are explicitly required by the spec, even when the wrapped corresponding NumPy function would allow a broader set. Here, we list each function and the dtypes that are allowed in `numpy.array_api`. These are **strictness** differences because the spec does not require that other dtypes result in an error.

The categories here are defined as follows:

* **Floating-point**: `float32` or `float64`.
* **Integer**: Any signed or unsigned integer dtype (`int8`, `int16`, `int32`, `int64`, `uint8`, `uint16`, `uint32`, or `uint64`).
* **Boolean**: `bool`.
* **Integer or boolean**: Any signed or unsigned integer dtype, or `bool`. For two-argument functions, both arguments must be integer or both must be `bool`.
* **Numeric**: Any integer or floating-point dtype. For two-argument functions, both arguments must be integer or both must be floating-point.
* **All**: Any of the above dtype categories. For two-argument functions, both arguments must be the same kind (integer, floating-point, or boolean).

In all cases, the return dtype is chosen according to [the rules outlined in the spec](https://data-apis.org/array-api/latest/API_specification/type_promotion.html), and does not differ from NumPy’s return dtype for any of the allowed input dtypes, except in the cases mentioned specifically in the subsections below.
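The kind categories above mirror how NumPy's own promotion machinery groups dtypes. A sketch using `np.result_type` from the main namespace (note that `numpy.array_api`'s restricted `result_type` would reject the cross-kind case shown last, and that exact promotion results can differ across NumPy versions for mixed array/scalar inputs):

```python
import numpy as np

# Within one kind, promotion simply picks the wider dtype.
assert np.result_type(np.int8, np.int16) == np.int16
assert np.result_type(np.float32, np.float64) == np.float64

# A signed and an unsigned integer of the same width have no common dtype
# of that width, so NumPy widens to the next signed type.
assert np.result_type(np.int16, np.uint16) == np.int32

# Mixing kinds (integer with floating-point) crosses category boundaries;
# the main namespace allows it, numpy.array_api does not.
assert np.result_type(np.int64, np.float32) == np.float64
```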
#### Elementwise Functions

| Function Name | Dtypes |
| --- | --- |
| `abs` | Numeric |
| `acos` | Floating-point |
| `acosh` | Floating-point |
| `add` | Numeric |
| `asin` (*) | Floating-point |
| `asinh` (*) | Floating-point |
| `atan` (*) | Floating-point |
| `atan2` (*) | Floating-point |
| `atanh` (*) | Floating-point |
| `bitwise_and` | Integer or boolean |
| `bitwise_invert` | Integer or boolean |
| `bitwise_left_shift` (*) | Integer |
| `bitwise_or` | Integer or boolean |
| `bitwise_right_shift` (*) | Integer |
| `bitwise_xor` | Integer or boolean |
| `ceil` | Numeric |
| `cos` | Floating-point |
| `cosh` | Floating-point |
| `divide` | Floating-point |
| `equal` | All |
| `exp` | Floating-point |
| `expm1` | Floating-point |
| `floor` | Numeric |
| `floor_divide` | Numeric |
| `greater` | Numeric |
| `greater_equal` | Numeric |
| `isfinite` | Numeric |
| `isinf` | Numeric |
| `isnan` | Numeric |
| `less` | Numeric |
| `less_equal` | Numeric |
| `log` | Floating-point |
| `logaddexp` | Floating-point |
| `log10` | Floating-point |
| `log1p` | Floating-point |
| `log2` | Floating-point |
| `logical_and` | Boolean |
| `logical_not` | Boolean |
| `logical_or` | Boolean |
| `logical_xor` | Boolean |
| `multiply` | Numeric |
| `negative` | Numeric |
| `not_equal` | All |
| `positive` | Numeric |
| `pow` (*) | Numeric |
| `remainder` | Numeric |
| `round` | Numeric |
| `sign` | Numeric |
| `sin` | Floating-point |
| `sinh` | Floating-point |
| `sqrt` | Floating-point |
| `square` | Numeric |
| `subtract` | Numeric |
| `tan` | Floating-point |
| `tanh` | Floating-point |
| `trunc` | Numeric |

(*) These functions have different names from the main `numpy` namespace. See [Function Name Changes](#array-api-name-changes).
#### Creation Functions

| Function Name | Dtypes |
| --- | --- |
| `meshgrid` | Any (all input dtypes must be the same) |

#### Linear Algebra Functions

| Function Name | Dtypes |
| --- | --- |
| `cholesky` | Floating-point |
| `cross` | Numeric |
| `det` | Floating-point |
| `diagonal` | Any |
| `eigh` | Floating-point |
| `eigvalsh` | Floating-point |
| `inv` | Floating-point |
| `matmul` | Numeric |
| `matrix_norm` (*) | Floating-point |
| `matrix_power` | Floating-point |
| `matrix_rank` | Floating-point |
| `matrix_transpose` (**) | Any |
| `outer` | Numeric |
| `pinv` | Floating-point |
| `qr` | Floating-point |
| `slogdet` | Floating-point |
| `solve` | Floating-point |
| `svd` | Floating-point |
| `svdvals` (**) | Floating-point |
| `tensordot` | Numeric |
| `trace` | Numeric |
| `vecdot` (**) | Numeric |
| `vector_norm` (*) | Floating-point |

(*) These functions are split from `norm` from the main `numpy` namespace. See [Function Name Changes](#array-api-name-changes).

(**) These functions are new in the array API and are not in the main `numpy` namespace.

#### Array Object

All the special `__operator__` methods on the array object behave identically to their corresponding functions (see [the spec](https://data-apis.org/array-api/latest/API_specification/array_object.html#methods) for a list of which methods correspond to which functions). The exception is that operators explicitly allow Python scalars according to the [rules outlined in the spec](https://data-apis.org/array-api/latest/API_specification/type_promotion.html#mixing-arrays-with-python-scalars) (see [Type Promotion Differences](#array-api-type-promotion-differences)).

### Array Object Differences

| Feature | Type | Notes |
| --- | --- | --- |
| No array scalars | **Strictness** | The spec does not have array scalars, only 0-D arrays. However, other than the promotion differences outlined in [Type Promotion Differences](#array-api-type-promotion-differences), scalars duck type as 0-D arrays for the purposes of the spec. They are immutable, but the spec [does not require mutability](https://data-apis.org/array-api/latest/design_topics/copies_views_and_mutation.html). |
| `bool()`, `int()`, and `float()` only work on 0-D arrays. | **Strictness** | See <https://github.com/numpy/numpy/issues/10404>. |
| `__imatmul__` | **Compatible** | `np.ndarray` does not currently implement `__imatmul__`. Note that `a @= b` should only be defined when it does not change the shape of `a`. |
| The `mT` attribute for matrix transpose. | **Compatible** | See [the spec definition](https://data-apis.org/array-api/latest/API_specification/generated/signatures.array_object.array.mT.html) for `mT`. |
| The `T` attribute should error if the input is not 2-dimensional. | **Breaking** | See [the note in the spec](https://data-apis.org/array-api/latest/API_specification/generated/signatures.array_object.array.T.html). |
| New method `to_device` and attribute `device`. | **Compatible** | The methods would effectively not do anything, since NumPy is CPU only. |

### Creation Functions Differences

| Feature | Type | Notes |
| --- | --- | --- |
| `copy` keyword argument to `asarray`. | **Compatible** | |
| New `device` keyword argument to all array creation functions (`asarray`, `arange`, `empty`, `empty_like`, `eye`, `full`, `full_like`, `linspace`, `ones`, `ones_like`, `zeros`, and `zeros_like`). | **Compatible** | `device` would effectively do nothing, since NumPy is CPU only. |

### Elementwise Functions Differences

| Feature | Type | Notes |
| --- | --- | --- |
| Various functions have been renamed. | **Compatible** | See [Function Name Changes](#array-api-name-changes). |
| Elementwise functions are only defined for given input type combinations. | **Strictness** | See [Type Strictness](#array-api-type-strictness). |
| `bitwise_left_shift` and `bitwise_right_shift` are only defined for `x2` nonnegative. | **Strictness** | |
| `ceil`, `floor`, and `trunc` return an integer with integer input. | **Breaking** | `np.ceil`, `np.floor`, and `np.trunc` return a floating-point dtype on integer dtype input. |

### Linear Algebra Differences

| Feature | Type | Notes |
| --- | --- | --- |
| `cholesky` includes an `upper` keyword argument. | **Compatible** | |
| `cross` does not allow size 2 vectors (only size 3). | **Breaking** | |
| `diagonal` operates on the last two axes. | **Breaking** | Strictly speaking this can be **compatible** because `diagonal` is moved to the `linalg` namespace. |
| `eigh`, `qr`, `slogdet` and `svd` return a named tuple. | **Compatible** | The corresponding `numpy` functions return a `tuple`, with the resulting arrays in the same order. |
| New functions `matrix_norm` and `vector_norm`. | **Compatible** | The `norm` function has been omitted from the array API and split into `matrix_norm` for matrix norms and `vector_norm` for vector norms. Note that `vector_norm` supports any number of axes, whereas `np.linalg.norm` only supports a single axis for vector norms. |
| `matrix_rank` has an `rtol` keyword argument instead of `tol`. | **Breaking** | In the array API, `rtol` filters singular values smaller than `rtol * largest_singular_value`. In `np.linalg.matrix_rank`, `tol` filters singular values smaller than `tol`. Furthermore, the default value for `rtol` is `max(M, N) * eps`, whereas the default value of `tol` in `np.linalg.matrix_rank` is `S.max() * max(M, N) * eps`, where `S` is the singular values of the input. The new flag name is compatible but the default change is breaking. |
| `matrix_rank` does not support 1-dimensional arrays. | **Breaking** | |
| New function `matrix_transpose`. | **Compatible** | Unlike `np.transpose`, `matrix_transpose` only transposes the last two axes. See [the spec definition](https://data-apis.org/array-api/latest/API_specification/generated/signatures.linear_algebra_functions.matrix_transpose.html#signatures.linear_algebra_functions.matrix_transpose). |
| `outer` only supports 1-dimensional arrays. | **Breaking** | The spec currently only specifies behavior on 1-D arrays but future behavior will likely be to broadcast, rather than flatten, which is what `np.outer` does. |
| `pinv` has an `rtol` keyword argument instead of `rcond`. | **Breaking** | The meaning of `rtol` and `rcond` is the same, but the default value for `rtol` is `max(M, N) * eps`, whereas the default value for `rcond` is `1e-15`. The new flag name is compatible but the default change is breaking. |
| `solve` only accepts `x2` as a vector when it is exactly 1-dimensional. | **Breaking** | The `np.linalg.solve` behavior is ambiguous. See [this numpy issue](https://github.com/numpy/numpy/issues/15349) and [this array API specification issue](https://github.com/data-apis/array-api/issues/285) for more details. |
| New function `svdvals`. | **Compatible** | Equivalent to `np.linalg.svd(compute_uv=False)`. |
| The `axis` keyword to `tensordot` must be a tuple. | **Compatible** | In `np.tensordot`, it can also be an array or array-like. |
| `trace` operates on the last two axes. | **Breaking** | `np.trace` operates on the first two axes by default. Note that the array API `trace` does not allow specifying which axes to operate on. |

### Manipulation Functions Differences

| Feature | Type | Notes |
| --- | --- | --- |
| Various functions have been renamed. | **Compatible** | See [Function Name Changes](#array-api-name-changes). |
| `concat` has different default casting rules from `np.concatenate`. | **Strictness** | No cross-kind casting. No value-based casting on scalars (when axis=None). |
| `stack` has different default casting rules from `np.stack`. | **Strictness** | No cross-kind casting. |
| New function `permute_dims`. | **Compatible** | Unlike `np.transpose`, the `axis` keyword argument to `permute_dims` is required. |
| `reshape` function has a `copy` keyword argument. | **Compatible** | See <https://github.com/numpy/numpy/issues/9818>. |

### Set Functions Differences

| Feature | Type | Notes |
| --- | --- | --- |
| New functions `unique_all`, `unique_counts`, `unique_inverse`, and `unique_values`. | **Compatible** | See [Function Name Changes](#array-api-name-changes). |
| The four `unique_*` functions return a named tuple. | **Compatible** | |
| `unique_all` and `unique_indices` return indices with the same shape as `x`. | **Compatible** | See <https://github.com/numpy/numpy/issues/20638>. |

### Sorting Functions Differences

| Feature | Type | Notes |
| --- | --- | --- |
| `argsort` and `sort` have a `stable` keyword argument instead of `kind`. | **Breaking** | `stable` is a boolean keyword argument, defaulting to `True`. `kind` takes a string, defaulting to `"quicksort"`. `stable=True` is equivalent to `kind="stable"` and `stable=False` is equivalent to `kind="quicksort"`, although any sorting algorithm is allowed by the spec when `stable=False`. The new flag name is compatible but the default change is breaking. |
| `argsort` and `sort` have a `descending` keyword argument. | **Compatible** | |

### Statistical Functions Differences

| Feature | Type | Notes |
| --- | --- | --- |
| `sum` and `prod` always upcast `float32` to `float64` when `dtype=None`. | **Breaking** | |
| The `std` and `var` functions have a `correction` keyword argument instead of `ddof`. | **Compatible** | |

### Other Differences

| Feature | Type | Notes |
| --- | --- | --- |
| Dtypes can only be spelled as dtype objects. | **Strictness** | For example, `numpy.array_api.asarray([0], dtype='int32')` is not allowed. |
| `asarray` is not implicitly called in any function. | **Strictness** | The exception is Python operators, which accept Python scalars in certain cases (see [Type Promotion Differences](#array-api-type-promotion-differences)). |
| `tril` and `triu` require the input to be at least 2-D. | **Strictness** | |
| `finfo()` return type uses `float` for the various attributes. | **Strictness** | The spec allows duck typing, so `finfo` returning dtype scalars is considered type compatible with `float`. |

Constants
=========

NumPy includes several constants:

numpy.Inf

IEEE 754 floating point representation of (positive) infinity. Use [`inf`](#numpy.inf "numpy.inf") because [`Inf`](#numpy.Inf "numpy.Inf"), [`Infinity`](#numpy.Infinity "numpy.Infinity"), [`PINF`](#numpy.PINF "numpy.PINF") and [`infty`](#numpy.infty "numpy.infty") are aliases for [`inf`](#numpy.inf "numpy.inf"). For more details, see [`inf`](#numpy.inf "numpy.inf").

#### See Also

inf

numpy.Infinity

IEEE 754 floating point representation of (positive) infinity. Use [`inf`](#numpy.inf "numpy.inf") because [`Inf`](#numpy.Inf "numpy.Inf"), [`Infinity`](#numpy.Infinity "numpy.Infinity"), [`PINF`](#numpy.PINF "numpy.PINF") and [`infty`](#numpy.infty "numpy.infty") are aliases for [`inf`](#numpy.inf "numpy.inf"). For more details, see [`inf`](#numpy.inf "numpy.inf").

#### See Also

inf

numpy.NAN

IEEE 754 floating point representation of Not a Number (NaN). [`NaN`](#numpy.NaN "numpy.NaN") and [`NAN`](#numpy.NAN "numpy.NAN") are equivalent definitions of [`nan`](#numpy.nan "numpy.nan"). Please use [`nan`](#numpy.nan "numpy.nan") instead of [`NAN`](#numpy.NAN "numpy.NAN").

#### See Also

nan

numpy.NINF

IEEE 754 floating point representation of negative infinity.

#### Returns

y : float
A floating point representation of negative infinity.
#### See Also

isinf : Shows which elements are positive or negative infinity
isposinf : Shows which elements are positive infinity
isneginf : Shows which elements are negative infinity
isnan : Shows which elements are Not a Number
isfinite : Shows which elements are finite (not one of Not a Number, positive infinity and negative infinity)

#### Notes

NumPy uses the IEEE Standard for Binary Floating-Point for Arithmetic (IEEE 754). This means that Not a Number is not equivalent to infinity. Also that positive infinity is not equivalent to negative infinity. But infinity is equivalent to positive infinity.

#### Examples

```
>>> np.NINF
-inf
>>> np.log(0)
-inf
```

numpy.NZERO

IEEE 754 floating point representation of negative zero.

#### Returns

y : float
A floating point representation of negative zero.

#### See Also

PZERO : Defines positive zero.
isinf : Shows which elements are positive or negative infinity.
isposinf : Shows which elements are positive infinity.
isneginf : Shows which elements are negative infinity.
isnan : Shows which elements are Not a Number.
isfinite : Shows which elements are finite (not one of Not a Number, positive infinity and negative infinity).

#### Notes

NumPy uses the IEEE Standard for Binary Floating-Point for Arithmetic (IEEE 754). Negative zero is considered to be a finite number.

#### Examples

```
>>> np.NZERO
-0.0
>>> np.PZERO
0.0
```

```
>>> np.isfinite([np.NZERO])
array([ True])
>>> np.isnan([np.NZERO])
array([False])
>>> np.isinf([np.NZERO])
array([False])
```

numpy.NaN

IEEE 754 floating point representation of Not a Number (NaN). [`NaN`](#numpy.NaN "numpy.NaN") and [`NAN`](#numpy.NAN "numpy.NAN") are equivalent definitions of [`nan`](#numpy.nan "numpy.nan"). Please use [`nan`](#numpy.nan "numpy.nan") instead of [`NaN`](#numpy.NaN "numpy.NaN").

#### See Also

nan

numpy.PINF

IEEE 754 floating point representation of (positive) infinity.
Use [`inf`](#numpy.inf "numpy.inf") because [`Inf`](#numpy.Inf "numpy.Inf"), [`Infinity`](#numpy.Infinity "numpy.Infinity"), [`PINF`](#numpy.PINF "numpy.PINF") and [`infty`](#numpy.infty "numpy.infty") are aliases for [`inf`](#numpy.inf "numpy.inf"). For more details, see [`inf`](#numpy.inf "numpy.inf").

#### See Also

inf

numpy.PZERO

IEEE 754 floating point representation of positive zero.

#### Returns

y : float
    A floating point representation of positive zero.

#### See Also

NZERO : Defines negative zero.
isinf : Shows which elements are positive or negative infinity.
isposinf : Shows which elements are positive infinity.
isneginf : Shows which elements are negative infinity.
isnan : Shows which elements are Not a Number.
isfinite : Shows which elements are finite - not one of Not a Number, positive infinity and negative infinity.

#### Notes

NumPy uses the IEEE Standard for Binary Floating-Point for Arithmetic (IEEE 754). Positive zero is considered to be a finite number.

#### Examples

```
>>> np.PZERO
0.0
>>> np.NZERO
-0.0
```

```
>>> np.isfinite([np.PZERO])
array([ True])
>>> np.isnan([np.PZERO])
array([False])
>>> np.isinf([np.PZERO])
array([False])
```

numpy.e

Euler’s constant, base of natural logarithms, Napier’s constant. `e = 2.71828182845904523536028747135266249775724709369995...`

#### See Also

exp : Exponential function
log : Natural logarithm

#### References

<https://en.wikipedia.org/wiki/E_%28mathematical_constant%29>

numpy.euler_gamma

`γ = 0.5772156649015328606065120900824024310421...`

#### References

<https://en.wikipedia.org/wiki/Euler-Mascheroni_constant>

numpy.inf

IEEE 754 floating point representation of (positive) infinity.

#### Returns

y : float
    A floating point representation of positive infinity.
#### See Also isinf : Shows which elements are positive or negative infinity isposinf : Shows which elements are positive infinity isneginf : Shows which elements are negative infinity isnan : Shows which elements are Not a Number isfinite : Shows which elements are finite (not one of Not a Number, positive infinity and negative infinity) #### Notes NumPy uses the IEEE Standard for Binary Floating-Point for Arithmetic (IEEE 754). This means that Not a Number is not equivalent to infinity. Also that positive infinity is not equivalent to negative infinity. But infinity is equivalent to positive infinity. [`Inf`](#numpy.Inf "numpy.Inf"), [`Infinity`](#numpy.Infinity "numpy.Infinity"), [`PINF`](#numpy.PINF "numpy.PINF") and [`infty`](#numpy.infty "numpy.infty") are aliases for [`inf`](#numpy.inf "numpy.inf"). #### Examples ``` >>> np.inf inf >>> np.array([1]) / 0. array([ Inf]) ``` numpy.infty IEEE 754 floating point representation of (positive) infinity. Use [`inf`](#numpy.inf "numpy.inf") because [`Inf`](#numpy.Inf "numpy.Inf"), [`Infinity`](#numpy.Infinity "numpy.Infinity"), [`PINF`](#numpy.PINF "numpy.PINF") and [`infty`](#numpy.infty "numpy.infty") are aliases for [`inf`](#numpy.inf "numpy.inf"). For more details, see [`inf`](#numpy.inf "numpy.inf"). #### See Also inf numpy.nan IEEE 754 floating point representation of Not a Number (NaN). #### Returns y : A floating point representation of Not a Number. #### See Also isnan : Shows which elements are Not a Number. isfinite : Shows which elements are finite (not one of Not a Number, positive infinity and negative infinity) #### Notes NumPy uses the IEEE Standard for Binary Floating-Point for Arithmetic (IEEE 754). This means that Not a Number is not equivalent to infinity. [`NaN`](#numpy.NaN "numpy.NaN") and [`NAN`](#numpy.NAN "numpy.NAN") are aliases of [`nan`](#numpy.nan "numpy.nan"). #### Examples ``` >>> np.nan nan >>> np.log(-1) nan >>> np.log([-1, 1, 2]) array([ NaN, 0. 
, 0.69314718])
```

numpy.newaxis

A convenient alias for None, useful for indexing arrays.

#### Examples

```
>>> newaxis is None
True
>>> x = np.arange(3)
>>> x
array([0, 1, 2])
>>> x[:, newaxis]
array([[0],
       [1],
       [2]])
>>> x[:, newaxis, newaxis]
array([[[0]],
       [[1]],
       [[2]]])
>>> x[:, newaxis] * x
array([[0, 0, 0],
       [0, 1, 2],
       [0, 2, 4]])
```

Outer product, same as `outer(x, y)`:

```
>>> y = np.arange(3, 6)
>>> x[:, newaxis] * y
array([[ 0,  0,  0],
       [ 3,  4,  5],
       [ 6,  8, 10]])
```

`x[newaxis, :]` is equivalent to `x[newaxis]` and `x[None]`:

```
>>> x[newaxis, :].shape
(1, 3)
>>> x[newaxis].shape
(1, 3)
>>> x[None].shape
(1, 3)
>>> x[:, newaxis].shape
(3, 1)
```

numpy.pi

`pi = 3.1415926535897932384626433...`

#### References

<https://en.wikipedia.org/wiki/Pi>

<https://numpy.org/doc/1.23/reference/constants.html>

Universal functions (ufunc)
===========================

See also

[Universal functions (ufunc) basics](../user/basics.ufuncs#ufuncs-basics)

A universal function (or [ufunc](../glossary#term-ufunc) for short) is a function that operates on [`ndarrays`](generated/numpy.ndarray#numpy.ndarray "numpy.ndarray") in an element-by-element fashion, supporting [array broadcasting](../user/basics.ufuncs#ufuncs-broadcasting), [type casting](../user/basics.ufuncs#ufuncs-casting), and several other standard features. That is, a ufunc is a “[vectorized](../glossary#term-vectorization)” wrapper for a function that takes a fixed number of specific inputs and produces a fixed number of specific outputs. For detailed information on universal functions, see [Universal functions (ufunc) basics](../user/basics.ufuncs#ufuncs-basics).

ufunc
-----

| | |
| --- | --- |
| [`numpy.ufunc`](generated/numpy.ufunc#numpy.ufunc "numpy.ufunc")() | Functions that operate element by element on whole arrays. |

### Optional keyword arguments

All ufuncs take optional keyword arguments.
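Before turning to the keyword arguments, the element-by-element behavior described above can be sketched in a few lines (a minimal illustration; the arrays are arbitrary):

```python
import numpy as np

a = np.array([1, 2, 3])
b = np.array([10, 20, 30])

# `a + b` dispatches to the ufunc np.add internally,
# so the explicit call gives the identical result.
assert (np.add(a, b) == a + b).all()
assert np.add(a, b).tolist() == [11, 22, 33]

# Broadcasting: a scalar operand is stretched to the array's shape.
assert np.add(a, 1).tolist() == [2, 3, 4]
```

The explicit call form is what makes the optional keyword arguments below (`out`, `where`, `casting`, and so on) reachable.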
Most of these represent advanced usage and will not typically be used.

#### *out*

New in version 1.6.

The first output can be provided as either a positional or a keyword parameter. Keyword ‘out’ arguments are incompatible with positional ones.

New in version 1.10.

The ‘out’ keyword argument is expected to be a tuple with one entry per output (which can be None for arrays to be allocated by the ufunc). For ufuncs with a single output, passing a single array (instead of a tuple holding a single array) is also valid. Passing a single array in the ‘out’ keyword argument to a ufunc with multiple outputs is deprecated, and will raise a warning in numpy 1.10, and an error in a future release. If ‘out’ is None (the default), an uninitialized return array is created. The output array is then filled with the results of the ufunc in the places that the broadcast ‘where’ is True. If ‘where’ is the scalar True (the default), then this corresponds to the entire output being filled. Note that outputs not explicitly filled are left with their uninitialized values.

New in version 1.13.

Operations where ufunc input and output operands have memory overlap are defined to be the same as for equivalent operations where there is no memory overlap. Operations affected make temporary copies as needed to eliminate data dependency. As detecting these cases is computationally expensive, a heuristic is used, which may in rare cases result in needless temporary copies. For operations where the data dependency is simple enough for the heuristic to analyze, temporary copies will not be made even if the arrays overlap, if it can be deduced copies are not necessary. As an example, `np.add(a, b, out=a)` will not involve copies.

#### *where*

New in version 1.7.

Accepts a boolean array which is broadcast together with the operands. Values of True indicate to calculate the ufunc at that position, values of False indicate to leave the value in the output alone.
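The interaction of ‘out’ and ‘where’ described above can be sketched as follows; the output is pre-filled so that positions masked out by ‘where’ hold a defined value rather than uninitialized memory (the array values are arbitrary):

```python
import numpy as np

a = np.array([0.0, 2.0, 3.0])
b = np.array([10.0, 20.0, 30.0])

# Pre-initialize the output: where=False positions are left untouched.
out = np.full_like(a, -1.0)
np.add(a, b, out=out, where=a > 1)
assert out.tolist() == [-1.0, 22.0, 33.0]

# Overlapping input and output is well defined; as noted above,
# np.add(a, b, out=a) involves no temporary copy.
np.add(a, b, out=a)
assert a.tolist() == [10.0, 22.0, 33.0]
```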
This argument cannot be used for generalized ufuncs as those take non-scalar input. Note that if an uninitialized return array is created, values of False will leave those values **uninitialized**. #### *axes* New in version 1.15. A list of tuples with indices of axes a generalized ufunc should operate on. For instance, for a signature of `(i,j),(j,k)->(i,k)` appropriate for matrix multiplication, the base elements are two-dimensional matrices and these are taken to be stored in the two last axes of each argument. The corresponding axes keyword would be `[(-2, -1), (-2, -1), (-2, -1)]`. For simplicity, for generalized ufuncs that operate on 1-dimensional arrays (vectors), a single integer is accepted instead of a single-element tuple, and for generalized ufuncs for which all outputs are scalars, the output tuples can be omitted. #### *axis* New in version 1.15. A single axis over which a generalized ufunc should operate. This is a short-cut for ufuncs that operate over a single, shared core dimension, equivalent to passing in `axes` with entries of `(axis,)` for each single-core-dimension argument and `()` for all others. For instance, for a signature `(i),(i)->()`, it is equivalent to passing in `axes=[(axis,), (axis,), ()]`. #### *keepdims* New in version 1.15. If this is set to `True`, axes which are reduced over will be left in the result as a dimension with size one, so that the result will broadcast correctly against the inputs. This option can only be used for generalized ufuncs that operate on inputs that all have the same number of core dimensions and with outputs that have no core dimensions, i.e., with signatures like `(i),(i)->()` or `(m,m)->()`. If used, the location of the dimensions in the output can be controlled with `axes` and `axis`. #### *casting* New in version 1.6. May be ‘no’, ‘equiv’, ‘safe’, ‘same_kind’, or ‘unsafe’. See [`can_cast`](generated/numpy.can_cast#numpy.can_cast "numpy.can_cast") for explanations of the parameter values. 
Provides a policy for what kind of casting is permitted. For compatibility with previous versions of NumPy, this defaults to ‘unsafe’ for numpy < 1.7. In numpy 1.7 a transition to ‘same_kind’ was begun where ufuncs produce a DeprecationWarning for calls which are allowed under the ‘unsafe’ rules, but not under the ‘same_kind’ rules. From numpy 1.10 and onwards, the default is ‘same_kind’.

#### *order*

New in version 1.6.

Specifies the calculation iteration order/memory layout of the output array. Defaults to ‘K’. ‘C’ means the output should be C-contiguous, ‘F’ means F-contiguous, ‘A’ means F-contiguous if the inputs are F-contiguous and not also C-contiguous, C-contiguous otherwise, and ‘K’ means to match the element ordering of the inputs as closely as possible.

#### *dtype*

New in version 1.6.

Overrides the DType of the output arrays the same way as the *signature*. This should ensure a matching precision of the calculation. The exact calculation DTypes chosen may depend on the ufunc and the inputs may be cast to this DType to perform the calculation.

#### *subok*

New in version 1.6.

Defaults to true. If set to false, the output will always be a strict array, not a subtype.

#### *signature*

Either a DType, a tuple of DTypes, or a special signature string indicating the input and output types of a ufunc. This argument allows the user to specify exact DTypes to be used for the calculation. Casting will be used as necessary. The actual DType of the input arrays is not considered unless `signature` is `None` for that array. When all DTypes are fixed, a specific loop is chosen or an error raised if no matching loop exists. If some DTypes are not specified and left `None`, the behaviour may depend on the ufunc. At this time, a list of available signatures is provided by the **types** attribute of the ufunc. (This list may be missing DTypes not defined by NumPy.) The `signature` only specifies the DType class/type.
For example, it can specify that the operation should be a `datetime64` or `float64` operation. It does not specify the `datetime64` time-unit or the `float64` byte-order. For backwards compatibility this argument can also be provided as *sig*, although the long form is preferred. Note that this should not be confused with the generalized ufunc [signature](c-api/generalized-ufuncs#details-of-signature) that is stored in the **signature** attribute of the ufunc object.

#### *extobj*

A list of length 3 specifying the ufunc buffer-size, the error mode integer, and the error call-back function. Normally, these values are looked up in a thread-specific dictionary. Passing them here circumvents that look up and uses the low-level specification provided for the error mode. This may be useful, for example, as an optimization for calculations requiring many ufunc calls on small arrays in a loop.

### Attributes

There are some informational attributes that universal functions possess. None of the attributes can be set.

| | |
| --- | --- |
| **__doc__** | A docstring for each ufunc. The first part of the docstring is dynamically generated from the number of outputs, the name, and the number of inputs. The second part of the docstring is provided at creation time and stored with the ufunc. |
| **__name__** | The name of the ufunc. |

| | |
| --- | --- |
| [`ufunc.nin`](generated/numpy.ufunc.nin#numpy.ufunc.nin "numpy.ufunc.nin") | The number of inputs. |
| [`ufunc.nout`](generated/numpy.ufunc.nout#numpy.ufunc.nout "numpy.ufunc.nout") | The number of outputs. |
| [`ufunc.nargs`](generated/numpy.ufunc.nargs#numpy.ufunc.nargs "numpy.ufunc.nargs") | The number of arguments. |
| [`ufunc.ntypes`](generated/numpy.ufunc.ntypes#numpy.ufunc.ntypes "numpy.ufunc.ntypes") | The number of types. |
| [`ufunc.types`](generated/numpy.ufunc.types#numpy.ufunc.types "numpy.ufunc.types") | Returns a list with types grouped input->output.
| | [`ufunc.identity`](generated/numpy.ufunc.identity#numpy.ufunc.identity "numpy.ufunc.identity") | The identity value. | | [`ufunc.signature`](generated/numpy.ufunc.signature#numpy.ufunc.signature "numpy.ufunc.signature") | Definition of the core elements a generalized ufunc operates on. | ### Methods | | | | --- | --- | | [`ufunc.reduce`](generated/numpy.ufunc.reduce#numpy.ufunc.reduce "numpy.ufunc.reduce")(array[, axis, dtype, out, ...]) | Reduces [`array`](generated/numpy.array#numpy.array "numpy.array")'s dimension by one, by applying ufunc along one axis. | | [`ufunc.accumulate`](generated/numpy.ufunc.accumulate#numpy.ufunc.accumulate "numpy.ufunc.accumulate")(array[, axis, dtype, out]) | Accumulate the result of applying the operator to all elements. | | [`ufunc.reduceat`](generated/numpy.ufunc.reduceat#numpy.ufunc.reduceat "numpy.ufunc.reduceat")(array, indices[, axis, ...]) | Performs a (local) reduce with specified slices over a single axis. | | [`ufunc.outer`](generated/numpy.ufunc.outer#numpy.ufunc.outer "numpy.ufunc.outer")(A, B, /, **kwargs) | Apply the ufunc `op` to all pairs (a, b) with a in `A` and b in `B`. | | [`ufunc.at`](generated/numpy.ufunc.at#numpy.ufunc.at "numpy.ufunc.at")(a, indices[, b]) | Performs unbuffered in place operation on operand 'a' for elements specified by 'indices'. | Warning A reduce-like operation on an array with a data-type that has a range “too small” to handle the result will silently wrap. One should use [`dtype`](generated/numpy.dtype#numpy.dtype "numpy.dtype") to increase the size of the data-type over which reduction takes place. Available ufuncs ---------------- There are currently more than 60 universal functions defined in [`numpy`](index#module-numpy "numpy") on one or more types, covering a wide variety of operations. 
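A short sketch exercising the attributes and methods listed above, using `np.add` and `np.multiply` (the array values are arbitrary):

```python
import numpy as np

# Informational attributes (read-only).
assert np.add.nin == 2 and np.add.nout == 1 and np.add.nargs == 3
assert np.add.identity == 0
assert 'dd->d' in np.add.types             # the float64 loop

# Methods.
a = np.arange(1, 5)                        # [1, 2, 3, 4]
assert np.add.reduce(a) == 10              # collapses one axis, like sum()
assert np.add.accumulate(a).tolist() == [1, 3, 6, 10]
assert np.add.reduceat(a, [0, 2]).tolist() == [3, 7]   # a[0:2], a[2:]
assert np.multiply.outer([1, 2], [3, 4]).tolist() == [[3, 4], [6, 8]]

# at() is unbuffered: repeated indices accumulate.
b = np.zeros(4, dtype=int)
np.add.at(b, [0, 0, 2], 1)
assert b.tolist() == [2, 0, 1, 0]
```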
Some of these ufuncs are called automatically on arrays when the relevant infix notation is used (*e.g.*, [`add(a, b)`](generated/numpy.add#numpy.add "numpy.add") is called internally when `a + b` is written and *a* or *b* is an [`ndarray`](generated/numpy.ndarray#numpy.ndarray "numpy.ndarray")). Nevertheless, you may still want to use the ufunc call in order to use the optional output argument(s) to place the output(s) in an object (or objects) of your choice. Recall that each ufunc operates element-by-element. Therefore, each scalar ufunc will be described as if acting on a set of scalar inputs to return a set of scalar outputs. Note The ufunc still returns its output(s) even if you use the optional output argument(s). ### Math operations | | | | --- | --- | | [`add`](generated/numpy.add#numpy.add "numpy.add")(x1, x2, /[, out, where, casting, order, ...]) | Add arguments element-wise. | | [`subtract`](generated/numpy.subtract#numpy.subtract "numpy.subtract")(x1, x2, /[, out, where, casting, ...]) | Subtract arguments, element-wise. | | [`multiply`](generated/numpy.multiply#numpy.multiply "numpy.multiply")(x1, x2, /[, out, where, casting, ...]) | Multiply arguments element-wise. | | [`matmul`](generated/numpy.matmul#numpy.matmul "numpy.matmul")(x1, x2, /[, out, casting, order, ...]) | Matrix product of two arrays. | | [`divide`](generated/numpy.divide#numpy.divide "numpy.divide")(x1, x2, /[, out, where, casting, ...]) | Divide arguments element-wise. | | [`logaddexp`](generated/numpy.logaddexp#numpy.logaddexp "numpy.logaddexp")(x1, x2, /[, out, where, casting, ...]) | Logarithm of the sum of exponentiations of the inputs. | | [`logaddexp2`](generated/numpy.logaddexp2#numpy.logaddexp2 "numpy.logaddexp2")(x1, x2, /[, out, where, casting, ...]) | Logarithm of the sum of exponentiations of the inputs in base-2. | | [`true_divide`](generated/numpy.true_divide#numpy.true_divide "numpy.true_divide")(x1, x2, /[, out, where, ...]) | Divide arguments element-wise. 
| | [`floor_divide`](generated/numpy.floor_divide#numpy.floor_divide "numpy.floor_divide")(x1, x2, /[, out, where, ...]) | Return the largest integer smaller or equal to the division of the inputs. | | [`negative`](generated/numpy.negative#numpy.negative "numpy.negative")(x, /[, out, where, casting, order, ...]) | Numerical negative, element-wise. | | [`positive`](generated/numpy.positive#numpy.positive "numpy.positive")(x, /[, out, where, casting, order, ...]) | Numerical positive, element-wise. | | [`power`](generated/numpy.power#numpy.power "numpy.power")(x1, x2, /[, out, where, casting, ...]) | First array elements raised to powers from second array, element-wise. | | [`float_power`](generated/numpy.float_power#numpy.float_power "numpy.float_power")(x1, x2, /[, out, where, ...]) | First array elements raised to powers from second array, element-wise. | | [`remainder`](generated/numpy.remainder#numpy.remainder "numpy.remainder")(x1, x2, /[, out, where, casting, ...]) | Returns the element-wise remainder of division. | | [`mod`](generated/numpy.mod#numpy.mod "numpy.mod")(x1, x2, /[, out, where, casting, order, ...]) | Returns the element-wise remainder of division. | | [`fmod`](generated/numpy.fmod#numpy.fmod "numpy.fmod")(x1, x2, /[, out, where, casting, ...]) | Returns the element-wise remainder of division. | | [`divmod`](generated/numpy.divmod#numpy.divmod "numpy.divmod")(x1, x2[, out1, out2], / [[, out, ...]) | Return element-wise quotient and remainder simultaneously. | | [`absolute`](generated/numpy.absolute#numpy.absolute "numpy.absolute")(x, /[, out, where, casting, order, ...]) | Calculate the absolute value element-wise. | | [`fabs`](generated/numpy.fabs#numpy.fabs "numpy.fabs")(x, /[, out, where, casting, order, ...]) | Compute the absolute values element-wise. | | [`rint`](generated/numpy.rint#numpy.rint "numpy.rint")(x, /[, out, where, casting, order, ...]) | Round elements of the array to the nearest integer. 
| | [`sign`](generated/numpy.sign#numpy.sign "numpy.sign")(x, /[, out, where, casting, order, ...]) | Returns an element-wise indication of the sign of a number. | | [`heaviside`](generated/numpy.heaviside#numpy.heaviside "numpy.heaviside")(x1, x2, /[, out, where, casting, ...]) | Compute the Heaviside step function. | | [`conj`](generated/numpy.conj#numpy.conj "numpy.conj")(x, /[, out, where, casting, order, ...]) | Return the complex conjugate, element-wise. | | [`conjugate`](generated/numpy.conjugate#numpy.conjugate "numpy.conjugate")(x, /[, out, where, casting, ...]) | Return the complex conjugate, element-wise. | | [`exp`](generated/numpy.exp#numpy.exp "numpy.exp")(x, /[, out, where, casting, order, ...]) | Calculate the exponential of all elements in the input array. | | [`exp2`](generated/numpy.exp2#numpy.exp2 "numpy.exp2")(x, /[, out, where, casting, order, ...]) | Calculate `2**p` for all `p` in the input array. | | [`log`](generated/numpy.log#numpy.log "numpy.log")(x, /[, out, where, casting, order, ...]) | Natural logarithm, element-wise. | | [`log2`](generated/numpy.log2#numpy.log2 "numpy.log2")(x, /[, out, where, casting, order, ...]) | Base-2 logarithm of `x`. | | [`log10`](generated/numpy.log10#numpy.log10 "numpy.log10")(x, /[, out, where, casting, order, ...]) | Return the base 10 logarithm of the input array, element-wise. | | [`expm1`](generated/numpy.expm1#numpy.expm1 "numpy.expm1")(x, /[, out, where, casting, order, ...]) | Calculate `exp(x) - 1` for all elements in the array. | | [`log1p`](generated/numpy.log1p#numpy.log1p "numpy.log1p")(x, /[, out, where, casting, order, ...]) | Return the natural logarithm of one plus the input array, element-wise. | | [`sqrt`](generated/numpy.sqrt#numpy.sqrt "numpy.sqrt")(x, /[, out, where, casting, order, ...]) | Return the non-negative square-root of an array, element-wise. 
| | [`square`](generated/numpy.square#numpy.square "numpy.square")(x, /[, out, where, casting, order, ...]) | Return the element-wise square of the input. | | [`cbrt`](generated/numpy.cbrt#numpy.cbrt "numpy.cbrt")(x, /[, out, where, casting, order, ...]) | Return the cube-root of an array, element-wise. | | [`reciprocal`](generated/numpy.reciprocal#numpy.reciprocal "numpy.reciprocal")(x, /[, out, where, casting, ...]) | Return the reciprocal of the argument, element-wise. | | [`gcd`](generated/numpy.gcd#numpy.gcd "numpy.gcd")(x1, x2, /[, out, where, casting, order, ...]) | Returns the greatest common divisor of `|x1|` and `|x2|` | | [`lcm`](generated/numpy.lcm#numpy.lcm "numpy.lcm")(x1, x2, /[, out, where, casting, order, ...]) | Returns the lowest common multiple of `|x1|` and `|x2|` | Tip The optional output arguments can be used to help you save memory for large calculations. If your arrays are large, complicated expressions can take longer than absolutely necessary due to the creation and (later) destruction of temporary calculation spaces. For example, the expression `G = A * B + C` is equivalent to `T1 = A * B; G = T1 + C; del T1`. It will be more quickly executed as `G = A * B; add(G, C, G)` which is the same as `G = A * B; G += C`. ### Trigonometric functions All trigonometric functions use radians when an angle is called for. The ratio of degrees to radians is \(180^{\circ}/\pi.\) | | | | --- | --- | | [`sin`](generated/numpy.sin#numpy.sin "numpy.sin")(x, /[, out, where, casting, order, ...]) | Trigonometric sine, element-wise. | | [`cos`](generated/numpy.cos#numpy.cos "numpy.cos")(x, /[, out, where, casting, order, ...]) | Cosine element-wise. | | [`tan`](generated/numpy.tan#numpy.tan "numpy.tan")(x, /[, out, where, casting, order, ...]) | Compute tangent element-wise. | | [`arcsin`](generated/numpy.arcsin#numpy.arcsin "numpy.arcsin")(x, /[, out, where, casting, order, ...]) | Inverse sine, element-wise. 
| | [`arccos`](generated/numpy.arccos#numpy.arccos "numpy.arccos")(x, /[, out, where, casting, order, ...]) | Trigonometric inverse cosine, element-wise. | | [`arctan`](generated/numpy.arctan#numpy.arctan "numpy.arctan")(x, /[, out, where, casting, order, ...]) | Trigonometric inverse tangent, element-wise. | | [`arctan2`](generated/numpy.arctan2#numpy.arctan2 "numpy.arctan2")(x1, x2, /[, out, where, casting, ...]) | Element-wise arc tangent of `x1/x2` choosing the quadrant correctly. | | [`hypot`](generated/numpy.hypot#numpy.hypot "numpy.hypot")(x1, x2, /[, out, where, casting, ...]) | Given the "legs" of a right triangle, return its hypotenuse. | | [`sinh`](generated/numpy.sinh#numpy.sinh "numpy.sinh")(x, /[, out, where, casting, order, ...]) | Hyperbolic sine, element-wise. | | [`cosh`](generated/numpy.cosh#numpy.cosh "numpy.cosh")(x, /[, out, where, casting, order, ...]) | Hyperbolic cosine, element-wise. | | [`tanh`](generated/numpy.tanh#numpy.tanh "numpy.tanh")(x, /[, out, where, casting, order, ...]) | Compute hyperbolic tangent element-wise. | | [`arcsinh`](generated/numpy.arcsinh#numpy.arcsinh "numpy.arcsinh")(x, /[, out, where, casting, order, ...]) | Inverse hyperbolic sine element-wise. | | [`arccosh`](generated/numpy.arccosh#numpy.arccosh "numpy.arccosh")(x, /[, out, where, casting, order, ...]) | Inverse hyperbolic cosine, element-wise. | | [`arctanh`](generated/numpy.arctanh#numpy.arctanh "numpy.arctanh")(x, /[, out, where, casting, order, ...]) | Inverse hyperbolic tangent element-wise. | | [`degrees`](generated/numpy.degrees#numpy.degrees "numpy.degrees")(x, /[, out, where, casting, order, ...]) | Convert angles from radians to degrees. | | [`radians`](generated/numpy.radians#numpy.radians "numpy.radians")(x, /[, out, where, casting, order, ...]) | Convert angles from degrees to radians. | | [`deg2rad`](generated/numpy.deg2rad#numpy.deg2rad "numpy.deg2rad")(x, /[, out, where, casting, order, ...]) | Convert angles from degrees to radians. 
| | [`rad2deg`](generated/numpy.rad2deg#numpy.rad2deg "numpy.rad2deg")(x, /[, out, where, casting, order, ...]) | Convert angles from radians to degrees. | ### Bit-twiddling functions These function all require integer arguments and they manipulate the bit-pattern of those arguments. | | | | --- | --- | | [`bitwise_and`](generated/numpy.bitwise_and#numpy.bitwise_and "numpy.bitwise_and")(x1, x2, /[, out, where, ...]) | Compute the bit-wise AND of two arrays element-wise. | | [`bitwise_or`](generated/numpy.bitwise_or#numpy.bitwise_or "numpy.bitwise_or")(x1, x2, /[, out, where, casting, ...]) | Compute the bit-wise OR of two arrays element-wise. | | [`bitwise_xor`](generated/numpy.bitwise_xor#numpy.bitwise_xor "numpy.bitwise_xor")(x1, x2, /[, out, where, ...]) | Compute the bit-wise XOR of two arrays element-wise. | | [`invert`](generated/numpy.invert#numpy.invert "numpy.invert")(x, /[, out, where, casting, order, ...]) | Compute bit-wise inversion, or bit-wise NOT, element-wise. | | [`left_shift`](generated/numpy.left_shift#numpy.left_shift "numpy.left_shift")(x1, x2, /[, out, where, casting, ...]) | Shift the bits of an integer to the left. | | [`right_shift`](generated/numpy.right_shift#numpy.right_shift "numpy.right_shift")(x1, x2, /[, out, where, ...]) | Shift the bits of an integer to the right. | ### Comparison functions | | | | --- | --- | | [`greater`](generated/numpy.greater#numpy.greater "numpy.greater")(x1, x2, /[, out, where, casting, ...]) | Return the truth value of (x1 > x2) element-wise. | | [`greater_equal`](generated/numpy.greater_equal#numpy.greater_equal "numpy.greater_equal")(x1, x2, /[, out, where, ...]) | Return the truth value of (x1 >= x2) element-wise. | | [`less`](generated/numpy.less#numpy.less "numpy.less")(x1, x2, /[, out, where, casting, ...]) | Return the truth value of (x1 < x2) element-wise. 
| | [`less_equal`](generated/numpy.less_equal#numpy.less_equal "numpy.less_equal")(x1, x2, /[, out, where, casting, ...]) | Return the truth value of (x1 <= x2) element-wise. | | [`not_equal`](generated/numpy.not_equal#numpy.not_equal "numpy.not_equal")(x1, x2, /[, out, where, casting, ...]) | Return (x1 != x2) element-wise. | | [`equal`](generated/numpy.equal#numpy.equal "numpy.equal")(x1, x2, /[, out, where, casting, ...]) | Return (x1 == x2) element-wise. | Warning Do not use the Python keywords `and` and `or` to combine logical array expressions. These keywords will test the truth value of the entire array (not element-by-element as you might expect). Use the bitwise operators & and | instead. | | | | --- | --- | | [`logical_and`](generated/numpy.logical_and#numpy.logical_and "numpy.logical_and")(x1, x2, /[, out, where, ...]) | Compute the truth value of x1 AND x2 element-wise. | | [`logical_or`](generated/numpy.logical_or#numpy.logical_or "numpy.logical_or")(x1, x2, /[, out, where, casting, ...]) | Compute the truth value of x1 OR x2 element-wise. | | [`logical_xor`](generated/numpy.logical_xor#numpy.logical_xor "numpy.logical_xor")(x1, x2, /[, out, where, ...]) | Compute the truth value of x1 XOR x2, element-wise. | | [`logical_not`](generated/numpy.logical_not#numpy.logical_not "numpy.logical_not")(x, /[, out, where, casting, ...]) | Compute the truth value of NOT x element-wise. | Warning The bit-wise operators & and | are the proper way to perform element-by-element array comparisons. Be sure you understand the operator precedence: `(a > 2) & (a < 5)` is the proper syntax because `a > 2 & a < 5` will result in an error due to the fact that `2 & a` is evaluated first. | | | | --- | --- | | [`maximum`](generated/numpy.maximum#numpy.maximum "numpy.maximum")(x1, x2, /[, out, where, casting, ...]) | Element-wise maximum of array elements. 
| Tip The Python function `max()` will find the maximum over a one-dimensional array, but it will do so using a slower sequence interface. The reduce method of the maximum ufunc is much faster. Also, the `max()` method will not give answers you might expect for arrays with greater than one dimension. The reduce method of minimum also allows you to compute a total minimum over an array. | | | | --- | --- | | [`minimum`](generated/numpy.minimum#numpy.minimum "numpy.minimum")(x1, x2, /[, out, where, casting, ...]) | Element-wise minimum of array elements. | Warning the behavior of `maximum(a, b)` is different than that of `max(a, b)`. As a ufunc, `maximum(a, b)` performs an element-by-element comparison of `a` and `b` and chooses each element of the result according to which element in the two arrays is larger. In contrast, `max(a, b)` treats the objects `a` and `b` as a whole, looks at the (total) truth value of `a > b` and uses it to return either `a` or `b` (as a whole). A similar difference exists between `minimum(a, b)` and `min(a, b)`. | | | | --- | --- | | [`fmax`](generated/numpy.fmax#numpy.fmax "numpy.fmax")(x1, x2, /[, out, where, casting, ...]) | Element-wise maximum of array elements. | | [`fmin`](generated/numpy.fmin#numpy.fmin "numpy.fmin")(x1, x2, /[, out, where, casting, ...]) | Element-wise minimum of array elements. | ### Floating functions Recall that all of these functions work element-by-element over an array, returning an array output. The description details only a single operation. | | | | --- | --- | | [`isfinite`](generated/numpy.isfinite#numpy.isfinite "numpy.isfinite")(x, /[, out, where, casting, order, ...]) | Test element-wise for finiteness (not infinity and not Not a Number). | | [`isinf`](generated/numpy.isinf#numpy.isinf "numpy.isinf")(x, /[, out, where, casting, order, ...]) | Test element-wise for positive or negative infinity. 
| | [`isnan`](generated/numpy.isnan#numpy.isnan "numpy.isnan")(x, /[, out, where, casting, order, ...]) | Test element-wise for NaN and return result as a boolean array. | | [`isnat`](generated/numpy.isnat#numpy.isnat "numpy.isnat")(x, /[, out, where, casting, order, ...]) | Test element-wise for NaT (not a time) and return result as a boolean array. | | [`fabs`](generated/numpy.fabs#numpy.fabs "numpy.fabs")(x, /[, out, where, casting, order, ...]) | Compute the absolute values element-wise. | | [`signbit`](generated/numpy.signbit#numpy.signbit "numpy.signbit")(x, /[, out, where, casting, order, ...]) | Returns element-wise True where signbit is set (less than zero). | | [`copysign`](generated/numpy.copysign#numpy.copysign "numpy.copysign")(x1, x2, /[, out, where, casting, ...]) | Change the sign of x1 to that of x2, element-wise. | | [`nextafter`](generated/numpy.nextafter#numpy.nextafter "numpy.nextafter")(x1, x2, /[, out, where, casting, ...]) | Return the next floating-point value after x1 towards x2, element-wise. | | [`spacing`](generated/numpy.spacing#numpy.spacing "numpy.spacing")(x, /[, out, where, casting, order, ...]) | Return the distance between x and the nearest adjacent number. | | [`modf`](generated/numpy.modf#numpy.modf "numpy.modf")(x[, out1, out2], / [[, out, where, ...]) | Return the fractional and integral parts of an array, element-wise. | | [`ldexp`](generated/numpy.ldexp#numpy.ldexp "numpy.ldexp")(x1, x2, /[, out, where, casting, ...]) | Returns x1 * 2**x2, element-wise. | | [`frexp`](generated/numpy.frexp#numpy.frexp "numpy.frexp")(x[, out1, out2], / [[, out, where, ...]) | Decompose the elements of x into mantissa and twos exponent. | | [`fmod`](generated/numpy.fmod#numpy.fmod "numpy.fmod")(x1, x2, /[, out, where, casting, ...]) | Returns the element-wise remainder of division. | | [`floor`](generated/numpy.floor#numpy.floor "numpy.floor")(x, /[, out, where, casting, order, ...]) | Return the floor of the input, element-wise. 
| | [`ceil`](generated/numpy.ceil#numpy.ceil "numpy.ceil")(x, /[, out, where, casting, order, ...]) | Return the ceiling of the input, element-wise. | | [`trunc`](generated/numpy.trunc#numpy.trunc "numpy.trunc")(x, /[, out, where, casting, order, ...]) | Return the truncated value of the input, element-wise. | © 2005–2022 NumPy Developers Licensed under the 3-clause BSD License. <https://numpy.org/doc/1.23/reference/ufuncs.html>

Routines
========

In this chapter routine docstrings are presented, grouped by functionality. Many docstrings contain example code, which demonstrates basic usage of the routine. The examples assume that NumPy is imported with: ``` >>> import numpy as np ``` A convenient way to execute examples is the `%doctest_mode` mode of IPython, which allows for pasting of multi-line examples and preserves indentation. * [Array creation routines](routines.array-creation) + [From shape or value](routines.array-creation#from-shape-or-value) + [From existing data](routines.array-creation#from-existing-data) + [Creating record arrays (`numpy.rec`)](routines.array-creation#creating-record-arrays-numpy-rec) + [Creating character arrays (`numpy.char`)](routines.array-creation#creating-character-arrays-numpy-char) + [Numerical ranges](routines.array-creation#numerical-ranges) + [Building matrices](routines.array-creation#building-matrices) + [The Matrix class](routines.array-creation#the-matrix-class) * [Array manipulation routines](routines.array-manipulation) + [Basic operations](routines.array-manipulation#basic-operations) + [Changing array shape](routines.array-manipulation#changing-array-shape) + [Transpose-like operations](routines.array-manipulation#transpose-like-operations) + [Changing number of dimensions](routines.array-manipulation#changing-number-of-dimensions) + [Changing kind of array](routines.array-manipulation#changing-kind-of-array) + [Joining arrays](routines.array-manipulation#joining-arrays) + [Splitting
arrays](routines.array-manipulation#splitting-arrays) + [Tiling arrays](routines.array-manipulation#tiling-arrays) + [Adding and removing elements](routines.array-manipulation#adding-and-removing-elements) + [Rearranging elements](routines.array-manipulation#rearranging-elements) * [Binary operations](routines.bitwise) + [Elementwise bit operations](routines.bitwise#elementwise-bit-operations) + [Bit packing](routines.bitwise#bit-packing) + [Output formatting](routines.bitwise#output-formatting) * [String operations](routines.char) + [String operations](routines.char#id1) + [Comparison](routines.char#comparison) + [String information](routines.char#string-information) + [Convenience class](routines.char#convenience-class) * [C-Types Foreign Function Interface (`numpy.ctypeslib`)](routines.ctypeslib) * [Datetime Support Functions](routines.datetime) + [numpy.datetime_as_string](generated/numpy.datetime_as_string) + [numpy.datetime_data](generated/numpy.datetime_data) + [Business Day Functions](routines.datetime#business-day-functions) * [Data type routines](routines.dtype) + [numpy.can_cast](generated/numpy.can_cast) + [numpy.promote_types](generated/numpy.promote_types) + [numpy.min_scalar_type](generated/numpy.min_scalar_type) + [numpy.result_type](generated/numpy.result_type) + [numpy.common_type](generated/numpy.common_type) + [numpy.obj2sctype](generated/numpy.obj2sctype) + [Creating data types](routines.dtype#creating-data-types) + [Data type information](routines.dtype#data-type-information) + [Data type testing](routines.dtype#data-type-testing) + [Miscellaneous](routines.dtype#miscellaneous) * [Optionally SciPy-accelerated routines (`numpy.dual`)](routines.dual) + [Linear algebra](routines.dual#linear-algebra) + [FFT](routines.dual#fft) + [Other](routines.dual#other) * [Mathematical functions with automatic domain](routines.emath) + [Functions](routines.emath#functions) * [Floating point error handling](routines.err) + [Setting and getting error 
handling](routines.err#setting-and-getting-error-handling) + [Internal functions](routines.err#internal-functions) * [Discrete Fourier Transform (`numpy.fft`)](routines.fft) + [Standard FFTs](routines.fft#standard-ffts) + [Real FFTs](routines.fft#real-ffts) + [Hermitian FFTs](routines.fft#hermitian-ffts) + [Helper routines](routines.fft#helper-routines) + [Background information](routines.fft#background-information) + [Implementation details](routines.fft#implementation-details) + [Type Promotion](routines.fft#type-promotion) + [Normalization](routines.fft#normalization) + [Real and Hermitian transforms](routines.fft#real-and-hermitian-transforms) + [Higher dimensions](routines.fft#higher-dimensions) + [References](routines.fft#references) + [Examples](routines.fft#examples) * [Functional programming](routines.functional) + [numpy.apply_along_axis](generated/numpy.apply_along_axis) + [numpy.apply_over_axes](generated/numpy.apply_over_axes) + [numpy.vectorize](generated/numpy.vectorize) + [numpy.frompyfunc](generated/numpy.frompyfunc) + [numpy.piecewise](generated/numpy.piecewise) * [NumPy-specific help functions](routines.help) + [Finding help](routines.help#finding-help) + [Reading help](routines.help#reading-help) * [Input and output](routines.io) + [NumPy binary files (NPY, NPZ)](routines.io#numpy-binary-files-npy-npz) + [Text files](routines.io#text-files) + [Raw binary files](routines.io#raw-binary-files) + [String formatting](routines.io#string-formatting) + [Memory mapping files](routines.io#memory-mapping-files) + [Text formatting options](routines.io#text-formatting-options) + [Base-n representations](routines.io#base-n-representations) + [Data sources](routines.io#data-sources) + [Binary Format Description](routines.io#binary-format-description) * [Linear algebra (`numpy.linalg`)](routines.linalg) + [The `@` operator](routines.linalg#the-operator) + [Matrix and vector products](routines.linalg#matrix-and-vector-products) + 
[Decompositions](routines.linalg#decompositions) + [Matrix eigenvalues](routines.linalg#matrix-eigenvalues) + [Norms and other numbers](routines.linalg#norms-and-other-numbers) + [Solving equations and inverting matrices](routines.linalg#solving-equations-and-inverting-matrices) + [Exceptions](routines.linalg#exceptions) + [Linear algebra on several matrices at once](routines.linalg#linear-algebra-on-several-matrices-at-once) * [Logic functions](routines.logic) + [Truth value testing](routines.logic#truth-value-testing) + [Array contents](routines.logic#array-contents) + [Array type testing](routines.logic#array-type-testing) + [Logical operations](routines.logic#logical-operations) + [Comparison](routines.logic#comparison) * [Masked array operations](routines.ma) + [Constants](routines.ma#constants) + [Creation](routines.ma#creation) + [Inspecting the array](routines.ma#inspecting-the-array) + [Manipulating a MaskedArray](routines.ma#manipulating-a-maskedarray) + [Operations on masks](routines.ma#operations-on-masks) + [Conversion operations](routines.ma#conversion-operations) + [Masked arrays arithmetic](routines.ma#masked-arrays-arithmetic) * [Mathematical functions](routines.math) + [Trigonometric functions](routines.math#trigonometric-functions) + [Hyperbolic functions](routines.math#hyperbolic-functions) + [Rounding](routines.math#rounding) + [Sums, products, differences](routines.math#sums-products-differences) + [Exponents and logarithms](routines.math#exponents-and-logarithms) + [Other special functions](routines.math#other-special-functions) + [Floating point routines](routines.math#floating-point-routines) + [Rational routines](routines.math#rational-routines) + [Arithmetic operations](routines.math#arithmetic-operations) + [Handling complex numbers](routines.math#handling-complex-numbers) + [Extrema Finding](routines.math#extrema-finding) + [Miscellaneous](routines.math#miscellaneous) * [Matrix library (`numpy.matlib`)](routines.matlib) + 
[numpy.matlib.empty](generated/numpy.matlib.empty) + [numpy.matlib.zeros](generated/numpy.matlib.zeros) + [numpy.matlib.ones](generated/numpy.matlib.ones) + [numpy.matlib.eye](generated/numpy.matlib.eye) + [numpy.matlib.identity](generated/numpy.matlib.identity) + [numpy.matlib.repmat](generated/numpy.matlib.repmat) + [numpy.matlib.rand](generated/numpy.matlib.rand) + [numpy.matlib.randn](generated/numpy.matlib.randn) * [Miscellaneous routines](routines.other) + [Performance tuning](routines.other#performance-tuning) + [Memory ranges](routines.other#memory-ranges) + [Array mixins](routines.other#array-mixins) + [NumPy version comparison](routines.other#numpy-version-comparison) + [Utility](routines.other#utility) + [Matlab-like Functions](routines.other#matlab-like-functions) + [Exceptions](routines.other#exceptions) * [Padding Arrays](routines.padding) + [numpy.pad](generated/numpy.pad) * [Polynomials](routines.polynomials) + [Transitioning from `numpy.poly1d` to `numpy.polynomial`](routines.polynomials#transitioning-from-numpy-poly1d-to-numpy-polynomial) + [Documentation for the `polynomial` Package](routines.polynomials#documentation-for-the-polynomial-package) + [Documentation for Legacy Polynomials](routines.polynomials#documentation-for-legacy-polynomials) * [Random sampling (`numpy.random`)](random/index) + [Quick Start](random/index#quick-start) + [Introduction](random/index#introduction) + [Concepts](random/index#concepts) + [Features](random/index#features) * [Set routines](routines.set) + [numpy.lib.arraysetops](generated/numpy.lib.arraysetops) + [Making proper sets](routines.set#making-proper-sets) + [Boolean operations](routines.set#boolean-operations) * [Sorting, searching, and counting](routines.sort) + [Sorting](routines.sort#sorting) + [Searching](routines.sort#searching) + [Counting](routines.sort#counting) * [Statistics](routines.statistics) + [Order statistics](routines.statistics#order-statistics) + [Averages and 
variances](routines.statistics#averages-and-variances) + [Correlating](routines.statistics#correlating) + [Histograms](routines.statistics#histograms) * [Test Support (`numpy.testing`)](routines.testing) + [Asserts](routines.testing#asserts) + [Asserts (not recommended)](routines.testing#asserts-not-recommended) + [Decorators](routines.testing#decorators) + [Test Running](routines.testing#test-running) + [Guidelines](routines.testing#guidelines) * [Window functions](routines.window) + [Various windows](routines.window#various-windows)

Typing (numpy.typing)
=====================

New in version 1.20. Large parts of the NumPy API have PEP-484-style type annotations. In addition, a number of type aliases are available to users, most prominently the two below: * [`ArrayLike`](#numpy.typing.ArrayLike "numpy.typing.ArrayLike"): objects that can be converted to arrays * [`DTypeLike`](#numpy.typing.DTypeLike "numpy.typing.DTypeLike"): objects that can be converted to dtypes Mypy plugin ----------- New in version 1.21. A [mypy](http://mypy-lang.org/) plugin for managing a number of platform-specific annotations. Its functionality can be split into three distinct parts: * Assigning the (platform-dependent) precisions of certain [`number`](arrays.scalars#numpy.number "numpy.number") subclasses, including the likes of [`int_`](arrays.scalars#numpy.int_ "numpy.int_"), [`intp`](arrays.scalars#numpy.intp "numpy.intp") and [`longlong`](arrays.scalars#numpy.longlong "numpy.longlong"). See the documentation on [scalar types](arrays.scalars#arrays-scalars-built-in) for a comprehensive overview of the affected classes. Without the plugin the precision of all relevant classes will be inferred as [`Any`](https://docs.python.org/3/library/typing.html#typing.Any "(in Python v3.10)").
* Removing all extended-precision [`number`](arrays.scalars#numpy.number "numpy.number") subclasses that are unavailable for the platform in question. Most notably this includes the likes of [`float128`](arrays.scalars#numpy.float128 "numpy.float128") and [`complex256`](arrays.scalars#numpy.complex256 "numpy.complex256"). Without the plugin *all* extended-precision types will, as far as mypy is concerned, be available to all platforms. * Assigning the (platform-dependent) precision of [`c_intp`](routines.ctypeslib#numpy.ctypeslib.c_intp "numpy.ctypeslib.c_intp"). Without the plugin the type will default to [`ctypes.c_int64`](https://docs.python.org/3/library/ctypes.html#ctypes.c_int64 "(in Python v3.10)"). New in version 1.22. ### Examples To enable the plugin, one must add it to their mypy [configuration file](https://mypy.readthedocs.io/en/stable/config_file.html): ``` [mypy] plugins = numpy.typing.mypy_plugin ``` Differences from the runtime NumPy API -------------------------------------- NumPy is very flexible. Trying to describe the full range of possibilities statically would result in types that are not very helpful. For that reason, the typed NumPy API is often stricter than the runtime NumPy API. This section describes some notable differences. ### ArrayLike The [`ArrayLike`](#numpy.typing.ArrayLike "numpy.typing.ArrayLike") type tries to avoid creating object arrays. For example, ``` >>> np.array(x**2 for x in range(10)) array(<generator object <genexpr> at ...>, dtype=object) ``` is valid NumPy code which will create a 0-dimensional object array. Type checkers will complain about the above example when using the NumPy types however. 
If you really intended to do the above, then you can either use a `# type: ignore` comment: ``` >>> np.array(x**2 for x in range(10)) # type: ignore ``` or explicitly type the array-like object as [`Any`](https://docs.python.org/3/library/typing.html#typing.Any "(in Python v3.10)"): ``` >>> from typing import Any >>> array_like: Any = (x**2 for x in range(10)) >>> np.array(array_like) array(<generator object <genexpr> at ...>, dtype=object) ``` ### ndarray It’s possible to mutate the dtype of an array at runtime. For example, the following code is valid: ``` >>> x = np.array([1, 2]) >>> x.dtype = np.bool_ ``` This sort of mutation is not allowed by the types. Users who want to write statically typed code should instead use the [`numpy.ndarray.view`](generated/numpy.ndarray.view#numpy.ndarray.view "numpy.ndarray.view") method to create a view of the array with a different dtype. ### DTypeLike The [`DTypeLike`](#numpy.typing.DTypeLike "numpy.typing.DTypeLike") type tries to avoid creation of dtype objects using a dictionary of fields, like below: ``` >>> x = np.dtype({"field1": (float, 1), "field2": (int, 3)}) ``` Although this is valid NumPy code, the type checker will complain about it, since its usage is discouraged. Please see [Data type objects](arrays.dtypes#arrays-dtypes). ### Number precision The precision of [`numpy.number`](arrays.scalars#numpy.number "numpy.number") subclasses is treated as a covariant generic parameter (see [`NBitBase`](#numpy.typing.NBitBase "numpy.typing.NBitBase")), simplifying the annotation of processes involving precision-based casting. ``` >>> from typing import TypeVar >>> import numpy as np >>> import numpy.typing as npt >>> T = TypeVar("T", bound=npt.NBitBase) >>> def func(a: "np.floating[T]", b: "np.floating[T]") -> "np.floating[T]": ... ...
``` Consequently, the likes of [`float16`](arrays.scalars#numpy.float16 "numpy.float16"), [`float32`](arrays.scalars#numpy.float32 "numpy.float32") and [`float64`](arrays.scalars#numpy.float64 "numpy.float64") are still sub-types of [`floating`](arrays.scalars#numpy.floating "numpy.floating"), but, contrary to runtime, they’re not necessarily considered as sub-classes. ### Timedelta64 The [`timedelta64`](arrays.scalars#numpy.timedelta64 "numpy.timedelta64") class is not considered a subclass of [`signedinteger`](arrays.scalars#numpy.signedinteger "numpy.signedinteger"), the former only inheriting from [`generic`](arrays.scalars#numpy.generic "numpy.generic") during static type checking. ### 0D arrays At runtime, NumPy aggressively casts any passed 0D arrays into their corresponding [`generic`](arrays.scalars#numpy.generic "numpy.generic") instance. Until the introduction of shape typing (see [**PEP 646**](https://peps.python.org/pep-0646/)) it is unfortunately not possible to make the necessary distinction between 0D and >0D arrays. Thus, while not strictly correct, all operations that can potentially perform a 0D-array -> scalar cast are currently annotated as exclusively returning an `ndarray`. If it is known in advance that an operation _will_ perform a 0D-array -> scalar cast, then one can consider manually remedying the situation with either [`typing.cast`](https://docs.python.org/3/library/typing.html#typing.cast "(in Python v3.10)") or a `# type: ignore` comment. ### Record array dtypes The dtype of [`numpy.recarray`](generated/numpy.recarray#numpy.recarray "numpy.recarray"), and the `numpy.rec` functions in general, can be specified in one of two ways: * Directly via the `dtype` argument. * With up to five helper arguments that operate via [`numpy.format_parser`](generated/numpy.format_parser#numpy.format_parser "numpy.format_parser"): `formats`, `names`, `titles`, `aligned` and `byteorder`.
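As an illustration of the two specification styles, the sketch below builds the same record array both ways (the field names `x` and `y` are arbitrary):

```python
import numpy as np

# Way 1: specify the record dtype directly.
r1 = np.rec.array([(1, 2.5)], dtype=[("x", "<i4"), ("y", "<f8")])

# Way 2: use the helper arguments; these are routed through
# numpy.format_parser to construct an equivalent dtype.
r2 = np.rec.array([(1, 2.5)], formats=["<i4", "<f8"], names=["x", "y"])

assert r1.dtype == r2.dtype
assert r1.x[0] == 1 and r2.y[0] == 2.5
```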
These two approaches are currently typed as being mutually exclusive, *i.e.* if `dtype` is specified then one may not specify `formats`. While this mutual exclusivity is not (strictly) enforced during runtime, combining both dtype specifiers can lead to unexpected or even downright buggy behavior. API --- numpy.typing.ArrayLike*=typing.Union[...]* A [`Union`](https://docs.python.org/3/library/typing.html#typing.Union "(in Python v3.10)") representing objects that can be coerced into an [`ndarray`](generated/numpy.ndarray#numpy.ndarray "numpy.ndarray"). Among others this includes the likes of: * Scalars. * (Nested) sequences. * Objects implementing the `__array__` protocol. New in version 1.20. See Also [array_like](../glossary#term-array_like): Any scalar or sequence that can be interpreted as an ndarray. #### Examples ``` >>> import numpy as np >>> import numpy.typing as npt >>> def as_array(a: npt.ArrayLike) -> np.ndarray: ... return np.array(a) ``` numpy.typing.DTypeLike*=typing.Union[...]* A [`Union`](https://docs.python.org/3/library/typing.html#typing.Union "(in Python v3.10)") representing objects that can be coerced into a [`dtype`](generated/numpy.dtype#numpy.dtype "numpy.dtype"). Among others this includes the likes of: * [`type`](https://docs.python.org/3/library/functions.html#type "(in Python v3.10)") objects. * Character codes or the names of [`type`](https://docs.python.org/3/library/functions.html#type "(in Python v3.10)") objects. * Objects with the `.dtype` attribute. New in version 1.20. See Also [Specifying and constructing data types](arrays.dtypes#arrays-dtypes-constructing) A comprehensive overview of all objects that can be coerced into data types. #### Examples ``` >>> import numpy as np >>> import numpy.typing as npt >>> def as_dtype(d: npt.DTypeLike) -> np.dtype: ...
return np.dtype(d) ``` numpy.typing.NDArray*=numpy.ndarray[typing.Any, numpy.dtype[+ScalarType]]*[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/__init__.py) A [generic](https://docs.python.org/3/glossary.html#term-generic-type "(in Python v3.10)") version of [`np.ndarray[Any, np.dtype[+ScalarType]]`](generated/numpy.ndarray#numpy.ndarray "numpy.ndarray"). Can be used during runtime for typing arrays with a given dtype and unspecified shape. New in version 1.21. #### Examples ``` >>> import numpy as np >>> import numpy.typing as npt >>> print(npt.NDArray) numpy.ndarray[typing.Any, numpy.dtype[+ScalarType]] >>> print(npt.NDArray[np.float64]) numpy.ndarray[typing.Any, numpy.dtype[numpy.float64]] >>> NDArrayInt = npt.NDArray[np.int_] >>> a: NDArrayInt = np.arange(10) >>> def func(a: npt.ArrayLike) -> npt.NDArray[Any]: ... return np.array(a) ``` *class*numpy.typing.NBitBase[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/typing/__init__.py) A type representing [`numpy.number`](arrays.scalars#numpy.number "numpy.number") precision during static type checking. Used exclusively for the purpose of static type checking, [`NBitBase`](#numpy.typing.NBitBase "numpy.typing.NBitBase") represents the base of a hierarchical set of subclasses. Each subsequent subclass is herein used for representing a lower level of precision, *e.g.* `64Bit > 32Bit > 16Bit`. New in version 1.20. #### Examples Below is a typical usage example: [`NBitBase`](#numpy.typing.NBitBase "numpy.typing.NBitBase") is herein used for annotating a function that takes a float and integer of arbitrary precision as arguments and returns a new float of whichever precision is largest (*e.g.* `np.float16 + np.int64 -> np.float64`).
``` >>> from __future__ import annotations >>> from typing import TypeVar, TYPE_CHECKING >>> import numpy as np >>> import numpy.typing as npt >>> T1 = TypeVar("T1", bound=npt.NBitBase) >>> T2 = TypeVar("T2", bound=npt.NBitBase) >>> def add(a: np.floating[T1], b: np.integer[T2]) -> np.floating[T1 | T2]: ... return a + b >>> a = np.float16() >>> b = np.int64() >>> out = add(a, b) >>> if TYPE_CHECKING: ... reveal_locals() ... # note: Revealed local types are: ... # note: a: numpy.floating[numpy.typing._16Bit*] ... # note: b: numpy.signedinteger[numpy.typing._64Bit*] ... # note: out: numpy.floating[numpy.typing._64Bit*] ```

Global State
============

NumPy has a few import-time, compile-time, or runtime options which change the global behaviour. Most of these are related to performance or for debugging purposes and will not be interesting to the vast majority of users. Performance-Related Options --------------------------- ### Number of Threads used for Linear Algebra NumPy itself is normally intentionally limited to a single thread during function calls; however, it does support multiple Python threads running at the same time. Note that for performant linear algebra NumPy uses a BLAS backend such as OpenBLAS or MKL, which may use multiple threads that may be controlled by environment variables such as `OMP_NUM_THREADS`, depending on what is used. One way to control the number of threads is with the package [threadpoolctl](https://pypi.org/project/threadpoolctl/). ### Madvise Hugepage on Linux When working with very large arrays on modern Linux kernels, you can experience a significant speedup when [transparent hugepage](https://www.kernel.org/doc/html/latest/admin-guide/mm/transhuge.html) is used.
The current system policy for transparent hugepages can be seen by: ``` cat /sys/kernel/mm/transparent_hugepage/enabled ``` When set to `madvise` NumPy will typically use hugepages for a performance boost. This behaviour can be modified by setting the environment variable: ``` NUMPY_MADVISE_HUGEPAGE=0 ``` or setting it to `1` to always enable it. When not set, the default is to use madvise on Kernels 4.6 and newer. These kernels presumably experience a large speedup with hugepage support. This flag is checked at import time. Interoperability-Related Options -------------------------------- The array function protocol, which allows array-like objects to hook into the NumPy API, is currently enabled by default. This option exists since NumPy 1.16 and has been enabled by default since NumPy 1.17. It can be disabled using: ``` NUMPY_EXPERIMENTAL_ARRAY_FUNCTION=0 ``` See also [`numpy.class.__array_function__`](arrays.classes#numpy.class.__array_function__ "numpy.class.__array_function__") for more information. This flag is checked at import time. Debugging-Related Options ------------------------- ### Relaxed Strides Checking The *compile-time* environment variable: ``` NPY_RELAXED_STRIDES_DEBUG=0 ``` can be set to help debug code written in C which iterates through arrays manually. When an array is contiguous and iterated in a contiguous manner, its `strides` should not be queried. This option can help find errors where the `strides` are incorrectly used. For details see the [memory layout](arrays.ndarray#memory-layout) documentation. ### Warn if no memory allocation policy when deallocating data Some users might pass ownership of the data pointer to the `ndarray` by setting the `OWNDATA` flag. If they do this without (manually) setting a memory allocation policy, the default will be to call `free`. If `NUMPY_WARN_IF_NO_MEM_POLICY` is set to `"1"`, a `RuntimeWarning` will be emitted. A better alternative is to use a `PyCapsule` with a deallocator and set the `ndarray.base`.
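As a sketch of the array function protocol, a minimal array-like class can intercept a NumPy function for objects of its own type; the `DiagonalArray` class below is illustrative only, loosely modelled on the example in NumPy's interoperability documentation:

```python
import numpy as np

class DiagonalArray:
    """Minimal array-like storing only a diagonal value, hooking np.sum."""

    def __init__(self, n, value):
        self._n = n
        self._value = value

    def __array__(self, dtype=None):
        # Materialize as a dense array when NumPy needs one.
        return self._value * np.eye(self._n, dtype=dtype)

    def __array_function__(self, func, types, args, kwargs):
        # Intercept np.sum; defer every other function to NumPy's default.
        if func is np.sum:
            return self._n * self._value
        return NotImplemented

d = DiagonalArray(4, 2.0)
assert np.sum(d) == 8.0  # dispatched through __array_function__
```

Disabling the protocol with `NUMPY_EXPERIMENTAL_ARRAY_FUNCTION=0` would make `np.sum` fall back to coercing `d` through `__array__` instead.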
Packaging (numpy.distutils)
===========================

Warning `numpy.distutils` is deprecated, and will be removed for Python >= 3.12. For more details, see [Status of numpy.distutils and migration advice](distutils_status_migration#distutils-status-migration) NumPy provides enhanced distutils functionality to make it easier to build and install sub-packages, auto-generate code, and build extension modules that use Fortran-compiled libraries. To use features of NumPy distutils, use the `setup` command from `numpy.distutils.core`. A useful [`Configuration`](#numpy.distutils.misc_util.Configuration "numpy.distutils.misc_util.Configuration") class is also provided in [`numpy.distutils.misc_util`](distutils/misc_util#module-numpy.distutils.misc_util "numpy.distutils.misc_util") that can make it easier to construct keyword arguments to pass to the setup function (by passing the dictionary obtained from the todict() method of the class). More information is available in the [NumPy Distutils - Users Guide](distutils_guide#distutils-user-guide). The choice and location of linked libraries such as BLAS and LAPACK, as well as include paths and other such build options, can be specified in a `site.cfg` file located in the NumPy root repository or a `.numpy-site.cfg` file in your home directory. See the `site.cfg.example` example file included in the NumPy repository or sdist for documentation.
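A minimal `setup.py` built on this machinery might look as follows; the package name `mypkg` and the C source file are placeholders, not part of any real project (and recall that `numpy.distutils` is deprecated):

```python
# Hypothetical setup.py sketch using numpy.distutils; "mypkg" and
# "_helpers.c" are illustrative names only.
def configuration(parent_package="", top_path=None):
    from numpy.distutils.misc_util import Configuration
    config = Configuration("mypkg", parent_package, top_path)
    config.add_data_files("mypkg/data.dat")
    config.add_extension("_helpers", sources=["mypkg/_helpers.c"])
    return config

if __name__ == "__main__":
    from numpy.distutils.core import setup
    # todict() turns the Configuration into setup() keyword arguments.
    setup(**configuration(top_path="").todict())
```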
Modules in numpy.distutils -------------------------- * [distutils.misc_util](distutils/misc_util) | | | | --- | --- | | [`ccompiler`](generated/numpy.distutils.ccompiler#module-numpy.distutils.ccompiler "numpy.distutils.ccompiler") | | | [`ccompiler_opt`](generated/numpy.distutils.ccompiler_opt#module-numpy.distutils.ccompiler_opt "numpy.distutils.ccompiler_opt") | Provides the `CCompilerOpt` class, used for handling the CPU/hardware optimization, starting from parsing the command arguments, to managing the relation between the CPU baseline and dispatch-able features, also generating the required C headers and ending with compiling the sources with proper compiler's flags. | | [`cpuinfo.cpu`](generated/numpy.distutils.cpuinfo.cpu#numpy.distutils.cpuinfo.cpu "numpy.distutils.cpuinfo.cpu") | | | [`core.Extension`](generated/numpy.distutils.core.extension#numpy.distutils.core.Extension "numpy.distutils.core.Extension")(name, sources[, ...]) | Parameters | | [`exec_command`](generated/numpy.distutils.exec_command#module-numpy.distutils.exec_command "numpy.distutils.exec_command") | exec_command | | [`log.set_verbosity`](generated/numpy.distutils.log.set_verbosity#numpy.distutils.log.set_verbosity "numpy.distutils.log.set_verbosity")(v[, force]) | | | [`system_info.get_info`](generated/numpy.distutils.system_info.get_info#numpy.distutils.system_info.get_info "numpy.distutils.system_info.get_info")(name[, notfound_action]) | notfound_action: | | [`system_info.get_standard_file`](generated/numpy.distutils.system_info.get_standard_file#numpy.distutils.system_info.get_standard_file "numpy.distutils.system_info.get_standard_file")(fname) | Returns a list of files named 'fname' from 1) System-wide directory (directory-location of this module) 2) Users HOME directory (os.environ['HOME']) 3) Local directory | Configuration class ------------------- *class*numpy.distutils.misc_util.Configuration(*package_name=None*, *parent_name=None*, *top_path=None*, *package_path=None*, 
***attrs*)[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/distutils/misc_util.py#L759-L2128) Construct a configuration instance for the given package name. If *parent_name* is not None, then construct the package as a sub-package of the *parent_name* package. If *top_path* and *package_path* are None then they are assumed equal to the path of the file this instance was created in. The setup.py files in the numpy distribution are good examples of how to use the [`Configuration`](#numpy.distutils.misc_util.Configuration "numpy.distutils.misc_util.Configuration") instance. todict()[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/distutils/misc_util.py#L866-L883) Return a dictionary compatible with the keyword arguments of distutils setup function. #### Examples ``` >>> setup(**config.todict()) ``` get_distribution()[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/distutils/misc_util.py#L909-L912) Return the distutils distribution object for self. get_subpackage(*subpackage_name*, *subpackage_path=None*, *parent_name=None*, *caller_level=1*)[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/distutils/misc_util.py#L966-L1025) Return list of subpackage configurations. Parameters **subpackage_name**str or None Name of the subpackage to get the configuration. ‘*’ in subpackage_name is handled as a wildcard. **subpackage_path**str If None, then the path is assumed to be the local path plus the subpackage_name. If a setup.py file is not found in the subpackage_path, then a default configuration is used. **parent_name**str Parent name. add_subpackage(*subpackage_name*, *subpackage_path=None*, *standalone=False*)[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/distutils/misc_util.py#L1027-L1068) Add a sub-package to the current Configuration instance. This is useful in a setup.py script for adding sub-packages to a package. 
Parameters **subpackage_name**str name of the subpackage **subpackage_path**str if given, the subpackage path such that the subpackage is in subpackage_path / subpackage_name. If None, the subpackage is assumed to be located in the local path / subpackage_name. **standalone**bool add_data_files(**files*)[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/distutils/misc_util.py#L1200-L1349) Add data files to configuration data_files. Parameters **files**sequence Argument(s) can be either * 2-sequence (<datadir prefix>,<path to data file(s)>) * paths to data files where python datadir prefix defaults to package dir. #### Notes The form of each element of the files sequence is very flexible, allowing many combinations of where to get the files from the package and where they should ultimately be installed on the system. The most basic usage is for an element of the files argument sequence to be a simple filename. This will cause that file from the local path to be installed to the installation path of the self.name package (package path). The file argument can also be a relative path, in which case the entire relative path will be installed into the package directory. Finally, the file can be an absolute path name, in which case the file will be found at the absolute path name but installed to the package path. This basic behavior can be augmented by passing a 2-tuple in as the file argument. The first element of the tuple should specify the relative path (under the package install directory) where the remaining sequence of files should be installed to (it has nothing to do with the file-names in the source distribution). The second element of the tuple is the sequence of files that should be installed. The files in this sequence can be filenames, relative paths, or absolute paths. For absolute paths the file will be installed in the top-level package installation directory (regardless of the first argument).
Filenames and relative path names will be installed in the package install directory under the path name given as the first element of the tuple. Rules for installation paths: 1. file.txt -> (., file.txt) -> parent/file.txt 2. foo/file.txt -> (foo, foo/file.txt) -> parent/foo/file.txt 3. /foo/bar/file.txt -> (., /foo/bar/file.txt) -> parent/file.txt 4. `*`.txt -> parent/a.txt, parent/b.txt 5. foo/`*`.txt -> parent/foo/a.txt, parent/foo/b.txt 6. `*/*.txt` -> (`*`, `*`/`*`.txt) -> parent/c/a.txt, parent/d/b.txt 7. (sun, file.txt) -> parent/sun/file.txt 8. (sun, bar/file.txt) -> parent/sun/file.txt 9. (sun, /foo/bar/file.txt) -> parent/sun/file.txt 10. (sun, `*`.txt) -> parent/sun/a.txt, parent/sun/b.txt 11. (sun, bar/`*`.txt) -> parent/sun/a.txt, parent/sun/b.txt 12. (sun/`*`, `*`/`*`.txt) -> parent/sun/c/a.txt, parent/d/b.txt An additional feature is that the path to a data-file can actually be a function that takes no arguments and returns the actual path(s) to the data-files. This is useful when the data files are generated while building the package. #### Examples Add files to the list of data_files to be included with the package. ``` >>> self.add_data_files('foo.dat', ... ('fun', ['gun.dat', 'nun/pun.dat', '/tmp/sun.dat']), ... 'bar/cat.dat', ... '/full/path/to/can.dat') ``` will install these data files to: ``` <package install directory>/ foo.dat fun/ gun.dat nun/ pun.dat sun.dat bar/ cat.dat can.dat ``` where <package install directory> is the package (or sub-package) directory such as ‘/usr/lib/python2.4/site-packages/mypackage’ (‘C:\Python2.4\Lib\site-packages\mypackage’) or ‘/usr/lib/python2.4/site-packages/mypackage/mysubpackage’ (‘C:\Python2.4\Lib\site-packages\mypackage\mysubpackage’). add_data_dir(*data_path*)[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/distutils/misc_util.py#L1070-L1189) Recursively add files under data_path to data_files list.
Recursively add files under data_path to the list of data_files to be installed (and distributed). The data_path can be either a relative path-name, or an absolute path-name, or a 2-tuple where the first argument shows where in the install directory the data directory should be installed to.

Parameters

**data_path** : seq or str
 Argument can be either

 * 2-sequence (<datadir suffix>, <path to data directory>)
 * path to data directory where python datadir suffix defaults to package dir.

#### Notes

Rules for installation paths:

```
foo/bar -> (foo/bar, foo/bar) -> parent/foo/bar
(gun, foo/bar) -> parent/gun
foo/* -> (foo/a, foo/a), (foo/b, foo/b) -> parent/foo/a, parent/foo/b
(gun, foo/*) -> (gun, foo/a), (gun, foo/b) -> gun
(gun/*, foo/*) -> parent/gun/a, parent/gun/b
/foo/bar -> (bar, /foo/bar) -> parent/bar
(gun, /foo/bar) -> parent/gun
(fun/*/gun/*, sun/foo/bar) -> parent/fun/foo/gun/bar
```

#### Examples

For example, suppose the source directory contains fun/foo.dat and fun/bar/car.dat:

```
>>> self.add_data_dir('fun')
>>> self.add_data_dir(('sun', 'fun'))
>>> self.add_data_dir(('gun', '/full/path/to/fun'))
```

Will install data-files to the locations:

```
<package install directory>/
 fun/
  foo.dat
  bar/
   car.dat
 sun/
  foo.dat
  bar/
   car.dat
 gun/
  foo.dat
  car.dat
```

add_include_dirs(*paths)[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/distutils/misc_util.py#L1369-L1383)

Add paths to configuration include directories.

Add the given sequence of paths to the beginning of the include_dirs list. This list will be visible to all extension modules of the current package.

add_headers(*files)[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/distutils/misc_util.py#L1385-L1417)

Add installable headers to configuration.

Add the given sequence of files to the beginning of the headers list. By default, headers will be installed under the <python-include>/<self.name.replace('.','/')>/ directory.
If an item of files is a tuple, then its first argument specifies the actual installation location relative to the <python-include> path.

Parameters

**files** : str or seq
 Argument(s) can be either:

 * 2-sequence (<includedir suffix>, <path to header file(s)>)
 * path(s) to header file(s) where python includedir suffix will default to package name.

add_extension(*name*, *sources*, ***kw*)[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/distutils/misc_util.py#L1442-L1543)

Add extension to configuration.

Create and add an Extension instance to the ext_modules list. This method also takes the following optional keyword arguments that are passed on to the Extension constructor.

Parameters

**name** : str
 name of the extension

**sources** : seq
 list of the sources. The list of sources may contain functions (called source generators) which must take an extension instance and a build directory as inputs and return a source file or list of source files or None. If None is returned then no sources are generated. If the Extension instance has no sources after processing all source generators, then no extension module is built.

**include_dirs**
**define_macros**
**undef_macros**
**library_dirs**
**libraries**
**runtime_library_dirs**
**extra_objects**
**extra_compile_args**
**extra_link_args**
**extra_f77_compile_args**
**extra_f90_compile_args**
**export_symbols**
**swig_opts**
**depends**
 The depends list contains paths to files or directories that the sources of the extension module depend on. If any path in the depends list is newer than the extension module, then the module will be rebuilt.

**language**
**f2py_options**
**module_dirs**
**extra_info** : dict or list
 dict or list of dict of keywords to be appended to keywords.

#### Notes

The `self.paths(...)` method is applied to all lists that may contain paths.

add_library(*name*, *sources*, ***build_info*)[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/distutils/misc_util.py#L1545-L1579)

Add library to configuration.

Parameters

**name** : str
 Name of the extension.

**sources** : sequence
 List of the sources. The list of sources may contain functions (called source generators) which must take an extension instance and a build directory as inputs and return a source file or list of source files or None. If None is returned then no sources are generated. If the Extension instance has no sources after processing all source generators, then no extension module is built.

**build_info** : dict, optional
 The following keys are allowed:

 * depends
 * macros
 * include_dirs
 * extra_compiler_args
 * extra_f77_compile_args
 * extra_f90_compile_args
 * f2py_options
 * language

add_scripts(*files)[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/distutils/misc_util.py#L1755-L1769)

Add scripts to configuration.

Add the sequence of files to the beginning of the scripts list. Scripts will be installed under the <prefix>/bin/ directory.

add_installed_library(*name*, *sources*, *install_dir*, *build_info=None*)[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/distutils/misc_util.py#L1597-L1646)

Similar to add_library, but the specified library is installed.

Most C libraries used with [`distutils`](https://docs.python.org/3/library/distutils.html#module-distutils "(in Python v3.10)") are only used to build python extensions, but libraries built through this method will be installed so that they can be reused by third-party packages.

Parameters

**name** : str
 Name of the installed library.

**sources** : sequence
 List of the library's source files. See [`add_library`](#numpy.distutils.misc_util.Configuration.add_library "numpy.distutils.misc_util.Configuration.add_library") for details.

**install_dir** : str
 Path to install the library, relative to the current sub-package.
**build_info** : dict, optional
 The following keys are allowed:

 * depends
 * macros
 * include_dirs
 * extra_compiler_args
 * extra_f77_compile_args
 * extra_f90_compile_args
 * f2py_options
 * language

Returns

None

See also

[`add_library`](#numpy.distutils.misc_util.Configuration.add_library "numpy.distutils.misc_util.Configuration.add_library"), [`add_npy_pkg_config`](#numpy.distutils.misc_util.Configuration.add_npy_pkg_config "numpy.distutils.misc_util.Configuration.add_npy_pkg_config"), [`get_info`](distutils/misc_util#numpy.distutils.misc_util.get_info "numpy.distutils.misc_util.get_info")

#### Notes

The best way to encode the options required to link against the specified C libraries is to use a "libname.ini" file, and use [`get_info`](distutils/misc_util#numpy.distutils.misc_util.get_info "numpy.distutils.misc_util.get_info") to retrieve the required options (see [`add_npy_pkg_config`](#numpy.distutils.misc_util.Configuration.add_npy_pkg_config "numpy.distutils.misc_util.Configuration.add_npy_pkg_config") for more information).

add_npy_pkg_config(*template*, *install_dir*, *subst_dict=None*)[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/distutils/misc_util.py#L1648-L1752)

Generate and install a npy-pkg config file from a template.

The config file generated from `template` is installed in the given install directory, using `subst_dict` for variable substitution.

Parameters

**template** : str
 The path of the template, relative to the current package path.

**install_dir** : str
 Where to install the npy-pkg config file, relative to the current package path.

**subst_dict** : dict, optional
 If given, any string of the form `@key@` will be replaced by `subst_dict[key]` in the template file when installed. The install prefix is always available through the variable `@prefix@`, since the install prefix is not easy to get reliably from setup.py.
See also

[`add_installed_library`](#numpy.distutils.misc_util.Configuration.add_installed_library "numpy.distutils.misc_util.Configuration.add_installed_library"), [`get_info`](distutils/misc_util#numpy.distutils.misc_util.get_info "numpy.distutils.misc_util.get_info")

#### Notes

This works for both standard installs and in-place builds, i.e. the `@prefix@` refers to the source directory for in-place builds.

#### Examples

```
config.add_npy_pkg_config('foo.ini.in', 'lib', {'foo': bar})
```

Assuming the foo.ini.in file has the following content:

```
[meta]
Name=@foo@
Version=1.0
Description=dummy description

[default]
Cflags=-I@prefix@/include
Libs=
```

The generated file will have the following content:

```
[meta]
Name=bar
Version=1.0
Description=dummy description

[default]
Cflags=-Iprefix_dir/include
Libs=
```

and will be installed as foo.ini in the 'lib' subpath.

When cross-compiling with numpy distutils, it might be necessary to use modified npy-pkg-config files. Using the default/generated files will link with the host libraries (i.e. libnpymath.a). For cross-compilation you of course need to link with target libraries, while using the host Python installation. You can copy out the numpy/core/lib/npy-pkg-config directory, add a pkgdir value to the .ini files and set the NPY_PKG_CONFIG_PATH environment variable to point to the directory with the modified npy-pkg-config files.
Example npymath.ini modified for cross-compilation:

```
[meta]
Name=npymath
Description=Portable, core math library implementing C99 standard
Version=0.1

[variables]
pkgname=numpy.core
pkgdir=/build/arm-linux-gnueabi/sysroot/usr/lib/python3.7/site-packages/numpy/core
prefix=${pkgdir}
libdir=${prefix}/lib
includedir=${prefix}/include

[default]
Libs=-L${libdir} -lnpymath
Cflags=-I${includedir}
Requires=mlib

[msvc]
Libs=/LIBPATH:${libdir} npymath.lib
Cflags=/INCLUDE:${includedir}
Requires=mlib
```

paths(*paths, **kws)[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/distutils/misc_util.py#L1419-L1432)

Apply glob to paths and prepend local_path if needed.

Applies `glob.glob(...)` to each path in the sequence (if needed) and prepends the local_path if needed. Because this is called on all source lists, this allows wildcard characters to be specified in lists of sources for extension modules and libraries and scripts and allows path-names to be relative to the source directory.

get_config_cmd()[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/distutils/misc_util.py#L1809-L1821)

Returns the numpy.distutils config command instance.

get_build_temp_dir()[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/distutils/misc_util.py#L1823-L1830)

Return a path to a temporary directory where temporary files should be placed.

have_f77c()[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/distutils/misc_util.py#L1832-L1849)

Check for availability of Fortran 77 compiler.

Use it inside source generating function to ensure that setup distribution instance has been initialized.

#### Notes

True if a Fortran 77 compiler is available (because a simple Fortran 77 code was able to be compiled successfully).

have_f90c()[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/distutils/misc_util.py#L1851-L1868)

Check for availability of Fortran 90 compiler.

Use it inside source generating function to ensure that setup distribution instance has been initialized.

#### Notes

True if a Fortran 90 compiler is available (because a simple Fortran 90 code was able to be compiled successfully).

get_version(*version_file=None*, *version_variable=None*)[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/distutils/misc_util.py#L1951-L2025)

Try to get version string of a package.

Return a version string of the current package or None if the version information could not be detected.

#### Notes

This method scans files named __version__.py, <packagename>_version.py, version.py, and __svn_version__.py for string variables version, __version__, and <packagename>_version, until a version number is found.
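The scan described above can be modeled in a few lines of pure Python. The helper below is a hypothetical illustration, not numpy.distutils code; it only handles simple `name = 'value'` assignments:

```python
import os
import re
import tempfile

def find_version_string(path, variables=("version", "__version__")):
    """Scan a Python file for a simple version assignment.

    Illustrative model of the kind of scan get_version() performs;
    not the actual numpy.distutils implementation.
    """
    pattern = re.compile(
        r"^(%s)\s*=\s*['\"]([^'\"]+)['\"]" % "|".join(variables), re.M
    )
    with open(path) as f:
        match = pattern.search(f.read())
    return match.group(2) if match else None

# Demo: write a throwaway version.py and scan it.
with tempfile.TemporaryDirectory() as d:
    vfile = os.path.join(d, "version.py")
    with open(vfile, "w") as f:
        f.write("__version__ = '1.2.3'\n")
    print(find_version_string(vfile))  # -> 1.2.3
```

The real `get_version` additionally tries several candidate file names in order and caches the result on the `Configuration` instance.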
make_svn_version_py(*delete=True*)[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/distutils/misc_util.py#L2027-L2066)

Appends a data function to the data_files list that will generate the __svn_version__.py file in the current package directory.

Generate package __svn_version__.py file from SVN revision number; it will be removed after python exits but will be available when sdist, etc. commands are executed.

#### Notes

If __svn_version__.py existed before, nothing is done. This is intended for working with source directories that are in an SVN repository.

make_config_py(*name='__config__'*)[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/distutils/misc_util.py#L2108-L2116)

Generate package __config__.py file containing system_info information used during building the package. This file is installed to the package installation directory.

get_info(*names)[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/distutils/misc_util.py#L2118-L2128)

Get resources information.

Return information (from system_info.get_info) for all of the names in the argument list in a single dictionary.

Building Installable C libraries
--------------------------------

Conventional C libraries (installed through `add_library`) are not installed, and are just used during the build (they are statically linked). An installable C library is a pure C library, which does not depend on the python C runtime, and is installed such that it may be used by third-party packages. To build and install the C library, you just use the method `add_installed_library` instead of `add_library`, which takes the same arguments except for an additional `install_dir` argument:

```
>>> import numpy.distutils.misc_util
>>> config = numpy.distutils.misc_util.Configuration(None, '', '.')
>>> with open('foo.c', 'w') as f: pass
>>> config.add_installed_library('foo', sources=['foo.c'], install_dir='lib')
```

### npy-pkg-config files

To make the necessary build options available to third parties, you could use the `npy-pkg-config` mechanism implemented in [`numpy.distutils`](#module-numpy.distutils "numpy.distutils"). This mechanism is based on a .ini file which contains all the options. A .ini file is very similar to .pc files as used by the pkg-config unix utility:

```
[meta]
Name: foo
Version: 1.0
Description: foo library

[variables]
prefix = /home/user/local
libdir = ${prefix}/lib
includedir = ${prefix}/include

[default]
cflags = -I${includedir}
libs = -L${libdir} -lfoo
```

Generally, the file needs to be generated during the build, since it needs some information known at build time only (e.g. prefix). This is mostly automatic if one uses the [`Configuration`](#numpy.distutils.misc_util.Configuration "numpy.distutils.misc_util.Configuration") method `add_npy_pkg_config`. Assuming we have a template file foo.ini.in as follows:

```
[meta]
Name: foo
Version: @version@
Description: foo library

[variables]
prefix = @prefix@
libdir = ${prefix}/lib
includedir = ${prefix}/include

[default]
cflags = -I${includedir}
libs = -L${libdir} -lfoo
```

and the following code in setup.py:

```
>>> config.add_installed_library('foo', sources=['foo.c'], install_dir='lib')
>>> subst = {'version': '1.0'}
>>> config.add_npy_pkg_config('foo.ini.in', 'lib', subst_dict=subst)
```

This will install the file foo.ini into the directory package_dir/lib, and the foo.ini file will be generated from foo.ini.in, where each `@version@` will be replaced by `subst_dict['version']`.
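The `@key@` substitution itself is plain string replacement. Below is a minimal sketch of that mechanism; `subst_vars` is a hypothetical helper, not the actual numpy.distutils implementation (which also injects `@prefix@` automatically from the install prefix):

```python
def subst_vars(template_text, subst_dict):
    """Replace @key@ markers in a template, in the style used by
    add_npy_pkg_config. Simplified illustration only."""
    out = template_text
    for key, value in subst_dict.items():
        out = out.replace("@%s@" % key, str(value))
    return out

template = """\
[meta]
Name: foo
Version: @version@

[variables]
prefix = @prefix@
"""

print(subst_vars(template, {"version": "1.0", "prefix": "/usr/local"}))
```

Running this prints the template with `Version: 1.0` and `prefix = /usr/local` filled in.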
The dictionary has an additional prefix substitution rule automatically added, which contains the install prefix (since this is not easy to get from setup.py). npy-pkg-config files can also be installed at the same location as used for numpy, using the path returned from the `get_npy_pkg_dir` function.

### Reusing a C library from another package

Information is easily retrieved from the [`get_info`](distutils/misc_util#numpy.distutils.misc_util.get_info "numpy.distutils.misc_util.get_info") function in [`numpy.distutils.misc_util`](distutils/misc_util#module-numpy.distutils.misc_util "numpy.distutils.misc_util"):

```
>>> info = np.distutils.misc_util.get_info('npymath')
>>> config.add_extension('foo', sources=['foo.c'], extra_info=info)
<numpy.distutils.extension.Extension('foo') at 0x...>
```

An additional list of paths to look for .ini files can be given to [`get_info`](distutils/misc_util#numpy.distutils.misc_util.get_info "numpy.distutils.misc_util.get_info").

Conversion of `.src` files
--------------------------

NumPy distutils supports automatic conversion of source files named <somefile>.src. This facility can be used to maintain very similar code blocks requiring only simple changes between blocks. During the build phase of setup, if a template file named <somefile>.src is encountered, a new file named <somefile> is constructed from the template and placed in the build directory to be used instead. Two forms of template conversion are supported. The first form occurs for files named <file>.ext.src where ext is a recognized Fortran extension (f, f90, f95, f77, for, ftn, pyf). The second form is used for all other cases. See [Conversion of .src files using Templates](distutils_guide#templating).

© 2005–2022 NumPy Developers. Licensed under the 3-clause BSD License. <https://numpy.org/doc/1.23/reference/distutils.html>

NumPy Distutils - Users Guide
=============================

Warning

`numpy.distutils` is deprecated, and will be removed for Python >= 3.12.
For more details, see [Status of numpy.distutils and migration advice](distutils_status_migration#distutils-status-migration)

SciPy structure
---------------

Currently the SciPy project consists of two packages:

* NumPy — it provides packages like:
  + numpy.distutils - extension to Python distutils
  + numpy.f2py - a tool to bind Fortran/C codes to Python
  + numpy.core - future replacement of Numeric and numarray packages
  + numpy.lib - extra utility functions
  + numpy.testing - numpy-style tools for unit testing
  + etc
* SciPy — a collection of scientific tools for Python.

The aim of this document is to describe how to add new tools to SciPy.

Requirements for SciPy packages
-------------------------------

SciPy consists of Python packages, called SciPy packages, that are available to Python users via the `scipy` namespace. Each SciPy package may contain other SciPy packages, and so on. Therefore, the SciPy directory tree is a tree of packages with arbitrary depth and width. Any SciPy package may depend on NumPy packages but the dependence on other SciPy packages should be kept minimal or zero.

A SciPy package contains, in addition to its sources, the following files and directories:

* `setup.py` — building script
* `__init__.py` — package initializer
* `tests/` — directory of unittests

Their contents are described below.

The `setup.py` file
-------------------

In order to add a Python package to SciPy, its build script (`setup.py`) must meet certain requirements. The most important requirement is that the package define a `configuration(parent_package='',top_path=None)` function which returns a dictionary suitable for passing to `numpy.distutils.core.setup(..)`. To simplify the construction of this dictionary, `numpy.distutils.misc_util` provides the `Configuration` class, described below.
### SciPy pure Python package example

Below is an example of a minimal `setup.py` file for a pure SciPy package:

```
#!/usr/bin/env python3
def configuration(parent_package='', top_path=None):
    from numpy.distutils.misc_util import Configuration
    config = Configuration('mypackage', parent_package, top_path)
    return config

if __name__ == "__main__":
    from numpy.distutils.core import setup
    #setup(**configuration(top_path='').todict())
    setup(configuration=configuration)
```

The arguments of the `configuration` function specify the name of the parent SciPy package (`parent_package`) and the directory location of the main `setup.py` script (`top_path`). These arguments, along with the name of the current package, should be passed to the `Configuration` constructor.

The `Configuration` constructor has a fourth optional argument, `package_path`, that can be used when package files are located in a different location than the directory of the `setup.py` file.

Remaining `Configuration` arguments are all keyword arguments that will be used to initialize attributes of the `Configuration` instance. Usually, these keywords are the same as the ones that the `setup(..)` function would expect, for example, `packages`, `ext_modules`, `data_files`, `include_dirs`, `libraries`, `headers`, `scripts`, `package_dir`, etc. However, direct specification of these keywords is not recommended as the content of these keyword arguments will not be processed or checked for consistency by the SciPy build system.

Finally, `Configuration` has a `.todict()` method that returns all the configuration data as a dictionary suitable for passing on to the `setup(..)` function.

### `Configuration` instance attributes

In addition to attributes that can be specified via keyword arguments to the `Configuration` constructor, a `Configuration` instance (let us denote it as `config`) has the following attributes that can be useful in writing setup scripts:

* `config.name` - full name of the current package.
The names of parent packages can be extracted as `config.name.split('.')`.
* `config.local_path` - path to the location of the current `setup.py` file.
* `config.top_path` - path to the location of the main `setup.py` file.

### `Configuration` instance methods

* `config.todict()` — returns a configuration dictionary suitable for passing to the `numpy.distutils.core.setup(..)` function.
* `config.paths(*paths)` — applies `glob.glob(..)` to items of `paths` if necessary. Fixes `paths` items that are relative to `config.local_path`.
* `config.get_subpackage(subpackage_name, subpackage_path=None)` — returns a list of subpackage configurations. The subpackage is looked for in the current directory under the name `subpackage_name` but the path can also be specified via the optional `subpackage_path` argument. If `subpackage_name` is specified as `None` then the subpackage name will be taken as the basename of `subpackage_path`. Any `*` used in subpackage names is expanded as a wildcard.
* `config.add_subpackage(subpackage_name, subpackage_path=None)` — add a SciPy subpackage configuration to the current one. The meaning and usage of arguments is explained above, see the `config.get_subpackage()` method.
* `config.add_data_files(*files)` — prepend `files` to the `data_files` list. If a `files` item is a tuple then its first element defines the suffix of where data files are copied relative to the package installation directory and the second element specifies the path to the data files. By default, data files are copied under the package installation directory. For example,

```
config.add_data_files('foo.dat',
                      ('fun', ['gun.dat', 'nun/pun.dat', '/tmp/sun.dat']),
                      'bar/car.dat',
                      '/full/path/to/can.dat',
                      )
```

will install data files to the following locations

```
<installation path of config.name package>/
 foo.dat
 fun/
  gun.dat
  pun.dat
  sun.dat
 bar/
  car.dat
 can.dat
```

The path to data files can be a function taking no arguments and returning path(s) to data files; this is useful when data files are generated while building the package.

* `config.add_data_dir(data_path)` — add directory `data_path` recursively to `data_files`. The whole directory tree starting at `data_path` will be copied under the package installation directory. If `data_path` is a tuple then its first element defines the suffix of where data files are copied relative to the package installation directory and the second element specifies the path to the data directory. By default, the data directory is copied under the package installation directory under the basename of `data_path`. For example,

```
config.add_data_dir('fun')  # fun/ contains foo.dat bar/car.dat
config.add_data_dir(('sun', 'fun'))
config.add_data_dir(('gun', '/full/path/to/fun'))
```

will install data files to the following locations

```
<installation path of config.name package>/
 fun/
  foo.dat
  bar/
   car.dat
 sun/
  foo.dat
  bar/
   car.dat
 gun/
  foo.dat
  bar/
   car.dat
```

* `config.add_include_dirs(*paths)` — prepend `paths` to the `include_dirs` list. This list will be visible to all extension modules of the current package.
* `config.add_headers(*files)` — prepend `files` to the `headers` list. By default, headers will be installed under the `<prefix>/include/pythonX.X/<config.name.replace('.','/')>/` directory. If a `files` item is a tuple then its first argument specifies the installation suffix relative to the `<prefix>/include/pythonX.X/` path. This is a Python distutils method; its use is discouraged for NumPy and SciPy in favour of `config.add_data_files(*files)`.
* `config.add_scripts(*files)` — prepend `files` to the `scripts` list.
Scripts will be installed under the `<prefix>/bin/` directory.

* `config.add_extension(name, sources, **kw)` — create and add an `Extension` instance to the `ext_modules` list. The first argument `name` defines the name of the extension module that will be installed under the `config.name` package. The second argument is a list of sources. The `add_extension` method also takes keyword arguments that are passed on to the `Extension` constructor. The list of allowed keywords is the following: `include_dirs`, `define_macros`, `undef_macros`, `library_dirs`, `libraries`, `runtime_library_dirs`, `extra_objects`, `extra_compile_args`, `extra_link_args`, `export_symbols`, `swig_opts`, `depends`, `language`, `f2py_options`, `module_dirs`, `extra_info`, `extra_f77_compile_args`, `extra_f90_compile_args`.

Note that the `config.paths` method is applied to all lists that may contain paths. `extra_info` is a dictionary or a list of dictionaries whose content will be appended to the keyword arguments. The list `depends` contains paths to files or directories that the sources of the extension module depend on. If any path in the `depends` list is newer than the extension module, then the module will be rebuilt.

The list of sources may contain functions ('source generators') with a pattern `def <funcname>(ext, build_dir): return <source(s) or None>`. If `funcname` returns `None`, no sources are generated. And if the `Extension` instance has no sources after processing all source generators, no extension module will be built. This is the recommended way to conditionally define extension modules. Source generator functions are called by the `build_src` sub-command of `numpy.distutils`.
For example, here is a typical source generator function:

```
def generate_source(ext, build_dir):
    import os
    from distutils.dep_util import newer
    target = os.path.join(build_dir, 'somesource.c')
    if newer(target, __file__):
        # (re)create the target file here, e.g.:
        with open(target, 'w') as f:
            f.write('/* generated source */\n')
    return target
```

The first argument contains the Extension instance and can be used to access its attributes like the `depends`, `sources`, etc. lists and modify them during the building process. The second argument gives a path to a build directory that must be used when writing files to disk.

* `config.add_library(name, sources, **build_info)` — add a library to the `libraries` list. Allowed keyword arguments are `depends`, `macros`, `include_dirs`, `extra_compiler_args`, `f2py_options`, `extra_f77_compile_args`, `extra_f90_compile_args`. See the `.add_extension()` method for more information on arguments.
* `config.have_f77c()` — return True if a Fortran 77 compiler is available (read: a simple Fortran 77 code compiled successfully).
* `config.have_f90c()` — return True if a Fortran 90 compiler is available (read: a simple Fortran 90 code compiled successfully).
* `config.get_version()` — return the version string of the current package, `None` if version information could not be detected. This method scans the files `__version__.py`, `<packagename>_version.py`, `version.py`, `__svn_version__.py` for string variables `version`, `__version__`, `<packagename>_version`.
* `config.make_svn_version_py()` — appends a data function to the `data_files` list that will generate the `__svn_version__.py` file in the current package directory. The file will be removed from the source directory when Python exits.
* `config.get_build_temp_dir()` — return a path to a temporary directory. This is the place where one should build temporary files.
* `config.get_distribution()` — return the distutils `Distribution` instance.
* `config.get_config_cmd()` — returns the `numpy.distutils` config command instance.
* `config.get_info(*names)` — returns information (from `system_info.get_info`) for all of the names in the argument list in a single dictionary.

### Conversion of `.src` files using Templates

NumPy distutils supports automatic conversion of source files named <somefile>.src. This facility can be used to maintain very similar code blocks requiring only simple changes between blocks. During the build phase of setup, if a template file named <somefile>.src is encountered, a new file named <somefile> is constructed from the template and placed in the build directory to be used instead. Two forms of template conversion are supported. The first form occurs for files named <file>.ext.src where ext is a recognized Fortran extension (f, f90, f95, f77, for, ftn, pyf). The second form is used for all other cases.

### Fortran files

This template converter will replicate all **function** and **subroutine** blocks in the file with names that contain `<...>` according to the rules in `<...>`. The number of comma-separated words in `<...>` determines the number of times the block is repeated. What these words are indicates what that repeat rule, `<...>`, should be replaced with in each block. All of the repeat rules in a block must contain the same number of comma-separated words indicating the number of times that block should be repeated. If the word in the repeat rule needs a comma, leftarrow, or rightarrow, then prepend it with a backslash `\`. If a word in the repeat rule matches `\<index>` then it will be replaced with the <index>-th word in the same repeat specification. There are two forms for the repeat rule: named and short.

#### Named repeat rule

A named repeat rule is useful when the same set of repeats must be used several times in a block. It is specified using `<rule1=item1, item2, item3, ..., itemN>`, where N is the number of times the block should be repeated. On each repeat of the block, the entire expression, `<...>`, will be replaced first with item1, and then with item2, and so forth until N repeats are accomplished. Once a named repeat specification has been introduced, the same repeat rule may be used **in the current block** by referring only to the name (i.e. `<rule1>`).

#### Short repeat rule

A short repeat rule looks like `<item1, item2, item3, ..., itemN>`. The rule specifies that the entire expression, `<...>`, should be replaced first with item1, and then with item2, and so forth until N repeats are accomplished.

#### Pre-defined names

The following predefined named repeat rules are available:

* `<prefix=s,d,c,z>`
* `<_c=s,d,c,z>`
* `<_t=real, double precision, complex, double complex>`
* `<ftype=real, double precision, complex, double complex>`
* `<ctype=float, double, complex_float, complex_double>`
* `<ftypereal=float, double precision, \0, \1>`
* `<ctypereal=float, double, \0, \1>`

### Other files

Non-Fortran files use a separate syntax for defining template blocks that should be repeated using a variable expansion similar to the named repeat rules of the Fortran-specific repeats.

NumPy Distutils preprocesses C source files (extension: `.c.src`) written in a custom templating language to generate C code. The `@` symbol is used to wrap macro-style variables to empower a string substitution mechanism that might describe (for instance) a set of data types.

The template language blocks are delimited by `/**begin repeat` and `/**end repeat**/` lines, which may also be nested using consecutively numbered delimiting lines such as `/**begin repeat1` and `/**end repeat1**/`:

1. `/**begin repeat` on a line by itself marks the beginning of a segment that should be repeated.
2. Named variable expansions are defined using `#name=item1, item2, item3, ..., itemN#` and placed on successive lines. These variables are replaced in each repeat block with the corresponding word. All named variables in the same repeat block must define the same number of words.
3. In specifying the repeat rule for a named variable, `item*N` is shorthand for `item, item, ..., item` repeated N times. In addition, parentheses in combination with `*N` can be used for grouping several items that should be repeated. Thus, `#name=(item1, item2)*4#` is equivalent to `#name=item1, item2, item1, item2, item1, item2, item1, item2#`.
4. `*/` on a line by itself marks the end of the variable expansion naming.
The next line is the first line that will be repeated using the named rules.
5. Inside the block to be repeated, the variables that should be expanded are specified as `@name@`.
6. `/**end repeat**/` on a line by itself marks the previous line as the last line of the block to be repeated.
7. A loop in the NumPy C source code may have a `@TYPE@` variable, targeted for string substitution, which is preprocessed to a number of otherwise identical loops with several strings such as `INT`, `LONG`, `UINT`, `ULONG`. The `@TYPE@` style syntax thus reduces code duplication and maintenance burden by mimicking languages that have generic type support.

The above rules may be clearer in the following template source example:

```
/* TIMEDELTA to non-float types */

/**begin repeat
 *
 * #TOTYPE = BYTE, UBYTE, SHORT, USHORT, INT, UINT, LONG, ULONG,
 *           LONGLONG, ULONGLONG, DATETIME,
 *           TIMEDELTA#
 * #totype = npy_byte, npy_ubyte, npy_short, npy_ushort, npy_int, npy_uint,
 *           npy_long, npy_ulong, npy_longlong, npy_ulonglong,
 *           npy_datetime, npy_timedelta#
 */

/**begin repeat1
 *
 * #FROMTYPE = TIMEDELTA#
 * #fromtype = npy_timedelta#
 */
static void
@FROMTYPE@_to_@TOTYPE@(void *input, void *output, npy_intp n,
        void *NPY_UNUSED(aip), void *NPY_UNUSED(aop))
{
    const @fromtype@ *ip = input;
    @totype@ *op = output;

    while (n--) {
        *op++ = (@totype@)*ip++;
    }
}
/**end repeat1**/

/**end repeat**/
```

The preprocessing of generically-typed C source files (whether in NumPy proper or in any third party package using NumPy Distutils) is performed by [conv_template.py](https://github.com/numpy/numpy/blob/main/numpy/distutils/conv_template.py). The type-specific C files generated (extension: `.c`) by these modules during the build process are ready to be compiled. This form of generic typing is also supported for C header files (preprocessed to produce `.h` files).
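The substitution step those rules describe can be modeled in a few lines of Python. This is only an illustrative re-implementation for a single, non-nested repeat block; the real parser in `conv_template.py` additionally understands nesting and the `item*N` / `(item1, item2)*N` shorthands:

```python
import re

def expand_repeat_block(header, body):
    """Expand one non-nested repeat block (toy model, not conv_template.py).

    `header` holds '#name = word1, word2#' definitions; `body` is the
    template text containing @name@-style variables.
    """
    names = {}
    for m in re.finditer(r"#\s*(\w+)\s*=\s*([^#]*)#", header):
        names[m.group(1)] = [w.strip() for w in m.group(2).split(",")]
    # All variables in one block must define the same number of words.
    counts = {len(words) for words in names.values()}
    if len(counts) != 1:
        raise ValueError("all variables must define the same number of words")
    out = []
    for i in range(counts.pop()):
        text = body
        for name, words in names.items():
            text = text.replace("@%s@" % name, words[i])
        out.append(text)
    return "".join(out)

header = "#totype = npy_byte, npy_short#"
body = "*op++ = (@totype@)*ip++;\n"
print(expand_repeat_block(header, body))
# *op++ = (npy_byte)*ip++;
# *op++ = (npy_short)*ip++;
```

Each repeat of the body is emitted once per word, with every `@name@` replaced by that repeat's word, which is exactly how the `@FROMTYPE@_to_@TOTYPE@` loops above multiply into one function per type pair.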
### Useful functions in `numpy.distutils.misc_util`

* `get_numpy_include_dirs()` — return a list of NumPy base include directories. NumPy base include directories contain header files such as `numpy/arrayobject.h`, `numpy/funcobject.h` etc. For installed NumPy the returned list has length 1, but when building NumPy the list may contain more directories, for example, a path to the `config.h` file that the `numpy/base/setup.py` file generates and is used by `numpy` header files.
* `append_path(prefix, path)` — smart-append `path` to `prefix`.
* `gpaths(paths, local_path='')` — apply glob to paths and prepend `local_path` if needed.
* `njoin(*path)` — join pathname components, convert a `/`-separated path to an `os.sep`-separated path, and resolve `..` and `.` from paths. E.g. `njoin('a',['b','./c'],'..','g') -> os.path.join('a','b','g')`.
* `minrelpath(path)` — resolve dots in `path`.
* `rel_path(path, parent_path)` — return `path` relative to `parent_path`.
* `get_cmd(cmdname, _cache={})` — return a `numpy.distutils` command instance.
* `all_strings(lst)`
* `has_f_sources(sources)`
* `has_cxx_sources(sources)`
* `filter_sources(sources)` — return `c_sources, cxx_sources, f_sources, fmodule_sources`
* `get_dependencies(sources)`
* `is_local_src_dir(directory)`
* `get_ext_source_files(ext)`
* `get_script_files(scripts)`
* `get_lib_source_files(lib)`
* `get_data_files(data)`
* `dot_join(*args)` — join non-zero arguments with a dot.
* `get_frame(level=0)` — return a frame object from the call stack with the given level.
* `cyg2win32(path)`
* `mingw32()` — return `True` when using the mingw32 environment.
* `terminal_has_colors()`, `red_text(s)`, `green_text(s)`, `yellow_text(s)`, `blue_text(s)`, `cyan_text(s)`
* `get_path(mod_name, parent_path=None)` — return the path of a module relative to `parent_path` when given. Also handles the `__main__` and `__builtin__` modules.
* `allpath(name)` — replace `/` with `os.sep` in `name`.
* `cxx_ext_match`, `fortran_ext_match`, `f90_ext_match`, `f90_module_name_match`

### `numpy.distutils.system_info` module

* `get_info(name, notfound_action=0)`
* `combine_paths(*args, **kws)`
* `show_all()`

### `numpy.distutils.cpuinfo` module

* `cpuinfo`

### `numpy.distutils.log` module

* `set_verbosity(v)`

### `numpy.distutils.exec_command` module

* `get_pythonexe()`
* `find_executable(exe, path=None)`
* `exec_command(command, execute_in='', use_shell=None, use_tee=None, **env)`

The `__init__.py` file
----------------------

The header of a typical SciPy `__init__.py` is:

```
"""
Package docstring, typically with a brief description and function listing.
"""

# import functions into module namespace
from .subpackage import *
...

__all__ = [s for s in dir() if not s.startswith('_')]

from numpy.testing import Tester
test = Tester().test
bench = Tester().bench
```

Extra features in NumPy Distutils
---------------------------------

### Specifying config_fc options for libraries in setup.py script

It is possible to specify config_fc options in setup.py scripts. For example, using `config.add_library('library', sources=[], config_fc={'noopt': (__file__, 1)})` will compile the `library` sources without optimization flags. It's recommended to specify only those config_fc options in such a way that are compiler independent.

### Getting extra Fortran 77 compiler options from source

Some old Fortran codes need special compiler options in order to work correctly. In order to specify compiler options per source file, the `numpy.distutils` Fortran compiler looks for the following pattern:

```
CF77FLAGS(<fcompiler type>) = <fcompiler f77flags>
```

in the first 20 lines of the source and uses the `f77flags` for the specified type of fcompiler (the first character `C` is optional).

TODO: This feature can be easily extended for Fortran 90 codes as well. Let us know if you would need such a feature.

© 2005–2022 NumPy Developers
Licensed under the 3-clause BSD License.
<https://numpy.org/doc/1.23/reference/distutils_guide.html>

Status of numpy.distutils and migration advice
==============================================

[`numpy.distutils`](distutils#module-numpy.distutils "numpy.distutils") has been deprecated in NumPy `1.23.0`. It will be removed for Python 3.12; for Python <= 3.11 it will not be removed until 2 years after the Python 3.12 release (Oct 2025).

Warning

`numpy.distutils` is only tested with `setuptools < 60.0`, newer versions may break. See [Interaction of numpy.distutils with setuptools](#numpy-setuptools-interaction) for details.

Migration advice
----------------

It is **not necessary** to migrate immediately - the release date for Python 3.12 is October 2023. It may be beneficial to wait with migrating until there are examples from other projects to follow (see below).

There are several build systems which are good options to migrate to. Assuming you have compiled code in your package (if not, we recommend using [Flit](https://flit.readthedocs.io)) and you want to be using a well-designed, modern and reliable build system, we recommend:

1. [Meson](https://mesonbuild.com/)
2.
[CMake](https://cmake.org/) (or [scikit-build](https://scikit-build.readthedocs.io/) as an interface to CMake)

If you have modest needs (only simple Cython/C extensions, and perhaps nested `setup.py` files) and have been happy with `numpy.distutils` so far, you can also consider switching to `setuptools`. Note that most functionality of `numpy.distutils` is unlikely to be ported to `setuptools`.

### Moving to Meson

SciPy is moving to Meson for its 1.9.0 release, planned for July 2022. During this process, any remaining issues with Meson's Python support and achieving feature parity with `numpy.distutils` will be resolved. *Note: parity means a large superset, but right now some BLAS/LAPACK support is missing and there are a few open issues related to Cython.* SciPy uses almost all functionality that `numpy.distutils` offers, so if SciPy has successfully made a release with Meson as the build system, there should be no blockers left to migrate, and SciPy will be a good reference for other packages that are migrating.

For more details about the SciPy migration, see:

* [RFC: switch to Meson as a build system](https://github.com/scipy/scipy/issues/13615)
* [Tracking issue for Meson support](https://github.com/rgommers/scipy/issues/22)

NumPy itself will very likely migrate to Meson as well, once the SciPy migration is done.

### Moving to CMake / scikit-build

See the [scikit-build documentation](https://scikit-build.readthedocs.io/en/latest/) for how to use scikit-build. Please note that as of Feb 2022, scikit-build still relies on setuptools, so it's probably not quite ready yet for a post-distutils world. How quickly this changes depends on funding; the current (Feb 2022) estimate is that if funding arrives then a viable `numpy.distutils` replacement will be ready at the end of 2022, and a very polished replacement mid-2023. For more details on this, see [this blog post by <NAME>](https://iscinumpy.gitlab.io/post/scikit-build-proposal/).
### Moving to `setuptools`

For projects that only use `numpy.distutils` for historical reasons, and do not actually use features beyond those that `setuptools` also supports, moving to `setuptools` is likely the solution which costs the least effort. To assess that, these are the `numpy.distutils` features that are *not* present in `setuptools`:

* Nested `setup.py` files
* Fortran build support
* BLAS/LAPACK library support (OpenBLAS, MKL, ATLAS, Netlib LAPACK/BLAS, BLIS, 64-bit ILP interface, etc.)
* Support for a few other scientific libraries, like FFTW and UMFPACK
* Better MinGW support
* Per-compiler build flag customization (e.g. `-O3` and `SSE2` flags are default)
* a simple user build config system, see [site.cfg.example](https://github.com/numpy/numpy/blob/master/site.cfg.example)
* SIMD intrinsics support

The most widely used feature is nested `setup.py` files. This feature will likely be ported to `setuptools` (see [gh-18588](https://github.com/numpy/numpy/issues/18588) for status). Projects only using that feature could move to `setuptools` after that is done. In case a project uses only a couple of `setup.py` files, it also could make sense to simply aggregate all the content of those files into a single `setup.py` file and then move to `setuptools`. This involves dropping all `Configuration` instances, and using `Extension` instead. E.g.:

```
from distutils.core import setup
from distutils.extension import Extension

setup(name='foobar',
      version='1.0',
      ext_modules=[
          Extension('foopkg.foo', ['foo.c']),
          Extension('barpkg.bar', ['bar.c']),
      ],
)
```

For more details, see the [setuptools documentation](https://setuptools.pypa.io/en/latest/setuptools.html).

Interaction of `numpy.distutils` with `setuptools`
--------------------------------------------------

It is recommended to use `setuptools < 60.0`. Newer versions may work, but are not guaranteed to.
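One way to hold the build environment to a known-good `setuptools` is an upper bound in the `[build-system]` table of `pyproject.toml`. A minimal sketch (the exact package list and version bounds are illustrative, not prescriptive):

```
# pyproject.toml -- pin the build requirement so a future setuptools
# release cannot silently break a numpy.distutils-based build.
[build-system]
requires = [
    "setuptools<60.0",
    "wheel",
    "numpy",
]
build-backend = "setuptools.build_meta"
```

Build frontends such as `pip` read this table when building the package, so the bound applies in isolated build environments as well.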
The reason for this is that `setuptools` 60.0 enabled a vendored copy of `distutils`, including backwards incompatible changes that affect some functionality in `numpy.distutils`.

If you are using only simple Cython or C extensions with minimal use of `numpy.distutils` functionality beyond nested `setup.py` files (its most popular feature, see [`Configuration`](distutils#numpy.distutils.misc_util.Configuration "numpy.distutils.misc_util.Configuration")), then the latest `setuptools` is likely to continue working. In case of problems, you can also try `SETUPTOOLS_USE_DISTUTILS=stdlib` to avoid the backwards incompatible changes in `setuptools`.

Whatever you do, it is recommended to put an upper bound on your `setuptools` build requirement in `pyproject.toml` to avoid future breakage - see [For downstream package authors](../user/depending_on_numpy#for-downstream-package-authors).

<https://numpy.org/doc/1.23/reference/distutils_status_migration.html>

NumPy C-API
===========

NumPy provides a C-API to enable users to extend the system and get access to the array object for use in other routines. The best way to truly understand the C-API is to read the source code. If you are unfamiliar with (C) source code, however, this can be a daunting experience at first. Be assured that the task becomes easier with practice, and you may be surprised at how simple the C-code can be to understand. Even if you don't think you can write C-code from scratch, it is much easier to understand and modify already-written source code than create it *de novo*. Python extensions are especially straightforward to understand because they all have a very similar structure. Admittedly, NumPy is not a trivial extension to Python, and may take a little more snooping to grasp.
This is especially true because of the code-generation techniques, which simplify maintenance of very similar code, but can make the code a little less readable to beginners. Still, with a little persistence, the code can be opened to your understanding. It is my hope that this guide to the C-API can assist in the process of becoming familiar with the compiled-level work that can be done with NumPy in order to squeeze that last bit of necessary speed out of your code.

* [Python Types and C-Structures](types-and-structures)
  + [New Python Types Defined](types-and-structures#new-python-types-defined)
  + [Other C-Structures](types-and-structures#other-c-structures)
* [System configuration](config)
  + [Data type sizes](config#data-type-sizes)
  + [Platform information](config#platform-information)
  + [Compiler directives](config#compiler-directives)
  + [Interrupt Handling](config#interrupt-handling)
* [Data Type API](dtype)
  + [Enumerated Types](dtype#enumerated-types)
  + [Defines](dtype#defines)
  + [C-type names](dtype#c-type-names)
  + [Printf Formatting](dtype#printf-formatting)
* [Array API](array)
  + [Array structure and data access](array#array-structure-and-data-access)
  + [Creating arrays](array#creating-arrays)
  + [Dealing with types](array#dealing-with-types)
  + [Array flags](array#array-flags)
  + [Array method alternative API](array#array-method-alternative-api)
  + [Functions](array#functions)
  + [Auxiliary Data With Object Semantics](array#auxiliary-data-with-object-semantics)
  + [Array Iterators](array#array-iterators)
  + [Broadcasting (multi-iterators)](array#broadcasting-multi-iterators)
  + [Neighborhood iterator](array#neighborhood-iterator)
  + [Array mapping](array#array-mapping)
  + [Array Scalars](array#array-scalars)
  + [Data-type descriptors](array#data-type-descriptors)
  + [Conversion Utilities](array#conversion-utilities)
  + [Miscellaneous](array#miscellaneous)
* [Array Iterator API](iterator)
  + [Array Iterator](iterator#array-iterator)
  + [Simple Iteration Example](iterator#simple-iteration-example)
  + [Simple Multi-Iteration Example](iterator#simple-multi-iteration-example)
  + [Iterator Data Types](iterator#iterator-data-types)
  + [Construction and Destruction](iterator#construction-and-destruction)
  + [Functions For Iteration](iterator#functions-for-iteration)
  + [Converting from Previous NumPy Iterators](iterator#converting-from-previous-numpy-iterators)
* [UFunc API](ufunc)
  + [Constants](ufunc#constants)
  + [Macros](ufunc#macros)
  + [Types](ufunc#types)
  + [Functions](ufunc#functions)
  + [Generic functions](ufunc#generic-functions)
  + [Importing the API](ufunc#importing-the-api)
* [Generalized Universal Function API](generalized-ufuncs)
  + [Definitions](generalized-ufuncs#definitions)
  + [Details of Signature](generalized-ufuncs#details-of-signature)
  + [C-API for implementing Elementary Functions](generalized-ufuncs#c-api-for-implementing-elementary-functions)
* [NumPy core libraries](coremath)
  + [NumPy core math library](coremath#numpy-core-math-library)
* [C API Deprecations](deprecations)
  + [Background](deprecations#background)
  + [Deprecation Mechanism NPY_NO_DEPRECATED_API](deprecations#deprecation-mechanism-npy-no-deprecated-api)
* [Memory management in NumPy](data_memory)
  + [Historical overview](data_memory#historical-overview)
  + [Configurable memory routines in NumPy (NEP 49)](data_memory#configurable-memory-routines-in-numpy-nep-49)
  + [What happens when deallocating if there is no policy set](data_memory#what-happens-when-deallocating-if-there-is-no-policy-set)

<https://numpy.org/doc/1.23/reference/c-api/index.html>

CPU/SIMD Optimizations
======================

NumPy comes with a flexible working mechanism that allows it to harness the SIMD features that CPUs own, in order to provide faster and more stable performance on all popular platforms. Currently, NumPy supports the X86, IBM/Power, ARM7 and ARM8 architectures.
The optimization process in NumPy is carried out in three layers:

* Code is *written* using the universal intrinsics, a set of types, macros and functions that are mapped to each supported instruction set using guards that enable them only when the compiler recognizes them. This allows us to generate multiple kernels for the same functionality, in which each generated kernel represents a set of instructions related to one or more CPU features. The first kernel represents the minimum (baseline) CPU features, and the other kernels represent the additional (dispatched) CPU features.
* At *compile* time, CPU build options are used to define the minimum and additional features to support, based on user choice and compiler support. The appropriate intrinsics are overlaid with the platform / architecture intrinsics, and multiple kernels are compiled.
* At *runtime import*, the CPU is probed for the set of supported CPU features. A mechanism is used to grab the pointer to the most appropriate kernel, and this will be the one called for the function.

Note

The NumPy community had a deep discussion before implementing this work; please check [NEP-38](https://numpy.org/neps/nep-0038-SIMD-optimizations.html) for more clarification.
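The three layers can be modeled in Python purely as an illustration. The kernel names, feature names, and probe below are invented for this sketch; NumPy's real mechanism is implemented with C macros and compiled kernels:

```python
# Toy model of NumPy's CPU dispatch: several "kernels" exist for
# different feature sets, and at import time the best supported one is
# selected once and cached behind a single entry point.

def add_baseline(a, b):          # minimum-feature kernel (always available)
    return [x + y for x, y in zip(a, b)]

def add_avx2(a, b):              # stand-in for an AVX2-specialized kernel
    return [x + y for x, y in zip(a, b)]

# Dispatched kernels, most capable first.
_KERNELS = [("AVX2", add_avx2), ("baseline", add_baseline)]

def _probe_cpu():
    """Pretend runtime probe; a real probe would query CPUID or similar."""
    return {"baseline"}          # this machine supports only the baseline

def select_kernel(supported=None):
    """Grab the most appropriate kernel for the probed feature set."""
    supported = supported or _probe_cpu()
    for feature, kernel in _KERNELS:
        if feature in supported or feature == "baseline":
            return kernel
    return add_baseline

add = select_kernel()            # "grab the pointer" once, at import time
print(add([1, 2], [3, 4]))       # -> [4, 6]
```

The key property the model shows: selection happens once at import, so every subsequent call goes straight to the chosen kernel with no per-call feature check.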
* [CPU build options](build-options)
  + [Description](build-options#description)
  + [Quick Start](build-options#quick-start)
    - [I am building NumPy for my local use](build-options#i-am-building-numpy-for-my-local-use)
    - [I do not want to support the old processors of the `x86` architecture](build-options#i-do-not-want-to-support-the-old-processors-of-the-x86-architecture)
    - [I'm facing the same case above but with `ppc64` architecture](build-options#i-m-facing-the-same-case-above-but-with-ppc64-architecture)
    - [Having issues with `AVX512` features?](build-options#having-issues-with-avx512-features)
  + [Supported Features](build-options#supported-features)
    - [On x86](build-options#on-x86)
    - [On IBM/POWER big-endian](build-options#on-ibm-power-big-endian)
    - [On IBM/POWER little-endian](build-options#on-ibm-power-little-endian)
    - [On ARMv7/A32](build-options#on-armv7-a32)
    - [On ARMv8/A64](build-options#on-armv8-a64)
    - [On IBM/ZSYSTEM(S390X)](build-options#on-ibm-zsystem-s390x)
  + [Special Options](build-options#special-options)
  + [Behaviors](build-options#behaviors)
  + [Platform differences](build-options#platform-differences)
    - [On x86::Intel Compiler](build-options#on-x86-intel-compiler)
    - [On x86::Microsoft Visual C/C++](build-options#on-x86-microsoft-visual-c-c)
  + [Build report](build-options#build-report)
  + [Runtime Trace](build-options#runtime-trace)
* [How does the CPU dispatcher work?](how-it-works)
  + [1- Configuration](how-it-works#configuration)
  + [2- Discovering the environment](how-it-works#discovering-the-environment)
  + [3- Validating the requested optimizations](how-it-works#validating-the-requested-optimizations)
  + [4- Generating the main configuration header](how-it-works#generating-the-main-configuration-header)
  + [5- Dispatch-able sources and configuration statements](how-it-works#dispatch-able-sources-and-configuration-statements)
<https://numpy.org/doc/1.23/reference/simd/index.html>

NumPy and SWIG
==============

* [numpy.i: a SWIG Interface File for NumPy](swig.interface-file)
  + [Introduction](swig.interface-file#introduction)
  + [Using numpy.i](swig.interface-file#using-numpy-i)
  + [Available Typemaps](swig.interface-file#available-typemaps)
  + [NumPy Array Scalars and SWIG](swig.interface-file#numpy-array-scalars-and-swig)
  + [Helper Functions](swig.interface-file#helper-functions)
  + [Beyond the Provided Typemaps](swig.interface-file#beyond-the-provided-typemaps)
  + [Summary](swig.interface-file#summary)
* [Testing the numpy.i Typemaps](swig.testing)
  + [Introduction](swig.testing#introduction)
  + [Testing Organization](swig.testing#testing-organization)
  + [Testing Header Files](swig.testing#testing-header-files)
  + [Testing Source Files](swig.testing#testing-source-files)
  + [Testing SWIG Interface Files](swig.testing#testing-swig-interface-files)
  + [Testing Python Scripts](swig.testing#testing-python-scripts)

<https://numpy.org/doc/1.23/reference/swig.html>

The N-dimensional array (ndarray)
=================================

An [`ndarray`](generated/numpy.ndarray#numpy.ndarray "numpy.ndarray") is a (usually fixed-size) multidimensional container of items of the same type and size. The number of dimensions and items in an array is defined by its [`shape`](generated/numpy.ndarray.shape#numpy.ndarray.shape "numpy.ndarray.shape"), which is a [`tuple`](https://docs.python.org/3/library/stdtypes.html#tuple "(in Python v3.10)") of *N* non-negative integers that specify the sizes of each dimension. The type of items in the array is specified by a separate [data-type object (dtype)](arrays.dtypes#arrays-dtypes), one of which is associated with each ndarray.
As with other container objects in Python, the contents of an [`ndarray`](generated/numpy.ndarray#numpy.ndarray "numpy.ndarray") can be accessed and modified by [indexing or slicing](arrays.indexing#arrays-indexing) the array (using, for example, *N* integers), and via the methods and attributes of the [`ndarray`](generated/numpy.ndarray#numpy.ndarray "numpy.ndarray").

Different [`ndarrays`](generated/numpy.ndarray#numpy.ndarray "numpy.ndarray") can share the same data, so that changes made in one [`ndarray`](generated/numpy.ndarray#numpy.ndarray "numpy.ndarray") may be visible in another. That is, an ndarray can be a *"view"* to another ndarray, and the data it is referring to is taken care of by the *"base"* ndarray. ndarrays can also be views to memory owned by Python [`strings`](https://docs.python.org/3/library/stdtypes.html#str "(in Python v3.10)") or objects implementing the `buffer` or [array](arrays.interface#arrays-interface) interfaces.

#### Example

A 2-dimensional array of size 2 x 3, composed of 4-byte integer elements:

```
>>> x = np.array([[1, 2, 3], [4, 5, 6]], np.int32)
>>> type(x)
<class 'numpy.ndarray'>
>>> x.shape
(2, 3)
>>> x.dtype
dtype('int32')
```

The array can be indexed using Python container-like syntax:

```
>>> # The element of x in the *second* row, *third* column, namely, 6.
>>> x[1, 2]
6
```

For example [slicing](arrays.indexing#arrays-indexing) can produce views of the array:

```
>>> y = x[:,1]
>>> y
array([2, 5], dtype=int32)
>>> y[0] = 9 # this also changes the corresponding element in x
>>> y
array([9, 5], dtype=int32)
>>> x
array([[1, 9, 3],
       [4, 5, 6]], dtype=int32)
```

Constructing arrays
-------------------

New arrays can be constructed using the routines detailed in [Array creation routines](routines.array-creation#routines-array-creation), and also by using the low-level [`ndarray`](generated/numpy.ndarray#numpy.ndarray "numpy.ndarray") constructor:

| | |
| --- | --- |
| [`ndarray`](generated/numpy.ndarray#numpy.ndarray "numpy.ndarray")(shape[, dtype, buffer, offset, ...]) | An array object represents a multidimensional, homogeneous array of fixed-size items. |

Indexing arrays
---------------

Arrays can be indexed using an extended Python slicing syntax, `array[selection]`. Similar syntax is also used for accessing fields in a [structured data type](../glossary#term-structured-data-type).

See also [Array Indexing](arrays.indexing#arrays-indexing).

Internal memory layout of an ndarray
------------------------------------

An instance of class [`ndarray`](generated/numpy.ndarray#numpy.ndarray "numpy.ndarray") consists of a contiguous one-dimensional segment of computer memory (owned by the array, or by some other object), combined with an indexing scheme that maps *N* integers into the location of an item in the block. The ranges in which the indices can vary are specified by the [`shape`](generated/numpy.ndarray.shape#numpy.ndarray.shape "numpy.ndarray.shape") of the array. How many bytes each item takes and how the bytes are interpreted is defined by the [data-type object](arrays.dtypes#arrays-dtypes) associated with the array.

A segment of memory is inherently 1-dimensional, and there are many different schemes for arranging the items of an *N*-dimensional array in a 1-dimensional block.
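The byte-offset arithmetic a strided scheme performs can be checked directly from Python. A small illustrative session (assumes NumPy is installed; the array matches the 2 x 3 example above):

```python
import numpy as np

# For a C-ordered 2x3 int32 array, strides are (12, 4) bytes:
# one row skips 3 items * 4 bytes, one column skips 4 bytes.
x = np.array([[1, 2, 3], [4, 5, 6]], np.int32)
assert x.strides == (3 * x.itemsize, x.itemsize)

# Offset of element x[1, 2] from the start of the memory block:
index = (1, 2)
offset = sum(s * n for s, n in zip(x.strides, index))
assert offset == 1 * 12 + 2 * 4 == 20

# Reading one int32 at that byte offset recovers the element itself.
value = np.frombuffer(x.tobytes(), dtype=np.int32,
                      count=1, offset=offset)[0]
assert value == x[1, 2] == 6
print(offset, value)   # -> 20 6
```

The same dot product of strides and indices is exactly what NumPy's indexing machinery computes in C.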
NumPy is flexible, and [`ndarray`](generated/numpy.ndarray#numpy.ndarray "numpy.ndarray") objects can accommodate any *strided indexing scheme*. In a strided scheme, the N-dimensional index \((n_0, n_1, ..., n_{N-1})\) corresponds to the offset (in bytes): \[n_{\mathrm{offset}} = \sum_{k=0}^{N-1} s_k n_k\] from the beginning of the memory block associated with the array. Here, \(s_k\) are integers which specify the [`strides`](generated/numpy.ndarray.strides#numpy.ndarray.strides "numpy.ndarray.strides") of the array. The [column-major](../glossary#term-column-major) order (used, for example, in the Fortran language and in *Matlab*) and [row-major](../glossary#term-row-major) order (used in C) schemes are just specific kinds of strided scheme, and correspond to memory that can be *addressed* by the strides: \[s_k^{\mathrm{column}} = \mathrm{itemsize} \prod_{j=0}^{k-1} d_j , \quad s_k^{\mathrm{row}} = \mathrm{itemsize} \prod_{j=k+1}^{N-1} d_j .\] where \(d_j\) `= self.shape[j]`. Both the C and Fortran orders are [contiguous](../glossary#term-contiguous), *i.e.,* single-segment, memory layouts, in which every part of the memory block can be accessed by some combination of the indices. Note `Contiguous arrays` and `single-segment arrays` are synonymous and are used interchangeably throughout the documentation. While a C-style and Fortran-style contiguous array, which has the corresponding flags set, can be addressed with the above strides, the actual strides may be different. This can happen in two cases: 1. If `self.shape[k] == 1` then for any legal index `index[k] == 0`. This means that in the formula for the offset \(n_k = 0\) and thus \(s_k n_k = 0\) and the value of \(s_k\) `= self.strides[k]` is arbitrary. 2. If an array has no elements (`self.size == 0`) there is no legal index and the strides are never used. Any array with no elements may be considered C-style and Fortran-style contiguous. Point 1. 
means that `self` and `self.squeeze()` always have the same contiguity and `aligned` flags value. This also means that even a high dimensional array could be C-style and Fortran-style contiguous at the same time.

An array is considered aligned if the memory offsets for all elements and the base offset itself are multiples of `self.itemsize`. Understanding `memory-alignment` leads to better performance on most hardware.

Warning

It does *not* generally hold that `self.strides[-1] == self.itemsize` for C-style contiguous arrays or `self.strides[0] == self.itemsize` for Fortran-style contiguous arrays is true. `NPY_RELAXED_STRIDES_DEBUG=1` can be used to help find errors when incorrectly relying on the strides in C-extension code (see below warning).

Data in new [`ndarrays`](generated/numpy.ndarray#numpy.ndarray "numpy.ndarray") is in the [row-major](../glossary#term-row-major) (C) order, unless otherwise specified, but, for example, [basic array slicing](arrays.indexing#arrays-indexing) often produces [views](../glossary#term-view) in a different scheme.

Note

Several algorithms in NumPy work on arbitrarily strided arrays. However, some algorithms require single-segment arrays. When an irregularly strided array is passed in to such algorithms, a copy is automatically made.

Array attributes
----------------

Array attributes reflect information that is intrinsic to the array itself. Generally, accessing an array through its attributes allows you to get and sometimes set intrinsic properties of the array without creating a new array. The exposed attributes are the core parts of an array and only some of them can be reset meaningfully without creating a new array. Information on each attribute is given below.

### Memory layout

The following attributes contain information about the memory layout of the array:

| | |
| --- | --- |
| [`ndarray.flags`](generated/numpy.ndarray.flags#numpy.ndarray.flags "numpy.ndarray.flags") | Information about the memory layout of the array. |
| [`ndarray.shape`](generated/numpy.ndarray.shape#numpy.ndarray.shape "numpy.ndarray.shape") | Tuple of array dimensions. |
| [`ndarray.strides`](generated/numpy.ndarray.strides#numpy.ndarray.strides "numpy.ndarray.strides") | Tuple of bytes to step in each dimension when traversing an array. |
| [`ndarray.ndim`](generated/numpy.ndarray.ndim#numpy.ndarray.ndim "numpy.ndarray.ndim") | Number of array dimensions. |
| [`ndarray.data`](generated/numpy.ndarray.data#numpy.ndarray.data "numpy.ndarray.data") | Python buffer object pointing to the start of the array's data. |
| [`ndarray.size`](generated/numpy.ndarray.size#numpy.ndarray.size "numpy.ndarray.size") | Number of elements in the array. |
| [`ndarray.itemsize`](generated/numpy.ndarray.itemsize#numpy.ndarray.itemsize "numpy.ndarray.itemsize") | Length of one array element in bytes. |
| [`ndarray.nbytes`](generated/numpy.ndarray.nbytes#numpy.ndarray.nbytes "numpy.ndarray.nbytes") | Total bytes consumed by the elements of the array. |
| [`ndarray.base`](generated/numpy.ndarray.base#numpy.ndarray.base "numpy.ndarray.base") | Base object if memory is from some other object. |

### Data type

See also [Data type objects](arrays.dtypes#arrays-dtypes)

The data type object associated with the array can be found in the [`dtype`](generated/numpy.ndarray.dtype#numpy.ndarray.dtype "numpy.ndarray.dtype") attribute:

| | |
| --- | --- |
| [`ndarray.dtype`](generated/numpy.ndarray.dtype#numpy.ndarray.dtype "numpy.ndarray.dtype") | Data-type of the array's elements. |

### Other attributes

| | |
| --- | --- |
| [`ndarray.T`](generated/numpy.ndarray.t#numpy.ndarray.T "numpy.ndarray.T") | The transposed array. |
| [`ndarray.real`](generated/numpy.ndarray.real#numpy.ndarray.real "numpy.ndarray.real") | The real part of the array. |
| [`ndarray.imag`](generated/numpy.ndarray.imag#numpy.ndarray.imag "numpy.ndarray.imag") | The imaginary part of the array. |
| [`ndarray.flat`](generated/numpy.ndarray.flat#numpy.ndarray.flat "numpy.ndarray.flat") | A 1-D iterator over the array. |

### Array interface

See also [The array interface protocol](arrays.interface#arrays-interface).

| | |
| --- | --- |
| [`__array_interface__`](arrays.interface#object.__array_interface__ "object.__array_interface__") | Python-side of the array interface |
| [`__array_struct__`](arrays.interface#object.__array_struct__ "object.__array_struct__") | C-side of the array interface |

### [`ctypes`](https://docs.python.org/3/library/ctypes.html#module-ctypes "(in Python v3.10)") foreign function interface

| | |
| --- | --- |
| [`ndarray.ctypes`](generated/numpy.ndarray.ctypes#numpy.ndarray.ctypes "numpy.ndarray.ctypes") | An object to simplify the interaction of the array with the ctypes module. |

Array methods
-------------

An [`ndarray`](generated/numpy.ndarray#numpy.ndarray "numpy.ndarray") object has many methods which operate on or with the array in some fashion, typically returning an array result. These methods are briefly explained below. (Each method's docstring has a more complete description.)
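A small, illustrative session showing a few of these methods in action, including their free-function counterparts in `numpy` (assumes NumPy is installed; the sample array is arbitrary):

```python
import numpy as np

x = np.array([[3, 1], [2, 4]])

# Many ndarray methods also exist as same-named free functions in numpy.
assert x.sum() == np.sum(x) == 10
assert x.max() == np.max(x) == 4
assert (x.transpose() == np.transpose(x)).all()

# Methods typically return a new array result.
print(x.reshape(1, 4))   # -> [[3 1 2 4]]
```

Which spelling to use is largely a matter of style; the free functions additionally accept array-like arguments such as plain lists.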
For the following methods there are also corresponding functions in [`numpy`](index#module-numpy "numpy"): [`all`](generated/numpy.all#numpy.all "numpy.all"), [`any`](generated/numpy.any#numpy.any "numpy.any"), [`argmax`](generated/numpy.argmax#numpy.argmax "numpy.argmax"), [`argmin`](generated/numpy.argmin#numpy.argmin "numpy.argmin"), [`argpartition`](generated/numpy.argpartition#numpy.argpartition "numpy.argpartition"), [`argsort`](generated/numpy.argsort#numpy.argsort "numpy.argsort"), [`choose`](generated/numpy.choose#numpy.choose "numpy.choose"), [`clip`](generated/numpy.clip#numpy.clip "numpy.clip"), [`compress`](generated/numpy.compress#numpy.compress "numpy.compress"), [`copy`](generated/numpy.copy#numpy.copy "numpy.copy"), [`cumprod`](generated/numpy.cumprod#numpy.cumprod "numpy.cumprod"), [`cumsum`](generated/numpy.cumsum#numpy.cumsum "numpy.cumsum"), [`diagonal`](generated/numpy.diagonal#numpy.diagonal "numpy.diagonal"), [`imag`](generated/numpy.imag#numpy.imag "numpy.imag"), [`max`](generated/numpy.amax#numpy.amax "numpy.amax"), [`mean`](generated/numpy.mean#numpy.mean "numpy.mean"), [`min`](generated/numpy.amin#numpy.amin "numpy.amin"), [`nonzero`](generated/numpy.nonzero#numpy.nonzero "numpy.nonzero"), [`partition`](generated/numpy.partition#numpy.partition "numpy.partition"), [`prod`](generated/numpy.prod#numpy.prod "numpy.prod"), [`ptp`](generated/numpy.ptp#numpy.ptp "numpy.ptp"), [`put`](generated/numpy.put#numpy.put "numpy.put"), [`ravel`](generated/numpy.ravel#numpy.ravel "numpy.ravel"), [`real`](generated/numpy.real#numpy.real "numpy.real"), [`repeat`](generated/numpy.repeat#numpy.repeat "numpy.repeat"), [`reshape`](generated/numpy.reshape#numpy.reshape "numpy.reshape"), [`round`](generated/numpy.around#numpy.around "numpy.around"), [`searchsorted`](generated/numpy.searchsorted#numpy.searchsorted "numpy.searchsorted"), [`sort`](generated/numpy.sort#numpy.sort "numpy.sort"), [`squeeze`](generated/numpy.squeeze#numpy.squeeze "numpy.squeeze"), 
[`std`](generated/numpy.std#numpy.std "numpy.std"), [`sum`](generated/numpy.sum#numpy.sum "numpy.sum"), [`swapaxes`](generated/numpy.swapaxes#numpy.swapaxes "numpy.swapaxes"), [`take`](generated/numpy.take#numpy.take "numpy.take"), [`trace`](generated/numpy.trace#numpy.trace "numpy.trace"), [`transpose`](generated/numpy.transpose#numpy.transpose "numpy.transpose"), [`var`](generated/numpy.var#numpy.var "numpy.var"). ### Array conversion | | | | --- | --- | | [`ndarray.item`](generated/numpy.ndarray.item#numpy.ndarray.item "numpy.ndarray.item")(*args) | Copy an element of an array to a standard Python scalar and return it. | | [`ndarray.tolist`](generated/numpy.ndarray.tolist#numpy.ndarray.tolist "numpy.ndarray.tolist")() | Return the array as an `a.ndim`-levels deep nested list of Python scalars. | | [`ndarray.itemset`](generated/numpy.ndarray.itemset#numpy.ndarray.itemset "numpy.ndarray.itemset")(*args) | Insert scalar into an array (scalar is cast to array's dtype, if possible) | | [`ndarray.tostring`](generated/numpy.ndarray.tostring#numpy.ndarray.tostring "numpy.ndarray.tostring")([order]) | A compatibility alias for `tobytes`, with exactly the same behavior. | | [`ndarray.tobytes`](generated/numpy.ndarray.tobytes#numpy.ndarray.tobytes "numpy.ndarray.tobytes")([order]) | Construct Python bytes containing the raw data bytes in the array. | | [`ndarray.tofile`](generated/numpy.ndarray.tofile#numpy.ndarray.tofile "numpy.ndarray.tofile")(fid[, sep, format]) | Write array to a file as text or binary (default). | | [`ndarray.dump`](generated/numpy.ndarray.dump#numpy.ndarray.dump "numpy.ndarray.dump")(file) | Dump a pickle of the array to the specified file. | | [`ndarray.dumps`](generated/numpy.ndarray.dumps#numpy.ndarray.dumps "numpy.ndarray.dumps")() | Returns the pickle of the array as a string. 
| | [`ndarray.astype`](generated/numpy.ndarray.astype#numpy.ndarray.astype "numpy.ndarray.astype")(dtype[, order, casting, ...]) | Copy of the array, cast to a specified type. | | [`ndarray.byteswap`](generated/numpy.ndarray.byteswap#numpy.ndarray.byteswap "numpy.ndarray.byteswap")([inplace]) | Swap the bytes of the array elements | | [`ndarray.copy`](generated/numpy.ndarray.copy#numpy.ndarray.copy "numpy.ndarray.copy")([order]) | Return a copy of the array. | | [`ndarray.view`](generated/numpy.ndarray.view#numpy.ndarray.view "numpy.ndarray.view")([dtype][, type]) | New view of array with the same data. | | [`ndarray.getfield`](generated/numpy.ndarray.getfield#numpy.ndarray.getfield "numpy.ndarray.getfield")(dtype[, offset]) | Returns a field of the given array as a certain type. | | [`ndarray.setflags`](generated/numpy.ndarray.setflags#numpy.ndarray.setflags "numpy.ndarray.setflags")([write, align, uic]) | Set array flags WRITEABLE, ALIGNED, WRITEBACKIFCOPY, respectively. | | [`ndarray.fill`](generated/numpy.ndarray.fill#numpy.ndarray.fill "numpy.ndarray.fill")(value) | Fill the array with a scalar value. | ### Shape manipulation For reshape, resize, and transpose, the single tuple argument may be replaced with `n` integers which will be interpreted as an n-tuple. | | | | --- | --- | | [`ndarray.reshape`](generated/numpy.ndarray.reshape#numpy.ndarray.reshape "numpy.ndarray.reshape")(shape[, order]) | Returns an array containing the same data with a new shape. | | [`ndarray.resize`](generated/numpy.ndarray.resize#numpy.ndarray.resize "numpy.ndarray.resize")(new_shape[, refcheck]) | Change shape and size of array in-place. | | [`ndarray.transpose`](generated/numpy.ndarray.transpose#numpy.ndarray.transpose "numpy.ndarray.transpose")(*axes) | Returns a view of the array with axes transposed. 
| | [`ndarray.swapaxes`](generated/numpy.ndarray.swapaxes#numpy.ndarray.swapaxes "numpy.ndarray.swapaxes")(axis1, axis2) | Return a view of the array with `axis1` and `axis2` interchanged. | | [`ndarray.flatten`](generated/numpy.ndarray.flatten#numpy.ndarray.flatten "numpy.ndarray.flatten")([order]) | Return a copy of the array collapsed into one dimension. | | [`ndarray.ravel`](generated/numpy.ndarray.ravel#numpy.ndarray.ravel "numpy.ndarray.ravel")([order]) | Return a flattened array. | | [`ndarray.squeeze`](generated/numpy.ndarray.squeeze#numpy.ndarray.squeeze "numpy.ndarray.squeeze")([axis]) | Remove axes of length one from `a`. | ### Item selection and manipulation For array methods that take an *axis* keyword, it defaults to *None*. If axis is *None*, then the array is treated as a 1-D array. Any other value for *axis* represents the dimension along which the operation should proceed. | | | | --- | --- | | [`ndarray.take`](generated/numpy.ndarray.take#numpy.ndarray.take "numpy.ndarray.take")(indices[, axis, out, mode]) | Return an array formed from the elements of `a` at the given indices. | | [`ndarray.put`](generated/numpy.ndarray.put#numpy.ndarray.put "numpy.ndarray.put")(indices, values[, mode]) | Set `a.flat[n] = values[n]` for all `n` in indices. | | [`ndarray.repeat`](generated/numpy.ndarray.repeat#numpy.ndarray.repeat "numpy.ndarray.repeat")(repeats[, axis]) | Repeat elements of an array. | | [`ndarray.choose`](generated/numpy.ndarray.choose#numpy.ndarray.choose "numpy.ndarray.choose")(choices[, out, mode]) | Use an index array to construct a new array from a set of choices. | | [`ndarray.sort`](generated/numpy.ndarray.sort#numpy.ndarray.sort "numpy.ndarray.sort")([axis, kind, order]) | Sort an array in-place. | | [`ndarray.argsort`](generated/numpy.ndarray.argsort#numpy.ndarray.argsort "numpy.ndarray.argsort")([axis, kind, order]) | Returns the indices that would sort this array. 
| | [`ndarray.partition`](generated/numpy.ndarray.partition#numpy.ndarray.partition "numpy.ndarray.partition")(kth[, axis, kind, order]) | Rearranges the elements in the array in such a way that the value of the element in kth position is in the position it would be in a sorted array. | | [`ndarray.argpartition`](generated/numpy.ndarray.argpartition#numpy.ndarray.argpartition "numpy.ndarray.argpartition")(kth[, axis, kind, order]) | Returns the indices that would partition this array. | | [`ndarray.searchsorted`](generated/numpy.ndarray.searchsorted#numpy.ndarray.searchsorted "numpy.ndarray.searchsorted")(v[, side, sorter]) | Find indices where elements of v should be inserted in a to maintain order. | | [`ndarray.nonzero`](generated/numpy.ndarray.nonzero#numpy.ndarray.nonzero "numpy.ndarray.nonzero")() | Return the indices of the elements that are non-zero. | | [`ndarray.compress`](generated/numpy.ndarray.compress#numpy.ndarray.compress "numpy.ndarray.compress")(condition[, axis, out]) | Return selected slices of this array along given axis. | | [`ndarray.diagonal`](generated/numpy.ndarray.diagonal#numpy.ndarray.diagonal "numpy.ndarray.diagonal")([offset, axis1, axis2]) | Return specified diagonals. | ### Calculation Many of these methods take an argument named *axis*. In such cases, * If *axis* is *None* (the default), the array is treated as a 1-D array and the operation is performed over the entire array. This behavior is also the default if self is a 0-dimensional array or array scalar. (An array scalar is an instance of the types/classes float32, float64, etc., whereas a 0-dimensional array is an ndarray instance containing precisely one array scalar.) * If *axis* is an integer, then the operation is done over the given axis (for each 1-D subarray that can be created along the given axis). 
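The *axis=None* case can be seen with `argmax`, which then returns an index into the flattened array (a sketch; the array values are arbitrary):

```python
import numpy as np

x = np.array([[1, 9, 2],
              [3, 4, 5]])

# axis=None (the default): x is treated as 1-D, so the result
# is an index into x.ravel().
assert x.argmax() == 1

# axis=0: the operation runs over each 1-D subarray along axis 0
# (i.e. down each column).
assert x.argmax(axis=0).tolist() == [1, 0, 1]
```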
Example of the *axis* argument A 3-dimensional array of size 3 x 3 x 3, summed over each of its three axes ``` >>> x = np.arange(27).reshape((3,3,3)) >>> x array([[[ 0, 1, 2], [ 3, 4, 5], [ 6, 7, 8]], [[ 9, 10, 11], [12, 13, 14], [15, 16, 17]], [[18, 19, 20], [21, 22, 23], [24, 25, 26]]]) >>> x.sum(axis=0) array([[27, 30, 33], [36, 39, 42], [45, 48, 51]]) >>> # for sum, axis is the first keyword, so we may omit it, >>> # specifying only its value >>> x.sum(0), x.sum(1), x.sum(2) (array([[27, 30, 33], [36, 39, 42], [45, 48, 51]]), array([[ 9, 12, 15], [36, 39, 42], [63, 66, 69]]), array([[ 3, 12, 21], [30, 39, 48], [57, 66, 75]])) ``` The parameter *dtype* specifies the data type over which a reduction operation (like summing) should take place. The default reduce data type is the same as the data type of *self*. To avoid overflow, it can be useful to perform the reduction using a larger data type. For several methods, an optional *out* argument can also be provided and the result will be placed into the output array given. The *out* argument must be an [`ndarray`](generated/numpy.ndarray#numpy.ndarray "numpy.ndarray") and have the same number of elements. It can have a different data type in which case casting will be performed. | | | | --- | --- | | [`ndarray.max`](generated/numpy.ndarray.max#numpy.ndarray.max "numpy.ndarray.max")([axis, out, keepdims, initial, ...]) | Return the maximum along a given axis. | | [`ndarray.argmax`](generated/numpy.ndarray.argmax#numpy.ndarray.argmax "numpy.ndarray.argmax")([axis, out, keepdims]) | Return indices of the maximum values along the given axis. | | [`ndarray.min`](generated/numpy.ndarray.min#numpy.ndarray.min "numpy.ndarray.min")([axis, out, keepdims, initial, ...]) | Return the minimum along a given axis. | | [`ndarray.argmin`](generated/numpy.ndarray.argmin#numpy.ndarray.argmin "numpy.ndarray.argmin")([axis, out, keepdims]) | Return indices of the minimum values along the given axis. 
| | [`ndarray.ptp`](generated/numpy.ndarray.ptp#numpy.ndarray.ptp "numpy.ndarray.ptp")([axis, out, keepdims]) | Peak to peak (maximum - minimum) value along a given axis. | | [`ndarray.clip`](generated/numpy.ndarray.clip#numpy.ndarray.clip "numpy.ndarray.clip")([min, max, out]) | Return an array whose values are limited to `[min, max]`. | | [`ndarray.conj`](generated/numpy.ndarray.conj#numpy.ndarray.conj "numpy.ndarray.conj")() | Complex-conjugate all elements. | | [`ndarray.round`](generated/numpy.ndarray.round#numpy.ndarray.round "numpy.ndarray.round")([decimals, out]) | Return `a` with each element rounded to the given number of decimals. | | [`ndarray.trace`](generated/numpy.ndarray.trace#numpy.ndarray.trace "numpy.ndarray.trace")([offset, axis1, axis2, dtype, out]) | Return the sum along diagonals of the array. | | [`ndarray.sum`](generated/numpy.ndarray.sum#numpy.ndarray.sum "numpy.ndarray.sum")([axis, dtype, out, keepdims, ...]) | Return the sum of the array elements over the given axis. | | [`ndarray.cumsum`](generated/numpy.ndarray.cumsum#numpy.ndarray.cumsum "numpy.ndarray.cumsum")([axis, dtype, out]) | Return the cumulative sum of the elements along the given axis. | | [`ndarray.mean`](generated/numpy.ndarray.mean#numpy.ndarray.mean "numpy.ndarray.mean")([axis, dtype, out, keepdims, where]) | Returns the average of the array elements along given axis. | | [`ndarray.var`](generated/numpy.ndarray.var#numpy.ndarray.var "numpy.ndarray.var")([axis, dtype, out, ddof, ...]) | Returns the variance of the array elements, along given axis. | | [`ndarray.std`](generated/numpy.ndarray.std#numpy.ndarray.std "numpy.ndarray.std")([axis, dtype, out, ddof, ...]) | Returns the standard deviation of the array elements along given axis. 
| | [`ndarray.prod`](generated/numpy.ndarray.prod#numpy.ndarray.prod "numpy.ndarray.prod")([axis, dtype, out, keepdims, ...]) | Return the product of the array elements over the given axis | | [`ndarray.cumprod`](generated/numpy.ndarray.cumprod#numpy.ndarray.cumprod "numpy.ndarray.cumprod")([axis, dtype, out]) | Return the cumulative product of the elements along the given axis. | | [`ndarray.all`](generated/numpy.ndarray.all#numpy.ndarray.all "numpy.ndarray.all")([axis, out, keepdims, where]) | Returns True if all elements evaluate to True. | | [`ndarray.any`](generated/numpy.ndarray.any#numpy.ndarray.any "numpy.ndarray.any")([axis, out, keepdims, where]) | Returns True if any of the elements of `a` evaluate to True. | Arithmetic, matrix multiplication, and comparison operations ------------------------------------------------------------ Arithmetic and comparison operations on [`ndarrays`](generated/numpy.ndarray#numpy.ndarray "numpy.ndarray") are defined as element-wise operations, and generally yield [`ndarray`](generated/numpy.ndarray#numpy.ndarray "numpy.ndarray") objects as results. Each of the arithmetic operations (`+`, `-`, `*`, `/`, `//`, `%`, `divmod()`, `**` or `pow()`, `<<`, `>>`, `&`, `^`, `|`, `~`) and the comparisons (`==`, `<`, `>`, `<=`, `>=`, `!=`) is equivalent to the corresponding universal function (or [ufunc](../glossary#term-ufunc) for short) in NumPy. For more information, see the section on [Universal Functions](ufuncs#ufuncs). Comparison operators: | | | | --- | --- | | [`ndarray.__lt__`](generated/numpy.ndarray.__lt__#numpy.ndarray.__lt__ "numpy.ndarray.__lt__")(value, /) | Return self<value. | | [`ndarray.__le__`](generated/numpy.ndarray.__le__#numpy.ndarray.__le__ "numpy.ndarray.__le__")(value, /) | Return self<=value. | | [`ndarray.__gt__`](generated/numpy.ndarray.__gt__#numpy.ndarray.__gt__ "numpy.ndarray.__gt__")(value, /) | Return self>value. 
| | [`ndarray.__ge__`](generated/numpy.ndarray.__ge__#numpy.ndarray.__ge__ "numpy.ndarray.__ge__")(value, /) | Return self>=value. | | [`ndarray.__eq__`](generated/numpy.ndarray.__eq__#numpy.ndarray.__eq__ "numpy.ndarray.__eq__")(value, /) | Return self==value. | | [`ndarray.__ne__`](generated/numpy.ndarray.__ne__#numpy.ndarray.__ne__ "numpy.ndarray.__ne__")(value, /) | Return self!=value. | Truth value of an array ([`bool()`](https://docs.python.org/3/library/functions.html#bool "(in Python v3.10)")): | | | | --- | --- | | [`ndarray.__bool__`](generated/numpy.ndarray.__bool__#numpy.ndarray.__bool__ "numpy.ndarray.__bool__")(/) | True if self else False | Note Truth-value testing of an array invokes [`ndarray.__bool__`](generated/numpy.ndarray.__bool__#numpy.ndarray.__bool__ "numpy.ndarray.__bool__"), which raises an error if the number of elements in the array is larger than 1, because the truth value of such arrays is ambiguous. Use [`.any()`](generated/numpy.ndarray.any#numpy.ndarray.any "numpy.ndarray.any") and [`.all()`](generated/numpy.ndarray.all#numpy.ndarray.all "numpy.ndarray.all") instead to be clear about what is meant in such cases. (If the number of elements is 0, the array evaluates to `False`.) Unary operations: | | | | --- | --- | | [`ndarray.__neg__`](generated/numpy.ndarray.__neg__#numpy.ndarray.__neg__ "numpy.ndarray.__neg__")(/) | -self | | [`ndarray.__pos__`](generated/numpy.ndarray.__pos__#numpy.ndarray.__pos__ "numpy.ndarray.__pos__")(/) | +self | | [`ndarray.__abs__`](generated/numpy.ndarray.__abs__#numpy.ndarray.__abs__ "numpy.ndarray.__abs__")(self) | | | [`ndarray.__invert__`](generated/numpy.ndarray.__invert__#numpy.ndarray.__invert__ "numpy.ndarray.__invert__")(/) | ~self | Arithmetic: | | | | --- | --- | | [`ndarray.__add__`](generated/numpy.ndarray.__add__#numpy.ndarray.__add__ "numpy.ndarray.__add__")(value, /) | Return self+value. 
| | [`ndarray.__sub__`](generated/numpy.ndarray.__sub__#numpy.ndarray.__sub__ "numpy.ndarray.__sub__")(value, /) | Return self-value. | | [`ndarray.__mul__`](generated/numpy.ndarray.__mul__#numpy.ndarray.__mul__ "numpy.ndarray.__mul__")(value, /) | Return self*value. | | [`ndarray.__truediv__`](generated/numpy.ndarray.__truediv__#numpy.ndarray.__truediv__ "numpy.ndarray.__truediv__")(value, /) | Return self/value. | | [`ndarray.__floordiv__`](generated/numpy.ndarray.__floordiv__#numpy.ndarray.__floordiv__ "numpy.ndarray.__floordiv__")(value, /) | Return self//value. | | [`ndarray.__mod__`](generated/numpy.ndarray.__mod__#numpy.ndarray.__mod__ "numpy.ndarray.__mod__")(value, /) | Return self%value. | | [`ndarray.__divmod__`](generated/numpy.ndarray.__divmod__#numpy.ndarray.__divmod__ "numpy.ndarray.__divmod__")(value, /) | Return divmod(self, value). | | [`ndarray.__pow__`](generated/numpy.ndarray.__pow__#numpy.ndarray.__pow__ "numpy.ndarray.__pow__")(value[, mod]) | Return pow(self, value, mod). | | [`ndarray.__lshift__`](generated/numpy.ndarray.__lshift__#numpy.ndarray.__lshift__ "numpy.ndarray.__lshift__")(value, /) | Return self<<value. | | [`ndarray.__rshift__`](generated/numpy.ndarray.__rshift__#numpy.ndarray.__rshift__ "numpy.ndarray.__rshift__")(value, /) | Return self>>value. | | [`ndarray.__and__`](generated/numpy.ndarray.__and__#numpy.ndarray.__and__ "numpy.ndarray.__and__")(value, /) | Return self&value. | | [`ndarray.__or__`](generated/numpy.ndarray.__or__#numpy.ndarray.__or__ "numpy.ndarray.__or__")(value, /) | Return self|value. | | [`ndarray.__xor__`](generated/numpy.ndarray.__xor__#numpy.ndarray.__xor__ "numpy.ndarray.__xor__")(value, /) | Return self^value. | Note * Any third argument to [`pow`](https://docs.python.org/3/library/functions.html#pow "(in Python v3.10)") is silently ignored, as the underlying [`ufunc`](generated/numpy.power#numpy.power "numpy.power") takes only two arguments. 
* Because [`ndarray`](generated/numpy.ndarray#numpy.ndarray "numpy.ndarray") is a built-in type (written in C), the `__r{op}__` special methods are not directly defined. * The functions called to implement many arithmetic special methods for arrays can be modified using [`__array_ufunc__`](arrays.classes#numpy.class.__array_ufunc__ "numpy.class.__array_ufunc__"). Arithmetic, in-place: | | | | --- | --- | | [`ndarray.__iadd__`](generated/numpy.ndarray.__iadd__#numpy.ndarray.__iadd__ "numpy.ndarray.__iadd__")(value, /) | Return self+=value. | | [`ndarray.__isub__`](generated/numpy.ndarray.__isub__#numpy.ndarray.__isub__ "numpy.ndarray.__isub__")(value, /) | Return self-=value. | | [`ndarray.__imul__`](generated/numpy.ndarray.__imul__#numpy.ndarray.__imul__ "numpy.ndarray.__imul__")(value, /) | Return self*=value. | | [`ndarray.__itruediv__`](generated/numpy.ndarray.__itruediv__#numpy.ndarray.__itruediv__ "numpy.ndarray.__itruediv__")(value, /) | Return self/=value. | | [`ndarray.__ifloordiv__`](generated/numpy.ndarray.__ifloordiv__#numpy.ndarray.__ifloordiv__ "numpy.ndarray.__ifloordiv__")(value, /) | Return self//=value. | | [`ndarray.__imod__`](generated/numpy.ndarray.__imod__#numpy.ndarray.__imod__ "numpy.ndarray.__imod__")(value, /) | Return self%=value. | | [`ndarray.__ipow__`](generated/numpy.ndarray.__ipow__#numpy.ndarray.__ipow__ "numpy.ndarray.__ipow__")(value, /) | Return self**=value. | | [`ndarray.__ilshift__`](generated/numpy.ndarray.__ilshift__#numpy.ndarray.__ilshift__ "numpy.ndarray.__ilshift__")(value, /) | Return self<<=value. | | [`ndarray.__irshift__`](generated/numpy.ndarray.__irshift__#numpy.ndarray.__irshift__ "numpy.ndarray.__irshift__")(value, /) | Return self>>=value. | | [`ndarray.__iand__`](generated/numpy.ndarray.__iand__#numpy.ndarray.__iand__ "numpy.ndarray.__iand__")(value, /) | Return self&=value. | | [`ndarray.__ior__`](generated/numpy.ndarray.__ior__#numpy.ndarray.__ior__ "numpy.ndarray.__ior__")(value, /) | Return self|=value. 
| | [`ndarray.__ixor__`](generated/numpy.ndarray.__ixor__#numpy.ndarray.__ixor__ "numpy.ndarray.__ixor__")(value, /) | Return self^=value. | Warning In-place operations will perform the calculation using the precision decided by the data type of the two operands, but will silently downcast the result (if necessary) so it can fit back into the array. Therefore, for mixed precision calculations, `A {op}= B` can differ from `A = A {op} B`. For example, suppose `a = ones((3,3))`. Then, `a += 3j` is different than `a = a + 3j`: while they both perform the same computation, `a += 3j` casts the result to fit back in `a`, whereas `a = a + 3j` re-binds the name `a` to the result. Matrix Multiplication: | | | | --- | --- | | [`ndarray.__matmul__`](generated/numpy.ndarray.__matmul__#numpy.ndarray.__matmul__ "numpy.ndarray.__matmul__")(value, /) | Return self@value. | Note Matrix operators `@` and `@=` were introduced in Python 3.5 following [**PEP 465**](https://peps.python.org/pep-0465/), and the `@` operator was introduced in NumPy 1.10.0. Further information can be found in the [`matmul`](generated/numpy.matmul#numpy.matmul "numpy.matmul") documentation. Special methods --------------- For standard library functions: | | | | --- | --- | | [`ndarray.__copy__`](generated/numpy.ndarray.__copy__#numpy.ndarray.__copy__ "numpy.ndarray.__copy__")() | Used if [`copy.copy`](https://docs.python.org/3/library/copy.html#copy.copy "(in Python v3.10)") is called on an array. | | [`ndarray.__deepcopy__`](generated/numpy.ndarray.__deepcopy__#numpy.ndarray.__deepcopy__ "numpy.ndarray.__deepcopy__")(memo, /) | Used if [`copy.deepcopy`](https://docs.python.org/3/library/copy.html#copy.deepcopy "(in Python v3.10)") is called on an array. | | [`ndarray.__reduce__`](generated/numpy.ndarray.__reduce__#numpy.ndarray.__reduce__ "numpy.ndarray.__reduce__")() | For pickling. 
| | [`ndarray.__setstate__`](generated/numpy.ndarray.__setstate__#numpy.ndarray.__setstate__ "numpy.ndarray.__setstate__")(state, /) | For unpickling. | Basic customization: | | | | --- | --- | | [`ndarray.__new__`](generated/numpy.ndarray.__new__#numpy.ndarray.__new__ "numpy.ndarray.__new__")(*args, **kwargs) | | | [`ndarray.__array__`](generated/numpy.ndarray.__array__#numpy.ndarray.__array__ "numpy.ndarray.__array__")([dtype], /) | Returns either a new reference to self if dtype is not given or a new array of provided data type if dtype is different from the current dtype of the array. | | [`ndarray.__array_wrap__`](generated/numpy.ndarray.__array_wrap__#numpy.ndarray.__array_wrap__ "numpy.ndarray.__array_wrap__")(array[, context], /) | Returns a view of [`array`](generated/numpy.array#numpy.array "numpy.array") with the same type as self. | Container customization: (see [Indexing](arrays.indexing#arrays-indexing)) | | | | --- | --- | | [`ndarray.__len__`](generated/numpy.ndarray.__len__#numpy.ndarray.__len__ "numpy.ndarray.__len__")(/) | Return len(self). | | [`ndarray.__getitem__`](generated/numpy.ndarray.__getitem__#numpy.ndarray.__getitem__ "numpy.ndarray.__getitem__")(key, /) | Return self[key]. | | [`ndarray.__setitem__`](generated/numpy.ndarray.__setitem__#numpy.ndarray.__setitem__ "numpy.ndarray.__setitem__")(key, value, /) | Set self[key] to value. | | [`ndarray.__contains__`](generated/numpy.ndarray.__contains__#numpy.ndarray.__contains__ "numpy.ndarray.__contains__")(key, /) | Return key in self. | Conversion; the operations [`int()`](https://docs.python.org/3/library/functions.html#int "(in Python v3.10)"), [`float()`](https://docs.python.org/3/library/functions.html#float "(in Python v3.10)") and [`complex()`](https://docs.python.org/3/library/functions.html#complex "(in Python v3.10)"). They work only on arrays that have one element in them and return the appropriate scalar. 
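For example (a sketch with arbitrary values), a size-1 array converts to the matching scalar, while a larger array raises a `TypeError`:

```python
import numpy as np

a = np.array(3.7)                  # 0-dimensional, one element
assert int(a) == 3
assert float(a) == 3.7
assert complex(a) == 3.7 + 0j

try:
    int(np.array([1, 2]))          # more than one element
except TypeError:
    pass                           # only size-1 arrays can be converted
```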
| | | | --- | --- | | [`ndarray.__int__`](generated/numpy.ndarray.__int__#numpy.ndarray.__int__ "numpy.ndarray.__int__")(self) | | | [`ndarray.__float__`](generated/numpy.ndarray.__float__#numpy.ndarray.__float__ "numpy.ndarray.__float__")(self) | | | [`ndarray.__complex__`](generated/numpy.ndarray.__complex__#numpy.ndarray.__complex__ "numpy.ndarray.__complex__") | | String representations: | | | | --- | --- | | [`ndarray.__str__`](generated/numpy.ndarray.__str__#numpy.ndarray.__str__ "numpy.ndarray.__str__")(/) | Return str(self). | | [`ndarray.__repr__`](generated/numpy.ndarray.__repr__#numpy.ndarray.__repr__ "numpy.ndarray.__repr__")(/) | Return repr(self). | Utility method for typing: | | | | --- | --- | | [`ndarray.__class_getitem__`](generated/numpy.ndarray.__class_getitem__#numpy.ndarray.__class_getitem__ "numpy.ndarray.__class_getitem__")(item, /) | Return a parametrized wrapper around the [`ndarray`](generated/numpy.ndarray#numpy.ndarray "numpy.ndarray") type. | © 2005–2022 NumPy Developers Licensed under the 3-clause BSD License. <https://numpy.org/doc/1.23/reference/arrays.ndarray.htmlScalars ======= Python defines only one type of a particular data class (there is only one integer type, one floating-point type, etc.). This can be convenient in applications that don’t need to be concerned with all the ways data can be represented in a computer. For scientific computing, however, more control is often needed. In NumPy, there are 24 new fundamental Python types to describe different types of scalars. These type descriptors are mostly based on the types available in the C language that CPython is written in, with several additional types compatible with Python’s types. Array scalars have the same attributes and methods as [`ndarrays`](generated/numpy.ndarray#numpy.ndarray "numpy.ndarray"). [1](#id2) This allows one to treat items of an array partly on the same footing as arrays, smoothing out rough edges that result when mixing scalar and array operations. 
Array scalars live in a hierarchy (see the Figure below) of data types. They can be detected using the hierarchy: For example, `isinstance(val, np.generic)` will return [`True`](https://docs.python.org/3/library/constants.html#True "(in Python v3.10)") if *val* is an array scalar object. Alternatively, what kind of array scalar is present can be determined using other members of the data type hierarchy. Thus, for example `isinstance(val, np.complexfloating)` will return [`True`](https://docs.python.org/3/library/constants.html#True "(in Python v3.10)") if *val* is a complex-valued type, while `isinstance(val, np.flexible)` will return true if *val* is one of the flexible itemsize array types ([`str_`](#numpy.str_ "numpy.str_"), [`bytes_`](#numpy.bytes_ "numpy.bytes_"), [`void`](#numpy.void "numpy.void")). ![Hierarchy of type objects representing the array data types](../_images/dtype-hierarchy.png) **Figure:** Hierarchy of type objects representing the array data types. Not shown are the two integer types `intp` and `uintp`, which just point to the integer type that holds a pointer for the platform. All the number types can be obtained using bit-width names as well. [1](#id1) However, array scalars are immutable, so none of the array scalar attributes are settable. Built-in scalar types --------------------- The built-in scalar types are shown below. The C-like names are associated with character codes, which are shown in their descriptions. Use of the character codes, however, is discouraged. Some of the scalar types are essentially equivalent to fundamental Python types and therefore inherit from them as well as from the generic array scalar type: | Array scalar type | Related Python type | Inherits? 
| | --- | --- | --- | | [`int_`](#numpy.int_ "numpy.int_") | [`int`](https://docs.python.org/3/library/functions.html#int "(in Python v3.10)") | Python 2 only | | [`float_`](#numpy.float_ "numpy.float_") | [`float`](https://docs.python.org/3/library/functions.html#float "(in Python v3.10)") | yes | | [`complex_`](#numpy.complex_ "numpy.complex_") | [`complex`](https://docs.python.org/3/library/functions.html#complex "(in Python v3.10)") | yes | | [`bytes_`](#numpy.bytes_ "numpy.bytes_") | [`bytes`](https://docs.python.org/3/library/stdtypes.html#bytes "(in Python v3.10)") | yes | | [`str_`](#numpy.str_ "numpy.str_") | [`str`](https://docs.python.org/3/library/stdtypes.html#str "(in Python v3.10)") | yes | | [`bool_`](#numpy.bool_ "numpy.bool_") | [`bool`](https://docs.python.org/3/library/functions.html#bool "(in Python v3.10)") | no | | [`datetime64`](#numpy.datetime64 "numpy.datetime64") | [`datetime.datetime`](https://docs.python.org/3/library/datetime.html#datetime.datetime "(in Python v3.10)") | no | | [`timedelta64`](#numpy.timedelta64 "numpy.timedelta64") | [`datetime.timedelta`](https://docs.python.org/3/library/datetime.html#datetime.timedelta "(in Python v3.10)") | no | The [`bool_`](#numpy.bool_ "numpy.bool_") data type is very similar to the Python [`bool`](https://docs.python.org/3/library/functions.html#bool "(in Python v3.10)") but does not inherit from it because Python’s [`bool`](https://docs.python.org/3/library/functions.html#bool "(in Python v3.10)") does not allow itself to be inherited from, and on the C-level the size of the actual bool data is not the same as a Python Boolean scalar. Warning The [`int_`](#numpy.int_ "numpy.int_") type does **not** inherit from the [`int`](https://docs.python.org/3/library/functions.html#int "(in Python v3.10)") built-in under Python 3, because type [`int`](https://docs.python.org/3/library/functions.html#int "(in Python v3.10)") is no longer a fixed-width integer type. 
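The inheritance column above can be verified with `isinstance` (a sketch; run on Python 3):

```python
import numpy as np

# These scalar types inherit from the corresponding Python type...
assert isinstance(np.float64(1.0), float)
assert isinstance(np.str_("a"), str)
assert isinstance(np.bytes_(b"a"), bytes)

# ...while bool_ and int_ do not (on Python 3).
assert not isinstance(np.bool_(True), bool)
assert not isinstance(np.int_(1), int)
```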
Tip The default data type in NumPy is [`float_`](#numpy.float_ "numpy.float_"). *class*numpy.generic[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/core/src/multiarray/scalartypes.c.src) Base class for numpy scalar types. Class from which most (all?) numpy scalar types are derived. For consistency, exposes the same API as [`ndarray`](generated/numpy.ndarray#numpy.ndarray "numpy.ndarray"), despite many consequent attributes being either “get-only,” or completely irrelevant. This is the class from which it is strongly suggested users should derive custom scalar types. *class*numpy.number[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/core/src/multiarray/scalartypes.c.src) Abstract base class of all numeric scalar types. ### Integer types *class*numpy.integer[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/core/src/multiarray/scalartypes.c.src) Abstract base class of all integer scalar types. Note The numpy integer types mirror the behavior of C integers, and can therefore be subject to [Overflow Errors](../user/basics.types#overflow-errors). #### Signed integer types *class*numpy.signedinteger[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/core/src/multiarray/scalartypes.c.src) Abstract base class of all signed integer scalar types. *class*numpy.byte[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/core/src/multiarray/scalartypes.c.src) Signed integer type, compatible with C `char`. Character code `'b'` Alias on this platform (Linux x86_64) [`numpy.int8`](#numpy.int8 "numpy.int8"): 8-bit signed integer (`-128` to `127`). *class*numpy.short[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/core/src/multiarray/scalartypes.c.src) Signed integer type, compatible with C `short`. Character code `'h'` Alias on this platform (Linux x86_64) [`numpy.int16`](#numpy.int16 "numpy.int16"): 16-bit signed integer (`-32_768` to `32_767`). 
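The C-style wraparound mentioned in the note above can be seen with a small fixed-width array (array arithmetic wraps silently):

```python
import numpy as np

a = np.array([127], dtype=np.int8)   # int8: -128 .. 127, like a C char

# 127 + 1 does not fit in 8 signed bits, so it wraps around to -128.
assert (a + 1)[0] == -128

# iinfo reports the representable range of any integer type.
assert np.iinfo(np.int16).min == -32_768
assert np.iinfo(np.int16).max == 32_767
```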
*class*numpy.intc[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/core/src/multiarray/scalartypes.c.src) Signed integer type, compatible with C `int`. Character code `'i'` Alias on this platform (Linux x86_64) [`numpy.int32`](#numpy.int32 "numpy.int32"): 32-bit signed integer (`-2_147_483_648` to `2_147_483_647`). *class*numpy.int_[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/core/src/multiarray/scalartypes.c.src) Signed integer type, compatible with Python [`int`](https://docs.python.org/3/library/functions.html#int "(in Python v3.10)") and C `long`. Character code `'l'` Alias on this platform (Linux x86_64) [`numpy.int64`](#numpy.int64 "numpy.int64"): 64-bit signed integer (`-9_223_372_036_854_775_808` to `9_223_372_036_854_775_807`). Alias on this platform (Linux x86_64) [`numpy.intp`](#numpy.intp "numpy.intp"): Signed integer large enough to fit pointer, compatible with C `intptr_t`. *class*numpy.longlong[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/core/src/multiarray/scalartypes.c.src) Signed integer type, compatible with C `long long`. Character code `'q'` #### Unsigned integer types *class*numpy.unsignedinteger[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/core/src/multiarray/scalartypes.c.src) Abstract base class of all unsigned integer scalar types. *class*numpy.ubyte[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/core/src/multiarray/scalartypes.c.src) Unsigned integer type, compatible with C `unsigned char`. Character code `'B'` Alias on this platform (Linux x86_64) [`numpy.uint8`](#numpy.uint8 "numpy.uint8"): 8-bit unsigned integer (`0` to `255`). *class*numpy.ushort[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/core/src/multiarray/scalartypes.c.src) Unsigned integer type, compatible with C `unsigned short`. Character code `'H'` Alias on this platform (Linux x86_64) [`numpy.uint16`](#numpy.uint16 "numpy.uint16"): 16-bit unsigned integer (`0` to `65_535`). 
*class*numpy.uintc[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/core/src/multiarray/scalartypes.c.src) Unsigned integer type, compatible with C `unsigned int`. Character code `'I'` Alias on this platform (Linux x86_64) [`numpy.uint32`](#numpy.uint32 "numpy.uint32"): 32-bit unsigned integer (`0` to `4_294_967_295`). *class*numpy.uint[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/core/src/multiarray/scalartypes.c.src) Unsigned integer type, compatible with C `unsigned long`. Character code `'L'` Alias on this platform (Linux x86_64) [`numpy.uint64`](#numpy.uint64 "numpy.uint64"): 64-bit unsigned integer (`0` to `18_446_744_073_709_551_615`). Alias on this platform (Linux x86_64) [`numpy.uintp`](#numpy.uintp "numpy.uintp"): Unsigned integer large enough to fit pointer, compatible with C `uintptr_t`. *class*numpy.ulonglong[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/core/src/multiarray/scalartypes.c.src) Unsigned integer type, compatible with C `unsigned long long`. Character code `'Q'` ### Inexact types *class*numpy.inexact[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/core/src/multiarray/scalartypes.c.src) Abstract base class of all numeric scalar types with a (potentially) inexact representation of the values in its range, such as floating-point numbers. Note Inexact scalars are printed using the fewest decimal digits needed to distinguish their value from other values of the same datatype, by judicious rounding. See the `unique` parameter of [`format_float_positional`](generated/numpy.format_float_positional#numpy.format_float_positional "numpy.format_float_positional") and [`format_float_scientific`](generated/numpy.format_float_scientific#numpy.format_float_scientific "numpy.format_float_scientific").

This means that variables with equal binary values but whose datatypes are of different precisions may display differently: ``` >>> f16 = np.float16("0.1") >>> f32 = np.float32(f16) >>> f64 = np.float64(f32) >>> f16 == f32 == f64 True >>> f16, f32, f64 (0.1, 0.099975586, 0.0999755859375) ``` Note that none of these floats hold the exact value \(\frac{1}{10}\); `f16` prints as `0.1` because it is as close to that value as possible, whereas the other types do not as they have more precision and therefore have closer values. Conversely, floating-point scalars of different precisions which approximate the same decimal value may compare unequal despite printing identically: ``` >>> f16 = np.float16("0.1") >>> f32 = np.float32("0.1") >>> f64 = np.float64("0.1") >>> f16 == f32 == f64 False >>> f16, f32, f64 (0.1, 0.1, 0.1) ``` #### Floating-point types *class*numpy.floating[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/core/src/multiarray/scalartypes.c.src) Abstract base class of all floating-point scalar types. *class*numpy.half[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/core/src/multiarray/scalartypes.c.src) Half-precision floating-point number type. Character code `'e'` Alias on this platform (Linux x86_64) [`numpy.float16`](#numpy.float16 "numpy.float16"): 16-bit-precision floating-point number type: sign bit, 5 bits exponent, 10 bits mantissa. *class*numpy.single[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/core/src/multiarray/scalartypes.c.src) Single-precision floating-point number type, compatible with C `float`. Character code `'f'` Alias on this platform (Linux x86_64) [`numpy.float32`](#numpy.float32 "numpy.float32"): 32-bit-precision floating-point number type: sign bit, 8 bits exponent, 23 bits mantissa. 
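The precision differences behind the printing examples above can be inspected directly with `np.finfo`, which reports the bit layout of each floating-point type:

```python
import numpy as np

# Total bits, exponent bits, and mantissa bits for half, single and
# double precision, matching the layouts listed in this section.
for t in (np.half, np.single, np.double):
    info = np.finfo(t)
    print(t.__name__, info.bits, info.nexp, info.nmant)
```

`np.finfo` also exposes `eps`, `max`, and `tiny` for the same types, which is the more portable way to reason about precision than counting printed digits.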
*class*numpy.double(*x=0*, */*)[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/core/src/multiarray/scalartypes.c.src) Double-precision floating-point number type, compatible with Python [`float`](https://docs.python.org/3/library/functions.html#float "(in Python v3.10)") and C `double`. Character code `'d'` Alias [`numpy.float_`](#numpy.float_ "numpy.float_") Alias on this platform (Linux x86_64) [`numpy.float64`](#numpy.float64 "numpy.float64"): 64-bit precision floating-point number type: sign bit, 11 bits exponent, 52 bits mantissa. *class*numpy.longdouble[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/core/src/multiarray/scalartypes.c.src) Extended-precision floating-point number type, compatible with C `long double` but not necessarily with IEEE 754 quadruple-precision. Character code `'g'` Alias [`numpy.longfloat`](#numpy.longfloat "numpy.longfloat") Alias on this platform (Linux x86_64) [`numpy.float128`](#numpy.float128 "numpy.float128"): 128-bit extended-precision floating-point number type. #### Complex floating-point types *class*numpy.complexfloating[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/core/src/multiarray/scalartypes.c.src) Abstract base class of all complex number scalar types that are made up of floating-point numbers. *class*numpy.csingle[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/core/src/multiarray/scalartypes.c.src) Complex number type composed of two single-precision floating-point numbers. Character code `'F'` Alias [`numpy.singlecomplex`](#numpy.singlecomplex "numpy.singlecomplex") Alias on this platform (Linux x86_64) [`numpy.complex64`](#numpy.complex64 "numpy.complex64"): Complex number type composed of 2 32-bit-precision floating-point numbers. 
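A complex scalar really is a pair of floating-point numbers of the stated precision; its `real` and `imag` attributes come back as the matching float type:

```python
import numpy as np

z = np.complex64(1 + 2j)           # composed of two 32-bit floats
print(z.real.dtype, z.imag.dtype)  # float32 float32
print(z.dtype.itemsize)            # 8 bytes total: two 4-byte floats
```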
*class*numpy.cdouble(*real=0*, *imag=0*)[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/core/src/multiarray/scalartypes.c.src) Complex number type composed of two double-precision floating-point numbers, compatible with Python [`complex`](https://docs.python.org/3/library/functions.html#complex "(in Python v3.10)"). Character code `'D'` Alias [`numpy.cfloat`](#numpy.cfloat "numpy.cfloat") Alias [`numpy.complex_`](#numpy.complex_ "numpy.complex_") Alias on this platform (Linux x86_64) [`numpy.complex128`](#numpy.complex128 "numpy.complex128"): Complex number type composed of 2 64-bit-precision floating-point numbers. *class*numpy.clongdouble[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/core/src/multiarray/scalartypes.c.src) Complex number type composed of two extended-precision floating-point numbers. Character code `'G'` Alias [`numpy.clongfloat`](#numpy.clongfloat "numpy.clongfloat") Alias [`numpy.longcomplex`](#numpy.longcomplex "numpy.longcomplex") Alias on this platform (Linux x86_64) [`numpy.complex256`](#numpy.complex256 "numpy.complex256"): Complex number type composed of 2 128-bit extended-precision floating-point numbers. ### Other types *class*numpy.bool_[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/core/src/multiarray/scalartypes.c.src) Boolean type (True or False), stored as a byte. Warning The [`bool_`](#numpy.bool_ "numpy.bool_") type is not a subclass of the [`int_`](#numpy.int_ "numpy.int_") type (the [`bool_`](#numpy.bool_ "numpy.bool_") is not even a number type). This is different than Python’s default implementation of [`bool`](https://docs.python.org/3/library/functions.html#bool "(in Python v3.10)") as a sub-class of [`int`](https://docs.python.org/3/library/functions.html#int "(in Python v3.10)"). 
Character code `'?'` Alias [`numpy.bool8`](#numpy.bool8 "numpy.bool8") *class*numpy.datetime64[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/core/src/multiarray/scalartypes.c.src) If created from a 64-bit integer, it represents an offset from `1970-01-01T00:00:00`. If created from string, the string can be in ISO 8601 date or datetime format. ``` >>> np.datetime64(10, 'Y') numpy.datetime64('1980') >>> np.datetime64('1980', 'Y') numpy.datetime64('1980') >>> np.datetime64(10, 'D') numpy.datetime64('1970-01-11') ``` See [Datetimes and Timedeltas](arrays.datetime#arrays-datetime) for more information. Character code `'M'` *class*numpy.timedelta64[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/core/src/multiarray/scalartypes.c.src) A timedelta stored as a 64-bit integer. See [Datetimes and Timedeltas](arrays.datetime#arrays-datetime) for more information. Character code `'m'` *class*numpy.object_[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/core/src/multiarray/scalartypes.c.src) Any Python object. Character code `'O'` Note The data actually stored in object arrays (*i.e.*, arrays having dtype [`object_`](#numpy.object_ "numpy.object_")) are references to Python objects, not the objects themselves. Hence, object arrays behave more like usual Python [`lists`](https://docs.python.org/3/library/stdtypes.html#list "(in Python v3.10)"), in the sense that their contents need not be of the same Python type. The object type is also special because an array containing [`object_`](#numpy.object_ "numpy.object_") items does not return an [`object_`](#numpy.object_ "numpy.object_") object on item access, but instead returns the actual object that the array item refers to. The following data types are **flexible**: they have no predefined size and the data they describe can be of different length in different arrays. (In the character codes `#` is an integer denoting how many elements the data type consists of.) 
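The note above about object arrays can be seen directly: item access hands back the stored Python object itself, not an `object_` wrapper, and the contents need not share a type:

```python
import numpy as np

# An object array holds references to arbitrary Python objects.
arr = np.array([1, 'two', [3.0, 4.0]], dtype=object)
print(type(arr[0]), type(arr[1]), type(arr[2]))  # int, str, list
```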
*class*numpy.flexible[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/core/src/multiarray/scalartypes.c.src) Abstract base class of all scalar types without predefined length. The actual size of these types depends on the specific `np.dtype` instantiation. *class*numpy.bytes_[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/core/src/multiarray/scalartypes.c.src) A byte string. When used in arrays, this type strips trailing null bytes. Character code `'S'` Alias [`numpy.string_`](#numpy.string_ "numpy.string_") *class*numpy.str_[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/core/src/multiarray/scalartypes.c.src) A unicode string. When used in arrays, this type strips trailing null codepoints. Unlike the builtin [`str`](https://docs.python.org/3/library/stdtypes.html#str "(in Python v3.10)"), this supports the [Buffer Protocol](https://docs.python.org/3/c-api/buffer.html#bufferobjects "(in Python v3.10)"), exposing its contents as UCS4: ``` >>> m = memoryview(np.str_("abc")) >>> m.format '3w' >>> m.tobytes() b'a\x00\x00\x00b\x00\x00\x00c\x00\x00\x00' ``` Character code `'U'` Alias [`numpy.unicode_`](#numpy.unicode_ "numpy.unicode_") *class*numpy.void[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/core/src/multiarray/scalartypes.c.src) Either an opaque sequence of bytes, or a structure. ``` >>> np.void(b'abcd') void(b'\x61\x62\x63\x64') ``` Structured [`void`](#numpy.void "numpy.void") scalars can only be constructed via extraction from [Structured arrays](../user/basics.rec#structured-arrays): ``` >>> arr = np.array((1, 2), dtype=[('x', np.int8), ('y', np.int8)]) >>> arr[()] (1, 2) # looks like a tuple, but is `np.void` ``` Character code `'V'` Warning See [Note on string types](arrays.dtypes#string-dtype-note). Numeric Compatibility: If you used old typecode characters in your Numeric code (which was never recommended), you will need to change some of them to the new characters. 
In particular, the needed changes are `c -> S1`, `b -> B`, `1 -> b`, `s -> h`, `w -> H`, and `u -> I`. These changes make the type character convention more consistent with other Python modules such as the [`struct`](https://docs.python.org/3/library/struct.html#module-struct "(in Python v3.10)") module. ### Sized aliases Along with their (mostly) C-derived names, the integer, float, and complex data-types are also available using a bit-width convention so that an array of the right size can always be ensured. Two aliases ([`numpy.intp`](#numpy.intp "numpy.intp") and [`numpy.uintp`](#numpy.uintp "numpy.uintp")) pointing to the integer type that is sufficiently large to hold a C pointer are also provided. numpy.bool8[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/core/src/multiarray/scalartypes.c.src) alias of [`numpy.bool_`](#numpy.bool_ "numpy.bool_") numpy.int8[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/core/src/multiarray/scalartypes.c.src) numpy.int16 numpy.int32 numpy.int64 Aliases for the signed integer types (one of [`numpy.byte`](#numpy.byte "numpy.byte"), [`numpy.short`](#numpy.short "numpy.short"), [`numpy.intc`](#numpy.intc "numpy.intc"), [`numpy.int_`](#numpy.int_ "numpy.int_") and [`numpy.longlong`](#numpy.longlong "numpy.longlong")) with the specified number of bits. Compatible with the C99 `int8_t`, `int16_t`, `int32_t`, and `int64_t`, respectively. numpy.uint8[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/core/src/multiarray/scalartypes.c.src) numpy.uint16 numpy.uint32 numpy.uint64 Aliases for the unsigned integer types (one of [`numpy.ubyte`](#numpy.ubyte "numpy.ubyte"), [`numpy.ushort`](#numpy.ushort "numpy.ushort"), [`numpy.uintc`](#numpy.uintc "numpy.uintc"), [`numpy.uint`](#numpy.uint "numpy.uint") and [`numpy.ulonglong`](#numpy.ulonglong "numpy.ulonglong")) with the specified number of bits. Compatible with the C99 `uint8_t`, `uint16_t`, `uint32_t`, and `uint64_t`, respectively.
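The point of the sized aliases is an exact-width guarantee, whichever C name they resolve to on a given platform. A quick check, pairing each alias with `np.iinfo` for its value range:

```python
import numpy as np

# Each sized alias has exactly the advertised width.
for t, bits in [(np.int8, 8), (np.int16, 16), (np.int32, 32), (np.int64, 64)]:
    assert np.dtype(t).itemsize * 8 == bits

# np.iinfo reports the corresponding value ranges quoted in this section.
print(np.iinfo(np.int8).min, np.iinfo(np.int8).max)  # -128 127
print(np.iinfo(np.uint64).max)                       # 18446744073709551615
```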
numpy.intp[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/core/src/multiarray/scalartypes.c.src) Alias for the signed integer type (one of [`numpy.byte`](#numpy.byte "numpy.byte"), [`numpy.short`](#numpy.short "numpy.short"), [`numpy.intc`](#numpy.intc "numpy.intc"), [`numpy.int_`](#numpy.int_ "numpy.int_") and `np.longlong`) that is the same size as a pointer. Compatible with the C `intptr_t`. Character code `'p'` numpy.uintp[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/core/src/multiarray/scalartypes.c.src) Alias for the unsigned integer type (one of [`numpy.ubyte`](#numpy.ubyte "numpy.ubyte"), [`numpy.ushort`](#numpy.ushort "numpy.ushort"), [`numpy.uintc`](#numpy.uintc "numpy.uintc"), [`numpy.uint`](#numpy.uint "numpy.uint") and `np.ulonglong`) that is the same size as a pointer. Compatible with the C `uintptr_t`. Character code `'P'` numpy.float16[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/core/src/multiarray/scalartypes.c.src) alias of [`numpy.half`](#numpy.half "numpy.half") numpy.float32[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/core/src/multiarray/scalartypes.c.src) alias of [`numpy.single`](#numpy.single "numpy.single") numpy.float64[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/core/src/multiarray/scalartypes.c.src) alias of [`numpy.double`](#numpy.double "numpy.double") numpy.float96 numpy.float128[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/core/src/multiarray/scalartypes.c.src) Alias for [`numpy.longdouble`](#numpy.longdouble "numpy.longdouble"), named after its size in bits. The existence of these aliases depends on the platform. 
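The pointer-sized aliases can be checked against the platform's actual pointer width using the standard `ctypes` module:

```python
import ctypes
import numpy as np

# numpy.intp and numpy.uintp are exactly as wide as a C pointer
# (8 bytes on a typical 64-bit platform, 4 on a 32-bit one).
assert np.dtype(np.intp).itemsize == ctypes.sizeof(ctypes.c_void_p)
assert np.dtype(np.uintp).itemsize == np.dtype(np.intp).itemsize
print(np.dtype(np.intp).itemsize)
```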
numpy.complex64[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/core/src/multiarray/scalartypes.c.src) alias of [`numpy.csingle`](#numpy.csingle "numpy.csingle") numpy.complex128[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/core/src/multiarray/scalartypes.c.src) alias of [`numpy.cdouble`](#numpy.cdouble "numpy.cdouble") numpy.complex192 numpy.complex256[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/core/src/multiarray/scalartypes.c.src) Alias for [`numpy.clongdouble`](#numpy.clongdouble "numpy.clongdouble"), named after its size in bits. The existence of these aliases depends on the platform. ### Other aliases The first two of these are conveniences which resemble the names of the builtin types, in the same style as [`bool_`](#numpy.bool_ "numpy.bool_"), [`int_`](#numpy.int_ "numpy.int_"), [`str_`](#numpy.str_ "numpy.str_"), [`bytes_`](#numpy.bytes_ "numpy.bytes_"), and [`object_`](#numpy.object_ "numpy.object_"): numpy.float_[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/core/src/multiarray/scalartypes.c.src) alias of [`numpy.double`](#numpy.double "numpy.double") numpy.complex_[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/core/src/multiarray/scalartypes.c.src) alias of [`numpy.cdouble`](#numpy.cdouble "numpy.cdouble") Some more use alternate naming conventions for extended-precision floats and complex numbers: numpy.longfloat[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/core/src/multiarray/scalartypes.c.src) alias of [`numpy.longdouble`](#numpy.longdouble "numpy.longdouble") numpy.singlecomplex[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/core/src/multiarray/scalartypes.c.src) alias of [`numpy.csingle`](#numpy.csingle "numpy.csingle") numpy.cfloat[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/core/src/multiarray/scalartypes.c.src) alias of [`numpy.cdouble`](#numpy.cdouble "numpy.cdouble") 
numpy.longcomplex[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/core/src/multiarray/scalartypes.c.src) alias of [`numpy.clongdouble`](#numpy.clongdouble "numpy.clongdouble") numpy.clongfloat[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/core/src/multiarray/scalartypes.c.src) alias of [`numpy.clongdouble`](#numpy.clongdouble "numpy.clongdouble") The following aliases originate from Python 2, and it is recommended that they not be used in new code. numpy.string_[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/core/src/multiarray/scalartypes.c.src) alias of [`numpy.bytes_`](#numpy.bytes_ "numpy.bytes_") numpy.unicode_[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/core/src/multiarray/scalartypes.c.src) alias of [`numpy.str_`](#numpy.str_ "numpy.str_") Attributes ---------- The array scalar objects have an [`array priority`](arrays.classes#numpy.class.__array_priority__ "numpy.class.__array_priority__") of [`NPY_SCALAR_PRIORITY`](c-api/array#c.NPY_SCALAR_PRIORITY "NPY_SCALAR_PRIORITY") (-1,000,000.0). They also do not (yet) have a [`ctypes`](generated/numpy.ndarray.ctypes#numpy.ndarray.ctypes "numpy.ndarray.ctypes") attribute. Otherwise, they share the same attributes as arrays: | | | | --- | --- | | [`generic.flags`](generated/numpy.generic.flags#numpy.generic.flags "numpy.generic.flags") | The integer value of flags. | | [`generic.shape`](generated/numpy.generic.shape#numpy.generic.shape "numpy.generic.shape") | Tuple of array dimensions. | | [`generic.strides`](generated/numpy.generic.strides#numpy.generic.strides "numpy.generic.strides") | Tuple of bytes steps in each dimension. | | [`generic.ndim`](generated/numpy.generic.ndim#numpy.generic.ndim "numpy.generic.ndim") | The number of array dimensions. | | [`generic.data`](generated/numpy.generic.data#numpy.generic.data "numpy.generic.data") | Pointer to start of data. 
| | [`generic.size`](generated/numpy.generic.size#numpy.generic.size "numpy.generic.size") | The number of elements in the gentype. | | [`generic.itemsize`](generated/numpy.generic.itemsize#numpy.generic.itemsize "numpy.generic.itemsize") | The length of one element in bytes. | | [`generic.base`](generated/numpy.generic.base#numpy.generic.base "numpy.generic.base") | Scalar attribute identical to the corresponding array attribute. | | [`generic.dtype`](generated/numpy.generic.dtype#numpy.generic.dtype "numpy.generic.dtype") | Get array data-descriptor. | | [`generic.real`](generated/numpy.generic.real#numpy.generic.real "numpy.generic.real") | The real part of the scalar. | | [`generic.imag`](generated/numpy.generic.imag#numpy.generic.imag "numpy.generic.imag") | The imaginary part of the scalar. | | [`generic.flat`](generated/numpy.generic.flat#numpy.generic.flat "numpy.generic.flat") | A 1-D view of the scalar. | | [`generic.T`](generated/numpy.generic.t#numpy.generic.T "numpy.generic.T") | Scalar attribute identical to the corresponding array attribute. | | [`generic.__array_interface__`](generated/numpy.generic.__array_interface__#numpy.generic.__array_interface__ "numpy.generic.__array_interface__") | Array protocol: Python side | | [`generic.__array_struct__`](generated/numpy.generic.__array_struct__#numpy.generic.__array_struct__ "numpy.generic.__array_struct__") | Array protocol: struct | | [`generic.__array_priority__`](generated/numpy.generic.__array_priority__#numpy.generic.__array_priority__ "numpy.generic.__array_priority__") | Array priority. 
| | [`generic.__array_wrap__`](generated/numpy.generic.__array_wrap__#numpy.generic.__array_wrap__ "numpy.generic.__array_wrap__") | sc.__array_wrap__(obj) return scalar from array | Indexing -------- See also [Indexing routines](arrays.indexing#arrays-indexing), [Data type objects (dtype)](arrays.dtypes#arrays-dtypes) Array scalars can be indexed like 0-dimensional arrays: if *x* is an array scalar, * `x[()]` returns a copy of array scalar * `x[...]` returns a 0-dimensional [`ndarray`](generated/numpy.ndarray#numpy.ndarray "numpy.ndarray") * `x['field-name']` returns the array scalar in the field *field-name*. (*x* can have fields, for example, when it corresponds to a structured data type.) Methods ------- Array scalars have exactly the same methods as arrays. The default behavior of these methods is to internally convert the scalar to an equivalent 0-dimensional array and to call the corresponding array method. In addition, math operations on array scalars are defined so that the same hardware flags are set and used to interpret the results as for [ufunc](ufuncs#ufuncs), so that the error state used for ufuncs also carries over to the math on array scalars. The exceptions to the above rules are given below: | | | | --- | --- | | [`generic.__array__`](generated/numpy.generic.__array__#numpy.generic.__array__ "numpy.generic.__array__") | sc.__array__(dtype) return 0-dim array from scalar with specified dtype | | [`generic.__array_wrap__`](generated/numpy.generic.__array_wrap__#numpy.generic.__array_wrap__ "numpy.generic.__array_wrap__") | sc.__array_wrap__(obj) return scalar from array | | [`generic.squeeze`](generated/numpy.generic.squeeze#numpy.generic.squeeze "numpy.generic.squeeze") | Scalar method identical to the corresponding array attribute. | | [`generic.byteswap`](generated/numpy.generic.byteswap#numpy.generic.byteswap "numpy.generic.byteswap") | Scalar method identical to the corresponding array attribute. 
| | [`generic.__reduce__`](generated/numpy.generic.__reduce__#numpy.generic.__reduce__ "numpy.generic.__reduce__") | Helper for pickle. | | [`generic.__setstate__`](generated/numpy.generic.__setstate__#numpy.generic.__setstate__ "numpy.generic.__setstate__") | | | [`generic.setflags`](generated/numpy.generic.setflags#numpy.generic.setflags "numpy.generic.setflags") | Scalar method identical to the corresponding array attribute. | Utility method for typing: | | | | --- | --- | | [`number.__class_getitem__`](generated/numpy.number.__class_getitem__#numpy.number.__class_getitem__ "numpy.number.__class_getitem__")(item, /) | Return a parametrized wrapper around the [`number`](#numpy.number "numpy.number") type. | Defining new types ------------------ There are two ways to effectively define a new array scalar type (apart from composing structured types [dtypes](arrays.dtypes#arrays-dtypes) from the built-in scalar types): One way is to simply subclass the [`ndarray`](generated/numpy.ndarray#numpy.ndarray "numpy.ndarray") and overwrite the methods of interest. This will work to a degree, but internally certain behaviors are fixed by the data type of the array. To fully customize the data type of an array you need to define a new data-type, and register it with NumPy. Such new types can only be defined in C, using the [NumPy C-API](c-api/index#c-api). © 2005–2022 NumPy Developers Licensed under the 3-clause BSD License. <https://numpy.org/doc/1.23/reference/arrays.scalars.html>

Data type objects (dtype)
=========================

A data type object (an instance of [`numpy.dtype`](generated/numpy.dtype#numpy.dtype "numpy.dtype") class) describes how the bytes in the fixed-size block of memory corresponding to an array item should be interpreted. It describes the following aspects of the data: 1. Type of the data (integer, float, Python object, etc.) 2. Size of the data (how many bytes are in *e.g.* the integer) 3.
Byte order of the data ([little-endian](../glossary#term-little-endian) or [big-endian](../glossary#term-big-endian)) 4. If the data type is [structured data type](../glossary#term-structured-data-type), an aggregate of other data types, (*e.g.*, describing an array item consisting of an integer and a float), 1. what are the names of the “[fields](../glossary#term-field)” of the structure, by which they can be [accessed](../user/basics.indexing#arrays-indexing-fields), 2. what is the data-type of each [field](../glossary#term-field), and 3. which part of the memory block each field takes. 5. If the data type is a sub-array, what is its shape and data type. To describe the type of scalar data, there are several [built-in scalar types](arrays.scalars#arrays-scalars-built-in) in NumPy for various precision of integers, floating-point numbers, *etc*. An item extracted from an array, *e.g.*, by indexing, will be a Python object whose type is the scalar type associated with the data type of the array. Note that the scalar types are not [`dtype`](generated/numpy.dtype#numpy.dtype "numpy.dtype") objects, even though they can be used in place of one whenever a data type specification is needed in NumPy. Structured data types are formed by creating a data type whose [field](../glossary#term-field) contain other data types. Each field has a name by which it can be [accessed](../user/basics.indexing#arrays-indexing-fields). The parent data type should be of sufficient size to contain all its fields; the parent is nearly always based on the [`void`](arrays.scalars#numpy.void "numpy.void") type which allows an arbitrary item size. Structured data types may also contain nested structured sub-array data types in their fields. Finally, a data type can describe items that are themselves arrays of items of another data type. These sub-arrays must, however, be of a fixed size. 
If an array is created using a data-type describing a sub-array, the dimensions of the sub-array are appended to the shape of the array when the array is created. Sub-arrays in a field of a structured type behave differently, see [Field access](../user/basics.indexing#arrays-indexing-fields). Sub-arrays always have a C-contiguous memory layout. #### Example A simple data type containing a 32-bit big-endian integer: (see [Specifying and constructing data types](#arrays-dtypes-constructing) for details on construction) ``` >>> dt = np.dtype('>i4') >>> dt.byteorder '>' >>> dt.itemsize 4 >>> dt.name 'int32' >>> dt.type is np.int32 True ``` The corresponding array scalar type is [`int32`](arrays.scalars#numpy.int32 "numpy.int32"). #### Example A structured data type containing a 16-character string (in field ‘name’) and a sub-array of two 64-bit floating-point numbers (in field ‘grades’): ``` >>> dt = np.dtype([('name', np.unicode_, 16), ('grades', np.float64, (2,))]) >>> dt['name'] dtype('<U16') >>> dt['grades'] dtype(('<f8', (2,))) ``` Items of an array of this data type are wrapped in an [array scalar](arrays.scalars#arrays-scalars) type that also has two fields: ``` >>> x = np.array([('Sarah', (8.0, 7.0)), ('John', (6.0, 7.0))], dtype=dt) >>> x[1] ('John', [6., 7.]) >>> x[1]['grades'] array([6., 7.]) >>> type(x[1]) <class 'numpy.void'> >>> type(x[1]['grades']) <class 'numpy.ndarray'> ``` Specifying and constructing data types -------------------------------------- Whenever a data-type is required in a NumPy function or method, either a [`dtype`](generated/numpy.dtype#numpy.dtype "numpy.dtype") object or something that can be converted to one can be supplied. Such conversions are done by the [`dtype`](generated/numpy.dtype#numpy.dtype "numpy.dtype") constructor: | | | | --- | --- | | [`dtype`](generated/numpy.dtype#numpy.dtype "numpy.dtype")(dtype[, align, copy]) | Create a data type object.
| What can be converted to a data-type object is described below: [`dtype`](generated/numpy.dtype#numpy.dtype "numpy.dtype") object Used as-is. None The default data type: [`float_`](arrays.scalars#numpy.float_ "numpy.float_"). Array-scalar types The 24 built-in [array scalar type objects](arrays.scalars#arrays-scalars-built-in) all convert to an associated data-type object. This is true for their sub-classes as well. Note that not all data-type information can be supplied with a type-object: for example, [`flexible`](arrays.scalars#numpy.flexible "numpy.flexible") data-types have a default *itemsize* of 0, and require an explicitly given size to be useful. #### Example ``` >>> dt = np.dtype(np.int32) # 32-bit integer >>> dt = np.dtype(np.complex128) # 128-bit complex floating-point number ``` Generic types The generic hierarchical type objects convert to corresponding type objects according to the associations: | | | | --- | --- | | [`number`](arrays.scalars#numpy.number "numpy.number"), [`inexact`](arrays.scalars#numpy.inexact "numpy.inexact"), [`floating`](arrays.scalars#numpy.floating "numpy.floating") | [`float`](https://docs.python.org/3/library/functions.html#float "(in Python v3.10)") | | [`complexfloating`](arrays.scalars#numpy.complexfloating "numpy.complexfloating") | [`cfloat`](arrays.scalars#numpy.cfloat "numpy.cfloat") | | [`integer`](arrays.scalars#numpy.integer "numpy.integer"), [`signedinteger`](arrays.scalars#numpy.signedinteger "numpy.signedinteger") | [`int_`](arrays.scalars#numpy.int_ "numpy.int_") | | [`unsignedinteger`](arrays.scalars#numpy.unsignedinteger "numpy.unsignedinteger") | [`uint`](arrays.scalars#numpy.uint "numpy.uint") | | `character` | `string` | | [`generic`](arrays.scalars#numpy.generic "numpy.generic"), [`flexible`](arrays.scalars#numpy.flexible "numpy.flexible") | [`void`](arrays.scalars#numpy.void "numpy.void") | Deprecated since version 1.19: This conversion of generic scalar types is deprecated. 
This is because it can be unexpected in a context such as `arr.astype(dtype=np.floating)`, which casts an array of `float32` to an array of `float64`, even though `float32` is a subdtype of `np.floating`. Built-in Python types Several python types are equivalent to a corresponding array scalar when used to generate a [`dtype`](generated/numpy.dtype#numpy.dtype "numpy.dtype") object: | | | | --- | --- | | [`int`](https://docs.python.org/3/library/functions.html#int "(in Python v3.10)") | [`int_`](arrays.scalars#numpy.int_ "numpy.int_") | | [`bool`](https://docs.python.org/3/library/functions.html#bool "(in Python v3.10)") | [`bool_`](arrays.scalars#numpy.bool_ "numpy.bool_") | | [`float`](https://docs.python.org/3/library/functions.html#float "(in Python v3.10)") | [`float_`](arrays.scalars#numpy.float_ "numpy.float_") | | [`complex`](https://docs.python.org/3/library/functions.html#complex "(in Python v3.10)") | [`cfloat`](arrays.scalars#numpy.cfloat "numpy.cfloat") | | [`bytes`](https://docs.python.org/3/library/stdtypes.html#bytes "(in Python v3.10)") | [`bytes_`](arrays.scalars#numpy.bytes_ "numpy.bytes_") | | [`str`](https://docs.python.org/3/library/stdtypes.html#str "(in Python v3.10)") | [`str_`](arrays.scalars#numpy.str_ "numpy.str_") | | `buffer` | [`void`](arrays.scalars#numpy.void "numpy.void") | | (all others) | [`object_`](arrays.scalars#numpy.object_ "numpy.object_") | Note that `str` refers to either null terminated bytes or unicode strings depending on the Python version. In code targeting both Python 2 and 3 `np.unicode_` should be used as a dtype for strings. See [Note on string types](#string-dtype-note). #### Example ``` >>> dt = np.dtype(float) # Python-compatible floating-point number >>> dt = np.dtype(int) # Python-compatible integer >>> dt = np.dtype(object) # Python object ``` Note All other types map to `object_` for convenience. Code should expect that such types may map to a specific (new) dtype in the future. 
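The table above can be verified directly with the `dtype` constructor (written here with the `double`/`cdouble` names, since `float_` and `complex_` are aliases for them):

```python
import numpy as np

# Built-in Python types convert to the corresponding array-scalar dtypes.
assert np.dtype(int) == np.dtype(np.int_)
assert np.dtype(float) == np.dtype(np.double)
assert np.dtype(complex) == np.dtype(np.cdouble)
assert np.dtype(bool) == np.dtype(np.bool_)

# str and bytes map to the flexible string types, with a default itemsize of 0.
print(np.dtype(str), np.dtype(bytes))
```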
Types with `.dtype` Any type object with a `dtype` attribute: The attribute will be accessed and used directly. The attribute must return something that is convertible into a dtype object. Several kinds of strings can be converted. Recognized strings can be prepended with `'>'` ([big-endian](../glossary#term-big-endian)), `'<'` ([little-endian](../glossary#term-little-endian)), or `'='` (hardware-native, the default), to specify the byte order. One-character strings Each built-in data-type has a character code (the updated Numeric typecodes), that uniquely identifies it. #### Example ``` >>> dt = np.dtype('b') # byte, native byte order >>> dt = np.dtype('>H') # big-endian unsigned short >>> dt = np.dtype('<f') # little-endian single-precision float >>> dt = np.dtype('d') # double-precision floating-point number ``` Array-protocol type strings (see [The array interface protocol](arrays.interface#arrays-interface)) The first character specifies the kind of data and the remaining characters specify the number of bytes per item, except for Unicode, where it is interpreted as the number of characters. The item size must correspond to an existing type, or an error will be raised. 
The supported kinds are | | | | --- | --- | | `'?'` | boolean | | `'b'` | (signed) byte | | `'B'` | unsigned byte | | `'i'` | (signed) integer | | `'u'` | unsigned integer | | `'f'` | floating-point | | `'c'` | complex-floating point | | `'m'` | timedelta | | `'M'` | datetime | | `'O'` | (Python) objects | | `'S'`, `'a'` | zero-terminated bytes (not recommended) | | `'U'` | Unicode string | | `'V'` | raw data ([`void`](arrays.scalars#numpy.void "numpy.void")) | #### Example ``` >>> dt = np.dtype('i4') # 32-bit signed integer >>> dt = np.dtype('f8') # 64-bit floating-point number >>> dt = np.dtype('c16') # 128-bit complex floating-point number >>> dt = np.dtype('a25') # 25-length zero-terminated bytes >>> dt = np.dtype('U25') # 25-character string ``` Note on string types For backward compatibility with Python 2 the `S` and `a` typestrings remain zero-terminated bytes and [`numpy.string_`](arrays.scalars#numpy.string_ "numpy.string_") continues to alias [`numpy.bytes_`](arrays.scalars#numpy.bytes_ "numpy.bytes_"). To use actual strings in Python 3 use `U` or [`numpy.str_`](arrays.scalars#numpy.str_ "numpy.str_"). For signed bytes that do not need zero-termination `b` or `i1` can be used. String with comma-separated fields A short-hand notation for specifying the format of a structured data type is a comma-separated string of basic formats. A basic format in this context is an optional shape specifier followed by an array-protocol type string. Parentheses are required on the shape if it has more than one dimension. NumPy allows a modification on the format in that any string that can uniquely identify the type can be used to specify the data-type in a field. The generated data-type fields are named `'f0'`, `'f1'`, 
, `'f<N-1>'` where N (>1) is the number of comma-separated basic formats in the string. If the optional shape specifier is provided, then the data-type for the corresponding field describes a sub-array. #### Example * field named `f0` containing a 32-bit integer * field named `f1` containing a 2 x 3 sub-array of 64-bit floating-point numbers * field named `f2` containing a 32-bit floating-point number ``` >>> dt = np.dtype("i4, (2,3)f8, f4") ``` * field named `f0` containing a 3-character string * field named `f1` containing a sub-array of shape (3,) containing 64-bit unsigned integers * field named `f2` containing a 3 x 4 sub-array containing 10-character strings ``` >>> dt = np.dtype("a3, 3u8, (3,4)a10") ``` Type strings Any string in `numpy.sctypeDict`.keys(): #### Example ``` >>> dt = np.dtype('uint32') # 32-bit unsigned integer >>> dt = np.dtype('float64') # 64-bit floating-point number ``` `(flexible_dtype, itemsize)` The first argument must be an object that is converted to a zero-sized flexible data-type object, the second argument is an integer providing the desired itemsize. #### Example ``` >>> dt = np.dtype((np.void, 10)) # 10-byte wide data block >>> dt = np.dtype(('U', 10)) # 10-character unicode string ``` `(fixed_dtype, shape)` The first argument is any object that can be converted into a fixed-size data-type object. The second argument is the desired shape of this type. If the shape parameter is 1, then the data-type object used to be equivalent to fixed dtype. This behaviour is deprecated since NumPy 1.17 and will raise an error in the future. If *shape* is a tuple, then the new dtype defines a sub-array of the given shape. #### Example ``` >>> dt = np.dtype((np.int32, (2,2))) # 2 x 2 integer sub-array >>> dt = np.dtype(('i4, (2,3)f8, f4', (2,3))) # 2 x 3 structured sub-array ``` `[(field_name, field_dtype, field_shape), ...]` *obj* should be a list of fields where each field is described by a tuple of length 2 or 3. 
(Equivalent to the `descr` item in the [`__array_interface__`](arrays.interface#object.__array_interface__ "object.__array_interface__") attribute.) The first element, *field_name*, is the field name (if this is `''` then a standard field name, `'f#'`, is assigned). The field name may also be a 2-tuple of strings where the first string is either a “title” (which may be any string or unicode string) or meta-data for the field which can be any object, and the second string is the “name” which must be a valid Python identifier. The second element, *field_dtype*, can be anything that can be interpreted as a data-type. The optional third element *field_shape* contains the shape if this field represents an array of the data-type in the second element. Note that a 3-tuple with a third argument equal to 1 is equivalent to a 2-tuple. This style does not accept *align* in the [`dtype`](generated/numpy.dtype#numpy.dtype "numpy.dtype") constructor as it is assumed that all of the memory is accounted for by the array interface description. #### Example Data-type with fields `big` (big-endian 32-bit integer) and `little` (little-endian 32-bit integer): ``` >>> dt = np.dtype([('big', '>i4'), ('little', '<i4')]) ``` Data-type with fields `R`, `G`, `B`, `A`, each being an unsigned 8-bit integer: ``` >>> dt = np.dtype([('R','u1'), ('G','u1'), ('B','u1'), ('A','u1')]) ``` `{'names': ..., 'formats': ..., 'offsets': ..., 'titles': ..., 'itemsize': ...}` This style has two required and three optional keys. The *names* and *formats* keys are required. Their respective values are equal-length lists with the field names and the field formats. The field names must be strings and the field formats can be any object accepted by [`dtype`](generated/numpy.dtype#numpy.dtype "numpy.dtype") constructor. When the optional keys *offsets* and *titles* are provided, their values must each be lists of the same length as the *names* and *formats* lists. 
The *offsets* value is a list of byte offsets (limited to [`ctypes.c_int`](https://docs.python.org/3/library/ctypes.html#ctypes.c_int "(in Python v3.10)")) for each field, while the *titles* value is a list of titles for each field (`None` can be used if no title is desired for that field). The *titles* can be any object, but when a [`str`](https://docs.python.org/3/library/stdtypes.html#str "(in Python v3.10)") object will add another entry to the fields dictionary keyed by the title and referencing the same field tuple which will contain the title as an additional tuple member. The *itemsize* key allows the total size of the dtype to be set, and must be an integer large enough so all the fields are within the dtype. If the dtype being constructed is aligned, the *itemsize* must also be divisible by the struct alignment. Total dtype *itemsize* is limited to [`ctypes.c_int`](https://docs.python.org/3/library/ctypes.html#ctypes.c_int "(in Python v3.10)"). #### Example Data type with fields `r`, `g`, `b`, `a`, each being an 8-bit unsigned integer: ``` >>> dt = np.dtype({'names': ['r','g','b','a'], ... 'formats': [np.uint8, np.uint8, np.uint8, np.uint8]}) ``` Data type with fields `r` and `b` (with the given titles), both being 8-bit unsigned integers, the first at byte position 0 from the start of the field and the second at position 2: ``` >>> dt = np.dtype({'names': ['r','b'], 'formats': ['u1', 'u1'], ... 'offsets': [0, 2], ... 'titles': ['Red pixel', 'Blue pixel']}) ``` `{'field1': ..., 'field2': ..., ...}` This usage is discouraged, because it is ambiguous with the other dict-based construction method. If you have a field called ‘names’ and a field called ‘formats’ there will be a conflict. This style allows passing in the [`fields`](generated/numpy.dtype.fields#numpy.dtype.fields "numpy.dtype.fields") attribute of a data-type object. *obj* should contain string or unicode keys that refer to `(data-type, offset)` or `(data-type, offset, title)` tuples. 
#### Example Data type containing field `col1` (10-character string at byte position 0), `col2` (32-bit float at byte position 10), and `col3` (integers at byte position 14): ``` >>> dt = np.dtype({'col1': ('U10', 0), 'col2': (np.float32, 10), ... 'col3': (int, 14)}) ``` `(base_dtype, new_dtype)` In NumPy 1.7 and later, this form allows `base_dtype` to be interpreted as a structured dtype. Arrays created with this dtype will have underlying dtype `base_dtype` but will have fields and flags taken from `new_dtype`. This is useful for creating custom structured dtypes, as done in [record arrays](arrays.classes#arrays-classes-rec). This form also makes it possible to specify struct dtypes with overlapping fields, functioning like the ‘union’ type in C. This usage is discouraged, however, and the union mechanism is preferred. Both arguments must be convertible to data-type objects with the same total size. #### Example 32-bit integer, whose first two bytes are interpreted as an integer via field `real`, and the following two bytes via field `imag`. ``` >>> dt = np.dtype((np.int32,{'real':(np.int16, 0),'imag':(np.int16, 2)})) ``` 32-bit integer, which is interpreted as consisting of a sub-array of shape `(4,)` containing 8-bit integers: ``` >>> dt = np.dtype((np.int32, (np.int8, 4))) ``` 32-bit integer, containing fields `r`, `g`, `b`, `a` that interpret the 4 bytes in the integer as four unsigned integers: ``` >>> dt = np.dtype(('i4', [('r','u1'),('g','u1'),('b','u1'),('a','u1')])) ``` dtype ----- NumPy data type descriptions are instances of the [`dtype`](generated/numpy.dtype#numpy.dtype "numpy.dtype") class. 
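As a wrap-up of the construction styles above, note that different specifications compare equal when they describe the same layout; a small sketch using specs from this page:

```python
import numpy as np

# One dtype, three spellings: type string, Python type object, dtype name.
assert np.dtype('i4') == np.dtype(np.int32) == np.dtype('int32')

# Comma-separated shorthand vs. an explicit list of fields vs. dict style;
# all three describe the same packed two-field structure.
shorthand = np.dtype("i4, f8")
explicit = np.dtype([('f0', 'i4'), ('f1', 'f8')])
dict_style = np.dtype({'names': ['f0', 'f1'], 'formats': ['i4', 'f8']})
assert shorthand == explicit == dict_style
print(shorthand.itemsize)  # 12: the fields are packed with no padding
```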
### Attributes The type of the data is described by the following [`dtype`](generated/numpy.dtype#numpy.dtype "numpy.dtype") attributes: | | | | --- | --- | | [`dtype.type`](generated/numpy.dtype.type#numpy.dtype.type "numpy.dtype.type") | | | [`dtype.kind`](generated/numpy.dtype.kind#numpy.dtype.kind "numpy.dtype.kind") | A character code (one of 'biufcmMOSUV') identifying the general kind of data. | | [`dtype.char`](generated/numpy.dtype.char#numpy.dtype.char "numpy.dtype.char") | A unique character code for each of the 21 different built-in types. | | [`dtype.num`](generated/numpy.dtype.num#numpy.dtype.num "numpy.dtype.num") | A unique number for each of the 21 different built-in types. | | [`dtype.str`](generated/numpy.dtype.str#numpy.dtype.str "numpy.dtype.str") | The array-protocol typestring of this data-type object. | Size of the data is in turn described by: | | | | --- | --- | | [`dtype.name`](generated/numpy.dtype.name#numpy.dtype.name "numpy.dtype.name") | A bit-width name for this data-type. | | [`dtype.itemsize`](generated/numpy.dtype.itemsize#numpy.dtype.itemsize "numpy.dtype.itemsize") | The element size of this data-type object. | Endianness of this data: | | | | --- | --- | | [`dtype.byteorder`](generated/numpy.dtype.byteorder#numpy.dtype.byteorder "numpy.dtype.byteorder") | A character indicating the byte-order of this data-type object. | Information about sub-data-types in a [structured data type](../glossary#term-structured-data-type): | | | | --- | --- | | [`dtype.fields`](generated/numpy.dtype.fields#numpy.dtype.fields "numpy.dtype.fields") | Dictionary of named fields defined for this data type, or `None`. | | [`dtype.names`](generated/numpy.dtype.names#numpy.dtype.names "numpy.dtype.names") | Ordered list of field names, or `None` if there are no fields. 
| For data types that describe sub-arrays: | | | | --- | --- | | [`dtype.subdtype`](generated/numpy.dtype.subdtype#numpy.dtype.subdtype "numpy.dtype.subdtype") | Tuple `(item_dtype, shape)` if this [`dtype`](generated/numpy.dtype#numpy.dtype "numpy.dtype") describes a sub-array, and None otherwise. | | [`dtype.shape`](generated/numpy.dtype.shape#numpy.dtype.shape "numpy.dtype.shape") | Shape tuple of the sub-array if this data type describes a sub-array, and `()` otherwise. | Attributes providing additional information: | | | | --- | --- | | [`dtype.hasobject`](generated/numpy.dtype.hasobject#numpy.dtype.hasobject "numpy.dtype.hasobject") | Boolean indicating whether this dtype contains any reference-counted objects in any fields or sub-dtypes. | | [`dtype.flags`](generated/numpy.dtype.flags#numpy.dtype.flags "numpy.dtype.flags") | Bit-flags describing how this data type is to be interpreted. | | [`dtype.isbuiltin`](generated/numpy.dtype.isbuiltin#numpy.dtype.isbuiltin "numpy.dtype.isbuiltin") | Integer indicating how this dtype relates to the built-in dtypes. | | [`dtype.isnative`](generated/numpy.dtype.isnative#numpy.dtype.isnative "numpy.dtype.isnative") | Boolean indicating whether the byte order of this dtype is native to the platform. | | [`dtype.descr`](generated/numpy.dtype.descr#numpy.dtype.descr "numpy.dtype.descr") | `__array_interface__` description of the data-type. | | [`dtype.alignment`](generated/numpy.dtype.alignment#numpy.dtype.alignment "numpy.dtype.alignment") | The required alignment (bytes) of this data-type according to the compiler. | | [`dtype.base`](generated/numpy.dtype.base#numpy.dtype.base "numpy.dtype.base") | Returns dtype for the base element of the subarrays, regardless of their dimension or shape. | Metadata attached by the user: | | | | --- | --- | | [`dtype.metadata`](generated/numpy.dtype.metadata#numpy.dtype.metadata "numpy.dtype.metadata") | Either `None` or a readonly dictionary of metadata (mappingproxy). 
| ### Methods Data types have the following method for changing the byte order: | | | | --- | --- | | [`dtype.newbyteorder`](generated/numpy.dtype.newbyteorder#numpy.dtype.newbyteorder "numpy.dtype.newbyteorder")([new_order]) | Return a new dtype with a different byte order. | The following methods implement the pickle protocol: | | | | --- | --- | | [`dtype.__reduce__`](generated/numpy.dtype.__reduce__#numpy.dtype.__reduce__ "numpy.dtype.__reduce__") | Helper for pickle. | | [`dtype.__setstate__`](generated/numpy.dtype.__setstate__#numpy.dtype.__setstate__ "numpy.dtype.__setstate__") | | Utility method for typing: | | | | --- | --- | | [`dtype.__class_getitem__`](generated/numpy.dtype.__class_getitem__#numpy.dtype.__class_getitem__ "numpy.dtype.__class_getitem__")(item, /) | Return a parametrized wrapper around the [`dtype`](generated/numpy.dtype#numpy.dtype "numpy.dtype") type. | Comparison operations: | | | | --- | --- | | [`dtype.__ge__`](generated/numpy.dtype.__ge__#numpy.dtype.__ge__ "numpy.dtype.__ge__")(value, /) | Return self>=value. | | [`dtype.__gt__`](generated/numpy.dtype.__gt__#numpy.dtype.__gt__ "numpy.dtype.__gt__")(value, /) | Return self>value. | | [`dtype.__le__`](generated/numpy.dtype.__le__#numpy.dtype.__le__ "numpy.dtype.__le__")(value, /) | Return self<=value. | | [`dtype.__lt__`](generated/numpy.dtype.__lt__#numpy.dtype.__lt__ "numpy.dtype.__lt__")(value, /) | Return self<value. | © 2005–2022 NumPy Developers Licensed under the 3-clause BSD License. <https://numpy.org/doc/1.23/reference/arrays.dtypes.html>

Indexing routines
=================

See also [Indexing on ndarrays](../user/basics.indexing#basics-indexing) Generating index arrays ----------------------- | | | | --- | --- | | [`c_`](generated/numpy.c_#numpy.c_ "numpy.c_") | Translates slice objects to concatenation along the second axis. | | [`r_`](generated/numpy.r_#numpy.r_ "numpy.r_") | Translates slice objects to concatenation along the first axis. 
| | [`s_`](generated/numpy.s_#numpy.s_ "numpy.s_") | A nicer way to build up index tuples for arrays. | | [`nonzero`](generated/numpy.nonzero#numpy.nonzero "numpy.nonzero")(a) | Return the indices of the elements that are non-zero. | | [`where`](generated/numpy.where#numpy.where "numpy.where")(condition, [x, y], /) | Return elements chosen from `x` or `y` depending on `condition`. | | [`indices`](generated/numpy.indices#numpy.indices "numpy.indices")(dimensions[, dtype, sparse]) | Return an array representing the indices of a grid. | | [`ix_`](generated/numpy.ix_#numpy.ix_ "numpy.ix_")(*args) | Construct an open mesh from multiple sequences. | | [`ogrid`](generated/numpy.ogrid#numpy.ogrid "numpy.ogrid") | `nd_grid` instance which returns an open multi-dimensional "meshgrid". | | [`ravel_multi_index`](generated/numpy.ravel_multi_index#numpy.ravel_multi_index "numpy.ravel_multi_index")(multi_index, dims[, mode, ...]) | Converts a tuple of index arrays into an array of flat indices, applying boundary modes to the multi-index. | | [`unravel_index`](generated/numpy.unravel_index#numpy.unravel_index "numpy.unravel_index")(indices, shape[, order]) | Converts a flat index or array of flat indices into a tuple of coordinate arrays. | | [`diag_indices`](generated/numpy.diag_indices#numpy.diag_indices "numpy.diag_indices")(n[, ndim]) | Return the indices to access the main diagonal of an array. | | [`diag_indices_from`](generated/numpy.diag_indices_from#numpy.diag_indices_from "numpy.diag_indices_from")(arr) | Return the indices to access the main diagonal of an n-dimensional array. | | [`mask_indices`](generated/numpy.mask_indices#numpy.mask_indices "numpy.mask_indices")(n, mask_func[, k]) | Return the indices to access (n, n) arrays, given a masking function. | | [`tril_indices`](generated/numpy.tril_indices#numpy.tril_indices "numpy.tril_indices")(n[, k, m]) | Return the indices for the lower-triangle of an (n, m) array. 
| | [`tril_indices_from`](generated/numpy.tril_indices_from#numpy.tril_indices_from "numpy.tril_indices_from")(arr[, k]) | Return the indices for the lower-triangle of arr. | | [`triu_indices`](generated/numpy.triu_indices#numpy.triu_indices "numpy.triu_indices")(n[, k, m]) | Return the indices for the upper-triangle of an (n, m) array. | | [`triu_indices_from`](generated/numpy.triu_indices_from#numpy.triu_indices_from "numpy.triu_indices_from")(arr[, k]) | Return the indices for the upper-triangle of arr. | Indexing-like operations ------------------------ | | | | --- | --- | | [`take`](generated/numpy.take#numpy.take "numpy.take")(a, indices[, axis, out, mode]) | Take elements from an array along an axis. | | [`take_along_axis`](generated/numpy.take_along_axis#numpy.take_along_axis "numpy.take_along_axis")(arr, indices, axis) | Take values from the input array by matching 1d index and data slices. | | [`choose`](generated/numpy.choose#numpy.choose "numpy.choose")(a, choices[, out, mode]) | Construct an array from an index array and a list of arrays to choose from. | | [`compress`](generated/numpy.compress#numpy.compress "numpy.compress")(condition, a[, axis, out]) | Return selected slices of an array along given axis. | | [`diag`](generated/numpy.diag#numpy.diag "numpy.diag")(v[, k]) | Extract a diagonal or construct a diagonal array. | | [`diagonal`](generated/numpy.diagonal#numpy.diagonal "numpy.diagonal")(a[, offset, axis1, axis2]) | Return specified diagonals. | | [`select`](generated/numpy.select#numpy.select "numpy.select")(condlist, choicelist[, default]) | Return an array drawn from elements in choicelist, depending on conditions. | | [`lib.stride_tricks.sliding_window_view`](generated/numpy.lib.stride_tricks.sliding_window_view#numpy.lib.stride_tricks.sliding_window_view "numpy.lib.stride_tricks.sliding_window_view")(x, ...) | Create a sliding window view into the array with the given window shape. 
| | [`lib.stride_tricks.as_strided`](generated/numpy.lib.stride_tricks.as_strided#numpy.lib.stride_tricks.as_strided "numpy.lib.stride_tricks.as_strided")(x[, shape, ...]) | Create a view into the array with the given shape and strides. | Inserting data into arrays -------------------------- | | | | --- | --- | | [`place`](generated/numpy.place#numpy.place "numpy.place")(arr, mask, vals) | Change elements of an array based on conditional and input values. | | [`put`](generated/numpy.put#numpy.put "numpy.put")(a, ind, v[, mode]) | Replaces specified elements of an array with given values. | | [`put_along_axis`](generated/numpy.put_along_axis#numpy.put_along_axis "numpy.put_along_axis")(arr, indices, values, axis) | Put values into the destination array by matching 1d index and data slices. | | [`putmask`](generated/numpy.putmask#numpy.putmask "numpy.putmask")(a, mask, values) | Changes elements of an array based on conditional and input values. | | [`fill_diagonal`](generated/numpy.fill_diagonal#numpy.fill_diagonal "numpy.fill_diagonal")(a, val[, wrap]) | Fill the main diagonal of the given array of any dimensionality. | Iterating over arrays --------------------- | | | | --- | --- | | [`nditer`](generated/numpy.nditer#numpy.nditer "numpy.nditer")(op[, flags, op_flags, op_dtypes, ...]) | Efficient multi-dimensional iterator object to iterate over arrays. | | [`ndenumerate`](generated/numpy.ndenumerate#numpy.ndenumerate "numpy.ndenumerate")(arr) | Multidimensional index iterator. | | [`ndindex`](generated/numpy.ndindex#numpy.ndindex "numpy.ndindex")(*shape) | An N-dimensional iterator object to index arrays. | | [`nested_iters`](generated/numpy.nested_iters#numpy.nested_iters "numpy.nested_iters")(op, axes[, flags, op_flags, ...]) | Create nditers for use in nested loops | | [`flatiter`](generated/numpy.flatiter#numpy.flatiter "numpy.flatiter")() | Flat iterator object to iterate over arrays. 
| | [`lib.Arrayterator`](generated/numpy.lib.arrayterator#numpy.lib.Arrayterator "numpy.lib.Arrayterator")(var[, buf_size]) | Buffered iterator for big arrays. | | [`iterable`](generated/numpy.iterable#numpy.iterable "numpy.iterable")(y) | Check whether or not an object can be iterated over. | © 2005–2022 NumPy Developers Licensed under the 3-clause BSD License. <https://numpy.org/doc/1.23/reference/arrays.indexing.html>

Iterating Over Arrays
=====================

Note Arrays support the iterator protocol and can be iterated over like Python lists. See the [Indexing, Slicing and Iterating](../user/quickstart#quickstart-indexing-slicing-and-iterating) section in the Quickstart guide for basic usage and examples. The remainder of this document presents the [`nditer`](generated/numpy.nditer#numpy.nditer "numpy.nditer") object and covers more advanced usage. The iterator object [`nditer`](generated/numpy.nditer#numpy.nditer "numpy.nditer"), introduced in NumPy 1.6, provides many flexible ways to visit all the elements of one or more arrays in a systematic fashion. This page introduces some basic ways to use the object for computations on arrays in Python, then concludes with how one can accelerate the inner loop in Cython. Since the Python exposure of [`nditer`](generated/numpy.nditer#numpy.nditer "numpy.nditer") is a relatively straightforward mapping of the C array iterator API, these ideas will also provide help working with array iteration from C or C++. Single Array Iteration ---------------------- The most basic task that can be done with the [`nditer`](generated/numpy.nditer#numpy.nditer "numpy.nditer") is to visit every element of an array. Each element is provided one by one using the standard Python iterator interface. #### Example ``` >>> a = np.arange(6).reshape(2,3) >>> for x in np.nditer(a): ... print(x, end=' ') ... 
0 1 2 3 4 5 ``` An important thing to be aware of for this iteration is that the order is chosen to match the memory layout of the array instead of using a standard C or Fortran ordering. This is done for access efficiency, reflecting the idea that by default one simply wants to visit each element without concern for a particular ordering. We can see this by iterating over the transpose of our previous array, compared to taking a copy of that transpose in C order. #### Example ``` >>> a = np.arange(6).reshape(2,3) >>> for x in np.nditer(a.T): ... print(x, end=' ') ... 0 1 2 3 4 5 ``` ``` >>> for x in np.nditer(a.T.copy(order='C')): ... print(x, end=' ') ... 0 3 1 4 2 5 ``` The elements of both `a` and `a.T` get traversed in the same order, namely the order they are stored in memory, whereas the elements of `a.T.copy(order=’C’)` get visited in a different order because they have been put into a different memory layout. ### Controlling Iteration Order There are times when it is important to visit the elements of an array in a specific order, irrespective of the layout of the elements in memory. The [`nditer`](generated/numpy.nditer#numpy.nditer "numpy.nditer") object provides an `order` parameter to control this aspect of iteration. The default, having the behavior described above, is order=’K’ to keep the existing order. This can be overridden with order=’C’ for C order and order=’F’ for Fortran order. #### Example ``` >>> a = np.arange(6).reshape(2,3) >>> for x in np.nditer(a, order='F'): ... print(x, end=' ') ... 0 3 1 4 2 5 >>> for x in np.nditer(a.T, order='C'): ... print(x, end=' ') ... 0 3 1 4 2 5 ``` ### Modifying Array Values By default, the [`nditer`](generated/numpy.nditer#numpy.nditer "numpy.nditer") treats the input operand as a read-only object. To be able to modify the array elements, you must specify either read-write or write-only mode using the `‘readwrite’` or `‘writeonly’` per-operand flags. 
The nditer will then yield writeable buffer arrays which you may modify. However, because the nditer must copy this buffer data back to the original array once iteration is finished, you must signal when the iteration has ended, by one of two methods. You may either: * use the nditer as a context manager using the `with` statement, and the temporary data will be written back when the context is exited. * call the iterator’s `close` method once finished iterating, which will trigger the write-back. The nditer can no longer be iterated once either `close` is called or its context is exited. #### Example ``` >>> a = np.arange(6).reshape(2,3) >>> a array([[0, 1, 2], [3, 4, 5]]) >>> with np.nditer(a, op_flags=['readwrite']) as it: ... for x in it: ... x[...] = 2 * x ... >>> a array([[ 0, 2, 4], [ 6, 8, 10]]) ``` If you are writing code that needs to support older versions of numpy, note that prior to 1.15, [`nditer`](generated/numpy.nditer#numpy.nditer "numpy.nditer") was not a context manager and did not have a `close` method. Instead it relied on the destructor to initiate the writeback of the buffer. ### Using an External Loop In all the examples so far, the elements of `a` are provided by the iterator one at a time, because all the looping logic is internal to the iterator. While this is simple and convenient, it is not very efficient. A better approach is to move the one-dimensional innermost loop into your code, external to the iterator. This way, NumPy’s vectorized operations can be used on larger chunks of the elements being visited. The [`nditer`](generated/numpy.nditer#numpy.nditer "numpy.nditer") will try to provide chunks that are as large as possible to the inner loop. By forcing ‘C’ and ‘F’ order, we get different external loop sizes. This mode is enabled by specifying an iterator flag. 
Observe that with the default of keeping native memory order, the iterator is able to provide a single one-dimensional chunk, whereas when forcing Fortran order, it has to provide three chunks of two elements each. #### Example ``` >>> a = np.arange(6).reshape(2,3) >>> for x in np.nditer(a, flags=['external_loop']): ... print(x, end=' ') ... [0 1 2 3 4 5] ``` ``` >>> for x in np.nditer(a, flags=['external_loop'], order='F'): ... print(x, end=' ') ... [0 3] [1 4] [2 5] ``` ### Tracking an Index or Multi-Index During iteration, you may want to use the index of the current element in a computation. For example, you may want to visit the elements of an array in memory order, but use a C-order, Fortran-order, or multidimensional index to look up values in a different array. The index is tracked by the iterator object itself, and accessible through the `index` or `multi_index` properties, depending on what was requested. The examples below show printouts demonstrating the progression of the index: #### Example ``` >>> a = np.arange(6).reshape(2,3) >>> it = np.nditer(a, flags=['f_index']) >>> for x in it: ... print("%d <%d>" % (x, it.index), end=' ') ... 0 <0> 1 <2> 2 <4> 3 <1> 4 <3> 5 <5> ``` ``` >>> it = np.nditer(a, flags=['multi_index']) >>> for x in it: ... print("%d <%s>" % (x, it.multi_index), end=' ') ... 0 <(0, 0)> 1 <(0, 1)> 2 <(0, 2)> 3 <(1, 0)> 4 <(1, 1)> 5 <(1, 2)> ``` ``` >>> with np.nditer(a, flags=['multi_index'], op_flags=['writeonly']) as it: ... for x in it: ... x[...] = it.multi_index[1] - it.multi_index[0] ... >>> a array([[ 0, 1, 2], [-1, 0, 1]]) ``` Tracking an index or multi-index is incompatible with using an external loop, because it requires a different index value per element. If you try to combine these flags, the [`nditer`](generated/numpy.nditer#numpy.nditer "numpy.nditer") object will raise an exception. 
#### Example ``` >>> a = np.zeros((2,3)) >>> it = np.nditer(a, flags=['c_index', 'external_loop']) Traceback (most recent call last): File "<stdin>", line 1, in <module> ValueError: Iterator flag EXTERNAL_LOOP cannot be used if an index or multi-index is being tracked ``` ### Alternative Looping and Element Access To make its properties more readily accessible during iteration, [`nditer`](generated/numpy.nditer#numpy.nditer "numpy.nditer") has an alternative syntax for iterating, which works explicitly with the iterator object itself. With this looping construct, the current value is accessible by indexing into the iterator. Other properties, such as tracked indices remain as before. The examples below produce identical results to the ones in the previous section. #### Example ``` >>> a = np.arange(6).reshape(2,3) >>> it = np.nditer(a, flags=['f_index']) >>> while not it.finished: ... print("%d <%d>" % (it[0], it.index), end=' ') ... is_not_finished = it.iternext() ... 0 <0> 1 <2> 2 <4> 3 <1> 4 <3> 5 <5> ``` ``` >>> it = np.nditer(a, flags=['multi_index']) >>> while not it.finished: ... print("%d <%s>" % (it[0], it.multi_index), end=' ') ... is_not_finished = it.iternext() ... 0 <(0, 0)> 1 <(0, 1)> 2 <(0, 2)> 3 <(1, 0)> 4 <(1, 1)> 5 <(1, 2)> ``` ``` >>> with np.nditer(a, flags=['multi_index'], op_flags=['writeonly']) as it: ... while not it.finished: ... it[0] = it.multi_index[1] - it.multi_index[0] ... is_not_finished = it.iternext() ... >>> a array([[ 0, 1, 2], [-1, 0, 1]]) ``` ### Buffering the Array Elements When forcing an iteration order, we observed that the external loop option may provide the elements in smaller chunks because the elements can’t be visited in the appropriate order with a constant stride. When writing C code, this is generally fine, however in pure Python code this can cause a significant reduction in performance. 
By enabling buffering mode, the chunks provided by the iterator to the inner loop can be made larger, significantly reducing the overhead of the Python interpreter. In the example forcing Fortran iteration order, the inner loop gets to see all the elements in one go when buffering is enabled. #### Example ``` >>> a = np.arange(6).reshape(2,3) >>> for x in np.nditer(a, flags=['external_loop'], order='F'): ... print(x, end=' ') ... [0 3] [1 4] [2 5] ``` ``` >>> for x in np.nditer(a, flags=['external_loop','buffered'], order='F'): ... print(x, end=' ') ... [0 3 1 4 2 5] ``` ### Iterating as a Specific Data Type There are times when it is necessary to treat an array as a different data type than it is stored as. For instance, one may want to do all computations on 64-bit floats, even if the arrays being manipulated are 32-bit floats. Except when writing low-level C code, it’s generally better to let the iterator handle the copying or buffering instead of casting the data type yourself in the inner loop. There are two mechanisms which allow this to be done, temporary copies and buffering mode. With temporary copies, a copy of the entire array is made with the new data type, then iteration is done in the copy. Write access is permitted through a mode which updates the original array after all the iteration is complete. The major drawback of temporary copies is that the temporary copy may consume a large amount of memory, particularly if the iteration data type has a larger itemsize than the original one. Buffering mode mitigates the memory usage issue and is more cache-friendly than making temporary copies. Except for special cases, where the whole array is needed at once outside the iterator, buffering is recommended over temporary copying. Within NumPy, buffering is used by the ufuncs and other functions to support flexible inputs with minimal memory overhead. 
In our examples, we will treat the input array with a complex data type, so that we can take square roots of negative numbers. Without enabling copies or buffering mode, the iterator will raise an exception if the data type doesn’t match precisely. #### Example ``` >>> a = np.arange(6).reshape(2,3) - 3 >>> for x in np.nditer(a, op_dtypes=['complex128']): ... print(np.sqrt(x), end=' ') ... Traceback (most recent call last): File "<stdin>", line 1, in <module> TypeError: Iterator operand required copying or buffering, but neither copying nor buffering was enabled ``` In copying mode, ‘copy’ is specified as a per-operand flag. This is done to provide control in a per-operand fashion. Buffering mode is specified as an iterator flag. #### Example ``` >>> a = np.arange(6).reshape(2,3) - 3 >>> for x in np.nditer(a, op_flags=['readonly','copy'], ... op_dtypes=['complex128']): ... print(np.sqrt(x), end=' ') ... 1.7320508075688772j 1.4142135623730951j 1j 0j (1+0j) (1.4142135623730951+0j) ``` ``` >>> for x in np.nditer(a, flags=['buffered'], op_dtypes=['complex128']): ... print(np.sqrt(x), end=' ') ... 1.7320508075688772j 1.4142135623730951j 1j 0j (1+0j) (1.4142135623730951+0j) ``` The iterator uses NumPy’s casting rules to determine whether a specific conversion is permitted. By default, it enforces ‘safe’ casting. This means, for example, that it will raise an exception if you try to treat a 64-bit float array as a 32-bit float array. In many cases, the rule ‘same_kind’ is the most reasonable rule to use, since it will allow conversion from 64 to 32-bit float, but not from float to int or from complex to float. #### Example ``` >>> a = np.arange(6.) >>> for x in np.nditer(a, flags=['buffered'], op_dtypes=['float32']): ... print(x, end=' ') ... 
Traceback (most recent call last): File "<stdin>", line 1, in <module> TypeError: Iterator operand 0 dtype could not be cast from dtype('float64') to dtype('float32') according to the rule 'safe' ``` ``` >>> for x in np.nditer(a, flags=['buffered'], op_dtypes=['float32'], ... casting='same_kind'): ... print(x, end=' ') ... 0.0 1.0 2.0 3.0 4.0 5.0 ``` ``` >>> for x in np.nditer(a, flags=['buffered'], op_dtypes=['int32'], casting='same_kind'): ... print(x, end=' ') ... Traceback (most recent call last): File "<stdin>", line 1, in <module> TypeError: Iterator operand 0 dtype could not be cast from dtype('float64') to dtype('int32') according to the rule 'same_kind' ``` One thing to watch out for is conversions back to the original data type when using a read-write or write-only operand. A common case is to implement the inner loop in terms of 64-bit floats, and use ‘same_kind’ casting to allow the other floating-point types to be processed as well. While in read-only mode, an integer array could be provided, read-write mode will raise an exception because conversion back to the array would violate the casting rule. #### Example ``` >>> a = np.arange(6) >>> for x in np.nditer(a, flags=['buffered'], op_flags=['readwrite'], ... op_dtypes=['float64'], casting='same_kind'): ... x[...] = x / 2.0 ... Traceback (most recent call last): File "<stdin>", line 2, in <module> TypeError: Iterator requested dtype could not be cast from dtype('float64') to dtype('int64'), the operand 0 dtype, according to the rule 'same_kind' ``` Broadcasting Array Iteration ---------------------------- NumPy has a set of rules for dealing with arrays that have differing shapes which are applied whenever functions take multiple operands which combine element-wise. This is called [broadcasting](../user/basics.ufuncs#ufuncs-broadcasting). The [`nditer`](generated/numpy.nditer#numpy.nditer "numpy.nditer") object can apply these rules for you when you need to write such a function. 
As an example, we print out the result of broadcasting a one and a two dimensional array together. #### Example ``` >>> a = np.arange(3) >>> b = np.arange(6).reshape(2,3) >>> for x, y in np.nditer([a,b]): ... print("%d:%d" % (x,y), end=' ') ... 0:0 1:1 2:2 0:3 1:4 2:5 ``` When a broadcasting error occurs, the iterator raises an exception which includes the input shapes to help diagnose the problem. #### Example ``` >>> a = np.arange(2) >>> b = np.arange(6).reshape(2,3) >>> for x, y in np.nditer([a,b]): ... print("%d:%d" % (x,y), end=' ') ... Traceback (most recent call last): ... ValueError: operands could not be broadcast together with shapes (2,) (2,3) ``` ### Iterator-Allocated Output Arrays A common case in NumPy functions is to have outputs allocated based on the broadcasting of the input, and additionally have an optional parameter called ‘out’ where the result will be placed when it is provided. The [`nditer`](generated/numpy.nditer#numpy.nditer "numpy.nditer") object provides a convenient idiom that makes it very easy to support this mechanism. We’ll show how this works by creating a function [`square`](generated/numpy.square#numpy.square "numpy.square") which squares its input. Let’s start with a minimal function definition excluding ‘out’ parameter support. #### Example ``` >>> def square(a): ... with np.nditer([a, None]) as it: ... for x, y in it: ... y[...] = x*x ... return it.operands[1] ... >>> square([1,2,3]) array([1, 4, 9]) ``` By default, the [`nditer`](generated/numpy.nditer#numpy.nditer "numpy.nditer") uses the flags ‘allocate’ and ‘writeonly’ for operands that are passed in as None. This means we were able to provide just the two operands to the iterator, and it handled the rest. When adding the ‘out’ parameter, we have to explicitly provide those flags, because if someone passes in an array as ‘out’, the iterator will default to ‘readonly’, and our inner loop would fail. 
The reason ‘readonly’ is the default for input arrays is to prevent confusion about unintentionally triggering a reduction operation. If the default were ‘readwrite’, any broadcasting operation would also trigger a reduction, a topic which is covered later in this document. While we’re at it, let’s also introduce the ‘no_broadcast’ flag, which will prevent the output from being broadcast. This is important, because we only want one input value for each output. Aggregating more than one input value is a reduction operation which requires special handling. It would already raise an error because reductions must be explicitly enabled in an iterator flag, but the error message that results from disabling broadcasting is much more understandable for end-users. To see how to generalize the square function to a reduction, look at the sum of squares function in the section about Cython. For completeness, we’ll also add the ‘external_loop’ and ‘buffered’ flags, as these are what you will typically want for performance reasons. #### Example ``` >>> def square(a, out=None): ... it = np.nditer([a, out], ... flags = ['external_loop', 'buffered'], ... op_flags = [['readonly'], ... ['writeonly', 'allocate', 'no_broadcast']]) ... with it: ... for x, y in it: ... y[...] = x*x ... return it.operands[1] ... ``` ``` >>> square([1,2,3]) array([1, 4, 9]) ``` ``` >>> b = np.zeros((3,)) >>> square([1,2,3], out=b) array([1., 4., 9.]) >>> b array([1., 4., 9.]) ``` ``` >>> square(np.arange(6).reshape(2,3), out=b) Traceback (most recent call last): ... ValueError: non-broadcastable output operand with shape (3,) doesn't match the broadcast shape (2,3) ``` ### Outer Product Iteration Any binary operation can be extended to an array operation in an outer product fashion like in [`outer`](generated/numpy.outer#numpy.outer "numpy.outer"), and the [`nditer`](generated/numpy.nditer#numpy.nditer "numpy.nditer") object provides a way to accomplish this by explicitly mapping the axes of the operands. 
It is also possible to do this with [`newaxis`](constants#numpy.newaxis "numpy.newaxis") indexing, but we will show you how to directly use the nditer `op_axes` parameter to accomplish this with no intermediate views. We’ll do a simple outer product, placing the dimensions of the first operand before the dimensions of the second operand. The `op_axes` parameter needs one list of axes for each operand, and provides a mapping from the iterator’s axes to the axes of the operand. Suppose the first operand is one dimensional and the second operand is two dimensional. The iterator will have three dimensions, so `op_axes` will have two 3-element lists. The first list picks out the one axis of the first operand, and is -1 for the rest of the iterator axes, with a final result of [0, -1, -1]. The second list picks out the two axes of the second operand, but shouldn’t overlap with the axes picked out in the first operand. Its list is [-1, 0, 1]. The output operand maps onto the iterator axes in the standard manner, so we can provide None instead of constructing another list. The operation in the inner loop is a straightforward multiplication. Everything to do with the outer product is handled by the iterator setup. #### Example ``` >>> a = np.arange(3) >>> b = np.arange(8).reshape(2,4) >>> it = np.nditer([a, b, None], flags=['external_loop'], ... op_axes=[[0, -1, -1], [-1, 0, 1], None]) >>> with it: ... for x, y, z in it: ... z[...] = x*y ... result = it.operands[2] # same as z ... >>> result array([[[ 0, 0, 0, 0], [ 0, 0, 0, 0]], [[ 0, 1, 2, 3], [ 4, 5, 6, 7]], [[ 0, 2, 4, 6], [ 8, 10, 12, 14]]]) ``` Note that once the iterator is closed we can not access [`operands`](generated/numpy.nditer.operands#numpy.nditer.operands "numpy.nditer.operands") and must use a reference created inside the context manager. ### Reduction Iteration Whenever a writeable operand has fewer elements than the full iteration space, that operand is undergoing a reduction. 
The [`nditer`](generated/numpy.nditer#numpy.nditer "numpy.nditer") object requires that any reduction operand be flagged as read-write, and only allows reductions when ‘reduce_ok’ is provided as an iterator flag. For a simple example, consider taking the sum of all elements in an array. #### Example ``` >>> a = np.arange(24).reshape(2,3,4) >>> b = np.array(0) >>> with np.nditer([a, b], flags=['reduce_ok'], ... op_flags=[['readonly'], ['readwrite']]) as it: ... for x,y in it: ... y[...] += x ... >>> b array(276) >>> np.sum(a) 276 ``` Things are a little bit more tricky when combining reduction and allocated operands. Before iteration is started, any reduction operand must be initialized to its starting values. Here’s how we can do this, taking sums along the last axis of `a`. #### Example ``` >>> a = np.arange(24).reshape(2,3,4) >>> it = np.nditer([a, None], flags=['reduce_ok'], ... op_flags=[['readonly'], ['readwrite', 'allocate']], ... op_axes=[None, [0,1,-1]]) >>> with it: ... it.operands[1][...] = 0 ... for x, y in it: ... y[...] += x ... result = it.operands[1] ... >>> result array([[ 6, 22, 38], [54, 70, 86]]) >>> np.sum(a, axis=2) array([[ 6, 22, 38], [54, 70, 86]]) ``` To do buffered reduction requires yet another adjustment during the setup. Normally the iterator construction involves copying the first buffer of data from the readable arrays into the buffer. Any reduction operand is readable, so it may be read into a buffer. Unfortunately, initialization of the operand after this buffering operation is complete will not be reflected in the buffer that the iteration starts with, and garbage results will be produced. The iterator flag “delay_bufalloc” is there to allow iterator-allocated reduction operands to exist together with buffering. When this flag is set, the iterator will leave its buffers uninitialized until it receives a reset, after which it will be ready for regular iteration. Here’s how the previous example looks if we also enable buffering. 
#### Example ``` >>> a = np.arange(24).reshape(2,3,4) >>> it = np.nditer([a, None], flags=['reduce_ok', ... 'buffered', 'delay_bufalloc'], ... op_flags=[['readonly'], ['readwrite', 'allocate']], ... op_axes=[None, [0,1,-1]]) >>> with it: ... it.operands[1][...] = 0 ... it.reset() ... for x, y in it: ... y[...] += x ... result = it.operands[1] ... >>> result array([[ 6, 22, 38], [54, 70, 86]]) ``` Putting the Inner Loop in Cython -------------------------------- Those who want really good performance out of their low level operations should strongly consider directly using the iteration API provided in C, but for those who are not comfortable with C or C++, Cython is a good middle ground with reasonable performance tradeoffs. For the [`nditer`](generated/numpy.nditer#numpy.nditer "numpy.nditer") object, this means letting the iterator take care of broadcasting, dtype conversion, and buffering, while giving the inner loop to Cython. For our example, we’ll create a sum of squares function. To start, let’s implement this function in straightforward Python. We want to support an ‘axis’ parameter similar to the numpy [`sum`](generated/numpy.sum#numpy.sum "numpy.sum") function, so we will need to construct a list for the `op_axes` parameter. Here’s how this looks. #### Example ``` >>> def axis_to_axeslist(axis, ndim): ... if axis is None: ... return [-1] * ndim ... else: ... if type(axis) is not tuple: ... axis = (axis,) ... axeslist = [1] * ndim ... for i in axis: ... axeslist[i] = -1 ... ax = 0 ... for i in range(ndim): ... if axeslist[i] != -1: ... axeslist[i] = ax ... ax += 1 ... return axeslist ... >>> def sum_squares_py(arr, axis=None, out=None): ... axeslist = axis_to_axeslist(axis, arr.ndim) ... it = np.nditer([arr, out], flags=['reduce_ok', ... 'buffered', 'delay_bufalloc'], ... op_flags=[['readonly'], ['readwrite', 'allocate']], ... op_axes=[None, axeslist], ... op_dtypes=['float64', 'float64']) ... with it: ... it.operands[1][...] = 0 ... it.reset() ... 
for x, y in it:
...             y[...] += x*x
...         return it.operands[1]
...
>>> a = np.arange(6).reshape(2,3)
>>> sum_squares_py(a)
array(55.)
>>> sum_squares_py(a, axis=-1)
array([ 5., 50.])
```

To Cython-ize this function, we replace the inner loop (`y[...] += x*x`) with Cython code that's specialized for the float64 dtype. With the 'external_loop' flag enabled, the arrays provided to the inner loop will always be one-dimensional, so very little checking needs to be done.

Here's the listing of sum_squares.pyx:

```
import numpy as np
cimport numpy as np
cimport cython

def axis_to_axeslist(axis, ndim):
    if axis is None:
        return [-1] * ndim
    else:
        if type(axis) is not tuple:
            axis = (axis,)
        axeslist = [1] * ndim
        for i in axis:
            axeslist[i] = -1
        ax = 0
        for i in range(ndim):
            if axeslist[i] != -1:
                axeslist[i] = ax
                ax += 1
        return axeslist

@cython.boundscheck(False)
def sum_squares_cy(arr, axis=None, out=None):
    cdef np.ndarray[double] x
    cdef np.ndarray[double] y
    cdef int size
    cdef double value

    axeslist = axis_to_axeslist(axis, arr.ndim)
    it = np.nditer([arr, out], flags=['reduce_ok', 'external_loop',
                                      'buffered', 'delay_bufalloc'],
                op_flags=[['readonly'], ['readwrite', 'allocate']],
                op_axes=[None, axeslist],
                op_dtypes=['float64', 'float64'])
    with it:
        it.operands[1][...] = 0
        it.reset()
        for xarr, yarr in it:
            x = xarr
            y = yarr
            size = x.shape[0]
            for i in range(size):
                value = x[i]
                y[i] = y[i] + value * value
        return it.operands[1]
```

On this machine, building the .pyx file into a module looked like the following, but you may have to find some Cython tutorials to tell you the specifics for your system configuration:

```
$ cython sum_squares.pyx
$ gcc -shared -pthread -fPIC -fwrapv -O2 -Wall -I/usr/include/python2.7 -fno-strict-aliasing -o sum_squares.so sum_squares.c
```

Running this from the Python interpreter produces the same answers as our native Python/NumPy code did.
#### Example

```
>>> from sum_squares import sum_squares_cy
>>> a = np.arange(6).reshape(2,3)
>>> sum_squares_cy(a)
array(55.0)
>>> sum_squares_cy(a, axis=-1)
array([ 5., 50.])
```

Doing a little timing in IPython shows that the reduced overhead and memory allocation of the Cython inner loop is providing a very nice speedup over both the straightforward Python code and an expression using NumPy's built-in sum function:

```
>>> a = np.random.rand(1000,1000)
>>> timeit sum_squares_py(a, axis=-1)
10 loops, best of 3: 37.1 ms per loop
>>> timeit np.sum(a*a, axis=-1)
10 loops, best of 3: 20.9 ms per loop
>>> timeit sum_squares_cy(a, axis=-1)
100 loops, best of 3: 11.8 ms per loop
>>> np.all(sum_squares_cy(a, axis=-1) == np.sum(a*a, axis=-1))
True
>>> np.all(sum_squares_py(a, axis=-1) == np.sum(a*a, axis=-1))
True
```

© 2005–2022 NumPy Developers
Licensed under the 3-clause BSD License.
<https://numpy.org/doc/1.23/reference/arrays.nditer.html>

Standard array subclasses
=========================

Note

Subclassing a `numpy.ndarray` is possible but if your goal is to create an array with *modified* behavior, as do dask arrays for distributed computation and cupy arrays for GPU-based computation, subclassing is discouraged. Instead, using numpy's [dispatch mechanism](../user/basics.dispatch#basics-dispatch) is recommended.

The [`ndarray`](generated/numpy.ndarray#numpy.ndarray "numpy.ndarray") can be inherited from (in Python or in C) if desired. Therefore, it can form a foundation for many useful classes. Often whether to sub-class the array object or to simply use the core array component as an internal part of a new class is a difficult decision, and can be simply a matter of choice. NumPy has several tools for simplifying how your new object interacts with other array objects, and so the choice may not be significant in the end.
One way to simplify the question is by asking yourself if the object you are interested in can be replaced as a single array or does it really require two or more arrays at its core. Note that [`asarray`](generated/numpy.asarray#numpy.asarray "numpy.asarray") always returns the base-class ndarray. If you are confident that your use of the array object can handle any subclass of an ndarray, then [`asanyarray`](generated/numpy.asanyarray#numpy.asanyarray "numpy.asanyarray") can be used to allow subclasses to propagate more cleanly through your subroutine. In principle a subclass could redefine any aspect of the array and therefore, under strict guidelines, [`asanyarray`](generated/numpy.asanyarray#numpy.asanyarray "numpy.asanyarray") would rarely be useful. However, most subclasses of the array object will not redefine certain aspects of the array object such as the buffer interface, or the attributes of the array. One important example, however, of why your subroutine may not be able to handle an arbitrary subclass of an array is that matrices redefine the "*" operator to be matrix-multiplication, rather than element-by-element multiplication.
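As a quick illustration of the difference, here is a small sketch using the `matrix` subclass discussed later in this document:

```python
import numpy as np

m = np.matrix([[1, 2], [3, 4]])   # matrix is an ndarray subclass

plain = np.asarray(m)    # always drops to the base-class ndarray
sub = np.asanyarray(m)   # lets the subclass pass through unchanged

print(type(plain))   # <class 'numpy.ndarray'>
print(type(sub))     # <class 'numpy.matrix'>
```

`plain * plain` now multiplies element-by-element, while `sub * sub` still performs matrix multiplication, which is exactly the hazard described above.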
If given, any `out` arguments, both positional and keyword, are passed as a [`tuple`](https://docs.python.org/3/library/stdtypes.html#tuple "(in Python v3.10)") in *kwargs*. See the discussion in [Universal functions (ufunc)](ufuncs#ufuncs) for details. The method should return either the result of the operation, or [`NotImplemented`](https://docs.python.org/3/library/constants.html#NotImplemented "(in Python v3.10)") if the operation requested is not implemented. If one of the input or output arguments has a [`__array_ufunc__`](#numpy.class.__array_ufunc__ "numpy.class.__array_ufunc__") method, it is executed *instead* of the ufunc. If more than one of the arguments implements [`__array_ufunc__`](#numpy.class.__array_ufunc__ "numpy.class.__array_ufunc__"), they are tried in the order: subclasses before superclasses, inputs before outputs, otherwise left to right. The first routine returning something other than [`NotImplemented`](https://docs.python.org/3/library/constants.html#NotImplemented "(in Python v3.10)") determines the result. If all of the [`__array_ufunc__`](#numpy.class.__array_ufunc__ "numpy.class.__array_ufunc__") operations return [`NotImplemented`](https://docs.python.org/3/library/constants.html#NotImplemented "(in Python v3.10)"), a [`TypeError`](https://docs.python.org/3/library/exceptions.html#TypeError "(in Python v3.10)") is raised.

Note

We intend to re-implement numpy functions as (generalized) Ufunc, in which case it will become possible for them to be overridden by the `__array_ufunc__` method. A prime candidate is [`matmul`](generated/numpy.matmul#numpy.matmul "numpy.matmul"), which currently is not a Ufunc, but could relatively easily be rewritten as a (set of) generalized Ufuncs. The same may happen with functions such as [`median`](generated/numpy.median#numpy.median "numpy.median"), [`amin`](generated/numpy.amin#numpy.amin "numpy.amin"), and [`argsort`](generated/numpy.argsort#numpy.argsort "numpy.argsort").
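This dispatch can be observed directly with a minimal sketch (the `Recorder` class and its return value are made up for illustration):

```python
import numpy as np

class Recorder:
    # hypothetical class: not an ndarray subclass, yet its
    # __array_ufunc__ is executed instead of every ufunc in
    # which an instance appears as an argument
    def __array_ufunc__(self, ufunc, method, *inputs, **kwargs):
        return (ufunc.__name__, method)

r = Recorder()
print(np.add(np.arange(3), r))   # ('add', '__call__')
print(np.arange(3) + r)          # arr + obj also routes through np.add
```

Both calls return `('add', '__call__')`: the second line shows `ndarray.__add__` delegating to the ufunc machinery, which then hands control to `r.__array_ufunc__`.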
Like with some other special methods in python, such as `__hash__` and `__iter__`, it is possible to indicate that your class does *not* support ufuncs by setting `__array_ufunc__ = None`. Ufuncs always raise [`TypeError`](https://docs.python.org/3/library/exceptions.html#TypeError "(in Python v3.10)") when called on an object that sets `__array_ufunc__ = None`. The presence of [`__array_ufunc__`](#numpy.class.__array_ufunc__ "numpy.class.__array_ufunc__") also influences how [`ndarray`](generated/numpy.ndarray#numpy.ndarray "numpy.ndarray") handles binary operations like `arr + obj` and `arr < obj` when `arr` is an [`ndarray`](generated/numpy.ndarray#numpy.ndarray "numpy.ndarray") and `obj` is an instance of a custom class. There are two possibilities. If `obj.__array_ufunc__` is present and not None, then `ndarray.__add__` and friends will delegate to the ufunc machinery, meaning that `arr + obj` becomes `np.add(arr, obj)`, and then [`add`](generated/numpy.add#numpy.add "numpy.add") invokes `obj.__array_ufunc__`. This is useful if you want to define an object that acts like an array. Alternatively, if `obj.__array_ufunc__` is set to None, then as a special case, special methods like `ndarray.__add__` will notice this and *unconditionally* raise [`TypeError`](https://docs.python.org/3/library/exceptions.html#TypeError "(in Python v3.10)"). This is useful if you want to create objects that interact with arrays via binary operations, but are not themselves arrays. For example, a units handling system might have an object `m` representing the “meters” unit, and want to support the syntax `arr * m` to represent that the array has units of “meters”, but not want to otherwise interact with arrays via ufuncs or otherwise. This can be done by setting `__array_ufunc__ = None` and defining `__mul__` and `__rmul__` methods. 
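A sketch of such a units marker follows; the `Unit` class and its `(array, name)` return convention are invented here purely for illustration:

```python
import numpy as np

class Unit:
    __array_ufunc__ = None   # opt out of ufuncs entirely

    def __init__(self, name):
        self.name = name

    def __mul__(self, other):
        # tag the values with the unit name; the tuple is just a stand-in
        # for a real quantity type
        return (np.asarray(other), self.name)

    __rmul__ = __mul__

m = Unit('meters')
arr = np.arange(3)

# ndarray.__mul__ sees __array_ufunc__ is None, returns NotImplemented,
# and Python falls back to Unit.__rmul__:
values, unit = arr * m
print(unit)   # meters

# but calling the ufunc directly is an unconditional TypeError:
try:
    np.multiply(arr, m)
except TypeError:
    print('ufunc raised TypeError')
```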
(Note that this means that writing an `__array_ufunc__` that always returns [`NotImplemented`](https://docs.python.org/3/library/constants.html#NotImplemented "(in Python v3.10)") is not quite the same as setting `__array_ufunc__ = None`: in the former case, `arr + obj` will raise [`TypeError`](https://docs.python.org/3/library/exceptions.html#TypeError "(in Python v3.10)"), while in the latter case it is possible to define a `__radd__` method to prevent this.) The above does not hold for in-place operators, for which [`ndarray`](generated/numpy.ndarray#numpy.ndarray "numpy.ndarray") never returns [`NotImplemented`](https://docs.python.org/3/library/constants.html#NotImplemented "(in Python v3.10)"). Hence, `arr += obj` would always lead to a [`TypeError`](https://docs.python.org/3/library/exceptions.html#TypeError "(in Python v3.10)"). This is because for arrays in-place operations cannot generically be replaced by a simple reverse operation. (For instance, by default, `arr += obj` would be translated to `arr = arr + obj`, i.e., `arr` would be replaced, contrary to what is expected for in-place array operations.) Note If you define `__array_ufunc__`: * If you are not a subclass of [`ndarray`](generated/numpy.ndarray#numpy.ndarray "numpy.ndarray"), we recommend your class define special methods like `__add__` and `__lt__` that delegate to ufuncs just like ndarray does. An easy way to do this is to subclass from [`NDArrayOperatorsMixin`](generated/numpy.lib.mixins.ndarrayoperatorsmixin#numpy.lib.mixins.NDArrayOperatorsMixin "numpy.lib.mixins.NDArrayOperatorsMixin"). * If you subclass [`ndarray`](generated/numpy.ndarray#numpy.ndarray "numpy.ndarray"), we recommend that you put all your override logic in `__array_ufunc__` and not also override special methods. 
This ensures the class hierarchy is determined in only one place rather than separately by the ufunc machinery and by the binary operation rules (which gives preference to special methods of subclasses; the alternative way to enforce a one-place only hierarchy, of setting [`__array_ufunc__`](#numpy.class.__array_ufunc__ "numpy.class.__array_ufunc__") to None, would seem very unexpected and thus confusing, as then the subclass would not work at all with ufuncs).
* [`ndarray`](generated/numpy.ndarray#numpy.ndarray "numpy.ndarray") defines its own [`__array_ufunc__`](#numpy.class.__array_ufunc__ "numpy.class.__array_ufunc__"), which evaluates the ufunc if no arguments have overrides, and returns [`NotImplemented`](https://docs.python.org/3/library/constants.html#NotImplemented "(in Python v3.10)") otherwise. This may be useful for subclasses for which [`__array_ufunc__`](#numpy.class.__array_ufunc__ "numpy.class.__array_ufunc__") converts any instances of its own class to [`ndarray`](generated/numpy.ndarray#numpy.ndarray "numpy.ndarray"): it can then pass these on to its superclass using `super().__array_ufunc__(*inputs, **kwargs)`, and finally return the results after possible back-conversion. The advantage of this practice is that it ensures that it is possible to have a hierarchy of subclasses that extend the behaviour. See [Subclassing ndarray](../user/basics.subclassing#basics-subclassing) for details.

Note

If a class defines the [`__array_ufunc__`](#numpy.class.__array_ufunc__ "numpy.class.__array_ufunc__") method, this disables the [`__array_wrap__`](#numpy.class.__array_wrap__ "numpy.class.__array_wrap__"), [`__array_prepare__`](#numpy.class.__array_prepare__ "numpy.class.__array_prepare__"), [`__array_priority__`](#numpy.class.__array_priority__ "numpy.class.__array_priority__") mechanism described below for ufuncs (which may eventually be deprecated).

class.__array_function__(*func*, *types*, *args*, *kwargs*)

New in version 1.16.
Note

* In NumPy 1.17, the protocol is enabled by default, but can be disabled with `NUMPY_EXPERIMENTAL_ARRAY_FUNCTION=0`.
* In NumPy 1.16, you need to set the environment variable `NUMPY_EXPERIMENTAL_ARRAY_FUNCTION=1` before importing NumPy to use NumPy function overrides.
* Eventually, expect `__array_function__` to always be enabled.

* `func` is an arbitrary callable exposed by NumPy's public API, which was called in the form `func(*args, **kwargs)`.
* `types` is a collection [`collections.abc.Collection`](https://docs.python.org/3/library/collections.abc.html#collections.abc.Collection "(in Python v3.10)") of unique argument types from the original NumPy function call that implement `__array_function__`.
* The tuple `args` and dict `kwargs` are directly passed on from the original call.

As a convenience for `__array_function__` implementors, `types` provides all argument types with an `'__array_function__'` attribute. This allows implementors to quickly identify cases where they should defer to `__array_function__` implementations on other arguments. Implementations should not rely on the iteration order of `types`.

Most implementations of `__array_function__` will start with two checks:

1. Is the given function something that we know how to overload?
2. Are all arguments of a type that we know how to handle?

If these conditions hold, `__array_function__` should return the result from calling its implementation for `func(*args, **kwargs)`. Otherwise, it should return the sentinel value `NotImplemented`, indicating that the function is not implemented by these types.

There are no general requirements on the return value from `__array_function__`, although most sensible implementations should probably return array(s) with the same type as one of the function's arguments. It may also be convenient to define a custom decorator (`implements` below) for registering `__array_function__` implementations.
``` HANDLED_FUNCTIONS = {} class MyArray: def __array_function__(self, func, types, args, kwargs): if func not in HANDLED_FUNCTIONS: return NotImplemented # Note: this allows subclasses that don't override # __array_function__ to handle MyArray objects if not all(issubclass(t, MyArray) for t in types): return NotImplemented return HANDLED_FUNCTIONS[func](*args, **kwargs) def implements(numpy_function): """Register an __array_function__ implementation for MyArray objects.""" def decorator(func): HANDLED_FUNCTIONS[numpy_function] = func return func return decorator @implements(np.concatenate) def concatenate(arrays, axis=0, out=None): ... # implementation of concatenate for MyArray objects @implements(np.broadcast_to) def broadcast_to(array, shape): ... # implementation of broadcast_to for MyArray objects ``` Note that it is not required for `__array_function__` implementations to include *all* of the corresponding NumPy function’s optional arguments (e.g., `broadcast_to` above omits the irrelevant `subok` argument). Optional arguments are only passed in to `__array_function__` if they were explicitly used in the NumPy function call. Just like the case for builtin special methods like `__add__`, properly written `__array_function__` methods should always return `NotImplemented` when an unknown type is encountered. Otherwise, it will be impossible to correctly override NumPy functions from another object if the operation also includes one of your objects. For the most part, the rules for dispatch with `__array_function__` match those for `__array_ufunc__`. In particular: * NumPy will gather implementations of `__array_function__` from all specified inputs and call them in order: subclasses before superclasses, and otherwise left to right. Note that in some edge cases involving subclasses, this differs slightly from the [current behavior](https://bugs.python.org/issue30140) of Python. 
* Implementations of `__array_function__` indicate that they can handle the operation by returning any value other than `NotImplemented`.
* If all `__array_function__` methods return `NotImplemented`, NumPy will raise `TypeError`.

If no `__array_function__` methods exist, NumPy will default to calling its own implementation, intended for use on NumPy arrays. This case arises, for example, when all array-like arguments are Python numbers or lists. (NumPy arrays do have a `__array_function__` method, given below, but it always returns `NotImplemented` if any argument other than a NumPy array subclass implements `__array_function__`.)

One deviation from the current behavior of `__array_ufunc__` is that NumPy will only call `__array_function__` on the *first* argument of each unique type. This matches Python's [rule for calling reflected methods](https://docs.python.org/3/reference/datamodel.html#object.__ror__), and this ensures that checking overloads has acceptable performance even when there are a large number of overloaded arguments.

class.__array_finalize__(*obj*)

This method is called whenever the system internally allocates a new array from *obj*, where *obj* is a subclass (subtype) of the [`ndarray`](generated/numpy.ndarray#numpy.ndarray "numpy.ndarray"). It can be used to change attributes of *self* after construction (so as to ensure a 2-d matrix for example), or to update meta-information from the "parent." Subclasses inherit a default implementation of this method that does nothing.

class.__array_prepare__(*array*, *context=None*)

At the beginning of every [ufunc](../user/basics.ufuncs#ufuncs-output-type), this method is called on the input object with the highest array priority, or the output object if one was specified. The output array is passed in and whatever is returned is passed to the ufunc. Subclasses inherit a default implementation of this method which simply returns the output array unmodified.
Subclasses may opt to use this method to transform the output array into an instance of the subclass and update metadata before returning the array to the ufunc for computation.

Note

For ufuncs, it is hoped to eventually deprecate this method in favour of [`__array_ufunc__`](#numpy.class.__array_ufunc__ "numpy.class.__array_ufunc__").

class.__array_wrap__(*array*, *context=None*)

At the end of every [ufunc](../user/basics.ufuncs#ufuncs-output-type), this method is called on the input object with the highest array priority, or the output object if one was specified. The ufunc-computed array is passed in and whatever is returned is passed to the user. Subclasses inherit a default implementation of this method, which transforms the array into a new instance of the object's class. Subclasses may opt to use this method to transform the output array into an instance of the subclass and update metadata before returning the array to the user.

Note

For ufuncs, it is hoped to eventually deprecate this method in favour of [`__array_ufunc__`](#numpy.class.__array_ufunc__ "numpy.class.__array_ufunc__").

class.__array_priority__

The value of this attribute is used to determine what type of object to return in situations where there is more than one possibility for the Python type of the returned object. Subclasses inherit a default value of 0.0 for this attribute.

Note

For ufuncs, it is hoped to eventually deprecate this attribute in favour of [`__array_ufunc__`](#numpy.class.__array_ufunc__ "numpy.class.__array_ufunc__").

class.__array__([*dtype*])

If a class (ndarray subclass or not) having the [`__array__`](#numpy.class.__array__ "numpy.class.__array__") method is used as the output object of a [ufunc](../user/basics.ufuncs#ufuncs-output-type), results will *not* be written to the object returned by [`__array__`](#numpy.class.__array__ "numpy.class.__array__"). This practice will raise a `TypeError`.
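The interplay of `__new__` and `__array_finalize__` described above can be seen in a minimal subclass that carries a metadata attribute through view casting, slicing, and ufunc results. This is a sketch in the spirit of the subclassing docs; the `MetaArray` name and `info` attribute are illustrative, not NumPy API:

```python
import numpy as np

class MetaArray(np.ndarray):
    """ndarray subclass carrying a bit of metadata through operations."""

    def __new__(cls, input_array, info=None):
        # view-cast the input to our subclass, then attach the metadata
        obj = np.asarray(input_array).view(cls)
        obj.info = info
        return obj

    def __array_finalize__(self, obj):
        # called for explicit construction, view casting, and
        # new-from-template (slicing, ufunc outputs); copy metadata
        # from the "parent" object when there is one
        if obj is None:
            return
        self.info = getattr(obj, "info", None)

a = MetaArray([1, 2, 3], info="metres")
b = a * 2        # ufunc output is re-wrapped as MetaArray, info preserved
v = a[1:]        # slicing is "new-from-template", info preserved too
```

Because the default `__array_wrap__` re-wraps ufunc outputs as the subclass, `b` above remains a `MetaArray` without any ufunc-specific code in the subclass.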
Matrix objects
--------------

Note

It is strongly advised *not* to use the matrix subclass. As described below, it makes writing functions that deal consistently with matrices and regular arrays very difficult. Currently, they are mainly used for interacting with `scipy.sparse`. We hope to provide an alternative for this use, however, and eventually remove the `matrix` subclass.

[`matrix`](generated/numpy.matrix#numpy.matrix "numpy.matrix") objects inherit from the ndarray and therefore, they have the same attributes and methods as ndarrays. There are six important differences of matrix objects, however, that may lead to unexpected results when you use matrices but expect them to act like arrays:

1. Matrix objects can be created using a string notation to allow Matlab-style syntax where spaces separate columns and semicolons (';') separate rows.
2. Matrix objects are always two-dimensional. This has far-reaching implications, in that m.ravel() is still two-dimensional (with a 1 in the first dimension) and item selection returns two-dimensional objects so that sequence behavior is fundamentally different from arrays.
3. Matrix objects over-ride multiplication to be matrix-multiplication. **Make sure you understand this for functions that you may want to receive matrices. Especially in light of the fact that asanyarray(m) returns a matrix when m is a matrix.**
4. Matrix objects over-ride power to be matrix raised to a power. The same warning about using power inside a function that uses asanyarray(...) to get an array object holds for this fact.
5. The default __array_priority__ of matrix objects is 10.0, and therefore mixed operations with ndarrays always produce matrices.
6. Matrices have special attributes which make calculations easier. These are

| | |
| --- | --- |
| [`matrix.T`](generated/numpy.matrix.t#numpy.matrix.T "numpy.matrix.T") | Returns the transpose of the matrix. |
| [`matrix.H`](generated/numpy.matrix.h#numpy.matrix.H "numpy.matrix.H") | Returns the (complex) conjugate transpose of `self`. |
| [`matrix.I`](generated/numpy.matrix.i#numpy.matrix.I "numpy.matrix.I") | Returns the (multiplicative) inverse of invertible `self`. |
| [`matrix.A`](generated/numpy.matrix.a#numpy.matrix.A "numpy.matrix.A") | Return `self` as an [`ndarray`](generated/numpy.ndarray#numpy.ndarray "numpy.ndarray") object. |

Warning

Matrix objects over-ride multiplication, '*', and power, '**', to be matrix-multiplication and matrix power, respectively. If your subroutine can accept sub-classes and you do not convert to base-class arrays, then you must use the ufuncs multiply and power to be sure that you are performing the correct operation for all inputs.

The matrix class is a Python subclass of the ndarray and can be used as a reference for how to construct your own subclass of the ndarray. Matrices can be created from other matrices, strings, and anything else that can be converted to an `ndarray`. The name "mat" is an alias for "matrix" in NumPy.

| | |
| --- | --- |
| [`matrix`](generated/numpy.matrix#numpy.matrix "numpy.matrix")(data[, dtype, copy]) | Note: It is no longer recommended to use this class, even for linear algebra. |
| [`asmatrix`](generated/numpy.asmatrix#numpy.asmatrix "numpy.asmatrix")(data[, dtype]) | Interpret the input as a matrix. |
| [`bmat`](generated/numpy.bmat#numpy.bmat "numpy.bmat")(obj[, ldict, gdict]) | Build a matrix object from a string, nested sequence, or array. 
| Example 1: Matrix creation from a string ``` >>> a = np.mat('1 2 3; 4 5 3') >>> print((a*a.T).I) [[ 0.29239766 -0.13450292] [-0.13450292 0.08187135]] ``` Example 2: Matrix creation from nested sequence ``` >>> np.mat([[1,5,10],[1.0,3,4j]]) matrix([[ 1.+0.j, 5.+0.j, 10.+0.j], [ 1.+0.j, 3.+0.j, 0.+4.j]]) ``` Example 3: Matrix creation from an array ``` >>> np.mat(np.random.rand(3,3)).T matrix([[4.17022005e-01, 3.02332573e-01, 1.86260211e-01], [7.20324493e-01, 1.46755891e-01, 3.45560727e-01], [1.14374817e-04, 9.23385948e-02, 3.96767474e-01]]) ``` Memory-mapped file arrays ------------------------- Memory-mapped files are useful for reading and/or modifying small segments of a large file with regular layout, without reading the entire file into memory. A simple subclass of the ndarray uses a memory-mapped file for the data buffer of the array. For small files, the over-head of reading the entire file into memory is typically not significant, however for large files using memory mapping can save considerable resources. Memory-mapped-file arrays have one additional method (besides those they inherit from the ndarray): [`.flush()`](generated/numpy.memmap.flush#numpy.memmap.flush "numpy.memmap.flush") which must be called manually by the user to ensure that any changes to the array actually get written to disk. | | | | --- | --- | | [`memmap`](generated/numpy.memmap#numpy.memmap "numpy.memmap")(filename[, dtype, mode, offset, ...]) | Create a memory-map to an array stored in a *binary* file on disk. | | [`memmap.flush`](generated/numpy.memmap.flush#numpy.memmap.flush "numpy.memmap.flush")() | Write any changes in the array to the file on disk. 
| Example: ``` >>> a = np.memmap('newfile.dat', dtype=float, mode='w+', shape=1000) >>> a[10] = 10.0 >>> a[30] = 30.0 >>> del a >>> b = np.fromfile('newfile.dat', dtype=float) >>> print(b[10], b[30]) 10.0 30.0 >>> a = np.memmap('newfile.dat', dtype=float) >>> print(a[10], a[30]) 10.0 30.0 ``` Character arrays (numpy.char) ----------------------------- See also [Creating character arrays (numpy.char)](routines.array-creation#routines-array-creation-char) Note The [`chararray`](generated/numpy.chararray#numpy.chararray "numpy.chararray") class exists for backwards compatibility with Numarray, it is not recommended for new development. Starting from numpy 1.4, if one needs arrays of strings, it is recommended to use arrays of [`dtype`](generated/numpy.dtype#numpy.dtype "numpy.dtype") [`object_`](arrays.scalars#numpy.object_ "numpy.object_"), [`bytes_`](arrays.scalars#numpy.bytes_ "numpy.bytes_") or [`str_`](arrays.scalars#numpy.str_ "numpy.str_"), and use the free functions in the [`numpy.char`](routines.char#module-numpy.char "numpy.char") module for fast vectorized string operations. These are enhanced arrays of either [`str_`](arrays.scalars#numpy.str_ "numpy.str_") type or [`bytes_`](arrays.scalars#numpy.bytes_ "numpy.bytes_") type. These arrays inherit from the [`ndarray`](generated/numpy.ndarray#numpy.ndarray "numpy.ndarray"), but specially-define the operations `+`, `*`, and `%` on a (broadcasting) element-by-element basis. These operations are not available on the standard [`ndarray`](generated/numpy.ndarray#numpy.ndarray "numpy.ndarray") of character type. In addition, the [`chararray`](generated/numpy.chararray#numpy.chararray "numpy.chararray") has all of the standard [`str`](https://docs.python.org/3/library/stdtypes.html#str "(in Python v3.10)") (and [`bytes`](https://docs.python.org/3/library/stdtypes.html#bytes "(in Python v3.10)")) methods, executing them on an element-by-element basis. 
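Following the recommendation above, here is a short sketch of the preferred alternative to `chararray`: plain arrays of `str_` dtype combined with the vectorized free functions in `numpy.char` (the array contents are arbitrary examples):

```python
import numpy as np

# Preferred over the legacy chararray class: ordinary str_ arrays
# plus the fast vectorized string operations in numpy.char.
s = np.array(['numpy', 'masked'], dtype=np.str_)

upper = np.char.upper(s)             # element-wise str.upper
suffixed = np.char.add(s, '_array')  # element-wise concatenation
```

The `numpy.char` functions broadcast like other NumPy operations, so the scalar `'_array'` above is applied to every element.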
Perhaps the easiest way to create a chararray is to use [`self.view(chararray)`](generated/numpy.ndarray.view#numpy.ndarray.view "numpy.ndarray.view") where *self* is an ndarray of str or unicode data-type. However, a chararray can also be created using the [`numpy.chararray`](generated/numpy.chararray#numpy.chararray "numpy.chararray") constructor, or via the [`numpy.char.array`](generated/numpy.core.defchararray.array#numpy.core.defchararray.array "numpy.core.defchararray.array") function: | | | | --- | --- | | [`chararray`](generated/numpy.chararray#numpy.chararray "numpy.chararray")(shape[, itemsize, unicode, ...]) | Provides a convenient view on arrays of string and unicode values. | | [`core.defchararray.array`](generated/numpy.core.defchararray.array#numpy.core.defchararray.array "numpy.core.defchararray.array")(obj[, itemsize, ...]) | Create a [`chararray`](generated/numpy.chararray#numpy.chararray "numpy.chararray"). | Another difference with the standard ndarray of str data-type is that the chararray inherits the feature introduced by Numarray that white-space at the end of any element in the array will be ignored on item retrieval and comparison operations. Record arrays (`numpy.rec`) --------------------------- See also [Creating record arrays (numpy.rec)](routines.array-creation#routines-array-creation-rec), [Data type routines](routines.dtype#routines-dtype), [Data type objects (dtype)](arrays.dtypes#arrays-dtypes). NumPy provides the [`recarray`](generated/numpy.recarray#numpy.recarray "numpy.recarray") class which allows accessing the fields of a structured array as attributes, and a corresponding scalar data type object [`record`](generated/numpy.record#numpy.record "numpy.record"). | | | | --- | --- | | [`recarray`](generated/numpy.recarray#numpy.recarray "numpy.recarray")(shape[, dtype, buf, offset, ...]) | Construct an ndarray that allows field access using attributes. 
| | [`record`](generated/numpy.record#numpy.record "numpy.record") | A data-type scalar that allows field access as attribute lookup. |

Masked arrays (numpy.ma)
------------------------

See also

[Masked arrays](maskedarray#maskedarray)

Standard container class
------------------------

For backward compatibility and as a standard "container" class, the UserArray from Numeric has been brought over to NumPy and named [`numpy.lib.user_array.container`](generated/numpy.lib.user_array.container#numpy.lib.user_array.container "numpy.lib.user_array.container"). The container class is a Python class whose self.array attribute is an ndarray. Multiple inheritance is probably easier with numpy.lib.user_array.container than with the ndarray itself and so it is included by default. It is not documented here beyond mentioning its existence because you are encouraged to use the ndarray class directly if you can.

| | |
| --- | --- |
| [`numpy.lib.user_array.container`](generated/numpy.lib.user_array.container#numpy.lib.user_array.container "numpy.lib.user_array.container")(data[, ...]) | Standard container-class for easy multiple-inheritance. |

Array Iterators
---------------

Iterators are a powerful concept for array processing. Essentially, iterators implement a generalized for-loop. If *myiter* is an iterator object, then the Python code:

```
for val in myiter:
    ... some code involving val ...
```

calls `val = next(myiter)` repeatedly until [`StopIteration`](https://docs.python.org/3/library/exceptions.html#StopIteration "(in Python v3.10)") is raised by the iterator. There are several ways to iterate over an array that may be useful: default iteration, flat iteration, and \(N\)-dimensional enumeration.

### Default iteration

The default iterator of an ndarray object is the default Python iterator of a sequence type. Thus, the array object itself can be used as an iterator.
The default behavior is equivalent to: ``` for i in range(arr.shape[0]): val = arr[i] ``` This default iterator selects a sub-array of dimension \(N-1\) from the array. This can be a useful construct for defining recursive algorithms. To loop over the entire array requires \(N\) for-loops. ``` >>> a = np.arange(24).reshape(3,2,4)+10 >>> for val in a: ... print('item:', val) item: [[10 11 12 13] [14 15 16 17]] item: [[18 19 20 21] [22 23 24 25]] item: [[26 27 28 29] [30 31 32 33]] ``` ### Flat iteration | | | | --- | --- | | [`ndarray.flat`](generated/numpy.ndarray.flat#numpy.ndarray.flat "numpy.ndarray.flat") | A 1-D iterator over the array. | As mentioned previously, the flat attribute of ndarray objects returns an iterator that will cycle over the entire array in C-style contiguous order. ``` >>> for i, val in enumerate(a.flat): ... if i%5 == 0: print(i, val) 0 10 5 15 10 20 15 25 20 30 ``` Here, I’ve used the built-in enumerate iterator to return the iterator index as well as the value. ### N-dimensional enumeration | | | | --- | --- | | [`ndenumerate`](generated/numpy.ndenumerate#numpy.ndenumerate "numpy.ndenumerate")(arr) | Multidimensional index iterator. | Sometimes it may be useful to get the N-dimensional index while iterating. The ndenumerate iterator can achieve this. ``` >>> for i, val in np.ndenumerate(a): ... if sum(i)%5 == 0: print(i, val) (0, 0, 0) 10 (1, 1, 3) 25 (2, 0, 3) 29 (2, 1, 2) 32 ``` ### Iterator for broadcasting | | | | --- | --- | | [`broadcast`](generated/numpy.broadcast#numpy.broadcast "numpy.broadcast") | Produce an object that mimics broadcasting. | The general concept of broadcasting is also available from Python using the [`broadcast`](generated/numpy.broadcast#numpy.broadcast "numpy.broadcast") iterator. This object takes \(N\) objects as inputs and returns an iterator that returns tuples providing each of the input sequence elements in the broadcasted result. ``` >>> for val in np.broadcast([[1,0],[2,3]],[0,1]): ... 
print(val)
(1, 0)
(0, 1)
(2, 0)
(3, 1)
```

© 2005–2022 NumPy Developers
Licensed under the 3-clause BSD License.
<https://numpy.org/doc/1.23/reference/arrays.classes.html>

Masked arrays
=============

Masked arrays are arrays that may have missing or invalid entries. The [`numpy.ma`](maskedarray.generic#module-numpy.ma "numpy.ma") module provides a nearly work-alike replacement for numpy that supports data arrays with masks.

* [The `numpy.ma` module](maskedarray.generic)
  + [Rationale](maskedarray.generic#rationale)
  + [What is a masked array?](maskedarray.generic#what-is-a-masked-array)
  + [The `numpy.ma` module](maskedarray.generic#id1)
* [Using numpy.ma](maskedarray.generic#using-numpy-ma)
  + [Constructing masked arrays](maskedarray.generic#constructing-masked-arrays)
  + [Accessing the data](maskedarray.generic#accessing-the-data)
  + [Accessing the mask](maskedarray.generic#accessing-the-mask)
  + [Accessing only the valid entries](maskedarray.generic#accessing-only-the-valid-entries)
  + [Modifying the mask](maskedarray.generic#modifying-the-mask)
  + [Indexing and slicing](maskedarray.generic#indexing-and-slicing)
  + [Operations on masked arrays](maskedarray.generic#operations-on-masked-arrays)
* [Examples](maskedarray.generic#examples)
  + [Data with a given value representing missing data](maskedarray.generic#data-with-a-given-value-representing-missing-data)
  + [Filling in the missing data](maskedarray.generic#filling-in-the-missing-data)
  + [Numerical operations](maskedarray.generic#numerical-operations)
  + [Ignoring extreme values](maskedarray.generic#ignoring-extreme-values)
* [Constants of the `numpy.ma` module](maskedarray.baseclass)
* [The `MaskedArray` class](maskedarray.baseclass#the-maskedarray-class)
  + [Attributes and properties of masked arrays](maskedarray.baseclass#attributes-and-properties-of-masked-arrays)
* [`MaskedArray` methods](maskedarray.baseclass#maskedarray-methods)
  + [Conversion](maskedarray.baseclass#conversion)
  + [Shape manipulation](maskedarray.baseclass#shape-manipulation)
  + [Item selection and manipulation](maskedarray.baseclass#item-selection-and-manipulation)
  + [Pickling and copy](maskedarray.baseclass#pickling-and-copy)
  + [Calculations](maskedarray.baseclass#calculations)
  + [Arithmetic and comparison operations](maskedarray.baseclass#arithmetic-and-comparison-operations)
  + [Representation](maskedarray.baseclass#representation)
  + [Special methods](maskedarray.baseclass#special-methods)
  + [Specific methods](maskedarray.baseclass#specific-methods)
* [Masked array operations](routines.ma)
  + [Constants](routines.ma#constants)
  + [Creation](routines.ma#creation)
  + [Inspecting the array](routines.ma#inspecting-the-array)
  + [Manipulating a MaskedArray](routines.ma#manipulating-a-maskedarray)
  + [Operations on masks](routines.ma#operations-on-masks)
  + [Conversion operations](routines.ma#conversion-operations)
  + [Masked arrays arithmetic](routines.ma#masked-arrays-arithmetic)

<https://numpy.org/doc/1.23/reference/maskedarray.html>

The array interface protocol
============================

Note

This page describes the NumPy-specific API for accessing the contents of a NumPy array from other C extensions. [**PEP 3118**](https://peps.python.org/pep-3118/) – [`The Revised Buffer Protocol`](https://docs.python.org/3/c-api/buffer.html#c.PyObject_GetBuffer "(in Python v3.10)") introduces similar, standardized API to Python 2.6 and 3.0 for any extension module to use. [Cython](http://cython.org/)'s buffer array support uses the [**PEP 3118**](https://peps.python.org/pep-3118/) API; see the [Cython NumPy tutorial](https://github.com/cython/cython/wiki/tutorials-numpy). Cython provides a way to write code that supports the buffer protocol with Python versions older than 2.6 because it has a backward-compatible implementation utilizing the array interface described here.
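Before the key-by-key description that follows, it can help to inspect the `__array_interface__` dictionary that NumPy's own arrays already expose (a quick sketch; the array shown is an arbitrary example):

```python
import numpy as np

a = np.arange(12, dtype=np.int32).reshape(3, 4)
ai = a.__array_interface__

# the required keys, plus 'data' as a (pointer, read-only flag) tuple
shape, typestr, version = ai['shape'], ai['typestr'], ai['version']
ptr, readonly = ai['data']
```

For a C-contiguous array like this one, `ai['strides']` is `None`, and `typestr` reflects the platform byte order (e.g. `'<i4'` on little-endian machines).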
version 3 The array interface (sometimes called array protocol) was created in 2005 as a means for array-like Python objects to re-use each other’s data buffers intelligently whenever possible. The homogeneous N-dimensional array interface is a default mechanism for objects to share N-dimensional array memory and information. The interface consists of a Python-side and a C-side using two attributes. Objects wishing to be considered an N-dimensional array in application code should support at least one of these attributes. Objects wishing to support an N-dimensional array in application code should look for at least one of these attributes and use the information provided appropriately. This interface describes homogeneous arrays in the sense that each item of the array has the same “type”. This type can be very simple or it can be a quite arbitrary and complicated C-like structure. There are two ways to use the interface: A Python side and a C-side. Both are separate attributes. Python side ----------- This approach to the interface consists of the object having an [`__array_interface__`](#object.__array_interface__ "object.__array_interface__") attribute. object.__array_interface__ A dictionary of items (3 required and 5 optional). The optional keys in the dictionary have implied defaults if they are not provided. The keys are: **shape** (required) Tuple whose elements are the array size in each dimension. Each entry is an integer (a Python [`int`](https://docs.python.org/3/library/functions.html#int "(in Python v3.10)")). Note that these integers could be larger than the platform `int` or `long` could hold (a Python [`int`](https://docs.python.org/3/library/functions.html#int "(in Python v3.10)") is a C `long`). It is up to the code using this attribute to handle this appropriately; either by raising an error when overflow is possible, or by using `long long` as the C type for the shapes. 
**typestr** (required) A string providing the basic type of the homogeneous array The basic string format consists of 3 parts: a character describing the byteorder of the data (`<`: little-endian, `>`: big-endian, `|`: not-relevant), a character code giving the basic type of the array, and an integer providing the number of bytes the type uses. The basic type character codes are: | | | | --- | --- | | `t` | Bit field (following integer gives the number of bits in the bit field). | | `b` | Boolean (integer type where all values are only `True` or `False`) | | `i` | Integer | | `u` | Unsigned integer | | `f` | Floating point | | `c` | Complex floating point | | `m` | Timedelta | | `M` | Datetime | | `O` | Object (i.e. the memory contains a pointer to [`PyObject`](https://docs.python.org/3/c-api/structures.html#c.PyObject "(in Python v3.10)")) | | `S` | String (fixed-length sequence of char) | | `U` | Unicode (fixed-length sequence of [`Py_UCS4`](https://docs.python.org/3/c-api/unicode.html#c.Py_UCS4 "(in Python v3.10)")) | | `V` | Other (void * – each item is a fixed-size chunk of memory) | **descr** (optional) A list of tuples providing a more detailed description of the memory layout for each item in the homogeneous array. Each tuple in the list has two or three elements. Normally, this attribute would be used when *typestr* is `V[0-9]+`, but this is not a requirement. The only requirement is that the number of bytes represented in the *typestr* key is the same as the total number of bytes represented here. The idea is to support descriptions of C-like structs that make up array elements. The elements of each tuple in the list are 1. A string providing a name associated with this portion of the datatype. This could also be a tuple of `('full name', 'basic_name')` where basic name would be a valid Python variable name representing the full name of the field. 2. Either a basic-type description string as in *typestr* or another list (for nested structured types) 3. 
An optional shape tuple providing how many times this part of the structure should be repeated. No repeats are assumed if this is not given. Very complicated structures can be described using this generic interface. Notice, however, that each element of the array is still of the same data-type. Some examples of using this interface are given below. **Default**: `[('', typestr)]` **data** (optional) A 2-tuple whose first argument is an integer (a long integer if necessary) that points to the data-area storing the array contents. This pointer must point to the first element of data (in other words any offset is always ignored in this case). The second entry in the tuple is a read-only flag (true means the data area is read-only). This attribute can also be an object exposing the [buffer interface](https://docs.python.org/3/c-api/buffer.html#bufferobjects "(in Python v3.10)") which will be used to share the data. If this key is not present (or returns None), then memory sharing will be done through the buffer interface of the object itself. In this case, the offset key can be used to indicate the start of the buffer. A reference to the object exposing the array interface must be stored by the new object if the memory area is to be secured. **Default**: `None` **strides** (optional) Either `None` to indicate a C-style contiguous array or a tuple of strides which provides the number of bytes needed to jump to the next array element in the corresponding dimension. Each entry must be an integer (a Python [`int`](https://docs.python.org/3/library/functions.html#int "(in Python v3.10)")). As with shape, the values may be larger than can be represented by a C `int` or `long`; the calling code should handle this appropriately, either by raising an error, or by using `long long` in C. The default is `None` which implies a C-style contiguous memory buffer. In this model, the last dimension of the array varies the fastest. 
For example, the default strides tuple for an object whose array entries are 8 bytes long and whose shape is `(10, 20, 30)` would be `(4800, 240, 8)`. **Default**: `None` (C-style contiguous) **mask** (optional) `None` or an object exposing the array interface. All elements of the mask array should be interpreted only as true or not true indicating which elements of this array are valid. The shape of this object should be `“broadcastable”` to the shape of the original array. **Default**: `None` (All array values are valid) **offset** (optional) An integer offset into the array data region. This can only be used when data is `None` or returns a `buffer` object. **Default**: `0`. **version** (required) An integer showing the version of the interface (i.e. 3 for this version). Be careful not to use this to invalidate objects exposing future versions of the interface. C-struct access --------------- This approach to the array interface allows for faster access to an array using only one attribute lookup and a well-defined C-structure. object.__array_struct__ A [`PyCapsule`](https://docs.python.org/3/c-api/capsule.html#c.PyCapsule "(in Python v3.10)") whose `pointer` member contains a pointer to a filled [`PyArrayInterface`](c-api/types-and-structures#c.PyArrayInterface "PyArrayInterface") structure. Memory for the structure is dynamically created and the [`PyCapsule`](https://docs.python.org/3/c-api/capsule.html#c.PyCapsule "(in Python v3.10)") is also created with an appropriate destructor so the retriever of this attribute simply has to apply [`Py_DECREF`](https://docs.python.org/3/c-api/refcounting.html#c.Py_DECREF "(in Python v3.10)") to the object returned by this attribute when it is finished. Also, either the data needs to be copied out, or a reference to the object exposing this attribute must be held to ensure the data is not freed. 
Objects exposing the [`__array_struct__`](#object.__array_struct__ "object.__array_struct__") interface must also not reallocate their memory if other objects are referencing them. The [`PyArrayInterface`](c-api/types-and-structures#c.PyArrayInterface "PyArrayInterface") structure is defined in `numpy/ndarrayobject.h` as: ``` typedef struct { int two; /* contains the integer 2 -- simple sanity check */ int nd; /* number of dimensions */ char typekind; /* kind in array --- character code of typestr */ int itemsize; /* size of each element */ int flags; /* flags indicating how the data should be interpreted */ /* must set ARR_HAS_DESCR bit to validate descr */ Py_intptr_t *shape; /* A length-nd array of shape information */ Py_intptr_t *strides; /* A length-nd array of stride information */ void *data; /* A pointer to the first element of the array */ PyObject *descr; /* NULL or data-description (same as descr key of __array_interface__) -- must set ARR_HAS_DESCR flag or this will be ignored. */ } PyArrayInterface; ``` The flags member may consist of 5 bits showing how the data should be interpreted and one bit showing how the Interface should be interpreted. The data-bits are [`NPY_ARRAY_C_CONTIGUOUS`](c-api/array#c.NPY_ARRAY_C_CONTIGUOUS "NPY_ARRAY_C_CONTIGUOUS") (0x1), [`NPY_ARRAY_F_CONTIGUOUS`](c-api/array#c.NPY_ARRAY_F_CONTIGUOUS "NPY_ARRAY_F_CONTIGUOUS") (0x2), [`NPY_ARRAY_ALIGNED`](c-api/array#c.NPY_ARRAY_ALIGNED "NPY_ARRAY_ALIGNED") (0x100), [`NPY_ARRAY_NOTSWAPPED`](c-api/array#c.NPY_ARRAY_NOTSWAPPED "NPY_ARRAY_NOTSWAPPED") (0x200), and [`NPY_ARRAY_WRITEABLE`](c-api/array#c.NPY_ARRAY_WRITEABLE "NPY_ARRAY_WRITEABLE") (0x400). A final flag [`NPY_ARR_HAS_DESCR`](#c.NPY_ARR_HAS_DESCR "NPY_ARR_HAS_DESCR") (0x800) indicates whether or not this structure has the arrdescr field. The field should not be accessed unless this flag is present. 
NPY_ARR_HAS_DESCR New since June 16, 2006: In the past most implementations used the `desc` member of the `PyCObject` (now [`PyCapsule`](https://docs.python.org/3/c-api/capsule.html#c.PyCapsule "(in Python v3.10)")) itself (do not confuse this with the “descr” member of the [`PyArrayInterface`](c-api/types-and-structures#c.PyArrayInterface "PyArrayInterface") structure above — they are two separate things) to hold the pointer to the object exposing the interface. This is now an explicit part of the interface. Be sure to take a reference to the object and call [`PyCapsule_SetContext`](https://docs.python.org/3/c-api/capsule.html#c.PyCapsule_SetContext "(in Python v3.10)") before returning the [`PyCapsule`](https://docs.python.org/3/c-api/capsule.html#c.PyCapsule "(in Python v3.10)"), and configure a destructor to decref this reference. Note `__array_struct__` is considered legacy and should not be used for new code. Use the [buffer protocol](https://docs.python.org/3/c-api/buffer.html "(in Python v3.10)") or the DLPack protocol [`numpy.from_dlpack`](generated/numpy.from_dlpack#numpy.from_dlpack "numpy.from_dlpack") instead. Type description examples ------------------------- For clarity it is useful to provide some examples of the type description and corresponding [`__array_interface__`](#object.__array_interface__ "object.__array_interface__") ‘descr’ entries. Thanks to <NAME> for these examples: In every case, the ‘descr’ key is optional, but of course provides more information which may be important for various applications: ``` * Float data typestr == '>f4' descr == [('','>f4')] * Complex double typestr == '>c8' descr == [('real','>f4'), ('imag','>f4')] * RGB Pixel data typestr == '|V3' descr == [('r','|u1'), ('g','|u1'), ('b','|u1')] * Mixed endian (weird but could happen). 
typestr == '|V8' (or '>u8') descr == [('big','>i4'), ('little','<i4')] * Nested structure struct { int ival; struct { unsigned short sval; unsigned char bval; unsigned char cval; } sub; } typestr == '|V8' (or '<u8' if you want) descr == [('ival','<i4'), ('sub', [('sval','<u2'), ('bval','|u1'), ('cval','|u1') ]) ] * Nested array struct { int ival; double data[16*4]; } typestr == '|V516' descr == [('ival','>i4'), ('data','>f8',(16,4))] * Padded structure struct { int ival; double dval; } typestr == '|V16' descr == [('ival','>i4'),('','|V4'),('dval','>f8')] ``` It should be clear that any structured type could be described using this interface. Differences with Array interface (Version 2) -------------------------------------------- The version 2 interface was very similar. The differences were largely aesthetic. In particular: 1. The PyArrayInterface structure had no descr member at the end (and therefore no flag ARR_HAS_DESCR) 2. The `context` member of the [`PyCapsule`](https://docs.python.org/3/c-api/capsule.html#c.PyCapsule "(in Python v3.10)") (formally the `desc` member of the `PyCObject`) returned from `__array_struct__` was not specified. Usually, it was the object exposing the array (so that a reference to it could be kept and destroyed when the C-object was destroyed). It is now an explicit requirement that this field be used in some way to hold a reference to the owning object. Note Until August 2020, this said: Now it must be a tuple whose first element is a string with “PyArrayInterface Version #” and whose second element is the object exposing the array. This design was retracted almost immediately after it was proposed, in <<https://mail.python.org/pipermail/numpy-discussion/2006-June/020995.html>>. Despite 14 years of documentation to the contrary, at no point was it valid to assume that `__array_interface__` capsules held this tuple content. 3. 
The tuple returned from `__array_interface__['data']` used to be a hex-string (now it is an integer or a long integer).
4. There was no `__array_interface__` attribute; instead, all of the keys (except for version) in the `__array_interface__` dictionary were their own attributes. Thus, to obtain the Python-side information you had to access each attribute separately:
   * `__array_data__`
   * `__array_shape__`
   * `__array_strides__`
   * `__array_typestr__`
   * `__array_descr__`
   * `__array_offset__`
   * `__array_mask__`

<https://numpy.org/doc/1.23/reference/arrays.interface.html>

Datetimes and Timedeltas
========================

New in version 1.7.0.

Starting in NumPy 1.7, there are core array data types which natively support datetime functionality. The data type is called [`datetime64`](arrays.scalars#numpy.datetime64 "numpy.datetime64"), so named because [`datetime`](https://docs.python.org/3/library/datetime.html#datetime.datetime "(in Python v3.10)") is already taken by the Python standard library.

Datetime64 Conventions and Assumptions
--------------------------------------

Similar to the Python [`date`](https://docs.python.org/3/library/datetime.html#datetime.date "(in Python v3.10)") class, dates are expressed in the current Gregorian Calendar, indefinitely extended both in the future and in the past. [1](#id3) Contrary to Python [`date`](https://docs.python.org/3/library/datetime.html#datetime.date "(in Python v3.10)"), which supports only years in the 1 AD — 9999 AD range, [`datetime64`](arrays.scalars#numpy.datetime64 "numpy.datetime64") also allows for dates BC; years BC follow the [Astronomical year numbering](https://en.wikipedia.org/wiki/Astronomical_year_numbering) convention, i.e. year 2 BC is numbered −1, year 1 BC is numbered 0, year 1 AD is numbered 1. Time instants, say 16:23:32.234, are represented counting hours, minutes, seconds and fractions from midnight: i.e. 
00:00:00.000 is midnight, 12:00:00.000 is noon, etc. Each calendar day has exactly 86400 seconds. This is a “naive” time, with no explicit notion of timezones or specific time scales (UT1, UTC, TAI, etc.). [2](#id4)

[1](#id1) The calendar obtained by extending the Gregorian calendar before its official adoption on Oct. 15, 1582 is called the [Proleptic Gregorian Calendar](https://en.wikipedia.org/wiki/Proleptic_Gregorian_calendar).

[2](#id2) The assumption of 86400 seconds per calendar day is not valid for UTC, the present-day civil time scale. In fact, due to the presence of [leap seconds](https://en.wikipedia.org/wiki/Leap_second), on rare occasions a day may be 86401 or 86399 seconds long. On the contrary, the 86400s day assumption holds for the TAI timescale. Explicit support for TAI and TAI to UTC conversion, accounting for leap seconds, is proposed but not yet implemented. See also the [shortcomings](#shortcomings) section below.

Basic Datetimes
---------------

The most basic way to create datetimes is from strings in ISO 8601 date or datetime format. It is also possible to create datetimes from an integer by offset relative to the Unix epoch (00:00:00 UTC on 1 January 1970). The unit for internal storage is automatically selected from the form of the string, and can be either a [date unit](#arrays-dtypes-dateunits) or a [time unit](#arrays-dtypes-timeunits). The date units are years (‘Y’), months (‘M’), weeks (‘W’), and days (‘D’), while the time units are hours (‘h’), minutes (‘m’), seconds (‘s’), milliseconds (‘ms’), and some additional SI-prefix seconds-based units. The [`datetime64`](arrays.scalars#numpy.datetime64 "numpy.datetime64") data type also accepts the string “NAT”, in any combination of lowercase/uppercase letters, for a “Not A Time” value.
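A quick way to confirm which storage unit was auto-selected from a string is `np.datetime_data`, which reports the (unit, count) pair of a `datetime64` dtype. A minimal sketch (the example date is arbitrary):

```python
import numpy as np

# The storage unit is inferred from the string form;
# here minutes are the finest field given.
d = np.datetime64('2005-02-25T03:30')

# datetime_data returns the (unit string, count) of the dtype,
# i.e. this dtype is datetime64[m].
unit, count = np.datetime_data(d.dtype)
print(unit, count)
```

The same helper works on any `datetime64` or `timedelta64` dtype object.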
#### Example

A simple ISO date:

```
>>> np.datetime64('2005-02-25')
numpy.datetime64('2005-02-25')
```

From an integer and a date unit, 1 year since the UNIX epoch:

```
>>> np.datetime64(1, 'Y')
numpy.datetime64('1971')
```

Using months for the unit:

```
>>> np.datetime64('2005-02')
numpy.datetime64('2005-02')
```

Specifying just the month, but forcing a ‘days’ unit:

```
>>> np.datetime64('2005-02', 'D')
numpy.datetime64('2005-02-01')
```

From a date and time:

```
>>> np.datetime64('2005-02-25T03:30')
numpy.datetime64('2005-02-25T03:30')
```

NAT (not a time):

```
>>> np.datetime64('nat')
numpy.datetime64('NaT')
```

When creating an array of datetimes from a string, it is still possible to automatically select the unit from the inputs, by using the datetime type with generic units.

#### Example

```
>>> np.array(['2007-07-13', '2006-01-13', '2010-08-13'], dtype='datetime64')
array(['2007-07-13', '2006-01-13', '2010-08-13'], dtype='datetime64[D]')
```

```
>>> np.array(['2001-01-01T12:00', '2002-02-03T13:56:03.172'], dtype='datetime64')
array(['2001-01-01T12:00:00.000', '2002-02-03T13:56:03.172'],
      dtype='datetime64[ms]')
```

An array of datetimes can be constructed from integers representing POSIX timestamps with the given unit.

#### Example

```
>>> np.array([0, 1577836800], dtype='datetime64[s]')
array(['1970-01-01T00:00:00', '2020-01-01T00:00:00'], dtype='datetime64[s]')
```

```
>>> np.array([0, 1577836800000]).astype('datetime64[ms]')
array(['1970-01-01T00:00:00.000', '2020-01-01T00:00:00.000'],
      dtype='datetime64[ms]')
```

The datetime type works with many common NumPy functions, for example [`arange`](generated/numpy.arange#numpy.arange "numpy.arange") can be used to generate ranges of dates.
#### Example

All the dates for one month:

```
>>> np.arange('2005-02', '2005-03', dtype='datetime64[D]')
array(['2005-02-01', '2005-02-02', '2005-02-03', '2005-02-04',
       '2005-02-05', '2005-02-06', '2005-02-07', '2005-02-08',
       '2005-02-09', '2005-02-10', '2005-02-11', '2005-02-12',
       '2005-02-13', '2005-02-14', '2005-02-15', '2005-02-16',
       '2005-02-17', '2005-02-18', '2005-02-19', '2005-02-20',
       '2005-02-21', '2005-02-22', '2005-02-23', '2005-02-24',
       '2005-02-25', '2005-02-26', '2005-02-27', '2005-02-28'],
      dtype='datetime64[D]')
```

The datetime object represents a single moment in time. If two datetimes have different units, they may still be representing the same moment of time, and converting from a bigger unit like months to a smaller unit like days is considered a ‘safe’ cast because the moment of time is still being represented exactly.

#### Example

```
>>> np.datetime64('2005') == np.datetime64('2005-01-01')
True
```

```
>>> np.datetime64('2010-03-14T15') == np.datetime64('2010-03-14T15:00:00.00')
True
```

Deprecated since version 1.11.0: NumPy does not store timezone information. For backwards compatibility, datetime64 still parses timezone offsets, which it handles by converting to UTC±00:00 (Zulu time). This behaviour is deprecated and will raise an error in the future.

Datetime and Timedelta Arithmetic
---------------------------------

NumPy allows the subtraction of two datetime values, an operation which produces a number with a time unit. Because NumPy doesn’t have a physical quantities system in its core, the [`timedelta64`](arrays.scalars#numpy.timedelta64 "numpy.timedelta64") data type was created to complement [`datetime64`](arrays.scalars#numpy.datetime64 "numpy.datetime64"). The arguments for [`timedelta64`](arrays.scalars#numpy.timedelta64 "numpy.timedelta64") are a number, to represent the number of units, and a date/time unit, such as (D)ay, (M)onth, (Y)ear, (h)ours, (m)inutes, or (s)econds.
The [`timedelta64`](arrays.scalars#numpy.timedelta64 "numpy.timedelta64") data type also accepts the string “NAT” in place of the number for a “Not A Time” value.

#### Example

```
>>> np.timedelta64(1, 'D')
numpy.timedelta64(1,'D')
```

```
>>> np.timedelta64(4, 'h')
numpy.timedelta64(4,'h')
```

```
>>> np.timedelta64('nAt')
numpy.timedelta64('NaT')
```

Datetimes and Timedeltas work together to provide ways for simple datetime calculations.

#### Example

```
>>> np.datetime64('2009-01-01') - np.datetime64('2008-01-01')
numpy.timedelta64(366,'D')
```

```
>>> np.datetime64('2009') + np.timedelta64(20, 'D')
numpy.datetime64('2009-01-21')
```

```
>>> np.datetime64('2011-06-15T00:00') + np.timedelta64(12, 'h')
numpy.datetime64('2011-06-15T12:00')
```

```
>>> np.timedelta64(1,'W') / np.timedelta64(1,'D')
7.0
```

```
>>> np.timedelta64(1,'W') % np.timedelta64(10,'D')
numpy.timedelta64(7,'D')
```

```
>>> np.datetime64('nat') - np.datetime64('2009-01-01')
numpy.timedelta64('NaT','D')
```

```
>>> np.datetime64('2009-01-01') + np.timedelta64('nat')
numpy.datetime64('NaT')
```

There are two Timedelta units (‘Y’, years and ‘M’, months) which are treated specially, because how much time they represent changes depending on when they are used. While a timedelta day unit is equivalent to 24 hours, there is no way to convert a month unit into days, because different months have different numbers of days.

#### Example

```
>>> a = np.timedelta64(1, 'Y')
```

```
>>> np.timedelta64(a, 'M')
numpy.timedelta64(12,'M')
```

```
>>> np.timedelta64(a, 'D')
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
TypeError: Cannot cast NumPy timedelta64 scalar from metadata [Y] to [D] according to the rule 'same_kind'
```

Datetime Units
--------------

The Datetime and Timedelta data types support a large number of time units, as well as generic units which can be coerced into any of the other units based on input data. Datetimes are always stored with an epoch of 1970-01-01T00:00.
This means the supported dates are always a symmetric interval around the epoch, called “time span” in the table below. The length of the span is the range of a 64-bit integer times the length of the date or unit. For example, the time span for ‘W’ (week) is exactly 7 times longer than the time span for ‘D’ (day), and the time span for ‘D’ (day) is exactly 24 times longer than the time span for ‘h’ (hour).

Here are the date units:

| Code | Meaning | Time span (relative) | Time span (absolute) |
| --- | --- | --- | --- |
| Y | year | +/- 9.2e18 years | [9.2e18 BC, 9.2e18 AD] |
| M | month | +/- 7.6e17 years | [7.6e17 BC, 7.6e17 AD] |
| W | week | +/- 1.7e17 years | [1.7e17 BC, 1.7e17 AD] |
| D | day | +/- 2.5e16 years | [2.5e16 BC, 2.5e16 AD] |

And here are the time units:

| Code | Meaning | Time span (relative) | Time span (absolute) |
| --- | --- | --- | --- |
| h | hour | +/- 1.0e15 years | [1.0e15 BC, 1.0e15 AD] |
| m | minute | +/- 1.7e13 years | [1.7e13 BC, 1.7e13 AD] |
| s | second | +/- 2.9e11 years | [2.9e11 BC, 2.9e11 AD] |
| ms | millisecond | +/- 2.9e8 years | [ 2.9e8 BC, 2.9e8 AD] |
| us / μs | microsecond | +/- 2.9e5 years | [290301 BC, 294241 AD] |
| ns | nanosecond | +/- 292 years | [ 1678 AD, 2262 AD] |
| ps | picosecond | +/- 106 days | [ 1969 AD, 1970 AD] |
| fs | femtosecond | +/- 2.6 hours | [ 1969 AD, 1970 AD] |
| as | attosecond | +/- 9.2 seconds | [ 1969 AD, 1970 AD] |

Business Day Functionality
--------------------------

To allow the datetime to be used in contexts where only certain days of the week are valid, NumPy includes a set of “busday” (business day) functions. The default for busday functions is that the only valid days are Monday through Friday (the usual business days). The implementation is based on a “weekmask” containing 7 Boolean flags to indicate valid days; custom weekmasks are possible that specify other sets of valid days.
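As a small illustration of the weekmask idea just described, the sketch below treats a hypothetical Sunday-through-Thursday working week as the set of valid days (the dates come from examples later in this section; the particular weekmask is an assumption for illustration only):

```python
import numpy as np

# Hypothetical Sunday-to-Thursday working week, expressed as a
# Monday..Sunday weekmask (1 == valid business day, 0 == invalid).
mask = [1, 1, 1, 1, 0, 0, 1]

print(np.is_busday(np.datetime64('2011-07-15'), weekmask=mask))  # a Friday: False
print(np.is_busday(np.datetime64('2011-07-17'), weekmask=mask))  # a Sunday: True
```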
The “busday” functions can additionally check a list of “holiday” dates, specific dates that are not valid days.

The function [`busday_offset`](generated/numpy.busday_offset#numpy.busday_offset "numpy.busday_offset") allows you to apply offsets specified in business days to datetimes with a unit of ‘D’ (day).

#### Example

```
>>> np.busday_offset('2011-06-23', 1)
numpy.datetime64('2011-06-24')
```

```
>>> np.busday_offset('2011-06-23', 2)
numpy.datetime64('2011-06-27')
```

When an input date falls on the weekend or a holiday, [`busday_offset`](generated/numpy.busday_offset#numpy.busday_offset "numpy.busday_offset") first applies a rule to roll the date to a valid business day, then applies the offset. The default rule is ‘raise’, which simply raises an exception. The rules most typically used are ‘forward’ and ‘backward’.

#### Example

```
>>> np.busday_offset('2011-06-25', 2)
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
ValueError: Non-business day date in busday_offset
```

```
>>> np.busday_offset('2011-06-25', 0, roll='forward')
numpy.datetime64('2011-06-27')
```

```
>>> np.busday_offset('2011-06-25', 2, roll='forward')
numpy.datetime64('2011-06-29')
```

```
>>> np.busday_offset('2011-06-25', 0, roll='backward')
numpy.datetime64('2011-06-24')
```

```
>>> np.busday_offset('2011-06-25', 2, roll='backward')
numpy.datetime64('2011-06-28')
```

In some cases, an appropriate use of the roll and the offset is necessary to get a desired answer.
#### Example

The first business day on or after a date:

```
>>> np.busday_offset('2011-03-20', 0, roll='forward')
numpy.datetime64('2011-03-21')
>>> np.busday_offset('2011-03-22', 0, roll='forward')
numpy.datetime64('2011-03-22')
```

The first business day strictly after a date:

```
>>> np.busday_offset('2011-03-20', 1, roll='backward')
numpy.datetime64('2011-03-21')
>>> np.busday_offset('2011-03-22', 1, roll='backward')
numpy.datetime64('2011-03-23')
```

The function is also useful for computing some kinds of days like holidays. In Canada and the U.S., Mother’s day is on the second Sunday in May, which can be computed with a custom weekmask.

#### Example

```
>>> np.busday_offset('2012-05', 1, roll='forward', weekmask='Sun')
numpy.datetime64('2012-05-13')
```

When performance is important for manipulating many business dates with one particular choice of weekmask and holidays, there is an object [`busdaycalendar`](generated/numpy.busdaycalendar#numpy.busdaycalendar "numpy.busdaycalendar") which stores the data necessary in an optimized form.

### np.is_busday():

To test a [`datetime64`](arrays.scalars#numpy.datetime64 "numpy.datetime64") value to see if it is a valid day, use [`is_busday`](generated/numpy.is_busday#numpy.is_busday "numpy.is_busday").
#### Example

```
>>> np.is_busday(np.datetime64('2011-07-15'))  # a Friday
True
>>> np.is_busday(np.datetime64('2011-07-16'))  # a Saturday
False
>>> np.is_busday(np.datetime64('2011-07-16'), weekmask="Sat Sun")
True
>>> a = np.arange(np.datetime64('2011-07-11'), np.datetime64('2011-07-18'))
>>> np.is_busday(a)
array([ True,  True,  True,  True,  True, False, False])
```

### np.busday_count():

To find how many valid days there are in a specified range of datetime64 dates, use [`busday_count`](generated/numpy.busday_count#numpy.busday_count "numpy.busday_count"):

#### Example

```
>>> np.busday_count(np.datetime64('2011-07-11'), np.datetime64('2011-07-18'))
5
>>> np.busday_count(np.datetime64('2011-07-18'), np.datetime64('2011-07-11'))
-5
```

If you have an array of datetime64 day values, and you want a count of how many of them are valid dates, you can do this:

#### Example

```
>>> a = np.arange(np.datetime64('2011-07-11'), np.datetime64('2011-07-18'))
>>> np.count_nonzero(np.is_busday(a))
5
```

#### Custom Weekmasks

Here are several examples of custom weekmask values. These examples specify the “busday” default of Monday through Friday being valid days.

Some examples:

```
# Positional sequences; positions are Monday through Sunday.
# Length of the sequence must be exactly 7.
weekmask = [1, 1, 1, 1, 1, 0, 0]
# list or other sequence; 0 == invalid day, 1 == valid day
weekmask = "1111100"
# string '0' == invalid day, '1' == valid day

# string abbreviations from this list: Mon Tue Wed Thu Fri Sat Sun
weekmask = "Mon Tue Wed Thu Fri"
# any amount of whitespace is allowed; abbreviations are case-sensitive.
weekmask = "MonTue Wed Thu\tFri"
```

Datetime64 shortcomings
-----------------------

The assumption that all days are exactly 86400 seconds long makes [`datetime64`](arrays.scalars#numpy.datetime64 "numpy.datetime64") largely compatible with Python [`datetime`](https://docs.python.org/3/library/datetime.html#module-datetime "(in Python v3.10)") and “POSIX time” semantics; therefore they all share the same well-known shortcomings with respect to the UTC timescale and historical time determination. A brief, non-exhaustive summary is given below.

* It is impossible to parse valid UTC timestamps occurring during a positive leap second.

#### Example

“2016-12-31 23:59:60 UTC” was a leap second, therefore “2016-12-31 23:59:60.450 UTC” is a valid timestamp which is not parseable by [`datetime64`](arrays.scalars#numpy.datetime64 "numpy.datetime64"):

```
>>> np.datetime64("2016-12-31 23:59:60.450")
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
ValueError: Seconds out of range in datetime string "2016-12-31 23:59:60.450"
```

* Timedelta64 computations between two UTC dates can be wrong by an integer number of SI seconds.

#### Example

Compute the number of SI seconds between “2021-01-01 12:56:23.423 UTC” and “2001-01-01 00:00:00.000 UTC”:

```
>>> (
...     np.datetime64("2021-01-01 12:56:23.423")
...     - np.datetime64("2001-01-01")
... ) / np.timedelta64(1, "s")
631198583.423
```

however, the correct answer is `631198588.423` SI seconds, because there were 5 leap seconds between 2001 and 2021.

* Timedelta64 computations for dates in the past do not return SI seconds, as one would expect.
#### Example

Compute the number of seconds between “0000-01-01 UT” and “1600-01-01 UT”, where UT is [universal time](https://en.wikipedia.org/wiki/Universal_Time):

```
>>> a = np.datetime64("0000-01-01", "us")
>>> b = np.datetime64("1600-01-01", "us")
>>> b - a
numpy.timedelta64(50491123200000000,'us')
```

The computed result, `50491123200` seconds, is obtained as the elapsed number of days (`584388`) times `86400` seconds; this is the number of seconds of a clock in sync with earth rotation. The exact value in SI seconds can only be estimated, e.g. using data published in [Measurement of the Earth’s rotation: 720 BC to AD 2015, 2016, Royal Society’s Proceedings A 472, by Stephenson et al.](https://doi.org/10.1098/rspa.2016.0404). A sensible estimate is `50491112870 ± 90` seconds, with a difference of 10330 seconds.

<https://numpy.org/doc/1.23/reference/arrays.datetime.html>

Array creation routines
=======================

See also [Array creation](../user/basics.creation#arrays-creation)

From shape or value
-------------------

| | |
| --- | --- |
| [`empty`](generated/numpy.empty#numpy.empty "numpy.empty")(shape[, dtype, order, like]) | Return a new array of given shape and type, without initializing entries. |
| [`empty_like`](generated/numpy.empty_like#numpy.empty_like "numpy.empty_like")(prototype[, dtype, order, subok, ...]) | Return a new array with the same shape and type as a given array. |
| [`eye`](generated/numpy.eye#numpy.eye "numpy.eye")(N[, M, k, dtype, order, like]) | Return a 2-D array with ones on the diagonal and zeros elsewhere. |
| [`identity`](generated/numpy.identity#numpy.identity "numpy.identity")(n[, dtype, like]) | Return the identity array. |
| [`ones`](generated/numpy.ones#numpy.ones "numpy.ones")(shape[, dtype, order, like]) | Return a new array of given shape and type, filled with ones. |
| [`ones_like`](generated/numpy.ones_like#numpy.ones_like "numpy.ones_like")(a[, dtype, order, subok, shape]) | Return an array of ones with the same shape and type as a given array. |
| [`zeros`](generated/numpy.zeros#numpy.zeros "numpy.zeros")(shape[, dtype, order, like]) | Return a new array of given shape and type, filled with zeros. |
| [`zeros_like`](generated/numpy.zeros_like#numpy.zeros_like "numpy.zeros_like")(a[, dtype, order, subok, shape]) | Return an array of zeros with the same shape and type as a given array. |
| [`full`](generated/numpy.full#numpy.full "numpy.full")(shape, fill_value[, dtype, order, like]) | Return a new array of given shape and type, filled with `fill_value`. |
| [`full_like`](generated/numpy.full_like#numpy.full_like "numpy.full_like")(a, fill_value[, dtype, order, ...]) | Return a full array with the same shape and type as a given array. |

From existing data
------------------

| | |
| --- | --- |
| [`array`](generated/numpy.array#numpy.array "numpy.array")(object[, dtype, copy, order, subok, ...]) | Create an array. |
| [`asarray`](generated/numpy.asarray#numpy.asarray "numpy.asarray")(a[, dtype, order, like]) | Convert the input to an array. |
| [`asanyarray`](generated/numpy.asanyarray#numpy.asanyarray "numpy.asanyarray")(a[, dtype, order, like]) | Convert the input to an ndarray, but pass ndarray subclasses through. |
| [`ascontiguousarray`](generated/numpy.ascontiguousarray#numpy.ascontiguousarray "numpy.ascontiguousarray")(a[, dtype, like]) | Return a contiguous array (ndim >= 1) in memory (C order). |
| [`asmatrix`](generated/numpy.asmatrix#numpy.asmatrix "numpy.asmatrix")(data[, dtype]) | Interpret the input as a matrix. |
| [`copy`](generated/numpy.copy#numpy.copy "numpy.copy")(a[, order, subok]) | Return an array copy of the given object. |
| [`frombuffer`](generated/numpy.frombuffer#numpy.frombuffer "numpy.frombuffer")(buffer[, dtype, count, offset, like]) | Interpret a buffer as a 1-dimensional array. |
| [`from_dlpack`](generated/numpy.from_dlpack#numpy.from_dlpack "numpy.from_dlpack")(x, /) | Create a NumPy array from an object implementing the `__dlpack__` protocol. |
| [`fromfile`](generated/numpy.fromfile#numpy.fromfile "numpy.fromfile")(file[, dtype, count, sep, offset, like]) | Construct an array from data in a text or binary file. |
| [`fromfunction`](generated/numpy.fromfunction#numpy.fromfunction "numpy.fromfunction")(function, shape, *[, dtype, like]) | Construct an array by executing a function over each coordinate. |
| [`fromiter`](generated/numpy.fromiter#numpy.fromiter "numpy.fromiter")(iter, dtype[, count, like]) | Create a new 1-dimensional array from an iterable object. |
| [`fromstring`](generated/numpy.fromstring#numpy.fromstring "numpy.fromstring")(string[, dtype, count, like]) | A new 1-D array initialized from text data in a string. |
| [`loadtxt`](generated/numpy.loadtxt#numpy.loadtxt "numpy.loadtxt")(fname[, dtype, comments, delimiter, ...]) | Load data from a text file. |

Creating record arrays (`numpy.rec`)
------------------------------------

Note

`numpy.rec` is the preferred alias for `numpy.core.records`.

| | |
| --- | --- |
| [`core.records.array`](generated/numpy.core.records.array#numpy.core.records.array "numpy.core.records.array")(obj[, dtype, shape, ...]) | Construct a record array from a wide-variety of objects. |
| [`core.records.fromarrays`](generated/numpy.core.records.fromarrays#numpy.core.records.fromarrays "numpy.core.records.fromarrays")(arrayList[, dtype, ...]) | Create a record array from a (flat) list of arrays |
| [`core.records.fromrecords`](generated/numpy.core.records.fromrecords#numpy.core.records.fromrecords "numpy.core.records.fromrecords")(recList[, dtype, ...]) | Create a recarray from a list of records in text form. |
| [`core.records.fromstring`](generated/numpy.core.records.fromstring#numpy.core.records.fromstring "numpy.core.records.fromstring")(datastring[, dtype, ...]) | Create a record array from binary data |
| [`core.records.fromfile`](generated/numpy.core.records.fromfile#numpy.core.records.fromfile "numpy.core.records.fromfile")(fd[, dtype, shape, ...]) | Create an array from binary file data |

Creating character arrays (numpy.char)
--------------------------------------

Note

[`numpy.char`](routines.char#module-numpy.char "numpy.char") is the preferred alias for `numpy.core.defchararray`.

| | |
| --- | --- |
| [`core.defchararray.array`](generated/numpy.core.defchararray.array#numpy.core.defchararray.array "numpy.core.defchararray.array")(obj[, itemsize, ...]) | Create a [`chararray`](generated/numpy.chararray#numpy.chararray "numpy.chararray"). |
| [`core.defchararray.asarray`](generated/numpy.core.defchararray.asarray#numpy.core.defchararray.asarray "numpy.core.defchararray.asarray")(obj[, itemsize, ...]) | Convert the input to a [`chararray`](generated/numpy.chararray#numpy.chararray "numpy.chararray"), copying the data only if necessary. |

Numerical ranges
----------------

| | |
| --- | --- |
| [`arange`](generated/numpy.arange#numpy.arange "numpy.arange")([start,] stop[, step,][, dtype, like]) | Return evenly spaced values within a given interval. |
| [`linspace`](generated/numpy.linspace#numpy.linspace "numpy.linspace")(start, stop[, num, endpoint, ...]) | Return evenly spaced numbers over a specified interval. |
| [`logspace`](generated/numpy.logspace#numpy.logspace "numpy.logspace")(start, stop[, num, endpoint, base, ...]) | Return numbers spaced evenly on a log scale. |
| [`geomspace`](generated/numpy.geomspace#numpy.geomspace "numpy.geomspace")(start, stop[, num, endpoint, ...]) | Return numbers spaced evenly on a log scale (a geometric progression). |
| [`meshgrid`](generated/numpy.meshgrid#numpy.meshgrid "numpy.meshgrid")(*xi[, copy, sparse, indexing]) | Return coordinate matrices from coordinate vectors. |
| [`mgrid`](generated/numpy.mgrid#numpy.mgrid "numpy.mgrid") | `nd_grid` instance which returns a dense multi-dimensional "meshgrid". |
| [`ogrid`](generated/numpy.ogrid#numpy.ogrid "numpy.ogrid") | `nd_grid` instance which returns an open multi-dimensional "meshgrid". |

Building matrices
-----------------

| | |
| --- | --- |
| [`diag`](generated/numpy.diag#numpy.diag "numpy.diag")(v[, k]) | Extract a diagonal or construct a diagonal array. |
| [`diagflat`](generated/numpy.diagflat#numpy.diagflat "numpy.diagflat")(v[, k]) | Create a two-dimensional array with the flattened input as a diagonal. |
| [`tri`](generated/numpy.tri#numpy.tri "numpy.tri")(N[, M, k, dtype, like]) | An array with ones at and below the given diagonal and zeros elsewhere. |
| [`tril`](generated/numpy.tril#numpy.tril "numpy.tril")(m[, k]) | Lower triangle of an array. |
| [`triu`](generated/numpy.triu#numpy.triu "numpy.triu")(m[, k]) | Upper triangle of an array. |
| [`vander`](generated/numpy.vander#numpy.vander "numpy.vander")(x[, N, increasing]) | Generate a Vandermonde matrix. |

The Matrix class
----------------

| | |
| --- | --- |
| [`mat`](generated/numpy.mat#numpy.mat "numpy.mat")(data[, dtype]) | Interpret the input as a matrix. |
| [`bmat`](generated/numpy.bmat#numpy.bmat "numpy.bmat")(obj[, ldict, gdict]) | Build a matrix object from a string, nested sequence, or array. |

<https://numpy.org/doc/1.23/reference/routines.array-creation.html>

Array manipulation routines
===========================

Basic operations
----------------

| | |
| --- | --- |
| [`copyto`](generated/numpy.copyto#numpy.copyto "numpy.copyto")(dst, src[, casting, where]) | Copies values from one array to another, broadcasting as necessary. |
| [`shape`](generated/numpy.shape#numpy.shape "numpy.shape")(a) | Return the shape of an array. |

Changing array shape
--------------------

| | |
| --- | --- |
| [`reshape`](generated/numpy.reshape#numpy.reshape "numpy.reshape")(a, newshape[, order]) | Gives a new shape to an array without changing its data. |
| [`ravel`](generated/numpy.ravel#numpy.ravel "numpy.ravel")(a[, order]) | Return a contiguous flattened array. |
| [`ndarray.flat`](generated/numpy.ndarray.flat#numpy.ndarray.flat "numpy.ndarray.flat") | A 1-D iterator over the array. |
| [`ndarray.flatten`](generated/numpy.ndarray.flatten#numpy.ndarray.flatten "numpy.ndarray.flatten")([order]) | Return a copy of the array collapsed into one dimension. |

Transpose-like operations
-------------------------

| | |
| --- | --- |
| [`moveaxis`](generated/numpy.moveaxis#numpy.moveaxis "numpy.moveaxis")(a, source, destination) | Move axes of an array to new positions. |
| [`rollaxis`](generated/numpy.rollaxis#numpy.rollaxis "numpy.rollaxis")(a, axis[, start]) | Roll the specified axis backwards, until it lies in a given position. |
| [`swapaxes`](generated/numpy.swapaxes#numpy.swapaxes "numpy.swapaxes")(a, axis1, axis2) | Interchange two axes of an array. |
| [`ndarray.T`](generated/numpy.ndarray.t#numpy.ndarray.T "numpy.ndarray.T") | The transposed array. |
| [`transpose`](generated/numpy.transpose#numpy.transpose "numpy.transpose")(a[, axes]) | Reverse or permute the axes of an array; returns the modified array. |

Changing number of dimensions
-----------------------------

| | |
| --- | --- |
| [`atleast_1d`](generated/numpy.atleast_1d#numpy.atleast_1d "numpy.atleast_1d")(*arys) | Convert inputs to arrays with at least one dimension. |
| [`atleast_2d`](generated/numpy.atleast_2d#numpy.atleast_2d "numpy.atleast_2d")(*arys) | View inputs as arrays with at least two dimensions. |
| [`atleast_3d`](generated/numpy.atleast_3d#numpy.atleast_3d "numpy.atleast_3d")(*arys) | View inputs as arrays with at least three dimensions. |
| [`broadcast`](generated/numpy.broadcast#numpy.broadcast "numpy.broadcast") | Produce an object that mimics broadcasting. |
| [`broadcast_to`](generated/numpy.broadcast_to#numpy.broadcast_to "numpy.broadcast_to")(array, shape[, subok]) | Broadcast an array to a new shape. |
| [`broadcast_arrays`](generated/numpy.broadcast_arrays#numpy.broadcast_arrays "numpy.broadcast_arrays")(*args[, subok]) | Broadcast any number of arrays against each other. |
| [`expand_dims`](generated/numpy.expand_dims#numpy.expand_dims "numpy.expand_dims")(a, axis) | Expand the shape of an array. |
| [`squeeze`](generated/numpy.squeeze#numpy.squeeze "numpy.squeeze")(a[, axis]) | Remove axes of length one from `a`. |

Changing kind of array
----------------------

| | |
| --- | --- |
| [`asarray`](generated/numpy.asarray#numpy.asarray "numpy.asarray")(a[, dtype, order, like]) | Convert the input to an array. |
| [`asanyarray`](generated/numpy.asanyarray#numpy.asanyarray "numpy.asanyarray")(a[, dtype, order, like]) | Convert the input to an ndarray, but pass ndarray subclasses through. |
| [`asmatrix`](generated/numpy.asmatrix#numpy.asmatrix "numpy.asmatrix")(data[, dtype]) | Interpret the input as a matrix. |
| [`asfarray`](generated/numpy.asfarray#numpy.asfarray "numpy.asfarray")(a[, dtype]) | Return an array converted to a float type. |
| [`asfortranarray`](generated/numpy.asfortranarray#numpy.asfortranarray "numpy.asfortranarray")(a[, dtype, like]) | Return an array (ndim >= 1) laid out in Fortran order in memory. |
| [`ascontiguousarray`](generated/numpy.ascontiguousarray#numpy.ascontiguousarray "numpy.ascontiguousarray")(a[, dtype, like]) | Return a contiguous array (ndim >= 1) in memory (C order). |
| [`asarray_chkfinite`](generated/numpy.asarray_chkfinite#numpy.asarray_chkfinite "numpy.asarray_chkfinite")(a[, dtype, order]) | Convert the input to an array, checking for NaNs or Infs. |
| [`require`](generated/numpy.require#numpy.require "numpy.require")(a[, dtype, requirements, like]) | Return an ndarray of the provided type that satisfies requirements. |

Joining arrays
--------------

| | |
| --- | --- |
| [`concatenate`](generated/numpy.concatenate#numpy.concatenate "numpy.concatenate")([axis, out, dtype, casting]) | Join a sequence of arrays along an existing axis. |
| [`stack`](generated/numpy.stack#numpy.stack "numpy.stack")(arrays[, axis, out]) | Join a sequence of arrays along a new axis. |
| [`block`](generated/numpy.block#numpy.block "numpy.block")(arrays) | Assemble an nd-array from nested lists of blocks. |
| [`vstack`](generated/numpy.vstack#numpy.vstack "numpy.vstack")(tup) | Stack arrays in sequence vertically (row wise). |
| [`hstack`](generated/numpy.hstack#numpy.hstack "numpy.hstack")(tup) | Stack arrays in sequence horizontally (column wise). |
| [`dstack`](generated/numpy.dstack#numpy.dstack "numpy.dstack")(tup) | Stack arrays in sequence depth wise (along third axis). |
| [`column_stack`](generated/numpy.column_stack#numpy.column_stack "numpy.column_stack")(tup) | Stack 1-D arrays as columns into a 2-D array. |
| [`row_stack`](generated/numpy.row_stack#numpy.row_stack "numpy.row_stack")(tup) | Stack arrays in sequence vertically (row wise). |

Splitting arrays
----------------

| | |
| --- | --- |
| [`split`](generated/numpy.split#numpy.split "numpy.split")(ary, indices_or_sections[, axis]) | Split an array into multiple sub-arrays as views into `ary`. |
| [`array_split`](generated/numpy.array_split#numpy.array_split "numpy.array_split")(ary, indices_or_sections[, axis]) | Split an array into multiple sub-arrays. |
| [`dsplit`](generated/numpy.dsplit#numpy.dsplit "numpy.dsplit")(ary, indices_or_sections) | Split array into multiple sub-arrays along the 3rd axis (depth). |
| [`hsplit`](generated/numpy.hsplit#numpy.hsplit "numpy.hsplit")(ary, indices_or_sections) | Split an array into multiple sub-arrays horizontally (column-wise). |
| [`vsplit`](generated/numpy.vsplit#numpy.vsplit "numpy.vsplit")(ary, indices_or_sections) | Split an array into multiple sub-arrays vertically (row-wise). |

Tiling arrays
-------------

| | |
| --- | --- |
| [`tile`](generated/numpy.tile#numpy.tile "numpy.tile")(A, reps) | Construct an array by repeating A the number of times given by reps. |
| [`repeat`](generated/numpy.repeat#numpy.repeat "numpy.repeat")(a, repeats[, axis]) | Repeat elements of an array. |

Adding and removing elements
----------------------------

| | |
| --- | --- |
| [`delete`](generated/numpy.delete#numpy.delete "numpy.delete")(arr, obj[, axis]) | Return a new array with sub-arrays along an axis deleted. |
| [`insert`](generated/numpy.insert#numpy.insert "numpy.insert")(arr, obj, values[, axis]) | Insert values along the given axis before the given indices. |
| [`append`](generated/numpy.append#numpy.append "numpy.append")(arr, values[, axis]) | Append values to the end of an array. |
| [`resize`](generated/numpy.resize#numpy.resize "numpy.resize")(a, new_shape) | Return a new array with the specified shape. |
| [`trim_zeros`](generated/numpy.trim_zeros#numpy.trim_zeros "numpy.trim_zeros")(filt[, trim]) | Trim the leading and/or trailing zeros from a 1-D array or sequence. |
| [`unique`](generated/numpy.unique#numpy.unique "numpy.unique")(ar[, return_index, return_inverse, ...]) | Find the unique elements of an array. |

Rearranging elements
--------------------

| | |
| --- | --- |
| [`flip`](generated/numpy.flip#numpy.flip "numpy.flip")(m[, axis]) | Reverse the order of elements in an array along the given axis. |
| | [`fliplr`](generated/numpy.fliplr#numpy.fliplr "numpy.fliplr")(m) | Reverse the order of elements along axis 1 (left/right). | | [`flipud`](generated/numpy.flipud#numpy.flipud "numpy.flipud")(m) | Reverse the order of elements along axis 0 (up/down). | | [`reshape`](generated/numpy.reshape#numpy.reshape "numpy.reshape")(a, newshape[, order]) | Gives a new shape to an array without changing its data. | | [`roll`](generated/numpy.roll#numpy.roll "numpy.roll")(a, shift[, axis]) | Roll array elements along a given axis. | | [`rot90`](generated/numpy.rot90#numpy.rot90 "numpy.rot90")(m[, k, axes]) | Rotate an array by 90 degrees in the plane specified by axes. |

© 2005–2022 NumPy Developers
Licensed under the 3-clause BSD License.
<https://numpy.org/doc/1.23/reference/routines.array-manipulation.html>

Binary operations
=================

Elementwise bit operations
--------------------------

| | | | --- | --- | | [`bitwise_and`](generated/numpy.bitwise_and#numpy.bitwise_and "numpy.bitwise_and")(x1, x2, /[, out, where, ...]) | Compute the bit-wise AND of two arrays element-wise. | | [`bitwise_or`](generated/numpy.bitwise_or#numpy.bitwise_or "numpy.bitwise_or")(x1, x2, /[, out, where, casting, ...]) | Compute the bit-wise OR of two arrays element-wise. | | [`bitwise_xor`](generated/numpy.bitwise_xor#numpy.bitwise_xor "numpy.bitwise_xor")(x1, x2, /[, out, where, ...]) | Compute the bit-wise XOR of two arrays element-wise. | | [`invert`](generated/numpy.invert#numpy.invert "numpy.invert")(x, /[, out, where, casting, order, ...]) | Compute bit-wise inversion, or bit-wise NOT, element-wise. | | [`left_shift`](generated/numpy.left_shift#numpy.left_shift "numpy.left_shift")(x1, x2, /[, out, where, casting, ...]) | Shift the bits of an integer to the left. | | [`right_shift`](generated/numpy.right_shift#numpy.right_shift "numpy.right_shift")(x1, x2, /[, out, where, ...]) | Shift the bits of an integer to the right.
|

Bit packing
-----------

| | | | --- | --- | | [`packbits`](generated/numpy.packbits#numpy.packbits "numpy.packbits")(a, /[, axis, bitorder]) | Packs the elements of a binary-valued array into bits in a uint8 array. | | [`unpackbits`](generated/numpy.unpackbits#numpy.unpackbits "numpy.unpackbits")(a, /[, axis, count, bitorder]) | Unpacks elements of a uint8 array into a binary-valued output array. |

Output formatting
-----------------

| | | | --- | --- | | [`binary_repr`](generated/numpy.binary_repr#numpy.binary_repr "numpy.binary_repr")(num[, width]) | Return the binary representation of the input number as a string. |

<https://numpy.org/doc/1.23/reference/routines.bitwise.html>

String operations
=================

The [`numpy.char`](#module-numpy.char "numpy.char") module provides a set of vectorized string operations for arrays of type [`numpy.str_`](arrays.scalars#numpy.str_ "numpy.str_") or [`numpy.bytes_`](arrays.scalars#numpy.bytes_ "numpy.bytes_"). All of them are based on the string methods in the Python standard library.

String operations
-----------------

| | | | --- | --- | | [`add`](generated/numpy.char.add#numpy.char.add "numpy.char.add")(x1, x2) | Return element-wise string concatenation for two arrays of str or unicode. | | [`multiply`](generated/numpy.char.multiply#numpy.char.multiply "numpy.char.multiply")(a, i) | Return (a * i), that is string multiple concatenation, element-wise. | | [`mod`](generated/numpy.char.mod#numpy.char.mod "numpy.char.mod")(a, values) | Return (a % i), that is pre-Python 2.6 string formatting (interpolation), element-wise for a pair of array_likes of str or unicode. | | [`capitalize`](generated/numpy.char.capitalize#numpy.char.capitalize "numpy.char.capitalize")(a) | Return a copy of `a` with only the first character of each element capitalized.
| | [`center`](generated/numpy.char.center#numpy.char.center "numpy.char.center")(a, width[, fillchar]) | Return a copy of `a` with its elements centered in a string of length `width`. | | [`decode`](generated/numpy.char.decode#numpy.char.decode "numpy.char.decode")(a[, encoding, errors]) | Calls `str.decode` element-wise. | | [`encode`](generated/numpy.char.encode#numpy.char.encode "numpy.char.encode")(a[, encoding, errors]) | Calls `str.encode` element-wise. | | [`expandtabs`](generated/numpy.char.expandtabs#numpy.char.expandtabs "numpy.char.expandtabs")(a[, tabsize]) | Return a copy of each string element where all tab characters are replaced by one or more spaces. | | [`join`](generated/numpy.char.join#numpy.char.join "numpy.char.join")(sep, seq) | Return a string which is the concatenation of the strings in the sequence `seq`. | | [`ljust`](generated/numpy.char.ljust#numpy.char.ljust "numpy.char.ljust")(a, width[, fillchar]) | Return an array with the elements of `a` left-justified in a string of length `width`. | | [`lower`](generated/numpy.char.lower#numpy.char.lower "numpy.char.lower")(a) | Return an array with the elements converted to lowercase. | | [`lstrip`](generated/numpy.char.lstrip#numpy.char.lstrip "numpy.char.lstrip")(a[, chars]) | For each element in `a`, return a copy with the leading characters removed. | | [`partition`](generated/numpy.char.partition#numpy.char.partition "numpy.char.partition")(a, sep) | Partition each element in `a` around `sep`. | | [`replace`](generated/numpy.char.replace#numpy.char.replace "numpy.char.replace")(a, old, new[, count]) | For each element in `a`, return a copy of the string with all occurrences of substring `old` replaced by `new`. | | [`rjust`](generated/numpy.char.rjust#numpy.char.rjust "numpy.char.rjust")(a, width[, fillchar]) | Return an array with the elements of `a` right-justified in a string of length `width`. 
| | [`rpartition`](generated/numpy.char.rpartition#numpy.char.rpartition "numpy.char.rpartition")(a, sep) | Partition (split) each element around the right-most separator. | | [`rsplit`](generated/numpy.char.rsplit#numpy.char.rsplit "numpy.char.rsplit")(a[, sep, maxsplit]) | For each element in `a`, return a list of the words in the string, using `sep` as the delimiter string. | | [`rstrip`](generated/numpy.char.rstrip#numpy.char.rstrip "numpy.char.rstrip")(a[, chars]) | For each element in `a`, return a copy with the trailing characters removed. | | [`split`](generated/numpy.char.split#numpy.char.split "numpy.char.split")(a[, sep, maxsplit]) | For each element in `a`, return a list of the words in the string, using `sep` as the delimiter string. | | [`splitlines`](generated/numpy.char.splitlines#numpy.char.splitlines "numpy.char.splitlines")(a[, keepends]) | For each element in `a`, return a list of the lines in the element, breaking at line boundaries. | | [`strip`](generated/numpy.char.strip#numpy.char.strip "numpy.char.strip")(a[, chars]) | For each element in `a`, return a copy with the leading and trailing characters removed. | | [`swapcase`](generated/numpy.char.swapcase#numpy.char.swapcase "numpy.char.swapcase")(a) | Return element-wise a copy of the string with uppercase characters converted to lowercase and vice versa. | | [`title`](generated/numpy.char.title#numpy.char.title "numpy.char.title")(a) | Return element-wise title cased version of string or unicode. | | [`translate`](generated/numpy.char.translate#numpy.char.translate "numpy.char.translate")(a, table[, deletechars]) | For each element in `a`, return a copy of the string where all characters occurring in the optional argument `deletechars` are removed, and the remaining characters have been mapped through the given translation table. | | [`upper`](generated/numpy.char.upper#numpy.char.upper "numpy.char.upper")(a) | Return an array with the elements converted to uppercase. 
| | [`zfill`](generated/numpy.char.zfill#numpy.char.zfill "numpy.char.zfill")(a, width) | Return the numeric string left-filled with zeros | Comparison ---------- Unlike the standard numpy comparison operators, the ones in the `char` module strip trailing whitespace characters before performing the comparison. | | | | --- | --- | | [`equal`](generated/numpy.char.equal#numpy.char.equal "numpy.char.equal")(x1, x2) | Return (x1 == x2) element-wise. | | [`not_equal`](generated/numpy.char.not_equal#numpy.char.not_equal "numpy.char.not_equal")(x1, x2) | Return (x1 != x2) element-wise. | | [`greater_equal`](generated/numpy.char.greater_equal#numpy.char.greater_equal "numpy.char.greater_equal")(x1, x2) | Return (x1 >= x2) element-wise. | | [`less_equal`](generated/numpy.char.less_equal#numpy.char.less_equal "numpy.char.less_equal")(x1, x2) | Return (x1 <= x2) element-wise. | | [`greater`](generated/numpy.char.greater#numpy.char.greater "numpy.char.greater")(x1, x2) | Return (x1 > x2) element-wise. | | [`less`](generated/numpy.char.less#numpy.char.less "numpy.char.less")(x1, x2) | Return (x1 < x2) element-wise. | | [`compare_chararrays`](generated/numpy.char.compare_chararrays#numpy.char.compare_chararrays "numpy.char.compare_chararrays")(a1, a2, cmp, rstrip) | Performs element-wise comparison of two string arrays using the comparison operator specified by `cmp_op`. | String information ------------------ | | | | --- | --- | | [`count`](generated/numpy.char.count#numpy.char.count "numpy.char.count")(a, sub[, start, end]) | Returns an array with the number of non-overlapping occurrences of substring `sub` in the range [`start`, `end`]. | | [`endswith`](generated/numpy.char.endswith#numpy.char.endswith "numpy.char.endswith")(a, suffix[, start, end]) | Returns a boolean array which is `True` where the string element in `a` ends with `suffix`, otherwise `False`. 
| | [`find`](generated/numpy.char.find#numpy.char.find "numpy.char.find")(a, sub[, start, end]) | For each element, return the lowest index in the string where substring `sub` is found. | | [`index`](generated/numpy.char.index#numpy.char.index "numpy.char.index")(a, sub[, start, end]) | Like [`find`](generated/numpy.char.find#numpy.char.find "numpy.char.find"), but raises `ValueError` when the substring is not found. | | [`isalpha`](generated/numpy.char.isalpha#numpy.char.isalpha "numpy.char.isalpha")(a) | Returns true for each element if all characters in the string are alphabetic and there is at least one character, false otherwise. | | [`isalnum`](generated/numpy.char.isalnum#numpy.char.isalnum "numpy.char.isalnum")(a) | Returns true for each element if all characters in the string are alphanumeric and there is at least one character, false otherwise. | | [`isdecimal`](generated/numpy.char.isdecimal#numpy.char.isdecimal "numpy.char.isdecimal")(a) | For each element, return True if there are only decimal characters in the element. | | [`isdigit`](generated/numpy.char.isdigit#numpy.char.isdigit "numpy.char.isdigit")(a) | Returns true for each element if all characters in the string are digits and there is at least one character, false otherwise. | | [`islower`](generated/numpy.char.islower#numpy.char.islower "numpy.char.islower")(a) | Returns true for each element if all cased characters in the string are lowercase and there is at least one cased character, false otherwise. | | [`isnumeric`](generated/numpy.char.isnumeric#numpy.char.isnumeric "numpy.char.isnumeric")(a) | For each element, return True if there are only numeric characters in the element. | | [`isspace`](generated/numpy.char.isspace#numpy.char.isspace "numpy.char.isspace")(a) | Returns true for each element if there are only whitespace characters in the string and there is at least one character, false otherwise. 
| | [`istitle`](generated/numpy.char.istitle#numpy.char.istitle "numpy.char.istitle")(a) | Returns true for each element if the element is a titlecased string and there is at least one character, false otherwise. | | [`isupper`](generated/numpy.char.isupper#numpy.char.isupper "numpy.char.isupper")(a) | Returns true for each element if all cased characters in the string are uppercase and there is at least one character, false otherwise. | | [`rfind`](generated/numpy.char.rfind#numpy.char.rfind "numpy.char.rfind")(a, sub[, start, end]) | For each element in `a`, return the highest index in the string where substring `sub` is found, such that `sub` is contained within [`start`, `end`]. | | [`rindex`](generated/numpy.char.rindex#numpy.char.rindex "numpy.char.rindex")(a, sub[, start, end]) | Like [`rfind`](generated/numpy.char.rfind#numpy.char.rfind "numpy.char.rfind"), but raises `ValueError` when the substring `sub` is not found. | | [`startswith`](generated/numpy.char.startswith#numpy.char.startswith "numpy.char.startswith")(a, prefix[, start, end]) | Returns a boolean array which is `True` where the string element in `a` starts with `prefix`, otherwise `False`. | | [`str_len`](generated/numpy.char.str_len#numpy.char.str_len "numpy.char.str_len")(a) | Return len(a) element-wise. | Convenience class ----------------- | | | | --- | --- | | [`array`](generated/numpy.char.array#numpy.char.array "numpy.char.array")(obj[, itemsize, copy, unicode, order]) | Create a [`chararray`](generated/numpy.char.chararray#numpy.char.chararray "numpy.char.chararray"). | | [`asarray`](generated/numpy.char.asarray#numpy.char.asarray "numpy.char.asarray")(obj[, itemsize, unicode, order]) | Convert the input to a [`chararray`](generated/numpy.char.chararray#numpy.char.chararray "numpy.char.chararray"), copying the data only if necessary. 
| | [`chararray`](generated/numpy.char.chararray#numpy.char.chararray "numpy.char.chararray")(shape[, itemsize, unicode, ...]) | Provides a convenient view on arrays of string and unicode values. |

<https://numpy.org/doc/1.23/reference/routines.char.html>

C-Types Foreign Function Interface (numpy.ctypeslib)
====================================================

numpy.ctypeslib.as_array(*obj*, *shape=None*)[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/ctypeslib.py#L508-L526)

Create a numpy array from a ctypes array or POINTER. The numpy array shares the memory with the ctypes object. The shape parameter must be given if converting from a ctypes POINTER; it is ignored if converting from a ctypes array.

numpy.ctypeslib.as_ctypes(*obj*)[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/ctypeslib.py#L529-L547)

Create and return a ctypes object from a numpy array. Anything that exposes the __array_interface__ is accepted.

numpy.ctypeslib.as_ctypes_type(*dtype*)[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/ctypeslib.py#L467-L505)

Convert a dtype into a ctypes type.

Parameters

**dtype** : dtype
The dtype to convert.

Returns

**ctype**
A ctype scalar, union, array, or struct.

Raises

NotImplementedError
If the conversion is not possible.

#### Notes

This function does not losslessly round-trip in either direction.

`np.dtype(as_ctypes_type(dt))` will:

* insert padding fields
* reorder fields to be sorted by offset
* discard field titles

`as_ctypes_type(np.dtype(ctype))` will:

* discard the class names of [`ctypes.Structure`](https://docs.python.org/3/library/ctypes.html#ctypes.Structure "(in Python v3.10)")s and [`ctypes.Union`](https://docs.python.org/3/library/ctypes.html#ctypes.Union "(in Python v3.10)")s
* convert single-element [`ctypes.Union`](https://docs.python.org/3/library/ctypes.html#ctypes.Union "(in Python v3.10)")s into single-element [`ctypes.Structure`](https://docs.python.org/3/library/ctypes.html#ctypes.Structure "(in Python v3.10)")s
* insert padding fields

numpy.ctypeslib.load_library(*libname*, *loader_path*)[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/ctypeslib.py#L90-L163)

It is possible to load a library using

```
>>> lib = ctypes.cdll[<full_path_name>]
```

But there are cross-platform considerations, such as library file extensions, plus the fact that Windows will just load the first library it finds with that name. NumPy supplies the load_library function as a convenience.

Changed in version 1.20.0: Allow libname and loader_path to take any [path-like object](https://docs.python.org/3/glossary.html#term-path-like-object "(in Python v3.10)").

Parameters

**libname** : path-like
Name of the library, which can have ‘lib’ as a prefix, but without an extension.

**loader_path** : path-like
Where the library can be found.

Returns

**ctypes.cdll[libpath]** : library object
A ctypes library object.

Raises

OSError
If there is no library with the expected extension, or the library is defective and cannot be loaded.

numpy.ctypeslib.ndpointer(*dtype=None*, *ndim=None*, *shape=None*, *flags=None*)[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/ctypeslib.py#L235-L349)

Array-checking restype/argtypes. An ndpointer instance is used to describe an ndarray in restype and argtypes specifications. This approach is more flexible than using, for example, `POINTER(c_double)`, since several restrictions can be specified, which are verified upon calling the ctypes function. These include data type, number of dimensions, shape and flags. If a given array does not satisfy the specified restrictions, a `TypeError` is raised.

Parameters

**dtype** : data-type, optional
Array data-type.

**ndim** : int, optional
Number of array dimensions.

**shape** : tuple of ints, optional
Array shape.

**flags** : str or tuple of str
Array flags; may be one or more of:

* C_CONTIGUOUS / C / CONTIGUOUS
* F_CONTIGUOUS / F / FORTRAN
* OWNDATA / O
* WRITEABLE / W
* ALIGNED / A
* WRITEBACKIFCOPY / X

Returns

**klass** : ndpointer type object
A type object, which is an `_ndptr` instance containing dtype, ndim, shape and flags information.

Raises

TypeError
If a given array does not satisfy the specified restrictions.

#### Examples

```
>>> clib.somefunc.argtypes = [np.ctypeslib.ndpointer(dtype=np.float64,
...                                                  ndim=1,
...                                                  flags='C_CONTIGUOUS')]
...
>>> clib.somefunc(np.array([1, 2, 3], dtype=np.float64))
...
```

*class* numpy.ctypeslib.c_intp

A [`ctypes`](https://docs.python.org/3/library/ctypes.html#module-ctypes "(in Python v3.10)") signed integer type of the same size as [`numpy.intp`](arrays.scalars#numpy.intp "numpy.intp"). Depending on the platform, it can be an alias for either [`c_int`](https://docs.python.org/3/library/ctypes.html#ctypes.c_int "(in Python v3.10)"), [`c_long`](https://docs.python.org/3/library/ctypes.html#ctypes.c_long "(in Python v3.10)") or [`c_longlong`](https://docs.python.org/3/library/ctypes.html#ctypes.c_longlong "(in Python v3.10)").
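The shared-memory behaviour of `as_array` and `as_ctypes` described above can be checked without any compiled library at all; the following is a minimal sketch (NumPy plus the standard-library `ctypes` module only) round-tripping an array through ctypes:

```python
import ctypes
import numpy as np

# The object returned by as_ctypes shares memory with the original
# array, so a write through either side is visible on both.
a = np.array([1.0, 2.0, 3.0])
c_arr = np.ctypeslib.as_ctypes(a)   # a ctypes c_double array of length 3
c_arr[0] = 99.0                     # mutate through the ctypes view
print(a[0])                         # 99.0 -- the memory is shared

# No shape argument is needed when converting back from a ctypes array
# (it would be required for a bare POINTER).
b = np.ctypeslib.as_array(c_arr)
print(b.shape, b.dtype)             # (3,) float64

# as_ctypes_type maps a dtype to a ctypes type; int32 maps to a
# 4-byte signed integer type (typically ctypes.c_int).
print(ctypes.sizeof(np.ctypeslib.as_ctypes_type(np.dtype(np.int32))))  # 4
```

Note that `a` and `b` here are different views of the same buffer, which is why no shape check is needed on the way back.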
<https://numpy.org/doc/1.23/reference/routines.ctypeslib.html>

Datetime Support Functions
==========================

| | | | --- | --- | | [`datetime_as_string`](generated/numpy.datetime_as_string#numpy.datetime_as_string "numpy.datetime_as_string")(arr[, unit, timezone, ...]) | Convert an array of datetimes into an array of strings. | | [`datetime_data`](generated/numpy.datetime_data#numpy.datetime_data "numpy.datetime_data")(dtype, /) | Get information about the step size of a date or time type. |

Business Day Functions
----------------------

| | | | --- | --- | | [`busdaycalendar`](generated/numpy.busdaycalendar#numpy.busdaycalendar "numpy.busdaycalendar")([weekmask, holidays]) | A business day calendar object that efficiently stores information defining valid days for the busday family of functions. | | [`is_busday`](generated/numpy.is_busday#numpy.is_busday "numpy.is_busday")(dates[, weekmask, holidays, ...]) | Calculates which of the given dates are valid days, and which are not. | | [`busday_offset`](generated/numpy.busday_offset#numpy.busday_offset "numpy.busday_offset")(dates, offsets[, roll, ...]) | First adjusts the date to fall on a valid day according to the `roll` rule, then applies offsets to the given dates counted in valid days. | | [`busday_count`](generated/numpy.busday_count#numpy.busday_count "numpy.busday_count")(begindates, enddates[, ...]) | Counts the number of valid days between `begindates` and `enddates`, not including the day of `enddates`. |

<https://numpy.org/doc/1.23/reference/routines.datetime.html>

Data type routines
==================

| | | | --- | --- | | [`can_cast`](generated/numpy.can_cast#numpy.can_cast "numpy.can_cast")(from_, to[, casting]) | Returns True if cast between data types can occur according to the casting rule.
| | [`promote_types`](generated/numpy.promote_types#numpy.promote_types "numpy.promote_types")(type1, type2) | Returns the data type with the smallest size and smallest scalar kind to which both `type1` and `type2` may be safely cast. | | [`min_scalar_type`](generated/numpy.min_scalar_type#numpy.min_scalar_type "numpy.min_scalar_type")(a, /) | For scalar `a`, returns the data type with the smallest size and smallest scalar kind which can hold its value. | | [`result_type`](generated/numpy.result_type#numpy.result_type "numpy.result_type")(*arrays_and_dtypes) | Returns the type that results from applying the NumPy type promotion rules to the arguments. | | [`common_type`](generated/numpy.common_type#numpy.common_type "numpy.common_type")(*arrays) | Return a scalar type which is common to the input arrays. | | [`obj2sctype`](generated/numpy.obj2sctype#numpy.obj2sctype "numpy.obj2sctype")(rep[, default]) | Return the scalar dtype or NumPy equivalent of Python type of an object. | Creating data types ------------------- | | | | --- | --- | | [`dtype`](generated/numpy.dtype#numpy.dtype "numpy.dtype")(dtype[, align, copy]) | Create a data type object. | | [`format_parser`](generated/numpy.format_parser#numpy.format_parser "numpy.format_parser")(formats, names, titles[, ...]) | Class to convert formats, names, titles description to a dtype. | Data type information --------------------- | | | | --- | --- | | [`finfo`](generated/numpy.finfo#numpy.finfo "numpy.finfo")(dtype) | Machine limits for floating point types. | | [`iinfo`](generated/numpy.iinfo#numpy.iinfo "numpy.iinfo")(type) | Machine limits for integer types. | | [`MachAr`](generated/numpy.machar#numpy.MachAr "numpy.MachAr")([float_conv, int_conv, ...]) | Diagnosing machine parameters. | Data type testing ----------------- | | | | --- | --- | | [`issctype`](generated/numpy.issctype#numpy.issctype "numpy.issctype")(rep) | Determines whether the given object represents a scalar data-type. 
| | [`issubdtype`](generated/numpy.issubdtype#numpy.issubdtype "numpy.issubdtype")(arg1, arg2) | Returns True if first argument is a typecode lower/equal in type hierarchy. | | [`issubsctype`](generated/numpy.issubsctype#numpy.issubsctype "numpy.issubsctype")(arg1, arg2) | Determine if the first argument is a subclass of the second argument. | | [`issubclass_`](generated/numpy.issubclass_#numpy.issubclass_ "numpy.issubclass_")(arg1, arg2) | Determine if a class is a subclass of a second class. | | [`find_common_type`](generated/numpy.find_common_type#numpy.find_common_type "numpy.find_common_type")(array_types, scalar_types) | Determine common type following standard coercion rules. |

Miscellaneous
-------------

| | | | --- | --- | | [`typename`](generated/numpy.typename#numpy.typename "numpy.typename")(char) | Return a description for the given data type code. | | [`sctype2char`](generated/numpy.sctype2char#numpy.sctype2char "numpy.sctype2char")(sctype) | Return the string representation of a scalar dtype. | | [`mintypecode`](generated/numpy.mintypecode#numpy.mintypecode "numpy.mintypecode")(typechars[, typeset, default]) | Return the character for the minimum-size type to which given types can be safely cast. | | [`maximum_sctype`](generated/numpy.maximum_sctype#numpy.maximum_sctype "numpy.maximum_sctype")(t) | Return the scalar type of highest precision of the same kind as the input. |

<https://numpy.org/doc/1.23/reference/routines.dtype.html>

Optionally SciPy-accelerated routines (numpy.dual)
==================================================

Deprecated since version 1.20. *This module is deprecated. Instead of importing functions from* `numpy.dual`, *the functions should be imported directly from NumPy or SciPy*.

Aliases for functions which may be accelerated by SciPy.
[SciPy](https://www.scipy.org) can be built to use accelerated or otherwise improved libraries for FFTs, linear algebra, and special functions. This module allows developers to transparently support these accelerated functions when SciPy is available but still support users who have only installed NumPy. Linear algebra -------------- | | | | --- | --- | | [`cholesky`](generated/numpy.linalg.cholesky#numpy.linalg.cholesky "numpy.linalg.cholesky")(a) | Cholesky decomposition. | | [`det`](generated/numpy.linalg.det#numpy.linalg.det "numpy.linalg.det")(a) | Compute the determinant of an array. | | [`eig`](generated/numpy.linalg.eig#numpy.linalg.eig "numpy.linalg.eig")(a) | Compute the eigenvalues and right eigenvectors of a square array. | | [`eigh`](generated/numpy.linalg.eigh#numpy.linalg.eigh "numpy.linalg.eigh")(a[, UPLO]) | Return the eigenvalues and eigenvectors of a complex Hermitian (conjugate symmetric) or a real symmetric matrix. | | [`eigvals`](generated/numpy.linalg.eigvals#numpy.linalg.eigvals "numpy.linalg.eigvals")(a) | Compute the eigenvalues of a general matrix. | | [`eigvalsh`](generated/numpy.linalg.eigvalsh#numpy.linalg.eigvalsh "numpy.linalg.eigvalsh")(a[, UPLO]) | Compute the eigenvalues of a complex Hermitian or real symmetric matrix. | | [`inv`](generated/numpy.linalg.inv#numpy.linalg.inv "numpy.linalg.inv")(a) | Compute the (multiplicative) inverse of a matrix. | | [`lstsq`](generated/numpy.linalg.lstsq#numpy.linalg.lstsq "numpy.linalg.lstsq")(a, b[, rcond]) | Return the least-squares solution to a linear matrix equation. | | [`norm`](generated/numpy.linalg.norm#numpy.linalg.norm "numpy.linalg.norm")(x[, ord, axis, keepdims]) | Matrix or vector norm. | | [`pinv`](generated/numpy.linalg.pinv#numpy.linalg.pinv "numpy.linalg.pinv")(a[, rcond, hermitian]) | Compute the (Moore-Penrose) pseudo-inverse of a matrix. 
| | [`solve`](generated/numpy.linalg.solve#numpy.linalg.solve "numpy.linalg.solve")(a, b) | Solve a linear matrix equation, or system of linear scalar equations. | | [`svd`](generated/numpy.linalg.svd#numpy.linalg.svd "numpy.linalg.svd")(a[, full_matrices, compute_uv, hermitian]) | Singular Value Decomposition. |

FFT
---

| | | | --- | --- | | [`fft`](generated/numpy.fft.fft#numpy.fft.fft "numpy.fft.fft")(a[, n, axis, norm]) | Compute the one-dimensional discrete Fourier Transform. | | [`fft2`](generated/numpy.fft.fft2#numpy.fft.fft2 "numpy.fft.fft2")(a[, s, axes, norm]) | Compute the 2-dimensional discrete Fourier Transform. | | [`fftn`](generated/numpy.fft.fftn#numpy.fft.fftn "numpy.fft.fftn")(a[, s, axes, norm]) | Compute the N-dimensional discrete Fourier Transform. | | [`ifft`](generated/numpy.fft.ifft#numpy.fft.ifft "numpy.fft.ifft")(a[, n, axis, norm]) | Compute the one-dimensional inverse discrete Fourier Transform. | | [`ifft2`](generated/numpy.fft.ifft2#numpy.fft.ifft2 "numpy.fft.ifft2")(a[, s, axes, norm]) | Compute the 2-dimensional inverse discrete Fourier Transform. | | [`ifftn`](generated/numpy.fft.ifftn#numpy.fft.ifftn "numpy.fft.ifftn")(a[, s, axes, norm]) | Compute the N-dimensional inverse discrete Fourier Transform. |

Other
-----

| | | | --- | --- | | [`i0`](generated/numpy.i0#numpy.i0 "numpy.i0")(x) | Modified Bessel function of the first kind, order 0. |

<https://numpy.org/doc/1.23/reference/routines.dual.html>

Mathematical functions with automatic domain
============================================

Note: [`numpy.emath`](#module-numpy.emath "numpy.emath") is a preferred alias for `numpy.lib.scimath`, available after [`numpy`](index#module-numpy "numpy") is imported.

Wrapper functions for more user-friendly calling of certain math functions whose output data-type differs from the input data-type in certain domains of the input.
For example, for functions like [`log`](generated/numpy.emath.log#numpy.emath.log "numpy.emath.log") with branch cuts, the versions in this module provide the mathematically valid answers in the complex plane:

```
>>> import math
>>> np.emath.log(-math.exp(1)) == (1+1j*math.pi)
True
```

Similarly, [`sqrt`](generated/numpy.emath.sqrt#numpy.emath.sqrt "numpy.emath.sqrt"), other base logarithms, [`power`](generated/numpy.emath.power#numpy.emath.power "numpy.emath.power") and trig functions are correctly handled. See their respective docstrings for specific examples.

Functions
---------

| | | | --- | --- | | [`sqrt`](generated/numpy.emath.sqrt#numpy.emath.sqrt "numpy.emath.sqrt")(x) | Compute the square root of x. | | [`log`](generated/numpy.emath.log#numpy.emath.log "numpy.emath.log")(x) | Compute the natural logarithm of `x`. | | [`log2`](generated/numpy.emath.log2#numpy.emath.log2 "numpy.emath.log2")(x) | Compute the logarithm base 2 of `x`. | | [`logn`](generated/numpy.emath.logn#numpy.emath.logn "numpy.emath.logn")(n, x) | Take log base n of x. | | [`log10`](generated/numpy.emath.log10#numpy.emath.log10 "numpy.emath.log10")(x) | Compute the logarithm base 10 of `x`. | | [`power`](generated/numpy.emath.power#numpy.emath.power "numpy.emath.power")(x, p) | Return x to the power p, (x**p). | | [`arccos`](generated/numpy.emath.arccos#numpy.emath.arccos "numpy.emath.arccos")(x) | Compute the inverse cosine of x. | | [`arcsin`](generated/numpy.emath.arcsin#numpy.emath.arcsin "numpy.emath.arcsin")(x) | Compute the inverse sine of x. | | [`arctanh`](generated/numpy.emath.arctanh#numpy.emath.arctanh "numpy.emath.arctanh")(x) | Compute the inverse hyperbolic tangent of `x`. |
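The automatic domain promotion described above can be seen with a few one-liners; this minimal sketch (NumPy only) compares `np.emath` results for inputs that would produce `nan` from the plain real-valued routines:

```python
import numpy as np

# numpy.sqrt(-1) returns nan (with a RuntimeWarning) for real input;
# numpy.emath.sqrt instead promotes the result to the complex domain.
print(np.emath.sqrt(-1))        # 1j
print(np.emath.log(-1))         # pi*1j (the principal branch)
print(np.emath.logn(3, 27))     # 3.0 -- logarithm to an arbitrary base n
```

Inputs that lie inside the real domain are returned as plain floats, so the promotion only happens when it is mathematically required.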
<https://numpy.org/doc/1.23/reference/routines.emath.html>

Floating point error handling
=============================

Setting and getting error handling
----------------------------------

| | | | --- | --- | | [`seterr`](generated/numpy.seterr#numpy.seterr "numpy.seterr")([all, divide, over, under, invalid]) | Set how floating-point errors are handled. | | [`geterr`](generated/numpy.geterr#numpy.geterr "numpy.geterr")() | Get the current way of handling floating-point errors. | | [`seterrcall`](generated/numpy.seterrcall#numpy.seterrcall "numpy.seterrcall")(func) | Set the floating-point error callback function or log object. | | [`geterrcall`](generated/numpy.geterrcall#numpy.geterrcall "numpy.geterrcall")() | Return the current callback function used on floating-point errors. | | [`errstate`](generated/numpy.errstate#numpy.errstate "numpy.errstate")(**kwargs) | Context manager for floating-point error handling. |

Internal functions
------------------

| | | | --- | --- | | [`seterrobj`](generated/numpy.seterrobj#numpy.seterrobj "numpy.seterrobj")(errobj, /) | Set the object that defines floating-point error handling. | | [`geterrobj`](generated/numpy.geterrobj#numpy.geterrobj "numpy.geterrobj")() | Return the current object that defines floating-point error handling. |

<https://numpy.org/doc/1.23/reference/routines.err.html>

Discrete Fourier Transform (numpy.fft)
======================================

The SciPy module [`scipy.fft`](https://docs.scipy.org/doc/scipy/reference/fft.html#module-scipy.fft "(in SciPy v1.8.1)") is a more comprehensive superset of `numpy.fft`, which includes only a basic set of routines.

Standard FFTs
-------------

| | | | --- | --- | | [`fft`](generated/numpy.fft.fft#numpy.fft.fft "numpy.fft.fft")(a[, n, axis, norm]) | Compute the one-dimensional discrete Fourier Transform.
| | [`ifft`](generated/numpy.fft.ifft#numpy.fft.ifft "numpy.fft.ifft")(a[, n, axis, norm]) | Compute the one-dimensional inverse discrete Fourier Transform. | | [`fft2`](generated/numpy.fft.fft2#numpy.fft.fft2 "numpy.fft.fft2")(a[, s, axes, norm]) | Compute the 2-dimensional discrete Fourier Transform. | | [`ifft2`](generated/numpy.fft.ifft2#numpy.fft.ifft2 "numpy.fft.ifft2")(a[, s, axes, norm]) | Compute the 2-dimensional inverse discrete Fourier Transform. | | [`fftn`](generated/numpy.fft.fftn#numpy.fft.fftn "numpy.fft.fftn")(a[, s, axes, norm]) | Compute the N-dimensional discrete Fourier Transform. | | [`ifftn`](generated/numpy.fft.ifftn#numpy.fft.ifftn "numpy.fft.ifftn")(a[, s, axes, norm]) | Compute the N-dimensional inverse discrete Fourier Transform. | Real FFTs --------- | | | | --- | --- | | [`rfft`](generated/numpy.fft.rfft#numpy.fft.rfft "numpy.fft.rfft")(a[, n, axis, norm]) | Compute the one-dimensional discrete Fourier Transform for real input. | | [`irfft`](generated/numpy.fft.irfft#numpy.fft.irfft "numpy.fft.irfft")(a[, n, axis, norm]) | Computes the inverse of [`rfft`](generated/numpy.fft.rfft#numpy.fft.rfft "numpy.fft.rfft"). | | [`rfft2`](generated/numpy.fft.rfft2#numpy.fft.rfft2 "numpy.fft.rfft2")(a[, s, axes, norm]) | Compute the 2-dimensional FFT of a real array. | | [`irfft2`](generated/numpy.fft.irfft2#numpy.fft.irfft2 "numpy.fft.irfft2")(a[, s, axes, norm]) | Computes the inverse of [`rfft2`](generated/numpy.fft.rfft2#numpy.fft.rfft2 "numpy.fft.rfft2"). | | [`rfftn`](generated/numpy.fft.rfftn#numpy.fft.rfftn "numpy.fft.rfftn")(a[, s, axes, norm]) | Compute the N-dimensional discrete Fourier Transform for real input. | | [`irfftn`](generated/numpy.fft.irfftn#numpy.fft.irfftn "numpy.fft.irfftn")(a[, s, axes, norm]) | Computes the inverse of [`rfftn`](generated/numpy.fft.rfftn#numpy.fft.rfftn "numpy.fft.rfftn"). 
| Hermitian FFTs -------------- | | | | --- | --- | | [`hfft`](generated/numpy.fft.hfft#numpy.fft.hfft "numpy.fft.hfft")(a[, n, axis, norm]) | Compute the FFT of a signal that has Hermitian symmetry, i.e., a real spectrum. | | [`ihfft`](generated/numpy.fft.ihfft#numpy.fft.ihfft "numpy.fft.ihfft")(a[, n, axis, norm]) | Compute the inverse FFT of a signal that has Hermitian symmetry. | Helper routines --------------- | | | | --- | --- | | [`fftfreq`](generated/numpy.fft.fftfreq#numpy.fft.fftfreq "numpy.fft.fftfreq")(n[, d]) | Return the Discrete Fourier Transform sample frequencies. | | [`rfftfreq`](generated/numpy.fft.rfftfreq#numpy.fft.rfftfreq "numpy.fft.rfftfreq")(n[, d]) | Return the Discrete Fourier Transform sample frequencies (for usage with rfft, irfft). | | [`fftshift`](generated/numpy.fft.fftshift#numpy.fft.fftshift "numpy.fft.fftshift")(x[, axes]) | Shift the zero-frequency component to the center of the spectrum. | | [`ifftshift`](generated/numpy.fft.ifftshift#numpy.fft.ifftshift "numpy.fft.ifftshift")(x[, axes]) | The inverse of [`fftshift`](generated/numpy.fft.fftshift#numpy.fft.fftshift "numpy.fft.fftshift"). | Background information ---------------------- Fourier analysis is fundamentally a method for expressing a function as a sum of periodic components, and for recovering the function from those components. When both the function and its Fourier transform are replaced with discretized counterparts, it is called the discrete Fourier transform (DFT). The DFT has become a mainstay of numerical computing in part because of a very fast algorithm for computing it, called the Fast Fourier Transform (FFT), which was known to Gauss (1805) and was brought to light in its current form by Cooley and Tukey [[CT]](#rfb1dc64dd6a5-ct). Press et al. [[NR]](#rfb1dc64dd6a5-nr) provide an accessible introduction to Fourier analysis and its applications. 
Because the discrete Fourier transform separates its input into components that contribute at discrete frequencies, it has a great number of applications in digital signal processing, e.g., for filtering, and in this context the discretized input to the transform is customarily referred to as a *signal*, which exists in the *time domain*. The output is called a *spectrum* or *transform* and exists in the *frequency domain*. Implementation details ---------------------- There are many ways to define the DFT, varying in the sign of the exponent, normalization, etc. In this implementation, the DFT is defined as \[A_k = \sum_{m=0}^{n-1} a_m \exp\left\{-2\pi i{mk \over n}\right\} \qquad k = 0,\ldots,n-1.\] The DFT is in general defined for complex inputs and outputs, and a single-frequency component at linear frequency \(f\) is represented by a complex exponential \(a_m = \exp\{2\pi i\,f m\Delta t\}\), where \(\Delta t\) is the sampling interval. The values in the result follow so-called “standard” order: If `A = fft(a, n)`, then `A[0]` contains the zero-frequency term (the sum of the signal), which is always purely real for real inputs. Then `A[1:n/2]` contains the positive-frequency terms, and `A[n/2+1:]` contains the negative-frequency terms, in order of decreasingly negative frequency. For an even number of input points, `A[n/2]` represents both positive and negative Nyquist frequency, and is also purely real for real input. For an odd number of input points, `A[(n-1)/2]` contains the largest positive frequency, while `A[(n+1)/2]` contains the largest negative frequency. The routine `np.fft.fftfreq(n)` returns an array giving the frequencies of corresponding elements in the output. The routine `np.fft.fftshift(A)` shifts transforms and their frequencies to put the zero-frequency components in the middle, and `np.fft.ifftshift(A)` undoes that shift. 
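As a minimal sketch of the “standard” ordering described above (a DC term first, then positive frequencies, then negative frequencies), using `fft`, `fftfreq`, and `fftshift`:

```python
import numpy as np

# 8-sample signal: a DC offset plus a single-frequency cosine.
t = np.arange(8)
a = 1.0 + np.cos(2 * np.pi * t / 8)

A = np.fft.fft(a)
freqs = np.fft.fftfreq(8)

# A[0] is the zero-frequency term: the sum of the signal,
# purely real for real input.
assert np.isclose(A[0].real, a.sum())

# fftfreq lists frequencies in "standard" order:
# [0, 0.125, 0.25, 0.375, -0.5, -0.375, -0.25, -0.125]
# (index n/2 holds the shared positive/negative Nyquist frequency).
print(freqs)

# fftshift moves the zero-frequency component to the center;
# ifftshift undoes the shift.
print(np.fft.fftshift(freqs))
assert np.array_equal(np.fft.ifftshift(np.fft.fftshift(freqs)), freqs)
```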
When the input `a` is a time-domain signal and `A = fft(a)`, `np.abs(A)` is its amplitude spectrum and `np.abs(A)**2` is its power spectrum. The phase spectrum is obtained by `np.angle(A)`. The inverse DFT is defined as \[a_m = \frac{1}{n}\sum_{k=0}^{n-1}A_k\exp\left\{2\pi i{mk\over n}\right\} \qquad m = 0,\ldots,n-1.\] It differs from the forward transform by the sign of the exponential argument and the default normalization by \(1/n\). Type Promotion -------------- [`numpy.fft`](#module-numpy.fft "numpy.fft") promotes `float32` and `complex64` arrays to `float64` and `complex128` arrays respectively. For an FFT implementation that does not promote input arrays, see [`scipy.fftpack`](https://docs.scipy.org/doc/scipy/reference/fftpack.html#module-scipy.fftpack "(in SciPy v1.8.1)"). Normalization ------------- The argument `norm` indicates which direction of the pair of direct/inverse transforms is scaled and with what normalization factor. The default normalization (`"backward"`) has the direct (forward) transforms unscaled and the inverse (backward) transforms scaled by \(1/n\). It is possible to obtain unitary transforms by setting the keyword argument `norm` to `"ortho"` so that both direct and inverse transforms are scaled by \(1/\sqrt{n}\). Finally, setting the keyword argument `norm` to `"forward"` has the direct transforms scaled by \(1/n\) and the inverse transforms unscaled (i.e. exactly opposite to the default `"backward"`). `None` is an alias of the default option `"backward"` for backward compatibility. Real and Hermitian transforms ----------------------------- When the input is purely real, its transform is Hermitian, i.e., the component at frequency \(f_k\) is the complex conjugate of the component at frequency \(-f_k\), which means that for real inputs there is no information in the negative frequency components that is not already available from the positive frequency components. 
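A short sketch of how the `norm` argument distributes the scaling between the forward and inverse transforms:

```python
import numpy as np

rng = np.random.default_rng(0)
a = rng.standard_normal(16)

# Default "backward": forward transform unscaled, inverse scaled by 1/n,
# so the round trip recovers the signal.
assert np.allclose(np.fft.ifft(np.fft.fft(a)), a)

# "ortho": both directions scaled by 1/sqrt(n); the transform is unitary,
# so total energy is preserved without extra factors.
A_ortho = np.fft.fft(a, norm="ortho")
assert np.allclose(np.sum(np.abs(a) ** 2), np.sum(np.abs(A_ortho) ** 2))

# "forward": forward scaled by 1/n, inverse unscaled -- the exact
# opposite of the default "backward" convention.
assert np.allclose(np.fft.fft(a, norm="forward") * len(a), np.fft.fft(a))
```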
The family of [`rfft`](generated/numpy.fft.rfft#numpy.fft.rfft "numpy.fft.rfft") functions is designed to operate on real inputs, and exploits this symmetry by computing only the positive frequency components, up to and including the Nyquist frequency. Thus, `n` input points produce `n/2+1` complex output points. The inverses of this family assume the same symmetry of their input, and for an output of `n` points use `n/2+1` input points.

Correspondingly, when the spectrum is purely real, the signal is Hermitian. The [`hfft`](generated/numpy.fft.hfft#numpy.fft.hfft "numpy.fft.hfft") family of functions exploits this symmetry by using `n/2+1` complex points in the input (time) domain for `n` real points in the frequency domain.

In higher dimensions, FFTs are used, e.g., for image analysis and filtering. The computational efficiency of the FFT means that it can also be a faster way to compute large convolutions, using the property that a convolution in the time domain is equivalent to a point-by-point multiplication in the frequency domain.

Higher dimensions
-----------------

In two dimensions, the DFT is defined as

\[A_{kl} = \sum_{m=0}^{M-1} \sum_{n=0}^{N-1} a_{mn}\exp\left\{-2\pi i \left({mk\over M}+{nl\over N}\right)\right\} \qquad k = 0, \ldots, M-1;\quad l = 0, \ldots, N-1,\]

which extends in the obvious way to higher dimensions, and the inverses in higher dimensions also extend in the same way.

References
----------

[CT](#id1) Cooley, <NAME>., and <NAME>, 1965, “An algorithm for the machine calculation of complex Fourier series,” *Math. Comput.* 19: 297-301.

[NR](#id2) <NAME>., <NAME>., <NAME>., and <NAME>., 2007, *Numerical Recipes: The Art of Scientific Computing*, ch. 12-13. Cambridge Univ. Press, Cambridge, UK.

Examples
--------

For examples, see the various functions.
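As one small illustration of the real-input symmetry described above: `n` real input points yield `n/2+1` complex output points, and the discarded negative frequencies are just complex conjugates of the positive ones.

```python
import numpy as np

a = np.array([0.0, 1.0, 2.0, 3.0, 4.0, 5.0])  # n = 6 real input points

# rfft returns only the non-negative frequencies: n//2 + 1 = 4 points.
A = np.fft.rfft(a)
assert A.shape == (4,)

# These match the first half of the full fft; the rest is redundant
# because the spectrum of a real signal is Hermitian: A[n-k] == conj(A[k]).
full = np.fft.fft(a)
assert np.allclose(A, full[:4])
assert np.allclose(full[4], np.conj(full[2]))

# irfft recovers the n real samples from the n//2 + 1 complex points.
assert np.allclose(np.fft.irfft(A, n=6), a)
```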
Functional programming
======================

| | | | --- | --- | | [`apply_along_axis`](generated/numpy.apply_along_axis#numpy.apply_along_axis "numpy.apply_along_axis")(func1d, axis, arr, *args, ...) | Apply a function to 1-D slices along the given axis. | | [`apply_over_axes`](generated/numpy.apply_over_axes#numpy.apply_over_axes "numpy.apply_over_axes")(func, a, axes) | Apply a function repeatedly over multiple axes. | | [`vectorize`](generated/numpy.vectorize#numpy.vectorize "numpy.vectorize")(pyfunc[, otypes, doc, excluded, ...]) | Generalized function class. | | [`frompyfunc`](generated/numpy.frompyfunc#numpy.frompyfunc "numpy.frompyfunc")(func, /, nin, nout, *[, identity]) | Takes an arbitrary Python function and returns a NumPy ufunc. | | [`piecewise`](generated/numpy.piecewise#numpy.piecewise "numpy.piecewise")(x, condlist, funclist, *args, **kw) | Evaluate a piecewise-defined function. |

NumPy-specific help functions
=============================

Finding help
------------

| | | | --- | --- | | [`lookfor`](generated/numpy.lookfor#numpy.lookfor "numpy.lookfor")(what[, module, import_modules, ...]) | Do a keyword search on docstrings. |

Reading help
------------

| | | | --- | --- | | [`info`](generated/numpy.info#numpy.info "numpy.info")([object, maxwidth, output, toplevel]) | Get help information for a function, class, or module. | | [`source`](generated/numpy.source#numpy.source "numpy.source")(object[, output]) | Print or write to a file the source code for a NumPy object. |
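A brief sketch of two of the functional programming routines listed earlier, `apply_along_axis` and `vectorize`:

```python
import numpy as np

# apply_along_axis: run a 1-D function over each row of a 2-D array.
m = np.array([[1, 2, 3],
              [4, 5, 6]])
row_ranges = np.apply_along_axis(np.ptp, 1, m)  # peak-to-peak per row
assert np.array_equal(row_ranges, [2, 2])

# vectorize: lift a scalar Python function to operate elementwise.
# clip_to_five is a hypothetical helper for illustration only.
def clip_to_five(x):
    return x if x < 5 else 5

v = np.vectorize(clip_to_five)
assert np.array_equal(v(m), [[1, 2, 3], [4, 5, 5]])
```

Note that `vectorize` is provided for convenience, not performance; it is essentially a Python loop under the hood.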
Input and output
================

NumPy binary files (NPY, NPZ)
-----------------------------

| | | | --- | --- | | [`load`](generated/numpy.load#numpy.load "numpy.load")(file[, mmap_mode, allow_pickle, ...]) | Load arrays or pickled objects from `.npy`, `.npz` or pickled files. | | [`save`](generated/numpy.save#numpy.save "numpy.save")(file, arr[, allow_pickle, fix_imports]) | Save an array to a binary file in NumPy `.npy` format. | | [`savez`](generated/numpy.savez#numpy.savez "numpy.savez")(file, *args, **kwds) | Save several arrays into a single file in uncompressed `.npz` format. | | [`savez_compressed`](generated/numpy.savez_compressed#numpy.savez_compressed "numpy.savez_compressed")(file, *args, **kwds) | Save several arrays into a single file in compressed `.npz` format. |

The format of these binary file types is documented in [`numpy.lib.format`](generated/numpy.lib.format#module-numpy.lib.format "numpy.lib.format")

Text files
----------

| | | | --- | --- | | [`loadtxt`](generated/numpy.loadtxt#numpy.loadtxt "numpy.loadtxt")(fname[, dtype, comments, delimiter, ...]) | Load data from a text file. | | [`savetxt`](generated/numpy.savetxt#numpy.savetxt "numpy.savetxt")(fname, X[, fmt, delimiter, newline, ...]) | Save an array to a text file. | | [`genfromtxt`](generated/numpy.genfromtxt#numpy.genfromtxt "numpy.genfromtxt")(fname[, dtype, comments, ...]) | Load data from a text file, with missing values handled as specified. | | [`fromregex`](generated/numpy.fromregex#numpy.fromregex "numpy.fromregex")(file, regexp, dtype[, encoding]) | Construct an array from a text file, using regular expression parsing. | | [`fromstring`](generated/numpy.fromstring#numpy.fromstring "numpy.fromstring")(string[, dtype, count, like]) | A new 1-D array initialized from text data in a string.
| | [`ndarray.tofile`](generated/numpy.ndarray.tofile#numpy.ndarray.tofile "numpy.ndarray.tofile")(fid[, sep, format]) | Write array to a file as text or binary (default). | | [`ndarray.tolist`](generated/numpy.ndarray.tolist#numpy.ndarray.tolist "numpy.ndarray.tolist")() | Return the array as an `a.ndim`-levels deep nested list of Python scalars. | Raw binary files ---------------- | | | | --- | --- | | [`fromfile`](generated/numpy.fromfile#numpy.fromfile "numpy.fromfile")(file[, dtype, count, sep, offset, like]) | Construct an array from data in a text or binary file. | | [`ndarray.tofile`](generated/numpy.ndarray.tofile#numpy.ndarray.tofile "numpy.ndarray.tofile")(fid[, sep, format]) | Write array to a file as text or binary (default). | String formatting ----------------- | | | | --- | --- | | [`array2string`](generated/numpy.array2string#numpy.array2string "numpy.array2string")(a[, max_line_width, precision, ...]) | Return a string representation of an array. | | [`array_repr`](generated/numpy.array_repr#numpy.array_repr "numpy.array_repr")(arr[, max_line_width, precision, ...]) | Return the string representation of an array. | | [`array_str`](generated/numpy.array_str#numpy.array_str "numpy.array_str")(a[, max_line_width, precision, ...]) | Return a string representation of the data in an array. | | [`format_float_positional`](generated/numpy.format_float_positional#numpy.format_float_positional "numpy.format_float_positional")(x[, precision, ...]) | Format a floating-point scalar as a decimal string in positional notation. | | [`format_float_scientific`](generated/numpy.format_float_scientific#numpy.format_float_scientific "numpy.format_float_scientific")(x[, precision, ...]) | Format a floating-point scalar as a decimal string in scientific notation. 
| Memory mapping files -------------------- | | | | --- | --- | | [`memmap`](generated/numpy.memmap#numpy.memmap "numpy.memmap")(filename[, dtype, mode, offset, ...]) | Create a memory-map to an array stored in a *binary* file on disk. | | [`lib.format.open_memmap`](generated/numpy.lib.format.open_memmap#numpy.lib.format.open_memmap "numpy.lib.format.open_memmap")(filename[, mode, ...]) | Open a .npy file as a memory-mapped array. | Text formatting options ----------------------- | | | | --- | --- | | [`set_printoptions`](generated/numpy.set_printoptions#numpy.set_printoptions "numpy.set_printoptions")([precision, threshold, ...]) | Set printing options. | | [`get_printoptions`](generated/numpy.get_printoptions#numpy.get_printoptions "numpy.get_printoptions")() | Return the current print options. | | [`set_string_function`](generated/numpy.set_string_function#numpy.set_string_function "numpy.set_string_function")(f[, repr]) | Set a Python function to be used when pretty printing arrays. | | [`printoptions`](generated/numpy.printoptions#numpy.printoptions "numpy.printoptions")(*args, **kwargs) | Context manager for setting print options. | Base-n representations ---------------------- | | | | --- | --- | | [`binary_repr`](generated/numpy.binary_repr#numpy.binary_repr "numpy.binary_repr")(num[, width]) | Return the binary representation of the input number as a string. | | [`base_repr`](generated/numpy.base_repr#numpy.base_repr "numpy.base_repr")(number[, base, padding]) | Return a string representation of a number in the given base system. | Data sources ------------ | | | | --- | --- | | [`DataSource`](generated/numpy.datasource#numpy.DataSource "numpy.DataSource")([destpath]) | A generic data source file (file, http, ftp, ...). 
|

Binary Format Description
-------------------------

| | | | --- | --- | | [`lib.format`](generated/numpy.lib.format#module-numpy.lib.format "numpy.lib.format") | Binary serialization |

Linear algebra (numpy.linalg)
=============================

The NumPy linear algebra functions rely on BLAS and LAPACK to provide efficient low level implementations of standard linear algebra algorithms. Those libraries may be provided by NumPy itself using C versions of a subset of their reference implementations but, when possible, highly optimized libraries that take advantage of specialized processor functionality are preferred. Examples of such libraries are [OpenBLAS](https://www.openblas.net/), MKL (TM), and ATLAS. Because those libraries are multithreaded and processor dependent, environmental variables and external packages such as [threadpoolctl](https://github.com/joblib/threadpoolctl) may be needed to control the number of threads or specify the processor architecture.

The SciPy library also contains a [`linalg`](https://docs.scipy.org/doc/scipy/reference/linalg.html#module-scipy.linalg "(in SciPy v1.8.1)") submodule, and there is overlap in the functionality provided by the SciPy and NumPy submodules. SciPy contains functions not found in [`numpy.linalg`](#module-numpy.linalg "numpy.linalg"), such as functions related to LU decomposition and the Schur decomposition, multiple ways of calculating the pseudoinverse, and matrix transcendentals such as the matrix logarithm. Some functions that exist in both have augmented functionality in [`scipy.linalg`](https://docs.scipy.org/doc/scipy/reference/linalg.html#module-scipy.linalg "(in SciPy v1.8.1)").
For example, [`scipy.linalg.eig`](https://docs.scipy.org/doc/scipy/reference/generated/scipy.linalg.eig.html#scipy.linalg.eig "(in SciPy v1.8.1)") can take a second matrix argument for solving generalized eigenvalue problems. Some functions in NumPy, however, have more flexible broadcasting options. For example, [`numpy.linalg.solve`](generated/numpy.linalg.solve#numpy.linalg.solve "numpy.linalg.solve") can handle “stacked” arrays, while [`scipy.linalg.solve`](https://docs.scipy.org/doc/scipy/reference/generated/scipy.linalg.solve.html#scipy.linalg.solve "(in SciPy v1.8.1)") accepts only a single square array as its first argument. Note The term *matrix* as it is used on this page indicates a 2d [`numpy.array`](generated/numpy.array#numpy.array "numpy.array") object, and *not* a [`numpy.matrix`](generated/numpy.matrix#numpy.matrix "numpy.matrix") object. The latter is no longer recommended, even for linear algebra. See [the matrix object documentation](arrays.classes#matrix-objects) for more information. The `@` operator ---------------- Introduced in NumPy 1.10.0, the `@` operator is preferable to other methods when computing the matrix product between 2d arrays. The [`numpy.matmul`](generated/numpy.matmul#numpy.matmul "numpy.matmul") function implements the `@` operator. Matrix and vector products -------------------------- | | | | --- | --- | | [`dot`](generated/numpy.dot#numpy.dot "numpy.dot")(a, b[, out]) | Dot product of two arrays. | | [`linalg.multi_dot`](generated/numpy.linalg.multi_dot#numpy.linalg.multi_dot "numpy.linalg.multi_dot")(arrays, *[, out]) | Compute the dot product of two or more arrays in a single function call, while automatically selecting the fastest evaluation order. | | [`vdot`](generated/numpy.vdot#numpy.vdot "numpy.vdot")(a, b, /) | Return the dot product of two vectors. | | [`inner`](generated/numpy.inner#numpy.inner "numpy.inner")(a, b, /) | Inner product of two arrays. 
| | [`outer`](generated/numpy.outer#numpy.outer "numpy.outer")(a, b[, out]) | Compute the outer product of two vectors. | | [`matmul`](generated/numpy.matmul#numpy.matmul "numpy.matmul")(x1, x2, /[, out, casting, order, ...]) | Matrix product of two arrays. | | [`tensordot`](generated/numpy.tensordot#numpy.tensordot "numpy.tensordot")(a, b[, axes]) | Compute tensor dot product along specified axes. | | [`einsum`](generated/numpy.einsum#numpy.einsum "numpy.einsum")(subscripts, *operands[, out, dtype, ...]) | Evaluates the Einstein summation convention on the operands. | | [`einsum_path`](generated/numpy.einsum_path#numpy.einsum_path "numpy.einsum_path")(subscripts, *operands[, optimize]) | Evaluates the lowest cost contraction order for an einsum expression by considering the creation of intermediate arrays. | | [`linalg.matrix_power`](generated/numpy.linalg.matrix_power#numpy.linalg.matrix_power "numpy.linalg.matrix_power")(a, n) | Raise a square matrix to the (integer) power `n`. | | [`kron`](generated/numpy.kron#numpy.kron "numpy.kron")(a, b) | Kronecker product of two arrays. | Decompositions -------------- | | | | --- | --- | | [`linalg.cholesky`](generated/numpy.linalg.cholesky#numpy.linalg.cholesky "numpy.linalg.cholesky")(a) | Cholesky decomposition. | | [`linalg.qr`](generated/numpy.linalg.qr#numpy.linalg.qr "numpy.linalg.qr")(a[, mode]) | Compute the qr factorization of a matrix. | | [`linalg.svd`](generated/numpy.linalg.svd#numpy.linalg.svd "numpy.linalg.svd")(a[, full_matrices, compute_uv, ...]) | Singular Value Decomposition. | Matrix eigenvalues ------------------ | | | | --- | --- | | [`linalg.eig`](generated/numpy.linalg.eig#numpy.linalg.eig "numpy.linalg.eig")(a) | Compute the eigenvalues and right eigenvectors of a square array. | | [`linalg.eigh`](generated/numpy.linalg.eigh#numpy.linalg.eigh "numpy.linalg.eigh")(a[, UPLO]) | Return the eigenvalues and eigenvectors of a complex Hermitian (conjugate symmetric) or a real symmetric matrix. 
| | [`linalg.eigvals`](generated/numpy.linalg.eigvals#numpy.linalg.eigvals "numpy.linalg.eigvals")(a) | Compute the eigenvalues of a general matrix. | | [`linalg.eigvalsh`](generated/numpy.linalg.eigvalsh#numpy.linalg.eigvalsh "numpy.linalg.eigvalsh")(a[, UPLO]) | Compute the eigenvalues of a complex Hermitian or real symmetric matrix. | Norms and other numbers ----------------------- | | | | --- | --- | | [`linalg.norm`](generated/numpy.linalg.norm#numpy.linalg.norm "numpy.linalg.norm")(x[, ord, axis, keepdims]) | Matrix or vector norm. | | [`linalg.cond`](generated/numpy.linalg.cond#numpy.linalg.cond "numpy.linalg.cond")(x[, p]) | Compute the condition number of a matrix. | | [`linalg.det`](generated/numpy.linalg.det#numpy.linalg.det "numpy.linalg.det")(a) | Compute the determinant of an array. | | [`linalg.matrix_rank`](generated/numpy.linalg.matrix_rank#numpy.linalg.matrix_rank "numpy.linalg.matrix_rank")(A[, tol, hermitian]) | Return matrix rank of array using SVD method | | [`linalg.slogdet`](generated/numpy.linalg.slogdet#numpy.linalg.slogdet "numpy.linalg.slogdet")(a) | Compute the sign and (natural) logarithm of the determinant of an array. | | [`trace`](generated/numpy.trace#numpy.trace "numpy.trace")(a[, offset, axis1, axis2, dtype, out]) | Return the sum along diagonals of the array. | Solving equations and inverting matrices ---------------------------------------- | | | | --- | --- | | [`linalg.solve`](generated/numpy.linalg.solve#numpy.linalg.solve "numpy.linalg.solve")(a, b) | Solve a linear matrix equation, or system of linear scalar equations. | | [`linalg.tensorsolve`](generated/numpy.linalg.tensorsolve#numpy.linalg.tensorsolve "numpy.linalg.tensorsolve")(a, b[, axes]) | Solve the tensor equation `a x = b` for x. | | [`linalg.lstsq`](generated/numpy.linalg.lstsq#numpy.linalg.lstsq "numpy.linalg.lstsq")(a, b[, rcond]) | Return the least-squares solution to a linear matrix equation. 
| | [`linalg.inv`](generated/numpy.linalg.inv#numpy.linalg.inv "numpy.linalg.inv")(a) | Compute the (multiplicative) inverse of a matrix. | | [`linalg.pinv`](generated/numpy.linalg.pinv#numpy.linalg.pinv "numpy.linalg.pinv")(a[, rcond, hermitian]) | Compute the (Moore-Penrose) pseudo-inverse of a matrix. | | [`linalg.tensorinv`](generated/numpy.linalg.tensorinv#numpy.linalg.tensorinv "numpy.linalg.tensorinv")(a[, ind]) | Compute the 'inverse' of an N-dimensional array. |

Exceptions
----------

| | | | --- | --- | | [`linalg.LinAlgError`](generated/numpy.linalg.linalgerror#numpy.linalg.LinAlgError "numpy.linalg.LinAlgError") | Generic Python-exception-derived object raised by linalg functions. |

Linear algebra on several matrices at once
------------------------------------------

New in version 1.8.0.

Several of the linear algebra routines listed above are able to compute results for several matrices at once, if they are stacked into the same array. This is indicated in the documentation via input parameter specifications such as `a : (..., M, M) array_like`. This means that if for instance given an input array `a.shape == (N, M, M)`, it is interpreted as a “stack” of N matrices, each of size M-by-M. Similar specification applies to return values, for instance the determinant has `det : (...)` and will in this case return an array of shape `det(a).shape == (N,)`. This generalizes to linear algebra operations on higher-dimensional arrays: the last 1 or 2 dimensions of a multidimensional array are interpreted as vectors or matrices, as appropriate for each operation.

Logic functions
===============

Truth value testing
-------------------

| | | | --- | --- | | [`all`](generated/numpy.all#numpy.all "numpy.all")(a[, axis, out, keepdims, where]) | Test whether all array elements along a given axis evaluate to True.
| | [`any`](generated/numpy.any#numpy.any "numpy.any")(a[, axis, out, keepdims, where]) | Test whether any array element along a given axis evaluates to True. | Array contents -------------- | | | | --- | --- | | [`isfinite`](generated/numpy.isfinite#numpy.isfinite "numpy.isfinite")(x, /[, out, where, casting, order, ...]) | Test element-wise for finiteness (not infinity and not Not a Number). | | [`isinf`](generated/numpy.isinf#numpy.isinf "numpy.isinf")(x, /[, out, where, casting, order, ...]) | Test element-wise for positive or negative infinity. | | [`isnan`](generated/numpy.isnan#numpy.isnan "numpy.isnan")(x, /[, out, where, casting, order, ...]) | Test element-wise for NaN and return result as a boolean array. | | [`isnat`](generated/numpy.isnat#numpy.isnat "numpy.isnat")(x, /[, out, where, casting, order, ...]) | Test element-wise for NaT (not a time) and return result as a boolean array. | | [`isneginf`](generated/numpy.isneginf#numpy.isneginf "numpy.isneginf")(x[, out]) | Test element-wise for negative infinity, return result as bool array. | | [`isposinf`](generated/numpy.isposinf#numpy.isposinf "numpy.isposinf")(x[, out]) | Test element-wise for positive infinity, return result as bool array. | Array type testing ------------------ | | | | --- | --- | | [`iscomplex`](generated/numpy.iscomplex#numpy.iscomplex "numpy.iscomplex")(x) | Returns a bool array, where True if input element is complex. | | [`iscomplexobj`](generated/numpy.iscomplexobj#numpy.iscomplexobj "numpy.iscomplexobj")(x) | Check for a complex type or an array of complex numbers. | | [`isfortran`](generated/numpy.isfortran#numpy.isfortran "numpy.isfortran")(a) | Check if the array is Fortran contiguous but *not* C contiguous. | | [`isreal`](generated/numpy.isreal#numpy.isreal "numpy.isreal")(x) | Returns a bool array, where True if input element is real. 
| | [`isrealobj`](generated/numpy.isrealobj#numpy.isrealobj "numpy.isrealobj")(x) | Return True if x is a not complex type or an array of complex numbers. | | [`isscalar`](generated/numpy.isscalar#numpy.isscalar "numpy.isscalar")(element) | Returns True if the type of `element` is a scalar type. | Logical operations ------------------ | | | | --- | --- | | [`logical_and`](generated/numpy.logical_and#numpy.logical_and "numpy.logical_and")(x1, x2, /[, out, where, ...]) | Compute the truth value of x1 AND x2 element-wise. | | [`logical_or`](generated/numpy.logical_or#numpy.logical_or "numpy.logical_or")(x1, x2, /[, out, where, casting, ...]) | Compute the truth value of x1 OR x2 element-wise. | | [`logical_not`](generated/numpy.logical_not#numpy.logical_not "numpy.logical_not")(x, /[, out, where, casting, ...]) | Compute the truth value of NOT x element-wise. | | [`logical_xor`](generated/numpy.logical_xor#numpy.logical_xor "numpy.logical_xor")(x1, x2, /[, out, where, ...]) | Compute the truth value of x1 XOR x2, element-wise. | Comparison ---------- | | | | --- | --- | | [`allclose`](generated/numpy.allclose#numpy.allclose "numpy.allclose")(a, b[, rtol, atol, equal_nan]) | Returns True if two arrays are element-wise equal within a tolerance. | | [`isclose`](generated/numpy.isclose#numpy.isclose "numpy.isclose")(a, b[, rtol, atol, equal_nan]) | Returns a boolean array where two arrays are element-wise equal within a tolerance. | | [`array_equal`](generated/numpy.array_equal#numpy.array_equal "numpy.array_equal")(a1, a2[, equal_nan]) | True if two arrays have the same shape and elements, False otherwise. | | [`array_equiv`](generated/numpy.array_equiv#numpy.array_equiv "numpy.array_equiv")(a1, a2) | Returns True if input arrays are shape consistent and all elements equal. | | | | | --- | --- | | [`greater`](generated/numpy.greater#numpy.greater "numpy.greater")(x1, x2, /[, out, where, casting, ...]) | Return the truth value of (x1 > x2) element-wise. 
| | [`greater_equal`](generated/numpy.greater_equal#numpy.greater_equal "numpy.greater_equal")(x1, x2, /[, out, where, ...]) | Return the truth value of (x1 >= x2) element-wise. | | [`less`](generated/numpy.less#numpy.less "numpy.less")(x1, x2, /[, out, where, casting, ...]) | Return the truth value of (x1 < x2) element-wise. | | [`less_equal`](generated/numpy.less_equal#numpy.less_equal "numpy.less_equal")(x1, x2, /[, out, where, casting, ...]) | Return the truth value of (x1 <= x2) element-wise. | | [`equal`](generated/numpy.equal#numpy.equal "numpy.equal")(x1, x2, /[, out, where, casting, ...]) | Return (x1 == x2) element-wise. | | [`not_equal`](generated/numpy.not_equal#numpy.not_equal "numpy.not_equal")(x1, x2, /[, out, where, casting, ...]) | Return (x1 != x2) element-wise. |

Masked array operations
=======================

Constants
---------

| | | | --- | --- | | [`ma.MaskType`](generated/numpy.ma.masktype#numpy.ma.MaskType "numpy.ma.MaskType") | alias of [`numpy.bool_`](arrays.scalars#numpy.bool_ "numpy.bool_") |

Creation
--------

### From existing data

| | | | --- | --- | | [`ma.masked_array`](generated/numpy.ma.masked_array#numpy.ma.masked_array "numpy.ma.masked_array") | alias of `numpy.ma.core.MaskedArray` | | [`ma.array`](generated/numpy.ma.array#numpy.ma.array "numpy.ma.array")(data[, dtype, copy, order, mask, ...]) | An array class with possibly masked values. | | [`ma.copy`](generated/numpy.ma.copy#numpy.ma.copy "numpy.ma.copy")(self, *args, **params) a.copy(order=) | Return a copy of the array. | | [`ma.frombuffer`](generated/numpy.ma.frombuffer#numpy.ma.frombuffer "numpy.ma.frombuffer")(buffer[, dtype, count, ...]) | Interpret a buffer as a 1-dimensional array.
| | [`ma.fromfunction`](generated/numpy.ma.fromfunction#numpy.ma.fromfunction "numpy.ma.fromfunction")(function, shape, **dtype) | Construct an array by executing a function over each coordinate. | | [`ma.MaskedArray.copy`](generated/numpy.ma.maskedarray.copy#numpy.ma.MaskedArray.copy "numpy.ma.MaskedArray.copy")([order]) | Return a copy of the array. | ### Ones and zeros | | | | --- | --- | | [`ma.empty`](generated/numpy.ma.empty#numpy.ma.empty "numpy.ma.empty")(shape[, dtype, order, like]) | Return a new array of given shape and type, without initializing entries. | | [`ma.empty_like`](generated/numpy.ma.empty_like#numpy.ma.empty_like "numpy.ma.empty_like")(prototype[, dtype, order, ...]) | Return a new array with the same shape and type as a given array. | | [`ma.masked_all`](generated/numpy.ma.masked_all#numpy.ma.masked_all "numpy.ma.masked_all")(shape[, dtype]) | Empty masked array with all elements masked. | | [`ma.masked_all_like`](generated/numpy.ma.masked_all_like#numpy.ma.masked_all_like "numpy.ma.masked_all_like")(arr) | Empty masked array with the properties of an existing array. | | [`ma.ones`](generated/numpy.ma.ones#numpy.ma.ones "numpy.ma.ones")(shape[, dtype, order]) | Return a new array of given shape and type, filled with ones. | | [`ma.ones_like`](generated/numpy.ma.ones_like#numpy.ma.ones_like "numpy.ma.ones_like")(*args, **kwargs) | Return an array of ones with the same shape and type as a given array. | | [`ma.zeros`](generated/numpy.ma.zeros#numpy.ma.zeros "numpy.ma.zeros")(shape[, dtype, order, like]) | Return a new array of given shape and type, filled with zeros. | | [`ma.zeros_like`](generated/numpy.ma.zeros_like#numpy.ma.zeros_like "numpy.ma.zeros_like")(*args, **kwargs) | Return an array of zeros with the same shape and type as a given array. 
| Inspecting the array -------------------- | | | | --- | --- | | [`ma.all`](generated/numpy.ma.all#numpy.ma.all "numpy.ma.all")(self[, axis, out, keepdims]) | Returns True if all elements evaluate to True. | | [`ma.any`](generated/numpy.ma.any#numpy.ma.any "numpy.ma.any")(self[, axis, out, keepdims]) | Returns True if any of the elements of `a` evaluate to True. | | [`ma.count`](generated/numpy.ma.count#numpy.ma.count "numpy.ma.count")(self[, axis, keepdims]) | Count the non-masked elements of the array along the given axis. | | [`ma.count_masked`](generated/numpy.ma.count_masked#numpy.ma.count_masked "numpy.ma.count_masked")(arr[, axis]) | Count the number of masked elements along the given axis. | | [`ma.getmask`](generated/numpy.ma.getmask#numpy.ma.getmask "numpy.ma.getmask")(a) | Return the mask of a masked array, or nomask. | | [`ma.getmaskarray`](generated/numpy.ma.getmaskarray#numpy.ma.getmaskarray "numpy.ma.getmaskarray")(arr) | Return the mask of a masked array, or full boolean array of False. | | [`ma.getdata`](generated/numpy.ma.getdata#numpy.ma.getdata "numpy.ma.getdata")(a[, subok]) | Return the data of a masked array as an ndarray. | | [`ma.nonzero`](generated/numpy.ma.nonzero#numpy.ma.nonzero "numpy.ma.nonzero")(self) | Return the indices of unmasked elements that are not zero. | | [`ma.shape`](generated/numpy.ma.shape#numpy.ma.shape "numpy.ma.shape")(obj) | Return the shape of an array. | | [`ma.size`](generated/numpy.ma.size#numpy.ma.size "numpy.ma.size")(obj[, axis]) | Return the number of elements along a given axis. | | [`ma.is_masked`](generated/numpy.ma.is_masked#numpy.ma.is_masked "numpy.ma.is_masked")(x) | Determine whether input has masked values. | | [`ma.is_mask`](generated/numpy.ma.is_mask#numpy.ma.is_mask "numpy.ma.is_mask")(m) | Return True if m is a valid, standard mask. | | [`ma.isMaskedArray`](generated/numpy.ma.ismaskedarray#numpy.ma.isMaskedArray "numpy.ma.isMaskedArray")(x) | Test whether input is an instance of MaskedArray. 
| | [`ma.isMA`](generated/numpy.ma.isma#numpy.ma.isMA "numpy.ma.isMA")(x) | Test whether input is an instance of MaskedArray. | | [`ma.isarray`](generated/numpy.ma.isarray#numpy.ma.isarray "numpy.ma.isarray")(x) | Test whether input is an instance of MaskedArray. | | [`ma.MaskedArray.all`](generated/numpy.ma.maskedarray.all#numpy.ma.MaskedArray.all "numpy.ma.MaskedArray.all")([axis, out, keepdims]) | Returns True if all elements evaluate to True. | | [`ma.MaskedArray.any`](generated/numpy.ma.maskedarray.any#numpy.ma.MaskedArray.any "numpy.ma.MaskedArray.any")([axis, out, keepdims]) | Returns True if any of the elements of `a` evaluate to True. | | [`ma.MaskedArray.count`](generated/numpy.ma.maskedarray.count#numpy.ma.MaskedArray.count "numpy.ma.MaskedArray.count")([axis, keepdims]) | Count the non-masked elements of the array along the given axis. | | [`ma.MaskedArray.nonzero`](generated/numpy.ma.maskedarray.nonzero#numpy.ma.MaskedArray.nonzero "numpy.ma.MaskedArray.nonzero")() | Return the indices of unmasked elements that are not zero. | | [`ma.shape`](generated/numpy.ma.shape#numpy.ma.shape "numpy.ma.shape")(obj) | Return the shape of an array. | | [`ma.size`](generated/numpy.ma.size#numpy.ma.size "numpy.ma.size")(obj[, axis]) | Return the number of elements along a given axis. | | | | | --- | --- | | [`ma.MaskedArray.data`](maskedarray.baseclass#numpy.ma.MaskedArray.data "numpy.ma.MaskedArray.data") | Returns the underlying data, as a view of the masked array. | | [`ma.MaskedArray.mask`](maskedarray.baseclass#numpy.ma.MaskedArray.mask "numpy.ma.MaskedArray.mask") | Current mask. | | [`ma.MaskedArray.recordmask`](maskedarray.baseclass#numpy.ma.MaskedArray.recordmask "numpy.ma.MaskedArray.recordmask") | Get or set the mask of the array if it has no named fields. 
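The inspection routines above can be combined as in this short sketch, which separates the data, the mask, and the element counts of a partially masked array:

```python
import numpy.ma as ma

x = ma.array([1.0, 2.0, 3.0, 4.0], mask=[0, 1, 0, 1])

mask = ma.getmask(x)        # boolean mask: [False, True, False, True]
data = ma.getdata(x)        # underlying ndarray, ignoring the mask
n_valid = ma.count(x)       # number of unmasked elements
n_masked = ma.count_masked(x)  # number of masked elements

print(list(data))           # [1.0, 2.0, 3.0, 4.0]
print(n_valid, n_masked)    # 2 2
print(ma.is_masked(x))      # True, since at least one element is masked
```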
| Manipulating a MaskedArray -------------------------- ### Changing the shape | | | | --- | --- | | [`ma.ravel`](generated/numpy.ma.ravel#numpy.ma.ravel "numpy.ma.ravel")(self[, order]) | Returns a 1D version of self, as a view. | | [`ma.reshape`](generated/numpy.ma.reshape#numpy.ma.reshape "numpy.ma.reshape")(a, new_shape[, order]) | Returns an array containing the same data with a new shape. | | [`ma.resize`](generated/numpy.ma.resize#numpy.ma.resize "numpy.ma.resize")(x, new_shape) | Return a new masked array with the specified size and shape. | | [`ma.MaskedArray.flatten`](generated/numpy.ma.maskedarray.flatten#numpy.ma.MaskedArray.flatten "numpy.ma.MaskedArray.flatten")([order]) | Return a copy of the array collapsed into one dimension. | | [`ma.MaskedArray.ravel`](generated/numpy.ma.maskedarray.ravel#numpy.ma.MaskedArray.ravel "numpy.ma.MaskedArray.ravel")([order]) | Returns a 1D version of self, as a view. | | [`ma.MaskedArray.reshape`](generated/numpy.ma.maskedarray.reshape#numpy.ma.MaskedArray.reshape "numpy.ma.MaskedArray.reshape")(*s, **kwargs) | Give a new shape to the array without changing its data. | | [`ma.MaskedArray.resize`](generated/numpy.ma.maskedarray.resize#numpy.ma.MaskedArray.resize "numpy.ma.MaskedArray.resize")(newshape[, refcheck, ...]) | | ### Modifying axes | | | | --- | --- | | [`ma.swapaxes`](generated/numpy.ma.swapaxes#numpy.ma.swapaxes "numpy.ma.swapaxes")(self, *args, ...) | Return a view of the array with `axis1` and `axis2` interchanged. | | [`ma.transpose`](generated/numpy.ma.transpose#numpy.ma.transpose "numpy.ma.transpose")(a[, axes]) | Permute the dimensions of an array. | | [`ma.MaskedArray.swapaxes`](generated/numpy.ma.maskedarray.swapaxes#numpy.ma.MaskedArray.swapaxes "numpy.ma.MaskedArray.swapaxes")(axis1, axis2) | Return a view of the array with `axis1` and `axis2` interchanged. 
| | [`ma.MaskedArray.transpose`](generated/numpy.ma.maskedarray.transpose#numpy.ma.MaskedArray.transpose "numpy.ma.MaskedArray.transpose")(*axes) | Returns a view of the array with axes transposed. | ### Changing the number of dimensions | | | | --- | --- | | [`ma.atleast_1d`](generated/numpy.ma.atleast_1d#numpy.ma.atleast_1d "numpy.ma.atleast_1d")(*args, **kwargs) | Convert inputs to arrays with at least one dimension. | | [`ma.atleast_2d`](generated/numpy.ma.atleast_2d#numpy.ma.atleast_2d "numpy.ma.atleast_2d")(*args, **kwargs) | View inputs as arrays with at least two dimensions. | | [`ma.atleast_3d`](generated/numpy.ma.atleast_3d#numpy.ma.atleast_3d "numpy.ma.atleast_3d")(*args, **kwargs) | View inputs as arrays with at least three dimensions. | | [`ma.expand_dims`](generated/numpy.ma.expand_dims#numpy.ma.expand_dims "numpy.ma.expand_dims")(a, axis) | Expand the shape of an array. | | [`ma.squeeze`](generated/numpy.ma.squeeze#numpy.ma.squeeze "numpy.ma.squeeze")(*args, **kwargs) | Remove axes of length one from `a`. | | [`ma.MaskedArray.squeeze`](generated/numpy.ma.maskedarray.squeeze#numpy.ma.MaskedArray.squeeze "numpy.ma.MaskedArray.squeeze")([axis]) | Remove axes of length one from `a`. | | [`ma.stack`](generated/numpy.ma.stack#numpy.ma.stack "numpy.ma.stack")(*args, **kwargs) | Join a sequence of arrays along a new axis. | | [`ma.column_stack`](generated/numpy.ma.column_stack#numpy.ma.column_stack "numpy.ma.column_stack")(*args, **kwargs) | Stack 1-D arrays as columns into a 2-D array. | | [`ma.concatenate`](generated/numpy.ma.concatenate#numpy.ma.concatenate "numpy.ma.concatenate")(arrays[, axis]) | Concatenate a sequence of arrays along the given axis. | | [`ma.dstack`](generated/numpy.ma.dstack#numpy.ma.dstack "numpy.ma.dstack")(*args, **kwargs) | Stack arrays in sequence depth wise (along third axis). | | [`ma.hstack`](generated/numpy.ma.hstack#numpy.ma.hstack "numpy.ma.hstack")(*args, **kwargs) | Stack arrays in sequence horizontally (column wise). 
| | [`ma.hsplit`](generated/numpy.ma.hsplit#numpy.ma.hsplit "numpy.ma.hsplit")(*args, **kwargs) | Split an array into multiple sub-arrays horizontally (column-wise). | | [`ma.mr_`](generated/numpy.ma.mr_#numpy.ma.mr_ "numpy.ma.mr_") | Translate slice objects to concatenation along the first axis. | | [`ma.row_stack`](generated/numpy.ma.row_stack#numpy.ma.row_stack "numpy.ma.row_stack")(*args, **kwargs) | Stack arrays in sequence vertically (row wise). | | [`ma.vstack`](generated/numpy.ma.vstack#numpy.ma.vstack "numpy.ma.vstack")(*args, **kwargs) | Stack arrays in sequence vertically (row wise). | ### Joining arrays | | | | --- | --- | | [`ma.concatenate`](generated/numpy.ma.concatenate#numpy.ma.concatenate "numpy.ma.concatenate")(arrays[, axis]) | Concatenate a sequence of arrays along the given axis. | | [`ma.stack`](generated/numpy.ma.stack#numpy.ma.stack "numpy.ma.stack")(*args, **kwargs) | Join a sequence of arrays along a new axis. | | [`ma.vstack`](generated/numpy.ma.vstack#numpy.ma.vstack "numpy.ma.vstack")(*args, **kwargs) | Stack arrays in sequence vertically (row wise). | | [`ma.hstack`](generated/numpy.ma.hstack#numpy.ma.hstack "numpy.ma.hstack")(*args, **kwargs) | Stack arrays in sequence horizontally (column wise). | | [`ma.dstack`](generated/numpy.ma.dstack#numpy.ma.dstack "numpy.ma.dstack")(*args, **kwargs) | Stack arrays in sequence depth wise (along third axis). | | [`ma.column_stack`](generated/numpy.ma.column_stack#numpy.ma.column_stack "numpy.ma.column_stack")(*args, **kwargs) | Stack 1-D arrays as columns into a 2-D array. | | [`ma.append`](generated/numpy.ma.append#numpy.ma.append "numpy.ma.append")(a, b[, axis]) | Append values to the end of an array. | Operations on masks ------------------- ### Creating a mask | | | | --- | --- | | [`ma.make_mask`](generated/numpy.ma.make_mask#numpy.ma.make_mask "numpy.ma.make_mask")(m[, copy, shrink, dtype]) | Create a boolean mask from an array. 
| | [`ma.make_mask_none`](generated/numpy.ma.make_mask_none#numpy.ma.make_mask_none "numpy.ma.make_mask_none")(newshape[, dtype]) | Return a boolean mask of the given shape, filled with False. | | [`ma.mask_or`](generated/numpy.ma.mask_or#numpy.ma.mask_or "numpy.ma.mask_or")(m1, m2[, copy, shrink]) | Combine two masks with the `logical_or` operator. | | [`ma.make_mask_descr`](generated/numpy.ma.make_mask_descr#numpy.ma.make_mask_descr "numpy.ma.make_mask_descr")(ndtype) | Construct a dtype description list from a given dtype. | ### Accessing a mask | | | | --- | --- | | [`ma.getmask`](generated/numpy.ma.getmask#numpy.ma.getmask "numpy.ma.getmask")(a) | Return the mask of a masked array, or nomask. | | [`ma.getmaskarray`](generated/numpy.ma.getmaskarray#numpy.ma.getmaskarray "numpy.ma.getmaskarray")(arr) | Return the mask of a masked array, or full boolean array of False. | | [`ma.masked_array.mask`](generated/numpy.ma.masked_array.mask#numpy.ma.masked_array.mask "numpy.ma.masked_array.mask") | Current mask. | ### Finding masked data | | | | --- | --- | | [`ma.ndenumerate`](generated/numpy.ma.ndenumerate#numpy.ma.ndenumerate "numpy.ma.ndenumerate")(a[, compressed]) | Multidimensional index iterator. | | [`ma.flatnotmasked_contiguous`](generated/numpy.ma.flatnotmasked_contiguous#numpy.ma.flatnotmasked_contiguous "numpy.ma.flatnotmasked_contiguous")(a) | Find contiguous unmasked data in a masked array. | | [`ma.flatnotmasked_edges`](generated/numpy.ma.flatnotmasked_edges#numpy.ma.flatnotmasked_edges "numpy.ma.flatnotmasked_edges")(a) | Find the indices of the first and last unmasked values. | | [`ma.notmasked_contiguous`](generated/numpy.ma.notmasked_contiguous#numpy.ma.notmasked_contiguous "numpy.ma.notmasked_contiguous")(a[, axis]) | Find contiguous unmasked data in a masked array along the given axis. 
| | [`ma.notmasked_edges`](generated/numpy.ma.notmasked_edges#numpy.ma.notmasked_edges "numpy.ma.notmasked_edges")(a[, axis]) | Find the indices of the first and last unmasked values along an axis. | | [`ma.clump_masked`](generated/numpy.ma.clump_masked#numpy.ma.clump_masked "numpy.ma.clump_masked")(a) | Returns a list of slices corresponding to the masked clumps of a 1-D array. | | [`ma.clump_unmasked`](generated/numpy.ma.clump_unmasked#numpy.ma.clump_unmasked "numpy.ma.clump_unmasked")(a) | Return list of slices corresponding to the unmasked clumps of a 1-D array. | ### Modifying a mask | | | | --- | --- | | [`ma.mask_cols`](generated/numpy.ma.mask_cols#numpy.ma.mask_cols "numpy.ma.mask_cols")(a[, axis]) | Mask columns of a 2D array that contain masked values. | | [`ma.mask_or`](generated/numpy.ma.mask_or#numpy.ma.mask_or "numpy.ma.mask_or")(m1, m2[, copy, shrink]) | Combine two masks with the `logical_or` operator. | | [`ma.mask_rowcols`](generated/numpy.ma.mask_rowcols#numpy.ma.mask_rowcols "numpy.ma.mask_rowcols")(a[, axis]) | Mask rows and/or columns of a 2D array that contain masked values. | | [`ma.mask_rows`](generated/numpy.ma.mask_rows#numpy.ma.mask_rows "numpy.ma.mask_rows")(a[, axis]) | Mask rows of a 2D array that contain masked values. | | [`ma.harden_mask`](generated/numpy.ma.harden_mask#numpy.ma.harden_mask "numpy.ma.harden_mask")(self) | Force the mask to hard, preventing unmasking by assignment. | | [`ma.soften_mask`](generated/numpy.ma.soften_mask#numpy.ma.soften_mask "numpy.ma.soften_mask")(self) | Force the mask to soft (default), allowing unmasking by assignment. | | [`ma.MaskedArray.harden_mask`](generated/numpy.ma.maskedarray.harden_mask#numpy.ma.MaskedArray.harden_mask "numpy.ma.MaskedArray.harden_mask")() | Force the mask to hard, preventing unmasking by assignment. 
| | [`ma.MaskedArray.soften_mask`](generated/numpy.ma.maskedarray.soften_mask#numpy.ma.MaskedArray.soften_mask "numpy.ma.MaskedArray.soften_mask")() | Force the mask to soft (default), allowing unmasking by assignment. | | [`ma.MaskedArray.shrink_mask`](generated/numpy.ma.maskedarray.shrink_mask#numpy.ma.MaskedArray.shrink_mask "numpy.ma.MaskedArray.shrink_mask")() | Reduce a mask to nomask when possible. | | [`ma.MaskedArray.unshare_mask`](generated/numpy.ma.maskedarray.unshare_mask#numpy.ma.MaskedArray.unshare_mask "numpy.ma.MaskedArray.unshare_mask")() | Copy the mask and set the `sharedmask` flag to `False`. | Conversion operations --------------------- ### … to a masked array | | | | --- | --- | | [`ma.asarray`](generated/numpy.ma.asarray#numpy.ma.asarray "numpy.ma.asarray")(a[, dtype, order]) | Convert the input to a masked array of the given data-type. | | [`ma.asanyarray`](generated/numpy.ma.asanyarray#numpy.ma.asanyarray "numpy.ma.asanyarray")(a[, dtype]) | Convert the input to a masked array, conserving subclasses. | | [`ma.fix_invalid`](generated/numpy.ma.fix_invalid#numpy.ma.fix_invalid "numpy.ma.fix_invalid")(a[, mask, copy, fill_value]) | Return input with invalid data masked and replaced by a fill value. | | [`ma.masked_equal`](generated/numpy.ma.masked_equal#numpy.ma.masked_equal "numpy.ma.masked_equal")(x, value[, copy]) | Mask an array where equal to a given value. | | [`ma.masked_greater`](generated/numpy.ma.masked_greater#numpy.ma.masked_greater "numpy.ma.masked_greater")(x, value[, copy]) | Mask an array where greater than a given value. | | [`ma.masked_greater_equal`](generated/numpy.ma.masked_greater_equal#numpy.ma.masked_greater_equal "numpy.ma.masked_greater_equal")(x, value[, copy]) | Mask an array where greater than or equal to a given value. | | [`ma.masked_inside`](generated/numpy.ma.masked_inside#numpy.ma.masked_inside "numpy.ma.masked_inside")(x, v1, v2[, copy]) | Mask an array inside a given interval. 
| | [`ma.masked_invalid`](generated/numpy.ma.masked_invalid#numpy.ma.masked_invalid "numpy.ma.masked_invalid")(a[, copy]) | Mask an array where invalid values occur (NaNs or infs). | | [`ma.masked_less`](generated/numpy.ma.masked_less#numpy.ma.masked_less "numpy.ma.masked_less")(x, value[, copy]) | Mask an array where less than a given value. | | [`ma.masked_less_equal`](generated/numpy.ma.masked_less_equal#numpy.ma.masked_less_equal "numpy.ma.masked_less_equal")(x, value[, copy]) | Mask an array where less than or equal to a given value. | | [`ma.masked_not_equal`](generated/numpy.ma.masked_not_equal#numpy.ma.masked_not_equal "numpy.ma.masked_not_equal")(x, value[, copy]) | Mask an array where `not` equal to a given value. | | [`ma.masked_object`](generated/numpy.ma.masked_object#numpy.ma.masked_object "numpy.ma.masked_object")(x, value[, copy, shrink]) | Mask the array `x` where the data are exactly equal to value. | | [`ma.masked_outside`](generated/numpy.ma.masked_outside#numpy.ma.masked_outside "numpy.ma.masked_outside")(x, v1, v2[, copy]) | Mask an array outside a given interval. | | [`ma.masked_values`](generated/numpy.ma.masked_values#numpy.ma.masked_values "numpy.ma.masked_values")(x, value[, rtol, atol, ...]) | Mask using floating point equality. | | [`ma.masked_where`](generated/numpy.ma.masked_where#numpy.ma.masked_where "numpy.ma.masked_where")(condition, a[, copy]) | Mask an array where a condition is met. | ### … to a ndarray | | | | --- | --- | | [`ma.compress_cols`](generated/numpy.ma.compress_cols#numpy.ma.compress_cols "numpy.ma.compress_cols")(a) | Suppress whole columns of a 2-D array that contain masked values. | | [`ma.compress_rowcols`](generated/numpy.ma.compress_rowcols#numpy.ma.compress_rowcols "numpy.ma.compress_rowcols")(x[, axis]) | Suppress the rows and/or columns of a 2-D array that contain masked values. 
| | [`ma.compress_rows`](generated/numpy.ma.compress_rows#numpy.ma.compress_rows "numpy.ma.compress_rows")(a) | Suppress whole rows of a 2-D array that contain masked values. | | [`ma.compressed`](generated/numpy.ma.compressed#numpy.ma.compressed "numpy.ma.compressed")(x) | Return all the non-masked data as a 1-D array. | | [`ma.filled`](generated/numpy.ma.filled#numpy.ma.filled "numpy.ma.filled")(a[, fill_value]) | Return input as an array with masked data replaced by a fill value. | | [`ma.MaskedArray.compressed`](generated/numpy.ma.maskedarray.compressed#numpy.ma.MaskedArray.compressed "numpy.ma.MaskedArray.compressed")() | Return all the non-masked data as a 1-D array. | | [`ma.MaskedArray.filled`](generated/numpy.ma.maskedarray.filled#numpy.ma.MaskedArray.filled "numpy.ma.MaskedArray.filled")([fill_value]) | Return a copy of self, with masked values filled with a given value. | ### … to another object | | | | --- | --- | | [`ma.MaskedArray.tofile`](generated/numpy.ma.maskedarray.tofile#numpy.ma.MaskedArray.tofile "numpy.ma.MaskedArray.tofile")(fid[, sep, format]) | Save a masked array to a file in binary format. | | [`ma.MaskedArray.tolist`](generated/numpy.ma.maskedarray.tolist#numpy.ma.MaskedArray.tolist "numpy.ma.MaskedArray.tolist")([fill_value]) | Return the data portion of the masked array as a hierarchical Python list. | | [`ma.MaskedArray.torecords`](generated/numpy.ma.maskedarray.torecords#numpy.ma.MaskedArray.torecords "numpy.ma.MaskedArray.torecords")() | Transforms a masked array into a flexible-type array. | | [`ma.MaskedArray.tobytes`](generated/numpy.ma.maskedarray.tobytes#numpy.ma.MaskedArray.tobytes "numpy.ma.MaskedArray.tobytes")([fill_value, order]) | Return the array data as a string containing the raw bytes in the array. 
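The conversion routines above differ in how they treat masked slots when leaving the masked-array world; this minimal sketch contrasts the common ones:

```python
import numpy.ma as ma

x = ma.array([1, 2, 3, 4], mask=[0, 1, 0, 1])

# compressed() drops masked slots entirely, returning a plain 1-D ndarray.
kept = x.compressed()        # [1, 3]

# filled() keeps the shape and substitutes a fill value for masked slots.
dense = x.filled(0)          # [1, 0, 3, 0]

# tolist() converts to a Python list, with masked entries as None.
as_list = x.tolist()         # [1, None, 3, None]

print(list(kept), list(dense), as_list)
```

Use `compressed()` when only the valid data matters, and `filled()` when downstream code needs a fixed-shape plain ndarray.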
| ### Filling a masked array | | | | --- | --- | | [`ma.common_fill_value`](generated/numpy.ma.common_fill_value#numpy.ma.common_fill_value "numpy.ma.common_fill_value")(a, b) | Return the common filling value of two masked arrays, if any. | | [`ma.default_fill_value`](generated/numpy.ma.default_fill_value#numpy.ma.default_fill_value "numpy.ma.default_fill_value")(obj) | Return the default fill value for the argument object. | | [`ma.maximum_fill_value`](generated/numpy.ma.maximum_fill_value#numpy.ma.maximum_fill_value "numpy.ma.maximum_fill_value")(obj) | Return the minimum value that can be represented by the dtype of an object. | | [`ma.minimum_fill_value`](generated/numpy.ma.minimum_fill_value#numpy.ma.minimum_fill_value "numpy.ma.minimum_fill_value")(obj) | Return the maximum value that can be represented by the dtype of an object. | | [`ma.set_fill_value`](generated/numpy.ma.set_fill_value#numpy.ma.set_fill_value "numpy.ma.set_fill_value")(a, fill_value) | Set the filling value of a, if a is a masked array. | | [`ma.MaskedArray.get_fill_value`](generated/numpy.ma.maskedarray.get_fill_value#numpy.ma.MaskedArray.get_fill_value "numpy.ma.MaskedArray.get_fill_value")() | The filling value of the masked array is a scalar. | | [`ma.MaskedArray.set_fill_value`](generated/numpy.ma.maskedarray.set_fill_value#numpy.ma.MaskedArray.set_fill_value "numpy.ma.MaskedArray.set_fill_value")([value]) | | | | | | --- | --- | | [`ma.MaskedArray.fill_value`](maskedarray.baseclass#numpy.ma.MaskedArray.fill_value "numpy.ma.MaskedArray.fill_value") | The filling value of the masked array is a scalar. | Masked arrays arithmetic ------------------------ ### Arithmetic | | | | --- | --- | | [`ma.anom`](generated/numpy.ma.anom#numpy.ma.anom "numpy.ma.anom")(self[, axis, dtype]) | Compute the anomalies (deviations from the arithmetic mean) along the given axis. 
| | [`ma.anomalies`](generated/numpy.ma.anomalies#numpy.ma.anomalies "numpy.ma.anomalies")(self[, axis, dtype]) | Compute the anomalies (deviations from the arithmetic mean) along the given axis. | | [`ma.average`](generated/numpy.ma.average#numpy.ma.average "numpy.ma.average")(a[, axis, weights, returned, ...]) | Return the weighted average of array over the given axis. | | [`ma.conjugate`](generated/numpy.ma.conjugate#numpy.ma.conjugate "numpy.ma.conjugate")(x, /[, out, where, casting, ...]) | Return the complex conjugate, element-wise. | | [`ma.corrcoef`](generated/numpy.ma.corrcoef#numpy.ma.corrcoef "numpy.ma.corrcoef")(x[, y, rowvar, bias, ...]) | Return Pearson product-moment correlation coefficients. | | [`ma.cov`](generated/numpy.ma.cov#numpy.ma.cov "numpy.ma.cov")(x[, y, rowvar, bias, allow_masked, ddof]) | Estimate the covariance matrix. | | [`ma.cumsum`](generated/numpy.ma.cumsum#numpy.ma.cumsum "numpy.ma.cumsum")(self[, axis, dtype, out]) | Return the cumulative sum of the array elements over the given axis. | | [`ma.cumprod`](generated/numpy.ma.cumprod#numpy.ma.cumprod "numpy.ma.cumprod")(self[, axis, dtype, out]) | Return the cumulative product of the array elements over the given axis. | | [`ma.mean`](generated/numpy.ma.mean#numpy.ma.mean "numpy.ma.mean")(self[, axis, dtype, out, keepdims]) | Returns the average of the array elements along given axis. | | [`ma.median`](generated/numpy.ma.median#numpy.ma.median "numpy.ma.median")(a[, axis, out, overwrite_input, ...]) | Compute the median along the specified axis. | | [`ma.power`](generated/numpy.ma.power#numpy.ma.power "numpy.ma.power")(a, b[, third]) | Returns element-wise base array raised to power from second array. | | [`ma.prod`](generated/numpy.ma.prod#numpy.ma.prod "numpy.ma.prod")(self[, axis, dtype, out, keepdims]) | Return the product of the array elements over the given axis. 
| | [`ma.std`](generated/numpy.ma.std#numpy.ma.std "numpy.ma.std")(self[, axis, dtype, out, ddof, keepdims]) | Returns the standard deviation of the array elements along given axis. | | [`ma.sum`](generated/numpy.ma.sum#numpy.ma.sum "numpy.ma.sum")(self[, axis, dtype, out, keepdims]) | Return the sum of the array elements over the given axis. | | [`ma.var`](generated/numpy.ma.var#numpy.ma.var "numpy.ma.var")(self[, axis, dtype, out, ddof, keepdims]) | Compute the variance along the specified axis. | | [`ma.MaskedArray.anom`](generated/numpy.ma.maskedarray.anom#numpy.ma.MaskedArray.anom "numpy.ma.MaskedArray.anom")([axis, dtype]) | Compute the anomalies (deviations from the arithmetic mean) along the given axis. | | [`ma.MaskedArray.cumprod`](generated/numpy.ma.maskedarray.cumprod#numpy.ma.MaskedArray.cumprod "numpy.ma.MaskedArray.cumprod")([axis, dtype, out]) | Return the cumulative product of the array elements over the given axis. | | [`ma.MaskedArray.cumsum`](generated/numpy.ma.maskedarray.cumsum#numpy.ma.MaskedArray.cumsum "numpy.ma.MaskedArray.cumsum")([axis, dtype, out]) | Return the cumulative sum of the array elements over the given axis. | | [`ma.MaskedArray.mean`](generated/numpy.ma.maskedarray.mean#numpy.ma.MaskedArray.mean "numpy.ma.MaskedArray.mean")([axis, dtype, out, keepdims]) | Returns the average of the array elements along given axis. | | [`ma.MaskedArray.prod`](generated/numpy.ma.maskedarray.prod#numpy.ma.MaskedArray.prod "numpy.ma.MaskedArray.prod")([axis, dtype, out, keepdims]) | Return the product of the array elements over the given axis. | | [`ma.MaskedArray.std`](generated/numpy.ma.maskedarray.std#numpy.ma.MaskedArray.std "numpy.ma.MaskedArray.std")([axis, dtype, out, ddof, ...]) | Returns the standard deviation of the array elements along given axis. 
| | [`ma.MaskedArray.sum`](generated/numpy.ma.maskedarray.sum#numpy.ma.MaskedArray.sum "numpy.ma.MaskedArray.sum")([axis, dtype, out, keepdims]) | Return the sum of the array elements over the given axis. | | [`ma.MaskedArray.var`](generated/numpy.ma.maskedarray.var#numpy.ma.MaskedArray.var "numpy.ma.MaskedArray.var")([axis, dtype, out, ddof, ...]) | Compute the variance along the specified axis. | ### Minimum/maximum | | | | --- | --- | | [`ma.argmax`](generated/numpy.ma.argmax#numpy.ma.argmax "numpy.ma.argmax")(self[, axis, fill_value, out]) | Returns array of indices of the maximum values along the given axis. | | [`ma.argmin`](generated/numpy.ma.argmin#numpy.ma.argmin "numpy.ma.argmin")(self[, axis, fill_value, out]) | Return array of indices to the minimum values along the given axis. | | [`ma.max`](generated/numpy.ma.max#numpy.ma.max "numpy.ma.max")(obj[, axis, out, fill_value, keepdims]) | Return the maximum along a given axis. | | [`ma.min`](generated/numpy.ma.min#numpy.ma.min "numpy.ma.min")(obj[, axis, out, fill_value, keepdims]) | Return the minimum along a given axis. | | [`ma.ptp`](generated/numpy.ma.ptp#numpy.ma.ptp "numpy.ma.ptp")(obj[, axis, out, fill_value, keepdims]) | Return (maximum - minimum) along the given dimension (i.e. peak-to-peak value). | | [`ma.diff`](generated/numpy.ma.diff#numpy.ma.diff "numpy.ma.diff")(*args, **kwargs) | Calculate the n-th discrete difference along the given axis. | | [`ma.MaskedArray.argmax`](generated/numpy.ma.maskedarray.argmax#numpy.ma.MaskedArray.argmax "numpy.ma.MaskedArray.argmax")([axis, fill_value, ...]) | Returns array of indices of the maximum values along the given axis. | | [`ma.MaskedArray.argmin`](generated/numpy.ma.maskedarray.argmin#numpy.ma.MaskedArray.argmin "numpy.ma.MaskedArray.argmin")([axis, fill_value, ...]) | Return array of indices to the minimum values along the given axis. 
| | [`ma.MaskedArray.max`](generated/numpy.ma.maskedarray.max#numpy.ma.MaskedArray.max "numpy.ma.MaskedArray.max")([axis, out, fill_value, ...]) | Return the maximum along a given axis. | | [`ma.MaskedArray.min`](generated/numpy.ma.maskedarray.min#numpy.ma.MaskedArray.min "numpy.ma.MaskedArray.min")([axis, out, fill_value, ...]) | Return the minimum along a given axis. | | [`ma.MaskedArray.ptp`](generated/numpy.ma.maskedarray.ptp#numpy.ma.MaskedArray.ptp "numpy.ma.MaskedArray.ptp")([axis, out, fill_value, ...]) | Return (maximum - minimum) along the given dimension (i.e. peak-to-peak value). | ### Sorting | | | | --- | --- | | [`ma.argsort`](generated/numpy.ma.argsort#numpy.ma.argsort "numpy.ma.argsort")(a[, axis, kind, order, endwith, ...]) | Return an ndarray of indices that sort the array along the specified axis. | | [`ma.sort`](generated/numpy.ma.sort#numpy.ma.sort "numpy.ma.sort")(a[, axis, kind, order, endwith, ...]) | Return a sorted copy of the masked array. | | [`ma.MaskedArray.argsort`](generated/numpy.ma.maskedarray.argsort#numpy.ma.MaskedArray.argsort "numpy.ma.MaskedArray.argsort")([axis, kind, order, ...]) | Return an ndarray of indices that sort the array along the specified axis. | | [`ma.MaskedArray.sort`](generated/numpy.ma.maskedarray.sort#numpy.ma.MaskedArray.sort "numpy.ma.MaskedArray.sort")([axis, kind, order, ...]) | Sort the array, in-place. | ### Algebra | | | | --- | --- | | [`ma.diag`](generated/numpy.ma.diag#numpy.ma.diag "numpy.ma.diag")(v[, k]) | Extract a diagonal or construct a diagonal array. | | [`ma.dot`](generated/numpy.ma.dot#numpy.ma.dot "numpy.ma.dot")(a, b[, strict, out]) | Return the dot product of two arrays. | | [`ma.identity`](generated/numpy.ma.identity#numpy.ma.identity "numpy.ma.identity")(n[, dtype]) | Return the identity array. | | [`ma.inner`](generated/numpy.ma.inner#numpy.ma.inner "numpy.ma.inner")(a, b, /) | Inner product of two arrays. 
| | [`ma.innerproduct`](generated/numpy.ma.innerproduct#numpy.ma.innerproduct "numpy.ma.innerproduct")(a, b, /) | Inner product of two arrays. | | [`ma.outer`](generated/numpy.ma.outer#numpy.ma.outer "numpy.ma.outer")(a, b) | Compute the outer product of two vectors. | | [`ma.outerproduct`](generated/numpy.ma.outerproduct#numpy.ma.outerproduct "numpy.ma.outerproduct")(a, b) | Compute the outer product of two vectors. | | [`ma.trace`](generated/numpy.ma.trace#numpy.ma.trace "numpy.ma.trace")(self[, offset, axis1, axis2, ...]) | Return the sum along diagonals of the array. | | [`ma.transpose`](generated/numpy.ma.transpose#numpy.ma.transpose "numpy.ma.transpose")(a[, axes]) | Permute the dimensions of an array. | | [`ma.MaskedArray.trace`](generated/numpy.ma.maskedarray.trace#numpy.ma.MaskedArray.trace "numpy.ma.MaskedArray.trace")([offset, axis1, axis2, ...]) | Return the sum along diagonals of the array. | | [`ma.MaskedArray.transpose`](generated/numpy.ma.maskedarray.transpose#numpy.ma.MaskedArray.transpose "numpy.ma.MaskedArray.transpose")(*axes) | Returns a view of the array with axes transposed. | ### Polynomial fit | | | | --- | --- | | [`ma.vander`](generated/numpy.ma.vander#numpy.ma.vander "numpy.ma.vander")(x[, n]) | Generate a Vandermonde matrix. | | [`ma.polyfit`](generated/numpy.ma.polyfit#numpy.ma.polyfit "numpy.ma.polyfit")(x, y, deg[, rcond, full, w, cov]) | Least squares polynomial fit. | ### Clipping and rounding | | | | --- | --- | | [`ma.around`](generated/numpy.ma.around#numpy.ma.around "numpy.ma.around") | Round an array to the given number of decimals. | | [`ma.clip`](generated/numpy.ma.clip#numpy.ma.clip "numpy.ma.clip")(*args, **kwargs) | Clip (limit) the values in an array. | | [`ma.round`](generated/numpy.ma.round#numpy.ma.round "numpy.ma.round")(a[, decimals, out]) | Return a copy of a, rounded to 'decimals' places. 
| | [`ma.MaskedArray.clip`](generated/numpy.ma.maskedarray.clip#numpy.ma.MaskedArray.clip "numpy.ma.MaskedArray.clip")([min, max, out]) | Return an array whose values are limited to `[min, max]`. | | [`ma.MaskedArray.round`](generated/numpy.ma.maskedarray.round#numpy.ma.MaskedArray.round "numpy.ma.MaskedArray.round")([decimals, out]) | Return each element rounded to the given number of decimals. | ### Miscellanea | | | | --- | --- | | [`ma.allequal`](generated/numpy.ma.allequal#numpy.ma.allequal "numpy.ma.allequal")(a, b[, fill_value]) | Return True if all entries of a and b are equal, using fill_value as a truth value where either or both are masked. | | [`ma.allclose`](generated/numpy.ma.allclose#numpy.ma.allclose "numpy.ma.allclose")(a, b[, masked_equal, rtol, atol]) | Returns True if two arrays are element-wise equal within a tolerance. | | [`ma.apply_along_axis`](generated/numpy.ma.apply_along_axis#numpy.ma.apply_along_axis "numpy.ma.apply_along_axis")(func1d, axis, arr, ...) | Apply a function to 1-D slices along the given axis. | | [`ma.apply_over_axes`](generated/numpy.ma.apply_over_axes#numpy.ma.apply_over_axes "numpy.ma.apply_over_axes")(func, a, axes) | Apply a function repeatedly over multiple axes. | | [`ma.arange`](generated/numpy.ma.arange#numpy.ma.arange "numpy.ma.arange")([start,] stop[, step,][, dtype, like]) | Return evenly spaced values within a given interval. | | [`ma.choose`](generated/numpy.ma.choose#numpy.ma.choose "numpy.ma.choose")(indices, choices[, out, mode]) | Use an index array to construct a new array from a list of choices. | | [`ma.ediff1d`](generated/numpy.ma.ediff1d#numpy.ma.ediff1d "numpy.ma.ediff1d")(arr[, to_end, to_begin]) | Compute the differences between consecutive elements of an array. | | [`ma.indices`](generated/numpy.ma.indices#numpy.ma.indices "numpy.ma.indices")(dimensions[, dtype, sparse]) | Return an array representing the indices of a grid. 
| | [`ma.where`](generated/numpy.ma.where#numpy.ma.where "numpy.ma.where")(condition[, x, y]) | Return a masked array with elements from `x` or `y`, depending on condition. | © 2005–2022 NumPy Developers. Licensed under the 3-clause BSD License. <https://numpy.org/doc/1.23/reference/routines.ma.html> Mathematical functions ====================== Trigonometric functions ----------------------- | | | | --- | --- | | [`sin`](generated/numpy.sin#numpy.sin "numpy.sin")(x, /[, out, where, casting, order, ...]) | Trigonometric sine, element-wise. | | [`cos`](generated/numpy.cos#numpy.cos "numpy.cos")(x, /[, out, where, casting, order, ...]) | Cosine element-wise. | | [`tan`](generated/numpy.tan#numpy.tan "numpy.tan")(x, /[, out, where, casting, order, ...]) | Compute tangent element-wise. | | [`arcsin`](generated/numpy.arcsin#numpy.arcsin "numpy.arcsin")(x, /[, out, where, casting, order, ...]) | Inverse sine, element-wise. | | [`arccos`](generated/numpy.arccos#numpy.arccos "numpy.arccos")(x, /[, out, where, casting, order, ...]) | Trigonometric inverse cosine, element-wise. | | [`arctan`](generated/numpy.arctan#numpy.arctan "numpy.arctan")(x, /[, out, where, casting, order, ...]) | Trigonometric inverse tangent, element-wise. | | [`hypot`](generated/numpy.hypot#numpy.hypot "numpy.hypot")(x1, x2, /[, out, where, casting, ...]) | Given the "legs" of a right triangle, return its hypotenuse. | | [`arctan2`](generated/numpy.arctan2#numpy.arctan2 "numpy.arctan2")(x1, x2, /[, out, where, casting, ...]) | Element-wise arc tangent of `x1/x2` choosing the quadrant correctly. | | [`degrees`](generated/numpy.degrees#numpy.degrees "numpy.degrees")(x, /[, out, where, casting, order, ...]) | Convert angles from radians to degrees. | | [`radians`](generated/numpy.radians#numpy.radians "numpy.radians")(x, /[, out, where, casting, order, ...]) | Convert angles from degrees to radians. 
| | [`unwrap`](generated/numpy.unwrap#numpy.unwrap "numpy.unwrap")(p[, discont, axis, period]) | Unwrap by taking the complement of large deltas with respect to the period. | | [`deg2rad`](generated/numpy.deg2rad#numpy.deg2rad "numpy.deg2rad")(x, /[, out, where, casting, order, ...]) | Convert angles from degrees to radians. | | [`rad2deg`](generated/numpy.rad2deg#numpy.rad2deg "numpy.rad2deg")(x, /[, out, where, casting, order, ...]) | Convert angles from radians to degrees. | Hyperbolic functions -------------------- | | | | --- | --- | | [`sinh`](generated/numpy.sinh#numpy.sinh "numpy.sinh")(x, /[, out, where, casting, order, ...]) | Hyperbolic sine, element-wise. | | [`cosh`](generated/numpy.cosh#numpy.cosh "numpy.cosh")(x, /[, out, where, casting, order, ...]) | Hyperbolic cosine, element-wise. | | [`tanh`](generated/numpy.tanh#numpy.tanh "numpy.tanh")(x, /[, out, where, casting, order, ...]) | Compute hyperbolic tangent element-wise. | | [`arcsinh`](generated/numpy.arcsinh#numpy.arcsinh "numpy.arcsinh")(x, /[, out, where, casting, order, ...]) | Inverse hyperbolic sine element-wise. | | [`arccosh`](generated/numpy.arccosh#numpy.arccosh "numpy.arccosh")(x, /[, out, where, casting, order, ...]) | Inverse hyperbolic cosine, element-wise. | | [`arctanh`](generated/numpy.arctanh#numpy.arctanh "numpy.arctanh")(x, /[, out, where, casting, order, ...]) | Inverse hyperbolic tangent element-wise. | Rounding -------- | | | | --- | --- | | [`around`](generated/numpy.around#numpy.around "numpy.around")(a[, decimals, out]) | Evenly round to the given number of decimals. | | [`round_`](generated/numpy.round_#numpy.round_ "numpy.round_")(a[, decimals, out]) | Round an array to the given number of decimals. | | [`rint`](generated/numpy.rint#numpy.rint "numpy.rint")(x, /[, out, where, casting, order, ...]) | Round elements of the array to the nearest integer. | | [`fix`](generated/numpy.fix#numpy.fix "numpy.fix")(x[, out]) | Round to nearest integer towards zero. 
|
| [`floor`](generated/numpy.floor#numpy.floor "numpy.floor")(x, /[, out, where, casting, order, ...]) | Return the floor of the input, element-wise. |
| [`ceil`](generated/numpy.ceil#numpy.ceil "numpy.ceil")(x, /[, out, where, casting, order, ...]) | Return the ceiling of the input, element-wise. |
| [`trunc`](generated/numpy.trunc#numpy.trunc "numpy.trunc")(x, /[, out, where, casting, order, ...]) | Return the truncated value of the input, element-wise. |

Sums, products, differences
---------------------------

| | |
| --- | --- |
| [`prod`](generated/numpy.prod#numpy.prod "numpy.prod")(a[, axis, dtype, out, keepdims, ...]) | Return the product of array elements over a given axis. |
| [`sum`](generated/numpy.sum#numpy.sum "numpy.sum")(a[, axis, dtype, out, keepdims, ...]) | Sum of array elements over a given axis. |
| [`nanprod`](generated/numpy.nanprod#numpy.nanprod "numpy.nanprod")(a[, axis, dtype, out, keepdims, ...]) | Return the product of array elements over a given axis treating Not a Numbers (NaNs) as ones. |
| [`nansum`](generated/numpy.nansum#numpy.nansum "numpy.nansum")(a[, axis, dtype, out, keepdims, ...]) | Return the sum of array elements over a given axis treating Not a Numbers (NaNs) as zero. |
| [`cumprod`](generated/numpy.cumprod#numpy.cumprod "numpy.cumprod")(a[, axis, dtype, out]) | Return the cumulative product of elements along a given axis. |
| [`cumsum`](generated/numpy.cumsum#numpy.cumsum "numpy.cumsum")(a[, axis, dtype, out]) | Return the cumulative sum of the elements along a given axis. |
| [`nancumprod`](generated/numpy.nancumprod#numpy.nancumprod "numpy.nancumprod")(a[, axis, dtype, out]) | Return the cumulative product of array elements over a given axis treating Not a Numbers (NaNs) as one. |
| [`nancumsum`](generated/numpy.nancumsum#numpy.nancumsum "numpy.nancumsum")(a[, axis, dtype, out]) | Return the cumulative sum of array elements over a given axis treating Not a Numbers (NaNs) as zero.
|
| [`diff`](generated/numpy.diff#numpy.diff "numpy.diff")(a[, n, axis, prepend, append]) | Calculate the n-th discrete difference along the given axis. |
| [`ediff1d`](generated/numpy.ediff1d#numpy.ediff1d "numpy.ediff1d")(ary[, to_end, to_begin]) | The differences between consecutive elements of an array. |
| [`gradient`](generated/numpy.gradient#numpy.gradient "numpy.gradient")(f, *varargs[, axis, edge_order]) | Return the gradient of an N-dimensional array. |
| [`cross`](generated/numpy.cross#numpy.cross "numpy.cross")(a, b[, axisa, axisb, axisc, axis]) | Return the cross product of two (arrays of) vectors. |
| [`trapz`](generated/numpy.trapz#numpy.trapz "numpy.trapz")(y[, x, dx, axis]) | Integrate along the given axis using the composite trapezoidal rule. |

Exponents and logarithms
------------------------

| | |
| --- | --- |
| [`exp`](generated/numpy.exp#numpy.exp "numpy.exp")(x, /[, out, where, casting, order, ...]) | Calculate the exponential of all elements in the input array. |
| [`expm1`](generated/numpy.expm1#numpy.expm1 "numpy.expm1")(x, /[, out, where, casting, order, ...]) | Calculate `exp(x) - 1` for all elements in the array. |
| [`exp2`](generated/numpy.exp2#numpy.exp2 "numpy.exp2")(x, /[, out, where, casting, order, ...]) | Calculate `2**p` for all `p` in the input array. |
| [`log`](generated/numpy.log#numpy.log "numpy.log")(x, /[, out, where, casting, order, ...]) | Natural logarithm, element-wise. |
| [`log10`](generated/numpy.log10#numpy.log10 "numpy.log10")(x, /[, out, where, casting, order, ...]) | Return the base 10 logarithm of the input array, element-wise. |
| [`log2`](generated/numpy.log2#numpy.log2 "numpy.log2")(x, /[, out, where, casting, order, ...]) | Base-2 logarithm of `x`. |
| [`log1p`](generated/numpy.log1p#numpy.log1p "numpy.log1p")(x, /[, out, where, casting, order, ...]) | Return the natural logarithm of one plus the input array, element-wise.
|
| [`logaddexp`](generated/numpy.logaddexp#numpy.logaddexp "numpy.logaddexp")(x1, x2, /[, out, where, casting, ...]) | Logarithm of the sum of exponentiations of the inputs. |
| [`logaddexp2`](generated/numpy.logaddexp2#numpy.logaddexp2 "numpy.logaddexp2")(x1, x2, /[, out, where, casting, ...]) | Logarithm of the sum of exponentiations of the inputs in base-2. |

Other special functions
-----------------------

| | |
| --- | --- |
| [`i0`](generated/numpy.i0#numpy.i0 "numpy.i0")(x) | Modified Bessel function of the first kind, order 0. |
| [`sinc`](generated/numpy.sinc#numpy.sinc "numpy.sinc")(x) | Return the normalized sinc function. |

Floating point routines
-----------------------

| | |
| --- | --- |
| [`signbit`](generated/numpy.signbit#numpy.signbit "numpy.signbit")(x, /[, out, where, casting, order, ...]) | Returns element-wise True where signbit is set (less than zero). |
| [`copysign`](generated/numpy.copysign#numpy.copysign "numpy.copysign")(x1, x2, /[, out, where, casting, ...]) | Change the sign of x1 to that of x2, element-wise. |
| [`frexp`](generated/numpy.frexp#numpy.frexp "numpy.frexp")(x[, out1, out2], / [[, out, where, ...]) | Decompose the elements of x into mantissa and twos exponent. |
| [`ldexp`](generated/numpy.ldexp#numpy.ldexp "numpy.ldexp")(x1, x2, /[, out, where, casting, ...]) | Returns x1 * 2**x2, element-wise. |
| [`nextafter`](generated/numpy.nextafter#numpy.nextafter "numpy.nextafter")(x1, x2, /[, out, where, casting, ...]) | Return the next floating-point value after x1 towards x2, element-wise. |
| [`spacing`](generated/numpy.spacing#numpy.spacing "numpy.spacing")(x, /[, out, where, casting, order, ...]) | Return the distance between x and the nearest adjacent number.
|

Rational routines
-----------------

| | |
| --- | --- |
| [`lcm`](generated/numpy.lcm#numpy.lcm "numpy.lcm")(x1, x2, /[, out, where, casting, order, ...]) | Returns the lowest common multiple of `|x1|` and `|x2|` |
| [`gcd`](generated/numpy.gcd#numpy.gcd "numpy.gcd")(x1, x2, /[, out, where, casting, order, ...]) | Returns the greatest common divisor of `|x1|` and `|x2|` |

Arithmetic operations
---------------------

| | |
| --- | --- |
| [`add`](generated/numpy.add#numpy.add "numpy.add")(x1, x2, /[, out, where, casting, order, ...]) | Add arguments element-wise. |
| [`reciprocal`](generated/numpy.reciprocal#numpy.reciprocal "numpy.reciprocal")(x, /[, out, where, casting, ...]) | Return the reciprocal of the argument, element-wise. |
| [`positive`](generated/numpy.positive#numpy.positive "numpy.positive")(x, /[, out, where, casting, order, ...]) | Numerical positive, element-wise. |
| [`negative`](generated/numpy.negative#numpy.negative "numpy.negative")(x, /[, out, where, casting, order, ...]) | Numerical negative, element-wise. |
| [`multiply`](generated/numpy.multiply#numpy.multiply "numpy.multiply")(x1, x2, /[, out, where, casting, ...]) | Multiply arguments element-wise. |
| [`divide`](generated/numpy.divide#numpy.divide "numpy.divide")(x1, x2, /[, out, where, casting, ...]) | Divide arguments element-wise. |
| [`power`](generated/numpy.power#numpy.power "numpy.power")(x1, x2, /[, out, where, casting, ...]) | First array elements raised to powers from second array, element-wise. |
| [`subtract`](generated/numpy.subtract#numpy.subtract "numpy.subtract")(x1, x2, /[, out, where, casting, ...]) | Subtract arguments, element-wise. |
| [`true_divide`](generated/numpy.true_divide#numpy.true_divide "numpy.true_divide")(x1, x2, /[, out, where, ...]) | Divide arguments element-wise.
|
| [`floor_divide`](generated/numpy.floor_divide#numpy.floor_divide "numpy.floor_divide")(x1, x2, /[, out, where, ...]) | Return the largest integer smaller or equal to the division of the inputs. |
| [`float_power`](generated/numpy.float_power#numpy.float_power "numpy.float_power")(x1, x2, /[, out, where, ...]) | First array elements raised to powers from second array, element-wise. |
| [`fmod`](generated/numpy.fmod#numpy.fmod "numpy.fmod")(x1, x2, /[, out, where, casting, ...]) | Returns the element-wise remainder of division. |
| [`mod`](generated/numpy.mod#numpy.mod "numpy.mod")(x1, x2, /[, out, where, casting, order, ...]) | Returns the element-wise remainder of division. |
| [`modf`](generated/numpy.modf#numpy.modf "numpy.modf")(x[, out1, out2], / [[, out, where, ...]) | Return the fractional and integral parts of an array, element-wise. |
| [`remainder`](generated/numpy.remainder#numpy.remainder "numpy.remainder")(x1, x2, /[, out, where, casting, ...]) | Returns the element-wise remainder of division. |
| [`divmod`](generated/numpy.divmod#numpy.divmod "numpy.divmod")(x1, x2[, out1, out2], / [[, out, ...]) | Return element-wise quotient and remainder simultaneously. |

Handling complex numbers
------------------------

| | |
| --- | --- |
| [`angle`](generated/numpy.angle#numpy.angle "numpy.angle")(z[, deg]) | Return the angle of the complex argument. |
| [`real`](generated/numpy.real#numpy.real "numpy.real")(val) | Return the real part of the complex argument. |
| [`imag`](generated/numpy.imag#numpy.imag "numpy.imag")(val) | Return the imaginary part of the complex argument. |
| [`conj`](generated/numpy.conj#numpy.conj "numpy.conj")(x, /[, out, where, casting, order, ...]) | Return the complex conjugate, element-wise. |
| [`conjugate`](generated/numpy.conjugate#numpy.conjugate "numpy.conjugate")(x, /[, out, where, casting, ...]) | Return the complex conjugate, element-wise.
|

Extrema Finding
---------------

| | |
| --- | --- |
| [`maximum`](generated/numpy.maximum#numpy.maximum "numpy.maximum")(x1, x2, /[, out, where, casting, ...]) | Element-wise maximum of array elements. |
| [`fmax`](generated/numpy.fmax#numpy.fmax "numpy.fmax")(x1, x2, /[, out, where, casting, ...]) | Element-wise maximum of array elements. |
| [`amax`](generated/numpy.amax#numpy.amax "numpy.amax")(a[, axis, out, keepdims, initial, where]) | Return the maximum of an array or maximum along an axis. |
| [`nanmax`](generated/numpy.nanmax#numpy.nanmax "numpy.nanmax")(a[, axis, out, keepdims, initial, where]) | Return the maximum of an array or maximum along an axis, ignoring any NaNs. |
| [`minimum`](generated/numpy.minimum#numpy.minimum "numpy.minimum")(x1, x2, /[, out, where, casting, ...]) | Element-wise minimum of array elements. |
| [`fmin`](generated/numpy.fmin#numpy.fmin "numpy.fmin")(x1, x2, /[, out, where, casting, ...]) | Element-wise minimum of array elements. |
| [`amin`](generated/numpy.amin#numpy.amin "numpy.amin")(a[, axis, out, keepdims, initial, where]) | Return the minimum of an array or minimum along an axis. |
| [`nanmin`](generated/numpy.nanmin#numpy.nanmin "numpy.nanmin")(a[, axis, out, keepdims, initial, where]) | Return minimum of an array or minimum along an axis, ignoring any NaNs. |

Miscellaneous
-------------

| | |
| --- | --- |
| [`convolve`](generated/numpy.convolve#numpy.convolve "numpy.convolve")(a, v[, mode]) | Returns the discrete, linear convolution of two one-dimensional sequences. |
| [`clip`](generated/numpy.clip#numpy.clip "numpy.clip")(a, a_min, a_max[, out]) | Clip (limit) the values in an array. |
| [`sqrt`](generated/numpy.sqrt#numpy.sqrt "numpy.sqrt")(x, /[, out, where, casting, order, ...]) | Return the non-negative square-root of an array, element-wise. |
| [`cbrt`](generated/numpy.cbrt#numpy.cbrt "numpy.cbrt")(x, /[, out, where, casting, order, ...]) | Return the cube-root of an array, element-wise.
|
| [`square`](generated/numpy.square#numpy.square "numpy.square")(x, /[, out, where, casting, order, ...]) | Return the element-wise square of the input. |
| [`absolute`](generated/numpy.absolute#numpy.absolute "numpy.absolute")(x, /[, out, where, casting, order, ...]) | Calculate the absolute value element-wise. |
| [`fabs`](generated/numpy.fabs#numpy.fabs "numpy.fabs")(x, /[, out, where, casting, order, ...]) | Compute the absolute values element-wise. |
| [`sign`](generated/numpy.sign#numpy.sign "numpy.sign")(x, /[, out, where, casting, order, ...]) | Returns an element-wise indication of the sign of a number. |
| [`heaviside`](generated/numpy.heaviside#numpy.heaviside "numpy.heaviside")(x1, x2, /[, out, where, casting, ...]) | Compute the Heaviside step function. |
| [`nan_to_num`](generated/numpy.nan_to_num#numpy.nan_to_num "numpy.nan_to_num")(x[, copy, nan, posinf, neginf]) | Replace NaN with zero and infinity with large finite numbers (default behaviour) or with the numbers defined by the user using the [`nan`](constants#numpy.nan "numpy.nan"), `posinf` and/or `neginf` keywords. |
| [`real_if_close`](generated/numpy.real_if_close#numpy.real_if_close "numpy.real_if_close")(a[, tol]) | If input is complex with all imaginary parts close to zero, return real parts. |
| [`interp`](generated/numpy.interp#numpy.interp "numpy.interp")(x, xp, fp[, left, right, period]) | One-dimensional linear interpolation for monotonically increasing sample points. |

© 2005–2022 NumPy Developers Licensed under the 3-clause BSD License. <https://numpy.org/doc/1.23/reference/routines.math.html>

Matrix library (numpy.matlib)
=============================

This module contains all functions in the [`numpy`](index#module-numpy "numpy") namespace, with the following replacement functions that return [`matrices`](generated/numpy.matrix#numpy.matrix "numpy.matrix") instead of [`ndarrays`](generated/numpy.ndarray#numpy.ndarray "numpy.ndarray").
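As a quick illustration of the difference, matrix-returning functions produce `np.matrix` objects, for which `*` means matrix multiplication rather than the element-wise product used by `ndarray`. A minimal sketch (note that the `matrix` class is discouraged for new code):

```python
import numpy as np

a = np.array([[1, 2], [3, 4]])
m = np.matrix(a)              # same data, matrix semantics

elementwise = a * a           # ndarray: element-wise product
matmul = m * m                # matrix: true matrix product

print(elementwise)            # [[ 1  4], [ 9 16]]
print(matmul)                 # [[ 7 10], [15 22]]
```

For new code, the same matrix product is available on plain arrays via `a @ a`, which avoids the `matrix` class entirely.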
Functions that are also in the numpy namespace and return matrices:

| | |
| --- | --- |
| [`mat`](generated/numpy.mat#numpy.mat "numpy.mat")(data[, dtype]) | Interpret the input as a matrix. |
| [`matrix`](generated/numpy.matrix#numpy.matrix "numpy.matrix")(data[, dtype, copy]) | *Note:* It is no longer recommended to use this class, even for linear algebra. |
| [`asmatrix`](generated/numpy.asmatrix#numpy.asmatrix "numpy.asmatrix")(data[, dtype]) | Interpret the input as a matrix. |
| [`bmat`](generated/numpy.bmat#numpy.bmat "numpy.bmat")(obj[, ldict, gdict]) | Build a matrix object from a string, nested sequence, or array. |

Replacement functions in [`matlib`](#module-numpy.matlib "numpy.matlib"):

| | |
| --- | --- |
| [`empty`](generated/numpy.matlib.empty#numpy.matlib.empty "numpy.matlib.empty")(shape[, dtype, order]) | Return a new matrix of given shape and type, without initializing entries. |
| [`zeros`](generated/numpy.matlib.zeros#numpy.matlib.zeros "numpy.matlib.zeros")(shape[, dtype, order]) | Return a matrix of given shape and type, filled with zeros. |
| [`ones`](generated/numpy.matlib.ones#numpy.matlib.ones "numpy.matlib.ones")(shape[, dtype, order]) | Matrix of ones. |
| [`eye`](generated/numpy.matlib.eye#numpy.matlib.eye "numpy.matlib.eye")(n[, M, k, dtype, order]) | Return a matrix with ones on the diagonal and zeros elsewhere. |
| [`identity`](generated/numpy.matlib.identity#numpy.matlib.identity "numpy.matlib.identity")(n[, dtype]) | Returns the square identity matrix of given size. |
| [`repmat`](generated/numpy.matlib.repmat#numpy.matlib.repmat "numpy.matlib.repmat")(a, m, n) | Repeat a 0-D to 2-D array or matrix MxN times. |
| [`rand`](generated/numpy.matlib.rand#numpy.matlib.rand "numpy.matlib.rand")(*args) | Return a matrix of random values with given shape. |
| [`randn`](generated/numpy.matlib.randn#numpy.matlib.randn "numpy.matlib.randn")(*args) | Return a random matrix with data from the "standard normal" distribution.
|

© 2005–2022 NumPy Developers Licensed under the 3-clause BSD License. <https://numpy.org/doc/1.23/reference/routines.matlib.html>

Miscellaneous routines
======================

Performance tuning
------------------

| | |
| --- | --- |
| [`setbufsize`](generated/numpy.setbufsize#numpy.setbufsize "numpy.setbufsize")(size) | Set the size of the buffer used in ufuncs. |
| [`getbufsize`](generated/numpy.getbufsize#numpy.getbufsize "numpy.getbufsize")() | Return the size of the buffer used in ufuncs. |

Memory ranges
-------------

| | |
| --- | --- |
| [`shares_memory`](generated/numpy.shares_memory#numpy.shares_memory "numpy.shares_memory")(a, b, /[, max_work]) | Determine if two arrays share memory. |
| [`may_share_memory`](generated/numpy.may_share_memory#numpy.may_share_memory "numpy.may_share_memory")(a, b, /[, max_work]) | Determine if two arrays might share memory |
| [`byte_bounds`](generated/numpy.byte_bounds#numpy.byte_bounds "numpy.byte_bounds")(a) | Returns pointers to the end-points of an array. |

Array mixins
------------

| | |
| --- | --- |
| [`lib.mixins.NDArrayOperatorsMixin`](generated/numpy.lib.mixins.ndarrayoperatorsmixin#numpy.lib.mixins.NDArrayOperatorsMixin "numpy.lib.mixins.NDArrayOperatorsMixin")() | Mixin defining all operator special methods using __array_ufunc__. |

NumPy version comparison
------------------------

| | |
| --- | --- |
| [`lib.NumpyVersion`](generated/numpy.lib.numpyversion#numpy.lib.NumpyVersion "numpy.lib.NumpyVersion")(vstring) | Parse and compare numpy version strings. |

Utility
-------

| | |
| --- | --- |
| [`get_include`](generated/numpy.get_include#numpy.get_include "numpy.get_include")() | Return the directory that contains the NumPy *.h header files. |
| [`show_config`](generated/numpy.show_config#numpy.show_config "numpy.show_config")() | Show libraries in the system on which NumPy was built.
|
| [`deprecate`](generated/numpy.deprecate#numpy.deprecate "numpy.deprecate")(*args, **kwargs) | Issues a DeprecationWarning, adds warning to `old_name`'s docstring, rebinds `old_name.__name__` and returns the new function object. |
| [`deprecate_with_doc`](generated/numpy.deprecate_with_doc#numpy.deprecate_with_doc "numpy.deprecate_with_doc")(msg) | Deprecates a function and includes the deprecation in its docstring. |
| [`broadcast_shapes`](generated/numpy.broadcast_shapes#numpy.broadcast_shapes "numpy.broadcast_shapes")(*args) | Broadcast the input shapes into a single shape. |

Matlab-like Functions
---------------------

| | |
| --- | --- |
| [`who`](generated/numpy.who#numpy.who "numpy.who")([vardict]) | Print the NumPy arrays in the given dictionary. |
| [`disp`](generated/numpy.disp#numpy.disp "numpy.disp")(mesg[, device, linefeed]) | Display a message on a device. |

Exceptions
----------

| | |
| --- | --- |
| [`AxisError`](generated/numpy.axiserror#numpy.AxisError "numpy.AxisError")(axis[, ndim, msg_prefix]) | Axis supplied was invalid. |

© 2005–2022 NumPy Developers Licensed under the 3-clause BSD License. <https://numpy.org/doc/1.23/reference/routines.other.html>

Padding Arrays
==============

| | |
| --- | --- |
| [`pad`](generated/numpy.pad#numpy.pad "numpy.pad")(array, pad_width[, mode]) | Pad an array. |

© 2005–2022 NumPy Developers Licensed under the 3-clause BSD License. <https://numpy.org/doc/1.23/reference/routines.padding.html>

Polynomials
===========

Polynomials in NumPy can be *created*, *manipulated*, and even *fitted* using the [convenience classes](routines.polynomials.classes) of the [`numpy.polynomial`](routines.polynomials.package#module-numpy.polynomial "numpy.polynomial") package, introduced in NumPy 1.4. Prior to NumPy 1.4, [`numpy.poly1d`](generated/numpy.poly1d#numpy.poly1d "numpy.poly1d") was the class of choice and it is still available in order to maintain backward compatibility.
However, the newer [`polynomial package`](routines.polynomials.package#module-numpy.polynomial "numpy.polynomial") is more complete and its `convenience classes` provide a more consistent, better-behaved interface for working with polynomial expressions. Therefore [`numpy.polynomial`](routines.polynomials.package#module-numpy.polynomial "numpy.polynomial") is recommended for new coding.

Note: **Terminology**

The term *polynomial module* refers to the old API defined in `numpy.lib.polynomial`, which includes the [`numpy.poly1d`](generated/numpy.poly1d#numpy.poly1d "numpy.poly1d") class and the polynomial functions prefixed with *poly* accessible from the [`numpy`](index#module-numpy "numpy") namespace (e.g. [`numpy.polyadd`](generated/numpy.polyadd#numpy.polyadd "numpy.polyadd"), [`numpy.polyval`](generated/numpy.polyval#numpy.polyval "numpy.polyval"), [`numpy.polyfit`](generated/numpy.polyfit#numpy.polyfit "numpy.polyfit"), etc.).

The term *polynomial package* refers to the new API defined in [`numpy.polynomial`](routines.polynomials.package#module-numpy.polynomial "numpy.polynomial"), which includes the convenience classes for the different kinds of polynomials (`numpy.polynomial.Polynomial`, `numpy.polynomial.Chebyshev`, etc.).

Transitioning from numpy.poly1d to numpy.polynomial
---------------------------------------------------

As noted above, the [`poly1d class`](generated/numpy.poly1d#numpy.poly1d "numpy.poly1d") and associated functions defined in `numpy.lib.polynomial`, such as [`numpy.polyfit`](generated/numpy.polyfit#numpy.polyfit "numpy.polyfit") and [`numpy.poly`](generated/numpy.poly#numpy.poly "numpy.poly"), are considered legacy and should **not** be used in new code. Since NumPy version 1.4, the [`numpy.polynomial`](routines.polynomials.package#module-numpy.polynomial "numpy.polynomial") package is preferred for working with polynomials.
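In practice, the most common stumbling block in the transition is the reversed coefficient ordering between the two APIs, which can be checked directly; a minimal sketch:

```python
import numpy as np
from numpy.polynomial import Polynomial

# x**2 + 2*x + 3, expressed in both conventions
p_old = np.poly1d([1, 2, 3])    # legacy: highest degree first
p_new = Polynomial([3, 2, 1])   # new: degree zero first

# Both objects evaluate to the same polynomial
assert p_old(2.0) == p_new(2.0) == 11.0
```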
### Quick Reference

The following table highlights some of the main differences between the legacy polynomial module and the polynomial package for common tasks. The [`Polynomial`](generated/numpy.polynomial.polynomial.polynomial#numpy.polynomial.polynomial.Polynomial "numpy.polynomial.polynomial.Polynomial") class is imported for brevity:

```
from numpy.polynomial import Polynomial
```

| **How to** | Legacy ([`numpy.poly1d`](generated/numpy.poly1d#numpy.poly1d "numpy.poly1d")) | [`numpy.polynomial`](routines.polynomials.package#module-numpy.polynomial "numpy.polynomial") |
| --- | --- | --- |
| Create a polynomial object from coefficients [1](#id2) | `p = np.poly1d([1, 2, 3])` | `p = Polynomial([3, 2, 1])` |
| Create a polynomial object from roots | `r = np.poly([-1, 1])` `p = np.poly1d(r)` | `p = Polynomial.fromroots([-1, 1])` |
| Fit a polynomial of degree `deg` to data | `np.polyfit(x, y, deg)` | `Polynomial.fit(x, y, deg)` |

[1](#id1) Note the reversed ordering of the coefficients.

### Transition Guide

There are significant differences between `numpy.lib.polynomial` and [`numpy.polynomial`](routines.polynomials.package#module-numpy.polynomial "numpy.polynomial"). The most significant difference is the ordering of the coefficients for the polynomial expressions. The various routines in [`numpy.polynomial`](routines.polynomials.package#module-numpy.polynomial "numpy.polynomial") all deal with series whose coefficients go from degree zero upward, which is the *reverse order* of the poly1d convention. The easy way to remember this is that indices correspond to degree, i.e., `coef[i]` is the coefficient of the term of degree *i*.

Though the difference in convention may be confusing, it is straightforward to convert from the legacy polynomial API to the new. For example, the following demonstrates how you would convert a [`numpy.poly1d`](generated/numpy.poly1d#numpy.poly1d "numpy.poly1d") instance representing the expression \(x^{2} + 2x + 3\) to a [`Polynomial`](generated/numpy.polynomial.polynomial.polynomial#numpy.polynomial.polynomial.Polynomial "numpy.polynomial.polynomial.Polynomial") instance representing the same expression:

```
>>> p1d = np.poly1d([1, 2, 3])
>>> p = np.polynomial.Polynomial(p1d.coef[::-1])
```

In addition to the `coef` attribute, polynomials from the polynomial package also have `domain` and `window` attributes.
These attributes are most relevant when fitting polynomials to data, though it should be noted that polynomials with different `domain` and `window` attributes are not considered equal, and can’t be mixed in arithmetic:

```
>>> p1 = np.polynomial.Polynomial([1, 2, 3])
>>> p1
Polynomial([1., 2., 3.], domain=[-1, 1], window=[-1, 1])
>>> p2 = np.polynomial.Polynomial([1, 2, 3], domain=[-2, 2])
>>> p1 == p2
False
>>> p1 + p2
Traceback (most recent call last):
    ...
TypeError: Domains differ
```

See the documentation for the [convenience classes](https://numpy.org/doc/1.23/reference/routines.polynomials.classes) for further details on the `domain` and `window` attributes.

Another major difference between the legacy polynomial module and the polynomial package is polynomial fitting. In the old module, fitting was done via the [`polyfit`](generated/numpy.polyfit#numpy.polyfit "numpy.polyfit") function. In the polynomial package, the [`fit`](generated/numpy.polynomial.polynomial.polynomial.fit#numpy.polynomial.polynomial.Polynomial.fit "numpy.polynomial.polynomial.Polynomial.fit") class method is preferred. For example, consider a simple linear fit to the following data:

```
In [1]: rng = np.random.default_rng()

In [2]: x = np.arange(10)

In [3]: y = np.arange(10) + rng.standard_normal(10)
```

With the legacy polynomial module, a linear fit (i.e.
polynomial of degree 1) could be applied to these data with [`polyfit`](generated/numpy.polyfit#numpy.polyfit "numpy.polyfit"):

```
In [4]: np.polyfit(x, y, deg=1)
Out[4]: array([0.91740556, 0.45014661])
```

With the new polynomial API, the [`fit`](generated/numpy.polynomial.polynomial.polynomial.fit#numpy.polynomial.polynomial.Polynomial.fit "numpy.polynomial.polynomial.Polynomial.fit") class method is preferred:

```
In [5]: p_fitted = np.polynomial.Polynomial.fit(x, y, deg=1)

In [6]: p_fitted
Out[6]: Polynomial([4.57847165, 4.12832504], domain=[0., 9.], window=[-1., 1.])
```

Note that the coefficients are given *in the scaled domain* defined by the linear mapping between the `window` and `domain`. [`convert`](generated/numpy.polynomial.polynomial.polynomial.convert#numpy.polynomial.polynomial.Polynomial.convert "numpy.polynomial.polynomial.Polynomial.convert") can be used to get the coefficients in the unscaled data domain.

```
In [7]: p_fitted.convert()
Out[7]: Polynomial([0.45014661, 0.91740556], domain=[-1., 1.], window=[-1., 1.])
```

Documentation for the polynomial Package
----------------------------------------

In addition to standard power series polynomials, the polynomial package provides several additional kinds of polynomials including Chebyshev, Hermite (two subtypes), Laguerre, and Legendre polynomials. Each of these has an associated `convenience class` available from the [`numpy.polynomial`](routines.polynomials.package#module-numpy.polynomial "numpy.polynomial") namespace that provides a consistent interface for working with polynomials regardless of their type.
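That consistent interface means the same `fit`, call, and `deriv` methods work for every kind of basis. A small sketch, using quadratic sample data chosen for illustration:

```python
import numpy as np
from numpy.polynomial import Polynomial, Chebyshev

x = np.linspace(-1, 1, 50)
y = 1 + 2 * x + 3 * x**2      # exact quadratic, so a degree-2 fit recovers it

# The same .fit / __call__ / .deriv interface works for every kind
for cls in (Polynomial, Chebyshev):
    p = cls.fit(x, y, deg=2)
    assert np.allclose(p(x), y)               # both bases represent it exactly
    assert np.allclose(p.deriv()(0.0), 2.0)   # d/dx (1 + 2x + 3x**2) at 0 is 2
```

Only the internal coefficient representation differs between the classes; evaluation and calculus behave identically.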
* [Using the Convenience Classes](routines.polynomials.classes)

Documentation pertaining to specific functions defined for each kind of polynomial individually can be found in the corresponding module documentation:

* [Power Series (`numpy.polynomial.polynomial`)](routines.polynomials.polynomial)
* [Chebyshev Series (`numpy.polynomial.chebyshev`)](routines.polynomials.chebyshev)
* [Hermite Series, “Physicists” (`numpy.polynomial.hermite`)](routines.polynomials.hermite)
* [HermiteE Series, “Probabilists” (`numpy.polynomial.hermite_e`)](routines.polynomials.hermite_e)
* [Laguerre Series (`numpy.polynomial.laguerre`)](routines.polynomials.laguerre)
* [Legendre Series (`numpy.polynomial.legendre`)](routines.polynomials.legendre)
* [Polyutils](routines.polynomials.polyutils)

Documentation for Legacy Polynomials
------------------------------------

* [Poly1d](routines.polynomials.poly1d)
  + [Basics](routines.polynomials.poly1d#basics)
  + [Fitting](routines.polynomials.poly1d#fitting)
  + [Calculus](routines.polynomials.poly1d#calculus)
  + [Arithmetic](routines.polynomials.poly1d#arithmetic)
  + [Warnings](routines.polynomials.poly1d#warnings)

© 2005–2022 NumPy Developers Licensed under the 3-clause BSD License. <https://numpy.org/doc/1.23/reference/routines.polynomials.html>

Random sampling (numpy.random)
==============================

Numpy’s random number routines produce pseudo random numbers using combinations of a [`BitGenerator`](bit_generators/generated/numpy.random.bitgenerator#numpy.random.BitGenerator "numpy.random.BitGenerator") to create sequences and a [`Generator`](generator#numpy.random.Generator "numpy.random.Generator") to use those sequences to sample from different statistical distributions:

* BitGenerators: Objects that generate random numbers. These are typically unsigned integer words filled with sequences of either 32 or 64 random bits.
* Generators: Objects that transform sequences of random bits from a BitGenerator into sequences of numbers that follow a specific probability distribution (such as uniform, Normal or Binomial) within a specified interval.

Since Numpy version 1.17.0 the Generator can be initialized with a number of different BitGenerators. It exposes many different probability distributions. See [NEP 19](https://www.numpy.org/neps/nep-0019-rng-policy.html) for context on the updated random Numpy number routines. The legacy [`RandomState`](legacy#numpy.random.RandomState "numpy.random.RandomState") random number routines are still available, but limited to a single BitGenerator. See [What’s New or Different](new-or-different#new-or-different) for a complete list of improvements and differences from the legacy `RandomState`.

For convenience and backward compatibility, a single [`RandomState`](legacy#numpy.random.RandomState "numpy.random.RandomState") instance’s methods are imported into the numpy.random namespace, see [Legacy Random Generation](legacy#legacy) for the complete list.

Quick Start
-----------

Call [`default_rng`](generator#numpy.random.default_rng "numpy.random.default_rng") to get a new instance of a [`Generator`](generator#numpy.random.Generator "numpy.random.Generator"), then call its methods to obtain samples from different distributions. By default, [`Generator`](generator#numpy.random.Generator "numpy.random.Generator") uses bits provided by [`PCG64`](bit_generators/pcg64#numpy.random.PCG64 "numpy.random.PCG64") which has better statistical properties than the legacy [`MT19937`](bit_generators/mt19937#numpy.random.MT19937 "numpy.random.MT19937") used in [`RandomState`](legacy#numpy.random.RandomState "numpy.random.RandomState").
```
# Do this (new version)
from numpy.random import default_rng

rng = default_rng()
vals = rng.standard_normal(10)
more_vals = rng.standard_normal(10)

# instead of this (legacy version)
from numpy import random

vals = random.standard_normal(10)
more_vals = random.standard_normal(10)
```

[`Generator`](generator#numpy.random.Generator "numpy.random.Generator") can be used as a replacement for [`RandomState`](legacy#numpy.random.RandomState "numpy.random.RandomState"). Both class instances hold an internal [`BitGenerator`](bit_generators/generated/numpy.random.bitgenerator#numpy.random.BitGenerator "numpy.random.BitGenerator") instance to provide the bit stream; it is accessible as `gen.bit_generator`. Some long-overdue API cleanup means that legacy and compatibility methods have been removed from [`Generator`](generator#numpy.random.Generator "numpy.random.Generator"):

| [`RandomState`](legacy#numpy.random.RandomState "numpy.random.RandomState") | [`Generator`](generator#numpy.random.Generator "numpy.random.Generator") | Notes |
| --- | --- | --- |
| `random_sample`, `rand` | `random` | Compatible with [`random.random`](https://docs.python.org/3/library/random.html#random.random "(in Python v3.10)") |
| `randint`, `random_integers` | `integers` | Add an `endpoint` kwarg |
| `tomaxint` | removed | Use `integers(0, np.iinfo(np.int_).max, endpoint=False)` |
| `seed` | removed | Use [`SeedSequence.spawn`](bit_generators/generated/numpy.random.seedsequence.spawn#numpy.random.SeedSequence.spawn "numpy.random.SeedSequence.spawn") |

See [What’s New or Different](new-or-different#new-or-different) for more information.

Something like the following code can be used to support both `RandomState` and `Generator`, with the understanding that the interfaces are slightly different:

```
try:
    rng_integers = rng.integers
except AttributeError:
    rng_integers = rng.randint

a = rng_integers(1000)
```

Seeds can be passed to any of the BitGenerators.
The provided value is mixed via [`SeedSequence`](bit_generators/generated/numpy.random.seedsequence#numpy.random.SeedSequence "numpy.random.SeedSequence") to spread a possible sequence of seeds across a wider range of initialization states for the BitGenerator. Here [`PCG64`](bit_generators/pcg64#numpy.random.PCG64 "numpy.random.PCG64") is used and is wrapped with a [`Generator`](generator#numpy.random.Generator "numpy.random.Generator").

```
from numpy.random import Generator, PCG64

rng = Generator(PCG64(12345))
rng.standard_normal()
```

Here we use [`default_rng`](generator#numpy.random.default_rng "numpy.random.default_rng") to create an instance of [`Generator`](generator#numpy.random.Generator "numpy.random.Generator") to generate a random float:

```
>>> import numpy as np
>>> rng = np.random.default_rng(12345)
>>> print(rng)
Generator(PCG64)
>>> rfloat = rng.random()
>>> rfloat
0.22733602246716966
>>> type(rfloat)
<class 'float'>
```

Here we use [`default_rng`](generator#numpy.random.default_rng "numpy.random.default_rng") to create an instance of [`Generator`](generator#numpy.random.Generator "numpy.random.Generator") to generate 3 random integers between 0 (inclusive) and 10 (exclusive):

```
>>> import numpy as np
>>> rng = np.random.default_rng(12345)
>>> rints = rng.integers(low=0, high=10, size=3)
>>> rints
array([6, 2, 7])
>>> type(rints[0])
<class 'numpy.int64'>
```

Introduction
------------

The new infrastructure takes a different approach to producing random numbers from the [`RandomState`](legacy#numpy.random.RandomState "numpy.random.RandomState") object. Random number generation is separated into two components, a bit generator and a random generator. The [`BitGenerator`](bit_generators/generated/numpy.random.bitgenerator#numpy.random.BitGenerator "numpy.random.BitGenerator") has a limited set of responsibilities. It manages state and provides functions to produce random doubles and random unsigned 32- and 64-bit values.
The [`random generator`](generator#numpy.random.Generator "numpy.random.Generator") takes the bit generator-provided stream and transforms it into more useful distributions, e.g., simulated normal random values. This structure allows alternative bit generators to be used with little code duplication. The [`Generator`](generator#numpy.random.Generator "numpy.random.Generator") is the user-facing object that is nearly identical to the legacy [`RandomState`](legacy#numpy.random.RandomState "numpy.random.RandomState"). It accepts a bit generator instance as an argument. The default is currently [`PCG64`](bit_generators/pcg64#numpy.random.PCG64 "numpy.random.PCG64") but this may change in future versions. As a convenience NumPy provides the [`default_rng`](generator#numpy.random.default_rng "numpy.random.default_rng") function to hide these details:

```
>>> from numpy.random import default_rng
>>> rng = default_rng(12345)
>>> print(rng)
Generator(PCG64)
>>> print(rng.random())
0.22733602246716966
```

One can also instantiate [`Generator`](generator#numpy.random.Generator "numpy.random.Generator") directly with a [`BitGenerator`](bit_generators/generated/numpy.random.bitgenerator#numpy.random.BitGenerator "numpy.random.BitGenerator") instance.
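As a quick check of that equivalence (assuming the default bit generator is still PCG64, as stated above), the convenience constructor and the explicit construction produce the same stream for the same seed:

```python
from numpy.random import Generator, PCG64, default_rng

a = default_rng(12345)          # convenience constructor
b = Generator(PCG64(12345))     # explicit BitGenerator instance

# Both generators start from identical PCG64 states
x, y = a.random(), b.random()
```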
To use the default [`PCG64`](bit_generators/pcg64#numpy.random.PCG64 "numpy.random.PCG64") bit generator, one can instantiate it directly and pass it to [`Generator`](generator#numpy.random.Generator "numpy.random.Generator"):

```
>>> from numpy.random import Generator, PCG64
>>> rng = Generator(PCG64(12345))
>>> print(rng)
Generator(PCG64)
```

Similarly, to use the older [`MT19937`](bit_generators/mt19937#numpy.random.MT19937 "numpy.random.MT19937") bit generator (not recommended), one can instantiate it directly and pass it to [`Generator`](generator#numpy.random.Generator "numpy.random.Generator"):

```
>>> from numpy.random import Generator, MT19937
>>> rng = Generator(MT19937(12345))
>>> print(rng)
Generator(MT19937)
```

### What’s New or Different

Warning

The Box-Muller method used to produce NumPy’s normals is no longer available in [`Generator`](generator#numpy.random.Generator "numpy.random.Generator"). It is not possible to reproduce the exact random values using Generator for the normal distribution or any other distribution that relies on the normal, such as [`RandomState.gamma`](generated/numpy.random.randomstate.gamma#numpy.random.RandomState.gamma "numpy.random.RandomState.gamma") or [`RandomState.standard_t`](generated/numpy.random.randomstate.standard_t#numpy.random.RandomState.standard_t "numpy.random.RandomState.standard_t"). If you require bitwise backward compatible streams, use [`RandomState`](legacy#numpy.random.RandomState "numpy.random.RandomState").

* The Generator’s normal, exponential and gamma functions use 256-step Ziggurat methods which are 2-10 times faster than NumPy’s Box-Muller or inverse CDF implementations.
* Optional `dtype` argument that accepts `np.float32` or `np.float64` to produce either single or double precision uniform random variables for select distributions * Optional `out` argument that allows existing arrays to be filled for select distributions * All BitGenerators can produce doubles, uint64s and uint32s via CTypes ([`PCG64.ctypes`](bit_generators/generated/numpy.random.pcg64.ctypes#numpy.random.PCG64.ctypes "numpy.random.PCG64.ctypes")) and CFFI ([`PCG64.cffi`](bit_generators/generated/numpy.random.pcg64.cffi#numpy.random.PCG64.cffi "numpy.random.PCG64.cffi")). This allows the bit generators to be used in numba. * The bit generators can be used in downstream projects via [Cython](extending#random-cython). * [`Generator.integers`](generated/numpy.random.generator.integers#numpy.random.Generator.integers "numpy.random.Generator.integers") is now the canonical way to generate integer random numbers from a discrete uniform distribution. The `rand` and `randn` methods are only available through the legacy [`RandomState`](legacy#numpy.random.RandomState "numpy.random.RandomState"). The `endpoint` keyword can be used to specify open or closed intervals. This replaces both `randint` and the deprecated `random_integers`. * [`Generator.random`](generated/numpy.random.generator.random#numpy.random.Generator.random "numpy.random.Generator.random") is now the canonical way to generate floating-point random numbers, which replaces [`RandomState.random_sample`](generated/numpy.random.randomstate.random_sample#numpy.random.RandomState.random_sample "numpy.random.RandomState.random_sample"), `RandomState.sample`, and `RandomState.ranf`. This is consistent with Python’s [`random.random`](https://docs.python.org/3/library/random.html#random.random "(in Python v3.10)"). * All BitGenerators in numpy use [`SeedSequence`](bit_generators/generated/numpy.random.seedsequence#numpy.random.SeedSequence "numpy.random.SeedSequence") to convert seeds into initialized states. 
* The addition of an `axis` keyword argument to methods such as [`Generator.choice`](generated/numpy.random.generator.choice#numpy.random.Generator.choice "numpy.random.Generator.choice"), [`Generator.permutation`](generated/numpy.random.generator.permutation#numpy.random.Generator.permutation "numpy.random.Generator.permutation"), and [`Generator.shuffle`](generated/numpy.random.generator.shuffle#numpy.random.Generator.shuffle "numpy.random.Generator.shuffle") improves support for sampling from and shuffling multi-dimensional arrays.

See [What’s New or Different](new-or-different#new-or-different) for a complete list of improvements and differences from the traditional `RandomState`.

### Parallel Generation

The included generators can be used in parallel, distributed applications in one of three ways:

* [SeedSequence spawning](parallel#seedsequence-spawn)
* [Independent Streams](parallel#independent-streams)
* [Jumping the BitGenerator state](parallel#parallel-jumped)

Users with a very large amount of parallelism will want to consult [Upgrading PCG64 with PCG64DXSM](upgrading-pcg64#upgrading-pcg64).
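The first of those options, SeedSequence spawning, can be sketched in a few lines: a root `SeedSequence` deterministically spawns child sequences, each of which seeds an independent generator that can be handed to a separate worker or process.

```python
from numpy.random import SeedSequence, default_rng

root = SeedSequence(12345)
child_seeds = root.spawn(4)                  # four independent child sequences
streams = [default_rng(s) for s in child_seeds]

# Each stream produces its own statistically independent draws
draws = [rng.random() for rng in streams]
```

Because spawning is deterministic for a fixed root seed and spawn order, re-running this snippet reproduces the same four streams.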
Concepts
--------

* [Random Generator](generator)
* [Legacy Generator (RandomState)](legacy)
* [Bit Generators](bit_generators/index)
* [Seeding and Entropy](bit_generators/index#seeding-and-entropy)
* [Upgrading PCG64 with PCG64DXSM](upgrading-pcg64)

Features
--------

* [Parallel Applications](parallel)
  + [`SeedSequence` spawning](parallel#seedsequence-spawning)
  + [Independent Streams](parallel#independent-streams)
  + [Jumping the BitGenerator state](parallel#jumping-the-bitgenerator-state)
* [Multithreaded Generation](multithreading)
* [What’s New or Different](new-or-different)
* [Comparing Performance](performance)
  + [Recommendation](performance#recommendation)
  + [Timings](performance#timings)
  + [Performance on different Operating Systems](performance#performance-on-different-operating-systems)
* [C API for random](c-api)
* [Examples of using Numba, Cython, CFFI](extending)
  + [Numba](extending#numba)
  + [Cython](extending#cython)
  + [CFFI](extending#cffi)
  + [New Bit Generators](extending#new-bit-generators)
  + [Examples](extending#examples)

### Original Source of the Generator and BitGenerators

This package was developed independently of NumPy and was integrated in version 1.17.0. The original repo is at <https://github.com/bashtage/randomgen>.

© 2005–2022 NumPy Developers Licensed under the 3-clause BSD License. <https://numpy.org/doc/1.23/reference/random/index.html>

Set routines
============

| | |
| --- | --- |
| [`lib.arraysetops`](generated/numpy.lib.arraysetops#module-numpy.lib.arraysetops "numpy.lib.arraysetops") | Set operations for arrays based on sorting. |

Making proper sets
------------------

| | |
| --- | --- |
| [`unique`](generated/numpy.unique#numpy.unique "numpy.unique")(ar[, return_index, return_inverse, ...]) | Find the unique elements of an array.
| Boolean operations ------------------ | | | | --- | --- | | [`in1d`](generated/numpy.in1d#numpy.in1d "numpy.in1d")(ar1, ar2[, assume_unique, invert]) | Test whether each element of a 1-D array is also present in a second array. | | [`intersect1d`](generated/numpy.intersect1d#numpy.intersect1d "numpy.intersect1d")(ar1, ar2[, assume_unique, ...]) | Find the intersection of two arrays. | | [`isin`](generated/numpy.isin#numpy.isin "numpy.isin")(element, test_elements[, ...]) | Calculates `element in test_elements`, broadcasting over `element` only. | | [`setdiff1d`](generated/numpy.setdiff1d#numpy.setdiff1d "numpy.setdiff1d")(ar1, ar2[, assume_unique]) | Find the set difference of two arrays. | | [`setxor1d`](generated/numpy.setxor1d#numpy.setxor1d "numpy.setxor1d")(ar1, ar2[, assume_unique]) | Find the set exclusive-or of two arrays. | | [`union1d`](generated/numpy.union1d#numpy.union1d "numpy.union1d")(ar1, ar2) | Find the union of two arrays. | © 2005–2022 NumPy Developers Licensed under the 3-clause BSD License. <https://numpy.org/doc/1.23/reference/routines.set.htmlSorting, searching, and counting ================================ Sorting ------- | | | | --- | --- | | [`sort`](generated/numpy.sort#numpy.sort "numpy.sort")(a[, axis, kind, order]) | Return a sorted copy of an array. | | [`lexsort`](generated/numpy.lexsort#numpy.lexsort "numpy.lexsort")(keys[, axis]) | Perform an indirect stable sort using a sequence of keys. | | [`argsort`](generated/numpy.argsort#numpy.argsort "numpy.argsort")(a[, axis, kind, order]) | Returns the indices that would sort an array. | | [`ndarray.sort`](generated/numpy.ndarray.sort#numpy.ndarray.sort "numpy.ndarray.sort")([axis, kind, order]) | Sort an array in-place. | | [`msort`](generated/numpy.msort#numpy.msort "numpy.msort")(a) | Return a copy of an array sorted along the first axis. 
| | [`sort_complex`](generated/numpy.sort_complex#numpy.sort_complex "numpy.sort_complex")(a) | Sort a complex array using the real part first, then the imaginary part. | | [`partition`](generated/numpy.partition#numpy.partition "numpy.partition")(a, kth[, axis, kind, order]) | Return a partitioned copy of an array. | | [`argpartition`](generated/numpy.argpartition#numpy.argpartition "numpy.argpartition")(a, kth[, axis, kind, order]) | Perform an indirect partition along the given axis using the algorithm specified by the `kind` keyword. | Searching --------- | | | | --- | --- | | [`argmax`](generated/numpy.argmax#numpy.argmax "numpy.argmax")(a[, axis, out, keepdims]) | Returns the indices of the maximum values along an axis. | | [`nanargmax`](generated/numpy.nanargmax#numpy.nanargmax "numpy.nanargmax")(a[, axis, out, keepdims]) | Return the indices of the maximum values in the specified axis ignoring NaNs. | | [`argmin`](generated/numpy.argmin#numpy.argmin "numpy.argmin")(a[, axis, out, keepdims]) | Returns the indices of the minimum values along an axis. | | [`nanargmin`](generated/numpy.nanargmin#numpy.nanargmin "numpy.nanargmin")(a[, axis, out, keepdims]) | Return the indices of the minimum values in the specified axis ignoring NaNs. | | [`argwhere`](generated/numpy.argwhere#numpy.argwhere "numpy.argwhere")(a) | Find the indices of array elements that are non-zero, grouped by element. | | [`nonzero`](generated/numpy.nonzero#numpy.nonzero "numpy.nonzero")(a) | Return the indices of the elements that are non-zero. | | [`flatnonzero`](generated/numpy.flatnonzero#numpy.flatnonzero "numpy.flatnonzero")(a) | Return indices that are non-zero in the flattened version of a. | | [`where`](generated/numpy.where#numpy.where "numpy.where")(condition, [x, y], /) | Return elements chosen from `x` or `y` depending on `condition`. 
| | [`searchsorted`](generated/numpy.searchsorted#numpy.searchsorted "numpy.searchsorted")(a, v[, side, sorter]) | Find indices where elements should be inserted to maintain order. | | [`extract`](generated/numpy.extract#numpy.extract "numpy.extract")(condition, arr) | Return the elements of an array that satisfy some condition. | Counting -------- | | | | --- | --- | | [`count_nonzero`](generated/numpy.count_nonzero#numpy.count_nonzero "numpy.count_nonzero")(a[, axis, keepdims]) | Counts the number of non-zero values in the array `a`. | © 2005–2022 NumPy Developers Licensed under the 3-clause BSD License. <https://numpy.org/doc/1.23/reference/routines.sort.htmlStatistics ========== Order statistics ---------------- | | | | --- | --- | | [`ptp`](generated/numpy.ptp#numpy.ptp "numpy.ptp")(a[, axis, out, keepdims]) | Range of values (maximum - minimum) along an axis. | | [`percentile`](generated/numpy.percentile#numpy.percentile "numpy.percentile")(a, q[, axis, out, ...]) | Compute the q-th percentile of the data along the specified axis. | | [`nanpercentile`](generated/numpy.nanpercentile#numpy.nanpercentile "numpy.nanpercentile")(a, q[, axis, out, ...]) | Compute the qth percentile of the data along the specified axis, while ignoring nan values. | | [`quantile`](generated/numpy.quantile#numpy.quantile "numpy.quantile")(a, q[, axis, out, overwrite_input, ...]) | Compute the q-th quantile of the data along the specified axis. | | [`nanquantile`](generated/numpy.nanquantile#numpy.nanquantile "numpy.nanquantile")(a, q[, axis, out, ...]) | Compute the qth quantile of the data along the specified axis, while ignoring nan values. | Averages and variances ---------------------- | | | | --- | --- | | [`median`](generated/numpy.median#numpy.median "numpy.median")(a[, axis, out, overwrite_input, keepdims]) | Compute the median along the specified axis. 
| | [`average`](generated/numpy.average#numpy.average "numpy.average")(a[, axis, weights, returned, keepdims]) | Compute the weighted average along the specified axis. | | [`mean`](generated/numpy.mean#numpy.mean "numpy.mean")(a[, axis, dtype, out, keepdims, where]) | Compute the arithmetic mean along the specified axis. | | [`std`](generated/numpy.std#numpy.std "numpy.std")(a[, axis, dtype, out, ddof, keepdims, where]) | Compute the standard deviation along the specified axis. | | [`var`](generated/numpy.var#numpy.var "numpy.var")(a[, axis, dtype, out, ddof, keepdims, where]) | Compute the variance along the specified axis. | | [`nanmedian`](generated/numpy.nanmedian#numpy.nanmedian "numpy.nanmedian")(a[, axis, out, overwrite_input, ...]) | Compute the median along the specified axis, while ignoring NaNs. | | [`nanmean`](generated/numpy.nanmean#numpy.nanmean "numpy.nanmean")(a[, axis, dtype, out, keepdims, where]) | Compute the arithmetic mean along the specified axis, ignoring NaNs. | | [`nanstd`](generated/numpy.nanstd#numpy.nanstd "numpy.nanstd")(a[, axis, dtype, out, ddof, ...]) | Compute the standard deviation along the specified axis, while ignoring NaNs. | | [`nanvar`](generated/numpy.nanvar#numpy.nanvar "numpy.nanvar")(a[, axis, dtype, out, ddof, ...]) | Compute the variance along the specified axis, while ignoring NaNs. | Correlating ----------- | | | | --- | --- | | [`corrcoef`](generated/numpy.corrcoef#numpy.corrcoef "numpy.corrcoef")(x[, y, rowvar, bias, ddof, dtype]) | Return Pearson product-moment correlation coefficients. | | [`correlate`](generated/numpy.correlate#numpy.correlate "numpy.correlate")(a, v[, mode]) | Cross-correlation of two 1-dimensional sequences. | | [`cov`](generated/numpy.cov#numpy.cov "numpy.cov")(m[, y, rowvar, bias, ddof, fweights, ...]) | Estimate a covariance matrix, given data and weights. 
| Histograms ---------- | | | | --- | --- | | [`histogram`](generated/numpy.histogram#numpy.histogram "numpy.histogram")(a[, bins, range, normed, weights, ...]) | Compute the histogram of a dataset. | | [`histogram2d`](generated/numpy.histogram2d#numpy.histogram2d "numpy.histogram2d")(x, y[, bins, range, normed, ...]) | Compute the bi-dimensional histogram of two data samples. | | [`histogramdd`](generated/numpy.histogramdd#numpy.histogramdd "numpy.histogramdd")(sample[, bins, range, normed, ...]) | Compute the multidimensional histogram of some data. | | [`bincount`](generated/numpy.bincount#numpy.bincount "numpy.bincount")(x, /[, weights, minlength]) | Count number of occurrences of each value in array of non-negative ints. | | [`histogram_bin_edges`](generated/numpy.histogram_bin_edges#numpy.histogram_bin_edges "numpy.histogram_bin_edges")(a[, bins, range, weights]) | Function to calculate only the edges of the bins used by the [`histogram`](generated/numpy.histogram#numpy.histogram "numpy.histogram") function. | | [`digitize`](generated/numpy.digitize#numpy.digitize "numpy.digitize")(x, bins[, right]) | Return the indices of the bins to which each value in input array belongs. | © 2005–2022 NumPy Developers Licensed under the 3-clause BSD License. <https://numpy.org/doc/1.23/reference/routines.statistics.htmlTest Support (numpy.testing) ============================ Common test support for all numpy test scripts. This single module should provide all the common functionality for numpy tests in a single location, so that [test scripts](../dev/development_environment#development-environment) can just import it and work right away. 
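A minimal sketch of that usage, relying only on the public `numpy.testing` asserts:

```python
import numpy as np
from numpy.testing import assert_allclose, assert_array_equal

expected = np.array([1.0, 2.0, 3.0])
computed = expected * (1.0 + 1e-12)   # tiny floating-point drift

# Tolerance-based comparison: passes despite the drift
assert_allclose(computed, expected, rtol=1e-7)

# Exact comparison for arrays that should match element-for-element
assert_array_equal(np.sort(expected[::-1]), expected)
```

An exact `assert_array_equal` on `computed` would fail here, which is why the tolerance-based asserts are recommended for floating-point results.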
For background, see the [Testing Guidelines](testing#testing-guidelines) Asserts ------- | | | | --- | --- | | [`assert_allclose`](generated/numpy.testing.assert_allclose#numpy.testing.assert_allclose "numpy.testing.assert_allclose")(actual, desired[, rtol, ...]) | Raises an AssertionError if two objects are not equal up to desired tolerance. | | [`assert_array_almost_equal_nulp`](generated/numpy.testing.assert_array_almost_equal_nulp#numpy.testing.assert_array_almost_equal_nulp "numpy.testing.assert_array_almost_equal_nulp")(x, y[, nulp]) | Compare two arrays relatively to their spacing. | | [`assert_array_max_ulp`](generated/numpy.testing.assert_array_max_ulp#numpy.testing.assert_array_max_ulp "numpy.testing.assert_array_max_ulp")(a, b[, maxulp, dtype]) | Check that all items of arrays differ in at most N Units in the Last Place. | | [`assert_array_equal`](generated/numpy.testing.assert_array_equal#numpy.testing.assert_array_equal "numpy.testing.assert_array_equal")(x, y[, err_msg, verbose]) | Raises an AssertionError if two array_like objects are not equal. | | [`assert_array_less`](generated/numpy.testing.assert_array_less#numpy.testing.assert_array_less "numpy.testing.assert_array_less")(x, y[, err_msg, verbose]) | Raises an AssertionError if two array_like objects are not ordered by less than. | | [`assert_equal`](generated/numpy.testing.assert_equal#numpy.testing.assert_equal "numpy.testing.assert_equal")(actual, desired[, err_msg, verbose]) | Raises an AssertionError if two objects are not equal. | | [`assert_raises`](generated/numpy.testing.assert_raises#numpy.testing.assert_raises "numpy.testing.assert_raises")(assert_raises) | Fail unless an exception of class exception_class is thrown by callable when invoked with arguments args and keyword arguments kwargs. | | [`assert_raises_regex`](generated/numpy.testing.assert_raises_regex#numpy.testing.assert_raises_regex "numpy.testing.assert_raises_regex")(exception_class, ...) 
| Fail unless an exception of class exception_class and with message that matches expected_regexp is thrown by callable when invoked with arguments args and keyword arguments kwargs. | | [`assert_warns`](generated/numpy.testing.assert_warns#numpy.testing.assert_warns "numpy.testing.assert_warns")(warning_class, *args, **kwargs) | Fail unless the given callable throws the specified warning. | | [`assert_no_warnings`](generated/numpy.testing.assert_no_warnings#numpy.testing.assert_no_warnings "numpy.testing.assert_no_warnings")(*args, **kwargs) | Fail if the given callable produces any warnings. | | [`assert_no_gc_cycles`](generated/numpy.testing.assert_no_gc_cycles#numpy.testing.assert_no_gc_cycles "numpy.testing.assert_no_gc_cycles")(*args, **kwargs) | Fail if the given callable produces any reference cycles. | | [`assert_string_equal`](generated/numpy.testing.assert_string_equal#numpy.testing.assert_string_equal "numpy.testing.assert_string_equal")(actual, desired) | Test if two strings are equal. | Asserts (not recommended) ------------------------- It is recommended to use one of [`assert_allclose`](generated/numpy.testing.assert_allclose#numpy.testing.assert_allclose "numpy.testing.assert_allclose"), [`assert_array_almost_equal_nulp`](generated/numpy.testing.assert_array_almost_equal_nulp#numpy.testing.assert_array_almost_equal_nulp "numpy.testing.assert_array_almost_equal_nulp") or [`assert_array_max_ulp`](generated/numpy.testing.assert_array_max_ulp#numpy.testing.assert_array_max_ulp "numpy.testing.assert_array_max_ulp") instead of these functions for more consistent floating point comparisons. | | | | --- | --- | | [`assert_`](generated/numpy.testing.assert_#numpy.testing.assert_ "numpy.testing.assert_")(val[, msg]) | Assert that works in release mode. 
| | [`assert_almost_equal`](generated/numpy.testing.assert_almost_equal#numpy.testing.assert_almost_equal "numpy.testing.assert_almost_equal")(actual, desired[, ...]) | Raises an AssertionError if two items are not equal up to desired precision. | | [`assert_approx_equal`](generated/numpy.testing.assert_approx_equal#numpy.testing.assert_approx_equal "numpy.testing.assert_approx_equal")(actual, desired[, ...]) | Raises an AssertionError if two items are not equal up to significant digits. | | [`assert_array_almost_equal`](generated/numpy.testing.assert_array_almost_equal#numpy.testing.assert_array_almost_equal "numpy.testing.assert_array_almost_equal")(x, y[, decimal, ...]) | Raises an AssertionError if two objects are not equal up to desired precision. | | [`print_assert_equal`](generated/numpy.testing.print_assert_equal#numpy.testing.print_assert_equal "numpy.testing.print_assert_equal")(test_string, actual, desired) | Test if two objects are equal, and print an error message if test fails. | Decorators ---------- | | | | --- | --- | | [`dec.deprecated`](generated/numpy.testing.dec.deprecated#numpy.testing.dec.deprecated "numpy.testing.dec.deprecated")([conditional]) | Deprecated since version 1.21. | | [`dec.knownfailureif`](generated/numpy.testing.dec.knownfailureif#numpy.testing.dec.knownfailureif "numpy.testing.dec.knownfailureif")(fail_condition[, msg]) | Deprecated since version 1.21. | | [`dec.setastest`](generated/numpy.testing.dec.setastest#numpy.testing.dec.setastest "numpy.testing.dec.setastest")([tf]) | Deprecated since version 1.21. | | [`dec.skipif`](generated/numpy.testing.dec.skipif#numpy.testing.dec.skipif "numpy.testing.dec.skipif")(skip_condition[, msg]) | Deprecated since version 1.21. | | [`dec.slow`](generated/numpy.testing.dec.slow#numpy.testing.dec.slow "numpy.testing.dec.slow")(t) | Deprecated since version 1.21. 
| | [`decorate_methods`](generated/numpy.testing.decorate_methods#numpy.testing.decorate_methods "numpy.testing.decorate_methods")(cls, decorator[, testmatch]) | Apply a decorator to all methods in a class matching a regular expression. | Test Running ------------ | | | | --- | --- | | [`Tester`](generated/numpy.testing.tester#numpy.testing.Tester "numpy.testing.Tester") | alias of `numpy.testing._private.nosetester.NoseTester` | | [`clear_and_catch_warnings`](generated/numpy.testing.clear_and_catch_warnings#numpy.testing.clear_and_catch_warnings "numpy.testing.clear_and_catch_warnings")([record, modules]) | Context manager that resets warning registry for catching warnings | | [`measure`](generated/numpy.testing.measure#numpy.testing.measure "numpy.testing.measure")(code_str[, times, label]) | Return elapsed time for executing code in the namespace of the caller. | | [`run_module_suite`](generated/numpy.testing.run_module_suite#numpy.testing.run_module_suite "numpy.testing.run_module_suite")([file_to_run, argv]) | Run a test module. | | [`rundocs`](generated/numpy.testing.rundocs#numpy.testing.rundocs "numpy.testing.rundocs")([filename, raise_on_error]) | Run doctests found in the given file. | | [`suppress_warnings`](generated/numpy.testing.suppress_warnings#numpy.testing.suppress_warnings "numpy.testing.suppress_warnings")([forwarding_rule]) | Context manager and decorator doing much the same as `warnings.catch_warnings`. 
| Guidelines ---------- * [Testing Guidelines](testing) + [Introduction](testing#introduction) + [Testing NumPy](testing#testing-numpy) - [Running tests from inside Python](testing#running-tests-from-inside-python) - [Running tests from the command line](testing#running-tests-from-the-command-line) - [Other methods of running tests](testing#other-methods-of-running-tests) + [Writing your own tests](testing#writing-your-own-tests) - [Using C code in tests](testing#using-c-code-in-tests) - [Labeling tests](testing#labeling-tests) - [Easier setup and teardown functions / methods](testing#easier-setup-and-teardown-functions-methods) - [Parametric tests](testing#parametric-tests) - [Doctests](testing#doctests) - [`tests/`](testing#tests) - [`__init__.py` and `setup.py`](testing#init-py-and-setup-py) + [Tips & Tricks](testing#tips-tricks) - [Creating many similar tests](testing#creating-many-similar-tests) - [Known failures & skipping tests](testing#known-failures-skipping-tests) - [Tests on random data](testing#tests-on-random-data) - [Documentation for `numpy.test`](testing#documentation-for-numpy-test) © 2005–2022 NumPy Developers Licensed under the 3-clause BSD License. <https://numpy.org/doc/1.23/reference/routines.testing.htmlWindow functions ================ Various windows --------------- | | | | --- | --- | | [`bartlett`](generated/numpy.bartlett#numpy.bartlett "numpy.bartlett")(M) | Return the Bartlett window. | | [`blackman`](generated/numpy.blackman#numpy.blackman "numpy.blackman")(M) | Return the Blackman window. | | [`hamming`](generated/numpy.hamming#numpy.hamming "numpy.hamming")(M) | Return the Hamming window. | | [`hanning`](generated/numpy.hanning#numpy.hanning "numpy.hanning")(M) | Return the Hanning window. | | [`kaiser`](generated/numpy.kaiser#numpy.kaiser "numpy.kaiser")(M, beta) | Return the Kaiser window. | © 2005–2022 NumPy Developers Licensed under the 3-clause BSD License. 
<https://numpy.org/doc/1.23/reference/routines.window.html>

Python Types and C-Structures
=============================

Several new types are defined in the C-code. Most of these are accessible from Python, but a few are not exposed due to their limited use. Every new Python type has an associated [PyObject](https://docs.python.org/3/c-api/structures.html#c.PyObject "(in Python v3.10)")* with an internal structure that includes a pointer to a “method table” that defines how the new object behaves in Python. When you receive a Python object into C code, you always get a pointer to a [`PyObject`](https://docs.python.org/3/c-api/structures.html#c.PyObject "(in Python v3.10)") structure. Because a [`PyObject`](https://docs.python.org/3/c-api/structures.html#c.PyObject "(in Python v3.10)") structure is very generic and defines only [`PyObject_HEAD`](https://docs.python.org/3/c-api/structures.html#c.PyObject_HEAD "(in Python v3.10)"), by itself it is not very interesting. However, different objects contain more details after the [`PyObject_HEAD`](https://docs.python.org/3/c-api/structures.html#c.PyObject_HEAD "(in Python v3.10)") (but you have to cast to the correct type to access them — or use accessor functions or macros).

New Python Types Defined
------------------------

Python types are the functional equivalent in C of classes in Python. By constructing a new Python type you make available a new object for Python. The ndarray object is an example of a new type defined in C. New types are defined in C by two basic steps:

1. creating a C-structure (usually named `Py{Name}Object`) that is binary-compatible with the [`PyObject`](https://docs.python.org/3/c-api/structures.html#c.PyObject "(in Python v3.10)") structure itself but holds the additional information needed for that particular object;
2.
populating the [`PyTypeObject`](https://docs.python.org/3/c-api/type.html#c.PyTypeObject "(in Python v3.10)") table (pointed to by the ob_type member of the [`PyObject`](https://docs.python.org/3/c-api/structures.html#c.PyObject "(in Python v3.10)") structure) with pointers to functions that implement the desired behavior for the type.

Instead of special method names which define behavior for Python classes, there are “function tables” which point to functions that implement the desired results. Since Python 2.2, the PyTypeObject itself has become dynamic, which allows C types to be “sub-typed” from other C types in C, and sub-classed in Python. The children types inherit the attributes and methods from their parent(s).

There are two major new types: the ndarray ([`PyArray_Type`](#c.PyArray_Type "PyArray_Type")) and the ufunc ([`PyUFunc_Type`](#c.PyUFunc_Type "PyUFunc_Type")). Additional types play a supportive role: the [`PyArrayIter_Type`](#c.PyArrayIter_Type "PyArrayIter_Type"), the [`PyArrayMultiIter_Type`](#c.PyArrayMultiIter_Type "PyArrayMultiIter_Type"), and the [`PyArrayDescr_Type`](#c.PyArrayDescr_Type "PyArrayDescr_Type"). The [`PyArrayIter_Type`](#c.PyArrayIter_Type "PyArrayIter_Type") is the type for a flat iterator for an ndarray (the object that is returned when getting the flat attribute). The [`PyArrayMultiIter_Type`](#c.PyArrayMultiIter_Type "PyArrayMultiIter_Type") is the type of the object returned when calling `broadcast()`. It handles iteration and broadcasting over a collection of nested sequences. Also, the [`PyArrayDescr_Type`](#c.PyArrayDescr_Type "PyArrayDescr_Type") is the data-type-descriptor type whose instances describe the data. Finally, there are 21 new scalar-array types, which are new Python scalars corresponding to each of the fundamental data types available for arrays. An additional 10 other types are placeholders that allow the array scalars to fit into a hierarchy of actual Python types.
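Although these types are defined in C, their Python-level counterparts can be inspected directly; a small sketch of the correspondence:

```python
import numpy as np

a = np.arange(6).reshape(2, 3)

# np.ndarray is the Python-level face of PyArray_Type
assert type(a) is np.ndarray

# The flat iterator (PyArrayIter_Type) is exposed as np.flatiter
assert isinstance(a.flat, np.flatiter)

# Objects returned by broadcast() are PyArrayMultiIter_Type instances
assert isinstance(np.broadcast(a, np.arange(3)), np.broadcast)

# Data-type descriptors (PyArrayDescr_Type) appear as np.dtype instances
assert isinstance(a.dtype, np.dtype)

# Array scalars sit in a hierarchy rooted at np.generic
assert isinstance(a[0, 0], np.generic)
```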
### PyArray_Type and PyArrayObject

[PyTypeObject](https://docs.python.org/3/c-api/type.html#c.PyTypeObject "(in Python v3.10)") PyArray_Type

The Python type of the ndarray is [`PyArray_Type`](#c.PyArray_Type "PyArray_Type"). In C, every ndarray is a pointer to a [`PyArrayObject`](#c.PyArrayObject "PyArrayObject") structure. The ob_type member of this structure contains a pointer to the [`PyArray_Type`](#c.PyArray_Type "PyArray_Type") typeobject.

type PyArrayObject

type NPY_AO

The [`PyArrayObject`](#c.PyArrayObject "PyArrayObject") C-structure contains all of the required information for an array. All instances of an ndarray (and its subclasses) will have this structure. For future compatibility, these structure members should normally be accessed using the provided macros. If you need a shorter name, then you can make use of [`NPY_AO`](#c.NPY_AO "NPY_AO") (deprecated), which is defined to be equivalent to [`PyArrayObject`](#c.PyArrayObject "PyArrayObject"). Direct access to the struct fields is deprecated; use the `PyArray_*(arr)` form instead. As of NumPy 1.20, the size of this struct is not considered part of the NumPy ABI (see note at the end of the member list).

```
typedef struct PyArrayObject {
    PyObject_HEAD
    char *data;
    int nd;
    npy_intp *dimensions;
    npy_intp *strides;
    PyObject *base;
    PyArray_Descr *descr;
    int flags;
    PyObject *weakreflist;
    /* version dependent private members */
} PyArrayObject;
```

PyObject_HEAD

This is needed by all Python objects. It consists of (at least) a reference count member (`ob_refcnt`) and a pointer to the typeobject (`ob_type`). (Other elements may also be present if Python was compiled with special options; see Include/object.h in the Python source tree for more information). The ob_type member points to a Python type object.

char *data

Accessible via [`PyArray_DATA`](array#c.PyArray_DATA "PyArray_DATA"), this data member is a pointer to the first element of the array.
This pointer can (and normally should) be recast to the data type of the array.

int nd

An integer providing the number of dimensions for this array. When nd is 0, the array is sometimes called a rank-0 array. Such arrays have undefined dimensions and strides and cannot be accessed. Macro [`PyArray_NDIM`](array#c.PyArray_NDIM "PyArray_NDIM") defined in `ndarraytypes.h` points to this data member. [`NPY_MAXDIMS`](array#c.NPY_MAXDIMS "NPY_MAXDIMS") is the largest number of dimensions for any array.

[npy_intp](dtype#c.npy_intp "npy_intp") *dimensions

An array of integers providing the shape in each dimension as long as nd \(\geq\) 1. The integer is always large enough to hold a pointer on the platform, so the dimension size is only limited by memory. [`PyArray_DIMS`](array#c.PyArray_DIMS "PyArray_DIMS") is the macro associated with this data member.

[npy_intp](dtype#c.npy_intp "npy_intp") *strides

An array of integers providing for each dimension the number of bytes that must be skipped to get to the next element in that dimension. Associated with macro [`PyArray_STRIDES`](array#c.PyArray_STRIDES "PyArray_STRIDES").

[PyObject](https://docs.python.org/3/c-api/structures.html#c.PyObject "(in Python v3.10)") *base

Pointed to by [`PyArray_BASE`](array#c.PyArray_BASE "PyArray_BASE"), this member is used to hold a pointer to another Python object that is related to this array. There are two use cases:

* If this array does not own its own memory, then base points to the Python object that owns it (perhaps another array object).
* If this array has the [`NPY_ARRAY_WRITEBACKIFCOPY`](array#c.NPY_ARRAY_WRITEBACKIFCOPY "NPY_ARRAY_WRITEBACKIFCOPY") flag set, then this array is a working copy of a “misbehaved” array. When `PyArray_ResolveWritebackIfCopy` is called, the array pointed to by base will be updated with the contents of this array.

[PyArray_Descr](#c.PyArray_Descr "PyArray_Descr") *descr

A pointer to a data-type descriptor object (see below).
The data-type descriptor object is an instance of a new built-in type which allows a generic description of memory. There is a descriptor structure for each data type supported. This descriptor structure contains useful information about the type as well as a pointer to a table of function pointers to implement specific functionality. As the name suggests, it is associated with the macro [`PyArray_DESCR`](array#c.PyArray_DESCR "PyArray_DESCR"). intflags Pointed to by the macro [`PyArray_FLAGS`](array#c.PyArray_FLAGS "PyArray_FLAGS"), this data member represents the flags indicating how the memory pointed to by data is to be interpreted. Possible flags are [`NPY_ARRAY_C_CONTIGUOUS`](array#c.NPY_ARRAY_C_CONTIGUOUS "NPY_ARRAY_C_CONTIGUOUS"), [`NPY_ARRAY_F_CONTIGUOUS`](array#c.NPY_ARRAY_F_CONTIGUOUS "NPY_ARRAY_F_CONTIGUOUS"), [`NPY_ARRAY_OWNDATA`](array#c.NPY_ARRAY_OWNDATA "NPY_ARRAY_OWNDATA"), [`NPY_ARRAY_ALIGNED`](array#c.NPY_ARRAY_ALIGNED "NPY_ARRAY_ALIGNED"), [`NPY_ARRAY_WRITEABLE`](array#c.NPY_ARRAY_WRITEABLE "NPY_ARRAY_WRITEABLE"), [`NPY_ARRAY_WRITEBACKIFCOPY`](array#c.NPY_ARRAY_WRITEBACKIFCOPY "NPY_ARRAY_WRITEBACKIFCOPY"). [PyObject](https://docs.python.org/3/c-api/structures.html#c.PyObject "(in Python v3.10)")*weakreflist This member allows array objects to have weak references (using the weakref module). Note Further members are considered private and version dependent. If the size of the struct is important for your code, special care must be taken. A possible use-case when this is relevant is subclassing in C. If your code relies on `sizeof(PyArrayObject)` to be constant, you must add the following check at import time: ``` if (sizeof(PyArrayObject) < PyArray_Type.tp_basicsize) { PyErr_SetString(PyExc_ImportError, "Binary incompatibility with NumPy, must recompile/update X."); return NULL; } ``` To ensure that your code does not have to be compiled for a specific NumPy version, you may add a constant, leaving room for changes in NumPy. 
A solution guaranteed to be compatible with any future NumPy version requires the use of a runtime-calculated offset and allocation size. ### PyArrayDescr_Type and PyArray_Descr [PyTypeObject](https://docs.python.org/3/c-api/type.html#c.PyTypeObject "(in Python v3.10)")PyArrayDescr_Type The [`PyArrayDescr_Type`](#c.PyArrayDescr_Type "PyArrayDescr_Type") is the built-in type of the data-type-descriptor objects used to describe how the bytes comprising the array are to be interpreted. There are 21 statically-defined [`PyArray_Descr`](#c.PyArray_Descr "PyArray_Descr") objects for the built-in data-types. While these participate in reference counting, their reference count should never reach zero. There is also a dynamic table of user-defined [`PyArray_Descr`](#c.PyArray_Descr "PyArray_Descr") objects that is maintained. Once a data-type-descriptor object is “registered” it should never be deallocated either. The function [`PyArray_DescrFromType`](array#c.PyArray_DescrFromType "PyArray_DescrFromType") (
) can be used to retrieve a [`PyArray_Descr`](#c.PyArray_Descr "PyArray_Descr") object from an enumerated type-number (either built-in or user-defined). typePyArray_Descr The [`PyArray_Descr`](#c.PyArray_Descr "PyArray_Descr") structure lies at the heart of the [`PyArrayDescr_Type`](#c.PyArrayDescr_Type "PyArrayDescr_Type"). While it is described here for completeness, it should be considered internal to NumPy and manipulated via `PyArrayDescr_*` or `PyDataType*` functions and macros. The size of this structure is subject to change across versions of NumPy. To ensure compatibility: * Never declare a non-pointer instance of the struct * Never perform pointer arithmetic * Never use `sizeof(PyArray_Descr)` It has the following structure: ``` typedef struct { PyObject_HEAD PyTypeObject *typeobj; char kind; char type; char byteorder; char flags; int type_num; int elsize; int alignment; PyArray_ArrayDescr *subarray; PyObject *fields; PyObject *names; PyArray_ArrFuncs *f; PyObject *metadata; NpyAuxData *c_metadata; npy_hash_t hash; } PyArray_Descr; ``` [PyTypeObject](https://docs.python.org/3/c-api/type.html#c.PyTypeObject "(in Python v3.10)")*typeobj Pointer to a typeobject that is the corresponding Python type for the elements of this array. For the builtin types, this points to the corresponding array scalar. For user-defined types, this should point to a user-defined typeobject. This typeobject can either inherit from array scalars or not. If it does not inherit from array scalars, then the [`NPY_USE_GETITEM`](#c.NPY_USE_GETITEM "NPY_USE_GETITEM") and [`NPY_USE_SETITEM`](#c.NPY_USE_SETITEM "NPY_USE_SETITEM") flags should be set in the `flags` member. charkind A character code indicating the kind of array (using the array interface typestring notation).
A ‘b’ represents Boolean, an ‘i’ represents signed integer, a ‘u’ represents unsigned integer, ‘f’ represents floating point, ‘c’ represents complex floating point, ‘S’ represents 8-bit zero-terminated bytes, ‘U’ represents 32-bit/character unicode string, and ‘V’ represents arbitrary. chartype A traditional character code indicating the data type. charbyteorder A character indicating the byte-order: ‘>’ (big-endian), ‘<’ (little-endian), ‘=’ (native), ‘|’ (irrelevant, ignore). All builtin data-types have byteorder ‘=’. charflags A data-type bit-flag that determines if the data-type exhibits object-array-like behavior. Each bit in this member is a flag; the flags are named as follows: NPY_ITEM_REFCOUNT Indicates that items of this data-type must be reference counted (using [`Py_INCREF`](https://docs.python.org/3/c-api/refcounting.html#c.Py_INCREF "(in Python v3.10)") and [`Py_DECREF`](https://docs.python.org/3/c-api/refcounting.html#c.Py_DECREF "(in Python v3.10)")). NPY_ITEM_HASOBJECT Same as [`NPY_ITEM_REFCOUNT`](#c.NPY_ITEM_REFCOUNT "NPY_ITEM_REFCOUNT"). NPY_LIST_PICKLE Indicates arrays of this data-type must be converted to a list before pickling. NPY_ITEM_IS_POINTER Indicates the item is a pointer to some other data-type NPY_NEEDS_INIT Indicates memory for this data-type must be initialized (set to 0) on creation. NPY_NEEDS_PYAPI Indicates this data-type requires the Python C-API during access (so don’t give up the GIL if array access is going to be needed). NPY_USE_GETITEM On array access use the `f->getitem` function pointer instead of the standard conversion to an array scalar. Must use if you don’t define an array scalar to go along with the data-type. NPY_USE_SETITEM When creating a 0-d array from an array scalar use `f->setitem` instead of the standard copy from an array scalar. Must use if you don’t define an array scalar to go along with the data-type.
NPY_FROM_FIELDS The bits that are inherited for the parent data-type if these bits are set in any field of the data-type. Currently ( [`NPY_NEEDS_INIT`](#c.NPY_NEEDS_INIT "NPY_NEEDS_INIT") | [`NPY_LIST_PICKLE`](#c.NPY_LIST_PICKLE "NPY_LIST_PICKLE") | [`NPY_ITEM_REFCOUNT`](#c.NPY_ITEM_REFCOUNT "NPY_ITEM_REFCOUNT") | [`NPY_NEEDS_PYAPI`](#c.NPY_NEEDS_PYAPI "NPY_NEEDS_PYAPI") ). NPY_OBJECT_DTYPE_FLAGS Bits set for the object data-type: ( [`NPY_LIST_PICKLE`](#c.NPY_LIST_PICKLE "NPY_LIST_PICKLE") | [`NPY_USE_GETITEM`](#c.NPY_USE_GETITEM "NPY_USE_GETITEM") | [`NPY_ITEM_IS_POINTER`](#c.NPY_ITEM_IS_POINTER "NPY_ITEM_IS_POINTER") | [`NPY_ITEM_REFCOUNT`](#c.NPY_ITEM_REFCOUNT "NPY_ITEM_REFCOUNT") | [`NPY_NEEDS_INIT`](#c.NPY_NEEDS_INIT "NPY_NEEDS_INIT") | [`NPY_NEEDS_PYAPI`](#c.NPY_NEEDS_PYAPI "NPY_NEEDS_PYAPI")). intPyDataType_FLAGCHK([PyArray_Descr](#c.PyArray_Descr "PyArray_Descr")*dtype, intflags) Return true if all the given flags are set for the data-type object. intPyDataType_REFCHK([PyArray_Descr](#c.PyArray_Descr "PyArray_Descr")*dtype) Equivalent to [`PyDataType_FLAGCHK`](#c.NPY_USE_SETITEM.PyDataType_FLAGCHK "PyDataType_FLAGCHK") (*dtype*, [`NPY_ITEM_REFCOUNT`](#c.NPY_ITEM_REFCOUNT "NPY_ITEM_REFCOUNT")). inttype_num A number that uniquely identifies the data type. For new data-types, this number is assigned when the data-type is registered. intelsize For data types that are always the same size (such as long), this holds the size of the data type. For flexible data types where different arrays can have a different elementsize, this should be 0. intalignment A number providing alignment information for this data type. 
Specifically, it shows how far from the start of a 2-element structure (whose first element is a `char`), the compiler places an item of this type: `offsetof(struct {char c; type v;}, v)` [PyArray_ArrayDescr](#c.NPY_USE_SETITEM.subarray.PyArray_ArrayDescr "PyArray_ArrayDescr")*subarray If this is non-`NULL`, then this data-type descriptor is a C-style contiguous array of another data-type descriptor. In other words, each element that this descriptor describes is actually an array of some other base descriptor. This is most useful as the data-type descriptor for a field in another data-type descriptor. The fields member should be `NULL` if this is non-`NULL` (the fields member of the base descriptor can be non-`NULL` however). typePyArray_ArrayDescr ``` typedef struct { PyArray_Descr *base; PyObject *shape; } PyArray_ArrayDescr; ``` [PyArray_Descr](#c.PyArray_Descr "PyArray_Descr")*base The data-type-descriptor object of the base-type. [PyObject](https://docs.python.org/3/c-api/structures.html#c.PyObject "(in Python v3.10)")*shape The shape (always C-style contiguous) of the sub-array as a Python tuple. [PyObject](https://docs.python.org/3/c-api/structures.html#c.PyObject "(in Python v3.10)")*fields If this is non-NULL, then this data-type-descriptor has fields described by a Python dictionary whose keys are names (and also titles if given) and whose values are tuples that describe the fields. Recall that a data-type-descriptor always describes a fixed-length set of bytes. A field is a named sub-region of that total, fixed-length collection. A field is described by a tuple composed of another data-type-descriptor and a byte offset. Optionally, the tuple may contain a title which is normally a Python string. These tuples are placed in this dictionary keyed by name (and also title if given). [PyObject](https://docs.python.org/3/c-api/structures.html#c.PyObject "(in Python v3.10)")*names An ordered tuple of field names. It is NULL if no field is defined.
[PyArray_ArrFuncs](#c.PyArray_ArrFuncs "PyArray_ArrFuncs")*f A pointer to a structure containing functions that the type needs to implement internal features. These functions are not the same thing as the universal functions (ufuncs) described later. Their signatures can vary arbitrarily. [PyObject](https://docs.python.org/3/c-api/structures.html#c.PyObject "(in Python v3.10)")*metadata Metadata about this dtype. [NpyAuxData](array#c.NpyAuxData "NpyAuxData")*c_metadata Metadata specific to the C implementation of the particular dtype. Added for NumPy 1.7.0. typenpy_hash_t [npy_hash_t](#c.NPY_USE_SETITEM.npy_hash_t "npy_hash_t")hash Currently unused. Reserved for future use in caching hash values. typePyArray_ArrFuncs Functions implementing internal features. Not all of these function pointers must be defined for a given type. The required members are `nonzero`, `copyswap`, `copyswapn`, `setitem`, `getitem`, and `cast`. These are assumed to be non-`NULL`, and `NULL` entries will cause a program crash. The other functions may be `NULL`, which will just mean reduced functionality for that data-type. (Also, the nonzero function will be filled in with a default function if it is `NULL` when you register a user-defined data-type).
``` typedef struct { PyArray_VectorUnaryFunc *cast[NPY_NTYPES]; PyArray_GetItemFunc *getitem; PyArray_SetItemFunc *setitem; PyArray_CopySwapNFunc *copyswapn; PyArray_CopySwapFunc *copyswap; PyArray_CompareFunc *compare; PyArray_ArgFunc *argmax; PyArray_DotFunc *dotfunc; PyArray_ScanFunc *scanfunc; PyArray_FromStrFunc *fromstr; PyArray_NonzeroFunc *nonzero; PyArray_FillFunc *fill; PyArray_FillWithScalarFunc *fillwithscalar; PyArray_SortFunc *sort[NPY_NSORTS]; PyArray_ArgSortFunc *argsort[NPY_NSORTS]; PyObject *castdict; PyArray_ScalarKindFunc *scalarkind; int **cancastscalarkindto; int *cancastto; PyArray_FastClipFunc *fastclip; /* deprecated */ PyArray_FastPutmaskFunc *fastputmask; /* deprecated */ PyArray_FastTakeFunc *fasttake; /* deprecated */ PyArray_ArgFunc *argmin; } PyArray_ArrFuncs; ``` The concept of a behaved segment is used in the description of the function pointers. A behaved segment is one that is aligned and in native machine byte-order for the data-type. The `nonzero`, `copyswap`, `copyswapn`, `getitem`, and `setitem` functions can (and must) deal with mis-behaved arrays. The other functions require behaved memory segments. voidcast(void*from, void*to, [npy_intp](dtype#c.npy_intp "npy_intp")n, void*fromarr, void*toarr) An array of function pointers to cast from the current type to all of the other builtin types. Each function casts a contiguous, aligned, and notswapped buffer pointed at by *from* to a contiguous, aligned, and notswapped buffer pointed at by *to* The number of items to cast is given by *n*, and the arguments *fromarr* and *toarr* are interpreted as PyArrayObjects for flexible arrays to get itemsize information. [PyObject](https://docs.python.org/3/c-api/structures.html#c.PyObject "(in Python v3.10)")*getitem(void*data, void*arr) A pointer to a function that returns a standard Python object from a single element of the array object *arr* pointed to by *data*. 
This function must be able to deal with “misbehaved” (misaligned and/or swapped) arrays correctly. intsetitem([PyObject](https://docs.python.org/3/c-api/structures.html#c.PyObject "(in Python v3.10)")*item, void*data, void*arr) A pointer to a function that sets the Python object *item* into the array, *arr*, at the position pointed to by *data*. This function deals with “misbehaved” arrays. If successful, a zero is returned, otherwise, a negative one is returned (and a Python error set). voidcopyswapn(void*dest, [npy_intp](dtype#c.npy_intp "npy_intp")dstride, void*src, [npy_intp](dtype#c.npy_intp "npy_intp")sstride, [npy_intp](dtype#c.npy_intp "npy_intp")n, intswap, void*arr) voidcopyswap(void*dest, void*src, intswap, void*arr) These members are both pointers to functions to copy data from *src* to *dest* and *swap* if indicated. The value of arr is only used for flexible ([`NPY_STRING`](dtype#c.NPY_STRING "NPY_STRING"), [`NPY_UNICODE`](dtype#c.NPY_UNICODE "NPY_UNICODE"), and [`NPY_VOID`](dtype#c.NPY_VOID "NPY_VOID")) arrays (and is obtained from `arr->descr->elsize`). The second function copies a single value, while the first loops over n values with the provided strides. These functions can deal with misbehaved *src* data. If *src* is NULL then no copy is performed. If *swap* is 0, then no byteswapping occurs. It is assumed that *dest* and *src* do not overlap. If they overlap, then use `memmove` (
) first followed by `copyswap(n)` with NULL valued `src`. intcompare(constvoid*d1, constvoid*d2, void*arr) A pointer to a function that compares two elements of the array, `arr`, pointed to by `d1` and `d2`. This function requires behaved (aligned and not swapped) arrays. The return value is 1 if `*d1` > `*d2`, 0 if `*d1` == `*d2`, and -1 if `*d1` < `*d2`. The array object `arr` is used to retrieve itemsize and field information for flexible arrays. intargmax(void*data, [npy_intp](dtype#c.npy_intp "npy_intp")n, [npy_intp](dtype#c.npy_intp "npy_intp")*max_ind, void*arr) A pointer to a function that retrieves the index of the largest of `n` elements in `arr` beginning at the element pointed to by `data`. This function requires that the memory segment be contiguous and behaved. The return value is always 0. The index of the largest element is returned in `max_ind`. voiddotfunc(void*ip1, [npy_intp](dtype#c.npy_intp "npy_intp")is1, void*ip2, [npy_intp](dtype#c.npy_intp "npy_intp")is2, void*op, [npy_intp](dtype#c.npy_intp "npy_intp")n, void*arr) A pointer to a function that multiplies two `n`-length sequences together, adds them, and places the result in the element pointed to by `op` of `arr`. The starts of the two sequences are pointed to by `ip1` and `ip2`. To get to the next element in each sequence requires a jump of `is1` and `is2` *bytes*, respectively. This function requires behaved (though not necessarily contiguous) memory. intscanfunc(FILE*fd, void*ip, void*arr) A pointer to a function that scans (scanf style) one element of the corresponding type from the file descriptor `fd` into the array memory pointed to by `ip`. The array is assumed to be behaved. The last argument `arr` is the array to be scanned into. Returns the number of receiving arguments successfully assigned (which may be zero in case a matching failure occurred before the first receiving argument was assigned), or EOF if input failure occurs before the first receiving argument was assigned.
This function should be called without holding the Python GIL, and has to grab it for error reporting. intfromstr(char*str, void*ip, char**endptr, void*arr) A pointer to a function that converts the string pointed to by `str` to one element of the corresponding type and places it in the memory location pointed to by `ip`. After the conversion is completed, `*endptr` points to the rest of the string. The last argument `arr` is the array into which ip points (needed for variable-size data-types). Returns 0 on success or -1 on failure. Requires a behaved array. This function should be called without holding the Python GIL, and has to grab it for error reporting. [npy_bool](dtype#c.npy_bool "npy_bool")nonzero(void*data, void*arr) A pointer to a function that returns TRUE if the item of `arr` pointed to by `data` is nonzero. This function can deal with misbehaved arrays. voidfill(void*data, [npy_intp](dtype#c.npy_intp "npy_intp")length, void*arr) A pointer to a function that fills a contiguous array of given length with data. The first two elements of the array must already be filled in. From these two values, a delta will be computed and the values from item 3 to the end will be computed by repeatedly adding this computed delta. The data buffer must be well-behaved. voidfillwithscalar(void*buffer, [npy_intp](dtype#c.npy_intp "npy_intp")length, void*value, void*arr) A pointer to a function that fills a contiguous `buffer` of the given `length` with a single scalar `value` whose address is given. The final argument is the array which is needed to get the itemsize for variable-length arrays. intsort(void*start, [npy_intp](dtype#c.npy_intp "npy_intp")length, void*arr) An array of function pointers to particular sorting algorithms.
A particular sorting algorithm is obtained using a key (so far [`NPY_QUICKSORT`](array#c.NPY_SORTKIND.NPY_QUICKSORT "NPY_QUICKSORT"), [`NPY_HEAPSORT`](array#c.NPY_SORTKIND.NPY_HEAPSORT "NPY_HEAPSORT"), and [`NPY_MERGESORT`](array#c.NPY_SORTKIND.NPY_MERGESORT "NPY_MERGESORT") are defined). These sorts are done in-place assuming contiguous and aligned data. intargsort(void*start, [npy_intp](dtype#c.npy_intp "npy_intp")*result, [npy_intp](dtype#c.npy_intp "npy_intp")length, void*arr) An array of function pointers to sorting algorithms for this data type. The same sorting algorithms as for sort are available. The indices producing the sort are returned in `result` (which must be initialized with indices 0 to `length-1` inclusive). [PyObject](https://docs.python.org/3/c-api/structures.html#c.PyObject "(in Python v3.10)")*castdict Either `NULL` or a dictionary containing low-level casting functions for user- defined data-types. Each function is wrapped in a [PyCapsule](https://docs.python.org/3/c-api/capsule.html#c.PyCapsule "(in Python v3.10)")* and keyed by the data-type number. [NPY_SCALARKIND](array#c.NPY_SCALARKIND "NPY_SCALARKIND")scalarkind([PyArrayObject](#c.PyArrayObject "PyArrayObject")*arr) A function to determine how scalars of this type should be interpreted. The argument is `NULL` or a 0-dimensional array containing the data (if that is needed to determine the kind of scalar). The return value must be of type [`NPY_SCALARKIND`](array#c.NPY_SCALARKIND "NPY_SCALARKIND"). int**cancastscalarkindto Either `NULL` or an array of [`NPY_NSCALARKINDS`](array#c.NPY_SCALARKIND.NPY_NSCALARKINDS "NPY_NSCALARKINDS") pointers. These pointers should each be either `NULL` or a pointer to an array of integers (terminated by [`NPY_NOTYPE`](dtype#c.NPY_NOTYPE "NPY_NOTYPE")) indicating data-types that a scalar of this data-type of the specified kind can be cast to safely (this usually means without losing precision). 
int*cancastto Either `NULL` or an array of integers (terminated by [`NPY_NOTYPE`](dtype#c.NPY_NOTYPE "NPY_NOTYPE")) indicating data-types that this data-type can be cast to safely (this usually means without losing precision). voidfastclip(void*in, [npy_intp](dtype#c.npy_intp "npy_intp")n_in, void*min, void*max, void*out) Deprecated since version 1.17: The use of this function will give a deprecation warning when `np.clip` is called. Instead of this function, the datatype must use `PyUFunc_RegisterLoopForDescr` to attach a custom loop to `np.core.umath.clip`, `np.minimum`, and `np.maximum`. Deprecated since version 1.19: Setting this function is deprecated and should always be `NULL`; if set, it will be ignored. A function that reads `n_in` items from `in`, and writes to `out` the read value if it is within the limits pointed to by `min` and `max`, or the corresponding limit if outside. The memory segments must be contiguous and behaved, and either `min` or `max` may be `NULL`, but not both. voidfastputmask(void*in, void*mask, [npy_intp](dtype#c.npy_intp "npy_intp")n_in, void*values, [npy_intp](dtype#c.npy_intp "npy_intp")nv) Deprecated since version 1.19: Setting this function is deprecated and should always be `NULL`; if set, it will be ignored. A function that takes a pointer `in` to an array of `n_in` items, a pointer `mask` to an array of `n_in` boolean values, and a pointer `values` to an array of `nv` items. Items from `values` are copied into `in` wherever the value in `mask` is non-zero, tiling `values` as needed if `nv < n_in`. All arrays must be contiguous and behaved.
voidfasttake(void*dest, void*src, [npy_intp](dtype#c.npy_intp "npy_intp")*indarray, [npy_intp](dtype#c.npy_intp "npy_intp")nindarray, [npy_intp](dtype#c.npy_intp "npy_intp")n_outer, [npy_intp](dtype#c.npy_intp "npy_intp")m_middle, [npy_intp](dtype#c.npy_intp "npy_intp")nelem, [NPY_CLIPMODE](array#c.NPY_CLIPMODE "NPY_CLIPMODE")clipmode) Deprecated since version 1.19: Setting this function is deprecated and should always be `NULL`; if set, it will be ignored. A function that takes a pointer `src` to a C contiguous, behaved segment, interpreted as a 3-dimensional array of shape `(n_outer, nindarray, nelem)`, a pointer `indarray` to a contiguous, behaved segment of `m_middle` integer indices, and a pointer `dest` to a C contiguous, behaved segment, interpreted as a 3-dimensional array of shape `(n_outer, m_middle, nelem)`. The indices in `indarray` are used to index `src` along the second dimension, and copy the corresponding chunks of `nelem` items into `dest`. `clipmode` (which can take on the values [`NPY_RAISE`](array#c.NPY_CLIPMODE.NPY_RAISE "NPY_RAISE"), [`NPY_WRAP`](array#c.NPY_CLIPMODE.NPY_WRAP "NPY_WRAP") or [`NPY_CLIP`](array#c.NPY_CLIPMODE.NPY_CLIP "NPY_CLIP")) determines how indices smaller than 0 or larger than `nindarray` will be handled. intargmin(void*data, [npy_intp](dtype#c.npy_intp "npy_intp")n, [npy_intp](dtype#c.npy_intp "npy_intp")*min_ind, void*arr) A pointer to a function that retrieves the index of the smallest of `n` elements in `arr` beginning at the element pointed to by `data`. This function requires that the memory segment be contiguous and behaved. The return value is always 0. The index of the smallest element is returned in `min_ind`.
The [`PyArray_Type`](#c.PyArray_Type "PyArray_Type") typeobject implements many of the features of [`Python objects`](https://docs.python.org/3/c-api/type.html#c.PyTypeObject "(in Python v3.10)") including the [`tp_as_number`](https://docs.python.org/3/c-api/typeobj.html#c.PyTypeObject.tp_as_number "(in Python v3.10)"), [`tp_as_sequence`](https://docs.python.org/3/c-api/typeobj.html#c.PyTypeObject.tp_as_sequence "(in Python v3.10)"), [`tp_as_mapping`](https://docs.python.org/3/c-api/typeobj.html#c.PyTypeObject.tp_as_mapping "(in Python v3.10)"), and [`tp_as_buffer`](https://docs.python.org/3/c-api/typeobj.html#c.PyTypeObject.tp_as_buffer "(in Python v3.10)") interfaces. The [`rich comparison`](https://docs.python.org/3/c-api/typeobj.html#c.richcmpfunc "(in Python v3.10)") interface is also used, along with new-style attribute lookup for members ([`tp_members`](https://docs.python.org/3/c-api/typeobj.html#c.PyTypeObject.tp_members "(in Python v3.10)")) and properties ([`tp_getset`](https://docs.python.org/3/c-api/typeobj.html#c.PyTypeObject.tp_getset "(in Python v3.10)")). The [`PyArray_Type`](#c.PyArray_Type "PyArray_Type") can also be sub-typed. Tip The `tp_as_number` methods use a generic approach to call whatever function has been registered for handling the operation. When the `_multiarray_umath` module is imported, it sets the numeric operations for all arrays to the corresponding ufuncs. This choice can be changed with [`PyUFunc_ReplaceLoopBySignature`](ufunc#c.PyUFunc_ReplaceLoopBySignature "PyUFunc_ReplaceLoopBySignature"). The `tp_str` and `tp_repr` methods can also be altered using [`PyArray_SetStringFunction`](array#c.PyArray_SetStringFunction "PyArray_SetStringFunction"). ### PyUFunc_Type and PyUFuncObject [PyTypeObject](https://docs.python.org/3/c-api/type.html#c.PyTypeObject "(in Python v3.10)")PyUFunc_Type The ufunc object is implemented by creation of the [`PyUFunc_Type`](#c.PyUFunc_Type "PyUFunc_Type").
It is a very simple type that implements only basic getattribute behavior, printing behavior, and has call behavior which allows these objects to act like functions. The basic idea behind the ufunc is to hold a reference to fast 1-dimensional (vector) loops for each data type that supports the operation. These one-dimensional loops all have the same signature and are the key to creating a new ufunc. They are called by the generic looping code as appropriate to implement the N-dimensional function. There are also some generic 1-d loops defined for floating and complexfloating arrays that allow you to define a ufunc using a single scalar function (*e.g.* atanh). typePyUFuncObject The core of the ufunc is the [`PyUFuncObject`](#c.PyUFuncObject "PyUFuncObject") which contains all the information needed to call the underlying C-code loops that perform the actual work. While it is described here for completeness, it should be considered internal to NumPy and manipulated via `PyUFunc_*` functions. The size of this structure is subject to change across versions of NumPy. 
To ensure compatibility: * Never declare a non-pointer instance of the struct * Never perform pointer arithmetic * Never use `sizeof(PyUFuncObject)` It has the following structure: ``` typedef struct { PyObject_HEAD int nin; int nout; int nargs; int identity; PyUFuncGenericFunction *functions; void **data; int ntypes; int reserved1; const char *name; char *types; const char *doc; void *ptr; PyObject *obj; PyObject *userloops; int core_enabled; int core_num_dim_ix; int *core_num_dims; int *core_dim_ixs; int *core_offsets; char *core_signature; PyUFunc_TypeResolutionFunc *type_resolver; PyUFunc_LegacyInnerLoopSelectionFunc *legacy_inner_loop_selector; void *reserved2; npy_uint32 *op_flags; npy_uint32 *iter_flags; /* new in API version 0x0000000D */ npy_intp *core_dim_sizes; npy_uint32 *core_dim_flags; PyObject *identity_value; /* Further private slots (size depends on the NumPy version) */ } PyUFuncObject; ``` intnin The number of input arguments. intnout The number of output arguments. intnargs The total number of arguments (*nin* + *nout*). This must be less than [`NPY_MAXARGS`](array#c.NPY_MAXARGS "NPY_MAXARGS"). intidentity Either [`PyUFunc_One`](ufunc#c.PyUFunc_One "PyUFunc_One"), [`PyUFunc_Zero`](ufunc#c.PyUFunc_Zero "PyUFunc_Zero"), [`PyUFunc_MinusOne`](ufunc#c.PyUFunc_MinusOne "PyUFunc_MinusOne"), [`PyUFunc_None`](ufunc#c.PyUFunc_None "PyUFunc_None"), [`PyUFunc_ReorderableNone`](ufunc#c.PyUFunc_ReorderableNone "PyUFunc_ReorderableNone"), or [`PyUFunc_IdentityValue`](ufunc#c.PyUFunc_IdentityValue "PyUFunc_IdentityValue") to indicate the identity for this operation. It is only used for a reduce-like call on an empty array. voidfunctions(char**args, [npy_intp](dtype#c.npy_intp "npy_intp")*dims, [npy_intp](dtype#c.npy_intp "npy_intp")*steps, void*extradata) An array of function pointers — one for each data type supported by the ufunc. This is the vector loop that is called to implement the underlying function *dims* [0] times. 
The first argument, *args*, is an array of *nargs* pointers to behaved memory. Pointers to the data for the input arguments are first, followed by the pointers to the data for the output arguments. How many bytes must be skipped to get to the next element in the sequence is specified by the corresponding entry in the *steps* array. The last argument allows the loop to receive extra information. This is commonly used so that a single, generic vector loop can be used for multiple functions. In this case, the actual scalar function to call is passed in as *extradata*. The size of this function pointer array is ntypes. void**data Extra data to be passed to the 1-d vector loops or `NULL` if no extra-data is needed. This C-array must be the same size ( *i.e.* ntypes) as the functions array. `NULL` is used if extra_data is not needed. Several C-API calls for UFuncs are just 1-d vector loops that make use of this extra data to receive a pointer to the actual function to call. intntypes The number of supported data types for the ufunc. This number specifies how many different 1-d loops (of the builtin data types) are available. intreserved1 Unused. char*name A string name for the ufunc. This is used dynamically to build the __doc__ attribute of ufuncs. char*types An array of \(nargs \times ntypes\) 8-bit type_numbers which contains the type signature for the function for each of the supported (builtin) data types. For each of the *ntypes* functions, the corresponding set of type numbers in this array shows how the *args* argument should be interpreted in the 1-d vector loop. These type numbers do not have to be the same type and mixed-type ufuncs are supported. char*doc Documentation for the ufunc. Should not contain the function signature as this is generated dynamically when __doc__ is retrieved. void*ptr Any dynamically allocated memory. Currently, this is used for dynamic ufuncs created from a python function to store room for the types, data, and name members. 
[PyObject](https://docs.python.org/3/c-api/structures.html#c.PyObject "(in Python v3.10)")*obj For ufuncs dynamically created from python functions, this member holds a reference to the underlying Python function. [PyObject](https://docs.python.org/3/c-api/structures.html#c.PyObject "(in Python v3.10)")*userloops A dictionary of user-defined 1-d vector loops (stored as CObject ptrs) for user-defined types. A loop may be registered by the user for any user-defined type. It is retrieved by type number. User defined type numbers are always larger than [`NPY_USERDEF`](dtype#c.NPY_USERDEF "NPY_USERDEF"). intcore_enabled 0 for scalar ufuncs; 1 for generalized ufuncs intcore_num_dim_ix Number of distinct core dimension names in the signature int*core_num_dims Number of core dimensions of each argument int*core_dim_ixs Dimension indices in a flattened form; indices of argument `k` are stored in `core_dim_ixs[core_offsets[k] : core_offsets[k] + core_numdims[k]]` int*core_offsets Position of 1st core dimension of each argument in `core_dim_ixs`, equivalent to cumsum(`core_num_dims`) char*core_signature Core signature string PyUFunc_TypeResolutionFunc*type_resolver A function which resolves the types and fills an array with the dtypes for the inputs and outputs PyUFunc_LegacyInnerLoopSelectionFunc*legacy_inner_loop_selector Deprecated since version 1.22: Some fallback support for this slot exists, but will be removed eventually. A universal function that relied on this will have to be ported eventually. See NEP 41 and NEP 43. void*reserved2 For a possible future loop selector with a different signature. [npy_uint32](dtype#c.npy_uint32 "npy_uint32")op_flags Override the default operand flags for each ufunc operand. [npy_uint32](dtype#c.npy_uint32 "npy_uint32")iter_flags Override the default nditer flags for the ufunc.
Added in API version 0x0000000D [npy_intp](dtype#c.npy_intp "npy_intp")*core_dim_sizes For each distinct core dimension, the possible [frozen](generalized-ufuncs#frozen) size if [`UFUNC_CORE_DIM_SIZE_INFERRED`](#c.UFUNC_CORE_DIM_SIZE_INFERRED "UFUNC_CORE_DIM_SIZE_INFERRED") is `0` [npy_uint32](dtype#c.npy_uint32 "npy_uint32")*core_dim_flags For each distinct core dimension, a set of `UFUNC_CORE_DIM*` flags UFUNC_CORE_DIM_CAN_IGNORE if the dim name ends in `?` UFUNC_CORE_DIM_SIZE_INFERRED if the dim size will be determined from the operands and not from a [frozen](generalized-ufuncs#frozen) signature [PyObject](https://docs.python.org/3/c-api/structures.html#c.PyObject "(in Python v3.10)")*identity_value Identity for reduction, when [`PyUFuncObject.identity`](#c.PyUFuncObject.identity "PyUFuncObject.identity") is equal to [`PyUFunc_IdentityValue`](ufunc#c.PyUFunc_IdentityValue "PyUFunc_IdentityValue"). ### PyArrayIter_Type and PyArrayIterObject [PyTypeObject](https://docs.python.org/3/c-api/type.html#c.PyTypeObject "(in Python v3.10)")PyArrayIter_Type This is an iterator object that makes it easy to loop over an N-dimensional array. It is the object returned from the flat attribute of an ndarray. It is also used extensively throughout the implementation internals to loop over an N-dimensional array. The tp_as_mapping interface is implemented so that the iterator object can be indexed (using 1-d indexing), and a few methods are implemented through the tp_methods table. This object implements the next method and can be used anywhere an iterator can be used in Python. typePyArrayIterObject The C-structure corresponding to an object of [`PyArrayIter_Type`](#c.PyArrayIter_Type "PyArrayIter_Type") is the [`PyArrayIterObject`](#c.PyArrayIterObject "PyArrayIterObject"). The [`PyArrayIterObject`](#c.PyArrayIterObject "PyArrayIterObject") is used to keep track of a pointer into an N-dimensional array. It contains associated information used to quickly march through the array. 
The pointer can be adjusted in three basic ways: 1) advance to the “next” position in the array in a C-style contiguous fashion, 2) advance to an arbitrary N-dimensional coordinate in the array, and 3) advance to an arbitrary one-dimensional index into the array. The members of the [`PyArrayIterObject`](#c.PyArrayIterObject "PyArrayIterObject") structure are used in these calculations. Iterator objects keep their own dimension and strides information about an array. This can be adjusted as needed for “broadcasting,” or to loop over only specific dimensions.

```
typedef struct {
    PyObject_HEAD
    int nd_m1;
    npy_intp index;
    npy_intp size;
    npy_intp coordinates[NPY_MAXDIMS];
    npy_intp dims_m1[NPY_MAXDIMS];
    npy_intp strides[NPY_MAXDIMS];
    npy_intp backstrides[NPY_MAXDIMS];
    npy_intp factors[NPY_MAXDIMS];
    PyArrayObject *ao;
    char *dataptr;
    npy_bool contiguous;
} PyArrayIterObject;
```

intnd_m1
\(N-1\) where \(N\) is the number of dimensions in the underlying array.

[npy_intp](dtype#c.npy_intp "npy_intp")index
The current 1-d index into the array.

[npy_intp](dtype#c.npy_intp "npy_intp")size
The total size of the underlying array.

[npy_intp](dtype#c.npy_intp "npy_intp")*coordinates
An \(N\)-dimensional index into the array.

[npy_intp](dtype#c.npy_intp "npy_intp")*dims_m1
The size of the array minus 1 in each dimension.

[npy_intp](dtype#c.npy_intp "npy_intp")*strides
The strides of the array. How many bytes are needed to jump to the next element in each dimension.

[npy_intp](dtype#c.npy_intp "npy_intp")*backstrides
How many bytes are needed to jump from the end of a dimension back to its beginning. Note that `backstrides[k] == strides[k] * dims_m1[k]`, but it is stored here as an optimization.

[npy_intp](dtype#c.npy_intp "npy_intp")*factors
This array is used in computing an N-d index from a 1-d index. It contains needed products of the dimensions.

[PyArrayObject](#c.PyArrayObject "PyArrayObject")*ao
A pointer to the underlying ndarray this iterator was created to represent.
char*dataptr This member points to an element in the ndarray indicated by the index. [npy_bool](dtype#c.npy_bool "npy_bool")contiguous This flag is true if the underlying array is [`NPY_ARRAY_C_CONTIGUOUS`](array#c.NPY_ARRAY_C_CONTIGUOUS "NPY_ARRAY_C_CONTIGUOUS"). It is used to simplify calculations when possible. How to use an array iterator on a C-level is explained more fully in later sections. Typically, you do not need to concern yourself with the internal structure of the iterator object, and merely interact with it through the use of the macros [`PyArray_ITER_NEXT`](array#c.PyArray_ITER_NEXT "PyArray_ITER_NEXT") (it), [`PyArray_ITER_GOTO`](array#c.PyArray_ITER_GOTO "PyArray_ITER_GOTO") (it, dest), or [`PyArray_ITER_GOTO1D`](array#c.PyArray_ITER_GOTO1D "PyArray_ITER_GOTO1D") (it, index). All of these macros require the argument *it* to be a [PyArrayIterObject](#c.PyArrayIterObject "PyArrayIterObject")*. ### PyArrayMultiIter_Type and PyArrayMultiIterObject [PyTypeObject](https://docs.python.org/3/c-api/type.html#c.PyTypeObject "(in Python v3.10)")PyArrayMultiIter_Type This type provides an iterator that encapsulates the concept of broadcasting. It allows \(N\) arrays to be broadcast together so that the loop progresses in C-style contiguous fashion over the broadcasted array. The corresponding C-structure is the [`PyArrayMultiIterObject`](#c.PyArrayMultiIterObject "PyArrayMultiIterObject") whose memory layout must begin any object, *obj*, passed in to the [`PyArray_Broadcast`](array#c.PyArray_Broadcast "PyArray_Broadcast") (obj) function. Broadcasting is performed by adjusting array iterators so that each iterator represents the broadcasted shape and size, but has its strides adjusted so that the correct element from the array is used at each iteration. 
typePyArrayMultiIterObject

```
typedef struct {
    PyObject_HEAD
    int numiter;
    npy_intp size;
    npy_intp index;
    int nd;
    npy_intp dimensions[NPY_MAXDIMS];
    PyArrayIterObject *iters[NPY_MAXDIMS];
} PyArrayMultiIterObject;
```

intnumiter
The number of arrays that need to be broadcast to the same shape.

[npy_intp](dtype#c.npy_intp "npy_intp")size
The total broadcasted size.

[npy_intp](dtype#c.npy_intp "npy_intp")index
The current (1-d) index into the broadcasted result.

intnd
The number of dimensions in the broadcasted result.

[npy_intp](dtype#c.npy_intp "npy_intp")*dimensions
The shape of the broadcasted result (only `nd` slots are used).

[PyArrayIterObject](#c.PyArrayIterObject "PyArrayIterObject")**iters
An array of iterator objects that holds the iterators for the arrays to be broadcast together. On return, the iterators are adjusted for broadcasting.

### PyArrayNeighborhoodIter_Type and PyArrayNeighborhoodIterObject

[PyTypeObject](https://docs.python.org/3/c-api/type.html#c.PyTypeObject "(in Python v3.10)")PyArrayNeighborhoodIter_Type
This is an iterator object that makes it easy to loop over an N-dimensional neighborhood.

typePyArrayNeighborhoodIterObject
The C-structure corresponding to an object of [`PyArrayNeighborhoodIter_Type`](#c.PyArrayNeighborhoodIter_Type "PyArrayNeighborhoodIter_Type") is the [`PyArrayNeighborhoodIterObject`](#c.PyArrayNeighborhoodIterObject "PyArrayNeighborhoodIterObject").
```
typedef struct {
    PyObject_HEAD
    int nd_m1;
    npy_intp index, size;
    npy_intp coordinates[NPY_MAXDIMS];
    npy_intp dims_m1[NPY_MAXDIMS];
    npy_intp strides[NPY_MAXDIMS];
    npy_intp backstrides[NPY_MAXDIMS];
    npy_intp factors[NPY_MAXDIMS];
    PyArrayObject *ao;
    char *dataptr;
    npy_bool contiguous;
    npy_intp bounds[NPY_MAXDIMS][2];
    npy_intp limits[NPY_MAXDIMS][2];
    npy_intp limits_sizes[NPY_MAXDIMS];
    npy_iter_get_dataptr_t translate;
    npy_intp nd;
    npy_intp dimensions[NPY_MAXDIMS];
    PyArrayIterObject* _internal_iter;
    char* constant;
    int mode;
} PyArrayNeighborhoodIterObject;
```

### PyArrayFlags_Type and PyArrayFlagsObject

[PyTypeObject](https://docs.python.org/3/c-api/type.html#c.PyTypeObject "(in Python v3.10)")PyArrayFlags_Type
When the flags attribute is retrieved from Python, a special builtin object of this type is constructed. This special type makes it easier to work with the different flags by accessing them as attributes or by accessing them as if the object were a dictionary with the flag names as entries.

typePyArrayFlagsObject

```
typedef struct PyArrayFlagsObject {
    PyObject_HEAD
    PyObject *arr;
    int flags;
} PyArrayFlagsObject;
```

### ScalarArrayTypes

There is a Python type for each of the different built-in data types that can be present in the array. Most of these are simple wrappers around the corresponding data type in C. The C-names for these types are `Py{TYPE}ArrType_Type` where `{TYPE}` can be **Bool**, **Byte**, **Short**, **Int**, **Long**, **LongLong**, **UByte**, **UShort**, **UInt**, **ULong**, **ULongLong**, **Half**, **Float**, **Double**, **LongDouble**, **CFloat**, **CDouble**, **CLongDouble**, **String**, **Unicode**, **Void**, and **Object**. These type names are part of the C-API and can therefore be created in extension C-code. There is also a `PyIntpArrType_Type` and a `PyUIntpArrType_Type` that are simple substitutes for one of the integer types that can hold a pointer on the platform. The structure of these scalar objects is not exposed to C-code.
The function [`PyArray_ScalarAsCtype`](array#c.PyArray_ScalarAsCtype "PyArray_ScalarAsCtype") (..) can be used to extract the C-type value from the array scalar and the function [`PyArray_Scalar`](array#c.PyArray_Scalar "PyArray_Scalar") (..) can be used to construct an array scalar from a C-value.

Other C-Structures
------------------

A few new C-structures were found to be useful in the development of NumPy. These C-structures are used in at least one C-API call and are therefore documented here. The main reason these structures were defined is to make it easy to use the Python ParseTuple C-API to convert from Python objects to a useful C-Object.

### PyArray_Dims

typePyArray_Dims
This structure is very useful when shape and/or strides information is supposed to be interpreted. The structure is:

```
typedef struct {
    npy_intp *ptr;
    int len;
} PyArray_Dims;
```

The members of this structure are

[npy_intp](dtype#c.npy_intp "npy_intp")*ptr
A pointer to a list of ([`npy_intp`](dtype#c.npy_intp "npy_intp")) integers which usually represent array shape or array strides.

intlen
The length of the list of integers. It is assumed safe to access *ptr*[0] to *ptr*[len-1].

### PyArray_Chunk

typePyArray_Chunk
This is equivalent to the buffer object structure in Python up to the ptr member. On 32-bit platforms (*i.e.* if [`NPY_SIZEOF_INT`](config#c.NPY_SIZEOF_INT "NPY_SIZEOF_INT") == [`NPY_SIZEOF_INTP`](config#c.NPY_SIZEOF_INTP "NPY_SIZEOF_INTP")), the len member also matches an equivalent member of the buffer object. It is useful to represent a generic single-segment chunk of memory.

```
typedef struct {
    PyObject_HEAD
    PyObject *base;
    void *ptr;
    npy_intp len;
    int flags;
} PyArray_Chunk;
```

The members are

[PyObject](https://docs.python.org/3/c-api/structures.html#c.PyObject "(in Python v3.10)")*base
The Python object this chunk of memory comes from. Needed so that memory can be accounted for properly.

void*ptr
A pointer to the start of the single-segment chunk of memory.

[npy_intp](dtype#c.npy_intp "npy_intp")len
The length of the segment in bytes.

intflags
Any data flags (*e.g.* [`NPY_ARRAY_WRITEABLE`](array#c.NPY_ARRAY_WRITEABLE "NPY_ARRAY_WRITEABLE")) that should be used to interpret the memory.
### PyArrayInterface

See also: [The array interface protocol](../arrays.interface#arrays-interface)

typePyArrayInterface
The [`PyArrayInterface`](#c.PyArrayInterface "PyArrayInterface") structure is defined so that NumPy and other extension modules can use the rapid array interface protocol. The [`__array_struct__`](../arrays.interface#object.__array_struct__ "object.__array_struct__") method of an object that supports the rapid array interface protocol should return a [`PyCapsule`](https://docs.python.org/3/c-api/capsule.html#c.PyCapsule "(in Python v3.10)") that contains a pointer to a [`PyArrayInterface`](#c.PyArrayInterface "PyArrayInterface") structure with the relevant details of the array. After the new array is created, the attribute should be `DECREF`’d which will free the [`PyArrayInterface`](#c.PyArrayInterface "PyArrayInterface") structure. Remember to `INCREF` the object (whose [`__array_struct__`](../arrays.interface#object.__array_struct__ "object.__array_struct__") attribute was retrieved) and point the base member of the new [`PyArrayObject`](#c.PyArrayObject "PyArrayObject") to this same object. In this way the memory for the array will be managed correctly.

```
typedef struct {
    int two;
    int nd;
    char typekind;
    int itemsize;
    int flags;
    npy_intp *shape;
    npy_intp *strides;
    void *data;
    PyObject *descr;
} PyArrayInterface;
```

inttwo
The integer 2 as a sanity check.

intnd
The number of dimensions in the array.

chartypekind
A character indicating what kind of array is present according to the typestring convention with ‘t’ -> bitfield, ‘b’ -> Boolean, ‘i’ -> signed integer, ‘u’ -> unsigned integer, ‘f’ -> floating point, ‘c’ -> complex floating point, ‘O’ -> object, ‘S’ -> (byte-)string, ‘U’ -> unicode, ‘V’ -> void.

intitemsize
The number of bytes each item in the array requires.
intflags
Any of the bits [`NPY_ARRAY_C_CONTIGUOUS`](array#c.NPY_ARRAY_C_CONTIGUOUS "NPY_ARRAY_C_CONTIGUOUS") (1), [`NPY_ARRAY_F_CONTIGUOUS`](array#c.NPY_ARRAY_F_CONTIGUOUS "NPY_ARRAY_F_CONTIGUOUS") (2), [`NPY_ARRAY_ALIGNED`](array#c.NPY_ARRAY_ALIGNED "NPY_ARRAY_ALIGNED") (0x100), [`NPY_ARRAY_NOTSWAPPED`](array#c.NPY_ARRAY_NOTSWAPPED "NPY_ARRAY_NOTSWAPPED") (0x200), or [`NPY_ARRAY_WRITEABLE`](array#c.NPY_ARRAY_WRITEABLE "NPY_ARRAY_WRITEABLE") (0x400) to indicate something about the data. The [`NPY_ARRAY_ALIGNED`](array#c.NPY_ARRAY_ALIGNED "NPY_ARRAY_ALIGNED"), [`NPY_ARRAY_C_CONTIGUOUS`](array#c.NPY_ARRAY_C_CONTIGUOUS "NPY_ARRAY_C_CONTIGUOUS"), and [`NPY_ARRAY_F_CONTIGUOUS`](array#c.NPY_ARRAY_F_CONTIGUOUS "NPY_ARRAY_F_CONTIGUOUS") flags can actually be determined from the other parameters. The flag [`NPY_ARR_HAS_DESCR`](../arrays.interface#c.NPY_ARR_HAS_DESCR "NPY_ARR_HAS_DESCR") (0x800) can also be set to indicate to objects consuming the version 3 array interface that the descr member of the structure is present (it will be ignored by objects consuming version 2 of the array interface).

[npy_intp](dtype#c.npy_intp "npy_intp")*shape
An array containing the size of the array in each dimension.

[npy_intp](dtype#c.npy_intp "npy_intp")*strides
An array containing the number of bytes to jump to get to the next element in each dimension.

void*data
A pointer to the first element of the array.

[PyObject](https://docs.python.org/3/c-api/structures.html#c.PyObject "(in Python v3.10)")*descr
A Python object describing the data-type in more detail (same as the *descr* key in [`__array_interface__`](../arrays.interface#object.__array_interface__ "object.__array_interface__")). This can be `NULL` if *typekind* and *itemsize* provide enough information. This field is also ignored unless the [`NPY_ARR_HAS_DESCR`](../arrays.interface#c.NPY_ARR_HAS_DESCR "NPY_ARR_HAS_DESCR") flag is on in *flags*.
### Internally used structures Internally, the code uses some additional Python objects primarily for memory management. These types are not accessible directly from Python, and are not exposed to the C-API. They are included here only for completeness and assistance in understanding the code. typePyUFuncLoopObject A loose wrapper for a C-structure that contains the information needed for looping. This is useful if you are trying to understand the ufunc looping code. The [`PyUFuncLoopObject`](#c.PyUFuncLoopObject "PyUFuncLoopObject") is the associated C-structure. It is defined in the `ufuncobject.h` header. typePyUFuncReduceObject A loose wrapper for the C-structure that contains the information needed for reduce-like methods of ufuncs. This is useful if you are trying to understand the reduce, accumulate, and reduce-at code. The [`PyUFuncReduceObject`](#c.PyUFuncReduceObject "PyUFuncReduceObject") is the associated C-structure. It is defined in the `ufuncobject.h` header. typePyUFunc_Loop1d A simple linked-list of C-structures containing the information needed to define a 1-d loop for a ufunc for every defined signature of a user-defined data-type. [PyTypeObject](https://docs.python.org/3/c-api/type.html#c.PyTypeObject "(in Python v3.10)")PyArrayMapIter_Type Advanced indexing is handled with this Python type. It is simply a loose wrapper around the C-structure containing the variables needed for advanced array indexing. The associated C-structure, `PyArrayMapIterObject`, is useful if you are trying to understand the advanced-index mapping code. It is defined in the `arrayobject.h` header. This type is not exposed to Python and could be replaced with a C-structure. As a Python type it takes advantage of reference- counted memory management. © 2005–2022 NumPy Developers Licensed under the 3-clause BSD License. 
<https://numpy.org/doc/1.23/reference/c-api/types-and-structures.html>

System configuration
====================

When NumPy is built, information about system configuration is recorded, and is made available for extension modules using NumPy’s C API. These are mostly defined in `numpyconfig.h` (included in `ndarrayobject.h`). The public symbols are prefixed by `NPY_*`. NumPy also offers some functions for querying information about the platform in use. For private use, NumPy also constructs a `config.h` in the NumPy include directory, which is not exported by NumPy (that is, a Python extension which uses the NumPy C API will not see those symbols), to avoid namespace pollution.

Data type sizes
---------------

The `NPY_SIZEOF_{CTYPE}` constants are defined so that sizeof information is available to the pre-processor.

NPY_SIZEOF_SHORT
sizeof(short)

NPY_SIZEOF_INT
sizeof(int)

NPY_SIZEOF_LONG
sizeof(long)

NPY_SIZEOF_LONGLONG
sizeof(longlong) where longlong is defined appropriately on the platform.

NPY_SIZEOF_PY_LONG_LONG

NPY_SIZEOF_FLOAT
sizeof(float)

NPY_SIZEOF_DOUBLE
sizeof(double)

NPY_SIZEOF_LONG_DOUBLE
NPY_SIZEOF_LONGDOUBLE
sizeof(longdouble)

NPY_SIZEOF_PY_INTPTR_T
NPY_SIZEOF_INTP
Size of a pointer on this platform (sizeof(void *))

Platform information
--------------------

NPY_CPU_X86
NPY_CPU_AMD64
NPY_CPU_IA64
NPY_CPU_PPC
NPY_CPU_PPC64
NPY_CPU_SPARC
NPY_CPU_SPARC64
NPY_CPU_S390
NPY_CPU_PARISC

New in version 1.3.0.

CPU architecture of the platform; only one of the above is defined. Defined in `numpy/npy_cpu.h`.

NPY_LITTLE_ENDIAN
NPY_BIG_ENDIAN
NPY_BYTE_ORDER

New in version 1.3.0.

Portable alternatives to the `endian.h` macros of GNU Libc. If big endian, [`NPY_BYTE_ORDER`](#c.NPY_BYTE_ORDER "NPY_BYTE_ORDER") == [`NPY_BIG_ENDIAN`](#c.NPY_BIG_ENDIAN "NPY_BIG_ENDIAN"), and similarly for little endian architectures. Defined in `numpy/npy_endian.h`.

intPyArray_GetEndianness()

New in version 1.3.0.

Returns the endianness of the current platform.
One of [`NPY_CPU_BIG`](#c.PyArray_GetEndianness.NPY_CPU_BIG "NPY_CPU_BIG"), [`NPY_CPU_LITTLE`](#c.PyArray_GetEndianness.NPY_CPU_LITTLE "NPY_CPU_LITTLE"), or [`NPY_CPU_UNKNOWN_ENDIAN`](#c.PyArray_GetEndianness.NPY_CPU_UNKNOWN_ENDIAN "NPY_CPU_UNKNOWN_ENDIAN").

NPY_CPU_BIG
NPY_CPU_LITTLE
NPY_CPU_UNKNOWN_ENDIAN

Compiler directives
-------------------

NPY_LIKELY
NPY_UNLIKELY
NPY_UNUSED

Interrupt Handling
------------------

NPY_INTERRUPT_H
NPY_SIGSETJMP
NPY_SIGLONGJMP
NPY_SIGJMP_BUF
NPY_SIGINT_ON
NPY_SIGINT_OFF

<https://numpy.org/doc/1.23/reference/c-api/config.html>

Data Type API
=============

The standard array can have 24 different data types (and has some support for adding your own types). These data types all have an enumerated type, an enumerated type-character, and a corresponding array scalar Python type object (placed in a hierarchy). There are also standard C typedefs to make it easier to manipulate elements of the given data type. For the numeric types, there are also bit-width equivalent C typedefs and named typenumbers that make it easier to select the precision desired.

Warning: The names for the types in C code follow C naming conventions more closely. The Python names for these types follow Python conventions. Thus, [`NPY_FLOAT`](#c.NPY_FLOAT "NPY_FLOAT") picks up a 32-bit float in C, but [`numpy.float_`](../arrays.scalars#numpy.float_ "numpy.float_") in Python corresponds to a 64-bit double. The bit-width names can be used in both Python and C for clarity.

Enumerated Types
----------------

enumeratorNPY_TYPES
There is a list of enumerated types defined providing the basic 24 data types plus some useful generic names. Whenever the code requires a type number, one of these enumerated types is requested. The types are all called `NPY_{NAME}`:

enumeratorNPY_BOOL
The enumeration value for the boolean type, stored as one byte. It may only be set to the values 0 and 1.
enumeratorNPY_BYTE enumeratorNPY_INT8 The enumeration value for an 8-bit/1-byte signed integer. enumeratorNPY_SHORT enumeratorNPY_INT16 The enumeration value for a 16-bit/2-byte signed integer. enumeratorNPY_INT enumeratorNPY_INT32 The enumeration value for a 32-bit/4-byte signed integer. enumeratorNPY_LONG Equivalent to either NPY_INT or NPY_LONGLONG, depending on the platform. enumeratorNPY_LONGLONG enumeratorNPY_INT64 The enumeration value for a 64-bit/8-byte signed integer. enumeratorNPY_UBYTE enumeratorNPY_UINT8 The enumeration value for an 8-bit/1-byte unsigned integer. enumeratorNPY_USHORT enumeratorNPY_UINT16 The enumeration value for a 16-bit/2-byte unsigned integer. enumeratorNPY_UINT enumeratorNPY_UINT32 The enumeration value for a 32-bit/4-byte unsigned integer. enumeratorNPY_ULONG Equivalent to either NPY_UINT or NPY_ULONGLONG, depending on the platform. enumeratorNPY_ULONGLONG enumeratorNPY_UINT64 The enumeration value for a 64-bit/8-byte unsigned integer. enumeratorNPY_HALF enumeratorNPY_FLOAT16 The enumeration value for a 16-bit/2-byte IEEE 754-2008 compatible floating point type. enumeratorNPY_FLOAT enumeratorNPY_FLOAT32 The enumeration value for a 32-bit/4-byte IEEE 754 compatible floating point type. enumeratorNPY_DOUBLE enumeratorNPY_FLOAT64 The enumeration value for a 64-bit/8-byte IEEE 754 compatible floating point type. enumeratorNPY_LONGDOUBLE The enumeration value for a platform-specific floating point type which is at least as large as NPY_DOUBLE, but larger on many platforms. enumeratorNPY_CFLOAT enumeratorNPY_COMPLEX64 The enumeration value for a 64-bit/8-byte complex type made up of two NPY_FLOAT values. enumeratorNPY_CDOUBLE enumeratorNPY_COMPLEX128 The enumeration value for a 128-bit/16-byte complex type made up of two NPY_DOUBLE values. enumeratorNPY_CLONGDOUBLE The enumeration value for a platform-specific complex floating point type which is made up of two NPY_LONGDOUBLE values. 
enumeratorNPY_DATETIME
The enumeration value for a data type which holds dates or datetimes with a precision based on selectable date or time units.

enumeratorNPY_TIMEDELTA
The enumeration value for a data type which holds lengths of times in integers of selectable date or time units.

enumeratorNPY_STRING
The enumeration value for ASCII strings of a selectable size. The strings have a fixed maximum size within a given array.

enumeratorNPY_UNICODE
The enumeration value for UCS4 strings of a selectable size. The strings have a fixed maximum size within a given array.

enumeratorNPY_OBJECT
The enumeration value for references to arbitrary Python objects.

enumeratorNPY_VOID
Primarily used to hold struct dtypes, but can contain arbitrary binary data.

Some useful aliases of the above types are

enumeratorNPY_INTP
The enumeration value for a signed integer type which is the same size as a (void *) pointer. This is the type used by all arrays of indices.

enumeratorNPY_UINTP
The enumeration value for an unsigned integer type which is the same size as a (void *) pointer.

enumeratorNPY_MASK
The enumeration value of the type used for masks, such as with the [`NPY_ITER_ARRAYMASK`](iterator#c.NPY_ITER_ARRAYMASK "NPY_ITER_ARRAYMASK") iterator flag. This is equivalent to [`NPY_UINT8`](#c.NPY_UINT8 "NPY_UINT8").

enumeratorNPY_DEFAULT_TYPE
The default type to use when no dtype is explicitly specified, for example when calling np.zeros(shape). This is equivalent to [`NPY_DOUBLE`](#c.NPY_DOUBLE "NPY_DOUBLE").

Other useful related constants are

NPY_NTYPES
The total number of built-in NumPy types. The enumeration covers the range from 0 to NPY_NTYPES-1.

NPY_NOTYPE
A signal value guaranteed not to be a valid type enumeration number.

NPY_USERDEF
The start of type numbers used for Custom Data types.

The various character codes indicating certain types are also part of an enumerated list. References to type characters (should they be needed at all) should always use these enumerations.
The form of them is `NPY_{NAME}LTR` where `{NAME}` can be

**BOOL**, **BYTE**, **UBYTE**, **SHORT**, **USHORT**, **INT**, **UINT**, **LONG**, **ULONG**, **LONGLONG**, **ULONGLONG**, **HALF**, **FLOAT**, **DOUBLE**, **LONGDOUBLE**, **CFLOAT**, **CDOUBLE**, **CLONGDOUBLE**, **DATETIME**, **TIMEDELTA**, **OBJECT**, **STRING**, **VOID**

**INTP**, **UINTP**

**GENBOOL**, **SIGNED**, **UNSIGNED**, **FLOATING**, **COMPLEX**

The latter group of `{NAME}s` corresponds to letters used in the array interface typestring specification.

Defines
-------

### Max and min values for integers

`NPY_MAX_INT{bits}`, `NPY_MAX_UINT{bits}`, `NPY_MIN_INT{bits}`
These are defined for `{bits}` = 8, 16, 32, 64, 128, and 256 and provide the maximum (minimum) value of the corresponding (unsigned) integer type. Note: the actual integer type may not be available on all platforms (i.e. 128-bit and 256-bit integers are rare).

`NPY_MIN_{type}`
This is defined for `{type}` = **BYTE**, **SHORT**, **INT**, **LONG**, **LONGLONG**, **INTP**

`NPY_MAX_{type}`
This is defined for `{type}` = **BYTE**, **UBYTE**, **SHORT**, **USHORT**, **INT**, **UINT**, **LONG**, **ULONG**, **LONGLONG**, **ULONGLONG**, **INTP**, **UINTP**

### Number of bits in data types

All `NPY_SIZEOF_{CTYPE}` constants have corresponding `NPY_BITSOF_{CTYPE}` constants defined. The `NPY_BITSOF_{CTYPE}` constants provide the number of bits in the data type. Specifically, the available `{CTYPE}s` are

**BOOL**, **CHAR**, **SHORT**, **INT**, **LONG**, **LONGLONG**, **FLOAT**, **DOUBLE**, **LONGDOUBLE**

### Bit-width references to enumerated typenums

All of the numeric data types (integer, floating point, and complex) have constants that are defined to be a specific enumerated type number. Exactly which enumerated type a bit-width type refers to is platform dependent.
In particular, the constants available are `PyArray_{NAME}{BITS}` where `{NAME}` is **INT**, **UINT**, **FLOAT**, **COMPLEX** and `{BITS}` can be 8, 16, 32, 64, 80, 96, 128, 160, 192, 256, and 512. Obviously not all bit-widths are available on all platforms for all the kinds of numeric types. Commonly 8-, 16-, 32-, 64-bit integers; 32-, 64-bit floats; and 64-, 128-bit complex types are available. ### Integer that can hold a pointer The constants **NPY_INTP** and **NPY_UINTP** refer to an enumerated integer type that is large enough to hold a pointer on the platform. Index arrays should always be converted to **NPY_INTP** , because the dimension of the array is of type npy_intp. C-type names ------------ There are standard variable types for each of the numeric data types and the bool data type. Some of these are already available in the C-specification. You can create variables in extension code with these types. ### Boolean typenpy_bool unsigned char; The constants [`NPY_FALSE`](array#c.NPY_FALSE "NPY_FALSE") and [`NPY_TRUE`](array#c.NPY_TRUE "NPY_TRUE") are also defined. ### (Un)Signed Integer Unsigned versions of the integers can be defined by pre-pending a ‘u’ to the front of the integer name. typenpy_byte char typenpy_ubyte unsigned char typenpy_short short typenpy_ushort unsigned short typenpy_int int typenpy_uint unsigned int typenpy_int16 16-bit integer typenpy_uint16 16-bit unsigned integer typenpy_int32 32-bit integer typenpy_uint32 32-bit unsigned integer typenpy_int64 64-bit integer typenpy_uint64 64-bit unsigned integer typenpy_long long int typenpy_ulong unsigned long int typenpy_longlong long long int typenpy_ulonglong unsigned long long int typenpy_intp Py_intptr_t (an integer that is the size of a pointer on the platform). typenpy_uintp unsigned Py_intptr_t (an integer that is the size of a pointer on the platform). 
### (Complex) Floating point

typenpy_half
16-bit float

typenpy_float
32-bit float

typenpy_cfloat
32-bit complex float

typenpy_double
64-bit double

typenpy_cdouble
64-bit complex double

typenpy_longdouble
long double

typenpy_clongdouble
long complex double

Complex types are structures with **.real** and **.imag** members (in that order).

### Bit-width names

There are also typedefs for signed integers, unsigned integers, floating point, and complex floating point types of specific bit-widths. The available type names are `npy_int{bits}`, `npy_uint{bits}`, `npy_float{bits}`, and `npy_complex{bits}` where `{bits}` is the number of bits in the type and can be **8**, **16**, **32**, **64**, 128, and 256 for integer types; 16, **32**, **64**, 80, 96, 128, and 256 for floating-point types; and 32, **64**, **128**, 160, 192, and 512 for complex-valued types. Which bit-widths are available is platform dependent. The bolded bit-widths are usually available on all platforms.

Printf Formatting
-----------------

For help in printing, the following strings are defined as the correct format specifier in printf and related commands.

NPY_LONGLONG_FMT
NPY_ULONGLONG_FMT
NPY_INTP_FMT
NPY_UINTP_FMT
NPY_LONGDOUBLE_FMT

<https://numpy.org/doc/1.23/reference/c-api/dtype.html>

Array API
=========

Array structure and data access
-------------------------------

These macros access the [`PyArrayObject`](types-and-structures#c.PyArrayObject "PyArrayObject") structure members and are defined in `ndarraytypes.h`. The input argument, *arr*, can be any [PyObject](https://docs.python.org/3/c-api/structures.html#c.PyObject "(in Python v3.10)")* that is directly interpretable as a [PyArrayObject](types-and-structures#c.PyArrayObject "PyArrayObject")* (any instance of the [`PyArray_Type`](types-and-structures#c.PyArray_Type "PyArray_Type") and its sub-types).
intPyArray_NDIM([PyArrayObject](types-and-structures#c.PyArrayObject "PyArrayObject")*arr) The number of dimensions in the array. intPyArray_FLAGS([PyArrayObject](types-and-structures#c.PyArrayObject "PyArrayObject")*arr) Returns an integer representing the [array-flags](#array-flags). intPyArray_TYPE([PyArrayObject](types-and-structures#c.PyArrayObject "PyArrayObject")*arr) Return the (builtin) typenumber for the elements of this array. intPyArray_SETITEM([PyArrayObject](types-and-structures#c.PyArrayObject "PyArrayObject")*arr, void*itemptr, [PyObject](https://docs.python.org/3/c-api/structures.html#c.PyObject "(in Python v3.10)")*obj) Convert obj and place it in the ndarray, *arr*, at the place pointed to by itemptr. Return -1 if an error occurs or 0 on success. voidPyArray_ENABLEFLAGS([PyArrayObject](types-and-structures#c.PyArrayObject "PyArrayObject")*arr, intflags) New in version 1.7. Enables the specified array flags. This function does no validation, and assumes that you know what you’re doing. voidPyArray_CLEARFLAGS([PyArrayObject](types-and-structures#c.PyArrayObject "PyArrayObject")*arr, intflags) New in version 1.7. Clears the specified array flags. This function does no validation, and assumes that you know what you’re doing. void*PyArray_DATA([PyArrayObject](types-and-structures#c.PyArrayObject "PyArrayObject")*arr) char*PyArray_BYTES([PyArrayObject](types-and-structures#c.PyArrayObject "PyArrayObject")*arr) These two macros are similar and obtain the pointer to the data-buffer for the array. The first macro can (and should be) assigned to a particular pointer where the second is for generic processing. If you have not guaranteed a contiguous and/or aligned array then be sure you understand how to access the data in the array to avoid memory and/or alignment problems. [npy_intp](dtype#c.npy_intp "npy_intp")*PyArray_DIMS([PyArrayObject](types-and-structures#c.PyArrayObject "PyArrayObject")*arr) Returns a pointer to the dimensions/shape of the array. 
The number of elements matches the number of dimensions of the array. Can return `NULL` for 0-dimensional arrays. [npy_intp](dtype#c.npy_intp "npy_intp")*PyArray_SHAPE([PyArrayObject](types-and-structures#c.PyArrayObject "PyArrayObject")*arr) New in version 1.7. A synonym for [`PyArray_DIMS`](#c.PyArray_DIMS "PyArray_DIMS"), named to be consistent with the [`shape`](../generated/numpy.ndarray.shape#numpy.ndarray.shape "numpy.ndarray.shape") usage within Python. [npy_intp](dtype#c.npy_intp "npy_intp")*PyArray_STRIDES([PyArrayObject](types-and-structures#c.PyArrayObject "PyArrayObject")*arr) Returns a pointer to the strides of the array. The number of elements matches the number of dimensions of the array. [npy_intp](dtype#c.npy_intp "npy_intp")PyArray_DIM([PyArrayObject](types-and-structures#c.PyArrayObject "PyArrayObject")*arr, intn) Return the shape in the *n* \(^{\textrm{th}}\) dimension. [npy_intp](dtype#c.npy_intp "npy_intp")PyArray_STRIDE([PyArrayObject](types-and-structures#c.PyArrayObject "PyArrayObject")*arr, intn) Return the stride in the *n* \(^{\textrm{th}}\) dimension. [npy_intp](dtype#c.npy_intp "npy_intp")PyArray_ITEMSIZE([PyArrayObject](types-and-structures#c.PyArrayObject "PyArrayObject")*arr) Return the itemsize for the elements of this array. Note that, in the old API that was deprecated in version 1.7, this function had the return type `int`. [npy_intp](dtype#c.npy_intp "npy_intp")PyArray_SIZE([PyArrayObject](types-and-structures#c.PyArrayObject "PyArrayObject")*arr) Returns the total size (in number of elements) of the array. [npy_intp](dtype#c.npy_intp "npy_intp")PyArray_Size([PyArrayObject](types-and-structures#c.PyArrayObject "PyArrayObject")*obj) Returns 0 if *obj* is not a sub-class of ndarray. Otherwise, returns the total number of elements in the array. Safer version of [`PyArray_SIZE`](#c.PyArray_SIZE "PyArray_SIZE") (*obj*). 
[npy_intp](dtype#c.npy_intp "npy_intp")PyArray_NBYTES([PyArrayObject](types-and-structures#c.PyArrayObject "PyArrayObject")*arr) Returns the total number of bytes consumed by the array. [PyObject](https://docs.python.org/3/c-api/structures.html#c.PyObject "(in Python v3.10)")*PyArray_BASE([PyArrayObject](types-and-structures#c.PyArrayObject "PyArrayObject")*arr) This returns the base object of the array. In most cases, this means the object which owns the memory the array is pointing at. If you are constructing an array using the C API, and specifying your own memory, you should use the function [`PyArray_SetBaseObject`](#c.PyArray_SetBaseObject "PyArray_SetBaseObject") to set the base to an object which owns the memory. If the [`NPY_ARRAY_WRITEBACKIFCOPY`](#c.NPY_ARRAY_WRITEBACKIFCOPY "NPY_ARRAY_WRITEBACKIFCOPY") flag is set, it has a different meaning, namely base is the array into which the current array will be copied upon copy resolution. This overloading of the base property for two functions is likely to change in a future version of NumPy. [PyArray_Descr](types-and-structures#c.PyArray_Descr "PyArray_Descr")*PyArray_DESCR([PyArrayObject](types-and-structures#c.PyArrayObject "PyArrayObject")*arr) Returns a borrowed reference to the dtype property of the array. [PyArray_Descr](types-and-structures#c.PyArray_Descr "PyArray_Descr")*PyArray_DTYPE([PyArrayObject](types-and-structures#c.PyArrayObject "PyArrayObject")*arr) New in version 1.7. A synonym for PyArray_DESCR, named to be consistent with the ‘dtype’ usage within Python. [PyObject](https://docs.python.org/3/c-api/structures.html#c.PyObject "(in Python v3.10)")*PyArray_GETITEM([PyArrayObject](types-and-structures#c.PyArrayObject "PyArrayObject")*arr, void*itemptr) Get a Python object of a builtin type from the ndarray, *arr*, at the location pointed to by itemptr. Return `NULL` on failure. 
[`numpy.ndarray.item`](../generated/numpy.ndarray.item#numpy.ndarray.item "numpy.ndarray.item") is identical to PyArray_GETITEM. intPyArray_FinalizeFunc([PyArrayObject](types-and-structures#c.PyArrayObject "PyArrayObject")*arr, [PyObject](https://docs.python.org/3/c-api/structures.html#c.PyObject "(in Python v3.10)")*obj) The function pointed to by the CObject [`__array_finalize__`](../arrays.classes#numpy.class.__array_finalize__ "numpy.class.__array_finalize__"). The first argument is the newly created sub-type. The second argument (if not NULL) is the “parent” array (if the array was created using slicing or some other operation where a clearly-distinguishable parent is present). This routine can do anything it wants to. It should return a -1 on error and 0 otherwise. ### Data access These functions and macros provide easy access to elements of the ndarray from C. These work for all arrays. You may need to take care when accessing the data in the array, however, if it is not in machine byte-order, misaligned, or not writeable. In other words, be sure to respect the state of the flags unless you know what you are doing, or have previously guaranteed an array that is writeable, aligned, and in machine byte-order using [`PyArray_FromAny`](#c.PyArray_FromAny "PyArray_FromAny"). If you wish to handle all types of arrays, the copyswap function for each type is useful for handling misbehaved arrays. Some platforms (e.g. Solaris) do not like misaligned data and will crash if you de-reference a misaligned pointer. Other platforms (e.g. x86 Linux) will just work more slowly with misaligned data. void*PyArray_GetPtr([PyArrayObject](types-and-structures#c.PyArrayObject "PyArrayObject")*aobj, [npy_intp](dtype#c.npy_intp "npy_intp")*ind) Return a pointer to the data of the ndarray, *aobj*, at the N-dimensional index given by the c-array, *ind*, (which must be at least *aobj* ->nd in size). You may want to typecast the returned pointer to the data type of the ndarray. 
void*PyArray_GETPTR1([PyArrayObject](types-and-structures#c.PyArrayObject "PyArrayObject")*obj, [npy_intp](dtype#c.npy_intp "npy_intp")i) void*PyArray_GETPTR2([PyArrayObject](types-and-structures#c.PyArrayObject "PyArrayObject")*obj, [npy_intp](dtype#c.npy_intp "npy_intp")i, [npy_intp](dtype#c.npy_intp "npy_intp")j) void*PyArray_GETPTR3([PyArrayObject](types-and-structures#c.PyArrayObject "PyArrayObject")*obj, [npy_intp](dtype#c.npy_intp "npy_intp")i, [npy_intp](dtype#c.npy_intp "npy_intp")j, [npy_intp](dtype#c.npy_intp "npy_intp")k) void*PyArray_GETPTR4([PyArrayObject](types-and-structures#c.PyArrayObject "PyArrayObject")*obj, [npy_intp](dtype#c.npy_intp "npy_intp")i, [npy_intp](dtype#c.npy_intp "npy_intp")j, [npy_intp](dtype#c.npy_intp "npy_intp")k, [npy_intp](dtype#c.npy_intp "npy_intp")l) Quick, inline access to the element at the given coordinates in the ndarray, *obj*, which must have respectively 1, 2, 3, or 4 dimensions (this is not checked). The corresponding *i*, *j*, *k*, and *l* coordinates can be any integer but will be interpreted as `npy_intp`. You may want to typecast the returned pointer to the data type of the ndarray. Creating arrays --------------- ### From scratch [PyObject](https://docs.python.org/3/c-api/structures.html#c.PyObject "(in Python v3.10)")*PyArray_NewFromDescr([PyTypeObject](https://docs.python.org/3/c-api/type.html#c.PyTypeObject "(in Python v3.10)")*subtype, [PyArray_Descr](types-and-structures#c.PyArray_Descr "PyArray_Descr")*descr, intnd, [npy_intp](dtype#c.npy_intp "npy_intp")const*dims, [npy_intp](dtype#c.npy_intp "npy_intp")const*strides, void*data, intflags, [PyObject](https://docs.python.org/3/c-api/structures.html#c.PyObject "(in Python v3.10)")*obj) This function steals a reference to *descr*. The easiest way to get one is using [`PyArray_DescrFromType`](#c.PyArray_DescrFromType "PyArray_DescrFromType"). This is the main array creation function. Most new arrays are created with this flexible function. 
The returned object is an object of Python-type *subtype*, which must be a subtype of [`PyArray_Type`](types-and-structures#c.PyArray_Type "PyArray_Type"). The array has *nd* dimensions, described by *dims*. The data-type descriptor of the new array is *descr*. If *subtype* is an array subclass instead of the base [`&PyArray_Type`](types-and-structures#c.PyArray_Type "PyArray_Type"), then *obj* is the object to pass to the [`__array_finalize__`](../arrays.classes#numpy.class.__array_finalize__ "numpy.class.__array_finalize__") method of the subclass. 

If *data* is `NULL`, then new uninitialized memory will be allocated and *flags* can be non-zero to indicate a Fortran-style contiguous array. Use [`PyArray_FILLWBYTE`](#c.PyArray_FILLWBYTE "PyArray_FILLWBYTE") to initialize the memory. 

If *data* is not `NULL`, then it is assumed to point to the memory to be used for the array and the *flags* argument is used as the new flags for the array (except that the state of the [`NPY_ARRAY_OWNDATA`](#c.NPY_ARRAY_OWNDATA "NPY_ARRAY_OWNDATA") and [`NPY_ARRAY_WRITEBACKIFCOPY`](#c.NPY_ARRAY_WRITEBACKIFCOPY "NPY_ARRAY_WRITEBACKIFCOPY") flags of the new array will be reset). 

In addition, if *data* is non-NULL, then *strides* can also be provided. If *strides* is `NULL`, then the array strides are computed as C-style contiguous (the default) or Fortran-style contiguous (when *data* is `NULL` and *flags* is nonzero, or when *data* is non-NULL and *flags* & [`NPY_ARRAY_F_CONTIGUOUS`](#c.NPY_ARRAY_F_CONTIGUOUS "NPY_ARRAY_F_CONTIGUOUS") is nonzero). Any provided *dims* and *strides* are copied into newly allocated dimension and strides arrays for the new array object. [`PyArray_CheckStrides`](#c.PyArray_CheckStrides "PyArray_CheckStrides") can help verify non-`NULL` stride information. 

If `data` is provided, it must stay alive for the life of the array. 
One way to manage this is through [`PyArray_SetBaseObject`](#c.PyArray_SetBaseObject "PyArray_SetBaseObject") [PyObject](https://docs.python.org/3/c-api/structures.html#c.PyObject "(in Python v3.10)")*PyArray_NewLikeArray([PyArrayObject](types-and-structures#c.PyArrayObject "PyArrayObject")*prototype, [NPY_ORDER](#c.NPY_ORDER "NPY_ORDER")order, [PyArray_Descr](types-and-structures#c.PyArray_Descr "PyArray_Descr")*descr, intsubok) New in version 1.6. This function steals a reference to *descr* if it is not NULL. This array creation routine allows for the convenient creation of a new array matching an existing array’s shapes and memory layout, possibly changing the layout and/or data type. When *order* is [`NPY_ANYORDER`](#c.NPY_ORDER.NPY_ANYORDER "NPY_ANYORDER"), the result order is [`NPY_FORTRANORDER`](#c.NPY_ORDER.NPY_FORTRANORDER "NPY_FORTRANORDER") if *prototype* is a fortran array, [`NPY_CORDER`](#c.NPY_ORDER.NPY_CORDER "NPY_CORDER") otherwise. When *order* is [`NPY_KEEPORDER`](#c.NPY_ORDER.NPY_KEEPORDER "NPY_KEEPORDER"), the result order matches that of *prototype*, even when the axes of *prototype* aren’t in C or Fortran order. If *descr* is NULL, the data type of *prototype* is used. If *subok* is 1, the newly created array will use the sub-type of *prototype* to create the new array, otherwise it will create a base-class array. [PyObject](https://docs.python.org/3/c-api/structures.html#c.PyObject "(in Python v3.10)")*PyArray_New([PyTypeObject](https://docs.python.org/3/c-api/type.html#c.PyTypeObject "(in Python v3.10)")*subtype, intnd, [npy_intp](dtype#c.npy_intp "npy_intp")const*dims, inttype_num, [npy_intp](dtype#c.npy_intp "npy_intp")const*strides, void*data, intitemsize, intflags, [PyObject](https://docs.python.org/3/c-api/structures.html#c.PyObject "(in Python v3.10)")*obj) This is similar to [`PyArray_NewFromDescr`](#c.PyArray_NewFromDescr "PyArray_NewFromDescr") (
) except you specify the data-type descriptor with *type_num* and *itemsize*, where *type_num* corresponds to a builtin (or user-defined) type. If the type always has the same number of bytes, then itemsize is ignored. Otherwise, itemsize specifies the particular size of this array. 

Warning 

If data is passed to [`PyArray_NewFromDescr`](#c.PyArray_NewFromDescr "PyArray_NewFromDescr") or [`PyArray_New`](#c.PyArray_New "PyArray_New"), this memory must not be deallocated until the new array is deleted. If this data came from another Python object, this can be accomplished using [`Py_INCREF`](https://docs.python.org/3/c-api/refcounting.html#c.Py_INCREF "(in Python v3.10)") on that object and setting the base member of the new array to point to that object. If strides are passed in they must be consistent with the dimensions, the itemsize, and the data of the array. 

[PyObject](https://docs.python.org/3/c-api/structures.html#c.PyObject "(in Python v3.10)")*PyArray_SimpleNew(intnd, [npy_intp](dtype#c.npy_intp "npy_intp")const*dims, inttypenum) 

Create a new uninitialized array of type, *typenum*, whose size in each of *nd* dimensions is given by the integer array, *dims*. The memory for the array is uninitialized (unless typenum is [`NPY_OBJECT`](dtype#c.NPY_OBJECT "NPY_OBJECT"), in which case each element in the array is set to NULL). The *typenum* argument allows specification of any of the builtin data-types such as [`NPY_FLOAT`](dtype#c.NPY_FLOAT "NPY_FLOAT") or [`NPY_LONG`](dtype#c.NPY_LONG "NPY_LONG"). The memory for the array can be set to zero if desired using [`PyArray_FILLWBYTE`](#c.PyArray_FILLWBYTE "PyArray_FILLWBYTE") (return_object, 0). This function cannot be used to create a flexible-type array (no itemsize given). 
[PyObject](https://docs.python.org/3/c-api/structures.html#c.PyObject "(in Python v3.10)")*PyArray_SimpleNewFromData(intnd, [npy_intp](dtype#c.npy_intp "npy_intp")const*dims, inttypenum, void*data) Create an array wrapper around *data* pointed to by the given pointer. The array flags will have a default that the data area is well-behaved and C-style contiguous. The shape of the array is given by the *dims* c-array of length *nd*. The data-type of the array is indicated by *typenum*. If data comes from another reference-counted Python object, the reference count on this object should be increased after the pointer is passed in, and the base member of the returned ndarray should point to the Python object that owns the data. This will ensure that the provided memory is not freed while the returned array is in existence. [PyObject](https://docs.python.org/3/c-api/structures.html#c.PyObject "(in Python v3.10)")*PyArray_SimpleNewFromDescr(intnd, [npy_int](dtype#c.npy_int "npy_int")const*dims, [PyArray_Descr](types-and-structures#c.PyArray_Descr "PyArray_Descr")*descr) This function steals a reference to *descr*. Create a new array with the provided data-type descriptor, *descr*, of the shape determined by *nd* and *dims*. voidPyArray_FILLWBYTE([PyObject](https://docs.python.org/3/c-api/structures.html#c.PyObject "(in Python v3.10)")*obj, intval) Fill the array pointed to by *obj* —which must be a (subclass of) ndarray—with the contents of *val* (evaluated as a byte). This macro calls memset, so obj must be contiguous. [PyObject](https://docs.python.org/3/c-api/structures.html#c.PyObject "(in Python v3.10)")*PyArray_Zeros(intnd, [npy_intp](dtype#c.npy_intp "npy_intp")const*dims, [PyArray_Descr](types-and-structures#c.PyArray_Descr "PyArray_Descr")*dtype, intfortran) Construct a new *nd* -dimensional array with shape given by *dims* and data type given by *dtype*. If *fortran* is non-zero, then a Fortran-order array is created, otherwise a C-order array is created. 
Fill the memory with zeros (or the 0 object if *dtype* corresponds to [`NPY_OBJECT`](dtype#c.NPY_OBJECT "NPY_OBJECT") ). [PyObject](https://docs.python.org/3/c-api/structures.html#c.PyObject "(in Python v3.10)")*PyArray_ZEROS(intnd, [npy_intp](dtype#c.npy_intp "npy_intp")const*dims, inttype_num, intfortran) Macro form of [`PyArray_Zeros`](#c.PyArray_Zeros "PyArray_Zeros") which takes a type-number instead of a data-type object. [PyObject](https://docs.python.org/3/c-api/structures.html#c.PyObject "(in Python v3.10)")*PyArray_Empty(intnd, [npy_intp](dtype#c.npy_intp "npy_intp")const*dims, [PyArray_Descr](types-and-structures#c.PyArray_Descr "PyArray_Descr")*dtype, intfortran) Construct a new *nd* -dimensional array with shape given by *dims* and data type given by *dtype*. If *fortran* is non-zero, then a Fortran-order array is created, otherwise a C-order array is created. The array is uninitialized unless the data type corresponds to [`NPY_OBJECT`](dtype#c.NPY_OBJECT "NPY_OBJECT") in which case the array is filled with [`Py_None`](https://docs.python.org/3/c-api/none.html#c.Py_None "(in Python v3.10)"). [PyObject](https://docs.python.org/3/c-api/structures.html#c.PyObject "(in Python v3.10)")*PyArray_EMPTY(intnd, [npy_intp](dtype#c.npy_intp "npy_intp")const*dims, inttypenum, intfortran) Macro form of [`PyArray_Empty`](#c.PyArray_Empty "PyArray_Empty") which takes a type-number, *typenum*, instead of a data-type object. [PyObject](https://docs.python.org/3/c-api/structures.html#c.PyObject "(in Python v3.10)")*PyArray_Arange(doublestart, doublestop, doublestep, inttypenum) Construct a new 1-dimensional array of data-type, *typenum*, that ranges from *start* to *stop* (exclusive) in increments of *step* . Equivalent to **arange** (*start*, *stop*, *step*, dtype). 
[PyObject](https://docs.python.org/3/c-api/structures.html#c.PyObject "(in Python v3.10)")*PyArray_ArangeObj([PyObject](https://docs.python.org/3/c-api/structures.html#c.PyObject "(in Python v3.10)")*start, [PyObject](https://docs.python.org/3/c-api/structures.html#c.PyObject "(in Python v3.10)")*stop, [PyObject](https://docs.python.org/3/c-api/structures.html#c.PyObject "(in Python v3.10)")*step, [PyArray_Descr](types-and-structures#c.PyArray_Descr "PyArray_Descr")*descr) Construct a new 1-dimensional array of data-type determined by `descr`, that ranges from `start` to `stop` (exclusive) in increments of `step`. Equivalent to arange( `start`, `stop`, `step`, `typenum` ). intPyArray_SetBaseObject([PyArrayObject](types-and-structures#c.PyArrayObject "PyArrayObject")*arr, [PyObject](https://docs.python.org/3/c-api/structures.html#c.PyObject "(in Python v3.10)")*obj) New in version 1.7. This function **steals a reference** to `obj` and sets it as the base property of `arr`. If you construct an array by passing in your own memory buffer as a parameter, you need to set the array’s `base` property to ensure the lifetime of the memory buffer is appropriate. The return value is 0 on success, -1 on failure. If the object provided is an array, this function traverses the chain of `base` pointers so that each array points to the owner of the memory directly. Once the base is set, it may not be changed to another value. 
### From other objects 

[PyObject](https://docs.python.org/3/c-api/structures.html#c.PyObject "(in Python v3.10)")*PyArray_FromAny([PyObject](https://docs.python.org/3/c-api/structures.html#c.PyObject "(in Python v3.10)")*op, [PyArray_Descr](types-and-structures#c.PyArray_Descr "PyArray_Descr")*dtype, intmin_depth, intmax_depth, intrequirements, [PyObject](https://docs.python.org/3/c-api/structures.html#c.PyObject "(in Python v3.10)")*context) 

This is the main function used to obtain an array from any nested sequence, or object that exposes the array interface, *op*. The parameters allow specification of the required *dtype*, the minimum (*min_depth*) and maximum (*max_depth*) number of dimensions acceptable, and other *requirements* for the array. This function **steals a reference** to the dtype argument, which needs to be a [`PyArray_Descr`](types-and-structures#c.PyArray_Descr "PyArray_Descr") structure indicating the desired data-type (including required byteorder). The *dtype* argument may be `NULL`, indicating that any data-type (and byteorder) is acceptable. Unless [`NPY_ARRAY_FORCECAST`](#c.PyArray_FromAny.NPY_ARRAY_FORCECAST "NPY_ARRAY_FORCECAST") is present in *requirements*, this call will generate an error if the data type cannot be safely obtained from the object. If you want to use `NULL` for the *dtype* and ensure the array is not swapped, then use [`PyArray_CheckFromAny`](#c.PyArray_CheckFromAny "PyArray_CheckFromAny"). A value of 0 for either of the depth parameters causes the parameter to be ignored. Any of the following array flags can be added (*e.g.* using |) to get the *requirements* argument. If your code can handle general (*e.g.* strided, byte-swapped, or unaligned) arrays, then *requirements* may be 0. Also, if *op* is not already an array (or does not expose the array interface), then a new array will be created (and filled from *op* using the sequence protocol). 
The new array will have [`NPY_ARRAY_DEFAULT`](#c.PyArray_FromAny.NPY_ARRAY_DEFAULT "NPY_ARRAY_DEFAULT") as its flags member. The *context* argument is unused. NPY_ARRAY_C_CONTIGUOUS Make sure the returned array is C-style contiguous NPY_ARRAY_F_CONTIGUOUS Make sure the returned array is Fortran-style contiguous. NPY_ARRAY_ALIGNED Make sure the returned array is aligned on proper boundaries for its data type. An aligned array has the data pointer and every strides factor as a multiple of the alignment factor for the data-type- descriptor. NPY_ARRAY_WRITEABLE Make sure the returned array can be written to. NPY_ARRAY_ENSURECOPY Make sure a copy is made of *op*. If this flag is not present, data is not copied if it can be avoided. NPY_ARRAY_ENSUREARRAY Make sure the result is a base-class ndarray. By default, if *op* is an instance of a subclass of ndarray, an instance of that same subclass is returned. If this flag is set, an ndarray object will be returned instead. NPY_ARRAY_FORCECAST Force a cast to the output type even if it cannot be done safely. Without this flag, a data cast will occur only if it can be done safely, otherwise an error is raised. NPY_ARRAY_WRITEBACKIFCOPY If *op* is already an array, but does not satisfy the requirements, then a copy is made (which will satisfy the requirements). If this flag is present and a copy (of an object that is already an array) must be made, then the corresponding [`NPY_ARRAY_WRITEBACKIFCOPY`](#c.PyArray_FromAny.NPY_ARRAY_WRITEBACKIFCOPY "NPY_ARRAY_WRITEBACKIFCOPY") flag is set in the returned copy and *op* is made to be read-only. You must be sure to call [`PyArray_ResolveWritebackIfCopy`](#c.PyArray_ResolveWritebackIfCopy "PyArray_ResolveWritebackIfCopy") to copy the contents back into *op* and the *op* array will be made writeable again. If *op* is not writeable to begin with, or if it is not already an array, then an error is raised. 
NPY_ARRAY_BEHAVED [`NPY_ARRAY_ALIGNED`](#c.PyArray_FromAny.NPY_ARRAY_ALIGNED "NPY_ARRAY_ALIGNED") | [`NPY_ARRAY_WRITEABLE`](#c.PyArray_FromAny.NPY_ARRAY_WRITEABLE "NPY_ARRAY_WRITEABLE") NPY_ARRAY_CARRAY [`NPY_ARRAY_C_CONTIGUOUS`](#c.PyArray_FromAny.NPY_ARRAY_C_CONTIGUOUS "NPY_ARRAY_C_CONTIGUOUS") | [`NPY_ARRAY_BEHAVED`](#c.PyArray_FromAny.NPY_ARRAY_BEHAVED "NPY_ARRAY_BEHAVED") NPY_ARRAY_CARRAY_RO [`NPY_ARRAY_C_CONTIGUOUS`](#c.PyArray_FromAny.NPY_ARRAY_C_CONTIGUOUS "NPY_ARRAY_C_CONTIGUOUS") | [`NPY_ARRAY_ALIGNED`](#c.PyArray_FromAny.NPY_ARRAY_ALIGNED "NPY_ARRAY_ALIGNED") NPY_ARRAY_FARRAY [`NPY_ARRAY_F_CONTIGUOUS`](#c.PyArray_FromAny.NPY_ARRAY_F_CONTIGUOUS "NPY_ARRAY_F_CONTIGUOUS") | [`NPY_ARRAY_BEHAVED`](#c.PyArray_FromAny.NPY_ARRAY_BEHAVED "NPY_ARRAY_BEHAVED") NPY_ARRAY_FARRAY_RO [`NPY_ARRAY_F_CONTIGUOUS`](#c.PyArray_FromAny.NPY_ARRAY_F_CONTIGUOUS "NPY_ARRAY_F_CONTIGUOUS") | [`NPY_ARRAY_ALIGNED`](#c.PyArray_FromAny.NPY_ARRAY_ALIGNED "NPY_ARRAY_ALIGNED") NPY_ARRAY_DEFAULT [`NPY_ARRAY_CARRAY`](#c.PyArray_FromAny.NPY_ARRAY_CARRAY "NPY_ARRAY_CARRAY") NPY_ARRAY_IN_ARRAY [`NPY_ARRAY_C_CONTIGUOUS`](#c.NPY_ARRAY_C_CONTIGUOUS "NPY_ARRAY_C_CONTIGUOUS") | [`NPY_ARRAY_ALIGNED`](#c.NPY_ARRAY_ALIGNED "NPY_ARRAY_ALIGNED") NPY_ARRAY_IN_FARRAY [`NPY_ARRAY_F_CONTIGUOUS`](#c.NPY_ARRAY_F_CONTIGUOUS "NPY_ARRAY_F_CONTIGUOUS") | [`NPY_ARRAY_ALIGNED`](#c.NPY_ARRAY_ALIGNED "NPY_ARRAY_ALIGNED") NPY_OUT_ARRAY [`NPY_ARRAY_C_CONTIGUOUS`](#c.NPY_ARRAY_C_CONTIGUOUS "NPY_ARRAY_C_CONTIGUOUS") | [`NPY_ARRAY_WRITEABLE`](#c.NPY_ARRAY_WRITEABLE "NPY_ARRAY_WRITEABLE") | [`NPY_ARRAY_ALIGNED`](#c.NPY_ARRAY_ALIGNED "NPY_ARRAY_ALIGNED") NPY_ARRAY_OUT_ARRAY [`NPY_ARRAY_C_CONTIGUOUS`](#c.NPY_ARRAY_C_CONTIGUOUS "NPY_ARRAY_C_CONTIGUOUS") | [`NPY_ARRAY_ALIGNED`](#c.NPY_ARRAY_ALIGNED "NPY_ARRAY_ALIGNED") | [`NPY_ARRAY_WRITEABLE`](#c.NPY_ARRAY_WRITEABLE "NPY_ARRAY_WRITEABLE") NPY_ARRAY_OUT_FARRAY [`NPY_ARRAY_F_CONTIGUOUS`](#c.NPY_ARRAY_F_CONTIGUOUS "NPY_ARRAY_F_CONTIGUOUS") | 
[`NPY_ARRAY_WRITEABLE`](#c.NPY_ARRAY_WRITEABLE "NPY_ARRAY_WRITEABLE") | [`NPY_ARRAY_ALIGNED`](#c.NPY_ARRAY_ALIGNED "NPY_ARRAY_ALIGNED") NPY_ARRAY_INOUT_ARRAY [`NPY_ARRAY_C_CONTIGUOUS`](#c.NPY_ARRAY_C_CONTIGUOUS "NPY_ARRAY_C_CONTIGUOUS") | [`NPY_ARRAY_WRITEABLE`](#c.NPY_ARRAY_WRITEABLE "NPY_ARRAY_WRITEABLE") | [`NPY_ARRAY_ALIGNED`](#c.NPY_ARRAY_ALIGNED "NPY_ARRAY_ALIGNED") | [`NPY_ARRAY_WRITEBACKIFCOPY`](#c.NPY_ARRAY_WRITEBACKIFCOPY "NPY_ARRAY_WRITEBACKIFCOPY") NPY_ARRAY_INOUT_FARRAY [`NPY_ARRAY_F_CONTIGUOUS`](#c.NPY_ARRAY_F_CONTIGUOUS "NPY_ARRAY_F_CONTIGUOUS") | [`NPY_ARRAY_WRITEABLE`](#c.NPY_ARRAY_WRITEABLE "NPY_ARRAY_WRITEABLE") | [`NPY_ARRAY_ALIGNED`](#c.NPY_ARRAY_ALIGNED "NPY_ARRAY_ALIGNED") | [`NPY_ARRAY_WRITEBACKIFCOPY`](#c.NPY_ARRAY_WRITEBACKIFCOPY "NPY_ARRAY_WRITEBACKIFCOPY") intPyArray_GetArrayParamsFromObject([PyObject](https://docs.python.org/3/c-api/structures.html#c.PyObject "(in Python v3.10)")*op, [PyArray_Descr](types-and-structures#c.PyArray_Descr "PyArray_Descr")*requested_dtype, [npy_bool](dtype#c.npy_bool "npy_bool")writeable, [PyArray_Descr](types-and-structures#c.PyArray_Descr "PyArray_Descr")**out_dtype, int*out_ndim, [npy_intp](dtype#c.npy_intp "npy_intp")*out_dims, [PyArrayObject](types-and-structures#c.PyArrayObject "PyArrayObject")**out_arr, [PyObject](https://docs.python.org/3/c-api/structures.html#c.PyObject "(in Python v3.10)")*context) Deprecated since version NumPy: 1.19 Unless NumPy is made aware of an issue with this, this function is scheduled for rapid removal without replacement. Changed in version NumPy: 1.19 `context` is never used. Its use results in an error. New in version 1.6. 
[PyObject](https://docs.python.org/3/c-api/structures.html#c.PyObject "(in Python v3.10)")*PyArray_CheckFromAny([PyObject](https://docs.python.org/3/c-api/structures.html#c.PyObject "(in Python v3.10)")*op, [PyArray_Descr](types-and-structures#c.PyArray_Descr "PyArray_Descr")*dtype, intmin_depth, intmax_depth, intrequirements, [PyObject](https://docs.python.org/3/c-api/structures.html#c.PyObject "(in Python v3.10)")*context) Nearly identical to [`PyArray_FromAny`](#c.PyArray_FromAny "PyArray_FromAny") (
) except *requirements* can contain [`NPY_ARRAY_NOTSWAPPED`](#c.NPY_ARRAY_NOTSWAPPED "NPY_ARRAY_NOTSWAPPED") (over-riding the specification in *dtype*) and [`NPY_ARRAY_ELEMENTSTRIDES`](#c.NPY_ARRAY_ELEMENTSTRIDES "NPY_ARRAY_ELEMENTSTRIDES") which indicates that the array should be aligned in the sense that the strides are multiples of the element size. In versions 1.6 and earlier of NumPy, the following flags did not have the _ARRAY_ macro namespace in them. That form of the constant names is deprecated in 1.7. NPY_ARRAY_NOTSWAPPED Make sure the returned array has a data-type descriptor that is in machine byte-order, over-riding any specification in the *dtype* argument. Normally, the byte-order requirement is determined by the *dtype* argument. If this flag is set and the dtype argument does not indicate a machine byte-order descriptor (or is NULL and the object is already an array with a data-type descriptor that is not in machine byte- order), then a new data-type descriptor is created and used with its byte-order field set to native. NPY_ARRAY_BEHAVED_NS [`NPY_ARRAY_ALIGNED`](#c.NPY_ARRAY_ALIGNED "NPY_ARRAY_ALIGNED") | [`NPY_ARRAY_WRITEABLE`](#c.NPY_ARRAY_WRITEABLE "NPY_ARRAY_WRITEABLE") | [`NPY_ARRAY_NOTSWAPPED`](#c.NPY_ARRAY_NOTSWAPPED "NPY_ARRAY_NOTSWAPPED") NPY_ARRAY_ELEMENTSTRIDES Make sure the returned array has strides that are multiples of the element size. [PyObject](https://docs.python.org/3/c-api/structures.html#c.PyObject "(in Python v3.10)")*PyArray_FromArray([PyArrayObject](types-and-structures#c.PyArrayObject "PyArrayObject")*op, [PyArray_Descr](types-and-structures#c.PyArray_Descr "PyArray_Descr")*newtype, intrequirements) Special case of [`PyArray_FromAny`](#c.PyArray_FromAny "PyArray_FromAny") for when *op* is already an array but it needs to be of a specific *newtype* (including byte-order) or has certain *requirements*. 
[PyObject](https://docs.python.org/3/c-api/structures.html#c.PyObject "(in Python v3.10)")*PyArray_FromStructInterface([PyObject](https://docs.python.org/3/c-api/structures.html#c.PyObject "(in Python v3.10)")*op) Returns an ndarray object from a Python object that exposes the [`__array_struct__`](../arrays.interface#object.__array_struct__ "object.__array_struct__") attribute and follows the array interface protocol. If the object does not contain this attribute then a borrowed reference to [`Py_NotImplemented`](https://docs.python.org/3/c-api/object.html#c.Py_NotImplemented "(in Python v3.10)") is returned. [PyObject](https://docs.python.org/3/c-api/structures.html#c.PyObject "(in Python v3.10)")*PyArray_FromInterface([PyObject](https://docs.python.org/3/c-api/structures.html#c.PyObject "(in Python v3.10)")*op) Returns an ndarray object from a Python object that exposes the [`__array_interface__`](../arrays.interface#object.__array_interface__ "object.__array_interface__") attribute following the array interface protocol. If the object does not contain this attribute then a borrowed reference to [`Py_NotImplemented`](https://docs.python.org/3/c-api/object.html#c.Py_NotImplemented "(in Python v3.10)") is returned. [PyObject](https://docs.python.org/3/c-api/structures.html#c.PyObject "(in Python v3.10)")*PyArray_FromArrayAttr([PyObject](https://docs.python.org/3/c-api/structures.html#c.PyObject "(in Python v3.10)")*op, [PyArray_Descr](types-and-structures#c.PyArray_Descr "PyArray_Descr")*dtype, [PyObject](https://docs.python.org/3/c-api/structures.html#c.PyObject "(in Python v3.10)")*context) Return an ndarray object from a Python object that exposes the [`__array__`](../arrays.classes#numpy.class.__array__ "numpy.class.__array__") method. The [`__array__`](../arrays.classes#numpy.class.__array__ "numpy.class.__array__") method can take 0, or 1 argument `([dtype])`. `context` is unused. 
[PyObject](https://docs.python.org/3/c-api/structures.html#c.PyObject "(in Python v3.10)")*PyArray_ContiguousFromAny([PyObject](https://docs.python.org/3/c-api/structures.html#c.PyObject "(in Python v3.10)")*op, inttypenum, intmin_depth, intmax_depth) 

This function returns a (C-style) contiguous and behaved array from any nested sequence or array interface exporting object, *op*, of (non-flexible) type given by the enumerated *typenum*, of minimum depth *min_depth*, and of maximum depth *max_depth*. Equivalent to a call to [`PyArray_FromAny`](#c.PyArray_FromAny "PyArray_FromAny") with requirements set to [`NPY_ARRAY_DEFAULT`](#c.NPY_ARRAY_DEFAULT "NPY_ARRAY_DEFAULT") and the type_num member of the type argument set to *typenum*. 

[PyObject](https://docs.python.org/3/c-api/structures.html#c.PyObject "(in Python v3.10)")*PyArray_ContiguousFromObject([PyObject](https://docs.python.org/3/c-api/structures.html#c.PyObject "(in Python v3.10)")*op, inttypenum, intmin_depth, intmax_depth) 

This function returns a well-behaved C-style contiguous array from any nested sequence or array-interface exporting object. The minimum number of dimensions the array can have is given by `min_depth` while the maximum is `max_depth`. This is equivalent to a call to [`PyArray_FromAny`](#c.PyArray_FromAny "PyArray_FromAny") with requirements [`NPY_ARRAY_DEFAULT`](#c.NPY_ARRAY_DEFAULT "NPY_ARRAY_DEFAULT") and [`NPY_ARRAY_ENSUREARRAY`](#c.NPY_ARRAY_ENSUREARRAY "NPY_ARRAY_ENSUREARRAY"). 

[PyObject](https://docs.python.org/3/c-api/structures.html#c.PyObject "(in Python v3.10)")*PyArray_FromObject([PyObject](https://docs.python.org/3/c-api/structures.html#c.PyObject "(in Python v3.10)")*op, inttypenum, intmin_depth, intmax_depth) 

Return an aligned and in native-byteorder array from any nested sequence or array-interface exporting object, op, of a type given by the enumerated typenum. The minimum number of dimensions the array can have is given by min_depth while the maximum is max_depth. 
This is equivalent to a call to [`PyArray_FromAny`](#c.PyArray_FromAny "PyArray_FromAny") with requirements set to BEHAVED. [PyObject](https://docs.python.org/3/c-api/structures.html#c.PyObject "(in Python v3.10)")*PyArray_EnsureArray([PyObject](https://docs.python.org/3/c-api/structures.html#c.PyObject "(in Python v3.10)")*op) This function **steals a reference** to `op` and makes sure that `op` is a base-class ndarray. It special cases array scalars, but otherwise calls [`PyArray_FromAny`](#c.PyArray_FromAny "PyArray_FromAny") ( `op`, NULL, 0, 0, [`NPY_ARRAY_ENSUREARRAY`](#c.NPY_ARRAY_ENSUREARRAY "NPY_ARRAY_ENSUREARRAY"), NULL). [PyObject](https://docs.python.org/3/c-api/structures.html#c.PyObject "(in Python v3.10)")*PyArray_FromString(char*string, [npy_intp](dtype#c.npy_intp "npy_intp")slen, [PyArray_Descr](types-and-structures#c.PyArray_Descr "PyArray_Descr")*dtype, [npy_intp](dtype#c.npy_intp "npy_intp")num, char*sep) Construct a one-dimensional ndarray of a single type from a binary or (ASCII) text `string` of length `slen`. The data-type of the array to-be-created is given by `dtype`. If num is -1, then **copy** the entire string and return an appropriately sized array, otherwise, `num` is the number of items to **copy** from the string. If `sep` is NULL (or “”), then interpret the string as bytes of binary data, otherwise convert the sub-strings separated by `sep` to items of data-type `dtype`. Some data-types may not be readable in text mode and an error will be raised if that occurs. All errors return NULL. [PyObject](https://docs.python.org/3/c-api/structures.html#c.PyObject "(in Python v3.10)")*PyArray_FromFile(FILE*fp, [PyArray_Descr](types-and-structures#c.PyArray_Descr "PyArray_Descr")*dtype, [npy_intp](dtype#c.npy_intp "npy_intp")num, char*sep) Construct a one-dimensional ndarray of a single type from a binary or text file. The open file pointer is `fp`, the data-type of the array to be created is given by `dtype`. 
This must match the data in the file. If `num` is -1, then read until the end of the file and return an appropriately sized array, otherwise, `num` is the number of items to read. If `sep` is NULL (or “”), then read from the file in binary mode, otherwise read from the file in text mode with `sep` providing the item separator. Some array types cannot be read in text mode in which case an error is raised. [PyObject](https://docs.python.org/3/c-api/structures.html#c.PyObject "(in Python v3.10)")*PyArray_FromBuffer([PyObject](https://docs.python.org/3/c-api/structures.html#c.PyObject "(in Python v3.10)")*buf, [PyArray_Descr](types-and-structures#c.PyArray_Descr "PyArray_Descr")*dtype, [npy_intp](dtype#c.npy_intp "npy_intp")count, [npy_intp](dtype#c.npy_intp "npy_intp")offset) Construct a one-dimensional ndarray of a single type from an object, `buf`, that exports the (single-segment) buffer protocol (or has an attribute __buffer__ that returns an object that exports the buffer protocol). A writeable buffer will be tried first, followed by a read-only buffer. The [`NPY_ARRAY_WRITEABLE`](#c.NPY_ARRAY_WRITEABLE "NPY_ARRAY_WRITEABLE") flag of the returned array will reflect which one was successful. The data is assumed to start at `offset` bytes from the start of the memory location for the object. The type of the data in the buffer will be interpreted depending on the data-type descriptor, `dtype`. If `count` is negative then it will be determined from the size of the buffer and the requested itemsize, otherwise, `count` represents how many elements should be converted from the buffer. intPyArray_CopyInto([PyArrayObject](types-and-structures#c.PyArrayObject "PyArrayObject")*dest, [PyArrayObject](types-and-structures#c.PyArrayObject "PyArrayObject")*src) Copy from the source array, `src`, into the destination array, `dest`, performing a data-type conversion if necessary. If an error occurs return -1 (otherwise 0).
The shape of `src` must be broadcastable to the shape of `dest`. The data areas of dest and src must not overlap. intPyArray_CopyObject([PyArrayObject](types-and-structures#c.PyArrayObject "PyArrayObject")*dest, [PyObject](https://docs.python.org/3/c-api/structures.html#c.PyObject "(in Python v3.10)")*src) Assign an object `src` to a NumPy array `dest` according to array-coercion rules. This is basically identical to [`PyArray_FromAny`](#c.PyArray_FromAny "PyArray_FromAny"), but assigns directly to the output array. Returns 0 on success and -1 on failures. intPyArray_MoveInto([PyArrayObject](types-and-structures#c.PyArrayObject "PyArrayObject")*dest, [PyArrayObject](types-and-structures#c.PyArrayObject "PyArrayObject")*src) Move data from the source array, `src`, into the destination array, `dest`, performing a data-type conversion if necessary. If an error occurs return -1 (otherwise 0). The shape of `src` must be broadcastable to the shape of `dest`. The data areas of dest and src may overlap. [PyArrayObject](types-and-structures#c.PyArrayObject "PyArrayObject")*PyArray_GETCONTIGUOUS([PyObject](https://docs.python.org/3/c-api/structures.html#c.PyObject "(in Python v3.10)")*op) If `op` is already (C-style) contiguous and well-behaved then just return a reference, otherwise return a (contiguous and well-behaved) copy of the array. The parameter op must be a (sub-class of an) ndarray and no checking for that is done. [PyObject](https://docs.python.org/3/c-api/structures.html#c.PyObject "(in Python v3.10)")*PyArray_FROM_O([PyObject](https://docs.python.org/3/c-api/structures.html#c.PyObject "(in Python v3.10)")*obj) Convert `obj` to an ndarray. The argument can be any nested sequence or object that exports the array interface. This is a macro form of [`PyArray_FromAny`](#c.PyArray_FromAny "PyArray_FromAny") using `NULL`, 0, 0, 0 for the other arguments. Your code must be able to handle any data-type descriptor and any combination of data-flags to use this macro. 
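The buffer and copy semantics above are easy to check from Python: `np.frombuffer` mirrors `PyArray_FromBuffer` (including the read-only result and the `count`/`offset` handling), and `np.copyto` mirrors the broadcasting, data-type-converting copy of `PyArray_CopyInto`. A sketch:

```python
import numpy as np

buf = bytes([1, 0, 2, 0, 3, 0])                  # three little-endian int16 values
a = np.frombuffer(buf, dtype='<i2')              # count=-1: size inferred from the buffer
b = np.frombuffer(buf, dtype='<i2', count=2, offset=2)  # skip 2 bytes, take 2 items
# bytes objects export a read-only buffer, so the result is not writeable
assert not a.flags.writeable

dest = np.zeros((2, 3))                          # float64 destination
np.copyto(dest, np.array([1, 2, 3]))             # src broadcast to dest's shape, dtype converted
```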
[PyObject](https://docs.python.org/3/c-api/structures.html#c.PyObject "(in Python v3.10)")*PyArray_FROM_OF([PyObject](https://docs.python.org/3/c-api/structures.html#c.PyObject "(in Python v3.10)")*obj, intrequirements) Similar to [`PyArray_FROM_O`](#c.PyArray_FROM_O "PyArray_FROM_O") except it can take an argument of *requirements* indicating properties the resulting array must have. Available requirements that can be enforced are [`NPY_ARRAY_C_CONTIGUOUS`](#c.NPY_ARRAY_C_CONTIGUOUS "NPY_ARRAY_C_CONTIGUOUS"), [`NPY_ARRAY_F_CONTIGUOUS`](#c.NPY_ARRAY_F_CONTIGUOUS "NPY_ARRAY_F_CONTIGUOUS"), [`NPY_ARRAY_ALIGNED`](#c.NPY_ARRAY_ALIGNED "NPY_ARRAY_ALIGNED"), [`NPY_ARRAY_WRITEABLE`](#c.NPY_ARRAY_WRITEABLE "NPY_ARRAY_WRITEABLE"), [`NPY_ARRAY_NOTSWAPPED`](#c.NPY_ARRAY_NOTSWAPPED "NPY_ARRAY_NOTSWAPPED"), [`NPY_ARRAY_ENSURECOPY`](#c.NPY_ARRAY_ENSURECOPY "NPY_ARRAY_ENSURECOPY"), [`NPY_ARRAY_WRITEBACKIFCOPY`](#c.NPY_ARRAY_WRITEBACKIFCOPY "NPY_ARRAY_WRITEBACKIFCOPY"), [`NPY_ARRAY_FORCECAST`](#c.NPY_ARRAY_FORCECAST "NPY_ARRAY_FORCECAST"), and [`NPY_ARRAY_ENSUREARRAY`](#c.NPY_ARRAY_ENSUREARRAY "NPY_ARRAY_ENSUREARRAY"). Standard combinations of these flags can also be used. [PyObject](https://docs.python.org/3/c-api/structures.html#c.PyObject "(in Python v3.10)")*PyArray_FROM_OT([PyObject](https://docs.python.org/3/c-api/structures.html#c.PyObject "(in Python v3.10)")*obj, inttypenum) Similar to [`PyArray_FROM_O`](#c.PyArray_FROM_O "PyArray_FROM_O") except it can take an argument of *typenum* specifying the type-number of the returned array. [PyObject](https://docs.python.org/3/c-api/structures.html#c.PyObject "(in Python v3.10)")*PyArray_FROM_OTF([PyObject](https://docs.python.org/3/c-api/structures.html#c.PyObject "(in Python v3.10)")*obj, inttypenum, intrequirements) Combination of [`PyArray_FROM_OF`](#c.PyArray_FROM_OF "PyArray_FROM_OF") and [`PyArray_FROM_OT`](#c.PyArray_FROM_OT "PyArray_FROM_OT") allowing both a *typenum* and a *flags* argument to be provided.
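`np.require` is the Python-level counterpart of these macros: like PyArray_FROM_OTF it takes both a target dtype and a list of requirement flags. A sketch:

```python
import numpy as np

f = np.asfortranarray(np.arange(6, dtype=np.int32).reshape(2, 3))  # F-ordered int32 input
# Request a C-contiguous, aligned, writeable float64 array (copying only if needed)
c = np.require(f, dtype=np.float64, requirements=['C', 'ALIGNED', 'WRITEABLE'])
```

Because the input here is neither C-contiguous nor of the requested dtype, `np.require` returns a converted copy; an input already satisfying all requirements would be returned unchanged.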
[PyObject](https://docs.python.org/3/c-api/structures.html#c.PyObject "(in Python v3.10)")*PyArray_FROMANY([PyObject](https://docs.python.org/3/c-api/structures.html#c.PyObject "(in Python v3.10)")*obj, inttypenum, intmin, intmax, intrequirements) Similar to [`PyArray_FromAny`](#c.PyArray_FromAny "PyArray_FromAny") except the data-type is specified using a typenumber. [`PyArray_DescrFromType`](#c.PyArray_DescrFromType "PyArray_DescrFromType") (*typenum*) is passed directly to [`PyArray_FromAny`](#c.PyArray_FromAny "PyArray_FromAny"). This macro also adds [`NPY_ARRAY_DEFAULT`](#c.NPY_ARRAY_DEFAULT "NPY_ARRAY_DEFAULT") to requirements if [`NPY_ARRAY_ENSURECOPY`](#c.NPY_ARRAY_ENSURECOPY "NPY_ARRAY_ENSURECOPY") is passed in as requirements. [PyObject](https://docs.python.org/3/c-api/structures.html#c.PyObject "(in Python v3.10)")*PyArray_CheckAxis([PyObject](https://docs.python.org/3/c-api/structures.html#c.PyObject "(in Python v3.10)")*obj, int*axis, intrequirements) Encapsulate the functionality of functions and methods that take the axis= keyword and work properly with None as the axis argument. The input array is `obj`, while `*axis` is a converted integer (so that >=MAXDIMS is the None value), and `requirements` gives the needed properties of `obj`. The output is a converted version of the input so that requirements are met and if needed a flattening has occurred. On output negative values of `*axis` are converted and the new value is checked to ensure consistency with the shape of `obj`. Dealing with types ------------------ ### General check of Python Type intPyArray_Check([PyObject](https://docs.python.org/3/c-api/structures.html#c.PyObject "(in Python v3.10)")*op) Evaluates true if *op* is a Python object whose type is a sub-type of [`PyArray_Type`](types-and-structures#c.PyArray_Type "PyArray_Type"). 
intPyArray_CheckExact([PyObject](https://docs.python.org/3/c-api/structures.html#c.PyObject "(in Python v3.10)")*op) Evaluates true if *op* is a Python object with type [`PyArray_Type`](types-and-structures#c.PyArray_Type "PyArray_Type"). intPyArray_HasArrayInterface([PyObject](https://docs.python.org/3/c-api/structures.html#c.PyObject "(in Python v3.10)")*op, [PyObject](https://docs.python.org/3/c-api/structures.html#c.PyObject "(in Python v3.10)")*out) If `op` implements any part of the array interface, then `out` will contain a new reference to the newly created ndarray using the interface or `out` will contain `NULL` if an error during conversion occurs. Otherwise, out will contain a borrowed reference to [`Py_NotImplemented`](https://docs.python.org/3/c-api/object.html#c.Py_NotImplemented "(in Python v3.10)") and no error condition is set. intPyArray_HasArrayInterfaceType([PyObject](https://docs.python.org/3/c-api/structures.html#c.PyObject "(in Python v3.10)")*op, [PyArray_Descr](types-and-structures#c.PyArray_Descr "PyArray_Descr")*dtype, [PyObject](https://docs.python.org/3/c-api/structures.html#c.PyObject "(in Python v3.10)")*context, [PyObject](https://docs.python.org/3/c-api/structures.html#c.PyObject "(in Python v3.10)")*out) If `op` implements any part of the array interface, then `out` will contain a new reference to the newly created ndarray using the interface or `out` will contain `NULL` if an error during conversion occurs. Otherwise, out will contain a borrowed reference to Py_NotImplemented and no error condition is set. This version allows setting of the dtype in the part of the array interface that looks for the [`__array__`](../arrays.classes#numpy.class.__array__ "numpy.class.__array__") attribute. `context` is unused. 
intPyArray_IsZeroDim([PyObject](https://docs.python.org/3/c-api/structures.html#c.PyObject "(in Python v3.10)")*op) Evaluates true if *op* is an instance of (a subclass of) [`PyArray_Type`](types-and-structures#c.PyArray_Type "PyArray_Type") and has 0 dimensions. PyArray_IsScalar(op, cls) Evaluates true if *op* is an instance of `Py{cls}ArrType_Type`. intPyArray_CheckScalar([PyObject](https://docs.python.org/3/c-api/structures.html#c.PyObject "(in Python v3.10)")*op) Evaluates true if *op* is either an array scalar (an instance of a sub-type of `PyGenericArr_Type` ), or an instance of (a sub-class of) [`PyArray_Type`](types-and-structures#c.PyArray_Type "PyArray_Type") whose dimensionality is 0. intPyArray_IsPythonNumber([PyObject](https://docs.python.org/3/c-api/structures.html#c.PyObject "(in Python v3.10)")*op) Evaluates true if *op* is an instance of a builtin numeric type (int, float, complex, long, bool) intPyArray_IsPythonScalar([PyObject](https://docs.python.org/3/c-api/structures.html#c.PyObject "(in Python v3.10)")*op) Evaluates true if *op* is a builtin Python scalar object (int, float, complex, bytes, str, long, bool). intPyArray_IsAnyScalar([PyObject](https://docs.python.org/3/c-api/structures.html#c.PyObject "(in Python v3.10)")*op) Evaluates true if *op* is either a Python scalar object (see [`PyArray_IsPythonScalar`](#c.PyArray_IsPythonScalar "PyArray_IsPythonScalar")) or an array scalar (an instance of a sub- type of `PyGenericArr_Type` ). intPyArray_CheckAnyScalar([PyObject](https://docs.python.org/3/c-api/structures.html#c.PyObject "(in Python v3.10)")*op) Evaluates true if *op* is a Python scalar object (see [`PyArray_IsPythonScalar`](#c.PyArray_IsPythonScalar "PyArray_IsPythonScalar")), an array scalar (an instance of a sub-type of `PyGenericArr_Type`) or an instance of a sub-type of [`PyArray_Type`](types-and-structures#c.PyArray_Type "PyArray_Type") whose dimensionality is 0. 
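The distinctions these checks draw, between Python scalars, array scalars, and zero-dimensional arrays, are visible from Python as well; a sketch:

```python
import numpy as np

py_scalar = 3.5                  # builtin Python scalar (PyArray_IsPythonScalar)
arr_scalar = np.float64(3.5)     # array scalar, a subtype of np.generic
zero_d = np.array(3.5)           # 0-d ndarray (PyArray_IsZeroDim)

assert isinstance(arr_scalar, np.generic)       # array scalars are not ndarrays
assert not isinstance(arr_scalar, np.ndarray)
assert isinstance(zero_d, np.ndarray) and zero_d.ndim == 0
assert np.isscalar(py_scalar) and not np.isscalar(zero_d)
```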
### Data-type checking For the typenum macros, the argument is an integer representing an enumerated array data type. For the array type checking macros the argument must be a [PyObject](https://docs.python.org/3/c-api/structures.html#c.PyObject "(in Python v3.10)")* that can be directly interpreted as a [PyArrayObject](types-and-structures#c.PyArrayObject "PyArrayObject")*. intPyTypeNum_ISUNSIGNED(intnum) intPyDataType_ISUNSIGNED([PyArray_Descr](types-and-structures#c.PyArray_Descr "PyArray_Descr")*descr) intPyArray_ISUNSIGNED([PyArrayObject](types-and-structures#c.PyArrayObject "PyArrayObject")*obj) Type represents an unsigned integer. intPyTypeNum_ISSIGNED(intnum) intPyDataType_ISSIGNED([PyArray_Descr](types-and-structures#c.PyArray_Descr "PyArray_Descr")*descr) intPyArray_ISSIGNED([PyArrayObject](types-and-structures#c.PyArrayObject "PyArrayObject")*obj) Type represents a signed integer. intPyTypeNum_ISINTEGER(intnum) intPyDataType_ISINTEGER([PyArray_Descr](types-and-structures#c.PyArray_Descr "PyArray_Descr")*descr) intPyArray_ISINTEGER([PyArrayObject](types-and-structures#c.PyArrayObject "PyArrayObject")*obj) Type represents any integer. intPyTypeNum_ISFLOAT(intnum) intPyDataType_ISFLOAT([PyArray_Descr](types-and-structures#c.PyArray_Descr "PyArray_Descr")*descr) intPyArray_ISFLOAT([PyArrayObject](types-and-structures#c.PyArrayObject "PyArrayObject")*obj) Type represents any floating point number. intPyTypeNum_ISCOMPLEX(intnum) intPyDataType_ISCOMPLEX([PyArray_Descr](types-and-structures#c.PyArray_Descr "PyArray_Descr")*descr) intPyArray_ISCOMPLEX([PyArrayObject](types-and-structures#c.PyArrayObject "PyArrayObject")*obj) Type represents any complex floating point number. 
intPyTypeNum_ISNUMBER(intnum) intPyDataType_ISNUMBER([PyArray_Descr](types-and-structures#c.PyArray_Descr "PyArray_Descr")*descr) intPyArray_ISNUMBER([PyArrayObject](types-and-structures#c.PyArrayObject "PyArrayObject")*obj) Type represents any integer, floating point, or complex floating point number. intPyTypeNum_ISSTRING(intnum) intPyDataType_ISSTRING([PyArray_Descr](types-and-structures#c.PyArray_Descr "PyArray_Descr")*descr) intPyArray_ISSTRING([PyArrayObject](types-and-structures#c.PyArrayObject "PyArrayObject")*obj) Type represents a string data type. intPyTypeNum_ISPYTHON(intnum) intPyDataType_ISPYTHON([PyArray_Descr](types-and-structures#c.PyArray_Descr "PyArray_Descr")*descr) intPyArray_ISPYTHON([PyArrayObject](types-and-structures#c.PyArrayObject "PyArrayObject")*obj) Type represents an enumerated type corresponding to one of the standard Python scalar types (bool, int, float, or complex). intPyTypeNum_ISFLEXIBLE(intnum) intPyDataType_ISFLEXIBLE([PyArray_Descr](types-and-structures#c.PyArray_Descr "PyArray_Descr")*descr) intPyArray_ISFLEXIBLE([PyArrayObject](types-and-structures#c.PyArrayObject "PyArrayObject")*obj) Type represents one of the flexible array types ( [`NPY_STRING`](dtype#c.NPY_STRING "NPY_STRING"), [`NPY_UNICODE`](dtype#c.NPY_UNICODE "NPY_UNICODE"), or [`NPY_VOID`](dtype#c.NPY_VOID "NPY_VOID") ). intPyDataType_ISUNSIZED([PyArray_Descr](types-and-structures#c.PyArray_Descr "PyArray_Descr")*descr) Type has no size information attached, and can be resized. Should only be called on flexible dtypes. Types that are attached to an array will always be sized, hence there is no array form of this macro. Changed in version 1.18: For structured datatypes with no fields, this function now returns False.
intPyTypeNum_ISUSERDEF(intnum) intPyDataType_ISUSERDEF([PyArray_Descr](types-and-structures#c.PyArray_Descr "PyArray_Descr")*descr) intPyArray_ISUSERDEF([PyArrayObject](types-and-structures#c.PyArrayObject "PyArrayObject")*obj) Type represents a user-defined type. intPyTypeNum_ISEXTENDED(intnum) intPyDataType_ISEXTENDED([PyArray_Descr](types-and-structures#c.PyArray_Descr "PyArray_Descr")*descr) intPyArray_ISEXTENDED([PyArrayObject](types-and-structures#c.PyArrayObject "PyArrayObject")*obj) Type is either flexible or user-defined. intPyTypeNum_ISOBJECT(intnum) intPyDataType_ISOBJECT([PyArray_Descr](types-and-structures#c.PyArray_Descr "PyArray_Descr")*descr) intPyArray_ISOBJECT([PyArrayObject](types-and-structures#c.PyArrayObject "PyArrayObject")*obj) Type represents object data type. intPyTypeNum_ISBOOL(intnum) intPyDataType_ISBOOL([PyArray_Descr](types-and-structures#c.PyArray_Descr "PyArray_Descr")*descr) intPyArray_ISBOOL([PyArrayObject](types-and-structures#c.PyArrayObject "PyArrayObject")*obj) Type represents Boolean data type. intPyDataType_HASFIELDS([PyArray_Descr](types-and-structures#c.PyArray_Descr "PyArray_Descr")*descr) intPyArray_HASFIELDS([PyArrayObject](types-and-structures#c.PyArrayObject "PyArrayObject")*obj) Type has fields associated with it. intPyArray_ISNOTSWAPPED([PyArrayObject](types-and-structures#c.PyArrayObject "PyArrayObject")*m) Evaluates true if the data area of the ndarray *m* is in machine byte-order according to the array’s data-type descriptor. intPyArray_ISBYTESWAPPED([PyArrayObject](types-and-structures#c.PyArrayObject "PyArrayObject")*m) Evaluates true if the data area of the ndarray *m* is **not** in machine byte-order according to the array’s data-type descriptor. 
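From Python, the same classifications are exposed through `np.dtype` attributes: `kind` covers the ISINTEGER/ISFLOAT/ISCOMPLEX family, `itemsize == 0` marks an unsized flexible dtype, and `isnative` corresponds to PyArray_ISNOTSWAPPED. A sketch:

```python
import numpy as np

dt = np.dtype(np.int32)
assert dt.kind in 'iu'                       # signed or unsigned integer
assert np.dtype(np.complex128).kind == 'c'   # complex floating point
assert np.dtype('U').itemsize == 0           # unsized flexible dtype (cf. ISUNSIZED)
assert np.dtype('U5').itemsize != 0          # sized once a length is given

swapped = dt.newbyteorder()                  # same type, opposite byte order
assert dt.isnative and not swapped.isnative  # cf. PyArray_ISNOTSWAPPED / ISBYTESWAPPED
```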
[npy_bool](dtype#c.npy_bool "npy_bool")PyArray_EquivTypes([PyArray_Descr](types-and-structures#c.PyArray_Descr "PyArray_Descr")*type1, [PyArray_Descr](types-and-structures#c.PyArray_Descr "PyArray_Descr")*type2) Return [`NPY_TRUE`](#c.NPY_TRUE "NPY_TRUE") if *type1* and *type2* actually represent equivalent types for this platform (the fortran member of each type is ignored). For example, on 32-bit platforms, [`NPY_LONG`](dtype#c.NPY_LONG "NPY_LONG") and [`NPY_INT`](dtype#c.NPY_INT "NPY_INT") are equivalent. Otherwise return [`NPY_FALSE`](#c.NPY_FALSE "NPY_FALSE"). [npy_bool](dtype#c.npy_bool "npy_bool")PyArray_EquivArrTypes([PyArrayObject](types-and-structures#c.PyArrayObject "PyArrayObject")*a1, [PyArrayObject](types-and-structures#c.PyArrayObject "PyArrayObject")*a2) Return [`NPY_TRUE`](#c.NPY_TRUE "NPY_TRUE") if *a1* and *a2* are arrays with equivalent types for this platform. [npy_bool](dtype#c.npy_bool "npy_bool")PyArray_EquivTypenums(inttypenum1, inttypenum2) Special case of [`PyArray_EquivTypes`](#c.PyArray_EquivTypes "PyArray_EquivTypes") that does not accept flexible data types but may be easier to call. intPyArray_EquivByteorders(intb1, intb2) True if byteorder characters *b1* and *b2* ( [`NPY_LITTLE`](#c.NPY_LITTLE "NPY_LITTLE"), [`NPY_BIG`](#c.NPY_BIG "NPY_BIG"), [`NPY_NATIVE`](#c.NPY_NATIVE "NPY_NATIVE"), [`NPY_IGNORE`](#c.NPY_IGNORE "NPY_IGNORE") ) are either equal or equivalent as to their specification of a native byte order. Thus, on a little-endian machine [`NPY_LITTLE`](#c.NPY_LITTLE "NPY_LITTLE") and [`NPY_NATIVE`](#c.NPY_NATIVE "NPY_NATIVE") are equivalent, whereas they are not equivalent on a big-endian machine.

### Converting data types

[PyObject](https://docs.python.org/3/c-api/structures.html#c.PyObject "(in Python v3.10)")*PyArray_Cast([PyArrayObject](types-and-structures#c.PyArrayObject "PyArrayObject")*arr, inttypenum) Mainly for backwards compatibility to the Numeric C-API and for simple casts to non-flexible types. Return a new array object with the elements of *arr* cast to the data-type *typenum* which must be one of the enumerated types and not a flexible type. [PyObject](https://docs.python.org/3/c-api/structures.html#c.PyObject "(in Python v3.10)")*PyArray_CastToType([PyArrayObject](types-and-structures#c.PyArrayObject "PyArrayObject")*arr, [PyArray_Descr](types-and-structures#c.PyArray_Descr "PyArray_Descr")*type, intfortran) Return a new array of the *type* specified, casting the elements of *arr* as appropriate. The fortran argument specifies the ordering of the output array. intPyArray_CastTo([PyArrayObject](types-and-structures#c.PyArrayObject "PyArrayObject")*out, [PyArrayObject](types-and-structures#c.PyArrayObject "PyArrayObject")*in) As of 1.6, this function simply calls [`PyArray_CopyInto`](#c.PyArray_CopyInto "PyArray_CopyInto"), which handles the casting. Cast the elements of the array *in* into the array *out*.
The output array should be writeable, have an integer-multiple of the number of elements in the input array (more than one copy can be placed in out), and have a data type that is one of the builtin types. Returns 0 on success and -1 if an error occurs. PyArray_VectorUnaryFunc*PyArray_GetCastFunc([PyArray_Descr](types-and-structures#c.PyArray_Descr "PyArray_Descr")*from, inttotype) Return the low-level casting function to cast from the given descriptor to the builtin type number. If no casting function exists return `NULL` and set an error. Using this function instead of direct access to *from* ->f->cast will allow support of any user-defined casting functions added to a descriptors casting dictionary. intPyArray_CanCastSafely(intfromtype, inttotype) Returns non-zero if an array of data type *fromtype* can be cast to an array of data type *totype* without losing information. An exception is that 64-bit integers are allowed to be cast to 64-bit floating point values even though this can lose precision on large integers so as not to proliferate the use of long doubles without explicit requests. Flexible array types are not checked according to their lengths with this function. intPyArray_CanCastTo([PyArray_Descr](types-and-structures#c.PyArray_Descr "PyArray_Descr")*fromtype, [PyArray_Descr](types-and-structures#c.PyArray_Descr "PyArray_Descr")*totype) [`PyArray_CanCastTypeTo`](#c.PyArray_CanCastTypeTo "PyArray_CanCastTypeTo") supersedes this function in NumPy 1.6 and later. Equivalent to PyArray_CanCastTypeTo(fromtype, totype, NPY_SAFE_CASTING). intPyArray_CanCastTypeTo([PyArray_Descr](types-and-structures#c.PyArray_Descr "PyArray_Descr")*fromtype, [PyArray_Descr](types-and-structures#c.PyArray_Descr "PyArray_Descr")*totype, [NPY_CASTING](#c.NPY_CASTING "NPY_CASTING")casting) New in version 1.6. 
Returns non-zero if an array of data type *fromtype* (which can include flexible types) can be cast safely to an array of data type *totype* (which can include flexible types) according to the casting rule *casting*. For simple types with [`NPY_SAFE_CASTING`](#c.NPY_CASTING.NPY_SAFE_CASTING "NPY_SAFE_CASTING"), this is basically a wrapper around [`PyArray_CanCastSafely`](#c.PyArray_CanCastSafely "PyArray_CanCastSafely"), but for flexible types such as strings or unicode, it produces results taking into account their sizes. Integer and float types can only be cast to a string or unicode type using [`NPY_SAFE_CASTING`](#c.NPY_CASTING.NPY_SAFE_CASTING "NPY_SAFE_CASTING") if the string or unicode type is big enough to hold the max value of the integer/float type being cast from. intPyArray_CanCastArrayTo([PyArrayObject](types-and-structures#c.PyArrayObject "PyArrayObject")*arr, [PyArray_Descr](types-and-structures#c.PyArray_Descr "PyArray_Descr")*totype, [NPY_CASTING](#c.NPY_CASTING "NPY_CASTING")casting) New in version 1.6. Returns non-zero if *arr* can be cast to *totype* according to the casting rule given in *casting*. If *arr* is an array scalar, its value is taken into account, and non-zero is also returned when the value will not overflow or be truncated to an integer when converting to a smaller type. This is almost the same as the result of PyArray_CanCastTypeTo(PyArray_MinScalarType(arr), totype, casting), but it also handles a special case arising because the set of uint values is not a subset of the int values for types with the same number of bits. [PyArray_Descr](types-and-structures#c.PyArray_Descr "PyArray_Descr")*PyArray_MinScalarType([PyArrayObject](types-and-structures#c.PyArrayObject "PyArrayObject")*arr) New in version 1.6. 
If *arr* is an array, returns its data type descriptor, but if *arr* is an array scalar (has 0 dimensions), it finds the data type of smallest size to which the value may be converted without overflow or truncation to an integer. This function will not demote complex to float or anything to boolean, but will demote a signed integer to an unsigned integer when the scalar value is positive. [PyArray_Descr](types-and-structures#c.PyArray_Descr "PyArray_Descr")*PyArray_PromoteTypes([PyArray_Descr](types-and-structures#c.PyArray_Descr "PyArray_Descr")*type1, [PyArray_Descr](types-and-structures#c.PyArray_Descr "PyArray_Descr")*type2) New in version 1.6. Finds the data type of smallest size and kind to which *type1* and *type2* may be safely converted. This function is symmetric and associative. A string or unicode result will be the proper size for storing the max value of the input types converted to a string or unicode. [PyArray_Descr](types-and-structures#c.PyArray_Descr "PyArray_Descr")*PyArray_ResultType([npy_intp](dtype#c.npy_intp "npy_intp")narrs, [PyArrayObject](types-and-structures#c.PyArrayObject "PyArrayObject")**arrs, [npy_intp](dtype#c.npy_intp "npy_intp")ndtypes, [PyArray_Descr](types-and-structures#c.PyArray_Descr "PyArray_Descr")**dtypes) New in version 1.6. This applies type promotion to all the inputs, using the NumPy rules for combining scalars and arrays, to determine the output type of a set of operands. This is the same result type that ufuncs produce. The specific algorithm used is as follows. Categories are determined by first checking which of boolean, integer (int/uint), or floating point (float/complex) the maximum kind of all the arrays and the scalars are. If there are only scalars or the maximum category of the scalars is higher than the maximum category of the arrays, the data types are combined with [`PyArray_PromoteTypes`](#c.PyArray_PromoteTypes "PyArray_PromoteTypes") to produce the return value. 
Otherwise, PyArray_MinScalarType is called on each array, and the resulting data types are all combined with [`PyArray_PromoteTypes`](#c.PyArray_PromoteTypes "PyArray_PromoteTypes") to produce the return value. The set of int values is not a subset of the uint values for types with the same number of bits, something not reflected in [`PyArray_MinScalarType`](#c.PyArray_MinScalarType "PyArray_MinScalarType"), but handled as a special case in PyArray_ResultType. intPyArray_ObjectType([PyObject](https://docs.python.org/3/c-api/structures.html#c.PyObject "(in Python v3.10)")*op, intmintype) This function is superseded by [`PyArray_MinScalarType`](#c.PyArray_MinScalarType "PyArray_MinScalarType") and/or [`PyArray_ResultType`](#c.PyArray_ResultType "PyArray_ResultType"). This function is useful for determining a common type that two or more arrays can be converted to. It only works for non-flexible array types as no itemsize information is passed. The *mintype* argument represents the minimum type acceptable, and *op* represents the object that will be converted to an array. The return value is the enumerated typenumber that represents the data-type that *op* should have. voidPyArray_ArrayType([PyObject](https://docs.python.org/3/c-api/structures.html#c.PyObject "(in Python v3.10)")*op, [PyArray_Descr](types-and-structures#c.PyArray_Descr "PyArray_Descr")*mintype, [PyArray_Descr](types-and-structures#c.PyArray_Descr "PyArray_Descr")*outtype) This function is superseded by [`PyArray_ResultType`](#c.PyArray_ResultType "PyArray_ResultType"). This function works similarly to [`PyArray_ObjectType`](#c.PyArray_ObjectType "PyArray_ObjectType") except it handles flexible arrays. The *mintype* argument can have an itemsize member and the *outtype* argument will have an itemsize member at least as big but perhaps bigger depending on the object *op*. [PyArrayObject](types-and-structures#c.PyArrayObject "PyArrayObject")**PyArray_ConvertToCommonType([PyObject](https://docs.python.org/3/c-api/structures.html#c.PyObject "(in Python v3.10)")*op, int*n) The functionality this provides is largely superseded by iterator [`NpyIter`](iterator#c.NpyIter "NpyIter") introduced in 1.6, with flag [`NPY_ITER_COMMON_DTYPE`](iterator#c.NPY_ITER_COMMON_DTYPE "NPY_ITER_COMMON_DTYPE") or with the same dtype parameter for all operands. Convert a sequence of Python objects contained in *op* to an array of ndarrays each having the same data type. The type is selected in the same way as `PyArray_ResultType`. The length of the sequence is returned in *n*, and an *n* -length array of [`PyArrayObject`](types-and-structures#c.PyArrayObject "PyArrayObject") pointers is the return value (or `NULL` if an error occurs). The returned array must be freed by the caller of this routine (using [`PyDataMem_FREE`](#c.PyDataMem_FREE "PyDataMem_FREE") ) and all the array objects in it `DECREF`'d or a memory-leak will occur. Changed in version 1.18.0: A mix of scalars and zero-dimensional arrays now produces a type capable of holding the scalar value. Previously priority was given to the dtype of the arrays. The example template-code below shows a typical usage: ``` mps = PyArray_ConvertToCommonType(obj, &n); if (mps==NULL) return NULL; {code} <before return> for (i=0; i<n; i++) Py_DECREF(mps[i]); PyDataMem_FREE(mps); {return} ``` char*PyArray_Zero([PyArrayObject](types-and-structures#c.PyArrayObject "PyArrayObject")*arr) A pointer to newly created memory of size *arr* ->itemsize that holds the representation of 0 for that type.
The returned pointer, *ret*, **must be freed** using [`PyDataMem_FREE`](#c.PyDataMem_FREE "PyDataMem_FREE") (ret) when it is not needed anymore. char*PyArray_One([PyArrayObject](types-and-structures#c.PyArrayObject "PyArrayObject")*arr) A pointer to newly created memory of size *arr* ->itemsize that holds the representation of 1 for that type. The returned pointer, *ret*, **must be freed** using [`PyDataMem_FREE`](#c.PyDataMem_FREE "PyDataMem_FREE") (ret) when it is not needed anymore. intPyArray_ValidType(inttypenum) Returns [`NPY_TRUE`](#c.NPY_TRUE "NPY_TRUE") if *typenum* represents a valid type-number (builtin or user-defined or character code). Otherwise, this function returns [`NPY_FALSE`](#c.NPY_FALSE "NPY_FALSE"). ### User-defined data types voidPyArray_InitArrFuncs([PyArray_ArrFuncs](types-and-structures#c.PyArray_ArrFuncs "PyArray_ArrFuncs")*f) Initialize all function pointers and members to `NULL`. intPyArray_RegisterDataType([PyArray_Descr](types-and-structures#c.PyArray_Descr "PyArray_Descr")*dtype) Register a data-type as a new user-defined data type for arrays. The type must have most of its entries filled in. This is not always checked and errors can produce segfaults. In particular, the typeobj member of the `dtype` structure must be filled with a Python type that has a fixed-size element-size that corresponds to the elsize member of *dtype*. Also the `f` member must have the required functions: nonzero, copyswap, copyswapn, getitem, setitem, and cast (some of the cast functions may be `NULL` if no support is desired). To avoid confusion, you should choose a unique character typecode but this is not enforced and not relied on internally. A user-defined type number is returned that uniquely identifies the type. A pointer to the new structure can then be obtained from [`PyArray_DescrFromType`](#c.PyArray_DescrFromType "PyArray_DescrFromType") using the returned type number. A -1 is returned if an error occurs. 
If this *dtype* has already been registered (checked only by the address of the pointer), then return the previously-assigned type-number. intPyArray_RegisterCastFunc([PyArray_Descr](types-and-structures#c.PyArray_Descr "PyArray_Descr")*descr, inttotype, PyArray_VectorUnaryFunc*castfunc) Register a low-level casting function, *castfunc*, to convert from the data-type, *descr*, to the given data-type number, *totype*. Any old casting function is over-written. A `0` is returned on success or a `-1` on failure. intPyArray_RegisterCanCast([PyArray_Descr](types-and-structures#c.PyArray_Descr "PyArray_Descr")*descr, inttotype, [NPY_SCALARKIND](#c.NPY_SCALARKIND "NPY_SCALARKIND")scalar) Register the data-type number, *totype*, as castable from data-type object, *descr*, of the given *scalar* kind. Use *scalar* = [`NPY_NOSCALAR`](#c.NPY_SCALARKIND.NPY_NOSCALAR "NPY_NOSCALAR") to register that an array of data-type *descr* can be cast safely to a data-type whose type_number is *totype*. The return value is 0 on success or -1 on failure. intPyArray_TypeNumFromName(charconst*str) Given a string return the type-number for the data-type with that string as the type-object name. Returns `NPY_NOTYPE` without setting an error if no type can be found. Only works for user-defined data-types. ### Special functions for NPY_OBJECT intPyArray_INCREF([PyArrayObject](types-and-structures#c.PyArrayObject "PyArrayObject")*op) Used for an array, *op*, that contains any Python objects. It increments the reference count of every object in the array according to the data-type of *op*. A -1 is returned if an error occurs, otherwise 0 is returned. voidPyArray_Item_INCREF(char*ptr, [PyArray_Descr](types-and-structures#c.PyArray_Descr "PyArray_Descr")*dtype) A function to INCREF all the objects at the location *ptr* according to the data-type *dtype*. 
If *ptr* is the start of a structured type with an object at any offset, then this will (recursively) increment the reference count of all object-like items in the structured type. intPyArray_XDECREF([PyArrayObject](types-and-structures#c.PyArrayObject "PyArrayObject")*op) Used for an array, *op*, that contains any Python objects. It decrements the reference count of every object in the array according to the data-type of *op*. Normal return value is 0. A -1 is returned if an error occurs. voidPyArray_Item_XDECREF(char*ptr, [PyArray_Descr](types-and-structures#c.PyArray_Descr "PyArray_Descr")*dtype) A function to XDECREF all the object-like items at the location *ptr* as recorded in the data-type, *dtype*. This works recursively so that if `dtype` itself has fields with data-types that contain object-like items, all the object-like fields will be XDECREF'd. voidPyArray_FillObjectArray([PyArrayObject](types-and-structures#c.PyArrayObject "PyArrayObject")*arr, [PyObject](https://docs.python.org/3/c-api/structures.html#c.PyObject "(in Python v3.10)")*obj) Fill a newly created array with a single value obj at all locations in the structure with object data-types. No checking is performed but *arr* must be of data-type [`NPY_OBJECT`](dtype#c.NPY_OBJECT "NPY_OBJECT") and be single-segment and uninitialized (no previous objects in position). Use [`PyArray_XDECREF`](#c.PyArray_XDECREF "PyArray_XDECREF") (*arr*) if you need to decrement all the items in the object array prior to calling this function. intPyArray_SetWritebackIfCopyBase([PyArrayObject](types-and-structures#c.PyArrayObject "PyArrayObject")*arr, [PyArrayObject](types-and-structures#c.PyArrayObject "PyArrayObject")*base) Precondition: `arr` is a copy of `base` (though possibly with different strides, ordering, etc.) Sets the [`NPY_ARRAY_WRITEBACKIFCOPY`](#c.NPY_ARRAY_WRITEBACKIFCOPY "NPY_ARRAY_WRITEBACKIFCOPY") flag and `arr->base`, and sets `base` to READONLY.
Call [`PyArray_ResolveWritebackIfCopy`](#c.PyArray_ResolveWritebackIfCopy "PyArray_ResolveWritebackIfCopy") before calling `Py_DECREF` in order to copy any changes back to `base` and reset the READONLY flag. Returns 0 for success, -1 for failure. Array flags ----------- The `flags` attribute of the `PyArrayObject` structure contains important information about the memory used by the array (pointed to by the data member). This flag information must be kept accurate or strange results and even segfaults may result. There are 6 (binary) flags that describe the memory area used by the data buffer. These constants are defined in `arrayobject.h` and determine the bit-position of the flag. Python exposes a nice attribute-based interface as well as a dictionary-like interface for getting (and, if appropriate, setting) these flags. Memory areas of all kinds can be pointed to by an ndarray, necessitating these flags. If you get an arbitrary `PyArrayObject` in C-code, you need to be aware of the flags that are set. If you need to guarantee a certain kind of array (like [`NPY_ARRAY_C_CONTIGUOUS`](#c.NPY_ARRAY_C_CONTIGUOUS "NPY_ARRAY_C_CONTIGUOUS") and [`NPY_ARRAY_BEHAVED`](#c.NPY_ARRAY_BEHAVED "NPY_ARRAY_BEHAVED")), then pass these requirements into the PyArray_FromAny function. ### Basic Array Flags An ndarray can have a data segment that is not a simple contiguous chunk of well-behaved memory you can manipulate. It may not be aligned with word boundaries (very important on some platforms). It might have its data in a different byte-order than the machine recognizes. It might not be writeable. It might be in Fortran-contiguous order. The array flags are used to indicate what can be said about data associated with an array. In versions 1.6 and earlier of NumPy, the following flags did not have the _ARRAY_ macro namespace in them. That form of the constant names is deprecated in 1.7.
NPY_ARRAY_C_CONTIGUOUS The data area is in C-style contiguous order (last index varies the fastest). NPY_ARRAY_F_CONTIGUOUS The data area is in Fortran-style contiguous order (first index varies the fastest). Note Arrays can be both C-style and Fortran-style contiguous simultaneously. This is clear for 1-dimensional arrays, but can also be true for higher dimensional arrays. Even for contiguous arrays a stride for a given dimension `arr.strides[dim]` may be *arbitrary* if `arr.shape[dim] == 1` or the array has no elements. It does *not* generally hold that `self.strides[-1] == self.itemsize` for C-style contiguous arrays or `self.strides[0] == self.itemsize` for Fortran-style contiguous arrays is true. The correct way to access the `itemsize` of an array from the C API is `PyArray_ITEMSIZE(arr)`. See also [Internal memory layout of an ndarray](../arrays.ndarray#arrays-ndarray) NPY_ARRAY_OWNDATA The data area is owned by this array. Should never be set manually, instead create a `PyObject` wrapping the data and set the array’s base to that object. For an example, see the test in `test_mem_policy`. NPY_ARRAY_ALIGNED The data area and all array elements are aligned appropriately. NPY_ARRAY_WRITEABLE The data area can be written to. Notice that the above 3 flags are defined so that a new, well- behaved array has these flags defined as true. NPY_ARRAY_WRITEBACKIFCOPY The data area represents a (well-behaved) copy whose information should be transferred back to the original when [`PyArray_ResolveWritebackIfCopy`](#c.PyArray_ResolveWritebackIfCopy "PyArray_ResolveWritebackIfCopy") is called. This is a special flag that is set if this array represents a copy made because a user required certain flags in [`PyArray_FromAny`](#c.PyArray_FromAny "PyArray_FromAny") and a copy had to be made of some other array (and the user asked for this flag to be set in such a situation). The base attribute then points to the “misbehaved” array (which is set read_only). 
[`PyArray_ResolveWritebackIfCopy`](#c.PyArray_ResolveWritebackIfCopy "PyArray_ResolveWritebackIfCopy") will copy its contents back to the “misbehaved” array (casting if necessary) and will reset the “misbehaved” array to [`NPY_ARRAY_WRITEABLE`](#c.NPY_ARRAY_WRITEABLE "NPY_ARRAY_WRITEABLE"). If the “misbehaved” array was not [`NPY_ARRAY_WRITEABLE`](#c.NPY_ARRAY_WRITEABLE "NPY_ARRAY_WRITEABLE") to begin with then [`PyArray_FromAny`](#c.PyArray_FromAny "PyArray_FromAny") would have returned an error because [`NPY_ARRAY_WRITEBACKIFCOPY`](#c.NPY_ARRAY_WRITEBACKIFCOPY "NPY_ARRAY_WRITEBACKIFCOPY") would not have been possible. [`PyArray_UpdateFlags`](#c.PyArray_UpdateFlags "PyArray_UpdateFlags") (obj, flags) will update the `obj->flags` for `flags` which can be any of [`NPY_ARRAY_C_CONTIGUOUS`](#c.NPY_ARRAY_C_CONTIGUOUS "NPY_ARRAY_C_CONTIGUOUS"), [`NPY_ARRAY_F_CONTIGUOUS`](#c.NPY_ARRAY_F_CONTIGUOUS "NPY_ARRAY_F_CONTIGUOUS"), [`NPY_ARRAY_ALIGNED`](#c.NPY_ARRAY_ALIGNED "NPY_ARRAY_ALIGNED"), or [`NPY_ARRAY_WRITEABLE`](#c.NPY_ARRAY_WRITEABLE "NPY_ARRAY_WRITEABLE").
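At the Python level these same flags are exposed through `ndarray.flags`, which makes the statements above easy to check without writing any C. A small sketch:

```python
import numpy as np

# A 1-d array is both C- and Fortran-contiguous at once.
a = np.arange(6)
assert a.flags['C_CONTIGUOUS'] and a.flags['F_CONTIGUOUS']

# A 2-d C-ordered array is only C-contiguous; its transpose is
# a view that is only Fortran-contiguous.
b = a.reshape(2, 3)
assert b.flags['C_CONTIGUOUS'] and not b.flags['F_CONTIGUOUS']
assert b.T.flags['F_CONTIGUOUS'] and not b.T.flags['C_CONTIGUOUS']

# A freshly created array is aligned and writeable, i.e. it satisfies
# the NPY_ARRAY_BEHAVED combination described below.
assert b.flags['ALIGNED'] and b.flags['WRITEABLE']
```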
### Combinations of array flags NPY_ARRAY_BEHAVED [`NPY_ARRAY_ALIGNED`](#c.NPY_ARRAY_ALIGNED "NPY_ARRAY_ALIGNED") | [`NPY_ARRAY_WRITEABLE`](#c.NPY_ARRAY_WRITEABLE "NPY_ARRAY_WRITEABLE") NPY_ARRAY_CARRAY [`NPY_ARRAY_C_CONTIGUOUS`](#c.NPY_ARRAY_C_CONTIGUOUS "NPY_ARRAY_C_CONTIGUOUS") | [`NPY_ARRAY_BEHAVED`](#c.NPY_ARRAY_BEHAVED "NPY_ARRAY_BEHAVED") NPY_ARRAY_CARRAY_RO [`NPY_ARRAY_C_CONTIGUOUS`](#c.NPY_ARRAY_C_CONTIGUOUS "NPY_ARRAY_C_CONTIGUOUS") | [`NPY_ARRAY_ALIGNED`](#c.NPY_ARRAY_ALIGNED "NPY_ARRAY_ALIGNED") NPY_ARRAY_FARRAY [`NPY_ARRAY_F_CONTIGUOUS`](#c.NPY_ARRAY_F_CONTIGUOUS "NPY_ARRAY_F_CONTIGUOUS") | [`NPY_ARRAY_BEHAVED`](#c.NPY_ARRAY_BEHAVED "NPY_ARRAY_BEHAVED") NPY_ARRAY_FARRAY_RO [`NPY_ARRAY_F_CONTIGUOUS`](#c.NPY_ARRAY_F_CONTIGUOUS "NPY_ARRAY_F_CONTIGUOUS") | [`NPY_ARRAY_ALIGNED`](#c.NPY_ARRAY_ALIGNED "NPY_ARRAY_ALIGNED") NPY_ARRAY_DEFAULT [`NPY_ARRAY_CARRAY`](#c.NPY_ARRAY_CARRAY "NPY_ARRAY_CARRAY") NPY_ARRAY_UPDATE_ALL [`NPY_ARRAY_C_CONTIGUOUS`](#c.NPY_ARRAY_C_CONTIGUOUS "NPY_ARRAY_C_CONTIGUOUS") | [`NPY_ARRAY_F_CONTIGUOUS`](#c.NPY_ARRAY_F_CONTIGUOUS "NPY_ARRAY_F_CONTIGUOUS") | [`NPY_ARRAY_ALIGNED`](#c.NPY_ARRAY_ALIGNED "NPY_ARRAY_ALIGNED") ### Flag-like constants These constants are used in [`PyArray_FromAny`](#c.PyArray_FromAny "PyArray_FromAny") (and its macro forms) to specify desired properties of the new array. NPY_ARRAY_FORCECAST Cast to the desired type, even if it can’t be done without losing information. NPY_ARRAY_ENSURECOPY Make sure the resulting array is a copy of the original. NPY_ARRAY_ENSUREARRAY Make sure the resulting object is an actual ndarray, and not a sub-class. ### Flag checking For all of these macros *arr* must be an instance of a (subclass of) [`PyArray_Type`](types-and-structures#c.PyArray_Type "PyArray_Type"). intPyArray_CHKFLAGS([PyObject](https://docs.python.org/3/c-api/structures.html#c.PyObject "(in Python v3.10)")*arr, intflags) The first parameter, arr, must be an ndarray or subclass. 
The parameter, *flags*, should be an integer consisting of bitwise combinations of the possible flags an array can have: [`NPY_ARRAY_C_CONTIGUOUS`](#c.NPY_ARRAY_C_CONTIGUOUS "NPY_ARRAY_C_CONTIGUOUS"), [`NPY_ARRAY_F_CONTIGUOUS`](#c.NPY_ARRAY_F_CONTIGUOUS "NPY_ARRAY_F_CONTIGUOUS"), [`NPY_ARRAY_OWNDATA`](#c.NPY_ARRAY_OWNDATA "NPY_ARRAY_OWNDATA"), [`NPY_ARRAY_ALIGNED`](#c.NPY_ARRAY_ALIGNED "NPY_ARRAY_ALIGNED"), [`NPY_ARRAY_WRITEABLE`](#c.NPY_ARRAY_WRITEABLE "NPY_ARRAY_WRITEABLE"), [`NPY_ARRAY_WRITEBACKIFCOPY`](#c.NPY_ARRAY_WRITEBACKIFCOPY "NPY_ARRAY_WRITEBACKIFCOPY"). intPyArray_IS_C_CONTIGUOUS([PyObject](https://docs.python.org/3/c-api/structures.html#c.PyObject "(in Python v3.10)")*arr) Evaluates true if *arr* is C-style contiguous. intPyArray_IS_F_CONTIGUOUS([PyObject](https://docs.python.org/3/c-api/structures.html#c.PyObject "(in Python v3.10)")*arr) Evaluates true if *arr* is Fortran-style contiguous. intPyArray_ISFORTRAN([PyObject](https://docs.python.org/3/c-api/structures.html#c.PyObject "(in Python v3.10)")*arr) Evaluates true if *arr* is Fortran-style contiguous and *not* C-style contiguous. [`PyArray_IS_F_CONTIGUOUS`](#c.PyArray_IS_F_CONTIGUOUS "PyArray_IS_F_CONTIGUOUS") is the correct way to test for Fortran-style contiguity. intPyArray_ISWRITEABLE([PyObject](https://docs.python.org/3/c-api/structures.html#c.PyObject "(in Python v3.10)")*arr) Evaluates true if the data area of *arr* can be written to. intPyArray_ISALIGNED([PyObject](https://docs.python.org/3/c-api/structures.html#c.PyObject "(in Python v3.10)")*arr) Evaluates true if the data area of *arr* is properly aligned on the machine. intPyArray_ISBEHAVED([PyObject](https://docs.python.org/3/c-api/structures.html#c.PyObject "(in Python v3.10)")*arr) Evaluates true if the data area of *arr* is aligned and writeable and in machine byte-order according to its descriptor.
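Two of these checks have direct Python-level counterparts that make the semantics concrete: `np.isfortran` mirrors `PyArray_ISFORTRAN` (true only for Fortran-contiguous data that is *not* also C-contiguous), and `np.require` plays the role of passing flag requirements to `PyArray_FromAny`. A sketch:

```python
import numpy as np

f = np.array([[1, 2], [3, 4]], order='F')
assert np.isfortran(f)                   # F-contiguous and not C-contiguous
assert not np.isfortran(np.arange(3))    # 1-d arrays are C-contiguous too

# np.require returns an array guaranteed to satisfy the requested flags,
# copying only if necessary -- like PyArray_FromAny with requirements.
c = np.require(f, requirements=['C', 'A', 'W'])
assert c.flags['C_CONTIGUOUS'] and c.flags['ALIGNED'] and c.flags['WRITEABLE']
```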
intPyArray_ISBEHAVED_RO([PyObject](https://docs.python.org/3/c-api/structures.html#c.PyObject "(in Python v3.10)")*arr) Evaluates true if the data area of *arr* is aligned and in machine byte-order. intPyArray_ISCARRAY([PyObject](https://docs.python.org/3/c-api/structures.html#c.PyObject "(in Python v3.10)")*arr) Evaluates true if the data area of *arr* is C-style contiguous, and [`PyArray_ISBEHAVED`](#c.PyArray_ISBEHAVED "PyArray_ISBEHAVED") (*arr*) is true. intPyArray_ISFARRAY([PyObject](https://docs.python.org/3/c-api/structures.html#c.PyObject "(in Python v3.10)")*arr) Evaluates true if the data area of *arr* is Fortran-style contiguous and [`PyArray_ISBEHAVED`](#c.PyArray_ISBEHAVED "PyArray_ISBEHAVED") (*arr*) is true. intPyArray_ISCARRAY_RO([PyObject](https://docs.python.org/3/c-api/structures.html#c.PyObject "(in Python v3.10)")*arr) Evaluates true if the data area of *arr* is C-style contiguous, aligned, and in machine byte-order. intPyArray_ISFARRAY_RO([PyObject](https://docs.python.org/3/c-api/structures.html#c.PyObject "(in Python v3.10)")*arr) Evaluates true if the data area of *arr* is Fortran-style contiguous, aligned, and in machine byte-order. intPyArray_ISONESEGMENT([PyObject](https://docs.python.org/3/c-api/structures.html#c.PyObject "(in Python v3.10)")*arr) Evaluates true if the data area of *arr* consists of a single (C-style or Fortran-style) contiguous segment. voidPyArray_UpdateFlags([PyArrayObject](types-and-structures#c.PyArrayObject "PyArrayObject")*arr, intflagmask) The [`NPY_ARRAY_C_CONTIGUOUS`](#c.NPY_ARRAY_C_CONTIGUOUS "NPY_ARRAY_C_CONTIGUOUS"), [`NPY_ARRAY_ALIGNED`](#c.NPY_ARRAY_ALIGNED "NPY_ARRAY_ALIGNED"), and [`NPY_ARRAY_F_CONTIGUOUS`](#c.NPY_ARRAY_F_CONTIGUOUS "NPY_ARRAY_F_CONTIGUOUS") array flags can be “calculated” from the array object itself. This routine updates one or more of these flags of *arr* as specified in *flagmask* by performing the required calculation.
Warning It is important to keep the flags updated (using [`PyArray_UpdateFlags`](#c.PyArray_UpdateFlags "PyArray_UpdateFlags") can help) whenever a manipulation with an array is performed that might cause them to change. Later calculations in NumPy that rely on the state of these flags do not repeat the calculation to update them. intPyArray_FailUnlessWriteable([PyArrayObject](types-and-structures#c.PyArrayObject "PyArrayObject")*obj, constchar*name) This function does nothing and returns 0 if *obj* is writeable. It raises an exception and returns -1 if *obj* is not writeable. It may also do other house-keeping, such as issuing warnings on arrays which are transitioning to become views. Always call this function at some point before writing to an array. *name* is a name for the array, used to give better error messages. It can be something like “assignment destination”, “output array”, or even just “array”. Array method alternative API ---------------------------- ### Conversion [PyObject](https://docs.python.org/3/c-api/structures.html#c.PyObject "(in Python v3.10)")*PyArray_GetField([PyArrayObject](types-and-structures#c.PyArrayObject "PyArrayObject")*self, [PyArray_Descr](types-and-structures#c.PyArray_Descr "PyArray_Descr")*dtype, intoffset) Equivalent to [`ndarray.getfield`](../generated/numpy.ndarray.getfield#numpy.ndarray.getfield "numpy.ndarray.getfield") (*self*, *dtype*, *offset*). This function [steals a reference](https://docs.python.org/3/c-api/intro.html?reference-count-details) to `PyArray_Descr` and returns a new array of the given `dtype` using the data in the current array at a specified `offset` in bytes. The `offset` plus the itemsize of the new array type must be less than `self ->descr->elsize` or an error is raised. The same shape and strides as the original array are used. Therefore, this function has the effect of returning a field from a structured array. 
But it can also be used to select specific bytes or groups of bytes from any array type. intPyArray_SetField([PyArrayObject](types-and-structures#c.PyArrayObject "PyArrayObject")*self, [PyArray_Descr](types-and-structures#c.PyArray_Descr "PyArray_Descr")*dtype, intoffset, [PyObject](https://docs.python.org/3/c-api/structures.html#c.PyObject "(in Python v3.10)")*val) Equivalent to [`ndarray.setfield`](../generated/numpy.ndarray.setfield#numpy.ndarray.setfield "numpy.ndarray.setfield") (*self*, *val*, *dtype*, *offset* ). Set the field starting at *offset* in bytes and of the given *dtype* to *val*. The *offset* plus *dtype* ->elsize must be less than *self* ->descr->elsize or an error is raised. Otherwise, the *val* argument is converted to an array and copied into the field pointed to. If necessary, the elements of *val* are repeated to fill the destination array, but the number of elements in the destination must be an integer multiple of the number of elements in *val*. [PyObject](https://docs.python.org/3/c-api/structures.html#c.PyObject "(in Python v3.10)")*PyArray_Byteswap([PyArrayObject](types-and-structures#c.PyArrayObject "PyArrayObject")*self, [npy_bool](dtype#c.npy_bool "npy_bool")inplace) Equivalent to [`ndarray.byteswap`](../generated/numpy.ndarray.byteswap#numpy.ndarray.byteswap "numpy.ndarray.byteswap") (*self*, *inplace*). Return an array whose data area is byteswapped. If *inplace* is non-zero, then do the byteswap inplace and return a reference to self. Otherwise, create a byteswapped copy and leave self unchanged. [PyObject](https://docs.python.org/3/c-api/structures.html#c.PyObject "(in Python v3.10)")*PyArray_NewCopy([PyArrayObject](types-and-structures#c.PyArrayObject "PyArrayObject")*old, [NPY_ORDER](#c.NPY_ORDER "NPY_ORDER")order) Equivalent to [`ndarray.copy`](../generated/numpy.ndarray.copy#numpy.ndarray.copy "numpy.ndarray.copy") (*self*, *fortran*). Make a copy of the *old* array.
The returned array is always aligned and writeable with data interpreted the same as the old array. If *order* is [`NPY_CORDER`](#c.NPY_ORDER.NPY_CORDER "NPY_CORDER"), then a C-style contiguous array is returned. If *order* is [`NPY_FORTRANORDER`](#c.NPY_ORDER.NPY_FORTRANORDER "NPY_FORTRANORDER"), then a Fortran-style contiguous array is returned. If *order* is [`NPY_ANYORDER`](#c.NPY_ORDER.NPY_ANYORDER "NPY_ANYORDER"), then the array returned is Fortran-style contiguous only if the old one is; otherwise, it is C-style contiguous. [PyObject](https://docs.python.org/3/c-api/structures.html#c.PyObject "(in Python v3.10)")*PyArray_ToList([PyArrayObject](types-and-structures#c.PyArrayObject "PyArrayObject")*self) Equivalent to [`ndarray.tolist`](../generated/numpy.ndarray.tolist#numpy.ndarray.tolist "numpy.ndarray.tolist") (*self*). Return a nested Python list from *self*. [PyObject](https://docs.python.org/3/c-api/structures.html#c.PyObject "(in Python v3.10)")*PyArray_ToString([PyArrayObject](types-and-structures#c.PyArrayObject "PyArrayObject")*self, [NPY_ORDER](#c.NPY_ORDER "NPY_ORDER")order) Equivalent to [`ndarray.tobytes`](../generated/numpy.ndarray.tobytes#numpy.ndarray.tobytes "numpy.ndarray.tobytes") (*self*, *order*). Return the bytes of this array in a Python string. [PyObject](https://docs.python.org/3/c-api/structures.html#c.PyObject "(in Python v3.10)")*PyArray_ToFile([PyArrayObject](types-and-structures#c.PyArrayObject "PyArrayObject")*self, FILE*fp, char*sep, char*format) Write the contents of *self* to the file pointer *fp* in C-style contiguous fashion. Write the data as binary bytes if *sep* is the empty string “” or `NULL`. Otherwise, write the contents of *self* as text using the *sep* string as the item separator. Each item will be printed to the file. If the *format* string is not `NULL` or the empty string “”, then it is a Python print statement format string showing how the items are to be written.
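The effect of the *order* arguments to `PyArray_NewCopy` and `PyArray_ToString` can be observed from Python through `ndarray.copy` and `ndarray.tobytes`:

```python
import numpy as np

a = np.arange(6, dtype=np.int8).reshape(2, 3)

# NPY_CORDER vs NPY_FORTRANORDER changes the element order of the bytes.
assert a.tobytes(order='C') == bytes([0, 1, 2, 3, 4, 5])
assert a.tobytes(order='F') == bytes([0, 3, 1, 4, 2, 5])

# A copy preserves the values but may use either memory layout.
f = a.copy(order='F')
assert np.array_equal(f, a)
assert f.flags['F_CONTIGUOUS'] and not f.flags['C_CONTIGUOUS']
```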
intPyArray_Dump([PyObject](https://docs.python.org/3/c-api/structures.html#c.PyObject "(in Python v3.10)")*self, [PyObject](https://docs.python.org/3/c-api/structures.html#c.PyObject "(in Python v3.10)")*file, intprotocol) Pickle the object in *self* to the given *file* (either a string or a Python file object). If *file* is a Python string it is considered to be the name of a file which is then opened in binary mode. The given *protocol* is used (or the highest available if *protocol* is negative). This is a simple wrapper around cPickle.dump(*self*, *file*, *protocol*). [PyObject](https://docs.python.org/3/c-api/structures.html#c.PyObject "(in Python v3.10)")*PyArray_Dumps([PyObject](https://docs.python.org/3/c-api/structures.html#c.PyObject "(in Python v3.10)")*self, intprotocol) Pickle the object in *self* to a Python string and return it. Use the Pickle *protocol* provided (or the highest available if *protocol* is negative). intPyArray_FillWithScalar([PyArrayObject](types-and-structures#c.PyArrayObject "PyArrayObject")*arr, [PyObject](https://docs.python.org/3/c-api/structures.html#c.PyObject "(in Python v3.10)")*obj) Fill the array, *arr*, with the given scalar object, *obj*. The object is first converted to the data type of *arr*, and then copied into every location. A -1 is returned if an error occurs, otherwise 0 is returned. [PyObject](https://docs.python.org/3/c-api/structures.html#c.PyObject "(in Python v3.10)")*PyArray_View([PyArrayObject](types-and-structures#c.PyArrayObject "PyArrayObject")*self, [PyArray_Descr](types-and-structures#c.PyArray_Descr "PyArray_Descr")*dtype, [PyTypeObject](https://docs.python.org/3/c-api/type.html#c.PyTypeObject "(in Python v3.10)")*ptype) Equivalent to [`ndarray.view`](../generated/numpy.ndarray.view#numpy.ndarray.view "numpy.ndarray.view") (*self*, *dtype*). Return a new view of the array *self* as possibly a different data-type, *dtype*, and different array subclass *ptype*.
If *dtype* is `NULL`, then the returned array will have the same data type as *self*. The new data-type must be consistent with the size of *self*. Either the itemsizes must be identical, or *self* must be single-segment and the total number of bytes must be the same. In the latter case the dimensions of the returned array will be altered in the last (or first for Fortran-style contiguous arrays) dimension. The data area of the returned array and self is exactly the same. ### Shape Manipulation [PyObject](https://docs.python.org/3/c-api/structures.html#c.PyObject "(in Python v3.10)")*PyArray_Newshape([PyArrayObject](types-and-structures#c.PyArrayObject "PyArrayObject")*self, [PyArray_Dims](types-and-structures#c.PyArray_Dims "PyArray_Dims")*newshape, [NPY_ORDER](#c.NPY_ORDER "NPY_ORDER")order) Result will be a new array (pointing to the same memory location as *self* if possible), but having a shape given by *newshape*. If the new shape is not compatible with the strides of *self*, then a copy of the array with the new specified shape will be returned. [PyObject](https://docs.python.org/3/c-api/structures.html#c.PyObject "(in Python v3.10)")*PyArray_Reshape([PyArrayObject](types-and-structures#c.PyArrayObject "PyArrayObject")*self, [PyObject](https://docs.python.org/3/c-api/structures.html#c.PyObject "(in Python v3.10)")*shape) Equivalent to [`ndarray.reshape`](../generated/numpy.ndarray.reshape#numpy.ndarray.reshape "numpy.ndarray.reshape") (*self*, *shape*) where *shape* is a sequence. Converts *shape* to a [`PyArray_Dims`](types-and-structures#c.PyArray_Dims "PyArray_Dims") structure and calls [`PyArray_Newshape`](#c.PyArray_Newshape "PyArray_Newshape") internally. 
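The size-consistency rule for `PyArray_View` described above (identical itemsizes, or a single-segment array whose last dimension is rescaled to keep the total byte count) is the same one `ndarray.view` enforces in Python:

```python
import numpy as np

a = np.zeros((2, 4), dtype=np.int32)

# Smaller itemsize: the last dimension grows so total bytes match.
v = a.view(np.int16)
assert v.shape == (2, 8)

# Same itemsize: shape is unchanged, only the interpretation differs.
u = a.view(np.uint32)
assert u.shape == (2, 4)

# The view shares the data area of the original array exactly.
v[0, 0] = 1
assert a[0, 0] != 0
```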
For backward compatibility – not recommended. [PyObject](https://docs.python.org/3/c-api/structures.html#c.PyObject "(in Python v3.10)")*PyArray_Squeeze([PyArrayObject](types-and-structures#c.PyArrayObject "PyArrayObject")*self) Equivalent to [`ndarray.squeeze`](../generated/numpy.ndarray.squeeze#numpy.ndarray.squeeze "numpy.ndarray.squeeze") (*self*). Return a new view of *self* with all of the dimensions of length 1 removed from the shape. Warning matrix objects are always 2-dimensional. Therefore, [`PyArray_Squeeze`](#c.PyArray_Squeeze "PyArray_Squeeze") has no effect on arrays of matrix sub-class. [PyObject](https://docs.python.org/3/c-api/structures.html#c.PyObject "(in Python v3.10)")*PyArray_SwapAxes([PyArrayObject](types-and-structures#c.PyArrayObject "PyArrayObject")*self, inta1, inta2) Equivalent to [`ndarray.swapaxes`](../generated/numpy.ndarray.swapaxes#numpy.ndarray.swapaxes "numpy.ndarray.swapaxes") (*self*, *a1*, *a2*). The returned array is a new view of the data in *self* with the given axes, *a1* and *a2*, swapped. [PyObject](https://docs.python.org/3/c-api/structures.html#c.PyObject "(in Python v3.10)")*PyArray_Resize([PyArrayObject](types-and-structures#c.PyArrayObject "PyArrayObject")*self, [PyArray_Dims](types-and-structures#c.PyArray_Dims "PyArray_Dims")*newshape, intrefcheck, [NPY_ORDER](#c.NPY_ORDER "NPY_ORDER")fortran) Equivalent to [`ndarray.resize`](../generated/numpy.ndarray.resize#numpy.ndarray.resize "numpy.ndarray.resize") (*self*, *newshape*, refcheck `=` *refcheck*, order= fortran ). This function only works on single-segment arrays. It changes the shape of *self* inplace and will reallocate the memory for *self* if *newshape* has a different total number of elements than the old shape. If reallocation is necessary, then *self* must own its data, have *self* - `>base==NULL`, have *self* - `>weakrefs==NULL`, and (unless refcheck is 0) not be referenced by any other array.
The fortran argument can be [`NPY_ANYORDER`](#c.NPY_ORDER.NPY_ANYORDER "NPY_ANYORDER"), [`NPY_CORDER`](#c.NPY_ORDER.NPY_CORDER "NPY_CORDER"), or [`NPY_FORTRANORDER`](#c.NPY_ORDER.NPY_FORTRANORDER "NPY_FORTRANORDER"). It currently has no effect. Eventually it could be used to determine how the resize operation should view the data when constructing a differently-dimensioned array. Returns None on success and NULL on error. [PyObject](https://docs.python.org/3/c-api/structures.html#c.PyObject "(in Python v3.10)")*PyArray_Transpose([PyArrayObject](types-and-structures#c.PyArrayObject "PyArrayObject")*self, [PyArray_Dims](types-and-structures#c.PyArray_Dims "PyArray_Dims")*permute) Equivalent to [`ndarray.transpose`](../generated/numpy.ndarray.transpose#numpy.ndarray.transpose "numpy.ndarray.transpose") (*self*, *permute*). Permute the axes of the ndarray object *self* according to the data structure *permute* and return the result. If *permute* is `NULL`, then the resulting array has its axes reversed. For example if *self* has shape \(10\times20\times30\), and *permute* `.ptr` is (0,2,1) the shape of the result is \(10\times30\times20.\) If *permute* is `NULL`, the shape of the result is \(30\times20\times10.\) [PyObject](https://docs.python.org/3/c-api/structures.html#c.PyObject "(in Python v3.10)")*PyArray_Flatten([PyArrayObject](types-and-structures#c.PyArrayObject "PyArrayObject")*self, [NPY_ORDER](#c.NPY_ORDER "NPY_ORDER")order) Equivalent to [`ndarray.flatten`](../generated/numpy.ndarray.flatten#numpy.ndarray.flatten "numpy.ndarray.flatten") (*self*, *order*). Return a 1-d copy of the array. If *order* is [`NPY_FORTRANORDER`](#c.NPY_ORDER.NPY_FORTRANORDER "NPY_FORTRANORDER") the elements are scanned out in Fortran order (first-dimension varies the fastest). If *order* is [`NPY_CORDER`](#c.NPY_ORDER.NPY_CORDER "NPY_CORDER"), the elements of `self` are scanned in C-order (last dimension varies the fastest). 
If *order* is [`NPY_ANYORDER`](#c.NPY_ORDER.NPY_ANYORDER "NPY_ANYORDER"), then the result of [`PyArray_ISFORTRAN`](#c.PyArray_ISFORTRAN "PyArray_ISFORTRAN") (*self*) is used to determine which order to flatten. [PyObject](https://docs.python.org/3/c-api/structures.html#c.PyObject "(in Python v3.10)")*PyArray_Ravel([PyArrayObject](types-and-structures#c.PyArrayObject "PyArrayObject")*self, [NPY_ORDER](#c.NPY_ORDER "NPY_ORDER")order) Equivalent to *self*.ravel(*order*). Same basic functionality as [`PyArray_Flatten`](#c.PyArray_Flatten "PyArray_Flatten") (*self*, *order*) except if *order* is 0 and *self* is C-style contiguous, the shape is altered but no copy is performed. ### Item selection and manipulation [PyObject](https://docs.python.org/3/c-api/structures.html#c.PyObject "(in Python v3.10)")*PyArray_TakeFrom([PyArrayObject](types-and-structures#c.PyArrayObject "PyArrayObject")*self, [PyObject](https://docs.python.org/3/c-api/structures.html#c.PyObject "(in Python v3.10)")*indices, intaxis, [PyArrayObject](types-and-structures#c.PyArrayObject "PyArrayObject")*ret, [NPY_CLIPMODE](#c.NPY_CLIPMODE "NPY_CLIPMODE")clipmode) Equivalent to [`ndarray.take`](../generated/numpy.ndarray.take#numpy.ndarray.take "numpy.ndarray.take") (*self*, *indices*, *axis*, *ret*, *clipmode*) except *axis* =None in Python is obtained by setting *axis* = [`NPY_MAXDIMS`](#c.NPY_MAXDIMS "NPY_MAXDIMS") in C. Extract the items from self indicated by the integer-valued *indices* along the given *axis.* The clipmode argument can be [`NPY_RAISE`](#c.NPY_CLIPMODE.NPY_RAISE "NPY_RAISE"), [`NPY_WRAP`](#c.NPY_CLIPMODE.NPY_WRAP "NPY_WRAP"), or [`NPY_CLIP`](#c.NPY_CLIPMODE.NPY_CLIP "NPY_CLIP") to indicate what to do with out-of-bound indices. The *ret* argument can specify an output array rather than having one created internally.
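The three clip modes accepted by `PyArray_TakeFrom` behave as follows at the Python level (`ndarray.take` with its `mode` argument):

```python
import numpy as np

a = np.array([10, 20, 30])

# NPY_WRAP: out-of-bounds indices wrap around modulo len(a): 4 -> 1.
assert np.array_equal(a.take([0, 4], mode='wrap'), [10, 20])

# NPY_CLIP: out-of-bounds indices are clipped to the valid range: 4 -> 2.
assert np.array_equal(a.take([0, 4], mode='clip'), [10, 30])

# NPY_RAISE: out-of-bounds indices raise an IndexError.
try:
    a.take([0, 4], mode='raise')
except IndexError:
    pass
else:
    raise AssertionError("expected IndexError")
```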
[PyObject](https://docs.python.org/3/c-api/structures.html#c.PyObject "(in Python v3.10)")*PyArray_PutTo([PyArrayObject](types-and-structures#c.PyArrayObject "PyArrayObject")*self, [PyObject](https://docs.python.org/3/c-api/structures.html#c.PyObject "(in Python v3.10)")*values, [PyObject](https://docs.python.org/3/c-api/structures.html#c.PyObject "(in Python v3.10)")*indices, [NPY_CLIPMODE](#c.NPY_CLIPMODE "NPY_CLIPMODE")clipmode) Equivalent to *self*.put(*values*, *indices*, *clipmode* ). Put *values* into *self* at the corresponding (flattened) *indices*. If *values* is too small it will be repeated as necessary. [PyObject](https://docs.python.org/3/c-api/structures.html#c.PyObject "(in Python v3.10)")*PyArray_PutMask([PyArrayObject](types-and-structures#c.PyArrayObject "PyArrayObject")*self, [PyObject](https://docs.python.org/3/c-api/structures.html#c.PyObject "(in Python v3.10)")*values, [PyObject](https://docs.python.org/3/c-api/structures.html#c.PyObject "(in Python v3.10)")*mask) Place the *values* in *self* wherever corresponding positions (using a flattened context) in *mask* are true. The *mask* and *self* arrays must have the same total number of elements. If *values* is too small, it will be repeated as necessary. [PyObject](https://docs.python.org/3/c-api/structures.html#c.PyObject "(in Python v3.10)")*PyArray_Repeat([PyArrayObject](types-and-structures#c.PyArrayObject "PyArrayObject")*self, [PyObject](https://docs.python.org/3/c-api/structures.html#c.PyObject "(in Python v3.10)")*op, intaxis) Equivalent to [`ndarray.repeat`](../generated/numpy.ndarray.repeat#numpy.ndarray.repeat "numpy.ndarray.repeat") (*self*, *op*, *axis*). Copy the elements of *self*, *op* times along the given *axis*. Either *op* is a scalar integer or a sequence of length *self* ->dimensions[ *axis* ] indicating how many times to repeat each item along the axis. 
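`PyArray_PutTo`, `PyArray_PutMask`, and `PyArray_Repeat` correspond to `np.put`, `np.putmask`, and `ndarray.repeat`. In particular, note how a too-small *values* argument is repeated, as described above:

```python
import numpy as np

a = np.zeros(5, dtype=int)
np.put(a, [0, 2, 4], [7, 8])        # values cycle: 7, 8, 7
assert np.array_equal(a, [7, 0, 8, 0, 7])

b = np.zeros(4, dtype=int)
np.putmask(b, [True, False, True, True], [1, 2])  # values repeat over b
assert np.array_equal(b, [1, 0, 1, 2])

# A sequence of per-element repeat counts along the axis.
assert np.array_equal(np.array([1, 2]).repeat([2, 3]), [1, 1, 2, 2, 2])
```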
[PyObject](https://docs.python.org/3/c-api/structures.html#c.PyObject "(in Python v3.10)")*PyArray_Choose([PyArrayObject](types-and-structures#c.PyArrayObject "PyArrayObject")*self, [PyObject](https://docs.python.org/3/c-api/structures.html#c.PyObject "(in Python v3.10)")*op, [PyArrayObject](types-and-structures#c.PyArrayObject "PyArrayObject")*ret, [NPY_CLIPMODE](#c.NPY_CLIPMODE "NPY_CLIPMODE")clipmode) Equivalent to [`ndarray.choose`](../generated/numpy.ndarray.choose#numpy.ndarray.choose "numpy.ndarray.choose") (*self*, *op*, *ret*, *clipmode*). Create a new array by selecting elements from the sequence of arrays in *op* based on the integer values in *self*. The arrays must all be broadcastable to the same shape and the entries in *self* should be between 0 and len(*op*). The output is placed in *ret* unless it is `NULL` in which case a new output is created. The *clipmode* argument determines behavior for when entries in *self* are not between 0 and len(*op*). NPY_RAISE raise a ValueError; NPY_WRAP wrap values < 0 by adding len(*op*) and values >=len(*op*) by subtracting len(*op*) until they are in range; NPY_CLIP all values are clipped to the region [0, len(*op*) ). [PyObject](https://docs.python.org/3/c-api/structures.html#c.PyObject "(in Python v3.10)")*PyArray_Sort([PyArrayObject](types-and-structures#c.PyArrayObject "PyArrayObject")*self, intaxis, [NPY_SORTKIND](#c.NPY_SORTKIND "NPY_SORTKIND")kind) Equivalent to [`ndarray.sort`](../generated/numpy.ndarray.sort#numpy.ndarray.sort "numpy.ndarray.sort") (*self*, *axis*, *kind*). Return an array with the items of *self* sorted along *axis*. The array is sorted using the algorithm denoted by *kind*, which is an integer/enum pointing to the type of sorting algorithms used. 
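`PyArray_Choose` and `PyArray_Sort` map to `np.choose` and `ndarray.sort`. A sketch showing the NPY_WRAP handling of a selector index outside `[0, len(op))`, and the `kind` argument that corresponds to `NPY_SORTKIND`:

```python
import numpy as np

sel = np.array([0, 2, 4])
choices = [np.array([10, 10, 10]),
           np.array([20, 20, 20]),
           np.array([30, 30, 30])]

# Index 4 is out of range for 3 choices; 'wrap' reduces it to 4 - 3 = 1.
assert np.array_equal(np.choose(sel, choices, mode='wrap'), [10, 30, 20])

# kind selects the sorting algorithm (NPY_SORTKIND in C).
x = np.array([3, 1, 2])
x.sort(kind='stable')
assert np.array_equal(x, [1, 2, 3])
```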
[PyObject](https://docs.python.org/3/c-api/structures.html#c.PyObject "(in Python v3.10)")*PyArray_ArgSort([PyArrayObject](types-and-structures#c.PyArrayObject "PyArrayObject")*self, intaxis) Equivalent to [`ndarray.argsort`](../generated/numpy.ndarray.argsort#numpy.ndarray.argsort "numpy.ndarray.argsort") (*self*, *axis*). Return an array of indices such that selection of these indices along the given `axis` would return a sorted version of *self*. If *self* ->descr is a data-type with fields defined, then self->descr->names is used to determine the sort order. A comparison where the first field is equal will use the second field and so on. To alter the sort order of a structured array, create a new data-type with a different order of names and construct a view of the array with that new data-type. [PyObject](https://docs.python.org/3/c-api/structures.html#c.PyObject "(in Python v3.10)")*PyArray_LexSort([PyObject](https://docs.python.org/3/c-api/structures.html#c.PyObject "(in Python v3.10)")*sort_keys, intaxis) Given a sequence of arrays (*sort_keys*) of the same shape, return an array of indices (similar to [`PyArray_ArgSort`](#c.PyArray_ArgSort "PyArray_ArgSort") (
)) that would sort the arrays lexicographically. A lexicographic sort specifies that when two keys are found to be equal, the order is based on comparison of subsequent keys. A merge sort (which leaves equal entries unmoved) is required to be defined for the types. The sort is accomplished by sorting the indices first using the first *sort_key* and then using the second *sort_key* and so forth. This is equivalent to the lexsort(*sort_keys*, *axis*) Python command. Because of the way the merge-sort works, be sure to understand the order the *sort_keys* must be in (reversed from the order you would use when comparing two elements). If these arrays are all collected in a structured array, then [`PyArray_Sort`](#c.PyArray_Sort "PyArray_Sort") (
) can also be used to sort the array directly. [PyObject](https://docs.python.org/3/c-api/structures.html#c.PyObject "(in Python v3.10)") *PyArray_SearchSorted([PyArrayObject](types-and-structures#c.PyArrayObject "PyArrayObject") *self, [PyObject](https://docs.python.org/3/c-api/structures.html#c.PyObject "(in Python v3.10)") *values, [NPY_SEARCHSIDE](#c.NPY_SEARCHSIDE "NPY_SEARCHSIDE") side, [PyObject](https://docs.python.org/3/c-api/structures.html#c.PyObject "(in Python v3.10)") *perm) Equivalent to [`ndarray.searchsorted`](../generated/numpy.ndarray.searchsorted#numpy.ndarray.searchsorted "numpy.ndarray.searchsorted") (*self*, *values*, *side*, *perm*). Assuming *self* is a 1-d array in ascending order, the output is an array of indices with the same shape as *values* such that, if the elements in *values* were inserted before the indices, the order of *self* would be preserved. No checking is done on whether or not *self* is in ascending order. The *side* argument indicates whether the index returned should be that of the first suitable location (if [`NPY_SEARCHLEFT`](#c.NPY_SEARCHSIDE.NPY_SEARCHLEFT "NPY_SEARCHLEFT")) or of the last (if [`NPY_SEARCHRIGHT`](#c.NPY_SEARCHSIDE.NPY_SEARCHRIGHT "NPY_SEARCHRIGHT")). The *perm* argument, if not `NULL`, must be a 1-d array of integer indices the same length as *self* that sorts it into ascending order. This is typically the result of a call to [`PyArray_ArgSort`](#c.PyArray_ArgSort "PyArray_ArgSort") (
) Binary search is used to find the required insertion points. intPyArray_Partition([PyArrayObject](types-and-structures#c.PyArrayObject "PyArrayObject")*self, [PyArrayObject](types-and-structures#c.PyArrayObject "PyArrayObject")*ktharray, intaxis, [NPY_SELECTKIND](#c.NPY_SELECTKIND "NPY_SELECTKIND")which) Equivalent to [`ndarray.partition`](../generated/numpy.ndarray.partition#numpy.ndarray.partition "numpy.ndarray.partition") (*self*, *ktharray*, *axis*, *kind*). Partitions the array so that the values of the element indexed by *ktharray* are in the positions they would be if the array is fully sorted and places all elements smaller than the kth before and all elements equal or greater after the kth element. The ordering of all elements within the partitions is undefined. If *self*->descr is a data-type with fields defined, then self->descr->names is used to determine the sort order. A comparison where the first field is equal will use the second field and so on. To alter the sort order of a structured array, create a new data-type with a different order of names and construct a view of the array with that new data-type. Returns zero on success and -1 on failure. [PyObject](https://docs.python.org/3/c-api/structures.html#c.PyObject "(in Python v3.10)")*PyArray_ArgPartition([PyArrayObject](types-and-structures#c.PyArrayObject "PyArrayObject")*op, [PyArrayObject](types-and-structures#c.PyArrayObject "PyArrayObject")*ktharray, intaxis, [NPY_SELECTKIND](#c.NPY_SELECTKIND "NPY_SELECTKIND")which) Equivalent to [`ndarray.argpartition`](../generated/numpy.ndarray.argpartition#numpy.ndarray.argpartition "numpy.ndarray.argpartition") (*self*, *ktharray*, *axis*, *kind*). Return an array of indices such that selection of these indices along the given `axis` would return a partitioned version of *self*. 
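The insertion-point search described for PyArray_SearchSorted corresponds to the binary search provided by Python's `bisect` module: [`NPY_SEARCHLEFT`](#c.NPY_SEARCHSIDE.NPY_SEARCHLEFT "NPY_SEARCHLEFT") maps to `bisect_left` and [`NPY_SEARCHRIGHT`](#c.NPY_SEARCHSIDE.NPY_SEARCHRIGHT "NPY_SEARCHRIGHT") to `bisect_right`. A sketch for a 1-d ascending list (hypothetical name):

```python
import bisect

def search_sorted(arr, values, side="left"):
    """For each v in values, return the index at which v could be
    inserted into the ascending list arr while preserving its order."""
    find = bisect.bisect_left if side == "left" else bisect.bisect_right
    return [find(arr, v) for v in values]
```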
[PyObject](https://docs.python.org/3/c-api/structures.html#c.PyObject "(in Python v3.10)")*PyArray_Diagonal([PyArrayObject](types-and-structures#c.PyArrayObject "PyArrayObject")*self, intoffset, intaxis1, intaxis2) Equivalent to [`ndarray.diagonal`](../generated/numpy.ndarray.diagonal#numpy.ndarray.diagonal "numpy.ndarray.diagonal") (*self*, *offset*, *axis1*, *axis2* ). Return the *offset* diagonals of the 2-d arrays defined by *axis1* and *axis2*. [npy_intp](dtype#c.npy_intp "npy_intp")PyArray_CountNonzero([PyArrayObject](types-and-structures#c.PyArrayObject "PyArrayObject")*self) New in version 1.6. Counts the number of non-zero elements in the array object *self*. [PyObject](https://docs.python.org/3/c-api/structures.html#c.PyObject "(in Python v3.10)")*PyArray_Nonzero([PyArrayObject](types-and-structures#c.PyArrayObject "PyArrayObject")*self) Equivalent to [`ndarray.nonzero`](../generated/numpy.ndarray.nonzero#numpy.ndarray.nonzero "numpy.ndarray.nonzero") (*self*). Returns a tuple of index arrays that select elements of *self* that are nonzero. If (nd= [`PyArray_NDIM`](#c.PyArray_NDIM "PyArray_NDIM") ( `self` ))==1, then a single index array is returned. The index arrays have data type [`NPY_INTP`](dtype#c.NPY_INTP "NPY_INTP"). If a tuple is returned (nd \(\neq\) 1), then its length is nd. [PyObject](https://docs.python.org/3/c-api/structures.html#c.PyObject "(in Python v3.10)")*PyArray_Compress([PyArrayObject](types-and-structures#c.PyArrayObject "PyArrayObject")*self, [PyObject](https://docs.python.org/3/c-api/structures.html#c.PyObject "(in Python v3.10)")*condition, intaxis, [PyArrayObject](types-and-structures#c.PyArrayObject "PyArrayObject")*out) Equivalent to [`ndarray.compress`](../generated/numpy.ndarray.compress#numpy.ndarray.compress "numpy.ndarray.compress") (*self*, *condition*, *axis* ). Return the elements along *axis* corresponding to elements of *condition* that are true. 
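The nonzero and compress semantics have direct 1-d analogues in pure Python (hypothetical helper names, illustrative only):

```python
def nonzero_1d(arr):
    """1-d analogue of PyArray_Nonzero: indices of nonzero elements."""
    return [i for i, v in enumerate(arr) if v]

def compress(arr, condition):
    """1-d analogue of PyArray_Compress: elements of arr whose
    matching condition entry is true."""
    return [v for v, c in zip(arr, condition) if c]
```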
### Calculation Tip Pass in [`NPY_MAXDIMS`](#c.NPY_MAXDIMS "NPY_MAXDIMS") for axis in order to achieve the same effect that is obtained by passing in `axis=None` in Python (treating the array as a 1-d array). Note The out argument specifies where to place the result. If out is NULL, then the output array is created, otherwise the output is placed in out which must be the correct size and type. A new reference to the output array is always returned even when out is not NULL. The caller of the routine has the responsibility to `Py_DECREF` out if not NULL or a memory-leak will occur. [PyObject](https://docs.python.org/3/c-api/structures.html#c.PyObject "(in Python v3.10)")*PyArray_ArgMax([PyArrayObject](types-and-structures#c.PyArrayObject "PyArrayObject")*self, intaxis, [PyArrayObject](types-and-structures#c.PyArrayObject "PyArrayObject")*out) Equivalent to [`ndarray.argmax`](../generated/numpy.ndarray.argmax#numpy.ndarray.argmax "numpy.ndarray.argmax") (*self*, *axis*). Return the index of the largest element of *self* along *axis*. [PyObject](https://docs.python.org/3/c-api/structures.html#c.PyObject "(in Python v3.10)")*PyArray_ArgMin([PyArrayObject](types-and-structures#c.PyArrayObject "PyArrayObject")*self, intaxis, [PyArrayObject](types-and-structures#c.PyArrayObject "PyArrayObject")*out) Equivalent to [`ndarray.argmin`](../generated/numpy.ndarray.argmin#numpy.ndarray.argmin "numpy.ndarray.argmin") (*self*, *axis*). Return the index of the smallest element of *self* along *axis*. [PyObject](https://docs.python.org/3/c-api/structures.html#c.PyObject "(in Python v3.10)")*PyArray_Max([PyArrayObject](types-and-structures#c.PyArrayObject "PyArrayObject")*self, intaxis, [PyArrayObject](types-and-structures#c.PyArrayObject "PyArrayObject")*out) Equivalent to [`ndarray.max`](../generated/numpy.ndarray.max#numpy.ndarray.max "numpy.ndarray.max") (*self*, *axis*). Returns the largest element of *self* along the given *axis*. 
When the result is a single element, returns a numpy scalar instead of an ndarray. [PyObject](https://docs.python.org/3/c-api/structures.html#c.PyObject "(in Python v3.10)")*PyArray_Min([PyArrayObject](types-and-structures#c.PyArrayObject "PyArrayObject")*self, intaxis, [PyArrayObject](types-and-structures#c.PyArrayObject "PyArrayObject")*out) Equivalent to [`ndarray.min`](../generated/numpy.ndarray.min#numpy.ndarray.min "numpy.ndarray.min") (*self*, *axis*). Return the smallest element of *self* along the given *axis*. When the result is a single element, returns a numpy scalar instead of an ndarray. [PyObject](https://docs.python.org/3/c-api/structures.html#c.PyObject "(in Python v3.10)")*PyArray_Ptp([PyArrayObject](types-and-structures#c.PyArrayObject "PyArrayObject")*self, intaxis, [PyArrayObject](types-and-structures#c.PyArrayObject "PyArrayObject")*out) Equivalent to [`ndarray.ptp`](../generated/numpy.ndarray.ptp#numpy.ndarray.ptp "numpy.ndarray.ptp") (*self*, *axis*). Return the difference between the largest element of *self* along *axis* and the smallest element of *self* along *axis*. When the result is a single element, returns a numpy scalar instead of an ndarray. Note The rtype argument specifies the data-type the reduction should take place over. This is important if the data-type of the array is not “large” enough to handle the output. By default, all integer data-types are made at least as large as [`NPY_LONG`](dtype#c.NPY_LONG "NPY_LONG") for the “add” and “multiply” ufuncs (which form the basis for mean, sum, cumsum, prod, and cumprod functions). [PyObject](https://docs.python.org/3/c-api/structures.html#c.PyObject "(in Python v3.10)")*PyArray_Mean([PyArrayObject](types-and-structures#c.PyArrayObject "PyArrayObject")*self, intaxis, intrtype, [PyArrayObject](types-and-structures#c.PyArrayObject "PyArrayObject")*out) Equivalent to [`ndarray.mean`](../generated/numpy.ndarray.mean#numpy.ndarray.mean "numpy.ndarray.mean") (*self*, *axis*, *rtype*). 
Returns the mean of the elements along the given *axis*, using the enumerated type *rtype* as the data type to sum in. Default sum behavior is obtained using [`NPY_NOTYPE`](dtype#c.NPY_NOTYPE "NPY_NOTYPE") for *rtype*. [PyObject](https://docs.python.org/3/c-api/structures.html#c.PyObject "(in Python v3.10)")*PyArray_Trace([PyArrayObject](types-and-structures#c.PyArrayObject "PyArrayObject")*self, intoffset, intaxis1, intaxis2, intrtype, [PyArrayObject](types-and-structures#c.PyArrayObject "PyArrayObject")*out) Equivalent to [`ndarray.trace`](../generated/numpy.ndarray.trace#numpy.ndarray.trace "numpy.ndarray.trace") (*self*, *offset*, *axis1*, *axis2*, *rtype*). Return the sum (using *rtype* as the data type of summation) over the *offset* diagonal elements of the 2-d arrays defined by *axis1* and *axis2* variables. A positive offset chooses diagonals above the main diagonal. A negative offset selects diagonals below the main diagonal. [PyObject](https://docs.python.org/3/c-api/structures.html#c.PyObject "(in Python v3.10)")*PyArray_Clip([PyArrayObject](types-and-structures#c.PyArrayObject "PyArrayObject")*self, [PyObject](https://docs.python.org/3/c-api/structures.html#c.PyObject "(in Python v3.10)")*min, [PyObject](https://docs.python.org/3/c-api/structures.html#c.PyObject "(in Python v3.10)")*max) Equivalent to [`ndarray.clip`](../generated/numpy.ndarray.clip#numpy.ndarray.clip "numpy.ndarray.clip") (*self*, *min*, *max*). Clip an array, *self*, so that values larger than *max* are fixed to *max* and values less than *min* are fixed to *min*. [PyObject](https://docs.python.org/3/c-api/structures.html#c.PyObject "(in Python v3.10)")*PyArray_Conjugate([PyArrayObject](types-and-structures#c.PyArrayObject "PyArrayObject")*self) Equivalent to [`ndarray.conjugate`](../generated/numpy.ndarray.conjugate#numpy.ndarray.conjugate "numpy.ndarray.conjugate") (*self*). Return the complex conjugate of *self*. 
If *self* is not of complex data type, then return *self* with a reference. [PyObject](https://docs.python.org/3/c-api/structures.html#c.PyObject "(in Python v3.10)")*PyArray_Round([PyArrayObject](types-and-structures#c.PyArrayObject "PyArrayObject")*self, intdecimals, [PyArrayObject](types-and-structures#c.PyArrayObject "PyArrayObject")*out) Equivalent to [`ndarray.round`](../generated/numpy.ndarray.round#numpy.ndarray.round "numpy.ndarray.round") (*self*, *decimals*, *out*). Returns the array with elements rounded to the nearest decimal place. The decimal place is defined as the \(10^{-\textrm{decimals}}\) digit so that negative *decimals* cause rounding to the nearest 10’s, 100’s, etc. If out is `NULL`, then the output array is created, otherwise the output is placed in *out* which must be the correct size and type. [PyObject](https://docs.python.org/3/c-api/structures.html#c.PyObject "(in Python v3.10)")*PyArray_Std([PyArrayObject](types-and-structures#c.PyArrayObject "PyArrayObject")*self, intaxis, intrtype, [PyArrayObject](types-and-structures#c.PyArrayObject "PyArrayObject")*out) Equivalent to [`ndarray.std`](../generated/numpy.ndarray.std#numpy.ndarray.std "numpy.ndarray.std") (*self*, *axis*, *rtype*). Return the standard deviation using data along *axis* converted to data type *rtype*. [PyObject](https://docs.python.org/3/c-api/structures.html#c.PyObject "(in Python v3.10)")*PyArray_Sum([PyArrayObject](types-and-structures#c.PyArrayObject "PyArrayObject")*self, intaxis, intrtype, [PyArrayObject](types-and-structures#c.PyArrayObject "PyArrayObject")*out) Equivalent to [`ndarray.sum`](../generated/numpy.ndarray.sum#numpy.ndarray.sum "numpy.ndarray.sum") (*self*, *axis*, *rtype*). Return 1-d vector sums of elements in *self* along *axis*. Perform the sum after converting data to data type *rtype*. 
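For the 1-d case, several of these calculation routines reduce to one-liners. A pure-Python sketch of the argmax and clip semantics (hypothetical names; ties in argmax resolve to the first occurrence, as in `ndarray.argmax`):

```python
def argmax_1d(arr):
    """Index of the largest element; on ties, the first occurrence."""
    best = 0
    for i in range(1, len(arr)):
        if arr[i] > arr[best]:
            best = i
    return best

def clip(arr, lo, hi):
    """Values below lo become lo, values above hi become hi."""
    return [min(max(v, lo), hi) for v in arr]
```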
[PyObject](https://docs.python.org/3/c-api/structures.html#c.PyObject "(in Python v3.10)")*PyArray_CumSum([PyArrayObject](types-and-structures#c.PyArrayObject "PyArrayObject")*self, intaxis, intrtype, [PyArrayObject](types-and-structures#c.PyArrayObject "PyArrayObject")*out) Equivalent to [`ndarray.cumsum`](../generated/numpy.ndarray.cumsum#numpy.ndarray.cumsum "numpy.ndarray.cumsum") (*self*, *axis*, *rtype*). Return cumulative 1-d sums of elements in *self* along *axis*. Perform the sum after converting data to data type *rtype*. [PyObject](https://docs.python.org/3/c-api/structures.html#c.PyObject "(in Python v3.10)")*PyArray_Prod([PyArrayObject](types-and-structures#c.PyArrayObject "PyArrayObject")*self, intaxis, intrtype, [PyArrayObject](types-and-structures#c.PyArrayObject "PyArrayObject")*out) Equivalent to [`ndarray.prod`](../generated/numpy.ndarray.prod#numpy.ndarray.prod "numpy.ndarray.prod") (*self*, *axis*, *rtype*). Return 1-d products of elements in *self* along *axis*. Perform the product after converting data to data type *rtype*. [PyObject](https://docs.python.org/3/c-api/structures.html#c.PyObject "(in Python v3.10)")*PyArray_CumProd([PyArrayObject](types-and-structures#c.PyArrayObject "PyArrayObject")*self, intaxis, intrtype, [PyArrayObject](types-and-structures#c.PyArrayObject "PyArrayObject")*out) Equivalent to [`ndarray.cumprod`](../generated/numpy.ndarray.cumprod#numpy.ndarray.cumprod "numpy.ndarray.cumprod") (*self*, *axis*, *rtype*). Return 1-d cumulative products of elements in `self` along `axis`. Perform the product after converting data to data type `rtype`. [PyObject](https://docs.python.org/3/c-api/structures.html#c.PyObject "(in Python v3.10)")*PyArray_All([PyArrayObject](types-and-structures#c.PyArrayObject "PyArrayObject")*self, intaxis, [PyArrayObject](types-and-structures#c.PyArrayObject "PyArrayObject")*out) Equivalent to [`ndarray.all`](../generated/numpy.ndarray.all#numpy.ndarray.all "numpy.ndarray.all") (*self*, *axis*). 
Return an array with True elements for every 1-d sub-array of `self` defined by `axis` in which all the elements are True. [PyObject](https://docs.python.org/3/c-api/structures.html#c.PyObject "(in Python v3.10)")*PyArray_Any([PyArrayObject](types-and-structures#c.PyArrayObject "PyArrayObject")*self, intaxis, [PyArrayObject](types-and-structures#c.PyArrayObject "PyArrayObject")*out) Equivalent to [`ndarray.any`](../generated/numpy.ndarray.any#numpy.ndarray.any "numpy.ndarray.any") (*self*, *axis*). Return an array with True elements for every 1-d sub-array of *self* defined by *axis* in which any of the elements are True. Functions --------- ### Array Functions intPyArray_AsCArray([PyObject](https://docs.python.org/3/c-api/structures.html#c.PyObject "(in Python v3.10)")**op, void*ptr, [npy_intp](dtype#c.npy_intp "npy_intp")*dims, intnd, [PyArray_Descr](types-and-structures#c.PyArray_Descr "PyArray_Descr")*typedescr) Sometimes it is useful to access a multidimensional array as a C-style multi-dimensional array so that algorithms can be implemented using C’s a[i][j][k] syntax. This routine returns a pointer, *ptr*, that simulates this kind of C-style array, for 1-, 2-, and 3-d ndarrays. Parameters * **op** – The address to any Python object. This Python object will be replaced with an equivalent well-behaved, C-style contiguous, ndarray of the given data type specified by the last two arguments. Be sure that stealing a reference in this way to the input object is justified. * **ptr** – The address to a (ctype* for 1-d, ctype** for 2-d or ctype*** for 3-d) variable where ctype is the equivalent C-type for the data type. On return, *ptr* will be addressable as a 1-d, 2-d, or 3-d array. * **dims** – An output array that contains the shape of the array object. This array gives boundaries on any looping that will take place. * **nd** – The dimensionality of the array (1, 2, or 3). 
* **typedescr** – A [`PyArray_Descr`](types-and-structures#c.PyArray_Descr "PyArray_Descr") structure indicating the desired data-type (including required byteorder). The call will steal a reference to the parameter. Note The simulation of a C-style array is not complete for 2-d and 3-d arrays. For example, the simulated arrays of pointers cannot be passed to subroutines expecting specific, statically-defined 2-d and 3-d arrays. To pass to functions requiring those kinds of inputs, you must statically define the required array and copy data. int PyArray_Free([PyObject](https://docs.python.org/3/c-api/structures.html#c.PyObject "(in Python v3.10)") *op, void *ptr) Must be called with the same objects and memory locations returned from [`PyArray_AsCArray`](#c.PyArray_AsCArray "PyArray_AsCArray") (
). This function cleans up memory that would otherwise be leaked. [PyObject](https://docs.python.org/3/c-api/structures.html#c.PyObject "(in Python v3.10)") *PyArray_Concatenate([PyObject](https://docs.python.org/3/c-api/structures.html#c.PyObject "(in Python v3.10)") *obj, int axis) Join the sequence of objects in *obj* together along *axis* into a single array. If the dimensions or types are not compatible, an error is raised. [PyObject](https://docs.python.org/3/c-api/structures.html#c.PyObject "(in Python v3.10)") *PyArray_InnerProduct([PyObject](https://docs.python.org/3/c-api/structures.html#c.PyObject "(in Python v3.10)") *obj1, [PyObject](https://docs.python.org/3/c-api/structures.html#c.PyObject "(in Python v3.10)") *obj2) Compute a product-sum over the last dimensions of *obj1* and *obj2*. Neither array is conjugated. [PyObject](https://docs.python.org/3/c-api/structures.html#c.PyObject "(in Python v3.10)") *PyArray_MatrixProduct([PyObject](https://docs.python.org/3/c-api/structures.html#c.PyObject "(in Python v3.10)") *obj1, [PyObject](https://docs.python.org/3/c-api/structures.html#c.PyObject "(in Python v3.10)") *obj2) Compute a product-sum over the last dimension of *obj1* and the second-to-last dimension of *obj2*. For 2-d arrays this is a matrix product. Neither array is conjugated. [PyObject](https://docs.python.org/3/c-api/structures.html#c.PyObject "(in Python v3.10)") *PyArray_MatrixProduct2([PyObject](https://docs.python.org/3/c-api/structures.html#c.PyObject "(in Python v3.10)") *obj1, [PyObject](https://docs.python.org/3/c-api/structures.html#c.PyObject "(in Python v3.10)") *obj2, [PyArrayObject](types-and-structures#c.PyArrayObject "PyArrayObject") *out) New in version 1.6. Same as PyArray_MatrixProduct, but store the result in *out*. The output array must have the correct shape, type, and be C-contiguous, or an exception is raised.
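For 2-d inputs, the product-sum over the last dimension of *obj1* and the second-to-last dimension of *obj2* is ordinary matrix multiplication. A pure-Python sketch using nested lists (hypothetical name, nothing conjugated):

```python
def matrix_product(a, b):
    """2-d case of PyArray_MatrixProduct: C[i][j] = sum_k a[i][k] * b[k][j]."""
    rows, inner, cols = len(a), len(b), len(b[0])
    assert len(a[0]) == inner, "shapes are not compatible"
    return [[sum(a[i][k] * b[k][j] for k in range(inner))
             for j in range(cols)]
            for i in range(rows)]
```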
[PyObject](https://docs.python.org/3/c-api/structures.html#c.PyObject "(in Python v3.10)")*PyArray_EinsteinSum(char*subscripts, [npy_intp](dtype#c.npy_intp "npy_intp")nop, [PyArrayObject](types-and-structures#c.PyArrayObject "PyArrayObject")**op_in, [PyArray_Descr](types-and-structures#c.PyArray_Descr "PyArray_Descr")*dtype, [NPY_ORDER](#c.NPY_ORDER "NPY_ORDER")order, [NPY_CASTING](#c.NPY_CASTING "NPY_CASTING")casting, [PyArrayObject](types-and-structures#c.PyArrayObject "PyArrayObject")*out) New in version 1.6. Applies the Einstein summation convention to the array operands provided, returning a new array or placing the result in *out*. The string in *subscripts* is a comma separated list of index letters. The number of operands is in *nop*, and *op_in* is an array containing those operands. The data type of the output can be forced with *dtype*, the output order can be forced with *order* ([`NPY_KEEPORDER`](#c.NPY_ORDER.NPY_KEEPORDER "NPY_KEEPORDER") is recommended), and when *dtype* is specified, *casting* indicates how permissive the data conversion should be. See the [`einsum`](../generated/numpy.einsum#numpy.einsum "numpy.einsum") function for more details. [PyObject](https://docs.python.org/3/c-api/structures.html#c.PyObject "(in Python v3.10)")*PyArray_CopyAndTranspose([PyObject](https://docs.python.org/3/c-api/structures.html#c.PyObject "(in Python v3.10)")*op) A specialized copy and transpose function that works only for 2-d arrays. The returned array is a transposed copy of *op*. [PyObject](https://docs.python.org/3/c-api/structures.html#c.PyObject "(in Python v3.10)")*PyArray_Correlate([PyObject](https://docs.python.org/3/c-api/structures.html#c.PyObject "(in Python v3.10)")*op1, [PyObject](https://docs.python.org/3/c-api/structures.html#c.PyObject "(in Python v3.10)")*op2, intmode) Compute the 1-d correlation of the 1-d arrays *op1* and *op2* . 
The correlation is computed at each output point by multiplying *op1* by a shifted version of *op2* and summing the result. As a result of the shift, needed values outside of the defined range of *op1* and *op2* are interpreted as zero. The mode determines how many shifts to return: 0 - return only shifts that did not need to assume zero values; 1 - return an object that is the same size as *op1*; 2 - return all possible shifts (any overlap at all is accepted).

#### Notes

This does not compute the usual correlation: if *op2* is larger than *op1*, the arguments are swapped, and the conjugate is never taken for complex arrays. See PyArray_Correlate2 for the usual signal-processing correlation.

[PyObject](https://docs.python.org/3/c-api/structures.html#c.PyObject "(in Python v3.10)") *PyArray_Correlate2([PyObject](https://docs.python.org/3/c-api/structures.html#c.PyObject "(in Python v3.10)") *op1, [PyObject](https://docs.python.org/3/c-api/structures.html#c.PyObject "(in Python v3.10)") *op2, int mode) Updated version of PyArray_Correlate, which uses the usual definition of correlation for 1-d arrays. The correlation is computed at each output point by multiplying *op1* by a shifted version of *op2* and summing the result. As a result of the shift, needed values outside of the defined range of *op1* and *op2* are interpreted as zero. The mode determines how many shifts to return: 0 - return only shifts that did not need to assume zero values; 1 - return an object that is the same size as *op1*; 2 - return all possible shifts (any overlap at all is accepted).
#### Notes Compute z as follows: ``` z[k] = sum_n op1[n] * conj(op2[n+k]) ``` [PyObject](https://docs.python.org/3/c-api/structures.html#c.PyObject "(in Python v3.10)")*PyArray_Where([PyObject](https://docs.python.org/3/c-api/structures.html#c.PyObject "(in Python v3.10)")*condition, [PyObject](https://docs.python.org/3/c-api/structures.html#c.PyObject "(in Python v3.10)")*x, [PyObject](https://docs.python.org/3/c-api/structures.html#c.PyObject "(in Python v3.10)")*y) If both `x` and `y` are `NULL`, then return [`PyArray_Nonzero`](#c.PyArray_Nonzero "PyArray_Nonzero") (*condition*). Otherwise, both *x* and *y* must be given and the object returned is shaped like *condition* and has elements of *x* and *y* where *condition* is respectively True or False. ### Other functions [npy_bool](dtype#c.npy_bool "npy_bool")PyArray_CheckStrides(intelsize, intnd, [npy_intp](dtype#c.npy_intp "npy_intp")numbytes, [npy_intp](dtype#c.npy_intp "npy_intp")const*dims, [npy_intp](dtype#c.npy_intp "npy_intp")const*newstrides) Determine if *newstrides* is a strides array consistent with the memory of an *nd* -dimensional array with shape `dims` and element-size, *elsize*. The *newstrides* array is checked to see if jumping by the provided number of bytes in each direction will ever mean jumping more than *numbytes* which is the assumed size of the available memory segment. If *numbytes* is 0, then an equivalent *numbytes* is computed assuming *nd*, *dims*, and *elsize* refer to a single-segment array. Return [`NPY_TRUE`](#c.NPY_TRUE "NPY_TRUE") if *newstrides* is acceptable, otherwise return [`NPY_FALSE`](#c.NPY_FALSE "NPY_FALSE"). [npy_intp](dtype#c.npy_intp "npy_intp")PyArray_MultiplyList([npy_intp](dtype#c.npy_intp "npy_intp")const*seq, intn) intPyArray_MultiplyIntList(intconst*seq, intn) Both of these routines multiply an *n* -length array, *seq*, of integers and return the result. No overflow checking is performed. 
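The `z[k]` formula above can be sketched in pure Python for real-valued sequences (mode 2, all shifts; out-of-range values are taken as zero, and `conj` is the identity for real inputs; hypothetical name):

```python
def correlate_full(op1, op2):
    """All shifts with any overlap: z[k] = sum_n op1[n] * op2[n + k],
    with out-of-range values treated as zero. Output length is
    len(op1) + len(op2) - 1."""
    n1, n2 = len(op1), len(op2)
    out = []
    for k in range(-(n1 - 1), n2):
        s = 0
        for n in range(n1):
            j = n + k
            if 0 <= j < n2:
                s += op1[n] * op2[j]
        out.append(s)
    return out
```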
int PyArray_CompareLists([npy_intp](dtype#c.npy_intp "npy_intp") const *l1, [npy_intp](dtype#c.npy_intp "npy_intp") const *l2, int n) Given two *n*-length arrays of integers, *l1* and *l2*, return 1 if the lists are identical; otherwise, return 0.

Auxiliary Data With Object Semantics
------------------------------------

New in version 1.7.0.

type NpyAuxData

When working with more complex dtypes which are composed of other dtypes, such as the struct dtype, creating inner loops that manipulate the dtypes requires carrying along additional data. NumPy supports this idea through a struct [`NpyAuxData`](#c.NpyAuxData "NpyAuxData"), mandating a few conventions so that it is possible to do this. Defining an [`NpyAuxData`](#c.NpyAuxData "NpyAuxData") is similar to defining a class in C++, but the object semantics have to be tracked manually since the API is in C. Here's an example for a function which doubles up an element using an element copier function as a primitive.

```
typedef struct {
    NpyAuxData base;
    ElementCopier_Func *func;
    NpyAuxData *funcdata;
} eldoubler_aux_data;

void free_element_doubler_aux_data(NpyAuxData *data)
{
    eldoubler_aux_data *d = (eldoubler_aux_data *)data;
    /* Free the memory owned by this auxdata */
    NPY_AUXDATA_FREE(d->funcdata);
    PyArray_free(d);
}

NpyAuxData *clone_element_doubler_aux_data(NpyAuxData *data)
{
    eldoubler_aux_data *ret = PyArray_malloc(sizeof(eldoubler_aux_data));
    if (ret == NULL) {
        return NULL;
    }
    /* Raw copy of all data */
    memcpy(ret, data, sizeof(eldoubler_aux_data));
    /* Fix up the owned auxdata so we have our own copy */
    ret->funcdata = NPY_AUXDATA_CLONE(ret->funcdata);
    if (ret->funcdata == NULL) {
        PyArray_free(ret);
        return NULL;
    }
    return (NpyAuxData *)ret;
}

NpyAuxData *create_element_doubler_aux_data(
                ElementCopier_Func *func, NpyAuxData *funcdata)
{
    eldoubler_aux_data *ret = PyArray_malloc(sizeof(eldoubler_aux_data));
    if (ret == NULL) {
        PyErr_NoMemory();
        return NULL;
    }
    /* Zero the struct itself (ret is already a pointer) */
    memset(ret, 0, sizeof(eldoubler_aux_data));
    /* base is an embedded struct, so its members are set with '.' */
    ret->base.free = &free_element_doubler_aux_data;
    ret->base.clone = &clone_element_doubler_aux_data;
    ret->func = func;
    ret->funcdata = funcdata;

    return (NpyAuxData *)ret;
}
```

type NpyAuxData_FreeFunc The function pointer type for NpyAuxData free functions.

type NpyAuxData_CloneFunc The function pointer type for NpyAuxData clone functions. These functions should never set the Python exception on error, because they may be called from a multi-threaded context.

void NPY_AUXDATA_FREE([NpyAuxData](#c.NpyAuxData "NpyAuxData") *auxdata) A macro which calls the auxdata's free function appropriately; it does nothing if *auxdata* is NULL.

[NpyAuxData](#c.NpyAuxData "NpyAuxData") *NPY_AUXDATA_CLONE([NpyAuxData](#c.NpyAuxData "NpyAuxData") *auxdata) A macro which calls the auxdata's clone function appropriately, returning a deep copy of the auxiliary data.

Array Iterators
---------------

As of NumPy 1.6.0, these array iterators are superseded by the new array iterator, [`NpyIter`](iterator#c.NpyIter "NpyIter"). An array iterator is a simple way to access the elements of an N-dimensional array quickly and efficiently. Section [2](#sec-array-iterator) provides more description and examples of this useful approach to looping over an array.

[PyObject](https://docs.python.org/3/c-api/structures.html#c.PyObject "(in Python v3.10)") *PyArray_IterNew([PyObject](https://docs.python.org/3/c-api/structures.html#c.PyObject "(in Python v3.10)") *arr) Return an array iterator object from the array, *arr*. This is equivalent to *arr*.**flat**. The array iterator object makes it easy to loop over an N-dimensional non-contiguous array in C-style contiguous fashion.

[PyObject](https://docs.python.org/3/c-api/structures.html#c.PyObject "(in Python v3.10)") *PyArray_IterAllButAxis([PyObject](https://docs.python.org/3/c-api/structures.html#c.PyObject "(in Python v3.10)") *arr, int *axis) Return an array iterator that will iterate over all axes but the one provided in **axis*.
The returned iterator cannot be used with [`PyArray_ITER_GOTO1D`](#c.PyArray_ITER_GOTO1D "PyArray_ITER_GOTO1D"). This iterator could be used to write something similar to what ufuncs do, wherein the loop over the largest axis is done by a separate sub-routine. If **axis* is negative, then **axis* will be set to the axis having the smallest stride and that axis will be used.

[PyObject](https://docs.python.org/3/c-api/structures.html#c.PyObject "(in Python v3.10)") *PyArray_BroadcastToShape([PyObject](https://docs.python.org/3/c-api/structures.html#c.PyObject "(in Python v3.10)") *arr, [npy_intp](dtype#c.npy_intp "npy_intp") const *dimensions, int nd) Return an array iterator that is broadcast to iterate as an array of the shape provided by *dimensions* and *nd*.

int PyArrayIter_Check([PyObject](https://docs.python.org/3/c-api/structures.html#c.PyObject "(in Python v3.10)") *op) Evaluates true if *op* is an array iterator (or an instance of a subclass of the array iterator type).

void PyArray_ITER_RESET([PyObject](https://docs.python.org/3/c-api/structures.html#c.PyObject "(in Python v3.10)") *iterator) Reset an *iterator* to the beginning of the array.

void PyArray_ITER_NEXT([PyObject](https://docs.python.org/3/c-api/structures.html#c.PyObject "(in Python v3.10)") *iterator) Increment the index and the dataptr members of the *iterator* to point to the next element of the array. If the array is not (C-style) contiguous, also increment the N-dimensional coordinates array.

void *PyArray_ITER_DATA([PyObject](https://docs.python.org/3/c-api/structures.html#c.PyObject "(in Python v3.10)") *iterator) A pointer to the current element of the array.
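What PyArray_ITER_NEXT does to the coordinates array amounts to incrementing an N-d index in C (row-major) order, carrying into the next-slower axis on overflow. A pure-Python sketch (hypothetical name):

```python
def iter_next(coords, shape):
    """Advance an N-d index in C order, in place. Returns False once
    every element has been visited (coords wraps back to all zeros)."""
    for axis in reversed(range(len(shape))):
        coords[axis] += 1
        if coords[axis] < shape[axis]:
            return True
        coords[axis] = 0        # overflow: carry into the previous axis
    return False
```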
voidPyArray_ITER_GOTO([PyObject](https://docs.python.org/3/c-api/structures.html#c.PyObject "(in Python v3.10)")*iterator, [npy_intp](dtype#c.npy_intp "npy_intp")*destination) Set the *iterator* index, dataptr, and coordinates members to the location in the array indicated by the N-dimensional c-array, *destination*, which must have size at least *iterator* ->nd_m1+1. voidPyArray_ITER_GOTO1D([PyObject](https://docs.python.org/3/c-api/structures.html#c.PyObject "(in Python v3.10)")*iterator, [npy_intp](dtype#c.npy_intp "npy_intp")index) Set the *iterator* index and dataptr to the location in the array indicated by the integer *index* which points to an element in the C-styled flattened array. intPyArray_ITER_NOTDONE([PyObject](https://docs.python.org/3/c-api/structures.html#c.PyObject "(in Python v3.10)")*iterator) Evaluates TRUE as long as the iterator has not looped through all of the elements, otherwise it evaluates FALSE. Broadcasting (multi-iterators) ------------------------------ [PyObject](https://docs.python.org/3/c-api/structures.html#c.PyObject "(in Python v3.10)")*PyArray_MultiIterNew(intnum, ...) A simplified interface to broadcasting. This function takes the number of arrays to broadcast and then *num* extra ( [`PyObject *`](https://docs.python.org/3/c-api/structures.html#c.PyObject "(in Python v3.10)") ) arguments. These arguments are converted to arrays and iterators are created. [`PyArray_Broadcast`](#c.PyArray_Broadcast "PyArray_Broadcast") is then called on the resulting multi-iterator object. The resulting, broadcasted mult-iterator object is then returned. A broadcasted operation can then be performed using a single loop and using [`PyArray_MultiIter_NEXT`](#c.PyArray_MultiIter_NEXT "PyArray_MultiIter_NEXT") (..) voidPyArray_MultiIter_RESET([PyObject](https://docs.python.org/3/c-api/structures.html#c.PyObject "(in Python v3.10)")*multi) Reset all the iterators to the beginning in a multi-iterator object, *multi*. 
voidPyArray_MultiIter_NEXT([PyObject](https://docs.python.org/3/c-api/structures.html#c.PyObject "(in Python v3.10)")*multi) Advance each iterator in a multi-iterator object, *multi*, to its next (broadcasted) element. void*PyArray_MultiIter_DATA([PyObject](https://docs.python.org/3/c-api/structures.html#c.PyObject "(in Python v3.10)")*multi, inti) Return the data-pointer of the *i*-th iterator in a multi-iterator object. voidPyArray_MultiIter_NEXTi([PyObject](https://docs.python.org/3/c-api/structures.html#c.PyObject "(in Python v3.10)")*multi, inti) Advance the pointer of only the *i*-th iterator. voidPyArray_MultiIter_GOTO([PyObject](https://docs.python.org/3/c-api/structures.html#c.PyObject "(in Python v3.10)")*multi, [npy_intp](dtype#c.npy_intp "npy_intp")*destination) Advance each iterator in a multi-iterator object, *multi*, to the given *N*-dimensional *destination* where *N* is the number of dimensions in the broadcasted array. voidPyArray_MultiIter_GOTO1D([PyObject](https://docs.python.org/3/c-api/structures.html#c.PyObject "(in Python v3.10)")*multi, [npy_intp](dtype#c.npy_intp "npy_intp")index) Advance each iterator in a multi-iterator object, *multi*, to the corresponding location of the *index* into the flattened broadcasted array. intPyArray_MultiIter_NOTDONE([PyObject](https://docs.python.org/3/c-api/structures.html#c.PyObject "(in Python v3.10)")*multi) Evaluates TRUE as long as the multi-iterator has not looped through all of the elements (of the broadcasted result), otherwise it evaluates FALSE. intPyArray_Broadcast([PyArrayMultiIterObject](types-and-structures#c.PyArrayMultiIterObject "PyArrayMultiIterObject")*mit) This function encapsulates the broadcasting rules. The *mit* container should already contain iterators for all the arrays that need to be broadcast. On return, these iterators will be adjusted so that iteration over each simultaneously will accomplish the broadcasting. 
A negative number is returned if an error occurs. intPyArray_RemoveSmallest([PyArrayMultiIterObject](types-and-structures#c.PyArrayMultiIterObject "PyArrayMultiIterObject")*mit) This function takes a multi-iterator object that has been previously “broadcasted,” finds the dimension with the smallest “sum of strides” in the broadcasted result and adapts all the iterators so as not to iterate over that dimension (by effectively making them of length-1 in that dimension). The corresponding dimension is returned unless *mit* ->nd is 0, then -1 is returned. This function is useful for constructing ufunc-like routines that broadcast their inputs correctly and then call a strided 1-d version of the routine as the inner-loop. This 1-d version is usually optimized for speed and for this reason the loop should be performed over the axis that won’t require large stride jumps. Neighborhood iterator --------------------- New in version 1.4.0. Neighborhood iterators are subclasses of the iterator object, and can be used to iterate over a neighborhood of a point. For example, you may want to iterate over every voxel of a 3d image, and for every such voxel, iterate over a hypercube. Neighborhood iterators automatically handle boundaries, thus making this kind of code much easier to write than handling boundaries manually, at the cost of a slight overhead. [PyObject](https://docs.python.org/3/c-api/structures.html#c.PyObject "(in Python v3.10)")*PyArray_NeighborhoodIterNew([PyArrayIterObject](types-and-structures#c.PyArrayIterObject "PyArrayIterObject")*iter, [npy_intp](dtype#c.npy_intp "npy_intp")*bounds, intmode, [PyArrayObject](types-and-structures#c.PyArrayObject "PyArrayObject")*fill_value) This function creates a new neighborhood iterator from an existing iterator. The neighborhood will be computed relatively to the position currently pointed by *iter*, the bounds define the shape of the neighborhood iterator, and the mode argument the boundaries handling mode. 
The *bounds* argument is expected to be an array of 2 * iter->ao->nd values, such that the range bounds[2*i] to bounds[2*i+1] defines the range where to walk for dimension i (both bounds are included in the walked coordinates). The bounds should be ordered for each dimension (bounds[2*i] <= bounds[2*i+1]). The mode should be one of: NPY_NEIGHBORHOOD_ITER_ZERO_PADDING Zero padding. Outside bounds values will be 0. NPY_NEIGHBORHOOD_ITER_ONE_PADDING One padding. Outside bounds values will be 1. NPY_NEIGHBORHOOD_ITER_CONSTANT_PADDING Constant padding. Outside bounds values will be the same as the first item in fill_value. NPY_NEIGHBORHOOD_ITER_MIRROR_PADDING Mirror padding. Outside bounds values will be as if the array items were mirrored. For example, for the array [1, 2, 3, 4], x[-2] will be 2, x[-1] will be 1, x[4] will be 4, x[5] will be 3, etc.
 NPY_NEIGHBORHOOD_ITER_CIRCULAR_PADDING Circular padding. Outside bounds values will be as if the array was repeated. For example, for the array [1, 2, 3, 4], x[-2] will be 3, x[-1] will be 4, x[4] will be 1, x[5] will be 2, etc.
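The mirror and circular behaviours have direct Python-level counterparts in `np.pad` (the `'symmetric'` and `'wrap'` modes); a quick sketch, illustrative only, that reproduces the out-of-bounds values described above:

```python
import numpy as np

x = np.array([1, 2, 3, 4])

# 'symmetric' matches mirror padding: x[-2] -> 2, x[-1] -> 1, x[4] -> 4, x[5] -> 3
mirror = np.pad(x, 2, mode='symmetric')    # [2 1 1 2 3 4 4 3]

# 'wrap' matches circular padding: x[-2] -> 3, x[-1] -> 4, x[4] -> 1, x[5] -> 2
circular = np.pad(x, 2, mode='wrap')       # [3 4 1 2 3 4 1 2]

print(mirror, circular)
```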
 If the mode is constant filling (`NPY_NEIGHBORHOOD_ITER_CONSTANT_PADDING`), fill_value should point to an array object which holds the filling value (the first item will be the filling value if the array contains more than one item). For other cases, fill_value may be NULL.

* The iterator holds a reference to iter
* Return NULL on failure (in which case the reference count of iter is not changed)
* iter itself can be a Neighborhood iterator: this can be useful, e.g. for automatic boundary handling
* the object returned by this function should be safe to use as a normal iterator
* If the position of iter is changed, any subsequent call to PyArrayNeighborhoodIter_Next is undefined behavior, and PyArrayNeighborhoodIter_Reset must be called.
* If the position of iter is not the beginning of the data and the underlying data for iter is contiguous, the iterator will point to the start of the data instead of the position pointed to by iter. To avoid this situation, iter should be moved to the required position only after the creation of the iterator, and PyArrayNeighborhoodIter_Reset must be called.

``` PyArrayIterObject *iter; PyArrayNeighborhoodIterObject *neigh_iter; npy_intp bounds[] = {-1, 1, -1, 1}; /* a 3x3 neighborhood in 2-D */ npy_intp i, j; /* x is an existing 2-d array object */ iter = (PyArrayIterObject *)PyArray_IterNew(x); neigh_iter = (PyArrayNeighborhoodIterObject*)PyArray_NeighborhoodIterNew( iter, bounds, NPY_NEIGHBORHOOD_ITER_ZERO_PADDING, NULL); for (i = 0; i < iter->size; ++i) { for (j = 0; j < neigh_iter->size; ++j) { /* Walk around the item currently pointed to by iter->dataptr */ PyArrayNeighborhoodIter_Next(neigh_iter); } /* Move to the next point of iter */ PyArray_ITER_NEXT(iter); PyArrayNeighborhoodIter_Reset(neigh_iter); } ``` intPyArrayNeighborhoodIter_Reset([PyArrayNeighborhoodIterObject](types-and-structures#c.PyArrayNeighborhoodIterObject "PyArrayNeighborhoodIterObject")*iter) Reset the iterator position to the first point of the neighborhood. 
This should be called whenever the iter argument given at PyArray_NeighborhoodIterNew is changed (see example). intPyArrayNeighborhoodIter_Next([PyArrayNeighborhoodIterObject](types-and-structures#c.PyArrayNeighborhoodIterObject "PyArrayNeighborhoodIterObject")*iter) After this call, iter->dataptr points to the next point of the neighborhood. Calling this function after every point of the neighborhood has been visited is undefined. Array mapping ------------- Array mapping is the machinery behind advanced indexing. [PyObject](https://docs.python.org/3/c-api/structures.html#c.PyObject "(in Python v3.10)")*PyArray_MapIterArray([PyArrayObject](types-and-structures#c.PyArrayObject "PyArrayObject")*a, [PyObject](https://docs.python.org/3/c-api/structures.html#c.PyObject "(in Python v3.10)")*index) Use advanced indexing to iterate an array. voidPyArray_MapIterSwapAxes(PyArrayMapIterObject*mit, [PyArrayObject](types-and-structures#c.PyArrayObject "PyArrayObject")**ret, intgetmap) Swap the axes to or from their inserted form. `MapIter` always puts the advanced (array) indices first in the iteration. But if they are consecutive, it will insert/transpose them back before returning. This is stored as `mit->consec != 0` (the place where they are inserted). For assignments, the opposite happens: the values to be assigned are transposed (`getmap=1` instead of `getmap=0`). `getmap=0` and `getmap=1` undo the other operation. voidPyArray_MapIterNext(PyArrayMapIterObject*mit) This function needs to update the state of the map iterator and point `mit->dataptr` to the memory-location of the next object. Note that this function never handles an extra operand but provides compatibility for an old (exposed) API. 
[PyObject](https://docs.python.org/3/c-api/structures.html#c.PyObject "(in Python v3.10)")*PyArray_MapIterArrayCopyIfOverlap([PyArrayObject](types-and-structures#c.PyArrayObject "PyArrayObject")*a, [PyObject](https://docs.python.org/3/c-api/structures.html#c.PyObject "(in Python v3.10)")*index, intcopy_if_overlap, [PyArrayObject](types-and-structures#c.PyArrayObject "PyArrayObject")*extra_op) Similar to [`PyArray_MapIterArray`](#c.PyArray_MapIterArray "PyArray_MapIterArray") but with an additional `copy_if_overlap` argument. If `copy_if_overlap != 0`, checks if `a` has memory overlap with any of the arrays in `index` and with `extra_op`, and makes copies as appropriate to avoid problems if the input is modified during the iteration. `iter->array` may contain a copied array (WRITEBACKIFCOPY set). Array Scalars ------------- [PyObject](https://docs.python.org/3/c-api/structures.html#c.PyObject "(in Python v3.10)")*PyArray_Return([PyArrayObject](types-and-structures#c.PyArrayObject "PyArrayObject")*arr) This function steals a reference to *arr*. This function checks to see if *arr* is a 0-dimensional array and, if so, returns the appropriate array scalar. It should be used whenever 0-dimensional arrays could be returned to Python. [PyObject](https://docs.python.org/3/c-api/structures.html#c.PyObject "(in Python v3.10)")*PyArray_Scalar(void*data, [PyArray_Descr](types-and-structures#c.PyArray_Descr "PyArray_Descr")*dtype, [PyObject](https://docs.python.org/3/c-api/structures.html#c.PyObject "(in Python v3.10)")*base) Return an array scalar object of the given *dtype* by **copying** from memory pointed to by *data*. *base* is expected to be the array object that is the owner of the data. *base* is required if `dtype` is a `void` scalar, or if the `NPY_USE_GETITEM` flag is set and it is known that the `getitem` method uses the `arr` argument without checking if it is `NULL`. Otherwise `base` may be `NULL`. 
If the data is not in native byte order (as indicated by `dtype->byteorder`) then this function will byteswap the data, because array scalars are always in correct machine-byte order. [PyObject](https://docs.python.org/3/c-api/structures.html#c.PyObject "(in Python v3.10)")*PyArray_ToScalar(void*data, [PyArrayObject](types-and-structures#c.PyArrayObject "PyArrayObject")*arr) Return an array scalar object of the type and itemsize indicated by the array object *arr* copied from the memory pointed to by *data* and swapping if the data in *arr* is not in machine byte-order. [PyObject](https://docs.python.org/3/c-api/structures.html#c.PyObject "(in Python v3.10)")*PyArray_FromScalar([PyObject](https://docs.python.org/3/c-api/structures.html#c.PyObject "(in Python v3.10)")*scalar, [PyArray_Descr](types-and-structures#c.PyArray_Descr "PyArray_Descr")*outcode) Return a 0-dimensional array of type determined by *outcode* from *scalar* which should be an array-scalar object. If *outcode* is NULL, then the type is determined from *scalar*. voidPyArray_ScalarAsCtype([PyObject](https://docs.python.org/3/c-api/structures.html#c.PyObject "(in Python v3.10)")*scalar, void*ctypeptr) Return in *ctypeptr* a pointer to the actual value in an array scalar. There is no error checking so *scalar* must be an array-scalar object, and ctypeptr must have enough space to hold the correct type. For flexible-sized types, a pointer to the data is copied into the memory of *ctypeptr*, for all other types, the actual data is copied into the address pointed to by *ctypeptr*. voidPyArray_CastScalarToCtype([PyObject](https://docs.python.org/3/c-api/structures.html#c.PyObject "(in Python v3.10)")*scalar, void*ctypeptr, [PyArray_Descr](types-and-structures#c.PyArray_Descr "PyArray_Descr")*outcode) Return the data (cast to the data type indicated by *outcode*) from the array-scalar, *scalar*, into the memory pointed to by *ctypeptr* (which must be large enough to handle the incoming memory). 
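The array-scalar machinery described above is visible from Python: indexing a 0-dimensional array with an empty tuple returns an array scalar of the matching type, which is what `PyArray_Return`/`PyArray_ToScalar` produce on the C side. A short illustrative sketch:

```python
import numpy as np

# A 0-d array and the array scalar extracted from it.
zero_d = np.array(3.5)     # 0-dimensional float64 array
scalar = zero_d[()]        # -> np.float64 array scalar
print(type(scalar).__name__, float(scalar))
```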
[PyObject](https://docs.python.org/3/c-api/structures.html#c.PyObject "(in Python v3.10)")*PyArray_TypeObjectFromType(inttype) Returns a scalar type-object from a type-number, *type* . Equivalent to [`PyArray_DescrFromType`](#c.PyArray_DescrFromType "PyArray_DescrFromType") (*type*)->typeobj except for reference counting and error-checking. Returns a new reference to the typeobject on success or `NULL` on failure. [NPY_SCALARKIND](#c.NPY_SCALARKIND "NPY_SCALARKIND")PyArray_ScalarKind(inttypenum, [PyArrayObject](types-and-structures#c.PyArrayObject "PyArrayObject")**arr) See the function [`PyArray_MinScalarType`](#c.PyArray_MinScalarType "PyArray_MinScalarType") for an alternative mechanism introduced in NumPy 1.6.0. Return the kind of scalar represented by *typenum* and the array in **arr* (if *arr* is not `NULL` ). The array is assumed to be rank-0 and only used if *typenum* represents a signed integer. If *arr* is not `NULL` and the first element is negative then [`NPY_INTNEG_SCALAR`](#c.NPY_SCALARKIND.NPY_INTNEG_SCALAR "NPY_INTNEG_SCALAR") is returned, otherwise [`NPY_INTPOS_SCALAR`](#c.NPY_SCALARKIND.NPY_INTPOS_SCALAR "NPY_INTPOS_SCALAR") is returned. The possible return values are the enumerated values in [`NPY_SCALARKIND`](#c.NPY_SCALARKIND "NPY_SCALARKIND"). intPyArray_CanCoerceScalar(charthistype, charneededtype, [NPY_SCALARKIND](#c.NPY_SCALARKIND "NPY_SCALARKIND")scalar) See the function [`PyArray_ResultType`](#c.PyArray_ResultType "PyArray_ResultType") for details of NumPy type promotion, updated in NumPy 1.6.0. Implements the rules for scalar coercion. Scalars are only silently coerced from thistype to neededtype if this function returns nonzero. If scalar is [`NPY_NOSCALAR`](#c.NPY_SCALARKIND.NPY_NOSCALAR "NPY_NOSCALAR"), then this function is equivalent to [`PyArray_CanCastSafely`](#c.PyArray_CanCastSafely "PyArray_CanCastSafely"). The rule is that scalars of the same KIND can be coerced into arrays of the same KIND. 
This rule means that high-precision scalars will never cause low-precision arrays of the same KIND to be upcast. Data-type descriptors --------------------- Warning Data-type objects must be reference counted so be aware of the action on the data-type reference of different C-API calls. The standard rule is that when a data-type object is returned it is a new reference. Functions that take [PyArray_Descr](types-and-structures#c.PyArray_Descr "PyArray_Descr")* objects and return arrays steal references to the data-type of their inputs unless otherwise noted. Therefore, you must own a reference to any data-type object used as input to such a function. intPyArray_DescrCheck([PyObject](https://docs.python.org/3/c-api/structures.html#c.PyObject "(in Python v3.10)")*obj) Evaluates as true if *obj* is a data-type object ( [PyArray_Descr](types-and-structures#c.PyArray_Descr "PyArray_Descr")* ). [PyArray_Descr](types-and-structures#c.PyArray_Descr "PyArray_Descr")*PyArray_DescrNew([PyArray_Descr](types-and-structures#c.PyArray_Descr "PyArray_Descr")*obj) Return a new data-type object copied from *obj* (the fields reference is just updated so that the new object points to the same fields dictionary if any). [PyArray_Descr](types-and-structures#c.PyArray_Descr "PyArray_Descr")*PyArray_DescrNewFromType(inttypenum) Create a new data-type object from the built-in (or user-registered) data-type indicated by *typenum*. The fields of builtin types should not be changed; this function creates a new copy of the [`PyArray_Descr`](types-and-structures#c.PyArray_Descr "PyArray_Descr") structure so that you can fill it in as appropriate. This function is especially needed for flexible data-types which need to have a new elsize member in order to be meaningful in array construction. 
[PyArray_Descr](types-and-structures#c.PyArray_Descr "PyArray_Descr")*PyArray_DescrNewByteorder([PyArray_Descr](types-and-structures#c.PyArray_Descr "PyArray_Descr")*obj, charnewendian) Create a new data-type object with the byteorder set according to *newendian*. All referenced data-type objects (in subdescr and fields members of the data-type object) are also changed (recursively). The value of *newendian* is one of these macros: NPY_IGNORE NPY_SWAP NPY_NATIVE NPY_LITTLE NPY_BIG If a byteorder of [`NPY_IGNORE`](#c.NPY_IGNORE "NPY_IGNORE") is encountered it is left alone. If newendian is [`NPY_SWAP`](#c.NPY_SWAP "NPY_SWAP"), then all byte-orders are swapped. Other valid newendian values are [`NPY_NATIVE`](#c.NPY_NATIVE "NPY_NATIVE"), [`NPY_LITTLE`](#c.NPY_LITTLE "NPY_LITTLE"), and [`NPY_BIG`](#c.NPY_BIG "NPY_BIG") which all cause the returned data-type descriptor (and all its referenced data-type descriptors) to have the corresponding byte-order. [PyArray_Descr](types-and-structures#c.PyArray_Descr "PyArray_Descr")*PyArray_DescrFromObject([PyObject](https://docs.python.org/3/c-api/structures.html#c.PyObject "(in Python v3.10)")*op, [PyArray_Descr](types-and-structures#c.PyArray_Descr "PyArray_Descr")*mintype) Determine an appropriate data-type object from the object *op* (which should be a “nested” sequence object) and the minimum data-type descriptor mintype (which can be `NULL` ). Similar in behavior to array(*op*).dtype. Don’t confuse this function with [`PyArray_DescrConverter`](#c.PyArray_DescrConverter "PyArray_DescrConverter"). This function essentially looks at all the objects in the (nested) sequence and determines the data-type from the elements it finds. [PyArray_Descr](types-and-structures#c.PyArray_Descr "PyArray_Descr")*PyArray_DescrFromScalar([PyObject](https://docs.python.org/3/c-api/structures.html#c.PyObject "(in Python v3.10)")*scalar) Return a data-type object from an array-scalar object. 
No checking is done to be sure that *scalar* is an array scalar. If no suitable data-type can be determined, then a data-type of [`NPY_OBJECT`](dtype#c.NPY_OBJECT "NPY_OBJECT") is returned by default. [PyArray_Descr](types-and-structures#c.PyArray_Descr "PyArray_Descr")*PyArray_DescrFromType(inttypenum) Returns a data-type object corresponding to *typenum*. The *typenum* can be one of the enumerated types, a character code for one of the enumerated types, or a user-defined type. If you want to use a flexible-size array, then you need to use a flexible *typenum* and set the result’s `elsize` parameter to the desired size. The typenum is one of the [`NPY_TYPES`](dtype#c.NPY_TYPES "NPY_TYPES"). intPyArray_DescrConverter([PyObject](https://docs.python.org/3/c-api/structures.html#c.PyObject "(in Python v3.10)")*obj, [PyArray_Descr](types-and-structures#c.PyArray_Descr "PyArray_Descr")**dtype) Convert any compatible Python object, *obj*, to a data-type object in *dtype*. A large number of Python objects can be converted to data-type objects. See [Data type objects (dtype)](../arrays.dtypes#arrays-dtypes) for a complete description. This version of the converter converts None objects to a [`NPY_DEFAULT_TYPE`](dtype#c.NPY_DEFAULT_TYPE "NPY_DEFAULT_TYPE") data-type object. This function can be used with the “O&” character code in [`PyArg_ParseTuple`](https://docs.python.org/3/c-api/arg.html#c.PyArg_ParseTuple "(in Python v3.10)") processing. intPyArray_DescrConverter2([PyObject](https://docs.python.org/3/c-api/structures.html#c.PyObject "(in Python v3.10)")*obj, [PyArray_Descr](types-and-structures#c.PyArray_Descr "PyArray_Descr")**dtype) Convert any compatible Python object, *obj*, to a data-type object in *dtype*. This version of the converter converts None objects so that the returned data-type is `NULL`. This function can also be used with the “O&” character in PyArg_ParseTuple processing. 
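`np.dtype(...)` is the Python-level entry point to this converter machinery, including the None-to-default-type behaviour described above; a short illustrative sketch:

```python
import numpy as np

dt_str  = np.dtype('f4')         # typestring
dt_type = np.dtype(np.float32)   # scalar type object
dt_none = np.dtype(None)         # None -> the default type (double)
print(dt_str, dt_type, dt_none)
```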
intPyArray_DescrAlignConverter([PyObject](https://docs.python.org/3/c-api/structures.html#c.PyObject "(in Python v3.10)")*obj, [PyArray_Descr](types-and-structures#c.PyArray_Descr "PyArray_Descr")**dtype) Like [`PyArray_DescrConverter`](#c.PyArray_DescrConverter "PyArray_DescrConverter") except it aligns C-struct-like objects on word-boundaries as the compiler would. intPyArray_DescrAlignConverter2([PyObject](https://docs.python.org/3/c-api/structures.html#c.PyObject "(in Python v3.10)")*obj, [PyArray_Descr](types-and-structures#c.PyArray_Descr "PyArray_Descr")**dtype) Like [`PyArray_DescrConverter2`](#c.PyArray_DescrConverter2 "PyArray_DescrConverter2") except it aligns C-struct-like objects on word-boundaries as the compiler would. [PyObject](https://docs.python.org/3/c-api/structures.html#c.PyObject "(in Python v3.10)")*PyArray_FieldNames([PyObject](https://docs.python.org/3/c-api/structures.html#c.PyObject "(in Python v3.10)")*dict) Take the fields dictionary, *dict*, such as the one attached to a data-type object and construct an ordered-list of field names such as is stored in the names field of the [`PyArray_Descr`](types-and-structures#c.PyArray_Descr "PyArray_Descr") object. Conversion Utilities -------------------- ### For use with [`PyArg_ParseTuple`](https://docs.python.org/3/c-api/arg.html#c.PyArg_ParseTuple "(in Python v3.10)") All of these functions can be used in [`PyArg_ParseTuple`](https://docs.python.org/3/c-api/arg.html#c.PyArg_ParseTuple "(in Python v3.10)") with the “O&” format specifier to automatically convert any Python object to the required C-object. All of these functions return [`NPY_SUCCEED`](#c.NPY_SUCCEED "NPY_SUCCEED") if successful and [`NPY_FAIL`](#c.NPY_FAIL "NPY_FAIL") if not. The first argument to all of these functions is a Python object. The second argument is the **address** of the C-type to convert the Python object to. Warning Be sure to understand what steps you should take to manage the memory when using these conversion functions. These functions can require freeing memory, and/or altering the reference counts of specific objects based on your use. intPyArray_Converter([PyObject](https://docs.python.org/3/c-api/structures.html#c.PyObject "(in Python v3.10)")*obj, [PyObject](https://docs.python.org/3/c-api/structures.html#c.PyObject "(in Python v3.10)")**address) Convert any Python object to a [`PyArrayObject`](types-and-structures#c.PyArrayObject "PyArrayObject"). If [`PyArray_Check`](#c.PyArray_Check "PyArray_Check") (*obj*) is TRUE then its reference count is incremented and a reference placed in *address*. If *obj* is not an array, then convert it to an array using [`PyArray_FromAny`](#c.PyArray_FromAny "PyArray_FromAny") . No matter what is returned, you must DECREF the object returned by this routine in *address* when you are done with it. intPyArray_OutputConverter([PyObject](https://docs.python.org/3/c-api/structures.html#c.PyObject "(in Python v3.10)")*obj, [PyArrayObject](types-and-structures#c.PyArrayObject "PyArrayObject")**address) This is a default converter for output arrays given to functions. If *obj* is [`Py_None`](https://docs.python.org/3/c-api/none.html#c.Py_None "(in Python v3.10)") or `NULL`, then **address* will be `NULL` but the call will succeed. If [`PyArray_Check`](#c.PyArray_Check "PyArray_Check") ( *obj*) is TRUE then it is returned in **address* without incrementing its reference count. 
intPyArray_IntpConverter([PyObject](https://docs.python.org/3/c-api/structures.html#c.PyObject "(in Python v3.10)")*obj, [PyArray_Dims](types-and-structures#c.PyArray_Dims "PyArray_Dims")*seq) Convert any Python sequence, *obj*, smaller than [`NPY_MAXDIMS`](#c.NPY_MAXDIMS "NPY_MAXDIMS") to a C-array of [`npy_intp`](dtype#c.npy_intp "npy_intp"). The Python object could also be a single number. The *seq* variable is a pointer to a structure with members ptr and len. On successful return, *seq* ->ptr contains a pointer to memory that must be freed, by calling [`PyDimMem_FREE`](#c.PyDimMem_FREE "PyDimMem_FREE"), to avoid a memory leak. The restriction on memory size allows this converter to be conveniently used for sequences intended to be interpreted as array shapes. intPyArray_BufferConverter([PyObject](https://docs.python.org/3/c-api/structures.html#c.PyObject "(in Python v3.10)")*obj, [PyArray_Chunk](types-and-structures#c.PyArray_Chunk "PyArray_Chunk")*buf) Convert any Python object, *obj*, with a (single-segment) buffer interface to a variable with members that detail the object’s use of its chunk of memory. The *buf* variable is a pointer to a structure with base, ptr, len, and flags members. The [`PyArray_Chunk`](types-and-structures#c.PyArray_Chunk "PyArray_Chunk") structure is binary compatible with Python’s buffer object (through its len member on 32-bit platforms and its ptr member on 64-bit platforms or in Python 2.5). On return, the base member is set to *obj* (or its base if *obj* is already a buffer object pointing to another object). If you need to hold on to the memory be sure to INCREF the base member. The chunk of memory is pointed to by *buf* ->ptr member and has length *buf* ->len. The flags member of *buf* is [`NPY_ARRAY_ALIGNED`](#c.NPY_ARRAY_ALIGNED "NPY_ARRAY_ALIGNED") with the [`NPY_ARRAY_WRITEABLE`](#c.NPY_ARRAY_WRITEABLE "NPY_ARRAY_WRITEABLE") flag set if *obj* has a writeable buffer interface. 
intPyArray_AxisConverter([PyObject](https://docs.python.org/3/c-api/structures.html#c.PyObject "(in Python v3.10)")*obj, int*axis) Convert a Python object, *obj*, representing an axis argument to the proper value for passing to the functions that take an integer axis. Specifically, if *obj* is None, *axis* is set to [`NPY_MAXDIMS`](#c.NPY_MAXDIMS "NPY_MAXDIMS") which is interpreted correctly by the C-API functions that take axis arguments. intPyArray_BoolConverter([PyObject](https://docs.python.org/3/c-api/structures.html#c.PyObject "(in Python v3.10)")*obj, [npy_bool](dtype#c.npy_bool "npy_bool")*value) Convert any Python object, *obj*, to [`NPY_TRUE`](#c.NPY_TRUE "NPY_TRUE") or [`NPY_FALSE`](#c.NPY_FALSE "NPY_FALSE"), and place the result in *value*. intPyArray_ByteorderConverter([PyObject](https://docs.python.org/3/c-api/structures.html#c.PyObject "(in Python v3.10)")*obj, char*endian) Convert Python strings into the corresponding byte-order character: ‘>’, ‘<’, ‘s’, ‘=’, or ‘|’. intPyArray_SortkindConverter([PyObject](https://docs.python.org/3/c-api/structures.html#c.PyObject "(in Python v3.10)")*obj, [NPY_SORTKIND](#c.NPY_SORTKIND "NPY_SORTKIND")*sort) Convert Python strings into one of [`NPY_QUICKSORT`](#c.NPY_SORTKIND.NPY_QUICKSORT "NPY_QUICKSORT") (starts with ‘q’ or ‘Q’), [`NPY_HEAPSORT`](#c.NPY_SORTKIND.NPY_HEAPSORT "NPY_HEAPSORT") (starts with ‘h’ or ‘H’), [`NPY_MERGESORT`](#c.NPY_SORTKIND.NPY_MERGESORT "NPY_MERGESORT") (starts with ‘m’ or ‘M’) or [`NPY_STABLESORT`](#c.NPY_SORTKIND.NPY_STABLESORT "NPY_STABLESORT") (starts with ‘t’ or ‘T’). [`NPY_MERGESORT`](#c.NPY_SORTKIND.NPY_MERGESORT "NPY_MERGESORT") and [`NPY_STABLESORT`](#c.NPY_SORTKIND.NPY_STABLESORT "NPY_STABLESORT") are aliased to each other for backwards compatibility and may refer to one of several stable sorting algorithms depending on the data type. 
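The sort-kind strings that `PyArray_SortkindConverter` recognizes are the same ones accepted by `np.sort` at the Python level, including the 'mergesort'/'stable' aliasing noted above; a small illustrative sketch:

```python
import numpy as np

a = np.array([3, 1, 2, 1])
# 'mergesort' and 'stable' are aliases for the same stable sort.
s1 = np.sort(a, kind='stable')
s2 = np.sort(a, kind='mergesort')
print(s1.tolist(), s2.tolist())
```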
intPyArray_SearchsideConverter([PyObject](https://docs.python.org/3/c-api/structures.html#c.PyObject "(in Python v3.10)")*obj, [NPY_SEARCHSIDE](#c.NPY_SEARCHSIDE "NPY_SEARCHSIDE")*side) Convert Python strings into one of [`NPY_SEARCHLEFT`](#c.NPY_SEARCHSIDE.NPY_SEARCHLEFT "NPY_SEARCHLEFT") (starts with ‘l’ or ‘L’), or [`NPY_SEARCHRIGHT`](#c.NPY_SEARCHSIDE.NPY_SEARCHRIGHT "NPY_SEARCHRIGHT") (starts with ‘r’ or ‘R’). intPyArray_OrderConverter([PyObject](https://docs.python.org/3/c-api/structures.html#c.PyObject "(in Python v3.10)")*obj, [NPY_ORDER](#c.NPY_ORDER "NPY_ORDER")*order) Convert the Python strings ‘C’, ‘F’, ‘A’, and ‘K’ into the [`NPY_ORDER`](#c.NPY_ORDER "NPY_ORDER") enumeration [`NPY_CORDER`](#c.NPY_ORDER.NPY_CORDER "NPY_CORDER"), [`NPY_FORTRANORDER`](#c.NPY_ORDER.NPY_FORTRANORDER "NPY_FORTRANORDER"), [`NPY_ANYORDER`](#c.NPY_ORDER.NPY_ANYORDER "NPY_ANYORDER"), and [`NPY_KEEPORDER`](#c.NPY_ORDER.NPY_KEEPORDER "NPY_KEEPORDER"). intPyArray_CastingConverter([PyObject](https://docs.python.org/3/c-api/structures.html#c.PyObject "(in Python v3.10)")*obj, [NPY_CASTING](#c.NPY_CASTING "NPY_CASTING")*casting) Convert the Python strings ‘no’, ‘equiv’, ‘safe’, ‘same_kind’, and ‘unsafe’ into the [`NPY_CASTING`](#c.NPY_CASTING "NPY_CASTING") enumeration [`NPY_NO_CASTING`](#c.NPY_CASTING.NPY_NO_CASTING "NPY_NO_CASTING"), [`NPY_EQUIV_CASTING`](#c.NPY_CASTING.NPY_EQUIV_CASTING "NPY_EQUIV_CASTING"), [`NPY_SAFE_CASTING`](#c.NPY_CASTING.NPY_SAFE_CASTING "NPY_SAFE_CASTING"), [`NPY_SAME_KIND_CASTING`](#c.NPY_CASTING.NPY_SAME_KIND_CASTING "NPY_SAME_KIND_CASTING"), and [`NPY_UNSAFE_CASTING`](#c.NPY_CASTING.NPY_UNSAFE_CASTING "NPY_UNSAFE_CASTING"). 
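The casting strings that `PyArray_CastingConverter` maps onto the `NPY_CASTING` enumeration are exposed at the Python level through `np.can_cast`; a quick illustrative sketch of the rules they select:

```python
import numpy as np

# Widening an integer type is a 'safe' cast; narrowing a float is not,
# but it is allowed under the looser 'same_kind' rule.
widen = np.can_cast(np.int32, np.int64, casting='safe')
narrow_safe = np.can_cast(np.float64, np.float32, casting='safe')
narrow_same_kind = np.can_cast(np.float64, np.float32, casting='same_kind')
print(widen, narrow_safe, narrow_same_kind)
```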
intPyArray_ClipmodeConverter([PyObject](https://docs.python.org/3/c-api/structures.html#c.PyObject "(in Python v3.10)")*object, [NPY_CLIPMODE](#c.NPY_CLIPMODE "NPY_CLIPMODE")*val) Convert the Python strings ‘clip’, ‘wrap’, and ‘raise’ into the [`NPY_CLIPMODE`](#c.NPY_CLIPMODE "NPY_CLIPMODE") enumeration [`NPY_CLIP`](#c.NPY_CLIPMODE.NPY_CLIP "NPY_CLIP"), [`NPY_WRAP`](#c.NPY_CLIPMODE.NPY_WRAP "NPY_WRAP"), and [`NPY_RAISE`](#c.NPY_CLIPMODE.NPY_RAISE "NPY_RAISE"). intPyArray_ConvertClipmodeSequence([PyObject](https://docs.python.org/3/c-api/structures.html#c.PyObject "(in Python v3.10)")*object, [NPY_CLIPMODE](#c.NPY_CLIPMODE "NPY_CLIPMODE")*modes, intn) Converts either a sequence of clipmodes or a single clipmode into a C array of [`NPY_CLIPMODE`](#c.NPY_CLIPMODE "NPY_CLIPMODE") values. The number of clipmodes *n* must be known before calling this function. This function is provided to help functions allow a different clipmode for each dimension. ### Other conversions intPyArray_PyIntAsInt([PyObject](https://docs.python.org/3/c-api/structures.html#c.PyObject "(in Python v3.10)")*op) Convert all kinds of Python objects (including arrays and array scalars) to a standard integer. On error, -1 is returned and an exception set. You may find useful the macro: ``` #define error_converting(x) (((x) == -1) && PyErr_Occurred()) ``` [npy_intp](dtype#c.npy_intp "npy_intp")PyArray_PyIntAsIntp([PyObject](https://docs.python.org/3/c-api/structures.html#c.PyObject "(in Python v3.10)")*op) Convert all kinds of Python objects (including arrays and array scalars) to a (platform-pointer-sized) integer. On error, -1 is returned and an exception set. intPyArray_IntpFromSequence([PyObject](https://docs.python.org/3/c-api/structures.html#c.PyObject "(in Python v3.10)")*seq, [npy_intp](dtype#c.npy_intp "npy_intp")*vals, intmaxvals) Convert any Python sequence (or single Python number) passed in as *seq* to (up to) *maxvals* pointer-sized integers and place them in the *vals* array. 
The sequence can be smaller than *maxvals* as the number of converted objects is returned. int PyArray_TypestrConvert(int itemsize, int gentype) Convert typestring characters (with *itemsize*) to basic enumerated data types. The typestring characters corresponding to signed and unsigned integers, floating point numbers, and complex-floating point numbers are recognized and converted. Other values of gentype are returned. This function can be used to convert, for example, the string ‘f4’ to [`NPY_FLOAT32`](dtype#c.NPY_FLOAT32 "NPY_FLOAT32").

Miscellaneous
-------------

### Importing the API

In order to make use of the C-API from another extension module, the [`import_array`](#c.import_array "import_array") function must be called. If the extension module is self-contained in a single .c file, then that is all that needs to be done. If, however, the extension module involves multiple files where the C-API is needed, then some additional steps must be taken. void import_array(void) This function must be called in the initialization section of a module that will make use of the C-API. It imports the module where the function-pointer table is stored and points the correct variable to it. PY_ARRAY_UNIQUE_SYMBOL NO_IMPORT_ARRAY Using these #defines you can use the C-API in multiple files for a single extension module. In each file you must define [`PY_ARRAY_UNIQUE_SYMBOL`](#c.PY_ARRAY_UNIQUE_SYMBOL "PY_ARRAY_UNIQUE_SYMBOL") to some name that will hold the C-API (*e.g.* myextension_ARRAY_API). This must be done **before** including the numpy/arrayobject.h file. In the module initialization routine you call [`import_array`](#c.import_array "import_array"). In addition, in the files that do not have the module initialization subroutine, define [`NO_IMPORT_ARRAY`](#c.NO_IMPORT_ARRAY "NO_IMPORT_ARRAY") prior to including numpy/arrayobject.h. Suppose I have two files coolmodule.c and coolhelper.c which need to be compiled and linked into a single extension module.
Suppose coolmodule.c contains the required initcool module initialization function (with the import_array() function called). Then, coolmodule.c would have at the top:

```
#define PY_ARRAY_UNIQUE_SYMBOL cool_ARRAY_API
#include <numpy/arrayobject.h>
```

On the other hand, coolhelper.c would contain at the top:

```
#define NO_IMPORT_ARRAY
#define PY_ARRAY_UNIQUE_SYMBOL cool_ARRAY_API
#include <numpy/arrayobject.h>
```

You can also put the common two last lines into an extension-local header file as long as you make sure that NO_IMPORT_ARRAY is #defined before #including that file. Internally, these #defines work as follows:

* If neither is defined, the C-API is declared to be `static void**`, so it is only visible within the compilation unit that #includes numpy/arrayobject.h.
* If [`PY_ARRAY_UNIQUE_SYMBOL`](#c.PY_ARRAY_UNIQUE_SYMBOL "PY_ARRAY_UNIQUE_SYMBOL") is #defined, but [`NO_IMPORT_ARRAY`](#c.NO_IMPORT_ARRAY "NO_IMPORT_ARRAY") is not, the C-API is declared to be `void**`, so that it will also be visible to other compilation units.
* If [`NO_IMPORT_ARRAY`](#c.NO_IMPORT_ARRAY "NO_IMPORT_ARRAY") is #defined, regardless of whether [`PY_ARRAY_UNIQUE_SYMBOL`](#c.PY_ARRAY_UNIQUE_SYMBOL "PY_ARRAY_UNIQUE_SYMBOL") is, the C-API is declared to be `extern void**`, so it is expected to be defined in another compilation unit.
* Whenever [`PY_ARRAY_UNIQUE_SYMBOL`](#c.PY_ARRAY_UNIQUE_SYMBOL "PY_ARRAY_UNIQUE_SYMBOL") is #defined, it also changes the name of the variable holding the C-API, which defaults to `PyArray_API`, to whatever the macro is #defined to.

### Checking the API Version

Because python extensions are not used in the same way as usual libraries on most platforms, some errors cannot be automatically detected at build time or even runtime.
For example, if you build an extension using a function available only for numpy >= 1.3.0, and you import the extension later with numpy 1.2, you will not get an import error (but almost certainly a segmentation fault when calling the function). That’s why several functions are provided to check for numpy versions. The macros [`NPY_VERSION`](#c.NPY_VERSION "NPY_VERSION") and [`NPY_FEATURE_VERSION`](#c.NPY_FEATURE_VERSION "NPY_FEATURE_VERSION") correspond to the numpy version used to build the extension, whereas the versions returned by the functions [`PyArray_GetNDArrayCVersion`](#c.PyArray_GetNDArrayCVersion "PyArray_GetNDArrayCVersion") and [`PyArray_GetNDArrayCFeatureVersion`](#c.PyArray_GetNDArrayCFeatureVersion "PyArray_GetNDArrayCFeatureVersion") correspond to the runtime numpy’s version. The rules for ABI and API compatibilities can be summarized as follows:

* Whenever [`NPY_VERSION`](#c.NPY_VERSION "NPY_VERSION") != `PyArray_GetNDArrayCVersion()`, the extension has to be recompiled (ABI incompatibility).
* [`NPY_VERSION`](#c.NPY_VERSION "NPY_VERSION") == `PyArray_GetNDArrayCVersion()` and [`NPY_FEATURE_VERSION`](#c.NPY_FEATURE_VERSION "NPY_FEATURE_VERSION") <= `PyArray_GetNDArrayCFeatureVersion()` means backward compatible changes.

ABI incompatibility is automatically detected in every numpy version. API incompatibility detection was added in numpy 1.4.0. If you want to support many different numpy versions with one extension binary, you have to build your extension with the lowest [`NPY_FEATURE_VERSION`](#c.NPY_FEATURE_VERSION "NPY_FEATURE_VERSION") possible. NPY_VERSION The current version of the ndarray object (check to see if this variable is defined to guarantee the `numpy/arrayobject.h` header is being used). NPY_FEATURE_VERSION The current version of the C-API. unsigned int PyArray_GetNDArrayCVersion(void) This just returns the value [`NPY_VERSION`](#c.NPY_VERSION "NPY_VERSION").
[`NPY_VERSION`](#c.NPY_VERSION "NPY_VERSION") changes whenever a backward incompatible change occurs at the ABI level. Because it is in the C-API, however, comparing the output of this function with the value defined in the current header gives a way to test if the C-API has changed, thus requiring a re-compilation of extension modules that use the C-API. This is automatically checked in the function [`import_array`](#c.import_array "import_array"). unsigned int PyArray_GetNDArrayCFeatureVersion(void) New in version 1.4.0. This just returns the value [`NPY_FEATURE_VERSION`](#c.NPY_FEATURE_VERSION "NPY_FEATURE_VERSION"). [`NPY_FEATURE_VERSION`](#c.NPY_FEATURE_VERSION "NPY_FEATURE_VERSION") changes whenever the API changes (e.g. a function is added). A changed value does not always require a recompile.

### Internal Flexibility

int PyArray_SetNumericOps([PyObject](https://docs.python.org/3/c-api/structures.html#c.PyObject "(in Python v3.10)")*dict) NumPy stores an internal table of Python callable objects that are used to implement arithmetic operations for arrays as well as certain array calculation methods. This function allows the user to replace any or all of these Python objects with their own versions. The keys of the dictionary, *dict*, are the named functions to replace and the paired value is the Python callable object to use. Care should be taken that the function used to replace an internal array operation does not itself call back to that internal array operation (unless you have designed the function to handle that), or an unchecked infinite recursion can result (possibly causing a program crash).
The key names that represent operations that can be replaced are: **add**, **subtract**, **multiply**, **divide**, **remainder**, **power**, **square**, **reciprocal**, **ones_like**, **sqrt**, **negative**, **positive**, **absolute**, **invert**, **left_shift**, **right_shift**, **bitwise_and**, **bitwise_xor**, **bitwise_or**, **less**, **less_equal**, **equal**, **not_equal**, **greater**, **greater_equal**, **floor_divide**, **true_divide**, **logical_or**, **logical_and**, **floor**, **ceil**, **maximum**, **minimum**, **rint**. These functions are included here because they are used at least once in the array object’s methods. The function returns -1 (without setting a Python Error) if one of the objects being assigned is not callable. Deprecated since version 1.16. [PyObject](https://docs.python.org/3/c-api/structures.html#c.PyObject "(in Python v3.10)")*PyArray_GetNumericOps(void) Return a Python dictionary containing the callable Python objects stored in the internal arithmetic operation table. The keys of this dictionary are given in the explanation for [`PyArray_SetNumericOps`](#c.PyArray_SetNumericOps "PyArray_SetNumericOps"). Deprecated since version 1.16. void PyArray_SetStringFunction([PyObject](https://docs.python.org/3/c-api/structures.html#c.PyObject "(in Python v3.10)")*op, int repr) This function allows you to alter the tp_str and tp_repr methods of the array object to any Python function. Thus you can alter what happens for all arrays when str(arr) or repr(arr) is called from Python. The function to be called is passed in as *op*. If *repr* is non-zero, then this function will be called in response to repr(arr), otherwise the function will be called in response to str(arr). No check on whether or not *op* is callable is performed. The callable passed in to *op* should expect an array argument and should return a string to be printed.
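As a hedged sketch of how `PyArray_SetStringFunction` might be used (the helper `fetch_my_repr` is hypothetical, standing in for however your module obtains a Python callable; remember that NumPy performs no callability check):

```c
#include <Python.h>
#include <numpy/arrayobject.h>

/* Sketch: route repr(arr) through a Python callable of our own.
 * `fetch_my_repr` is an assumed helper returning a new reference
 * to a callable that takes an array and returns a str. */
static int
install_custom_repr(void)
{
    PyObject *my_repr = fetch_my_repr();   /* hypothetical helper */
    if (my_repr == NULL) {
        return -1;
    }
    /* repr != 0 -> replaces tp_repr; pass 0 to replace tp_str. */
    PyArray_SetStringFunction(my_repr, 1);
    return 0;
}
```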
### Memory management

char* PyDataMem_NEW(size_t nbytes) void PyDataMem_FREE(char*ptr) char* PyDataMem_RENEW(void*ptr, size_t newbytes) Macros to allocate, free, and reallocate memory. These macros are used internally to create arrays. [npy_intp](dtype#c.npy_intp "npy_intp")*PyDimMem_NEW(int nd) void PyDimMem_FREE(char*ptr) [npy_intp](dtype#c.npy_intp "npy_intp")*PyDimMem_RENEW(void*ptr, size_t newnd) Macros to allocate, free, and reallocate dimension and strides memory. void* PyArray_malloc(size_t nbytes) void PyArray_free(void*ptr) void* PyArray_realloc([npy_intp](dtype#c.npy_intp "npy_intp")*ptr, size_t nbytes) These macros use different memory allocators, depending on the constant [`NPY_USE_PYMEM`](#c.PyArray_realloc.NPY_USE_PYMEM "NPY_USE_PYMEM"). The system malloc is used when [`NPY_USE_PYMEM`](#c.PyArray_realloc.NPY_USE_PYMEM "NPY_USE_PYMEM") is 0; if [`NPY_USE_PYMEM`](#c.PyArray_realloc.NPY_USE_PYMEM "NPY_USE_PYMEM") is 1, then the Python memory allocator is used. NPY_USE_PYMEM int PyArray_ResolveWritebackIfCopy([PyArrayObject](types-and-structures#c.PyArrayObject "PyArrayObject")*obj) If `obj.flags` has [`NPY_ARRAY_WRITEBACKIFCOPY`](#c.NPY_ARRAY_WRITEBACKIFCOPY "NPY_ARRAY_WRITEBACKIFCOPY"), this function clears the flags, `DECREF` s `obj->base` and makes it writeable, and sets `obj->base` to NULL. It then copies `obj->data` to `obj->base->data`, and returns the error state of the copy operation. This is the opposite of [`PyArray_SetWritebackIfCopyBase`](#c.PyArray_SetWritebackIfCopyBase "PyArray_SetWritebackIfCopyBase"). Usually this is called once you are finished with `obj`, just before `Py_DECREF(obj)`. It may be called multiple times, or with `NULL` input. See also [`PyArray_DiscardWritebackIfCopy`](#c.PyArray_DiscardWritebackIfCopy "PyArray_DiscardWritebackIfCopy"). Returns 0 if nothing was done, -1 on error, and 1 if action was taken.
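A short sketch of the usual teardown order (the function name is illustrative, not from the NumPy docs): resolve the writeback before dropping the last reference, so the temporary copy is written back into the base array first:

```c
#include <Python.h>
#include <numpy/arrayobject.h>

/* Sketch: finish with a possibly-WRITEBACKIFCOPY operand `obj`.
 * Returns 0 on success, -1 if the writeback copy failed. */
static int
release_operand(PyArrayObject *obj)
{
    /* 0: nothing to do, 1: data written back, -1: error */
    int ret = PyArray_ResolveWritebackIfCopy(obj);
    Py_DECREF(obj);
    return (ret < 0) ? -1 : 0;
}
```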
### Threading support

These macros are only meaningful if [`NPY_ALLOW_THREADS`](#c.NPY_ALLOW_THREADS "NPY_ALLOW_THREADS") evaluates True during compilation of the extension module. Otherwise, these macros are equivalent to whitespace. Python uses a single Global Interpreter Lock (GIL) for each Python process so that only a single thread may execute at a time (even on multi-cpu machines). When calling out to a compiled function that may take time to compute (and does not have side-effects for other threads like updated global variables), the GIL should be released so that other Python threads can run while the time-consuming calculations are performed. This can be accomplished using two groups of macros. Typically, if one macro in a group is used in a code block, all of them must be used in the same code block. Currently, [`NPY_ALLOW_THREADS`](#c.NPY_ALLOW_THREADS "NPY_ALLOW_THREADS") is defined to the python-defined [`WITH_THREADS`](#c.WITH_THREADS "WITH_THREADS") constant unless the environment variable `NPY_NOSMP` is set, in which case [`NPY_ALLOW_THREADS`](#c.NPY_ALLOW_THREADS "NPY_ALLOW_THREADS") is defined to be 0. NPY_ALLOW_THREADS WITH_THREADS

#### Group 1

This group is used to call code that may take some time but does not use any Python C-API calls. Thus, the GIL should be released during its calculation. NPY_BEGIN_ALLOW_THREADS Equivalent to [`Py_BEGIN_ALLOW_THREADS`](https://docs.python.org/3/c-api/init.html#c.Py_BEGIN_ALLOW_THREADS "(in Python v3.10)") except it uses [`NPY_ALLOW_THREADS`](#c.NPY_ALLOW_THREADS "NPY_ALLOW_THREADS") to determine whether the macro is replaced with whitespace or not. NPY_END_ALLOW_THREADS Equivalent to [`Py_END_ALLOW_THREADS`](https://docs.python.org/3/c-api/init.html#c.Py_END_ALLOW_THREADS "(in Python v3.10)") except it uses [`NPY_ALLOW_THREADS`](#c.NPY_ALLOW_THREADS "NPY_ALLOW_THREADS") to determine whether the macro is replaced with whitespace or not. NPY_BEGIN_THREADS_DEF Place in the variable declaration area.
This macro sets up the variable needed for storing the Python state. NPY_BEGIN_THREADS Place right before code that does not need the Python interpreter (no Python C-API calls). This macro saves the Python state and releases the GIL. NPY_END_THREADS Place right after code that does not need the Python interpreter. This macro acquires the GIL and restores the Python state from the saved variable. void NPY_BEGIN_THREADS_DESCR([PyArray_Descr](types-and-structures#c.PyArray_Descr "PyArray_Descr")*dtype) Useful to release the GIL only if *dtype* does not contain arbitrary Python objects which may need the Python interpreter during execution of the loop. void NPY_END_THREADS_DESCR([PyArray_Descr](types-and-structures#c.PyArray_Descr "PyArray_Descr")*dtype) Useful to regain the GIL in situations where it was released using the BEGIN form of this macro. void NPY_BEGIN_THREADS_THRESHOLDED(int loop_size) Useful to release the GIL only if *loop_size* exceeds a minimum threshold, currently set to 500. Should be matched with a [`NPY_END_THREADS`](#c.NPY_END_THREADS "NPY_END_THREADS") to regain the GIL.

#### Group 2

This group is used to re-acquire the Python GIL after it has been released. For example, suppose the GIL has been released (using the previous calls), and then some path in the code (perhaps in a different subroutine) requires use of the Python C-API; then these macros are useful to acquire the GIL. These macros accomplish essentially a reverse of the previous three (acquire the LOCK saving what state it had) and then re-release it with the saved state. NPY_ALLOW_C_API_DEF Place in the variable declaration area to set up the necessary variable. NPY_ALLOW_C_API Place before code that needs to call the Python C-API (when it is known that the GIL has already been released). NPY_DISABLE_C_API Place after code that needs to call the Python C-API (to re-release the GIL). Tip: Never use semicolons after the threading support macros.
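Putting the two groups together, a hedged sketch (the function and its error condition are invented for illustration): release the GIL around a pure-C loop, and briefly re-acquire it if an error must be reported mid-computation. Per the tip above, no semicolons follow the macros:

```c
#include <Python.h>
#include <numpy/arrayobject.h>

/* Sketch: sum a buffer without holding the GIL. */
static double
sum_buffer(const double *data, npy_intp n)
{
    double total = 0.0;
    npy_intp i;
    NPY_BEGIN_THREADS_DEF     /* Group 1: declares the saved-state variable */

    NPY_BEGIN_THREADS         /* saves the thread state, releases the GIL */
    for (i = 0; i < n; i++) {
        if (npy_isnan(data[i])) {
            /* Group 2: re-acquire the GIL just long enough to set an error */
            NPY_ALLOW_C_API_DEF
            NPY_ALLOW_C_API
            PyErr_SetString(PyExc_ValueError, "nan encountered");
            NPY_DISABLE_C_API
            break;
        }
        total += data[i];
    }
    NPY_END_THREADS           /* re-acquires the GIL, restores the state */
    return total;
}
```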
### Priority

NPY_PRIORITY Default priority for arrays. NPY_SUBTYPE_PRIORITY Default subtype priority. NPY_SCALAR_PRIORITY Default scalar priority (very small). double PyArray_GetPriority([PyObject](https://docs.python.org/3/c-api/structures.html#c.PyObject "(in Python v3.10)")*obj, double def) Return the [`__array_priority__`](../arrays.classes#numpy.class.__array_priority__ "numpy.class.__array_priority__") attribute (converted to a double) of *obj* or *def* if no attribute of that name exists. Fast returns that avoid the attribute lookup are provided for objects of type [`PyArray_Type`](types-and-structures#c.PyArray_Type "PyArray_Type").

### Default buffers

NPY_BUFSIZE Default size of the user-settable internal buffers. NPY_MIN_BUFSIZE Smallest size of user-settable internal buffers. NPY_MAX_BUFSIZE Largest size allowed for the user-settable buffers.

### Other constants

NPY_NUM_FLOATTYPE The number of floating-point types. NPY_MAXDIMS The maximum number of dimensions allowed in arrays. NPY_MAXARGS The maximum number of array arguments that can be used in functions. NPY_FALSE Defined as 0 for use with Bool. NPY_TRUE Defined as 1 for use with Bool. NPY_FAIL The return value of failed converter functions which are called using the “O&” syntax in [`PyArg_ParseTuple`](https://docs.python.org/3/c-api/arg.html#c.PyArg_ParseTuple "(in Python v3.10)")-like functions. NPY_SUCCEED The return value of successful converter functions which are called using the “O&” syntax in [`PyArg_ParseTuple`](https://docs.python.org/3/c-api/arg.html#c.PyArg_ParseTuple "(in Python v3.10)")-like functions.

### Miscellaneous Macros

int PyArray_SAMESHAPE([PyArrayObject](types-and-structures#c.PyArrayObject "PyArrayObject")*a1, [PyArrayObject](types-and-structures#c.PyArrayObject "PyArrayObject")*a2) Evaluates as True if arrays *a1* and *a2* have the same shape. PyArray_MAX(a, b) Returns the maximum of *a* and *b*. If (*a*) or (*b*) are expressions they are evaluated twice.
PyArray_MIN(a, b) Returns the minimum of *a* and *b*. If (*a*) or (*b*) are expressions they are evaluated twice. PyArray_CLT(a, b) PyArray_CGT(a, b) PyArray_CLE(a, b) PyArray_CGE(a, b) PyArray_CEQ(a, b) PyArray_CNE(a, b) Implements the complex comparisons between two complex numbers (structures with a real and imag member) using NumPy’s definition of the ordering, which is lexicographic: comparing the real parts first and then the imaginary parts if the real parts are equal. [npy_intp](dtype#c.npy_intp "npy_intp") PyArray_REFCOUNT([PyObject](https://docs.python.org/3/c-api/structures.html#c.PyObject "(in Python v3.10)")*op) Returns the reference count of any Python object. void PyArray_DiscardWritebackIfCopy([PyObject](https://docs.python.org/3/c-api/structures.html#c.PyObject "(in Python v3.10)")*obj) If `obj.flags` has [`NPY_ARRAY_WRITEBACKIFCOPY`](#c.NPY_ARRAY_WRITEBACKIFCOPY "NPY_ARRAY_WRITEBACKIFCOPY"), this function clears the flags, `DECREF` s `obj->base` and makes it writeable, and sets `obj->base` to NULL. In contrast to [`PyArray_ResolveWritebackIfCopy`](#c.PyArray_ResolveWritebackIfCopy "PyArray_ResolveWritebackIfCopy") it makes no attempt to copy the data from `obj->base`. This undoes [`PyArray_SetWritebackIfCopyBase`](#c.PyArray_SetWritebackIfCopyBase "PyArray_SetWritebackIfCopyBase"). Usually this is called after an error when you are finished with `obj`, just before `Py_DECREF(obj)`. It may be called multiple times, or with `NULL` input. void PyArray_XDECREF_ERR([PyObject](https://docs.python.org/3/c-api/structures.html#c.PyObject "(in Python v3.10)")*obj) Deprecated in 1.14; use [`PyArray_DiscardWritebackIfCopy`](#c.PyArray_DiscardWritebackIfCopy "PyArray_DiscardWritebackIfCopy") followed by `Py_XDECREF`. DECREF’s an array object which may have the [`NPY_ARRAY_WRITEBACKIFCOPY`](#c.NPY_ARRAY_WRITEBACKIFCOPY "NPY_ARRAY_WRITEBACKIFCOPY") flag set without causing the contents to be copied back into the original array.
Resets the [`NPY_ARRAY_WRITEABLE`](#c.NPY_ARRAY_WRITEABLE "NPY_ARRAY_WRITEABLE") flag on the base object. This is useful for recovering from an error condition when writeback semantics are used, but will lead to wrong results.

### Enumerated Types

enum NPY_SORTKIND A special variable-type which can take on different values to indicate the sorting algorithm being used. enumerator NPY_QUICKSORT enumerator NPY_HEAPSORT enumerator NPY_MERGESORT enumerator NPY_STABLESORT Used as an alias of [`NPY_MERGESORT`](#c.NPY_SORTKIND.NPY_MERGESORT "NPY_MERGESORT") and vice versa. enumerator NPY_NSORTS Defined to be the number of sorts. It is fixed at three by the need for backwards compatibility, and consequently [`NPY_MERGESORT`](#c.NPY_SORTKIND.NPY_MERGESORT "NPY_MERGESORT") and [`NPY_STABLESORT`](#c.NPY_SORTKIND.NPY_STABLESORT "NPY_STABLESORT") are aliased to each other and may refer to one of several stable sorting algorithms depending on the data type. enum NPY_SCALARKIND A special variable type indicating the number of “kinds” of scalars distinguished in determining scalar-coercion rules. This variable can take on the values: enumerator NPY_NOSCALAR enumerator NPY_BOOL_SCALAR enumerator NPY_INTPOS_SCALAR enumerator NPY_INTNEG_SCALAR enumerator NPY_FLOAT_SCALAR enumerator NPY_COMPLEX_SCALAR enumerator NPY_OBJECT_SCALAR enumerator NPY_NSCALARKINDS Defined to be the number of scalar kinds (not including [`NPY_NOSCALAR`](#c.NPY_SCALARKIND.NPY_NOSCALAR "NPY_NOSCALAR")). enum NPY_ORDER An enumeration type indicating the element order that an array should be interpreted in. When a brand new array is created, generally only **NPY_CORDER** and **NPY_FORTRANORDER** are used, whereas when one or more inputs are provided, the order can be based on them. enumerator NPY_ANYORDER Fortran order if all the inputs are Fortran, C otherwise. enumerator NPY_CORDER C order. enumerator NPY_FORTRANORDER Fortran order.
enumerator NPY_KEEPORDER An order as close to the order of the inputs as possible, even if the input is in neither C nor Fortran order. enum NPY_CLIPMODE A variable type indicating the kind of clipping that should be applied in certain functions. enumerator NPY_RAISE The default for most operations, raises an exception if an index is out of bounds. enumerator NPY_CLIP Clips an index to the valid range if it is out of bounds. enumerator NPY_WRAP Wraps an index to the valid range if it is out of bounds. enum NPY_SEARCHSIDE A variable type indicating whether the index returned should be that of the first suitable location (if [`NPY_SEARCHLEFT`](#c.NPY_SEARCHSIDE.NPY_SEARCHLEFT "NPY_SEARCHLEFT")) or of the last (if [`NPY_SEARCHRIGHT`](#c.NPY_SEARCHSIDE.NPY_SEARCHRIGHT "NPY_SEARCHRIGHT")). enumerator NPY_SEARCHLEFT enumerator NPY_SEARCHRIGHT enum NPY_SELECTKIND A variable type indicating the selection algorithm being used. enumerator NPY_INTROSELECT enum NPY_CASTING New in version 1.6. An enumeration type indicating how permissive data conversions should be. This is used by the iterator added in NumPy 1.6, and is intended to be used more broadly in a future version. enumerator NPY_NO_CASTING Only allow identical types. enumerator NPY_EQUIV_CASTING Allow identical and casts involving byte swapping. enumerator NPY_SAFE_CASTING Only allow casts which will not cause values to be rounded, truncated, or otherwise changed. enumerator NPY_SAME_KIND_CASTING Allow any safe casts, and casts between types of the same kind. For example, float64 -> float32 is permitted with this rule. enumerator NPY_UNSAFE_CASTING Allow any cast, no matter what kind of data loss may occur.

© 2005–2022 NumPy Developers. Licensed under the 3-clause BSD License. <https://numpy.org/doc/1.23/reference/c-api/array.html>

Array Iterator API
==================

New in version 1.6.
Array Iterator
--------------

The array iterator encapsulates many of the key features in ufuncs, allowing user code to support features like output parameters, preservation of memory layouts, and buffering of data with the wrong alignment or type, without requiring difficult coding. This page documents the API for the iterator. The iterator is named `NpyIter` and functions are named `NpyIter_*`. There is an [introductory guide to array iteration](../arrays.nditer#arrays-nditer) which may be of interest for those using this C API. In many instances, testing out ideas by creating the iterator in Python is a good idea before writing the C iteration code.

Simple Iteration Example
------------------------

The best way to become familiar with the iterator is to look at its usage within the NumPy codebase itself. For example, here is a slightly tweaked version of the code for [`PyArray_CountNonzero`](array#c.PyArray_CountNonzero "PyArray_CountNonzero"), which counts the number of non-zero elements in an array.

```
npy_intp PyArray_CountNonzero(PyArrayObject* self)
{
    /* Nonzero boolean function */
    PyArray_NonzeroFunc* nonzero = PyArray_DESCR(self)->f->nonzero;

    NpyIter* iter;
    NpyIter_IterNextFunc *iternext;
    char** dataptr;
    npy_intp nonzero_count;
    npy_intp* strideptr,* innersizeptr;

    /* Handle zero-sized arrays specially */
    if (PyArray_SIZE(self) == 0) {
        return 0;
    }

    /*
     * Create and use an iterator to count the nonzeros.
     *   flag NPY_ITER_READONLY
     *     - The array is never written to.
     *   flag NPY_ITER_EXTERNAL_LOOP
     *     - Inner loop is done outside the iterator for efficiency.
     *   flag NPY_ITER_REFS_OK
     *     - Reference types are acceptable.
     *   order NPY_KEEPORDER
     *     - Visit elements in memory order, regardless of strides.
     *       This is good for performance when the specific order
     *       elements are visited is unimportant.
     *   casting NPY_NO_CASTING
     *     - No casting is required for this operation.
     */
    iter = NpyIter_New(self, NPY_ITER_READONLY|
                             NPY_ITER_EXTERNAL_LOOP|
                             NPY_ITER_REFS_OK,
                       NPY_KEEPORDER, NPY_NO_CASTING,
                       NULL);
    if (iter == NULL) {
        return -1;
    }

    /*
     * The iternext function gets stored in a local variable
     * so it can be called repeatedly in an efficient manner.
     */
    iternext = NpyIter_GetIterNext(iter, NULL);
    if (iternext == NULL) {
        NpyIter_Deallocate(iter);
        return -1;
    }
    /* The location of the data pointer which the iterator may update */
    dataptr = NpyIter_GetDataPtrArray(iter);
    /* The location of the stride which the iterator may update */
    strideptr = NpyIter_GetInnerStrideArray(iter);
    /* The location of the inner loop size which the iterator may update */
    innersizeptr = NpyIter_GetInnerLoopSizePtr(iter);

    nonzero_count = 0;
    do {
        /* Get the inner loop data/stride/count values */
        char* data = *dataptr;
        npy_intp stride = *strideptr;
        npy_intp count = *innersizeptr;

        /* This is a typical inner loop for NPY_ITER_EXTERNAL_LOOP */
        while (count--) {
            if (nonzero(data, self)) {
                ++nonzero_count;
            }
            data += stride;
        }

        /* Increment the iterator to the next inner loop */
    } while(iternext(iter));

    NpyIter_Deallocate(iter);

    return nonzero_count;
}
```

Simple Multi-Iteration Example
------------------------------

Here is a simple copy function using the iterator. The `order` parameter is used to control the memory layout of the allocated result; typically [`NPY_KEEPORDER`](array#c.NPY_ORDER.NPY_KEEPORDER "NPY_KEEPORDER") is desired.

```
PyObject *CopyArray(PyObject *arr, NPY_ORDER order)
{
    NpyIter *iter;
    NpyIter_IterNextFunc *iternext;
    PyObject *op[2], *ret;
    npy_uint32 flags;
    npy_uint32 op_flags[2];
    npy_intp itemsize, *innersizeptr, innerstride;
    char **dataptrarray;

    /*
     * No inner iteration - inner loop is handled by CopyArray code
     */
    flags = NPY_ITER_EXTERNAL_LOOP;
    /*
     * Tell the constructor to automatically allocate the output.
     * The data type of the output will match that of the input.
     */
    op[0] = arr;
    op[1] = NULL;
    op_flags[0] = NPY_ITER_READONLY;
    op_flags[1] = NPY_ITER_WRITEONLY | NPY_ITER_ALLOCATE;

    /* Construct the iterator */
    iter = NpyIter_MultiNew(2, op, flags, order, NPY_NO_CASTING,
                            op_flags, NULL);
    if (iter == NULL) {
        return NULL;
    }

    /*
     * Make a copy of the iternext function pointer and
     * a few other variables the inner loop needs.
     */
    iternext = NpyIter_GetIterNext(iter, NULL);
    innerstride = NpyIter_GetInnerStrideArray(iter)[0];
    itemsize = NpyIter_GetDescrArray(iter)[0]->elsize;
    /*
     * The inner loop size and data pointers may change during the
     * loop, so just cache the addresses.
     */
    innersizeptr = NpyIter_GetInnerLoopSizePtr(iter);
    dataptrarray = NpyIter_GetDataPtrArray(iter);

    /*
     * Note that because the iterator allocated the output,
     * it matches the iteration order and is packed tightly,
     * so we don't need to check it like the input.
     */
    if (innerstride == itemsize) {
        do {
            memcpy(dataptrarray[1], dataptrarray[0],
                   itemsize * (*innersizeptr));
        } while (iternext(iter));
    } else {
        /* For efficiency, should specialize this based on item size... */
        npy_intp i;
        do {
            npy_intp size = *innersizeptr;
            char *src = dataptrarray[0], *dst = dataptrarray[1];
            for(i = 0; i < size; i++, src += innerstride, dst += itemsize) {
                memcpy(dst, src, itemsize);
            }
        } while (iternext(iter));
    }

    /* Get the result from the iterator object array */
    ret = NpyIter_GetOperandArray(iter)[1];
    Py_INCREF(ret);

    if (NpyIter_Deallocate(iter) != NPY_SUCCEED) {
        Py_DECREF(ret);
        return NULL;
    }

    return ret;
}
```

Iterator Data Types
-------------------

The iterator layout is an internal detail, and user code only sees an incomplete struct. type NpyIter This is an opaque pointer type for the iterator. Access to its contents can only be done through the iterator API. type NpyIter_Type This is the type which exposes the iterator to Python. Currently, no API is exposed which provides access to the values of a Python-created iterator.
If an iterator is created in Python, it must be used in Python and vice versa. Such an API will likely be created in a future version. type NpyIter_IterNextFunc This is a function pointer for the iteration loop, returned by [`NpyIter_GetIterNext`](#c.NpyIter_GetIterNext "NpyIter_GetIterNext"). type NpyIter_GetMultiIndexFunc This is a function pointer for getting the current iterator multi-index, returned by [`NpyIter_GetGetMultiIndex`](#c.NpyIter_GetGetMultiIndex "NpyIter_GetGetMultiIndex").

Construction and Destruction
----------------------------

[NpyIter](#c.NpyIter "NpyIter")*NpyIter_New([PyArrayObject](types-and-structures#c.PyArrayObject "PyArrayObject")*op, [npy_uint32](dtype#c.npy_uint32 "npy_uint32")flags, [NPY_ORDER](array#c.NPY_ORDER "NPY_ORDER")order, [NPY_CASTING](array#c.NPY_CASTING "NPY_CASTING")casting, [PyArray_Descr](types-and-structures#c.PyArray_Descr "PyArray_Descr")*dtype) Creates an iterator for the given numpy array object `op`. Flags that may be passed in `flags` are any combination of the global and per-operand flags documented in [`NpyIter_MultiNew`](#c.NpyIter_MultiNew "NpyIter_MultiNew"), except for [`NPY_ITER_ALLOCATE`](#c.NPY_ITER_ALLOCATE "NPY_ITER_ALLOCATE"). Any of the [`NPY_ORDER`](array#c.NPY_ORDER "NPY_ORDER") enum values may be passed to `order`. For efficient iteration, [`NPY_KEEPORDER`](array#c.NPY_ORDER.NPY_KEEPORDER "NPY_KEEPORDER") is the best option, and the other orders enforce the particular iteration pattern. Any of the [`NPY_CASTING`](array#c.NPY_CASTING "NPY_CASTING") enum values may be passed to `casting`.
The values include [`NPY_NO_CASTING`](array#c.NPY_CASTING.NPY_NO_CASTING "NPY_NO_CASTING"), [`NPY_EQUIV_CASTING`](array#c.NPY_CASTING.NPY_EQUIV_CASTING "NPY_EQUIV_CASTING"), [`NPY_SAFE_CASTING`](array#c.NPY_CASTING.NPY_SAFE_CASTING "NPY_SAFE_CASTING"), [`NPY_SAME_KIND_CASTING`](array#c.NPY_CASTING.NPY_SAME_KIND_CASTING "NPY_SAME_KIND_CASTING"), and [`NPY_UNSAFE_CASTING`](array#c.NPY_CASTING.NPY_UNSAFE_CASTING "NPY_UNSAFE_CASTING"). To allow the casts to occur, copying or buffering must also be enabled. If `dtype` isn’t `NULL`, then it requires that data type. If copying is allowed, it will make a temporary copy if the data is castable. If [`NPY_ITER_UPDATEIFCOPY`](#c.NPY_ITER_UPDATEIFCOPY "NPY_ITER_UPDATEIFCOPY") is enabled, it will also copy the data back with another cast upon iterator destruction. Returns NULL if there is an error, otherwise returns the allocated iterator. To make an iterator similar to the old iterator, this should work.

```
iter = NpyIter_New(op, NPY_ITER_READWRITE,
                   NPY_CORDER, NPY_NO_CASTING, NULL);
```

If you want to edit an array with aligned `double` code, but the order doesn’t matter, you would use this.

```
dtype = PyArray_DescrFromType(NPY_DOUBLE);
iter = NpyIter_New(op, NPY_ITER_READWRITE|
                       NPY_ITER_BUFFERED|
                       NPY_ITER_NBO|
                       NPY_ITER_ALIGNED,
                   NPY_KEEPORDER,
                   NPY_SAME_KIND_CASTING,
                   dtype);
Py_DECREF(dtype);
```

[NpyIter](#c.NpyIter "NpyIter")*NpyIter_MultiNew([npy_intp](dtype#c.npy_intp "npy_intp")nop, [PyArrayObject](types-and-structures#c.PyArrayObject "PyArrayObject")**op, [npy_uint32](dtype#c.npy_uint32 "npy_uint32")flags, [NPY_ORDER](array#c.NPY_ORDER "NPY_ORDER")order, [NPY_CASTING](array#c.NPY_CASTING "NPY_CASTING")casting, [npy_uint32](dtype#c.npy_uint32 "npy_uint32")*op_flags, [PyArray_Descr](types-and-structures#c.PyArray_Descr "PyArray_Descr")**op_dtypes) Creates an iterator for broadcasting the `nop` array objects provided in `op`, using regular NumPy broadcasting rules.
Any of the [`NPY_ORDER`](array#c.NPY_ORDER "NPY_ORDER") enum values may be passed to `order`. For efficient iteration, [`NPY_KEEPORDER`](array#c.NPY_ORDER.NPY_KEEPORDER "NPY_KEEPORDER") is the best option, and the other orders enforce the particular iteration pattern. When using [`NPY_KEEPORDER`](array#c.NPY_ORDER.NPY_KEEPORDER "NPY_KEEPORDER"), if you also want to ensure that the iteration is not reversed along an axis, you should pass the flag [`NPY_ITER_DONT_NEGATE_STRIDES`](#c.NPY_ITER_DONT_NEGATE_STRIDES "NPY_ITER_DONT_NEGATE_STRIDES"). Any of the [`NPY_CASTING`](array#c.NPY_CASTING "NPY_CASTING") enum values may be passed to `casting`. The values include [`NPY_NO_CASTING`](array#c.NPY_CASTING.NPY_NO_CASTING "NPY_NO_CASTING"), [`NPY_EQUIV_CASTING`](array#c.NPY_CASTING.NPY_EQUIV_CASTING "NPY_EQUIV_CASTING"), [`NPY_SAFE_CASTING`](array#c.NPY_CASTING.NPY_SAFE_CASTING "NPY_SAFE_CASTING"), [`NPY_SAME_KIND_CASTING`](array#c.NPY_CASTING.NPY_SAME_KIND_CASTING "NPY_SAME_KIND_CASTING"), and [`NPY_UNSAFE_CASTING`](array#c.NPY_CASTING.NPY_UNSAFE_CASTING "NPY_UNSAFE_CASTING"). To allow the casts to occur, copying or buffering must also be enabled. If `op_dtypes` isn’t `NULL`, it specifies a data type or `NULL` for each `op[i]`. Returns NULL if there is an error, otherwise returns the allocated iterator. Flags that may be passed in `flags`, applying to the whole iterator, are: NPY_ITER_C_INDEX Causes the iterator to track a raveled flat index matching C order. This option cannot be used with [`NPY_ITER_F_INDEX`](#c.NPY_ITER_F_INDEX "NPY_ITER_F_INDEX"). NPY_ITER_F_INDEX Causes the iterator to track a raveled flat index matching Fortran order. This option cannot be used with [`NPY_ITER_C_INDEX`](#c.NPY_ITER_C_INDEX "NPY_ITER_C_INDEX"). NPY_ITER_MULTI_INDEX Causes the iterator to track a multi-index. This prevents the iterator from coalescing axes to produce bigger inner loops. 
If the loop is also not buffered and no index is being tracked (`NpyIter_RemoveAxis` can be called), then the iterator size can be `-1` to indicate that the iterator is too large. This can happen due to complex broadcasting and will result in errors being created when setting the iterator range, removing the multi-index, or getting the next function. However, it is possible to remove axes again and use the iterator normally if the size is small enough after removal. NPY_ITER_EXTERNAL_LOOP Causes the iterator to skip iteration of the innermost loop, requiring the user of the iterator to handle it. This flag is incompatible with [`NPY_ITER_C_INDEX`](#c.NPY_ITER_C_INDEX "NPY_ITER_C_INDEX"), [`NPY_ITER_F_INDEX`](#c.NPY_ITER_F_INDEX "NPY_ITER_F_INDEX"), and [`NPY_ITER_MULTI_INDEX`](#c.NPY_ITER_MULTI_INDEX "NPY_ITER_MULTI_INDEX"). NPY_ITER_DONT_NEGATE_STRIDES This only affects the iterator when [`NPY_KEEPORDER`](array#c.NPY_ORDER.NPY_KEEPORDER "NPY_KEEPORDER") is specified for the order parameter. By default with [`NPY_KEEPORDER`](array#c.NPY_ORDER.NPY_KEEPORDER "NPY_KEEPORDER"), the iterator reverses axes which have negative strides, so that memory is traversed in a forward direction. This flag disables that step. Use this flag if you want to use the underlying memory-ordering of the axes, but don’t want an axis reversed. This is the behavior of `numpy.ravel(a, order='K')`, for instance. NPY_ITER_COMMON_DTYPE Causes the iterator to convert all the operands to a common data type, calculated based on the ufunc type promotion rules. Copying or buffering must be enabled. If the common data type is known ahead of time, don’t use this flag. Instead, set the requested dtype for all the operands. NPY_ITER_REFS_OK Indicates that arrays with reference types (object arrays or structured arrays containing an object type) may be accepted and used in the iterator.
If this flag is enabled, the caller must be sure to check whether NpyIter_IterationNeedsAPI(iter) is true, in which case it may not release the GIL during iteration. NPY_ITER_ZEROSIZE_OK Indicates that arrays with a size of zero should be permitted. Since the typical iteration loop does not naturally work with zero-sized arrays, you must check that the IterSize is larger than zero before entering the iteration loop. Currently only the operands are checked, not a forced shape. NPY_ITER_REDUCE_OK Permits writeable operands with a dimension with zero stride and size greater than one. Note that such operands must be read/write. When buffering is enabled, this also switches to a special buffering mode which reduces the loop length as necessary to not trample on values being reduced. Note that if you want to do a reduction on an automatically allocated output, you must use [`NpyIter_GetOperandArray`](#c.NpyIter_GetOperandArray "NpyIter_GetOperandArray") to get its reference, then set every value to the reduction unit before doing the iteration loop. In the case of a buffered reduction, this means you must also specify the flag [`NPY_ITER_DELAY_BUFALLOC`](#c.NPY_ITER_DELAY_BUFALLOC "NPY_ITER_DELAY_BUFALLOC"), then reset the iterator after initializing the allocated operand to prepare the buffers. NPY_ITER_RANGED Enables support for iteration of sub-ranges of the full `iterindex` range `[0, NpyIter_IterSize(iter))`. Use the function [`NpyIter_ResetToIterIndexRange`](#c.NpyIter_ResetToIterIndexRange "NpyIter_ResetToIterIndexRange") to specify a range for iteration. This flag can only be used with [`NPY_ITER_EXTERNAL_LOOP`](#c.NPY_ITER_EXTERNAL_LOOP "NPY_ITER_EXTERNAL_LOOP") when [`NPY_ITER_BUFFERED`](#c.NPY_ITER_BUFFERED "NPY_ITER_BUFFERED") is enabled. This is because without buffering, the inner loop is always the size of the innermost iteration dimension, and allowing it to get cut up would require special handling, effectively making it more like the buffered version. 
NPY_ITER_BUFFERED Causes the iterator to store buffering data, and use buffering to satisfy data type, alignment, and byte-order requirements. To buffer an operand, do not specify the [`NPY_ITER_COPY`](#c.NPY_ITER_COPY "NPY_ITER_COPY") or [`NPY_ITER_UPDATEIFCOPY`](#c.NPY_ITER_UPDATEIFCOPY "NPY_ITER_UPDATEIFCOPY") flags, because they will override buffering. Buffering is especially useful for Python code using the iterator, allowing for larger chunks of data at once to amortize the Python interpreter overhead. If used with [`NPY_ITER_EXTERNAL_LOOP`](#c.NPY_ITER_EXTERNAL_LOOP "NPY_ITER_EXTERNAL_LOOP"), the inner loop for the caller may get larger chunks than would be possible without buffering, because of how the strides are laid out. Note that if an operand is given the flag [`NPY_ITER_COPY`](#c.NPY_ITER_COPY "NPY_ITER_COPY") or [`NPY_ITER_UPDATEIFCOPY`](#c.NPY_ITER_UPDATEIFCOPY "NPY_ITER_UPDATEIFCOPY"), a copy will be made in preference to buffering. Buffering will still occur when the array was broadcast so elements need to be duplicated to get a constant stride. In normal buffering, the size of each inner loop is equal to the buffer size, or possibly larger if [`NPY_ITER_GROWINNER`](#c.NPY_ITER_GROWINNER "NPY_ITER_GROWINNER") is specified. If [`NPY_ITER_REDUCE_OK`](#c.NPY_ITER_REDUCE_OK "NPY_ITER_REDUCE_OK") is enabled and a reduction occurs, the inner loops may become smaller depending on the structure of the reduction. NPY_ITER_GROWINNER When buffering is enabled, this allows the size of the inner loop to grow when buffering isn’t necessary. This option is best used if you’re doing a straight pass through all the data, rather than anything with small cache-friendly arrays of temporary values for each inner loop. NPY_ITER_DELAY_BUFALLOC When buffering is enabled, this delays allocation of the buffers until [`NpyIter_Reset`](#c.NpyIter_Reset "NpyIter_Reset") or another reset function is called. 
This flag exists to avoid wasteful copying of buffer data when making multiple copies of a buffered iterator for multi-threaded iteration. Another use of this flag is for setting up reduction operations. After the iterator is created, and a reduction output is allocated automatically by the iterator (be sure to use READWRITE access), its value may be initialized to the reduction unit. Use [`NpyIter_GetOperandArray`](#c.NpyIter_GetOperandArray "NpyIter_GetOperandArray") to get the object. Then, call [`NpyIter_Reset`](#c.NpyIter_Reset "NpyIter_Reset") to allocate and fill the buffers with their initial values. NPY_ITER_COPY_IF_OVERLAP If any write operand has overlap with any read operand, eliminate all overlap by making temporary copies (enabling UPDATEIFCOPY for write operands, if necessary). A pair of operands has overlap if there is a memory address that contains data common to both arrays. Because exact overlap detection has exponential runtime in the number of dimensions, the decision is made based on heuristics, which can give false positives (needless copies in unusual cases) but no false negatives. If any read/write overlap exists, this flag ensures the result of the operation is the same as if all operands were copied. In cases where copies would need to be made, **the result of the computation may be undefined without this flag!** Flags that may be passed in `op_flags[i]`, where `0 <= i < nop`: NPY_ITER_READWRITE NPY_ITER_READONLY NPY_ITER_WRITEONLY Indicate how the user of the iterator will read or write to `op[i]`. Exactly one of these flags must be specified per operand. Using `NPY_ITER_READWRITE` or `NPY_ITER_WRITEONLY` for a user-provided operand may trigger `WRITEBACKIFCOPY` semantics. The data will be written back to the original array when `NpyIter_Deallocate` is called. NPY_ITER_COPY Allow a copy of `op[i]` to be made if it does not meet the data type or alignment requirements as specified by the constructor flags and parameters.
NPY_ITER_UPDATEIFCOPY Triggers [`NPY_ITER_COPY`](#c.NPY_ITER_COPY "NPY_ITER_COPY"), and when an array operand is flagged for writing and is copied, causes the data in a copy to be copied back to `op[i]` when `NpyIter_Deallocate` is called. If the operand is flagged as write-only and a copy is needed, an uninitialized temporary array will be created and then copied back to `op[i]` on calling `NpyIter_Deallocate`, instead of doing the unnecessary copy operation. NPY_ITER_NBO NPY_ITER_ALIGNED NPY_ITER_CONTIG Causes the iterator to provide data for `op[i]` that is in native byte order, aligned according to the dtype requirements, contiguous, or any combination. By default, the iterator produces pointers into the arrays provided, which may be aligned or unaligned, and with any byte order. If copying or buffering is not enabled and the operand data doesn’t satisfy the constraints, an error will be raised. The contiguous constraint applies only to the inner loop; successive inner loops may have arbitrary pointer changes. If the requested data type is in non-native byte order, the NBO flag overrides it and the requested data type is converted to be in native byte order. NPY_ITER_ALLOCATE This is for output arrays, and requires that the flag [`NPY_ITER_WRITEONLY`](#c.NPY_ITER_WRITEONLY "NPY_ITER_WRITEONLY") or [`NPY_ITER_READWRITE`](#c.NPY_ITER_READWRITE "NPY_ITER_READWRITE") be set. If `op[i]` is NULL, creates a new array with the final broadcast dimensions, and a layout matching the iteration order of the iterator. When `op[i]` is NULL, the requested data type `op_dtypes[i]` may be NULL as well, in which case it is automatically generated from the dtypes of the arrays which are flagged as readable. The rules for generating the dtype are the same as for UFuncs. Of special note is handling of byte order in the selected dtype. If there is exactly one input, the input’s dtype is used as is.
Otherwise, if more than one input dtype is combined, the output will be in native byte order. After being allocated with this flag, the caller may retrieve the new array by calling [`NpyIter_GetOperandArray`](#c.NpyIter_GetOperandArray "NpyIter_GetOperandArray") and getting the i-th object in the returned C array. The caller must call Py_INCREF on it to claim a reference to the array. NPY_ITER_NO_SUBTYPE For use with [`NPY_ITER_ALLOCATE`](#c.NPY_ITER_ALLOCATE "NPY_ITER_ALLOCATE"), this flag disables allocating an array subtype for the output, forcing it to be a straight ndarray. TODO: Maybe it would be better to introduce a function `NpyIter_GetWrappedOutput` and remove this flag? NPY_ITER_NO_BROADCAST Ensures that the input or output matches the iteration dimensions exactly. NPY_ITER_ARRAYMASK New in version 1.7. Indicates that this operand is the mask to use for selecting elements when writing to operands which have the [`NPY_ITER_WRITEMASKED`](#c.NPY_ITER_WRITEMASKED "NPY_ITER_WRITEMASKED") flag applied to them. Only one operand may have the [`NPY_ITER_ARRAYMASK`](#c.NPY_ITER_ARRAYMASK "NPY_ITER_ARRAYMASK") flag applied to it. The data type of an operand with this flag should be either [`NPY_BOOL`](dtype#c.NPY_BOOL "NPY_BOOL"), [`NPY_MASK`](dtype#c.NPY_MASK "NPY_MASK"), or a struct dtype whose fields are all valid mask dtypes. In the latter case, it must match up with a struct operand being WRITEMASKED, as it is specifying a mask for each field of that array. This flag only affects writing from the buffer back to the array. This means that if the operand is also [`NPY_ITER_READWRITE`](#c.NPY_ITER_READWRITE "NPY_ITER_READWRITE") or [`NPY_ITER_WRITEONLY`](#c.NPY_ITER_WRITEONLY "NPY_ITER_WRITEONLY"), code doing iteration can write to this operand to control which elements will be untouched and which ones will be modified. This is useful when the mask should be a combination of input masks. NPY_ITER_WRITEMASKED New in version 1.7.
This array is the mask for all [`writemasked`](../generated/numpy.nditer#numpy.nditer "numpy.nditer") operands. Code uses the `writemasked` flag which indicates that only elements where the chosen ARRAYMASK operand is True will be written to. In general, the iterator does not enforce this, it is up to the code doing the iteration to follow that promise. When `writemasked` flag is used, and this operand is buffered, this changes how data is copied from the buffer into the array. A masked copying routine is used, which only copies the elements in the buffer for which `writemasked` returns true from the corresponding element in the ARRAYMASK operand. NPY_ITER_OVERLAP_ASSUME_ELEMENTWISE In memory overlap checks, assume that operands with `NPY_ITER_OVERLAP_ASSUME_ELEMENTWISE` enabled are accessed only in the iterator order. This enables the iterator to reason about data dependency, possibly avoiding unnecessary copies. This flag has effect only if `NPY_ITER_COPY_IF_OVERLAP` is enabled on the iterator. [NpyIter](#c.NpyIter "NpyIter")*NpyIter_AdvancedNew([npy_intp](dtype#c.npy_intp "npy_intp")nop, [PyArrayObject](types-and-structures#c.PyArrayObject "PyArrayObject")**op, [npy_uint32](dtype#c.npy_uint32 "npy_uint32")flags, [NPY_ORDER](array#c.NPY_ORDER "NPY_ORDER")order, [NPY_CASTING](array#c.NPY_CASTING "NPY_CASTING")casting, [npy_uint32](dtype#c.npy_uint32 "npy_uint32")*op_flags, [PyArray_Descr](types-and-structures#c.PyArray_Descr "PyArray_Descr")**op_dtypes, intoa_ndim, int**op_axes, [npy_intp](dtype#c.npy_intp "npy_intp")const*itershape, [npy_intp](dtype#c.npy_intp "npy_intp")buffersize) Extends [`NpyIter_MultiNew`](#c.NpyIter_MultiNew "NpyIter_MultiNew") with several advanced options providing more control over broadcasting and buffering. If -1/NULL values are passed to `oa_ndim`, `op_axes`, `itershape`, and `buffersize`, it is equivalent to [`NpyIter_MultiNew`](#c.NpyIter_MultiNew "NpyIter_MultiNew"). 
The parameter `oa_ndim`, when not zero or -1, specifies the number of dimensions that will be iterated with customized broadcasting. If it is provided, `op_axes` must also be provided, and `itershape` may be provided as well. The `op_axes` parameter lets you control in detail how the axes of the operand arrays get matched together and iterated. In `op_axes`, you must provide an array of `nop` pointers to `oa_ndim`-sized arrays of type `npy_intp`. If an entry in `op_axes` is NULL, normal broadcasting rules will apply. `op_axes[j][i]` stores either a valid axis of `op[j]`, or -1 which means `newaxis`. Within each `op_axes[j]` array, axes may not be repeated. The following example shows how normal broadcasting applies to a 3-D array, a 2-D array, a 1-D array and a scalar. **Note**: Before NumPy 1.8 `oa_ndim == 0` was used for signalling that `op_axes` and `itershape` are unused. This is deprecated and should be replaced with -1. Better backward compatibility may be achieved by using [`NpyIter_MultiNew`](#c.NpyIter_MultiNew "NpyIter_MultiNew") for this case.
```
int oa_ndim = 3;               /* # iteration axes */
int op0_axes[] = {0, 1, 2};    /* 3-D operand */
int op1_axes[] = {-1, 0, 1};   /* 2-D operand */
int op2_axes[] = {-1, -1, 0};  /* 1-D operand */
int op3_axes[] = {-1, -1, -1}; /* 0-D (scalar) operand */
int* op_axes[] = {op0_axes, op1_axes, op2_axes, op3_axes};
```
The `itershape` parameter allows you to force the iterator to have a specific iteration shape. It is an array of length `oa_ndim`. When an entry is negative, its value is determined from the operands. This parameter allows automatically allocated outputs to get additional dimensions which don’t match up with any dimension of an input. If `buffersize` is zero, a default buffer size is used, otherwise it specifies how big of a buffer to use. Buffers which are powers of 2 such as 4096 or 8192 are recommended. Returns NULL if there is an error, otherwise returns the allocated iterator.
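Putting the constructor flags together, a minimal sketch of `NpyIter_MultiNew` usage is the following: summing two broadcast-compatible inputs into an automatically allocated `double` output. The helper name `add_arrays` and its error handling are illustrative, not part of the API; the inputs are assumed non-empty.

```c
#define PY_SSIZE_T_CLEAN
#include <Python.h>
#include <numpy/arrayobject.h>

/* Sketch: broadcast a and b, allocate a double output, write a + b. */
static PyArrayObject *
add_arrays(PyArrayObject *a, PyArrayObject *b)
{
    PyArrayObject *op[3] = {a, b, NULL};  /* NULL => allocate output */
    npy_uint32 op_flags[3] = {
        NPY_ITER_READONLY, NPY_ITER_READONLY,
        NPY_ITER_WRITEONLY | NPY_ITER_ALLOCATE
    };
    PyArray_Descr *dt = PyArray_DescrFromType(NPY_DOUBLE);
    PyArray_Descr *op_dtypes[3] = {dt, dt, dt};

    NpyIter *iter = NpyIter_MultiNew(3, op,
            NPY_ITER_EXTERNAL_LOOP | NPY_ITER_BUFFERED,
            NPY_KEEPORDER, NPY_SAME_KIND_CASTING,
            op_flags, op_dtypes);
    Py_DECREF(dt);
    if (iter == NULL) {
        return NULL;
    }
    NpyIter_IterNextFunc *iternext = NpyIter_GetIterNext(iter, NULL);
    if (iternext == NULL) {
        NpyIter_Deallocate(iter);
        return NULL;
    }
    char **dataptr = NpyIter_GetDataPtrArray(iter);
    npy_intp *stride = NpyIter_GetInnerStrideArray(iter);
    npy_intp *size_ptr = NpyIter_GetInnerLoopSizePtr(iter);

    do {
        char *p0 = dataptr[0], *p1 = dataptr[1], *p2 = dataptr[2];
        npy_intp n = *size_ptr;
        while (n--) {
            *(double *)p2 = *(double *)p0 + *(double *)p1;
            p0 += stride[0];
            p1 += stride[1];
            p2 += stride[2];
        }
    } while (iternext(iter));

    /* Claim a reference to the allocated output before the iterator,
     * which currently holds the only reference, is deallocated. */
    PyArrayObject *out = NpyIter_GetOperandArray(iter)[2];
    Py_INCREF(out);
    if (NpyIter_Deallocate(iter) != NPY_SUCCEED) {
        Py_DECREF(out);
        return NULL;
    }
    return out;
}
```

Because buffering is enabled, integer or non-native-byte-order inputs are cast to `double` on the fly under the `NPY_SAME_KIND_CASTING` rule, without making full-array copies.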
[NpyIter](#c.NpyIter "NpyIter")*NpyIter_Copy([NpyIter](#c.NpyIter "NpyIter")*iter) Makes a copy of the given iterator. This function is provided primarily to enable multi-threaded iteration of the data. *TODO*: Move this to a section about multithreaded iteration. The recommended approach to multithreaded iteration is to first create an iterator with the flags [`NPY_ITER_EXTERNAL_LOOP`](#c.NPY_ITER_EXTERNAL_LOOP "NPY_ITER_EXTERNAL_LOOP"), [`NPY_ITER_RANGED`](#c.NPY_ITER_RANGED "NPY_ITER_RANGED"), [`NPY_ITER_BUFFERED`](#c.NPY_ITER_BUFFERED "NPY_ITER_BUFFERED"), [`NPY_ITER_DELAY_BUFALLOC`](#c.NPY_ITER_DELAY_BUFALLOC "NPY_ITER_DELAY_BUFALLOC"), and possibly [`NPY_ITER_GROWINNER`](#c.NPY_ITER_GROWINNER "NPY_ITER_GROWINNER"). Create a copy of this iterator for each thread (minus one for the first iterator). Then, take the iteration index range `[0, NpyIter_GetIterSize(iter))` and split it up into tasks, for example using a TBB parallel_for loop. When a thread gets a task to execute, it then uses its copy of the iterator by calling [`NpyIter_ResetToIterIndexRange`](#c.NpyIter_ResetToIterIndexRange "NpyIter_ResetToIterIndexRange") and iterating over the full range. When using the iterator in multi-threaded code or in code not holding the Python GIL, care must be taken to only call functions which are safe in that context. [`NpyIter_Copy`](#c.NpyIter_Copy "NpyIter_Copy") cannot be safely called without the Python GIL, because it increments Python references. The `Reset*` and some other functions may be safely called by passing in the `errmsg` parameter as non-NULL, so that the functions will pass back errors through it instead of setting a Python exception. [`NpyIter_Deallocate`](#c.NpyIter_Deallocate "NpyIter_Deallocate") must be called for each copy. intNpyIter_RemoveAxis([NpyIter](#c.NpyIter "NpyIter")*iter, intaxis) Removes an axis from iteration. 
This requires that [`NPY_ITER_MULTI_INDEX`](#c.NPY_ITER_MULTI_INDEX "NPY_ITER_MULTI_INDEX") was set for iterator creation, and does not work if buffering is enabled or an index is being tracked. This function also resets the iterator to its initial state. This is useful for setting up an accumulation loop, for example. The iterator can first be created with all the dimensions, including the accumulation axis, so that the output gets created correctly. Then, the accumulation axis can be removed, and the calculation done in a nested fashion. **WARNING**: This function may change the internal memory layout of the iterator. Any cached functions or pointers from the iterator must be retrieved again! The iterator range will be reset as well. Returns `NPY_SUCCEED` or `NPY_FAIL`. intNpyIter_RemoveMultiIndex([NpyIter](#c.NpyIter "NpyIter")*iter) If the iterator is tracking a multi-index, this strips support for them, and does further iterator optimizations that are possible if multi-indices are not needed. This function also resets the iterator to its initial state. **WARNING**: This function may change the internal memory layout of the iterator. Any cached functions or pointers from the iterator must be retrieved again! After calling this function, [NpyIter_HasMultiIndex](#c.NpyIter_HasMultiIndex "NpyIter_HasMultiIndex")([iter](#c.NpyIter_RemoveMultiIndex "iter")) will return false. Returns `NPY_SUCCEED` or `NPY_FAIL`. intNpyIter_EnableExternalLoop([NpyIter](#c.NpyIter "NpyIter")*iter) If [`NpyIter_RemoveMultiIndex`](#c.NpyIter_RemoveMultiIndex "NpyIter_RemoveMultiIndex") was called, you may want to enable the flag [`NPY_ITER_EXTERNAL_LOOP`](#c.NPY_ITER_EXTERNAL_LOOP "NPY_ITER_EXTERNAL_LOOP"). This flag is not permitted together with [`NPY_ITER_MULTI_INDEX`](#c.NPY_ITER_MULTI_INDEX "NPY_ITER_MULTI_INDEX"), so this function is provided to enable the feature after [`NpyIter_RemoveMultiIndex`](#c.NpyIter_RemoveMultiIndex "NpyIter_RemoveMultiIndex") is called. 
This function also resets the iterator to its initial state. **WARNING**: This function changes the internal logic of the iterator. Any cached functions or pointers from the iterator must be retrieved again! Returns `NPY_SUCCEED` or `NPY_FAIL`. intNpyIter_Deallocate([NpyIter](#c.NpyIter "NpyIter")*iter) Deallocates the iterator object and resolves any needed writebacks. Returns `NPY_SUCCEED` or `NPY_FAIL`. intNpyIter_Reset([NpyIter](#c.NpyIter "NpyIter")*iter, char**errmsg) Resets the iterator back to its initial state, at the beginning of the iteration range. Returns `NPY_SUCCEED` or `NPY_FAIL`. If errmsg is non-NULL, no Python exception is set when `NPY_FAIL` is returned. Instead, *errmsg is set to an error message. When errmsg is non-NULL, the function may be safely called without holding the Python GIL. intNpyIter_ResetToIterIndexRange([NpyIter](#c.NpyIter "NpyIter")*iter, [npy_intp](dtype#c.npy_intp "npy_intp")istart, [npy_intp](dtype#c.npy_intp "npy_intp")iend, char**errmsg) Resets the iterator and restricts it to the `iterindex` range `[istart, iend)`. See [`NpyIter_Copy`](#c.NpyIter_Copy "NpyIter_Copy") for an explanation of how to use this for multi-threaded iteration. This requires that the flag [`NPY_ITER_RANGED`](#c.NPY_ITER_RANGED "NPY_ITER_RANGED") was passed to the iterator constructor. If you want to reset both the `iterindex` range and the base pointers at the same time, you can do the following to avoid extra buffer copying (be sure to add the return code error checks when you copy this code). ``` /* Set to a trivial empty range */ NpyIter_ResetToIterIndexRange(iter, 0, 0); /* Set the base pointers */ NpyIter_ResetBasePointers(iter, baseptrs); /* Set to the desired range */ NpyIter_ResetToIterIndexRange(iter, istart, iend); ``` Returns `NPY_SUCCEED` or `NPY_FAIL`. If errmsg is non-NULL, no Python exception is set when `NPY_FAIL` is returned. Instead, *errmsg is set to an error message. 
When errmsg is non-NULL, the function may be safely called without holding the Python GIL. intNpyIter_ResetBasePointers([NpyIter](#c.NpyIter "NpyIter")*iter, char**baseptrs, char**errmsg) Resets the iterator back to its initial state, but using the values in `baseptrs` for the data instead of the pointers from the arrays being iterated. This function is intended to be used, together with the `op_axes` parameter, by nested iteration code with two or more iterators. Returns `NPY_SUCCEED` or `NPY_FAIL`. If errmsg is non-NULL, no Python exception is set when `NPY_FAIL` is returned. Instead, *errmsg is set to an error message. When errmsg is non-NULL, the function may be safely called without holding the Python GIL. *TODO*: Move the following into a special section on nested iterators. Creating iterators for nested iteration requires some care. All the iterator operands must match exactly, or the calls to [`NpyIter_ResetBasePointers`](#c.NpyIter_ResetBasePointers "NpyIter_ResetBasePointers") will be invalid. This means that automatic copies and output allocation should not be used haphazardly. It is possible to still use the automatic data conversion and casting features of the iterator by creating one of the iterators with all the conversion parameters enabled, then grabbing the allocated operands with the [`NpyIter_GetOperandArray`](#c.NpyIter_GetOperandArray "NpyIter_GetOperandArray") function and passing them into the constructors for the rest of the iterators. **WARNING**: When creating iterators for nested iteration, the code must not use a dimension more than once in the different iterators. If this is done, nested iteration will produce out-of-bounds pointers during iteration. **WARNING**: When creating iterators for nested iteration, buffering can only be applied to the innermost iterator. If a buffered iterator is used as the source for `baseptrs`, it will point into a small buffer instead of the array and the inner iteration will be invalid.
The pattern for using nested iterators is as follows.
```
NpyIter *iter1, *iter2;
NpyIter_IterNextFunc *iternext1, *iternext2;
char **dataptrs1;

/*
 * With the exact same operands, no copies allowed, and
 * no axis in op_axes used both in iter1 and iter2.
 * Buffering may be enabled for iter2, but not for iter1.
 */
iter1 = ...;
iter2 = ...;

iternext1 = NpyIter_GetIterNext(iter1, NULL);
iternext2 = NpyIter_GetIterNext(iter2, NULL);
dataptrs1 = NpyIter_GetDataPtrArray(iter1);

do {
    NpyIter_ResetBasePointers(iter2, dataptrs1, NULL);
    do {
        /* Use the iter2 values */
    } while (iternext2(iter2));
} while (iternext1(iter1));
```
intNpyIter_GotoMultiIndex([NpyIter](#c.NpyIter "NpyIter")*iter, [npy_intp](dtype#c.npy_intp "npy_intp")const*multi_index) Adjusts the iterator to point to the `ndim` indices pointed to by `multi_index`. Returns an error if a multi-index is not being tracked, the indices are out of bounds, or inner loop iteration is disabled. Returns `NPY_SUCCEED` or `NPY_FAIL`. intNpyIter_GotoIndex([NpyIter](#c.NpyIter "NpyIter")*iter, [npy_intp](dtype#c.npy_intp "npy_intp")index) Adjusts the iterator to point to the `index` specified. If the iterator was constructed with the flag [`NPY_ITER_C_INDEX`](#c.NPY_ITER_C_INDEX "NPY_ITER_C_INDEX"), `index` is the C-order index, and if the iterator was constructed with the flag [`NPY_ITER_F_INDEX`](#c.NPY_ITER_F_INDEX "NPY_ITER_F_INDEX"), `index` is the Fortran-order index. Returns an error if there is no index being tracked, the index is out of bounds, or inner loop iteration is disabled. Returns `NPY_SUCCEED` or `NPY_FAIL`. [npy_intp](dtype#c.npy_intp "npy_intp")NpyIter_GetIterSize([NpyIter](#c.NpyIter "NpyIter")*iter) Returns the number of elements being iterated. This is the product of all the dimensions in the shape. When a multi index is being tracked (and `NpyIter_RemoveAxis` may be called) the size may be `-1` to indicate an iterator is too large. Such an iterator is invalid, but may become valid after `NpyIter_RemoveAxis` is called.
It is not necessary to check for this case. [npy_intp](dtype#c.npy_intp "npy_intp")NpyIter_GetIterIndex([NpyIter](#c.NpyIter "NpyIter")*iter) Gets the `iterindex` of the iterator, which is an index matching the iteration order of the iterator. voidNpyIter_GetIterIndexRange([NpyIter](#c.NpyIter "NpyIter")*iter, [npy_intp](dtype#c.npy_intp "npy_intp")*istart, [npy_intp](dtype#c.npy_intp "npy_intp")*iend) Gets the `iterindex` sub-range that is being iterated. If [`NPY_ITER_RANGED`](#c.NPY_ITER_RANGED "NPY_ITER_RANGED") was not specified, this always returns the range `[0, NpyIter_IterSize(iter))`. intNpyIter_GotoIterIndex([NpyIter](#c.NpyIter "NpyIter")*iter, [npy_intp](dtype#c.npy_intp "npy_intp")iterindex) Adjusts the iterator to point to the `iterindex` specified. The IterIndex is an index matching the iteration order of the iterator. Returns an error if the `iterindex` is out of bounds, buffering is enabled, or inner loop iteration is disabled. Returns `NPY_SUCCEED` or `NPY_FAIL`. [npy_bool](dtype#c.npy_bool "npy_bool")NpyIter_HasDelayedBufAlloc([NpyIter](#c.NpyIter "NpyIter")*iter) Returns 1 if the flag [`NPY_ITER_DELAY_BUFALLOC`](#c.NPY_ITER_DELAY_BUFALLOC "NPY_ITER_DELAY_BUFALLOC") was passed to the iterator constructor, and no call to one of the Reset functions has been done yet, 0 otherwise. [npy_bool](dtype#c.npy_bool "npy_bool")NpyIter_HasExternalLoop([NpyIter](#c.NpyIter "NpyIter")*iter) Returns 1 if the caller needs to handle the inner-most 1-dimensional loop, or 0 if the iterator handles all looping. This is controlled by the constructor flag [`NPY_ITER_EXTERNAL_LOOP`](#c.NPY_ITER_EXTERNAL_LOOP "NPY_ITER_EXTERNAL_LOOP") or [`NpyIter_EnableExternalLoop`](#c.NpyIter_EnableExternalLoop "NpyIter_EnableExternalLoop"). [npy_bool](dtype#c.npy_bool "npy_bool")NpyIter_HasMultiIndex([NpyIter](#c.NpyIter "NpyIter")*iter) Returns 1 if the iterator was created with the [`NPY_ITER_MULTI_INDEX`](#c.NPY_ITER_MULTI_INDEX "NPY_ITER_MULTI_INDEX") flag, 0 otherwise. 
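The multi-threaded recipe described above under `NpyIter_Copy` boils down to a per-thread worker of roughly the following shape. This is a sketch: the thread-pool plumbing, the range splitting, and the creation of the iterator copies (which must happen while the GIL is held) are assumed to exist elsewhere; only the worker body, which runs without the GIL and therefore uses the `errmsg` variants, is shown.

```c
#define PY_SSIZE_T_CLEAN
#include <Python.h>
#include <numpy/arrayobject.h>

/* thread_iter is this thread's NpyIter_Copy of an iterator created with
 * NPY_ITER_EXTERNAL_LOOP | NPY_ITER_RANGED | NPY_ITER_BUFFERED |
 * NPY_ITER_DELAY_BUFALLOC. */
static int
worker(NpyIter *thread_iter, npy_intp istart, npy_intp iend)
{
    char *errmsg = NULL;

    if (istart >= iend) {
        return 0;  /* nothing assigned to this thread */
    }
    /* Allocates the delayed buffers and restricts iteration to the
     * half-open iterindex range [istart, iend). */
    if (NpyIter_ResetToIterIndexRange(thread_iter, istart, iend,
                                      &errmsg) != NPY_SUCCEED) {
        return -1;  /* errmsg now points at an error message */
    }
    NpyIter_IterNextFunc *iternext =
            NpyIter_GetIterNext(thread_iter, &errmsg);
    if (iternext == NULL) {
        return -1;
    }
    char **dataptr = NpyIter_GetDataPtrArray(thread_iter);
    npy_intp *stride = NpyIter_GetInnerStrideArray(thread_iter);
    npy_intp *size_ptr = NpyIter_GetInnerLoopSizePtr(thread_iter);

    do {
        npy_intp n = *size_ptr;
        char *p = dataptr[0];
        while (n--) {
            /* process the element at p, e.g. *(double *)p */
            p += stride[0];
        }
    } while (iternext(thread_iter));
    return 0;
}
```

Once the threads have joined, `NpyIter_Deallocate` must still be called on every copy, with the GIL held.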
[npy_bool](dtype#c.npy_bool "npy_bool")NpyIter_HasIndex([NpyIter](#c.NpyIter "NpyIter")*iter) Returns 1 if the iterator was created with the [`NPY_ITER_C_INDEX`](#c.NPY_ITER_C_INDEX "NPY_ITER_C_INDEX") or [`NPY_ITER_F_INDEX`](#c.NPY_ITER_F_INDEX "NPY_ITER_F_INDEX") flag, 0 otherwise. [npy_bool](dtype#c.npy_bool "npy_bool")NpyIter_RequiresBuffering([NpyIter](#c.NpyIter "NpyIter")*iter) Returns 1 if the iterator requires buffering, which occurs when an operand needs conversion or alignment and so cannot be used directly. [npy_bool](dtype#c.npy_bool "npy_bool")NpyIter_IsBuffered([NpyIter](#c.NpyIter "NpyIter")*iter) Returns 1 if the iterator was created with the [`NPY_ITER_BUFFERED`](#c.NPY_ITER_BUFFERED "NPY_ITER_BUFFERED") flag, 0 otherwise. [npy_bool](dtype#c.npy_bool "npy_bool")NpyIter_IsGrowInner([NpyIter](#c.NpyIter "NpyIter")*iter) Returns 1 if the iterator was created with the [`NPY_ITER_GROWINNER`](#c.NPY_ITER_GROWINNER "NPY_ITER_GROWINNER") flag, 0 otherwise. [npy_intp](dtype#c.npy_intp "npy_intp")NpyIter_GetBufferSize([NpyIter](#c.NpyIter "NpyIter")*iter) If the iterator is buffered, returns the size of the buffer being used, otherwise returns 0. intNpyIter_GetNDim([NpyIter](#c.NpyIter "NpyIter")*iter) Returns the number of dimensions being iterated. If a multi-index was not requested in the iterator constructor, this value may be smaller than the number of dimensions in the original objects. intNpyIter_GetNOp([NpyIter](#c.NpyIter "NpyIter")*iter) Returns the number of operands in the iterator. [npy_intp](dtype#c.npy_intp "npy_intp")*NpyIter_GetAxisStrideArray([NpyIter](#c.NpyIter "NpyIter")*iter, intaxis) Gets the array of strides for the specified axis. Requires that the iterator be tracking a multi-index, and that buffering not be enabled. This may be used when you want to match up operand axes in some fashion, then remove them with [`NpyIter_RemoveAxis`](#c.NpyIter_RemoveAxis "NpyIter_RemoveAxis") to handle their processing manually. 
By calling this function before removing the axes, you can get the strides for the manual processing. Returns `NULL` on error. intNpyIter_GetShape([NpyIter](#c.NpyIter "NpyIter")*iter, [npy_intp](dtype#c.npy_intp "npy_intp")*outshape) Returns the broadcast shape of the iterator in `outshape`. This can only be called on an iterator which is tracking a multi-index. Returns `NPY_SUCCEED` or `NPY_FAIL`. [PyArray_Descr](types-and-structures#c.PyArray_Descr "PyArray_Descr")**NpyIter_GetDescrArray([NpyIter](#c.NpyIter "NpyIter")*iter) This gives back a pointer to the `nop` data type Descrs for the objects being iterated. The result points into `iter`, so the caller does not gain any references to the Descrs. This pointer may be cached before the iteration loop, calling `iternext` will not change it. [PyObject](https://docs.python.org/3/c-api/structures.html#c.PyObject "(in Python v3.10)")**NpyIter_GetOperandArray([NpyIter](#c.NpyIter "NpyIter")*iter) This gives back a pointer to the `nop` operand PyObjects that are being iterated. The result points into `iter`, so the caller does not gain any references to the PyObjects. [PyObject](https://docs.python.org/3/c-api/structures.html#c.PyObject "(in Python v3.10)")*NpyIter_GetIterView([NpyIter](#c.NpyIter "NpyIter")*iter, [npy_intp](dtype#c.npy_intp "npy_intp")i) This gives back a reference to a new ndarray view, which is a view into the i-th object in the array [`NpyIter_GetOperandArray`](#c.NpyIter_GetOperandArray "NpyIter_GetOperandArray"), whose dimensions and strides match the internal optimized iteration pattern. A C-order iteration of this view is equivalent to the iterator’s iteration order. For example, if an iterator was created with a single array as its input, and it was possible to rearrange all its axes and then collapse it into a single strided iteration, this would return a view that is a one-dimensional array. voidNpyIter_GetReadFlags([NpyIter](#c.NpyIter "NpyIter")*iter, char*outreadflags) Fills `nop` flags. 
Sets `outreadflags[i]` to 1 if `op[i]` can be read from, and to 0 if not. voidNpyIter_GetWriteFlags([NpyIter](#c.NpyIter "NpyIter")*iter, char*outwriteflags) Fills `nop` flags. Sets `outwriteflags[i]` to 1 if `op[i]` can be written to, and to 0 if not. intNpyIter_CreateCompatibleStrides([NpyIter](#c.NpyIter "NpyIter")*iter, [npy_intp](dtype#c.npy_intp "npy_intp")itemsize, [npy_intp](dtype#c.npy_intp "npy_intp")*outstrides) Builds a set of strides which are the same as the strides of an output array created using the [`NPY_ITER_ALLOCATE`](#c.NPY_ITER_ALLOCATE "NPY_ITER_ALLOCATE") flag, where NULL was passed for op_axes. This is for data packed contiguously, but not necessarily in C or Fortran order. This should be used together with [`NpyIter_GetShape`](#c.NpyIter_GetShape "NpyIter_GetShape") and [`NpyIter_GetNDim`](#c.NpyIter_GetNDim "NpyIter_GetNDim") with the flag [`NPY_ITER_MULTI_INDEX`](#c.NPY_ITER_MULTI_INDEX "NPY_ITER_MULTI_INDEX") passed into the constructor. A use case for this function is to match the shape and layout of the iterator and tack on one or more dimensions. For example, in order to generate a vector per input value for a numerical gradient, you pass in ndim*itemsize for itemsize, then add another dimension to the end with size ndim and stride itemsize. To do the Hessian matrix, you do the same thing but add two dimensions, or take advantage of the symmetry and pack it into 1 dimension with a particular encoding. This function may only be called if the iterator is tracking a multi-index and if [`NPY_ITER_DONT_NEGATE_STRIDES`](#c.NPY_ITER_DONT_NEGATE_STRIDES "NPY_ITER_DONT_NEGATE_STRIDES") was used to prevent an axis from being iterated in reverse order. If an array is created with this method, simply adding ‘itemsize’ for each iteration will traverse the new array matching the iterator. Returns `NPY_SUCCEED` or `NPY_FAIL`. 
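The gradient-vector use case just described can be sketched as follows. All names here are illustrative; the iterator is assumed to track a multi-index, to have been created with `NPY_ITER_DONT_NEGATE_STRIDES`, and to have fewer than `NPY_MAXDIMS` dimensions.

```c
#define PY_SSIZE_T_CLEAN
#include <Python.h>
#include <numpy/arrayobject.h>

/* Allocate a double output with the iterator's shape and layout, plus
 * a trailing axis of length ndim holding one gradient component per
 * input dimension. */
static PyObject *
alloc_gradient_output(NpyIter *iter)
{
    int ndim = NpyIter_GetNDim(iter);
    npy_intp itemsize = sizeof(double);
    npy_intp shape[NPY_MAXDIMS], strides[NPY_MAXDIMS];

    /* Shape and compatible strides of the iteration itself, treating
     * each "element" as a length-ndim gradient vector. */
    if (NpyIter_GetShape(iter, shape) != NPY_SUCCEED ||
            NpyIter_CreateCompatibleStrides(iter, ndim * itemsize,
                                            strides) != NPY_SUCCEED) {
        return NULL;
    }
    /* Tack the gradient axis onto the end: ndim components,
     * adjacent in memory. */
    shape[ndim] = ndim;
    strides[ndim] = itemsize;

    char *data = PyDataMem_NEW(
            PyArray_MultiplyList(shape, ndim + 1) * itemsize);
    if (data == NULL) {
        return PyErr_NoMemory();
    }
    PyObject *ret = PyArray_NewFromDescr(
            &PyArray_Type, PyArray_DescrFromType(NPY_DOUBLE),
            ndim + 1, shape, strides, data,
            NPY_ARRAY_WRITEABLE, NULL);
    if (ret == NULL) {
        PyDataMem_FREE(data);
        return NULL;
    }
    /* Let the array own (and eventually free) the buffer. */
    PyArray_ENABLEFLAGS((PyArrayObject *)ret, NPY_ARRAY_OWNDATA);
    return ret;
}
```

Because the first `ndim` strides came from `NpyIter_CreateCompatibleStrides`, advancing by `ndim * itemsize` per iteration step traverses the new array in the iterator's order, as described above.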
[npy_bool](dtype#c.npy_bool "npy_bool") NpyIter_IsFirstVisit([NpyIter](#c.NpyIter "NpyIter") *iter, int iop)

New in version 1.7.

Checks whether the elements of the specified reduction operand that the iterator currently points at are being seen for the first time. The function returns a reasonable answer for reduction operands and when buffering is disabled. The answer may be incorrect for buffered non-reduction operands.

This function is intended to be used in EXTERNAL_LOOP mode only, and will produce some wrong answers when that mode is not enabled. If this function returns true, the caller should also check the inner loop stride of the operand, because if that stride is 0, then only the first element of the innermost external loop is being visited for the first time.

*WARNING*: For performance reasons, ‘iop’ is not bounds-checked, it is not confirmed that ‘iop’ is actually a reduction operand, and it is not confirmed that EXTERNAL_LOOP mode is enabled. These checks are the responsibility of the caller, and should be done outside of any inner loops.

Functions For Iteration
-----------------------

[NpyIter_IterNextFunc](#c.NpyIter_IterNextFunc "NpyIter_IterNextFunc") *NpyIter_GetIterNext([NpyIter](#c.NpyIter "NpyIter") *iter, char **errmsg)

Returns a function pointer for iteration. A specialized version of the function pointer may be calculated by this function instead of being stored in the iterator structure. Thus, to get good performance, it is required that the function pointer be saved in a variable rather than retrieved for each loop iteration.

Returns NULL if there is an error. If errmsg is non-NULL, no Python exception is set when `NPY_FAIL` is returned. Instead, *errmsg is set to an error message. When errmsg is non-NULL, the function may be safely called without holding the Python GIL.

The typical looping construct is as follows.
```
NpyIter_IterNextFunc *iternext = NpyIter_GetIterNext(iter, NULL);
char** dataptr = NpyIter_GetDataPtrArray(iter);

do {
    /* use the addresses dataptr[0], ... dataptr[nop-1] */
} while(iternext(iter));
```

When [`NPY_ITER_EXTERNAL_LOOP`](#c.NPY_ITER_EXTERNAL_LOOP "NPY_ITER_EXTERNAL_LOOP") is specified, the typical inner loop construct is as follows.

```
NpyIter_IterNextFunc *iternext = NpyIter_GetIterNext(iter, NULL);
char** dataptr = NpyIter_GetDataPtrArray(iter);
npy_intp* stride = NpyIter_GetInnerStrideArray(iter);
npy_intp* size_ptr = NpyIter_GetInnerLoopSizePtr(iter), size;
npy_intp iop, nop = NpyIter_GetNOp(iter);

do {
    size = *size_ptr;
    while (size--) {
        /* use the addresses dataptr[0], ... dataptr[nop-1] */
        for (iop = 0; iop < nop; ++iop) {
            dataptr[iop] += stride[iop];
        }
    }
} while (iternext(iter));
```

Observe that we are using the dataptr array inside the iterator, not copying the values to a local temporary. This is possible because when `iternext()` is called, these pointers will be overwritten with fresh values, not incrementally updated.

If a compile-time fixed buffer is being used (both flags [`NPY_ITER_BUFFERED`](#c.NPY_ITER_BUFFERED "NPY_ITER_BUFFERED") and [`NPY_ITER_EXTERNAL_LOOP`](#c.NPY_ITER_EXTERNAL_LOOP "NPY_ITER_EXTERNAL_LOOP")), the inner size may be used as a signal as well. The size is guaranteed to become zero when `iternext()` returns false, enabling the following loop construct. Note that if you use this construct, you should not pass [`NPY_ITER_GROWINNER`](#c.NPY_ITER_GROWINNER "NPY_ITER_GROWINNER") as a flag, because it will cause larger sizes under some circumstances.
```
/* The constructor should have buffersize passed as this value */
#define FIXED_BUFFER_SIZE 1024

NpyIter_IterNextFunc *iternext = NpyIter_GetIterNext(iter, NULL);
char **dataptr = NpyIter_GetDataPtrArray(iter);
npy_intp *stride = NpyIter_GetInnerStrideArray(iter);
npy_intp *size_ptr = NpyIter_GetInnerLoopSizePtr(iter), size;
npy_intp i, iop, nop = NpyIter_GetNOp(iter);

/* One loop with a fixed inner size */
size = *size_ptr;
while (size == FIXED_BUFFER_SIZE) {
    /*
     * This loop could be manually unrolled by a factor
     * which divides into FIXED_BUFFER_SIZE
     */
    for (i = 0; i < FIXED_BUFFER_SIZE; ++i) {
        /* use the addresses dataptr[0], ... dataptr[nop-1] */
        for (iop = 0; iop < nop; ++iop) {
            dataptr[iop] += stride[iop];
        }
    }
    iternext(iter);
    size = *size_ptr;
}

/* Finish-up loop with variable inner size */
if (size > 0) do {
    size = *size_ptr;
    while (size--) {
        /* use the addresses dataptr[0], ... dataptr[nop-1] */
        for (iop = 0; iop < nop; ++iop) {
            dataptr[iop] += stride[iop];
        }
    }
} while (iternext(iter));
```

[NpyIter_GetMultiIndexFunc](#c.NpyIter_GetMultiIndexFunc "NpyIter_GetMultiIndexFunc") *NpyIter_GetGetMultiIndex([NpyIter](#c.NpyIter "NpyIter") *iter, char **errmsg)

Returns a function pointer for getting the current multi-index of the iterator. Returns NULL if the iterator is not tracking a multi-index. It is recommended that this function pointer be cached in a local variable before the iteration loop.

Returns NULL if there is an error. If errmsg is non-NULL, no Python exception is set when `NPY_FAIL` is returned. Instead, *errmsg is set to an error message. When errmsg is non-NULL, the function may be safely called without holding the Python GIL.

char **NpyIter_GetDataPtrArray([NpyIter](#c.NpyIter "NpyIter") *iter)

This gives back a pointer to the `nop` data pointers. If [`NPY_ITER_EXTERNAL_LOOP`](#c.NPY_ITER_EXTERNAL_LOOP "NPY_ITER_EXTERNAL_LOOP") was not specified, each data pointer points to the current data item of the iterator.
If [`NPY_ITER_EXTERNAL_LOOP`](#c.NPY_ITER_EXTERNAL_LOOP "NPY_ITER_EXTERNAL_LOOP") was specified, each data pointer points to the first data item of the inner loop. This pointer may be cached before the iteration loop; calling `iternext` will not change it. This function may be safely called without holding the Python GIL.

char **NpyIter_GetInitialDataPtrArray([NpyIter](#c.NpyIter "NpyIter") *iter)

Gets the array of data pointers directly into the arrays (never into the buffers), corresponding to iteration index 0. These pointers are different from the pointers accepted by `NpyIter_ResetBasePointers`, because the direction along some axes may have been reversed. This function may be safely called without holding the Python GIL.

[npy_intp](dtype#c.npy_intp "npy_intp") *NpyIter_GetIndexPtr([NpyIter](#c.NpyIter "NpyIter") *iter)

This gives back a pointer to the index being tracked, or NULL if no index is being tracked. It is only usable if one of the flags [`NPY_ITER_C_INDEX`](#c.NPY_ITER_C_INDEX "NPY_ITER_C_INDEX") or [`NPY_ITER_F_INDEX`](#c.NPY_ITER_F_INDEX "NPY_ITER_F_INDEX") were specified during construction.

When the flag [`NPY_ITER_EXTERNAL_LOOP`](#c.NPY_ITER_EXTERNAL_LOOP "NPY_ITER_EXTERNAL_LOOP") is used, the code needs to know the parameters for doing the inner loop. These functions provide that information.

[npy_intp](dtype#c.npy_intp "npy_intp") *NpyIter_GetInnerStrideArray([NpyIter](#c.NpyIter "NpyIter") *iter)

Returns a pointer to an array of the `nop` strides, one for each iterated object, to be used by the inner loop. This pointer may be cached before the iteration loop; calling `iternext` will not change it. This function may be safely called without holding the Python GIL.

**WARNING**: While the pointer may be cached, its values may change if the iterator is buffered.

[npy_intp](dtype#c.npy_intp "npy_intp") *NpyIter_GetInnerLoopSizePtr([NpyIter](#c.NpyIter "NpyIter") *iter)

Returns a pointer to the number of iterations the inner loop should execute.
This address may be cached before the iteration loop, calling `iternext` will not change it. The value itself may change during iteration, in particular if buffering is enabled. This function may be safely called without holding the Python GIL. voidNpyIter_GetInnerFixedStrideArray([NpyIter](#c.NpyIter "NpyIter")*iter, [npy_intp](dtype#c.npy_intp "npy_intp")*out_strides) Gets an array of strides which are fixed, or will not change during the entire iteration. For strides that may change, the value NPY_MAX_INTP is placed in the stride. Once the iterator is prepared for iteration (after a reset if [`NPY_ITER_DELAY_BUFALLOC`](#c.NPY_ITER_DELAY_BUFALLOC "NPY_ITER_DELAY_BUFALLOC") was used), call this to get the strides which may be used to select a fast inner loop function. For example, if the stride is 0, that means the inner loop can always load its value into a variable once, then use the variable throughout the loop, or if the stride equals the itemsize, a contiguous version for that operand may be used. This function may be safely called without holding the Python GIL. Converting from Previous NumPy Iterators ---------------------------------------- The old iterator API includes functions like PyArrayIter_Check, PyArray_Iter* and PyArray_ITER_*. The multi-iterator array includes PyArray_MultiIter*, PyArray_Broadcast, and PyArray_RemoveSmallest. The new iterator design replaces all of this functionality with a single object and associated API. One goal of the new API is that all uses of the existing iterator should be replaceable with the new iterator without significant effort. In 1.6, the major exception to this is the neighborhood iterator, which does not have corresponding features in this iterator. 
Here is a conversion table for which functions to use with the new iterator:

| Old API | New iterator API |
| --- | --- |
| *Iterator Functions* | |
| [`PyArray_IterNew`](array#c.PyArray_IterNew "PyArray_IterNew") | [`NpyIter_New`](#c.NpyIter_New "NpyIter_New") |
| [`PyArray_IterAllButAxis`](array#c.PyArray_IterAllButAxis "PyArray_IterAllButAxis") | [`NpyIter_New`](#c.NpyIter_New "NpyIter_New") + `axes` parameter **or** Iterator flag [`NPY_ITER_EXTERNAL_LOOP`](#c.NPY_ITER_EXTERNAL_LOOP "NPY_ITER_EXTERNAL_LOOP") |
| [`PyArray_BroadcastToShape`](array#c.PyArray_BroadcastToShape "PyArray_BroadcastToShape") | **NOT SUPPORTED** (Use the support for multiple operands instead.) |
| [`PyArrayIter_Check`](array#c.PyArrayIter_Check "PyArrayIter_Check") | Will need to add this in Python exposure |
| [`PyArray_ITER_RESET`](array#c.PyArray_ITER_RESET "PyArray_ITER_RESET") | [`NpyIter_Reset`](#c.NpyIter_Reset "NpyIter_Reset") |
| [`PyArray_ITER_NEXT`](array#c.PyArray_ITER_NEXT "PyArray_ITER_NEXT") | Function pointer from [`NpyIter_GetIterNext`](#c.NpyIter_GetIterNext "NpyIter_GetIterNext") |
| [`PyArray_ITER_DATA`](array#c.PyArray_ITER_DATA "PyArray_ITER_DATA") | [`NpyIter_GetDataPtrArray`](#c.NpyIter_GetDataPtrArray "NpyIter_GetDataPtrArray") |
| [`PyArray_ITER_GOTO`](array#c.PyArray_ITER_GOTO "PyArray_ITER_GOTO") | [`NpyIter_GotoMultiIndex`](#c.NpyIter_GotoMultiIndex "NpyIter_GotoMultiIndex") |
| [`PyArray_ITER_GOTO1D`](array#c.PyArray_ITER_GOTO1D "PyArray_ITER_GOTO1D") | [`NpyIter_GotoIndex`](#c.NpyIter_GotoIndex "NpyIter_GotoIndex") or [`NpyIter_GotoIterIndex`](#c.NpyIter_GotoIterIndex "NpyIter_GotoIterIndex") |
| [`PyArray_ITER_NOTDONE`](array#c.PyArray_ITER_NOTDONE "PyArray_ITER_NOTDONE") | Return value of `iternext` function pointer |
| *Multi-iterator Functions* | |
| [`PyArray_MultiIterNew`](array#c.PyArray_MultiIterNew "PyArray_MultiIterNew") | [`NpyIter_MultiNew`](#c.NpyIter_MultiNew "NpyIter_MultiNew") |
| [`PyArray_MultiIter_RESET`](array#c.PyArray_MultiIter_RESET "PyArray_MultiIter_RESET") | [`NpyIter_Reset`](#c.NpyIter_Reset "NpyIter_Reset") |
| [`PyArray_MultiIter_NEXT`](array#c.PyArray_MultiIter_NEXT "PyArray_MultiIter_NEXT") | Function pointer from [`NpyIter_GetIterNext`](#c.NpyIter_GetIterNext "NpyIter_GetIterNext") |
| [`PyArray_MultiIter_DATA`](array#c.PyArray_MultiIter_DATA "PyArray_MultiIter_DATA") | [`NpyIter_GetDataPtrArray`](#c.NpyIter_GetDataPtrArray "NpyIter_GetDataPtrArray") |
| [`PyArray_MultiIter_NEXTi`](array#c.PyArray_MultiIter_NEXTi "PyArray_MultiIter_NEXTi") | **NOT SUPPORTED** (always lock-step iteration) |
| [`PyArray_MultiIter_GOTO`](array#c.PyArray_MultiIter_GOTO "PyArray_MultiIter_GOTO") | [`NpyIter_GotoMultiIndex`](#c.NpyIter_GotoMultiIndex "NpyIter_GotoMultiIndex") |
| [`PyArray_MultiIter_GOTO1D`](array#c.PyArray_MultiIter_GOTO1D "PyArray_MultiIter_GOTO1D") | [`NpyIter_GotoIndex`](#c.NpyIter_GotoIndex "NpyIter_GotoIndex") or [`NpyIter_GotoIterIndex`](#c.NpyIter_GotoIterIndex "NpyIter_GotoIterIndex") |
| [`PyArray_MultiIter_NOTDONE`](array#c.PyArray_MultiIter_NOTDONE "PyArray_MultiIter_NOTDONE") | Return value of `iternext` function pointer |
| [`PyArray_Broadcast`](array#c.PyArray_Broadcast "PyArray_Broadcast") | Handled by [`NpyIter_MultiNew`](#c.NpyIter_MultiNew "NpyIter_MultiNew") |
| [`PyArray_RemoveSmallest`](array#c.PyArray_RemoveSmallest "PyArray_RemoveSmallest") | Iterator flag [`NPY_ITER_EXTERNAL_LOOP`](#c.NPY_ITER_EXTERNAL_LOOP "NPY_ITER_EXTERNAL_LOOP") |
| *Other Functions* | |
| [`PyArray_ConvertToCommonType`](array#c.PyArray_ConvertToCommonType "PyArray_ConvertToCommonType") | Iterator flag [`NPY_ITER_COMMON_DTYPE`](#c.NPY_ITER_COMMON_DTYPE "NPY_ITER_COMMON_DTYPE") |

© 2005–2022 NumPy Developers Licensed under the 3-clause BSD License.
<https://numpy.org/doc/1.23/reference/c-api/iterator.html>

UFunc API
=========

Constants
---------

`UFUNC_ERR_{HANDLER}`

UFUNC_ERR_IGNORE, UFUNC_ERR_WARN, UFUNC_ERR_RAISE, UFUNC_ERR_CALL

`UFUNC_{THING}_{ERR}`

UFUNC_MASK_DIVIDEBYZERO, UFUNC_MASK_OVERFLOW, UFUNC_MASK_UNDERFLOW, UFUNC_MASK_INVALID, UFUNC_SHIFT_DIVIDEBYZERO, UFUNC_SHIFT_OVERFLOW, UFUNC_SHIFT_UNDERFLOW, UFUNC_SHIFT_INVALID, UFUNC_FPE_DIVIDEBYZERO, UFUNC_FPE_OVERFLOW, UFUNC_FPE_UNDERFLOW, UFUNC_FPE_INVALID

`PyUFunc_{VALUE}`

PyUFunc_One, PyUFunc_Zero, PyUFunc_MinusOne, PyUFunc_ReorderableNone, PyUFunc_None, PyUFunc_IdentityValue

Macros
------

NPY_LOOP_BEGIN_THREADS

Used in universal function code to release the Python GIL only if loop->obj is not true (*i.e.* this is not an OBJECT array loop). Requires use of [`NPY_BEGIN_THREADS_DEF`](array#c.NPY_BEGIN_THREADS_DEF "NPY_BEGIN_THREADS_DEF") in the variable declaration area.

NPY_LOOP_END_THREADS

Used in universal function code to re-acquire the Python GIL if it was released (because loop->obj was not true).

Types
-----

type PyUFuncGenericFunction

Pointers to functions that actually implement the underlying (element-by-element) function \(N\) times with the following signature:

void loopfunc(char **args, [npy_intp](dtype#c.npy_intp "npy_intp") const *dimensions, [npy_intp](dtype#c.npy_intp "npy_intp") const *steps, void *data)

*args*: An array of pointers to the actual data for the input and output arrays. The input arguments are given first, followed by the output arguments.

*dimensions*: A pointer to the size of the dimension over which this function is looping.

*steps*: A pointer to the number of bytes to jump to get to the next element in this dimension for each of the input and output arguments.

*data*: Arbitrary data (extra arguments, function names, *etc.*) that can be stored with the ufunc and will be passed in when it is called. May be `NULL`.

Changed in version 1.23.0: Accepts `NULL` `data` in addition to an array of `NULL` values.
This is an example of a func specialized for addition of doubles returning doubles. ``` static void double_add(char **args, npy_intp const *dimensions, npy_intp const *steps, void *extra) { npy_intp i; npy_intp is1 = steps[0], is2 = steps[1]; npy_intp os = steps[2], n = dimensions[0]; char *i1 = args[0], *i2 = args[1], *op = args[2]; for (i = 0; i < n; i++) { *((double *)op) = *((double *)i1) + *((double *)i2); i1 += is1; i2 += is2; op += os; } } ``` Functions --------- [PyObject](https://docs.python.org/3/c-api/structures.html#c.PyObject "(in Python v3.10)")*PyUFunc_FromFuncAndData([PyUFuncGenericFunction](#c.PyUFuncGenericFunction "PyUFuncGenericFunction")*func, void**data, char*types, intntypes, intnin, intnout, intidentity, char*name, char*doc, intunused) Create a new broadcasting universal function from required variables. Each ufunc builds around the notion of an element-by-element operation. Each ufunc object contains pointers to 1-d loops implementing the basic functionality for each supported type. Note The *func*, *data*, *types*, *name*, and *doc* arguments are not copied by [`PyUFunc_FromFuncAndData`](#c.PyUFunc_FromFuncAndData "PyUFunc_FromFuncAndData"). The caller must ensure that the memory used by these arrays is not freed as long as the ufunc object is alive. Parameters * **func** – Must point to an array containing *ntypes* [`PyUFuncGenericFunction`](#c.PyUFuncGenericFunction "PyUFuncGenericFunction") elements. * **data** – Should be `NULL` or a pointer to an array of size *ntypes*. This array may contain arbitrary extra-data to be passed to the corresponding loop function in the func array, including `NULL`. * **types** – Length `(nin + nout) * ntypes` array of `char` encoding the [`numpy.dtype.num`](../generated/numpy.dtype.num#numpy.dtype.num "numpy.dtype.num") (built-in only) that the corresponding function in the `func` array accepts. 
For instance, for a comparison ufunc with three `ntypes`, two `nin` and one `nout`, where the first function accepts [`numpy.int32`](../arrays.scalars#numpy.int32 "numpy.int32") and the second [`numpy.int64`](../arrays.scalars#numpy.int64 "numpy.int64"), with both returning [`numpy.bool_`](../arrays.scalars#numpy.bool_ "numpy.bool_"), `types` would be `(char[]) {5, 5, 0, 7, 7, 0}` since `NPY_INT32` is 5, `NPY_INT64` is 7, and `NPY_BOOL` is 0. The bit-width names can also be used (e.g. [`NPY_INT32`](dtype#c.NPY_INT32 "NPY_INT32"), [`NPY_COMPLEX128`](dtype#c.NPY_COMPLEX128 "NPY_COMPLEX128") ) if desired. [Type casting rules](../../user/basics.ufuncs#ufuncs-casting) will be used at runtime to find the first `func` callable by the input/output provided. * **ntypes** – How many different data-type-specific functions the ufunc has implemented. * **nin** – The number of inputs to this operation. * **nout** – The number of outputs * **identity** – Either [`PyUFunc_One`](#c.PyUFunc_One "PyUFunc_One"), [`PyUFunc_Zero`](#c.PyUFunc_Zero "PyUFunc_Zero"), [`PyUFunc_MinusOne`](#c.PyUFunc_MinusOne "PyUFunc_MinusOne"), or [`PyUFunc_None`](#c.PyUFunc_None "PyUFunc_None"). This specifies what should be returned when an empty array is passed to the reduce method of the ufunc. The special value [`PyUFunc_IdentityValue`](#c.PyUFunc_IdentityValue "PyUFunc_IdentityValue") may only be used with the [`PyUFunc_FromFuncAndDataAndSignatureAndIdentity`](#c.PyUFunc_FromFuncAndDataAndSignatureAndIdentity "PyUFunc_FromFuncAndDataAndSignatureAndIdentity") method, to allow an arbitrary python object to be used as the identity. * **name** – The name for the ufunc as a `NULL` terminated string. Specifying a name of ‘add’ or ‘multiply’ enables a special behavior for integer-typed reductions when no dtype is given. 
If the input type is an integer (or boolean) data type smaller than the size of the [`numpy.int_`](../arrays.scalars#numpy.int_ "numpy.int_") data type, it will be internally upcast to the [`numpy.int_`](../arrays.scalars#numpy.int_ "numpy.int_") (or [`numpy.uint`](../arrays.scalars#numpy.uint "numpy.uint")) data type.
* **doc** – Allows passing in a documentation string to be stored with the ufunc. The documentation string should not contain the name of the function or the calling signature, as that will be dynamically determined from the object and available when accessing the **__doc__** attribute of the ufunc.
* **unused** – Unused and present for backwards compatibility of the C-API.

[PyObject](https://docs.python.org/3/c-api/structures.html#c.PyObject "(in Python v3.10)") *PyUFunc_FromFuncAndDataAndSignature([PyUFuncGenericFunction](#c.PyUFuncGenericFunction "PyUFuncGenericFunction") *func, void **data, char *types, int ntypes, int nin, int nout, int identity, char *name, char *doc, int unused, char *signature)

This function is very similar to PyUFunc_FromFuncAndData above, but has an extra *signature* argument, to define [generalized universal functions](generalized-ufuncs#c-api-generalized-ufuncs). Similarly to how ufuncs are built around an element-by-element operation, gufuncs are built around subarray-by-subarray operations, the [signature](generalized-ufuncs#details-of-signature) defining the subarrays to operate on.

Parameters

* **signature** – The signature for the new gufunc. Setting it to NULL is equivalent to calling PyUFunc_FromFuncAndData. A copy of the string is made, so the passed in buffer can be freed.
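The flat layout of the `types` array can be made concrete with a short self-contained sketch. `find_loop` below is a hypothetical helper, not a NumPy API, and it only checks for exact matches, whereas NumPy’s real resolution also applies type casting rules to find the first callable loop.

```c
/* Row t of the flat `types` array holds the nin input type codes
 * followed by the nout output type codes for funcs[t]. Return the
 * index of the first row whose input codes match exactly, or -1. */
int find_loop(const char *types, int ntypes, int nin, int nout,
              const char *input_codes)
{
    int row = nin + nout;
    for (int t = 0; t < ntypes; ++t) {
        int match = 1;
        for (int i = 0; i < nin; ++i) {
            if (types[t * row + i] != input_codes[i]) {
                match = 0;
                break;
            }
        }
        if (match) {
            return t;   /* index into the parallel funcs[]/data[] arrays */
        }
    }
    return -1;
}
```

With the comparison-ufunc example above, `types = {5, 5, 0, 7, 7, 0}`, two `NPY_INT64` inputs (code 7) select the second loop, index 1.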
[PyObject](https://docs.python.org/3/c-api/structures.html#c.PyObject "(in Python v3.10)") *PyUFunc_FromFuncAndDataAndSignatureAndIdentity([PyUFuncGenericFunction](#c.PyUFuncGenericFunction "PyUFuncGenericFunction") *func, void **data, char *types, int ntypes, int nin, int nout, int identity, char *name, char *doc, int unused, char *signature, [PyObject](https://docs.python.org/3/c-api/structures.html#c.PyObject "(in Python v3.10)") *identity_value)

This function is very similar to `PyUFunc_FromFuncAndDataAndSignature` above, but has an extra *identity_value* argument, to define an arbitrary identity for the ufunc when `identity` is passed as `PyUFunc_IdentityValue`.

Parameters

* **identity_value** – The identity for the new gufunc. Must be passed as `NULL` unless the `identity` argument is `PyUFunc_IdentityValue`. Setting it to NULL is equivalent to calling PyUFunc_FromFuncAndDataAndSignature.

int PyUFunc_RegisterLoopForType([PyUFuncObject](types-and-structures#c.PyUFuncObject "PyUFuncObject") *ufunc, int usertype, [PyUFuncGenericFunction](#c.PyUFuncGenericFunction "PyUFuncGenericFunction") function, int *arg_types, void *data)

This function allows the user to register a 1-d loop with an already-created ufunc to be used whenever the ufunc is called with any of its input arguments as the user-defined data-type. This is needed in order to make ufuncs work with user-defined data-types. The data-type must have been previously registered with the numpy system. The loop is passed in as *function*. This loop can take arbitrary data which should be passed in as *data*. The data-types the loop requires are passed in as *arg_types* which must be a pointer to memory at least as large as ufunc->nargs.
intPyUFunc_RegisterLoopForDescr([PyUFuncObject](types-and-structures#c.PyUFuncObject "PyUFuncObject")*ufunc, [PyArray_Descr](types-and-structures#c.PyArray_Descr "PyArray_Descr")*userdtype, [PyUFuncGenericFunction](#c.PyUFuncGenericFunction "PyUFuncGenericFunction")function, [PyArray_Descr](types-and-structures#c.PyArray_Descr "PyArray_Descr")**arg_dtypes, void*data) This function behaves like PyUFunc_RegisterLoopForType above, except that it allows the user to register a 1-d loop using PyArray_Descr objects instead of dtype type num values. This allows a 1-d loop to be registered for structured array data-dtypes and custom data-types instead of scalar data-types. intPyUFunc_ReplaceLoopBySignature([PyUFuncObject](types-and-structures#c.PyUFuncObject "PyUFuncObject")*ufunc, [PyUFuncGenericFunction](#c.PyUFuncGenericFunction "PyUFuncGenericFunction")newfunc, int*signature, [PyUFuncGenericFunction](#c.PyUFuncGenericFunction "PyUFuncGenericFunction")*oldfunc) Replace a 1-d loop matching the given *signature* in the already-created *ufunc* with the new 1-d loop newfunc. Return the old 1-d loop function in *oldfunc*. Return 0 on success and -1 on failure. This function works only with built-in types (use [`PyUFunc_RegisterLoopForType`](#c.PyUFunc_RegisterLoopForType "PyUFunc_RegisterLoopForType") for user-defined types). A signature is an array of data-type numbers indicating the inputs followed by the outputs assumed by the 1-d loop. intPyUFunc_checkfperr(interrmask, [PyObject](https://docs.python.org/3/c-api/structures.html#c.PyObject "(in Python v3.10)")*errobj) A simple interface to the IEEE error-flag checking support. The *errmask* argument is a mask of `UFUNC_MASK_{ERR}` bitmasks indicating which errors to check for (and how to check for them). 
The *errobj* must be a Python tuple with two elements: a string containing the name which will be used in any communication of error and either a callable Python object (call-back function) or [`Py_None`](https://docs.python.org/3/c-api/none.html#c.Py_None "(in Python v3.10)"). The callable object will only be used if [`UFUNC_ERR_CALL`](#c.UFUNC_ERR_CALL "UFUNC_ERR_CALL") is set as the desired error checking method. This routine manages the GIL and is safe to call even after releasing the GIL. If an error in the IEEE-compatible hardware is determined a -1 is returned, otherwise a 0 is returned. voidPyUFunc_clearfperr() Clear the IEEE error flags. voidPyUFunc_GetPyValues(char*name, int*bufsize, int*errmask, [PyObject](https://docs.python.org/3/c-api/structures.html#c.PyObject "(in Python v3.10)")**errobj) Get the Python values used for ufunc processing from the thread-local storage area unless the defaults have been set in which case the name lookup is bypassed. The name is placed as a string in the first element of **errobj*. The second element is the looked-up function to call on error callback. The value of the looked-up buffer-size to use is passed into *bufsize*, and the value of the error mask is placed into *errmask*. Generic functions ----------------- At the core of every ufunc is a collection of type-specific functions that defines the basic functionality for each of the supported types. These functions must evaluate the underlying function \(N\geq1\) times. Extra-data may be passed in that may be used during the calculation. This feature allows some general functions to be used as these basic looping functions. The general function has all the code needed to point variables to the right place and set up a function call. The general function assumes that the actual function to call is passed in as the extra data and calls it with the correct values. 
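The “general function plus extra data” pattern can be sketched in a few self-contained lines. This is an illustration of the idea behind `PyUFunc_d_d` and friends, not NumPy’s actual code; `npy_intp` is stood in by a `ptrdiff_t` typedef so the sketch compiles without NumPy headers, and the function-pointer-to-`void *` cast is the same common idiom the real loops rely on.

```c
#include <stddef.h>

typedef ptrdiff_t intp;   /* stand-in for npy_intp in this sketch */

/* Generic 1-d loop: the scalar kernel arrives through the void *func
 * slot, so one loop body serves any double -> double function. */
static void generic_d_d(char **args, intp const *dimensions,
                        intp const *steps, void *func)
{
    double (*kernel)(double) = (double (*)(double))func;
    char *in = args[0], *out = args[1];
    for (intp i = 0; i < dimensions[0]; ++i) {
        *(double *)out = kernel(*(double *)in);
        in += steps[0];
        out += steps[1];
    }
}

static double square(double x) { return x * x; }

/* Drive the loop by hand on contiguous buffers, the way the ufunc
 * machinery would for a contiguous double-typed call. */
void apply_d_d(const double *in, double *out, intp n)
{
    char *args[2] = {(char *)in, (char *)out};
    intp dims[1] = {n};
    intp steps[2] = {sizeof(double), sizeof(double)};
    generic_d_d(args, dims, steps, (void *)square);
}
```

Swapping `square` for any other `double (*)(double)` changes the ufunc’s behavior without touching the loop body, which is exactly why these generic loops can be placed directly in the functions array.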
All of these functions are suitable for placing directly in the array of functions stored in the functions member of the PyUFuncObject structure.

void PyUFunc_f_f_As_d_d(char **args, [npy_intp](dtype#c.npy_intp "npy_intp") const *dimensions, [npy_intp](dtype#c.npy_intp "npy_intp") const *steps, void *func)

void PyUFunc_d_d(char **args, [npy_intp](dtype#c.npy_intp "npy_intp") const *dimensions, [npy_intp](dtype#c.npy_intp "npy_intp") const *steps, void *func)

void PyUFunc_f_f(char **args, [npy_intp](dtype#c.npy_intp "npy_intp") const *dimensions, [npy_intp](dtype#c.npy_intp "npy_intp") const *steps, void *func)

void PyUFunc_g_g(char **args, [npy_intp](dtype#c.npy_intp "npy_intp") const *dimensions, [npy_intp](dtype#c.npy_intp "npy_intp") const *steps, void *func)

void PyUFunc_F_F_As_D_D(char **args, [npy_intp](dtype#c.npy_intp "npy_intp") const *dimensions, [npy_intp](dtype#c.npy_intp "npy_intp") const *steps, void *func)

void PyUFunc_F_F(char **args, [npy_intp](dtype#c.npy_intp "npy_intp") const *dimensions, [npy_intp](dtype#c.npy_intp "npy_intp") const *steps, void *func)

void PyUFunc_D_D(char **args, [npy_intp](dtype#c.npy_intp "npy_intp") const *dimensions, [npy_intp](dtype#c.npy_intp "npy_intp") const *steps, void *func)

void PyUFunc_G_G(char **args, [npy_intp](dtype#c.npy_intp "npy_intp") const *dimensions, [npy_intp](dtype#c.npy_intp "npy_intp") const *steps, void *func)

void PyUFunc_e_e(char **args, [npy_intp](dtype#c.npy_intp "npy_intp") const *dimensions, [npy_intp](dtype#c.npy_intp "npy_intp") const *steps, void *func)

void PyUFunc_e_e_As_f_f(char **args, [npy_intp](dtype#c.npy_intp "npy_intp") const *dimensions, [npy_intp](dtype#c.npy_intp "npy_intp") const *steps, void *func)

void PyUFunc_e_e_As_d_d(char **args, [npy_intp](dtype#c.npy_intp "npy_intp") const *dimensions, [npy_intp](dtype#c.npy_intp "npy_intp") const *steps, void *func)

Type specific, core 1-d functions for ufuncs where each calculation is obtained by calling a function taking one input argument and returning one output. This function is passed in `func`.
The letters correspond to dtypechars of the supported data types ( `e` - half, `f` - float, `d` - double, `g` - long double, `F` - cfloat, `D` - cdouble, `G` - clongdouble). The argument *func* must support the same signature. The _As_X_X variants assume ndarrays of one data type but cast the values to use an underlying function that takes a different data type. Thus, [`PyUFunc_f_f_As_d_d`](#c.PyUFunc_f_f_As_d_d "PyUFunc_f_f_As_d_d") uses ndarrays of data type [`NPY_FLOAT`](dtype#c.NPY_FLOAT "NPY_FLOAT") but calls out to a C-function that takes double and returns double.

void PyUFunc_ff_f_As_dd_d(char **args, [npy_intp](dtype#c.npy_intp "npy_intp") const *dimensions, [npy_intp](dtype#c.npy_intp "npy_intp") const *steps, void *func)

void PyUFunc_ff_f(char **args, [npy_intp](dtype#c.npy_intp "npy_intp") const *dimensions, [npy_intp](dtype#c.npy_intp "npy_intp") const *steps, void *func)

void PyUFunc_dd_d(char **args, [npy_intp](dtype#c.npy_intp "npy_intp") const *dimensions, [npy_intp](dtype#c.npy_intp "npy_intp") const *steps, void *func)

void PyUFunc_gg_g(char **args, [npy_intp](dtype#c.npy_intp "npy_intp") const *dimensions, [npy_intp](dtype#c.npy_intp "npy_intp") const *steps, void *func)

void PyUFunc_FF_F_As_DD_D(char **args, [npy_intp](dtype#c.npy_intp "npy_intp") const *dimensions, [npy_intp](dtype#c.npy_intp "npy_intp") const *steps, void *func)

void PyUFunc_DD_D(char **args, [npy_intp](dtype#c.npy_intp "npy_intp") const *dimensions, [npy_intp](dtype#c.npy_intp "npy_intp") const *steps, void *func)

void PyUFunc_FF_F(char **args, [npy_intp](dtype#c.npy_intp "npy_intp") const *dimensions, [npy_intp](dtype#c.npy_intp "npy_intp") const *steps, void *func)

void PyUFunc_GG_G(char **args, [npy_intp](dtype#c.npy_intp "npy_intp") const *dimensions, [npy_intp](dtype#c.npy_intp "npy_intp") const *steps, void *func)

void PyUFunc_ee_e(char **args, [npy_intp](dtype#c.npy_intp "npy_intp") const *dimensions, [npy_intp](dtype#c.npy_intp "npy_intp") const *steps, void *func)

void PyUFunc_ee_e_As_ff_f(char **args, [npy_intp](dtype#c.npy_intp "npy_intp") const *dimensions, [npy_intp](dtype#c.npy_intp "npy_intp") const *steps, void *func)

void PyUFunc_ee_e_As_dd_d(char **args, [npy_intp](dtype#c.npy_intp "npy_intp") const *dimensions, [npy_intp](dtype#c.npy_intp "npy_intp") const *steps, void *func)

Type specific, core 1-d functions for ufuncs where each calculation is obtained by calling a function taking two input arguments and returning one output. The underlying function to call is passed in as *func*. The letters correspond to dtypechars of the specific data type supported by the general-purpose function. The argument `func` must support the corresponding signature. The `_As_XX_X` variants assume ndarrays of one data type but cast the values at each iteration of the loop to use the underlying function that takes a different data type.

void PyUFunc_O_O(char **args, [npy_intp](dtype#c.npy_intp "npy_intp") const *dimensions, [npy_intp](dtype#c.npy_intp "npy_intp") const *steps, void *func)

void PyUFunc_OO_O(char **args, [npy_intp](dtype#c.npy_intp "npy_intp") const *dimensions, [npy_intp](dtype#c.npy_intp "npy_intp") const *steps, void *func)

One-input, one-output, and two-input, one-output core 1-d functions for the [`NPY_OBJECT`](dtype#c.NPY_OBJECT "NPY_OBJECT") data type. These functions handle reference count issues and return early on error. The actual function to call is *func* and it must accept calls with the signature `(PyObject*) (PyObject*)` for [`PyUFunc_O_O`](#c.PyUFunc_O_O "PyUFunc_O_O") or `(PyObject*)(PyObject *, PyObject *)` for [`PyUFunc_OO_O`](#c.PyUFunc_OO_O "PyUFunc_OO_O").

void PyUFunc_O_O_method(char **args, [npy_intp](dtype#c.npy_intp "npy_intp") const *dimensions, [npy_intp](dtype#c.npy_intp "npy_intp") const *steps, void *func)

This general purpose 1-d core function assumes that *func* is a string representing a method of the input object.
For each iteration of the loop, the Python object is extracted from the array and its *func* method is called returning the result to the output array. voidPyUFunc_OO_O_method(char**args, [npy_intp](dtype#c.npy_intp "npy_intp")const*dimensions, [npy_intp](dtype#c.npy_intp "npy_intp")const*steps, void*func) This general purpose 1-d core function assumes that *func* is a string representing a method of the input object that takes one argument. The first argument in *args* is the method whose function is called, the second argument in *args* is the argument passed to the function. The output of the function is stored in the third entry of *args*. voidPyUFunc_On_Om(char**args, [npy_intp](dtype#c.npy_intp "npy_intp")const*dimensions, [npy_intp](dtype#c.npy_intp "npy_intp")const*steps, void*func) This is the 1-d core function used by the dynamic ufuncs created by umath.frompyfunc(function, nin, nout). In this case *func* is a pointer to a [`PyUFunc_PyFuncData`](#c.PyUFunc_On_Om.PyUFunc_PyFuncData "PyUFunc_PyFuncData") structure which has definition typePyUFunc_PyFuncData ``` typedef struct { int nin; int nout; PyObject *callable; } PyUFunc_PyFuncData; ``` At each iteration of the loop, the *nin* input objects are extracted from their object arrays and placed into an argument tuple, the Python *callable* is called with the input arguments, and the nout outputs are placed into their object arrays. Importing the API ----------------- PY_UFUNC_UNIQUE_SYMBOL NO_IMPORT_UFUNC voidimport_ufunc(void) These are the constants and functions for accessing the ufunc C-API from extension modules in precisely the same way as the array C-API can be accessed. The `import_ufunc` () function must always be called (in the initialization subroutine of the extension module). If your extension module is in one file then that is all that is required. The other two constants are useful if your extension module makes use of multiple files. 
In that case, define [`PY_UFUNC_UNIQUE_SYMBOL`](#c.PY_UFUNC_UNIQUE_SYMBOL "PY_UFUNC_UNIQUE_SYMBOL") to something unique to your code and then in source files that do not contain the module initialization function but still need access to the UFUNC API, define [`PY_UFUNC_UNIQUE_SYMBOL`](#c.PY_UFUNC_UNIQUE_SYMBOL "PY_UFUNC_UNIQUE_SYMBOL") to the same name used previously and also define [`NO_IMPORT_UFUNC`](#c.NO_IMPORT_UFUNC "NO_IMPORT_UFUNC"). The C-API is actually an array of function pointers. This array is created (and pointed to by a global variable) by import_ufunc. The global variable is either statically defined or allowed to be seen by other files depending on the state of [`PY_UFUNC_UNIQUE_SYMBOL`](#c.PY_UFUNC_UNIQUE_SYMBOL "PY_UFUNC_UNIQUE_SYMBOL") and [`NO_IMPORT_UFUNC`](#c.NO_IMPORT_UFUNC "NO_IMPORT_UFUNC").

© 2005–2022 NumPy Developers. Licensed under the 3-clause BSD License. <https://numpy.org/doc/1.23/reference/c-api/ufunc.html>

Generalized Universal Function API
==================================

There is a general need for looping over not only functions on scalars but also over functions on vectors (or arrays). This concept is realized in NumPy by generalizing the universal functions (ufuncs). In regular ufuncs, the elementary function is limited to element-by-element operations, whereas the generalized version (gufuncs) supports “sub-array” by “sub-array” operations. The Perl vector library PDL provides similar functionality and its terms are re-used in the following. Each generalized ufunc has information associated with it that states what the “core” dimensionality of the inputs is, as well as the corresponding dimensionality of the outputs (the element-wise ufuncs have zero core dimensions). The list of the core dimensions for all arguments is called the “signature” of a ufunc. For example, the ufunc numpy.add has signature `(),()->()`, defining two scalar inputs and one scalar output.
Another example is the function `inner1d(a, b)` with a signature of `(i),(i)->()`. This applies the inner product along the last axis of each input, but keeps the remaining indices intact. For example, where `a` is of shape `(3, 5, N)` and `b` is of shape `(5, N)`, this will return an output of shape `(3, 5)`. The underlying elementary function is called `3 * 5` times. In the signature, we specify one core dimension `(i)` for each input and zero core dimensions `()` for the output, since it takes two 1-d arrays and returns a scalar. By using the same name `i`, we specify that the two corresponding dimensions should be of the same size.

The dimensions beyond the core dimensions are called “loop” dimensions. In the above example, this corresponds to `(3, 5)`. The signature determines how the dimensions of each input/output array are split into core and loop dimensions:

1. Each dimension in the signature is matched to a dimension of the corresponding passed-in array, starting from the end of the shape tuple. These are the core dimensions, and they must be present in the arrays, or an error will be raised.
2. Core dimensions assigned to the same label in the signature (e.g. the `i` in `inner1d`’s `(i),(i)->()`) must have exactly matching sizes; no broadcasting is performed.
3. The core dimensions are removed from all inputs and the remaining dimensions are broadcast together, defining the loop dimensions.
4. The shape of each output is determined from the loop dimensions plus the output’s core dimensions.

Typically, the size of all core dimensions in an output will be determined by the size of a core dimension with the same label in an input array. This is not a requirement, and it is possible to define a signature where a label comes up for the first time in an output, although some precautions must be taken when calling such a function.
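The four matching steps above can be sketched in plain Python for the `inner1d` signature `(i),(i)->()`. This is an illustrative sketch only (the helper names are made up, and it is not NumPy's actual dispatch code):

```python
def split_dims(shape, core_ndim):
    """Step 1: core dimensions are matched from the end of the shape tuple."""
    n = len(shape)
    if core_ndim > n:
        raise ValueError("missing core dimensions")
    return shape[:n - core_ndim], shape[n - core_ndim:]

def broadcast(*shapes):
    """Step 3: right-aligned broadcasting of the loop dimensions."""
    ndim = max(map(len, shapes))
    padded = [(1,) * (ndim - len(s)) + tuple(s) for s in shapes]
    out = []
    for dims in zip(*padded):
        sizes = {d for d in dims if d != 1}
        if len(sizes) > 1:
            raise ValueError("loop dimensions do not broadcast")
        out.append(sizes.pop() if sizes else 1)
    return tuple(out)

def inner1d_shapes(a_shape, b_shape):
    """Output shape for signature (i),(i)->(): loop dims plus empty core dims."""
    a_loop, a_core = split_dims(a_shape, 1)
    b_loop, b_core = split_dims(b_shape, 1)
    if a_core != b_core:        # step 2: same label 'i' means exact size match
        raise ValueError("core dimension 'i' mismatch")
    return broadcast(a_loop, b_loop)  # step 4: output core dims are ()

print(inner1d_shapes((3, 5, 7), (5, 7)))  # (3, 5)
```

Note that step 2 raises on a size mismatch rather than broadcasting, exactly as the text requires for dimensions sharing a label.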
An example would be the function `euclidean_pdist(a)`, with signature `(n,d)->(p)`, that given an array of `n` `d`-dimensional vectors, computes all unique pairwise Euclidean distances among them. The output dimension `p` must therefore be equal to `n * (n - 1) / 2`, but it is the caller’s responsibility to pass in an output array of the right size. If the size of a core dimension of an output cannot be determined from a passed in input or output array, an error will be raised.

Note: Prior to NumPy 1.10.0, less strict checks were in place: missing core dimensions were created by prepending 1’s to the shape as necessary, core dimensions with the same label were broadcast together, and undetermined dimensions were created with size 1.

Definitions
-----------

Elementary Function
Each ufunc consists of an elementary function that performs the most basic operation on the smallest portion of array arguments (e.g. adding two numbers is the most basic operation in adding two arrays). The ufunc applies the elementary function multiple times on different parts of the arrays. The input/output of elementary functions can be vectors; e.g., the elementary function of inner1d takes two vectors as input.

Signature
A signature is a string describing the input/output dimensions of the elementary function of a ufunc. See the section below for more details.

Core Dimension
The dimensionality of each input/output of an elementary function is defined by its core dimensions (zero core dimensions correspond to a scalar input/output). The core dimensions are mapped to the last dimensions of the input/output arrays.

Dimension Name
A dimension name represents a core dimension in the signature. Different dimensions may share a name, indicating that they are of the same size.

Dimension Index
A dimension index is an integer representing a dimension name. It enumerates the dimension names according to the order of the first occurrence of each name in the signature.
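The “dimension index” enumeration described above is simple enough to sketch directly. The helper below is hypothetical, for illustration only:

```python
import re

def dimension_indices(signature):
    """Enumerate dimension names by order of first occurrence in a
    gufunc signature (an illustrative sketch, not NumPy internals)."""
    indices = {}
    for name in re.findall(r"[A-Za-z_]\w*", signature):
        indices.setdefault(name, len(indices))  # first occurrence wins
    return indices

print(dimension_indices("(m,n),(n,p)->(m,p)"))  # {'m': 0, 'n': 1, 'p': 2}
```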
Details of Signature
--------------------

The signature defines the “core” dimensionality of input and output variables, and thereby also defines the contraction of the dimensions. The signature is represented by a string of the following format:

* Core dimensions of each input or output array are represented by a list of dimension names in parentheses, `(i_1,...,i_N)`; a scalar input/output is denoted by `()`. Instead of `i_1`, `i_2`, etc., one can use any valid Python variable name.
* Dimension lists for different arguments are separated by `","`. Input/output arguments are separated by `"->"`.
* If one uses the same dimension name in multiple locations, this enforces the same size of the corresponding dimensions.

The formal syntax of signatures is as follows:

```
<Signature>            ::= <Input arguments> "->" <Output arguments>
<Input arguments>      ::= <Argument list>
<Output arguments>     ::= <Argument list>
<Argument list>        ::= nil | <Argument> | <Argument> "," <Argument list>
<Argument>             ::= "(" <Core dimension list> ")"
<Core dimension list>  ::= nil | <Core dimension> | <Core dimension> "," <Core dimension list>
<Core dimension>       ::= <Dimension name> <Dimension modifier>
<Dimension name>       ::= valid Python variable name | valid integer
<Dimension modifier>   ::= nil | "?"
```

Notes:

1. All quotes are for clarity.
2. Unmodified core dimensions that share the same name must have the same size. Each dimension name typically corresponds to one level of looping in the elementary function’s implementation.
3. White spaces are ignored.
4. An integer as a dimension name freezes that dimension to the value.
5. If the name is suffixed with the “?” modifier, the dimension is a core dimension only if it exists on all inputs and outputs that share it; otherwise it is ignored (and replaced by a dimension of size 1 for the elementary function).
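As a rough illustration of the grammar, here is a small Python sketch of a signature parser. It is a hypothetical helper, not NumPy's actual implementation; it honors the notes above (whitespace ignored, integer names mark frozen dimensions, `?` marks flexible ones) but does no further validation:

```python
import re

def parse_signature(sig):
    """Parse a gufunc signature into (inputs, outputs), where each argument
    is a list of (name, frozen, flexible) core-dimension tuples."""
    sig = re.sub(r"\s", "", sig)               # note 3: whitespace is ignored
    ins, arrow, outs = sig.partition("->")
    if arrow != "->":
        raise ValueError("signature must contain '->'")

    dim = re.compile(r"^(\w+)(\?)?$")
    def parse_arglist(text):
        args = []
        for m in re.finditer(r"\(([^)]*)\)", text):   # one group per "(...)"
            dims = []
            if m.group(1):                            # "()" is a scalar
                for d in m.group(1).split(","):
                    g = dim.match(d)
                    if not g:
                        raise ValueError("bad core dimension %r" % d)
                    name = g.group(1)
                    # note 4: integer name -> frozen; note 5: '?' -> flexible
                    dims.append((name, name.isdigit(), g.group(2) == "?"))
            args.append(dims)
        return args

    return parse_arglist(ins), parse_arglist(outs)

print(parse_signature("(m?,n),(n,p?)->(m?,p?)")[0])
# [[('m', False, True), ('n', False, False)], [('n', False, False), ('p', False, True)]]
```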
Here are some examples of signatures:

| name | signature | common usage |
| --- | --- | --- |
| add | `(),()->()` | binary ufunc |
| sum1d | `(i)->()` | reduction |
| inner1d | `(i),(i)->()` | vector-vector multiplication |
| matmat | `(m,n),(n,p)->(m,p)` | matrix multiplication |
| vecmat | `(n),(n,p)->(p)` | vector-matrix multiplication |
| matvec | `(m,n),(n)->(m)` | matrix-vector multiplication |
| matmul | `(m?,n),(n,p?)->(m?,p?)` | combination of the four above |
| outer_inner | `(i,t),(j,t)->(i,j)` | inner over the last dimension, outer over the second to last, and loop/broadcast over the rest. |
| cross1d | `(3),(3)->(3)` | cross product where the last dimension is frozen and must be 3 |

The last is an instance of freezing a core dimension, and can be used to improve ufunc performance.

C-API for implementing Elementary Functions
-------------------------------------------

The current interface remains unchanged, and `PyUFunc_FromFuncAndData` can still be used to implement (specialized) ufuncs, consisting of scalar elementary functions. One can use `PyUFunc_FromFuncAndDataAndSignature` to declare a more general ufunc. The argument list is the same as `PyUFunc_FromFuncAndData`, with an additional argument specifying the signature as a C string. Furthermore, the callback function is of the same type as before, `void (*foo)(char **args, intp *dimensions, intp *steps, void *func)`. When invoked, `args` is a list of length `nargs` containing the data of all input/output arguments. For a scalar elementary function, `steps` is also of length `nargs`, denoting the strides used for the arguments. `dimensions` is a pointer to a single integer defining the size of the axis to be looped over. For a non-trivial signature, `dimensions` will also contain the sizes of the core dimensions, starting at the second entry. Only one size is provided for each unique dimension name and the sizes are given according to the first occurrence of a dimension name in the signature.
The first `nargs` elements of `steps` remain the same as for scalar ufuncs. The following elements contain the strides of all core dimensions for all arguments in order. For example, consider a ufunc with signature `(i,j),(i)->()`. In this case, `args` will contain three pointers to the data of the input/output arrays `a`, `b`, `c`. Furthermore, `dimensions` will be `[N, I, J]` to define the size `N` of the loop and the sizes `I` and `J` for the core dimensions `i` and `j`. Finally, `steps` will be `[a_N, b_N, c_N, a_i, a_j, b_i]`, containing all necessary strides.

<https://numpy.org/doc/1.23/reference/c-api/generalized-ufuncs.html>

NumPy core libraries
====================

New in version 1.3.0.

Starting from numpy 1.3.0, we are working on separating the pure C, “computational” code from the python dependent code. The goal is twofold: making the code cleaner, and enabling code reuse by other extensions outside numpy (scipy, etc.).

NumPy core math library
-----------------------

The numpy core math library (‘npymath’) is a first step in this direction. This library contains most math-related C99 functionality, which can be used on platforms where C99 is not well supported. The core math functions have the same API as the C99 ones, except for the npy_* prefix. The available functions are defined in <numpy/npy_math.h> - please refer to this header when in doubt.

### Floating point classification

NPY_NAN
This macro is defined to a NaN (Not a Number), and is guaranteed to have the sign bit unset (‘positive’ NaN). The corresponding single and extended precision macros are available with the suffixes F and L.

NPY_INFINITY
This macro is defined to a positive inf. The corresponding single and extended precision macros are available with the suffixes F and L.

NPY_PZERO
This macro is defined to positive zero. The corresponding single and extended precision macros are available with the suffixes F and L.

NPY_NZERO
This macro is defined to negative zero (that is, with the sign bit set). The corresponding single and extended precision macros are available with the suffixes F and L.

npy_isnan(x)
This is a macro, and is equivalent to C99 isnan: works for single, double and extended precision, and returns a non 0 value if x is a NaN.

npy_isfinite(x)
This is a macro, and is equivalent to C99 isfinite: works for single, double and extended precision, and returns a non 0 value if x is neither a NaN nor an infinity.

npy_isinf(x)
This is a macro, and is equivalent to C99 isinf: works for single, double and extended precision, and returns a non 0 value if x is infinite (positive or negative).

npy_signbit(x)
This is a macro, and is equivalent to C99 signbit: works for single, double and extended precision, and returns a non 0 value if x has the sign bit set (that is, the number is negative).

npy_copysign(x, y)
This is a function equivalent to C99 copysign: returns x with the same sign as y.
Works for any value, including inf and nan. Single and extended precisions are available with suffixes f and l.

New in version 1.4.0.

### Useful math constants

The following math constants are available in `npy_math.h`. Single and extended precision are also available by adding the `f` and `l` suffixes respectively.

NPY_E
Base of natural logarithm (\(e\))

NPY_LOG2E
Logarithm to base 2 of the Euler constant (\(\frac{\ln(e)}{\ln(2)}\))

NPY_LOG10E
Logarithm to base 10 of the Euler constant (\(\frac{\ln(e)}{\ln(10)}\))

NPY_LOGE2
Natural logarithm of 2 (\(\ln(2)\))

NPY_LOGE10
Natural logarithm of 10 (\(\ln(10)\))

NPY_PI
Pi (\(\pi\))

NPY_PI_2
Pi divided by 2 (\(\frac{\pi}{2}\))

NPY_PI_4
Pi divided by 4 (\(\frac{\pi}{4}\))

NPY_1_PI
Reciprocal of pi (\(\frac{1}{\pi}\))

NPY_2_PI
Two times the reciprocal of pi (\(\frac{2}{\pi}\))

NPY_EULER
The Euler constant \(\lim_{n\rightarrow\infty}({\sum_{k=1}^n{\frac{1}{k}}-\ln n})\)

### Low-level floating point manipulation

These can be useful for precise floating point comparison.

double npy_nextafter(double x, double y)
This is a function equivalent to C99 nextafter: returns the next representable floating point value from x in the direction of y. Single and extended precisions are available with suffixes f and l.

New in version 1.4.0.

double npy_spacing(double x)
This is a function equivalent to the Fortran intrinsic. Returns the distance between x and the next representable floating point value from x, e.g. spacing(1) == eps. The spacing of nan and +/- inf returns nan. Single and extended precisions are available with suffixes f and l.

New in version 1.4.0.

void npy_set_floatstatus_divbyzero()
Set the divide-by-zero floating point exception.

New in version 1.6.0.

void npy_set_floatstatus_overflow()
Set the overflow floating point exception.

New in version 1.6.0.

void npy_set_floatstatus_underflow()
Set the underflow floating point exception.

New in version 1.6.0.

void npy_set_floatstatus_invalid()
Set the invalid floating point exception.

New in version 1.6.0.
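For readers more comfortable checking this from Python: `math.nextafter` and `math.ulp` (available since Python 3.9, which this sketch assumes) behave like `npy_nextafter` and `npy_spacing` for doubles, which makes the `spacing(1) == eps` claim above easy to verify:

```python
import math
import sys

eps = sys.float_info.epsilon

# math.nextafter mirrors npy_nextafter: next representable double from x
# toward y; math.ulp(x) plays the role of npy_spacing(x) for positive x.
assert math.nextafter(1.0, 2.0) == 1.0 + eps   # spacing(1) == eps
assert math.ulp(1.0) == eps
assert math.nextafter(1.0, 1.0) == 1.0         # x == y: already there

# copysign honors the sign of negative zero, as npy_copysign does.
assert math.copysign(3.0, -0.0) == -3.0
print("ok")
```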
int npy_get_floatstatus()
Get floating point status. Returns a bitmask with the following possible flags:

* NPY_FPE_DIVIDEBYZERO
* NPY_FPE_OVERFLOW
* NPY_FPE_UNDERFLOW
* NPY_FPE_INVALID

Note that [`npy_get_floatstatus_barrier`](#c.npy_get_floatstatus_barrier "npy_get_floatstatus_barrier") is preferable, as it prevents aggressive compiler optimizations from reordering the call relative to the code setting the status, which could lead to incorrect results.

New in version 1.9.0.

int npy_get_floatstatus_barrier(char*)
Get floating point status. A pointer to a local variable is passed in to prevent aggressive compiler optimizations from reordering this function call relative to the code setting the status, which could lead to incorrect results. Returns a bitmask with the following possible flags:

* NPY_FPE_DIVIDEBYZERO
* NPY_FPE_OVERFLOW
* NPY_FPE_UNDERFLOW
* NPY_FPE_INVALID

New in version 1.15.0.

int npy_clear_floatstatus()
Clears the floating point status. Returns the previous status mask. Note that [`npy_clear_floatstatus_barrier`](#c.npy_clear_floatstatus_barrier "npy_clear_floatstatus_barrier") is preferable, as it prevents aggressive compiler optimizations from reordering the call relative to the code setting the status, which could lead to incorrect results.

New in version 1.9.0.

int npy_clear_floatstatus_barrier(char*)
Clears the floating point status. A pointer to a local variable is passed in to prevent aggressive compiler optimizations from reordering this function call. Returns the previous status mask.

New in version 1.15.0.

### Complex functions

New in version 1.4.0.

C99-like complex functions have been added. These can be used if you wish to implement portable C extensions. Since we still support platforms without the C99 complex type, you need to restrict yourself to C90-compatible syntax, e.g.:

```
/* a = 1 + 2i */
npy_complex a = npy_cpack(1, 2);
npy_complex b;

b = npy_log(a);
```

### Linking against the core math library in an extension

New in version 1.4.0.
To use the core math library in your own extension, you need to add the npymath compile and link options to your extension in your setup.py:

```
>>> from numpy.distutils.misc_util import get_info
>>> info = get_info('npymath')
>>> _ = config.add_extension('foo', sources=['foo.c'], extra_info=info)
```

In other words, the usage of info is exactly the same as when using blas_info and co.

### Half-precision functions

New in version 1.6.0.

The header file <numpy/halffloat.h> provides functions to work with IEEE 754-2008 16-bit floating point values. While this format is not typically used for numerical computations, it is useful for storing values which require floating point but do not need much precision. It can also be used as an educational tool to understand the nature of floating point round-off error.

Like for other types, NumPy includes a typedef npy_half for the 16 bit float. Unlike for most of the other types, you cannot use this as a normal type in C, since it is a typedef for npy_uint16. For example, 1.0 looks like 0x3c00 to C, and if you do an equality comparison between the different signed zeros, you will get -0.0 != 0.0 (0x8000 != 0x0000), which is incorrect.

For these reasons, NumPy provides an API to work with npy_half values accessible by including <numpy/halffloat.h> and linking to ‘npymath’. For functions that are not provided directly, such as the arithmetic operations, the preferred method is to convert to float or double and back again, as in the following example.

```
npy_half sum(int n, npy_half *array) {
    float ret = 0;
    while(n--) {
        ret += npy_half_to_float(*array++);
    }
    return npy_float_to_half(ret);
}
```

External Links:

* [754-2008 IEEE Standard for Floating-Point Arithmetic](https://ieeexplore.ieee.org/document/4610935/)
* [Half-precision Float Wikipedia Article](https://en.wikipedia.org/wiki/Half-precision_floating-point_format)
* [OpenGL Half Float Pixel Support](https://www.khronos.org/registry/OpenGL/extensions/ARB/ARB_half_float_pixel.txt)
* [The OpenEXR image format](https://www.openexr.com/about.html)

NPY_HALF_ZERO
This macro is defined to positive zero.

NPY_HALF_PZERO
This macro is defined to positive zero.

NPY_HALF_NZERO
This macro is defined to negative zero.

NPY_HALF_ONE
This macro is defined to 1.0.

NPY_HALF_NEGONE
This macro is defined to -1.0.

NPY_HALF_PINF
This macro is defined to +inf.

NPY_HALF_NINF
This macro is defined to -inf.

NPY_HALF_NAN
This macro is defined to a NaN value, guaranteed to have its sign bit unset.

float npy_half_to_float([npy_half](dtype#c.npy_half "npy_half") h)
Converts a half-precision float to a single-precision float.

double npy_half_to_double([npy_half](dtype#c.npy_half "npy_half") h)
Converts a half-precision float to a double-precision float.

[npy_half](dtype#c.npy_half "npy_half") npy_float_to_half(float f)
Converts a single-precision float to a half-precision float. The value is rounded to the nearest representable half, with ties going to the nearest even. If the value is too small or too big, the system’s floating point underflow or overflow bit will be set.

[npy_half](dtype#c.npy_half "npy_half") npy_double_to_half(double d)
Converts a double-precision float to a half-precision float. The value is rounded to the nearest representable half, with ties going to the nearest even. If the value is too small or too big, the system’s floating point underflow or overflow bit will be set.

int npy_half_eq([npy_half](dtype#c.npy_half "npy_half") h1, [npy_half](dtype#c.npy_half "npy_half") h2)
Compares two half-precision floats (h1 == h2).

int npy_half_ne([npy_half](dtype#c.npy_half "npy_half") h1, [npy_half](dtype#c.npy_half "npy_half") h2)
Compares two half-precision floats (h1 != h2).

int npy_half_le([npy_half](dtype#c.npy_half "npy_half") h1, [npy_half](dtype#c.npy_half "npy_half") h2)
Compares two half-precision floats (h1 <= h2).
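Python's `struct` module implements the same IEEE 754-2008 binary16 layout (the `'e'` format) as `npy_half`, so the bit-level behavior described in this section can be checked from Python. The helper names below are made up for illustration, and the predicates are sketches of what the `npy_half_*` tests do, not NumPy's actual code:

```python
import struct

def tobits(x):
    """float -> binary16 bit pattern, via struct's 'e' (half) format."""
    return struct.unpack("<H", struct.pack("<e", x))[0]

# 1.0 is the bit pattern 0x3c00, and the two signed zeros differ at the
# bit level (0x8000 vs 0x0000) -- which is why comparing npy_half values
# as raw uint16 is incorrect.
assert tobits(1.0) == 0x3C00
assert (tobits(-0.0), tobits(0.0)) == (0x8000, 0x0000)

# Sketches of npy_half_isnan / npy_half_isinf / npy_half_signbit working
# directly on the bits (sign bit 15, exponent bits 10-14, mantissa 9-0):
def half_isnan(h):   return (h & 0x7C00) == 0x7C00 and (h & 0x03FF) != 0
def half_isinf(h):   return (h & 0x7FFF) == 0x7C00
def half_signbit(h): return (h >> 15) & 1

assert half_isinf(tobits(float("inf"))) and half_signbit(tobits(float("-inf")))
assert half_isnan(tobits(float("nan"))) and not half_isnan(tobits(1.0))

# The convert-sum-convert pattern from the C example, in Python:
def half_sum(halfbits):
    total = sum(struct.unpack("<e", struct.pack("<H", h))[0] for h in halfbits)
    return tobits(total)  # round back to the nearest half

assert half_sum([tobits(v) for v in (0.5, 1.5, 2.0)]) == tobits(4.0)
print("ok")
```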
int npy_half_lt([npy_half](dtype#c.npy_half "npy_half") h1, [npy_half](dtype#c.npy_half "npy_half") h2)
Compares two half-precision floats (h1 < h2).

int npy_half_ge([npy_half](dtype#c.npy_half "npy_half") h1, [npy_half](dtype#c.npy_half "npy_half") h2)
Compares two half-precision floats (h1 >= h2).

int npy_half_gt([npy_half](dtype#c.npy_half "npy_half") h1, [npy_half](dtype#c.npy_half "npy_half") h2)
Compares two half-precision floats (h1 > h2).

int npy_half_eq_nonan([npy_half](dtype#c.npy_half "npy_half") h1, [npy_half](dtype#c.npy_half "npy_half") h2)
Compares two half-precision floats that are known to not be NaN (h1 == h2). If a value is NaN, the result is undefined.

int npy_half_lt_nonan([npy_half](dtype#c.npy_half "npy_half") h1, [npy_half](dtype#c.npy_half "npy_half") h2)
Compares two half-precision floats that are known to not be NaN (h1 < h2). If a value is NaN, the result is undefined.

int npy_half_le_nonan([npy_half](dtype#c.npy_half "npy_half") h1, [npy_half](dtype#c.npy_half "npy_half") h2)
Compares two half-precision floats that are known to not be NaN (h1 <= h2). If a value is NaN, the result is undefined.

int npy_half_iszero([npy_half](dtype#c.npy_half "npy_half") h)
Tests whether the half-precision float has a value equal to zero. This may be slightly faster than calling npy_half_eq(h, NPY_ZERO).

int npy_half_isnan([npy_half](dtype#c.npy_half "npy_half") h)
Tests whether the half-precision float is a NaN.

int npy_half_isinf([npy_half](dtype#c.npy_half "npy_half") h)
Tests whether the half-precision float is plus or minus Inf.

int npy_half_isfinite([npy_half](dtype#c.npy_half "npy_half") h)
Tests whether the half-precision float is finite (not NaN or Inf).

int npy_half_signbit([npy_half](dtype#c.npy_half "npy_half") h)
Returns 1 if h is negative, 0 otherwise.

[npy_half](dtype#c.npy_half "npy_half") npy_half_copysign([npy_half](dtype#c.npy_half "npy_half") x, [npy_half](dtype#c.npy_half "npy_half") y)
Returns the value of x with the sign bit copied from y.
Works for any value, including Inf and NaN.

[npy_half](dtype#c.npy_half "npy_half") npy_half_spacing([npy_half](dtype#c.npy_half "npy_half") h)
This is the same for half-precision floats as npy_spacing and npy_spacingf described in the low-level floating point section.

[npy_half](dtype#c.npy_half "npy_half") npy_half_nextafter([npy_half](dtype#c.npy_half "npy_half") x, [npy_half](dtype#c.npy_half "npy_half") y)
This is the same for half-precision floats as npy_nextafter and npy_nextafterf described in the low-level floating point section.

[npy_uint16](dtype#c.npy_uint16 "npy_uint16") npy_floatbits_to_halfbits([npy_uint32](dtype#c.npy_uint32 "npy_uint32") f)
Low-level function which converts a 32-bit single-precision float, stored as a uint32, into a 16-bit half-precision float.

[npy_uint16](dtype#c.npy_uint16 "npy_uint16") npy_doublebits_to_halfbits([npy_uint64](dtype#c.npy_uint64 "npy_uint64") d)
Low-level function which converts a 64-bit double-precision float, stored as a uint64, into a 16-bit half-precision float.

[npy_uint32](dtype#c.npy_uint32 "npy_uint32") npy_halfbits_to_floatbits([npy_uint16](dtype#c.npy_uint16 "npy_uint16") h)
Low-level function which converts a 16-bit half-precision float into a 32-bit single-precision float, stored as a uint32.

[npy_uint64](dtype#c.npy_uint64 "npy_uint64") npy_halfbits_to_doublebits([npy_uint16](dtype#c.npy_uint16 "npy_uint16") h)
Low-level function which converts a 16-bit half-precision float into a 64-bit double-precision float, stored as a uint64.

<https://numpy.org/doc/1.23/reference/c-api/coremath.html>

C API Deprecations
==================

Background
----------

The API exposed by NumPy for third-party extensions has grown over years of releases, and has allowed programmers to directly access NumPy functionality from C. This API can be best described as “organic”.
It has emerged from multiple competing desires and from multiple points of view over the years, strongly influenced by the desire to make it easy for users to move to NumPy from Numeric and Numarray. The core API originated with Numeric in 1995 and there are patterns, such as the heavy use of macros, written to mimic Python’s C-API as well as to account for compiler technology of the late 90’s. There is also only a small group of volunteers who have had very little time to spend on improving this API.

There is an ongoing effort to improve the API. It is important in this effort to ensure that code that compiles for NumPy 1.X continues to compile for NumPy 1.X. At the same time, certain APIs will be marked as deprecated so that future-looking code can avoid these APIs and follow better practices.

Another important role played by deprecation markings in the C API is to move towards hiding internal details of the NumPy implementation. For those needing direct, easy access to the data of ndarrays, this will not remove this ability. Rather, there are many potential performance optimizations which require changing the implementation details, and NumPy developers have been unable to try them because of the high value of preserving ABI compatibility. By deprecating this direct access, we will in the future be able to improve NumPy’s performance in ways we cannot presently.

Deprecation Mechanism NPY_NO_DEPRECATED_API
----------------------------------------------

In C, there is no equivalent to the deprecation warnings that Python supports. One way to do deprecations is to flag them in the documentation and release notes, then remove or change the deprecated features in a future major version (NumPy 2.0 and beyond). Minor versions of NumPy should not, however, have major C-API changes that prevent code that worked on a previous minor release.
For example, we will do our best to ensure that code that compiled and worked on NumPy 1.4 should continue to work on NumPy 1.7 (but perhaps with compiler warnings).

To use the NPY_NO_DEPRECATED_API mechanism, you need to #define it to the target API version of NumPy before #including any NumPy headers. If you want to confirm that your code is clean against 1.7, use:

```
#define NPY_NO_DEPRECATED_API NPY_1_7_API_VERSION
```

On compilers which support a #warning mechanism, NumPy issues a compiler warning if you do not define the symbol NPY_NO_DEPRECATED_API. This way, the fact that there are deprecations will be flagged for third-party developers who may not have read the release notes closely.

<https://numpy.org/doc/1.23/reference/c-api/deprecations.html>

Memory management in NumPy
==========================

The [`numpy.ndarray`](../generated/numpy.ndarray#numpy.ndarray "numpy.ndarray") is a python class. It requires additional memory allocations to hold [`numpy.ndarray.strides`](../generated/numpy.ndarray.strides#numpy.ndarray.strides "numpy.ndarray.strides"), [`numpy.ndarray.shape`](../generated/numpy.ndarray.shape#numpy.ndarray.shape "numpy.ndarray.shape") and [`numpy.ndarray.data`](../generated/numpy.ndarray.data#numpy.ndarray.data "numpy.ndarray.data") attributes. These attributes are specially allocated after creating the python object in `__new__`. The `strides` and `shape` are stored in a piece of memory allocated internally. The `data` allocation used to store the actual array values (which could be pointers in the case of `object` arrays) can be very large, so NumPy has provided interfaces to manage its allocation and release. This document details how those interfaces work.
Historical overview
-------------------

Since version 1.7.0, NumPy has exposed a set of `PyDataMem_*` functions ([`PyDataMem_NEW`](array#c.PyDataMem_NEW "PyDataMem_NEW"), [`PyDataMem_FREE`](array#c.PyDataMem_FREE "PyDataMem_FREE"), [`PyDataMem_RENEW`](array#c.PyDataMem_RENEW "PyDataMem_RENEW")) which are backed by `alloc`, `free`, `realloc` respectively. In that version NumPy also exposed the `PyDataMem_EventHook` function (now deprecated) described below, which wraps the OS-level calls.

Since those early days, Python also improved its memory management capabilities, and began providing various [management policies](https://docs.python.org/3/c-api/memory.html#memoryoverview "(in Python v3.10)") beginning in version 3.4. These routines are divided into a set of domains; each domain has a [`PyMemAllocatorEx`](https://docs.python.org/3/c-api/memory.html#c.PyMemAllocatorEx "(in Python v3.10)") structure of routines for memory management. Python also added a [`tracemalloc`](https://docs.python.org/3/library/tracemalloc.html#module-tracemalloc "(in Python v3.10)") module to trace calls to the various routines. These tracking hooks were added to the NumPy `PyDataMem_*` routines.

NumPy added a small cache of allocated memory in its internal `npy_alloc_cache`, `npy_alloc_cache_zero`, and `npy_free_cache` functions. These wrap `alloc`, `alloc-and-memset(0)` and `free` respectively, but when `npy_free_cache` is called, it adds the pointer to a short list of available blocks marked by size. These blocks can be re-used by subsequent calls to `npy_alloc*`, avoiding memory thrashing.

Configurable memory routines in NumPy (NEP 49)
----------------------------------------------

Users may wish to override the internal data memory routines with ones of their own. Since NumPy does not use the Python domain strategy to manage data memory, it provides an alternative set of C-APIs to change memory routines.
There are no Python domain-wide strategies for large chunks of object data, so those are less suited to NumPy’s needs. Users who wish to change the NumPy data memory management routines can use [`PyDataMem_SetHandler`](#c.PyDataMem_SetHandler "PyDataMem_SetHandler"), which uses a [`PyDataMem_Handler`](#c.PyDataMem_Handler "PyDataMem_Handler") structure to hold pointers to functions used to manage the data memory. The calls are still wrapped by internal routines to call [`PyTraceMalloc_Track`](https://docs.python.org/3/c-api/memory.html#c.PyTraceMalloc_Track "(in Python v3.10)"), [`PyTraceMalloc_Untrack`](https://docs.python.org/3/c-api/memory.html#c.PyTraceMalloc_Untrack "(in Python v3.10)"), and will use the deprecated [`PyDataMem_EventHookFunc`](#c.PyDataMem_EventHookFunc "PyDataMem_EventHookFunc") mechanism. Since the functions may change during the lifetime of the process, each `ndarray` carries with it the functions used at the time of its instantiation, and these will be used to reallocate or free the data memory of the instance.

type PyDataMem_Handler

A struct to hold function pointers used to manipulate memory

```
typedef struct {
    char name[127];  /* multiple of 64 to keep the struct aligned */
    uint8_t version; /* currently 1 */
    PyDataMemAllocator allocator;
} PyDataMem_Handler;
```

where the allocator structure is

```
/* The declaration of free differs from PyMemAllocatorEx */
typedef struct {
    void *ctx;
    void* (*malloc) (void *ctx, size_t size);
    void* (*calloc) (void *ctx, size_t nelem, size_t elsize);
    void* (*realloc) (void *ctx, void *ptr, size_t new_size);
    void (*free) (void *ctx, void *ptr, size_t size);
} PyDataMemAllocator;
```

[PyObject](https://docs.python.org/3/c-api/structures.html#c.PyObject "(in Python v3.10)") *PyDataMem_SetHandler([PyObject](https://docs.python.org/3/c-api/structures.html#c.PyObject "(in Python v3.10)") *handler)

Set a new allocation policy. If the input value is `NULL`, will reset the policy to the default.
Return the previous policy, or return `NULL` if an error has occurred. We wrap the user-provided functions so they will still call the python and numpy memory management callback hooks.

[PyObject](https://docs.python.org/3/c-api/structures.html#c.PyObject "(in Python v3.10)") *PyDataMem_GetHandler()

Return the current policy that will be used to allocate data for the next `PyArrayObject`. On failure, return `NULL`.

For an example of setting up and using the PyDataMem_Handler, see the test in `numpy/core/tests/test_mem_policy.py`

void PyDataMem_EventHookFunc(void *inp, void *outp, size_t size, void *user_data);

This function will be called during data memory manipulation.

[PyDataMem_EventHookFunc](#c.PyDataMem_EventHookFunc "PyDataMem_EventHookFunc") *PyDataMem_SetEventHook([PyDataMem_EventHookFunc](#c.PyDataMem_EventHookFunc "PyDataMem_EventHookFunc") *newhook, void *user_data, void **old_data)

Sets the allocation event hook for numpy array data. Returns a pointer to the previous hook or `NULL`. If old_data is non-`NULL`, the previous user_data pointer will be copied to it. If not `NULL`, the hook will be called at the end of each `PyDataMem_NEW/FREE/RENEW`:

```
result = PyDataMem_NEW(size)        -> (*hook)(NULL, result, size, user_data)
PyDataMem_FREE(ptr)                 -> (*hook)(ptr, NULL, 0, user_data)
result = PyDataMem_RENEW(ptr, size) -> (*hook)(ptr, result, size, user_data)
```

When the hook is called, the GIL will be held by the calling thread. The hook should be written to be reentrant if it performs operations that might cause new allocation events (such as the creation/destruction of numpy objects, or creating/destroying Python objects which might cause a gc).
Deprecated in v1.23

What happens when deallocating if there is no policy set
--------------------------------------------------------

A rare but useful technique is to allocate a buffer outside NumPy, use [`PyArray_NewFromDescr`](array#c.PyArray_NewFromDescr "PyArray_NewFromDescr") to wrap the buffer in a `ndarray`, then switch the `OWNDATA` flag to true. When the `ndarray` is released, the appropriate function from the `ndarray`’s `PyDataMem_Handler` should be called to free the buffer. But if the `PyDataMem_Handler` field was never set, it will be `NULL`. For backward compatibility, NumPy will call `free()` to release the buffer. If `NUMPY_WARN_IF_NO_MEM_POLICY` is set to `1`, a warning will be emitted. The current default is not to emit a warning; this may change in a future version of NumPy.

A better technique would be to use a `PyCapsule` as a base object:

```
/* define a PyCapsule_Destructor, using the correct deallocator for buff */
void free_wrap(void *capsule){
    void * obj = PyCapsule_GetPointer(capsule, PyCapsule_GetName(capsule));
    free(obj);
};

/* then inside the function that creates arr from buff */
...
arr = PyArray_NewFromDescr(... buf, ...);
if (arr == NULL) {
    return NULL;
}
capsule = PyCapsule_New(buf, "my_wrapped_buffer",
                        (PyCapsule_Destructor)&free_wrap);
if (PyArray_SetBaseObject(arr, capsule) == -1) {
    Py_DECREF(arr);
    return NULL;
}
...
```

© 2005–2022 NumPy Developers Licensed under the 3-clause BSD License. <https://numpy.org/doc/1.23/reference/c-api/data_memory.html>

CPU build options
=================

Description
-----------

The following options are mainly used to change the default behavior of optimizations that target certain CPU features:

* `--cpu-baseline`: minimal set of required CPU features. Default value is `min` which provides the minimum CPU features that can safely run on a wide range of platforms within the processor family.
Note

During runtime, NumPy modules will fail to load if any of the specified features are not supported by the target CPU (a Python runtime error is raised).

* `--cpu-dispatch`: dispatched set of additional CPU features. Default value is `max -xop -fma4` which enables all CPU features, except for AMD legacy features (in the case of X86).

Note

During runtime, NumPy modules will skip any specified features that are not available in the target CPU.

These options are accessible through [`distutils`](https://docs.python.org/3/library/distutils.html#module-distutils "(in Python v3.10)") commands [`distutils.command.build`](https://docs.python.org/3/distutils/apiref.html#module-distutils.command.build "(in Python v3.10)"), [`distutils.command.build_clib`](https://docs.python.org/3/distutils/apiref.html#module-distutils.command.build_clib "(in Python v3.10)") and [`distutils.command.build_ext`](https://docs.python.org/3/distutils/apiref.html#module-distutils.command.build_ext "(in Python v3.10)"). They accept a set of [CPU features](#opt-supported-features), groups of features that gather several features, or [special options](#opt-special-options) that perform a series of procedures.

Note

If `build_clib` or `build_ext` are not specified by the user, the arguments of `build` will be used instead, which also hold the default values.
To customize both `build_ext` and `build_clib`:

```
cd /path/to/numpy
python setup.py build --cpu-baseline="avx2 fma3" install --user
```

To customize only `build_ext`:

```
cd /path/to/numpy
python setup.py build_ext --cpu-baseline="avx2 fma3" install --user
```

To customize only `build_clib`:

```
cd /path/to/numpy
python setup.py build_clib --cpu-baseline="avx2 fma3" install --user
```

You can also customize CPU/build options through the PIP command:

```
pip install --no-use-pep517 --global-option=build \
--global-option="--cpu-baseline=avx2 fma3" \
--global-option="--cpu-dispatch=max" ./
```

Quick Start
-----------

In general, the default settings tend not to impose CPU features that may be unavailable on some older processors. Raising the ceiling of the baseline features will often improve performance and may also reduce binary size.

The following are the most common scenarios that may require changing the default settings:

### I am building NumPy for my local use

And I do not intend to export the build to other users or target a different CPU than what the host has.

Set `native` for baseline, or manually specify the CPU features in case option `native` isn’t supported by your platform:

```
python setup.py build --cpu-baseline="native" bdist
```

Building NumPy with extra CPU features isn’t necessary for this case, since all supported features are already defined within the baseline features:

```
python setup.py build --cpu-baseline=native --cpu-dispatch=none bdist
```

Note

A fatal error will be raised if `native` isn’t supported by the host platform.

### I do not want to support the old processors of the `x86` architecture

Since most of the CPUs nowadays support at least the `AVX` and `F16C` features, you can use:

```
python setup.py build --cpu-baseline="avx f16c" bdist
```

Note

`--cpu-baseline` forces the combining of all implied features, so there’s no need to add SSE features.
### I’m facing the same case above but with `ppc64` architecture

Then raise the ceiling of the baseline features to Power8:

```
python setup.py build --cpu-baseline="vsx2" bdist
```

### Having issues with `AVX512` features?

You may have some reservations about including `AVX512` or any other CPU feature, and you may want to exclude it from the dispatched features:

```
python setup.py build --cpu-dispatch="max -avx512f -avx512cd \
-avx512_knl -avx512_knm -avx512_skx -avx512_clx -avx512_cnl -avx512_icl" \
bdist
```

Supported Features
------------------

The names of the features can express one feature or a group of features, as shown in the following tables; support depends on the lowest interest:

Note

The following features may not be supported by all compilers, and some compilers may produce a different set of implied features when it comes to features like `AVX512`, `AVX2`, and `FMA3`. See [Platform differences](#opt-platform-differences) for more details.

### On x86

| Name | Implies | Gathers |
| --- | --- | --- |
| `SSE` | `SSE2` | |
| `SSE2` | `SSE` | |
| `SSE3` | `SSE` `SSE2` | |
| `SSSE3` | `SSE` `SSE2` `SSE3` | |
| `SSE41` | `SSE` `SSE2` `SSE3` `SSSE3` | |
| `POPCNT` | `SSE` `SSE2` `SSE3` `SSSE3` `SSE41` | |
| `SSE42` | `SSE` `SSE2` `SSE3` `SSSE3` `SSE41` `POPCNT` | |
| `AVX` | `SSE` `SSE2` `SSE3` `SSSE3` `SSE41` `POPCNT` `SSE42` | |
| `XOP` | `SSE` `SSE2` `SSE3` `SSSE3` `SSE41` `POPCNT` `SSE42` `AVX` | |
| `FMA4` | `SSE` `SSE2` `SSE3` `SSSE3` `SSE41` `POPCNT` `SSE42` `AVX` | |
| `F16C` | `SSE` `SSE2` `SSE3` `SSSE3` `SSE41` `POPCNT` `SSE42` `AVX` | |
| `FMA3` | `SSE` `SSE2` `SSE3` `SSSE3` `SSE41` `POPCNT` `SSE42` `AVX` `F16C` | |
| `AVX2` | `SSE` `SSE2` `SSE3` `SSSE3` `SSE41` `POPCNT` `SSE42` `AVX` `F16C` | |
| `AVX512F` | `SSE` `SSE2` `SSE3` `SSSE3` `SSE41` `POPCNT` `SSE42` `AVX` `F16C` `FMA3` `AVX2` | |
| `AVX512CD` | `SSE` `SSE2` `SSE3` `SSSE3` `SSE41` `POPCNT` `SSE42` `AVX` `F16C` `FMA3` `AVX2` `AVX512F` | |
| `AVX512_KNL` | `SSE` `SSE2` `SSE3` `SSSE3` `SSE41`
`POPCNT` `SSE42` `AVX` `F16C` `FMA3` `AVX2` `AVX512F` `AVX512CD` | `AVX512ER` `AVX512PF` |
| `AVX512_KNM` | `SSE` `SSE2` `SSE3` `SSSE3` `SSE41` `POPCNT` `SSE42` `AVX` `F16C` `FMA3` `AVX2` `AVX512F` `AVX512CD` `AVX512_KNL` | `AVX5124FMAPS` `AVX5124VNNIW` `AVX512VPOPCNTDQ` |
| `AVX512_SKX` | `SSE` `SSE2` `SSE3` `SSSE3` `SSE41` `POPCNT` `SSE42` `AVX` `F16C` `FMA3` `AVX2` `AVX512F` `AVX512CD` | `AVX512VL` `AVX512BW` `AVX512DQ` |
| `AVX512_CLX` | `SSE` `SSE2` `SSE3` `SSSE3` `SSE41` `POPCNT` `SSE42` `AVX` `F16C` `FMA3` `AVX2` `AVX512F` `AVX512CD` `AVX512_SKX` | `AVX512VNNI` |
| `AVX512_CNL` | `SSE` `SSE2` `SSE3` `SSSE3` `SSE41` `POPCNT` `SSE42` `AVX` `F16C` `FMA3` `AVX2` `AVX512F` `AVX512CD` `AVX512_SKX` | `AVX512IFMA` `AVX512VBMI` |
| `AVX512_ICL` | `SSE` `SSE2` `SSE3` `SSSE3` `SSE41` `POPCNT` `SSE42` `AVX` `F16C` `FMA3` `AVX2` `AVX512F` `AVX512CD` `AVX512_SKX` `AVX512_CLX` `AVX512_CNL` | `AVX512VBMI2` `AVX512BITALG` `AVX512VPOPCNTDQ` |

### On IBM/POWER big-endian

| Name | Implies |
| --- | --- |
| `VSX` | |
| `VSX2` | `VSX` |
| `VSX3` | `VSX` `VSX2` |
| `VSX4` | `VSX` `VSX2` `VSX3` |

### On IBM/POWER little-endian

| Name | Implies |
| --- | --- |
| `VSX` | `VSX2` |
| `VSX2` | `VSX` |
| `VSX3` | `VSX` `VSX2` |
| `VSX4` | `VSX` `VSX2` `VSX3` |

### On ARMv7/A32

| Name | Implies |
| --- | --- |
| `NEON` | |
| `NEON_FP16` | `NEON` |
| `NEON_VFPV4` | `NEON` `NEON_FP16` |
| `ASIMD` | `NEON` `NEON_FP16` `NEON_VFPV4` |
| `ASIMDHP` | `NEON` `NEON_FP16` `NEON_VFPV4` `ASIMD` |
| `ASIMDDP` | `NEON` `NEON_FP16` `NEON_VFPV4` `ASIMD` |
| `ASIMDFHM` | `NEON` `NEON_FP16` `NEON_VFPV4` `ASIMD` `ASIMDHP` |

### On ARMv8/A64

| Name | Implies |
| --- | --- |
| `NEON` | `NEON_FP16` `NEON_VFPV4` `ASIMD` |
| `NEON_FP16` | `NEON` `NEON_VFPV4` `ASIMD` |
| `NEON_VFPV4` | `NEON` `NEON_FP16` `ASIMD` |
| `ASIMD` | `NEON` `NEON_FP16` `NEON_VFPV4` |
| `ASIMDHP` | `NEON` `NEON_FP16` `NEON_VFPV4` `ASIMD` |
| `ASIMDDP` | `NEON` `NEON_FP16` `NEON_VFPV4` `ASIMD` |
| `ASIMDFHM` | `NEON` `NEON_FP16`
`NEON_VFPV4` `ASIMD` `ASIMDHP` |

### On IBM/ZSYSTEM(S390X)

| Name | Implies |
| --- | --- |
| `VX` | |
| `VXE` | `VX` |
| `VXE2` | `VX` `VXE` |

Special Options
---------------

* `NONE`: enable no features.
* `NATIVE`: enables all CPU features supported by the host CPU; this operation is based on the compiler flags (`-march=native`, `-xHost`, `/QxHost`).
* `MIN`: enables the minimum CPU features that can safely run on a wide range of platforms:

| For Arch | Implies |
| --- | --- |
| x86 (32-bit mode) | `SSE` `SSE2` |
| x86_64 | `SSE` `SSE2` `SSE3` |
| IBM/POWER (big-endian mode) | `NONE` |
| IBM/POWER (little-endian mode) | `VSX` `VSX2` |
| ARMHF | `NONE` |
| ARM64 A.K.A. AARCH64 | `NEON` `NEON_FP16` `NEON_VFPV4` `ASIMD` |
| IBM/ZSYSTEM(S390X) | `NONE` |

* `MAX`: enables all CPU features supported by the compiler and platform.
* `Operators -/+`: remove or add features; useful with the options `MAX`, `MIN` and `NATIVE`.

Behaviors
---------

* CPU features and other options are case-insensitive, for example:

```
python setup.py build --cpu-dispatch="SSE41 avx2 FMA3"
```

* The order of the requested optimizations doesn’t matter:

```
python setup.py build --cpu-dispatch="SSE41 AVX2 FMA3"
# equivalent to
python setup.py build --cpu-dispatch="FMA3 AVX2 SSE41"
```

* Either commas, spaces or ‘+’ can be used as a separator, for example:

```
python setup.py build --cpu-dispatch="avx2 avx512f"
# or
python setup.py build --cpu-dispatch=avx2,avx512f
# or
python setup.py build --cpu-dispatch="avx2+avx512f"
```

All of these work, but arguments should be enclosed in quotes or escaped by backslash if any spaces are used.
* `--cpu-baseline` combines all implied CPU features, for example:

```
python setup.py build --cpu-baseline=sse42
# equivalent to
python setup.py build --cpu-baseline="sse sse2 sse3 ssse3 sse41 popcnt sse42"
```

* `--cpu-baseline` will be treated as “native” if the compiler native flag `-march=native`, `-xHost` or `/QxHost` is enabled through the environment variable `CFLAGS`:

```
export CFLAGS="-march=native"
python setup.py install --user
# is equivalent to
python setup.py build --cpu-baseline=native install --user
```

* `--cpu-baseline` escapes any specified features that aren’t supported by the target platform or compiler rather than raising fatal errors.

Note

Since `--cpu-baseline` combines all implied features, the maximum supported subset of implied features will be enabled rather than escaping all of them. For example:

```
# Requesting `AVX2,FMA3` but the compiler only supports **SSE** features
python setup.py build --cpu-baseline="avx2 fma3"
# is equivalent to
python setup.py build --cpu-baseline="sse sse2 sse3 ssse3 sse41 popcnt sse42"
```

* `--cpu-dispatch` does not combine any implied CPU features, so you must add them unless you want to disable one or all of them:

```
# Only dispatches AVX2 and FMA3
python setup.py build --cpu-dispatch=avx2,fma3
# Dispatches AVX and SSE features
python setup.py build --cpu-baseline=ssse3,sse41,sse42,avx,avx2,fma3
```

* `--cpu-dispatch` escapes any specified baseline features and also escapes any features not supported by the target platform or compiler without raising fatal errors.

Eventually, you should always check the final report through the build log to verify the enabled features. See [Build report](#opt-build-report) for more details.

Platform differences
--------------------

Some exceptional conditions force us to link some features together when it comes to certain compilers or architectures, resulting in the impossibility of building them separately.
These conditions can be divided into two parts, as follows:

**Architectural compatibility**

The need to align certain CPU features that are assured to be supported by successive generations of the same architecture; some cases:

* On ppc64le `VSX(ISA 2.06)` and `VSX2(ISA 2.07)` both imply one another, since the first generation that supports little-endian mode is Power-8`(ISA 2.07)`
* On AArch64 `NEON NEON_FP16 NEON_VFPV4 ASIMD` imply each other, since they are part of the hardware baseline.

For example:

```
# On ARMv8/A64, specifying NEON is going to enable Advanced SIMD
# and all predecessor extensions
python setup.py build --cpu-baseline=neon
# which is equivalent to
python setup.py build --cpu-baseline="neon neon_fp16 neon_vfpv4 asimd"
```

Note

Please take a deep look at [Supported Features](#opt-supported-features), in order to determine the features that imply one another.

**Compilation compatibility**

Some compilers don’t provide independent support for all CPU features. For instance, **Intel**’s compiler doesn’t provide separate flags for `AVX2` and `FMA3`. This makes sense, since all Intel CPUs that come with `AVX2` also support `FMA3`, but the approach is incompatible with other **x86** CPUs from **AMD** or **VIA**. For example:

```
# Specifying AVX2 will force-enable FMA3 on Intel compilers
python setup.py build --cpu-baseline=avx2
# which is equivalent to
python setup.py build --cpu-baseline="avx2 fma3"
```

The following tables only show the differences imposed by some compilers from the general context that has been shown in the [Supported Features](#opt-supported-features) tables:

Note

Feature names with strikeout represent the unsupported CPU features.
### On x86::Intel Compiler

| Name | Implies |
| --- | --- |
| FMA3 | SSE SSE2 SSE3 SSSE3 SSE41 POPCNT SSE42 AVX F16C AVX2 |
| AVX2 | SSE SSE2 SSE3 SSSE3 SSE41 POPCNT SSE42 AVX F16C FMA3 |
| AVX512F | SSE SSE2 SSE3 SSSE3 SSE41 POPCNT SSE42 AVX F16C FMA3 AVX2 AVX512CD |
| XOP | SSE SSE2 SSE3 SSSE3 SSE41 POPCNT SSE42 AVX |
| FMA4 | SSE SSE2 SSE3 SSSE3 SSE41 POPCNT SSE42 AVX |

### On x86::Microsoft Visual C/C++

| Name | Implies | Gathers |
| --- | --- | --- |
| FMA3 | SSE SSE2 SSE3 SSSE3 SSE41 POPCNT SSE42 AVX F16C AVX2 | |
| AVX2 | SSE SSE2 SSE3 SSSE3 SSE41 POPCNT SSE42 AVX F16C FMA3 | |
| AVX512F | SSE SSE2 SSE3 SSSE3 SSE41 POPCNT SSE42 AVX F16C FMA3 AVX2 AVX512CD AVX512_SKX | |
| AVX512CD | SSE SSE2 SSE3 SSSE3 SSE41 POPCNT SSE42 AVX F16C FMA3 AVX2 AVX512F AVX512_SKX | |
| AVX512_KNL | SSE SSE2 SSE3 SSSE3 SSE41 POPCNT SSE42 AVX F16C FMA3 AVX2 AVX512F AVX512CD | AVX512ER AVX512PF |
| AVX512_KNM | SSE SSE2 SSE3 SSSE3 SSE41 POPCNT SSE42 AVX F16C FMA3 AVX2 AVX512F AVX512CD AVX512_KNL | AVX5124FMAPS AVX5124VNNIW AVX512VPOPCNTDQ |

Build report
------------

In most cases, the CPU build options do not produce any fatal errors that stop the build. Most of the errors that may appear in the build log serve as heavy warnings, due to the compiler lacking some expected CPU features. So we strongly recommend checking the final report log, to be aware of which CPU features are enabled and which are not.
You can find the final report of CPU optimizations at the end of the build log, and here is how it looks on x86_64/gcc: ``` ########### EXT COMPILER OPTIMIZATION ########### Platform : Architecture: x64 Compiler : gcc CPU baseline : Requested : 'min' Enabled : SSE SSE2 SSE3 Flags : -msse -msse2 -msse3 Extra checks: none CPU dispatch : Requested : 'max -xop -fma4' Enabled : SSSE3 SSE41 POPCNT SSE42 AVX F16C FMA3 AVX2 AVX512F AVX512CD AVX512_KNL AVX512_KNM AVX512_SKX AVX512_CLX AVX512_CNL AVX512_ICL Generated : : SSE41 : SSE SSE2 SSE3 SSSE3 Flags : -msse -msse2 -msse3 -mssse3 -msse4.1 Extra checks: none Detect : SSE SSE2 SSE3 SSSE3 SSE41 : build/src.linux-x86_64-3.9/numpy/core/src/umath/loops_arithmetic.dispatch.c : numpy/core/src/umath/_umath_tests.dispatch.c : SSE42 : SSE SSE2 SSE3 SSSE3 SSE41 POPCNT Flags : -msse -msse2 -msse3 -mssse3 -msse4.1 -mpopcnt -msse4.2 Extra checks: none Detect : SSE SSE2 SSE3 SSSE3 SSE41 POPCNT SSE42 : build/src.linux-x86_64-3.9/numpy/core/src/_simd/_simd.dispatch.c : AVX2 : SSE SSE2 SSE3 SSSE3 SSE41 POPCNT SSE42 AVX F16C Flags : -msse -msse2 -msse3 -mssse3 -msse4.1 -mpopcnt -msse4.2 -mavx -mf16c -mavx2 Extra checks: none Detect : AVX F16C AVX2 : build/src.linux-x86_64-3.9/numpy/core/src/umath/loops_arithm_fp.dispatch.c : build/src.linux-x86_64-3.9/numpy/core/src/umath/loops_arithmetic.dispatch.c : numpy/core/src/umath/_umath_tests.dispatch.c : (FMA3 AVX2) : SSE SSE2 SSE3 SSSE3 SSE41 POPCNT SSE42 AVX F16C Flags : -msse -msse2 -msse3 -mssse3 -msse4.1 -mpopcnt -msse4.2 -mavx -mf16c -mfma -mavx2 Extra checks: none Detect : AVX F16C FMA3 AVX2 : build/src.linux-x86_64-3.9/numpy/core/src/_simd/_simd.dispatch.c : build/src.linux-x86_64-3.9/numpy/core/src/umath/loops_exponent_log.dispatch.c : build/src.linux-x86_64-3.9/numpy/core/src/umath/loops_trigonometric.dispatch.c : AVX512F : SSE SSE2 SSE3 SSSE3 SSE41 POPCNT SSE42 AVX F16C FMA3 AVX2 Flags : -msse -msse2 -msse3 -mssse3 -msse4.1 -mpopcnt -msse4.2 -mavx -mf16c -mfma -mavx2 -mavx512f Extra 
checks: AVX512F_REDUCE Detect : AVX512F : build/src.linux-x86_64-3.9/numpy/core/src/_simd/_simd.dispatch.c : build/src.linux-x86_64-3.9/numpy/core/src/umath/loops_arithm_fp.dispatch.c : build/src.linux-x86_64-3.9/numpy/core/src/umath/loops_arithmetic.dispatch.c : build/src.linux-x86_64-3.9/numpy/core/src/umath/loops_exponent_log.dispatch.c : build/src.linux-x86_64-3.9/numpy/core/src/umath/loops_trigonometric.dispatch.c : AVX512_SKX : SSE SSE2 SSE3 SSSE3 SSE41 POPCNT SSE42 AVX F16C FMA3 AVX2 AVX512F AVX512CD Flags : -msse -msse2 -msse3 -mssse3 -msse4.1 -mpopcnt -msse4.2 -mavx -mf16c -mfma -mavx2 -mavx512f -mavx512cd -mavx512vl -mavx512bw -mavx512dq Extra checks: AVX512BW_MASK AVX512DQ_MASK Detect : AVX512_SKX : build/src.linux-x86_64-3.9/numpy/core/src/_simd/_simd.dispatch.c : build/src.linux-x86_64-3.9/numpy/core/src/umath/loops_arithmetic.dispatch.c : build/src.linux-x86_64-3.9/numpy/core/src/umath/loops_exponent_log.dispatch.c CCompilerOpt.cache_flush[804] : write cache to path -> /home/seiko/work/repos/numpy/build/temp.linux-x86_64-3.9/ccompiler_opt_cache_ext.py ########### CLIB COMPILER OPTIMIZATION ########### Platform : Architecture: x64 Compiler : gcc CPU baseline : Requested : 'min' Enabled : SSE SSE2 SSE3 Flags : -msse -msse2 -msse3 Extra checks: none CPU dispatch : Requested : 'max -xop -fma4' Enabled : SSSE3 SSE41 POPCNT SSE42 AVX F16C FMA3 AVX2 AVX512F AVX512CD AVX512_KNL AVX512_KNM AVX512_SKX AVX512_CLX AVX512_CNL AVX512_ICL Generated : none ``` As you see, there is a separate report for each of `build_ext` and `build_clib` that includes several sections, and each section has several values, representing the following: **Platform**: * Architecture: The architecture name of target CPU. It should be one of `x86`, `x64`, `ppc64`, `ppc64le`, `armhf`, `aarch64`, `s390x` or `unknown`. * Compiler: The compiler name. It should be one of gcc, clang, msvc, icc, iccw or unix-like. 
**CPU baseline**:

* Requested: The specific features and options passed to `--cpu-baseline`, as-is.
* Enabled: The final set of enabled CPU features.
* Flags: The compiler flags that were used for all NumPy `C/C++` sources during the compilation, except for temporary sources that have been used for generating the binary objects of dispatched features.
* Extra checks: List of internal checks that activate certain functionality or intrinsics related to the enabled features, useful for debugging when it comes to developing SIMD kernels.

**CPU dispatch**:

* Requested: The specific features and options passed to `--cpu-dispatch`, as-is.
* Enabled: The final set of enabled CPU features.
* Generated: At the beginning of the next row of this property, the features for which optimizations have been generated are shown in the form of several sections with similar properties, explained as follows:
  + One or multiple dispatched features: The implied CPU features.
  + Flags: The compiler flags that were used for these features.
  + Extra checks: Similar to the baseline, but for these dispatched features.
  + Detect: Set of CPU features that need to be detected at runtime in order to execute the generated optimizations.
  + The lines that come after the above property and end with a ‘:’ on a separate line represent the paths of the c/c++ sources that define the generated optimizations.

Runtime Trace
-------------

To be completed.

How does the CPU dispatcher work?
=================================

The NumPy dispatcher is based on multi-source compiling, which means taking a certain source and compiling it multiple times with different compiler flags and also with different **C** definitions that affect the code paths. This enables certain instruction-sets for each compiled object depending on the required optimizations, and ends with linking the returned objects together.
![Dispatcher infrastructure](../../_images/opt-infra.png)

This mechanism should support all compilers, and it doesn’t require any compiler-specific extension; but at the same time it adds a few steps to normal compilation, which are explained as follows.

1- Configuration
----------------

The user configures the required optimizations before the build starts, via the two command arguments explained above:

* `--cpu-baseline`: minimal set of required optimizations.
* `--cpu-dispatch`: dispatched set of additional optimizations.

2- Discovering the environment
------------------------------

In this part, we check the compiler and platform architecture and cache some of the intermediary results to speed up rebuilding.

3- Validating the requested optimizations
-----------------------------------------

The requested optimizations are tested against the compiler to determine which of them the compiler can support.

4- Generating the main configuration header
-------------------------------------------

The generated header `_cpu_dispatch.h` contains all the definitions and headers of instruction-sets for the required optimizations that have been validated during the previous step. It also contains extra C definitions that are used for defining NumPy’s Python-level module attributes `__cpu_baseline__` and `__cpu_dispatch__`.

**What is in this header?**

The example header was dynamically generated by gcc on an X86 machine. The compiler supports `--cpu-baseline="sse sse2 sse3"` and `--cpu-dispatch="ssse3 sse41"`, and the result is below.

```
// The header should be located at numpy/numpy/core/src/common/_cpu_dispatch.h
/**NOTE
 ** C definitions prefixed with "NPY_HAVE_" represent
 ** the required optimizations.
 **
 ** C definitions prefixed with 'NPY__CPU_TARGET_' are protected and
 ** shouldn't be used by any NumPy C sources.
 */
/******* baseline features *******/
/** SSE **/
#define NPY_HAVE_SSE 1
#include <xmmintrin.h>
/** SSE2 **/
#define NPY_HAVE_SSE2 1
#include <emmintrin.h>
/** SSE3 **/
#define NPY_HAVE_SSE3 1
#include <pmmintrin.h>

/******* dispatch-able features *******/
#ifdef NPY__CPU_TARGET_SSSE3
  /** SSSE3 **/
  #define NPY_HAVE_SSSE3 1
  #include <tmmintrin.h>
#endif
#ifdef NPY__CPU_TARGET_SSE41
  /** SSE41 **/
  #define NPY_HAVE_SSE41 1
  #include <smmintrin.h>
#endif
```

**Baseline features** are the minimal set of required optimizations configured via `--cpu-baseline`. They have no preprocessor guards and they’re always on, which means they can be used in any source.

Does this mean NumPy’s infrastructure passes the compiler’s flags for baseline features to all sources? Definitely, yes. But the [dispatch-able sources](#dispatchable-sources) are treated differently.

What if the user specifies certain **baseline features** during the build, but at runtime the machine doesn’t support even these features? Will the compiled code be called via one of these definitions, or maybe the compiler itself auto-generated/vectorized a certain piece of code based on the provided command-line compiler flags?

During the loading of the NumPy module, there’s a validation step which detects this behavior. It will raise a Python runtime error to inform the user. This is to prevent the CPU from reaching an illegal instruction error and causing a segfault.

**Dispatch-able features** are our dispatched set of additional optimizations that were configured via `--cpu-dispatch`. They are not activated by default and are always guarded by other C definitions prefixed with `NPY__CPU_TARGET_`. C definitions `NPY__CPU_TARGET_` are only enabled within **dispatch-able sources**.
5- Dispatch-able sources and configuration statements
-----------------------------------------------------

Dispatch-able sources are special **C** files that can be compiled multiple times with different compiler flags and also with different **C** definitions. These affect code paths to enable certain instruction-sets for each compiled object according to “**the configuration statements**”, which must be declared inside a **C** comment `(/**/)` and start with the special mark **@targets** at the top of each dispatch-able source. At the same time, dispatch-able sources will be treated as normal **C** sources if the optimization was disabled by the command argument `--disable-optimization`.

**What are configuration statements?**

Configuration statements are a sort of keywords combined together to determine the required optimization for the dispatch-able source. Example:

```
/*@targets avx2 avx512f vsx2 vsx3 asimd asimdhp */
// C code
```

The keywords mainly represent the additional optimizations configured through `--cpu-dispatch`, but they can also represent other options such as:

* Target groups: pre-configured configuration statements used for managing the required optimizations from outside the dispatch-able source.
* Policies: collections of options used for changing the default behaviors or forcing the compilers to perform certain things.
* “baseline”: a unique keyword that represents the minimal optimizations configured through `--cpu-baseline`.

**NumPy’s infrastructure handles dispatch-able sources in four steps**:

* **(A) Recognition**: Just like source templates and F2PY, dispatch-able sources require a special extension `*.dispatch.c` to mark C dispatch-able source files, and for C++ `*.dispatch.cpp` or `*.dispatch.cxx`. **NOTE**: C++ is not supported yet.
* **(B) Parsing and validating**: In this step, the dispatch-able sources that were filtered by the previous step are parsed, and their configuration statements validated, one by one, in order to determine the required optimizations.
* **(C) Wrapping**: This is the approach taken by NumPy’s infrastructure, which has proved to be sufficiently flexible to compile a single source multiple times with different **C** definitions and flags that affect the code paths. The process is achieved by creating a temporary **C** source for each required optimization related to the additional optimizations, which contains the declarations of the **C** definitions and includes the involved source via the **C** directive **#include**. For more clarification, take a look at the following code for AVX512F:

```
/*
 * this definition is used by NumPy utilities as suffixes for the
 * exported symbols
 */
#define NPY__CPU_TARGET_CURRENT AVX512F
/*
 * The following definitions enable
 * definitions of the dispatch-able features that are defined within the main
 * configuration header. These are definitions for the implied features.
 */
#define NPY__CPU_TARGET_SSE
#define NPY__CPU_TARGET_SSE2
#define NPY__CPU_TARGET_SSE3
#define NPY__CPU_TARGET_SSSE3
#define NPY__CPU_TARGET_SSE41
#define NPY__CPU_TARGET_POPCNT
#define NPY__CPU_TARGET_SSE42
#define NPY__CPU_TARGET_AVX
#define NPY__CPU_TARGET_F16C
#define NPY__CPU_TARGET_FMA3
#define NPY__CPU_TARGET_AVX2
#define NPY__CPU_TARGET_AVX512F
// our dispatch-able source
#include "/the/absolute/path/of/hello.dispatch.c"
```

* **(D) Dispatch-able configuration header**: The infrastructure generates a config header for each dispatch-able source. This header mainly contains two abstract **C** macros used for identifying the generated objects, so they can be used for runtime dispatching of certain symbols from the generated objects by any **C** source. It is also used for forward declarations.
The generated header takes the name of the dispatch-able source after excluding the extension and replacing it with `.h`. For example, assume we have a dispatch-able source called `hello.dispatch.c` that contains the following:

```
// hello.dispatch.c
/*@targets baseline sse42 avx512f */
#include <stdio.h>
#include "numpy/utils.h" // NPY_CAT, NPY_TOSTR

#ifndef NPY__CPU_TARGET_CURRENT
    // wrapping the dispatch-able source only happens for the additional optimizations,
    // but if the keyword 'baseline' is provided within the configuration statements,
    // the infrastructure will add an extra compilation of the dispatch-able source by
    // passing it as-is to the compiler without any changes.
    #define CURRENT_TARGET(X) X
    #define NPY__CPU_TARGET_CURRENT baseline // for printing only
#else
    // since we reached this point, that means we're dealing with
    // the additional optimizations, so it could be SSE42 or AVX512F
    #define CURRENT_TARGET(X) NPY_CAT(NPY_CAT(X, _), NPY__CPU_TARGET_CURRENT)
#endif
// Macro 'CURRENT_TARGET' adds the current target as a suffix to the exported symbols,
// to avoid linking duplications. NumPy already has a similar macro called
// 'NPY_CPU_DISPATCH_CURFX', located at
// numpy/numpy/core/src/common/npy_cpu_dispatch.h
// NOTE: we tend not to add suffixes to the baseline exported symbols
void CURRENT_TARGET(simd_whoami)(const char *extra_info)
{
    printf("I'm " NPY_TOSTR(NPY__CPU_TARGET_CURRENT) ", %s\n", extra_info);
}
```

Now assume you attached **hello.dispatch.c** to the source tree. Then the infrastructure should generate a temporary config header called **hello.dispatch.h** that can be reached by any source in the source tree, and it should contain the following code:

```
#ifndef NPY__CPU_DISPATCH_EXPAND_
    // To expand the macro calls in this header
    #define NPY__CPU_DISPATCH_EXPAND_(X) X
#endif
// Undefining the following macros, due to the possibility of including config headers
// multiple times within the same source and since each config header
represents
// different required optimizations according to the specified configuration
// statements in the dispatch-able source it was derived from.
#undef NPY__CPU_DISPATCH_BASELINE_CALL
#undef NPY__CPU_DISPATCH_CALL
// nothing strange here, just a normal preprocessor callback
// enabled only if 'baseline' is specified within the configuration statements
#define NPY__CPU_DISPATCH_BASELINE_CALL(CB, ...) \
    NPY__CPU_DISPATCH_EXPAND_(CB(__VA_ARGS__))
// 'NPY__CPU_DISPATCH_CALL' is an abstract macro used for dispatching
// the required optimizations that are specified within the configuration statements.
//
// @param CHK, Expected to be a macro that can be used to detect CPU features
// at runtime, which takes a CPU feature name without string quotes and
// returns the testing result as a boolean value.
// NumPy already has a macro called "NPY_CPU_HAVE", which fits this requirement.
//
// @param CB, a callback macro that is expected to be called multiple times depending
// on the required optimizations; the callback should receive the following arguments:
//  1- The pending calls of @param CHK filled up with the required CPU features,
//     that need to be tested first at runtime before executing the call belonging
//     to the compiled object.
//  2- The required optimization name, same as in 'NPY__CPU_TARGET_CURRENT'
//  3- Extra arguments in the macro itself
//
// By default the callback calls are sorted depending on the highest interest,
// unless the policy "$keep_sort" was in place within the configuration statements;
// see "Dive into the CPU dispatcher" for more clarification.
#define NPY__CPU_DISPATCH_CALL(CHK, CB, ...) \
    NPY__CPU_DISPATCH_EXPAND_(CB((CHK(AVX512F)), AVX512F, __VA_ARGS__)) \
    NPY__CPU_DISPATCH_EXPAND_(CB((CHK(SSE)&&CHK(SSE2)&&CHK(SSE3)&&CHK(SSSE3)&&CHK(SSE41)), SSE41, __VA_ARGS__))
```

An example of using the config header in light of the above:

```
// NOTE: The following macros are defined for demonstration purposes only.
// NumPy already has a collections of macros located at // numpy/numpy/core/src/common/npy_cpu_dispatch.h, that covers all dispatching // and declarations scenarios. #include "numpy/npy_cpu_features.h" // NPY_CPU_HAVE #include "numpy/utils.h" // NPY_CAT, NPY_EXPAND // An example for setting a macro that calls all the exported symbols at once // after checking if they're supported by the running machine. #define DISPATCH_CALL_ALL(FN, ARGS) \ NPY__CPU_DISPATCH_CALL(NPY_CPU_HAVE, DISPATCH_CALL_ALL_CB, FN, ARGS) \ NPY__CPU_DISPATCH_BASELINE_CALL(DISPATCH_CALL_BASELINE_ALL_CB, FN, ARGS) // The preprocessor callbacks. // The same suffixes as we define it in the dispatch-able source. #define DISPATCH_CALL_ALL_CB(CHECK, TARGET_NAME, FN, ARGS) \ if (CHECK) { NPY_CAT(NPY_CAT(FN, _), TARGET_NAME) ARGS; } #define DISPATCH_CALL_BASELINE_ALL_CB(FN, ARGS) \ FN NPY_EXPAND(ARGS); // An example for setting a macro that calls the exported symbols of highest // interest optimization, after checking if they're supported by the running machine. #define DISPATCH_CALL_HIGH(FN, ARGS) \ if (0) {} \ NPY__CPU_DISPATCH_CALL(NPY_CPU_HAVE, DISPATCH_CALL_HIGH_CB, FN, ARGS) \ NPY__CPU_DISPATCH_BASELINE_CALL(DISPATCH_CALL_BASELINE_HIGH_CB, FN, ARGS) // The preprocessor callbacks // The same suffixes as we define it in the dispatch-able source. #define DISPATCH_CALL_HIGH_CB(CHECK, TARGET_NAME, FN, ARGS) \ else if (CHECK) { NPY_CAT(NPY_CAT(FN, _), TARGET_NAME) ARGS; } #define DISPATCH_CALL_BASELINE_HIGH_CB(FN, ARGS) \ else { FN NPY_EXPAND(ARGS); } // NumPy has a macro called 'NPY_CPU_DISPATCH_DECLARE' can be used // for forward declarations any kind of prototypes based on // 'NPY__CPU_DISPATCH_CALL' and 'NPY__CPU_DISPATCH_BASELINE_CALL'. // However in this example, we just handle it manually. 
void simd_whoami(const char *extra_info); void simd_whoami_AVX512F(const char *extra_info); void simd_whoami_SSE41(const char *extra_info); void trigger_me(void) { // bring the auto-generated config header // which contains config macros 'NPY__CPU_DISPATCH_CALL' and // 'NPY__CPU_DISPATCH_BASELINE_CALL'. // it is highly recommended to include the config header before executing // the dispatching macros in case if there's another header in the scope. #include "hello.dispatch.h" DISPATCH_CALL_ALL(simd_whoami, ("all")) DISPATCH_CALL_HIGH(simd_whoami, ("the highest interest")) // An example of including multiple config headers in the same source // #include "hello2.dispatch.h" // DISPATCH_CALL_HIGH(another_function, ("the highest interest")) } ``` © 2005–2022 NumPy Developers Licensed under the 3-clause BSD License. <https://numpy.org/doc/1.23/reference/simd/how-it-works.htmlnumpy.i: a SWIG Interface File for NumPy ======================================== Introduction ------------ The Simple Wrapper and Interface Generator (or [SWIG](http://www.swig.org)) is a powerful tool for generating wrapper code for interfacing to a wide variety of scripting languages. [SWIG](http://www.swig.org) can parse header files, and using only the code prototypes, create an interface to the target language. But [SWIG](http://www.swig.org) is not omnipotent. For example, it cannot know from the prototype: ``` double rms(double* seq, int n); ``` what exactly `seq` is. Is it a single value to be altered in-place? Is it an array, and if so what is its length? Is it input-only? Output-only? Input-output? [SWIG](http://www.swig.org) cannot determine these details, and does not attempt to do so. If we designed `rms`, we probably made it a routine that takes an input-only array of length `n` of `double` values called `seq` and returns the root mean square. 
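In pure Python, the behavior we want the wrapper to expose could be sketched as follows (a hypothetical reference implementation for illustration only; the real wrapper delegates the arithmetic to the C routine):

```python
import math

def rms(seq):
    """Return the root mean square of a sequence of numbers."""
    n = len(seq)
    # mean of the squares, then the square root
    return math.sqrt(sum(x * x for x in seq) / n)
```

For example, `rms([3.0, 4.0])` is `sqrt((9 + 16) / 2)`, i.e. `sqrt(12.5)`.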
The default behavior of [SWIG](http://www.swig.org), however, will be to create a wrapper function that compiles, but is nearly impossible to use from the scripting language in the way the C routine was intended.

For Python, the preferred way of handling contiguous (or technically, *strided*) blocks of homogeneous data is with NumPy, which provides full object-oriented access to multidimensional arrays of data. Therefore, the most logical Python interface for the `rms` function would be (including doc string):

```
def rms(seq):
    """
    rms: return the root mean square of a sequence
    rms(numpy.ndarray) -> double
    rms(list) -> double
    rms(tuple) -> double
    """
```

where `seq` would be a NumPy array of `double` values, and its length `n` would be extracted from `seq` internally before being passed to the C routine. Even better, since NumPy supports construction of arrays from arbitrary Python sequences, `seq` itself could be a nearly arbitrary sequence (so long as each element can be converted to a `double`) and the wrapper code would internally convert it to a NumPy array before extracting its data and length.

[SWIG](http://www.swig.org) allows these types of conversions to be defined via a mechanism called *typemaps*. This document provides information on how to use `numpy.i`, a [SWIG](http://www.swig.org) interface file that defines a series of typemaps intended to make the type of array-related conversions described above relatively simple to implement. For example, suppose that the `rms` function prototype defined above was in a header file named `rms.h`. To obtain the Python interface discussed above, your [SWIG](http://www.swig.org) interface file would need the following:

```
%{
#define SWIG_FILE_WITH_INIT
#include "rms.h"
%}

%include "numpy.i"

%init %{
import_array();
%}

%apply (double* IN_ARRAY1, int DIM1) {(double* seq, int n)};
%include "rms.h"
```

Typemaps are keyed off a list of one or more function arguments, either by type or by type and name.
We will refer to such lists as *signatures*. One of the many typemaps defined by `numpy.i` is used above and has the signature `(double* IN_ARRAY1, int DIM1)`. The argument names are intended to suggest that the `double*` argument is an input array of one dimension and that the `int` represents the size of that dimension. This is precisely the pattern in the `rms` prototype.

Most likely, no actual prototypes to be wrapped will have the argument names `IN_ARRAY1` and `DIM1`. We use the [SWIG](http://www.swig.org) `%apply` directive to apply the typemap for one-dimensional input arrays of type `double` to the actual prototype used by `rms`. Using `numpy.i` effectively, therefore, requires knowing what typemaps are available and what they do.

A [SWIG](http://www.swig.org) interface file that includes the [SWIG](http://www.swig.org) directives given above will produce wrapper code that looks something like:

```
 1 PyObject *_wrap_rms(PyObject *args) {
 2   PyObject *resultobj = 0;
 3   double *arg1 = (double *) 0 ;
 4   int arg2 ;
 5   double result;
 6   PyArrayObject *array1 = NULL ;
 7   int is_new_object1 = 0 ;
 8   PyObject * obj0 = 0 ;
 9
10   if (!PyArg_ParseTuple(args,(char *)"O:rms",&obj0)) SWIG_fail;
11   {
12     array1 = obj_to_array_contiguous_allow_conversion(
13                  obj0, NPY_DOUBLE, &is_new_object1);
14     npy_intp size[1] = {
15       -1
16     };
17     if (!array1 || !require_dimensions(array1, 1) ||
18         !require_size(array1, size, 1)) SWIG_fail;
19     arg1 = (double*) array1->data;
20     arg2 = (int) array1->dimensions[0];
21   }
22   result = (double)rms(arg1,arg2);
23   resultobj = SWIG_From_double((double)(result));
24   {
25     if (is_new_object1 && array1) Py_DECREF(array1);
26   }
27   return resultobj;
28 fail:
29   {
30     if (is_new_object1 && array1) Py_DECREF(array1);
31   }
32   return NULL;
33 }
```

The typemaps from `numpy.i` are responsible for the following lines of code: 12–20, 25 and 30. Line 10 parses the input to the `rms` function.
From the format string `"O:rms"`, we can see that the argument list is expected to be a single Python object (specified by the `O` before the colon), whose pointer is stored in `obj0`. A number of functions, supplied by `numpy.i`, are called to make and check the (possible) conversion from a generic Python object to a NumPy array. These functions are explained in the section [Helper Functions](#helper-functions), but hopefully their names are self-explanatory. At line 12 we use `obj0` to construct a NumPy array. At line 17, we check the validity of the result: that it is non-null and that it has a single dimension of arbitrary length. Once these states are verified, we extract the data buffer and length in lines 19 and 20 so that we can call the underlying C function at line 22. Line 25 performs memory management for the case where we have created a new array that is no longer needed.

This code has a significant amount of error handling. Note that `SWIG_fail` is a macro for `goto fail`, referring to the label at line 28. If the user provides the wrong number of arguments, this will be caught at line 10. If construction of the NumPy array fails or produces an array with the wrong number of dimensions, these errors are caught at line 17. And finally, if an error is detected, memory is still managed correctly at line 30.

Note that if the C function signature was in a different order:

```
double rms(int n, double* seq);
```

then [SWIG](http://www.swig.org) would not match the typemap signature given above with the argument list for `rms`. Fortunately, `numpy.i` has a set of typemaps with the data pointer given last:

```
%apply (int DIM1, double* IN_ARRAY1) {(int n, double* seq)};
```

This simply has the effect of switching the definitions of `arg1` and `arg2` in lines 3 and 4 of the generated code above, and their assignments in lines 19 and 20.
Using numpy.i
-------------

The `numpy.i` file is currently located in the `tools/swig` sub-directory under the `numpy` installation directory. Typically, you will want to copy it to the directory where you are developing your wrappers.

A simple module that only uses a single [SWIG](http://www.swig.org) interface file should include the following:

```
%{
#define SWIG_FILE_WITH_INIT
%}

%include "numpy.i"

%init %{
import_array();
%}
```

Within a compiled Python module, `import_array()` should only get called once. This could be in a C/C++ file that you have written and is linked to the module. If this is the case, then none of your interface files should `#define SWIG_FILE_WITH_INIT` or call `import_array()`. Or, this initialization call could be in a wrapper file generated by [SWIG](http://www.swig.org) from an interface file that has the `%init` block as above. If this is the case, and you have more than one [SWIG](http://www.swig.org) interface file, then only one interface file should `#define SWIG_FILE_WITH_INIT` and call `import_array()`.

Available Typemaps
------------------

The typemap directives provided by `numpy.i` for arrays of different data types, say `double` and `int`, and dimensions of different types, say `int` or `long`, are identical to one another except for the C and NumPy type specifications. The typemaps are therefore implemented (typically behind the scenes) via a macro:

```
%numpy_typemaps(DATA_TYPE, DATA_TYPECODE, DIM_TYPE)
```

that can be invoked for appropriate `(DATA_TYPE, DATA_TYPECODE, DIM_TYPE)` triplets.
For example:

```
%numpy_typemaps(double, NPY_DOUBLE, int)
%numpy_typemaps(int,    NPY_INT,    int)
```

The `numpy.i` interface file uses the `%numpy_typemaps` macro to implement typemaps for the following C data types and `int` dimension types:

* `signed char`
* `unsigned char`
* `short`
* `unsigned short`
* `int`
* `unsigned int`
* `long`
* `unsigned long`
* `long long`
* `unsigned long long`
* `float`
* `double`

In the following descriptions, we reference a generic `DATA_TYPE`, which could be any of the C data types listed above, and `DIM_TYPE`, which should be one of the many types of integers.

The typemap signatures are largely differentiated on the name given to the buffer pointer. Names with `FARRAY` are for Fortran-ordered arrays, and names with `ARRAY` are for C-ordered (or 1D) arrays.

### Input Arrays

Input arrays are defined as arrays of data that are passed into a routine but are not altered in-place or returned to the user. The Python input array is therefore allowed to be almost any Python sequence (such as a list) that can be converted to the requested type of array.
The input array signatures are

1D:

* `( DATA_TYPE IN_ARRAY1[ANY] )`
* `( DATA_TYPE* IN_ARRAY1, int DIM1 )`
* `( int DIM1, DATA_TYPE* IN_ARRAY1 )`

2D:

* `( DATA_TYPE IN_ARRAY2[ANY][ANY] )`
* `( DATA_TYPE* IN_ARRAY2, int DIM1, int DIM2 )`
* `( int DIM1, int DIM2, DATA_TYPE* IN_ARRAY2 )`
* `( DATA_TYPE* IN_FARRAY2, int DIM1, int DIM2 )`
* `( int DIM1, int DIM2, DATA_TYPE* IN_FARRAY2 )`

3D:

* `( DATA_TYPE IN_ARRAY3[ANY][ANY][ANY] )`
* `( DATA_TYPE* IN_ARRAY3, int DIM1, int DIM2, int DIM3 )`
* `( int DIM1, int DIM2, int DIM3, DATA_TYPE* IN_ARRAY3 )`
* `( DATA_TYPE* IN_FARRAY3, int DIM1, int DIM2, int DIM3 )`
* `( int DIM1, int DIM2, int DIM3, DATA_TYPE* IN_FARRAY3 )`

4D:

* `(DATA_TYPE IN_ARRAY4[ANY][ANY][ANY][ANY])`
* `(DATA_TYPE* IN_ARRAY4, DIM_TYPE DIM1, DIM_TYPE DIM2, DIM_TYPE DIM3, DIM_TYPE DIM4)`
* `(DIM_TYPE DIM1, DIM_TYPE DIM2, DIM_TYPE DIM3, DIM_TYPE DIM4, DATA_TYPE* IN_ARRAY4)`
* `(DATA_TYPE* IN_FARRAY4, DIM_TYPE DIM1, DIM_TYPE DIM2, DIM_TYPE DIM3, DIM_TYPE DIM4)`
* `(DIM_TYPE DIM1, DIM_TYPE DIM2, DIM_TYPE DIM3, DIM_TYPE DIM4, DATA_TYPE* IN_FARRAY4)`

The first signature listed, `( DATA_TYPE IN_ARRAY1[ANY] )`, is for one-dimensional arrays with hard-coded dimensions. Likewise, `( DATA_TYPE IN_ARRAY2[ANY][ANY] )` is for two-dimensional arrays with hard-coded dimensions, and similarly for three-dimensional arrays.

### In-Place Arrays

In-place arrays are defined as arrays that are modified in-place. The input values may or may not be used, but the values at the time the function returns are significant. The provided Python argument must therefore be a NumPy array of the required type.
The in-place signatures are

1D:

* `( DATA_TYPE INPLACE_ARRAY1[ANY] )`
* `( DATA_TYPE* INPLACE_ARRAY1, int DIM1 )`
* `( int DIM1, DATA_TYPE* INPLACE_ARRAY1 )`

2D:

* `( DATA_TYPE INPLACE_ARRAY2[ANY][ANY] )`
* `( DATA_TYPE* INPLACE_ARRAY2, int DIM1, int DIM2 )`
* `( int DIM1, int DIM2, DATA_TYPE* INPLACE_ARRAY2 )`
* `( DATA_TYPE* INPLACE_FARRAY2, int DIM1, int DIM2 )`
* `( int DIM1, int DIM2, DATA_TYPE* INPLACE_FARRAY2 )`

3D:

* `( DATA_TYPE INPLACE_ARRAY3[ANY][ANY][ANY] )`
* `( DATA_TYPE* INPLACE_ARRAY3, int DIM1, int DIM2, int DIM3 )`
* `( int DIM1, int DIM2, int DIM3, DATA_TYPE* INPLACE_ARRAY3 )`
* `( DATA_TYPE* INPLACE_FARRAY3, int DIM1, int DIM2, int DIM3 )`
* `( int DIM1, int DIM2, int DIM3, DATA_TYPE* INPLACE_FARRAY3 )`

4D:

* `(DATA_TYPE INPLACE_ARRAY4[ANY][ANY][ANY][ANY])`
* `(DATA_TYPE* INPLACE_ARRAY4, DIM_TYPE DIM1, DIM_TYPE DIM2, DIM_TYPE DIM3, DIM_TYPE DIM4)`
* `(DIM_TYPE DIM1, DIM_TYPE DIM2, DIM_TYPE DIM3, DIM_TYPE DIM4, DATA_TYPE* INPLACE_ARRAY4)`
* `(DATA_TYPE* INPLACE_FARRAY4, DIM_TYPE DIM1, DIM_TYPE DIM2, DIM_TYPE DIM3, DIM_TYPE DIM4)`
* `(DIM_TYPE DIM1, DIM_TYPE DIM2, DIM_TYPE DIM3, DIM_TYPE DIM4, DATA_TYPE* INPLACE_FARRAY4)`

These typemaps now check to make sure that the `INPLACE_ARRAY` arguments use native byte ordering. If not, an exception is raised.

There is also a “flat” in-place array for situations in which you would like to modify or process each element, regardless of the number of dimensions. One example is a “quantization” function that quantizes each element of an array in-place, be it 1D, 2D or whatever. This form checks for contiguity but allows either C or Fortran ordering.

ND:

* `(DATA_TYPE* INPLACE_ARRAY_FLAT, DIM_TYPE DIM_FLAT)`

### Argout Arrays

Argout arrays are arrays that appear in the input arguments in C, but are in fact output arrays. This pattern occurs often when there is more than one output variable and the single return argument is therefore not sufficient.
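To make the pattern concrete: a hypothetical C routine `void minmax(double* arr, int n, double* min_out, double* max_out)` has two output variables passed as pointers. With argout typemaps applied, the Python caller would see a function that takes only the input array and returns the two outputs packed together, roughly like this pure-Python sketch (the name and routine are invented for illustration):

```python
def minmax(arr):
    """Python-side behavior of a hypothetical wrapped C routine
    void minmax(double* arr, int n, double* min_out, double* max_out):
    the two argout pointers become a returned tuple."""
    lo = min(arr)
    hi = max(arr)
    return lo, hi
```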
In Python, the conventional way to return multiple arguments is to pack them into a sequence (tuple, list, etc.) and return the sequence. This is what the argout typemaps do. If a wrapped function that uses these argout typemaps has more than one return argument, they are packed into a tuple or list, depending on the version of Python. The Python user does not pass these arrays in; they simply get returned. For the case where a dimension is specified, the Python user must provide that dimension as an argument.

The argout signatures are

1D:

* `( DATA_TYPE ARGOUT_ARRAY1[ANY] )`
* `( DATA_TYPE* ARGOUT_ARRAY1, int DIM1 )`
* `( int DIM1, DATA_TYPE* ARGOUT_ARRAY1 )`

2D:

* `( DATA_TYPE ARGOUT_ARRAY2[ANY][ANY] )`

3D:

* `( DATA_TYPE ARGOUT_ARRAY3[ANY][ANY][ANY] )`

4D:

* `( DATA_TYPE ARGOUT_ARRAY4[ANY][ANY][ANY][ANY] )`

These are typically used in situations where in C/C++ you would allocate an array (or arrays) on the heap and call the function to fill the array values. In Python, the arrays are allocated for you and returned as new array objects.

Note that we support `DATA_TYPE*` argout typemaps in 1D, but not 2D or 3D. This is because of a quirk with the [SWIG](http://www.swig.org) typemap syntax and cannot be avoided. Note that for these types of 1D typemaps, the Python function will take a single argument representing `DIM1`.

### Argout View Arrays

Argoutview arrays are for when your C code provides you with a view of its internal data and does not require any memory to be allocated by the user. This can be dangerous. There is almost no way to guarantee that the internal data from the C code will remain in existence for the entire lifetime of the NumPy array that encapsulates it. If the user destroys the object that provides the view of the data before destroying the NumPy array, then using that array may result in bad memory references or segmentation faults. Nevertheless, there are situations, working with large data sets, where you simply have no other choice.
The C code to be wrapped for argoutview arrays is characterized by pointers: pointers to the dimensions and double pointers to the data, so that these values can be passed back to the user. The argoutview typemap signatures are therefore

1D:

* `( DATA_TYPE** ARGOUTVIEW_ARRAY1, DIM_TYPE* DIM1 )`
* `( DIM_TYPE* DIM1, DATA_TYPE** ARGOUTVIEW_ARRAY1 )`

2D:

* `( DATA_TYPE** ARGOUTVIEW_ARRAY2, DIM_TYPE* DIM1, DIM_TYPE* DIM2 )`
* `( DIM_TYPE* DIM1, DIM_TYPE* DIM2, DATA_TYPE** ARGOUTVIEW_ARRAY2 )`
* `( DATA_TYPE** ARGOUTVIEW_FARRAY2, DIM_TYPE* DIM1, DIM_TYPE* DIM2 )`
* `( DIM_TYPE* DIM1, DIM_TYPE* DIM2, DATA_TYPE** ARGOUTVIEW_FARRAY2 )`

3D:

* `( DATA_TYPE** ARGOUTVIEW_ARRAY3, DIM_TYPE* DIM1, DIM_TYPE* DIM2, DIM_TYPE* DIM3)`
* `( DIM_TYPE* DIM1, DIM_TYPE* DIM2, DIM_TYPE* DIM3, DATA_TYPE** ARGOUTVIEW_ARRAY3)`
* `( DATA_TYPE** ARGOUTVIEW_FARRAY3, DIM_TYPE* DIM1, DIM_TYPE* DIM2, DIM_TYPE* DIM3)`
* `( DIM_TYPE* DIM1, DIM_TYPE* DIM2, DIM_TYPE* DIM3, DATA_TYPE** ARGOUTVIEW_FARRAY3)`

4D:

* `(DATA_TYPE** ARGOUTVIEW_ARRAY4, DIM_TYPE* DIM1, DIM_TYPE* DIM2, DIM_TYPE* DIM3, DIM_TYPE* DIM4)`
* `(DIM_TYPE* DIM1, DIM_TYPE* DIM2, DIM_TYPE* DIM3, DIM_TYPE* DIM4, DATA_TYPE** ARGOUTVIEW_ARRAY4)`
* `(DATA_TYPE** ARGOUTVIEW_FARRAY4, DIM_TYPE* DIM1, DIM_TYPE* DIM2, DIM_TYPE* DIM3, DIM_TYPE* DIM4)`
* `(DIM_TYPE* DIM1, DIM_TYPE* DIM2, DIM_TYPE* DIM3, DIM_TYPE* DIM4, DATA_TYPE** ARGOUTVIEW_FARRAY4)`

Note that arrays with hard-coded dimensions are not supported. These cannot follow the double pointer signatures of these typemaps.

### Memory Managed Argout View Arrays

A recent addition to `numpy.i` is a set of typemaps that permit argout arrays with views into memory that is managed. See the discussion [here](http://blog.enthought.com/python/numpy-arrays-with-pre-allocated-memory).
1D:

* `(DATA_TYPE** ARGOUTVIEWM_ARRAY1, DIM_TYPE* DIM1)`
* `(DIM_TYPE* DIM1, DATA_TYPE** ARGOUTVIEWM_ARRAY1)`

2D:

* `(DATA_TYPE** ARGOUTVIEWM_ARRAY2, DIM_TYPE* DIM1, DIM_TYPE* DIM2)`
* `(DIM_TYPE* DIM1, DIM_TYPE* DIM2, DATA_TYPE** ARGOUTVIEWM_ARRAY2)`
* `(DATA_TYPE** ARGOUTVIEWM_FARRAY2, DIM_TYPE* DIM1, DIM_TYPE* DIM2)`
* `(DIM_TYPE* DIM1, DIM_TYPE* DIM2, DATA_TYPE** ARGOUTVIEWM_FARRAY2)`

3D:

* `(DATA_TYPE** ARGOUTVIEWM_ARRAY3, DIM_TYPE* DIM1, DIM_TYPE* DIM2, DIM_TYPE* DIM3)`
* `(DIM_TYPE* DIM1, DIM_TYPE* DIM2, DIM_TYPE* DIM3, DATA_TYPE** ARGOUTVIEWM_ARRAY3)`
* `(DATA_TYPE** ARGOUTVIEWM_FARRAY3, DIM_TYPE* DIM1, DIM_TYPE* DIM2, DIM_TYPE* DIM3)`
* `(DIM_TYPE* DIM1, DIM_TYPE* DIM2, DIM_TYPE* DIM3, DATA_TYPE** ARGOUTVIEWM_FARRAY3)`

4D:

* `(DATA_TYPE** ARGOUTVIEWM_ARRAY4, DIM_TYPE* DIM1, DIM_TYPE* DIM2, DIM_TYPE* DIM3, DIM_TYPE* DIM4)`
* `(DIM_TYPE* DIM1, DIM_TYPE* DIM2, DIM_TYPE* DIM3, DIM_TYPE* DIM4, DATA_TYPE** ARGOUTVIEWM_ARRAY4)`
* `(DATA_TYPE** ARGOUTVIEWM_FARRAY4, DIM_TYPE* DIM1, DIM_TYPE* DIM2, DIM_TYPE* DIM3, DIM_TYPE* DIM4)`
* `(DIM_TYPE* DIM1, DIM_TYPE* DIM2, DIM_TYPE* DIM3, DIM_TYPE* DIM4, DATA_TYPE** ARGOUTVIEWM_FARRAY4)`

### Output Arrays

The `numpy.i` interface file does not support typemaps for output arrays, for several reasons. First, C/C++ return arguments are limited to a single value. This prevents obtaining dimension information in a general way. Second, arrays with hard-coded lengths are not permitted as return arguments. In other words:

```
double[3] newVector(double x, double y, double z);
```

is not legal C/C++ syntax. Therefore, we cannot provide typemaps of the form:

```
%typemap(out) (TYPE[ANY]);
```

If you run into a situation where a function or method is returning a pointer to an array, your best bet is to write your own version of the function to be wrapped, either with `%extend` for the case of class methods or `%ignore` and `%rename` for the case of functions.
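As an illustration of what such a hand-written replacement exposes, suppose a C library had a hypothetical `double* makeVector(double x, double y, double z)` returning a pointer to a 3-element array. A rewritten wrapper using a 1D argout typemap would present itself to Python as a function returning a new length-3 array; its observable behavior amounts to this sketch (names are invented for illustration):

```python
def new_vector(x, y, z):
    """Python-side behavior of a hypothetical hand-written wrapper
    around a C function that returns a pointer to a 3-element array:
    the result is materialized as a new length-3 sequence."""
    return [float(x), float(y), float(z)]
```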
### Other Common Types: bool

Note that C++ type `bool` is not supported in the list in the [Available Typemaps](#available-typemaps) section. NumPy bools are a single byte, while the C++ `bool` is four bytes (at least on my system). Therefore:

```
%numpy_typemaps(bool, NPY_BOOL, int)
```

will result in typemaps that will produce code that references improper data lengths. You can implement the following macro expansion:

```
%numpy_typemaps(bool, NPY_UINT, int)
```

to fix the data length problem, and [Input Arrays](#input-arrays) will work fine, but [In-Place Arrays](#in-place-arrays) might fail type-checking.

### Other Common Types: complex

Typemap conversions for complex floating-point types are also not supported automatically. This is because Python and NumPy are written in C, which does not have native complex types. Both Python and NumPy implement their own (essentially equivalent) `struct` definitions for complex variables:

```
/* Python */
typedef struct {double real; double imag;} Py_complex;

/* NumPy */
typedef struct {float real, imag;} npy_cfloat;
typedef struct {double real, imag;} npy_cdouble;
```

We could have implemented:

```
%numpy_typemaps(Py_complex , NPY_CDOUBLE, int)
%numpy_typemaps(npy_cfloat , NPY_CFLOAT , int)
%numpy_typemaps(npy_cdouble, NPY_CDOUBLE, int)
```

which would have provided automatic type conversions for arrays of type `Py_complex`, `npy_cfloat` and `npy_cdouble`. However, it seemed unlikely that there would be any independent (non-Python, non-NumPy) application code that people would be using [SWIG](http://www.swig.org) to generate a Python interface to, that also used these definitions for complex types. More likely, these application codes will define their own complex types, or in the case of C++, use `std::complex`. Assuming these data structures are compatible with Python and NumPy complex types, `%numpy_typemap` expansions as above (with the user’s complex type substituted for the first argument) should work.
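The layout compatibility assumed in the last sentence can be spot-checked from Python with `ctypes`: a user-defined two-`double` struct should have the same size and field offsets as the `Py_complex`/`npy_cdouble` layout shown above. A quick sanity sketch (`MyComplex` is an invented stand-in for an application's own complex type):

```python
import ctypes

class MyComplex(ctypes.Structure):
    """An application-defined complex type laid out like npy_cdouble:
    two consecutive C doubles, real then imag."""
    _fields_ = [("real", ctypes.c_double), ("imag", ctypes.c_double)]

# Same total size as two packed doubles, with imag directly after real.
assert ctypes.sizeof(MyComplex) == 2 * ctypes.sizeof(ctypes.c_double)
assert MyComplex.real.offset == 0
assert MyComplex.imag.offset == ctypes.sizeof(ctypes.c_double)
```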
NumPy Array Scalars and SWIG
----------------------------

[SWIG](http://www.swig.org) has sophisticated type checking for numerical types. For example, if your C/C++ routine expects an integer as input, the code generated by [SWIG](http://www.swig.org) will check for both Python integers and Python long integers, and raise an overflow error if the provided Python integer is too big to cast down to a C integer. With the introduction of NumPy array scalars into your Python code, you might conceivably extract an integer from a NumPy array and attempt to pass this to a [SWIG](http://www.swig.org)-wrapped C/C++ function that expects an `int`, but the [SWIG](http://www.swig.org) type checking will not recognize the NumPy array scalar as an integer. (Often, this does in fact work – it depends on whether NumPy recognizes the integer type you are using as inheriting from the Python integer type on the platform you are using. Sometimes, this means that code that works on a 32-bit machine will fail on a 64-bit machine.)

If you get a Python error that looks like the following:

```
TypeError: in method 'MyClass_MyMethod', argument 2 of type 'int'
```

and the argument you are passing is an integer extracted from a NumPy array, then you have stumbled upon this problem. The solution is to modify the [SWIG](http://www.swig.org) type conversion system to accept NumPy array scalars in addition to the standard integer types. Fortunately, this capability has been provided for you. Simply copy the file:

```
pyfragments.swg
```

to the working build directory for your project, and this problem will be fixed. It is suggested that you do this anyway, as it only increases the capabilities of your Python interface.

### Why is There a Second File?

The [SWIG](http://www.swig.org) type checking and conversion system is a complicated combination of C macros, [SWIG](http://www.swig.org) macros, [SWIG](http://www.swig.org) typemaps and [SWIG](http://www.swig.org) fragments.
Fragments are a way to conditionally insert code into your wrapper file if it is needed, and not insert it if not needed. If multiple typemaps require the same fragment, the fragment only gets inserted into your wrapper code once.

There is a fragment for converting a Python integer to a C `long`. There is a different fragment that converts a Python integer to a C `int`, that calls the routine defined in the `long` fragment. We can make the changes we want here by changing the definition for the `long` fragment. [SWIG](http://www.swig.org) determines the active definition for a fragment using a “first come, first served” system. That is, we need to define the fragment for `long` conversions prior to [SWIG](http://www.swig.org) doing it internally. [SWIG](http://www.swig.org) allows us to do this by putting our fragment definitions in the file `pyfragments.swg`. If we were to put the new fragment definitions in `numpy.i`, they would be ignored.

Helper Functions
----------------

The `numpy.i` file contains several macros and routines that it uses internally to build its typemaps. However, these functions may be useful elsewhere in your interface file. These macros and routines are implemented as fragments, which are described briefly in the previous section. If you try to use one or more of the following macros or functions, but your compiler complains that it does not recognize the symbol, then you need to force these fragments to appear in your code using:

```
%fragment("NumPy_Fragments");
```

in your [SWIG](http://www.swig.org) interface file.

### Macros

**is_array(a)**
Evaluates as true if `a` is non-`NULL` and can be cast to a `PyArrayObject*`.

**array_type(a)**
Evaluates to the integer data type code of `a`, assuming `a` can be cast to a `PyArrayObject*`.

**array_numdims(a)**
Evaluates to the integer number of dimensions of `a`, assuming `a` can be cast to a `PyArrayObject*`.
**array_dimensions(a)**
Evaluates to an array of type `npy_intp` and length `array_numdims(a)`, giving the lengths of all of the dimensions of `a`, assuming `a` can be cast to a `PyArrayObject*`.

**array_size(a,i)**
Evaluates to the `i`-th dimension size of `a`, assuming `a` can be cast to a `PyArrayObject*`.

**array_strides(a)**
Evaluates to an array of type `npy_intp` and length `array_numdims(a)`, giving the strides of all of the dimensions of `a`, assuming `a` can be cast to a `PyArrayObject*`. A stride is the distance in bytes between an element and its immediate neighbor along the same axis.

**array_stride(a,i)**
Evaluates to the `i`-th stride of `a`, assuming `a` can be cast to a `PyArrayObject*`.

**array_data(a)**
Evaluates to a pointer of type `void*` that points to the data buffer of `a`, assuming `a` can be cast to a `PyArrayObject*`.

**array_descr(a)**
Returns a borrowed reference to the dtype property (`PyArray_Descr*`) of `a`, assuming `a` can be cast to a `PyArrayObject*`.

**array_flags(a)**
Returns an integer representing the flags of `a`, assuming `a` can be cast to a `PyArrayObject*`.

**array_enableflags(a,f)**
Sets the flag represented by `f` of `a`, assuming `a` can be cast to a `PyArrayObject*`.

**array_is_contiguous(a)**
Evaluates as true if `a` is a contiguous array. Equivalent to `(PyArray_ISCONTIGUOUS(a))`.

**array_is_native(a)**
Evaluates as true if the data buffer of `a` uses native byte order. Equivalent to `(PyArray_ISNOTSWAPPED(a))`.

**array_is_fortran(a)**
Evaluates as true if `a` is FORTRAN ordered.

### Routines

**pytype_string()**

Return type: `const char*`

Arguments:

* `PyObject* py_obj`, a general Python object.

Return a string describing the type of `py_obj`.

**typecode_string()**

Return type: `const char*`

Arguments:

* `int typecode`, a NumPy integer typecode.

Return a string describing the type corresponding to the NumPy `typecode`.
**type_match()**

Return type: `int`

Arguments:

* `int actual_type`, the NumPy typecode of a NumPy array.
* `int desired_type`, the desired NumPy typecode.

Make sure that `actual_type` is compatible with `desired_type`. For example, this allows character and byte types, or int and long types, to match. This is now equivalent to `PyArray_EquivTypenums()`.

**obj_to_array_no_conversion()**

Return type: `PyArrayObject*`

Arguments:

* `PyObject* input`, a general Python object.
* `int typecode`, the desired NumPy typecode.

Cast `input` to a `PyArrayObject*` if legal, and ensure that it is of type `typecode`. If `input` cannot be cast, or the `typecode` is wrong, set a Python error and return `NULL`.

**obj_to_array_allow_conversion()**

Return type: `PyArrayObject*`

Arguments:

* `PyObject* input`, a general Python object.
* `int typecode`, the desired NumPy typecode of the resulting array.
* `int* is_new_object`, returns a value of 0 if no conversion performed, else 1.

Convert `input` to a NumPy array with the given `typecode`. On success, return a valid `PyArrayObject*` with the correct type. On failure, the Python error string will be set and the routine returns `NULL`.

**make_contiguous()**

Return type: `PyArrayObject*`

Arguments:

* `PyArrayObject* ary`, a NumPy array.
* `int* is_new_object`, returns a value of 0 if no conversion performed, else 1.
* `int min_dims`, minimum allowable dimensions.
* `int max_dims`, maximum allowable dimensions.

Check to see if `ary` is contiguous. If so, return the input pointer and flag it as not a new object. If it is not contiguous, create a new `PyArrayObject*` using the original data, flag it as a new object and return the pointer.

**make_fortran()**

Return type: `PyArrayObject*`

Arguments:

* `PyArrayObject* ary`, a NumPy array.
* `int* is_new_object`, returns a value of 0 if no conversion performed, else 1.

Check to see if `ary` is Fortran contiguous. If so, return the input pointer and flag it as not a new object.
If it is not Fortran contiguous, create a new `PyArrayObject*` using the original data, flag it as a new object and return the pointer.

**obj_to_array_contiguous_allow_conversion()**

Return type: `PyArrayObject*`

Arguments:

* `PyObject* input`, a general Python object.
* `int typecode`, the desired NumPy typecode of the resulting array.
* `int* is_new_object`, returns a value of 0 if no conversion performed, else 1.

Convert `input` to a contiguous `PyArrayObject*` of the specified type. If the input object is not a contiguous `PyArrayObject*`, a new one will be created and the new object flag will be set.

**obj_to_array_fortran_allow_conversion()**

Return type: `PyArrayObject*`

Arguments:

* `PyObject* input`, a general Python object.
* `int typecode`, the desired NumPy typecode of the resulting array.
* `int* is_new_object`, returns a value of 0 if no conversion performed, else 1.

Convert `input` to a Fortran contiguous `PyArrayObject*` of the specified type. If the input object is not a Fortran contiguous `PyArrayObject*`, a new one will be created and the new object flag will be set.

**require_contiguous()**

Return type: `int`

Arguments:

* `PyArrayObject* ary`, a NumPy array.

Test whether `ary` is contiguous. If so, return 1. Otherwise, set a Python error and return 0.

**require_native()**

Return type: `int`

Arguments:

* `PyArrayObject* ary`, a NumPy array.

Require that `ary` is not byte-swapped. If the array is not byte-swapped, return 1. Otherwise, set a Python error and return 0.

**require_dimensions()**

Return type: `int`

Arguments:

* `PyArrayObject* ary`, a NumPy array.
* `int exact_dimensions`, the desired number of dimensions.

Require `ary` to have a specified number of dimensions. If the array has the specified number of dimensions, return 1. Otherwise, set a Python error and return 0.

**require_dimensions_n()**

Return type: `int`

Arguments:

* `PyArrayObject* ary`, a NumPy array.
* `int* exact_dimensions`, an array of integers representing acceptable numbers of dimensions.
* `int n`, the length of `exact_dimensions`.

Require `ary` to have one of a list of specified numbers of dimensions. If the array has one of the specified numbers of dimensions, return 1. Otherwise, set the Python error string and return 0.

**require_size()**

Return type: `int`

Arguments:

* `PyArrayObject* ary`, a NumPy array.
* `npy_intp* size`, an array representing the desired lengths of each dimension.
* `int n`, the length of `size`.

Require `ary` to have a specified shape. If the array has the specified shape, return 1. Otherwise, set the Python error string and return 0.

**require_fortran()**

Return type: `int`

Arguments:

* `PyArrayObject* ary`, a NumPy array.

Require the given `PyArrayObject` to be Fortran ordered. If the `PyArrayObject` is already Fortran ordered, do nothing. Else, set the Fortran ordering flag and recompute the strides.

Beyond the Provided Typemaps
----------------------------

There are many C or C++ array/NumPy array situations not covered by a simple `%include "numpy.i"` and subsequent `%apply` directives.

### A Common Example

Consider a reasonable prototype for a dot product function:

```
double dot(int len, double* vec1, double* vec2);
```

The Python interface that we want is:

```
def dot(vec1, vec2):
    """
    dot(PyObject,PyObject) -> double
    """
```

The problem here is that there is one dimension argument and two array arguments, and our typemaps are set up for dimensions that apply to a single array (in fact, [SWIG](http://www.swig.org) does not provide a mechanism for associating `len` with `vec2` that takes two Python input arguments).
The recommended solution is the following:

```
%apply (int DIM1, double* IN_ARRAY1) {(int len1, double* vec1),
                                      (int len2, double* vec2)}
%rename (dot) my_dot;
%exception my_dot {
    $action
    if (PyErr_Occurred()) SWIG_fail;
}
%inline %{
double my_dot(int len1, double* vec1, int len2, double* vec2) {
    if (len1 != len2) {
        PyErr_Format(PyExc_ValueError,
                     "Arrays of lengths (%d,%d) given",
                     len1, len2);
        return 0.0;
    }
    return dot(len1, vec1, vec2);
}
%}
```

If the header file that contains the prototype for `double dot()` also contains other prototypes that you want to wrap, so that you need to `%include` this header file, then you will also need a `%ignore dot;` directive, placed after the `%rename` and before the `%include` directives. Or, if the function in question is a class method, you will want to use `%extend` rather than `%inline` in addition to `%ignore`.

**A note on error handling:** Note that `my_dot` returns a `double` but that it can also raise a Python error. The resulting wrapper function will return a Python float representation of 0.0 when the vector lengths do not match. Since this is not `NULL`, the Python interpreter will not know to check for an error. For this reason, we add the `%exception` directive above for `my_dot` to get the behavior we want (note that `$action` is a macro that gets expanded to a valid call to `my_dot`). In general, you will probably want to write a [SWIG](http://www.swig.org) macro to perform this task.

### Other Situations

There are other wrapping situations in which `numpy.i` may be helpful when you encounter them.

* In some situations, it is possible that you could use the `%numpy_typemaps` macro to implement typemaps for your own types. See the [Other Common Types: bool](#other-common-types-bool) or [Other Common Types: complex](#other-common-types-complex) sections for examples.
  Another situation is if your dimensions are of a type other than `int` (say `long` for example):

  ```
  %numpy_typemaps(double, NPY_DOUBLE, long)
  ```

* You can use the code in `numpy.i` to write your own typemaps. For example, if you had a five-dimensional array as a function argument, you could cut-and-paste the appropriate four-dimensional typemaps into your interface file. The modifications for the fourth dimension would be trivial.
* Sometimes, the best approach is to use the `%extend` directive to define new methods for your classes (or overload existing ones) that take a `PyObject*` (that either is or can be converted to a `PyArrayObject*`) instead of a pointer to a buffer. In this case, the helper routines in `numpy.i` can be very useful.
* Writing typemaps can be a bit nonintuitive. If you have specific questions about writing [SWIG](http://www.swig.org) typemaps for NumPy, the developers of `numpy.i` do monitor the [Numpy-discussion](mailto:Numpy-discussion%40python.org) and [Swig-user](mailto:Swig-user%40lists.sourceforge.net) mail lists.

### A Final Note

When you use the `%apply` directive, as is usually necessary to use `numpy.i`, it will remain in effect until you tell [SWIG](http://www.swig.org) that it shouldn’t be. If the arguments to the functions or methods that you are wrapping have common names, such as `length` or `vector`, these typemaps may get applied in situations you do not expect or want. Therefore, it is always a good idea to add a `%clear` directive after you are done with a specific typemap:

```
%apply (double* IN_ARRAY1, int DIM1) {(double* vector, int length)}
%include "my_header.h"
%clear (double* vector, int length);
```

In general, you should target these typemap signatures specifically where you want them, and then clear them after you are done.
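As a sanity check for the behavior described in A Common Example above, the length check performed by the `my_dot` wrapper can be emulated in pure Python. This is only a sketch of the expected Python-level behavior; the real check happens in the generated C wrapper via `PyErr_Format` and `%exception`:

```python
def dot(vec1, vec2):
    """Pure-Python sketch of the length check done by the my_dot wrapper."""
    if len(vec1) != len(vec2):
        # The C wrapper sets a Python error with PyErr_Format and relies on
        # the %exception directive to propagate it; here we raise directly.
        raise ValueError(
            "Arrays of lengths (%d,%d) given" % (len(vec1), len(vec2))
        )
    return sum(a * b for a, b in zip(vec1, vec2))

print(dot([1.0, 2.0], [3.0, 4.0]))  # 11.0
```

Calling the sketch with mismatched lengths raises `ValueError`, which is the Python-level outcome the `%exception` directive arranges for the wrapped function.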
Summary
-------

Out of the box, `numpy.i` provides typemaps that support conversion between NumPy arrays and C arrays:

* That can be one of 12 different scalar types: `signed char`, `unsigned char`, `short`, `unsigned short`, `int`, `unsigned int`, `long`, `unsigned long`, `long long`, `unsigned long long`, `float` and `double`.
* That support 74 different argument signatures for each data type, including:
  + One-dimensional, two-dimensional, three-dimensional and four-dimensional arrays.
  + Input-only, in-place, argout, argoutview, and memory managed argoutview behavior.
  + Hard-coded dimensions, data-buffer-then-dimensions specification, and dimensions-then-data-buffer specification.
  + Both C-ordering (“last dimension fastest”) or Fortran-ordering (“first dimension fastest”) support for 2D, 3D and 4D arrays.

The `numpy.i` interface file also provides additional tools for wrapper developers, including:

* A [SWIG](http://www.swig.org) macro (`%numpy_typemaps`) with three arguments for implementing the 74 argument signatures for the user’s choice of (1) C data type, (2) NumPy data type (assuming they match), and (3) dimension type.
* Fourteen C macros and fifteen C functions that can be used to write specialized typemaps, extensions, or inlined functions that handle cases not covered by the provided typemaps. Note that the macros and functions are coded specifically to work with the NumPy C/API regardless of NumPy version number, both before and after the deprecation of some aspects of the API after version 1.6.

© 2005–2022 NumPy Developers. Licensed under the 3-clause BSD License.
<https://numpy.org/doc/1.23/reference/swig.interface-file.html>

Testing the numpy.i Typemaps
============================

Introduction
------------

Writing tests for the `numpy.i` [SWIG](http://www.swig.org) interface file is a combinatorial headache.
At present, 12 different data types are supported, each with 74 different argument signatures, for a total of 888 typemaps supported “out of the box”. Each of these typemaps, in turn, might require several unit tests in order to verify expected behavior for both proper and improper inputs. Currently, this results in more than 1,000 individual unit tests executed when `make test` is run in the `numpy/tools/swig` subdirectory.

To facilitate this many similar unit tests, some high-level programming techniques are employed, including C and [SWIG](http://www.swig.org) macros, as well as Python inheritance. The purpose of this document is to describe the testing infrastructure employed to verify that the `numpy.i` typemaps are working as expected.

Testing Organization
--------------------

There are three independent testing frameworks supported, for one-, two-, and three-dimensional arrays respectively. For one-dimensional arrays, there are two C++ files, a header and a source, named:

```
Vector.h
Vector.cxx
```

that contain prototypes and code for a variety of functions that have one-dimensional arrays as function arguments. The file:

```
Vector.i
```

is a [SWIG](http://www.swig.org) interface file that defines a python module `Vector` that wraps the functions in `Vector.h` while utilizing the typemaps in `numpy.i` to correctly handle the C arrays.

The `Makefile` calls `swig` to generate `Vector.py` and `Vector_wrap.cxx`, and also executes the `setup.py` script that compiles `Vector_wrap.cxx` and links together the extension module `_Vector.so` or `_Vector.dylib`, depending on the platform. This extension module and the proxy file `Vector.py` are both placed in a subdirectory under the `build` directory.

The actual testing takes place with a Python script named:

```
testVector.py
```

that uses the standard Python library module `unittest`, which performs several tests of each function defined in `Vector.h` for each data type supported.
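The typemap count quoted in the introduction above is just the product of the two figures, which can be spelled out explicitly:

```python
# Counts quoted in the introduction: 12 scalar types, 74 signatures each
num_scalar_types = 12
num_signatures = 74

total_typemaps = num_scalar_types * num_signatures
print(total_typemaps)  # 888
```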
Two-dimensional arrays are tested in exactly the same manner. The above description applies, but with `Matrix` substituted for `Vector`. For three-dimensional tests, substitute `Tensor` for `Vector`. For four-dimensional tests, substitute `SuperTensor` for `Vector`. For flat in-place array tests, substitute `Flat` for `Vector`. For the descriptions that follow, we will reference the `Vector` tests, but the same information applies to `Matrix`, `Tensor` and `SuperTensor` tests.

The command `make test` will ensure that all of the test software is built and then run all of the test scripts.

Testing Header Files
--------------------

`Vector.h` is a C++ header file that defines a C macro called `TEST_FUNC_PROTOS` that takes two arguments: `TYPE`, which is a data type name such as `unsigned int`; and `SNAME`, which is a short name for the same data type with no spaces, e.g. `uint`. This macro defines several function prototypes that have the prefix `SNAME` and have at least one argument that is an array of type `TYPE`. Those functions that have return arguments return a `TYPE` value.

`TEST_FUNC_PROTOS` is then implemented for all of the data types supported by `numpy.i`:

* `signed char`
* `unsigned char`
* `short`
* `unsigned short`
* `int`
* `unsigned int`
* `long`
* `unsigned long`
* `long long`
* `unsigned long long`
* `float`
* `double`

Testing Source Files
--------------------

`Vector.cxx` is a C++ source file that implements compilable code for each of the function prototypes specified in `Vector.h`. It defines a C macro `TEST_FUNCS` that has the same arguments and works in the same way as `TEST_FUNC_PROTOS` does in `Vector.h`. `TEST_FUNCS` is implemented for each of the 12 data types as above.

Testing SWIG Interface Files
----------------------------

`Vector.i` is a [SWIG](http://www.swig.org) interface file that defines python module `Vector`. It follows the conventions for using `numpy.i` as described in this chapter.
It defines a [SWIG](http://www.swig.org) macro `%apply_numpy_typemaps` that has a single argument `TYPE`. It uses the [SWIG](http://www.swig.org) directive `%apply` to apply the provided typemaps to the argument signatures found in `Vector.h`. This macro is then implemented for all of the data types supported by `numpy.i`. It then does a `%include "Vector.h"` to wrap all of the function prototypes in `Vector.h` using the typemaps in `numpy.i`.

Testing Python Scripts
----------------------

After `make` is used to build the testing extension modules, `testVector.py` can be run to execute the tests. As with other scripts that use `unittest` to facilitate unit testing, `testVector.py` defines a class that inherits from `unittest.TestCase`:

```
class VectorTestCase(unittest.TestCase):
```

However, this class is not run directly. Rather, it serves as a base class to several other python classes, each one specific to a particular data type. The `VectorTestCase` class stores two strings for typing information:

**self.typeStr**

A string that matches one of the `SNAME` prefixes used in `Vector.h` and `Vector.cxx`. For example, `"double"`.

**self.typeCode**

A short (typically single-character) string that represents a data type in numpy and corresponds to `self.typeStr`. For example, if `self.typeStr` is `"double"`, then `self.typeCode` should be `"d"`.

Each test defined by the `VectorTestCase` class extracts the python function it is trying to test by accessing the `Vector` module’s dictionary:

```
length = Vector.__dict__[self.typeStr + "Length"]
```

In the case of double precision tests, this will return the python function `Vector.doubleLength`.
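The dictionary lookup described above can be exercised without the compiled extension by standing in a fake module object. The `Vector` stand-in and its `doubleLength` function below are hypothetical placeholders, not the real SWIG-generated module:

```python
import types

# Hypothetical stand-in for the SWIG-generated Vector extension module
Vector = types.SimpleNamespace(doubleLength=lambda vec: len(vec))

typeStr = "double"  # what self.typeStr holds in a double-precision test case
length = Vector.__dict__[typeStr + "Length"]

print(length([1.0, 2.0, 3.0]))  # 3
```

Because the wrapped functions share a common `SNAME` prefix, a single test method written against `self.typeStr` exercises every data type once the per-type subclasses override that attribute.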
We then define a new test case class for each supported data type with a short definition such as:

```
class doubleTestCase(VectorTestCase):
    def __init__(self, methodName="runTest"):
        VectorTestCase.__init__(self, methodName)
        self.typeStr = "double"
        self.typeCode = "d"
```

Each of these 12 classes is collected into a `unittest.TestSuite`, which is then executed. Errors and failures are summed together and returned as the exit argument. Any non-zero result indicates that at least one test did not pass.

<https://numpy.org/doc/1.23/reference/swig.testing.html>

Git for development
===================

These pages describe a general [git](https://git-scm.com/) and [github](https://github.com/numpy/numpy) workflow. This is not a comprehensive [git](https://git-scm.com/) reference. It’s tailored to the [github](https://github.com/numpy/numpy) hosting service. You may well find better or quicker ways of getting stuff done with [git](https://git-scm.com/), but these should get you started. For general resources for learning [git](https://git-scm.com/) see [Additional Git Resources](git_resources#git-resources).
Have a look at the [github](https://github.com/numpy/numpy) install help pages available from [github help](https://help.github.com).

Contents:

* [Install git](git_intro)
* [Get the local copy of the code](following_latest)
* [Updating the code](following_latest#updating-the-code)
* [Setting up git for NumPy development](development_setup)
  + [Install git](development_setup#install-git)
  + [Create a GitHub account](development_setup#create-a-github-account)
  + [Create a NumPy fork](development_setup#create-a-numpy-fork)
  + [Look it over](development_setup#look-it-over)
  + [Optional: set up SSH keys to avoid passwords](development_setup#optional-set-up-ssh-keys-to-avoid-passwords)
* [Git configuration](configure_git)
  + [Overview](configure_git#overview)
  + [In detail](configure_git#in-detail)
* [Two and three dots in difference specs](dot2_dot3)
* [Additional Git Resources](git_resources)
  + [Tutorials and summaries](git_resources#tutorials-and-summaries)
  + [Advanced git workflow](git_resources#advanced-git-workflow)
  + [Manual pages online](git_resources#manual-pages-online)

<https://numpy.org/doc/1.23/dev/gitwash/index.html>

Setting up and using your development environment
=================================================

Recommended development setup
-----------------------------

Since NumPy contains parts written in C and Cython that need to be compiled before use, make sure you have the necessary compilers and Python development headers installed - see [Building from source](../user/building#building-from-source). Building NumPy as of version `1.17` requires a C99 compliant compiler.

Having compiled code also means that importing NumPy from the development sources needs some additional steps, which are explained below. For the rest of this chapter we assume that you have set up your git repo as described in [Git for development](gitwash/index#using-git).
Note

If you are having trouble building NumPy from source or setting up your local development environment, you can try to [build NumPy with Gitpod](development_gitpod#development-gitpod).

Testing builds
--------------

To build the development version of NumPy and run tests, spawn interactive shells with the Python import paths properly set up, etc., do one of:

```
$ python runtests.py -v
$ python runtests.py -v -s random
$ python runtests.py -v -t numpy/core/tests/test_nditer.py::test_iter_c_order
$ python runtests.py --ipython
$ python runtests.py --python somescript.py
$ python runtests.py --bench
$ python runtests.py -g -m full
```

This builds NumPy first, so the first time it may take a few minutes. If you specify `-n`, the tests are run against the version of NumPy (if any) found on the current PYTHONPATH.

When specifying a target using `-s`, `-t`, or `--python`, additional arguments may be forwarded to the target embedded by `runtests.py` by passing the extra arguments after a bare `--`. For example, to run a test method with the `--pdb` flag forwarded to the target, run the following:

```
$ python runtests.py -t numpy/tests/test_scripts.py::test_f2py -- --pdb
```

When using pytest as a target (the default), you can [match test names using python operators](https://docs.pytest.org/en/latest/usage.html#specifying-tests-selecting-tests) by passing the `-k` argument to pytest:

```
$ python runtests.py -v -t numpy/core/tests/test_multiarray.py -- -k "MatMul and not vector"
```

Note

Remember that all tests of NumPy should pass before committing your changes.

Using `runtests.py` is the recommended approach to running tests. There are also a number of alternatives to it, for example in-place build or installing to a virtualenv or a conda environment. See the FAQ below for details.

Note

Some of the tests in the test suite require a large amount of memory, and are skipped if your system does not have enough.
To override the automatic detection of available memory, set the environment variable `NPY_AVAILABLE_MEM`, for example `NPY_AVAILABLE_MEM=32GB`, or using pytest `--available-memory=32GB` target option.

Building in-place
-----------------

For development, you can set up an in-place build so that changes made to `.py` files have effect without rebuild. First, run:

```
$ python setup.py build_ext -i
```

This allows you to import the in-place built NumPy *from the repo base directory only*. If you want the in-place build to be visible outside that base dir, you need to point your `PYTHONPATH` environment variable to this directory. Some IDEs ([Spyder](https://www.spyder-ide.org/) for example) have utilities to manage `PYTHONPATH`. On Linux and OSX, you can run the command:

```
$ export PYTHONPATH=$PWD
```

and on Windows:

```
$ set PYTHONPATH=/path/to/numpy
```

Now editing a Python source file in NumPy allows you to immediately test and use your changes (in `.py` files), by simply restarting the interpreter.

Note that another way to do an in-place build visible outside the repo base dir is with `python setup.py develop`. Instead of adjusting `PYTHONPATH`, this installs a `.egg-link` file into your site-packages as well as adjusts the `easy-install.pth` there, so it’s a more permanent (and magical) operation.

Other build options
-------------------

Build options can be discovered by running any of:

```
$ python setup.py --help
$ python setup.py --help-commands
```

It’s possible to do a parallel build with `numpy.distutils` with the `-j` option; see [Parallel builds](../user/building#parallel-builds) for more details.

A similar approach to in-place builds and use of `PYTHONPATH` but outside the source tree is to use:

```
$ pip install . --prefix /some/owned/folder
$ export PYTHONPATH=/some/owned/folder/lib/python3.4/site-packages
```

NumPy uses a series of tests to probe the compiler and libc libraries for functions.
The results are stored in `_numpyconfig.h` and `config.h` files using `HAVE_XXX` definitions. These tests are run during the `build_src` phase of the `_multiarray_umath` module in the `generate_config_h` and `generate_numpyconfig_h` functions. Since the output of these calls includes many compiler warnings and errors, by default it is run quietly. If you wish to see this output, you can run the `build_src` stage verbosely:

```
$ python setup.py build_src -v
```

Using virtual environments
--------------------------

A frequently asked question is “How do I set up a development version of NumPy in parallel to a released version that I use to do my job/research?”. One simple way to achieve this is to install the released version in site-packages, by using pip or conda for example, and set up the development version in a virtual environment.

If you use conda, we recommend creating a separate virtual environment for numpy development using the `environment.yml` file in the root of the repo (this will create the environment and install all development dependencies at once):

```
$ conda env create -f environment.yml  # `mamba` works too for this command
$ conda activate numpy-dev
```

If you installed Python some other way than conda, first install [virtualenv](http://www.virtualenv.org/) (optionally use [virtualenvwrapper](http://www.doughellmann.com/projects/virtualenvwrapper/)), then create your virtualenv (named `numpy-dev` here) with:

```
$ virtualenv numpy-dev
```

Now, whenever you want to switch to the virtual environment, you can use the command `source numpy-dev/bin/activate`, and `deactivate` to exit from the virtual environment and back to your previous shell.

Running tests
-------------

Besides using `runtests.py`, there are various ways to run the tests.
Inside the interpreter, tests can be run like this:

```
>>> np.test()
>>> np.test('full')   # Also run tests marked as slow
>>> np.test('full', verbose=2)   # Additionally print test name/file
```

An example of a successful test run looks like:

```
4686 passed, 362 skipped, 9 xfailed, 5 warnings in 213.99 seconds
```

Or a similar way from the command line:

```
$ python -c "import numpy as np; np.test()"
```

Tests can also be run with `pytest numpy`; however, then the NumPy-specific plugin is not found, which causes strange side effects.

Running individual test files can be useful; it’s much faster than running the whole test suite or that of a whole module (example: `np.random.test()`). This can be done with:

```
$ python path_to_testfile/test_file.py
```

That also takes extra arguments, like `--pdb` which drops you into the Python debugger when a test fails or an exception is raised.

Running tests with [tox](https://tox.readthedocs.io/) is also supported. For example, to build NumPy and run the test suite with Python 3.9, use:

```
$ tox -e py39
```

For more extensive information, see [Testing Guidelines](../reference/testing#testing-guidelines).

*Note: do not run the tests from the root directory of your numpy git repo without ``runtests.py``, that will result in strange test errors.*

Running Linting
---------------

Lint checks can be performed on newly added lines of Python code.

Install all dependent packages using pip:

```
$ python -m pip install -r linter_requirements.txt
```

To run lint checks before committing new code, run:

```
$ python runtests.py --lint uncommitted
```

To check all changes in newly added Python code of the current branch against the target branch, run:

```
$ python runtests.py --lint main
```

If there are no errors, the script exits with no message.
In case of errors:

```
$ python runtests.py --lint main
./numpy/core/tests/test_scalarmath.py:34:5: E303 too many blank lines (3)
1       E303 too many blank lines (3)
```

It is advisable to run lint checks before pushing commits to a remote branch since the linter runs as part of the CI pipeline.

For more details on Style Guidelines:

* [Python Style Guide](https://www.python.org/dev/peps/pep-0008/)
* [C Style Guide](https://numpy.org/neps/nep-0045-c_style_guide.html)

Rebuilding & cleaning the workspace
-----------------------------------

Rebuilding NumPy after making changes to compiled code can be done with the same build command as you used previously - only the changed files will be re-built. Doing a full build, which sometimes is necessary, requires cleaning the workspace first. The standard way of doing this is (*note: deletes any uncommitted files!*):

```
$ git clean -xdf
```

When you want to discard all changes and go back to the last commit in the repo, use one of:

```
$ git checkout .
$ git reset --hard
```

Debugging
---------

Another frequently asked question is “How do I debug C code inside NumPy?”. First, ensure that you have gdb installed on your system with the Python extensions (often the default on Linux). You can see which version of Python is running inside gdb to verify your setup:

```
(gdb) python
>import sys
>print(sys.version_info)
>end
sys.version_info(major=3, minor=7, micro=0, releaselevel='final', serial=0)
```

Next you need to write a Python script that invokes the C code whose execution you want to debug. For instance `mytest.py`:

```
import numpy as np
x = np.arange(5)
np.empty_like(x)
```

Now, you can run:

```
$ gdb --args python runtests.py -g --python mytest.py
```

And then in the debugger:

```
(gdb) break array_empty_like
(gdb) run
```

The execution will now stop at the corresponding C function and you can step through it as usual. A number of useful Python-specific commands are available.
For example, to see where in the Python code you are, use `py-list`. For more details, see [DebuggingWithGdb](https://wiki.python.org/moin/DebuggingWithGdb). Here are some commonly used commands:

* `list`: List specified function or line.
* `next`: Step program, proceeding through subroutine calls.
* `step`: Step program until it reaches a different source line.
* `print`: Print value of expression EXP.

Instead of plain `gdb` you can of course use your favourite alternative debugger; run it on the python binary with arguments `runtests.py -g --python mytest.py`.

Building NumPy with a Python built with debug support (on Linux distributions typically packaged as `python-dbg`) is highly recommended.

Understanding the code & getting started
----------------------------------------

The best strategy to better understand the code base is to pick something you want to change and start reading the code to figure out how it works. When in doubt, you can ask questions on the mailing list. It is perfectly okay if your pull requests aren’t perfect, the community is always happy to help. As a volunteer project, things do sometimes get dropped and it’s totally fine to ping us if something has sat without a response for about two to four weeks.

So go ahead and pick something that annoys or confuses you about NumPy, experiment with the code, hang around for discussions or go through the reference documents to try to fix it. Things will fall into place and soon you’ll have a pretty good understanding of the project as a whole. Good Luck!
<https://numpy.org/doc/1.23/dev/development_environment.html>

Using Gitpod for NumPy development
==================================

This section of the documentation will guide you through:

* using GitPod for your NumPy development environment
* creating a personal fork of the NumPy repository on GitHub
* a quick tour of Gitpod and VSCode
* working on the NumPy documentation in Gitpod

Gitpod
------

[Gitpod](https://www.gitpod.io/) is an open-source platform for automated and ready-to-code development environments. It enables developers to describe their dev environment as code and start instant and fresh development environments for each new task directly from the browser. This reduces the need to install local development environments and deal with incompatible dependencies.

Gitpod GitHub integration
-------------------------

To be able to use Gitpod, you will need to have the Gitpod app installed on your GitHub account, so if you do not have an account yet, you will need to create one first.

Head over to the [Gitpod](https://www.gitpod.io/) website and click on the **Continue with GitHub** button. You will be redirected to the GitHub authentication page. You will then be asked to install the [Gitpod GitHub app](https://github.com/marketplace/gitpod-io). Make sure to select **All repositories** access option to avoid issues with permissions later on. Click on the green **Install** button.

![Gitpod repository access and installation screenshot]

This will install the necessary hooks for the integration.

Forking the NumPy repository
----------------------------

The best way to work on NumPy as a contributor is by making a fork of the repository first.

1. Browse to the [NumPy repository on GitHub](https://github.com/NumPy/NumPy) and [create your own fork](https://help.github.com/en/articles/fork-a-repo).
2. Browse to your fork. Your fork will have a URL like <https://github.com/melissawm/NumPy>, except with your GitHub username in place of `melissawm`.
Starting Gitpod
---------------

Once you have authenticated to Gitpod through GitHub, you can install the [Gitpod browser extension](https://www.gitpod.io/docs/browser-extension) which will add a **Gitpod** button next to the **Code** button in the repository:

![NumPy repository with Gitpod button screenshot]

1. If you install the extension, you can click the **Gitpod** button to start a new workspace.
2. Alternatively, if you do not want to install the browser extension, you can visit <https://gitpod.io/#https://github.com/USERNAME/NumPy> replacing `USERNAME` with your GitHub username.
3. In both cases, this will open a new tab on your web browser and start building your development environment. Please note this can take a few minutes.
4. Once the build is complete, you will be directed to your workspace, including the VSCode editor and all the dependencies you need to work on NumPy. The first time you start your workspace, you will notice that there might be some actions running. This will ensure that you have a development version of NumPy installed and that the docs are being pre-built for you.
5. When your workspace is ready, you can [test the build](development_environment#testing-builds) by entering:

```
$ python runtests.py -v
```

`runtests.py` is another script in the NumPy root directory. It runs a suite of tests that make sure NumPy is working as it should, and `-v` activates the `--verbose` option to show all the test output.

Quick workspace tour
--------------------

Gitpod uses VSCode as the editor. If you have not used this editor before, you can check the Getting started [VSCode docs](https://code.visualstudio.com/docs/getstarted/tips-and-tricks) to familiarize yourself with it.

Your workspace will look similar to the image below:

![Gitpod workspace screenshot](https://numpy.org/doc/1.23/_images/gitpod-workspace.png)

Note

By default, VSCode initializes with a light theme.
You can change to a dark theme with the keyboard shortcut `Cmd-K Cmd-T` on Mac or `Ctrl-K Ctrl-T` on Linux and Windows.

We have marked some important sections in the editor:

1. Your current Python interpreter - by default, this is `numpy-dev` and should be displayed in the status bar and on your terminal. You do not need to activate the conda environment as this will always be activated for you.
2. Your current branch is always displayed in the status bar. You can also use this button to change or create branches.
3. GitHub Pull Requests extension - you can use this to work with Pull Requests from your workspace.
4. Marketplace extensions - we have added some essential extensions to the NumPy Gitpod. Still, you can also install other extensions or syntax highlighting themes for your user, and these will be preserved for you.
5. Your workspace directory - by default, it is `/workspace/numpy`. **Do not change this** as this is the only directory preserved in Gitpod.

We have also pre-installed a few tools and VSCode extensions to help with the development experience:

* [GitHub CLI](https://cli.github.com/)
* [VSCode rst extension](https://marketplace.visualstudio.com/items?itemName=lextudio.restructuredtext)
* [VSCode Live server extension](https://marketplace.visualstudio.com/items?itemName=ritwickdey.LiveServer)
* [VSCode Gitlens extension](https://marketplace.visualstudio.com/items?itemName=eamodio.gitlens)
* [VSCode autodocstrings extension](https://marketplace.visualstudio.com/items?itemName=njpwerner.autodocstring)
* [VSCode Git Graph extension](https://marketplace.visualstudio.com/items?itemName=mhutchie.git-graph)

Development workflow with Gitpod
--------------------------------

The [Development workflow](development_workflow#development-workflow) section of this documentation contains information regarding the NumPy development workflow. Make sure to check this before working on your contributions.
When using Gitpod, git is pre-configured for you: 1. You do not need to configure your git username and email, as this was done for you when you authenticated through GitHub. You can check the git configuration with the command `git config --list` in your terminal. 2. As you started your workspace from your own NumPy fork, you will by default have both `upstream` and `origin` added as remotes. You can verify this by typing `git remote` on your terminal or by clicking on the **branch name** on the status bar (see image below). ![Gitpod workspace branches plugin screenshot](https://numpy.org/doc/1.23/_images/NumPy-gitpod-branches.png) Rendering the NumPy documentation --------------------------------- You can find the detailed documentation on how rendering the documentation with Sphinx works in the [Building the NumPy API and reference docs](howto_build_docs#howto-build-docs) section. The documentation is pre-built during your workspace initialization. So once this task is completed, you have two main options to render the documentation in Gitpod. ### Option 1: Using Liveserve 1. View the documentation in `NumPy/doc/build/html`. You can start with `index.html` and browse, or you can jump straight to the file you’re interested in. 2. To see the rendered version of a page, you can right-click on the `.html` file and click on **Open with Live Serve**. Alternatively, you can open the file in the editor and click on the **Go live** button on the status bar. ![Gitpod workspace VSCode start live serve screenshot] 3. A simple browser will open to the right-hand side of the editor. We recommend closing it and clicking on the **Open in browser** button in the pop-up. 4. To stop the server, click on the **Port: 5500** button on the status bar. ### Option 2: Using the rst extension A quick and easy way to see live changes in a `.rst` file as you work on it is to use the rst extension with docutils.
Note This will generate a simple live preview of the document without the `html` theme, and some backlinks might not be added correctly. But it is an easy and lightweight way to get instant feedback on your work. 1. Open any of the source documentation files located in `doc/source` in the editor. 2. Open VSCode Command Palette with `Cmd-Shift-P` on Mac or `Ctrl-Shift-P` on Linux and Windows. Start typing “restructured” and choose either “Open preview” or “Open preview to the Side”. ![Gitpod workspace VSCode open rst screenshot] 3. As you work on the document, you will see a live rendering of it on the editor. ![Gitpod workspace VSCode rst rendering screenshot](https://numpy.org/doc/1.23/_images/rst-rendering.png) If you want to see the final output with the `html` theme, you will need to rebuild the docs with `make html` and use Live Serve as described in option 1. FAQs and troubleshooting ------------------------ ### How long is my Gitpod workspace kept for? Your stopped workspace will be kept for 14 days and deleted afterwards if you do not use it. ### Can I come back to a previous workspace? Yes, let’s say you stepped away for a while and you want to carry on working on your NumPy contributions. You need to visit <https://gitpod.io/workspaces> and click on the workspace you want to spin up again. All your changes will be there as you last left them. ### Can I install additional VSCode extensions? Absolutely! Any extensions you installed will be installed in your own workspace and preserved. ### I registered on Gitpod but I still cannot see a `Gitpod` button in my repositories. Head to <https://gitpod.io/integrations> and make sure you are logged in. Hover over GitHub and click on the three buttons that appear on the right. Click on edit permissions and make sure you have `user:email`, `read:user`, and `public_repo` checked. Click on **Update Permissions** and confirm the changes in the GitHub application page.
![Gitpod integrations - edit GH permissions screenshot] ### How long does my workspace stay active if I’m not using it? If you keep your workspace open in a browser tab but don’t interact with it, it will shut down after 30 minutes. If you close the browser tab, it will shut down after 3 minutes. ### My terminal is blank - there is no cursor and it’s completely unresponsive Unfortunately this is a known issue on Gitpod’s side. You can sort this issue in two ways: 1. Create a new Gitpod workspace altogether. 2. Head to your [Gitpod dashboard](https://gitpod.io/workspaces) and locate the running workspace. Hover on it and click on the **three dots menu** and then click on **Stop**. When the workspace is completely stopped you can click on its name to restart it again. ![Gitpod dashboard and workspace menu screenshot] ### I authenticated through GitHub but I still cannot commit to the repository through Gitpod. Head to <https://gitpod.io/integrations> and make sure you are logged in. Hover over GitHub and click on the three buttons that appear on the right. Click on edit permissions and make sure you have `public_repo` checked. Click on **Update Permissions** and confirm the changes in the GitHub application page. ![Gitpod integrations - edit GH repository permissions screenshot] © 2005–2022 NumPy Developers Licensed under the 3-clause BSD License. <https://numpy.org/doc/1.23/dev/development_gitpod.html> Building the NumPy API and reference docs ========================================= If you only want to get the documentation, note that pre-built versions can be found at <https://numpy.org/doc/> in several different formats. Development environments ------------------------ Before proceeding further it should be noted that the documentation is built with the `make` tool, which is not natively available on Windows. MacOS or Linux users can jump to [Prerequisites](#how-todoc-prerequisites).
It is recommended for Windows users to set up their development environment on [Gitpod](development_gitpod#development-gitpod) or [Windows Subsystem for Linux (WSL)](https://docs.microsoft.com/en-us/windows/wsl/install-win10). WSL is a good option for a persistent local set-up. ### Gitpod Gitpod is an open-source platform that automatically creates the correct development environment right in your browser, reducing the need to install local development environments and deal with incompatible dependencies. If you have good internet connectivity and want a temporary set-up, it is often faster to build with Gitpod. Here are the in-depth instructions for [building NumPy with Gitpod](development_gitpod#development-gitpod). Prerequisites ------------- Building the NumPy documentation and API reference requires the following: ### NumPy Since large parts of the main documentation are obtained from NumPy via `import numpy` and examining the docstrings, you will need to first [build](../user/building#building-from-source) and install it so that the correct version is imported. NumPy has to be re-built and re-installed every time you fetch the latest version of the repository, before generating the documentation. This ensures that the NumPy version and the git repository version are in sync. Note that you can e.g. install NumPy to a temporary location and set the PYTHONPATH environment variable appropriately. Alternatively, if using Python virtual environments (via e.g. `conda`, `virtualenv` or the `venv` module), installing NumPy into a new virtual environment is recommended. ### Dependencies All of the necessary dependencies for building the NumPy docs except for [Doxygen](https://www.doxygen.nl/index.html) can be installed with: ``` pip install -r doc_requirements.txt ``` We currently use [Sphinx](http://www.sphinx-doc.org/) along with [Doxygen](https://www.doxygen.nl/index.html) for generating the API and reference documentation for NumPy. 
In addition, building the documentation requires the Sphinx extension `plot_directive`, which is shipped with [Matplotlib](https://matplotlib.org/stable/index.html "(in Matplotlib v3.5.2)"). We also use [numpydoc](https://numpydoc.readthedocs.io/en/latest/index.html) to render docstrings in the generated API documentation. [SciPy](https://docs.scipy.org/doc/scipy/index.html "(in SciPy v1.8.1)") is installed since some parts of the documentation require SciPy functions. For installing [Doxygen](https://www.doxygen.nl/index.html), please check the official [download](https://www.doxygen.nl/download.html#srcbin) and [installation](https://www.doxygen.nl/manual/install.html) pages, or if you are using Linux then you can install it through your distribution package manager. Note Try to install a version of [Doxygen](https://www.doxygen.nl/index.html) newer than 1.8.10, otherwise you may get some warnings during the build. ### Submodules If you obtained NumPy via git, also get the git submodules that contain additional parts required for building the documentation: ``` git submodule update --init ``` Instructions ------------ Now you are ready to generate the docs, so write: ``` cd doc make html ``` If all goes well, this will generate a `build/html` subdirectory in the `/doc` directory, containing the built documentation. If you get a message about `installed numpy != current repo git version`, you must either override the check by setting `GITVER` or re-install NumPy. If you have built NumPy into a virtual environment and get an error that says `numpy not found, cannot build documentation without...`, you need to override the makefile `PYTHON` variable at the command line, so instead of writing `make html` write: ``` make PYTHON=python html ``` To build the PDF documentation, do instead: ``` make latex make -C build/latex all-pdf ``` You will need to have [LaTeX](https://www.latex-project.org/) installed for this, inclusive of support for Greek letters.
For example, on Ubuntu xenial `texlive-lang-greek` and `cm-super` are needed. Also, `latexmk` is needed on non-Windows systems. Instead of the above, you can also do: ``` make dist ``` which will rebuild NumPy, install it to a temporary location, and build the documentation in all formats. This will most likely again only work on Unix platforms. The documentation for NumPy distributed at <https://numpy.org/doc> in html and pdf format is also built with `make dist`. See [HOWTO RELEASE](https://github.com/numpy/numpy/blob/main/doc/HOWTO_RELEASE.rst.txt) for details on how to update <https://numpy.org/doc>. © 2005–2022 NumPy Developers Licensed under the 3-clause BSD License. <https://numpy.org/doc/1.23/dev/howto_build_docs.html> Development workflow ==================== You already have your own forked copy of the [NumPy](https://www.numpy.org) repository, by following [Create a NumPy fork](gitwash/development_setup#forking), [Make the local copy](gitwash/development_setup#set-up-fork), you have configured [git](https://git-scm.com/) by following [Git configuration](gitwash/configure_git#configure-git), and have linked the upstream repository as explained in [Linking your repository to the upstream repo](https://scikit-image.org/docs/stable/gitwash/set_up_fork.html#linking-to-upstream "(in skimage v0.19.2)"). What is described below is a recommended workflow with Git. Basic workflow -------------- In short: 1. Start a new *feature branch* for each set of edits that you do. See [below](#making-a-new-feature-branch). 2. Hack away! See [below](#editing-workflow) 3. When finished: * *Contributors*: push your feature branch to your own Github repo, and [create a pull request](#asking-for-merging). * *Core developers*: If you want to push changes without further review, see the notes [below](#pushing-to-main). This way of working helps to keep work well organized and the history as clear as possible.
See also There are many online tutorials to help you [learn git](https://try.github.io/). For discussions of specific git workflows, see these discussions on [linux git workflow](https://www.<EMAIL>-archive.com/<EMAIL>/msg39091.html), and [ipython git workflow](https://mail.python.org/pipermail/ipython-dev/2010-October/005632.html). ### Making a new feature branch First, fetch new commits from the `upstream` repository: ``` git fetch upstream ``` Then, create a new branch based on the main branch of the upstream repository: ``` git checkout -b my-new-feature upstream/main ``` ### The editing workflow #### Overview ``` # hack hack git status # Optional git diff # Optional git add modified_file git commit # push the branch to your own Github repo git push origin my-new-feature ``` #### In more detail 1. Make some changes. When you feel that you’ve made a complete, working set of related changes, move on to the next steps. 2. Optional: Check which files have changed with `git status` (see [git status](https://www.kernel.org/pub/software/scm/git/docs/git-status.html)). You’ll see a listing like this one: ``` # On branch my-new-feature # Changed but not updated: # (use "git add <file>..." to update what will be committed) # (use "git checkout -- <file>..." to discard changes in working directory) # # modified: README # # Untracked files: # (use "git add <file>..." to include in what will be committed) # # INSTALL no changes added to commit (use "git add" and/or "git commit -a") ``` 3. Optional: Compare the changes with the previous version with `git diff` ([git diff](https://www.kernel.org/pub/software/scm/git/docs/git-diff.html)). This brings up a simple text browser interface that highlights the difference between your files and the previous version. 4. Add any relevant modified or new files using `git add modified_file` (see [git add](https://www.kernel.org/pub/software/scm/git/docs/git-add.html)).
This puts the files into a staging area, which is a queue of files that will be added to your next commit. Only add files that have related, complete changes. Leave files with unfinished changes for later commits. 5. To commit the staged files into the local copy of your repo, do `git commit`. At this point, a text editor will open up to allow you to write a commit message. Read the [commit message section](#writing-the-commit-message) to be sure that you are writing a properly formatted and sufficiently detailed commit message. After saving your message and closing the editor, your commit will be saved. For trivial commits, a short commit message can be passed in through the command line using the `-m` flag. For example, `git commit -am "ENH: Some message"`. In some cases, you will see this form of the commit command: `git commit -a`. The extra `-a` flag automatically commits all modified files and removes all deleted files. This can save you some typing of numerous `git add` commands; however, it can add unwanted changes to a commit if you’re not careful. For more information, see [why the -a flag?](http://www.gitready.com/beginner/2009/01/18/the-staging-area.html) - and the helpful use-case description in the [tangled working copy problem](https://tomayko.com/writings/the-thing-about-git). 6. Push the changes to your forked repo on [github](https://github.com/numpy/numpy): ``` git push origin my-new-feature ``` For more information, see [git push](https://www.kernel.org/pub/software/scm/git/docs/git-push.html). Note Assuming you have followed the instructions in these pages, git will create a default link to your [github](https://github.com/numpy/numpy) repo called `origin`. 
In git >= 1.7 you can ensure that the link to origin is permanently set by using the `--set-upstream` option: ``` git push --set-upstream origin my-new-feature ``` From now on [git](https://git-scm.com/) will know that `my-new-feature` is related to the `my-new-feature` branch in your own [github](https://github.com/numpy/numpy) repo. Subsequent push calls are then simplified to the following: ``` git push ``` You have to use `--set-upstream` for each new branch that you create. It may be the case that while you were working on your edits, new commits have been added to `upstream` that affect your work. In this case, follow the [Rebasing on main](#rebasing-on-main) section of this document to apply those changes to your branch. #### Writing the commit message Commit messages should be clear and follow a few basic rules. Example: ``` ENH: add functionality X to numpy.<submodule>. The first line of the commit message starts with a capitalized acronym (options listed below) indicating what type of commit this is. Then a blank line, then more text if needed. Lines shouldn't be longer than 72 characters. If the commit is related to a ticket, indicate that with "See #3456", "See ticket 3456", "Closes #3456" or similar. ``` Describing the motivation for a change, the nature of a bug for bug fixes or some details on what an enhancement does are also good to include in a commit message. Messages should be understandable without looking at the code changes. A commit message like `MAINT: fixed another one` is an example of what not to do; the reader has to go look for context elsewhere. Standard acronyms to start the commit message with are: ``` API: an (incompatible) API change BENCH: changes to the benchmark suite BLD: change related to building numpy BUG: bug fix DEP: deprecate something, or remove a deprecated object DEV: development tool or utility DOC: documentation ENH: enhancement MAINT: maintenance commit (refactoring, typos, etc.) 
REV: revert an earlier commit STY: style fix (whitespace, PEP8) TST: addition or modification of tests TYP: static typing REL: related to releasing numpy ``` ##### Commands to skip continuous integration By default a lot of continuous integration (CI) jobs are run for every PR, from running the test suite on different operating systems and hardware platforms to building the docs. In some cases you already know that CI isn’t needed (or not all of it), for example if you work on CI config files, text in the README, or other files that aren’t involved in regular build, test or docs sequences. In such cases you may explicitly skip CI by including one of these fragments in your commit message: ``` ``[ci skip]``: skip as much CI as possible (not all jobs can be skipped) ``[skip github]``: skip GitHub Actions "build numpy and run tests" jobs ``[skip travis]``: skip TravisCI jobs ``[skip azurepipelines]``: skip Azure jobs ``` *Note*: unfortunately not all CI systems implement this feature well, or at all. CircleCI supports `ci skip` but has no command to skip only CircleCI. Azure chooses to still run jobs with skip commands on PRs; the jobs only get skipped on merging to master. ##### Test building wheels NumPy currently uses [cibuildwheel](https://cibuildwheel.readthedocs.io/en/stable/) in order to build wheels through continuous integration services. To save resources, the cibuildwheel wheel builders are not run by default on every single PR or commit to main.
If you would like to test that your pull request does not break the wheel builders, you may either append `[wheel build]` to the end of the commit message of the commit or add one of the following labels to the pull request (if you have the permissions to do so): * `36 - Build`: for pull requests changing build processes/configurations * `03 - Maintenance`: for pull requests upgrading dependencies * `14 - Release`: for pull requests preparing for a release The wheels built via GitHub Actions (including 64-bit Linux, macOS, and Windows, arm64 macOS, and 32-bit Windows) will be uploaded as artifacts in zip files. You can access them from the Summary page of the “Wheel builder” [Action](https://github.com/numpy/numpy/actions). The aarch64 wheels built via [travis](https://app.travis-ci.com/github/numpy/numpy/builds) CI are not available as artifacts. Additionally, the wheels will be uploaded to <https://anaconda.org/scipy-wheels-nightly/> on the following conditions: * by a weekly cron job or * if the github action or travis build has been manually triggered, which requires appropriate permissions The wheels will be uploaded to <https://anaconda.org/multibuild-wheels-staging/> if the build was triggered by a tag to the repo that begins with `v`. ### Get the mailing list’s opinion If you plan a new feature or API change, it’s wisest to first email the NumPy [mailing list](https://mail.python.org/mailman/listinfo/numpy-discussion) asking for comment. If you haven’t heard back in a week, it’s OK to ping the list again. ### Asking for your changes to be merged with the main repo When you feel your work is finished, you can create a pull request (PR). Github has a nice help page that outlines the process for [filing pull requests](https://help.github.com/articles/using-pull-requests/#initiating-the-pull-request).
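If you only realize after committing that the wheel builders should run, the `[wheel build]` fragment can be appended to the most recent commit message with `git commit --amend`. A minimal sketch, demonstrated in a throwaway repository so nothing real is touched (the file name and commit message are hypothetical):

```shell
# Create a scratch repository to demonstrate amending safely.
set -e
repo=$(mktemp -d)
cd "$repo"
git init -q
git config user.email "you@example.com"
git config user.name "Your Name"
echo "change" > file.txt
git add file.txt
git commit -q -m "ENH: hypothetical change"

# Re-use the existing message and append the CI fragment to it.
git commit -q --amend -m "$(git log -1 --pretty=%B) [wheel build]"
git log -1 --pretty=%B   # ENH: hypothetical change [wheel build]
```

Since amending rewrites the commit, a branch that was already pushed will need `git push --force-with-lease` afterwards.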
If your changes involve modifications to the API or addition/modification of a function, add a release note to the `doc/release/upcoming_changes/` directory, following the instructions and format in the `doc/release/upcoming_changes/README.rst` file. ### Getting your PR reviewed We review pull requests as soon as we can, typically within a week. If you get no review comments within two weeks, feel free to ask for feedback by adding a comment on your PR (this will notify maintainers). If your PR is large or complicated, asking for input on the numpy-discussion mailing list may also be useful. ### Rebasing on main This updates your feature branch with changes from the upstream [NumPy github](https://github.com/numpy/numpy) repo. If you do not absolutely need to do this, try to avoid doing it, except perhaps when you are finished. The first step will be to update the remote repository with new commits from upstream: ``` git fetch upstream ``` Next, you need to update the feature branch: ``` # go to the feature branch git checkout my-new-feature # make a backup in case you mess up git branch tmp my-new-feature # rebase on upstream main branch git rebase upstream/main ``` If you have made changes to files that have also changed upstream, this may generate merge conflicts that you need to resolve. See [below](#recovering-from-mess-up) for help in this case. Finally, remove the backup branch upon a successful rebase: ``` git branch -D tmp ``` Note Rebasing on main is preferred over merging upstream back to your branch. Using `git merge` and `git pull` is discouraged when working on feature branches. ### Recovering from mess-ups Sometimes, you mess up merges or rebases. Luckily, in Git it is relatively straightforward to recover from such mistakes.
If you mess up during a rebase: ``` git rebase --abort ``` If you notice you messed up after the rebase: ``` # reset branch back to the saved point git reset --hard tmp ``` If you forgot to make a backup branch: ``` # look at the reflog of the branch git reflog show my-feature-branch 8630830 my-feature-branch@{0}: commit: BUG: io: close file handles immediately 278dd2a my-feature-branch@{1}: rebase finished: refs/heads/my-feature-branch onto 11ee694744f2552d 26aa21a my-feature-branch@{2}: commit: BUG: lib: make seek_gzip_factory not leak gzip obj ... # reset the branch to where it was before the botched rebase git reset --hard my-feature-branch@{2} ``` If you didn’t actually mess up but there are merge conflicts, you need to resolve those. This can be one of the trickier things to get right. For a good description of how to do this, see [this article on merging conflicts](https://git-scm.com/book/en/Git-Branching-Basic-Branching-and-Merging#Basic-Merge-Conflicts). Additional things you might want to do -------------------------------------- ### Rewriting commit history Note Do this only for your own feature branches. There’s an embarrassing typo in a commit you made? Or perhaps you made several false starts you would like the posterity not to see. This can be done via *interactive rebasing*. Suppose that the commit history looks like this: ``` git log --oneline eadc391 Fix some remaining bugs a815645 Modify it so that it works 2dec1ac Fix a few bugs + disable 13d7934 First implementation 6ad92e5 * masked is now an instance of a new object, MaskedConstant 29001ed Add pre-nep for a couple of structured_array_extensions. ... ``` and `6ad92e5` is the last commit in the `main` branch. Suppose we want to make the following changes: * Rewrite the commit message for `13d7934` to something more sensible. * Combine the commits `2dec1ac`, `a815645`, `eadc391` into a single one. 
We do as follows: ``` # make a backup of the current state git branch tmp HEAD # interactive rebase git rebase -i 6ad92e5 ``` This will open an editor with the following text in it: ``` pick 13d7934 First implementation pick 2dec1ac Fix a few bugs + disable pick a815645 Modify it so that it works pick eadc391 Fix some remaining bugs # Rebase 6ad92e5..eadc391 onto 6ad92e5 # # Commands: # p, pick = use commit # r, reword = use commit, but edit the commit message # e, edit = use commit, but stop for amending # s, squash = use commit, but meld into previous commit # f, fixup = like "squash", but discard this commit's log message # # If you remove a line here THAT COMMIT WILL BE LOST. # However, if you remove everything, the rebase will be aborted. # ``` To achieve what we want, we will make the following changes to it: ``` r 13d7934 First implementation pick 2dec1ac Fix a few bugs + disable f a815645 Modify it so that it works f eadc391 Fix some remaining bugs ``` This means that (i) we want to edit the commit message for `13d7934`, and (ii) collapse the last three commits into one. Now we save and quit the editor. Git will then immediately bring up an editor for editing the commit message. After revising it, we get the output: ``` [detached HEAD 721fc64] FOO: First implementation 2 files changed, 199 insertions(+), 66 deletions(-) [detached HEAD 0f22701] Fix a few bugs + disable 1 files changed, 79 insertions(+), 61 deletions(-) Successfully rebased and updated refs/heads/my-feature-branch. ``` and the history looks now like this: ``` 0f22701 Fix a few bugs + disable 721fc64 ENH: Sophisticated feature 6ad92e5 * masked is now an instance of a new object, MaskedConstant ``` If it went wrong, recovery is again possible as explained [above](#recovering-from-mess-up). 
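The interactive step can also be scripted, which is handy if you rewrite history the same way often: `git rebase -i` consults the `GIT_SEQUENCE_EDITOR` environment variable to edit the todo list, so `sed` can stand in for the editor. A minimal sketch in a throwaway repository (the commits are hypothetical, and GNU `sed -i` syntax is assumed):

```shell
# Build a scratch repository with three commits.
set -e
repo=$(mktemp -d)
cd "$repo"
git init -q
git config user.email "you@example.com"
git config user.name "Your Name"
for i in 1 2 3; do
  echo "$i" > file.txt
  git add file.txt
  git commit -q -m "commit $i"
done

# Squash the last two commits: sed rewrites every todo line after the
# first from "pick" to "fixup", melding them into the preceding commit.
GIT_SEQUENCE_EDITOR="sed -i '2,\$ s/^pick/fixup/'" git rebase -i HEAD~2
git log --oneline   # two commits remain: "commit 2" on top of "commit 1"
```

As with any interactive rebase, this rewrites history, so only use it on your own feature branches.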
### Deleting a branch on [github](https://github.com/numpy/numpy) ``` git checkout main # delete branch locally git branch -D my-unwanted-branch # delete branch on github git push origin --delete my-unwanted-branch ``` See also: <https://stackoverflow.com/questions/2003505/how-do-i-delete-a-git-branch-locally-and-remotely> ### Several people sharing a single repository If you want to work on some stuff with other people, where you are all committing into the same repository, or even the same branch, then just share it via [github](https://github.com/numpy/numpy). First fork NumPy into your account, as from [Create a NumPy fork](gitwash/development_setup#forking). Then, go to your forked repository github page, say `https://github.com/your-user-name/numpy` Click on the ‘Admin’ button, and add anyone else to the repo as a collaborator: ![../_images/pull_button.png] Now all those people can do: ``` git clone git@github.com:your-user-name/numpy.git ``` Remember that links starting with `git@` use the ssh protocol and are read-write; links starting with `git://` are read-only. Your collaborators can then commit directly into that repo with the usual: ``` git commit -am 'ENH - much better code' git push origin my-feature-branch # pushes directly into your repo ``` ### Exploring your repository To see a graphical representation of the repository branches and commits: ``` gitk --all ``` To see a linear list of commits for this branch: ``` git log ``` You can also look at the [network graph visualizer](https://github.blog/2008-04-10-say-hello-to-the-network-graph-visualizer/) for your [github](https://github.com/numpy/numpy) repo. ### Backporting Backporting is the process of copying new feature/fixes committed in [numpy/main](https://github.com/numpy/numpy) back to stable release branches.
To do this you make a branch off the branch you are backporting to, cherry-pick the commits you want from `numpy/main`, and then submit a pull request for the branch containing the backport. 1. First, you need to make the branch you will work on. This needs to be based on the older version of NumPy (not main): ``` # Make a new branch based on numpy/maintenance/1.8.x, # backport-3324 is our new name for the branch. git checkout -b backport-3324 upstream/maintenance/1.8.x ``` 2. Now you need to apply the changes from main to this branch using [git cherry-pick](https://www.kernel.org/pub/software/scm/git/docs/git-cherry-pick.html): ``` # Update remote git fetch upstream # Check the commit log for commits to cherry pick git log upstream/main # This pull request included commits aa7a047 to c098283 (inclusive) # so you use the .. syntax (for a range of commits), the ^ makes the # range inclusive. git cherry-pick aa7a047^..c098283 ... # Fix any conflicts, then if needed: git cherry-pick --continue ``` 3. You might run into some conflicts cherry picking here. These are resolved the same way as merge/rebase conflicts. Except here you can use [git blame](https://www.kernel.org/pub/software/scm/git/docs/git-blame.html) to see the difference between main and the backported branch to make sure nothing gets screwed up. 4. Push the new branch to your Github repository: ``` git push -u origin backport-3324 ``` 5. Finally make a pull request using Github. Make sure it is against the maintenance branch and not main; GitHub will usually suggest that you make the pull request against main. ### Pushing changes to the main repo *Requires commit rights to the main NumPy repo.* When you have a set of “ready” changes in a feature branch ready for NumPy’s `main` or `maintenance` branches, you can push them to `upstream` as follows: 1. First, merge or rebase on the target branch. 1.
If there are only a few, unrelated commits, prefer rebasing: ``` git fetch upstream git rebase upstream/main ``` See [Rebasing on main](#rebasing-on-main). 2. If all of the commits are related, create a merge commit: ``` git fetch upstream git merge --no-ff upstream/main ``` 2. Check that what you are going to push looks sensible: ``` git log -p upstream/main.. git log --oneline --graph ``` 3. Push to upstream: ``` git push upstream my-feature-branch:main ``` Note It’s usually a good idea to use the `-n` flag to `git push` to check first that you’re about to push the changes you want to the place you want. © 2005–2022 NumPy Developers Licensed under the 3-clause BSD License. <https://numpy.org/doc/1.23/dev/development_workflow.html> Advanced debugging tools ======================== If you reached here, you want to dive into, or use, more advanced tooling. This is usually not necessary for first-time contributors and most day-to-day development. These are used more rarely, for example close to a new NumPy release, or when a large or particularly complex change was made. Since not all of these tools are used on a regular basis and are only available on some systems, please expect differences, issues, or quirks; we will be happy to help if you get stuck and appreciate any improvements or suggestions to these workflows. Finding C errors with additional tooling ---------------------------------------- Most development will not require more than a typical debugging toolchain as shown in [Debugging](development_environment#debugging). But for example memory leaks can be particularly subtle or difficult to narrow down. We do not expect any of these tools to be run by most contributors. However, you can ensure that we can track down such issues more easily: * Tests should cover all code paths, including error paths. * Try to write short and simple tests. If you have a very complicated test consider creating an additional simpler test as well.
This can be helpful, because often it is only easy to find which test triggers an issue and not which line of the test. * Never use `np.empty` if data is read/used. `valgrind` will notice this and report an error. When you do not care about values, you can generate random values instead. This will help us catch any oversights before your change is released and means you do not have to worry about making reference counting errors, which can be intimidating. ### Python debug build for finding memory leaks Debug builds of Python are easily available for example on `debian` systems, and can be used on all platforms. Running a test or terminal is usually as easy as: ``` python3.8d runtests.py # or python3.8d runtests.py --ipython ``` and these were already mentioned in [Debugging](development_environment#debugging). A Python debug build will help: * Find bugs which may otherwise cause random behaviour. One example is when an object is still used after it has been deleted. * Python debug builds allow you to check for correct reference counting. This works using the additional commands: ``` sys.gettotalrefcount() sys.getallocatedblocks() ``` #### Use together with `pytest` Running the test suite only with a debug python build will not find many errors on its own. An additional advantage of a debug build of Python is that it allows detecting memory leaks. A tool to make this easier is [pytest-leaks](https://github.com/abalkin/pytest-leaks), which can be installed using `pip`. Unfortunately, `pytest` itself may leak memory, but good results can usually (currently) be achieved by removing: ``` @pytest.fixture(autouse=True) def add_np(doctest_namespace): doctest_namespace['np'] = numpy @pytest.fixture(autouse=True) def env_setup(monkeypatch): monkeypatch.setenv('PYTHONHASHSEED', '0') ``` from `numpy/conftest.py` (This may change with new `pytest-leaks` versions or `pytest` updates).
This makes it possible to run the test suite, or part of it, conveniently: ``` python3.8d runtests.py -t numpy/core/tests/test_multiarray.py -- -R2:3 -s ``` where `-R2:3` is the `pytest-leaks` command (see its documentation), and `-s` causes output to print and may be necessary (in some versions captured output was detected as a leak). Note that some tests are known (or even designed) to leak references; we try to mark them, but expect some false positives. ### `valgrind` Valgrind is a powerful tool to find certain memory access problems and should be run on complicated C code. Basic use of `valgrind` usually requires no more than: ``` PYTHONMALLOC=malloc valgrind python runtests.py ``` where `PYTHONMALLOC=malloc` is necessary to avoid false positives from Python itself. Depending on the system and valgrind version, you may see more false positives. `valgrind` supports “suppressions” to ignore some of these, and Python does have a suppression file (and even a compile-time option) which may help if you find it necessary. Valgrind helps: * Find use of uninitialized variables/memory. * Detect memory access violations (reading or writing outside of allocated memory). * Find *many* memory leaks. Note that for *most* leaks the Python debug build approach (and `pytest-leaks`) is much more sensitive. The reason is that `valgrind` can only detect if memory is definitely lost. If: ``` dtype = np.dtype(np.int64) arr.astype(dtype=dtype) ``` has incorrect reference counting for `dtype`, this is a bug, but valgrind cannot see it because `np.dtype(np.int64)` always returns the same object. However, not all dtypes are singletons, so this might leak memory for different inputs. In rare cases NumPy uses `malloc` and not the Python memory allocators; such allocations are invisible to the Python debug build. `malloc` should normally be avoided, but there are some exceptions (e.g. the `PyArray_Dims` structure is public API and cannot use the Python allocators.)
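This sensitivity difference can be sketched in pure Python. In the hypothetical illustration below, a cached object stands in for a singleton like `np.dtype(np.int64)`; all names are invented for the example:

```python
import sys

# A stray, never-released reference to a long-lived (cached) object is
# invisible to valgrind, because the memory is still reachable and thus
# never "definitely lost" -- but the extra references do show up in
# reference counts, which a debug build's sys.gettotalrefcount() (or
# pytest-leaks) can detect.
cache = {"dtype": object()}  # stands in for a singleton dtype
stray = []

def buggy(cache):
    # Pretend a C function forgot a Py_DECREF: keep an extra reference.
    stray.append(cache["dtype"])

before = sys.getrefcount(cache["dtype"])
for _ in range(10):
    buggy(cache)
after = sys.getrefcount(cache["dtype"])

print(after - before)  # prints 10: ten leaked references, zero bytes lost
```

This is why the document recommends the debug-build approach for most leaks and reserves valgrind for problems the Python allocators cannot see.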
Even though using valgrind for memory leak detection is slow and less sensitive, it can be convenient: you can run most programs with valgrind without modification. Things to be aware of: * Valgrind does not support the numpy `longdouble`; this means that tests will fail, or errors will be flagged, that are completely fine. * Expect some errors before and after running your NumPy code. * Caches can mean that errors (specifically memory leaks) may not be detected or are only detected at a later, unrelated time. A big advantage of valgrind is that it has no requirements aside from valgrind itself (although you probably want to use debug builds for better tracebacks). #### Use together with `pytest` You can run the test suite with valgrind, which may be sufficient when you are only interested in a few tests: ``` PYTHONMALLOC=malloc valgrind python runtests.py \ -t numpy/core/tests/test_multiarray.py -- --continue-on-collection-errors ``` Note the `--continue-on-collection-errors`, which is currently necessary due to missing `longdouble` support causing failures (this will usually not be necessary if you do not run the full test suite). If you wish to detect memory leaks you will also require `--show-leak-kinds=definite` and possibly more valgrind options. Just as for `pytest-leaks`, certain tests are known to leak or cause errors in valgrind and may or may not be marked as such. We have developed [pytest-valgrind](https://github.com/seberg/pytest-valgrind) which: * Reports errors for each test individually * Narrows down memory leaks to individual tests (by default valgrind only checks for memory leaks after a program stops, which is very cumbersome). Please refer to its `README` for more information (it includes an example command for NumPy).
<https://numpy.org/doc/1.23/dev/development_advanced_debugging.html>

Reviewer Guidelines =================== Reviewing open pull requests (PRs) helps move the project forward. We encourage people outside the project to get involved as well; it’s a great way to get familiar with the codebase. Who can be a reviewer? ---------------------- Reviews can come from outside the NumPy team – we welcome contributions from domain experts (for instance, `linalg` or `fft`) or maintainers of other projects. You do not need to be a NumPy maintainer (a NumPy team member with permission to merge a PR) to review. If we do not know you yet, consider introducing yourself in [the mailing list or Slack](https://numpy.org/community/) before you start reviewing pull requests. Communication Guidelines ------------------------ * Every PR, good or bad, is an act of generosity. Opening with a positive comment will help the author feel rewarded, and your subsequent remarks may be heard more clearly. You may feel good, too. * Begin if possible with the large issues, so the author knows they’ve been understood. Resist the temptation to immediately go line by line, or to open with small pervasive issues. * You are the face of the project, and NumPy some time ago decided [the kind of project it will be](https://numpy.org/code-of-conduct/): open, empathetic, welcoming, friendly and patient. Be [kind](https://youtu.be/tzFWz5fiVKU?t=49m30s) to contributors. * Do not let perfect be the enemy of the good, particularly for documentation. If you find yourself making many small suggestions, or being too nitpicky on style or grammar, consider merging the current PR when all important concerns are addressed. Then, either push a commit directly (if you are a maintainer) or open a follow-up PR yourself. * If you need help writing replies in reviews, check out some [standard replies for reviewing](#saved-replies). Reviewer Checklist ------------------ * Is the intended behavior clear under all conditions?
Some things to watch: + What happens with unexpected inputs like empty arrays or nan/inf values? + Are axis or shape arguments tested to be `int` or `tuples`? + Are unusual `dtypes` tested if a function supports those? * Should variable names be improved for clarity or consistency? * Should comments be added, or rather removed as unhelpful or extraneous? * Does the documentation follow the [NumPy guidelines](howto-docs#howto-document)? Are the docstrings properly formatted? * Does the code follow NumPy’s [Stylistic Guidelines](index#stylistic-guidelines)? * If you are a maintainer, and it is not obvious from the PR description, add a short explanation of what a branch did to the merge message and, if closing an issue, also add “Closes gh-123” where 123 is the issue number. * For code changes, at least one maintainer (i.e. someone with commit rights) should review and approve a pull request. If you are the first to review a PR and approve of the changes use the GitHub [approve review](https://help.github.com/articles/reviewing-changes-in-pull-requests/) tool to mark it as such. If a PR is straightforward, for example it’s a clearly correct bug fix, it can be merged straight away. If it’s more complex or changes public API, please leave it open for at least a couple of days so other maintainers get a chance to review. * If you are a subsequent reviewer on an already approved PR, please use the same review method as for a new PR (focus on the larger issues, resist the temptation to add only a few nitpicks). If you have commit rights and think no more review is needed, merge the PR. ### For maintainers * Make sure all automated CI tests pass before merging a PR, and that the [documentation builds](index#building-docs) without any errors. * In case of merge conflicts, ask the PR submitter to [rebase on main](development_workflow#rebasing-on-main). * For PRs that add new features or are in some way complex, wait at least a day or two before merging it. 
That way, others get a chance to comment before the code goes in. Consider adding it to the release notes. * When merging contributions, a committer is responsible for ensuring that those meet the requirements outlined in the [Development process guidelines](index#guidelines) for NumPy. Also, check that new features and backwards compatibility breaks were discussed on the [numpy-discussion mailing list](https://mail.python.org/mailman/listinfo/numpy-discussion). * Squashing commits or cleaning up commit messages of a PR that you consider too messy is OK. Remember to retain the original author’s name when doing this. Make sure commit messages follow the [rules for NumPy](development_workflow#writing-the-commit-message). * When you want to reject a PR: if it’s very obvious, you can just close it and explain why. If it’s not, then it’s a good idea to first explain why you think the PR is not suitable for inclusion in NumPy and then let a second committer comment or close. ### GitHub Workflow When reviewing pull requests, please use workflow tracking features on GitHub as appropriate: * After you have finished reviewing, if you want to ask for the submitter to make changes, change your review status to “Changes requested.” This can be done on GitHub, PR page, Files changed tab, Review changes (button on the top right). * If you’re happy about the current status, mark the pull request as Approved (same way as Changes requested). Alternatively (for maintainers): merge the pull request, if you think it is ready to be merged. It may be helpful to have a copy of the pull request code checked out on your own machine so that you can play with it locally. You can use the [GitHub CLI](https://docs.github.com/en/github/getting-started-with-github/github-cli) to do this by clicking the `Open with` button in the upper right-hand corner of the PR page. 
Assuming you have your [development environment](development_environment#development-environment) set up, you can now build the code and test it. Standard replies for reviewing ------------------------------ It may be helpful to store some of these in GitHub’s [saved replies](https://github.com/settings/replies/) for reviewing: **Usage question** ``` You are asking a usage question. The issue tracker is for bugs and new features. I'm going to close this issue, feel free to ask for help via our [help channels](https://numpy.org/gethelp/). ``` **You’re welcome to update the docs** ``` Please feel free to offer a pull request updating the documentation if you feel it could be improved. ``` **Self-contained example for bug** ``` Please provide a [self-contained example code](https://stackoverflow.com/help/mcve), including imports and data (if possible), so that other contributors can just run it and reproduce your issue. Ideally your example code should be minimal. ``` **Software versions** ``` To help diagnose your issue, please paste the output of: ``` python -c 'import numpy; print(numpy.version.version)' ``` Thanks. ``` **Code blocks** ``` Readability can be greatly improved if you [format](https://help.github.com/articles/creating-and-highlighting-code-blocks/) your code snippets and complete error messages appropriately. You can edit your issue descriptions and comments at any time to improve readability. This helps maintainers a lot. Thanks! ``` **Linking to code** ``` For clarity's sake, you can link to code like [this](https://help.github.com/articles/creating-a-permanent-link-to-a-code-snippet/). ``` **Better description and title** ``` Please make the title of the PR more descriptive. The title will become the commit message when this is merged. 
You should state what issue (or PR) it fixes/resolves in the description using the syntax described [here](https://docs.github.com/en/github/managing-your-work-on-github/linking-a-pull-request-to-an-issue#linking-a-pull-request-to-an-issue-using-a-keyword). ``` **Regression test needed** ``` Please add a [non-regression test](https://en.wikipedia.org/wiki/Non-regression_testing) that would fail at main but pass in this PR. ``` **Don’t change unrelated** ``` Please do not change unrelated lines. It makes your contribution harder to review and may introduce merge conflicts to other pull requests. ``` <https://numpy.org/doc/1.23/dev/reviewer_guidelines.html>

NumPy benchmarks ================ Benchmarking NumPy with Airspeed Velocity. Usage ----- Airspeed Velocity manages building and Python virtualenvs by itself, unless told otherwise. Some of the benchmarking features in `runtests.py` also tell ASV to use the NumPy compiled by `runtests.py`. To run the benchmarks, you do not need to install a development version of NumPy into your current Python environment. Before beginning, ensure that *airspeed velocity* is installed. By default, `asv` ships with support for anaconda and virtualenv: ``` pip install asv pip install virtualenv ``` After contributing new benchmarks, you should test them locally before submitting a pull request. To run all benchmarks, navigate to the root NumPy directory at the command line and execute: ``` python runtests.py --bench ``` where `--bench` activates the benchmark suite instead of the test suite. This builds NumPy and runs all available benchmarks defined in `benchmarks/`. (Note: this could take a while. Each benchmark is run multiple times to measure the distribution in execution times.)
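A benchmark module under `benchmarks/` is an ordinary Python file containing classes of timed methods. A minimal sketch follows; the module, class, and method names are illustrative, not part of NumPy's actual suite:

```python
import numpy as np

# Hypothetical minimal ASV benchmark module, e.g.
# benchmarks/benchmarks/bench_example.py (names are invented).
class SumBench:
    # ASV times each time_* method once per parameter value.
    params = [1000, 100000]
    param_names = ["size"]

    def setup(self, size):
        # Array creation happens in setup() so it is not included in the
        # measured time; np.ones (rather than np.empty) also ensures the
        # pages are actually faulted in before timing starts.
        self.arr = np.ones(size)

    def time_sum(self, size):
        np.sum(self.arr)
```

ASV discovers such classes automatically; with the hypothetical file name above, `python runtests.py --bench bench_example` would run only this module.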
To run benchmarks from a particular benchmark module, such as `bench_core.py`, simply append the filename without the extension: ``` python runtests.py --bench bench_core ``` To run a benchmark defined in a class, such as `Mandelbrot` from `bench_avx.py`: ``` python runtests.py --bench bench_avx.Mandelbrot ``` To compare benchmark results against another version/commit/branch: ``` python runtests.py --bench-compare v1.6.2 bench_core python runtests.py --bench-compare 8bf4e9b bench_core python runtests.py --bench-compare main bench_core ``` All of the commands above display the results in plain text in the console, and the results are not saved for comparison with future commits. For greater control, a graphical view, and to have results saved for future comparison, you can run ASV commands directly (record results and generate HTML): ``` cd benchmarks asv run -n -e --python=same asv publish asv preview ``` More on how to use `asv` can be found in the [ASV documentation](https://asv.readthedocs.io/). Command-line help is available as usual via `asv --help` and `asv run --help`. Writing benchmarks ------------------ See the [ASV documentation](https://asv.readthedocs.io/) for basics on how to write benchmarks. Some things to consider: * The benchmark suite should be importable with any NumPy version. * The benchmark parameters etc. should not depend on which NumPy version is installed. * Try to keep the runtime of the benchmark reasonable. * Prefer ASV’s `time_` methods for benchmarking times rather than cooking up time measurements via `time.clock`, even if it requires some juggling when writing the benchmark. * Preparing arrays etc. should generally be put in the `setup` method rather than the `time_` methods, to avoid counting preparation time together with the time of the benchmarked operation. * Be mindful that large arrays created with `np.empty` or `np.zeros` might not be allocated in physical memory until the memory is accessed.
If this is desired behaviour, make sure to comment it in your setup function. If you are benchmarking an algorithm, it is unlikely that a user will be executing said algorithm on a newly created empty/zero array. One can force pagefaults to occur in the setup phase either by calling `np.ones` or `arr.fill(value)` after creating the array. <https://numpy.org/doc/1.23/benchmarking.html>

Releasing a version =================== How to Prepare a Release ------------------------ This file gives an overview of what is necessary to build binary releases for NumPy. ### Current build and release info The current info on building and releasing NumPy and SciPy is scattered in several places. It should be summarized in one place, updated, and where necessary described in more detail. The sections below list all places where useful info can be found. #### Source tree * INSTALL.rst.txt * pavement.py #### NumPy Docs * <https://github.com/numpy/numpy/blob/main/doc/HOWTO_RELEASE.rst.txt> #### SciPy.org wiki * <https://www.scipy.org/Installing_SciPy> and links on that page. #### Release Scripts * <https://github.com/numpy/numpy-vendor> ### Supported platforms and versions [NEP 29](https://numpy.org/neps/nep-0029-deprecation_policy.html#nep29 "(in NumPy Enhancement Proposals)") outlines which Python versions are supported; for the first half of 2020, this will be Python >= 3.6. We test NumPy against all these versions every time we merge code to main. Binary installers may be available for a subset of these versions (see below). #### OS X OS X versions >= 10.9 are supported; for Python version support see [NEP 29](https://numpy.org/neps/nep-0029-deprecation_policy.html#nep29 "(in NumPy Enhancement Proposals)").
We build binary wheels for OSX that are compatible with Python.org Python, system Python, homebrew and macports - see this [OSX wheel building summary](https://github.com/MacPython/wiki/wiki/Spinning-wheels) for details. #### Windows We build 32- and 64-bit wheels on Windows. Windows 7, 8 and 10 are supported. We build NumPy using the [mingw-w64 toolchain](https://mingwpy.github.io) on Appveyor. #### Linux We build and ship [manylinux1](https://www.python.org/dev/peps/pep-0513) wheels for NumPy. Many Linux distributions include their own binary builds of NumPy. #### BSD / Solaris No binaries are provided, but successful builds on Solaris and BSD have been reported. ### Tool chain We build all our wheels on cloud infrastructure - so this list of compilers is for information and for debugging builds locally. See the `.travis.yml` script in the [numpy wheels](https://github.com/MacPython/numpy-wheels) repo for the definitive source of the build recipes. Packages that are available using pip are noted. #### Compilers The same gcc version is used as the one with which Python itself is built on each platform. At the moment this means: * OS X builds on travis currently use `clang`. It appears that binary wheels for OSX >= 10.6 can be safely built from the travis-ci OSX 10.9 VMs when building against the Python from the Python.org installers; * Windows builds use the [mingw-w64 toolchain](https://mingwpy.github.io); * Manylinux1 wheels use the gcc provided on the Manylinux docker images. You will need Cython for building the binaries. Cython compiles the `.pyx` files in the NumPy distribution to `.c` files. #### OpenBLAS All the wheels link to a version of [OpenBLAS](https://github.com/xianyi/OpenBLAS) supplied via the [openblas-libs](https://github.com/MacPython/openblas-libs) repo. The shared object (or DLL) is shipped within the wheel, renamed to prevent name collisions with other OpenBLAS shared objects that may exist in the filesystem.
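As a quick sanity check of a built wheel, NumPy can report its own BLAS/LAPACK build information at runtime; a wheel built as described above should mention the bundled OpenBLAS:

```python
import numpy as np

# Print the BLAS/LAPACK libraries this NumPy build is linked against.
# For the official wheels this should report the bundled OpenBLAS.
np.show_config()
```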
#### Building source archives and wheels You will need write permission for numpy-wheels in order to trigger wheel builds. * Python(s) from [python.org](https://python.org) or linux distro. * cython (pip) * virtualenv (pip) * Paver (pip) * pandoc [pandoc.org](https://www.pandoc.org) or linux distro. * numpy-wheels <https://github.com/MacPython/numpy-wheels> (clone) #### Building docs Building the documentation requires a number of LaTeX `.sty` files. Install them all to avoid aggravation. * Sphinx (pip) * numpydoc (pip) * Matplotlib * Texlive (or MikTeX on Windows) #### Uploading to PyPI * terryfy <https://github.com/MacPython/terryfy> (clone). * beautifulsoup4 (pip) * delocate (pip) * auditwheel (pip) * twine (pip) #### Generating author/pr lists You will need a personal access token <https://help.github.com/articles/creating-a-personal-access-token-for-the-command-line/> so that scripts can access the github NumPy repository. * gitpython (pip) * pygithub (pip) #### Virtualenv Virtualenv is a very useful tool to keep several versions of packages around. It is also used in the Paver script to build the docs. ### What is released #### Wheels We currently support Python 3.8-3.10 on Windows, OSX, and Linux * Windows: 32-bit and 64-bit wheels built using Appveyor; * OSX: x86_64 OSX wheels built using travis-ci; * Linux: 32-bit and 64-bit Manylinux1 wheels built using travis-ci. See the [numpy wheels](https://github.com/MacPython/numpy-wheels) building repository for more detail. #### Other * Release Notes * Changelog #### Source distribution We build source releases in both .zip and .tar.gz formats. ### Release process #### Agree on a release schedule A typical release schedule is one beta, two release candidates and a final release. It’s best to discuss the timing on the mailing list first, in order for people to get their commits in on time, get doc wiki edits merged, etc.
After a date is set, create a new maintenance/x.y.z branch, add new empty release notes for the next version in the main branch and update the Trac Milestones. #### Make sure current branch builds a package correctly ``` git clean -fxd python setup.py bdist_wheel python setup.py sdist ``` For details of the build process itself, it is best to read the pavement.py script. Note The following steps are repeated for the beta(s), release candidate(s) and the final release. #### Check deprecations Before the release branch is made, it should be checked that all deprecated code that should be removed is actually removed, and that all new deprecations state, in the docstring or deprecation warning, at what version the code will be removed. #### Check the C API version number The C API version needs to be tracked in three places * numpy/core/setup_common.py * numpy/core/code_generators/cversions.txt * numpy/core/include/numpy/numpyconfig.h There are three steps to the process. 1. If the API has changed, increment the C_API_VERSION in setup_common.py. The API is unchanged only if any code compiled against the current API will be backward compatible with the last released NumPy version. Any changes to C structures or additions to the public interface will make the new API not backward compatible. 2. If the C_API_VERSION in the first step has changed, or if the hash of the API has changed, the cversions.txt file needs to be updated. To check the hash, run the script numpy/core/cversions.py and note the API hash that is printed. If that hash does not match the last hash in numpy/core/code_generators/cversions.txt the hash has changed. Using both the appropriate C_API_VERSION and hash, add a new entry to cversions.txt. If the API version was not changed, but the hash differs, you will need to comment out the previous entry for that API version. For instance, in NumPy 1.9 annotations were added, which changed the hash, but the API was the same as in 1.8.
The hash serves as a check for API changes, but it is not definitive. If steps 1 and 2 are done correctly, compiling the release should not give a warning “API mismatch detected at the beginning of the build”. 3. The numpy/core/include/numpy/numpyconfig.h will need a new NPY_X_Y_API_VERSION macro, where X and Y are the major and minor version numbers of the release. The value given to that macro only needs to be increased from the previous version if some of the functions or macros in the include files were deprecated. The C ABI version number in numpy/core/setup_common.py should only be updated for a major release. #### Check the release notes Use [towncrier](https://pypi.org/project/towncrier/) to build the release notes and commit the changes. This will remove all the fragments from `doc/release/upcoming_changes` and add `doc/release/<version>-note.rst`: ``` towncrier build --version "<version>" git commit -m "Create release note" ``` Check that the release notes are up-to-date. Update the release notes with a Highlights section. Mention some of the following: * major new features * deprecated and removed features * supported Python versions * for SciPy, supported NumPy version(s) * outlook for the near future #### Update the release status and create a release “tag” Identify the commit hash of the release, e.g. 1b2e1d63ff: ``` git co 1b2e1d63ff # gives warning about detached head ``` First, change/check the following variables in `pavement.py` depending on the release version: ``` RELEASE_NOTES = 'doc/release/1.7.0-notes.rst' LOG_START = 'v1.6.0' LOG_END = 'maintenance/1.7.x' ``` Make any other needed changes.
When you are ready to release, do the following changes: ``` diff --git a/setup.py b/setup.py index b1f53e3..8b36dbe 100755 --- a/setup.py +++ b/setup.py @@ -57,7 +57,7 @@ PLATFORMS = ["Windows", "Linux", "Solaris", "Mac OS- MAJOR = 1 MINOR = 7 MICRO = 0 -ISRELEASED = False +ISRELEASED = True VERSION = '%d.%d.%drc1' % (MAJOR, MINOR, MICRO) # Return the git revision as a string ``` And make sure the `VERSION` variable is set properly. Now you can make the release commit and tag. We recommend you don’t push the commit or tag immediately, just in case you need to do more cleanup. We prefer to defer the push of the tag until we’re confident this is the exact form of the released code (see: [Push the release tag and commit](#push-tag-and-commit)): ``` git commit -s -m "REL: Release." setup.py git tag -s <version> ``` The `-s` flag makes a PGP (usually GPG) signed tag. Please do sign the release tags. The release tag should have the release number in the annotation (tag message). Unfortunately, the name of a tag can be changed without breaking the signature, but the contents of the message cannot. See: <https://github.com/scipy/scipy/issues/4919> for a discussion of signing release tags, and <https://keyring.debian.org/creating-key.html> for instructions on creating a GPG key if you do not have one. To make your key more readily identifiable as you, consider sending your key to public keyservers, with a command such as: ``` gpg --send-keys <yourkeyid> ``` #### Update the version of the main branch Increment the release number in setup.py. Release candidates should have “rc1” (or “rc2”, “rcN”) appended to the X.Y.Z format. Also create a new version hash in cversions.txt and a corresponding version define NPY_x_y_API_VERSION in numpyconfig.h #### Trigger the wheel builds See the [numpy wheels](https://github.com/MacPython/numpy-wheels) repository. In that repository edit the files: * `azure/posix.yml` * `azure/windows.yml`.
In both cases, set the `BUILD_COMMIT` variable to the current release tag - e.g. `v1.19.0`: ``` $ gvim azure/posix.yml azure/windows.yml $ git commit -a $ git push upstream HEAD ``` Make sure that the release tag has been pushed. Trigger a build by pushing a commit of your edits to the repository. Note that you can do this on a branch, but it must be pushed upstream to the `MacPython/numpy-wheels` repository to trigger uploads since only that repo has the appropriate tokens to allow uploads. The wheels, once built, appear at <https://anaconda.org/multibuild-wheels-staging/numpy> #### Make the release Build the changelog and notes for upload with: ``` paver write_release ``` #### Build and archive documentation Do: ``` cd doc/ make dist ``` to check that the documentation is in a buildable state. Then, after tagging, create an archive of the documentation in the numpy/doc repo: ``` # This checks out github.com/numpy/doc and adds (``git add``) the # documentation to the checked out repo. make merge-doc # Now edit the ``index.html`` file in the repo to reflect the new content. # If the documentation is for a non-patch release (e.g. 1.19 -> 1.20), # make sure to update the ``stable`` symlink to point to the new directory. ln -sfn <latest_stable_directory> stable # Commit the changes git -C build/merge commit -am "Add documentation for <version>" # Push to numpy/doc repo git -C build/merge push ``` #### Update PyPI The wheels and source should be uploaded to PyPI. You should upload the wheels first, and the source formats last, to make sure that pip users don’t accidentally get a source install when they were expecting a binary wheel. You can do this automatically using the `wheel-uploader` script from <https://github.com/MacPython/terryfy>. Here is the recommended incantation for downloading all the Windows, Manylinux, OSX wheels and uploading to PyPI.
``` NPY_WHLS=~/wheelhouse # local directory to cache wheel downloads CDN_URL=https://anaconda.org/multibuild-wheels-staging/numpy/files wheel-uploader -u $CDN_URL -w $NPY_WHLS -v -s -t win numpy 1.11.1rc1 wheel-uploader -u $CDN_URL -w warehouse -v -s -t macosx numpy 1.11.1rc1 wheel-uploader -u $CDN_URL -w warehouse -v -s -t manylinux1 numpy 1.11.1rc1 ``` The `-v` flag gives verbose feedback, and `-s` causes the script to sign the wheels with your GPG key before upload. Don’t forget to upload the wheels before the source tarball, so there is no period during which people switch from an expected binary install to a source install from PyPI. There are two ways to update the source release on PyPI; the first one is: ``` $ git clean -fxd # to be safe $ python setup.py sdist --formats=gztar,zip # to check # python setup.py sdist --formats=gztar,zip upload --sign ``` This will ask for your PGP key passphrase, in order to sign the built source packages. The second way is to upload the PKG_INFO file inside the sdist dir in the web interface of PyPI. The source tarball can also be uploaded through this interface. #### Push the release tag and commit Finally, now that you are confident this tag correctly defines the source code that you released, you can push the tag and release commit up to github: ``` git push # Push release commit git push upstream <version> # Push tag named <version> ``` where `upstream` points to the main <https://github.com/numpy/numpy.git> repository. #### Update scipy.org A release announcement with a link to the download site should be placed in the sidebar of the front page of scipy.org. The update should be made as a PR at <https://github.com/scipy/scipy.org>. The file that needs modification is `www/index.rst`. Search for `News`.
#### Update oldest-supported-numpy If this release is the first one to support a new Python version, or the first to provide wheels for a new platform or PyPy version, the version pinnings in <https://github.com/scipy/oldest-supported-numpy> should be updated. Either submit a PR with changes to `setup.cfg` there, or open an issue with info on needed changes. #### Announce to the lists The release should be announced on the mailing lists of NumPy and SciPy, to python-announce, and possibly also those of Matplotlib, IPython and/or Pygame. During the beta/RC phase, an explicit request for testing the binaries with several other libraries (SciPy/Matplotlib/Pygame) should be posted on the mailing list. #### Announce to Linux Weekly News Email the editor of LWN to let them know of the release. Directions at: <https://lwn.net/op/FAQ.lwn#contact> #### After the final release After the final release is announced, a few administrative tasks are left to be done: * Forward port changes in the release branch to release notes and release scripts, if any, to the main branch. * Update the Milestones in Trac. Step-by-Step Directions ----------------------- This file contains a walkthrough of the NumPy 1.21.0 release on Linux, modified for building on Azure and uploading to anaconda.org. The commands can be copied into the command line, but be sure to replace 1.21.0 by the correct version. This should be read together with the general directions in `releasing`. ### Facility Preparation Before beginning to make a release, use the `*_requirements.txt` files to ensure that you have the needed software. Most software can be installed with pip, but some will require apt-get, dnf, or whatever your system uses for software. Note that at this time the documentation cannot be built with Python 3.10; for that use 3.8-3.9 instead. You will also need a GitHub personal access token (PAT) to push the documentation. There are a few ways to streamline things.
* Git can be set up to use a keyring to store your GitHub personal access token. Search online for the details. * You can use the `keyring` app to store the PyPI password for twine. See the online twine documentation for details. ### Release Preparation #### Backport Pull Requests Changes that have been marked for this release must be backported to the maintenance/1.21.x branch. #### Update Release documentation Four documents usually need to be updated or created before making a release: * The changelog * The release-notes * The `.mailmap` file * The `doc/source/release.rst` file These changes should be made as an ordinary PR against the maintenance branch. After release, all files except `doc/source/release.rst` will need to be forward ported to the main branch. ##### Generate the changelog The changelog is generated using the changelog tool: ``` $ python tools/changelog.py $GITHUB v1.20.0..maintenance/1.21.x > doc/changelog/1.21.0-changelog.rst ``` where `GITHUB` contains your GitHub access token. The text will need to be checked for non-standard contributor names and dependabot entries removed. It is also a good idea to remove any links that may be present in the PR titles as they don’t translate well to markdown; replace them with monospaced text. The non-standard contributor names should be fixed by updating the `.mailmap` file, which is a lot of work. It is best to make several trial runs before reaching this point and ping the malefactors using a GitHub issue to get the needed information. ##### Finish the release notes If this is the first release in a series, the release notes are generated; see `doc/release/upcoming_changes/README.rst` for how to do this. Generating the release notes will also delete all the news fragment files in `doc/release/upcoming_changes/`. The generated release notes will always need some fixups, the introduction will need to be written, and significant changes should be called out.
For patch releases the changelog text may also be appended, but not for the initial release, as it is too long. Check previous release notes to see how this is done. Note that the `:orphan:` markup at the top, if present, will need changing to `.. currentmodule:: numpy`, and the `doc/source/release.rst` index file will need updating.

##### Check the pavement.py file

Check that the pavement.py file points to the correct release notes. It should have been updated after the last release, but if not, fix it now:

```
$ gvim pavement.py
```

### Release Walkthrough

Note that in the code snippets below, `upstream` refers to the root repository on GitHub and `origin` to its fork in your personal GitHub repositories. You may need to make adjustments if you have not forked the repository but simply cloned it locally. You can also edit `.git/config` and add `upstream` if it isn’t already present.

#### Prepare the release commit

Checkout the branch for the release, make sure it is up to date, and clean the repository:

```
$ git checkout maintenance/1.21.x
$ git pull upstream maintenance/1.21.x
$ git submodule update
$ git clean -xdfq
```

Sanity check:

```
$ python3 runtests.py -m "full"
```

Tag the release and push the tag. This requires write permission for the numpy repository:

```
$ git tag -a -s v1.21.0 -m"NumPy 1.21.0 release"
$ git push upstream v1.21.0
```

#### Build source releases

Paver is used to build the source releases. It will create the `release` and `release/installers` directories and put the `*.zip` and `*.tar.gz` source releases in the latter.

```
$ paver sdist # sdist will do a git clean -xdfq, so we omit that
```

#### Build wheels via MacPython/numpy-wheels

Trigger the wheels build by pointing the numpy-wheels repository at this commit. This can take up to an hour. The numpy-wheels repository is cloned from <https://github.com/MacPython/numpy-wheels>.
If this is the first release in a series, start with a pull, as the repo may have been accessed and changed by someone else, then create a new branch for the series. If the branch already exists, skip this:

```
$ cd ../numpy-wheels
$ git checkout main
$ git pull upstream main
$ git branch v1.21.x
```

Checkout the new branch and edit the `azure-pipelines.yml` and `.travis.yml` files to make sure they have the correct version, and put the commit hash of the `REL` commit created above in the `BUILD_COMMIT` variable. The `azure/posix.yml` and `.travis.yml` files may also need the Cython versions updated to keep up with Python releases, but generally just do:

```
$ git checkout v1.21.x
$ gvim azure-pipelines.yml .travis.yml
$ git commit -a -m"NumPy 1.21.0 release."
$ git push upstream HEAD
```

Now wait. If you get nervous at the amount of time taken – the builds can take a while – you can check the build progress by following the links provided at <https://github.com/MacPython/numpy-wheels> to check the build status. Check if all the needed wheels have been built and uploaded to the staging repository before proceeding. Note that sometimes builds, like tests, fail for unrelated reasons and you will need to rerun them. You will need to be logged in under ‘numpy’ to do this on azure.

#### Build wheels via cibuildwheel

Tagging the build at the beginning of this process will trigger a wheel build via cibuildwheel and upload wheels and an sdist to the staging area. The CI run on github actions (for all x86-based and macOS arm64 wheels) takes about 1 1/4 hours. The CI run on travis (for aarch64) takes less time. If you wish to manually trigger a wheel build, you can do so:

* On github actions -> [Wheel builder](https://github.com/numpy/numpy/actions/workflows/wheels.yml) there is a “Run workflow” button; click on it and choose the tag to build.
* On [travis](https://app.travis-ci.com/github/numpy/numpy) there is a “More Options” button; click on it and choose a branch to build.
  There does not appear to be an option to build a tag.

#### Download wheels

When the wheels have all been successfully built and staged, download them from the Anaconda staging directory using the `tools/download-wheels.py` script:

```
$ cd ../numpy
$ python3 tools/download-wheels.py 1.21.0
```

#### Generate the README files

This needs to be done after all installers are downloaded, but before the pavement file is updated for continued development:

```
$ paver write_release
```

#### Reset the maintenance branch into a development state (skip for prereleases)

Create release notes for the next release and edit them to set the version. These notes will be a skeleton and have little content:

```
$ cp doc/source/release/template.rst doc/source/release/1.21.1-notes.rst
$ gvim doc/source/release/1.21.1-notes.rst
$ git add doc/source/release/1.21.1-notes.rst
```

Add the new release notes to the documentation release list and update the `RELEASE_NOTES` variable in `pavement.py`:

```
$ gvim doc/source/release.rst pavement.py
```

Commit the result:

```
$ git commit -a -m"REL: prepare 1.21.x for further development"
$ git push upstream HEAD
```

#### Upload to PyPI

Upload to PyPI using `twine`. A recent version of `twine` is needed after recent PyPI changes; version `3.4.1` was used here:

```
$ cd ../numpy
$ twine upload release/installers/*.whl
$ twine upload release/installers/numpy-1.21.0.zip # Upload last.
```

If one of the commands breaks in the middle, you may need to selectively upload the remaining files because PyPI does not allow the same file to be uploaded twice. The source file should be uploaded last to avoid synchronization problems that might occur if pip users access the files while this is in process, causing pip to build from source rather than downloading a binary wheel. PyPI only allows a single source distribution; here we have chosen the zip archive.
#### Upload files to github

Go to <https://github.com/numpy/numpy/releases>; there should be a `v1.21.0 tag`. Click on it and hit the edit button for that tag. There are two ways to add files: using an editable text window, and as binary uploads. Start by editing the `release/README.md` that is translated from the rst version using pandoc. Things that will need fixing: PR lines from the changelog, if included, are wrapped and need unwrapping, and links should be changed to monospaced text. Then copy the contents to the clipboard and paste them into the text window. It may take several tries to get it to look right. Then

* Upload `release/installers/numpy-1.21.0.tar.gz` as a binary file.
* Upload `release/installers/numpy-1.21.0.zip` as a binary file.
* Upload `release/README.rst` as a binary file.
* Upload `doc/changelog/1.21.0-changelog.rst` as a binary file.
* Check the pre-release button if this is a pre-release.
* Hit the `{Publish,Update} release` button at the bottom.

#### Upload documents to numpy.org (skip for prereleases)

Note

You will need a GitHub personal access token to push the update.

This step is only needed for final releases and can be skipped for pre-releases and most patch releases. `make merge-doc` clones the `numpy/doc` repo into `doc/build/merge` and updates it with the new documentation. If you already have numpy installed, you need to locally install the new NumPy version so that document generation will use the correct NumPy. This is because `make dist` does not correctly set up the path.
Note that Python 3.10 cannot be used for generating the docs as it has no `easy_install`; use 3.9 or 3.8 instead:

```
$ pushd doc
$ make dist
$ make merge-doc
$ pushd build/merge
```

If the release series is a new one, you will need to add a new section to the `doc/build/merge/index.html` front page just after the “insert here” comment:

```
$ gvim index.html +/'insert here'
```

Further, update the version-switcher json file to add the new release and update the version marked `(stable)`:

```
$ gvim _static/versions.json
```

Otherwise, only the `zip` and `pdf` links should be updated with the new tag name:

```
$ gvim index.html +/'tag v1.21'
```

You can “test run” the new documentation in a browser to make sure the links work:

```
$ firefox index.html # or google-chrome, etc.
```

Update the stable link:

```
$ ln -sfn 1.21 stable
$ ls -l # check the link
```

Once everything seems satisfactory, update, commit and upload the changes:

```
$ python3 update.py
$ git commit -a -m"Add documentation for v1.21.0"
$ git push
$ popd
$ popd
```

#### Announce the release on numpy.org (skip for prereleases)

This assumes that you have forked <https://github.com/numpy/numpy.org>:

```
$ cd ../numpy.org
$ git checkout master
$ git pull upstream master
$ git checkout -b announce-numpy-1.21.0
$ gvim content/en/news.md
```

* For all releases, go to the bottom of the page and add a one line link. Look to the previous links for example.
* For the `*.0` release in a cycle, add a new section at the top with a short description of the new features and point the news link to it.

Commit and push:

```
$ git commit -a -m"announce the NumPy 1.21.0 release"
$ git push origin HEAD
```

Go to your GitHub fork and make a pull request.

#### Announce to mailing lists

The release should be announced on the numpy-discussion, scipy-devel, scipy-user, and python-announce-list mailing lists. Look at previous announcements for the basic template.
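The version-switcher edit mentioned above (adding the new release and moving the `(stable)` marker) can be sketched in Python. The entry layout used here (`name`/`version`/`url` keys) is an assumption for illustration only; check the actual `_static/versions.json` before scripting against it:

```python
import json

# Hypothetical starting content -- the real file layout may differ.
switcher = [
    {"name": "1.20 (stable)", "version": "1.20", "url": "https://numpy.org/doc/1.20/"},
    {"name": "1.19", "version": "1.19", "url": "https://numpy.org/doc/1.19/"},
]

# Demote the old stable entry, then prepend the new release as stable.
for entry in switcher:
    entry["name"] = entry["name"].replace(" (stable)", "")
switcher.insert(0, {"name": "1.21 (stable)", "version": "1.21",
                    "url": "https://numpy.org/doc/1.21/"})

print(json.dumps(switcher, indent=2))
```

In practice the maintainers edit the file by hand with `gvim`; the point is only that exactly one entry should carry the `(stable)` label afterwards.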
The contributor and PR lists are the same as generated for the release notes above. If you crosspost, make sure that python-announce-list is BCC so that replies will not be sent to that list.

#### Post-Release Tasks (skip for prereleases)

Checkout main and forward port the documentation changes:

```
$ git checkout -b post-1.21.0-release-update
$ git checkout maintenance/1.21.x doc/source/release/1.21.0-notes.rst
$ git checkout maintenance/1.21.x doc/changelog/1.21.0-changelog.rst
$ git checkout maintenance/1.21.x .mailmap # only if updated for release.
$ gvim doc/source/release.rst # Add link to new notes
$ git add doc/changelog/1.21.0-changelog.rst doc/source/release/1.21.0-notes.rst
$ git status # check status before commit
$ git commit -a -m"REL: Update main after 1.21.0 release."
$ git push origin HEAD
```

Go to GitHub and make a PR.

© 2005–2022 NumPy Developers
Licensed under the 3-clause BSD License.
<https://numpy.org/doc/1.23/dev/releasing.html>

How to contribute to the NumPy documentation
============================================

This guide will help you decide what to contribute and how to submit it to the official NumPy documentation.
Documentation team meetings
---------------------------

The NumPy community has set a firm goal of improving its documentation. We hold regular documentation meetings on Zoom (dates are announced on the [numpy-discussion mailing list](https://mail.python.org/mailman/listinfo/numpy-discussion)), and everyone is welcome. Reach out if you have questions or need someone to guide you through your first steps – we’re happy to help. Minutes are taken [on hackmd.io](https://hackmd.io/oB_boakvRqKR-_2jRV-Qjg) and stored in the [NumPy Archive repository](https://github.com/numpy/archive).

What’s needed
-------------

The [NumPy Documentation](../index#numpy-docs-mainpage) has the details covered. API reference documentation is generated directly from [docstrings](https://www.python.org/dev/peps/pep-0257/) in the code when the documentation is [built](howto_build_docs#howto-build-docs). Although we have mostly complete reference documentation for each function and class exposed to users, there is a lack of usage examples for some of them. What we lack are docs with broader scope – tutorials, how-tos, and explanations. Reporting defects is another way to contribute. We discuss both.

Contributing fixes
------------------

We’re eager to hear about and fix doc defects. But to attack the biggest problems we end up having to defer or overlook some bug reports. Here are the best defects to go after.

Top priority goes to **technical inaccuracies** – a docstring missing a parameter, a faulty description of a function/parameter/method, and so on. Other “structural” defects like broken links also get priority. All these fixes are easy to confirm and put in place. You can submit a [pull request (PR)](https://numpy.org/devdocs/dev/index.html#devindex) with the fix, if you know how to do that; otherwise please [open an issue](https://github.com/numpy/numpy/issues).

**Typos and misspellings** fall on a lower rung; we welcome hearing about them but may not be able to fix them promptly.
These too can be handled as pull requests or issues.

Obvious **wording** mistakes (like leaving out a “not”) fall into the typo category, but other rewordings – even for grammar – require a judgment call, which raises the bar. Test the waters by first presenting the fix as an issue.

Some functions/objects, like `numpy.ndarray.transpose` and `numpy.array`, that are defined in C-extension modules have their docstrings defined separately in [_add_newdocs.py](https://github.com/numpy/numpy/blob/main/numpy/core/_add_newdocs.py).

Contributing new pages
----------------------

Your frustrations using our documents are our best guide to what needs fixing. If you write a missing doc you join the front line of open source, but it’s a meaningful contribution just to let us know what’s missing. If you want to compose a doc, run your thoughts by the [mailing list](https://mail.python.org/mailman/listinfo/numpy-discussion) for further ideas and feedback. If you want to alert us to a gap, [open an issue](https://github.com/numpy/numpy/issues). See [this issue](https://github.com/numpy/numpy/issues/15760) for an example.

If you’re looking for subjects, our formal roadmap for documentation is a *NumPy Enhancement Proposal (NEP)*, [NEP 44 - Restructuring the NumPy Documentation](https://www.numpy.org/neps/nep-0044-restructuring-numpy-docs). It identifies areas where our docs need help and lists several additions we’d like to see, including [Jupyter notebooks](#numpy-tutorials).

### Documentation framework

There are formulas for writing useful documents, and four formulas cover nearly everything. There are four formulas because there are four categories of document – `tutorial`, `how-to guide`, `explanation`, and `reference`. The insight that docs divide up this way belongs to <NAME> and his [Diátaxis Framework](https://diataxis.fr/). When you begin a document or propose one, have in mind which of these types it will be.
### NumPy tutorials

In addition to the documentation that is part of the NumPy source tree, you can submit content in Jupyter Notebook format to the [NumPy Tutorials](https://numpy.org/numpy-tutorials) page. This set of tutorials and educational materials is meant to provide high-quality resources by the NumPy project, both for self-learning and for teaching classes with. These resources are developed in a separate GitHub repository, [numpy-tutorials](https://github.com/numpy/numpy-tutorials), where you can check out existing notebooks, open issues to suggest new topics, or submit your own tutorials as pull requests.

### More on contributing

Don’t worry if English is not your first language, or if you can only come up with a rough draft. Open source is a community effort. Do your best – we’ll help fix issues.

Images and real-life data make text more engaging and powerful, but be sure what you use is appropriately licensed and available. Here again, even a rough idea for artwork can be polished by others.

For now, the only data formats accepted by NumPy are those also used by other Python scientific libraries like pandas, SciPy, or Matplotlib. We’re developing a package to accept more formats; contact us for details.

NumPy documentation is kept in the source code tree. To get your document into the docbase you must download the tree, [build it](howto_build_docs#howto-build-docs), and submit a pull request. If GitHub and pull requests are new to you, check our [Contributor Guide](index#devindex).

Our markup language is reStructuredText (rST), which is more elaborate than Markdown. Sphinx, the tool many Python projects use to build and link project documentation, converts the rST into HTML and other formats.
For more on rST, see the [Quick reStructuredText Guide](https://docutils.sourceforge.io/docs/user/rst/quickref.html) or the [reStructuredText Primer](http://www.sphinx-doc.org/en/stable/usage/restructuredtext/basics.html).

Contributing indirectly
-----------------------

If you run across outside material that would be a useful addition to the NumPy docs, let us know by [opening an issue](https://github.com/numpy/numpy/issues).

You don’t have to contribute here to contribute to NumPy. You’ve contributed if you write a tutorial on your blog, create a YouTube video, or answer questions on Stack Overflow and other sites.

Documentation style
-------------------

### User documentation

* In general, we follow the [Google developer documentation style guide](https://developers.google.com/style) for the User Guide.
* NumPy style governs cases where:
  + Google has no guidance, or
  + We prefer not to use the Google style

  Our current rules:

  + We pluralize *index* as *indices* rather than [indexes](https://developers.google.com/style/word-list#letter-i), following the precedent of [`numpy.indices`](../reference/generated/numpy.indices#numpy.indices "numpy.indices").
  + For consistency we also pluralize *matrix* as *matrices*.
* Grammatical issues inadequately addressed by the NumPy or Google rules are decided by the section on “Grammar and Usage” in the most recent edition of the [Chicago Manual of Style](https://en.wikipedia.org/wiki/The_Chicago_Manual_of_Style).
* We welcome being [alerted](https://github.com/numpy/numpy/issues) to cases we should add to the NumPy style rules.

### Docstrings

When using [Sphinx](http://www.sphinx-doc.org/) in combination with the NumPy conventions, you should use the `numpydoc` extension so that your docstrings will be handled correctly. For example, Sphinx will extract the `Parameters` section from your docstring and convert it into a field list.
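For instance, a docstring following the numpydoc conventions might look like the sketch below. The function itself is an invented example, shown only to illustrate the section layout that `numpydoc` parses:

```python
def clip_fraction(x, lo=0.0, hi=1.0):
    """Clamp a value into the interval [lo, hi].

    An invented example, shown only to illustrate the numpydoc layout
    that the ``numpydoc`` extension parses into field lists.

    Parameters
    ----------
    x : float
        The value to clamp.
    lo, hi : float, optional
        The lower and upper bounds of the interval.

    Returns
    -------
    float
        ``x`` limited to the interval ``[lo, hi]``.

    Examples
    --------
    >>> clip_fraction(1.7)
    1.0
    """
    return max(lo, min(hi, x))
```

The `Parameters`, `Returns`, and `Examples` headers, underlined with dashes, are exactly the section markers that plain Sphinx would reject but `numpydoc` understands.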
Using `numpydoc` will also avoid the reStructuredText errors produced by plain Sphinx when it encounters NumPy docstring conventions like section headers (e.g. `-------------`) that Sphinx does not expect to find in docstrings. It is available from:

* [numpydoc on PyPI](https://pypi.python.org/pypi/numpydoc)
* [numpydoc on GitHub](https://github.com/numpy/numpydoc/)

Note that for documentation within NumPy, it is not necessary to do `import numpy as np` at the beginning of an example.

Please use the `numpydoc` [formatting standard](https://numpydoc.readthedocs.io/en/latest/format.html#format "(in numpydoc v1.5.dev0)") as shown in their [example](https://numpydoc.readthedocs.io/en/latest/example.html#example "(in numpydoc v1.5.dev0)").

### Documenting C/C++ Code

NumPy uses [Doxygen](https://www.doxygen.nl/index.html) to parse specially-formatted C/C++ comment blocks. This generates XML files, which are converted by [Breathe](https://breathe.readthedocs.io/en/latest/) into RST, which is used by Sphinx.

**It takes three steps to complete the documentation process**:

#### 1. Writing the comment blocks

Although no commenting style has been formally adopted, the Javadoc style is preferred over the others because of its similarity to the existing non-indexed comment blocks.

Note

Please see [“Documenting the code”](https://www.doxygen.nl/manual/docblocks.html).

**This is what Javadoc style looks like**:

```
/**
 * This is a simple brief.
 *
 * And the details go here.
 * Multiple lines are welcome.
 *
 * @param num leave a comment for parameter num.
 * @param str leave a comment for the second parameter.
 * @return leave a comment for the returned value.
 */
int doxy_javadoc_example(int num, const char *str);
```

**And here is how it is rendered**:

int doxy_javadoc_example(int num, const char *str)

This is a simple brief.

And the details go here. Multiple lines are welcome.

Parameters

* **num** – leave a comment for parameter num.
* **str** – leave a comment for the second parameter.

Returns

leave a comment for the returned value.

**For line comments, you can use a triple forward slash. For example**:

```
/**
 * Template to represent limbo numbers.
 *
 * Specializations for integer types that are part of nowhere.
 * It doesn't support any real types.
 *
 * @param Tp Type of the integer. Required to be an integer type.
 * @param N Number of elements.
 */
template<typename Tp, std::size_t N>
class DoxyLimbo {
 public:
    /// Default constructor. Initialize nothing.
    DoxyLimbo();
    /// Set default behavior for copying the limbo.
    DoxyLimbo(const DoxyLimbo<Tp, N> &l);
    /// Returns the raw data for the limbo.
    const Tp *data();
 protected:
    Tp p_data[N]; ///< Example for inline comment.
};
```

**And here is how it is rendered**:

template<typename Tp, std::size_t N> class DoxyLimbo

Template to represent limbo numbers.

Specializations for integer types that are part of nowhere. It doesn’t support any real types.

param Tp
: Type of the integer. Required to be an integer type.

param N
: Number of elements.

#### Public Functions

DoxyLimbo()

Default constructor. Initialize nothing.

DoxyLimbo(const [DoxyLimbo](#_CPPv4N9DoxyLimbo9DoxyLimboERK9DoxyLimboI2Tp1NE "DoxyLimbo::DoxyLimbo")<[Tp](#_CPPv4I0_NSt6size_tEE9DoxyLimbo "DoxyLimbo::Tp"), [N](#_CPPv4I0_NSt6size_tEE9DoxyLimbo "DoxyLimbo::N")> &l)

Set default behavior for copying the limbo.

const [Tp](#_CPPv4I0_NSt6size_tEE9DoxyLimbo "DoxyLimbo::Tp") *data()

Returns the raw data for the limbo.

#### Protected Attributes

[Tp](#_CPPv4I0_NSt6size_tEE9DoxyLimbo "DoxyLimbo::Tp") p_data[[N](#_CPPv4I0_NSt6size_tEE9DoxyLimbo "DoxyLimbo::N")]

Example for inline comment.

##### Common Doxygen Tags:

Note

For more tags/commands, please take a look at <https://www.doxygen.nl/manual/commands.html>

`@brief`

Starts a paragraph that serves as a brief description.
By default the first sentence of the documentation block is automatically treated as a brief description, since the option [JAVADOC_AUTOBRIEF](https://www.doxygen.nl/manual/config.html#cfg_javadoc_autobrief) is enabled in the doxygen configuration.

`@details`

Just like `@brief` starts a brief description, `@details` starts the detailed description. You can also start a new paragraph (blank line); then the `@details` command is not needed.

`@param`

Starts a parameter description for a function parameter with name <parameter-name>, followed by a description of the parameter. The existence of the parameter is checked, and a warning is given if the documentation of this (or any other) parameter is missing or not present in the function declaration or definition.

`@return`

Starts a return value description for a function. Multiple adjacent `@return` commands will be joined into a single paragraph. The `@return` description ends when a blank line or some other sectioning command is encountered.

`@code/@endcode`

Starts/ends a block of code. A code block is treated differently from ordinary text. It is interpreted as source code.

`@rst/@endrst`

Starts/ends a block of reST markup.

###### Example

**Take a look at the following example**:

```
/**
 * A comment block contains reST markup.
 * @rst
 * .. note::
 *
 *   Thanks to Breathe_, we were able to bring it to Doxygen_
 *
 * Some code example::
 *
 *   int example(int x) {
 *       return x * 2;
 *   }
 * @endrst
 */
void doxy_reST_example(void);
```

**And here is how it is rendered**:

void doxy_reST_example(void)

A comment block contains reST markup.

Some code example:

```
int example(int x) {
    return x * 2;
}
```

Note

Thanks to [Breathe](https://breathe.readthedocs.io/en/latest/), we were able to bring it to [Doxygen](https://www.doxygen.nl/index.html).

#### 2. Feeding Doxygen

Not all header files are collected automatically. You have to add the desired C/C++ header paths within the sub-config files of Doxygen.
Sub-config files have the unique name `.doxyfile`, which you can usually find near directories that contain documented headers. You need to create a new config file if there is not one located in a path close (within two directory levels) to the headers you want to add.

Sub-config files can accept any of the [Doxygen](https://www.doxygen.nl/index.html) [configuration options](https://www.doxygen.nl/manual/config.html), but do not override or re-initialize any configuration option; rather, only use the concatenation operator “+=”. For example:

```
# to specify certain headers
INPUT += @CUR_DIR/header1.h \
         @CUR_DIR/header2.h
# to add all headers in certain path
INPUT += @CUR_DIR/to/headers
# to define certain macros
PREDEFINED += C_MACRO(X)=X
# to enable certain branches
PREDEFINED += NPY_HAVE_FEATURE \
              NPY_HAVE_FEATURE2
```

Note

@CUR_DIR is a template constant that returns the current directory path of the sub-config file.

#### 3. Inclusion directives

[Breathe](https://breathe.readthedocs.io/en/latest/) provides a wide range of custom directives to allow converting the documents generated by [Doxygen](https://www.doxygen.nl/index.html) into reST files.

Note

For more information, please check out “[Directives & Config Variables](https://breathe.readthedocs.io/en/latest/directives.html)”.

##### Common directives:

`doxygenfunction`

This directive generates the appropriate output for a single function. The function name is required to be unique in the project.

```
.. doxygenfunction:: <function name>
   :outline:
   :no-link:
```

Checkout the [example](https://breathe.readthedocs.io/en/latest/function.html#function-example) to see it in action.

`doxygenclass`

This directive generates the appropriate output for a single class. It takes the standard project, path, outline and no-link options and additionally the members, protected-members, private-members, undoc-members, membergroups and members-only options:

```
.. doxygenclass:: <class name>
   :members: [...]
   :protected-members:
   :private-members:
   :undoc-members:
   :membergroups: ...
   :members-only:
   :outline:
   :no-link:
```

Checkout the [doxygenclass documentation](https://breathe.readthedocs.io/en/latest/class.html#class-example) for more details and to see it in action.

`doxygennamespace`

This directive generates the appropriate output for the contents of a namespace. It takes the standard project, path, outline and no-link options and additionally the content-only, members, protected-members, private-members and undoc-members options. To reference a nested namespace, the full namespaced path must be provided, e.g. foo::bar for the bar namespace inside the foo namespace.

```
.. doxygennamespace:: <namespace>
   :content-only:
   :outline:
   :members:
   :protected-members:
   :private-members:
   :undoc-members:
   :no-link:
```

Checkout the [doxygennamespace documentation](https://breathe.readthedocs.io/en/latest/namespace.html#namespace-example) for more details and to see it in action.

`doxygengroup`

This directive generates the appropriate output for the contents of a doxygen group. A doxygen group can be declared with specific doxygen markup in the source comments, as covered in the doxygen [grouping documentation](https://www.doxygen.nl/manual/grouping.html).

It takes the standard project, path, outline and no-link options and additionally the content-only, members, protected-members, private-members and undoc-members options.

```
.. doxygengroup:: <group name>
   :content-only:
   :outline:
   :members:
   :protected-members:
   :private-members:
   :undoc-members:
   :no-link:
   :inner:
```

Checkout the [doxygengroup documentation](https://breathe.readthedocs.io/en/latest/group.html#group-example) for more details and to see it in action.

Documentation reading
---------------------

* The leading organization of technical writers, [Write the Docs](https://www.writethedocs.org/), holds conferences, hosts learning resources, and runs a Slack channel.
* “Every engineer is also a writer,” says Google’s [collection of technical writing resources](https://developers.google.com/tech-writing), which includes free online courses for developers in planning and writing documents.
* [Software Carpentry’s](https://software-carpentry.org/lessons) mission is teaching software to researchers. In addition to hosting the curriculum, the website explains how to present ideas effectively.

Setting up git for NumPy development
====================================

To contribute code or documentation, you first need

1. git installed on your machine
2. a GitHub account
3. a fork of NumPy

Install git
-----------

You may already have git; check by typing `git --version`. If it’s installed you’ll see some variation of `git version 2.11.0`. If instead you see `command is not recognized`, `command not found`, etc., [install git](https://git-scm.com/book/en/v2/Getting-Started-Installing-Git).

Then set your name and email:

```
git config --global user.email <EMAIL>
git config --global user.name "<NAME>"
```

Create a GitHub account
-----------------------

If you don’t have a GitHub account, visit <https://github.com/join> to create one.

Create a NumPy fork
-------------------

`Forking` has two steps – visit GitHub to create a fork repo in your account, then make a copy of it on your own machine.

### Create the fork repo

1. Log into your GitHub account.
2. Go to the [NumPy GitHub home](https://github.com/numpy/numpy).
3. At the upper right of the page, click `Fork`:

   ![Fork button](../../_images/forking_button.png)

   You’ll see

   ![Forking message](../../_images/forking_message.png)

   and then you’ll be taken to the home page of your forked copy:

   ![Forked repo page](../../_images/forked_page.png)

### Make the local copy

1.
In the directory where you want the copy created, run

   ```
   git clone https://github.com/your-user-name/numpy.git
   ```

   You’ll see something like:

   ```
   $ git clone https://github.com/your-user-name/numpy.git
   Cloning into 'numpy'...
   remote: Enumerating objects: 12, done.
   remote: Counting objects: 100% (12/12), done.
   remote: Compressing objects: 100% (12/12), done.
   remote: Total 175837 (delta 0), reused 0 (delta 0), pack-reused 175825
   Receiving objects: 100% (175837/175837), 78.16 MiB | 9.87 MiB/s, done.
   Resolving deltas: 100% (139317/139317), done.
   ```

   A directory `numpy` is created on your machine. (If you already have a numpy directory, GitHub will choose a different name like `numpy-1`.)

   ```
   $ ls -l
   total 0
   drwxrwxrwx 1 bjn bjn 4096 Jun 20 07:20 numpy
   ```

2. Give the name `upstream` to the main NumPy repo:

   ```
   cd numpy
   git remote add upstream https://github.com/numpy/numpy.git
   ```

3. Set up your repository so `git pull` pulls from `upstream` by default:

   ```
   git config branch.main.remote upstream
   git config branch.main.merge refs/heads/main
   ```

Look it over
------------

1. The branches shown by `git branch -a` will include

   * the `main` branch you just cloned on your own machine
   * the `main` branch from your fork on GitHub, which git named `origin` by default
   * the `main` branch on the main NumPy repo, which you named `upstream`.

   ```
   main
   remotes/origin/main
   remotes/upstream/main
   ```

   If `upstream` isn’t there, it will be added after you access the NumPy repo with a command like `git fetch` or `git pull`.

2. The repos shown by `git remote -v show` will include your fork on GitHub and the main repo:

   ```
   upstream https://github.com/numpy/numpy.git (fetch)
   upstream https://github.com/numpy/numpy.git (push)
   origin https://github.com/your-user-name/numpy.git (fetch)
   origin https://github.com/your-user-name/numpy.git (push)
   ```

3.
`git config --list` will include

   ```
   user.email=<EMAIL>
   user.name=<NAME>
   remote.origin.url=<EMAIL>.com:your-github-id/numpy.git
   remote.origin.fetch=+refs/heads/*:refs/remotes/origin/*
   branch.main.remote=upstream
   branch.main.merge=refs/heads/main
   remote.upstream.url=https://github.com/numpy/numpy.git
   remote.upstream.fetch=+refs/heads/*:refs/remotes/upstream/*
   ```

Optional: set up SSH keys to avoid passwords
--------------------------------------------

Cloning your NumPy fork repo required no password, because it read the remote repo without changing it. Later, though, submitting your pull requests will write to it, and GitHub will ask for your username and password – even though it’s your own repo. You can eliminate this authentication without compromising security by [setting up SSH keys](https://help.github.com/en/github/authenticating-to-github/connecting-to-github-with-ssh).

**If you set up the keys before cloning**, the instructions above change slightly. Instead of

```
git clone https://github.com/your-user-name/numpy.git
```

run

```
git clone git@github.com:your-user-name/numpy.git
```

and instead of showing an `https` URL, `git remote -v` will show

```
origin git@github.com:your-user-name/numpy.git (fetch)
origin git@github.com:your-user-name/numpy.git (push)
```

**If you have cloned already** and want to start using SSH, see [Switching remote URLs from HTTPS to SSH](https://help.github.com/en/github/using-git/changing-a-remotes-url#switching-remote-urls-from-https-to-ssh).

<https://numpy.org/doc/1.23/dev/gitwash/development_setup.html>

Install git
===========

Developing with git can be done entirely without GitHub. Git is a distributed version control system. In order to use git on your machine you must [install it](https://git-scm.com/downloads).
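The already-cloned case boils down to a single `git remote set-url` call. Here is a minimal sketch on a throwaway repository (in practice you would run only the `set-url` line inside your existing numpy clone; `your-user-name` is the same placeholder used above):

```shell
# Throwaway repo just to demonstrate the URL switch.
tmp=$(mktemp -d) && cd "$tmp" && git init -q
git remote add origin https://github.com/your-user-name/numpy.git
# Switch the existing HTTPS remote to SSH:
git remote set-url origin git@github.com:your-user-name/numpy.git
git remote get-url origin   # git@github.com:your-user-name/numpy.git
```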
<https://numpy.org/doc/1.23/dev/gitwash/git_intro.html>

Get the local copy of the code
==============================

From the command line:

```
git clone https://github.com/numpy/numpy.git
```

You now have a copy of the code tree in the new `numpy` directory.

<https://numpy.org/doc/1.23/dev/gitwash/following_latest.html>

Git configuration
=================

Overview
--------

Your personal [git](https://git-scm.com/) configurations are saved in the `.gitconfig` file in your home directory. Here is an example `.gitconfig` file:

```
[user]
name = <NAME>
email = <EMAIL>

[alias]
ci = commit -a
co = checkout
st = status -a
stat = status -a
br = branch
wdiff = diff --color-words

[core]
editor = vim

[merge]
summary = true
```

You can edit this file directly or you can use the `git config --global` command:

```
git config --global user.name "<NAME>"
git config --global user.email <EMAIL>
git config --global alias.ci "commit -a"
git config --global alias.co checkout
git config --global alias.st "status -a"
git config --global alias.stat "status -a"
git config --global alias.br branch
git config --global alias.wdiff "diff --color-words"
git config --global core.editor vim
git config --global merge.summary true
```

To set up on another computer, you can copy your `~/.gitconfig` file, or run the commands above.

In detail
---------

### user.name and user.email

It is good practice to tell [git](https://git-scm.com/) who you are, for labeling any changes you make to the code.
The simplest way to do this is from the command line:

```
git config --global user.name "<NAME>"
git config --global user.email <EMAIL>
```

This will write the settings into your git configuration file, which should now contain a user section with your name and email:

```
[user]
name = <NAME>
email = <EMAIL>
```

Of course you’ll need to replace `<NAME>` and `<EMAIL>` with your actual name and email address.

### Aliases

You might well benefit from some aliases to common commands. For example, you might well want to be able to shorten `git checkout` to `git co`. Or you may want to alias `git diff --color-words` (which gives a nicely formatted output of the diff) to `git wdiff`. The following `git config --global` commands:

```
git config --global alias.ci "commit -a"
git config --global alias.co checkout
git config --global alias.st "status -a"
git config --global alias.stat "status -a"
git config --global alias.br branch
git config --global alias.wdiff "diff --color-words"
```

will create an `alias` section in your `.gitconfig` file with contents like this:

```
[alias]
ci = commit -a
co = checkout
st = status -a
stat = status -a
br = branch
wdiff = diff --color-words
```

### Editor

You may also want to make sure that your editor of choice is used:

```
git config --global core.editor vim
```

### Merging

To enforce summaries when doing merges (`~/.gitconfig` file again):

```
[merge]
log = true
```

Or from the command line:

```
git config --global merge.log true
```

<https://numpy.org/doc/1.23/dev/gitwash/configure_git.html>

Two and three dots in difference specs
======================================

Thanks to <NAME> for this explanation.

Imagine a series of commits A, B, C, D.
Imagine that there are two branches, *topic* and *main*. You branched *topic* off *main* when *main* was at commit ‘E’. The graph of the commits looks like this:

```
     A---B---C topic
    /
D---E---F---G main
```

Then:

```
git diff main..topic
```

will output the difference from G to C (i.e. with effects of F and G), while:

```
git diff main...topic
```

would output just differences in the topic branch (i.e. only A, B, and C).

<https://numpy.org/doc/1.23/dev/gitwash/dot2_dot3.html>

Additional Git Resources
========================

Tutorials and summaries
-----------------------

* [github help](https://help.github.com) has an excellent series of how-to guides.
* [learn.github](https://learn.github.com/) has an excellent series of tutorials.
* The [pro git book](https://git-scm.com/book/) is a good in-depth book on git.
* A [git cheat sheet](http://cheat.errtheblog.com/s/git) is a page giving summaries of common commands.
* The [git user manual](https://www.kernel.org/pub/software/scm/git/docs/user-manual.html)
* The [git tutorial](https://www.kernel.org/pub/software/scm/git/docs/gittutorial.html)
* The [git community book](https://book.git-scm.com/)
* [git ready](http://www.gitready.com/) - a nice series of tutorials
* [git casts](http://www.gitcasts.com/) - video snippets giving git how-tos
* [git magic](http://www-cs-students.stanford.edu/~blynn/gitmagic/index.html) - extended introduction with intermediate detail
* The [git parable](http://tom.preston-werner.com/2009/05/19/the-git-parable.html) is an easy read explaining the concepts behind git.
* Our own [git foundation](http://matthew-brett.github.com/pydagogue/foundation.html) expands on the [git parable](http://tom.preston-werner.com/2009/05/19/the-git-parable.html).
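The two- vs three-dot behaviour described above can be reproduced in a throwaway repository. This sketch assumes git ≥ 2.28 (for `git init -b`); the file name and commit contents are made up purely for illustration:

```shell
set -e
tmp=$(mktemp -d); cd "$tmp"
git init -q -b main
git config user.email you@example.com
git config user.name "You"
echo base > file.txt; git add file.txt; git commit -qm "E"   # branch point
git checkout -qb topic
echo topic-work >> file.txt; git commit -aqm "C"             # topic commit
git checkout -q main
echo main-work >> file.txt; git commit -aqm "G"              # main commit
git diff main..topic    # two dots: G vs C, so main-work shows as removed
git diff main...topic   # three dots: merge base E vs C, topic-work only
```

The two-dot diff compares the two branch tips directly, so the commit made on *main* appears (inverted) in the output; the three-dot diff starts from the merge base, so only the topic branch's own changes appear.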
* <NAME>’ git page - [Fernando’s git page](http://www.fperez.org/py4science/git.html) - many links and tips
* A good but technical page on [git concepts](http://www.eecs.harvard.edu/~cduan/technical/git/)
* [git svn crash course](https://git.wiki.kernel.org/index.php/GitSvnCrashCourse): [git](https://git-scm.com/) for those of us used to [subversion](http://subversion.tigris.org/)

Advanced git workflow
---------------------

There are many ways of working with [git](https://git-scm.com/); here are some posts on the rules of thumb that other projects have come up with:

* <NAME> on [git management](https://web.archive.org/web/20090328043540/http://kerneltrap.org/Linux/Git_Management)
* <NAME> on [linux git workflow](https://www.mail-archive.com/d<EMAIL>/msg39091.html)

Summary: use the git tools to make the history of your edits as clean as possible; merge from upstream edits as little as possible in branches where you are doing active development.

Manual pages online
-------------------

You can get these on your own machine with (e.g.) `git help push` or (same thing) `git push --help`, but, for convenience, here are the online manual pages for some common commands:

* [git add](https://www.kernel.org/pub/software/scm/git/docs/git-add.html)
* [git branch](https://www.kernel.org/pub/software/scm/git/docs/git-branch.html)
* [git checkout](https://www.kernel.org/pub/software/scm/git/docs/git-checkout.html)
* [git clone](https://www.kernel.org/pub/software/scm/git/docs/git-clone.html)
* [git commit](https://www.kernel.org/pub/software/scm/git/docs/git-commit.html)
* [git config](https://www.kernel.org/pub/software/scm/git/docs/git-config.html)
* [git diff](https://www.kernel.org/pub/software/scm/git/docs/git-diff.html)
* [git log](https://www.kernel.org/pub/software/scm/git/docs/git-log.html)
* [git pull](https://www.kernel.org/pub/software/scm/git/docs/git-pull.html)
* [git push](https://www.kernel.org/pub/software/scm/git/docs/git-push.html)
* [git remote](https://www.kernel.org/pub/software/scm/git/docs/git-remote.html)
* [git status](https://www.kernel.org/pub/software/scm/git/docs/git-status.html)

<https://numpy.org/doc/1.23/dev/gitwash/git_resources.html>

NumPy project governance and decision-making
============================================

The purpose of this document is to formalize the governance process used by the NumPy project in both ordinary and extraordinary situations, and to clarify how decisions are made and how the various elements of our community interact, including the relationship between open source collaborative development and work that may be funded by for-profit or non-profit entities.

Summary
-------

NumPy is a community-owned and community-run project. To the maximum extent possible, decisions about project direction are made by community consensus (but note that “consensus” here has a somewhat technical meaning that might not match everyone’s expectations – see below). Some members of the community additionally contribute by serving on the NumPy steering council, where they are responsible for facilitating the establishment of community consensus, for stewarding project resources, and – in extreme cases – for making project decisions if the normal community-based process breaks down.

The Project
-----------

The NumPy Project (The Project) is an open source software project affiliated with the 501(c)3 NumFOCUS Foundation. The goal of The Project is to develop open source software for array-based computing in Python, and in particular the `numpy` package, along with related software such as `f2py` and the NumPy Sphinx extensions. The Software developed by The Project is released under the BSD (or similar) open source license, developed openly and hosted on public GitHub repositories under the `numpy` GitHub organization.

The Project is developed by a team of distributed developers, called Contributors.
Contributors are individuals who have contributed code, documentation, designs or other work to the Project. Anyone can be a Contributor. Contributors can be affiliated with any legal entity or none. Contributors participate in the project by submitting, reviewing and discussing GitHub Pull Requests and Issues and participating in open and public Project discussions on GitHub, mailing lists, and other channels. The foundation of Project participation is openness and transparency.

The Project Community consists of all Contributors and Users of the Project. Contributors work on behalf of and are responsible to the larger Project Community and we strive to keep the barrier between Contributors and Users as low as possible.

The Project is formally affiliated with the 501(c)3 NumFOCUS Foundation (<http://numfocus.org>), which serves as its fiscal sponsor, may hold project trademarks and other intellectual property, helps manage project donations and acts as a parent legal entity. NumFOCUS is the only legal entity that has a formal relationship with the project (see Institutional Partners section below).

Governance
----------

This section describes the governance and leadership model of The Project. The foundations of Project governance are:

* Openness & Transparency
* Active Contribution
* Institutional Neutrality

### Consensus-based decision making by the community

Normally, all project decisions will be made by consensus of all interested Contributors. The primary goal of this approach is to ensure that the people who are most affected by and involved in any given change can contribute their knowledge in the confidence that their voices will be heard, because thoughtful review from a broad community is the best mechanism we know of for creating high-quality software.

The mechanism we use to accomplish this goal may be unfamiliar for those who are not experienced with the cultural norms around free/open-source software development.
We provide a summary here, and highly recommend that all Contributors additionally read [Chapter 4: Social and Political Infrastructure](http://producingoss.com/en/producingoss.html#social-infrastructure) of Karl Fogel’s classic *Producing Open Source Software*, and in particular the section on [Consensus-based Democracy](http://producingoss.com/en/producingoss.html#consensus-democracy), for a more detailed discussion.

In this context, consensus does *not* require:

* that we wait to solicit everybody’s opinion on every change,
* that we ever hold a vote on anything,
* or that everybody is happy or agrees with every decision.

For us, what consensus means is that we entrust *everyone* with the right to veto any change if they feel it necessary. While this may sound like a recipe for obstruction and pain, this is not what happens. Instead, we find that most people take this responsibility seriously, and only invoke their veto when they judge that a serious problem is being ignored, and that their veto is necessary to protect the project. And in practice, it turns out that such vetoes are almost never formally invoked, because their mere possibility ensures that Contributors are motivated from the start to find some solution that everyone can live with – thus accomplishing our goal of ensuring that all interested perspectives are taken into account.

How do we know when consensus has been achieved? In principle, this is rather difficult, since consensus is defined by the absence of vetoes, which requires us to somehow prove a negative.
In practice, we use a combination of our best judgement (e.g., a simple and uncontroversial bug fix posted on GitHub and reviewed by a core developer is probably fine) and best efforts (e.g., all substantive API changes must be posted to the mailing list in order to give the broader community a chance to catch any problems and suggest improvements; we assume that anyone who cares enough about NumPy to invoke their veto right should be on the mailing list). If no-one bothers to comment on the mailing list after a few days, then it’s probably fine. And worst case, if a change is more controversial than expected, or a crucial critique is delayed because someone was on vacation, then it’s no big deal: we apologize for misjudging the situation, [back up, and sort things out](http://producingoss.com/en/producingoss.html#version-control-relaxation).

If one does need to invoke a formal veto, then it should consist of:

* an unambiguous statement that a veto is being invoked,
* an explanation of why it is being invoked, and
* a description of what conditions (if any) would convince the vetoer to withdraw their veto.

If all proposals for resolving some issue are vetoed, then the status quo wins by default. In the worst case, if a Contributor is genuinely misusing their veto in an obstructive fashion to the detriment of the project, then they can be ejected from the project by consensus of the Steering Council – see below.

### Steering Council

The Project will have a Steering Council that consists of Project Contributors who have produced contributions that are substantial in quality and quantity, and sustained over at least one year. The overall role of the Council is to ensure, with input from the Community, the long-term well-being of the project, both technically and as a community.

During the everyday project activities, council members participate in all discussions, code review and other project activities as peers with all other Contributors and the Community.
In these everyday activities, Council Members do not have any special power or privilege through their membership on the Council. However, it is expected that because of the quality and quantity of their contributions and their expert knowledge of the Project Software and Services that Council Members will provide useful guidance, both technical and in terms of project direction, to potentially less experienced contributors.

The Steering Council and its Members play a special role in certain situations. In particular, the Council may, if necessary:

* Make decisions about the overall scope, vision and direction of the project.
* Make decisions about strategic collaborations with other organizations or individuals.
* Make decisions about specific technical issues, features, bugs and pull requests. They are the primary mechanism of guiding the code review process and merging pull requests.
* Make decisions about the Services that are run by The Project and manage those Services for the benefit of the Project and Community.
* Update policy documents such as this one.
* Make decisions when regular community discussion doesn’t produce consensus on an issue in a reasonable time frame.

However, the Council’s primary responsibility is to facilitate the ordinary community-based decision making procedure described above. If we ever have to step in and formally override the community for the health of the Project, then we will do so, but we will consider reaching this point to indicate a failure in our leadership.

#### Council decision making

If it becomes necessary for the Steering Council to produce a formal decision, then they will use a form of the [Apache Foundation voting process](https://www.apache.org/foundation/voting.html). This is a formalized version of consensus, in which +1 votes indicate agreement, -1 votes are vetoes (and must be accompanied with a rationale, as above), and one can also vote fractionally (e.g.
-0.5, +0.5) if one wishes to express an opinion without registering a full veto. These numeric votes are also often used informally as a way of getting a general sense of people’s feelings on some issue, and should not normally be taken as formal votes. A formal vote only occurs if explicitly declared, and if this does occur then the vote should be held open for long enough to give all interested Council Members a chance to respond – at least one week.

In practice, we anticipate that for most Steering Council decisions (e.g., voting in new members) a more informal process will suffice.

#### Council membership

A list of current Steering Council Members is maintained at the page [About Us](https://numpy.org/about/).

To become eligible to join the Steering Council, an individual must be a Project Contributor who has produced contributions that are substantial in quality and quantity, and sustained over at least one year. Potential Council Members are nominated by existing Council members, and become members following consensus of the existing Council members, and confirmation that the potential Member is interested and willing to serve in that capacity. The Council will be initially formed from the set of existing Core Developers who, as of late 2015, have been significantly active over the last year.

When considering potential Members, the Council will look at candidates with a comprehensive view of their contributions. This will include but is not limited to code, code review, infrastructure work, mailing list and chat participation, community help/building, education and outreach, design work, etc. We are deliberately not setting arbitrary quantitative metrics (like “100 commits in this repo”) to avoid encouraging behavior that plays to the metrics rather than the project’s overall well-being.
We want to encourage a diverse array of backgrounds, viewpoints and talents in our team, which is why we explicitly do not define code as the sole metric on which council membership will be evaluated.

If a Council member becomes inactive in the project for a period of one year, they will be considered for removal from the Council. Before removal, the inactive Member will be approached to see if they plan on returning to active participation. If not they will be removed immediately upon a Council vote. If they plan on returning to active participation soon, they will be given a grace period of one year. If they don’t return to active participation within that time period they will be removed by vote of the Council without further grace period. All former Council members can be considered for membership again at any time in the future, like any other Project Contributor. Retired Council members will be listed on the project website, acknowledging the period during which they were active in the Council.

The Council reserves the right to eject current Members, if they are deemed to be actively harmful to the project’s well-being, and attempts at communication and conflict resolution have failed. This requires the consensus of the remaining Members.

#### Conflict of interest

It is expected that the Council Members will be employed at a wide range of companies, universities and non-profit organizations. Because of this, it is possible that Members will have conflicts of interest. Such conflicts of interest include, but are not limited to:

* Financial interests, such as investments, employment or contracting work, outside of The Project that may influence their work on The Project.
* Access to proprietary information of their employer that could potentially leak into their work with the Project.

All members of the Council shall disclose to the rest of the Council any conflict of interest they may have.
Members with a conflict of interest in a particular issue may participate in Council discussions on that issue, but must recuse themselves from voting on the issue.

#### Private communications of the Council

To the maximum extent possible, Council discussions and activities will be public and done in collaboration and discussion with the Project Contributors and Community. The Council will have a private mailing list that will be used sparingly and only when a specific matter requires privacy. When private communications and decisions are needed, the Council will do its best to summarize those to the Community after eliding personal/private/sensitive information that should not be posted to the public internet.

#### Subcommittees

The Council can create subcommittees that provide leadership and guidance for specific aspects of the project. Like the Council as a whole, subcommittees should conduct their business in an open and public manner unless privacy is specifically called for. Private subcommittee communications should happen on the main private mailing list of the Council unless specifically called for.

#### NumFOCUS Subcommittee

The Council will maintain one narrowly focused subcommittee to manage its interactions with NumFOCUS.

* The NumFOCUS Subcommittee is comprised of 5 persons who manage project funding that comes through NumFOCUS. It is expected that these funds will be spent in a manner that is consistent with the non-profit mission of NumFOCUS and the direction of the Project as determined by the full Council.
* This Subcommittee shall NOT make decisions about the direction, scope or technical direction of the Project.
* This Subcommittee will have 5 members, 4 of whom will be current Council Members and 1 of whom will be external to the Steering Council. No more than 2 Subcommittee Members can report to one person through employment or contracting work (including the reportee, i.e. the reportee + 1 is the max).
This avoids effective majorities resting on one person.

The current membership of the NumFOCUS Subcommittee is listed at the page [About Us](https://numpy.org/about/).

Institutional Partners and Funding
----------------------------------

The Steering Council are the primary leadership for the project. No outside institution, individual or legal entity has the ability to own, control, usurp or influence the project other than by participating in the Project as Contributors and Council Members. However, because institutions can be an important funding mechanism for the project, it is important to formally acknowledge institutional participation in the project. These are Institutional Partners.

An Institutional Contributor is any individual Project Contributor who contributes to the project as part of their official duties at an Institutional Partner. Likewise, an Institutional Council Member is any Project Steering Council Member who contributes to the project as part of their official duties at an Institutional Partner.

With these definitions, an Institutional Partner is any recognized legal entity in the United States or elsewhere that employs at least 1 Institutional Contributor or Institutional Council Member. Institutional Partners can be for-profit or non-profit entities.

Institutions become eligible to become an Institutional Partner by employing individuals who actively contribute to The Project as part of their official duties. To state this another way, the only way for a Partner to influence the project is by actively contributing to the open development of the project, in equal terms to any other member of the community of Contributors and Council Members. Merely using Project Software in institutional context does not allow an entity to become an Institutional Partner. Financial gifts do not enable an entity to become an Institutional Partner.
Once an institution becomes eligible for Institutional Partnership, the Steering Council must nominate and approve the Partnership.

If at some point an existing Institutional Partner stops having any contributing employees, then a one year grace period commences. If at the end of this one year period they continue not to have any contributing employees, then their Institutional Partnership will lapse, and resuming it will require going through the normal process for new Partnerships.

An Institutional Partner is free to pursue funding for their work on The Project through any legal means. This could involve a non-profit organization raising money from private foundations and donors or a for-profit company building proprietary products and services that leverage Project Software and Services. Funding acquired by Institutional Partners to work on The Project is called Institutional Funding. However, no funding obtained by an Institutional Partner can override the Steering Council. If a Partner has funding to do NumPy work and the Council decides to not pursue that work as a project, the Partner is free to pursue it on their own. However in this situation, that part of the Partner’s work will not be under the NumPy umbrella and cannot use the Project trademarks in a way that suggests a formal relationship.

Institutional Partner benefits are:

* Acknowledgement on the NumPy websites, in talks and T-shirts.
* Ability to acknowledge their own funding sources on the NumPy websites, in talks and T-shirts.
* Ability to influence the project through the participation of their Council Member.
* Council Members invited to NumPy Developer Meetings.

A list of current Institutional Partners is maintained at the page [About Us](https://numpy.org/about/).
Document history
----------------

<https://github.com/numpy/numpy/commits/main/doc/source/dev/governance/governance.rst>

Acknowledgements
----------------

Substantial portions of this document were adapted from the [Jupyter/IPython project’s governance document](https://github.com/jupyter/governance).

License
-------

To the extent possible under law, the authors have waived all copyright and related or neighboring rights to the NumPy project governance and decision-making document, as per the [CC-0 public domain dedication / license](https://creativecommons.org/publicdomain/zero/1.0/).

<https://numpy.org/doc/1.23/dev/governance/governance.html>

numpy.argsort
=============

numpy.argsort(*a*, *axis=-1*, *kind=None*, *order=None*)[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/core/fromnumeric.py#L1012-L1120)

Returns the indices that would sort an array.

Perform an indirect sort along the given axis using the algorithm specified by the `kind` keyword. It returns an array of indices of the same shape as `a` that index data along the given axis in sorted order.

Parameters

**a** : array_like
Array to sort.

**axis** : int or None, optional
Axis along which to sort. The default is -1 (the last axis). If None, the flattened array is used.

**kind** : {‘quicksort’, ‘mergesort’, ‘heapsort’, ‘stable’}, optional
Sorting algorithm. The default is ‘quicksort’. Note that both ‘stable’ and ‘mergesort’ use timsort under the covers and, in general, the actual implementation will vary with data type. The ‘mergesort’ option is retained for backwards compatibility.

Changed in version 1.15.0: The ‘stable’ option was added.

**order** : str or list of str, optional
When `a` is an array with fields defined, this argument specifies which fields to compare first, second, etc.
A single field can be specified as a string, and not all fields need be specified, but unspecified fields will still be used, in the order in which they come up in the dtype, to break ties.

Returns

**index_array** : ndarray, int
Array of indices that sort `a` along the specified `axis`. If `a` is one-dimensional, `a[index_array]` yields a sorted `a`. More generally, `np.take_along_axis(a, index_array, axis=axis)` always yields the sorted `a`, irrespective of dimensionality.

See also

[`sort`](numpy.sort#numpy.sort "numpy.sort")
Describes sorting algorithms used.

[`lexsort`](numpy.lexsort#numpy.lexsort "numpy.lexsort")
Indirect stable sort with multiple keys.

[`ndarray.sort`](numpy.ndarray.sort#numpy.ndarray.sort "numpy.ndarray.sort")
In-place sort.

[`argpartition`](numpy.argpartition#numpy.argpartition "numpy.argpartition")
Indirect partial sort.

[`take_along_axis`](numpy.take_along_axis#numpy.take_along_axis "numpy.take_along_axis")
Apply `index_array` from argsort to an array as if by calling sort.

#### Notes

See [`sort`](numpy.sort#numpy.sort "numpy.sort") for notes on the different sorting algorithms.

As of NumPy 1.4.0 [`argsort`](#numpy.argsort "numpy.argsort") works with real/complex arrays containing nan values. The enhanced sort order is documented in [`sort`](numpy.sort#numpy.sort "numpy.sort").
#### Examples

One dimensional array:

```
>>> x = np.array([3, 1, 2])
>>> np.argsort(x)
array([1, 2, 0])
```

Two-dimensional array:

```
>>> x = np.array([[0, 3], [2, 2]])
>>> x
array([[0, 3],
       [2, 2]])
```

```
>>> ind = np.argsort(x, axis=0)  # sorts along first axis (down)
>>> ind
array([[0, 1],
       [1, 0]])
>>> np.take_along_axis(x, ind, axis=0)  # same as np.sort(x, axis=0)
array([[0, 2],
       [2, 3]])
```

```
>>> ind = np.argsort(x, axis=1)  # sorts along last axis (across)
>>> ind
array([[0, 1],
       [0, 1]])
>>> np.take_along_axis(x, ind, axis=1)  # same as np.sort(x, axis=1)
array([[0, 3],
       [2, 2]])
```

Indices of the sorted elements of a N-dimensional array:

```
>>> ind = np.unravel_index(np.argsort(x, axis=None), x.shape)
>>> ind
(array([0, 1, 1, 0]), array([0, 0, 1, 1]))
>>> x[ind]  # same as np.sort(x, axis=None)
array([0, 2, 2, 3])
```

Sorting with keys:

```
>>> x = np.array([(1, 0), (0, 1)], dtype=[('x', '<i4'), ('y', '<i4')])
>>> x
array([(1, 0), (0, 1)], dtype=[('x', '<i4'), ('y', '<i4')])
```

```
>>> np.argsort(x, order=('x','y'))
array([1, 0])
```

```
>>> np.argsort(x, order=('y','x'))
array([0, 1])
```

<https://numpy.org/doc/1.23/reference/generated/numpy.argsort.html>

numpy.lexsort
=============

numpy.lexsort(*keys*, *axis=-1*)

Perform an indirect stable sort using a sequence of keys.

Given multiple sorting keys, which can be interpreted as columns in a spreadsheet, lexsort returns an array of integer indices that describes the sort order by multiple columns. The last key in the sequence is used for the primary sort order, the second-to-last key for the secondary sort order, and so on.

The keys argument must be a sequence of objects that can be converted to arrays of the same shape. If a 2D array is provided for the keys argument, its rows are interpreted as the sorting keys and sorting is according to the last row, second last row etc.
Parameters **keys**(k, N) array or tuple containing k (N,)-shaped sequences The `k` different “columns” to be sorted. The last column (or row if `keys` is a 2D array) is the primary sort key. **axis**int, optional Axis to be indirectly sorted. By default, sort over the last axis. Returns **indices**(N,) ndarray of ints Array of indices that sort the keys along the specified axis. See also [`argsort`](numpy.argsort#numpy.argsort "numpy.argsort") Indirect sort. [`ndarray.sort`](numpy.ndarray.sort#numpy.ndarray.sort "numpy.ndarray.sort") In-place sort. [`sort`](numpy.sort#numpy.sort "numpy.sort") Return a sorted copy of an array. #### Examples Sort names: first by surname, then by name. ``` >>> surnames = ('Hertz', 'Galilei', 'Hertz') >>> first_names = ('Heinrich', 'Galileo', 'Gustav') >>> ind = np.lexsort((first_names, surnames)) >>> ind array([1, 2, 0]) ``` ``` >>> [surnames[i] + ", " + first_names[i] for i in ind] ['Galilei, Galileo', 'Hertz, Gustav', 'Hertz, Heinrich'] ``` Sort two columns of numbers: ``` >>> a = [1,5,1,4,3,4,4] # First column >>> b = [9,4,0,4,0,2,1] # Second column >>> ind = np.lexsort((b,a)) # Sort by a, then by b >>> ind array([2, 0, 4, 6, 5, 3, 1]) ``` ``` >>> [(a[i],b[i]) for i in ind] [(1, 0), (1, 9), (3, 0), (4, 1), (4, 2), (4, 4), (5, 4)] ``` Note that sorting is first according to the elements of `a`. Secondary sorting is according to the elements of `b`. A normal `argsort` would have yielded: ``` >>> [(a[i],b[i]) for i in np.argsort(a)] [(1, 9), (1, 0), (3, 0), (4, 4), (4, 2), (4, 1), (5, 4)] ``` Structured arrays are sorted lexically by `argsort`: ``` >>> x = np.array([(1,9), (5,4), (1,0), (4,4), (3,0), (4,2), (4,1)], ... dtype=np.dtype([('x', int), ('y', int)])) ``` ``` >>> np.argsort(x) # or np.argsort(x, order=('x', 'y')) array([2, 0, 4, 6, 5, 3, 1]) ```
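The introduction above notes that `keys` may also be a single 2D array whose rows are the individual key columns, with the last row acting as the primary key. A minimal sketch, reusing the `a`/`b` columns from the example, to confirm the equivalence:

```python
import numpy as np

# Keys passed as one 2D array: each row is a key column; the LAST row
# is the primary key, so this matches np.lexsort((b, a)) above.
b = [9, 4, 0, 4, 0, 2, 1]  # secondary key
a = [1, 5, 1, 4, 3, 4, 4]  # primary key
keys = np.array([b, a])
ind = np.lexsort(keys)
print(ind)  # [2 0 4 6 5 3 1] — identical to np.lexsort((b, a))
```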
<https://numpy.org/doc/1.23/reference/generated/numpy.lexsort.htmlnumpy.searchsorted ================== numpy.searchsorted(*a*, *v*, *side='left'*, *sorter=None*)[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/core/fromnumeric.py#L1319-L1387) Find indices where elements should be inserted to maintain order. Find the indices into a sorted array `a` such that, if the corresponding elements in `v` were inserted before the indices, the order of `a` would be preserved. Assuming that `a` is sorted: | `side` | returned index `i` satisfies | | --- | --- | | left | `a[i-1] < v <= a[i]` | | right | `a[i-1] <= v < a[i]` | Parameters **a**1-D array_like Input array. If `sorter` is None, then it must be sorted in ascending order, otherwise `sorter` must be an array of indices that sort it. **v**array_like Values to insert into `a`. **side**{‘left’, ‘right’}, optional If ‘left’, the index of the first suitable location found is given. If ‘right’, return the last such index. If there is no suitable index, return either 0 or N (where N is the length of `a`). **sorter**1-D array_like, optional Optional array of integer indices that sort array a into ascending order. They are typically the result of argsort. New in version 1.7.0. Returns **indices**int or array of ints Array of insertion points with the same shape as `v`, or an integer if `v` is a scalar. See also [`sort`](numpy.sort#numpy.sort "numpy.sort") Return a sorted copy of an array. [`histogram`](numpy.histogram#numpy.histogram "numpy.histogram") Produce histogram from 1-D data. #### Notes Binary search is used to find the required insertion points. As of NumPy 1.4.0 [`searchsorted`](#numpy.searchsorted "numpy.searchsorted") works with real/complex arrays containing [`nan`](../constants#numpy.nan "numpy.nan") values. The enhanced sort order is documented in [`sort`](numpy.sort#numpy.sort "numpy.sort"). 
This function uses the same algorithm as the builtin python [`bisect.bisect_left`](https://docs.python.org/3/library/bisect.html#bisect.bisect_left "(in Python v3.10)") (`side='left'`) and [`bisect.bisect_right`](https://docs.python.org/3/library/bisect.html#bisect.bisect_right "(in Python v3.10)") (`side='right'`) functions, and is additionally vectorized in the `v` argument. #### Examples ``` >>> np.searchsorted([1,2,3,4,5], 3) 2 >>> np.searchsorted([1,2,3,4,5], 3, side='right') 3 >>> np.searchsorted([1,2,3,4,5], [-10, 10, 2, 3]) array([0, 5, 1, 2]) ``` <https://numpy.org/doc/1.23/reference/generated/numpy.searchsorted.html> numpy.partition =============== numpy.partition(*a*, *kth*, *axis=-1*, *kind='introselect'*, *order=None*)[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/core/fromnumeric.py#L667-L759) Return a partitioned copy of an array. Creates a copy of the array with its elements rearranged in such a way that the value of the element in k-th position is in the position it would be in a sorted array. All elements smaller than the k-th element are moved before this element and all equal or greater are moved behind it. The ordering of the elements in the two partitions is undefined. New in version 1.8.0. Parameters **a**array_like Array to be sorted. **kth**int or sequence of ints Element index to partition by. The k-th value of the element will be in its final sorted position and all smaller elements will be moved before it and all equal or greater elements behind it. The order of all elements in the partitions is undefined. If provided with a sequence of k-th it will partition all elements indexed by k-th of them into their sorted position at once. Deprecated since version 1.22.0: Passing booleans as index is deprecated. **axis**int or None, optional Axis along which to sort. If None, the array is flattened before sorting. The default is -1, which sorts along the last axis.
**kind**{‘introselect’}, optional Selection algorithm. Default is ‘introselect’. **order**str or list of str, optional When `a` is an array with fields defined, this argument specifies which fields to compare first, second, etc. A single field can be specified as a string. Not all fields need be specified, but unspecified fields will still be used, in the order in which they come up in the dtype, to break ties. Returns **partitioned_array**ndarray Array of the same type and shape as `a`. See also [`ndarray.partition`](numpy.ndarray.partition#numpy.ndarray.partition "numpy.ndarray.partition") Method to sort an array in-place. [`argpartition`](numpy.argpartition#numpy.argpartition "numpy.argpartition") Indirect partition. [`sort`](numpy.sort#numpy.sort "numpy.sort") Full sorting #### Notes The various selection algorithms are characterized by their average speed, worst case performance, work space size, and whether they are stable. A stable sort keeps items with the same key in the same relative order. The available algorithms have the following properties: | kind | speed | worst case | work space | stable | | --- | --- | --- | --- | --- | | ‘introselect’ | 1 | O(n) | 0 | no | All the partition algorithms make temporary copies of the data when partitioning along any but the last axis. Consequently, partitioning along the last axis is faster and uses less space than partitioning along any other axis. The sort order for complex numbers is lexicographic. If both the real and imaginary parts are non-nan then the order is determined by the real parts except when they are equal, in which case the order is determined by the imaginary parts. #### Examples ``` >>> a = np.array([3, 4, 2, 1]) >>> np.partition(a, 3) array([2, 1, 3, 4]) ``` ``` >>> np.partition(a, (1, 3)) array([1, 2, 3, 4]) ``` © 2005–2022 NumPy Developers Licensed under the 3-clause BSD License. 
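A common use of `partition` is selecting the k smallest values without paying for a full sort. A minimal sketch:

```python
import numpy as np

# The first k elements of a partition at kth=k-1 are the k smallest
# values (in unspecified order); sort just that slice if order matters.
a = np.array([7, 1, 9, 3, 5, 2, 8])
k = 3
smallest = np.sort(np.partition(a, k - 1)[:k])
print(smallest)  # [1 2 3]
```

For large arrays this is O(n) plus an O(k log k) sort of the slice, versus O(n log n) for sorting everything.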
<https://numpy.org/doc/1.23/reference/generated/numpy.partition.htmlnumpy.sort ========== numpy.sort(*a*, *axis=- 1*, *kind=None*, *order=None*)[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/core/fromnumeric.py#L852-L1005) Return a sorted copy of an array. Parameters **a**array_like Array to be sorted. **axis**int or None, optional Axis along which to sort. If None, the array is flattened before sorting. The default is -1, which sorts along the last axis. **kind**{‘quicksort’, ‘mergesort’, ‘heapsort’, ‘stable’}, optional Sorting algorithm. The default is ‘quicksort’. Note that both ‘stable’ and ‘mergesort’ use timsort or radix sort under the covers and, in general, the actual implementation will vary with data type. The ‘mergesort’ option is retained for backwards compatibility. Changed in version 1.15.0.: The ‘stable’ option was added. **order**str or list of str, optional When `a` is an array with fields defined, this argument specifies which fields to compare first, second, etc. A single field can be specified as a string, and not all fields need be specified, but unspecified fields will still be used, in the order in which they come up in the dtype, to break ties. Returns **sorted_array**ndarray Array of the same type and shape as `a`. See also [`ndarray.sort`](numpy.ndarray.sort#numpy.ndarray.sort "numpy.ndarray.sort") Method to sort an array in-place. [`argsort`](numpy.argsort#numpy.argsort "numpy.argsort") Indirect sort. [`lexsort`](numpy.lexsort#numpy.lexsort "numpy.lexsort") Indirect stable sort on multiple keys. [`searchsorted`](numpy.searchsorted#numpy.searchsorted "numpy.searchsorted") Find elements in a sorted array. [`partition`](numpy.partition#numpy.partition "numpy.partition") Partial sort. #### Notes The various sorting algorithms are characterized by their average speed, worst case performance, work space size, and whether they are stable. A stable sort keeps items with the same key in the same relative order. 
The four algorithms implemented in NumPy have the following properties: | kind | speed | worst case | work space | stable | | --- | --- | --- | --- | --- | | ‘quicksort’ | 1 | O(n^2) | 0 | no | | ‘heapsort’ | 3 | O(n*log(n)) | 0 | no | | ‘mergesort’ | 2 | O(n*log(n)) | ~n/2 | yes | | ‘timsort’ | 2 | O(n*log(n)) | ~n/2 | yes | Note The datatype determines which of ‘mergesort’ or ‘timsort’ is actually used, even if ‘mergesort’ is specified. User selection at a finer scale is not currently available. All the sort algorithms make temporary copies of the data when sorting along any but the last axis. Consequently, sorting along the last axis is faster and uses less space than sorting along any other axis. The sort order for complex numbers is lexicographic. If both the real and imaginary parts are non-nan then the order is determined by the real parts except when they are equal, in which case the order is determined by the imaginary parts. Previous to numpy 1.4.0 sorting real and complex arrays containing nan values led to undefined behaviour. In numpy versions >= 1.4.0 nan values are sorted to the end. The extended sort order is: * Real: [R, nan] * Complex: [R + Rj, R + nanj, nan + Rj, nan + nanj] where R is a non-nan real value. Complex values with the same nan placements are sorted according to the non-nan part if it exists. Non-nan values are sorted as before. New in version 1.12.0. quicksort has been changed to [introsort](https://en.wikipedia.org/wiki/Introsort). When sorting does not make enough progress it switches to [heapsort](https://en.wikipedia.org/wiki/Heapsort). This implementation makes quicksort O(n*log(n)) in the worst case. ‘stable’ automatically chooses the best stable sorting algorithm for the data type being sorted. It, along with ‘mergesort’ is currently mapped to [timsort](https://en.wikipedia.org/wiki/Timsort) or [radix sort](https://en.wikipedia.org/wiki/Radix_sort) depending on the data type. 
API forward compatibility currently limits the ability to select the implementation and it is hardwired for the different data types. New in version 1.17.0. Timsort is added for better performance on already or nearly sorted data. On random data timsort is almost identical to mergesort. It is now used for stable sort while quicksort is still the default sort if none is chosen. For timsort details, refer to [CPython listsort.txt](https://github.com/python/cpython/blob/3.7/Objects/listsort.txt). ‘mergesort’ and ‘stable’ are mapped to radix sort for integer data types. Radix sort is an O(n) sort instead of O(n log n). Changed in version 1.18.0. NaT now sorts to the end of arrays for consistency with NaN. #### Examples ``` >>> a = np.array([[1,4],[3,1]]) >>> np.sort(a) # sort along the last axis array([[1, 4], [1, 3]]) >>> np.sort(a, axis=None) # sort the flattened array array([1, 1, 3, 4]) >>> np.sort(a, axis=0) # sort along the first axis array([[1, 1], [3, 4]]) ``` Use the `order` keyword to specify a field to use when sorting a structured array: ``` >>> dtype = [('name', 'S10'), ('height', float), ('age', int)] >>> values = [('Arthur', 1.8, 41), ('Lancelot', 1.9, 38), ... ('Galahad', 1.7, 38)] >>> a = np.array(values, dtype=dtype) # create a structured array >>> np.sort(a, order='height') array([('Galahad', 1.7, 38), ('Arthur', 1.8, 41), ('Lancelot', 1.8999999999999999, 38)], dtype=[('name', '|S10'), ('height', '<f8'), ('age', '<i4')]) ``` Sort by age, then height if ages are equal: ``` >>> np.sort(a, order=['age', 'height']) array([('Galahad', 1.7, 38), ('Lancelot', 1.8999999999999999, 38), ('Arthur', 1.8, 41)], dtype=[('name', '|S10'), ('height', '<f8'), ('age', '<i4')]) ``` © 2005–2022 NumPy Developers Licensed under the 3-clause BSD License. 
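A small sketch of what ‘stable’ guarantees: elements that compare equal keep their original relative order, which is easiest to see through the indices returned by `argsort` (the `kind` argument is shared by `sort` and `argsort`):

```python
import numpy as np

# With kind='stable', ties are resolved by original position:
# both 1s (indices 1, 3) come out in that order, then both 3s (0, 2).
x = np.array([3, 1, 3, 1])
ind = np.argsort(x, kind='stable')
print(ind)  # [1 3 0 2]
```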
<https://numpy.org/doc/1.23/reference/generated/numpy.sort.html> numpy.concatenate ================= numpy.concatenate(*(a1, a2, ...)*, *axis=0*, *out=None*, *dtype=None*, *casting="same_kind"*) Join a sequence of arrays along an existing axis. Parameters **a1, a2, …
**sequence of array_like The arrays must have the same shape, except in the dimension corresponding to `axis` (the first, by default). **axis**int, optional The axis along which the arrays will be joined. If axis is None, arrays are flattened before use. Default is 0. **out**ndarray, optional If provided, the destination to place the result. The shape must be correct, matching that of what concatenate would have returned if no out argument were specified. **dtype**str or dtype If provided, the destination array will have this dtype. Cannot be provided together with `out`. New in version 1.20.0. **casting**{‘no’, ‘equiv’, ‘safe’, ‘same_kind’, ‘unsafe’}, optional Controls what kind of data casting may occur. Defaults to ‘same_kind’. New in version 1.20.0. Returns **res**ndarray The concatenated array. See also [`ma.concatenate`](numpy.ma.concatenate#numpy.ma.concatenate "numpy.ma.concatenate") Concatenate function that preserves input masks. [`array_split`](numpy.array_split#numpy.array_split "numpy.array_split") Split an array into multiple sub-arrays of equal or near-equal size. [`split`](numpy.split#numpy.split "numpy.split") Split array into a list of multiple sub-arrays of equal size. [`hsplit`](numpy.hsplit#numpy.hsplit "numpy.hsplit") Split array into multiple sub-arrays horizontally (column wise). [`vsplit`](numpy.vsplit#numpy.vsplit "numpy.vsplit") Split array into multiple sub-arrays vertically (row wise). [`dsplit`](numpy.dsplit#numpy.dsplit "numpy.dsplit") Split array into multiple sub-arrays along the 3rd axis (depth). [`stack`](numpy.stack#numpy.stack "numpy.stack") Stack a sequence of arrays along a new axis. [`block`](numpy.block#numpy.block "numpy.block") Assemble arrays from blocks. [`hstack`](numpy.hstack#numpy.hstack "numpy.hstack") Stack arrays in sequence horizontally (column wise). [`vstack`](numpy.vstack#numpy.vstack "numpy.vstack") Stack arrays in sequence vertically (row wise). 
[`dstack`](numpy.dstack#numpy.dstack "numpy.dstack") Stack arrays in sequence depth wise (along third dimension). [`column_stack`](numpy.column_stack#numpy.column_stack "numpy.column_stack") Stack 1-D arrays as columns into a 2-D array. #### Notes When one or more of the arrays to be concatenated is a MaskedArray, this function will return a MaskedArray object instead of an ndarray, but the input masks are *not* preserved. In cases where a MaskedArray is expected as input, use the ma.concatenate function from the masked array module instead. #### Examples ``` >>> a = np.array([[1, 2], [3, 4]]) >>> b = np.array([[5, 6]]) >>> np.concatenate((a, b), axis=0) array([[1, 2], [3, 4], [5, 6]]) >>> np.concatenate((a, b.T), axis=1) array([[1, 2, 5], [3, 4, 6]]) >>> np.concatenate((a, b), axis=None) array([1, 2, 3, 4, 5, 6]) ``` This function will not preserve masking of MaskedArray inputs. ``` >>> a = np.ma.arange(3) >>> a[1] = np.ma.masked >>> b = np.arange(2, 5) >>> a masked_array(data=[0, --, 2], mask=[False, True, False], fill_value=999999) >>> b array([2, 3, 4]) >>> np.concatenate([a, b]) masked_array(data=[0, 1, 2, 2, 3, 4], mask=False, fill_value=999999) >>> np.ma.concatenate([a, b]) masked_array(data=[0, --, 2, 2, 3, 4], mask=[False, True, False, False, False, False], fill_value=999999) ``` <https://numpy.org/doc/1.23/reference/generated/numpy.concatenate.html> Internal organization of NumPy arrays ===================================== It helps to understand a bit about how NumPy arrays are handled under the covers to help understand NumPy better. This section will not go into great detail. Those wishing to understand the full details are requested to refer to Travis E. Oliphant’s book [Guide to NumPy](http://web.mit.edu/dvp/Public/numpybook.pdf). NumPy arrays consist of two major components: the raw array data (from now on, referred to as the data buffer), and the information about the raw array data.
The data buffer is typically what people think of as arrays in C or Fortran, a [contiguous](../glossary#term-contiguous) (and fixed) block of memory containing fixed-sized data items. NumPy also contains a significant set of data that describes how to interpret the data in the data buffer. This extra information contains (among other things): 1. The basic data element’s size in bytes. 2. The start of the data within the data buffer (an offset relative to the beginning of the data buffer). 3. The number of [dimensions](../glossary#term-dimension) and the size of each dimension. 4. The separation between elements for each dimension (the [stride](../glossary#term-stride)). This does not have to be a multiple of the element size. 5. The byte order of the data (which may not be the native byte order). 6. Whether the buffer is read-only. 7. Information (via the [`dtype`](../reference/generated/numpy.dtype#numpy.dtype "numpy.dtype") object) about the interpretation of the basic data element. The basic data element may be as simple as an int or a float, or it may be a compound object (e.g., [struct-like](../glossary#term-structured-data-type)), a fixed character field, or Python object pointers. 8. Whether the array is to be interpreted as [C-order](../glossary#term-C-order) or [Fortran-order](../glossary#term-Fortran-order). This arrangement allows for the very flexible use of arrays. One thing that it allows is simple changes to the metadata to change the interpretation of the array buffer. Changing the byteorder of the array is a simple change involving no rearrangement of the data. The [shape](../glossary#term-shape) of the array can be changed very easily without changing anything in the data buffer or any data copying at all. 
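A short sketch of the metadata-only changes described above: `reshape` and transpose create new `ndarray` objects with different shape and strides, while `np.shares_memory` confirms that the data buffer itself is reused:

```python
import numpy as np

a = np.arange(6)
b = a.reshape(2, 3)   # new shape metadata, same data buffer
c = b.T               # transpose: only strides change, no data moves
assert np.shares_memory(a, b) and np.shares_memory(a, c)
assert b.strides != c.strides   # different interpretation, same bytes
b[0, 0] = 99          # a write through one view...
print(a[0])           # ...is visible through the others: 99
```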
Among other things, this makes it possible to create a new array metadata object that uses the same data buffer as an existing array, yielding a new [view](../glossary#term-view) of that data buffer with a different interpretation of the buffer (e.g., different shape, offset, byte order, strides, etc.) but sharing the same data bytes. Many operations in NumPy do just this, such as [slicing](https://docs.python.org/3/glossary.html#term-slice "(in Python v3.10)"). Other operations, such as transpose, don’t move data elements around in the array, but rather change the information about the shape and strides so that the indexing of the array changes, but the data in the array doesn’t move. Typically these new arrays, with new metadata but the same data buffer, are new views into the data buffer. There is a different [`ndarray`](../reference/generated/numpy.ndarray#numpy.ndarray "numpy.ndarray") object, but it uses the same data buffer. This is why it is necessary to force copies through the use of the [`copy`](../reference/generated/numpy.copy#numpy.copy "numpy.copy") method if one really wants to make a new and independent copy of the data buffer. New views into arrays mean the object reference count for the data buffer increases. Simply doing away with the original array object will not remove the data buffer if other views of it still exist. Multidimensional array indexing order issues -------------------------------------------- See also [Indexing on ndarrays](../user/basics.indexing#basics-indexing) What is the right way to index multi-dimensional arrays? Before you jump to conclusions about the one true way to index multi-dimensional arrays, it pays to understand why this is a confusing issue. This section will try to explain in detail how NumPy indexing works and why we adopt the convention we do for images, and when it may be appropriate to adopt other conventions.
The first thing to understand is that there are two conflicting conventions for indexing 2-dimensional arrays. Matrix notation uses the first index to indicate which row is being selected and the second index to indicate which column is selected. This is opposite to the geometrically oriented convention for images, where people generally think the first index represents x position (i.e., column) and the second represents y position (i.e., row). This alone is the source of much confusion; matrix-oriented users and image-oriented users expect two different things with regard to indexing. The second issue to understand is how indices correspond to the order in which the array is stored in memory. In Fortran, the first index is the most rapidly varying index when moving through the elements of a two-dimensional array as it is stored in memory. If you adopt the matrix convention for indexing, then this means the matrix is stored one column at a time (since the first index moves to the next row as it changes). Thus Fortran is considered a column-major language. C has just the opposite convention. In C, the last index changes most rapidly as one moves through the array as stored in memory. Thus C is a row-major language: the matrix is stored by rows. Note that in both cases it is presumed that the matrix convention for indexing is being used, i.e., for both Fortran and C, the first index is the row. Note this convention implies that the indexing convention is invariant and that the data order changes to keep that so. But that’s not the only way to look at it. Suppose one has large two-dimensional arrays (images or matrices) stored in data files. Suppose the data are stored by rows rather than by columns. If we are to preserve our index convention (whether matrix or image), that means that depending on the language we use, we may be forced to reorder the data when it is read into memory to preserve our indexing convention.
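The row-major/column-major distinction can be seen directly by traversing the same buffer in ‘C’ versus ‘F’ order; a minimal sketch:

```python
import numpy as np

a = np.arange(6).reshape(2, 3)   # [[0, 1, 2], [3, 4, 5]]
# C (row-major): the last index varies fastest.
print(a.ravel(order='C'))        # [0 1 2 3 4 5]
# Fortran (column-major): the first index varies fastest.
print(a.ravel(order='F'))        # [0 3 1 4 2 5]
```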
For example, if we read row-ordered data into memory without reordering, it will match the matrix indexing convention for C, but not for Fortran. Conversely, it will match the image indexing convention for Fortran, but not for C. For C, if one is using data stored in row order, and one wants to preserve the image index convention, the data must be reordered when reading into memory. In the end, what you do for Fortran or C depends on which is more important, not reordering data or preserving the indexing convention. For large images, reordering data is potentially expensive, and often the indexing convention is inverted to avoid that. The situation with NumPy makes this issue yet more complicated. The internal machinery of NumPy arrays is flexible enough to accept any ordering of indices. One can simply reorder indices by manipulating the internal [stride](../glossary#term-stride) information for arrays without reordering the data at all. NumPy will know how to map the new index order to the data without moving the data. So if this is true, why not choose the index order that matches what you most expect? In particular, why not define row-ordered images to use the image convention? (This is sometimes referred to as the Fortran convention vs the C convention, thus the ‘C’ and ‘FORTRAN’ order options for array ordering in NumPy.) The drawback of doing this is potential performance penalties. It’s common to access the data sequentially, either implicitly in array operations or explicitly by looping over rows of an image. When that is done, then the data will be accessed in non-optimal order. As the first index is incremented, what is actually happening is that elements spaced far apart in memory are being sequentially accessed, with usually poor memory access speeds. For example, for a two-dimensional image `im` defined so that `im[0, 10]` represents the value at `x = 0`, `y = 10`. 
To be consistent with usual Python behavior then `im[0]` would represent a column at `x = 0`. Yet that data would be spread over the whole array since the data are stored in row order. Despite the flexibility of NumPy’s indexing, it can’t really paper over the fact that basic operations are rendered inefficient because of data order, or that getting contiguous subarrays is still awkward (e.g., `im[:, 0]` for the first row, vs `im[0]`). Thus one can’t use an idiom such as `for row in im`; `for col in im` does work, but doesn’t yield contiguous column data. As it turns out, NumPy is smart enough when dealing with [ufuncs](internals.code-explanations#ufuncs-internals) to determine which index is the most rapidly varying one in memory and uses that for the innermost loop. Thus for ufuncs, there is no large intrinsic advantage to either approach in most cases. On the other hand, use of [`ndarray.flat`](../reference/generated/numpy.ndarray.flat#numpy.ndarray.flat "numpy.ndarray.flat") with a FORTRAN ordered array will lead to non-optimal memory access as adjacent elements in the flattened array (iterator, actually) are not contiguous in memory. Indeed, the fact is that Python indexing on lists and other sequences naturally leads to an outside-to-inside ordering (the first index gets the largest grouping, the next largest, and the last gets the smallest element). Since image data are normally stored in rows, this corresponds to the position within rows being the last item indexed. If you do want to use Fortran ordering realize that there are two approaches to consider: 1) accept that the first index is just not the most rapidly changing in memory and have all your I/O routines reorder your data when going from memory to disk or vice versa, or 2) use NumPy’s mechanism for mapping the first index to the most rapidly varying data. We recommend the former if possible.
The disadvantage of the latter is that many of NumPy’s functions will yield arrays without Fortran ordering unless you are careful to use the `order` keyword. Doing this would be highly inconvenient. Otherwise, we recommend simply learning to reverse the usual order of indices when accessing elements of an array. Granted, it goes against the grain, but it is more in line with Python semantics and the natural order of the data. <https://numpy.org/doc/1.23/dev/internals.html> numpy.expand_dims ================== numpy.expand_dims(*a*, *axis*)[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/lib/shape_base.py#L512-L602) Expand the shape of an array. Insert a new axis that will appear at the `axis` position in the expanded array shape. Parameters **a**array_like Input array. **axis**int or tuple of ints Position in the expanded axes where the new axis (or axes) is placed. Deprecated since version 1.13.0: Passing an axis where `axis > a.ndim` will be treated as `axis == a.ndim`, and passing `axis < -a.ndim - 1` will be treated as `axis == 0`. This behavior is deprecated. Changed in version 1.18.0: A tuple of axes is now supported. Out of range axes as described above are now forbidden and raise an [`AxisError`](numpy.axiserror#numpy.AxisError "numpy.AxisError"). Returns **result**ndarray View of `a` with the number of dimensions increased.
See also [`squeeze`](numpy.squeeze#numpy.squeeze "numpy.squeeze") The inverse operation, removing singleton dimensions [`reshape`](numpy.reshape#numpy.reshape "numpy.reshape") Insert, remove, and combine dimensions, and resize existing ones `doc.indexing`, [`atleast_1d`](numpy.atleast_1d#numpy.atleast_1d "numpy.atleast_1d"), [`atleast_2d`](numpy.atleast_2d#numpy.atleast_2d "numpy.atleast_2d"), [`atleast_3d`](numpy.atleast_3d#numpy.atleast_3d "numpy.atleast_3d") #### Examples ``` >>> x = np.array([1, 2]) >>> x.shape (2,) ``` The following is equivalent to `x[np.newaxis, :]` or `x[np.newaxis]`: ``` >>> y = np.expand_dims(x, axis=0) >>> y array([[1, 2]]) >>> y.shape (1, 2) ``` The following is equivalent to `x[:, np.newaxis]`: ``` >>> y = np.expand_dims(x, axis=1) >>> y array([[1], [2]]) >>> y.shape (2, 1) ``` `axis` may also be a tuple: ``` >>> y = np.expand_dims(x, axis=(0, 1)) >>> y array([[[1, 2]]]) ``` ``` >>> y = np.expand_dims(x, axis=(2, 0)) >>> y array([[[1], [2]]]) ``` Note that some examples may use `None` instead of `np.newaxis`. These are the same objects: ``` >>> np.newaxis is None True ``` © 2005–2022 NumPy Developers Licensed under the 3-clause BSD License. <https://numpy.org/doc/1.23/reference/generated/numpy.expand_dims.htmlIndexing on ndarrays ==================== See also [Indexing routines](../reference/arrays.indexing#routines-indexing) [`ndarrays`](../reference/generated/numpy.ndarray#numpy.ndarray "numpy.ndarray") can be indexed using the standard Python `x[obj]` syntax, where *x* is the array and *obj* the selection. There are different kinds of indexing available depending on *obj*: basic indexing, advanced indexing and field access. Most of the following examples show the use of indexing when referencing data in an array. The examples work just as well when assigning to an array. See [Assigning values to indexed arrays](#assigning-values-to-indexed-arrays) for specific examples and explanations on how assignments work. 
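As noted above, the same indexing expressions work on the left-hand side of an assignment; a quick sketch:

```python
import numpy as np

x = np.arange(10)
x[2:5] = -1                 # broadcast a scalar over the slice
print(x[:6])                # [ 0  1 -1 -1 -1  5]
x[::3] = [10, 20, 30, 40]   # sequence must match the selection's length
print(x[::3])               # [10 20 30 40]
```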
Note that in Python, `x[(exp1, exp2, ..., expN)]` is equivalent to `x[exp1, exp2, ..., expN]`; the latter is just syntactic sugar for the former. Basic indexing -------------- ### Single element indexing Single element indexing works exactly like that for other standard Python sequences. It is 0-based, and accepts negative indices for indexing from the end of the array. ``` >>> x = np.arange(10) >>> x[2] 2 >>> x[-2] 8 ``` It is not necessary to separate each dimension’s index into its own set of square brackets. ``` >>> x.shape = (2, 5) # now x is 2-dimensional >>> x[1, 3] 8 >>> x[1, -1] 9 ``` Note that if one indexes a multidimensional array with fewer indices than dimensions, one gets a subdimensional array. For example: ``` >>> x[0] array([0, 1, 2, 3, 4]) ``` That is, each index specified selects the array corresponding to the rest of the dimensions selected. In the above example, choosing 0 means that the remaining dimension of length 5 is being left unspecified, and that what is returned is an array of that dimensionality and size. It must be noted that the returned array is a [view](../glossary#term-view), i.e., it is not a copy of the original, but points to the same values in memory as does the original array. In this case, the 1-D array at the first position (0) is returned. So using a single index on the returned array, results in a single element being returned. That is: ``` >>> x[0][2] 2 ``` So note that `x[0, 2] == x[0][2]` though the second case is more inefficient as a new temporary array is created after the first index that is subsequently indexed by 2. Note NumPy uses C-order indexing. That means that the last index usually represents the most rapidly changing memory location, unlike Fortran or IDL, where the first index represents the most rapidly changing location in memory. This difference represents a great potential for confusion. ### Slicing and striding Basic slicing extends Python’s basic concept of slicing to N dimensions. 
Basic slicing occurs when *obj* is a [`slice`](https://docs.python.org/3/library/functions.html#slice "(in Python v3.10)") object (constructed by `start:stop:step` notation inside of brackets), an integer, or a tuple of slice objects and integers. [`Ellipsis`](https://docs.python.org/3/library/constants.html#Ellipsis "(in Python v3.10)") and [`newaxis`](../reference/constants#numpy.newaxis "numpy.newaxis") objects can be interspersed with these as well. The simplest case of indexing with *N* integers returns an [array scalar](../reference/arrays.scalars#arrays-scalars) representing the corresponding item. As in Python, all indices are zero-based: for the *i*-th index \(n_i\), the valid range is \(0 \le n_i < d_i\) where \(d_i\) is the *i*-th element of the shape of the array. Negative indices are interpreted as counting from the end of the array (*i.e.*, if \(n_i < 0\), it means \(n_i + d_i\)). All arrays generated by basic slicing are always [views](../glossary#term-view) of the original array. Note NumPy slicing creates a [view](../glossary#term-view) instead of a copy as in the case of built-in Python sequences such as string, tuple and list. Care must be taken when extracting a small portion from a large array which becomes useless after the extraction, because the small portion extracted contains a reference to the large original array whose memory will not be released until all arrays derived from it are garbage-collected. In such cases an explicit `copy()` is recommended. The standard rules of sequence slicing apply to basic slicing on a per-dimension basis (including using a step index). Some useful concepts to remember include: * The basic slice syntax is `i:j:k` where *i* is the starting index, *j* is the stopping index, and *k* is the step (\(k\neq0\)). This selects the *m* elements (in the corresponding dimension) with index values *i*, *i + k*, 
, *i + (m - 1) k* where \(m = q + (r\neq0)\) and *q* and *r* are the quotient and remainder obtained by dividing *j - i* by *k*: *j - i = q k + r*, so that *i + (m - 1) k < j*. For example: ``` >>> x = np.array([0, 1, 2, 3, 4, 5, 6, 7, 8, 9]) >>> x[1:7:2] array([1, 3, 5]) ``` * Negative *i* and *j* are interpreted as *n + i* and *n + j* where *n* is the number of elements in the corresponding dimension. Negative *k* makes stepping go towards smaller indices. From the above example: ``` >>> x[-2:10] array([8, 9]) >>> x[-3:3:-1] array([7, 6, 5, 4]) ``` * Assume *n* is the number of elements in the dimension being sliced. Then, if *i* is not given it defaults to 0 for *k > 0* and *n - 1* for *k < 0* . If *j* is not given it defaults to *n* for *k > 0* and *-n-1* for *k < 0* . If *k* is not given it defaults to 1. Note that `::` is the same as `:` and means select all indices along this axis. From the above example: ``` >>> x[5:] array([5, 6, 7, 8, 9]) ``` * If the number of objects in the selection tuple is less than *N*, then `:` is assumed for any subsequent dimensions. For example: ``` >>> x = np.array([[[1],[2],[3]], [[4],[5],[6]]]) >>> x.shape (2, 3, 1) >>> x[1:2] array([[[4], [5], [6]]]) ``` * An integer, *i*, returns the same values as `i:i+1` **except** the dimensionality of the returned object is reduced by 1. In particular, a selection tuple with the *p*-th element an integer (and all other entries `:`) returns the corresponding sub-array with dimension *N - 1*. If *N = 1* then the returned object is an array scalar. These objects are explained in [Scalars](../reference/arrays.scalars#arrays-scalars). * If the selection tuple has all entries `:` except the *p*-th entry which is a slice object `i:j:k`, then the returned array has dimension *N* formed by concatenating the sub-arrays returned by integer indexing of elements *i*, *i+k*, 
, *i + (m - 1) k < j*, * Basic slicing with more than one non-`:` entry in the slicing tuple, acts like repeated application of slicing using a single non-`:` entry, where the non-`:` entries are successively taken (with all other non-`:` entries replaced by `:`). Thus, `x[ind1, ..., ind2,:]` acts like `x[ind1][..., ind2, :]` under basic slicing. Warning The above is **not** true for advanced indexing. * You may use slicing to set values in the array, but (unlike lists) you can never grow the array. The size of the value to be set in `x[obj] = value` must be (broadcastable) to the same shape as `x[obj]`. * A slicing tuple can always be constructed as *obj* and used in the `x[obj]` notation. Slice objects can be used in the construction in place of the `[start:stop:step]` notation. For example, `x[1:10:5, ::-1]` can also be implemented as `obj = (slice(1, 10, 5), slice(None, None, -1)); x[obj]` . This can be useful for constructing generic code that works on arrays of arbitrary dimensions. See [Dealing with variable numbers of indices within programs](#dealing-with-variable-indices) for more information. ### Dimensional indexing tools There are some tools to facilitate the easy matching of array shapes with expressions and in assignments. [`Ellipsis`](https://docs.python.org/3/library/constants.html#Ellipsis "(in Python v3.10)") expands to the number of `:` objects needed for the selection tuple to index all dimensions. In most cases, this means that the length of the expanded selection tuple is `x.ndim`. There may only be a single ellipsis present. From the above example: ``` >>> x[..., 0] array([[1, 2, 3], [4, 5, 6]]) ``` This is equivalent to: ``` >>> x[:, :, 0] array([[1, 2, 3], [4, 5, 6]]) ``` Each [`newaxis`](../reference/constants#numpy.newaxis "numpy.newaxis") object in the selection tuple serves to expand the dimensions of the resulting selection by one unit-length dimension. 
The added dimension is the position of the [`newaxis`](../reference/constants#numpy.newaxis "numpy.newaxis") object in the selection tuple. [`newaxis`](../reference/constants#numpy.newaxis "numpy.newaxis") is an alias for `None`, and `None` can be used in place of this with the same result. From the above example: ``` >>> x[:, np.newaxis, :, :].shape (2, 1, 3, 1) >>> x[:, None, :, :].shape (2, 1, 3, 1) ``` This can be handy to combine two arrays in a way that otherwise would require explicit reshaping operations. For example: ``` >>> x = np.arange(5) >>> x[:, np.newaxis] + x[np.newaxis, :] array([[0, 1, 2, 3, 4], [1, 2, 3, 4, 5], [2, 3, 4, 5, 6], [3, 4, 5, 6, 7], [4, 5, 6, 7, 8]]) ``` Advanced indexing ----------------- Advanced indexing is triggered when the selection object, *obj*, is a non-tuple sequence object, an [`ndarray`](../reference/generated/numpy.ndarray#numpy.ndarray "numpy.ndarray") (of data type integer or bool), or a tuple with at least one sequence object or ndarray (of data type integer or bool). There are two types of advanced indexing: integer and Boolean. Advanced indexing always returns a *copy* of the data (contrast with basic slicing that returns a [view](../glossary#term-view)). Warning The definition of advanced indexing means that `x[(1, 2, 3),]` is fundamentally different than `x[(1, 2, 3)]`. The latter is equivalent to `x[1, 2, 3]` which will trigger basic selection while the former will trigger advanced indexing. Be sure to understand why this occurs. Also recognize that `x[[1, 2, 3]]` will trigger advanced indexing, whereas due to the deprecated Numeric compatibility mentioned above, `x[[1, 2, slice(None)]]` will trigger basic slicing. ### Integer array indexing Integer array indexing allows selection of arbitrary items in the array based on their *N*-dimensional index. Each integer array represents a number of indices into that dimension. 
Negative values are permitted in the index arrays and work as they do with single indices or slices: ``` >>> x = np.arange(10, 1, -1) >>> x array([10, 9, 8, 7, 6, 5, 4, 3, 2]) >>> x[np.array([3, 3, 1, 8])] array([7, 7, 9, 2]) >>> x[np.array([3, 3, -3, 8])] array([7, 7, 4, 2]) ``` If the index values are out of bounds then an `IndexError` is thrown: ``` >>> x = np.array([[1, 2], [3, 4], [5, 6]]) >>> x[np.array([1, -1])] array([[3, 4], [5, 6]]) >>> x[np.array([3, 4])] Traceback (most recent call last): ... IndexError: index 3 is out of bounds for axis 0 with size 3 ``` When the index consists of as many integer arrays as dimensions of the array being indexed, the indexing is straightforward, but different from slicing. Advanced indices are always [broadcast](basics.broadcasting#basics-broadcasting) and iterated as *one*: ``` result[i_1, ..., i_M] == x[ind_1[i_1, ..., i_M], ind_2[i_1, ..., i_M], ..., ind_N[i_1, ..., i_M]] ``` Note that the resulting shape is identical to the (broadcast) indexing array shapes `ind_1, ..., ind_N`. If the indices cannot be broadcast to the same shape, an exception `IndexError: shape mismatch: indexing arrays could not be broadcast together with shapes...` is raised. Indexing with multidimensional index arrays tends to be a more unusual use, but such indices are permitted, and they are useful for some problems. We’ll start with the simplest multidimensional case: ``` >>> y = np.arange(35).reshape(5, 7) >>> y array([[ 0, 1, 2, 3, 4, 5, 6], [ 7, 8, 9, 10, 11, 12, 13], [14, 15, 16, 17, 18, 19, 20], [21, 22, 23, 24, 25, 26, 27], [28, 29, 30, 31, 32, 33, 34]]) >>> y[np.array([0, 2, 4]), np.array([0, 1, 2])] array([ 0, 15, 30]) ``` In this case, if the index arrays have a matching shape, and there is an index array for each dimension of the array being indexed, the resultant array has the same shape as the index arrays, and the values correspond to the index set for each position in the index arrays.
In this example, the first index value is 0 for both index arrays, and thus the first value of the resultant array is `y[0, 0]`. The next value is `y[2, 1]`, and the last is `y[4, 2]`. If the index arrays do not have the same shape, there is an attempt to broadcast them to the same shape. If they cannot be broadcast to the same shape, an exception is raised: ``` >>> y[np.array([0, 2, 4]), np.array([0, 1])] Traceback (most recent call last): ... IndexError: shape mismatch: indexing arrays could not be broadcast together with shapes (3,) (2,) ``` The broadcasting mechanism permits index arrays to be combined with scalars for other indices. The effect is that the scalar value is used for all the corresponding values of the index arrays: ``` >>> y[np.array([0, 2, 4]), 1] array([ 1, 15, 29]) ``` Jumping to the next level of complexity, it is possible to only partially index an array with index arrays. It takes a bit of thought to understand what happens in such cases. For example if we just use one index array with y: ``` >>> y[np.array([0, 2, 4])] array([[ 0, 1, 2, 3, 4, 5, 6], [14, 15, 16, 17, 18, 19, 20], [28, 29, 30, 31, 32, 33, 34]]) ``` It results in the construction of a new array where each value of the index array selects one row from the array being indexed and the resultant array has the resulting shape (number of index elements, size of row). In general, the shape of the resultant array will be the concatenation of the shape of the index array (or the shape that all the index arrays were broadcast to) with the shape of any unused dimensions (those not indexed) in the array being indexed. #### Example From each row, a specific element should be selected. The row index is just `[0, 1, 2]` and the column index specifies the element to choose for the corresponding row, here `[0, 1, 0]`. 
Using both together the task can be solved using advanced indexing: ``` >>> x = np.array([[1, 2], [3, 4], [5, 6]]) >>> x[[0, 1, 2], [0, 1, 0]] array([1, 4, 5]) ``` To achieve a behaviour similar to the basic slicing above, broadcasting can be used. The function [`ix_`](../reference/generated/numpy.ix_#numpy.ix_ "numpy.ix_") can help with this broadcasting. This is best understood with an example. #### Example From a 4x3 array the corner elements should be selected using advanced indexing. Thus all elements for which the column is one of `[0, 2]` and the row is one of `[0, 3]` need to be selected. To use advanced indexing one needs to select all elements *explicitly*. Using the method explained previously one could write: ``` >>> x = np.array([[ 0, 1, 2], ... [ 3, 4, 5], ... [ 6, 7, 8], ... [ 9, 10, 11]]) >>> rows = np.array([[0, 0], ... [3, 3]], dtype=np.intp) >>> columns = np.array([[0, 2], ... [0, 2]], dtype=np.intp) >>> x[rows, columns] array([[ 0, 2], [ 9, 11]]) ``` However, since the indexing arrays above just repeat themselves, broadcasting can be used (compare operations such as `rows[:, np.newaxis] + columns`) to simplify this: ``` >>> rows = np.array([0, 3], dtype=np.intp) >>> columns = np.array([0, 2], dtype=np.intp) >>> rows[:, np.newaxis] array([[0], [3]]) >>> x[rows[:, np.newaxis], columns] array([[ 0, 2], [ 9, 11]]) ``` This broadcasting can also be achieved using the function [`ix_`](../reference/generated/numpy.ix_#numpy.ix_ "numpy.ix_"): ``` >>> x[np.ix_(rows, columns)] array([[ 0, 2], [ 9, 11]]) ``` Note that without the `np.ix_` call, only the diagonal elements would be selected: ``` >>> x[rows, columns] array([ 0, 11]) ``` This difference is the most important thing to remember about indexing with multiple advanced indices. #### Example A real-life example of where advanced indexing may be useful is for a color lookup table where we want to map the values of an image into RGB triples for display. The lookup table could have a shape (nlookup, 3). 
Indexing such an array with an image with shape (ny, nx) with dtype=np.uint8 (or any integer type so long as the values are within the bounds of the lookup table) will result in an array of shape (ny, nx, 3) where a triple of RGB values is associated with each pixel location. ### Boolean array indexing This advanced indexing occurs when *obj* is an array object of Boolean type, such as may be returned from comparison operators. A single boolean index array is practically identical to `x[obj.nonzero()]` where, as described above, [`obj.nonzero()`](../reference/generated/numpy.ndarray.nonzero#numpy.ndarray.nonzero "numpy.ndarray.nonzero") returns a tuple (of length [`obj.ndim`](../reference/generated/numpy.ndarray.ndim#numpy.ndarray.ndim "numpy.ndarray.ndim")) of integer index arrays showing the [`True`](https://docs.python.org/3/library/constants.html#True "(in Python v3.10)") elements of *obj*. However, it is faster when `obj.shape == x.shape`. If `obj.ndim == x.ndim`, `x[obj]` returns a 1-dimensional array filled with the elements of *x* corresponding to the [`True`](https://docs.python.org/3/library/constants.html#True "(in Python v3.10)") values of *obj*. The search order will be [row-major](../glossary#term-row-major), C-style. If *obj* has [`True`](https://docs.python.org/3/library/constants.html#True "(in Python v3.10)") values at entries that are outside of the bounds of *x*, then an index error will be raised. If *obj* is smaller than *x*, it is identical to filling it with [`False`](https://docs.python.org/3/library/constants.html#False "(in Python v3.10)"). A common use case for this is filtering for desired element values.
For example, one may wish to select all entries from an array which are not [`NaN`](../reference/constants#numpy.NaN "numpy.NaN"): ``` >>> x = np.array([[1., 2.], [np.nan, 3.], [np.nan, np.nan]]) >>> x[~np.isnan(x)] array([1., 2., 3.]) ``` Or wish to add a constant to all negative elements: ``` >>> x = np.array([1., -1., -2., 3]) >>> x[x < 0] += 20 >>> x array([ 1., 19., 18., 3.]) ``` In general if an index includes a Boolean array, the result will be identical to inserting `obj.nonzero()` into the same position and using the integer array indexing mechanism described above. `x[ind_1, boolean_array, ind_2]` is equivalent to `x[(ind_1,) + boolean_array.nonzero() + (ind_2,)]`. If there is only one Boolean array and no integer indexing array present, this is straightforward. Care must only be taken to make sure that the boolean index has *exactly* as many dimensions as it is supposed to work with. In general, when the boolean array has fewer dimensions than the array being indexed, this is equivalent to `x[b, ...]`, which means x is indexed by b followed by as many `:` as are needed to fill out the rank of x. Thus the shape of the result is one dimension containing the number of True elements of the boolean array, followed by the remaining dimensions of the array being indexed: ``` >>> x = np.arange(35).reshape(5, 7) >>> b = x > 20 >>> b[:, 5] array([False, False, False, True, True]) >>> x[b[:, 5]] array([[21, 22, 23, 24, 25, 26, 27], [28, 29, 30, 31, 32, 33, 34]]) ``` Here the 4th and 5th rows are selected from the indexed array and combined to make a 2-D array. 
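The `obj.nonzero()` equivalence for a boolean index mixed with integer indices can be checked directly. A minimal sketch (the array and mask below are made up for illustration):

```python
import numpy as np

# Mixing a boolean array with integer indices behaves as if the boolean
# array were replaced by the index arrays from its .nonzero().
x = np.arange(24).reshape(2, 3, 4)
b = np.array([True, False, True])        # boolean index for axis 1

direct = x[1, b, 2]                      # boolean mixed with integers
expanded = x[(1,) + b.nonzero() + (2,)]  # equivalent integer-only form

print(direct)                            # -> [14 22]
print(np.array_equal(direct, expanded))  # -> True
```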
#### Example From an array, select all rows which sum up to less or equal two: ``` >>> x = np.array([[0, 1], [1, 1], [2, 2]]) >>> rowsum = x.sum(-1) >>> x[rowsum <= 2, :] array([[0, 1], [1, 1]]) ``` Combining multiple Boolean indexing arrays or a Boolean with an integer indexing array can best be understood with the [`obj.nonzero()`](../reference/generated/numpy.ndarray.nonzero#numpy.ndarray.nonzero "numpy.ndarray.nonzero") analogy. The function [`ix_`](../reference/generated/numpy.ix_#numpy.ix_ "numpy.ix_") also supports boolean arrays and will work without any surprises. #### Example Use boolean indexing to select all rows adding up to an even number. At the same time columns 0 and 2 should be selected with an advanced integer index. Using the [`ix_`](../reference/generated/numpy.ix_#numpy.ix_ "numpy.ix_") function this can be done with: ``` >>> x = np.array([[ 0, 1, 2], ... [ 3, 4, 5], ... [ 6, 7, 8], ... [ 9, 10, 11]]) >>> rows = (x.sum(-1) % 2) == 0 >>> rows array([False, True, False, True]) >>> columns = [0, 2] >>> x[np.ix_(rows, columns)] array([[ 3, 5], [ 9, 11]]) ``` Without the `np.ix_` call, only the diagonal elements would be selected. 
Or without `np.ix_` (compare the integer array examples): ``` >>> rows = rows.nonzero()[0] >>> x[rows[:, np.newaxis], columns] array([[ 3, 5], [ 9, 11]]) ``` #### Example Using a 2-D boolean array of shape (2, 3) with four True elements to select rows from a 3-D array of shape (2, 3, 5) results in a 2-D result of shape (4, 5): ``` >>> x = np.arange(30).reshape(2, 3, 5) >>> x array([[[ 0, 1, 2, 3, 4], [ 5, 6, 7, 8, 9], [10, 11, 12, 13, 14]], [[15, 16, 17, 18, 19], [20, 21, 22, 23, 24], [25, 26, 27, 28, 29]]]) >>> b = np.array([[True, True, False], [False, True, True]]) >>> x[b] array([[ 0, 1, 2, 3, 4], [ 5, 6, 7, 8, 9], [20, 21, 22, 23, 24], [25, 26, 27, 28, 29]]) ``` ### Combining advanced and basic indexing When there is at least one slice (`:`), ellipsis (`...`) or [`newaxis`](../reference/constants#numpy.newaxis "numpy.newaxis") in the index (or the array has more dimensions than there are advanced indices), then the behaviour can be more complicated. It is like concatenating the indexing result for each advanced index element. In the simplest case, there is only a *single* advanced index combined with a slice. For example: ``` >>> y = np.arange(35).reshape(5,7) >>> y[np.array([0, 2, 4]), 1:3] array([[ 1, 2], [15, 16], [29, 30]]) ``` In effect, the slice and index array operations are independent. The slice operation extracts columns with index 1 and 2 (i.e. the 2nd and 3rd columns), followed by the index array operation which extracts rows with index 0, 2 and 4 (i.e. the first, third and fifth rows). This is equivalent to: ``` >>> y[:, 1:3][np.array([0, 2, 4]), :] array([[ 1, 2], [15, 16], [29, 30]]) ``` A single advanced index can, for example, replace a slice and the result array will be the same. However, it is a copy and may have a different memory layout. A slice is preferable when it is possible. For example: ``` >>> x = np.array([[ 0, 1, 2], ... [ 3, 4, 5], ... [ 6, 7, 8], ...
[ 9, 10, 11]]) >>> x[1:2, 1:3] array([[4, 5]]) >>> x[1:2, [1, 2]] array([[4, 5]]) ``` The easiest way to understand a combination of *multiple* advanced indices may be to think in terms of the resulting shape. There are two parts to the indexing operation, the subspace defined by the basic indexing (excluding integers) and the subspace from the advanced indexing part. Two cases of index combination need to be distinguished: * The advanced indices are separated by a slice, [`Ellipsis`](https://docs.python.org/3/library/constants.html#Ellipsis "(in Python v3.10)") or [`newaxis`](../reference/constants#numpy.newaxis "numpy.newaxis"). For example `x[arr1, :, arr2]`. * The advanced indices are all next to each other. For example `x[..., arr1, arr2, :]` but *not* `x[arr1, :, 1]` since `1` is an advanced index in this regard. In the first case, the dimensions resulting from the advanced indexing operation come first in the result array, and the subspace dimensions after that. In the second case, the dimensions from the advanced indexing operations are inserted into the result array at the same spot as they were in the initial array (the latter logic is what makes simple advanced indexing behave just like slicing). #### Example Suppose `x.shape` is (10, 20, 30) and `ind` is a (2, 3, 4)-shaped indexing [`intp`](../reference/arrays.scalars#numpy.intp "numpy.intp") array, then `result = x[..., ind, :]` has shape (10, 2, 3, 4, 30) because the (20,)-shaped subspace has been replaced with a (2, 3, 4)-shaped broadcasted indexing subspace. If we let *i, j, k* loop over the (2, 3, 4)-shaped subspace then `result[..., i, j, k, :] = x[..., ind[i, j, k], :]`. This example produces the same result as [`x.take(ind, axis=-2)`](../reference/generated/numpy.ndarray.take#numpy.ndarray.take "numpy.ndarray.take"). #### Example Let `x.shape` be (10, 20, 30, 40, 50) and suppose `ind_1` and `ind_2` can be broadcast to the shape (2, 3, 4). 
Then `x[:, ind_1, ind_2]` has shape (10, 2, 3, 4, 40, 50) because the (20, 30)-shaped subspace from X has been replaced with the (2, 3, 4) subspace from the indices. However, `x[:, ind_1, :, ind_2]` has shape (2, 3, 4, 10, 30, 50) because there is no unambiguous place to drop in the indexing subspace, thus it is tacked-on to the beginning. It is always possible to use [`.transpose()`](../reference/generated/numpy.ndarray.transpose#numpy.ndarray.transpose "numpy.ndarray.transpose") to move the subspace anywhere desired. Note that this example cannot be replicated using [`take`](../reference/generated/numpy.take#numpy.take "numpy.take"). #### Example Slicing can be combined with broadcasted boolean indices: ``` >>> x = np.arange(35).reshape(5, 7) >>> b = x > 20 >>> b array([[False, False, False, False, False, False, False], [False, False, False, False, False, False, False], [False, False, False, False, False, False, False], [ True, True, True, True, True, True, True], [ True, True, True, True, True, True, True]]) >>> x[b[:, 5], 1:3] array([[22, 23], [29, 30]]) ``` Field access ------------ See also [Structured arrays](basics.rec#structured-arrays) If the [`ndarray`](../reference/generated/numpy.ndarray#numpy.ndarray "numpy.ndarray") object is a structured array the [fields](../glossary#term-field) of the array can be accessed by indexing the array with strings, dictionary-like. Indexing `x['field-name']` returns a new [view](../glossary#term-view) to the array, which is of the same shape as *x* (except when the field is a sub-array) but of data type `x.dtype['field-name']` and contains only the part of the data in the specified field. Also, [record array](../reference/arrays.classes#arrays-classes-rec) scalars can be “indexed” this way. Indexing into a structured array can also be done with a list of field names, e.g. `x[['field-name1', 'field-name2']]`. As of NumPy 1.16, this returns a view containing only those fields. 
In older versions of NumPy, it returned a copy. See the user guide section on [Structured arrays](basics.rec#structured-arrays) for more information on multifield indexing. If the accessed field is a sub-array, the dimensions of the sub-array are appended to the shape of the result. For example: ``` >>> x = np.zeros((2, 2), dtype=[('a', np.int32), ('b', np.float64, (3, 3))]) >>> x['a'].shape (2, 2) >>> x['a'].dtype dtype('int32') >>> x['b'].shape (2, 2, 3, 3) >>> x['b'].dtype dtype('float64') ``` Flat Iterator indexing ---------------------- [`x.flat`](../reference/generated/numpy.ndarray.flat#numpy.ndarray.flat "numpy.ndarray.flat") returns an iterator that will iterate over the entire array (in C-contiguous style with the last index varying the fastest). This iterator object can also be indexed using basic slicing or advanced indexing as long as the selection object is not a tuple. This should be clear from the fact that [`x.flat`](../reference/generated/numpy.ndarray.flat#numpy.ndarray.flat "numpy.ndarray.flat") is a 1-dimensional view. It can be used for integer indexing with 1-dimensional C-style-flat indices. The shape of any returned array is therefore the shape of the integer indexing object. Assigning values to indexed arrays ---------------------------------- As mentioned, one can select a subset of an array to assign to using a single index, slices, and index and mask arrays. The value being assigned to the indexed array must be shape consistent (the same shape or broadcastable to the shape the index produces). For example, it is permitted to assign a constant to a slice: ``` >>> x = np.arange(10) >>> x[2:7] = 1 ``` or an array of the right size: ``` >>> x[2:7] = np.arange(5) ``` Note that assignments may result in changes if assigning higher types to lower types (like floats to ints) or even exceptions (assigning complex to floats or ints): ``` >>> x[1] = 1.2 >>> x[1] 1 >>> x[1] = 1.2j Traceback (most recent call last): ... 
TypeError: can't convert complex to int ``` Unlike some of the references (such as array and mask indices) assignments are always made to the original data in the array (indeed, nothing else would make sense!). Note though, that some actions may not work as one may naively expect. This particular example is often surprising to people: ``` >>> x = np.arange(0, 50, 10) >>> x array([ 0, 10, 20, 30, 40]) >>> x[np.array([1, 1, 3, 1])] += 1 >>> x array([ 0, 11, 20, 31, 40]) ``` Where people expect that the 1st location will be incremented by 3. In fact, it will only be incremented by 1. The reason is that a new array is extracted from the original (as a temporary) containing the values at 1, 1, 3, 1, then the value 1 is added to the temporary, and then the temporary is assigned back to the original array. Thus the value of the array at `x[1] + 1` is assigned to `x[1]` three times, rather than being incremented 3 times. Dealing with variable numbers of indices within programs -------------------------------------------------------- The indexing syntax is very powerful but limiting when dealing with a variable number of indices. For example, if you want to write a function that can handle arguments with various numbers of dimensions without having to write special case code for each number of possible dimensions, how can that be done? If one supplies to the index a tuple, the tuple will be interpreted as a list of indices. For example: ``` >>> z = np.arange(81).reshape(3, 3, 3, 3) >>> indices = (1, 1, 1, 1) >>> z[indices] 40 ``` So one can use code to construct tuples of any number of indices and then use these within an index. Slices can be specified within programs by using the slice() function in Python. 
For example: ``` >>> indices = (1, 1, 1, slice(0, 2)) # same as [1, 1, 1, 0:2] >>> z[indices] array([39, 40]) ``` Likewise, ellipsis can be specified by code by using the Ellipsis object: ``` >>> indices = (1, Ellipsis, 1) # same as [1, ..., 1] >>> z[indices] array([[28, 31, 34], [37, 40, 43], [46, 49, 52]]) ``` For this reason, it is possible to use the output from the [`np.nonzero()`](../reference/generated/numpy.ndarray.nonzero#numpy.ndarray.nonzero "numpy.ndarray.nonzero") function directly as an index since it always returns a tuple of index arrays. Because of the special treatment of tuples, they are not automatically converted to an array as a list would be. As an example: ``` >>> z[[1, 1, 1, 1]] # produces a large array array([[[[27, 28, 29], [30, 31, 32], ... >>> z[(1, 1, 1, 1)] # returns a single value 40 ``` Detailed notes -------------- These are some detailed notes, which are not of importance for day to day indexing (in no particular order): * The native NumPy indexing type is `intp` and may differ from the default integer array type. `intp` is the smallest data type sufficient to safely index any array; for advanced indexing it may be faster than other types. * For advanced assignments, there is in general no guarantee for the iteration order. This means that if an element is set more than once, it is not possible to predict the final result. * An empty (tuple) index is a full scalar index into a zero-dimensional array. `x[()]` returns a *scalar* if `x` is zero-dimensional and a view otherwise. On the other hand, `x[...]` always returns a view. * If a zero-dimensional array is present in the index *and* it is a full integer index the result will be a *scalar* and not a zero-dimensional array. (Advanced indexing is not triggered.) * When an ellipsis (`...`) is present but has no size (i.e. replaces zero `:`) the result will still always be an array. A view if no advanced index is present, otherwise a copy. 
* The `nonzero` equivalence for Boolean arrays does not hold for zero-dimensional boolean arrays. * When the result of an advanced indexing operation has no elements but an individual index is out of bounds, whether or not an `IndexError` is raised is undefined (e.g. `x[[], [123]]` with `123` being out of bounds). * When a *casting* error occurs during assignment (for example updating a numerical array using a sequence of strings), the array being assigned to may end up in an unpredictable partially updated state. However, if any other error (such as an out of bounds index) occurs, the array will remain unchanged. * The memory layout of an advanced indexing result is optimized for each indexing operation and no particular memory order can be assumed. * When using a subclass (especially one which manipulates its shape), the default `ndarray.__setitem__` behaviour will call `__getitem__` for *basic* indexing but not for *advanced* indexing. For such a subclass it may be preferable to call `ndarray.__setitem__` with a *base class* ndarray view on the data. This *must* be done if the subclass's `__getitem__` does not return views.

numpy.nonzero
=============

numpy.nonzero(*a*)[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/core/fromnumeric.py#L1866-L1958) Return the indices of the elements that are non-zero. Returns a tuple of arrays, one for each dimension of `a`, containing the indices of the non-zero elements in that dimension. The values in `a` are always tested and returned in row-major, C-style order. To group the indices by element, rather than dimension, use [`argwhere`](numpy.argwhere#numpy.argwhere "numpy.argwhere"), which returns a row for each non-zero element. Note When called on a zero-d array or scalar, `nonzero(a)` is treated as `nonzero(atleast_1d(a))`.
Deprecated since version 1.17.0: Use [`atleast_1d`](numpy.atleast_1d#numpy.atleast_1d "numpy.atleast_1d") explicitly if this behavior is deliberate. Parameters **a**array_like Input array. Returns **tuple_of_arrays**tuple Indices of elements that are non-zero. See also [`flatnonzero`](numpy.flatnonzero#numpy.flatnonzero "numpy.flatnonzero") Return indices that are non-zero in the flattened version of the input array. [`ndarray.nonzero`](numpy.ndarray.nonzero#numpy.ndarray.nonzero "numpy.ndarray.nonzero") Equivalent ndarray method. [`count_nonzero`](numpy.count_nonzero#numpy.count_nonzero "numpy.count_nonzero") Counts the number of non-zero elements in the input array. #### Notes While the nonzero values can be obtained with `a[nonzero(a)]`, it is recommended to use `x[x.astype(bool)]` or `x[x != 0]` instead, which will correctly handle 0-d arrays. #### Examples ``` >>> x = np.array([[3, 0, 0], [0, 4, 0], [5, 6, 0]]) >>> x array([[3, 0, 0], [0, 4, 0], [5, 6, 0]]) >>> np.nonzero(x) (array([0, 1, 2, 2]), array([0, 1, 0, 1])) ``` ``` >>> x[np.nonzero(x)] array([3, 4, 5, 6]) >>> np.transpose(np.nonzero(x)) array([[0, 0], [1, 1], [2, 0], [2, 1]]) ``` A common use for `nonzero` is to find the indices of an array, where a condition is True. Given an array `a`, the condition `a` > 3 is a boolean array and since False is interpreted as 0, np.nonzero(a > 3) yields the indices of the `a` where the condition is true. ``` >>> a = np.array([[1, 2, 3], [4, 5, 6], [7, 8, 9]]) >>> a > 3 array([[False, False, False], [ True, True, True], [ True, True, True]]) >>> np.nonzero(a > 3) (array([1, 1, 1, 2, 2, 2]), array([0, 1, 2, 0, 1, 2])) ``` Using this result to index `a` is equivalent to using the mask directly: ``` >>> a[np.nonzero(a > 3)] array([4, 5, 6, 7, 8, 9]) >>> a[a > 3] # prefer this spelling array([4, 5, 6, 7, 8, 9]) ``` `nonzero` can also be called as a method of the array. 
``` >>> (a > 3).nonzero() (array([1, 1, 1, 2, 2, 2]), array([0, 1, 2, 0, 1, 2])) ``` © 2005–2022 NumPy Developers Licensed under the 3-clause BSD License. <https://numpy.org/doc/1.23/reference/generated/numpy.nonzero.htmlBroadcasting ============ See also [`numpy.broadcast`](../reference/generated/numpy.broadcast#numpy.broadcast "numpy.broadcast") The term broadcasting describes how NumPy treats arrays with different shapes during arithmetic operations. Subject to certain constraints, the smaller array is “broadcast” across the larger array so that they have compatible shapes. Broadcasting provides a means of vectorizing array operations so that looping occurs in C instead of Python. It does this without making needless copies of data and usually leads to efficient algorithm implementations. There are, however, cases where broadcasting is a bad idea because it leads to inefficient use of memory that slows computation. NumPy operations are usually done on pairs of arrays on an element-by-element basis. In the simplest case, the two arrays must have exactly the same shape, as in the following example: ``` >>> a = np.array([1.0, 2.0, 3.0]) >>> b = np.array([2.0, 2.0, 2.0]) >>> a * b array([2., 4., 6.]) ``` NumPy’s broadcasting rule relaxes this constraint when the arrays’ shapes meet certain constraints. The simplest broadcasting example occurs when an array and a scalar value are combined in an operation: ``` >>> a = np.array([1.0, 2.0, 3.0]) >>> b = 2.0 >>> a * b array([2., 4., 6.]) ``` The result is equivalent to the previous example where `b` was an array. We can think of the scalar `b` being *stretched* during the arithmetic operation into an array with the same shape as `a`. The new elements in `b`, as shown in [Figure 1](#broadcasting-figure-1), are simply copies of the original scalar. The stretching analogy is only conceptual. 
NumPy is smart enough to use the original scalar value without actually making copies so that broadcasting operations are as memory and computationally efficient as possible. ![A scalar is broadcast to match the shape of the 1-d array it is being multiplied to.] *Figure 1* *In the simplest example of broadcasting, the scalar* `b` *is stretched to become an array of same shape as* `a` *so the shapes are compatible for element-by-element multiplication.* The code in the second example is more efficient than that in the first because broadcasting moves less memory around during the multiplication (`b` is a scalar rather than an array). General Broadcasting Rules -------------------------- When operating on two arrays, NumPy compares their shapes element-wise. It starts with the trailing (i.e. rightmost) dimensions and works its way left. Two dimensions are compatible when 1. they are equal, or 2. one of them is 1 If these conditions are not met, a `ValueError: operands could not be broadcast together` exception is thrown, indicating that the arrays have incompatible shapes. The size of the resulting array is the size that is not 1 along each axis of the inputs. Arrays do not need to have the same *number* of dimensions. For example, if you have a `256x256x3` array of RGB values, and you want to scale each color in the image by a different value, you can multiply the image by a one-dimensional array with 3 values. Lining up the sizes of the trailing axes of these arrays according to the broadcast rules, shows that they are compatible: ``` Image (3d array): 256 x 256 x 3 Scale (1d array): 3 Result (3d array): 256 x 256 x 3 ``` When either of the dimensions compared is one, the other is used. In other words, dimensions with size 1 are stretched or “copied” to match the other. 
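The compatibility rules above can be checked without building any arrays: `np.broadcast_shapes` (available since NumPy 1.20) performs exactly this right-to-left pairwise comparison on shape tuples. A small sketch:

```python
import numpy as np

# np.broadcast_shapes applies the broadcasting rules to shape tuples
# without allocating any array data.
print(np.broadcast_shapes((256, 256, 3), (3,)))   # (256, 256, 3)
print(np.broadcast_shapes((5, 1), (1, 6)))        # (5, 6)

# Incompatible trailing dimensions raise ValueError, just as an
# arithmetic operation on arrays of those shapes would:
try:
    np.broadcast_shapes((3,), (4,))
except ValueError as exc:
    print("not broadcastable:", exc)
```

This is convenient for validating shapes up front, before committing to a potentially large intermediate result.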
In the following example, both the `A` and `B` arrays have axes with length one that are expanded to a larger size during the broadcast operation: ``` A (4d array): 8 x 1 x 6 x 1 B (3d array): 7 x 1 x 5 Result (4d array): 8 x 7 x 6 x 5 ``` Broadcastable arrays -------------------- A set of arrays is called “broadcastable” to the same shape if the above rules produce a valid result. For example, if `a.shape` is (5,1), `b.shape` is (1,6), `c.shape` is (6,) and `d.shape` is () so that *d* is a scalar, then *a*, *b*, *c*, and *d* are all broadcastable to dimension (5,6); and * *a* acts like a (5,6) array where `a[:,0]` is broadcast to the other columns, * *b* acts like a (5,6) array where `b[0,:]` is broadcast to the other rows, * *c* acts like a (1,6) array and therefore like a (5,6) array where `c[:]` is broadcast to every row, and finally, * *d* acts like a (5,6) array where the single value is repeated. Here are some more examples: ``` A (2d array): 5 x 4 B (1d array): 1 Result (2d array): 5 x 4 A (2d array): 5 x 4 B (1d array): 4 Result (2d array): 5 x 4 A (3d array): 15 x 3 x 5 B (3d array): 15 x 1 x 5 Result (3d array): 15 x 3 x 5 A (3d array): 15 x 3 x 5 B (2d array): 3 x 5 Result (3d array): 15 x 3 x 5 A (3d array): 15 x 3 x 5 B (2d array): 3 x 1 Result (3d array): 15 x 3 x 5 ``` Here are examples of shapes that do not broadcast: ``` A (1d array): 3 B (1d array): 4 # trailing dimensions do not match A (2d array): 2 x 1 B (3d array): 8 x 4 x 3 # second from last dimensions mismatched ``` An example of broadcasting when a 1-d array is added to a 2-d array: ``` >>> a = np.array([[ 0.0, 0.0, 0.0], ... [10.0, 10.0, 10.0], ... [20.0, 20.0, 20.0], ... 
[30.0, 30.0, 30.0]]) >>> b = np.array([1.0, 2.0, 3.0]) >>> a + b array([[ 1., 2., 3.], [11., 12., 13.], [21., 22., 23.], [31., 32., 33.]]) >>> b = np.array([1.0, 2.0, 3.0, 4.0]) >>> a + b Traceback (most recent call last): ValueError: operands could not be broadcast together with shapes (4,3) (4,) ``` As shown in [Figure 2](#broadcasting-figure-2), `b` is added to each row of `a`. In [Figure 3](#broadcasting-figure-3), an exception is raised because of the incompatible shapes. ![A 1-d array with shape (3) is stretched to match the 2-d array of shape (4, 3) it is being added to, and the result is a 2-d array of shape (4, 3).] *Figure 2* *A one dimensional array added to a two dimensional array results in broadcasting if number of 1-d array elements matches the number of 2-d array columns.* ![A huge cross over the 2-d array of shape (4, 3) and the 1-d array of shape (4) shows that they can not be broadcast due to mismatch of shapes and thus produce no result.] *Figure 3* *When the trailing dimensions of the arrays are unequal, broadcasting fails because it is impossible to align the values in the rows of the 1st array with the elements of the 2nd arrays for element-by-element addition.* Broadcasting provides a convenient way of taking the outer product (or any other outer operation) of two arrays. The following example shows an outer addition operation of two 1-d arrays: ``` >>> a = np.array([0.0, 10.0, 20.0, 30.0]) >>> b = np.array([1.0, 2.0, 3.0]) >>> a[:, np.newaxis] + b array([[ 1., 2., 3.], [11., 12., 13.], [21., 22., 23.], [31., 32., 33.]]) ``` ![A 2-d array of shape (4, 1) and a 1-d array of shape (3) are stretched to match their shapes and produce a resultant array of shape (4, 3).] *Figure 4* *In some cases, broadcasting stretches both arrays to form an output array larger than either of the initial arrays.* Here the `newaxis` index operator inserts a new axis into `a`, making it a two-dimensional `4x1` array. 
Combining the `4x1` array with `b`, which has shape `(3,)`, yields a `4x3` array. A Practical Example: Vector Quantization ---------------------------------------- Broadcasting comes up quite often in real world problems. A typical example occurs in the vector quantization (VQ) algorithm used in information theory, classification, and other related areas. The basic operation in VQ finds the closest point in a set of points, called `codes` in VQ jargon, to a given point, called the `observation`. In the very simple, two-dimensional case shown below, the values in `observation` describe the weight and height of an athlete to be classified. The `codes` represent different classes of athletes. [1](#f1) Finding the closest point requires calculating the distance between observation and each of the codes. The shortest distance provides the best match. In this example, `codes[0]` is the closest class indicating that the athlete is likely a basketball player. ``` >>> from numpy import array, argmin, sqrt, sum >>> observation = array([111.0, 188.0]) >>> codes = array([[102.0, 203.0], ... [132.0, 193.0], ... [45.0, 155.0], ... [57.0, 173.0]]) >>> diff = codes - observation # the broadcast happens here >>> dist = sqrt(sum(diff**2,axis=-1)) >>> argmin(dist) 0 ``` In this example, the `observation` array is stretched to match the shape of the `codes` array: ``` Observation (1d array): 2 Codes (2d array): 4 x 2 Diff (2d array): 4 x 2 ``` ![A height versus weight graph that shows data of a female gymnast, marathon runner, basketball player, football lineman and the athlete to be classified. Shortest distance is found between the basketball player and the athlete to be classified.] *Figure 5* *The basic operation of vector quantization calculates the distance between an object to be classified, the dark square, and multiple known codes, the gray circles. In this simple case, the codes represent individual classes. 
More complex cases use multiple codes per class.* Typically, a large number of `observations`, perhaps read from a database, are compared to a set of `codes`. Consider this scenario: ``` Observation (2d array): 10 x 3 Codes (2d array): 5 x 3 Diff (3d array): 5 x 10 x 3 ``` The three-dimensional array, `diff`, is a consequence of broadcasting, not a necessity for the calculation. Large data sets will generate a large intermediate array that is computationally inefficient. Instead, if each observation is calculated individually using a Python loop around the code in the two-dimensional example above, a much smaller array is used. Broadcasting is a powerful tool for writing short and usually intuitive code that does its computations very efficiently in C. However, there are cases when broadcasting uses unnecessarily large amounts of memory for a particular algorithm. In these cases, it is better to write the algorithm’s outer loop in Python. This may also produce more readable code, as algorithms that use broadcasting tend to become more difficult to interpret as the number of dimensions in the broadcast increases. #### Footnotes [1](#id2) In this example, weight has more impact on the distance calculation than height because of the larger values. In practice, it is important to normalize the height and weight, often by their standard deviation across the data set, so that both have equal influence on the distance calculation. © 2005–2022 NumPy Developers Licensed under the 3-clause BSD License. <https://numpy.org/doc/1.23/user/basics.broadcasting.htmlnumpy.unique ============ numpy.unique(*ar*, *return_index=False*, *return_inverse=False*, *return_counts=False*, *axis=None*, ***, *equal_nan=True*)[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/lib/arraysetops.py#L138-L320) Find the unique elements of an array. Returns the sorted unique elements of an array. 
There are three optional outputs in addition to the unique elements: * the indices of the input array that give the unique values * the indices of the unique array that reconstruct the input array * the number of times each unique value comes up in the input array Parameters **ar**array_like Input array. Unless `axis` is specified, this will be flattened if it is not already 1-D. **return_index**bool, optional If True, also return the indices of `ar` (along the specified axis, if provided, or in the flattened array) that result in the unique array. **return_inverse**bool, optional If True, also return the indices of the unique array (for the specified axis, if provided) that can be used to reconstruct `ar`. **return_counts**bool, optional If True, also return the number of times each unique item appears in `ar`. **axis**int or None, optional The axis to operate on. If None, `ar` will be flattened. If an integer, the subarrays indexed by the given axis will be flattened and treated as the elements of a 1-D array with the dimension of the given axis, see the notes for more details. Object arrays or structured arrays that contain objects are not supported if the `axis` kwarg is used. The default is None. New in version 1.13.0. **equal_nan**bool, optional If True, collapses multiple NaN values in the return array into one. New in version 1.24. Returns **unique**ndarray The sorted unique values. **unique_indices**ndarray, optional The indices of the first occurrences of the unique values in the original array. Only provided if `return_index` is True. **unique_inverse**ndarray, optional The indices to reconstruct the original array from the unique array. Only provided if `return_inverse` is True. **unique_counts**ndarray, optional The number of times each of the unique values comes up in the original array. Only provided if `return_counts` is True. New in version 1.9.0. 
See also [`numpy.lib.arraysetops`](numpy.lib.arraysetops#module-numpy.lib.arraysetops "numpy.lib.arraysetops") Module with a number of other functions for performing set operations on arrays. [`repeat`](numpy.repeat#numpy.repeat "numpy.repeat") Repeat elements of an array. #### Notes When an axis is specified the subarrays indexed by the axis are sorted. This is done by making the specified axis the first dimension of the array (move the axis to the first dimension to keep the order of the other axes) and then flattening the subarrays in C order. The flattened subarrays are then viewed as a structured type with each element given a label, with the effect that we end up with a 1-D array of structured types that can be treated in the same way as any other 1-D array. The result is that the flattened subarrays are sorted in lexicographic order starting with the first element. #### Examples ``` >>> np.unique([1, 1, 2, 2, 3, 3]) array([1, 2, 3]) >>> a = np.array([[1, 1], [2, 3]]) >>> np.unique(a) array([1, 2, 3]) ``` Return the unique rows of a 2D array ``` >>> a = np.array([[1, 0, 0], [1, 0, 0], [2, 3, 4]]) >>> np.unique(a, axis=0) array([[1, 0, 0], [2, 3, 4]]) ``` Return the indices of the original array that give the unique values: ``` >>> a = np.array(['a', 'b', 'b', 'c', 'a']) >>> u, indices = np.unique(a, return_index=True) >>> u array(['a', 'b', 'c'], dtype='<U1') >>> indices array([0, 1, 3]) >>> a[indices] array(['a', 'b', 'c'], dtype='<U1') ``` Reconstruct the input array from the unique values and inverse: ``` >>> a = np.array([1, 2, 6, 4, 2, 3, 2]) >>> u, indices = np.unique(a, return_inverse=True) >>> u array([1, 2, 3, 4, 6]) >>> indices array([0, 1, 4, 3, 1, 2, 1]) >>> u[indices] array([1, 2, 6, 4, 2, 3, 2]) ``` Reconstruct the input values from the unique values and counts: ``` >>> a = np.array([1, 2, 6, 4, 2, 3, 2]) >>> values, counts = np.unique(a, return_counts=True) >>> values array([1, 2, 3, 4, 6]) >>> counts array([1, 3, 1, 1, 1]) >>> 
np.repeat(values, counts) array([1, 2, 2, 2, 3, 4, 6]) # original order not preserved ``` <https://numpy.org/doc/1.23/reference/generated/numpy.unique.html>

numpy.transpose
===============

numpy.transpose(*a*, *axes=None*)[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/core/fromnumeric.py#L601-L660) Reverse or permute the axes of an array; returns the modified array. For an array a with two axes, transpose(a) gives the matrix transpose. Refer to [`numpy.ndarray.transpose`](numpy.ndarray.transpose#numpy.ndarray.transpose "numpy.ndarray.transpose") for full documentation. Parameters **a**array_like Input array. **axes**tuple or list of ints, optional If specified, it must be a tuple or list which contains a permutation of [0,1,..,N-1] where N is the number of axes of a. The i’th axis of the returned array will correspond to the axis numbered `axes[i]` of the input. If not specified, defaults to `range(a.ndim)[::-1]`, which reverses the order of the axes. Returns **p**ndarray `a` with its axes permuted. A view is returned whenever possible. See also [`ndarray.transpose`](numpy.ndarray.transpose#numpy.ndarray.transpose "numpy.ndarray.transpose") Equivalent method [`moveaxis`](numpy.moveaxis#numpy.moveaxis "numpy.moveaxis") [`argsort`](numpy.argsort#numpy.argsort "numpy.argsort") #### Notes Use `transpose(a, argsort(axes))` to invert the transposition of tensors when using the `axes` keyword argument. Transposing a 1-D array returns an unchanged view of the original array. #### Examples ``` >>> x = np.arange(4).reshape((2,2)) >>> x array([[0, 1], [2, 3]]) ``` ``` >>> np.transpose(x) array([[0, 2], [1, 3]]) ``` ``` >>> x = np.ones((1, 2, 3)) >>> np.transpose(x, (1, 0, 2)).shape (2, 1, 3) ``` ``` >>> x = np.ones((2, 3, 4, 5)) >>> np.transpose(x).shape (5, 4, 3, 2) ```
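The Notes above state that `transpose(a, argsort(axes))` inverts a transposition performed with the `axes` keyword. A brief round-trip sketch:

```python
import numpy as np

x = np.arange(120).reshape(2, 3, 4, 5)
axes = (3, 0, 2, 1)
y = np.transpose(x, axes)
print(y.shape)               # (5, 2, 4, 3)

# argsort(axes) yields the permutation that undoes the first transpose.
inv = np.argsort(axes)
print(inv)                   # [1 3 2 0]
z = np.transpose(y, inv)
print(np.array_equal(z, x))  # True
```

This works because `argsort` of a permutation is its inverse permutation: each original axis position is mapped back to where the transpose sent it.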
<https://numpy.org/doc/1.23/reference/generated/numpy.transpose.htmlnumpy.reshape ============= numpy.reshape(*a*, *newshape*, *order='C'*)[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/core/fromnumeric.py#L198-L298) Gives a new shape to an array without changing its data. Parameters **a**array_like Array to be reshaped. **newshape**int or tuple of ints The new shape should be compatible with the original shape. If an integer, then the result will be a 1-D array of that length. One shape dimension can be -1. In this case, the value is inferred from the length of the array and remaining dimensions. **order**{‘C’, ‘F’, ‘A’}, optional Read the elements of `a` using this index order, and place the elements into the reshaped array using this index order. ‘C’ means to read / write the elements using C-like index order, with the last axis index changing fastest, back to the first axis index changing slowest. ‘F’ means to read / write the elements using Fortran-like index order, with the first index changing fastest, and the last index changing slowest. Note that the ‘C’ and ‘F’ options take no account of the memory layout of the underlying array, and only refer to the order of indexing. ‘A’ means to read / write the elements in Fortran-like index order if `a` is Fortran *contiguous* in memory, C-like order otherwise. Returns **reshaped_array**ndarray This will be a new view object if possible; otherwise, it will be a copy. Note there is no guarantee of the *memory layout* (C- or Fortran- contiguous) of the returned array. See also [`ndarray.reshape`](numpy.ndarray.reshape#numpy.ndarray.reshape "numpy.ndarray.reshape") Equivalent method. #### Notes It is not always possible to change the shape of an array without copying the data. 
If you want an error to be raised when the data is copied, you should assign the new shape to the shape attribute of the array: ``` >>> a = np.zeros((10, 2)) # A transpose makes the array non-contiguous >>> b = a.T # Taking a view makes it possible to modify the shape without modifying # the initial object. >>> c = b.view() >>> c.shape = (20) Traceback (most recent call last): ... AttributeError: Incompatible shape for in-place modification. Use `.reshape()` to make a copy with the desired shape. ``` The `order` keyword gives the index ordering both for *fetching* the values from `a`, and then *placing* the values into the output array. For example, let’s say you have an array: ``` >>> a = np.arange(6).reshape((3, 2)) >>> a array([[0, 1], [2, 3], [4, 5]]) ``` You can think of reshaping as first raveling the array (using the given index order), then inserting the elements from the raveled array into the new array using the same kind of index ordering as was used for the raveling. ``` >>> np.reshape(a, (2, 3)) # C-like index ordering array([[0, 1, 2], [3, 4, 5]]) >>> np.reshape(np.ravel(a), (2, 3)) # equivalent to C ravel then C reshape array([[0, 1, 2], [3, 4, 5]]) >>> np.reshape(a, (2, 3), order='F') # Fortran-like index ordering array([[0, 4, 3], [2, 1, 5]]) >>> np.reshape(np.ravel(a, order='F'), (2, 3), order='F') array([[0, 4, 3], [2, 1, 5]]) ``` #### Examples ``` >>> a = np.array([[1,2,3], [4,5,6]]) >>> np.reshape(a, 6) array([1, 2, 3, 4, 5, 6]) >>> np.reshape(a, 6, order='F') array([1, 4, 2, 5, 3, 6]) ``` ``` >>> np.reshape(a, (3,-1)) # the unspecified value is inferred to be 2 array([[1, 2], [3, 4], [5, 6]]) ``` © 2005–2022 NumPy Developers Licensed under the 3-clause BSD License. <https://numpy.org/doc/1.23/reference/generated/numpy.reshape.htmlnumpy.flip ========== numpy.flip(*m*, *axis=None*)[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/lib/function_base.py#L252-L343) Reverse the order of elements in an array along the given axis. 
The shape of the array is preserved, but the elements are reordered. New in version 1.12.0. Parameters **m**array_like Input array. **axis**None or int or tuple of ints, optional Axis or axes along which to flip over. The default, axis=None, will flip over all of the axes of the input array. If axis is negative it counts from the last to the first axis. If axis is a tuple of ints, flipping is performed on all of the axes specified in the tuple. Changed in version 1.15.0: None and tuples of axes are supported Returns **out**array_like A view of `m` with the entries of axis reversed. Since a view is returned, this operation is done in constant time. See also [`flipud`](numpy.flipud#numpy.flipud "numpy.flipud") Flip an array vertically (axis=0). [`fliplr`](numpy.fliplr#numpy.fliplr "numpy.fliplr") Flip an array horizontally (axis=1). #### Notes flip(m, 0) is equivalent to flipud(m). flip(m, 1) is equivalent to fliplr(m). flip(m, n) corresponds to `m[...,::-1,...]` with `::-1` at position n. flip(m) corresponds to `m[::-1,::-1,...,::-1]` with `::-1` at all positions. flip(m, (0, 1)) corresponds to `m[::-1,::-1,...]` with `::-1` at position 0 and position 1. #### Examples ``` >>> A = np.arange(8).reshape((2,2,2)) >>> A array([[[0, 1], [2, 3]], [[4, 5], [6, 7]]]) >>> np.flip(A, 0) array([[[4, 5], [6, 7]], [[0, 1], [2, 3]]]) >>> np.flip(A, 1) array([[[2, 3], [0, 1]], [[6, 7], [4, 5]]]) >>> np.flip(A) array([[[7, 6], [5, 4]], [[3, 2], [1, 0]]]) >>> np.flip(A, (0, 2)) array([[[5, 4], [7, 6]], [[1, 0], [3, 2]]]) >>> A = np.random.randn(3,4,5) >>> np.all(np.flip(A,2) == A[:,:,::-1,...]) True ``` © 2005–2022 NumPy Developers Licensed under the 3-clause BSD License. <https://numpy.org/doc/1.23/reference/generated/numpy.flip.htmlnumpy.ndarray.flatten ===================== method ndarray.flatten(*order='C'*) Return a copy of the array collapsed into one dimension. Parameters **order**{‘C’, ‘F’, ‘A’, ‘K’}, optional ‘C’ means to flatten in row-major (C-style) order. 
‘F’ means to flatten in column-major (Fortran- style) order. ‘A’ means to flatten in column-major order if `a` is Fortran *contiguous* in memory, row-major order otherwise. ‘K’ means to flatten `a` in the order the elements occur in memory. The default is ‘C’. Returns **y**ndarray A copy of the input array, flattened to one dimension. See also [`ravel`](numpy.ravel#numpy.ravel "numpy.ravel") Return a flattened array. [`flat`](numpy.ndarray.flat#numpy.ndarray.flat "numpy.ndarray.flat") A 1-D flat iterator over the array. #### Examples ``` >>> a = np.array([[1,2], [3,4]]) >>> a.flatten() array([1, 2, 3, 4]) >>> a.flatten('F') array([1, 3, 2, 4]) ``` © 2005–2022 NumPy Developers Licensed under the 3-clause BSD License. <https://numpy.org/doc/1.23/reference/generated/numpy.ndarray.flatten.htmlnumpy.ravel =========== numpy.ravel(*a*, *order='C'*)[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/core/fromnumeric.py#L1755-L1859) Return a contiguous flattened array. A 1-D array, containing the elements of the input, is returned. A copy is made only if needed. As of NumPy 1.10, the returned array will have the same type as the input array. (for example, a masked array will be returned for a masked array input) Parameters **a**array_like Input array. The elements in `a` are read in the order specified by `order`, and packed as a 1-D array. **order**{‘C’,’F’, ‘A’, ‘K’}, optional The elements of `a` are read using this index order. ‘C’ means to index the elements in row-major, C-style order, with the last axis index changing fastest, back to the first axis index changing slowest. ‘F’ means to index the elements in column-major, Fortran-style order, with the first index changing fastest, and the last index changing slowest. Note that the ‘C’ and ‘F’ options take no account of the memory layout of the underlying array, and only refer to the order of axis indexing. 
‘A’ means to read the elements in Fortran-like index order if `a` is Fortran *contiguous* in memory, C-like order otherwise. ‘K’ means to read the elements in the order they occur in memory, except for reversing the data when strides are negative. By default, ‘C’ index order is used. Returns **y**array_like y is an array of the same subtype as `a`, with shape `(a.size,)`. Note that matrices are special cased for backward compatibility, if `a` is a matrix, then y is a 1-D ndarray. See also [`ndarray.flat`](numpy.ndarray.flat#numpy.ndarray.flat "numpy.ndarray.flat") 1-D iterator over an array. [`ndarray.flatten`](numpy.ndarray.flatten#numpy.ndarray.flatten "numpy.ndarray.flatten") 1-D array copy of the elements of an array in row-major order. [`ndarray.reshape`](numpy.ndarray.reshape#numpy.ndarray.reshape "numpy.ndarray.reshape") Change the shape of an array without changing its data. #### Notes In row-major, C-style order, in two dimensions, the row index varies the slowest, and the column index the quickest. This can be generalized to multiple dimensions, where row-major order implies that the index along the first axis varies slowest, and the index along the last quickest. The opposite holds for column-major, Fortran-style index ordering. When a view is desired in as many cases as possible, `arr.reshape(-1)` may be preferable. #### Examples It is equivalent to `reshape(-1, order=order)`. 
``` >>> x = np.array([[1, 2, 3], [4, 5, 6]]) >>> np.ravel(x) array([1, 2, 3, 4, 5, 6]) ``` ``` >>> x.reshape(-1) array([1, 2, 3, 4, 5, 6]) ``` ``` >>> np.ravel(x, order='F') array([1, 4, 2, 5, 3, 6]) ``` When `order` is ‘A’, it will preserve the array’s ‘C’ or ‘F’ ordering: ``` >>> np.ravel(x.T) array([1, 4, 2, 5, 3, 6]) >>> np.ravel(x.T, order='A') array([1, 2, 3, 4, 5, 6]) ``` When `order` is ‘K’, it will preserve orderings that are neither ‘C’ nor ‘F’, but won’t reverse axes: ``` >>> a = np.arange(3)[::-1]; a array([2, 1, 0]) >>> a.ravel(order='C') array([2, 1, 0]) >>> a.ravel(order='K') array([2, 1, 0]) ``` ``` >>> a = np.arange(12).reshape(2,3,2).swapaxes(1,2); a array([[[ 0, 2, 4], [ 1, 3, 5]], [[ 6, 8, 10], [ 7, 9, 11]]]) >>> a.ravel(order='C') array([ 0, 2, 4, 1, 3, 5, 6, 8, 10, 7, 9, 11]) >>> a.ravel(order='K') array([ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11]) ``` © 2005–2022 NumPy Developers Licensed under the 3-clause BSD License. <https://numpy.org/doc/1.23/reference/generated/numpy.ravel.htmlnumpy.savez_compressed ======================= numpy.savez_compressed(*file*, **args*, ***kwds*)[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/lib/npyio.py#L603-L666) Save several arrays into a single file in compressed `.npz` format. Provide arrays as keyword arguments to store them under the corresponding name in the output file: `savez(fn, x=x, y=y)`. If arrays are specified as positional arguments, i.e., `savez(fn, x, y)`, their names will be `arr_0`, `arr_1`, etc. Parameters **file**str or file Either the filename (string) or an open file (file-like object) where the data will be saved. If file is a string or a Path, the `.npz` extension will be appended to the filename if it is not already there. **args**Arguments, optional Arrays to save to the file. Please use keyword arguments (see `kwds` below) to assign names to arrays. Arrays specified as args will be named “arr_0”, “arr_1”, and so on. 
**kwds**Keyword arguments, optional Arrays to save to the file. Each array will be saved to the output file with its corresponding keyword name. Returns None See also [`numpy.save`](numpy.save#numpy.save "numpy.save") Save a single array to a binary file in NumPy format. [`numpy.savetxt`](numpy.savetxt#numpy.savetxt "numpy.savetxt") Save an array to a file as plain text. [`numpy.savez`](numpy.savez#numpy.savez "numpy.savez") Save several arrays into an uncompressed `.npz` file format [`numpy.load`](numpy.load#numpy.load "numpy.load") Load the files created by savez_compressed. #### Notes The `.npz` file format is a zipped archive of files named after the variables they contain. The archive is compressed with `zipfile.ZIP_DEFLATED` and each file in the archive contains one variable in `.npy` format. For a description of the `.npy` format, see [`numpy.lib.format`](numpy.lib.format#module-numpy.lib.format "numpy.lib.format"). When opening the saved `.npz` file with [`load`](numpy.load#numpy.load "numpy.load") a `NpzFile` object is returned. This is a dictionary-like object which can be queried for its list of arrays (with the `.files` attribute), and for the arrays themselves. #### Examples ``` >>> test_array = np.random.rand(3, 2) >>> test_vector = np.random.rand(4) >>> np.savez_compressed('/tmp/123', a=test_array, b=test_vector) >>> loaded = np.load('/tmp/123.npz') >>> print(np.array_equal(test_array, loaded['a'])) True >>> print(np.array_equal(test_vector, loaded['b'])) True ``` © 2005–2022 NumPy Developers Licensed under the 3-clause BSD License. 
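The naming behaviour described above (positional arguments become `arr_0`, `arr_1`, …, while keyword arguments keep their names) can be sketched as follows; the temporary file path is purely illustrative:

```python
import os
import tempfile

import numpy as np

a = np.arange(3)
b = np.eye(2)

# Illustrative path; ".npz" is appended automatically only if missing.
path = os.path.join(tempfile.mkdtemp(), "data.npz")

# One positional array (named arr_0) and one keyword array (named keyed).
np.savez_compressed(path, a, keyed=b)

with np.load(path) as npz:
    print(sorted(npz.files))                # ['arr_0', 'keyed']
    print(np.array_equal(npz["arr_0"], a))  # True
    print(np.array_equal(npz["keyed"], b))  # True
```

Using keyword arguments throughout is usually clearer, since the generated `arr_N` names carry no information about what each array contains.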
numpy.genfromtxt
================

numpy.genfromtxt(*fname*, *dtype=<class 'float'>*, *comments='#'*, *delimiter=None*, *skip_header=0*, *skip_footer=0*, *converters=None*, *missing_values=None*, *filling_values=None*, *usecols=None*, *names=None*, *excludelist=None*, *deletechars=" !#$%&'()*+,-./:;<=>?@[\\]^{|}~"*, *replace_space='_'*, *autostrip=False*, *case_sensitive=True*, *defaultfmt='f%i'*, *unpack=None*, *usemask=False*, *loose=True*, *invalid_raise=True*, *max_rows=None*, *encoding='bytes'*, ***, *ndmin=0*, *like=None*)[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/lib/npyio.py#L1683-L2416)

Load data from a text file, with missing values handled as specified.

Each line past the first `skip_header` lines is split at the `delimiter` character, and characters following the `comments` character are discarded.

Parameters

**fname**file, str, pathlib.Path, list of str, generator
File, filename, list, or generator to read. If the filename extension is `.gz` or `.bz2`, the file is first decompressed. Note that generators must return bytes or strings. The strings in a list or produced by a generator are treated as lines.

**dtype**dtype, optional
Data type of the resulting array. If None, the dtypes will be determined by the contents of each column, individually.

**comments**str, optional
The character used to indicate the start of a comment. All the characters occurring on a line after a comment are discarded.

**delimiter**str, int, or sequence, optional
The string used to separate values. By default, any consecutive whitespace acts as delimiter. An integer or sequence of integers can also be provided as width(s) of each field.

**skiprows**int, optional
`skiprows` was removed in numpy 1.10. Please use `skip_header` instead.

**skip_header**int, optional
The number of lines to skip at the beginning of the file.

**skip_footer**int, optional
The number of lines to skip at the end of the file.

**converters**variable, optional
The set of functions that convert the data of a column to a value. The converters can also be used to provide a default value for missing data: `converters = {3: lambda s: float(s or 0)}`.

**missing**variable, optional
`missing` was removed in numpy 1.10. Please use `missing_values` instead.

**missing_values**variable, optional
The set of strings corresponding to missing data.

**filling_values**variable, optional
The set of values to be used as default when the data are missing.

**usecols**sequence, optional
Which columns to read, with 0 being the first. For example, `usecols = (1, 4, 5)` will extract the 2nd, 5th and 6th columns.

**names**{None, True, str, sequence}, optional
If `names` is True, the field names are read from the first line after the first `skip_header` lines. This line can optionally be preceded by a comment delimiter. If `names` is a sequence or a single string of comma-separated names, the names will be used to define the field names in a structured dtype. If `names` is None, the names of the dtype fields will be used, if any.

**excludelist**sequence, optional
A list of names to exclude. This list is appended to the default list ['return', 'file', 'print']. Excluded names are appended with an underscore: for example, `file` would become `file_`.

**deletechars**str, optional
A string combining invalid characters that must be deleted from the names.

**defaultfmt**str, optional
A format used to define default field names, such as "f%i" or "f_%02i".

**autostrip**bool, optional
Whether to automatically strip white spaces from the variables.

**replace_space**char, optional
Character(s) used in replacement of white spaces in the variable names. By default, use a '_'.

**case_sensitive**{True, False, 'upper', 'lower'}, optional
If True, field names are case sensitive. If False or 'upper', field names are converted to upper case. If 'lower', field names are converted to lower case.

**unpack**bool, optional
If True, the returned array is transposed, so that arguments may be unpacked using `x, y, z = genfromtxt(...)`. When used with a structured data-type, arrays are returned for each field. Default is False.

**usemask**bool, optional
If True, return a masked array. If False, return a regular array.

**loose**bool, optional
If True, do not raise errors for invalid values.

**invalid_raise**bool, optional
If True, an exception is raised if an inconsistency is detected in the number of columns. If False, a warning is emitted and the offending lines are skipped.

**max_rows**int, optional
The maximum number of rows to read. Must not be used with `skip_footer` at the same time. If given, the value must be at least 1. Default is to read the entire file. New in version 1.10.0.

**encoding**str, optional
Encoding used to decode the input file. Does not apply when `fname` is a file object. The special value 'bytes' enables backward compatibility workarounds that ensure that you receive byte arrays when possible and passes latin1 encoded strings to converters. Override this value to receive unicode arrays and pass strings as input to converters. If set to None the system default is used. The default value is 'bytes'. New in version 1.14.0.

**ndmin**int, optional
Same parameter as [`loadtxt`](numpy.loadtxt#numpy.loadtxt "numpy.loadtxt"). New in version 1.23.0.

**like**array_like, optional
Reference object to allow the creation of arrays which are not NumPy arrays. If an array-like passed in as `like` supports the `__array_function__` protocol, the result will be defined by it. In this case, it ensures the creation of an array object compatible with that passed in via this argument. New in version 1.20.0.

Returns

**out**ndarray
Data read from the text file. If `usemask` is True, this is a masked array.
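As a quick illustration of the `missing_values`/`filling_values` pair described above (a minimal sketch; the sentinel string `"N/A"` and the fill value `-1` are arbitrary choices, not defaults):

```python
from io import StringIO

import numpy as np

# Column 1 of the second row is empty, and the last field uses the
# sentinel string "N/A"; both count as missing and are filled with -1.
s = StringIO("1,2,3\n4,,6\n7,8,N/A")
data = np.genfromtxt(s, delimiter=",",
                     missing_values="N/A", filling_values=-1)
print(data)
```

Passing `usemask=True` instead of `filling_values` would return a masked array with those two positions masked rather than filled.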
See also

[`numpy.loadtxt`](numpy.loadtxt#numpy.loadtxt "numpy.loadtxt")
equivalent function when no data is missing.

#### Notes

* When spaces are used as delimiters, or when no delimiter has been given as input, there should not be any missing data between two fields.
* When the variables are named (either by a flexible dtype or with `names`), there must not be any header in the file (else a ValueError exception is raised).
* Individual values are not stripped of spaces by default. When using a custom converter, make sure the function does remove spaces.

#### References

1. NumPy User Guide, section [I/O with NumPy](https://docs.scipy.org/doc/numpy/user/basics.io.genfromtxt.html).

#### Examples

```
>>> from io import StringIO
>>> import numpy as np
```

Comma delimited file with mixed dtype

```
>>> s = StringIO(u"1,1.3,abcde")
>>> data = np.genfromtxt(s, dtype=[('myint','i8'),('myfloat','f8'),
... ('mystring','S5')], delimiter=",")
>>> data
array((1, 1.3, b'abcde'),
      dtype=[('myint', '<i8'), ('myfloat', '<f8'), ('mystring', 'S5')])
```

Using dtype = None

```
>>> _ = s.seek(0)  # needed for StringIO example only
>>> data = np.genfromtxt(s, dtype=None,
... names = ['myint','myfloat','mystring'], delimiter=",")
>>> data
array((1, 1.3, b'abcde'),
      dtype=[('myint', '<i8'), ('myfloat', '<f8'), ('mystring', 'S5')])
```

Specifying dtype and names

```
>>> _ = s.seek(0)
>>> data = np.genfromtxt(s, dtype="i8,f8,S5",
... names=['myint','myfloat','mystring'], delimiter=",")
>>> data
array((1, 1.3, b'abcde'),
      dtype=[('myint', '<i8'), ('myfloat', '<f8'), ('mystring', 'S5')])
```

An example with fixed-width columns

```
>>> s = StringIO(u"11.3abcde")
>>> data = np.genfromtxt(s, dtype=None, names=['intvar','fltvar','strvar'],
... delimiter=[1,3,5])
>>> data
array((1, 1.3, b'abcde'),
      dtype=[('intvar', '<i8'), ('fltvar', '<f8'), ('strvar', 'S5')])
```

An example to show comments

```
>>> f = StringIO('''
... text,# of chars
... hello world,11
... numpy,5''')
>>> np.genfromtxt(f, dtype='S12,S12', delimiter=',')
array([(b'text', b''), (b'hello world', b'11'), (b'numpy', b'5')],
      dtype=[('f0', 'S12'), ('f1', 'S12')])
```

© 2005–2022 NumPy Developers
Licensed under the 3-clause BSD License.
<https://numpy.org/doc/1.23/reference/generated/numpy.genfromtxt.html>

numpy.savetxt
=============

numpy.savetxt(*fname*, *X*, *fmt='%.18e'*, *delimiter=' '*, *newline='\n'*, *header=''*, *footer=''*, *comments='# '*, *encoding=None*)[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/lib/npyio.py#L1320-L1565)

Save an array to a text file.

Parameters

**fname**filename or file handle
If the filename ends in `.gz`, the file is automatically saved in compressed gzip format. [`loadtxt`](numpy.loadtxt#numpy.loadtxt "numpy.loadtxt") understands gzipped files transparently.

**X**1D or 2D array_like
Data to be saved to a text file.

**fmt**str or sequence of strs, optional
A single format (%10.5f), a sequence of formats, or a multi-format string, e.g. 'Iteration %d -- %10.5f', in which case `delimiter` is ignored. For complex `X`, the legal options for `fmt` are:

* a single specifier, `fmt='%.4e'`, resulting in numbers formatted like `' (%s+%sj)' % (fmt, fmt)`
* a full string specifying every real and imaginary part, e.g. `' %.4e %+.4ej %.4e %+.4ej %.4e %+.4ej'` for 3 columns
* a list of specifiers, one per column - in this case, the real and imaginary part must have separate specifiers, e.g. `['%.3e + %.3ej', '(%.15e%+.15ej)']` for 2 columns

**delimiter**str, optional
String or character separating columns.

**newline**str, optional
String or character separating lines. New in version 1.5.0.

**header**str, optional
String that will be written at the beginning of the file. New in version 1.7.0.

**footer**str, optional
String that will be written at the end of the file. New in version 1.7.0.

**comments**str, optional
String that will be prepended to the `header` and `footer` strings, to mark them as comments.
Default: '# ', as expected by e.g. `numpy.loadtxt`. New in version 1.7.0.

**encoding**{None, str}, optional
Encoding used to encode the output file. Does not apply to output streams. If the encoding is something other than 'bytes' or 'latin1' you will not be able to load the file in NumPy versions < 1.14. Default is 'latin1'. New in version 1.14.0.

See also

[`save`](numpy.save#numpy.save "numpy.save")
Save an array to a binary file in NumPy `.npy` format

[`savez`](numpy.savez#numpy.savez "numpy.savez")
Save several arrays into an uncompressed `.npz` archive

[`savez_compressed`](numpy.savez_compressed#numpy.savez_compressed "numpy.savez_compressed")
Save several arrays into a compressed `.npz` archive

#### Notes

Further explanation of the `fmt` parameter (`%[flag]width[.precision]specifier`):

flags:
`-` : left justify
`+` : Forces to precede result with + or -.
`0` : Left pad the number with zeros instead of space (see width).

width:
Minimum number of characters to be printed. The value is not truncated if it has more characters.

precision:
* For integer specifiers (eg. `d,i,o,x`), the minimum number of digits.
* For `e, E` and `f` specifiers, the number of digits to print after the decimal point.
* For `g` and `G`, the maximum number of significant digits.
* For `s`, the maximum number of characters.

specifiers:
`c` : character
`d` or `i` : signed decimal integer
`e` or `E` : scientific notation with `e` or `E`.
`f` : decimal floating point
`g,G` : use the shorter of `e,E` or `f`
`o` : signed octal
`s` : string of characters
`u` : unsigned decimal integer
`x,X` : unsigned hexadecimal integer

This explanation of `fmt` is not complete, for an exhaustive specification see [[1]](#r672d4d5b6143-1).

#### References

[1](#id1) [Format Specification Mini-Language](https://docs.python.org/library/string.html#format-specification-mini-language), Python Documentation.
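A short sketch of the `header`/`footer`/`comments` behavior described above (writing to an in-memory buffer so nothing touches disk; the header and footer text here are arbitrary):

```python
from io import StringIO

import numpy as np

x = np.linspace(0.0, 1.0, 5)

buf = StringIO()
# header and footer are each prefixed with `comments` ("# " by default),
# so loadtxt will skip them without any extra arguments.
np.savetxt(buf, x, fmt="%.3f", header="five samples in [0, 1]",
           footer="end of data")
print(buf.getvalue())

# Round trip: the commented header/footer lines are ignored on load.
y = np.loadtxt(StringIO(buf.getvalue()))
```

The same call with a filename instead of a buffer produces an identical file on disk.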
#### Examples

```
>>> x = y = z = np.arange(0.0,5.0,1.0)
>>> np.savetxt('test.out', x, delimiter=',')   # X is an array
>>> np.savetxt('test.out', (x,y,z))   # x,y,z equal sized 1D arrays
>>> np.savetxt('test.out', x, fmt='%1.4e')   # use exponential notation
```

<https://numpy.org/doc/1.23/reference/generated/numpy.savetxt.html>

numpy.array
===========

numpy.array(*object*, *dtype=None*, ***, *copy=True*, *order='K'*, *subok=False*, *ndmin=0*, *like=None*)

Create an array.

Parameters

**object**array_like
An array, any object exposing the array interface, an object whose `__array__` method returns an array, or any (nested) sequence. If object is a scalar, a 0-dimensional array containing object is returned.

**dtype**data-type, optional
The desired data-type for the array. If not given, then the type will be determined as the minimum type required to hold the objects in the sequence.

**copy**bool, optional
If true (default), then the object is copied. Otherwise, a copy will only be made if `__array__` returns a copy, if obj is a nested sequence, or if a copy is needed to satisfy any of the other requirements ([`dtype`](numpy.dtype#numpy.dtype "numpy.dtype"), `order`, etc.).

**order**{'K', 'A', 'C', 'F'}, optional
Specify the memory layout of the array. If object is not an array, the newly created array will be in C order (row major) unless 'F' is specified, in which case it will be in Fortran order (column major). If object is an array the following holds.

| order | no copy | copy=True |
| --- | --- | --- |
| 'K' | unchanged | F & C order preserved, otherwise most similar order |
| 'A' | unchanged | F order if input is F and not C, otherwise C order |
| 'C' | C order | C order |
| 'F' | F order | F order |

When `copy=False` and a copy is made for other reasons, the result is the same as if `copy=True`, with some exceptions for 'A', see the Notes section. The default order is 'K'.
**subok**bool, optional
If True, then sub-classes will be passed-through, otherwise the returned array will be forced to be a base-class array (default).

**ndmin**int, optional
Specifies the minimum number of dimensions that the resulting array should have. Ones will be prepended to the shape as needed to meet this requirement.

**like**array_like, optional
Reference object to allow the creation of arrays which are not NumPy arrays. If an array-like passed in as `like` supports the `__array_function__` protocol, the result will be defined by it. In this case, it ensures the creation of an array object compatible with that passed in via this argument. New in version 1.20.0.

Returns

**out**ndarray
An array object satisfying the specified requirements.

See also

[`empty_like`](numpy.empty_like#numpy.empty_like "numpy.empty_like")
Return an empty array with shape and type of input.

[`ones_like`](numpy.ones_like#numpy.ones_like "numpy.ones_like")
Return an array of ones with shape and type of input.

[`zeros_like`](numpy.zeros_like#numpy.zeros_like "numpy.zeros_like")
Return an array of zeros with shape and type of input.

[`full_like`](numpy.full_like#numpy.full_like "numpy.full_like")
Return a new array with shape of input filled with value.

[`empty`](numpy.empty#numpy.empty "numpy.empty")
Return a new uninitialized array.

[`ones`](numpy.ones#numpy.ones "numpy.ones")
Return a new array setting values to one.

[`zeros`](numpy.zeros#numpy.zeros "numpy.zeros")
Return a new array setting values to zero.

[`full`](numpy.full#numpy.full "numpy.full")
Return a new array of given shape filled with value.

#### Notes

When order is 'A' and [`object`](https://docs.python.org/3/library/functions.html#object "(in Python v3.10)") is an array in neither 'C' nor 'F' order, and a copy is forced by a change in dtype, then the order of the result is not necessarily 'C' as expected. This is likely a bug.
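The layout rules in the `order` table above can be checked directly through the array flags; a minimal sketch (the 2×2 values are arbitrary):

```python
import numpy as np

# A Fortran-ordered (column-major) source array.
a = np.array([[1, 2], [3, 4]], order="F")

b = np.array(a)              # order="K" (default): the copy keeps F layout
c = np.array(a, order="C")   # force a row-major copy

print(a.flags["F_CONTIGUOUS"], b.flags["F_CONTIGUOUS"],
      c.flags["C_CONTIGUOUS"])
```

Note that only 1-D (and trivially-shaped) arrays can be both C- and F-contiguous at once; for this 2×2 case the forced C copy is no longer F-contiguous.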
#### Examples

```
>>> np.array([1, 2, 3])
array([1, 2, 3])
```

Upcasting:

```
>>> np.array([1, 2, 3.0])
array([ 1.,  2.,  3.])
```

More than one dimension:

```
>>> np.array([[1, 2], [3, 4]])
array([[1, 2],
       [3, 4]])
```

Minimum dimensions 2:

```
>>> np.array([1, 2, 3], ndmin=2)
array([[1, 2, 3]])
```

Type provided:

```
>>> np.array([1, 2, 3], dtype=complex)
array([ 1.+0.j,  2.+0.j,  3.+0.j])
```

Data-type consisting of more than one element:

```
>>> x = np.array([(1,2),(3,4)], dtype=[('a','<i4'),('b','<i4')])
>>> x['a']
array([1, 3])
```

Creating an array from sub-classes:

```
>>> np.array(np.mat('1 2; 3 4'))
array([[1, 2],
       [3, 4]])
```

```
>>> np.array(np.mat('1 2; 3 4'), subok=True)
matrix([[1, 2],
        [3, 4]])
```

<https://numpy.org/doc/1.23/reference/generated/numpy.array.html>

numpy.zeros
===========

numpy.zeros(*shape*, *dtype=float*, *order='C'*, ***, *like=None*)

Return a new array of given shape and type, filled with zeros.

Parameters

**shape**int or tuple of ints
Shape of the new array, e.g., `(2, 3)` or `2`.

**dtype**data-type, optional
The desired data-type for the array, e.g., [`numpy.int8`](../arrays.scalars#numpy.int8 "numpy.int8"). Default is [`numpy.float64`](../arrays.scalars#numpy.float64 "numpy.float64").

**order**{'C', 'F'}, optional, default: 'C'
Whether to store multi-dimensional data in row-major (C-style) or column-major (Fortran-style) order in memory.

**like**array_like, optional
Reference object to allow the creation of arrays which are not NumPy arrays. If an array-like passed in as `like` supports the `__array_function__` protocol, the result will be defined by it. In this case, it ensures the creation of an array object compatible with that passed in via this argument. New in version 1.20.0.

Returns

**out**ndarray
Array of zeros with the given shape, dtype, and order.
See also

[`zeros_like`](numpy.zeros_like#numpy.zeros_like "numpy.zeros_like")
Return an array of zeros with shape and type of input.

[`empty`](numpy.empty#numpy.empty "numpy.empty")
Return a new uninitialized array.

[`ones`](numpy.ones#numpy.ones "numpy.ones")
Return a new array setting values to one.

[`full`](numpy.full#numpy.full "numpy.full")
Return a new array of given shape filled with value.

#### Examples

```
>>> np.zeros(5)
array([ 0.,  0.,  0.,  0.,  0.])
```

```
>>> np.zeros((5,), dtype=int)
array([0, 0, 0, 0, 0])
```

```
>>> np.zeros((2, 1))
array([[ 0.],
       [ 0.]])
```

```
>>> s = (2,2)
>>> np.zeros(s)
array([[ 0.,  0.],
       [ 0.,  0.]])
```

```
>>> np.zeros((2,), dtype=[('x', 'i4'), ('y', 'i4')])  # custom dtype
array([(0, 0), (0, 0)],
      dtype=[('x', '<i4'), ('y', '<i4')])
```

<https://numpy.org/doc/1.23/reference/generated/numpy.zeros.html>

numpy.zeros_like
================

numpy.zeros_like(*a*, *dtype=None*, *order='K'*, *subok=True*, *shape=None*)[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/core/numeric.py#L76-L142)

Return an array of zeros with the same shape and type as a given array.

Parameters

**a**array_like
The shape and data-type of `a` define these same attributes of the returned array.

**dtype**data-type, optional
Overrides the data type of the result. New in version 1.6.0.

**order**{'C', 'F', 'A', or 'K'}, optional
Overrides the memory layout of the result. 'C' means C-order, 'F' means F-order, 'A' means 'F' if `a` is Fortran contiguous, 'C' otherwise. 'K' means match the layout of `a` as closely as possible. New in version 1.6.0.

**subok**bool, optional
If True, then the newly created array will use the sub-class type of `a`, otherwise it will be a base-class array. Defaults to True.

**shape**int or sequence of ints, optional
Overrides the shape of the result.
If order=’K’ and the number of dimensions is unchanged, will try to keep order, otherwise, order=’C’ is implied. New in version 1.17.0. Returns **out**ndarray Array of zeros with the same shape and type as `a`. See also [`empty_like`](numpy.empty_like#numpy.empty_like "numpy.empty_like") Return an empty array with shape and type of input. [`ones_like`](numpy.ones_like#numpy.ones_like "numpy.ones_like") Return an array of ones with shape and type of input. [`full_like`](numpy.full_like#numpy.full_like "numpy.full_like") Return a new array with shape of input filled with value. [`zeros`](numpy.zeros#numpy.zeros "numpy.zeros") Return a new array setting values to zero. #### Examples ``` >>> x = np.arange(6) >>> x = x.reshape((2, 3)) >>> x array([[0, 1, 2], [3, 4, 5]]) >>> np.zeros_like(x) array([[0, 0, 0], [0, 0, 0]]) ``` ``` >>> y = np.arange(3, dtype=float) >>> y array([0., 1., 2.]) >>> np.zeros_like(y) array([0., 0., 0.]) ``` © 2005–2022 NumPy Developers Licensed under the 3-clause BSD License. <https://numpy.org/doc/1.23/reference/generated/numpy.zeros_like.htmlnumpy.ones ========== numpy.ones(*shape*, *dtype=None*, *order='C'*, ***, *like=None*)[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/core/numeric.py#L149-L206) Return a new array of given shape and type, filled with ones. Parameters **shape**int or sequence of ints Shape of the new array, e.g., `(2, 3)` or `2`. **dtype**data-type, optional The desired data-type for the array, e.g., [`numpy.int8`](../arrays.scalars#numpy.int8 "numpy.int8"). Default is [`numpy.float64`](../arrays.scalars#numpy.float64 "numpy.float64"). **order**{‘C’, ‘F’}, optional, default: C Whether to store multi-dimensional data in row-major (C-style) or column-major (Fortran-style) order in memory. **like**array_like, optional Reference object to allow the creation of arrays which are not NumPy arrays. If an array-like passed in as `like` supports the `__array_function__` protocol, the result will be defined by it. 
In this case, it ensures the creation of an array object compatible with that passed in via this argument. New in version 1.20.0.

Returns

**out**ndarray
Array of ones with the given shape, dtype, and order.

See also

[`ones_like`](numpy.ones_like#numpy.ones_like "numpy.ones_like")
Return an array of ones with shape and type of input.

[`empty`](numpy.empty#numpy.empty "numpy.empty")
Return a new uninitialized array.

[`zeros`](numpy.zeros#numpy.zeros "numpy.zeros")
Return a new array setting values to zero.

[`full`](numpy.full#numpy.full "numpy.full")
Return a new array of given shape filled with value.

#### Examples

```
>>> np.ones(5)
array([1., 1., 1., 1., 1.])
```

```
>>> np.ones((5,), dtype=int)
array([1, 1, 1, 1, 1])
```

```
>>> np.ones((2, 1))
array([[1.],
       [1.]])
```

```
>>> s = (2,2)
>>> np.ones(s)
array([[1., 1.],
       [1., 1.]])
```

<https://numpy.org/doc/1.23/reference/generated/numpy.ones.html>

numpy.ones_like
===============

numpy.ones_like(*a*, *dtype=None*, *order='K'*, *subok=True*, *shape=None*)[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/core/numeric.py#L218-L282)

Return an array of ones with the same shape and type as a given array.

Parameters

**a**array_like
The shape and data-type of `a` define these same attributes of the returned array.

**dtype**data-type, optional
Overrides the data type of the result. New in version 1.6.0.

**order**{'C', 'F', 'A', or 'K'}, optional
Overrides the memory layout of the result. 'C' means C-order, 'F' means F-order, 'A' means 'F' if `a` is Fortran contiguous, 'C' otherwise. 'K' means match the layout of `a` as closely as possible. New in version 1.6.0.

**subok**bool, optional
If True, then the newly created array will use the sub-class type of `a`, otherwise it will be a base-class array. Defaults to True.

**shape**int or sequence of ints, optional
Overrides the shape of the result.
If order='K' and the number of dimensions is unchanged, will try to keep order, otherwise, order='C' is implied. New in version 1.17.0.

Returns

**out**ndarray
Array of ones with the same shape and type as `a`.

See also

[`empty_like`](numpy.empty_like#numpy.empty_like "numpy.empty_like")
Return an empty array with shape and type of input.

[`zeros_like`](numpy.zeros_like#numpy.zeros_like "numpy.zeros_like")
Return an array of zeros with shape and type of input.

[`full_like`](numpy.full_like#numpy.full_like "numpy.full_like")
Return a new array with shape of input filled with value.

[`ones`](numpy.ones#numpy.ones "numpy.ones")
Return a new array setting values to one.

#### Examples

```
>>> x = np.arange(6)
>>> x = x.reshape((2, 3))
>>> x
array([[0, 1, 2],
       [3, 4, 5]])
>>> np.ones_like(x)
array([[1, 1, 1],
       [1, 1, 1]])
```

```
>>> y = np.arange(3, dtype=float)
>>> y
array([0., 1., 2.])
>>> np.ones_like(y)
array([1., 1., 1.])
```

<https://numpy.org/doc/1.23/reference/generated/numpy.ones_like.html>

numpy.empty
===========

numpy.empty(*shape*, *dtype=float*, *order='C'*, ***, *like=None*)

Return a new array of given shape and type, without initializing entries.

Parameters

**shape**int or tuple of int
Shape of the empty array, e.g., `(2, 3)` or `2`.

**dtype**data-type, optional
Desired output data-type for the array, e.g., [`numpy.int8`](../arrays.scalars#numpy.int8 "numpy.int8"). Default is [`numpy.float64`](../arrays.scalars#numpy.float64 "numpy.float64").

**order**{'C', 'F'}, optional, default: 'C'
Whether to store multi-dimensional data in row-major (C-style) or column-major (Fortran-style) order in memory.

**like**array_like, optional
Reference object to allow the creation of arrays which are not NumPy arrays. If an array-like passed in as `like` supports the `__array_function__` protocol, the result will be defined by it.
In this case, it ensures the creation of an array object compatible with that passed in via this argument. New in version 1.20.0.

Returns

**out**ndarray
Array of uninitialized (arbitrary) data of the given shape, dtype, and order. Object arrays will be initialized to None.

See also

[`empty_like`](numpy.empty_like#numpy.empty_like "numpy.empty_like")
Return an empty array with shape and type of input.

[`ones`](numpy.ones#numpy.ones "numpy.ones")
Return a new array setting values to one.

[`zeros`](numpy.zeros#numpy.zeros "numpy.zeros")
Return a new array setting values to zero.

[`full`](numpy.full#numpy.full "numpy.full")
Return a new array of given shape filled with value.

#### Notes

[`empty`](#numpy.empty "numpy.empty"), unlike [`zeros`](numpy.zeros#numpy.zeros "numpy.zeros"), does not set the array values to zero, and may therefore be marginally faster. On the other hand, it requires the user to manually set all the values in the array, and should be used with caution.

#### Examples

```
>>> np.empty([2, 2])
array([[ -9.74499359e+001,   6.69583040e-309],
       [  2.13182611e-314,   3.06959433e-309]])  # uninitialized
```

```
>>> np.empty([2, 2], dtype=int)
array([[-1073741821, -1067949133],
       [  496041986,    19249760]])  # uninitialized
```

<https://numpy.org/doc/1.23/reference/generated/numpy.empty.html>

numpy.empty_like
================

numpy.empty_like(*prototype*, *dtype=None*, *order='K'*, *subok=True*, *shape=None*)

Return a new array with the same shape and type as a given array.

Parameters

**prototype**array_like
The shape and data-type of `prototype` define these same attributes of the returned array.

**dtype**data-type, optional
Overrides the data type of the result. New in version 1.6.0.

**order**{'C', 'F', 'A', or 'K'}, optional
Overrides the memory layout of the result. 'C' means C-order, 'F' means F-order, 'A' means 'F' if `prototype` is Fortran contiguous, 'C' otherwise.
'K' means match the layout of `prototype` as closely as possible. New in version 1.6.0.

**subok**bool, optional
If True, then the newly created array will use the sub-class type of `prototype`, otherwise it will be a base-class array. Defaults to True.

**shape**int or sequence of ints, optional
Overrides the shape of the result. If order='K' and the number of dimensions is unchanged, will try to keep order, otherwise, order='C' is implied. New in version 1.17.0.

Returns

**out**ndarray
Array of uninitialized (arbitrary) data with the same shape and type as `prototype`.

See also

[`ones_like`](numpy.ones_like#numpy.ones_like "numpy.ones_like")
Return an array of ones with shape and type of input.

[`zeros_like`](numpy.zeros_like#numpy.zeros_like "numpy.zeros_like")
Return an array of zeros with shape and type of input.

[`full_like`](numpy.full_like#numpy.full_like "numpy.full_like")
Return a new array with shape of input filled with value.

[`empty`](numpy.empty#numpy.empty "numpy.empty")
Return a new uninitialized array.

#### Notes

This function does *not* initialize the returned array; to do that use [`zeros_like`](numpy.zeros_like#numpy.zeros_like "numpy.zeros_like") or [`ones_like`](numpy.ones_like#numpy.ones_like "numpy.ones_like") instead. It may be marginally faster than the functions that do set the array values.

#### Examples

```
>>> a = ([1,2,3], [4,5,6])  # a is array-like
>>> np.empty_like(a)
array([[-1073741821, -1073741821,           3],  # uninitialized
       [          0,           0, -1073741821]])
>>> a = np.array([[1., 2., 3.],[4.,5.,6.]])
>>> np.empty_like(a)
array([[ -2.00000715e+000,   1.48219694e-323,  -2.00000572e+000],  # uninitialized
       [  4.38791518e-305,  -2.00000715e+000,   4.17269252e-309]])
```

<https://numpy.org/doc/1.23/reference/generated/numpy.empty_like.html>

numpy.arange
============

numpy.arange([*start*, ]*stop*, [*step*, ]*dtype=None*, ***, *like=None*)

Return evenly spaced values within a given interval.
`arange` can be called with a varying number of positional arguments:

* `arange(stop)`: Values are generated within the half-open interval `[0, stop)` (in other words, the interval including `start` but excluding `stop`).
* `arange(start, stop)`: Values are generated within the half-open interval `[start, stop)`.
* `arange(start, stop, step)`: Values are generated within the half-open interval `[start, stop)`, with spacing between values given by `step`.

For integer arguments the function is roughly equivalent to the Python built-in [`range`](https://docs.python.org/3/library/stdtypes.html#range "(in Python v3.10)"), but returns an ndarray rather than a `range` instance.

When using a non-integer step, such as 0.1, it is often better to use [`numpy.linspace`](numpy.linspace#numpy.linspace "numpy.linspace"). See the Warning sections below for more information.

Parameters

**start**integer or real, optional
Start of interval. The interval includes this value. The default start value is 0.

**stop**integer or real
End of interval. The interval does not include this value, except in some cases where `step` is not an integer and floating point round-off affects the length of `out`.

**step**integer or real, optional
Spacing between values. For any output `out`, this is the distance between two adjacent values, `out[i+1] - out[i]`. The default step size is 1. If `step` is specified as a positional argument, `start` must also be given.

**dtype**dtype, optional
The type of the output array. If [`dtype`](numpy.dtype#numpy.dtype "numpy.dtype") is not given, infer the data type from the other input arguments.

**like**array_like, optional
Reference object to allow the creation of arrays which are not NumPy arrays. If an array-like passed in as `like` supports the `__array_function__` protocol, the result will be defined by it. In this case, it ensures the creation of an array object compatible with that passed in via this argument. New in version 1.20.0.
Returns

**arange**ndarray
Array of evenly spaced values. For floating point arguments, the length of the result is `ceil((stop - start)/step)`. Because of floating point overflow, this rule may result in the last element of `out` being greater than `stop`.

Warning

The length of the output might not be numerically stable.

Another stability issue is due to the internal implementation of [`numpy.arange`](#numpy.arange "numpy.arange"). The actual step value used to populate the array is `dtype(start + step) - dtype(start)` and not `step`. Precision loss can occur here, due to casting or due to using floating points when `start` is much larger than `step`. This can lead to unexpected behaviour. For example:

```
>>> np.arange(0, 5, 0.5, dtype=int)
array([0, 0, 0, 0, 0, 0, 0, 0, 0, 0])
>>> np.arange(-3, 3, 0.5, dtype=int)
array([-3, -2, -1,  0,  1,  2,  3,  4,  5,  6,  7,  8])
```

In such cases, the use of [`numpy.linspace`](numpy.linspace#numpy.linspace "numpy.linspace") should be preferred.

The built-in [`range`](https://docs.python.org/3/library/stdtypes.html#range "(in Python v3.10)") generates [Python built-in integers that have arbitrary size](https://docs.python.org/3/c-api/long.html "(in Python v3.10)"), while [`numpy.arange`](#numpy.arange "numpy.arange") produces [`numpy.int32`](../arrays.scalars#numpy.int32 "numpy.int32") or [`numpy.int64`](../arrays.scalars#numpy.int64 "numpy.int64") numbers. This may result in incorrect results for large integer values:

```
>>> power = 40
>>> modulo = 10000
>>> x1 = [(n ** power) % modulo for n in range(8)]
>>> x2 = [(n ** power) % modulo for n in np.arange(8)]
>>> print(x1)
[0, 1, 7776, 8801, 6176, 625, 6576, 4001]  # correct
>>> print(x2)
[0, 1, 7776, 7185, 0, 5969, 4816, 3361]  # incorrect
```

See also

[`numpy.linspace`](numpy.linspace#numpy.linspace "numpy.linspace")
Evenly spaced numbers with careful handling of endpoints.

[`numpy.ogrid`](numpy.ogrid#numpy.ogrid "numpy.ogrid")
Arrays of evenly spaced numbers in N-dimensions.
[`numpy.mgrid`](numpy.mgrid#numpy.mgrid "numpy.mgrid")
Grid-shaped arrays of evenly spaced numbers in N-dimensions.

#### Examples

```
>>> np.arange(3)
array([0, 1, 2])
>>> np.arange(3.0)
array([ 0.,  1.,  2.])
>>> np.arange(3,7)
array([3, 4, 5, 6])
>>> np.arange(3,7,2)
array([3, 5])
```

<https://numpy.org/doc/1.23/reference/generated/numpy.arange.html>

numpy.linspace
==============

numpy.linspace(*start*, *stop*, *num=50*, *endpoint=True*, *retstep=False*, *dtype=None*, *axis=0*)[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/core/function_base.py#L23-L175)

Return evenly spaced numbers over a specified interval.

Returns `num` evenly spaced samples, calculated over the interval [`start`, `stop`]. The endpoint of the interval can optionally be excluded.

Changed in version 1.16.0: Non-scalar `start` and `stop` are now supported.

Changed in version 1.20.0: Values are rounded towards `-inf` instead of `0` when an integer `dtype` is specified. The old behavior can still be obtained with `np.linspace(start, stop, num).astype(int)`.

Parameters

**start**array_like
The starting value of the sequence.

**stop**array_like
The end value of the sequence, unless `endpoint` is set to False. In that case, the sequence consists of all but the last of `num + 1` evenly spaced samples, so that `stop` is excluded. Note that the step size changes when `endpoint` is False.

**num**int, optional
Number of samples to generate. Default is 50. Must be non-negative.

**endpoint**bool, optional
If True, `stop` is the last sample. Otherwise, it is not included. Default is True.

**retstep**bool, optional
If True, return (`samples`, `step`), where `step` is the spacing between samples.

**dtype**dtype, optional
The type of the output array. If [`dtype`](numpy.dtype#numpy.dtype "numpy.dtype") is not given, the data type is inferred from `start` and `stop`.
The inferred dtype will never be an integer; [`float`](https://docs.python.org/3/library/functions.html#float "(in Python v3.10)") is chosen even if the arguments would produce an array of integers. New in version 1.9.0. **axis**int, optional The axis in the result to store the samples. Relevant only if start or stop are array-like. By default (0), the samples will be along a new axis inserted at the beginning. Use -1 to get an axis at the end. New in version 1.16.0. Returns **samples**ndarray There are `num` equally spaced samples in the closed interval `[start, stop]` or the half-open interval `[start, stop)` (depending on whether `endpoint` is True or False). **step**float, optional Only returned if `retstep` is True Size of spacing between samples. See also [`arange`](numpy.arange#numpy.arange "numpy.arange") Similar to [`linspace`](#numpy.linspace "numpy.linspace"), but uses a step size (instead of the number of samples). [`geomspace`](numpy.geomspace#numpy.geomspace "numpy.geomspace") Similar to [`linspace`](#numpy.linspace "numpy.linspace"), but with numbers spaced evenly on a log scale (a geometric progression). [`logspace`](numpy.logspace#numpy.logspace "numpy.logspace") Similar to [`geomspace`](numpy.geomspace#numpy.geomspace "numpy.geomspace"), but with the end points specified as logarithms. #### Examples ``` >>> np.linspace(2.0, 3.0, num=5) array([2. , 2.25, 2.5 , 2.75, 3. ]) >>> np.linspace(2.0, 3.0, num=5, endpoint=False) array([2. , 2.2, 2.4, 2.6, 2.8]) >>> np.linspace(2.0, 3.0, num=5, retstep=True) (array([2. , 2.25, 2.5 , 2.75, 3. 
]), 0.25) ``` Graphical illustration: ``` >>> import matplotlib.pyplot as plt >>> N = 8 >>> y = np.zeros(N) >>> x1 = np.linspace(0, 10, N, endpoint=True) >>> x2 = np.linspace(0, 10, N, endpoint=False) >>> plt.plot(x1, y, 'o') [<matplotlib.lines.Line2D object at 0x...>] >>> plt.plot(x2, y + 0.5, 'o') [<matplotlib.lines.Line2D object at 0x...>] >>> plt.ylim([-0.5, 1]) (-0.5, 1) >>> plt.show() ``` ![numpy-linspace-1](../../_images/numpy-linspace-1.png) <https://numpy.org/doc/1.23/reference/generated/numpy.linspace.html>

numpy.fromfunction
==================

numpy.fromfunction(*function*, *shape*, ***, *dtype=<class 'float'>*, *like=None*, ***kwargs*)[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/core/numeric.py#L1793-L1861) Construct an array by executing a function over each coordinate. The resulting array therefore has a value `fn(x, y, z)` at coordinate `(x, y, z)`. Parameters **function**callable The function is called with N parameters, where N is the rank of [`shape`](numpy.shape#numpy.shape "numpy.shape"). Each parameter represents the coordinates of the array varying along a specific axis. For example, if [`shape`](numpy.shape#numpy.shape "numpy.shape") were `(2, 2)`, then the parameters would be `array([[0, 0], [1, 1]])` and `array([[0, 1], [0, 1]])`. **shape**(N,) tuple of ints Shape of the output array, which also determines the shape of the coordinate arrays passed to `function`. **dtype**data-type, optional Data-type of the coordinate arrays passed to `function`. By default, [`dtype`](numpy.dtype#numpy.dtype "numpy.dtype") is float. **like**array_like, optional Reference object to allow the creation of arrays which are not NumPy arrays. If an array-like passed in as `like` supports the `__array_function__` protocol, the result will be defined by it. In this case, it ensures the creation of an array object compatible with that passed in via this argument. New in version 1.20.0. 
Returns **fromfunction**any The result of the call to `function` is passed back directly. Therefore the shape of [`fromfunction`](#numpy.fromfunction "numpy.fromfunction") is completely determined by `function`. If `function` returns a scalar value, the shape of [`fromfunction`](#numpy.fromfunction "numpy.fromfunction") would not match the [`shape`](numpy.shape#numpy.shape "numpy.shape") parameter. See also [`indices`](numpy.indices#numpy.indices "numpy.indices"), [`meshgrid`](numpy.meshgrid#numpy.meshgrid "numpy.meshgrid") #### Notes Keywords other than [`dtype`](numpy.dtype#numpy.dtype "numpy.dtype") are passed to `function`. #### Examples ``` >>> np.fromfunction(lambda i, j: i, (2, 2), dtype=float) array([[0., 0.], [1., 1.]]) ``` ``` >>> np.fromfunction(lambda i, j: j, (2, 2), dtype=float) array([[0., 1.], [0., 1.]]) ``` ``` >>> np.fromfunction(lambda i, j: i == j, (3, 3), dtype=int) array([[ True, False, False], [False, True, False], [False, False, True]]) ``` ``` >>> np.fromfunction(lambda i, j: i + j, (3, 3), dtype=int) array([[0, 1, 2], [1, 2, 3], [2, 3, 4]]) ``` <https://numpy.org/doc/1.23/reference/generated/numpy.fromfunction.html>

numpy.fromfile
==============

numpy.fromfile(*file*, *dtype=float*, *count=-1*, *sep=''*, *offset=0*, ***, *like=None*) Construct an array from data in a text or binary file. A highly efficient way of reading binary data with a known data-type, as well as parsing simply formatted text files. Data written using the `tofile` method can be read using this function. Parameters **file**file or str or Path Open file object or filename. Changed in version 1.17.0: [`pathlib.Path`](https://docs.python.org/3/library/pathlib.html#pathlib.Path "(in Python v3.10)") objects are now accepted. **dtype**data-type Data type of the returned array. For binary files, it is used to determine the size and byte-order of the items in the file. 
Most builtin numeric types are supported and extension types may be supported. New in version 1.18.0: Complex dtypes. **count**int Number of items to read. `-1` means all items (i.e., the complete file). **sep**str Separator between items if file is a text file. Empty (“”) separator means the file should be treated as binary. Spaces (” “) in the separator match zero or more whitespace characters. A separator consisting only of spaces must match at least one whitespace. **offset**int The offset (in bytes) from the file’s current position. Defaults to 0. Only permitted for binary files. New in version 1.17.0. **like**array_like, optional Reference object to allow the creation of arrays which are not NumPy arrays. If an array-like passed in as `like` supports the `__array_function__` protocol, the result will be defined by it. In this case, it ensures the creation of an array object compatible with that passed in via this argument. New in version 1.20.0. See also [`load`](numpy.load#numpy.load "numpy.load"), [`save`](numpy.save#numpy.save "numpy.save") [`ndarray.tofile`](numpy.ndarray.tofile#numpy.ndarray.tofile "numpy.ndarray.tofile") [`loadtxt`](numpy.loadtxt#numpy.loadtxt "numpy.loadtxt") More flexible way of loading data from a text file. #### Notes Do not rely on the combination of `tofile` and [`fromfile`](#numpy.fromfile "numpy.fromfile") for data storage, as the binary files generated are not platform independent. In particular, no byte-order or data-type information is saved. Data can be stored in the platform independent `.npy` format using [`save`](numpy.save#numpy.save "numpy.save") and [`load`](numpy.load#numpy.load "numpy.load") instead. #### Examples Construct an ndarray: ``` >>> dt = np.dtype([('time', [('min', np.int64), ('sec', np.int64)]), ... 
('temp', float)]) >>> x = np.zeros((1,), dtype=dt) >>> x['time']['min'] = 10; x['temp'] = 98.25 >>> x array([((10, 0), 98.25)], dtype=[('time', [('min', '<i8'), ('sec', '<i8')]), ('temp', '<f8')]) ``` Save the raw data to disk: ``` >>> import tempfile >>> fname = tempfile.mkstemp()[1] >>> x.tofile(fname) ``` Read the raw data from disk: ``` >>> np.fromfile(fname, dtype=dt) array([((10, 0), 98.25)], dtype=[('time', [('min', '<i8'), ('sec', '<i8')]), ('temp', '<f8')]) ``` The recommended way to store and load data: ``` >>> np.save(fname, x) >>> np.load(fname + '.npy') array([((10, 0), 98.25)], dtype=[('time', [('min', '<i8'), ('sec', '<i8')]), ('temp', '<f8')]) ``` <https://numpy.org/doc/1.23/reference/generated/numpy.fromfile.html>

numpy.all
=========

numpy.all(*a*, *axis=None*, *out=None*, *keepdims=<no value>*, ***, *where=<no value>*)[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/core/fromnumeric.py#L2406-L2490) Test whether all array elements along a given axis evaluate to True. Parameters **a**array_like Input array or object that can be converted to an array. **axis**None or int or tuple of ints, optional Axis or axes along which a logical AND reduction is performed. The default (`axis=None`) is to perform a logical AND over all the dimensions of the input array. `axis` may be negative, in which case it counts from the last to the first axis. New in version 1.7.0. If this is a tuple of ints, a reduction is performed on multiple axes, instead of a single axis or all the axes as before. **out**ndarray, optional Alternate output array in which to place the result. It must have the same shape as the expected output and its type is preserved (e.g., if `dtype(out)` is float, the result will consist of 0.0’s and 1.0’s). See [Output type determination](../../user/basics.ufuncs#ufuncs-output-type) for more details. 
**keepdims**bool, optional If this is set to True, the axes which are reduced are left in the result as dimensions with size one. With this option, the result will broadcast correctly against the input array. If the default value is passed, then `keepdims` will not be passed through to the [`all`](#numpy.all "numpy.all") method of sub-classes of [`ndarray`](numpy.ndarray#numpy.ndarray "numpy.ndarray"), however any non-default value will be. If the sub-class’ method does not implement `keepdims` any exceptions will be raised. **where**array_like of bool, optional Elements to include in checking for all `True` values. See [`reduce`](numpy.ufunc.reduce#numpy.ufunc.reduce "numpy.ufunc.reduce") for details. New in version 1.20.0. Returns **all**ndarray, bool A new boolean or array is returned unless `out` is specified, in which case a reference to `out` is returned. See also [`ndarray.all`](numpy.ndarray.all#numpy.ndarray.all "numpy.ndarray.all") equivalent method [`any`](numpy.any#numpy.any "numpy.any") Test whether any element along a given axis evaluates to True. #### Notes Not a Number (NaN), positive infinity and negative infinity evaluate to `True` because these are not equal to zero. #### Examples ``` >>> np.all([[True,False],[True,True]]) False ``` ``` >>> np.all([[True,False],[True,True]], axis=0) array([ True, False]) ``` ``` >>> np.all([-1, 4, 5]) True ``` ``` >>> np.all([1.0, np.nan]) True ``` ``` >>> np.all([[True, True], [False, True]], where=[[True], [False]]) True ``` ``` >>> o=np.array(False) >>> z=np.all([-1, 4, 5], out=o) >>> id(z), id(o), z (28293632, 28293632, array(True)) # may vary ``` 
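The `keepdims` and `where` arguments described above are not combined in the examples; a minimal sketch of how they interact (the array values below are illustrative only):

```python
import numpy as np

a = np.array([[True, False, True],
              [True, True, True]])

# keepdims=True leaves the reduced axis in place with size 1,
# so the result broadcasts back against the input array.
col_ok = np.all(a, axis=0, keepdims=True)
print(col_ok.shape)  # (1, 3)

# where= restricts which elements participate in the reduction;
# here only the second row is checked, and it is all True.
print(np.all(a, where=[[False], [True]]))  # True
```

Note that the `where` mask broadcasts against `a` just like the reduction itself, so a `(2, 1)` mask selects whole rows here.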
<https://numpy.org/doc/1.23/reference/generated/numpy.all.html>

numpy.any
=========

numpy.any(*a*, *axis=None*, *out=None*, *keepdims=<no value>*, ***, *where=<no value>*)[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/core/fromnumeric.py#L2307-L2398) Test whether any array element along a given axis evaluates to True. Returns single boolean if `axis` is `None`. Parameters **a**array_like Input array or object that can be converted to an array. **axis**None or int or tuple of ints, optional Axis or axes along which a logical OR reduction is performed. The default (`axis=None`) is to perform a logical OR over all the dimensions of the input array. `axis` may be negative, in which case it counts from the last to the first axis. New in version 1.7.0. If this is a tuple of ints, a reduction is performed on multiple axes, instead of a single axis or all the axes as before. **out**ndarray, optional Alternate output array in which to place the result. It must have the same shape as the expected output and its type is preserved (e.g., if it is of type float, then it will remain so, returning 1.0 for True and 0.0 for False, regardless of the type of `a`). See [Output type determination](../../user/basics.ufuncs#ufuncs-output-type) for more details. **keepdims**bool, optional If this is set to True, the axes which are reduced are left in the result as dimensions with size one. With this option, the result will broadcast correctly against the input array. If the default value is passed, then `keepdims` will not be passed through to the [`any`](#numpy.any "numpy.any") method of sub-classes of [`ndarray`](numpy.ndarray#numpy.ndarray "numpy.ndarray"), however any non-default value will be. If the sub-class’ method does not implement `keepdims` any exceptions will be raised. **where**array_like of bool, optional Elements to include in checking for any `True` values. See [`reduce`](numpy.ufunc.reduce#numpy.ufunc.reduce "numpy.ufunc.reduce") for details. 
New in version 1.20.0. Returns **any**bool or ndarray A new boolean or [`ndarray`](numpy.ndarray#numpy.ndarray "numpy.ndarray") is returned unless `out` is specified, in which case a reference to `out` is returned. See also [`ndarray.any`](numpy.ndarray.any#numpy.ndarray.any "numpy.ndarray.any") equivalent method [`all`](numpy.all#numpy.all "numpy.all") Test whether all elements along a given axis evaluate to True. #### Notes Not a Number (NaN), positive infinity and negative infinity evaluate to `True` because these are not equal to zero. #### Examples ``` >>> np.any([[True, False], [True, True]]) True ``` ``` >>> np.any([[True, False], [False, False]], axis=0) array([ True, False]) ``` ``` >>> np.any([-1, 0, 5]) True ``` ``` >>> np.any(np.nan) True ``` ``` >>> np.any([[True, False], [False, False]], where=[[False], [True]]) False ``` ``` >>> o=np.array(False) >>> z=np.any([-1, 4, 5], out=o) >>> z, o (array(True), array(True)) >>> # Check now that z is a reference to o >>> z is o True >>> id(z), id(o) # identity of z and o (191614240, 191614240) ``` <https://numpy.org/doc/1.23/reference/generated/numpy.any.html>

numpy.apply_along_axis
======================

numpy.apply_along_axis(*func1d*, *axis*, *arr*, **args*, ***kwargs*)[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/lib/shape_base.py#L267-L414) Apply a function to 1-D slices along the given axis. Execute `func1d(a, *args, **kwargs)` where `func1d` operates on 1-D arrays and `a` is a 1-D slice of `arr` along `axis`. 
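As a quick sketch of `apply_along_axis` forwarding extra arguments to `func1d` (the helper and array values below are illustrative only):

```python
import numpy as np

def peak_to_peak(v, scale=1.0):
    # v is a 1-D slice of the input array along the chosen axis
    return (v.max() - v.min()) * scale

arr = np.array([[1, 5, 3],
                [7, 2, 9]])

# Extra positional and keyword arguments are forwarded to func1d.
result = np.apply_along_axis(peak_to_peak, 1, arr, scale=2.0)
print(result)  # one scalar per row: 8.0 and 14.0
```

Because `peak_to_peak` returns a scalar, the `axis=1` dimension is removed and the result has shape `(2,)`.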
This is equivalent to (but faster than) the following use of [`ndindex`](numpy.ndindex#numpy.ndindex "numpy.ndindex") and [`s_`](numpy.s_#numpy.s_ "numpy.s_"), which sets each of `ii`, `jj`, and `kk` to a tuple of indices: ``` Ni, Nk = a.shape[:axis], a.shape[axis+1:] for ii in ndindex(Ni): for kk in ndindex(Nk): f = func1d(arr[ii + s_[:,] + kk]) Nj = f.shape for jj in ndindex(Nj): out[ii + jj + kk] = f[jj] ``` Equivalently, eliminating the inner loop, this can be expressed as: ``` Ni, Nk = a.shape[:axis], a.shape[axis+1:] for ii in ndindex(Ni): for kk in ndindex(Nk): out[ii + s_[...,] + kk] = func1d(arr[ii + s_[:,] + kk]) ``` Parameters **func1d**function (M,) -> (Nj…) This function should accept 1-D arrays. It is applied to 1-D slices of `arr` along the specified axis. **axis**integer Axis along which `arr` is sliced. **arr**ndarray (Ni…, M, Nk…) Input array. **args**any Additional arguments to `func1d`. **kwargs**any Additional named arguments to `func1d`. New in version 1.9.0. Returns **out**ndarray (Ni…, Nj…, Nk…) The output array. The shape of `out` is identical to the shape of `arr`, except along the `axis` dimension. This axis is removed, and replaced with new dimensions equal to the shape of the return value of `func1d`. So if `func1d` returns a scalar `out` will have one fewer dimensions than `arr`. See also [`apply_over_axes`](numpy.apply_over_axes#numpy.apply_over_axes "numpy.apply_over_axes") Apply a function repeatedly over multiple axes. #### Examples ``` >>> def my_func(a): ... """Average first and last element of a 1-D array""" ... return (a[0] + a[-1]) * 0.5 >>> b = np.array([[1,2,3], [4,5,6], [7,8,9]]) >>> np.apply_along_axis(my_func, 0, b) array([4., 5., 6.]) >>> np.apply_along_axis(my_func, 1, b) array([2., 5., 8.]) ``` For a function that returns a 1D array, the number of dimensions in `outarr` is the same as `arr`. ``` >>> b = np.array([[8,1,7], [4,3,9], [5,2,6]]) >>> np.apply_along_axis(sorted, 1, b) array([[1, 7, 8], [3, 4, 9], [2, 5, 6]]) ``` For a function that returns a higher dimensional array, those dimensions are inserted in place of the `axis` dimension. ``` >>> b = np.array([[1,2,3], [4,5,6], [7,8,9]]) >>> np.apply_along_axis(np.diag, -1, b) array([[[1, 0, 0], [0, 2, 0], [0, 0, 3]], [[4, 0, 0], [0, 5, 0], [0, 0, 6]], [[7, 0, 0], [0, 8, 0], [0, 0, 9]]]) ``` <https://numpy.org/doc/1.23/reference/generated/numpy.apply_along_axis.html>

numpy.argmax
============

numpy.argmax(*a*, *axis=None*, *out=None*, ***, *keepdims=<no value>*)[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/core/fromnumeric.py#L1127-L1216) Returns the indices of the maximum values along an axis. Parameters **a**array_like Input array. **axis**int, optional By default, the index is into the flattened array, otherwise along the specified axis. **out**array, optional If provided, the result will be inserted into this array. It should be of the appropriate shape and dtype. 
**keepdims**bool, optional If this is set to True, the axes which are reduced are left in the result as dimensions with size one. With this option, the result will broadcast correctly against the array. New in version 1.22.0. Returns **index_array**ndarray of ints Array of indices into the array. It has the same shape as `a.shape` with the dimension along `axis` removed. If `keepdims` is set to True, then the size of `axis` will be 1 with the resulting array having same shape as `a.shape`. See also [`ndarray.argmax`](numpy.ndarray.argmax#numpy.ndarray.argmax "numpy.ndarray.argmax"), [`argmin`](numpy.argmin#numpy.argmin "numpy.argmin") [`amax`](numpy.amax#numpy.amax "numpy.amax") The maximum value along a given axis. [`unravel_index`](numpy.unravel_index#numpy.unravel_index "numpy.unravel_index") Convert a flat index into an index tuple. [`take_along_axis`](numpy.take_along_axis#numpy.take_along_axis "numpy.take_along_axis") Apply `np.expand_dims(index_array, axis)` from argmax to an array as if by calling max. #### Notes In case of multiple occurrences of the maximum values, the indices corresponding to the first occurrence are returned. #### Examples ``` >>> a = np.arange(6).reshape(2,3) + 10 >>> a array([[10, 11, 12], [13, 14, 15]]) >>> np.argmax(a) 5 >>> np.argmax(a, axis=0) array([1, 1, 1]) >>> np.argmax(a, axis=1) array([2, 2]) ``` Indexes of the maximal elements of a N-dimensional array: ``` >>> ind = np.unravel_index(np.argmax(a, axis=None), a.shape) >>> ind (1, 2) >>> a[ind] 15 ``` ``` >>> b = np.arange(6) >>> b[1] = 5 >>> b array([0, 5, 2, 3, 4, 5]) >>> np.argmax(b) # Only the first occurrence is returned. 
1 ``` ``` >>> x = np.array([[4,2,3], [1,0,3]]) >>> index_array = np.argmax(x, axis=-1) >>> # Same as np.amax(x, axis=-1, keepdims=True) >>> np.take_along_axis(x, np.expand_dims(index_array, axis=-1), axis=-1) array([[4], [3]]) >>> # Same as np.amax(x, axis=-1) >>> np.take_along_axis(x, np.expand_dims(index_array, axis=-1), axis=-1).squeeze(axis=-1) array([4, 3]) ``` Setting `keepdims` to `True`, ``` >>> x = np.arange(24).reshape((2, 3, 4)) >>> res = np.argmax(x, axis=1, keepdims=True) >>> res.shape (2, 1, 4) ``` <https://numpy.org/doc/1.23/reference/generated/numpy.argmax.html>

numpy.argmin
============

numpy.argmin(*a*, *axis=None*, *out=None*, ***, *keepdims=<no value>*)[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/core/fromnumeric.py#L1223-L1312) Returns the indices of the minimum values along an axis. Parameters **a**array_like Input array. **axis**int, optional By default, the index is into the flattened array, otherwise along the specified axis. **out**array, optional If provided, the result will be inserted into this array. It should be of the appropriate shape and dtype. **keepdims**bool, optional If this is set to True, the axes which are reduced are left in the result as dimensions with size one. With this option, the result will broadcast correctly against the array. New in version 1.22.0. Returns **index_array**ndarray of ints Array of indices into the array. It has the same shape as `a.shape` with the dimension along `axis` removed. If `keepdims` is set to True, then the size of `axis` will be 1 with the resulting array having same shape as `a.shape`. See also [`ndarray.argmin`](numpy.ndarray.argmin#numpy.ndarray.argmin "numpy.ndarray.argmin"), [`argmax`](numpy.argmax#numpy.argmax "numpy.argmax") [`amin`](numpy.amin#numpy.amin "numpy.amin") The minimum value along a given axis. 
[`unravel_index`](numpy.unravel_index#numpy.unravel_index "numpy.unravel_index") Convert a flat index into an index tuple. [`take_along_axis`](numpy.take_along_axis#numpy.take_along_axis "numpy.take_along_axis") Apply `np.expand_dims(index_array, axis)` from argmin to an array as if by calling min. #### Notes In case of multiple occurrences of the minimum values, the indices corresponding to the first occurrence are returned. #### Examples ``` >>> a = np.arange(6).reshape(2,3) + 10 >>> a array([[10, 11, 12], [13, 14, 15]]) >>> np.argmin(a) 0 >>> np.argmin(a, axis=0) array([0, 0, 0]) >>> np.argmin(a, axis=1) array([0, 0]) ``` Indices of the minimum elements of an N-dimensional array: ``` >>> ind = np.unravel_index(np.argmin(a, axis=None), a.shape) >>> ind (0, 0) >>> a[ind] 10 ``` ``` >>> b = np.arange(6) + 10 >>> b[4] = 10 >>> b array([10, 11, 12, 13, 10, 15]) >>> np.argmin(b) # Only the first occurrence is returned. 0 ``` ``` >>> x = np.array([[4,2,3], [1,0,3]]) >>> index_array = np.argmin(x, axis=-1) >>> # Same as np.amin(x, axis=-1, keepdims=True) >>> np.take_along_axis(x, np.expand_dims(index_array, axis=-1), axis=-1) array([[2], [0]]) >>> # Same as np.amin(x, axis=-1) >>> np.take_along_axis(x, np.expand_dims(index_array, axis=-1), axis=-1).squeeze(axis=-1) array([2, 0]) ``` Setting `keepdims` to `True`, ``` >>> x = np.arange(24).reshape((2, 3, 4)) >>> res = np.argmin(x, axis=1, keepdims=True) >>> res.shape (2, 1, 4) ``` <https://numpy.org/doc/1.23/reference/generated/numpy.argmin.html>

numpy.average
=============

numpy.average(*a*, *axis=None*, *weights=None*, *returned=False*, ***, *keepdims=<no value>*)[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/lib/function_base.py#L396-L558) Compute the weighted average along the specified axis. Parameters **a**array_like Array containing data to be averaged. If `a` is not an array, a conversion is attempted. 
**axis**None or int or tuple of ints, optional Axis or axes along which to average `a`. The default, axis=None, will average over all of the elements of the input array. If axis is negative it counts from the last to the first axis. New in version 1.7.0. If axis is a tuple of ints, averaging is performed on all of the axes specified in the tuple instead of a single axis or all the axes as before. **weights**array_like, optional An array of weights associated with the values in `a`. Each value in `a` contributes to the average according to its associated weight. The weights array can either be 1-D (in which case its length must be the size of `a` along the given axis) or of the same shape as `a`. If `weights=None`, then all data in `a` are assumed to have a weight equal to one. The 1-D calculation is: ``` avg = sum(a * weights) / sum(weights) ``` The only constraint on `weights` is that `sum(weights)` must not be 0. **returned**bool, optional Default is `False`. If `True`, the tuple ([`average`](#numpy.average "numpy.average"), `sum_of_weights`) is returned, otherwise only the average is returned. If `weights=None`, `sum_of_weights` is equivalent to the number of elements over which the average is taken. **keepdims**bool, optional If this is set to True, the axes which are reduced are left in the result as dimensions with size one. With this option, the result will broadcast correctly against the original `a`. *Note:* `keepdims` will not work with instances of [`numpy.matrix`](numpy.matrix#numpy.matrix "numpy.matrix") or other classes whose methods do not support `keepdims`. New in version 1.23.0. Returns **retval, [sum_of_weights]**array_type or double Return the average along the specified axis. When `returned` is `True`, return a tuple with the average as the first element and the sum of the weights as the second element. `sum_of_weights` is of the same type as `retval`. The result dtype follows a general pattern. 
If `weights` is None, the result dtype will be that of `a`, or `float64` if `a` is integral. Otherwise, if `weights` is not None and `a` is non-integral, the result type will be the type of lowest precision capable of representing values of both `a` and `weights`. If `a` happens to be integral, the previous rules still apply but the result dtype will at least be `float64`. Raises ZeroDivisionError When all weights along axis are zero. See [`numpy.ma.average`](numpy.ma.average#numpy.ma.average "numpy.ma.average") for a version robust to this type of error. TypeError When the length of 1D `weights` is not the same as the shape of `a` along axis. See also [`mean`](numpy.mean#numpy.mean "numpy.mean") [`ma.average`](numpy.ma.average#numpy.ma.average "numpy.ma.average") average for masked arrays – useful if your data contains “missing” values [`numpy.result_type`](numpy.result_type#numpy.result_type "numpy.result_type") Returns the type that results from applying the numpy type promotion rules to the arguments. #### Examples ``` >>> data = np.arange(1, 5) >>> data array([1, 2, 3, 4]) >>> np.average(data) 2.5 >>> np.average(np.arange(1, 11), weights=np.arange(10, 0, -1)) 4.0 ``` ``` >>> data = np.arange(6).reshape((3, 2)) >>> data array([[0, 1], [2, 3], [4, 5]]) >>> np.average(data, axis=1, weights=[1./4, 3./4]) array([0.75, 2.75, 4.75]) >>> np.average(data, weights=[1./4, 3./4]) Traceback (most recent call last): ... TypeError: Axis must be specified when shapes of a and weights differ. ``` ``` >>> a = np.ones(5, dtype=np.float128) >>> w = np.ones(5, dtype=np.complex64) >>> avg = np.average(a, weights=w) >>> print(avg.dtype) complex256 ``` With `keepdims=True`, the following result has shape (3, 1). ``` >>> np.average(data, axis=1, keepdims=True) array([[0.5], [2.5], [4.5]]) ``` 
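The `returned` flag described above is not shown in the examples; a minimal sketch (the data and weight values are illustrative only):

```python
import numpy as np

data = np.array([1.0, 2.0, 3.0, 4.0])
weights = np.array([4.0, 3.0, 2.0, 1.0])

# returned=True also yields the sum of the weights, which is useful
# when combining several weighted averages into one.
avg, total_weight = np.average(data, weights=weights, returned=True)
print(avg)           # 2.0  (i.e. (1*4 + 2*3 + 3*2 + 4*1) / 10)
print(total_weight)  # 10.0
```

A combined average of two groups can then be formed as `(avg1 * w1 + avg2 * w2) / (w1 + w2)` using the returned weight sums.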
<https://numpy.org/doc/1.23/reference/generated/numpy.average.html>

numpy.bincount
==============

numpy.bincount(*x*, */*, *weights=None*, *minlength=0*) Count number of occurrences of each value in array of non-negative ints. The number of bins (of size 1) is one larger than the largest value in `x`. If `minlength` is specified, there will be at least this number of bins in the output array (though it will be longer if necessary, depending on the contents of `x`). Each bin gives the number of occurrences of its index value in `x`. If `weights` is specified the input array is weighted by it, i.e. if a value `n` is found at position `i`, `out[n] += weight[i]` instead of `out[n] += 1`. Parameters **x**array_like, 1 dimension, nonnegative ints Input array. **weights**array_like, optional Weights, array of the same shape as `x`. **minlength**int, optional A minimum number of bins for the output array. New in version 1.6.0. Returns **out**ndarray of ints The result of binning the input array. The length of `out` is equal to `np.amax(x)+1`. Raises ValueError If the input is not 1-dimensional, or contains elements with negative values, or if `minlength` is negative. TypeError If the type of the input is float or complex. See also [`histogram`](numpy.histogram#numpy.histogram "numpy.histogram"), [`digitize`](numpy.digitize#numpy.digitize "numpy.digitize"), [`unique`](numpy.unique#numpy.unique "numpy.unique") #### Examples ``` >>> np.bincount(np.arange(5)) array([1, 1, 1, 1, 1]) >>> np.bincount(np.array([0, 1, 1, 3, 2, 1, 7])) array([1, 3, 1, 1, 0, 0, 0, 1]) ``` ``` >>> x = np.array([0, 1, 1, 3, 2, 1, 7, 23]) >>> np.bincount(x).size == np.amax(x)+1 True ``` The input array needs to be of integer dtype, otherwise a TypeError is raised: ``` >>> np.bincount(np.arange(5, dtype=float)) Traceback (most recent call last): ... 
TypeError: Cannot cast array data from dtype('float64') to dtype('int64') according to the rule 'safe' ``` A possible use of `bincount` is to perform sums over variable-size chunks of an array, using the `weights` keyword. ``` >>> w = np.array([0.3, 0.5, 0.2, 0.7, 1., -0.6]) # weights >>> x = np.array([0, 1, 1, 2, 2, 2]) >>> np.bincount(x, weights=w) array([ 0.3, 0.7, 1.1]) ``` <https://numpy.org/doc/1.23/reference/generated/numpy.bincount.html>

numpy.ceil
==========

numpy.ceil(*x*, */*, *out=None*, ***, *where=True*, *casting='same_kind'*, *order='K'*, *dtype=None*, *subok=True*[, *signature*, *extobj*])*=<ufunc 'ceil'>* Return the ceiling of the input, element-wise. The ceil of the scalar `x` is the smallest integer `i`, such that `i >= x`. It is often denoted as \(\lceil x \rceil\). Parameters **x**array_like Input data. **out**ndarray, None, or tuple of ndarray and None, optional A location into which the result is stored. If provided, it must have a shape that the inputs broadcast to. If not provided or None, a freshly-allocated array is returned. A tuple (possible only as a keyword argument) must have length equal to the number of outputs. **where**array_like, optional This condition is broadcast over the input. At locations where the condition is True, the `out` array will be set to the ufunc result. Elsewhere, the `out` array will retain its original value. Note that if an uninitialized `out` array is created via the default `out=None`, locations within it where the condition is False will remain uninitialized. ****kwargs** For other keyword-only arguments, see the [ufunc docs](../ufuncs#ufuncs-kwargs). Returns **y**ndarray or scalar The ceiling of each element in `x`, with [`float`](https://docs.python.org/3/library/functions.html#float "(in Python v3.10)") dtype. This is a scalar if `x` is a scalar. 
See also [`floor`](numpy.floor#numpy.floor "numpy.floor"), [`trunc`](numpy.trunc#numpy.trunc "numpy.trunc"), [`rint`](numpy.rint#numpy.rint "numpy.rint"), [`fix`](numpy.fix#numpy.fix "numpy.fix") #### Examples ``` >>> a = np.array([-1.7, -1.5, -0.2, 0.2, 1.5, 1.7, 2.0]) >>> np.ceil(a) array([-1., -1., -0., 1., 2., 2., 2.]) ``` <https://numpy.org/doc/1.23/reference/generated/numpy.ceil.html>

numpy.clip
==========

numpy.clip(*a*, *a_min*, *a_max*, *out=None*, ***kwargs*)[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/core/fromnumeric.py#L2085-L2154) Clip (limit) the values in an array. Given an interval, values outside the interval are clipped to the interval edges. For example, if an interval of `[0, 1]` is specified, values smaller than 0 become 0, and values larger than 1 become 1. Equivalent to but faster than `np.minimum(a_max, np.maximum(a, a_min))`. No check is performed to ensure `a_min < a_max`. Parameters **a**array_like Array containing elements to clip. **a_min, a_max**array_like or None Minimum and maximum value. If `None`, clipping is not performed on the corresponding edge. Only one of `a_min` and `a_max` may be `None`. Both are broadcast against `a`. **out**ndarray, optional The results will be placed in this array. It may be the input array for in-place clipping. `out` must be of the right shape to hold the output. Its type is preserved. ****kwargs** For other keyword-only arguments, see the [ufunc docs](../ufuncs#ufuncs-kwargs). New in version 1.17.0. Returns **clipped_array**ndarray An array with the elements of `a`, but where values < `a_min` are replaced with `a_min`, and those > `a_max` with `a_max`. See also [Output type determination](../../user/basics.ufuncs#ufuncs-output-type) #### Notes When `a_min` is greater than `a_max`, [`clip`](#numpy.clip "numpy.clip") returns an array in which all values are equal to `a_max`, as shown in the second example. 
#### Examples ``` >>> a = np.arange(10) >>> a array([0, 1, 2, 3, 4, 5, 6, 7, 8, 9]) >>> np.clip(a, 1, 8) array([1, 1, 2, 3, 4, 5, 6, 7, 8, 8]) >>> np.clip(a, 8, 1) array([1, 1, 1, 1, 1, 1, 1, 1, 1, 1]) >>> np.clip(a, 3, 6, out=a) array([3, 3, 3, 3, 4, 5, 6, 6, 6, 6]) >>> a array([3, 3, 3, 3, 4, 5, 6, 6, 6, 6]) >>> a = np.arange(10) >>> a array([0, 1, 2, 3, 4, 5, 6, 7, 8, 9]) >>> np.clip(a, [3, 4, 1, 1, 1, 4, 4, 4, 4, 4], 8) array([3, 4, 2, 3, 4, 5, 6, 7, 8, 8]) ``` <https://numpy.org/doc/1.23/reference/generated/numpy.clip.html>

numpy.conj
==========

numpy.conj(*x*, */*, *out=None*, ***, *where=True*, *casting='same_kind'*, *order='K'*, *dtype=None*, *subok=True*[, *signature*, *extobj*])*=<ufunc 'conjugate'>* Return the complex conjugate, element-wise. The complex conjugate of a complex number is obtained by changing the sign of its imaginary part. Parameters **x**array_like Input value. **out**ndarray, None, or tuple of ndarray and None, optional A location into which the result is stored. If provided, it must have a shape that the inputs broadcast to. If not provided or None, a freshly-allocated array is returned. A tuple (possible only as a keyword argument) must have length equal to the number of outputs. **where**array_like, optional This condition is broadcast over the input. At locations where the condition is True, the `out` array will be set to the ufunc result. Elsewhere, the `out` array will retain its original value. Note that if an uninitialized `out` array is created via the default `out=None`, locations within it where the condition is False will remain uninitialized. ****kwargs** For other keyword-only arguments, see the [ufunc docs](../ufuncs#ufuncs-kwargs). Returns **y**ndarray The complex conjugate of `x`, with the same dtype as `x`. This is a scalar if `x` is a scalar. 
#### Notes

[`conj`](#numpy.conj "numpy.conj") is an alias for [`conjugate`](numpy.conjugate#numpy.conjugate "numpy.conjugate"):

```
>>> np.conj is np.conjugate
True
```

#### Examples

```
>>> np.conjugate(1+2j)
(1-2j)
```

```
>>> x = np.eye(2) + 1j * np.eye(2)
>>> np.conjugate(x)
array([[ 1.-1.j,  0.-0.j],
       [ 0.-0.j,  1.-1.j]])
```

numpy.corrcoef
==============

numpy.corrcoef(*x*, *y=None*, *rowvar=True*, *bias=<no value>*, *ddof=<no value>*, ***, *dtype=None*)[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/lib/function_base.py#L2713-L2863)

Return Pearson product-moment correlation coefficients. Please refer to the documentation for [`cov`](numpy.cov#numpy.cov "numpy.cov") for more detail. The relationship between the correlation coefficient matrix, `R`, and the covariance matrix, `C`, is

\[R_{ij} = \frac{ C_{ij} } { \sqrt{ C_{ii} C_{jj} } }\]

The values of `R` are between -1 and 1, inclusive.

Parameters

**x**array_like A 1-D or 2-D array containing multiple variables and observations. Each row of `x` represents a variable, and each column a single observation of all those variables. Also see `rowvar` below.

**y**array_like, optional An additional set of variables and observations. `y` has the same shape as `x`.

**rowvar**bool, optional If `rowvar` is True (default), then each row represents a variable, with observations in the columns. Otherwise, the relationship is transposed: each column represents a variable, while the rows contain observations.

**bias**_NoValue, optional Has no effect, do not use. Deprecated since version 1.10.0.

**ddof**_NoValue, optional Has no effect, do not use. Deprecated since version 1.10.0.

**dtype**data-type, optional Data-type of the result. By default, the return data-type will have at least [`numpy.float64`](../arrays.scalars#numpy.float64 "numpy.float64") precision. New in version 1.20.
Returns

**R**ndarray The correlation coefficient matrix of the variables.

See also [`cov`](numpy.cov#numpy.cov "numpy.cov") Covariance matrix

#### Notes

Due to floating point rounding the resulting array may not be Hermitian, the diagonal elements may not be 1, and the elements may not satisfy the inequality abs(a) <= 1. The real and imaginary parts are clipped to the interval [-1, 1] in an attempt to improve on that situation but is not much help in the complex case. This function accepts but discards arguments `bias` and `ddof`. This is for backwards compatibility with previous versions of this function. These arguments had no effect on the return values of the function and can be safely ignored in this and previous versions of numpy.

#### Examples

In this example we generate two random arrays, `xarr` and `yarr`, and compute the row-wise and column-wise Pearson correlation coefficients, `R`. Since `rowvar` is true by default, we first find the row-wise Pearson correlation coefficients between the variables of `xarr`.

```
>>> import numpy as np
>>> rng = np.random.default_rng(seed=42)
>>> xarr = rng.random((3, 3))
>>> xarr
array([[0.77395605, 0.43887844, 0.85859792],
       [0.69736803, 0.09417735, 0.97562235],
       [0.7611397 , 0.78606431, 0.12811363]])
>>> R1 = np.corrcoef(xarr)
>>> R1
array([[ 1.        ,  0.99256089, -0.68080986],
       [ 0.99256089,  1.        , -0.76492172],
       [-0.68080986, -0.76492172,  1.        ]])
```

If we add another set of variables and observations `yarr`, we can compute the row-wise Pearson correlation coefficients between the variables in `xarr` and `yarr`.

```
>>> yarr = rng.random((3, 3))
>>> yarr
array([[0.45038594, 0.37079802, 0.92676499],
       [0.64386512, 0.82276161, 0.4434142 ],
       [0.22723872, 0.55458479, 0.06381726]])
>>> R2 = np.corrcoef(xarr, yarr)
>>> R2
array([[ 1.        ,  0.99256089, -0.68080986,  0.75008178, -0.934284  , -0.99004057],
       [ 0.99256089,  1.        , -0.76492172,  0.82502011, -0.97074098, -0.99981569],
       [-0.68080986, -0.76492172,  1.        , -0.99507202,  0.89721355,  0.77714685],
       [ 0.75008178,  0.82502011, -0.99507202,  1.        , -0.93657855, -0.83571711],
       [-0.934284  , -0.97074098,  0.89721355, -0.93657855,  1.        ,  0.97517215],
       [-0.99004057, -0.99981569,  0.77714685, -0.83571711,  0.97517215,  1.        ]])
```

Finally if we use the option `rowvar=False`, the columns are now being treated as the variables and we will find the column-wise Pearson correlation coefficients between variables in `xarr` and `yarr`.

```
>>> R3 = np.corrcoef(xarr, yarr, rowvar=False)
>>> R3
array([[ 1.        ,  0.77598074, -0.47458546, -0.75078643, -0.9665554 ,  0.22423734],
       [ 0.77598074,  1.        , -0.92346708, -0.99923895, -0.58826587, -0.44069024],
       [-0.47458546, -0.92346708,  1.        ,  0.93773029,  0.23297648,  0.75137473],
       [-0.75078643, -0.99923895,  0.93773029,  1.        ,  0.55627469,  0.47536961],
       [-0.9665554 , -0.58826587,  0.23297648,  0.55627469,  1.        , -0.46666491],
       [ 0.22423734, -0.44069024,  0.75137473,  0.47536961, -0.46666491,  1.        ]])
```

numpy.cov
=========

numpy.cov(*m*, *y=None*, *rowvar=True*, *bias=False*, *ddof=None*, *fweights=None*, *aweights=None*, ***, *dtype=None*)[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/lib/function_base.py#L2486-L2705)

Estimate a covariance matrix, given data and weights. Covariance indicates the level to which two variables vary together. If we examine N-dimensional samples, \(X = [x_1, x_2, ... x_N]^T\), then the covariance matrix element \(C_{ij}\) is the covariance of \(x_i\) and \(x_j\). The element \(C_{ii}\) is the variance of \(x_i\). See the notes for an outline of the algorithm.

Parameters

**m**array_like A 1-D or 2-D array containing multiple variables and observations. Each row of `m` represents a variable, and each column a single observation of all those variables. Also see `rowvar` below.

**y**array_like, optional An additional set of variables and observations.
`y` has the same form as that of `m`.

**rowvar**bool, optional If `rowvar` is True (default), then each row represents a variable, with observations in the columns. Otherwise, the relationship is transposed: each column represents a variable, while the rows contain observations.

**bias**bool, optional Default normalization (False) is by `(N - 1)`, where `N` is the number of observations given (unbiased estimate). If `bias` is True, then normalization is by `N`. These values can be overridden by using the keyword `ddof` in numpy versions >= 1.5.

**ddof**int, optional If not `None` the default value implied by `bias` is overridden. Note that `ddof=1` will return the unbiased estimate, even if both `fweights` and `aweights` are specified, and `ddof=0` will return the simple average. See the notes for the details. The default value is `None`. New in version 1.5.

**fweights**array_like, int, optional 1-D array of integer frequency weights; the number of times each observation vector should be repeated. New in version 1.10.

**aweights**array_like, optional 1-D array of observation vector weights. These relative weights are typically large for observations considered "important" and smaller for observations considered less "important". If `ddof=0` the array of weights can be used to assign probabilities to observation vectors. New in version 1.10.

**dtype**data-type, optional Data-type of the result. By default, the return data-type will have at least [`numpy.float64`](../arrays.scalars#numpy.float64 "numpy.float64") precision. New in version 1.20.

Returns

**out**ndarray The covariance matrix of the variables.

See also [`corrcoef`](numpy.corrcoef#numpy.corrcoef "numpy.corrcoef") Normalized covariance matrix

#### Notes

Assume that the observations are in the columns of the observation array `m` and let `f = fweights` and `a = aweights` for brevity.
The steps to compute the weighted covariance are as follows:

```
>>> m = np.arange(10, dtype=np.float64)
>>> f = np.arange(10) * 2
>>> a = np.arange(10) ** 2.
>>> ddof = 1
>>> w = f * a
>>> v1 = np.sum(w)
>>> v2 = np.sum(w * a)
>>> m -= np.sum(m * w, axis=None, keepdims=True) / v1
>>> cov = np.dot(m * w, m.T) * v1 / (v1**2 - ddof * v2)
```

Note that when `a == 1`, the normalization factor `v1 / (v1**2 - ddof * v2)` goes over to `1 / (np.sum(f) - ddof)` as it should.

#### Examples

Consider two variables, \(x_0\) and \(x_1\), which correlate perfectly, but in opposite directions:

```
>>> x = np.array([[0, 2], [1, 1], [2, 0]]).T
>>> x
array([[0, 1, 2],
       [2, 1, 0]])
```

Note how \(x_0\) increases while \(x_1\) decreases. The covariance matrix shows this clearly:

```
>>> np.cov(x)
array([[ 1., -1.],
       [-1.,  1.]])
```

Note that element \(C_{0,1}\), which shows the correlation between \(x_0\) and \(x_1\), is negative. Further, note how `x` and `y` are combined:

```
>>> x = [-2.1, -1, 4.3]
>>> y = [3, 1.1, 0.12]
>>> X = np.stack((x, y), axis=0)
>>> np.cov(X)
array([[11.71    , -4.286   ], # may vary
       [-4.286   ,  2.144133]])
>>> np.cov(x, y)
array([[11.71    , -4.286   ], # may vary
       [-4.286   ,  2.144133]])
>>> np.cov(x)
array(11.71)
```

numpy.cross
===========

numpy.cross(*a*, *b*, *axisa=-1*, *axisb=-1*, *axisc=-1*, *axis=None*)[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/core/numeric.py#L1485-L1680)

Return the cross product of two (arrays of) vectors. The cross product of `a` and `b` in \(R^3\) is a vector perpendicular to both `a` and `b`. If `a` and `b` are arrays of vectors, the vectors are defined by the last axis of `a` and `b` by default, and these axes can have dimensions 2 or 3. Where the dimension of either `a` or `b` is 2, the third component of the input vector is assumed to be zero and the cross product calculated accordingly.
In cases where both input vectors have dimension 2, the z-component of the cross product is returned.

Parameters

**a**array_like Components of the first vector(s).

**b**array_like Components of the second vector(s).

**axisa**int, optional Axis of `a` that defines the vector(s). By default, the last axis.

**axisb**int, optional Axis of `b` that defines the vector(s). By default, the last axis.

**axisc**int, optional Axis of `c` containing the cross product vector(s). Ignored if both input vectors have dimension 2, as the return is scalar. By default, the last axis.

**axis**int, optional If defined, the axis of `a`, `b` and `c` that defines the vector(s) and cross product(s). Overrides `axisa`, `axisb` and `axisc`.

Returns

**c**ndarray Vector cross product(s).

Raises ValueError When the dimension of the vector(s) in `a` and/or `b` does not equal 2 or 3.

See also [`inner`](numpy.inner#numpy.inner "numpy.inner") Inner product [`outer`](numpy.outer#numpy.outer "numpy.outer") Outer product. [`ix_`](numpy.ix_#numpy.ix_ "numpy.ix_") Construct index arrays.

#### Notes

New in version 1.9.0. Supports full broadcasting of the inputs.

#### Examples

Vector cross-product.

```
>>> x = [1, 2, 3]
>>> y = [4, 5, 6]
>>> np.cross(x, y)
array([-3,  6, -3])
```

One vector with dimension 2.

```
>>> x = [1, 2]
>>> y = [4, 5, 6]
>>> np.cross(x, y)
array([12, -6, -3])
```

Equivalently:

```
>>> x = [1, 2, 0]
>>> y = [4, 5, 6]
>>> np.cross(x, y)
array([12, -6, -3])
```

Both vectors with dimension 2.

```
>>> x = [1,2]
>>> y = [4,5]
>>> np.cross(x, y)
array(-3)
```

Multiple vector cross-products. Note that the direction of the cross product vector is defined by the *right-hand rule*.

```
>>> x = np.array([[1,2,3], [4,5,6]])
>>> y = np.array([[4,5,6], [1,2,3]])
>>> np.cross(x, y)
array([[-3,  6, -3],
       [ 3, -6,  3]])
```

The orientation of `c` can be changed using the `axisc` keyword.
```
>>> np.cross(x, y, axisc=0)
array([[-3,  3],
       [ 6, -6],
       [-3,  3]])
```

Change the vector definition of `x` and `y` using `axisa` and `axisb`.

```
>>> x = np.array([[1,2,3], [4,5,6], [7, 8, 9]])
>>> y = np.array([[7, 8, 9], [4,5,6], [1,2,3]])
>>> np.cross(x, y)
array([[ -6,  12,  -6],
       [  0,   0,   0],
       [  6, -12,   6]])
>>> np.cross(x, y, axisa=0, axisb=0)
array([[-24,  48, -24],
       [-30,  60, -30],
       [-36,  72, -36]])
```

numpy.cumprod
=============

numpy.cumprod(*a*, *axis=None*, *dtype=None*, *out=None*)[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/core/fromnumeric.py#L3053-L3114)

Return the cumulative product of elements along a given axis.

Parameters

**a**array_like Input array.

**axis**int, optional Axis along which the cumulative product is computed. By default the input is flattened.

**dtype**dtype, optional Type of the returned array, as well as of the accumulator in which the elements are multiplied. If *dtype* is not specified, it defaults to the dtype of `a`, unless `a` has an integer dtype with a precision less than that of the default platform integer. In that case, the default platform integer is used instead.

**out**ndarray, optional Alternative output array in which to place the result. It must have the same shape and buffer length as the expected output but the type of the resulting values will be cast if necessary.

Returns

**cumprod**ndarray A new array holding the result is returned unless `out` is specified, in which case a reference to out is returned.

See also [Output type determination](../../user/basics.ufuncs#ufuncs-output-type)

#### Notes

Arithmetic is modular when using integer types, and no error is raised on overflow.

#### Examples

```
>>> a = np.array([1,2,3])
>>> np.cumprod(a) # intermediate results 1, 1*2
...               # total product 1*2*3 = 6
array([1, 2, 6])
>>> a = np.array([[1, 2, 3], [4, 5, 6]])
>>> np.cumprod(a, dtype=float) # specify type of output
array([  1.,   2.,   6.,  24., 120., 720.])
```

The cumulative product for each column (i.e., over the rows) of `a`:

```
>>> np.cumprod(a, axis=0)
array([[ 1,  2,  3],
       [ 4, 10, 18]])
```

The cumulative product for each row (i.e. over the columns) of `a`:

```
>>> np.cumprod(a,axis=1)
array([[  1,   2,   6],
       [  4,  20, 120]])
```

numpy.cumsum
============

numpy.cumsum(*a*, *axis=None*, *dtype=None*, *out=None*)[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/core/fromnumeric.py#L2497-L2571)

Return the cumulative sum of the elements along a given axis.

Parameters

**a**array_like Input array.

**axis**int, optional Axis along which the cumulative sum is computed. The default (None) is to compute the cumsum over the flattened array.

**dtype**dtype, optional Type of the returned array and of the accumulator in which the elements are summed. If [`dtype`](numpy.dtype#numpy.dtype "numpy.dtype") is not specified, it defaults to the dtype of `a`, unless `a` has an integer dtype with a precision less than that of the default platform integer. In that case, the default platform integer is used.

**out**ndarray, optional Alternative output array in which to place the result. It must have the same shape and buffer length as the expected output but the type will be cast if necessary. See [Output type determination](../../user/basics.ufuncs#ufuncs-output-type) for more details.

Returns

**cumsum_along_axis**ndarray A new array holding the result is returned unless `out` is specified, in which case a reference to `out` is returned. The result has the same size as `a`, and the same shape as `a` if `axis` is not None or `a` is a 1-d array.

See also [`sum`](numpy.sum#numpy.sum "numpy.sum") Sum array elements.
[`trapz`](numpy.trapz#numpy.trapz "numpy.trapz") Integration of array values using the composite trapezoidal rule. [`diff`](numpy.diff#numpy.diff "numpy.diff") Calculate the n-th discrete difference along given axis.

#### Notes

Arithmetic is modular when using integer types, and no error is raised on overflow. `cumsum(a)[-1]` may not be equal to `sum(a)` for floating-point values since `sum` may use a pairwise summation routine, reducing the roundoff-error. See [`sum`](numpy.sum#numpy.sum "numpy.sum") for more information.

#### Examples

```
>>> a = np.array([[1,2,3], [4,5,6]])
>>> a
array([[1, 2, 3],
       [4, 5, 6]])
>>> np.cumsum(a)
array([ 1,  3,  6, 10, 15, 21])
>>> np.cumsum(a, dtype=float) # specifies type of output value(s)
array([  1.,   3.,   6.,  10.,  15.,  21.])
```

```
>>> np.cumsum(a,axis=0) # sum over rows for each of the 3 columns
array([[1, 2, 3],
       [5, 7, 9]])
>>> np.cumsum(a,axis=1) # sum over columns for each of the 2 rows
array([[ 1,  3,  6],
       [ 4,  9, 15]])
```

`cumsum(b)[-1]` may not be equal to `sum(b)`

```
>>> b = np.array([1, 2e-9, 3e-9] * 1000000)
>>> b.cumsum()[-1]
1000000.0050045159
>>> b.sum()
1000000.0050000029
```

numpy.diff
==========

numpy.diff(*a*, *n=1*, *axis=-1*, *prepend=<no value>*, *append=<no value>*)[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/lib/function_base.py#L1319-L1449)

Calculate the n-th discrete difference along the given axis. The first difference is given by `out[i] = a[i+1] - a[i]` along the given axis, higher differences are calculated by using [`diff`](#numpy.diff "numpy.diff") recursively.

Parameters

**a**array_like Input array

**n**int, optional The number of times values are differenced. If zero, the input is returned as-is.

**axis**int, optional The axis along which the difference is taken, default is the last axis.
**prepend, append**array_like, optional Values to prepend or append to `a` along axis prior to performing the difference. Scalar values are expanded to arrays with length 1 in the direction of axis and the shape of the input array in along all other axes. Otherwise the dimension and shape must match `a` except along axis. New in version 1.16.0.

Returns

**diff**ndarray The n-th differences. The shape of the output is the same as `a` except along `axis` where the dimension is smaller by `n`. The type of the output is the same as the type of the difference between any two elements of `a`. This is the same as the type of `a` in most cases. A notable exception is [`datetime64`](../arrays.scalars#numpy.datetime64 "numpy.datetime64"), which results in a [`timedelta64`](../arrays.scalars#numpy.timedelta64 "numpy.timedelta64") output array.

See also [`gradient`](numpy.gradient#numpy.gradient "numpy.gradient"), [`ediff1d`](numpy.ediff1d#numpy.ediff1d "numpy.ediff1d"), [`cumsum`](numpy.cumsum#numpy.cumsum "numpy.cumsum")

#### Notes

Type is preserved for boolean arrays, so the result will contain `False` when consecutive elements are the same and `True` when they differ. For unsigned integer arrays, the results will also be unsigned. This should not be surprising, as the result is consistent with calculating the difference directly:

```
>>> u8_arr = np.array([1, 0], dtype=np.uint8)
>>> np.diff(u8_arr)
array([255], dtype=uint8)
>>> u8_arr[1,...] - u8_arr[0,...]
255
```

If this is not desirable, then the array should be cast to a larger integer type first:

```
>>> i16_arr = u8_arr.astype(np.int16)
>>> np.diff(i16_arr)
array([-1], dtype=int16)
```

#### Examples

```
>>> x = np.array([1, 2, 4, 7, 0])
>>> np.diff(x)
array([ 1,  2,  3, -7])
>>> np.diff(x, n=2)
array([  1,   1, -10])
```

```
>>> x = np.array([[1, 3, 6, 10], [0, 5, 6, 8]])
>>> np.diff(x)
array([[2, 3, 4],
       [5, 1, 2]])
>>> np.diff(x, axis=0)
array([[-1,  2,  0, -2]])
```

```
>>> x = np.arange('1066-10-13', '1066-10-16', dtype=np.datetime64)
>>> np.diff(x)
array([1, 1], dtype='timedelta64[D]')
```

numpy.dot
=========

numpy.dot(*a*, *b*, *out=None*)

Dot product of two arrays. Specifically,

* If both `a` and `b` are 1-D arrays, it is inner product of vectors (without complex conjugation).
* If both `a` and `b` are 2-D arrays, it is matrix multiplication, but using [`matmul`](numpy.matmul#numpy.matmul "numpy.matmul") or `a @ b` is preferred.
* If either `a` or `b` is 0-D (scalar), it is equivalent to [`multiply`](numpy.multiply#numpy.multiply "numpy.multiply") and using `numpy.multiply(a, b)` or `a * b` is preferred.
* If `a` is an N-D array and `b` is a 1-D array, it is a sum product over the last axis of `a` and `b`.
* If `a` is an N-D array and `b` is an M-D array (where `M>=2`), it is a sum product over the last axis of `a` and the second-to-last axis of `b`:

```
dot(a, b)[i,j,k,m] = sum(a[i,j,:] * b[k,:,m])
```

Parameters

**a**array_like First argument.

**b**array_like Second argument.

**out**ndarray, optional Output argument. This must have the exact kind that would be returned if it was not used. In particular, it must have the right type, must be C-contiguous, and its dtype must be the dtype that would be returned for `dot(a,b)`. This is a performance feature.
Therefore, if these conditions are not met, an exception is raised, instead of attempting to be flexible.

Returns

**output**ndarray Returns the dot product of `a` and `b`. If `a` and `b` are both scalars or both 1-D arrays then a scalar is returned; otherwise an array is returned. If `out` is given, then it is returned.

Raises ValueError If the last dimension of `a` is not the same size as the second-to-last dimension of `b`.

See also [`vdot`](numpy.vdot#numpy.vdot "numpy.vdot") Complex-conjugating dot product. [`tensordot`](numpy.tensordot#numpy.tensordot "numpy.tensordot") Sum products over arbitrary axes. [`einsum`](numpy.einsum#numpy.einsum "numpy.einsum") Einstein summation convention. [`matmul`](numpy.matmul#numpy.matmul "numpy.matmul") '@' operator as method with out parameter. [`linalg.multi_dot`](numpy.linalg.multi_dot#numpy.linalg.multi_dot "numpy.linalg.multi_dot") Chained dot product.

#### Examples

```
>>> np.dot(3, 4)
12
```

Neither argument is complex-conjugated:

```
>>> np.dot([2j, 3j], [2j, 3j])
(-13+0j)
```

For 2-D arrays it is the matrix product:

```
>>> a = [[1, 0], [0, 1]]
>>> b = [[4, 1], [2, 2]]
>>> np.dot(a, b)
array([[4, 1],
       [2, 2]])
```

```
>>> a = np.arange(3*4*5*6).reshape((3,4,5,6))
>>> b = np.arange(3*4*5*6)[::-1].reshape((5,4,6,3))
>>> np.dot(a, b)[2,3,2,1,2,2]
499128
>>> sum(a[2,3,2,:] * b[1,2,:,2])
499128
```

numpy.floor
===========

numpy.floor(*x*, */*, *out=None*, ***, *where=True*, *casting='same_kind'*, *order='K'*, *dtype=None*, *subok=True*[, *signature*, *extobj*]) *= <ufunc 'floor'>*

Return the floor of the input, element-wise. The floor of the scalar `x` is the largest integer `i`, such that `i <= x`. It is often denoted as \(\lfloor x \rfloor\).

Parameters

**x**array_like Input data.

**out**ndarray, None, or tuple of ndarray and None, optional A location into which the result is stored.
If provided, it must have a shape that the inputs broadcast to. If not provided or None, a freshly-allocated array is returned. A tuple (possible only as a keyword argument) must have length equal to the number of outputs.

**where**array_like, optional This condition is broadcast over the input. At locations where the condition is True, the `out` array will be set to the ufunc result. Elsewhere, the `out` array will retain its original value. Note that if an uninitialized `out` array is created via the default `out=None`, locations within it where the condition is False will remain uninitialized.

****kwargs** For other keyword-only arguments, see the [ufunc docs](../ufuncs#ufuncs-kwargs).

Returns

**y**ndarray or scalar The floor of each element in `x`. This is a scalar if `x` is a scalar.

See also [`ceil`](numpy.ceil#numpy.ceil "numpy.ceil"), [`trunc`](numpy.trunc#numpy.trunc "numpy.trunc"), [`rint`](numpy.rint#numpy.rint "numpy.rint"), [`fix`](numpy.fix#numpy.fix "numpy.fix")

#### Notes

Some spreadsheet programs calculate the "floor-towards-zero", where `floor(-2.5) == -2`. NumPy instead uses the definition of [`floor`](#numpy.floor "numpy.floor") where `floor(-2.5) == -3`. The "floor-towards-zero" function is called `fix` in NumPy.

#### Examples

```
>>> a = np.array([-1.7, -1.5, -0.2, 0.2, 1.5, 1.7, 2.0])
>>> np.floor(a)
array([-2., -2., -1.,  0.,  1.,  1.,  2.])
```

numpy.inner
===========

numpy.inner(*a*, *b*, */*)

Inner product of two arrays. Ordinary inner product of vectors for 1-D arrays (without complex conjugation), in higher dimensions a sum product over the last axes.

Parameters

**a, b**array_like If `a` and `b` are nonscalar, their last dimensions must match.

Returns

**out**ndarray If `a` and `b` are both scalars or both 1-D arrays then a scalar is returned; otherwise an array is returned.
`out.shape = (*a.shape[:-1], *b.shape[:-1])`

Raises ValueError If both `a` and `b` are nonscalar and their last dimensions have different sizes.

See also [`tensordot`](numpy.tensordot#numpy.tensordot "numpy.tensordot") Sum products over arbitrary axes. [`dot`](numpy.dot#numpy.dot "numpy.dot") Generalised matrix product, using second last dimension of `b`. [`einsum`](numpy.einsum#numpy.einsum "numpy.einsum") Einstein summation convention.

#### Notes

For vectors (1-D arrays) it computes the ordinary inner-product:

```
np.inner(a, b) = sum(a[:]*b[:])
```

More generally, if `ndim(a) = r > 0` and `ndim(b) = s > 0`:

```
np.inner(a, b) = np.tensordot(a, b, axes=(-1,-1))
```

or explicitly:

```
np.inner(a, b)[i0,...,ir-2,j0,...,js-2]
     = sum(a[i0,...,ir-2,:]*b[j0,...,js-2,:])
```

In addition `a` or `b` may be scalars, in which case:

```
np.inner(a,b) = a*b
```

#### Examples

Ordinary inner product for vectors:

```
>>> a = np.array([1,2,3])
>>> b = np.array([0,1,0])
>>> np.inner(a, b)
2
```

Some multidimensional examples:

```
>>> a = np.arange(24).reshape((2,3,4))
>>> b = np.arange(4)
>>> c = np.inner(a, b)
>>> c.shape
(2, 3)
>>> c
array([[ 14,  38,  62],
       [ 86, 110, 134]])
```

```
>>> a = np.arange(2).reshape((1,1,2))
>>> b = np.arange(6).reshape((3,2))
>>> c = np.inner(a, b)
>>> c.shape
(1, 1, 3)
>>> c
array([[[1, 3, 5]]])
```

An example where `b` is a scalar:

```
>>> np.inner(np.eye(2), 7)
array([[7., 0.],
       [0., 7.]])
```

numpy.invert
============

numpy.invert(*x*, */*, *out=None*, ***, *where=True*, *casting='same_kind'*, *order='K'*, *dtype=None*, *subok=True*[, *signature*, *extobj*]) *= <ufunc 'invert'>*

Compute bit-wise inversion, or bit-wise NOT, element-wise. Computes the bit-wise NOT of the underlying binary representation of the integers in the input arrays. This ufunc implements the C/Python operator `~`.
For signed integer inputs, the two's complement is returned. In a two's-complement system negative numbers are represented by the two's complement of the absolute value. This is the most common method of representing signed integers on computers [[1]](#rde927b304c4f-1). A N-bit two's-complement system can represent every integer in the range \(-2^{N-1}\) to \(+2^{N-1}-1\).

Parameters

**x**array_like Only integer and boolean types are handled.

**out**ndarray, None, or tuple of ndarray and None, optional A location into which the result is stored. If provided, it must have a shape that the inputs broadcast to. If not provided or None, a freshly-allocated array is returned. A tuple (possible only as a keyword argument) must have length equal to the number of outputs.

**where**array_like, optional This condition is broadcast over the input. At locations where the condition is True, the `out` array will be set to the ufunc result. Elsewhere, the `out` array will retain its original value. Note that if an uninitialized `out` array is created via the default `out=None`, locations within it where the condition is False will remain uninitialized.

****kwargs** For other keyword-only arguments, see the [ufunc docs](../ufuncs#ufuncs-kwargs).

Returns

**out**ndarray or scalar Result. This is a scalar if `x` is a scalar.

See also [`bitwise_and`](numpy.bitwise_and#numpy.bitwise_and "numpy.bitwise_and"), [`bitwise_or`](numpy.bitwise_or#numpy.bitwise_or "numpy.bitwise_or"), [`bitwise_xor`](numpy.bitwise_xor#numpy.bitwise_xor "numpy.bitwise_xor") [`logical_not`](numpy.logical_not#numpy.logical_not "numpy.logical_not") [`binary_repr`](numpy.binary_repr#numpy.binary_repr "numpy.binary_repr") Return the binary representation of the input number as a string.
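The signed range quoted above (\(-2^{N-1}\) to \(+2^{N-1}-1\) for an N-bit type) can be confirmed with `numpy.iinfo`; a small illustrative check:

```python
import numpy as np

# For each signed width, iinfo reports the representable range,
# which matches the two's-complement bounds -2**(N-1) .. 2**(N-1)-1.
for dt, bits in [(np.int8, 8), (np.int16, 16), (np.int32, 32)]:
    info = np.iinfo(dt)
    assert info.min == -2**(bits - 1)
    assert info.max == 2**(bits - 1) - 1

print(np.iinfo(np.int8).min, np.iinfo(np.int8).max)  # -128 127
```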
#### Notes

`bitwise_not` is an alias for [`invert`](#numpy.invert "numpy.invert"):

```
>>> np.bitwise_not is np.invert
True
```

#### References

[1](#id1) Wikipedia, "Two's complement", [https://en.wikipedia.org/wiki/Two's_complement](https://en.wikipedia.org/wiki/Two's_complement)

#### Examples

We've seen that 13 is represented by `00001101`. The invert or bit-wise NOT of 13 is then:

```
>>> x = np.invert(np.array(13, dtype=np.uint8))
>>> x
242
>>> np.binary_repr(x, width=8)
'11110010'
```

The result depends on the bit-width:

```
>>> x = np.invert(np.array(13, dtype=np.uint16))
>>> x
65522
>>> np.binary_repr(x, width=16)
'1111111111110010'
```

When using signed integer types the result is the two's complement of the result for the unsigned type:

```
>>> np.invert(np.array([13], dtype=np.int8))
array([-14], dtype=int8)
>>> np.binary_repr(-14, width=8)
'11110010'
```

Booleans are accepted as well:

```
>>> np.invert(np.array([True, False]))
array([False,  True])
```

The `~` operator can be used as a shorthand for `np.invert` on ndarrays.

```
>>> x1 = np.array([True, False])
>>> ~x1
array([False,  True])
```

numpy.maximum
=============

numpy.maximum(*x1*, *x2*, */*, *out=None*, ***, *where=True*, *casting='same_kind'*, *order='K'*, *dtype=None*, *subok=True*[, *signature*, *extobj*]) *= <ufunc 'maximum'>*

Element-wise maximum of array elements. Compare two arrays and returns a new array containing the element-wise maxima. If one of the elements being compared is a NaN, then that element is returned. If both elements are NaNs then the first is returned. The latter distinction is important for complex NaNs, which are defined as at least one of the real or imaginary parts being a NaN. The net effect is that NaNs are propagated.

Parameters

**x1, x2**array_like The arrays holding the elements to be compared.
If `x1.shape != x2.shape`, they must be broadcastable to a common shape (which becomes the shape of the output). **out**ndarray, None, or tuple of ndarray and None, optional A location into which the result is stored. If provided, it must have a shape that the inputs broadcast to. If not provided or None, a freshly-allocated array is returned. A tuple (possible only as a keyword argument) must have length equal to the number of outputs. **where**array_like, optional This condition is broadcast over the input. At locations where the condition is True, the `out` array will be set to the ufunc result. Elsewhere, the `out` array will retain its original value. Note that if an uninitialized `out` array is created via the default `out=None`, locations within it where the condition is False will remain uninitialized. ****kwargs** For other keyword-only arguments, see the [ufunc docs](../ufuncs#ufuncs-kwargs). Returns **y**ndarray or scalar The maximum of `x1` and `x2`, element-wise. This is a scalar if both `x1` and `x2` are scalars. See also [`minimum`](numpy.minimum#numpy.minimum "numpy.minimum") Element-wise minimum of two arrays, propagates NaNs. [`fmax`](numpy.fmax#numpy.fmax "numpy.fmax") Element-wise maximum of two arrays, ignores NaNs. [`amax`](numpy.amax#numpy.amax "numpy.amax") The maximum value of an array along a given axis, propagates NaNs. [`nanmax`](numpy.nanmax#numpy.nanmax "numpy.nanmax") The maximum value of an array along a given axis, ignores NaNs. [`fmin`](numpy.fmin#numpy.fmin "numpy.fmin"), [`amin`](numpy.amin#numpy.amin "numpy.amin"), [`nanmin`](numpy.nanmin#numpy.nanmin "numpy.nanmin") #### Notes The maximum is equivalent to `np.where(x1 >= x2, x1, x2)` when neither x1 nor x2 are nans, but it is faster and does proper broadcasting. #### Examples ``` >>> np.maximum([2, 3, 4], [1, 5, 2]) array([2, 5, 4]) ``` ``` >>> np.maximum(np.eye(2), [0.5, 2]) # broadcasting array([[ 1. , 2. ], [ 0.5, 2. 
]])
```

```
>>> np.maximum([np.nan, 0, np.nan], [0, np.nan, np.nan])
array([nan, nan, nan])
>>> np.maximum(np.Inf, 1)
inf
```

<https://numpy.org/doc/1.23/reference/generated/numpy.maximum.html>

numpy.mean
==========

numpy.mean(*a*, *axis=None*, *dtype=None*, *out=None*, *keepdims=<no value>*, ***, *where=<no value>*)[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/core/fromnumeric.py#L3313-L3433)

Compute the arithmetic mean along the specified axis.

Returns the average of the array elements. The average is taken over the flattened array by default, otherwise over the specified axis. [`float64`](../arrays.scalars#numpy.float64 "numpy.float64") intermediate and return values are used for integer inputs.

Parameters

**a**array_like

Array containing numbers whose mean is desired. If `a` is not an array, a conversion is attempted.

**axis**None or int or tuple of ints, optional

Axis or axes along which the means are computed. The default is to compute the mean of the flattened array.

New in version 1.7.0.

If this is a tuple of ints, a mean is performed over multiple axes, instead of a single axis or all the axes as before.

**dtype**data-type, optional

Type to use in computing the mean. For integer inputs, the default is [`float64`](../arrays.scalars#numpy.float64 "numpy.float64"); for floating point inputs, it is the same as the input dtype.

**out**ndarray, optional

Alternate output array in which to place the result. The default is `None`; if provided, it must have the same shape as the expected output, but the type will be cast if necessary. See [Output type determination](../../user/basics.ufuncs#ufuncs-output-type) for more details.

**keepdims**bool, optional

If this is set to True, the axes which are reduced are left in the result as dimensions with size one. With this option, the result will broadcast correctly against the input array.
If the default value is passed, then `keepdims` will not be passed through to the [`mean`](#numpy.mean "numpy.mean") method of sub-classes of [`ndarray`](numpy.ndarray#numpy.ndarray "numpy.ndarray"), however any non-default value will be. If the sub-class’ method does not implement `keepdims` any exceptions will be raised. **where**array_like of bool, optional Elements to include in the mean. See [`reduce`](numpy.ufunc.reduce#numpy.ufunc.reduce "numpy.ufunc.reduce") for details. New in version 1.20.0. Returns **m**ndarray, see dtype parameter above If `out=None`, returns a new array containing the mean values, otherwise a reference to the output array is returned. See also [`average`](numpy.average#numpy.average "numpy.average") Weighted average [`std`](numpy.std#numpy.std "numpy.std"), [`var`](numpy.var#numpy.var "numpy.var"), [`nanmean`](numpy.nanmean#numpy.nanmean "numpy.nanmean"), [`nanstd`](numpy.nanstd#numpy.nanstd "numpy.nanstd"), [`nanvar`](numpy.nanvar#numpy.nanvar "numpy.nanvar") #### Notes The arithmetic mean is the sum of the elements along the axis divided by the number of elements. Note that for floating-point input, the mean is computed using the same precision the input has. Depending on the input data, this can cause the results to be inaccurate, especially for [`float32`](../arrays.scalars#numpy.float32 "numpy.float32") (see example below). Specifying a higher-precision accumulator using the [`dtype`](numpy.dtype#numpy.dtype "numpy.dtype") keyword can alleviate this issue. By default, [`float16`](../arrays.scalars#numpy.float16 "numpy.float16") results are computed using [`float32`](../arrays.scalars#numpy.float32 "numpy.float32") intermediates for extra precision. 
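As a supplementary illustration (not part of the original page), the `keepdims` option described above keeps the reduced axis as a size-1 dimension, so the result broadcasts back against the input; a minimal sketch:

```python
import numpy as np

# Illustrative example: center each column of a matrix.
# keepdims=True keeps the reduced axis as a size-1 dimension,
# so the per-column means broadcast cleanly against `a`.
a = np.array([[1.0, 2.0], [3.0, 4.0]])
col_means = np.mean(a, axis=0, keepdims=True)  # shape (1, 2), not (2,)
centered = a - col_means
print(col_means.shape)  # (1, 2)
print(centered)
```

Without `keepdims=True` the same subtraction still works here by the ordinary broadcasting rules, but the size-1 axis makes the intent explicit and generalizes to reductions along other axes.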
#### Examples

```
>>> a = np.array([[1, 2], [3, 4]])
>>> np.mean(a)
2.5
>>> np.mean(a, axis=0)
array([2., 3.])
>>> np.mean(a, axis=1)
array([1.5, 3.5])
```

In single precision, [`mean`](#numpy.mean "numpy.mean") can be inaccurate:

```
>>> a = np.zeros((2, 512*512), dtype=np.float32)
>>> a[0, :] = 1.0
>>> a[1, :] = 0.1
>>> np.mean(a)
0.54999924
```

Computing the mean in float64 is more accurate:

```
>>> np.mean(a, dtype=np.float64)
0.55000000074505806 # may vary
```

Specifying a where argument:

```
>>> a = np.array([[5, 9, 13], [14, 10, 12], [11, 15, 19]])
>>> np.mean(a)
12.0
>>> np.mean(a, where=[[True], [False], [False]])
9.0
```

<https://numpy.org/doc/1.23/reference/generated/numpy.mean.html>

numpy.median
============

numpy.median(*a*, *axis=None*, *out=None*, *overwrite_input=False*, *keepdims=False*)[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/lib/function_base.py#L3734-L3821)

Compute the median along the specified axis.

Returns the median of the array elements.

Parameters

**a**array_like

Input array or object that can be converted to an array.

**axis**{int, sequence of int, None}, optional

Axis or axes along which the medians are computed. The default is to compute the median along a flattened version of the array. A sequence of axes is supported since version 1.9.0.

**out**ndarray, optional

Alternative output array in which to place the result. It must have the same shape and buffer length as the expected output, but the type (of the output) will be cast if necessary.

**overwrite_input**bool, optional

If True, then allow use of memory of input array `a` for calculations. The input array will be modified by the call to [`median`](#numpy.median "numpy.median"). This will save memory when you do not need to preserve the contents of the input array. Treat the input as undefined, but it will probably be fully or partially sorted. Default is False.
If `overwrite_input` is `True` and `a` is not already an [`ndarray`](numpy.ndarray#numpy.ndarray "numpy.ndarray"), an error will be raised.

**keepdims**bool, optional

If this is set to True, the axes which are reduced are left in the result as dimensions with size one. With this option, the result will broadcast correctly against the original `arr`.

New in version 1.9.0.

Returns

**median**ndarray

A new array holding the result. If the input contains integers or floats smaller than `float64`, then the output data-type is `np.float64`. Otherwise, the data-type of the output is the same as that of the input. If `out` is specified, that array is returned instead.

See also

[`mean`](numpy.mean#numpy.mean "numpy.mean"), [`percentile`](numpy.percentile#numpy.percentile "numpy.percentile")

#### Notes

Given a vector `V` of length `N`, the median of `V` is the middle value of a sorted copy of `V`, `V_sorted` - i.e., `V_sorted[(N-1)/2]`, when `N` is odd, and the average of the two middle values of `V_sorted` when `N` is even.

#### Examples

```
>>> a = np.array([[10, 7, 4], [3, 2, 1]])
>>> a
array([[10,  7,  4],
       [ 3,  2,  1]])
>>> np.median(a)
3.5
>>> np.median(a, axis=0)
array([6.5, 4.5, 2.5])
>>> np.median(a, axis=1)
array([7., 2.])
>>> m = np.median(a, axis=0)
>>> out = np.zeros_like(m)
>>> np.median(a, axis=0, out=m)
array([6.5, 4.5, 2.5])
>>> m
array([6.5, 4.5, 2.5])
>>> b = a.copy()
>>> np.median(b, axis=1, overwrite_input=True)
array([7., 2.])
>>> assert not np.all(a==b)
>>> b = a.copy()
>>> np.median(b, axis=None, overwrite_input=True)
3.5
>>> assert not np.all(a==b)
```

<https://numpy.org/doc/1.23/reference/generated/numpy.median.html>

numpy.minimum
=============

numpy.minimum(*x1*, *x2*, */*, *out=None*, ***, *where=True*, *casting='same_kind'*, *order='K'*, *dtype=None*, *subok=True*[, *signature*, *extobj*]) *= <ufunc 'minimum'>*

Element-wise minimum of array elements.
Compare two arrays and returns a new array containing the element-wise minima. If one of the elements being compared is a NaN, then that element is returned. If both elements are NaNs then the first is returned. The latter distinction is important for complex NaNs, which are defined as at least one of the real or imaginary parts being a NaN. The net effect is that NaNs are propagated. Parameters **x1, x2**array_like The arrays holding the elements to be compared. If `x1.shape != x2.shape`, they must be broadcastable to a common shape (which becomes the shape of the output). **out**ndarray, None, or tuple of ndarray and None, optional A location into which the result is stored. If provided, it must have a shape that the inputs broadcast to. If not provided or None, a freshly-allocated array is returned. A tuple (possible only as a keyword argument) must have length equal to the number of outputs. **where**array_like, optional This condition is broadcast over the input. At locations where the condition is True, the `out` array will be set to the ufunc result. Elsewhere, the `out` array will retain its original value. Note that if an uninitialized `out` array is created via the default `out=None`, locations within it where the condition is False will remain uninitialized. ****kwargs** For other keyword-only arguments, see the [ufunc docs](../ufuncs#ufuncs-kwargs). Returns **y**ndarray or scalar The minimum of `x1` and `x2`, element-wise. This is a scalar if both `x1` and `x2` are scalars. See also [`maximum`](numpy.maximum#numpy.maximum "numpy.maximum") Element-wise maximum of two arrays, propagates NaNs. [`fmin`](numpy.fmin#numpy.fmin "numpy.fmin") Element-wise minimum of two arrays, ignores NaNs. [`amin`](numpy.amin#numpy.amin "numpy.amin") The minimum value of an array along a given axis, propagates NaNs. [`nanmin`](numpy.nanmin#numpy.nanmin "numpy.nanmin") The minimum value of an array along a given axis, ignores NaNs. 
[`fmax`](numpy.fmax#numpy.fmax "numpy.fmax"), [`amax`](numpy.amax#numpy.amax "numpy.amax"), [`nanmax`](numpy.nanmax#numpy.nanmax "numpy.nanmax")

#### Notes

The minimum is equivalent to `np.where(x1 <= x2, x1, x2)` when neither x1 nor x2 are NaNs, but it is faster and does proper broadcasting.

#### Examples

```
>>> np.minimum([2, 3, 4], [1, 5, 2])
array([1, 3, 2])
```

```
>>> np.minimum(np.eye(2), [0.5, 2]) # broadcasting
array([[ 0.5,  0. ],
       [ 0. ,  1. ]])
```

```
>>> np.minimum([np.nan, 0, np.nan],[0, np.nan, np.nan])
array([nan, nan, nan])
>>> np.minimum(-np.Inf, 1)
-inf
```

<https://numpy.org/doc/1.23/reference/generated/numpy.minimum.html>

numpy.outer
===========

numpy.outer(*a*, *b*, *out=None*)[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/core/numeric.py#L857-L942)

Compute the outer product of two vectors.

Given two vectors, `a = [a0, a1, ..., aM]` and `b = [b0, b1, ..., bN]`, the outer product [[1]](#r14e6c54b746b-1) is:

```
[[a0*b0  a0*b1 ... a0*bN ]
 [a1*b0    .
 [ ...          .
 [aM*b0            aM*bN ]]
```

Parameters

**a**(M,) array_like

First input vector. Input is flattened if not already 1-dimensional.

**b**(N,) array_like

Second input vector. Input is flattened if not already 1-dimensional.

**out**(M, N) ndarray, optional

A location where the result is stored.

New in version 1.9.0.

Returns

**out**(M, N) ndarray

`out[i, j] = a[i] * b[j]`

See also

[`inner`](numpy.inner#numpy.inner "numpy.inner")

[`einsum`](numpy.einsum#numpy.einsum "numpy.einsum")

`einsum('i,j->ij', a.ravel(), b.ravel())` is the equivalent.

[`ufunc.outer`](numpy.ufunc.outer#numpy.ufunc.outer "numpy.ufunc.outer")

A generalization to dimensions other than 1D and other operations. `np.multiply.outer(a.ravel(), b.ravel())` is the equivalent.

[`tensordot`](numpy.tensordot#numpy.tensordot "numpy.tensordot")

`np.tensordot(a.ravel(), b.ravel(), axes=((), ()))` is the equivalent.
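The equivalences listed under See also can be checked numerically; a small sketch (the arrays here are illustrative, not from the original page):

```python
import numpy as np

a = np.array([1.0, 2.0, 3.0])
b = np.array([4.0, 5.0])

o = np.outer(a, b)  # shape (3, 2), with o[i, j] = a[i] * b[j]
# Each documented equivalent produces the same array:
assert np.array_equal(o, np.einsum('i,j->ij', a.ravel(), b.ravel()))
assert np.array_equal(o, np.multiply.outer(a.ravel(), b.ravel()))
assert np.array_equal(o, np.tensordot(a.ravel(), b.ravel(), axes=((), ())))
print(o)
```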
#### References

[1](#id1) : <NAME> and <NAME>, *Matrix Computations*, 3rd ed., Baltimore, MD, Johns Hopkins University Press, 1996, pg. 8.

#### Examples

Make a (*very* coarse) grid for computing a Mandelbrot set:

```
>>> rl = np.outer(np.ones((5,)), np.linspace(-2, 2, 5))
>>> rl
array([[-2., -1., 0., 1., 2.],
       [-2., -1., 0., 1., 2.],
       [-2., -1., 0., 1., 2.],
       [-2., -1., 0., 1., 2.],
       [-2., -1., 0., 1., 2.]])
>>> im = np.outer(1j*np.linspace(2, -2, 5), np.ones((5,)))
>>> im
array([[0.+2.j, 0.+2.j, 0.+2.j, 0.+2.j, 0.+2.j],
       [0.+1.j, 0.+1.j, 0.+1.j, 0.+1.j, 0.+1.j],
       [0.+0.j, 0.+0.j, 0.+0.j, 0.+0.j, 0.+0.j],
       [0.-1.j, 0.-1.j, 0.-1.j, 0.-1.j, 0.-1.j],
       [0.-2.j, 0.-2.j, 0.-2.j, 0.-2.j, 0.-2.j]])
>>> grid = rl + im
>>> grid
array([[-2.+2.j, -1.+2.j, 0.+2.j, 1.+2.j, 2.+2.j],
       [-2.+1.j, -1.+1.j, 0.+1.j, 1.+1.j, 2.+1.j],
       [-2.+0.j, -1.+0.j, 0.+0.j, 1.+0.j, 2.+0.j],
       [-2.-1.j, -1.-1.j, 0.-1.j, 1.-1.j, 2.-1.j],
       [-2.-2.j, -1.-2.j, 0.-2.j, 1.-2.j, 2.-2.j]])
```

An example using a "vector" of letters:

```
>>> x = np.array(['a', 'b', 'c'], dtype=object)
>>> np.outer(x, [1, 2, 3])
array([['a', 'aa', 'aaa'],
       ['b', 'bb', 'bbb'],
       ['c', 'cc', 'ccc']], dtype=object)
```

<https://numpy.org/doc/1.23/reference/generated/numpy.outer.html>

numpy.prod
==========

numpy.prod(*a*, *axis=None*, *dtype=None*, *out=None*, *keepdims=<no value>*, *initial=<no value>*, *where=<no value>*)[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/core/fromnumeric.py#L2927-L3046)

Return the product of array elements over a given axis.

Parameters

**a**array_like

Input data.

**axis**None or int or tuple of ints, optional

Axis or axes along which a product is performed. The default, axis=None, will calculate the product of all the elements in the input array. If axis is negative it counts from the last to the first axis.

New in version 1.7.0.
If axis is a tuple of ints, a product is performed on all of the axes specified in the tuple instead of a single axis or all the axes as before. **dtype**dtype, optional The type of the returned array, as well as of the accumulator in which the elements are multiplied. The dtype of `a` is used by default unless `a` has an integer dtype of less precision than the default platform integer. In that case, if `a` is signed then the platform integer is used while if `a` is unsigned then an unsigned integer of the same precision as the platform integer is used. **out**ndarray, optional Alternative output array in which to place the result. It must have the same shape as the expected output, but the type of the output values will be cast if necessary. **keepdims**bool, optional If this is set to True, the axes which are reduced are left in the result as dimensions with size one. With this option, the result will broadcast correctly against the input array. If the default value is passed, then `keepdims` will not be passed through to the [`prod`](#numpy.prod "numpy.prod") method of sub-classes of [`ndarray`](numpy.ndarray#numpy.ndarray "numpy.ndarray"), however any non-default value will be. If the sub-class’ method does not implement `keepdims` any exceptions will be raised. **initial**scalar, optional The starting value for this product. See [`reduce`](numpy.ufunc.reduce#numpy.ufunc.reduce "numpy.ufunc.reduce") for details. New in version 1.15.0. **where**array_like of bool, optional Elements to include in the product. See [`reduce`](numpy.ufunc.reduce#numpy.ufunc.reduce "numpy.ufunc.reduce") for details. New in version 1.17.0. Returns **product_along_axis**ndarray, see [`dtype`](numpy.dtype#numpy.dtype "numpy.dtype") parameter above. An array shaped as `a` but with the specified axis removed. Returns a reference to `out` if specified. 
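The modular integer arithmetic warned about in this entry's Notes can be made concrete with a short sketch (the array values are illustrative, not from the original page):

```python
import numpy as np

# Integer products wrap silently on overflow; there is no error.
small = np.full(10, 2, dtype=np.int8)    # true product is 2**10 = 1024
wrapped = np.prod(small, dtype=np.int8)  # forced int8 accumulator wraps mod 2**8
exact = np.prod(small, dtype=np.float64) # float accumulator keeps the exact value
print(wrapped)  # 0
print(exact)    # 1024.0
```

Note that without the explicit `dtype=np.int8`, the default accumulator for a small signed integer dtype is the platform integer, which would hold 1024 without wrapping.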
See also

[`ndarray.prod`](numpy.ndarray.prod#numpy.ndarray.prod "numpy.ndarray.prod")

equivalent method

[Output type determination](../../user/basics.ufuncs#ufuncs-output-type)

#### Notes

Arithmetic is modular when using integer types, and no error is raised on overflow. That means that, on a 32-bit platform:

```
>>> x = np.array([536870910, 536870910, 536870910, 536870910])
>>> np.prod(x)
16 # may vary
```

The product of an empty array is the neutral element 1:

```
>>> np.prod([])
1.0
```

#### Examples

By default, calculate the product of all elements:

```
>>> np.prod([1.,2.])
2.0
```

Even when the input array is two-dimensional:

```
>>> np.prod([[1.,2.],[3.,4.]])
24.0
```

But we can also specify the axis over which to multiply:

```
>>> np.prod([[1.,2.],[3.,4.]], axis=1)
array([ 2., 12.])
```

Or select specific elements to include:

```
>>> np.prod([1., np.nan, 3.], where=[True, False, True])
3.0
```

If the type of `x` is unsigned, then the output type is the unsigned platform integer:

```
>>> x = np.array([1, 2, 3], dtype=np.uint8)
>>> np.prod(x).dtype == np.uint
True
```

If `x` is of a signed integer type, then the output type is the default platform integer:

```
>>> x = np.array([1, 2, 3], dtype=np.int8)
>>> np.prod(x).dtype == int
True
```

You can also start the product with a value other than one:

```
>>> np.prod([1, 2], initial=5)
10
```

<https://numpy.org/doc/1.23/reference/generated/numpy.prod.html>

numpy.std
=========

numpy.std(*a*, *axis=None*, *dtype=None*, *out=None*, *ddof=0*, *keepdims=<no value>*, ***, *where=<no value>*)[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/core/fromnumeric.py#L3441-L3574)

Compute the standard deviation along the specified axis.

Returns the standard deviation, a measure of the spread of a distribution, of the array elements. The standard deviation is computed for the flattened array by default, otherwise over the specified axis.
Parameters **a**array_like Calculate the standard deviation of these values. **axis**None or int or tuple of ints, optional Axis or axes along which the standard deviation is computed. The default is to compute the standard deviation of the flattened array. New in version 1.7.0. If this is a tuple of ints, a standard deviation is performed over multiple axes, instead of a single axis or all the axes as before. **dtype**dtype, optional Type to use in computing the standard deviation. For arrays of integer type the default is float64, for arrays of float types it is the same as the array type. **out**ndarray, optional Alternative output array in which to place the result. It must have the same shape as the expected output but the type (of the calculated values) will be cast if necessary. **ddof**int, optional Means Delta Degrees of Freedom. The divisor used in calculations is `N - ddof`, where `N` represents the number of elements. By default `ddof` is zero. **keepdims**bool, optional If this is set to True, the axes which are reduced are left in the result as dimensions with size one. With this option, the result will broadcast correctly against the input array. If the default value is passed, then `keepdims` will not be passed through to the [`std`](#numpy.std "numpy.std") method of sub-classes of [`ndarray`](numpy.ndarray#numpy.ndarray "numpy.ndarray"), however any non-default value will be. If the sub-class’ method does not implement `keepdims` any exceptions will be raised. **where**array_like of bool, optional Elements to include in the standard deviation. See [`reduce`](numpy.ufunc.reduce#numpy.ufunc.reduce "numpy.ufunc.reduce") for details. New in version 1.20.0. Returns **standard_deviation**ndarray, see dtype parameter above. If `out` is None, return a new array containing the standard deviation, otherwise return a reference to the output array. 
See also [`var`](numpy.var#numpy.var "numpy.var"), [`mean`](numpy.mean#numpy.mean "numpy.mean"), [`nanmean`](numpy.nanmean#numpy.nanmean "numpy.nanmean"), [`nanstd`](numpy.nanstd#numpy.nanstd "numpy.nanstd"), [`nanvar`](numpy.nanvar#numpy.nanvar "numpy.nanvar") [Output type determination](../../user/basics.ufuncs#ufuncs-output-type) #### Notes The standard deviation is the square root of the average of the squared deviations from the mean, i.e., `std = sqrt(mean(x))`, where `x = abs(a - a.mean())**2`. The average squared deviation is typically calculated as `x.sum() / N`, where `N = len(x)`. If, however, `ddof` is specified, the divisor `N - ddof` is used instead. In standard statistical practice, `ddof=1` provides an unbiased estimator of the variance of the infinite population. `ddof=0` provides a maximum likelihood estimate of the variance for normally distributed variables. The standard deviation computed in this function is the square root of the estimated variance, so even with `ddof=1`, it will not be an unbiased estimate of the standard deviation per se. Note that, for complex numbers, [`std`](#numpy.std "numpy.std") takes the absolute value before squaring, so that the result is always real and nonnegative. For floating-point input, the *std* is computed using the same precision the input has. Depending on the input data, this can cause the results to be inaccurate, especially for float32 (see example below). Specifying a higher-accuracy accumulator using the [`dtype`](numpy.dtype#numpy.dtype "numpy.dtype") keyword can alleviate this issue. 
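The formula in the Notes can be verified directly, including the effect of `ddof`; a short sketch with an illustrative array (not from the original page):

```python
import numpy as np

a = np.array([1.0, 2.0, 3.0, 4.0])
# std = sqrt(mean(x)) with x = abs(a - a.mean())**2, as in the Notes.
x = np.abs(a - a.mean()) ** 2
assert np.isclose(np.std(a), np.sqrt(x.sum() / a.size))
# With ddof=1 the divisor becomes N - ddof (here N - 1), the
# "sample" standard deviation.
assert np.isclose(np.std(a, ddof=1), np.sqrt(x.sum() / (a.size - 1)))
print(np.std(a), np.std(a, ddof=1))
```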
#### Examples

```
>>> a = np.array([[1, 2], [3, 4]])
>>> np.std(a)
1.1180339887498949 # may vary
>>> np.std(a, axis=0)
array([1., 1.])
>>> np.std(a, axis=1)
array([0.5, 0.5])
```

In single precision, std() can be inaccurate:

```
>>> a = np.zeros((2, 512*512), dtype=np.float32)
>>> a[0, :] = 1.0
>>> a[1, :] = 0.1
>>> np.std(a)
0.45000005
```

Computing the standard deviation in float64 is more accurate:

```
>>> np.std(a, dtype=np.float64)
0.44999999925494177 # may vary
```

Specifying a where argument:

```
>>> a = np.array([[14, 8, 11, 10], [7, 9, 10, 11], [10, 15, 5, 10]])
>>> np.std(a)
2.614064523559687 # may vary
>>> np.std(a, where=[[True], [True], [False]])
2.0
```

<https://numpy.org/doc/1.23/reference/generated/numpy.std.html>

numpy.sum
=========

numpy.sum(*a*, *axis=None*, *dtype=None*, *out=None*, *keepdims=<no value>*, *initial=<no value>*, *where=<no value>*)[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/core/fromnumeric.py#L2162-L2299)

Sum of array elements over a given axis.

Parameters

**a**array_like

Elements to sum.

**axis**None or int or tuple of ints, optional

Axis or axes along which a sum is performed. The default, axis=None, will sum all of the elements of the input array. If axis is negative it counts from the last to the first axis.

New in version 1.7.0.

If axis is a tuple of ints, a sum is performed on all of the axes specified in the tuple instead of a single axis or all the axes as before.

**dtype**dtype, optional

The type of the returned array and of the accumulator in which the elements are summed. The dtype of `a` is used by default unless `a` has an integer dtype of less precision than the default platform integer. In that case, if `a` is signed then the platform integer is used while if `a` is unsigned then an unsigned integer of the same precision as the platform integer is used.

**out**ndarray, optional

Alternative output array in which to place the result.
It must have the same shape as the expected output, but the type of the output values will be cast if necessary. **keepdims**bool, optional If this is set to True, the axes which are reduced are left in the result as dimensions with size one. With this option, the result will broadcast correctly against the input array. If the default value is passed, then `keepdims` will not be passed through to the [`sum`](#numpy.sum "numpy.sum") method of sub-classes of [`ndarray`](numpy.ndarray#numpy.ndarray "numpy.ndarray"), however any non-default value will be. If the sub-class’ method does not implement `keepdims` any exceptions will be raised. **initial**scalar, optional Starting value for the sum. See [`reduce`](numpy.ufunc.reduce#numpy.ufunc.reduce "numpy.ufunc.reduce") for details. New in version 1.15.0. **where**array_like of bool, optional Elements to include in the sum. See [`reduce`](numpy.ufunc.reduce#numpy.ufunc.reduce "numpy.ufunc.reduce") for details. New in version 1.17.0. Returns **sum_along_axis**ndarray An array with the same shape as `a`, with the specified axis removed. If `a` is a 0-d array, or if `axis` is None, a scalar is returned. If an output array is specified, a reference to `out` is returned. See also [`ndarray.sum`](numpy.ndarray.sum#numpy.ndarray.sum "numpy.ndarray.sum") Equivalent method. `add.reduce` Equivalent functionality of [`add`](numpy.add#numpy.add "numpy.add"). [`cumsum`](numpy.cumsum#numpy.cumsum "numpy.cumsum") Cumulative sum of array elements. [`trapz`](numpy.trapz#numpy.trapz "numpy.trapz") Integration of array values using the composite trapezoidal rule. [`mean`](numpy.mean#numpy.mean "numpy.mean"), [`average`](numpy.average#numpy.average "numpy.average") #### Notes Arithmetic is modular when using integer types, and no error is raised on overflow. 
The sum of an empty array is the neutral element 0:

```
>>> np.sum([])
0.0
```

For floating point numbers the numerical precision of sum (and `np.add.reduce`) is in general limited by directly adding each number individually to the result causing rounding errors in every step. However, often numpy will use a numerically better approach (partial pairwise summation) leading to improved precision in many use-cases. This improved precision is always provided when no `axis` is given. When `axis` is given, it will depend on which axis is summed. Technically, to provide the best speed possible, the improved precision is only used when the summation is along the fast axis in memory. Note that the exact precision may vary depending on other parameters. In contrast to NumPy, Python's `math.fsum` function uses a slower but more precise approach to summation. Especially when summing a large number of lower precision floating point numbers, such as `float32`, numerical errors can become significant. In such cases it can be advisable to use `dtype="float64"` to use a higher precision for the output.

#### Examples

```
>>> np.sum([0.5, 1.5])
2.0
>>> np.sum([0.5, 0.7, 0.2, 1.5], dtype=np.int32)
1
>>> np.sum([[0, 1], [0, 5]])
6
>>> np.sum([[0, 1], [0, 5]], axis=0)
array([0, 6])
>>> np.sum([[0, 1], [0, 5]], axis=1)
array([1, 5])
>>> np.sum([[0, 1], [np.nan, 5]], where=[False, True], axis=1)
array([1., 5.])
```

If the accumulator is too small, overflow occurs:

```
>>> np.ones(128, dtype=np.int8).sum(dtype=np.int8)
-128
```

You can also start the sum with a value other than zero:

```
>>> np.sum([10], initial=5)
15
```

<https://numpy.org/doc/1.23/reference/generated/numpy.sum.html>

numpy.trace
===========

numpy.trace(*a*, *offset=0*, *axis1=0*, *axis2=1*, *dtype=None*, *out=None*)[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/core/fromnumeric.py#L1687-L1748)

Return the sum along diagonals of the array.
If `a` is 2-D, the sum along its diagonal with the given offset is returned, i.e., the sum of elements `a[i,i+offset]` for all i.

If `a` has more than two dimensions, then the axes specified by axis1 and axis2 are used to determine the 2-D sub-arrays whose traces are returned. The shape of the resulting array is the same as that of `a` with `axis1` and `axis2` removed.

Parameters

**a**array_like

Input array, from which the diagonals are taken.

**offset**int, optional

Offset of the diagonal from the main diagonal. Can be both positive and negative. Defaults to 0.

**axis1, axis2**int, optional

Axes to be used as the first and second axis of the 2-D sub-arrays from which the diagonals should be taken. Defaults are the first two axes of `a`.

**dtype**dtype, optional

Determines the data-type of the returned array and of the accumulator where the elements are summed. If dtype has the value None and `a` is of integer type of precision less than the default integer precision, then the default integer precision is used. Otherwise, the precision is the same as that of `a`.

**out**ndarray, optional

Array into which the output is placed. Its type is preserved and it must be of the right shape to hold the output.

Returns

**sum_along_diagonals**ndarray

If `a` is 2-D, the sum along the diagonal is returned. If `a` has larger dimensions, then an array of sums along diagonals is returned.

See also

[`diag`](numpy.diag#numpy.diag "numpy.diag"), [`diagonal`](numpy.diagonal#numpy.diagonal "numpy.diagonal"), [`diagflat`](numpy.diagflat#numpy.diagflat "numpy.diagflat")

#### Examples

```
>>> np.trace(np.eye(3))
3.0
>>> a = np.arange(8).reshape((2,2,2))
>>> np.trace(a)
array([6, 8])
```

```
>>> a = np.arange(24).reshape((2,2,2,3))
>>> np.trace(a).shape
(2, 3)
```
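As a supplementary sketch (not from the original page), `trace` with an `offset` agrees with summing the corresponding shifted diagonal directly:

```python
import numpy as np

a = np.arange(9).reshape(3, 3)
# The main diagonal is [0, 4, 8]; offset=1 selects [1, 5],
# offset=-1 selects [3, 7].
assert np.trace(a) == a.diagonal().sum() == 12
assert np.trace(a, offset=1) == a.diagonal(offset=1).sum() == 6
assert np.trace(a, offset=-1) == a.diagonal(offset=-1).sum() == 10
```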
<https://numpy.org/doc/1.23/reference/generated/numpy.trace.html>

numpy.var
=========

numpy.var(*a*, *axis=None*, *dtype=None*, *out=None*, *ddof=0*, *keepdims=<no value>*, ***, *where=<no value>*)[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/core/fromnumeric.py#L3582-L3716)

Compute the variance along the specified axis.

Returns the variance of the array elements, a measure of the spread of a distribution. The variance is computed for the flattened array by default, otherwise over the specified axis.

Parameters

**a**array_like

Array containing numbers whose variance is desired. If `a` is not an array, a conversion is attempted.

**axis**None or int or tuple of ints, optional

Axis or axes along which the variance is computed. The default is to compute the variance of the flattened array.

New in version 1.7.0.

If this is a tuple of ints, a variance is performed over multiple axes, instead of a single axis or all the axes as before.

**dtype**data-type, optional

Type to use in computing the variance. For arrays of integer type the default is [`float64`](../arrays.scalars#numpy.float64 "numpy.float64"); for arrays of float types it is the same as the array type.

**out**ndarray, optional

Alternate output array in which to place the result. It must have the same shape as the expected output, but the type is cast if necessary.

**ddof**int, optional

"Delta Degrees of Freedom": the divisor used in the calculation is `N - ddof`, where `N` represents the number of elements. By default `ddof` is zero.

**keepdims**bool, optional

If this is set to True, the axes which are reduced are left in the result as dimensions with size one. With this option, the result will broadcast correctly against the input array.

If the default value is passed, then `keepdims` will not be passed through to the [`var`](#numpy.var "numpy.var") method of sub-classes of [`ndarray`](numpy.ndarray#numpy.ndarray "numpy.ndarray"), however any non-default value will be.
If the sub-class’ method does not implement `keepdims` any exceptions will be raised. **where**array_like of bool, optional Elements to include in the variance. See [`reduce`](numpy.ufunc.reduce#numpy.ufunc.reduce "numpy.ufunc.reduce") for details. New in version 1.20.0. Returns **variance**ndarray, see dtype parameter above If `out=None`, returns a new array containing the variance; otherwise, a reference to the output array is returned. See also [`std`](numpy.std#numpy.std "numpy.std"), [`mean`](numpy.mean#numpy.mean "numpy.mean"), [`nanmean`](numpy.nanmean#numpy.nanmean "numpy.nanmean"), [`nanstd`](numpy.nanstd#numpy.nanstd "numpy.nanstd"), [`nanvar`](numpy.nanvar#numpy.nanvar "numpy.nanvar") [Output type determination](../../user/basics.ufuncs#ufuncs-output-type) #### Notes The variance is the average of the squared deviations from the mean, i.e., `var = mean(x)`, where `x = abs(a - a.mean())**2`. The mean is typically calculated as `x.sum() / N`, where `N = len(x)`. If, however, `ddof` is specified, the divisor `N - ddof` is used instead. In standard statistical practice, `ddof=1` provides an unbiased estimator of the variance of a hypothetical infinite population. `ddof=0` provides a maximum likelihood estimate of the variance for normally distributed variables. Note that for complex numbers, the absolute value is taken before squaring, so that the result is always real and nonnegative. For floating-point input, the variance is computed using the same precision the input has. Depending on the input data, this can cause the results to be inaccurate, especially for [`float32`](../arrays.scalars#numpy.float32 "numpy.float32") (see example below). Specifying a higher-accuracy accumulator using the `dtype` keyword can alleviate this issue. 
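The complex-number remark in the Notes (absolute value before squaring) can be checked with a tiny sketch using illustrative values (not from the original page):

```python
import numpy as np

z = np.array([1 + 1j, 1 - 1j])
# Deviations from the mean are [1j, -1j]; |.|**2 gives [1, 1],
# so the variance is real and nonnegative.
x = np.abs(z - z.mean()) ** 2
v = np.var(z)
assert np.isclose(v, x.mean())
assert np.isreal(v) and v >= 0
print(v)  # 1.0
```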
#### Examples ``` >>> a = np.array([[1, 2], [3, 4]]) >>> np.var(a) 1.25 >>> np.var(a, axis=0) array([1., 1.]) >>> np.var(a, axis=1) array([0.25, 0.25]) ``` In single precision, var() can be inaccurate: ``` >>> a = np.zeros((2, 512*512), dtype=np.float32) >>> a[0, :] = 1.0 >>> a[1, :] = 0.1 >>> np.var(a) 0.20250003 ``` Computing the variance in float64 is more accurate: ``` >>> np.var(a, dtype=np.float64) 0.20249999932944759 # may vary >>> ((1-0.55)**2 + (0.1-0.55)**2)/2 0.2025 ``` Specifying a where argument: ``` >>> a = np.array([[14, 8, 11, 10], [7, 9, 10, 11], [10, 15, 5, 10]]) >>> np.var(a) 6.833333333333333 # may vary >>> np.var(a, where=[[True], [True], [False]]) 4.0 ``` © 2005–2022 NumPy Developers Licensed under the 3-clause BSD License. <https://numpy.org/doc/1.23/reference/generated/numpy.var.htmlnumpy.vdot ========== numpy.vdot(*a*, *b*, */*) Return the dot product of two vectors. The vdot(`a`, `b`) function handles complex numbers differently than dot(`a`, `b`). If the first argument is complex the complex conjugate of the first argument is used for the calculation of the dot product. Note that [`vdot`](#numpy.vdot "numpy.vdot") handles multidimensional arrays differently than [`dot`](numpy.dot#numpy.dot "numpy.dot"): it does *not* perform a matrix product, but flattens input arguments to 1-D vectors first. Consequently, it should only be used for vectors. Parameters **a**array_like If `a` is complex the complex conjugate is taken before calculation of the dot product. **b**array_like Second argument to the dot product. Returns **output**ndarray Dot product of `a` and `b`. Can be an int, float, or complex depending on the types of `a` and `b`. See also [`dot`](numpy.dot#numpy.dot "numpy.dot") Return the dot product without using the complex conjugate of the first argument. 
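The conjugation rule above amounts to flattening both inputs and taking an ordinary dot product with the first argument conjugated; a quick check:

```python
import numpy as np

a = np.array([1 + 2j, 3 + 4j])
b = np.array([5 + 6j, 7 + 8j])

# vdot conjugates its first argument before the dot product
assert np.isclose(np.vdot(a, b), np.dot(a.conj(), b))
# which is why vdot(a, b) is the complex conjugate of vdot(b, a)
assert np.isclose(np.vdot(a, b), np.conj(np.vdot(b, a)))
```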
#### Examples ``` >>> a = np.array([1+2j,3+4j]) >>> b = np.array([5+6j,7+8j]) >>> np.vdot(a, b) (70-8j) >>> np.vdot(b, a) (70+8j) ``` Note that higher-dimensional arrays are flattened! ``` >>> a = np.array([[1, 4], [5, 6]]) >>> b = np.array([[4, 1], [2, 2]]) >>> np.vdot(a, b) 30 >>> np.vdot(b, a) 30 >>> 1*4 + 4*1 + 5*2 + 6*2 30 ``` © 2005–2022 NumPy Developers Licensed under the 3-clause BSD License. <https://numpy.org/doc/1.23/reference/generated/numpy.vdot.htmlnumpy.vectorize =============== *class*numpy.vectorize(*pyfunc*, *otypes=None*, *doc=None*, *excluded=None*, *cache=False*, *signature=None*)[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/__init__.py) Generalized function class. Define a vectorized function which takes a nested sequence of objects or numpy arrays as inputs and returns a single numpy array or a tuple of numpy arrays. The vectorized function evaluates `pyfunc` over successive tuples of the input arrays like the python map function, except it uses the broadcasting rules of numpy. The data type of the output of `vectorized` is determined by calling the function with the first element of the input. This can be avoided by specifying the `otypes` argument. Parameters **pyfunc**callable A python function or method. **otypes**str or list of dtypes, optional The output data type. It must be specified as either a string of typecode characters or a list of data type specifiers. There should be one data type specifier for each output. **doc**str, optional The docstring for the function. If None, the docstring will be the `pyfunc.__doc__`. **excluded**set, optional Set of strings or integers representing the positional or keyword arguments for which the function will not be vectorized. These will be passed directly to `pyfunc` unmodified. New in version 1.7.0. **cache**bool, optional If `True`, then cache the first function call that determines the number of outputs if `otypes` is not provided. New in version 1.7.0. 
**signature**string, optional Generalized universal function signature, e.g., `(m,n),(n)->(m)` for vectorized matrix-vector multiplication. If provided, `pyfunc` will be called with (and expected to return) arrays with shapes given by the size of corresponding core dimensions. By default, `pyfunc` is assumed to take scalars as input and output. New in version 1.12.0. Returns **vectorized**callable Vectorized function. See also [`frompyfunc`](numpy.frompyfunc#numpy.frompyfunc "numpy.frompyfunc") Takes an arbitrary Python function and returns a ufunc #### Notes The [`vectorize`](#numpy.vectorize "numpy.vectorize") function is provided primarily for convenience, not for performance. The implementation is essentially a for loop. If `otypes` is not specified, then a call to the function with the first argument will be used to determine the number of outputs. The results of this call will be cached if `cache` is `True` to prevent calling the function twice. However, to implement the cache, the original function must be wrapped which will slow down subsequent calls, so only do this if your function is expensive. The new keyword argument interface and `excluded` argument support further degrades performance. #### References 1 [Generalized Universal Function API](../c-api/generalized-ufuncs) #### Examples ``` >>> def myfunc(a, b): ... "Return a-b if a>b, otherwise return a+b" ... if a > b: ... return a - b ... else: ... 
return a + b ``` ``` >>> vfunc = np.vectorize(myfunc) >>> vfunc([1, 2, 3, 4], 2) array([3, 4, 1, 2]) ``` The docstring is taken from the input function to [`vectorize`](#numpy.vectorize "numpy.vectorize") unless it is specified: ``` >>> vfunc.__doc__ 'Return a-b if a>b, otherwise return a+b' >>> vfunc = np.vectorize(myfunc, doc='Vectorized `myfunc`') >>> vfunc.__doc__ 'Vectorized `myfunc`' ``` The output type is determined by evaluating the first element of the input, unless it is specified: ``` >>> out = vfunc([1, 2, 3, 4], 2) >>> type(out[0]) <class 'numpy.int64'> >>> vfunc = np.vectorize(myfunc, otypes=[float]) >>> out = vfunc([1, 2, 3, 4], 2) >>> type(out[0]) <class 'numpy.float64'> ``` The `excluded` argument can be used to prevent vectorizing over certain arguments. This can be useful for array-like arguments of a fixed length such as the coefficients for a polynomial as in [`polyval`](numpy.polyval#numpy.polyval "numpy.polyval"): ``` >>> def mypolyval(p, x): ... _p = list(p) ... res = _p.pop(0) ... while _p: ... res = res*x + _p.pop(0) ... return res >>> vpolyval = np.vectorize(mypolyval, excluded=['p']) >>> vpolyval(p=[1, 2, 3], x=[0, 1]) array([3, 6]) ``` Positional arguments may also be excluded by specifying their position: ``` >>> vpolyval.excluded.add(0) >>> vpolyval([1, 2, 3], x=[0, 1]) array([3, 6]) ``` The `signature` argument allows for vectorizing functions that act on non-scalar arrays of fixed length. For example, you can use it for a vectorized calculation of Pearson correlation coefficient and its p-value: ``` >>> import scipy.stats >>> pearsonr = np.vectorize(scipy.stats.pearsonr, ... 
signature='(n),(n)->(),()') >>> pearsonr([[0, 1, 2, 3]], [[1, 2, 3, 4], [4, 3, 2, 1]]) (array([ 1., -1.]), array([ 0., 0.])) ``` Or for a vectorized convolution: ``` >>> convolve = np.vectorize(np.convolve, signature='(n),(m)->(k)') >>> convolve(np.eye(4), [1, 2, 1]) array([[1., 2., 1., 0., 0., 0.], [0., 1., 2., 1., 0., 0.], [0., 0., 1., 2., 1., 0.], [0., 0., 0., 1., 2., 1.]]) ``` #### Methods | | | | --- | --- | | [`__call__`](numpy.vectorize.__call__#numpy.vectorize.__call__ "numpy.vectorize.__call__")(*args, **kwargs) | Return arrays with the results of `pyfunc` broadcast (vectorized) over `args` and `kwargs` not in `excluded`. | © 2005–2022 NumPy Developers Licensed under the 3-clause BSD License. <https://numpy.org/doc/1.23/reference/generated/numpy.vectorize.htmlnumpy.where =========== numpy.where(*condition*, [*x*, *y*, ]*/*) Return elements chosen from `x` or `y` depending on `condition`. Note When only `condition` is provided, this function is a shorthand for `np.asarray(condition).nonzero()`. Using [`nonzero`](numpy.nonzero#numpy.nonzero "numpy.nonzero") directly should be preferred, as it behaves correctly for subclasses. The rest of this documentation covers only the case where all three arguments are provided. Parameters **condition**array_like, bool Where True, yield `x`, otherwise yield `y`. **x, y**array_like Values from which to choose. `x`, `y` and `condition` need to be broadcastable to some shape. Returns **out**ndarray An array with elements from `x` where `condition` is True, and elements from `y` elsewhere. 
See also [`choose`](numpy.choose#numpy.choose "numpy.choose") [`nonzero`](numpy.nonzero#numpy.nonzero "numpy.nonzero") The function that is called when x and y are omitted #### Notes If all the arrays are 1-D, [`where`](#numpy.where "numpy.where") is equivalent to: ``` [xv if c else yv for c, xv, yv in zip(condition, x, y)] ``` #### Examples ``` >>> a = np.arange(10) >>> a array([0, 1, 2, 3, 4, 5, 6, 7, 8, 9]) >>> np.where(a < 5, a, 10*a) array([ 0, 1, 2, 3, 4, 50, 60, 70, 80, 90]) ``` This can be used on multidimensional arrays too: ``` >>> np.where([[True, False], [True, True]], ... [[1, 2], [3, 4]], ... [[9, 8], [7, 6]]) array([[1, 8], [3, 4]]) ``` The shapes of x, y, and the condition are broadcast together: ``` >>> x, y = np.ogrid[:3, :4] >>> np.where(x < y, x, 10 + y) # both x and 10+y are broadcast array([[10, 0, 0, 0], [10, 11, 1, 1], [10, 11, 12, 2]]) ``` ``` >>> a = np.array([[0, 1, 2], ... [0, 2, 4], ... [0, 3, 6]]) >>> np.where(a < 4, a, -1) # -1 is broadcast array([[ 0, 1, 2], [ 0, 2, -1], [ 0, 3, -1]]) ``` © 2005–2022 NumPy Developers Licensed under the 3-clause BSD License. <https://numpy.org/doc/1.23/reference/generated/numpy.where.htmlnumpy.ndenumerate ================= *class*numpy.ndenumerate(*arr*)[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/__init__.py) Multidimensional index iterator. Return an iterator yielding pairs of array coordinates and values. Parameters **arr**ndarray Input array. See also [`ndindex`](numpy.ndindex#numpy.ndindex "numpy.ndindex"), [`flatiter`](numpy.flatiter#numpy.flatiter "numpy.flatiter") #### Examples ``` >>> a = np.array([[1, 2], [3, 4]]) >>> for index, x in np.ndenumerate(a): ... print(index, x) (0, 0) 1 (0, 1) 2 (1, 0) 3 (1, 1) 4 ``` © 2005–2022 NumPy Developers Licensed under the 3-clause BSD License. 
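As a sketch of the `ndindex` relationship noted in the See also above, `ndenumerate` yields the same pairs as iterating `np.ndindex` over the array's shape:

```python
import numpy as np

a = np.array([[1, 2], [3, 4]])

pairs = list(np.ndenumerate(a))
# Equivalent construction: C-order indices from ndindex, paired with values
expected = [(idx, a[idx]) for idx in np.ndindex(a.shape)]

assert pairs == expected
assert pairs[0] == ((0, 0), 1)
```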
numpy.indices ============= numpy.indices(*dimensions*, *dtype=<class 'int'>*, *sparse=False*)[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/core/numeric.py#L1686-L1786) Return an array representing the indices of a grid. Compute an array where the subarrays contain index values 0, 1, …
 varying only along the corresponding axis. Parameters **dimensions**sequence of ints The shape of the grid. **dtype**dtype, optional Data type of the result. **sparse**boolean, optional Return a sparse representation of the grid instead of a dense representation. Default is False. New in version 1.17. Returns **grid**one ndarray or tuple of ndarrays If sparse is False: Returns one array of grid indices, `grid.shape = (len(dimensions),) + tuple(dimensions)`. If sparse is True: Returns a tuple of arrays, with `grid[i].shape = (1, ..., 1, dimensions[i], 1, ..., 1)` with dimensions[i] in the ith place See also [`mgrid`](numpy.mgrid#numpy.mgrid "numpy.mgrid"), [`ogrid`](numpy.ogrid#numpy.ogrid "numpy.ogrid"), [`meshgrid`](numpy.meshgrid#numpy.meshgrid "numpy.meshgrid") #### Notes The output shape in the dense case is obtained by prepending the number of dimensions in front of the tuple of dimensions, i.e. if `dimensions` is a tuple `(r0, ..., rN-1)` of length `N`, the output shape is `(N, r0, ..., rN-1)`. The subarrays `grid[k]` contains the N-D array of indices along the `k-th` axis. Explicitly: ``` grid[k, i0, i1, ..., iN-1] = ik ``` #### Examples ``` >>> grid = np.indices((2, 3)) >>> grid.shape (2, 2, 3) >>> grid[0] # row indices array([[0, 0, 0], [1, 1, 1]]) >>> grid[1] # column indices array([[0, 1, 2], [0, 1, 2]]) ``` The indices can be used as an index into an array. ``` >>> x = np.arange(20).reshape(5, 4) >>> row, col = np.indices((2, 3)) >>> x[row, col] array([[0, 1, 2], [4, 5, 6]]) ``` Note that it would be more straightforward in the above example to extract the required elements directly with `x[:2, :3]`. If sparse is set to true, the grid will be returned in a sparse representation. ``` >>> i, j = np.indices((2, 3), sparse=True) >>> i.shape (2, 1) >>> j.shape (1, 3) >>> i # row indices array([[0], [1]]) >>> j # column indices array([[0, 1, 2]]) ``` © 2005–2022 NumPy Developers Licensed under the 3-clause BSD License. 
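The sparse form described above matches what `np.ogrid` produces for the same shape, and broadcasting the sparse pair reproduces the dense grid; a minimal check:

```python
import numpy as np

i, j = np.indices((2, 3), sparse=True)
gi, gj = np.ogrid[:2, :3]

# Both give broadcastable row/column index arrays of shapes (2, 1) and (1, 3)
assert np.array_equal(i, gi) and np.array_equal(j, gj)

# Broadcasting the sparse pair reproduces the dense grid
dense = np.indices((2, 3))
assert np.array_equal(dense[0], np.broadcast_to(i, (2, 3)))
assert np.array_equal(dense[1], np.broadcast_to(j, (2, 3)))
```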
numpy.ndarray.resize ==================== method ndarray.resize(*new_shape*, *refcheck=True*) Change shape and size of array in-place. Parameters **new_shape**tuple of ints, or `n` ints Shape of resized array. **refcheck**bool, optional If False, reference count will not be checked. Default is True. Returns None Raises ValueError If `a` does not own its own data or references or views to it exist, and the data memory must be changed. PyPy only: will always raise if the data memory must be changed, since there is no reliable way to determine if references or views to it exist. SystemError If the `order` keyword argument is specified. This behaviour is a bug in NumPy. See also [`resize`](numpy.resize#numpy.resize "numpy.resize") Return a new array with the specified shape. #### Notes This reallocates space for the data area if necessary. Only contiguous arrays (data elements consecutive in memory) can be resized. The purpose of the reference count check is to make sure you do not use this array as a buffer for another Python object and then reallocate the memory. However, reference counts can increase in other ways so if you are sure that you have not shared the memory for this array with another Python object, then you may safely set `refcheck` to False. #### Examples Shrinking an array: array is flattened (in the order that the data are stored in memory), resized, and reshaped: ``` >>> a = np.array([[0, 1], [2, 3]], order='C') >>> a.resize((2, 1)) >>> a array([[0], [1]]) ``` ``` >>> a = np.array([[0, 1], [2, 3]], order='F') >>> a.resize((2, 1)) >>> a array([[0], [2]]) ``` Enlarging an array: as above, but missing entries are filled with zeros: ``` >>> b = np.array([[0, 1], [2, 3]]) >>> b.resize(2, 3) # new_shape parameter doesn't have to be a tuple >>> b array([[0, 1, 2], [3, 0, 0]]) ``` Referencing an array prevents resizing…
 ``` >>> c = a >>> a.resize((1, 1)) Traceback (most recent call last): ... ValueError: cannot resize an array that references or is referenced ... ``` Unless `refcheck` is False: ``` >>> a.resize((1, 1), refcheck=False) >>> a array([[0]]) >>> c array([[0]]) ``` © 2005–2022 NumPy Developers Licensed under the 3-clause BSD License. <https://numpy.org/doc/1.23/reference/generated/numpy.ndarray.resize.htmlnumpy.ndarray.shape =================== attribute ndarray.shape Tuple of array dimensions. The shape property is usually used to get the current shape of an array, but may also be used to reshape the array in-place by assigning a tuple of array dimensions to it. As with [`numpy.reshape`](numpy.reshape#numpy.reshape "numpy.reshape"), one of the new shape dimensions can be -1, in which case its value is inferred from the size of the array and the remaining dimensions. Reshaping an array in-place will fail if a copy is required. Warning Setting `arr.shape` is discouraged and may be deprecated in the future. Using [`ndarray.reshape`](numpy.ndarray.reshape#numpy.ndarray.reshape "numpy.ndarray.reshape") is the preferred approach. See also [`numpy.shape`](numpy.shape#numpy.shape "numpy.shape") Equivalent getter function. [`numpy.reshape`](numpy.reshape#numpy.reshape "numpy.reshape") Function similar to setting `shape`. [`ndarray.reshape`](numpy.ndarray.reshape#numpy.ndarray.reshape "numpy.ndarray.reshape") Method similar to setting `shape`. 
#### Examples ``` >>> x = np.array([1, 2, 3, 4]) >>> x.shape (4,) >>> y = np.zeros((2, 3, 4)) >>> y.shape (2, 3, 4) >>> y.shape = (3, 8) >>> y array([[ 0., 0., 0., 0., 0., 0., 0., 0.], [ 0., 0., 0., 0., 0., 0., 0., 0.], [ 0., 0., 0., 0., 0., 0., 0., 0.]]) >>> y.shape = (3, 6) Traceback (most recent call last): File "<stdin>", line 1, in <module> ValueError: total size of new array must be unchanged >>> np.zeros((4,2))[::2].shape = (-1,) Traceback (most recent call last): File "<stdin>", line 1, in <module> AttributeError: Incompatible shape for in-place modification. Use `.reshape()` to make a copy with the desired shape. ``` © 2005–2022 NumPy Developers Licensed under the 3-clause BSD License. <https://numpy.org/doc/1.23/reference/generated/numpy.ndarray.shape.htmlnumpy.resize ============ numpy.resize(*a*, *new_shape*)[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/core/fromnumeric.py#L1394-L1471) Return a new array with the specified shape. If the new array is larger than the original array, then the new array is filled with repeated copies of `a`. Note that this behavior is different from a.resize(new_shape) which fills with zeros instead of repeated copies of `a`. Parameters **a**array_like Array to be resized. **new_shape**int or tuple of int Shape of resized array. Returns **reshaped_array**ndarray The new array is formed from the data in the old array, repeated if necessary to fill out the required number of elements. The data are repeated iterating over the array in C-order. See also [`numpy.reshape`](numpy.reshape#numpy.reshape "numpy.reshape") Reshape an array without changing the total size. [`numpy.pad`](numpy.pad#numpy.pad "numpy.pad") Enlarge and pad an array. [`numpy.repeat`](numpy.repeat#numpy.repeat "numpy.repeat") Repeat elements of an array. [`ndarray.resize`](numpy.ndarray.resize#numpy.ndarray.resize "numpy.ndarray.resize") resize an array in-place. 
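The fill difference from `ndarray.resize` noted above (repeated copies vs zeros) is easy to demonstrate:

```python
import numpy as np

a = np.array([[0, 1], [2, 3]])

# np.resize repeats the flattened data to fill the larger array
grown = np.resize(a, (2, 4))
assert np.array_equal(grown, [[0, 1, 2, 3], [0, 1, 2, 3]])

# ndarray.resize (in-place) pads with zeros instead
b = a.copy()
b.resize((2, 4), refcheck=False)
assert np.array_equal(b, [[0, 1, 2, 3], [0, 0, 0, 0]])
```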
#### Notes When the total size of the array does not change [`reshape`](numpy.reshape#numpy.reshape "numpy.reshape") should be used. In most other cases either indexing (to reduce the size) or padding (to increase the size) may be a more appropriate solution. Warning: This functionality does **not** consider axes separately, i.e. it does not apply interpolation/extrapolation. It fills the return array with the required number of elements, iterating over `a` in C-order, disregarding axes (and cycling back from the start if the new shape is larger). This functionality is therefore not suitable to resize images, or data where each axis represents a separate and distinct entity. #### Examples ``` >>> a=np.array([[0,1],[2,3]]) >>> np.resize(a,(2,3)) array([[0, 1, 2], [3, 0, 1]]) >>> np.resize(a,(1,4)) array([[0, 1, 2, 3]]) >>> np.resize(a,(2,4)) array([[0, 1, 2, 3], [0, 1, 2, 3]]) ``` © 2005–2022 NumPy Developers Licensed under the 3-clause BSD License. <https://numpy.org/doc/1.23/reference/generated/numpy.resize.htmlnumpy.column_stack =================== numpy.column_stack(*tup*)[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/lib/shape_base.py#L612-L656) Stack 1-D arrays as columns into a 2-D array. Take a sequence of 1-D arrays and stack them as columns to make a single 2-D array. 2-D arrays are stacked as-is, just like with [`hstack`](numpy.hstack#numpy.hstack "numpy.hstack"). 1-D arrays are turned into 2-D columns first. Parameters **tup**sequence of 1-D or 2-D arrays. Arrays to stack. All of them must have the same first dimension. Returns **stacked**2-D array The array formed by stacking the given arrays. 
See also [`stack`](numpy.stack#numpy.stack "numpy.stack"), [`hstack`](numpy.hstack#numpy.hstack "numpy.hstack"), [`vstack`](numpy.vstack#numpy.vstack "numpy.vstack"), [`concatenate`](numpy.concatenate#numpy.concatenate "numpy.concatenate") #### Examples ``` >>> a = np.array((1,2,3)) >>> b = np.array((2,3,4)) >>> np.column_stack((a,b)) array([[1, 2], [2, 3], [3, 4]]) ``` © 2005–2022 NumPy Developers Licensed under the 3-clause BSD License. <https://numpy.org/doc/1.23/reference/generated/numpy.column_stack.htmlnumpy.hstack ============ numpy.hstack(*tup*)[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/core/shape_base.py#L285-L345) Stack arrays in sequence horizontally (column wise). This is equivalent to concatenation along the second axis, except for 1-D arrays where it concatenates along the first axis. Rebuilds arrays divided by [`hsplit`](numpy.hsplit#numpy.hsplit "numpy.hsplit"). This function makes most sense for arrays with up to 3 dimensions. For instance, for pixel-data with a height (first axis), width (second axis), and r/g/b channels (third axis). The functions [`concatenate`](numpy.concatenate#numpy.concatenate "numpy.concatenate"), [`stack`](numpy.stack#numpy.stack "numpy.stack") and [`block`](numpy.block#numpy.block "numpy.block") provide more general stacking and concatenation operations. Parameters **tup**sequence of ndarrays The arrays must have the same shape along all but the second axis, except 1-D arrays which can be any length. Returns **stacked**ndarray The array formed by stacking the given arrays. See also [`concatenate`](numpy.concatenate#numpy.concatenate "numpy.concatenate") Join a sequence of arrays along an existing axis. [`stack`](numpy.stack#numpy.stack "numpy.stack") Join a sequence of arrays along a new axis. [`block`](numpy.block#numpy.block "numpy.block") Assemble an nd-array from nested lists of blocks. [`vstack`](numpy.vstack#numpy.vstack "numpy.vstack") Stack arrays in sequence vertically (row wise). 
[`dstack`](numpy.dstack#numpy.dstack "numpy.dstack") Stack arrays in sequence depth wise (along third axis). [`column_stack`](numpy.column_stack#numpy.column_stack "numpy.column_stack") Stack 1-D arrays as columns into a 2-D array. [`hsplit`](numpy.hsplit#numpy.hsplit "numpy.hsplit") Split an array into multiple sub-arrays horizontally (column-wise). #### Examples ``` >>> a = np.array((1,2,3)) >>> b = np.array((4,5,6)) >>> np.hstack((a,b)) array([1, 2, 3, 4, 5, 6]) >>> a = np.array([[1],[2],[3]]) >>> b = np.array([[4],[5],[6]]) >>> np.hstack((a,b)) array([[1, 4], [2, 5], [3, 6]]) ``` © 2005–2022 NumPy Developers Licensed under the 3-clause BSD License. <https://numpy.org/doc/1.23/reference/generated/numpy.hstack.htmlnumpy.row_stack ================ numpy.row_stack(*tup*)[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/core/shape_base.py#L222-L282) Stack arrays in sequence vertically (row wise). This is equivalent to concatenation along the first axis after 1-D arrays of shape `(N,)` have been reshaped to `(1,N)`. Rebuilds arrays divided by [`vsplit`](numpy.vsplit#numpy.vsplit "numpy.vsplit"). This function makes most sense for arrays with up to 3 dimensions. For instance, for pixel-data with a height (first axis), width (second axis), and r/g/b channels (third axis). The functions [`concatenate`](numpy.concatenate#numpy.concatenate "numpy.concatenate"), [`stack`](numpy.stack#numpy.stack "numpy.stack") and [`block`](numpy.block#numpy.block "numpy.block") provide more general stacking and concatenation operations. Parameters **tup**sequence of ndarrays The arrays must have the same shape along all but the first axis. 1-D arrays must have the same length. Returns **stacked**ndarray The array formed by stacking the given arrays, will be at least 2-D. See also [`concatenate`](numpy.concatenate#numpy.concatenate "numpy.concatenate") Join a sequence of arrays along an existing axis. 
[`stack`](numpy.stack#numpy.stack "numpy.stack") Join a sequence of arrays along a new axis. [`block`](numpy.block#numpy.block "numpy.block") Assemble an nd-array from nested lists of blocks. [`hstack`](numpy.hstack#numpy.hstack "numpy.hstack") Stack arrays in sequence horizontally (column wise). [`dstack`](numpy.dstack#numpy.dstack "numpy.dstack") Stack arrays in sequence depth wise (along third axis). [`column_stack`](numpy.column_stack#numpy.column_stack "numpy.column_stack") Stack 1-D arrays as columns into a 2-D array. [`vsplit`](numpy.vsplit#numpy.vsplit "numpy.vsplit") Split an array into multiple sub-arrays vertically (row-wise). #### Examples ``` >>> a = np.array([1, 2, 3]) >>> b = np.array([4, 5, 6]) >>> np.vstack((a,b)) array([[1, 2, 3], [4, 5, 6]]) ``` ``` >>> a = np.array([[1], [2], [3]]) >>> b = np.array([[4], [5], [6]]) >>> np.vstack((a,b)) array([[1], [2], [3], [4], [5], [6]]) ``` © 2005–2022 NumPy Developers Licensed under the 3-clause BSD License. <https://numpy.org/doc/1.23/reference/generated/numpy.row_stack.htmlnumpy.vstack ============ numpy.vstack(*tup*)[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/core/shape_base.py#L222-L282) Stack arrays in sequence vertically (row wise). This is equivalent to concatenation along the first axis after 1-D arrays of shape `(N,)` have been reshaped to `(1,N)`. Rebuilds arrays divided by [`vsplit`](numpy.vsplit#numpy.vsplit "numpy.vsplit"). This function makes most sense for arrays with up to 3 dimensions. For instance, for pixel-data with a height (first axis), width (second axis), and r/g/b channels (third axis). The functions [`concatenate`](numpy.concatenate#numpy.concatenate "numpy.concatenate"), [`stack`](numpy.stack#numpy.stack "numpy.stack") and [`block`](numpy.block#numpy.block "numpy.block") provide more general stacking and concatenation operations. Parameters **tup**sequence of ndarrays The arrays must have the same shape along all but the first axis. 
1-D arrays must have the same length. Returns **stacked**ndarray The array formed by stacking the given arrays, will be at least 2-D. See also [`concatenate`](numpy.concatenate#numpy.concatenate "numpy.concatenate") Join a sequence of arrays along an existing axis. [`stack`](numpy.stack#numpy.stack "numpy.stack") Join a sequence of arrays along a new axis. [`block`](numpy.block#numpy.block "numpy.block") Assemble an nd-array from nested lists of blocks. [`hstack`](numpy.hstack#numpy.hstack "numpy.hstack") Stack arrays in sequence horizontally (column wise). [`dstack`](numpy.dstack#numpy.dstack "numpy.dstack") Stack arrays in sequence depth wise (along third axis). [`column_stack`](numpy.column_stack#numpy.column_stack "numpy.column_stack") Stack 1-D arrays as columns into a 2-D array. [`vsplit`](numpy.vsplit#numpy.vsplit "numpy.vsplit") Split an array into multiple sub-arrays vertically (row-wise). #### Examples ``` >>> a = np.array([1, 2, 3]) >>> b = np.array([4, 5, 6]) >>> np.vstack((a,b)) array([[1, 2, 3], [4, 5, 6]]) ``` ``` >>> a = np.array([[1], [2], [3]]) >>> b = np.array([[4], [5], [6]]) >>> np.vstack((a,b)) array([[1], [2], [3], [4], [5], [6]]) ``` © 2005–2022 NumPy Developers Licensed under the 3-clause BSD License. <https://numpy.org/doc/1.23/reference/generated/numpy.vstack.htmlnumpy.r_ ========= numpy.r_*=<numpy.lib.index_tricks.RClass object>* Translates slice objects to concatenation along the first axis. This is a simple way to build up arrays quickly. There are two use cases. 1. If the index expression contains comma separated arrays, then stack them along their first axis. 2. If the index expression contains slice notation or scalars then create a 1-D array with a range indicated by the slice notation. If slice notation is used, the syntax `start:stop:step` is equivalent to `np.arange(start, stop, step)` inside of the brackets. However, if `step` is an imaginary number (i.e. 
100j) then its integer portion is interpreted as a number-of-points desired and the start and stop are inclusive. In other words `start:stop:stepj` is interpreted as `np.linspace(start, stop, step, endpoint=1)` inside of the brackets. After expansion of slice notation, all comma separated sequences are concatenated together. Optional character strings placed as the first element of the index expression can be used to change the output. The strings ‘r’ or ‘c’ result in matrix output. If the result is 1-D and ‘r’ is specified a 1 x N (row) matrix is produced. If the result is 1-D and ‘c’ is specified, then a N x 1 (column) matrix is produced. If the result is 2-D then both provide the same matrix result. A string integer specifies which axis to stack multiple comma separated arrays along. A string of two comma-separated integers allows indication of the minimum number of dimensions to force each entry into as the second integer (the axis to concatenate along is still the first integer). A string with three comma-separated integers allows specification of the axis to concatenate along, the minimum number of dimensions to force the entries to, and which axis should contain the start of the arrays which are less than the specified number of dimensions. In other words the third integer allows you to specify where the 1’s should be placed in the shape of the arrays that have their shapes upgraded. By default, they are placed in the front of the shape tuple. The third argument allows you to specify where the start of the array should be instead. Thus, a third argument of ‘0’ would place the 1’s at the end of the array shape. Negative integers specify where in the new shape tuple the last dimension of upgraded arrays should be placed, so the default is ‘-1’. Parameters **Not a function, so takes no parameters** Returns A concatenated ndarray or matrix. 
See also [`concatenate`](numpy.concatenate#numpy.concatenate "numpy.concatenate") Join a sequence of arrays along an existing axis. [`c_`](numpy.c_#numpy.c_ "numpy.c_") Translates slice objects to concatenation along the second axis. #### Examples ``` >>> np.r_[np.array([1,2,3]), 0, 0, np.array([4,5,6])] array([1, 2, 3, ..., 4, 5, 6]) >>> np.r_[-1:1:6j, [0]*3, 5, 6] array([-1. , -0.6, -0.2, 0.2, 0.6, 1. , 0. , 0. , 0. , 5. , 6. ]) ``` String integers specify the axis to concatenate along or the minimum number of dimensions to force entries into. ``` >>> a = np.array([[0, 1, 2], [3, 4, 5]]) >>> np.r_['-1', a, a] # concatenate along last axis array([[0, 1, 2, 0, 1, 2], [3, 4, 5, 3, 4, 5]]) >>> np.r_['0,2', [1,2,3], [4,5,6]] # concatenate along first axis, dim>=2 array([[1, 2, 3], [4, 5, 6]]) ``` ``` >>> np.r_['0,2,0', [1,2,3], [4,5,6]] array([[1], [2], [3], [4], [5], [6]]) >>> np.r_['1,2,0', [1,2,3], [4,5,6]] array([[1, 4], [2, 5], [3, 6]]) ``` Using ‘r’ or ‘c’ as a first string argument creates a matrix. ``` >>> np.r_['r',[1,2,3], [4,5,6]] matrix([[1, 2, 3, 4, 5, 6]]) ``` © 2005–2022 NumPy Developers Licensed under the 3-clause BSD License. <https://numpy.org/doc/1.23/reference/generated/numpy.r_.htmlnumpy.c_ ========= numpy.c_*=<numpy.lib.index_tricks.CClass object>* Translates slice objects to concatenation along the second axis. This is short-hand for `np.r_['-1,2,0', index expression]`, which is useful because of its common occurrence. In particular, arrays will be stacked along their last axis after being upgraded to at least 2-D with 1’s post-pended to the shape (column vectors made out of 1-D arrays). See also [`column_stack`](numpy.column_stack#numpy.column_stack "numpy.column_stack") Stack 1-D arrays as columns into a 2-D array. [`r_`](numpy.r_#numpy.r_ "numpy.r_") For more detailed documentation. 
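For 1-D inputs, `np.c_` behaves like `column_stack`, as a quick sketch shows:

```python
import numpy as np

a = np.array([1, 2, 3])
b = np.array([4, 5, 6])

# 1-D arrays become the columns of a 2-D result
assert np.array_equal(np.c_[a, b], np.column_stack((a, b)))
assert np.c_[a, b].shape == (3, 2)
```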
#### Examples ``` >>> np.c_[np.array([1,2,3]), np.array([4,5,6])] array([[1, 4], [2, 5], [3, 6]]) >>> np.c_[np.array([[1,2,3]]), 0, 0, np.array([[4,5,6]])] array([[1, 2, 3, ..., 4, 5, 6]]) ``` © 2005–2022 NumPy Developers Licensed under the 3-clause BSD License. <https://numpy.org/doc/1.23/reference/generated/numpy.c_.htmlnumpy.hsplit ============ numpy.hsplit(*ary*, *indices_or_sections*)[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/lib/shape_base.py#L881-L948) Split an array into multiple sub-arrays horizontally (column-wise). Please refer to the [`split`](numpy.split#numpy.split "numpy.split") documentation. [`hsplit`](#numpy.hsplit "numpy.hsplit") is equivalent to [`split`](numpy.split#numpy.split "numpy.split") with `axis=1`, the array is always split along the second axis except for 1-D arrays, where it is split at `axis=0`. See also [`split`](numpy.split#numpy.split "numpy.split") Split an array into multiple sub-arrays of equal size. #### Examples ``` >>> x = np.arange(16.0).reshape(4, 4) >>> x array([[ 0., 1., 2., 3.], [ 4., 5., 6., 7.], [ 8., 9., 10., 11.], [12., 13., 14., 15.]]) >>> np.hsplit(x, 2) [array([[ 0., 1.], [ 4., 5.], [ 8., 9.], [12., 13.]]), array([[ 2., 3.], [ 6., 7.], [10., 11.], [14., 15.]])] >>> np.hsplit(x, np.array([3, 6])) [array([[ 0., 1., 2.], [ 4., 5., 6.], [ 8., 9., 10.], [12., 13., 14.]]), array([[ 3.], [ 7.], [11.], [15.]]), array([], shape=(4, 0), dtype=float64)] ``` With a higher dimensional array the split is still along the second axis. ``` >>> x = np.arange(8.0).reshape(2, 2, 2) >>> x array([[[0., 1.], [2., 3.]], [[4., 5.], [6., 7.]]]) >>> np.hsplit(x, 2) [array([[[0., 1.]], [[4., 5.]]]), array([[[2., 3.]], [[6., 7.]]])] ``` With a 1-D array, the split is along axis 0. ``` >>> x = np.array([0, 1, 2, 3, 4, 5]) >>> np.hsplit(x, 2) [array([0, 1, 2]), array([3, 4, 5])] ``` © 2005–2022 NumPy Developers Licensed under the 3-clause BSD License. 
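The equivalence with `split` stated above can be checked directly: for arrays with at least two dimensions `hsplit` is `split` along `axis=1`, and for 1-D input it falls back to axis 0:

```python
import numpy as np

x = np.arange(16.0).reshape(4, 4)

left, right = np.hsplit(x, 2)
sleft, sright = np.split(x, 2, axis=1)
assert np.array_equal(left, sleft) and np.array_equal(right, sright)

# For 1-D input, the split is along axis 0
v = np.array([0, 1, 2, 3])
assert all(np.array_equal(p, q)
           for p, q in zip(np.hsplit(v, 2), np.split(v, 2, axis=0)))
```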
<https://numpy.org/doc/1.23/reference/generated/numpy.hsplit.htmlnumpy.vsplit ============ numpy.vsplit(*ary*, *indices_or_sections*)[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/lib/shape_base.py#L951-L997) Split an array into multiple sub-arrays vertically (row-wise). Please refer to the `split` documentation. `vsplit` is equivalent to `split` with `axis=0` (default), the array is always split along the first axis regardless of the array dimension. See also [`split`](numpy.split#numpy.split "numpy.split") Split an array into multiple sub-arrays of equal size. #### Examples ``` >>> x = np.arange(16.0).reshape(4, 4) >>> x array([[ 0., 1., 2., 3.], [ 4., 5., 6., 7.], [ 8., 9., 10., 11.], [12., 13., 14., 15.]]) >>> np.vsplit(x, 2) [array([[0., 1., 2., 3.], [4., 5., 6., 7.]]), array([[ 8., 9., 10., 11.], [12., 13., 14., 15.]])] >>> np.vsplit(x, np.array([3, 6])) [array([[ 0., 1., 2., 3.], [ 4., 5., 6., 7.], [ 8., 9., 10., 11.]]), array([[12., 13., 14., 15.]]), array([], shape=(0, 4), dtype=float64)] ``` With a higher dimensional array the split is still along the first axis. ``` >>> x = np.arange(8.0).reshape(2, 2, 2) >>> x array([[[0., 1.], [2., 3.]], [[4., 5.], [6., 7.]]]) >>> np.vsplit(x, 2) [array([[[0., 1.], [2., 3.]]]), array([[[4., 5.], [6., 7.]]])] ``` © 2005–2022 NumPy Developers Licensed under the 3-clause BSD License. <https://numpy.org/doc/1.23/reference/generated/numpy.vsplit.htmlnumpy.array_split ================== numpy.array_split(*ary*, *indices_or_sections*, *axis=0*)[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/lib/shape_base.py#L739-L792) Split an array into multiple sub-arrays. Please refer to the `split` documentation. The only difference between these functions is that `array_split` allows `indices_or_sections` to be an integer that does *not* equally divide the axis. For an array of length l that should be split into n sections, it returns l % n sub-arrays of size l//n + 1 and the rest of size l//n. 
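The sizing rule above (l % n sub-arrays of size l//n + 1, the rest of size l//n) can be checked directly:

```python
import numpy as np

# Split length l = 10 into n = 3 sections that do not divide evenly:
# l % n = 1 section of size l//n + 1 = 4, the remaining 2 of size l//n = 3.
l, n = 10, 3
parts = np.array_split(np.arange(l), n)
sizes = [p.size for p in parts]
assert sizes == [4, 3, 3]
assert sum(sizes) == l  # nothing dropped or duplicated
```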
See also [`split`](numpy.split#numpy.split "numpy.split") Split array into multiple sub-arrays of equal size. #### Examples ``` >>> x = np.arange(8.0) >>> np.array_split(x, 3) [array([0., 1., 2.]), array([3., 4., 5.]), array([6., 7.])] ``` ``` >>> x = np.arange(9) >>> np.array_split(x, 4) [array([0, 1, 2]), array([3, 4]), array([5, 6]), array([7, 8])] ``` © 2005–2022 NumPy Developers Licensed under the 3-clause BSD License. <https://numpy.org/doc/1.23/reference/generated/numpy.array_split.htmlnumpy.copy ========== numpy.copy(*a*, *order='K'*, *subok=False*)[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/lib/function_base.py#L870-L959) Return an array copy of the given object. Parameters **a**array_like Input data. **order**{‘C’, ‘F’, ‘A’, ‘K’}, optional Controls the memory layout of the copy. ‘C’ means C-order, ‘F’ means F-order, ‘A’ means ‘F’ if `a` is Fortran contiguous, ‘C’ otherwise. ‘K’ means match the layout of `a` as closely as possible. (Note that this function and [`ndarray.copy`](numpy.ndarray.copy#numpy.ndarray.copy "numpy.ndarray.copy") are very similar, but have different default values for their order= arguments.) **subok**bool, optional If True, then sub-classes will be passed-through, otherwise the returned array will be forced to be a base-class array (defaults to False). New in version 1.19.0. Returns **arr**ndarray Array interpretation of `a`. See also [`ndarray.copy`](numpy.ndarray.copy#numpy.ndarray.copy "numpy.ndarray.copy") Preferred method for creating an array copy #### Notes This is equivalent to: ``` >>> np.array(a, copy=True) ``` #### Examples Create an array x, with a reference y and a copy z: ``` >>> x = np.array([1, 2, 3]) >>> y = x >>> z = np.copy(x) ``` Note that, when we modify x, y changes, but not z: ``` >>> x[0] = 10 >>> x[0] == y[0] True >>> x[0] == z[0] False ``` Note that, np.copy clears previously set WRITEABLE=False flag. 
``` >>> a = np.array([1, 2, 3]) >>> a.flags["WRITEABLE"] = False >>> b = np.copy(a) >>> b.flags["WRITEABLE"] True >>> b[0] = 3 >>> b array([3, 2, 3]) ``` Note that np.copy is a shallow copy and will not copy object elements within arrays. This is mainly important for arrays containing Python objects. The new array will contain the same object which may lead to surprises if that object can be modified (is mutable): ``` >>> a = np.array([1, 'm', [2, 3, 4]], dtype=object) >>> b = np.copy(a) >>> b[2][0] = 10 >>> a array([1, 'm', list([10, 3, 4])], dtype=object) ``` To ensure all elements within an `object` array are copied, use [`copy.deepcopy`](https://docs.python.org/3/library/copy.html#copy.deepcopy "(in Python v3.10)"): ``` >>> import copy >>> a = np.array([1, 'm', [2, 3, 4]], dtype=object) >>> c = copy.deepcopy(a) >>> c[2][0] = 10 >>> c array([1, 'm', list([10, 3, 4])], dtype=object) >>> a array([1, 'm', list([2, 3, 4])], dtype=object) ``` © 2005–2022 NumPy Developers Licensed under the 3-clause BSD License. <https://numpy.org/doc/1.23/reference/generated/numpy.copy.htmlnumpy.eye ========= numpy.eye(*N*, *M=None*, *k=0*, *dtype=<class 'float'>*, *order='C'*, ***, *like=None*)[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/lib/twodim_base.py#L162-L228) Return a 2-D array with ones on the diagonal and zeros elsewhere. Parameters **N**int Number of rows in the output. **M**int, optional Number of columns in the output. If None, defaults to `N`. **k**int, optional Index of the diagonal: 0 (the default) refers to the main diagonal, a positive value refers to an upper diagonal, and a negative value to a lower diagonal. **dtype**data-type, optional Data-type of the returned array. **order**{‘C’, ‘F’}, optional Whether the output should be stored in row-major (C-style) or column-major (Fortran-style) order in memory. New in version 1.14.0. **like**array_like, optional Reference object to allow the creation of arrays which are not NumPy arrays. 
If an array-like passed in as `like` supports the `__array_function__` protocol, the result will be defined by it. In this case, it ensures the creation of an array object compatible with that passed in via this argument. New in version 1.20.0. Returns **I**ndarray of shape (N,M) An array where all elements are equal to zero, except for the `k`-th diagonal, whose values are equal to one. See also [`identity`](numpy.identity#numpy.identity "numpy.identity") (almost) equivalent function [`diag`](numpy.diag#numpy.diag "numpy.diag") diagonal 2-D array from a 1-D array specified by the user. #### Examples ``` >>> np.eye(2, dtype=int) array([[1, 0], [0, 1]]) >>> np.eye(3, k=1) array([[0., 1., 0.], [0., 0., 1.], [0., 0., 0.]]) ``` © 2005–2022 NumPy Developers Licensed under the 3-clause BSD License. <https://numpy.org/doc/1.23/reference/generated/numpy.eye.htmlnumpy.identity ============== numpy.identity(*n*, *dtype=None*, ***, *like=None*)[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/core/numeric.py#L2145-L2182) Return the identity array. The identity array is a square array with ones on the main diagonal. Parameters **n**int Number of rows (and columns) in `n` x `n` output. **dtype**data-type, optional Data-type of the output. Defaults to `float`. **like**array_like, optional Reference object to allow the creation of arrays which are not NumPy arrays. If an array-like passed in as `like` supports the `__array_function__` protocol, the result will be defined by it. In this case, it ensures the creation of an array object compatible with that passed in via this argument. New in version 1.20.0. Returns **out**ndarray `n` x `n` array with its main diagonal set to one, and all other elements 0. #### Examples ``` >>> np.identity(3) array([[1., 0., 0.], [0., 1., 0.], [0., 0., 1.]]) ``` © 2005–2022 NumPy Developers Licensed under the 3-clause BSD License. 
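To illustrate the "(almost) equivalent" relationship noted in the See also entry: `identity(n)` produces the same array as `eye(n)`, while `eye` is the more general of the two, additionally accepting a rectangular shape via `M` and an offset diagonal via `k`:

```python
import numpy as np

# identity(n) matches eye(n) exactly
assert np.array_equal(np.identity(3), np.eye(3))

# eye alone supports rectangular output and shifted diagonals
assert np.eye(2, 3, k=1).tolist() == [[0.0, 1.0, 0.0], [0.0, 0.0, 1.0]]
```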
<https://numpy.org/doc/1.23/reference/generated/numpy.identity.htmlnumpy.logspace ============== numpy.logspace(*start*, *stop*, *num=50*, *endpoint=True*, *base=10.0*, *dtype=None*, *axis=0*)[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/core/function_base.py#L183-L278) Return numbers spaced evenly on a log scale. In linear space, the sequence starts at `base ** start` (`base` to the power of `start`) and ends with `base ** stop` (see `endpoint` below). Changed in version 1.16.0: Non-scalar `start` and `stop` are now supported. Parameters **start**array_like `base ** start` is the starting value of the sequence. **stop**array_like `base ** stop` is the final value of the sequence, unless `endpoint` is False. In that case, `num + 1` values are spaced over the interval in log-space, of which all but the last (a sequence of length `num`) are returned. **num**integer, optional Number of samples to generate. Default is 50. **endpoint**boolean, optional If true, `stop` is the last sample. Otherwise, it is not included. Default is True. **base**array_like, optional The base of the log space. The step size between the elements in `ln(samples) / ln(base)` (or `log_base(samples)`) is uniform. Default is 10.0. **dtype**dtype The type of the output array. If [`dtype`](numpy.dtype#numpy.dtype "numpy.dtype") is not given, the data type is inferred from `start` and `stop`. The inferred type will never be an integer; [`float`](https://docs.python.org/3/library/functions.html#float "(in Python v3.10)") is chosen even if the arguments would produce an array of integers. **axis**int, optional The axis in the result to store the samples. Relevant only if start or stop are array-like. By default (0), the samples will be along a new axis inserted at the beginning. Use -1 to get an axis at the end. New in version 1.16.0. Returns **samples**ndarray `num` samples, equally spaced on a log scale. 
See also [`arange`](numpy.arange#numpy.arange "numpy.arange") Similar to linspace, with the step size specified instead of the number of samples. Note that, when used with a float endpoint, the endpoint may or may not be included. [`linspace`](numpy.linspace#numpy.linspace "numpy.linspace") Similar to logspace, but with the samples uniformly distributed in linear space, instead of log space. [`geomspace`](numpy.geomspace#numpy.geomspace "numpy.geomspace") Similar to logspace, but with endpoints specified directly. #### Notes Logspace is equivalent to the code ``` >>> y = np.linspace(start, stop, num=num, endpoint=endpoint) ... >>> power(base, y).astype(dtype) ... ``` #### Examples ``` >>> np.logspace(2.0, 3.0, num=4) array([ 100. , 215.443469 , 464.15888336, 1000. ]) >>> np.logspace(2.0, 3.0, num=4, endpoint=False) array([100. , 177.827941 , 316.22776602, 562.34132519]) >>> np.logspace(2.0, 3.0, num=4, base=2.0) array([4. , 5.0396842 , 6.34960421, 8. ]) ``` Graphical illustration: ``` >>> import matplotlib.pyplot as plt >>> N = 10 >>> x1 = np.logspace(0.1, 1, N, endpoint=True) >>> x2 = np.logspace(0.1, 1, N, endpoint=False) >>> y = np.zeros(N) >>> plt.plot(x1, y, 'o') [<matplotlib.lines.Line2D object at 0x...>] >>> plt.plot(x2, y + 0.5, 'o') [<matplotlib.lines.Line2D object at 0x...>] >>> plt.ylim([-0.5, 1]) (-0.5, 1) >>> plt.show() ``` ![../../_images/numpy-logspace-1.png] © 2005–2022 NumPy Developers Licensed under the 3-clause BSD License. <https://numpy.org/doc/1.23/reference/generated/numpy.logspace.htmlnumpy.mgrid =========== numpy.mgrid*=<numpy.lib.index_tricks.MGridClass object>* `nd_grid` instance which returns a dense multi-dimensional “meshgrid”. An instance of `numpy.lib.index_tricks.nd_grid` which returns an dense (or fleshed out) mesh-grid when indexed, so that each returned argument has the same shape. The dimensions and number of the output arrays are equal to the number of indexing dimensions. 
If the step length is not a complex number, then the stop is not inclusive. However, if the step length is a **complex number** (e.g. 5j), then the integer part of its magnitude is interpreted as specifying the number of points to create between the start and stop values, where the stop value **is inclusive**. Returns mesh-grid `ndarrays` all of the same dimensions See also `lib.index_tricks.nd_grid` class of [`ogrid`](numpy.ogrid#numpy.ogrid "numpy.ogrid") and [`mgrid`](#numpy.mgrid "numpy.mgrid") objects [`ogrid`](numpy.ogrid#numpy.ogrid "numpy.ogrid") like mgrid but returns open (not fleshed out) mesh grids [`r_`](numpy.r_#numpy.r_ "numpy.r_") array concatenator #### Examples ``` >>> np.mgrid[0:5, 0:5] array([[[0, 0, 0, 0, 0], [1, 1, 1, 1, 1], [2, 2, 2, 2, 2], [3, 3, 3, 3, 3], [4, 4, 4, 4, 4]], [[0, 1, 2, 3, 4], [0, 1, 2, 3, 4], [0, 1, 2, 3, 4], [0, 1, 2, 3, 4], [0, 1, 2, 3, 4]]]) >>> np.mgrid[-1:1:5j] array([-1. , -0.5, 0. , 0.5, 1. ]) ``` © 2005–2022 NumPy Developers Licensed under the 3-clause BSD License. <https://numpy.org/doc/1.23/reference/generated/numpy.mgrid.htmlnumpy.ogrid =========== numpy.ogrid*=<numpy.lib.index_tricks.OGridClass object>* `nd_grid` instance which returns an open multi-dimensional “meshgrid”. An instance of `numpy.lib.index_tricks.nd_grid` which returns an open (i.e. not fleshed out) mesh-grid when indexed, so that only one dimension of each returned array is greater than 1. The dimension and number of the output arrays are equal to the number of indexing dimensions. If the step length is not a complex number, then the stop is not inclusive. However, if the step length is a **complex number** (e.g. 5j), then the integer part of its magnitude is interpreted as specifying the number of points to create between the start and stop values, where the stop value **is inclusive**. 
Returns mesh-grid `ndarrays` with only one dimension not equal to 1 See also `np.lib.index_tricks.nd_grid` class of [`ogrid`](#numpy.ogrid "numpy.ogrid") and [`mgrid`](numpy.mgrid#numpy.mgrid "numpy.mgrid") objects [`mgrid`](numpy.mgrid#numpy.mgrid "numpy.mgrid") like [`ogrid`](#numpy.ogrid "numpy.ogrid") but returns dense (or fleshed out) mesh grids [`r_`](numpy.r_#numpy.r_ "numpy.r_") array concatenator #### Examples ``` >>> from numpy import ogrid >>> ogrid[-1:1:5j] array([-1. , -0.5, 0. , 0.5, 1. ]) >>> ogrid[0:5,0:5] [array([[0], [1], [2], [3], [4]]), array([[0, 1, 2, 3, 4]])] ``` © 2005–2022 NumPy Developers Licensed under the 3-clause BSD License. <https://numpy.org/doc/1.23/reference/generated/numpy.ogrid.htmlnumpy.ndarray.astype ==================== method ndarray.astype(*dtype*, *order='K'*, *casting='unsafe'*, *subok=True*, *copy=True*) Copy of the array, cast to a specified type. Parameters **dtype**str or dtype Typecode or data-type to which the array is cast. **order**{‘C’, ‘F’, ‘A’, ‘K’}, optional Controls the memory layout order of the result. ‘C’ means C order, ‘F’ means Fortran order, ‘A’ means ‘F’ order if all the arrays are Fortran contiguous, ‘C’ order otherwise, and ‘K’ means as close to the order the array elements appear in memory as possible. Default is ‘K’. **casting**{‘no’, ‘equiv’, ‘safe’, ‘same_kind’, ‘unsafe’}, optional Controls what kind of data casting may occur. Defaults to ‘unsafe’ for backwards compatibility. * ‘no’ means the data types should not be cast at all. * ‘equiv’ means only byte-order changes are allowed. * ‘safe’ means only casts which can preserve values are allowed. * ‘same_kind’ means only safe casts or casts within a kind, like float64 to float32, are allowed. * ‘unsafe’ means any data conversions may be done. **subok**bool, optional If True, then sub-classes will be passed-through (default), otherwise the returned array will be forced to be a base-class array. 
**copy**bool, optional By default, astype always returns a newly allocated array. If this is set to false, and the [`dtype`](numpy.dtype#numpy.dtype "numpy.dtype"), `order`, and `subok` requirements are satisfied, the input array is returned instead of a copy. Returns **arr_t**ndarray Unless [`copy`](numpy.copy#numpy.copy "numpy.copy") is False and the other conditions for returning the input array are satisfied (see description for [`copy`](numpy.copy#numpy.copy "numpy.copy") input parameter), `arr_t` is a new array of the same shape as the input array, with dtype, order given by [`dtype`](numpy.dtype#numpy.dtype "numpy.dtype"), `order`. Raises ComplexWarning When casting from complex to float or int. To avoid this, one should use `a.real.astype(t)`. #### Notes Changed in version 1.17.0: Casting between a simple data type and a structured one is possible only for “unsafe” casting. Casting to multiple fields is allowed, but casting from multiple fields is not. Changed in version 1.9.0: Casting from numeric to string types in ‘safe’ casting mode requires that the string dtype length is long enough to store the max integer/float value converted. #### Examples ``` >>> x = np.array([1, 2, 2.5]) >>> x array([1. , 2. , 2.5]) ``` ``` >>> x.astype(int) array([1, 2, 2]) ``` © 2005–2022 NumPy Developers Licensed under the 3-clause BSD License. <https://numpy.org/doc/1.23/reference/generated/numpy.ndarray.astype.htmlnumpy.atleast_1d ================= numpy.atleast_1d(**arys*)[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/core/shape_base.py#L23-L74) Convert inputs to arrays with at least one dimension. Scalar inputs are converted to 1-dimensional arrays, whilst higher-dimensional inputs are preserved. Parameters **arys1, arys2, 
**array_like One or more input arrays. Returns **ret**ndarray An array, or list of arrays, each with `a.ndim >= 1`. Copies are made only if necessary. See also [`atleast_2d`](numpy.atleast_2d#numpy.atleast_2d "numpy.atleast_2d"), [`atleast_3d`](numpy.atleast_3d#numpy.atleast_3d "numpy.atleast_3d") #### Examples ``` >>> np.atleast_1d(1.0) array([1.]) ``` ``` >>> x = np.arange(9.0).reshape(3,3) >>> np.atleast_1d(x) array([[0., 1., 2.], [3., 4., 5.], [6., 7., 8.]]) >>> np.atleast_1d(x) is x True ``` ``` >>> np.atleast_1d(1, [3, 4]) [array([1]), array([3, 4])] ``` © 2005–2022 NumPy Developers Licensed under the 3-clause BSD License. <https://numpy.org/doc/1.23/reference/generated/numpy.atleast_1d.htmlnumpy.atleast_2d ================= numpy.atleast_2d(**arys*)[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/core/shape_base.py#L81-L132) View inputs as arrays with at least two dimensions. Parameters **arys1, arys2, 
…**array_like One or more array-like sequences. Non-array inputs are converted to arrays. Arrays that already have two or more dimensions are preserved. Returns **res, res2, …
**ndarray An array, or list of arrays, each with `a.ndim >= 2`. Copies are avoided where possible, and views with two or more dimensions are returned. See also [`atleast_1d`](numpy.atleast_1d#numpy.atleast_1d "numpy.atleast_1d"), [`atleast_3d`](numpy.atleast_3d#numpy.atleast_3d "numpy.atleast_3d") #### Examples ``` >>> np.atleast_2d(3.0) array([[3.]]) ``` ``` >>> x = np.arange(3.0) >>> np.atleast_2d(x) array([[0., 1., 2.]]) >>> np.atleast_2d(x).base is x True ``` ``` >>> np.atleast_2d(1, [1, 2], [[1, 2]]) [array([[1]]), array([[1, 2]]), array([[1, 2]])] ``` © 2005–2022 NumPy Developers Licensed under the 3-clause BSD License. <https://numpy.org/doc/1.23/reference/generated/numpy.atleast_2d.htmlnumpy.atleast_3d ================= numpy.atleast_3d(**arys*)[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/core/shape_base.py#L139-L204) View inputs as arrays with at least three dimensions. Parameters **arys1, arys2, 
…**array_like One or more array-like sequences. Non-array inputs are converted to arrays. Arrays that already have three or more dimensions are preserved. Returns **res1, res2, …
**ndarray An array, or list of arrays, each with `a.ndim >= 3`. Copies are avoided where possible, and views with three or more dimensions are returned. For example, a 1-D array of shape `(N,)` becomes a view of shape `(1, N, 1)`, and a 2-D array of shape `(M, N)` becomes a view of shape `(M, N, 1)`. See also [`atleast_1d`](numpy.atleast_1d#numpy.atleast_1d "numpy.atleast_1d"), [`atleast_2d`](numpy.atleast_2d#numpy.atleast_2d "numpy.atleast_2d") #### Examples ``` >>> np.atleast_3d(3.0) array([[[3.]]]) ``` ``` >>> x = np.arange(3.0) >>> np.atleast_3d(x).shape (1, 3, 1) ``` ``` >>> x = np.arange(12.0).reshape(4,3) >>> np.atleast_3d(x).shape (4, 3, 1) >>> np.atleast_3d(x).base is x.base # x is a reshape, so not base itself True ``` ``` >>> for arr in np.atleast_3d([1, 2], [[1, 2]], [[[1, 2]]]): ... print(arr, arr.shape) ... [[[1] [2]]] (1, 2, 1) [[[1] [2]]] (1, 2, 1) [[[1 2]]] (1, 1, 2) ``` © 2005–2022 NumPy Developers Licensed under the 3-clause BSD License. <https://numpy.org/doc/1.23/reference/generated/numpy.atleast_3d.htmlnumpy.mat ========= numpy.mat(*data*, *dtype=None*)[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/matrixlib/defmatrix.py#L36-L69) Interpret the input as a matrix. Unlike [`matrix`](numpy.matrix#numpy.matrix "numpy.matrix"), [`asmatrix`](numpy.asmatrix#numpy.asmatrix "numpy.asmatrix") does not make a copy if the input is already a matrix or an ndarray. Equivalent to `matrix(data, copy=False)`. Parameters **data**array_like Input data. **dtype**data-type Data-type of the output matrix. Returns **mat**matrix `data` interpreted as a matrix. #### Examples ``` >>> x = np.array([[1, 2], [3, 4]]) ``` ``` >>> m = np.asmatrix(x) ``` ``` >>> x[0,0] = 5 ``` ``` >>> m matrix([[5, 2], [3, 4]]) ``` © 2005–2022 NumPy Developers Licensed under the 3-clause BSD License. 
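The no-copy behaviour described above is the key difference between `asmatrix` and the `matrix` constructor; a small sketch (note that `np.matrix` is pending deprecation and may warn):

```python
import numpy as np

x = np.array([[1, 2], [3, 4]])
m_view = np.asmatrix(x)   # no copy: shares data with x
m_copy = np.matrix(x)     # matrix(data) copies by default
x[0, 0] = 99
assert m_view[0, 0] == 99  # view sees the change
assert m_copy[0, 0] == 1   # copy does not
```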
<https://numpy.org/doc/1.23/reference/generated/numpy.mat.htmlnumpy.diagonal ============== numpy.diagonal(*a*, *offset=0*, *axis1=0*, *axis2=1*)[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/core/fromnumeric.py#L1552-L1679) Return specified diagonals. If `a` is 2-D, returns the diagonal of `a` with the given offset, i.e., the collection of elements of the form `a[i, i+offset]`. If `a` has more than two dimensions, then the axes specified by `axis1` and `axis2` are used to determine the 2-D sub-array whose diagonal is returned. The shape of the resulting array can be determined by removing `axis1` and `axis2` and appending an index to the right equal to the size of the resulting diagonals. In versions of NumPy prior to 1.7, this function always returned a new, independent array containing a copy of the values in the diagonal. In NumPy 1.7 and 1.8, it continues to return a copy of the diagonal, but depending on this fact is deprecated. Writing to the resulting array continues to work as it used to, but a FutureWarning is issued. Starting in NumPy 1.9 it returns a read-only view on the original array. Attempting to write to the resulting array will produce an error. In some future release, it will return a read/write view and writing to the returned array will alter your original array. The returned array will have the same type as the input array. If you don’t write to the array returned by this function, then you can just ignore all of the above. If you depend on the current behavior, then we suggest copying the returned array explicitly, i.e., use `np.diagonal(a).copy()` instead of just `np.diagonal(a)`. This will work with both past and future versions of NumPy. Parameters **a**array_like Array from which the diagonals are taken. **offset**int, optional Offset of the diagonal from the main diagonal. Can be positive or negative. Defaults to main diagonal (0). 
**axis1**int, optional Axis to be used as the first axis of the 2-D sub-arrays from which the diagonals should be taken. Defaults to first axis (0). **axis2**int, optional Axis to be used as the second axis of the 2-D sub-arrays from which the diagonals should be taken. Defaults to second axis (1). Returns **array_of_diagonals**ndarray If `a` is 2-D, then a 1-D array containing the diagonal and of the same type as `a` is returned unless `a` is a [`matrix`](numpy.matrix#numpy.matrix "numpy.matrix"), in which case a 1-D array rather than a (2-D) [`matrix`](numpy.matrix#numpy.matrix "numpy.matrix") is returned in order to maintain backward compatibility. If `a.ndim > 2`, then the dimensions specified by `axis1` and `axis2` are removed, and a new axis inserted at the end corresponding to the diagonal. Raises ValueError If the dimension of `a` is less than 2. See also [`diag`](numpy.diag#numpy.diag "numpy.diag") MATLAB work-a-like for 1-D and 2-D arrays. [`diagflat`](numpy.diagflat#numpy.diagflat "numpy.diagflat") Create diagonal arrays. [`trace`](numpy.trace#numpy.trace "numpy.trace") Sum along diagonals. #### Examples ``` >>> a = np.arange(4).reshape(2,2) >>> a array([[0, 1], [2, 3]]) >>> a.diagonal() array([0, 3]) >>> a.diagonal(1) array([1]) ``` A 3-D example: ``` >>> a = np.arange(8).reshape(2,2,2); a array([[[0, 1], [2, 3]], [[4, 5], [6, 7]]]) >>> a.diagonal(0, # Main diagonals of two arrays created by skipping ... 0, # across the outer(left)-most axis last and ... 1) # the "middle" (row) axis first. array([[0, 6], [1, 7]]) ``` The sub-arrays whose main diagonals we just obtained; note that each corresponds to fixing the right-most (column) axis, and that the diagonals are “packed” in rows. 
``` >>> a[:,:,0] # main diagonal is [0 6] array([[0, 2], [4, 6]]) >>> a[:,:,1] # main diagonal is [1 7] array([[1, 3], [5, 7]]) ``` The anti-diagonal can be obtained by reversing the order of elements using either [`numpy.flipud`](numpy.flipud#numpy.flipud "numpy.flipud") or [`numpy.fliplr`](numpy.fliplr#numpy.fliplr "numpy.fliplr"). ``` >>> a = np.arange(9).reshape(3, 3) >>> a array([[0, 1, 2], [3, 4, 5], [6, 7, 8]]) >>> np.fliplr(a).diagonal() # Horizontal flip array([2, 4, 6]) >>> np.flipud(a).diagonal() # Vertical flip array([6, 4, 2]) ``` Note that the order in which the diagonal is retrieved varies depending on the flip function. © 2005–2022 NumPy Developers Licensed under the 3-clause BSD License. <https://numpy.org/doc/1.23/reference/generated/numpy.diagonal.htmlnumpy.dsplit ============ numpy.dsplit(*ary*, *indices_or_sections*)[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/lib/shape_base.py#L1000-L1042) Split array into multiple sub-arrays along the 3rd axis (depth). Please refer to the [`split`](numpy.split#numpy.split "numpy.split") documentation. [`dsplit`](#numpy.dsplit "numpy.dsplit") is equivalent to [`split`](numpy.split#numpy.split "numpy.split") with `axis=2`, the array is always split along the third axis provided the array dimension is greater than or equal to 3. See also [`split`](numpy.split#numpy.split "numpy.split") Split an array into multiple sub-arrays of equal size. #### Examples ``` >>> x = np.arange(16.0).reshape(2, 2, 4) >>> x array([[[ 0., 1., 2., 3.], [ 4., 5., 6., 7.]], [[ 8., 9., 10., 11.], [12., 13., 14., 15.]]]) >>> np.dsplit(x, 2) [array([[[ 0., 1.], [ 4., 5.]], [[ 8., 9.], [12., 13.]]]), array([[[ 2., 3.], [ 6., 7.]], [[10., 11.], [14., 15.]]])] >>> np.dsplit(x, np.array([3, 6])) [array([[[ 0., 1., 2.], [ 4., 5., 6.]], [[ 8., 9., 10.], [12., 13., 14.]]]), array([[[ 3.], [ 7.]], [[11.], [15.]]]), array([], shape=(2, 2, 0), dtype=float64)] ``` © 2005–2022 NumPy Developers Licensed under the 3-clause BSD License. 
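As with `hsplit` and `vsplit`, the equivalence between `dsplit` and `split` with `axis=2` can be verified directly:

```python
import numpy as np

x = np.arange(16.0).reshape(2, 2, 4)
parts = np.dsplit(x, 2)
ref = np.split(x, 2, axis=2)
assert all(np.array_equal(a, b) for a, b in zip(parts, ref))
```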
<https://numpy.org/doc/1.23/reference/generated/numpy.dsplit.htmlnumpy.dstack ============ numpy.dstack(*tup*)[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/lib/shape_base.py#L663-L723) Stack arrays in sequence depth wise (along third axis). This is equivalent to concatenation along the third axis after 2-D arrays of shape `(M,N)` have been reshaped to `(M,N,1)` and 1-D arrays of shape `(N,)` have been reshaped to `(1,N,1)`. Rebuilds arrays divided by [`dsplit`](numpy.dsplit#numpy.dsplit "numpy.dsplit"). This function makes most sense for arrays with up to 3 dimensions. For instance, for pixel-data with a height (first axis), width (second axis), and r/g/b channels (third axis). The functions [`concatenate`](numpy.concatenate#numpy.concatenate "numpy.concatenate"), [`stack`](numpy.stack#numpy.stack "numpy.stack") and [`block`](numpy.block#numpy.block "numpy.block") provide more general stacking and concatenation operations. Parameters **tup**sequence of arrays The arrays must have the same shape along all but the third axis. 1-D or 2-D arrays must have the same shape. Returns **stacked**ndarray The array formed by stacking the given arrays, will be at least 3-D. See also [`concatenate`](numpy.concatenate#numpy.concatenate "numpy.concatenate") Join a sequence of arrays along an existing axis. [`stack`](numpy.stack#numpy.stack "numpy.stack") Join a sequence of arrays along a new axis. [`block`](numpy.block#numpy.block "numpy.block") Assemble an nd-array from nested lists of blocks. [`vstack`](numpy.vstack#numpy.vstack "numpy.vstack") Stack arrays in sequence vertically (row wise). [`hstack`](numpy.hstack#numpy.hstack "numpy.hstack") Stack arrays in sequence horizontally (column wise). [`column_stack`](numpy.column_stack#numpy.column_stack "numpy.column_stack") Stack 1-D arrays as columns into a 2-D array. [`dsplit`](numpy.dsplit#numpy.dsplit "numpy.dsplit") Split array along third axis. 
#### Examples ``` >>> a = np.array((1,2,3)) >>> b = np.array((2,3,4)) >>> np.dstack((a,b)) array([[[1, 2], [2, 3], [3, 4]]]) ``` ``` >>> a = np.array([[1],[2],[3]]) >>> b = np.array([[2],[3],[4]]) >>> np.dstack((a,b)) array([[[1, 2]], [[2, 3]], [[3, 4]]]) ``` © 2005–2022 NumPy Developers Licensed under the 3-clause BSD License. <https://numpy.org/doc/1.23/reference/generated/numpy.dstack.htmlnumpy.ndarray.item ================== method ndarray.item(**args*) Copy an element of an array to a standard Python scalar and return it. Parameters ***args**Arguments (variable number and type) * none: in this case, the method only works for arrays with one element (`a.size == 1`), which element is copied into a standard Python scalar object and returned. * int_type: this argument is interpreted as a flat index into the array, specifying which element to copy and return. * tuple of int_types: functions as does a single int_type argument, except that the argument is interpreted as an nd-index into the array. Returns **z**Standard Python scalar object A copy of the specified element of the array as a suitable Python scalar #### Notes When the data type of `a` is longdouble or clongdouble, item() returns a scalar array object because there is no available Python scalar that would not lose information. Void arrays return a buffer object for item(), unless fields are defined, in which case a tuple is returned. [`item`](#numpy.ndarray.item "numpy.ndarray.item") is very similar to a[args], except, instead of an array scalar, a standard Python scalar is returned. This can be useful for speeding up access to elements of the array and doing arithmetic on elements of the array using Python’s optimized math. #### Examples ``` >>> np.random.seed(123) >>> x = np.random.randint(9, size=(3, 3)) >>> x array([[2, 2, 6], [1, 3, 6], [1, 0, 1]]) >>> x.item(3) 1 >>> x.item(7) 0 >>> x.item((0, 1)) 2 >>> x.item((2, 2)) 1 ``` © 2005–2022 NumPy Developers Licensed under the 3-clause BSD License. 
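The distinction drawn in the Notes above, that `item` returns a standard Python scalar while plain indexing returns an array scalar, can be seen by inspecting the types:

```python
import numpy as np

x = np.array([[1, 2], [3, 4]])
v = x.item(3)      # flat index -> standard Python int
assert v == 4 and type(v) is int
w = x[1, 1]        # plain indexing -> NumPy array scalar
assert isinstance(w, np.integer)
```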
<https://numpy.org/doc/1.23/reference/generated/numpy.ndarray.item.htmlnumpy.repeat ============ numpy.repeat(*a*, *repeats*, *axis=None*)[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/core/fromnumeric.py#L436-L479) Repeat elements of an array. Parameters **a**array_like Input array. **repeats**int or array of ints The number of repetitions for each element. `repeats` is broadcasted to fit the shape of the given axis. **axis**int, optional The axis along which to repeat values. By default, use the flattened input array, and return a flat output array. Returns **repeated_array**ndarray Output array which has the same shape as `a`, except along the given axis. See also [`tile`](numpy.tile#numpy.tile "numpy.tile") Tile an array. [`unique`](numpy.unique#numpy.unique "numpy.unique") Find the unique elements of an array. #### Examples ``` >>> np.repeat(3, 4) array([3, 3, 3, 3]) >>> x = np.array([[1,2],[3,4]]) >>> np.repeat(x, 2) array([1, 1, 2, 2, 3, 3, 4, 4]) >>> np.repeat(x, 3, axis=1) array([[1, 1, 1, 2, 2, 2], [3, 3, 3, 4, 4, 4]]) >>> np.repeat(x, [1, 2], axis=0) array([[1, 2], [3, 4], [3, 4]]) ``` © 2005–2022 NumPy Developers Licensed under the 3-clause BSD License. <https://numpy.org/doc/1.23/reference/generated/numpy.repeat.htmlnumpy.squeeze ============= numpy.squeeze(*a*, *axis=None*)[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/core/fromnumeric.py#L1478-L1545) Remove axes of length one from `a`. Parameters **a**array_like Input data. **axis**None or int or tuple of ints, optional New in version 1.7.0. Selects a subset of the entries of length one in the shape. If an axis is selected with shape entry greater than one, an error is raised. Returns **squeezed**ndarray The input array, but with all or a subset of the dimensions of length 1 removed. This is always `a` itself or a view into `a`. Note that if all axes are squeezed, the result is a 0d array and not a scalar. 
Raises ValueError If `axis` is not None, and an axis being squeezed is not of length 1 See also [`expand_dims`](numpy.expand_dims#numpy.expand_dims "numpy.expand_dims") The inverse operation, adding entries of length one [`reshape`](numpy.reshape#numpy.reshape "numpy.reshape") Insert, remove, and combine dimensions, and resize existing ones #### Examples ``` >>> x = np.array([[[0], [1], [2]]]) >>> x.shape (1, 3, 1) >>> np.squeeze(x).shape (3,) >>> np.squeeze(x, axis=0).shape (3, 1) >>> np.squeeze(x, axis=1).shape Traceback (most recent call last): ... ValueError: cannot select an axis to squeeze out which has size not equal to one >>> np.squeeze(x, axis=2).shape (1, 3) >>> x = np.array([[1234]]) >>> x.shape (1, 1) >>> np.squeeze(x) array(1234) # 0d array >>> np.squeeze(x).shape () >>> np.squeeze(x)[()] 1234 ``` © 2005–2022 NumPy Developers Licensed under the 3-clause BSD License. <https://numpy.org/doc/1.23/reference/generated/numpy.squeeze.htmlnumpy.swapaxes ============== numpy.swapaxes(*a*, *axis1*, *axis2*)[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/core/fromnumeric.py#L550-L594) Interchange two axes of an array. Parameters **a**array_like Input array. **axis1**int First axis. **axis2**int Second axis. Returns **a_swapped**ndarray For NumPy >= 1.10.0, if `a` is an ndarray, then a view of `a` is returned; otherwise a new array is created. For earlier NumPy versions a view of `a` is returned only if the order of the axes is changed, otherwise the input array is returned. #### Examples ``` >>> x = np.array([[1,2,3]]) >>> np.swapaxes(x,0,1) array([[1], [2], [3]]) ``` ``` >>> x = np.array([[[0,1],[2,3]],[[4,5],[6,7]]]) >>> x array([[[0, 1], [2, 3]], [[4, 5], [6, 7]]]) ``` ``` >>> np.swapaxes(x,0,2) array([[[0, 4], [2, 6]], [[1, 5], [3, 7]]]) ``` © 2005–2022 NumPy Developers Licensed under the 3-clause BSD License. 
<https://numpy.org/doc/1.23/reference/generated/numpy.swapaxes.htmlnumpy.take ========== numpy.take(*a*, *indices*, *axis=None*, *out=None*, *mode='raise'*)[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/core/fromnumeric.py#L93-L190) Take elements from an array along an axis. When axis is not None, this function does the same thing as “fancy” indexing (indexing arrays using arrays); however, it can be easier to use if you need elements along a given axis. A call such as `np.take(arr, indices, axis=3)` is equivalent to `arr[:,:,:,indices,...]`. Explained without fancy indexing, this is equivalent to the following use of [`ndindex`](numpy.ndindex#numpy.ndindex "numpy.ndindex"), which sets each of `ii`, `jj`, and `kk` to a tuple of indices: ``` Ni, Nk = a.shape[:axis], a.shape[axis+1:] Nj = indices.shape for ii in ndindex(Ni): for jj in ndindex(Nj): for kk in ndindex(Nk): out[ii + jj + kk] = a[ii + (indices[jj],) + kk] ``` Parameters **a**array_like (Ni
…, M, Nk…) The source array. **indices**array_like (Nj…
) The indices of the values to extract. New in version 1.8.0. Also allow scalars for indices. **axis**int, optional The axis over which to select values. By default, the flattened input array is used. **out**ndarray, optional (Ni
…, Nj…, Nk…
) If provided, the result will be placed in this array. It should be of the appropriate shape and dtype. Note that `out` is always buffered if `mode=’raise’`; use other modes for better performance. **mode**{‘raise’, ‘wrap’, ‘clip’}, optional Specifies how out-of-bounds indices will behave. * ‘raise’ – raise an error (default) * ‘wrap’ – wrap around * ‘clip’ – clip to the range ‘clip’ mode means that all indices that are too large are replaced by the index that addresses the last element along that axis. Note that this disables indexing with negative numbers. Returns **out**ndarray (Ni
…, Nj…, Nk…
) The returned array has the same type as `a`. See also [`compress`](numpy.compress#numpy.compress "numpy.compress") Take elements using a boolean mask [`ndarray.take`](numpy.ndarray.take#numpy.ndarray.take "numpy.ndarray.take") equivalent method [`take_along_axis`](numpy.take_along_axis#numpy.take_along_axis "numpy.take_along_axis") Take elements by matching the array and the index arrays #### Notes By eliminating the inner loop in the description above, and using [`s_`](numpy.s_#numpy.s_ "numpy.s_") to build simple slice objects, [`take`](#numpy.take "numpy.take") can be expressed in terms of applying fancy indexing to each 1-d slice: ``` Ni, Nk = a.shape[:axis], a.shape[axis+1:] for ii in ndindex(Ni): for kk in ndindex(Nj): out[ii + s_[...,] + kk] = a[ii + s_[:,] + kk][indices] ``` For this reason, it is equivalent to (but faster than) the following use of [`apply_along_axis`](numpy.apply_along_axis#numpy.apply_along_axis "numpy.apply_along_axis"): ``` out = np.apply_along_axis(lambda a_1d: a_1d[indices], axis, a) ``` #### Examples ``` >>> a = [4, 3, 5, 7, 6, 8] >>> indices = [0, 1, 4] >>> np.take(a, indices) array([4, 3, 6]) ``` In this example if `a` is an ndarray, “fancy” indexing can be used. ``` >>> a = np.array(a) >>> a[indices] array([4, 3, 6]) ``` If [`indices`](numpy.indices#numpy.indices "numpy.indices") is not one dimensional, the output also has these dimensions. ``` >>> np.take(a, [[0, 1], [2, 3]]) array([[4, 3], [5, 7]]) ``` © 2005–2022 NumPy Developers Licensed under the 3-clause BSD License. <https://numpy.org/doc/1.23/reference/generated/numpy.take.htmlnumpy.ptp ========= numpy.ptp(*a*, *axis=None*, *out=None*, *keepdims=<no value>*)[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/core/fromnumeric.py#L2578-L2669) Range of values (maximum - minimum) along an axis. The name of the function comes from the acronym for ‘peak to peak’. Warning [`ptp`](#numpy.ptp "numpy.ptp") preserves the data type of the array. 
This means the return value for an input of signed integers with n bits (e.g. `np.int8`, `np.int16`, etc) is also a signed integer with n bits. In that case, peak-to-peak values greater than `2**(n-1)-1` will be returned as negative values. An example with a work-around is shown below. Parameters **a**array_like Input values. **axis**None or int or tuple of ints, optional Axis along which to find the peaks. By default, flatten the array. `axis` may be negative, in which case it counts from the last to the first axis. New in version 1.15.0. If this is a tuple of ints, a reduction is performed on multiple axes, instead of a single axis or all the axes as before. **out**array_like Alternative output array in which to place the result. It must have the same shape and buffer length as the expected output, but the type of the output values will be cast if necessary. **keepdims**bool, optional If this is set to True, the axes which are reduced are left in the result as dimensions with size one. With this option, the result will broadcast correctly against the input array. If the default value is passed, then `keepdims` will not be passed through to the [`ptp`](#numpy.ptp "numpy.ptp") method of sub-classes of [`ndarray`](numpy.ndarray#numpy.ndarray "numpy.ndarray"), however any non-default value will be. If the sub-class’ method does not implement `keepdims` any exceptions will be raised. Returns **ptp**ndarray A new array holding the result, unless `out` was specified, in which case a reference to `out` is returned. #### Examples ``` >>> x = np.array([[4, 9, 2, 10], ... [6, 9, 7, 12]]) ``` ``` >>> np.ptp(x, axis=1) array([8, 6]) ``` ``` >>> np.ptp(x, axis=0) array([2, 0, 5, 2]) ``` ``` >>> np.ptp(x) 10 ``` This example shows that a negative value can be returned when the input is an array of signed integers. ``` >>> y = np.array([[1, 127], ... [0, 127], ... [-1, 127], ... 
[-2, 127]], dtype=np.int8) >>> np.ptp(y, axis=1) array([ 126, 127, -128, -127], dtype=int8) ``` A work-around is to use the `view()` method to view the result as unsigned integers with the same bit width: ``` >>> np.ptp(y, axis=1).view(np.uint8) array([126, 127, 128, 129], dtype=uint8) ``` © 2005–2022 NumPy Developers Licensed under the 3-clause BSD License. <https://numpy.org/doc/1.23/reference/generated/numpy.ptp.htmlnumpy.choose ============ numpy.choose(*a*, *choices*, *out=None*, *mode='raise'*)[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/core/fromnumeric.py#L307-L429) Construct an array from an index array and a list of arrays to choose from. First of all, if confused or uncertain, definitely look at the Examples - in its full generality, this function is less simple than it might seem from the following code description (below ndi = `numpy.lib.index_tricks`): `np.choose(a,c) == np.array([c[a[I]][I] for I in ndi.ndindex(a.shape)])`. But this omits some subtleties. Here is a fully general summary: Given an “index” array (`a`) of integers and a sequence of `n` arrays (`choices`), `a` and each choice array are first broadcast, as necessary, to arrays of a common shape; calling these *Ba* and *Bchoices[i], i = 0,
…,n-1* we have that, necessarily, `Ba.shape == Bchoices[i].shape` for each `i`. Then, a new array with shape `Ba.shape` is created as follows: * if `mode='raise'` (the default), then, first of all, each element of `a` (and thus `Ba`) must be in the range `[0, n-1]`; now, suppose that `i` (in that range) is the value at the `(j0, j1, ..., jm)` position in `Ba` - then the value at the same position in the new array is the value in `Bchoices[i]` at that same position; * if `mode='wrap'`, values in `a` (and thus `Ba`) may be any (signed) integer; modular arithmetic is used to map integers outside the range `[0, n-1]` back into that range; and then the new array is constructed as above; * if `mode='clip'`, values in `a` (and thus `Ba`) may be any (signed) integer; negative integers are mapped to 0; values greater than `n-1` are mapped to `n-1`; and then the new array is constructed as above. Parameters **a**int array This array must contain integers in `[0, n-1]`, where `n` is the number of choices, unless `mode=wrap` or `mode=clip`, in which cases any integers are permissible. **choices**sequence of arrays Choice arrays. `a` and all of the choices must be broadcastable to the same shape. If `choices` is itself an array (not recommended), then its outermost dimension (i.e., the one corresponding to `choices.shape[0]`) is taken as defining the “sequence”. **out**array, optional If provided, the result will be inserted into this array. It should be of the appropriate shape and dtype. Note that `out` is always buffered if `mode='raise'`; use other modes for better performance. **mode**{‘raise’ (default), ‘wrap’, ‘clip’}, optional Specifies how indices outside `[0, n-1]` will be treated: * ‘raise’ : an exception is raised * ‘wrap’ : value becomes value mod `n` * ‘clip’ : values < 0 are mapped to 0, values > n-1 are mapped to n-1 Returns **merged_array**array The merged result.
Raises ValueError: shape mismatch If `a` and each choice array are not all broadcastable to the same shape. See also [`ndarray.choose`](numpy.ndarray.choose#numpy.ndarray.choose "numpy.ndarray.choose") equivalent method [`numpy.take_along_axis`](numpy.take_along_axis#numpy.take_along_axis "numpy.take_along_axis") Preferable if `choices` is an array #### Notes To reduce the chance of misinterpretation, even though the following “abuse” is nominally supported, `choices` should neither be, nor be thought of as, a single array, i.e., the outermost sequence-like container should be either a list or a tuple. #### Examples ``` >>> choices = [[0, 1, 2, 3], [10, 11, 12, 13], ... [20, 21, 22, 23], [30, 31, 32, 33]] >>> np.choose([2, 3, 1, 0], choices ... # the first element of the result will be the first element of the ... # third (2+1) "array" in choices, namely, 20; the second element ... # will be the second element of the fourth (3+1) choice array, i.e., ... # 31, etc. ... ) array([20, 31, 12, 3]) >>> np.choose([2, 4, 1, 0], choices, mode='clip') # 4 goes to 3 (4-1) array([20, 31, 12, 3]) >>> # because there are 4 choice arrays >>> np.choose([2, 4, 1, 0], choices, mode='wrap') # 4 goes to (4 mod 4) array([20, 1, 12, 3]) >>> # i.e., 0 ``` A couple examples illustrating how choose broadcasts: ``` >>> a = [[1, 0, 1], [0, 1, 0], [1, 0, 1]] >>> choices = [-10, 10] >>> np.choose(a, choices) array([[ 10, -10, 10], [-10, 10, -10], [ 10, -10, 10]]) ``` ``` >>> # With thanks to <NAME> >>> a = np.array([0, 1]).reshape((2,1,1)) >>> c1 = np.array([1, 2, 3]).reshape((1,3,1)) >>> c2 = np.array([-1, -2, -3, -4, -5]).reshape((1,1,5)) >>> np.choose(a, (c1, c2)) # result is 2x3x5, res[0,:,:]=c1, res[1,:,:]=c2 array([[[ 1, 1, 1, 1, 1], [ 2, 2, 2, 2, 2], [ 3, 3, 3, 3, 3]], [[-1, -2, -3, -4, -5], [-1, -2, -3, -4, -5], [-1, -2, -3, -4, -5]]]) ``` © 2005–2022 NumPy Developers Licensed under the 3-clause BSD License. 
<https://numpy.org/doc/1.23/reference/generated/numpy.choose.htmlnumpy.compress ============== numpy.compress(*condition*, *a*, *axis=None*, *out=None*)[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/core/fromnumeric.py#L2017-L2078) Return selected slices of an array along given axis. When working along a given axis, a slice along that axis is returned in `output` for each index where `condition` evaluates to True. When working on a 1-D array, [`compress`](#numpy.compress "numpy.compress") is equivalent to [`extract`](numpy.extract#numpy.extract "numpy.extract"). Parameters **condition**1-D array of bools Array that selects which entries to return. If len(condition) is less than the size of `a` along the given axis, then output is truncated to the length of the condition array. **a**array_like Array from which to extract a part. **axis**int, optional Axis along which to take slices. If None (default), work on the flattened array. **out**ndarray, optional Output array. Its type is preserved and it must be of the right shape to hold the output. Returns **compressed_array**ndarray A copy of `a` without the slices along axis for which `condition` is false. 
See also [`take`](numpy.take#numpy.take "numpy.take"), [`choose`](numpy.choose#numpy.choose "numpy.choose"), [`diag`](numpy.diag#numpy.diag "numpy.diag"), [`diagonal`](numpy.diagonal#numpy.diagonal "numpy.diagonal"), [`select`](numpy.select#numpy.select "numpy.select") [`ndarray.compress`](numpy.ndarray.compress#numpy.ndarray.compress "numpy.ndarray.compress") Equivalent method in ndarray [`extract`](numpy.extract#numpy.extract "numpy.extract") Equivalent method when working on 1-D arrays [Output type determination](../../user/basics.ufuncs#ufuncs-output-type) #### Examples ``` >>> a = np.array([[1, 2], [3, 4], [5, 6]]) >>> a array([[1, 2], [3, 4], [5, 6]]) >>> np.compress([0, 1], a, axis=0) array([[3, 4]]) >>> np.compress([False, True, True], a, axis=0) array([[3, 4], [5, 6]]) >>> np.compress([False, True], a, axis=1) array([[2], [4], [6]]) ``` Working on the flattened array does not return slices along an axis but selects elements. ``` >>> np.compress([False, True], a) array([2]) ``` © 2005–2022 NumPy Developers Licensed under the 3-clause BSD License. <https://numpy.org/doc/1.23/reference/generated/numpy.compress.htmlnumpy.ndarray.fill ================== method ndarray.fill(*value*) Fill the array with a scalar value. Parameters **value**scalar All elements of `a` will be assigned this value. #### Examples ``` >>> a = np.array([1, 2]) >>> a.fill(0) >>> a array([0, 0]) >>> a = np.empty(2) >>> a.fill(1) >>> a array([1., 1.]) ``` © 2005–2022 NumPy Developers Licensed under the 3-clause BSD License. <https://numpy.org/doc/1.23/reference/generated/numpy.ndarray.fill.htmlnumpy.imag ========== numpy.imag(*val*)[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/lib/type_check.py#L167-L203) Return the imaginary part of the complex argument. Parameters **val**array_like Input array. Returns **out**ndarray or scalar The imaginary component of the complex argument. If `val` is real, the type of `val` is used for the output. 
If `val` has complex elements, the returned type is float. See also [`real`](numpy.real#numpy.real "numpy.real"), [`angle`](numpy.angle#numpy.angle "numpy.angle"), [`real_if_close`](numpy.real_if_close#numpy.real_if_close "numpy.real_if_close") #### Examples ``` >>> a = np.array([1+2j, 3+4j, 5+6j]) >>> a.imag array([2., 4., 6.]) >>> a.imag = np.array([8, 10, 12]) >>> a array([1. +8.j, 3.+10.j, 5.+12.j]) >>> np.imag(1 + 1j) 1.0 ``` © 2005–2022 NumPy Developers Licensed under the 3-clause BSD License. <https://numpy.org/doc/1.23/reference/generated/numpy.imag.htmlnumpy.put ========= numpy.put(*a*, *ind*, *v*, *mode='raise'*)[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/core/fromnumeric.py#L486-L543) Replaces specified elements of an array with given values. The indexing works on the flattened target array. [`put`](#numpy.put "numpy.put") is roughly equivalent to: ``` a.flat[ind] = v ``` Parameters **a**ndarray Target array. **ind**array_like Target indices, interpreted as integers. **v**array_like Values to place in `a` at target indices. If `v` is shorter than `ind` it will be repeated as necessary. **mode**{‘raise’, ‘wrap’, ‘clip’}, optional Specifies how out-of-bounds indices will behave. * ‘raise’ – raise an error (default) * ‘wrap’ – wrap around * ‘clip’ – clip to the range ‘clip’ mode means that all indices that are too large are replaced by the index that addresses the last element along that axis. Note that this disables indexing with negative numbers. In ‘raise’ mode, if an exception occurs the target array may still be modified. 
See also [`putmask`](numpy.putmask#numpy.putmask "numpy.putmask"), [`place`](numpy.place#numpy.place "numpy.place") [`put_along_axis`](numpy.put_along_axis#numpy.put_along_axis "numpy.put_along_axis") Put elements by matching the array and the index arrays #### Examples ``` >>> a = np.arange(5) >>> np.put(a, [0, 2], [-44, -55]) >>> a array([-44, 1, -55, 3, 4]) ``` ``` >>> a = np.arange(5) >>> np.put(a, 22, -5, mode='clip') >>> a array([ 0, 1, 2, 3, -5]) ``` © 2005–2022 NumPy Developers Licensed under the 3-clause BSD License. <https://numpy.org/doc/1.23/reference/generated/numpy.put.htmlnumpy.putmask ============= numpy.putmask(*a*, *mask*, *values*) Changes elements of an array based on conditional and input values. Sets `a.flat[n] = values[n]` for each n where `mask.flat[n]==True`. If `values` is not the same size as `a` and `mask` then it will repeat. This gives behavior different from `a[mask] = values`. Parameters **a**ndarray Target array. **mask**array_like Boolean mask array. It has to be the same shape as `a`. **values**array_like Values to put into `a` where `mask` is True. If `values` is smaller than `a` it will be repeated. See also [`place`](numpy.place#numpy.place "numpy.place"), [`put`](numpy.put#numpy.put "numpy.put"), [`take`](numpy.take#numpy.take "numpy.take"), [`copyto`](numpy.copyto#numpy.copyto "numpy.copyto") #### Examples ``` >>> x = np.arange(6).reshape(2, 3) >>> np.putmask(x, x>2, x**2) >>> x array([[ 0, 1, 2], [ 9, 16, 25]]) ``` If `values` is smaller than `a` it is repeated: ``` >>> x = np.arange(5) >>> np.putmask(x, x>1, [-33, -44]) >>> x array([ 0, 1, -33, -44, -33]) ``` © 2005–2022 NumPy Developers Licensed under the 3-clause BSD License. <https://numpy.org/doc/1.23/reference/generated/numpy.putmask.htmlnumpy.real ========== numpy.real(*val*)[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/lib/type_check.py#L121-L160) Return the real part of the complex argument. Parameters **val**array_like Input array. 
Returns **out**ndarray or scalar The real component of the complex argument. If `val` is real, the type of `val` is used for the output. If `val` has complex elements, the returned type is float. See also [`real_if_close`](numpy.real_if_close#numpy.real_if_close "numpy.real_if_close"), [`imag`](numpy.imag#numpy.imag "numpy.imag"), [`angle`](numpy.angle#numpy.angle "numpy.angle") #### Examples ``` >>> a = np.array([1+2j, 3+4j, 5+6j]) >>> a.real array([1., 3., 5.]) >>> a.real = 9 >>> a array([9.+2.j, 9.+4.j, 9.+6.j]) >>> a.real = np.array([9, 8, 7]) >>> a array([9.+2.j, 8.+4.j, 7.+6.j]) >>> np.real(1 + 1j) 1.0 ``` © 2005–2022 NumPy Developers Licensed under the 3-clause BSD License. <https://numpy.org/doc/1.23/reference/generated/numpy.real.htmlnumpy.linalg.svd ================ linalg.svd(*a*, *full_matrices=True*, *compute_uv=True*, *hermitian=False*)[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/linalg/linalg.py#L1477-L1671) Singular Value Decomposition. When `a` is a 2D array, and `full_matrices=False`, then it is factorized as `u @ np.diag(s) @ vh = (u * s) @ vh`, where `u` and the Hermitian transpose of `vh` are 2D arrays with orthonormal columns and `s` is a 1D array of `a`’s singular values. When `a` is higher-dimensional, SVD is applied in stacked mode as explained below. Parameters **a**(
…, M, N) array_like A real or complex array with `a.ndim >= 2`. **full_matrices**bool, optional If True (default), `u` and `vh` have the shapes `(..., M, M)` and `(..., N, N)`, respectively. Otherwise, the shapes are `(..., M, K)` and `(..., K, N)`, respectively, where `K = min(M, N)`. **compute_uv**bool, optional Whether or not to compute `u` and `vh` in addition to `s`. True by default. **hermitian**bool, optional If True, `a` is assumed to be Hermitian (symmetric if real-valued), enabling a more efficient method for finding singular values. Defaults to False. New in version 1.17.0. Returns **u**{ (
…, M, M), (…
, M, K) } array Unitary array(s). The first `a.ndim - 2` dimensions have the same size as those of the input `a`. The size of the last two dimensions depends on the value of `full_matrices`. Only returned when `compute_uv` is True. **s**(
…, K) array Vector(s) with the singular values, within each vector sorted in descending order. The first `a.ndim - 2` dimensions have the same size as those of the input `a`. **vh**{ (
…, N, N), (…
, K, N) } array Unitary array(s). The first `a.ndim - 2` dimensions have the same size as those of the input `a`. The size of the last two dimensions depends on the value of `full_matrices`. Only returned when `compute_uv` is True. Raises LinAlgError If SVD computation does not converge. See also [`scipy.linalg.svd`](https://docs.scipy.org/doc/scipy/reference/generated/scipy.linalg.svd.html#scipy.linalg.svd "(in SciPy v1.8.1)") Similar function in SciPy. [`scipy.linalg.svdvals`](https://docs.scipy.org/doc/scipy/reference/generated/scipy.linalg.svdvals.html#scipy.linalg.svdvals "(in SciPy v1.8.1)") Compute singular values of a matrix. #### Notes Changed in version 1.8.0: Broadcasting rules apply, see the [`numpy.linalg`](../routines.linalg#module-numpy.linalg "numpy.linalg") documentation for details. The decomposition is performed using LAPACK routine `_gesdd`. SVD is usually described for the factorization of a 2D matrix \(A\). The higher-dimensional case will be discussed below. In the 2D case, SVD is written as \(A = U S V^H\), where \(A = a\), \(U= u\), \(S= \mathtt{np.diag}(s)\) and \(V^H = vh\). The 1D array `s` contains the singular values of `a` and `u` and `vh` are unitary. The rows of `vh` are the eigenvectors of \(A^H A\) and the columns of `u` are the eigenvectors of \(A A^H\). In both cases the corresponding (possibly non-zero) eigenvalues are given by `s**2`. If `a` has more than two dimensions, then broadcasting rules apply, as explained in [Linear algebra on several matrices at once](../routines.linalg#routines-linalg-broadcasting). This means that SVD is working in “stacked” mode: it iterates over all indices of the first `a.ndim - 2` dimensions and for each combination SVD is applied to the last two indices. The matrix `a` can be reconstructed from the decomposition with either `(u * s[..., None, :]) @ vh` or `u @ (s[..., None] * vh)`. (The `@` operator can be replaced by the function `np.matmul` for python versions below 3.5.) 
If `a` is a `matrix` object (as opposed to an `ndarray`), then so are all the return values. #### Examples ``` >>> a = np.random.randn(9, 6) + 1j*np.random.randn(9, 6) >>> b = np.random.randn(2, 7, 8, 3) + 1j*np.random.randn(2, 7, 8, 3) ``` Reconstruction based on full SVD, 2D case: ``` >>> u, s, vh = np.linalg.svd(a, full_matrices=True) >>> u.shape, s.shape, vh.shape ((9, 9), (6,), (6, 6)) >>> np.allclose(a, np.dot(u[:, :6] * s, vh)) True >>> smat = np.zeros((9, 6), dtype=complex) >>> smat[:6, :6] = np.diag(s) >>> np.allclose(a, np.dot(u, np.dot(smat, vh))) True ``` Reconstruction based on reduced SVD, 2D case: ``` >>> u, s, vh = np.linalg.svd(a, full_matrices=False) >>> u.shape, s.shape, vh.shape ((9, 6), (6,), (6, 6)) >>> np.allclose(a, np.dot(u * s, vh)) True >>> smat = np.diag(s) >>> np.allclose(a, np.dot(u, np.dot(smat, vh))) True ``` Reconstruction based on full SVD, 4D case: ``` >>> u, s, vh = np.linalg.svd(b, full_matrices=True) >>> u.shape, s.shape, vh.shape ((2, 7, 8, 8), (2, 7, 3), (2, 7, 3, 3)) >>> np.allclose(b, np.matmul(u[..., :3] * s[..., None, :], vh)) True >>> np.allclose(b, np.matmul(u[..., :3], s[..., None] * vh)) True ``` Reconstruction based on reduced SVD, 4D case: ``` >>> u, s, vh = np.linalg.svd(b, full_matrices=False) >>> u.shape, s.shape, vh.shape ((2, 7, 8, 3), (2, 7, 3), (2, 7, 3, 3)) >>> np.allclose(b, np.matmul(u * s[..., None, :], vh)) True >>> np.allclose(b, np.matmul(u, s[..., None] * vh)) True ``` © 2005–2022 NumPy Developers Licensed under the 3-clause BSD License. <https://numpy.org/doc/1.23/reference/generated/numpy.linalg.svd.htmlnumpy.ix_ ========== numpy.ix_(**args*)[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/lib/index_tricks.py#L35-L107) Construct an open mesh from multiple sequences. This function takes N 1-D sequences and returns N outputs with N dimensions each, such that the shape is 1 in all but one dimension and the dimension with the non-unit shape value cycles through all N dimensions. 
Using [`ix_`](#numpy.ix_ "numpy.ix_") one can quickly construct index arrays that will index the cross product. `a[np.ix_([1,3],[2,5])]` returns the array `[[a[1,2] a[1,5]], [a[3,2] a[3,5]]]`. Parameters **args**1-D sequences Each sequence should be of integer or boolean type. Boolean sequences will be interpreted as boolean masks for the corresponding dimension (equivalent to passing in `np.nonzero(boolean_sequence)`). Returns **out**tuple of ndarrays N arrays with N dimensions each, with N the number of input sequences. Together these arrays form an open mesh. See also [`ogrid`](numpy.ogrid#numpy.ogrid "numpy.ogrid"), [`mgrid`](numpy.mgrid#numpy.mgrid "numpy.mgrid"), [`meshgrid`](numpy.meshgrid#numpy.meshgrid "numpy.meshgrid") #### Examples ``` >>> a = np.arange(10).reshape(2, 5) >>> a array([[0, 1, 2, 3, 4], [5, 6, 7, 8, 9]]) >>> ixgrid = np.ix_([0, 1], [2, 4]) >>> ixgrid (array([[0], [1]]), array([[2, 4]])) >>> ixgrid[0].shape, ixgrid[1].shape ((2, 1), (1, 2)) >>> a[ixgrid] array([[2, 4], [7, 9]]) ``` ``` >>> ixgrid = np.ix_([True, True], [2, 4]) >>> a[ixgrid] array([[2, 4], [7, 9]]) >>> ixgrid = np.ix_([True, True], [False, False, True, False, True]) >>> a[ixgrid] array([[2, 4], [7, 9]]) ``` © 2005–2022 NumPy Developers Licensed under the 3-clause BSD License. <https://numpy.org/doc/1.23/reference/generated/numpy.ix_.htmlStructured arrays ================= Introduction ------------ Structured arrays are ndarrays whose datatype is a composition of simpler datatypes organized as a sequence of named [fields](../glossary#term-field). For example, ``` >>> x = np.array([('Rex', 9, 81.0), ('Fido', 3, 27.0)], ... dtype=[('name', 'U10'), ('age', 'i4'), ('weight', 'f4')]) >>> x array([('Rex', 9, 81.), ('Fido', 3, 27.)], dtype=[('name', '<U10'), ('age', '<i4'), ('weight', '<f4')]) ``` Here `x` is a one-dimensional array of length two whose datatype is a structure with three fields: 1. A string of length 10 or less named ‘name’, 2. a 32-bit integer named ‘age’, and 3. 
a 32-bit float named ‘weight’. If you index `x` at position 1 you get a structure: ``` >>> x[1] ('Fido', 3, 27.) ``` You can access and modify individual fields of a structured array by indexing with the field name: ``` >>> x['age'] array([9, 3], dtype=int32) >>> x['age'] = 5 >>> x array([('Rex', 5, 81.), ('Fido', 5, 27.)], dtype=[('name', '<U10'), ('age', '<i4'), ('weight', '<f4')]) ``` Structured datatypes are designed to be able to mimic ‘structs’ in the C language, and share a similar memory layout. They are meant for interfacing with C code and for low-level manipulation of structured buffers, for example for interpreting binary blobs. For these purposes they support specialized features such as subarrays, nested datatypes, and unions, and allow control over the memory layout of the structure. Users looking to manipulate tabular data, such as stored in csv files, may find other pydata projects more suitable, such as xarray, pandas, or DataArray. These provide a high-level interface for tabular data analysis and are better optimized for that use. For instance, the C-struct-like memory layout of structured arrays in numpy can lead to poor cache behavior in comparison. Structured Datatypes -------------------- A structured datatype can be thought of as a sequence of bytes of a certain length (the structure’s [itemsize](../glossary#term-itemsize)) which is interpreted as a collection of fields. Each field has a name, a datatype, and a byte offset within the structure. The datatype of a field may be any numpy datatype including other structured datatypes, and it may also be a [subarray data type](../glossary#term-subarray-data-type) which behaves like an ndarray of a specified shape. The offsets of the fields are arbitrary, and fields may even overlap. These offsets are usually determined automatically by numpy, but can also be specified. 
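As a minimal sketch of this byte layout, using the `('name', 'age', 'weight')` dtype from the introduction above: each entry in the dtype's `fields` mapping records a field's datatype and its byte offset, and `itemsize` gives the total size of the structure. With the default packed layout, `'U10'` occupies 10 × 4 = 40 bytes, so the offsets follow directly:

```python
import numpy as np

# The dtype from the introduction: a 10-char unicode string,
# a 32-bit int, and a 32-bit float, with default (packed) offsets.
dt = np.dtype([('name', 'U10'), ('age', 'i4'), ('weight', 'f4')])

# 'name' starts at offset 0 and occupies 40 bytes, so 'age' starts
# at offset 40 and 'weight' at 44; the whole struct is 48 bytes.
print(dt.fields['age'])   # (dtype('int32'), 40)
print(dt.itemsize)        # 48
```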
### Structured Datatype Creation Structured datatypes may be created using the function [`numpy.dtype`](../reference/generated/numpy.dtype#numpy.dtype "numpy.dtype"). There are 4 alternative forms of specification which vary in flexibility and conciseness. These are further documented in the [Data Type Objects](../reference/arrays.dtypes#arrays-dtypes-constructing) reference page, and in summary they are: 1. A list of tuples, one tuple per field Each tuple has the form `(fieldname, datatype, shape)` where shape is optional. `fieldname` is a string (or tuple if titles are used, see [Field Titles](#titles) below), `datatype` may be any object convertible to a datatype, and `shape` is a tuple of integers specifying subarray shape. ``` >>> np.dtype([('x', 'f4'), ('y', np.float32), ('z', 'f4', (2, 2))]) dtype([('x', '<f4'), ('y', '<f4'), ('z', '<f4', (2, 2))]) ``` If `fieldname` is the empty string `''`, the field will be given a default name of the form `f#`, where `#` is the integer index of the field, counting from 0 from the left: ``` >>> np.dtype([('x', 'f4'), ('', 'i4'), ('z', 'i8')]) dtype([('x', '<f4'), ('f1', '<i4'), ('z', '<i8')]) ``` The byte offsets of the fields within the structure and the total structure itemsize are determined automatically. 2. A string of comma-separated dtype specifications In this shorthand notation any of the [string dtype specifications](../reference/arrays.dtypes#arrays-dtypes-constructing) may be used in a string and separated by commas. The itemsize and byte offsets of the fields are determined automatically, and the field names are given the default names `f0`, `f1`, etc. ``` >>> np.dtype('i8, f4, S3') dtype([('f0', '<i8'), ('f1', '<f4'), ('f2', 'S3')]) >>> np.dtype('3int8, float32, (2, 3)float64') dtype([('f0', 'i1', (3,)), ('f1', '<f4'), ('f2', '<f8', (2, 3))]) ``` 3. 
A dictionary of field parameter arrays This is the most flexible form of specification since it allows control over the byte-offsets of the fields and the itemsize of the structure. The dictionary has two required keys, ‘names’ and ‘formats’, and four optional keys, ‘offsets’, ‘itemsize’, ‘aligned’ and ‘titles’. The values for ‘names’ and ‘formats’ should respectively be a list of field names and a list of dtype specifications, of the same length. The optional ‘offsets’ value should be a list of integer byte-offsets, one for each field within the structure. If ‘offsets’ is not given the offsets are determined automatically. The optional ‘itemsize’ value should be an integer describing the total size in bytes of the dtype, which must be large enough to contain all the fields. ``` >>> np.dtype({'names': ['col1', 'col2'], 'formats': ['i4', 'f4']}) dtype([('col1', '<i4'), ('col2', '<f4')]) >>> np.dtype({'names': ['col1', 'col2'], ... 'formats': ['i4', 'f4'], ... 'offsets': [0, 4], ... 'itemsize': 12}) dtype({'names': ['col1', 'col2'], 'formats': ['<i4', '<f4'], 'offsets': [0, 4], 'itemsize': 12}) ``` Offsets may be chosen such that the fields overlap, though this will mean that assigning to one field may clobber any overlapping field’s data. As an exception, fields of [`numpy.object_`](../reference/arrays.scalars#numpy.object_ "numpy.object_") type cannot overlap with other fields, because of the risk of clobbering the internal object pointer and then dereferencing it. The optional ‘aligned’ value can be set to `True` to make the automatic offset computation use aligned offsets (see [Automatic Byte Offsets and Alignment](#offsets-and-alignment)), as if the ‘align’ keyword argument of [`numpy.dtype`](../reference/generated/numpy.dtype#numpy.dtype "numpy.dtype") had been set to True. The optional ‘titles’ value should be a list of titles of the same length as ‘names’, see [Field Titles](#titles) below. 4. 
A dictionary of field names The keys of the dictionary are the field names and the values are tuples specifying type and offset: ``` >>> np.dtype({'col1': ('i1', 0), 'col2': ('f4', 1)}) dtype([('col1', 'i1'), ('col2', '<f4')]) ``` This form was discouraged because Python dictionaries did not preserve order in Python versions before Python 3.6. [Field Titles](#titles) may be specified by using a 3-tuple, see below. ### Manipulating and Displaying Structured Datatypes The list of field names of a structured datatype can be found in the `names` attribute of the dtype object: ``` >>> d = np.dtype([('x', 'i8'), ('y', 'f4')]) >>> d.names ('x', 'y') ``` The field names may be modified by assigning to the `names` attribute using a sequence of strings of the same length. The dtype object also has a dictionary-like attribute, `fields`, whose keys are the field names (and [Field Titles](#titles), see below) and whose values are tuples containing the dtype and byte offset of each field. ``` >>> d.fields mappingproxy({'x': (dtype('int64'), 0), 'y': (dtype('float32'), 8)}) ``` Both the `names` and `fields` attributes will equal `None` for unstructured arrays. The recommended way to test if a dtype is structured is with `if dt.names is not None` rather than `if dt.names`, to account for dtypes with 0 fields. The string representation of a structured datatype is shown in the “list of tuples” form if possible, otherwise numpy falls back to using the more general dictionary form. ### Automatic Byte Offsets and Alignment Numpy uses one of two methods to automatically determine the field byte offsets and the overall itemsize of a structured datatype, depending on whether `align=True` was specified as a keyword argument to [`numpy.dtype`](../reference/generated/numpy.dtype#numpy.dtype "numpy.dtype"). By default (`align=False`), numpy will pack the fields together such that each field starts at the byte offset the previous field ended, and the fields are contiguous in memory. 
``` >>> def print_offsets(d): ... print("offsets:", [d.fields[name][1] for name in d.names]) ... print("itemsize:", d.itemsize) >>> print_offsets(np.dtype('u1, u1, i4, u1, i8, u2')) offsets: [0, 1, 2, 6, 7, 15] itemsize: 17 ``` If `align=True` is set, numpy will pad the structure in the same way many C compilers would pad a C-struct. Aligned structures can give a performance improvement in some cases, at the cost of increased datatype size. Padding bytes are inserted between fields such that each field’s byte offset will be a multiple of that field’s alignment, which is usually equal to the field’s size in bytes for simple datatypes, see [`PyArray_Descr.alignment`](../reference/c-api/types-and-structures#c.PyArray_Descr.alignment "PyArray_Descr.alignment"). The structure will also have trailing padding added so that its itemsize is a multiple of the largest field’s alignment. ``` >>> print_offsets(np.dtype('u1, u1, i4, u1, i8, u2', align=True)) offsets: [0, 1, 4, 8, 16, 24] itemsize: 32 ``` Note that although almost all modern C compilers pad in this way by default, padding in C structs is C-implementation-dependent so this memory layout is not guaranteed to exactly match that of a corresponding struct in a C program. Some work may be needed, either on the numpy side or the C side, to obtain exact correspondence. If offsets were specified using the optional `offsets` key in the dictionary-based dtype specification, setting `align=True` will check that each field’s offset is a multiple of its size and that the itemsize is a multiple of the largest field size, and raise an exception if not. If the offsets of the fields and itemsize of a structured array satisfy the alignment conditions, the array will have the `ALIGNED` [`flag`](../reference/generated/numpy.ndarray.flags#numpy.ndarray.flags "numpy.ndarray.flags") set. 
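The offset validation described above can be exercised directly; a small sketch (assuming typical 64-bit field alignments, where `i8` aligns to 8 bytes):

```python
import numpy as np

# Offsets 0 and 8 satisfy each field's alignment, so align=True accepts this
aligned = np.dtype({'names': ['a', 'b'],
                    'formats': ['i4', 'i8'],
                    'offsets': [0, 8],
                    'itemsize': 16}, align=True)
print(aligned.isalignedstruct)

# Offset 4 is not a multiple of the i8 field's alignment, so align=True rejects it
try:
    np.dtype({'names': ['a', 'b'],
              'formats': ['i4', 'i8'],
              'offsets': [0, 4],
              'itemsize': 12}, align=True)
except ValueError as err:
    print('rejected:', err)
```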
A convenience function [`numpy.lib.recfunctions.repack_fields`](#numpy.lib.recfunctions.repack_fields "numpy.lib.recfunctions.repack_fields") converts an aligned dtype or array to a packed one and vice versa. It takes either a dtype or structured ndarray as an argument, and returns a copy with fields re-packed, with or without padding bytes. ### Field Titles In addition to field names, fields may also have an associated [title](../glossary#term-title), an alternate name, which is sometimes used as an additional description or alias for the field. The title may be used to index an array, just like a field name. To add titles when using the list-of-tuples form of dtype specification, the field name may be specified as a tuple of two strings instead of a single string, which will be the field’s title and field name respectively. For example: ``` >>> np.dtype([(('my title', 'name'), 'f4')]) dtype([(('my title', 'name'), '<f4')]) ``` When using the first form of dictionary-based specification, the titles may be supplied as an extra `'titles'` key as described above. When using the second (discouraged) dictionary-based specification, the title can be supplied by providing a 3-element tuple `(datatype, offset, title)` instead of the usual 2-element tuple: ``` >>> np.dtype({'name': ('i4', 0, 'my title')}) dtype([(('my title', 'name'), '<i4')]) ``` The `dtype.fields` dictionary will contain titles as keys, if any titles are used. This means effectively that a field with a title will be represented twice in the fields dictionary. The tuple values for these fields will also have a third element, the field title. Because of this, and because the `names` attribute preserves the field order while the `fields` attribute may not, it is recommended to iterate through the fields of a dtype using the `names` attribute of the dtype, which will not list titles, as in: ``` >>> for name in d.names: ... 
print(d.fields[name][:2]) (dtype('int64'), 0) (dtype('float32'), 8) ``` ### Union types Structured datatypes are implemented in numpy to have base type [`numpy.void`](../reference/arrays.scalars#numpy.void "numpy.void") by default, but it is possible to interpret other numpy types as structured types using the `(base_dtype, dtype)` form of dtype specification described in [Data Type Objects](../reference/arrays.dtypes#arrays-dtypes-constructing). Here, `base_dtype` is the desired underlying dtype, and fields and flags will be copied from `dtype`. This dtype is similar to a ‘union’ in C. Indexing and Assignment to Structured arrays -------------------------------------------- ### Assigning data to a Structured Array There are a number of ways to assign values to a structured array: Using python tuples, using scalar values, or using other structured arrays. #### Assignment from Python Native Types (Tuples) The simplest way to assign values to a structured array is using python tuples. Each assigned value should be a tuple of length equal to the number of fields in the array, and not a list or array as these will trigger numpy’s broadcasting rules. The tuple’s elements are assigned to the successive fields of the array, from left to right: ``` >>> x = np.array([(1, 2, 3), (4, 5, 6)], dtype='i8, f4, f8') >>> x[1] = (7, 8, 9) >>> x array([(1, 2., 3.), (7, 8., 9.)], dtype=[('f0', '<i8'), ('f1', '<f4'), ('f2', '<f8')]) ``` #### Assignment from Scalars A scalar assigned to a structured element will be assigned to all fields. 
This happens when a scalar is assigned to a structured array, or when an unstructured array is assigned to a structured array: ``` >>> x = np.zeros(2, dtype='i8, f4, ?, S1') >>> x[:] = 3 >>> x array([(3, 3., True, b'3'), (3, 3., True, b'3')], dtype=[('f0', '<i8'), ('f1', '<f4'), ('f2', '?'), ('f3', 'S1')]) >>> x[:] = np.arange(2) >>> x array([(0, 0., False, b'0'), (1, 1., True, b'1')], dtype=[('f0', '<i8'), ('f1', '<f4'), ('f2', '?'), ('f3', 'S1')]) ``` Structured arrays can also be assigned to unstructured arrays, but only if the structured datatype has just a single field: ``` >>> twofield = np.zeros(2, dtype=[('A', 'i4'), ('B', 'i4')]) >>> onefield = np.zeros(2, dtype=[('A', 'i4')]) >>> nostruct = np.zeros(2, dtype='i4') >>> nostruct[:] = twofield Traceback (most recent call last): ... TypeError: Cannot cast array data from dtype([('A', '<i4'), ('B', '<i4')]) to dtype('int32') according to the rule 'unsafe' ``` #### Assignment from other Structured Arrays Assignment between two structured arrays occurs as if the source elements had been converted to tuples and then assigned to the destination elements. That is, the first field of the source array is assigned to the first field of the destination array, and the second field likewise, and so on, regardless of field names. Structured arrays with a different number of fields cannot be assigned to each other. Bytes of the destination structure which are not included in any of the fields are unaffected. ``` >>> a = np.zeros(3, dtype=[('a', 'i8'), ('b', 'f4'), ('c', 'S3')]) >>> b = np.ones(3, dtype=[('x', 'f4'), ('y', 'S3'), ('z', 'O')]) >>> b[:] = a >>> b array([(0., b'0.0', b''), (0., b'0.0', b''), (0., b'0.0', b'')], dtype=[('x', '<f4'), ('y', 'S3'), ('z', 'O')]) ``` #### Assignment involving subarrays When assigning to fields which are subarrays, the assigned value will first be broadcast to the shape of the subarray. 
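That broadcasting can be sketched briefly (the field name `vec` is illustrative):

```python
import numpy as np

x = np.zeros(2, dtype=[('vec', 'f8', (3,))])

# A length-3 value is broadcast across the (2, 3) view of the 'vec' subarrays
x['vec'] = [1.0, 2.0, 3.0]

# A scalar broadcasts to the full subarray shape of a single record
x[0]['vec'] = 9.0
print(x)
```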
### Indexing Structured Arrays #### Accessing Individual Fields Individual fields of a structured array may be accessed and modified by indexing the array with the field name. ``` >>> x = np.array([(1, 2), (3, 4)], dtype=[('foo', 'i8'), ('bar', 'f4')]) >>> x['foo'] array([1, 3]) >>> x['foo'] = 10 >>> x array([(10, 2.), (10, 4.)], dtype=[('foo', '<i8'), ('bar', '<f4')]) ``` The resulting array is a view into the original array. It shares the same memory locations and writing to the view will modify the original array. ``` >>> y = x['bar'] >>> y[:] = 11 >>> x array([(10, 11.), (10, 11.)], dtype=[('foo', '<i8'), ('bar', '<f4')]) ``` This view has the same dtype and itemsize as the indexed field, so it is typically a non-structured array, except in the case of nested structures. ``` >>> y.dtype, y.shape, y.strides (dtype('float32'), (2,), (12,)) ``` If the accessed field is a subarray, the dimensions of the subarray are appended to the shape of the result: ``` >>> x = np.zeros((2, 2), dtype=[('a', np.int32), ('b', np.float64, (3, 3))]) >>> x['a'].shape (2, 2) >>> x['b'].shape (2, 2, 3, 3) ``` #### Accessing Multiple Fields One can index and assign to a structured array with a multi-field index, where the index is a list of field names. Warning The behavior of multi-field indexes changed from Numpy 1.15 to Numpy 1.16. The result of indexing with a multi-field index is a view into the original array, as follows: ``` >>> a = np.zeros(3, dtype=[('a', 'i4'), ('b', 'i4'), ('c', 'f4')]) >>> a[['a', 'c']] array([(0, 0.), (0, 0.), (0, 0.)], dtype={'names': ['a', 'c'], 'formats': ['<i4', '<f4'], 'offsets': [0, 8], 'itemsize': 12}) ``` Assignment to the view modifies the original array. The view’s fields will be in the order they were indexed. Note that unlike for single-field indexing, the dtype of the view has the same itemsize as the original array, and has fields at the same offsets as in the original array, and unindexed fields are merely missing. 
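These properties of the multi-field view can be verified directly; a short sketch:

```python
import numpy as np

a = np.zeros(3, dtype=[('a', 'i4'), ('b', 'i4'), ('c', 'f4')])
v = a[['a', 'c']]

# The view keeps the parent's itemsize and field offsets; 'b' is simply absent
assert v.dtype.itemsize == a.dtype.itemsize
assert v.dtype.fields['c'][1] == a.dtype.fields['c'][1]

# Since it is a view, assignment writes through to the original array
v['a'] = 7
print(a['a'])
```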
Warning In Numpy 1.15, indexing an array with a multi-field index returned a copy of the result above, but with fields packed together in memory as if passed through [`numpy.lib.recfunctions.repack_fields`](#numpy.lib.recfunctions.repack_fields "numpy.lib.recfunctions.repack_fields"). The new behavior as of Numpy 1.16 leads to extra “padding” bytes at the location of unindexed fields compared to 1.15. You will need to update any code which depends on the data having a “packed” layout. For instance code such as: ``` >>> a[['a', 'c']].view('i8') # Fails in Numpy 1.16 Traceback (most recent call last): File "<stdin>", line 1, in <module> ValueError: When changing to a smaller dtype, its size must be a divisor of the size of original dtype ``` will need to be changed. This code has raised a `FutureWarning` since Numpy 1.12, and similar code has raised `FutureWarning` since 1.7. In 1.16 a number of functions have been introduced in the [`numpy.lib.recfunctions`](#module-numpy.lib.recfunctions "numpy.lib.recfunctions") module to help users account for this change. These are [`numpy.lib.recfunctions.repack_fields`](#numpy.lib.recfunctions.repack_fields "numpy.lib.recfunctions.repack_fields"), [`numpy.lib.recfunctions.structured_to_unstructured`](#numpy.lib.recfunctions.structured_to_unstructured "numpy.lib.recfunctions.structured_to_unstructured"), [`numpy.lib.recfunctions.unstructured_to_structured`](#numpy.lib.recfunctions.unstructured_to_structured "numpy.lib.recfunctions.unstructured_to_structured"), [`numpy.lib.recfunctions.apply_along_fields`](#numpy.lib.recfunctions.apply_along_fields "numpy.lib.recfunctions.apply_along_fields"), [`numpy.lib.recfunctions.assign_fields_by_name`](#numpy.lib.recfunctions.assign_fields_by_name "numpy.lib.recfunctions.assign_fields_by_name"), and [`numpy.lib.recfunctions.require_fields`](#numpy.lib.recfunctions.require_fields "numpy.lib.recfunctions.require_fields"). 

The function [`numpy.lib.recfunctions.repack_fields`](#numpy.lib.recfunctions.repack_fields "numpy.lib.recfunctions.repack_fields") can always be used to reproduce the old behavior, as it will return a packed copy of the structured array. The code above, for example, can be replaced with: ``` >>> from numpy.lib.recfunctions import repack_fields >>> repack_fields(a[['a', 'c']]).view('i8') # supported in 1.16 array([0, 0, 0]) ``` Furthermore, numpy now provides a new function [`numpy.lib.recfunctions.structured_to_unstructured`](#numpy.lib.recfunctions.structured_to_unstructured "numpy.lib.recfunctions.structured_to_unstructured") which is a safer and more efficient alternative for users who wish to convert structured arrays to unstructured arrays, as the view above is often intended to do. This function allows safe conversion to an unstructured type taking into account padding, often avoids a copy, and also casts the datatypes as needed, unlike the view. Code such as: ``` >>> b = np.zeros(3, dtype=[('x', 'f4'), ('y', 'f4'), ('z', 'f4')]) >>> b[['x', 'z']].view('f4') array([0., 0., 0., 0., 0., 0., 0., 0., 0.], dtype=float32) ``` can be made safer by replacing with: ``` >>> from numpy.lib.recfunctions import structured_to_unstructured >>> structured_to_unstructured(b[['x', 'z']]) array([[0., 0.], [0., 0.], [0., 0.]], dtype=float32) ``` Assignment to an array with a multi-field index modifies the original array: ``` >>> a[['a', 'c']] = (2, 3) >>> a array([(2, 0, 3.), (2, 0, 3.), (2, 0, 3.)], dtype=[('a', '<i4'), ('b', '<i4'), ('c', '<f4')]) ``` This obeys the structured array assignment rules described above. 
For example, this means that one can swap the values of two fields using appropriate multi-field indexes: ``` >>> a[['a', 'c']] = a[['c', 'a']] ``` #### Indexing with an Integer to get a Structured Scalar Indexing a single element of a structured array (with an integer index) returns a structured scalar: ``` >>> x = np.array([(1, 2., 3.)], dtype='i, f, f') >>> scalar = x[0] >>> scalar (1, 2., 3.) >>> type(scalar) <class 'numpy.void'> ``` Unlike other numpy scalars, structured scalars are mutable and act like views into the original array, such that modifying the scalar will modify the original array. Structured scalars also support access and assignment by field name: ``` >>> x = np.array([(1, 2), (3, 4)], dtype=[('foo', 'i8'), ('bar', 'f4')]) >>> s = x[0] >>> s['bar'] = 100 >>> x array([(1, 100.), (3, 4.)], dtype=[('foo', '<i8'), ('bar', '<f4')]) ``` Similarly to tuples, structured scalars can also be indexed with an integer: ``` >>> scalar = np.array([(1, 2., 3.)], dtype='i, f, f')[0] >>> scalar[0] 1 >>> scalar[1] = 4 ``` Thus, tuples might be thought of as the native Python equivalent to numpy’s structured types, much like native python integers are the equivalent to numpy’s integer types. Structured scalars may be converted to a tuple by calling [`numpy.ndarray.item`](../reference/generated/numpy.ndarray.item#numpy.ndarray.item "numpy.ndarray.item"): ``` >>> scalar.item(), type(scalar.item()) ((1, 4.0, 3.0), <class 'tuple'>) ``` ### Viewing Structured Arrays Containing Objects In order to prevent clobbering object pointers in fields of [`object`](https://docs.python.org/3/library/functions.html#object "(in Python v3.10)") type, numpy currently does not allow views of structured arrays containing objects. 
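A minimal sketch of that restriction (the exact error message may differ between numpy versions):

```python
import numpy as np

obj = np.zeros(2, dtype=[('a', 'i8'), ('b', object)])
try:
    obj.view('i8, i8')  # would reinterpret the raw object-pointer bytes
except TypeError as err:
    print('view refused:', err)
```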
### Structure Comparison and Promotion If the dtypes of two void structured arrays are equal, testing the equality of the arrays will result in a boolean array with the dimensions of the original arrays, with elements set to `True` where all fields of the corresponding structures are equal: ``` >>> a = np.array([(1, 1), (2, 2)], dtype=[('a', 'i4'), ('b', 'i4')]) >>> b = np.array([(1, 1), (2, 3)], dtype=[('a', 'i4'), ('b', 'i4')]) >>> a == b array([True, False]) ``` NumPy will promote individual field datatypes to perform the comparison. So the following is also valid (note the `'f4'` dtype for the `'a'` field): ``` >>> b = np.array([(1.0, 1), (2.5, 2)], dtype=[("a", "f4"), ("b", "i4")]) >>> a == b array([True, False]) ``` To compare two structured arrays, it must be possible to promote them to a common dtype as returned by [`numpy.result_type`](../reference/generated/numpy.result_type#numpy.result_type "numpy.result_type") and `np.promote_types`. This enforces that the number of fields, the field names, and the field titles must match precisely. When promotion is not possible, for example due to mismatching field names, NumPy will raise an error. Promotion between two structured dtypes results in a canonical dtype that ensures native byte-order for all fields: ``` >>> np.result_type(np.dtype("i,>i")) dtype([('f0', '<i4'), ('f1', '<i4')]) >>> np.result_type(np.dtype("i,>i"), np.dtype("i,i")) dtype([('f0', '<i4'), ('f1', '<i4')]) ``` The resulting dtype from promotion is also guaranteed to be packed, meaning that all fields are ordered contiguously and any unnecessary padding is removed: ``` >>> dt = np.dtype("i1,V3,i4,V1")[["f0", "f2"]] >>> dt dtype({'names':['f0','f2'], 'formats':['i1','<i4'], 'offsets':[0,4], 'itemsize':9}) >>> np.result_type(dt) dtype([('f0', 'i1'), ('f2', '<i4')]) ``` Note that the result prints without `offsets` or `itemsize` indicating no additional padding. 
If a structured dtype is created with `align=True` ensuring that `dtype.isalignedstruct` is true, this property is preserved: ``` >>> dt = np.dtype("i1,V3,i4,V1", align=True)[["f0", "f2"]] >>> dt dtype({'names':['f0','f2'], 'formats':['i1','<i4'], 'offsets':[0,4], 'itemsize':12}, align=True) >>> np.result_type(dt) dtype([('f0', 'i1'), ('f2', '<i4')], align=True) >>> np.result_type(dt).isalignedstruct True ``` When promoting multiple dtypes, the result is aligned if any of the inputs is: ``` >>> np.result_type(np.dtype("i,i"), np.dtype("i,i", align=True)) dtype([('f0', '<i4'), ('f1', '<i4')], align=True) ``` The `<` and `>` operators always return `False` when comparing void structured arrays, and arithmetic and bitwise operations are not supported. Changed in version 1.23: Before NumPy 1.23, a warning was given and `False` returned when promotion to a common dtype failed. Further, promotion was much more restrictive: It would reject the mixed float/integer comparison example above. Record Arrays ------------- As an optional convenience numpy provides an ndarray subclass, [`numpy.recarray`](../reference/generated/numpy.recarray#numpy.recarray "numpy.recarray") that allows access to fields of structured arrays by attribute instead of only by index. Record arrays use a special datatype, [`numpy.record`](../reference/generated/numpy.record#numpy.record "numpy.record"), that allows field access by attribute on the structured scalars obtained from the array. The `numpy.rec` module provides functions for creating recarrays from various objects. Additional helper functions for creating and manipulating structured arrays can be found in [`numpy.lib.recfunctions`](#module-numpy.lib.recfunctions "numpy.lib.recfunctions"). The simplest way to create a record array is with [`numpy.rec.array`](../reference/generated/numpy.core.records.array#numpy.core.records.array "numpy.core.records.array"): ``` >>> recordarr = np.rec.array([(1, 2., 'Hello'), (2, 3., "World")], ... 
dtype=[('foo', 'i4'),('bar', 'f4'), ('baz', 'S10')]) >>> recordarr.bar array([2., 3.], dtype=float32) >>> recordarr[1:2] rec.array([(2, 3., b'World')], dtype=[('foo', '<i4'), ('bar', '<f4'), ('baz', 'S10')]) >>> recordarr[1:2].foo array([2], dtype=int32) >>> recordarr.foo[1:2] array([2], dtype=int32) >>> recordarr[1].baz b'World' ``` [`numpy.rec.array`](../reference/generated/numpy.core.records.array#numpy.core.records.array "numpy.core.records.array") can convert a wide variety of arguments into record arrays, including structured arrays: ``` >>> arr = np.array([(1, 2., 'Hello'), (2, 3., "World")], ... dtype=[('foo', 'i4'), ('bar', 'f4'), ('baz', 'S10')]) >>> recordarr = np.rec.array(arr) ``` The `numpy.rec` module provides a number of other convenience functions for creating record arrays, see [record array creation routines](../reference/routines.array-creation#routines-array-creation-rec). A record array representation of a structured array can be obtained using the appropriate [view](https://numpy.org/doc/1.23/user/numpy-ndarray-view): ``` >>> arr = np.array([(1, 2., 'Hello'), (2, 3., "World")], ... dtype=[('foo', 'i4'),('bar', 'f4'), ('baz', 'a10')]) >>> recordarr = arr.view(dtype=np.dtype((np.record, arr.dtype)), ... type=np.recarray) ``` For convenience, viewing an ndarray as type [`numpy.recarray`](../reference/generated/numpy.recarray#numpy.recarray "numpy.recarray") will automatically convert to [`numpy.record`](../reference/generated/numpy.record#numpy.record "numpy.record") datatype, so the dtype can be left out of the view: ``` >>> recordarr = arr.view(np.recarray) >>> recordarr.dtype dtype((numpy.record, [('foo', '<i4'), ('bar', '<f4'), ('baz', 'S10')])) ``` To get back to a plain ndarray both the dtype and type must be reset. 
The following view does so, taking into account the unusual case that the recordarr was not a structured type: ``` >>> arr2 = recordarr.view(recordarr.dtype.fields or recordarr.dtype, np.ndarray) ``` Record array fields accessed by index or by attribute are returned as a record array if the field has a structured type but as a plain ndarray otherwise. ``` >>> recordarr = np.rec.array([('Hello', (1, 2)), ("World", (3, 4))], ... dtype=[('foo', 'S6'),('bar', [('A', int), ('B', int)])]) >>> type(recordarr.foo) <class 'numpy.ndarray'> >>> type(recordarr.bar) <class 'numpy.recarray'> ``` Note that if a field has the same name as an ndarray attribute, the ndarray attribute takes precedence. Such fields will be inaccessible by attribute but will still be accessible by index. ### Recarray Helper Functions Collection of utilities to manipulate structured arrays. Most of these functions were initially implemented by <NAME> for matplotlib. They have been rewritten and extended for convenience. numpy.lib.recfunctions.append_fields(*base*, *names*, *data*, *dtypes=None*, *fill_value=- 1*, *usemask=True*, *asrecarray=False*)[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/lib/recfunctions.py#L654-L722) Add new fields to an existing array. The names of the fields are given with the `names` arguments, the corresponding values with the `data` arguments. If a single field is appended, `names`, `data` and `dtypes` do not have to be lists but just values. Parameters **base**array Input array to extend. **names**string, sequence String or sequence of strings corresponding to the names of the new fields. **data**array or sequence of arrays Array or sequence of arrays storing the fields to add to the base. **dtypes**sequence of datatypes, optional Datatype or sequence of datatypes. If None, the datatypes are estimated from the `data`. **fill_value**{float}, optional Filling value used to pad missing data on the shorter arrays. 
**usemask**{False, True}, optional Whether to return a masked array or not. **asrecarray**{False, True}, optional Whether to return a recarray (MaskedRecords) or not. numpy.lib.recfunctions.apply_along_fields(*func*, *arr*)[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/lib/recfunctions.py#L1099-L1140) Apply function ‘func’ as a reduction across fields of a structured array. This is similar to `apply_along_axis`, but treats the fields of a structured array as an extra axis. The fields are all first cast to a common type following the type-promotion rules from [`numpy.result_type`](../reference/generated/numpy.result_type#numpy.result_type "numpy.result_type") applied to the field’s dtypes. Parameters **func**function Function to apply on the “field” dimension. This function must support an `axis` argument, like np.mean, np.sum, etc. **arr**ndarray Structured array for which to apply func. Returns **out**ndarray Result of the reduction operation. #### Examples ``` >>> from numpy.lib import recfunctions as rfn >>> b = np.array([(1, 2, 5), (4, 5, 7), (7, 8 ,11), (10, 11, 12)], ... dtype=[('x', 'i4'), ('y', 'f4'), ('z', 'f8')]) >>> rfn.apply_along_fields(np.mean, b) array([ 2.66666667, 5.33333333, 8.66666667, 11. ]) >>> rfn.apply_along_fields(np.mean, b[['x', 'z']]) array([ 3. , 5.5, 9. , 11. ]) ``` numpy.lib.recfunctions.assign_fields_by_name(*dst*, *src*, *zero_unassigned=True*)[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/lib/recfunctions.py#L1145-L1181) Assigns values from one structured array to another by field name. Normally in numpy >= 1.14, assignment of one structured array to another copies fields “by position”, meaning that the first field from the src is copied to the first field of the dst, and so on, regardless of field name. This function instead copies “by field name”, such that fields in the dst are assigned from the identically named field in the src. This applies recursively for nested structures. 
This is how structure assignment worked from numpy 1.6 through 1.13. Parameters **dst**ndarray **src**ndarray The source and destination arrays during assignment. **zero_unassigned**bool, optional If True, fields in the dst for which there was no matching field in the src are filled with the value 0 (zero). This was the behavior of numpy <= 1.13. If False, those fields are not modified. numpy.lib.recfunctions.drop_fields(*base*, *drop_names*, *usemask=True*, *asrecarray=False*)[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/lib/recfunctions.py#L502-L564) Return a new array with fields in `drop_names` dropped. Nested fields are supported. Changed in version 1.18.0: [`drop_fields`](#numpy.lib.recfunctions.drop_fields "numpy.lib.recfunctions.drop_fields") returns an array with 0 fields if all fields are dropped, rather than returning `None` as it did previously. Parameters **base**array Input array **drop_names**string or sequence String or sequence of strings corresponding to the names of the fields to drop. **usemask**{False, True}, optional Whether to return a masked array or not. **asrecarray**{False, True}, optional Whether to return a recarray or a mrecarray (`asrecarray=True`) or a plain ndarray or masked array with flexible dtype. The default is False. #### Examples ``` >>> from numpy.lib import recfunctions as rfn >>> a = np.array([(1, (2, 3.0)), (4, (5, 6.0))], ... 
dtype=[('a', np.int64), ('b', [('ba', np.double), ('bb', np.int64)])]) >>> rfn.drop_fields(a, 'a') array([((2., 3),), ((5., 6),)], dtype=[('b', [('ba', '<f8'), ('bb', '<i8')])]) >>> rfn.drop_fields(a, 'ba') array([(1, (3,)), (4, (6,))], dtype=[('a', '<i8'), ('b', [('bb', '<i8')])]) >>> rfn.drop_fields(a, ['ba', 'bb']) array([(1,), (4,)], dtype=[('a', '<i8')]) ``` numpy.lib.recfunctions.find_duplicates(*a*, *key=None*, *ignoremask=True*, *return_index=False*)[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/lib/recfunctions.py#L1328-L1383) Find the duplicates in a structured array along a given key Parameters **a**array-like Input array **key**{string, None}, optional Name of the fields along which to check the duplicates. If None, the search is performed by records **ignoremask**{True, False}, optional Whether masked data should be discarded or considered as duplicates. **return_index**{False, True}, optional Whether to return the indices of the duplicated values. #### Examples ``` >>> from numpy.lib import recfunctions as rfn >>> ndtype = [('a', int)] >>> a = np.ma.array([1, 1, 1, 2, 2, 3, 3], ... mask=[0, 0, 1, 0, 0, 0, 1]).view(ndtype) >>> rfn.find_duplicates(a, ignoremask=True, return_index=True) (masked_array(data=[(1,), (1,), (2,), (2,)], mask=[(False,), (False,), (False,), (False,)], fill_value=(999999,), dtype=[('a', '<i8')]), array([0, 1, 3, 4])) ``` numpy.lib.recfunctions.flatten_descr(*ndtype*)[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/lib/recfunctions.py#L170-L193) Flatten a structured data-type description. 
#### Examples ``` >>> from numpy.lib import recfunctions as rfn >>> ndtype = np.dtype([('a', '<i4'), ('b', [('ba', '<f8'), ('bb', '<i4')])]) >>> rfn.flatten_descr(ndtype) (('a', dtype('int32')), ('ba', dtype('float64')), ('bb', dtype('int32'))) ``` numpy.lib.recfunctions.get_fieldstructure(*adtype*, *lastname=None*, *parents=None*)[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/lib/recfunctions.py#L226-L270) Returns a dictionary with fields indexing lists of their parent fields. This function is used to simplify access to fields nested in other fields. Parameters **adtype**np.dtype Input datatype **lastname**optional Last processed field name (used internally during recursion). **parents**dictionary Dictionary of parent fields (used internally during recursion). #### Examples ``` >>> from numpy.lib import recfunctions as rfn >>> ndtype = np.dtype([('A', int), ... ('B', [('BA', int), ... ('BB', [('BBA', int), ('BBB', int)])])]) >>> rfn.get_fieldstructure(ndtype) ... # XXX: possible regression, order of BBA and BBB is swapped {'A': [], 'B': [], 'BA': ['B'], 'BB': ['B'], 'BBA': ['B', 'BB'], 'BBB': ['B', 'BB']} ``` numpy.lib.recfunctions.get_names(*adtype*)[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/lib/recfunctions.py#L106-L135) Returns the field names of the input datatype as a tuple. Input datatype must have fields, otherwise an error is raised. Parameters **adtype**dtype Input datatype #### Examples ``` >>> from numpy.lib import recfunctions as rfn >>> rfn.get_names(np.empty((1,), dtype=[('A', int)]).dtype) ('A',) >>> rfn.get_names(np.empty((1,), dtype=[('A',int), ('B', float)]).dtype) ('A', 'B') >>> adtype = np.dtype([('a', int), ('b', [('ba', int), ('bb', int)])]) >>> rfn.get_names(adtype) ('a', ('b', ('ba', 'bb'))) ``` numpy.lib.recfunctions.get_names_flat(*adtype*)[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/lib/recfunctions.py#L138-L167) Returns the field names of the input datatype as a tuple. 
Input datatype must have fields, otherwise an error is raised. Nested structures are flattened beforehand. Parameters **adtype**dtype Input datatype #### Examples ``` >>> from numpy.lib import recfunctions as rfn >>> rfn.get_names_flat(np.empty((1,), dtype=[('A', int)]).dtype) is None False >>> rfn.get_names_flat(np.empty((1,), dtype=[('A',int), ('B', str)]).dtype) ('A', 'B') >>> adtype = np.dtype([('a', int), ('b', [('ba', int), ('bb', int)])]) >>> rfn.get_names_flat(adtype) ('a', 'b', 'ba', 'bb') ``` numpy.lib.recfunctions.join_by(*key*, *r1*, *r2*, *jointype='inner'*, *r1postfix='1'*, *r2postfix='2'*, *defaults=None*, *usemask=True*, *asrecarray=False*)[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/lib/recfunctions.py#L1392-L1569) Join arrays `r1` and `r2` on key `key`. The key should be either a string or a sequence of strings corresponding to the fields used to join the array. An exception is raised if the `key` field cannot be found in the two input arrays. Neither `r1` nor `r2` should have any duplicates along `key`: the presence of duplicates will make the output quite unreliable. Note that duplicates are not looked for by the algorithm. Parameters **key**{string, sequence} A string or a sequence of strings corresponding to the fields used for comparison. **r1, r2**arrays Structured arrays. **jointype**{‘inner’, ‘outer’, ‘leftouter’}, optional If ‘inner’, returns the elements common to both r1 and r2. If ‘outer’, returns the common elements as well as the elements of r1 not in r2 and the elements of r2 not in r1. If ‘leftouter’, returns the common elements and the elements of r1 not in r2. **r1postfix**string, optional String appended to the names of the fields of r1 that are present in r2 but absent from the key. **r2postfix**string, optional String appended to the names of the fields of r2 that are present in r1 but absent from the key. **defaults**{dictionary}, optional Dictionary mapping field names to the corresponding default values. 
**usemask**{True, False}, optional Whether to return a MaskedArray (or MaskedRecords if `asrecarray==True`) or an ndarray. **asrecarray**{False, True}, optional Whether to return a recarray (or MaskedRecords if `usemask==True`) or just a flexible-type ndarray. #### Notes * The output is sorted along the key. * A temporary array is formed by dropping the fields not in the key for the two arrays and concatenating the result. This array is then sorted, and the common entries selected. The output is constructed by filling the fields with the selected entries. Matching is not preserved if there are duplicates.
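The page gives no example for `join_by`; here is a minimal sketch. The arrays and the field names `key`, `a`, and `b` are our own illustration, not from the original documentation:

```python
import numpy as np
from numpy.lib import recfunctions as rfn

# Two structured arrays that share the 'key' field and have no duplicate keys.
r1 = np.array([(1, 10.0), (2, 20.0)], dtype=[('key', np.int64), ('a', np.float64)])
r2 = np.array([(2, 200.0), (3, 300.0)], dtype=[('key', np.int64), ('b', np.float64)])

# An inner join keeps only the key values present in both arrays (key == 2 here);
# the output carries the key field first, then the remaining fields of r1 and r2.
joined = rfn.join_by('key', r1, r2, jointype='inner', usemask=False)
print(joined)  # [(2, 20., 200.)]
```

With `jointype='outer'` the unmatched rows from both sides would also appear, with missing fields masked (or filled from `defaults`).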
 numpy.lib.recfunctions.merge_arrays(*seqarrays*, *fill_value=-1*, *flatten=False*, *usemask=False*, *asrecarray=False*)[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/lib/recfunctions.py#L362-L495) Merge arrays field by field. Parameters **seqarrays**sequence of ndarrays Sequence of arrays **fill_value**{float}, optional Filling value used to pad missing data on the shorter arrays. **flatten**{False, True}, optional Whether to collapse nested fields. **usemask**{False, True}, optional Whether to return a masked array or not. **asrecarray**{False, True}, optional Whether to return a recarray (MaskedRecords) or not. #### Notes * Without a mask, the missing value will be filled with something, depending on its corresponding type: + `-1` for integers + `-1.0` for floating point numbers + `'-'` for characters + `'-1'` for strings + `True` for boolean values * XXX: I just obtained these values empirically #### Examples ``` >>> from numpy.lib import recfunctions as rfn >>> rfn.merge_arrays((np.array([1, 2]), np.array([10., 20., 30.]))) array([( 1, 10.), ( 2, 20.), (-1, 30.)], dtype=[('f0', '<i8'), ('f1', '<f8')]) ``` ``` >>> rfn.merge_arrays((np.array([1, 2], dtype=np.int64), ... np.array([10., 20., 30.])), usemask=False) array([(1, 10.0), (2, 20.0), (-1, 30.0)], dtype=[('f0', '<i8'), ('f1', '<f8')]) >>> rfn.merge_arrays((np.array([1, 2]).view([('a', np.int64)]), ... np.array([10., 20., 30.])), ... usemask=False, asrecarray=True) rec.array([( 1, 10.), ( 2, 20.), (-1, 30.)], dtype=[('a', '<i8'), ('f1', '<f8')]) ``` numpy.lib.recfunctions.rec_append_fields(*base*, *names*, *data*, *dtypes=None*)[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/lib/recfunctions.py#L730-L762) Add new fields to an existing array. The names of the fields are given with the `names` argument, the corresponding values with the `data` argument. If a single field is appended, `names`, `data` and `dtypes` do not have to be lists but just values.
Parameters **base**array Input array to extend. **names**string, sequence String or sequence of strings corresponding to the names of the new fields. **data**array or sequence of arrays Array or sequence of arrays storing the fields to add to the base. **dtypes**sequence of datatypes, optional Datatype or sequence of datatypes. If None, the datatypes are estimated from the `data`. Returns **appended_array**np.recarray See also [`append_fields`](#numpy.lib.recfunctions.append_fields "numpy.lib.recfunctions.append_fields") numpy.lib.recfunctions.rec_drop_fields(*base*, *drop_names*)[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/lib/recfunctions.py#L596-L601) Returns a new numpy.recarray with fields in `drop_names` dropped. numpy.lib.recfunctions.rec_join(*key*, *r1*, *r2*, *jointype='inner'*, *r1postfix='1'*, *r2postfix='2'*, *defaults=None*)[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/lib/recfunctions.py#L1578-L1591) Join arrays `r1` and `r2` on keys. Alternative to join_by, that always returns a np.recarray. See also [`join_by`](#numpy.lib.recfunctions.join_by "numpy.lib.recfunctions.join_by") equivalent function numpy.lib.recfunctions.recursive_fill_fields(*input*, *output*)[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/lib/recfunctions.py#L36-L72) Fills fields from output with fields from input, with support for nested structures. Parameters **input**ndarray Input array. **output**ndarray Output array. 
#### Notes * `output` should be at least the same size as `input` #### Examples ``` >>> from numpy.lib import recfunctions as rfn >>> a = np.array([(1, 10.), (2, 20.)], dtype=[('A', np.int64), ('B', np.float64)]) >>> b = np.zeros((3,), dtype=a.dtype) >>> rfn.recursive_fill_fields(a, b) array([(1, 10.), (2, 20.), (0, 0.)], dtype=[('A', '<i8'), ('B', '<f8')]) ``` numpy.lib.recfunctions.rename_fields(*base*, *namemapper*)[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/lib/recfunctions.py#L608-L645) Rename the fields from a flexible-datatype ndarray or recarray. Nested fields are supported. Parameters **base**ndarray Input array whose fields must be modified. **namemapper**dictionary Dictionary mapping old field names to their new version. #### Examples ``` >>> from numpy.lib import recfunctions as rfn >>> a = np.array([(1, (2, [3.0, 30.])), (4, (5, [6.0, 60.]))], ... dtype=[('a', int),('b', [('ba', float), ('bb', (float, 2))])]) >>> rfn.rename_fields(a, {'a':'A', 'bb':'BB'}) array([(1, (2., [ 3., 30.])), (4, (5., [ 6., 60.]))], dtype=[('A', '<i8'), ('b', [('ba', '<f8'), ('BB', '<f8', (2,))])]) ``` numpy.lib.recfunctions.repack_fields(*a*, *align=False*, *recurse=False*)[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/lib/recfunctions.py#L769-L850) Re-pack the fields of a structured array or dtype in memory. The memory layout of structured datatypes allows fields at arbitrary byte offsets. This means the fields can be separated by padding bytes, their offsets can be non-monotonically increasing, and they can overlap. This method removes any overlaps and reorders the fields in memory so they have increasing byte offsets, and adds or removes padding bytes depending on the `align` option, which behaves like the `align` option to [`numpy.dtype`](../reference/generated/numpy.dtype#numpy.dtype "numpy.dtype"). 
If `align=False`, this method produces a “packed” memory layout in which each field starts at the byte where the previous field ended, and any padding bytes are removed. If `align=True`, this method produces an “aligned” memory layout in which each field’s offset is a multiple of its alignment, and the total itemsize is a multiple of the largest alignment, by adding padding bytes as needed. Parameters **a**ndarray or dtype array or dtype for which to repack the fields. **align**boolean If true, use an “aligned” memory layout, otherwise use a “packed” layout. **recurse**boolean If True, also repack nested structures. Returns **repacked**ndarray or dtype Copy of `a` with fields repacked, or `a` itself if no repacking was needed. #### Examples ``` >>> from numpy.lib import recfunctions as rfn >>> def print_offsets(d): ... print("offsets:", [d.fields[name][1] for name in d.names]) ... print("itemsize:", d.itemsize) ... >>> dt = np.dtype('u1, <i8, <f8', align=True) >>> dt dtype({'names': ['f0', 'f1', 'f2'], 'formats': ['u1', '<i8', '<f8'], 'offsets': [0, 8, 16], 'itemsize': 24}, align=True) >>> print_offsets(dt) offsets: [0, 8, 16] itemsize: 24 >>> packed_dt = rfn.repack_fields(dt) >>> packed_dt dtype([('f0', 'u1'), ('f1', '<i8'), ('f2', '<f8')]) >>> print_offsets(packed_dt) offsets: [0, 1, 9] itemsize: 17 ``` numpy.lib.recfunctions.require_fields(*array*, *required_dtype*)[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/lib/recfunctions.py#L1186-L1227) Casts a structured array to a new dtype using assignment by field-name. This function assigns from the old to the new array by name, so the value of a field in the output array is the value of the field with the same name in the source array. This has the effect of creating a new ndarray containing only the fields “required” by the required_dtype. If a field name in the required_dtype does not exist in the input array, that field is created and set to 0 in the output array.
Parameters **a**ndarray array to cast **required_dtype**dtype datatype for output array Returns **out**ndarray array with the new dtype, with field values copied from the fields in the input array with the same name #### Examples ``` >>> from numpy.lib import recfunctions as rfn >>> a = np.ones(4, dtype=[('a', 'i4'), ('b', 'f8'), ('c', 'u1')]) >>> rfn.require_fields(a, [('b', 'f4'), ('c', 'u1')]) array([(1., 1), (1., 1), (1., 1), (1., 1)], dtype=[('b', '<f4'), ('c', 'u1')]) >>> rfn.require_fields(a, [('b', 'f4'), ('newf', 'u1')]) array([(1., 0), (1., 0), (1., 0), (1., 0)], dtype=[('b', '<f4'), ('newf', 'u1')]) ``` numpy.lib.recfunctions.stack_arrays(*arrays*, *defaults=None*, *usemask=True*, *asrecarray=False*, *autoconvert=False*)[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/lib/recfunctions.py#L1235-L1320) Superposes arrays field by field Parameters **arrays**array or sequence Sequence of input arrays. **defaults**dictionary, optional Dictionary mapping field names to the corresponding default values. **usemask**{True, False}, optional Whether to return a MaskedArray (or MaskedRecords if `asrecarray==True`) or an ndarray. **asrecarray**{False, True}, optional Whether to return a recarray (or MaskedRecords if `usemask==True`) or just a flexible-type ndarray. **autoconvert**{False, True}, optional Whether to automatically cast the type of the field to the maximum. #### Examples ``` >>> from numpy.lib import recfunctions as rfn >>> x = np.array([1, 2,]) >>> rfn.stack_arrays(x) is x True >>> z = np.array([('A', 1), ('B', 2)], dtype=[('A', '|S3'), ('B', float)]) >>> zz = np.array([('a', 10., 100.), ('b', 20., 200.), ('c', 30., 300.)], ... 
dtype=[('A', '|S3'), ('B', np.double), ('C', np.double)]) >>> test = rfn.stack_arrays((z,zz)) >>> test masked_array(data=[(b'A', 1.0, --), (b'B', 2.0, --), (b'a', 10.0, 100.0), (b'b', 20.0, 200.0), (b'c', 30.0, 300.0)], mask=[(False, False, True), (False, False, True), (False, False, False), (False, False, False), (False, False, False)], fill_value=(b'N/A', 1.e+20, 1.e+20), dtype=[('A', 'S3'), ('B', '<f8'), ('C', '<f8')]) ``` numpy.lib.recfunctions.structured_to_unstructured(*arr*, *dtype=None*, *copy=False*, *casting='unsafe'*)[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/lib/recfunctions.py#L894-L984) Converts an n-D structured array into an (n+1)-D unstructured array. The new array will have a new last dimension equal in size to the number of field-elements of the input array. If not supplied, the output datatype is determined from the numpy type promotion rules applied to all the field datatypes. Nested fields, as well as each element of any subarray fields, all count as single field-elements. Parameters **arr**ndarray Structured array or dtype to convert. Cannot contain object datatype. **dtype**dtype, optional The dtype of the output unstructured array. **copy**bool, optional See copy argument to [`numpy.ndarray.astype`](../reference/generated/numpy.ndarray.astype#numpy.ndarray.astype "numpy.ndarray.astype"). If true, always return a copy. If false, and `dtype` requirements are satisfied, a view is returned. **casting**{‘no’, ‘equiv’, ‘safe’, ‘same_kind’, ‘unsafe’}, optional See casting argument of [`numpy.ndarray.astype`](../reference/generated/numpy.ndarray.astype#numpy.ndarray.astype "numpy.ndarray.astype"). Controls what kind of data casting may occur. Returns **unstructured**ndarray Unstructured array with one more dimension.
#### Examples ``` >>> from numpy.lib import recfunctions as rfn >>> a = np.zeros(4, dtype=[('a', 'i4'), ('b', 'f4,u2'), ('c', 'f4', 2)]) >>> a array([(0, (0., 0), [0., 0.]), (0, (0., 0), [0., 0.]), (0, (0., 0), [0., 0.]), (0, (0., 0), [0., 0.])], dtype=[('a', '<i4'), ('b', [('f0', '<f4'), ('f1', '<u2')]), ('c', '<f4', (2,))]) >>> rfn.structured_to_unstructured(a) array([[0., 0., 0., 0., 0.], [0., 0., 0., 0., 0.], [0., 0., 0., 0., 0.], [0., 0., 0., 0., 0.]]) ``` ``` >>> b = np.array([(1, 2, 5), (4, 5, 7), (7, 8, 11), (10, 11, 12)], ... dtype=[('x', 'i4'), ('y', 'f4'), ('z', 'f8')]) >>> np.mean(rfn.structured_to_unstructured(b[['x', 'z']]), axis=-1) array([ 3. , 5.5, 9. , 11. ]) ``` numpy.lib.recfunctions.unstructured_to_structured(*arr*, *dtype=None*, *names=None*, *align=False*, *copy=False*, *casting='unsafe'*)[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/lib/recfunctions.py#L991-L1094) Converts an n-D unstructured array into an (n-1)-D structured array. The last dimension of the input array is converted into a structure, with the number of field-elements equal to the size of the last dimension of the input array. By default all output fields have the input array’s dtype, but an output structured dtype with an equal number of field-elements can be supplied instead. Nested fields, as well as each element of any subarray fields, all count towards the number of field-elements. Parameters **arr**ndarray Unstructured array or dtype to convert. **dtype**dtype, optional The structured dtype of the output array **names**list of strings, optional If dtype is not supplied, this specifies the field names for the output dtype, in order. The field dtypes will be the same as the input array. **align**boolean, optional Whether to create an aligned memory layout. **copy**bool, optional See copy argument to [`numpy.ndarray.astype`](../reference/generated/numpy.ndarray.astype#numpy.ndarray.astype "numpy.ndarray.astype"). If true, always return a copy.
If false, and `dtype` requirements are satisfied, a view is returned. **casting**{‘no’, ‘equiv’, ‘safe’, ‘same_kind’, ‘unsafe’}, optional See casting argument of [`numpy.ndarray.astype`](../reference/generated/numpy.ndarray.astype#numpy.ndarray.astype "numpy.ndarray.astype"). Controls what kind of data casting may occur. Returns **structured**ndarray Structured array with fewer dimensions. #### Examples ``` >>> from numpy.lib import recfunctions as rfn >>> dt = np.dtype([('a', 'i4'), ('b', 'f4,u2'), ('c', 'f4', 2)]) >>> a = np.arange(20).reshape((4,5)) >>> a array([[ 0, 1, 2, 3, 4], [ 5, 6, 7, 8, 9], [10, 11, 12, 13, 14], [15, 16, 17, 18, 19]]) >>> rfn.unstructured_to_structured(a, dt) array([( 0, ( 1., 2), [ 3., 4.]), ( 5, ( 6., 7), [ 8., 9.]), (10, (11., 12), [13., 14.]), (15, (16., 17), [18., 19.])], dtype=[('a', '<i4'), ('b', [('f0', '<f4'), ('f1', '<u2')]), ('c', '<f4', (2,))]) ``` © 2005–2022 NumPy Developers Licensed under the 3-clause BSD License. <https://numpy.org/doc/1.23/user/basics.rec.html>

Array creation
==============

See also [Array creation routines](../reference/routines.array-creation#routines-array-creation)

Introduction
------------

There are 6 general mechanisms for creating arrays:

1. Conversion from other Python structures (i.e. lists and tuples)
2. Intrinsic NumPy array creation functions (e.g. arange, ones, zeros, etc.)
3. Replicating, joining, or mutating existing arrays
4. Reading arrays from disk, either from standard or custom formats
5. Creating arrays from raw bytes through the use of strings or buffers
6. Use of special library functions (e.g., random)

You can use these methods to create ndarrays or [Structured arrays](basics.rec#structured-arrays). This document will cover general methods for ndarray creation.

1) Converting Python sequences to NumPy Arrays
----------------------------------------------

NumPy arrays can be defined using Python sequences such as lists and tuples.
Lists and tuples are defined using `[...]` and `(...)`, respectively. Lists and tuples can define ndarray creation: * a list of numbers will create a 1D array, * a list of lists will create a 2D array, * further nested lists will create higher-dimensional arrays. In general, any array object is called an **ndarray** in NumPy. ``` >>> a1D = np.array([1, 2, 3, 4]) >>> a2D = np.array([[1, 2], [3, 4]]) >>> a3D = np.array([[[1, 2], [3, 4]], [[5, 6], [7, 8]]]) ``` When you use [`numpy.array`](../reference/generated/numpy.array#numpy.array "numpy.array") to define a new array, you should consider the [dtype](basics.types) of the elements in the array, which can be specified explicitly. This feature gives you more control over the underlying data structures and how the elements are handled in C/C++ functions. If you are not careful with `dtype` assignments, you can get unwanted overflow, as such: ``` >>> a = np.array([127, 128, 129], dtype=np.int8) >>> a array([ 127, -128, -127], dtype=int8) ``` An 8-bit signed integer represents integers from -128 to 127. Assigning the `int8` array to integers outside of this range results in overflow. This feature can often be misunderstood. If you perform calculations with mismatching `dtypes`, you can get unwanted results, for example: ``` >>> a = np.array([2, 3, 4], dtype=np.uint32) >>> b = np.array([5, 6, 7], dtype=np.uint32) >>> c_unsigned32 = a - b >>> print('unsigned c:', c_unsigned32, c_unsigned32.dtype) unsigned c: [4294967293 4294967293 4294967293] uint32 >>> c_signed32 = a - b.astype(np.int32) >>> print('signed c:', c_signed32, c_signed32.dtype) signed c: [-3 -3 -3] int64 ``` Notice when you perform operations with two arrays of the same `dtype`: `uint32`, the resulting array is the same type. When you perform operations with different `dtype`, NumPy will assign a new type that satisfies all of the array elements involved in the computation; here `uint32` and `int32` can both be represented as `int64`.
The default NumPy behavior is to create arrays in either 32 or 64-bit signed integers (platform dependent and matches C int size) or double precision floating point numbers, int32/int64 and float, respectively. If you expect your integer arrays to be a specific type, then you need to specify the dtype while you create the array. 2) Intrinsic NumPy array creation functions ------------------------------------------- NumPy has over 40 built-in functions for creating arrays as laid out in the [Array creation routines](../reference/routines.array-creation#routines-array-creation). These functions can be split into roughly three categories, based on the dimension of the array they create: 1. 1D arrays 2. 2D arrays 3. ndarrays ### 1 - 1D array creation functions The 1D array creation functions e.g. [`numpy.linspace`](../reference/generated/numpy.linspace#numpy.linspace "numpy.linspace") and [`numpy.arange`](../reference/generated/numpy.arange#numpy.arange "numpy.arange") generally need at least two inputs, `start` and `stop`. [`numpy.arange`](../reference/generated/numpy.arange#numpy.arange "numpy.arange") creates arrays with regularly incrementing values. Check the documentation for complete information and examples. A few examples are shown: ``` >>> np.arange(10) array([0, 1, 2, 3, 4, 5, 6, 7, 8, 9]) >>> np.arange(2, 10, dtype=float) array([2., 3., 4., 5., 6., 7., 8., 9.]) >>> np.arange(2, 3, 0.1) array([2. , 2.1, 2.2, 2.3, 2.4, 2.5, 2.6, 2.7, 2.8, 2.9]) ``` Note: best practice for [`numpy.arange`](../reference/generated/numpy.arange#numpy.arange "numpy.arange") is to use integer start, end, and step values. There are some subtleties regarding `dtype`. In the second example, the `dtype` is defined. In the third example, the array is `dtype=float` to accommodate the step size of `0.1`. Due to roundoff error, the `stop` value is sometimes included. 
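The roundoff inclusion of `stop` can be seen directly. This is our own sketch (assuming ordinary IEEE-754 doubles); the particular start/stop/step values were chosen so that `(stop - start) / step` rounds slightly above an integer:

```python
import numpy as np

# (0.8 - 0.5) / 0.1 evaluates to 3.0000000000000004 in floating point,
# so arange allocates ceil(...) == 4 elements and the stop value appears.
x = np.arange(0.5, 0.8, 0.1)
print(len(x))  # 4, not the 3 one might expect
print(x)       # [0.5 0.6 0.7 0.8]
```

This is exactly why the guide recommends `numpy.linspace` when the endpoint and the element count matter.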
[`numpy.linspace`](../reference/generated/numpy.linspace#numpy.linspace "numpy.linspace") will create arrays with a specified number of elements, and spaced equally between the specified beginning and end values. For example: ``` >>> np.linspace(1., 4., 6) array([1. , 1.6, 2.2, 2.8, 3.4, 4. ]) ``` The advantage of this creation function is that you guarantee the number of elements and the starting and end point. The previous `arange(start, stop, step)` will not include the value `stop`. ### 2 - 2D array creation functions The 2D array creation functions e.g. [`numpy.eye`](../reference/generated/numpy.eye#numpy.eye "numpy.eye"), [`numpy.diag`](../reference/generated/numpy.diag#numpy.diag "numpy.diag"), and [`numpy.vander`](../reference/generated/numpy.vander#numpy.vander "numpy.vander") define properties of special matrices represented as 2D arrays. `np.eye(n, m)` defines a 2D identity matrix. The elements where i=j (row index and column index are equal) are 1 and the rest are 0, as such: ``` >>> np.eye(3) array([[1., 0., 0.], [0., 1., 0.], [0., 0., 1.]]) >>> np.eye(3, 5) array([[1., 0., 0., 0., 0.], [0., 1., 0., 0., 0.], [0., 0., 1., 0., 0.]]) ``` [`numpy.diag`](../reference/generated/numpy.diag#numpy.diag "numpy.diag") can define either a square 2D array with given values along the diagonal *or* if given a 2D array returns a 1D array that is only the diagonal elements. The two array creation functions can be helpful while doing linear algebra, as such: ``` >>> np.diag([1, 2, 3]) array([[1, 0, 0], [0, 2, 0], [0, 0, 3]]) >>> np.diag([1, 2, 3], 1) array([[0, 1, 0, 0], [0, 0, 2, 0], [0, 0, 0, 3], [0, 0, 0, 0]]) >>> a = np.array([[1, 2], [3, 4]]) >>> np.diag(a) array([1, 4]) ``` `vander(x, n)` defines a Vandermonde matrix as a 2D NumPy array. Each column of the Vandermonde matrix is a decreasing power of the input 1D array or list or tuple, `x` where the highest polynomial order is `n-1`. 
This array creation routine is helpful in generating linear least squares models, as such: ``` >>> np.vander(np.linspace(0, 2, 5), 2) array([[0. , 1. ], [0.5, 1. ], [1. , 1. ], [1.5, 1. ], [2. , 1. ]]) >>> np.vander([1, 2, 3, 4], 2) array([[1, 1], [2, 1], [3, 1], [4, 1]]) >>> np.vander((1, 2, 3, 4), 4) array([[ 1, 1, 1, 1], [ 8, 4, 2, 1], [27, 9, 3, 1], [64, 16, 4, 1]]) ``` ### 3 - general ndarray creation functions The ndarray creation functions e.g. [`numpy.ones`](../reference/generated/numpy.ones#numpy.ones "numpy.ones"), [`numpy.zeros`](../reference/generated/numpy.zeros#numpy.zeros "numpy.zeros"), and [`random`](../reference/random/generated/numpy.random.generator.random#numpy.random.Generator.random "numpy.random.Generator.random") define arrays based upon the desired shape. The ndarray creation functions can create arrays with any dimension by specifying how many dimensions and length along that dimension in a tuple or list. [`numpy.zeros`](../reference/generated/numpy.zeros#numpy.zeros "numpy.zeros") will create an array filled with 0 values with the specified shape. The default dtype is `float64`: ``` >>> np.zeros((2, 3)) array([[0., 0., 0.], [0., 0., 0.]]) >>> np.zeros((2, 3, 2)) array([[[0., 0.], [0., 0.], [0., 0.]], [[0., 0.], [0., 0.], [0., 0.]]]) ``` [`numpy.ones`](../reference/generated/numpy.ones#numpy.ones "numpy.ones") will create an array filled with 1 values. It is identical to `zeros` in all other respects as such: ``` >>> np.ones((2, 3)) array([[1., 1., 1.], [1., 1., 1.]]) >>> np.ones((2, 3, 2)) array([[[1., 1.], [1., 1.], [1., 1.]], [[1., 1.], [1., 1.], [1., 1.]]]) ``` The [`random`](../reference/random/generated/numpy.random.generator.random#numpy.random.Generator.random "numpy.random.Generator.random") method of the result of `default_rng` will create an array filled with random values between 0 and 1. It is included with the [`numpy.random`](../reference/random/index#module-numpy.random "numpy.random") library. 
Below, two arrays are created with shapes (2,3) and (2,3,2), respectively. The seed is set to 42 so you can reproduce these pseudorandom numbers: ``` >>> from numpy.random import default_rng >>> default_rng(42).random((2,3)) array([[0.77395605, 0.43887844, 0.85859792], [0.69736803, 0.09417735, 0.97562235]]) >>> default_rng(42).random((2,3,2)) array([[[0.77395605, 0.43887844], [0.85859792, 0.69736803], [0.09417735, 0.97562235]], [[0.7611397 , 0.78606431], [0.12811363, 0.45038594], [0.37079802, 0.92676499]]]) ``` [`numpy.indices`](../reference/generated/numpy.indices#numpy.indices "numpy.indices") will create a set of arrays (stacked as a one-higher dimensioned array), one per dimension with each representing variation in that dimension: ``` >>> np.indices((3,3)) array([[[0, 0, 0], [1, 1, 1], [2, 2, 2]], [[0, 1, 2], [0, 1, 2], [0, 1, 2]]]) ``` This is particularly useful for evaluating functions of multiple dimensions on a regular grid. 3) Replicating, joining, or mutating existing arrays ---------------------------------------------------- Once you have created arrays, you can replicate, join, or mutate those existing arrays to create new arrays. When you assign an array or its elements to a new variable, you have to explicitly [`numpy.copy`](../reference/generated/numpy.copy#numpy.copy "numpy.copy") the array, otherwise the variable is a view into the original array. Consider the following example: ``` >>> a = np.array([1, 2, 3, 4, 5, 6]) >>> b = a[:2] >>> b += 1 >>> print('a =', a, '; b =', b) a = [2 3 3 4 5 6] ; b = [2 3] ``` In this example, you did not create a new array. You created a variable, `b` that viewed the first 2 elements of `a`. When you added 1 to `b` you would get the same result by adding 1 to `a[:2]`. 
If you want to create a *new* array, use the [`numpy.copy`](../reference/generated/numpy.copy#numpy.copy "numpy.copy") array creation routine as such: ``` >>> a = np.array([1, 2, 3, 4]) >>> b = a[:2].copy() >>> b += 1 >>> print('a = ', a, 'b = ', b) a = [1 2 3 4] b = [2 3] ``` For more information and examples look at [Copies and Views](quickstart#quickstart-copies-and-views). There are a number of routines to join existing arrays e.g. [`numpy.vstack`](../reference/generated/numpy.vstack#numpy.vstack "numpy.vstack"), [`numpy.hstack`](../reference/generated/numpy.hstack#numpy.hstack "numpy.hstack"), and [`numpy.block`](../reference/generated/numpy.block#numpy.block "numpy.block"). Here is an example of joining four 2-by-2 arrays into a 4-by-4 array using `block`: ``` >>> A = np.ones((2, 2)) >>> B = np.eye(2, 2) >>> C = np.zeros((2, 2)) >>> D = np.diag((-3, -4)) >>> np.block([[A, B], [C, D]]) array([[ 1., 1., 1., 0.], [ 1., 1., 0., 1.], [ 0., 0., -3., 0.], [ 0., 0., 0., -4.]]) ``` Other routines use similar syntax to join ndarrays. Check the routine’s documentation for further examples and syntax. 4) Reading arrays from disk, either from standard or custom formats ------------------------------------------------------------------- This is the most common case of large array creation. The details depend greatly on the format of data on disk. This section gives general pointers on how to handle various formats. For more detailed examples of IO look at [How to Read and Write files](how-to-io#how-to-io). ### Standard Binary Formats Various fields have standard formats for array data. 
The following lists the ones with known Python libraries to read them and return NumPy arrays (there may be others for which it is possible to read and convert to NumPy arrays so check the last section as well) ``` HDF5: h5py FITS: Astropy ``` Examples of formats that cannot be read directly but for which it is not hard to convert are those formats supported by libraries like PIL (able to read and write many image formats such as jpg, png, etc). ### Common ASCII Formats Delimited files such as comma separated value (csv) and tab separated value (tsv) files are used for programs like Excel and LabView. Python functions can read and parse these files line-by-line. NumPy has two standard routines for importing a file with delimited data [`numpy.loadtxt`](../reference/generated/numpy.loadtxt#numpy.loadtxt "numpy.loadtxt") and [`numpy.genfromtxt`](../reference/generated/numpy.genfromtxt#numpy.genfromtxt "numpy.genfromtxt"). These functions have more involved use cases in [Reading and writing files](how-to-io). A simple example given a `simple.csv`: ``` $ cat simple.csv x, y 0, 0 1, 1 2, 4 3, 9 ``` Importing `simple.csv` is accomplished using `loadtxt`: ``` >>> np.loadtxt('simple.csv', delimiter = ',', skiprows = 1) array([[0., 0.], [1., 1.], [2., 4.], [3., 9.]]) ``` More generic ASCII files can be read using [`scipy.io`](https://docs.scipy.org/doc/scipy/reference/io.html#module-scipy.io "(in SciPy v1.8.1)") and [Pandas](https://pandas.pydata.org/). 5) Creating arrays from raw bytes through the use of strings or buffers ----------------------------------------------------------------------- There are a variety of approaches one can use. If the file has a relatively simple format then one can write a simple I/O library and use the NumPy `fromfile()` function and `.tofile()` method to read and write NumPy arrays directly (mind your byteorder though!) 
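A minimal sketch of the `tofile`/`fromfile` round trip described above (the temporary file path is our own; note that the raw file carries no dtype or shape metadata, so both must be restated on read, and an explicit byteorder in the dtype string keeps the file portable):

```python
import os
import tempfile
import numpy as np

a = np.arange(6, dtype='<f8')          # '<f8': little-endian float64, byteorder pinned
path = os.path.join(tempfile.mkdtemp(), 'raw.bin')
a.tofile(path)                         # raw bytes only -- no header, no shape, no dtype
b = np.fromfile(path, dtype='<f8')     # the dtype must be supplied again on read
print(np.array_equal(a, b))            # True
```

For data that must be self-describing, `numpy.save`/`numpy.load` (the `.npy` format) store the dtype and shape alongside the bytes.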
If a good C or C++ library exists that reads the data, one can wrap that library with a variety of techniques, though that certainly is much more work and requires significantly more advanced knowledge to interface with C or C++.

6) Use of special library functions (e.g., SciPy, Pandas, and OpenCV)
---------------------------------------------------------------------

NumPy is the fundamental library for array containers in the Python Scientific Computing stack. Many Python libraries, including SciPy, Pandas, and OpenCV, use NumPy ndarrays as the common format for data exchange. These libraries can create, operate on, and work with NumPy arrays.

© 2005–2022 NumPy Developers Licensed under the 3-clause BSD License.

I/O with NumPy
==============

* [Importing data with `genfromtxt`](basics.io.genfromtxt)
  + [Defining the input](basics.io.genfromtxt#defining-the-input)
  + [Splitting the lines into columns](basics.io.genfromtxt#splitting-the-lines-into-columns)
  + [Skipping lines and choosing columns](basics.io.genfromtxt#skipping-lines-and-choosing-columns)
  + [Choosing the data type](basics.io.genfromtxt#choosing-the-data-type)
  + [Setting the names](basics.io.genfromtxt#setting-the-names)
  + [Tweaking the conversion](basics.io.genfromtxt#tweaking-the-conversion)
  + [Shortcut functions](basics.io.genfromtxt#shortcut-functions)

Data types
==========

See also [Data type objects](../reference/arrays.dtypes#arrays-dtypes)

Array types and conversions between types
-----------------------------------------

NumPy supports a much greater variety of numerical types than Python does. This section shows which are available, and how to modify an array’s data-type.
The primitive types supported are tied closely to those in C:

| NumPy type | C type | Description |
| --- | --- | --- |
| [`numpy.bool_`](../reference/arrays.scalars#numpy.bool_ "numpy.bool_") | `bool` | Boolean (True or False) stored as a byte |
| [`numpy.byte`](../reference/arrays.scalars#numpy.byte "numpy.byte") | `signed char` | Platform-defined |
| [`numpy.ubyte`](../reference/arrays.scalars#numpy.ubyte "numpy.ubyte") | `unsigned char` | Platform-defined |
| [`numpy.short`](../reference/arrays.scalars#numpy.short "numpy.short") | `short` | Platform-defined |
| [`numpy.ushort`](../reference/arrays.scalars#numpy.ushort "numpy.ushort") | `unsigned short` | Platform-defined |
| [`numpy.intc`](../reference/arrays.scalars#numpy.intc "numpy.intc") | `int` | Platform-defined |
| [`numpy.uintc`](../reference/arrays.scalars#numpy.uintc "numpy.uintc") | `unsigned int` | Platform-defined |
| [`numpy.int_`](../reference/arrays.scalars#numpy.int_ "numpy.int_") | `long` | Platform-defined |
| [`numpy.uint`](../reference/arrays.scalars#numpy.uint "numpy.uint") | `unsigned long` | Platform-defined |
| [`numpy.longlong`](../reference/arrays.scalars#numpy.longlong "numpy.longlong") | `long long` | Platform-defined |
| [`numpy.ulonglong`](../reference/arrays.scalars#numpy.ulonglong "numpy.ulonglong") | `unsigned long long` | Platform-defined |
| [`numpy.half`](../reference/arrays.scalars#numpy.half "numpy.half") / [`numpy.float16`](../reference/arrays.scalars#numpy.float16 "numpy.float16") |  | Half precision float: sign bit, 5 bits exponent, 10 bits mantissa |
| [`numpy.single`](../reference/arrays.scalars#numpy.single "numpy.single") | `float` | Platform-defined single precision float: typically sign bit, 8 bits exponent, 23 bits mantissa |
| [`numpy.double`](../reference/arrays.scalars#numpy.double "numpy.double") | `double` | Platform-defined double precision float: typically sign bit, 11 bits exponent, 52 bits mantissa |
| [`numpy.longdouble`](../reference/arrays.scalars#numpy.longdouble "numpy.longdouble") | `long double` | Platform-defined extended-precision float |
| [`numpy.csingle`](../reference/arrays.scalars#numpy.csingle "numpy.csingle") | `float complex` | Complex number, represented by two single-precision floats (real and imaginary components) |
| [`numpy.cdouble`](../reference/arrays.scalars#numpy.cdouble "numpy.cdouble") | `double complex` | Complex number, represented by two double-precision floats (real and imaginary components) |
| [`numpy.clongdouble`](../reference/arrays.scalars#numpy.clongdouble "numpy.clongdouble") | `long double complex` | Complex number, represented by two extended-precision floats (real and imaginary components) |

Since many of these have platform-dependent definitions, a set of fixed-size aliases is provided (see [Sized aliases](../reference/arrays.scalars#sized-aliases)).

NumPy numerical types are instances of `dtype` (data-type) objects, each having unique characteristics. Once you have imported NumPy using `>>> import numpy as np` the dtypes are available as `np.bool_`, `np.float32`, etc. Advanced types, not listed above, are explored in section [Structured arrays](basics.rec#structured-arrays).

There are 5 basic numerical types representing booleans (bool), integers (int), unsigned integers (uint), floating point (float) and complex. Those with numbers in their name indicate the bitsize of the type (i.e. how many bits are needed to represent a single value in memory). Some types, such as `int` and `intp`, have differing bitsizes, depending on the platform (e.g. 32-bit vs. 64-bit machines). This should be taken into account when interfacing with low-level code (such as C or Fortran) where the raw memory is addressed.
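When exact widths matter, e.g. for binary file formats or C interop, the fixed-size aliases make the memory layout explicit. A minimal illustration (the `itemsize` values for the fixed-width aliases are guaranteed; `np.intp` varies by platform):

```python
import numpy as np

# Fixed-width aliases pin down the in-memory size regardless of platform.
a = np.array([1, 2, 3], dtype=np.int32)
b = np.array([1.0, 2.0], dtype=np.float64)

print(a.dtype.itemsize)  # 4 bytes -> 32 bits per element
print(b.dtype.itemsize)  # 8 bytes -> 64 bits per element

# np.intp is the integer type used for indexing; its width follows the
# platform's pointer size (4 bytes on 32-bit systems, 8 on 64-bit ones).
print(np.dtype(np.intp).itemsize)
```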
Data-types can be used as functions to convert Python numbers to array scalars (see the array scalar section for an explanation), Python sequences of numbers to arrays of that type, or as arguments to the dtype keyword that many numpy functions or methods accept. Some examples:

```
>>> x = np.float32(1.0)
>>> x
1.0
>>> y = np.int_([1,2,4])
>>> y
array([1, 2, 4])
>>> z = np.arange(3, dtype=np.uint8)
>>> z
array([0, 1, 2], dtype=uint8)
```

Array types can also be referred to by character codes, mostly to retain backward compatibility with older packages such as Numeric. Some documentation may still refer to these, for example:

```
>>> np.array([1, 2, 3], dtype='f')
array([1., 2., 3.], dtype=float32)
```

We recommend using dtype objects instead.

To convert the type of an array, use the .astype() method (preferred) or the type itself as a function. For example:

```
>>> z.astype(float)
array([0., 1., 2.])
>>> np.int8(z)
array([0, 1, 2], dtype=int8)
```

Note that, above, we use the *Python* float object as a dtype. NumPy knows that `int` refers to `np.int_`, `bool` means `np.bool_`, that `float` is `np.float_` and `complex` is `np.complex_`. The other data-types do not have Python equivalents.

To determine the type of an array, look at the dtype attribute:

```
>>> z.dtype
dtype('uint8')
```

dtype objects also contain information about the type, such as its bit-width and its byte-order. The data type can also be used indirectly to query properties of the type, such as whether it is an integer:

```
>>> d = np.dtype(int)
>>> d
dtype('int32')
>>> np.issubdtype(d, np.integer)
True
>>> np.issubdtype(d, np.floating)
False
```

Array Scalars
-------------

NumPy generally returns elements of arrays as array scalars (a scalar with an associated dtype).
Array scalars differ from Python scalars, but for the most part they can be used interchangeably (the primary exception is for versions of Python older than v2.x, where integer array scalars cannot act as indices for lists and tuples). There are some exceptions, such as when code requires very specific attributes of a scalar or when it checks specifically whether a value is a Python scalar. Generally, problems are easily fixed by explicitly converting array scalars to Python scalars, using the corresponding Python type function (e.g., `int`, `float`, `complex`, `str`, `unicode`).

The primary advantage of using array scalars is that they preserve the array type (Python may not have a matching scalar type available, e.g. `int16`). Therefore, the use of array scalars ensures identical behaviour between arrays and scalars, irrespective of whether the value is inside an array or not. NumPy scalars also have many of the same methods arrays do.

Overflow Errors
---------------

The fixed size of NumPy numeric types may cause overflow errors when a value requires more memory than available in the data type. For example, [`numpy.power`](../reference/generated/numpy.power#numpy.power "numpy.power") evaluates `100 ** 8` correctly for 64-bit integers, but gives 1874919424 (incorrect) for a 32-bit integer.

```
>>> np.power(100, 8, dtype=np.int64)
10000000000000000
>>> np.power(100, 8, dtype=np.int32)
1874919424
```

The behaviour of NumPy and Python integer types differs significantly for integer overflows and may confuse users expecting NumPy integers to behave similar to Python’s `int`. Unlike NumPy, the size of Python’s `int` is flexible. This means Python integers may expand to accommodate any integer and will not overflow.
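That contrast can be demonstrated directly. A small sketch (the wrapped value matches the `np.power` example above):

```python
import numpy as np

# Python's int is arbitrary-precision and never overflows:
print(100 ** 8)  # 10000000000000000

# A fixed-width NumPy integer wraps around silently instead:
a = np.array([100], dtype=np.int32)
print(a ** 8)  # [1874919424] -- 10**16 reduced modulo 2**32
```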
NumPy provides [`numpy.iinfo`](../reference/generated/numpy.iinfo#numpy.iinfo "numpy.iinfo") and [`numpy.finfo`](../reference/generated/numpy.finfo#numpy.finfo "numpy.finfo") to verify the minimum and maximum values of NumPy integer and floating point values respectively.

```
>>> np.iinfo(int)  # Bounds of the default integer on this system.
iinfo(min=-9223372036854775808, max=9223372036854775807, dtype=int64)
>>> np.iinfo(np.int32)  # Bounds of a 32-bit integer
iinfo(min=-2147483648, max=2147483647, dtype=int32)
>>> np.iinfo(np.int64)  # Bounds of a 64-bit integer
iinfo(min=-9223372036854775808, max=9223372036854775807, dtype=int64)
```

If 64-bit integers are still too small the result may be cast to a floating point number. Floating point numbers offer a larger, but inexact, range of possible values.

```
>>> np.power(100, 100, dtype=np.int64)  # Incorrect even with 64-bit int
0
>>> np.power(100, 100, dtype=np.float64)
1e+200
```

Extended Precision
------------------

Python’s floating-point numbers are usually 64-bit floating-point numbers, nearly equivalent to `np.float64`. In some unusual situations it may be useful to use floating-point numbers with more precision. Whether this is possible in numpy depends on the hardware and on the development environment: specifically, x86 machines provide hardware floating-point with 80-bit precision, and while most C compilers provide this as their `long double` type, MSVC (standard for Windows builds) makes `long double` identical to `double` (64 bits). NumPy makes the compiler’s `long double` available as `np.longdouble` (and `np.clongdouble` for the complex numbers). You can find out what your numpy provides with `np.finfo(np.longdouble)`.

NumPy does not provide a dtype with more precision than C’s `long double`; in particular, the 128-bit IEEE quad precision data type (FORTRAN’s `REAL*16`) is not available.

For efficient memory alignment, `np.longdouble` is usually stored padded with zero bits, either to 96 or 128 bits.
Which is more efficient depends on hardware and development environment; typically on 32-bit systems they are padded to 96 bits, while on 64-bit systems they are typically padded to 128 bits. `np.longdouble` is padded to the system default; `np.float96` and `np.float128` are provided for users who want specific padding. In spite of the names, `np.float96` and `np.float128` provide only as much precision as `np.longdouble`, that is, 80 bits on most x86 machines and 64 bits in standard Windows builds.

Be warned that even if `np.longdouble` offers more precision than Python `float`, it is easy to lose that extra precision, since Python often forces values to pass through `float`. For example, the `%` formatting operator requires its arguments to be converted to standard Python types, and it is therefore impossible to preserve extended precision even if many decimal places are requested. It can be useful to test your code with the value `1 + np.finfo(np.longdouble).eps`.

Byte-swapping
=============

Introduction to byte ordering and ndarrays
------------------------------------------

The `ndarray` is an object that provides a Python array interface to data in memory. It often happens that the memory that you want to view with an array is not of the same byte ordering as the computer on which you are running Python. For example, I might be working on a computer with a little-endian CPU, such as an Intel Pentium, but I have loaded some data from a file written by a computer that is big-endian. Let’s say I have loaded 4 bytes from a file written by a Sun (big-endian) computer. I know that these 4 bytes represent two 16-bit integers. On a big-endian machine, a two-byte integer is stored with the Most Significant Byte (MSB) first, and then the Least Significant Byte (LSB). Thus the bytes are, in memory order:

1. MSB integer 1
2. LSB integer 1
3.
MSB integer 2
4. LSB integer 2

Let’s say the two integers were in fact 1 and 770. Because 770 = 256 * 3 + 2, the 4 bytes in memory would contain respectively: 0, 1, 3, 2. The bytes I have loaded from the file would have these contents:

```
>>> big_end_buffer = bytearray([0,1,3,2])
>>> big_end_buffer
bytearray(b'\x00\x01\x03\x02')
```

We might want to use an `ndarray` to access these integers. In that case, we can create an array around this memory, and tell numpy that there are two integers, and that they are 16 bit and big-endian:

```
>>> import numpy as np
>>> big_end_arr = np.ndarray(shape=(2,),dtype='>i2', buffer=big_end_buffer)
>>> big_end_arr[0]
1
>>> big_end_arr[1]
770
```

Note the array `dtype` above of `>i2`. The `>` means ‘big-endian’ (`<` is little-endian) and `i2` means ‘signed 2-byte integer’. For example, if our data represented a single unsigned 4-byte little-endian integer, the dtype string would be `<u4`. In fact, why don’t we try that?

```
>>> little_end_u4 = np.ndarray(shape=(1,),dtype='<u4', buffer=big_end_buffer)
>>> little_end_u4[0] == 1 * 256**1 + 3 * 256**2 + 2 * 256**3
True
```

Returning to our `big_end_arr` - in this case our underlying data is big-endian (data endianness) and we’ve set the dtype to match (the dtype is also big-endian). However, sometimes you need to flip these around.

Warning

Scalars currently do not include byte order information, so extracting a scalar from an array will return an integer in native byte order. Hence:

```
>>> big_end_arr[0].dtype.byteorder == little_end_u4[0].dtype.byteorder
True
```

Changing byte ordering
----------------------

As you can imagine from the introduction, there are two ways you can affect the relationship between the byte ordering of the array and the underlying memory it is looking at:

* Change the byte-ordering information in the array dtype so that it interprets the underlying data as being in a different byte order.
This is the role of `arr.newbyteorder()`.
* Change the byte-ordering of the underlying data, leaving the dtype interpretation as it was. This is what `arr.byteswap()` does.

The common situations in which you need to change byte ordering are:

1. Your data and dtype endianness don’t match, and you want to change the dtype so that it matches the data.
2. Your data and dtype endianness don’t match, and you want to swap the data so that they match the dtype.
3. Your data and dtype endianness match, but you want the data swapped and the dtype to reflect this.

### Data and dtype endianness don’t match, change dtype to match data

We make something where they don’t match:

```
>>> wrong_end_dtype_arr = np.ndarray(shape=(2,),dtype='<i2', buffer=big_end_buffer)
>>> wrong_end_dtype_arr[0]
256
```

The obvious fix for this situation is to change the dtype so it gives the correct endianness:

```
>>> fixed_end_dtype_arr = wrong_end_dtype_arr.newbyteorder()
>>> fixed_end_dtype_arr[0]
1
```

Note the array has not changed in memory:

```
>>> fixed_end_dtype_arr.tobytes() == big_end_buffer
True
```

### Data and dtype endianness don’t match, change data to match dtype

You might want to do this if you need the data in memory to be a certain ordering. For example you might be writing the memory out to a file that needs a certain byte ordering.

```
>>> fixed_end_mem_arr = wrong_end_dtype_arr.byteswap()
>>> fixed_end_mem_arr[0]
1
```

Now the array *has* changed in memory:

```
>>> fixed_end_mem_arr.tobytes() == big_end_buffer
False
```

### Data and dtype endianness match, swap data and dtype

You may have a correctly specified array dtype, but you need the array to have the opposite byte order in memory, and you want the dtype to match so the array values make sense.
In this case you just do both of the previous operations:

```
>>> swapped_end_arr = big_end_arr.byteswap().newbyteorder()
>>> swapped_end_arr[0]
1
>>> swapped_end_arr.tobytes() == big_end_buffer
False
```

An easier way of casting the data to a specific dtype and byte ordering can be achieved with the ndarray astype method:

```
>>> swapped_end_arr = big_end_arr.astype('<i2')
>>> swapped_end_arr[0]
1
>>> swapped_end_arr.tobytes() == big_end_buffer
False
```

Writing custom array containers
===============================

Numpy’s dispatch mechanism, introduced in numpy version v1.16, is the recommended approach for writing custom N-dimensional array containers that are compatible with the numpy API and provide custom implementations of numpy functionality. Applications include [dask](http://dask.pydata.org) arrays, an N-dimensional array distributed across multiple nodes, and [cupy](https://docs-cupy.chainer.org/en/stable/) arrays, an N-dimensional array on a GPU.

To get a feel for writing custom array containers, we’ll begin with a simple example that has rather narrow utility but illustrates the concepts involved.

```
>>> import numpy as np
>>> class DiagonalArray:
...     def __init__(self, N, value):
...         self._N = N
...         self._i = value
...     def __repr__(self):
...         return f"{self.__class__.__name__}(N={self._N}, value={self._i})"
...     def __array__(self, dtype=None):
...         return self._i * np.eye(self._N, dtype=dtype)
```

Our custom array can be instantiated like:

```
>>> arr = DiagonalArray(5, 1)
>>> arr
DiagonalArray(N=5, value=1)
```

We can convert to a numpy array using [`numpy.array`](../reference/generated/numpy.array#numpy.array "numpy.array") or [`numpy.asarray`](../reference/generated/numpy.asarray#numpy.asarray "numpy.asarray"), which will call its `__array__` method to obtain a standard `numpy.ndarray`.
```
>>> np.asarray(arr)
array([[1., 0., 0., 0., 0.],
       [0., 1., 0., 0., 0.],
       [0., 0., 1., 0., 0.],
       [0., 0., 0., 1., 0.],
       [0., 0., 0., 0., 1.]])
```

If we operate on `arr` with a numpy function, numpy will again use the `__array__` interface to convert it to an array and then apply the function in the usual way.

```
>>> np.multiply(arr, 2)
array([[2., 0., 0., 0., 0.],
       [0., 2., 0., 0., 0.],
       [0., 0., 2., 0., 0.],
       [0., 0., 0., 2., 0.],
       [0., 0., 0., 0., 2.]])
```

Notice that the return type is a standard `numpy.ndarray`.

```
>>> type(np.multiply(arr, 2))
<class 'numpy.ndarray'>
```

How can we pass our custom array type through this function? Numpy allows a class to indicate that it would like to handle computations in a custom-defined way through the interfaces `__array_ufunc__` and `__array_function__`. Let’s take one at a time, starting with `__array_ufunc__`. This method covers [Universal functions (ufunc)](../reference/ufuncs#ufuncs), a class of functions that includes, for example, [`numpy.multiply`](../reference/generated/numpy.multiply#numpy.multiply "numpy.multiply") and [`numpy.sin`](../reference/generated/numpy.sin#numpy.sin "numpy.sin").

The `__array_ufunc__` receives:

* `ufunc`, a function like `numpy.multiply`
* `method`, a string, differentiating between `numpy.multiply(...)` and variants like `numpy.multiply.outer`, `numpy.multiply.accumulate`, and so on. For the common case, `numpy.multiply(...)`, `method == '__call__'`.
* `inputs`, which could be a mixture of different types
* `kwargs`, keyword arguments passed to the function

For this example we will only handle the method `__call__`.

```
>>> from numbers import Number
>>> class DiagonalArray:
...     def __init__(self, N, value):
...         self._N = N
...         self._i = value
...     def __repr__(self):
...         return f"{self.__class__.__name__}(N={self._N}, value={self._i})"
...     def __array__(self, dtype=None):
...         return self._i * np.eye(self._N, dtype=dtype)
...     def __array_ufunc__(self, ufunc, method, *inputs, **kwargs):
...         if method == '__call__':
...             N = None
...             scalars = []
...             for input in inputs:
...                 if isinstance(input, Number):
...                     scalars.append(input)
...                 elif isinstance(input, self.__class__):
...                     scalars.append(input._i)
...                     if N is not None:
...                         if N != self._N:
...                             raise TypeError("inconsistent sizes")
...                     else:
...                         N = self._N
...                 else:
...                     return NotImplemented
...             return self.__class__(N, ufunc(*scalars, **kwargs))
...         else:
...             return NotImplemented
```

Now our custom array type passes through numpy functions.

```
>>> arr = DiagonalArray(5, 1)
>>> np.multiply(arr, 3)
DiagonalArray(N=5, value=3)
>>> np.add(arr, 3)
DiagonalArray(N=5, value=4)
>>> np.sin(arr)
DiagonalArray(N=5, value=0.8414709848078965)
```

At this point `arr + 3` does not work.

```
>>> arr + 3
Traceback (most recent call last):
...
TypeError: unsupported operand type(s) for +: 'DiagonalArray' and 'int'
```

To support it, we need to define the Python interfaces `__add__`, `__lt__`, and so on to dispatch to the corresponding ufunc. We can achieve this conveniently by inheriting from the mixin [`NDArrayOperatorsMixin`](../reference/generated/numpy.lib.mixins.ndarrayoperatorsmixin#numpy.lib.mixins.NDArrayOperatorsMixin "numpy.lib.mixins.NDArrayOperatorsMixin").

```
>>> import numpy.lib.mixins
>>> class DiagonalArray(numpy.lib.mixins.NDArrayOperatorsMixin):
...     def __init__(self, N, value):
...         self._N = N
...         self._i = value
...     def __repr__(self):
...         return f"{self.__class__.__name__}(N={self._N}, value={self._i})"
...     def __array__(self, dtype=None):
...         return self._i * np.eye(self._N, dtype=dtype)
...     def __array_ufunc__(self, ufunc, method, *inputs, **kwargs):
...         if method == '__call__':
...             N = None
...             scalars = []
...             for input in inputs:
...                 if isinstance(input, Number):
...                     scalars.append(input)
...                 elif isinstance(input, self.__class__):
...                     scalars.append(input._i)
...                     if N is not None:
...                         if N != self._N:
...                             raise TypeError("inconsistent sizes")
...                     else:
...                         N = self._N
...                 else:
...                     return NotImplemented
...             return self.__class__(N, ufunc(*scalars, **kwargs))
...         else:
...             return NotImplemented
```

```
>>> arr = DiagonalArray(5, 1)
>>> arr + 3
DiagonalArray(N=5, value=4)
>>> arr > 0
DiagonalArray(N=5, value=True)
```

Now let’s tackle `__array_function__`. We’ll create a dict that maps numpy functions to our custom variants.

```
>>> HANDLED_FUNCTIONS = {}
>>> class DiagonalArray(numpy.lib.mixins.NDArrayOperatorsMixin):
...     def __init__(self, N, value):
...         self._N = N
...         self._i = value
...     def __repr__(self):
...         return f"{self.__class__.__name__}(N={self._N}, value={self._i})"
...     def __array__(self, dtype=None):
...         return self._i * np.eye(self._N, dtype=dtype)
...     def __array_ufunc__(self, ufunc, method, *inputs, **kwargs):
...         if method == '__call__':
...             N = None
...             scalars = []
...             for input in inputs:
...                 # In this case we accept only scalar numbers or DiagonalArrays.
...                 if isinstance(input, Number):
...                     scalars.append(input)
...                 elif isinstance(input, self.__class__):
...                     scalars.append(input._i)
...                     if N is not None:
...                         if N != self._N:
...                             raise TypeError("inconsistent sizes")
...                     else:
...                         N = self._N
...                 else:
...                     return NotImplemented
...             return self.__class__(N, ufunc(*scalars, **kwargs))
...         else:
...             return NotImplemented
...     def __array_function__(self, func, types, args, kwargs):
...         if func not in HANDLED_FUNCTIONS:
...             return NotImplemented
...         # Note: this allows subclasses that don't override
...         # __array_function__ to handle DiagonalArray objects.
...         if not all(issubclass(t, self.__class__) for t in types):
...             return NotImplemented
...         return HANDLED_FUNCTIONS[func](*args, **kwargs)
...
```

A convenient pattern is to define a decorator `implements` that can be used to add functions to `HANDLED_FUNCTIONS`.

```
>>> def implements(np_function):
...     "Register an __array_function__ implementation for DiagonalArray objects."
...     def decorator(func):
...         HANDLED_FUNCTIONS[np_function] = func
...         return func
...     return decorator
...
```

Now we write implementations of numpy functions for `DiagonalArray`. For completeness, to support the usage `arr.sum()` add a method `sum` that calls `numpy.sum(self)`, and the same for `mean`.

```
>>> @implements(np.sum)
... def sum(arr):
...     "Implementation of np.sum for DiagonalArray objects"
...     return arr._i * arr._N
...
>>> @implements(np.mean)
... def mean(arr):
...     "Implementation of np.mean for DiagonalArray objects"
...     return arr._i / arr._N
...
>>> arr = DiagonalArray(5, 1)
>>> np.sum(arr)
5
>>> np.mean(arr)
0.2
```

If the user tries to use any numpy functions not included in `HANDLED_FUNCTIONS`, a `TypeError` will be raised by numpy, indicating that this operation is not supported. For example, concatenating two `DiagonalArrays` does not produce another diagonal array, so it is not supported.

```
>>> np.concatenate([arr, arr])
Traceback (most recent call last):
...
TypeError: no implementation found for 'numpy.concatenate' on types that implement __array_function__: [<class '__main__.DiagonalArray'>]
```

Additionally, our implementations of `sum` and `mean` do not accept the optional arguments that numpy’s implementation does.

```
>>> np.sum(arr, axis=0)
Traceback (most recent call last):
...
TypeError: sum() got an unexpected keyword argument 'axis'
```

The user always has the option of converting to a normal `numpy.ndarray` with [`numpy.asarray`](../reference/generated/numpy.asarray#numpy.asarray "numpy.asarray") and using standard numpy from there.

```
>>> np.concatenate([np.asarray(arr), np.asarray(arr)])
array([[1., 0., 0., 0., 0.],
       [0., 1., 0., 0., 0.],
       [0., 0., 1., 0., 0.],
       [0., 0., 0., 1., 0.],
       [0., 0., 0., 0., 1.],
       [1., 0., 0., 0., 0.],
       [0., 1., 0., 0., 0.],
       [0., 0., 1., 0., 0.],
       [0., 0., 0., 1., 0.],
       [0., 0., 0., 0., 1.]])
```

Refer to the [dask source code](https://github.com/dask/dask) and [cupy source code](https://github.com/cupy/cupy) for more fully-worked examples of custom array containers.
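As a side note, the `axis` limitation mentioned above can be lifted by having a registered implementation fall back to NumPy on a densified copy. A self-contained sketch, using a pared-down `DiagonalArray` with only `__array_function__` (not the full class from the example):

```python
import numpy as np

HANDLED_FUNCTIONS = {}

def implements(np_function):
    """Register an __array_function__ implementation."""
    def decorator(func):
        HANDLED_FUNCTIONS[np_function] = func
        return func
    return decorator

class DiagonalArray:
    def __init__(self, N, value):
        self._N = N
        self._i = value
    def __array__(self, dtype=None):
        return self._i * np.eye(self._N, dtype=dtype)
    def __array_function__(self, func, types, args, kwargs):
        if func not in HANDLED_FUNCTIONS:
            return NotImplemented
        if not all(issubclass(t, self.__class__) for t in types):
            return NotImplemented
        return HANDLED_FUNCTIONS[func](*args, **kwargs)

# Forward any extra positional/keyword arguments (axis, dtype, ...) to
# numpy's own sum, applied to a dense conversion of the array.
@implements(np.sum)
def sum(arr, *args, **kwargs):
    return np.sum(np.asarray(arr), *args, **kwargs)

arr = DiagonalArray(3, 2.0)
print(np.sum(arr))          # 6.0
print(np.sum(arr, axis=0))  # [2. 2. 2.]
```

The trade-off is that densifying discards the diagonal structure, so a real implementation might handle common cases (no `axis`) specially and fall back only otherwise.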
See also [NEP 18](https://numpy.org/neps/nep-0018-array-function-protocol.html "(in NumPy Enhancement Proposals)").

Subclassing ndarray
===================

Introduction
------------

Subclassing ndarray is relatively simple, but it has some complications compared to other Python objects. On this page we explain the machinery that allows you to subclass ndarray, and the implications for implementing a subclass.

### ndarrays and object creation

Subclassing ndarray is complicated by the fact that new instances of ndarray classes can come about in three different ways. These are:

1. Explicit constructor call - as in `MySubClass(params)`. This is the usual route to Python instance creation.
2. View casting - casting an existing ndarray as a given subclass
3. New from template - creating a new instance from a template instance. Examples include returning slices from a subclassed array, creating return types from ufuncs, and copying arrays. See [Creating new from template](#new-from-template) for more details

The last two are characteristics of ndarrays - in order to support things like array slicing. The complications of subclassing ndarray are due to the mechanisms numpy has to support these latter two routes of instance creation.

### When to use subclassing

Besides the additional complexities of subclassing a NumPy array, subclasses can run into unexpected behaviour because some functions may convert the subclass to a baseclass and “forget” any additional information associated with the subclass. This can result in surprising behavior if you use NumPy methods or functions you have not explicitly tested.

On the other hand, compared to other interoperability approaches, subclassing can be useful because many things will “just work”.
This means that subclassing can be a convenient approach and for a long time it was also often the only available approach. However, NumPy now provides additional interoperability protocols described in “[Interoperability with NumPy](basics.interoperability#basics-interoperability)”. For many use-cases these interoperability protocols may now be a better fit or supplement the use of subclassing.

Subclassing can be a good fit if:

* you are less worried about maintainability or users other than yourself: subclassing will be faster to implement and additional interoperability can be added “as-needed”. And with few users, possible surprises are not an issue.
* you do not think it is problematic if the subclass information is ignored or lost silently. An example is `np.memmap` where “forgetting” about data being memory mapped cannot lead to a wrong result.

An example of a subclass that sometimes confuses users is NumPy’s masked arrays. When they were introduced, subclassing was the only approach for implementation. However, today we would possibly try to avoid subclassing and rely only on interoperability protocols.

Note that subclass authors may also wish to study [Interoperability with NumPy](basics.interoperability#basics-interoperability) to support more complex use-cases or work around the surprising behavior.

`astropy.units.Quantity` and `xarray` are examples of array-like objects that interoperate well with NumPy. Astropy’s `Quantity` is an example which uses a dual approach of both subclassing and interoperability protocols.
View casting
------------

*View casting* is the standard ndarray mechanism by which you take an ndarray of any subclass, and return a view of the array as another (specified) subclass:

```
>>> import numpy as np
>>> # create a completely useless ndarray subclass
>>> class C(np.ndarray): pass
>>> # create a standard ndarray
>>> arr = np.zeros((3,))
>>> # take a view of it, as our useless subclass
>>> c_arr = arr.view(C)
>>> type(c_arr)
<class '__main__.C'>
```

Creating new from template
--------------------------

New instances of an ndarray subclass can also come about by a very similar mechanism to [View casting](#view-casting), when numpy finds it needs to create a new instance from a template instance. The most obvious place this has to happen is when you are taking slices of subclassed arrays. For example:

```
>>> v = c_arr[1:]
>>> type(v) # the view is of type 'C'
<class '__main__.C'>
>>> v is c_arr # but it's a new instance
False
```

The slice is a *view* onto the original `c_arr` data. So, when we take a view from the ndarray, we return a new ndarray, of the same class, that points to the data in the original.

There are other points in the use of ndarrays where we need such views, such as copying arrays (`c_arr.copy()`), creating ufunc output arrays (see also [__array_wrap__ for ufuncs and other functions](#array-wrap)), and reducing methods (like `c_arr.mean()`).

Relationship of view casting and new-from-template
--------------------------------------------------

These paths both use the same machinery. We make the distinction here, because they result in different input to your methods. Specifically, [View casting](#view-casting) means you have created a new instance of your array type from any potential subclass of ndarray. [Creating new from template](#new-from-template) means you have created a new instance of your class from a pre-existing instance, allowing you - for example - to copy across attributes that are particular to your subclass.
Implications for subclassing
----------------------------

If we subclass ndarray, we need to deal not only with explicit construction of our array type, but also [View casting](#view-casting) or [Creating new from template](#new-from-template). NumPy has the machinery to do this, and it is this machinery that makes subclassing slightly non-standard.

There are two aspects to the machinery that ndarray uses to support views and new-from-template in subclasses.

The first is the use of the `ndarray.__new__` method for the main work of object initialization, rather than the more usual `__init__` method. The second is the use of the `__array_finalize__` method to allow subclasses to clean up after the creation of views and new instances from templates.

### A brief Python primer on `__new__` and `__init__`

`__new__` is a standard Python method, and, if present, is called before `__init__` when we create a class instance. See the [python __new__ documentation](https://docs.python.org/reference/datamodel.html#object.__new__) for more detail.

For example, consider the following Python code:

```
>>> class C:
>>>     def __new__(cls, *args):
>>>         print('Cls in __new__:', cls)
>>>         print('Args in __new__:', args)
>>>         # The `object` type __new__ method takes a single argument.
>>>         return object.__new__(cls)
>>>     def __init__(self, *args):
>>>         print('type(self) in __init__:', type(self))
>>>         print('Args in __init__:', args)
```

meaning that we get:

```
>>> c = C('hello')
Cls in __new__: <class 'C'>
Args in __new__: ('hello',)
type(self) in __init__: <class 'C'>
Args in __init__: ('hello',)
```

When we call `C('hello')`, the `__new__` method gets its own class as first argument, and the passed argument, which is the string `'hello'`. After python calls `__new__`, it usually (see below) calls our `__init__` method, with the output of `__new__` as the first argument (now a class instance), and the passed arguments following.
As you can see, the object can be initialized in the `__new__` method or the `__init__` method, or both, and in fact ndarray does not have an `__init__` method, because all the initialization is done in the `__new__` method.

Why use `__new__` rather than just the usual `__init__`? Because in some cases, as for ndarray, we want to be able to return an object of some other class. Consider the following:

```
class D(C):
    def __new__(cls, *args):
        print('D cls is:', cls)
        print('D args in __new__:', args)
        return C.__new__(C, *args)

    def __init__(self, *args):
        # we never get here
        print('In D __init__')
```

meaning that:

```
>>> obj = D('hello')
D cls is: <class 'D'>
D args in __new__: ('hello',)
Cls in __new__: <class 'C'>
Args in __new__: ('hello',)
>>> type(obj)
<class 'C'>
```

The definition of `C` is the same as before, but for `D`, the `__new__` method returns an instance of class `C` rather than `D`. Note that the `__init__` method of `D` does not get called. In general, when the `__new__` method returns an object of class other than the class in which it is defined, the `__init__` method of that class is not called.

This is how subclasses of the ndarray class are able to return views that preserve the class type. When taking a view, the standard ndarray machinery creates the new ndarray object with something like:

```
obj = ndarray.__new__(subtype, shape, ...
```

where `subtype` is the subclass. Thus the returned view is of the same class as the subclass, rather than being of class `ndarray`.

That solves the problem of returning views of the same type, but now we have a new problem. The machinery of ndarray can set the class this way, in its standard methods for taking views, but the ndarray `__new__` method knows nothing of what we have done in our own `__new__` method in order to set attributes, and so on. (Aside - why not call `obj = subtype.__new__(...` then? Because we may not have a `__new__` method with the same call signature).
### The role of `__array_finalize__`

`__array_finalize__` is the mechanism that numpy provides to allow subclasses to handle the various ways that new instances get created. Remember that subclass instances can come about in these three ways:

1. explicit constructor call (`obj = MySubClass(params)`). This will call the usual sequence of `MySubClass.__new__` then (if it exists) `MySubClass.__init__`.
2. [View casting](#view-casting)
3. [Creating new from template](#new-from-template)

Our `MySubClass.__new__` method only gets called in the case of the explicit constructor call, so we can’t rely on `MySubClass.__new__` or `MySubClass.__init__` to deal with the view casting and new-from-template. It turns out that `MySubClass.__array_finalize__` *does* get called for all three methods of object creation, so this is where our object creation housekeeping usually goes.

* For the explicit constructor call, our subclass will need to create a new ndarray instance of its own class. In practice this means that we, the authors of the code, will need to make a call to `ndarray.__new__(MySubClass,...)`, a class-hierarchy prepared call to `super().__new__(cls, ...)`, or do view casting of an existing array (see below)
* For view casting and new-from-template, the equivalent of `ndarray.__new__(MySubClass,...` is called, at the C level.

The arguments that `__array_finalize__` receives differ for the three methods of instance creation above.
The following code allows us to look at the call sequences and arguments:

```
import numpy as np

class C(np.ndarray):
    def __new__(cls, *args, **kwargs):
        print('In __new__ with class %s' % cls)
        return super().__new__(cls, *args, **kwargs)

    def __init__(self, *args, **kwargs):
        # in practice you probably will not need or want an __init__
        # method for your subclass
        print('In __init__ with class %s' % self.__class__)

    def __array_finalize__(self, obj):
        print('In array_finalize:')
        print('   self type is %s' % type(self))
        print('   obj type is %s' % type(obj))
```

Now:

```
>>> # Explicit constructor
>>> c = C((10,))
In __new__ with class <class 'C'>
In array_finalize:
   self type is <class 'C'>
   obj type is <class 'NoneType'>
In __init__ with class <class 'C'>
>>> # View casting
>>> a = np.arange(10)
>>> cast_a = a.view(C)
In array_finalize:
   self type is <class 'C'>
   obj type is <class 'numpy.ndarray'>
>>> # Slicing (example of new-from-template)
>>> cv = c[:1]
In array_finalize:
   self type is <class 'C'>
   obj type is <class 'C'>
```

The signature of `__array_finalize__` is:

```
def __array_finalize__(self, obj):
```

One sees that the `super` call, which goes to `ndarray.__new__`, passes `__array_finalize__` the new object, of our own class (`self`) as well as the object from which the view has been taken (`obj`). As you can see from the output above, the `self` is always a newly created instance of our subclass, and the type of `obj` differs for the three instance creation methods:

* When called from the explicit constructor, `obj` is `None`
* When called from view casting, `obj` can be an instance of any subclass of ndarray, including our own.
* When called in new-from-template, `obj` is another instance of our own subclass, that we might use to update the new `self` instance.

Because `__array_finalize__` is the only method that always sees new instances being created, it is the sensible place to fill in instance defaults for new object attributes, among other tasks.
This may be clearer with an example.

Simple example - adding an extra attribute to ndarray
-----------------------------------------------------

```
import numpy as np

class InfoArray(np.ndarray):

    def __new__(subtype, shape, dtype=float, buffer=None, offset=0,
                strides=None, order=None, info=None):
        # Create the ndarray instance of our type, given the usual
        # ndarray input arguments.  This will call the standard
        # ndarray constructor, but return an object of our type.
        # It also triggers a call to InfoArray.__array_finalize__
        obj = super().__new__(subtype, shape, dtype,
                              buffer, offset, strides, order)
        # set the new 'info' attribute to the value passed
        obj.info = info
        # Finally, we must return the newly created object:
        return obj

    def __array_finalize__(self, obj):
        # ``self`` is a new object resulting from
        # ndarray.__new__(InfoArray, ...), therefore it only has
        # attributes that the ndarray.__new__ constructor gave it -
        # i.e. those of a standard ndarray.
        #
        # We could have got to the ndarray.__new__ call in 3 ways:
        # From an explicit constructor - e.g. InfoArray():
        #    obj is None
        #    (we're in the middle of the InfoArray.__new__
        #    constructor, and self.info will be set when we return to
        #    InfoArray.__new__)
        if obj is None: return
        # From view casting - e.g. arr.view(InfoArray):
        #    obj is arr
        #    (type(obj) can be InfoArray)
        # From new-from-template - e.g. infoarr[:3]
        #    type(obj) is InfoArray
        #
        # Note that it is here, rather than in the __new__ method,
        # that we set the default value for 'info', because this
        # method sees all creation of default objects - with the
        # InfoArray.__new__ constructor, but also with
        # arr.view(InfoArray).
        self.info = getattr(obj, 'info', None)
        # We do not need to return anything
```

Using the object looks like this:

```
>>> obj = InfoArray(shape=(3,)) # explicit constructor
>>> type(obj)
<class 'InfoArray'>
>>> obj.info is None
True
>>> obj = InfoArray(shape=(3,), info='information')
>>> obj.info
'information'
>>> v = obj[1:] # new-from-template - here - slicing
>>> type(v)
<class 'InfoArray'>
>>> v.info
'information'
>>> arr = np.arange(10)
>>> cast_arr = arr.view(InfoArray) # view casting
>>> type(cast_arr)
<class 'InfoArray'>
>>> cast_arr.info is None
True
```

This class isn’t very useful, because it has the same constructor as the bare ndarray object, including passing in buffers and shapes and so on. We would probably prefer the constructor to be able to take an already formed ndarray from the usual numpy calls to `np.array` and return an object.

Slightly more realistic example - attribute added to existing array
-------------------------------------------------------------------

Here is a class that takes a standard ndarray that already exists, casts as our type, and adds an extra attribute.

```
import numpy as np

class RealisticInfoArray(np.ndarray):

    def __new__(cls, input_array, info=None):
        # Input array is an already formed ndarray instance
        # We first cast to be our class type
        obj = np.asarray(input_array).view(cls)
        # add the new attribute to the created instance
        obj.info = info
        # Finally, we must return the newly created object:
        return obj

    def __array_finalize__(self, obj):
        # see InfoArray.__array_finalize__ for comments
        if obj is None: return
        self.info = getattr(obj, 'info', None)
```

So:

```
>>> arr = np.arange(5)
>>> obj = RealisticInfoArray(arr, info='information')
>>> type(obj)
<class 'RealisticInfoArray'>
>>> obj.info
'information'
>>> v = obj[1:]
>>> type(v)
<class 'RealisticInfoArray'>
>>> v.info
'information'
```

`__array_ufunc__` for ufuncs
----------------------------

New in version 1.13.
A subclass can override what happens when executing numpy ufuncs on it by overriding the default `ndarray.__array_ufunc__` method. This method is executed *instead* of the ufunc and should return either the result of the operation, or [`NotImplemented`](https://docs.python.org/3/library/constants.html#NotImplemented "(in Python v3.10)") if the operation requested is not implemented.

The signature of `__array_ufunc__` is:

```
def __array_ufunc__(ufunc, method, *inputs, **kwargs):
```

- *ufunc* is the ufunc object that was called.
- *method* is a string indicating how the Ufunc was called, either `"__call__"` to indicate it was called directly, or one of its [methods](../reference/ufuncs#ufuncs-methods): `"reduce"`, `"accumulate"`, `"reduceat"`, `"outer"`, or `"at"`.
- *inputs* is a tuple of the input arguments to the `ufunc`
- *kwargs* contains any optional or keyword arguments passed to the function. This includes any `out` arguments, which are always contained in a tuple.

A typical implementation would convert any inputs or outputs that are instances of one’s own class, pass everything on to a superclass using `super()`, and finally return the results after possible back-conversion. An example, taken from the test case `test_ufunc_override_with_super` in `core/tests/test_umath.py`, is the following.
```
import numpy as np

class A(np.ndarray):
    def __array_ufunc__(self, ufunc, method, *inputs, out=None, **kwargs):
        args = []
        in_no = []
        for i, input_ in enumerate(inputs):
            if isinstance(input_, A):
                in_no.append(i)
                args.append(input_.view(np.ndarray))
            else:
                args.append(input_)

        outputs = out
        out_no = []
        if outputs:
            out_args = []
            for j, output in enumerate(outputs):
                if isinstance(output, A):
                    out_no.append(j)
                    out_args.append(output.view(np.ndarray))
                else:
                    out_args.append(output)
            kwargs['out'] = tuple(out_args)
        else:
            outputs = (None,) * ufunc.nout

        info = {}
        if in_no:
            info['inputs'] = in_no
        if out_no:
            info['outputs'] = out_no

        results = super().__array_ufunc__(ufunc, method, *args, **kwargs)
        if results is NotImplemented:
            return NotImplemented

        if method == 'at':
            if isinstance(inputs[0], A):
                inputs[0].info = info
            return

        if ufunc.nout == 1:
            results = (results,)

        results = tuple((np.asarray(result).view(A)
                         if output is None else output)
                        for result, output in zip(results, outputs))
        if results and isinstance(results[0], A):
            results[0].info = info

        return results[0] if len(results) == 1 else results
```

So, this class does not actually do anything interesting: it just converts any instances of its own to regular ndarray (otherwise, we’d get infinite recursion!), and adds an `info` dictionary that tells which inputs and outputs it converted. Hence, e.g.,

```
>>> a = np.arange(5.).view(A)
>>> b = np.sin(a)
>>> b.info
{'inputs': [0]}
>>> b = np.sin(np.arange(5.), out=(a,))
>>> b.info
{'outputs': [0]}
>>> a = np.arange(5.).view(A)
>>> b = np.ones(1).view(A)
>>> c = a + b
>>> c.info
{'inputs': [0, 1]}
>>> a += b
>>> a.info
{'inputs': [0, 1], 'outputs': [0]}
```

Note that another approach would be to use `getattr(ufunc, method)(*inputs, **kwargs)` instead of the `super` call. For this example, the result would be identical, but there is a difference if another operand also defines `__array_ufunc__`.
E.g., let's assume that we evaluate `np.add(a, b)`, where `b` is an instance of another class `B` that has an override. If you use `super` as in the example, `ndarray.__array_ufunc__` will notice that `b` has an override, which means it cannot evaluate the result itself. Thus, it will return `NotImplemented` and so will our class `A`. Then, control will be passed over to `b`, which either knows how to deal with us and produces a result, or does not and returns `NotImplemented`, raising a `TypeError`.

If instead, we replace our `super` call with `getattr(ufunc, method)`, we effectively do `np.add(a.view(np.ndarray), b)`. Again, `B.__array_ufunc__` will be called, but now it sees an `ndarray` as the other argument. Likely, it will know how to handle this, and return a new instance of the `B` class to us. Our example class is not set up to handle this, but it might well be the best approach if, e.g., one were to re-implement `MaskedArray` using `__array_ufunc__`.

As a final note: if the `super` route is suited to a given class, an advantage of using it is that it helps in constructing class hierarchies. E.g., suppose that our other class `B` also used the `super` in its `__array_ufunc__` implementation, and we created a class `C` that depended on both, i.e., `class C(A, B)` (with, for simplicity, not another `__array_ufunc__` override). Then any ufunc on an instance of `C` would pass on to `A.__array_ufunc__`, the `super` call in `A` would go to `B.__array_ufunc__`, and the `super` call in `B` would go to `ndarray.__array_ufunc__`, thus allowing `A` and `B` to collaborate.

`__array_wrap__` for ufuncs and other functions
-----------------------------------------------

Prior to numpy 1.13, the behaviour of ufuncs could only be tuned using `__array_wrap__` and `__array_prepare__`. These two allowed one to change the output type of a ufunc, but, in contrast to `__array_ufunc__`, did not allow one to make any changes to the inputs.
It is hoped to eventually deprecate these, but `__array_wrap__` is also used by other numpy functions and methods, such as `squeeze`, so at the present time is still needed for full functionality.

Conceptually, `__array_wrap__` “wraps up the action” in the sense of allowing a subclass to set the type of the return value and update attributes and metadata. Let’s show how this works with an example. First we return to the simpler example subclass, but with a different name and some print statements:

```
import numpy as np

class MySubClass(np.ndarray):

    def __new__(cls, input_array, info=None):
        obj = np.asarray(input_array).view(cls)
        obj.info = info
        return obj

    def __array_finalize__(self, obj):
        print('In __array_finalize__:')
        print('   self is %s' % repr(self))
        print('   obj is %s' % repr(obj))
        if obj is None: return
        self.info = getattr(obj, 'info', None)

    def __array_wrap__(self, out_arr, context=None):
        print('In __array_wrap__:')
        print('   self is %s' % repr(self))
        print('   arr is %s' % repr(out_arr))
        # then just call the parent (super() already supplies self)
        return super().__array_wrap__(out_arr, context)
```

We run a ufunc on an instance of our new array:

```
>>> obj = MySubClass(np.arange(5), info='spam')
In __array_finalize__:
   self is MySubClass([0, 1, 2, 3, 4])
   obj is array([0, 1, 2, 3, 4])
>>> arr2 = np.arange(5)+1
>>> ret = np.add(arr2, obj)
In __array_wrap__:
   self is MySubClass([0, 1, 2, 3, 4])
   arr is array([1, 3, 5, 7, 9])
In __array_finalize__:
   self is MySubClass([1, 3, 5, 7, 9])
   obj is MySubClass([0, 1, 2, 3, 4])
>>> ret
MySubClass([1, 3, 5, 7, 9])
>>> ret.info
'spam'
```

Note that the ufunc (`np.add`) has called the `__array_wrap__` method with arguments `self` as `obj`, and `out_arr` as the (ndarray) result of the addition. In turn, the default `__array_wrap__` (`ndarray.__array_wrap__`) has cast the result to class `MySubClass`, and called `__array_finalize__` - hence the copying of the `info` attribute. This has all happened at the C level.
But, we could do anything we wanted:

```
class SillySubClass(np.ndarray):

    def __array_wrap__(self, arr, context=None):
        return 'I lost your data'
```

```
>>> arr1 = np.arange(5)
>>> obj = arr1.view(SillySubClass)
>>> arr2 = np.arange(5)
>>> ret = np.multiply(obj, arr2)
>>> ret
'I lost your data'
```

So, by defining a specific `__array_wrap__` method for our subclass, we can tweak the output from ufuncs. The `__array_wrap__` method requires `self`, then an argument - which is the result of the ufunc - and an optional parameter *context*. This parameter is returned by ufuncs as a 3-element tuple: (name of the ufunc, arguments of the ufunc, domain of the ufunc), but is not set by other numpy functions. Though, as seen above, it is possible to do otherwise, `__array_wrap__` should return an instance of its containing class. See the masked array subclass for an implementation.

In addition to `__array_wrap__`, which is called on the way out of the ufunc, there is also an `__array_prepare__` method which is called on the way into the ufunc, after the output arrays are created but before any computation has been performed. The default implementation does nothing but pass through the array. `__array_prepare__` should not attempt to access the array data or resize the array, it is intended for setting the output array type, updating attributes and metadata, and performing any checks based on the input that may be desired before computation begins. Like `__array_wrap__`, `__array_prepare__` must return an ndarray or subclass thereof or raise an error.

Extra gotchas - custom `__del__` methods and ndarray.base
---------------------------------------------------------

One of the problems that ndarray solves is keeping track of memory ownership of ndarrays and their views. Consider the case where we have created an ndarray, `arr` and have taken a slice with `v = arr[1:]`. The two objects are looking at the same memory.
NumPy keeps track of where the data came from for a particular array or view, with the `base` attribute:

```
>>> # A normal ndarray, that owns its own data
>>> arr = np.zeros((4,))
>>> # In this case, base is None
>>> arr.base is None
True
>>> # We take a view
>>> v1 = arr[1:]
>>> # base now points to the array that it derived from
>>> v1.base is arr
True
>>> # Take a view of a view
>>> v2 = v1[1:]
>>> # base points to the original array that it was derived from
>>> v2.base is arr
True
```

In general, if the array owns its own memory, as for `arr` in this case, then `arr.base` will be None - there are some exceptions to this - see the numpy book for more details.

The `base` attribute is useful in being able to tell whether we have a view or the original array. This in turn can be useful if we need to know whether or not to do some specific cleanup when the subclassed array is deleted. For example, we may only want to do the cleanup if the original array is deleted, but not the views. For an example of how this can work, have a look at the `memmap` class in `numpy.core`.

Subclassing and Downstream Compatibility
----------------------------------------

When sub-classing `ndarray` or creating duck-types that mimic the `ndarray` interface, it is your responsibility to decide how aligned your APIs will be with those of numpy. For convenience, many numpy functions that have a corresponding `ndarray` method (e.g., `sum`, `mean`, `take`, `reshape`) work by checking if the first argument to a function has a method of the same name. If it exists, the method is called instead of coercing the arguments to a numpy array.

For example, if you want your sub-class or duck-type to be compatible with numpy’s `sum` function, the method signature for this object’s `sum` method should be the following:

```
def sum(self, axis=None, dtype=None, out=None, keepdims=False):
    ...
```

This is the exact same method signature for `np.sum`, so now if a user calls `np.sum` on this object, numpy will call the object’s own `sum` method and pass in these arguments enumerated above in the signature, and no errors will be raised because the signatures are completely compatible with each other.

If, however, you decide to deviate from this signature and do something like this:

```
def sum(self, axis=None, dtype=None):
    ...
```

This object is no longer compatible with `np.sum` because if you call `np.sum`, it will pass in unexpected arguments `out` and `keepdims`, causing a TypeError to be raised.

If you wish to maintain compatibility with numpy and its subsequent versions (which might add new keyword arguments) but do not want to surface all of numpy’s arguments, your function’s signature should accept `**kwargs`. For example:

```
def sum(self, axis=None, dtype=None, **unused_kwargs):
    ...
```

This object is now compatible with `np.sum` again because any extraneous arguments (i.e. keywords that are not `axis` or `dtype`) will be hidden away in the `**unused_kwargs` parameter.

© 2005–2022 NumPy Developers
Licensed under the 3-clause BSD License.
<https://numpy.org/doc/1.23/user/basics.subclassing.html>

Universal functions (ufunc) basics
==================================

See also

[Universal functions (ufunc)](../reference/ufuncs#ufuncs)

A universal function (or [ufunc](../glossary#term-ufunc) for short) is a function that operates on [`ndarrays`](../reference/generated/numpy.ndarray#numpy.ndarray "numpy.ndarray") in an element-by-element fashion, supporting [array broadcasting](#ufuncs-broadcasting), [type casting](#ufuncs-casting), and several other standard features. That is, a ufunc is a “[vectorized](../glossary#term-vectorization)” wrapper for a function that takes a fixed number of specific inputs and produces a fixed number of specific outputs.
In NumPy, universal functions are instances of the [`numpy.ufunc`](../reference/generated/numpy.ufunc#numpy.ufunc "numpy.ufunc") class. Many of the built-in functions are implemented in compiled C code. The basic ufuncs operate on scalars, but there is also a generalized kind for which the basic elements are sub-arrays (vectors, matrices, etc.), and broadcasting is done over other dimensions. The simplest example is the addition operator:

```
>>> np.array([0,2,3,4]) + np.array([1,1,-1,2])
array([1, 3, 2, 6])
```

One can also produce custom [`numpy.ufunc`](../reference/generated/numpy.ufunc#numpy.ufunc "numpy.ufunc") instances using the [`numpy.frompyfunc`](../reference/generated/numpy.frompyfunc#numpy.frompyfunc "numpy.frompyfunc") factory function.

Ufunc methods
-------------

All ufuncs have four methods. They can be found at [Methods](../reference/ufuncs#ufuncs-methods). However, these methods only make sense on scalar ufuncs that take two input arguments and return one output argument. Attempting to call these methods on other ufuncs will cause a [`ValueError`](https://docs.python.org/3/library/exceptions.html#ValueError "(in Python v3.10)").

The reduce-like methods all take an *axis* keyword, a *dtype* keyword, and an *out* keyword, and the arrays must all have dimension >= 1. The *axis* keyword specifies the axis of the array over which the reduction will take place (with negative values counting backwards). Generally, it is an integer, though for [`numpy.ufunc.reduce`](../reference/generated/numpy.ufunc.reduce#numpy.ufunc.reduce "numpy.ufunc.reduce"), it can also be a tuple of `int` to reduce over several axes at once, or `None`, to reduce over all axes.
For example:

```
>>> x = np.arange(9).reshape(3,3)
>>> x
array([[0, 1, 2],
       [3, 4, 5],
       [6, 7, 8]])
>>> np.add.reduce(x, 1)
array([ 3, 12, 21])
>>> np.add.reduce(x, (0, 1))
36
```

The *dtype* keyword allows you to manage a very common problem that arises when naively using [`ufunc.reduce`](../reference/generated/numpy.ufunc.reduce#numpy.ufunc.reduce "numpy.ufunc.reduce"). Sometimes you may have an array of a certain data type and wish to add up all of its elements, but the result does not fit into the data type of the array. This commonly happens if you have an array of single-byte integers. The *dtype* keyword allows you to alter the data type over which the reduction takes place (and therefore the type of the output). Thus, you can ensure that the output is a data type with precision large enough to handle your output. The responsibility of altering the reduce type is mostly up to you. There is one exception: if no *dtype* is given for a reduction on the “add” or “multiply” operations, then if the input type is an integer (or Boolean) data-type and smaller than the size of the [`numpy.int_`](../reference/arrays.scalars#numpy.int_ "numpy.int_") data type, it will be internally upcast to the [`int_`](../reference/arrays.scalars#numpy.int_ "numpy.int_") (or [`numpy.uint`](../reference/arrays.scalars#numpy.uint "numpy.uint")) data-type. In the previous example:

```
>>> x.dtype
dtype('int64')
>>> np.multiply.reduce(x, dtype=float)
array([ 0., 28., 80.])
```

Finally, the *out* keyword allows you to provide an output array (for single-output ufuncs, which are currently the only ones supported; for future extension, however, a tuple with a single argument can be passed in). If *out* is given, the *dtype* argument is ignored.
Considering `x` from the previous example:

```
>>> y = np.zeros(3, dtype=int)
>>> y
array([0, 0, 0])
>>> np.multiply.reduce(x, dtype=float, out=y)
array([ 0, 28, 80])
```

Ufuncs also have a fifth method, [`numpy.ufunc.at`](../reference/generated/numpy.ufunc.at#numpy.ufunc.at "numpy.ufunc.at"), that allows in place operations to be performed using advanced indexing. No [buffering](#use-of-internal-buffers) is used on the dimensions where advanced indexing is used, so the advanced index can list an item more than once and the operation will be performed on the result of the previous operation for that item.

Output type determination
-------------------------

The output of the ufunc (and its methods) is not necessarily an [`ndarray`](../reference/generated/numpy.ndarray#numpy.ndarray "numpy.ndarray"), if all input arguments are not [`ndarrays`](../reference/generated/numpy.ndarray#numpy.ndarray "numpy.ndarray"). Indeed, if any input defines an [`__array_ufunc__`](../reference/arrays.classes#numpy.class.__array_ufunc__ "numpy.class.__array_ufunc__") method, control will be passed completely to that function, i.e., the ufunc is [overridden](#ufuncs-overrides).

If none of the inputs overrides the ufunc, then all output arrays will be passed to the [`__array_prepare__`](../reference/arrays.classes#numpy.class.__array_prepare__ "numpy.class.__array_prepare__") and [`__array_wrap__`](../reference/arrays.classes#numpy.class.__array_wrap__ "numpy.class.__array_wrap__") methods of the input (besides [`ndarrays`](../reference/generated/numpy.ndarray#numpy.ndarray "numpy.ndarray"), and scalars) that defines it **and** has the highest [`__array_priority__`](../reference/arrays.classes#numpy.class.__array_priority__ "numpy.class.__array_priority__") of any other input to the universal function.
The default [`__array_priority__`](../reference/arrays.classes#numpy.class.__array_priority__ "numpy.class.__array_priority__") of the ndarray is 0.0, and the default [`__array_priority__`](../reference/arrays.classes#numpy.class.__array_priority__ "numpy.class.__array_priority__") of a subtype is 0.0. Matrices have [`__array_priority__`](../reference/arrays.classes#numpy.class.__array_priority__ "numpy.class.__array_priority__") equal to 10.0.

All ufuncs can also take output arguments. If necessary, output will be cast to the data-type(s) of the provided output array(s). If a class with an [`__array__`](../reference/arrays.classes#numpy.class.__array__ "numpy.class.__array__") method is used for the output, results will be written to the object returned by [`__array__`](../reference/arrays.classes#numpy.class.__array__ "numpy.class.__array__"). Then, if the class also has an [`__array_prepare__`](../reference/arrays.classes#numpy.class.__array_prepare__ "numpy.class.__array_prepare__") method, it is called so metadata may be determined based on the context of the ufunc (the context consisting of the ufunc itself, the arguments passed to the ufunc, and the ufunc domain.) The array object returned by [`__array_prepare__`](../reference/arrays.classes#numpy.class.__array_prepare__ "numpy.class.__array_prepare__") is passed to the ufunc for computation. Finally, if the class also has an [`__array_wrap__`](../reference/arrays.classes#numpy.class.__array_wrap__ "numpy.class.__array_wrap__") method, the returned [`ndarray`](../reference/generated/numpy.ndarray#numpy.ndarray "numpy.ndarray") result will be passed to that method just before passing control back to the caller.
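A small sketch of this wrapping dispatch (the class name `Wrapped` and its `wrapped_by` tag are illustrative, not part of the numpy API); the subclass outranks its plain-ndarray operand via `__array_priority__`, so its `__array_wrap__` gets to shape the result:

```
import numpy as np

class Wrapped(np.ndarray):
    # Outrank plain ndarray (priority 0.0) so our __array_wrap__ is chosen.
    __array_priority__ = 10.0

    # Newer NumPy versions pass an extra return_scalar flag; accept it
    # with a default so this sketch works across versions.
    def __array_wrap__(self, out_arr, context=None, return_scalar=False):
        res = out_arr.view(Wrapped)
        res.wrapped_by = 'Wrapped'   # tag the result so we can see who wrapped it
        return res

w = np.arange(3).view(Wrapped)
res = np.add(w, np.arange(3))   # the plain ndarray operand loses the wrap
print(type(res).__name__, res.wrapped_by)
```

Since the plain `np.arange(3)` operand defines no `__array_wrap__` of its own, `Wrapped` wins the dispatch and the result comes back as a tagged `Wrapped` instance.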
Broadcasting
------------

See also

[Broadcasting basics](basics.broadcasting)

Each universal function takes array inputs and produces array outputs by performing the core function element-wise on the inputs (where an element is generally a scalar, but can be a vector or higher-order sub-array for generalized ufuncs). Standard [broadcasting rules](basics.broadcasting#general-broadcasting-rules) are applied so that inputs not sharing exactly the same shapes can still be usefully operated on. By these rules, if an input has a dimension size of 1 in its shape, the first data entry in that dimension will be used for all calculations along that dimension. In other words, the stepping machinery of the [ufunc](../glossary#term-ufunc) will simply not step along that dimension (the [stride](../reference/arrays.ndarray#memory-layout) will be 0 for that dimension).

Type casting rules
------------------

Note

In NumPy 1.6.0, a type promotion API was created to encapsulate the mechanism for determining output types. See the functions [`numpy.result_type`](../reference/generated/numpy.result_type#numpy.result_type "numpy.result_type"), [`numpy.promote_types`](../reference/generated/numpy.promote_types#numpy.promote_types "numpy.promote_types"), and [`numpy.min_scalar_type`](../reference/generated/numpy.min_scalar_type#numpy.min_scalar_type "numpy.min_scalar_type") for more details.

At the core of every ufunc is a one-dimensional strided loop that implements the actual function for a specific type combination. When a ufunc is created, it is given a static list of inner loops and a corresponding list of type signatures over which the ufunc operates. The ufunc machinery uses this list to determine which inner loop to use for a particular case.
You can inspect the [`.types`](../reference/generated/numpy.ufunc.types#numpy.ufunc.types "numpy.ufunc.types") attribute for a particular ufunc to see which type combinations have a defined inner loop and which output type they produce ([character codes](../reference/arrays.scalars#arrays-scalars-character-codes) are used in said output for brevity).

Casting must be done on one or more of the inputs whenever the ufunc does not have a core loop implementation for the input types provided. If an implementation for the input types cannot be found, then the algorithm searches for an implementation with a type signature to which all of the inputs can be cast “safely.” The first one it finds in its internal list of loops is selected and performed, after all necessary type casting. Recall that internal copies during ufuncs (even for casting) are limited to the size of an internal buffer (which is user settable).

Note

Universal functions in NumPy are flexible enough to have mixed type signatures. Thus, for example, a universal function could be defined that works with floating-point and integer values. See [`numpy.ldexp`](../reference/generated/numpy.ldexp#numpy.ldexp "numpy.ldexp") for an example.

By the above description, the casting rules are essentially implemented by the question of when a data type can be cast “safely” to another data type. The answer to this question can be determined in Python with a function call: [`can_cast(fromtype, totype)`](../reference/generated/numpy.can_cast#numpy.can_cast "numpy.can_cast"). The example below shows the results of this call for the 24 internally supported types on the author’s 64-bit system. You can generate this table for your system with the code given in the example.

#### Example

Code segment showing the “can cast safely” table for a 64-bit system. Generally the output depends on the system; your system might result in a different table.

```
>>> mark = {False: ' -', True: ' Y'}
>>> def print_table(ntypes):
...     print('X ' + ' '.join(ntypes))
...     for row in ntypes:
...         print(row, end='')
...         for col in ntypes:
...             print(mark[np.can_cast(row, col)], end='')
...         print()
...
>>> print_table(np.typecodes['All'])
X ? b h i l q p B H I L Q P e f d g F D G S U V O M m
? Y Y Y Y Y Y Y Y Y Y Y Y Y Y Y Y Y Y Y Y Y Y Y Y - Y
b - Y Y Y Y Y Y - - - - - - Y Y Y Y Y Y Y Y Y Y Y - Y
h - - Y Y Y Y Y - - - - - - - Y Y Y Y Y Y Y Y Y Y - Y
i - - - Y Y Y Y - - - - - - - - Y Y - Y Y Y Y Y Y - Y
l - - - - Y Y Y - - - - - - - - Y Y - Y Y Y Y Y Y - Y
q - - - - Y Y Y - - - - - - - - Y Y - Y Y Y Y Y Y - Y
p - - - - Y Y Y - - - - - - - - Y Y - Y Y Y Y Y Y - Y
B - - Y Y Y Y Y Y Y Y Y Y Y Y Y Y Y Y Y Y Y Y Y Y - Y
H - - - Y Y Y Y - Y Y Y Y Y - Y Y Y Y Y Y Y Y Y Y - Y
I - - - - Y Y Y - - Y Y Y Y - - Y Y - Y Y Y Y Y Y - Y
L - - - - - - - - - - Y Y Y - - Y Y - Y Y Y Y Y Y - -
Q - - - - - - - - - - Y Y Y - - Y Y - Y Y Y Y Y Y - -
P - - - - - - - - - - Y Y Y - - Y Y - Y Y Y Y Y Y - -
e - - - - - - - - - - - - - Y Y Y Y Y Y Y Y Y Y Y - -
f - - - - - - - - - - - - - - Y Y Y Y Y Y Y Y Y Y - -
d - - - - - - - - - - - - - - - Y Y - Y Y Y Y Y Y - -
g - - - - - - - - - - - - - - - - Y - - Y Y Y Y Y - -
F - - - - - - - - - - - - - - - - - Y Y Y Y Y Y Y - -
D - - - - - - - - - - - - - - - - - - Y Y Y Y Y Y - -
G - - - - - - - - - - - - - - - - - - - Y Y Y Y Y - -
S - - - - - - - - - - - - - - - - - - - - Y Y Y Y - -
U - - - - - - - - - - - - - - - - - - - - - Y Y Y - -
V - - - - - - - - - - - - - - - - - - - - - - Y Y - -
O - - - - - - - - - - - - - - - - - - - - - - - Y - -
M - - - - - - - - - - - - - - - - - - - - - - Y Y Y -
m - - - - - - - - - - - - - - - - - - - - - - Y Y - Y
```

You should note that, while included in the table for completeness, the ‘S’, ‘U’, and ‘V’ types cannot be operated on by ufuncs. Also, note that on a 32-bit system the integer types may have different sizes, resulting in a slightly altered table.
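Individual entries of the table can be spot-checked directly with `numpy.can_cast`:

```
import numpy as np

# Spot-check a few entries of the "can cast safely" table.
print(np.can_cast(np.int8, np.int16))          # 'b' -> 'h': True, widening is safe
print(np.can_cast(np.int16, np.int8))          # 'h' -> 'b': False, narrowing is not
print(np.can_cast(np.float64, np.complex128))  # 'd' -> 'D': True
print(np.can_cast(np.complex128, np.float64))  # 'D' -> 'd': False
```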
Mixed scalar-array operations use a different set of casting rules that ensure that a scalar cannot “upcast” an array unless the scalar is of a fundamentally different kind of data (i.e., under a different hierarchy in the data-type hierarchy) than the array. This rule enables you to use scalar constants in your code (which, as Python types, are interpreted accordingly in ufuncs) without worrying about whether the precision of the scalar constant will cause upcasting on your large (small precision) array. Use of internal buffers ----------------------- Internally, buffers are used for misaligned data, swapped data, and data that has to be converted from one data type to another. The size of internal buffers is settable on a per-thread basis. There can be up to \(2 (n_{\mathrm{inputs}} + n_{\mathrm{outputs}})\) buffers of the specified size created to handle the data from all the inputs and outputs of a ufunc. The default size of a buffer is 10,000 elements. Whenever buffer-based calculation would be needed, but all input arrays are smaller than the buffer size, those misbehaved or incorrectly-typed arrays will be copied before the calculation proceeds. Adjusting the size of the buffer may therefore alter the speed at which ufunc calculations of various sorts are completed. A simple interface for setting this variable is accessible using the function [`numpy.setbufsize`](../reference/generated/numpy.setbufsize#numpy.setbufsize "numpy.setbufsize"). Error handling -------------- Universal functions can trip special floating-point status registers in your hardware (such as divide-by-zero). If available on your platform, these registers will be regularly checked during calculation. Error handling is controlled on a per-thread basis, and can be configured using the functions [`numpy.seterr`](../reference/generated/numpy.seterr#numpy.seterr "numpy.seterr") and [`numpy.seterrcall`](../reference/generated/numpy.seterrcall#numpy.seterrcall "numpy.seterrcall"). 
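Both knobs described above can be exercised in a few lines. This sketch (assuming only NumPy) uses `getbufsize`/`setbufsize` for the buffer size, and the `errstate` context manager, which scopes a `seterr`-style configuration to a block:

```python
import numpy as np

# Temporarily shrink the ufunc buffer, then restore the old size.
old = np.getbufsize()
np.setbufsize(8192)
assert np.getbufsize() == 8192
np.setbufsize(old)

# Divide-by-zero normally only warns; inside this block it raises.
caught = False
with np.errstate(divide='raise'):
    try:
        np.divide(np.float64(1.0), np.float64(0.0))
    except FloatingPointError:
        caught = True
assert caught
```

Using `errstate` rather than a bare `seterr` call keeps the error configuration local to the block, so the rest of the program is unaffected.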
Overriding ufunc behavior
-------------------------

Classes (including ndarray subclasses) can override how ufuncs act on them by defining certain special methods. For details, see [Standard array subclasses](../reference/arrays.classes#arrays-classes).

© 2005–2022 NumPy Developers
Licensed under the 3-clause BSD License.
<https://numpy.org/doc/1.23/user/basics.ufuncs.html>

Copies and views
================

When operating on NumPy arrays, it is possible to access the internal data buffer directly using a [view](#view) without copying data around. This ensures good performance but can also cause unwanted problems if the user is not aware of how this works. Hence, it is important to know the difference between these two terms and to know which operations return copies and which return views.

The NumPy array is a data structure consisting of two parts: the [contiguous](../glossary#term-contiguous) data buffer with the actual data elements and the metadata that contains information about the data buffer. The metadata includes data type, strides, and other important information that helps manipulate the [`ndarray`](../reference/generated/numpy.ndarray#numpy.ndarray "numpy.ndarray") easily. See the [Internal organization of NumPy arrays](../dev/internals#numpy-internals) section for a detailed look.

View
----

It is possible to access the array differently by just changing certain metadata like [stride](../glossary#term-stride) and [dtype](../glossary#term-dtype) without changing the data buffer. This creates a new way of looking at the data, and these new arrays are called views. The data buffer remains the same, so any changes made to a view are reflected in the original array. A view can be forced through the [`ndarray.view`](../reference/generated/numpy.ndarray.view#numpy.ndarray.view "numpy.ndarray.view") method.

Copy
----

When a new array is created by duplicating the data buffer as well as the metadata, it is called a copy.
Changes made to the copy do not reflect on the original array. Making a copy is slower and memory-consuming but sometimes necessary. A copy can be forced by using [`ndarray.copy`](../reference/generated/numpy.ndarray.copy#numpy.ndarray.copy "numpy.ndarray.copy"). Indexing operations ------------------- See also [Indexing on ndarrays](basics.indexing#basics-indexing) Views are created when elements can be addressed with offsets and strides in the original array. Hence, basic indexing always creates views. For example: ``` >>> x = np.arange(10) >>> x array([0, 1, 2, 3, 4, 5, 6, 7, 8, 9]) >>> y = x[1:3] # creates a view >>> y array([1, 2]) >>> x[1:3] = [10, 11] >>> x array([ 0, 10, 11, 3, 4, 5, 6, 7, 8, 9]) >>> y array([10, 11]) ``` Here, `y` gets changed when `x` is changed because it is a view. [Advanced indexing](basics.indexing#advanced-indexing), on the other hand, always creates copies. For example: ``` >>> x = np.arange(9).reshape(3, 3) >>> x array([[0, 1, 2], [3, 4, 5], [6, 7, 8]]) >>> y = x[[1, 2]] >>> y array([[3, 4, 5], [6, 7, 8]]) >>> y.base is None True ``` Here, `y` is a copy, as signified by the [`base`](../reference/generated/numpy.ndarray.base#numpy.ndarray.base "numpy.ndarray.base") attribute. We can also confirm this by assigning new values to `x[[1, 2]]` which in turn will not affect `y` at all: ``` >>> x[[1, 2]] = [[10, 11, 12], [13, 14, 15]] >>> x array([[ 0, 1, 2], [10, 11, 12], [13, 14, 15]]) >>> y array([[3, 4, 5], [6, 7, 8]]) ``` It must be noted here that during the assignment of `x[[1, 2]]` no view or copy is created as the assignment happens in-place. Other operations ---------------- The [`numpy.reshape`](../reference/generated/numpy.reshape#numpy.reshape "numpy.reshape") function creates a view where possible or a copy otherwise. In most cases, the strides can be modified to reshape the array with a view. 
However, in some cases where the array becomes non-contiguous (perhaps after a [`ndarray.transpose`](../reference/generated/numpy.ndarray.transpose#numpy.ndarray.transpose "numpy.ndarray.transpose") operation), the reshaping cannot be done by modifying strides and requires a copy. In these cases, we can raise an error by assigning the new shape to the shape attribute of the array. For example: ``` >>> x = np.ones((2, 3)) >>> y = x.T # makes the array non-contiguous >>> y array([[1., 1.], [1., 1.], [1., 1.]]) >>> z = y.view() >>> z.shape = 6 Traceback (most recent call last): ... AttributeError: Incompatible shape for in-place modification. Use `.reshape()` to make a copy with the desired shape. ``` Taking the example of another operation, [`ravel`](../reference/generated/numpy.ravel#numpy.ravel "numpy.ravel") returns a contiguous flattened view of the array wherever possible. On the other hand, [`ndarray.flatten`](../reference/generated/numpy.ndarray.flatten#numpy.ndarray.flatten "numpy.ndarray.flatten") always returns a flattened copy of the array. However, to guarantee a view in most cases, `x.reshape(-1)` may be preferable. How to tell if the array is a view or a copy -------------------------------------------- The [`base`](../reference/generated/numpy.ndarray.base#numpy.ndarray.base "numpy.ndarray.base") attribute of the ndarray makes it easy to tell if an array is a view or a copy. The base attribute of a view returns the original array while it returns `None` for a copy. ``` >>> x = np.arange(9) >>> x array([0, 1, 2, 3, 4, 5, 6, 7, 8]) >>> y = x.reshape(3, 3) >>> y array([[0, 1, 2], [3, 4, 5], [6, 7, 8]]) >>> y.base # .reshape() creates a view array([0, 1, 2, 3, 4, 5, 6, 7, 8]) >>> z = y[[2, 1]] >>> z array([[6, 7, 8], [3, 4, 5]]) >>> z.base is None # advanced indexing creates a copy True ``` Note that the `base` attribute should not be used to determine if an ndarray object is *new*; only if it is a view or a copy of another ndarray. 
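Beyond inspecting `base`, `np.shares_memory` gives a direct answer to whether two arrays overlap in memory. A short sketch, assuming only NumPy:

```python
import numpy as np

x = np.arange(9)
view = x.reshape(3, 3)   # basic reshape of a contiguous array: a view
copy = x[[0, 1, 2]]      # advanced (integer-array) indexing: a copy

# A view shares its buffer with the original; a copy does not.
assert np.shares_memory(x, view)
assert not np.shares_memory(x, copy)

# The base attribute agrees with the memory check.
assert view.base is x
assert copy.base is None
```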
<https://numpy.org/doc/1.23/user/basics.copies.html>

Interoperability with NumPy
===========================

NumPy’s ndarray objects provide both a high-level API for operations on array-structured data and a concrete implementation of the API based on [strided in-RAM storage](../reference/arrays#arrays). While this API is powerful and fairly general, its concrete implementation has limitations. As datasets grow and NumPy becomes used in a variety of new environments and architectures, there are cases where the strided in-RAM storage strategy is inappropriate, which has caused different libraries to reimplement this API for their own uses. This includes GPU arrays ([CuPy](https://cupy.dev/)), Sparse arrays ([`scipy.sparse`](https://docs.scipy.org/doc/scipy/reference/sparse.html#module-scipy.sparse "(in SciPy v1.8.1)"), [PyData/Sparse](https://sparse.pydata.org/)) and parallel arrays ([Dask](https://docs.dask.org/) arrays) as well as various NumPy-like implementations in deep learning frameworks, like [TensorFlow](https://www.tensorflow.org/) and [PyTorch](https://pytorch.org/). Similarly, there are many projects that build on top of the NumPy API for labeled and indexed arrays ([XArray](http://xarray.pydata.org/)), automatic differentiation ([JAX](https://jax.readthedocs.io/)), masked arrays ([`numpy.ma`](../reference/maskedarray.generic#module-numpy.ma "numpy.ma")), and physical units ([astropy.units](https://docs.astropy.org/en/stable/units/), [pint](https://pint.readthedocs.io/), [unyt](https://unyt.readthedocs.io/)), among others that add additional functionality on top of the NumPy API.

Yet, users still want to work with these arrays using the familiar NumPy API and re-use existing code with minimal (ideally zero) porting overhead. With this goal in mind, various protocols are defined for implementations of multi-dimensional arrays with high-level APIs matching NumPy.
Broadly speaking, there are three groups of features used for interoperability with NumPy:

1. Methods of turning a foreign object into an ndarray;
2. Methods of deferring execution from a NumPy function to another array library;
3. Methods that use NumPy functions and return an instance of a foreign object.

We describe these features below.

1. Using arbitrary objects in NumPy
-----------------------------------

The first set of interoperability features from the NumPy API allows foreign objects to be treated as NumPy arrays whenever possible. When NumPy functions encounter a foreign object, they will try (in order):

1. The buffer protocol, described [in the Python C-API documentation](https://docs.python.org/3/c-api/buffer.html "(in Python v3.10)").
2. The `__array_interface__` protocol, described [in this page](../reference/arrays.interface#arrays-interface). A precursor to Python’s buffer protocol, it defines a way to access the contents of a NumPy array from other C extensions.
3. The `__array__()` method, which asks an arbitrary object to convert itself into an array.

For both the buffer and the `__array_interface__` protocols, the object describes its memory layout and NumPy does everything else (zero-copy if possible). If that’s not possible, the object itself is responsible for returning a `ndarray` from `__array__()`.

[DLPack](https://dmlc.github.io/dlpack/latest/index.html "(in DLPack)") is yet another protocol to convert foreign objects to NumPy arrays in a language and device agnostic manner. NumPy doesn’t implicitly convert objects to ndarrays using DLPack. It provides the function [`numpy.from_dlpack`](../reference/generated/numpy.from_dlpack#numpy.from_dlpack "numpy.from_dlpack") that accepts any object implementing the `__dlpack__` method and outputs a NumPy ndarray (which is generally a view of the input object’s data buffer).
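Since NumPy's own ndarray implements `__dlpack__` (NumPy >= 1.22 assumed), a minimal round trip can be sketched without any GPU library; the imported array is a view of the source buffer:

```python
import numpy as np

x = np.arange(5)
y = np.from_dlpack(x)   # ndarray itself implements __dlpack__

# Zero-copy: both arrays refer to the same data buffer.
assert np.shares_memory(x, y)
x[0] = 42
assert y[0] == 42
```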
The [Python Specification for DLPack](https://dmlc.github.io/dlpack/latest/python_spec.html#python-spec "(in DLPack)") page explains the `__dlpack__` protocol in detail. ### The array interface protocol The [array interface protocol](../reference/arrays.interface#arrays-interface) defines a way for array-like objects to re-use each other’s data buffers. Its implementation relies on the existence of the following attributes or methods: * `__array_interface__`: a Python dictionary containing the shape, the element type, and optionally, the data buffer address and the strides of an array-like object; * `__array__()`: a method returning the NumPy ndarray view of an array-like object; The `__array_interface__` attribute can be inspected directly: ``` >>> import numpy as np >>> x = np.array([1, 2, 5.0, 8]) >>> x.__array_interface__ {'data': (94708397920832, False), 'strides': None, 'descr': [('', '<f8')], 'typestr': '<f8', 'shape': (4,), 'version': 3} ``` The `__array_interface__` attribute can also be used to manipulate the object data in place: ``` >>> class wrapper(): ... pass ... >>> arr = np.array([1, 2, 3, 4]) >>> buf = arr.__array_interface__ >>> buf {'data': (140497590272032, False), 'strides': None, 'descr': [('', '<i8')], 'typestr': '<i8', 'shape': (4,), 'version': 3} >>> buf['shape'] = (2, 2) >>> w = wrapper() >>> w.__array_interface__ = buf >>> new_arr = np.array(w, copy=False) >>> new_arr array([[1, 2], [3, 4]]) ``` We can check that `arr` and `new_arr` share the same data buffer: ``` >>> new_arr[0, 0] = 1000 >>> new_arr array([[1000, 2], [ 3, 4]]) >>> arr array([1000, 2, 3, 4]) ``` ### The `__array__()` method The `__array__()` method ensures that any NumPy-like object (an array, any object exposing the array interface, an object whose `__array__()` method returns an array or any nested sequence) that implements it can be used as a NumPy array. If possible, this will mean using `__array__()` to create a NumPy ndarray view of the array-like object. 
Otherwise, this copies the data into a new ndarray object. This is not optimal, as coercing arrays into ndarrays may cause performance problems or create the need for copies and loss of metadata, as the original object and any attributes/behavior it may have had, is lost. To see an example of a custom array implementation including the use of `__array__()`, see [Writing custom array containers](basics.dispatch#basics-dispatch). ### The DLPack Protocol The [DLPack](https://dmlc.github.io/dlpack/latest/index.html "(in DLPack)") protocol defines a memory-layout of strided n-dimensional array objects. It offers the following syntax for data exchange: 1. A [`numpy.from_dlpack`](../reference/generated/numpy.from_dlpack#numpy.from_dlpack "numpy.from_dlpack") function, which accepts (array) objects with a `__dlpack__` method and uses that method to construct a new array containing the data from `x`. 2. `__dlpack__(self, stream=None)` and `__dlpack_device__` methods on the array object, which will be called from within `from_dlpack`, to query what device the array is on (may be needed to pass in the correct stream, e.g. in the case of multiple GPUs) and to access the data. Unlike the buffer protocol, DLPack allows exchanging arrays containing data on devices other than the CPU (e.g. Vulkan or GPU). Since NumPy only supports CPU, it can only convert objects whose data exists on the CPU. But other libraries, like [PyTorch](https://pytorch.org/) and [CuPy](https://cupy.dev/), may exchange data on GPU using this protocol. 2. Operating on foreign objects without converting -------------------------------------------------- A second set of methods defined by the NumPy API allows us to defer the execution from a NumPy function to another array library. Consider the following function. ``` >>> import numpy as np >>> def f(x): ... 
return np.mean(np.exp(x)) ``` Note that [`np.exp`](../reference/generated/numpy.exp#numpy.exp "numpy.exp") is a [ufunc](basics.ufuncs#ufuncs-basics), which means that it operates on ndarrays in an element-by-element fashion. On the other hand, [`np.mean`](../reference/generated/numpy.mean#numpy.mean "numpy.mean") operates along one of the array’s axes. We can apply `f` to a NumPy ndarray object directly: ``` >>> x = np.array([1, 2, 3, 4]) >>> f(x) 21.1977562209304 ``` We would like this function to work equally well with any NumPy-like array object. NumPy allows a class to indicate that it would like to handle computations in a custom-defined way through the following interfaces: * `__array_ufunc__`: allows third-party objects to support and override [ufuncs](basics.ufuncs#ufuncs-basics). * `__array_function__`: a catch-all for NumPy functionality that is not covered by the `__array_ufunc__` protocol for universal functions. As long as foreign objects implement the `__array_ufunc__` or `__array_function__` protocols, it is possible to operate on them without the need for explicit conversion. ### The `__array_ufunc__` protocol A [universal function (or ufunc for short)](basics.ufuncs#ufuncs-basics) is a “vectorized” wrapper for a function that takes a fixed number of specific inputs and produces a fixed number of specific outputs. The output of the ufunc (and its methods) is not necessarily a ndarray, if not all input arguments are ndarrays. Indeed, if any input defines an `__array_ufunc__` method, control will be passed completely to that function, i.e., the ufunc is overridden. The `__array_ufunc__` method defined on that (non-ndarray) object has access to the NumPy ufunc. Because ufuncs have a well-defined structure, the foreign `__array_ufunc__` method may rely on ufunc attributes like `.at()`, `.reduce()`, and others. A subclass can override what happens when executing NumPy ufuncs on it by overriding the default `ndarray.__array_ufunc__` method. 
This method is executed instead of the ufunc and should return either the result of the operation, or `NotImplemented` if the operation requested is not implemented. ### The `__array_function__` protocol To achieve enough coverage of the NumPy API to support downstream projects, there is a need to go beyond `__array_ufunc__` and implement a protocol that allows arguments of a NumPy function to take control and divert execution to another function (for example, a GPU or parallel implementation) in a way that is safe and consistent across projects. The semantics of `__array_function__` are very similar to `__array_ufunc__`, except the operation is specified by an arbitrary callable object rather than a ufunc instance and method. For more details, see [NEP 18 — A dispatch mechanism for NumPy’s high level array functions](https://numpy.org/neps/nep-0018-array-function-protocol.html#nep18 "(in NumPy Enhancement Proposals)"). 3. Returning foreign objects ---------------------------- A third type of feature set is meant to use the NumPy function implementation and then convert the return value back into an instance of the foreign object. The `__array_finalize__` and `__array_wrap__` methods act behind the scenes to ensure that the return type of a NumPy function can be specified as needed. The `__array_finalize__` method is the mechanism that NumPy provides to allow subclasses to handle the various ways that new instances get created. This method is called whenever the system internally allocates a new array from an object which is a subclass (subtype) of the ndarray. It can be used to change attributes after construction, or to update meta-information from the “parent.” The `__array_wrap__` method “wraps up the action” in the sense of allowing any object (such as user-defined functions) to set the type of its return value and update attributes and metadata. This can be seen as the opposite of the `__array__` method. 
At the end of every ufunc, this method is called on the input object with the highest *array priority*, or the output object if one was specified. The `__array_priority__` attribute is used to determine what type of object to return in situations where there is more than one possibility for the Python type of the returned object. For example, subclasses may opt to use this method to transform the output array into an instance of the subclass and update metadata before returning the array to the user.

For more information on these methods, see [Subclassing ndarray](basics.subclassing#basics-subclassing) and [Specific features of ndarray sub-typing](c-info.beyond-basics#specific-array-subtyping).

Interoperability examples
-------------------------

### Example: Pandas `Series` objects

Consider the following:

```
>>> import pandas as pd
>>> ser = pd.Series([1, 2, 3, 4])
>>> type(ser)
pandas.core.series.Series
```

Now, `ser` is **not** a ndarray, but because it [implements the __array_ufunc__ protocol](https://pandas.pydata.org/docs/user_guide/dsintro.html#dataframe-interoperability-with-numpy-functions), we can apply ufuncs to it as if it were a ndarray:

```
>>> np.exp(ser)
0     2.718282
1     7.389056
2    20.085537
3    54.598150
dtype: float64
>>> np.sin(ser)
0    0.841471
1    0.909297
2    0.141120
3   -0.756802
dtype: float64
```

We can even do operations with other ndarrays:

```
>>> np.add(ser, np.array([5, 6, 7, 8]))
0     6
1     8
2    10
3    12
dtype: int64
>>> f(ser)
21.1977562209304
>>> result = ser.__array__()
>>> type(result)
numpy.ndarray
```

### Example: PyTorch tensors

[PyTorch](https://pytorch.org/) is an optimized tensor library for deep learning using GPUs and CPUs. PyTorch arrays are commonly called *tensors*. Tensors are similar to NumPy’s ndarrays, except that tensors can run on GPUs or other hardware accelerators. In fact, tensors and NumPy arrays can often share the same underlying memory, eliminating the need to copy data.
``` >>> import torch >>> data = [[1, 2],[3, 4]] >>> x_np = np.array(data) >>> x_tensor = torch.tensor(data) ``` Note that `x_np` and `x_tensor` are different kinds of objects: ``` >>> x_np array([[1, 2], [3, 4]]) >>> x_tensor tensor([[1, 2], [3, 4]]) ``` However, we can treat PyTorch tensors as NumPy arrays without the need for explicit conversion: ``` >>> np.exp(x_tensor) tensor([[ 2.7183, 7.3891], [20.0855, 54.5982]], dtype=torch.float64) ``` Also, note that the return type of this function is compatible with the initial data type. Warning While this mixing of ndarrays and tensors may be convenient, it is not recommended. It will not work for non-CPU tensors, and will have unexpected behavior in corner cases. Users should prefer explicitly converting the ndarray to a tensor. Note PyTorch does not implement `__array_function__` or `__array_ufunc__`. Under the hood, the `Tensor.__array__()` method returns a NumPy ndarray as a view of the tensor data buffer. See [this issue](https://github.com/pytorch/pytorch/issues/24015) and the [__torch_function__ implementation](https://github.com/pytorch/pytorch/blob/master/torch/overrides.py) for details. Note also that we can see `__array_wrap__` in action here, even though `torch.Tensor` is not a subclass of ndarray: ``` >>> import torch >>> t = torch.arange(4) >>> np.abs(t) tensor([0, 1, 2, 3]) ``` PyTorch implements `__array_wrap__` to be able to get tensors back from NumPy functions, and we can modify it directly to control which type of objects are returned from these functions. ### Example: CuPy arrays CuPy is a NumPy/SciPy-compatible array library for GPU-accelerated computing with Python. CuPy implements a subset of the NumPy interface by implementing `cupy.ndarray`, [a counterpart to NumPy ndarrays](https://docs.cupy.dev/en/stable/reference/ndarray.html). ``` >>> import cupy as cp >>> x_gpu = cp.array([1, 2, 3, 4]) ``` The `cupy.ndarray` object implements the `__array_ufunc__` interface. 
This enables NumPy ufuncs to be applied to CuPy arrays (this will defer operation to the matching CuPy CUDA/ROCm implementation of the ufunc):

```
>>> np.mean(np.exp(x_gpu))
array(21.19775622)
```

Note that the return type of these operations is still consistent with the initial type:

```
>>> arr = cp.random.randn(1, 2, 3, 4).astype(cp.float32)
>>> result = np.sum(arr)
>>> print(type(result))
<class 'cupy._core.core.ndarray'>
```

See [this page in the CuPy documentation for details](https://docs.cupy.dev/en/stable/reference/ufunc.html).

`cupy.ndarray` also implements the `__array_function__` interface, meaning it is possible to do operations such as

```
>>> a = np.random.randn(100, 100)
>>> a_gpu = cp.asarray(a)
>>> qr_gpu = np.linalg.qr(a_gpu)
```

CuPy implements many NumPy functions on `cupy.ndarray` objects, but not all. See [the CuPy documentation](https://docs.cupy.dev/en/stable/user_guide/difference.html) for details.

### Example: Dask arrays

Dask is a flexible library for parallel computing in Python. Dask Array implements a subset of the NumPy ndarray interface using blocked algorithms, cutting up the large array into many small arrays. This allows computations on larger-than-memory arrays using multiple cores. Dask supports `__array__()` and `__array_ufunc__`.

```
>>> import dask.array as da
>>> x = da.random.normal(1, 0.1, size=(20, 20), chunks=(10, 10))
>>> np.mean(np.exp(x))
dask.array<mean_agg-aggregate, shape=(), dtype=float64, chunksize=(), chunktype=numpy.ndarray>
>>> np.mean(np.exp(x)).compute()
5.090097550553843
```

Note

Dask is lazily evaluated, and the result from a computation isn’t computed until you ask for it by invoking `compute()`.

See [the Dask array documentation](https://docs.dask.org/en/stable/array.html) and the [scope of Dask arrays interoperability with NumPy arrays](https://docs.dask.org/en/stable/array.html#scope) for details.

### Example: DLPack

Several Python data science libraries implement the `__dlpack__` protocol.
Among them are [PyTorch](https://pytorch.org/) and [CuPy](https://cupy.dev/). A full list of libraries that implement this protocol can be found on [this page of DLPack documentation](https://dmlc.github.io/dlpack/latest/index.html "(in DLPack)"). Convert a PyTorch CPU tensor to NumPy array: ``` >>> import torch >>> x_torch = torch.arange(5) >>> x_torch tensor([0, 1, 2, 3, 4]) >>> x_np = np.from_dlpack(x_torch) >>> x_np array([0, 1, 2, 3, 4]) >>> # note that x_np is a view of x_torch >>> x_torch[1] = 100 >>> x_torch tensor([ 0, 100, 2, 3, 4]) >>> x_np array([ 0, 100, 2, 3, 4]) ``` The imported arrays are read-only so writing or operating in-place will fail: ``` >>> x.flags.writeable False >>> x_np[1] = 1 Traceback (most recent call last): File "<stdin>", line 1, in <module> ValueError: assignment destination is read-only ``` A copy must be created in order to operate on the imported arrays in-place, but will mean duplicating the memory. Do not do this for very large arrays: ``` >>> x_np_copy = x_np.copy() >>> x_np_copy.sort() # works ``` Note Note that GPU tensors can’t be converted to NumPy arrays since NumPy doesn’t support GPU devices: ``` >>> x_torch = torch.arange(5, device='cuda') >>> np.from_dlpack(x_torch) Traceback (most recent call last): File "<stdin>", line 1, in <module> RuntimeError: Unsupported device in DLTensor. ``` But, if both libraries support the device the data buffer is on, it is possible to use the `__dlpack__` protocol (e.g. 
[PyTorch](https://pytorch.org/) and [CuPy](https://cupy.dev/)):

```
>>> x_torch = torch.arange(5, device='cuda')
>>> x_cupy = cupy.from_dlpack(x_torch)
```

Similarly, a NumPy array can be converted to a PyTorch tensor:

```
>>> x_np = np.arange(5)
>>> x_torch = torch.from_dlpack(x_np)
```

Read-only arrays cannot be exported:

```
>>> x_np = np.arange(5)
>>> x_np.flags.writeable = False
>>> torch.from_dlpack(x_np)
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File ".../site-packages/torch/utils/dlpack.py", line 63, in from_dlpack
    dlpack = ext_tensor.__dlpack__()
TypeError: NumPy currently only supports dlpack for writeable arrays
```

Further reading
---------------

* [The array interface protocol](../reference/arrays.interface#arrays-interface)
* [Writing custom array containers](basics.dispatch#basics-dispatch)
* [Special attributes and methods](../reference/arrays.classes#special-attributes-and-methods) (details on the `__array_ufunc__` and `__array_function__` protocols)
* [Subclassing ndarray](basics.subclassing#basics-subclassing) (details on the `__array_wrap__` and `__array_finalize__` methods)
* [Specific features of ndarray sub-typing](c-info.beyond-basics#specific-array-subtyping) (more details on the implementation of `__array_finalize__`, `__array_wrap__` and `__array_priority__`)
* [NumPy roadmap: interoperability](https://numpy.org/neps/roadmap.html "(in NumPy Enhancement Proposals)")
* [PyTorch documentation on the Bridge with NumPy](https://pytorch.org/tutorials/beginner/blitz/tensor_tutorial.html#bridge-to-np-label)

<https://numpy.org/doc/1.23/user/basics.interoperability.html>

numpy.source
============

numpy.source(*object*, *output=<_io.TextIOWrapper name='<stdout>' mode='w' encoding='utf-8'>*)[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/lib/utils.py#L635-L681)

Print or write to a file the source code for a NumPy object.
The source code is only returned for objects written in Python. Many functions and classes are defined in C and will therefore not return useful information.

Parameters

**object** : numpy object
Input object. This can be any object (function, class, module, ...).

**output** : file object, optional
If `output` is not supplied, then the source code is printed to screen (sys.stdout). The file object must be created with either write ‘w’ or append ‘a’ mode.

See also

[`lookfor`](numpy.lookfor#numpy.lookfor "numpy.lookfor"), [`info`](numpy.info#numpy.info "numpy.info")

#### Examples

```
>>> np.source(np.interp)
In file: /usr/lib/python2.6/dist-packages/numpy/lib/function_base.py
def interp(x, xp, fp, left=None, right=None):
    """.... (full docstring printed)"""
    if isinstance(x, (float, int, number)):
        return compiled_interp([x], xp, fp, left, right).item()
    else:
        return compiled_interp(x, xp, fp, left, right)
```

The source code is only returned for objects written in Python.

```
>>> np.source(np.array)
Not available for this object.
```

<https://numpy.org/doc/1.23/reference/generated/numpy.source.html>

How to extend NumPy
===================

Writing an extension module
---------------------------

While the ndarray object is designed to allow rapid computation in Python, it is also designed to be general-purpose and satisfy a wide variety of computational needs. As a result, if absolute speed is essential, there is no replacement for a well-crafted, compiled loop specific to your application and hardware. This is one of the reasons that NumPy includes f2py, so that easy-to-use mechanisms for linking (simple) C/C++ and (arbitrary) Fortran code directly into Python are available. You are encouraged to use and improve this mechanism. The purpose of this section is not to document this tool but to document the more basic steps to writing an extension module that this tool depends on.

When an extension module is written, compiled, and installed to somewhere in the Python path (sys.path), the code can then be imported into Python as if it were a standard python file. It will contain objects and methods that have been defined and compiled in C code.
The basic steps for doing this in Python are well-documented and you can find more information in the documentation for Python itself, available online at [www.python.org](https://www.python.org).

In addition to the Python C-API, there is a full and rich C-API for NumPy allowing sophisticated manipulations on a C-level. However, for most applications, only a few API calls will typically be used. For example, if you need to just extract a pointer to memory along with some shape information to pass to another calculation routine, then you will use very different calls than if you are trying to create a new array-like type or add a new data type for ndarrays. This chapter documents the API calls and macros that are most commonly used.

Required subroutine
-------------------

There is exactly one function that must be defined in your C-code in order for Python to use it as an extension module. The function must be called init{name}, where {name} is the name of the module from Python. This function must be declared so that it is visible to code outside of the routine. Besides adding the methods and constants you desire, this subroutine must also contain calls like `import_array()` and/or `import_ufunc()` depending on which C-API is needed. Forgetting to place these commands will show itself as an ugly segmentation fault (crash) as soon as any C-API subroutine is actually called. It is actually possible to have multiple init{name} functions in a single file, in which case multiple modules will be defined by that file. However, there are some tricks to get that to work correctly and it is not covered here. A minimal `init{name}` method looks like:

```
PyMODINIT_FUNC
init{name}(void)
{
    (void)Py_InitModule({name}, mymethods);
    import_array();
}
```

The mymethods must be an array (usually statically declared) of PyMethodDef structures which contain method names, actual C-functions, a variable indicating whether the method uses keyword arguments or not, and docstrings.
These are explained in the next section. If you want to add constants to the module, then you store the returned value from Py_InitModule, which is a module object. The most general way to add items to the module is to get the module dictionary using PyModule_GetDict(module). With the module dictionary, you can add whatever you like to the module manually. An easier way to add objects to the module is to use one of three additional Python C-API calls that do not require a separate extraction of the module dictionary. These are documented in the Python documentation, but repeated here for convenience:

int PyModule_AddObject([PyObject](https://docs.python.org/3/c-api/structures.html#c.PyObject "(in Python v3.10)") *module, char *name, [PyObject](https://docs.python.org/3/c-api/structures.html#c.PyObject "(in Python v3.10)") *value)

int PyModule_AddIntConstant([PyObject](https://docs.python.org/3/c-api/structures.html#c.PyObject "(in Python v3.10)") *module, char *name, long value)

int PyModule_AddStringConstant([PyObject](https://docs.python.org/3/c-api/structures.html#c.PyObject "(in Python v3.10)") *module, char *name, char *value)

All three of these functions require the *module* object (the return value of Py_InitModule). The *name* is a string that labels the value in the module. Depending on which function is called, the *value* argument is either a general object ([`PyModule_AddObject`](#c.PyModule_AddObject "PyModule_AddObject") steals a reference to it), an integer constant, or a string constant.

Defining functions
------------------

The second argument passed in to the Py_InitModule function is a structure that makes it easy to define functions in the module.
In the example given above, the mymethods structure would have been defined earlier in the file (usually right before the init{name} subroutine) to:

```
static PyMethodDef mymethods[] = {
    {"nokeywordfunc", nokeyword_cfunc,
     METH_VARARGS,
     "Doc string"},
    {"keywordfunc", keyword_cfunc,
     METH_VARARGS | METH_KEYWORDS,
     "Doc string"},
    {NULL, NULL, 0, NULL}   /* Sentinel */
};
```

Each entry in the mymethods array is a [`PyMethodDef`](https://docs.python.org/3/c-api/structures.html#c.PyMethodDef "(in Python v3.10)") structure containing 1) the Python name, 2) the C-function that implements the function, 3) flags indicating whether or not keywords are accepted for this function, and 4) the docstring for the function. Any number of functions may be defined for a single module by adding more entries to this table. The last entry must be all NULL as shown to act as a sentinel. Python looks for this entry to know that all of the functions for the module have been defined.

The last thing that must be done to finish the extension module is to actually write the code that performs the desired functions. There are two kinds of functions: those that don’t accept keyword arguments, and those that do.

### Functions without keyword arguments

Functions that don’t accept keyword arguments should be written as:

```
static PyObject*
nokeyword_cfunc (PyObject *dummy, PyObject *args)
{
    /* convert Python arguments */
    /* do function */
    /* return something */
}
```

The dummy argument is not used in this context and can be safely ignored. The *args* argument contains all of the arguments passed in to the function as a tuple. You can do anything you want at this point, but usually the easiest way to manage the input arguments is to call [`PyArg_ParseTuple`](https://docs.python.org/3/c-api/arg.html#c.PyArg_ParseTuple "(in Python v3.10)") (args, format_string, addresses_to_C_variables
) or [`PyArg_UnpackTuple`](https://docs.python.org/3/c-api/arg.html#c.PyArg_UnpackTuple "(in Python v3.10)") (tuple, “name”, min, max, 
). A good description of how to use the first function is contained in the Python C-API reference manual under section 5.5 (Parsing arguments and building values). You should pay particular attention to the “O&” format which uses converter functions to go between the Python object and the C object. All of the other format functions can be (mostly) thought of as special cases of this general rule. There are several converter functions defined in the NumPy C-API that may be of use. In particular, the [`PyArray_DescrConverter`](../reference/c-api/array#c.PyArray_DescrConverter "PyArray_DescrConverter") function is very useful to support arbitrary data-type specification. This function transforms any valid data-type Python object into a [PyArray_Descr](../reference/c-api/types-and-structures#c.PyArray_Descr "PyArray_Descr")* object. Remember to pass in the address of the C-variables that should be filled in. There are lots of examples of how to use [`PyArg_ParseTuple`](https://docs.python.org/3/c-api/arg.html#c.PyArg_ParseTuple "(in Python v3.10)") throughout the NumPy source code. The standard usage is like this:

```
PyObject *input;
PyArray_Descr *dtype;
if (!PyArg_ParseTuple(args, "OO&", &input,
                      PyArray_DescrConverter, &dtype))
    return NULL;
```

It is important to keep in mind that you get a *borrowed* reference to the object when using the “O” format string. However, the converter functions usually require some form of memory handling. In this example, if the conversion is successful, *dtype* will hold a new reference to a [PyArray_Descr](../reference/c-api/types-and-structures#c.PyArray_Descr "PyArray_Descr")* object, while *input* will hold a borrowed reference. Therefore, if this conversion were mixed with another conversion (say to an integer) and the data-type conversion was successful but the integer conversion failed, then you would need to release the reference count to the data-type object before returning.
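At the Python level, the same flexible data-type specifications that [`PyArray_DescrConverter`](../reference/c-api/array#c.PyArray_DescrConverter "PyArray_DescrConverter") handles are accepted by the `np.dtype` constructor, so `np.dtype` is a quick way to check what a given “O&” conversion will produce (a small sketch, assuming NumPy is installed):

```python
import numpy as np

# np.dtype accepts the same specifications that
# PyArray_DescrConverter handles at the C level:
assert np.dtype("float64") == np.dtype(np.float64)
assert np.dtype("i4") == np.dtype(np.int32)

# Structured specifications work too:
rec = np.dtype([("x", np.int32), ("y", np.float64)])
assert rec.itemsize == 12
```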
A typical way to do this is to set *dtype* to `NULL` before calling [`PyArg_ParseTuple`](https://docs.python.org/3/c-api/arg.html#c.PyArg_ParseTuple "(in Python v3.10)") and then use [`Py_XDECREF`](https://docs.python.org/3/c-api/refcounting.html#c.Py_XDECREF "(in Python v3.10)") on *dtype* before returning. After the input arguments are processed, the code that actually does the work is written (likely calling other functions as needed). The final step of the C-function is to return something. If an error is encountered then `NULL` should be returned (making sure an error has actually been set). If nothing should be returned then increment the reference count of [`Py_None`](https://docs.python.org/3/c-api/none.html#c.Py_None "(in Python v3.10)") and return it. If a single object should be returned then it is returned (ensuring that you own a reference to it first). If multiple objects should be returned then you need to return a tuple. The [`Py_BuildValue`](https://docs.python.org/3/c-api/arg.html#c.Py_BuildValue "(in Python v3.10)") (format_string, c_variables
) function makes it easy to build tuples of Python objects from C variables. Pay special attention to the difference between ‘N’ and ‘O’ in the format string or you can easily create memory leaks. The ‘O’ format string increments the reference count of the [PyObject](https://docs.python.org/3/c-api/structures.html#c.PyObject "(in Python v3.10)")* C-variable it corresponds to, while the ‘N’ format string steals a reference to the corresponding [PyObject](https://docs.python.org/3/c-api/structures.html#c.PyObject "(in Python v3.10)")* C-variable. You should use ‘N’ if you have already created a reference for the object and just want to give that reference to the tuple. You should use ‘O’ if you only have a borrowed reference to an object and need to create one to provide for the tuple.

### Functions with keyword arguments

These functions are very similar to functions without keyword arguments. The only difference is that the function signature is:

```
static PyObject*
keyword_cfunc (PyObject *dummy, PyObject *args, PyObject *kwds)
{
    ...
}
```

The kwds argument holds a Python dictionary whose keys are the names of the keyword arguments and whose values are the corresponding keyword-argument values. This dictionary can be processed however you see fit. The easiest way to handle it, however, is to replace the [`PyArg_ParseTuple`](https://docs.python.org/3/c-api/arg.html#c.PyArg_ParseTuple "(in Python v3.10)") (args, format_string, addresses
) function with a call to [`PyArg_ParseTupleAndKeywords`](https://docs.python.org/3/c-api/arg.html#c.PyArg_ParseTupleAndKeywords "(in Python v3.10)") (args, kwds, format_string, char *kwlist[], addresses
). The kwlist parameter to this function is a `NULL`-terminated array of strings providing the expected keyword arguments. There should be one string for each entry in the format_string. Using this function will raise a TypeError if invalid keyword arguments are passed in. For more help on this function please see section 1.8 (Keyword Parameters for Extension Functions) of the Extending and Embedding tutorial in the Python documentation.

### Reference counting

The biggest difficulty when writing extension modules is reference counting. It is an important reason for the popularity of f2py, weave, Cython, ctypes, etc
. If you mis-handle reference counts you can get problems from memory-leaks to segmentation faults. The only strategy I know of to handle reference counts correctly is blood, sweat, and tears. First, you force it into your head that every Python variable has a reference count. Then, you understand exactly what each function does to the reference count of your objects, so that you can properly use DECREF and INCREF when you need them. Reference counting can really test the amount of patience and diligence you have towards your programming craft. Despite the grim depiction, most cases of reference counting are quite straightforward, with the most common difficulty being not using DECREF on objects before exiting early from a routine due to some error. In second place is the common error of not owning the reference on an object that is passed to a function or macro that is going to steal the reference ( *e.g.* [`PyTuple_SET_ITEM`](https://docs.python.org/3/c-api/tuple.html#c.PyTuple_SET_ITEM "(in Python v3.10)"), and most functions that take [`PyArray_Descr`](../reference/c-api/types-and-structures#c.PyArray_Descr "PyArray_Descr") objects). Typically you get a new reference to a variable when it is created or is the return value of some function (there are some prominent exceptions, however, such as getting an item out of a tuple or a dictionary). When you own the reference, you are responsible to make sure that [`Py_DECREF`](https://docs.python.org/3/c-api/refcounting.html#c.Py_DECREF "(in Python v3.10)") (var) is called when the variable is no longer necessary (and no other function has “stolen” its reference). Also, if you are passing a Python object to a function that will “steal” the reference, then you need to make sure you own it (or use [`Py_INCREF`](https://docs.python.org/3/c-api/refcounting.html#c.Py_INCREF "(in Python v3.10)") to get your own reference). You will also encounter the notion of borrowing a reference.
A function that borrows a reference does not alter the reference count of the object and does not expect to “hold on” to the reference. It’s just going to use the object temporarily. When you use [`PyArg_ParseTuple`](https://docs.python.org/3/c-api/arg.html#c.PyArg_ParseTuple "(in Python v3.10)") or [`PyArg_UnpackTuple`](https://docs.python.org/3/c-api/arg.html#c.PyArg_UnpackTuple "(in Python v3.10)") you receive a borrowed reference to the objects in the tuple and should not alter their reference count inside your function. With practice, you can learn to get reference counting right, but it can be frustrating at first.

One common source of reference-count errors is the [`Py_BuildValue`](https://docs.python.org/3/c-api/arg.html#c.Py_BuildValue "(in Python v3.10)") function. Pay careful attention to the difference between the ‘N’ format character and the ‘O’ format character. If you create a new object in your subroutine (such as an output array), and you are passing it back in a tuple of return values, then you should most likely use the ‘N’ format character in [`Py_BuildValue`](https://docs.python.org/3/c-api/arg.html#c.Py_BuildValue "(in Python v3.10)"). The ‘O’ character will increase the reference count by one. This will leave the caller with two reference counts for a brand-new array. When the variable is deleted and the reference count decremented by one, there will still be that extra reference count, and the array will never be deallocated. You will have a reference-counting induced memory leak. Using the ‘N’ character will avoid this situation, as it will return to the caller an object (inside the tuple) with a single reference count.

Dealing with array objects
--------------------------

Most extension modules for NumPy will need to access the memory for an ndarray object (or one of its sub-classes). The easiest way to do this doesn’t require you to know much about the internals of NumPy. The method is to:
1. Ensure you are dealing with a well-behaved array (aligned, in machine byte-order and single-segment) of the correct type and number of dimensions:
   1. by converting it from some Python object using [`PyArray_FromAny`](../reference/c-api/array#c.PyArray_FromAny "PyArray_FromAny") or a macro built on it, or
   2. by constructing a new ndarray of your desired shape and type using [`PyArray_NewFromDescr`](../reference/c-api/array#c.PyArray_NewFromDescr "PyArray_NewFromDescr") or a simpler macro or function based on it.
2. Get the shape of the array and a pointer to its actual data.
3. Pass the data and shape information on to a subroutine or other section of code that actually performs the computation.
4. If you are writing the algorithm, then I recommend that you use the stride information contained in the array to access the elements of the array (the [`PyArray_GetPtr`](../reference/c-api/array#c.PyArray_GetPtr "PyArray_GetPtr") macros make this painless). Then, you can relax your requirements so as not to force a single-segment array and the data-copying that might result.

Each of these sub-topics is covered in the following sub-sections.

### Converting an arbitrary sequence object

The main routine for obtaining an array from any Python object that can be converted to an array is [`PyArray_FromAny`](../reference/c-api/array#c.PyArray_FromAny "PyArray_FromAny"). This function is very flexible with many input arguments. Several macros make it easier to use the basic function. [`PyArray_FROM_OTF`](../reference/c-api/array#c.PyArray_FROM_OTF "PyArray_FROM_OTF") is arguably the most useful of these macros for the most common uses. It allows you to convert an arbitrary Python object to an array of a specific builtin data-type ( *e.g.* float), while specifying a particular set of requirements ( *e.g.* contiguous, aligned, and writeable).
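From Python, `np.require` performs an analogous conversion; this sketch (assuming NumPy is installed) mirrors a `PyArray_FROM_OTF(obj, NPY_DOUBLE, NPY_ARRAY_IN_ARRAY)` call:

```python
import numpy as np

# Convert an arbitrary nested sequence to a C-contiguous, aligned
# float64 array, copying only when necessary -- the Python-level
# counterpart of PyArray_FROM_OTF with NPY_ARRAY_IN_ARRAY.
obj = [[1, 2, 3], [4, 5, 6]]
arr = np.require(obj, dtype=np.float64, requirements=["C", "A"])
assert arr.flags["C_CONTIGUOUS"] and arr.flags["ALIGNED"]
assert arr.dtype == np.float64

# An array that already satisfies the requirements passes through
# without a copy:
same = np.require(arr, dtype=np.float64, requirements=["C", "A"])
assert same is arr
```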
The syntax is `PyArray_FROM_OTF(obj, typenum, requirements)`.

Return an ndarray from any Python object, *obj*, that can be converted to an array. The number of dimensions in the returned array is determined by the object. The desired data-type of the returned array is provided in *typenum*, which should be one of the enumerated types. The *requirements* for the returned array can be any combination of standard array flags. Each of these arguments is explained in more detail below. You receive a new reference to the array on success. On failure, `NULL` is returned and an exception is set.

*obj*

The object can be any Python object convertible to an ndarray. If the object is already (a subclass of) the ndarray that satisfies the requirements then a new reference is returned. Otherwise, a new array is constructed. The contents of *obj* are copied to the new array unless the array interface is used so that data does not have to be copied. Objects that can be converted to an array include: 1) any nested sequence object, 2) any object exposing the array interface, 3) any object with an [`__array__`](../reference/arrays.classes#numpy.class.__array__ "numpy.class.__array__") method (which should return an ndarray), and 4) any scalar object (becomes a zero-dimensional array). Sub-classes of the ndarray that otherwise fit the requirements will be passed through. If you want to ensure a base-class ndarray, then use [`NPY_ARRAY_ENSUREARRAY`](../reference/c-api/array#c.NPY_ARRAY_ENSUREARRAY "NPY_ARRAY_ENSUREARRAY") in the requirements flag. A copy is made only if necessary. If you want to guarantee a copy, then pass in [`NPY_ARRAY_ENSURECOPY`](../reference/c-api/array#c.NPY_ARRAY_ENSURECOPY "NPY_ARRAY_ENSURECOPY") to the requirements flag.

*typenum*

One of the enumerated types or [`NPY_NOTYPE`](../reference/c-api/dtype#c.NPY_NOTYPE "NPY_NOTYPE") if the data-type should be determined from the object itself.
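Each enumerated typenum has a Python-level counterpart; the `num` attribute of a `np.dtype` exposes the enumeration value (a sketch, assuming NumPy is installed):

```python
import numpy as np

# dtype.num is the C-level typenum for that data-type:
assert np.dtype(np.float64).num == np.dtype("float64").num

# Distinct enumerated types get distinct numbers:
nums = {np.dtype(t).num for t in (np.bool_, np.int8, np.uint8,
                                  np.float32, np.float64)}
assert len(nums) == 5
```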
The C-based names can be used: [`NPY_BOOL`](../reference/c-api/dtype#c.NPY_BOOL "NPY_BOOL"), [`NPY_BYTE`](../reference/c-api/dtype#c.NPY_BYTE "NPY_BYTE"), [`NPY_UBYTE`](../reference/c-api/dtype#c.NPY_UBYTE "NPY_UBYTE"), [`NPY_SHORT`](../reference/c-api/dtype#c.NPY_SHORT "NPY_SHORT"), [`NPY_USHORT`](../reference/c-api/dtype#c.NPY_USHORT "NPY_USHORT"), [`NPY_INT`](../reference/c-api/dtype#c.NPY_INT "NPY_INT"), [`NPY_UINT`](../reference/c-api/dtype#c.NPY_UINT "NPY_UINT"), [`NPY_LONG`](../reference/c-api/dtype#c.NPY_LONG "NPY_LONG"), [`NPY_ULONG`](../reference/c-api/dtype#c.NPY_ULONG "NPY_ULONG"), [`NPY_LONGLONG`](../reference/c-api/dtype#c.NPY_LONGLONG "NPY_LONGLONG"), [`NPY_ULONGLONG`](../reference/c-api/dtype#c.NPY_ULONGLONG "NPY_ULONGLONG"), [`NPY_DOUBLE`](../reference/c-api/dtype#c.NPY_DOUBLE "NPY_DOUBLE"), [`NPY_LONGDOUBLE`](../reference/c-api/dtype#c.NPY_LONGDOUBLE "NPY_LONGDOUBLE"), [`NPY_CFLOAT`](../reference/c-api/dtype#c.NPY_CFLOAT "NPY_CFLOAT"), [`NPY_CDOUBLE`](../reference/c-api/dtype#c.NPY_CDOUBLE "NPY_CDOUBLE"), [`NPY_CLONGDOUBLE`](../reference/c-api/dtype#c.NPY_CLONGDOUBLE "NPY_CLONGDOUBLE"), [`NPY_OBJECT`](../reference/c-api/dtype#c.NPY_OBJECT "NPY_OBJECT"). Alternatively, the bit-width names can be used as supported on the platform. 
For example: [`NPY_INT8`](../reference/c-api/dtype#c.NPY_INT8 "NPY_INT8"), [`NPY_INT16`](../reference/c-api/dtype#c.NPY_INT16 "NPY_INT16"), [`NPY_INT32`](../reference/c-api/dtype#c.NPY_INT32 "NPY_INT32"), [`NPY_INT64`](../reference/c-api/dtype#c.NPY_INT64 "NPY_INT64"), [`NPY_UINT8`](../reference/c-api/dtype#c.NPY_UINT8 "NPY_UINT8"), [`NPY_UINT16`](../reference/c-api/dtype#c.NPY_UINT16 "NPY_UINT16"), [`NPY_UINT32`](../reference/c-api/dtype#c.NPY_UINT32 "NPY_UINT32"), [`NPY_UINT64`](../reference/c-api/dtype#c.NPY_UINT64 "NPY_UINT64"), [`NPY_FLOAT32`](../reference/c-api/dtype#c.NPY_FLOAT32 "NPY_FLOAT32"), [`NPY_FLOAT64`](../reference/c-api/dtype#c.NPY_FLOAT64 "NPY_FLOAT64"), [`NPY_COMPLEX64`](../reference/c-api/dtype#c.NPY_COMPLEX64 "NPY_COMPLEX64"), [`NPY_COMPLEX128`](../reference/c-api/dtype#c.NPY_COMPLEX128 "NPY_COMPLEX128"). The object will be converted to the desired type only if it can be done without losing precision. Otherwise `NULL` will be returned and an error raised. Use [`NPY_ARRAY_FORCECAST`](../reference/c-api/array#c.NPY_ARRAY_FORCECAST "NPY_ARRAY_FORCECAST") in the requirements flag to override this behavior.

*requirements*

The memory model for an ndarray admits arbitrary strides in each dimension to advance to the next element of the array. Often, however, you need to interface with code that expects a C-contiguous or a Fortran-contiguous memory layout. In addition, an ndarray can be misaligned (the address of an element is not at an integral multiple of the size of the element), which can cause your program to crash (or at least work more slowly) if you try to dereference a pointer into the array data. Both of these problems can be solved by converting the Python object into an array that is more “well-behaved” for your specific usage. The requirements flag allows specification of what kind of array is acceptable. If the object passed in does not satisfy these requirements then a copy is made so that the returned object will satisfy the requirements.
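The “convert only without losing precision” rule above roughly corresponds to NumPy's safe-casting check, and `NPY_ARRAY_FORCECAST` to casting unconditionally; a Python-level sketch (assuming NumPy is installed):

```python
import numpy as np

# Safe casts are allowed without NPY_ARRAY_FORCECAST:
assert np.can_cast(np.int16, np.float64)       # exact, so permitted

# Lossy casts are refused unless forced:
assert not np.can_cast(np.float64, np.int32)   # would lose precision

# NPY_ARRAY_FORCECAST corresponds to an unconditional astype():
forced = np.array([3.7, -1.2]).astype(np.int32)
assert forced.tolist() == [3, -1]
```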
The ndarray can use a very generic pointer to memory. This flag allows specification of the desired properties of the returned array object. All of the flags are explained in the detailed API chapter. The flags most commonly needed are [`NPY_ARRAY_IN_ARRAY`](../reference/c-api/array#c.NPY_ARRAY_IN_ARRAY "NPY_ARRAY_IN_ARRAY"), [`NPY_ARRAY_OUT_ARRAY`](../reference/c-api/array#c.NPY_ARRAY_OUT_ARRAY "NPY_ARRAY_OUT_ARRAY"), and [`NPY_ARRAY_INOUT_ARRAY`](../reference/c-api/array#c.NPY_ARRAY_INOUT_ARRAY "NPY_ARRAY_INOUT_ARRAY"):

[`NPY_ARRAY_IN_ARRAY`](../reference/c-api/array#c.NPY_ARRAY_IN_ARRAY "NPY_ARRAY_IN_ARRAY")

This flag is useful for arrays that must be in C-contiguous order and aligned. These kinds of arrays are usually input arrays for some algorithm.

[`NPY_ARRAY_OUT_ARRAY`](../reference/c-api/array#c.NPY_ARRAY_OUT_ARRAY "NPY_ARRAY_OUT_ARRAY")

This flag is useful to specify an array that is in C-contiguous order, is aligned, and can be written to as well. Such an array is usually returned as output (although normally such output arrays are created from scratch).

[`NPY_ARRAY_INOUT_ARRAY`](../reference/c-api/array#c.NPY_ARRAY_INOUT_ARRAY "NPY_ARRAY_INOUT_ARRAY")

This flag is useful to specify an array that will be used for both input and output. [`PyArray_ResolveWritebackIfCopy`](../reference/c-api/array#c.PyArray_ResolveWritebackIfCopy "PyArray_ResolveWritebackIfCopy") must be called before [`Py_DECREF`](https://docs.python.org/3/c-api/refcounting.html#c.Py_DECREF "(in Python v3.10)") at the end of the interface routine to write back the temporary data into the original array passed in. Use of the [`NPY_ARRAY_WRITEBACKIFCOPY`](../reference/c-api/array#c.NPY_ARRAY_WRITEBACKIFCOPY "NPY_ARRAY_WRITEBACKIFCOPY") flag requires that the input object is already an array (because other objects cannot be automatically updated in this fashion).
If an error occurs use [`PyArray_DiscardWritebackIfCopy`](../reference/c-api/array#c.PyArray_DiscardWritebackIfCopy "PyArray_DiscardWritebackIfCopy") (obj) on an array with these flags set. This will set the underlying base array writable without causing the contents to be copied back into the original array.

Other useful flags that can be OR’d as additional requirements are:

[`NPY_ARRAY_FORCECAST`](../reference/c-api/array#c.NPY_ARRAY_FORCECAST "NPY_ARRAY_FORCECAST")

Cast to the desired type, even if it can’t be done without losing information.

[`NPY_ARRAY_ENSURECOPY`](../reference/c-api/array#c.NPY_ARRAY_ENSURECOPY "NPY_ARRAY_ENSURECOPY")

Make sure the resulting array is a copy of the original.

[`NPY_ARRAY_ENSUREARRAY`](../reference/c-api/array#c.NPY_ARRAY_ENSUREARRAY "NPY_ARRAY_ENSUREARRAY")

Make sure the resulting object is an actual ndarray and not a sub-class.

Note

Whether or not an array is byte-swapped is determined by the data-type of the array. Native byte-order arrays are always requested by [`PyArray_FROM_OTF`](../reference/c-api/array#c.PyArray_FROM_OTF "PyArray_FROM_OTF") and so there is no need for a [`NPY_ARRAY_NOTSWAPPED`](../reference/c-api/array#c.NPY_ARRAY_NOTSWAPPED "NPY_ARRAY_NOTSWAPPED") flag in the requirements argument. There is also no way to get a byte-swapped array from this routine.

### Creating a brand-new ndarray

Quite often, new arrays must be created from within extension-module code. Perhaps an output array is needed and you don’t want the caller to have to supply it. Perhaps only a temporary array is needed to hold an intermediate calculation. Whatever the need, there are simple ways to get an ndarray object of whatever data-type is needed. The most general function for doing this is [`PyArray_NewFromDescr`](../reference/c-api/array#c.PyArray_NewFromDescr "PyArray_NewFromDescr"). All array creation functions go through this heavily re-used code. Because of its flexibility, it can be somewhat confusing to use.
As a result, simpler forms exist that are easier to use. These forms are part of the [`PyArray_SimpleNew`](../reference/c-api/array#c.PyArray_SimpleNew "PyArray_SimpleNew") family of functions, which simplify the interface by providing default values for common use cases. ### Getting at ndarray memory and accessing elements of the ndarray If obj is an ndarray ([PyArrayObject](../reference/c-api/types-and-structures#c.PyArrayObject "PyArrayObject")*), then the data-area of the ndarray is pointed to by the void* pointer [`PyArray_DATA`](../reference/c-api/array#c.PyArray_DATA "PyArray_DATA") (obj) or the char* pointer [`PyArray_BYTES`](../reference/c-api/array#c.PyArray_BYTES "PyArray_BYTES") (obj). Remember that (in general) this data-area may not be aligned according to the data-type, it may represent byte-swapped data, and/or it may not be writeable. If the data area is aligned and in native byte-order, then how to get at a specific element of the array is determined only by the array of npy_intp variables, [`PyArray_STRIDES`](../reference/c-api/array#c.PyArray_STRIDES "PyArray_STRIDES") (obj). In particular, this c-array of integers shows how many **bytes** must be added to the current element pointer to get to the next element in each dimension. For arrays less than 4-dimensions there are `PyArray_GETPTR{k}` (obj, 
) macros where {k} is the integer 1, 2, 3, or 4 that make using the array strides easier. The arguments 
… represent {k} non-negative integer indices into the array. For example, suppose `E` is a 3-dimensional ndarray. A (void*) pointer to the element `E[i,j,k]` is obtained as [`PyArray_GETPTR3`](../reference/c-api/array#c.PyArray_GETPTR3 "PyArray_GETPTR3") (E, i, j, k). As explained previously, C-style contiguous arrays and Fortran-style contiguous arrays have particular striding patterns. Two array flags ([`NPY_ARRAY_C_CONTIGUOUS`](../reference/c-api/array#c.NPY_ARRAY_C_CONTIGUOUS "NPY_ARRAY_C_CONTIGUOUS") and [`NPY_ARRAY_F_CONTIGUOUS`](../reference/c-api/array#c.NPY_ARRAY_F_CONTIGUOUS "NPY_ARRAY_F_CONTIGUOUS")) indicate whether or not the striding pattern of a particular array matches the C-style contiguous or Fortran-style contiguous pattern, or neither. Whether or not the striding pattern matches a standard C or Fortran one can be tested using [`PyArray_IS_C_CONTIGUOUS`](../reference/c-api/array#c.PyArray_IS_C_CONTIGUOUS "PyArray_IS_C_CONTIGUOUS") (obj) and [`PyArray_ISFORTRAN`](../reference/c-api/array#c.PyArray_ISFORTRAN "PyArray_ISFORTRAN") (obj) respectively. Most third-party libraries expect contiguous arrays. But, often it is not difficult to support general-purpose striding. I encourage you to use the striding information in your own code whenever possible, and reserve single-segment requirements for wrapping third-party code. Using the striding information provided with the ndarray rather than requiring a contiguous striding reduces copying that otherwise must be made.

Example
-------

The following example shows how you might write a wrapper that accepts two input arguments (that will be converted to an array) and an output argument (that must be an array). The function returns None and updates the output array.
Note the updated use of WRITEBACKIFCOPY semantics for NumPy v1.14 and above.

```
static PyObject *
example_wrapper(PyObject *dummy, PyObject *args)
{
    PyObject *arg1=NULL, *arg2=NULL, *out=NULL;
    PyObject *arr1=NULL, *arr2=NULL, *oarr=NULL;

    if (!PyArg_ParseTuple(args, "OOO!", &arg1, &arg2,
                          &PyArray_Type, &out))
        return NULL;

    arr1 = PyArray_FROM_OTF(arg1, NPY_DOUBLE, NPY_ARRAY_IN_ARRAY);
    if (arr1 == NULL)
        return NULL;
    arr2 = PyArray_FROM_OTF(arg2, NPY_DOUBLE, NPY_ARRAY_IN_ARRAY);
    if (arr2 == NULL)
        goto fail;
#if NPY_API_VERSION >= 0x0000000c
    oarr = PyArray_FROM_OTF(out, NPY_DOUBLE, NPY_ARRAY_INOUT_ARRAY2);
#else
    oarr = PyArray_FROM_OTF(out, NPY_DOUBLE, NPY_ARRAY_INOUT_ARRAY);
#endif
    if (oarr == NULL)
        goto fail;

    /* code that makes use of arguments */
    /* You will probably need at least
       nd = PyArray_NDIM(<..>)    -- number of dimensions
       dims = PyArray_DIMS(<..>)  -- npy_intp array of length nd
                                     showing length in each dim.
       dptr = (double *)PyArray_DATA(<..>) -- pointer to data.

       If an error occurs goto fail.
     */

    Py_DECREF(arr1);
    Py_DECREF(arr2);
#if NPY_API_VERSION >= 0x0000000c
    PyArray_ResolveWritebackIfCopy(oarr);
#endif
    Py_DECREF(oarr);
    Py_INCREF(Py_None);
    return Py_None;

 fail:
    Py_XDECREF(arr1);
    Py_XDECREF(arr2);
#if NPY_API_VERSION >= 0x0000000c
    PyArray_DiscardWritebackIfCopy(oarr);
#endif
    Py_XDECREF(oarr);
    return NULL;
}
```

© 2005–2022 NumPy Developers
Licensed under the 3-clause BSD License.
<https://numpy.org/doc/1.23/user/c-info.how-to-extend.html>

Using Python as glue
====================

Many people like to say that Python is a fantastic glue language. Hopefully, this Chapter will convince you that this is true. The first adopters of Python for science were typically people who used it to glue together large application codes running on super-computers.
Not only was it much nicer to code in Python than in a shell script or Perl; in addition, the ability to easily extend Python made it relatively easy to create new classes and types specifically adapted to the problems being solved. From the interactions of these early contributors, Numeric emerged as an array-like object that could be used to pass data between these applications. As Numeric has matured and developed into NumPy, people have been able to write more code directly in NumPy. Often this code is fast enough for production use, but there are still times that there is a need to access compiled code, either to get that last bit of efficiency out of the algorithm or to make it easier to access widely-available codes written in C/C++ or Fortran.

This chapter will review many of the tools that are available for the purpose of accessing code written in other compiled languages. There are many resources available for learning to call other compiled libraries from Python and the purpose of this chapter is not to make you an expert. The main goal is to make you aware of some of the possibilities so that you will know what to “Google” in order to learn more.

Calling other compiled libraries from Python
--------------------------------------------

While Python is a great language and a pleasure to code in, its dynamic nature results in overhead that can cause some code ( *i.e.* raw computations inside of for loops) to be up to 10-100 times slower than equivalent code written in a static compiled language. In addition, it can cause memory usage to be larger than necessary as temporary arrays are created and destroyed during computation. For many types of computing needs, the extra slow-down and memory consumption can often not be spared (at least for time- or memory-critical portions of your code). Therefore, one of the most common needs is to call out from Python code to a fast, machine-code routine (e.g. compiled using C/C++ or Fortran).
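One of the approaches this chapter discusses, calling a routine in an already-compiled shared library through `ctypes`, takes only a few lines; a minimal sketch (assumes a Unix-like system where the C library's symbols are loadable):

```python
import ctypes

# Load the symbols already linked into the running process (on a
# Unix-like system this includes the C library), then describe and
# call the compiled strlen routine directly.
libc = ctypes.CDLL(None)
libc.strlen.argtypes = [ctypes.c_char_p]
libc.strlen.restype = ctypes.c_size_t

assert libc.strlen(b"numpy") == 5
```

Declaring `argtypes` and `restype` up front is the ctypes equivalent of a C prototype; without it, argument marshalling falls back to guesses that can silently corrupt data.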
The fact that this is relatively easy to do is a big reason why Python is such an excellent high-level language for scientific and engineering programming. There are two basic approaches to calling compiled code: writing an extension module that is then imported to Python using the import command, or calling a shared-library subroutine directly from Python using the [ctypes](https://docs.python.org/3/library/ctypes.html) module. Writing an extension module is the most common method.

Warning

Calling C-code from Python can result in Python crashes if you are not careful. None of the approaches in this chapter are immune. You have to know something about the way data is handled by both NumPy and by the third-party library being used.

Hand-generated wrappers
-----------------------

Extension modules were discussed in [Writing an extension module](c-info.how-to-extend#writing-an-extension). The most basic way to interface with compiled code is to write an extension module and construct a module method that calls the compiled code. For improved readability, your method should take advantage of the `PyArg_ParseTuple` call to convert between Python objects and C data-types. For standard C data-types there is probably already a built-in converter. For others you may need to write your own converter and use the `"O&"` format string, which allows you to specify a function that will be used to perform the conversion from the Python object to whatever C-structures are needed.

Once the conversions to the appropriate C-structures and C data-types have been performed, the next step in the wrapper is to call the underlying function. This is straightforward if the underlying function is in C or C++. However, in order to call Fortran code you must be familiar with how Fortran subroutines are called from C/C++ using your compiler and platform.
This can vary somewhat between platforms and compilers (which is another reason f2py makes life much simpler for interfacing Fortran code) but generally involves underscore mangling of the name and the fact that all variables are passed by reference (i.e. all arguments are pointers).

The advantage of the hand-generated wrapper is that you have complete control over how the C-library gets used and called, which can lead to a lean and tight interface with minimal over-head. The disadvantage is that you have to write, debug, and maintain C-code, although most of it can be adapted using the time-honored technique of "cutting-pasting-and-modifying" from other extension modules. Because the procedure of calling out to additional C-code is fairly regimented, code-generation procedures have been developed to make this process easier. One of these code-generation techniques is distributed with NumPy and allows easy integration with Fortran and (simple) C code. This package, f2py, will be covered briefly in the next section.

f2py
----

F2py allows you to automatically construct an extension module that interfaces to routines in Fortran 77/90/95 code. It has the ability to parse Fortran 77/90/95 code and automatically generate Python signatures for the subroutines it encounters, or you can guide how the subroutine interfaces with Python by constructing an interface-definition-file (or modifying the f2py-produced one). See the [F2PY documentation](../f2py/index#f2py) for more information and examples.

The f2py method of linking compiled code is currently the most sophisticated and integrated approach. It allows clean separation of Python with compiled code while still allowing for separate distribution of the extension module. The only draw-back is that it requires the existence of a Fortran compiler in order for a user to install the code.
However, with the existence of the free compilers g77, gfortran, and g95, as well as high-quality commercial compilers, this restriction is not particularly onerous. In our opinion, Fortran is still the easiest way to write fast and clear code for scientific computing. It handles complex numbers and multi-dimensional indexing in the most straightforward way. Be aware, however, that some Fortran compilers will not be able to optimize code as well as good hand-written C-code.

Cython
------

[Cython](http://cython.org) is a compiler for a Python dialect that adds (optional) static typing for speed, and allows mixing C or C++ code into your modules. It produces C or C++ extensions that can be compiled and imported in Python code.

If you are writing an extension module that will include quite a bit of your own algorithmic code as well, then Cython is a good match. Among its features is the ability to easily and quickly work with multidimensional arrays.

Notice that Cython is an extension-module generator only. Unlike f2py, it includes no automatic facility for compiling and linking the extension module (which must be done in the usual fashion). It does provide a modified distutils class called `build_ext` which lets you build an extension module from a `.pyx` source. Thus, you could write in a `setup.py` file:

```
from Cython.Distutils import build_ext
from distutils.extension import Extension
from distutils.core import setup
import numpy

setup(name='mine', description='Nothing',
      ext_modules=[Extension('filter', ['filter.pyx'],
                             include_dirs=[numpy.get_include()])],
      cmdclass={'build_ext': build_ext})
```

Adding the NumPy include directory is, of course, only necessary if you are using NumPy arrays in the extension module (which is what we assume you are using Cython for). The distutils extensions in NumPy also include support for automatically producing the extension-module and linking it from a `.pyx` file.
It works so that if the user does not have Cython installed, then it looks for a file with the same file-name but a `.c` extension which it then uses instead of trying to produce the `.c` file again.

If you just use Cython to compile a standard Python module, then you will get a C extension module that typically runs a bit faster than the equivalent Python module. Further speed increases can be gained by using the `cdef` keyword to statically define C variables.

Let's look at two examples we've seen before to see how they might be implemented using Cython. These examples were compiled into extension modules using Cython 0.21.1.

### Complex addition in Cython

Here is part of a Cython module named `add.pyx` which implements the complex addition functions we previously implemented using f2py:

```
cimport cython
cimport numpy as np
import numpy as np

# We need to initialize NumPy.
np.import_array()

#@cython.boundscheck(False)
def zadd(in1, in2):
    cdef double complex[:] a = in1.ravel()
    cdef double complex[:] b = in2.ravel()

    out = np.empty(a.shape[0], np.complex64)
    cdef double complex[:] c = out.ravel()

    for i in range(c.shape[0]):
        c[i].real = a[i].real + b[i].real
        c[i].imag = a[i].imag + b[i].imag

    return out
```

This module shows use of the `cimport` statement to load the definitions from the `numpy.pxd` header that ships with Cython. It looks like NumPy is imported twice; `cimport` only makes the NumPy C-API available, while the regular `import` causes a Python-style import at runtime and makes it possible to call into the familiar NumPy Python API.

The example also demonstrates Cython's "typed memoryviews", which are like NumPy arrays at the C level, in the sense that they are shaped and strided arrays that know their own extent (unlike a C array addressed through a bare pointer). The syntax `double complex[:]` denotes a one-dimensional array (vector) of doubles, with arbitrary strides. A contiguous array of ints would be `int[::1]`, while a matrix of floats would be `float[:, :]`.
Shown commented is the `cython.boundscheck` decorator, which turns bounds-checking for memory view accesses on or off on a per-function basis. We can use this to further speed up our code, at the expense of safety (or a manual check prior to entering the loop).

Other than the view syntax, the function is immediately readable to a Python programmer. Static typing of the variable `i` is implicit. Instead of the view syntax, we could also have used Cython's special NumPy array syntax, but the view syntax is preferred.

### Image filter in Cython

The two-dimensional example we created using Fortran is just as easy to write in Cython:

```
cimport numpy as np
import numpy as np

np.import_array()

def filter(img):
    cdef double[:, :] a = np.asarray(img, dtype=np.double)
    out = np.zeros(img.shape, dtype=np.double)
    cdef double[:, ::1] b = out

    cdef np.npy_intp i, j

    for i in range(1, a.shape[0] - 1):
        for j in range(1, a.shape[1] - 1):
            b[i, j] = (a[i, j]
                       + .5 * (a[i-1, j] + a[i+1, j]
                               + a[i, j-1] + a[i, j+1])
                       + .25 * (a[i-1, j-1] + a[i-1, j+1]
                                + a[i+1, j-1] + a[i+1, j+1]))

    return out
```

This 2-d averaging filter runs quickly because the loop is in C and the pointer computations are done only as needed. If the code above is compiled as a module `image`, then a 2-d image, `img`, can be filtered using this code very quickly using:

```
import image
out = image.filter(img)
```

Regarding the code, two things are of note: firstly, it is impossible to return a memory view to Python. Instead, a NumPy array `out` is first created, and then a view `b` onto this array is used for the computation. Secondly, the view `b` is typed `double[:, ::1]`. This means a 2-d array with contiguous rows, i.e., C matrix order. Specifying the order explicitly can speed up some algorithms since they can skip stride computations.
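For reference, the same averaging filter can be written with pure NumPy slicing. This is a sketch of ours (the function name and the test image are illustrative, not part of the original module), handy as a correctness check for a compiled `image.filter`:

```python
import numpy as np

def filter_reference(img):
    # Same 2-d averaging filter as the Cython version, written with
    # NumPy slicing over the interior points; the one-pixel border
    # stays zero, just as in the compiled versions.
    a = np.asarray(img, dtype=np.double)
    b = np.zeros_like(a)
    b[1:-1, 1:-1] = (a[1:-1, 1:-1]
                     + .5 * (a[:-2, 1:-1] + a[2:, 1:-1]
                             + a[1:-1, :-2] + a[1:-1, 2:])
                     + .25 * (a[:-2, :-2] + a[:-2, 2:]
                              + a[2:, :-2] + a[2:, 2:]))
    return b

img = np.arange(25, dtype=np.double).reshape(5, 5)
out = filter_reference(img)
print(out[2, 2])  # 48.0
```

Because the whole computation is expressed as array operations, this version is also reasonably fast on its own; the compiled loop mainly wins by avoiding the eight temporary arrays the slicing version creates.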
### Conclusion

Cython is the extension mechanism of choice for several scientific Python libraries, including Scipy, Pandas, SAGE, scikit-image and scikit-learn, as well as the XML processing library LXML. The language and compiler are well-maintained.

There are several disadvantages of using Cython:

1. When coding custom algorithms, and sometimes when wrapping existing C libraries, some familiarity with C is required. In particular, when using C memory management (`malloc` and friends), it's easy to introduce memory leaks. However, just compiling a Python module renamed to `.pyx` can already speed it up, and adding a few type declarations can give dramatic speedups in some code.
2. It is easy to lose a clean separation between Python and C which makes re-using your C-code for other non-Python-related projects more difficult.
3. The C-code generated by Cython is hard to read and modify (and typically compiles with annoying but harmless warnings).

One big advantage of Cython-generated extension modules is that they are easy to distribute. In summary, Cython is a very capable tool for either gluing C code or generating an extension module quickly and should not be over-looked. It is especially useful for people that can't or won't write C or Fortran code.

ctypes
------

[Ctypes](https://docs.python.org/3/library/ctypes.html) is a Python extension module, included in the stdlib, that allows you to call an arbitrary function in a shared library directly from Python. This approach allows you to interface with C-code directly from Python. This opens up an enormous number of libraries for use from Python. The drawback, however, is that coding mistakes can lead to ugly program crashes very easily (just as can happen in C) because there is little type or bounds checking done on the parameters. This is especially true when array data is passed in as a pointer to a raw memory location.
It is then your responsibility to ensure that the subroutine does not access memory outside the actual array area. But, if you don't mind living a little dangerously, ctypes can be an effective tool for quickly taking advantage of a large shared library (or writing extended functionality in your own shared library).

Because the ctypes approach exposes a raw interface to the compiled code it is not always tolerant of user mistakes. Robust use of the ctypes module typically involves an additional layer of Python code in order to check the data types and array bounds of objects passed to the underlying subroutine. This additional layer of checking (not to mention the conversion from ctypes objects to C-data-types that ctypes itself performs) will make the interface slower than a hand-written extension-module interface. However, this overhead should be negligible if the C-routine being called is doing any significant amount of work. If you are a great Python programmer with weak C skills, ctypes is an easy way to write a useful interface to a (shared) library of compiled code.

To use ctypes you must

1. Have a shared library.
2. Load the shared library.
3. Convert the Python objects to ctypes-understood arguments.
4. Call the function from the library with the ctypes arguments.

### Having a shared library

There are several requirements for a shared library that can be used with ctypes that are platform specific. This guide assumes you have some familiarity with making a shared library on your system (or simply have a shared library available to you). Items to remember are:

* A shared library must be compiled in a special way (*e.g.* using the `-shared` flag with gcc).
* On some platforms (*e.g.* Windows), a shared library requires a .def file that specifies the functions to be exported.
For example a mylib.def file might contain:

```
LIBRARY mylib.dll
EXPORTS
cool_function1
cool_function2
```

Alternatively, you may be able to use the storage-class specifier `__declspec(dllexport)` in the C-definition of the function to avoid the need for this `.def` file.

There is no standard way in Python distutils to create a standard shared library (an extension module is a "special" shared library Python understands) in a cross-platform manner. Thus, a big disadvantage of ctypes at the time of writing this book is that it is difficult to distribute in a cross-platform manner a Python extension that uses ctypes and includes your own code which should be compiled as a shared library on the user's system.

### Loading the shared library

A simple, but robust way to load the shared library is to get the absolute path name and load it using the cdll object of ctypes:

```
lib = ctypes.cdll[<full_path_name>]
```

However, on Windows accessing an attribute of the `cdll` method will load the first DLL by that name found in the current directory or on the PATH. Loading the absolute path name requires a little finesse for cross-platform work since the extension of shared libraries varies. There is a `ctypes.util.find_library` utility available that can simplify the process of finding the library to load but it is not foolproof. Complicating matters, different platforms have different default extensions used by shared libraries (e.g. .dll – Windows, .so – Linux, .dylib – Mac OS X). This must also be taken into account if you are using ctypes to wrap code that needs to work on several platforms.

NumPy provides a convenience function called `ctypeslib.load_library(name, path)`. This function takes the name of the shared library (including any prefix like 'lib' but excluding the extension) and a path where the shared library can be located.
It returns a ctypes library object or raises an `OSError` if the library cannot be found or raises an `ImportError` if the ctypes module is not available. (Windows users: the ctypes library object loaded using `load_library` is always loaded assuming cdecl calling convention. See the ctypes documentation under `ctypes.windll` and/or `ctypes.oledll` for ways to load libraries under other calling conventions).

The functions in the shared library are available as attributes of the ctypes library object (returned from `ctypeslib.load_library`) or as items using `lib['func_name']` syntax. The latter method for retrieving a function name is particularly useful if the function name contains characters that are not allowable in Python variable names.

### Converting arguments

Python ints/longs, strings, and unicode objects are automatically converted as needed to equivalent ctypes arguments. The None object is also converted automatically to a NULL pointer. All other Python objects must be converted to ctypes-specific types. There are two ways around this restriction that allow ctypes to integrate with other objects.

1. Don't set the argtypes attribute of the function object and define an `_as_parameter_` method for the object you want to pass in. The `_as_parameter_` method must return a Python int which will be passed directly to the function.
2. Set the argtypes attribute to a list whose entries contain objects with a classmethod named from_param that knows how to convert your object to an object that ctypes can understand (an int/long, string, unicode, or object with the `_as_parameter_` attribute).

NumPy uses both methods with a preference for the second method because it can be safer. The ctypes attribute of the ndarray returns an object that has an `_as_parameter_` attribute which returns an integer representing the address of the ndarray to which it is associated.
As a result, one can pass this ctypes attribute object directly to a function expecting a pointer to the data in your ndarray. The caller must be sure that the ndarray object is of the correct type, shape, and has the correct flags set, or risk nasty crashes if data-pointers to inappropriate arrays are passed in.

To implement the second method, NumPy provides the class-factory function [`ndpointer`](#ndpointer "ndpointer") in the [`numpy.ctypeslib`](../reference/routines.ctypeslib#module-numpy.ctypeslib "numpy.ctypeslib") module. This class-factory function produces an appropriate class that can be placed in an argtypes attribute entry of a ctypes function. The class will contain a from_param method which ctypes will use to convert any ndarray passed in to the function to a ctypes-recognized object. In the process, the conversion will perform checking on any properties of the ndarray that were specified by the user in the call to [`ndpointer`](#ndpointer "ndpointer"). Aspects of the ndarray that can be checked include the data-type, the number-of-dimensions, the shape, and/or the state of the flags on any array passed. The return value of the from_param method is the ctypes attribute of the array which (because it contains the `_as_parameter_` attribute pointing to the array data area) can be used by ctypes directly.

The ctypes attribute of an ndarray is also endowed with additional attributes that may be convenient when passing additional information about the array into a ctypes function. The attributes **data**, **shape**, and **strides** can provide ctypes compatible types corresponding to the data-area, the shape, and the strides of the array. The data attribute returns a `c_void_p` representing a pointer to the data area. The shape and strides attributes each return an array of ctypes integers (or None representing a NULL pointer, if a 0-d array). The base ctype of the array is a ctype integer of the same size as a pointer on the platform.
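The checking that `ndpointer` performs is easy to observe interactively. This short sketch (the arrays are made up for illustration) calls `from_param` by hand, the way ctypes would when the class sits in an argtypes entry:

```python
import numpy as np
from numpy.ctypeslib import ndpointer

# Build the argtypes-entry class: require a 1-d, float64,
# C-contiguous array.
arg = ndpointer(dtype=np.float64, ndim=1, flags='C_CONTIGUOUS')

good = np.zeros(4)           # satisfies every requirement
ok = arg.from_param(good)    # accepted; returns good.ctypes

try:
    arg.from_param(np.zeros(4, dtype=np.int32))  # wrong dtype
except TypeError as exc:
    print("rejected:", exc)

# The ctypes attribute also exposes shape/strides helpers:
print(good.ctypes.shape_as(np.ctypeslib.c_intp)[0])  # 4
```

The accepted call hands back the array's ctypes attribute, which is exactly the object a ctypes function can consume through its `_as_parameter_` attribute.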
There are also methods `data_as({ctype})`, `shape_as(<base ctype>)`, and `strides_as(<base ctype>)`. These return the data as a ctype object of your choice and the shape/strides arrays using an underlying base type of your choice. For convenience, the `ctypeslib` module also contains `c_intp` as a ctypes integer data-type whose size is the same as the size of `c_void_p` on the platform (its value is None if ctypes is not installed).

### Calling the function

The function is accessed as an attribute of or an item from the loaded shared-library. Thus, if `./mylib.so` has a function named `cool_function1`, it may be accessed either as:

```
lib = numpy.ctypeslib.load_library('mylib', '.')
func1 = lib.cool_function1  # or equivalently
func1 = lib['cool_function1']
```

In ctypes, the return-value of a function is set to be 'int' by default. This behavior can be changed by setting the restype attribute of the function. Use None for the restype if the function has no return value ('void'):

```
func1.restype = None
```

As previously discussed, you can also set the argtypes attribute of the function in order to have ctypes check the types of the input arguments when the function is called. Use the [`ndpointer`](#ndpointer "ndpointer") factory function to generate a ready-made class for data-type, shape, and flags checking on your new function. The [`ndpointer`](#ndpointer "ndpointer") function has the signature

ndpointer(*dtype=None*, *ndim=None*, *shape=None*, *flags=None*)

Keyword arguments with the value `None` are not checked. Specifying a keyword enforces checking of that aspect of the ndarray on conversion to a ctypes-compatible object. The dtype keyword can be any object understood as a data-type object. The ndim keyword should be an integer, and the shape keyword should be an integer or a sequence of integers. The flags keyword specifies the minimal flags that are required on any array passed in.
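Restype and argtypes can be tried out against a library most systems already have. This sketch assumes a Unix-like platform where `find_library('m')` locates the C math library (with a fallback to the process's own symbols if it does not):

```python
import ctypes
import ctypes.util

# Find and load the C math library; find_library handles the
# platform-specific naming ("libm.so.6" on Linux, for example).
name = ctypes.util.find_library("m")
libm = ctypes.CDLL(name) if name else ctypes.CDLL(None)

# Without these declarations ctypes would pass and return ints,
# silently corrupting the double argument.
libm.cos.argtypes = [ctypes.c_double]
libm.cos.restype = ctypes.c_double

print(libm.cos(0.0))  # 1.0
```

This is the same pattern the complete example below applies to its own shared library, with `ndpointer` classes standing in for `c_double` in the argtypes list.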
This can be specified as a string of comma separated requirements, an integer indicating the requirement bits OR'd together, or a flags object returned from the flags attribute of an array with the necessary requirements.

Using an ndpointer class in the argtypes method can make it significantly safer to call a C function using ctypes and the data-area of an ndarray. You may still want to wrap the function in an additional Python wrapper to make it user-friendly (hiding some obvious arguments and making some arguments output arguments). In this process, the `requires` function in NumPy may be useful to return the right kind of array from a given input.

### Complete example

In this example, we will demonstrate how the addition function and the filter function implemented previously using the other approaches can be implemented using ctypes. First, the C code which implements the algorithms contains the functions `zadd`, `dadd`, `sadd`, `cadd`, and `dfilter2d`. The `zadd` function is:

```
/* Add arrays of contiguous data */
typedef struct {double real; double imag;} cdouble;
typedef struct {float real; float imag;} cfloat;

void zadd(cdouble *a, cdouble *b, cdouble *c, long n)
{
    while (n--) {
        c->real = a->real + b->real;
        c->imag = a->imag + b->imag;
        a++; b++; c++;
    }
}
```

with similar code for `cadd`, `dadd`, and `sadd` that handles complex float, double, and float data-types, respectively:

```
void cadd(cfloat *a, cfloat *b, cfloat *c, long n)
{
    while (n--) {
        c->real = a->real + b->real;
        c->imag = a->imag + b->imag;
        a++; b++; c++;
    }
}

void dadd(double *a, double *b, double *c, long n)
{
    while (n--) {
        *c++ = *a++ + *b++;
    }
}

void sadd(float *a, float *b, float *c, long n)
{
    while (n--) {
        *c++ = *a++ + *b++;
    }
}
```

The `code.c` file also contains the function `dfilter2d`:

```
/*
 * Assumes b is contiguous and has strides that are multiples of
 * sizeof(double)
 */
void dfilter2d(double *a, double *b, ssize_t *astrides, ssize_t *dims)
{
    ssize_t i, j, M, N, S0, S1;
    ssize_t r, c, rm1, rp1, cp1, cm1;

    M = dims[0]; N = dims[1];
    S0 = astrides[0]/sizeof(double);
    S1 = astrides[1]/sizeof(double);
    for (i = 1; i < M - 1; i++) {
        r = i*S0;
        rp1 = r + S0;
        rm1 = r - S0;
        for (j = 1; j < N - 1; j++) {
            c = j*S1;
            cp1 = c + S1;
            cm1 = c - S1;
            b[i*N + j] = a[r + c] +
                (a[rp1 + c] + a[rm1 + c] +
                 a[r + cp1] + a[r + cm1])*0.5 +
                (a[rp1 + cp1] + a[rp1 + cm1] +
                 a[rm1 + cp1] + a[rm1 + cm1])*0.25;
        }
    }
}
```

A possible advantage this code has over the Fortran-equivalent code is that it takes arbitrarily strided (i.e. non-contiguous) arrays and may also run faster depending on the optimization capability of your compiler. But, it is obviously more complicated than the simple code in `filter.f`. This code must be compiled into a shared library. On my Linux system this is accomplished using:

```
gcc -o code.so -shared code.c
```

Which creates a shared library named code.so in the current directory. On Windows don't forget to either add `__declspec(dllexport)` in front of void on the line preceding each function definition, or write a `code.def` file that lists the names of the functions to be exported.

A suitable Python interface to this shared library should be constructed. To do this create a file named interface.py with the following lines at the top:

```
__all__ = ['add', 'filter2d']

import ctypes
import os

import numpy as np

_path = os.path.dirname(__file__)
lib = np.ctypeslib.load_library('code', _path)
_typedict = {'zadd' : complex, 'sadd' : np.single,
             'cadd' : np.csingle, 'dadd' : float}
for name in _typedict.keys():
    val = getattr(lib, name)
    val.restype = None
    _type = _typedict[name]
    val.argtypes = [np.ctypeslib.ndpointer(_type,
                        flags='aligned, contiguous'),
                    np.ctypeslib.ndpointer(_type,
                        flags='aligned, contiguous'),
                    np.ctypeslib.ndpointer(_type,
                        flags='aligned, contiguous,'
                              'writeable'),
                    np.ctypeslib.c_intp]
```

This code loads the shared library named `code.{ext}` located in the same path as this file.
It then adds a return type of void to the functions contained in the library. It also adds argument checking to the functions in the library so that ndarrays can be passed as the first three arguments along with an integer (large enough to hold a pointer on the platform) as the fourth argument.

Setting up the filtering function is similar and allows the filtering function to be called with ndarray arguments as the first two arguments and with pointers to integers (large enough to handle the strides and shape of an ndarray) as the last two arguments:

```
lib.dfilter2d.restype = None
lib.dfilter2d.argtypes = [np.ctypeslib.ndpointer(float, ndim=2,
                              flags='aligned'),
                          np.ctypeslib.ndpointer(float, ndim=2,
                              flags='aligned, contiguous,'
                                    'writeable'),
                          ctypes.POINTER(np.ctypeslib.c_intp),
                          ctypes.POINTER(np.ctypeslib.c_intp)]
```

Next, define a simple selection function that chooses which addition function to call in the shared library based on the data-type:

```
def select(dtype):
    if dtype.char in '?bBhHf':
        return lib.sadd, np.single
    elif dtype.char in 'F':
        return lib.cadd, np.csingle
    elif dtype.char in 'DG':
        return lib.zadd, complex
    else:
        return lib.dadd, float
```

Finally, the two functions to be exported by the interface can be written simply as:

```
def add(a, b):
    requires = ['CONTIGUOUS', 'ALIGNED']
    a = np.asanyarray(a)
    func, dtype = select(a.dtype)
    a = np.require(a, dtype, requires)
    b = np.require(b, dtype, requires)
    c = np.empty_like(a)
    func(a, b, c, a.size)
    return c
```

and:

```
def filter2d(a):
    a = np.require(a, float, ['ALIGNED'])
    b = np.zeros_like(a)
    lib.dfilter2d(a, b, a.ctypes.strides, a.ctypes.shape)
    return b
```

### Conclusion

Using ctypes is a powerful way to connect Python with arbitrary C-code.
Its advantages for extending Python include

* clean separation of C code from Python code
  + no need to learn a new syntax except Python and C
  + allows re-use of C code
  + functionality in shared libraries written for other purposes can be obtained with a simple Python wrapper and search for the library
* easy integration with NumPy through the ctypes attribute
* full argument checking with the ndpointer class factory

Its disadvantages include

* It is difficult to distribute an extension module made using ctypes because of a lack of support for building shared libraries in distutils.
* You must have shared-libraries of your code (no static libraries).
* Very little support for C++ code and its different library-calling conventions. You will probably need a C wrapper around C++ code to use with ctypes (or just use Boost.Python instead).

Because of the difficulty in distributing an extension module made using ctypes, f2py and Cython are still the easiest ways to extend Python for package creation. However, ctypes is in some cases a useful alternative.

Additional tools you may find useful
------------------------------------

These tools have been found useful by others using Python and so are included here. They are discussed separately because they are either older ways to do things now handled by f2py, Cython, or ctypes (SWIG, PyFort) or because of a lack of reasonable documentation (SIP, Boost). Links to these methods are not included since the most relevant can be found using Google or some other search engine, and any links provided here would be quickly dated. Do not assume that inclusion in this list means that the package deserves attention.
Information about these packages is collected here because many people have found them useful and we'd like to give you as many options as possible for tackling the problem of easily integrating your code.

### SWIG

Simplified Wrapper and Interface Generator (SWIG) is an old and fairly stable method for wrapping C/C++ libraries to a large variety of other languages. It does not specifically understand NumPy arrays but can be made usable with NumPy through the use of typemaps. There are some sample typemaps in the numpy/tools/swig directory under numpy.i together with an example module that makes use of them.

SWIG excels at wrapping large C/C++ libraries because it can (almost) parse their headers and auto-produce an interface. Technically, you need to generate a `.i` file that defines the interface. Often, however, this `.i` file can be parts of the header itself. The interface usually needs a bit of tweaking to be very useful. This ability to parse C/C++ headers and auto-generate the interface still makes SWIG a useful approach to adding functionality from C/C++ into Python, despite the other methods that have emerged that are more targeted to Python. SWIG can actually target extensions for several languages, but the typemaps usually have to be language-specific. Nonetheless, with modifications to the Python-specific typemaps, SWIG can be used to interface a library with other languages such as Perl, Tcl, and Ruby.

My experience with SWIG has been generally positive: it is relatively easy to use and quite powerful. I used it often before becoming more proficient at writing C-extensions. However, writing custom interfaces with SWIG is often troublesome because it must be done using the concept of typemaps, which are not Python specific and are written in a C-like syntax. Therefore, other gluing strategies are preferred and SWIG would probably be considered only to wrap a very large C/C++ library.
Nonetheless, there are others who use SWIG quite happily.

### SIP

SIP is another tool for wrapping C/C++ libraries that is Python specific and appears to have very good support for C++. Riverbank Computing developed SIP in order to create Python bindings to the QT library. An interface file must be written to generate the binding, but the interface file looks a lot like a C/C++ header file. While SIP is not a full C++ parser, it understands quite a bit of C++ syntax as well as its own special directives that allow modification of how the Python binding is accomplished. It also allows the user to define mappings between Python types and C/C++ structures and classes.

### Boost Python

Boost is a repository of C++ libraries and Boost.Python is one of those libraries which provides a concise interface for binding C++ classes and functions to Python. The amazing part of the Boost.Python approach is that it works entirely in pure C++ without introducing a new syntax. Many users of C++ report that Boost.Python makes it possible to combine the best of both worlds in a seamless fashion.

Using Boost to wrap simple C-subroutines is usually over-kill. Its primary purpose is to make C++ classes available in Python. So, if you have a set of C++ classes that need to be integrated cleanly into Python, consider learning about and using Boost.Python.

### PyFort

PyFort is a nice tool for wrapping Fortran and Fortran-like C-code into Python with support for Numeric arrays. It was written by a distinguished computer scientist who was the very first maintainer of Numeric (now retired). It is worth mentioning in the hopes that somebody will update PyFort to work with NumPy arrays as well, which now support either Fortran or C-style contiguous arrays.

© 2005–2022 NumPy Developers. Licensed under the 3-clause BSD License.
Writing your own ufunc
======================

Creating a new universal function
---------------------------------

Before reading this, it may help to familiarize yourself with the basics of C extensions for Python by reading/skimming the tutorials in Section 1 of [Extending and Embedding the Python Interpreter](https://docs.python.org/extending/index.html) and in [How to extend NumPy](c-info.how-to-extend).

The umath module is a computer-generated C-module that creates many ufuncs. It provides a great many examples of how to create a universal function. Creating your own ufunc that will make use of the ufunc machinery is not difficult either. Suppose you have a function that you want to operate element-by-element over its inputs. By creating a new ufunc you will obtain a function that handles

* broadcasting
* N-dimensional looping
* automatic type-conversions with minimal memory usage
* optional output arrays

It is not difficult to create your own ufunc. All that is required is a 1-d loop for each data-type you want to support. Each 1-d loop must have a specific signature, and only ufuncs for fixed-size data-types can be used. The function call used to create a new ufunc to work on built-in data-types is given below. A different mechanism is used to register ufuncs for user-defined data-types.

In the next several sections we give example code that can be easily modified to create your own ufuncs. The examples are successively more complete or complicated versions of the logit function, a common function in statistical modeling. Logit is also interesting because, due to the magic of IEEE standards (specifically IEEE 754), all of the logit functions created below automatically have the following behavior.

```
>>> logit(0)
-inf
>>> logit(1)
inf
>>> logit(2)
nan
>>> logit(-2)
nan
```

This is wonderful because the function writer doesn't have to manually propagate infs or nans.
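A pure-NumPy sketch (this helper is ours, not one of the ufunc examples) shows the same IEEE 754 behavior: `errstate` only silences the warnings, while the special values fall out of the arithmetic itself:

```python
import numpy as np

def logit(p):
    # logit(p) = log(p/(1-p)); IEEE 754 semantics produce the
    # special values (-inf, inf, nan) without any explicit checks.
    p = np.asarray(p, dtype=np.float64)
    with np.errstate(divide='ignore', invalid='ignore'):
        return np.log(p / (1 - p))

print(logit(0.0))   # -inf   (log of 0)
print(logit(1.0))   # inf    (log of 1/0 = inf)
print(logit(2.0))   # nan    (log of a negative number)
print(logit(0.5))   # 0.0
```

Every compiled version in the sections that follow inherits exactly this behavior from the floating-point hardware.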
Example Non-ufunc extension
---------------------------

For comparison and general edification of the reader we provide a simple implementation of a C extension of `logit` that uses no numpy. To do this we need two files. The first is the C file which contains the actual code, and the second is the `setup.py` file used to create the module.

```
#define PY_SSIZE_T_CLEAN
#include <Python.h>
#include <math.h>

/*
 * spammodule.c
 * This is the C code for a non-numpy Python extension to
 * define the logit function, where logit(p) = log(p/(1-p)).
 * This function will not work on numpy arrays automatically.
 * numpy.vectorize must be called in python to generate
 * a numpy-friendly function.
 *
 * Details explaining the Python-C API can be found under
 * 'Extending and Embedding' and 'Python/C API' at
 * docs.python.org .
 */

/* This declares the logit function */
static PyObject *spam_logit(PyObject *self, PyObject *args);

/*
 * This tells Python what methods this module has.
 * See the Python-C API for more information.
 */
static PyMethodDef SpamMethods[] = {
    {"logit", spam_logit, METH_VARARGS, "compute logit"},
    {NULL, NULL, 0, NULL}
};

/*
 * This actually defines the logit function for
 * input args from Python.
 */
static PyObject *spam_logit(PyObject *self, PyObject *args)
{
    double p;

    /* This parses the Python argument into a double */
    if (!PyArg_ParseTuple(args, "d", &p)) {
        return NULL;
    }

    /* THE ACTUAL LOGIT FUNCTION */
    p = p / (1 - p);
    p = log(p);

    /* This builds the answer back into a python object */
    return Py_BuildValue("d", p);
}

/* This initiates the module using the above definitions. */
static struct PyModuleDef moduledef = {
    PyModuleDef_HEAD_INIT,
    "spam",
    NULL,
    -1,
    SpamMethods,
    NULL,
    NULL,
    NULL,
    NULL
};

PyMODINIT_FUNC PyInit_spam(void)
{
    PyObject *m;
    m = PyModule_Create(&moduledef);
    if (!m) {
        return NULL;
    }
    return m;
}
```

To use the `setup.py` file, place `setup.py` and `spammodule.c` in the same folder.
Then `python setup.py build` will build the module to import, or `python setup.py install` will install the module to your site-packages directory.

```
'''
setup.py file for spammodule.c

Calling
$python setup.py build_ext --inplace
will build the extension library in the current file.

Calling
$python setup.py build
will build a file that looks like ./build/lib*, where
lib* is a file that begins with lib. The library will
be in this file and end with a C library extension,
such as .so

Calling
$python setup.py install
will install the module in your site-packages file.

See the distutils section of
'Extending and Embedding the Python Interpreter'
at docs.python.org for more information.
'''

from distutils.core import setup, Extension

module1 = Extension('spam', sources=['spammodule.c'],
                    include_dirs=['/usr/local/lib'])

setup(name='spam',
      version='1.0',
      description='This is my spam package',
      ext_modules=[module1])
```

Once the spam module is imported into python, you can call logit via `spam.logit`. Note that the function used above cannot be applied as-is to numpy arrays. To do so we must call [`numpy.vectorize`](../reference/generated/numpy.vectorize#numpy.vectorize "numpy.vectorize") on it. For example, if a python interpreter is opened in the file containing the spam library or spam has been installed, one can perform the following commands:

```
>>> import numpy as np
>>> import spam
>>> spam.logit(0)
-inf
>>> spam.logit(1)
inf
>>> spam.logit(0.5)
0.0
>>> x = np.linspace(0,1,10)
>>> spam.logit(x)
TypeError: only length-1 arrays can be converted to Python scalars
>>> f = np.vectorize(spam.logit)
>>> f(x)
array([       -inf, -2.07944154, -1.25276297, -0.69314718, -0.22314355,
        0.22314355,  0.69314718,  1.25276297,  2.07944154,         inf])
```

THE RESULTING LOGIT FUNCTION IS NOT FAST! `numpy.vectorize` simply loops over `spam.logit`. The loop is done at the C level, but the numpy array is constantly being parsed and built back up. This is expensive.
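The cost of this per-element dispatch can be seen without compiling anything. In the sketch below a pure-Python scalar logit stands in for the compiled `spam.logit` (`scalar_logit` is our own stand-in, not part of the tutorial code), and a single vectorized NumPy expression computes the same values in one pass:

```python
import math
import numpy as np

def scalar_logit(p):
    # pure-Python stand-in for the compiled spam.logit (an assumption)
    return math.log(p / (1 - p))

x = np.linspace(0.1, 0.9, 9)

slow = np.vectorize(scalar_logit)   # one Python-level call per element
fast = np.log(x / (1 - x))          # one vectorized pass, no per-element calls

assert np.allclose(slow(x), fast)
```

Timing the two with `timeit` typically shows the vectorized expression winning by a large factor on sizable arrays; exact speedups depend on the function and array size.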
When the author compared `numpy.vectorize(spam.logit)` against the logit ufuncs constructed below, the logit ufuncs were almost exactly 4 times faster. Larger or smaller speedups are, of course, possible depending on the nature of the function.

Example NumPy ufunc for one dtype
---------------------------------

For simplicity we give a ufunc for a single dtype, the `'f8'` `double`. As in the previous section, we first give the `.c` file and then the `setup.py` file used to create the module containing the ufunc. The places in the code corresponding to the actual computations for the ufunc are marked with `/* BEGIN main ufunc computation */` and `/* END main ufunc computation */`. The code in between those lines is the primary thing that must be changed to create your own ufunc.

```
#define PY_SSIZE_T_CLEAN
#include <Python.h>
#include "numpy/ndarraytypes.h"
#include "numpy/ufuncobject.h"
#include "numpy/npy_3kcompat.h"
#include <math.h>

/*
 * single_type_logit.c
 * This is the C code for creating your own
 * NumPy ufunc for a logit function.
 *
 * In this code we only define the ufunc for
 * a single dtype. The computations that must
 * be replaced to create a ufunc for
 * a different function are marked with BEGIN
 * and END.
 *
 * Details explaining the Python-C API can be found under
 * 'Extending and Embedding' and 'Python/C API' at
 * docs.python.org .
 */

static PyMethodDef LogitMethods[] = {
    {NULL, NULL, 0, NULL}
};

/* The loop definition must precede the PyMODINIT_FUNC.
 */
static void double_logit(char **args, const npy_intp *dimensions,
                         const npy_intp *steps, void *data)
{
    npy_intp i;
    npy_intp n = dimensions[0];
    char *in = args[0], *out = args[1];
    npy_intp in_step = steps[0], out_step = steps[1];

    double tmp;

    for (i = 0; i < n; i++) {
        /* BEGIN main ufunc computation */
        tmp = *(double *)in;
        tmp /= 1 - tmp;
        *((double *)out) = log(tmp);
        /* END main ufunc computation */

        in += in_step;
        out += out_step;
    }
}

/* This is a pointer to the above function */
PyUFuncGenericFunction funcs[1] = {&double_logit};

/* These are the input and return dtypes of logit.*/
static char types[2] = {NPY_DOUBLE, NPY_DOUBLE};

static struct PyModuleDef moduledef = {
    PyModuleDef_HEAD_INIT,
    "npufunc",
    NULL,
    -1,
    LogitMethods,
    NULL,
    NULL,
    NULL,
    NULL
};

PyMODINIT_FUNC PyInit_npufunc(void)
{
    PyObject *m, *logit, *d;
    m = PyModule_Create(&moduledef);
    if (!m) {
        return NULL;
    }

    import_array();
    import_umath();

    logit = PyUFunc_FromFuncAndData(funcs, NULL, types, 1, 1, 1,
                                    PyUFunc_None, "logit",
                                    "logit_docstring", 0);

    d = PyModule_GetDict(m);

    PyDict_SetItemString(d, "logit", logit);
    Py_DECREF(logit);

    return m;
}
```

This is a `setup.py` file for the above code. As before, the module can be built by calling `python setup.py build` at the command prompt, or installed to site-packages via `python setup.py install`. The module can also be placed into a local folder e.g. `npufunc_directory` below using `python setup.py build_ext --inplace`.

```
'''
setup.py file for single_type_logit.c
Note that since this is a numpy extension
we use numpy.distutils instead of
distutils from the python standard library.

Calling
$python setup.py build_ext --inplace
will build the extension library in the npufunc_directory.

Calling
$python setup.py build
will build a file that looks like ./build/lib*, where
lib* is a file that begins with lib.
The library will be
in this file and end with a C library extension,
such as .so

Calling
$python setup.py install
will install the module in your site-packages file.

See the distutils section of
'Extending and Embedding the Python Interpreter'
at docs.python.org and the documentation
on numpy.distutils for more information.
'''

def configuration(parent_package='', top_path=None):
    from numpy.distutils.misc_util import Configuration

    config = Configuration('npufunc_directory',
                           parent_package,
                           top_path)
    config.add_extension('npufunc', ['single_type_logit.c'])

    return config

if __name__ == "__main__":
    from numpy.distutils.core import setup
    setup(configuration=configuration)
```

After the above has been installed, it can be imported and used as follows.

```
>>> import numpy as np
>>> import npufunc
>>> npufunc.logit(0.5)
0.0
>>> a = np.linspace(0,1,5)
>>> npufunc.logit(a)
array([       -inf, -1.09861229,  0.        ,  1.09861229,         inf])
```

Example NumPy ufunc with multiple dtypes
----------------------------------------

We finally give an example of a full ufunc, with inner loops for half-floats, floats, doubles, and long doubles. As in the previous sections we first give the `.c` file and then the corresponding `setup.py` file. The places in the code corresponding to the actual computations for the ufunc are marked with `/* BEGIN main ufunc computation */` and `/* END main ufunc computation */`. The code in between those lines is the primary thing that must be changed to create your own ufunc.

``` #define PY_SSIZE_T_CLEAN #include <Python.h> #include "numpy/ndarraytypes.h" #include "numpy/ufuncobject.h" #include "numpy/halffloat.h" #include <math.h> /* * multi_type_logit.c * This is the C code for creating your own * NumPy ufunc for a logit function. * * Each function of the form type_logit defines the * logit function for a different numpy dtype. Each * of these functions must be modified when you * create your own ufunc.
The computations that must * be replaced to create a ufunc for * a different function are marked with BEGIN * and END. * * Details explaining the Python-C API can be found under * 'Extending and Embedding' and 'Python/C API' at * docs.python.org . * */ static PyMethodDef LogitMethods[] = { {NULL, NULL, 0, NULL} }; /* The loop definitions must precede the PyMODINIT_FUNC. */ static void long_double_logit(char **args, const npy_intp *dimensions, const npy_intp *steps, void *data) { npy_intp i; npy_intp n = dimensions[0]; char *in = args[0], *out = args[1]; npy_intp in_step = steps[0], out_step = steps[1]; long double tmp; for (i = 0; i < n; i++) { /* BEGIN main ufunc computation */ tmp = *(long double *)in; tmp /= 1 - tmp; *((long double *)out) = logl(tmp); /* END main ufunc computation */ in += in_step; out += out_step; } } static void double_logit(char **args, const npy_intp *dimensions, const npy_intp *steps, void *data) { npy_intp i; npy_intp n = dimensions[0]; char *in = args[0], *out = args[1]; npy_intp in_step = steps[0], out_step = steps[1]; double tmp; for (i = 0; i < n; i++) { /* BEGIN main ufunc computation */ tmp = *(double *)in; tmp /= 1 - tmp; *((double *)out) = log(tmp); /* END main ufunc computation */ in += in_step; out += out_step; } } static void float_logit(char **args, const npy_intp *dimensions, const npy_intp *steps, void *data) { npy_intp i; npy_intp n = dimensions[0]; char *in = args[0], *out = args[1]; npy_intp in_step = steps[0], out_step = steps[1]; float tmp; for (i = 0; i < n; i++) { /* BEGIN main ufunc computation */ tmp = *(float *)in; tmp /= 1 - tmp; *((float *)out) = logf(tmp); /* END main ufunc computation */ in += in_step; out += out_step; } } static void half_float_logit(char **args, const npy_intp *dimensions, const npy_intp *steps, void *data) { npy_intp i; npy_intp n = dimensions[0]; char *in = args[0], *out = args[1]; npy_intp in_step = steps[0], out_step = steps[1]; float tmp; for (i = 0; i < n; i++) { /* BEGIN main ufunc 
computation */ tmp = npy_half_to_float(*(npy_half *)in); tmp /= 1 - tmp; tmp = logf(tmp); *((npy_half *)out) = npy_float_to_half(tmp); /* END main ufunc computation */ in += in_step; out += out_step; } } /*This gives pointers to the above functions*/ PyUFuncGenericFunction funcs[4] = {&half_float_logit, &float_logit, &double_logit, &long_double_logit}; static char types[8] = {NPY_HALF, NPY_HALF, NPY_FLOAT, NPY_FLOAT, NPY_DOUBLE, NPY_DOUBLE, NPY_LONGDOUBLE, NPY_LONGDOUBLE}; static struct PyModuleDef moduledef = { PyModuleDef_HEAD_INIT, "npufunc", NULL, -1, LogitMethods, NULL, NULL, NULL, NULL }; PyMODINIT_FUNC PyInit_npufunc(void) { PyObject *m, *logit, *d; m = PyModule_Create(&moduledef); if (!m) { return NULL; } import_array(); import_umath(); logit = PyUFunc_FromFuncAndData(funcs, NULL, types, 4, 1, 1, PyUFunc_None, "logit", "logit_docstring", 0); d = PyModule_GetDict(m); PyDict_SetItemString(d, "logit", logit); Py_DECREF(logit); return m; } ```

This is a `setup.py` file for the above code. As before, the module can be built by calling `python setup.py build` at the command prompt, or installed to site-packages via `python setup.py install`.

```
'''
setup.py file for multi_type_logit.c
Note that since this is a numpy extension
we use numpy.distutils instead of
distutils from the python standard library.

Calling
$python setup.py build_ext --inplace
will build the extension library in the current file.

Calling
$python setup.py build
will build a file that looks like ./build/lib*, where
lib* is a file that begins with lib. The library will
be in this file and end with a C library extension,
such as .so

Calling
$python setup.py install
will install the module in your site-packages file.

See the distutils section of
'Extending and Embedding the Python Interpreter'
at docs.python.org and the documentation
on numpy.distutils for more information.
'''

def configuration(parent_package='', top_path=None):
    from numpy.distutils.misc_util import Configuration, get_info

    # Necessary for the half-float d-type.
    info = get_info('npymath')

    config = Configuration('npufunc_directory',
                           parent_package,
                           top_path)
    config.add_extension('npufunc',
                         ['multi_type_logit.c'],
                         extra_info=info)

    return config

if __name__ == "__main__":
    from numpy.distutils.core import setup
    setup(configuration=configuration)
```

After the above has been installed, it can be imported and used as follows.

```
>>> import numpy as np
>>> import npufunc
>>> npufunc.logit(0.5)
0.0
>>> a = np.linspace(0,1,5)
>>> npufunc.logit(a)
array([       -inf, -1.09861229,  0.        ,  1.09861229,         inf])
```

Example NumPy ufunc with multiple arguments/return values
---------------------------------------------------------

Our final example is a ufunc with multiple arguments. It is a modification of the code for a logit ufunc for data with a single dtype. We compute `(A * B, logit(A * B))`. We only give the C code as the setup.py file is exactly the same as the `setup.py` file in [Example NumPy ufunc for one dtype](#example-numpy-ufunc-for-one-dtype), except that the line

```
config.add_extension('npufunc', ['single_type_logit.c'])
```

is replaced with

```
config.add_extension('npufunc', ['multi_arg_logit.c'])
```

The C file is given below. The ufunc generated takes two arguments `A` and `B`. It returns a tuple whose first element is `A * B` and whose second element is `logit(A * B)`. Note that it automatically supports broadcasting, as well as all other properties of a ufunc.

``` #define PY_SSIZE_T_CLEAN #include <Python.h> #include "numpy/ndarraytypes.h" #include "numpy/ufuncobject.h" #include "numpy/halffloat.h" #include <math.h> /* * multi_arg_logit.c * This is the C code for creating your own * NumPy ufunc for a multiple argument, multiple * return value ufunc. The places where the * ufunc computation is carried out are marked * with comments.
 *
 * Details explaining the Python-C API can be found under
 * 'Extending and Embedding' and 'Python/C API' at
 * docs.python.org.
 */

static PyMethodDef LogitMethods[] = {
    {NULL, NULL, 0, NULL}
};

/* The loop definition must precede the PyMODINIT_FUNC. */

static void double_logitprod(char **args, const npy_intp *dimensions,
                             const npy_intp *steps, void *data)
{
    npy_intp i;
    npy_intp n = dimensions[0];
    char *in1 = args[0], *in2 = args[1];
    char *out1 = args[2], *out2 = args[3];
    npy_intp in1_step = steps[0], in2_step = steps[1];
    npy_intp out1_step = steps[2], out2_step = steps[3];

    double tmp;

    for (i = 0; i < n; i++) {
        /* BEGIN main ufunc computation */
        tmp = *(double *)in1;
        tmp *= *(double *)in2;
        *((double *)out1) = tmp;
        *((double *)out2) = log(tmp / (1 - tmp));
        /* END main ufunc computation */

        in1 += in1_step;
        in2 += in2_step;
        out1 += out1_step;
        out2 += out2_step;
    }
}

/* This is a pointer to the above function */
PyUFuncGenericFunction funcs[1] = {&double_logitprod};

/* These are the input and return dtypes of logit.*/
static char types[4] = {NPY_DOUBLE, NPY_DOUBLE,
                        NPY_DOUBLE, NPY_DOUBLE};

static struct PyModuleDef moduledef = {
    PyModuleDef_HEAD_INIT,
    "npufunc",
    NULL,
    -1,
    LogitMethods,
    NULL,
    NULL,
    NULL,
    NULL
};

PyMODINIT_FUNC PyInit_npufunc(void)
{
    PyObject *m, *logit, *d;
    m = PyModule_Create(&moduledef);
    if (!m) {
        return NULL;
    }

    import_array();
    import_umath();

    logit = PyUFunc_FromFuncAndData(funcs, NULL, types, 1, 2, 2,
                                    PyUFunc_None, "logit",
                                    "logit_docstring", 0);

    d = PyModule_GetDict(m);

    PyDict_SetItemString(d, "logit", logit);
    Py_DECREF(logit);

    return m;
}
```

Example NumPy ufunc with structured array dtype arguments
---------------------------------------------------------

This example shows how to create a ufunc for a structured array dtype. For the example we show a trivial ufunc for adding two arrays with dtype `'u8,u8,u8'`.
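To see the semantics the C loop below is aiming for, here is a plain-NumPy sketch (the field-by-field loop is our own illustration; the real `add_triplet` ufunc only exists after the extension is built):

```python
import numpy as np

# the structured dtype used by the example: fields 'f0', 'f1', 'f2', each uint64
dt = np.dtype('u8,u8,u8')
a = np.array([(1, 2, 3), (4, 5, 6)], dtype=dt)
b = np.array([(10, 20, 30), (40, 50, 60)], dtype=dt)

# what add_triplet(a, b) is meant to return: field-wise sums
out = np.empty_like(a)
for f in dt.names:
    out[f] = a[f] + b[f]

print(out)
```

The ufunc version performs this same field-wise addition in a single C loop per array element.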
The process is a bit different from the other examples since a call to [`PyUFunc_FromFuncAndData`](../reference/c-api/ufunc#c.PyUFunc_FromFuncAndData "PyUFunc_FromFuncAndData") doesn’t fully register ufuncs for custom dtypes and structured array dtypes. We need to also call [`PyUFunc_RegisterLoopForDescr`](../reference/c-api/ufunc#c.PyUFunc_RegisterLoopForDescr "PyUFunc_RegisterLoopForDescr") to finish setting up the ufunc.

We only give the C code as the `setup.py` file is exactly the same as the `setup.py` file in [Example NumPy ufunc for one dtype](#example-numpy-ufunc-for-one-dtype), except that the line

```
config.add_extension('npufunc', ['single_type_logit.c'])
```

is replaced with

```
config.add_extension('npufunc', ['add_triplet.c'])
```

The C file is given below.

```
#define PY_SSIZE_T_CLEAN
#include <Python.h>
#include "numpy/ndarraytypes.h"
#include "numpy/ufuncobject.h"
#include "numpy/npy_3kcompat.h"
#include <math.h>

/*
 * add_triplet.c
 * This is the C code for creating your own
 * NumPy ufunc for a structured array dtype.
 *
 * Details explaining the Python-C API can be found under
 * 'Extending and Embedding' and 'Python/C API' at
 * docs.python.org.
 */

static PyMethodDef StructUfuncTestMethods[] = {
    {NULL, NULL, 0, NULL}
};

/* The loop definition must precede the PyMODINIT_FUNC. */

static void add_uint64_triplet(char **args, const npy_intp *dimensions,
                               const npy_intp *steps, void *data)
{
    npy_intp i;
    npy_intp is1 = steps[0];
    npy_intp is2 = steps[1];
    npy_intp os = steps[2];
    npy_intp n = dimensions[0];
    uint64_t *x, *y, *z;

    char *i1 = args[0];
    char *i2 = args[1];
    char *op = args[2];

    for (i = 0; i < n; i++) {
        x = (uint64_t *)i1;
        y = (uint64_t *)i2;
        z = (uint64_t *)op;

        z[0] = x[0] + y[0];
        z[1] = x[1] + y[1];
        z[2] = x[2] + y[2];

        i1 += is1;
        i2 += is2;
        op += os;
    }
}

/* This is a pointer to the above function */
PyUFuncGenericFunction funcs[1] = {&add_uint64_triplet};

/* These are the input and return dtypes of add_uint64_triplet.
 */
static char types[3] = {NPY_UINT64, NPY_UINT64, NPY_UINT64};

static struct PyModuleDef moduledef = {
    PyModuleDef_HEAD_INIT,
    "struct_ufunc_test",
    NULL,
    -1,
    StructUfuncTestMethods,
    NULL,
    NULL,
    NULL,
    NULL
};

PyMODINIT_FUNC PyInit_struct_ufunc_test(void)
{
    PyObject *m, *add_triplet, *d;
    PyObject *dtype_dict;
    PyArray_Descr *dtype;
    PyArray_Descr *dtypes[3];

    m = PyModule_Create(&moduledef);
    if (m == NULL) {
        return NULL;
    }

    import_array();
    import_umath();

    /* Create a new ufunc object */
    add_triplet = PyUFunc_FromFuncAndData(NULL, NULL, NULL, 0, 2, 1,
                                          PyUFunc_None, "add_triplet",
                                          "add_triplet_docstring", 0);

    dtype_dict = Py_BuildValue("[(s, s), (s, s), (s, s)]",
                               "f0", "u8", "f1", "u8", "f2", "u8");
    PyArray_DescrConverter(dtype_dict, &dtype);
    Py_DECREF(dtype_dict);

    dtypes[0] = dtype;
    dtypes[1] = dtype;
    dtypes[2] = dtype;

    /* Register ufunc for structured dtype */
    PyUFunc_RegisterLoopForDescr(add_triplet,
                                 dtype,
                                 &add_uint64_triplet,
                                 dtypes,
                                 NULL);

    d = PyModule_GetDict(m);

    PyDict_SetItemString(d, "add_triplet", add_triplet);
    Py_DECREF(add_triplet);

    return m;
}
```

The returned ufunc object is a callable Python object. It should be placed in a (module) dictionary under the same name as was used in the name argument to the ufunc-creation routine. The following example is adapted from the umath module

```
static PyUFuncGenericFunction atan2_functions[] = {
    PyUFunc_ff_f, PyUFunc_dd_d,
    PyUFunc_gg_g, PyUFunc_OO_O_method};
static void *atan2_data[] = {
    (void *)atan2f, (void *)atan2,
    (void *)atan2l, (void *)"arctan2"};
static char atan2_signatures[] = {
    NPY_FLOAT, NPY_FLOAT, NPY_FLOAT,
    NPY_DOUBLE, NPY_DOUBLE, NPY_DOUBLE,
    NPY_LONGDOUBLE, NPY_LONGDOUBLE, NPY_LONGDOUBLE,
    NPY_OBJECT, NPY_OBJECT, NPY_OBJECT};
...
/* in the module initialization code */
PyObject *f, *dict, *module;
...
dict = PyModule_GetDict(module);
...
f = PyUFunc_FromFuncAndData(atan2_functions,
                            atan2_data, atan2_signatures, 4, 2, 1,
                            PyUFunc_None, "arctan2",
                            "a safe and correct arctan(x1/x2)", 0);
PyDict_SetItemString(dict, "arctan2", f);
Py_DECREF(f);
...
```

<https://numpy.org/doc/1.23/user/c-info.ufunc-tutorial.html>

Beyond the Basics
=================

Iterating over elements in the array
------------------------------------

### Basic Iteration

One common algorithmic requirement is to be able to walk over all elements in a multidimensional array. The array iterator object makes this easy to do in a generic way that works for arrays of any dimension. Naturally, if you know the number of dimensions you will be using, then you can always write nested for loops to accomplish the iteration. If, however, you want to write code that works with any number of dimensions, then you can make use of the array iterator. An array iterator object is returned when accessing the .flat attribute of an array.

Basic usage is to call [`PyArray_IterNew`](../reference/c-api/array#c.PyArray_IterNew "PyArray_IterNew") ( `array` ) where array is an ndarray object (or one of its sub-classes). The returned object is an array-iterator object (the same object returned by the .flat attribute of the ndarray). This object is usually cast to PyArrayIterObject* so that its members can be accessed. The only members that are needed are `iter->size` which contains the total size of the array, `iter->index`, which contains the current 1-d index into the array, and `iter->dataptr` which is a pointer to the data for the current element of the array. Sometimes it is also useful to access `iter->ao` which is a pointer to the underlying ndarray object.

After processing data at the current element of the array, the next element of the array can be obtained using the macro [`PyArray_ITER_NEXT`](../reference/c-api/array#c.PyArray_ITER_NEXT "PyArray_ITER_NEXT") ( `iter` ).
The iteration always proceeds in a C-style contiguous fashion (last index varying the fastest). The [`PyArray_ITER_GOTO`](../reference/c-api/array#c.PyArray_ITER_GOTO "PyArray_ITER_GOTO") ( `iter`, `destination` ) can be used to jump to a particular point in the array, where `destination` is an array of npy_intp data-type with space to handle at least the number of dimensions in the underlying array. Occasionally it is useful to use [`PyArray_ITER_GOTO1D`](../reference/c-api/array#c.PyArray_ITER_GOTO1D "PyArray_ITER_GOTO1D") ( `iter`, `index` ) which will jump to the 1-d index given by the value of `index`. The most common usage, however, is given in the following example.

```
PyObject *obj; /* assumed to be some ndarray object */
PyArrayIterObject *iter;
...
iter = (PyArrayIterObject *)PyArray_IterNew(obj);
if (iter == NULL) goto fail; /* Assume fail has clean-up code */
while (iter->index < iter->size) {
    /* do something with the data at iter->dataptr */
    PyArray_ITER_NEXT(iter);
}
...
```

You can also use [`PyArrayIter_Check`](../reference/c-api/array#c.PyArrayIter_Check "PyArrayIter_Check") ( `obj` ) to ensure you have an iterator object and [`PyArray_ITER_RESET`](../reference/c-api/array#c.PyArray_ITER_RESET "PyArray_ITER_RESET") ( `iter` ) to reset an iterator object back to the beginning of the array.

It should be emphasized at this point that you may not need the array iterator if your array is already contiguous (using an array iterator will work but will be slower than the fastest code you could write). The major purpose of array iterators is to encapsulate iteration over N-dimensional arrays with arbitrary strides. They are used in many, many places in the NumPy source code itself. If you already know your array is contiguous (Fortran or C), then simply adding the element-size to a running pointer variable will step you through the array very efficiently. In other words, code like this will probably be faster for you in the contiguous case (assuming doubles).
```
npy_intp size;
double *dptr;  /* could make this any variable type */
size = PyArray_SIZE(obj);
dptr = PyArray_DATA(obj);
while(size--) {
    /* do something with the data at dptr */
    dptr++;
}
```

### Iterating over all but one axis

A common algorithm is to loop over all elements of an array and perform some function with each element by issuing a function call. As function calls can be time consuming, one way to speed up this kind of algorithm is to write the function so it takes a vector of data and then write the iteration so the function call is performed for an entire dimension of data at a time. This increases the amount of work done per function call, thereby reducing the function-call overhead to a small(er) fraction of the total time. Even if the interior of the loop is performed without a function call it can be advantageous to perform the inner loop over the dimension with the highest number of elements to take advantage of speed enhancements available on microprocessors that use pipelining to enhance fundamental operations.

The [`PyArray_IterAllButAxis`](../reference/c-api/array#c.PyArray_IterAllButAxis "PyArray_IterAllButAxis") ( `array`, `&dim` ) constructs an iterator object that is modified so that it will not iterate over the dimension indicated by dim. The only restriction on this iterator object is that the [`PyArray_ITER_GOTO1D`](../reference/c-api/array#c.PyArray_ITER_GOTO1D "PyArray_ITER_GOTO1D") ( `it`, `ind` ) macro cannot be used (thus flat indexing won’t work either if you pass this object back to Python — so you shouldn’t do this). Note that the returned object from this routine is still usually cast to PyArrayIterObject *. All that’s been done is to modify the strides and dimensions of the returned iterator to simulate iterating over array[..., 0, ...] where 0 is placed on the \(\textrm{dim}^{\textrm{th}}\) dimension. If dim is negative, then the dimension with the largest axis is found and used.

### Iterating over multiple arrays

Very often, it is desirable to iterate over several arrays at the same time. The universal functions are an example of this kind of behavior. If all you want to do is iterate over arrays with the same shape, then simply creating several iterator objects is the standard procedure. For example, the following code iterates over two arrays assumed to be the same shape and size (actually obj1 just has to have at least as many total elements as does obj2):

```
/* It is already assumed that obj1 and obj2
   are ndarrays of the same shape and size.
*/
iter1 = (PyArrayIterObject *)PyArray_IterNew(obj1);
if (iter1 == NULL) goto fail;
iter2 = (PyArrayIterObject *)PyArray_IterNew(obj2);
if (iter2 == NULL) goto fail;  /* assume iter1 is DECREF'd at fail */
while (iter2->index < iter2->size) {
    /* process with iter1->dataptr and iter2->dataptr */
    PyArray_ITER_NEXT(iter1);
    PyArray_ITER_NEXT(iter2);
}
```

### Broadcasting over multiple arrays

When multiple arrays are involved in an operation, you may want to use the same broadcasting rules that the math operations (*i.e.* the ufuncs) use. This can be done easily using the [`PyArrayMultiIterObject`](../reference/c-api/types-and-structures#c.PyArrayMultiIterObject "PyArrayMultiIterObject"). This is the object returned from the Python command numpy.broadcast and it is almost as easy to use from C. The function [`PyArray_MultiIterNew`](../reference/c-api/array#c.PyArray_MultiIterNew "PyArray_MultiIterNew") ( `n`, `...` ) is used (with `n` input objects in place of `...` ). The input objects can be arrays or anything that can be converted into an array. A pointer to a PyArrayMultiIterObject is returned.
Broadcasting has already been accomplished, which adjusts the iterators so that all that needs to be done to advance to the next element in each array is for PyArray_ITER_NEXT to be called for each of the inputs. This incrementing is automatically performed by the [`PyArray_MultiIter_NEXT`](../reference/c-api/array#c.PyArray_MultiIter_NEXT "PyArray_MultiIter_NEXT") ( `obj` ) macro (which can handle a multiterator `obj` as either a [PyArrayMultiIterObject](../reference/c-api/types-and-structures#c.PyArrayMultiIterObject "PyArrayMultiIterObject")* or a [PyObject](https://docs.python.org/3/c-api/structures.html#c.PyObject "(in Python v3.10)")*). The data from input number `i` is available using [`PyArray_MultiIter_DATA`](../reference/c-api/array#c.PyArray_MultiIter_DATA "PyArray_MultiIter_DATA") ( `obj`, `i` ). An example of using this feature follows.

```
mobj = PyArray_MultiIterNew(2, obj1, obj2);
size = mobj->size;
while(size--) {
    ptr1 = PyArray_MultiIter_DATA(mobj, 0);
    ptr2 = PyArray_MultiIter_DATA(mobj, 1);
    /* code using contents of ptr1 and ptr2 */
    PyArray_MultiIter_NEXT(mobj);
}
```

The function [`PyArray_RemoveSmallest`](../reference/c-api/array#c.PyArray_RemoveSmallest "PyArray_RemoveSmallest") ( `multi` ) can be used to take a multi-iterator object and adjust all the iterators so that iteration does not take place over the largest dimension (it makes that dimension of size 1). The code being looped over that makes use of the pointers will very likely also need the strides data for each of the iterators. This information is stored in multi->iters[i]->strides.

There are several examples of using the multi-iterator in the NumPy source code as it makes N-dimensional broadcasting-code very simple to write. Browse the source for more examples.

User-defined data-types
-----------------------

NumPy comes with 24 builtin data-types. While this covers a large majority of possible use cases, it is conceivable that a user may have a need for an additional data-type.
There is some support for adding an additional data-type into the NumPy system. This additional data-type will behave much like a regular data-type except ufuncs must have 1-d loops registered to handle it separately. Also, checking whether other data-types can be cast “safely” to and from this new type will always return “can cast” unless you also register which types your new data-type can be cast to and from. The NumPy source code includes an example of a custom data-type as part of its test suite. The file `_rational_tests.c.src` in the source code directory `numpy/numpy/core/src/umath/` contains an implementation of a data-type that represents a rational number as the ratio of two 32 bit integers.

### Adding the new data-type

To begin to make use of the new data-type, you need to first define a new Python type to hold the scalars of your new data-type. It should be acceptable to inherit from one of the array scalars if your new type has a binary compatible layout. This will allow your new data type to have the methods and attributes of array scalars. New data-types must have a fixed memory size (if you want to define a data-type that needs a flexible representation, like a variable-precision number, then use a pointer to the object as the data-type). The memory layout of the object structure for the new Python type must be PyObject_HEAD followed by the fixed-size memory needed for the data-type. For example, a suitable structure for the new Python type is:

```
typedef struct {
    PyObject_HEAD;
    some_data_type obval;
    /* the name can be whatever you want */
} PySomeDataTypeObject;
```

After you have defined a new Python type object, you must then define a new [`PyArray_Descr`](../reference/c-api/types-and-structures#c.PyArray_Descr "PyArray_Descr") structure whose typeobject member will contain a pointer to the data-type you’ve just defined.
In addition, the required functions in the “.f” member must be defined: nonzero, copyswap, copyswapn, setitem, getitem, and cast. The more functions in the “.f” member you define, however, the more useful the new data-type will be. It is very important to initialize unused functions to NULL. This can be achieved using [`PyArray_InitArrFuncs`](../reference/c-api/array#c.PyArray_InitArrFuncs "PyArray_InitArrFuncs") (f). Once a new [`PyArray_Descr`](../reference/c-api/types-and-structures#c.PyArray_Descr "PyArray_Descr") structure is created and filled with the needed information and useful functions, you call [`PyArray_RegisterDataType`](../reference/c-api/array#c.PyArray_RegisterDataType "PyArray_RegisterDataType") (new_descr). The return value from this call is an integer providing you with a unique type_number that specifies your data-type. This type number should be stored and made available by your module so that other modules can use it to recognize your data-type (the other mechanism for finding a user-defined data-type number is to search based on the name of the type-object associated with the data-type using [`PyArray_TypeNumFromName`](../reference/c-api/array#c.PyArray_TypeNumFromName "PyArray_TypeNumFromName")).

### Registering a casting function

You may want to allow builtin (and other user-defined) data-types to be cast automatically to your data-type. In order to make this possible, you must register a casting function with the data-type you want to be able to cast from. This requires writing low-level casting functions for each conversion you want to support and then registering these functions with the data-type descriptor. A low-level casting function has the signature:

void castfunc(void *from, void *to, [npy_intp](../reference/c-api/dtype#c.npy_intp "npy_intp") n, void *fromarr, void *toarr)

Cast `n` elements `from` one type `to` another. The data to cast from is in a contiguous, correctly-swapped and aligned chunk of memory pointed to by from.
The buffer to cast to is also contiguous, correctly-swapped and aligned. The fromarr and toarr arguments should only be used for flexible-element-sized arrays (string, unicode, void). An example castfunc is:

```
static void
double_to_float(double *from, float *to, npy_intp n,
                void *ignore1, void *ignore2)
{
    while (n--) {
        (*to++) = (float) *(from++);
    }
}
```

This could then be registered to convert doubles to floats using the code:

```
doub = PyArray_DescrFromType(NPY_DOUBLE);
PyArray_RegisterCastFunc(doub, NPY_FLOAT,
                         (PyArray_VectorUnaryFunc *)double_to_float);
Py_DECREF(doub);
```

### Registering coercion rules

By default, all user-defined data-types are not presumed to be safely castable to any builtin data-types. In addition, builtin data-types are not presumed to be safely castable to user-defined data-types. This situation limits the ability of user-defined data-types to participate in the coercion system used by ufuncs and at other times when automatic coercion takes place in NumPy. This can be changed by registering data-types as safely castable from a particular data-type object. The function [`PyArray_RegisterCanCast`](../reference/c-api/array#c.PyArray_RegisterCanCast "PyArray_RegisterCanCast") (from_descr, totype_number, scalarkind) should be used to specify that the data-type object from_descr can be cast to the data-type with type number totype_number. If you are not trying to alter scalar coercion rules, then use [`NPY_NOSCALAR`](../reference/c-api/array#c.NPY_SCALARKIND.NPY_NOSCALAR "NPY_NOSCALAR") for the scalarkind argument. If you want to allow your new data-type to also be able to share in the scalar coercion rules, then you need to specify the scalarkind function in the data-type object’s “.f” member to return the kind of scalar the new data-type should be seen as (the value of the scalar is available to that function).
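As an aside, the builtin “safely castable” rules that `PyArray_RegisterCanCast` extends can be queried from Python with `np.can_cast`; a quick sketch:

```python
import numpy as np

# Builtin casting rules: widening is considered safe, narrowing is not.
print(np.can_cast(np.float32, np.float64))  # True: no information is lost
print(np.can_cast(np.float64, np.float32))  # False: would lose precision
```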
Then, you can register data-types that can be cast to separately for each scalar kind that may be returned from your user-defined data-type. If you don’t register scalar coercion handling, then all of your user-defined data-types will be seen as [`NPY_NOSCALAR`](../reference/c-api/array#c.NPY_SCALARKIND.NPY_NOSCALAR "NPY_NOSCALAR").

### Registering a ufunc loop

You may also want to register low-level ufunc loops for your data-type so that an ndarray of your data-type can have math applied to it seamlessly. Registering a new loop with exactly the same arg_types signature silently replaces any previously registered loops for that data-type. Before you can register a 1-d loop for a ufunc, the ufunc must be previously created. Then you call [`PyUFunc_RegisterLoopForType`](../reference/c-api/ufunc#c.PyUFunc_RegisterLoopForType "PyUFunc_RegisterLoopForType") (
) with the information needed for the loop. The return value of this function is `0` if the process was successful and `-1` with an error condition set if it was not successful.

Subtyping the ndarray in C
--------------------------

One of the lesser-used features that has been lurking in Python since 2.2 is the ability to sub-class types in C. This facility is one of the important reasons for basing NumPy off of the Numeric code-base which was already in C. A sub-type in C allows much more flexibility with regards to memory management. Sub-typing in C is not difficult even if you have only a rudimentary understanding of how to create new types for Python. While it is easiest to sub-type from a single parent type, sub-typing from multiple parent types is also possible. Multiple inheritance in C is generally less useful than it is in Python because a restriction on Python sub-types is that they have a binary compatible memory layout. Perhaps for this reason, it is somewhat easier to sub-type from a single parent type. All C-structures corresponding to Python objects must begin with [`PyObject_HEAD`](https://docs.python.org/3/c-api/structures.html#c.PyObject_HEAD "(in Python v3.10)") (or [`PyObject_VAR_HEAD`](https://docs.python.org/3/c-api/structures.html#c.PyObject_VAR_HEAD "(in Python v3.10)")). In the same way, any sub-type must have a C-structure that begins with exactly the same memory layout as the parent type (or all of the parent types in the case of multiple-inheritance). The reason for this is that Python may attempt to access a member of the sub-type structure as if it had the parent structure (*i.e.* it will cast a given pointer to a pointer to the parent structure and then dereference one of its members). If the memory layouts are not compatible, then this attempt will cause unpredictable behavior (eventually leading to a memory violation and program crash).
One of the elements in [`PyObject_HEAD`](https://docs.python.org/3/c-api/structures.html#c.PyObject_HEAD "(in Python v3.10)") is a pointer to a type-object structure. A new Python type is created by creating a new type-object structure and populating it with functions and pointers to describe the desired behavior of the type. Typically, a new C-structure is also created to contain the instance-specific information needed for each object of the type as well. For example, [`&PyArray_Type`](../reference/c-api/types-and-structures#c.PyArray_Type "PyArray_Type") is a pointer to the type-object table for the ndarray while a [PyArrayObject](../reference/c-api/types-and-structures#c.PyArrayObject "PyArrayObject")* variable is a pointer to a particular instance of an ndarray (one of the members of the ndarray structure is, in turn, a pointer to the type-object table [`&PyArray_Type`](../reference/c-api/types-and-structures#c.PyArray_Type "PyArray_Type")). Finally [`PyType_Ready`](https://docs.python.org/3/c-api/type.html#c.PyType_Ready "(in Python v3.10)") (<pointer_to_type_object>) must be called for every new Python type.

### Creating sub-types

To create a sub-type, a similar procedure must be followed except only behaviors that are different require new entries in the type-object structure. All other entries can be NULL and will be filled in by [`PyType_Ready`](https://docs.python.org/3/c-api/type.html#c.PyType_Ready "(in Python v3.10)") with appropriate functions from the parent type(s). In particular, to create a sub-type in C follow these steps:

1. If needed, create a new C-structure to handle each instance of your type. A typical C-structure would be:

```
typedef struct {
    PyArrayObject base;
    /* new things here */
} NewArrayObject;
```

Notice that the full PyArrayObject is used as the first entry in order to ensure that the binary layout of instances of the new type is identical to the PyArrayObject.

2.
Fill in a new Python type-object structure with pointers to new functions that will over-ride the default behavior while leaving any function that should remain the same unfilled (or NULL). The tp_name element should be different. 3. Fill in the tp_base member of the new type-object structure with a pointer to the (main) parent type object. For multiple-inheritance, also fill in the tp_bases member with a tuple containing all of the parent objects in the order they should be used to define inheritance. Remember, all parent-types must have the same C-structure for multiple inheritance to work properly. 4. Call [`PyType_Ready`](https://docs.python.org/3/c-api/type.html#c.PyType_Ready "(in Python v3.10)") (<pointer_to_new_type>). If this function returns a negative number, a failure occurred and the type is not initialized. Otherwise, the type is ready to be used. It is generally important to place a reference to the new type into the module dictionary so it can be accessed from Python. More information on creating sub-types in C can be learned by reading PEP 253 (available at <https://www.python.org/dev/peps/pep-0253>). ### Specific features of ndarray sub-typing Some special methods and attributes are used by arrays in order to facilitate the interoperation of sub-types with the base ndarray type. #### The __array_finalize__ method ndarray.__array_finalize__ Several array-creation functions of the ndarray allow specification of a particular sub-type to be created. This allows sub-types to be handled seamlessly in many routines. When a sub-type is created in such a fashion, however, neither the __new__ method nor the __init__ method gets called. Instead, the sub-type is allocated and the appropriate instance-structure members are filled in. Finally, the [`__array_finalize__`](../reference/arrays.classes#numpy.class.__array_finalize__ "numpy.class.__array_finalize__") attribute is looked-up in the object dictionary. 
If it is present and not None, then it can be either a CObject containing a pointer to a [`PyArray_FinalizeFunc`](../reference/c-api/array#c.PyArray_FinalizeFunc "PyArray_FinalizeFunc") or it can be a method taking a single argument (which could be None). If the [`__array_finalize__`](../reference/arrays.classes#numpy.class.__array_finalize__ "numpy.class.__array_finalize__") attribute is a CObject, then the pointer must be a pointer to a function with the signature:

```
(int) (PyArrayObject *, PyObject *)
```

The first argument is the newly created sub-type. The second argument (if not NULL) is the “parent” array (if the array was created using slicing or some other operation where a clearly-distinguishable parent is present). This routine can do anything it wants to. It should return a -1 on error and 0 otherwise. If the [`__array_finalize__`](../reference/arrays.classes#numpy.class.__array_finalize__ "numpy.class.__array_finalize__") attribute is neither None nor a CObject, then it must be a Python method that takes the parent array as an argument (which could be None if there is no parent), and returns nothing. Errors in this method will be caught and handled.

#### The __array_priority__ attribute

ndarray.__array_priority__

This attribute allows simple but flexible determination of which sub-type should be considered “primary” when an operation involving two or more sub-types arises. In operations where different sub-types are being used, the sub-type with the largest [`__array_priority__`](../reference/arrays.classes#numpy.class.__array_priority__ "numpy.class.__array_priority__") attribute will determine the sub-type of the output(s). If two sub-types have the same [`__array_priority__`](../reference/arrays.classes#numpy.class.__array_priority__ "numpy.class.__array_priority__") then the sub-type of the first argument determines the output.
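The effect described above can be seen from Python with a minimal ndarray subclass (the class name here is just for illustration):

```python
import numpy as np

class MyArray(np.ndarray):
    # Higher priority than the base ndarray's default of 0.0, so mixed
    # operations produce a MyArray regardless of operand order.
    __array_priority__ = 10.0

plain = np.arange(3)
mine = np.arange(3).view(MyArray)

print(type(plain + mine).__name__)  # MyArray
print(type(mine + plain).__name__)  # MyArray
```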
The default [`__array_priority__`](../reference/arrays.classes#numpy.class.__array_priority__ "numpy.class.__array_priority__") attribute returns a value of 0.0 for the base ndarray type and 1.0 for a sub-type. This attribute can also be defined by objects that are not sub-types of the ndarray and can be used to determine which [`__array_wrap__`](../reference/arrays.classes#numpy.class.__array_wrap__ "numpy.class.__array_wrap__") method should be called for the return output.

#### The __array_wrap__ method

ndarray.__array_wrap__

Any class or type can define this method which should take an ndarray argument and return an instance of the type. It can be seen as the opposite of the [`__array__`](../reference/arrays.classes#numpy.class.__array__ "numpy.class.__array__") method. This method is used by the ufuncs (and other NumPy functions) to allow other objects to pass through. For Python >2.4, it can also be used to write a decorator that converts a function that works only with ndarrays to one that works with any type with [`__array__`](../reference/arrays.classes#numpy.class.__array__ "numpy.class.__array__") and [`__array_wrap__`](../reference/arrays.classes#numpy.class.__array_wrap__ "numpy.class.__array_wrap__") methods.

© 2005–2022 NumPy Developers Licensed under the 3-clause BSD License. <https://numpy.org/doc/1.23/user/c-info.beyond-basics.html>

How to write a NumPy how-to
===========================

How-tos get straight to the point – they

* answer a focused question, or
* narrow a broad question into focused questions that the user can choose among.

A stranger has asked for directions
 ------------------------------------ **“I need to refuel my car.”** Give a brief but explicit answer -------------------------------- * `“Three kilometers/miles, take a right at Hayseed Road, it’s on your left.”` Add helpful details for newcomers (“Hayseed Road”, even though it’s the only turnoff at three km/mi). But not irrelevant ones: * Don’t also give directions from Route 7. * Don’t explain why the town has only one filling station. If there’s related background (tutorial, explanation, reference, alternative approach), bring it to the user’s attention with a link (“Directions from Route 7,” “Why so few filling stations?”). Delegate -------- * `“Three km/mi, take a right at Hayseed Road, follow the signs.”` If the information is already documented and succinct enough for a how-to, just link to it, possibly after an introduction (“Three km/mi, take a right”). If the question is broad, narrow and redirect it ------------------------------------------------ **“I want to see the sights.”** The `See the sights` how-to should link to a set of narrower how-tos: * Find historic buildings * Find scenic lookouts * Find the town center and these might in turn link to still narrower how-tos – so the town center page might link to * Find the court house * Find city hall By organizing how-tos this way, you not only display the options for people who need to narrow their question, you also have provided answers for users who start with narrower questions (“I want to see historic buildings,” “Which way to city hall?”). If there are many steps, break them up -------------------------------------- If a how-to has many steps: * Consider breaking a step out into an individual how-to and linking to it. * Include subheadings. They help readers grasp what’s coming and return where they left off. Why write how-tos when there’s Stack Overflow, Reddit, Gitter
?
---------------------------------------------------------------

* We have authoritative answers.
* How-tos make the site less forbidding to non-experts.
* How-tos bring people into the site and help them discover other information that’s here.
* Creating how-tos helps us see NumPy usability through new eyes.

Aren’t how-tos and tutorials the same thing?
--------------------------------------------

People use the terms “how-to” and “tutorial” interchangeably, but we draw a distinction, following Daniele Procida’s [taxonomy of documentation](https://documentation.divio.com/). Documentation needs to meet users where they are. `How-tos` offer get-it-done information; the user wants steps to copy and doesn’t necessarily want to understand NumPy. `Tutorials` are warm-fuzzy information; the user wants a feel for some aspect of NumPy (and again, may or may not care about deeper knowledge). We distinguish both tutorials and how-tos from `Explanations`, which are deep dives intended to give understanding rather than immediate assistance, and `References`, which give complete, authoritative data on some concrete part of NumPy (like its API) but aren’t obligated to paint a broader picture. For more on tutorials, see [Learn to write a NumPy tutorial](https://numpy.org/numpy-tutorials/content/tutorial-style-guide.html "(in NumPy tutorials)").

Is this page an example of a how-to?
------------------------------------

Yes – until the sections with question-mark headings; they explain rather than giving directions. In a how-to, those would be links.

© 2005–2022 NumPy Developers Licensed under the 3-clause BSD License. <https://numpy.org/doc/1.23/user/how-to-how-to.html>

Reading and writing files
=========================

This page tackles common applications; for the full collection of I/O routines, see [Input and output](../reference/routines.io#routines-io).
Reading text and [CSV](https://en.wikipedia.org/wiki/Comma-separated_values) files ---------------------------------------------------------------------------------- ### With no missing values Use [`numpy.loadtxt`](../reference/generated/numpy.loadtxt#numpy.loadtxt "numpy.loadtxt"). ### With missing values Use [`numpy.genfromtxt`](../reference/generated/numpy.genfromtxt#numpy.genfromtxt "numpy.genfromtxt"). [`numpy.genfromtxt`](../reference/generated/numpy.genfromtxt#numpy.genfromtxt "numpy.genfromtxt") will either * return a [masked array](../reference/maskedarray.generic#maskedarray-generic) **masking out missing values** (if `usemask=True`), or * **fill in the missing value** with the value specified in `filling_values` (default is `np.nan` for float, -1 for int). #### With non-whitespace delimiters ``` >>> print(open("csv.txt").read()) 1, 2, 3 4,, 6 7, 8, 9 ``` ##### Masked-array output ``` >>> np.genfromtxt("csv.txt", delimiter=",", usemask=True) masked_array( data=[[1.0, 2.0, 3.0], [4.0, --, 6.0], [7.0, 8.0, 9.0]], mask=[[False, False, False], [False, True, False], [False, False, False]], fill_value=1e+20) ``` ##### Array output ``` >>> np.genfromtxt("csv.txt", delimiter=",") array([[ 1., 2., 3.], [ 4., nan, 6.], [ 7., 8., 9.]]) ``` ##### Array output, specified fill-in value ``` >>> np.genfromtxt("csv.txt", delimiter=",", dtype=np.int8, filling_values=99) array([[ 1, 2, 3], [ 4, 99, 6], [ 7, 8, 9]], dtype=int8) ``` #### Whitespace-delimited [`numpy.genfromtxt`](../reference/generated/numpy.genfromtxt#numpy.genfromtxt "numpy.genfromtxt") can also parse whitespace-delimited data files that have missing values if * **Each field has a fixed width**: Use the width as the `delimiter` argument. ``` # File with width=4. 
# The data does not have to be justified (for example, the 2 in row 1),
# the last column can be less than width (for example, the 6 in row 2), and
# no delimiting character is required (for instance 8888 and 9 in row 3)
>>> f = open("fixedwidth.txt").read() # doctest: +SKIP
>>> print(f) # doctest: +SKIP
1   2      3
44      6
7   88889

# Showing spaces as ^
>>> print(f.replace(" ","^")) # doctest: +SKIP
1^^^2^^^^^^3
44^^^^^^6
7^^^88889

>>> np.genfromtxt("fixedwidth.txt", delimiter=4) # doctest: +SKIP
array([[1.000e+00, 2.000e+00, 3.000e+00],
       [4.400e+01, nan, 6.000e+00],
       [7.000e+00, 8.888e+03, 9.000e+00]])
```

* **A special value (e.g. “x”) indicates a missing field**: Use it as the `missing_values` argument.

```
>>> print(open("nan.txt").read())
1 2 3
44 x 6
7 8888 9

>>> np.genfromtxt("nan.txt", missing_values="x")
array([[1.000e+00, 2.000e+00, 3.000e+00],
       [4.400e+01, nan, 6.000e+00],
       [7.000e+00, 8.888e+03, 9.000e+00]])
```

* **You want to skip the rows with missing values**: Set `invalid_raise=False`.

```
>>> print(open("skip.txt").read())
1 2 3
44 6
7 888 9

>>> np.genfromtxt("skip.txt", invalid_raise=False)
__main__:1: ConversionWarning: Some errors were detected !
    Line #2 (got 2 columns instead of 3)
array([[ 1., 2., 3.],
       [ 7., 888., 9.]])
```

* **The delimiter whitespace character is different from the whitespace that indicates missing data**. For instance, if columns are delimited by `\t`, then missing data will be recognized if it consists of one or more spaces.

```
>>> f = open("tabs.txt").read()
>>> print(f)
1 2 3
44   6
7 888 9

# Tabs vs. spaces
>>> print(f.replace("\t","^"))
1^2^3
44^ ^6
7^888^9

>>> np.genfromtxt("tabs.txt", delimiter="\t", missing_values=" +")
array([[ 1., 2., 3.],
       [ 44., nan, 6.],
       [ 7., 888., 9.]])
```

Read a file in .npy or .npz format
----------------------------------

Choices:

* Use [`numpy.load`](../reference/generated/numpy.load#numpy.load "numpy.load").
It can read files generated by any of [`numpy.save`](../reference/generated/numpy.save#numpy.save "numpy.save"), [`numpy.savez`](../reference/generated/numpy.savez#numpy.savez "numpy.savez"), or [`numpy.savez_compressed`](../reference/generated/numpy.savez_compressed#numpy.savez_compressed "numpy.savez_compressed").
* Use memory mapping. See [`numpy.lib.format.open_memmap`](../reference/generated/numpy.lib.format.open_memmap#numpy.lib.format.open_memmap "numpy.lib.format.open_memmap").

Write to a file to be read back by NumPy
----------------------------------------

### Binary

Use [`numpy.save`](../reference/generated/numpy.save#numpy.save "numpy.save"), or to store multiple arrays [`numpy.savez`](../reference/generated/numpy.savez#numpy.savez "numpy.savez") or [`numpy.savez_compressed`](../reference/generated/numpy.savez_compressed#numpy.savez_compressed "numpy.savez_compressed"). For [security and portability](#how-to-io-pickle-file), set `allow_pickle=False` unless the dtype contains Python objects, which requires pickling. Masked arrays [`can't currently be saved`](../reference/generated/numpy.ma.maskedarray.tofile#numpy.ma.MaskedArray.tofile "numpy.ma.MaskedArray.tofile"), nor can other arbitrary array subclasses.

### Human-readable

[`numpy.save`](../reference/generated/numpy.save#numpy.save "numpy.save") and [`numpy.savez`](../reference/generated/numpy.savez#numpy.savez "numpy.savez") create binary files. To **write a human-readable file**, use [`numpy.savetxt`](../reference/generated/numpy.savetxt#numpy.savetxt "numpy.savetxt"). The array can only be 1- or 2-dimensional, and there’s no `savetxtz` for multiple files.

### Large arrays

See [Write or read large arrays](#how-to-io-large-arrays).

Read an arbitrarily formatted binary file (“binary blob”)
---------------------------------------------------------

Use a [structured array](basics.rec).
**Example:**

The `.wav` file header is a 44-byte block preceding `data_size` bytes of the actual sound data:

```
chunk_id         "RIFF"
chunk_size       4-byte unsigned little-endian integer
format           "WAVE"
fmt_id           "fmt "
fmt_size         4-byte unsigned little-endian integer
audio_fmt        2-byte unsigned little-endian integer
num_channels     2-byte unsigned little-endian integer
sample_rate      4-byte unsigned little-endian integer
byte_rate        4-byte unsigned little-endian integer
block_align      2-byte unsigned little-endian integer
bits_per_sample  2-byte unsigned little-endian integer
data_id          "data"
data_size        4-byte unsigned little-endian integer
```

The `.wav` file header as a NumPy structured dtype:

```
wav_header_dtype = np.dtype([
    ("chunk_id", (bytes, 4)),   # flexible-sized scalar type, item size 4
    ("chunk_size", "<u4"),      # little-endian unsigned 32-bit integer
    ("format", "S4"),           # 4-byte string, alternate spelling of (bytes, 4)
    ("fmt_id", "S4"),
    ("fmt_size", "<u4"),
    ("audio_fmt", "<u2"),
    ("num_channels", "<u2"),    # .. more of the same ...
    ("sample_rate", "<u4"),
    ("byte_rate", "<u4"),
    ("block_align", "<u2"),
    ("bits_per_sample", "<u2"),
    ("data_id", "S4"),
    ("data_size", "<u4"),
    # the sound data itself cannot be represented here:
    # it does not have a fixed size
])

header = np.fromfile(f, dtype=wav_header_dtype, count=1)[0]
```

This `.wav` example is for illustration; to read a `.wav` file in real life, use Python’s built-in module [`wave`](https://docs.python.org/3/library/wave.html#module-wave "(in Python v3.10)").

(Adapted from <NAME>, [Advanced NumPy](https://scipy-lectures.org/advanced/advanced_numpy/index.html#advanced-numpy "(in Scipy lecture notes vHEAD)"), licensed under [CC BY 4.0](https://creativecommons.org/licenses/by/4.0/).)

Write or read large arrays
--------------------------

**Arrays too large to fit in memory** can be treated like ordinary in-memory arrays using memory mapping.
* Raw array data written with [`numpy.ndarray.tofile`](../reference/generated/numpy.ndarray.tofile#numpy.ndarray.tofile "numpy.ndarray.tofile") or [`numpy.ndarray.tobytes`](../reference/generated/numpy.ndarray.tobytes#numpy.ndarray.tobytes "numpy.ndarray.tobytes") can be read with [`numpy.memmap`](../reference/generated/numpy.memmap#numpy.memmap "numpy.memmap"): ``` array = numpy.memmap("mydata/myarray.arr", mode="r", dtype=np.int16, shape=(1024, 1024)) ``` * Files output by [`numpy.save`](../reference/generated/numpy.save#numpy.save "numpy.save") (that is, using the numpy format) can be read using [`numpy.load`](../reference/generated/numpy.load#numpy.load "numpy.load") with the `mmap_mode` keyword argument: ``` large_array[some_slice] = np.load("path/to/small_array", mmap_mode="r") ``` Memory mapping lacks features like data chunking and compression; more full-featured formats and libraries usable with NumPy include: * **HDF5**: [h5py](https://www.h5py.org/) or [PyTables](https://www.pytables.org/). * **Zarr**: [here](https://zarr.readthedocs.io/en/stable/tutorial.html#reading-and-writing-data). * **NetCDF**: [`scipy.io.netcdf_file`](https://docs.scipy.org/doc/scipy/reference/generated/scipy.io.netcdf_file.html#scipy.io.netcdf_file "(in SciPy v1.8.1)"). For tradeoffs among memmap, Zarr, and HDF5, see [pythonspeed.com](https://pythonspeed.com/articles/mmap-vs-zarr-hdf5/). Write files for reading by other (non-NumPy) tools -------------------------------------------------- Formats for **exchanging data** with other tools include HDF5, Zarr, and NetCDF (see [Write or read large arrays](#how-to-io-large-arrays)). Write or read a JSON file ------------------------- NumPy arrays are **not** directly [JSON serializable](https://github.com/numpy/numpy/issues/12481). 
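One common workaround (a sketch, not the only option) is to convert the array to nested lists with `tolist` before serializing and rebuild it on load:

```python
import json
import numpy as np

a = np.arange(6).reshape(2, 3)
text = json.dumps(a.tolist())   # nested lists of Python ints are JSON-friendly
b = np.array(json.loads(text))  # round-trips the values, not dtype/metadata
```

Note that the dtype and other metadata are not preserved; for faithful round trips, prefer `.npy` files.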
Save/restore using a pickle file
--------------------------------

Avoid when possible; [pickles](https://docs.python.org/3/library/pickle.html "(in Python v3.10)") are not secure against erroneous or maliciously constructed data. Use [`numpy.save`](../reference/generated/numpy.save#numpy.save "numpy.save") and [`numpy.load`](../reference/generated/numpy.load#numpy.load "numpy.load"). Set `allow_pickle=False`, unless the array dtype includes Python objects, in which case pickling is required.

Convert from a pandas DataFrame to a NumPy array
------------------------------------------------

See [`pandas.DataFrame.to_numpy`](https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.to_numpy.html#pandas.DataFrame.to_numpy "(in pandas v1.4.2)").

Save/restore using tofile and fromfile
--------------------------------------

In general, prefer [`numpy.save`](../reference/generated/numpy.save#numpy.save "numpy.save") and [`numpy.load`](../reference/generated/numpy.load#numpy.load "numpy.load"). [`numpy.ndarray.tofile`](../reference/generated/numpy.ndarray.tofile#numpy.ndarray.tofile "numpy.ndarray.tofile") and [`numpy.fromfile`](../reference/generated/numpy.fromfile#numpy.fromfile "numpy.fromfile") lose information on endianness and precision and so are unsuitable for anything but scratch storage.

© 2005–2022 NumPy Developers Licensed under the 3-clause BSD License. <https://numpy.org/doc/1.23/user/how-to-io.html>

How to index ndarrays
=====================

See also [Indexing on ndarrays](basics.indexing#basics-indexing)

This page tackles common examples. For an in-depth look into indexing, refer to [Indexing on ndarrays](basics.indexing#basics-indexing).

Access specific/arbitrary rows and columns
------------------------------------------

Use [Basic indexing](basics.indexing#basic-indexing) features like [Slicing and striding](basics.indexing#slicing-and-striding), and [Dimensional indexing tools](basics.indexing#dimensional-indexing-tools).
``` >>> a = np.arange(30).reshape(2, 3, 5) >>> a array([[[ 0, 1, 2, 3, 4], [ 5, 6, 7, 8, 9], [10, 11, 12, 13, 14]], [[15, 16, 17, 18, 19], [20, 21, 22, 23, 24], [25, 26, 27, 28, 29]]]) >>> a[0, 2, :] array([10, 11, 12, 13, 14]) >>> a[0, :, 3] array([ 3, 8, 13]) ``` Note that the output from indexing operations can have different shape from the original object. To preserve the original dimensions after indexing, you can use [`newaxis`](../reference/constants#numpy.newaxis "numpy.newaxis"). To use other such tools, refer to [Dimensional indexing tools](basics.indexing#dimensional-indexing-tools). ``` >>> a[0, :, 3].shape (3,) >>> a[0, :, 3, np.newaxis].shape (3, 1) >>> a[0, :, 3, np.newaxis, np.newaxis].shape (3, 1, 1) ``` Variables can also be used to index: ``` >>> y = 0 >>> a[y, :, y+3] array([ 3, 8, 13]) ``` Refer to [Dealing with variable numbers of indices within programs](basics.indexing#dealing-with-variable-indices) to see how to use [slice](https://docs.python.org/3/glossary.html#term-slice "(in Python v3.10)") and [`Ellipsis`](https://docs.python.org/3/library/constants.html#Ellipsis "(in Python v3.10)") in your index variables. ### Index columns To index columns, you have to index the last axis. 
Use [Dimensional indexing tools](basics.indexing#dimensional-indexing-tools) to get the desired number of dimensions: ``` >>> a = np.arange(24).reshape(2, 3, 4) >>> a array([[[ 0, 1, 2, 3], [ 4, 5, 6, 7], [ 8, 9, 10, 11]], [[12, 13, 14, 15], [16, 17, 18, 19], [20, 21, 22, 23]]]) >>> a[..., 3] array([[ 3, 7, 11], [15, 19, 23]]) ``` To index specific elements in each column, make use of [Advanced indexing](basics.indexing#advanced-indexing) as below: ``` >>> arr = np.arange(3*4).reshape(3, 4) >>> arr array([[ 0, 1, 2, 3], [ 4, 5, 6, 7], [ 8, 9, 10, 11]]) >>> column_indices = [[1, 3], [0, 2], [2, 2]] >>> np.arange(arr.shape[0]) array([0, 1, 2]) >>> row_indices = np.arange(arr.shape[0])[:, np.newaxis] >>> row_indices array([[0], [1], [2]]) ``` Use the `row_indices` and `column_indices` for advanced indexing: ``` >>> arr[row_indices, column_indices] array([[ 1, 3], [ 4, 6], [10, 10]]) ``` ### Index along a specific axis Use [`take`](../reference/generated/numpy.take#numpy.take "numpy.take"). See also [`take_along_axis`](../reference/generated/numpy.take_along_axis#numpy.take_along_axis "numpy.take_along_axis") and [`put_along_axis`](../reference/generated/numpy.put_along_axis#numpy.put_along_axis "numpy.put_along_axis"). 
``` >>> a = np.arange(30).reshape(2, 3, 5) >>> a array([[[ 0, 1, 2, 3, 4], [ 5, 6, 7, 8, 9], [10, 11, 12, 13, 14]], [[15, 16, 17, 18, 19], [20, 21, 22, 23, 24], [25, 26, 27, 28, 29]]]) >>> np.take(a, [2, 3], axis=2) array([[[ 2, 3], [ 7, 8], [12, 13]], [[17, 18], [22, 23], [27, 28]]]) >>> np.take(a, [2], axis=1) array([[[10, 11, 12, 13, 14]], [[25, 26, 27, 28, 29]]]) ``` Create subsets of larger matrices --------------------------------- Use [Slicing and striding](basics.indexing#slicing-and-striding) to access chunks of a large array: ``` >>> a = np.arange(100).reshape(10, 10) >>> a array([[ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9], [10, 11, 12, 13, 14, 15, 16, 17, 18, 19], [20, 21, 22, 23, 24, 25, 26, 27, 28, 29], [30, 31, 32, 33, 34, 35, 36, 37, 38, 39], [40, 41, 42, 43, 44, 45, 46, 47, 48, 49], [50, 51, 52, 53, 54, 55, 56, 57, 58, 59], [60, 61, 62, 63, 64, 65, 66, 67, 68, 69], [70, 71, 72, 73, 74, 75, 76, 77, 78, 79], [80, 81, 82, 83, 84, 85, 86, 87, 88, 89], [90, 91, 92, 93, 94, 95, 96, 97, 98, 99]]) >>> a[2:5, 2:5] array([[22, 23, 24], [32, 33, 34], [42, 43, 44]]) >>> a[2:5, 1:3] array([[21, 22], [31, 32], [41, 42]]) >>> a[:5, :5] array([[ 0, 1, 2, 3, 4], [10, 11, 12, 13, 14], [20, 21, 22, 23, 24], [30, 31, 32, 33, 34], [40, 41, 42, 43, 44]]) ``` The same thing can be done with advanced indexing in a slightly more complex way. 
Remember that [advanced indexing creates a copy](basics.copies#indexing-operations): ``` >>> a[np.arange(5)[:, None], np.arange(5)[None, :]] array([[ 0, 1, 2, 3, 4], [10, 11, 12, 13, 14], [20, 21, 22, 23, 24], [30, 31, 32, 33, 34], [40, 41, 42, 43, 44]]) ``` You can also use [`mgrid`](../reference/generated/numpy.mgrid#numpy.mgrid "numpy.mgrid") to generate indices: ``` >>> indices = np.mgrid[0:6:2] >>> indices array([0, 2, 4]) >>> a[:, indices] array([[ 0, 2, 4], [10, 12, 14], [20, 22, 24], [30, 32, 34], [40, 42, 44], [50, 52, 54], [60, 62, 64], [70, 72, 74], [80, 82, 84], [90, 92, 94]]) ``` Filter values ------------- ### Non-zero elements Use [`nonzero`](../reference/generated/numpy.nonzero#numpy.nonzero "numpy.nonzero") to get a tuple of array indices of non-zero elements corresponding to every dimension: ``` >>> z = np.array([[1, 2, 3, 0], [0, 0, 5, 3], [4, 6, 0, 0]]) >>> z array([[1, 2, 3, 0], [0, 0, 5, 3], [4, 6, 0, 0]]) >>> np.nonzero(z) (array([0, 0, 0, 1, 1, 2, 2]), array([0, 1, 2, 2, 3, 0, 1])) ``` Use [`flatnonzero`](../reference/generated/numpy.flatnonzero#numpy.flatnonzero "numpy.flatnonzero") to fetch indices of elements that are non-zero in the flattened version of the ndarray: ``` >>> np.flatnonzero(z) array([0, 1, 2, 6, 7, 8, 9]) ``` ### Arbitrary conditions Use [`where`](../reference/generated/numpy.where#numpy.where "numpy.where") to generate indices based on conditions and then use [Advanced indexing](basics.indexing#advanced-indexing). 
``` >>> a = np.arange(30).reshape(2, 3, 5) >>> indices = np.where(a % 2 == 0) >>> indices (array([0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 1, 1, 1, 1, 1]), array([0, 0, 0, 1, 1, 2, 2, 2, 0, 0, 1, 1, 1, 2, 2]), array([0, 2, 4, 1, 3, 0, 2, 4, 1, 3, 0, 2, 4, 1, 3])) >>> a[indices] array([ 0, 2, 4, 6, 8, 10, 12, 14, 16, 18, 20, 22, 24, 26, 28]) ``` Or, use [Boolean array indexing](basics.indexing#boolean-indexing): ``` >>> a > 14 array([[[False, False, False, False, False], [False, False, False, False, False], [False, False, False, False, False]], [[ True, True, True, True, True], [ True, True, True, True, True], [ True, True, True, True, True]]]) >>> a[a > 14] array([15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29]) ``` ### Replace values after filtering Use assignment with filtering to replace desired values: ``` >>> p = np.arange(-10, 10).reshape(2, 2, 5) >>> p array([[[-10, -9, -8, -7, -6], [ -5, -4, -3, -2, -1]], [[ 0, 1, 2, 3, 4], [ 5, 6, 7, 8, 9]]]) >>> q = p < 0 >>> q array([[[ True, True, True, True, True], [ True, True, True, True, True]], [[False, False, False, False, False], [False, False, False, False, False]]]) >>> p[q] = 0 >>> p array([[[0, 0, 0, 0, 0], [0, 0, 0, 0, 0]], [[0, 1, 2, 3, 4], [5, 6, 7, 8, 9]]]) ``` Fetch indices of max/min values ------------------------------- Use [`argmax`](../reference/generated/numpy.argmax#numpy.argmax "numpy.argmax") and [`argmin`](../reference/generated/numpy.argmin#numpy.argmin "numpy.argmin"): ``` >>> a = np.arange(30).reshape(2, 3, 5) >>> np.argmax(a) 29 >>> np.argmin(a) 0 ``` Use the `axis` keyword to get the indices of maximum and minimum values along a specific axis: ``` >>> np.argmax(a, axis=0) array([[1, 1, 1, 1, 1], [1, 1, 1, 1, 1], [1, 1, 1, 1, 1]]) >>> np.argmax(a, axis=1) array([[2, 2, 2, 2, 2], [2, 2, 2, 2, 2]]) >>> np.argmax(a, axis=2) array([[4, 4, 4], [4, 4, 4]]) >>> np.argmin(a, axis=1) array([[0, 0, 0, 0, 0], [0, 0, 0, 0, 0]]) >>> np.argmin(a, axis=2) array([[0, 0, 0], [0, 0, 0]]) ``` Set `keepdims` 
to `True` to keep the axes which are reduced in the result as dimensions with size one: ``` >>> np.argmin(a, axis=2, keepdims=True) array([[[0], [0], [0]], [[0], [0], [0]]]) >>> np.argmax(a, axis=1, keepdims=True) array([[[2, 2, 2, 2, 2]], [[2, 2, 2, 2, 2]]]) ``` Index the same ndarray multiple times efficiently ------------------------------------------------- Keep in mind that basic indexing produces [views](../glossary#term-view) while advanced indexing produces [copies](../glossary#term-copy), which are computationally less efficient. Hence, prefer basic indexing over advanced indexing wherever possible. Further reading --------------- The [100 NumPy exercises](https://github.com/rougier/numpy-100) provide good insight into how indexing is combined with other operations. Exercises [6](https://github.com/rougier/numpy-100/blob/master/100_Numpy_exercises_with_solutions.md#6-create-a-null-vector-of-size-10-but-the-fifth-value-which-is-1-), [8](https://github.com/rougier/numpy-100/blob/master/100_Numpy_exercises_with_solutions.md#8-reverse-a-vector-first-element-becomes-last-), [10](https://github.com/rougier/numpy-100/blob/master/100_Numpy_exercises_with_solutions.md#10-find-indices-of-non-zero-elements-from-120040-), [15](https://github.com/rougier/numpy-100/blob/master/100_Numpy_exercises_with_solutions.md#15-create-a-2d-array-with-1-on-the-border-and-0-inside-), [16](https://github.com/rougier/numpy-100/blob/master/100_Numpy_exercises_with_solutions.md#16-how-to-add-a-border-filled-with-0s-around-an-existing-array-), [19](https://github.com/rougier/numpy-100/blob/master/100_Numpy_exercises_with_solutions.md#19-create-a-8x8-matrix-and-fill-it-with-a-checkerboard-pattern-), [20](https://github.com/rougier/numpy-100/blob/master/100_Numpy_exercises_with_solutions.md#20-consider-a-678-shape-array-what-is-the-index-xyz-of-the-100th-element-),
[45](https://github.com/rougier/numpy-100/blob/master/100_Numpy_exercises_with_solutions.md#45-create-random-vector-of-size-10-and-replace-the-maximum-value-by-0-), [59](https://github.com/rougier/numpy-100/blob/master/100_Numpy_exercises_with_solutions.md#59-how-to-sort-an-array-by-the-nth-column-), [64](https://github.com/rougier/numpy-100/blob/master/100_Numpy_exercises_with_solutions.md#64-consider-a-given-vector-how-to-add-1-to-each-element-indexed-by-a-second-vector-be-careful-with-repeated-indices-), [65](https://github.com/rougier/numpy-100/blob/master/100_Numpy_exercises_with_solutions.md#65-how-to-accumulate-elements-of-a-vector-x-to-an-array-f-based-on-an-index-list-i-), [70](https://github.com/rougier/numpy-100/blob/master/100_Numpy_exercises_with_solutions.md#70-consider-the-vector-1-2-3-4-5-how-to-build-a-new-vector-with-3-consecutive-zeros-interleaved-between-each-value-), [71](https://github.com/rougier/numpy-100/blob/master/100_Numpy_exercises_with_solutions.md#71-consider-an-array-of-dimension-553-how-to-mulitply-it-by-an-array-with-dimensions-55-), [72](https://github.com/rougier/numpy-100/blob/master/100_Numpy_exercises_with_solutions.md#72-how-to-swap-two-rows-of-an-array-), [76](https://github.com/rougier/numpy-100/blob/master/100_Numpy_exercises_with_solutions.md#76-consider-a-one-dimensional-array-z-build-a-two-dimensional-array-whose-first-row-is-z0z1z2-and-each-subsequent-row-is--shifted-by-1-last-row-should-be-z-3z-2z-1-), [80](https://github.com/rougier/numpy-100/blob/master/100_Numpy_exercises_with_solutions.md#80-consider-an-arbitrary-array-write-a-function-that-extract-a-subpart-with-a-fixed-shape-and-centered-on-a-given-element-pad-with-a-fill-value-when-necessary-), [81](https://github.com/rougier/numpy-100/blob/master/100_Numpy_exercises_with_solutions.md#81-consider-an-array-z--1234567891011121314-how-to-generate-an-array-r--1234-2345-3456--11121314-), 
[84](https://github.com/rougier/numpy-100/blob/master/100_Numpy_exercises_with_solutions.md#84-extract-all-the-contiguous-3x3-blocks-from-a-random-10x10-matrix-), [87](https://github.com/rougier/numpy-100/blob/master/100_Numpy_exercises_with_solutions.md#87-consider-a-16x16-array-how-to-get-the-block-sum-block-size-is-4x4-), [90](https://github.com/rougier/numpy-100/blob/master/100_Numpy_exercises_with_solutions.md#90-given-an-arbitrary-number-of-vectors-build-the-cartesian-product-every-combinations-of-every-item-), [93](https://github.com/rougier/numpy-100/blob/master/100_Numpy_exercises_with_solutions.md#93-consider-two-arrays-a-and-b-of-shape-83-and-22-how-to-find-rows-of-a-that-contain-elements-of-each-row-of-b-regardless-of-the-order-of-the-elements-in-b-), [94](https://github.com/rougier/numpy-100/blob/master/100_Numpy_exercises_with_solutions.md#94-considering-a-10x3-matrix-extract-rows-with-unequal-values-eg-223-) are specially focused on indexing. © 2005–2022 NumPy Developers Licensed under the 3-clause BSD License. <https://numpy.org/doc/1.23/user/how-to-index.html>

Three ways to wrap - getting started
====================================

Wrapping Fortran or C functions to Python using F2PY consists of the following steps: * Creating the so-called [signature file](signature-file) that contains descriptions of wrappers to Fortran or C functions, also called the signatures of the functions. For Fortran routines, F2PY can create an initial signature file by scanning Fortran source codes and tracking all relevant information needed to create wrapper functions. + Optionally, F2PY-created signature files can be edited to optimize wrapper functions, which can make them “smarter” and more “Pythonic”. * F2PY reads a signature file and writes a Python C/API module containing Fortran/C/Python bindings. * F2PY compiles all sources and builds an extension module containing the wrappers.
+ In building the extension modules, F2PY uses `numpy_distutils`, which supports a number of Fortran 77/90/95 compilers, including Gnu, Intel, Sun Fortran, SGI MIPSpro, Absoft, NAG, Compaq, etc. For different build systems, see [F2PY and Build Systems](buildtools/index#f2py-bldsys). Depending on the situation, these steps can be carried out in a single composite command or step-by-step, in which case some steps can be omitted or combined with others. Below, we describe three typical approaches to using F2PY. These can be read in order of increasing effort, but they also cater to different access levels depending on whether the Fortran code can be freely modified. The following example Fortran 77 code will be used for illustration; save it as `fib1.f`: ``` C FILE: FIB1.F SUBROUTINE FIB(A,N) C C CALCULATE FIRST N FIBONACCI NUMBERS C INTEGER N REAL*8 A(N) DO I=1,N IF (I.EQ.1) THEN A(I) = 0.0D0 ELSEIF (I.EQ.2) THEN A(I) = 1.0D0 ELSE A(I) = A(I-1) + A(I-2) ENDIF ENDDO END C END FILE FIB1.F ``` Note F2PY parses Fortran/C signatures to build wrapper functions to be used with Python. However, it is not a compiler, and does not check for additional errors in source code, nor does it implement the entire language standards. Some errors may pass silently (or as warnings) and need to be verified by the user. The quick way ------------- The quickest way to wrap the Fortran subroutine `FIB` for use in Python is to run ``` python -m numpy.f2py -c fib1.f -m fib1 ``` or, alternatively, if the `f2py` command-line tool is available, ``` f2py -c fib1.f -m fib1 ``` Note Because the `f2py` command might not be available on all systems, notably on Windows, we will use the `python -m numpy.f2py` command throughout this guide. This command compiles and wraps `fib1.f` (`-c`) to create the extension module `fib1.so` (`-m`) in the current directory. A list of command line options can be seen by executing `python -m numpy.f2py`.
Now, in Python the Fortran subroutine `FIB` is accessible via `fib1.fib`: ``` >>> import numpy as np >>> import fib1 >>> print(fib1.fib.__doc__) fib(a,[n]) Wrapper for ``fib``. Parameters ---------- a : input rank-1 array('d') with bounds (n) Other Parameters ---------------- n : input int, optional Default: len(a) >>> a = np.zeros(8, 'd') >>> fib1.fib(a) >>> print(a) [ 0. 1. 1. 2. 3. 5. 8. 13.] ``` Note * Note that F2PY recognized that the second argument `n` is the dimension of the first array argument `a`. Since by default all arguments are input-only arguments, F2PY concludes that `n` can be optional with the default value `len(a)`. * One can use different values for optional `n`: ``` >>> a1 = np.zeros(8, 'd') >>> fib1.fib(a1, 6) >>> print(a1) [ 0. 1. 1. 2. 3. 5. 0. 0.] ``` but an exception is raised when it is incompatible with the input array `a`: ``` >>> fib1.fib(a, 10) Traceback (most recent call last): File "<stdin>", line 1, in <module> fib.error: (len(a)>=n) failed for 1st keyword n: fib:n=10 ``` F2PY implements basic compatibility checks between related arguments in order to avoid unexpected crashes. * When a NumPy array that is [Fortran](../glossary#term-Fortran-order) [contiguous](../glossary#term-contiguous) and has a `dtype` corresponding to a presumed Fortran type is used as an input array argument, then its C pointer is directly passed to Fortran. Otherwise, F2PY makes a contiguous copy (with the proper `dtype`) of the input array and passes a C pointer of the copy to the Fortran subroutine. As a result, any possible changes to the (copy of) input array have no effect on the original argument, as demonstrated below: ``` >>> a = np.ones(8, 'i') >>> fib1.fib(a) >>> print(a) [1 1 1 1 1 1 1 1] ``` Clearly, this is unexpected, as Fortran typically passes by reference. That the above example worked with `dtype=float` is considered accidental.
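The dtype-and-contiguity condition described above can be checked from plain Python before calling a wrapped routine. The sketch below is illustrative only and is not F2PY's internal code; the helper name `would_be_copied` is made up for this example. It tests whether an array already matches the presumed Fortran dtype and is Fortran-contiguous, i.e. whether its pointer could be passed directly instead of being copied:

```python
import numpy as np

# Illustrative check (not F2PY's actual internals): an input array is
# passed to Fortran by pointer only when it already has the presumed
# Fortran dtype and is Fortran-contiguous; otherwise a copy is made.
def would_be_copied(arr, fortran_dtype=np.float64):
    return not (arr.dtype == np.dtype(fortran_dtype)
                and arr.flags['F_CONTIGUOUS'])

a = np.zeros(8, dtype='d')       # float64, 1-D: contiguous either way
print(would_be_copied(a))        # False: pointer passed directly

b = np.zeros(8, dtype='i')       # wrong dtype, as in the example above
print(would_be_copied(b))        # True: a float64 copy would be made

c = np.zeros((3, 4), dtype='d')  # C-ordered 2-D array
print(would_be_copied(c))        # True
print(would_be_copied(np.asfortranarray(c)))  # False after conversion
```

Converting with `np.asfortranarray` (or allocating with `order='F'`) up front is a common way to avoid repeated hidden copies when the same array is passed to wrapped routines many times.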
F2PY provides an `intent(inplace)` attribute that modifies the attributes of an input array so that any changes made by the Fortran routine will be reflected in the input argument. For example, if one specifies the `intent(inplace) a` directive (see [Attributes](signature-file#f2py-attributes) for details), then the example above would read: ``` >>> a = np.ones(8, 'i') >>> fib1.fib(a) >>> print(a) [ 0. 1. 1. 2. 3. 5. 8. 13.] ``` However, the recommended way to have changes made by a Fortran subroutine propagate to Python is to use the `intent(out)` attribute. That approach is more efficient and also cleaner. * The usage of `fib1.fib` in Python is very similar to using `FIB` in Fortran. However, using *in situ* output arguments in Python is poor style, as there are no safety mechanisms in Python to protect against wrong argument types. When using Fortran or C, compilers discover any type mismatches during the compilation process, but in Python the types must be checked at runtime. Consequently, using *in situ* output arguments in Python may lead to difficult-to-find bugs, not to mention that the code becomes less readable once all required type checks are implemented. Though the approach to wrapping Fortran routines for Python discussed so far is very straightforward, it has several drawbacks (see the comments above). These drawbacks stem from the fact that there is no way for F2PY to determine the actual intention of the arguments; that is, there is ambiguity in distinguishing between input and output arguments. Consequently, F2PY assumes that all arguments are input arguments by default. There are ways (see below) to remove this ambiguity by “teaching” F2PY about the true intentions of function arguments, and F2PY is then able to generate more explicit, easier to use, and less error-prone wrappers for Fortran functions.
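The in-situ pitfall discussed above can be demonstrated without any Fortran at all. The following plain-Python sketch is a stand-in for a wrapped subroutine, not F2PY-generated code; `fill_fib` is a hypothetical name. It mimics an in-situ output argument: when the caller's array has the wrong dtype, the writes land in an internal copy and are silently lost.

```python
import numpy as np

# Stand-in for a wrapped Fortran routine with an in-situ output
# argument (illustration only, not F2PY-generated code).
def fill_fib(a):
    # Like F2PY, coerce to the "presumed Fortran type"; this silently
    # copies when the caller's dtype does not match.
    buf = np.asarray(a, dtype=np.float64)
    buf[0], buf[1] = 0.0, 1.0
    for i in range(2, buf.size):
        buf[i] = buf[i - 1] + buf[i - 2]

a_ok = np.zeros(8, dtype=np.float64)
fill_fib(a_ok)
print(a_ok)    # modified in place: 0, 1, 1, 2, 3, 5, 8, 13

a_bad = np.zeros(8, dtype=np.int32)
fill_fib(a_bad)
print(a_bad)   # still all zeros: the writes went into a hidden copy
```

Nothing fails loudly in the second call, which is exactly why returning results via `intent(out)` is the safer pattern.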
The smart way ------------- If we want to have more control over how F2PY will treat the interface to our Fortran code, we can apply the wrapping steps one by one. * First, we create a signature file from `fib1.f` by running: ``` python -m numpy.f2py fib1.f -m fib2 -h fib1.pyf ``` The signature file is saved to `fib1.pyf` (see the `-h` flag) and its contents are shown below. ``` ! -*- f90 -*- python module fib2 ! in interface ! in :fib2 subroutine fib(a,n) ! in :fib2:fib1.f real*8 dimension(n) :: a integer optional,check(len(a)>=n),depend(a) :: n=len(a) end subroutine fib end interface end python module fib2 ! This file was auto-generated with f2py (version:2.28.198-1366). ! See http://cens.ioc.ee/projects/f2py2e/ ``` * Next, we’ll teach F2PY that the argument `n` is an input argument (using the `intent(in)` attribute) and that the result, i.e., the contents of `a` after calling the Fortran function `FIB`, should be returned to Python (using the `intent(out)` attribute). In addition, an array `a` should be created dynamically using the size determined by the input argument `n` (using the `depend(n)` attribute to indicate this dependence relation). The contents of a suitably modified version of `fib1.pyf` (saved as `fib2.pyf`) are as follows: ``` ! -*- f90 -*- python module fib2 interface subroutine fib(a,n) real*8 dimension(n),intent(out),depend(n) :: a integer intent(in) :: n end subroutine fib end interface end python module fib2 ``` * Finally, we build the extension module with `numpy.distutils` by running: ``` python -m numpy.f2py -c fib2.pyf fib1.f ``` In Python: ``` >>> import fib2 >>> print(fib2.fib.__doc__) a = fib(n) Wrapper for ``fib``. Parameters ---------- n : input int Returns ------- a : rank-1 array('d') with bounds (n) >>> print(fib2.fib(8)) [ 0. 1. 1. 2. 3. 5. 8. 13.] 
``` Note * The signature of `fib2.fib` now more closely corresponds to the intention of the Fortran subroutine `FIB`: given the number `n`, `fib2.fib` returns the first `n` Fibonacci numbers as a NumPy array. The new Python signature `fib2.fib` also rules out the unexpected behaviour in `fib1.fib`. * Note that by default, using a single `intent(out)` also implies `intent(hide)`. Arguments that have the `intent(hide)` attribute specified will not be listed in the argument list of a wrapper function. For more details, see [Signature file](signature-file). The quick and smart way ----------------------- The “smart way” of wrapping Fortran functions, as explained above, is suitable for wrapping (e.g. third-party) Fortran codes for which modifications to their source codes are not desirable or even possible. However, if editing Fortran codes is acceptable, then the generation of an intermediate signature file can be skipped in most cases. F2PY-specific attributes can be inserted directly into Fortran source codes using F2PY directives. An F2PY directive consists of special comment lines (starting with `Cf2py` or `!f2py`, for example) which are ignored by Fortran compilers but interpreted by F2PY as normal lines. Consider a modified version of the previous Fortran code with F2PY directives, saved as `fib3.f`: ``` C FILE: FIB3.F SUBROUTINE FIB(A,N) C C CALCULATE FIRST N FIBONACCI NUMBERS C INTEGER N REAL*8 A(N) Cf2py intent(in) n Cf2py intent(out) a Cf2py depend(n) a DO I=1,N IF (I.EQ.1) THEN A(I) = 0.0D0 ELSEIF (I.EQ.2) THEN A(I) = 1.0D0 ELSE A(I) = A(I-1) + A(I-2) ENDIF ENDDO END C END FILE FIB3.F ``` Building the extension module can now be carried out in one command: ``` python -m numpy.f2py -c -m fib3 fib3.f ``` Notice that the resulting wrapper to `FIB` is as “smart” (unambiguous) as in the previous case: ``` >>> import fib3 >>> print(fib3.fib.__doc__) a = fib(n) Wrapper for ``fib``.
Parameters ---------- n : input int Returns ------- a : rank-1 array('d') with bounds (n) >>> print(fib3.fib(8)) [ 0. 1. 1. 2. 3. 5. 8. 13.] ```

F2PY user guide
===============

* [Three ways to wrap - getting started](f2py.getting-started) + [The quick way](f2py.getting-started#the-quick-way) + [The smart way](f2py.getting-started#the-smart-way) + [The quick and smart way](f2py.getting-started#the-quick-and-smart-way) * [Using F2PY](usage) + [Using `f2py` as a command-line tool](usage#using-f2py-as-a-command-line-tool) + [Python module `numpy.f2py`](usage#python-module-numpy-f2py) + [Automatic extension module generation](usage#automatic-extension-module-generation) * [F2PY examples](f2py-examples) + [F2PY walkthrough: a basic extension module](f2py-examples#f2py-walkthrough-a-basic-extension-module) + [A filtering example](f2py-examples#a-filtering-example) + [`depends` keyword example](f2py-examples#depends-keyword-example) + [Read more](f2py-examples#read-more)
F2PY reference manual
=====================

* [Signature file](signature-file) + [Signature files syntax](signature-file#signature-files-syntax) * [Using F2PY bindings in Python](python-usage) + [Fortran type objects](python-usage#fortran-type-objects) + [Scalar arguments](python-usage#scalar-arguments) + [String arguments](python-usage#string-arguments) + [Array arguments](python-usage#array-arguments) + [Call-back arguments](python-usage#call-back-arguments) + [Common blocks](python-usage#common-blocks) + [Fortran 90 module data](python-usage#fortran-90-module-data) + [Allocatable arrays](python-usage#allocatable-arrays) * [F2PY and Build Systems](buildtools/index) + [Basic Concepts](buildtools/index#basic-concepts) + [Build Systems](buildtools/index#build-systems) * [Advanced F2PY use cases](advanced) + [Adding user-defined functions to F2PY generated modules](advanced#adding-user-defined-functions-to-f2py-generated-modules) + [Adding user-defined variables](advanced#adding-user-defined-variables) + [Dealing with KIND specifiers](advanced#dealing-with-kind-specifiers) * [F2PY test suite](f2py-testing) + [Adding a test](f2py-testing#adding-a-test)

Using F2PY
==========

This page contains a reference to all command-line options for the `f2py` command, as well as a reference to internal functions of the `numpy.f2py` module. Using `f2py` as a command-line tool ----------------------------------- When used as a command-line tool, `f2py` has three major modes, distinguished by the usage of `-c` and `-h` switches. ### 1. Signature file generation To scan Fortran sources and generate a signature file, use ``` f2py -h <filename.pyf> <options> <fortran files> \ [[ only: <fortran functions> : ] \ [ skip: <fortran functions> : ]]... \ [<fortran files> ...]
``` Note A Fortran source file can contain many routines, and it is often not necessary to allow all routines to be usable from Python. In such cases, either specify which routines should be wrapped (in the `only: .. :` part) or which routines F2PY should ignore (in the `skip: .. :` part). If `<filename.pyf>` is specified as `stdout`, then signatures are written to standard output instead of a file. Among other options (see below), the following can be used in this mode: `--overwrite-signature` Overwrites an existing signature file. ### 2. Extension module construction To construct an extension module, use ``` f2py -m <modulename> <options> <fortran files> \ [[ only: <fortran functions> : ] \ [ skip: <fortran functions> : ]]... \ [<fortran files> ...] ``` The constructed extension module is saved as `<modulename>module.c` to the current directory. Here `<fortran files>` may also contain signature files. Among other options (see below), the following options can be used in this mode: `--debug-capi` Adds debugging hooks to the extension module. When using this extension module, various diagnostic information about the wrapper is written to the standard output, for example, the values of variables, the steps taken, etc. `-include'<includefile>'` Add a CPP `#include` statement to the extension module source. `<includefile>` should be given in one of the following forms ``` "filename.ext" <filename.ext> ``` The include statement is inserted just before the wrapper functions. This feature enables using arbitrary C functions (defined in `<includefile>`) in F2PY generated wrappers. Note This option is deprecated. Use `usercode` statement to specify C code snippets directly in signature files. `--[no-]wrap-functions` Create Fortran subroutine wrappers to Fortran functions. `--wrap-functions` is default because it ensures maximum portability and compiler independence. `--include-paths <path1>:<path2>:..` Search include files from given directories.
`--help-link [<list of resources names>]` List system resources found by `numpy_distutils/system_info.py`. For example, try `f2py --help-link lapack_opt`. ### 3. Building a module To build an extension module, use ``` f2py -c <options> <fortran files> \ [[ only: <fortran functions> : ] \ [ skip: <fortran functions> : ]]... \ [ <fortran/c source files> ] [ <.o, .a, .so files> ] ``` If `<fortran files>` contains a signature file, then the source for an extension module is constructed, all Fortran and C sources are compiled, and finally all object and library files are linked to the extension module `<modulename>.so` which is saved into the current directory. If `<fortran files>` does not contain a signature file, then an extension module is constructed by scanning all Fortran source codes for routine signatures, before proceeding to build the extension module. Among other options (see below) and options described for previous modes, the following options can be used in this mode: `--help-fcompiler` List the available Fortran compilers. `--help-compiler` **[deprecated]** List the available Fortran compilers. `--fcompiler=<Vendor>` Specify a Fortran compiler type by vendor. `--f77exec=<path>` Specify the path to an F77 compiler `--fcompiler-exec=<path>` **[deprecated]** Specify the path to an F77 compiler `--f90exec=<path>` Specify the path to an F90 compiler `--f90compiler-exec=<path>` **[deprecated]** Specify the path to an F90 compiler `--f77flags=<string>` Specify F77 compiler flags `--f90flags=<string>` Specify F90 compiler flags `--opt=<string>` Specify optimization flags `--arch=<string>` Specify architecture specific optimization flags `--noopt` Compile without optimization flags `--noarch` Compile without arch-dependent optimization flags `--debug` Compile with debugging information `-l<libname>` Use the library `<libname>` when linking. `-D<macro>[=<defn=1>]` Define macro `<macro>` as `<defn>`.
`-U<macro>` Undefine macro `<macro>`. `-I<dir>` Append directory `<dir>` to the list of directories searched for include files. `-L<dir>` Add directory `<dir>` to the list of directories to be searched for `-l`. `--link-<resource>` Link the extension module with <resource> as defined by `numpy_distutils/system_info.py`. E.g. to link with optimized LAPACK libraries (vecLib on MacOSX, ATLAS elsewhere), use `--link-lapack_opt`. See also `--help-link` switch. Note The `f2py -c` option must be applied either to an existing `.pyf` file (plus the source/object/library files) or one must specify the `-m <modulename>` option (plus the sources/object/library files). Use one of the following options: ``` f2py -c -m fib1 fib1.f ``` or ``` f2py -m fib1 fib1.f -h fib1.pyf f2py -c fib1.pyf fib1.f ``` For more information, see the [Building C and C++ Extensions](https://docs.python.org/3/extending/building.html) Python documentation. When building an extension module, a combination of the following macros may be required for non-gcc Fortran compilers: ``` -DPREPEND_FORTRAN -DNO_APPEND_FORTRAN -DUPPERCASE_FORTRAN ``` To test the performance of F2PY generated interfaces, use `-DF2PY_REPORT_ATEXIT`. Then a report of various timings is printed out at the exit of Python. This feature may not work on all platforms, and currently only Linux is supported. To see whether an F2PY generated interface performs copies of array arguments, use `-DF2PY_REPORT_ON_ARRAY_COPY=<int>`. When the size of an array argument is larger than `<int>`, a message about the copying is sent to `stderr`. ### Other options `-m <modulename>` Name of an extension module. Default is `untitled`. Warning Don’t use this option if a signature file (`*.pyf`) is used. `--[no-]lower` Do [not] lower the cases in `<fortran files>`. By default, `--lower` is assumed with `-h` switch, and `--no-lower` without the `-h` switch.
`-include<header>` Writes additional headers in the C wrapper; it can be passed multiple times, generating a `#include <header>` line each time. Note that this is meant to be passed in single quotes and without spaces, for example `'-include<stdbool.h>'` `--build-dir <dirname>` All F2PY generated files are created in `<dirname>`. Default is `tempfile.mkdtemp()`. `--quiet` Run quietly. `--verbose` Run with extra verbosity. `--skip-empty-wrappers` Do not generate wrapper files unless required by the inputs. This is a backwards compatibility flag to restore pre 1.22.4 behavior. `-v` Print the F2PY version and exit. Execute `f2py` without any options to get an up-to-date list of available options. Python module `numpy.f2py` -------------------------- The f2py program is written in Python and can be run from inside your code to compile Fortran code at runtime, as follows: ``` from numpy import f2py with open("add.f") as sourcefile: sourcecode = sourcefile.read() f2py.compile(sourcecode, modulename='add') import add ``` The source string can be any valid Fortran code. If you want to save the extension-module source code then a suitable file-name can be provided by the `source_fn` keyword to the compile function. When using `numpy.f2py` as a module, the following functions can be invoked. Warning The current Python interface to the `f2py` module is not mature and may change in the future. Fortran to Python Interface Generator. numpy.f2py.compile(*source*, *modulename='untitled'*, *extra_args=''*, *verbose=True*, *source_fn=None*, *extension='.f'*, *full_output=False*)[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/f2py/__init__.py#L18-L122) Build extension module from a Fortran 77 source string with f2py.
Parameters **source**str or bytes Fortran source of module / subroutine to compile Changed in version 1.16.0: Accept str as well as bytes **modulename**str, optional The name of the compiled python module **extra_args**str or list, optional Additional parameters passed to f2py Changed in version 1.16.0: A list of args may also be provided. **verbose**bool, optional Print f2py output to screen **source_fn**str, optional Name of the file where the fortran source is written. The default is to use a temporary file with the extension provided by the `extension` parameter **extension**`{'.f', '.f90'}`, optional Filename extension if `source_fn` is not provided. The extension tells which fortran standard is used. The default is `.f`, which implies F77 standard. New in version 1.11.0. **full_output**bool, optional If True, return a [`subprocess.CompletedProcess`](https://docs.python.org/3/library/subprocess.html#subprocess.CompletedProcess "(in Python v3.10)") containing the stdout and stderr of the compile process, instead of just the status code. New in version 1.20.0. Returns **result**int or [`subprocess.CompletedProcess`](https://docs.python.org/3/library/subprocess.html#subprocess.CompletedProcess "(in Python v3.10)") 0 on success, or a [`subprocess.CompletedProcess`](https://docs.python.org/3/library/subprocess.html#subprocess.CompletedProcess "(in Python v3.10)") if `full_output=True` #### Examples ``` >>> import numpy.f2py >>> fsource = ''' ... subroutine foo ... print*, "Hello world!" ... end ... ''' >>> numpy.f2py.compile(fsource, modulename='hello', verbose=0) 0 >>> import hello >>> hello.foo() Hello world! ``` numpy.f2py.get_include()[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/f2py/__init__.py#L125-L169) Return the directory that contains the `fortranobject.c` and `.h` files. 
Note This function is not needed when building an extension with [`numpy.distutils`](../reference/distutils#module-numpy.distutils "numpy.distutils") directly from `.f` and/or `.pyf` files in one go. Python extension modules built with f2py-generated code need to use `fortranobject.c` as a source file, and include the `fortranobject.h` header. This function can be used to obtain the directory containing both of these files. Returns **include_path**str Absolute path to the directory containing `fortranobject.c` and `fortranobject.h`. See also [`numpy.get_include`](../reference/generated/numpy.get_include#numpy.get_include "numpy.get_include") function that returns the numpy include directory #### Notes New in version 1.21.1. Unless the build system you are using has specific support for f2py, building a Python extension using a `.pyf` signature file is a two-step process. For a module `mymod`: * Step 1: run `python -m numpy.f2py mymod.pyf --quiet`. This generates `_mymodmodule.c` and (if needed) `_mymod-f2pywrappers.f` files next to `mymod.pyf`. * Step 2: build your Python extension module. This requires the following source files: + `_mymodmodule.c` + `_mymod-f2pywrappers.f` (if it was generated in Step 1) + `fortranobject.c` numpy.f2py.run_main(*comline_list*)[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/f2py/f2py2e.py#L411-L479) Equivalent to running: ``` f2py <args> ``` where `<args>=string.join(<list>,' ')`, but in Python. Unless `-h` is used, this function returns a dictionary containing information on generated modules and their dependencies on source files. You cannot build extension modules with this function, that is, using `-c` is not allowed. Use the `compile` command instead. #### Examples The command `f2py -m scalar scalar.f` can be executed from Python as follows. ``` >>> import numpy.f2py >>> r = numpy.f2py.run_main(['-m','scalar','doc/source/f2py/scalar.f']) Reading fortran codes...
Reading file 'doc/source/f2py/scalar.f' (format:fix,strict)
Post-processing...
        Block: scalar
                        Block: FOO
Building modules...
        Building module "scalar"...
        Wrote C/API module "scalar" to file "./scalarmodule.c"
>>> print(r)
{'scalar': {'h': ['/home/users/pearu/src_cvs/f2py/src/fortranobject.h'],
 'csrc': ['./scalarmodule.c', '/home/users/pearu/src_cvs/f2py/src/fortranobject.c']}}
```

Automatic extension module generation
-------------------------------------

If you want to distribute your f2py extension module, then you only need to include the .pyf file and the Fortran code. The distutils extensions in NumPy allow you to define an extension module entirely in terms of this interface file. A valid `setup.py` file allowing distribution of the `add.f` module (as part of the package `f2py_examples` so that it would be loaded as `f2py_examples.add`) is:

```
def configuration(parent_package='', top_path=None):
    from numpy.distutils.misc_util import Configuration
    config = Configuration('f2py_examples', parent_package, top_path)
    config.add_extension('add', sources=['add.pyf', 'add.f'])
    return config

if __name__ == '__main__':
    from numpy.distutils.core import setup
    setup(**configuration(top_path='').todict())
```

Installation of the new package is easy using:

```
pip install .
```

assuming you have the proper permissions to write to the main site-packages directory for the version of Python you are using. For the resulting package to work, you need to create a file named `__init__.py` (in the same directory as `add.pyf`). Notice the extension module is defined entirely in terms of the `add.pyf` and `add.f` files. The conversion of the .pyf file to a .c file is handled by [`numpy.distutils`](../reference/distutils#module-numpy.distutils "numpy.distutils").

© 2005–2022 NumPy Developers
Licensed under the 3-clause BSD License.
<https://numpy.org/doc/1.23/f2py/usage.html>

Using F2PY bindings in Python
=============================

In this page, you can find a full description and a few examples of common usage patterns for F2PY with Python and different argument types. For more examples and use cases, see [F2PY examples](f2py-examples#f2py-examples).

Fortran type objects
--------------------

All wrappers for Fortran/C routines, common blocks, or for Fortran 90 module data generated by F2PY are exposed to Python as `fortran` type objects. Routine wrappers are callable `fortran` type objects while wrappers to Fortran data have attributes referring to data objects.

All `fortran` type objects have an attribute `_cpointer` that contains a `CObject` referring to the C pointer of the corresponding Fortran/C function or variable at the C level. Such `CObjects` can be used as callback arguments for F2PY generated functions to bypass the Python C/API layer for calling Python functions from Fortran or C. This can be useful when the computational aspects of such functions are implemented in C or Fortran and wrapped with F2PY (or any other tool capable of providing the `CObject` of a function).

Consider a Fortran 77 file `ftype.f`:

```
C FILE: FTYPE.F
      SUBROUTINE FOO(N)
      INTEGER N
Cf2py integer optional,intent(in) :: n = 13
      REAL A,X
      COMMON /DATA/ A,X(3)
      PRINT*, "IN FOO: N=",N," A=",A," X=[",X(1),X(2),X(3),"]"
      END
C END OF FTYPE.F
```

and a wrapper built using `f2py -c ftype.f -m ftype`. In Python, you can observe the types of `foo` and `data`, and how to access individual objects of the wrapped Fortran code.

```
>>> import ftype
>>> print(ftype.__doc__)
This module 'ftype' is auto-generated with f2py (version:2).
Functions:
    foo(n=13)
COMMON blocks:
  /data/ a,x(3)
.
>>> type(ftype.foo), type(ftype.data)
(<class 'fortran'>, <class 'fortran'>)
>>> ftype.foo()
 IN FOO: N= 13 A= 0. X=[ 0. 0. 0.]
>>> ftype.data.a = 3
>>> ftype.data.x = [1,2,3]
>>> ftype.foo()
 IN FOO: N= 13 A= 3. X=[ 1. 2. 3.]
>>> ftype.data.x[1] = 45 >>> ftype.foo(24) IN FOO: N= 24 A= 3. X=[ 1. 45. 3.] >>> ftype.data.x array([ 1., 45., 3.], dtype=float32) ``` Scalar arguments ---------------- In general, a scalar argument for a F2PY generated wrapper function can be an ordinary Python scalar (integer, float, complex number) as well as an arbitrary sequence object (list, tuple, array, string) of scalars. In the latter case, the first element of the sequence object is passed to the Fortran routine as a scalar argument. Note * When type-casting is required and there is possible loss of information via narrowing e.g. when type-casting float to integer or complex to float, F2PY *does not* raise an exception. + For complex to real type-casting only the real part of a complex number is used. * `intent(inout)` scalar arguments are assumed to be array objects in order to have *in situ* changes be effective. It is recommended to use arrays with proper type but also other types work. [Read more about the intent attribute](signature-file#f2py-attributes). Consider the following Fortran 77 code: ``` C FILE: SCALAR.F SUBROUTINE FOO(A,B) REAL*8 A, B Cf2py intent(in) a Cf2py intent(inout) b PRINT*, " A=",A," B=",B PRINT*, "INCREMENT A AND B" A = A + 1D0 B = B + 1D0 PRINT*, "NEW A=",A," B=",B END C END OF FILE SCALAR.F ``` and wrap it using `f2py -c -m scalar scalar.f`. In Python: ``` >>> import scalar >>> print(scalar.foo.__doc__) foo(a,b) Wrapper for ``foo``. Parameters ---------- a : input float b : in/output rank-0 array(float,'d') >>> scalar.foo(2, 3) A= 2. B= 3. INCREMENT A AND B NEW A= 3. B= 4. >>> import numpy >>> a = numpy.array(2) # these are integer rank-0 arrays >>> b = numpy.array(3) >>> scalar.foo(a, b) A= 2. B= 3. INCREMENT A AND B NEW A= 3. B= 4. >>> print(a, b) # note that only b is changed in situ 2 4 ``` String arguments ---------------- F2PY generated wrapper functions accept almost any Python object as a string argument, since `str` is applied for non-string objects. 
Exceptions are NumPy arrays that must have type code `'S1'` or `'b'` (corresponding to the outdated `'c'` or `'1'` typecodes, respectively) when used as string arguments. See [Scalars](../reference/arrays.scalars#arrays-scalars) for more information on these typecodes. A string can have an arbitrary length when used as a string argument for an F2PY generated wrapper function. If the length is greater than expected, the string is truncated silently. If the length is smaller than expected, additional memory is allocated and filled with `\0`. Because Python strings are immutable, an `intent(inout)` argument expects an array version of a string in order to have *in situ* changes be effective. Consider the following Fortran 77 code: ``` C FILE: STRING.F SUBROUTINE FOO(A,B,C,D) CHARACTER*5 A, B CHARACTER*(*) C,D Cf2py intent(in) a,c Cf2py intent(inout) b,d PRINT*, "A=",A PRINT*, "B=",B PRINT*, "C=",C PRINT*, "D=",D PRINT*, "CHANGE A,B,C,D" A(1:1) = 'A' B(1:1) = 'B' C(1:1) = 'C' D(1:1) = 'D' PRINT*, "A=",A PRINT*, "B=",B PRINT*, "C=",C PRINT*, "D=",D END C END OF FILE STRING.F ``` and wrap it using `f2py -c -m mystring string.f`. Python session: ``` >>> import mystring >>> print(mystring.foo.__doc__) foo(a,b,c,d) Wrapper for ``foo``. Parameters ---------- a : input string(len=5) b : in/output rank-0 array(string(len=5),'c') c : input string(len=-1) d : in/output rank-0 array(string(len=-1),'c') >>> from numpy import array >>> a = array(b'123\0\0') >>> b = array(b'123\0\0') >>> c = array(b'123') >>> d = array(b'123') >>> mystring.foo(a, b, c, d) A=123 B=123 C=123 D=123 CHANGE A,B,C,D A=A23 B=B23 C=C23 D=D23 >>> a[()], b[()], c[()], d[()] (b'123', b'B23', b'123', b'D2') ``` Array arguments --------------- In general, array arguments for F2PY generated wrapper functions accept arbitrary sequences that can be transformed to NumPy array objects. 
There are two notable exceptions: * `intent(inout)` array arguments must always be [proper-contiguous](../glossary#term-contiguous) and have a compatible `dtype`, otherwise an exception is raised. * `intent(inplace)` array arguments will be changed *in situ* if the argument has a different type than expected (see the `intent(inplace)` [attribute](signature-file#f2py-attributes) for more information). In general, if a NumPy array is [proper-contiguous](../glossary#term-contiguous) and has a proper type then it is directly passed to the wrapped Fortran/C function. Otherwise, an element-wise copy of the input array is made and the copy, being proper-contiguous and with proper type, is used as the array argument. Usually there is no need to worry about how the arrays are stored in memory and whether the wrapped functions, being either Fortran or C functions, assume one or another storage order. F2PY automatically ensures that wrapped functions get arguments with the proper storage order; the underlying algorithm is designed to make copies of arrays only when absolutely necessary. However, when dealing with very large multidimensional input arrays with sizes close to the size of the physical memory in your computer, then care must be taken to ensure the usage of proper-contiguous and proper type arguments. To transform input arrays to column major storage order before passing them to Fortran routines, use the function [`numpy.asfortranarray`](../reference/generated/numpy.asfortranarray#numpy.asfortranarray "numpy.asfortranarray"). 
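The up-front conversion recommended above can be verified directly with plain NumPy, no wrapped Fortran module required; note that converting a non-trivial C-ordered array makes exactly one copy:

```python
import numpy as np

# A 2-D array built from nested lists is C-contiguous (row major) by default.
a = np.array([[1., 2., 3.], [4., 5., 6.]])
print(a.flags.c_contiguous, a.flags.f_contiguous)  # True False

# Convert once, up front, so a Fortran-order wrapper need not copy on every call.
fa = np.asfortranarray(a)
print(fa.flags.f_contiguous)                       # True
print(np.shares_memory(a, fa))                     # False: the conversion made one copy
```

For a 1-D array `asfortranarray` is a no-op, since C and Fortran contiguity coincide in one dimension.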
Consider the following Fortran 77 code: ``` C FILE: ARRAY.F SUBROUTINE FOO(A,N,M) C C INCREMENT THE FIRST ROW AND DECREMENT THE FIRST COLUMN OF A C INTEGER N,M,I,J REAL*8 A(N,M) Cf2py intent(in,out,copy) a Cf2py integer intent(hide),depend(a) :: n=shape(a,0), m=shape(a,1) DO J=1,M A(1,J) = A(1,J) + 1D0 ENDDO DO I=1,N A(I,1) = A(I,1) - 1D0 ENDDO END C END OF FILE ARRAY.F ``` and wrap it using `f2py -c -m arr array.f -DF2PY_REPORT_ON_ARRAY_COPY=1`. In Python: ``` >>> import arr >>> from numpy import asfortranarray >>> print(arr.foo.__doc__) a = foo(a,[overwrite_a]) Wrapper for ``foo``. Parameters ---------- a : input rank-2 array('d') with bounds (n,m) Other Parameters ---------------- overwrite_a : input int, optional Default: 0 Returns ------- a : rank-2 array('d') with bounds (n,m) >>> a = arr.foo([[1, 2, 3], ... [4, 5, 6]]) created an array from object >>> print(a) [[ 1. 3. 4.] [ 3. 5. 6.]] >>> a.flags.c_contiguous False >>> a.flags.f_contiguous True # even if a is proper-contiguous and has proper type, # a copy is made forced by intent(copy) attribute # to preserve its original contents >>> b = arr.foo(a) copied an array: size=6, elsize=8 >>> print(a) [[ 1. 3. 4.] [ 3. 5. 6.]] >>> print(b) [[ 1. 4. 5.] [ 2. 5. 6.]] >>> b = arr.foo(a, overwrite_a = 1) # a is passed directly to Fortran ... # routine and its contents is discarded ... >>> print(a) [[ 1. 4. 5.] [ 2. 5. 6.]] >>> print(b) [[ 1. 4. 5.] [ 2. 5. 6.]] >>> a is b # a and b are actually the same objects True >>> print(arr.foo([1, 2, 3])) # different rank arrays are allowed created an array from object [ 1. 1. 2.] >>> print(arr.foo([[[1], [2], [3]]])) created an array from object [[[ 1.] [ 1.] [ 2.]]] >>> >>> # Creating arrays with column major data storage order: ... 
>>> s = asfortranarray([[1, 2, 3], [4, 5, 6]]) >>> s.flags.f_contiguous True >>> print(s) [[1 2 3] [4 5 6]] >>> print(arr.foo(s)) >>> s2 = asfortranarray(s) >>> s2 is s # an array with column major storage order # is returned immediately True >>> # Note that arr.foo returns a column major data storage order array: ... >>> s3 = ascontiguousarray(s) >>> s3.flags.f_contiguous False >>> s3.flags.c_contiguous True >>> s3 = arr.foo(s3) copied an array: size=6, elsize=8 >>> s3.flags.f_contiguous True >>> s3.flags.c_contiguous False ``` Call-back arguments ------------------- F2PY supports calling Python functions from Fortran or C codes. Consider the following Fortran 77 code: ``` C FILE: CALLBACK.F SUBROUTINE FOO(FUN,R) EXTERNAL FUN INTEGER I REAL*8 R, FUN Cf2py intent(out) r R = 0D0 DO I=-5,5 R = R + FUN(I) ENDDO END C END OF FILE CALLBACK.F ``` and wrap it using `f2py -c -m callback callback.f`. In Python: ``` >>> import callback >>> print(callback.foo.__doc__) r = foo(fun,[fun_extra_args]) Wrapper for ``foo``. Parameters ---------- fun : call-back function Other Parameters ---------------- fun_extra_args : input tuple, optional Default: () Returns ------- r : float Notes ----- Call-back functions:: def fun(i): return r Required arguments: i : input int Return objects: r : float >>> def f(i): return i*i ... >>> print(callback.foo(f)) 110.0 >>> print(callback.foo(lambda i:1)) 11.0 ``` In the above example F2PY was able to guess accurately the signature of the call-back function. However, sometimes F2PY cannot establish the appropriate signature; in these cases the signature of the call-back function must be explicitly defined in the signature file. To facilitate this, signature files may contain special modules (the names of these modules contain the special `__user__` sub-string) that define the various signatures for call-back functions. 
Callback arguments in routine signatures have the `external` attribute (see also the `intent(callback)` [attribute](signature-file#f2py-attributes)). To relate a callback argument with its signature in a `__user__` module block, a `use` statement can be utilized as illustrated below. The same signature for a callback argument can be referred to in different routine signatures. We use the same Fortran 77 code as in the previous example but now we will pretend that F2PY was not able to guess the signatures of call-back arguments correctly. First, we create an initial signature file `callback2.pyf` using F2PY: ``` f2py -m callback2 -h callback2.pyf callback.f ``` Then modify it as follows ``` ! -*- f90 -*- python module __user__routines interface function fun(i) result (r) integer :: i real*8 :: r end function fun end interface end python module __user__routines python module callback2 interface subroutine foo(f,r) use __user__routines, f=>fun external f real*8 intent(out) :: r end subroutine foo end interface end python module callback2 ``` Finally, we build the extension module using `f2py -c callback2.pyf callback.f`. An example Python session for this snippet would be identical to the previous example except that the argument names would differ. Sometimes a Fortran package may require that users provide routines that the package will use. F2PY can construct an interface to such routines so that Python functions can be called from Fortran. Consider the following Fortran 77 subroutine that takes an array as its input and applies a function `func` to its elements. ``` subroutine calculate(x,n) cf2py intent(callback) func external func c The following lines define the signature of func for F2PY: cf2py real*8 y cf2py y = func(y) c cf2py intent(in,out,copy) x integer n,i real*8 x(n), func do i=1,n x(i) = func(x(i)) end do end ``` The Fortran code expects that the function `func` has been defined externally. 
In order to use a Python function for `func`, it must have an attribute `intent(callback)` and it must be specified before the `external` statement.

Finally, build an extension module using `f2py -c -m foo calculate.f`.

In Python:

```
>>> import foo
>>> foo.calculate(range(5), lambda x: x*x)
array([  0.,   1.,   4.,   9.,  16.])
>>> import math
>>> foo.calculate(range(5), math.exp)
array([  1.        ,   2.71828183,   7.3890561 ,  20.08553692,  54.59815003])
```

The function is included as an argument to the python function call to the Fortran subroutine even though it was *not* in the Fortran subroutine argument list. The “external” keyword refers to the C function generated by f2py, not the Python function itself. The python function is essentially being supplied to the C function.

The callback function may also be explicitly set in the module. Then it is not necessary to pass the function in the argument list to the Fortran function. This may be desired if the Fortran function calling the Python callback function is itself called by another Fortran function.

Consider the following Fortran 77 subroutine:

```
      subroutine f1()
          print *, "in f1, calling f2 twice.."
          call f2()
          call f2()
          return
      end

      subroutine f2()
cf2py    intent(callback, hide) fpy
          external fpy
          print *, "in f2, calling f2py.."
          call fpy()
          return
      end
```

and wrap it using `f2py -c -m pfromf extcallback.f`. In Python:

```
>>> import pfromf
>>> pfromf.f2()
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
pfromf.error: Callback fpy not defined (as an argument or module pfromf attribute).
>>> def f(): print("python f")
...
>>> pfromf.fpy = f
>>> pfromf.f2()
 in f2, calling f2py..
python f
>>> pfromf.f1()
 in f1, calling f2 twice..
 in f2, calling f2py..
python f
 in f2, calling f2py..
python f
>>>
```

### Resolving arguments to call-back functions

F2PY generated interfaces are very flexible with respect to call-back arguments.
For each call-back argument an additional optional argument `<name>_extra_args` is introduced by F2PY. This argument can be used to pass extra arguments to user provided call-back functions.

If a F2PY generated wrapper function expects the following call-back argument:

```
def fun(a_1,...,a_n):
    ...
    return x_1,...,x_k
```

but the following Python function

```
def gun(b_1,...,b_m):
    ...
    return y_1,...,y_l
```

is provided by a user, and in addition,

```
fun_extra_args = (e_1,...,e_p)
```

is used, then the following rules are applied when a Fortran or C function evaluates the call-back argument `gun`:

* If `p == 0` then `gun(a_1, ..., a_q)` is called, here `q = min(m, n)`.
* If `n + p <= m` then `gun(a_1, ..., a_n, e_1, ..., e_p)` is called.
* If `p <= m < n + p` then `gun(a_1, ..., a_q, e_1, ..., e_p)` is called, and here `q = m - p`.
* If `p > m` then `gun(e_1, ..., e_m)` is called.
* If `n + p` is less than the number of required arguments to `gun` then an exception is raised.

The function `gun` may return any number of objects as a tuple. Then the following rules are applied:

* If `k < l`, then `y_{k + 1}, ..., y_l` are ignored.
* If `k > l`, then only `x_1, ..., x_l` are set.

Common blocks
-------------

F2PY generates wrappers to `common` blocks defined in a routine signature block. Common blocks are visible to all Fortran codes linked to the current extension module, but not to other extension modules (this restriction is due to the way Python imports shared libraries). In Python, the F2PY wrappers to `common` blocks are `fortran` type objects that have (dynamic) attributes related to the data members of the common blocks. When accessed, these attributes return as NumPy array objects (multidimensional arrays are Fortran-contiguous) which directly link to data members in common blocks. Data members can be changed by direct assignment or by in-place changes to the corresponding array objects.
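Before moving on to an example: the `<name>_extra_args` resolution rules listed earlier in this section can be transcribed into a few lines of plain Python to see which arguments actually reach a user call-back. `resolve_callback_args` is an illustrative helper, not an f2py API:

```python
def resolve_callback_args(avail, extra, m):
    """Return the argument list passed to a user call-back that accepts `m`
    parameters, when the Fortran side offers `avail` (a_1..a_n) and the user
    supplied `extra` (e_1..e_p) via <name>_extra_args.
    Direct transcription of the documented rules (sketch only)."""
    n, p = len(avail), len(extra)
    if p > m:
        return list(extra[:m])             # gun(e_1, ..., e_m)
    if n + p <= m:
        return list(avail) + list(extra)   # gun(a_1..a_n, e_1..e_p)
    q = m - p                              # p <= m < n + p; p == 0 gives q = min(m, n)
    return list(avail[:q]) + list(extra)   # gun(a_1..a_q, e_1..e_p)

print(resolve_callback_args([1, 2, 3], (), 2))       # [1, 2]
print(resolve_callback_args([1], ("e1", "e2"), 3))   # [1, 'e1', 'e2']
print(resolve_callback_args([1, 2, 3], ("e1",), 3))  # [1, 2, 'e1']
print(resolve_callback_args([1], ("e1", "e2"), 1))   # ['e1']
```

Extra arguments always fill the trailing parameter slots; the Fortran-provided values are dropped from the right when `gun` cannot accept all of them.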
Consider the following Fortran 77 code: ``` C FILE: COMMON.F SUBROUTINE FOO INTEGER I,X REAL A COMMON /DATA/ I,X(4),A(2,3) PRINT*, "I=",I PRINT*, "X=[",X,"]" PRINT*, "A=[" PRINT*, "[",A(1,1),",",A(1,2),",",A(1,3),"]" PRINT*, "[",A(2,1),",",A(2,2),",",A(2,3),"]" PRINT*, "]" END C END OF COMMON.F ``` and wrap it using `f2py -c -m common common.f`. In Python: ``` >>> import common >>> print(common.data.__doc__) i : 'i'-scalar x : 'i'-array(4) a : 'f'-array(2,3) >>> common.data.i = 5 >>> common.data.x[1] = 2 >>> common.data.a = [[1,2,3],[4,5,6]] >>> common.foo() >>> common.foo() I= 5 X=[ 0 2 0 0 ] A=[ [ 1.00000000 , 2.00000000 , 3.00000000 ] [ 4.00000000 , 5.00000000 , 6.00000000 ] ] >>> common.data.a[1] = 45 >>> common.foo() I= 5 X=[ 0 2 0 0 ] A=[ [ 1.00000000 , 2.00000000 , 3.00000000 ] [ 45.0000000 , 45.0000000 , 45.0000000 ] ] >>> common.data.a # a is Fortran-contiguous array([[ 1., 2., 3.], [ 45., 45., 45.]], dtype=float32) >>> common.data.a.flags.f_contiguous True ``` Fortran 90 module data ---------------------- The F2PY interface to Fortran 90 module data is similar to the handling of Fortran 77 common blocks. Consider the following Fortran 90 code: ``` module mod integer i integer :: x(4) real, dimension(2,3) :: a real, allocatable, dimension(:,:) :: b contains subroutine foo integer k print*, "i=",i print*, "x=[",x,"]" print*, "a=[" print*, "[",a(1,1),",",a(1,2),",",a(1,3),"]" print*, "[",a(2,1),",",a(2,2),",",a(2,3),"]" print*, "]" print*, "Setting a(1,2)=a(1,2)+3" a(1,2) = a(1,2)+3 end subroutine foo end module mod ``` and wrap it using `f2py -c -m moddata moddata.f90`. In Python: ``` >>> import moddata >>> print(moddata.mod.__doc__) i : 'i'-scalar x : 'i'-array(4) a : 'f'-array(2,3) b : 'f'-array(-1,-1), not allocated foo() Wrapper for ``foo``. 
>>> moddata.mod.i = 5
>>> moddata.mod.x[:2] = [1,2]
>>> moddata.mod.a = [[1,2,3],[4,5,6]]
>>> moddata.mod.foo()
 i= 5
 x=[ 1 2 0 0 ]
 a=[
 [ 1.000000 , 2.000000 , 3.000000 ]
 [ 4.000000 , 5.000000 , 6.000000 ]
 ]
 Setting a(1,2)=a(1,2)+3
>>> moddata.mod.a  # a is Fortran-contiguous
array([[ 1., 5., 3.],
       [ 4., 5., 6.]], dtype=float32)
>>> moddata.mod.a.flags.f_contiguous
True
```

Allocatable arrays
------------------

F2PY has basic support for Fortran 90 module allocatable arrays.

Consider the following Fortran 90 code:

```
module mod
    real, allocatable, dimension(:,:) :: b
contains
    subroutine foo
        integer k
        if (allocated(b)) then
            print*, "b=["
            do k = 1,size(b,1)
                print*, b(k,1:size(b,2))
            enddo
            print*, "]"
        else
            print*, "b is not allocated"
        endif
    end subroutine foo
end module mod
```

and wrap it using `f2py -c -m allocarr allocarr.f90`. In Python:

```
>>> import allocarr
>>> print(allocarr.mod.__doc__)
b : 'f'-array(-1,-1), not allocated
foo()

Wrapper for ``foo``.

>>> allocarr.mod.foo()
 b is not allocated
>>> allocarr.mod.b = [[1, 2, 3], [4, 5, 6]]  # allocate/initialize b
>>> allocarr.mod.foo()
 b=[
   1.000000   2.000000   3.000000
   4.000000   5.000000   6.000000
 ]
>>> allocarr.mod.b  # b is Fortran-contiguous
array([[ 1., 2., 3.],
       [ 4., 5., 6.]], dtype=float32)
>>> allocarr.mod.b.flags.f_contiguous
True
>>> allocarr.mod.b = [[1, 2, 3], [4, 5, 6], [7, 8, 9]]  # reallocate/initialize b
>>> allocarr.mod.foo()
 b=[
   1.000000   2.000000   3.000000
   4.000000   5.000000   6.000000
   7.000000   8.000000   9.000000
 ]
>>> allocarr.mod.b = None  # deallocate array
>>> allocarr.mod.foo()
 b is not allocated
```

<https://numpy.org/doc/1.23/f2py/python-usage.html>

Signature file
==============

The interface definition file (.pyf) is how you can fine-tune the interface between Python and Fortran. The syntax specification for signature files (`.pyf` files) is modeled on the Fortran 90/95 language specification.
Almost all Fortran 90/95 standard constructs are understood, both in free and fixed format (recall that Fortran 77 is a subset of Fortran 90/95). F2PY introduces some extensions to the Fortran 90/95 language specification that help in the design of the Fortran to Python interface, making it more “Pythonic”. Signature files may contain arbitrary Fortran code so that any Fortran 90/95 codes can be treated as signature files. F2PY silently ignores Fortran constructs that are irrelevant for creating the interface. However, this also means that syntax errors are not caught by F2PY and will only be caught when the library is built. Note Currently, F2PY may fail with valid Fortran constructs, such as intrinsic modules. If this happens, you can check the [NumPy GitHub issue tracker](https://github.com/numpy/numpy/issues) for possible workarounds or work-in-progress ideas. In general, the contents of the signature files are case-sensitive. When scanning Fortran codes to generate a signature file, F2PY lowers all cases automatically except in multi-line blocks or when the `--no-lower` option is used. The syntax of signature files is presented below. Signature files syntax ---------------------- ### Python module block A signature file may contain one (recommended) or more `python module` blocks. The `python module` block describes the contents of a Python/C extension module `<modulename>module.c` that F2PY generates. Warning Exception: if `<modulename>` contains a substring `__user__`, then the corresponding `python module` block describes the signatures of call-back functions (see [Call-back arguments](python-usage#call-back-arguments)). A `python module` block has the following structure: ``` python module <modulename> [<usercode statement>]... [ interface <usercode statement> <Fortran block data signatures> <Fortran/C routine signatures> end [interface] ]... 
[ interface
    module <F90 modulename>
        [<F90 module data type declarations>]
        [<F90 module routine signatures>]
    end [module [<F90 modulename>]]
  end [interface]
]...
end [python module [<modulename>]]
```

Here brackets `[]` indicate an optional section, dots `...` indicate one or more of a previous section. So, `[]...` is to be read as zero or more of a previous section.

### Fortran/C routine signatures

The signature of a Fortran routine has the following structure:

```
[<typespec>] function | subroutine <routine name> \
             [ ( [<arguments>] ) ] [ result ( <entityname> ) ]
    [<argument/variable type declarations>]
    [<argument/variable attribute statements>]
    [<use statements>]
    [<common block statements>]
    [<other statements>]
end [ function | subroutine [<routine name>] ]
```

From a Fortran routine signature F2PY generates a Python/C extension function that has the following signature:

```
def <routine name>(<required arguments>[,<optional arguments>]):
    ...
    return <return variables>
```

The signature of a Fortran block data has the following structure:

```
block data [ <block data name> ]
    [<variable type declarations>]
    [<variable attribute statements>]
    [<use statements>]
    [<common block statements>]
    [<include statements>]
end [ block data [<block data name>] ]
```

### Type declarations

The definition of the `<argument/variable type declaration>` part is

```
<typespec> [ [<attrspec>] :: ] <entitydecl>
```

where

```
<typespec> := byte | character [<charselector>]
           | complex [<kindselector>] | real [<kindselector>]
           | double complex | double precision
           | integer [<kindselector>] | logical [<kindselector>]

<charselector> := * <charlen>
               | ( [len=] <len> [ , [kind=] <kind>] )
               | ( kind= <kind> [ , len= <len> ] )
<kindselector> := * <intlen>
               | ( [kind=] <kind> )

<entitydecl> := <name> [ [ * <charlen> ] [ ( <arrayspec> ) ]
                      | [ ( <arrayspec> ) ] * <charlen> ]
             | [ / <init_expr> / | = <init_expr> ] \
               [ , <entitydecl> ]
```

and

* `<attrspec>` is a comma separated list of [attributes](#attributes);
* `<arrayspec>` is a comma separated list of dimension bounds;
* `<init_expr>` is a [C expression](#c-expressions);
* `<intlen>` may be negative integer for `integer` type specifications. In such cases `integer*<negintlen>` represents unsigned C integers.

If an argument has no `<argument type declaration>`, its type is determined by applying `implicit` rules to its name.

### Statements

#### Attribute statements

The `<argument/variable attribute statement>` is similar to the `<argument/variable type declaration>`, but without `<typespec>`.

An attribute statement cannot contain other attributes, and `<entitydecl>` can be only a list of names. See [Attributes](#f2py-attributes) for more details on the attributes that can be used by F2PY.

#### Use statements

* The definition of the `<use statement>` part is

  ```
  use <modulename> [ , <rename_list> | , ONLY : <only_list> ]
  ```

  where

  ```
  <rename_list> := <local_name> => <use_name> [ , <rename_list> ]
  ```

* Currently F2PY uses `use` statements only for linking call-back modules and `external` arguments (call-back functions). See [Call-back arguments](python-usage#call-back-arguments).

#### Common block statements

* The definition of the `<common block statement>` part is

  ```
  common / <common name> / <shortentitydecl>
  ```

  where

  ```
  <shortentitydecl> := <name> [ ( <arrayspec> ) ] [ , <shortentitydecl> ]
  ```

* If a `python module` block contains two or more `common` blocks with the same name, the variables from the additional declarations are appended.

The types of variables in `<shortentitydecl>` are defined using `<argument type declarations>`. Note that the corresponding `<argument type declarations>` may contain array specifications; then these need not be specified in `<shortentitydecl>`.

#### Other statements

* The `<other statement>` part refers to any other Fortran language constructs that are not described above.
F2PY ignores most of them except the following:

  + `call` statements and function calls of `external` arguments (see [more details on external arguments](#external));
  + `include` statements

    ```
    include '<filename>'
    include "<filename>"
    ```

    If a file `<filename>` does not exist, the `include` statement is ignored. Otherwise, the file `<filename>` is included to a signature file. `include` statements can be used in any part of a signature file, also outside the Fortran/C routine signature blocks.
  + `implicit` statements

    ```
    implicit none
    implicit <list of implicit maps>
    ```

    where

    ```
    <implicit map> := <typespec> ( <list of letters or range of letters> )
    ```

    Implicit rules are used to determine the type specification of a variable (from the first letter of its name) if the variable is not defined using `<variable type declaration>`. Default implicit rules are given by:

    ```
    implicit real (a-h,o-z,$_), integer (i-m)
    ```

  + `entry` statements

    ```
    entry <entry name> [([<arguments>])]
    ```

    F2PY generates wrappers for all entry names using the signature of the routine block.

    Note

    The `entry` statement can be used to describe the signature of an arbitrary subroutine or function, allowing F2PY to generate a number of wrappers from only one routine block signature. There are a few restrictions while doing this: `fortranname` cannot be used, `callstatement` and `callprotoargument` can be used only if they are valid for all entry routines, etc.

#### F2PY statements

In addition, F2PY introduces the following statements:

`threadsafe`
Uses a `Py_BEGIN_ALLOW_THREADS .. Py_END_ALLOW_THREADS` block around the call to Fortran/C function.

`callstatement <C-expr|multi-line block>`
Replaces the F2PY generated call statement to Fortran/C function with `<C-expr|multi-line block>`. The wrapped Fortran/C function is available as `(*f2py_func)`. To raise an exception, set `f2py_success = 0` in `<C-expr|multi-line block>`.
`callprotoargument <C-typespecs>` When the `callstatement` statement is used, F2PY may not generate proper prototypes for Fortran/C functions (because `<C-expr>` may contain function calls, and F2PY has no way to determine what should be the proper prototype). With this statement you can explicitly specify the arguments of the corresponding prototype: ``` extern <return type> FUNC_F(<routine name>,<ROUTINE NAME>)(<callprotoargument>); ``` `fortranname [<actual Fortran/C routine name>]` F2PY allows for the use of an arbitrary `<routine name>` for a given Fortran/C function. Then this statement is used for the `<actual Fortran/C routine name>`. If `fortranname` statement is used without `<actual Fortran/C routine name>` then a dummy wrapper is generated. `usercode <multi-line block>` When this is used inside a `python module` block, the given C code will be inserted to generated C/API source just before wrapper function definitions. Here you can define arbitrary C functions to be used for the initialization of optional arguments. For example, if `usercode` is used twice inside `python module` block then the second multi-line block is inserted after the definition of the external routines. When used inside `<routine signature>`, then the given C code will be inserted into the corresponding wrapper function just after the declaration of variables but before any C statements. So, the `usercode` follow-up can contain both declarations and C statements. When used inside the first `interface` block, then the given C code will be inserted at the end of the initialization function of the extension module. This is how the extension modules dictionary can be modified and has many use-cases; for example, to define additional variables. `pymethoddef <multiline block>` This is a multi-line block which will be inserted into the definition of a module methods `PyMethodDef`-array. 
It must be a comma-separated list of C arrays (see [Extending and Embedding](https://docs.python.org/extending/index.html) Python documentation for details). `pymethoddef` statement can be used only inside `python module` block. ### Attributes The following attributes can be used by F2PY. `optional` The corresponding argument is moved to the end of `<optional arguments>` list. A default value for an optional argument can be specified via `<init_expr>` (see the `entitydecl` [definition](#type-declarations)) Note * The default value must be given as a valid C expression. * Whenever `<init_expr>` is used, the `optional` attribute is set automatically by F2PY. * For an optional array argument, all its dimensions must be bounded. `required` The corresponding argument with this attribute is considered mandatory. This is the default. `required` should only be specified if there is a need to disable the automatic `optional` setting when `<init_expr>` is used. If a Python `None` object is used as a required argument, the argument is treated as optional. That is, in the case of array arguments, the memory is allocated. If `<init_expr>` is given, then the corresponding initialization is carried out. `dimension(<arrayspec>)` The corresponding variable is considered as an array with dimensions given in `<arrayspec>`. `intent(<intentspec>)` This specifies the “intention” of the corresponding argument. `<intentspec>` is a comma separated list of the following keys: * `in` The corresponding argument is considered to be input-only. This means that the value of the argument is passed to a Fortran/C function and that the function is expected to not change the value of this argument. * `inout` The corresponding argument is marked for input/output or as an *in situ* output argument. `intent(inout)` arguments can be only [contiguous](../glossary#term-contiguous) NumPy arrays (in either the Fortran or C sense) with proper type and size. 
The latter coincides with the default contiguous concept used in NumPy and is effective only if `intent(c)` is used. F2PY assumes Fortran contiguous arguments by default. Note Using `intent(inout)` is generally not recommended, as it can cause unexpected results. For example, scalar arguments using `intent(inout)` are assumed to be array objects in order to have *in situ* changes be effective. Use `intent(in,out)` instead. See also the `intent(inplace)` attribute. * `inplace` The corresponding argument is considered to be an input/output or *in situ* output argument. `intent(inplace)` arguments must be NumPy arrays of a proper size. If the type of an array is not “proper” or the array is non-contiguous then the array will be modified in-place to fix the type and make it contiguous. Note Using `intent(inplace)` is generally not recommended either. For example, when slices have been taken from an `intent(inplace)` argument then after in-place changes, the data pointers for the slices may point to an unallocated memory area. * `out` The corresponding argument is considered to be a return variable. It is appended to the `<returned variables>` list. Using `intent(out)` sets `intent(hide)` automatically, unless `intent(in)` or `intent(inout)` are specified as well. By default, returned multidimensional arrays are Fortran-contiguous. If `intent(c)` attribute is used, then the returned multidimensional arrays are C-contiguous. * `hide` The corresponding argument is removed from the list of required or optional arguments. Typically `intent(hide)` is used with `intent(out)` or when `<init_expr>` completely determines the value of the argument like in the following example: ``` integer intent(hide),depend(a) :: n = len(a) real intent(in),dimension(n) :: a ``` * `c` The corresponding argument is treated as a C scalar or C array argument. 
For the case of a scalar argument, its value is passed to a C function as a C scalar argument (recall that Fortran scalar arguments are actually C pointer arguments). For array arguments, the wrapper function is assumed to treat multidimensional arrays as C-contiguous arrays. There is no need to use `intent(c)` for one-dimensional arrays, irrespective of whether the wrapped function is in Fortran or C. This is because the concepts of Fortran and C contiguity overlap in one-dimensional cases. If `intent(c)` is used as a statement but without an entity declaration list, then F2PY adds the `intent(c)` attribute to all arguments. Also, when wrapping C functions, one must use the `intent(c)` attribute for `<routine name>` in order to disable the Fortran-specific `F_FUNC(..,..)` macros.
* `cache` The corresponding argument is treated as junk memory. Neither Fortran nor C contiguity checks are carried out. Using `intent(cache)` makes sense only for array arguments, typically in conjunction with the `intent(hide)` or `optional` attributes.
* `copy` Ensures that the original contents of an `intent(in)` argument are preserved. Typically used with the `intent(in,out)` attribute. F2PY creates an optional argument `overwrite_<argument name>` with the default value `0`.
* `overwrite` This indicates that the original contents of the `intent(in)` argument may be altered by the Fortran/C function. F2PY creates an optional argument `overwrite_<argument name>` with the default value `1`.
* `out=<new name>` Replaces the returned name with `<new name>` in the `__doc__` string of the wrapper function.
* `callback` Constructs an external function suitable for calling Python functions from Fortran. `intent(callback)` must be specified before the corresponding `external` statement. If the ‘argument’ is not in the argument list then it will be added to the Python wrapper, but only by initializing an external function.
Note Use `intent(callback)` in situations where the Fortran/C code assumes that the user has implemented a function with a given prototype and linked it into the executable. Don’t use `intent(callback)` if the function appears in the argument list of a Fortran routine. If the `intent(hide)` or `optional` attribute is specified and the wrapper function is called without the callback argument, then the callback function is assumed to be found in the namespace of the F2PY generated extension module, where it can be set as a module attribute by the user.
* `aux` Defines an auxiliary C variable in the F2PY generated wrapper function. Useful for saving parameter values so that they can be accessed in initialization expressions for other variables. Note `intent(aux)` silently implies `intent(c)`.
The following rules apply:
* If none of `intent(in | inout | out | hide)` are specified, `intent(in)` is assumed.
+ `intent(in,inout)` is `intent(in)`;
+ `intent(in,hide)` or `intent(inout,hide)` is `intent(hide)`;
+ `intent(out)` is `intent(out,hide)` unless `intent(in)` or `intent(inout)` is specified.
* If `intent(copy)` or `intent(overwrite)` is used, then an additional optional argument named `overwrite_<argument name>` is introduced, with a default value of 0 or 1, respectively.
+ `intent(inout,inplace)` is `intent(inplace)`;
+ `intent(in,inplace)` is `intent(inplace)`;
+ `intent(hide)` disables `optional` and `required`.
`check([<C-booleanexpr>])` Performs a consistency check on the arguments by evaluating `<C-booleanexpr>`; if `<C-booleanexpr>` returns 0, an exception is raised. Note If `check(..)` is not used then F2PY automatically generates a few standard checks (e.g. in the case of an array argument, it checks for the proper shape and size). Use `check()` to disable the checks generated by F2PY.
`depend([<names>])` This declares that the corresponding argument depends on the values of variables in the `<names>` list.
For example, `<init_expr>` may use the values of other arguments. Using the information given by `depend(..)` attributes, F2PY ensures that arguments are initialized in a proper order. If the `depend(..)` attribute is not used then F2PY determines dependence relations automatically. Use `depend()` to disable the dependence relations generated by F2PY. When you edit dependence relations that were initially generated by F2PY, be careful not to break the dependence relations of other relevant variables. Another thing to watch out for is cyclic dependencies. F2PY is able to detect cyclic dependencies when constructing wrappers and complains if any are found.
`allocatable` The corresponding variable is a Fortran 90 allocatable array defined as Fortran 90 module data.
`external` The corresponding argument is a function provided by the user. The signature of this call-back function can be defined
* in a `__user__` module block,
* or by a demonstrative (or real, if the signature file is real Fortran code) call in the `<other statements>` block.
For example, F2PY generates from:
```
external cb_sub, cb_fun
integer n
real a(n),r
call cb_sub(a,n)
r = cb_fun(4)
```
the following call-back signatures:
```
subroutine cb_sub(a,n)
    real dimension(n) :: a
    integer optional,check(len(a)>=n),depend(a) :: n=len(a)
end subroutine cb_sub
function cb_fun(e_4_e) result (r)
    integer :: e_4_e
    real :: r
end function cb_fun
```
The corresponding user-provided Python functions are then:
```
def cb_sub(a,[n]):
    ...
    return
def cb_fun(e_4_e):
    ...
    return r
```
See also the `intent(callback)` attribute.
`parameter` This indicates that the corresponding variable is a parameter and it must have a fixed value. F2PY replaces all parameter occurrences by their corresponding values.
### Extensions
#### F2PY directives
The F2PY directives allow using F2PY signature file constructs in Fortran 77/90 source codes.
With this feature one can (almost) completely skip the intermediate signature file generation and apply F2PY directly to Fortran source codes. F2PY directives have the following form: ``` <comment char>f2py ... ``` where allowed comment characters for fixed and free format Fortran codes are `cC*!#` and `!`, respectively. Everything that follows `<comment char>f2py` is ignored by a compiler but read by F2PY as a normal non-comment Fortran line: Note When F2PY finds a line with F2PY directive, the directive is first replaced by 5 spaces and then the line is reread. For fixed format Fortran codes, `<comment char>` must be at the first column of a file, of course. For free format Fortran codes, the F2PY directives can appear anywhere in a file. #### C expressions C expressions are used in the following parts of signature files: * `<init_expr>` for variable initialization; * `<C-booleanexpr>` of the `check` attribute; * `<arrayspec>` of the `dimension` attribute; * `callstatement` statement, here also a C multi-line block can be used. A C expression may contain: * standard C constructs; * functions from `math.h` and `Python.h`; * variables from the argument list, presumably initialized before according to given dependence relations; * the following CPP macros: + `rank(<name>)` Returns the rank of an array `<name>`. + `shape(<name>,<n>)` Returns the `<n>`-th dimension of an array `<name>`. + `len(<name>)` Returns the length of an array `<name>`. + `size(<name>)` Returns the size of an array `<name>`. + `slen(<name>)` Returns the length of a string `<name>`. For initializing an array `<array name>`, F2PY generates a loop over all indices and dimensions that executes the following pseudo-statement: ``` <array name>(_i[0],_i[1],...) = <init_expr>; ``` where `_i[<i>]` refers to the `<i>`-th index value and that runs from `0` to `shape(<array name>,<i>)-1`. For example, a function `myrange(n)` generated from the following signature ``` subroutine myrange(a,n) fortranname ! 
myrange is a dummy wrapper integer intent(in) :: n real*8 intent(c,out),dimension(n),depend(n) :: a = _i[0] end subroutine myrange ``` is equivalent to `numpy.arange(n,dtype=float)`. Warning F2PY may also lower case in C expressions when scanning Fortran codes (see the `--[no]-lower` option).
#### Multi-line blocks
A multi-line block starts with `'''` (triple single-quotes) and ends with `'''` on some *strictly* subsequent line. Multi-line blocks can be used only within .pyf files. The contents of a multi-line block can be arbitrary (except that it cannot contain `'''`) and no transformations (e.g. lowering of cases) are applied to it. Currently, multi-line blocks can be used in the following constructs:
* as a C expression of the `callstatement` statement;
* as a C type specification of the `callprotoargument` statement;
* as a C code block of the `usercode` statement;
* as a list of C arrays of the `pymethoddef` statement;
* as a documentation string.
© 2005–2022 NumPy Developers Licensed under the 3-clause BSD License. <https://numpy.org/doc/1.23/f2py/signature-file.html>
F2PY and Build Systems
======================
In this section we will cover the various popular build systems and their usage with `f2py`. Note **As of November 2021** The default build system for `F2PY` has traditionally been through the enhanced `numpy.distutils` module. This module is based on `distutils`, which will be removed in `Python 3.12.0` in **October 2023**; `setuptools` does not have support for Fortran or `F2PY` and it is unclear if it will be supported in the future. Alternative methods are thus increasingly more important.
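Whichever build system is chosen, the wrapper sources themselves come from the `f2py` CLI. A minimal sketch of the generation-only step (no compiler is invoked without `-c`); the file `add.f` and module name `add` here mirror the walkthrough later in this document and are assumed to exist:

```shell
# Generate wrapper sources only -- no -c, so nothing is compiled.
# Produces addmodule.c (and possibly add-f2pywrappers.f) in the
# current directory, ready to be fed to a build system.
python -m numpy.f2py -m add add.f
```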
Basic Concepts
--------------
Building an extension module which includes Python and Fortran consists of:
* Fortran source(s)
* One or more generated files from `f2py`
+ A `C` wrapper file is always created
+ Code with modules requires an additional `.f90` wrapper
+ Code with functions generates an additional `.f` wrapper
* `fortranobject.{c,h}`
+ Distributed with `numpy`
+ Can be queried via `python -c "import numpy.f2py; print(numpy.f2py.get_include())"`
* NumPy headers
+ Can be queried via `python -c "import numpy; print(numpy.get_include())"`
* Python libraries and development headers
Broadly speaking there are three cases which arise when considering the outputs of `f2py`:
Fortran 77 programs
* Input file `blah.f`
* Generates
+ `blahmodule.c`
+ `blah-f2pywrappers.f`
When no `COMMON` blocks are present only a `C` wrapper file is generated. Wrappers are also generated to rewrite assumed shape arrays as automatic arrays.
Fortran 90 programs
* Input file `blah.f90`
* Generates:
+ `blahmodule.c`
+ `blah-f2pywrappers.f`
+ `blah-f2pywrappers2.f90`
The `f90` wrapper is used to handle code which is subdivided into modules. The `f` wrapper makes `subroutines` for `functions`. It rewrites assumed shape arrays as automatic arrays.
Signature files
* Input file `blah.pyf`
* Generates:
+ `blahmodule.c`
+ `blah-f2pywrappers2.f90` (occasionally)
+ `blah-f2pywrappers.f` (occasionally)
Signature files `.pyf` do not signal their language standard via the file extension; they may generate the F90 and F77 specific wrappers depending on their contents, which shifts the burden of checking for generated files onto the build system.
Note From NumPy `1.22.4` onwards, `f2py` will deterministically generate wrapper files based on the input file Fortran standard (F77 or greater). `--skip-empty-wrappers` can be passed to `f2py` to restore the previous behaviour of only generating wrappers when needed by the input.
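Both include directories listed among the requirements above can also be collected from within Python, which is convenient when templating arguments for a build system (the list itself is just an illustration):

```python
import numpy
import numpy.f2py

# NumPy headers, plus the directory shipping fortranobject.{c,h}
include_dirs = [numpy.get_include(), numpy.f2py.get_include()]
print(include_dirs)
```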
In theory keeping the above requirements in hand, any build system can be adapted to generate `f2py` extension modules. Here we will cover a subset of the more popular systems. Note `make` has no place in a modern multi-language setup, and so is not discussed further. Build Systems ------------- * [Using via `numpy.distutils`](distutils) + [Extensions to `distutils`](distutils#extensions-to-distutils) * [Using via `meson`](meson) + [Fibonacci Walkthrough (F77)](meson#fibonacci-walkthrough-f77) + [Salient points](meson#salient-points) * [Using via `cmake`](cmake) + [Fibonacci Walkthrough (F77)](cmake#fibonacci-walkthrough-f77) * [Using via `scikit-build`](skbuild) + [Fibonacci Walkthrough (F77)](skbuild#fibonacci-walkthrough-f77) © 2005–2022 NumPy Developers Licensed under the 3-clause BSD License. <https://numpy.org/doc/1.23/f2py/buildtools/index.htmlAdvanced F2PY use cases ======================= Adding user-defined functions to F2PY generated modules ------------------------------------------------------- User-defined Python C/API functions can be defined inside signature files using `usercode` and `pymethoddef` statements (they must be used inside the `python module` block). For example, the following signature file `spam.pyf` ``` ! 
-*- f90 -*- python module spam usercode ''' static char doc_spam_system[] = "Execute a shell command."; static PyObject *spam_system(PyObject *self, PyObject *args) { char *command; int sts; if (!PyArg_ParseTuple(args, "s", &command)) return NULL; sts = system(command); return Py_BuildValue("i", sts); } ''' pymethoddef ''' {"system", spam_system, METH_VARARGS, doc_spam_system}, ''' end python module spam ``` wraps the C library function `system()`: ``` f2py -c spam.pyf ``` In Python this can then be used as: ``` >>> import spam >>> status = spam.system('whoami') pearu >>> status = spam.system('blah') sh: line 1: blah: command not found ``` Adding user-defined variables ----------------------------- The following example illustrates how to add user-defined variables to a F2PY generated extension module by modifying the dictionary of a F2PY generated module. Consider the following signature file (compiled with `f2py -c var.pyf`): ``` ! -*- f90 -*- python module var usercode ''' int BAR = 5; ''' interface usercode ''' PyDict_SetItemString(d,"BAR",PyInt_FromLong(BAR)); ''' end interface end python module ``` Notice that the second `usercode` statement must be defined inside an `interface` block and the module dictionary is available through the variable `d` (see `varmodule.c` generated by `f2py var.pyf` for additional details). Usage in Python: ``` >>> import var >>> var.BAR 5 ``` Dealing with KIND specifiers ---------------------------- Currently, F2PY can handle only `<type spec>(kind=<kindselector>)` declarations where `<kindselector>` is a numeric integer (e.g. 1, 2, 4,
), but not a function call `KIND(..)` or any other expression. F2PY needs to know what would be the corresponding C type and a general solution for that would be too complicated to implement. However, F2PY provides a hook to overcome this difficulty, namely, users can define their own <Fortran type> to <C type> maps. For example, if Fortran 90 code contains: ``` REAL(kind=KIND(0.0D0)) ... ``` then create a mapping file containing a Python dictionary: ``` {'real': {'KIND(0.0D0)': 'double'}} ``` for instance. Use the `--f2cmap` command-line option to pass the file name to F2PY. By default, F2PY assumes file name is `.f2py_f2cmap` in the current working directory. More generally, the f2cmap file must contain a dictionary with items: ``` <Fortran typespec> : {<selector_expr>:<C type>} ``` that defines mapping between Fortran type: ``` <Fortran typespec>([kind=]<selector_expr>) ``` and the corresponding <C type>. The <C type> can be one of the following: ``` double float long_double char signed_char unsigned_char short unsigned_short int long long_long unsigned complex_float complex_double complex_long_double string ``` For more information, see the F2Py source code `numpy/f2py/capi_maps.py`. © 2005–2022 NumPy Developers Licensed under the 3-clause BSD License. <https://numpy.org/doc/1.23/f2py/advanced.htmlF2PY and Windows ================ Warning F2PY support for Windows is not at par with Linux support, and OS specific flags can be seen via `python -m numpy.f2py` Broadly speaking, there are two issues working with F2PY on Windows: * the lack of actively developed FOSS Fortran compilers, and, * the linking issues related to the C runtime library for building Python-C extensions. The focus of this section is to establish a guideline for developing and extending Fortran modules for Python natively, via F2PY on Windows. 
Overview
--------
From a user perspective, the most UNIX compatible Windows development environment is through emulation, either via the Windows Subsystem on Linux, or facilitated by Docker. In a similar vein, traditional virtualization methods like VirtualBox are also reasonable methods to develop UNIX tools on Windows. Native Windows support is typically stunted beyond the usage of commercial compilers. However, as of 2022, most commercial compilers have free plans which are sufficient for general use. Additionally, the Fortran language features supported by `f2py` (partial coverage of Fortran 2003) mean that newer toolchains are often not required. Briefly, then, for an end user, in order of use:
Classic Intel Compilers (commercial) These are maintained actively, though licensing restrictions may apply as further detailed in [F2PY and Windows Intel Fortran](intel#f2py-win-intel). Suitable for general use for those building native Windows programs by building off of MSVC.
MSYS2 (FOSS) In conjunction with the `mingw-w64` project, `gfortran` and `gcc` toolchains can be used to natively build Windows programs.
Windows Subsystem for Linux Assuming the usage of `gfortran`, this can be used for cross-compiling Windows applications, but is significantly more complicated.
Conda Windows support for compilers in `conda` is facilitated by pulling MSYS2 binaries, however these [are outdated](https://github.com/conda-forge/conda-forge.github.io/issues/1044), and therefore not recommended (as of 30-01-2022).
PGI Compilers (commercial) Unmaintained but sufficient if an existing license is present. Works natively, but has been superseded by the Nvidia HPC SDK, with no [native Windows support](https://developer.nvidia.com/nvidia-hpc-sdk-downloads#collapseFour).
Cygwin (FOSS) Can also be used for `gfortran`. However, the POSIX API compatibility layer provided by Cygwin is meant to compile UNIX software on Windows, instead of building native Windows programs.
This means cross compilation is required. The compilation suites described so far are compatible with the [now deprecated](https://github.com/numpy/numpy/pull/20875) `np.distutils` build backend which is exposed by the F2PY CLI. Additional build system usage (`meson`, `cmake`) as described in [F2PY and Build Systems](../buildtools/index#f2py-bldsys) allows for a more flexible set of compiler backends including: Intel oneAPI The newer Intel compilers (`ifx`, `icx`) are based on LLVM and can be used for native compilation. Licensing requirements can be onerous. Classic Flang (FOSS) The backbone of the PGI compilers were cannibalized to form the “classic” or [legacy version of Flang](https://github.com/flang-compiler/flang). This may be compiled from source and used natively. [LLVM Flang](https://releases.llvm.org/11.0.0/tools/flang/docs/ReleaseNotes.html) does not support Windows yet (30-01-2022). LFortran (FOSS) One of two LLVM based compilers. Not all of F2PY supported Fortran can be compiled yet (30-01-2022) but uses MSVC for native linking. 
Baseline
--------
For this document we will assume the following basic tools:
* The IDE being considered is the community supported [Microsoft Visual Studio Code](https://code.visualstudio.com/Download)
* The terminal being used is the [Windows Terminal](https://www.microsoft.com/en-us/p/windows-terminal/9n0dx20hk701?activetab=pivot:overviewtab)
* The shell environment is assumed to be [Powershell 7.x](https://docs.microsoft.com/en-us/powershell/scripting/install/installing-powershell-on-windows?view=powershell-7.1)
* Python 3.10 from [the Microsoft Store](https://www.microsoft.com/en-us/p/python-310/9pjpw5ldxlz5), which can be tested with `Get-Command python.exe` resolving to `C:\Users\$USERNAME\AppData\Local\Microsoft\WindowsApps\python.exe`
* The Microsoft Visual C++ (MSVC) toolset
With this baseline configuration, we will further consider a configuration matrix as follows (exe implies a Windows installer):
| **Fortran Compiler** | **C/C++ Compiler** | **Source** |
| --- | --- | --- |
| Intel Fortran | MSVC / ICC | exe |
| GFortran | MSVC | MSYS2/exe |
| GFortran | GCC | WSL |
| Classic Flang | MSVC | Source / Conda |
| Anaconda GFortran | Anaconda GCC | exe |
For an understanding of the key issues motivating the need for such a matrix, [Pauli Virtanen’s in-depth post on wheels with Fortran for Windows](https://pav.iki.fi/blog/2017-10-08/pywingfortran.html#building-python-wheels-with-fortran-for-windows) is an excellent resource. An entertaining explanation of an application binary interface (ABI) can be found in this post by [<NAME>](https://thephd.dev/binary-banshees-digital-demons-abi-c-c++-help-me-god-please).
Powershell and MSVC
-------------------
MSVC is installed either via the Visual Studio Bundle or the lighter (preferred) [Build Tools for Visual Studio](https://visualstudio.microsoft.com/downloads/#build-tools-for-visual-studio-2019) with the `Desktop development with C++` setting.
Note This can take a significant amount of time as it includes a download of around 2GB and requires a restart. It is possible to use the resulting environment from a [standard command prompt](https://docs.microsoft.com/en-us/cpp/build/building-on-the-command-line?view=msvc-160#developer_command_file_locations). However, it is more pleasant to use a [developer powershell](https://docs.microsoft.com/en-us/visualstudio/ide/reference/command-prompt-powershell?view=vs-2019), with a [profile in Windows Terminal](https://techcommunity.microsoft.com/t5/microsoft-365-pnp-blog/add-developer-powershell-and-developer-command-prompt-for-visual/ba-p/2243078). This can be achieved by adding the following block to the `profiles->list` section of the JSON file used to configure Windows Terminal (see `Settings->Open JSON file`): ``` { "name": "Developer PowerShell for VS 2019", "commandline": "powershell.exe -noe -c \"$vsPath = (Join-Path ${env:ProgramFiles(x86)} -ChildPath 'Microsoft Visual Studio\\2019\\BuildTools'); Import-Module (Join-Path $vsPath 'Common7\\Tools\\Microsoft.VisualStudio.DevShell.dll'); Enter-VsDevShell -VsInstallPath $vsPath -SkipAutomaticLocation\"", "icon": "ms-appx:///ProfileIcons/{61c54bbd-c2c6-5271-96e7-009a87ff44bf}.png" } ``` Now, testing the compiler toolchain could look like: ``` # New Windows Developer Powershell instance / tab # or $vsPath = (Join-Path ${env:ProgramFiles(x86)} -ChildPath 'Microsoft Visual Studio\\2019\\BuildTools'); Import-Module (Join-Path $vsPath 'Common7\\Tools\\Microsoft.VisualStudio.DevShell.dll'); Enter-VsDevShell -VsInstallPath $vsPath -SkipAutomaticLocation ********************************************************************** ** Visual Studio 2019 Developer PowerShell v16.11.9 ** Copyright (c) 2021 Microsoft Corporation ********************************************************************** cd $HOME echo "#include<stdio.h>" > blah.cpp; echo 'int main(){printf("Hi");return 1;}' >> blah.cpp cl blah.cpp .\blah.exe # Hi rm 
blah.cpp ``` It is also possible to check that the environment has been updated correctly with `$ENV:PATH`. Windows Store Python Paths -------------------------- The MS Windows version of Python discussed here installs to a non-deterministic path using a hash. This needs to be added to the `PATH` variable. ``` $Env:Path += ";$env:LOCALAPPDATA\packages\pythonsoftwarefoundation.python.3.10_qbz5n2kfra8p0\localcache\local-packages\python310\scripts" ``` * [F2PY and Windows Intel Fortran](intel) * [F2PY and Windows with MSYS2](msys2) * [F2PY and Conda on Windows](conda) * [F2PY and PGI Fortran on Windows](pgi) © 2005–2022 NumPy Developers Licensed under the 3-clause BSD License. <https://numpy.org/doc/1.23/f2py/windows/index.htmlF2PY examples ============= Below are some examples of F2PY usage. This list is not comprehensive, but can be used as a starting point when wrapping your own code. F2PY walkthrough: a basic extension module ------------------------------------------ ### Creating source for a basic extension module Consider the following subroutine, contained in a file named `add.f` ``` C SUBROUTINE ZADD(A,B,C,N) C DOUBLE COMPLEX A(*) DOUBLE COMPLEX B(*) DOUBLE COMPLEX C(*) INTEGER N DO 20 J = 1, N C(J) = A(J)+B(J) 20 CONTINUE END ``` This routine simply adds the elements in two contiguous arrays and places the result in a third. The memory for all three arrays must be provided by the calling routine. A very basic interface to this routine can be automatically generated by f2py: ``` python -m numpy.f2py -m add add.f ``` This command will produce an extension module named `addmodule.c` in the current directory. This extension module can now be compiled and used from Python just like any other extension module. ### Creating a compiled extension module Note This usage depends heavily on `numpy.distutils`, see [F2PY and Build Systems](buildtools/index#f2py-bldsys) for more details. 
You can also get f2py to both compile `add.f` along with the produced extension module leaving only a shared-library extension file that can be imported from Python: ``` python -m numpy.f2py -c -m add add.f ``` This command produces a Python extension module compatible with your platform. This module may then be imported from Python. It will contain a method for each subroutine in `add`. The docstring of each method contains information about how the module method may be called: ``` >>> import add >>> print(add.zadd.__doc__) zadd(a,b,c,n) Wrapper for ``zadd``. Parameters ---------- a : input rank-1 array('D') with bounds (*) b : input rank-1 array('D') with bounds (*) c : input rank-1 array('D') with bounds (*) n : input int ``` ### Improving the basic interface The default interface is a very literal translation of the Fortran code into Python. The Fortran array arguments are converted to NumPy arrays and the integer argument should be mapped to a `C` integer. The interface will attempt to convert all arguments to their required types (and shapes) and issue an error if unsuccessful. However, because `f2py` knows nothing about the semantics of the arguments (such that `C` is an output and `n` should really match the array sizes), it is possible to abuse this function in ways that can cause Python to crash. For example: ``` >>> add.zadd([1, 2, 3], [1, 2], [3, 4], 1000) ``` will cause a program crash on most systems. Under the hood, the lists are being converted to arrays but then the underlying `add` function is told to cycle way beyond the borders of the allocated memory. In order to improve the interface, `f2py` supports directives. This is accomplished by constructing a signature file. It is usually best to start from the interfaces that `f2py` produces in that file, which correspond to the default behavior. 
To get `f2py` to generate the interface file use the `-h` option: ``` python -m numpy.f2py -h add.pyf -m add add.f ``` This command creates the `add.pyf` file in the current directory. The section of this file corresponding to `zadd` is: ``` subroutine zadd(a,b,c,n) ! in :add:add.f double complex dimension(*) :: a double complex dimension(*) :: b double complex dimension(*) :: c integer :: n end subroutine zadd ``` By placing intent directives and checking code, the interface can be cleaned up quite a bit so the Python module method is both easier to use and more robust to malformed inputs. ``` subroutine zadd(a,b,c,n) ! in :add:add.f double complex dimension(n) :: a double complex dimension(n) :: b double complex intent(out),dimension(n) :: c integer intent(hide),depend(a) :: n=len(a) end subroutine zadd ``` The intent directive, intent(out) is used to tell f2py that `c` is an output variable and should be created by the interface before being passed to the underlying code. The intent(hide) directive tells f2py to not allow the user to specify the variable, `n`, but instead to get it from the size of `a`. The depend( `a` ) directive is necessary to tell f2py that the value of n depends on the input a (so that it won’t try to create the variable n until the variable a is created). After modifying `add.pyf`, the new Python module file can be generated by compiling both `add.f` and `add.pyf`: ``` python -m numpy.f2py -c add.pyf add.f ``` The new interface’s docstring is: ``` >>> import add >>> print(add.zadd.__doc__) c = zadd(a,b) Wrapper for ``zadd``. Parameters ---------- a : input rank-1 array('D') with bounds (n) b : input rank-1 array('D') with bounds (n) Returns ------- c : rank-1 array('D') with bounds (n) ``` Now, the function can be called in a much more robust way: ``` >>> add.zadd([1, 2, 3], [4, 5, 6]) array([5.+0.j, 7.+0.j, 9.+0.j]) ``` Notice the automatic conversion to the correct format that occurred. 
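For reference, the behaviour of the improved interface can be mirrored in pure NumPy. `zadd_ref` below is an illustrative stand-in for the wrapped routine, not code that `f2py` generates:

```python
import numpy as np

def zadd_ref(a, b):
    # Mirror of the improved signature: n is hidden and taken as len(a),
    # both inputs are coerced to rank-1 complex ('D') arrays, and c is
    # created by the wrapper and returned.
    a = np.asarray(a, dtype=np.complex128)
    b = np.asarray(b, dtype=np.complex128)
    n = len(a)               # intent(hide), depend(a) :: n = len(a)
    return a[:n] + b[:n]     # c = zadd(a, b), elementwise complex sum

print(zadd_ref([1, 2, 3], [4, 5, 6]))
```

Note that the list-to-complex-array coercion here is the same automatic conversion the generated wrapper performs.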
### Inserting directives in Fortran source The robust interface of the previous section can also be generated automatically by placing the variable directives as special comments in the original Fortran code. Note For projects where the Fortran code is being actively developed, this may be preferred. Thus, if the source code is modified to contain: ``` C SUBROUTINE ZADD(A,B,C,N) C CF2PY INTENT(OUT) :: C CF2PY INTENT(HIDE) :: N CF2PY DOUBLE COMPLEX :: A(N) CF2PY DOUBLE COMPLEX :: B(N) CF2PY DOUBLE COMPLEX :: C(N) DOUBLE COMPLEX A(*) DOUBLE COMPLEX B(*) DOUBLE COMPLEX C(*) INTEGER N DO 20 J = 1, N C(J) = A(J) + B(J) 20 CONTINUE END ``` Then, one can compile the extension module using: ``` python -m numpy.f2py -c -m add add.f ``` The resulting signature for the function add.zadd is exactly the same one that was created previously. If the original source code had contained `A(N)` instead of `A(*)` and so forth with `B` and `C`, then nearly the same interface can be obtained by placing the `INTENT(OUT) :: C` comment line in the source code. The only difference is that `N` would be an optional input that would default to the length of `A`. A filtering example ------------------- This example shows a function that filters a two-dimensional array of double precision floating-point numbers using a fixed averaging filter. The advantage of using Fortran to index into multi-dimensional arrays should be clear from this example. 
``` C SUBROUTINE DFILTER2D(A,B,M,N) C DOUBLE PRECISION A(M,N) DOUBLE PRECISION B(M,N) INTEGER N, M CF2PY INTENT(OUT) :: B CF2PY INTENT(HIDE) :: N CF2PY INTENT(HIDE) :: M DO 20 I = 2,M-1 DO 40 J = 2,N-1 B(I,J) = A(I,J) + & (A(I-1,J)+A(I+1,J) + & A(I,J-1)+A(I,J+1) )*0.5D0 + & (A(I-1,J-1) + A(I-1,J+1) + & A(I+1,J-1) + A(I+1,J+1))*0.25D0 40 CONTINUE 20 CONTINUE END ``` This code can be compiled and linked into an extension module named filter using: ``` python -m numpy.f2py -c -m filter filter.f ``` This will produce an extension module in the current directory with a method named `dfilter2d` that returns a filtered version of the input. `depends` keyword example -------------------------- Consider the following code, saved in the file `myroutine.f90`: ``` subroutine s(n, m, c, x) implicit none integer, intent(in) :: n, m real(kind=8), intent(out), dimension(n,m) :: x real(kind=8), intent(in) :: c(:) x = 0.0d0 x(1, 1) = c(1) end subroutine s ``` Wrapping this with `python -m numpy.f2py -c myroutine.f90 -m myroutine`, we can do the following in Python: ``` >>> import numpy as np >>> import myroutine >>> x = myroutine.s(2, 3, np.array([5, 6, 7])) >>> x array([[5., 0., 0.], [0., 0., 0.]]) ``` Now, instead of generating the extension module directly, we will create a signature file for this subroutine first. This is a common pattern for multi-step extension module generation. In this case, after running ``` python -m numpy.f2py myroutine.f90 -h myroutine.pyf ``` the following signature file is generated: ``` ! -*- f90 -*- ! Note: the context of this file is case sensitive. python module myroutine ! in interface ! in :myroutine subroutine s(n,m,c,x) ! in :myroutine:myroutine.f90 integer intent(in) :: n integer intent(in) :: m real(kind=8) dimension(:),intent(in) :: c real(kind=8) dimension(n,m),intent(out),depend(m,n) :: x end subroutine s end interface end python module myroutine ! This file was auto-generated with f2py (version:1.23.0.dev0+120.g4da01f42d). ! See: ! 
https://web.archive.org/web/20140822061353/http://cens.ioc.ee/projects/f2py2e ``` Now, if we run `python -m numpy.f2py -c myroutine.pyf myroutine.f90` we see an error; note that the signature file included a `depend(m,n)` statement for `x` which is not necessary. Indeed, editing the file above to read ``` ! -*- f90 -*- ! Note: the context of this file is case sensitive. python module myroutine ! in interface ! in :myroutine subroutine s(n,m,c,x) ! in :myroutine:myroutine.f90 integer intent(in) :: n integer intent(in) :: m real(kind=8) dimension(:),intent(in) :: c real(kind=8) dimension(n,m),intent(out) :: x end subroutine s end interface end python module myroutine ! This file was auto-generated with f2py (version:1.23.0.dev0+120.g4da01f42d). ! See: ! https://web.archive.org/web/20140822061353/http://cens.ioc.ee/projects/f2py2e ``` and running `f2py -c myroutine.pyf myroutine.f90` yields correct results. Read more --------- * [Wrapping C codes using f2py](https://scipy.github.io/old-wiki/pages/Cookbook/f2py_and_NumPy.html) * [F2py section on the SciPy Cookbook](https://scipy-cookbook.readthedocs.io/items/F2Py.html) * [F2py example: Interactive System for Ice sheet Simulation](http://websrv.cs.umt.edu/isis/index.php/F2py_example) * [“Interfacing With Other Languages” section on the SciPy Cookbook.](https://scipy-cookbook.readthedocs.io/items/idx_interfacing_with_other_languages.html) © 2005–2022 NumPy Developers Licensed under the 3-clause BSD License. <https://numpy.org/doc/1.23/f2py/f2py-examples.htmlF2PY test suite =============== F2PY’s test suite is present in the directory `numpy/f2py/tests`. Its aim is to ensure that Fortran language features are correctly translated to Python. For example, the user can specify starting and ending indices of arrays in Fortran. This behaviour is translated to the generated CPython library where the arrays strictly start from 0 index. 
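The bound translation can be sketched with a small (hypothetical) helper that maps a Fortran index with a non-default lower bound to the 0-based index seen from Python:

```python
def py_index(fortran_index, lbound):
    """Map an index of a Fortran array declared e.g. `real(8) :: a(2:5)`
    (lower bound `lbound`) to the 0-based index of the NumPy array that the
    generated wrapper exposes. Illustrative helper, not part of the f2py API."""
    return fortran_index - lbound

# A Fortran `a(2:5)` has four elements; element a(2) is exposed
# as a[0] and a(5) as a[3] in Python.
assert py_index(2, lbound=2) == 0
assert py_index(5, lbound=2) == 3
```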
The directory of the test suite looks like the following:

```
./tests/
├── __init__.py
├── src
│   ├── abstract_interface
│   ├── array_from_pyobj
│   ├── // ... several test folders
│   └── string
├── test_abstract_interface.py
├── test_array_from_pyobj.py
├── // ... several test files
├── test_symbolic.py
└── util.py
```

Files starting with `test_` contain tests for various aspects of f2py, from parsing Fortran files to checking modules’ documentation. The `src` directory contains the Fortran source files upon which we do the testing. `util.py` contains utility functions for building and importing Fortran modules during test time using a temporary location.

Adding a test
-------------

F2PY’s current test suite predates `pytest` and therefore does not use fixtures. Instead, the test files contain test classes that inherit from the `F2PyTest` class present in `util.py`.

```
class F2PyTest:
    code = None
    sources = None
    options = []
    skip = []
    only = []
    suffix = ".f"
    module = None
    module_name = None
```

This class has many helper functions for parsing and compiling test source files. Its child classes can override its `sources` data member to provide their own source files. This superclass will then compile the added source files upon object creation, and their functions will be appended to the `self.module` data member. Thus, the child classes will be able to access the Fortran functions specified in the source file by calling `self.module.[fortran_function_name]`.

### Example

Consider the following subroutines, contained in a file named `add-test.f`:

```
subroutine addb(k)
    real(8), intent(inout) :: k(:)
    k = k + 1
endsubroutine

subroutine addc(w, k)
    real(8), intent(in) :: w(:)
    real(8), intent(out) :: k(size(w))
    k = w + 1
endsubroutine
```

The first routine `addb` simply takes an array and increases its elements by 1. The second subroutine `addc` assigns a new array `k` with elements greater than the elements of the input array `w` by 1.
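Before looking at the real test, the behaviour of the two subroutines can be mirrored in pure NumPy (a sketch for illustration only; the actual test exercises the compiled module):

```python
import numpy as np

# Pure-NumPy mirror of the two Fortran subroutines from `add-test.f`
# (illustration only; the real test calls the compiled `self.module`).
def addb(k):
    """Like `subroutine addb(k)`: intent(inout), increments in place."""
    k += 1

def addc(w):
    """Like `subroutine addc(w, k)`: `k` is intent(out), so the f2py
    wrapper returns it; here we simply build and return the new array."""
    return w + 1

k = np.array([1, 2, 3], dtype=np.float64)
w = np.array([1, 2, 3], dtype=np.float64)
addb(k)
assert np.allclose(k, w + 1)
assert np.allclose(addc(w), w + 1)
```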
A test can be implemented as follows:

```
class TestAdd(util.F2PyTest):
    sources = [util.getpath("add-test.f")]

    def test_module(self):
        k = np.array([1, 2, 3], dtype=np.float64)
        w = np.array([1, 2, 3], dtype=np.float64)
        self.module.addb(k)
        assert np.allclose(k, w + 1)
        k = self.module.addc(w)
        assert np.allclose(k, w + 1)
```

We override the `sources` data member to provide the source file. The source files are compiled and the subroutines are attached to the module data member when the class object is created. The `test_module` function calls the subroutines and tests their results.

<https://numpy.org/doc/1.23/f2py/f2py-testing.html>

Using via numpy.distutils
=========================

[`numpy.distutils`](../../reference/distutils#module-numpy.distutils "numpy.distutils") is part of NumPy, and extends the standard Python `distutils` module to deal with Fortran sources and F2PY signature files, e.g. compile Fortran sources, call F2PY to construct extension modules, etc.

Example

Consider the following `setup_example.py` for the `fib` and `scalar` examples from the [Three ways to wrap - getting started](../f2py.getting-started#f2py-getting-started) section:

```
from numpy.distutils.core import Extension

ext1 = Extension(name = 'scalar',
                 sources = ['scalar.f'])
ext2 = Extension(name = 'fib2',
                 sources = ['fib2.pyf', 'fib1.f'])

if __name__ == "__main__":
    from numpy.distutils.core import setup
    setup(name = 'f2py_example',
          description = "F2PY Users Guide examples",
          author = "<NAME>",
          author_email = "<EMAIL>",
          ext_modules = [ext1, ext2]
          )
# End of setup_example.py
```

Running

```
python setup_example.py build
```

will build the two extension modules `scalar` and `fib2` to the build directory.
Extensions to `distutils`
-------------------------

[`numpy.distutils`](../../reference/distutils#module-numpy.distutils "numpy.distutils") extends `distutils` with the following features:

* The [`Extension`](../../reference/generated/numpy.distutils.core.extension#numpy.distutils.core.Extension "numpy.distutils.core.Extension") class argument `sources` may contain Fortran source files. In addition, the list `sources` may contain at most one F2PY signature file, in which case the name of the Extension module must match the `<modulename>` used in the signature file. It is assumed that an F2PY signature file contains exactly one `python module` block. If `sources` does not contain a signature file, then F2PY is used to scan the Fortran source files to construct wrappers to the Fortran codes. Additional options to the F2PY executable can be given using the [`Extension`](../../reference/generated/numpy.distutils.core.extension#numpy.distutils.core.Extension "numpy.distutils.core.Extension") class argument `f2py_options`.
* The following new `distutils` commands are defined: `build_src`, to construct Fortran wrapper extension modules, among many other things, and `config_fc`, to change Fortran compiler options. Additionally, the `build_ext` and `build_clib` commands are also enhanced to support Fortran sources. Run

  ```
  python <setup.py file> config_fc build_src build_ext --help
  ```

  to see the available options for these commands.
* When building Python packages containing Fortran sources, one can choose different Fortran compilers by using the `build_ext` command option `--fcompiler=<Vendor>`.
Here `<Vendor>` can be one of the following names (on `linux` systems):

```
absoft compaq fujitsu g95 gnu gnu95 intel intele intelem lahey nag nagfor nv pathf95 pg vast
```

See `numpy_distutils/fcompiler.py` for an up-to-date list of supported compilers for different platforms, or run

```
python -m numpy.f2py -c --help-fcompiler
```

<https://numpy.org/doc/1.23/f2py/buildtools/distutils.html>

Using via meson
===============

The key advantage gained by leveraging `meson` over the techniques described in [Using via numpy.distutils](distutils#f2py-distutils) is that this feeds into existing systems and larger projects with ease. `meson` has a rather pythonic syntax, which makes it more comfortable and amenable to extension for `python` users.

Note

Meson needs to be at least `0.46.0` in order to resolve the `python` include directories.

Fibonacci Walkthrough (F77)
---------------------------

We will need the generated `C` wrapper before we can use a general purpose build system like `meson`.
We will acquire this by:

```
python -m numpy.f2py fib1.f -m fib2
```

Now, consider the following `meson.build` file for the `fib` and `scalar` examples from the [Three ways to wrap - getting started](../f2py.getting-started#f2py-getting-started) section:

```
project('f2py_examples', 'c',
  version : '0.1',
  default_options : ['warning_level=2'])

add_languages('fortran')

py_mod = import('python')
py3 = py_mod.find_installation('python3')
py3_dep = py3.dependency()
message(py3.path())
message(py3.get_install_dir())

incdir_numpy = run_command(py3,
  ['-c', 'import os; os.chdir(".."); import numpy; print(numpy.get_include())'],
  check : true
).stdout().strip()

incdir_f2py = run_command(py3,
  ['-c', 'import os; os.chdir(".."); import numpy.f2py; print(numpy.f2py.get_include())'],
  check : true
).stdout().strip()

fibby_source = custom_target('fibbymodule.c',
  input : ['fib1.f'],  # .f so no F90 wrappers
  output : ['fibbymodule.c', 'fibby-f2pywrappers.f'],
  command : [ py3, '-m', 'numpy.f2py', '@INPUT@', '-m', 'fibby', '--lower']
)

inc_np = include_directories(incdir_numpy, incdir_f2py)

py3.extension_module('fibby',
  'fib1.f',
  fibby_source,
  incdir_f2py+'/fortranobject.c',
  include_directories: inc_np,
  dependencies : py3_dep,
  install : true)
```

At this point the build will complete, but the import will fail:

```
meson setup builddir
meson compile -C builddir
cd builddir
python -c 'import fib2'
Traceback (most recent call last):
  File "<string>", line 1, in <module>
ImportError: fib2.cpython-39-x86_64-linux-gnu.so: undefined symbol: FIB_
# Check this isn't a false positive
nm -A fib2.cpython-39-x86_64-linux-gnu.so | grep FIB_
fib2.cpython-39-x86_64-linux-gnu.so: U FIB_
```

Recall that the original example, as reproduced below, was in SCREAMCASE:

```
C FILE: FIB1.F
      SUBROUTINE FIB(A,N)
C
C     CALCULATE FIRST N FIBONACCI NUMBERS
C
      INTEGER N
      REAL*8 A(N)
      DO I=1,N
         IF (I.EQ.1) THEN
            A(I) = 0.0D0
         ELSEIF (I.EQ.2) THEN
            A(I) = 1.0D0
         ELSE
            A(I) = A(I-1) + A(I-2)
         ENDIF
      ENDDO
      END
C END FILE FIB1.F
```

With the
standard approach, the subroutine exposed to `python` is `fib` and not `FIB`. This means we have a few options. One approach (where possible) is to lowercase the original Fortran file, say with:

```
tr "[:upper:]" "[:lower:]" < fib1.f > fib1_lower.f && mv fib1_lower.f fib1.f
python -m numpy.f2py fib1.f -m fib2
meson --wipe builddir
meson compile -C builddir
cd builddir
python -c 'import fib2'
```

However, this requires the ability to modify the source, which is not always possible. The easiest way to solve this is to let `f2py` deal with it:

```
python -m numpy.f2py fib1.f -m fib2 --lower
meson --wipe builddir
meson compile -C builddir
cd builddir
python -c 'import fib2'
```

### Automating wrapper generation

A major pain point in the workflow defined above is the manual tracking of inputs; it would require even more effort to figure out the actual outputs, for reasons discussed in [F2PY and Build Systems](index#f2py-bldsys).

Note

From NumPy `1.22.4` onwards, `f2py` will deterministically generate wrapper files based on the input file Fortran standard (F77 or greater). `--skip-empty-wrappers` can be passed to `f2py` to restore the previous behaviour of only generating wrappers when needed by the input.

However, we can augment our workflow in a straightforward way to take into account files for which the outputs are known when the build system is set up.
```
project('f2py_examples', 'c',
  version : '0.1',
  default_options : ['warning_level=2'])

add_languages('fortran')

py_mod = import('python')
py3 = py_mod.find_installation('python3')
py3_dep = py3.dependency()
message(py3.path())
message(py3.get_install_dir())

incdir_numpy = run_command(py3,
  ['-c', 'import os; os.chdir(".."); import numpy; print(numpy.get_include())'],
  check : true
).stdout().strip()

incdir_f2py = run_command(py3,
  ['-c', 'import os; os.chdir(".."); import numpy.f2py; print(numpy.f2py.get_include())'],
  check : true
).stdout().strip()

fibby_source = custom_target('fibbymodule.c',
  input : ['fib1.f'],  # .f so no F90 wrappers
  output : ['fibbymodule.c', 'fibby-f2pywrappers.f'],
  command : [ py3, '-m', 'numpy.f2py', '@INPUT@', '-m', 'fibby', '--lower'])

inc_np = include_directories(incdir_numpy, incdir_f2py)

py3.extension_module('fibby',
  'fib1.f',
  fibby_source,
  incdir_f2py+'/fortranobject.c',
  include_directories: inc_np,
  dependencies : py3_dep,
  install : true)
```

This can be compiled and run as before.

```
rm -rf builddir
meson setup builddir
meson compile -C builddir
cd builddir
python -c "import numpy as np; import fibby; a = np.zeros(9); fibby.fib(a); print (a)"
# [ 0. 1. 1. 2. 3. 5. 8. 13. 21.]
```

Salient points
--------------

It is worth keeping in mind the following:

* Under `gfortran`, `meson` will default to passing `-fimplicit-none`, which differs from the standard `np.distutils` behaviour
* It is not possible to use SCREAMCASE in this context, so either the contents of the `.f` file or the generated wrapper `.c` needs to be lowercased, which can be facilitated by the `--lower` option of `F2PY`

<https://numpy.org/doc/1.23/f2py/buildtools/meson.html>

Using via cmake
===============

In terms of complexity, `cmake` falls between `make` and `meson`.
The learning curve is steeper since CMake syntax is not pythonic and is closer to `make`, with environment variables. However, the trade-off is enhanced flexibility and support for most architectures and compilers. An introduction to the syntax is out of scope for this document, but this [extensive CMake collection](https://cliutils.gitlab.io/modern-cmake/) of resources is great.

Note

`cmake` is very popular for mixed-language systems; however, support for `f2py` is not particularly native or pleasant, and a more natural approach is to consider [Using via scikit-build](skbuild#f2py-skbuild).

Fibonacci Walkthrough (F77)
---------------------------

Returning to the `fib` example from the [Three ways to wrap - getting started](../f2py.getting-started#f2py-getting-started) section:

```
C FILE: FIB1.F
      SUBROUTINE FIB(A,N)
C
C     CALCULATE FIRST N FIBONACCI NUMBERS
C
      INTEGER N
      REAL*8 A(N)
      DO I=1,N
         IF (I.EQ.1) THEN
            A(I) = 0.0D0
         ELSEIF (I.EQ.2) THEN
            A(I) = 1.0D0
         ELSE
            A(I) = A(I-1) + A(I-2)
         ENDIF
      ENDDO
      END
C END FILE FIB1.F
```

We do not need to explicitly generate the `python -m numpy.f2py fib1.f` output (`fib1module.c`), which is beneficial. With this, we can now initialize a `CMakeLists.txt` file as follows:

```
cmake_minimum_required(VERSION 3.18)  # Needed to avoid requiring embedded Python libs too

project(fibby
  VERSION 1.0
  DESCRIPTION "FIB module"
  LANGUAGES C Fortran
)

# Safety net
if(PROJECT_SOURCE_DIR STREQUAL PROJECT_BINARY_DIR)
  message( FATAL_ERROR "In-source builds not allowed.
Please make a new directory (called a build directory) and run CMake from there.\n"
  )
endif()

# Grab Python, 3.8 or newer
find_package(Python 3.8 REQUIRED
  COMPONENTS Interpreter Development.Module NumPy)

# Grab the variables from a local Python installation
# F2PY headers
execute_process(
  COMMAND "${Python_EXECUTABLE}" -c
          "import numpy.f2py; print(numpy.f2py.get_include())"
  OUTPUT_VARIABLE F2PY_INCLUDE_DIR
  OUTPUT_STRIP_TRAILING_WHITESPACE
)

# Print out the discovered paths
include(CMakePrintHelpers)
cmake_print_variables(Python_INCLUDE_DIRS)
cmake_print_variables(F2PY_INCLUDE_DIR)
cmake_print_variables(Python_NumPy_INCLUDE_DIRS)

# Common variables
set(f2py_module_name "fibby")
set(fortran_src_file "${CMAKE_SOURCE_DIR}/fib1.f")
set(f2py_module_c "${f2py_module_name}module.c")

# Generate sources
add_custom_target(
  genpyf
  DEPENDS "${CMAKE_CURRENT_BINARY_DIR}/${f2py_module_c}"
)
add_custom_command(
  OUTPUT "${CMAKE_CURRENT_BINARY_DIR}/${f2py_module_c}"
  COMMAND ${Python_EXECUTABLE} -m "numpy.f2py"
          "${fortran_src_file}" -m "fibby" --lower  # Important
  DEPENDS fib1.f  # Fortran source
)

# Set up target
Python_add_library(${CMAKE_PROJECT_NAME} MODULE WITH_SOABI
  "${CMAKE_CURRENT_BINARY_DIR}/${f2py_module_c}"  # Generated
  "${F2PY_INCLUDE_DIR}/fortranobject.c"  # From NumPy
  "${fortran_src_file}"  # Fortran source(s)
)

# Depend on sources
target_link_libraries(${CMAKE_PROJECT_NAME} PRIVATE Python::NumPy)
add_dependencies(${CMAKE_PROJECT_NAME} genpyf)
target_include_directories(${CMAKE_PROJECT_NAME} PRIVATE "${F2PY_INCLUDE_DIR}")
```

A key element of the `CMakeLists.txt` file defined above is that `add_custom_command` is used to generate the wrapper `C` files, which are then added as a dependency of the actual shared library target via an `add_custom_target` directive; this prevents the command from running every time. Additionally, the method used for obtaining the `fortranobject.c` file can also be used to grab the `numpy` headers on older `cmake` versions.
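The `execute_process()` call in the listing above simply queries the local Python installation; the same paths can be inspected interactively. `numpy.get_include()` and `numpy.f2py.get_include()` are the documented ways to locate the NumPy headers and the directory shipping `fortranobject.c` / `fortranobject.h`:

```python
import os
import numpy
import numpy.f2py

# NumPy C headers, as discovered by find_package(... NumPy) above.
numpy_inc = numpy.get_include()
# Directory containing fortranobject.c / fortranobject.h for f2py wrappers,
# as queried by the execute_process() call above.
f2py_inc = numpy.f2py.get_include()

assert os.path.isdir(numpy_inc)
assert os.path.isfile(os.path.join(f2py_inc, "fortranobject.c"))
print(numpy_inc)
print(f2py_inc)
```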
This then works in the same manner as the other modules, although the naming conventions are different and the output library is not automatically prefixed with the `cython` information.

```
ls .
# CMakeLists.txt fib1.f
cmake -S . -B build
cmake --build build
cd build
python -c "import numpy as np; import fibby; a = np.zeros(9); fibby.fib(a); print (a)"
# [ 0. 1. 1. 2. 3. 5. 8. 13. 21.]
```

This is particularly useful where an existing toolchain is already in place and `scikit-build` or other additional `python` dependencies are discouraged.

<https://numpy.org/doc/1.23/f2py/buildtools/cmake.html>

Using via scikit-build
======================

`scikit-build` provides two separate concepts geared towards the users of Python extension modules.

1. A `setuptools` replacement (legacy behaviour)
2. A series of `cmake` modules with definitions which help build Python extensions

Note

It is possible to use `scikit-build`’s `cmake` modules to [bypass the cmake setup mechanism](https://scikit-build.readthedocs.io/en/latest/cmake-modules/F2PY.html) completely, and to write targets which call `f2py -c`. This usage is **not recommended**, since the point of these build system documents is to move away from the internal `numpy.distutils` methods.

For situations where no `setuptools` replacement is required or wanted (i.e. if `wheels` are not needed), it is recommended to instead use the vanilla `cmake` setup described in [Using via cmake](cmake#f2py-cmake).

Fibonacci Walkthrough (F77)
---------------------------

We will consider the `fib` example from the [Three ways to wrap - getting started](../f2py.getting-started#f2py-getting-started) section.
``` C FILE: FIB1.F SUBROUTINE FIB(A,N) C C CALCULATE FIRST N FIBONACCI NUMBERS C INTEGER N REAL*8 A(N) DO I=1,N IF (I.EQ.1) THEN A(I) = 0.0D0 ELSEIF (I.EQ.2) THEN A(I) = 1.0D0 ELSE A(I) = A(I-1) + A(I-2) ENDIF ENDDO END C END FILE FIB1.F ``` ### `CMake` modules only Consider using the following `CMakeLists.txt`. ``` ### setup project ### cmake_minimum_required(VERSION 3.9) project(fibby VERSION 1.0 DESCRIPTION "FIB module" LANGUAGES C Fortran ) # Safety net if(PROJECT_SOURCE_DIR STREQUAL PROJECT_BINARY_DIR) message( FATAL_ERROR "In-source builds not allowed. Please make a new directory (called a build directory) and run CMake from there.\n" ) endif() # Ensure scikit-build modules if (NOT SKBUILD) find_package(PythonInterp 3.8 REQUIRED) # Kanged --> https://github.com/Kitware/torch_liberator/blob/master/CMakeLists.txt # If skbuild is not the driver; include its utilities in CMAKE_MODULE_PATH execute_process( COMMAND "${PYTHON_EXECUTABLE}" -c "import os, skbuild; print(os.path.dirname(skbuild.__file__))" OUTPUT_VARIABLE SKBLD_DIR OUTPUT_STRIP_TRAILING_WHITESPACE ) list(APPEND CMAKE_MODULE_PATH "${SKBLD_DIR}/resources/cmake") message(STATUS "Looking in ${SKBLD_DIR}/resources/cmake for CMake modules") endif() # scikit-build style includes find_package(PythonExtensions REQUIRED) # for ${PYTHON_EXTENSION_MODULE_SUFFIX} # Grab the variables from a local Python installation # NumPy headers execute_process( COMMAND "${PYTHON_EXECUTABLE}" -c "import numpy; print(numpy.get_include())" OUTPUT_VARIABLE NumPy_INCLUDE_DIRS OUTPUT_STRIP_TRAILING_WHITESPACE ) # F2PY headers execute_process( COMMAND "${PYTHON_EXECUTABLE}" -c "import numpy.f2py; print(numpy.f2py.get_include())" OUTPUT_VARIABLE F2PY_INCLUDE_DIR OUTPUT_STRIP_TRAILING_WHITESPACE ) # Prepping the module set(f2py_module_name "fibby") set(fortran_src_file "${CMAKE_SOURCE_DIR}/fib1.f") set(f2py_module_c "${f2py_module_name}module.c") # Target for enforcing dependencies add_custom_target(genpyf DEPENDS "${fortran_src_file}" 
)
add_custom_command(
  OUTPUT "${CMAKE_CURRENT_BINARY_DIR}/${f2py_module_c}"
  COMMAND ${PYTHON_EXECUTABLE} -m "numpy.f2py"
          "${fortran_src_file}" -m "fibby" --lower  # Important
  DEPENDS fib1.f  # Fortran source
)

add_library(${CMAKE_PROJECT_NAME} MODULE
  "${f2py_module_name}module.c"
  "${F2PY_INCLUDE_DIR}/fortranobject.c"
  "${fortran_src_file}")

target_include_directories(${CMAKE_PROJECT_NAME} PUBLIC
  ${F2PY_INCLUDE_DIR}
  ${NumPy_INCLUDE_DIRS}
  ${PYTHON_INCLUDE_DIRS})
set_target_properties(${CMAKE_PROJECT_NAME} PROPERTIES SUFFIX "${PYTHON_EXTENSION_MODULE_SUFFIX}")
set_target_properties(${CMAKE_PROJECT_NAME} PROPERTIES PREFIX "")

# Linker fixes
if (UNIX)
  if (APPLE)
    set_target_properties(${CMAKE_PROJECT_NAME} PROPERTIES
      LINK_FLAGS '-Wl,-dylib,-undefined,dynamic_lookup')
  else()
    set_target_properties(${CMAKE_PROJECT_NAME} PROPERTIES
      LINK_FLAGS '-Wl,--allow-shlib-undefined')
  endif()
endif()

add_dependencies(${CMAKE_PROJECT_NAME} genpyf)

install(TARGETS ${CMAKE_PROJECT_NAME} DESTINATION fibby)
```

Much of the logic is the same as in [Using via cmake](cmake#f2py-cmake); notably, however, here the appropriate module suffix is generated via `sysconfig.get_config_var("SO")`. The resulting extension can be built and loaded in the standard workflow.

```
ls .
# CMakeLists.txt fib1.f
cmake -S . -B build
cmake --build build
cd build
python -c "import numpy as np; import fibby; a = np.zeros(9); fibby.fib(a); print (a)"
# [ 0. 1. 1. 2. 3. 5. 8. 13. 21.]
```

### `setuptools` replacement

Note

**As of November 2021**

The behavior described here of driving the `cmake` build of a module is considered to be legacy behaviour and should not be depended on.

The utility of `scikit-build` lies in being able to drive the generation of more than just extension modules; in particular, a common usage pattern is the generation of Python distributables (for example for PyPI). The workflow with `scikit-build` straightforwardly supports such packaging requirements.
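As an aside, the extension-module suffix mentioned above can be inspected directly from a Python prompt; note that the `"SO"` config variable is deprecated in favour of `"EXT_SUFFIX"` (a sketch for illustration):

```python
import sysconfig

# The platform-specific extension-module suffix that the build scripts
# resolve, e.g. ".cpython-39-x86_64-linux-gnu.so" on Linux.
# "SO" is deprecated; "EXT_SUFFIX" is its modern replacement.
suffix = sysconfig.get_config_var("EXT_SUFFIX")
print(suffix)
```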
Consider augmenting the project with a `setup.py` as defined:

```
from skbuild import setup

setup(
    name="fibby",
    version="0.0.1",
    description="a minimal example package (fortran version)",
    license="MIT",
    packages=['fibby'],
    python_requires=">=3.7",
)
```

Along with a commensurate `pyproject.toml`:

```
[build-system]
requires = ["setuptools>=42", "wheel", "scikit-build", "cmake>=3.9", "numpy>=1.21"]
build-backend = "setuptools.build_meta"
```

Together these can build the extension using `cmake` in tandem with other standard `setuptools` outputs. Running `cmake` through `setup.py` is mostly used when it is necessary to integrate with extension modules not built with `cmake`.

```
ls .
# CMakeLists.txt fib1.f pyproject.toml setup.py
python setup.py build_ext --inplace
python -c "import numpy as np; import fibby.fibby; a = np.zeros(9); fibby.fibby.fib(a); print (a)"
# [ 0. 1. 1. 2. 3. 5. 8. 13. 21.]
```

Note that we have modified the path to the module, since `--inplace` places the extension module in a subfolder.

<https://numpy.org/doc/1.23/f2py/buildtools/skbuild.html>

F2PY and Windows Intel Fortran
==============================

As of NumPy 1.23, only the classic Intel compilers (`ifort`) are supported.

Note

The licensing restrictions for beta software [have been relaxed](https://www.intel.com/content/www/us/en/developer/articles/release-notes/oneapi-fortran-compiler-release-notes.html) during the transition to the LLVM backed `ifx/icc` family of compilers. However, this document does not endorse the usage of Intel in downstream projects due to the issues pertaining to [disassembly of components and liability](https://software.sintel.com/content/www/us/en/develop/articles/end-user-license-agreement.html).

Neither the Python Intel installation nor the `Classic Intel C/C++ Compiler` is required.
* The [Intel Fortran Compilers](https://www.intel.com/content/www/us/en/developer/articles/tool/oneapi-standalone-components.html#inpage-nav-6-1) come in a combined installer providing both Classic and Beta versions; these also take around a gigabyte and a half or so.

We will consider the classic example of the generation of Fibonacci numbers, `fib1.f`, given by:

```
C FILE: FIB1.F
      SUBROUTINE FIB(A,N)
C
C     CALCULATE FIRST N FIBONACCI NUMBERS
C
      INTEGER N
      REAL*8 A(N)
      DO I=1,N
         IF (I.EQ.1) THEN
            A(I) = 0.0D0
         ELSEIF (I.EQ.2) THEN
            A(I) = 1.0D0
         ELSE
            A(I) = A(I-1) + A(I-2)
         ENDIF
      ENDDO
      END
C END FILE FIB1.F
```

For `cmd.exe` fans, using the Intel oneAPI command prompt is the easiest approach, as it loads the required environment for both `ifort` and `msvc`. Helper batch scripts are also provided.

```
# cmd.exe
"C:\Program Files (x86)\Intel\oneAPI\setvars.bat"
python -m numpy.f2py -c fib1.f -m fib1
python -c "import fib1; import numpy as np; a=np.zeros(8); fib1.fib(a); print(a)"
```

Powershell usage is a little less pleasant, and this configuration now works with MSVC as:

```
# Powershell
python -m numpy.f2py -c fib1.f -m fib1 --f77exec='C:\Program Files (x86)\Intel\oneAPI\compiler\latest\windows\bin\intel64\ifort.exe' --f90exec='C:\Program Files (x86)\Intel\oneAPI\compiler\latest\windows\bin\intel64\ifort.exe' -L'C:\Program Files (x86)\Intel\oneAPI\compiler\latest\windows\compiler\lib\ia32'
python -c "import fib1; import numpy as np; a=np.zeros(8); fib1.fib(a); print(a)"
# Alternatively, set environment and reload Powershell in one line
cmd.exe /k '"C:\Program Files (x86)\Intel\oneAPI\setvars.bat" && powershell'
python -m numpy.f2py -c fib1.f -m fib1
python -c "import fib1; import numpy as np; a=np.zeros(8); fib1.fib(a); print(a)"
```

Note that the actual path to your local installation of `ifort` may vary, and the command above will need to be updated accordingly.
<https://numpy.org/doc/1.23/f2py/windows/intel.html>

F2PY and Windows with MSYS2
===========================

Follow the standard [installation instructions](https://www.msys2.org/). Then, to grab the requisite Fortran compiler with MSVC:

```
# Assuming a fresh install
pacman -Syu  # Restart the terminal
pacman -Su   # Update packages
# Get the toolchains
pacman -S --needed base-devel gcc-fortran
pacman -S mingw-w64-x86_64-toolchain
```

<https://numpy.org/doc/1.23/f2py/windows/msys2.html>

F2PY and Conda on Windows
=========================

As a convenience measure, we will additionally assume the existence of `scoop`, which can be used to install tools without administrative access.

```
Invoke-Expression (New-Object System.Net.WebClient).DownloadString('https://get.scoop.sh')
```

Now we will set up a `conda` environment.

```
scoop install miniconda3
# For conda activate / deactivate in powershell
conda install -n root -c pscondaenvs pscondaenvs
Powershell -c Set-ExecutionPolicy -ExecutionPolicy RemoteSigned -Scope CurrentUser
conda init powershell
# Open a new shell for the rest
```

`conda` pulls packages from `msys2`; however, the UX is sufficiently different to warrant a separate discussion.

Warning

As of 30-01-2022, the [MSYS2 binaries](https://github.com/conda-forge/conda-forge.github.io/issues/1044) shipped with `conda` are **outdated** and this approach is **not preferred**.

<https://numpy.org/doc/1.23/f2py/windows/conda.html>

F2PY and PGI Fortran on Windows
===============================

A variant of the PGI compilers is part of the so-called “classic” Flang; however, classic Flang requires a custom LLVM and compilation from sources.

Warning

Since the proprietary compilers are no longer available for usage, they are not recommended and will not be ported to the new `f2py` CLI.
Note

As of 29-01-2022, [PGI compiler toolchains](https://www.pgroup.com/index.html) have been superseded by the Nvidia HPC SDK, with no [native Windows support](https://developer.nvidia.com/nvidia-hpc-sdk-downloads#collapseFour).

<https://numpy.org/doc/1.23/f2py/windows/pgi.html>

numpy.printoptions
==================

numpy.printoptions(*args, **kwargs) [[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/core/arrayprint.py#L334-L366)

Context manager for setting print options.

Set print options for the scope of the `with` block, and restore the old options at the end. See [`set_printoptions`](numpy.set_printoptions#numpy.set_printoptions "numpy.set_printoptions") for the full description of available options.

See also

[`set_printoptions`](numpy.set_printoptions#numpy.set_printoptions "numpy.set_printoptions"), [`get_printoptions`](numpy.get_printoptions#numpy.get_printoptions "numpy.get_printoptions")

#### Examples

```
>>> from numpy.testing import assert_equal
>>> with np.printoptions(precision=2):
...     np.array([2.0]) / 3
array([0.67])
```

The `as`-clause of the `with`-statement gives the current print options:

```
>>> with np.printoptions(precision=2) as opts:
...     assert_equal(opts, np.get_printoptions())
```

<https://numpy.org/doc/1.23/reference/generated/numpy.printoptions.html>

numpy.ndarray.base
==================

attribute

ndarray.base

Base object if memory is from some other object.

#### Examples

The base of an array that owns its memory is None:

```
>>> x = np.array([1,2,3,4])
>>> x.base is None
True
```

Slicing creates a view, whose memory is shared with x:

```
>>> y = x[2:]
>>> y.base is x
True
```
<https://numpy.org/doc/1.23/reference/generated/numpy.ndarray.base.htmlnumpy.shares_memory ==================== numpy.shares_memory(*a*, *b*, */*, *max_work=None*) Determine if two arrays share memory. Warning This function can be exponentially slow for some inputs, unless `max_work` is set to a finite number or `MAY_SHARE_BOUNDS`. If in doubt, use [`numpy.may_share_memory`](numpy.may_share_memory#numpy.may_share_memory "numpy.may_share_memory") instead. Parameters **a, b**ndarray Input arrays **max_work**int, optional Effort to spend on solving the overlap problem (maximum number of candidate solutions to consider). The following special values are recognized: max_work=MAY_SHARE_EXACT (default) The problem is solved exactly. In this case, the function returns True only if there is an element shared between the arrays. Finding the exact solution may take extremely long in some cases. max_work=MAY_SHARE_BOUNDS Only the memory bounds of a and b are checked. Returns **out**bool Raises numpy.TooHardError Exceeded max_work. See also [`may_share_memory`](numpy.may_share_memory#numpy.may_share_memory "numpy.may_share_memory") #### Examples ``` >>> x = np.array([1, 2, 3, 4]) >>> np.shares_memory(x, np.array([5, 6, 7])) False >>> np.shares_memory(x[::2], x) True >>> np.shares_memory(x[::2], x[1::2]) False ``` Checking whether two arrays share memory is NP-complete, and runtime may increase exponentially in the number of dimensions. Hence, `max_work` should generally be set to a finite number, as it is possible to construct examples that take extremely long to run: ``` >>> from numpy.lib.stride_tricks import as_strided >>> x = np.zeros([192163377], dtype=np.int8) >>> x1 = as_strided(x, strides=(36674, 61119, 85569), shape=(1049, 1049, 1049)) >>> x2 = as_strided(x[64023025:], strides=(12223, 12224, 1), shape=(1049, 1049, 1)) >>> np.shares_memory(x1, x2, max_work=1000) Traceback (most recent call last): ... 
numpy.TooHardError: Exceeded max_work ``` Running `np.shares_memory(x1, x2)` without `max_work` set takes around 1 minute for this case. It is possible to find problems that take still significantly longer. © 2005–2022 NumPy Developers Licensed under the 3-clause BSD License. <https://numpy.org/doc/1.23/reference/generated/numpy.shares_memory.htmlnumpy.matrix ============ *class*numpy.matrix(*data*, *dtype=None*, *copy=True*)[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/__init__.py) Note It is no longer recommended to use this class, even for linear algebra. Instead use regular arrays. The class may be removed in the future. Returns a matrix from an array-like object, or from a string of data. A matrix is a specialized 2-D array that retains its 2-D nature through operations. It has certain special operators, such as `*` (matrix multiplication) and `**` (matrix power). Parameters **data**array_like or string If [`data`](numpy.matrix.data#numpy.matrix.data "numpy.matrix.data") is a string, it is interpreted as a matrix with commas or spaces separating columns, and semicolons separating rows. **dtype**data-type Data-type of the output matrix. **copy**bool If [`data`](numpy.matrix.data#numpy.matrix.data "numpy.matrix.data") is already an [`ndarray`](numpy.ndarray#numpy.ndarray "numpy.ndarray"), then this flag determines whether the data is copied (the default), or whether a view is constructed. See also [`array`](numpy.array#numpy.array "numpy.array") #### Examples ``` >>> a = np.matrix('1 2; 3 4') >>> a matrix([[1, 2], [3, 4]]) ``` ``` >>> np.matrix([[1, 2], [3, 4]]) matrix([[1, 2], [3, 4]]) ``` Attributes [`A`](numpy.matrix.a#numpy.matrix.A "numpy.matrix.A") Return `self` as an [`ndarray`](numpy.ndarray#numpy.ndarray "numpy.ndarray") object. [`A1`](numpy.matrix.a1#numpy.matrix.A1 "numpy.matrix.A1") Return `self` as a flattened [`ndarray`](numpy.ndarray#numpy.ndarray "numpy.ndarray"). 
[`H`](numpy.matrix.h#numpy.matrix.H "numpy.matrix.H") Returns the (complex) conjugate transpose of `self`. [`I`](numpy.matrix.i#numpy.matrix.I "numpy.matrix.I") Returns the (multiplicative) inverse of invertible `self`. [`T`](numpy.matrix.t#numpy.matrix.T "numpy.matrix.T") Returns the transpose of the matrix. [`base`](numpy.matrix.base#numpy.matrix.base "numpy.matrix.base") Base object if memory is from some other object. [`ctypes`](numpy.matrix.ctypes#numpy.matrix.ctypes "numpy.matrix.ctypes") An object to simplify the interaction of the array with the ctypes module. [`data`](numpy.matrix.data#numpy.matrix.data "numpy.matrix.data") Python buffer object pointing to the start of the array’s data. [`dtype`](numpy.dtype#numpy.dtype "numpy.dtype") Data-type of the array’s elements. [`flags`](numpy.matrix.flags#numpy.matrix.flags "numpy.matrix.flags") Information about the memory layout of the array. [`flat`](numpy.matrix.flat#numpy.matrix.flat "numpy.matrix.flat") A 1-D iterator over the array. [`imag`](numpy.imag#numpy.imag "numpy.imag") The imaginary part of the array. [`itemsize`](numpy.matrix.itemsize#numpy.matrix.itemsize "numpy.matrix.itemsize") Length of one array element in bytes. [`nbytes`](numpy.matrix.nbytes#numpy.matrix.nbytes "numpy.matrix.nbytes") Total bytes consumed by the elements of the array. [`ndim`](numpy.matrix.ndim#numpy.matrix.ndim "numpy.matrix.ndim") Number of array dimensions. [`real`](numpy.real#numpy.real "numpy.real") The real part of the array. [`shape`](numpy.shape#numpy.shape "numpy.shape") Tuple of array dimensions. [`size`](numpy.matrix.size#numpy.matrix.size "numpy.matrix.size") Number of elements in the array. [`strides`](numpy.matrix.strides#numpy.matrix.strides "numpy.matrix.strides") Tuple of bytes to step in each dimension when traversing an array. #### Methods | | | | --- | --- | | [`all`](numpy.matrix.all#numpy.matrix.all "numpy.matrix.all")([axis, out]) | Test whether all matrix elements along a given axis evaluate to True. 
| | [`any`](numpy.matrix.any#numpy.matrix.any "numpy.matrix.any")([axis, out]) | Test whether any array element along a given axis evaluates to True. | | [`argmax`](numpy.matrix.argmax#numpy.matrix.argmax "numpy.matrix.argmax")([axis, out]) | Indexes of the maximum values along an axis. | | [`argmin`](numpy.matrix.argmin#numpy.matrix.argmin "numpy.matrix.argmin")([axis, out]) | Indexes of the minimum values along an axis. | | [`argpartition`](numpy.matrix.argpartition#numpy.matrix.argpartition "numpy.matrix.argpartition")(kth[, axis, kind, order]) | Returns the indices that would partition this array. | | [`argsort`](numpy.matrix.argsort#numpy.matrix.argsort "numpy.matrix.argsort")([axis, kind, order]) | Returns the indices that would sort this array. | | [`astype`](numpy.matrix.astype#numpy.matrix.astype "numpy.matrix.astype")(dtype[, order, casting, subok, copy]) | Copy of the array, cast to a specified type. | | [`byteswap`](numpy.matrix.byteswap#numpy.matrix.byteswap "numpy.matrix.byteswap")([inplace]) | Swap the bytes of the array elements | | [`choose`](numpy.matrix.choose#numpy.matrix.choose "numpy.matrix.choose")(choices[, out, mode]) | Use an index array to construct a new array from a set of choices. | | [`clip`](numpy.matrix.clip#numpy.matrix.clip "numpy.matrix.clip")([min, max, out]) | Return an array whose values are limited to `[min, max]`. | | [`compress`](numpy.matrix.compress#numpy.matrix.compress "numpy.matrix.compress")(condition[, axis, out]) | Return selected slices of this array along given axis. | | [`conj`](numpy.matrix.conj#numpy.matrix.conj "numpy.matrix.conj")() | Complex-conjugate all elements. | | [`conjugate`](numpy.matrix.conjugate#numpy.matrix.conjugate "numpy.matrix.conjugate")() | Return the complex conjugate, element-wise. | | [`copy`](numpy.matrix.copy#numpy.matrix.copy "numpy.matrix.copy")([order]) | Return a copy of the array. 
| | [`cumprod`](numpy.matrix.cumprod#numpy.matrix.cumprod "numpy.matrix.cumprod")([axis, dtype, out]) | Return the cumulative product of the elements along the given axis. | | [`cumsum`](numpy.matrix.cumsum#numpy.matrix.cumsum "numpy.matrix.cumsum")([axis, dtype, out]) | Return the cumulative sum of the elements along the given axis. | | [`diagonal`](numpy.matrix.diagonal#numpy.matrix.diagonal "numpy.matrix.diagonal")([offset, axis1, axis2]) | Return specified diagonals. | | [`dump`](numpy.matrix.dump#numpy.matrix.dump "numpy.matrix.dump")(file) | Dump a pickle of the array to the specified file. | | [`dumps`](numpy.matrix.dumps#numpy.matrix.dumps "numpy.matrix.dumps")() | Returns the pickle of the array as a string. | | [`fill`](numpy.matrix.fill#numpy.matrix.fill "numpy.matrix.fill")(value) | Fill the array with a scalar value. | | [`flatten`](numpy.matrix.flatten#numpy.matrix.flatten "numpy.matrix.flatten")([order]) | Return a flattened copy of the matrix. | | [`getA`](numpy.matrix.geta#numpy.matrix.getA "numpy.matrix.getA")() | Return `self` as an [`ndarray`](numpy.ndarray#numpy.ndarray "numpy.ndarray") object. | | [`getA1`](numpy.matrix.geta1#numpy.matrix.getA1 "numpy.matrix.getA1")() | Return `self` as a flattened [`ndarray`](numpy.ndarray#numpy.ndarray "numpy.ndarray"). | | [`getH`](numpy.matrix.geth#numpy.matrix.getH "numpy.matrix.getH")() | Returns the (complex) conjugate transpose of `self`. | | [`getI`](numpy.matrix.geti#numpy.matrix.getI "numpy.matrix.getI")() | Returns the (multiplicative) inverse of invertible `self`. | | [`getT`](numpy.matrix.gett#numpy.matrix.getT "numpy.matrix.getT")() | Returns the transpose of the matrix. | | [`getfield`](numpy.matrix.getfield#numpy.matrix.getfield "numpy.matrix.getfield")(dtype[, offset]) | Returns a field of the given array as a certain type. | | [`item`](numpy.matrix.item#numpy.matrix.item "numpy.matrix.item")(*args) | Copy an element of an array to a standard Python scalar and return it. 
| | [`itemset`](numpy.matrix.itemset#numpy.matrix.itemset "numpy.matrix.itemset")(*args) | Insert scalar into an array (scalar is cast to array's dtype, if possible) | | [`max`](numpy.matrix.max#numpy.matrix.max "numpy.matrix.max")([axis, out]) | Return the maximum value along an axis. | | [`mean`](numpy.matrix.mean#numpy.matrix.mean "numpy.matrix.mean")([axis, dtype, out]) | Returns the average of the matrix elements along the given axis. | | [`min`](numpy.matrix.min#numpy.matrix.min "numpy.matrix.min")([axis, out]) | Return the minimum value along an axis. | | [`newbyteorder`](numpy.matrix.newbyteorder#numpy.matrix.newbyteorder "numpy.matrix.newbyteorder")([new_order]) | Return the array with the same data viewed with a different byte order. | | [`nonzero`](numpy.matrix.nonzero#numpy.matrix.nonzero "numpy.matrix.nonzero")() | Return the indices of the elements that are non-zero. | | [`partition`](numpy.matrix.partition#numpy.matrix.partition "numpy.matrix.partition")(kth[, axis, kind, order]) | Rearranges the elements in the array in such a way that the value of the element in kth position is in the position it would be in a sorted array. | | [`prod`](numpy.matrix.prod#numpy.matrix.prod "numpy.matrix.prod")([axis, dtype, out]) | Return the product of the array elements over the given axis. | | [`ptp`](numpy.matrix.ptp#numpy.matrix.ptp "numpy.matrix.ptp")([axis, out]) | Peak-to-peak (maximum - minimum) value along the given axis. | | [`put`](numpy.matrix.put#numpy.matrix.put "numpy.matrix.put")(indices, values[, mode]) | Set `a.flat[n] = values[n]` for all `n` in indices. | | [`ravel`](numpy.matrix.ravel#numpy.matrix.ravel "numpy.matrix.ravel")([order]) | Return a flattened matrix. | | [`repeat`](numpy.matrix.repeat#numpy.matrix.repeat "numpy.matrix.repeat")(repeats[, axis]) | Repeat elements of an array. | | [`reshape`](numpy.matrix.reshape#numpy.matrix.reshape "numpy.matrix.reshape")(shape[, order]) | Returns an array containing the same data with a new shape. 
| | [`resize`](numpy.matrix.resize#numpy.matrix.resize "numpy.matrix.resize")(new_shape[, refcheck]) | Change shape and size of array in-place. | | [`round`](numpy.matrix.round#numpy.matrix.round "numpy.matrix.round")([decimals, out]) | Return `a` with each element rounded to the given number of decimals. | | [`searchsorted`](numpy.matrix.searchsorted#numpy.matrix.searchsorted "numpy.matrix.searchsorted")(v[, side, sorter]) | Find indices where elements of v should be inserted in a to maintain order. | | [`setfield`](numpy.matrix.setfield#numpy.matrix.setfield "numpy.matrix.setfield")(val, dtype[, offset]) | Put a value into a specified place in a field defined by a data-type. | | [`setflags`](numpy.matrix.setflags#numpy.matrix.setflags "numpy.matrix.setflags")([write, align, uic]) | Set array flags WRITEABLE, ALIGNED, WRITEBACKIFCOPY, respectively. | | [`sort`](numpy.matrix.sort#numpy.matrix.sort "numpy.matrix.sort")([axis, kind, order]) | Sort an array in-place. | | [`squeeze`](numpy.matrix.squeeze#numpy.matrix.squeeze "numpy.matrix.squeeze")([axis]) | Return a possibly reshaped matrix. | | [`std`](numpy.matrix.std#numpy.matrix.std "numpy.matrix.std")([axis, dtype, out, ddof]) | Return the standard deviation of the array elements along the given axis. | | [`sum`](numpy.matrix.sum#numpy.matrix.sum "numpy.matrix.sum")([axis, dtype, out]) | Returns the sum of the matrix elements, along the given axis. | | [`swapaxes`](numpy.matrix.swapaxes#numpy.matrix.swapaxes "numpy.matrix.swapaxes")(axis1, axis2) | Return a view of the array with `axis1` and `axis2` interchanged. | | [`take`](numpy.matrix.take#numpy.matrix.take "numpy.matrix.take")(indices[, axis, out, mode]) | Return an array formed from the elements of `a` at the given indices. | | [`tobytes`](numpy.matrix.tobytes#numpy.matrix.tobytes "numpy.matrix.tobytes")([order]) | Construct Python bytes containing the raw data bytes in the array. 
| | [`tofile`](numpy.matrix.tofile#numpy.matrix.tofile "numpy.matrix.tofile")(fid[, sep, format]) | Write array to a file as text or binary (default). | | [`tolist`](numpy.matrix.tolist#numpy.matrix.tolist "numpy.matrix.tolist")() | Return the matrix as a (possibly nested) list. | | [`tostring`](numpy.matrix.tostring#numpy.matrix.tostring "numpy.matrix.tostring")([order]) | A compatibility alias for [`tobytes`](numpy.matrix.tobytes#numpy.matrix.tobytes "numpy.matrix.tobytes"), with exactly the same behavior. | | [`trace`](numpy.matrix.trace#numpy.matrix.trace "numpy.matrix.trace")([offset, axis1, axis2, dtype, out]) | Return the sum along diagonals of the array. | | [`transpose`](numpy.matrix.transpose#numpy.matrix.transpose "numpy.matrix.transpose")(*axes) | Returns a view of the array with axes transposed. | | [`var`](numpy.matrix.var#numpy.matrix.var "numpy.matrix.var")([axis, dtype, out, ddof]) | Returns the variance of the matrix elements, along the given axis. | | [`view`](numpy.matrix.view#numpy.matrix.view "numpy.matrix.view")([dtype][, type]) | New view of array with the same data. | | | | | --- | --- | | **dot** | | © 2005–2022 NumPy Developers Licensed under the 3-clause BSD License. <https://numpy.org/doc/1.23/reference/generated/numpy.matrix.html>

numpy.recarray
==============

*class*numpy.recarray(*shape*, *dtype=None*, *buf=None*, *offset=0*, *strides=None*, *formats=None*, *names=None*, *titles=None*, *byteorder=None*, *aligned=False*, *order='C'*)[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/__init__.py)

Construct an ndarray that allows field access using attributes. Arrays may have data-types containing fields, analogous to columns in a spreadsheet. An example is `[(x, int), (y, float)]`, where each entry in the array is a pair of `(int, float)`. Normally, these attributes are accessed using dictionary lookups such as `arr['x']` and `arr['y']`.
Record arrays allow the fields to be accessed as members of the array, using `arr.x` and `arr.y`. Parameters **shape**tuple Shape of output array. **dtype**data-type, optional The desired data-type. By default, the data-type is determined from `formats`, `names`, `titles`, `aligned` and `byteorder`. **formats**list of data-types, optional A list containing the data-types for the different columns, e.g. `['i4', 'f8', 'i4']`. `formats` does *not* support the new convention of using types directly, i.e. `(int, float, int)`. Note that `formats` must be a list, not a tuple. Given that `formats` is somewhat limited, we recommend specifying [`dtype`](numpy.dtype#numpy.dtype "numpy.dtype") instead. **names**tuple of str, optional The name of each column, e.g. `('x', 'y', 'z')`. **buf**buffer, optional By default, a new array is created of the given shape and data-type. If `buf` is specified and is an object exposing the buffer interface, the array will use the memory from the existing buffer. In this case, the `offset` and [`strides`](numpy.recarray.strides#numpy.recarray.strides "numpy.recarray.strides") keywords are available. Returns **rec**recarray Empty array of the given shape and type. Other Parameters **titles**tuple of str, optional Aliases for column names. For example, if `names` were `('x', 'y', 'z')` and `titles` is `('x_coordinate', 'y_coordinate', 'z_coordinate')`, then `arr['x']` is equivalent to both `arr.x` and `arr.x_coordinate`. **byteorder**{‘<’, ‘>’, ‘=’}, optional Byte-order for all fields. **aligned**bool, optional Align the fields in memory as the C-compiler would. **strides**tuple of ints, optional Buffer (`buf`) is interpreted according to these strides (strides define how many bytes each array element, row, column, etc. occupy in memory). **offset**int, optional Start reading buffer (`buf`) from this offset onwards. **order**{‘C’, ‘F’}, optional Row-major (C-style) or column-major (Fortran-style) order. 
See also [`core.records.fromrecords`](numpy.core.records.fromrecords#numpy.core.records.fromrecords "numpy.core.records.fromrecords") Construct a record array from data. [`record`](numpy.record#numpy.record "numpy.record") fundamental data-type for [`recarray`](#numpy.recarray "numpy.recarray"). [`format_parser`](numpy.format_parser#numpy.format_parser "numpy.format_parser") determine a data-type from formats, names, titles. #### Notes This constructor can be compared to `empty`: it creates a new record array but does not fill it with data. To create a record array from data, use one of the following methods: 1. Create a standard ndarray and convert it to a record array, using `arr.view(np.recarray)` 2. Use the `buf` keyword. 3. Use `np.rec.fromrecords`. #### Examples Create an array with two fields, `x` and `y`: ``` >>> x = np.array([(1.0, 2), (3.0, 4)], dtype=[('x', '<f8'), ('y', '<i8')]) >>> x array([(1., 2), (3., 4)], dtype=[('x', '<f8'), ('y', '<i8')]) ``` ``` >>> x['x'] array([1., 3.]) ``` View the array as a record array: ``` >>> x = x.view(np.recarray) ``` ``` >>> x.x array([1., 3.]) ``` ``` >>> x.y array([2, 4]) ``` Create a new, empty record array: ``` >>> np.recarray((2,), ... dtype=[('x', int), ('y', float), ('z', int)]) rec.array([(-1073741821, 1.2249118382103472e-301, 24547520), (3471280, 1.2134086255804012e-316, 0)], dtype=[('x', '<i4'), ('y', '<f8'), ('z', '<i4')]) ``` Attributes [`T`](numpy.recarray.t#numpy.recarray.T "numpy.recarray.T") The transposed array. [`base`](numpy.recarray.base#numpy.recarray.base "numpy.recarray.base") Base object if memory is from some other object. [`ctypes`](numpy.recarray.ctypes#numpy.recarray.ctypes "numpy.recarray.ctypes") An object to simplify the interaction of the array with the ctypes module. [`data`](numpy.recarray.data#numpy.recarray.data "numpy.recarray.data") Python buffer object pointing to the start of the array’s data. [`dtype`](numpy.dtype#numpy.dtype "numpy.dtype") Data-type of the array’s elements. 
[`flags`](numpy.recarray.flags#numpy.recarray.flags "numpy.recarray.flags") Information about the memory layout of the array. [`flat`](numpy.recarray.flat#numpy.recarray.flat "numpy.recarray.flat") A 1-D iterator over the array. [`imag`](numpy.imag#numpy.imag "numpy.imag") The imaginary part of the array. [`itemsize`](numpy.recarray.itemsize#numpy.recarray.itemsize "numpy.recarray.itemsize") Length of one array element in bytes. [`nbytes`](numpy.recarray.nbytes#numpy.recarray.nbytes "numpy.recarray.nbytes") Total bytes consumed by the elements of the array. [`ndim`](numpy.recarray.ndim#numpy.recarray.ndim "numpy.recarray.ndim") Number of array dimensions. [`real`](numpy.real#numpy.real "numpy.real") The real part of the array. [`shape`](numpy.shape#numpy.shape "numpy.shape") Tuple of array dimensions. [`size`](numpy.recarray.size#numpy.recarray.size "numpy.recarray.size") Number of elements in the array. [`strides`](numpy.recarray.strides#numpy.recarray.strides "numpy.recarray.strides") Tuple of bytes to step in each dimension when traversing an array. #### Methods | | | | --- | --- | | [`all`](numpy.recarray.all#numpy.recarray.all "numpy.recarray.all")([axis, out, keepdims, where]) | Returns True if all elements evaluate to True. | | [`any`](numpy.recarray.any#numpy.recarray.any "numpy.recarray.any")([axis, out, keepdims, where]) | Returns True if any of the elements of `a` evaluate to True. | | [`argmax`](numpy.recarray.argmax#numpy.recarray.argmax "numpy.recarray.argmax")([axis, out, keepdims]) | Return indices of the maximum values along the given axis. | | [`argmin`](numpy.recarray.argmin#numpy.recarray.argmin "numpy.recarray.argmin")([axis, out, keepdims]) | Return indices of the minimum values along the given axis. | | [`argpartition`](numpy.recarray.argpartition#numpy.recarray.argpartition "numpy.recarray.argpartition")(kth[, axis, kind, order]) | Returns the indices that would partition this array. 
| | [`argsort`](numpy.recarray.argsort#numpy.recarray.argsort "numpy.recarray.argsort")([axis, kind, order]) | Returns the indices that would sort this array. | | [`astype`](numpy.recarray.astype#numpy.recarray.astype "numpy.recarray.astype")(dtype[, order, casting, subok, copy]) | Copy of the array, cast to a specified type. | | [`byteswap`](numpy.recarray.byteswap#numpy.recarray.byteswap "numpy.recarray.byteswap")([inplace]) | Swap the bytes of the array elements | | [`choose`](numpy.recarray.choose#numpy.recarray.choose "numpy.recarray.choose")(choices[, out, mode]) | Use an index array to construct a new array from a set of choices. | | [`clip`](numpy.recarray.clip#numpy.recarray.clip "numpy.recarray.clip")([min, max, out]) | Return an array whose values are limited to `[min, max]`. | | [`compress`](numpy.recarray.compress#numpy.recarray.compress "numpy.recarray.compress")(condition[, axis, out]) | Return selected slices of this array along given axis. | | [`conj`](numpy.recarray.conj#numpy.recarray.conj "numpy.recarray.conj")() | Complex-conjugate all elements. | | [`conjugate`](numpy.recarray.conjugate#numpy.recarray.conjugate "numpy.recarray.conjugate")() | Return the complex conjugate, element-wise. | | [`copy`](numpy.recarray.copy#numpy.recarray.copy "numpy.recarray.copy")([order]) | Return a copy of the array. | | [`cumprod`](numpy.recarray.cumprod#numpy.recarray.cumprod "numpy.recarray.cumprod")([axis, dtype, out]) | Return the cumulative product of the elements along the given axis. | | [`cumsum`](numpy.recarray.cumsum#numpy.recarray.cumsum "numpy.recarray.cumsum")([axis, dtype, out]) | Return the cumulative sum of the elements along the given axis. | | [`diagonal`](numpy.recarray.diagonal#numpy.recarray.diagonal "numpy.recarray.diagonal")([offset, axis1, axis2]) | Return specified diagonals. | | [`dump`](numpy.recarray.dump#numpy.recarray.dump "numpy.recarray.dump")(file) | Dump a pickle of the array to the specified file. 
| | [`dumps`](numpy.recarray.dumps#numpy.recarray.dumps "numpy.recarray.dumps")() | Returns the pickle of the array as a string. | | [`fill`](numpy.recarray.fill#numpy.recarray.fill "numpy.recarray.fill")(value) | Fill the array with a scalar value. | | [`flatten`](numpy.recarray.flatten#numpy.recarray.flatten "numpy.recarray.flatten")([order]) | Return a copy of the array collapsed into one dimension. | | [`getfield`](numpy.recarray.getfield#numpy.recarray.getfield "numpy.recarray.getfield")(dtype[, offset]) | Returns a field of the given array as a certain type. | | [`item`](numpy.recarray.item#numpy.recarray.item "numpy.recarray.item")(*args) | Copy an element of an array to a standard Python scalar and return it. | | [`itemset`](numpy.recarray.itemset#numpy.recarray.itemset "numpy.recarray.itemset")(*args) | Insert scalar into an array (scalar is cast to array's dtype, if possible) | | [`max`](numpy.recarray.max#numpy.recarray.max "numpy.recarray.max")([axis, out, keepdims, initial, where]) | Return the maximum along a given axis. | | [`mean`](numpy.recarray.mean#numpy.recarray.mean "numpy.recarray.mean")([axis, dtype, out, keepdims, where]) | Returns the average of the array elements along given axis. | | [`min`](numpy.recarray.min#numpy.recarray.min "numpy.recarray.min")([axis, out, keepdims, initial, where]) | Return the minimum along a given axis. | | [`newbyteorder`](numpy.recarray.newbyteorder#numpy.recarray.newbyteorder "numpy.recarray.newbyteorder")([new_order]) | Return the array with the same data viewed with a different byte order. | | [`nonzero`](numpy.recarray.nonzero#numpy.recarray.nonzero "numpy.recarray.nonzero")() | Return the indices of the elements that are non-zero. | | [`partition`](numpy.recarray.partition#numpy.recarray.partition "numpy.recarray.partition")(kth[, axis, kind, order]) | Rearranges the elements in the array in such a way that the value of the element in kth position is in the position it would be in a sorted array. 
| | [`prod`](numpy.recarray.prod#numpy.recarray.prod "numpy.recarray.prod")([axis, dtype, out, keepdims, initial, ...]) | Return the product of the array elements over the given axis | | [`ptp`](numpy.recarray.ptp#numpy.recarray.ptp "numpy.recarray.ptp")([axis, out, keepdims]) | Peak to peak (maximum - minimum) value along a given axis. | | [`put`](numpy.recarray.put#numpy.recarray.put "numpy.recarray.put")(indices, values[, mode]) | Set `a.flat[n] = values[n]` for all `n` in indices. | | [`ravel`](numpy.recarray.ravel#numpy.recarray.ravel "numpy.recarray.ravel")([order]) | Return a flattened array. | | [`repeat`](numpy.recarray.repeat#numpy.recarray.repeat "numpy.recarray.repeat")(repeats[, axis]) | Repeat elements of an array. | | [`reshape`](numpy.recarray.reshape#numpy.recarray.reshape "numpy.recarray.reshape")(shape[, order]) | Returns an array containing the same data with a new shape. | | [`resize`](numpy.recarray.resize#numpy.recarray.resize "numpy.recarray.resize")(new_shape[, refcheck]) | Change shape and size of array in-place. | | [`round`](numpy.recarray.round#numpy.recarray.round "numpy.recarray.round")([decimals, out]) | Return `a` with each element rounded to the given number of decimals. | | [`searchsorted`](numpy.recarray.searchsorted#numpy.recarray.searchsorted "numpy.recarray.searchsorted")(v[, side, sorter]) | Find indices where elements of v should be inserted in a to maintain order. | | [`setfield`](numpy.recarray.setfield#numpy.recarray.setfield "numpy.recarray.setfield")(val, dtype[, offset]) | Put a value into a specified place in a field defined by a data-type. | | [`setflags`](numpy.recarray.setflags#numpy.recarray.setflags "numpy.recarray.setflags")([write, align, uic]) | Set array flags WRITEABLE, ALIGNED, WRITEBACKIFCOPY, respectively. | | [`sort`](numpy.recarray.sort#numpy.recarray.sort "numpy.recarray.sort")([axis, kind, order]) | Sort an array in-place. 
| | [`squeeze`](numpy.recarray.squeeze#numpy.recarray.squeeze "numpy.recarray.squeeze")([axis]) | Remove axes of length one from `a`. | | [`std`](numpy.recarray.std#numpy.recarray.std "numpy.recarray.std")([axis, dtype, out, ddof, keepdims, where]) | Returns the standard deviation of the array elements along given axis. | | [`sum`](numpy.recarray.sum#numpy.recarray.sum "numpy.recarray.sum")([axis, dtype, out, keepdims, initial, where]) | Return the sum of the array elements over the given axis. | | [`swapaxes`](numpy.recarray.swapaxes#numpy.recarray.swapaxes "numpy.recarray.swapaxes")(axis1, axis2) | Return a view of the array with `axis1` and `axis2` interchanged. | | [`take`](numpy.recarray.take#numpy.recarray.take "numpy.recarray.take")(indices[, axis, out, mode]) | Return an array formed from the elements of `a` at the given indices. | | [`tobytes`](numpy.recarray.tobytes#numpy.recarray.tobytes "numpy.recarray.tobytes")([order]) | Construct Python bytes containing the raw data bytes in the array. | | [`tofile`](numpy.recarray.tofile#numpy.recarray.tofile "numpy.recarray.tofile")(fid[, sep, format]) | Write array to a file as text or binary (default). | | [`tolist`](numpy.recarray.tolist#numpy.recarray.tolist "numpy.recarray.tolist")() | Return the array as an `a.ndim`-levels deep nested list of Python scalars. | | [`tostring`](numpy.recarray.tostring#numpy.recarray.tostring "numpy.recarray.tostring")([order]) | A compatibility alias for [`tobytes`](numpy.recarray.tobytes#numpy.recarray.tobytes "numpy.recarray.tobytes"), with exactly the same behavior. | | [`trace`](numpy.recarray.trace#numpy.recarray.trace "numpy.recarray.trace")([offset, axis1, axis2, dtype, out]) | Return the sum along diagonals of the array. | | [`transpose`](numpy.recarray.transpose#numpy.recarray.transpose "numpy.recarray.transpose")(*axes) | Returns a view of the array with axes transposed. 
| | [`var`](numpy.recarray.var#numpy.recarray.var "numpy.recarray.var")([axis, dtype, out, ddof, keepdims, where]) | Returns the variance of the array elements, along given axis. | | [`view`](numpy.recarray.view#numpy.recarray.view "numpy.recarray.view")([dtype][, type]) | New view of array with the same data. | | | | | --- | --- | | **dot** | | | **field** | | © 2005–2022 NumPy Developers Licensed under the 3-clause BSD License. <https://numpy.org/doc/1.23/reference/generated/numpy.recarray.html>

numpy.ndarray.ndim
==================

attribute ndarray.ndim

Number of array dimensions.

#### Examples

```
>>> x = np.array([1, 2, 3])
>>> x.ndim
1
>>> y = np.zeros((2, 3, 4))
>>> y.ndim
3
```

© 2005–2022 NumPy Developers Licensed under the 3-clause BSD License. <https://numpy.org/doc/1.23/reference/generated/numpy.ndarray.ndim.html>

numpy.lib.stride_tricks.as_strided
==================================

lib.stride_tricks.as_strided(*x*, *shape=None*, *strides=None*, *subok=False*, *writeable=True*)[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/lib/stride_tricks.py#L38-L115)

Create a view into the array with the given shape and strides.

Warning This function has to be used with extreme care, see notes.

Parameters **x**ndarray Array to create a new view of. **shape**sequence of int, optional The shape of the new array. Defaults to `x.shape`. **strides**sequence of int, optional The strides of the new array. Defaults to `x.strides`. **subok**bool, optional New in version 1.10. If True, subclasses are preserved. **writeable**bool, optional New in version 1.12. If set to False, the returned array will always be readonly. Otherwise it will be writable if the original array was. It is advisable to set this to False if possible (see Notes). Returns **view**ndarray See also [`broadcast_to`](numpy.broadcast_to#numpy.broadcast_to "numpy.broadcast_to") broadcast an array to a given shape. [`reshape`](numpy.reshape#numpy.reshape "numpy.reshape") reshape an array.
[`lib.stride_tricks.sliding_window_view`](numpy.lib.stride_tricks.sliding_window_view#numpy.lib.stride_tricks.sliding_window_view "numpy.lib.stride_tricks.sliding_window_view") user-friendly and safe function for the creation of sliding window views.

#### Notes

`as_strided` creates a view into the array given the exact strides and shape. This means it manipulates the internal data structure of ndarray and, if done incorrectly, the array elements can point to invalid memory and can corrupt results or crash your program. It is advisable to always use the original `x.strides` when calculating new strides to avoid reliance on a contiguous memory layout.

Furthermore, arrays created with this function often contain self-overlapping memory, so that two elements are identical. Vectorized write operations on such arrays will typically be unpredictable. They may even give different results for small, large, or transposed arrays. Since writing to these arrays has to be tested and done with great care, you may want to use `writeable=False` to avoid accidental write operations.

For these reasons it is advisable to avoid `as_strided` when possible.

© 2005–2022 NumPy Developers Licensed under the 3-clause BSD License. <https://numpy.org/doc/1.23/reference/generated/numpy.lib.stride_tricks.as_strided.html>

numpy.ndarray.strides
=====================

attribute ndarray.strides

Tuple of bytes to step in each dimension when traversing an array. The byte offset of element `(i[0], i[1], ..., i[n])` in an array `a` is:

```
offset = sum(np.array(i) * a.strides)
```

A more detailed explanation of strides can be found in the “ndarray.rst” file in the NumPy reference guide.

Warning Setting `arr.strides` is discouraged and may be deprecated in the future. [`numpy.lib.stride_tricks.as_strided`](numpy.lib.stride_tricks.as_strided#numpy.lib.stride_tricks.as_strided "numpy.lib.stride_tricks.as_strided") should be preferred to create a new view of the same data in a safer way.
See also [`numpy.lib.stride_tricks.as_strided`](numpy.lib.stride_tricks.as_strided#numpy.lib.stride_tricks.as_strided "numpy.lib.stride_tricks.as_strided")

#### Notes

Imagine an array of 32-bit integers (each 4 bytes):

```
x = np.array([[0, 1, 2, 3, 4],
              [5, 6, 7, 8, 9]], dtype=np.int32)
```

This array is stored in memory as 40 bytes, one after the other (known as a contiguous block of memory). The strides of an array tell us how many bytes we have to skip in memory to move to the next position along a certain axis. For example, we have to skip 4 bytes (1 value) to move to the next column, but 20 bytes (5 values) to get to the same position in the next row. As such, the strides for the array `x` will be `(20, 4)`.

#### Examples

```
>>> y = np.reshape(np.arange(2*3*4), (2,3,4))
>>> y
array([[[ 0,  1,  2,  3],
        [ 4,  5,  6,  7],
        [ 8,  9, 10, 11]],
       [[12, 13, 14, 15],
        [16, 17, 18, 19],
        [20, 21, 22, 23]]])
>>> y.strides
(48, 16, 4)
>>> y[1,1,1]
17
>>> offset = sum(y.strides * np.array((1,1,1)))
>>> offset/y.itemsize
17
```

```
>>> x = np.reshape(np.arange(5*6*7*8), (5,6,7,8)).transpose(2,3,1,0)
>>> x.strides
(32, 4, 224, 1344)
>>> i = np.array([3,5,2,2])
>>> offset = sum(i * x.strides)
>>> x[3,5,2,2]
813
>>> offset / x.itemsize
813
```

© 2005–2022 NumPy Developers Licensed under the 3-clause BSD License. <https://numpy.org/doc/1.23/reference/generated/numpy.ndarray.strides.html>

NumPy C code explanations
=========================

Fanaticism consists of redoubling your efforts when you have forgotten your aim. — *<NAME>*

An authority is a person who can tell you more about something than you really care to know. — *Unknown*

This page attempts to explain the logic behind some of the new pieces of code. The purpose behind these explanations is to enable somebody to be able to understand the ideas behind the implementation somewhat more easily than just staring at the code. Perhaps in this way, the algorithms can be improved on, borrowed from, and/or optimized by more people.
Memory model ------------ One fundamental aspect of the [`ndarray`](../reference/generated/numpy.ndarray#numpy.ndarray "numpy.ndarray") is that an array is seen as a “chunk” of memory starting at some location. The interpretation of this memory depends on the [stride](../glossary#term-stride) information. For each dimension in an \(N\)-dimensional array, an integer ([stride](../glossary#term-stride)) dictates how many bytes must be skipped to get to the next element in that dimension. Unless you have a single-segment array, this [stride](../glossary#term-stride) information must be consulted when traversing through an array. It is not difficult to write code that accepts strides, you just have to use `char*` pointers because strides are in units of bytes. Keep in mind also that strides do not have to be unit-multiples of the element size. Also, remember that if the number of dimensions of the array is 0 (sometimes called a `rank-0` array), then the [strides](../glossary#term-stride) and [dimensions](../glossary#term-dimension) variables are `NULL`. Besides the structural information contained in the strides and dimensions members of the [`PyArrayObject`](../reference/c-api/types-and-structures#c.PyArrayObject "PyArrayObject"), the flags contain important information about how the data may be accessed. In particular, the [`NPY_ARRAY_ALIGNED`](../reference/c-api/array#c.NPY_ARRAY_ALIGNED "NPY_ARRAY_ALIGNED") flag is set when the memory is on a suitable boundary according to the datatype array. Even if you have a [contiguous](../glossary#term-contiguous) chunk of memory, you cannot just assume it is safe to dereference a datatype-specific pointer to an element. Only if the [`NPY_ARRAY_ALIGNED`](../reference/c-api/array#c.NPY_ARRAY_ALIGNED "NPY_ARRAY_ALIGNED") flag is set, this is a safe operation. On some platforms it will work but on others, like Solaris, it will cause a bus error. 
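The C-level `NPY_ARRAY_ALIGNED` flag is visible from Python through `ndarray.flags`; the following is a small illustrative sketch (the misalignment result assumes a platform, such as x86-64, where `float64` requires 8-byte alignment):

```python
import numpy as np

# A freshly allocated float64 array starts on a suitable boundary,
# so its ALIGNED flag (the Python view of NPY_ARRAY_ALIGNED) is set:
x = np.arange(4, dtype=np.float64)
print(x.flags['ALIGNED'])        # True

# Reinterpret a byte buffer starting one byte off as float64. The view is
# valid, but its data pointer no longer sits on an 8-byte boundary, so
# NumPy clears the ALIGNED flag rather than permit unsafe dereferences.
buf = np.zeros(33, dtype=np.uint8)
misaligned = buf[1:].view(np.float64)
print(misaligned.flags['ALIGNED'])   # False on platforms with 8-byte doubles
```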
The [`NPY_ARRAY_WRITEABLE`](../reference/c-api/array#c.NPY_ARRAY_WRITEABLE "NPY_ARRAY_WRITEABLE") should also be ensured if you plan on writing to the memory area of the array. It is also possible to obtain a pointer to an unwritable memory area. Sometimes, writing to the memory area when the [`NPY_ARRAY_WRITEABLE`](../reference/c-api/array#c.NPY_ARRAY_WRITEABLE "NPY_ARRAY_WRITEABLE") flag is not set will just be rude. Other times it can cause program crashes (*e.g.* a data-area that is a read-only memory-mapped file). Data-type encapsulation ----------------------- See also [Data type objects (dtype)](../reference/arrays.dtypes#arrays-dtypes) The [datatype](../reference/arrays.dtypes#arrays-dtypes) is an important abstraction of the [`ndarray`](../reference/generated/numpy.ndarray#numpy.ndarray "numpy.ndarray"). Operations will look to the datatype to provide the key functionality that is needed to operate on the array. This functionality is provided in the list of function pointers pointed to by the `f` member of the [`PyArray_Descr`](../reference/c-api/types-and-structures#c.PyArray_Descr "PyArray_Descr") structure. In this way, the number of datatypes can be extended simply by providing a [`PyArray_Descr`](../reference/c-api/types-and-structures#c.PyArray_Descr "PyArray_Descr") structure with suitable function pointers in the `f` member. For built-in types, there are some optimizations that bypass this mechanism, but the point of the datatype abstraction is to allow new datatypes to be added. One of the built-in datatypes, the [`void`](../reference/arrays.scalars#numpy.void "numpy.void") datatype allows for arbitrary [structured types](../glossary#term-structured-data-type) containing 1 or more fields as elements of the array. A [field](../glossary#term-field) is simply another datatype object along with an offset into the current structured type. 
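From Python, the (datatype, offset) pairs that make up a structured type can be inspected through `dtype.fields`; a brief sketch:

```python
import numpy as np

# Each field of a structured (void) datatype is another dtype plus a byte
# offset into the struct; dtype.fields exposes exactly those pairs.
dt = np.dtype([('x', np.int32), ('y', np.float64), ('tag', 'S4')])
for name, (field_dtype, offset) in dt.fields.items():
    print(name, field_dtype, offset)   # x at 0, y at 4, tag at 12

# With align=True the offsets are padded as in a C struct, so 'y' moves
# from byte 4 to an 8-byte boundary:
aligned = np.dtype([('x', np.int32), ('y', np.float64)], align=True)
print(aligned.fields['y'][1])          # 8
```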
In order to support arbitrarily nested fields, several recursive implementations of datatype access are implemented for the void type. A common idiom is to cycle through the elements of the dictionary and perform a specific operation based on the datatype object stored at the given offset. These offsets can be arbitrary numbers. Therefore, the possibility of encountering misaligned data must be recognized and taken into account if necessary. N-D Iterators ------------- See also [Iterating Over Arrays](../reference/arrays.nditer#arrays-nditer) A very common operation in much of NumPy code is the need to iterate over all the elements of a general, strided, N-dimensional array. This operation of a general-purpose N-dimensional loop is abstracted in the notion of an iterator object. To write an N-dimensional loop, you only have to create an iterator object from an ndarray, work with the [`dataptr`](../reference/c-api/types-and-structures#c.PyArrayIterObject.dataptr "PyArrayIterObject.dataptr") member of the iterator object structure and call the macro [`PyArray_ITER_NEXT`](../reference/c-api/array#c.PyArray_ITER_NEXT "PyArray_ITER_NEXT") on the iterator object to move to the next element. The `next` element is always in C-contiguous order. The macro works by first special-casing the C-contiguous, 1-D, and 2-D cases which work very simply. For the general case, the iteration works by keeping track of a list of coordinate counters in the iterator object. At each iteration, the last coordinate counter is increased (starting from 0). If this counter is smaller than one less than the size of the array in that dimension (a pre-computed and stored value), then the counter is increased and the [`dataptr`](../reference/c-api/types-and-structures#c.PyArrayIterObject.dataptr "PyArrayIterObject.dataptr") member is increased by the strides in that dimension and the macro ends. 
If the end of a dimension is reached, the counter for the last dimension is reset to zero and the [`dataptr`](../reference/c-api/types-and-structures#c.PyArrayIterObject.dataptr "PyArrayIterObject.dataptr") is moved back to the beginning of that dimension by subtracting the strides value times one less than the number of elements in that dimension (this is also pre-computed and stored in the [`backstrides`](../reference/c-api/types-and-structures#c.PyArrayIterObject.backstrides "PyArrayIterObject.backstrides") member of the iterator object). In this case, the macro does not end, but a local dimension counter is decremented so that the next-to-last dimension replaces the role that the last dimension played and the previously-described tests are executed again on the next-to-last dimension. In this way, the [`dataptr`](../reference/c-api/types-and-structures#c.PyArrayIterObject.dataptr "PyArrayIterObject.dataptr") is adjusted appropriately for arbitrary striding. The [`coordinates`](../reference/c-api/types-and-structures#c.PyArrayIterObject.coordinates "PyArrayIterObject.coordinates") member of the [`PyArrayIterObject`](../reference/c-api/types-and-structures#c.PyArrayIterObject "PyArrayIterObject") structure maintains the current N-d counter unless the underlying array is C-contiguous in which case the coordinate counting is bypassed. The [`index`](../reference/c-api/types-and-structures#c.PyArrayIterObject.index "PyArrayIterObject.index") member of the [`PyArrayIterObject`](../reference/c-api/types-and-structures#c.PyArrayIterObject "PyArrayIterObject") keeps track of the current flat index of the iterator. It is updated by the [`PyArray_ITER_NEXT`](../reference/c-api/array#c.PyArray_ITER_NEXT "PyArray_ITER_NEXT") macro. Broadcasting ------------ See also [Broadcasting](../user/basics.broadcasting#basics-broadcasting) In Numeric, the ancestor of NumPy, broadcasting was implemented in several lines of code buried deep in `ufuncobject.c`. 
In NumPy, the notion of broadcasting has been abstracted so that it can be performed in multiple places. Broadcasting is handled by the function [`PyArray_Broadcast`](../reference/c-api/array#c.PyArray_Broadcast "PyArray_Broadcast"). This function requires a [`PyArrayMultiIterObject`](../reference/c-api/types-and-structures#c.PyArrayMultiIterObject "PyArrayMultiIterObject") (or something that is a binary equivalent) to be passed in. The [`PyArrayMultiIterObject`](../reference/c-api/types-and-structures#c.PyArrayMultiIterObject "PyArrayMultiIterObject") keeps track of the broadcast number of dimensions and size in each dimension along with the total size of the broadcast result. It also keeps track of the number of arrays being broadcast and a pointer to an iterator for each of the arrays being broadcast. The [`PyArray_Broadcast`](../reference/c-api/array#c.PyArray_Broadcast "PyArray_Broadcast") function takes the iterators that have already been defined and uses them to determine the broadcast shape in each dimension (to create the iterators at the same time that broadcasting occurs then use the [`PyArray_MultiIterNew`](../reference/c-api/array#c.PyArray_MultiIterNew "PyArray_MultiIterNew") function). Then, the iterators are adjusted so that each iterator thinks it is iterating over an array with the broadcast size. This is done by adjusting the iterators number of dimensions, and the [shape](../glossary#term-shape) in each dimension. This works because the iterator strides are also adjusted. Broadcasting only adjusts (or adds) length-1 dimensions. For these dimensions, the strides variable is simply set to 0 so that the data-pointer for the iterator over that array doesn’t move as the broadcasting operation operates over the extended dimension. Broadcasting was always implemented in Numeric using 0-valued strides for the extended dimensions. It is done in exactly the same way in NumPy. 
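The 0-valued strides described here can be observed directly from Python with `broadcast_to`; a small sketch (the 8-byte stride assumes `int64` elements):

```python
import numpy as np

# broadcast_to adds a length-4 leading dimension without copying any data:
# the new axis simply gets a stride of 0, so the data pointer never moves
# as the broadcast dimension is traversed.
row = np.arange(3, dtype=np.int64)   # strides: (8,)
grid = np.broadcast_to(row, (4, 3))

print(grid.strides)                  # (0, 8)
print(grid.base is not None)         # True: still a view of `row`
```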
The big difference is that now the array of strides is kept track of in a [`PyArrayIterObject`](../reference/c-api/types-and-structures#c.PyArrayIterObject "PyArrayIterObject"), the iterators involved in a broadcast result are kept track of in a [`PyArrayMultiIterObject`](../reference/c-api/types-and-structures#c.PyArrayMultiIterObject "PyArrayMultiIterObject"), and the [`PyArray_Broadcast`](../reference/c-api/array#c.PyArray_Broadcast "PyArray_Broadcast") call implements the [General Broadcasting Rules](../user/basics.broadcasting#general-broadcasting-rules). Array Scalars ------------- See also [Scalars](../reference/arrays.scalars#arrays-scalars) The array scalars offer a hierarchy of Python types that allow a one-to-one correspondence between the datatype stored in an array and the Python-type that is returned when an element is extracted from the array. An exception to this rule was made with object arrays. Object arrays are heterogeneous collections of arbitrary Python objects. When you select an item from an object array, you get back the original Python object (and not an object array scalar which does exist but is rarely used for practical purposes). The array scalars also offer the same methods and attributes as arrays with the intent that the same code can be used to support arbitrary dimensions (including 0-dimensions). The array scalars are read-only (immutable) with the exception of the void scalar which can also be written to so that structured array field setting works more naturally (`a[0]['f1'] = value`). Indexing -------- See also [Indexing on ndarrays](../user/basics.indexing#basics-indexing), [Indexing routines](../reference/arrays.indexing#arrays-indexing) All Python indexing operations `arr[index]` are organized by first preparing the index and finding the index type. 
The supported index types are: * integer * [`newaxis`](../reference/constants#numpy.newaxis "numpy.newaxis") * [slice](https://docs.python.org/3/glossary.html#term-slice "(in Python v3.10)") * [`Ellipsis`](https://docs.python.org/3/library/constants.html#Ellipsis "(in Python v3.10)") * integer arrays/array-likes (advanced) * boolean (single boolean array); if there is more than one boolean array as the index or the shape does not match exactly, the boolean array will be converted to an integer array instead. * 0-d boolean (and also integer); 0-d boolean arrays are a special case that has to be handled in the advanced indexing code. They signal that a 0-d boolean array had to be interpreted as an integer array. As well as the scalar array special case signaling that an integer array was interpreted as an integer index, which is important because an integer array index forces a copy but is ignored if a scalar is returned (full integer index). The prepared index is guaranteed to be valid with the exception of out of bound values and broadcasting errors for advanced indexing. This includes that an [`Ellipsis`](https://docs.python.org/3/library/constants.html#Ellipsis "(in Python v3.10)") is added for incomplete indices for example when a two-dimensional array is indexed with a single integer. The next step depends on the type of index which was found. If all dimensions are indexed with an integer a scalar is returned or set. A single boolean indexing array will call specialized boolean functions. Indices containing an [`Ellipsis`](https://docs.python.org/3/library/constants.html#Ellipsis "(in Python v3.10)") or [slice](https://docs.python.org/3/glossary.html#term-slice "(in Python v3.10)") but no advanced indexing will always create a view into the old array by calculating the new strides and memory offset. This view can then either be returned or, for assignments, filled using `PyArray_CopyObject`. 
Note that `PyArray_CopyObject` may also be called on temporary arrays in other branches to support complicated assignments when the array is of object [`dtype`](../reference/generated/numpy.dtype#numpy.dtype "numpy.dtype"). ### Advanced indexing By far the most complex case is advanced indexing, which may or may not be combined with typical view-based indexing. Here integer indices are interpreted as view-based. Before trying to understand this, you may want to make yourself familiar with its subtleties. The advanced indexing code has three different branches and one special case: * There is one indexing array and it, as well as the assignment array, can be iterated trivially. For example, they may be contiguous. Also, the indexing array must be of [`intp`](../reference/arrays.scalars#numpy.intp "numpy.intp") type and the value array in assignments should be of the correct type. This is purely a fast path. * There are only integer array indices so that no subarray exists. * View-based and advanced indexing is mixed. In this case, the view-based indexing defines a collection of subarrays that are combined by the advanced indexing. For example, `arr[[1, 2, 3], :]` is created by vertically stacking the subarrays `arr[1, :]`, `arr[2, :]`, and `arr[3, :]`. * There is a subarray but it has exactly one element. This case can be handled as if there is no subarray but needs some care during setup. Deciding what case applies, checking broadcasting, and determining the kind of transposition needed are all done in `PyArray_MapIterNew`. After setting up, there are two cases. If there is no subarray or it only has one element, no subarray iteration is necessary and an iterator is prepared which iterates all indexing arrays *as well as* the result or value array. If there is a subarray, there are three iterators prepared. One for the indexing arrays, one for the result or value array (minus its subarray), and one for the subarrays of the original and the result/assignment array. 
The first two iterators give (or allow calculation) of the pointers into the start of the subarray, which then allows restarting the subarray iteration. When advanced indices are next to each other transposing may be necessary. All necessary transposing is handled by [`PyArray_MapIterSwapAxes`](../reference/c-api/array#c.PyArray_MapIterSwapAxes "PyArray_MapIterSwapAxes") and has to be handled by the caller unless `PyArray_MapIterNew` is asked to allocate the result. After preparation, getting and setting are relatively straightforward, although the different modes of iteration need to be considered. Unless there is only a single indexing array during item getting, the validity of the indices is checked beforehand. Otherwise, it is handled in the inner loop itself for optimization. Universal functions ------------------- See also [Universal functions (ufunc)](../reference/ufuncs#ufuncs), [Universal functions (ufunc) basics](../user/basics.ufuncs#ufuncs-basics) Universal functions are callable objects that take \(N\) inputs and produce \(M\) outputs by wrapping basic 1-D loops that work element-by-element into full easy-to-use functions that seamlessly implement [broadcasting](../user/basics.broadcasting#basics-broadcasting), [type-checking](../user/basics.ufuncs#ufuncs-casting), [buffered coercion](../user/basics.ufuncs#use-of-internal-buffers), and [output-argument handling](../user/basics.ufuncs#ufuncs-output-type). New universal functions are normally created in C, although there is a mechanism for creating ufuncs from Python functions ([`frompyfunc`](../reference/generated/numpy.frompyfunc#numpy.frompyfunc "numpy.frompyfunc")). The user must supply a 1-D loop that implements the basic function taking the input scalar values and placing the resulting scalars into the appropriate output slots as explained in implementation. 
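The Python-level mechanism mentioned above can be sketched briefly; `clip_diff` is an arbitrary illustrative scalar function, not part of NumPy:

```python
import numpy as np

def clip_diff(a, b):
    """Arbitrary scalar function: difference, floored at zero."""
    return a - b if a > b else 0

# Wrap the scalar function into a ufunc taking 2 inputs and producing
# 1 output. The result gains broadcasting and elementwise application
# for free (note that frompyfunc returns object-dtype arrays).
uclip = np.frompyfunc(clip_diff, 2, 1)

x = np.array([1, 5, 3])
print(uclip(x, 2))            # [0 3 1]
print(uclip(x, [[1], [4]]))   # broadcast to a 2x3 result
```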
### Setup Every [`ufunc`](../reference/generated/numpy.ufunc#numpy.ufunc "numpy.ufunc") calculation involves some overhead related to setting up the calculation. The practical significance of this overhead is that even though the actual calculation of the ufunc is very fast, you will be able to write array and type-specific code that will work faster for small arrays than the ufunc. In particular, using ufuncs to perform many calculations on 0-D arrays will be slower than other Python-based solutions (the silently-imported `scalarmath` module exists precisely to give array scalars the look-and-feel of ufunc based calculations with significantly reduced overhead). When a [`ufunc`](../reference/generated/numpy.ufunc#numpy.ufunc "numpy.ufunc") is called, many things must be done. The information collected from these setup operations is stored in a loop object. This loop object is a C-structure (that could become a Python object but is not initialized as such because it is only used internally). This loop object has the layout needed to be used with [`PyArray_Broadcast`](../reference/c-api/array#c.PyArray_Broadcast "PyArray_Broadcast") so that the broadcasting can be handled in the same way as it is handled in other sections of code. The first thing done is to look up in the thread-specific global dictionary the current values for the buffer-size, the error mask, and the associated error object. The state of the error mask controls what happens when an error condition is found. It should be noted that checking of the hardware error flags is only performed after each 1-D loop is executed. This means that if the input and output arrays are contiguous and of the correct type so that a single 1-D loop is performed, then the flags may not be checked until all elements of the array have been calculated. Looking up these values in a thread-specific dictionary takes time which is easily ignored for all but very small arrays. 
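The buffer size and error mask consulted during this setup are the values controlled from Python by `np.setbufsize` and `np.errstate`/`np.seterr`; a small sketch:

```python
import numpy as np

# The ufunc buffer size is a per-thread setting, queryable and settable:
old = np.getbufsize()
np.setbufsize(8192)

# The error mask decides what an error condition does. With 'ignore' the
# divide-by-zero below proceeds silently and produces inf:
with np.errstate(divide='ignore'):
    r = np.array([1.0]) / 0.0
print(r)                      # [inf]

# With 'raise', the same operation raises FloatingPointError instead:
with np.errstate(divide='raise'):
    try:
        np.array([1.0]) / 0.0
    except FloatingPointError:
        print("divide error raised")

np.setbufsize(old)            # restore the previous buffer size
```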
After checking the thread-specific global variables, the inputs are evaluated to determine how the ufunc should proceed and the input and output arrays are constructed if necessary. Any inputs which are not arrays are converted to arrays (using context if necessary). Which of the inputs are scalars (and therefore converted to 0-D arrays) is noted. Next, an appropriate 1-D loop is selected from the 1-D loops available to the [`ufunc`](../reference/generated/numpy.ufunc#numpy.ufunc "numpy.ufunc") based on the input array types. This 1-D loop is selected by trying to match the signature of the datatypes of the inputs against the available signatures. The signatures corresponding to built-in types are stored in the [`ufunc.types`](../reference/generated/numpy.ufunc.types#numpy.ufunc.types "numpy.ufunc.types") member of the ufunc structure. The signatures corresponding to user-defined types are stored in a linked list of function information with the head element stored as a `CObject` in the `userloops` dictionary keyed by the datatype number (the first user-defined type in the argument list is used as the key). The signatures are searched until a signature is found to which the input arrays can all be cast safely (ignoring any scalar arguments which are not allowed to determine the type of the result). The implication of this search procedure is that “lesser types” should be placed below “larger types” when the signatures are stored. If no 1-D loop is found, then an error is reported. Otherwise, the `argument_list` is updated with the stored signature — in case casting is necessary and to fix the output types assumed by the 1-D loop.
If the ufunc has 2 inputs and 1 output and the second input is an `Object` array then a special-case check is performed so that `NotImplemented` is returned if the second input is not an ndarray, has the [`__array_priority__`](../reference/arrays.classes#numpy.class.__array_priority__ "numpy.class.__array_priority__") attribute, and has an `__r{op}__` special method. In this way, Python is signaled to give the other object a chance to complete the operation instead of using generic object-array calculations. This allows (for example) sparse matrices to override the multiplication operator 1-D loop. For input arrays that are smaller than the specified buffer size, copies are made of all non-contiguous, misaligned, or out-of-byteorder arrays to ensure that for small arrays, a single loop is used. Then, array iterators are created for all the input arrays and the resulting collection of iterators is broadcast to a single shape. The output arguments (if any) are then processed and any missing return arrays are constructed. If any provided output array doesn’t have the correct type (or is misaligned) and is smaller than the buffer size, then a new output array is constructed with the special [`NPY_ARRAY_WRITEBACKIFCOPY`](../reference/c-api/array#c.NPY_ARRAY_WRITEBACKIFCOPY "NPY_ARRAY_WRITEBACKIFCOPY") flag set. At the end of the function, [`PyArray_ResolveWritebackIfCopy`](../reference/c-api/array#c.PyArray_ResolveWritebackIfCopy "PyArray_ResolveWritebackIfCopy") is called so that its contents will be copied back into the output array. Iterators for the output arguments are then processed. Finally, the decision is made about how to execute the looping mechanism to ensure that all elements of the input arrays are combined to produce the output arrays of the correct type. 
The options for loop execution are one-loop (for [contiguous](../glossary#term-contiguous), aligned, and correct data type), strided-loop (for non-contiguous but still aligned and correct data type), and a buffered loop (for misaligned or incorrect data type situations). Depending on which execution method is called for, the loop is then set up and computed.

### Function call

This section describes how the basic universal function computation loop is set up and executed for each of the three different kinds of execution. If [`NPY_ALLOW_THREADS`](../reference/c-api/array#c.NPY_ALLOW_THREADS "NPY_ALLOW_THREADS") is defined during compilation, then as long as no object arrays are involved, the Python Global Interpreter Lock (GIL) is released prior to calling the loops. It is re-acquired if necessary to handle error conditions. The hardware error flags are checked only after the 1-D loop is completed.

#### One loop

This is the simplest case of all. The ufunc is executed by calling the underlying 1-D loop exactly once. This is possible only when we have aligned data of the correct type (including byteorder) for both input and output and all arrays have uniform strides (either [contiguous](../glossary#term-contiguous), 0-D, or 1-D). In this case, the 1-D computational loop is called once to compute the calculation for the entire array. Note that the hardware error flags are only checked after the entire calculation is complete.

#### Strided loop

When the input and output arrays are aligned and of the correct type, but the striding is not uniform (non-contiguous and 2-D or larger), then a second looping structure is employed for the calculation. This approach converts all of the iterators for the input and output arguments to iterate over all but the largest dimension. The inner loop is then handled by the underlying 1-D computational loop. The outer loop is a standard iterator loop on the converted iterators. The hardware error flags are checked after each 1-D loop is completed.
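Which of these paths is taken is invisible from Python, but the array properties that drive the choice are not; a small sketch using a non-contiguous view:

```python
import numpy as np

x = np.arange(12, dtype=np.float64).reshape(3, 4)
y = x[:, ::2]    # every other column: aligned, correct dtype, NOT contiguous

# y cannot be walked with one big 1-D loop, so (conceptually) the strided
# path applies: an outer iterator over rows, an inner 1-D loop per row.
print(y.flags['C_CONTIGUOUS'], y.flags['F_CONTIGUOUS'])   # False False

# The choice of path never changes the result:
print(np.add(y, 1))
```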
#### Buffered loop

This is the code that handles the situation whenever the input and/or output arrays are either misaligned or of the wrong datatype (including being byteswapped) from what the underlying 1-D loop expects. The arrays are also assumed to be non-contiguous. The code works very much like the strided loop except that the inner 1-D loop is modified so that pre-processing is performed on the inputs and post-processing is performed on the outputs in `bufsize` chunks (where `bufsize` is a user-settable parameter). The underlying 1-D computational loop is called on data that is copied over (if it needs to be). The setup code and the loop code are considerably more complicated in this case because they have to handle:

* memory allocation of the temporary buffers
* deciding whether or not to use buffers on the input and output data (misaligned and/or wrong datatype)
* copying and possibly casting data for any inputs or outputs for which buffers are necessary.
* special-casing `Object` arrays so that reference counts are properly handled when copies and/or casts are necessary.
* breaking up the inner 1-D loop into `bufsize` chunks (with a possible remainder).

Again, the hardware error flags are checked at the end of each 1-D loop.

### Final output manipulation

Ufuncs allow other array-like classes to be passed seamlessly through the interface in that inputs of a particular class will induce the outputs to be of that same class. The mechanism by which this works is the following. If any of the inputs are not ndarrays and define the [`__array_wrap__`](../reference/arrays.classes#numpy.class.__array_wrap__ "numpy.class.__array_wrap__") method, then the class with the largest [`__array_priority__`](../reference/arrays.classes#numpy.class.__array_priority__ "numpy.class.__array_priority__") attribute determines the type of all the outputs (with the exception of any output arrays passed in).
The [`__array_wrap__`](../reference/arrays.classes#numpy.class.__array_wrap__ "numpy.class.__array_wrap__") method of the input array will be called with the ndarray being returned from the ufunc as its input. Two calling styles of the [`__array_wrap__`](../reference/arrays.classes#numpy.class.__array_wrap__ "numpy.class.__array_wrap__") function are supported. The first takes the ndarray as the first argument and a tuple of “context” as the second argument. The context is (ufunc, arguments, output argument number). This is the first call tried. If a `TypeError` occurs, then the function is called with just the ndarray as the first argument.

### Methods

There are three methods of ufuncs that require calculation similar to the general-purpose ufuncs. These are [`ufunc.reduce`](../reference/generated/numpy.ufunc.reduce#numpy.ufunc.reduce "numpy.ufunc.reduce"), [`ufunc.accumulate`](../reference/generated/numpy.ufunc.accumulate#numpy.ufunc.accumulate "numpy.ufunc.accumulate"), and [`ufunc.reduceat`](../reference/generated/numpy.ufunc.reduceat#numpy.ufunc.reduceat "numpy.ufunc.reduceat"). Each of these methods requires a setup command followed by a loop. There are four loop styles possible for the methods corresponding to no-elements, one-element, strided-loop, and buffered-loop. These are the same basic loop styles as implemented for the general-purpose function call except for the no-element and one-element cases which are special-cases occurring when the input array objects have 0 and 1 elements respectively.

#### Setup

The setup function for all three methods is `construct_reduce`. This function creates a reducing loop object and fills it with the parameters needed to complete the loop. All of the methods only work on ufuncs that take 2 inputs and return 1 output. Therefore, the underlying 1-D loop is selected assuming a signature of `[otype, otype, otype]` where `otype` is the requested reduction datatype.
The buffer size and error handling are then retrieved from (per-thread) global storage. For small arrays that are misaligned or have incorrect datatype, a copy is made so that the un-buffered section of code is used. Then, the looping strategy is selected. If there is 1 element or 0 elements in the array, then a simple looping method is selected. If the array is not misaligned and has the correct datatype, then strided looping is selected. Otherwise, buffered looping must be performed. Looping parameters are then established, and the return array is constructed. The output array is of a different [shape](../glossary#term-shape) depending on whether the method is [`reduce`](../reference/generated/numpy.ufunc.reduce#numpy.ufunc.reduce "numpy.ufunc.reduce"), [`accumulate`](../reference/generated/numpy.ufunc.accumulate#numpy.ufunc.accumulate "numpy.ufunc.accumulate"), or [`reduceat`](../reference/generated/numpy.ufunc.reduceat#numpy.ufunc.reduceat "numpy.ufunc.reduceat"). If an output array is already provided, then its shape is checked. If the output array is not C-contiguous, aligned, and of the correct data type, then a temporary copy is made with the [`NPY_ARRAY_WRITEBACKIFCOPY`](../reference/c-api/array#c.NPY_ARRAY_WRITEBACKIFCOPY "NPY_ARRAY_WRITEBACKIFCOPY") flag set. In this way, the methods will be able to work with a well-behaved output array but the result will be copied back into the true output array when [`PyArray_ResolveWritebackIfCopy`](../reference/c-api/array#c.PyArray_ResolveWritebackIfCopy "PyArray_ResolveWritebackIfCopy") is called at function completion. Finally, iterators are set up to loop over the correct [axis](../glossary#term-axis) (depending on the value of axis provided to the method) and the setup routine returns to the actual computation routine. 
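The differing output shapes of the three methods can be seen directly from Python:

```python
import numpy as np

a = np.arange(6).reshape(2, 3)

# reduce removes the reduction axis entirely:
print(np.add.reduce(a, axis=0))       # shape (3,): [3 5 7]

# accumulate keeps the full input shape:
print(np.add.accumulate(a, axis=0))   # shape (2, 3)

# reduceat produces one result per supplied index (a reduce over each range):
print(np.add.reduceat(np.arange(8), [0, 4, 6]))   # [ 6  9 13]
```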
#### [`Reduce`](../reference/generated/numpy.ufunc.reduce#numpy.ufunc.reduce "numpy.ufunc.reduce") All of the ufunc methods use the same underlying 1-D computational loops with input and output arguments adjusted so that the appropriate reduction takes place. For example, the key to the functioning of [`reduce`](../reference/generated/numpy.ufunc.reduce#numpy.ufunc.reduce "numpy.ufunc.reduce") is that the 1-D loop is called with the output and the second input pointing to the same position in memory and both having a step-size of 0. The first input is pointing to the input array with a step-size given by the appropriate stride for the selected axis. In this way, the operation performed is \begin{align*} o & = & i[0] \\ o & = & i[k]\textrm{<op>}o\quad k=1\ldots N \end{align*} where \(N+1\) is the number of elements in the input, \(i\), \(o\) is the output, and \(i[k]\) is the \(k^{\textrm{th}}\) element of \(i\) along the selected axis. This basic operation is repeated for arrays with greater than 1 dimension so that the reduction takes place for every 1-D sub-array along the selected axis. An iterator with the selected dimension removed handles this looping. For buffered loops, care must be taken to copy and cast data before the loop function is called because the underlying loop expects aligned data of the correct datatype (including byteorder). The buffered loop must handle this copying and casting prior to calling the loop function on chunks no greater than the user-specified `bufsize`. #### [`Accumulate`](../reference/generated/numpy.ufunc.accumulate#numpy.ufunc.accumulate "numpy.ufunc.accumulate") The [`accumulate`](../reference/generated/numpy.ufunc.accumulate#numpy.ufunc.accumulate "numpy.ufunc.accumulate") method is very similar to the [`reduce`](../reference/generated/numpy.ufunc.reduce#numpy.ufunc.reduce "numpy.ufunc.reduce") method in that the output and the second input both point to the output. 
The difference is that the second input points to memory one stride behind the current output pointer. Thus, the operation performed is \begin{align*} o[0] & = & i[0] \\ o[k] & = & i[k]\textrm{<op>}o[k-1]\quad k=1\ldots N. \end{align*} The output has the same shape as the input and each 1-D loop operates over \(N\) elements when the shape in the selected axis is \(N+1\). Again, buffered loops take care to copy and cast the data before calling the underlying 1-D computational loop. #### [`Reduceat`](../reference/generated/numpy.ufunc.reduceat#numpy.ufunc.reduceat "numpy.ufunc.reduceat") The [`reduceat`](../reference/generated/numpy.ufunc.reduceat#numpy.ufunc.reduceat "numpy.ufunc.reduceat") function is a generalization of both the [`reduce`](../reference/generated/numpy.ufunc.reduce#numpy.ufunc.reduce "numpy.ufunc.reduce") and [`accumulate`](../reference/generated/numpy.ufunc.accumulate#numpy.ufunc.accumulate "numpy.ufunc.accumulate") functions. It implements a [`reduce`](../reference/generated/numpy.ufunc.reduce#numpy.ufunc.reduce "numpy.ufunc.reduce") over ranges of the input array specified by indices. The extra indices argument is checked to be sure that every index is not too large for the input array along the selected dimension before the loop calculations take place. The loop implementation is handled using code that is very similar to the [`reduce`](../reference/generated/numpy.ufunc.reduce#numpy.ufunc.reduce "numpy.ufunc.reduce") code repeated as many times as there are elements in the indices input. In particular, the first input pointer passed to the underlying 1-D computational loop points to the input array at the correct location indicated by the index array. In addition, the output pointer and the second input pointer passed to the underlying 1-D loop point to the same position in memory.
The size of the 1-D computational loop is fixed to be the difference between the current index and the next index (when the current index is the last index, then the next index is assumed to be the length of the array along the selected dimension). In this way, the 1-D loop will implement a [`reduce`](../reference/generated/numpy.ufunc.reduce#numpy.ufunc.reduce "numpy.ufunc.reduce") over the specified indices. Misaligned data, or a loop datatype that does not match the input and/or output datatype, is handled using buffered code wherein data is copied to a temporary buffer and cast to the correct datatype if necessary prior to calling the underlying 1-D function. The temporary buffers are created in (element) sizes no bigger than the user-settable buffer-size value. Thus, the loop must be flexible enough to call the underlying 1-D computational loop enough times to complete the total calculation in chunks no bigger than the buffer-size. © 2005–2022 NumPy Developers Licensed under the 3-clause BSD License. <https://numpy.org/doc/1.23/dev/internals.code-explanations.html> Memory Alignment ================ NumPy alignment goals --------------------- There are three use-cases related to memory alignment in NumPy (as of 1.14): 1. Creating [structured datatypes](../glossary#term-structured-data-type) with [fields](../glossary#term-field) aligned like in a C-struct. 2. Speeding up copy operations by using [`uint`](../reference/arrays.scalars#numpy.uint "numpy.uint") assignment instead of `memcpy`. 3. Guaranteeing safe aligned access for ufuncs/setitem/casting code. NumPy uses two different forms of alignment to achieve these goals: “True alignment” and “Uint alignment”. “True” alignment refers to the architecture-dependent alignment of an equivalent C-type in C. For example, on x64 systems [`float64`](../reference/arrays.scalars#numpy.float64 "numpy.float64") is equivalent to `double` in C.
On most systems, this has either an alignment of 4 or 8 bytes (and this can be controlled in GCC by the option `-malign-double`). A variable is aligned in memory if its memory offset is a multiple of its alignment. On some systems (e.g., sparc) memory alignment is required; on others, it gives a speedup. “Uint” alignment depends on the size of a datatype. It is defined to be the “True alignment” of the uint used by NumPy’s copy-code to copy the datatype, or undefined/unaligned if there is no equivalent uint. Currently, NumPy uses `uint8`, `uint16`, `uint32`, `uint64`, and `uint64` to copy data of size 1, 2, 4, 8, and 16 bytes respectively (a 16-byte item is copied at `uint64` alignment, so its uint alignment is 8), and datatypes of any other size cannot be uint-aligned. For example, on a (typical Linux x64 GCC) system, the NumPy [`complex64`](../reference/arrays.scalars#numpy.complex64 "numpy.complex64") datatype is implemented as `struct { float real, imag; }`. This has “true” alignment of 4 and “uint” alignment of 8 (equal to the true alignment of `uint64`). Some cases where uint and true alignment are different (default GCC Linux): | arch | type | true-aln | uint-aln | | --- | --- | --- | --- | | x86_64 | complex64 | 4 | 8 | | x86_64 | float128 | 16 | 8 | | x86 | float96 | 4 | - | Variables in NumPy which control and describe alignment ------------------------------------------------------- There are four relevant uses of the word `align` in NumPy: * The [`dtype.alignment`](../reference/generated/numpy.dtype.alignment#numpy.dtype.alignment "numpy.dtype.alignment") attribute (`descr->alignment` in C). This is meant to reflect the “true alignment” of the type. It has arch-dependent default values for all datatypes, except for the structured types created with `align=True` as described below. * The `ALIGNED` flag of an ndarray, computed in `IsAligned` and checked by [`PyArray_ISALIGNED`](../reference/c-api/array#c.PyArray_ISALIGNED "PyArray_ISALIGNED").
This is computed from [`dtype.alignment`](../reference/generated/numpy.dtype.alignment#numpy.dtype.alignment "numpy.dtype.alignment"). It is set to `True` if every item in the array is at a memory location consistent with [`dtype.alignment`](../reference/generated/numpy.dtype.alignment#numpy.dtype.alignment "numpy.dtype.alignment"), which is the case if the `data ptr` and all strides of the array are multiples of that alignment. * The `align` keyword of the dtype constructor, which only affects [Structured arrays](../user/basics.rec#structured-arrays). If the structure’s field offsets are not manually provided, NumPy determines offsets automatically. In that case, `align=True` pads the structure so that each field is “true” aligned in memory and sets [`dtype.alignment`](../reference/generated/numpy.dtype.alignment#numpy.dtype.alignment "numpy.dtype.alignment") to be the largest of the field “true” alignments. This is like what C-structs usually do. Otherwise if offsets or itemsize were manually provided `align=True` simply checks that all the fields are “true” aligned and that the total itemsize is a multiple of the largest field alignment. In either case [`dtype.isalignedstruct`](../reference/generated/numpy.dtype.isalignedstruct#numpy.dtype.isalignedstruct "numpy.dtype.isalignedstruct") is also set to True. * `IsUintAligned` is used to determine if an ndarray is “uint aligned” in an analogous way to how `IsAligned` checks for true alignment. Consequences of alignment ------------------------- Here is how the variables above are used: 1. Creating aligned structs: To know how to offset a field when `align=True`, NumPy looks up `field.dtype.alignment`. This includes fields that are nested structured arrays. 2. Ufuncs: If the `ALIGNED` flag of an array is False, ufuncs will buffer/cast the array before evaluation. This is needed since ufunc inner loops access raw elements directly, which might fail on some archs if the elements are not true-aligned. 3. 
Getitem/setitem/copyswap function: Similar to ufuncs, these functions generally have two code paths. If `ALIGNED` is False they will use a code path that buffers the arguments so they are true-aligned. 4. Strided copy code: Here, “uint alignment” is used instead. If the itemsize of an array is equal to 1, 2, 4, 8 or 16 bytes and the array is uint aligned then instead NumPy will do `*(uintN*)dst = *(uintN*)src` for the appropriate N. Otherwise, NumPy copies by doing `memcpy(dst, src, N)`. 5. Nditer code: Since this often calls the strided copy code, it must check for “uint alignment”. 6. Cast code: This checks for “true” alignment, as it does `*dst = CASTFUNC(*src)` if aligned. Otherwise, it does `memmove(srcval, src); dstval = CASTFUNC(srcval); memmove(dst, dstval)` where dstval/srcval are aligned. Note that the strided-copy and strided-cast code are deeply intertwined and so any arrays being processed by them must be both uint and true aligned, even though the copy-code only needs uint alignment and the cast code only true alignment. If there is ever a big rewrite of this code it would be good to allow them to use different alignments. <https://numpy.org/doc/1.23/dev/alignment.html> The numpy.ma module =================== Rationale --------- Masked arrays are arrays that may have missing or invalid entries. The [`numpy.ma`](#module-numpy.ma "numpy.ma") module provides a nearly work-alike replacement for numpy that supports data arrays with masks. What is a masked array? ----------------------- In many circumstances, datasets can be incomplete or tainted by the presence of invalid data. For example, a sensor may have failed to record a value, or recorded an invalid one. The [`numpy.ma`](#module-numpy.ma "numpy.ma") module provides a convenient way to address this issue by introducing masked arrays.
A masked array is the combination of a standard [`numpy.ndarray`](generated/numpy.ndarray#numpy.ndarray "numpy.ndarray") and a mask. A mask is either [`nomask`](maskedarray.baseclass#numpy.ma.nomask "numpy.ma.nomask"), indicating that no value of the associated array is invalid, or an array of booleans that determines for each element of the associated array whether the value is valid or not. When an element of the mask is `False`, the corresponding element of the associated array is valid and is said to be unmasked. When an element of the mask is `True`, the corresponding element of the associated array is said to be masked (invalid). The package ensures that masked entries are not used in computations. As an illustration, let’s consider the following dataset: ``` >>> import numpy as np >>> import numpy.ma as ma >>> x = np.array([1, 2, 3, -1, 5]) ``` We wish to mark the fourth entry as invalid. The easiest way is to create a masked array: ``` >>> mx = ma.masked_array(x, mask=[0, 0, 0, 1, 0]) ``` We can now compute the mean of the dataset, without taking the invalid data into account: ``` >>> mx.mean() 2.75 ``` The numpy.ma module ------------------- The main feature of the [`numpy.ma`](#module-numpy.ma "numpy.ma") module is the [`MaskedArray`](maskedarray.baseclass#numpy.ma.MaskedArray "numpy.ma.MaskedArray") class, which is a subclass of [`numpy.ndarray`](generated/numpy.ndarray#numpy.ndarray "numpy.ndarray"). The class, its attributes and methods are described in more detail in the [MaskedArray class](maskedarray.baseclass#maskedarray-baseclass) section.
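Continuing the illustration above, the invalid entry can also be replaced or dropped outright using the standard `numpy.ma` methods `filled` and `compressed`:

```
import numpy as np
import numpy.ma as ma

x = np.array([1, 2, 3, -1, 5])
mx = ma.masked_array(x, mask=[0, 0, 0, 1, 0])

print(mx.filled(0))     # masked entry replaced by 0: [1 2 3 0 5]
print(mx.compressed())  # only the valid entries:     [1 2 3 5]
print(mx.mean())        # 2.75 -- the masked entry is ignored
```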
The [`numpy.ma`](#module-numpy.ma "numpy.ma") module can be used as an addition to [`numpy`](index#module-numpy "numpy"): ``` >>> import numpy as np >>> import numpy.ma as ma ``` To create an array with the second element invalid, we would do: ``` >>> y = ma.array([1, 2, 3], mask = [0, 1, 0]) ``` To create a masked array where all values close to 1.e20 are invalid, we would do: ``` >>> z = ma.masked_values([1.0, 1.e20, 3.0, 4.0], 1.e20) ``` For a complete discussion of creation methods for masked arrays please see the section [Constructing masked arrays](#maskedarray-generic-constructing). <https://numpy.org/doc/1.23/reference/maskedarray.generic.html> Constants of the numpy.ma module ================================ In addition to the [`MaskedArray`](#numpy.ma.MaskedArray "numpy.ma.MaskedArray") class, the [`numpy.ma`](maskedarray.generic#module-numpy.ma "numpy.ma") module defines several constants. numpy.ma.masked The [`masked`](#numpy.ma.masked "numpy.ma.masked") constant is a special case of [`MaskedArray`](#numpy.ma.MaskedArray "numpy.ma.MaskedArray"), with a float datatype and a null shape. It is used to test whether a specific entry of a masked array is masked, or to mask one or several entries of a masked array: ``` >>> x = ma.array([1, 2, 3], mask=[0, 1, 0]) >>> x[1] is ma.masked True >>> x[-1] = ma.masked >>> x masked_array(data=[1, --, --], mask=[False, True, True], fill_value=999999) ``` numpy.ma.nomask Value indicating that a masked array has no invalid entry. [`nomask`](#numpy.ma.nomask "numpy.ma.nomask") is used internally to speed up computations when the mask is not needed. It is represented internally as `np.False_`. numpy.ma.masked_print_option String used in lieu of missing data when a masked array is printed. By default, this string is `'--'`.
<https://numpy.org/doc/1.23/reference/maskedarray.baseclass.htmlnumpy.ufunc =========== *class*numpy.ufunc[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/__init__.py) Functions that operate element by element on whole arrays. To see the documentation for a specific ufunc, use [`info`](numpy.info#numpy.info "numpy.info"). For example, `np.info(np.sin)`. Because ufuncs are written in C (for speed) and linked into Python with NumPy’s ufunc facility, Python’s help() function finds this page whenever help() is called on a ufunc. A detailed explanation of ufuncs can be found in the docs for [Universal functions (ufunc)](../ufuncs#ufuncs). **Calling ufuncs:** `op(*x[, out], where=True, **kwargs)` Apply `op` to the arguments `*x` elementwise, broadcasting the arguments. The broadcasting rules are: * Dimensions of length 1 may be prepended to either array. * Arrays may be repeated along dimensions of length 1. Parameters ***x**array_like Input arrays. **out**ndarray, None, or tuple of ndarray and None, optional Alternate array object(s) in which to put the result; if provided, it must have a shape that the inputs broadcast to. A tuple of arrays (possible only as a keyword argument) must have length equal to the number of outputs; use None for uninitialized outputs to be allocated by the ufunc. **where**array_like, optional This condition is broadcast over the input. At locations where the condition is True, the `out` array will be set to the ufunc result. Elsewhere, the `out` array will retain its original value. Note that if an uninitialized `out` array is created via the default `out=None`, locations within it where the condition is False will remain uninitialized. ****kwargs** For other keyword-only arguments, see the [ufunc docs](../ufuncs#ufuncs-kwargs). Returns **r**ndarray or tuple of ndarray `r` will have the shape that the arrays in `x` broadcast to; if `out` is provided, it will be returned. 
If not, `r` will be allocated and may contain uninitialized values. If the function has more than one output, then the result will be a tuple of arrays. Attributes [`identity`](numpy.identity#numpy.identity "numpy.identity") The identity value. [`nargs`](numpy.ufunc.nargs#numpy.ufunc.nargs "numpy.ufunc.nargs") The number of arguments. [`nin`](numpy.ufunc.nin#numpy.ufunc.nin "numpy.ufunc.nin") The number of inputs. [`nout`](numpy.ufunc.nout#numpy.ufunc.nout "numpy.ufunc.nout") The number of outputs. [`ntypes`](numpy.ufunc.ntypes#numpy.ufunc.ntypes "numpy.ufunc.ntypes") The number of types. [`signature`](numpy.ufunc.signature#numpy.ufunc.signature "numpy.ufunc.signature") Definition of the core elements a generalized ufunc operates on. [`types`](numpy.ufunc.types#numpy.ufunc.types "numpy.ufunc.types") Returns a list with types grouped input->output. #### Methods | | | | --- | --- | | [`__call__`](numpy.ufunc.__call__#numpy.ufunc.__call__ "numpy.ufunc.__call__")(*args, **kwargs) | Call self as a function. | | [`accumulate`](numpy.ufunc.accumulate#numpy.ufunc.accumulate "numpy.ufunc.accumulate")(array[, axis, dtype, out]) | Accumulate the result of applying the operator to all elements. | | [`at`](numpy.ufunc.at#numpy.ufunc.at "numpy.ufunc.at")(a, indices[, b]) | Performs unbuffered in place operation on operand 'a' for elements specified by 'indices'. | | [`outer`](numpy.ufunc.outer#numpy.ufunc.outer "numpy.ufunc.outer")(A, B, /, **kwargs) | Apply the ufunc `op` to all pairs (a, b) with a in `A` and b in `B`. | | [`reduce`](numpy.ufunc.reduce#numpy.ufunc.reduce "numpy.ufunc.reduce")(array[, axis, dtype, out, keepdims, ...]) | Reduces [`array`](numpy.array#numpy.array "numpy.array")'s dimension by one, by applying ufunc along one axis. | | [`reduceat`](numpy.ufunc.reduceat#numpy.ufunc.reduceat "numpy.ufunc.reduceat")(array, indices[, axis, dtype, out]) | Performs a (local) reduce with specified slices over a single axis. 
| © 2005–2022 NumPy Developers Licensed under the 3-clause BSD License. <https://numpy.org/doc/1.23/reference/generated/numpy.ufunc.htmlnumpy.ufunc.nin =============== attribute ufunc.nin The number of inputs. Data attribute containing the number of arguments the ufunc treats as input. #### Examples ``` >>> np.add.nin 2 >>> np.multiply.nin 2 >>> np.power.nin 2 >>> np.exp.nin 1 ``` © 2005–2022 NumPy Developers Licensed under the 3-clause BSD License. <https://numpy.org/doc/1.23/reference/generated/numpy.ufunc.nin.htmlnumpy.ufunc.nout ================ attribute ufunc.nout The number of outputs. Data attribute containing the number of arguments the ufunc treats as output. #### Notes Since all ufuncs can take output arguments, this will always be (at least) 1. #### Examples ``` >>> np.add.nout 1 >>> np.multiply.nout 1 >>> np.power.nout 1 >>> np.exp.nout 1 ``` © 2005–2022 NumPy Developers Licensed under the 3-clause BSD License. <https://numpy.org/doc/1.23/reference/generated/numpy.ufunc.nout.htmlnumpy.ufunc.nargs ================= attribute ufunc.nargs The number of arguments. Data attribute containing the number of arguments the ufunc takes, including optional ones. #### Notes Typically this value will be one more than what you might expect because all ufuncs take the optional “out” argument. #### Examples ``` >>> np.add.nargs 3 >>> np.multiply.nargs 3 >>> np.power.nargs 3 >>> np.exp.nargs 2 ``` © 2005–2022 NumPy Developers Licensed under the 3-clause BSD License. <https://numpy.org/doc/1.23/reference/generated/numpy.ufunc.nargs.htmlnumpy.ufunc.ntypes ================== attribute ufunc.ntypes The number of types. The number of numerical NumPy types - of which there are 18 total - on which the ufunc can operate. 
See also [`numpy.ufunc.types`](numpy.ufunc.types#numpy.ufunc.types "numpy.ufunc.types") #### Examples ``` >>> np.add.ntypes 18 >>> np.multiply.ntypes 18 >>> np.power.ntypes 17 >>> np.exp.ntypes 7 >>> np.remainder.ntypes 14 ``` © 2005–2022 NumPy Developers Licensed under the 3-clause BSD License. <https://numpy.org/doc/1.23/reference/generated/numpy.ufunc.ntypes.htmlnumpy.ufunc.types ================= attribute ufunc.types Returns a list with types grouped input->output. Data attribute listing the data-type “Domain-Range” groupings the ufunc can deliver. The data-types are given using the character codes. See also [`numpy.ufunc.ntypes`](numpy.ufunc.ntypes#numpy.ufunc.ntypes "numpy.ufunc.ntypes") #### Examples ``` >>> np.add.types ['??->?', 'bb->b', 'BB->B', 'hh->h', 'HH->H', 'ii->i', 'II->I', 'll->l', 'LL->L', 'qq->q', 'QQ->Q', 'ff->f', 'dd->d', 'gg->g', 'FF->F', 'DD->D', 'GG->G', 'OO->O'] ``` ``` >>> np.multiply.types ['??->?', 'bb->b', 'BB->B', 'hh->h', 'HH->H', 'ii->i', 'II->I', 'll->l', 'LL->L', 'qq->q', 'QQ->Q', 'ff->f', 'dd->d', 'gg->g', 'FF->F', 'DD->D', 'GG->G', 'OO->O'] ``` ``` >>> np.power.types ['bb->b', 'BB->B', 'hh->h', 'HH->H', 'ii->i', 'II->I', 'll->l', 'LL->L', 'qq->q', 'QQ->Q', 'ff->f', 'dd->d', 'gg->g', 'FF->F', 'DD->D', 'GG->G', 'OO->O'] ``` ``` >>> np.exp.types ['f->f', 'd->d', 'g->g', 'F->F', 'D->D', 'G->G', 'O->O'] ``` ``` >>> np.remainder.types ['bb->b', 'BB->B', 'hh->h', 'HH->H', 'ii->i', 'II->I', 'll->l', 'LL->L', 'qq->q', 'QQ->Q', 'ff->f', 'dd->d', 'gg->g', 'OO->O'] ``` © 2005–2022 NumPy Developers Licensed under the 3-clause BSD License. <https://numpy.org/doc/1.23/reference/generated/numpy.ufunc.types.htmlnumpy.ufunc.identity ==================== attribute ufunc.identity The identity value. Data attribute containing the identity element for the ufunc, if it has one. If it does not, the attribute value is None. 
#### Examples ``` >>> np.add.identity 0 >>> np.multiply.identity 1 >>> np.power.identity 1 >>> print(np.exp.identity) None ``` © 2005–2022 NumPy Developers Licensed under the 3-clause BSD License. <https://numpy.org/doc/1.23/reference/generated/numpy.ufunc.identity.htmlnumpy.ufunc.signature ===================== attribute ufunc.signature Definition of the core elements a generalized ufunc operates on. The signature determines how the dimensions of each input/output array are split into core and loop dimensions: 1. Each dimension in the signature is matched to a dimension of the corresponding passed-in array, starting from the end of the shape tuple. 2. Core dimensions assigned to the same label in the signature must have exactly matching sizes, no broadcasting is performed. 3. The core dimensions are removed from all inputs and the remaining dimensions are broadcast together, defining the loop dimensions. #### Notes Generalized ufuncs are used internally in many linalg functions, and in the testing suite; the examples below are taken from these. For ufuncs that operate on scalars, the signature is None, which is equivalent to ‘()’ for every argument. #### Examples ``` >>> np.core.umath_tests.matrix_multiply.signature '(m,n),(n,p)->(m,p)' >>> np.linalg._umath_linalg.det.signature '(m,m)->()' >>> np.add.signature is None True # equivalent to '(),()->()' ``` © 2005–2022 NumPy Developers Licensed under the 3-clause BSD License. <https://numpy.org/doc/1.23/reference/generated/numpy.ufunc.signature.htmlnumpy.ufunc.reduce ================== method ufunc.reduce(*array*, *axis=0*, *dtype=None*, *out=None*, *keepdims=False*, *initial=<no value>*, *where=True*) Reduces [`array`](numpy.array#numpy.array "numpy.array")’s dimension by one, by applying ufunc along one axis. Let \(array.shape = (N_0, ..., N_i, ..., N_{M-1})\). 
Then \(ufunc.reduce(array, axis=i)[k_0, ..,k_{i-1}, k_{i+1}, .., k_{M-1}]\) = the result of iterating `j` over \(range(N_i)\), cumulatively applying ufunc to each \(array[k_0, ..,k_{i-1}, j, k_{i+1}, .., k_{M-1}]\). For a one-dimensional array, reduce produces results equivalent to: ``` r = op.identity # op = ufunc for i in range(len(A)): r = op(r, A[i]) return r ``` For example, add.reduce() is equivalent to sum(). Parameters **array**array_like The array to act on. **axis**None or int or tuple of ints, optional Axis or axes along which a reduction is performed. The default (`axis` = 0) is perform a reduction over the first dimension of the input array. `axis` may be negative, in which case it counts from the last to the first axis. New in version 1.7.0. If this is None, a reduction is performed over all the axes. If this is a tuple of ints, a reduction is performed on multiple axes, instead of a single axis or all the axes as before. For operations which are either not commutative or not associative, doing a reduction over multiple axes is not well-defined. The ufuncs do not currently raise an exception in this case, but will likely do so in the future. **dtype**data-type code, optional The type used to represent the intermediate results. Defaults to the data-type of the output array if this is provided, or the data-type of the input array if no output array is provided. **out**ndarray, None, or tuple of ndarray and None, optional A location into which the result is stored. If not provided or None, a freshly-allocated array is returned. For consistency with `ufunc.__call__`, if given as a keyword, this may be wrapped in a 1-element tuple. Changed in version 1.13.0: Tuples are allowed for keyword argument. **keepdims**bool, optional If this is set to True, the axes which are reduced are left in the result as dimensions with size one. With this option, the result will broadcast correctly against the original [`array`](numpy.array#numpy.array "numpy.array"). 
New in version 1.7.0. **initial**scalar, optional The value with which to start the reduction. If the ufunc has no identity or the dtype is object, this defaults to None - otherwise it defaults to ufunc.identity. If `None` is given, the first element of the reduction is used, and an error is thrown if the reduction is empty. New in version 1.15.0. **where**array_like of bool, optional A boolean array which is broadcasted to match the dimensions of [`array`](numpy.array#numpy.array "numpy.array"), and selects elements to include in the reduction. Note that for ufuncs like `minimum` that do not have an identity defined, one has to pass in also `initial`. New in version 1.17.0. Returns **r**ndarray The reduced array. If `out` was supplied, `r` is a reference to it. #### Examples ``` >>> np.multiply.reduce([2,3,5]) 30 ``` A multi-dimensional array example: ``` >>> X = np.arange(8).reshape((2,2,2)) >>> X array([[[0, 1], [2, 3]], [[4, 5], [6, 7]]]) >>> np.add.reduce(X, 0) array([[ 4, 6], [ 8, 10]]) >>> np.add.reduce(X) # confirm: default axis value is 0 array([[ 4, 6], [ 8, 10]]) >>> np.add.reduce(X, 1) array([[ 2, 4], [10, 12]]) >>> np.add.reduce(X, 2) array([[ 1, 5], [ 9, 13]]) ``` You can use the `initial` keyword argument to initialize the reduction with a different value, and `where` to select specific elements to include: ``` >>> np.add.reduce([10], initial=5) 15 >>> np.add.reduce(np.ones((2, 2, 2)), axis=(0, 2), initial=10) array([14., 14.]) >>> a = np.array([10., np.nan, 10]) >>> np.add.reduce(a, where=~np.isnan(a)) 20.0 ``` Allows reductions of empty arrays where they would normally fail, i.e. for ufuncs without an identity. ``` >>> np.minimum.reduce([], initial=np.inf) inf >>> np.minimum.reduce([[1., 2.], [3., 4.]], initial=10., where=[True, False]) array([ 1., 10.]) >>> np.minimum.reduce([]) Traceback (most recent call last): ... 
ValueError: zero-size array to reduction operation minimum which has no identity ``` © 2005–2022 NumPy Developers Licensed under the 3-clause BSD License. <https://numpy.org/doc/1.23/reference/generated/numpy.ufunc.reduce.htmlnumpy.ufunc.accumulate ====================== method ufunc.accumulate(*array*, *axis=0*, *dtype=None*, *out=None*) Accumulate the result of applying the operator to all elements. For a one-dimensional array, accumulate produces results equivalent to: ``` r = np.empty(len(A)) t = op.identity # op = the ufunc being applied to A's elements for i in range(len(A)): t = op(t, A[i]) r[i] = t return r ``` For example, add.accumulate() is equivalent to np.cumsum(). For a multi-dimensional array, accumulate is applied along only one axis (axis zero by default; see Examples below) so repeated use is necessary if one wants to accumulate over multiple axes. Parameters **array**array_like The array to act on. **axis**int, optional The axis along which to apply the accumulation; default is zero. **dtype**data-type code, optional The data-type used to represent the intermediate results. Defaults to the data-type of the output array if such is provided, or the data-type of the input array if no output array is provided. **out**ndarray, None, or tuple of ndarray and None, optional A location into which the result is stored. If not provided or None, a freshly-allocated array is returned. For consistency with `ufunc.__call__`, if given as a keyword, this may be wrapped in a 1-element tuple. Changed in version 1.13.0: Tuples are allowed for keyword argument. Returns **r**ndarray The accumulated values. If `out` was supplied, `r` is a reference to `out`. 
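The pseudocode above can be made executable to check the equivalence; the helper name `accumulate_py` is our own, and it assumes the ufunc has an identity:

```
import numpy as np

def accumulate_py(op, A):
    # Executable version of the pseudocode above.
    r = np.empty(len(A), dtype=np.result_type(A))
    t = op.identity            # e.g. 0 for np.add, 1 for np.multiply
    for i in range(len(A)):
        t = op(t, A[i])
        r[i] = t
    return r

A = np.array([2, 3, 5])
print(accumulate_py(np.add, A))       # [ 2  5 10]
print(np.add.accumulate(A))           # [ 2  5 10]
print(accumulate_py(np.multiply, A))  # [ 2  6 30]
```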
#### Examples 1-D array examples: ``` >>> np.add.accumulate([2, 3, 5]) array([ 2, 5, 10]) >>> np.multiply.accumulate([2, 3, 5]) array([ 2, 6, 30]) ``` 2-D array examples: ``` >>> I = np.eye(2) >>> I array([[1., 0.], [0., 1.]]) ``` Accumulate along axis 0 (rows), down columns: ``` >>> np.add.accumulate(I, 0) array([[1., 0.], [1., 1.]]) >>> np.add.accumulate(I) # no axis specified = axis zero array([[1., 0.], [1., 1.]]) ``` Accumulate along axis 1 (columns), through rows: ``` >>> np.add.accumulate(I, 1) array([[1., 1.], [0., 1.]]) ``` © 2005–2022 NumPy Developers Licensed under the 3-clause BSD License. <https://numpy.org/doc/1.23/reference/generated/numpy.ufunc.accumulate.htmlnumpy.ufunc.reduceat ==================== method ufunc.reduceat(*array*, *indices*, *axis=0*, *dtype=None*, *out=None*) Performs a (local) reduce with specified slices over a single axis. For i in `range(len(indices))`, [`reduceat`](#numpy.ufunc.reduceat "numpy.ufunc.reduceat") computes `ufunc.reduce(array[indices[i]:indices[i+1]])`, which becomes the i-th generalized “row” parallel to `axis` in the final result (i.e., in a 2-D array, for example, if `axis = 0`, it becomes the i-th row, but if `axis = 1`, it becomes the i-th column). There are three exceptions to this: * when `i = len(indices) - 1` (so for the last index), `indices[i+1] = array.shape[axis]`. * if `indices[i] >= indices[i + 1]`, the i-th generalized “row” is simply `array[indices[i]]`. * if `indices[i] >= len(array)` or `indices[i] < 0`, an error is raised. The shape of the output depends on the size of [`indices`](numpy.indices#numpy.indices "numpy.indices"), and may be larger than [`array`](numpy.array#numpy.array "numpy.array") (this happens if `len(indices) > array.shape[axis]`). Parameters **array**array_like The array to act on. **indices**array_like Paired indices, comma separated (not colon), specifying slices to reduce. **axis**int, optional The axis along which to apply the reduceat. 
**dtype**data-type code, optional The type used to represent the intermediate results. Defaults to the data type of the output array if this is provided, or the data type of the input array if no output array is provided. **out**ndarray, None, or tuple of ndarray and None, optional A location into which the result is stored. If not provided or None, a freshly-allocated array is returned. For consistency with `ufunc.__call__`, if given as a keyword, this may be wrapped in a 1-element tuple. Changed in version 1.13.0: Tuples are allowed for keyword argument. Returns **r**ndarray The reduced values. If `out` was supplied, `r` is a reference to `out`. #### Notes A descriptive example: If [`array`](numpy.array#numpy.array "numpy.array") is 1-D, the function `ufunc.accumulate(array)` is the same as `ufunc.reduceat(array, indices)[::2]` where [`indices`](numpy.indices#numpy.indices "numpy.indices") is `range(len(array) - 1)` with a zero placed in every other element: `indices = zeros(2 * len(array) - 1)`, `indices[1::2] = range(1, len(array))`. Don’t be fooled by this attribute’s name: `reduceat(array)` is not necessarily smaller than [`array`](numpy.array#numpy.array "numpy.array"). 
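The relationship described in the note above can be checked directly; the array and the ufunc here are arbitrary choices:

```
import numpy as np

a = np.array([2, 3, 5])

# Build the interleaved index array from the note: zeros in every other
# element, with 1..len(a)-1 at the odd positions.
indices = np.zeros(2 * len(a) - 1, dtype=int)
indices[1::2] = np.arange(1, len(a))

via_reduceat = np.add.reduceat(a, indices)[::2]
via_accumulate = np.add.accumulate(a)

print(indices)        # [0 1 0 2 0]
print(via_reduceat)   # [ 2  5 10]
print(np.array_equal(via_reduceat, via_accumulate))  # True
```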
#### Examples To take the running sum of four successive values: ``` >>> np.add.reduceat(np.arange(8),[0,4, 1,5, 2,6, 3,7])[::2] array([ 6, 10, 14, 18]) ``` A 2-D example: ``` >>> x = np.linspace(0, 15, 16).reshape(4,4) >>> x array([[ 0., 1., 2., 3.], [ 4., 5., 6., 7.], [ 8., 9., 10., 11.], [12., 13., 14., 15.]]) ``` ``` # reduce such that the result has the following five rows: # [row1 + row2 + row3] # [row4] # [row2] # [row3] # [row1 + row2 + row3 + row4] ``` ``` >>> np.add.reduceat(x, [0, 3, 1, 2, 0]) array([[12., 15., 18., 21.], [12., 13., 14., 15.], [ 4., 5., 6., 7.], [ 8., 9., 10., 11.], [24., 28., 32., 36.]]) ``` ``` # reduce such that result has the following two columns: # [col1 * col2 * col3, col4] ``` ``` >>> np.multiply.reduceat(x, [0, 3], 1) array([[ 0., 3.], [ 120., 7.], [ 720., 11.], [2184., 15.]]) ``` © 2005–2022 NumPy Developers Licensed under the 3-clause BSD License. <https://numpy.org/doc/1.23/reference/generated/numpy.ufunc.reduceat.htmlnumpy.ufunc.outer ================= method ufunc.outer(*A*, *B*, */*, ***kwargs*) Apply the ufunc `op` to all pairs (a, b) with a in `A` and b in `B`. Let `M = A.ndim`, `N = B.ndim`. Then the result, `C`, of `op.outer(A, B)` is an array of dimension M + N such that: \[C[i_0, ..., i_{M-1}, j_0, ..., j_{N-1}] = op(A[i_0, ..., i_{M-1}], B[j_0, ..., j_{N-1}])\] For `A` and `B` one-dimensional, this is equivalent to: ``` r = empty(len(A),len(B)) for i in range(len(A)): for j in range(len(B)): r[i,j] = op(A[i], B[j]) # op = ufunc in question ``` Parameters **A**array_like First array **B**array_like Second array **kwargs**any Arguments to pass on to the ufunc. Typically [`dtype`](numpy.dtype#numpy.dtype "numpy.dtype") or `out`. See [`ufunc`](numpy.ufunc#numpy.ufunc "numpy.ufunc") for a comprehensive overview of all available arguments. 
Returns **r**ndarray Output array See also [`numpy.outer`](numpy.outer#numpy.outer "numpy.outer") A less powerful version of `np.multiply.outer` that [`ravel`](numpy.ravel#numpy.ravel "numpy.ravel")s all inputs to 1D. This exists primarily for compatibility with old code. [`tensordot`](numpy.tensordot#numpy.tensordot "numpy.tensordot") `np.tensordot(a, b, axes=((), ()))` and `np.multiply.outer(a, b)` behave the same for all dimensions of a and b. #### Examples ``` >>> np.multiply.outer([1, 2, 3], [4, 5, 6]) array([[ 4, 5, 6], [ 8, 10, 12], [12, 15, 18]]) ``` A multi-dimensional example: ``` >>> A = np.array([[1, 2, 3], [4, 5, 6]]) >>> A.shape (2, 3) >>> B = np.array([[1, 2, 3, 4]]) >>> B.shape (1, 4) >>> C = np.multiply.outer(A, B) >>> C.shape; C (2, 3, 1, 4) array([[[[ 1, 2, 3, 4]], [[ 2, 4, 6, 8]], [[ 3, 6, 9, 12]]], [[[ 4, 8, 12, 16]], [[ 5, 10, 15, 20]], [[ 6, 12, 18, 24]]]]) ``` numpy.ufunc.at ============== method ufunc.at(*a*, *indices*, *b=None*, */*) Performs unbuffered in place operation on operand ‘a’ for elements specified by ‘indices’. For addition ufunc, this method is equivalent to `a[indices] += b`, except that results are accumulated for elements that are indexed more than once. For example, `a[[0,0]] += 1` will only increment the first element once because of buffering, whereas `add.at(a, [0,0], 1)` will increment the first element twice. New in version 1.8.0. Parameters **a**array_like The array to perform in place operation on. **indices**array_like or tuple Array like index object or slice object for indexing into first operand. If first operand has multiple dimensions, indices can be a tuple of array like index objects or slice objects. **b**array_like Second operand for ufuncs requiring two operands. Operand must be broadcastable over first operand after indexing or slicing. 
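The buffering difference described for `ufunc.at` (a repeated index in `a[indices] += b` is only applied once) is easy to demonstrate; a minimal sketch:

```python
import numpy as np

a = np.zeros(3, dtype=int)

# Buffered fancy-indexed addition: the repeated index 0 is read once,
# incremented once, and written back, so it only goes up by 1.
a[[0, 0, 1]] += 1
print(a)  # [1 1 0]

b = np.zeros(3, dtype=int)

# np.add.at is unbuffered: each occurrence of index 0 accumulates.
np.add.at(b, [0, 0, 1], 1)
print(b)  # [2 1 0]
```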
#### Examples Set items 0 and 1 to their negative values: ``` >>> a = np.array([1, 2, 3, 4]) >>> np.negative.at(a, [0, 1]) >>> a array([-1, -2, 3, 4]) ``` Increment items 0 and 1, and increment item 2 twice: ``` >>> a = np.array([1, 2, 3, 4]) >>> np.add.at(a, [0, 1, 2, 2], 1) >>> a array([2, 3, 5, 4]) ``` Add items 0 and 1 in first array to second array, and store results in first array: ``` >>> a = np.array([1, 2, 3, 4]) >>> b = np.array([1, 2]) >>> np.add.at(a, [0, 1], b) >>> a array([2, 4, 3, 4]) ``` numpy.ndarray ============= *class*numpy.ndarray(*shape*, *dtype=float*, *buffer=None*, *offset=0*, *strides=None*, *order=None*)[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/core/src/multiarray/arrayobject.c) An array object represents a multidimensional, homogeneous array of fixed-size items. An associated data-type object describes the format of each element in the array (its byte-order, how many bytes it occupies in memory, whether it is an integer, a floating point number, or something else, etc.) Arrays should be constructed using [`array`](numpy.array#numpy.array "numpy.array"), [`zeros`](numpy.zeros#numpy.zeros "numpy.zeros") or [`empty`](numpy.empty#numpy.empty "numpy.empty") (refer to the See Also section below). The parameters given here refer to a low-level method (`ndarray(...)`) for instantiating an array. For more information, refer to the [`numpy`](../index#module-numpy "numpy") module and examine the methods and attributes of an array. Parameters **(for the __new__ method; see Notes below)** **shape**tuple of ints Shape of created array. **dtype**data-type, optional Any object that can be interpreted as a numpy data type. **buffer**object exposing buffer interface, optional Used to fill the array with data. **offset**int, optional Offset of array data in buffer. **strides**tuple of ints, optional Strides of data in memory. **order**{‘C’, ‘F’}, optional Row-major (C-style) or column-major (Fortran-style) order. See also [`array`](numpy.array#numpy.array "numpy.array") Construct an array. [`zeros`](numpy.zeros#numpy.zeros "numpy.zeros") Create an array, each element of which is zero. [`empty`](numpy.empty#numpy.empty "numpy.empty") Create an array, but leave its allocated memory unchanged (i.e., it contains “garbage”). [`dtype`](numpy.dtype#numpy.dtype "numpy.dtype") Create a data-type. [`numpy.typing.NDArray`](../typing#numpy.typing.NDArray "numpy.typing.NDArray") An ndarray alias [generic](https://docs.python.org/3/glossary.html#term-generic-type "(in Python v3.10)") w.r.t. its [`dtype.type`](numpy.dtype.type#numpy.dtype.type "numpy.dtype.type"). #### Notes There are two modes of creating an array using `__new__`: 1. If `buffer` is None, then only [`shape`](numpy.shape#numpy.shape "numpy.shape"), [`dtype`](numpy.dtype#numpy.dtype "numpy.dtype"), and `order` are used. 2. If `buffer` is an object exposing the buffer interface, then all keywords are interpreted. No `__init__` method is needed because the array is fully initialized after the `__new__` method. #### Examples These examples illustrate the low-level [`ndarray`](#numpy.ndarray "numpy.ndarray") constructor. Refer to the `See Also` section above for easier ways of constructing an ndarray. 
First mode, `buffer` is None: ``` >>> np.ndarray(shape=(2,2), dtype=float, order='F') array([[0.0e+000, 0.0e+000], # random [ nan, 2.5e-323]]) ``` Second mode: ``` >>> np.ndarray((2,), buffer=np.array([1,2,3]), ... offset=np.int_().itemsize, ... dtype=int) # offset = 1*itemsize, i.e. skip first element array([2, 3]) ``` Attributes [`T`](numpy.ndarray.t#numpy.ndarray.T "numpy.ndarray.T")ndarray The transposed array. [`data`](numpy.ndarray.data#numpy.ndarray.data "numpy.ndarray.data")buffer Python buffer object pointing to the start of the array’s data. [`dtype`](numpy.dtype#numpy.dtype "numpy.dtype")dtype object Data-type of the array’s elements. [`flags`](numpy.ndarray.flags#numpy.ndarray.flags "numpy.ndarray.flags")dict Information about the memory layout of the array. [`flat`](numpy.ndarray.flat#numpy.ndarray.flat "numpy.ndarray.flat")numpy.flatiter object A 1-D iterator over the array. [`imag`](numpy.imag#numpy.imag "numpy.imag")ndarray The imaginary part of the array. [`real`](numpy.real#numpy.real "numpy.real")ndarray The real part of the array. [`size`](numpy.ndarray.size#numpy.ndarray.size "numpy.ndarray.size")int Number of elements in the array. [`itemsize`](numpy.ndarray.itemsize#numpy.ndarray.itemsize "numpy.ndarray.itemsize")int Length of one array element in bytes. [`nbytes`](numpy.ndarray.nbytes#numpy.ndarray.nbytes "numpy.ndarray.nbytes")int Total bytes consumed by the elements of the array. [`ndim`](numpy.ndarray.ndim#numpy.ndarray.ndim "numpy.ndarray.ndim")int Number of array dimensions. [`shape`](numpy.shape#numpy.shape "numpy.shape")tuple of ints Tuple of array dimensions. [`strides`](numpy.ndarray.strides#numpy.ndarray.strides "numpy.ndarray.strides")tuple of ints Tuple of bytes to step in each dimension when traversing an array. [`ctypes`](numpy.ndarray.ctypes#numpy.ndarray.ctypes "numpy.ndarray.ctypes")ctypes object An object to simplify the interaction of the array with the ctypes module. 
[`base`](numpy.ndarray.base#numpy.ndarray.base "numpy.ndarray.base")ndarray Base object if memory is from some other object. #### Methods | | | | --- | --- | | [`all`](numpy.ndarray.all#numpy.ndarray.all "numpy.ndarray.all")([axis, out, keepdims, where]) | Returns True if all elements evaluate to True. | | [`any`](numpy.ndarray.any#numpy.ndarray.any "numpy.ndarray.any")([axis, out, keepdims, where]) | Returns True if any of the elements of `a` evaluate to True. | | [`argmax`](numpy.ndarray.argmax#numpy.ndarray.argmax "numpy.ndarray.argmax")([axis, out, keepdims]) | Return indices of the maximum values along the given axis. | | [`argmin`](numpy.ndarray.argmin#numpy.ndarray.argmin "numpy.ndarray.argmin")([axis, out, keepdims]) | Return indices of the minimum values along the given axis. | | [`argpartition`](numpy.ndarray.argpartition#numpy.ndarray.argpartition "numpy.ndarray.argpartition")(kth[, axis, kind, order]) | Returns the indices that would partition this array. | | [`argsort`](numpy.ndarray.argsort#numpy.ndarray.argsort "numpy.ndarray.argsort")([axis, kind, order]) | Returns the indices that would sort this array. | | [`astype`](numpy.ndarray.astype#numpy.ndarray.astype "numpy.ndarray.astype")(dtype[, order, casting, subok, copy]) | Copy of the array, cast to a specified type. | | [`byteswap`](numpy.ndarray.byteswap#numpy.ndarray.byteswap "numpy.ndarray.byteswap")([inplace]) | Swap the bytes of the array elements | | [`choose`](numpy.ndarray.choose#numpy.ndarray.choose "numpy.ndarray.choose")(choices[, out, mode]) | Use an index array to construct a new array from a set of choices. | | [`clip`](numpy.ndarray.clip#numpy.ndarray.clip "numpy.ndarray.clip")([min, max, out]) | Return an array whose values are limited to `[min, max]`. | | [`compress`](numpy.ndarray.compress#numpy.ndarray.compress "numpy.ndarray.compress")(condition[, axis, out]) | Return selected slices of this array along given axis. 
| | [`conj`](numpy.ndarray.conj#numpy.ndarray.conj "numpy.ndarray.conj")() | Complex-conjugate all elements. | | [`conjugate`](numpy.ndarray.conjugate#numpy.ndarray.conjugate "numpy.ndarray.conjugate")() | Return the complex conjugate, element-wise. | | [`copy`](numpy.ndarray.copy#numpy.ndarray.copy "numpy.ndarray.copy")([order]) | Return a copy of the array. | | [`cumprod`](numpy.ndarray.cumprod#numpy.ndarray.cumprod "numpy.ndarray.cumprod")([axis, dtype, out]) | Return the cumulative product of the elements along the given axis. | | [`cumsum`](numpy.ndarray.cumsum#numpy.ndarray.cumsum "numpy.ndarray.cumsum")([axis, dtype, out]) | Return the cumulative sum of the elements along the given axis. | | [`diagonal`](numpy.ndarray.diagonal#numpy.ndarray.diagonal "numpy.ndarray.diagonal")([offset, axis1, axis2]) | Return specified diagonals. | | [`dump`](numpy.ndarray.dump#numpy.ndarray.dump "numpy.ndarray.dump")(file) | Dump a pickle of the array to the specified file. | | [`dumps`](numpy.ndarray.dumps#numpy.ndarray.dumps "numpy.ndarray.dumps")() | Returns the pickle of the array as a string. | | [`fill`](numpy.ndarray.fill#numpy.ndarray.fill "numpy.ndarray.fill")(value) | Fill the array with a scalar value. | | [`flatten`](numpy.ndarray.flatten#numpy.ndarray.flatten "numpy.ndarray.flatten")([order]) | Return a copy of the array collapsed into one dimension. | | [`getfield`](numpy.ndarray.getfield#numpy.ndarray.getfield "numpy.ndarray.getfield")(dtype[, offset]) | Returns a field of the given array as a certain type. | | [`item`](numpy.ndarray.item#numpy.ndarray.item "numpy.ndarray.item")(*args) | Copy an element of an array to a standard Python scalar and return it. 
| | [`itemset`](numpy.ndarray.itemset#numpy.ndarray.itemset "numpy.ndarray.itemset")(*args) | Insert scalar into an array (scalar is cast to array's dtype, if possible) | | [`max`](numpy.ndarray.max#numpy.ndarray.max "numpy.ndarray.max")([axis, out, keepdims, initial, where]) | Return the maximum along a given axis. | | [`mean`](numpy.ndarray.mean#numpy.ndarray.mean "numpy.ndarray.mean")([axis, dtype, out, keepdims, where]) | Returns the average of the array elements along given axis. | | [`min`](numpy.ndarray.min#numpy.ndarray.min "numpy.ndarray.min")([axis, out, keepdims, initial, where]) | Return the minimum along a given axis. | | [`newbyteorder`](numpy.ndarray.newbyteorder#numpy.ndarray.newbyteorder "numpy.ndarray.newbyteorder")([new_order]) | Return the array with the same data viewed with a different byte order. | | [`nonzero`](numpy.ndarray.nonzero#numpy.ndarray.nonzero "numpy.ndarray.nonzero")() | Return the indices of the elements that are non-zero. | | [`partition`](numpy.ndarray.partition#numpy.ndarray.partition "numpy.ndarray.partition")(kth[, axis, kind, order]) | Rearranges the elements in the array in such a way that the value of the element in kth position is in the position it would be in a sorted array. | | [`prod`](numpy.ndarray.prod#numpy.ndarray.prod "numpy.ndarray.prod")([axis, dtype, out, keepdims, initial, ...]) | Return the product of the array elements over the given axis | | [`ptp`](numpy.ndarray.ptp#numpy.ndarray.ptp "numpy.ndarray.ptp")([axis, out, keepdims]) | Peak to peak (maximum - minimum) value along a given axis. | | [`put`](numpy.ndarray.put#numpy.ndarray.put "numpy.ndarray.put")(indices, values[, mode]) | Set `a.flat[n] = values[n]` for all `n` in indices. | | [`ravel`](numpy.ndarray.ravel#numpy.ndarray.ravel "numpy.ndarray.ravel")([order]) | Return a flattened array. | | [`repeat`](numpy.ndarray.repeat#numpy.ndarray.repeat "numpy.ndarray.repeat")(repeats[, axis]) | Repeat elements of an array. 
| | [`reshape`](numpy.ndarray.reshape#numpy.ndarray.reshape "numpy.ndarray.reshape")(shape[, order]) | Returns an array containing the same data with a new shape. | | [`resize`](numpy.ndarray.resize#numpy.ndarray.resize "numpy.ndarray.resize")(new_shape[, refcheck]) | Change shape and size of array in-place. | | [`round`](numpy.ndarray.round#numpy.ndarray.round "numpy.ndarray.round")([decimals, out]) | Return `a` with each element rounded to the given number of decimals. | | [`searchsorted`](numpy.ndarray.searchsorted#numpy.ndarray.searchsorted "numpy.ndarray.searchsorted")(v[, side, sorter]) | Find indices where elements of v should be inserted in a to maintain order. | | [`setfield`](numpy.ndarray.setfield#numpy.ndarray.setfield "numpy.ndarray.setfield")(val, dtype[, offset]) | Put a value into a specified place in a field defined by a data-type. | | [`setflags`](numpy.ndarray.setflags#numpy.ndarray.setflags "numpy.ndarray.setflags")([write, align, uic]) | Set array flags WRITEABLE, ALIGNED, WRITEBACKIFCOPY, respectively. | | [`sort`](numpy.ndarray.sort#numpy.ndarray.sort "numpy.ndarray.sort")([axis, kind, order]) | Sort an array in-place. | | [`squeeze`](numpy.ndarray.squeeze#numpy.ndarray.squeeze "numpy.ndarray.squeeze")([axis]) | Remove axes of length one from `a`. | | [`std`](numpy.ndarray.std#numpy.ndarray.std "numpy.ndarray.std")([axis, dtype, out, ddof, keepdims, where]) | Returns the standard deviation of the array elements along given axis. | | [`sum`](numpy.ndarray.sum#numpy.ndarray.sum "numpy.ndarray.sum")([axis, dtype, out, keepdims, initial, where]) | Return the sum of the array elements over the given axis. | | [`swapaxes`](numpy.ndarray.swapaxes#numpy.ndarray.swapaxes "numpy.ndarray.swapaxes")(axis1, axis2) | Return a view of the array with `axis1` and `axis2` interchanged. | | [`take`](numpy.ndarray.take#numpy.ndarray.take "numpy.ndarray.take")(indices[, axis, out, mode]) | Return an array formed from the elements of `a` at the given indices. 
| | [`tobytes`](numpy.ndarray.tobytes#numpy.ndarray.tobytes "numpy.ndarray.tobytes")([order]) | Construct Python bytes containing the raw data bytes in the array. | | [`tofile`](numpy.ndarray.tofile#numpy.ndarray.tofile "numpy.ndarray.tofile")(fid[, sep, format]) | Write array to a file as text or binary (default). | | [`tolist`](numpy.ndarray.tolist#numpy.ndarray.tolist "numpy.ndarray.tolist")() | Return the array as an `a.ndim`-levels deep nested list of Python scalars. | | [`tostring`](numpy.ndarray.tostring#numpy.ndarray.tostring "numpy.ndarray.tostring")([order]) | A compatibility alias for [`tobytes`](numpy.ndarray.tobytes#numpy.ndarray.tobytes "numpy.ndarray.tobytes"), with exactly the same behavior. | | [`trace`](numpy.ndarray.trace#numpy.ndarray.trace "numpy.ndarray.trace")([offset, axis1, axis2, dtype, out]) | Return the sum along diagonals of the array. | | [`transpose`](numpy.ndarray.transpose#numpy.ndarray.transpose "numpy.ndarray.transpose")(*axes) | Returns a view of the array with axes transposed. | | [`var`](numpy.ndarray.var#numpy.ndarray.var "numpy.ndarray.var")([axis, dtype, out, ddof, keepdims, where]) | Returns the variance of the array elements, along given axis. | | [`view`](numpy.ndarray.view#numpy.ndarray.view "numpy.ndarray.view")([dtype][, type]) | New view of array with the same data. | | | | | --- | --- | | **dot** | | © 2005–2022 NumPy Developers Licensed under the 3-clause BSD License. <https://numpy.org/doc/1.23/reference/generated/numpy.ndarray.htmlnumpy.can_cast =============== numpy.can_cast(*from_*, *to*, *casting='safe'*) Returns True if cast between data types can occur according to the casting rule. If from is a scalar or array scalar, also returns True if the scalar value can be cast without overflow or truncation to an integer. Parameters **from_**dtype, dtype specifier, scalar, or array Data type, scalar, or array to cast from. **to**dtype or dtype specifier Data type to cast to. 
**casting**{‘no’, ‘equiv’, ‘safe’, ‘same_kind’, ‘unsafe’}, optional Controls what kind of data casting may occur. * ‘no’ means the data types should not be cast at all. * ‘equiv’ means only byte-order changes are allowed. * ‘safe’ means only casts which can preserve values are allowed. * ‘same_kind’ means only safe casts or casts within a kind, like float64 to float32, are allowed. * ‘unsafe’ means any data conversions may be done. Returns **out**bool True if cast can occur according to the casting rule. See also [`dtype`](numpy.dtype#numpy.dtype "numpy.dtype"), [`result_type`](numpy.result_type#numpy.result_type "numpy.result_type") #### Notes Changed in version 1.17.0: Casting between a simple data type and a structured one is possible only for “unsafe” casting. Casting to multiple fields is allowed, but casting from multiple fields is not. Changed in version 1.9.0: Casting from numeric to string types in ‘safe’ casting mode requires that the string dtype length is long enough to store the maximum integer/float value converted. 
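The structured-dtype restriction noted above (simple-to-structured casts require `'unsafe'`) can be sketched as follows; the single-field dtype here is invented for illustration:

```python
import numpy as np

simple = np.dtype('i4')
structured = np.dtype([('f1', 'i4')])

# Per the 1.17.0 note, casting between a simple dtype and a
# structured one is only permitted under the 'unsafe' rule.
print(np.can_cast(simple, structured))             # False (default casting='safe')
print(np.can_cast(simple, structured, 'unsafe'))   # True
```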
#### Examples Basic examples ``` >>> np.can_cast(np.int32, np.int64) True >>> np.can_cast(np.float64, complex) True >>> np.can_cast(complex, float) False ``` ``` >>> np.can_cast('i8', 'f8') True >>> np.can_cast('i8', 'f4') False >>> np.can_cast('i4', 'S4') False ``` Casting scalars ``` >>> np.can_cast(100, 'i1') True >>> np.can_cast(150, 'i1') False >>> np.can_cast(150, 'u1') True ``` ``` >>> np.can_cast(3.5e100, np.float32) False >>> np.can_cast(1000.0, np.float32) True ``` Array scalar checks the value, array does not ``` >>> np.can_cast(np.array(1000.0), np.float32) True >>> np.can_cast(np.array([1000.0]), np.float32) False ``` Using the casting rules ``` >>> np.can_cast('i8', 'i8', 'no') True >>> np.can_cast('<i8', '>i8', 'no') False ``` ``` >>> np.can_cast('<i8', '>i8', 'equiv') True >>> np.can_cast('<i4', '>i8', 'equiv') False ``` ``` >>> np.can_cast('<i4', '>i8', 'safe') True >>> np.can_cast('<i8', '>i4', 'safe') False ``` ``` >>> np.can_cast('<i8', '>i4', 'same_kind') True >>> np.can_cast('<i8', '>u4', 'same_kind') False ``` ``` >>> np.can_cast('<i8', '>u4', 'unsafe') True ``` © 2005–2022 NumPy Developers Licensed under the 3-clause BSD License. <https://numpy.org/doc/1.23/reference/generated/numpy.can_cast.htmlnumpy.dtype =========== *class*numpy.dtype(*dtype*, *align=False*, *copy=False*)[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/__init__.py) Create a data type object. A numpy array is homogeneous, and contains elements described by a dtype object. A dtype object can be constructed from different combinations of fundamental numeric types. Parameters **dtype** Object to be converted to a data type object. **align**bool, optional Add padding to the fields to match what a C compiler would output for a similar C-struct. Can be `True` only if `obj` is a dictionary or a comma-separated string. If a struct dtype is being created, this also sets a sticky alignment flag `isalignedstruct`. 
**copy**bool, optional Make a new copy of the data-type object. If `False`, the result may just be a reference to a built-in data-type object. See also [`result_type`](numpy.result_type#numpy.result_type "numpy.result_type") #### Examples Using array-scalar type: ``` >>> np.dtype(np.int16) dtype('int16') ``` Structured type, one field name ‘f1’, containing int16: ``` >>> np.dtype([('f1', np.int16)]) dtype([('f1', '<i2')]) ``` Structured type, one field named ‘f1’, in itself containing a structured type with one field: ``` >>> np.dtype([('f1', [('f1', np.int16)])]) dtype([('f1', [('f1', '<i2')])]) ``` Structured type, two fields: the first field contains an unsigned int, the second an int32: ``` >>> np.dtype([('f1', np.uint64), ('f2', np.int32)]) dtype([('f1', '<u8'), ('f2', '<i4')]) ``` Using array-protocol type strings: ``` >>> np.dtype([('a','f8'),('b','S10')]) dtype([('a', '<f8'), ('b', 'S10')]) ``` Using comma-separated field formats. The shape is (2,3): ``` >>> np.dtype("i4, (2,3)f8") dtype([('f0', '<i4'), ('f1', '<f8', (2, 3))]) ``` Using tuples. `int` is a fixed type, 3 the field’s shape. `void` is a flexible type, here of size 10: ``` >>> np.dtype([('hello',(np.int64,3)),('world',np.void,10)]) dtype([('hello', '<i8', (3,)), ('world', 'V10')]) ``` Subdivide `int16` into 2 `int8`’s, called x and y. 0 and 1 are the offsets in bytes: ``` >>> np.dtype((np.int16, {'x':(np.int8,0), 'y':(np.int8,1)})) dtype((numpy.int16, [('x', 'i1'), ('y', 'i1')])) ``` Using dictionaries. Two fields named ‘gender’ and ‘age’: ``` >>> np.dtype({'names':['gender','age'], 'formats':['S1',np.uint8]}) dtype([('gender', 'S1'), ('age', 'u1')]) ``` Offsets in bytes, here 0 and 25: ``` >>> np.dtype({'surname':('S25',0),'age':(np.uint8,25)}) dtype([('surname', 'S25'), ('age', 'u1')]) ``` Attributes [`alignment`](numpy.dtype.alignment#numpy.dtype.alignment "numpy.dtype.alignment") The required alignment (bytes) of this data-type according to the compiler. 
[`base`](numpy.dtype.base#numpy.dtype.base "numpy.dtype.base") Returns dtype for the base element of the subarrays, regardless of their dimension or shape. [`byteorder`](numpy.dtype.byteorder#numpy.dtype.byteorder "numpy.dtype.byteorder") A character indicating the byte-order of this data-type object. [`char`](../routines.char#module-numpy.char "numpy.char") A unique character code for each of the 21 different built-in types. [`descr`](numpy.dtype.descr#numpy.dtype.descr "numpy.dtype.descr") `__array_interface__` description of the data-type. [`fields`](numpy.dtype.fields#numpy.dtype.fields "numpy.dtype.fields") Dictionary of named fields defined for this data type, or `None`. [`flags`](numpy.dtype.flags#numpy.dtype.flags "numpy.dtype.flags") Bit-flags describing how this data type is to be interpreted. [`hasobject`](numpy.dtype.hasobject#numpy.dtype.hasobject "numpy.dtype.hasobject") Boolean indicating whether this dtype contains any reference-counted objects in any fields or sub-dtypes. [`isalignedstruct`](numpy.dtype.isalignedstruct#numpy.dtype.isalignedstruct "numpy.dtype.isalignedstruct") Boolean indicating whether the dtype is a struct which maintains field alignment. [`isbuiltin`](numpy.dtype.isbuiltin#numpy.dtype.isbuiltin "numpy.dtype.isbuiltin") Integer indicating how this dtype relates to the built-in dtypes. [`isnative`](numpy.dtype.isnative#numpy.dtype.isnative "numpy.dtype.isnative") Boolean indicating whether the byte order of this dtype is native to the platform. [`itemsize`](numpy.dtype.itemsize#numpy.dtype.itemsize "numpy.dtype.itemsize") The element size of this data-type object. [`kind`](numpy.dtype.kind#numpy.dtype.kind "numpy.dtype.kind") A character code (one of ‘biufcmMOSUV’) identifying the general kind of data. [`metadata`](numpy.dtype.metadata#numpy.dtype.metadata "numpy.dtype.metadata") Either `None` or a readonly dictionary of metadata (mappingproxy). 
[`name`](numpy.dtype.name#numpy.dtype.name "numpy.dtype.name") A bit-width name for this data-type. [`names`](numpy.dtype.names#numpy.dtype.names "numpy.dtype.names") Ordered list of field names, or `None` if there are no fields. [`ndim`](numpy.dtype.ndim#numpy.dtype.ndim "numpy.dtype.ndim") Number of dimensions of the sub-array if this data type describes a sub-array, and `0` otherwise. [`num`](numpy.dtype.num#numpy.dtype.num "numpy.dtype.num") A unique number for each of the 21 different built-in types. [`shape`](numpy.shape#numpy.shape "numpy.shape") Shape tuple of the sub-array if this data type describes a sub-array, and `()` otherwise. [`str`](numpy.dtype.str#numpy.dtype.str "numpy.dtype.str") The array-protocol typestring of this data-type object. [`subdtype`](numpy.dtype.subdtype#numpy.dtype.subdtype "numpy.dtype.subdtype") Tuple `(item_dtype, shape)` if this [`dtype`](#numpy.dtype "numpy.dtype") describes a sub-array, and None otherwise. **type** #### Methods | | | | --- | --- | | [`newbyteorder`](numpy.dtype.newbyteorder#numpy.dtype.newbyteorder "numpy.dtype.newbyteorder")([new_order]) | Return a new dtype with a different byte order. | numpy.add ========= numpy.add(*x1*, *x2*, */*, *out=None*, ***, *where=True*, *casting='same_kind'*, *order='K'*, *dtype=None*, *subok=True*[, *signature*, *extobj*])*=<ufunc 'add'>* Add arguments element-wise. Parameters **x1, x2**array_like The arrays to be added. If `x1.shape != x2.shape`, they must be broadcastable to a common shape (which becomes the shape of the output). **out**ndarray, None, or tuple of ndarray and None, optional A location into which the result is stored. If provided, it must have a shape that the inputs broadcast to. If not provided or None, a freshly-allocated array is returned. 
A tuple (possible only as a keyword argument) must have length equal to the number of outputs. **where**array_like, optional This condition is broadcast over the input. At locations where the condition is True, the `out` array will be set to the ufunc result. Elsewhere, the `out` array will retain its original value. Note that if an uninitialized `out` array is created via the default `out=None`, locations within it where the condition is False will remain uninitialized. ****kwargs** For other keyword-only arguments, see the [ufunc docs](../ufuncs#ufuncs-kwargs). Returns **add**ndarray or scalar The sum of `x1` and `x2`, element-wise. This is a scalar if both `x1` and `x2` are scalars. #### Notes Equivalent to `x1` + `x2` in terms of array broadcasting. #### Examples ``` >>> np.add(1.0, 4.0) 5.0 >>> x1 = np.arange(9.0).reshape((3, 3)) >>> x2 = np.arange(3.0) >>> np.add(x1, x2) array([[ 0., 2., 4.], [ 3., 5., 7.], [ 6., 8., 10.]]) ``` The `+` operator can be used as a shorthand for `np.add` on ndarrays. ``` >>> x1 = np.arange(9.0).reshape((3, 3)) >>> x2 = np.arange(3.0) >>> x1 + x2 array([[ 0., 2., 4.], [ 3., 5., 7.], [ 6., 8., 10.]]) ``` © 2005–2022 NumPy Developers Licensed under the 3-clause BSD License. <https://numpy.org/doc/1.23/reference/generated/numpy.add.htmlnumpy.subtract ============== numpy.subtract(*x1*, *x2*, */*, *out=None*, ***, *where=True*, *casting='same_kind'*, *order='K'*, *dtype=None*, *subok=True*[, *signature*, *extobj*])*=<ufunc 'subtract'>* Subtract arguments, element-wise. Parameters **x1, x2**array_like The arrays to be subtracted from each other. If `x1.shape != x2.shape`, they must be broadcastable to a common shape (which becomes the shape of the output). **out**ndarray, None, or tuple of ndarray and None, optional A location into which the result is stored. If provided, it must have a shape that the inputs broadcast to. If not provided or None, a freshly-allocated array is returned. 
A tuple (possible only as a keyword argument) must have length equal to the number of outputs. **where**array_like, optional This condition is broadcast over the input. At locations where the condition is True, the `out` array will be set to the ufunc result. Elsewhere, the `out` array will retain its original value. Note that if an uninitialized `out` array is created via the default `out=None`, locations within it where the condition is False will remain uninitialized. ****kwargs** For other keyword-only arguments, see the [ufunc docs](../ufuncs#ufuncs-kwargs). Returns **y**ndarray The difference of `x1` and `x2`, element-wise. This is a scalar if both `x1` and `x2` are scalars. #### Notes Equivalent to `x1 - x2` in terms of array broadcasting. #### Examples ``` >>> np.subtract(1.0, 4.0) -3.0 ``` ``` >>> x1 = np.arange(9.0).reshape((3, 3)) >>> x2 = np.arange(3.0) >>> np.subtract(x1, x2) array([[ 0., 0., 0.], [ 3., 3., 3.], [ 6., 6., 6.]]) ``` The `-` operator can be used as a shorthand for `np.subtract` on ndarrays. ``` >>> x1 = np.arange(9.0).reshape((3, 3)) >>> x2 = np.arange(3.0) >>> x1 - x2 array([[0., 0., 0.], [3., 3., 3.], [6., 6., 6.]]) ``` © 2005–2022 NumPy Developers Licensed under the 3-clause BSD License. <https://numpy.org/doc/1.23/reference/generated/numpy.subtract.htmlnumpy.multiply ============== numpy.multiply(*x1*, *x2*, */*, *out=None*, ***, *where=True*, *casting='same_kind'*, *order='K'*, *dtype=None*, *subok=True*[, *signature*, *extobj*])*=<ufunc 'multiply'>* Multiply arguments element-wise. Parameters **x1, x2**array_like Input arrays to be multiplied. If `x1.shape != x2.shape`, they must be broadcastable to a common shape (which becomes the shape of the output). **out**ndarray, None, or tuple of ndarray and None, optional A location into which the result is stored. If provided, it must have a shape that the inputs broadcast to. If not provided or None, a freshly-allocated array is returned. 
A tuple (possible only as a keyword argument) must have length equal to the number of outputs. **where**array_like, optional This condition is broadcast over the input. At locations where the condition is True, the `out` array will be set to the ufunc result. Elsewhere, the `out` array will retain its original value. Note that if an uninitialized `out` array is created via the default `out=None`, locations within it where the condition is False will remain uninitialized. ****kwargs** For other keyword-only arguments, see the [ufunc docs](../ufuncs#ufuncs-kwargs). Returns **y**ndarray The product of `x1` and `x2`, element-wise. This is a scalar if both `x1` and `x2` are scalars. #### Notes Equivalent to `x1` * `x2` in terms of array broadcasting. #### Examples ``` >>> np.multiply(2.0, 4.0) 8.0 ``` ``` >>> x1 = np.arange(9.0).reshape((3, 3)) >>> x2 = np.arange(3.0) >>> np.multiply(x1, x2) array([[ 0., 1., 4.], [ 0., 4., 10.], [ 0., 7., 16.]]) ``` The `*` operator can be used as a shorthand for `np.multiply` on ndarrays. ``` >>> x1 = np.arange(9.0).reshape((3, 3)) >>> x2 = np.arange(3.0) >>> x1 * x2 array([[ 0., 1., 4.], [ 0., 4., 10.], [ 0., 7., 16.]]) ``` © 2005–2022 NumPy Developers Licensed under the 3-clause BSD License. <https://numpy.org/doc/1.23/reference/generated/numpy.multiply.htmlnumpy.matmul ============ numpy.matmul(*x1*, *x2*, */*, *out=None*, ***, *casting='same_kind'*, *order='K'*, *dtype=None*, *subok=True*[, *signature*, *extobj*, *axes*, *axis*])*=<ufunc 'matmul'>* Matrix product of two arrays. Parameters **x1, x2**array_like Input arrays, scalars not allowed. **out**ndarray, optional A location into which the result is stored. If provided, it must have a shape that matches the signature `(n,k),(k,m)->(n,m)`. If not provided or None, a freshly-allocated array is returned. ****kwargs** For other keyword-only arguments, see the [ufunc docs](../ufuncs#ufuncs-kwargs). 
New in version 1.16: Now handles ufunc kwargs Returns **y**ndarray The matrix product of the inputs. This is a scalar only when both x1, x2 are 1-d vectors. Raises ValueError If the last dimension of `x1` is not the same size as the second-to-last dimension of `x2`. If a scalar value is passed in. See also [`vdot`](numpy.vdot#numpy.vdot "numpy.vdot") Complex-conjugating dot product. [`tensordot`](numpy.tensordot#numpy.tensordot "numpy.tensordot") Sum products over arbitrary axes. [`einsum`](numpy.einsum#numpy.einsum "numpy.einsum") Einstein summation convention. [`dot`](numpy.dot#numpy.dot "numpy.dot") alternative matrix product with different broadcasting rules. #### Notes The behavior depends on the arguments in the following way. * If both arguments are 2-D they are multiplied like conventional matrices. * If either argument is N-D, N > 2, it is treated as a stack of matrices residing in the last two indexes and broadcast accordingly. * If the first argument is 1-D, it is promoted to a matrix by prepending a 1 to its dimensions. After matrix multiplication the prepended 1 is removed. * If the second argument is 1-D, it is promoted to a matrix by appending a 1 to its dimensions. After matrix multiplication the appended 1 is removed. `matmul` differs from `dot` in two important ways: * Multiplication by scalars is not allowed, use `*` instead. * Stacks of matrices are broadcast together as if the matrices were elements, respecting the signature `(n,k),(k,m)->(n,m)`: ``` >>> a = np.ones([9, 5, 7, 4]) >>> c = np.ones([9, 5, 4, 3]) >>> np.dot(a, c).shape (9, 5, 7, 9, 5, 3) >>> np.matmul(a, c).shape (9, 5, 7, 3) >>> # n is 7, k is 4, m is 3 ``` The matmul function implements the semantics of the `@` operator introduced in Python 3.5 following [**PEP 465**](https://peps.python.org/pep-0465/). #### Examples For 2-D arrays it is the matrix product: ``` >>> a = np.array([[1, 0], ... [0, 1]]) >>> b = np.array([[4, 1], ... 
[2, 2]]) >>> np.matmul(a, b) array([[4, 1], [2, 2]]) ``` For 2-D mixed with 1-D, the result is the usual matrix-vector product. ``` >>> a = np.array([[1, 0], ... [0, 1]]) >>> b = np.array([1, 2]) >>> np.matmul(a, b) array([1, 2]) >>> np.matmul(b, a) array([1, 2]) ``` Broadcasting is conventional for stacks of arrays. ``` >>> a = np.arange(2 * 2 * 4).reshape((2, 2, 4)) >>> b = np.arange(2 * 2 * 4).reshape((2, 4, 2)) >>> np.matmul(a,b).shape (2, 2, 2) >>> np.matmul(a, b)[0, 1, 1] 98 >>> sum(a[0, 1, :] * b[0 , :, 1]) 98 ``` Vector, vector returns the scalar inner product, but neither argument is complex-conjugated: ``` >>> np.matmul([2j, 3j], [2j, 3j]) (-13+0j) ``` Scalar multiplication raises an error. ``` >>> np.matmul([1,2], 3) Traceback (most recent call last): ... ValueError: matmul: Input operand 1 does not have enough dimensions ... ``` The `@` operator can be used as a shorthand for `np.matmul` on ndarrays. ``` >>> x1 = np.array([2j, 3j]) >>> x2 = np.array([2j, 3j]) >>> x1 @ x2 (-13+0j) ``` New in version 1.10.0. <https://numpy.org/doc/1.23/reference/generated/numpy.matmul.html>

numpy.divide
============

numpy.divide(*x1*, *x2*, */*, *out=None*, ***, *where=True*, *casting='same_kind'*, *order='K'*, *dtype=None*, *subok=True*[, *signature*, *extobj*])*=<ufunc 'divide'>* Divide arguments element-wise. Parameters **x1**array_like Dividend array. **x2**array_like Divisor array. If `x1.shape != x2.shape`, they must be broadcastable to a common shape (which becomes the shape of the output). **out**ndarray, None, or tuple of ndarray and None, optional A location into which the result is stored. If provided, it must have a shape that the inputs broadcast to. If not provided or None, a freshly-allocated array is returned. A tuple (possible only as a keyword argument) must have length equal to the number of outputs. **where**array_like, optional This condition is broadcast over the input.
At locations where the condition is True, the `out` array will be set to the ufunc result. Elsewhere, the `out` array will retain its original value. Note that if an uninitialized `out` array is created via the default `out=None`, locations within it where the condition is False will remain uninitialized. ****kwargs** For other keyword-only arguments, see the [ufunc docs](../ufuncs#ufuncs-kwargs). Returns **y**ndarray or scalar The quotient `x1/x2`, element-wise. This is a scalar if both `x1` and `x2` are scalars. See also [`seterr`](numpy.seterr#numpy.seterr "numpy.seterr") Set whether to raise or warn on overflow, underflow and division by zero. #### Notes Equivalent to `x1` / `x2` in terms of array-broadcasting. The `true_divide(x1, x2)` function is an alias for `divide(x1, x2)`. #### Examples ``` >>> np.divide(2.0, 4.0) 0.5 >>> x1 = np.arange(9.0).reshape((3, 3)) >>> x2 = np.arange(3.0) >>> np.divide(x1, x2) array([[nan, 1. , 1. ], [inf, 4. , 2.5], [inf, 7. , 4. ]]) ``` The `/` operator can be used as a shorthand for `np.divide` on ndarrays. ``` >>> x1 = np.arange(9.0).reshape((3, 3)) >>> x2 = 2 * np.ones(3) >>> x1 / x2 array([[0. , 0.5, 1. ], [1.5, 2. , 2.5], [3. , 3.5, 4. ]]) ``` <https://numpy.org/doc/1.23/reference/generated/numpy.divide.html>

numpy.logaddexp
===============

numpy.logaddexp(*x1*, *x2*, */*, *out=None*, ***, *where=True*, *casting='same_kind'*, *order='K'*, *dtype=None*, *subok=True*[, *signature*, *extobj*])*=<ufunc 'logaddexp'>* Logarithm of the sum of exponentiations of the inputs. Calculates `log(exp(x1) + exp(x2))`. This function is useful in statistics where the calculated probabilities of events may be so small as to exceed the range of normal floating point numbers. In such cases the logarithm of the calculated probability is stored. This function allows adding probabilities stored in such a fashion. Parameters **x1, x2**array_like Input values.
If `x1.shape != x2.shape`, they must be broadcastable to a common shape (which becomes the shape of the output). **out**ndarray, None, or tuple of ndarray and None, optional A location into which the result is stored. If provided, it must have a shape that the inputs broadcast to. If not provided or None, a freshly-allocated array is returned. A tuple (possible only as a keyword argument) must have length equal to the number of outputs. **where**array_like, optional This condition is broadcast over the input. At locations where the condition is True, the `out` array will be set to the ufunc result. Elsewhere, the `out` array will retain its original value. Note that if an uninitialized `out` array is created via the default `out=None`, locations within it where the condition is False will remain uninitialized. ****kwargs** For other keyword-only arguments, see the [ufunc docs](../ufuncs#ufuncs-kwargs). Returns **result**ndarray Logarithm of `exp(x1) + exp(x2)`. This is a scalar if both `x1` and `x2` are scalars. See also [`logaddexp2`](numpy.logaddexp2#numpy.logaddexp2 "numpy.logaddexp2") Logarithm of the sum of exponentiations of inputs in base 2. #### Notes New in version 1.3.0. #### Examples ``` >>> prob1 = np.log(1e-50) >>> prob2 = np.log(2.5e-50) >>> prob12 = np.logaddexp(prob1, prob2) >>> prob12 -113.87649168120691 >>> np.exp(prob12) 3.5000000000000057e-50 ``` <https://numpy.org/doc/1.23/reference/generated/numpy.logaddexp.html>

numpy.logaddexp2
================

numpy.logaddexp2(*x1*, *x2*, */*, *out=None*, ***, *where=True*, *casting='same_kind'*, *order='K'*, *dtype=None*, *subok=True*[, *signature*, *extobj*])*=<ufunc 'logaddexp2'>* Logarithm of the sum of exponentiations of the inputs in base-2. Calculates `log2(2**x1 + 2**x2)`. This function is useful in machine learning when the calculated probabilities of events may be so small as to exceed the range of normal floating point numbers.
In such cases the base-2 logarithm of the calculated probability can be used instead. This function allows adding probabilities stored in such a fashion. Parameters **x1, x2**array_like Input values. If `x1.shape != x2.shape`, they must be broadcastable to a common shape (which becomes the shape of the output). **out**ndarray, None, or tuple of ndarray and None, optional A location into which the result is stored. If provided, it must have a shape that the inputs broadcast to. If not provided or None, a freshly-allocated array is returned. A tuple (possible only as a keyword argument) must have length equal to the number of outputs. **where**array_like, optional This condition is broadcast over the input. At locations where the condition is True, the `out` array will be set to the ufunc result. Elsewhere, the `out` array will retain its original value. Note that if an uninitialized `out` array is created via the default `out=None`, locations within it where the condition is False will remain uninitialized. ****kwargs** For other keyword-only arguments, see the [ufunc docs](../ufuncs#ufuncs-kwargs). Returns **result**ndarray Base-2 logarithm of `2**x1 + 2**x2`. This is a scalar if both `x1` and `x2` are scalars. See also [`logaddexp`](numpy.logaddexp#numpy.logaddexp "numpy.logaddexp") Logarithm of the sum of exponentiations of the inputs. #### Notes New in version 1.3.0. #### Examples ``` >>> prob1 = np.log2(1e-50) >>> prob2 = np.log2(2.5e-50) >>> prob12 = np.logaddexp2(prob1, prob2) >>> prob1, prob2, prob12 (-166.09640474436813, -164.77447664948076, -164.28904982231052) >>> 2**prob12 3.4999999999999914e-50 ```
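The overflow-avoidance property described above can be checked numerically. The sketch below (not part of the official NumPy docs; variable names are illustrative) compares `logaddexp2` against the naive formula `log2(2**x1 + 2**x2)`, which underflows for very negative inputs:

```python
import numpy as np

# Moderate inputs: logaddexp2 matches the naive formula log2(2**x1 + 2**x2).
x1, x2 = 3.0, 5.0
naive = np.log2(2.0 ** x1 + 2.0 ** x2)
assert np.isclose(np.logaddexp2(x1, x2), naive)

# Tiny probabilities: 2**(-1100) underflows to 0.0 in float64, so the
# naive formula collapses to log2(0) = -inf, while logaddexp2 remains
# accurate: log2(2**a + 2**a) = a + 1.
small = -1100.0
with np.errstate(divide='ignore'):
    naive_small = np.log2(2.0 ** small + 2.0 ** small)
print(naive_small)                    # -inf (naive formula underflows)
print(np.logaddexp2(small, small))    # finite, equal to small + 1
```

The same reasoning applies to `logaddexp` with natural-log-scaled probabilities.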
<https://numpy.org/doc/1.23/reference/generated/numpy.logaddexp2.html>

numpy.true_divide
=================

numpy.true_divide(*x1*, *x2*, */*, *out=None*, ***, *where=True*, *casting='same_kind'*, *order='K'*, *dtype=None*, *subok=True*[, *signature*, *extobj*])*=<ufunc 'divide'>* Divide arguments element-wise. Parameters **x1**array_like Dividend array. **x2**array_like Divisor array. If `x1.shape != x2.shape`, they must be broadcastable to a common shape (which becomes the shape of the output). **out**ndarray, None, or tuple of ndarray and None, optional A location into which the result is stored. If provided, it must have a shape that the inputs broadcast to. If not provided or None, a freshly-allocated array is returned. A tuple (possible only as a keyword argument) must have length equal to the number of outputs. **where**array_like, optional This condition is broadcast over the input. At locations where the condition is True, the `out` array will be set to the ufunc result. Elsewhere, the `out` array will retain its original value. Note that if an uninitialized `out` array is created via the default `out=None`, locations within it where the condition is False will remain uninitialized. ****kwargs** For other keyword-only arguments, see the [ufunc docs](../ufuncs#ufuncs-kwargs). Returns **y**ndarray or scalar The quotient `x1/x2`, element-wise. This is a scalar if both `x1` and `x2` are scalars. See also [`seterr`](numpy.seterr#numpy.seterr "numpy.seterr") Set whether to raise or warn on overflow, underflow and division by zero. #### Notes Equivalent to `x1` / `x2` in terms of array-broadcasting. The `true_divide(x1, x2)` function is an alias for `divide(x1, x2)`. #### Examples ``` >>> np.divide(2.0, 4.0) 0.5 >>> x1 = np.arange(9.0).reshape((3, 3)) >>> x2 = np.arange(3.0) >>> np.divide(x1, x2) array([[nan, 1. , 1. ], [inf, 4. , 2.5], [inf, 7. , 4. ]]) ``` The `/` operator can be used as a shorthand for `np.divide` on ndarrays.
``` >>> x1 = np.arange(9.0).reshape((3, 3)) >>> x2 = 2 * np.ones(3) >>> x1 / x2 array([[0. , 0.5, 1. ], [1.5, 2. , 2.5], [3. , 3.5, 4. ]]) ``` <https://numpy.org/doc/1.23/reference/generated/numpy.true_divide.html>

numpy.floor_divide
==================

numpy.floor_divide(*x1*, *x2*, */*, *out=None*, ***, *where=True*, *casting='same_kind'*, *order='K'*, *dtype=None*, *subok=True*[, *signature*, *extobj*])*=<ufunc 'floor_divide'>* Return the largest integer smaller than or equal to the division of the inputs. It is equivalent to the Python `//` operator and pairs with the Python `%` ([`remainder`](numpy.remainder#numpy.remainder "numpy.remainder")) function, so that `a = a % b + b * (a // b)` up to roundoff. Parameters **x1**array_like Numerator. **x2**array_like Denominator. If `x1.shape != x2.shape`, they must be broadcastable to a common shape (which becomes the shape of the output). **out**ndarray, None, or tuple of ndarray and None, optional A location into which the result is stored. If provided, it must have a shape that the inputs broadcast to. If not provided or None, a freshly-allocated array is returned. A tuple (possible only as a keyword argument) must have length equal to the number of outputs. **where**array_like, optional This condition is broadcast over the input. At locations where the condition is True, the `out` array will be set to the ufunc result. Elsewhere, the `out` array will retain its original value. Note that if an uninitialized `out` array is created via the default `out=None`, locations within it where the condition is False will remain uninitialized. ****kwargs** For other keyword-only arguments, see the [ufunc docs](../ufuncs#ufuncs-kwargs). Returns **y**ndarray y = floor(`x1`/`x2`) This is a scalar if both `x1` and `x2` are scalars. See also [`remainder`](numpy.remainder#numpy.remainder "numpy.remainder") Remainder complementary to floor_divide.
[`divmod`](numpy.divmod#numpy.divmod "numpy.divmod") Simultaneous floor division and remainder. [`divide`](numpy.divide#numpy.divide "numpy.divide") Standard division. [`floor`](numpy.floor#numpy.floor "numpy.floor") Round a number to the nearest integer toward minus infinity. [`ceil`](numpy.ceil#numpy.ceil "numpy.ceil") Round a number to the nearest integer toward infinity. #### Examples ``` >>> np.floor_divide(7,3) 2 >>> np.floor_divide([1., 2., 3., 4.], 2.5) array([ 0., 0., 1., 1.]) ``` The `//` operator can be used as a shorthand for `np.floor_divide` on ndarrays. ``` >>> x1 = np.array([1., 2., 3., 4.]) >>> x1 // 2.5 array([0., 0., 1., 1.]) ``` <https://numpy.org/doc/1.23/reference/generated/numpy.floor_divide.html>

numpy.negative
==============

numpy.negative(*x*, */*, *out=None*, ***, *where=True*, *casting='same_kind'*, *order='K'*, *dtype=None*, *subok=True*[, *signature*, *extobj*])*=<ufunc 'negative'>* Numerical negative, element-wise. Parameters **x**array_like or scalar Input array. **out**ndarray, None, or tuple of ndarray and None, optional A location into which the result is stored. If provided, it must have a shape that the inputs broadcast to. If not provided or None, a freshly-allocated array is returned. A tuple (possible only as a keyword argument) must have length equal to the number of outputs. **where**array_like, optional This condition is broadcast over the input. At locations where the condition is True, the `out` array will be set to the ufunc result. Elsewhere, the `out` array will retain its original value. Note that if an uninitialized `out` array is created via the default `out=None`, locations within it where the condition is False will remain uninitialized. ****kwargs** For other keyword-only arguments, see the [ufunc docs](../ufuncs#ufuncs-kwargs). Returns **y**ndarray or scalar Returned array or scalar: `y = -x`. This is a scalar if `x` is a scalar.
#### Examples ``` >>> np.negative([1.,-1.]) array([-1., 1.]) ``` The unary `-` operator can be used as a shorthand for `np.negative` on ndarrays. ``` >>> x1 = np.array(([1., -1.])) >>> -x1 array([-1., 1.]) ``` <https://numpy.org/doc/1.23/reference/generated/numpy.negative.html>

numpy.positive
==============

numpy.positive(*x*, */*, *out=None*, ***, *where=True*, *casting='same_kind'*, *order='K'*, *dtype=None*, *subok=True*[, *signature*, *extobj*])*=<ufunc 'positive'>* Numerical positive, element-wise. New in version 1.13.0. Parameters **x**array_like or scalar Input array. Returns **y**ndarray or scalar Returned array or scalar: `y = +x`. This is a scalar if `x` is a scalar. #### Notes Equivalent to `x.copy()`, but only defined for types that support arithmetic. #### Examples ``` >>> x1 = np.array(([1., -1.])) >>> np.positive(x1) array([ 1., -1.]) ``` The unary `+` operator can be used as a shorthand for `np.positive` on ndarrays. ``` >>> x1 = np.array(([1., -1.])) >>> +x1 array([ 1., -1.]) ``` <https://numpy.org/doc/1.23/reference/generated/numpy.positive.html>

numpy.power
===========

numpy.power(*x1*, *x2*, */*, *out=None*, ***, *where=True*, *casting='same_kind'*, *order='K'*, *dtype=None*, *subok=True*[, *signature*, *extobj*])*=<ufunc 'power'>* First array elements raised to powers from second array, element-wise. Raise each base in `x1` to the positionally-corresponding power in `x2`. `x1` and `x2` must be broadcastable to the same shape. An integer type raised to a negative integer power will raise a `ValueError`. Negative values raised to a non-integral value will return `nan`. To get complex results, cast the input to complex, or specify the `dtype` to be `complex` (see the example below). Parameters **x1**array_like The bases. **x2**array_like The exponents.
If `x1.shape != x2.shape`, they must be broadcastable to a common shape (which becomes the shape of the output). **out**ndarray, None, or tuple of ndarray and None, optional A location into which the result is stored. If provided, it must have a shape that the inputs broadcast to. If not provided or None, a freshly-allocated array is returned. A tuple (possible only as a keyword argument) must have length equal to the number of outputs. **where**array_like, optional This condition is broadcast over the input. At locations where the condition is True, the `out` array will be set to the ufunc result. Elsewhere, the `out` array will retain its original value. Note that if an uninitialized `out` array is created via the default `out=None`, locations within it where the condition is False will remain uninitialized. ****kwargs** For other keyword-only arguments, see the [ufunc docs](../ufuncs#ufuncs-kwargs). Returns **y**ndarray The bases in `x1` raised to the exponents in `x2`. This is a scalar if both `x1` and `x2` are scalars. See also [`float_power`](numpy.float_power#numpy.float_power "numpy.float_power") power function that promotes integers to float #### Examples Cube each element in an array. ``` >>> x1 = np.arange(6) >>> x1 array([0, 1, 2, 3, 4, 5]) >>> np.power(x1, 3) array([ 0, 1, 8, 27, 64, 125]) ``` Raise the bases to different exponents. ``` >>> x2 = [1.0, 2.0, 3.0, 3.0, 2.0, 1.0] >>> np.power(x1, x2) array([ 0., 1., 8., 27., 16., 5.]) ``` The effect of broadcasting. ``` >>> x2 = np.array([[1, 2, 3, 3, 2, 1], [1, 2, 3, 3, 2, 1]]) >>> x2 array([[1, 2, 3, 3, 2, 1], [1, 2, 3, 3, 2, 1]]) >>> np.power(x1, x2) array([[ 0, 1, 8, 27, 16, 5], [ 0, 1, 8, 27, 16, 5]]) ``` The `**` operator can be used as a shorthand for `np.power` on ndarrays. ``` >>> x2 = np.array([1, 2, 3, 3, 2, 1]) >>> x1 = np.arange(6) >>> x1 ** x2 array([ 0, 1, 8, 27, 16, 5]) ``` Negative values raised to a non-integral value will result in `nan` (and a warning will be generated).
``` >>> x3 = np.array([-1.0, -4.0]) >>> with np.errstate(invalid='ignore'): ... p = np.power(x3, 1.5) ... >>> p array([nan, nan]) ``` To get complex results, give the argument `dtype=complex`. ``` >>> np.power(x3, 1.5, dtype=complex) array([-1.83697020e-16-1.j, -1.46957616e-15-8.j]) ``` <https://numpy.org/doc/1.23/reference/generated/numpy.power.html>

numpy.float_power
=================

numpy.float_power(*x1*, *x2*, */*, *out=None*, ***, *where=True*, *casting='same_kind'*, *order='K'*, *dtype=None*, *subok=True*[, *signature*, *extobj*])*=<ufunc 'float_power'>* First array elements raised to powers from second array, element-wise. Raise each base in `x1` to the positionally-corresponding power in `x2`. `x1` and `x2` must be broadcastable to the same shape. This differs from the power function in that integers, float16, and float32 are promoted to floats with a minimum precision of float64 so that the result is always inexact. The intent is that the function will return a usable result for negative powers and seldom overflow for positive powers. Negative values raised to a non-integral value will return `nan`. To get complex results, cast the input to complex, or specify the `dtype` to be `complex` (see the example below). New in version 1.12.0. Parameters **x1**array_like The bases. **x2**array_like The exponents. If `x1.shape != x2.shape`, they must be broadcastable to a common shape (which becomes the shape of the output). **out**ndarray, None, or tuple of ndarray and None, optional A location into which the result is stored. If provided, it must have a shape that the inputs broadcast to. If not provided or None, a freshly-allocated array is returned. A tuple (possible only as a keyword argument) must have length equal to the number of outputs. **where**array_like, optional This condition is broadcast over the input.
At locations where the condition is True, the `out` array will be set to the ufunc result. Elsewhere, the `out` array will retain its original value. Note that if an uninitialized `out` array is created via the default `out=None`, locations within it where the condition is False will remain uninitialized. ****kwargs** For other keyword-only arguments, see the [ufunc docs](../ufuncs#ufuncs-kwargs). Returns **y**ndarray The bases in `x1` raised to the exponents in `x2`. This is a scalar if both `x1` and `x2` are scalars. See also [`power`](numpy.power#numpy.power "numpy.power") power function that preserves type #### Examples Cube each element in a list. ``` >>> x1 = range(6) >>> list(x1) [0, 1, 2, 3, 4, 5] >>> np.float_power(x1, 3) array([ 0., 1., 8., 27., 64., 125.]) ``` Raise the bases to different exponents. ``` >>> x2 = [1.0, 2.0, 3.0, 3.0, 2.0, 1.0] >>> np.float_power(x1, x2) array([ 0., 1., 8., 27., 16., 5.]) ``` The effect of broadcasting. ``` >>> x2 = np.array([[1, 2, 3, 3, 2, 1], [1, 2, 3, 3, 2, 1]]) >>> x2 array([[1, 2, 3, 3, 2, 1], [1, 2, 3, 3, 2, 1]]) >>> np.float_power(x1, x2) array([[ 0., 1., 8., 27., 16., 5.], [ 0., 1., 8., 27., 16., 5.]]) ``` Negative values raised to a non-integral value will result in `nan` (and a warning will be generated). ``` >>> x3 = np.array([-1, -4]) >>> with np.errstate(invalid='ignore'): ... p = np.float_power(x3, 1.5) ... >>> p array([nan, nan]) ``` To get complex results, give the argument `dtype=complex`. ``` >>> np.float_power(x3, 1.5, dtype=complex) array([-1.83697020e-16-1.j, -1.46957616e-15-8.j]) ``` <https://numpy.org/doc/1.23/reference/generated/numpy.float_power.html>

numpy.remainder
===============

numpy.remainder(*x1*, *x2*, */*, *out=None*, ***, *where=True*, *casting='same_kind'*, *order='K'*, *dtype=None*, *subok=True*[, *signature*, *extobj*])*=<ufunc 'remainder'>* Returns the element-wise remainder of division.
Computes the remainder complementary to the [`floor_divide`](numpy.floor_divide#numpy.floor_divide "numpy.floor_divide") function. It is equivalent to the Python modulus operator `x1 % x2` and has the same sign as the divisor `x2`. The MATLAB function equivalent to `np.remainder` is `mod`. Warning This should not be confused with: * Python 3.7’s [`math.remainder`](https://docs.python.org/3/library/math.html#math.remainder "(in Python v3.10)") and C’s `remainder`, which compute the IEEE remainder, i.e. the complement to `round(x1 / x2)`. * The MATLAB `rem` function or the C `%` operator, which are the complement to `int(x1 / x2)`. Parameters **x1**array_like Dividend array. **x2**array_like Divisor array. If `x1.shape != x2.shape`, they must be broadcastable to a common shape (which becomes the shape of the output). **out**ndarray, None, or tuple of ndarray and None, optional A location into which the result is stored. If provided, it must have a shape that the inputs broadcast to. If not provided or None, a freshly-allocated array is returned. A tuple (possible only as a keyword argument) must have length equal to the number of outputs. **where**array_like, optional This condition is broadcast over the input. At locations where the condition is True, the `out` array will be set to the ufunc result. Elsewhere, the `out` array will retain its original value. Note that if an uninitialized `out` array is created via the default `out=None`, locations within it where the condition is False will remain uninitialized. ****kwargs** For other keyword-only arguments, see the [ufunc docs](../ufuncs#ufuncs-kwargs). Returns **y**ndarray The element-wise remainder of the quotient `floor_divide(x1, x2)`. This is a scalar if both `x1` and `x2` are scalars. See also [`floor_divide`](numpy.floor_divide#numpy.floor_divide "numpy.floor_divide") Equivalent of Python `//` operator. [`divmod`](numpy.divmod#numpy.divmod "numpy.divmod") Simultaneous floor division and remainder.
[`fmod`](numpy.fmod#numpy.fmod "numpy.fmod") Equivalent of the MATLAB `rem` function. [`divide`](numpy.divide#numpy.divide "numpy.divide"), [`floor`](numpy.floor#numpy.floor "numpy.floor") #### Notes Returns 0 when `x2` is 0 and both `x1` and `x2` are (arrays of) integers. `mod` is an alias of `remainder`. #### Examples ``` >>> np.remainder([4, 7], [2, 3]) array([0, 1]) >>> np.remainder(np.arange(7), 5) array([0, 1, 2, 3, 4, 0, 1]) ``` The `%` operator can be used as a shorthand for `np.remainder` on ndarrays. ``` >>> x1 = np.arange(7) >>> x1 % 5 array([0, 1, 2, 3, 4, 0, 1]) ``` <https://numpy.org/doc/1.23/reference/generated/numpy.remainder.html>

numpy.mod
=========

numpy.mod(*x1*, *x2*, */*, *out=None*, ***, *where=True*, *casting='same_kind'*, *order='K'*, *dtype=None*, *subok=True*[, *signature*, *extobj*])*=<ufunc 'remainder'>* Returns the element-wise remainder of division. Computes the remainder complementary to the [`floor_divide`](numpy.floor_divide#numpy.floor_divide "numpy.floor_divide") function. It is equivalent to the Python modulus operator `x1 % x2` and has the same sign as the divisor `x2`. The MATLAB function equivalent to `np.remainder` is `mod`. Warning This should not be confused with: * Python 3.7’s [`math.remainder`](https://docs.python.org/3/library/math.html#math.remainder "(in Python v3.10)") and C’s `remainder`, which compute the IEEE remainder, i.e. the complement to `round(x1 / x2)`. * The MATLAB `rem` function or the C `%` operator, which are the complement to `int(x1 / x2)`. Parameters **x1**array_like Dividend array. **x2**array_like Divisor array. If `x1.shape != x2.shape`, they must be broadcastable to a common shape (which becomes the shape of the output). **out**ndarray, None, or tuple of ndarray and None, optional A location into which the result is stored. If provided, it must have a shape that the inputs broadcast to.
If not provided or None, a freshly-allocated array is returned. A tuple (possible only as a keyword argument) must have length equal to the number of outputs. **where**array_like, optional This condition is broadcast over the input. At locations where the condition is True, the `out` array will be set to the ufunc result. Elsewhere, the `out` array will retain its original value. Note that if an uninitialized `out` array is created via the default `out=None`, locations within it where the condition is False will remain uninitialized. ****kwargs** For other keyword-only arguments, see the [ufunc docs](../ufuncs#ufuncs-kwargs). Returns **y**ndarray The element-wise remainder of the quotient `floor_divide(x1, x2)`. This is a scalar if both `x1` and `x2` are scalars. See also [`floor_divide`](numpy.floor_divide#numpy.floor_divide "numpy.floor_divide") Equivalent of Python `//` operator. [`divmod`](numpy.divmod#numpy.divmod "numpy.divmod") Simultaneous floor division and remainder. [`fmod`](numpy.fmod#numpy.fmod "numpy.fmod") Equivalent of the MATLAB `rem` function. [`divide`](numpy.divide#numpy.divide "numpy.divide"), [`floor`](numpy.floor#numpy.floor "numpy.floor") #### Notes Returns 0 when `x2` is 0 and both `x1` and `x2` are (arrays of) integers. `mod` is an alias of `remainder`. #### Examples ``` >>> np.remainder([4, 7], [2, 3]) array([0, 1]) >>> np.remainder(np.arange(7), 5) array([0, 1, 2, 3, 4, 0, 1]) ``` The `%` operator can be used as a shorthand for `np.remainder` on ndarrays. ``` >>> x1 = np.arange(7) >>> x1 % 5 array([0, 1, 2, 3, 4, 0, 1]) ``` <https://numpy.org/doc/1.23/reference/generated/numpy.mod.html>

numpy.fmod
==========

numpy.fmod(*x1*, *x2*, */*, *out=None*, ***, *where=True*, *casting='same_kind'*, *order='K'*, *dtype=None*, *subok=True*[, *signature*, *extobj*])*=<ufunc 'fmod'>* Returns the element-wise remainder of division.
This is the NumPy implementation of the C library function `fmod`; the remainder has the same sign as the dividend `x1`. It is equivalent to the Matlab(TM) `rem` function and should not be confused with the Python modulus operator `x1 % x2`. Parameters **x1**array_like Dividend. **x2**array_like Divisor. If `x1.shape != x2.shape`, they must be broadcastable to a common shape (which becomes the shape of the output). **out**ndarray, None, or tuple of ndarray and None, optional A location into which the result is stored. If provided, it must have a shape that the inputs broadcast to. If not provided or None, a freshly-allocated array is returned. A tuple (possible only as a keyword argument) must have length equal to the number of outputs. **where**array_like, optional This condition is broadcast over the input. At locations where the condition is True, the `out` array will be set to the ufunc result. Elsewhere, the `out` array will retain its original value. Note that if an uninitialized `out` array is created via the default `out=None`, locations within it where the condition is False will remain uninitialized. ****kwargs** For other keyword-only arguments, see the [ufunc docs](../ufuncs#ufuncs-kwargs). Returns **y**array_like The remainder of the division of `x1` by `x2`. This is a scalar if both `x1` and `x2` are scalars. See also [`remainder`](numpy.remainder#numpy.remainder "numpy.remainder") Equivalent to the Python `%` operator. [`divide`](numpy.divide#numpy.divide "numpy.divide") #### Notes The result of the modulo operation for negative dividend and divisors is bound by conventions. For [`fmod`](#numpy.fmod "numpy.fmod"), the sign of the result is the sign of the dividend, while for [`remainder`](numpy.remainder#numpy.remainder "numpy.remainder") the sign of the result is the sign of the divisor. The [`fmod`](#numpy.fmod "numpy.fmod") function is equivalent to the Matlab(TM) `rem` function.
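The two sign conventions above can be restated as identities. The sketch below (a numerical check, not from the NumPy docs) verifies that `fmod` pairs with division truncated toward zero, while `remainder` pairs with floor division:

```python
import numpy as np

x1 = np.array([-3, -2, -1, 1, 2, 3])
x2 = 2

# fmod follows C semantics: the quotient is truncated toward zero,
# so the nonzero results keep the sign of the dividend x1.
assert np.allclose(np.fmod(x1, x2), x1 - np.trunc(x1 / x2) * x2)

# remainder pairs with floor division instead, so the nonzero results
# keep the sign of the divisor x2.
assert np.allclose(np.remainder(x1, x2), x1 - np.floor(x1 / x2) * x2)

print(np.fmod(x1, x2))       # signs follow x1
print(np.remainder(x1, x2))  # signs follow x2
```

For positive `x1` and `x2` the two functions agree; they differ only when the operands have mixed signs.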
#### Examples ``` >>> np.fmod([-3, -2, -1, 1, 2, 3], 2) array([-1, 0, -1, 1, 0, 1]) >>> np.remainder([-3, -2, -1, 1, 2, 3], 2) array([1, 0, 1, 1, 0, 1]) ``` ``` >>> np.fmod([5, 3], [2, 2.]) array([ 1., 1.]) >>> a = np.arange(-3, 3).reshape(3, 2) >>> a array([[-3, -2], [-1, 0], [ 1, 2]]) >>> np.fmod(a, [2,2]) array([[-1, 0], [-1, 0], [ 1, 0]]) ``` <https://numpy.org/doc/1.23/reference/generated/numpy.fmod.html>

numpy.divmod
============

numpy.divmod(*x1*, *x2*, [*out1*, *out2*, ]*/*, [*out=(None*, *None)*, ]***, *where=True*, *casting='same_kind'*, *order='K'*, *dtype=None*, *subok=True*[, *signature*, *extobj*])*=<ufunc 'divmod'>* Return element-wise quotient and remainder simultaneously. New in version 1.13.0. `np.divmod(x, y)` is equivalent to `(x // y, x % y)`, but faster because it avoids redundant work. It is used to implement the Python built-in function `divmod` on NumPy arrays. Parameters **x1**array_like Dividend array. **x2**array_like Divisor array. If `x1.shape != x2.shape`, they must be broadcastable to a common shape (which becomes the shape of the output). **out**ndarray, None, or tuple of ndarray and None, optional A location into which the result is stored. If provided, it must have a shape that the inputs broadcast to. If not provided or None, a freshly-allocated array is returned. A tuple (possible only as a keyword argument) must have length equal to the number of outputs. **where**array_like, optional This condition is broadcast over the input. At locations where the condition is True, the `out` array will be set to the ufunc result. Elsewhere, the `out` array will retain its original value. Note that if an uninitialized `out` array is created via the default `out=None`, locations within it where the condition is False will remain uninitialized. ****kwargs** For other keyword-only arguments, see the [ufunc docs](../ufuncs#ufuncs-kwargs).
Returns **out1**ndarray Element-wise quotient resulting from floor division. This is a scalar if both `x1` and `x2` are scalars. **out2**ndarray Element-wise remainder from floor division. This is a scalar if both `x1` and `x2` are scalars. See also [`floor_divide`](numpy.floor_divide#numpy.floor_divide "numpy.floor_divide") Equivalent to Python’s `//` operator. [`remainder`](numpy.remainder#numpy.remainder "numpy.remainder") Equivalent to Python’s `%` operator. [`modf`](numpy.modf#numpy.modf "numpy.modf") Equivalent to `divmod(x, 1)` for positive `x` with the return values switched. #### Examples ``` >>> np.divmod(np.arange(5), 3) (array([0, 0, 0, 1, 1]), array([0, 1, 2, 0, 1])) ``` The Python built-in [`divmod`](#numpy.divmod "numpy.divmod") function can be used as a shorthand for `np.divmod` on ndarrays. ``` >>> x = np.arange(5) >>> divmod(x, 3) (array([0, 0, 0, 1, 1]), array([0, 1, 2, 0, 1])) ``` <https://numpy.org/doc/1.23/reference/generated/numpy.divmod.html>

numpy.absolute
==============

numpy.absolute(*x*, */*, *out=None*, ***, *where=True*, *casting='same_kind'*, *order='K'*, *dtype=None*, *subok=True*[, *signature*, *extobj*])*=<ufunc 'absolute'>* Calculate the absolute value element-wise. `np.abs` is a shorthand for this function. Parameters **x**array_like Input array. **out**ndarray, None, or tuple of ndarray and None, optional A location into which the result is stored. If provided, it must have a shape that the inputs broadcast to. If not provided or None, a freshly-allocated array is returned. A tuple (possible only as a keyword argument) must have length equal to the number of outputs. **where**array_like, optional This condition is broadcast over the input. At locations where the condition is True, the `out` array will be set to the ufunc result. Elsewhere, the `out` array will retain its original value.
Note that if an uninitialized `out` array is created via the default `out=None`, locations within it where the condition is False will remain uninitialized.

****kwargs**
For other keyword-only arguments, see the [ufunc docs](../ufuncs#ufuncs-kwargs).

Returns

**absolute** : ndarray
An ndarray containing the absolute value of each element in `x`. For complex input, `a + ib`, the absolute value is \(\sqrt{ a^2 + b^2 }\). This is a scalar if `x` is a scalar.

#### Examples

```
>>> x = np.array([-1.2, 1.2])
>>> np.absolute(x)
array([ 1.2,  1.2])
>>> np.absolute(1.2 + 1j)
1.5620499351813308
```

Plot the function over `[-10, 10]`:

```
>>> import matplotlib.pyplot as plt
```

```
>>> x = np.linspace(start=-10, stop=10, num=101)
>>> plt.plot(x, np.absolute(x))
>>> plt.show()
```

![Plot of the absolute value function](../../_images/numpy-absolute-1_00_00.png)

Plot the function over the complex plane:

```
>>> xx = x + 1j * x[:, np.newaxis]
>>> plt.imshow(np.abs(xx), extent=[-10, 10, -10, 10], cmap='gray')
>>> plt.show()
```

![Magnitude of a complex grid rendered as a grayscale image](../../_images/numpy-absolute-1_01_00.png)

The `abs` function can be used as a shorthand for `np.absolute` on ndarrays.

```
>>> x = np.array([-1.2, 1.2])
>>> abs(x)
array([1.2, 1.2])
```

<https://numpy.org/doc/1.23/reference/generated/numpy.absolute.html>

numpy.fabs
==========

numpy.fabs(*x*, */*, *out=None*, ***, *where=True*, *casting='same_kind'*, *order='K'*, *dtype=None*, *subok=True*[, *signature*, *extobj*]) *= <ufunc 'fabs'>*

Compute the absolute values element-wise.

This function returns the absolute values (positive magnitude) of the data in `x`. Complex values are not handled; use [`absolute`](numpy.absolute#numpy.absolute "numpy.absolute") to find the absolute values of complex data.

Parameters

**x** : array_like
The array of numbers for which the absolute values are required. If `x` is a scalar, the result `y` will also be a scalar.
**out** : ndarray, None, or tuple of ndarray and None, optional
A location into which the result is stored. If provided, it must have a shape that the inputs broadcast to. If not provided or None, a freshly-allocated array is returned. A tuple (possible only as a keyword argument) must have length equal to the number of outputs.

**where** : array_like, optional
This condition is broadcast over the input. At locations where the condition is True, the `out` array will be set to the ufunc result. Elsewhere, the `out` array will retain its original value. Note that if an uninitialized `out` array is created via the default `out=None`, locations within it where the condition is False will remain uninitialized.

****kwargs**
For other keyword-only arguments, see the [ufunc docs](../ufuncs#ufuncs-kwargs).

Returns

**y** : ndarray or scalar
The absolute values of `x`; the returned values are always floats. This is a scalar if `x` is a scalar.

See also

[`absolute`](numpy.absolute#numpy.absolute "numpy.absolute")
Absolute values including [`complex`](https://docs.python.org/3/library/functions.html#complex "(in Python v3.10)") types.

#### Examples

```
>>> np.fabs(-1)
1.0
>>> np.fabs([-1.2, 1.2])
array([ 1.2,  1.2])
```

<https://numpy.org/doc/1.23/reference/generated/numpy.fabs.html>

numpy.rint
==========

numpy.rint(*x*, */*, *out=None*, ***, *where=True*, *casting='same_kind'*, *order='K'*, *dtype=None*, *subok=True*[, *signature*, *extobj*]) *= <ufunc 'rint'>*

Round elements of the array to the nearest integer.

Parameters

**x** : array_like
Input array.

**out** : ndarray, None, or tuple of ndarray and None, optional
A location into which the result is stored. If provided, it must have a shape that the inputs broadcast to. If not provided or None, a freshly-allocated array is returned. A tuple (possible only as a keyword argument) must have length equal to the number of outputs.
**where** : array_like, optional
This condition is broadcast over the input. At locations where the condition is True, the `out` array will be set to the ufunc result. Elsewhere, the `out` array will retain its original value. Note that if an uninitialized `out` array is created via the default `out=None`, locations within it where the condition is False will remain uninitialized.

****kwargs**
For other keyword-only arguments, see the [ufunc docs](../ufuncs#ufuncs-kwargs).

Returns

**out** : ndarray or scalar
Output array is same shape and type as `x`. This is a scalar if `x` is a scalar.

See also

[`fix`](numpy.fix#numpy.fix "numpy.fix"), [`ceil`](numpy.ceil#numpy.ceil "numpy.ceil"), [`floor`](numpy.floor#numpy.floor "numpy.floor"), [`trunc`](numpy.trunc#numpy.trunc "numpy.trunc")

#### Notes

For values exactly halfway between rounded decimal values, NumPy rounds to the nearest even value. Thus 1.5 and 2.5 round to 2.0; -0.5 and 0.5 round to 0.0; etc.

#### Examples

```
>>> a = np.array([-1.7, -1.5, -0.2, 0.2, 1.5, 1.7, 2.0])
>>> np.rint(a)
array([-2., -2., -0.,  0.,  2.,  2.,  2.])
```

<https://numpy.org/doc/1.23/reference/generated/numpy.rint.html>

numpy.sign
==========

numpy.sign(*x*, */*, *out=None*, ***, *where=True*, *casting='same_kind'*, *order='K'*, *dtype=None*, *subok=True*[, *signature*, *extobj*]) *= <ufunc 'sign'>*

Returns an element-wise indication of the sign of a number.

The [`sign`](#numpy.sign "numpy.sign") function returns `-1 if x < 0, 0 if x == 0, 1 if x > 0`. nan is returned for nan inputs.

For complex inputs, the [`sign`](#numpy.sign "numpy.sign") function returns `sign(x.real) + 0j if x.real != 0 else sign(x.imag) + 0j`. complex(nan, 0) is returned for complex nan inputs.

Parameters

**x** : array_like
Input values.

**out** : ndarray, None, or tuple of ndarray and None, optional
A location into which the result is stored. If provided, it must have a shape that the inputs broadcast to.
If not provided or None, a freshly-allocated array is returned. A tuple (possible only as a keyword argument) must have length equal to the number of outputs.

**where** : array_like, optional
This condition is broadcast over the input. At locations where the condition is True, the `out` array will be set to the ufunc result. Elsewhere, the `out` array will retain its original value. Note that if an uninitialized `out` array is created via the default `out=None`, locations within it where the condition is False will remain uninitialized.

****kwargs**
For other keyword-only arguments, see the [ufunc docs](../ufuncs#ufuncs-kwargs).

Returns

**y** : ndarray
The sign of `x`. This is a scalar if `x` is a scalar.

#### Notes

There is more than one definition of sign in common use for complex numbers. The definition used here is equivalent to \(x/\sqrt{x*x}\), which is different from a common alternative, \(x/|x|\).

#### Examples

```
>>> np.sign([-5., 4.5])
array([-1.,  1.])
>>> np.sign(0)
0
>>> np.sign(5-2j)
(1+0j)
```

<https://numpy.org/doc/1.23/reference/generated/numpy.sign.html>

numpy.heaviside
===============

numpy.heaviside(*x1*, *x2*, */*, *out=None*, ***, *where=True*, *casting='same_kind'*, *order='K'*, *dtype=None*, *subok=True*[, *signature*, *extobj*]) *= <ufunc 'heaviside'>*

Compute the Heaviside step function.

The Heaviside step function is defined as:

```
                      0   if x1 < 0
heaviside(x1, x2) =  x2   if x1 == 0
                      1   if x1 > 0
```

where `x2` is often taken to be 0.5, but 0 and 1 are also sometimes used.

Parameters

**x1** : array_like
Input values.

**x2** : array_like
The value of the function when x1 is 0. If `x1.shape != x2.shape`, they must be broadcastable to a common shape (which becomes the shape of the output).

**out** : ndarray, None, or tuple of ndarray and None, optional
A location into which the result is stored. If provided, it must have a shape that the inputs broadcast to.
If not provided or None, a freshly-allocated array is returned. A tuple (possible only as a keyword argument) must have length equal to the number of outputs.

**where** : array_like, optional
This condition is broadcast over the input. At locations where the condition is True, the `out` array will be set to the ufunc result. Elsewhere, the `out` array will retain its original value. Note that if an uninitialized `out` array is created via the default `out=None`, locations within it where the condition is False will remain uninitialized.

****kwargs**
For other keyword-only arguments, see the [ufunc docs](../ufuncs#ufuncs-kwargs).

Returns

**out** : ndarray or scalar
The output array, element-wise Heaviside step function of `x1`. This is a scalar if both `x1` and `x2` are scalars.

#### Notes

New in version 1.13.0.

#### Examples

```
>>> np.heaviside([-1.5, 0, 2.0], 0.5)
array([ 0. ,  0.5,  1. ])
>>> np.heaviside([-1.5, 0, 2.0], 1)
array([ 0.,  1.,  1.])
```

<https://numpy.org/doc/1.23/reference/generated/numpy.heaviside.html>

numpy.conjugate
===============

numpy.conjugate(*x*, */*, *out=None*, ***, *where=True*, *casting='same_kind'*, *order='K'*, *dtype=None*, *subok=True*[, *signature*, *extobj*]) *= <ufunc 'conjugate'>*

Return the complex conjugate, element-wise.

The complex conjugate of a complex number is obtained by changing the sign of its imaginary part.

Parameters

**x** : array_like
Input value.

**out** : ndarray, None, or tuple of ndarray and None, optional
A location into which the result is stored. If provided, it must have a shape that the inputs broadcast to. If not provided or None, a freshly-allocated array is returned. A tuple (possible only as a keyword argument) must have length equal to the number of outputs.

**where** : array_like, optional
This condition is broadcast over the input. At locations where the condition is True, the `out` array will be set to the ufunc result.
Elsewhere, the `out` array will retain its original value. Note that if an uninitialized `out` array is created via the default `out=None`, locations within it where the condition is False will remain uninitialized.

****kwargs**
For other keyword-only arguments, see the [ufunc docs](../ufuncs#ufuncs-kwargs).

Returns

**y** : ndarray
The complex conjugate of `x`, with same dtype as `y`. This is a scalar if `x` is a scalar.

#### Notes

[`conj`](numpy.conj#numpy.conj "numpy.conj") is an alias for [`conjugate`](#numpy.conjugate "numpy.conjugate"):

```
>>> np.conj is np.conjugate
True
```

#### Examples

```
>>> np.conjugate(1+2j)
(1-2j)
```

```
>>> x = np.eye(2) + 1j * np.eye(2)
>>> np.conjugate(x)
array([[ 1.-1.j,  0.-0.j],
       [ 0.-0.j,  1.-1.j]])
```

<https://numpy.org/doc/1.23/reference/generated/numpy.conjugate.html>

numpy.exp
=========

numpy.exp(*x*, */*, *out=None*, ***, *where=True*, *casting='same_kind'*, *order='K'*, *dtype=None*, *subok=True*[, *signature*, *extobj*]) *= <ufunc 'exp'>*

Calculate the exponential of all elements in the input array.

Parameters

**x** : array_like
Input values.

**out** : ndarray, None, or tuple of ndarray and None, optional
A location into which the result is stored. If provided, it must have a shape that the inputs broadcast to. If not provided or None, a freshly-allocated array is returned. A tuple (possible only as a keyword argument) must have length equal to the number of outputs.

**where** : array_like, optional
This condition is broadcast over the input. At locations where the condition is True, the `out` array will be set to the ufunc result. Elsewhere, the `out` array will retain its original value. Note that if an uninitialized `out` array is created via the default `out=None`, locations within it where the condition is False will remain uninitialized.

****kwargs**
For other keyword-only arguments, see the [ufunc docs](../ufuncs#ufuncs-kwargs).
Returns

**out** : ndarray or scalar
Output array, element-wise exponential of `x`. This is a scalar if `x` is a scalar.

See also

[`expm1`](numpy.expm1#numpy.expm1 "numpy.expm1")
Calculate `exp(x) - 1` for all elements in the array.

[`exp2`](numpy.exp2#numpy.exp2 "numpy.exp2")
Calculate `2**x` for all elements in the array.

#### Notes

The irrational number `e` is also known as Euler’s number. It is approximately 2.718281, and is the base of the natural logarithm, `ln` (this means that, if \(x = \ln y = \log_e y\), then \(e^x = y\)). For real input, `exp(x)` is always positive.

For complex arguments, `x = a + ib`, we can write \(e^x = e^a e^{ib}\). The first term, \(e^a\), is already known (it is the real argument, described above). The second term, \(e^{ib}\), is \(\cos b + i \sin b\), a function with magnitude 1 and a periodic phase.

#### References

1. Wikipedia, “Exponential function”, <https://en.wikipedia.org/wiki/Exponential_function>
2. <NAME> and <NAME>, “Handbook of Mathematical Functions with Formulas, Graphs, and Mathematical Tables,” Dover, 1964, p. 69, <https://personal.math.ubc.ca/~cbm/aands/page_69.htm>

#### Examples

Plot the magnitude and phase of `exp(x)` in the complex plane:

```
>>> import matplotlib.pyplot as plt
```

```
>>> x = np.linspace(-2*np.pi, 2*np.pi, 100)
>>> xx = x + 1j * x[:, np.newaxis]  # a + ib over complex plane
>>> out = np.exp(xx)
```

```
>>> plt.subplot(121)
>>> plt.imshow(np.abs(out),
...            extent=[-2*np.pi, 2*np.pi, -2*np.pi, 2*np.pi], cmap='gray')
>>> plt.title('Magnitude of exp(x)')
```

```
>>> plt.subplot(122)
>>> plt.imshow(np.angle(out),
...            extent=[-2*np.pi, 2*np.pi, -2*np.pi, 2*np.pi], cmap='hsv')
>>> plt.title('Phase (angle) of exp(x)')
>>> plt.show()
```

![Magnitude and phase of exp(x) over the complex plane](../../_images/numpy-exp-1.png)
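The complex decomposition \(e^{a+ib} = e^a (\cos b + i \sin b)\) described in the Notes can be verified numerically; a small sketch:

```python
import numpy as np

z = 0.5 + 0.7j
# Split exp(a + ib) into the real factor e^a and the phase factor e^{ib}.
expected = np.exp(z.real) * (np.cos(z.imag) + 1j * np.sin(z.imag))
assert np.allclose(np.exp(z), expected)
# The phase factor alone has magnitude 1, as the Notes state.
assert np.isclose(np.abs(np.exp(1j * z.imag)), 1.0)
```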
<https://numpy.org/doc/1.23/reference/generated/numpy.exp.html>

numpy.exp2
==========

numpy.exp2(*x*, */*, *out=None*, ***, *where=True*, *casting='same_kind'*, *order='K'*, *dtype=None*, *subok=True*[, *signature*, *extobj*]) *= <ufunc 'exp2'>*

Calculate `2**p` for all `p` in the input array.

Parameters

**x** : array_like
Input values.

**out** : ndarray, None, or tuple of ndarray and None, optional
A location into which the result is stored. If provided, it must have a shape that the inputs broadcast to. If not provided or None, a freshly-allocated array is returned. A tuple (possible only as a keyword argument) must have length equal to the number of outputs.

**where** : array_like, optional
This condition is broadcast over the input. At locations where the condition is True, the `out` array will be set to the ufunc result. Elsewhere, the `out` array will retain its original value. Note that if an uninitialized `out` array is created via the default `out=None`, locations within it where the condition is False will remain uninitialized.

****kwargs**
For other keyword-only arguments, see the [ufunc docs](../ufuncs#ufuncs-kwargs).

Returns

**out** : ndarray or scalar
Element-wise 2 to the power `x`. This is a scalar if `x` is a scalar.

See also

[`power`](numpy.power#numpy.power "numpy.power")

#### Notes

New in version 1.3.0.

#### Examples

```
>>> np.exp2([2, 3])
array([ 4.,  8.])
```

<https://numpy.org/doc/1.23/reference/generated/numpy.exp2.html>

numpy.log
=========

numpy.log(*x*, */*, *out=None*, ***, *where=True*, *casting='same_kind'*, *order='K'*, *dtype=None*, *subok=True*[, *signature*, *extobj*]) *= <ufunc 'log'>*

Natural logarithm, element-wise.

The natural logarithm [`log`](#numpy.log "numpy.log") is the inverse of the exponential function, so that `log(exp(x)) = x`. The natural logarithm is logarithm in base [`e`](../constants#numpy.e "numpy.e").

Parameters

**x** : array_like
Input value.
**out** : ndarray, None, or tuple of ndarray and None, optional
A location into which the result is stored. If provided, it must have a shape that the inputs broadcast to. If not provided or None, a freshly-allocated array is returned. A tuple (possible only as a keyword argument) must have length equal to the number of outputs.

**where** : array_like, optional
This condition is broadcast over the input. At locations where the condition is True, the `out` array will be set to the ufunc result. Elsewhere, the `out` array will retain its original value. Note that if an uninitialized `out` array is created via the default `out=None`, locations within it where the condition is False will remain uninitialized.

****kwargs**
For other keyword-only arguments, see the [ufunc docs](../ufuncs#ufuncs-kwargs).

Returns

**y** : ndarray
The natural logarithm of `x`, element-wise. This is a scalar if `x` is a scalar.

See also

[`log10`](numpy.log10#numpy.log10 "numpy.log10"), [`log2`](numpy.log2#numpy.log2 "numpy.log2"), [`log1p`](numpy.log1p#numpy.log1p "numpy.log1p"), [`emath.log`](numpy.emath.log#numpy.emath.log "numpy.emath.log")

#### Notes

Logarithm is a multivalued function: for each `x` there is an infinite number of `z` such that `exp(z) = x`. The convention is to return the `z` whose imaginary part lies in `[-pi, pi]`.

For real-valued input data types, [`log`](#numpy.log "numpy.log") always returns real output. For each value that cannot be expressed as a real number or infinity, it yields `nan` and sets the `invalid` floating point error flag.

For complex-valued input, [`log`](#numpy.log "numpy.log") is a complex analytical function that has a branch cut `[-inf, 0]` and is continuous from above on it. [`log`](#numpy.log "numpy.log") handles the floating-point negative zero as an infinitesimal negative number, conforming to the C99 standard.

#### References

1. <NAME> and <NAME>, “Handbook of Mathematical Functions”, 10th printing, 1964, pp. 67.
<https://personal.math.ubc.ca/~cbm/aands/page_67.htm>
2. Wikipedia, “Logarithm”. <https://en.wikipedia.org/wiki/Logarithm>

#### Examples

```
>>> np.log([1, np.e, np.e**2, 0])
array([  0.,   1.,   2., -Inf])
```

<https://numpy.org/doc/1.23/reference/generated/numpy.log.html>

numpy.log2
==========

numpy.log2(*x*, */*, *out=None*, ***, *where=True*, *casting='same_kind'*, *order='K'*, *dtype=None*, *subok=True*[, *signature*, *extobj*]) *= <ufunc 'log2'>*

Base-2 logarithm of `x`.

Parameters

**x** : array_like
Input values.

**out** : ndarray, None, or tuple of ndarray and None, optional
A location into which the result is stored. If provided, it must have a shape that the inputs broadcast to. If not provided or None, a freshly-allocated array is returned. A tuple (possible only as a keyword argument) must have length equal to the number of outputs.

**where** : array_like, optional
This condition is broadcast over the input. At locations where the condition is True, the `out` array will be set to the ufunc result. Elsewhere, the `out` array will retain its original value. Note that if an uninitialized `out` array is created via the default `out=None`, locations within it where the condition is False will remain uninitialized.

****kwargs**
For other keyword-only arguments, see the [ufunc docs](../ufuncs#ufuncs-kwargs).

Returns

**y** : ndarray
Base-2 logarithm of `x`. This is a scalar if `x` is a scalar.

See also

[`log`](numpy.log#numpy.log "numpy.log"), [`log10`](numpy.log10#numpy.log10 "numpy.log10"), [`log1p`](numpy.log1p#numpy.log1p "numpy.log1p"), [`emath.log2`](numpy.emath.log2#numpy.emath.log2 "numpy.emath.log2")

#### Notes

New in version 1.3.0.

Logarithm is a multivalued function: for each `x` there is an infinite number of `z` such that `2**z = x`. The convention is to return the `z` whose imaginary part lies in `[-pi, pi]`.
For real-valued input data types, [`log2`](#numpy.log2 "numpy.log2") always returns real output. For each value that cannot be expressed as a real number or infinity, it yields `nan` and sets the `invalid` floating point error flag.

For complex-valued input, [`log2`](#numpy.log2 "numpy.log2") is a complex analytical function that has a branch cut `[-inf, 0]` and is continuous from above on it. [`log2`](#numpy.log2 "numpy.log2") handles the floating-point negative zero as an infinitesimal negative number, conforming to the C99 standard.

#### Examples

```
>>> x = np.array([0, 1, 2, 2**4])
>>> np.log2(x)
array([-Inf,   0.,   1.,   4.])
```

```
>>> xi = np.array([0+1.j, 1, 2+0.j, 4.j])
>>> np.log2(xi)
array([ 0.+2.26618007j,  0.+0.j        ,  1.+0.j        ,  2.+2.26618007j])
```

<https://numpy.org/doc/1.23/reference/generated/numpy.log2.html>

numpy.log10
===========

numpy.log10(*x*, */*, *out=None*, ***, *where=True*, *casting='same_kind'*, *order='K'*, *dtype=None*, *subok=True*[, *signature*, *extobj*]) *= <ufunc 'log10'>*

Return the base 10 logarithm of the input array, element-wise.

Parameters

**x** : array_like
Input values.

**out** : ndarray, None, or tuple of ndarray and None, optional
A location into which the result is stored. If provided, it must have a shape that the inputs broadcast to. If not provided or None, a freshly-allocated array is returned. A tuple (possible only as a keyword argument) must have length equal to the number of outputs.

**where** : array_like, optional
This condition is broadcast over the input. At locations where the condition is True, the `out` array will be set to the ufunc result. Elsewhere, the `out` array will retain its original value. Note that if an uninitialized `out` array is created via the default `out=None`, locations within it where the condition is False will remain uninitialized.

****kwargs**
For other keyword-only arguments, see the [ufunc docs](../ufuncs#ufuncs-kwargs).
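The log family (`log`, `log2`, `log10`) is tied together by the change-of-base identity `log_b(x) = log(x) / log(b)`; a quick numerical sketch:

```python
import numpy as np

x = np.array([1.0, 2.0, 8.0, 1000.0])
# Change of base: any logarithm is the natural log rescaled by a constant.
assert np.allclose(np.log2(x), np.log(x) / np.log(2))
assert np.allclose(np.log10(x), np.log(x) / np.log(10))

# log is the inverse of exp (kept to modest values to avoid overflow).
y = np.array([0.5, 1.0, 2.0])
assert np.allclose(np.log(np.exp(y)), y)
```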
Returns

**y** : ndarray
The logarithm to the base 10 of `x`, element-wise. NaNs are returned where `x` is negative. This is a scalar if `x` is a scalar.

See also

[`emath.log10`](numpy.emath.log10#numpy.emath.log10 "numpy.emath.log10")

#### Notes

Logarithm is a multivalued function: for each `x` there is an infinite number of `z` such that `10**z = x`. The convention is to return the `z` whose imaginary part lies in `[-pi, pi]`.

For real-valued input data types, [`log10`](#numpy.log10 "numpy.log10") always returns real output. For each value that cannot be expressed as a real number or infinity, it yields `nan` and sets the `invalid` floating point error flag.

For complex-valued input, [`log10`](#numpy.log10 "numpy.log10") is a complex analytical function that has a branch cut `[-inf, 0]` and is continuous from above on it. [`log10`](#numpy.log10 "numpy.log10") handles the floating-point negative zero as an infinitesimal negative number, conforming to the C99 standard.

#### References

1. <NAME> and <NAME>, “Handbook of Mathematical Functions”, 10th printing, 1964, pp. 67. <https://personal.math.ubc.ca/~cbm/aands/page_67.htm>
2. Wikipedia, “Logarithm”. <https://en.wikipedia.org/wiki/Logarithm>

#### Examples

```
>>> np.log10([1e-15, -3.])
array([-15.,  nan])
```

<https://numpy.org/doc/1.23/reference/generated/numpy.log10.html>

numpy.expm1
===========

numpy.expm1(*x*, */*, *out=None*, ***, *where=True*, *casting='same_kind'*, *order='K'*, *dtype=None*, *subok=True*[, *signature*, *extobj*]) *= <ufunc 'expm1'>*

Calculate `exp(x) - 1` for all elements in the array.

Parameters

**x** : array_like
Input values.

**out** : ndarray, None, or tuple of ndarray and None, optional
A location into which the result is stored. If provided, it must have a shape that the inputs broadcast to. If not provided or None, a freshly-allocated array is returned.
A tuple (possible only as a keyword argument) must have length equal to the number of outputs.

**where** : array_like, optional
This condition is broadcast over the input. At locations where the condition is True, the `out` array will be set to the ufunc result. Elsewhere, the `out` array will retain its original value. Note that if an uninitialized `out` array is created via the default `out=None`, locations within it where the condition is False will remain uninitialized.

****kwargs**
For other keyword-only arguments, see the [ufunc docs](../ufuncs#ufuncs-kwargs).

Returns

**out** : ndarray or scalar
Element-wise exponential minus one: `out = exp(x) - 1`. This is a scalar if `x` is a scalar.

See also

[`log1p`](numpy.log1p#numpy.log1p "numpy.log1p")
`log(1 + x)`, the inverse of expm1.

#### Notes

This function provides greater precision than `exp(x) - 1` for small values of `x`.

#### Examples

The true value of `exp(1e-10) - 1` is `1.00000000005e-10` to about 32 significant digits. This example shows the superiority of expm1 in this case.

```
>>> np.expm1(1e-10)
1.00000000005e-10
>>> np.exp(1e-10) - 1
1.000000082740371e-10
```

<https://numpy.org/doc/1.23/reference/generated/numpy.expm1.html>

numpy.log1p
===========

numpy.log1p(*x*, */*, *out=None*, ***, *where=True*, *casting='same_kind'*, *order='K'*, *dtype=None*, *subok=True*[, *signature*, *extobj*]) *= <ufunc 'log1p'>*

Return the natural logarithm of one plus the input array, element-wise.

Calculates `log(1 + x)`.

Parameters

**x** : array_like
Input values.

**out** : ndarray, None, or tuple of ndarray and None, optional
A location into which the result is stored. If provided, it must have a shape that the inputs broadcast to. If not provided or None, a freshly-allocated array is returned. A tuple (possible only as a keyword argument) must have length equal to the number of outputs.

**where** : array_like, optional
This condition is broadcast over the input.
At locations where the condition is True, the `out` array will be set to the ufunc result. Elsewhere, the `out` array will retain its original value. Note that if an uninitialized `out` array is created via the default `out=None`, locations within it where the condition is False will remain uninitialized.

****kwargs**
For other keyword-only arguments, see the [ufunc docs](../ufuncs#ufuncs-kwargs).

Returns

**y** : ndarray
Natural logarithm of `1 + x`, element-wise. This is a scalar if `x` is a scalar.

See also

[`expm1`](numpy.expm1#numpy.expm1 "numpy.expm1")
`exp(x) - 1`, the inverse of [`log1p`](#numpy.log1p "numpy.log1p").

#### Notes

For real-valued input, [`log1p`](#numpy.log1p "numpy.log1p") is accurate also for `x` so small that `1 + x == 1` in floating-point accuracy.

Logarithm is a multivalued function: for each `x` there is an infinite number of `z` such that `exp(z) = 1 + x`. The convention is to return the `z` whose imaginary part lies in `[-pi, pi]`.

For real-valued input data types, [`log1p`](#numpy.log1p "numpy.log1p") always returns real output. For each value that cannot be expressed as a real number or infinity, it yields `nan` and sets the `invalid` floating point error flag.

For complex-valued input, [`log1p`](#numpy.log1p "numpy.log1p") is a complex analytical function that has a branch cut `[-inf, -1]` and is continuous from above on it. [`log1p`](#numpy.log1p "numpy.log1p") handles the floating-point negative zero as an infinitesimal negative number, conforming to the C99 standard.

#### References

1. <NAME> and <NAME>, “Handbook of Mathematical Functions”, 10th printing, 1964, pp. 67. <https://personal.math.ubc.ca/~cbm/aands/page_67.htm>
2. Wikipedia, “Logarithm”. <https://en.wikipedia.org/wiki/Logarithm>

#### Examples

```
>>> np.log1p(1e-99)
1e-99
>>> np.log(1 + 1e-99)
0.0
```
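The small-`x` accuracy claim above pairs naturally with `expm1`: both avoid the cancellation that plain `log(1 + x)` and `exp(x) - 1` suffer. A minimal check:

```python
import numpy as np

x = 1e-15
# For tiny x, log1p(x) = x - x**2/2 + ..., so the result should be ~x itself.
assert np.isclose(np.log1p(x), x, rtol=1e-9)
# The naive form inherits the rounding error of computing 1 + x first.
assert abs(np.log1p(x) - x) < abs(np.log(1 + x) - x)
# expm1 and log1p are inverses of each other.
assert np.isclose(np.log1p(np.expm1(x)), x, rtol=1e-9)
```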
<https://numpy.org/doc/1.23/reference/generated/numpy.log1p.html>

numpy.sqrt
==========

numpy.sqrt(*x*, */*, *out=None*, ***, *where=True*, *casting='same_kind'*, *order='K'*, *dtype=None*, *subok=True*[, *signature*, *extobj*]) *= <ufunc 'sqrt'>*

Return the non-negative square-root of an array, element-wise.

Parameters

**x** : array_like
The values whose square-roots are required.

**out** : ndarray, None, or tuple of ndarray and None, optional
A location into which the result is stored. If provided, it must have a shape that the inputs broadcast to. If not provided or None, a freshly-allocated array is returned. A tuple (possible only as a keyword argument) must have length equal to the number of outputs.

**where** : array_like, optional
This condition is broadcast over the input. At locations where the condition is True, the `out` array will be set to the ufunc result. Elsewhere, the `out` array will retain its original value. Note that if an uninitialized `out` array is created via the default `out=None`, locations within it where the condition is False will remain uninitialized.

****kwargs**
For other keyword-only arguments, see the [ufunc docs](../ufuncs#ufuncs-kwargs).

Returns

**y** : ndarray
An array of the same shape as `x`, containing the positive square-root of each element in `x`. If any element in `x` is complex, a complex array is returned (and the square-roots of negative reals are calculated). If all of the elements in `x` are real, so is `y`, with negative elements returning `nan`. If `out` was provided, `y` is a reference to it. This is a scalar if `x` is a scalar.

See also

[`emath.sqrt`](numpy.emath.sqrt#numpy.emath.sqrt "numpy.emath.sqrt")
A version which returns complex numbers when given negative reals. Note that 0.0 and -0.0 are handled differently for complex inputs.

#### Notes

*sqrt* has, consistent with common convention, as its branch cut the real “interval” [`-inf`, 0), and is continuous from above on it.
A branch cut is a curve in the complex plane across which a given complex function fails to be continuous.

#### Examples

```
>>> np.sqrt([1, 4, 9])
array([ 1.,  2.,  3.])
```

```
>>> np.sqrt([4, -1, -3+4J])
array([ 2.+0.j,  0.+1.j,  1.+2.j])
```

```
>>> np.sqrt([4, -1, np.inf])
array([  2.,  nan,  inf])
```

<https://numpy.org/doc/1.23/reference/generated/numpy.sqrt.html>

numpy.square
============

numpy.square(*x*, */*, *out=None*, ***, *where=True*, *casting='same_kind'*, *order='K'*, *dtype=None*, *subok=True*[, *signature*, *extobj*]) *= <ufunc 'square'>*

Return the element-wise square of the input.

Parameters

**x** : array_like
Input data.

**out** : ndarray, None, or tuple of ndarray and None, optional
A location into which the result is stored. If provided, it must have a shape that the inputs broadcast to. If not provided or None, a freshly-allocated array is returned. A tuple (possible only as a keyword argument) must have length equal to the number of outputs.

**where** : array_like, optional
This condition is broadcast over the input. At locations where the condition is True, the `out` array will be set to the ufunc result. Elsewhere, the `out` array will retain its original value. Note that if an uninitialized `out` array is created via the default `out=None`, locations within it where the condition is False will remain uninitialized.

****kwargs**
For other keyword-only arguments, see the [ufunc docs](../ufuncs#ufuncs-kwargs).

Returns

**out** : ndarray or scalar
Element-wise `x*x`, of the same shape and dtype as `x`. This is a scalar if `x` is a scalar.

See also

[`numpy.linalg.matrix_power`](numpy.linalg.matrix_power#numpy.linalg.matrix_power "numpy.linalg.matrix_power"), [`sqrt`](numpy.sqrt#numpy.sqrt "numpy.sqrt"), [`power`](numpy.power#numpy.power "numpy.power")

#### Examples

```
>>> np.square([-1j, 1])
array([-1.-0.j,  1.+0.j])
```
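A quick round-trip between `square` and `sqrt`, including the negative-real behaviour described above; a minimal sketch:

```python
import numpy as np

x = np.array([0.0, 1.5, 4.0, 9.0])
assert np.allclose(np.square(x), x * x)       # square is elementwise x*x
assert np.allclose(np.sqrt(np.square(x)), x)  # sqrt undoes it for x >= 0

# Negative reals: nan for real input, a complex root for complex input.
with np.errstate(invalid="ignore"):  # suppress the invalid-value warning
    assert np.isnan(np.sqrt(-1.0))
assert np.isclose(np.sqrt(-1 + 0j), 1j)
```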
<https://numpy.org/doc/1.23/reference/generated/numpy.square.html>

numpy.cbrt
==========

numpy.cbrt(*x*, */*, *out=None*, ***, *where=True*, *casting='same_kind'*, *order='K'*, *dtype=None*, *subok=True*[, *signature*, *extobj*]) *= <ufunc 'cbrt'>*

Return the cube-root of an array, element-wise.

New in version 1.10.0.

Parameters

**x** : array_like
The values whose cube-roots are required.

**out** : ndarray, None, or tuple of ndarray and None, optional
A location into which the result is stored. If provided, it must have a shape that the inputs broadcast to. If not provided or None, a freshly-allocated array is returned. A tuple (possible only as a keyword argument) must have length equal to the number of outputs.

**where** : array_like, optional
This condition is broadcast over the input. At locations where the condition is True, the `out` array will be set to the ufunc result. Elsewhere, the `out` array will retain its original value. Note that if an uninitialized `out` array is created via the default `out=None`, locations within it where the condition is False will remain uninitialized.

****kwargs**
For other keyword-only arguments, see the [ufunc docs](../ufuncs#ufuncs-kwargs).

Returns

**y** : ndarray
An array of the same shape as `x`, containing the cube-root of each element in `x`. If `out` was provided, `y` is a reference to it. This is a scalar if `x` is a scalar.

#### Examples

```
>>> np.cbrt([1, 8, 27])
array([ 1.,  2.,  3.])
```

<https://numpy.org/doc/1.23/reference/generated/numpy.cbrt.html>

numpy.reciprocal
================

numpy.reciprocal(*x*, */*, *out=None*, ***, *where=True*, *casting='same_kind'*, *order='K'*, *dtype=None*, *subok=True*[, *signature*, *extobj*]) *= <ufunc 'reciprocal'>*

Return the reciprocal of the argument, element-wise.

Calculates `1/x`.

Parameters

**x** : array_like
Input array.
**out**ndarray, None, or tuple of ndarray and None, optional A location into which the result is stored. If provided, it must have a shape that the inputs broadcast to. If not provided or None, a freshly-allocated array is returned. A tuple (possible only as a keyword argument) must have length equal to the number of outputs. **where**array_like, optional This condition is broadcast over the input. At locations where the condition is True, the `out` array will be set to the ufunc result. Elsewhere, the `out` array will retain its original value. Note that if an uninitialized `out` array is created via the default `out=None`, locations within it where the condition is False will remain uninitialized. ****kwargs** For other keyword-only arguments, see the [ufunc docs](../ufuncs#ufuncs-kwargs). Returns **y**ndarray Return array. This is a scalar if `x` is a scalar. #### Notes Note This function is not designed to work with integers. For integer arguments with absolute value larger than 1 the result is always zero because of the way Python handles integer division. For integer zero the result is an overflow. #### Examples ``` >>> np.reciprocal(2.) 0.5 >>> np.reciprocal([1, 2., 3.33]) array([ 1. , 0.5 , 0.3003003]) ``` © 2005–2022 NumPy Developers Licensed under the 3-clause BSD License. <https://numpy.org/doc/1.23/reference/generated/numpy.reciprocal.htmlnumpy.gcd ========= numpy.gcd(*x1*, *x2*, */*, *out=None*, ***, *where=True*, *casting='same_kind'*, *order='K'*, *dtype=None*, *subok=True*[, *signature*, *extobj*])*=<ufunc 'gcd'>* Returns the greatest common divisor of `|x1|` and `|x2|` Parameters **x1, x2**array_like, int Arrays of values. If `x1.shape != x2.shape`, they must be broadcastable to a common shape (which becomes the shape of the output). Returns **y**ndarray or scalar The greatest common divisor of the absolute value of the inputs This is a scalar if both `x1` and `x2` are scalars. 
See also [`lcm`](numpy.lcm#numpy.lcm "numpy.lcm") The lowest common multiple #### Examples ``` >>> np.gcd(12, 20) 4 >>> np.gcd.reduce([15, 25, 35]) 5 >>> np.gcd(np.arange(6), 20) array([20, 1, 2, 1, 4, 5]) ``` © 2005–2022 NumPy Developers Licensed under the 3-clause BSD License. <https://numpy.org/doc/1.23/reference/generated/numpy.gcd.htmlnumpy.lcm ========= numpy.lcm(*x1*, *x2*, */*, *out=None*, ***, *where=True*, *casting='same_kind'*, *order='K'*, *dtype=None*, *subok=True*[, *signature*, *extobj*])*=<ufunc 'lcm'>* Returns the lowest common multiple of `|x1|` and `|x2|` Parameters **x1, x2**array_like, int Arrays of values. If `x1.shape != x2.shape`, they must be broadcastable to a common shape (which becomes the shape of the output). Returns **y**ndarray or scalar The lowest common multiple of the absolute value of the inputs This is a scalar if both `x1` and `x2` are scalars. See also [`gcd`](numpy.gcd#numpy.gcd "numpy.gcd") The greatest common divisor #### Examples ``` >>> np.lcm(12, 20) 60 >>> np.lcm.reduce([3, 12, 20]) 60 >>> np.lcm.reduce([40, 12, 20]) 120 >>> np.lcm(np.arange(6), 20) array([ 0, 20, 20, 60, 20, 20]) ``` © 2005–2022 NumPy Developers Licensed under the 3-clause BSD License. <https://numpy.org/doc/1.23/reference/generated/numpy.lcm.htmlnumpy.sin ========= numpy.sin(*x*, */*, *out=None*, ***, *where=True*, *casting='same_kind'*, *order='K'*, *dtype=None*, *subok=True*[, *signature*, *extobj*])*=<ufunc 'sin'>* Trigonometric sine, element-wise. Parameters **x**array_like Angle, in radians (\(2 \pi\) rad equals 360 degrees). **out**ndarray, None, or tuple of ndarray and None, optional A location into which the result is stored. If provided, it must have a shape that the inputs broadcast to. If not provided or None, a freshly-allocated array is returned. A tuple (possible only as a keyword argument) must have length equal to the number of outputs. **where**array_like, optional This condition is broadcast over the input. 
At locations where the condition is True, the `out` array will be set to the ufunc result. Elsewhere, the `out` array will retain its original value. Note that if an uninitialized `out` array is created via the default `out=None`, locations within it where the condition is False will remain uninitialized. ****kwargs** For other keyword-only arguments, see the [ufunc docs](../ufuncs#ufuncs-kwargs). Returns **y**array_like The sine of each element of x. This is a scalar if `x` is a scalar. See also [`arcsin`](numpy.arcsin#numpy.arcsin "numpy.arcsin"), [`sinh`](numpy.sinh#numpy.sinh "numpy.sinh"), [`cos`](numpy.cos#numpy.cos "numpy.cos") #### Notes The sine is one of the fundamental functions of trigonometry (the mathematical study of triangles). Consider a circle of radius 1 centered on the origin. A ray comes in from the \(+x\) axis, makes an angle at the origin (measured counter-clockwise from that axis), and departs from the origin. The \(y\) coordinate of the outgoing ray’s intersection with the unit circle is the sine of that angle. It ranges from -1 for \(x=3\pi / 2\) to +1 for \(\pi / 2.\) The function has zeroes where the angle is a multiple of \(\pi\). Sines of angles between \(\pi\) and \(2\pi\) are negative. The numerous properties of the sine and related functions are included in any standard trigonometry text. #### Examples Print sine of one angle: ``` >>> np.sin(np.pi/2.) 1.0 ``` Print sines of an array of angles given in degrees: ``` >>> np.sin(np.array((0., 30., 45., 60., 90.)) * np.pi / 180. ) array([ 0. , 0.5 , 0.70710678, 0.8660254 , 1. ]) ``` Plot the sine function: ``` >>> import matplotlib.pylab as plt >>> x = np.linspace(-np.pi, np.pi, 201) >>> plt.plot(x, np.sin(x)) >>> plt.xlabel('Angle [rad]') >>> plt.ylabel('sin(x)') >>> plt.axis('tight') >>> plt.show() ``` ![../../_images/numpy-sin-1.png] © 2005–2022 NumPy Developers Licensed under the 3-clause BSD License. 
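As a supplement to the sine examples above (editor-added, not part of the upstream NumPy examples), the radian convention and the periodicity noted in the Notes can be checked numerically; `np.deg2rad` is equivalent to the manual `* np.pi / 180` conversion used above:

```python
import numpy as np

# Angles given in degrees must be converted to radians before calling np.sin.
angles_deg = np.array([0.0, 30.0, 90.0, 150.0, 180.0])
angles_rad = np.deg2rad(angles_deg)  # same as angles_deg * np.pi / 180
s = np.sin(angles_rad)

# sin has period 2*pi: shifting every angle by 2*pi leaves the values unchanged.
assert np.allclose(s, np.sin(angles_rad + 2 * np.pi))

# Supplementary angles share the same sine: sin(pi - x) == sin(x).
assert np.allclose(np.sin(np.pi - angles_rad), s)
```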
<https://numpy.org/doc/1.23/reference/generated/numpy.sin.htmlnumpy.cos ========= numpy.cos(*x*, */*, *out=None*, ***, *where=True*, *casting='same_kind'*, *order='K'*, *dtype=None*, *subok=True*[, *signature*, *extobj*])*=<ufunc 'cos'>* Cosine element-wise. Parameters **x**array_like Input array in radians. **out**ndarray, None, or tuple of ndarray and None, optional A location into which the result is stored. If provided, it must have a shape that the inputs broadcast to. If not provided or None, a freshly-allocated array is returned. A tuple (possible only as a keyword argument) must have length equal to the number of outputs. **where**array_like, optional This condition is broadcast over the input. At locations where the condition is True, the `out` array will be set to the ufunc result. Elsewhere, the `out` array will retain its original value. Note that if an uninitialized `out` array is created via the default `out=None`, locations within it where the condition is False will remain uninitialized. ****kwargs** For other keyword-only arguments, see the [ufunc docs](../ufuncs#ufuncs-kwargs). Returns **y**ndarray The corresponding cosine values. This is a scalar if `x` is a scalar. #### Notes If `out` is provided, the function writes the result into it, and returns a reference to `out`. (See Examples) #### References <NAME> and <NAME>, Handbook of Mathematical Functions. New York, NY: Dover, 1972. 
#### Examples ``` >>> np.cos(np.array([0, np.pi/2, np.pi])) array([ 1.00000000e+00, 6.12303177e-17, -1.00000000e+00]) >>> >>> # Example of providing the optional output parameter >>> out1 = np.array([0], dtype='d') >>> out2 = np.cos([0.1], out1) >>> out2 is out1 True >>> >>> # Example of ValueError due to provision of shape mis-matched `out` >>> np.cos(np.zeros((3,3)),np.zeros((2,2))) Traceback (most recent call last): File "<stdin>", line 1, in <module> ValueError: operands could not be broadcast together with shapes (3,3) (2,2) ``` © 2005–2022 NumPy Developers Licensed under the 3-clause BSD License. <https://numpy.org/doc/1.23/reference/generated/numpy.cos.htmlnumpy.tan ========= numpy.tan(*x*, */*, *out=None*, ***, *where=True*, *casting='same_kind'*, *order='K'*, *dtype=None*, *subok=True*[, *signature*, *extobj*])*=<ufunc 'tan'>* Compute tangent element-wise. Equivalent to `np.sin(x)/np.cos(x)` element-wise. Parameters **x**array_like Input array. **out**ndarray, None, or tuple of ndarray and None, optional A location into which the result is stored. If provided, it must have a shape that the inputs broadcast to. If not provided or None, a freshly-allocated array is returned. A tuple (possible only as a keyword argument) must have length equal to the number of outputs. **where**array_like, optional This condition is broadcast over the input. At locations where the condition is True, the `out` array will be set to the ufunc result. Elsewhere, the `out` array will retain its original value. Note that if an uninitialized `out` array is created via the default `out=None`, locations within it where the condition is False will remain uninitialized. ****kwargs** For other keyword-only arguments, see the [ufunc docs](../ufuncs#ufuncs-kwargs). Returns **y**ndarray The corresponding tangent values. This is a scalar if `x` is a scalar. #### Notes If `out` is provided, the function writes the result into it, and returns a reference to `out`. 
(See Examples) #### References <NAME> and <NAME>, Handbook of Mathematical Functions. New York, NY: Dover, 1972. #### Examples ``` >>> from math import pi >>> np.tan(np.array([-pi,pi/2,pi])) array([ 1.22460635e-16, 1.63317787e+16, -1.22460635e-16]) >>> >>> # Example of providing the optional output parameter illustrating >>> # that what is returned is a reference to said parameter >>> out1 = np.array([0], dtype='d') >>> out2 = np.tan([0.1], out1) >>> out2 is out1 True >>> >>> # Example of ValueError due to provision of shape mis-matched `out` >>> np.tan(np.zeros((3,3)),np.zeros((2,2))) Traceback (most recent call last): File "<stdin>", line 1, in <module> ValueError: operands could not be broadcast together with shapes (3,3) (2,2) ``` © 2005–2022 NumPy Developers Licensed under the 3-clause BSD License. <https://numpy.org/doc/1.23/reference/generated/numpy.tan.html> numpy.arcsin ============ numpy.arcsin(*x*, */*, *out=None*, ***, *where=True*, *casting='same_kind'*, *order='K'*, *dtype=None*, *subok=True*[, *signature*, *extobj*])*=<ufunc 'arcsin'>* Inverse sine, element-wise. Parameters **x**array_like `y`-coordinate on the unit circle. **out**ndarray, None, or tuple of ndarray and None, optional A location into which the result is stored. If provided, it must have a shape that the inputs broadcast to. If not provided or None, a freshly-allocated array is returned. A tuple (possible only as a keyword argument) must have length equal to the number of outputs. **where**array_like, optional This condition is broadcast over the input. At locations where the condition is True, the `out` array will be set to the ufunc result. Elsewhere, the `out` array will retain its original value. Note that if an uninitialized `out` array is created via the default `out=None`, locations within it where the condition is False will remain uninitialized. ****kwargs** For other keyword-only arguments, see the [ufunc docs](../ufuncs#ufuncs-kwargs). 
Returns **angle**ndarray The inverse sine of each element in `x`, in radians and in the closed interval `[-pi/2, pi/2]`. This is a scalar if `x` is a scalar. See also [`sin`](numpy.sin#numpy.sin "numpy.sin"), [`cos`](numpy.cos#numpy.cos "numpy.cos"), [`arccos`](numpy.arccos#numpy.arccos "numpy.arccos"), [`tan`](numpy.tan#numpy.tan "numpy.tan"), [`arctan`](numpy.arctan#numpy.arctan "numpy.arctan"), [`arctan2`](numpy.arctan2#numpy.arctan2 "numpy.arctan2"), [`emath.arcsin`](numpy.emath.arcsin#numpy.emath.arcsin "numpy.emath.arcsin") #### Notes [`arcsin`](#numpy.arcsin "numpy.arcsin") is a multivalued function: for each `x` there are infinitely many numbers `z` such that \(sin(z) = x\). The convention is to return the angle `z` whose real part lies in [-pi/2, pi/2]. For real-valued input data types, *arcsin* always returns real output. For each value that cannot be expressed as a real number or infinity, it yields `nan` and sets the `invalid` floating point error flag. For complex-valued input, [`arcsin`](#numpy.arcsin "numpy.arcsin") is a complex analytic function that has, by convention, the branch cuts [-inf, -1] and [1, inf] and is continuous from above on the former and from below on the latter. The inverse sine is also known as `asin` or sin^{-1}. #### References <NAME>. and <NAME>., *Handbook of Mathematical Functions*, 10th printing, New York: Dover, 1964, pp. 79ff. <https://personal.math.ubc.ca/~cbm/aands/page_79.htm #### Examples ``` >>> np.arcsin(1) # pi/2 1.5707963267948966 >>> np.arcsin(-1) # -pi/2 -1.5707963267948966 >>> np.arcsin(0) 0.0 ``` © 2005–2022 NumPy Developers Licensed under the 3-clause BSD License. <https://numpy.org/doc/1.23/reference/generated/numpy.arcsin.htmlnumpy.arccos ============ numpy.arccos(*x*, */*, *out=None*, ***, *where=True*, *casting='same_kind'*, *order='K'*, *dtype=None*, *subok=True*[, *signature*, *extobj*])*=<ufunc 'arccos'>* Trigonometric inverse cosine, element-wise. 
The inverse of [`cos`](numpy.cos#numpy.cos "numpy.cos") so that, if `y = cos(x)`, then `x = arccos(y)`. Parameters **x**array_like `x`-coordinate on the unit circle. For real arguments, the domain is [-1, 1]. **out**ndarray, None, or tuple of ndarray and None, optional A location into which the result is stored. If provided, it must have a shape that the inputs broadcast to. If not provided or None, a freshly-allocated array is returned. A tuple (possible only as a keyword argument) must have length equal to the number of outputs. **where**array_like, optional This condition is broadcast over the input. At locations where the condition is True, the `out` array will be set to the ufunc result. Elsewhere, the `out` array will retain its original value. Note that if an uninitialized `out` array is created via the default `out=None`, locations within it where the condition is False will remain uninitialized. ****kwargs** For other keyword-only arguments, see the [ufunc docs](../ufuncs#ufuncs-kwargs). Returns **angle**ndarray The angle of the ray intersecting the unit circle at the given `x`-coordinate in radians [0, pi]. This is a scalar if `x` is a scalar. See also [`cos`](numpy.cos#numpy.cos "numpy.cos"), [`arctan`](numpy.arctan#numpy.arctan "numpy.arctan"), [`arcsin`](numpy.arcsin#numpy.arcsin "numpy.arcsin"), [`emath.arccos`](numpy.emath.arccos#numpy.emath.arccos "numpy.emath.arccos") #### Notes [`arccos`](#numpy.arccos "numpy.arccos") is a multivalued function: for each `x` there are infinitely many numbers `z` such that `cos(z) = x`. The convention is to return the angle `z` whose real part lies in `[0, pi]`. For real-valued input data types, [`arccos`](#numpy.arccos "numpy.arccos") always returns real output. For each value that cannot be expressed as a real number or infinity, it yields `nan` and sets the `invalid` floating point error flag. 
For complex-valued input, [`arccos`](#numpy.arccos "numpy.arccos") is a complex analytic function that has branch cuts `[-inf, -1]` and `[1, inf]` and is continuous from above on the former and from below on the latter. The inverse [`cos`](numpy.cos#numpy.cos "numpy.cos") is also known as `acos` or cos^-1. #### References <NAME> and <NAME>, “Handbook of Mathematical Functions”, 10th printing, 1964, pp. 79. <https://personal.math.ubc.ca/~cbm/aands/page_79.htm #### Examples We expect the arccos of 1 to be 0, and of -1 to be pi: ``` >>> np.arccos([1, -1]) array([ 0. , 3.14159265]) ``` Plot arccos: ``` >>> import matplotlib.pyplot as plt >>> x = np.linspace(-1, 1, num=100) >>> plt.plot(x, np.arccos(x)) >>> plt.axis('tight') >>> plt.show() ``` ![../../_images/numpy-arccos-1.png] © 2005–2022 NumPy Developers Licensed under the 3-clause BSD License. <https://numpy.org/doc/1.23/reference/generated/numpy.arccos.htmlnumpy.arctan ============ numpy.arctan(*x*, */*, *out=None*, ***, *where=True*, *casting='same_kind'*, *order='K'*, *dtype=None*, *subok=True*[, *signature*, *extobj*])*=<ufunc 'arctan'>* Trigonometric inverse tangent, element-wise. The inverse of tan, so that if `y = tan(x)` then `x = arctan(y)`. Parameters **x**array_like **out**ndarray, None, or tuple of ndarray and None, optional A location into which the result is stored. If provided, it must have a shape that the inputs broadcast to. If not provided or None, a freshly-allocated array is returned. A tuple (possible only as a keyword argument) must have length equal to the number of outputs. **where**array_like, optional This condition is broadcast over the input. At locations where the condition is True, the `out` array will be set to the ufunc result. Elsewhere, the `out` array will retain its original value. Note that if an uninitialized `out` array is created via the default `out=None`, locations within it where the condition is False will remain uninitialized. 
****kwargs** For other keyword-only arguments, see the [ufunc docs](../ufuncs#ufuncs-kwargs). Returns **out**ndarray or scalar Out has the same shape as `x`. Its real part is in `[-pi/2, pi/2]` (`arctan(+/-inf)` returns `+/-pi/2`). This is a scalar if `x` is a scalar. See also [`arctan2`](numpy.arctan2#numpy.arctan2 "numpy.arctan2") The “four quadrant” arctan of the angle formed by (`x`, `y`) and the positive `x`-axis. [`angle`](numpy.angle#numpy.angle "numpy.angle") Argument of complex values. #### Notes [`arctan`](#numpy.arctan "numpy.arctan") is a multi-valued function: for each `x` there are infinitely many numbers `z` such that tan(`z`) = `x`. The convention is to return the angle `z` whose real part lies in [-pi/2, pi/2]. For real-valued input data types, [`arctan`](#numpy.arctan "numpy.arctan") always returns real output. For each value that cannot be expressed as a real number or infinity, it yields `nan` and sets the `invalid` floating point error flag. For complex-valued input, [`arctan`](#numpy.arctan "numpy.arctan") is a complex analytic function that has [`1j, infj`] and [`-1j, -infj`] as branch cuts, and is continuous from the left on the former and from the right on the latter. The inverse tangent is also known as `atan` or tan^{-1}. #### References <NAME>. and <NAME>., *Handbook of Mathematical Functions*, 10th printing, New York: Dover, 1964, pp. 79. <https://personal.math.ubc.ca/~cbm/aands/page_79.htm #### Examples We expect the arctan of 0 to be 0, and of 1 to be pi/4: ``` >>> np.arctan([0, 1]) array([ 0. , 0.78539816]) ``` ``` >>> np.pi/4 0.78539816339744828 ``` Plot arctan: ``` >>> import matplotlib.pyplot as plt >>> x = np.linspace(-10, 10) >>> plt.plot(x, np.arctan(x)) >>> plt.axis('tight') >>> plt.show() ``` ![../../_images/numpy-arctan-1.png] © 2005–2022 NumPy Developers Licensed under the 3-clause BSD License. 
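Because `arctan` only sees the ratio of the coordinates, it cannot recover the quadrant; `arctan2`, which takes the two coordinates separately, can. A short editor-added sketch of the difference:

```python
import numpy as np

# arctan only sees the ratio y/x, so opposite quadrants collapse together:
# (1, 1) and (-1, -1) both have y/x == 1 and both map to pi/4.
assert np.isclose(np.arctan(1.0 / 1.0), np.pi / 4)
assert np.isclose(np.arctan(-1.0 / -1.0), np.pi / 4)

# arctan2 receives y and x separately and recovers the true quadrant.
assert np.isclose(np.arctan2(1.0, 1.0), np.pi / 4)          # first quadrant
assert np.isclose(np.arctan2(-1.0, -1.0), -3 * np.pi / 4)   # third quadrant
```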
<https://numpy.org/doc/1.23/reference/generated/numpy.arctan.htmlnumpy.arctan2 ============= numpy.arctan2(*x1*, *x2*, */*, *out=None*, ***, *where=True*, *casting='same_kind'*, *order='K'*, *dtype=None*, *subok=True*[, *signature*, *extobj*])*=<ufunc 'arctan2'>* Element-wise arc tangent of `x1/x2` choosing the quadrant correctly. The quadrant (i.e., branch) is chosen so that `arctan2(x1, x2)` is the signed angle in radians between the ray ending at the origin and passing through the point (1,0), and the ray ending at the origin and passing through the point (`x2`, `x1`). (Note the role reversal: the “`y`-coordinate” is the first function parameter, the “`x`-coordinate” is the second.) By IEEE convention, this function is defined for `x2` = +/-0 and for either or both of `x1` and `x2` = +/-inf (see Notes for specific values). This function is not defined for complex-valued arguments; for the so-called argument of complex values, use [`angle`](numpy.angle#numpy.angle "numpy.angle"). Parameters **x1**array_like, real-valued `y`-coordinates. **x2**array_like, real-valued `x`-coordinates. If `x1.shape != x2.shape`, they must be broadcastable to a common shape (which becomes the shape of the output). **out**ndarray, None, or tuple of ndarray and None, optional A location into which the result is stored. If provided, it must have a shape that the inputs broadcast to. If not provided or None, a freshly-allocated array is returned. A tuple (possible only as a keyword argument) must have length equal to the number of outputs. **where**array_like, optional This condition is broadcast over the input. At locations where the condition is True, the `out` array will be set to the ufunc result. Elsewhere, the `out` array will retain its original value. Note that if an uninitialized `out` array is created via the default `out=None`, locations within it where the condition is False will remain uninitialized. 
****kwargs** For other keyword-only arguments, see the [ufunc docs](../ufuncs#ufuncs-kwargs). Returns **angle**ndarray Array of angles in radians, in the range `[-pi, pi]`. This is a scalar if both `x1` and `x2` are scalars. See also [`arctan`](numpy.arctan#numpy.arctan "numpy.arctan"), [`tan`](numpy.tan#numpy.tan "numpy.tan"), [`angle`](numpy.angle#numpy.angle "numpy.angle") #### Notes *arctan2* is identical to the `atan2` function of the underlying C library. The following special values are defined in the C standard: [[1]](#r73eacd397847-1) | `x1` | `x2` | `arctan2(x1,x2)` | | --- | --- | --- | | +/- 0 | +0 | +/- 0 | | +/- 0 | -0 | +/- pi | | > 0 | +/-inf | +0 / +pi | | < 0 | +/-inf | -0 / -pi | | +/-inf | +inf | +/- (pi/4) | | +/-inf | -inf | +/- (3*pi/4) | Note that +0 and -0 are distinct floating point numbers, as are +inf and -inf. #### References [1](#id1) ISO/IEC standard 9899:1999, “Programming language C.” #### Examples Consider four points in different quadrants: ``` >>> x = np.array([-1, +1, +1, -1]) >>> y = np.array([-1, -1, +1, +1]) >>> np.arctan2(y, x) * 180 / np.pi array([-135., -45., 45., 135.]) ``` Note the order of the parameters. [`arctan2`](#numpy.arctan2 "numpy.arctan2") is defined also when `x2` = 0 and at several other special points, obtaining values in the range `[-pi, pi]`: ``` >>> np.arctan2([1., -1.], [0., 0.]) array([ 1.57079633, -1.57079633]) >>> np.arctan2([0., 0., np.inf], [+0., -0., np.inf]) array([0. , 3.14159265, 0.78539816]) ``` © 2005–2022 NumPy Developers Licensed under the 3-clause BSD License. <https://numpy.org/doc/1.23/reference/generated/numpy.arctan2.htmlnumpy.hypot =========== numpy.hypot(*x1*, *x2*, */*, *out=None*, ***, *where=True*, *casting='same_kind'*, *order='K'*, *dtype=None*, *subok=True*[, *signature*, *extobj*])*=<ufunc 'hypot'>* Given the “legs” of a right triangle, return its hypotenuse. Equivalent to `sqrt(x1**2 + x2**2)`, element-wise. 
If `x1` or `x2` is scalar_like (i.e., unambiguously cast-able to a scalar type), it is broadcast for use with each element of the other argument. (See Examples) Parameters **x1, x2**array_like Leg of the triangle(s). If `x1.shape != x2.shape`, they must be broadcastable to a common shape (which becomes the shape of the output). **out**ndarray, None, or tuple of ndarray and None, optional A location into which the result is stored. If provided, it must have a shape that the inputs broadcast to. If not provided or None, a freshly-allocated array is returned. A tuple (possible only as a keyword argument) must have length equal to the number of outputs. **where**array_like, optional This condition is broadcast over the input. At locations where the condition is True, the `out` array will be set to the ufunc result. Elsewhere, the `out` array will retain its original value. Note that if an uninitialized `out` array is created via the default `out=None`, locations within it where the condition is False will remain uninitialized. ****kwargs** For other keyword-only arguments, see the [ufunc docs](../ufuncs#ufuncs-kwargs). Returns **z**ndarray The hypotenuse of the triangle(s). This is a scalar if both `x1` and `x2` are scalars. #### Examples ``` >>> np.hypot(3*np.ones((3, 3)), 4*np.ones((3, 3))) array([[ 5., 5., 5.], [ 5., 5., 5.], [ 5., 5., 5.]]) ``` Example showing broadcast of scalar_like argument: ``` >>> np.hypot(3*np.ones((3, 3)), [4]) array([[ 5., 5., 5.], [ 5., 5., 5.], [ 5., 5., 5.]]) ``` © 2005–2022 NumPy Developers Licensed under the 3-clause BSD License. <https://numpy.org/doc/1.23/reference/generated/numpy.hypot.htmlnumpy.sinh ========== numpy.sinh(*x*, */*, *out=None*, ***, *where=True*, *casting='same_kind'*, *order='K'*, *dtype=None*, *subok=True*[, *signature*, *extobj*])*=<ufunc 'sinh'>* Hyperbolic sine, element-wise. Equivalent to `1/2 * (np.exp(x) - np.exp(-x))` or `-1j * np.sin(1j*x)`. Parameters **x**array_like Input array. 
**out**ndarray, None, or tuple of ndarray and None, optional A location into which the result is stored. If provided, it must have a shape that the inputs broadcast to. If not provided or None, a freshly-allocated array is returned. A tuple (possible only as a keyword argument) must have length equal to the number of outputs. **where**array_like, optional This condition is broadcast over the input. At locations where the condition is True, the `out` array will be set to the ufunc result. Elsewhere, the `out` array will retain its original value. Note that if an uninitialized `out` array is created via the default `out=None`, locations within it where the condition is False will remain uninitialized. ****kwargs** For other keyword-only arguments, see the [ufunc docs](../ufuncs#ufuncs-kwargs). Returns **y**ndarray The corresponding hyperbolic sine values. This is a scalar if `x` is a scalar. #### Notes If `out` is provided, the function writes the result into it, and returns a reference to `out`. (See Examples) #### References <NAME> and <NAME>, Handbook of Mathematical Functions. New York, NY: Dover, 1972, pg. 83. #### Examples ``` >>> np.sinh(0) 0.0 >>> np.sinh(np.pi*1j/2) 1j >>> np.sinh(np.pi*1j) # (exact value is 0) 1.2246063538223773e-016j >>> # Discrepancy due to vagaries of floating point arithmetic. ``` ``` >>> # Example of providing the optional output parameter >>> out1 = np.array([0], dtype='d') >>> out2 = np.sinh([0.1], out1) >>> out2 is out1 True ``` ``` >>> # Example of ValueError due to provision of shape mis-matched `out` >>> np.sinh(np.zeros((3,3)),np.zeros((2,2))) Traceback (most recent call last): File "<stdin>", line 1, in <module> ValueError: operands could not be broadcast together with shapes (3,3) (2,2) ``` © 2005–2022 NumPy Developers Licensed under the 3-clause BSD License. 
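The exponential definition quoted in the `sinh` description above can be verified directly; a quick editor-added numerical check:

```python
import numpy as np

x = np.linspace(-2.0, 2.0, 9)

# Defining identity: sinh(x) = (e**x - e**-x) / 2.
assert np.allclose(np.sinh(x), 0.5 * (np.exp(x) - np.exp(-x)))

# sinh is an odd function: sinh(-x) == -sinh(x).
assert np.allclose(np.sinh(-x), -np.sinh(x))
```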
<https://numpy.org/doc/1.23/reference/generated/numpy.sinh.htmlnumpy.cosh ========== numpy.cosh(*x*, */*, *out=None*, ***, *where=True*, *casting='same_kind'*, *order='K'*, *dtype=None*, *subok=True*[, *signature*, *extobj*])*=<ufunc 'cosh'>* Hyperbolic cosine, element-wise. Equivalent to `1/2 * (np.exp(x) + np.exp(-x))` and `np.cos(1j*x)`. Parameters **x**array_like Input array. **out**ndarray, None, or tuple of ndarray and None, optional A location into which the result is stored. If provided, it must have a shape that the inputs broadcast to. If not provided or None, a freshly-allocated array is returned. A tuple (possible only as a keyword argument) must have length equal to the number of outputs. **where**array_like, optional This condition is broadcast over the input. At locations where the condition is True, the `out` array will be set to the ufunc result. Elsewhere, the `out` array will retain its original value. Note that if an uninitialized `out` array is created via the default `out=None`, locations within it where the condition is False will remain uninitialized. ****kwargs** For other keyword-only arguments, see the [ufunc docs](../ufuncs#ufuncs-kwargs). Returns **out**ndarray or scalar Output array of same shape as `x`. This is a scalar if `x` is a scalar. #### Examples ``` >>> np.cosh(0) 1.0 ``` The hyperbolic cosine describes the shape of a hanging cable: ``` >>> import matplotlib.pyplot as plt >>> x = np.linspace(-4, 4, 1000) >>> plt.plot(x, np.cosh(x)) >>> plt.show() ``` ![../../_images/numpy-cosh-1.png] © 2005–2022 NumPy Developers Licensed under the 3-clause BSD License. <https://numpy.org/doc/1.23/reference/generated/numpy.cosh.htmlnumpy.tanh ========== numpy.tanh(*x*, */*, *out=None*, ***, *where=True*, *casting='same_kind'*, *order='K'*, *dtype=None*, *subok=True*[, *signature*, *extobj*])*=<ufunc 'tanh'>* Compute hyperbolic tangent element-wise. Equivalent to `np.sinh(x)/np.cosh(x)` or `-1j * np.tan(1j*x)`. 
Parameters **x**array_like Input array. **out**ndarray, None, or tuple of ndarray and None, optional A location into which the result is stored. If provided, it must have a shape that the inputs broadcast to. If not provided or None, a freshly-allocated array is returned. A tuple (possible only as a keyword argument) must have length equal to the number of outputs. **where**array_like, optional This condition is broadcast over the input. At locations where the condition is True, the `out` array will be set to the ufunc result. Elsewhere, the `out` array will retain its original value. Note that if an uninitialized `out` array is created via the default `out=None`, locations within it where the condition is False will remain uninitialized. ****kwargs** For other keyword-only arguments, see the [ufunc docs](../ufuncs#ufuncs-kwargs). Returns **y**ndarray The corresponding hyperbolic tangent values. This is a scalar if `x` is a scalar. #### Notes If `out` is provided, the function writes the result into it, and returns a reference to `out`. (See Examples) #### References 1 <NAME> and <NAME>, Handbook of Mathematical Functions. New York, NY: Dover, 1972, pg. 83. <https://personal.math.ubc.ca/~cbm/aands/page_83.htm 2 Wikipedia, “Hyperbolic function”, <https://en.wikipedia.org/wiki/Hyperbolic_function #### Examples ``` >>> np.tanh((0, np.pi*1j, np.pi*1j/2)) array([ 0. +0.00000000e+00j, 0. -1.22460635e-16j, 0. 
+1.63317787e+16j]) ``` ``` >>> # Example of providing the optional output parameter illustrating >>> # that what is returned is a reference to said parameter >>> out1 = np.array([0], dtype='d') >>> out2 = np.tanh([0.1], out1) >>> out2 is out1 True ``` ``` >>> # Example of ValueError due to provision of shape mis-matched `out` >>> np.tanh(np.zeros((3,3)),np.zeros((2,2))) Traceback (most recent call last): File "<stdin>", line 1, in <module> ValueError: operands could not be broadcast together with shapes (3,3) (2,2) ``` © 2005–2022 NumPy Developers Licensed under the 3-clause BSD License. <https://numpy.org/doc/1.23/reference/generated/numpy.tanh.htmlnumpy.arcsinh ============= numpy.arcsinh(*x*, */*, *out=None*, ***, *where=True*, *casting='same_kind'*, *order='K'*, *dtype=None*, *subok=True*[, *signature*, *extobj*])*=<ufunc 'arcsinh'>* Inverse hyperbolic sine element-wise. Parameters **x**array_like Input array. **out**ndarray, None, or tuple of ndarray and None, optional A location into which the result is stored. If provided, it must have a shape that the inputs broadcast to. If not provided or None, a freshly-allocated array is returned. A tuple (possible only as a keyword argument) must have length equal to the number of outputs. **where**array_like, optional This condition is broadcast over the input. At locations where the condition is True, the `out` array will be set to the ufunc result. Elsewhere, the `out` array will retain its original value. Note that if an uninitialized `out` array is created via the default `out=None`, locations within it where the condition is False will remain uninitialized. ****kwargs** For other keyword-only arguments, see the [ufunc docs](../ufuncs#ufuncs-kwargs). Returns **out**ndarray or scalar Array of the same shape as `x`. This is a scalar if `x` is a scalar. #### Notes [`arcsinh`](#numpy.arcsinh "numpy.arcsinh") is a multivalued function: for each `x` there are infinitely many numbers `z` such that `sinh(z) = x`. 
The convention is to return the `z` whose imaginary part lies in `[-pi/2, pi/2]`. For real-valued input data types, [`arcsinh`](#numpy.arcsinh "numpy.arcsinh") always returns real output. For each value that cannot be expressed as a real number or infinity, it returns `nan` and sets the `invalid` floating point error flag. For complex-valued input, [`arcsinh`](#numpy.arcsinh "numpy.arcsinh") is a complex analytical function that has branch cuts `[1j, infj]` and `[-1j, -infj]` and is continuous from the right on the former and from the left on the latter. The inverse hyperbolic sine is also known as `asinh` or `sinh^-1`. #### References 1 <NAME> and <NAME>, “Handbook of Mathematical Functions”, 10th printing, 1964, pp. 86. <https://personal.math.ubc.ca/~cbm/aands/page_86.htm> 2 Wikipedia, “Inverse hyperbolic function”, <https://en.wikipedia.org/wiki/Arcsinh> #### Examples ``` >>> np.arcsinh(np.array([np.e, 10.0])) array([ 1.72538256, 2.99822295]) ``` © 2005–2022 NumPy Developers Licensed under the 3-clause BSD License. <https://numpy.org/doc/1.23/reference/generated/numpy.arcsinh.html> numpy.arccosh ============= numpy.arccosh(*x*, */*, *out=None*, ***, *where=True*, *casting='same_kind'*, *order='K'*, *dtype=None*, *subok=True*[, *signature*, *extobj*])*=<ufunc 'arccosh'>* Inverse hyperbolic cosine, element-wise. Parameters **x**array_like Input array. **out**ndarray, None, or tuple of ndarray and None, optional A location into which the result is stored. If provided, it must have a shape that the inputs broadcast to. If not provided or None, a freshly-allocated array is returned. A tuple (possible only as a keyword argument) must have length equal to the number of outputs. **where**array_like, optional This condition is broadcast over the input. At locations where the condition is True, the `out` array will be set to the ufunc result. Elsewhere, the `out` array will retain its original value. 
Note that if an uninitialized `out` array is created via the default `out=None`, locations within it where the condition is False will remain uninitialized.

****kwargs** For other keyword-only arguments, see the [ufunc docs](../ufuncs#ufuncs-kwargs).

Returns

**arccosh**ndarray Array of the same shape as `x`. This is a scalar if `x` is a scalar.

See also

[`cosh`](numpy.cosh#numpy.cosh "numpy.cosh"), [`arcsinh`](numpy.arcsinh#numpy.arcsinh "numpy.arcsinh"), [`sinh`](numpy.sinh#numpy.sinh "numpy.sinh"), [`arctanh`](numpy.arctanh#numpy.arctanh "numpy.arctanh"), [`tanh`](numpy.tanh#numpy.tanh "numpy.tanh")

#### Notes

[`arccosh`](#numpy.arccosh "numpy.arccosh") is a multivalued function: for each `x` there are infinitely many numbers `z` such that `cosh(z) = x`. The convention is to return the `z` whose imaginary part lies in `[-pi, pi]` and the real part in `[0, inf]`.

For real-valued input data types, [`arccosh`](#numpy.arccosh "numpy.arccosh") always returns real output. For each value that cannot be expressed as a real number or infinity, it yields `nan` and sets the `invalid` floating point error flag.

For complex-valued input, [`arccosh`](#numpy.arccosh "numpy.arccosh") is a complex analytical function that has a branch cut `[-inf, 1]` and is continuous from above on it.

#### References

1 M. Abramowitz and I. A. Stegun, “Handbook of Mathematical Functions”, 10th printing, 1964, pp. 86. <https://personal.math.ubc.ca/~cbm/aands/page_86.htm>

2 Wikipedia, “Inverse hyperbolic function”, <https://en.wikipedia.org/wiki/Arccosh>

#### Examples

```
>>> np.arccosh([np.e, 10.0])
array([ 1.65745445,  2.99322285])
>>> np.arccosh(1)
0.0
```

© 2005–2022 NumPy Developers Licensed under the 3-clause BSD License.
<https://numpy.org/doc/1.23/reference/generated/numpy.arccosh.htmlnumpy.arctanh ============= numpy.arctanh(*x*, */*, *out=None*, ***, *where=True*, *casting='same_kind'*, *order='K'*, *dtype=None*, *subok=True*[, *signature*, *extobj*])*=<ufunc 'arctanh'>* Inverse hyperbolic tangent element-wise. Parameters **x**array_like Input array. **out**ndarray, None, or tuple of ndarray and None, optional A location into which the result is stored. If provided, it must have a shape that the inputs broadcast to. If not provided or None, a freshly-allocated array is returned. A tuple (possible only as a keyword argument) must have length equal to the number of outputs. **where**array_like, optional This condition is broadcast over the input. At locations where the condition is True, the `out` array will be set to the ufunc result. Elsewhere, the `out` array will retain its original value. Note that if an uninitialized `out` array is created via the default `out=None`, locations within it where the condition is False will remain uninitialized. ****kwargs** For other keyword-only arguments, see the [ufunc docs](../ufuncs#ufuncs-kwargs). Returns **out**ndarray or scalar Array of the same shape as `x`. This is a scalar if `x` is a scalar. See also [`emath.arctanh`](numpy.emath.arctanh#numpy.emath.arctanh "numpy.emath.arctanh") #### Notes [`arctanh`](#numpy.arctanh "numpy.arctanh") is a multivalued function: for each `x` there are infinitely many numbers `z` such that `tanh(z) = x`. The convention is to return the `z` whose imaginary part lies in `[-pi/2, pi/2]`. For real-valued input data types, [`arctanh`](#numpy.arctanh "numpy.arctanh") always returns real output. For each value that cannot be expressed as a real number or infinity, it yields `nan` and sets the `invalid` floating point error flag. 
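The real-input behavior just described can be checked directly. A minimal sketch (`np.errstate` is used here to silence the expected invalid-value warning; any real input outside `[-1, 1]` has no real inverse hyperbolic tangent):

```python
import numpy as np

# arctanh is real-valued only on [-1, 1]; outside that interval the
# result cannot be expressed as a real number, so NumPy returns nan
# and sets the `invalid` floating point error flag.
with np.errstate(invalid="ignore"):
    result = np.arctanh(np.array([0.5, 2.0]))

print(np.isnan(result))  # first element is finite, second is nan
```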
For complex-valued input, [`arctanh`](#numpy.arctanh "numpy.arctanh") is a complex analytical function that has branch cuts `[-1, -inf]` and `[1, inf]` and is continuous from above on the former and from below on the latter.

The inverse hyperbolic tangent is also known as `atanh` or `tanh^-1`.

#### References

1 M. Abramowitz and I. A. Stegun, “Handbook of Mathematical Functions”, 10th printing, 1964, pp. 86. <https://personal.math.ubc.ca/~cbm/aands/page_86.htm>

2 Wikipedia, “Inverse hyperbolic function”, <https://en.wikipedia.org/wiki/Arctanh>

#### Examples

```
>>> np.arctanh([0, -0.5])
array([ 0.        , -0.54930614])
```

© 2005–2022 NumPy Developers Licensed under the 3-clause BSD License. <https://numpy.org/doc/1.23/reference/generated/numpy.arctanh.html>

numpy.degrees
=============

numpy.degrees(*x*, */*, *out=None*, ***, *where=True*, *casting='same_kind'*, *order='K'*, *dtype=None*, *subok=True*[, *signature*, *extobj*])*=<ufunc 'degrees'>*

Convert angles from radians to degrees.

Parameters

**x**array_like Input array in radians.

**out**ndarray, None, or tuple of ndarray and None, optional A location into which the result is stored. If provided, it must have a shape that the inputs broadcast to. If not provided or None, a freshly-allocated array is returned. A tuple (possible only as a keyword argument) must have length equal to the number of outputs.

**where**array_like, optional This condition is broadcast over the input. At locations where the condition is True, the `out` array will be set to the ufunc result. Elsewhere, the `out` array will retain its original value. Note that if an uninitialized `out` array is created via the default `out=None`, locations within it where the condition is False will remain uninitialized.

****kwargs** For other keyword-only arguments, see the [ufunc docs](../ufuncs#ufuncs-kwargs).

Returns

**y**ndarray of floats The corresponding degree values; if `out` was supplied this is a reference to it. This is a scalar if `x` is a scalar.
See also [`rad2deg`](numpy.rad2deg#numpy.rad2deg "numpy.rad2deg") equivalent function #### Examples Convert a radian array to degrees ``` >>> rad = np.arange(12.)*np.pi/6 >>> np.degrees(rad) array([ 0., 30., 60., 90., 120., 150., 180., 210., 240., 270., 300., 330.]) ``` ``` >>> out = np.zeros((rad.shape)) >>> r = np.degrees(rad, out) >>> np.all(r == out) True ``` © 2005–2022 NumPy Developers Licensed under the 3-clause BSD License. <https://numpy.org/doc/1.23/reference/generated/numpy.degrees.htmlnumpy.radians ============= numpy.radians(*x*, */*, *out=None*, ***, *where=True*, *casting='same_kind'*, *order='K'*, *dtype=None*, *subok=True*[, *signature*, *extobj*])*=<ufunc 'radians'>* Convert angles from degrees to radians. Parameters **x**array_like Input array in degrees. **out**ndarray, None, or tuple of ndarray and None, optional A location into which the result is stored. If provided, it must have a shape that the inputs broadcast to. If not provided or None, a freshly-allocated array is returned. A tuple (possible only as a keyword argument) must have length equal to the number of outputs. **where**array_like, optional This condition is broadcast over the input. At locations where the condition is True, the `out` array will be set to the ufunc result. Elsewhere, the `out` array will retain its original value. Note that if an uninitialized `out` array is created via the default `out=None`, locations within it where the condition is False will remain uninitialized. ****kwargs** For other keyword-only arguments, see the [ufunc docs](../ufuncs#ufuncs-kwargs). Returns **y**ndarray The corresponding radian values. This is a scalar if `x` is a scalar. See also [`deg2rad`](numpy.deg2rad#numpy.deg2rad "numpy.deg2rad") equivalent function #### Examples Convert a degree array to radians ``` >>> deg = np.arange(12.) * 30. >>> np.radians(deg) array([ 0. 
, 0.52359878, 1.04719755, 1.57079633, 2.0943951 , 2.61799388, 3.14159265, 3.66519143, 4.1887902 , 4.71238898, 5.23598776, 5.75958653]) ``` ``` >>> out = np.zeros((deg.shape)) >>> ret = np.radians(deg, out) >>> ret is out True ``` © 2005–2022 NumPy Developers Licensed under the 3-clause BSD License. <https://numpy.org/doc/1.23/reference/generated/numpy.radians.htmlnumpy.deg2rad ============= numpy.deg2rad(*x*, */*, *out=None*, ***, *where=True*, *casting='same_kind'*, *order='K'*, *dtype=None*, *subok=True*[, *signature*, *extobj*])*=<ufunc 'deg2rad'>* Convert angles from degrees to radians. Parameters **x**array_like Angles in degrees. **out**ndarray, None, or tuple of ndarray and None, optional A location into which the result is stored. If provided, it must have a shape that the inputs broadcast to. If not provided or None, a freshly-allocated array is returned. A tuple (possible only as a keyword argument) must have length equal to the number of outputs. **where**array_like, optional This condition is broadcast over the input. At locations where the condition is True, the `out` array will be set to the ufunc result. Elsewhere, the `out` array will retain its original value. Note that if an uninitialized `out` array is created via the default `out=None`, locations within it where the condition is False will remain uninitialized. ****kwargs** For other keyword-only arguments, see the [ufunc docs](../ufuncs#ufuncs-kwargs). Returns **y**ndarray The corresponding angle in radians. This is a scalar if `x` is a scalar. See also [`rad2deg`](numpy.rad2deg#numpy.rad2deg "numpy.rad2deg") Convert angles from radians to degrees. [`unwrap`](numpy.unwrap#numpy.unwrap "numpy.unwrap") Remove large jumps in angle by wrapping. #### Notes New in version 1.3.0. `deg2rad(x)` is `x * pi / 180`. #### Examples ``` >>> np.deg2rad(180) 3.1415926535897931 ``` © 2005–2022 NumPy Developers Licensed under the 3-clause BSD License. 
<https://numpy.org/doc/1.23/reference/generated/numpy.deg2rad.htmlnumpy.rad2deg ============= numpy.rad2deg(*x*, */*, *out=None*, ***, *where=True*, *casting='same_kind'*, *order='K'*, *dtype=None*, *subok=True*[, *signature*, *extobj*])*=<ufunc 'rad2deg'>* Convert angles from radians to degrees. Parameters **x**array_like Angle in radians. **out**ndarray, None, or tuple of ndarray and None, optional A location into which the result is stored. If provided, it must have a shape that the inputs broadcast to. If not provided or None, a freshly-allocated array is returned. A tuple (possible only as a keyword argument) must have length equal to the number of outputs. **where**array_like, optional This condition is broadcast over the input. At locations where the condition is True, the `out` array will be set to the ufunc result. Elsewhere, the `out` array will retain its original value. Note that if an uninitialized `out` array is created via the default `out=None`, locations within it where the condition is False will remain uninitialized. ****kwargs** For other keyword-only arguments, see the [ufunc docs](../ufuncs#ufuncs-kwargs). Returns **y**ndarray The corresponding angle in degrees. This is a scalar if `x` is a scalar. See also [`deg2rad`](numpy.deg2rad#numpy.deg2rad "numpy.deg2rad") Convert angles from degrees to radians. [`unwrap`](numpy.unwrap#numpy.unwrap "numpy.unwrap") Remove large jumps in angle by wrapping. #### Notes New in version 1.3.0. rad2deg(x) is `180 * x / pi`. #### Examples ``` >>> np.rad2deg(np.pi/2) 90.0 ``` © 2005–2022 NumPy Developers Licensed under the 3-clause BSD License. <https://numpy.org/doc/1.23/reference/generated/numpy.rad2deg.htmlnumpy.bitwise_and ================== numpy.bitwise_and(*x1*, *x2*, */*, *out=None*, ***, *where=True*, *casting='same_kind'*, *order='K'*, *dtype=None*, *subok=True*[, *signature*, *extobj*])*=<ufunc 'bitwise_and'>* Compute the bit-wise AND of two arrays element-wise. 
Computes the bit-wise AND of the underlying binary representation of the integers in the input arrays. This ufunc implements the C/Python operator `&`. Parameters **x1, x2**array_like Only integer and boolean types are handled. If `x1.shape != x2.shape`, they must be broadcastable to a common shape (which becomes the shape of the output). **out**ndarray, None, or tuple of ndarray and None, optional A location into which the result is stored. If provided, it must have a shape that the inputs broadcast to. If not provided or None, a freshly-allocated array is returned. A tuple (possible only as a keyword argument) must have length equal to the number of outputs. **where**array_like, optional This condition is broadcast over the input. At locations where the condition is True, the `out` array will be set to the ufunc result. Elsewhere, the `out` array will retain its original value. Note that if an uninitialized `out` array is created via the default `out=None`, locations within it where the condition is False will remain uninitialized. ****kwargs** For other keyword-only arguments, see the [ufunc docs](../ufuncs#ufuncs-kwargs). Returns **out**ndarray or scalar Result. This is a scalar if both `x1` and `x2` are scalars. See also [`logical_and`](numpy.logical_and#numpy.logical_and "numpy.logical_and") [`bitwise_or`](numpy.bitwise_or#numpy.bitwise_or "numpy.bitwise_or") [`bitwise_xor`](numpy.bitwise_xor#numpy.bitwise_xor "numpy.bitwise_xor") [`binary_repr`](numpy.binary_repr#numpy.binary_repr "numpy.binary_repr") Return the binary representation of the input number as a string. #### Examples The number 13 is represented by `00001101`. Likewise, 17 is represented by `00010001`. 
The bit-wise AND of 13 and 17 is therefore `00000001`, or 1:

```
>>> np.bitwise_and(13, 17)
1
```

```
>>> np.bitwise_and(14, 13)
12
>>> np.binary_repr(12)
'1100'
>>> np.bitwise_and([14,3], 13)
array([12, 1])
```

```
>>> np.bitwise_and([11,7], [4,25])
array([0, 1])
>>> np.bitwise_and(np.array([2,5,255]), np.array([3,14,16]))
array([ 2, 4, 16])
>>> np.bitwise_and([True, True], [False, True])
array([False, True])
```

The `&` operator can be used as a shorthand for `np.bitwise_and` on ndarrays.

```
>>> x1 = np.array([2, 5, 255])
>>> x2 = np.array([3, 14, 16])
>>> x1 & x2
array([ 2, 4, 16])
```

© 2005–2022 NumPy Developers Licensed under the 3-clause BSD License. <https://numpy.org/doc/1.23/reference/generated/numpy.bitwise_and.html>

numpy.bitwise_or
=================

numpy.bitwise_or(*x1*, *x2*, */*, *out=None*, ***, *where=True*, *casting='same_kind'*, *order='K'*, *dtype=None*, *subok=True*[, *signature*, *extobj*])*=<ufunc 'bitwise_or'>*

Compute the bit-wise OR of two arrays element-wise.

Computes the bit-wise OR of the underlying binary representation of the integers in the input arrays. This ufunc implements the C/Python operator `|`.

Parameters

**x1, x2**array_like Only integer and boolean types are handled. If `x1.shape != x2.shape`, they must be broadcastable to a common shape (which becomes the shape of the output).

**out**ndarray, None, or tuple of ndarray and None, optional A location into which the result is stored. If provided, it must have a shape that the inputs broadcast to. If not provided or None, a freshly-allocated array is returned. A tuple (possible only as a keyword argument) must have length equal to the number of outputs.

**where**array_like, optional This condition is broadcast over the input. At locations where the condition is True, the `out` array will be set to the ufunc result. Elsewhere, the `out` array will retain its original value.
Note that if an uninitialized `out` array is created via the default `out=None`, locations within it where the condition is False will remain uninitialized.

****kwargs** For other keyword-only arguments, see the [ufunc docs](../ufuncs#ufuncs-kwargs).

Returns

**out**ndarray or scalar Result. This is a scalar if both `x1` and `x2` are scalars.

See also

[`logical_or`](numpy.logical_or#numpy.logical_or "numpy.logical_or") [`bitwise_and`](numpy.bitwise_and#numpy.bitwise_and "numpy.bitwise_and") [`bitwise_xor`](numpy.bitwise_xor#numpy.bitwise_xor "numpy.bitwise_xor") [`binary_repr`](numpy.binary_repr#numpy.binary_repr "numpy.binary_repr") Return the binary representation of the input number as a string.

#### Examples

The number 13 has the binary representation `00001101`. Likewise, 16 is represented by `00010000`. The bit-wise OR of 13 and 16 is then `00011101`, or 29:

```
>>> np.bitwise_or(13, 16)
29
>>> np.binary_repr(29)
'11101'
```

```
>>> np.bitwise_or(32, 2)
34
>>> np.bitwise_or([33, 4], 1)
array([33, 5])
>>> np.bitwise_or([33, 4], [1, 2])
array([33, 6])
```

```
>>> np.bitwise_or(np.array([2, 5, 255]), np.array([4, 4, 4]))
array([ 6, 5, 255])
>>> np.array([2, 5, 255]) | np.array([4, 4, 4])
array([ 6, 5, 255])
>>> np.bitwise_or(np.array([2, 5, 255, 2147483647], dtype=np.int32),
...               np.array([4, 4, 4, 2147483647], dtype=np.int32))
array([ 6, 5, 255, 2147483647])
>>> np.bitwise_or([True, True], [False, True])
array([ True, True])
```

The `|` operator can be used as a shorthand for `np.bitwise_or` on ndarrays.

```
>>> x1 = np.array([2, 5, 255])
>>> x2 = np.array([4, 4, 4])
>>> x1 | x2
array([ 6, 5, 255])
```

© 2005–2022 NumPy Developers Licensed under the 3-clause BSD License.
<https://numpy.org/doc/1.23/reference/generated/numpy.bitwise_or.htmlnumpy.bitwise_xor ================== numpy.bitwise_xor(*x1*, *x2*, */*, *out=None*, ***, *where=True*, *casting='same_kind'*, *order='K'*, *dtype=None*, *subok=True*[, *signature*, *extobj*])*=<ufunc 'bitwise_xor'>* Compute the bit-wise XOR of two arrays element-wise. Computes the bit-wise XOR of the underlying binary representation of the integers in the input arrays. This ufunc implements the C/Python operator `^`. Parameters **x1, x2**array_like Only integer and boolean types are handled. If `x1.shape != x2.shape`, they must be broadcastable to a common shape (which becomes the shape of the output). **out**ndarray, None, or tuple of ndarray and None, optional A location into which the result is stored. If provided, it must have a shape that the inputs broadcast to. If not provided or None, a freshly-allocated array is returned. A tuple (possible only as a keyword argument) must have length equal to the number of outputs. **where**array_like, optional This condition is broadcast over the input. At locations where the condition is True, the `out` array will be set to the ufunc result. Elsewhere, the `out` array will retain its original value. Note that if an uninitialized `out` array is created via the default `out=None`, locations within it where the condition is False will remain uninitialized. ****kwargs** For other keyword-only arguments, see the [ufunc docs](../ufuncs#ufuncs-kwargs). Returns **out**ndarray or scalar Result. This is a scalar if both `x1` and `x2` are scalars. See also [`logical_xor`](numpy.logical_xor#numpy.logical_xor "numpy.logical_xor") [`bitwise_and`](numpy.bitwise_and#numpy.bitwise_and "numpy.bitwise_and") [`bitwise_or`](numpy.bitwise_or#numpy.bitwise_or "numpy.bitwise_or") [`binary_repr`](numpy.binary_repr#numpy.binary_repr "numpy.binary_repr") Return the binary representation of the input number as a string. #### Examples The number 13 is represented by `00001101`. 
Likewise, 17 is represented by `00010001`. The bit-wise XOR of 13 and 17 is therefore `00011100`, or 28: ``` >>> np.bitwise_xor(13, 17) 28 >>> np.binary_repr(28) '11100' ``` ``` >>> np.bitwise_xor(31, 5) 26 >>> np.bitwise_xor([31,3], 5) array([26, 6]) ``` ``` >>> np.bitwise_xor([31,3], [5,6]) array([26, 5]) >>> np.bitwise_xor([True, True], [False, True]) array([ True, False]) ``` The `^` operator can be used as a shorthand for `np.bitwise_xor` on ndarrays. ``` >>> x1 = np.array([True, True]) >>> x2 = np.array([False, True]) >>> x1 ^ x2 array([ True, False]) ``` © 2005–2022 NumPy Developers Licensed under the 3-clause BSD License. <https://numpy.org/doc/1.23/reference/generated/numpy.bitwise_xor.htmlnumpy.left_shift ================= numpy.left_shift(*x1*, *x2*, */*, *out=None*, ***, *where=True*, *casting='same_kind'*, *order='K'*, *dtype=None*, *subok=True*[, *signature*, *extobj*])*=<ufunc 'left_shift'>* Shift the bits of an integer to the left. Bits are shifted to the left by appending `x2` 0s at the right of `x1`. Since the internal representation of numbers is in binary format, this operation is equivalent to multiplying `x1` by `2**x2`. Parameters **x1**array_like of integer type Input values. **x2**array_like of integer type Number of zeros to append to `x1`. Has to be non-negative. If `x1.shape != x2.shape`, they must be broadcastable to a common shape (which becomes the shape of the output). **out**ndarray, None, or tuple of ndarray and None, optional A location into which the result is stored. If provided, it must have a shape that the inputs broadcast to. If not provided or None, a freshly-allocated array is returned. A tuple (possible only as a keyword argument) must have length equal to the number of outputs. **where**array_like, optional This condition is broadcast over the input. At locations where the condition is True, the `out` array will be set to the ufunc result. Elsewhere, the `out` array will retain its original value. 
Note that if an uninitialized `out` array is created via the default `out=None`, locations within it where the condition is False will remain uninitialized.

****kwargs** For other keyword-only arguments, see the [ufunc docs](../ufuncs#ufuncs-kwargs).

Returns

**out**array of integer type Return `x1` with bits shifted `x2` times to the left. This is a scalar if both `x1` and `x2` are scalars.

See also

[`right_shift`](numpy.right_shift#numpy.right_shift "numpy.right_shift") Shift the bits of an integer to the right.

[`binary_repr`](numpy.binary_repr#numpy.binary_repr "numpy.binary_repr") Return the binary representation of the input number as a string.

#### Examples

```
>>> np.binary_repr(5)
'101'
>>> np.left_shift(5, 2)
20
>>> np.binary_repr(20)
'10100'
```

```
>>> np.left_shift(5, [1,2,3])
array([10, 20, 40])
```

Note that the dtype of the second argument may change the dtype of the result and can lead to unexpected results in some cases (see [Casting Rules](../../user/basics.ufuncs#ufuncs-casting)):

```
>>> a = np.left_shift(np.uint8(255), 1)  # Expect 254
>>> print(a, type(a))  # Unexpected result due to upcasting
510 <class 'numpy.int64'>
>>> b = np.left_shift(np.uint8(255), np.uint8(1))
>>> print(b, type(b))
254 <class 'numpy.uint8'>
```

The `<<` operator can be used as a shorthand for `np.left_shift` on ndarrays.

```
>>> x1 = 5
>>> x2 = np.array([1, 2, 3])
>>> x1 << x2
array([10, 20, 40])
```

© 2005–2022 NumPy Developers Licensed under the 3-clause BSD License. <https://numpy.org/doc/1.23/reference/generated/numpy.left_shift.html>

numpy.right_shift
==================

numpy.right_shift(*x1*, *x2*, */*, *out=None*, ***, *where=True*, *casting='same_kind'*, *order='K'*, *dtype=None*, *subok=True*[, *signature*, *extobj*])*=<ufunc 'right_shift'>*

Shift the bits of an integer to the right.

Bits are shifted to the right by `x2` places. Because the internal representation of numbers is in binary format, this operation is equivalent to dividing `x1` by `2**x2`.
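The equivalence between right-shifting and integer division by a power of two can be sketched as follows (for non-negative integers, where the shift matches floor division exactly):

```python
import numpy as np

x = np.array([10, 20, 255])

# Shifting right by k bits drops the k lowest bits, which for
# non-negative integers is the same as floor division by 2**k.
shifted = np.right_shift(x, 2)
divided = x // 2**2

print(shifted)                           # [ 2  5 63]
print(np.array_equal(shifted, divided))  # True
```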
Parameters **x1**array_like, int Input values. **x2**array_like, int Number of bits to remove at the right of `x1`. If `x1.shape != x2.shape`, they must be broadcastable to a common shape (which becomes the shape of the output). **out**ndarray, None, or tuple of ndarray and None, optional A location into which the result is stored. If provided, it must have a shape that the inputs broadcast to. If not provided or None, a freshly-allocated array is returned. A tuple (possible only as a keyword argument) must have length equal to the number of outputs. **where**array_like, optional This condition is broadcast over the input. At locations where the condition is True, the `out` array will be set to the ufunc result. Elsewhere, the `out` array will retain its original value. Note that if an uninitialized `out` array is created via the default `out=None`, locations within it where the condition is False will remain uninitialized. ****kwargs** For other keyword-only arguments, see the [ufunc docs](../ufuncs#ufuncs-kwargs). Returns **out**ndarray, int Return `x1` with bits shifted `x2` times to the right. This is a scalar if both `x1` and `x2` are scalars. See also [`left_shift`](numpy.left_shift#numpy.left_shift "numpy.left_shift") Shift the bits of an integer to the left. [`binary_repr`](numpy.binary_repr#numpy.binary_repr "numpy.binary_repr") Return the binary representation of the input number as a string. #### Examples ``` >>> np.binary_repr(10) '1010' >>> np.right_shift(10, 1) 5 >>> np.binary_repr(5) '101' ``` ``` >>> np.right_shift(10, [1,2,3]) array([5, 2, 1]) ``` The `>>` operator can be used as a shorthand for `np.right_shift` on ndarrays. ``` >>> x1 = 10 >>> x2 = np.array([1,2,3]) >>> x1 >> x2 array([5, 2, 1]) ``` © 2005–2022 NumPy Developers Licensed under the 3-clause BSD License. 
<https://numpy.org/doc/1.23/reference/generated/numpy.right_shift.htmlnumpy.greater ============= numpy.greater(*x1*, *x2*, */*, *out=None*, ***, *where=True*, *casting='same_kind'*, *order='K'*, *dtype=None*, *subok=True*[, *signature*, *extobj*])*=<ufunc 'greater'>* Return the truth value of (x1 > x2) element-wise. Parameters **x1, x2**array_like Input arrays. If `x1.shape != x2.shape`, they must be broadcastable to a common shape (which becomes the shape of the output). **out**ndarray, None, or tuple of ndarray and None, optional A location into which the result is stored. If provided, it must have a shape that the inputs broadcast to. If not provided or None, a freshly-allocated array is returned. A tuple (possible only as a keyword argument) must have length equal to the number of outputs. **where**array_like, optional This condition is broadcast over the input. At locations where the condition is True, the `out` array will be set to the ufunc result. Elsewhere, the `out` array will retain its original value. Note that if an uninitialized `out` array is created via the default `out=None`, locations within it where the condition is False will remain uninitialized. ****kwargs** For other keyword-only arguments, see the [ufunc docs](../ufuncs#ufuncs-kwargs). Returns **out**ndarray or scalar Output array, element-wise comparison of `x1` and `x2`. Typically of type bool, unless `dtype=object` is passed. This is a scalar if both `x1` and `x2` are scalars. See also [`greater_equal`](numpy.greater_equal#numpy.greater_equal "numpy.greater_equal"), [`less`](numpy.less#numpy.less "numpy.less"), [`less_equal`](numpy.less_equal#numpy.less_equal "numpy.less_equal"), [`equal`](numpy.equal#numpy.equal "numpy.equal"), [`not_equal`](numpy.not_equal#numpy.not_equal "numpy.not_equal") #### Examples ``` >>> np.greater([4,2],[2,2]) array([ True, False]) ``` The `>` operator can be used as a shorthand for `np.greater` on ndarrays. 
``` >>> a = np.array([4, 2]) >>> b = np.array([2, 2]) >>> a > b array([ True, False]) ``` © 2005–2022 NumPy Developers Licensed under the 3-clause BSD License. <https://numpy.org/doc/1.23/reference/generated/numpy.greater.htmlnumpy.greater_equal ==================== numpy.greater_equal(*x1*, *x2*, */*, *out=None*, ***, *where=True*, *casting='same_kind'*, *order='K'*, *dtype=None*, *subok=True*[, *signature*, *extobj*])*=<ufunc 'greater_equal'>* Return the truth value of (x1 >= x2) element-wise. Parameters **x1, x2**array_like Input arrays. If `x1.shape != x2.shape`, they must be broadcastable to a common shape (which becomes the shape of the output). **out**ndarray, None, or tuple of ndarray and None, optional A location into which the result is stored. If provided, it must have a shape that the inputs broadcast to. If not provided or None, a freshly-allocated array is returned. A tuple (possible only as a keyword argument) must have length equal to the number of outputs. **where**array_like, optional This condition is broadcast over the input. At locations where the condition is True, the `out` array will be set to the ufunc result. Elsewhere, the `out` array will retain its original value. Note that if an uninitialized `out` array is created via the default `out=None`, locations within it where the condition is False will remain uninitialized. ****kwargs** For other keyword-only arguments, see the [ufunc docs](../ufuncs#ufuncs-kwargs). Returns **out**bool or ndarray of bool Output array, element-wise comparison of `x1` and `x2`. Typically of type bool, unless `dtype=object` is passed. This is a scalar if both `x1` and `x2` are scalars. 
See also [`greater`](numpy.greater#numpy.greater "numpy.greater"), [`less`](numpy.less#numpy.less "numpy.less"), [`less_equal`](numpy.less_equal#numpy.less_equal "numpy.less_equal"), [`equal`](numpy.equal#numpy.equal "numpy.equal"), [`not_equal`](numpy.not_equal#numpy.not_equal "numpy.not_equal") #### Examples ``` >>> np.greater_equal([4, 2, 1], [2, 2, 2]) array([ True, True, False]) ``` The `>=` operator can be used as a shorthand for `np.greater_equal` on ndarrays. ``` >>> a = np.array([4, 2, 1]) >>> b = np.array([2, 2, 2]) >>> a >= b array([ True, True, False]) ``` © 2005–2022 NumPy Developers Licensed under the 3-clause BSD License. <https://numpy.org/doc/1.23/reference/generated/numpy.greater_equal.htmlnumpy.less ========== numpy.less(*x1*, *x2*, */*, *out=None*, ***, *where=True*, *casting='same_kind'*, *order='K'*, *dtype=None*, *subok=True*[, *signature*, *extobj*])*=<ufunc 'less'>* Return the truth value of (x1 < x2) element-wise. Parameters **x1, x2**array_like Input arrays. If `x1.shape != x2.shape`, they must be broadcastable to a common shape (which becomes the shape of the output). **out**ndarray, None, or tuple of ndarray and None, optional A location into which the result is stored. If provided, it must have a shape that the inputs broadcast to. If not provided or None, a freshly-allocated array is returned. A tuple (possible only as a keyword argument) must have length equal to the number of outputs. **where**array_like, optional This condition is broadcast over the input. At locations where the condition is True, the `out` array will be set to the ufunc result. Elsewhere, the `out` array will retain its original value. Note that if an uninitialized `out` array is created via the default `out=None`, locations within it where the condition is False will remain uninitialized. ****kwargs** For other keyword-only arguments, see the [ufunc docs](../ufuncs#ufuncs-kwargs). 
Returns **out**ndarray or scalar Output array, element-wise comparison of `x1` and `x2`. Typically of type bool, unless `dtype=object` is passed. This is a scalar if both `x1` and `x2` are scalars. See also [`greater`](numpy.greater#numpy.greater "numpy.greater"), [`less_equal`](numpy.less_equal#numpy.less_equal "numpy.less_equal"), [`greater_equal`](numpy.greater_equal#numpy.greater_equal "numpy.greater_equal"), [`equal`](numpy.equal#numpy.equal "numpy.equal"), [`not_equal`](numpy.not_equal#numpy.not_equal "numpy.not_equal") #### Examples ``` >>> np.less([1, 2], [2, 2]) array([ True, False]) ``` The `<` operator can be used as a shorthand for `np.less` on ndarrays. ``` >>> a = np.array([1, 2]) >>> b = np.array([2, 2]) >>> a < b array([ True, False]) ``` © 2005–2022 NumPy Developers Licensed under the 3-clause BSD License. <https://numpy.org/doc/1.23/reference/generated/numpy.less.htmlnumpy.less_equal ================= numpy.less_equal(*x1*, *x2*, */*, *out=None*, ***, *where=True*, *casting='same_kind'*, *order='K'*, *dtype=None*, *subok=True*[, *signature*, *extobj*])*=<ufunc 'less_equal'>* Return the truth value of (x1 <= x2) element-wise. Parameters **x1, x2**array_like Input arrays. If `x1.shape != x2.shape`, they must be broadcastable to a common shape (which becomes the shape of the output). **out**ndarray, None, or tuple of ndarray and None, optional A location into which the result is stored. If provided, it must have a shape that the inputs broadcast to. If not provided or None, a freshly-allocated array is returned. A tuple (possible only as a keyword argument) must have length equal to the number of outputs. **where**array_like, optional This condition is broadcast over the input. At locations where the condition is True, the `out` array will be set to the ufunc result. Elsewhere, the `out` array will retain its original value. 
Note that if an uninitialized `out` array is created via the default `out=None`, locations within it where the condition is False will remain uninitialized. ****kwargs** For other keyword-only arguments, see the [ufunc docs](../ufuncs#ufuncs-kwargs). Returns **out**ndarray or scalar Output array, element-wise comparison of `x1` and `x2`. Typically of type bool, unless `dtype=object` is passed. This is a scalar if both `x1` and `x2` are scalars. See also [`greater`](numpy.greater#numpy.greater "numpy.greater"), [`less`](numpy.less#numpy.less "numpy.less"), [`greater_equal`](numpy.greater_equal#numpy.greater_equal "numpy.greater_equal"), [`equal`](numpy.equal#numpy.equal "numpy.equal"), [`not_equal`](numpy.not_equal#numpy.not_equal "numpy.not_equal") #### Examples ``` >>> np.less_equal([4, 2, 1], [2, 2, 2]) array([False, True, True]) ``` The `<=` operator can be used as a shorthand for `np.less_equal` on ndarrays. ``` >>> a = np.array([4, 2, 1]) >>> b = np.array([2, 2, 2]) >>> a <= b array([False, True, True]) ``` © 2005–2022 NumPy Developers Licensed under the 3-clause BSD License. <https://numpy.org/doc/1.23/reference/generated/numpy.less_equal.htmlnumpy.not_equal ================ numpy.not_equal(*x1*, *x2*, */*, *out=None*, ***, *where=True*, *casting='same_kind'*, *order='K'*, *dtype=None*, *subok=True*[, *signature*, *extobj*])*=<ufunc 'not_equal'>* Return (x1 != x2) element-wise. Parameters **x1, x2**array_like Input arrays. If `x1.shape != x2.shape`, they must be broadcastable to a common shape (which becomes the shape of the output). **out**ndarray, None, or tuple of ndarray and None, optional A location into which the result is stored. If provided, it must have a shape that the inputs broadcast to. If not provided or None, a freshly-allocated array is returned. A tuple (possible only as a keyword argument) must have length equal to the number of outputs. **where**array_like, optional This condition is broadcast over the input. 
At locations where the condition is True, the `out` array will be set to the ufunc result. Elsewhere, the `out` array will retain its original value. Note that if an uninitialized `out` array is created via the default `out=None`, locations within it where the condition is False will remain uninitialized. ****kwargs** For other keyword-only arguments, see the [ufunc docs](../ufuncs#ufuncs-kwargs). Returns **out**ndarray or scalar Output array, element-wise comparison of `x1` and `x2`. Typically of type bool, unless `dtype=object` is passed. This is a scalar if both `x1` and `x2` are scalars. See also [`equal`](numpy.equal#numpy.equal "numpy.equal"), [`greater`](numpy.greater#numpy.greater "numpy.greater"), [`greater_equal`](numpy.greater_equal#numpy.greater_equal "numpy.greater_equal"), [`less`](numpy.less#numpy.less "numpy.less"), [`less_equal`](numpy.less_equal#numpy.less_equal "numpy.less_equal") #### Examples ``` >>> np.not_equal([1.,2.], [1., 3.]) array([False, True]) >>> np.not_equal([1, 2], [[1, 3],[1, 4]]) array([[False, True], [False, True]]) ``` The `!=` operator can be used as a shorthand for `np.not_equal` on ndarrays. ``` >>> a = np.array([1., 2.]) >>> b = np.array([1., 3.]) >>> a != b array([False, True]) ``` © 2005–2022 NumPy Developers Licensed under the 3-clause BSD License. <https://numpy.org/doc/1.23/reference/generated/numpy.not_equal.htmlnumpy.equal =========== numpy.equal(*x1*, *x2*, */*, *out=None*, ***, *where=True*, *casting='same_kind'*, *order='K'*, *dtype=None*, *subok=True*[, *signature*, *extobj*])*=<ufunc 'equal'>* Return (x1 == x2) element-wise. Parameters **x1, x2**array_like Input arrays. If `x1.shape != x2.shape`, they must be broadcastable to a common shape (which becomes the shape of the output). **out**ndarray, None, or tuple of ndarray and None, optional A location into which the result is stored. If provided, it must have a shape that the inputs broadcast to. If not provided or None, a freshly-allocated array is returned. 
A tuple (possible only as a keyword argument) must have length equal to the number of outputs. **where**array_like, optional This condition is broadcast over the input. At locations where the condition is True, the `out` array will be set to the ufunc result. Elsewhere, the `out` array will retain its original value. Note that if an uninitialized `out` array is created via the default `out=None`, locations within it where the condition is False will remain uninitialized. ****kwargs** For other keyword-only arguments, see the [ufunc docs](../ufuncs#ufuncs-kwargs). Returns **out**ndarray or scalar Output array, element-wise comparison of `x1` and `x2`. Typically of type bool, unless `dtype=object` is passed. This is a scalar if both `x1` and `x2` are scalars. See also [`not_equal`](numpy.not_equal#numpy.not_equal "numpy.not_equal"), [`greater_equal`](numpy.greater_equal#numpy.greater_equal "numpy.greater_equal"), [`less_equal`](numpy.less_equal#numpy.less_equal "numpy.less_equal"), [`greater`](numpy.greater#numpy.greater "numpy.greater"), [`less`](numpy.less#numpy.less "numpy.less") #### Examples ``` >>> np.equal([0, 1, 3], np.arange(3)) array([ True, True, False]) ``` What is compared are values, not types. So an int (1) and an array of length one can evaluate as True: ``` >>> np.equal(1, np.ones(1)) array([ True]) ``` The `==` operator can be used as a shorthand for `np.equal` on ndarrays. ``` >>> a = np.array([2, 4, 6]) >>> b = np.array([2, 4, 2]) >>> a == b array([ True, True, False]) ``` © 2005–2022 NumPy Developers Licensed under the 3-clause BSD License. <https://numpy.org/doc/1.23/reference/generated/numpy.equal.htmlnumpy.logical_and ================== numpy.logical_and(*x1*, *x2*, */*, *out=None*, ***, *where=True*, *casting='same_kind'*, *order='K'*, *dtype=None*, *subok=True*[, *signature*, *extobj*])*=<ufunc 'logical_and'>* Compute the truth value of x1 AND x2 element-wise. Parameters **x1, x2**array_like Input arrays. 
If `x1.shape != x2.shape`, they must be broadcastable to a common shape (which becomes the shape of the output). **out**ndarray, None, or tuple of ndarray and None, optional A location into which the result is stored. If provided, it must have a shape that the inputs broadcast to. If not provided or None, a freshly-allocated array is returned. A tuple (possible only as a keyword argument) must have length equal to the number of outputs. **where**array_like, optional This condition is broadcast over the input. At locations where the condition is True, the `out` array will be set to the ufunc result. Elsewhere, the `out` array will retain its original value. Note that if an uninitialized `out` array is created via the default `out=None`, locations within it where the condition is False will remain uninitialized. ****kwargs** For other keyword-only arguments, see the [ufunc docs](../ufuncs#ufuncs-kwargs). Returns **y**ndarray or bool Boolean result of the logical AND operation applied to the elements of `x1` and `x2`; the shape is determined by broadcasting. This is a scalar if both `x1` and `x2` are scalars. See also [`logical_or`](numpy.logical_or#numpy.logical_or "numpy.logical_or"), [`logical_not`](numpy.logical_not#numpy.logical_not "numpy.logical_not"), [`logical_xor`](numpy.logical_xor#numpy.logical_xor "numpy.logical_xor") [`bitwise_and`](numpy.bitwise_and#numpy.bitwise_and "numpy.bitwise_and") #### Examples ``` >>> np.logical_and(True, False) False >>> np.logical_and([True, False], [False, False]) array([False, False]) ``` ``` >>> x = np.arange(5) >>> np.logical_and(x>1, x<4) array([False, False, True, True, False]) ``` The `&` operator can be used as a shorthand for `np.logical_and` on boolean ndarrays. ``` >>> a = np.array([True, False]) >>> b = np.array([False, False]) >>> a & b array([False, False]) ``` © 2005–2022 NumPy Developers Licensed under the 3-clause BSD License. 
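As a small supplementary sketch (not part of the original page): `np.logical_and` treats any nonzero value as True, so it accepts arbitrary numeric arrays, whereas the `&` shorthand mentioned above is only equivalent on *boolean* ndarrays — on integer arrays `&` is bitwise, which can give a different answer:

```python
import numpy as np

a = np.array([0, 1, 2, 0])
b = np.array([3, 0, 5, 0])

# logical_and: any nonzero element counts as True
both = np.logical_and(a, b)   # -> [False, False,  True, False]

# On integer arrays, & is bitwise, not logical: 2 & 5 == 0
# even though both operands are "truthy".
bitwise = a & b               # -> [0, 0, 0, 0]
```

On boolean arrays (as in the page's own example) the two agree; the difference only appears for non-boolean inputs.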
<https://numpy.org/doc/1.23/reference/generated/numpy.logical_and.html>

numpy.logical_or
================

numpy.logical_or(*x1*, *x2*, */*, *out=None*, ***, *where=True*, *casting='same_kind'*, *order='K'*, *dtype=None*, *subok=True*[, *signature*, *extobj*])*=<ufunc 'logical_or'>*

Compute the truth value of x1 OR x2 element-wise.

Parameters

**x1, x2**array_like Logical OR is applied to the elements of `x1` and `x2`. If `x1.shape != x2.shape`, they must be broadcastable to a common shape (which becomes the shape of the output).

**out**ndarray, None, or tuple of ndarray and None, optional A location into which the result is stored. If provided, it must have a shape that the inputs broadcast to. If not provided or None, a freshly-allocated array is returned. A tuple (possible only as a keyword argument) must have length equal to the number of outputs.

**where**array_like, optional This condition is broadcast over the input. At locations where the condition is True, the `out` array will be set to the ufunc result. Elsewhere, the `out` array will retain its original value. Note that if an uninitialized `out` array is created via the default `out=None`, locations within it where the condition is False will remain uninitialized.

****kwargs** For other keyword-only arguments, see the [ufunc docs](../ufuncs#ufuncs-kwargs).

Returns

**y**ndarray or bool Boolean result of the logical OR operation applied to the elements of `x1` and `x2`; the shape is determined by broadcasting. This is a scalar if both `x1` and `x2` are scalars.
See also [`logical_and`](numpy.logical_and#numpy.logical_and "numpy.logical_and"), [`logical_not`](numpy.logical_not#numpy.logical_not "numpy.logical_not"), [`logical_xor`](numpy.logical_xor#numpy.logical_xor "numpy.logical_xor") [`bitwise_or`](numpy.bitwise_or#numpy.bitwise_or "numpy.bitwise_or") #### Examples ``` >>> np.logical_or(True, False) True >>> np.logical_or([True, False], [False, False]) array([ True, False]) ``` ``` >>> x = np.arange(5) >>> np.logical_or(x < 1, x > 3) array([ True, False, False, False, True]) ``` The `|` operator can be used as a shorthand for `np.logical_or` on boolean ndarrays. ``` >>> a = np.array([True, False]) >>> b = np.array([False, False]) >>> a | b array([ True, False]) ``` © 2005–2022 NumPy Developers Licensed under the 3-clause BSD License. <https://numpy.org/doc/1.23/reference/generated/numpy.logical_or.htmlnumpy.logical_xor ================== numpy.logical_xor(*x1*, *x2*, */*, *out=None*, ***, *where=True*, *casting='same_kind'*, *order='K'*, *dtype=None*, *subok=True*[, *signature*, *extobj*])*=<ufunc 'logical_xor'>* Compute the truth value of x1 XOR x2, element-wise. Parameters **x1, x2**array_like Logical XOR is applied to the elements of `x1` and `x2`. If `x1.shape != x2.shape`, they must be broadcastable to a common shape (which becomes the shape of the output). **out**ndarray, None, or tuple of ndarray and None, optional A location into which the result is stored. If provided, it must have a shape that the inputs broadcast to. If not provided or None, a freshly-allocated array is returned. A tuple (possible only as a keyword argument) must have length equal to the number of outputs. **where**array_like, optional This condition is broadcast over the input. At locations where the condition is True, the `out` array will be set to the ufunc result. Elsewhere, the `out` array will retain its original value. 
Note that if an uninitialized `out` array is created via the default `out=None`, locations within it where the condition is False will remain uninitialized. ****kwargs** For other keyword-only arguments, see the [ufunc docs](../ufuncs#ufuncs-kwargs). Returns **y**bool or ndarray of bool Boolean result of the logical XOR operation applied to the elements of `x1` and `x2`; the shape is determined by broadcasting. This is a scalar if both `x1` and `x2` are scalars. See also [`logical_and`](numpy.logical_and#numpy.logical_and "numpy.logical_and"), [`logical_or`](numpy.logical_or#numpy.logical_or "numpy.logical_or"), [`logical_not`](numpy.logical_not#numpy.logical_not "numpy.logical_not"), [`bitwise_xor`](numpy.bitwise_xor#numpy.bitwise_xor "numpy.bitwise_xor") #### Examples ``` >>> np.logical_xor(True, False) True >>> np.logical_xor([True, True, False, False], [True, False, True, False]) array([False, True, True, False]) ``` ``` >>> x = np.arange(5) >>> np.logical_xor(x < 1, x > 3) array([ True, False, False, False, True]) ``` Simple example showing support of broadcasting ``` >>> np.logical_xor(0, np.eye(2)) array([[ True, False], [False, True]]) ``` © 2005–2022 NumPy Developers Licensed under the 3-clause BSD License. <https://numpy.org/doc/1.23/reference/generated/numpy.logical_xor.htmlnumpy.logical_not ================== numpy.logical_not(*x*, */*, *out=None*, ***, *where=True*, *casting='same_kind'*, *order='K'*, *dtype=None*, *subok=True*[, *signature*, *extobj*])*=<ufunc 'logical_not'>* Compute the truth value of NOT x element-wise. Parameters **x**array_like Logical NOT is applied to the elements of `x`. **out**ndarray, None, or tuple of ndarray and None, optional A location into which the result is stored. If provided, it must have a shape that the inputs broadcast to. If not provided or None, a freshly-allocated array is returned. A tuple (possible only as a keyword argument) must have length equal to the number of outputs. 
**where**array_like, optional This condition is broadcast over the input. At locations where the condition is True, the `out` array will be set to the ufunc result. Elsewhere, the `out` array will retain its original value. Note that if an uninitialized `out` array is created via the default `out=None`, locations within it where the condition is False will remain uninitialized. ****kwargs** For other keyword-only arguments, see the [ufunc docs](../ufuncs#ufuncs-kwargs). Returns **y**bool or ndarray of bool Boolean result with the same shape as `x` of the NOT operation on elements of `x`. This is a scalar if `x` is a scalar. See also [`logical_and`](numpy.logical_and#numpy.logical_and "numpy.logical_and"), [`logical_or`](numpy.logical_or#numpy.logical_or "numpy.logical_or"), [`logical_xor`](numpy.logical_xor#numpy.logical_xor "numpy.logical_xor") #### Examples ``` >>> np.logical_not(3) False >>> np.logical_not([True, False, 0, 1]) array([False, True, True, False]) ``` ``` >>> x = np.arange(5) >>> np.logical_not(x<3) array([False, False, False, True, True]) ``` © 2005–2022 NumPy Developers Licensed under the 3-clause BSD License. <https://numpy.org/doc/1.23/reference/generated/numpy.logical_not.htmlnumpy.fmax ========== numpy.fmax(*x1*, *x2*, */*, *out=None*, ***, *where=True*, *casting='same_kind'*, *order='K'*, *dtype=None*, *subok=True*[, *signature*, *extobj*])*=<ufunc 'fmax'>* Element-wise maximum of array elements. Compare two arrays and returns a new array containing the element-wise maxima. If one of the elements being compared is a NaN, then the non-nan element is returned. If both elements are NaNs then the first is returned. The latter distinction is important for complex NaNs, which are defined as at least one of the real or imaginary parts being a NaN. The net effect is that NaNs are ignored when possible. Parameters **x1, x2**array_like The arrays holding the elements to be compared. 
If `x1.shape != x2.shape`, they must be broadcastable to a common shape (which becomes the shape of the output). **out**ndarray, None, or tuple of ndarray and None, optional A location into which the result is stored. If provided, it must have a shape that the inputs broadcast to. If not provided or None, a freshly-allocated array is returned. A tuple (possible only as a keyword argument) must have length equal to the number of outputs. **where**array_like, optional This condition is broadcast over the input. At locations where the condition is True, the `out` array will be set to the ufunc result. Elsewhere, the `out` array will retain its original value. Note that if an uninitialized `out` array is created via the default `out=None`, locations within it where the condition is False will remain uninitialized. ****kwargs** For other keyword-only arguments, see the [ufunc docs](../ufuncs#ufuncs-kwargs). Returns **y**ndarray or scalar The maximum of `x1` and `x2`, element-wise. This is a scalar if both `x1` and `x2` are scalars. See also [`fmin`](numpy.fmin#numpy.fmin "numpy.fmin") Element-wise minimum of two arrays, ignores NaNs. [`maximum`](numpy.maximum#numpy.maximum "numpy.maximum") Element-wise maximum of two arrays, propagates NaNs. [`amax`](numpy.amax#numpy.amax "numpy.amax") The maximum value of an array along a given axis, propagates NaNs. [`nanmax`](numpy.nanmax#numpy.nanmax "numpy.nanmax") The maximum value of an array along a given axis, ignores NaNs. [`minimum`](numpy.minimum#numpy.minimum "numpy.minimum"), [`amin`](numpy.amin#numpy.amin "numpy.amin"), [`nanmin`](numpy.nanmin#numpy.nanmin "numpy.nanmin") #### Notes New in version 1.3.0. The fmax is equivalent to `np.where(x1 >= x2, x1, x2)` when neither x1 nor x2 are NaNs, but it is faster and does proper broadcasting. #### Examples ``` >>> np.fmax([2, 3, 4], [1, 5, 2]) array([ 2., 5., 4.]) ``` ``` >>> np.fmax(np.eye(2), [0.5, 2]) array([[ 1. , 2. ], [ 0.5, 2. 
]]) ``` ``` >>> np.fmax([np.nan, 0, np.nan],[0, np.nan, np.nan]) array([ 0., 0., nan]) ``` © 2005–2022 NumPy Developers Licensed under the 3-clause BSD License. <https://numpy.org/doc/1.23/reference/generated/numpy.fmax.htmlnumpy.fmin ========== numpy.fmin(*x1*, *x2*, */*, *out=None*, ***, *where=True*, *casting='same_kind'*, *order='K'*, *dtype=None*, *subok=True*[, *signature*, *extobj*])*=<ufunc 'fmin'>* Element-wise minimum of array elements. Compare two arrays and returns a new array containing the element-wise minima. If one of the elements being compared is a NaN, then the non-nan element is returned. If both elements are NaNs then the first is returned. The latter distinction is important for complex NaNs, which are defined as at least one of the real or imaginary parts being a NaN. The net effect is that NaNs are ignored when possible. Parameters **x1, x2**array_like The arrays holding the elements to be compared. If `x1.shape != x2.shape`, they must be broadcastable to a common shape (which becomes the shape of the output). **out**ndarray, None, or tuple of ndarray and None, optional A location into which the result is stored. If provided, it must have a shape that the inputs broadcast to. If not provided or None, a freshly-allocated array is returned. A tuple (possible only as a keyword argument) must have length equal to the number of outputs. **where**array_like, optional This condition is broadcast over the input. At locations where the condition is True, the `out` array will be set to the ufunc result. Elsewhere, the `out` array will retain its original value. Note that if an uninitialized `out` array is created via the default `out=None`, locations within it where the condition is False will remain uninitialized. ****kwargs** For other keyword-only arguments, see the [ufunc docs](../ufuncs#ufuncs-kwargs). Returns **y**ndarray or scalar The minimum of `x1` and `x2`, element-wise. This is a scalar if both `x1` and `x2` are scalars. 
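A supplementary sketch (not part of the original page) of the NaN rule stated above: `fmin`/`fmax` return the non-NaN operand where possible, while `minimum`/`maximum` propagate NaNs:

```python
import numpy as np

x = np.array([np.nan, 1.0, np.nan])
y = np.array([2.0, np.nan, np.nan])

# fmin ignores NaNs where it can; minimum propagates them.
ignored = np.fmin(x, y)        # -> [ 2.,  1., nan]
propagated = np.minimum(x, y)  # -> [nan, nan, nan]
```

Only when *both* operands are NaN does `fmin` return NaN, as the page states.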
See also [`fmax`](numpy.fmax#numpy.fmax "numpy.fmax") Element-wise maximum of two arrays, ignores NaNs. [`minimum`](numpy.minimum#numpy.minimum "numpy.minimum") Element-wise minimum of two arrays, propagates NaNs. [`amin`](numpy.amin#numpy.amin "numpy.amin") The minimum value of an array along a given axis, propagates NaNs. [`nanmin`](numpy.nanmin#numpy.nanmin "numpy.nanmin") The minimum value of an array along a given axis, ignores NaNs. [`maximum`](numpy.maximum#numpy.maximum "numpy.maximum"), [`amax`](numpy.amax#numpy.amax "numpy.amax"), [`nanmax`](numpy.nanmax#numpy.nanmax "numpy.nanmax") #### Notes New in version 1.3.0. The fmin is equivalent to `np.where(x1 <= x2, x1, x2)` when neither x1 nor x2 are NaNs, but it is faster and does proper broadcasting. #### Examples ``` >>> np.fmin([2, 3, 4], [1, 5, 2]) array([1, 3, 2]) ``` ``` >>> np.fmin(np.eye(2), [0.5, 2]) array([[ 0.5, 0. ], [ 0. , 1. ]]) ``` ``` >>> np.fmin([np.nan, 0, np.nan],[0, np.nan, np.nan]) array([ 0., 0., nan]) ``` © 2005–2022 NumPy Developers Licensed under the 3-clause BSD License. <https://numpy.org/doc/1.23/reference/generated/numpy.fmin.htmlnumpy.isfinite ============== numpy.isfinite(*x*, */*, *out=None*, ***, *where=True*, *casting='same_kind'*, *order='K'*, *dtype=None*, *subok=True*[, *signature*, *extobj*])*=<ufunc 'isfinite'>* Test element-wise for finiteness (not infinity and not Not a Number). The result is returned as a boolean array. Parameters **x**array_like Input values. **out**ndarray, None, or tuple of ndarray and None, optional A location into which the result is stored. If provided, it must have a shape that the inputs broadcast to. If not provided or None, a freshly-allocated array is returned. A tuple (possible only as a keyword argument) must have length equal to the number of outputs. **where**array_like, optional This condition is broadcast over the input. At locations where the condition is True, the `out` array will be set to the ufunc result. 
Elsewhere, the `out` array will retain its original value. Note that if an uninitialized `out` array is created via the default `out=None`, locations within it where the condition is False will remain uninitialized. ****kwargs** For other keyword-only arguments, see the [ufunc docs](../ufuncs#ufuncs-kwargs). Returns **y**ndarray, bool True where `x` is not positive infinity, negative infinity, or NaN; false otherwise. This is a scalar if `x` is a scalar. See also [`isinf`](numpy.isinf#numpy.isinf "numpy.isinf"), [`isneginf`](numpy.isneginf#numpy.isneginf "numpy.isneginf"), [`isposinf`](numpy.isposinf#numpy.isposinf "numpy.isposinf"), [`isnan`](numpy.isnan#numpy.isnan "numpy.isnan") #### Notes Not a Number, positive infinity and negative infinity are considered to be non-finite. NumPy uses the IEEE Standard for Binary Floating-Point for Arithmetic (IEEE 754). This means that Not a Number is not equivalent to infinity. Also that positive infinity is not equivalent to negative infinity. But infinity is equivalent to positive infinity. Errors result if the second argument is also supplied when `x` is a scalar input, or if first and second arguments have different shapes. #### Examples ``` >>> np.isfinite(1) True >>> np.isfinite(0) True >>> np.isfinite(np.nan) False >>> np.isfinite(np.inf) False >>> np.isfinite(np.NINF) False >>> np.isfinite([np.log(-1.),1.,np.log(0)]) array([False, True, False]) ``` ``` >>> x = np.array([-np.inf, 0., np.inf]) >>> y = np.array([2, 2, 2]) >>> np.isfinite(x, y) array([0, 1, 0]) >>> y array([0, 1, 0]) ``` © 2005–2022 NumPy Developers Licensed under the 3-clause BSD License. <https://numpy.org/doc/1.23/reference/generated/numpy.isfinite.htmlnumpy.isinf =========== numpy.isinf(*x*, */*, *out=None*, ***, *where=True*, *casting='same_kind'*, *order='K'*, *dtype=None*, *subok=True*[, *signature*, *extobj*])*=<ufunc 'isinf'>* Test element-wise for positive or negative infinity. 
Returns a boolean array of the same shape as `x`, True where `x == +/-inf`, otherwise False. Parameters **x**array_like Input values **out**ndarray, None, or tuple of ndarray and None, optional A location into which the result is stored. If provided, it must have a shape that the inputs broadcast to. If not provided or None, a freshly-allocated array is returned. A tuple (possible only as a keyword argument) must have length equal to the number of outputs. **where**array_like, optional This condition is broadcast over the input. At locations where the condition is True, the `out` array will be set to the ufunc result. Elsewhere, the `out` array will retain its original value. Note that if an uninitialized `out` array is created via the default `out=None`, locations within it where the condition is False will remain uninitialized. ****kwargs** For other keyword-only arguments, see the [ufunc docs](../ufuncs#ufuncs-kwargs). Returns **y**bool (scalar) or boolean ndarray True where `x` is positive or negative infinity, false otherwise. This is a scalar if `x` is a scalar. See also [`isneginf`](numpy.isneginf#numpy.isneginf "numpy.isneginf"), [`isposinf`](numpy.isposinf#numpy.isposinf "numpy.isposinf"), [`isnan`](numpy.isnan#numpy.isnan "numpy.isnan"), [`isfinite`](numpy.isfinite#numpy.isfinite "numpy.isfinite") #### Notes NumPy uses the IEEE Standard for Binary Floating-Point for Arithmetic (IEEE 754). Errors result if the second argument is supplied when the first argument is a scalar, or if the first and second arguments have different shapes. #### Examples ``` >>> np.isinf(np.inf) True >>> np.isinf(np.nan) False >>> np.isinf(np.NINF) True >>> np.isinf([np.inf, -np.inf, 1.0, np.nan]) array([ True, True, False, False]) ``` ``` >>> x = np.array([-np.inf, 0., np.inf]) >>> y = np.array([2, 2, 2]) >>> np.isinf(x, y) array([1, 0, 1]) >>> y array([1, 0, 1]) ``` © 2005–2022 NumPy Developers Licensed under the 3-clause BSD License. 
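A supplementary sketch (not from the page above) showing how `isinf` relates to the `isposinf`/`isneginf` functions listed under "See also":

```python
import numpy as np

x = np.array([np.inf, -np.inf, np.nan, 0.0])

# isinf flags both infinities; isposinf / isneginf split them by sign.
any_inf = np.isinf(x)      # -> [ True,  True, False, False]
pos_inf = np.isposinf(x)   # -> [ True, False, False, False]
neg_inf = np.isneginf(x)   # -> [False,  True, False, False]
```

Note that NaN is neither infinity, so all three are False for it.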
<https://numpy.org/doc/1.23/reference/generated/numpy.isinf.htmlnumpy.isnan =========== numpy.isnan(*x*, */*, *out=None*, ***, *where=True*, *casting='same_kind'*, *order='K'*, *dtype=None*, *subok=True*[, *signature*, *extobj*])*=<ufunc 'isnan'>* Test element-wise for NaN and return result as a boolean array. Parameters **x**array_like Input array. **out**ndarray, None, or tuple of ndarray and None, optional A location into which the result is stored. If provided, it must have a shape that the inputs broadcast to. If not provided or None, a freshly-allocated array is returned. A tuple (possible only as a keyword argument) must have length equal to the number of outputs. **where**array_like, optional This condition is broadcast over the input. At locations where the condition is True, the `out` array will be set to the ufunc result. Elsewhere, the `out` array will retain its original value. Note that if an uninitialized `out` array is created via the default `out=None`, locations within it where the condition is False will remain uninitialized. ****kwargs** For other keyword-only arguments, see the [ufunc docs](../ufuncs#ufuncs-kwargs). Returns **y**ndarray or bool True where `x` is NaN, false otherwise. This is a scalar if `x` is a scalar. See also [`isinf`](numpy.isinf#numpy.isinf "numpy.isinf"), [`isneginf`](numpy.isneginf#numpy.isneginf "numpy.isneginf"), [`isposinf`](numpy.isposinf#numpy.isposinf "numpy.isposinf"), [`isfinite`](numpy.isfinite#numpy.isfinite "numpy.isfinite"), [`isnat`](numpy.isnat#numpy.isnat "numpy.isnat") #### Notes NumPy uses the IEEE Standard for Binary Floating-Point for Arithmetic (IEEE 754). This means that Not a Number is not equivalent to infinity. #### Examples ``` >>> np.isnan(np.nan) True >>> np.isnan(np.inf) False >>> np.isnan([np.log(-1.),1.,np.log(0)]) array([ True, False, False]) ``` © 2005–2022 NumPy Developers Licensed under the 3-clause BSD License. 
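A supplementary sketch (not part of the original page) of why `isnan` is needed at all: under IEEE 754, NaN compares unequal to everything, including itself, so an equality test cannot find it:

```python
import numpy as np

x = np.array([1.0, np.nan, 3.0])

# NaN != NaN, so self-comparison is False exactly at NaN positions.
eq_self = x == x               # -> [ True, False,  True]

# np.isnan is the reliable test; a common use is masking NaNs out.
mask = np.isnan(x)             # -> [False,  True, False]
clean = x[~mask]               # -> [1., 3.]
```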
<https://numpy.org/doc/1.23/reference/generated/numpy.isnan.html>

numpy.isnat
===========

numpy.isnat(*x*, */*, *out=None*, ***, *where=True*, *casting='same_kind'*, *order='K'*, *dtype=None*, *subok=True*[, *signature*, *extobj*])*=<ufunc 'isnat'>*

Test element-wise for NaT (not a time) and return result as a boolean array.

New in version 1.13.0.

Parameters

**x**array_like Input array with datetime or timedelta data type.

**out**ndarray, None, or tuple of ndarray and None, optional A location into which the result is stored. If provided, it must have a shape that the inputs broadcast to. If not provided or None, a freshly-allocated array is returned. A tuple (possible only as a keyword argument) must have length equal to the number of outputs.

**where**array_like, optional This condition is broadcast over the input. At locations where the condition is True, the `out` array will be set to the ufunc result. Elsewhere, the `out` array will retain its original value. Note that if an uninitialized `out` array is created via the default `out=None`, locations within it where the condition is False will remain uninitialized.

****kwargs** For other keyword-only arguments, see the [ufunc docs](../ufuncs#ufuncs-kwargs).

Returns

**y**ndarray or bool True where `x` is NaT, false otherwise. This is a scalar if `x` is a scalar.

See also

[`isnan`](numpy.isnan#numpy.isnan "numpy.isnan"), [`isinf`](numpy.isinf#numpy.isinf "numpy.isinf"), [`isneginf`](numpy.isneginf#numpy.isneginf "numpy.isneginf"), [`isposinf`](numpy.isposinf#numpy.isposinf "numpy.isposinf"), [`isfinite`](numpy.isfinite#numpy.isfinite "numpy.isfinite")

#### Examples

```
>>> np.isnat(np.datetime64("NaT"))
True
>>> np.isnat(np.datetime64("2016-01-01"))
False
>>> np.isnat(np.array(["NaT", "2016-01-01"], dtype="datetime64[ns]"))
array([ True, False])
```

© 2005–2022 NumPy Developers Licensed under the 3-clause BSD License.
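A supplementary sketch (not part of the original page): `isnat` plays the same role for datetime64/timedelta64 data that `isnan` plays for floats, e.g. for filtering out missing timestamps:

```python
import numpy as np

d = np.array(["2020-01-01", "NaT", "2020-03-01"], dtype="datetime64[D]")

# isnat marks the missing (NaT) entries; ~mask keeps the valid ones.
nat_mask = np.isnat(d)     # -> [False,  True, False]
valid = d[~nat_mask]       # the two real dates
```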
<https://numpy.org/doc/1.23/reference/generated/numpy.isnat.html>

numpy.signbit
=============

numpy.signbit(*x*, */*, *out=None*, ***, *where=True*, *casting='same_kind'*, *order='K'*, *dtype=None*, *subok=True*[, *signature*, *extobj*])*=<ufunc 'signbit'>*

Returns element-wise True where signbit is set (less than zero).

Parameters

**x**array_like The input value(s).

**out**ndarray, None, or tuple of ndarray and None, optional A location into which the result is stored. If provided, it must have a shape that the inputs broadcast to. If not provided or None, a freshly-allocated array is returned. A tuple (possible only as a keyword argument) must have length equal to the number of outputs.

**where**array_like, optional This condition is broadcast over the input. At locations where the condition is True, the `out` array will be set to the ufunc result. Elsewhere, the `out` array will retain its original value. Note that if an uninitialized `out` array is created via the default `out=None`, locations within it where the condition is False will remain uninitialized.

****kwargs** For other keyword-only arguments, see the [ufunc docs](../ufuncs#ufuncs-kwargs).

Returns

**result**ndarray of bool Output array, or reference to `out` if that was supplied. This is a scalar if `x` is a scalar.

#### Examples

```
>>> np.signbit(-1.2)
True
>>> np.signbit(np.array([1, -2.3, 2.1]))
array([False,  True, False])
```

© 2005–2022 NumPy Developers Licensed under the 3-clause BSD License.

<https://numpy.org/doc/1.23/reference/generated/numpy.signbit.html>

numpy.copysign
==============

numpy.copysign(*x1*, *x2*, */*, *out=None*, ***, *where=True*, *casting='same_kind'*, *order='K'*, *dtype=None*, *subok=True*[, *signature*, *extobj*])*=<ufunc 'copysign'>*

Change the sign of x1 to that of x2, element-wise.

If `x2` is a scalar, its sign will be copied to all elements of `x1`.

Parameters

**x1**array_like Values to change the sign of.

**x2**array_like The sign of `x2` is copied to `x1`.
If `x1.shape != x2.shape`, they must be broadcastable to a common shape (which becomes the shape of the output). **out**ndarray, None, or tuple of ndarray and None, optional A location into which the result is stored. If provided, it must have a shape that the inputs broadcast to. If not provided or None, a freshly-allocated array is returned. A tuple (possible only as a keyword argument) must have length equal to the number of outputs. **where**array_like, optional This condition is broadcast over the input. At locations where the condition is True, the `out` array will be set to the ufunc result. Elsewhere, the `out` array will retain its original value. Note that if an uninitialized `out` array is created via the default `out=None`, locations within it where the condition is False will remain uninitialized. ****kwargs** For other keyword-only arguments, see the [ufunc docs](../ufuncs#ufuncs-kwargs). Returns **out**ndarray or scalar The values of `x1` with the sign of `x2`. This is a scalar if both `x1` and `x2` are scalars. #### Examples ``` >>> np.copysign(1.3, -1) -1.3 >>> 1/np.copysign(0, 1) inf >>> 1/np.copysign(0, -1) -inf ``` ``` >>> np.copysign([-1, 0, 1], -1.1) array([-1., -0., -1.]) >>> np.copysign([-1, 0, 1], np.arange(3)-1) array([-1., 0., 1.]) ``` © 2005–2022 NumPy Developers Licensed under the 3-clause BSD License. <https://numpy.org/doc/1.23/reference/generated/numpy.copysign.htmlnumpy.nextafter =============== numpy.nextafter(*x1*, *x2*, */*, *out=None*, ***, *where=True*, *casting='same_kind'*, *order='K'*, *dtype=None*, *subok=True*[, *signature*, *extobj*])*=<ufunc 'nextafter'>* Return the next floating-point value after x1 towards x2, element-wise. Parameters **x1**array_like Values to find the next representable value of. **x2**array_like The direction where to look for the next representable value of `x1`. If `x1.shape != x2.shape`, they must be broadcastable to a common shape (which becomes the shape of the output). 
**out**ndarray, None, or tuple of ndarray and None, optional A location into which the result is stored. If provided, it must have a shape that the inputs broadcast to. If not provided or None, a freshly-allocated array is returned. A tuple (possible only as a keyword argument) must have length equal to the number of outputs. **where**array_like, optional This condition is broadcast over the input. At locations where the condition is True, the `out` array will be set to the ufunc result. Elsewhere, the `out` array will retain its original value. Note that if an uninitialized `out` array is created via the default `out=None`, locations within it where the condition is False will remain uninitialized. ****kwargs** For other keyword-only arguments, see the [ufunc docs](../ufuncs#ufuncs-kwargs). Returns **out**ndarray or scalar The next representable values of `x1` in the direction of `x2`. This is a scalar if both `x1` and `x2` are scalars. #### Examples ``` >>> eps = np.finfo(np.float64).eps >>> np.nextafter(1, 2) == eps + 1 True >>> np.nextafter([1, 2], [2, 1]) == [eps + 1, 2 - eps] array([ True, True]) ``` © 2005–2022 NumPy Developers Licensed under the 3-clause BSD License. <https://numpy.org/doc/1.23/reference/generated/numpy.nextafter.htmlnumpy.spacing ============= numpy.spacing(*x*, */*, *out=None*, ***, *where=True*, *casting='same_kind'*, *order='K'*, *dtype=None*, *subok=True*[, *signature*, *extobj*])*=<ufunc 'spacing'>* Return the distance between x and the nearest adjacent number. Parameters **x**array_like Values to find the spacing of. **out**ndarray, None, or tuple of ndarray and None, optional A location into which the result is stored. If provided, it must have a shape that the inputs broadcast to. If not provided or None, a freshly-allocated array is returned. A tuple (possible only as a keyword argument) must have length equal to the number of outputs. **where**array_like, optional This condition is broadcast over the input. 
At locations where the condition is True, the `out` array will be set to the ufunc result. Elsewhere, the `out` array will retain its original value. Note that if an uninitialized `out` array is created via the default `out=None`, locations within it where the condition is False will remain uninitialized. ****kwargs** For other keyword-only arguments, see the [ufunc docs](../ufuncs#ufuncs-kwargs). Returns **out**ndarray or scalar The spacing of values of `x`. This is a scalar if `x` is a scalar. #### Notes It can be considered as a generalization of EPS: `spacing(np.float64(1)) == np.finfo(np.float64).eps`, and there should not be any representable number between `x + spacing(x)` and x for any finite x. Spacing of +- inf and NaN is NaN. #### Examples ``` >>> np.spacing(1) == np.finfo(np.float64).eps True ``` © 2005–2022 NumPy Developers Licensed under the 3-clause BSD License. <https://numpy.org/doc/1.23/reference/generated/numpy.spacing.htmlnumpy.modf ========== numpy.modf(*x*, [*out1*, *out2*, ]*/*, [*out=(None*, *None)*, ]***, *where=True*, *casting='same_kind'*, *order='K'*, *dtype=None*, *subok=True*[, *signature*, *extobj*])*=<ufunc 'modf'>* Return the fractional and integral parts of an array, element-wise. The fractional and integral parts are negative if the given number is negative. Parameters **x**array_like Input array. **out**ndarray, None, or tuple of ndarray and None, optional A location into which the result is stored. If provided, it must have a shape that the inputs broadcast to. If not provided or None, a freshly-allocated array is returned. A tuple (possible only as a keyword argument) must have length equal to the number of outputs. **where**array_like, optional This condition is broadcast over the input. At locations where the condition is True, the `out` array will be set to the ufunc result. Elsewhere, the `out` array will retain its original value. 
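Complementing the `spacing` examples above, a small sketch of how `spacing` relates to `nextafter` for a positive finite value:

```python
import numpy as np

x = np.float64(1.0)
# For positive finite x, spacing(x) is exactly the gap to the next
# representable float in the direction of +inf.
gap = np.spacing(x)
same = (gap == np.nextafter(x, np.inf) - x)  # True
```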
Note that if an uninitialized `out` array is created via the default `out=None`, locations within it where the condition is False will remain uninitialized. ****kwargs** For other keyword-only arguments, see the [ufunc docs](../ufuncs#ufuncs-kwargs). Returns **y1**ndarray Fractional part of `x`. This is a scalar if `x` is a scalar. **y2**ndarray Integral part of `x`. This is a scalar if `x` is a scalar. See also [`divmod`](numpy.divmod#numpy.divmod "numpy.divmod") `divmod(x, 1)` is equivalent to `modf` with the return values switched, except it always has a positive remainder. #### Notes For integer input the return values are floats. #### Examples ``` >>> np.modf([0, 3.5]) (array([ 0. , 0.5]), array([ 0., 3.])) >>> np.modf(-0.5) (-0.5, -0.0) ``` © 2005–2022 NumPy Developers Licensed under the 3-clause BSD License. <https://numpy.org/doc/1.23/reference/generated/numpy.modf.html> numpy.ldexp =========== numpy.ldexp(*x1*, *x2*, */*, *out=None*, ***, *where=True*, *casting='same_kind'*, *order='K'*, *dtype=None*, *subok=True*[, *signature*, *extobj*])*=<ufunc 'ldexp'>* Returns x1 * 2**x2, element-wise. The mantissas `x1` and twos exponents `x2` are used to construct floating point numbers `x1 * 2**x2`. Parameters **x1**array_like Array of multipliers. **x2**array_like, int Array of twos exponents. If `x1.shape != x2.shape`, they must be broadcastable to a common shape (which becomes the shape of the output). **out**ndarray, None, or tuple of ndarray and None, optional A location into which the result is stored. If provided, it must have a shape that the inputs broadcast to. If not provided or None, a freshly-allocated array is returned. A tuple (possible only as a keyword argument) must have length equal to the number of outputs. **where**array_like, optional This condition is broadcast over the input. At locations where the condition is True, the `out` array will be set to the ufunc result. Elsewhere, the `out` array will retain its original value.
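To make the `divmod` relationship noted in `modf`'s See also concrete, a minimal sketch:

```python
import numpy as np

x = -0.5
frac, whole = np.modf(x)      # (-0.5, -0.0): both parts carry the sign
quot, rem = np.divmod(x, 1)   # (-1.0, 0.5): remainder is non-negative
```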
Note that if an uninitialized `out` array is created via the default `out=None`, locations within it where the condition is False will remain uninitialized. ****kwargs** For other keyword-only arguments, see the [ufunc docs](../ufuncs#ufuncs-kwargs). Returns **y**ndarray or scalar The result of `x1 * 2**x2`. This is a scalar if both `x1` and `x2` are scalars. See also [`frexp`](numpy.frexp#numpy.frexp "numpy.frexp") Return (y1, y2) from `x = y1 * 2**y2`, inverse to [`ldexp`](#numpy.ldexp "numpy.ldexp"). #### Notes Complex dtypes are not supported; they will raise a TypeError. [`ldexp`](#numpy.ldexp "numpy.ldexp") is useful as the inverse of [`frexp`](numpy.frexp#numpy.frexp "numpy.frexp"); if used by itself, it is clearer to simply use the expression `x1 * 2**x2`. #### Examples ``` >>> np.ldexp(5, np.arange(4)) array([ 5., 10., 20., 40.], dtype=float16) ``` ``` >>> x = np.arange(6) >>> np.ldexp(*np.frexp(x)) array([ 0., 1., 2., 3., 4., 5.]) ``` © 2005–2022 NumPy Developers Licensed under the 3-clause BSD License. <https://numpy.org/doc/1.23/reference/generated/numpy.ldexp.html> numpy.frexp =========== numpy.frexp(*x*, [*out1*, *out2*, ]*/*, [*out=(None*, *None)*, ]***, *where=True*, *casting='same_kind'*, *order='K'*, *dtype=None*, *subok=True*[, *signature*, *extobj*])*=<ufunc 'frexp'>* Decompose the elements of x into mantissa and twos exponent. Returns (`mantissa`, `exponent`), where `x = mantissa * 2**exponent`. The mantissa lies in the open interval (-1, 1), while the twos exponent is a signed integer. Parameters **x**array_like Array of numbers to be decomposed. **out1**ndarray, optional Output array for the mantissa. Must have the same shape as `x`. **out2**ndarray, optional Output array for the exponent. Must have the same shape as `x`. **out**ndarray, None, or tuple of ndarray and None, optional A location into which the result is stored. If provided, it must have a shape that the inputs broadcast to.
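`frexp` and `ldexp` are exact inverses for finite floats; a minimal round-trip sketch:

```python
import numpy as np

x = np.array([0.5, 3.0, 80.0])
m, e = np.frexp(x)          # x == m * 2**e, with 0.5 <= |m| < 1 here
roundtrip = np.ldexp(m, e)  # recovers x exactly (no rounding involved)
```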
If not provided or None, a freshly-allocated array is returned. A tuple (possible only as a keyword argument) must have length equal to the number of outputs. **where**array_like, optional This condition is broadcast over the input. At locations where the condition is True, the `out` array will be set to the ufunc result. Elsewhere, the `out` array will retain its original value. Note that if an uninitialized `out` array is created via the default `out=None`, locations within it where the condition is False will remain uninitialized. ****kwargs** For other keyword-only arguments, see the [ufunc docs](../ufuncs#ufuncs-kwargs). Returns **mantissa**ndarray Floating values between -1 and 1. This is a scalar if `x` is a scalar. **exponent**ndarray Integer exponents of 2. This is a scalar if `x` is a scalar. See also [`ldexp`](numpy.ldexp#numpy.ldexp "numpy.ldexp") Compute `y = x1 * 2**x2`, the inverse of [`frexp`](#numpy.frexp "numpy.frexp"). #### Notes Complex dtypes are not supported; they will raise a TypeError. #### Examples ``` >>> x = np.arange(9) >>> y1, y2 = np.frexp(x) >>> y1 array([ 0. , 0.5 , 0.5 , 0.75 , 0.5 , 0.625, 0.75 , 0.875, 0.5 ]) >>> y2 array([0, 1, 2, 2, 3, 3, 3, 3, 4]) >>> y1 * 2**y2 array([ 0., 1., 2., 3., 4., 5., 6., 7., 8.]) ``` © 2005–2022 NumPy Developers Licensed under the 3-clause BSD License. <https://numpy.org/doc/1.23/reference/generated/numpy.frexp.html> numpy.trunc =========== numpy.trunc(*x*, */*, *out=None*, ***, *where=True*, *casting='same_kind'*, *order='K'*, *dtype=None*, *subok=True*[, *signature*, *extobj*])*=<ufunc 'trunc'>* Return the truncated value of the input, element-wise. The truncated value of the scalar `x` is the nearest integer `i` which is closer to zero than `x` is. In short, the fractional part of the signed number `x` is discarded. Parameters **x**array_like Input data. **out**ndarray, None, or tuple of ndarray and None, optional A location into which the result is stored.
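The truncation rule described for `trunc` differs from `floor` and `ceil` only in rounding direction; a short comparison sketch with illustrative values:

```python
import numpy as np

a = np.array([-1.7, -0.2, 1.5])
t = np.trunc(a)   # toward zero:  [-1., -0., 1.]
f = np.floor(a)   # toward -inf:  [-2., -1., 1.]
c = np.ceil(a)    # toward +inf:  [-1., -0., 2.]
```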
If provided, it must have a shape that the inputs broadcast to. If not provided or None, a freshly-allocated array is returned. A tuple (possible only as a keyword argument) must have length equal to the number of outputs. **where**array_like, optional This condition is broadcast over the input. At locations where the condition is True, the `out` array will be set to the ufunc result. Elsewhere, the `out` array will retain its original value. Note that if an uninitialized `out` array is created via the default `out=None`, locations within it where the condition is False will remain uninitialized. ****kwargs** For other keyword-only arguments, see the [ufunc docs](../ufuncs#ufuncs-kwargs). Returns **y**ndarray or scalar The truncated value of each element in `x`. This is a scalar if `x` is a scalar. See also [`ceil`](numpy.ceil#numpy.ceil "numpy.ceil"), [`floor`](numpy.floor#numpy.floor "numpy.floor"), [`rint`](numpy.rint#numpy.rint "numpy.rint"), [`fix`](numpy.fix#numpy.fix "numpy.fix") #### Notes New in version 1.3.0. #### Examples ``` >>> a = np.array([-1.7, -1.5, -0.2, 0.2, 1.5, 1.7, 2.0]) >>> np.trunc(a) array([-1., -1., -0., 0., 1., 1., 2.]) ``` © 2005–2022 NumPy Developers Licensed under the 3-clause BSD License. <https://numpy.org/doc/1.23/reference/generated/numpy.trunc.htmlnumpy.datetime_as_string ========================== numpy.datetime_as_string(*arr*, *unit=None*, *timezone='naive'*, *casting='same_kind'*) Convert an array of datetimes into an array of strings. Parameters **arr**array_like of datetime64 The array of UTC timestamps to format. **unit**str One of None, ‘auto’, or a [datetime unit](../arrays.datetime#arrays-dtypes-dateunits). **timezone**{‘naive’, ‘UTC’, ‘local’} or tzinfo Timezone information to use when displaying the datetime. If ‘UTC’, end with a Z to indicate UTC time. If ‘local’, convert to the local timezone first, and suffix with a +-#### timezone offset. If a tzinfo object, then do as with ‘local’, but use the specified timezone. 
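A small sketch of the `timezone` choices just described: 'naive' (the default) prints no suffix, while 'UTC' appends a Z.

```python
import numpy as np

d = np.array(['2002-10-27T04:30'], dtype='M8[m]')
s_naive = np.datetime_as_string(d)[0]                # '2002-10-27T04:30'
s_utc = np.datetime_as_string(d, timezone='UTC')[0]  # '2002-10-27T04:30Z'
```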
**casting**{‘no’, ‘equiv’, ‘safe’, ‘same_kind’, ‘unsafe’} Casting to allow when changing between datetime units. Returns **str_arr**ndarray An array of strings the same shape as `arr`. #### Examples ``` >>> import pytz >>> d = np.arange('2002-10-27T04:30', 4*60, 60, dtype='M8[m]') >>> d array(['2002-10-27T04:30', '2002-10-27T05:30', '2002-10-27T06:30', '2002-10-27T07:30'], dtype='datetime64[m]') ``` Setting the timezone to UTC shows the same information, but with a Z suffix ``` >>> np.datetime_as_string(d, timezone='UTC') array(['2002-10-27T04:30Z', '2002-10-27T05:30Z', '2002-10-27T06:30Z', '2002-10-27T07:30Z'], dtype='<U35') ``` Note that we picked datetimes that cross a DST boundary. Passing in a `pytz` timezone object will print the appropriate offset ``` >>> np.datetime_as_string(d, timezone=pytz.timezone('US/Eastern')) array(['2002-10-27T00:30-0400', '2002-10-27T01:30-0400', '2002-10-27T01:30-0500', '2002-10-27T02:30-0500'], dtype='<U39') ``` Passing in a unit will change the precision ``` >>> np.datetime_as_string(d, unit='h') array(['2002-10-27T04', '2002-10-27T05', '2002-10-27T06', '2002-10-27T07'], dtype='<U32') >>> np.datetime_as_string(d, unit='s') array(['2002-10-27T04:30:00', '2002-10-27T05:30:00', '2002-10-27T06:30:00', '2002-10-27T07:30:00'], dtype='<U38') ``` ‘casting’ can be used to specify whether precision can be changed ``` >>> np.datetime_as_string(d, unit='h', casting='safe') Traceback (most recent call last): ... TypeError: Cannot create a datetime string as units 'h' from a NumPy datetime with units 'm' according to the rule 'safe' ``` © 2005–2022 NumPy Developers Licensed under the 3-clause BSD License. <https://numpy.org/doc/1.23/reference/generated/numpy.datetime_as_string.htmlnumpy.datetime_data ==================== numpy.datetime_data(*dtype*, */*) Get information about the step size of a date or time type. 
The returned tuple can be passed as the second argument of [`numpy.datetime64`](../arrays.scalars#numpy.datetime64 "numpy.datetime64") and [`numpy.timedelta64`](../arrays.scalars#numpy.timedelta64 "numpy.timedelta64"). Parameters **dtype**dtype The dtype object, which must be a [`datetime64`](../arrays.scalars#numpy.datetime64 "numpy.datetime64") or [`timedelta64`](../arrays.scalars#numpy.timedelta64 "numpy.timedelta64") type. Returns **unit**str The [datetime unit](../arrays.datetime#arrays-dtypes-dateunits) on which this dtype is based. **count**int The number of base units in a step. #### Examples ``` >>> dt_25s = np.dtype('timedelta64[25s]') >>> np.datetime_data(dt_25s) ('s', 25) >>> np.array(10, dt_25s).astype('timedelta64[s]') array(250, dtype='timedelta64[s]') ``` The result can be used to construct a datetime that uses the same units as a timedelta ``` >>> np.datetime64('2010', np.datetime_data(dt_25s)) numpy.datetime64('2010-01-01T00:00:00','25s') ``` © 2005–2022 NumPy Developers Licensed under the 3-clause BSD License. <https://numpy.org/doc/1.23/reference/generated/numpy.datetime_data.htmlnumpy.promote_types ==================== numpy.promote_types(*type1*, *type2*) Returns the data type with the smallest size and smallest scalar kind to which both `type1` and `type2` may be safely cast. The returned data type is always considered “canonical”, this mainly means that the promoted dtype will always be in native byte order. This function is symmetric, but rarely associative. Parameters **type1**dtype or dtype specifier First data type. **type2**dtype or dtype specifier Second data type. Returns **out**dtype The promoted data type. 
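A sketch of the safety guarantee stated for `promote_types`: both inputs can always be cast to the promoted type without loss.

```python
import numpy as np

# int8 and uint8 overlap only partially, so promotion picks int16,
# the smallest type that can hold every value of both.
t = np.promote_types('i1', 'u1')
ok = np.can_cast('i1', t) and np.can_cast('u1', t)  # True
```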
See also [`result_type`](numpy.result_type#numpy.result_type "numpy.result_type"), [`dtype`](numpy.dtype#numpy.dtype "numpy.dtype"), [`can_cast`](numpy.can_cast#numpy.can_cast "numpy.can_cast") #### Notes Please see [`numpy.result_type`](numpy.result_type#numpy.result_type "numpy.result_type") for additional information about promotion. New in version 1.6.0. Starting in NumPy 1.9, promote_types function now returns a valid string length when given an integer or float dtype as one argument and a string dtype as another argument. Previously it always returned the input string dtype, even if it wasn’t long enough to store the max integer/float value converted to a string. Changed in version 1.23.0. NumPy now supports promotion for more structured dtypes. It will now remove unnecessary padding from a structure dtype and promote included fields individually. #### Examples ``` >>> np.promote_types('f4', 'f8') dtype('float64') ``` ``` >>> np.promote_types('i8', 'f4') dtype('float64') ``` ``` >>> np.promote_types('>i8', '<c8') dtype('complex128') ``` ``` >>> np.promote_types('i4', 'S8') dtype('S11') ``` An example of a non-associative case: ``` >>> p = np.promote_types >>> p('S', p('i1', 'u1')) dtype('S6') >>> p(p('S', 'i1'), 'u1') dtype('S4') ``` © 2005–2022 NumPy Developers Licensed under the 3-clause BSD License. <https://numpy.org/doc/1.23/reference/generated/numpy.promote_types.htmlnumpy.min_scalar_type ======================= numpy.min_scalar_type(*a*, */*) For scalar `a`, returns the data type with the smallest size and smallest scalar kind which can hold its value. For non-scalar array `a`, returns the vector’s dtype unmodified. Floating point values are not demoted to integers, and complex values are not demoted to floats. Parameters **a**scalar or array_like The value whose minimal data type is to be found. Returns **out**dtype The minimal data type. 
See also [`result_type`](numpy.result_type#numpy.result_type "numpy.result_type"), [`promote_types`](numpy.promote_types#numpy.promote_types "numpy.promote_types"), [`dtype`](numpy.dtype#numpy.dtype "numpy.dtype"), [`can_cast`](numpy.can_cast#numpy.can_cast "numpy.can_cast") #### Notes New in version 1.6.0. #### Examples ``` >>> np.min_scalar_type(10) dtype('uint8') ``` ``` >>> np.min_scalar_type(-260) dtype('int16') ``` ``` >>> np.min_scalar_type(3.1) dtype('float16') ``` ``` >>> np.min_scalar_type(1e50) dtype('float64') ``` ``` >>> np.min_scalar_type(np.arange(4,dtype='f8')) dtype('float64') ``` © 2005–2022 NumPy Developers Licensed under the 3-clause BSD License. <https://numpy.org/doc/1.23/reference/generated/numpy.min_scalar_type.htmlnumpy.result_type ================== numpy.result_type(**arrays_and_dtypes*) Returns the type that results from applying the NumPy type promotion rules to the arguments. Type promotion in NumPy works similarly to the rules in languages like C++, with some slight differences. When both scalars and arrays are used, the array’s type takes precedence and the actual value of the scalar is taken into account. For example, calculating 3*a, where a is an array of 32-bit floats, intuitively should result in a 32-bit float output. If the 3 is a 32-bit integer, the NumPy rules indicate it can’t convert losslessly into a 32-bit float, so a 64-bit float should be the result type. By examining the value of the constant, ‘3’, we see that it fits in an 8-bit integer, which can be cast losslessly into the 32-bit float. Parameters **arrays_and_dtypes**list of arrays and dtypes The operands of some operation whose result type is needed. Returns **out**dtype The result type. 
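The scalar-versus-array behavior described above can be sketched directly; note that the exact scalar-handling rules were revised by NEP 50 in NumPy 2.0, but these two results hold either way.

```python
import numpy as np

a = np.arange(3, dtype=np.float32)
# The Python scalar 3 does not drag the result up to float64:
r1 = np.result_type(3, a)     # float32
# With only Python scalars, the default float type wins:
r2 = np.result_type(3.0, -2)  # float64
```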
See also [`dtype`](numpy.dtype#numpy.dtype "numpy.dtype"), [`promote_types`](numpy.promote_types#numpy.promote_types "numpy.promote_types"), [`min_scalar_type`](numpy.min_scalar_type#numpy.min_scalar_type "numpy.min_scalar_type"), [`can_cast`](numpy.can_cast#numpy.can_cast "numpy.can_cast") #### Notes New in version 1.6.0. The specific algorithm used is as follows. Categories are determined by first checking which of boolean, integer (int/uint), or floating point (float/complex) the maximum kind of all the arrays and the scalars are. If there are only scalars or the maximum category of the scalars is higher than the maximum category of the arrays, the data types are combined with [`promote_types`](numpy.promote_types#numpy.promote_types "numpy.promote_types") to produce the return value. Otherwise, [`min_scalar_type`](numpy.min_scalar_type#numpy.min_scalar_type "numpy.min_scalar_type") is called on each array, and the resulting data types are all combined with [`promote_types`](numpy.promote_types#numpy.promote_types "numpy.promote_types") to produce the return value. The set of int values is not a subset of the uint values for types with the same number of bits, something not reflected in [`min_scalar_type`](numpy.min_scalar_type#numpy.min_scalar_type "numpy.min_scalar_type"), but handled as a special case in [`result_type`](#numpy.result_type "numpy.result_type"). #### Examples ``` >>> np.result_type(3, np.arange(7, dtype='i1')) dtype('int8') ``` ``` >>> np.result_type('i4', 'c8') dtype('complex128') ``` ``` >>> np.result_type(3.0, -2) dtype('float64') ``` © 2005–2022 NumPy Developers Licensed under the 3-clause BSD License. <https://numpy.org/doc/1.23/reference/generated/numpy.result_type.htmlnumpy.common_type ================== numpy.common_type(**arrays*)[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/lib/type_check.py#L682-L735) Return a scalar type which is common to the input arrays. The return type will always be an inexact (i.e. 
floating point) scalar type, even if all the arrays are integer arrays. If one of the inputs is an integer array, the minimum precision type that is returned is a 64-bit floating point dtype. All input arrays except int64 and uint64 can be safely cast to the returned dtype without loss of information. Parameters **array1, array2, …
**ndarrays Input arrays. Returns **out**data type code Data type code. See also [`dtype`](numpy.dtype#numpy.dtype "numpy.dtype"), [`mintypecode`](numpy.mintypecode#numpy.mintypecode "numpy.mintypecode") #### Examples ``` >>> np.common_type(np.arange(2, dtype=np.float32)) <class 'numpy.float32'> >>> np.common_type(np.arange(2, dtype=np.float32), np.arange(2)) <class 'numpy.float64'> >>> np.common_type(np.arange(4), np.array([45, 6.j]), np.array([45.0])) <class 'numpy.complex128'``` © 2005–2022 NumPy Developers Licensed under the 3-clause BSD License. <https://numpy.org/doc/1.23/reference/generated/numpy.common_type.htmlnumpy.obj2sctype ================ numpy.obj2sctype(*rep*, *default=None*)[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/core/numerictypes.py#L228-L279) Return the scalar dtype or NumPy equivalent of Python type of an object. Parameters **rep**any The object of which the type is returned. **default**any, optional If given, this is returned for objects whose types can not be determined. If not given, None is returned for those objects. Returns **dtype**dtype or Python type The data type of `rep`. See also [`sctype2char`](numpy.sctype2char#numpy.sctype2char "numpy.sctype2char"), [`issctype`](numpy.issctype#numpy.issctype "numpy.issctype"), [`issubsctype`](numpy.issubsctype#numpy.issubsctype "numpy.issubsctype"), [`issubdtype`](numpy.issubdtype#numpy.issubdtype "numpy.issubdtype"), [`maximum_sctype`](numpy.maximum_sctype#numpy.maximum_sctype "numpy.maximum_sctype") #### Examples ``` >>> np.obj2sctype(np.int32) <class 'numpy.int32'> >>> np.obj2sctype(np.array([1., 2.])) <class 'numpy.float64'> >>> np.obj2sctype(np.array([1.j])) <class 'numpy.complex128'``` ``` >>> np.obj2sctype(dict) <class 'numpy.object_'> >>> np.obj2sctype('string') ``` ``` >>> np.obj2sctype(1, default=list) <class 'list'``` © 2005–2022 NumPy Developers Licensed under the 3-clause BSD License. 
<https://numpy.org/doc/1.23/reference/generated/numpy.obj2sctype.htmlnumpy.apply_over_axes ======================= numpy.apply_over_axes(*func*, *a*, *axes*)[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/lib/shape_base.py#L421-L505) Apply a function repeatedly over multiple axes. `func` is called as `res = func(a, axis)`, where `axis` is the first element of `axes`. The result `res` of the function call must have either the same dimensions as `a` or one less dimension. If `res` has one less dimension than `a`, a dimension is inserted before `axis`. The call to `func` is then repeated for each axis in `axes`, with `res` as the first argument. Parameters **func**function This function must take two arguments, `func(a, axis)`. **a**array_like Input array. **axes**array_like Axes over which `func` is applied; the elements must be integers. Returns **apply_over_axis**ndarray The output array. The number of dimensions is the same as `a`, but the shape can be different. This depends on whether `func` changes the shape of its output with respect to its input. See also [`apply_along_axis`](numpy.apply_along_axis#numpy.apply_along_axis "numpy.apply_along_axis") Apply a function to 1-D slices of an array along the given axis. #### Notes This function is equivalent to tuple axis arguments to reorderable ufuncs with keepdims=True. Tuple axis arguments to ufuncs have been available since version 1.7.0. #### Examples ``` >>> a = np.arange(24).reshape(2,3,4) >>> a array([[[ 0, 1, 2, 3], [ 4, 5, 6, 7], [ 8, 9, 10, 11]], [[12, 13, 14, 15], [16, 17, 18, 19], [20, 21, 22, 23]]]) ``` Sum over axes 0 and 2. The result has same number of dimensions as the original array: ``` >>> np.apply_over_axes(np.sum, a, [0,2]) array([[[ 60], [ 92], [124]]]) ``` Tuple axis arguments to ufuncs are equivalent: ``` >>> np.sum(a, axis=(0,2), keepdims=True) array([[[ 60], [ 92], [124]]]) ``` © 2005–2022 NumPy Developers Licensed under the 3-clause BSD License. 
<https://numpy.org/doc/1.23/reference/generated/numpy.apply_over_axes.htmlnumpy.frompyfunc ================ numpy.frompyfunc(*func*, */*, *nin*, *nout*, ***[, *identity*]) Takes an arbitrary Python function and returns a NumPy ufunc. Can be used, for example, to add broadcasting to a built-in Python function (see Examples section). Parameters **func**Python function object An arbitrary Python function. **nin**int The number of input arguments. **nout**int The number of objects returned by `func`. **identity**object, optional The value to use for the [`identity`](numpy.ufunc.identity#numpy.ufunc.identity "numpy.ufunc.identity") attribute of the resulting object. If specified, this is equivalent to setting the underlying C `identity` field to `PyUFunc_IdentityValue`. If omitted, the identity is set to `PyUFunc_None`. Note that this is _not_ equivalent to setting the identity to `None`, which implies the operation is reorderable. Returns **out**ufunc Returns a NumPy universal function (`ufunc`) object. See also [`vectorize`](numpy.vectorize#numpy.vectorize "numpy.vectorize") Evaluates pyfunc over input arrays using broadcasting rules of numpy. #### Notes The returned ufunc always returns PyObject arrays. #### Examples Use frompyfunc to add broadcasting to the Python function `oct`: ``` >>> oct_array = np.frompyfunc(oct, 1, 1) >>> oct_array(np.array((10, 30, 100))) array(['0o12', '0o36', '0o144'], dtype=object) >>> np.array((oct(10), oct(30), oct(100))) # for comparison array(['0o12', '0o36', '0o144'], dtype='<U5') ``` © 2005–2022 NumPy Developers Licensed under the 3-clause BSD License. <https://numpy.org/doc/1.23/reference/generated/numpy.frompyfunc.htmlnumpy.piecewise =============== numpy.piecewise(*x*, *condlist*, *funclist*, **args*, ***kw*)[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/lib/function_base.py#L639-L757) Evaluate a piecewise-defined function. 
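As a sketch of the multi-output case governed by `nout` in `frompyfunc` above, the built-in `divmod` (2 inputs, 2 outputs) can be wrapped directly:

```python
import numpy as np

py_divmod = np.frompyfunc(divmod, 2, 2)  # nin=2, nout=2
q, r = py_divmod(np.array([7, 9]), 4)
# Both results are object arrays: q -> [1, 2], r -> [3, 1]
```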
Given a set of conditions and corresponding functions, evaluate each function on the input data wherever its condition is true. Parameters **x**ndarray or scalar The input domain. **condlist**list of bool arrays or bool scalars Each boolean array corresponds to a function in `funclist`. Wherever `condlist[i]` is True, `funclist[i](x)` is used as the output value. Each boolean array in `condlist` selects a piece of `x`, and should therefore be of the same shape as `x`. The length of `condlist` must correspond to that of `funclist`. If one extra function is given, i.e. if `len(funclist) == len(condlist) + 1`, then that extra function is the default value, used wherever all conditions are false. **funclist**list of callables, f(x,*args,**kw), or scalars Each function is evaluated over `x` wherever its corresponding condition is True. It should take a 1d array as input and give a 1d array or a scalar value as output. If, instead of a callable, a scalar is provided, then a constant function (`lambda x: scalar`) is assumed. **args**tuple, optional Any further arguments given to [`piecewise`](#numpy.piecewise "numpy.piecewise") are passed to the functions upon execution, i.e., if called `piecewise(..., ..., 1, 'a')`, then each function is called as `f(x, 1, 'a')`. **kw**dict, optional Keyword arguments used in calling [`piecewise`](#numpy.piecewise "numpy.piecewise") are passed to the functions upon execution, i.e., if called `piecewise(..., ..., alpha=1)`, then each function is called as `f(x, alpha=1)`. Returns **out**ndarray The output is the same shape and type as x and is found by calling the functions in `funclist` on the appropriate portions of `x`, as defined by the boolean arrays in `condlist`. Portions not covered by any condition have a default value of 0.
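The "one extra function" rule described above, as a minimal sketch (the threshold values and the constant 0.5 are illustrative only):

```python
import numpy as np

x = np.array([-2.0, 0.0, 2.0])
# Two conditions plus one extra entry: the trailing 0.5 is the
# default used wherever neither condition holds.
y = np.piecewise(x, [x < -1, x > 1], [-1.0, 1.0, 0.5])
# y -> [-1. , 0.5, 1. ]
```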
See also [`choose`](numpy.choose#numpy.choose "numpy.choose"), [`select`](numpy.select#numpy.select "numpy.select"), [`where`](numpy.where#numpy.where "numpy.where") #### Notes This is similar to choose or select, except that functions are evaluated on elements of `x` that satisfy the corresponding condition from `condlist`. The result is:

```
      |--
      |funclist[0](x[condlist[0]])
out = |funclist[1](x[condlist[1]])
      |...
      |funclist[n2](x[condlist[n2]])
      |--
```

#### Examples Define the sigma function, which is -1 for `x < 0` and +1 for `x >= 0`. ``` >>> x = np.linspace(-2.5, 2.5, 6) >>> np.piecewise(x, [x < 0, x >= 0], [-1, 1]) array([-1., -1., -1., 1., 1., 1.]) ``` Define the absolute value, which is `-x` for `x < 0` and `x` for `x >= 0`. ``` >>> np.piecewise(x, [x < 0, x >= 0], [lambda x: -x, lambda x: x]) array([2.5, 1.5, 0.5, 0.5, 1.5, 2.5]) ``` Apply the same function to a scalar value. ``` >>> y = -2 >>> np.piecewise(y, [y < 0, y >= 0], [lambda x: -x, lambda x: x]) array(2) ``` © 2005–2022 NumPy Developers Licensed under the 3-clause BSD License. <https://numpy.org/doc/1.23/reference/generated/numpy.piecewise.html> numpy.matlib.empty ================== matlib.empty(*shape*, *dtype=None*, *order='C'*)[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/matlib.py#L24-L60) Return a new matrix of given shape and type, without initializing entries. Parameters **shape**int or tuple of int Shape of the empty matrix. **dtype**data-type, optional Desired output data-type. **order**{‘C’, ‘F’}, optional Whether to store multi-dimensional data in row-major (C-style) or column-major (Fortran-style) order in memory. See also [`empty_like`](numpy.empty_like#numpy.empty_like "numpy.empty_like"), [`zeros`](numpy.zeros#numpy.zeros "numpy.zeros") #### Notes [`empty`](numpy.empty#numpy.empty "numpy.empty"), unlike [`zeros`](numpy.zeros#numpy.zeros "numpy.zeros"), does not set the matrix values to zero, and may therefore be marginally faster.
On the other hand, it requires the user to manually set all the values in the array, and should be used with caution. #### Examples ``` >>> import numpy.matlib >>> np.matlib.empty((2, 2)) # filled with random data matrix([[ 6.76425276e-320, 9.79033856e-307], # random [ 7.39337286e-309, 3.22135945e-309]]) >>> np.matlib.empty((2, 2), dtype=int) matrix([[ 6600475, 0], # random [ 6586976, 22740995]]) ``` © 2005–2022 NumPy Developers Licensed under the 3-clause BSD License. <https://numpy.org/doc/1.23/reference/generated/numpy.matlib.empty.htmlnumpy.matlib.zeros ================== matlib.zeros(*shape*, *dtype=None*, *order='C'*)[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/matlib.py#L107-L149) Return a matrix of given shape and type, filled with zeros. Parameters **shape**int or sequence of ints Shape of the matrix **dtype**data-type, optional The desired data-type for the matrix, default is float. **order**{‘C’, ‘F’}, optional Whether to store the result in C- or Fortran-contiguous order, default is ‘C’. Returns **out**matrix Zero matrix of given shape, dtype, and order. See also [`numpy.zeros`](numpy.zeros#numpy.zeros "numpy.zeros") Equivalent array function. [`matlib.ones`](numpy.matlib.ones#numpy.matlib.ones "numpy.matlib.ones") Return a matrix of ones. #### Notes If [`shape`](numpy.shape#numpy.shape "numpy.shape") has length one i.e. `(N,)`, or is a scalar `N`, `out` becomes a single row matrix of shape `(1,N)`. #### Examples ``` >>> import numpy.matlib >>> np.matlib.zeros((2, 3)) matrix([[0., 0., 0.], [0., 0., 0.]]) ``` ``` >>> np.matlib.zeros(2) matrix([[0., 0.]]) ``` © 2005–2022 NumPy Developers Licensed under the 3-clause BSD License. <https://numpy.org/doc/1.23/reference/generated/numpy.matlib.zeros.htmlnumpy.matlib.ones ================= matlib.ones(*shape*, *dtype=None*, *order='C'*)[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/matlib.py#L62-L105) Matrix of ones. Return a matrix of given shape and type, filled with ones. 
Parameters **shape**{sequence of ints, int} Shape of the matrix **dtype**data-type, optional The desired data-type for the matrix, default is np.float64. **order**{‘C’, ‘F’}, optional Whether to store matrix in C- or Fortran-contiguous order, default is ‘C’. Returns **out**matrix Matrix of ones of given shape, dtype, and order. See also [`ones`](numpy.ones#numpy.ones "numpy.ones") Array of ones. [`matlib.zeros`](numpy.matlib.zeros#numpy.matlib.zeros "numpy.matlib.zeros") Zero matrix. #### Notes If [`shape`](numpy.shape#numpy.shape "numpy.shape") has length one i.e. `(N,)`, or is a scalar `N`, `out` becomes a single row matrix of shape `(1,N)`. #### Examples ``` >>> np.matlib.ones((2,3)) matrix([[1., 1., 1.], [1., 1., 1.]]) ``` ``` >>> np.matlib.ones(2) matrix([[1., 1.]]) ``` © 2005–2022 NumPy Developers Licensed under the 3-clause BSD License. <https://numpy.org/doc/1.23/reference/generated/numpy.matlib.ones.htmlnumpy.matlib.eye ================ matlib.eye(*n*, *M=None*, *k=0*, *dtype=<class 'float'>*, *order='C'*)[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/matlib.py#L187-L229) Return a matrix with ones on the diagonal and zeros elsewhere. Parameters **n**int Number of rows in the output. **M**int, optional Number of columns in the output, defaults to `n`. **k**int, optional Index of the diagonal: 0 refers to the main diagonal, a positive value refers to an upper diagonal, and a negative value to a lower diagonal. **dtype**dtype, optional Data-type of the returned matrix. **order**{‘C’, ‘F’}, optional Whether the output should be stored in row-major (C-style) or column-major (Fortran-style) order in memory. New in version 1.14.0. Returns **I**matrix A `n` x `M` matrix where all elements are equal to zero, except for the `k`-th diagonal, whose values are equal to one. See also [`numpy.eye`](numpy.eye#numpy.eye "numpy.eye") Equivalent array function. [`identity`](numpy.identity#numpy.identity "numpy.identity") Square identity matrix. 
#### Examples ``` >>> import numpy.matlib >>> np.matlib.eye(3, k=1, dtype=float) matrix([[0., 1., 0.], [0., 0., 1.], [0., 0., 0.]]) ``` © 2005–2022 NumPy Developers Licensed under the 3-clause BSD License. <https://numpy.org/doc/1.23/reference/generated/numpy.matlib.eye.htmlnumpy.matlib.identity ===================== matlib.identity(*n*, *dtype=None*)[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/matlib.py#L151-L185) Returns the square identity matrix of given size. Parameters **n**int Size of the returned identity matrix. **dtype**data-type, optional Data-type of the output. Defaults to `float`. Returns **out**matrix `n` x `n` matrix with its main diagonal set to one, and all other elements zero. See also [`numpy.identity`](numpy.identity#numpy.identity "numpy.identity") Equivalent array function. [`matlib.eye`](numpy.matlib.eye#numpy.matlib.eye "numpy.matlib.eye") More general matrix identity function. #### Examples ``` >>> import numpy.matlib >>> np.matlib.identity(3, dtype=int) matrix([[1, 0, 0], [0, 1, 0], [0, 0, 1]]) ``` © 2005–2022 NumPy Developers Licensed under the 3-clause BSD License. <https://numpy.org/doc/1.23/reference/generated/numpy.matlib.identity.htmlnumpy.matlib.repmat =================== matlib.repmat(*a*, *m*, *n*)[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/matlib.py#L328-L376) Repeat a 0-D to 2-D array or matrix MxN times. Parameters **a**array_like The array or matrix to be repeated. **m, n**int The number of times `a` is repeated along the first and second axes. Returns **out**ndarray The result of repeating `a`. 
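As a cross-check on the description of `repmat` above: for array inputs, `repmat(a, m, n)` matches `numpy.tile` once the input has been promoted to 2-D. A minimal sketch, assuming NumPy is installed:

```python
import numpy as np

# repmat promotes a 1-D input to a (1, N) row before repeating,
# so tiling the reshaped row (m, n) times reproduces repmat(a, m, n).
a1 = np.arange(4)
out = np.tile(a1.reshape(1, -1), (2, 2))
print(out)
# [[0 1 2 3 0 1 2 3]
#  [0 1 2 3 0 1 2 3]]
```

This is the same result as the `np.matlib.repmat(a1, 2, 2)` example below, without going through the `matrix` subclass.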
#### Examples ``` >>> import numpy.matlib >>> a0 = np.array(1) >>> np.matlib.repmat(a0, 2, 3) array([[1, 1, 1], [1, 1, 1]]) ``` ``` >>> a1 = np.arange(4) >>> np.matlib.repmat(a1, 2, 2) array([[0, 1, 2, 3, 0, 1, 2, 3], [0, 1, 2, 3, 0, 1, 2, 3]]) ``` ``` >>> a2 = np.asmatrix(np.arange(6).reshape(2, 3)) >>> np.matlib.repmat(a2, 2, 3) matrix([[0, 1, 2, 0, 1, 2, 0, 1, 2], [3, 4, 5, 3, 4, 5, 3, 4, 5], [0, 1, 2, 0, 1, 2, 0, 1, 2], [3, 4, 5, 3, 4, 5, 3, 4, 5]]) ``` © 2005–2022 NumPy Developers Licensed under the 3-clause BSD License. <https://numpy.org/doc/1.23/reference/generated/numpy.matlib.repmat.htmlnumpy.matlib.rand ================= matlib.rand(**args*)[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/matlib.py#L231-L275) Return a matrix of random values with given shape. Create a matrix of the given shape and propagate it with random samples from a uniform distribution over `[0, 1)`. Parameters ***args**Arguments Shape of the output. If given as N integers, each integer specifies the size of one dimension. If given as a tuple, this tuple gives the complete shape. Returns **out**ndarray The matrix of random values with shape given by `*args`. See also [`randn`](numpy.matlib.randn#numpy.matlib.randn "numpy.matlib.randn"), [`numpy.random.RandomState.rand`](../random/generated/numpy.random.randomstate.rand#numpy.random.RandomState.rand "numpy.random.RandomState.rand") #### Examples ``` >>> np.random.seed(123) >>> import numpy.matlib >>> np.matlib.rand(2, 3) matrix([[0.69646919, 0.28613933, 0.22685145], [0.55131477, 0.71946897, 0.42310646]]) >>> np.matlib.rand((2, 3)) matrix([[0.9807642 , 0.68482974, 0.4809319 ], [0.39211752, 0.34317802, 0.72904971]]) ``` If the first argument is a tuple, other arguments are ignored: ``` >>> np.matlib.rand((2, 3), 4) matrix([[0.43857224, 0.0596779 , 0.39804426], [0.73799541, 0.18249173, 0.17545176]]) ``` © 2005–2022 NumPy Developers Licensed under the 3-clause BSD License. 
<https://numpy.org/doc/1.23/reference/generated/numpy.matlib.rand.htmlnumpy.matlib.randn ================== matlib.randn(**args*)[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/matlib.py#L277-L326) Return a random matrix with data from the “standard normal” distribution. [`randn`](#numpy.matlib.randn "numpy.matlib.randn") generates a matrix filled with random floats sampled from a univariate “normal” (Gaussian) distribution of mean 0 and variance 1. Parameters ***args**Arguments Shape of the output. If given as N integers, each integer specifies the size of one dimension. If given as a tuple, this tuple gives the complete shape. Returns **Z**matrix of floats A matrix of floating-point samples drawn from the standard normal distribution. See also [`rand`](numpy.matlib.rand#numpy.matlib.rand "numpy.matlib.rand"), [`numpy.random.RandomState.randn`](../random/generated/numpy.random.randomstate.randn#numpy.random.RandomState.randn "numpy.random.RandomState.randn") #### Notes For random samples from \(N(\mu, \sigma^2)\), use: `sigma * np.matlib.randn(...) + mu` #### Examples ``` >>> np.random.seed(123) >>> import numpy.matlib >>> np.matlib.randn(1) matrix([[-1.0856306]]) >>> np.matlib.randn(1, 2, 3) matrix([[ 0.99734545, 0.2829785 , -1.50629471], [-0.57860025, 1.65143654, -2.42667924]]) ``` Two-by-four matrix of samples from \(N(3, 6.25)\): ``` >>> 2.5 * np.matlib.randn((2, 4)) + 3 matrix([[1.92771843, 6.16484065, 0.83314899, 1.30278462], [2.76322758, 6.72847407, 1.40274501, 1.8900451 ]]) ``` © 2005–2022 NumPy Developers Licensed under the 3-clause BSD License. <https://numpy.org/doc/1.23/reference/generated/numpy.matlib.randn.htmlnumpy.pad ========= numpy.pad(*array*, *pad_width*, *mode='constant'*, ***kwargs*)[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/lib/arraypad.py#L529-L876) Pad an array. Parameters **array**array_like of rank N The array to pad. 
**pad_width**{sequence, array_like, int} Number of values padded to the edges of each axis. `((before_1, after_1), ... (before_N, after_N))` unique pad widths for each axis. `((before, after),)` yields same before and after pad for each axis. `(pad,)` or int is a shortcut for `before = after = pad` width for all axes.

**mode**str or function, optional One of the following string values or a user-supplied function.

‘constant’ (default) Pads with a constant value.

‘edge’ Pads with the edge values of array.

‘linear_ramp’ Pads with the linear ramp between end_value and the array edge value.

‘maximum’ Pads with the maximum value of all or part of the vector along each axis.

‘mean’ Pads with the mean value of all or part of the vector along each axis.

‘median’ Pads with the median value of all or part of the vector along each axis.

‘minimum’ Pads with the minimum value of all or part of the vector along each axis.

‘reflect’ Pads with the reflection of the vector mirrored on the first and last values of the vector along each axis.

‘symmetric’ Pads with the reflection of the vector mirrored along the edge of the array.

‘wrap’ Pads with the wrap of the vector along the axis. The first values are used to pad the end and the end values are used to pad the beginning.

‘empty’ Pads with undefined values. New in version 1.17.

&lt;function&gt; Padding function, see Notes.

**stat_length**sequence or int, optional Used in ‘maximum’, ‘mean’, ‘median’, and ‘minimum’. Number of values at edge of each axis used to calculate the statistic value. `((before_1, after_1), ... (before_N, after_N))` unique statistic lengths for each axis. `((before, after),)` yields same before and after statistic lengths for each axis. `(stat_length,)` or int is a shortcut for `before = after = statistic length` for all axes. Default is `None`, to use the entire axis.

**constant_values**sequence or scalar, optional Used in ‘constant’. The values to set the padded values for each axis. `((before_1, after_1), ... (before_N, after_N))` unique pad constants for each axis. `((before, after),)` yields same before and after constants for each axis. `(constant,)` or `constant` is a shortcut for `before = after = constant` for all axes. Default is 0.

**end_values**sequence or scalar, optional Used in ‘linear_ramp’. The values used for the ending value of the linear_ramp and that will form the edge of the padded array. `((before_1, after_1), ... (before_N, after_N))` unique end values for each axis. `((before, after),)` yields same before and after end values for each axis. `(constant,)` or `constant` is a shortcut for `before = after = constant` for all axes. Default is 0.

**reflect_type**{‘even’, ‘odd’}, optional Used in ‘reflect’, and ‘symmetric’. The ‘even’ style is the default with an unaltered reflection around the edge value. For the ‘odd’ style, the extended part of the array is created by subtracting the reflected values from two times the edge value.

Returns

**pad**ndarray Padded array of rank equal to [`array`](numpy.array#numpy.array "numpy.array") with shape increased according to `pad_width`.

#### Notes

New in version 1.7.0.

For an array with rank greater than 1, some of the padding of later axes is calculated from padding of previous axes. This is easiest to think about with a rank 2 array where the corners of the padded array are calculated by using padded values from the first axis.

The padding function, if used, should modify a rank 1 array in-place.
It has the following signature:

```
padding_func(vector, iaxis_pad_width, iaxis, kwargs)
```

where

**vector**ndarray A rank 1 array already padded with zeros. Padded values are vector[:iaxis_pad_width[0]] and vector[-iaxis_pad_width[1]:].

**iaxis_pad_width**tuple A 2-tuple of ints, iaxis_pad_width[0] represents the number of values padded at the beginning of vector where iaxis_pad_width[1] represents the number of values padded at the end of vector.

**iaxis**int The axis currently being calculated.

**kwargs**dict Any keyword arguments the function requires.

#### Examples

```
>>> a = [1, 2, 3, 4, 5]
>>> np.pad(a, (2, 3), 'constant', constant_values=(4, 6))
array([4, 4, 1, ..., 6, 6, 6])
```

```
>>> np.pad(a, (2, 3), 'edge')
array([1, 1, 1, ..., 5, 5, 5])
```

```
>>> np.pad(a, (2, 3), 'linear_ramp', end_values=(5, -4))
array([ 5,  3,  1,  2,  3,  4,  5,  2, -1, -4])
```

```
>>> np.pad(a, (2,), 'maximum')
array([5, 5, 1, 2, 3, 4, 5, 5, 5])
```

```
>>> np.pad(a, (2,), 'mean')
array([3, 3, 1, 2, 3, 4, 5, 3, 3])
```

```
>>> np.pad(a, (2,), 'median')
array([3, 3, 1, 2, 3, 4, 5, 3, 3])
```

```
>>> a = [[1, 2], [3, 4]]
>>> np.pad(a, ((3, 2), (2, 3)), 'minimum')
array([[1, 1, 1, 2, 1, 1, 1],
       [1, 1, 1, 2, 1, 1, 1],
       [1, 1, 1, 2, 1, 1, 1],
       [1, 1, 1, 2, 1, 1, 1],
       [3, 3, 3, 4, 3, 3, 3],
       [1, 1, 1, 2, 1, 1, 1],
       [1, 1, 1, 2, 1, 1, 1]])
```

```
>>> a = [1, 2, 3, 4, 5]
>>> np.pad(a, (2, 3), 'reflect')
array([3, 2, 1, 2, 3, 4, 5, 4, 3, 2])
```

```
>>> np.pad(a, (2, 3), 'reflect', reflect_type='odd')
array([-1,  0,  1,  2,  3,  4,  5,  6,  7,  8])
```

```
>>> np.pad(a, (2, 3), 'symmetric')
array([2, 1, 1, 2, 3, 4, 5, 5, 4, 3])
```

```
>>> np.pad(a, (2, 3), 'symmetric', reflect_type='odd')
array([0, 1, 1, 2, 3, 4, 5, 5, 6, 7])
```

```
>>> np.pad(a, (2, 3), 'wrap')
array([4, 5, 1, 2, 3, 4, 5, 1, 2, 3])
```

```
>>> def pad_with(vector, pad_width, iaxis, kwargs):
...     pad_value = kwargs.get('padder', 10)
...     vector[:pad_width[0]] = pad_value
...     vector[-pad_width[1]:] = pad_value
>>> a = np.arange(6)
>>> a = a.reshape((2, 3))
>>> np.pad(a, 2, pad_with)
array([[10, 10, 10, 10, 10, 10, 10],
       [10, 10, 10, 10, 10, 10, 10],
       [10, 10,  0,  1,  2, 10, 10],
       [10, 10,  3,  4,  5, 10, 10],
       [10, 10, 10, 10, 10, 10, 10],
       [10, 10, 10, 10, 10, 10, 10]])
>>> np.pad(a, 2, pad_with, padder=100)
array([[100, 100, 100, 100, 100, 100, 100],
       [100, 100, 100, 100, 100, 100, 100],
       [100, 100,   0,   1,   2, 100, 100],
       [100, 100,   3,   4,   5, 100, 100],
       [100, 100, 100, 100, 100, 100, 100],
       [100, 100, 100, 100, 100, 100, 100]])
```

<https://numpy.org/doc/1.23/reference/generated/numpy.pad.html>

numpy.lib.arraysetops
=====================

Set operations for arrays based on sorting.

Notes
-----

For floating point arrays, inaccurate results may appear due to usual round-off and floating point comparison issues.

Speed could be gained in some operations by an implementation of [`numpy.sort`](numpy.sort#numpy.sort "numpy.sort"), that can provide directly the permutation vectors, thus avoiding calls to [`numpy.argsort`](numpy.argsort#numpy.argsort "numpy.argsort").

Original author: <NAME>

<https://numpy.org/doc/1.23/reference/generated/numpy.lib.arraysetops.html>

numpy.ndarray.view
==================

method

ndarray.view(*[dtype][, type]*)

New view of array with the same data.

Note

Passing None for `dtype` is different from omitting the parameter, since the former invokes `dtype(None)` which is an alias for `dtype('float_')`.

Parameters

**dtype**data-type or ndarray sub-class, optional Data-type descriptor of the returned view, e.g., float32 or int16. Omitting it results in the view having the same data-type as `a`. This argument can also be specified as an ndarray sub-class, which then specifies the type of the returned object (this is equivalent to setting the `type` parameter).
**type**Python type, optional Type of the returned view, e.g., ndarray or matrix. Again, omission of the parameter results in type preservation. #### Notes `a.view()` is used two different ways: `a.view(some_dtype)` or `a.view(dtype=some_dtype)` constructs a view of the array’s memory with a different data-type. This can cause a reinterpretation of the bytes of memory. `a.view(ndarray_subclass)` or `a.view(type=ndarray_subclass)` just returns an instance of `ndarray_subclass` that looks at the same array (same shape, dtype, etc.) This does not cause a reinterpretation of the memory. For `a.view(some_dtype)`, if `some_dtype` has a different number of bytes per entry than the previous dtype (for example, converting a regular array to a structured array), then the last axis of `a` must be contiguous. This axis will be resized in the result. Changed in version 1.23.0: Only the last axis needs to be contiguous. Previously, the entire array had to be C-contiguous. #### Examples ``` >>> x = np.array([(1, 2)], dtype=[('a', np.int8), ('b', np.int8)]) ``` Viewing array data using a different type and dtype: ``` >>> y = x.view(dtype=np.int16, type=np.matrix) >>> y matrix([[513]], dtype=int16) >>> print(type(y)) <class 'numpy.matrix'``` Creating a view on a structured array so it can be used in calculations ``` >>> x = np.array([(1, 2),(3,4)], dtype=[('a', np.int8), ('b', np.int8)]) >>> xv = x.view(dtype=np.int8).reshape(-1,2) >>> xv array([[1, 2], [3, 4]], dtype=int8) >>> xv.mean(0) array([2., 3.]) ``` Making changes to the view changes the underlying array ``` >>> xv[0,1] = 20 >>> x array([(1, 20), (3, 4)], dtype=[('a', 'i1'), ('b', 'i1')]) ``` Using a view to convert an array to a recarray: ``` >>> z = x.view(np.recarray) >>> z.a array([1, 3], dtype=int8) ``` Views share data: ``` >>> x[0] = (9, 10) >>> z[0] (9, 10) ``` Views that change the dtype size (bytes per entry) should normally be avoided on arrays defined by slices, transposes, fortran-ordering, etc.: ``` >>> x = 
np.array([[1, 2, 3], [4, 5, 6]], dtype=np.int16) >>> y = x[:, ::2] >>> y array([[1, 3], [4, 6]], dtype=int16) >>> y.view(dtype=[('width', np.int16), ('length', np.int16)]) Traceback (most recent call last): ... ValueError: To change to a dtype of a different size, the last axis must be contiguous >>> z = y.copy() >>> z.view(dtype=[('width', np.int16), ('length', np.int16)]) array([[(1, 3)], [(4, 6)]], dtype=[('width', '<i2'), ('length', '<i2')]) ``` However, views that change dtype are totally fine for arrays with a contiguous last axis, even if the rest of the axes are not C-contiguous: ``` >>> x = np.arange(2 * 3 * 4, dtype=np.int8).reshape(2, 3, 4) >>> x.transpose(1, 0, 2).view(np.int16) array([[[ 256, 770], [3340, 3854]], [[1284, 1798], [4368, 4882]], [[2312, 2826], [5396, 5910]]], dtype=int16) ``` © 2005–2022 NumPy Developers Licensed under the 3-clause BSD License. <https://numpy.org/doc/1.23/reference/generated/numpy.ndarray.view.htmlnumpy.format_parser ==================== *class*numpy.format_parser(*formats*, *names*, *titles*, *aligned=False*, *byteorder=None*)[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/__init__.py) Class to convert formats, names, titles description to a dtype. After constructing the format_parser object, the dtype attribute is the converted data-type: `dtype = format_parser(formats, names, titles).dtype` Parameters **formats**str or list of str The format description, either specified as a string with comma-separated format descriptions in the form `'f8, i4, a5'`, or a list of format description strings in the form `['f8', 'i4', 'a5']`. **names**str or list/tuple of str The field names, either specified as a comma-separated string in the form `'col1, col2, col3'`, or as a list or tuple of strings in the form `['col1', 'col2', 'col3']`. An empty list can be used, in that case default field names (‘f0’, ‘f1’, 
) are used. **titles**sequence Sequence of title strings. An empty list can be used to leave titles out. **aligned**bool, optional If True, align the fields by padding as the C-compiler would. Default is False. **byteorder**str, optional If specified, all the fields will be changed to the provided byte-order. Otherwise, the default byte-order is used. For all available string specifiers, see [`dtype.newbyteorder`](numpy.dtype.newbyteorder#numpy.dtype.newbyteorder "numpy.dtype.newbyteorder"). See also [`dtype`](numpy.dtype#numpy.dtype "numpy.dtype"), [`typename`](numpy.typename#numpy.typename "numpy.typename"), [`sctype2char`](numpy.sctype2char#numpy.sctype2char "numpy.sctype2char") #### Examples ``` >>> np.format_parser(['<f8', '<i4', '<a5'], ['col1', 'col2', 'col3'], ... ['T1', 'T2', 'T3']).dtype dtype([(('T1', 'col1'), '<f8'), (('T2', 'col2'), '<i4'), (('T3', 'col3'), 'S5')]) ``` `names` and/or `titles` can be empty lists. If `titles` is an empty list, titles will simply not appear. If `names` is empty, default field names will be used. ``` >>> np.format_parser(['f8', 'i4', 'a5'], ['col1', 'col2', 'col3'], ... []).dtype dtype([('col1', '<f8'), ('col2', '<i4'), ('col3', '<S5')]) >>> np.format_parser(['<f8', '<i4', '<a5'], [], []).dtype dtype([('f0', '<f8'), ('f1', '<i4'), ('f2', 'S5')]) ``` Attributes **dtype**dtype The converted data-type. © 2005–2022 NumPy Developers Licensed under the 3-clause BSD License. <https://numpy.org/doc/1.23/reference/generated/numpy.format_parser.htmlnumpy.kaiser ============ numpy.kaiser(*M*, *beta*)[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/lib/function_base.py#L3428-L3553) Return the Kaiser window. The Kaiser window is a taper formed by using a Bessel function. Parameters **M**int Number of points in the output window. If zero or less, an empty array is returned. **beta**float Shape parameter for window. 
Returns

**out**array The window, with the maximum value normalized to one (the value one appears only if the number of samples is odd).

See also

[`bartlett`](numpy.bartlett#numpy.bartlett "numpy.bartlett"), [`blackman`](numpy.blackman#numpy.blackman "numpy.blackman"), [`hamming`](numpy.hamming#numpy.hamming "numpy.hamming"), [`hanning`](numpy.hanning#numpy.hanning "numpy.hanning")

#### Notes

The Kaiser window is defined as

\[w(n) = I_0\left( \beta \sqrt{1-\frac{4n^2}{(M-1)^2}} \right)/I_0(\beta)\]

with

\[\quad -\frac{M-1}{2} \leq n \leq \frac{M-1}{2},\]

where \(I_0\) is the modified zeroth-order Bessel function.

The Kaiser was named for <NAME>, who discovered a simple approximation to the DPSS window based on Bessel functions. The Kaiser window is a very good approximation to the Digital Prolate Spheroidal Sequence, or Slepian window, which is the transform which maximizes the energy in the main lobe of the window relative to total energy.

The Kaiser can approximate many other windows by varying the beta parameter.

| beta | Window shape |
| --- | --- |
| 0 | Rectangular |
| 5 | Similar to a Hamming |
| 6 | Similar to a Hanning |
| 8.6 | Similar to a Blackman |

A beta value of 14 is probably a good starting point. Note that as beta gets large, the window narrows, and so the number of samples needs to be large enough to sample the increasingly narrow spike, otherwise NaNs will get returned.

Most references to the Kaiser window come from the signal processing literature, where it is used as one of many windowing functions for smoothing values. It is also known as an apodization (which means “removing the foot”, i.e. smoothing discontinuities at the beginning and end of the sampled signal) or tapering function.

#### References

1. <NAME>, “Digital Filters” - Ch 7 in “Systems analysis by digital computer”, Editors: <NAME> and <NAME>, p 218-285. <NAME> and Sons, New York, (1966).
2. <NAME>, “Time Sequence Analysis in Geophysics”, The University of Alberta Press, 1975, pp. 177-178.
3. Wikipedia, “Window function”, <https://en.wikipedia.org/wiki/Window_function>

#### Examples

```
>>> import matplotlib.pyplot as plt
>>> np.kaiser(12, 14)
array([7.72686684e-06, 3.46009194e-03, 4.65200189e-02, # may vary
       2.29737120e-01, 5.99885316e-01, 9.45674898e-01,
       9.45674898e-01, 5.99885316e-01, 2.29737120e-01,
       4.65200189e-02, 3.46009194e-03, 7.72686684e-06])
```

Plot the window and the frequency response:

```
>>> from numpy.fft import fft, fftshift
>>> window = np.kaiser(51, 14)
>>> plt.plot(window)
[<matplotlib.lines.Line2D object at 0x...>]
>>> plt.title("Kaiser window")
Text(0.5, 1.0, 'Kaiser window')
>>> plt.ylabel("Amplitude")
Text(0, 0.5, 'Amplitude')
>>> plt.xlabel("Sample")
Text(0.5, 0, 'Sample')
>>> plt.show()
```

[figure: Kaiser window]

```
>>> plt.figure()
<Figure size 640x480 with 0 Axes>
>>> A = fft(window, 2048) / 25.5
>>> mag = np.abs(fftshift(A))
>>> freq = np.linspace(-0.5, 0.5, len(A))
>>> response = 20 * np.log10(mag)
>>> response = np.clip(response, -100, 100)
>>> plt.plot(freq, response)
[<matplotlib.lines.Line2D object at 0x...>]
>>> plt.title("Frequency response of Kaiser window")
Text(0.5, 1.0, 'Frequency response of Kaiser window')
>>> plt.ylabel("Magnitude [dB]")
Text(0, 0.5, 'Magnitude [dB]')
>>> plt.xlabel("Normalized frequency [cycles per sample]")
Text(0.5, 0, 'Normalized frequency [cycles per sample]')
>>> plt.axis('tight')
(-0.5, 0.5, -100.0, ...) # may vary
>>> plt.show()
```

[figure: frequency response of the Kaiser window]

<https://numpy.org/doc/1.23/reference/generated/numpy.kaiser.html>

distutils.misc_util
===================

numpy.distutils.misc_util.all_strings(*lst*)[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/distutils/misc_util.py#L490-L495)

Return True if all items in lst are string objects.
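The behaviour of `all_strings` can be sketched in a few lines of plain Python (a hypothetical re-implementation for illustration, not the distutils source):

```python
def all_strings(lst):
    # True when every item in lst is a str (vacuously True for an empty list).
    return all(isinstance(item, str) for item in lst)

print(all_strings(["f8", "i4", "a5"]))  # True
print(all_strings(["f8", 4]))           # False
```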
numpy.distutils.misc_util.allpath(*name*)[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/distutils/misc_util.py#L129-L132) Convert a /-separated pathname to one using the OS’s path separator. numpy.distutils.misc_util.appendpath(*prefix*, *path*)[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/distutils/misc_util.py#L2307-L2326) numpy.distutils.misc_util.as_list(*seq*)[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/distutils/misc_util.py#L509-L513) numpy.distutils.misc_util.blue_text(*s*)[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/distutils/misc_util.py#L381-L382) numpy.distutils.misc_util.cyan_text(*s*)[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/distutils/misc_util.py#L379-L380) numpy.distutils.misc_util.cyg2win32(*path:[str](https://docs.python.org/3/library/stdtypes.html#str "(in Python v3.10)")*) → [str](https://docs.python.org/3/library/stdtypes.html#str "(in Python v3.10)")[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/distutils/misc_util.py#L386-L420) Convert a path from Cygwin-native to Windows-native. Uses the cygpath utility (part of the Base install) to do the actual conversion. Falls back to returning the original path if this fails. 
Handles the default `/cygdrive` mount prefix as well as the `/proc/cygdrive` portable prefix, custom cygdrive prefixes such as `/` or `/mnt`, and absolute paths such as `/usr/src/` or `/home/username` Parameters **path**str The path to convert Returns **converted_path**str The converted path #### Notes Documentation for cygpath utility: <https://cygwin.com/cygwin-ug-net/cygpath.html> Documentation for the C function it wraps: <https://cygwin.com/cygwin-api/func-cygwin-conv-path.html numpy.distutils.misc_util.default_config_dict(*name=None*, *parent_name=None*, *local_path=None*)[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/distutils/misc_util.py#L2282-L2293) Return a configuration dictionary for usage in configuration() function defined in file setup_<name>.py. numpy.distutils.misc_util.dict_append(*d*, ***kws*)[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/distutils/misc_util.py#L2296-L2305) numpy.distutils.misc_util.dot_join(**args*)[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/distutils/misc_util.py#L742-L743) numpy.distutils.misc_util.exec_mod_from_location(*modname*, *modfile*)[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/distutils/misc_util.py#L2484-L2493) Use importlib machinery to import a module `modname` from the file `modfile`. Depending on the `spec.loader`, the module may not be registered in sys.modules. numpy.distutils.misc_util.filter_sources(*sources*)[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/distutils/misc_util.py#L542-L562) Return four lists of filenames containing C, C++, Fortran, and Fortran 90 module sources, respectively. numpy.distutils.misc_util.generate_config_py(*target*)[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/distutils/misc_util.py#L2328-L2457) Generate config.py file containing system_info information used during building the package. 
Usage: config[‘py_modules’].append((packagename, ‘__config__’,generate_config_py)) numpy.distutils.misc_util.get_build_architecture()[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/distutils/misc_util.py#L2467-L2471) numpy.distutils.misc_util.get_cmd(*cmdname*, *_cache={}*)[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/distutils/misc_util.py#L2131-L2141) numpy.distutils.misc_util.get_data_files(*data*)[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/distutils/misc_util.py#L723-L740) numpy.distutils.misc_util.get_dependencies(*sources*)[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/distutils/misc_util.py#L594-L596) numpy.distutils.misc_util.get_ext_source_files(*ext*)[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/distutils/misc_util.py#L649-L660) numpy.distutils.misc_util.get_frame(*level=0*)[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/distutils/misc_util.py#L745-L754) Return frame object from call stack with given level. numpy.distutils.misc_util.get_info(*pkgname*, *dirs=None*)[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/distutils/misc_util.py#L2210-L2268) Return an info dict for a given C library. The info dict contains the necessary options to use the C library. Parameters **pkgname**str Name of the package (should match the name of the .ini file, without the extension, e.g. foo for the file foo.ini). **dirs**sequence, optional If given, should be a sequence of additional directories where to look for npy-pkg-config files. Those directories are searched prior to the NumPy directory. Returns **info**dict The dictionary with build information. Raises PkgNotFound If the package is not found. 
See also [`Configuration.add_npy_pkg_config`](../distutils#numpy.distutils.misc_util.Configuration.add_npy_pkg_config "numpy.distutils.misc_util.Configuration.add_npy_pkg_config"), [`Configuration.add_installed_library`](../distutils#numpy.distutils.misc_util.Configuration.add_installed_library "numpy.distutils.misc_util.Configuration.add_installed_library") [`get_pkg_info`](#numpy.distutils.misc_util.get_pkg_info "numpy.distutils.misc_util.get_pkg_info") #### Examples To get the necessary information for the npymath library from NumPy: ``` >>> npymath_info = np.distutils.misc_util.get_info('npymath') >>> npymath_info {'define_macros': [], 'libraries': ['npymath'], 'library_dirs': ['.../numpy/core/lib'], 'include_dirs': ['.../numpy/core/include']} ``` This info dict can then be used as input to a [`Configuration`](../distutils#numpy.distutils.misc_util.Configuration "numpy.distutils.misc_util.Configuration") instance: ``` config.add_extension('foo', sources=['foo.c'], extra_info=npymath_info) ``` numpy.distutils.misc_util.get_language(*sources*)[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/distutils/misc_util.py#L515-L526) Determine language value (c,f77,f90) from sources numpy.distutils.misc_util.get_lib_source_files(*lib*)[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/distutils/misc_util.py#L666-L678) numpy.distutils.misc_util.get_mathlibs(*path=None*)[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/distutils/misc_util.py#L205-L230) Return the MATHLIB line from numpyconfig.h numpy.distutils.misc_util.get_num_build_jobs()[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/distutils/misc_util.py#L76-L110) Get number of parallel build jobs set by the –parallel command line argument of setup.py If the command did not receive a setting the environment variable NPY_NUM_BUILD_JOBS is checked. 
If that is unset, return the number of processors on the system, with a maximum of 8 (to prevent overloading the system if there a lot of CPUs). Returns **out**int number of parallel jobs that can be run numpy.distutils.misc_util.get_numpy_include_dirs()[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/distutils/misc_util.py#L2143-L2150) numpy.distutils.misc_util.get_pkg_info(*pkgname*, *dirs=None*)[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/distutils/misc_util.py#L2172-L2208) Return library info for the given package. Parameters **pkgname**str Name of the package (should match the name of the .ini file, without the extension, e.g. foo for the file foo.ini). **dirs**sequence, optional If given, should be a sequence of additional directories where to look for npy-pkg-config files. Those directories are searched prior to the NumPy directory. Returns **pkginfo**class instance The `LibraryInfo` instance containing the build information. Raises PkgNotFound If the package is not found. See also [`Configuration.add_npy_pkg_config`](../distutils#numpy.distutils.misc_util.Configuration.add_npy_pkg_config "numpy.distutils.misc_util.Configuration.add_npy_pkg_config"), [`Configuration.add_installed_library`](../distutils#numpy.distutils.misc_util.Configuration.add_installed_library "numpy.distutils.misc_util.Configuration.add_installed_library") [`get_info`](#numpy.distutils.misc_util.get_info "numpy.distutils.misc_util.get_info") numpy.distutils.misc_util.get_script_files(*scripts*)[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/distutils/misc_util.py#L662-L664) numpy.distutils.misc_util.gpaths(*paths*, *local_path=''*, *include_non_existing=True*)[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/distutils/misc_util.py#L303-L308) Apply glob to paths and prepend local_path if needed. 
numpy.distutils.misc_util.green_text(*s*)[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/distutils/misc_util.py#L375-L376) numpy.distutils.misc_util.has_cxx_sources(*sources*)[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/distutils/misc_util.py#L535-L540) Return True if sources contains C++ files numpy.distutils.misc_util.has_f_sources(*sources*)[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/distutils/misc_util.py#L528-L533) Return True if sources contains Fortran files numpy.distutils.misc_util.is_local_src_dir(*directory*)[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/distutils/misc_util.py#L598-L611) Return true if directory is local directory. numpy.distutils.misc_util.is_sequence(*seq*)[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/distutils/misc_util.py#L497-L504) numpy.distutils.misc_util.is_string(*s*)[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/distutils/misc_util.py#L487-L488) numpy.distutils.misc_util.mingw32()[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/distutils/misc_util.py#L423-L431) Return true when using mingw32 environment. numpy.distutils.misc_util.minrelpath(*path*)[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/distutils/misc_util.py#L232-L259) Resolve and ‘.’ from path. numpy.distutils.misc_util.njoin(**path*)[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/distutils/misc_util.py#L178-L203) Join two or more pathname components + - convert a /-separated pathname to one using the OS’s path separator. - resolve and from path. Either passing n arguments as in njoin(‘a’,’b’), or a sequence of n names as in njoin([‘a’,’b’]) is handled, or a mixture of such arguments. 
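`njoin`'s handling of the two argument styles described above can be sketched as follows (a simplified, hypothetical stand-in built on `os.path`, not the distutils source):

```python
import os

def njoin_sketch(*path):
    # njoin('a', 'b') and njoin(['a', 'b']) are both accepted:
    # a single sequence argument supplies the components directly.
    if len(path) == 1 and isinstance(path[0], (list, tuple)):
        parts = list(path[0])
    else:
        parts = list(path)
    # Convert '/'-separated components to the OS separator, then join
    # and normalize away '..' and '.' segments.
    parts = [p.replace("/", os.sep) for p in parts]
    return os.path.normpath(os.path.join(*parts))

print(njoin_sketch("a", "b") == njoin_sketch(["a", "b"]))  # True
```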
numpy.distutils.misc_util.red_text(*s*)[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/distutils/misc_util.py#L373-L374) numpy.distutils.misc_util.sanitize_cxx_flags(*cxxflags*)[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/distutils/misc_util.py#L2477-L2481) Some flags are valid for C but not C++. Prune them. numpy.distutils.misc_util.terminal_has_colors()[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/distutils/misc_util.py#L323-L348) numpy.distutils.misc_util.yellow_text(*s*)[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/distutils/misc_util.py#L377-L378) © 2005–2022 NumPy Developers Licensed under the 3-clause BSD License. <https://numpy.org/doc/1.23/reference/distutils/misc_util.htmlnumpy.distutils.ccompiler ========================= #### Functions | | | | --- | --- | | [`CCompiler_compile`](numpy.distutils.ccompiler.ccompiler_compile#numpy.distutils.ccompiler.CCompiler_compile "numpy.distutils.ccompiler.CCompiler_compile")(self, sources[, ...]) | Compile one or more source files. | | [`CCompiler_customize`](numpy.distutils.ccompiler.ccompiler_customize#numpy.distutils.ccompiler.CCompiler_customize "numpy.distutils.ccompiler.CCompiler_customize")(self, dist[, need_cxx]) | Do any platform-specific customization of a compiler instance. | | [`CCompiler_customize_cmd`](numpy.distutils.ccompiler.ccompiler_customize_cmd#numpy.distutils.ccompiler.CCompiler_customize_cmd "numpy.distutils.ccompiler.CCompiler_customize_cmd")(self, cmd[, ignore]) | Customize compiler using distutils command. | | [`CCompiler_cxx_compiler`](numpy.distutils.ccompiler.ccompiler_cxx_compiler#numpy.distutils.ccompiler.CCompiler_cxx_compiler "numpy.distutils.ccompiler.CCompiler_cxx_compiler")(self) | Return the C++ compiler. 
| | [`CCompiler_find_executables`](numpy.distutils.ccompiler.ccompiler_find_executables#numpy.distutils.ccompiler.CCompiler_find_executables "numpy.distutils.ccompiler.CCompiler_find_executables")(self) | Does nothing here, but is called by the get_version method and can be overridden by subclasses. | | [`CCompiler_get_version`](numpy.distutils.ccompiler.ccompiler_get_version#numpy.distutils.ccompiler.CCompiler_get_version "numpy.distutils.ccompiler.CCompiler_get_version")(self[, force, ok_status]) | Return compiler version, or None if compiler is not available. | | [`CCompiler_object_filenames`](numpy.distutils.ccompiler.ccompiler_object_filenames#numpy.distutils.ccompiler.CCompiler_object_filenames "numpy.distutils.ccompiler.CCompiler_object_filenames")(self, ...[, ...]) | Return the name of the object files for the given source files. | | [`CCompiler_show_customization`](numpy.distutils.ccompiler.ccompiler_show_customization#numpy.distutils.ccompiler.CCompiler_show_customization "numpy.distutils.ccompiler.CCompiler_show_customization")(self) | Print the compiler customizations to stdout. | | [`CCompiler_spawn`](numpy.distutils.ccompiler.ccompiler_spawn#numpy.distutils.ccompiler.CCompiler_spawn "numpy.distutils.ccompiler.CCompiler_spawn")(self, cmd[, display, env]) | Execute a command in a sub-process. | | [`gen_lib_options`](numpy.distutils.ccompiler.gen_lib_options#numpy.distutils.ccompiler.gen_lib_options "numpy.distutils.ccompiler.gen_lib_options")(compiler, library_dirs, ...) 
| | | [`new_compiler`](numpy.distutils.ccompiler.new_compiler#numpy.distutils.ccompiler.new_compiler "numpy.distutils.ccompiler.new_compiler")([plat, compiler, verbose, ...]) | | | [`replace_method`](numpy.distutils.ccompiler.replace_method#numpy.distutils.ccompiler.replace_method "numpy.distutils.ccompiler.replace_method")(klass, method_name, func) | | | [`simple_version_match`](numpy.distutils.ccompiler.simple_version_match#numpy.distutils.ccompiler.simple_version_match "numpy.distutils.ccompiler.simple_version_match")([pat, ignore, start]) | Simple matching of version numbers, for use in CCompiler and FCompiler. | © 2005–2022 NumPy Developers Licensed under the 3-clause BSD License. <https://numpy.org/doc/1.23/reference/generated/numpy.distutils.ccompiler.htmlnumpy.distutils.ccompiler_opt ============================== Provides the [`CCompilerOpt`](numpy.distutils.ccompiler_opt.ccompileropt#numpy.distutils.ccompiler_opt.CCompilerOpt "numpy.distutils.ccompiler_opt.CCompilerOpt") class, used for handling the CPU/hardware optimization, starting from parsing the command arguments, to managing the relation between the CPU baseline and dispatch-able features, also generating the required C headers and ending with compiling the sources with proper compiler’s flags. [`CCompilerOpt`](numpy.distutils.ccompiler_opt.ccompileropt#numpy.distutils.ccompiler_opt.CCompilerOpt "numpy.distutils.ccompiler_opt.CCompilerOpt") doesn’t provide runtime detection for the CPU features, instead only focuses on the compiler side, but it creates abstract C headers that can be used later for the final runtime dispatching process. #### Functions | | | | --- | --- | | [`new_ccompiler_opt`](numpy.distutils.ccompiler_opt.new_ccompiler_opt#numpy.distutils.ccompiler_opt.new_ccompiler_opt "numpy.distutils.ccompiler_opt.new_ccompiler_opt")(compiler, dispatch_hpath, ...) 
| Create a new instance of 'CCompilerOpt' and generate the dispatch header which contains the #definitions and headers of platform-specific instruction-sets for the enabled CPU baseline and dispatch-able features. | #### Classes | | | | --- | --- | | [`CCompilerOpt`](numpy.distutils.ccompiler_opt.ccompileropt#numpy.distutils.ccompiler_opt.CCompilerOpt "numpy.distutils.ccompiler_opt.CCompilerOpt")(ccompiler[, cpu_baseline, ...]) | A helper class for `CCompiler` that aims to provide extra build options to effectively control the compiler optimizations that are directly related to CPU features. | © 2005–2022 NumPy Developers Licensed under the 3-clause BSD License. <https://numpy.org/doc/1.23/reference/generated/numpy.distutils.ccompiler_opt.html> numpy.distutils.cpuinfo.cpu =========================== numpy.distutils.cpuinfo.cpu = <numpy.distutils.cpuinfo.LinuxCPUInfo object> © 2005–2022 NumPy Developers Licensed under the 3-clause BSD License. <https://numpy.org/doc/1.23/reference/generated/numpy.distutils.cpuinfo.cpu.html> numpy.distutils.core.Extension ============================== *class*numpy.distutils.core.Extension(*name*, *sources*, *include_dirs=None*, *define_macros=None*, *undef_macros=None*, *library_dirs=None*, *libraries=None*, *runtime_library_dirs=None*, *extra_objects=None*, *extra_compile_args=None*, *extra_link_args=None*, *export_symbols=None*, *swig_opts=None*, *depends=None*, *language=None*, *f2py_options=None*, *module_dirs=None*, *extra_c_compile_args=None*, *extra_cxx_compile_args=None*, *extra_f77_compile_args=None*, *extra_f90_compile_args=None*)[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/distutils/extension.py#L17-L105) Parameters **name**str Extension name. **sources**list of str List of source file locations relative to the top directory of the package. **extra_compile_args**list of str Extra command line arguments to pass to the compiler. 
**extra_f77_compile_args**list of str Extra command line arguments to pass to the fortran77 compiler. **extra_f90_compile_args**list of str Extra command line arguments to pass to the fortran90 compiler. #### Methods | | | | --- | --- | | **has_cxx_sources** | | | **has_f2py_sources** | | © 2005–2022 NumPy Developers Licensed under the 3-clause BSD License. <https://numpy.org/doc/1.23/reference/generated/numpy.distutils.core.Extension.htmlnumpy.distutils.exec_command ============================= exec_command Implements exec_command function that is (almost) equivalent to commands.getstatusoutput function but on NT, DOS systems the returned status is actually correct (though, the returned status values may be different by a factor). In addition, exec_command takes keyword arguments for (re-)defining environment variables. Provides functions: exec_command — execute command in a specified directory and in the modified environment. find_executable — locate a command using info from environment variable PATH. Equivalent to posix `which` command. Author: <NAME> <[<EMAIL>](mailto:<EMAIL>)> Created: 11 January 2003 Requires: Python 2.x Successfully tested on: | os.name | sys.platform | comments | | --- | --- | --- | | posix | linux2 | Debian (sid) Linux, Python 2.1.3+, 2.2.3+, 2.3.3 PyCrust 0.9.3, Idle 1.0.2 | | posix | linux2 | Red Hat 9 Linux, Python 2.1.3, 2.2.2, 2.3.2 | | posix | sunos5 | SunOS 5.9, Python 2.2, 2.3.2 | | posix | darwin | Darwin 7.2.0, Python 2.3 | | nt | win32 | Windows Me Python 2.3(EE), Idle 1.0, PyCrust 0.7.2 Python 2.1.1 Idle 0.8 | | nt | win32 | Windows 98, Python 2.1.1. Idle 0.8 | | nt | win32 | Cygwin 98-4.10, Python 2.1.1(MSC) - echo tests fail i.e. redefining environment variables may not work. FIXED: don’t use cygwin echo! Comment: also `cmd /c echo` will not work but redefining environment variables do work. 
| | posix | cygwin | Cygwin 98-4.10, Python 2.3.3(cygming special) | | nt | win32 | Windows XP, Python 2.3.3 | Known bugs: * Tests that send messages to stderr fail when executed from an MSYS prompt because the messages are lost at some point. #### Functions | | | | --- | --- | | [`exec_command`](numpy.distutils.exec_command.exec_command#numpy.distutils.exec_command.exec_command "numpy.distutils.exec_command.exec_command")(command[, execute_in, ...]) | Return (status, output) of executed command. | | [`filepath_from_subprocess_output`](numpy.distutils.exec_command.filepath_from_subprocess_output#numpy.distutils.exec_command.filepath_from_subprocess_output "numpy.distutils.exec_command.filepath_from_subprocess_output")(output) | Convert `bytes` in the encoding used by a subprocess into a filesystem-appropriate `str`. | | [`find_executable`](numpy.distutils.exec_command.find_executable#numpy.distutils.exec_command.find_executable "numpy.distutils.exec_command.find_executable")(exe[, path, _cache]) | Return full path of an executable or None. | | [`forward_bytes_to_stdout`](numpy.distutils.exec_command.forward_bytes_to_stdout#numpy.distutils.exec_command.forward_bytes_to_stdout "numpy.distutils.exec_command.forward_bytes_to_stdout")(val) | Forward bytes from a subprocess call to the console, without attempting to decode them. | | [`get_pythonexe`](numpy.distutils.exec_command.get_pythonexe#numpy.distutils.exec_command.get_pythonexe "numpy.distutils.exec_command.get_pythonexe")() | | | [`temp_file_name`](numpy.distutils.exec_command.temp_file_name#numpy.distutils.exec_command.temp_file_name "numpy.distutils.exec_command.temp_file_name")() | | © 2005–2022 NumPy Developers Licensed under the 3-clause BSD License. 
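On modern Python, the portable core of this module maps onto the standard library. A hedged sketch of the (status, output) contract and of the `find_executable` PATH lookup (`exec_command_sketch` is an illustrative name, not the numpy API):

```python
import shutil
import subprocess
import sys

def exec_command_sketch(command, execute_in='.'):
    """Return (status, output) like exec_command: run the command
    in the given directory, merging stderr into the captured stdout."""
    proc = subprocess.run(
        command,
        cwd=execute_in,
        shell=isinstance(command, str),   # string commands go through the shell
        stdout=subprocess.PIPE,
        stderr=subprocess.STDOUT,
        text=True,
    )
    return proc.returncode, proc.stdout

status, output = exec_command_sketch([sys.executable, '-c', 'print("hello")'])
print(status, output.strip())             # 0 hello

# find_executable is essentially the POSIX `which` command:
print(shutil.which('python3') or shutil.which(sys.executable))
```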
<https://numpy.org/doc/1.23/reference/generated/numpy.distutils.exec_command.htmlnumpy.distutils.log.set_verbosity ================================== distutils.log.set_verbosity(*v*, *force=False*)[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/distutils/log.py#L67-L77) © 2005–2022 NumPy Developers Licensed under the 3-clause BSD License. <https://numpy.org/doc/1.23/reference/generated/numpy.distutils.log.set_verbosity.htmlnumpy.distutils.system_info.get_info ====================================== distutils.system_info.get_info(*name*, *notfound_action=0*)[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/distutils/system_info.py#L497-L585) notfound_action: 0 - do nothing 1 - display warning message 2 - raise error © 2005–2022 NumPy Developers Licensed under the 3-clause BSD License. <https://numpy.org/doc/1.23/reference/generated/numpy.distutils.system_info.get_info.htmlnumpy.distutils.system_info.get_standard_file ================================================ distutils.system_info.get_standard_file(*fname*)[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/distutils/system_info.py#L378-L410) Returns a list of files named ‘fname’ from 1) System-wide directory (directory-location of this module) 2) Users HOME directory (os.environ[‘HOME’]) 3) Local directory © 2005–2022 NumPy Developers Licensed under the 3-clause BSD License. <https://numpy.org/doc/1.23/reference/generated/numpy.distutils.system_info.get_standard_file.htmlnumpy.ndarray.flags =================== attribute ndarray.flags Information about the memory layout of the array. #### Notes The [`flags`](#numpy.ndarray.flags "numpy.ndarray.flags") object can be accessed dictionary-like (as in `a.flags['WRITEABLE']`), or by using lowercased attribute names (as in `a.flags.writeable`). Short flag names are only supported in dictionary access. 
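The dictionary-style versus attribute-style access described above, in a short example (assumes NumPy is installed):

```python
import numpy as np

a = np.zeros((2, 3))

# The same flag via dictionary-like and lowercased attribute access:
assert a.flags['WRITEABLE'] == a.flags.writeable

# Short flag names work only with dictionary access:
print(a.flags['W'], a.flags['C'])   # True True

# WRITEABLE is one of the user-settable flags; writes are then rejected:
a.flags.writeable = False
try:
    a[0, 0] = 1.0
except (ValueError, RuntimeError) as exc:
    print('write blocked:', exc)
```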
Only the WRITEBACKIFCOPY, WRITEABLE, and ALIGNED flags can be changed by the user, via direct assignment to the attribute or dictionary entry, or by calling [`ndarray.setflags`](numpy.ndarray.setflags#numpy.ndarray.setflags "numpy.ndarray.setflags"). The array flags cannot be set arbitrarily: * WRITEBACKIFCOPY can only be set `False`. * ALIGNED can only be set `True` if the data is truly aligned. * WRITEABLE can only be set `True` if the array owns its own memory or the ultimate owner of the memory exposes a writeable buffer interface or is a string. Arrays can be both C-style and Fortran-style contiguous simultaneously. This is clear for 1-dimensional arrays, but can also be true for higher dimensional arrays. Even for contiguous arrays a stride for a given dimension `arr.strides[dim]` may be *arbitrary* if `arr.shape[dim] == 1` or the array has no elements. It does *not* generally hold that `self.strides[-1] == self.itemsize` for C-style contiguous arrays or `self.strides[0] == self.itemsize` for Fortran-style contiguous arrays is true. Attributes **C_CONTIGUOUS (C)** The data is in a single, C-style contiguous segment. **F_CONTIGUOUS (F)** The data is in a single, Fortran-style contiguous segment. **OWNDATA (O)** The array owns the memory it uses or borrows it from another object. **WRITEABLE (W)** The data area can be written to. Setting this to False locks the data, making it read-only. A view (slice, etc.) inherits WRITEABLE from its base array at creation time, but a view of a writeable array may be subsequently locked while the base array remains writeable. (The opposite is not true, in that a view of a locked array may not be made writeable. However, currently, locking a base object does not lock any views that already reference it, so under that circumstance it is possible to alter the contents of a locked array via a previously created writeable view onto it.) Attempting to change a non-writeable array raises a RuntimeError exception. 
**ALIGNED (A)** The data and all elements are aligned appropriately for the hardware. **WRITEBACKIFCOPY (X)** This array is a copy of some other array. The C-API function PyArray_ResolveWritebackIfCopy must be called before deallocating so that the base array will be updated with the contents of this array. **FNC** F_CONTIGUOUS and not C_CONTIGUOUS. **FORC** F_CONTIGUOUS or C_CONTIGUOUS (one-segment test). **BEHAVED (B)** ALIGNED and WRITEABLE. **CARRAY (CA)** BEHAVED and C_CONTIGUOUS. **FARRAY (FA)** BEHAVED and F_CONTIGUOUS and not C_CONTIGUOUS. © 2005–2022 NumPy Developers Licensed under the 3-clause BSD License. <https://numpy.org/doc/1.23/reference/generated/numpy.ndarray.flags.html> numpy.ndarray.data ================== attribute ndarray.data Python buffer object pointing to the start of the array’s data. © 2005–2022 NumPy Developers Licensed under the 3-clause BSD License. <https://numpy.org/doc/1.23/reference/generated/numpy.ndarray.data.html> numpy.ndarray.size ================== attribute ndarray.size Number of elements in the array. Equal to `np.prod(a.shape)`, i.e., the product of the array’s dimensions. #### Notes `a.size` returns a standard arbitrary precision Python integer. This may not be the case with other methods of obtaining the same value (like the suggested `np.prod(a.shape)`, which returns an instance of `np.int_`), and may be relevant if the value is used further in calculations that may overflow a fixed size integer type. #### Examples ``` >>> x = np.zeros((3, 5, 2), dtype=np.complex128) >>> x.size 30 >>> np.prod(x.shape) 30 ``` © 2005–2022 NumPy Developers Licensed under the 3-clause BSD License. <https://numpy.org/doc/1.23/reference/generated/numpy.ndarray.size.html> numpy.ndarray.itemsize ====================== attribute ndarray.itemsize Length of one array element in bytes. 
#### Examples ``` >>> x = np.array([1,2,3], dtype=np.float64) >>> x.itemsize 8 >>> x = np.array([1,2,3], dtype=np.complex128) >>> x.itemsize 16 ``` © 2005–2022 NumPy Developers Licensed under the 3-clause BSD License. <https://numpy.org/doc/1.23/reference/generated/numpy.ndarray.itemsize.html> numpy.ndarray.nbytes ==================== attribute ndarray.nbytes Total bytes consumed by the elements of the array. #### Notes Does not include memory consumed by non-element attributes of the array object. #### Examples ``` >>> x = np.zeros((3,5,2), dtype=np.complex128) >>> x.nbytes 480 >>> np.prod(x.shape) * x.itemsize 480 ``` © 2005–2022 NumPy Developers Licensed under the 3-clause BSD License. <https://numpy.org/doc/1.23/reference/generated/numpy.ndarray.nbytes.html> numpy.ndarray.dtype =================== attribute ndarray.dtype Data-type of the array’s elements. Warning Setting `arr.dtype` is discouraged and may be deprecated in the future. Setting will replace the `dtype` without modifying the memory (see also [`ndarray.view`](numpy.ndarray.view#numpy.ndarray.view "numpy.ndarray.view") and [`ndarray.astype`](numpy.ndarray.astype#numpy.ndarray.astype "numpy.ndarray.astype")). Parameters **None** Returns **d**numpy dtype object See also [`ndarray.astype`](numpy.ndarray.astype#numpy.ndarray.astype "numpy.ndarray.astype") Cast the values contained in the array to a new data-type. [`ndarray.view`](numpy.ndarray.view#numpy.ndarray.view "numpy.ndarray.view") Create a view of the same data but a different data-type. [`numpy.dtype`](numpy.dtype#numpy.dtype "numpy.dtype") #### Examples ``` >>> x array([[0, 1], [2, 3]]) >>> x.dtype dtype('int32') >>> type(x.dtype) <type 'numpy.dtype'> ``` © 2005–2022 NumPy Developers Licensed under the 3-clause BSD License. <https://numpy.org/doc/1.23/reference/generated/numpy.ndarray.dtype.html> numpy.ndarray.T =============== attribute ndarray.T The transposed array. Same as `self.transpose()`. 
See also [`transpose`](numpy.transpose#numpy.transpose "numpy.transpose") #### Examples ``` >>> x = np.array([[1.,2.],[3.,4.]]) >>> x array([[ 1., 2.], [ 3., 4.]]) >>> x.T array([[ 1., 3.], [ 2., 4.]]) >>> x = np.array([1.,2.,3.,4.]) >>> x array([ 1., 2., 3., 4.]) >>> x.T array([ 1., 2., 3., 4.]) ``` © 2005–2022 NumPy Developers Licensed under the 3-clause BSD License. <https://numpy.org/doc/1.23/reference/generated/numpy.ndarray.T.htmlnumpy.ndarray.real ================== attribute ndarray.real The real part of the array. See also [`numpy.real`](numpy.real#numpy.real "numpy.real") equivalent function #### Examples ``` >>> x = np.sqrt([1+0j, 0+1j]) >>> x.real array([ 1. , 0.70710678]) >>> x.real.dtype dtype('float64') ``` © 2005–2022 NumPy Developers Licensed under the 3-clause BSD License. <https://numpy.org/doc/1.23/reference/generated/numpy.ndarray.real.htmlnumpy.ndarray.imag ================== attribute ndarray.imag The imaginary part of the array. #### Examples ``` >>> x = np.sqrt([1+0j, 0+1j]) >>> x.imag array([ 0. , 0.70710678]) >>> x.imag.dtype dtype('float64') ``` © 2005–2022 NumPy Developers Licensed under the 3-clause BSD License. <https://numpy.org/doc/1.23/reference/generated/numpy.ndarray.imag.htmlnumpy.ndarray.flat ================== attribute ndarray.flat A 1-D iterator over the array. This is a [`numpy.flatiter`](numpy.flatiter#numpy.flatiter "numpy.flatiter") instance, which acts similarly to, but is not a subclass of, Python’s built-in iterator object. See also [`flatten`](numpy.ndarray.flatten#numpy.ndarray.flatten "numpy.ndarray.flatten") Return a copy of the array collapsed into one dimension. 
[`flatiter`](numpy.flatiter#numpy.flatiter "numpy.flatiter") #### Examples ``` >>> x = np.arange(1, 7).reshape(2, 3) >>> x array([[1, 2, 3], [4, 5, 6]]) >>> x.flat[3] 4 >>> x.T array([[1, 4], [2, 5], [3, 6]]) >>> x.T.flat[3] 5 >>> type(x.flat) <class 'numpy.flatiter'> ``` An assignment example: ``` >>> x.flat = 3; x array([[3, 3, 3], [3, 3, 3]]) >>> x.flat[[1,4]] = 1; x array([[3, 1, 3], [3, 1, 3]]) ``` © 2005–2022 NumPy Developers Licensed under the 3-clause BSD License. <https://numpy.org/doc/1.23/reference/generated/numpy.ndarray.flat.html> numpy.ndarray.ctypes ==================== attribute ndarray.ctypes An object to simplify the interaction of the array with the ctypes module. This attribute creates an object that makes it easier to use arrays when calling shared libraries with the ctypes module. The returned object has, among others, data, shape, and strides attributes (see Notes below) which themselves return ctypes objects that can be used as arguments to a shared library. Parameters **None** Returns **c**Python object Possessing attributes data, shape, strides, etc. See also [`numpy.ctypeslib`](../routines.ctypeslib#module-numpy.ctypeslib "numpy.ctypeslib") #### Notes Below are the public attributes of this object which were documented in “Guide to NumPy” (we have omitted undocumented public attributes, as well as documented private attributes): _ctypes.data A pointer to the memory area of the array as a Python integer. This memory area may contain data that is not aligned, or not in correct byte-order. The memory area may not even be writeable. The array flags and data-type of this array should be respected when passing this attribute to arbitrary C-code to avoid trouble that can include Python crashing. User Beware! The value of this attribute is exactly the same as `self._array_interface_['data'][0]`. 
Note that unlike `data_as`, a reference will not be kept to the array: code like `ctypes.c_void_p((a + b).ctypes.data)` will result in a pointer to a deallocated array, and should be spelt `(a + b).ctypes.data_as(ctypes.c_void_p)` _ctypes.shape (c_intp*self.ndim): A ctypes array of length self.ndim where the basetype is the C-integer corresponding to `dtype('p')` on this platform (see [`c_intp`](../routines.ctypeslib#numpy.ctypeslib.c_intp "numpy.ctypeslib.c_intp")). This base-type could be [`ctypes.c_int`](https://docs.python.org/3/library/ctypes.html#ctypes.c_int "(in Python v3.10)"), [`ctypes.c_long`](https://docs.python.org/3/library/ctypes.html#ctypes.c_long "(in Python v3.10)"), or [`ctypes.c_longlong`](https://docs.python.org/3/library/ctypes.html#ctypes.c_longlong "(in Python v3.10)") depending on the platform. The ctypes array contains the shape of the underlying array. _ctypes.strides (c_intp*self.ndim): A ctypes array of length self.ndim where the basetype is the same as for the shape attribute. This ctypes array contains the strides information from the underlying array. This strides information is important for showing how many bytes must be jumped to get to the next element in the array. _ctypes.data_as(*obj*)[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/core/_internal.py#L267-L284) Return the data pointer cast to a particular c-types object. For example, calling `self._as_parameter_` is equivalent to `self.data_as(ctypes.c_void_p)`. Perhaps you want to use the data as a pointer to a ctypes array of floating-point data: `self.data_as(ctypes.POINTER(ctypes.c_double))`. The returned pointer will keep a reference to the array. _ctypes.shape_as(*obj*)[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/core/_internal.py#L286-L293) Return the shape tuple as an array of some other c-types type. For example: `self.shape_as(ctypes.c_short)`. 
_ctypes.strides_as(*obj*)[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/core/_internal.py#L295-L302) Return the strides tuple as an array of some other c-types type. For example: `self.strides_as(ctypes.c_longlong)`. If the ctypes module is not available, then the ctypes attribute of array objects still returns something useful, but ctypes objects are not returned and errors may be raised instead. In particular, the object will still have the `as_parameter` attribute which will return an integer equal to the data attribute. #### Examples ``` >>> import ctypes >>> x = np.array([[0, 1], [2, 3]], dtype=np.int32) >>> x array([[0, 1], [2, 3]], dtype=int32) >>> x.ctypes.data 31962608 # may vary >>> x.ctypes.data_as(ctypes.POINTER(ctypes.c_uint32)) <__main__.LP_c_uint object at 0x7ff2fc1fc200> # may vary >>> x.ctypes.data_as(ctypes.POINTER(ctypes.c_uint32)).contents c_uint(0) >>> x.ctypes.data_as(ctypes.POINTER(ctypes.c_uint64)).contents c_ulong(4294967296) >>> x.ctypes.shape <numpy.core._internal.c_long_Array_2 object at 0x7ff2fc1fce60> # may vary >>> x.ctypes.strides <numpy.core._internal.c_long_Array_2 object at 0x7ff2fc1ff320> # may vary ``` © 2005–2022 NumPy Developers Licensed under the 3-clause BSD License. <https://numpy.org/doc/1.23/reference/generated/numpy.ndarray.ctypes.htmlnumpy.ndarray.tolist ==================== method ndarray.tolist() Return the array as an `a.ndim`-levels deep nested list of Python scalars. Return a copy of the array data as a (nested) Python list. Data items are converted to the nearest compatible builtin Python type, via the [`item`](numpy.ndarray.item#numpy.ndarray.item "numpy.ndarray.item") function. If `a.ndim` is 0, then since the depth of the nested list is 0, it will not be a list at all, but a simple Python scalar. Parameters **none** Returns **y**object, or list of object, or list of list of object, or 
 The possibly nested list of array elements. #### Notes The array may be recreated via `a = np.array(a.tolist())`, although this may sometimes lose precision. #### Examples For a 1D array, `a.tolist()` is almost the same as `list(a)`, except that `tolist` changes numpy scalars to Python scalars: ``` >>> a = np.uint32([1, 2]) >>> a_list = list(a) >>> a_list [1, 2] >>> type(a_list[0]) <class 'numpy.uint32'> >>> a_tolist = a.tolist() >>> a_tolist [1, 2] >>> type(a_tolist[0]) <class 'int'``` Additionally, for a 2D array, `tolist` applies recursively: ``` >>> a = np.array([[1, 2], [3, 4]]) >>> list(a) [array([1, 2]), array([3, 4])] >>> a.tolist() [[1, 2], [3, 4]] ``` The base case for this recursion is a 0D array: ``` >>> a = np.array(1) >>> list(a) Traceback (most recent call last): ... TypeError: iteration over a 0-d array >>> a.tolist() 1 ``` © 2005–2022 NumPy Developers Licensed under the 3-clause BSD License. <https://numpy.org/doc/1.23/reference/generated/numpy.ndarray.tolist.htmlnumpy.ndarray.itemset ===================== method ndarray.itemset(**args*) Insert scalar into an array (scalar is cast to array’s dtype, if possible) There must be at least 1 argument, and define the last argument as *item*. Then, `a.itemset(*args)` is equivalent to but faster than `a[args] = item`. The item should be a scalar value and `args` must select a single item in the array `a`. Parameters ***args**Arguments If one argument: a scalar, only used in case `a` is of size 1. If two arguments: the last argument is the value to be set and must be a scalar, the first argument specifies a single array element location. It is either an int or a tuple. #### Notes Compared to indexing syntax, [`itemset`](#numpy.ndarray.itemset "numpy.ndarray.itemset") provides some speed increase for placing a scalar into a particular location in an [`ndarray`](numpy.ndarray#numpy.ndarray "numpy.ndarray"), if you must do this. 
However, generally this is discouraged: among other problems, it complicates the appearance of the code. Also, when using [`itemset`](#numpy.ndarray.itemset "numpy.ndarray.itemset") (and [`item`](numpy.ndarray.item#numpy.ndarray.item "numpy.ndarray.item")) inside a loop, be sure to assign the methods to a local variable to avoid the attribute look-up at each loop iteration. #### Examples ``` >>> np.random.seed(123) >>> x = np.random.randint(9, size=(3, 3)) >>> x array([[2, 2, 6], [1, 3, 6], [1, 0, 1]]) >>> x.itemset(4, 0) >>> x.itemset((2, 2), 9) >>> x array([[2, 2, 6], [1, 0, 6], [1, 0, 9]]) ``` © 2005–2022 NumPy Developers Licensed under the 3-clause BSD License. <https://numpy.org/doc/1.23/reference/generated/numpy.ndarray.itemset.htmlnumpy.ndarray.tostring ====================== method ndarray.tostring(*order='C'*) A compatibility alias for [`tobytes`](numpy.ndarray.tobytes#numpy.ndarray.tobytes "numpy.ndarray.tobytes"), with exactly the same behavior. Despite its name, it returns `bytes` not [`str`](https://docs.python.org/3/library/stdtypes.html#str "(in Python v3.10)")s. Deprecated since version 1.19.0. © 2005–2022 NumPy Developers Licensed under the 3-clause BSD License. <https://numpy.org/doc/1.23/reference/generated/numpy.ndarray.tostring.htmlnumpy.ndarray.tobytes ===================== method ndarray.tobytes(*order='C'*) Construct Python bytes containing the raw data bytes in the array. Constructs Python bytes showing a copy of the raw contents of data memory. The bytes object is produced in C-order by default. This behavior is controlled by the `order` parameter. New in version 1.9.0. Parameters **order**{‘C’, ‘F’, ‘A’}, optional Controls the memory layout of the bytes object. ‘C’ means C-order, ‘F’ means F-order, ‘A’ (short for *Any*) means ‘F’ if `a` is Fortran contiguous, ‘C’ otherwise. Default is ‘C’. Returns **s**bytes Python bytes exhibiting a copy of `a`’s raw data. 
#### Examples ``` >>> x = np.array([[0, 1], [2, 3]], dtype='<u2') >>> x.tobytes() b'\x00\x00\x01\x00\x02\x00\x03\x00' >>> x.tobytes('C') == x.tobytes() True >>> x.tobytes('F') b'\x00\x00\x02\x00\x01\x00\x03\x00' ``` © 2005–2022 NumPy Developers Licensed under the 3-clause BSD License. <https://numpy.org/doc/1.23/reference/generated/numpy.ndarray.tobytes.htmlnumpy.ndarray.tofile ==================== method ndarray.tofile(*fid*, *sep=''*, *format='%s'*) Write array to a file as text or binary (default). Data is always written in ‘C’ order, independent of the order of `a`. The data produced by this method can be recovered using the function fromfile(). Parameters **fid**file or str or Path An open file object, or a string containing a filename. Changed in version 1.17.0: [`pathlib.Path`](https://docs.python.org/3/library/pathlib.html#pathlib.Path "(in Python v3.10)") objects are now accepted. **sep**str Separator between array items for text output. If “” (empty), a binary file is written, equivalent to `file.write(a.tobytes())`. **format**str Format string for text file output. Each entry in the array is formatted to text by first converting it to the closest Python type, and then using “format” % item. #### Notes This is a convenience function for quick storage of array data. Information on endianness and precision is lost, so this method is not a good choice for files intended to archive data or transport data between machines with different endianness. Some of these problems can be overcome by outputting the data as text files, at the expense of speed and file size. When fid is a file object, array contents are directly written to the file, bypassing the file object’s `write` method. As a result, tofile cannot be used with files objects supporting compression (e.g., GzipFile) or file-like objects that do not support `fileno()` (e.g., BytesIO). © 2005–2022 NumPy Developers Licensed under the 3-clause BSD License. 
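A minimal round trip for the binary mode of `tofile` (assumes NumPy is installed; as the Notes warn, the file stores raw bytes only, so shape and dtype must be re-supplied on read):

```python
import os
import tempfile

import numpy as np

a = np.arange(6, dtype=np.float64).reshape(2, 3)

fd, path = tempfile.mkstemp()
os.close(fd)
try:
    a.tofile(path)                             # sep='' -> raw binary, C order
    raw = np.fromfile(path, dtype=np.float64)  # flat: shape is not stored
    b = raw.reshape(2, 3)                      # caller must restore the shape
    assert (a == b).all()
finally:
    os.remove(path)
```

Endianness is not recorded either, which is why the docs steer archival use toward text output or self-describing formats such as `.npy` via `numpy.save`.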
<https://numpy.org/doc/1.23/reference/generated/numpy.ndarray.tofile.htmlnumpy.ndarray.dump ================== method ndarray.dump(*file*) Dump a pickle of the array to the specified file. The array can be read back with pickle.load or numpy.load. Parameters **file**str or Path A string naming the dump file. Changed in version 1.17.0: [`pathlib.Path`](https://docs.python.org/3/library/pathlib.html#pathlib.Path "(in Python v3.10)") objects are now accepted. © 2005–2022 NumPy Developers Licensed under the 3-clause BSD License. <https://numpy.org/doc/1.23/reference/generated/numpy.ndarray.dump.htmlnumpy.ndarray.dumps =================== method ndarray.dumps() Returns the pickle of the array as a string. pickle.loads will convert the string back to an array. Parameters **None** © 2005–2022 NumPy Developers Licensed under the 3-clause BSD License. <https://numpy.org/doc/1.23/reference/generated/numpy.ndarray.dumps.htmlnumpy.ndarray.byteswap ====================== method ndarray.byteswap(*inplace=False*) Swap the bytes of the array elements Toggle between low-endian and big-endian data representation by returning a byteswapped array, optionally swapped in-place. Arrays of byte-strings are not swapped. The real and imaginary parts of a complex number are swapped individually. Parameters **inplace**bool, optional If `True`, swap bytes in-place, default is `False`. Returns **out**ndarray The byteswapped array. If `inplace` is `True`, this is a view to self. 
#### Examples ``` >>> A = np.array([1, 256, 8755], dtype=np.int16) >>> list(map(hex, A)) ['0x1', '0x100', '0x2233'] >>> A.byteswap(inplace=True) array([ 256, 1, 13090], dtype=int16) >>> list(map(hex, A)) ['0x100', '0x1', '0x3322'] ``` Arrays of byte-strings are not swapped ``` >>> A = np.array([b'ceg', b'fac']) >>> A.byteswap() array([b'ceg', b'fac'], dtype='|S3') ``` `A.newbyteorder().byteswap()` produces an array with the same values but different representation in memory ``` >>> A = np.array([1, 2, 3]) >>> A.view(np.uint8) array([1, 0, 0, 0, 0, 0, 0, 0, 2, 0, 0, 0, 0, 0, 0, 0, 3, 0, 0, 0, 0, 0, 0, 0], dtype=uint8) >>> A.newbyteorder().byteswap(inplace=True) array([1, 2, 3]) >>> A.view(np.uint8) array([0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 2, 0, 0, 0, 0, 0, 0, 0, 3], dtype=uint8) ``` © 2005–2022 NumPy Developers Licensed under the 3-clause BSD License. <https://numpy.org/doc/1.23/reference/generated/numpy.ndarray.byteswap.htmlnumpy.ndarray.copy ================== method ndarray.copy(*order='C'*) Return a copy of the array. Parameters **order**{‘C’, ‘F’, ‘A’, ‘K’}, optional Controls the memory layout of the copy. ‘C’ means C-order, ‘F’ means F-order, ‘A’ means ‘F’ if `a` is Fortran contiguous, ‘C’ otherwise. ‘K’ means match the layout of `a` as closely as possible. (Note that this function and [`numpy.copy`](numpy.copy#numpy.copy "numpy.copy") are very similar but have different default values for their order= arguments, and this function always passes sub-classes through.) See also [`numpy.copy`](numpy.copy#numpy.copy "numpy.copy") Similar function with different default behavior [`numpy.copyto`](numpy.copyto#numpy.copyto "numpy.copyto") #### Notes This function is the preferred method for creating an array copy. The function [`numpy.copy`](numpy.copy#numpy.copy "numpy.copy") is similar, but it defaults to using order ‘K’, and will not pass sub-classes through by default. 
#### Examples ``` >>> x = np.array([[1,2,3],[4,5,6]], order='F') ``` ``` >>> y = x.copy() ``` ``` >>> x.fill(0) ``` ``` >>> x array([[0, 0, 0], [0, 0, 0]]) ``` ``` >>> y array([[1, 2, 3], [4, 5, 6]]) ``` ``` >>> y.flags['C_CONTIGUOUS'] True ``` © 2005–2022 NumPy Developers Licensed under the 3-clause BSD License. <https://numpy.org/doc/1.23/reference/generated/numpy.ndarray.copy.htmlnumpy.ndarray.getfield ====================== method ndarray.getfield(*dtype*, *offset=0*) Returns a field of the given array as a certain type. A field is a view of the array data with a given data-type. The values in the view are determined by the given type and the offset into the current array in bytes. The offset needs to be such that the view dtype fits in the array dtype; for example an array of dtype complex128 has 16-byte elements. If taking a view with a 32-bit integer (4 bytes), the offset needs to be between 0 and 12 bytes. Parameters **dtype**str or dtype The data type of the view. The dtype size of the view can not be larger than that of the array itself. **offset**int Number of bytes to skip before beginning the element view. #### Examples ``` >>> x = np.diag([1.+1.j]*2) >>> x[1, 1] = 2 + 4.j >>> x array([[1.+1.j, 0.+0.j], [0.+0.j, 2.+4.j]]) >>> x.getfield(np.float64) array([[1., 0.], [0., 2.]]) ``` By choosing an offset of 8 bytes we can select the complex part of the array for our view: ``` >>> x.getfield(np.float64, offset=8) array([[1., 0.], [0., 4.]]) ``` © 2005–2022 NumPy Developers Licensed under the 3-clause BSD License. <https://numpy.org/doc/1.23/reference/generated/numpy.ndarray.getfield.htmlnumpy.ndarray.setflags ====================== method ndarray.setflags(*write=None*, *align=None*, *uic=None*) Set array flags WRITEABLE, ALIGNED, WRITEBACKIFCOPY, respectively. These Boolean-valued flags affect how numpy interprets the memory area used by `a` (see Notes below). The ALIGNED flag can only be set to True if the data is actually aligned according to the type. 
The WRITEBACKIFCOPY flag can never be set to True. The flag WRITEABLE can only be set to True if the array owns its own memory, or the ultimate owner of the memory exposes a writeable buffer interface, or is a string. (The exception for string is made so that unpickling can be done without copying memory.) Parameters **write**bool, optional Describes whether or not `a` can be written to. **align**bool, optional Describes whether or not `a` is aligned properly for its type. **uic**bool, optional Describes whether or not `a` is a copy of another “base” array. #### Notes Array flags provide information about how the memory area used for the array is to be interpreted. There are 7 Boolean flags in use, only three of which can be changed by the user: WRITEBACKIFCOPY, WRITEABLE, and ALIGNED. WRITEABLE (W) the data area can be written to; ALIGNED (A) the data and strides are aligned appropriately for the hardware (as determined by the compiler); WRITEBACKIFCOPY (X) this array is a copy of some other array (referenced by .base). When the C-API function PyArray_ResolveWritebackIfCopy is called, the base array will be updated with the contents of this array. All flags can be accessed using the single (upper case) letter as well as the full name. #### Examples ``` >>> y = np.array([[3, 1, 7], ... [2, 0, 0], ... [8, 5, 9]]) >>> y array([[3, 1, 7], [2, 0, 0], [8, 5, 9]]) >>> y.flags C_CONTIGUOUS : True F_CONTIGUOUS : False OWNDATA : True WRITEABLE : True ALIGNED : True WRITEBACKIFCOPY : False >>> y.setflags(write=0, align=0) >>> y.flags C_CONTIGUOUS : True F_CONTIGUOUS : False OWNDATA : True WRITEABLE : False ALIGNED : False WRITEBACKIFCOPY : False >>> y.setflags(uic=1) Traceback (most recent call last): File "<stdin>", line 1, in <module> ValueError: cannot set WRITEBACKIFCOPY flag to True ``` © 2005–2022 NumPy Developers Licensed under the 3-clause BSD License. 
<https://numpy.org/doc/1.23/reference/generated/numpy.ndarray.setflags.html> numpy.ndarray.reshape ===================== method ndarray.reshape(*shape*, *order='C'*) Returns an array containing the same data with a new shape. Refer to [`numpy.reshape`](numpy.reshape#numpy.reshape "numpy.reshape") for full documentation. See also [`numpy.reshape`](numpy.reshape#numpy.reshape "numpy.reshape") equivalent function #### Notes Unlike the free function [`numpy.reshape`](numpy.reshape#numpy.reshape "numpy.reshape"), this method on [`ndarray`](numpy.ndarray#numpy.ndarray "numpy.ndarray") allows the elements of the shape parameter to be passed in as separate arguments. For example, `a.reshape(10, 11)` is equivalent to `a.reshape((10, 11))`. © 2005–2022 NumPy Developers Licensed under the 3-clause BSD License. <https://numpy.org/doc/1.23/reference/generated/numpy.ndarray.reshape.html> numpy.ndarray.transpose ======================= method ndarray.transpose(**axes*) Returns a view of the array with axes transposed. For a 1-D array this has no effect, as a transposed vector is simply the same vector. To convert a 1-D array into a 2-D column vector, an additional dimension must be added. `np.atleast_2d(a).T` achieves this, as does `a[:, np.newaxis]`. For a 2-D array, this is a standard matrix transpose. For an n-D array, if axes are given, their order indicates how the axes are permuted (see Examples). If axes are not provided and `a.shape = (i[0], i[1], ... i[n-2], i[n-1])`, then `a.transpose().shape = (i[n-1], i[n-2], ... i[1], i[0])`. Parameters **axes**None, tuple of ints, or `n` ints * None or no argument: reverses the order of the axes. * tuple of ints: `i` in the `j`-th place in the tuple means `a`’s `i`-th axis becomes `a.transpose()`’s `j`-th axis. * `n` ints: same as an n-tuple of the same ints (this form is intended simply as a “convenience” alternative to the tuple form) Returns **out**ndarray View of `a`, with axes suitably permuted. 
See also [`transpose`](numpy.transpose#numpy.transpose "numpy.transpose") Equivalent function [`ndarray.T`](numpy.ndarray.t#numpy.ndarray.T "numpy.ndarray.T") Array property returning the array transposed. [`ndarray.reshape`](numpy.ndarray.reshape#numpy.ndarray.reshape "numpy.ndarray.reshape") Give a new shape to an array without changing its data. #### Examples ``` >>> a = np.array([[1, 2], [3, 4]]) >>> a array([[1, 2], [3, 4]]) >>> a.transpose() array([[1, 3], [2, 4]]) >>> a.transpose((1, 0)) array([[1, 3], [2, 4]]) >>> a.transpose(1, 0) array([[1, 3], [2, 4]]) ``` © 2005–2022 NumPy Developers Licensed under the 3-clause BSD License. <https://numpy.org/doc/1.23/reference/generated/numpy.ndarray.transpose.htmlnumpy.ndarray.swapaxes ====================== method ndarray.swapaxes(*axis1*, *axis2*) Return a view of the array with `axis1` and `axis2` interchanged. Refer to [`numpy.swapaxes`](numpy.swapaxes#numpy.swapaxes "numpy.swapaxes") for full documentation. See also [`numpy.swapaxes`](numpy.swapaxes#numpy.swapaxes "numpy.swapaxes") equivalent function © 2005–2022 NumPy Developers Licensed under the 3-clause BSD License. <https://numpy.org/doc/1.23/reference/generated/numpy.ndarray.swapaxes.htmlnumpy.ndarray.ravel =================== method ndarray.ravel([*order*]) Return a flattened array. Refer to [`numpy.ravel`](numpy.ravel#numpy.ravel "numpy.ravel") for full documentation. See also [`numpy.ravel`](numpy.ravel#numpy.ravel "numpy.ravel") equivalent function [`ndarray.flat`](numpy.ndarray.flat#numpy.ndarray.flat "numpy.ndarray.flat") a flat iterator on the array. © 2005–2022 NumPy Developers Licensed under the 3-clause BSD License. <https://numpy.org/doc/1.23/reference/generated/numpy.ndarray.ravel.htmlnumpy.ndarray.squeeze ===================== method ndarray.squeeze(*axis=None*) Remove axes of length one from `a`. Refer to [`numpy.squeeze`](numpy.squeeze#numpy.squeeze "numpy.squeeze") for full documentation. 
See also [`numpy.squeeze`](numpy.squeeze#numpy.squeeze "numpy.squeeze") equivalent function © 2005–2022 NumPy Developers Licensed under the 3-clause BSD License. <https://numpy.org/doc/1.23/reference/generated/numpy.ndarray.squeeze.htmlnumpy.ndarray.take ================== method ndarray.take(*indices*, *axis=None*, *out=None*, *mode='raise'*) Return an array formed from the elements of `a` at the given indices. Refer to [`numpy.take`](numpy.take#numpy.take "numpy.take") for full documentation. See also [`numpy.take`](numpy.take#numpy.take "numpy.take") equivalent function © 2005–2022 NumPy Developers Licensed under the 3-clause BSD License. <https://numpy.org/doc/1.23/reference/generated/numpy.ndarray.take.htmlnumpy.ndarray.put ================= method ndarray.put(*indices*, *values*, *mode='raise'*) Set `a.flat[n] = values[n]` for all `n` in indices. Refer to [`numpy.put`](numpy.put#numpy.put "numpy.put") for full documentation. See also [`numpy.put`](numpy.put#numpy.put "numpy.put") equivalent function © 2005–2022 NumPy Developers Licensed under the 3-clause BSD License. <https://numpy.org/doc/1.23/reference/generated/numpy.ndarray.put.htmlnumpy.ndarray.repeat ==================== method ndarray.repeat(*repeats*, *axis=None*) Repeat elements of an array. Refer to [`numpy.repeat`](numpy.repeat#numpy.repeat "numpy.repeat") for full documentation. See also [`numpy.repeat`](numpy.repeat#numpy.repeat "numpy.repeat") equivalent function © 2005–2022 NumPy Developers Licensed under the 3-clause BSD License. <https://numpy.org/doc/1.23/reference/generated/numpy.ndarray.repeat.htmlnumpy.ndarray.choose ==================== method ndarray.choose(*choices*, *out=None*, *mode='raise'*) Use an index array to construct a new array from a set of choices. Refer to [`numpy.choose`](numpy.choose#numpy.choose "numpy.choose") for full documentation. 
See also [`numpy.choose`](numpy.choose#numpy.choose "numpy.choose") equivalent function © 2005–2022 NumPy Developers Licensed under the 3-clause BSD License. <https://numpy.org/doc/1.23/reference/generated/numpy.ndarray.choose.htmlnumpy.ndarray.sort ================== method ndarray.sort(*axis=- 1*, *kind=None*, *order=None*) Sort an array in-place. Refer to [`numpy.sort`](numpy.sort#numpy.sort "numpy.sort") for full documentation. Parameters **axis**int, optional Axis along which to sort. Default is -1, which means sort along the last axis. **kind**{‘quicksort’, ‘mergesort’, ‘heapsort’, ‘stable’}, optional Sorting algorithm. The default is ‘quicksort’. Note that both ‘stable’ and ‘mergesort’ use timsort under the covers and, in general, the actual implementation will vary with datatype. The ‘mergesort’ option is retained for backwards compatibility. Changed in version 1.15.0: The ‘stable’ option was added. **order**str or list of str, optional When `a` is an array with fields defined, this argument specifies which fields to compare first, second, etc. A single field can be specified as a string, and not all fields need be specified, but unspecified fields will still be used, in the order in which they come up in the dtype, to break ties. See also [`numpy.sort`](numpy.sort#numpy.sort "numpy.sort") Return a sorted copy of an array. [`numpy.argsort`](numpy.argsort#numpy.argsort "numpy.argsort") Indirect sort. [`numpy.lexsort`](numpy.lexsort#numpy.lexsort "numpy.lexsort") Indirect stable sort on multiple keys. [`numpy.searchsorted`](numpy.searchsorted#numpy.searchsorted "numpy.searchsorted") Find elements in sorted array. [`numpy.partition`](numpy.partition#numpy.partition "numpy.partition") Partial sort. #### Notes See [`numpy.sort`](numpy.sort#numpy.sort "numpy.sort") for notes on the different sorting algorithms. 
#### Examples ``` >>> a = np.array([[1,4], [3,1]]) >>> a.sort(axis=1) >>> a array([[1, 4], [1, 3]]) >>> a.sort(axis=0) >>> a array([[1, 3], [1, 4]]) ``` Use the `order` keyword to specify a field to use when sorting a structured array: ``` >>> a = np.array([('a', 2), ('c', 1)], dtype=[('x', 'S1'), ('y', int)]) >>> a.sort(order='y') >>> a array([(b'c', 1), (b'a', 2)], dtype=[('x', 'S1'), ('y', '<i8')]) ``` © 2005–2022 NumPy Developers Licensed under the 3-clause BSD License. <https://numpy.org/doc/1.23/reference/generated/numpy.ndarray.sort.htmlnumpy.ndarray.argsort ===================== method ndarray.argsort(*axis=- 1*, *kind=None*, *order=None*) Returns the indices that would sort this array. Refer to [`numpy.argsort`](numpy.argsort#numpy.argsort "numpy.argsort") for full documentation. See also [`numpy.argsort`](numpy.argsort#numpy.argsort "numpy.argsort") equivalent function © 2005–2022 NumPy Developers Licensed under the 3-clause BSD License. <https://numpy.org/doc/1.23/reference/generated/numpy.ndarray.argsort.htmlnumpy.ndarray.partition ======================= method ndarray.partition(*kth*, *axis=- 1*, *kind='introselect'*, *order=None*) Rearranges the elements in the array in such a way that the value of the element in kth position is in the position it would be in a sorted array. All elements smaller than the kth element are moved before this element and all equal or greater are moved behind it. The ordering of the elements in the two partitions is undefined. New in version 1.8.0. Parameters **kth**int or sequence of ints Element index to partition by. The kth element value will be in its final sorted position and all smaller elements will be moved before it and all equal or greater elements behind it. The order of all elements in the partitions is undefined. If provided with a sequence of kth it will partition all elements indexed by kth of them into their sorted position at once. Deprecated since version 1.22.0: Passing booleans as index is deprecated. 
**axis**int, optional Axis along which to sort. Default is -1, which means sort along the last axis. **kind**{‘introselect’}, optional Selection algorithm. Default is ‘introselect’. **order**str or list of str, optional When `a` is an array with fields defined, this argument specifies which fields to compare first, second, etc. A single field can be specified as a string, and not all fields need to be specified, but unspecified fields will still be used, in the order in which they come up in the dtype, to break ties. See also [`numpy.partition`](numpy.partition#numpy.partition "numpy.partition") Return a partitioned copy of an array. [`argpartition`](numpy.argpartition#numpy.argpartition "numpy.argpartition") Indirect partition. [`sort`](numpy.sort#numpy.sort "numpy.sort") Full sort. #### Notes See `np.partition` for notes on the different algorithms. #### Examples ``` >>> a = np.array([3, 4, 2, 1]) >>> a.partition(3) >>> a array([2, 1, 3, 4]) ``` ``` >>> a.partition((1, 3)) >>> a array([1, 2, 3, 4]) ``` © 2005–2022 NumPy Developers Licensed under the 3-clause BSD License. <https://numpy.org/doc/1.23/reference/generated/numpy.ndarray.partition.htmlnumpy.ndarray.argpartition ========================== method ndarray.argpartition(*kth*, *axis=- 1*, *kind='introselect'*, *order=None*) Returns the indices that would partition this array. Refer to [`numpy.argpartition`](numpy.argpartition#numpy.argpartition "numpy.argpartition") for full documentation. New in version 1.8.0. See also [`numpy.argpartition`](numpy.argpartition#numpy.argpartition "numpy.argpartition") equivalent function © 2005–2022 NumPy Developers Licensed under the 3-clause BSD License. <https://numpy.org/doc/1.23/reference/generated/numpy.ndarray.argpartition.htmlnumpy.ndarray.searchsorted ========================== method ndarray.searchsorted(*v*, *side='left'*, *sorter=None*) Find indices where elements of v should be inserted in a to maintain order. 
For full documentation, see [`numpy.searchsorted`](numpy.searchsorted#numpy.searchsorted "numpy.searchsorted") See also [`numpy.searchsorted`](numpy.searchsorted#numpy.searchsorted "numpy.searchsorted") equivalent function © 2005–2022 NumPy Developers Licensed under the 3-clause BSD License. <https://numpy.org/doc/1.23/reference/generated/numpy.ndarray.searchsorted.htmlnumpy.ndarray.nonzero ===================== method ndarray.nonzero() Return the indices of the elements that are non-zero. Refer to [`numpy.nonzero`](numpy.nonzero#numpy.nonzero "numpy.nonzero") for full documentation. See also [`numpy.nonzero`](numpy.nonzero#numpy.nonzero "numpy.nonzero") equivalent function © 2005–2022 NumPy Developers Licensed under the 3-clause BSD License. <https://numpy.org/doc/1.23/reference/generated/numpy.ndarray.nonzero.htmlnumpy.ndarray.compress ====================== method ndarray.compress(*condition*, *axis=None*, *out=None*) Return selected slices of this array along given axis. Refer to [`numpy.compress`](numpy.compress#numpy.compress "numpy.compress") for full documentation. See also [`numpy.compress`](numpy.compress#numpy.compress "numpy.compress") equivalent function © 2005–2022 NumPy Developers Licensed under the 3-clause BSD License. <https://numpy.org/doc/1.23/reference/generated/numpy.ndarray.compress.htmlnumpy.ndarray.diagonal ====================== method ndarray.diagonal(*offset=0*, *axis1=0*, *axis2=1*) Return specified diagonals. In NumPy 1.9 the returned array is a read-only view instead of a copy as in previous NumPy versions. In a future version the read-only restriction will be removed. Refer to [`numpy.diagonal`](numpy.diagonal#numpy.diagonal "numpy.diagonal") for full documentation. See also [`numpy.diagonal`](numpy.diagonal#numpy.diagonal "numpy.diagonal") equivalent function © 2005–2022 NumPy Developers Licensed under the 3-clause BSD License. 
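The `diagonal` page above defers to the free function but notes the read-only view behaviour; as an unofficial sketch of the method and its `offset` parameter:

```python
import numpy as np

x = np.arange(9).reshape(3, 3)

d = x.diagonal()              # main diagonal
print(d)                      # [0 4 8]

# offset selects diagonals above (positive) or below (negative) the main one
print(x.diagonal(offset=1))   # [1 5]
print(x.diagonal(offset=-1))  # [3 7]

# As noted above, the returned array is a read-only view in NumPy >= 1.9.
print(d.flags.writeable)      # False
```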
<https://numpy.org/doc/1.23/reference/generated/numpy.ndarray.diagonal.htmlnumpy.ndarray.max ================= method ndarray.max(*axis=None*, *out=None*, *keepdims=False*, *initial=<no value>*, *where=True*) Return the maximum along a given axis. Refer to [`numpy.amax`](numpy.amax#numpy.amax "numpy.amax") for full documentation. See also [`numpy.amax`](numpy.amax#numpy.amax "numpy.amax") equivalent function © 2005–2022 NumPy Developers Licensed under the 3-clause BSD License. <https://numpy.org/doc/1.23/reference/generated/numpy.ndarray.max.htmlnumpy.ndarray.argmax ==================== method ndarray.argmax(*axis=None*, *out=None*, ***, *keepdims=False*) Return indices of the maximum values along the given axis. Refer to [`numpy.argmax`](numpy.argmax#numpy.argmax "numpy.argmax") for full documentation. See also [`numpy.argmax`](numpy.argmax#numpy.argmax "numpy.argmax") equivalent function © 2005–2022 NumPy Developers Licensed under the 3-clause BSD License. <https://numpy.org/doc/1.23/reference/generated/numpy.ndarray.argmax.htmlnumpy.ndarray.min ================= method ndarray.min(*axis=None*, *out=None*, *keepdims=False*, *initial=<no value>*, *where=True*) Return the minimum along a given axis. Refer to [`numpy.amin`](numpy.amin#numpy.amin "numpy.amin") for full documentation. See also [`numpy.amin`](numpy.amin#numpy.amin "numpy.amin") equivalent function © 2005–2022 NumPy Developers Licensed under the 3-clause BSD License. <https://numpy.org/doc/1.23/reference/generated/numpy.ndarray.min.htmlnumpy.ndarray.argmin ==================== method ndarray.argmin(*axis=None*, *out=None*, ***, *keepdims=False*) Return indices of the minimum values along the given axis. Refer to [`numpy.argmin`](numpy.argmin#numpy.argmin "numpy.argmin") for detailed documentation. See also [`numpy.argmin`](numpy.argmin#numpy.argmin "numpy.argmin") equivalent function © 2005–2022 NumPy Developers Licensed under the 3-clause BSD License. 
<https://numpy.org/doc/1.23/reference/generated/numpy.ndarray.argmin.htmlnumpy.ndarray.ptp ================= method ndarray.ptp(*axis=None*, *out=None*, *keepdims=False*) Peak to peak (maximum - minimum) value along a given axis. Refer to [`numpy.ptp`](numpy.ptp#numpy.ptp "numpy.ptp") for full documentation. See also [`numpy.ptp`](numpy.ptp#numpy.ptp "numpy.ptp") equivalent function © 2005–2022 NumPy Developers Licensed under the 3-clause BSD License. <https://numpy.org/doc/1.23/reference/generated/numpy.ndarray.ptp.htmlnumpy.ndarray.clip ================== method ndarray.clip(*min=None*, *max=None*, *out=None*, ***kwargs*) Return an array whose values are limited to `[min, max]`. One of max or min must be given. Refer to [`numpy.clip`](numpy.clip#numpy.clip "numpy.clip") for full documentation. See also [`numpy.clip`](numpy.clip#numpy.clip "numpy.clip") equivalent function © 2005–2022 NumPy Developers Licensed under the 3-clause BSD License. <https://numpy.org/doc/1.23/reference/generated/numpy.ndarray.clip.htmlnumpy.ndarray.conj ================== method ndarray.conj() Complex-conjugate all elements. Refer to [`numpy.conjugate`](numpy.conjugate#numpy.conjugate "numpy.conjugate") for full documentation. See also [`numpy.conjugate`](numpy.conjugate#numpy.conjugate "numpy.conjugate") equivalent function © 2005–2022 NumPy Developers Licensed under the 3-clause BSD License. <https://numpy.org/doc/1.23/reference/generated/numpy.ndarray.conj.htmlnumpy.ndarray.round =================== method ndarray.round(*decimals=0*, *out=None*) Return `a` with each element rounded to the given number of decimals. Refer to [`numpy.around`](numpy.around#numpy.around "numpy.around") for full documentation. See also [`numpy.around`](numpy.around#numpy.around "numpy.around") equivalent function © 2005–2022 NumPy Developers Licensed under the 3-clause BSD License. 
<https://numpy.org/doc/1.23/reference/generated/numpy.ndarray.round.htmlnumpy.ndarray.trace =================== method ndarray.trace(*offset=0*, *axis1=0*, *axis2=1*, *dtype=None*, *out=None*) Return the sum along diagonals of the array. Refer to [`numpy.trace`](numpy.trace#numpy.trace "numpy.trace") for full documentation. See also [`numpy.trace`](numpy.trace#numpy.trace "numpy.trace") equivalent function © 2005–2022 NumPy Developers Licensed under the 3-clause BSD License. <https://numpy.org/doc/1.23/reference/generated/numpy.ndarray.trace.htmlnumpy.ndarray.sum ================= method ndarray.sum(*axis=None*, *dtype=None*, *out=None*, *keepdims=False*, *initial=0*, *where=True*) Return the sum of the array elements over the given axis. Refer to [`numpy.sum`](numpy.sum#numpy.sum "numpy.sum") for full documentation. See also [`numpy.sum`](numpy.sum#numpy.sum "numpy.sum") equivalent function © 2005–2022 NumPy Developers Licensed under the 3-clause BSD License. <https://numpy.org/doc/1.23/reference/generated/numpy.ndarray.sum.htmlnumpy.ndarray.cumsum ==================== method ndarray.cumsum(*axis=None*, *dtype=None*, *out=None*) Return the cumulative sum of the elements along the given axis. Refer to [`numpy.cumsum`](numpy.cumsum#numpy.cumsum "numpy.cumsum") for full documentation. See also [`numpy.cumsum`](numpy.cumsum#numpy.cumsum "numpy.cumsum") equivalent function © 2005–2022 NumPy Developers Licensed under the 3-clause BSD License. <https://numpy.org/doc/1.23/reference/generated/numpy.ndarray.cumsum.htmlnumpy.ndarray.mean ================== method ndarray.mean(*axis=None*, *dtype=None*, *out=None*, *keepdims=False*, ***, *where=True*) Returns the average of the array elements along given axis. Refer to [`numpy.mean`](numpy.mean#numpy.mean "numpy.mean") for full documentation. See also [`numpy.mean`](numpy.mean#numpy.mean "numpy.mean") equivalent function © 2005–2022 NumPy Developers Licensed under the 3-clause BSD License. 
<https://numpy.org/doc/1.23/reference/generated/numpy.ndarray.mean.htmlnumpy.ndarray.var ================= method ndarray.var(*axis=None*, *dtype=None*, *out=None*, *ddof=0*, *keepdims=False*, ***, *where=True*) Returns the variance of the array elements, along given axis. Refer to [`numpy.var`](numpy.var#numpy.var "numpy.var") for full documentation. See also [`numpy.var`](numpy.var#numpy.var "numpy.var") equivalent function © 2005–2022 NumPy Developers Licensed under the 3-clause BSD License. <https://numpy.org/doc/1.23/reference/generated/numpy.ndarray.var.htmlnumpy.ndarray.std ================= method ndarray.std(*axis=None*, *dtype=None*, *out=None*, *ddof=0*, *keepdims=False*, ***, *where=True*) Returns the standard deviation of the array elements along given axis. Refer to [`numpy.std`](numpy.std#numpy.std "numpy.std") for full documentation. See also [`numpy.std`](numpy.std#numpy.std "numpy.std") equivalent function © 2005–2022 NumPy Developers Licensed under the 3-clause BSD License. <https://numpy.org/doc/1.23/reference/generated/numpy.ndarray.std.htmlnumpy.ndarray.prod ================== method ndarray.prod(*axis=None*, *dtype=None*, *out=None*, *keepdims=False*, *initial=1*, *where=True*) Return the product of the array elements over the given axis Refer to [`numpy.prod`](numpy.prod#numpy.prod "numpy.prod") for full documentation. See also [`numpy.prod`](numpy.prod#numpy.prod "numpy.prod") equivalent function © 2005–2022 NumPy Developers Licensed under the 3-clause BSD License. <https://numpy.org/doc/1.23/reference/generated/numpy.ndarray.prod.htmlnumpy.ndarray.cumprod ===================== method ndarray.cumprod(*axis=None*, *dtype=None*, *out=None*) Return the cumulative product of the elements along the given axis. Refer to [`numpy.cumprod`](numpy.cumprod#numpy.cumprod "numpy.cumprod") for full documentation. 
See also [`numpy.cumprod`](numpy.cumprod#numpy.cumprod "numpy.cumprod") equivalent function © 2005–2022 NumPy Developers Licensed under the 3-clause BSD License. <https://numpy.org/doc/1.23/reference/generated/numpy.ndarray.cumprod.htmlnumpy.ndarray.all ================= method ndarray.all(*axis=None*, *out=None*, *keepdims=False*, ***, *where=True*) Returns True if all elements evaluate to True. Refer to [`numpy.all`](numpy.all#numpy.all "numpy.all") for full documentation. See also [`numpy.all`](numpy.all#numpy.all "numpy.all") equivalent function © 2005–2022 NumPy Developers Licensed under the 3-clause BSD License. <https://numpy.org/doc/1.23/reference/generated/numpy.ndarray.all.htmlnumpy.ndarray.any ================= method ndarray.any(*axis=None*, *out=None*, *keepdims=False*, ***, *where=True*) Returns True if any of the elements of `a` evaluate to True. Refer to [`numpy.any`](numpy.any#numpy.any "numpy.any") for full documentation. See also [`numpy.any`](numpy.any#numpy.any "numpy.any") equivalent function © 2005–2022 NumPy Developers Licensed under the 3-clause BSD License. <https://numpy.org/doc/1.23/reference/generated/numpy.ndarray.any.htmlnumpy.ndarray.__lt__ ======================== method ndarray.__lt__(*value*, */*) Return self<value. © 2005–2022 NumPy Developers Licensed under the 3-clause BSD License. <https://numpy.org/doc/1.23/reference/generated/numpy.ndarray.__lt__.htmlnumpy.ndarray.__le__ ======================== method ndarray.__le__(*value*, */*) Return self<=value. © 2005–2022 NumPy Developers Licensed under the 3-clause BSD License. <https://numpy.org/doc/1.23/reference/generated/numpy.ndarray.__le__.htmlnumpy.ndarray.__gt__ ======================== method ndarray.__gt__(*value*, */*) Return self>value. © 2005–2022 NumPy Developers Licensed under the 3-clause BSD License. 
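The comparison methods above operate elementwise and return boolean arrays, which is why `all`/`any` are the natural way to collapse them and why `__bool__` is restricted to single-element arrays; a short sketch:

```python
import numpy as np

a = np.array([1, 2, 3])
b = np.array([3, 2, 1])

print(a < b)    # [ True False False]
print(a == b)   # [False  True False]

mask = a < b
print(mask.any(), mask.all())   # True False

# __bool__ is only defined for arrays with exactly one element;
# multi-element arrays raise instead of guessing any()/all().
try:
    bool(a)
except ValueError:
    print("ambiguous")
```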
<https://numpy.org/doc/1.23/reference/generated/numpy.ndarray.__gt__.htmlnumpy.ndarray.__ge__ ======================== method ndarray.__ge__(*value*, */*) Return self>=value. © 2005–2022 NumPy Developers Licensed under the 3-clause BSD License. <https://numpy.org/doc/1.23/reference/generated/numpy.ndarray.__ge__.htmlnumpy.ndarray.__eq__ ======================== method ndarray.__eq__(*value*, */*) Return self==value. © 2005–2022 NumPy Developers Licensed under the 3-clause BSD License. <https://numpy.org/doc/1.23/reference/generated/numpy.ndarray.__eq__.htmlnumpy.ndarray.__ne__ ======================== method ndarray.__ne__(*value*, */*) Return self!=value. © 2005–2022 NumPy Developers Licensed under the 3-clause BSD License. <https://numpy.org/doc/1.23/reference/generated/numpy.ndarray.__ne__.htmlnumpy.ndarray.__bool__ ========================== method ndarray.__bool__(*/*) True if self else False © 2005–2022 NumPy Developers Licensed under the 3-clause BSD License. <https://numpy.org/doc/1.23/reference/generated/numpy.ndarray.__bool__.htmlnumpy.ndarray.__neg__ ========================= method ndarray.__neg__(*/*) -self © 2005–2022 NumPy Developers Licensed under the 3-clause BSD License. <https://numpy.org/doc/1.23/reference/generated/numpy.ndarray.__neg__.htmlnumpy.ndarray.__pos__ ========================= method ndarray.__pos__(*/*) +self © 2005–2022 NumPy Developers Licensed under the 3-clause BSD License. <https://numpy.org/doc/1.23/reference/generated/numpy.ndarray.__pos__.htmlnumpy.ndarray.__abs__ ========================= method ndarray.__abs__(*self*) © 2005–2022 NumPy Developers Licensed under the 3-clause BSD License. <https://numpy.org/doc/1.23/reference/generated/numpy.ndarray.__abs__.htmlnumpy.ndarray.__invert__ ============================ method ndarray.__invert__(*/*) ~self © 2005–2022 NumPy Developers Licensed under the 3-clause BSD License. 
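A brief sketch of the unary operators above; note that `~` is bitwise NOT on integer arrays, so `~x == -x - 1`:

```python
import numpy as np

a = np.array([-2, 0, 3])

print(-a)      # [ 2  0 -3]  elementwise negation (__neg__)
print(abs(a))  # [2 0 3]     elementwise absolute value (__abs__)

# Bitwise NOT on integers (__invert__): ~x == -x - 1
print(~a)      # [ 1 -1 -4]
```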
<https://numpy.org/doc/1.23/reference/generated/numpy.ndarray.__invert__.htmlnumpy.ndarray.__add__ ========================= method ndarray.__add__(*value*, */*) Return self+value. © 2005–2022 NumPy Developers Licensed under the 3-clause BSD License. <https://numpy.org/doc/1.23/reference/generated/numpy.ndarray.__add__.htmlnumpy.ndarray.__sub__ ========================= method ndarray.__sub__(*value*, */*) Return self-value. © 2005–2022 NumPy Developers Licensed under the 3-clause BSD License. <https://numpy.org/doc/1.23/reference/generated/numpy.ndarray.__sub__.htmlnumpy.ndarray.__mul__ ========================= method ndarray.__mul__(*value*, */*) Return self*value. © 2005–2022 NumPy Developers Licensed under the 3-clause BSD License. <https://numpy.org/doc/1.23/reference/generated/numpy.ndarray.__mul__.htmlnumpy.ndarray.__truediv__ ============================= method ndarray.__truediv__(*value*, */*) Return self/value. © 2005–2022 NumPy Developers Licensed under the 3-clause BSD License. <https://numpy.org/doc/1.23/reference/generated/numpy.ndarray.__truediv__.htmlnumpy.ndarray.__floordiv__ ============================== method ndarray.__floordiv__(*value*, */*) Return self//value. © 2005–2022 NumPy Developers Licensed under the 3-clause BSD License. <https://numpy.org/doc/1.23/reference/generated/numpy.ndarray.__floordiv__.htmlnumpy.ndarray.__mod__ ========================= method ndarray.__mod__(*value*, */*) Return self%value. © 2005–2022 NumPy Developers Licensed under the 3-clause BSD License. <https://numpy.org/doc/1.23/reference/generated/numpy.ndarray.__mod__.htmlnumpy.ndarray.__divmod__ ============================ method ndarray.__divmod__(*value*, */*) Return divmod(self, value). © 2005–2022 NumPy Developers Licensed under the 3-clause BSD License. <https://numpy.org/doc/1.23/reference/generated/numpy.ndarray.__divmod__.htmlnumpy.ndarray.__pow__ ========================= method ndarray.__pow__(*value*, *mod=None*, */*) Return pow(self, value, mod). 
numpy.ndarray.__lshift__
========================

method ndarray.__lshift__(*value*, */*)

Return self<<value.

numpy.ndarray.__rshift__
========================

method ndarray.__rshift__(*value*, */*)

Return self>>value.

numpy.ndarray.__and__
=====================

method ndarray.__and__(*value*, */*)

Return self&value.

numpy.ndarray.__or__
====================

method ndarray.__or__(*value*, */*)

Return self|value.

numpy.ndarray.__xor__
=====================

method ndarray.__xor__(*value*, */*)

Return self^value.

numpy.ndarray.__iadd__
======================

method ndarray.__iadd__(*value*, */*)

Return self+=value.

numpy.ndarray.__isub__
======================

method ndarray.__isub__(*value*, */*)

Return self-=value.
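The binary arithmetic, shift, and bitwise hooks above all operate elementwise and honor broadcasting. A short sketch (not from the upstream reference):

```python
import numpy as np

x = np.array([6, 8, 10])

print(x + 1)         # __add__ -> [ 7  9 11]
print(x % 4)         # __mod__ -> [2 0 2]
q, r = divmod(x, 4)  # __divmod__ returns a (quotient, remainder) pair of arrays
print(q, r)          # [1 2 2] [2 0 2]
print(x << 1)        # __lshift__ -> [12 16 20]
print(x & 2)         # __and__, elementwise bitwise AND -> [2 0 2]
```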
numpy.ndarray.__imul__
======================

method ndarray.__imul__(*value*, */*)

Return self*=value.

numpy.ndarray.__itruediv__
==========================

method ndarray.__itruediv__(*value*, */*)

Return self/=value.

numpy.ndarray.__ifloordiv__
===========================

method ndarray.__ifloordiv__(*value*, */*)

Return self//=value.

numpy.ndarray.__imod__
======================

method ndarray.__imod__(*value*, */*)

Return self%=value.

numpy.ndarray.__ipow__
======================

method ndarray.__ipow__(*value*, */*)

Return self**=value.

numpy.ndarray.__ilshift__
=========================

method ndarray.__ilshift__(*value*, */*)

Return self<<=value.

numpy.ndarray.__irshift__
=========================

method ndarray.__irshift__(*value*, */*)

Return self>>=value.
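Unlike the binary forms, the in-place hooks mutate the array's existing buffer, so the result must be castable back to the array's dtype. A sketch of this behavior (not part of the upstream reference; the integer dtype assumes a default NumPy build):

```python
import numpy as np

a = np.arange(4)   # an integer array
orig = a
a += 10            # __iadd__ mutates in place; no new array is allocated
print(a, a is orig)  # [10 11 12 13] True

f = np.ones(3)
f **= 2            # __ipow__, elementwise, also in place

# In-place true division on an integer array is refused, because the
# float result cannot be cast back into the integer buffer.
try:
    a /= 2
except TypeError:
    print("cannot cast float result back to integer dtype in place")
```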
numpy.ndarray.__iand__
======================

method ndarray.__iand__(*value*, */*)

Return self&=value.

numpy.ndarray.__ior__
=====================

method ndarray.__ior__(*value*, */*)

Return self|=value.

numpy.ndarray.__ixor__
======================

method ndarray.__ixor__(*value*, */*)

Return self^=value.

numpy.ndarray.__matmul__
========================

method ndarray.__matmul__(*value*, */*)

Return self@value.

numpy.ndarray.__copy__
======================

method ndarray.__copy__()

Used if [`copy.copy`](https://docs.python.org/3/library/copy.html#copy.copy "(in Python v3.10)") is called on an array. Returns a copy of the array.

Equivalent to `a.copy(order='K')`.

numpy.ndarray.__deepcopy__
==========================

method ndarray.__deepcopy__(*memo*, */*)

Deep copy of array.

Used if [`copy.deepcopy`](https://docs.python.org/3/library/copy.html#copy.deepcopy "(in Python v3.10)") is called on an array.
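The copy hooks are what the standard-library `copy` module dispatches to, and `__matmul__` backs the `@` operator. A brief sketch (not from the upstream reference):

```python
import copy
import numpy as np

a = np.array([[1, 2], [3, 4]])

shallow = copy.copy(a)    # dispatches to a.__copy__()
deep = copy.deepcopy(a)   # dispatches to a.__deepcopy__(memo)

# For an array of numbers, __copy__ duplicates the data buffer
# (equivalent to a.copy(order='K')), so the original is untouched.
shallow[0, 0] = 99
print(a[0, 0])            # 1

# __matmul__ implements the @ operator (matrix multiplication).
print(a @ a)              # [[ 7 10] [15 22]]
```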
numpy.ndarray.__reduce__
========================

method ndarray.__reduce__()

For pickling.

numpy.ndarray.__setstate__
==========================

method ndarray.__setstate__(*state*, */*)

For unpickling.

The `state` argument must be a sequence that contains the following elements:

Parameters

**version** : int
    Optional pickle version. If omitted, defaults to 0.
**shape** : tuple
**dtype** : data-type
**isFortran** : bool
**rawdata** : string or list
    A binary string with the data (or a list if 'a' is an object array).

numpy.ndarray.__new__
=====================

method ndarray.__new__(**args*, ***kwargs*)

numpy.ndarray.__array__
=======================

method ndarray.__array__([*dtype*, ]*/*)

Returns either a new reference to self if dtype is not given, or a new array of the provided data type if dtype is different from the current dtype of the array.

numpy.ndarray.__array_wrap__
============================

method ndarray.__array_wrap__(*array*, [*context*, ]*/*)

Returns a view of [`array`](numpy.array#numpy.array "numpy.array") with the same type as self.
numpy.ndarray.__len__
=====================

method ndarray.__len__(*/*)

Return len(self).

numpy.ndarray.__getitem__
=========================

method ndarray.__getitem__(*key*, */*)

Return self[key].

numpy.ndarray.__setitem__
=========================

method ndarray.__setitem__(*key*, *value*, */*)

Set self[key] to value.

numpy.ndarray.__contains__
==========================

method ndarray.__contains__(*key*, */*)

Return key in self.

numpy.ndarray.__int__
=====================

method ndarray.__int__(*self*)

numpy.ndarray.__float__
=======================

method ndarray.__float__(*self*)

numpy.ndarray.__complex__
=========================

method ndarray.__complex__()

numpy.ndarray.__str__
=====================

method ndarray.__str__(*/*)

Return str(self).
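Together these hooks make an ndarray behave like a container and, for single-element arrays, like a number. A short sketch (not part of the upstream reference):

```python
import numpy as np

a = np.array([[10, 20], [30, 40]])

print(len(a))     # __len__: length of the first axis -> 2
print(a[1])       # __getitem__ -> [30 40]
a[0, 1] = 99      # __setitem__
print(20 in a)    # __contains__ -> False (20 was just overwritten)
print(99 in a)    # -> True

# __int__ and __float__ require a single-element array;
# a 0-d array converts cleanly on any NumPy version.
s = np.array(7)
print(int(s), float(s))  # 7 7.0
```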
numpy.ndarray.__repr__
======================

method ndarray.__repr__(*/*)

Return repr(self).

numpy.ndarray.__class_getitem__
===============================

method ndarray.__class_getitem__(*item*, */*)

Return a parametrized wrapper around the [`ndarray`](numpy.ndarray#numpy.ndarray "numpy.ndarray") type.

New in version 1.22.

Returns

**alias** : types.GenericAlias
    A parametrized [`ndarray`](numpy.ndarray#numpy.ndarray "numpy.ndarray") type.

See also

[**PEP 585**](https://peps.python.org/pep-0585/)
    Type hinting generics in standard collections.
[`numpy.typing.NDArray`](../typing#numpy.typing.NDArray "numpy.typing.NDArray")
    An ndarray alias [generic](https://docs.python.org/3/glossary.html#term-generic-type "(in Python v3.10)") w.r.t. its [`dtype.type`](numpy.dtype.type#numpy.dtype.type "numpy.dtype.type").

#### Notes

This method is only available for Python 3.9 and later.

#### Examples

```
>>> from typing import Any
>>> import numpy as np
```

```
>>> np.ndarray[Any, np.dtype[Any]]
numpy.ndarray[typing.Any, numpy.dtype[typing.Any]]
```

numpy.argpartition
==================

numpy.argpartition(*a*, *kth*, *axis=-1*, *kind='introselect'*, *order=None*)[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/core/fromnumeric.py#L766-L845)

Perform an indirect partition along the given axis using the algorithm specified by the `kind` keyword. It returns an array of indices of the same shape as `a` that index data along the given axis in partitioned order.

New in version 1.8.0.

Parameters

**a** : array_like
    Array to sort.
**kth** : int or sequence of ints
    Element index to partition by.
    The k-th element will be in its final sorted position and all smaller elements will be moved before it and all larger elements behind it. The order of all elements in the partitions is undefined. If provided with a sequence of k-th values, it will partition all of them into their sorted position at once.

    Deprecated since version 1.22.0: Passing booleans as index is deprecated.
**axis** : int or None, optional
    Axis along which to sort. The default is -1 (the last axis). If None, the flattened array is used.
**kind** : {'introselect'}, optional
    Selection algorithm. Default is 'introselect'.
**order** : str or list of str, optional
    When `a` is an array with fields defined, this argument specifies which fields to compare first, second, etc. A single field can be specified as a string, and not all fields need be specified, but unspecified fields will still be used, in the order in which they come up in the dtype, to break ties.

Returns

**index_array** : ndarray, int
    Array of indices that partition `a` along the specified axis. If `a` is one-dimensional, `a[index_array]` yields a partitioned `a`. More generally, `np.take_along_axis(a, index_array, axis)` always yields the partitioned `a`, irrespective of dimensionality.

See also

[`partition`](numpy.partition#numpy.partition "numpy.partition")
    Describes partition algorithms used.
[`ndarray.partition`](numpy.ndarray.partition#numpy.ndarray.partition "numpy.ndarray.partition")
    In-place partition.
[`argsort`](numpy.argsort#numpy.argsort "numpy.argsort")
    Full indirect sort.
[`take_along_axis`](numpy.take_along_axis#numpy.take_along_axis "numpy.take_along_axis")
    Apply `index_array` from argpartition to an array as if by calling partition.

#### Notes

See [`partition`](numpy.partition#numpy.partition "numpy.partition") for notes on the different selection algorithms.
#### Examples

One dimensional array:

```
>>> x = np.array([3, 4, 2, 1])
>>> x[np.argpartition(x, 3)]
array([2, 1, 3, 4])
>>> x[np.argpartition(x, (1, 3))]
array([1, 2, 3, 4])
```

```
>>> x = [3, 4, 2, 1]
>>> np.array(x)[np.argpartition(x, 3)]
array([2, 1, 3, 4])
```

Multi-dimensional array:

```
>>> x = np.array([[3, 4, 2], [1, 3, 1]])
>>> index_array = np.argpartition(x, kth=1, axis=-1)
>>> np.take_along_axis(x, index_array, axis=-1)  # same as np.partition(x, kth=1)
array([[2, 3, 4],
       [1, 1, 3]])
```

numpy.amax
==========

numpy.amax(*a*, *axis=None*, *out=None*, *keepdims=<no value>*, *initial=<no value>*, *where=<no value>*)[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/core/fromnumeric.py#L2677-L2794)

Return the maximum of an array or the maximum along an axis.

Parameters

**a** : array_like
    Input data.
**axis** : None or int or tuple of ints, optional
    Axis or axes along which to operate. By default, flattened input is used.

    New in version 1.7.0.

    If this is a tuple of ints, the maximum is selected over multiple axes, instead of a single axis or all the axes as before.
**out** : ndarray, optional
    Alternative output array in which to place the result. Must be of the same shape and buffer length as the expected output. See [Output type determination](../../user/basics.ufuncs#ufuncs-output-type) for more details.
**keepdims** : bool, optional
    If this is set to True, the axes which are reduced are left in the result as dimensions with size one. With this option, the result will broadcast correctly against the input array.

    If the default value is passed, then `keepdims` will not be passed through to the [`amax`](#numpy.amax "numpy.amax") method of sub-classes of [`ndarray`](numpy.ndarray#numpy.ndarray "numpy.ndarray"); however, any non-default value will be.
    If the sub-class' method does not implement `keepdims`, any exceptions will be raised.
**initial** : scalar, optional
    The minimum value of an output element. Must be present to allow computation on an empty slice. See [`reduce`](numpy.ufunc.reduce#numpy.ufunc.reduce "numpy.ufunc.reduce") for details.

    New in version 1.15.0.
**where** : array_like of bool, optional
    Elements to compare for the maximum. See [`reduce`](numpy.ufunc.reduce#numpy.ufunc.reduce "numpy.ufunc.reduce") for details.

    New in version 1.17.0.

Returns

**amax** : ndarray or scalar
    Maximum of `a`. If `axis` is None, the result is a scalar value. If `axis` is given, the result is an array of dimension `a.ndim - 1`.

See also

[`amin`](numpy.amin#numpy.amin "numpy.amin")
    The minimum value of an array along a given axis, propagating any NaNs.
[`nanmax`](numpy.nanmax#numpy.nanmax "numpy.nanmax")
    The maximum value of an array along a given axis, ignoring any NaNs.
[`maximum`](numpy.maximum#numpy.maximum "numpy.maximum")
    Element-wise maximum of two arrays, propagating any NaNs.
[`fmax`](numpy.fmax#numpy.fmax "numpy.fmax")
    Element-wise maximum of two arrays, ignoring any NaNs.
[`argmax`](numpy.argmax#numpy.argmax "numpy.argmax")
    Return the indices of the maximum values.
[`nanmin`](numpy.nanmin#numpy.nanmin "numpy.nanmin"), [`minimum`](numpy.minimum#numpy.minimum "numpy.minimum"), [`fmin`](numpy.fmin#numpy.fmin "numpy.fmin")

#### Notes

NaN values are propagated; that is, if at least one item is NaN, the corresponding max value will be NaN as well. To ignore NaN values (MATLAB behavior), please use nanmax.

Don't use [`amax`](#numpy.amax "numpy.amax") for element-wise comparison of 2 arrays; when `a.shape[0]` is 2, `maximum(a[0], a[1])` is faster than `amax(a, axis=0)`.
#### Examples

```
>>> a = np.arange(4).reshape((2,2))
>>> a
array([[0, 1],
       [2, 3]])
>>> np.amax(a)           # Maximum of the flattened array
3
>>> np.amax(a, axis=0)   # Maxima along the first axis
array([2, 3])
>>> np.amax(a, axis=1)   # Maxima along the second axis
array([1, 3])
>>> np.amax(a, where=[False, True], initial=-1, axis=0)
array([-1,  3])
>>> b = np.arange(5, dtype=float)
>>> b[2] = np.NaN
>>> np.amax(b)
nan
>>> np.amax(b, where=~np.isnan(b), initial=-1)
4.0
>>> np.nanmax(b)
4.0
```

You can use an initial value to compute the maximum of an empty slice, or to initialize it to a different value:

```
>>> np.amax([[-50], [10]], axis=-1, initial=0)
array([ 0, 10])
```

Notice that the initial value is used as one of the elements for which the maximum is determined, unlike the `default` argument of Python's built-in max function, which is only used for empty iterables.

```
>>> np.amax([5], initial=6)
6
>>> max([5], default=6)
5
```

numpy.amin
==========

numpy.amin(*a*, *axis=None*, *out=None*, *keepdims=<no value>*, *initial=<no value>*, *where=<no value>*)[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/core/fromnumeric.py#L2802-L2919)

Return the minimum of an array or the minimum along an axis.

Parameters

**a** : array_like
    Input data.
**axis** : None or int or tuple of ints, optional
    Axis or axes along which to operate. By default, flattened input is used.

    New in version 1.7.0.

    If this is a tuple of ints, the minimum is selected over multiple axes, instead of a single axis or all the axes as before.
**out** : ndarray, optional
    Alternative output array in which to place the result. Must be of the same shape and buffer length as the expected output. See [Output type determination](../../user/basics.ufuncs#ufuncs-output-type) for more details.
**keepdims** : bool, optional
    If this is set to True, the axes which are reduced are left in the result as dimensions with size one. With this option, the result will broadcast correctly against the input array.

    If the default value is passed, then `keepdims` will not be passed through to the [`amin`](#numpy.amin "numpy.amin") method of sub-classes of [`ndarray`](numpy.ndarray#numpy.ndarray "numpy.ndarray"); however, any non-default value will be. If the sub-class' method does not implement `keepdims`, any exceptions will be raised.
**initial** : scalar, optional
    The maximum value of an output element. Must be present to allow computation on an empty slice. See [`reduce`](numpy.ufunc.reduce#numpy.ufunc.reduce "numpy.ufunc.reduce") for details.

    New in version 1.15.0.
**where** : array_like of bool, optional
    Elements to compare for the minimum. See [`reduce`](numpy.ufunc.reduce#numpy.ufunc.reduce "numpy.ufunc.reduce") for details.

    New in version 1.17.0.

Returns

**amin** : ndarray or scalar
    Minimum of `a`. If `axis` is None, the result is a scalar value. If `axis` is given, the result is an array of dimension `a.ndim - 1`.

See also

[`amax`](numpy.amax#numpy.amax "numpy.amax")
    The maximum value of an array along a given axis, propagating any NaNs.
[`nanmin`](numpy.nanmin#numpy.nanmin "numpy.nanmin")
    The minimum value of an array along a given axis, ignoring any NaNs.
[`minimum`](numpy.minimum#numpy.minimum "numpy.minimum")
    Element-wise minimum of two arrays, propagating any NaNs.
[`fmin`](numpy.fmin#numpy.fmin "numpy.fmin")
    Element-wise minimum of two arrays, ignoring any NaNs.
[`argmin`](numpy.argmin#numpy.argmin "numpy.argmin")
    Return the indices of the minimum values.
[`nanmax`](numpy.nanmax#numpy.nanmax "numpy.nanmax"), [`maximum`](numpy.maximum#numpy.maximum "numpy.maximum"), [`fmax`](numpy.fmax#numpy.fmax "numpy.fmax")

#### Notes

NaN values are propagated; that is, if at least one item is NaN, the corresponding min value will be NaN as well.
To ignore NaN values (MATLAB behavior), please use nanmin.

Don't use [`amin`](#numpy.amin "numpy.amin") for element-wise comparison of 2 arrays; when `a.shape[0]` is 2, `minimum(a[0], a[1])` is faster than `amin(a, axis=0)`.

#### Examples

```
>>> a = np.arange(4).reshape((2,2))
>>> a
array([[0, 1],
       [2, 3]])
>>> np.amin(a)           # Minimum of the flattened array
0
>>> np.amin(a, axis=0)   # Minima along the first axis
array([0, 1])
>>> np.amin(a, axis=1)   # Minima along the second axis
array([0, 2])
>>> np.amin(a, where=[False, True], initial=10, axis=0)
array([10,  1])
```

```
>>> b = np.arange(5, dtype=float)
>>> b[2] = np.NaN
>>> np.amin(b)
nan
>>> np.amin(b, where=~np.isnan(b), initial=10)
0.0
>>> np.nanmin(b)
0.0
```

```
>>> np.amin([[-50], [10]], axis=-1, initial=0)
array([-50,   0])
```

Notice that the initial value is used as one of the elements for which the minimum is determined, unlike the `default` argument of Python's built-in min function, which is only used for empty iterables.

```
>>> np.amin([6], initial=5)
5
>>> min([6], default=5)
6
```

numpy.around
============

numpy.around(*a*, *decimals=0*, *out=None*)[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/core/fromnumeric.py#L3214-L3305)

Evenly round to the given number of decimals.

Parameters

**a** : array_like
    Input data.
**decimals** : int, optional
    Number of decimal places to round to (default: 0). If decimals is negative, it specifies the number of positions to the left of the decimal point.
**out** : ndarray, optional
    Alternative output array in which to place the result. It must have the same shape as the expected output, but the type of the output values will be cast if necessary. See [Output type determination](../../user/basics.ufuncs#ufuncs-output-type) for more details.
Returns

**rounded_array** : ndarray
    An array of the same type as `a`, containing the rounded values. Unless `out` was specified, a new array is created. A reference to the result is returned.

    The real and imaginary parts of complex numbers are rounded separately. The result of rounding a float is a float.

See also

[`ndarray.round`](numpy.ndarray.round#numpy.ndarray.round "numpy.ndarray.round")
    Equivalent method.
[`ceil`](numpy.ceil#numpy.ceil "numpy.ceil"), [`fix`](numpy.fix#numpy.fix "numpy.fix"), [`floor`](numpy.floor#numpy.floor "numpy.floor"), [`rint`](numpy.rint#numpy.rint "numpy.rint"), [`trunc`](numpy.trunc#numpy.trunc "numpy.trunc")

#### Notes

For values exactly halfway between rounded decimal values, NumPy rounds to the nearest even value. Thus 1.5 and 2.5 round to 2.0, and -0.5 and 0.5 round to 0.0, etc.

`np.around` uses a fast but sometimes inexact algorithm to round floating-point datatypes. For positive `decimals` it is equivalent to `np.true_divide(np.rint(a * 10**decimals), 10**decimals)`, which has error due to the inexact representation of decimal fractions in the IEEE floating point standard [[1]](#r907366b089c1-1) and errors introduced when scaling by powers of ten. For instance, note the extra "1" in the following:

```
>>> np.round(56294995342131.5, 3)
56294995342131.51
```

If your goal is to print such values with a fixed number of decimals, it is preferable to use numpy's float printing routines to limit the number of printed decimals:

```
>>> np.format_float_positional(56294995342131.5, precision=3)
'56294995342131.5'
```

The float printing routines use an accurate but much more computationally demanding algorithm to compute the number of digits after the decimal point.
Alternatively, Python's builtin [`round`](https://docs.python.org/3/library/functions.html#round "(in Python v3.10)") function uses a more accurate but slower algorithm for 64-bit floating point values:

```
>>> round(56294995342131.5, 3)
56294995342131.5
>>> np.round(16.055, 2), round(16.055, 2)  # equals 16.0549999999999997
(16.06, 16.05)
```

#### References

[1] "Lecture Notes on the Status of IEEE 754", <NAME>, <https://people.eecs.berkeley.edu/~wkahan/ieee754status/IEEE754.PDF>

#### Examples

```
>>> np.around([0.37, 1.64])
array([0., 2.])
>>> np.around([0.37, 1.64], decimals=1)
array([0.4, 1.6])
>>> np.around([.5, 1.5, 2.5, 3.5, 4.5])  # rounds to nearest even value
array([0., 2., 2., 4., 4.])
>>> np.around([1, 2, 3, 11], decimals=1)  # ndarray of ints is returned
array([ 1,  2,  3, 11])
>>> np.around([1, 2, 3, 11], decimals=-1)
array([ 0,  0,  0, 10])
```

numpy.generic.flags
===================

attribute generic.flags

The integer value of flags.

numpy.generic.shape
===================

attribute generic.shape

Tuple of array dimensions.

numpy.generic.strides
=====================

attribute generic.strides

Tuple of bytes steps in each dimension.

numpy.generic.ndim
==================

attribute generic.ndim

The number of array dimensions.
numpy.generic.data
==================

attribute generic.data

Pointer to start of data.

numpy.generic.size
==================

attribute generic.size

The number of elements in the gentype.

numpy.generic.itemsize
======================

attribute generic.itemsize

The length of one element in bytes.

numpy.generic.base
==================

attribute generic.base

Scalar attribute identical to the corresponding array attribute. Please see [`ndarray.base`](numpy.ndarray.base#numpy.ndarray.base "numpy.ndarray.base").

numpy.generic.dtype
===================

attribute generic.dtype

Get array data-descriptor.

numpy.generic.real
==================

attribute generic.real

The real part of the scalar.

numpy.generic.imag
==================

attribute generic.imag

The imaginary part of the scalar.

numpy.generic.flat
==================

attribute generic.flat

A 1-D view of the scalar.
numpy.generic.T
===============

attribute generic.T

Scalar attribute identical to the corresponding array attribute. Please see [`ndarray.T`](numpy.ndarray.t#numpy.ndarray.T "numpy.ndarray.T").

numpy.generic.__array_interface__
=================================

attribute generic.__array_interface__

Array protocol: Python side.

numpy.generic.__array_struct__
==============================

attribute generic.__array_struct__

Array protocol: struct.

numpy.generic.__array_priority__
================================

attribute generic.__array_priority__

Array priority.

numpy.generic.__array_wrap__
============================

method generic.__array_wrap__()

sc.__array_wrap__(obj): return a scalar from an array.

numpy.generic.__array__
=======================

method generic.__array__()

sc.__array__(dtype): return a 0-dim array from the scalar with the specified dtype.
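The `generic` attributes above mirror the ndarray attributes of the same name, with the degenerate values a 0-dimensional datum implies. A short sketch (not from the upstream reference; the itemsize assumes a standard 8-byte float64):

```python
import numpy as np

x = np.float64(3.5)      # a "generic" scalar

print(x.ndim)            # 0: scalars have no array dimensions
print(x.shape)           # (): empty dimension tuple
print(x.size)            # 1: exactly one element
print(x.itemsize)        # 8: bytes per float64 element
print(x.real, x.imag)    # 3.5 0.0

# .flat gives a 1-D view of the scalar, mirroring ndarray.flat.
print(x.flat[0])         # 3.5
```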
numpy.generic.squeeze
=====================

method generic.squeeze()

Scalar method identical to the corresponding array attribute. Please see [`ndarray.squeeze`](numpy.ndarray.squeeze#numpy.ndarray.squeeze "numpy.ndarray.squeeze").

numpy.generic.byteswap
======================

method generic.byteswap()

Scalar method identical to the corresponding array attribute. Please see [`ndarray.byteswap`](numpy.ndarray.byteswap#numpy.ndarray.byteswap "numpy.ndarray.byteswap").

numpy.generic.__reduce__
========================

method generic.__reduce__()

Helper for pickle.

numpy.generic.__setstate__
==========================

method generic.__setstate__()

numpy.generic.setflags
======================

method generic.setflags()

Scalar method identical to the corresponding array attribute. Please see [`ndarray.setflags`](numpy.ndarray.setflags#numpy.ndarray.setflags "numpy.ndarray.setflags").

numpy.number.__class_getitem__
==============================

method number.__class_getitem__(*item*, */*)

Return a parametrized wrapper around the [`number`](../arrays.scalars#numpy.number "numpy.number") type.

New in version 1.22.
Returns

**alias** : types.GenericAlias
    A parametrized [`number`](../arrays.scalars#numpy.number "numpy.number") type.

See also

[**PEP 585**](https://peps.python.org/pep-0585/)
    Type hinting generics in standard collections.

#### Notes

This method is only available for Python 3.9 and later.

#### Examples

```
>>> from typing import Any
>>> import numpy as np
```

```
>>> np.signedinteger[Any]
numpy.signedinteger[typing.Any]
```

numpy.format_float_positional
=============================

numpy.format_float_positional(*x*, *precision=None*, *unique=True*, *fractional=True*, *trim='k'*, *sign=False*, *pad_left=None*, *pad_right=None*, *min_digits=None*)[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/core/arrayprint.py#L1130-L1219)

Format a floating-point scalar as a decimal string in positional notation.

Provides control over rounding, trimming and padding. Uses and assumes IEEE unbiased rounding. Uses the "Dragon4" algorithm.

Parameters

**x** : python float or numpy floating scalar
    Value to format.
**precision** : non-negative integer or None, optional
    Maximum number of digits to print. May be None if `unique` is `True`, but must be an integer if unique is `False`.
**unique** : boolean, optional
    If `True`, use a digit-generation strategy which gives the shortest representation which uniquely identifies the floating-point number from other values of the same type, by judicious rounding. If `precision` is given, fewer digits than necessary can be printed, or if `min_digits` is given, more can be printed, in which cases the last digit is rounded with unbiased rounding.
If `False`, digits are generated as if printing an infinite-precision value and stopping after `precision` digits, rounding the remaining value with unbiased rounding **fractional**boolean, optional If `True`, the cutoffs of `precision` and `min_digits` refer to the total number of digits after the decimal point, including leading zeros. If `False`, `precision` and `min_digits` refer to the total number of significant digits, before or after the decimal point, ignoring leading zeros. **trim**one of ‘k’, ‘.’, ‘0’, ‘-’, optional Controls post-processing trimming of trailing digits, as follows: * ‘k’ : keep trailing zeros, keep decimal point (no trimming) * ‘.’ : trim all trailing zeros, leave decimal point * ‘0’ : trim all but the zero before the decimal point. Insert the zero if it is missing. * ‘-’ : trim trailing zeros and any trailing decimal point **sign**boolean, optional Whether to show the sign for positive values. **pad_left**non-negative integer, optional Pad the left side of the string with whitespace until at least that many characters are to the left of the decimal point. **pad_right**non-negative integer, optional Pad the right side of the string with whitespace until at least that many characters are to the right of the decimal point. **min_digits**non-negative integer or None, optional Minimum number of digits to print. Only has an effect if `unique=True` in which case additional digits past those necessary to uniquely identify the value may be printed, rounding the last additional digit. 
New in version 1.21.0.

Returns

**rep** string
The string representation of the floating point value

See also

[`format_float_scientific`](numpy.format_float_scientific#numpy.format_float_scientific "numpy.format_float_scientific")

#### Examples

```
>>> np.format_float_positional(np.float32(np.pi))
'3.1415927'
>>> np.format_float_positional(np.float16(np.pi))
'3.14'
>>> np.format_float_positional(np.float16(0.3))
'0.3'
>>> np.format_float_positional(np.float16(0.3), unique=False, precision=10)
'0.3000488281'
```

© 2005–2022 NumPy Developers Licensed under the 3-clause BSD License.
<https://numpy.org/doc/1.23/reference/generated/numpy.format_float_positional.html>

numpy.format_float_scientific
=============================

numpy.format_float_scientific(*x*, *precision=None*, *unique=True*, *trim='k'*, *sign=False*, *pad_left=None*, *exp_digits=None*, *min_digits=None*)[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/core/arrayprint.py#L1050-L1127)

Format a floating-point scalar as a decimal string in scientific notation. Provides control over rounding, trimming and padding. Uses and assumes IEEE unbiased rounding. Uses the “Dragon4” algorithm.

Parameters

**x** python float or numpy floating scalar
Value to format.

**precision** non-negative integer or None, optional
Maximum number of digits to print. May be None if [`unique`](numpy.unique#numpy.unique "numpy.unique") is `True`, but must be an integer if unique is `False`.

**unique** boolean, optional
If `True`, use a digit-generation strategy which gives the shortest representation which uniquely identifies the floating-point number from other values of the same type, by judicious rounding. If `precision` is given fewer digits than necessary can be printed. If `min_digits` is given more can be printed, in which cases the last digit is rounded with unbiased rounding.
If `False`, digits are generated as if printing an infinite-precision value and stopping after `precision` digits, rounding the remaining value with unbiased rounding.

**trim** one of ‘k’, ‘.’, ‘0’, ‘-’, optional
Controls post-processing trimming of trailing digits, as follows:

* ‘k’ : keep trailing zeros, keep decimal point (no trimming)
* ‘.’ : trim all trailing zeros, leave decimal point
* ‘0’ : trim all but the zero before the decimal point. Insert the zero if it is missing.
* ‘-’ : trim trailing zeros and any trailing decimal point

**sign** boolean, optional
Whether to show the sign for positive values.

**pad_left** non-negative integer, optional
Pad the left side of the string with whitespace until at least that many characters are to the left of the decimal point.

**exp_digits** non-negative integer, optional
Pad the exponent with zeros until it contains at least this many digits. If omitted, the exponent will be at least 2 digits.

**min_digits** non-negative integer or None, optional
Minimum number of digits to print. This only has an effect for `unique=True`. In that case more digits than necessary to uniquely identify the value may be printed and rounded unbiased.

New in version 1.21.0.

Returns

**rep** string
The string representation of the floating point value

See also

[`format_float_positional`](numpy.format_float_positional#numpy.format_float_positional "numpy.format_float_positional")

#### Examples

```
>>> np.format_float_scientific(np.float32(np.pi))
'3.1415927e+00'
>>> s = np.float32(1.23e24)
>>> np.format_float_scientific(s, unique=False, precision=15)
'1.230000071797338e+24'
>>> np.format_float_scientific(s, exp_digits=4)
'1.23e+0024'
```

© 2005–2022 NumPy Developers Licensed under the 3-clause BSD License.
<https://numpy.org/doc/1.23/reference/generated/numpy.format_float_scientific.html>

numpy.dtype.type
================

attribute

dtype.type *= None*

© 2005–2022 NumPy Developers Licensed under the 3-clause BSD License.
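Although the class-level default shown above is `None`, on any concrete dtype instance the `type` attribute holds the scalar class corresponding to that dtype. A short illustration:

```python
import numpy as np

# The `type` attribute of a concrete dtype is the corresponding scalar class.
dt = np.dtype('float64')
print(dt.type)                             # <class 'numpy.float64'>
print(dt.type is np.float64)               # True

# The same holds for integer dtypes.
print(np.dtype('int32').type is np.int32)  # True
```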
<https://numpy.org/doc/1.23/reference/generated/numpy.dtype.type.htmlnumpy.dtype.kind ================ attribute dtype.kind A character code (one of ‘biufcmMOSUV’) identifying the general kind of data. | | | | --- | --- | | b | boolean | | i | signed integer | | u | unsigned integer | | f | floating-point | | c | complex floating-point | | m | timedelta | | M | datetime | | O | object | | S | (byte-)string | | U | Unicode | | V | void | #### Examples ``` >>> dt = np.dtype('i4') >>> dt.kind 'i' >>> dt = np.dtype('f8') >>> dt.kind 'f' >>> dt = np.dtype([('field1', 'f8')]) >>> dt.kind 'V' ``` © 2005–2022 NumPy Developers Licensed under the 3-clause BSD License. <https://numpy.org/doc/1.23/reference/generated/numpy.dtype.kind.htmlnumpy.dtype.char ================ attribute dtype.char A unique character code for each of the 21 different built-in types. #### Examples ``` >>> x = np.dtype(float) >>> x.char 'd' ``` © 2005–2022 NumPy Developers Licensed under the 3-clause BSD License. <https://numpy.org/doc/1.23/reference/generated/numpy.dtype.char.htmlnumpy.dtype.num =============== attribute dtype.num A unique number for each of the 21 different built-in types. These are roughly ordered from least-to-most precision. #### Examples ``` >>> dt = np.dtype(str) >>> dt.num 19 ``` ``` >>> dt = np.dtype(float) >>> dt.num 12 ``` © 2005–2022 NumPy Developers Licensed under the 3-clause BSD License. <https://numpy.org/doc/1.23/reference/generated/numpy.dtype.num.htmlnumpy.dtype.str =============== attribute dtype.str The array-protocol typestring of this data-type object. © 2005–2022 NumPy Developers Licensed under the 3-clause BSD License. <https://numpy.org/doc/1.23/reference/generated/numpy.dtype.str.htmlnumpy.dtype.name ================ attribute dtype.name A bit-width name for this data-type. Un-sized flexible data-type objects do not have this attribute. 
#### Examples ``` >>> x = np.dtype(float) >>> x.name 'float64' >>> x = np.dtype([('a', np.int32, 8), ('b', np.float64, 6)]) >>> x.name 'void640' ``` © 2005–2022 NumPy Developers Licensed under the 3-clause BSD License. <https://numpy.org/doc/1.23/reference/generated/numpy.dtype.name.htmlnumpy.dtype.itemsize ==================== attribute dtype.itemsize The element size of this data-type object. For 18 of the 21 types this number is fixed by the data-type. For the flexible data-types, this number can be anything. #### Examples ``` >>> arr = np.array([[1, 2], [3, 4]]) >>> arr.dtype dtype('int64') >>> arr.itemsize 8 ``` ``` >>> dt = np.dtype([('name', np.str_, 16), ('grades', np.float64, (2,))]) >>> dt.itemsize 80 ``` © 2005–2022 NumPy Developers Licensed under the 3-clause BSD License. <https://numpy.org/doc/1.23/reference/generated/numpy.dtype.itemsize.htmlnumpy.dtype.byteorder ===================== attribute dtype.byteorder A character indicating the byte-order of this data-type object. One of: | | | | --- | --- | | ‘=’ | native | | ‘<’ | little-endian | | ‘>’ | big-endian | | ‘|’ | not applicable | All built-in data-type objects have byteorder either ‘=’ or ‘|’. #### Examples ``` >>> dt = np.dtype('i2') >>> dt.byteorder '=' >>> # endian is not relevant for 8 bit numbers >>> np.dtype('i1').byteorder '|' >>> # or ASCII strings >>> np.dtype('S2').byteorder '|' >>> # Even if specific code is given, and it is native >>> # '=' is the byteorder >>> import sys >>> sys_is_le = sys.byteorder == 'little' >>> native_code = sys_is_le and '<' or '>' >>> swapped_code = sys_is_le and '>' or '<' >>> dt = np.dtype(native_code + 'i2') >>> dt.byteorder '=' >>> # Swapped code shows up as itself >>> dt = np.dtype(swapped_code + 'i2') >>> dt.byteorder == swapped_code True ``` © 2005–2022 NumPy Developers Licensed under the 3-clause BSD License. 
<https://numpy.org/doc/1.23/reference/generated/numpy.dtype.byteorder.htmlnumpy.dtype.fields ================== attribute dtype.fields Dictionary of named fields defined for this data type, or `None`. The dictionary is indexed by keys that are the names of the fields. Each entry in the dictionary is a tuple fully describing the field: ``` (dtype, offset[, title]) ``` Offset is limited to C int, which is signed and usually 32 bits. If present, the optional title can be any object (if it is a string or unicode then it will also be a key in the fields dictionary, otherwise it’s meta-data). Notice also that the first two elements of the tuple can be passed directly as arguments to the `ndarray.getfield` and `ndarray.setfield` methods. See also [`ndarray.getfield`](numpy.ndarray.getfield#numpy.ndarray.getfield "numpy.ndarray.getfield"), [`ndarray.setfield`](numpy.ndarray.setfield#numpy.ndarray.setfield "numpy.ndarray.setfield") #### Examples ``` >>> dt = np.dtype([('name', np.str_, 16), ('grades', np.float64, (2,))]) >>> print(dt.fields) {'grades': (dtype(('float64',(2,))), 16), 'name': (dtype('|S16'), 0)} ``` © 2005–2022 NumPy Developers Licensed under the 3-clause BSD License. <https://numpy.org/doc/1.23/reference/generated/numpy.dtype.fields.htmlnumpy.dtype.names ================= attribute dtype.names Ordered list of field names, or `None` if there are no fields. The names are ordered according to increasing byte offset. This can be used, for example, to walk through all of the named fields in offset order. #### Examples ``` >>> dt = np.dtype([('name', np.str_, 16), ('grades', np.float64, (2,))]) >>> dt.names ('name', 'grades') ``` © 2005–2022 NumPy Developers Licensed under the 3-clause BSD License. <https://numpy.org/doc/1.23/reference/generated/numpy.dtype.names.htmlnumpy.dtype.subdtype ==================== attribute dtype.subdtype Tuple `(item_dtype, shape)` if this [`dtype`](numpy.dtype#numpy.dtype "numpy.dtype") describes a sub-array, and None otherwise. 
The *shape* is the fixed shape of the sub-array described by this data type, and *item_dtype* the data type of the array.

If a field whose dtype object has this attribute is retrieved, then the extra dimensions implied by *shape* are tacked on to the end of the retrieved array.

See also

[`dtype.base`](numpy.dtype.base#numpy.dtype.base "numpy.dtype.base")

#### Examples

```
>>> x = numpy.dtype('8f')
>>> x.subdtype
(dtype('float32'), (8,))
```

```
>>> x = numpy.dtype('i2')
>>> x.subdtype
>>>
```

© 2005–2022 NumPy Developers Licensed under the 3-clause BSD License.
<https://numpy.org/doc/1.23/reference/generated/numpy.dtype.subdtype.html>

numpy.dtype.shape
=================

attribute

dtype.shape

Shape tuple of the sub-array if this data type describes a sub-array, and `()` otherwise.

#### Examples

```
>>> dt = np.dtype(('i4', 4))
>>> dt.shape
(4,)
```

```
>>> dt = np.dtype(('i4', (2, 3)))
>>> dt.shape
(2, 3)
```

© 2005–2022 NumPy Developers Licensed under the 3-clause BSD License.
<https://numpy.org/doc/1.23/reference/generated/numpy.dtype.shape.html>

numpy.dtype.hasobject
=====================

attribute

dtype.hasobject

Boolean indicating whether this dtype contains any reference-counted objects in any fields or sub-dtypes.

Recall that what is actually in the ndarray memory representing the Python object is the memory address of that object (a pointer). Special handling may be required, and this attribute is useful for distinguishing data types that may contain arbitrary Python objects and data-types that won’t.

© 2005–2022 NumPy Developers Licensed under the 3-clause BSD License.
<https://numpy.org/doc/1.23/reference/generated/numpy.dtype.hasobject.html>

numpy.dtype.flags
=================

attribute

dtype.flags

Bit-flags describing how this data type is to be interpreted. Bit-masks are in `numpy.core.multiarray` as the constants `ITEM_HASOBJECT`, `LIST_PICKLE`, `ITEM_IS_POINTER`, `NEEDS_INIT`, `NEEDS_PYAPI`, `USE_GETITEM`, `USE_SETITEM`.
A full explanation of these flags is in C-API documentation; they are largely useful for user-defined data-types. The following example demonstrates that operations on this particular dtype requires Python C-API. #### Examples ``` >>> x = np.dtype([('a', np.int32, 8), ('b', np.float64, 6)]) >>> x.flags 16 >>> np.core.multiarray.NEEDS_PYAPI 16 ``` © 2005–2022 NumPy Developers Licensed under the 3-clause BSD License. <https://numpy.org/doc/1.23/reference/generated/numpy.dtype.flags.htmlnumpy.dtype.isbuiltin ===================== attribute dtype.isbuiltin Integer indicating how this dtype relates to the built-in dtypes. Read-only. | | | | --- | --- | | 0 | if this is a structured array type, with fields | | 1 | if this is a dtype compiled into numpy (such as ints, floats etc) | | 2 | if the dtype is for a user-defined numpy type A user-defined type uses the numpy C-API machinery to extend numpy to handle a new array type. See [User-defined data-types](../../user/c-info.beyond-basics#user-user-defined-data-types) in the NumPy manual. | #### Examples ``` >>> dt = np.dtype('i2') >>> dt.isbuiltin 1 >>> dt = np.dtype('f8') >>> dt.isbuiltin 1 >>> dt = np.dtype([('field1', 'f8')]) >>> dt.isbuiltin 0 ``` © 2005–2022 NumPy Developers Licensed under the 3-clause BSD License. <https://numpy.org/doc/1.23/reference/generated/numpy.dtype.isbuiltin.htmlnumpy.dtype.isnative ==================== attribute dtype.isnative Boolean indicating whether the byte order of this dtype is native to the platform. © 2005–2022 NumPy Developers Licensed under the 3-clause BSD License. <https://numpy.org/doc/1.23/reference/generated/numpy.dtype.isnative.htmlnumpy.dtype.descr ================= attribute dtype.descr `__array_interface__` description of the data-type. The format is that required by the ‘descr’ key in the `__array_interface__` attribute. 
Warning: This attribute exists specifically for `__array_interface__`, and passing it directly to `np.dtype` will not accurately reconstruct some dtypes (e.g., scalar and subarray dtypes). #### Examples ``` >>> x = np.dtype(float) >>> x.descr [('', '<f8')] ``` ``` >>> dt = np.dtype([('name', np.str_, 16), ('grades', np.float64, (2,))]) >>> dt.descr [('name', '<U16'), ('grades', '<f8', (2,))] ``` © 2005–2022 NumPy Developers Licensed under the 3-clause BSD License. <https://numpy.org/doc/1.23/reference/generated/numpy.dtype.descr.htmlnumpy.dtype.alignment ===================== attribute dtype.alignment The required alignment (bytes) of this data-type according to the compiler. More information is available in the C-API section of the manual. #### Examples ``` >>> x = np.dtype('i4') >>> x.alignment 4 ``` ``` >>> x = np.dtype(float) >>> x.alignment 8 ``` © 2005–2022 NumPy Developers Licensed under the 3-clause BSD License. <https://numpy.org/doc/1.23/reference/generated/numpy.dtype.alignment.htmlnumpy.dtype.base ================ attribute dtype.base Returns dtype for the base element of the subarrays, regardless of their dimension or shape. See also [`dtype.subdtype`](numpy.dtype.subdtype#numpy.dtype.subdtype "numpy.dtype.subdtype") #### Examples ``` >>> x = numpy.dtype('8f') >>> x.base dtype('float32') ``` ``` >>> x = numpy.dtype('i2') >>> x.base dtype('int16') ``` © 2005–2022 NumPy Developers Licensed under the 3-clause BSD License. <https://numpy.org/doc/1.23/reference/generated/numpy.dtype.base.htmlnumpy.dtype.metadata ==================== attribute dtype.metadata Either `None` or a readonly dictionary of metadata (mappingproxy). The metadata field can be set using any dictionary at data-type creation. NumPy currently has no uniform approach to propagating metadata; although some array operations preserve it, there is no guarantee that others will. Warning Although used in certain projects, this feature was long undocumented and is not well supported. 
Some aspects of metadata propagation are expected to change in the future. #### Examples ``` >>> dt = np.dtype(float, metadata={"key": "value"}) >>> dt.metadata["key"] 'value' >>> arr = np.array([1, 2, 3], dtype=dt) >>> arr.dtype.metadata mappingproxy({'key': 'value'}) ``` Adding arrays with identical datatypes currently preserves the metadata: ``` >>> (arr + arr).dtype.metadata mappingproxy({'key': 'value'}) ``` But if the arrays have different dtype metadata, the metadata may be dropped: ``` >>> dt2 = np.dtype(float, metadata={"key2": "value2"}) >>> arr2 = np.array([3, 2, 1], dtype=dt2) >>> (arr + arr2).dtype.metadata is None True # The metadata field is cleared so None is returned ``` © 2005–2022 NumPy Developers Licensed under the 3-clause BSD License. <https://numpy.org/doc/1.23/reference/generated/numpy.dtype.metadata.htmlnumpy.dtype.newbyteorder ======================== method dtype.newbyteorder(*new_order='S'*, */*) Return a new dtype with a different byte order. Changes are also made in all fields and sub-arrays of the data type. Parameters **new_order**string, optional Byte order to force; a value from the byte order specifications below. The default value (‘S’) results in swapping the current byte order. `new_order` codes can be any of: * ‘S’ - swap dtype from current to opposite endian * {‘<’, ‘little’} - little endian * {‘>’, ‘big’} - big endian * {‘=’, ‘native’} - native order * {‘|’, ‘I’} - ignore (no change to byte order) Returns **new_dtype**dtype New dtype object with the given change to the byte order. #### Notes Changes are also made in all fields and sub-arrays of the data type. 
#### Examples ``` >>> import sys >>> sys_is_le = sys.byteorder == 'little' >>> native_code = sys_is_le and '<' or '>' >>> swapped_code = sys_is_le and '>' or '<' >>> native_dt = np.dtype(native_code+'i2') >>> swapped_dt = np.dtype(swapped_code+'i2') >>> native_dt.newbyteorder('S') == swapped_dt True >>> native_dt.newbyteorder() == swapped_dt True >>> native_dt == swapped_dt.newbyteorder('S') True >>> native_dt == swapped_dt.newbyteorder('=') True >>> native_dt == swapped_dt.newbyteorder('N') True >>> native_dt == native_dt.newbyteorder('|') True >>> np.dtype('<i2') == native_dt.newbyteorder('<') True >>> np.dtype('<i2') == native_dt.newbyteorder('L') True >>> np.dtype('>i2') == native_dt.newbyteorder('>') True >>> np.dtype('>i2') == native_dt.newbyteorder('B') True ``` © 2005–2022 NumPy Developers Licensed under the 3-clause BSD License. <https://numpy.org/doc/1.23/reference/generated/numpy.dtype.newbyteorder.htmlnumpy.dtype.__reduce__ ========================== method dtype.__reduce__() Helper for pickle. © 2005–2022 NumPy Developers Licensed under the 3-clause BSD License. <https://numpy.org/doc/1.23/reference/generated/numpy.dtype.__reduce__.htmlnumpy.dtype.__setstate__ ============================ method dtype.__setstate__() © 2005–2022 NumPy Developers Licensed under the 3-clause BSD License. <https://numpy.org/doc/1.23/reference/generated/numpy.dtype.__setstate__.htmlnumpy.dtype.__class_getitem__ ================================== method dtype.__class_getitem__(*item*, */*) Return a parametrized wrapper around the [`dtype`](numpy.dtype#numpy.dtype "numpy.dtype") type. New in version 1.22. Returns **alias**types.GenericAlias A parametrized [`dtype`](numpy.dtype#numpy.dtype "numpy.dtype") type. See also [**PEP 585**](https://peps.python.org/pep-0585/) Type hinting generics in standard collections. #### Notes This method is only available for python 3.9 and later. 
#### Examples ``` >>> import numpy as np ``` ``` >>> np.dtype[np.int64] numpy.dtype[numpy.int64] ``` © 2005–2022 NumPy Developers Licensed under the 3-clause BSD License. <https://numpy.org/doc/1.23/reference/generated/numpy.dtype.__class_getitem__.htmlnumpy.dtype.__ge__ ====================== method dtype.__ge__(*value*, */*) Return self>=value. © 2005–2022 NumPy Developers Licensed under the 3-clause BSD License. <https://numpy.org/doc/1.23/reference/generated/numpy.dtype.__ge__.htmlnumpy.dtype.__gt__ ====================== method dtype.__gt__(*value*, */*) Return self>value. © 2005–2022 NumPy Developers Licensed under the 3-clause BSD License. <https://numpy.org/doc/1.23/reference/generated/numpy.dtype.__gt__.htmlnumpy.dtype.__le__ ====================== method dtype.__le__(*value*, */*) Return self<=value. © 2005–2022 NumPy Developers Licensed under the 3-clause BSD License. <https://numpy.org/doc/1.23/reference/generated/numpy.dtype.__le__.htmlnumpy.dtype.__lt__ ====================== method dtype.__lt__(*value*, */*) Return self<value. © 2005–2022 NumPy Developers Licensed under the 3-clause BSD License. <https://numpy.org/doc/1.23/reference/generated/numpy.dtype.__lt__.htmlnumpy.s_ ========= numpy.s_*=<numpy.lib.index_tricks.IndexExpression object>* A nicer way to build up index tuples for arrays. Note Use one of the two predefined instances `index_exp` or [`s_`](#numpy.s_ "numpy.s_") rather than directly using `IndexExpression`. For any index combination, including slicing and axis insertion, `a[indices]` is the same as `a[np.index_exp[indices]]` for any array `a`. However, `np.index_exp[indices]` can be used anywhere in Python code and returns a tuple of slice objects that can be used in the construction of complex index expressions. Parameters **maketuple**bool If True, always returns a tuple. See also `index_exp` Predefined instance that always returns a tuple: `index_exp = IndexExpression(maketuple=True)`. 
[`s_`](#numpy.s_ "numpy.s_") Predefined instance without tuple conversion: `s_ = IndexExpression(maketuple=False)`. #### Notes You can do all this with `slice()` plus a few special objects, but there’s a lot to remember and this version is simpler because it uses the standard array indexing syntax. #### Examples ``` >>> np.s_[2::2] slice(2, None, 2) >>> np.index_exp[2::2] (slice(2, None, 2),) ``` ``` >>> np.array([0, 1, 2, 3, 4])[np.s_[2::2]] array([2, 4]) ``` © 2005–2022 NumPy Developers Licensed under the 3-clause BSD License. <https://numpy.org/doc/1.23/reference/generated/numpy.s_.htmlnumpy.ravel_multi_index ========================= numpy.ravel_multi_index(*multi_index*, *dims*, *mode='raise'*, *order='C'*) Converts a tuple of index arrays into an array of flat indices, applying boundary modes to the multi-index. Parameters **multi_index**tuple of array_like A tuple of integer arrays, one array for each dimension. **dims**tuple of ints The shape of array into which the indices from `multi_index` apply. **mode**{‘raise’, ‘wrap’, ‘clip’}, optional Specifies how out-of-bounds indices are handled. Can specify either one mode or a tuple of modes, one mode per index. * ‘raise’ – raise an error (default) * ‘wrap’ – wrap around * ‘clip’ – clip to the range In ‘clip’ mode, a negative index which would normally wrap will clip to 0 instead. **order**{‘C’, ‘F’}, optional Determines whether the multi-index should be viewed as indexing in row-major (C-style) or column-major (Fortran-style) order. Returns **raveled_indices**ndarray An array of indices into the flattened version of an array of dimensions `dims`. See also [`unravel_index`](numpy.unravel_index#numpy.unravel_index "numpy.unravel_index") #### Notes New in version 1.6.0. 
#### Examples ``` >>> arr = np.array([[3,6,6],[4,5,1]]) >>> np.ravel_multi_index(arr, (7,6)) array([22, 41, 37]) >>> np.ravel_multi_index(arr, (7,6), order='F') array([31, 41, 13]) >>> np.ravel_multi_index(arr, (4,6), mode='clip') array([22, 23, 19]) >>> np.ravel_multi_index(arr, (4,4), mode=('clip','wrap')) array([12, 13, 13]) ``` ``` >>> np.ravel_multi_index((3,1,4,1), (6,7,8,9)) 1621 ``` © 2005–2022 NumPy Developers Licensed under the 3-clause BSD License. <https://numpy.org/doc/1.23/reference/generated/numpy.ravel_multi_index.htmlnumpy.unravel_index ==================== numpy.unravel_index(*indices*, *shape*, *order='C'*) Converts a flat index or array of flat indices into a tuple of coordinate arrays. Parameters **indices**array_like An integer array whose elements are indices into the flattened version of an array of dimensions `shape`. Before version 1.6.0, this function accepted just one index value. **shape**tuple of ints The shape of the array to use for unraveling `indices`. Changed in version 1.16.0: Renamed from `dims` to `shape`. **order**{‘C’, ‘F’}, optional Determines whether the indices should be viewed as indexing in row-major (C-style) or column-major (Fortran-style) order. New in version 1.6.0. Returns **unraveled_coords**tuple of ndarray Each array in the tuple has the same shape as the `indices` array. See also [`ravel_multi_index`](numpy.ravel_multi_index#numpy.ravel_multi_index "numpy.ravel_multi_index") #### Examples ``` >>> np.unravel_index([22, 41, 37], (7,6)) (array([3, 6, 6]), array([4, 5, 1])) >>> np.unravel_index([31, 41, 13], (7,6), order='F') (array([3, 6, 6]), array([4, 5, 1])) ``` ``` >>> np.unravel_index(1621, (6,7,8,9)) (3, 1, 4, 1) ``` © 2005–2022 NumPy Developers Licensed under the 3-clause BSD License. 
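For in-bounds indices and a matching `order`, `ravel_multi_index` and `unravel_index` are inverses of one another, as the following round trip shows:

```python
import numpy as np

shape = (6, 7, 8, 9)

# Flat index -> multi-index -> flat index round trip.
coords = np.unravel_index(1621, shape)
print(tuple(int(c) for c in coords))        # (3, 1, 4, 1)
flat = int(np.ravel_multi_index(coords, shape))
print(flat)                                 # 1621

# The same holds element-wise for arrays of flat indices.
flat_arr = np.array([22, 41, 37])
multi = np.unravel_index(flat_arr, (7, 6))
print(np.ravel_multi_index(multi, (7, 6)))  # [22 41 37]
```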
<https://numpy.org/doc/1.23/reference/generated/numpy.unravel_index.htmlnumpy.diag_indices =================== numpy.diag_indices(*n*, *ndim=2*)[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/lib/index_tricks.py#L913-L979) Return the indices to access the main diagonal of an array. This returns a tuple of indices that can be used to access the main diagonal of an array `a` with `a.ndim >= 2` dimensions and shape (n, n, 
..., n). For `a.ndim = 2` this is the usual diagonal, for `a.ndim > 2` this is the set of indices to access `a[i, i, ..., i]` for `i = [0..n-1]`.

Parameters

**n** int
The size, along each dimension, of the arrays for which the returned indices can be used.

**ndim** int, optional
The number of dimensions.

See also

[`diag_indices_from`](numpy.diag_indices_from#numpy.diag_indices_from "numpy.diag_indices_from")

#### Notes

New in version 1.4.0.

#### Examples

Create a set of indices to access the diagonal of a (4, 4) array:

```
>>> di = np.diag_indices(4)
>>> di
(array([0, 1, 2, 3]), array([0, 1, 2, 3]))
>>> a = np.arange(16).reshape(4, 4)
>>> a
array([[ 0,  1,  2,  3],
       [ 4,  5,  6,  7],
       [ 8,  9, 10, 11],
       [12, 13, 14, 15]])
>>> a[di] = 100
>>> a
array([[100,   1,   2,   3],
       [  4, 100,   6,   7],
       [  8,   9, 100,  11],
       [ 12,  13,  14, 100]])
```

Now, we create indices to manipulate a 3-D array:

```
>>> d3 = np.diag_indices(2, 3)
>>> d3
(array([0, 1]), array([0, 1]), array([0, 1]))
```

And use it to set the diagonal of an array of zeros to 1:

```
>>> a = np.zeros((2, 2, 2), dtype=int)
>>> a[d3] = 1
>>> a
array([[[1, 0],
        [0, 0]],
       [[0, 0],
        [0, 1]]])
```

© 2005–2022 NumPy Developers Licensed under the 3-clause BSD License.
<https://numpy.org/doc/1.23/reference/generated/numpy.diag_indices.html>

numpy.diag_indices_from
=======================

numpy.diag_indices_from(*arr*)[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/lib/index_tricks.py#L986-L1014)

Return the indices to access the main diagonal of an n-dimensional array.

See [`diag_indices`](numpy.diag_indices#numpy.diag_indices "numpy.diag_indices") for full details.

Parameters

**arr** array, at least 2-D

See also

[`diag_indices`](numpy.diag_indices#numpy.diag_indices "numpy.diag_indices")

#### Notes

New in version 1.4.0.

© 2005–2022 NumPy Developers Licensed under the 3-clause BSD License.
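Since `diag_indices_from` defers to `diag_indices` for its details, a short sketch of its behavior: it simply infers `n` (and the number of dimensions) from the array it is given.

```python
import numpy as np

a = np.arange(16).reshape(4, 4)

# For a square 2-D array, diag_indices_from(a) matches diag_indices(a.shape[0]).
di = np.diag_indices_from(a)
print(di)                  # (array([0, 1, 2, 3]), array([0, 1, 2, 3]))

a[di] = 100
print(a[0, 0], a[3, 3])    # 100 100
```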
<https://numpy.org/doc/1.23/reference/generated/numpy.diag_indices_from.htmlnumpy.mask_indices =================== numpy.mask_indices(*n*, *mask_func*, *k=0*)[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/lib/twodim_base.py#L829-L897) Return the indices to access (n, n) arrays, given a masking function. Assume `mask_func` is a function that, for a square array a of size `(n, n)` with a possible offset argument `k`, when called as `mask_func(a, k)` returns a new array with zeros in certain locations (functions like [`triu`](numpy.triu#numpy.triu "numpy.triu") or [`tril`](numpy.tril#numpy.tril "numpy.tril") do precisely this). Then this function returns the indices where the non-zero values would be located. Parameters **n**int The returned indices will be valid to access arrays of shape (n, n). **mask_func**callable A function whose call signature is similar to that of [`triu`](numpy.triu#numpy.triu "numpy.triu"), [`tril`](numpy.tril#numpy.tril "numpy.tril"). That is, `mask_func(x, k)` returns a boolean array, shaped like `x`. `k` is an optional argument to the function. **k**scalar An optional argument which is passed through to `mask_func`. Functions like [`triu`](numpy.triu#numpy.triu "numpy.triu"), [`tril`](numpy.tril#numpy.tril "numpy.tril") take a second argument that is interpreted as an offset. Returns **indices**tuple of arrays. The `n` arrays of indices corresponding to the locations where `mask_func(np.ones((n, n)), k)` is True. See also [`triu`](numpy.triu#numpy.triu "numpy.triu"), [`tril`](numpy.tril#numpy.tril "numpy.tril"), [`triu_indices`](numpy.triu_indices#numpy.triu_indices "numpy.triu_indices"), [`tril_indices`](numpy.tril_indices#numpy.tril_indices "numpy.tril_indices") #### Notes New in version 1.4.0. 
#### Examples These are the indices that would allow you to access the upper triangular part of any 3x3 array: ``` >>> iu = np.mask_indices(3, np.triu) ``` For example, if `a` is a 3x3 array: ``` >>> a = np.arange(9).reshape(3, 3) >>> a array([[0, 1, 2], [3, 4, 5], [6, 7, 8]]) >>> a[iu] array([0, 1, 2, 4, 5, 8]) ``` An offset can be passed also to the masking function. This gets us the indices starting on the first diagonal right of the main one: ``` >>> iu1 = np.mask_indices(3, np.triu, 1) ``` with which we now extract only three elements: ``` >>> a[iu1] array([1, 2, 5]) ``` © 2005–2022 NumPy Developers Licensed under the 3-clause BSD License. <https://numpy.org/doc/1.23/reference/generated/numpy.mask_indices.htmlnumpy.tril_indices =================== numpy.tril_indices(*n*, *k=0*, *m=None*)[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/lib/twodim_base.py#L900-L981) Return the indices for the lower-triangle of an (n, m) array. Parameters **n**int The row dimension of the arrays for which the returned indices will be valid. **k**int, optional Diagonal offset (see [`tril`](numpy.tril#numpy.tril "numpy.tril") for details). **m**int, optional New in version 1.9.0. The column dimension of the arrays for which the returned arrays will be valid. By default `m` is taken equal to `n`. Returns **inds**tuple of arrays The indices for the triangle. The returned tuple contains two arrays, each with the indices along one dimension of the array. See also [`triu_indices`](numpy.triu_indices#numpy.triu_indices "numpy.triu_indices") similar function, for upper-triangular. [`mask_indices`](numpy.mask_indices#numpy.mask_indices "numpy.mask_indices") generic function accepting an arbitrary mask function. [`tril`](numpy.tril#numpy.tril "numpy.tril"), [`triu`](numpy.triu#numpy.triu "numpy.triu") #### Notes New in version 1.4.0. 
#### Examples Compute two different sets of indices to access 4x4 arrays, one for the lower triangular part starting at the main diagonal, and one starting two diagonals further right: ``` >>> il1 = np.tril_indices(4) >>> il2 = np.tril_indices(4, 2) ``` Here is how they can be used with a sample array: ``` >>> a = np.arange(16).reshape(4, 4) >>> a array([[ 0, 1, 2, 3], [ 4, 5, 6, 7], [ 8, 9, 10, 11], [12, 13, 14, 15]]) ``` Both for indexing: ``` >>> a[il1] array([ 0, 4, 5, ..., 13, 14, 15]) ``` And for assigning values: ``` >>> a[il1] = -1 >>> a array([[-1, 1, 2, 3], [-1, -1, 6, 7], [-1, -1, -1, 11], [-1, -1, -1, -1]]) ``` These cover almost the whole array (two diagonals right of the main one): ``` >>> a[il2] = -10 >>> a array([[-10, -10, -10, 3], [-10, -10, -10, -10], [-10, -10, -10, -10], [-10, -10, -10, -10]]) ``` © 2005–2022 NumPy Developers Licensed under the 3-clause BSD License. <https://numpy.org/doc/1.23/reference/generated/numpy.tril_indices.htmlnumpy.tril_indices_from ========================= numpy.tril_indices_from(*arr*, *k=0*)[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/lib/twodim_base.py#L988-L1014) Return the indices for the lower-triangle of arr. See [`tril_indices`](numpy.tril_indices#numpy.tril_indices "numpy.tril_indices") for full details. Parameters **arr**array_like The indices will be valid for square arrays whose dimensions are the same as arr. **k**int, optional Diagonal offset (see [`tril`](numpy.tril#numpy.tril "numpy.tril") for details). See also [`tril_indices`](numpy.tril_indices#numpy.tril_indices "numpy.tril_indices"), [`tril`](numpy.tril#numpy.tril "numpy.tril") #### Notes New in version 1.4.0. © 2005–2022 NumPy Developers Licensed under the 3-clause BSD License. 
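As `tril_indices_from` has no example of its own, a brief sketch: for a square array it is equivalent to `tril_indices(arr.shape[0])`, with the `k` offset forwarded.

```python
import numpy as np

a = np.arange(16).reshape(4, 4)

# tril_indices_from(a) matches tril_indices(a.shape[0]) for a square array.
il = np.tril_indices_from(a)
same = np.tril_indices(4)
print(all((x == y).all() for x, y in zip(il, same)))  # True

# The k offset is forwarded; k=-1 excludes the main diagonal.
il_below = np.tril_indices_from(a, k=-1)
print(a[il_below])                                    # [ 4  8  9 12 13 14]
```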
<https://numpy.org/doc/1.23/reference/generated/numpy.tril_indices_from.htmlnumpy.triu_indices =================== numpy.triu_indices(*n*, *k=0*, *m=None*)[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/lib/twodim_base.py#L1017-L1100) Return the indices for the upper-triangle of an (n, m) array. Parameters **n**int The size of the arrays for which the returned indices will be valid. **k**int, optional Diagonal offset (see [`triu`](numpy.triu#numpy.triu "numpy.triu") for details). **m**int, optional New in version 1.9.0. The column dimension of the arrays for which the returned arrays will be valid. By default `m` is taken equal to `n`. Returns **inds**tuple, shape(2) of ndarrays, shape(`n`) The indices for the triangle. The returned tuple contains two arrays, each with the indices along one dimension of the array. Can be used to slice a ndarray of shape(`n`, `n`). See also [`tril_indices`](numpy.tril_indices#numpy.tril_indices "numpy.tril_indices") similar function, for lower-triangular. [`mask_indices`](numpy.mask_indices#numpy.mask_indices "numpy.mask_indices") generic function accepting an arbitrary mask function. [`triu`](numpy.triu#numpy.triu "numpy.triu"), [`tril`](numpy.tril#numpy.tril "numpy.tril") #### Notes New in version 1.4.0. 
#### Examples Compute two different sets of indices to access 4x4 arrays, one for the upper triangular part starting at the main diagonal, and one starting two diagonals further right: ``` >>> iu1 = np.triu_indices(4) >>> iu2 = np.triu_indices(4, 2) ``` Here is how they can be used with a sample array: ``` >>> a = np.arange(16).reshape(4, 4) >>> a array([[ 0, 1, 2, 3], [ 4, 5, 6, 7], [ 8, 9, 10, 11], [12, 13, 14, 15]]) ``` Both for indexing: ``` >>> a[iu1] array([ 0, 1, 2, ..., 10, 11, 15]) ``` And for assigning values: ``` >>> a[iu1] = -1 >>> a array([[-1, -1, -1, -1], [ 4, -1, -1, -1], [ 8, 9, -1, -1], [12, 13, 14, -1]]) ``` These cover only a small part of the whole array (two diagonals right of the main one): ``` >>> a[iu2] = -10 >>> a array([[ -1, -1, -10, -10], [ 4, -1, -1, -10], [ 8, 9, -1, -1], [ 12, 13, 14, -1]]) ``` © 2005–2022 NumPy Developers Licensed under the 3-clause BSD License. <https://numpy.org/doc/1.23/reference/generated/numpy.triu_indices.htmlnumpy.triu_indices_from ========================= numpy.triu_indices_from(*arr*, *k=0*)[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/lib/twodim_base.py#L1103-L1133) Return the indices for the upper-triangle of arr. See [`triu_indices`](numpy.triu_indices#numpy.triu_indices "numpy.triu_indices") for full details. Parameters **arr**ndarray, shape(N, N) The indices will be valid for square arrays. **k**int, optional Diagonal offset (see [`triu`](numpy.triu#numpy.triu "numpy.triu") for details). Returns **triu_indices_from**tuple, shape(2) of ndarray, shape(N) Indices for the upper-triangle of `arr`. See also [`triu_indices`](numpy.triu_indices#numpy.triu_indices "numpy.triu_indices"), [`triu`](numpy.triu#numpy.triu "numpy.triu") #### Notes New in version 1.4.0. © 2005–2022 NumPy Developers Licensed under the 3-clause BSD License. 
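Mirroring the lower-triangle case, a minimal sketch of `triu_indices_from` (mine, not from the official docs):

```python
import numpy as np

a = np.arange(16).reshape(4, 4)

# Equivalent to np.triu_indices(4, 1): the shape is taken from `a`.
iu = np.triu_indices_from(a, k=1)
print(a[iu])   # strictly above the main diagonal
```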
<https://numpy.org/doc/1.23/reference/generated/numpy.triu_indices_from.htmlnumpy.take_along_axis ======================= numpy.take_along_axis(*arr*, *indices*, *axis*)[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/lib/shape_base.py#L56-L170) Take values from the input array by matching 1d index and data slices. This iterates over matching 1d slices oriented along the specified axis in the index and data arrays, and uses the former to look up values in the latter. These slices can be different lengths. Functions returning an index along an axis, like [`argsort`](numpy.argsort#numpy.argsort "numpy.argsort") and [`argpartition`](numpy.argpartition#numpy.argpartition "numpy.argpartition"), produce suitable indices for this function. New in version 1.15.0. Parameters **arr**ndarray (Ni
…, M, Nk…) Source array **indices**ndarray (Ni…, J, Nk…
) Indices to take along each 1d slice of `arr`. This must match the dimension of arr, but dimensions Ni and Nj only need to broadcast against `arr`. **axis**int The axis to take 1d slices along. If axis is None, the input array is treated as if it had first been flattened to 1d, for consistency with [`sort`](numpy.sort#numpy.sort "numpy.sort") and [`argsort`](numpy.argsort#numpy.argsort "numpy.argsort"). Returns out: ndarray (Ni
…, J, Nk…
) The indexed result. See also [`take`](numpy.take#numpy.take "numpy.take") Take along an axis, using the same indices for every 1d slice [`put_along_axis`](numpy.put_along_axis#numpy.put_along_axis "numpy.put_along_axis") Put values into the destination array by matching 1d index and data slices #### Notes This is equivalent to (but faster than) the following use of [`ndindex`](numpy.ndindex#numpy.ndindex "numpy.ndindex") and [`s_`](numpy.s_#numpy.s_ "numpy.s_"), which sets each of `ii` and `kk` to a tuple of indices: ``` Ni, M, Nk = a.shape[:axis], a.shape[axis], a.shape[axis+1:] J = indices.shape[axis] # Need not equal M out = np.empty(Ni + (J,) + Nk) for ii in ndindex(Ni): for kk in ndindex(Nk): a_1d = a [ii + s_[:,] + kk] indices_1d = indices[ii + s_[:,] + kk] out_1d = out [ii + s_[:,] + kk] for j in range(J): out_1d[j] = a_1d[indices_1d[j]] ``` Equivalently, eliminating the inner loop, the last two lines would be: ``` out_1d[:] = a_1d[indices_1d] ``` #### Examples For this sample array ``` >>> a = np.array([[10, 30, 20], [60, 40, 50]]) ``` We can sort either by using sort directly, or argsort and this function ``` >>> np.sort(a, axis=1) array([[10, 20, 30], [40, 50, 60]]) >>> ai = np.argsort(a, axis=1); ai array([[0, 2, 1], [1, 2, 0]]) >>> np.take_along_axis(a, ai, axis=1) array([[10, 20, 30], [40, 50, 60]]) ``` The same works for max and min, if you expand the dimensions: ``` >>> np.expand_dims(np.max(a, axis=1), axis=1) array([[30], [60]]) >>> ai = np.expand_dims(np.argmax(a, axis=1), axis=1) >>> ai array([[1], [0]]) >>> np.take_along_axis(a, ai, axis=1) array([[30], [60]]) ``` If we want to get the max and min at the same time, we can stack the indices first ``` >>> ai_min = np.expand_dims(np.argmin(a, axis=1), axis=1) >>> ai_max = np.expand_dims(np.argmax(a, axis=1), axis=1) >>> ai = np.concatenate([ai_min, ai_max], axis=1) >>> ai array([[0, 1], [1, 0]]) >>> np.take_along_axis(a, ai, axis=1) array([[10, 30], [40, 60]]) ``` © 2005–2022 NumPy Developers 
Licensed under the 3-clause BSD License. <https://numpy.org/doc/1.23/reference/generated/numpy.take_along_axis.htmlnumpy.diag ========== numpy.diag(*v*, *k=0*)[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/lib/twodim_base.py#L240-L309) Extract a diagonal or construct a diagonal array. See the more detailed documentation for `numpy.diagonal` if you use this function to extract a diagonal and wish to write to the resulting array; whether it returns a copy or a view depends on what version of numpy you are using. Parameters **v**array_like If `v` is a 2-D array, return a copy of its `k`-th diagonal. If `v` is a 1-D array, return a 2-D array with `v` on the `k`-th diagonal. **k**int, optional Diagonal in question. The default is 0. Use `k>0` for diagonals above the main diagonal, and `k<0` for diagonals below the main diagonal. Returns **out**ndarray The extracted diagonal or constructed diagonal array. See also [`diagonal`](numpy.diagonal#numpy.diagonal "numpy.diagonal") Return specified diagonals. [`diagflat`](numpy.diagflat#numpy.diagflat "numpy.diagflat") Create a 2-D array with the flattened input as a diagonal. [`trace`](numpy.trace#numpy.trace "numpy.trace") Sum along diagonals. [`triu`](numpy.triu#numpy.triu "numpy.triu") Upper triangle of an array. [`tril`](numpy.tril#numpy.tril "numpy.tril") Lower triangle of an array. #### Examples ``` >>> x = np.arange(9).reshape((3,3)) >>> x array([[0, 1, 2], [3, 4, 5], [6, 7, 8]]) ``` ``` >>> np.diag(x) array([0, 4, 8]) >>> np.diag(x, k=1) array([1, 5]) >>> np.diag(x, k=-1) array([3, 7]) ``` ``` >>> np.diag(np.diag(x)) array([[0, 0, 0], [0, 4, 0], [0, 0, 8]]) ``` © 2005–2022 NumPy Developers Licensed under the 3-clause BSD License. 
<https://numpy.org/doc/1.23/reference/generated/numpy.diag.htmlnumpy.select ============ numpy.select(*condlist*, *choicelist*, *default=0*)[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/lib/function_base.py#L765-L863) Return an array drawn from elements in choicelist, depending on conditions. Parameters **condlist**list of bool ndarrays The list of conditions which determine from which array in `choicelist` the output elements are taken. When multiple conditions are satisfied, the first one encountered in `condlist` is used. **choicelist**list of ndarrays The list of arrays from which the output elements are taken. It has to be of the same length as `condlist`. **default**scalar, optional The element inserted in `output` when all conditions evaluate to False. Returns **output**ndarray The output at position m is the m-th element of the array in `choicelist` where the m-th element of the corresponding array in `condlist` is True. See also [`where`](numpy.where#numpy.where "numpy.where") Return elements from one of two arrays depending on condition. [`take`](numpy.take#numpy.take "numpy.take"), [`choose`](numpy.choose#numpy.choose "numpy.choose"), [`compress`](numpy.compress#numpy.compress "numpy.compress"), [`diag`](numpy.diag#numpy.diag "numpy.diag"), [`diagonal`](numpy.diagonal#numpy.diagonal "numpy.diagonal") #### Examples ``` >>> x = np.arange(6) >>> condlist = [x<3, x>3] >>> choicelist = [x, x**2] >>> np.select(condlist, choicelist, 42) array([ 0, 1, 2, 42, 16, 25]) ``` ``` >>> condlist = [x<=4, x>3] >>> choicelist = [x, x**2] >>> np.select(condlist, choicelist, 55) array([ 0, 1, 2, 3, 4, 25]) ``` © 2005–2022 NumPy Developers Licensed under the 3-clause BSD License. 
<https://numpy.org/doc/1.23/reference/generated/numpy.select.htmlnumpy.lib.stride_tricks.sliding_window_view ============================================== lib.stride_tricks.sliding_window_view(*x*, *window_shape*, *axis=None*, ***, *subok=False*, *writeable=False*)[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/lib/stride_tricks.py#L123-L337) Create a sliding window view into the array with the given window shape. Also known as rolling or moving window, the window slides across all dimensions of the array and extracts subsets of the array at all window positions. New in version 1.20.0. Parameters **x**array_like Array to create the sliding window view from. **window_shape**int or tuple of int Size of window over each axis that takes part in the sliding window. If `axis` is not present, must have same length as the number of input array dimensions. Single integers `i` are treated as if they were the tuple `(i,)`. **axis**int or tuple of int, optional Axis or axes along which the sliding window is applied. By default, the sliding window is applied to all axes and `window_shape[i]` will refer to axis `i` of `x`. If `axis` is given as a `tuple of int`, `window_shape[i]` will refer to the axis `axis[i]` of `x`. Single integers `i` are treated as if they were the tuple `(i,)`. **subok**bool, optional If True, sub-classes will be passed-through, otherwise the returned array will be forced to be a base-class array (default). **writeable**bool, optional When true, allow writing to the returned view. The default is false, as this should be used with caution: the returned view contains the same memory location multiple times, so writing to one location will cause others to change. Returns **view**ndarray Sliding window view of the array. The sliding window dimensions are inserted at the end, and the original dimensions are trimmed as required by the size of the sliding window. 
That is, `view.shape = x_shape_trimmed + window_shape`, where `x_shape_trimmed` is `x.shape` with every entry reduced by one less than the corresponding window size. See also [`lib.stride_tricks.as_strided`](numpy.lib.stride_tricks.as_strided#numpy.lib.stride_tricks.as_strided "numpy.lib.stride_tricks.as_strided") A lower-level and less safe routine for creating arbitrary views from custom shape and strides. [`broadcast_to`](numpy.broadcast_to#numpy.broadcast_to "numpy.broadcast_to") broadcast an array to a given shape. #### Notes For many applications using a sliding window view can be convenient, but potentially very slow. Often specialized solutions exist, for example: * [`scipy.signal.fftconvolve`](https://docs.scipy.org/doc/scipy/reference/generated/scipy.signal.fftconvolve.html#scipy.signal.fftconvolve "(in SciPy v1.8.1)") * filtering functions in [`scipy.ndimage`](https://docs.scipy.org/doc/scipy/reference/ndimage.html#module-scipy.ndimage "(in SciPy v1.8.1)") * moving window functions provided by [bottleneck](https://github.com/pydata/bottleneck). As a rough estimate, a sliding window approach with an input size of `N` and a window size of `W` will scale as `O(N*W)` where frequently a special algorithm can achieve `O(N)`. That means that the sliding window variant for a window size of 100 can be a 100 times slower than a more specialized version. Nevertheless, for small window sizes, when no custom algorithm exists, or as a prototyping and developing tool, this function can be a good solution. #### Examples ``` >>> x = np.arange(6) >>> x.shape (6,) >>> v = sliding_window_view(x, 3) >>> v.shape (4, 3) >>> v array([[0, 1, 2], [1, 2, 3], [2, 3, 4], [3, 4, 5]]) ``` This also works in more dimensions, e.g. 
``` >>> i, j = np.ogrid[:3, :4] >>> x = 10*i + j >>> x.shape (3, 4) >>> x array([[ 0, 1, 2, 3], [10, 11, 12, 13], [20, 21, 22, 23]]) >>> shape = (2,2) >>> v = sliding_window_view(x, shape) >>> v.shape (2, 3, 2, 2) >>> v array([[[[ 0, 1], [10, 11]], [[ 1, 2], [11, 12]], [[ 2, 3], [12, 13]]], [[[10, 11], [20, 21]], [[11, 12], [21, 22]], [[12, 13], [22, 23]]]]) ``` The axis can be specified explicitly: ``` >>> v = sliding_window_view(x, 3, 0) >>> v.shape (1, 4, 3) >>> v array([[[ 0, 10, 20], [ 1, 11, 21], [ 2, 12, 22], [ 3, 13, 23]]]) ``` The same axis can be used several times. In that case, every use reduces the corresponding original dimension: ``` >>> v = sliding_window_view(x, (2, 3), (1, 1)) >>> v.shape (3, 1, 2, 3) >>> v array([[[[ 0, 1, 2], [ 1, 2, 3]]], [[[10, 11, 12], [11, 12, 13]]], [[[20, 21, 22], [21, 22, 23]]]]) ``` Combining with stepped slicing (`::step`), this can be used to take sliding views which skip elements: ``` >>> x = np.arange(7) >>> sliding_window_view(x, 5)[:, ::2] array([[0, 2, 4], [1, 3, 5], [2, 4, 6]]) ``` or views which move by multiple elements ``` >>> x = np.arange(7) >>> sliding_window_view(x, 3)[::2, :] array([[0, 1, 2], [2, 3, 4], [4, 5, 6]]) ``` A common application of [`sliding_window_view`](#numpy.lib.stride_tricks.sliding_window_view "numpy.lib.stride_tricks.sliding_window_view") is the calculation of running statistics. The simplest example is the [moving average](https://en.wikipedia.org/wiki/Moving_average): ``` >>> x = np.arange(6) >>> x.shape (6,) >>> v = sliding_window_view(x, 3) >>> v.shape (4, 3) >>> v array([[0, 1, 2], [1, 2, 3], [2, 3, 4], [3, 4, 5]]) >>> moving_average = v.mean(axis=-1) >>> moving_average array([1., 2., 3., 4.]) ``` Note that a sliding window approach is often **not** optimal (see Notes). © 2005–2022 NumPy Developers Licensed under the 3-clause BSD License. 
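The `writeable=False` default mentioned above deserves a concrete illustration; this sketch (not part of the official examples) shows why writing through the view is risky: the same memory appears at several window positions, so one assignment shows up everywhere that element is shared.

```python
import numpy as np
from numpy.lib.stride_tricks import sliding_window_view

x = np.arange(5)
v = sliding_window_view(x, 3, writeable=True)

v[0, 2] = 99   # writes to x[2] ...
print(x)       # ... so the source array changes ...
print(v)       # ... and so do the other windows containing x[2]
```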
<https://numpy.org/doc/1.23/reference/generated/numpy.lib.stride_tricks.sliding_window_view.htmlnumpy.place =========== numpy.place(*arr*, *mask*, *vals*)[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/lib/function_base.py#L1912-L1953) Change elements of an array based on conditional and input values. Similar to `np.copyto(arr, vals, where=mask)`, the difference is that [`place`](#numpy.place "numpy.place") uses the first N elements of `vals`, where N is the number of True values in `mask`, while [`copyto`](numpy.copyto#numpy.copyto "numpy.copyto") uses the elements where `mask` is True. Note that [`extract`](numpy.extract#numpy.extract "numpy.extract") does the exact opposite of [`place`](#numpy.place "numpy.place"). Parameters **arr**ndarray Array to put data into. **mask**array_like Boolean mask array. Must have the same size as `a`. **vals**1-D sequence Values to put into `a`. Only the first N elements are used, where N is the number of True values in `mask`. If `vals` is smaller than N, it will be repeated, and if elements of `a` are to be masked, this sequence must be non-empty. See also [`copyto`](numpy.copyto#numpy.copyto "numpy.copyto"), [`put`](numpy.put#numpy.put "numpy.put"), [`take`](numpy.take#numpy.take "numpy.take"), [`extract`](numpy.extract#numpy.extract "numpy.extract") #### Examples ``` >>> arr = np.arange(6).reshape(2, 3) >>> np.place(arr, arr>2, [44, 55]) >>> arr array([[ 0, 1, 2], [44, 55, 44]]) ``` © 2005–2022 NumPy Developers Licensed under the 3-clause BSD License. <https://numpy.org/doc/1.23/reference/generated/numpy.place.htmlnumpy.put_along_axis ====================== numpy.put_along_axis(*arr*, *indices*, *values*, *axis*)[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/lib/shape_base.py#L177-L260) Put values into the destination array by matching 1d index and data slices. 
This iterates over matching 1d slices oriented along the specified axis in the index and data arrays, and uses the former to place values into the latter. These slices can be different lengths. Functions returning an index along an axis, like [`argsort`](numpy.argsort#numpy.argsort "numpy.argsort") and [`argpartition`](numpy.argpartition#numpy.argpartition "numpy.argpartition"), produce suitable indices for this function. New in version 1.15.0. Parameters **arr**ndarray (Ni
…, M, Nk…
) Destination array. **indices**ndarray (Ni
…, J, Nk…
) Indices to change along each 1d slice of `arr`. This must match the dimension of arr, but dimensions in Ni and Nj may be 1 to broadcast against `arr`. **values**array_like (Ni
…, J, Nk…
) values to insert at those indices. Its shape and dimension are broadcast to match that of [`indices`](numpy.indices#numpy.indices "numpy.indices"). **axis**int The axis to take 1d slices along. If axis is None, the destination array is treated as if a flattened 1d view had been created of it. See also [`take_along_axis`](numpy.take_along_axis#numpy.take_along_axis "numpy.take_along_axis") Take values from the input array by matching 1d index and data slices #### Notes This is equivalent to (but faster than) the following use of [`ndindex`](numpy.ndindex#numpy.ndindex "numpy.ndindex") and [`s_`](numpy.s_#numpy.s_ "numpy.s_"), which sets each of `ii` and `kk` to a tuple of indices: ``` Ni, M, Nk = a.shape[:axis], a.shape[axis], a.shape[axis+1:] J = indices.shape[axis] # Need not equal M for ii in ndindex(Ni): for kk in ndindex(Nk): a_1d = a [ii + s_[:,] + kk] indices_1d = indices[ii + s_[:,] + kk] values_1d = values [ii + s_[:,] + kk] for j in range(J): a_1d[indices_1d[j]] = values_1d[j] ``` Equivalently, eliminating the inner loop, the last two lines would be: ``` a_1d[indices_1d] = values_1d ``` #### Examples For this sample array ``` >>> a = np.array([[10, 30, 20], [60, 40, 50]]) ``` We can replace the maximum values with: ``` >>> ai = np.expand_dims(np.argmax(a, axis=1), axis=1) >>> ai array([[1], [0]]) >>> np.put_along_axis(a, ai, 99, axis=1) >>> a array([[10, 99, 20], [99, 40, 50]]) ``` © 2005–2022 NumPy Developers Licensed under the 3-clause BSD License. <https://numpy.org/doc/1.23/reference/generated/numpy.put_along_axis.htmlnumpy.fill_diagonal ==================== numpy.fill_diagonal(*a*, *val*, *wrap=False*)[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/lib/index_tricks.py#L779-L910) Fill the main diagonal of the given array of any dimensionality. For an array `a` with `a.ndim >= 2`, the diagonal is the list of locations with indices `a[i, ..., i]` all identical. This function modifies the input array in-place, it does not return a value. 
Parameters **a**array, at least 2-D. Array whose diagonal is to be filled, it gets modified in-place. **val**scalar or array_like Value(s) to write on the diagonal. If `val` is scalar, the value is written along the diagonal. If array-like, the flattened `val` is written along the diagonal, repeating if necessary to fill all diagonal entries. **wrap**bool For tall matrices in NumPy version up to 1.6.2, the diagonal “wrapped” after N columns. You can have this behavior with this option. This affects only tall matrices. See also [`diag_indices`](numpy.diag_indices#numpy.diag_indices "numpy.diag_indices"), [`diag_indices_from`](numpy.diag_indices_from#numpy.diag_indices_from "numpy.diag_indices_from") #### Notes New in version 1.4.0. This functionality can be obtained via [`diag_indices`](numpy.diag_indices#numpy.diag_indices "numpy.diag_indices"), but internally this version uses a much faster implementation that never constructs the indices and uses simple slicing. #### Examples ``` >>> a = np.zeros((3, 3), int) >>> np.fill_diagonal(a, 5) >>> a array([[5, 0, 0], [0, 5, 0], [0, 0, 5]]) ``` The same function can operate on a 4-D array: ``` >>> a = np.zeros((3, 3, 3, 3), int) >>> np.fill_diagonal(a, 4) ``` We only show a few blocks for clarity: ``` >>> a[0, 0] array([[4, 0, 0], [0, 0, 0], [0, 0, 0]]) >>> a[1, 1] array([[0, 0, 0], [0, 4, 0], [0, 0, 0]]) >>> a[2, 2] array([[0, 0, 0], [0, 0, 0], [0, 0, 4]]) ``` The wrap option affects only tall matrices: ``` >>> # tall matrices no wrap >>> a = np.zeros((5, 3), int) >>> np.fill_diagonal(a, 4) >>> a array([[4, 0, 0], [0, 4, 0], [0, 0, 4], [0, 0, 0], [0, 0, 0]]) ``` ``` >>> # tall matrices wrap >>> a = np.zeros((5, 3), int) >>> np.fill_diagonal(a, 4, wrap=True) >>> a array([[4, 0, 0], [0, 4, 0], [0, 0, 4], [0, 0, 0], [4, 0, 0]]) ``` ``` >>> # wide matrices >>> a = np.zeros((3, 5), int) >>> np.fill_diagonal(a, 4, wrap=True) >>> a array([[4, 0, 0, 0, 0], [0, 4, 0, 0, 0], [0, 0, 4, 0, 0]]) ``` The anti-diagonal can be filled by 
reversing the order of elements using either [`numpy.flipud`](numpy.flipud#numpy.flipud "numpy.flipud") or [`numpy.fliplr`](numpy.fliplr#numpy.fliplr "numpy.fliplr"). ``` >>> a = np.zeros((3, 3), int); >>> np.fill_diagonal(np.fliplr(a), [1,2,3]) # Horizontal flip >>> a array([[0, 0, 1], [0, 2, 0], [3, 0, 0]]) >>> np.fill_diagonal(np.flipud(a), [1,2,3]) # Vertical flip >>> a array([[0, 0, 3], [0, 2, 0], [1, 0, 0]]) ``` Note that the order in which the diagonal is filled varies depending on the flip function. © 2005–2022 NumPy Developers Licensed under the 3-clause BSD License. <https://numpy.org/doc/1.23/reference/generated/numpy.fill_diagonal.htmlnumpy.nditer ============ *class*numpy.nditer(*op*, *flags=None*, *op_flags=None*, *op_dtypes=None*, *order='K'*, *casting='safe'*, *op_axes=None*, *itershape=None*, *buffersize=0*)[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/__init__.py) Efficient multi-dimensional iterator object to iterate over arrays. To get started using this object, see the [introductory guide to array iteration](../arrays.nditer#arrays-nditer). Parameters **op**ndarray or sequence of array_like The array(s) to iterate over. **flags**sequence of str, optional Flags to control the behavior of the iterator. * `buffered` enables buffering when required. * `c_index` causes a C-order index to be tracked. * `f_index` causes a Fortran-order index to be tracked. * `multi_index` causes a multi-index, or a tuple of indices with one per iteration dimension, to be tracked. * `common_dtype` causes all the operands to be converted to a common data type, with copying or buffering as necessary. * `copy_if_overlap` causes the iterator to determine if read operands have overlap with write operands, and make temporary copies as necessary to avoid overlap. False positives (needless copying) are possible in some cases. * `delay_bufalloc` delays allocation of the buffers until a reset() call is made. 
Allows `allocate` operands to be initialized before their values are copied into the buffers. * `external_loop` causes the `values` given to be one-dimensional arrays with multiple values instead of zero-dimensional arrays. * `grow_inner` allows the `value` array sizes to be made larger than the buffer size when both `buffered` and `external_loop` is used. * `ranged` allows the iterator to be restricted to a sub-range of the iterindex values. * `refs_ok` enables iteration of reference types, such as object arrays. * `reduce_ok` enables iteration of `readwrite` operands which are broadcasted, also known as reduction operands. * `zerosize_ok` allows [`itersize`](numpy.nditer.itersize#numpy.nditer.itersize "numpy.nditer.itersize") to be zero. **op_flags**list of list of str, optional This is a list of flags for each operand. At minimum, one of `readonly`, `readwrite`, or `writeonly` must be specified. * `readonly` indicates the operand will only be read from. * `readwrite` indicates the operand will be read from and written to. * `writeonly` indicates the operand will only be written to. * `no_broadcast` prevents the operand from being broadcasted. * `contig` forces the operand data to be contiguous. * `aligned` forces the operand data to be aligned. * `nbo` forces the operand data to be in native byte order. * `copy` allows a temporary read-only copy if required. * `updateifcopy` allows a temporary read-write copy if required. * `allocate` causes the array to be allocated if it is None in the `op` parameter. * `no_subtype` prevents an `allocate` operand from using a subtype. * `arraymask` indicates that this operand is the mask to use for selecting elements when writing to operands with the ‘writemasked’ flag set. The iterator does not enforce this, but when writing from a buffer back to the array, it only copies those elements indicated by this mask. * `writemasked` indicates that only elements where the chosen `arraymask` operand is True will be written to. 
* `overlap_assume_elementwise` can be used to mark operands that are accessed only in the iterator order, to allow less conservative copying when `copy_if_overlap` is present. **op_dtypes**dtype or tuple of dtype(s), optional The required data type(s) of the operands. If copying or buffering is enabled, the data will be converted to/from their original types. **order**{‘C’, ‘F’, ‘A’, ‘K’}, optional Controls the iteration order. ‘C’ means C order, ‘F’ means Fortran order, ‘A’ means ‘F’ order if all the arrays are Fortran contiguous, ‘C’ order otherwise, and ‘K’ means as close to the order the array elements appear in memory as possible. This also affects the element memory order of `allocate` operands, as they are allocated to be compatible with iteration order. Default is ‘K’. **casting**{‘no’, ‘equiv’, ‘safe’, ‘same_kind’, ‘unsafe’}, optional Controls what kind of data casting may occur when making a copy or buffering. Setting this to ‘unsafe’ is not recommended, as it can adversely affect accumulations. * ‘no’ means the data types should not be cast at all. * ‘equiv’ means only byte-order changes are allowed. * ‘safe’ means only casts which can preserve values are allowed. * ‘same_kind’ means only safe casts or casts within a kind, like float64 to float32, are allowed. * ‘unsafe’ means any data conversions may be done. **op_axes**list of list of ints, optional If provided, is a list of ints or None for each operands. The list of axes for an operand is a mapping from the dimensions of the iterator to the dimensions of the operand. A value of -1 can be placed for entries, causing that dimension to be treated as [`newaxis`](../constants#numpy.newaxis "numpy.newaxis"). **itershape**tuple of ints, optional The desired shape of the iterator. This allows `allocate` operands with a dimension mapped by op_axes not corresponding to a dimension of a different operand to get a value not equal to 1 for that dimension. 
**buffersize**int, optional When buffering is enabled, controls the size of the temporary buffers. Set to 0 for the default value. #### Notes [`nditer`](#numpy.nditer "numpy.nditer") supersedes [`flatiter`](numpy.flatiter#numpy.flatiter "numpy.flatiter"). The iterator implementation behind [`nditer`](#numpy.nditer "numpy.nditer") is also exposed by the NumPy C API. The Python exposure supplies two iteration interfaces, one which follows the Python iterator protocol, and another which mirrors the C-style do-while pattern. The native Python approach is better in most cases, but if you need the coordinates or index of an iterator, use the C-style pattern. #### Examples Here is how we might write an `iter_add` function, using the Python iterator protocol: ``` >>> def iter_add_py(x, y, out=None): ... addop = np.add ... it = np.nditer([x, y, out], [], ... [['readonly'], ['readonly'], ['writeonly','allocate']]) ... with it: ... for (a, b, c) in it: ... addop(a, b, out=c) ... return it.operands[2] ``` Here is the same function, but following the C-style pattern: ``` >>> def iter_add(x, y, out=None): ... addop = np.add ... it = np.nditer([x, y, out], [], ... [['readonly'], ['readonly'], ['writeonly','allocate']]) ... with it: ... while not it.finished: ... addop(it[0], it[1], out=it[2]) ... it.iternext() ... return it.operands[2] ``` Here is an example outer product function: ``` >>> def outer_it(x, y, out=None): ... mulop = np.multiply ... it = np.nditer([x, y, out], ['external_loop'], ... [['readonly'], ['readonly'], ['writeonly', 'allocate']], ... op_axes=[list(range(x.ndim)) + [-1] * y.ndim, ... [-1] * x.ndim + list(range(y.ndim)), ... None]) ... with it: ... for (a, b, c) in it: ... mulop(a, b, out=c) ... return it.operands[2] ``` ``` >>> a = np.arange(2)+1 >>> b = np.arange(3)+1 >>> outer_it(a,b) array([[1, 2, 3], [2, 4, 6]]) ``` Here is an example function which operates like a “lambda” ufunc: ``` >>> def luf(lamdaexpr, *args, **kwargs): ... 
'''luf(lambdaexpr, op1, ..., opn, out=None, order='K', casting='safe', buffersize=0)''' ... nargs = len(args) ... op = (kwargs.get('out',None),) + args ... it = np.nditer(op, ['buffered','external_loop'], ... [['writeonly','allocate','no_broadcast']] + ... [['readonly','nbo','aligned']]*nargs, ... order=kwargs.get('order','K'), ... casting=kwargs.get('casting','safe'), ... buffersize=kwargs.get('buffersize',0)) ... while not it.finished: ... it[0] = lamdaexpr(*it[1:]) ... it.iternext() ... return it.operands[0] ``` ``` >>> a = np.arange(5) >>> b = np.ones(5) >>> luf(lambda i,j:i*i + j/2, a, b) array([ 0.5, 1.5, 4.5, 9.5, 16.5]) ``` If operand flags `"writeonly"` or `"readwrite"` are used the operands may be views into the original data with the `WRITEBACKIFCOPY` flag. In this case [`nditer`](#numpy.nditer "numpy.nditer") must be used as a context manager or the [`nditer.close`](numpy.nditer.close#numpy.nditer.close "numpy.nditer.close") method must be called before using the result. The temporary data will be written back to the original data when the `__exit__` function is called but not before: ``` >>> a = np.arange(6, dtype='i4')[::-2] >>> with np.nditer(a, [], ... [['writeonly', 'updateifcopy']], ... casting='unsafe', ... op_dtypes=[np.dtype('f4')]) as i: ... x = i.operands[0] ... x[:] = [-1, -2, -3] ... # a still unchanged here >>> a, x (array([-1, -2, -3], dtype=int32), array([-1., -2., -3.], dtype=float32)) ``` It is important to note that once the iterator is exited, dangling references (like `x` in the example) may or may not share data with the original data `a`. If writeback semantics were active, i.e. if `x.base.flags.writebackifcopy` is `True`, then exiting the iterator will sever the connection between `x` and `a`, writing to `x` will no longer write to `a`. If writeback semantics are not active, then `x.data` will still point at some part of `a.data`, and writing to one will affect the other. 
Context management and the [`close`](numpy.nditer.close#numpy.nditer.close "numpy.nditer.close") method appeared in version 1.15.0. Attributes **dtypes**tuple of dtype(s) The data types of the values provided in [`value`](numpy.nditer.value#numpy.nditer.value "numpy.nditer.value"). This may be different from the operand data types if buffering is enabled. Valid only before the iterator is closed. **finished**bool Whether the iteration over the operands is finished or not. **has_delayed_bufalloc**bool If True, the iterator was created with the `delay_bufalloc` flag, and no reset() function was called on it yet. **has_index**bool If True, the iterator was created with either the `c_index` or the `f_index` flag, and the property [`index`](numpy.nditer.index#numpy.nditer.index "numpy.nditer.index") can be used to retrieve it. **has_multi_index**bool If True, the iterator was created with the `multi_index` flag, and the property [`multi_index`](numpy.nditer.multi_index#numpy.nditer.multi_index "numpy.nditer.multi_index") can be used to retrieve it. **index** When the `c_index` or `f_index` flag was used, this property provides access to the index. Raises a ValueError if accessed and `has_index` is False. **iterationneedsapi**bool Whether iteration requires access to the Python API, for example if one of the operands is an object array. **iterindex**int An index which matches the order of iteration. **itersize**int Size of the iterator. **itviews** Structured view(s) of [`operands`](numpy.nditer.operands#numpy.nditer.operands "numpy.nditer.operands") in memory, matching the reordered and optimized iterator access pattern. Valid only before the iterator is closed. **multi_index** When the `multi_index` flag was used, this property provides access to the index. Raises a ValueError if accessed and `has_multi_index` is False. **ndim**int The dimensions of the iterator. **nop**int The number of iterator operands.
[`operands`](numpy.nditer.operands#numpy.nditer.operands "numpy.nditer.operands")tuple of operand(s) The array(s) to be iterated over. Valid only before the iterator is closed. **shape**tuple of ints Shape tuple, the shape of the iterator. **value** Value of `operands` at current iteration. Normally, this is a tuple of array scalars, but if the flag `external_loop` is used, it is a tuple of one dimensional arrays. #### Methods | | | | --- | --- | | [`close`](numpy.nditer.close#numpy.nditer.close "numpy.nditer.close")() | Resolve all writeback semantics in writeable operands. | | [`copy`](numpy.nditer.copy#numpy.nditer.copy "numpy.nditer.copy")() | Get a copy of the iterator in its current state. | | [`debug_print`](numpy.nditer.debug_print#numpy.nditer.debug_print "numpy.nditer.debug_print")() | Print the current state of the [`nditer`](#numpy.nditer "numpy.nditer") instance and debug info to stdout. | | [`enable_external_loop`](numpy.nditer.enable_external_loop#numpy.nditer.enable_external_loop "numpy.nditer.enable_external_loop")() | When the "external_loop" flag was not used during construction, but is desired, this modifies the iterator to behave as if the flag was specified. | | [`iternext`](numpy.nditer.iternext#numpy.nditer.iternext "numpy.nditer.iternext")() | Check whether iterations are left, and perform a single internal iteration without returning the result. | | [`remove_axis`](numpy.nditer.remove_axis#numpy.nditer.remove_axis "numpy.nditer.remove_axis")(i, /) | Removes axis `i` from the iterator. | | [`remove_multi_index`](numpy.nditer.remove_multi_index#numpy.nditer.remove_multi_index "numpy.nditer.remove_multi_index")() | When the "multi_index" flag was specified, this removes it, allowing the internal iteration structure to be optimized further. | | [`reset`](numpy.nditer.reset#numpy.nditer.reset "numpy.nditer.reset")() | Reset the iterator to its initial state. | © 2005–2022 NumPy Developers Licensed under the 3-clause BSD License. 
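The index-tracking attributes listed above (`has_multi_index`, `multi_index`, `ndim`, `nop`, `itersize`) can be observed with a short sketch (a minimal illustration, not from the original page):

```python
import numpy as np

a = np.arange(6).reshape(2, 3)
it = np.nditer(a, flags=['multi_index'])
print(it.ndim, it.nop, it.itersize)   # 2 1 6
for x in it:
    # multi_index reflects the position of the element just returned
    print(it.multi_index, x)          # (0, 0) 0 ... (1, 2) 5
```

Had the iterator been created with `c_index` instead, `has_index` would be True and `it.index` would yield the flat C-order position.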
<https://numpy.org/doc/1.23/reference/generated/numpy.nditer.html>

numpy.ndindex
=============

*class*numpy.ndindex(**shape*)[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/__init__.py) An N-dimensional iterator object to index arrays. Given the shape of an array, an [`ndindex`](#numpy.ndindex "numpy.ndindex") instance iterates over the N-dimensional index of the array. At each iteration a tuple of indices is returned; the last dimension is iterated over first. Parameters **shape**ints, or a single tuple of ints The size of each dimension of the array can be passed as individual parameters or as the elements of a tuple. See also [`ndenumerate`](numpy.ndenumerate#numpy.ndenumerate "numpy.ndenumerate"), [`flatiter`](numpy.flatiter#numpy.flatiter "numpy.flatiter") #### Examples Dimensions as individual arguments ``` >>> for index in np.ndindex(3, 2, 1): ... print(index) (0, 0, 0) (0, 1, 0) (1, 0, 0) (1, 1, 0) (2, 0, 0) (2, 1, 0) ``` Same dimensions - but in a tuple `(3, 2, 1)` ``` >>> for index in np.ndindex((3, 2, 1)): ... print(index) (0, 0, 0) (0, 1, 0) (1, 0, 0) (1, 1, 0) (2, 0, 0) (2, 1, 0) ``` #### Methods | | | | --- | --- | | [`ndincr`](numpy.ndindex.ndincr#numpy.ndindex.ndincr "numpy.ndindex.ndincr")() | Increment the multi-dimensional index by one. |

numpy.nested_iters
==================

numpy.nested_iters(*op*, *axes*, *flags=None*, *op_flags=None*, *op_dtypes=None*, *order='K'*, *casting='safe'*, *buffersize=0*) Create nditers for use in nested loops. Create a tuple of [`nditer`](numpy.nditer#numpy.nditer "numpy.nditer") objects which iterate in nested loops over different axes of the op argument. The first iterator is used in the outermost loop, the last in the innermost loop. Advancing one will change the subsequent iterators to point at its new element. 
Parameters **op**ndarray or sequence of array_like The array(s) to iterate over. **axes**list of list of int Each item is used as an “op_axes” argument to an nditer. **flags, op_flags, op_dtypes, order, casting, buffersize (optional)** See [`nditer`](numpy.nditer#numpy.nditer "numpy.nditer") parameters of the same name. Returns **iters**tuple of nditer An nditer for each item in `axes`, outermost first. See also [`nditer`](numpy.nditer#numpy.nditer "numpy.nditer") #### Examples Basic usage. Note how y is the “flattened” version of [a[:, 0, :], a[:, 1, :], a[:, 2, :]] since we specified the first iter’s axes as [1] ``` >>> a = np.arange(12).reshape(2, 3, 2) >>> i, j = np.nested_iters(a, [[1], [0, 2]], flags=["multi_index"]) >>> for x in i: ... print(i.multi_index) ... for y in j: ... print('', j.multi_index, y) (0,) (0, 0) 0 (0, 1) 1 (1, 0) 6 (1, 1) 7 (1,) (0, 0) 2 (0, 1) 3 (1, 0) 8 (1, 1) 9 (2,) (0, 0) 4 (0, 1) 5 (1, 0) 10 (1, 1) 11 ```

numpy.flatiter
==============

*class*numpy.flatiter[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/__init__.py) Flat iterator object to iterate over arrays. A [`flatiter`](#numpy.flatiter "numpy.flatiter") iterator is returned by `x.flat` for any array `x`. It allows iterating over the array as if it were a 1-D array, either in a for-loop or by calling its `next` method. Iteration is done in row-major, C-style order (the last index varying the fastest). The iterator can also be indexed using basic slicing or advanced indexing. See also [`ndarray.flat`](numpy.ndarray.flat#numpy.ndarray.flat "numpy.ndarray.flat") Return a flat iterator over an array. [`ndarray.flatten`](numpy.ndarray.flatten#numpy.ndarray.flatten "numpy.ndarray.flatten") Returns a flattened copy of an array. 
#### Notes A [`flatiter`](#numpy.flatiter "numpy.flatiter") iterator cannot be constructed directly from Python code by calling the [`flatiter`](#numpy.flatiter "numpy.flatiter") constructor. #### Examples ``` >>> x = np.arange(6).reshape(2, 3) >>> fl = x.flat >>> type(fl) <class 'numpy.flatiter'> >>> for item in fl: ... print(item) ... 0 1 2 3 4 5 ``` ``` >>> fl[2:4] array([2, 3]) ``` Attributes [`base`](numpy.flatiter.base#numpy.flatiter.base "numpy.flatiter.base") A reference to the array that is iterated over. [`coords`](numpy.flatiter.coords#numpy.flatiter.coords "numpy.flatiter.coords") An N-dimensional tuple of current coordinates. [`index`](numpy.flatiter.index#numpy.flatiter.index "numpy.flatiter.index") Current flat index into the array. #### Methods | | | | --- | --- | | [`copy`](numpy.flatiter.copy#numpy.flatiter.copy "numpy.flatiter.copy")() | Get a copy of the iterator as a 1-D array. |

numpy.lib.Arrayterator
======================

*class*numpy.lib.Arrayterator(*var*, *buf_size=None*)[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/lib/arrayterator.py#L16-L219) Buffered iterator for big arrays. [`Arrayterator`](#numpy.lib.Arrayterator "numpy.lib.Arrayterator") creates a buffered iterator for reading big arrays in small contiguous blocks. The class is useful for objects stored in the file system. It allows iteration over the object *without* reading everything in memory; instead, small blocks are read and iterated over. [`Arrayterator`](#numpy.lib.Arrayterator "numpy.lib.Arrayterator") can be used with any object that supports multidimensional slices. This includes NumPy arrays, but also variables from Scientific.IO.NetCDF or pynetcdf for example. Parameters **var**array_like The object to iterate over. **buf_size**int, optional The buffer size. 
If `buf_size` is supplied, the maximum amount of data that will be read into memory is `buf_size` elements. Default is None, which will read as many elements as possible into memory. See also `ndenumerate` Multidimensional array iterator. `flatiter` Flat array iterator. `memmap` Create a memory-map to an array stored in a binary file on disk. #### Notes The algorithm works by first finding a “running dimension”, along which the blocks will be extracted. Given an array of dimensions `(d1, d2, ..., dn)`, e.g. if `buf_size` is smaller than `d1`, the first dimension will be used. If, on the other hand, `d1 < buf_size < d1*d2` the second dimension will be used, and so on. Blocks are extracted along this dimension, and when the last block is returned the process continues from the next dimension, until all elements have been read. #### Examples ``` >>> a = np.arange(3 * 4 * 5 * 6).reshape(3, 4, 5, 6) >>> a_itor = np.lib.Arrayterator(a, 2) >>> a_itor.shape (3, 4, 5, 6) ``` Now we can iterate over `a_itor`, and it will return arrays of size two. Since `buf_size` was smaller than any dimension, the first dimension will be iterated over first: ``` >>> for subarr in a_itor: ... if not subarr.all(): ... print(subarr, subarr.shape) >>> # [[[[0 1]]]] (1, 1, 1, 2) ``` Attributes **var** **buf_size** **start** **stop** **step** [`shape`](numpy.lib.arrayterator.shape#numpy.lib.Arrayterator.shape "numpy.lib.Arrayterator.shape") The shape of the array to be iterated over. [`flat`](numpy.lib.arrayterator.flat#numpy.lib.Arrayterator.flat "numpy.lib.Arrayterator.flat") A 1-D flat iterator for Arrayterator objects.

numpy.iterable
==============

numpy.iterable(*y*)[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/lib/function_base.py#L346-L388) Check whether or not an object can be iterated over. Parameters **y**object Input object. 
Returns **b**bool Return `True` if the object has an iterator method or is a sequence and `False` otherwise. #### Notes In most cases, the results of `np.iterable(obj)` are consistent with `isinstance(obj, collections.abc.Iterable)`. One notable exception is the treatment of 0-dimensional arrays: ``` >>> from collections.abc import Iterable >>> a = np.array(1.0) # 0-dimensional numpy array >>> isinstance(a, Iterable) True >>> np.iterable(a) False ``` #### Examples ``` >>> np.iterable([1, 2, 3]) True >>> np.iterable(2) False ```

numpy.nditer.operands
=====================

attribute nditer.operands The array(s) to be iterated over. Valid only before the iterator is closed.

numpy.matrix.T
==============

*property* matrix.T Returns the transpose of the matrix. Does *not* conjugate! For the complex conjugate transpose, use `.H`. Parameters **None** Returns **ret**matrix object The (non-conjugated) transpose of the matrix. See also [`transpose`](numpy.transpose#numpy.transpose "numpy.transpose"), [`getH`](numpy.matrix.geth#numpy.matrix.getH "numpy.matrix.getH") #### Examples ``` >>> m = np.matrix('[1, 2; 3, 4]') >>> m matrix([[1, 2], [3, 4]]) >>> m.getT() matrix([[1, 3], [2, 4]]) ```

numpy.matrix.H
==============

*property* matrix.H Returns the (complex) conjugate transpose of `self`. Equivalent to `np.transpose(self)` if `self` is real-valued. Parameters **None** Returns **ret**matrix object complex conjugate transpose of `self` #### Examples ``` >>> x = np.matrix(np.arange(12).reshape((3,4))) >>> z = x - 1j*x; z matrix([[ 0. 
+0.j, 1. -1.j, 2. -2.j, 3. -3.j], [ 4. -4.j, 5. -5.j, 6. -6.j, 7. -7.j], [ 8. -8.j, 9. -9.j, 10.-10.j, 11.-11.j]]) >>> z.getH() matrix([[ 0. -0.j, 4. +4.j, 8. +8.j], [ 1. +1.j, 5. +5.j, 9. +9.j], [ 2. +2.j, 6. +6.j, 10.+10.j], [ 3. +3.j, 7. +7.j, 11.+11.j]]) ```

numpy.matrix.I
==============

*property* matrix.I Returns the (multiplicative) inverse of invertible `self`. Parameters **None** Returns **ret**matrix object If `self` is non-singular, `ret` is such that `ret * self` == `self * ret` == `np.matrix(np.eye(self[0,:].size))` all return `True`. Raises numpy.linalg.LinAlgError: Singular matrix If `self` is singular. See also [`linalg.inv`](numpy.linalg.inv#numpy.linalg.inv "numpy.linalg.inv") #### Examples ``` >>> m = np.matrix('[1, 2; 3, 4]'); m matrix([[1, 2], [3, 4]]) >>> m.getI() matrix([[-2. , 1. ], [ 1.5, -0.5]]) >>> m.getI() * m matrix([[ 1., 0.], # may vary [ 0., 1.]]) ```

numpy.matrix.A
==============

*property* matrix.A Return `self` as an [`ndarray`](numpy.ndarray#numpy.ndarray "numpy.ndarray") object. Equivalent to `np.asarray(self)`. Parameters **None** Returns **ret**ndarray `self` as an [`ndarray`](numpy.ndarray#numpy.ndarray "numpy.ndarray") #### Examples ``` >>> x = np.matrix(np.arange(12).reshape((3,4))); x matrix([[ 0, 1, 2, 3], [ 4, 5, 6, 7], [ 8, 9, 10, 11]]) >>> x.getA() array([[ 0, 1, 2, 3], [ 4, 5, 6, 7], [ 8, 9, 10, 11]]) ```

numpy.asmatrix
==============

numpy.asmatrix(*data*, *dtype=None*)[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/matrixlib/defmatrix.py#L36-L69) Interpret the input as a matrix. 
Unlike [`matrix`](numpy.matrix#numpy.matrix "numpy.matrix"), [`asmatrix`](#numpy.asmatrix "numpy.asmatrix") does not make a copy if the input is already a matrix or an ndarray. Equivalent to `matrix(data, copy=False)`. Parameters **data**array_like Input data. **dtype**data-type Data-type of the output matrix. Returns **mat**matrix `data` interpreted as a matrix. #### Examples ``` >>> x = np.array([[1, 2], [3, 4]]) ``` ``` >>> m = np.asmatrix(x) ``` ``` >>> x[0,0] = 5 ``` ``` >>> m matrix([[5, 2], [3, 4]]) ```

numpy.bmat
==========

numpy.bmat(*obj*, *ldict=None*, *gdict=None*)[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/matrixlib/defmatrix.py#L1035-L1111) Build a matrix object from a string, nested sequence, or array. Parameters **obj**str or array_like Input data. If a string, variables in the current scope may be referenced by name. **ldict**dict, optional A dictionary that replaces local operands in current frame. Ignored if `obj` is not a string or `gdict` is None. **gdict**dict, optional A dictionary that replaces global operands in current frame. Ignored if `obj` is not a string. Returns **out**matrix Returns a matrix object, which is a specialized 2-D array. See also [`block`](numpy.block#numpy.block "numpy.block") A generalization of this function for N-d arrays, that returns normal ndarrays. 
#### Examples ``` >>> A = np.mat('1 1; 1 1') >>> B = np.mat('2 2; 2 2') >>> C = np.mat('3 4; 5 6') >>> D = np.mat('7 8; 9 0') ``` All the following expressions construct the same block matrix: ``` >>> np.bmat([[A, B], [C, D]]) matrix([[1, 1, 2, 2], [1, 1, 2, 2], [3, 4, 7, 8], [5, 6, 9, 0]]) >>> np.bmat(np.r_[np.c_[A, B], np.c_[C, D]]) matrix([[1, 1, 2, 2], [1, 1, 2, 2], [3, 4, 7, 8], [5, 6, 9, 0]]) >>> np.bmat('A,B; C,D') matrix([[1, 1, 2, 2], [1, 1, 2, 2], [3, 4, 7, 8], [5, 6, 9, 0]]) ```

numpy.memmap
============

*class*numpy.memmap(*filename*, *dtype=<class 'numpy.ubyte'>*, *mode='r+'*, *offset=0*, *shape=None*, *order='C'*)[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/__init__.py) Create a memory-map to an array stored in a *binary* file on disk. Memory-mapped files are used for accessing small segments of large files on disk, without reading the entire file into memory. NumPy’s memmaps are array-like objects. This differs from Python’s `mmap` module, which uses file-like objects. This subclass of ndarray has some unpleasant interactions with some operations, because it doesn’t quite fit properly as a subclass. An alternative to using this subclass is to create the `mmap` object yourself, then create an ndarray with ndarray.__new__ directly, passing the object created in its ‘buffer=’ parameter. This class may at some point be turned into a factory function which returns a view into an mmap buffer. Flush the memmap instance to write the changes to the file. Currently there is no API to close the underlying `mmap`. It is tricky to ensure the resource is actually closed, since it may be shared between different memmap instances. Parameters **filename**str, file-like object, or pathlib.Path instance The file name or file object to be used as the array data buffer. 
**dtype**data-type, optional The data-type used to interpret the file contents. Default is [`uint8`](../arrays.scalars#numpy.uint8 "numpy.uint8"). **mode**{‘r+’, ‘r’, ‘w+’, ‘c’}, optional The file is opened in this mode: | | | | --- | --- | | ‘r’ | Open existing file for reading only. | | ‘r+’ | Open existing file for reading and writing. | | ‘w+’ | Create or overwrite existing file for reading and writing. | | ‘c’ | Copy-on-write: assignments affect data in memory, but changes are not saved to disk. The file on disk is read-only. | Default is ‘r+’. **offset**int, optional In the file, array data starts at this offset. Since `offset` is measured in bytes, it should normally be a multiple of the byte-size of [`dtype`](numpy.dtype#numpy.dtype "numpy.dtype"). When `mode != 'r'`, even positive offsets beyond end of file are valid; The file will be extended to accommodate the additional data. By default, `memmap` will start at the beginning of the file, even if `filename` is a file pointer `fp` and `fp.tell() != 0`. **shape**tuple, optional The desired shape of the array. If `mode == 'r'` and the number of remaining bytes after `offset` is not a multiple of the byte-size of [`dtype`](numpy.dtype#numpy.dtype "numpy.dtype"), you must specify [`shape`](numpy.shape#numpy.shape "numpy.shape"). By default, the returned array will be 1-D with the number of elements determined by file size and data-type. **order**{‘C’, ‘F’}, optional Specify the order of the ndarray memory layout: [row-major](../../glossary#term-row-major), C-style or [column-major](../../glossary#term-column-major), Fortran-style. This only has an effect if the shape is greater than 1-D. The default order is ‘C’. See also [`lib.format.open_memmap`](numpy.lib.format.open_memmap#numpy.lib.format.open_memmap "numpy.lib.format.open_memmap") Create or load a memory-mapped `.npy` file. #### Notes The memmap object can be used anywhere an ndarray is accepted. 
Given a memmap `fp`, `isinstance(fp, numpy.ndarray)` returns `True`. Memory-mapped files cannot be larger than 2GB on 32-bit systems. When a memmap causes a file to be created or extended beyond its current size in the filesystem, the contents of the new part are unspecified. On systems with POSIX filesystem semantics, the extended part will be filled with zero bytes. #### Examples ``` >>> data = np.arange(12, dtype='float32') >>> data.resize((3,4)) ``` This example uses a temporary file so that doctest doesn’t write files to your directory. You would use a ‘normal’ filename. ``` >>> from tempfile import mkdtemp >>> import os.path as path >>> filename = path.join(mkdtemp(), 'newfile.dat') ``` Create a memmap with dtype and shape that matches our data: ``` >>> fp = np.memmap(filename, dtype='float32', mode='w+', shape=(3,4)) >>> fp memmap([[0., 0., 0., 0.], [0., 0., 0., 0.], [0., 0., 0., 0.]], dtype=float32) ``` Write data to memmap array: ``` >>> fp[:] = data[:] >>> fp memmap([[ 0., 1., 2., 3.], [ 4., 5., 6., 7.], [ 8., 9., 10., 11.]], dtype=float32) ``` ``` >>> fp.filename == path.abspath(filename) True ``` Flushes memory changes to disk in order to read them back ``` >>> fp.flush() ``` Load the memmap and verify data was stored: ``` >>> newfp = np.memmap(filename, dtype='float32', mode='r', shape=(3,4)) >>> newfp memmap([[ 0., 1., 2., 3.], [ 4., 5., 6., 7.], [ 8., 9., 10., 11.]], dtype=float32) ``` Read-only memmap: ``` >>> fpr = np.memmap(filename, dtype='float32', mode='r', shape=(3,4)) >>> fpr.flags.writeable False ``` Copy-on-write memmap: ``` >>> fpc = np.memmap(filename, dtype='float32', mode='c', shape=(3,4)) >>> fpc.flags.writeable True ``` It’s possible to assign to copy-on-write array, but values are only written into the memory copy of the array, and not written to disk: ``` >>> fpc memmap([[ 0., 1., 2., 3.], [ 4., 5., 6., 7.], [ 8., 9., 10., 11.]], dtype=float32) >>> fpc[0,:] = 0 >>> fpc memmap([[ 0., 0., 0., 0.], [ 4., 5., 6., 7.], [ 8., 9., 10., 
11.]], dtype=float32) ``` File on disk is unchanged: ``` >>> fpr memmap([[ 0., 1., 2., 3.], [ 4., 5., 6., 7.], [ 8., 9., 10., 11.]], dtype=float32) ``` Offset into a memmap: ``` >>> fpo = np.memmap(filename, dtype='float32', mode='r', offset=16) >>> fpo memmap([ 4., 5., 6., 7., 8., 9., 10., 11.], dtype=float32) ``` Attributes **filename**str or pathlib.Path instance Path to the mapped file. **offset**int Offset position in the file. **mode**str File mode. #### Methods | | | | --- | --- | | [`flush`](numpy.memmap.flush#numpy.memmap.flush "numpy.memmap.flush")() | Write any changes in the array to the file on disk. |

numpy.memmap.flush
==================

method memmap.flush()[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/core/memmap.py#L300-L316) Write any changes in the array to the file on disk. For further information, see [`memmap`](numpy.memmap#numpy.memmap "numpy.memmap"). Parameters **None** See also [`memmap`](numpy.memmap#numpy.memmap "numpy.memmap")

numpy.chararray
===============

*class*numpy.chararray(*shape*, *itemsize=1*, *unicode=False*, *buffer=None*, *offset=0*, *strides=None*, *order=None*)[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/__init__.py) Provides a convenient view on arrays of string and unicode values. Note The [`chararray`](#numpy.chararray "numpy.chararray") class exists for backwards compatibility with Numarray; it is not recommended for new development. 
Starting from numpy 1.4, if one needs arrays of strings, it is recommended to use arrays of [`dtype`](numpy.dtype#numpy.dtype "numpy.dtype") [`object_`](../arrays.scalars#numpy.object_ "numpy.object_"), [`string_`](../arrays.scalars#numpy.string_ "numpy.string_") or [`unicode_`](../arrays.scalars#numpy.unicode_ "numpy.unicode_"), and use the free functions in the [`numpy.char`](../routines.char#module-numpy.char "numpy.char") module for fast vectorized string operations. Versus a regular NumPy array of type [`str`](https://docs.python.org/3/library/stdtypes.html#str "(in Python v3.10)") or `unicode`, this class adds the following functionality: 1. values automatically have whitespace removed from the end when indexed 2. comparison operators automatically remove whitespace from the end when comparing values 3. vectorized string operations are provided as methods (e.g. [`endswith`](numpy.chararray.endswith#numpy.chararray.endswith "numpy.chararray.endswith")) and infix operators (e.g. `"+", "*", "%"`) chararrays should be created using [`numpy.char.array`](numpy.char.array#numpy.char.array "numpy.char.array") or [`numpy.char.asarray`](numpy.char.asarray#numpy.char.asarray "numpy.char.asarray"), rather than this constructor directly. This constructor creates the array, using `buffer` (with `offset` and [`strides`](numpy.chararray.strides#numpy.chararray.strides "numpy.chararray.strides")) if it is not `None`. If `buffer` is `None`, then constructs a new array with [`strides`](numpy.chararray.strides#numpy.chararray.strides "numpy.chararray.strides") in “C order”, unless both `len(shape) >= 2` and `order='F'`, in which case [`strides`](numpy.chararray.strides#numpy.chararray.strides "numpy.chararray.strides") is in “Fortran order”. Parameters **shape**tuple Shape of the array. **itemsize**int, optional Length of each array element, in number of characters. Default is 1. **unicode**bool, optional Are the array elements of type unicode (True) or string (False). 
Default is False. **buffer**object exposing the buffer interface or str, optional Memory address of the start of the array data. Default is None, in which case a new array is created. **offset**int, optional Fixed stride displacement from the beginning of an axis? Default is 0. Needs to be >=0. **strides**array_like of ints, optional Strides for the array (see [`ndarray.strides`](numpy.ndarray.strides#numpy.ndarray.strides "numpy.ndarray.strides") for full description). Default is None. **order**{‘C’, ‘F’}, optional The order in which the array data is stored in memory: ‘C’ -> “row major” order (the default), ‘F’ -> “column major” (Fortran) order. #### Examples ``` >>> charar = np.chararray((3, 3)) >>> charar[:] = 'a' >>> charar chararray([[b'a', b'a', b'a'], [b'a', b'a', b'a'], [b'a', b'a', b'a']], dtype='|S1') ``` ``` >>> charar = np.chararray(charar.shape, itemsize=5) >>> charar[:] = 'abc' >>> charar chararray([[b'abc', b'abc', b'abc'], [b'abc', b'abc', b'abc'], [b'abc', b'abc', b'abc']], dtype='|S5') ``` Attributes [`T`](numpy.chararray.t#numpy.chararray.T "numpy.chararray.T") The transposed array. [`base`](numpy.chararray.base#numpy.chararray.base "numpy.chararray.base") Base object if memory is from some other object. [`ctypes`](numpy.chararray.ctypes#numpy.chararray.ctypes "numpy.chararray.ctypes") An object to simplify the interaction of the array with the ctypes module. [`data`](numpy.chararray.data#numpy.chararray.data "numpy.chararray.data") Python buffer object pointing to the start of the array’s data. [`dtype`](numpy.dtype#numpy.dtype "numpy.dtype") Data-type of the array’s elements. [`flags`](numpy.chararray.flags#numpy.chararray.flags "numpy.chararray.flags") Information about the memory layout of the array. [`flat`](numpy.chararray.flat#numpy.chararray.flat "numpy.chararray.flat") A 1-D iterator over the array. [`imag`](numpy.imag#numpy.imag "numpy.imag") The imaginary part of the array. 
[`itemsize`](numpy.chararray.itemsize#numpy.chararray.itemsize "numpy.chararray.itemsize") Length of one array element in bytes. [`nbytes`](numpy.chararray.nbytes#numpy.chararray.nbytes "numpy.chararray.nbytes") Total bytes consumed by the elements of the array. [`ndim`](numpy.chararray.ndim#numpy.chararray.ndim "numpy.chararray.ndim") Number of array dimensions. [`real`](numpy.real#numpy.real "numpy.real") The real part of the array. [`shape`](numpy.shape#numpy.shape "numpy.shape") Tuple of array dimensions. [`size`](numpy.chararray.size#numpy.chararray.size "numpy.chararray.size") Number of elements in the array. [`strides`](numpy.chararray.strides#numpy.chararray.strides "numpy.chararray.strides") Tuple of bytes to step in each dimension when traversing an array. #### Methods | | | | --- | --- | | [`astype`](numpy.chararray.astype#numpy.chararray.astype "numpy.chararray.astype")(dtype[, order, casting, subok, copy]) | Copy of the array, cast to a specified type. | | [`argsort`](numpy.chararray.argsort#numpy.chararray.argsort "numpy.chararray.argsort")([axis, kind, order]) | Returns the indices that would sort this array. | | [`copy`](numpy.chararray.copy#numpy.chararray.copy "numpy.chararray.copy")([order]) | Return a copy of the array. | | [`count`](numpy.chararray.count#numpy.chararray.count "numpy.chararray.count")(sub[, start, end]) | Returns an array with the number of non-overlapping occurrences of substring `sub` in the range [`start`, `end`]. | | [`decode`](numpy.chararray.decode#numpy.chararray.decode "numpy.chararray.decode")([encoding, errors]) | Calls `str.decode` element-wise. | | [`dump`](numpy.chararray.dump#numpy.chararray.dump "numpy.chararray.dump")(file) | Dump a pickle of the array to the specified file. | | [`dumps`](numpy.chararray.dumps#numpy.chararray.dumps "numpy.chararray.dumps")() | Returns the pickle of the array as a string. 
| | [`encode`](numpy.chararray.encode#numpy.chararray.encode "numpy.chararray.encode")([encoding, errors]) | Calls [`str.encode`](https://docs.python.org/3/library/stdtypes.html#str.encode "(in Python v3.10)") element-wise. | | [`endswith`](numpy.chararray.endswith#numpy.chararray.endswith "numpy.chararray.endswith")(suffix[, start, end]) | Returns a boolean array which is `True` where the string element in `self` ends with `suffix`, otherwise `False`. | | [`expandtabs`](numpy.chararray.expandtabs#numpy.chararray.expandtabs "numpy.chararray.expandtabs")([tabsize]) | Return a copy of each string element where all tab characters are replaced by one or more spaces. | | [`fill`](numpy.chararray.fill#numpy.chararray.fill "numpy.chararray.fill")(value) | Fill the array with a scalar value. | | [`find`](numpy.chararray.find#numpy.chararray.find "numpy.chararray.find")(sub[, start, end]) | For each element, return the lowest index in the string where substring `sub` is found. | | [`flatten`](numpy.chararray.flatten#numpy.chararray.flatten "numpy.chararray.flatten")([order]) | Return a copy of the array collapsed into one dimension. | | [`getfield`](numpy.chararray.getfield#numpy.chararray.getfield "numpy.chararray.getfield")(dtype[, offset]) | Returns a field of the given array as a certain type. | | [`index`](numpy.chararray.index#numpy.chararray.index "numpy.chararray.index")(sub[, start, end]) | Like [`find`](numpy.chararray.find#numpy.chararray.find "numpy.chararray.find"), but raises `ValueError` when the substring is not found. | | [`isalnum`](numpy.chararray.isalnum#numpy.chararray.isalnum "numpy.chararray.isalnum")() | Returns true for each element if all characters in the string are alphanumeric and there is at least one character, false otherwise. 
| | [`isalpha`](numpy.chararray.isalpha#numpy.chararray.isalpha "numpy.chararray.isalpha")() | Returns true for each element if all characters in the string are alphabetic and there is at least one character, false otherwise. | | [`isdecimal`](numpy.chararray.isdecimal#numpy.chararray.isdecimal "numpy.chararray.isdecimal")() | For each element in `self`, return True if there are only decimal characters in the element. | | [`isdigit`](numpy.chararray.isdigit#numpy.chararray.isdigit "numpy.chararray.isdigit")() | Returns true for each element if all characters in the string are digits and there is at least one character, false otherwise. | | [`islower`](numpy.chararray.islower#numpy.chararray.islower "numpy.chararray.islower")() | Returns true for each element if all cased characters in the string are lowercase and there is at least one cased character, false otherwise. | | [`isnumeric`](numpy.chararray.isnumeric#numpy.chararray.isnumeric "numpy.chararray.isnumeric")() | For each element in `self`, return True if there are only numeric characters in the element. | | [`isspace`](numpy.chararray.isspace#numpy.chararray.isspace "numpy.chararray.isspace")() | Returns true for each element if there are only whitespace characters in the string and there is at least one character, false otherwise. | | [`istitle`](numpy.chararray.istitle#numpy.chararray.istitle "numpy.chararray.istitle")() | Returns true for each element if the element is a titlecased string and there is at least one character, false otherwise. | | [`isupper`](numpy.chararray.isupper#numpy.chararray.isupper "numpy.chararray.isupper")() | Returns true for each element if all cased characters in the string are uppercase and there is at least one character, false otherwise. | | [`item`](numpy.chararray.item#numpy.chararray.item "numpy.chararray.item")(*args) | Copy an element of an array to a standard Python scalar and return it. 
| | [`join`](numpy.chararray.join#numpy.chararray.join "numpy.chararray.join")(seq) | Return a string which is the concatenation of the strings in the sequence `seq`. | | [`ljust`](numpy.chararray.ljust#numpy.chararray.ljust "numpy.chararray.ljust")(width[, fillchar]) | Return an array with the elements of `self` left-justified in a string of length `width`. | | [`lower`](numpy.chararray.lower#numpy.chararray.lower "numpy.chararray.lower")() | Return an array with the elements of `self` converted to lowercase. | | [`lstrip`](numpy.chararray.lstrip#numpy.chararray.lstrip "numpy.chararray.lstrip")([chars]) | For each element in `self`, return a copy with the leading characters removed. | | [`nonzero`](numpy.chararray.nonzero#numpy.chararray.nonzero "numpy.chararray.nonzero")() | Return the indices of the elements that are non-zero. | | [`put`](numpy.chararray.put#numpy.chararray.put "numpy.chararray.put")(indices, values[, mode]) | Set `a.flat[n] = values[n]` for all `n` in indices. | | [`ravel`](numpy.chararray.ravel#numpy.chararray.ravel "numpy.chararray.ravel")([order]) | Return a flattened array. | | [`repeat`](numpy.chararray.repeat#numpy.chararray.repeat "numpy.chararray.repeat")(repeats[, axis]) | Repeat elements of an array. | | [`replace`](numpy.chararray.replace#numpy.chararray.replace "numpy.chararray.replace")(old, new[, count]) | For each element in `self`, return a copy of the string with all occurrences of substring `old` replaced by `new`. | | [`reshape`](numpy.chararray.reshape#numpy.chararray.reshape "numpy.chararray.reshape")(shape[, order]) | Returns an array containing the same data with a new shape. | | [`resize`](numpy.chararray.resize#numpy.chararray.resize "numpy.chararray.resize")(new_shape[, refcheck]) | Change shape and size of array in-place. 
| | [`rfind`](numpy.chararray.rfind#numpy.chararray.rfind "numpy.chararray.rfind")(sub[, start, end]) | For each element in `self`, return the highest index in the string where substring `sub` is found, such that `sub` is contained within [`start`, `end`]. | | [`rindex`](numpy.chararray.rindex#numpy.chararray.rindex "numpy.chararray.rindex")(sub[, start, end]) | Like [`rfind`](numpy.chararray.rfind#numpy.chararray.rfind "numpy.chararray.rfind"), but raises `ValueError` when the substring `sub` is not found. | | [`rjust`](numpy.chararray.rjust#numpy.chararray.rjust "numpy.chararray.rjust")(width[, fillchar]) | Return an array with the elements of `self` right-justified in a string of length `width`. | | [`rsplit`](numpy.chararray.rsplit#numpy.chararray.rsplit "numpy.chararray.rsplit")([sep, maxsplit]) | For each element in `self`, return a list of the words in the string, using `sep` as the delimiter string. | | [`rstrip`](numpy.chararray.rstrip#numpy.chararray.rstrip "numpy.chararray.rstrip")([chars]) | For each element in `self`, return a copy with the trailing characters removed. | | [`searchsorted`](numpy.chararray.searchsorted#numpy.chararray.searchsorted "numpy.chararray.searchsorted")(v[, side, sorter]) | Find indices where elements of v should be inserted in a to maintain order. | | [`setfield`](numpy.chararray.setfield#numpy.chararray.setfield "numpy.chararray.setfield")(val, dtype[, offset]) | Put a value into a specified place in a field defined by a data-type. | | [`setflags`](numpy.chararray.setflags#numpy.chararray.setflags "numpy.chararray.setflags")([write, align, uic]) | Set array flags WRITEABLE, ALIGNED, WRITEBACKIFCOPY, respectively. | | [`sort`](numpy.chararray.sort#numpy.chararray.sort "numpy.chararray.sort")([axis, kind, order]) | Sort an array in-place. 
| | [`split`](numpy.chararray.split#numpy.chararray.split "numpy.chararray.split")([sep, maxsplit]) | For each element in `self`, return a list of the words in the string, using `sep` as the delimiter string. | | [`splitlines`](numpy.chararray.splitlines#numpy.chararray.splitlines "numpy.chararray.splitlines")([keepends]) | For each element in `self`, return a list of the lines in the element, breaking at line boundaries. | | [`squeeze`](numpy.chararray.squeeze#numpy.chararray.squeeze "numpy.chararray.squeeze")([axis]) | Remove axes of length one from `a`. | | [`startswith`](numpy.chararray.startswith#numpy.chararray.startswith "numpy.chararray.startswith")(prefix[, start, end]) | Returns a boolean array which is `True` where the string element in `self` starts with `prefix`, otherwise `False`. | | [`strip`](numpy.chararray.strip#numpy.chararray.strip "numpy.chararray.strip")([chars]) | For each element in `self`, return a copy with the leading and trailing characters removed. | | [`swapaxes`](numpy.chararray.swapaxes#numpy.chararray.swapaxes "numpy.chararray.swapaxes")(axis1, axis2) | Return a view of the array with `axis1` and `axis2` interchanged. | | [`swapcase`](numpy.chararray.swapcase#numpy.chararray.swapcase "numpy.chararray.swapcase")() | For each element in `self`, return a copy of the string with uppercase characters converted to lowercase and vice versa. | | [`take`](numpy.chararray.take#numpy.chararray.take "numpy.chararray.take")(indices[, axis, out, mode]) | Return an array formed from the elements of `a` at the given indices. | | [`title`](numpy.chararray.title#numpy.chararray.title "numpy.chararray.title")() | For each element in `self`, return a titlecased version of the string: words start with uppercase characters, all remaining cased characters are lowercase. | | [`tofile`](numpy.chararray.tofile#numpy.chararray.tofile "numpy.chararray.tofile")(fid[, sep, format]) | Write array to a file as text or binary (default). 
| | [`tolist`](numpy.chararray.tolist#numpy.chararray.tolist "numpy.chararray.tolist")() | Return the array as an `a.ndim`-levels deep nested list of Python scalars. | | [`tostring`](numpy.chararray.tostring#numpy.chararray.tostring "numpy.chararray.tostring")([order]) | A compatibility alias for [`tobytes`](numpy.chararray.tobytes#numpy.chararray.tobytes "numpy.chararray.tobytes"), with exactly the same behavior. | | [`translate`](numpy.chararray.translate#numpy.chararray.translate "numpy.chararray.translate")(table[, deletechars]) | For each element in `self`, return a copy of the string where all characters occurring in the optional argument `deletechars` are removed, and the remaining characters have been mapped through the given translation table. | | [`transpose`](numpy.chararray.transpose#numpy.chararray.transpose "numpy.chararray.transpose")(*axes) | Returns a view of the array with axes transposed. | | [`upper`](numpy.chararray.upper#numpy.chararray.upper "numpy.chararray.upper")() | Return an array with the elements of `self` converted to uppercase. | | [`view`](numpy.chararray.view#numpy.chararray.view "numpy.chararray.view")([dtype][, type]) | New view of array with the same data. | | [`zfill`](numpy.chararray.zfill#numpy.chararray.zfill "numpy.chararray.zfill")(width) | Return the numeric string left-filled with zeros in a string of length `width`. | © 2005–2022 NumPy Developers Licensed under the 3-clause BSD License. <https://numpy.org/doc/1.23/reference/generated/numpy.chararray.htmlnumpy.core.defchararray.array ============================= core.defchararray.array(*obj*, *itemsize=None*, *copy=True*, *unicode=None*, *order=None*)[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/core/defchararray.py#L2612-L2743) Create a [`chararray`](numpy.chararray#numpy.chararray "numpy.chararray"). Note This class is provided for numarray backward-compatibility. 
New code (not concerned with numarray compatibility) should use arrays of type [`string_`](../arrays.scalars#numpy.string_ "numpy.string_") or [`unicode_`](../arrays.scalars#numpy.unicode_ "numpy.unicode_") and use the free functions in `numpy.char` for fast vectorized string operations instead. Versus a regular NumPy array of type [`str`](https://docs.python.org/3/library/stdtypes.html#str "(in Python v3.10)") or `unicode`, this class adds the following functionality: 1. values automatically have whitespace removed from the end when indexed 2. comparison operators automatically remove whitespace from the end when comparing values 3. vectorized string operations are provided as methods (e.g. [`str.endswith`](https://docs.python.org/3/library/stdtypes.html#str.endswith "(in Python v3.10)")) and infix operators (e.g. `+, *, %`) Parameters **obj**array of str or unicode-like **itemsize**int, optional `itemsize` is the number of characters per scalar in the resulting array. If `itemsize` is None, and `obj` is an object array or a Python list, the `itemsize` will be automatically determined. If `itemsize` is provided and `obj` is of type str or unicode, then the `obj` string will be chunked into `itemsize` pieces. **copy**bool, optional If true (default), then the object is copied. Otherwise, a copy will only be made if __array__ returns a copy, if obj is a nested sequence, or if a copy is needed to satisfy any of the other requirements (`itemsize`, unicode, `order`, etc.). **unicode**bool, optional When true, the resulting [`chararray`](numpy.chararray#numpy.chararray "numpy.chararray") can contain Unicode characters, when false only 8-bit characters. 
If unicode is None and `obj` is one of the following: * a [`chararray`](numpy.chararray#numpy.chararray "numpy.chararray"), * an ndarray of type [`str`](https://docs.python.org/3/library/stdtypes.html#str "(in Python v3.10)") or `unicode` * a Python str or unicode object, then the unicode setting of the output array will be automatically determined. **order**{‘C’, ‘F’, ‘A’}, optional Specify the order of the array. If order is ‘C’ (default), then the array will be in C-contiguous order (last-index varies the fastest). If order is ‘F’, then the returned array will be in Fortran-contiguous order (first-index varies the fastest). If order is ‘A’, then the returned array may be in any order (either C-, Fortran-contiguous, or even discontiguous). © 2005–2022 NumPy Developers Licensed under the 3-clause BSD License. <https://numpy.org/doc/1.23/reference/generated/numpy.core.defchararray.array.htmlnumpy.record ============ *class*numpy.record[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/core/src/multiarray/scalartypes.c.src) A data-type scalar that allows field access as attribute lookup. Attributes [`T`](numpy.record.t#numpy.record.T "numpy.record.T") Scalar attribute identical to the corresponding array attribute. [`base`](numpy.record.base#numpy.record.base "numpy.record.base") base object [`data`](numpy.record.data#numpy.record.data "numpy.record.data") Pointer to start of data. [`dtype`](numpy.dtype#numpy.dtype "numpy.dtype") dtype object [`flags`](numpy.record.flags#numpy.record.flags "numpy.record.flags") integer value of flags [`flat`](numpy.record.flat#numpy.record.flat "numpy.record.flat") A 1-D view of the scalar. [`imag`](numpy.imag#numpy.imag "numpy.imag") The imaginary part of the scalar. [`itemsize`](numpy.record.itemsize#numpy.record.itemsize "numpy.record.itemsize") The length of one element in bytes. [`nbytes`](numpy.record.nbytes#numpy.record.nbytes "numpy.record.nbytes") The length of the scalar in bytes. 
[`ndim`](numpy.record.ndim#numpy.record.ndim "numpy.record.ndim") The number of array dimensions. [`real`](numpy.real#numpy.real "numpy.real") The real part of the scalar. [`shape`](numpy.shape#numpy.shape "numpy.shape") Tuple of array dimensions. [`size`](numpy.record.size#numpy.record.size "numpy.record.size") The number of elements in the gentype. [`strides`](numpy.record.strides#numpy.record.strides "numpy.record.strides") Tuple of bytes steps in each dimension. #### Methods | | | | --- | --- | | [`all`](numpy.record.all#numpy.record.all "numpy.record.all") | Scalar method identical to the corresponding array attribute. | | [`any`](numpy.record.any#numpy.record.any "numpy.record.any") | Scalar method identical to the corresponding array attribute. | | [`argmax`](numpy.record.argmax#numpy.record.argmax "numpy.record.argmax") | Scalar method identical to the corresponding array attribute. | | [`argmin`](numpy.record.argmin#numpy.record.argmin "numpy.record.argmin") | Scalar method identical to the corresponding array attribute. | | [`argsort`](numpy.record.argsort#numpy.record.argsort "numpy.record.argsort") | Scalar method identical to the corresponding array attribute. | | [`astype`](numpy.record.astype#numpy.record.astype "numpy.record.astype") | Scalar method identical to the corresponding array attribute. | | [`byteswap`](numpy.record.byteswap#numpy.record.byteswap "numpy.record.byteswap") | Scalar method identical to the corresponding array attribute. | | [`choose`](numpy.record.choose#numpy.record.choose "numpy.record.choose") | Scalar method identical to the corresponding array attribute. | | [`clip`](numpy.record.clip#numpy.record.clip "numpy.record.clip") | Scalar method identical to the corresponding array attribute. | | [`compress`](numpy.record.compress#numpy.record.compress "numpy.record.compress") | Scalar method identical to the corresponding array attribute. 
| | [`conjugate`](numpy.record.conjugate#numpy.record.conjugate "numpy.record.conjugate") | Scalar method identical to the corresponding array attribute. | | [`copy`](numpy.record.copy#numpy.record.copy "numpy.record.copy") | Scalar method identical to the corresponding array attribute. | | [`cumprod`](numpy.record.cumprod#numpy.record.cumprod "numpy.record.cumprod") | Scalar method identical to the corresponding array attribute. | | [`cumsum`](numpy.record.cumsum#numpy.record.cumsum "numpy.record.cumsum") | Scalar method identical to the corresponding array attribute. | | [`diagonal`](numpy.record.diagonal#numpy.record.diagonal "numpy.record.diagonal") | Scalar method identical to the corresponding array attribute. | | [`dump`](numpy.record.dump#numpy.record.dump "numpy.record.dump") | Scalar method identical to the corresponding array attribute. | | [`dumps`](numpy.record.dumps#numpy.record.dumps "numpy.record.dumps") | Scalar method identical to the corresponding array attribute. | | [`fill`](numpy.record.fill#numpy.record.fill "numpy.record.fill") | Scalar method identical to the corresponding array attribute. | | [`flatten`](numpy.record.flatten#numpy.record.flatten "numpy.record.flatten") | Scalar method identical to the corresponding array attribute. | | [`getfield`](numpy.record.getfield#numpy.record.getfield "numpy.record.getfield") | Scalar method identical to the corresponding array attribute. | | [`item`](numpy.record.item#numpy.record.item "numpy.record.item") | Scalar method identical to the corresponding array attribute. | | [`itemset`](numpy.record.itemset#numpy.record.itemset "numpy.record.itemset") | Scalar method identical to the corresponding array attribute. | | [`max`](numpy.record.max#numpy.record.max "numpy.record.max") | Scalar method identical to the corresponding array attribute. | | [`mean`](numpy.record.mean#numpy.record.mean "numpy.record.mean") | Scalar method identical to the corresponding array attribute. 
| | [`min`](numpy.record.min#numpy.record.min "numpy.record.min") | Scalar method identical to the corresponding array attribute. | | [`newbyteorder`](numpy.record.newbyteorder#numpy.record.newbyteorder "numpy.record.newbyteorder")([new_order]) | Return a new [`dtype`](numpy.dtype#numpy.dtype "numpy.dtype") with a different byte order. | | [`nonzero`](numpy.record.nonzero#numpy.record.nonzero "numpy.record.nonzero") | Scalar method identical to the corresponding array attribute. | | [`pprint`](numpy.record.pprint#numpy.record.pprint "numpy.record.pprint")() | Pretty-print all fields. | | [`prod`](numpy.record.prod#numpy.record.prod "numpy.record.prod") | Scalar method identical to the corresponding array attribute. | | [`ptp`](numpy.record.ptp#numpy.record.ptp "numpy.record.ptp") | Scalar method identical to the corresponding array attribute. | | [`put`](numpy.record.put#numpy.record.put "numpy.record.put") | Scalar method identical to the corresponding array attribute. | | [`ravel`](numpy.record.ravel#numpy.record.ravel "numpy.record.ravel") | Scalar method identical to the corresponding array attribute. | | [`repeat`](numpy.record.repeat#numpy.record.repeat "numpy.record.repeat") | Scalar method identical to the corresponding array attribute. | | [`reshape`](numpy.record.reshape#numpy.record.reshape "numpy.record.reshape") | Scalar method identical to the corresponding array attribute. | | [`resize`](numpy.record.resize#numpy.record.resize "numpy.record.resize") | Scalar method identical to the corresponding array attribute. | | [`round`](numpy.record.round#numpy.record.round "numpy.record.round") | Scalar method identical to the corresponding array attribute. | | [`searchsorted`](numpy.record.searchsorted#numpy.record.searchsorted "numpy.record.searchsorted") | Scalar method identical to the corresponding array attribute. 
| | [`setfield`](numpy.record.setfield#numpy.record.setfield "numpy.record.setfield") | Scalar method identical to the corresponding array attribute. | | [`setflags`](numpy.record.setflags#numpy.record.setflags "numpy.record.setflags") | Scalar method identical to the corresponding array attribute. | | [`sort`](numpy.record.sort#numpy.record.sort "numpy.record.sort") | Scalar method identical to the corresponding array attribute. | | [`squeeze`](numpy.record.squeeze#numpy.record.squeeze "numpy.record.squeeze") | Scalar method identical to the corresponding array attribute. | | [`std`](numpy.record.std#numpy.record.std "numpy.record.std") | Scalar method identical to the corresponding array attribute. | | [`sum`](numpy.record.sum#numpy.record.sum "numpy.record.sum") | Scalar method identical to the corresponding array attribute. | | [`swapaxes`](numpy.record.swapaxes#numpy.record.swapaxes "numpy.record.swapaxes") | Scalar method identical to the corresponding array attribute. | | [`take`](numpy.record.take#numpy.record.take "numpy.record.take") | Scalar method identical to the corresponding array attribute. | | [`tofile`](numpy.record.tofile#numpy.record.tofile "numpy.record.tofile") | Scalar method identical to the corresponding array attribute. | | [`tolist`](numpy.record.tolist#numpy.record.tolist "numpy.record.tolist") | Scalar method identical to the corresponding array attribute. | | [`tostring`](numpy.record.tostring#numpy.record.tostring "numpy.record.tostring") | Scalar method identical to the corresponding array attribute. | | [`trace`](numpy.record.trace#numpy.record.trace "numpy.record.trace") | Scalar method identical to the corresponding array attribute. | | [`transpose`](numpy.record.transpose#numpy.record.transpose "numpy.record.transpose") | Scalar method identical to the corresponding array attribute. | | [`var`](numpy.record.var#numpy.record.var "numpy.record.var") | Scalar method identical to the corresponding array attribute. 
| | [`view`](numpy.record.view#numpy.record.view "numpy.record.view") | Scalar method identical to the corresponding array attribute. | | | | | --- | --- | | **conj** | | | **tobytes** | | <https://numpy.org/doc/1.23/reference/generated/numpy.record.html> numpy.lib.user_array.container =============================== *class*numpy.lib.user_array.container(*data*, *dtype=None*, *copy=True*)[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/lib/user_array.py#L16-L264) Standard container-class for easy multiple-inheritance. #### Methods | | | | --- | --- | | **copy** | | | **tostring** | | | **byteswap** | | | **astype** | | <https://numpy.org/doc/1.23/reference/generated/numpy.lib.user_array.container.html> numpy.broadcast =============== *class*numpy.broadcast[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/__init__.py) Produce an object that mimics broadcasting. Parameters **in1, in2, 
**array_like Input parameters. Returns **b**broadcast object Broadcast the input parameters against one another, and return an object that encapsulates the result. Amongst others, it has `shape` and `nd` properties, and may be used as an iterator. See also [`broadcast_arrays`](numpy.broadcast_arrays#numpy.broadcast_arrays "numpy.broadcast_arrays") [`broadcast_to`](numpy.broadcast_to#numpy.broadcast_to "numpy.broadcast_to") [`broadcast_shapes`](numpy.broadcast_shapes#numpy.broadcast_shapes "numpy.broadcast_shapes") #### Examples Manually adding two vectors, using broadcasting: ``` >>> x = np.array([[1], [2], [3]]) >>> y = np.array([4, 5, 6]) >>> b = np.broadcast(x, y) ``` ``` >>> out = np.empty(b.shape) >>> out.flat = [u+v for (u,v) in b] >>> out array([[5., 6., 7.], [6., 7., 8.], [7., 8., 9.]]) ``` Compare against built-in broadcasting: ``` >>> x + y array([[5, 6, 7], [6, 7, 8], [7, 8, 9]]) ``` Attributes [`index`](numpy.broadcast.index#numpy.broadcast.index "numpy.broadcast.index") current index in broadcasted result [`iters`](numpy.broadcast.iters#numpy.broadcast.iters "numpy.broadcast.iters") tuple of iterators along `self`’s “components.” [`nd`](numpy.broadcast.nd#numpy.broadcast.nd "numpy.broadcast.nd") Number of dimensions of broadcasted result. [`ndim`](numpy.broadcast.ndim#numpy.broadcast.ndim "numpy.broadcast.ndim") Number of dimensions of broadcasted result. [`numiter`](numpy.broadcast.numiter#numpy.broadcast.numiter "numpy.broadcast.numiter") Number of iterators possessed by the broadcasted result. [`shape`](numpy.shape#numpy.shape "numpy.shape") Shape of broadcasted result. [`size`](numpy.broadcast.size#numpy.broadcast.size "numpy.broadcast.size") Total size of broadcasted result. #### Methods | | | | --- | --- | | [`reset`](numpy.broadcast.reset#numpy.broadcast.reset "numpy.broadcast.reset")() | Reset the broadcasted result's iterator(s). | © 2005–2022 NumPy Developers Licensed under the 3-clause BSD License. 
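The attributes listed above can be inspected directly on a `broadcast` object. A minimal sketch using only the documented attributes (`shape`, `ndim`, `size`, `numiter`, `index`):

```python
import numpy as np

x = np.array([[1], [2], [3]])   # shape (3, 1)
y = np.array([4, 5, 6])         # shape (3,)

b = np.broadcast(x, y)
assert b.shape == (3, 3)   # shape of the broadcasted result
assert b.ndim == 2         # number of dimensions of the result
assert b.size == 9         # total number of elements in the result
assert b.numiter == 2      # one iterator per input array
assert b.index == 0        # iteration has not started yet

next(b)                    # consume one (u, v) pair
assert b.index == 1        # index tracks iteration progress
```

Because `index` advances as the object is iterated, `reset()` (documented below for this class) is needed before reusing the same `broadcast` object in a second loop.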
<https://numpy.org/doc/1.23/reference/generated/numpy.broadcast.htmlnumpy.asarray ============= numpy.asarray(*a*, *dtype=None*, *order=None*, ***, *like=None*) Convert the input to an array. Parameters **a**array_like Input data, in any form that can be converted to an array. This includes lists, lists of tuples, tuples, tuples of tuples, tuples of lists and ndarrays. **dtype**data-type, optional By default, the data-type is inferred from the input data. **order**{‘C’, ‘F’, ‘A’, ‘K’}, optional Memory layout. ‘A’ and ‘K’ depend on the order of input array a. ‘C’ row-major (C-style), ‘F’ column-major (Fortran-style) memory representation. ‘A’ (any) means ‘F’ if `a` is Fortran contiguous, ‘C’ otherwise ‘K’ (keep) preserve input order Defaults to ‘K’. **like**array_like, optional Reference object to allow the creation of arrays which are not NumPy arrays. If an array-like passed in as `like` supports the `__array_function__` protocol, the result will be defined by it. In this case, it ensures the creation of an array object compatible with that passed in via this argument. New in version 1.20.0. Returns **out**ndarray Array interpretation of `a`. No copy is performed if the input is already an ndarray with matching dtype and order. If `a` is a subclass of ndarray, a base class ndarray is returned. See also [`asanyarray`](numpy.asanyarray#numpy.asanyarray "numpy.asanyarray") Similar function which passes through subclasses. [`ascontiguousarray`](numpy.ascontiguousarray#numpy.ascontiguousarray "numpy.ascontiguousarray") Convert input to a contiguous array. [`asfarray`](numpy.asfarray#numpy.asfarray "numpy.asfarray") Convert input to a floating point ndarray. [`asfortranarray`](numpy.asfortranarray#numpy.asfortranarray "numpy.asfortranarray") Convert input to an ndarray with column-major memory order. [`asarray_chkfinite`](numpy.asarray_chkfinite#numpy.asarray_chkfinite "numpy.asarray_chkfinite") Similar function which checks input for NaNs and Infs. 
[`fromiter`](numpy.fromiter#numpy.fromiter "numpy.fromiter") Create an array from an iterator. [`fromfunction`](numpy.fromfunction#numpy.fromfunction "numpy.fromfunction") Construct an array by executing a function on grid positions. #### Examples Convert a list into an array: ``` >>> a = [1, 2] >>> np.asarray(a) array([1, 2]) ``` Existing arrays are not copied: ``` >>> a = np.array([1, 2]) >>> np.asarray(a) is a True ``` If [`dtype`](numpy.dtype#numpy.dtype "numpy.dtype") is set, array is copied only if dtype does not match: ``` >>> a = np.array([1, 2], dtype=np.float32) >>> np.asarray(a, dtype=np.float32) is a True >>> np.asarray(a, dtype=np.float64) is a False ``` Contrary to [`asanyarray`](numpy.asanyarray#numpy.asanyarray "numpy.asanyarray"), ndarray subclasses are not passed through: ``` >>> issubclass(np.recarray, np.ndarray) True >>> a = np.array([(1.0, 2), (3.0, 4)], dtype='f4,i4').view(np.recarray) >>> np.asarray(a) is a False >>> np.asanyarray(a) is a True ``` © 2005–2022 NumPy Developers Licensed under the 3-clause BSD License. <https://numpy.org/doc/1.23/reference/generated/numpy.asarray.htmlnumpy.asanyarray ================ numpy.asanyarray(*a*, *dtype=None*, *order=None*, ***, *like=None*) Convert the input to an ndarray, but pass ndarray subclasses through. Parameters **a**array_like Input data, in any form that can be converted to an array. This includes scalars, lists, lists of tuples, tuples, tuples of tuples, tuples of lists, and ndarrays. **dtype**data-type, optional By default, the data-type is inferred from the input data. **order**{‘C’, ‘F’, ‘A’, ‘K’}, optional Memory layout. ‘A’ and ‘K’ depend on the order of input array a. ‘C’ row-major (C-style), ‘F’ column-major (Fortran-style) memory representation. ‘A’ (any) means ‘F’ if `a` is Fortran contiguous, ‘C’ otherwise ‘K’ (keep) preserve input order Defaults to ‘C’. **like**array_like, optional Reference object to allow the creation of arrays which are not NumPy arrays. 
If an array-like passed in as `like` supports the `__array_function__` protocol, the result will be defined by it. In this case, it ensures the creation of an array object compatible with that passed in via this argument. New in version 1.20.0. Returns **out**ndarray or an ndarray subclass Array interpretation of `a`. If `a` is an ndarray or a subclass of ndarray, it is returned as-is and no copy is performed. See also [`asarray`](numpy.asarray#numpy.asarray "numpy.asarray") Similar function which always returns ndarrays. [`ascontiguousarray`](numpy.ascontiguousarray#numpy.ascontiguousarray "numpy.ascontiguousarray") Convert input to a contiguous array. [`asfarray`](numpy.asfarray#numpy.asfarray "numpy.asfarray") Convert input to a floating point ndarray. [`asfortranarray`](numpy.asfortranarray#numpy.asfortranarray "numpy.asfortranarray") Convert input to an ndarray with column-major memory order. [`asarray_chkfinite`](numpy.asarray_chkfinite#numpy.asarray_chkfinite "numpy.asarray_chkfinite") Similar function which checks input for NaNs and Infs. [`fromiter`](numpy.fromiter#numpy.fromiter "numpy.fromiter") Create an array from an iterator. [`fromfunction`](numpy.fromfunction#numpy.fromfunction "numpy.fromfunction") Construct an array by executing a function on grid positions. #### Examples Convert a list into an array: ``` >>> a = [1, 2] >>> np.asanyarray(a) array([1, 2]) ``` Instances of [`ndarray`](numpy.ndarray#numpy.ndarray "numpy.ndarray") subclasses are passed through as-is: ``` >>> a = np.array([(1.0, 2), (3.0, 4)], dtype='f4,i4').view(np.recarray) >>> np.asanyarray(a) is a True ``` © 2005–2022 NumPy Developers Licensed under the 3-clause BSD License. 
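The practical difference between `asarray` and `asanyarray`, and the effect of the `dtype` and `order` parameters on copying, can be summarized in a short sketch (plain NumPy, no assumptions beyond the behavior documented above):

```python
import numpy as np

a = np.array([[1, 2], [3, 4]])

# A matching ndarray is passed through without a copy
assert np.asarray(a) is a

# Requesting a different dtype forces a copy
b = np.asarray(a, dtype=np.float64)
assert b is not a and b.dtype == np.float64

# order='F' yields a Fortran-contiguous result (copied here, since a is C-ordered)
f = np.asarray(a, order='F')
assert f.flags['F_CONTIGUOUS']

# asanyarray passes ndarray subclasses through as-is;
# asarray converts them to a base-class ndarray
r = np.array([(1.0, 2)], dtype='f4,i4').view(np.recarray)
assert np.asanyarray(r) is r
assert type(np.asarray(r)) is np.ndarray
```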
<https://numpy.org/doc/1.23/reference/generated/numpy.asanyarray.html> numpy.lib.mixins.NDArrayOperatorsMixin ====================================== *class*numpy.lib.mixins.NDArrayOperatorsMixin[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/lib/mixins.py#L59-L176) Mixin defining all operator special methods using __array_ufunc__. This class implements the special methods for almost all of Python’s builtin operators defined in the [`operator`](https://docs.python.org/3/library/operator.html#module-operator "(in Python v3.10)") module, including comparisons (`==`, `>`, etc.) and arithmetic (`+`, `*`, `-`, etc.), by deferring to the `__array_ufunc__` method, which subclasses must implement. It is useful for writing classes that do not inherit from [`numpy.ndarray`](numpy.ndarray#numpy.ndarray "numpy.ndarray"), but that should support arithmetic and numpy universal functions like arrays as described in [A Mechanism for Overriding Ufuncs](https://numpy.org/neps/nep-0013-ufunc-overrides.html). As a trivial example, consider this implementation of an `ArrayLike` class that simply wraps a NumPy array and ensures that the result of any arithmetic operation is also an `ArrayLike` object: ``` class ArrayLike(np.lib.mixins.NDArrayOperatorsMixin): def __init__(self, value): self.value = np.asarray(value) # One might also consider adding the built-in list type to this # list, to support operations like np.add(array_like, list) _HANDLED_TYPES = (np.ndarray, numbers.Number) def __array_ufunc__(self, ufunc, method, *inputs, **kwargs): out = kwargs.get('out', ()) for x in inputs + out: # Only support operations with instances of _HANDLED_TYPES. # Use ArrayLike instead of type(self) for isinstance to # allow subclasses that don't override __array_ufunc__ to # handle ArrayLike objects. if not isinstance(x, self._HANDLED_TYPES + (ArrayLike,)): return NotImplemented # Defer to the implementation of the ufunc on unwrapped values. 
inputs = tuple(x.value if isinstance(x, ArrayLike) else x for x in inputs) if out: kwargs['out'] = tuple( x.value if isinstance(x, ArrayLike) else x for x in out) result = getattr(ufunc, method)(*inputs, **kwargs) if type(result) is tuple: # multiple return values return tuple(type(self)(x) for x in result) elif method == 'at': # no return value return None else: # one return value return type(self)(result) def __repr__(self): return '%s(%r)' % (type(self).__name__, self.value) ``` In interactions between `ArrayLike` objects and numbers or numpy arrays, the result is always another `ArrayLike`: ``` >>> x = ArrayLike([1, 2, 3]) >>> x - 1 ArrayLike(array([0, 1, 2])) >>> 1 - x ArrayLike(array([ 0, -1, -2])) >>> np.arange(3) - x ArrayLike(array([-1, -1, -1])) >>> x - np.arange(3) ArrayLike(array([1, 1, 1])) ``` Note that unlike `numpy.ndarray`, `ArrayLike` does not allow operations with arbitrary, unrecognized types. This ensures that interactions with ArrayLike preserve a well-defined casting hierarchy. New in version 1.13. © 2005–2022 NumPy Developers Licensed under the 3-clause BSD License. <https://numpy.org/doc/1.23/reference/generated/numpy.lib.mixins.NDArrayOperatorsMixin.htmlnumpy.broadcast.reset ===================== method broadcast.reset() Reset the broadcasted result’s iterator(s). Parameters **None** Returns None #### Examples ``` >>> x = np.array([1, 2, 3]) >>> y = np.array([[4], [5], [6]]) >>> b = np.broadcast(x, y) >>> b.index 0 >>> next(b), next(b), next(b) ((1, 4), (2, 4), (3, 4)) >>> b.index 3 >>> b.reset() >>> b.index 0 ``` © 2005–2022 NumPy Developers Licensed under the 3-clause BSD License. <https://numpy.org/doc/1.23/reference/generated/numpy.broadcast.reset.htmlnumpy.from_dlpack ================== numpy.from_dlpack(*x*, */*) Create a NumPy array from an object implementing the `__dlpack__` protocol. Generally, the returned NumPy array is a read-only view of the input object. 
See [[1]](#re9eadf7a166b-1) and [[2]](#re9eadf7a166b-2) for more details. Parameters **x**object A Python object that implements the `__dlpack__` and `__dlpack_device__` methods. Returns **out**ndarray #### References [1](#id1) Array API documentation, <https://data-apis.org/array-api/latest/design_topics/data_interchange.html#syntax-for-data-interchange-with-dlpack [2](#id2) Python specification for DLPack, <https://dmlc.github.io/dlpack/latest/python_spec.html #### Examples ``` >>> import torch >>> x = torch.arange(10) >>> # create a view of the torch tensor "x" in NumPy >>> y = np.from_dlpack(x) ``` <https://numpy.org/doc/1.23/reference/generated/numpy.from_dlpack.html> numpy.ma.where ============== ma.where(*condition*, *x=<no value>*, *y=<no value>*)[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/ma/core.py#L7303-L7391) Return a masked array with elements from `x` or `y`, depending on condition. Note When only `condition` is provided, this function is identical to [`nonzero`](numpy.nonzero#numpy.nonzero "numpy.nonzero"). The rest of this documentation covers only the case where all three arguments are provided. Parameters **condition**array_like, bool Where True, yield `x`, otherwise yield `y`. **x, y**array_like, optional Values from which to choose. `x`, `y` and `condition` need to be broadcastable to some shape. Returns **out**MaskedArray A masked array with [`masked`](../maskedarray.baseclass#numpy.ma.masked "numpy.ma.masked") elements where the condition is masked, elements from `x` where `condition` is True, and elements from `y` elsewhere. See also [`numpy.where`](numpy.where#numpy.where "numpy.where") Equivalent function in the top-level NumPy module. [`nonzero`](numpy.nonzero#numpy.nonzero "numpy.nonzero") The function that is called when x and y are omitted. #### Examples ``` >>> x = np.ma.array(np.arange(9.).reshape(3, 3), mask=[[0, 1, 0], ... [1, 0, 1], ... 
[0, 1, 0]]) >>> x masked_array( data=[[0.0, --, 2.0], [--, 4.0, --], [6.0, --, 8.0]], mask=[[False, True, False], [ True, False, True], [False, True, False]], fill_value=1e+20) >>> np.ma.where(x > 5, x, -3.1416) masked_array( data=[[-3.1416, --, -3.1416], [--, -3.1416, --], [6.0, --, 8.0]], mask=[[False, True, False], [ True, False, True], [False, True, False]], fill_value=1e+20) ``` © 2005–2022 NumPy Developers Licensed under the 3-clause BSD License. <https://numpy.org/doc/1.23/reference/generated/numpy.ma.where.htmlnumpy.busday_offset ==================== numpy.busday_offset(*dates*, *offsets*, *roll='raise'*, *weekmask='1111100'*, *holidays=None*, *busdaycal=None*, *out=None*) First adjusts the date to fall on a valid day according to the `roll` rule, then applies offsets to the given dates counted in valid days. New in version 1.7.0. Parameters **dates**array_like of datetime64[D] The array of dates to process. **offsets**array_like of int The array of offsets, which is broadcast with `dates`. **roll**{‘raise’, ‘nat’, ‘forward’, ‘following’, ‘backward’, ‘preceding’, ‘modifiedfollowing’, ‘modifiedpreceding’}, optional How to treat dates that do not fall on a valid day. The default is ‘raise’. * ‘raise’ means to raise an exception for an invalid day. * ‘nat’ means to return a NaT (not-a-time) for an invalid day. * ‘forward’ and ‘following’ mean to take the first valid day later in time. * ‘backward’ and ‘preceding’ mean to take the first valid day earlier in time. * ‘modifiedfollowing’ means to take the first valid day later in time unless it is across a Month boundary, in which case to take the first valid day earlier in time. * ‘modifiedpreceding’ means to take the first valid day earlier in time unless it is across a Month boundary, in which case to take the first valid day later in time. **weekmask**str or array_like of bool, optional A seven-element array indicating which of Monday through Sunday are valid days. 
May be specified as a length-seven list or array, like [1,1,1,1,1,0,0]; a length-seven string, like ‘1111100’; or a string like “Mon Tue Wed Thu Fri”, made up of 3-character abbreviations for weekdays, optionally separated by white space. Valid abbreviations are: Mon Tue Wed Thu Fri Sat Sun **holidays**array_like of datetime64[D], optional An array of dates to consider as invalid dates. They may be specified in any order, and NaT (not-a-time) dates are ignored. This list is saved in a normalized form that is suited for fast calculations of valid days. **busdaycal**busdaycalendar, optional A [`busdaycalendar`](numpy.busdaycalendar#numpy.busdaycalendar "numpy.busdaycalendar") object which specifies the valid days. If this parameter is provided, neither weekmask nor holidays may be provided. **out**array of datetime64[D], optional If provided, this array is filled with the result. Returns **out**array of datetime64[D] An array with a shape from broadcasting `dates` and `offsets` together, containing the dates with offsets applied. See also [`busdaycalendar`](numpy.busdaycalendar#numpy.busdaycalendar "numpy.busdaycalendar") An object that specifies a custom set of valid days. [`is_busday`](numpy.is_busday#numpy.is_busday "numpy.is_busday") Returns a boolean array indicating valid days. [`busday_count`](numpy.busday_count#numpy.busday_count "numpy.busday_count") Counts how many valid days are in a half-open date range. #### Examples ``` >>> # First business day in October 2011 (not accounting for holidays) ... np.busday_offset('2011-10', 0, roll='forward') numpy.datetime64('2011-10-03') >>> # Last business day in February 2012 (not accounting for holidays) ... np.busday_offset('2012-03', -1, roll='forward') numpy.datetime64('2012-02-29') >>> # Third Wednesday in January 2011 ... np.busday_offset('2011-01', 2, roll='forward', weekmask='Wed') numpy.datetime64('2011-01-19') >>> # 2012 Mother's Day in Canada and the U.S. ... 
np.busday_offset('2012-05', 1, roll='forward', weekmask='Sun') numpy.datetime64('2012-05-13') ``` ``` >>> # First business day on or after a date ... np.busday_offset('2011-03-20', 0, roll='forward') numpy.datetime64('2011-03-21') >>> np.busday_offset('2011-03-22', 0, roll='forward') numpy.datetime64('2011-03-22') >>> # First business day after a date ... np.busday_offset('2011-03-20', 1, roll='backward') numpy.datetime64('2011-03-21') >>> np.busday_offset('2011-03-22', 1, roll='backward') numpy.datetime64('2011-03-23') ``` © 2005–2022 NumPy Developers Licensed under the 3-clause BSD License. <https://numpy.org/doc/1.23/reference/generated/numpy.busday_offset.htmlnumpy.busdaycalendar ==================== *class*numpy.busdaycalendar(*weekmask='1111100'*, *holidays=None*)[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/__init__.py) A business day calendar object that efficiently stores information defining valid days for the busday family of functions. The default valid days are Monday through Friday (“business days”). A busdaycalendar object can be specified with any set of weekly valid days, plus an optional “holiday” dates that always will be invalid. Once a busdaycalendar object is created, the weekmask and holidays cannot be modified. New in version 1.7.0. Parameters **weekmask**str or array_like of bool, optional A seven-element array indicating which of Monday through Sunday are valid days. May be specified as a length-seven list or array, like [1,1,1,1,1,0,0]; a length-seven string, like ‘1111100’; or a string like “Mon Tue Wed Thu Fri”, made up of 3-character abbreviations for weekdays, optionally separated by white space. Valid abbreviations are: Mon Tue Wed Thu Fri Sat Sun **holidays**array_like of datetime64[D], optional An array of dates to consider as invalid dates, no matter which weekday they fall upon. Holiday dates may be specified in any order, and NaT (not-a-time) dates are ignored. 
This list is saved in a normalized form that is suited for fast calculations of valid days. Returns **out**busdaycalendar A business day calendar object containing the specified weekmask and holidays values. See also [`is_busday`](numpy.is_busday#numpy.is_busday "numpy.is_busday") Returns a boolean array indicating valid days. [`busday_offset`](numpy.busday_offset#numpy.busday_offset "numpy.busday_offset") Applies an offset counted in valid days. [`busday_count`](numpy.busday_count#numpy.busday_count "numpy.busday_count") Counts how many valid days are in a half-open date range. #### Examples ``` >>> # Some important days in July ... bdd = np.busdaycalendar( ... holidays=['2011-07-01', '2011-07-04', '2011-07-17']) >>> # Default is Monday to Friday weekdays ... bdd.weekmask array([ True, True, True, True, True, False, False]) >>> # Any holidays already on the weekend are removed ... bdd.holidays array(['2011-07-01', '2011-07-04'], dtype='datetime64[D]') ``` Attributes **Note: once a busdaycalendar object is created, you cannot modify the** **weekmask or holidays. The attributes return copies of internal data.** [`weekmask`](numpy.busdaycalendar.weekmask#numpy.busdaycalendar.weekmask "numpy.busdaycalendar.weekmask")(copy) seven-element array of bool A copy of the seven-element boolean mask indicating valid days. [`holidays`](numpy.busdaycalendar.holidays#numpy.busdaycalendar.holidays "numpy.busdaycalendar.holidays")(copy) sorted array of datetime64[D] A copy of the holiday array indicating additional invalid days. © 2005–2022 NumPy Developers Licensed under the 3-clause BSD License. <https://numpy.org/doc/1.23/reference/generated/numpy.busdaycalendar.htmlnumpy.is_busday ================ numpy.is_busday(*dates*, *weekmask='1111100'*, *holidays=None*, *busdaycal=None*, *out=None*) Calculates which of the given dates are valid days, and which are not. New in version 1.7.0. Parameters **dates**array_like of datetime64[D] The array of dates to process. 
**weekmask**str or array_like of bool, optional A seven-element array indicating which of Monday through Sunday are valid days. May be specified as a length-seven list or array, like [1,1,1,1,1,0,0]; a length-seven string, like ‘1111100’; or a string like “Mon Tue Wed Thu Fri”, made up of 3-character abbreviations for weekdays, optionally separated by white space. Valid abbreviations are: Mon Tue Wed Thu Fri Sat Sun **holidays**array_like of datetime64[D], optional An array of dates to consider as invalid dates. They may be specified in any order, and NaT (not-a-time) dates are ignored. This list is saved in a normalized form that is suited for fast calculations of valid days. **busdaycal**busdaycalendar, optional A [`busdaycalendar`](numpy.busdaycalendar#numpy.busdaycalendar "numpy.busdaycalendar") object which specifies the valid days. If this parameter is provided, neither weekmask nor holidays may be provided. **out**array of bool, optional If provided, this array is filled with the result. Returns **out**array of bool An array with the same shape as `dates`, containing True for each valid day, and False for each invalid day. See also [`busdaycalendar`](numpy.busdaycalendar#numpy.busdaycalendar "numpy.busdaycalendar") An object that specifies a custom set of valid days. [`busday_offset`](numpy.busday_offset#numpy.busday_offset "numpy.busday_offset") Applies an offset counted in valid days. [`busday_count`](numpy.busday_count#numpy.busday_count "numpy.busday_count") Counts how many valid days are in a half-open date range. #### Examples ``` >>> # The weekdays are Friday, Saturday, and Monday ... np.is_busday(['2011-07-01', '2011-07-02', '2011-07-18'], ... holidays=['2011-07-01', '2011-07-04', '2011-07-17']) array([False, False, True]) ``` © 2005–2022 NumPy Developers Licensed under the 3-clause BSD License. 
<https://numpy.org/doc/1.23/reference/generated/numpy.is_busday.html>

numpy.busday_count
==================

numpy.busday_count(*begindates*, *enddates*, *weekmask='1111100'*, *holidays=[]*, *busdaycal=None*, *out=None*)

Counts the number of valid days between `begindates` and `enddates`, not including the day of `enddates`.

If `enddates` specifies a date value that is earlier than the corresponding `begindates` date value, the count will be negative.

New in version 1.7.0.

Parameters

**begindates**array_like of datetime64[D]

The array of the first dates for counting.

**enddates**array_like of datetime64[D]

The array of the end dates for counting, which are excluded from the count themselves.

**weekmask**str or array_like of bool, optional

A seven-element array indicating which of Monday through Sunday are valid days. May be specified as a length-seven list or array, like [1,1,1,1,1,0,0]; a length-seven string, like ‘1111100’; or a string like “Mon Tue Wed Thu Fri”, made up of 3-character abbreviations for weekdays, optionally separated by white space. Valid abbreviations are: Mon Tue Wed Thu Fri Sat Sun

**holidays**array_like of datetime64[D], optional

An array of dates to consider as invalid dates. They may be specified in any order, and NaT (not-a-time) dates are ignored. This list is saved in a normalized form that is suited for fast calculations of valid days.

**busdaycal**busdaycalendar, optional

A [`busdaycalendar`](numpy.busdaycalendar#numpy.busdaycalendar "numpy.busdaycalendar") object which specifies the valid days. If this parameter is provided, neither weekmask nor holidays may be provided.

**out**array of int, optional

If provided, this array is filled with the result.

Returns

**out**array of int

An array with a shape from broadcasting `begindates` and `enddates` together, containing the number of valid days between the begin and end dates.
See also [`busdaycalendar`](numpy.busdaycalendar#numpy.busdaycalendar "numpy.busdaycalendar") An object that specifies a custom set of valid days. [`is_busday`](numpy.is_busday#numpy.is_busday "numpy.is_busday") Returns a boolean array indicating valid days. [`busday_offset`](numpy.busday_offset#numpy.busday_offset "numpy.busday_offset") Applies an offset counted in valid days. #### Examples ``` >>> # Number of weekdays in January 2011 ... np.busday_count('2011-01', '2011-02') 21 >>> # Number of weekdays in 2011 >>> np.busday_count('2011', '2012') 260 >>> # Number of Saturdays in 2011 ... np.busday_count('2011', '2012', weekmask='Sat') 53 ``` © 2005–2022 NumPy Developers Licensed under the 3-clause BSD License. <https://numpy.org/doc/1.23/reference/generated/numpy.busday_count.htmlnumpy.full ========== numpy.full(*shape*, *fill_value*, *dtype=None*, *order='C'*, ***, *like=None*)[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/core/numeric.py#L289-L345) Return a new array of given shape and type, filled with `fill_value`. Parameters **shape**int or sequence of ints Shape of the new array, e.g., `(2, 3)` or `2`. **fill_value**scalar or array_like Fill value. **dtype**data-type, optional The desired data-type for the array The default, None, means `np.array(fill_value).dtype`. **order**{‘C’, ‘F’}, optional Whether to store multidimensional data in C- or Fortran-contiguous (row- or column-wise) order in memory. **like**array_like, optional Reference object to allow the creation of arrays which are not NumPy arrays. If an array-like passed in as `like` supports the `__array_function__` protocol, the result will be defined by it. In this case, it ensures the creation of an array object compatible with that passed in via this argument. New in version 1.20.0. Returns **out**ndarray Array of `fill_value` with the given shape, dtype, and order. 
See also [`full_like`](numpy.full_like#numpy.full_like "numpy.full_like") Return a new array with shape of input filled with value. [`empty`](numpy.empty#numpy.empty "numpy.empty") Return a new uninitialized array. [`ones`](numpy.ones#numpy.ones "numpy.ones") Return a new array setting values to one. [`zeros`](numpy.zeros#numpy.zeros "numpy.zeros") Return a new array setting values to zero. #### Examples ``` >>> np.full((2, 2), np.inf) array([[inf, inf], [inf, inf]]) >>> np.full((2, 2), 10) array([[10, 10], [10, 10]]) ``` ``` >>> np.full((2, 2), [1, 2]) array([[1, 2], [1, 2]]) ``` © 2005–2022 NumPy Developers Licensed under the 3-clause BSD License. <https://numpy.org/doc/1.23/reference/generated/numpy.full.htmlnumpy.full_like ================ numpy.full_like(*a*, *fill_value*, *dtype=None*, *order='K'*, *subok=True*, *shape=None*)[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/core/numeric.py#L357-L424) Return a full array with the same shape and type as a given array. Parameters **a**array_like The shape and data-type of `a` define these same attributes of the returned array. **fill_value**array_like Fill value. **dtype**data-type, optional Overrides the data type of the result. **order**{‘C’, ‘F’, ‘A’, or ‘K’}, optional Overrides the memory layout of the result. ‘C’ means C-order, ‘F’ means F-order, ‘A’ means ‘F’ if `a` is Fortran contiguous, ‘C’ otherwise. ‘K’ means match the layout of `a` as closely as possible. **subok**bool, optional. If True, then the newly created array will use the sub-class type of `a`, otherwise it will be a base-class array. Defaults to True. **shape**int or sequence of ints, optional. Overrides the shape of the result. If order=’K’ and the number of dimensions is unchanged, will try to keep order, otherwise, order=’C’ is implied. New in version 1.17.0. Returns **out**ndarray Array of `fill_value` with the same shape and type as `a`. 
See also [`empty_like`](numpy.empty_like#numpy.empty_like "numpy.empty_like") Return an empty array with shape and type of input. [`ones_like`](numpy.ones_like#numpy.ones_like "numpy.ones_like") Return an array of ones with shape and type of input. [`zeros_like`](numpy.zeros_like#numpy.zeros_like "numpy.zeros_like") Return an array of zeros with shape and type of input. [`full`](numpy.full#numpy.full "numpy.full") Return a new array of given shape filled with value. #### Examples ``` >>> x = np.arange(6, dtype=int) >>> np.full_like(x, 1) array([1, 1, 1, 1, 1, 1]) >>> np.full_like(x, 0.1) array([0, 0, 0, 0, 0, 0]) >>> np.full_like(x, 0.1, dtype=np.double) array([0.1, 0.1, 0.1, 0.1, 0.1, 0.1]) >>> np.full_like(x, np.nan, dtype=np.double) array([nan, nan, nan, nan, nan, nan]) ``` ``` >>> y = np.arange(6, dtype=np.double) >>> np.full_like(y, 0.1) array([0.1, 0.1, 0.1, 0.1, 0.1, 0.1]) ``` ``` >>> y = np.zeros([2, 2, 3], dtype=int) >>> np.full_like(y, [0, 0, 255]) array([[[ 0, 0, 255], [ 0, 0, 255]], [[ 0, 0, 255], [ 0, 0, 255]]]) ``` © 2005–2022 NumPy Developers Licensed under the 3-clause BSD License. <https://numpy.org/doc/1.23/reference/generated/numpy.full_like.htmlnumpy.ascontiguousarray ======================= numpy.ascontiguousarray(*a*, *dtype=None*, ***, *like=None*) Return a contiguous array (ndim >= 1) in memory (C order). Parameters **a**array_like Input array. **dtype**str or dtype object, optional Data-type of returned array. **like**array_like, optional Reference object to allow the creation of arrays which are not NumPy arrays. If an array-like passed in as `like` supports the `__array_function__` protocol, the result will be defined by it. In this case, it ensures the creation of an array object compatible with that passed in via this argument. New in version 1.20.0. Returns **out**ndarray Contiguous array of same shape and content as `a`, with type [`dtype`](numpy.dtype#numpy.dtype "numpy.dtype") if specified. 
See also

[`asfortranarray`](numpy.asfortranarray#numpy.asfortranarray "numpy.asfortranarray") Convert input to an ndarray with column-major memory order.

[`require`](numpy.require#numpy.require "numpy.require") Return an ndarray that satisfies requirements.

[`ndarray.flags`](numpy.ndarray.flags#numpy.ndarray.flags "numpy.ndarray.flags") Information about the memory layout of the array.

#### Examples

```
>>> x = np.arange(6).reshape(2,3)
>>> np.ascontiguousarray(x, dtype=np.float32)
array([[0., 1., 2.],
       [3., 4., 5.]], dtype=float32)
>>> x.flags['C_CONTIGUOUS']
True
```

Note: This function returns an array with at least one-dimension (1-d) so it will not preserve 0-d arrays.

© 2005–2022 NumPy Developers
Licensed under the 3-clause BSD License.
<https://numpy.org/doc/1.23/reference/generated/numpy.ascontiguousarray.html>

numpy.frombuffer
================

numpy.frombuffer(*buffer*, *dtype=float*, *count=-1*, *offset=0*, ***, *like=None*)

Interpret a buffer as a 1-dimensional array.

Parameters

**buffer**buffer_like

An object that exposes the buffer interface.

**dtype**data-type, optional

Data-type of the returned array; default: float.

**count**int, optional

Number of items to read. `-1` means all data in the buffer.

**offset**int, optional

Start reading the buffer from this offset (in bytes); default: 0.

**like**array_like, optional

Reference object to allow the creation of arrays which are not NumPy arrays. If an array-like passed in as `like` supports the `__array_function__` protocol, the result will be defined by it. In this case, it ensures the creation of an array object compatible with that passed in via this argument.

New in version 1.20.0.
Returns **out**ndarray #### Notes If the buffer has data that is not in machine byte-order, this should be specified as part of the data-type, e.g.: ``` >>> dt = np.dtype(int) >>> dt = dt.newbyteorder('>') >>> np.frombuffer(buf, dtype=dt) ``` The data of the resulting array will not be byteswapped, but will be interpreted correctly. This function creates a view into the original object. This should be safe in general, but it may make sense to copy the result when the original object is mutable or untrusted. #### Examples ``` >>> s = b'hello world' >>> np.frombuffer(s, dtype='S1', count=5, offset=6) array([b'w', b'o', b'r', b'l', b'd'], dtype='|S1') ``` ``` >>> np.frombuffer(b'\x01\x02', dtype=np.uint8) array([1, 2], dtype=uint8) >>> np.frombuffer(b'\x01\x02\x03\x04\x05', dtype=np.uint8, count=3) array([1, 2, 3], dtype=uint8) ``` © 2005–2022 NumPy Developers Licensed under the 3-clause BSD License. <https://numpy.org/doc/1.23/reference/generated/numpy.frombuffer.htmlnumpy.fromiter ============== numpy.fromiter(*iter*, *dtype*, *count=- 1*, ***, *like=None*) Create a new 1-dimensional array from an iterable object. Parameters **iter**iterable object An iterable object providing data for the array. **dtype**data-type The data-type of the returned array. Changed in version 1.23: Object and subarray dtypes are now supported (note that the final result is not 1-D for a subarray dtype). **count**int, optional The number of items to read from *iterable*. The default is -1, which means all data is read. **like**array_like, optional Reference object to allow the creation of arrays which are not NumPy arrays. If an array-like passed in as `like` supports the `__array_function__` protocol, the result will be defined by it. In this case, it ensures the creation of an array object compatible with that passed in via this argument. New in version 1.20.0. Returns **out**ndarray The output array. #### Notes Specify `count` to improve performance. 
It allows `fromiter` to pre-allocate the output array, instead of resizing it on demand. #### Examples ``` >>> iterable = (x*x for x in range(5)) >>> np.fromiter(iterable, float) array([ 0., 1., 4., 9., 16.]) ``` A carefully constructed subarray dtype will lead to higher dimensional results: ``` >>> iterable = ((x+1, x+2) for x in range(5)) >>> np.fromiter(iterable, dtype=np.dtype((int, 2))) array([[1, 2], [2, 3], [3, 4], [4, 5], [5, 6]]) ``` © 2005–2022 NumPy Developers Licensed under the 3-clause BSD License. <https://numpy.org/doc/1.23/reference/generated/numpy.fromiter.htmlnumpy.fromstring ================ numpy.fromstring(*string*, *dtype=float*, *count=- 1*, ***, *sep*, *like=None*) A new 1-D array initialized from text data in a string. Parameters **string**str A string containing the data. **dtype**data-type, optional The data type of the array; default: float. For binary input data, the data must be in exactly this format. Most builtin numeric types are supported and extension types may be supported. New in version 1.18.0: Complex dtypes. **count**int, optional Read this number of [`dtype`](numpy.dtype#numpy.dtype "numpy.dtype") elements from the data. If this is negative (the default), the count will be determined from the length of the data. **sep**str, optional The string separating numbers in the data; extra whitespace between elements is also ignored. Deprecated since version 1.14: Passing `sep=''`, the default, is deprecated since it will trigger the deprecated binary mode of this function. This mode interprets [`string`](https://docs.python.org/3/library/string.html#module-string "(in Python v3.10)") as binary bytes, rather than ASCII text with decimal numbers, an operation which is better spelt `frombuffer(string, dtype, count)`. 
If [`string`](https://docs.python.org/3/library/string.html#module-string "(in Python v3.10)") contains unicode text, the binary mode of [`fromstring`](#numpy.fromstring "numpy.fromstring") will first encode it into bytes using either utf-8 (python 3) or the default encoding (python 2), neither of which produce sane results. **like**array_like, optional Reference object to allow the creation of arrays which are not NumPy arrays. If an array-like passed in as `like` supports the `__array_function__` protocol, the result will be defined by it. In this case, it ensures the creation of an array object compatible with that passed in via this argument. New in version 1.20.0. Returns **arr**ndarray The constructed array. Raises ValueError If the string is not the correct size to satisfy the requested [`dtype`](numpy.dtype#numpy.dtype "numpy.dtype") and `count`. See also [`frombuffer`](numpy.frombuffer#numpy.frombuffer "numpy.frombuffer"), [`fromfile`](numpy.fromfile#numpy.fromfile "numpy.fromfile"), [`fromiter`](numpy.fromiter#numpy.fromiter "numpy.fromiter") #### Examples ``` >>> np.fromstring('1 2', dtype=int, sep=' ') array([1, 2]) >>> np.fromstring('1, 2', dtype=int, sep=',') array([1, 2]) ``` © 2005–2022 NumPy Developers Licensed under the 3-clause BSD License. <https://numpy.org/doc/1.23/reference/generated/numpy.fromstring.htmlnumpy.loadtxt ============= numpy.loadtxt(*fname*, *dtype=<class 'float'>*, *comments='#'*, *delimiter=None*, *converters=None*, *skiprows=0*, *usecols=None*, *unpack=False*, *ndmin=0*, *encoding='bytes'*, *max_rows=None*, ***, *quotechar=None*, *like=None*)[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/lib/npyio.py#L1061-L1306) Load data from a text file. Each row in the text file must have the same number of values. Parameters **fname**file, str, pathlib.Path, list of str, generator File, filename, list, or generator to read. If the filename extension is `.gz` or `.bz2`, the file is first decompressed. 
Note that generators must return bytes or strings. The strings in a list or produced by a generator are treated as lines. **dtype**data-type, optional Data-type of the resulting array; default: float. If this is a structured data-type, the resulting array will be 1-dimensional, and each row will be interpreted as an element of the array. In this case, the number of columns used must match the number of fields in the data-type. **comments**str or sequence of str or None, optional The characters or list of characters used to indicate the start of a comment. None implies no comments. For backwards compatibility, byte strings will be decoded as ‘latin1’. The default is ‘#’. **delimiter**str, optional The string used to separate values. For backwards compatibility, byte strings will be decoded as ‘latin1’. The default is whitespace. **converters**dict or callable, optional A function to parse all columns strings into the desired value, or a dictionary mapping column number to a parser function. E.g. if column 0 is a date string: `converters = {0: datestr2num}`. Converters can also be used to provide a default value for missing data, e.g. `converters = lambda s: float(s.strip() or 0)` will convert empty fields to 0. Default: None. **skiprows**int, optional Skip the first `skiprows` lines, including comments; default: 0. **usecols**int or sequence, optional Which columns to read, with 0 being the first. For example, `usecols = (1,4,5)` will extract the 2nd, 5th and 6th columns. The default, None, results in all columns being read. Changed in version 1.11.0: When a single column has to be read it is possible to use an integer instead of a tuple. E.g `usecols = 3` reads the fourth column the same way as `usecols = (3,)` would. **unpack**bool, optional If True, the returned array is transposed, so that arguments may be unpacked using `x, y, z = loadtxt(...)`. When used with a structured data-type, arrays are returned for each field. Default is False. 
**ndmin**int, optional The returned array will have at least `ndmin` dimensions. Otherwise mono-dimensional axes will be squeezed. Legal values: 0 (default), 1 or 2. New in version 1.6.0. **encoding**str, optional Encoding used to decode the inputfile. Does not apply to input streams. The special value ‘bytes’ enables backward compatibility workarounds that ensures you receive byte arrays as results if possible and passes ‘latin1’ encoded strings to converters. Override this value to receive unicode arrays and pass strings as input to converters. If set to None the system default is used. The default value is ‘bytes’. New in version 1.14.0. **max_rows**int, optional Read `max_rows` lines of content after `skiprows` lines. The default is to read all the lines. New in version 1.16.0. **quotechar**unicode character or None, optional The character used to denote the start and end of a quoted item. Occurrences of the delimiter or comment characters are ignored within a quoted item. The default value is `quotechar=None`, which means quoting support is disabled. If two consecutive instances of `quotechar` are found within a quoted field, the first is treated as an escape character. See examples. New in version 1.23.0. **like**array_like, optional Reference object to allow the creation of arrays which are not NumPy arrays. If an array-like passed in as `like` supports the `__array_function__` protocol, the result will be defined by it. In this case, it ensures the creation of an array object compatible with that passed in via this argument. New in version 1.20.0. Returns **out**ndarray Data read from the text file. See also [`load`](numpy.load#numpy.load "numpy.load"), [`fromstring`](numpy.fromstring#numpy.fromstring "numpy.fromstring"), [`fromregex`](numpy.fromregex#numpy.fromregex "numpy.fromregex") [`genfromtxt`](numpy.genfromtxt#numpy.genfromtxt "numpy.genfromtxt") Load data with missing values handled as specified. 
[`scipy.io.loadmat`](https://docs.scipy.org/doc/scipy/reference/generated/scipy.io.loadmat.html#scipy.io.loadmat "(in SciPy v1.8.1)") reads MATLAB data files #### Notes This function aims to be a fast reader for simply formatted files. The [`genfromtxt`](numpy.genfromtxt#numpy.genfromtxt "numpy.genfromtxt") function provides more sophisticated handling of, e.g., lines with missing values. New in version 1.10.0. The strings produced by the Python float.hex method can be used as input for floats. #### Examples ``` >>> from io import StringIO # StringIO behaves like a file object >>> c = StringIO("0 1\n2 3") >>> np.loadtxt(c) array([[0., 1.], [2., 3.]]) ``` ``` >>> d = StringIO("M 21 72\nF 35 58") >>> np.loadtxt(d, dtype={'names': ('gender', 'age', 'weight'), ... 'formats': ('S1', 'i4', 'f4')}) array([(b'M', 21, 72.), (b'F', 35, 58.)], dtype=[('gender', 'S1'), ('age', '<i4'), ('weight', '<f4')]) ``` ``` >>> c = StringIO("1,0,2\n3,0,4") >>> x, y = np.loadtxt(c, delimiter=',', usecols=(0, 2), unpack=True) >>> x array([1., 3.]) >>> y array([2., 4.]) ``` The `converters` argument is used to specify functions to preprocess the text prior to parsing. `converters` can be a dictionary that maps preprocessing functions to each column: ``` >>> s = StringIO("1.618, 2.296\n3.141, 4.669\n") >>> conv = { ... 0: lambda x: np.floor(float(x)), # conversion fn for column 0 ... 1: lambda x: np.ceil(float(x)), # conversion fn for column 1 ... } >>> np.loadtxt(s, delimiter=",", converters=conv) array([[1., 3.], [3., 5.]]) ``` `converters` can be a callable instead of a dictionary, in which case it is applied to all columns: ``` >>> s = StringIO("0xDE 0xAD\n0xC0 0xDE") >>> import functools >>> conv = functools.partial(int, base=16) >>> np.loadtxt(s, converters=conv) array([[222., 173.], [192., 222.]]) ``` This example shows how `converters` can be used to convert a field with a trailing minus sign into a negative number. 
``` >>> s = StringIO('10.01 31.25-\n19.22 64.31\n17.57- 63.94') >>> def conv(fld): ... return -float(fld[:-1]) if fld.endswith(b'-') else float(fld) ... >>> np.loadtxt(s, converters=conv) array([[ 10.01, -31.25], [ 19.22, 64.31], [-17.57, 63.94]]) ``` Using a callable as the converter can be particularly useful for handling values with different formatting, e.g. floats with underscores: ``` >>> s = StringIO("1 2.7 100_000") >>> np.loadtxt(s, converters=float) array([1.e+00, 2.7e+00, 1.e+05]) ``` This idea can be extended to automatically handle values specified in many different formats: ``` >>> def conv(val): ... try: ... return float(val) ... except ValueError: ... return float.fromhex(val) >>> s = StringIO("1, 2.5, 3_000, 0b4, 0x1.4000000000000p+2") >>> np.loadtxt(s, delimiter=",", converters=conv, encoding=None) array([1.0e+00, 2.5e+00, 3.0e+03, 1.8e+02, 5.0e+00]) ``` Note that with the default `encoding="bytes"`, the inputs to the converter function are latin-1 encoded byte strings. To deactivate the implicit encoding prior to conversion, use `encoding=None` ``` >>> s = StringIO('10.01 31.25-\n19.22 64.31\n17.57- 63.94') >>> conv = lambda x: -float(x[:-1]) if x.endswith('-') else float(x) >>> np.loadtxt(s, converters=conv, encoding=None) array([[ 10.01, -31.25], [ 19.22, 64.31], [-17.57, 63.94]]) ``` Support for quoted fields is enabled with the `quotechar` parameter. 
Comment and delimiter characters are ignored when they appear within a quoted item delineated by `quotechar`: ``` >>> s = StringIO('"alpha, #42", 10.0\n"beta, #64", 2.0\n') >>> dtype = np.dtype([("label", "U12"), ("value", float)]) >>> np.loadtxt(s, dtype=dtype, delimiter=",", quotechar='"') array([('alpha, #42', 10.), ('beta, #64', 2.)], dtype=[('label', '<U12'), ('value', '<f8')]) ``` Two consecutive quote characters within a quoted field are treated as a single escaped character: ``` >>> s = StringIO('"Hello, my name is ""Monty""!"') >>> np.loadtxt(s, dtype="U", delimiter=",", quotechar='"') array('Hello, my name is "Monty"!', dtype='<U26') ``` © 2005–2022 NumPy Developers Licensed under the 3-clause BSD License. <https://numpy.org/doc/1.23/reference/generated/numpy.loadtxt.htmlnumpy.core.records.array ======================== core.records.array(*obj*, *dtype=None*, *shape=None*, *offset=0*, *strides=None*, *formats=None*, *names=None*, *titles=None*, *aligned=False*, *byteorder=None*, *copy=True*)[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/core/records.py#L953-L1099) Construct a record array from a wide-variety of objects. A general-purpose record array constructor that dispatches to the appropriate [`recarray`](numpy.recarray#numpy.recarray "numpy.recarray") creation function based on the inputs (see Notes). Parameters **obj**any Input object. See Notes for details on how various input types are treated. **dtype**data-type, optional Valid dtype for array. **shape**int or tuple of ints, optional Shape of each array. **offset**int, optional Position in the file or buffer to start reading from. **strides**tuple of ints, optional Buffer (`buf`) is interpreted according to these strides (strides define how many bytes each array element, row, column, etc. occupy in memory). 
**formats, names, titles, aligned, byteorder** If [`dtype`](numpy.dtype#numpy.dtype "numpy.dtype") is `None`, these arguments are passed to [`numpy.format_parser`](numpy.format_parser#numpy.format_parser "numpy.format_parser") to construct a dtype. See that function for detailed documentation. **copy**bool, optional Whether to copy the input object (True), or to use a reference instead. This option only applies when the input is an ndarray or recarray. Defaults to True. Returns np.recarray Record array created from the specified object. #### Notes If `obj` is `None`, then call the [`recarray`](numpy.recarray#numpy.recarray "numpy.recarray") constructor. If `obj` is a string, then call the [`fromstring`](numpy.fromstring#numpy.fromstring "numpy.fromstring") constructor. If `obj` is a list or a tuple, then if the first object is an [`ndarray`](numpy.ndarray#numpy.ndarray "numpy.ndarray"), call [`fromarrays`](numpy.core.records.fromarrays#numpy.core.records.fromarrays "numpy.core.records.fromarrays"), otherwise call [`fromrecords`](numpy.core.records.fromrecords#numpy.core.records.fromrecords "numpy.core.records.fromrecords"). If `obj` is a [`recarray`](numpy.recarray#numpy.recarray "numpy.recarray"), then make a copy of the data in the recarray (if `copy=True`) and use the new formats, names, and titles. If `obj` is a file, then call [`fromfile`](numpy.fromfile#numpy.fromfile "numpy.fromfile"). Finally, if obj is an [`ndarray`](numpy.ndarray#numpy.ndarray "numpy.ndarray"), then return `obj.view(recarray)`, making a copy of the data if `copy=True`. 
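The dispatch rules above can be exercised directly (a minimal sketch, using `np.rec.array`, the shorthand alias for `np.core.records.array`):

```python
import numpy as np

# A list whose first element is an ndarray -> dispatches to fromarrays.
cols = [np.array([1, 2]), np.array([3.0, 4.0])]
r1 = np.rec.array(cols, names="a,b")

# A list of tuples -> dispatches to fromrecords.
rows = [(1, 3.0), (2, 4.0)]
r2 = np.rec.array(rows, names="a,b")

# A structured ndarray -> returned as a recarray view (copied if copy=True).
a = np.array([(1, 3.0), (2, 4.0)], dtype=[("a", "i8"), ("b", "f8")])
r3 = np.rec.array(a)

# All three results expose their fields as attributes.
print(r1.a, r2.a, r3.b)
```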
#### Examples

```
>>> a = np.array([[1, 2, 3], [4, 5, 6], [7, 8, 9]])
>>> a
array([[1, 2, 3],
       [4, 5, 6],
       [7, 8, 9]])
```

```
>>> np.core.records.array(a)
rec.array([[1, 2, 3],
           [4, 5, 6],
           [7, 8, 9]],
          dtype=int32)
```

```
>>> b = [(1, 1), (2, 4), (3, 9)]
>>> c = np.core.records.array(b, formats = ['i2', 'f2'], names = ('x', 'y'))
>>> c
rec.array([(1, 1.0), (2, 4.0), (3, 9.0)],
          dtype=[('x', '<i2'), ('y', '<f2')])
```

```
>>> c.x
rec.array([1, 2, 3], dtype=int16)
```

```
>>> c.y
rec.array([ 1.0, 4.0, 9.0], dtype=float16)
```

```
>>> r = np.rec.array(['abc','def'], names=['col1','col2'])
>>> print(r.col1)
abc
```

```
>>> r.col1
array('abc', dtype='<U3')
```

```
>>> r.col2
array('def', dtype='<U3')
```

© 2005–2022 NumPy Developers Licensed under the 3-clause BSD License. <https://numpy.org/doc/1.23/reference/generated/numpy.core.records.array.html>

numpy.core.records.fromarrays
=============================

core.records.fromarrays(*arrayList*, *dtype=None*, *shape=None*, *formats=None*, *names=None*, *titles=None*, *aligned=False*, *byteorder=None*)[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/core/records.py#L588-L680)

Create a record array from a (flat) list of arrays.

Parameters

**arrayList**list or tuple

List of array-like objects (such as lists, tuples, and ndarrays).

**dtype**data-type, optional

Valid dtype for all arrays.

**shape**int or tuple of ints, optional

Shape of the resulting array. If not provided, inferred from `arrayList[0]`.

**formats, names, titles, aligned, byteorder**

If [`dtype`](numpy.dtype#numpy.dtype "numpy.dtype") is `None`, these arguments are passed to [`numpy.format_parser`](numpy.format_parser#numpy.format_parser "numpy.format_parser") to construct a dtype. See that function for detailed documentation.

Returns

np.recarray

Record array consisting of given arrayList columns.
#### Examples ``` >>> x1=np.array([1,2,3,4]) >>> x2=np.array(['a','dd','xyz','12']) >>> x3=np.array([1.1,2,3,4]) >>> r = np.core.records.fromarrays([x1,x2,x3],names='a,b,c') >>> print(r[1]) (2, 'dd', 2.0) # may vary >>> x1[1]=34 >>> r.a array([1, 2, 3, 4]) ``` ``` >>> x1 = np.array([1, 2, 3, 4]) >>> x2 = np.array(['a', 'dd', 'xyz', '12']) >>> x3 = np.array([1.1, 2, 3,4]) >>> r = np.core.records.fromarrays( ... [x1, x2, x3], ... dtype=np.dtype([('a', np.int32), ('b', 'S3'), ('c', np.float32)])) >>> r rec.array([(1, b'a', 1.1), (2, b'dd', 2. ), (3, b'xyz', 3. ), (4, b'12', 4. )], dtype=[('a', '<i4'), ('b', 'S3'), ('c', '<f4')]) ``` © 2005–2022 NumPy Developers Licensed under the 3-clause BSD License. <https://numpy.org/doc/1.23/reference/generated/numpy.core.records.fromarrays.htmlnumpy.core.records.fromrecords ============================== core.records.fromrecords(*recList*, *dtype=None*, *shape=None*, *formats=None*, *names=None*, *titles=None*, *aligned=False*, *byteorder=None*)[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/core/records.py#L683-L765) Create a recarray from a list of records in text form. Parameters **recList**sequence data in the same field may be heterogeneous - they will be promoted to the highest data type. **dtype**data-type, optional valid dtype for all arrays **shape**int or tuple of ints, optional shape of each array. **formats, names, titles, aligned, byteorder** If [`dtype`](numpy.dtype#numpy.dtype "numpy.dtype") is `None`, these arguments are passed to [`numpy.format_parser`](numpy.format_parser#numpy.format_parser "numpy.format_parser") to construct a dtype. See that function for detailed documentation. If both `formats` and [`dtype`](numpy.dtype#numpy.dtype "numpy.dtype") are None, then this will auto-detect formats. Use list of tuples rather than list of lists for faster processing. Returns np.recarray record array consisting of given recList rows. 
#### Examples ``` >>> r=np.core.records.fromrecords([(456,'dbe',1.2),(2,'de',1.3)], ... names='col1,col2,col3') >>> print(r[0]) (456, 'dbe', 1.2) >>> r.col1 array([456, 2]) >>> r.col2 array(['dbe', 'de'], dtype='<U3') >>> import pickle >>> pickle.loads(pickle.dumps(r)) rec.array([(456, 'dbe', 1.2), ( 2, 'de', 1.3)], dtype=[('col1', '<i8'), ('col2', '<U3'), ('col3', '<f8')]) ``` © 2005–2022 NumPy Developers Licensed under the 3-clause BSD License. <https://numpy.org/doc/1.23/reference/generated/numpy.core.records.fromrecords.htmlnumpy.core.records.fromstring ============================= core.records.fromstring(*datastring*, *dtype=None*, *shape=None*, *offset=0*, *formats=None*, *names=None*, *titles=None*, *aligned=False*, *byteorder=None*)[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/core/records.py#L768-L841) Create a record array from binary data Note that despite the name of this function it does not accept [`str`](https://docs.python.org/3/library/stdtypes.html#str "(in Python v3.10)") instances. Parameters **datastring**bytes-like Buffer of binary data **dtype**data-type, optional Valid dtype for all arrays **shape**int or tuple of ints, optional Shape of each array. **offset**int, optional Position in the buffer to start reading from. **formats, names, titles, aligned, byteorder** If [`dtype`](numpy.dtype#numpy.dtype "numpy.dtype") is `None`, these arguments are passed to [`numpy.format_parser`](numpy.format_parser#numpy.format_parser "numpy.format_parser") to construct a dtype. See that function for detailed documentation. Returns np.recarray Record array view into the data in datastring. This will be readonly if `datastring` is readonly. 
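To make the buffer semantics concrete, here is a small sketch (not from the original page) that packs two records with the standard `struct` module and reads them back; the field names `tag` and `val` are arbitrary choices:

```python
import struct
import numpy as np

# Two packed (uint8, little-endian float64) records, 9 bytes each.
buf = struct.pack("<Bd", 1, 2.5) + struct.pack("<Bd", 3, 4.5)

# aligned=False (the default) matches struct's "<" packed layout exactly.
r = np.core.records.fromstring(buf, formats="u1,<f8", names="tag,val")
print(r.tag, r.val)  # each field comes back as an array of length 2
```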
See also [`numpy.frombuffer`](numpy.frombuffer#numpy.frombuffer "numpy.frombuffer") #### Examples ``` >>> a = b'\x01\x02\x03abc' >>> np.core.records.fromstring(a, dtype='u1,u1,u1,S3') rec.array([(1, 2, 3, b'abc')], dtype=[('f0', 'u1'), ('f1', 'u1'), ('f2', 'u1'), ('f3', 'S3')]) ``` ``` >>> grades_dtype = [('Name', (np.str_, 10)), ('Marks', np.float64), ... ('GradeLevel', np.int32)] >>> grades_array = np.array([('Sam', 33.3, 3), ('Mike', 44.4, 5), ... ('Aadi', 66.6, 6)], dtype=grades_dtype) >>> np.core.records.fromstring(grades_array.tobytes(), dtype=grades_dtype) rec.array([('Sam', 33.3, 3), ('Mike', 44.4, 5), ('Aadi', 66.6, 6)], dtype=[('Name', '<U10'), ('Marks', '<f8'), ('GradeLevel', '<i4')]) ``` ``` >>> s = '\x01\x02\x03abc' >>> np.core.records.fromstring(s, dtype='u1,u1,u1,S3') Traceback (most recent call last) ... TypeError: a bytes-like object is required, not 'str' ``` © 2005–2022 NumPy Developers Licensed under the 3-clause BSD License. <https://numpy.org/doc/1.23/reference/generated/numpy.core.records.fromstring.htmlnumpy.core.records.fromfile =========================== core.records.fromfile(*fd*, *dtype=None*, *shape=None*, *offset=0*, *formats=None*, *names=None*, *titles=None*, *aligned=False*, *byteorder=None*)[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/core/records.py#L852-L950) Create an array from binary file data Parameters **fd**str or file type If file is a string or a path-like object then that file is opened, else it is assumed to be a file object. The file object must support random access (i.e. it must have tell and seek methods). **dtype**data-type, optional valid dtype for all arrays **shape**int or tuple of ints, optional shape of each array. **offset**int, optional Position in the file to start reading from. 
**formats, names, titles, aligned, byteorder** If [`dtype`](numpy.dtype#numpy.dtype "numpy.dtype") is `None`, these arguments are passed to [`numpy.format_parser`](numpy.format_parser#numpy.format_parser "numpy.format_parser") to construct a dtype. See that function for detailed documentation Returns np.recarray record array consisting of data enclosed in file. #### Examples ``` >>> from tempfile import TemporaryFile >>> a = np.empty(10,dtype='f8,i4,a5') >>> a[5] = (0.5,10,'abcde') >>> >>> fd=TemporaryFile() >>> a = a.newbyteorder('<') >>> a.tofile(fd) >>> >>> _ = fd.seek(0) >>> r=np.core.records.fromfile(fd, formats='f8,i4,a5', shape=10, ... byteorder='<') >>> print(r[5]) (0.5, 10, 'abcde') >>> r.shape (10,) ``` © 2005–2022 NumPy Developers Licensed under the 3-clause BSD License. <https://numpy.org/doc/1.23/reference/generated/numpy.core.records.fromfile.htmlnumpy.core.defchararray.asarray =============================== core.defchararray.asarray(*obj*, *itemsize=None*, *unicode=None*, *order=None*)[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/core/defchararray.py#L2746-L2796) Convert the input to a [`chararray`](numpy.chararray#numpy.chararray "numpy.chararray"), copying the data only if necessary. Versus a regular NumPy array of type [`str`](https://docs.python.org/3/library/stdtypes.html#str "(in Python v3.10)") or `unicode`, this class adds the following functionality: 1. values automatically have whitespace removed from the end when indexed 2. comparison operators automatically remove whitespace from the end when comparing values 3. vectorized string operations are provided as methods (e.g. [`str.endswith`](https://docs.python.org/3/library/stdtypes.html#str.endswith "(in Python v3.10)")) and infix operators (e.g. `+`, `*`,``%``) Parameters **obj**array of str or unicode-like **itemsize**int, optional `itemsize` is the number of characters per scalar in the resulting array. 
If `itemsize` is None, and `obj` is an object array or a Python list, the `itemsize` will be automatically determined. If `itemsize` is provided and `obj` is of type str or unicode, then the `obj` string will be chunked into `itemsize` pieces.

**unicode**bool, optional

When true, the resulting [`chararray`](numpy.chararray#numpy.chararray "numpy.chararray") can contain Unicode characters; when false, only 8-bit characters. If unicode is None and `obj` is one of the following:

* a [`chararray`](numpy.chararray#numpy.chararray "numpy.chararray"),
* an ndarray of type [`str`](https://docs.python.org/3/library/stdtypes.html#str "(in Python v3.10)") or `unicode`,
* a Python str or unicode object,

then the unicode setting of the output array will be automatically determined.

**order**{‘C’, ‘F’}, optional

Specify the order of the array. If order is ‘C’ (default), then the array will be in C-contiguous order (last-index varies the fastest). If order is ‘F’, then the returned array will be in Fortran-contiguous order (first-index varies the fastest).

© 2005–2022 NumPy Developers Licensed under the 3-clause BSD License. <https://numpy.org/doc/1.23/reference/generated/numpy.core.defchararray.asarray.html>

numpy.geomspace
===============

numpy.geomspace(*start*, *stop*, *num=50*, *endpoint=True*, *dtype=None*, *axis=0*)[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/core/function_base.py#L286-L440)

Return numbers spaced evenly on a log scale (a geometric progression).

This is similar to [`logspace`](numpy.logspace#numpy.logspace "numpy.logspace"), but with endpoints specified directly. Each output sample is a constant multiple of the previous.

Changed in version 1.16.0: Non-scalar `start` and `stop` are now supported.

Parameters

**start**array_like

The starting value of the sequence.

**stop**array_like

The final value of the sequence, unless `endpoint` is False.
In that case, `num + 1` values are spaced over the interval in log-space, of which all but the last (a sequence of length `num`) are returned. **num**integer, optional Number of samples to generate. Default is 50. **endpoint**boolean, optional If true, `stop` is the last sample. Otherwise, it is not included. Default is True. **dtype**dtype The type of the output array. If [`dtype`](numpy.dtype#numpy.dtype "numpy.dtype") is not given, the data type is inferred from `start` and `stop`. The inferred dtype will never be an integer; [`float`](https://docs.python.org/3/library/functions.html#float "(in Python v3.10)") is chosen even if the arguments would produce an array of integers. **axis**int, optional The axis in the result to store the samples. Relevant only if start or stop are array-like. By default (0), the samples will be along a new axis inserted at the beginning. Use -1 to get an axis at the end. New in version 1.16.0. Returns **samples**ndarray `num` samples, equally spaced on a log scale. See also [`logspace`](numpy.logspace#numpy.logspace "numpy.logspace") Similar to geomspace, but with endpoints specified using log and base. [`linspace`](numpy.linspace#numpy.linspace "numpy.linspace") Similar to geomspace, but with arithmetic instead of geometric progression. [`arange`](numpy.arange#numpy.arange "numpy.arange") Similar to linspace, with the step size specified instead of the number of samples. #### Notes If the inputs or dtype are complex, the output will follow a logarithmic spiral in the complex plane. (There are an infinite number of spirals passing through two points; the output will follow the shortest such path.) #### Examples ``` >>> np.geomspace(1, 1000, num=4) array([ 1., 10., 100., 1000.]) >>> np.geomspace(1, 1000, num=3, endpoint=False) array([ 1., 10., 100.]) >>> np.geomspace(1, 1000, num=4, endpoint=False) array([ 1. 
, 5.62341325, 31.6227766 , 177.827941 ])
>>> np.geomspace(1, 256, num=9)
array([  1.,   2.,   4.,   8.,  16.,  32.,  64., 128., 256.])
```

Note that the above may not produce exact integers:

```
>>> np.geomspace(1, 256, num=9, dtype=int)
array([  1,   2,   4,   7,  16,  32,  63, 127, 256])
>>> np.around(np.geomspace(1, 256, num=9)).astype(int)
array([  1,   2,   4,   8,  16,  32,  64, 128, 256])
```

Negative, decreasing, and complex inputs are allowed:

```
>>> np.geomspace(1000, 1, num=4)
array([1000.,  100.,   10.,    1.])
>>> np.geomspace(-1000, -1, num=4)
array([-1000.,  -100.,   -10.,    -1.])
>>> np.geomspace(1j, 1000j, num=4)  # Straight line
array([0.   +1.j, 0.  +10.j, 0. +100.j, 0.+1000.j])
>>> np.geomspace(-1+0j, 1+0j, num=5)  # Circle
array([-1.00000000e+00+1.22464680e-16j, -7.07106781e-01+7.07106781e-01j,
        6.12323400e-17+1.00000000e+00j,  7.07106781e-01+7.07106781e-01j,
        1.00000000e+00+0.00000000e+00j])
```

Graphical illustration of `endpoint` parameter:

```
>>> import matplotlib.pyplot as plt
>>> N = 10
>>> y = np.zeros(N)
>>> plt.semilogx(np.geomspace(1, 1000, N, endpoint=True), y + 1, 'o')
[<matplotlib.lines.Line2D object at 0x...>]
>>> plt.semilogx(np.geomspace(1, 1000, N, endpoint=False), y + 2, 'o')
[<matplotlib.lines.Line2D object at 0x...>]
>>> plt.axis([0.5, 2000, 0, 3])
[0.5, 2000, 0, 3]
>>> plt.grid(True, color='0.7', linestyle='-', which='both', axis='both')
>>> plt.show()
```

(Figure: semilogx plot comparing the samples produced with `endpoint=True` and `endpoint=False`.)

© 2005–2022 NumPy Developers Licensed under the 3-clause BSD License. <https://numpy.org/doc/1.23/reference/generated/numpy.geomspace.html>

numpy.meshgrid
==============

numpy.meshgrid(**xi*, *copy=True*, *sparse=False*, *indexing='xy'*)[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/lib/function_base.py#L4846-L4992)

Return coordinate matrices from coordinate vectors.

Make N-D coordinate arrays for vectorized evaluations of N-D scalar/vector fields over N-D grids, given one-dimensional coordinate arrays x1, x2,
…, xn.

Changed in version 1.9: 1-D and 0-D cases are allowed.

Parameters

**x1, x2,
…, xn**array_like

1-D arrays representing the coordinates of a grid.

**indexing**{‘xy’, ‘ij’}, optional

Cartesian (‘xy’, default) or matrix (‘ij’) indexing of output. See Notes for more details.

New in version 1.7.0.

**sparse**bool, optional

If True the shape of the returned coordinate array for dimension *i* is reduced from `(N1, ..., Ni, ... Nn)` to `(1, ..., 1, Ni, 1, ..., 1)`. These sparse coordinate grids are intended to be used with [Broadcasting](../../user/basics.broadcasting#basics-broadcasting). When all coordinates are used in an expression, broadcasting still leads to a fully-dimensional result array. Default is False.

New in version 1.7.0.

**copy**bool, optional

If False, views into the original arrays are returned in order to conserve memory. Default is True. Please note that `sparse=False, copy=False` will likely return non-contiguous arrays. Furthermore, more than one element of a broadcast array may refer to a single memory location. If you need to write to the arrays, make copies first.

New in version 1.7.0.

Returns

**X1, X2, …, XN**ndarray

For vectors `x1`, `x2`, …
, `xn` with lengths `Ni=len(xi)`, returns `(N1, N2, N3,..., Nn)` shaped arrays if indexing=’ij’ or `(N2, N1, N3,..., Nn)` shaped arrays if indexing=’xy’ with the elements of `xi` repeated to fill the matrix along the first dimension for `x1`, the second for `x2` and so on. See also [`mgrid`](numpy.mgrid#numpy.mgrid "numpy.mgrid") Construct a multi-dimensional “meshgrid” using indexing notation. [`ogrid`](numpy.ogrid#numpy.ogrid "numpy.ogrid") Construct an open multi-dimensional “meshgrid” using indexing notation. #### Notes This function supports both indexing conventions through the indexing keyword argument. Giving the string ‘ij’ returns a meshgrid with matrix indexing, while ‘xy’ returns a meshgrid with Cartesian indexing. In the 2-D case with inputs of length M and N, the outputs are of shape (N, M) for ‘xy’ indexing and (M, N) for ‘ij’ indexing. In the 3-D case with inputs of length M, N and P, outputs are of shape (N, M, P) for ‘xy’ indexing and (M, N, P) for ‘ij’ indexing. The difference is illustrated by the following code snippet: ``` xv, yv = np.meshgrid(x, y, indexing='ij') for i in range(nx): for j in range(ny): # treat xv[i,j], yv[i,j] xv, yv = np.meshgrid(x, y, indexing='xy') for i in range(nx): for j in range(ny): # treat xv[j,i], yv[j,i] ``` In the 1-D and 0-D case, the indexing and sparse keywords have no effect. #### Examples ``` >>> nx, ny = (3, 2) >>> x = np.linspace(0, 1, nx) >>> y = np.linspace(0, 1, ny) >>> xv, yv = np.meshgrid(x, y) >>> xv array([[0. , 0.5, 1. ], [0. , 0.5, 1. ]]) >>> yv array([[0., 0., 0.], [1., 1., 1.]]) >>> xv, yv = np.meshgrid(x, y, sparse=True) # make sparse output arrays >>> xv array([[0. , 0.5, 1. ]]) >>> yv array([[0.], [1.]]) ``` [`meshgrid`](#numpy.meshgrid "numpy.meshgrid") is very useful to evaluate functions on a grid. If the function depends on all coordinates, you can use the parameter `sparse=True` to save memory and computation time. 
```
>>> x = np.linspace(-5, 5, 101)
>>> y = np.linspace(-5, 5, 101)
>>> # full coordinate arrays
>>> xx, yy = np.meshgrid(x, y)
>>> zz = np.sqrt(xx**2 + yy**2)
>>> xx.shape, yy.shape, zz.shape
((101, 101), (101, 101), (101, 101))
>>> # sparse coordinate arrays
>>> xs, ys = np.meshgrid(x, y, sparse=True)
>>> zs = np.sqrt(xs**2 + ys**2)
>>> xs.shape, ys.shape, zs.shape
((1, 101), (101, 1), (101, 101))
>>> np.array_equal(zz, zs)
True
```

```
>>> import matplotlib.pyplot as plt
>>> h = plt.contourf(x, y, zs)
>>> plt.axis('scaled')
>>> plt.colorbar()
>>> plt.show()
```

(Figure: filled contour plot of `zs` over the grid.)

© 2005–2022 NumPy Developers Licensed under the 3-clause BSD License. <https://numpy.org/doc/1.23/reference/generated/numpy.meshgrid.html>

numpy.diagflat
==============

numpy.diagflat(*v*, *k=0*)[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/lib/twodim_base.py#L312-L369)

Create a two-dimensional array with the flattened input as a diagonal.

Parameters

**v**array_like

Input data, which is flattened and set as the `k`-th diagonal of the output.

**k**int, optional

Diagonal to set; 0, the default, corresponds to the “main” diagonal, a positive (negative) `k` giving the number of the diagonal above (below) the main.

Returns

**out**ndarray

The 2-D output array.

See also

[`diag`](numpy.diag#numpy.diag "numpy.diag")

MATLAB work-alike for 1-D and 2-D arrays.

[`diagonal`](numpy.diagonal#numpy.diagonal "numpy.diagonal")

Return specified diagonals.

[`trace`](numpy.trace#numpy.trace "numpy.trace")

Sum along diagonals.

#### Examples

```
>>> np.diagflat([[1,2], [3,4]])
array([[1, 0, 0, 0],
       [0, 2, 0, 0],
       [0, 0, 3, 0],
       [0, 0, 0, 4]])
```

```
>>> np.diagflat([1,2], 1)
array([[0, 1, 0],
       [0, 0, 2],
       [0, 0, 0]])
```

© 2005–2022 NumPy Developers Licensed under the 3-clause BSD License.
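As a quick sanity check (not from the original page), `diagflat(v, k)` agrees with flattening first and then calling `diag`:

```python
import numpy as np

v = [[1, 2], [3, 4]]
a = np.diagflat(v, 1)                  # flatten, then place on diagonal k=1
b = np.diag(np.asarray(v).ravel(), 1)  # the equivalent two-step spelling
print(np.array_equal(a, b))
```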
<https://numpy.org/doc/1.23/reference/generated/numpy.diagflat.html>

numpy.tri
=========

numpy.tri(*N*, *M=None*, *k=0*, *dtype=<class 'float'>*, ***, *like=None*)[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/lib/twodim_base.py#L376-L430)

An array with ones at and below the given diagonal and zeros elsewhere.

Parameters

**N**int

Number of rows in the array.

**M**int, optional

Number of columns in the array. By default, `M` is taken equal to `N`.

**k**int, optional

The sub-diagonal at and below which the array is filled. `k` = 0 is the main diagonal, while `k` < 0 is below it, and `k` > 0 is above. The default is 0.

**dtype**dtype, optional

Data type of the returned array. The default is float.

**like**array_like, optional

Reference object to allow the creation of arrays which are not NumPy arrays. If an array-like passed in as `like` supports the `__array_function__` protocol, the result will be defined by it. In this case, it ensures the creation of an array object compatible with that passed in via this argument.

New in version 1.20.0.

Returns

**tri**ndarray of shape (N, M)

Array with its lower triangle filled with ones and zero elsewhere; in other words `T[i,j] == 1` for `j <= i + k`, 0 otherwise.

#### Examples

```
>>> np.tri(3, 5, 2, dtype=int)
array([[1, 1, 1, 0, 0],
       [1, 1, 1, 1, 0],
       [1, 1, 1, 1, 1]])
```

```
>>> np.tri(3, 5, -1)
array([[0., 0., 0., 0., 0.],
       [1., 0., 0., 0., 0.],
       [1., 1., 0., 0., 0.]])
```

© 2005–2022 NumPy Developers Licensed under the 3-clause BSD License. <https://numpy.org/doc/1.23/reference/generated/numpy.tri.html>

numpy.tril
==========

numpy.tril(*m*, *k=0*)[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/lib/twodim_base.py#L442-L494)

Lower triangle of an array.

Return a copy of an array with elements above the `k`-th diagonal zeroed. For arrays with `ndim` exceeding 2, [`tril`](#numpy.tril "numpy.tril") will apply to the final two axes.

Parameters

**m**array_like, shape (
…, M, N)

Input array.

**k**int, optional

Diagonal above which to zero elements. `k = 0` (the default) is the main diagonal, `k < 0` is below it and `k > 0` is above.

Returns

**tril**ndarray, shape (…
, M, N) Lower triangle of `m`, of same shape and data-type as `m`. See also [`triu`](numpy.triu#numpy.triu "numpy.triu") same thing, only for the upper triangle #### Examples ``` >>> np.tril([[1,2,3],[4,5,6],[7,8,9],[10,11,12]], -1) array([[ 0, 0, 0], [ 4, 0, 0], [ 7, 8, 0], [10, 11, 12]]) ``` ``` >>> np.tril(np.arange(3*4*5).reshape(3, 4, 5)) array([[[ 0, 0, 0, 0, 0], [ 5, 6, 0, 0, 0], [10, 11, 12, 0, 0], [15, 16, 17, 18, 0]], [[20, 0, 0, 0, 0], [25, 26, 0, 0, 0], [30, 31, 32, 0, 0], [35, 36, 37, 38, 0]], [[40, 0, 0, 0, 0], [45, 46, 0, 0, 0], [50, 51, 52, 0, 0], [55, 56, 57, 58, 0]]]) ``` © 2005–2022 NumPy Developers Licensed under the 3-clause BSD License. <https://numpy.org/doc/1.23/reference/generated/numpy.tril.htmlnumpy.triu ========== numpy.triu(*m*, *k=0*)[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/lib/twodim_base.py#L497-L538) Upper triangle of an array. Return a copy of an array with the elements below the `k`-th diagonal zeroed. For arrays with `ndim` exceeding 2, [`triu`](#numpy.triu "numpy.triu") will apply to the final two axes. Please refer to the documentation for [`tril`](numpy.tril#numpy.tril "numpy.tril") for further details. See also [`tril`](numpy.tril#numpy.tril "numpy.tril") lower triangle of an array #### Examples ``` >>> np.triu([[1,2,3],[4,5,6],[7,8,9],[10,11,12]], -1) array([[ 1, 2, 3], [ 4, 5, 6], [ 0, 8, 9], [ 0, 0, 12]]) ``` ``` >>> np.triu(np.arange(3*4*5).reshape(3, 4, 5)) array([[[ 0, 1, 2, 3, 4], [ 0, 6, 7, 8, 9], [ 0, 0, 12, 13, 14], [ 0, 0, 0, 18, 19]], [[20, 21, 22, 23, 24], [ 0, 26, 27, 28, 29], [ 0, 0, 32, 33, 34], [ 0, 0, 0, 38, 39]], [[40, 41, 42, 43, 44], [ 0, 46, 47, 48, 49], [ 0, 0, 52, 53, 54], [ 0, 0, 0, 58, 59]]]) ``` © 2005–2022 NumPy Developers Licensed under the 3-clause BSD License. 
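A handy consequence of the two definitions (a small check, not part of the official examples): for a 2-D array, the lower triangle including the diagonal plus the strictly upper triangle reassembles the original.

```python
import numpy as np

m = np.arange(1, 13).reshape(4, 3)
lower = np.tril(m)     # k=0: diagonal and below
upper = np.triu(m, 1)  # k=1: strictly above the diagonal
print(np.array_equal(lower + upper, m))
```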
<https://numpy.org/doc/1.23/reference/generated/numpy.triu.html>

numpy.vander
============

numpy.vander(*x*, *N=None*, *increasing=False*)[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/lib/twodim_base.py#L546-L634)

Generate a Vandermonde matrix.

The columns of the output matrix are powers of the input vector. The order of the powers is determined by the `increasing` boolean argument. Specifically, when `increasing` is False, the `i`-th output column is the input vector raised element-wise to the power of `N - i - 1`. Such a matrix with a geometric progression in each row is named for Alexandre-Théophile Vandermonde.

Parameters

**x**array_like

1-D input array.

**N**int, optional

Number of columns in the output. If `N` is not specified, a square array is returned (`N = len(x)`).

**increasing**bool, optional

Order of the powers of the columns. If True, the powers increase from left to right, if False (the default) they are reversed.

New in version 1.9.0.

Returns

**out**ndarray

Vandermonde matrix. If `increasing` is False, the first column is `x^(N-1)`, the second `x^(N-2)` and so forth. If `increasing` is True, the columns are `x^0, x^1, ..., x^(N-1)`.
See also [`polynomial.polynomial.polyvander`](numpy.polynomial.polynomial.polyvander#numpy.polynomial.polynomial.polyvander "numpy.polynomial.polynomial.polyvander") #### Examples ``` >>> x = np.array([1, 2, 3, 5]) >>> N = 3 >>> np.vander(x, N) array([[ 1, 1, 1], [ 4, 2, 1], [ 9, 3, 1], [25, 5, 1]]) ``` ``` >>> np.column_stack([x**(N-1-i) for i in range(N)]) array([[ 1, 1, 1], [ 4, 2, 1], [ 9, 3, 1], [25, 5, 1]]) ``` ``` >>> x = np.array([1, 2, 3, 5]) >>> np.vander(x) array([[ 1, 1, 1, 1], [ 8, 4, 2, 1], [ 27, 9, 3, 1], [125, 25, 5, 1]]) >>> np.vander(x, increasing=True) array([[ 1, 1, 1, 1], [ 1, 2, 4, 8], [ 1, 3, 9, 27], [ 1, 5, 25, 125]]) ``` The determinant of a square Vandermonde matrix is the product of the differences between the values of the input vector: ``` >>> np.linalg.det(np.vander(x)) 48.000000000000043 # may vary >>> (5-3)*(5-2)*(5-1)*(3-2)*(3-1)*(2-1) 48 ``` © 2005–2022 NumPy Developers Licensed under the 3-clause BSD License. <https://numpy.org/doc/1.23/reference/generated/numpy.vander.htmlnumpy.copyto ============ numpy.copyto(*dst*, *src*, *casting='same_kind'*, *where=True*) Copies values from one array to another, broadcasting as necessary. Raises a TypeError if the `casting` rule is violated, and if [`where`](numpy.where#numpy.where "numpy.where") is provided, it selects which elements to copy. New in version 1.7.0. Parameters **dst**ndarray The array into which values are copied. **src**array_like The array from which values are copied. **casting**{‘no’, ‘equiv’, ‘safe’, ‘same_kind’, ‘unsafe’}, optional Controls what kind of data casting may occur when copying. * ‘no’ means the data types should not be cast at all. * ‘equiv’ means only byte-order changes are allowed. * ‘safe’ means only casts which can preserve values are allowed. * ‘same_kind’ means only safe casts or casts within a kind, like float64 to float32, are allowed. * ‘unsafe’ means any data conversions may be done. 
**where**array_like of bool, optional A boolean array which is broadcasted to match the dimensions of `dst`, and selects elements to copy from `src` to `dst` wherever it contains the value True. © 2005–2022 NumPy Developers Licensed under the 3-clause BSD License. <https://numpy.org/doc/1.23/reference/generated/numpy.copyto.htmlnumpy.shape =========== numpy.shape(*a*)[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/core/fromnumeric.py#L1965-L2010) Return the shape of an array. Parameters **a**array_like Input array. Returns **shape**tuple of ints The elements of the shape tuple give the lengths of the corresponding array dimensions. See also [`len`](https://docs.python.org/3/library/functions.html#len "(in Python v3.10)") `len(a)` is equivalent to `np.shape(a)[0]` for N-D arrays with `N>=1`. [`ndarray.shape`](numpy.ndarray.shape#numpy.ndarray.shape "numpy.ndarray.shape") Equivalent array method. #### Examples ``` >>> np.shape(np.eye(3)) (3, 3) >>> np.shape([[1, 3]]) (1, 2) >>> np.shape([0]) (1,) >>> np.shape(0) () ``` ``` >>> a = np.array([(1, 2), (3, 4), (5, 6)], ... dtype=[('x', 'i4'), ('y', 'i4')]) >>> np.shape(a) (3,) >>> a.shape (3,) ``` © 2005–2022 NumPy Developers Licensed under the 3-clause BSD License. <https://numpy.org/doc/1.23/reference/generated/numpy.shape.htmlnumpy.moveaxis ============== numpy.moveaxis(*a*, *source*, *destination*)[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/core/numeric.py#L1410-L1478) Move axes of an array to new positions. Other axes remain in their original order. New in version 1.11.0. Parameters **a**np.ndarray The array whose axes should be reordered. **source**int or sequence of int Original positions of the axes to move. These must be unique. **destination**int or sequence of int Destination positions for each of the original axes. These must also be unique. Returns **result**np.ndarray Array with moved axes. This array is a view of the input array. 
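Since the result of `moveaxis` is a view, writes through it land in the original array (a quick demonstration, not from the original page):

```python
import numpy as np

x = np.zeros((3, 4, 5))
y = np.moveaxis(x, 0, -1)  # shape (4, 5, 3); shares memory with x
y[0, 0, 2] = 7.0           # write through the view...
print(x[2, 0, 0])          # ...and the change is visible in x
```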
See also [`transpose`](numpy.transpose#numpy.transpose "numpy.transpose") Permute the dimensions of an array. [`swapaxes`](numpy.swapaxes#numpy.swapaxes "numpy.swapaxes") Interchange two axes of an array. #### Examples ``` >>> x = np.zeros((3, 4, 5)) >>> np.moveaxis(x, 0, -1).shape (4, 5, 3) >>> np.moveaxis(x, -1, 0).shape (5, 3, 4) ``` These all achieve the same result: ``` >>> np.transpose(x).shape (5, 4, 3) >>> np.swapaxes(x, 0, -1).shape (5, 4, 3) >>> np.moveaxis(x, [0, 1], [-1, -2]).shape (5, 4, 3) >>> np.moveaxis(x, [0, 1, 2], [-1, -2, -3]).shape (5, 4, 3) ``` © 2005–2022 NumPy Developers Licensed under the 3-clause BSD License. <https://numpy.org/doc/1.23/reference/generated/numpy.moveaxis.htmlnumpy.rollaxis ============== numpy.rollaxis(*a*, *axis*, *start=0*)[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/core/numeric.py#L1257-L1344) Roll the specified axis backwards, until it lies in a given position. This function continues to be supported for backward compatibility, but you should prefer [`moveaxis`](numpy.moveaxis#numpy.moveaxis "numpy.moveaxis"). The [`moveaxis`](numpy.moveaxis#numpy.moveaxis "numpy.moveaxis") function was added in NumPy 1.11. Parameters **a**ndarray Input array. **axis**int The axis to be rolled. The positions of the other axes do not change relative to one another. **start**int, optional When `start <= axis`, the axis is rolled back until it lies in this position. When `start > axis`, the axis is rolled until it lies before this position. The default, 0, results in a “complete” roll. The following table describes how negative values of `start` are interpreted: | `start` | Normalized `start` | | --- | --- | | `-(arr.ndim+1)` | raise `AxisError` | | `-arr.ndim` | 0 | | ⋮ | ⋮ | | `-1` | `arr.ndim-1` | | `0` | `0` | | ⋮ | ⋮ | | `arr.ndim` | `arr.ndim` | | `arr.ndim + 1` | raise `AxisError` | Returns **res**ndarray For NumPy >= 1.10.0 a view of `a` is always returned. 
For earlier NumPy versions a view of `a` is returned only if the order of the axes is changed, otherwise the input array is returned. See also [`moveaxis`](numpy.moveaxis#numpy.moveaxis "numpy.moveaxis") Move array axes to new positions. [`roll`](numpy.roll#numpy.roll "numpy.roll") Roll the elements of an array by a number of positions along a given axis. #### Examples ``` >>> a = np.ones((3,4,5,6)) >>> np.rollaxis(a, 3, 1).shape (3, 6, 4, 5) >>> np.rollaxis(a, 2).shape (5, 3, 4, 6) >>> np.rollaxis(a, 1, 4).shape (3, 5, 6, 4) ``` © 2005–2022 NumPy Developers Licensed under the 3-clause BSD License. <https://numpy.org/doc/1.23/reference/generated/numpy.rollaxis.htmlnumpy.broadcast_to =================== numpy.broadcast_to(*array*, *shape*, *subok=False*)[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/lib/stride_tricks.py#L367-L413) Broadcast an array to a new shape. Parameters **array**array_like The array to broadcast. **shape**tuple or int The shape of the desired array. A single integer `i` is interpreted as `(i,)`. **subok**bool, optional If True, then sub-classes will be passed-through, otherwise the returned array will be forced to be a base-class array (default). Returns **broadcast**array A readonly view on the original array with the given shape. It is typically not contiguous. Furthermore, more than one element of a broadcasted array may refer to a single memory location. Raises ValueError If the array is not compatible with the new shape according to NumPy’s broadcasting rules. See also [`broadcast`](numpy.broadcast#numpy.broadcast "numpy.broadcast") [`broadcast_arrays`](numpy.broadcast_arrays#numpy.broadcast_arrays "numpy.broadcast_arrays") [`broadcast_shapes`](numpy.broadcast_shapes#numpy.broadcast_shapes "numpy.broadcast_shapes") #### Notes New in version 1.10.0. 
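Two behaviours of `broadcast_to` described above — the read-only view and the `ValueError` for incompatible shapes — can be checked directly (illustrative sketch, not part of the original reference):

```python
import numpy as np

x = np.array([1, 2, 3])
b = np.broadcast_to(x, (3, 3))

# The result is a read-only view; in-place writes are disallowed.
print(b.flags.writeable)  # False

# Shapes that violate the broadcasting rules raise ValueError.
try:
    np.broadcast_to(x, (4, 4))
except ValueError as e:
    print("ValueError:", e)
```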
#### Examples ``` >>> x = np.array([1, 2, 3]) >>> np.broadcast_to(x, (3, 3)) array([[1, 2, 3], [1, 2, 3], [1, 2, 3]]) ``` © 2005–2022 NumPy Developers Licensed under the 3-clause BSD License. <https://numpy.org/doc/1.23/reference/generated/numpy.broadcast_to.htmlnumpy.broadcast_arrays ======================= numpy.broadcast_arrays(**args*, *subok=False*)[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/lib/stride_tricks.py#L480-L547) Broadcast any number of arrays against each other. Parameters **`*args`**array_likes The arrays to broadcast. **subok**bool, optional If True, then sub-classes will be passed-through, otherwise the returned arrays will be forced to be a base-class array (default). Returns **broadcasted**list of arrays These arrays are views on the original arrays. They are typically not contiguous. Furthermore, more than one element of a broadcasted array may refer to a single memory location. If you need to write to the arrays, make copies first. While you can set the `writable` flag True, writing to a single output value may end up changing more than one location in the output array. Deprecated since version 1.17: The output is currently marked so that if written to, a deprecation warning will be emitted. A future version will set the `writable` flag False so writing to it will raise an error. See also [`broadcast`](numpy.broadcast#numpy.broadcast "numpy.broadcast") [`broadcast_to`](numpy.broadcast_to#numpy.broadcast_to "numpy.broadcast_to") [`broadcast_shapes`](numpy.broadcast_shapes#numpy.broadcast_shapes "numpy.broadcast_shapes") #### Examples ``` >>> x = np.array([[1,2,3]]) >>> y = np.array([[4],[5]]) >>> np.broadcast_arrays(x, y) [array([[1, 2, 3], [1, 2, 3]]), array([[4, 4, 4], [5, 5, 5]])] ``` Here is a useful idiom for getting contiguous copies instead of non-contiguous views. 
``` >>> [np.array(a) for a in np.broadcast_arrays(x, y)] [array([[1, 2, 3], [1, 2, 3]]), array([[4, 4, 4], [5, 5, 5]])] ``` © 2005–2022 NumPy Developers Licensed under the 3-clause BSD License. <https://numpy.org/doc/1.23/reference/generated/numpy.broadcast_arrays.htmlnumpy.asfarray ============== numpy.asfarray(*a*, *dtype=<class 'numpy.double'>*)[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/lib/type_check.py#L84-L114) Return an array converted to a float type. Parameters **a**array_like The input array. **dtype**str or dtype object, optional Float type code to coerce input array `a`. If [`dtype`](numpy.dtype#numpy.dtype "numpy.dtype") is one of the ‘int’ dtypes, it is replaced with float64. Returns **out**ndarray The input `a` as a float ndarray. #### Examples ``` >>> np.asfarray([2, 3]) array([2., 3.]) >>> np.asfarray([2, 3], dtype='float') array([2., 3.]) >>> np.asfarray([2, 3], dtype='int8') array([2., 3.]) ``` © 2005–2022 NumPy Developers Licensed under the 3-clause BSD License. <https://numpy.org/doc/1.23/reference/generated/numpy.asfarray.htmlnumpy.asfortranarray ==================== numpy.asfortranarray(*a*, *dtype=None*, ***, *like=None*) Return an array (ndim >= 1) laid out in Fortran order in memory. Parameters **a**array_like Input array. **dtype**str or dtype object, optional By default, the data-type is inferred from the input data. **like**array_like, optional Reference object to allow the creation of arrays which are not NumPy arrays. If an array-like passed in as `like` supports the `__array_function__` protocol, the result will be defined by it. In this case, it ensures the creation of an array object compatible with that passed in via this argument. New in version 1.20.0. Returns **out**ndarray The input `a` in Fortran, or column-major, order. See also [`ascontiguousarray`](numpy.ascontiguousarray#numpy.ascontiguousarray "numpy.ascontiguousarray") Convert input to a contiguous (C order) array. 
[`asanyarray`](numpy.asanyarray#numpy.asanyarray "numpy.asanyarray") Convert input to an ndarray with either row or column-major memory order. [`require`](numpy.require#numpy.require "numpy.require") Return an ndarray that satisfies requirements. [`ndarray.flags`](numpy.ndarray.flags#numpy.ndarray.flags "numpy.ndarray.flags") Information about the memory layout of the array. #### Examples ``` >>> x = np.arange(6).reshape(2,3) >>> y = np.asfortranarray(x) >>> x.flags['F_CONTIGUOUS'] False >>> y.flags['F_CONTIGUOUS'] True ``` Note: This function returns an array with at least one dimension (1-d), so it will not preserve 0-d arrays. <https://numpy.org/doc/1.23/reference/generated/numpy.asfortranarray.html>

numpy.asarray_chkfinite
=======================

numpy.asarray_chkfinite(*a*, *dtype=None*, *order=None*)[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/lib/function_base.py#L561-L629) Convert the input to an array, checking for NaNs or Infs. Parameters **a**array_like Input data, in any form that can be converted to an array. This includes lists, lists of tuples, tuples, tuples of tuples, tuples of lists and ndarrays. Success requires no NaNs or Infs. **dtype**data-type, optional By default, the data-type is inferred from the input data. **order**{‘C’, ‘F’, ‘A’, ‘K’}, optional Memory layout. ‘A’ and ‘K’ depend on the order of input array a. ‘C’ row-major (C-style), ‘F’ column-major (Fortran-style) memory representation. ‘A’ (any) means ‘F’ if `a` is Fortran contiguous, ‘C’ otherwise. ‘K’ (keep) preserves input order. Defaults to ‘C’. Returns **out**ndarray Array interpretation of `a`. No copy is performed if the input is already an ndarray. If `a` is a subclass of ndarray, a base class ndarray is returned. Raises ValueError Raises ValueError if `a` contains NaN (Not a Number) or Inf (Infinity). See also [`asarray`](numpy.asarray#numpy.asarray "numpy.asarray") Create an array.
[`asanyarray`](numpy.asanyarray#numpy.asanyarray "numpy.asanyarray") Similar function which passes through subclasses. [`ascontiguousarray`](numpy.ascontiguousarray#numpy.ascontiguousarray "numpy.ascontiguousarray") Convert input to a contiguous array. [`asfarray`](numpy.asfarray#numpy.asfarray "numpy.asfarray") Convert input to a floating point ndarray. [`asfortranarray`](numpy.asfortranarray#numpy.asfortranarray "numpy.asfortranarray") Convert input to an ndarray with column-major memory order. [`fromiter`](numpy.fromiter#numpy.fromiter "numpy.fromiter") Create an array from an iterator. [`fromfunction`](numpy.fromfunction#numpy.fromfunction "numpy.fromfunction") Construct an array by executing a function on grid positions. #### Examples Convert a list into an array. If all elements are finite `asarray_chkfinite` is identical to `asarray`. ``` >>> a = [1, 2] >>> np.asarray_chkfinite(a, dtype=float) array([1., 2.]) ``` Raises ValueError if array_like contains Nans or Infs. ``` >>> a = [1, 2, np.inf] >>> try: ... np.asarray_chkfinite(a) ... except ValueError: ... print('ValueError') ... ValueError ``` © 2005–2022 NumPy Developers Licensed under the 3-clause BSD License. <https://numpy.org/doc/1.23/reference/generated/numpy.asarray_chkfinite.htmlnumpy.require ============= numpy.require(*a*, *dtype=None*, *requirements=None*, ***, *like=None*)[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/core/_asarray.py#L22-L133) Return an ndarray of the provided type that satisfies requirements. This function is useful to be sure that an array with the correct flags is returned for passing to compiled code (perhaps through ctypes). Parameters **a**array_like The object to be converted to a type-and-requirement-satisfying array. **dtype**data-type The required data-type. If None preserve the current dtype. If your application requires the data to be in native byteorder, include a byteorder specification as a part of the dtype specification. 
**requirements**str or list of str The requirements list can be any of the following * ‘F_CONTIGUOUS’ (‘F’) - ensure a Fortran-contiguous array * ‘C_CONTIGUOUS’ (‘C’) - ensure a C-contiguous array * ‘ALIGNED’ (‘A’) - ensure a data-type aligned array * ‘WRITEABLE’ (‘W’) - ensure a writable array * ‘OWNDATA’ (‘O’) - ensure an array that owns its own data * ‘ENSUREARRAY’, (‘E’) - ensure a base array, instead of a subclass **like**array_like, optional Reference object to allow the creation of arrays which are not NumPy arrays. If an array-like passed in as `like` supports the `__array_function__` protocol, the result will be defined by it. In this case, it ensures the creation of an array object compatible with that passed in via this argument. New in version 1.20.0. Returns **out**ndarray Array with specified requirements and type if given. See also [`asarray`](numpy.asarray#numpy.asarray "numpy.asarray") Convert input to an ndarray. [`asanyarray`](numpy.asanyarray#numpy.asanyarray "numpy.asanyarray") Convert to an ndarray, but pass through ndarray subclasses. [`ascontiguousarray`](numpy.ascontiguousarray#numpy.ascontiguousarray "numpy.ascontiguousarray") Convert input to a contiguous array. [`asfortranarray`](numpy.asfortranarray#numpy.asfortranarray "numpy.asfortranarray") Convert input to an ndarray with column-major memory order. [`ndarray.flags`](numpy.ndarray.flags#numpy.ndarray.flags "numpy.ndarray.flags") Information about the memory layout of the array. #### Notes The returned array will be guaranteed to have the listed requirements by making a copy if needed. 
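As the Notes say, `require` copies only when needed; when the input already satisfies every requirement, the original data is reused. A quick check with `np.shares_memory` (an illustrative sketch, not part of the original docs):

```python
import numpy as np

x = np.arange(6).reshape(2, 3)           # already C-contiguous
y = np.require(x, requirements=['C'])    # nothing to fix: data is reused
z = np.require(x, requirements=['F'])    # layout change forces a copy

print(np.shares_memory(x, y))  # True
print(np.shares_memory(x, z))  # False
print(z.flags['F_CONTIGUOUS'])  # True
```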
#### Examples ``` >>> x = np.arange(6).reshape(2,3) >>> x.flags C_CONTIGUOUS : True F_CONTIGUOUS : False OWNDATA : False WRITEABLE : True ALIGNED : True WRITEBACKIFCOPY : False ``` ``` >>> y = np.require(x, dtype=np.float32, requirements=['A', 'O', 'W', 'F']) >>> y.flags C_CONTIGUOUS : False F_CONTIGUOUS : True OWNDATA : True WRITEABLE : True ALIGNED : True WRITEBACKIFCOPY : False ``` © 2005–2022 NumPy Developers Licensed under the 3-clause BSD License. <https://numpy.org/doc/1.23/reference/generated/numpy.require.htmlnumpy.stack =========== numpy.stack(*arrays*, *axis=0*, *out=None*)[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/core/shape_base.py#L357-L433) Join a sequence of arrays along a new axis. The `axis` parameter specifies the index of the new axis in the dimensions of the result. For example, if `axis=0` it will be the first dimension and if `axis=-1` it will be the last dimension. New in version 1.10.0. Parameters **arrays**sequence of array_like Each array must have the same shape. **axis**int, optional The axis in the result array along which the input arrays are stacked. **out**ndarray, optional If provided, the destination to place the result. The shape must be correct, matching that of what stack would have returned if no out argument were specified. Returns **stacked**ndarray The stacked array has one more dimension than the input arrays. See also [`concatenate`](numpy.concatenate#numpy.concatenate "numpy.concatenate") Join a sequence of arrays along an existing axis. [`block`](numpy.block#numpy.block "numpy.block") Assemble an nd-array from nested lists of blocks. [`split`](numpy.split#numpy.split "numpy.split") Split array into a list of multiple sub-arrays of equal size. 
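The `out` parameter of `stack` is documented above but not exercised in the examples; a minimal sketch of preallocating the destination (variable names are illustrative):

```python
import numpy as np

a = np.array([1, 2, 3])
b = np.array([4, 5, 6])

# Preallocate a destination with the exact shape stack would produce.
out = np.empty((2, 3), dtype=a.dtype)
result = np.stack((a, b), out=out)

# The stacked data is written into `out`, which is also returned.
print(result is out)  # True
print(out)
```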
#### Examples ``` >>> arrays = [np.random.randn(3, 4) for _ in range(10)] >>> np.stack(arrays, axis=0).shape (10, 3, 4) ``` ``` >>> np.stack(arrays, axis=1).shape (3, 10, 4) ``` ``` >>> np.stack(arrays, axis=2).shape (3, 4, 10) ``` ``` >>> a = np.array([1, 2, 3]) >>> b = np.array([4, 5, 6]) >>> np.stack((a, b)) array([[1, 2, 3], [4, 5, 6]]) ``` ``` >>> np.stack((a, b), axis=-1) array([[1, 4], [2, 5], [3, 6]]) ``` © 2005–2022 NumPy Developers Licensed under the 3-clause BSD License. <https://numpy.org/doc/1.23/reference/generated/numpy.stack.htmlnumpy.block =========== numpy.block(*arrays*)[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/core/shape_base.py#L678-L847) Assemble an nd-array from nested lists of blocks. Blocks in the innermost lists are concatenated (see [`concatenate`](numpy.concatenate#numpy.concatenate "numpy.concatenate")) along the last dimension (-1), then these are concatenated along the second-last dimension (-2), and so on until the outermost list is reached. Blocks can be of any dimension, but will not be broadcasted using the normal rules. Instead, leading axes of size 1 are inserted, to make `block.ndim` the same for all blocks. This is primarily useful for working with scalars, and means that code like `np.block([v, 1])` is valid, where `v.ndim == 1`. When the nested list is two levels deep, this allows block matrices to be constructed from their components. New in version 1.13.0. Parameters **arrays**nested list of array_like or scalars (but not tuples) If passed a single ndarray or scalar (a nested list of depth 0), this is returned unmodified (and not copied). Elements shapes must match along the appropriate axes (without broadcasting), but leading 1s will be prepended to the shape as necessary to make the dimensions match. Returns **block_array**ndarray The array assembled from the given blocks. 
The dimensionality of the output is equal to the greatest of: * the dimensionality of all the inputs * the depth to which the input list is nested Raises ValueError * If list depths are mismatched - for instance, `[[a, b], c]` is illegal, and should be spelt `[[a, b], [c]]` * If lists are empty - for instance, `[[a, b], []]` See also [`concatenate`](numpy.concatenate#numpy.concatenate "numpy.concatenate") Join a sequence of arrays along an existing axis. [`stack`](numpy.stack#numpy.stack "numpy.stack") Join a sequence of arrays along a new axis. [`vstack`](numpy.vstack#numpy.vstack "numpy.vstack") Stack arrays in sequence vertically (row wise). [`hstack`](numpy.hstack#numpy.hstack "numpy.hstack") Stack arrays in sequence horizontally (column wise). [`dstack`](numpy.dstack#numpy.dstack "numpy.dstack") Stack arrays in sequence depth wise (along third axis). [`column_stack`](numpy.column_stack#numpy.column_stack "numpy.column_stack") Stack 1-D arrays as columns into a 2-D array. [`vsplit`](numpy.vsplit#numpy.vsplit "numpy.vsplit") Split an array into multiple sub-arrays vertically (row-wise). #### Notes When called with only scalars, `np.block` is equivalent to an ndarray call. So `np.block([[1, 2], [3, 4]])` is equivalent to `np.array([[1, 2], [3, 4]])`. This function does not enforce that the blocks lie on a fixed grid. `np.block([[a, b], [c, d]])` is not restricted to arrays of the form: ``` AAAbb AAAbb cccDD ``` But is also allowed to produce, for some `a, b, c, d`: ``` AAAbb AAAbb cDDDD ``` Since concatenation happens along the last axis first, [`block`](#numpy.block "numpy.block") is _not_ capable of producing the following directly: ``` AAAbb cccbb cccDD ``` Matlab’s “square bracket stacking”, `[A, B, ...; p, q, ...]`, is equivalent to `np.block([[A, B, ...], [p, q, ...]])`. #### Examples The most common use of this function is to build a block matrix ``` >>> A = np.eye(2) * 2 >>> B = np.eye(3) * 3 >>> np.block([ ... [A, np.zeros((2, 3))], ... 
[np.ones((3, 2)), B ] ... ]) array([[2., 0., 0., 0., 0.], [0., 2., 0., 0., 0.], [1., 1., 3., 0., 0.], [1., 1., 0., 3., 0.], [1., 1., 0., 0., 3.]]) ``` With a list of depth 1, [`block`](#numpy.block "numpy.block") can be used as [`hstack`](numpy.hstack#numpy.hstack "numpy.hstack") ``` >>> np.block([1, 2, 3]) # hstack([1, 2, 3]) array([1, 2, 3]) ``` ``` >>> a = np.array([1, 2, 3]) >>> b = np.array([4, 5, 6]) >>> np.block([a, b, 10]) # hstack([a, b, 10]) array([ 1, 2, 3, 4, 5, 6, 10]) ``` ``` >>> A = np.ones((2, 2), int) >>> B = 2 * A >>> np.block([A, B]) # hstack([A, B]) array([[1, 1, 2, 2], [1, 1, 2, 2]]) ``` With a list of depth 2, [`block`](#numpy.block "numpy.block") can be used in place of [`vstack`](numpy.vstack#numpy.vstack "numpy.vstack"): ``` >>> a = np.array([1, 2, 3]) >>> b = np.array([4, 5, 6]) >>> np.block([[a], [b]]) # vstack([a, b]) array([[1, 2, 3], [4, 5, 6]]) ``` ``` >>> A = np.ones((2, 2), int) >>> B = 2 * A >>> np.block([[A], [B]]) # vstack([A, B]) array([[1, 1], [1, 1], [2, 2], [2, 2]]) ``` It can also be used in places of [`atleast_1d`](numpy.atleast_1d#numpy.atleast_1d "numpy.atleast_1d") and [`atleast_2d`](numpy.atleast_2d#numpy.atleast_2d "numpy.atleast_2d") ``` >>> a = np.array(0) >>> b = np.array([1]) >>> np.block([a]) # atleast_1d(a) array([0]) >>> np.block([b]) # atleast_1d(b) array([1]) ``` ``` >>> np.block([[a]]) # atleast_2d(a) array([[0]]) >>> np.block([[b]]) # atleast_2d(b) array([[1]]) ``` © 2005–2022 NumPy Developers Licensed under the 3-clause BSD License. <https://numpy.org/doc/1.23/reference/generated/numpy.block.htmlnumpy.split =========== numpy.split(*ary*, *indices_or_sections*, *axis=0*)[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/lib/shape_base.py#L799-L874) Split an array into multiple sub-arrays as views into `ary`. Parameters **ary**ndarray Array to be divided into sub-arrays. 
**indices_or_sections**int or 1-D array If `indices_or_sections` is an integer, N, the array will be divided into N equal arrays along `axis`. If such a split is not possible, an error is raised. If `indices_or_sections` is a 1-D array of sorted integers, the entries indicate where along `axis` the array is split. For example, `[2, 3]` would, for `axis=0`, result in * ary[:2] * ary[2:3] * ary[3:] If an index exceeds the dimension of the array along `axis`, an empty sub-array is returned correspondingly. **axis**int, optional The axis along which to split, default is 0. Returns **sub-arrays**list of ndarrays A list of sub-arrays as views into `ary`. Raises ValueError If `indices_or_sections` is given as an integer, but a split does not result in equal division. See also [`array_split`](numpy.array_split#numpy.array_split "numpy.array_split") Split an array into multiple sub-arrays of equal or near-equal size. Does not raise an exception if an equal division cannot be made. [`hsplit`](numpy.hsplit#numpy.hsplit "numpy.hsplit") Split array into multiple sub-arrays horizontally (column-wise). [`vsplit`](numpy.vsplit#numpy.vsplit "numpy.vsplit") Split array into multiple sub-arrays vertically (row wise). [`dsplit`](numpy.dsplit#numpy.dsplit "numpy.dsplit") Split array into multiple sub-arrays along the 3rd axis (depth). [`concatenate`](numpy.concatenate#numpy.concatenate "numpy.concatenate") Join a sequence of arrays along an existing axis. [`stack`](numpy.stack#numpy.stack "numpy.stack") Join a sequence of arrays along a new axis. [`hstack`](numpy.hstack#numpy.hstack "numpy.hstack") Stack arrays in sequence horizontally (column wise). [`vstack`](numpy.vstack#numpy.vstack "numpy.vstack") Stack arrays in sequence vertically (row wise). [`dstack`](numpy.dstack#numpy.dstack "numpy.dstack") Stack arrays in sequence depth wise (along third dimension). 
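The summary above says the sub-arrays returned by `split` are views into `ary`, so writing through one of them modifies the original array (illustrative sketch, not part of the original reference):

```python
import numpy as np

x = np.arange(9.0)
parts = np.split(x, 3)

# Each sub-array is a view: an in-place write is visible in `x`.
parts[0][0] = 99.0
print(x[0])  # 99.0
```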
#### Examples ``` >>> x = np.arange(9.0) >>> np.split(x, 3) [array([0., 1., 2.]), array([3., 4., 5.]), array([6., 7., 8.])] ``` ``` >>> x = np.arange(8.0) >>> np.split(x, [3, 5, 6, 10]) [array([0., 1., 2.]), array([3., 4.]), array([5.]), array([6., 7.]), array([], dtype=float64)] ``` © 2005–2022 NumPy Developers Licensed under the 3-clause BSD License. <https://numpy.org/doc/1.23/reference/generated/numpy.split.htmlnumpy.tile ========== numpy.tile(*A*, *reps*)[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/lib/shape_base.py#L1191-L1280) Construct an array by repeating A the number of times given by reps. If `reps` has length `d`, the result will have dimension of `max(d, A.ndim)`. If `A.ndim < d`, `A` is promoted to be d-dimensional by prepending new axes. So a shape (3,) array is promoted to (1, 3) for 2-D replication, or shape (1, 1, 3) for 3-D replication. If this is not the desired behavior, promote `A` to d-dimensions manually before calling this function. If `A.ndim > d`, `reps` is promoted to `A`.ndim by pre-pending 1’s to it. Thus for an `A` of shape (2, 3, 4, 5), a `reps` of (2, 2) is treated as (1, 1, 2, 2). Note : Although tile may be used for broadcasting, it is strongly recommended to use numpy’s broadcasting operations and functions. Parameters **A**array_like The input array. **reps**array_like The number of repetitions of `A` along each axis. Returns **c**ndarray The tiled output array. See also [`repeat`](numpy.repeat#numpy.repeat "numpy.repeat") Repeat elements of an array. 
[`broadcast_to`](numpy.broadcast_to#numpy.broadcast_to "numpy.broadcast_to") Broadcast an array to a new shape #### Examples ``` >>> a = np.array([0, 1, 2]) >>> np.tile(a, 2) array([0, 1, 2, 0, 1, 2]) >>> np.tile(a, (2, 2)) array([[0, 1, 2, 0, 1, 2], [0, 1, 2, 0, 1, 2]]) >>> np.tile(a, (2, 1, 2)) array([[[0, 1, 2, 0, 1, 2]], [[0, 1, 2, 0, 1, 2]]]) ``` ``` >>> b = np.array([[1, 2], [3, 4]]) >>> np.tile(b, 2) array([[1, 2, 1, 2], [3, 4, 3, 4]]) >>> np.tile(b, (2, 1)) array([[1, 2], [3, 4], [1, 2], [3, 4]]) ``` ``` >>> c = np.array([1,2,3,4]) >>> np.tile(c,(4,1)) array([[1, 2, 3, 4], [1, 2, 3, 4], [1, 2, 3, 4], [1, 2, 3, 4]]) ``` © 2005–2022 NumPy Developers Licensed under the 3-clause BSD License. <https://numpy.org/doc/1.23/reference/generated/numpy.tile.htmlnumpy.delete ============ numpy.delete(*arr*, *obj*, *axis=None*)[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/lib/function_base.py#L4999-L5184) Return a new array with sub-arrays along an axis deleted. For a one dimensional array, this returns those entries not returned by `arr[obj]`. Parameters **arr**array_like Input array. **obj**slice, int or array of ints Indicate indices of sub-arrays to remove along the specified axis. Changed in version 1.19.0: Boolean indices are now treated as a mask of elements to remove, rather than being cast to the integers 0 and 1. **axis**int, optional The axis along which to delete the subarray defined by `obj`. If `axis` is None, `obj` is applied to the flattened array. Returns **out**ndarray A copy of `arr` with the elements specified by `obj` removed. Note that [`delete`](#numpy.delete "numpy.delete") does not occur in-place. If `axis` is None, `out` is a flattened array. See also [`insert`](numpy.insert#numpy.insert "numpy.insert") Insert elements into an array. [`append`](numpy.append#numpy.append "numpy.append") Append elements at the end of an array. #### Notes Often it is preferable to use a boolean mask. 
For example: ``` >>> arr = np.arange(12) + 1 >>> mask = np.ones(len(arr), dtype=bool) >>> mask[[0,2,4]] = False >>> result = arr[mask,...] ``` Is equivalent to `np.delete(arr, [0,2,4], axis=0)`, but allows further use of `mask`. #### Examples ``` >>> arr = np.array([[1,2,3,4], [5,6,7,8], [9,10,11,12]]) >>> arr array([[ 1, 2, 3, 4], [ 5, 6, 7, 8], [ 9, 10, 11, 12]]) >>> np.delete(arr, 1, 0) array([[ 1, 2, 3, 4], [ 9, 10, 11, 12]]) ``` ``` >>> np.delete(arr, np.s_[::2], 1) array([[ 2, 4], [ 6, 8], [10, 12]]) >>> np.delete(arr, [1,3,5], None) array([ 1, 3, 5, 7, 8, 9, 10, 11, 12]) ``` © 2005–2022 NumPy Developers Licensed under the 3-clause BSD License. <https://numpy.org/doc/1.23/reference/generated/numpy.delete.htmlnumpy.insert ============ numpy.insert(*arr*, *obj*, *values*, *axis=None*)[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/lib/function_base.py#L5191-L5378) Insert values along the given axis before the given indices. Parameters **arr**array_like Input array. **obj**int, slice or sequence of ints Object that defines the index or indices before which `values` is inserted. New in version 1.8.0. Support for multiple insertions when `obj` is a single scalar or a sequence with one element (similar to calling insert multiple times). **values**array_like Values to insert into `arr`. If the type of `values` is different from that of `arr`, `values` is converted to the type of `arr`. `values` should be shaped so that `arr[...,obj,...] = values` is legal. **axis**int, optional Axis along which to insert `values`. If `axis` is None then `arr` is flattened first. Returns **out**ndarray A copy of `arr` with `values` inserted. Note that [`insert`](#numpy.insert "numpy.insert") does not occur in-place: a new array is returned. If `axis` is None, `out` is a flattened array. See also [`append`](numpy.append#numpy.append "numpy.append") Append elements at the end of an array. 
[`concatenate`](numpy.concatenate#numpy.concatenate "numpy.concatenate") Join a sequence of arrays along an existing axis. [`delete`](numpy.delete#numpy.delete "numpy.delete") Delete elements from an array. #### Notes Note that for higher dimensional inserts `obj=0` behaves very different from `obj=[0]` just like `arr[:,0,:] = values` is different from `arr[:,[0],:] = values`. #### Examples ``` >>> a = np.array([[1, 1], [2, 2], [3, 3]]) >>> a array([[1, 1], [2, 2], [3, 3]]) >>> np.insert(a, 1, 5) array([1, 5, 1, ..., 2, 3, 3]) >>> np.insert(a, 1, 5, axis=1) array([[1, 5, 1], [2, 5, 2], [3, 5, 3]]) ``` Difference between sequence and scalars: ``` >>> np.insert(a, [1], [[1],[2],[3]], axis=1) array([[1, 1, 1], [2, 2, 2], [3, 3, 3]]) >>> np.array_equal(np.insert(a, 1, [1, 2, 3], axis=1), ... np.insert(a, [1], [[1],[2],[3]], axis=1)) True ``` ``` >>> b = a.flatten() >>> b array([1, 1, 2, 2, 3, 3]) >>> np.insert(b, [2, 2], [5, 6]) array([1, 1, 5, ..., 2, 3, 3]) ``` ``` >>> np.insert(b, slice(2, 4), [5, 6]) array([1, 1, 5, ..., 2, 3, 3]) ``` ``` >>> np.insert(b, [2, 2], [7.13, False]) # type casting array([1, 1, 7, ..., 2, 3, 3]) ``` ``` >>> x = np.arange(8).reshape(2, 4) >>> idx = (1, 3) >>> np.insert(x, idx, 999, axis=1) array([[ 0, 999, 1, 2, 999, 3], [ 4, 999, 5, 6, 999, 7]]) ``` © 2005–2022 NumPy Developers Licensed under the 3-clause BSD License. <https://numpy.org/doc/1.23/reference/generated/numpy.insert.htmlnumpy.append ============ numpy.append(*arr*, *values*, *axis=None*)[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/lib/function_base.py#L5385-L5440) Append values to the end of an array. Parameters **arr**array_like Values are appended to a copy of this array. **values**array_like These values are appended to a copy of `arr`. It must be of the correct shape (the same shape as `arr`, excluding `axis`). If `axis` is not specified, `values` can be any shape and will be flattened before use. 
**axis**int, optional The axis along which `values` are appended. If `axis` is not given, both `arr` and `values` are flattened before use. Returns **append**ndarray A copy of `arr` with `values` appended to `axis`. Note that [`append`](#numpy.append "numpy.append") does not occur in-place: a new array is allocated and filled. If `axis` is None, `out` is a flattened array. See also [`insert`](numpy.insert#numpy.insert "numpy.insert") Insert elements into an array. [`delete`](numpy.delete#numpy.delete "numpy.delete") Delete elements from an array. #### Examples ``` >>> np.append([1, 2, 3], [[4, 5, 6], [7, 8, 9]]) array([1, 2, 3, ..., 7, 8, 9]) ``` When `axis` is specified, `values` must have the correct shape. ``` >>> np.append([[1, 2, 3], [4, 5, 6]], [[7, 8, 9]], axis=0) array([[1, 2, 3], [4, 5, 6], [7, 8, 9]]) >>> np.append([[1, 2, 3], [4, 5, 6]], [7, 8, 9], axis=0) Traceback (most recent call last): ... ValueError: all the input arrays must have same number of dimensions, but the array at index 0 has 2 dimension(s) and the array at index 1 has 1 dimension(s) ``` © 2005–2022 NumPy Developers Licensed under the 3-clause BSD License. <https://numpy.org/doc/1.23/reference/generated/numpy.append.htmlnumpy.trim_zeros ================= numpy.trim_zeros(*filt*, *trim='fb'*)[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/lib/function_base.py#L1799-L1849) Trim the leading and/or trailing zeros from a 1-D array or sequence. Parameters **filt**1-D array or sequence Input array. **trim**str, optional A string with ‘f’ representing trim from front and ‘b’ to trim from back. Default is ‘fb’, trim zeros from both front and back of the array. Returns **trimmed**1-D array or sequence The result of trimming the input. The input data type is preserved. 
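For completeness alongside the `trim='b'` example below, trimming only from the front uses `trim='f'` (a small sketch not in the original examples):

```python
import numpy as np

a = np.array((0, 0, 0, 1, 2, 3, 0, 2, 1, 0))

# 'f' strips leading zeros only; trailing zeros are kept.
print(np.trim_zeros(a, 'f'))  # [1 2 3 0 2 1 0]
```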
#### Examples ``` >>> a = np.array((0, 0, 0, 1, 2, 3, 0, 2, 1, 0)) >>> np.trim_zeros(a) array([1, 2, 3, 0, 2, 1]) ``` ``` >>> np.trim_zeros(a, 'b') array([0, 0, 0, ..., 0, 2, 1]) ``` The input data type is preserved, list/tuple in means list/tuple out. ``` >>> np.trim_zeros([0, 1, 2, 0]) [1, 2] ``` © 2005–2022 NumPy Developers Licensed under the 3-clause BSD License. <https://numpy.org/doc/1.23/reference/generated/numpy.trim_zeros.htmlnumpy.fliplr ============ numpy.fliplr(*m*)[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/lib/twodim_base.py#L48-L99) Reverse the order of elements along axis 1 (left/right). For a 2-D array, this flips the entries in each row in the left/right direction. Columns are preserved, but appear in a different order than before. Parameters **m**array_like Input array, must be at least 2-D. Returns **f**ndarray A view of `m` with the columns reversed. Since a view is returned, this operation is \(\mathcal O(1)\). See also [`flipud`](numpy.flipud#numpy.flipud "numpy.flipud") Flip array in the up/down direction. [`flip`](numpy.flip#numpy.flip "numpy.flip") Flip array in one or more dimensions. [`rot90`](numpy.rot90#numpy.rot90 "numpy.rot90") Rotate array counterclockwise. #### Notes Equivalent to `m[:,::-1]` or `np.flip(m, axis=1)`. Requires the array to be at least 2-D. #### Examples ``` >>> A = np.diag([1.,2.,3.]) >>> A array([[1., 0., 0.], [0., 2., 0.], [0., 0., 3.]]) >>> np.fliplr(A) array([[0., 0., 1.], [0., 2., 0.], [3., 0., 0.]]) ``` ``` >>> A = np.random.randn(2,3,5) >>> np.all(np.fliplr(A) == A[:,::-1,...]) True ``` © 2005–2022 NumPy Developers Licensed under the 3-clause BSD License. <https://numpy.org/doc/1.23/reference/generated/numpy.fliplr.htmlnumpy.flipud ============ numpy.flipud(*m*)[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/lib/twodim_base.py#L102-L155) Reverse the order of elements along axis 0 (up/down). For a 2-D array, this flips the entries in each column in the up/down direction. 
Rows are preserved, but appear in a different order than before. Parameters **m**array_like Input array. Returns **out**array_like A view of `m` with the rows reversed. Since a view is returned, this operation is \(\mathcal O(1)\). See also [`fliplr`](numpy.fliplr#numpy.fliplr "numpy.fliplr") Flip array in the left/right direction. [`flip`](numpy.flip#numpy.flip "numpy.flip") Flip array in one or more dimensions. [`rot90`](numpy.rot90#numpy.rot90 "numpy.rot90") Rotate array counterclockwise. #### Notes Equivalent to `m[::-1, ...]` or `np.flip(m, axis=0)`. Requires the array to be at least 1-D. #### Examples ``` >>> A = np.diag([1.0, 2, 3]) >>> A array([[1., 0., 0.], [0., 2., 0.], [0., 0., 3.]]) >>> np.flipud(A) array([[0., 0., 3.], [0., 2., 0.], [1., 0., 0.]]) ``` ``` >>> A = np.random.randn(2,3,5) >>> np.all(np.flipud(A) == A[::-1,...]) True ``` ``` >>> np.flipud([1,2]) array([2, 1]) ``` © 2005–2022 NumPy Developers Licensed under the 3-clause BSD License. <https://numpy.org/doc/1.23/reference/generated/numpy.flipud.htmlnumpy.roll ========== numpy.roll(*a*, *shift*, *axis=None*)[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/core/numeric.py#L1146-L1250) Roll array elements along a given axis. Elements that roll beyond the last position are re-introduced at the first. Parameters **a**array_like Input array. **shift**int or tuple of ints The number of places by which elements are shifted. If a tuple, then `axis` must be a tuple of the same size, and each of the given axes is shifted by the corresponding number. If an int while `axis` is a tuple of ints, then the same value is used for all given axes. **axis**int or tuple of ints, optional Axis or axes along which elements are shifted. By default, the array is flattened before shifting, after which the original shape is restored. Returns **res**ndarray Output array, with the same shape as `a`. 
See also [`rollaxis`](numpy.rollaxis#numpy.rollaxis "numpy.rollaxis") Roll the specified axis backwards, until it lies in a given position. #### Notes New in version 1.12.0. Supports rolling over multiple dimensions simultaneously. #### Examples ``` >>> x = np.arange(10) >>> np.roll(x, 2) array([8, 9, 0, 1, 2, 3, 4, 5, 6, 7]) >>> np.roll(x, -2) array([2, 3, 4, 5, 6, 7, 8, 9, 0, 1]) ``` ``` >>> x2 = np.reshape(x, (2, 5)) >>> x2 array([[0, 1, 2, 3, 4], [5, 6, 7, 8, 9]]) >>> np.roll(x2, 1) array([[9, 0, 1, 2, 3], [4, 5, 6, 7, 8]]) >>> np.roll(x2, -1) array([[1, 2, 3, 4, 5], [6, 7, 8, 9, 0]]) >>> np.roll(x2, 1, axis=0) array([[5, 6, 7, 8, 9], [0, 1, 2, 3, 4]]) >>> np.roll(x2, -1, axis=0) array([[5, 6, 7, 8, 9], [0, 1, 2, 3, 4]]) >>> np.roll(x2, 1, axis=1) array([[4, 0, 1, 2, 3], [9, 5, 6, 7, 8]]) >>> np.roll(x2, -1, axis=1) array([[1, 2, 3, 4, 0], [6, 7, 8, 9, 5]]) >>> np.roll(x2, (1, 1), axis=(1, 0)) array([[9, 5, 6, 7, 8], [4, 0, 1, 2, 3]]) >>> np.roll(x2, (2, 1), axis=(1, 0)) array([[8, 9, 5, 6, 7], [3, 4, 0, 1, 2]]) ``` © 2005–2022 NumPy Developers Licensed under the 3-clause BSD License. <https://numpy.org/doc/1.23/reference/generated/numpy.roll.htmlnumpy.rot90 =========== numpy.rot90(*m*, *k=1*, *axes=(0, 1)*)[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/lib/function_base.py#L158-L245) Rotate an array by 90 degrees in the plane specified by axes. Rotation direction is from the first towards the second axis. Parameters **m**array_like Array of two or more dimensions. **k**integer Number of times the array is rotated by 90 degrees. **axes**(2,) array_like The array is rotated in the plane defined by the axes. Axes must be different. New in version 1.12.0. Returns **y**ndarray A rotated view of `m`. See also [`flip`](numpy.flip#numpy.flip "numpy.flip") Reverse the order of elements in an array along the given axis. [`fliplr`](numpy.fliplr#numpy.fliplr "numpy.fliplr") Flip an array horizontally. 
[`flipud`](numpy.flipud#numpy.flipud "numpy.flipud") Flip an array vertically. #### Notes `rot90(m, k=1, axes=(1,0))` is the reverse of `rot90(m, k=1, axes=(0,1))` `rot90(m, k=1, axes=(1,0))` is equivalent to `rot90(m, k=-1, axes=(0,1))` #### Examples ``` >>> m = np.array([[1,2],[3,4]], int) >>> m array([[1, 2], [3, 4]]) >>> np.rot90(m) array([[2, 4], [1, 3]]) >>> np.rot90(m, 2) array([[4, 3], [2, 1]]) >>> m = np.arange(8).reshape((2,2,2)) >>> np.rot90(m, 1, (1,2)) array([[[1, 3], [0, 2]], [[5, 7], [4, 6]]]) ``` © 2005–2022 NumPy Developers Licensed under the 3-clause BSD License. <https://numpy.org/doc/1.23/reference/generated/numpy.rot90.htmlnumpy.packbits ============== numpy.packbits(*a*, */*, *axis=None*, *bitorder='big'*) Packs the elements of a binary-valued array into bits in a uint8 array. The result is padded to full bytes by inserting zero bits at the end. Parameters **a**array_like An array of integers or booleans whose elements should be packed to bits. **axis**int, optional The dimension over which bit-packing is done. `None` implies packing the flattened array. **bitorder**{‘big’, ‘little’}, optional The order of the input bits. ‘big’ will mimic bin(val), `[0, 0, 0, 0, 0, 0, 1, 1] => 3 = 0b00000011`, ‘little’ will reverse the order so `[1, 1, 0, 0, 0, 0, 0, 0] => 3`. Defaults to ‘big’. New in version 1.17.0. Returns **packed**ndarray Array of type uint8 whose elements represent bits corresponding to the logical (0 or nonzero) value of the input elements. The shape of `packed` has the same number of dimensions as the input (unless `axis` is None, in which case the output is 1-D). See also [`unpackbits`](numpy.unpackbits#numpy.unpackbits "numpy.unpackbits") Unpacks elements of a uint8 array into a binary-valued output array. #### Examples ``` >>> a = np.array([[[1,0,1], ... [0,1,0]], ... [[1,1,0], ... 
[0,0,1]]]) >>> b = np.packbits(a, axis=-1) >>> b array([[[160], [ 64]], [[192], [ 32]]], dtype=uint8) ``` Note that in binary 160 = 1010 0000, 64 = 0100 0000, 192 = 1100 0000, and 32 = 0010 0000. © 2005–2022 NumPy Developers Licensed under the 3-clause BSD License. <https://numpy.org/doc/1.23/reference/generated/numpy.packbits.htmlnumpy.unpackbits ================ numpy.unpackbits(*a*, */*, *axis=None*, *count=None*, *bitorder='big'*) Unpacks elements of a uint8 array into a binary-valued output array. Each element of `a` represents a bit-field that should be unpacked into a binary-valued output array. The shape of the output array is either 1-D (if `axis` is `None`) or the same shape as the input array with unpacking done along the axis specified. Parameters **a**ndarray, uint8 type Input array. **axis**int, optional The dimension over which bit-unpacking is done. `None` implies unpacking the flattened array. **count**int or None, optional The number of elements to unpack along `axis`, provided as a way of undoing the effect of packing a size that is not a multiple of eight. A non-negative number means to only unpack `count` bits. A negative number means to trim off that many bits from the end. `None` means to unpack the entire array (the default). Counts larger than the available number of bits will add zero padding to the output. Negative counts must not exceed the available number of bits. New in version 1.17.0. **bitorder**{‘big’, ‘little’}, optional The order of the returned bits. ‘big’ will mimic bin(val), `3 = 0b00000011 => [0, 0, 0, 0, 0, 0, 1, 1]`, ‘little’ will reverse the order to `[1, 1, 0, 0, 0, 0, 0, 0]`. Defaults to ‘big’. New in version 1.17.0. Returns **unpacked**ndarray, uint8 type The elements are binary-valued (0 or 1). See also [`packbits`](numpy.packbits#numpy.packbits "numpy.packbits") Packs the elements of a binary-valued array into bits in a uint8 array. 
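Before the detailed examples, here is a minimal pack/unpack round trip (a sketch, assuming the default `bitorder='big'`):

```python
import numpy as np

# Eight bits pack exactly into one byte under the default 'big' bit order.
bits = np.array([1, 0, 1, 1, 0, 0, 1, 0], dtype=np.uint8)
packed = np.packbits(bits)        # one byte: 0b10110010 == 178
restored = np.unpackbits(packed)  # recovers the original eight bits
```

Because the input here is already a multiple of eight bits, no zero padding is added and no `count` trimming is needed on the way back.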
#### Examples ``` >>> a = np.array([[2], [7], [23]], dtype=np.uint8) >>> a array([[ 2], [ 7], [23]], dtype=uint8) >>> b = np.unpackbits(a, axis=1) >>> b array([[0, 0, 0, 0, 0, 0, 1, 0], [0, 0, 0, 0, 0, 1, 1, 1], [0, 0, 0, 1, 0, 1, 1, 1]], dtype=uint8) >>> c = np.unpackbits(a, axis=1, count=-3) >>> c array([[0, 0, 0, 0, 0], [0, 0, 0, 0, 0], [0, 0, 0, 1, 0]], dtype=uint8) ``` ``` >>> p = np.packbits(b, axis=0) >>> np.unpackbits(p, axis=0) array([[0, 0, 0, 0, 0, 0, 1, 0], [0, 0, 0, 0, 0, 1, 1, 1], [0, 0, 0, 1, 0, 1, 1, 1], [0, 0, 0, 0, 0, 0, 0, 0], [0, 0, 0, 0, 0, 0, 0, 0], [0, 0, 0, 0, 0, 0, 0, 0], [0, 0, 0, 0, 0, 0, 0, 0], [0, 0, 0, 0, 0, 0, 0, 0]], dtype=uint8) >>> np.array_equal(b, np.unpackbits(p, axis=0, count=b.shape[0])) True ``` © 2005–2022 NumPy Developers Licensed under the 3-clause BSD License. <https://numpy.org/doc/1.23/reference/generated/numpy.unpackbits.htmlnumpy.binary_repr ================== numpy.binary_repr(*num*, *width=None*)[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/core/numeric.py#L1954-L2066) Return the binary representation of the input number as a string. For negative numbers, if width is not given, a minus sign is added to the front. If width is given, the two’s complement of the number is returned, with respect to that width. In a two’s-complement system negative numbers are represented by the two’s complement of the absolute value. This is the most common method of representing signed integers on computers [[1]](#r962252997619-1). A N-bit two’s-complement system can represent every integer in the range \(-2^{N-1}\) to \(+2^{N-1}-1\). Parameters **num**int Only an integer decimal number can be used. **width**int, optional The length of the returned string if `num` is positive, or the length of the two’s complement if `num` is negative, provided that `width` is at least a sufficient number of bits for `num` to be represented in the designated form. 
If the `width` value is insufficient, it will be ignored, and `num` will be returned in binary (`num` > 0) or two’s complement (`num` < 0) form with its width equal to the minimum number of bits needed to represent the number in the designated form. This behavior is deprecated and will later raise an error. Deprecated since version 1.12.0. Returns **bin**str Binary representation of `num` or two’s complement of `num`. See also [`base_repr`](numpy.base_repr#numpy.base_repr "numpy.base_repr") Return a string representation of a number in the given base system. [`bin`](https://docs.python.org/3/library/functions.html#bin "(in Python v3.10)") Python’s built-in binary representation generator of an integer. #### Notes [`binary_repr`](#numpy.binary_repr "numpy.binary_repr") is equivalent to using [`base_repr`](numpy.base_repr#numpy.base_repr "numpy.base_repr") with base 2, but about 25x faster. #### References [1](#id1) Wikipedia, “Two’s complement”, [https://en.wikipedia.org/wiki/Two’s_complement](https://en.wikipedia.org/wiki/Two's_complement) #### Examples ``` >>> np.binary_repr(3) '11' >>> np.binary_repr(-3) '-11' >>> np.binary_repr(3, width=4) '0011' ``` The two’s complement is returned when the input number is negative and width is specified: ``` >>> np.binary_repr(-3, width=3) '101' >>> np.binary_repr(-3, width=5) '11101' ``` © 2005–2022 NumPy Developers Licensed under the 3-clause BSD License. <https://numpy.org/doc/1.23/reference/generated/numpy.binary_repr.htmlnumpy.char.add ============== char.add(*x1*, *x2*)[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/core/defchararray.py#L285-L310) Return element-wise string concatenation for two arrays of str or unicode. Arrays `x1` and `x2` must have the same shape. Parameters **x1**array_like of str or unicode Input array. **x2**array_like of str or unicode Input array. 
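A minimal usage sketch (illustrative values):

```python
import numpy as np

# Element-wise string concatenation of two equally-shaped arrays.
first = np.array(['num', 'py '])
second = np.array(['py', 'docs'])
joined = np.char.add(first, second)  # ['numpy', 'py docs']
```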
Returns **add**ndarray Output array of [`string_`](../arrays.scalars#numpy.string_ "numpy.string_") or [`unicode_`](../arrays.scalars#numpy.unicode_ "numpy.unicode_"), depending on input types of the same shape as `x1` and `x2`. © 2005–2022 NumPy Developers Licensed under the 3-clause BSD License. <https://numpy.org/doc/1.23/reference/generated/numpy.char.add.htmlnumpy.char.multiply =================== char.multiply(*a*, *i*)[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/core/defchararray.py#L317-L344) Return (a * i), that is string multiple concatenation, element-wise. Values in `i` of less than 0 are treated as 0 (which yields an empty string). Parameters **a**array_like of str or unicode **i**array_like of ints Returns **out**ndarray Output array of str or unicode, depending on input types © 2005–2022 NumPy Developers Licensed under the 3-clause BSD License. <https://numpy.org/doc/1.23/reference/generated/numpy.char.multiply.htmlnumpy.char.mod ============== char.mod(*a*, *values*)[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/core/defchararray.py#L351-L376) Return (a % i), that is pre-Python 2.6 string formatting (interpolation), element-wise for a pair of array_likes of str or unicode. Parameters **a**array_like of str or unicode **values**array_like of values These values will be element-wise interpolated into the string. Returns **out**ndarray Output array of str or unicode, depending on input types See also `str.__mod__` © 2005–2022 NumPy Developers Licensed under the 3-clause BSD License. <https://numpy.org/doc/1.23/reference/generated/numpy.char.mod.htmlnumpy.char.capitalize ===================== char.capitalize(*a*)[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/core/defchararray.py#L379-L415) Return a copy of `a` with only the first character of each element capitalized. Calls [`str.capitalize`](https://docs.python.org/3/library/stdtypes.html#str.capitalize "(in Python v3.10)") element-wise. 
For 8-bit strings, this method is locale-dependent. Parameters **a**array_like of str or unicode Input array of strings to capitalize. Returns **out**ndarray Output array of str or unicode, depending on input types See also [`str.capitalize`](https://docs.python.org/3/library/stdtypes.html#str.capitalize "(in Python v3.10)") #### Examples ``` >>> c = np.array(['a1b2','1b2a','b2a1','2a1b'],'S4'); c array(['a1b2', '1b2a', 'b2a1', '2a1b'], dtype='|S4') >>> np.char.capitalize(c) array(['A1b2', '1b2a', 'B2a1', '2a1b'], dtype='|S4') ``` © 2005–2022 NumPy Developers Licensed under the 3-clause BSD License. <https://numpy.org/doc/1.23/reference/generated/numpy.char.capitalize.htmlnumpy.char.center ================= char.center(*a*, *width*, *fillchar=' '*)[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/core/defchararray.py#L422-L456) Return a copy of `a` with its elements centered in a string of length `width`. Calls [`str.center`](https://docs.python.org/3/library/stdtypes.html#str.center "(in Python v3.10)") element-wise. Parameters **a**array_like of str or unicode **width**int The length of the resulting strings **fillchar**str or unicode, optional The padding character to use (default is space). Returns **out**ndarray Output array of str or unicode, depending on input types See also [`str.center`](https://docs.python.org/3/library/stdtypes.html#str.center "(in Python v3.10)") © 2005–2022 NumPy Developers Licensed under the 3-clause BSD License. <https://numpy.org/doc/1.23/reference/generated/numpy.char.center.htmlnumpy.char.decode ================= char.decode(*a*, *encoding=None*, *errors=None*)[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/core/defchararray.py#L513-L556) Calls `str.decode` element-wise. The set of available codecs comes from the Python standard library, and may be extended at runtime. For more information, see the [`codecs`](https://docs.python.org/3/library/codecs.html#module-codecs "(in Python v3.10)") module. 
Parameters

**a**array_like of str or unicode

**encoding**str, optional
The name of an encoding.

**errors**str, optional
Specifies how to handle encoding errors.

Returns

**out**ndarray

See also

`str.decode`

#### Notes

The type of the result will depend on the encoding specified.

#### Examples

```
>>> c = np.array([b'\x81\xc1\x81\xc1\x81\xc1', b'@@\x81\xc1@@',
...               b'\x81\x82\xc2\xc1\xc2\x82\x81'])
>>> c
array([b'\x81\xc1\x81\xc1\x81\xc1', b'@@\x81\xc1@@',
       b'\x81\x82\xc2\xc1\xc2\x82\x81'], dtype='|S7')
>>> np.char.decode(c, encoding='cp037')
array(['aAaAaA', '  aA  ', 'abBABba'], dtype='<U7')
```

© 2005–2022 NumPy Developers. Licensed under the 3-clause BSD License.
<https://numpy.org/doc/1.23/reference/generated/numpy.char.decode.html>

numpy.char.encode
=================

char.encode(*a*, *encoding=None*, *errors=None*)[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/core/defchararray.py#L559-L592)

Calls [`str.encode`](https://docs.python.org/3/library/stdtypes.html#str.encode "(in Python v3.10)") element-wise.

The set of available codecs comes from the Python standard library, and may be extended at runtime. For more information, see the codecs module.

Parameters

**a**array_like of str or unicode

**encoding**str, optional
The name of an encoding.

**errors**str, optional
Specifies how to handle encoding errors.

Returns

**out**ndarray

See also

[`str.encode`](https://docs.python.org/3/library/stdtypes.html#str.encode "(in Python v3.10)")

#### Notes

The type of the result will depend on the encoding specified.

© 2005–2022 NumPy Developers. Licensed under the 3-clause BSD License.
<https://numpy.org/doc/1.23/reference/generated/numpy.char.encode.html>

numpy.char.expandtabs
=====================

char.expandtabs(*a*, *tabsize=8*)[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/core/defchararray.py#L647-L680)

Return a copy of each string element where all tab characters are replaced by one or more spaces.
Calls [`str.expandtabs`](https://docs.python.org/3/library/stdtypes.html#str.expandtabs "(in Python v3.10)") element-wise. Return a copy of each string element where all tab characters are replaced by one or more spaces, depending on the current column and the given `tabsize`. The column number is reset to zero after each newline occurring in the string. This doesn’t understand other non-printing characters or escape sequences. Parameters **a**array_like of str or unicode Input array **tabsize**int, optional Replace tabs with `tabsize` number of spaces. If not given defaults to 8 spaces. Returns **out**ndarray Output array of str or unicode, depending on input type See also [`str.expandtabs`](https://docs.python.org/3/library/stdtypes.html#str.expandtabs "(in Python v3.10)") © 2005–2022 NumPy Developers Licensed under the 3-clause BSD License. <https://numpy.org/doc/1.23/reference/generated/numpy.char.expandtabs.htmlnumpy.char.join =============== char.join(*sep*, *seq*)[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/core/defchararray.py#L937-L960) Return a string which is the concatenation of the strings in the sequence `seq`. Calls [`str.join`](https://docs.python.org/3/library/stdtypes.html#str.join "(in Python v3.10)") element-wise. Parameters **sep**array_like of str or unicode **seq**array_like of str or unicode Returns **out**ndarray Output array of str or unicode, depending on input types See also [`str.join`](https://docs.python.org/3/library/stdtypes.html#str.join "(in Python v3.10)") © 2005–2022 NumPy Developers Licensed under the 3-clause BSD License. <https://numpy.org/doc/1.23/reference/generated/numpy.char.join.htmlnumpy.char.ljust ================ char.ljust(*a*, *width*, *fillchar=' '*)[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/core/defchararray.py#L968-L1001) Return an array with the elements of `a` left-justified in a string of length `width`. 
Calls [`str.ljust`](https://docs.python.org/3/library/stdtypes.html#str.ljust "(in Python v3.10)") element-wise. Parameters **a**array_like of str or unicode **width**int The length of the resulting strings **fillchar**str or unicode, optional The character to use for padding Returns **out**ndarray Output array of str or unicode, depending on input type See also [`str.ljust`](https://docs.python.org/3/library/stdtypes.html#str.ljust "(in Python v3.10)") © 2005–2022 NumPy Developers Licensed under the 3-clause BSD License. <https://numpy.org/doc/1.23/reference/generated/numpy.char.ljust.htmlnumpy.char.lower ================ char.lower(*a*)[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/core/defchararray.py#L1004-L1036) Return an array with the elements converted to lowercase. Call [`str.lower`](https://docs.python.org/3/library/stdtypes.html#str.lower "(in Python v3.10)") element-wise. For 8-bit strings, this method is locale-dependent. Parameters **a**array_like, {str, unicode} Input array. Returns **out**ndarray, {str, unicode} Output array of str or unicode, depending on input type See also [`str.lower`](https://docs.python.org/3/library/stdtypes.html#str.lower "(in Python v3.10)") #### Examples ``` >>> c = np.array(['A1B C', '1BCA', 'BCA1']); c array(['A1B C', '1BCA', 'BCA1'], dtype='<U5') >>> np.char.lower(c) array(['a1b c', '1bca', 'bca1'], dtype='<U5') ``` © 2005–2022 NumPy Developers Licensed under the 3-clause BSD License. <https://numpy.org/doc/1.23/reference/generated/numpy.char.lower.htmlnumpy.char.lstrip ================= char.lstrip(*a*, *chars=None*)[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/core/defchararray.py#L1043-L1095) For each element in `a`, return a copy with the leading characters removed. Calls [`str.lstrip`](https://docs.python.org/3/library/stdtypes.html#str.lstrip "(in Python v3.10)") element-wise. Parameters **a**array-like, {str, unicode} Input array. 
**chars**{str, unicode}, optional
The `chars` argument is a string specifying the set of characters to be removed. If omitted or None, the `chars` argument defaults to removing whitespace. The `chars` argument is not a prefix; rather, all combinations of its values are stripped.

Returns

**out**ndarray, {str, unicode}
Output array of str or unicode, depending on input type

See also

[`str.lstrip`](https://docs.python.org/3/library/stdtypes.html#str.lstrip "(in Python v3.10)")

#### Examples

```
>>> c = np.array(['aAaAaA', '  aA  ', 'abBABba'])
>>> c
array(['aAaAaA', '  aA  ', 'abBABba'], dtype='<U7')
```

The 'a' is not stripped from c[1] because that element begins with whitespace.

```
>>> np.char.lstrip(c, 'a')
array(['AaAaA', '  aA  ', 'bBABba'], dtype='<U7')
```

```
>>> np.char.lstrip(c, 'A') # leaves c unchanged
array(['aAaAaA', '  aA  ', 'abBABba'], dtype='<U7')
>>> (np.char.lstrip(c, ' ') == np.char.lstrip(c, '')).all()
... # XXX: is this a regression? This used to return True
... # np.char.lstrip(c,'') does not modify c at all.
False
>>> (np.char.lstrip(c, ' ') == np.char.lstrip(c, None)).all()
True
```

© 2005–2022 NumPy Developers. Licensed under the 3-clause BSD License.
<https://numpy.org/doc/1.23/reference/generated/numpy.char.lstrip.html>

numpy.char.partition
====================

char.partition(*a*, *sep*)[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/core/defchararray.py#L1102-L1135)

Partition each element in `a` around `sep`.

Calls [`str.partition`](https://docs.python.org/3/library/stdtypes.html#str.partition "(in Python v3.10)") element-wise.

For each element in `a`, split the element at the first occurrence of `sep`, and return 3 strings containing the part before the separator, the separator itself, and the part after the separator. If the separator is not found, return 3 strings containing the string itself, followed by two empty strings.
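A minimal sketch of the extra length-3 dimension this produces (illustrative values):

```python
import numpy as np

# Each element splits into (before, separator, after).
parts = np.char.partition(np.array(['key=value', 'flag']), '=')
# parts[0] -> ['key', '=', 'value']; parts[1] -> ['flag', '', '']
```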
Parameters **a**array_like, {str, unicode} Input array **sep**{str, unicode} Separator to split each string element in `a`. Returns **out**ndarray, {str, unicode} Output array of str or unicode, depending on input type. The output array will have an extra dimension with 3 elements per input element. See also [`str.partition`](https://docs.python.org/3/library/stdtypes.html#str.partition "(in Python v3.10)") © 2005–2022 NumPy Developers Licensed under the 3-clause BSD License. <https://numpy.org/doc/1.23/reference/generated/numpy.char.partition.htmlnumpy.char.replace ================== char.replace(*a*, *old*, *new*, *count=None*)[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/core/defchararray.py#L1142-L1172) For each element in `a`, return a copy of the string with all occurrences of substring `old` replaced by `new`. Calls [`str.replace`](https://docs.python.org/3/library/stdtypes.html#str.replace "(in Python v3.10)") element-wise. Parameters **a**array-like of str or unicode **old, new**str or unicode **count**int, optional If the optional argument [`count`](numpy.char.count#numpy.char.count "numpy.char.count") is given, only the first [`count`](numpy.char.count#numpy.char.count "numpy.char.count") occurrences are replaced. Returns **out**ndarray Output array of str or unicode, depending on input type See also [`str.replace`](https://docs.python.org/3/library/stdtypes.html#str.replace "(in Python v3.10)") © 2005–2022 NumPy Developers Licensed under the 3-clause BSD License. <https://numpy.org/doc/1.23/reference/generated/numpy.char.replace.htmlnumpy.char.rjust ================ char.rjust(*a*, *width*, *fillchar=' '*)[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/core/defchararray.py#L1238-L1271) Return an array with the elements of `a` right-justified in a string of length `width`. Calls [`str.rjust`](https://docs.python.org/3/library/stdtypes.html#str.rjust "(in Python v3.10)") element-wise. 
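For instance, right-justifying with a fill character (a minimal sketch, illustrative values):

```python
import numpy as np

# Pad numeric strings on the left so they line up in a fixed width.
padded = np.char.rjust(np.array(['7', '42']), 4, fillchar='0')
# -> ['0007', '0042']
```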
Parameters **a**array_like of str or unicode **width**int The length of the resulting strings **fillchar**str or unicode, optional The character to use for padding Returns **out**ndarray Output array of str or unicode, depending on input type See also [`str.rjust`](https://docs.python.org/3/library/stdtypes.html#str.rjust "(in Python v3.10)") © 2005–2022 NumPy Developers Licensed under the 3-clause BSD License. <https://numpy.org/doc/1.23/reference/generated/numpy.char.rjust.htmlnumpy.char.rpartition ===================== char.rpartition(*a*, *sep*)[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/core/defchararray.py#L1274-L1307) Partition (split) each element around the right-most separator. Calls [`str.rpartition`](https://docs.python.org/3/library/stdtypes.html#str.rpartition "(in Python v3.10)") element-wise. For each element in `a`, split the element as the last occurrence of `sep`, and return 3 strings containing the part before the separator, the separator itself, and the part after the separator. If the separator is not found, return 3 strings containing the string itself, followed by two empty strings. Parameters **a**array_like of str or unicode Input array **sep**str or unicode Right-most separator to split each element in array. Returns **out**ndarray Output array of string or unicode, depending on input type. The output array will have an extra dimension with 3 elements per input element. See also [`str.rpartition`](https://docs.python.org/3/library/stdtypes.html#str.rpartition "(in Python v3.10)") © 2005–2022 NumPy Developers Licensed under the 3-clause BSD License. <https://numpy.org/doc/1.23/reference/generated/numpy.char.rpartition.htmlnumpy.char.rsplit ================= char.rsplit(*a*, *sep=None*, *maxsplit=None*)[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/core/defchararray.py#L1314-L1349) For each element in `a`, return a list of the words in the string, using `sep` as the delimiter string. 
Calls [`str.rsplit`](https://docs.python.org/3/library/stdtypes.html#str.rsplit "(in Python v3.10)") element-wise. Except for splitting from the right, [`rsplit`](#numpy.char.rsplit "numpy.char.rsplit") behaves like [`split`](numpy.split#numpy.split "numpy.split"). Parameters **a**array_like of str or unicode **sep**str or unicode, optional If `sep` is not specified or None, any whitespace string is a separator. **maxsplit**int, optional If `maxsplit` is given, at most `maxsplit` splits are done, the rightmost ones. Returns **out**ndarray Array of list objects See also [`str.rsplit`](https://docs.python.org/3/library/stdtypes.html#str.rsplit "(in Python v3.10)"), [`split`](numpy.split#numpy.split "numpy.split") © 2005–2022 NumPy Developers Licensed under the 3-clause BSD License. <https://numpy.org/doc/1.23/reference/generated/numpy.char.rsplit.htmlnumpy.char.rstrip ================= char.rstrip(*a*, *chars=None*)[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/core/defchararray.py#L1356-L1398) For each element in `a`, return a copy with the trailing characters removed. Calls [`str.rstrip`](https://docs.python.org/3/library/stdtypes.html#str.rstrip "(in Python v3.10)") element-wise. Parameters **a**array-like of str or unicode **chars**str or unicode, optional The `chars` argument is a string specifying the set of characters to be removed. If omitted or None, the `chars` argument defaults to removing whitespace. The `chars` argument is not a suffix; rather, all combinations of its values are stripped. 
Returns **out**ndarray Output array of str or unicode, depending on input type See also [`str.rstrip`](https://docs.python.org/3/library/stdtypes.html#str.rstrip "(in Python v3.10)") #### Examples ``` >>> c = np.array(['aAaAaA', 'abBABba'], dtype='S7'); c array(['aAaAaA', 'abBABba'], dtype='|S7') >>> np.char.rstrip(c, b'a') array(['aAaAaA', 'abBABb'], dtype='|S7') >>> np.char.rstrip(c, b'A') array(['aAaAa', 'abBABba'], dtype='|S7') ``` © 2005–2022 NumPy Developers Licensed under the 3-clause BSD License. <https://numpy.org/doc/1.23/reference/generated/numpy.char.rstrip.htmlnumpy.char.split ================ char.split(*a*, *sep=None*, *maxsplit=None*)[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/core/defchararray.py#L1401-L1433) For each element in `a`, return a list of the words in the string, using `sep` as the delimiter string. Calls [`str.split`](https://docs.python.org/3/library/stdtypes.html#str.split "(in Python v3.10)") element-wise. Parameters **a**array_like of str or unicode **sep**str or unicode, optional If `sep` is not specified or None, any whitespace string is a separator. **maxsplit**int, optional If `maxsplit` is given, at most `maxsplit` splits are done. Returns **out**ndarray Array of list objects See also [`str.split`](https://docs.python.org/3/library/stdtypes.html#str.split "(in Python v3.10)"), [`rsplit`](numpy.char.rsplit#numpy.char.rsplit "numpy.char.rsplit") © 2005–2022 NumPy Developers Licensed under the 3-clause BSD License. <https://numpy.org/doc/1.23/reference/generated/numpy.char.split.htmlnumpy.char.splitlines ===================== char.splitlines(*a*, *keepends=None*)[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/core/defchararray.py#L1440-L1467) For each element in `a`, return a list of the lines in the element, breaking at line boundaries. Calls [`str.splitlines`](https://docs.python.org/3/library/stdtypes.html#str.splitlines "(in Python v3.10)") element-wise. 
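A minimal sketch (illustrative values; the result is an object array of Python lists):

```python
import numpy as np

# Each element becomes a list of its lines.
lines = np.char.splitlines(np.array(['one\ntwo', 'solo']))
# lines[0] -> ['one', 'two']; lines[1] -> ['solo']
```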
Parameters **a**array_like of str or unicode **keepends**bool, optional Line breaks are not included in the resulting list unless keepends is given and true. Returns **out**ndarray Array of list objects See also [`str.splitlines`](https://docs.python.org/3/library/stdtypes.html#str.splitlines "(in Python v3.10)") © 2005–2022 NumPy Developers Licensed under the 3-clause BSD License. <https://numpy.org/doc/1.23/reference/generated/numpy.char.splitlines.htmlnumpy.char.strip ================ char.strip(*a*, *chars=None*)[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/core/defchararray.py#L1506-L1548) For each element in `a`, return a copy with the leading and trailing characters removed. Calls [`str.strip`](https://docs.python.org/3/library/stdtypes.html#str.strip "(in Python v3.10)") element-wise. Parameters **a**array-like of str or unicode **chars**str or unicode, optional The `chars` argument is a string specifying the set of characters to be removed. If omitted or None, the `chars` argument defaults to removing whitespace. The `chars` argument is not a prefix or suffix; rather, all combinations of its values are stripped. Returns **out**ndarray Output array of str or unicode, depending on input type See also [`str.strip`](https://docs.python.org/3/library/stdtypes.html#str.strip "(in Python v3.10)") #### Examples ``` >>> c = np.array(['aAaAaA', ' aA ', 'abBABba']) >>> c array(['aAaAaA', ' aA ', 'abBABba'], dtype='<U7') >>> np.char.strip(c) array(['aAaAaA', 'aA', 'abBABba'], dtype='<U7') >>> np.char.strip(c, 'a') # 'a' unstripped from c[1] because whitespace leads array(['AaAaA', ' aA ', 'bBABb'], dtype='<U7') >>> np.char.strip(c, 'A') # 'A' unstripped from c[1] because (unprinted) ws trails array(['aAaAa', ' aA ', 'abBABba'], dtype='<U7') ``` © 2005–2022 NumPy Developers Licensed under the 3-clause BSD License. 
<https://numpy.org/doc/1.23/reference/generated/numpy.char.strip.htmlnumpy.char.swapcase =================== char.swapcase(*a*)[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/core/defchararray.py#L1551-L1586) Return element-wise a copy of the string with uppercase characters converted to lowercase and vice versa. Calls [`str.swapcase`](https://docs.python.org/3/library/stdtypes.html#str.swapcase "(in Python v3.10)") element-wise. For 8-bit strings, this method is locale-dependent. Parameters **a**array_like, {str, unicode} Input array. Returns **out**ndarray, {str, unicode} Output array of str or unicode, depending on input type See also [`str.swapcase`](https://docs.python.org/3/library/stdtypes.html#str.swapcase "(in Python v3.10)") #### Examples ``` >>> c=np.array(['a1B c','1b Ca','b Ca1','cA1b'],'S5'); c array(['a1B c', '1b Ca', 'b Ca1', 'cA1b'], dtype='|S5') >>> np.char.swapcase(c) array(['A1b C', '1B cA', 'B cA1', 'Ca1B'], dtype='|S5') ``` © 2005–2022 NumPy Developers Licensed under the 3-clause BSD License. <https://numpy.org/doc/1.23/reference/generated/numpy.char.swapcase.htmlnumpy.char.title ================ char.title(*a*)[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/core/defchararray.py#L1589-L1626) Return element-wise title cased version of string or unicode. Title case words start with uppercase characters, all remaining cased characters are lowercase. Calls [`str.title`](https://docs.python.org/3/library/stdtypes.html#str.title "(in Python v3.10)") element-wise. For 8-bit strings, this method is locale-dependent. Parameters **a**array_like, {str, unicode} Input array. 
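A minimal sketch (illustrative values):

```python
import numpy as np

# Title-case each element: the first letter of each word is uppercased,
# remaining cased characters are lowercased.
titled = np.char.title(np.array(['numpy char routines', 'hello world']))
# -> ['Numpy Char Routines', 'Hello World']
```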
Returns

**out** : ndarray
Output array of str or unicode, depending on input type

See also

[`str.title`](https://docs.python.org/3/library/stdtypes.html#str.title "(in Python v3.10)")

#### Examples

```
>>> c = np.array(['a1b c', '1b ca', 'b ca1', 'ca1b'], 'S5'); c
array(['a1b c', '1b ca', 'b ca1', 'ca1b'], dtype='|S5')
>>> np.char.title(c)
array(['A1B C', '1B Ca', 'B Ca1', 'Ca1B'], dtype='|S5')
```

<https://numpy.org/doc/1.23/reference/generated/numpy.char.title.html>

numpy.char.translate
====================

char.translate(*a*, *table*, *deletechars=None*)[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/core/defchararray.py#L1633-L1667)

For each element in `a`, return a copy of the string where all characters occurring in the optional argument `deletechars` are removed, and the remaining characters have been mapped through the given translation table.

Calls [`str.translate`](https://docs.python.org/3/library/stdtypes.html#str.translate "(in Python v3.10)") element-wise.

Parameters

**a** : array-like of str or unicode

**table** : str of length 256

**deletechars** : str

Returns

**out** : ndarray
Output array of str or unicode, depending on input type

See also

[`str.translate`](https://docs.python.org/3/library/stdtypes.html#str.translate "(in Python v3.10)")

<https://numpy.org/doc/1.23/reference/generated/numpy.char.translate.html>

numpy.char.upper
================

char.upper(*a*)[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/core/defchararray.py#L1670-L1702)

Return an array with the elements converted to uppercase.

Calls [`str.upper`](https://docs.python.org/3/library/stdtypes.html#str.upper "(in Python v3.10)") element-wise.

For 8-bit strings, this method is locale-dependent.

Parameters

**a** : array_like, {str, unicode}
Input array.
Returns

**out** : ndarray, {str, unicode}
Output array of str or unicode, depending on input type

See also

[`str.upper`](https://docs.python.org/3/library/stdtypes.html#str.upper "(in Python v3.10)")

#### Examples

```
>>> c = np.array(['a1b c', '1bca', 'bca1']); c
array(['a1b c', '1bca', 'bca1'], dtype='<U5')
>>> np.char.upper(c)
array(['A1B C', '1BCA', 'BCA1'], dtype='<U5')
```

<https://numpy.org/doc/1.23/reference/generated/numpy.char.upper.html>

numpy.char.zfill
================

char.zfill(*a*, *width*)[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/core/defchararray.py#L1709-L1737)

Return the numeric string left-filled with zeros.

Calls [`str.zfill`](https://docs.python.org/3/library/stdtypes.html#str.zfill "(in Python v3.10)") element-wise.

Parameters

**a** : array_like, {str, unicode}
Input array.

**width** : int
Width of string to left-fill elements in `a`.

Returns

**out** : ndarray, {str, unicode}
Output array of str or unicode, depending on input type

See also

[`str.zfill`](https://docs.python.org/3/library/stdtypes.html#str.zfill "(in Python v3.10)")

<https://numpy.org/doc/1.23/reference/generated/numpy.char.zfill.html>

numpy.char.equal
================

char.equal(*x1*, *x2*)[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/core/defchararray.py#L100-L123)

Return (x1 == x2) element-wise.

Unlike [`numpy.equal`](numpy.equal#numpy.equal "numpy.equal"), this comparison is performed by first stripping whitespace characters from the end of the string. This behavior is provided for backward-compatibility with numarray.

Parameters

**x1, x2** : array_like of str or unicode
Input arrays of the same shape.

Returns

**out** : ndarray
Output array of bools.
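The trailing-whitespace stripping is the part that usually surprises people; a brief illustrative sketch (not from the original page, array values are made up):

```python
import numpy as np

# np.char.equal strips trailing whitespace from both operands before
# comparing, so 'hello ' compares equal to 'hello'.
a = np.array(['hello ', 'world'])
b = np.array(['hello', 'word'])
print(np.char.equal(a, b))  # [ True False]
```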
See also

[`not_equal`](numpy.not_equal#numpy.not_equal "numpy.not_equal"), [`greater_equal`](numpy.greater_equal#numpy.greater_equal "numpy.greater_equal"), [`less_equal`](numpy.less_equal#numpy.less_equal "numpy.less_equal"), [`greater`](numpy.greater#numpy.greater "numpy.greater"), [`less`](numpy.less#numpy.less "numpy.less")

<https://numpy.org/doc/1.23/reference/generated/numpy.char.equal.html>

numpy.char.not_equal
====================

char.not_equal(*x1*, *x2*)[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/core/defchararray.py#L126-L149)

Return (x1 != x2) element-wise.

Unlike [`numpy.not_equal`](numpy.not_equal#numpy.not_equal "numpy.not_equal"), this comparison is performed by first stripping whitespace characters from the end of the string. This behavior is provided for backward-compatibility with numarray.

Parameters

**x1, x2** : array_like of str or unicode
Input arrays of the same shape.

Returns

**out** : ndarray
Output array of bools.

See also

[`equal`](numpy.equal#numpy.equal "numpy.equal"), [`greater_equal`](numpy.greater_equal#numpy.greater_equal "numpy.greater_equal"), [`less_equal`](numpy.less_equal#numpy.less_equal "numpy.less_equal"), [`greater`](numpy.greater#numpy.greater "numpy.greater"), [`less`](numpy.less#numpy.less "numpy.less")

<https://numpy.org/doc/1.23/reference/generated/numpy.char.not_equal.html>

numpy.char.greater_equal
========================

char.greater_equal(*x1*, *x2*)[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/core/defchararray.py#L152-L176)

Return (x1 >= x2) element-wise.

Unlike [`numpy.greater_equal`](numpy.greater_equal#numpy.greater_equal "numpy.greater_equal"), this comparison is performed by first stripping whitespace characters from the end of the string. This behavior is provided for backward-compatibility with numarray.
Parameters

**x1, x2** : array_like of str or unicode
Input arrays of the same shape.

Returns

**out** : ndarray
Output array of bools.

See also

[`equal`](numpy.equal#numpy.equal "numpy.equal"), [`not_equal`](numpy.not_equal#numpy.not_equal "numpy.not_equal"), [`less_equal`](numpy.less_equal#numpy.less_equal "numpy.less_equal"), [`greater`](numpy.greater#numpy.greater "numpy.greater"), [`less`](numpy.less#numpy.less "numpy.less")

<https://numpy.org/doc/1.23/reference/generated/numpy.char.greater_equal.html>

numpy.char.less_equal
=====================

char.less_equal(*x1*, *x2*)[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/core/defchararray.py#L179-L202)

Return (x1 <= x2) element-wise.

Unlike [`numpy.less_equal`](numpy.less_equal#numpy.less_equal "numpy.less_equal"), this comparison is performed by first stripping whitespace characters from the end of the string. This behavior is provided for backward-compatibility with numarray.

Parameters

**x1, x2** : array_like of str or unicode
Input arrays of the same shape.

Returns

**out** : ndarray
Output array of bools.

See also

[`equal`](numpy.equal#numpy.equal "numpy.equal"), [`not_equal`](numpy.not_equal#numpy.not_equal "numpy.not_equal"), [`greater_equal`](numpy.greater_equal#numpy.greater_equal "numpy.greater_equal"), [`greater`](numpy.greater#numpy.greater "numpy.greater"), [`less`](numpy.less#numpy.less "numpy.less")

<https://numpy.org/doc/1.23/reference/generated/numpy.char.less_equal.html>

numpy.char.greater
==================

char.greater(*x1*, *x2*)[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/core/defchararray.py#L205-L228)

Return (x1 > x2) element-wise.

Unlike [`numpy.greater`](numpy.greater#numpy.greater "numpy.greater"), this comparison is performed by first stripping whitespace characters from the end of the string.
This behavior is provided for backward-compatibility with numarray.

Parameters

**x1, x2** : array_like of str or unicode
Input arrays of the same shape.

Returns

**out** : ndarray
Output array of bools.

See also

[`equal`](numpy.equal#numpy.equal "numpy.equal"), [`not_equal`](numpy.not_equal#numpy.not_equal "numpy.not_equal"), [`greater_equal`](numpy.greater_equal#numpy.greater_equal "numpy.greater_equal"), [`less_equal`](numpy.less_equal#numpy.less_equal "numpy.less_equal"), [`less`](numpy.less#numpy.less "numpy.less")

<https://numpy.org/doc/1.23/reference/generated/numpy.char.greater.html>

numpy.char.less
===============

char.less(*x1*, *x2*)[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/core/defchararray.py#L231-L254)

Return (x1 < x2) element-wise.

Unlike [`numpy.less`](numpy.less#numpy.less "numpy.less"), this comparison is performed by first stripping whitespace characters from the end of the string. This behavior is provided for backward-compatibility with numarray.

Parameters

**x1, x2** : array_like of str or unicode
Input arrays of the same shape.

Returns

**out** : ndarray
Output array of bools.

See also

[`equal`](numpy.equal#numpy.equal "numpy.equal"), [`not_equal`](numpy.not_equal#numpy.not_equal "numpy.not_equal"), [`greater_equal`](numpy.greater_equal#numpy.greater_equal "numpy.greater_equal"), [`less_equal`](numpy.less_equal#numpy.less_equal "numpy.less_equal"), [`greater`](numpy.greater#numpy.greater "numpy.greater")

<https://numpy.org/doc/1.23/reference/generated/numpy.char.less.html>

numpy.char.compare_chararrays
=============================

char.compare_chararrays(*a1*, *a2*, *cmp*, *rstrip*)

Performs element-wise comparison of two string arrays using the comparison operator specified by `cmp`.

Parameters

**a1, a2** : array_like
Arrays to be compared.
**cmp** : {“<”, “<=”, “==”, “>=”, “>”, “!=”}
Type of comparison.

**rstrip** : Boolean
If True, the spaces at the end of Strings are removed before the comparison.

Returns

**out** : ndarray
The output array of type Boolean with the same shape as `a1` and `a2`.

Raises

ValueError
If `cmp` is not valid.

TypeError
If at least one of `a1` or `a2` is a non-string array

#### Examples

```
>>> a = np.array(["a", "b", "cde"])
>>> b = np.array(["a", "a", "dec"])
>>> np.compare_chararrays(a, b, ">", True)
array([False,  True, False])
```

<https://numpy.org/doc/1.23/reference/generated/numpy.char.compare_chararrays.html>

numpy.char.count
================

char.count(*a*, *sub*, *start=0*, *end=None*)[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/core/defchararray.py#L463-L506)

Returns an array with the number of non-overlapping occurrences of substring `sub` in the range [`start`, `end`].

Calls [`str.count`](https://docs.python.org/3/library/stdtypes.html#str.count "(in Python v3.10)") element-wise.

Parameters

**a** : array_like of str or unicode

**sub** : str or unicode
The substring to search for.

**start, end** : int, optional
Optional arguments `start` and `end` are interpreted as slice notation to specify the range in which to count.

Returns

**out** : ndarray
Output array of ints.

See also

[`str.count`](https://docs.python.org/3/library/stdtypes.html#str.count "(in Python v3.10)")

#### Examples

```
>>> c = np.array(['aAaAaA', '  aA  ', 'abBABba'])
>>> c
array(['aAaAaA', '  aA  ', 'abBABba'], dtype='<U7')
>>> np.char.count(c, 'A')
array([3, 1, 1])
>>> np.char.count(c, 'aA')
array([3, 1, 0])
>>> np.char.count(c, 'A', start=1, end=4)
array([2, 1, 1])
>>> np.char.count(c, 'A', start=1, end=3)
array([1, 0, 0])
```
<https://numpy.org/doc/1.23/reference/generated/numpy.char.count.html>

numpy.char.endswith
===================

char.endswith(*a*, *suffix*, *start=0*, *end=None*)[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/core/defchararray.py#L599-L640)

Returns a boolean array which is `True` where the string element in `a` ends with `suffix`, otherwise `False`.

Calls [`str.endswith`](https://docs.python.org/3/library/stdtypes.html#str.endswith "(in Python v3.10)") element-wise.

Parameters

**a** : array_like of str or unicode

**suffix** : str

**start, end** : int, optional
With optional `start`, test beginning at that position. With optional `end`, stop comparing at that position.

Returns

**out** : ndarray
Outputs an array of bools.

See also

[`str.endswith`](https://docs.python.org/3/library/stdtypes.html#str.endswith "(in Python v3.10)")

#### Examples

```
>>> s = np.array(['foo', 'bar'])
>>> s[0] = 'foo'
>>> s[1] = 'bar'
>>> s
array(['foo', 'bar'], dtype='<U3')
>>> np.char.endswith(s, 'ar')
array([False,  True])
>>> np.char.endswith(s, 'a', start=1, end=2)
array([False,  True])
```

<https://numpy.org/doc/1.23/reference/generated/numpy.char.endswith.html>

numpy.char.find
===============

char.find(*a*, *sub*, *start=0*, *end=None*)[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/core/defchararray.py#L683-L716)

For each element, return the lowest index in the string where substring `sub` is found.

Calls [`str.find`](https://docs.python.org/3/library/stdtypes.html#str.find "(in Python v3.10)") element-wise.

For each element, return the lowest index in the string where substring `sub` is found, such that `sub` is contained in the range [`start`, `end`].

Parameters

**a** : array_like of str or unicode

**sub** : str or unicode

**start, end** : int, optional
Optional arguments `start` and `end` are interpreted as in slice notation.

Returns

**out** : ndarray or int
Output array of ints. Returns -1 if `sub` is not found.
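The original page has no example; a short illustrative sketch (not from the NumPy docs, sample strings are made up):

```python
import numpy as np

# Lowest index of the substring in each element; -1 where it is absent.
c = np.array(['NumPy', 'Python', 'abc'])
print(np.char.find(c, 'y'))  # [ 4  1 -1]
```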
See also

[`str.find`](https://docs.python.org/3/library/stdtypes.html#str.find "(in Python v3.10)")

<https://numpy.org/doc/1.23/reference/generated/numpy.char.find.html>

numpy.char.index
================

char.index(*a*, *sub*, *start=0*, *end=None*)[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/core/defchararray.py#L719-L745)

Like [`find`](numpy.char.find#numpy.char.find "numpy.char.find"), but raises `ValueError` when the substring is not found.

Calls [`str.index`](https://docs.python.org/3/library/stdtypes.html#str.index "(in Python v3.10)") element-wise.

Parameters

**a** : array_like of str or unicode

**sub** : str or unicode

**start, end** : int, optional

Returns

**out** : ndarray
Output array of ints.

See also

[`find`](numpy.char.find#numpy.char.find "numpy.char.find"), [`str.find`](https://docs.python.org/3/library/stdtypes.html#str.find "(in Python v3.10)")

<https://numpy.org/doc/1.23/reference/generated/numpy.char.index.html>

numpy.char.isalpha
==================

char.isalpha(*a*)[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/core/defchararray.py#L774-L797)

Returns true for each element if all characters in the string are alphabetic and there is at least one character, false otherwise.

Calls [`str.isalpha`](https://docs.python.org/3/library/stdtypes.html#str.isalpha "(in Python v3.10)") element-wise.

For 8-bit strings, this method is locale-dependent.

Parameters

**a** : array_like of str or unicode

Returns

**out** : ndarray
Output array of bools

See also

[`str.isalpha`](https://docs.python.org/3/library/stdtypes.html#str.isalpha "(in Python v3.10)")
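The page carries no example; an illustrative sketch (not from the NumPy docs, sample strings are made up):

```python
import numpy as np

# True only when every character is alphabetic and the string is non-empty;
# a space or an empty string makes the element False.
a = np.array(['numpy', 'num py', ''])
print(np.char.isalpha(a))  # [ True False False]
```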
<https://numpy.org/doc/1.23/reference/generated/numpy.char.isalpha.html>

numpy.char.isalnum
==================

char.isalnum(*a*)[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/core/defchararray.py#L748-L771)

Returns true for each element if all characters in the string are alphanumeric and there is at least one character, false otherwise.

Calls [`str.isalnum`](https://docs.python.org/3/library/stdtypes.html#str.isalnum "(in Python v3.10)") element-wise.

For 8-bit strings, this method is locale-dependent.

Parameters

**a** : array_like of str or unicode

Returns

**out** : ndarray
Output array of bools

See also

[`str.isalnum`](https://docs.python.org/3/library/stdtypes.html#str.isalnum "(in Python v3.10)")

<https://numpy.org/doc/1.23/reference/generated/numpy.char.isalnum.html>

numpy.char.isdecimal
====================

char.isdecimal(*a*)[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/core/defchararray.py#L1772-L1801)

For each element, return True if there are only decimal characters in the element.

Calls `unicode.isdecimal` element-wise.

Decimal characters include digit characters, and all characters that can be used to form decimal-radix numbers, e.g. `U+0660, ARABIC-INDIC DIGIT ZERO`.

Parameters

**a** : array_like, unicode
Input array.

Returns

**out** : ndarray, bool
Array of booleans identical in shape to `a`.

See also

`unicode.isdecimal`

<https://numpy.org/doc/1.23/reference/generated/numpy.char.isdecimal.html>

numpy.char.isdigit
==================

char.isdigit(*a*)[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/core/defchararray.py#L800-L823)

Returns true for each element if all characters in the string are digits and there is at least one character, false otherwise.
Calls [`str.isdigit`](https://docs.python.org/3/library/stdtypes.html#str.isdigit "(in Python v3.10)") element-wise.

For 8-bit strings, this method is locale-dependent.

Parameters

**a** : array_like of str or unicode

Returns

**out** : ndarray
Output array of bools

See also

[`str.isdigit`](https://docs.python.org/3/library/stdtypes.html#str.isdigit "(in Python v3.10)")

<https://numpy.org/doc/1.23/reference/generated/numpy.char.isdigit.html>

numpy.char.islower
==================

char.islower(*a*)[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/core/defchararray.py#L826-L850)

Returns true for each element if all cased characters in the string are lowercase and there is at least one cased character, false otherwise.

Calls [`str.islower`](https://docs.python.org/3/library/stdtypes.html#str.islower "(in Python v3.10)") element-wise.

For 8-bit strings, this method is locale-dependent.

Parameters

**a** : array_like of str or unicode

Returns

**out** : ndarray
Output array of bools

See also

[`str.islower`](https://docs.python.org/3/library/stdtypes.html#str.islower "(in Python v3.10)")

<https://numpy.org/doc/1.23/reference/generated/numpy.char.islower.html>

numpy.char.isnumeric
====================

char.isnumeric(*a*)[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/core/defchararray.py#L1740-L1769)

For each element, return True if there are only numeric characters in the element.

Calls `unicode.isnumeric` element-wise.

Numeric characters include digit characters, and all characters that have the Unicode numeric value property, e.g. `U+2155, VULGAR FRACTION ONE FIFTH`.

Parameters

**a** : array_like, unicode
Input array.

Returns

**out** : ndarray, bool
Array of booleans of same shape as `a`.

See also

`unicode.isnumeric`
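An illustrative sketch (not from the NumPy docs) covering both a plain digit string and a character that carries only the Unicode numeric value property, such as the vulgar fraction one fifth (U+2155) mentioned above:

```python
import numpy as np

# Digits and numeric-property characters count; a letter makes it False.
a = np.array(['123', '\u2155', '12a'])
print(np.char.isnumeric(a))  # [ True  True False]
```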
<https://numpy.org/doc/1.23/reference/generated/numpy.char.isnumeric.html>

numpy.char.isspace
==================

char.isspace(*a*)[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/core/defchararray.py#L853-L877)

Returns true for each element if there are only whitespace characters in the string and there is at least one character, false otherwise.

Calls [`str.isspace`](https://docs.python.org/3/library/stdtypes.html#str.isspace "(in Python v3.10)") element-wise.

For 8-bit strings, this method is locale-dependent.

Parameters

**a** : array_like of str or unicode

Returns

**out** : ndarray
Output array of bools

See also

[`str.isspace`](https://docs.python.org/3/library/stdtypes.html#str.isspace "(in Python v3.10)")

<https://numpy.org/doc/1.23/reference/generated/numpy.char.isspace.html>

numpy.char.istitle
==================

char.istitle(*a*)[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/core/defchararray.py#L880-L903)

Returns true for each element if the element is a titlecased string and there is at least one character, false otherwise.

Calls [`str.istitle`](https://docs.python.org/3/library/stdtypes.html#str.istitle "(in Python v3.10)") element-wise.

For 8-bit strings, this method is locale-dependent.

Parameters

**a** : array_like of str or unicode

Returns

**out** : ndarray
Output array of bools

See also

[`str.istitle`](https://docs.python.org/3/library/stdtypes.html#str.istitle "(in Python v3.10)")

<https://numpy.org/doc/1.23/reference/generated/numpy.char.istitle.html>

numpy.char.isupper
==================

char.isupper(*a*)[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/core/defchararray.py#L906-L930)

Returns true for each element if all cased characters in the string are uppercase and there is at least one character, false otherwise.
Calls [`str.isupper`](https://docs.python.org/3/library/stdtypes.html#str.isupper "(in Python v3.10)") element-wise.

For 8-bit strings, this method is locale-dependent.

Parameters

**a** : array_like of str or unicode

Returns

**out** : ndarray
Output array of bools

See also

[`str.isupper`](https://docs.python.org/3/library/stdtypes.html#str.isupper "(in Python v3.10)")

<https://numpy.org/doc/1.23/reference/generated/numpy.char.isupper.html>

numpy.char.rfind
================

char.rfind(*a*, *sub*, *start=0*, *end=None*)[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/core/defchararray.py#L1175-L1205)

For each element in `a`, return the highest index in the string where substring `sub` is found, such that `sub` is contained within [`start`, `end`].

Calls [`str.rfind`](https://docs.python.org/3/library/stdtypes.html#str.rfind "(in Python v3.10)") element-wise.

Parameters

**a** : array-like of str or unicode

**sub** : str or unicode

**start, end** : int, optional
Optional arguments `start` and `end` are interpreted as in slice notation.

Returns

**out** : ndarray
Output array of ints. Return -1 on failure.

See also

[`str.rfind`](https://docs.python.org/3/library/stdtypes.html#str.rfind "(in Python v3.10)")

<https://numpy.org/doc/1.23/reference/generated/numpy.char.rfind.html>

numpy.char.rindex
=================

char.rindex(*a*, *sub*, *start=0*, *end=None*)[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/core/defchararray.py#L1208-L1235)

Like [`rfind`](numpy.char.rfind#numpy.char.rfind "numpy.char.rfind"), but raises `ValueError` when the substring `sub` is not found.

Calls [`str.rindex`](https://docs.python.org/3/library/stdtypes.html#str.rindex "(in Python v3.10)") element-wise.

Parameters

**a** : array-like of str or unicode

**sub** : str or unicode

**start, end** : int, optional

Returns

**out** : ndarray
Output array of ints.
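The page has no example; an illustrative sketch (not from the NumPy docs, sample strings are made up):

```python
import numpy as np

# Highest index of the substring in each element; a missing substring
# would raise ValueError rather than return -1.
c = np.array(['abcba', 'bb'])
print(np.char.rindex(c, 'b'))  # [3 1]
```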
See also

[`rfind`](numpy.char.rfind#numpy.char.rfind "numpy.char.rfind"), [`str.rindex`](https://docs.python.org/3/library/stdtypes.html#str.rindex "(in Python v3.10)")

<https://numpy.org/doc/1.23/reference/generated/numpy.char.rindex.html>

numpy.char.startswith
=====================

char.startswith(*a*, *prefix*, *start=0*, *end=None*)[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/core/defchararray.py#L1474-L1503)

Returns a boolean array which is `True` where the string element in `a` starts with `prefix`, otherwise `False`.

Calls [`str.startswith`](https://docs.python.org/3/library/stdtypes.html#str.startswith "(in Python v3.10)") element-wise.

Parameters

**a** : array_like of str or unicode

**prefix** : str

**start, end** : int, optional
With optional `start`, test beginning at that position. With optional `end`, stop comparing at that position.

Returns

**out** : ndarray
Array of booleans

See also

[`str.startswith`](https://docs.python.org/3/library/stdtypes.html#str.startswith "(in Python v3.10)")

<https://numpy.org/doc/1.23/reference/generated/numpy.char.startswith.html>

numpy.char.str_len
==================

char.str_len(*a*)[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/core/defchararray.py#L261-L282)

Return len(a) element-wise.

Parameters

**a** : array_like of str or unicode

Returns

**out** : ndarray
Output array of integers

See also

`builtins.len`

<https://numpy.org/doc/1.23/reference/generated/numpy.char.str_len.html>

numpy.char.array
================

char.array(*obj*, *itemsize=None*, *copy=True*, *unicode=None*, *order=None*)[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/core/defchararray.py#L2612-L2743)

Create a [`chararray`](numpy.chararray#numpy.chararray "numpy.chararray").
Note

This class is provided for numarray backward-compatibility. New code (not concerned with numarray compatibility) should use arrays of type [`string_`](../arrays.scalars#numpy.string_ "numpy.string_") or [`unicode_`](../arrays.scalars#numpy.unicode_ "numpy.unicode_") and use the free functions in `numpy.char` for fast vectorized string operations instead.

Versus a regular NumPy array of type [`str`](https://docs.python.org/3/library/stdtypes.html#str "(in Python v3.10)") or `unicode`, this class adds the following functionality:

1. values automatically have whitespace removed from the end when indexed
2. comparison operators automatically remove whitespace from the end when comparing values
3. vectorized string operations are provided as methods (e.g. [`str.endswith`](https://docs.python.org/3/library/stdtypes.html#str.endswith "(in Python v3.10)")) and infix operators (e.g. `+`, `*`, `%`)

Parameters

**obj** : array of str or unicode-like

**itemsize** : int, optional
`itemsize` is the number of characters per scalar in the resulting array. If `itemsize` is None, and `obj` is an object array or a Python list, the `itemsize` will be automatically determined. If `itemsize` is provided and `obj` is of type str or unicode, then the `obj` string will be chunked into `itemsize` pieces.

**copy** : bool, optional
If true (default), then the object is copied. Otherwise, a copy will only be made if `__array__` returns a copy, if `obj` is a nested sequence, or if a copy is needed to satisfy any of the other requirements (`itemsize`, unicode, `order`, etc.).

**unicode** : bool, optional
When true, the resulting [`chararray`](numpy.chararray#numpy.chararray "numpy.chararray") can contain Unicode characters, when false only 8-bit characters.
If unicode is None and `obj` is one of the following:

* a [`chararray`](numpy.chararray#numpy.chararray "numpy.chararray"),
* an ndarray of type [`str`](https://docs.python.org/3/library/stdtypes.html#str "(in Python v3.10)") or `unicode`
* a Python str or unicode object,

then the unicode setting of the output array will be automatically determined.

**order** : {‘C’, ‘F’, ‘A’}, optional
Specify the order of the array. If order is ‘C’ (default), then the array will be in C-contiguous order (last-index varies the fastest). If order is ‘F’, then the returned array will be in Fortran-contiguous order (first-index varies the fastest). If order is ‘A’, then the returned array may be in any order (either C-, Fortran-contiguous, or even discontiguous).

<https://numpy.org/doc/1.23/reference/generated/numpy.char.array.html>

numpy.char.asarray
==================

char.asarray(*obj*, *itemsize=None*, *unicode=None*, *order=None*)[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/core/defchararray.py#L2746-L2796)

Convert the input to a [`chararray`](numpy.chararray#numpy.chararray "numpy.chararray"), copying the data only if necessary.

Versus a regular NumPy array of type [`str`](https://docs.python.org/3/library/stdtypes.html#str "(in Python v3.10)") or `unicode`, this class adds the following functionality:

1. values automatically have whitespace removed from the end when indexed
2. comparison operators automatically remove whitespace from the end when comparing values
3. vectorized string operations are provided as methods (e.g. [`str.endswith`](https://docs.python.org/3/library/stdtypes.html#str.endswith "(in Python v3.10)")) and infix operators (e.g. `+`, `*`, `%`)

Parameters

**obj** : array of str or unicode-like

**itemsize** : int, optional
`itemsize` is the number of characters per scalar in the resulting array.
If `itemsize` is None, and `obj` is an object array or a Python list, the `itemsize` will be automatically determined. If `itemsize` is provided and `obj` is of type str or unicode, then the `obj` string will be chunked into `itemsize` pieces.

**unicode** : bool, optional
When true, the resulting [`chararray`](numpy.chararray#numpy.chararray "numpy.chararray") can contain Unicode characters, when false only 8-bit characters.

If unicode is None and `obj` is one of the following:

* a [`chararray`](numpy.chararray#numpy.chararray "numpy.chararray"),
* an ndarray of type [`str`](https://docs.python.org/3/library/stdtypes.html#str "(in Python v3.10)") or `unicode`
* a Python str or unicode object,

then the unicode setting of the output array will be automatically determined.

**order** : {‘C’, ‘F’}, optional
Specify the order of the array. If order is ‘C’ (default), then the array will be in C-contiguous order (last-index varies the fastest). If order is ‘F’, then the returned array will be in Fortran-contiguous order (first-index varies the fastest).

<https://numpy.org/doc/1.23/reference/generated/numpy.char.asarray.html>

numpy.char.chararray
====================

*class* numpy.char.chararray(*shape*, *itemsize=1*, *unicode=False*, *buffer=None*, *offset=0*, *strides=None*, *order=None*)[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/__init__.py)

Provides a convenient view on arrays of string and unicode values.

Note

The [`chararray`](#numpy.char.chararray "numpy.char.chararray") class exists for backwards compatibility with Numarray; it is not recommended for new development.
Starting from numpy 1.4, if one needs arrays of strings, it is recommended to use arrays of [`dtype`](numpy.char.chararray.dtype#numpy.char.chararray.dtype "numpy.char.chararray.dtype") `object_`, `string_` or `unicode_`, and use the free functions in the [`numpy.char`](../routines.char#module-numpy.char "numpy.char") module for fast vectorized string operations.

Versus a regular NumPy array of type `str` or `unicode`, this class adds the following functionality:

1. values automatically have whitespace removed from the end when indexed
2. comparison operators automatically remove whitespace from the end when comparing values
3. vectorized string operations are provided as methods (e.g. [`endswith`](numpy.char.chararray.endswith#numpy.char.chararray.endswith "numpy.char.chararray.endswith")) and infix operators (e.g. `"+", "*", "%"`)

chararrays should be created using [`numpy.char.array`](numpy.char.array#numpy.char.array "numpy.char.array") or [`numpy.char.asarray`](numpy.char.asarray#numpy.char.asarray "numpy.char.asarray"), rather than this constructor directly.

This constructor creates the array, using `buffer` (with `offset` and [`strides`](numpy.char.chararray.strides#numpy.char.chararray.strides "numpy.char.chararray.strides")) if it is not `None`. If `buffer` is `None`, then constructs a new array with [`strides`](numpy.char.chararray.strides#numpy.char.chararray.strides "numpy.char.chararray.strides") in “C order”, unless both `len(shape) >= 2` and `order='F'`, in which case [`strides`](numpy.char.chararray.strides#numpy.char.chararray.strides "numpy.char.chararray.strides") is in “Fortran order”.

Parameters

**shape** : tuple
Shape of the array.

**itemsize** : int, optional
Length of each array element, in number of characters. Default is 1.

**unicode** : bool, optional
Are the array elements of type unicode (True) or string (False). Default is False.

**buffer** : object exposing the buffer interface or str, optional
Memory address of the start of the array data.
Default is None, in which case a new array is created. **offset**int, optional Fixed stride displacement from the beginning of an axis? Default is 0. Needs to be >=0. **strides**array_like of ints, optional Strides for the array (see `ndarray.strides` for full description). Default is None. **order**{‘C’, ‘F’}, optional The order in which the array data is stored in memory: ‘C’ -> “row major” order (the default), ‘F’ -> “column major” (Fortran) order.

#### Examples

```
>>> charar = np.chararray((3, 3))
>>> charar[:] = 'a'
>>> charar
chararray([[b'a', b'a', b'a'],
           [b'a', b'a', b'a'],
           [b'a', b'a', b'a']], dtype='|S1')
```

```
>>> charar = np.chararray(charar.shape, itemsize=5)
>>> charar[:] = 'abc'
>>> charar
chararray([[b'abc', b'abc', b'abc'],
           [b'abc', b'abc', b'abc'],
           [b'abc', b'abc', b'abc']], dtype='|S5')
```

Attributes [`T`](numpy.char.chararray.t#numpy.char.chararray.T "numpy.char.chararray.T") The transposed array. [`base`](numpy.char.chararray.base#numpy.char.chararray.base "numpy.char.chararray.base") Base object if memory is from some other object. [`ctypes`](numpy.char.chararray.ctypes#numpy.char.chararray.ctypes "numpy.char.chararray.ctypes") An object to simplify the interaction of the array with the ctypes module. [`data`](numpy.char.chararray.data#numpy.char.chararray.data "numpy.char.chararray.data") Python buffer object pointing to the start of the array’s data. [`dtype`](numpy.char.chararray.dtype#numpy.char.chararray.dtype "numpy.char.chararray.dtype") Data-type of the array’s elements. [`flags`](numpy.char.chararray.flags#numpy.char.chararray.flags "numpy.char.chararray.flags") Information about the memory layout of the array. [`flat`](numpy.char.chararray.flat#numpy.char.chararray.flat "numpy.char.chararray.flat") A 1-D iterator over the array. [`imag`](numpy.char.chararray.imag#numpy.char.chararray.imag "numpy.char.chararray.imag") The imaginary part of the array.
[`itemsize`](numpy.char.chararray.itemsize#numpy.char.chararray.itemsize "numpy.char.chararray.itemsize") Length of one array element in bytes. [`nbytes`](numpy.char.chararray.nbytes#numpy.char.chararray.nbytes "numpy.char.chararray.nbytes") Total bytes consumed by the elements of the array. [`ndim`](numpy.char.chararray.ndim#numpy.char.chararray.ndim "numpy.char.chararray.ndim") Number of array dimensions. [`real`](numpy.char.chararray.real#numpy.char.chararray.real "numpy.char.chararray.real") The real part of the array. [`shape`](numpy.char.chararray.shape#numpy.char.chararray.shape "numpy.char.chararray.shape") Tuple of array dimensions. [`size`](numpy.char.chararray.size#numpy.char.chararray.size "numpy.char.chararray.size") Number of elements in the array. [`strides`](numpy.char.chararray.strides#numpy.char.chararray.strides "numpy.char.chararray.strides") Tuple of bytes to step in each dimension when traversing an array. #### Methods | | | | --- | --- | | [`astype`](numpy.char.chararray.astype#numpy.char.chararray.astype "numpy.char.chararray.astype")(dtype[, order, casting, subok, copy]) | Copy of the array, cast to a specified type. | | [`argsort`](numpy.char.chararray.argsort#numpy.char.chararray.argsort "numpy.char.chararray.argsort")([axis, kind, order]) | Returns the indices that would sort this array. | | [`copy`](numpy.char.chararray.copy#numpy.char.chararray.copy "numpy.char.chararray.copy")([order]) | Return a copy of the array. | | [`count`](numpy.char.chararray.count#numpy.char.chararray.count "numpy.char.chararray.count")(sub[, start, end]) | Returns an array with the number of non-overlapping occurrences of substring `sub` in the range [`start`, `end`]. | | [`decode`](numpy.char.chararray.decode#numpy.char.chararray.decode "numpy.char.chararray.decode")([encoding, errors]) | Calls `str.decode` element-wise. 
| | [`dump`](numpy.char.chararray.dump#numpy.char.chararray.dump "numpy.char.chararray.dump")(file) | Dump a pickle of the array to the specified file. | | [`dumps`](numpy.char.chararray.dumps#numpy.char.chararray.dumps "numpy.char.chararray.dumps")() | Returns the pickle of the array as a string. | | [`encode`](numpy.char.chararray.encode#numpy.char.chararray.encode "numpy.char.chararray.encode")([encoding, errors]) | Calls `str.encode` element-wise. | | [`endswith`](numpy.char.chararray.endswith#numpy.char.chararray.endswith "numpy.char.chararray.endswith")(suffix[, start, end]) | Returns a boolean array which is `True` where the string element in `self` ends with `suffix`, otherwise `False`. | | [`expandtabs`](numpy.char.chararray.expandtabs#numpy.char.chararray.expandtabs "numpy.char.chararray.expandtabs")([tabsize]) | Return a copy of each string element where all tab characters are replaced by one or more spaces. | | [`fill`](numpy.char.chararray.fill#numpy.char.chararray.fill "numpy.char.chararray.fill")(value) | Fill the array with a scalar value. | | [`find`](numpy.char.chararray.find#numpy.char.chararray.find "numpy.char.chararray.find")(sub[, start, end]) | For each element, return the lowest index in the string where substring `sub` is found. | | [`flatten`](numpy.char.chararray.flatten#numpy.char.chararray.flatten "numpy.char.chararray.flatten")([order]) | Return a copy of the array collapsed into one dimension. | | [`getfield`](numpy.char.chararray.getfield#numpy.char.chararray.getfield "numpy.char.chararray.getfield")(dtype[, offset]) | Returns a field of the given array as a certain type. | | [`index`](numpy.char.chararray.index#numpy.char.chararray.index "numpy.char.chararray.index")(sub[, start, end]) | Like [`find`](numpy.char.find#numpy.char.find "numpy.char.find"), but raises `ValueError` when the substring is not found. 
| | [`isalnum`](numpy.char.chararray.isalnum#numpy.char.chararray.isalnum "numpy.char.chararray.isalnum")() | Returns true for each element if all characters in the string are alphanumeric and there is at least one character, false otherwise. | | [`isalpha`](numpy.char.chararray.isalpha#numpy.char.chararray.isalpha "numpy.char.chararray.isalpha")() | Returns true for each element if all characters in the string are alphabetic and there is at least one character, false otherwise. | | [`isdecimal`](numpy.char.chararray.isdecimal#numpy.char.chararray.isdecimal "numpy.char.chararray.isdecimal")() | For each element in `self`, return True if there are only decimal characters in the element. | | [`isdigit`](numpy.char.chararray.isdigit#numpy.char.chararray.isdigit "numpy.char.chararray.isdigit")() | Returns true for each element if all characters in the string are digits and there is at least one character, false otherwise. | | [`islower`](numpy.char.chararray.islower#numpy.char.chararray.islower "numpy.char.chararray.islower")() | Returns true for each element if all cased characters in the string are lowercase and there is at least one cased character, false otherwise. | | [`isnumeric`](numpy.char.chararray.isnumeric#numpy.char.chararray.isnumeric "numpy.char.chararray.isnumeric")() | For each element in `self`, return True if there are only numeric characters in the element. | | [`isspace`](numpy.char.chararray.isspace#numpy.char.chararray.isspace "numpy.char.chararray.isspace")() | Returns true for each element if there are only whitespace characters in the string and there is at least one character, false otherwise. | | [`istitle`](numpy.char.chararray.istitle#numpy.char.chararray.istitle "numpy.char.chararray.istitle")() | Returns true for each element if the element is a titlecased string and there is at least one character, false otherwise. 
| | [`isupper`](numpy.char.chararray.isupper#numpy.char.chararray.isupper "numpy.char.chararray.isupper")() | Returns true for each element if all cased characters in the string are uppercase and there is at least one character, false otherwise. | | [`item`](numpy.char.chararray.item#numpy.char.chararray.item "numpy.char.chararray.item")(*args) | Copy an element of an array to a standard Python scalar and return it. | | [`join`](numpy.char.chararray.join#numpy.char.chararray.join "numpy.char.chararray.join")(seq) | Return a string which is the concatenation of the strings in the sequence `seq`. | | [`ljust`](numpy.char.chararray.ljust#numpy.char.chararray.ljust "numpy.char.chararray.ljust")(width[, fillchar]) | Return an array with the elements of `self` left-justified in a string of length `width`. | | [`lower`](numpy.char.chararray.lower#numpy.char.chararray.lower "numpy.char.chararray.lower")() | Return an array with the elements of `self` converted to lowercase. | | [`lstrip`](numpy.char.chararray.lstrip#numpy.char.chararray.lstrip "numpy.char.chararray.lstrip")([chars]) | For each element in `self`, return a copy with the leading characters removed. | | [`nonzero`](numpy.char.chararray.nonzero#numpy.char.chararray.nonzero "numpy.char.chararray.nonzero")() | Return the indices of the elements that are non-zero. | | [`put`](numpy.char.chararray.put#numpy.char.chararray.put "numpy.char.chararray.put")(indices, values[, mode]) | Set `a.flat[n] = values[n]` for all `n` in indices. | | [`ravel`](numpy.char.chararray.ravel#numpy.char.chararray.ravel "numpy.char.chararray.ravel")([order]) | Return a flattened array. | | [`repeat`](numpy.char.chararray.repeat#numpy.char.chararray.repeat "numpy.char.chararray.repeat")(repeats[, axis]) | Repeat elements of an array. 
| | [`replace`](numpy.char.chararray.replace#numpy.char.chararray.replace "numpy.char.chararray.replace")(old, new[, count]) | For each element in `self`, return a copy of the string with all occurrences of substring `old` replaced by `new`. | | [`reshape`](numpy.char.chararray.reshape#numpy.char.chararray.reshape "numpy.char.chararray.reshape")(shape[, order]) | Returns an array containing the same data with a new shape. | | [`resize`](numpy.char.chararray.resize#numpy.char.chararray.resize "numpy.char.chararray.resize")(new_shape[, refcheck]) | Change shape and size of array in-place. | | [`rfind`](numpy.char.chararray.rfind#numpy.char.chararray.rfind "numpy.char.chararray.rfind")(sub[, start, end]) | For each element in `self`, return the highest index in the string where substring `sub` is found, such that `sub` is contained within [`start`, `end`]. | | [`rindex`](numpy.char.chararray.rindex#numpy.char.chararray.rindex "numpy.char.chararray.rindex")(sub[, start, end]) | Like [`rfind`](numpy.char.rfind#numpy.char.rfind "numpy.char.rfind"), but raises `ValueError` when the substring `sub` is not found. | | [`rjust`](numpy.char.chararray.rjust#numpy.char.chararray.rjust "numpy.char.chararray.rjust")(width[, fillchar]) | Return an array with the elements of `self` right-justified in a string of length `width`. | | [`rsplit`](numpy.char.chararray.rsplit#numpy.char.chararray.rsplit "numpy.char.chararray.rsplit")([sep, maxsplit]) | For each element in `self`, return a list of the words in the string, using `sep` as the delimiter string. | | [`rstrip`](numpy.char.chararray.rstrip#numpy.char.chararray.rstrip "numpy.char.chararray.rstrip")([chars]) | For each element in `self`, return a copy with the trailing characters removed. | | [`searchsorted`](numpy.char.chararray.searchsorted#numpy.char.chararray.searchsorted "numpy.char.chararray.searchsorted")(v[, side, sorter]) | Find indices where elements of v should be inserted in a to maintain order. 
| | [`setfield`](numpy.char.chararray.setfield#numpy.char.chararray.setfield "numpy.char.chararray.setfield")(val, dtype[, offset]) | Put a value into a specified place in a field defined by a data-type. | | [`setflags`](numpy.char.chararray.setflags#numpy.char.chararray.setflags "numpy.char.chararray.setflags")([write, align, uic]) | Set array flags WRITEABLE, ALIGNED, WRITEBACKIFCOPY, respectively. | | [`sort`](numpy.char.chararray.sort#numpy.char.chararray.sort "numpy.char.chararray.sort")([axis, kind, order]) | Sort an array in-place. | | [`split`](numpy.char.chararray.split#numpy.char.chararray.split "numpy.char.chararray.split")([sep, maxsplit]) | For each element in `self`, return a list of the words in the string, using `sep` as the delimiter string. | | [`splitlines`](numpy.char.chararray.splitlines#numpy.char.chararray.splitlines "numpy.char.chararray.splitlines")([keepends]) | For each element in `self`, return a list of the lines in the element, breaking at line boundaries. | | [`squeeze`](numpy.char.chararray.squeeze#numpy.char.chararray.squeeze "numpy.char.chararray.squeeze")([axis]) | Remove axes of length one from `a`. | | [`startswith`](numpy.char.chararray.startswith#numpy.char.chararray.startswith "numpy.char.chararray.startswith")(prefix[, start, end]) | Returns a boolean array which is `True` where the string element in `self` starts with `prefix`, otherwise `False`. | | [`strip`](numpy.char.chararray.strip#numpy.char.chararray.strip "numpy.char.chararray.strip")([chars]) | For each element in `self`, return a copy with the leading and trailing characters removed. | | [`swapaxes`](numpy.char.chararray.swapaxes#numpy.char.chararray.swapaxes "numpy.char.chararray.swapaxes")(axis1, axis2) | Return a view of the array with `axis1` and `axis2` interchanged. 
| | [`swapcase`](numpy.char.chararray.swapcase#numpy.char.chararray.swapcase "numpy.char.chararray.swapcase")() | For each element in `self`, return a copy of the string with uppercase characters converted to lowercase and vice versa. | | [`take`](numpy.char.chararray.take#numpy.char.chararray.take "numpy.char.chararray.take")(indices[, axis, out, mode]) | Return an array formed from the elements of `a` at the given indices. | | [`title`](numpy.char.chararray.title#numpy.char.chararray.title "numpy.char.chararray.title")() | For each element in `self`, return a titlecased version of the string: words start with uppercase characters, all remaining cased characters are lowercase. | | [`tofile`](numpy.char.chararray.tofile#numpy.char.chararray.tofile "numpy.char.chararray.tofile")(fid[, sep, format]) | Write array to a file as text or binary (default). | | [`tolist`](numpy.char.chararray.tolist#numpy.char.chararray.tolist "numpy.char.chararray.tolist")() | Return the array as an `a.ndim`-levels deep nested list of Python scalars. | | [`tostring`](numpy.char.chararray.tostring#numpy.char.chararray.tostring "numpy.char.chararray.tostring")([order]) | A compatibility alias for [`tobytes`](numpy.char.chararray.tobytes#numpy.char.chararray.tobytes "numpy.char.chararray.tobytes"), with exactly the same behavior. | | [`translate`](numpy.char.chararray.translate#numpy.char.chararray.translate "numpy.char.chararray.translate")(table[, deletechars]) | For each element in `self`, return a copy of the string where all characters occurring in the optional argument `deletechars` are removed, and the remaining characters have been mapped through the given translation table. | | [`transpose`](numpy.char.chararray.transpose#numpy.char.chararray.transpose "numpy.char.chararray.transpose")(*axes) | Returns a view of the array with axes transposed. 
| | [`upper`](numpy.char.chararray.upper#numpy.char.chararray.upper "numpy.char.chararray.upper")() | Return an array with the elements of `self` converted to uppercase. | | [`view`](numpy.char.chararray.view#numpy.char.chararray.view "numpy.char.chararray.view")([dtype][, type]) | New view of array with the same data. | | [`zfill`](numpy.char.chararray.zfill#numpy.char.chararray.zfill "numpy.char.chararray.zfill")(width) | Return the numeric string left-filled with zeros in a string of length `width`. | <https://numpy.org/doc/1.23/reference/generated/numpy.char.chararray.html>

numpy.char.chararray.zfill
==========================

method char.chararray.zfill(*width*)[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/core/defchararray.py#L2575-L2585) Return the numeric string left-filled with zeros in a string of length `width`. See also [`char.zfill`](numpy.char.zfill#numpy.char.zfill "numpy.char.zfill") <https://numpy.org/doc/1.23/reference/generated/numpy.char.chararray.zfill.html>

numpy.finfo
===========

*class*numpy.finfo(*dtype*)[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/__init__.py) Machine limits for floating point types. Parameters **dtype**float, dtype, or instance Kind of floating point data-type about which to get information. See also [`MachAr`](numpy.machar#numpy.MachAr "numpy.MachAr") The implementation of the tests that produce this information. [`iinfo`](numpy.iinfo#numpy.iinfo "numpy.iinfo") The equivalent for integer data types. [`spacing`](numpy.spacing#numpy.spacing "numpy.spacing") The distance between a value and the nearest adjacent number [`nextafter`](numpy.nextafter#numpy.nextafter "numpy.nextafter") The next floating point value after x1 towards x2 #### Notes For developers of NumPy: do not instantiate this at the module level.
The initial calculation of these parameters is expensive and negatively impacts import times. These objects are cached, so calling `finfo()` repeatedly inside your functions is not a problem. Note that `smallest_normal` is not actually the smallest positive representable value in a NumPy floating point type. As in the IEEE-754 standard [[1]](#r2ee89c7f792a-1), NumPy floating point types make use of subnormal numbers to fill the gap between 0 and `smallest_normal`. However, subnormal numbers may have significantly reduced precision [[2]](#r2ee89c7f792a-2). #### References [1](#id1) IEEE Standard for Floating-Point Arithmetic, IEEE Std 754-2008, pp.1-70, 2008, <http://www.doi.org/10.1109/IEEESTD.2008.4610935 [2](#id2) Wikipedia, “Denormal Numbers”, <https://en.wikipedia.org/wiki/Denormal_number Attributes **bits**int The number of bits occupied by the type. **eps**float The difference between 1.0 and the next smallest representable float larger than 1.0. For example, for 64-bit binary floats in the IEEE-754 standard, `eps = 2**-52`, approximately 2.22e-16. **epsneg**float The difference between 1.0 and the next smallest representable float less than 1.0. For example, for 64-bit binary floats in the IEEE-754 standard, `epsneg = 2**-53`, approximately 1.11e-16. **iexp**int The number of bits in the exponent portion of the floating point representation. [`machar`](numpy.finfo.machar#numpy.finfo.machar "numpy.finfo.machar")MachAr The object which calculated these parameters and holds more detailed information. **machep**int The exponent that yields `eps`. **max**floating point number of the appropriate type The largest representable number. **maxexp**int The smallest positive power of the base (2) that causes overflow. **min**floating point number of the appropriate type The smallest representable number, typically `-max`. **minexp**int The most negative power of the base (2) consistent with there being no leading 0’s in the mantissa. 
**negep**int The exponent that yields `epsneg`. **nexp**int The number of bits in the exponent including its sign and bias. **nmant**int The number of bits in the mantissa. **precision**int The approximate number of decimal digits to which this kind of float is precise. **resolution**floating point number of the appropriate type The approximate decimal resolution of this type, i.e., `10**-precision`. [`tiny`](numpy.finfo.tiny#numpy.finfo.tiny "numpy.finfo.tiny")float Return the value for tiny, alias of smallest_normal. [`smallest_normal`](numpy.finfo.smallest_normal#numpy.finfo.smallest_normal "numpy.finfo.smallest_normal")float Return the value for the smallest normal. **smallest_subnormal**float The smallest positive floating point number with 0 as leading bit in the mantissa following IEEE-754. <https://numpy.org/doc/1.23/reference/generated/numpy.finfo.html>

numpy.iinfo
===========

*class*numpy.iinfo(*type*)[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/__init__.py) Machine limits for integer types. Parameters **int_type**integer type, dtype, or instance The kind of integer data type to get information about. See also [`finfo`](numpy.finfo#numpy.finfo "numpy.finfo") The equivalent for floating point data types.

#### Examples

With types:

```
>>> ii16 = np.iinfo(np.int16)
>>> ii16.min
-32768
>>> ii16.max
32767
>>> ii32 = np.iinfo(np.int32)
>>> ii32.min
-2147483648
>>> ii32.max
2147483647
```

With instances:

```
>>> ii32 = np.iinfo(np.int32(10))
>>> ii32.min
-2147483648
>>> ii32.max
2147483647
```

Attributes **bits**int The number of bits occupied by the type. [`min`](numpy.iinfo.min#numpy.iinfo.min "numpy.iinfo.min")int Minimum value of given dtype. [`max`](numpy.iinfo.max#numpy.iinfo.max "numpy.iinfo.max")int Maximum value of given dtype.
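The `finfo` entry above, unlike `iinfo`, ships without an Examples section; the following is a minimal sketch of typical use (the values shown hold for the IEEE-754 binary64 type, i.e. `np.float64`):

```python
import numpy as np

# Machine limits for the IEEE-754 binary64 type
fi = np.finfo(np.float64)

# eps is the gap between 1.0 and the next larger representable float: 2**-52
assert fi.eps == 2.0 ** -52

# min is typically -max, as the attribute table above notes
assert fi.min == -fi.max

# precision: approximate number of reliable decimal digits (15 for float64)
assert fi.precision == 15
```

Per the Notes above, `finfo` results are cached, so calling it repeatedly inside functions is cheap.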
<https://numpy.org/doc/1.23/reference/generated/numpy.iinfo.html>

numpy.MachAr
============

*class*numpy.MachAr(*float_conv=<class 'float'>*, *int_conv=<class 'int'>*, *float_to_float=<class 'float'>*, *float_to_str=<function MachAr.<lambda>>*, *title='Python floating point number'*)[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/__init__.py) Diagnosing machine parameters. Parameters **float_conv**function, optional Function that converts an integer or integer array to a float or float array. Default is [`float`](https://docs.python.org/3/library/functions.html#float "(in Python v3.10)"). **int_conv**function, optional Function that converts a float or float array to an integer or integer array. Default is [`int`](https://docs.python.org/3/library/functions.html#int "(in Python v3.10)"). **float_to_float**function, optional Function that converts a float array to float. Default is [`float`](https://docs.python.org/3/library/functions.html#float "(in Python v3.10)"). Note that this does not seem to do anything useful in the current implementation. **float_to_str**function, optional Function that converts a single float to a string. Default is `lambda v:'%24.16e' %v`. **title**str, optional Title that is printed in the string representation of [`MachAr`](#numpy.MachAr "numpy.MachAr"). See also [`finfo`](numpy.finfo#numpy.finfo "numpy.finfo") Machine limits for floating point types. [`iinfo`](numpy.iinfo#numpy.iinfo "numpy.iinfo") Machine limits for integer types. #### References 1 Press, Teukolsky, Vetterling and Flannery, “Numerical Recipes in C++,” 2nd ed, Cambridge University Press, 2002, p. 31. Attributes **ibeta**int Radix in which numbers are represented. **it**int Number of base-`ibeta` digits in the floating point mantissa M.
**machep**int Exponent of the smallest (most negative) power of `ibeta` that, added to 1.0, gives something different from 1.0. **eps**float Floating-point number `beta**machep` (floating point precision). **negep**int Exponent of the smallest power of `ibeta` that, subtracted from 1.0, gives something different from 1.0. **epsneg**float Floating-point number `beta**negep`. **iexp**int Number of bits in the exponent (including its sign and bias). **minexp**int Smallest (most negative) power of `ibeta` consistent with there being no leading zeros in the mantissa. **xmin**float Floating-point number `beta**minexp` (the smallest [in magnitude] positive floating point number with full precision). **maxexp**int Smallest (positive) power of `ibeta` that causes overflow. **xmax**float `(1-epsneg) * beta**maxexp` (the largest [in magnitude] usable floating value). **irnd**int In `range(6)`, information on what kind of rounding is done in addition, and on how underflow is handled. **ngrd**int Number of ‘guard digits’ used when truncating the product of two mantissas to fit the representation. **epsilon**float Same as `eps`. **tiny**float An alias for `smallest_normal`, kept for backwards compatibility. **huge**float Same as `xmax`. **precision**float `- int(-log10(eps))` **resolution**float `- 10**(-precision)` **smallest_normal**float The smallest positive floating point number with 1 as leading bit in the mantissa following IEEE-754. Same as `xmin`. **smallest_subnormal**float The smallest positive floating point number with 0 as leading bit in the mantissa following IEEE-754. <https://numpy.org/doc/1.23/reference/generated/numpy.MachAr.html>

numpy.issctype
==============

numpy.issctype(*rep*)[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/core/numerictypes.py#L182-L225) Determines whether the given object represents a scalar data-type.
Parameters **rep**any If `rep` is an instance of a scalar dtype, True is returned. If not, False is returned. Returns **out**bool Boolean result of check whether `rep` is a scalar dtype. See also [`issubsctype`](numpy.issubsctype#numpy.issubsctype "numpy.issubsctype"), [`issubdtype`](numpy.issubdtype#numpy.issubdtype "numpy.issubdtype"), [`obj2sctype`](numpy.obj2sctype#numpy.obj2sctype "numpy.obj2sctype"), [`sctype2char`](numpy.sctype2char#numpy.sctype2char "numpy.sctype2char")

#### Examples

```
>>> np.issctype(np.int32)
True
>>> np.issctype(list)
False
>>> np.issctype(1.1)
False
```

Strings are also a scalar type:

```
>>> np.issctype(np.dtype('str'))
True
```

<https://numpy.org/doc/1.23/reference/generated/numpy.issctype.html>

numpy.issubdtype
================

numpy.issubdtype(*arg1*, *arg2*)[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/core/numerictypes.py#L356-L420) Returns True if first argument is a typecode lower/equal in type hierarchy. This is like the builtin [`issubclass`](https://docs.python.org/3/library/functions.html#issubclass "(in Python v3.10)"), but for [`dtype`](numpy.dtype#numpy.dtype "numpy.dtype")s. Parameters **arg1, arg2**dtype_like [`dtype`](numpy.dtype#numpy.dtype "numpy.dtype") or object coercible to one Returns **out**bool See also [Scalars](../arrays.scalars#arrays-scalars) Overview of the numpy type hierarchy.
[`issubsctype`](numpy.issubsctype#numpy.issubsctype "numpy.issubsctype"), [`issubclass_`](numpy.issubclass_#numpy.issubclass_ "numpy.issubclass_")

#### Examples

[`issubdtype`](#numpy.issubdtype "numpy.issubdtype") can be used to check the type of arrays:

```
>>> ints = np.array([1, 2, 3], dtype=np.int32)
>>> np.issubdtype(ints.dtype, np.integer)
True
>>> np.issubdtype(ints.dtype, np.floating)
False
```

```
>>> floats = np.array([1, 2, 3], dtype=np.float32)
>>> np.issubdtype(floats.dtype, np.integer)
False
>>> np.issubdtype(floats.dtype, np.floating)
True
```

Similar types of different sizes are not subdtypes of each other:

```
>>> np.issubdtype(np.float64, np.float32)
False
>>> np.issubdtype(np.float32, np.float64)
False
```

but both are subtypes of [`floating`](../arrays.scalars#numpy.floating "numpy.floating"):

```
>>> np.issubdtype(np.float64, np.floating)
True
>>> np.issubdtype(np.float32, np.floating)
True
```

For convenience, dtype-like objects are allowed too:

```
>>> np.issubdtype('S1', np.string_)
True
>>> np.issubdtype('i4', np.signedinteger)
True
```

<https://numpy.org/doc/1.23/reference/generated/numpy.issubdtype.html>

numpy.issubsctype
=================

numpy.issubsctype(*arg1*, *arg2*)[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/core/numerictypes.py#L324-L353) Determine if the first argument is a subclass of the second argument. Parameters **arg1, arg2**dtype or dtype specifier Data-types. Returns **out**bool The result. See also [`issctype`](numpy.issctype#numpy.issctype "numpy.issctype"), [`issubdtype`](numpy.issubdtype#numpy.issubdtype "numpy.issubdtype"), [`obj2sctype`](numpy.obj2sctype#numpy.obj2sctype "numpy.obj2sctype")

#### Examples

```
>>> np.issubsctype('S8', str)
False
>>> np.issubsctype(np.array([1]), int)
True
>>> np.issubsctype(np.array([1]), float)
False
```
<https://numpy.org/doc/1.23/reference/generated/numpy.issubsctype.html>

numpy.issubclass_
=================

numpy.issubclass_(*arg1*, *arg2*)[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/core/numerictypes.py#L282-L321) Determine if a class is a subclass of a second class. [`issubclass_`](#numpy.issubclass_ "numpy.issubclass_") is equivalent to the Python built-in `issubclass`, except that it returns False instead of raising a TypeError if one of the arguments is not a class. Parameters **arg1**class Input class. True is returned if `arg1` is a subclass of `arg2`. **arg2**class or tuple of classes. Input class. If a tuple of classes, True is returned if `arg1` is a subclass of any of the tuple elements. Returns **out**bool Whether `arg1` is a subclass of `arg2` or not. See also [`issubsctype`](numpy.issubsctype#numpy.issubsctype "numpy.issubsctype"), [`issubdtype`](numpy.issubdtype#numpy.issubdtype "numpy.issubdtype"), [`issctype`](numpy.issctype#numpy.issctype "numpy.issctype")

#### Examples

```
>>> np.issubclass_(np.int32, int)
False
>>> np.issubclass_(np.int32, float)
False
>>> np.issubclass_(np.float64, float)
True
```

<https://numpy.org/doc/1.23/reference/generated/numpy.issubclass_.html>

numpy.find_common_type
======================

numpy.find_common_type(*array_types*, *scalar_types*)[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/core/numerictypes.py#L597-L670) Determine common type following standard coercion rules. Parameters **array_types**sequence A list of dtypes or dtype convertible objects representing arrays. **scalar_types**sequence A list of dtypes or dtype convertible objects representing scalars. Returns **datatype**dtype The common data type, which is the maximum of `array_types` ignoring `scalar_types`, unless the maximum of `scalar_types` is of a different kind ([`dtype.kind`](numpy.dtype.kind#numpy.dtype.kind "numpy.dtype.kind")).
If the kind is not understood, then None is returned. See also [`dtype`](numpy.dtype#numpy.dtype "numpy.dtype"), [`common_type`](numpy.common_type#numpy.common_type "numpy.common_type"), [`can_cast`](numpy.can_cast#numpy.can_cast "numpy.can_cast"), [`mintypecode`](numpy.mintypecode#numpy.mintypecode "numpy.mintypecode")

#### Examples

```
>>> np.find_common_type([], [np.int64, np.float32, complex])
dtype('complex128')
>>> np.find_common_type([np.int64, np.float32], [])
dtype('float64')
```

The standard casting rules ensure that a scalar cannot up-cast an array unless the scalar is of a fundamentally different kind of data (i.e. under a different hierarchy in the data type hierarchy) than the array:

```
>>> np.find_common_type([np.float32], [np.int64, np.float64])
dtype('float32')
```

Complex is of a different type, so it up-casts the float in the `array_types` argument:

```
>>> np.find_common_type([np.float32], [complex])
dtype('complex128')
```

Type specifier strings are convertible to dtypes and can therefore be used instead of dtypes:

```
>>> np.find_common_type(['f4', 'f4', 'i4'], ['c8'])
dtype('complex128')
```

<https://numpy.org/doc/1.23/reference/generated/numpy.find_common_type.html>

numpy.typename
==============

numpy.typename(*char*)[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/lib/type_check.py#L612-L662) Return a description for the given data type code. Parameters **char**str Data type code. Returns **out**str Description of the input data type code. See also [`dtype`](numpy.dtype#numpy.dtype "numpy.dtype"), `typecodes`

#### Examples

```
>>> typechars = ['S1', '?', 'B', 'D', 'G', 'F', 'I', 'H', 'L', 'O', 'Q',
...              'S', 'U', 'V', 'b', 'd', 'g', 'f', 'i', 'h', 'l', 'q']
>>> for typechar in typechars:
...     print(typechar, ' : ', np.typename(typechar))
...
S1 : character
? : bool
B : unsigned char
D : complex double precision
G : complex long double precision
F : complex single precision
I : unsigned integer
H : unsigned short
L : unsigned long integer
O : object
Q : unsigned long long integer
S : string
U : unicode
V : void
b : signed char
d : double precision
g : long precision
f : single precision
i : integer
h : short
l : long integer
q : long long integer
```

<https://numpy.org/doc/1.23/reference/generated/numpy.typename.html>

numpy.sctype2char
=================

numpy.sctype2char(*sctype*)[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/core/numerictypes.py#L455-L504) Return the string representation of a scalar dtype. Parameters **sctype**scalar dtype or object If a scalar dtype, the corresponding string character is returned. If an object, [`sctype2char`](#numpy.sctype2char "numpy.sctype2char") tries to infer its scalar type and then return the corresponding string character. Returns **typechar**str The string character corresponding to the scalar type. Raises ValueError If `sctype` is an object for which the type can not be inferred. See also [`obj2sctype`](numpy.obj2sctype#numpy.obj2sctype "numpy.obj2sctype"), [`issctype`](numpy.issctype#numpy.issctype "numpy.issctype"), [`issubsctype`](numpy.issubsctype#numpy.issubsctype "numpy.issubsctype"), [`mintypecode`](numpy.mintypecode#numpy.mintypecode "numpy.mintypecode")

#### Examples

```
>>> for sctype in [np.int32, np.double, np.complex_, np.string_, np.ndarray]:
...     print(np.sctype2char(sctype))
l  # may vary
d
D
S
O
```

```
>>> x = np.array([1., 2-1.j])
>>> np.sctype2char(x)
'D'
>>> np.sctype2char(list)
'O'
```
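The one-character codes returned by `sctype2char` are the same codes that `dtype` objects expose through their `char` attribute, which offers a convenient cross-check (a minimal sketch; the specific dtypes chosen here are illustrative):

```python
import numpy as np

# dtype.char carries the same one-character type codes listed above
assert np.dtype(np.float64).char == 'd'     # double precision
assert np.dtype(np.complex128).char == 'D'  # complex double precision
assert np.dtype(np.int16).char == 'h'       # short
assert np.dtype('S8').char == 'S'           # byte string
```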
<https://numpy.org/doc/1.23/reference/generated/numpy.sctype2char.htmlnumpy.mintypecode ================= numpy.mintypecode(*typechars*, *typeset='GDFgdf'*, *default='d'*)[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/lib/type_check.py#L26-L77) Return the character for the minimum-size type to which given types can be safely cast. The returned type character must represent the smallest size dtype such that an array of the returned type can handle the data from an array of all types in `typechars` (or if `typechars` is an array, then its dtype.char). Parameters **typechars**list of str or array_like If a list of strings, each string should represent a dtype. If array_like, the character representation of the array dtype is used. **typeset**str or list of str, optional The set of characters that the returned character is chosen from. The default set is ‘GDFgdf’. **default**str, optional The default character, this is returned if none of the characters in `typechars` matches a character in `typeset`. Returns **typechar**str The character representing the minimum-size type that was found. See also [`dtype`](numpy.dtype#numpy.dtype "numpy.dtype"), [`sctype2char`](numpy.sctype2char#numpy.sctype2char "numpy.sctype2char"), [`maximum_sctype`](numpy.maximum_sctype#numpy.maximum_sctype "numpy.maximum_sctype") #### Examples ``` >>> np.mintypecode(['d', 'f', 'S']) 'd' >>> x = np.array([1.1, 2-3.j]) >>> np.mintypecode(x) 'D' ``` ``` >>> np.mintypecode('abceh', default='G') 'G' ``` © 2005–2022 NumPy Developers Licensed under the 3-clause BSD License. <https://numpy.org/doc/1.23/reference/generated/numpy.mintypecode.htmlnumpy.maximum_sctype ===================== numpy.maximum_sctype(*t*)[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/core/numerictypes.py#L132-L179) Return the scalar type of highest precision of the same kind as the input. Parameters **t**dtype or dtype specifier The input data type. 
This can be a [`dtype`](numpy.dtype#numpy.dtype "numpy.dtype") object or an object that is convertible to a [`dtype`](numpy.dtype#numpy.dtype "numpy.dtype"). Returns **out**dtype The highest precision data type of the same kind ([`dtype.kind`](numpy.dtype.kind#numpy.dtype.kind "numpy.dtype.kind")) as `t`. See also [`obj2sctype`](numpy.obj2sctype#numpy.obj2sctype "numpy.obj2sctype"), [`mintypecode`](numpy.mintypecode#numpy.mintypecode "numpy.mintypecode"), [`sctype2char`](numpy.sctype2char#numpy.sctype2char "numpy.sctype2char"), [`dtype`](numpy.dtype#numpy.dtype "numpy.dtype") #### Examples ``` >>> np.maximum_sctype(int) <class 'numpy.int64'> >>> np.maximum_sctype(np.uint8) <class 'numpy.uint64'> >>> np.maximum_sctype(complex) <class 'numpy.complex256'> # may vary ``` ``` >>> np.maximum_sctype(str) <class 'numpy.str_'> ``` ``` >>> np.maximum_sctype('i2') <class 'numpy.int64'> >>> np.maximum_sctype('f4') <class 'numpy.float128'> # may vary ``` <https://numpy.org/doc/1.23/reference/generated/numpy.maximum_sctype.html>

numpy.linalg.cholesky
=====================

linalg.cholesky(*a*)[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/linalg/linalg.py#L679-L771) Cholesky decomposition. Return the Cholesky decomposition, `L * L.H`, of the square matrix `a`, where `L` is lower-triangular and .H is the conjugate transpose operator (which is the ordinary transpose if `a` is real-valued). `a` must be Hermitian (symmetric if real-valued) and positive-definite. No checking is performed to verify whether `a` is Hermitian or not. In addition, only the lower-triangular and diagonal elements of `a` are used. Only `L` is actually returned. Parameters **a**(
…, M, M) array_like Hermitian (symmetric if all elements are real), positive-definite input matrix. Returns **L**(
, M, M) array_like Lower-triangular Cholesky factor of `a`. Returns a matrix object if `a` is a matrix object. Raises LinAlgError If the decomposition fails, for example, if `a` is not positive-definite. See also [`scipy.linalg.cholesky`](https://docs.scipy.org/doc/scipy/reference/generated/scipy.linalg.cholesky.html#scipy.linalg.cholesky "(in SciPy v1.8.1)") Similar function in SciPy. [`scipy.linalg.cholesky_banded`](https://docs.scipy.org/doc/scipy/reference/generated/scipy.linalg.cholesky_banded.html#scipy.linalg.cholesky_banded "(in SciPy v1.8.1)") Cholesky decompose a banded Hermitian positive-definite matrix. [`scipy.linalg.cho_factor`](https://docs.scipy.org/doc/scipy/reference/generated/scipy.linalg.cho_factor.html#scipy.linalg.cho_factor "(in SciPy v1.8.1)") Cholesky decomposition of a matrix, to use in [`scipy.linalg.cho_solve`](https://docs.scipy.org/doc/scipy/reference/generated/scipy.linalg.cho_solve.html#scipy.linalg.cho_solve "(in SciPy v1.8.1)"). #### Notes New in version 1.8.0. Broadcasting rules apply, see the [`numpy.linalg`](../routines.linalg#module-numpy.linalg "numpy.linalg") documentation for details. The Cholesky decomposition is often used as a fast way of solving \[A \mathbf{x} = \mathbf{b}\] (when `A` is both Hermitian/symmetric and positive-definite). First, we solve for \(\mathbf{y}\) in \[L \mathbf{y} = \mathbf{b},\] and then for \(\mathbf{x}\) in \[L.H \mathbf{x} = \mathbf{y}.\] #### Examples ``` >>> A = np.array([[1,-2j],[2j,5]]) >>> A array([[ 1.+0.j, -0.-2.j], [ 0.+2.j, 5.+0.j]]) >>> L = np.linalg.cholesky(A) >>> L array([[1.+0.j, 0.+0.j], [0.+2.j, 1.+0.j]]) >>> np.dot(L, L.T.conj()) # verify that L * L.H = A array([[1.+0.j, 0.-2.j], [0.+2.j, 5.+0.j]]) >>> A = [[1,-2j],[2j,5]] # what happens if A is only array_like? 
>>> np.linalg.cholesky(A) # an ndarray object is returned array([[1.+0.j, 0.+0.j], [0.+2.j, 1.+0.j]]) >>> # But a matrix object is returned if A is a matrix object >>> np.linalg.cholesky(np.matrix(A)) matrix([[ 1.+0.j, 0.+0.j], [ 0.+2.j, 1.+0.j]]) ``` © 2005–2022 NumPy Developers Licensed under the 3-clause BSD License. <https://numpy.org/doc/1.23/reference/generated/numpy.linalg.cholesky.htmlnumpy.linalg.det ================ linalg.det(*a*)[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/linalg/linalg.py#L2100-L2156) Compute the determinant of an array. Parameters **a**(
…, M, M) array_like Input array to compute determinants for. Returns **det**(
) array_like Determinant of `a`. See also [`slogdet`](numpy.linalg.slogdet#numpy.linalg.slogdet "numpy.linalg.slogdet") Another way to represent the determinant, more suitable for large matrices where underflow/overflow may occur. [`scipy.linalg.det`](https://docs.scipy.org/doc/scipy/reference/generated/scipy.linalg.det.html#scipy.linalg.det "(in SciPy v1.8.1)") Similar function in SciPy. #### Notes New in version 1.8.0. Broadcasting rules apply, see the [`numpy.linalg`](../routines.linalg#module-numpy.linalg "numpy.linalg") documentation for details. The determinant is computed via LU factorization using the LAPACK routine `z/dgetrf`. #### Examples The determinant of a 2-D array [[a, b], [c, d]] is ad - bc: ``` >>> a = np.array([[1, 2], [3, 4]]) >>> np.linalg.det(a) -2.0 # may vary ``` Computing determinants for a stack of matrices: ``` >>> a = np.array([ [[1, 2], [3, 4]], [[1, 2], [2, 1]], [[1, 3], [3, 1]] ]) >>> a.shape (3, 2, 2) >>> np.linalg.det(a) array([-2., -3., -8.]) ``` © 2005–2022 NumPy Developers Licensed under the 3-clause BSD License. <https://numpy.org/doc/1.23/reference/generated/numpy.linalg.det.htmlnumpy.linalg.eig ================ linalg.eig(*a*)[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/linalg/linalg.py#L1182-L1328) Compute the eigenvalues and right eigenvectors of a square array. Parameters **a**(
…, M, M) array Matrices for which the eigenvalues and right eigenvectors will be computed Returns **w**(
…, M) array The eigenvalues, each repeated according to its multiplicity. The eigenvalues are not necessarily ordered. The resulting array will be of complex type, unless the imaginary part is zero in which case it will be cast to a real type. When `a` is real the resulting eigenvalues will be real (0 imaginary part) or occur in conjugate pairs **v**(
, M, M) array The normalized (unit “length”) eigenvectors, such that the column `v[:,i]` is the eigenvector corresponding to the eigenvalue `w[i]`. Raises LinAlgError If the eigenvalue computation does not converge. See also [`eigvals`](numpy.linalg.eigvals#numpy.linalg.eigvals "numpy.linalg.eigvals") eigenvalues of a non-symmetric array. [`eigh`](numpy.linalg.eigh#numpy.linalg.eigh "numpy.linalg.eigh") eigenvalues and eigenvectors of a real symmetric or complex Hermitian (conjugate symmetric) array. [`eigvalsh`](numpy.linalg.eigvalsh#numpy.linalg.eigvalsh "numpy.linalg.eigvalsh") eigenvalues of a real symmetric or complex Hermitian (conjugate symmetric) array. [`scipy.linalg.eig`](https://docs.scipy.org/doc/scipy/reference/generated/scipy.linalg.eig.html#scipy.linalg.eig "(in SciPy v1.8.1)") Similar function in SciPy that also solves the generalized eigenvalue problem. [`scipy.linalg.schur`](https://docs.scipy.org/doc/scipy/reference/generated/scipy.linalg.schur.html#scipy.linalg.schur "(in SciPy v1.8.1)") Best choice for unitary and other non-Hermitian normal matrices. #### Notes New in version 1.8.0. Broadcasting rules apply, see the [`numpy.linalg`](../routines.linalg#module-numpy.linalg "numpy.linalg") documentation for details. This is implemented using the `_geev` LAPACK routines which compute the eigenvalues and eigenvectors of general square arrays. The number `w` is an eigenvalue of `a` if there exists a vector `v` such that `a @ v = w * v`. Thus, the arrays `a`, `w`, and `v` satisfy the equations `a @ v[:,i] = w[i] * v[:,i]` for \(i \in \{0,...,M-1\}\). The array `v` of eigenvectors may not be of maximum rank, that is, some of the columns may be linearly dependent, although round-off error may obscure that fact. If the eigenvalues are all different, then theoretically the eigenvectors are linearly independent and `a` can be diagonalized by a similarity transformation using `v`, i.e, `inv(v) @ a @ v` is diagonal. 
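The diagonalization claim above is easy to verify numerically. A small sketch (the 2×2 matrix is an arbitrary choice with distinct eigenvalues):

```python
import numpy as np

a = np.array([[2.0, 1.0],
              [1.0, 3.0]])
w, v = np.linalg.eig(a)

# With distinct eigenvalues, v is invertible and inv(v) @ a @ v is
# (numerically) the diagonal matrix of eigenvalues.
d = np.linalg.inv(v) @ a @ v
assert np.allclose(d, np.diag(w))

# Each column of v satisfies the eigenvector equation a @ v[:, i] = w[i] * v[:, i].
for i in range(2):
    assert np.allclose(a @ v[:, i], w[i] * v[:, i])
```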
For non-Hermitian normal matrices the SciPy function [`scipy.linalg.schur`](https://docs.scipy.org/doc/scipy/reference/generated/scipy.linalg.schur.html#scipy.linalg.schur "(in SciPy v1.8.1)") is preferred because the matrix `v` is guaranteed to be unitary, which is not the case when using [`eig`](#numpy.linalg.eig "numpy.linalg.eig"). The Schur factorization produces an upper triangular matrix rather than a diagonal matrix, but for normal matrices only the diagonal of the upper triangular matrix is needed, the rest is roundoff error. Finally, it is emphasized that `v` consists of the *right* (as in right-hand side) eigenvectors of `a`. A vector `y` satisfying `y.T @ a = z * y.T` for some number `z` is called a *left* eigenvector of `a`, and, in general, the left and right eigenvectors of a matrix are not necessarily the (perhaps conjugate) transposes of each other. #### References <NAME>, *Linear Algebra and Its Applications*, 2nd Ed., Orlando, FL, Academic Press, Inc., 1980, Various pp. #### Examples ``` >>> from numpy import linalg as LA ``` (Almost) trivial example with real e-values and e-vectors. ``` >>> w, v = LA.eig(np.diag((1, 2, 3))) >>> w; v array([1., 2., 3.]) array([[1., 0., 0.], [0., 1., 0.], [0., 0., 1.]]) ``` Real matrix possessing complex e-values and e-vectors; note that the e-values are complex conjugates of each other. ``` >>> w, v = LA.eig(np.array([[1, -1], [1, 1]])) >>> w; v array([1.+1.j, 1.-1.j]) array([[0.70710678+0.j , 0.70710678-0.j ], [0. -0.70710678j, 0. +0.70710678j]]) ``` Complex-valued matrix with real e-values (but complex-valued e-vectors); note that `a.conj().T == a`, i.e., `a` is Hermitian. ``` >>> a = np.array([[1, 1j], [-1j, 1]]) >>> w, v = LA.eig(a) >>> w; v array([2.+0.j, 0.+0.j]) array([[ 0. +0.70710678j, 0.70710678+0.j ], # may vary [ 0.70710678+0.j , -0. +0.70710678j]]) ``` Be careful about round-off error! ``` >>> a = np.array([[1 + 1e-9, 0], [0, 1 - 1e-9]]) >>> # Theor. 
e-values are 1 +/- 1e-9 >>> w, v = LA.eig(a) >>> w; v array([1., 1.]) array([[1., 0.], [0., 1.]]) ``` © 2005–2022 NumPy Developers Licensed under the 3-clause BSD License. <https://numpy.org/doc/1.23/reference/generated/numpy.linalg.eig.htmlnumpy.linalg.eigh ================= linalg.eigh(*a*, *UPLO='L'*)[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/linalg/linalg.py#L1331-L1468) Return the eigenvalues and eigenvectors of a complex Hermitian (conjugate symmetric) or a real symmetric matrix. Returns two objects, a 1-D array containing the eigenvalues of `a`, and a 2-D square array or matrix (depending on the input type) of the corresponding eigenvectors (in columns). Parameters **a**(
…, M, M) array Hermitian or real symmetric matrices whose eigenvalues and eigenvectors are to be computed. **UPLO**{‘L’, ‘U’}, optional Specifies whether the calculation is done with the lower triangular part of `a` (‘L’, default) or the upper triangular part (‘U’). Irrespective of this value only the real parts of the diagonal will be considered in the computation to preserve the notion of a Hermitian matrix. It therefore follows that the imaginary part of the diagonal will always be treated as zero. Returns **w**(
…, M) ndarray The eigenvalues in ascending order, each repeated according to its multiplicity. **v**{(
…, M, M) ndarray, (
, M, M) matrix} The column `v[:, i]` is the normalized eigenvector corresponding to the eigenvalue `w[i]`. Will return a matrix object if `a` is a matrix object. Raises LinAlgError If the eigenvalue computation does not converge. See also [`eigvalsh`](numpy.linalg.eigvalsh#numpy.linalg.eigvalsh "numpy.linalg.eigvalsh") eigenvalues of real symmetric or complex Hermitian (conjugate symmetric) arrays. [`eig`](numpy.linalg.eig#numpy.linalg.eig "numpy.linalg.eig") eigenvalues and right eigenvectors for non-symmetric arrays. [`eigvals`](numpy.linalg.eigvals#numpy.linalg.eigvals "numpy.linalg.eigvals") eigenvalues of non-symmetric arrays. [`scipy.linalg.eigh`](https://docs.scipy.org/doc/scipy/reference/generated/scipy.linalg.eigh.html#scipy.linalg.eigh "(in SciPy v1.8.1)") Similar function in SciPy (but also solves the generalized eigenvalue problem). #### Notes New in version 1.8.0. Broadcasting rules apply, see the [`numpy.linalg`](../routines.linalg#module-numpy.linalg "numpy.linalg") documentation for details. The eigenvalues/eigenvectors are computed using LAPACK routines `_syevd`, `_heevd`. The eigenvalues of real symmetric or complex Hermitian matrices are always real. [[1]](#rc702e98a756a-1) The array `v` of (column) eigenvectors is unitary and `a`, `w`, and `v` satisfy the equations `dot(a, v[:, i]) = w[i] * v[:, i]`. #### References [1](#id1) <NAME>, *Linear Algebra and Its Applications*, 2nd Ed., Orlando, FL, Academic Press, Inc., 1980, pg. 222. #### Examples ``` >>> from numpy import linalg as LA >>> a = np.array([[1, -2j], [2j, 5]]) >>> a array([[ 1.+0.j, -0.-2.j], [ 0.+2.j, 5.+0.j]]) >>> w, v = LA.eigh(a) >>> w; v array([0.17157288, 5.82842712]) array([[-0.92387953+0.j , -0.38268343+0.j ], # may vary [ 0. +0.38268343j, 0. 
-0.92387953j]]) ``` ``` >>> np.dot(a, v[:, 0]) - w[0] * v[:, 0] # verify 1st e-val/vec pair array([5.55111512e-17+0.0000000e+00j, 0.00000000e+00+1.2490009e-16j]) >>> np.dot(a, v[:, 1]) - w[1] * v[:, 1] # verify 2nd e-val/vec pair array([0.+0.j, 0.+0.j]) ``` ``` >>> A = np.matrix(a) # what happens if input is a matrix object >>> A matrix([[ 1.+0.j, -0.-2.j], [ 0.+2.j, 5.+0.j]]) >>> w, v = LA.eigh(A) >>> w; v array([0.17157288, 5.82842712]) matrix([[-0.92387953+0.j , -0.38268343+0.j ], # may vary [ 0. +0.38268343j, 0. -0.92387953j]]) ``` ``` >>> # demonstrate the treatment of the imaginary part of the diagonal >>> a = np.array([[5+2j, 9-2j], [0+2j, 2-1j]]) >>> a array([[5.+2.j, 9.-2.j], [0.+2.j, 2.-1.j]]) >>> # with UPLO='L' this is numerically equivalent to using LA.eig() with: >>> b = np.array([[5.+0.j, 0.-2.j], [0.+2.j, 2.-0.j]]) >>> b array([[5.+0.j, 0.-2.j], [0.+2.j, 2.+0.j]]) >>> wa, va = LA.eigh(a) >>> wb, vb = LA.eig(b) >>> wa; wb array([1., 6.]) array([6.+0.j, 1.+0.j]) >>> va; vb array([[-0.4472136 +0.j , -0.89442719+0.j ], # may vary [ 0. +0.89442719j, 0. -0.4472136j ]]) array([[ 0.89442719+0.j , -0. +0.4472136j], [-0. +0.4472136j, 0.89442719+0.j ]]) ``` © 2005–2022 NumPy Developers Licensed under the 3-clause BSD License. <https://numpy.org/doc/1.23/reference/generated/numpy.linalg.eigh.htmlnumpy.linalg.eigvals ==================== linalg.eigvals(*a*)[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/linalg/linalg.py#L983-L1072) Compute the eigenvalues of a general matrix. Main difference between [`eigvals`](#numpy.linalg.eigvals "numpy.linalg.eigvals") and [`eig`](numpy.linalg.eig#numpy.linalg.eig "numpy.linalg.eig"): the eigenvectors aren’t returned. Parameters **a**(
…, M, M) array_like A complex- or real-valued matrix whose eigenvalues will be computed. Returns **w**(
, M,) ndarray The eigenvalues, each repeated according to its multiplicity. They are not necessarily ordered, nor are they necessarily real for real matrices. Raises LinAlgError If the eigenvalue computation does not converge. See also [`eig`](numpy.linalg.eig#numpy.linalg.eig "numpy.linalg.eig") eigenvalues and right eigenvectors of general arrays [`eigvalsh`](numpy.linalg.eigvalsh#numpy.linalg.eigvalsh "numpy.linalg.eigvalsh") eigenvalues of real symmetric or complex Hermitian (conjugate symmetric) arrays. [`eigh`](numpy.linalg.eigh#numpy.linalg.eigh "numpy.linalg.eigh") eigenvalues and eigenvectors of real symmetric or complex Hermitian (conjugate symmetric) arrays. [`scipy.linalg.eigvals`](https://docs.scipy.org/doc/scipy/reference/generated/scipy.linalg.eigvals.html#scipy.linalg.eigvals "(in SciPy v1.8.1)") Similar function in SciPy. #### Notes New in version 1.8.0. Broadcasting rules apply, see the [`numpy.linalg`](../routines.linalg#module-numpy.linalg "numpy.linalg") documentation for details. This is implemented using the `_geev` LAPACK routines which compute the eigenvalues and eigenvectors of general square arrays. #### Examples Illustration, using the fact that the eigenvalues of a diagonal matrix are its diagonal elements, that multiplying a matrix on the left by an orthogonal matrix, `Q`, and on the right by `Q.T` (the transpose of `Q`), preserves the eigenvalues of the “middle” matrix. 
In other words, if `Q` is orthogonal, then `Q * A * Q.T` has the same eigenvalues as `A`: ``` >>> from numpy import linalg as LA >>> x = np.random.random() >>> Q = np.array([[np.cos(x), -np.sin(x)], [np.sin(x), np.cos(x)]]) >>> LA.norm(Q[0, :]), LA.norm(Q[1, :]), np.dot(Q[0, :],Q[1, :]) (1.0, 1.0, 0.0) ``` Now multiply a diagonal matrix by `Q` on one side and by `Q.T` on the other: ``` >>> D = np.diag((-1,1)) >>> LA.eigvals(D) array([-1., 1.]) >>> A = np.dot(Q, D) >>> A = np.dot(A, Q.T) >>> LA.eigvals(A) array([ 1., -1.]) # random ``` © 2005–2022 NumPy Developers Licensed under the 3-clause BSD License. <https://numpy.org/doc/1.23/reference/generated/numpy.linalg.eigvals.htmlnumpy.linalg.eigvalsh ===================== linalg.eigvalsh(*a*, *UPLO='L'*)[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/linalg/linalg.py#L1079-L1171) Compute the eigenvalues of a complex Hermitian or real symmetric matrix. Main difference from eigh: the eigenvectors are not computed. Parameters **a**(
…, M, M) array_like A complex- or real-valued matrix whose eigenvalues are to be computed. **UPLO**{‘L’, ‘U’}, optional Specifies whether the calculation is done with the lower triangular part of `a` (‘L’, default) or the upper triangular part (‘U’). Irrespective of this value only the real parts of the diagonal will be considered in the computation to preserve the notion of a Hermitian matrix. It therefore follows that the imaginary part of the diagonal will always be treated as zero. Returns **w**(
, M,) ndarray The eigenvalues in ascending order, each repeated according to its multiplicity. Raises LinAlgError If the eigenvalue computation does not converge. See also [`eigh`](numpy.linalg.eigh#numpy.linalg.eigh "numpy.linalg.eigh") eigenvalues and eigenvectors of real symmetric or complex Hermitian (conjugate symmetric) arrays. [`eigvals`](numpy.linalg.eigvals#numpy.linalg.eigvals "numpy.linalg.eigvals") eigenvalues of general real or complex arrays. [`eig`](numpy.linalg.eig#numpy.linalg.eig "numpy.linalg.eig") eigenvalues and right eigenvectors of general real or complex arrays. [`scipy.linalg.eigvalsh`](https://docs.scipy.org/doc/scipy/reference/generated/scipy.linalg.eigvalsh.html#scipy.linalg.eigvalsh "(in SciPy v1.8.1)") Similar function in SciPy. #### Notes New in version 1.8.0. Broadcasting rules apply, see the [`numpy.linalg`](../routines.linalg#module-numpy.linalg "numpy.linalg") documentation for details. The eigenvalues are computed using LAPACK routines `_syevd`, `_heevd`. #### Examples ``` >>> from numpy import linalg as LA >>> a = np.array([[1, -2j], [2j, 5]]) >>> LA.eigvalsh(a) array([ 0.17157288, 5.82842712]) # may vary ``` ``` >>> # demonstrate the treatment of the imaginary part of the diagonal >>> a = np.array([[5+2j, 9-2j], [0+2j, 2-1j]]) >>> a array([[5.+2.j, 9.-2.j], [0.+2.j, 2.-1.j]]) >>> # with UPLO='L' this is numerically equivalent to using LA.eigvals() >>> # with: >>> b = np.array([[5.+0.j, 0.-2.j], [0.+2.j, 2.-0.j]]) >>> b array([[5.+0.j, 0.-2.j], [0.+2.j, 2.+0.j]]) >>> wa = LA.eigvalsh(a) >>> wb = LA.eigvals(b) >>> wa; wb array([1., 6.]) array([6.+0.j, 1.+0.j]) ``` © 2005–2022 NumPy Developers Licensed under the 3-clause BSD License. <https://numpy.org/doc/1.23/reference/generated/numpy.linalg.eigvalsh.htmlnumpy.linalg.inv ================ linalg.inv(*a*)[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/linalg/linalg.py#L483-L553) Compute the (multiplicative) inverse of a matrix. 
Given a square matrix `a`, return the matrix `ainv` satisfying `dot(a, ainv) = dot(ainv, a) = eye(a.shape[0])`. Parameters **a**(
…, M, M) array_like Matrix to be inverted. Returns **ainv**(
, M, M) ndarray or matrix (Multiplicative) inverse of the matrix `a`. Raises LinAlgError If `a` is not square or inversion fails. See also [`scipy.linalg.inv`](https://docs.scipy.org/doc/scipy/reference/generated/scipy.linalg.inv.html#scipy.linalg.inv "(in SciPy v1.8.1)") Similar function in SciPy. #### Notes New in version 1.8.0. Broadcasting rules apply, see the [`numpy.linalg`](../routines.linalg#module-numpy.linalg "numpy.linalg") documentation for details. #### Examples ``` >>> from numpy.linalg import inv >>> a = np.array([[1., 2.], [3., 4.]]) >>> ainv = inv(a) >>> np.allclose(np.dot(a, ainv), np.eye(2)) True >>> np.allclose(np.dot(ainv, a), np.eye(2)) True ``` If a is a matrix object, then the return value is a matrix as well: ``` >>> ainv = inv(np.matrix(a)) >>> ainv matrix([[-2. , 1. ], [ 1.5, -0.5]]) ``` Inverses of several matrices can be computed at once: ``` >>> a = np.array([[[1., 2.], [3., 4.]], [[1, 3], [3, 5]]]) >>> inv(a) array([[[-2. , 1. ], [ 1.5 , -0.5 ]], [[-1.25, 0.75], [ 0.75, -0.25]]]) ``` © 2005–2022 NumPy Developers Licensed under the 3-clause BSD License. <https://numpy.org/doc/1.23/reference/generated/numpy.linalg.inv.htmlnumpy.linalg.lstsq ================== linalg.lstsq(*a*, *b*, *rcond='warn'*)[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/linalg/linalg.py#L2165-L2322) Return the least-squares solution to a linear matrix equation. Computes the vector `x` that approximately solves the equation `a @ x = b`. The equation may be under-, well-, or over-determined (i.e., the number of linearly independent rows of `a` can be less than, equal to, or greater than its number of linearly independent columns). If `a` is square and of full rank, then `x` (but for round-off error) is the “exact” solution of the equation. Else, `x` minimizes the Euclidean 2-norm \(||b - ax||\). If there are multiple minimizing solutions, the one with the smallest 2-norm \(||x||\) is returned. Parameters **a**(M, N) array_like “Coefficient” matrix. 
**b**{(M,), (M, K)} array_like Ordinate or “dependent variable” values. If `b` is two-dimensional, the least-squares solution is calculated for each of the `K` columns of `b`. **rcond**float, optional Cut-off ratio for small singular values of `a`. For the purposes of rank determination, singular values are treated as zero if they are smaller than `rcond` times the largest singular value of `a`. Changed in version 1.14.0: If not set, a FutureWarning is given. The previous default of `-1` will use the machine precision as `rcond` parameter, the new default will use the machine precision times `max(M, N)`. To silence the warning and use the new default, use `rcond=None`, to keep using the old behavior, use `rcond=-1`. Returns **x**{(N,), (N, K)} ndarray Least-squares solution. If `b` is two-dimensional, the solutions are in the `K` columns of `x`. **residuals**{(1,), (K,), (0,)} ndarray Sums of squared residuals: Squared Euclidean 2-norm for each column in `b - a @ x`. If the rank of `a` is < N or M <= N, this is an empty array. If `b` is 1-dimensional, this is a (1,) shape array. Otherwise the shape is (K,). **rank**int Rank of matrix `a`. **s**(min(M, N),) ndarray Singular values of `a`. Raises LinAlgError If computation does not converge. See also [`scipy.linalg.lstsq`](https://docs.scipy.org/doc/scipy/reference/generated/scipy.linalg.lstsq.html#scipy.linalg.lstsq "(in SciPy v1.8.1)") Similar function in SciPy. #### Notes If `b` is a matrix, then all array results are returned as matrices. #### Examples Fit a line, `y = mx + c`, through some noisy data-points: ``` >>> x = np.array([0, 1, 2, 3]) >>> y = np.array([-1, 0.2, 0.9, 2.1]) ``` By examining the coefficients, we see that the line should have a gradient of roughly 1 and cut the y-axis at, more or less, -1. We can rewrite the line equation as `y = Ap`, where `A = [[x 1]]` and `p = [[m], [c]]`. 
Now use [`lstsq`](#numpy.linalg.lstsq "numpy.linalg.lstsq") to solve for `p`: ``` >>> A = np.vstack([x, np.ones(len(x))]).T >>> A array([[ 0., 1.], [ 1., 1.], [ 2., 1.], [ 3., 1.]]) ``` ``` >>> m, c = np.linalg.lstsq(A, y, rcond=None)[0] >>> m, c (1.0, -0.95) # may vary ``` Plot the data along with the fitted line: ``` >>> import matplotlib.pyplot as plt >>> _ = plt.plot(x, y, 'o', label='Original data', markersize=10) >>> _ = plt.plot(x, m*x + c, 'r', label='Fitted line') >>> _ = plt.legend() >>> plt.show() ``` ![Data points with fitted line](../../_images/numpy-linalg-lstsq-1.png) <https://numpy.org/doc/1.23/reference/generated/numpy.linalg.lstsq.html>

numpy.linalg.norm
=================

linalg.norm(*x*, *ord=None*, *axis=None*, *keepdims=False*)[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/linalg/linalg.py#L2357-L2607) Matrix or vector norm. This function is able to return one of eight different matrix norms, or one of an infinite number of vector norms (described below), depending on the value of the `ord` parameter. Parameters **x**array_like Input array. If `axis` is None, `x` must be 1-D or 2-D, unless `ord` is None. If both `axis` and `ord` are None, the 2-norm of `x.ravel` will be returned. **ord**{non-zero int, inf, -inf, ‘fro’, ‘nuc’}, optional Order of the norm (see table under `Notes`). inf means numpy’s [`inf`](../constants#numpy.inf "numpy.inf") object. The default is None. **axis**{None, int, 2-tuple of ints}, optional. If `axis` is an integer, it specifies the axis of `x` along which to compute the vector norms. If `axis` is a 2-tuple, it specifies the axes that hold 2-D matrices, and the matrix norms of these matrices are computed. If `axis` is None then either a vector norm (when `x` is 1-D) or a matrix norm (when `x` is 2-D) is returned. The default is None. New in version 1.8.0.
**keepdims**bool, optional If this is set to True, the axes which are normed over are left in the result as dimensions with size one. With this option the result will broadcast correctly against the original `x`. New in version 1.10.0. Returns **n**float or ndarray Norm of the matrix or vector(s). See also [`scipy.linalg.norm`](https://docs.scipy.org/doc/scipy/reference/generated/scipy.linalg.norm.html#scipy.linalg.norm "(in SciPy v1.8.1)") Similar function in SciPy. #### Notes For values of `ord < 1`, the result is, strictly speaking, not a mathematical ‘norm’, but it may still be useful for various numerical purposes. The following norms can be calculated: | ord | norm for matrices | norm for vectors | | --- | --- | --- | | None | Frobenius norm | 2-norm | | ‘fro’ | Frobenius norm | – | | ‘nuc’ | nuclear norm | – | | inf | max(sum(abs(x), axis=1)) | max(abs(x)) | | -inf | min(sum(abs(x), axis=1)) | min(abs(x)) | | 0 | – | sum(x != 0) | | 1 | max(sum(abs(x), axis=0)) | as below | | -1 | min(sum(abs(x), axis=0)) | as below | | 2 | 2-norm (largest sing. value) | as below | | -2 | smallest singular value | as below | | other | – | sum(abs(x)**ord)**(1./ord) | The Frobenius norm is given by [[1]](#rac1c834adb66-1): \(||A||_F = [\sum_{i,j} abs(a_{i,j})^2]^{1/2}\) The nuclear norm is the sum of the singular values. Both the Frobenius and nuclear norm orders are only defined for matrices and raise a ValueError when `x.ndim != 2`. #### References [1](#id1) <NAME> and <NAME>, *Matrix Computations*, Baltimore, MD, Johns Hopkins University Press, 1985, pg. 
15 #### Examples ``` >>> from numpy import linalg as LA >>> a = np.arange(9) - 4 >>> a array([-4, -3, -2, ..., 2, 3, 4]) >>> b = a.reshape((3, 3)) >>> b array([[-4, -3, -2], [-1, 0, 1], [ 2, 3, 4]]) ``` ``` >>> LA.norm(a) 7.745966692414834 >>> LA.norm(b) 7.745966692414834 >>> LA.norm(b, 'fro') 7.745966692414834 >>> LA.norm(a, np.inf) 4.0 >>> LA.norm(b, np.inf) 9.0 >>> LA.norm(a, -np.inf) 0.0 >>> LA.norm(b, -np.inf) 2.0 ``` ``` >>> LA.norm(a, 1) 20.0 >>> LA.norm(b, 1) 7.0 >>> LA.norm(a, -1) -4.6566128774142013e-010 >>> LA.norm(b, -1) 6.0 >>> LA.norm(a, 2) 7.745966692414834 >>> LA.norm(b, 2) 7.3484692283495345 ``` ``` >>> LA.norm(a, -2) 0.0 >>> LA.norm(b, -2) 1.8570331885190563e-016 # may vary >>> LA.norm(a, 3) 5.8480354764257312 # may vary >>> LA.norm(a, -3) 0.0 ``` Using the `axis` argument to compute vector norms: ``` >>> c = np.array([[ 1, 2, 3], ... [-1, 1, 4]]) >>> LA.norm(c, axis=0) array([ 1.41421356, 2.23606798, 5. ]) >>> LA.norm(c, axis=1) array([ 3.74165739, 4.24264069]) >>> LA.norm(c, ord=1, axis=1) array([ 6., 6.]) ``` Using the `axis` argument to compute matrix norms: ``` >>> m = np.arange(8).reshape(2,2,2) >>> LA.norm(m, axis=(1,2)) array([ 3.74165739, 11.22497216]) >>> LA.norm(m[0, :, :]), LA.norm(m[1, :, :]) (3.7416573867739413, 11.224972160321824) ``` © 2005–2022 NumPy Developers Licensed under the 3-clause BSD License. <https://numpy.org/doc/1.23/reference/generated/numpy.linalg.norm.htmlnumpy.linalg.pinv ================= linalg.pinv(*a*, *rcond=1e-15*, *hermitian=False*)[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/linalg/linalg.py#L1912-L2007) Compute the (Moore-Penrose) pseudo-inverse of a matrix. Calculate the generalized inverse of a matrix using its singular-value decomposition (SVD) and including all *large* singular values. Changed in version 1.14: Can now operate on stacks of matrices Parameters **a**(
…, M, N) array_like Matrix or stack of matrices to be pseudo-inverted. **rcond**(
…) array_like of float Cutoff for small singular values. Singular values less than or equal to `rcond * largest_singular_value` are set to zero. Broadcasts against the stack of matrices. **hermitian**bool, optional If True, `a` is assumed to be Hermitian (symmetric if real-valued), enabling a more efficient method for finding singular values. Defaults to False. New in version 1.17.0. Returns **B**(
, N, M) ndarray The pseudo-inverse of `a`. If `a` is a [`matrix`](numpy.matrix#numpy.matrix "numpy.matrix") instance, then so is `B`. Raises LinAlgError If the SVD computation does not converge. See also [`scipy.linalg.pinv`](https://docs.scipy.org/doc/scipy/reference/generated/scipy.linalg.pinv.html#scipy.linalg.pinv "(in SciPy v1.8.1)") Similar function in SciPy. [`scipy.linalg.pinvh`](https://docs.scipy.org/doc/scipy/reference/generated/scipy.linalg.pinvh.html#scipy.linalg.pinvh "(in SciPy v1.8.1)") Compute the (Moore-Penrose) pseudo-inverse of a Hermitian matrix. #### Notes The pseudo-inverse of a matrix A, denoted \(A^+\), is defined as: “the matrix that ‘solves’ [the least-squares problem] \(Ax = b\),” i.e., if \(\bar{x}\) is said solution, then \(A^+\) is that matrix such that \(\bar{x} = A^+b\). It can be shown that if \(Q_1 \Sigma Q_2^T = A\) is the singular value decomposition of A, then \(A^+ = Q_2 \Sigma^+ Q_1^T\), where \(Q_{1,2}\) are orthogonal matrices, \(\Sigma\) is a diagonal matrix consisting of A’s so-called singular values, (followed, typically, by zeros), and then \(\Sigma^+\) is simply the diagonal matrix consisting of the reciprocals of A’s singular values (again, followed by zeros). [[1]](#rec505eafac9d-1) #### References [1](#id1) <NAME>, *Linear Algebra and Its Applications*, 2nd Ed., Orlando, FL, Academic Press, Inc., 1980, pp. 139-142. #### Examples The following example checks that `a * a+ * a == a` and `a+ * a * a+ == a+`: ``` >>> a = np.random.randn(9, 6) >>> B = np.linalg.pinv(a) >>> np.allclose(a, np.dot(a, np.dot(B, a))) True >>> np.allclose(B, np.dot(B, np.dot(a, B))) True ``` © 2005–2022 NumPy Developers Licensed under the 3-clause BSD License. <https://numpy.org/doc/1.23/reference/generated/numpy.linalg.pinv.htmlnumpy.linalg.solve ================== linalg.solve(*a*, *b*)[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/linalg/linalg.py#L320-L402) Solve a linear matrix equation, or system of linear scalar equations. 
Computes the “exact” solution, `x`, of the well-determined, i.e., full rank, linear matrix equation `ax = b`. Parameters **a**(…, M, M) array_like Coefficient matrix. **b**{(…, M,), (…, M, K)}, array_like Ordinate or “dependent variable” values. Returns **x**{(…, M,), (…
, M, K)} ndarray Solution to the system a x = b. Returned shape is identical to `b`. Raises LinAlgError If `a` is singular or not square. See also [`scipy.linalg.solve`](https://docs.scipy.org/doc/scipy/reference/generated/scipy.linalg.solve.html#scipy.linalg.solve "(in SciPy v1.8.1)") Similar function in SciPy. #### Notes New in version 1.8.0. Broadcasting rules apply, see the [`numpy.linalg`](../routines.linalg#module-numpy.linalg "numpy.linalg") documentation for details. The solutions are computed using LAPACK routine `_gesv`. `a` must be square and of full rank, i.e., all rows (or, equivalently, columns) must be linearly independent; if either is not true, use [`lstsq`](numpy.linalg.lstsq#numpy.linalg.lstsq "numpy.linalg.lstsq") for the least-squares best “solution” of the system/equation. #### References 1 <NAME>, *Linear Algebra and Its Applications*, 2nd Ed., Orlando, FL, Academic Press, Inc., 1980, pg. 22. #### Examples Solve the system of equations `x0 + 2 * x1 = 1` and `3 * x0 + 5 * x1 = 2`:

```
>>> a = np.array([[1, 2], [3, 5]])
>>> b = np.array([1, 2])
>>> x = np.linalg.solve(a, b)
>>> x
array([-1.,  1.])
```

Check that the solution is correct:

```
>>> np.allclose(np.dot(a, x), b)
True
```

<https://numpy.org/doc/1.23/reference/generated/numpy.linalg.solve.html>

numpy.fft.fft
=============

fft.fft(*a*, *n=None*, *axis=-1*, *norm=None*)[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/fft/_pocketfft.py#L122-L216)

Compute the one-dimensional discrete Fourier Transform. This function computes the one-dimensional *n*-point discrete Fourier Transform (DFT) with the efficient Fast Fourier Transform (FFT) algorithm [CT]. Parameters **a**array_like Input array, can be complex. **n**int, optional Length of the transformed axis of the output. If `n` is smaller than the length of the input, the input is cropped. If it is larger, the input is padded with zeros.
If `n` is not given, the length of the input along the axis specified by `axis` is used. **axis**int, optional Axis over which to compute the FFT. If not given, the last axis is used. **norm**{“backward”, “ortho”, “forward”}, optional New in version 1.10.0. Normalization mode (see [`numpy.fft`](../routines.fft#module-numpy.fft "numpy.fft")). Default is “backward”. Indicates which direction of the forward/backward pair of transforms is scaled and with what normalization factor. New in version 1.20.0: The “backward”, “forward” values were added. Returns **out**complex ndarray The truncated or zero-padded input, transformed along the axis indicated by `axis`, or the last one if `axis` is not specified. Raises IndexError If `axis` is not a valid axis of `a`. See also [`numpy.fft`](../routines.fft#module-numpy.fft "numpy.fft") for definition of the DFT and conventions used. [`ifft`](numpy.fft.ifft#numpy.fft.ifft "numpy.fft.ifft") The inverse of [`fft`](../routines.fft#module-numpy.fft "numpy.fft"). [`fft2`](numpy.fft.fft2#numpy.fft.fft2 "numpy.fft.fft2") The two-dimensional FFT. [`fftn`](numpy.fft.fftn#numpy.fft.fftn "numpy.fft.fftn") The *n*-dimensional FFT. [`rfftn`](numpy.fft.rfftn#numpy.fft.rfftn "numpy.fft.rfftn") The *n*-dimensional FFT of real input. [`fftfreq`](numpy.fft.fftfreq#numpy.fft.fftfreq "numpy.fft.fftfreq") Frequency bins for given FFT parameters. #### Notes FFT (Fast Fourier Transform) refers to a way the discrete Fourier Transform (DFT) can be calculated efficiently, by using symmetries in the calculated terms. The symmetry is highest when `n` is a power of 2, and the transform is therefore most efficient for these sizes. The DFT is defined, with the conventions used in this implementation, in the documentation for the [`numpy.fft`](../routines.fft#module-numpy.fft "numpy.fft") module. #### References CT Cooley, <NAME>., and <NAME>, 1965, “An algorithm for the machine calculation of complex Fourier series,” *Math. Comput.* 19: 297-301. 
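The three `norm` modes described above differ only in which side of the forward/backward pair carries the scale factor. A minimal sketch verifying the forward-transform scaling implied by each mode:

```python
import numpy as np

x = np.random.default_rng(0).standard_normal(8)
n = len(x)

f_backward = np.fft.fft(x, norm="backward")  # default: unscaled forward transform
f_ortho = np.fft.fft(x, norm="ortho")        # forward transform scaled by 1/sqrt(n)
f_forward = np.fft.fft(x, norm="forward")    # forward transform scaled by 1/n

assert np.allclose(f_ortho, f_backward / np.sqrt(n))
assert np.allclose(f_forward, f_backward / n)
```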
#### Examples

```
>>> np.fft.fft(np.exp(2j * np.pi * np.arange(8) / 8))
array([-2.33486982e-16+1.14423775e-17j,  8.00000000e+00-1.25557246e-15j,
        2.33486982e-16+2.33486982e-16j,  0.00000000e+00+1.22464680e-16j,
       -1.14423775e-17+2.33486982e-16j,  0.00000000e+00+5.20784380e-16j,
        1.14423775e-17+1.14423775e-17j,  0.00000000e+00+1.22464680e-16j])
```

In this example, real input has an FFT which is Hermitian, i.e., symmetric in the real part and anti-symmetric in the imaginary part, as described in the [`numpy.fft`](../routines.fft#module-numpy.fft "numpy.fft") documentation:

```
>>> import matplotlib.pyplot as plt
>>> t = np.arange(256)
>>> sp = np.fft.fft(np.sin(t))
>>> freq = np.fft.fftfreq(t.shape[-1])
>>> plt.plot(freq, sp.real, freq, sp.imag)
[<matplotlib.lines.Line2D object at 0x...>, <matplotlib.lines.Line2D object at 0x...>]
>>> plt.show()
```

<https://numpy.org/doc/1.23/reference/generated/numpy.fft.fft.html>

numpy.fft.fft2
==============

fft.fft2(*a*, *s=None*, *axes=(-2, -1)*, *norm=None*)[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/fft/_pocketfft.py#L921-L1014)

Compute the 2-dimensional discrete Fourier Transform. This function computes the *n*-dimensional discrete Fourier Transform over any axes in an *M*-dimensional array by means of the Fast Fourier Transform (FFT). By default, the transform is computed over the last two axes of the input array, i.e., a 2-dimensional FFT. Parameters **a**array_like Input array, can be complex **s**sequence of ints, optional Shape (length of each transformed axis) of the output (`s[0]` refers to axis 0, `s[1]` to axis 1, etc.). This corresponds to `n` for `fft(x, n)`. Along each axis, if the given shape is smaller than that of the input, the input is cropped. If it is larger, the input is padded with zeros. If `s` is not given, the shape of the input along the axes specified by `axes` is used.
**axes**sequence of ints, optional Axes over which to compute the FFT. If not given, the last two axes are used. A repeated index in `axes` means the transform over that axis is performed multiple times. A one-element sequence means that a one-dimensional FFT is performed. **norm**{“backward”, “ortho”, “forward”}, optional New in version 1.10.0. Normalization mode (see [`numpy.fft`](../routines.fft#module-numpy.fft "numpy.fft")). Default is “backward”. Indicates which direction of the forward/backward pair of transforms is scaled and with what normalization factor. New in version 1.20.0: The “backward”, “forward” values were added. Returns **out**complex ndarray The truncated or zero-padded input, transformed along the axes indicated by `axes`, or the last two axes if `axes` is not given. Raises ValueError If `s` and `axes` have different length, or `axes` not given and `len(s) != 2`. IndexError If an element of `axes` is larger than the number of axes of `a`. See also [`numpy.fft`](../routines.fft#module-numpy.fft "numpy.fft") Overall view of discrete Fourier transforms, with definitions and conventions used. [`ifft2`](numpy.fft.ifft2#numpy.fft.ifft2 "numpy.fft.ifft2") The inverse two-dimensional FFT. [`fft`](../routines.fft#module-numpy.fft "numpy.fft") The one-dimensional FFT. [`fftn`](numpy.fft.fftn#numpy.fft.fftn "numpy.fft.fftn") The *n*-dimensional FFT. [`fftshift`](numpy.fft.fftshift#numpy.fft.fftshift "numpy.fft.fftshift") Shifts zero-frequency terms to the center of the array. For two-dimensional input, swaps first and third quadrants, and second and fourth quadrants. #### Notes [`fft2`](#numpy.fft.fft2 "numpy.fft.fft2") is just [`fftn`](numpy.fft.fftn#numpy.fft.fftn "numpy.fft.fftn") with a different default for `axes`.
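Since `fft2` is just `fftn` with `axes=(-2, -1)` as the default, the equivalence can be checked directly; a minimal sketch:

```python
import numpy as np

a = np.arange(24.0).reshape(2, 3, 4)

# fft2 transforms the last two axes by default, exactly like
# fftn called with axes=(-2, -1).
assert np.allclose(np.fft.fft2(a), np.fft.fftn(a, axes=(-2, -1)))
```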
The output, analogously to [`fft`](../routines.fft#module-numpy.fft "numpy.fft"), contains the term for zero frequency in the low-order corner of the transformed axes, the positive frequency terms in the first half of these axes, the term for the Nyquist frequency in the middle of the axes and the negative frequency terms in the second half of the axes, in order of decreasingly negative frequency. See [`fftn`](numpy.fft.fftn#numpy.fft.fftn "numpy.fft.fftn") for details and a plotting example, and [`numpy.fft`](../routines.fft#module-numpy.fft "numpy.fft") for definitions and conventions used. #### Examples

```
>>> a = np.mgrid[:5, :5][0]
>>> np.fft.fft2(a)
array([[ 50.  +0.j        ,   0.  +0.j        ,   0.  +0.j        , # may vary
          0.  +0.j        ,   0.  +0.j        ],
       [-12.5+17.20477401j,   0.  +0.j        ,   0.  +0.j        ,
          0.  +0.j        ,   0.  +0.j        ],
       [-12.5 +4.0614962j ,   0.  +0.j        ,   0.  +0.j        ,
          0.  +0.j        ,   0.  +0.j        ],
       [-12.5 -4.0614962j ,   0.  +0.j        ,   0.  +0.j        ,
          0.  +0.j        ,   0.  +0.j        ],
       [-12.5-17.20477401j,   0.  +0.j        ,   0.  +0.j        ,
          0.  +0.j        ,   0.  +0.j        ]])
```

<https://numpy.org/doc/1.23/reference/generated/numpy.fft.fft2.html>

numpy.fft.fftn
==============

fft.fftn(*a*, *s=None*, *axes=None*, *norm=None*)[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/fft/_pocketfft.py#L715-L815)

Compute the N-dimensional discrete Fourier Transform. This function computes the *N*-dimensional discrete Fourier Transform over any number of axes in an *M*-dimensional array by means of the Fast Fourier Transform (FFT). Parameters **a**array_like Input array, can be complex. **s**sequence of ints, optional Shape (length of each transformed axis) of the output (`s[0]` refers to axis 0, `s[1]` to axis 1, etc.). This corresponds to `n` for `fft(x, n)`. Along any axis, if the given shape is smaller than that of the input, the input is cropped. If it is larger, the input is padded with zeros. If `s` is not given, the shape of the input along the axes specified by `axes` is used.
**axes**sequence of ints, optional Axes over which to compute the FFT. If not given, the last `len(s)` axes are used, or all axes if `s` is also not specified. Repeated indices in `axes` mean that the transform over that axis is performed multiple times. **norm**{“backward”, “ortho”, “forward”}, optional New in version 1.10.0. Normalization mode (see [`numpy.fft`](../routines.fft#module-numpy.fft "numpy.fft")). Default is “backward”. Indicates which direction of the forward/backward pair of transforms is scaled and with what normalization factor. New in version 1.20.0: The “backward”, “forward” values were added. Returns **out**complex ndarray The truncated or zero-padded input, transformed along the axes indicated by `axes`, or by a combination of `s` and `a`, as explained in the parameters section above. Raises ValueError If `s` and `axes` have different length. IndexError If an element of `axes` is larger than the number of axes of `a`. See also [`numpy.fft`](../routines.fft#module-numpy.fft "numpy.fft") Overall view of discrete Fourier transforms, with definitions and conventions used. [`ifftn`](numpy.fft.ifftn#numpy.fft.ifftn "numpy.fft.ifftn") The inverse of [`fftn`](#numpy.fft.fftn "numpy.fft.fftn"), the inverse *n*-dimensional FFT. [`fft`](../routines.fft#module-numpy.fft "numpy.fft") The one-dimensional FFT, with definitions and conventions used. [`rfftn`](numpy.fft.rfftn#numpy.fft.rfftn "numpy.fft.rfftn") The *n*-dimensional FFT of real input. [`fft2`](numpy.fft.fft2#numpy.fft.fft2 "numpy.fft.fft2") The two-dimensional FFT.
[`fftshift`](numpy.fft.fftshift#numpy.fft.fftshift "numpy.fft.fftshift") Shifts zero-frequency terms to the center of the array. #### Notes The output, analogously to [`fft`](../routines.fft#module-numpy.fft "numpy.fft"), contains the term for zero frequency in the low-order corner of all axes, the positive frequency terms in the first half of all axes, the term for the Nyquist frequency in the middle of all axes and the negative frequency terms in the second half of all axes, in order of decreasingly negative frequency. See [`numpy.fft`](../routines.fft#module-numpy.fft "numpy.fft") for details, definitions and conventions used. #### Examples

```
>>> a = np.mgrid[:3, :3, :3][0]
>>> np.fft.fftn(a, axes=(1, 2))
array([[[ 0.+0.j,  0.+0.j,  0.+0.j], # may vary
        [ 0.+0.j,  0.+0.j,  0.+0.j],
        [ 0.+0.j,  0.+0.j,  0.+0.j]],
       [[ 9.+0.j,  0.+0.j,  0.+0.j],
        [ 0.+0.j,  0.+0.j,  0.+0.j],
        [ 0.+0.j,  0.+0.j,  0.+0.j]],
       [[18.+0.j,  0.+0.j,  0.+0.j],
        [ 0.+0.j,  0.+0.j,  0.+0.j],
        [ 0.+0.j,  0.+0.j,  0.+0.j]]])
>>> np.fft.fftn(a, (2, 2), axes=(0, 1))
array([[[ 2.+0.j,  2.+0.j,  2.+0.j], # may vary
        [ 0.+0.j,  0.+0.j,  0.+0.j]],
       [[-2.+0.j, -2.+0.j, -2.+0.j],
        [ 0.+0.j,  0.+0.j,  0.+0.j]]])
```

```
>>> import matplotlib.pyplot as plt
>>> [X, Y] = np.meshgrid(2 * np.pi * np.arange(200) / 12,
...                      2 * np.pi * np.arange(200) / 34)
>>> S = np.sin(X) + np.cos(Y) + np.random.uniform(0, 1, X.shape)
>>> FS = np.fft.fftn(S)
>>> plt.imshow(np.log(np.abs(np.fft.fftshift(FS))**2))
<matplotlib.image.AxesImage object at 0x...>
>>> plt.show()
```

<https://numpy.org/doc/1.23/reference/generated/numpy.fft.fftn.html>

numpy.fft.ifft
==============

fft.ifft(*a*, *n=None*, *axis=-1*, *norm=None*)[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/fft/_pocketfft.py#L219-L317)

Compute the one-dimensional inverse discrete Fourier Transform.
This function computes the inverse of the one-dimensional *n*-point discrete Fourier transform computed by [`fft`](../routines.fft#module-numpy.fft "numpy.fft"). In other words, `ifft(fft(a)) == a` to within numerical accuracy. For a general description of the algorithm and definitions, see [`numpy.fft`](../routines.fft#module-numpy.fft "numpy.fft"). The input should be ordered in the same way as is returned by [`fft`](../routines.fft#module-numpy.fft "numpy.fft"), i.e., * `a[0]` should contain the zero frequency term, * `a[1:n//2]` should contain the positive-frequency terms, * `a[n//2 + 1:]` should contain the negative-frequency terms, in increasing order starting from the most negative frequency. For an even number of input points, `A[n//2]` represents the sum of the values at the positive and negative Nyquist frequencies, as the two are aliased together. See [`numpy.fft`](../routines.fft#module-numpy.fft "numpy.fft") for details. Parameters **a**array_like Input array, can be complex. **n**int, optional Length of the transformed axis of the output. If `n` is smaller than the length of the input, the input is cropped. If it is larger, the input is padded with zeros. If `n` is not given, the length of the input along the axis specified by `axis` is used. See notes about padding issues. **axis**int, optional Axis over which to compute the inverse DFT. If not given, the last axis is used. **norm**{“backward”, “ortho”, “forward”}, optional New in version 1.10.0. Normalization mode (see [`numpy.fft`](../routines.fft#module-numpy.fft "numpy.fft")). Default is “backward”. Indicates which direction of the forward/backward pair of transforms is scaled and with what normalization factor. New in version 1.20.0: The “backward”, “forward” values were added. Returns **out**complex ndarray The truncated or zero-padded input, transformed along the axis indicated by `axis`, or the last one if `axis` is not specified. Raises IndexError If `axis` is not a valid axis of `a`. 
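The round-trip identity `ifft(fft(a)) == a` stated above can be verified to numerical accuracy; a minimal sketch:

```python
import numpy as np

a = np.random.default_rng(1).standard_normal(16)

# ifft returns a complex array; for real input, the imaginary parts
# that remain after the round trip are only numerical noise.
assert np.allclose(np.fft.ifft(np.fft.fft(a)), a)
```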
See also [`numpy.fft`](../routines.fft#module-numpy.fft "numpy.fft") An introduction, with definitions and general explanations. [`fft`](../routines.fft#module-numpy.fft "numpy.fft") The one-dimensional (forward) FFT, of which [`ifft`](#numpy.fft.ifft "numpy.fft.ifft") is the inverse. [`ifft2`](numpy.fft.ifft2#numpy.fft.ifft2 "numpy.fft.ifft2") The two-dimensional inverse FFT. [`ifftn`](numpy.fft.ifftn#numpy.fft.ifftn "numpy.fft.ifftn") The n-dimensional inverse FFT. #### Notes If the input parameter `n` is larger than the size of the input, the input is padded by appending zeros at the end. Even though this is the common approach, it might lead to surprising results. If a different padding is desired, it must be performed before calling [`ifft`](#numpy.fft.ifft "numpy.fft.ifft"). #### Examples

```
>>> np.fft.ifft([0, 4, 0, 0])
array([ 1.+0.j,  0.+1.j, -1.+0.j,  0.-1.j]) # may vary
```

Create and plot a band-limited signal with random phases:

```
>>> import matplotlib.pyplot as plt
>>> t = np.arange(400)
>>> n = np.zeros((400,), dtype=complex)
>>> n[40:60] = np.exp(1j*np.random.uniform(0, 2*np.pi, (20,)))
>>> s = np.fft.ifft(n)
>>> plt.plot(t, s.real, label='real')
[<matplotlib.lines.Line2D object at ...>]
>>> plt.plot(t, s.imag, '--', label='imaginary')
[<matplotlib.lines.Line2D object at ...>]
>>> plt.legend()
<matplotlib.legend.Legend object at ...>
>>> plt.show()
```

<https://numpy.org/doc/1.23/reference/generated/numpy.fft.ifft.html>

numpy.fft.ifft2
===============

fft.ifft2(*a*, *s=None*, *axes=(-2, -1)*, *norm=None*)[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/fft/_pocketfft.py#L1017-L1107)

Compute the 2-dimensional inverse discrete Fourier Transform. This function computes the inverse of the 2-dimensional discrete Fourier Transform over any number of axes in an M-dimensional array by means of the Fast Fourier Transform (FFT).
In other words, `ifft2(fft2(a)) == a` to within numerical accuracy. By default, the inverse transform is computed over the last two axes of the input array. The input, analogously to [`ifft`](numpy.fft.ifft#numpy.fft.ifft "numpy.fft.ifft"), should be ordered in the same way as is returned by [`fft2`](numpy.fft.fft2#numpy.fft.fft2 "numpy.fft.fft2"), i.e. it should have the term for zero frequency in the low-order corner of the two axes, the positive frequency terms in the first half of these axes, the term for the Nyquist frequency in the middle of the axes and the negative frequency terms in the second half of both axes, in order of decreasingly negative frequency. Parameters **a**array_like Input array, can be complex. **s**sequence of ints, optional Shape (length of each axis) of the output (`s[0]` refers to axis 0, `s[1]` to axis 1, etc.). This corresponds to `n` for `ifft(x, n)`. Along each axis, if the given shape is smaller than that of the input, the input is cropped. If it is larger, the input is padded with zeros. If `s` is not given, the shape of the input along the axes specified by `axes` is used. See notes for issue on [`ifft`](numpy.fft.ifft#numpy.fft.ifft "numpy.fft.ifft") zero padding. **axes**sequence of ints, optional Axes over which to compute the FFT. If not given, the last two axes are used. A repeated index in `axes` means the transform over that axis is performed multiple times. A one-element sequence means that a one-dimensional FFT is performed. **norm**{“backward”, “ortho”, “forward”}, optional New in version 1.10.0. Normalization mode (see [`numpy.fft`](../routines.fft#module-numpy.fft "numpy.fft")). Default is “backward”. Indicates which direction of the forward/backward pair of transforms is scaled and with what normalization factor. New in version 1.20.0: The “backward”, “forward” values were added.
Returns **out**complex ndarray The truncated or zero-padded input, transformed along the axes indicated by `axes`, or the last two axes if `axes` is not given. Raises ValueError If `s` and `axes` have different length, or `axes` not given and `len(s) != 2`. IndexError If an element of `axes` is larger than the number of axes of `a`. See also [`numpy.fft`](../routines.fft#module-numpy.fft "numpy.fft") Overall view of discrete Fourier transforms, with definitions and conventions used. [`fft2`](numpy.fft.fft2#numpy.fft.fft2 "numpy.fft.fft2") The forward 2-dimensional FFT, of which [`ifft2`](#numpy.fft.ifft2 "numpy.fft.ifft2") is the inverse. [`ifftn`](numpy.fft.ifftn#numpy.fft.ifftn "numpy.fft.ifftn") The inverse of the *n*-dimensional FFT. [`fft`](../routines.fft#module-numpy.fft "numpy.fft") The one-dimensional FFT. [`ifft`](numpy.fft.ifft#numpy.fft.ifft "numpy.fft.ifft") The one-dimensional inverse FFT. #### Notes [`ifft2`](#numpy.fft.ifft2 "numpy.fft.ifft2") is just [`ifftn`](numpy.fft.ifftn#numpy.fft.ifftn "numpy.fft.ifftn") with a different default for `axes`. See [`ifftn`](numpy.fft.ifftn#numpy.fft.ifftn "numpy.fft.ifftn") for details and a plotting example, and [`numpy.fft`](../routines.fft#module-numpy.fft "numpy.fft") for definition and conventions used. Zero-padding, analogously with [`ifft`](numpy.fft.ifft#numpy.fft.ifft "numpy.fft.ifft"), is performed by appending zeros to the input along the specified dimension. Although this is the common approach, it might lead to surprising results. If another form of zero padding is desired, it must be performed before [`ifft2`](#numpy.fft.ifft2 "numpy.fft.ifft2") is called. #### Examples

```
>>> a = 4 * np.eye(4)
>>> np.fft.ifft2(a)
array([[1.+0.j, 0.+0.j, 0.+0.j, 0.+0.j], # may vary
       [0.+0.j, 0.+0.j, 0.+0.j, 1.+0.j],
       [0.+0.j, 0.+0.j, 1.+0.j, 0.+0.j],
       [0.+0.j, 1.+0.j, 0.+0.j, 0.+0.j]])
```

<https://numpy.org/doc/1.23/reference/generated/numpy.fft.ifft2.html>

numpy.fft.ifftn
===============

fft.ifftn(*a*, *s=None*, *axes=None*, *norm=None*)[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/fft/_pocketfft.py#L818-L918)

Compute the N-dimensional inverse discrete Fourier Transform. This function computes the inverse of the N-dimensional discrete Fourier Transform over any number of axes in an M-dimensional array by means of the Fast Fourier Transform (FFT). In other words, `ifftn(fftn(a)) == a` to within numerical accuracy. For a description of the definitions and conventions used, see [`numpy.fft`](../routines.fft#module-numpy.fft "numpy.fft"). The input, analogously to [`ifft`](numpy.fft.ifft#numpy.fft.ifft "numpy.fft.ifft"), should be ordered in the same way as is returned by [`fftn`](numpy.fft.fftn#numpy.fft.fftn "numpy.fft.fftn"), i.e. it should have the term for zero frequency in all axes in the low-order corner, the positive frequency terms in the first half of all axes, the term for the Nyquist frequency in the middle of all axes and the negative frequency terms in the second half of all axes, in order of decreasingly negative frequency. Parameters **a**array_like Input array, can be complex. **s**sequence of ints, optional Shape (length of each transformed axis) of the output (`s[0]` refers to axis 0, `s[1]` to axis 1, etc.). This corresponds to `n` for `ifft(x, n)`. Along any axis, if the given shape is smaller than that of the input, the input is cropped. If it is larger, the input is padded with zeros. If `s` is not given, the shape of the input along the axes specified by `axes` is used. See notes for issue on [`ifft`](numpy.fft.ifft#numpy.fft.ifft "numpy.fft.ifft") zero padding. **axes**sequence of ints, optional Axes over which to compute the IFFT. If not given, the last `len(s)` axes are used, or all axes if `s` is also not specified.
Repeated indices in `axes` mean that the inverse transform over that axis is performed multiple times. **norm**{“backward”, “ortho”, “forward”}, optional New in version 1.10.0. Normalization mode (see [`numpy.fft`](../routines.fft#module-numpy.fft "numpy.fft")). Default is “backward”. Indicates which direction of the forward/backward pair of transforms is scaled and with what normalization factor. New in version 1.20.0: The “backward”, “forward” values were added. Returns **out**complex ndarray The truncated or zero-padded input, transformed along the axes indicated by `axes`, or by a combination of `s` and `a`, as explained in the parameters section above. Raises ValueError If `s` and `axes` have different length. IndexError If an element of `axes` is larger than the number of axes of `a`. See also [`numpy.fft`](../routines.fft#module-numpy.fft "numpy.fft") Overall view of discrete Fourier transforms, with definitions and conventions used. [`fftn`](numpy.fft.fftn#numpy.fft.fftn "numpy.fft.fftn") The forward *n*-dimensional FFT, of which [`ifftn`](#numpy.fft.ifftn "numpy.fft.ifftn") is the inverse. [`ifft`](numpy.fft.ifft#numpy.fft.ifft "numpy.fft.ifft") The one-dimensional inverse FFT. [`ifft2`](numpy.fft.ifft2#numpy.fft.ifft2 "numpy.fft.ifft2") The two-dimensional inverse FFT. [`ifftshift`](numpy.fft.ifftshift#numpy.fft.ifftshift "numpy.fft.ifftshift") Undoes [`fftshift`](numpy.fft.fftshift#numpy.fft.fftshift "numpy.fft.fftshift"), shifts zero-frequency terms to beginning of array. #### Notes See [`numpy.fft`](../routines.fft#module-numpy.fft "numpy.fft") for definitions and conventions used. Zero-padding, analogously with [`ifft`](numpy.fft.ifft#numpy.fft.ifft "numpy.fft.ifft"), is performed by appending zeros to the input along the specified dimension. Although this is the common approach, it might lead to surprising results. If another form of zero padding is desired, it must be performed before [`ifftn`](#numpy.fft.ifftn "numpy.fft.ifftn") is called.
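To illustrate the zero-padding caveat above in one dimension: passing a larger `n` appends zeros at the *end* of the input, which shifts where the negative-frequency terms land, so it is generally not a spectral interpolation. A minimal sketch comparing naive end-padding with a hand-rolled symmetric padding:

```python
import numpy as np

spec = np.fft.fft(np.sin(0.5 * np.arange(8)))

# Naive padding: zeros are appended at the end of the spectrum.
padded_naive = np.fft.ifft(spec, n=16)

# Symmetric zero-insertion around the zero-frequency bin, done by hand
# before calling ifft, as the Notes recommend.
padded_sym = np.fft.ifft(np.fft.ifftshift(np.pad(np.fft.fftshift(spec), 4)))

assert padded_naive.shape == padded_sym.shape == (16,)
assert not np.allclose(padded_naive, padded_sym)  # the two paddings differ
```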
#### Examples

```
>>> a = np.eye(4)
>>> np.fft.ifftn(np.fft.fftn(a, axes=(0,)), axes=(1,))
array([[1.+0.j, 0.+0.j, 0.+0.j, 0.+0.j], # may vary
       [0.+0.j, 1.+0.j, 0.+0.j, 0.+0.j],
       [0.+0.j, 0.+0.j, 1.+0.j, 0.+0.j],
       [0.+0.j, 0.+0.j, 0.+0.j, 1.+0.j]])
```

Create and plot an image with band-limited frequency content:

```
>>> import matplotlib.pyplot as plt
>>> n = np.zeros((200,200), dtype=complex)
>>> n[60:80, 20:40] = np.exp(1j*np.random.uniform(0, 2*np.pi, (20, 20)))
>>> im = np.fft.ifftn(n).real
>>> plt.imshow(im)
<matplotlib.image.AxesImage object at 0x...>
>>> plt.show()
```

<https://numpy.org/doc/1.23/reference/generated/numpy.fft.ifftn.html>

numpy.i0
========

numpy.i0(*x*)[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/lib/function_base.py#L3366-L3423)

Modified Bessel function of the first kind, order 0. Usually denoted \(I_0\). Parameters **x**array_like of float Argument of the Bessel function. Returns **out**ndarray, shape = x.shape, dtype = float The modified Bessel function evaluated at each of the elements of `x`. See also [`scipy.special.i0`](https://docs.scipy.org/doc/scipy/reference/generated/scipy.special.i0.html#scipy.special.i0 "(in SciPy v1.8.1)"), [`scipy.special.iv`](https://docs.scipy.org/doc/scipy/reference/generated/scipy.special.iv.html#scipy.special.iv "(in SciPy v1.8.1)"), [`scipy.special.ive`](https://docs.scipy.org/doc/scipy/reference/generated/scipy.special.ive.html#scipy.special.ive "(in SciPy v1.8.1)") #### Notes The scipy implementation is recommended over this function: it is a proper ufunc written in C, and more than an order of magnitude faster.
We use the algorithm published by Clenshaw [[1]](#rfd38a370b188-1) and referenced by Abramowitz and Stegun [[2]](#rfd38a370b188-2), for which the function domain is partitioned into the two intervals [0,8] and (8,inf), and Chebyshev polynomial expansions are employed in each interval. Relative error on the domain [0,30] using IEEE arithmetic is documented [[3]](#rfd38a370b188-3) as having a peak of 5.8e-16 with an rms of 1.4e-16 (n = 30000). #### References [1](#id1) <NAME>, “Chebyshev series for mathematical functions”, in *National Physical Laboratory Mathematical Tables*, vol. 5, London: Her Majesty’s Stationery Office, 1962. [2](#id2) <NAME> and <NAME>, *Handbook of Mathematical Functions*, 10th printing, New York: Dover, 1964, pp. 379. <https://personal.math.ubc.ca/~cbm/aands/page_379.htm> [3](#id3) <https://metacpan.org/pod/distribution/Math-Cephes/lib/Math/Cephes.pod#i0:-Modified-Bessel-function-of-order-zero> #### Examples

```
>>> np.i0(0.)
array(1.0)
>>> np.i0([0, 1, 2, 3])
array([1.        , 1.26606588, 2.2795853 , 4.88079259])
```

<https://numpy.org/doc/1.23/reference/generated/numpy.i0.html>

numpy.emath.sqrt
================

emath.sqrt(*x*)[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/lib/scimath.py#L198-L248)

Compute the square root of x. For negative input elements, a complex value is returned (unlike [`numpy.sqrt`](numpy.sqrt#numpy.sqrt "numpy.sqrt") which returns NaN). Parameters **x**array_like The input value(s). Returns **out**ndarray or scalar The square root of `x`. If `x` was a scalar, so is `out`, otherwise an array is returned.
See also [`numpy.sqrt`](numpy.sqrt#numpy.sqrt "numpy.sqrt") #### Examples For real, non-negative inputs this works just like [`numpy.sqrt`](numpy.sqrt#numpy.sqrt "numpy.sqrt"):

```
>>> np.emath.sqrt(1)
1.0
>>> np.emath.sqrt([1, 4])
array([1.,  2.])
```

But it automatically handles negative inputs:

```
>>> np.emath.sqrt(-1)
1j
>>> np.emath.sqrt([-1,4])
array([0.+1.j, 2.+0.j])
```

Different results are expected because floating-point 0.0 and -0.0 are distinct. For more control, explicitly use complex() as follows:

```
>>> np.emath.sqrt(complex(-4.0, 0.0))
2j
>>> np.emath.sqrt(complex(-4.0, -0.0))
-2j
```

<https://numpy.org/doc/1.23/reference/generated/numpy.emath.sqrt.html>

numpy.emath.log
===============

emath.log(*x*)[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/lib/scimath.py#L251-L296)

Compute the natural logarithm of `x`. Return the “principal value” (for a description of this, see [`numpy.log`](numpy.log#numpy.log "numpy.log")) of \(log_e(x)\). For real `x > 0`, this is a real number (`log(0)` returns `-inf` and `log(np.inf)` returns `inf`). Otherwise, the complex principal value is returned. Parameters **x**array_like The value(s) whose log is (are) required. Returns **out**ndarray or scalar The log of the `x` value(s). If `x` was a scalar, so is `out`, otherwise an array is returned. See also [`numpy.log`](numpy.log#numpy.log "numpy.log") #### Notes For a log() that returns `NAN` when real `x < 0`, use [`numpy.log`](numpy.log#numpy.log "numpy.log") (note, however, that otherwise [`numpy.log`](numpy.log#numpy.log "numpy.log") and this [`log`](numpy.log#numpy.log "numpy.log") are identical, i.e., both return `-inf` for `x = 0`, `inf` for `x = inf`, and, notably, the complex principal value if `x.imag != 0`).
#### Examples ``` >>> np.emath.log(np.exp(1)) 1.0 ``` Negative arguments are handled “correctly” (recall that `exp(log(x)) == x` does *not* hold for real `x < 0`): ``` >>> np.emath.log(-np.exp(1)) == (1 + np.pi * 1j) True ``` © 2005–2022 NumPy Developers Licensed under the 3-clause BSD License. <https://numpy.org/doc/1.23/reference/generated/numpy.emath.log.htmlnumpy.emath.log2 ================ emath.log2(*x*)[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/lib/scimath.py#L389-L434) Compute the logarithm base 2 of `x`. Return the “principal value” (for a description of this, see [`numpy.log2`](numpy.log2#numpy.log2 "numpy.log2")) of \(log_2(x)\). For real `x > 0`, this is a real number (`log2(0)` returns `-inf` and `log2(np.inf)` returns `inf`). Otherwise, the complex principle value is returned. Parameters **x**array_like The value(s) whose log base 2 is (are) required. Returns **out**ndarray or scalar The log base 2 of the `x` value(s). If `x` was a scalar, so is `out`, otherwise an array is returned. See also [`numpy.log2`](numpy.log2#numpy.log2 "numpy.log2") #### Notes For a log2() that returns `NAN` when real `x < 0`, use [`numpy.log2`](numpy.log2#numpy.log2 "numpy.log2") (note, however, that otherwise [`numpy.log2`](numpy.log2#numpy.log2 "numpy.log2") and this [`log2`](numpy.log2#numpy.log2 "numpy.log2") are identical, i.e., both return `-inf` for `x = 0`, `inf` for `x = inf`, and, notably, the complex principle value if `x.imag != 0`). #### Examples We set the printing precision so the example can be auto-tested: ``` >>> np.set_printoptions(precision=4) ``` ``` >>> np.emath.log2(8) 3.0 >>> np.emath.log2([-4, -8, 8]) array([2.+4.5324j, 3.+4.5324j, 3.+0.j ]) ``` © 2005–2022 NumPy Developers Licensed under the 3-clause BSD License. 
<https://numpy.org/doc/1.23/reference/generated/numpy.emath.log2.html>

numpy.emath.logn
================

emath.logn(*n*, *x*)[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/lib/scimath.py#L353-L386)

Take log base n of x.

If `x` contains negative inputs, the answer is computed and returned in the complex domain.

Parameters

**n**array_like
The integer base(s) in which the log is taken.

**x**array_like
The value(s) whose log base `n` is (are) required.

Returns

**out**ndarray or scalar
The log base `n` of the `x` value(s). If `x` was a scalar, so is `out`, otherwise an array is returned.

#### Examples

```
>>> np.set_printoptions(precision=4)
```

```
>>> np.emath.logn(2, [4, 8])
array([2., 3.])
>>> np.emath.logn(2, [-4, -8, 8])
array([2.+4.5324j, 3.+4.5324j, 3.+0.j ])
```

<https://numpy.org/doc/1.23/reference/generated/numpy.emath.logn.html>

numpy.emath.log10
=================

emath.log10(*x*)[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/lib/scimath.py#L299-L346)

Compute the logarithm base 10 of `x`.

Return the “principal value” (for a description of this, see [`numpy.log10`](numpy.log10#numpy.log10 "numpy.log10")) of \(log_{10}(x)\). For real `x > 0`, this is a real number (`log10(0)` returns `-inf` and `log10(np.inf)` returns `inf`). Otherwise, the complex principal value is returned.

Parameters

**x**array_like or scalar
The value(s) whose log base 10 is (are) required.

Returns

**out**ndarray or scalar
The log base 10 of the `x` value(s). If `x` was a scalar, so is `out`, otherwise an array object is returned.
See also

[`numpy.log10`](numpy.log10#numpy.log10 "numpy.log10")

#### Notes

For a log10() that returns `NAN` when real `x < 0`, use [`numpy.log10`](numpy.log10#numpy.log10 "numpy.log10") (note, however, that otherwise [`numpy.log10`](numpy.log10#numpy.log10 "numpy.log10") and this [`log10`](numpy.log10#numpy.log10 "numpy.log10") are identical, i.e., both return `-inf` for `x = 0`, `inf` for `x = inf`, and, notably, the complex principal value if `x.imag != 0`).

#### Examples

(We set the printing precision so the example can be auto-tested.)

```
>>> np.set_printoptions(precision=4)
```

```
>>> np.emath.log10(10**1)
1.0
```

```
>>> np.emath.log10([-10**1, -10**2, 10**2])
array([1.+1.3644j, 2.+1.3644j, 2.+0.j ])
```

<https://numpy.org/doc/1.23/reference/generated/numpy.emath.log10.html>

numpy.emath.power
=================

emath.power(*x*, *p*)[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/lib/scimath.py#L441-L483)

Return x to the power p, (x**p).

If `x` contains negative values, the output is converted to the complex domain.

Parameters

**x**array_like
The input value(s).

**p**array_like of ints
The power(s) to which `x` is raised. If `x` contains multiple values, `p` has to either be a scalar, or contain the same number of values as `x`. In the latter case, the result is `x[0]**p[0], x[1]**p[1], ...`.

Returns

**out**ndarray or scalar
The result of `x**p`. If `x` and `p` are scalars, so is `out`, otherwise an array is returned.

See also

[`numpy.power`](numpy.power#numpy.power "numpy.power")

#### Examples

```
>>> np.set_printoptions(precision=4)
```

```
>>> np.emath.power([2, 4], 2)
array([ 4, 16])
>>> np.emath.power([2, 4], -2)
array([0.25 , 0.0625])
>>> np.emath.power([-2, 4], 2)
array([ 4.-0.j, 16.+0.j])
```
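A small sketch contrasting `np.power` with `np.emath.power` on a negative base (not part of the official examples):

```python
import numpy as np

# With a negative base, np.emath.power promotes the result to the
# complex domain, while np.power keeps the input's real domain.
real_result = np.power(-2.0, 3)         # stays real: -8.0
complex_result = np.emath.power(-2, 3)  # promoted to complex

assert real_result == -8.0
assert np.iscomplexobj(complex_result)
assert np.allclose(complex_result, -8)
```

For integer exponents both agree in value; the difference is only the output dtype and, for cases like `np.power(-2, -2)` with integer inputs, whether an error is raised at all.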
<https://numpy.org/doc/1.23/reference/generated/numpy.emath.power.html>

numpy.emath.arccos
==================

emath.arccos(*x*)[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/lib/scimath.py#L486-L528)

Compute the inverse cosine of x.

Return the “principal value” (for a description of this, see [`numpy.arccos`](numpy.arccos#numpy.arccos "numpy.arccos")) of the inverse cosine of `x`. For real `x` such that `abs(x) <= 1`, this is a real number in the closed interval \([0, \pi]\). Otherwise, the complex principal value is returned.

Parameters

**x**array_like or scalar
The value(s) whose arccos is (are) required.

Returns

**out**ndarray or scalar
The inverse cosine(s) of the `x` value(s). If `x` was a scalar, so is `out`, otherwise an array object is returned.

See also

[`numpy.arccos`](numpy.arccos#numpy.arccos "numpy.arccos")

#### Notes

For an arccos() that returns `NAN` when real `x` is not in the interval `[-1,1]`, use [`numpy.arccos`](numpy.arccos#numpy.arccos "numpy.arccos").

#### Examples

```
>>> np.set_printoptions(precision=4)
```

```
>>> np.emath.arccos(1) # a scalar is returned
0.0
```

```
>>> np.emath.arccos([1,2])
array([0.-0.j , 0.-1.317j])
```

<https://numpy.org/doc/1.23/reference/generated/numpy.emath.arccos.html>

numpy.emath.arcsin
==================

emath.arcsin(*x*)[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/lib/scimath.py#L531-L574)

Compute the inverse sine of x.

Return the “principal value” (for a description of this, see [`numpy.arcsin`](numpy.arcsin#numpy.arcsin "numpy.arcsin")) of the inverse sine of `x`. For real `x` such that `abs(x) <= 1`, this is a real number in the closed interval \([-\pi/2, \pi/2]\). Otherwise, the complex principal value is returned.

Parameters

**x**array_like or scalar
The value(s) whose arcsin is (are) required.

Returns

**out**ndarray or scalar
The inverse sine(s) of the `x` value(s).
If `x` was a scalar, so is `out`, otherwise an array object is returned.

See also

[`numpy.arcsin`](numpy.arcsin#numpy.arcsin "numpy.arcsin")

#### Notes

For an arcsin() that returns `NAN` when real `x` is not in the interval `[-1,1]`, use [`numpy.arcsin`](numpy.arcsin#numpy.arcsin "numpy.arcsin").

#### Examples

```
>>> np.set_printoptions(precision=4)
```

```
>>> np.emath.arcsin(0)
0.0
```

```
>>> np.emath.arcsin([0,1])
array([0. , 1.5708])
```

<https://numpy.org/doc/1.23/reference/generated/numpy.emath.arcsin.html>

numpy.emath.arctanh
===================

emath.arctanh(*x*)[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/lib/scimath.py#L577-L625)

Compute the inverse hyperbolic tangent of `x`.

Return the “principal value” (for a description of this, see [`numpy.arctanh`](numpy.arctanh#numpy.arctanh "numpy.arctanh")) of `arctanh(x)`. For real `x` such that `abs(x) < 1`, this is a real number. If `abs(x) > 1`, or if `x` is complex, the result is complex. Finally, `x = 1` returns `inf` and `x = -1` returns `-inf`.

Parameters

**x**array_like
The value(s) whose arctanh is (are) required.

Returns

**out**ndarray or scalar
The inverse hyperbolic tangent(s) of the `x` value(s). If `x` was a scalar so is `out`, otherwise an array is returned.

See also

[`numpy.arctanh`](numpy.arctanh#numpy.arctanh "numpy.arctanh")

#### Notes

For an arctanh() that returns `NAN` when real `x` is not in the interval `(-1,1)`, use [`numpy.arctanh`](numpy.arctanh#numpy.arctanh "numpy.arctanh") (this latter, however, does return +/-inf for `x = +/-1`).

#### Examples

```
>>> np.set_printoptions(precision=4)
```

```
>>> from numpy.testing import suppress_warnings
>>> with suppress_warnings() as sup:
...     sup.filter(RuntimeWarning)
...     np.emath.arctanh(np.eye(2))
array([[inf,  0.],
       [ 0., inf]])
>>> np.emath.arctanh([1j])
array([0.+0.7854j])
```
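The emath inverse trig functions above share one convention worth checking: outside `[-1, 1]` they return complex principal values, and the identity `arcsin(x) + arccos(x) == pi/2` continues to hold there. A minimal sketch (not part of the official examples):

```python
import numpy as np

# Outside [-1, 1] both functions return complex principal values,
# chosen consistently so the usual identity still holds.
x = 2.0
s = np.emath.arcsin(x)
c = np.emath.arccos(x)

assert np.iscomplexobj(s) and np.iscomplexobj(c)
assert np.allclose(s + c, np.pi / 2)
```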
<https://numpy.org/doc/1.23/reference/generated/numpy.emath.arctanh.htmlnumpy.seterr ============ numpy.seterr(*all=None*, *divide=None*, *over=None*, *under=None*, *invalid=None*)[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/core/_ufunc_config.py#L32-L128) Set how floating-point errors are handled. Note that operations on integer scalar types (such as [`int16`](../arrays.scalars#numpy.int16 "numpy.int16")) are handled like floating point, and are affected by these settings. Parameters **all**{‘ignore’, ‘warn’, ‘raise’, ‘call’, ‘print’, ‘log’}, optional Set treatment for all types of floating-point errors at once: * ignore: Take no action when the exception occurs. * warn: Print a `RuntimeWarning` (via the Python [`warnings`](https://docs.python.org/3/library/warnings.html#module-warnings "(in Python v3.10)") module). * raise: Raise a `FloatingPointError`. * call: Call a function specified using the [`seterrcall`](numpy.seterrcall#numpy.seterrcall "numpy.seterrcall") function. * print: Print a warning directly to `stdout`. * log: Record error in a Log object specified by [`seterrcall`](numpy.seterrcall#numpy.seterrcall "numpy.seterrcall"). The default is not to change the current behavior. **divide**{‘ignore’, ‘warn’, ‘raise’, ‘call’, ‘print’, ‘log’}, optional Treatment for division by zero. **over**{‘ignore’, ‘warn’, ‘raise’, ‘call’, ‘print’, ‘log’}, optional Treatment for floating-point overflow. **under**{‘ignore’, ‘warn’, ‘raise’, ‘call’, ‘print’, ‘log’}, optional Treatment for floating-point underflow. **invalid**{‘ignore’, ‘warn’, ‘raise’, ‘call’, ‘print’, ‘log’}, optional Treatment for invalid floating-point operation. Returns **old_settings**dict Dictionary containing the old settings. See also [`seterrcall`](numpy.seterrcall#numpy.seterrcall "numpy.seterrcall") Set a callback function for the ‘call’ mode. 
[`geterr`](numpy.geterr#numpy.geterr "numpy.geterr"), [`geterrcall`](numpy.geterrcall#numpy.geterrcall "numpy.geterrcall"), [`errstate`](numpy.errstate#numpy.errstate "numpy.errstate") #### Notes The floating-point exceptions are defined in the IEEE 754 standard [[1]](#r4cab4292821f-1): * Division by zero: infinite result obtained from finite numbers. * Overflow: result too large to be expressed. * Underflow: result so close to zero that some precision was lost. * Invalid operation: result is not an expressible number, typically indicates that a NaN was produced. [1](#id1) <https://en.wikipedia.org/wiki/IEEE_754 #### Examples ``` >>> old_settings = np.seterr(all='ignore') #seterr to known value >>> np.seterr(over='raise') {'divide': 'ignore', 'over': 'ignore', 'under': 'ignore', 'invalid': 'ignore'} >>> np.seterr(**old_settings) # reset to default {'divide': 'ignore', 'over': 'raise', 'under': 'ignore', 'invalid': 'ignore'} ``` ``` >>> np.int16(32000) * np.int16(3) 30464 >>> old_settings = np.seterr(all='warn', over='raise') >>> np.int16(32000) * np.int16(3) Traceback (most recent call last): File "<stdin>", line 1, in <module> FloatingPointError: overflow encountered in short_scalars ``` ``` >>> old_settings = np.seterr(all='print') >>> np.geterr() {'divide': 'print', 'over': 'print', 'under': 'print', 'invalid': 'print'} >>> np.int16(32000) * np.int16(3) 30464 ``` © 2005–2022 NumPy Developers Licensed under the 3-clause BSD License. <https://numpy.org/doc/1.23/reference/generated/numpy.seterr.htmlnumpy.geterr ============ numpy.geterr()[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/core/_ufunc_config.py#L131-L178) Get the current way of handling floating-point errors. Returns **res**dict A dictionary with keys “divide”, “over”, “under”, and “invalid”, whose values are from the strings “ignore”, “print”, “log”, “warn”, “raise”, and “call”. The keys represent possible floating-point exceptions, and the values define how these exceptions are handled. 
See also [`geterrcall`](numpy.geterrcall#numpy.geterrcall "numpy.geterrcall"), [`seterr`](numpy.seterr#numpy.seterr "numpy.seterr"), [`seterrcall`](numpy.seterrcall#numpy.seterrcall "numpy.seterrcall") #### Notes For complete documentation of the types of floating-point exceptions and treatment options, see [`seterr`](numpy.seterr#numpy.seterr "numpy.seterr"). #### Examples ``` >>> np.geterr() {'divide': 'warn', 'over': 'warn', 'under': 'ignore', 'invalid': 'warn'} >>> np.arange(3.) / np.arange(3.) array([nan, 1., 1.]) ``` ``` >>> oldsettings = np.seterr(all='warn', over='raise') >>> np.geterr() {'divide': 'warn', 'over': 'raise', 'under': 'warn', 'invalid': 'warn'} >>> np.arange(3.) / np.arange(3.) array([nan, 1., 1.]) ``` © 2005–2022 NumPy Developers Licensed under the 3-clause BSD License. <https://numpy.org/doc/1.23/reference/generated/numpy.geterr.htmlnumpy.seterrcall ================ numpy.seterrcall(*func*)[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/core/_ufunc_config.py#L220-L310) Set the floating-point error callback function or log object. There are two ways to capture floating-point error messages. The first is to set the error-handler to ‘call’, using [`seterr`](numpy.seterr#numpy.seterr "numpy.seterr"). Then, set the function to call using this function. The second is to set the error-handler to ‘log’, using [`seterr`](numpy.seterr#numpy.seterr "numpy.seterr"). Floating-point errors then trigger a call to the ‘write’ method of the provided object. Parameters **func**callable f(err, flag) or object with write method Function to call upon floating-point errors (‘call’-mode) or object whose ‘write’ method is used to log such message (‘log’-mode). The call function takes two arguments. The first is a string describing the type of error (such as “divide by zero”, “overflow”, “underflow”, or “invalid value”), and the second is the status flag. 
The flag is a byte, whose four least-significant bits indicate the type of error, one of “divide”, “over”, “under”, “invalid”: ``` [0 0 0 0 divide over under invalid] ``` In other words, `flags = divide + 2*over + 4*under + 8*invalid`. If an object is provided, its write method should take one argument, a string. Returns **h**callable, log instance or None The old error handler. See also [`seterr`](numpy.seterr#numpy.seterr "numpy.seterr"), [`geterr`](numpy.geterr#numpy.geterr "numpy.geterr"), [`geterrcall`](numpy.geterrcall#numpy.geterrcall "numpy.geterrcall") #### Examples Callback upon error: ``` >>> def err_handler(type, flag): ... print("Floating point error (%s), with flag %s" % (type, flag)) ... ``` ``` >>> saved_handler = np.seterrcall(err_handler) >>> save_err = np.seterr(all='call') ``` ``` >>> np.array([1, 2, 3]) / 0.0 Floating point error (divide by zero), with flag 1 array([inf, inf, inf]) ``` ``` >>> np.seterrcall(saved_handler) <function err_handler at 0x...> >>> np.seterr(**save_err) {'divide': 'call', 'over': 'call', 'under': 'call', 'invalid': 'call'} ``` Log error message: ``` >>> class Log: ... def write(self, msg): ... print("LOG: %s" % msg) ... ``` ``` >>> log = Log() >>> saved_handler = np.seterrcall(log) >>> save_err = np.seterr(all='log') ``` ``` >>> np.array([1, 2, 3]) / 0.0 LOG: Warning: divide by zero encountered in divide array([inf, inf, inf]) ``` ``` >>> np.seterrcall(saved_handler) <numpy.core.numeric.Log object at 0x...> >>> np.seterr(**save_err) {'divide': 'log', 'over': 'log', 'under': 'log', 'invalid': 'log'} ``` © 2005–2022 NumPy Developers Licensed under the 3-clause BSD License. <https://numpy.org/doc/1.23/reference/generated/numpy.seterrcall.htmlnumpy.geterrcall ================ numpy.geterrcall()[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/core/_ufunc_config.py#L313-L356) Return the current callback function used on floating-point errors. 
When the error handling for a floating-point error (one of “divide”, “over”, “under”, or “invalid”) is set to ‘call’ or ‘log’, the function that is called or the log instance that is written to is returned by [`geterrcall`](#numpy.geterrcall "numpy.geterrcall"). This function or log instance has been set with [`seterrcall`](numpy.seterrcall#numpy.seterrcall "numpy.seterrcall"). Returns **errobj**callable, log instance or None The current error handler. If no handler was set through [`seterrcall`](numpy.seterrcall#numpy.seterrcall "numpy.seterrcall"), `None` is returned. See also [`seterrcall`](numpy.seterrcall#numpy.seterrcall "numpy.seterrcall"), [`seterr`](numpy.seterr#numpy.seterr "numpy.seterr"), [`geterr`](numpy.geterr#numpy.geterr "numpy.geterr") #### Notes For complete documentation of the types of floating-point exceptions and treatment options, see [`seterr`](numpy.seterr#numpy.seterr "numpy.seterr"). #### Examples ``` >>> np.geterrcall() # we did not yet set a handler, returns None ``` ``` >>> oldsettings = np.seterr(all='call') >>> def err_handler(type, flag): ... print("Floating point error (%s), with flag %s" % (type, flag)) >>> oldhandler = np.seterrcall(err_handler) >>> np.array([1, 2, 3]) / 0.0 Floating point error (divide by zero), with flag 1 array([inf, inf, inf]) ``` ``` >>> cur_handler = np.geterrcall() >>> cur_handler is err_handler True ``` © 2005–2022 NumPy Developers Licensed under the 3-clause BSD License. <https://numpy.org/doc/1.23/reference/generated/numpy.geterrcall.htmlnumpy.errstate ============== *class*numpy.errstate(***kwargs*)[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/__init__.py) Context manager for floating-point error handling. Using an instance of [`errstate`](#numpy.errstate "numpy.errstate") as a context manager allows statements in that context to execute with a known error handling behavior. 
Upon entering the context the error handling is set with [`seterr`](numpy.seterr#numpy.seterr "numpy.seterr") and [`seterrcall`](numpy.seterrcall#numpy.seterrcall "numpy.seterrcall"), and upon exiting it is reset to what it was before. Changed in version 1.17.0: [`errstate`](#numpy.errstate "numpy.errstate") is also usable as a function decorator, saving a level of indentation if an entire function is wrapped. See [`contextlib.ContextDecorator`](https://docs.python.org/3/library/contextlib.html#contextlib.ContextDecorator "(in Python v3.10)") for more information. Parameters **kwargs**{divide, over, under, invalid} Keyword arguments. The valid keywords are the possible floating-point exceptions. Each keyword should have a string value that defines the treatment for the particular error. Possible values are {‘ignore’, ‘warn’, ‘raise’, ‘call’, ‘print’, ‘log’}. See also [`seterr`](numpy.seterr#numpy.seterr "numpy.seterr"), [`geterr`](numpy.geterr#numpy.geterr "numpy.geterr"), [`seterrcall`](numpy.seterrcall#numpy.seterrcall "numpy.seterrcall"), [`geterrcall`](numpy.geterrcall#numpy.geterrcall "numpy.geterrcall") #### Notes For complete documentation of the types of floating-point exceptions and treatment options, see [`seterr`](numpy.seterr#numpy.seterr "numpy.seterr"). #### Examples ``` >>> olderr = np.seterr(all='ignore') # Set error handling to known state. ``` ``` >>> np.arange(3) / 0. array([nan, inf, inf]) >>> with np.errstate(divide='warn'): ... np.arange(3) / 0. array([nan, inf, inf]) ``` ``` >>> np.sqrt(-1) nan >>> with np.errstate(invalid='raise'): ... 
np.sqrt(-1) Traceback (most recent call last): File "<stdin>", line 2, in <module> FloatingPointError: invalid value encountered in sqrt ``` Outside the context the error handling behavior has not changed: ``` >>> np.geterr() {'divide': 'ignore', 'over': 'ignore', 'under': 'ignore', 'invalid': 'ignore'} ``` #### Methods | | | | --- | --- | | [`__call__`](numpy.errstate.__call__#numpy.errstate.__call__ "numpy.errstate.__call__")(func) | Call self as a function. | © 2005–2022 NumPy Developers Licensed under the 3-clause BSD License. <https://numpy.org/doc/1.23/reference/generated/numpy.errstate.htmlnumpy.seterrobj =============== numpy.seterrobj(*errobj*, */*) Set the object that defines floating-point error handling. The error object contains all information that defines the error handling behavior in NumPy. [`seterrobj`](#numpy.seterrobj "numpy.seterrobj") is used internally by the other functions that set error handling behavior ([`seterr`](numpy.seterr#numpy.seterr "numpy.seterr"), [`seterrcall`](numpy.seterrcall#numpy.seterrcall "numpy.seterrcall")). Parameters **errobj**list The error object, a list containing three elements: [internal numpy buffer size, error mask, error callback function]. The error mask is a single integer that holds the treatment information on all four floating point errors. The information for each error type is contained in three bits of the integer. If we print it in base 8, we can see what treatment is set for “invalid”, “under”, “over”, and “divide” (in that order). 
The printed string can be interpreted with * 0 : ‘ignore’ * 1 : ‘warn’ * 2 : ‘raise’ * 3 : ‘call’ * 4 : ‘print’ * 5 : ‘log’ See also [`geterrobj`](numpy.geterrobj#numpy.geterrobj "numpy.geterrobj"), [`seterr`](numpy.seterr#numpy.seterr "numpy.seterr"), [`geterr`](numpy.geterr#numpy.geterr "numpy.geterr"), [`seterrcall`](numpy.seterrcall#numpy.seterrcall "numpy.seterrcall"), [`geterrcall`](numpy.geterrcall#numpy.geterrcall "numpy.geterrcall") [`getbufsize`](numpy.getbufsize#numpy.getbufsize "numpy.getbufsize"), [`setbufsize`](numpy.setbufsize#numpy.setbufsize "numpy.setbufsize") #### Notes For complete documentation of the types of floating-point exceptions and treatment options, see [`seterr`](numpy.seterr#numpy.seterr "numpy.seterr"). #### Examples ``` >>> old_errobj = np.geterrobj() # first get the defaults >>> old_errobj [8192, 521, None] ``` ``` >>> def err_handler(type, flag): ... print("Floating point error (%s), with flag %s" % (type, flag)) ... >>> new_errobj = [20000, 12, err_handler] >>> np.seterrobj(new_errobj) >>> np.base_repr(12, 8) # int for divide=4 ('print') and over=1 ('warn') '14' >>> np.geterr() {'over': 'warn', 'divide': 'print', 'invalid': 'ignore', 'under': 'ignore'} >>> np.geterrcall() is err_handler True ``` © 2005–2022 NumPy Developers Licensed under the 3-clause BSD License. <https://numpy.org/doc/1.23/reference/generated/numpy.seterrobj.htmlnumpy.geterrobj =============== numpy.geterrobj() Return the current object that defines floating-point error handling. The error object contains all information that defines the error handling behavior in NumPy. [`geterrobj`](#numpy.geterrobj "numpy.geterrobj") is used internally by the other functions that get and set error handling behavior ([`geterr`](numpy.geterr#numpy.geterr "numpy.geterr"), [`seterr`](numpy.seterr#numpy.seterr "numpy.seterr"), [`geterrcall`](numpy.geterrcall#numpy.geterrcall "numpy.geterrcall"), [`seterrcall`](numpy.seterrcall#numpy.seterrcall "numpy.seterrcall")). 
Returns **errobj**list The error object, a list containing three elements: [internal numpy buffer size, error mask, error callback function]. The error mask is a single integer that holds the treatment information on all four floating point errors. The information for each error type is contained in three bits of the integer. If we print it in base 8, we can see what treatment is set for “invalid”, “under”, “over”, and “divide” (in that order). The printed string can be interpreted with * 0 : ‘ignore’ * 1 : ‘warn’ * 2 : ‘raise’ * 3 : ‘call’ * 4 : ‘print’ * 5 : ‘log’ See also [`seterrobj`](numpy.seterrobj#numpy.seterrobj "numpy.seterrobj"), [`seterr`](numpy.seterr#numpy.seterr "numpy.seterr"), [`geterr`](numpy.geterr#numpy.geterr "numpy.geterr"), [`seterrcall`](numpy.seterrcall#numpy.seterrcall "numpy.seterrcall"), [`geterrcall`](numpy.geterrcall#numpy.geterrcall "numpy.geterrcall") [`getbufsize`](numpy.getbufsize#numpy.getbufsize "numpy.getbufsize"), [`setbufsize`](numpy.setbufsize#numpy.setbufsize "numpy.setbufsize") #### Notes For complete documentation of the types of floating-point exceptions and treatment options, see [`seterr`](numpy.seterr#numpy.seterr "numpy.seterr"). #### Examples ``` >>> np.geterrobj() # first get the defaults [8192, 521, None] ``` ``` >>> def err_handler(type, flag): ... print("Floating point error (%s), with flag %s" % (type, flag)) ... >>> old_bufsize = np.setbufsize(20000) >>> old_err = np.seterr(divide='raise') >>> old_handler = np.seterrcall(err_handler) >>> np.geterrobj() [8192, 521, <function err_handler at 0x91dcaac>] ``` ``` >>> old_err = np.seterr(all='ignore') >>> np.base_repr(np.geterrobj()[1], 8) '0' >>> old_err = np.seterr(divide='warn', over='log', under='call', ... invalid='print') >>> np.base_repr(np.geterrobj()[1], 8) '4351' ``` © 2005–2022 NumPy Developers Licensed under the 3-clause BSD License. 
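In practice, the error-handling machinery documented above (`seterr`, `seterrcall`, `seterrobj`) is most often consumed through `np.errstate`. A small sketch using its decorator form (available since NumPy 1.17), with a hypothetical helper named `safe_ratio`:

```python
import numpy as np

# np.errstate can wrap a whole function; the previous error-handling
# state is restored automatically when the function returns.
@np.errstate(divide="ignore", invalid="ignore")
def safe_ratio(a, b):
    """Elementwise a/b, letting inf and nan pass through silently."""
    return np.asarray(a, dtype=float) / np.asarray(b, dtype=float)

out = safe_ratio([1.0, 0.0, -1.0], [0.0, 0.0, 2.0])
assert np.isposinf(out[0])  # 1/0 -> inf
assert np.isnan(out[1])     # 0/0 -> nan
assert out[2] == -0.5
```

Compared with calling `seterr` directly, this leaves the global error state untouched outside the decorated function.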
<https://numpy.org/doc/1.23/reference/generated/numpy.geterrobj.htmlnumpy.fft.rfft ============== fft.rfft(*a*, *n=None*, *axis=- 1*, *norm=None*)[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/fft/_pocketfft.py#L320-L410) Compute the one-dimensional discrete Fourier Transform for real input. This function computes the one-dimensional *n*-point discrete Fourier Transform (DFT) of a real-valued array by means of an efficient algorithm called the Fast Fourier Transform (FFT). Parameters **a**array_like Input array **n**int, optional Number of points along transformation axis in the input to use. If `n` is smaller than the length of the input, the input is cropped. If it is larger, the input is padded with zeros. If `n` is not given, the length of the input along the axis specified by `axis` is used. **axis**int, optional Axis over which to compute the FFT. If not given, the last axis is used. **norm**{“backward”, “ortho”, “forward”}, optional New in version 1.10.0. Normalization mode (see [`numpy.fft`](../routines.fft#module-numpy.fft "numpy.fft")). Default is “backward”. Indicates which direction of the forward/backward pair of transforms is scaled and with what normalization factor. New in version 1.20.0: The “backward”, “forward” values were added. Returns **out**complex ndarray The truncated or zero-padded input, transformed along the axis indicated by `axis`, or the last one if `axis` is not specified. If `n` is even, the length of the transformed axis is `(n/2)+1`. If `n` is odd, the length is `(n+1)/2`. Raises IndexError If `axis` is not a valid axis of `a`. See also [`numpy.fft`](../routines.fft#module-numpy.fft "numpy.fft") For definition of the DFT and conventions used. [`irfft`](numpy.fft.irfft#numpy.fft.irfft "numpy.fft.irfft") The inverse of [`rfft`](#numpy.fft.rfft "numpy.fft.rfft"). [`fft`](../routines.fft#module-numpy.fft "numpy.fft") The one-dimensional FFT of general (complex) input. 
[`fftn`](numpy.fft.fftn#numpy.fft.fftn "numpy.fft.fftn") The *n*-dimensional FFT. [`rfftn`](numpy.fft.rfftn#numpy.fft.rfftn "numpy.fft.rfftn") The *n*-dimensional FFT of real input. #### Notes When the DFT is computed for purely real input, the output is Hermitian-symmetric, i.e. the negative frequency terms are just the complex conjugates of the corresponding positive-frequency terms, and the negative-frequency terms are therefore redundant. This function does not compute the negative frequency terms, and the length of the transformed axis of the output is therefore `n//2 + 1`. When `A = rfft(a)` and fs is the sampling frequency, `A[0]` contains the zero-frequency term 0*fs, which is real due to Hermitian symmetry. If `n` is even, `A[-1]` contains the term representing both positive and negative Nyquist frequency (+fs/2 and -fs/2), and must also be purely real. If `n` is odd, there is no term at fs/2; `A[-1]` contains the largest positive frequency (fs/2*(n-1)/n), and is complex in the general case. If the input `a` contains an imaginary part, it is silently discarded. #### Examples ``` >>> np.fft.fft([0, 1, 0, 0]) array([ 1.+0.j, 0.-1.j, -1.+0.j, 0.+1.j]) # may vary >>> np.fft.rfft([0, 1, 0, 0]) array([ 1.+0.j, 0.-1.j, -1.+0.j]) # may vary ``` Notice how the final element of the [`fft`](../routines.fft#module-numpy.fft "numpy.fft") output is the complex conjugate of the second element, for real input. For [`rfft`](#numpy.fft.rfft "numpy.fft.rfft"), this symmetry is exploited to compute only the non-negative frequency terms. © 2005–2022 NumPy Developers Licensed under the 3-clause BSD License. <https://numpy.org/doc/1.23/reference/generated/numpy.fft.rfft.htmlnumpy.fft.irfft =============== fft.irfft(*a*, *n=None*, *axis=- 1*, *norm=None*)[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/fft/_pocketfft.py#L413-L514) Computes the inverse of [`rfft`](numpy.fft.rfft#numpy.fft.rfft "numpy.fft.rfft"). 
This function computes the inverse of the one-dimensional *n*-point discrete Fourier Transform of real input computed by [`rfft`](numpy.fft.rfft#numpy.fft.rfft "numpy.fft.rfft"). In other words, `irfft(rfft(a), len(a)) == a` to within numerical accuracy. (See Notes below for why `len(a)` is necessary here.) The input is expected to be in the form returned by [`rfft`](numpy.fft.rfft#numpy.fft.rfft "numpy.fft.rfft"), i.e. the real zero-frequency term followed by the complex positive frequency terms in order of increasing frequency. Since the discrete Fourier Transform of real input is Hermitian-symmetric, the negative frequency terms are taken to be the complex conjugates of the corresponding positive frequency terms. Parameters **a**array_like The input array. **n**int, optional Length of the transformed axis of the output. For `n` output points, `n//2+1` input points are necessary. If the input is longer than this, it is cropped. If it is shorter than this, it is padded with zeros. If `n` is not given, it is taken to be `2*(m-1)` where `m` is the length of the input along the axis specified by `axis`. **axis**int, optional Axis over which to compute the inverse FFT. If not given, the last axis is used. **norm**{“backward”, “ortho”, “forward”}, optional New in version 1.10.0. Normalization mode (see [`numpy.fft`](../routines.fft#module-numpy.fft "numpy.fft")). Default is “backward”. Indicates which direction of the forward/backward pair of transforms is scaled and with what normalization factor. New in version 1.20.0: The “backward”, “forward” values were added. Returns **out**ndarray The truncated or zero-padded input, transformed along the axis indicated by `axis`, or the last one if `axis` is not specified. The length of the transformed axis is `n`, or, if `n` is not given, `2*(m-1)` where `m` is the length of the transformed axis of the input. To get an odd number of output points, `n` must be specified. Raises IndexError If `axis` is not a valid axis of `a`. 
See also [`numpy.fft`](../routines.fft#module-numpy.fft "numpy.fft") For definition of the DFT and conventions used. [`rfft`](numpy.fft.rfft#numpy.fft.rfft "numpy.fft.rfft") The one-dimensional FFT of real input, of which [`irfft`](#numpy.fft.irfft "numpy.fft.irfft") is inverse. [`fft`](../routines.fft#module-numpy.fft "numpy.fft") The one-dimensional FFT. [`irfft2`](numpy.fft.irfft2#numpy.fft.irfft2 "numpy.fft.irfft2") The inverse of the two-dimensional FFT of real input. [`irfftn`](numpy.fft.irfftn#numpy.fft.irfftn "numpy.fft.irfftn") The inverse of the *n*-dimensional FFT of real input. #### Notes Returns the real valued `n`-point inverse discrete Fourier transform of `a`, where `a` contains the non-negative frequency terms of a Hermitian-symmetric sequence. `n` is the length of the result, not the input. If you specify an `n` such that `a` must be zero-padded or truncated, the extra/removed values will be added/removed at high frequencies. One can thus resample a series to `m` points via Fourier interpolation by: `a_resamp = irfft(rfft(a), m)`. The correct interpretation of the hermitian input depends on the length of the original data, as given by `n`. This is because each input shape could correspond to either an odd or even length signal. By default, [`irfft`](#numpy.fft.irfft "numpy.fft.irfft") assumes an even output length which puts the last entry at the Nyquist frequency; aliasing with its symmetric counterpart. By Hermitian symmetry, the value is thus treated as purely real. To avoid losing information, the correct length of the real input **must** be given. #### Examples ``` >>> np.fft.ifft([1, -1j, -1, 1j]) array([0.+0.j, 1.+0.j, 0.+0.j, 0.+0.j]) # may vary >>> np.fft.irfft([1, -1j, -1]) array([0., 1., 0., 0.]) ``` Notice how the last term in the input to the ordinary [`ifft`](numpy.fft.ifft#numpy.fft.ifft "numpy.fft.ifft") is the complex conjugate of the second term, and the output has zero imaginary part everywhere. 
When calling [`irfft`](#numpy.fft.irfft "numpy.fft.irfft"), the negative frequencies are not specified, and the output array is purely real. © 2005–2022 NumPy Developers Licensed under the 3-clause BSD License. <https://numpy.org/doc/1.23/reference/generated/numpy.fft.irfft.htmlnumpy.fft.rfft2 =============== fft.rfft2(*a*, *s=None*, *axes=(- 2, - 1)*, *norm=None*)[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/fft/_pocketfft.py#L1208-L1257) Compute the 2-dimensional FFT of a real array. Parameters **a**array Input array, taken to be real. **s**sequence of ints, optional Shape of the FFT. **axes**sequence of ints, optional Axes over which to compute the FFT. **norm**{“backward”, “ortho”, “forward”}, optional New in version 1.10.0. Normalization mode (see [`numpy.fft`](../routines.fft#module-numpy.fft "numpy.fft")). Default is “backward”. Indicates which direction of the forward/backward pair of transforms is scaled and with what normalization factor. New in version 1.20.0: The “backward”, “forward” values were added. Returns **out**ndarray The result of the real 2-D FFT. See also [`rfftn`](numpy.fft.rfftn#numpy.fft.rfftn "numpy.fft.rfftn") Compute the N-dimensional discrete Fourier Transform for real input. #### Notes This is really just [`rfftn`](numpy.fft.rfftn#numpy.fft.rfftn "numpy.fft.rfftn") with different default behavior. For more details see [`rfftn`](numpy.fft.rfftn#numpy.fft.rfftn "numpy.fft.rfftn"). #### Examples ``` >>> a = np.mgrid[:5, :5][0] >>> np.fft.rfft2(a) array([[ 50. +0.j , 0. +0.j , 0. +0.j ], [-12.5+17.20477401j, 0. +0.j , 0. +0.j ], [-12.5 +4.0614962j , 0. +0.j , 0. +0.j ], [-12.5 -4.0614962j , 0. +0.j , 0. +0.j ], [-12.5-17.20477401j, 0. +0.j , 0. +0.j ]]) ``` © 2005–2022 NumPy Developers Licensed under the 3-clause BSD License. 
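The irfft notes above mention resampling by Fourier interpolation via `irfft(rfft(a), m)`. A minimal sketch of that idea; note that with the default "backward" normalization the amplitude scales by `n/m` (the inverse transform divides by the output length, not the original length), so we rescale before comparing:

```python
import numpy as np

# Upsample a band-limited signal from n to m points by zero-padding
# its rfft spectrum, as suggested in the irfft notes.
n, m = 16, 32
t = np.arange(n) / n
a = np.sin(2 * np.pi * t)  # exactly one period: band-limited

a_resamp = np.fft.irfft(np.fft.rfft(a), m)

t_fine = np.arange(m) / m
# After correcting the m/n amplitude factor, the resampled signal
# matches the underlying sine at the new sample points.
assert a_resamp.shape == (m,)
assert np.allclose(a_resamp * (m / n), np.sin(2 * np.pi * t_fine), atol=1e-8)
```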
<https://numpy.org/doc/1.23/reference/generated/numpy.fft.rfft2.html> numpy.fft.irfft2 ================ fft.irfft2(*a*, *s=None*, *axes=(-2, -1)*, *norm=None*)[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/fft/_pocketfft.py#L1370-L1424) Computes the inverse of [`rfft2`](numpy.fft.rfft2#numpy.fft.rfft2 "numpy.fft.rfft2"). Parameters **a**array_like The input array. **s**sequence of ints, optional Shape of the real output to the inverse FFT. **axes**sequence of ints, optional The axes over which to compute the inverse FFT. Default is the last two axes. **norm**{“backward”, “ortho”, “forward”}, optional New in version 1.10.0. Normalization mode (see [`numpy.fft`](../routines.fft#module-numpy.fft "numpy.fft")). Default is “backward”. Indicates which direction of the forward/backward pair of transforms is scaled and with what normalization factor. New in version 1.20.0: The “backward”, “forward” values were added. Returns **out**ndarray The result of the inverse real 2-D FFT. See also [`rfft2`](numpy.fft.rfft2#numpy.fft.rfft2 "numpy.fft.rfft2") The forward two-dimensional FFT of real input, of which [`irfft2`](#numpy.fft.irfft2 "numpy.fft.irfft2") is the inverse. [`rfft`](numpy.fft.rfft#numpy.fft.rfft "numpy.fft.rfft") The one-dimensional FFT for real input. [`irfft`](numpy.fft.irfft#numpy.fft.irfft "numpy.fft.irfft") The inverse of the one-dimensional FFT of real input. [`irfftn`](numpy.fft.irfftn#numpy.fft.irfftn "numpy.fft.irfftn") Compute the inverse of the N-dimensional FFT of real input. #### Notes This is really [`irfftn`](numpy.fft.irfftn#numpy.fft.irfftn "numpy.fft.irfftn") with different defaults. For more details see [`irfftn`](numpy.fft.irfftn#numpy.fft.irfftn "numpy.fft.irfftn").
#### Examples ``` >>> a = np.mgrid[:5, :5][0] >>> A = np.fft.rfft2(a) >>> np.fft.irfft2(A, s=a.shape) array([[0., 0., 0., 0., 0.], [1., 1., 1., 1., 1.], [2., 2., 2., 2., 2.], [3., 3., 3., 3., 3.], [4., 4., 4., 4., 4.]]) ``` © 2005–2022 NumPy Developers Licensed under the 3-clause BSD License. <https://numpy.org/doc/1.23/reference/generated/numpy.fft.irfft2.htmlnumpy.fft.rfftn =============== fft.rfftn(*a*, *s=None*, *axes=None*, *norm=None*)[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/fft/_pocketfft.py#L1110-L1205) Compute the N-dimensional discrete Fourier Transform for real input. This function computes the N-dimensional discrete Fourier Transform over any number of axes in an M-dimensional real array by means of the Fast Fourier Transform (FFT). By default, all axes are transformed, with the real transform performed over the last axis, while the remaining transforms are complex. Parameters **a**array_like Input array, taken to be real. **s**sequence of ints, optional Shape (length along each transformed axis) to use from the input. (`s[0]` refers to axis 0, `s[1]` to axis 1, etc.). The final element of `s` corresponds to `n` for `rfft(x, n)`, while for the remaining axes, it corresponds to `n` for `fft(x, n)`. Along any axis, if the given shape is smaller than that of the input, the input is cropped. If it is larger, the input is padded with zeros. if `s` is not given, the shape of the input along the axes specified by `axes` is used. **axes**sequence of ints, optional Axes over which to compute the FFT. If not given, the last `len(s)` axes are used, or all axes if `s` is also not specified. **norm**{“backward”, “ortho”, “forward”}, optional New in version 1.10.0. Normalization mode (see [`numpy.fft`](../routines.fft#module-numpy.fft "numpy.fft")). Default is “backward”. Indicates which direction of the forward/backward pair of transforms is scaled and with what normalization factor. 
New in version 1.20.0: The “backward”, “forward” values were added. Returns **out**complex ndarray The truncated or zero-padded input, transformed along the axes indicated by `axes`, or by a combination of `s` and `a`, as explained in the parameters section above. The length of the last axis transformed will be `s[-1]//2+1`, while the remaining transformed axes will have lengths according to `s`, or unchanged from the input. Raises ValueError If `s` and `axes` have different lengths. IndexError If an element of `axes` is larger than the number of axes of `a`. See also [`irfftn`](numpy.fft.irfftn#numpy.fft.irfftn "numpy.fft.irfftn") The inverse of [`rfftn`](#numpy.fft.rfftn "numpy.fft.rfftn"), i.e. the inverse of the n-dimensional FFT of real input. [`fft`](../routines.fft#module-numpy.fft "numpy.fft") The one-dimensional FFT, with definitions and conventions used. [`rfft`](numpy.fft.rfft#numpy.fft.rfft "numpy.fft.rfft") The one-dimensional FFT of real input. [`fftn`](numpy.fft.fftn#numpy.fft.fftn "numpy.fft.fftn") The n-dimensional FFT. [`rfft2`](numpy.fft.rfft2#numpy.fft.rfft2 "numpy.fft.rfft2") The two-dimensional FFT of real input. #### Notes The transform for real input is performed over the last transformation axis, as by [`rfft`](numpy.fft.rfft#numpy.fft.rfft "numpy.fft.rfft"), then the transform over the remaining axes is performed as by [`fftn`](numpy.fft.fftn#numpy.fft.fftn "numpy.fft.fftn"). The order of the output is as for [`rfft`](numpy.fft.rfft#numpy.fft.rfft "numpy.fft.rfft") for the final transformation axis, and as for [`fftn`](numpy.fft.fftn#numpy.fft.fftn "numpy.fft.fftn") for the remaining transformation axes. See [`fft`](../routines.fft#module-numpy.fft "numpy.fft") for details, definitions and conventions used.
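The axis ordering described in the Notes can be verified directly: `rfftn` matches a real transform over the last axis followed by a complex transform over the remaining axes. A minimal sketch (shapes chosen arbitrarily):

```python
import numpy as np

# rfftn performs a real transform (rfft) over the last axis and complex
# transforms (fft) over the remaining axes, in that order.
x = np.random.default_rng(1).random((4, 6))
direct = np.fft.rfftn(x)
staged = np.fft.fft(np.fft.rfft(x, axis=-1), axis=0)

# Last transformed axis has length s[-1] // 2 + 1, here 6 // 2 + 1 = 4.
assert direct.shape == (4, 4)
assert np.allclose(direct, staged)
```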
#### Examples ``` >>> a = np.ones((2, 2, 2)) >>> np.fft.rfftn(a) array([[[8.+0.j, 0.+0.j], # may vary [0.+0.j, 0.+0.j]], [[0.+0.j, 0.+0.j], [0.+0.j, 0.+0.j]]]) ``` ``` >>> np.fft.rfftn(a, axes=(2, 0)) array([[[4.+0.j, 0.+0.j], # may vary [4.+0.j, 0.+0.j]], [[0.+0.j, 0.+0.j], [0.+0.j, 0.+0.j]]]) ``` © 2005–2022 NumPy Developers Licensed under the 3-clause BSD License. <https://numpy.org/doc/1.23/reference/generated/numpy.fft.rfftn.htmlnumpy.fft.irfftn ================ fft.irfftn(*a*, *s=None*, *axes=None*, *norm=None*)[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/fft/_pocketfft.py#L1260-L1367) Computes the inverse of [`rfftn`](numpy.fft.rfftn#numpy.fft.rfftn "numpy.fft.rfftn"). This function computes the inverse of the N-dimensional discrete Fourier Transform for real input over any number of axes in an M-dimensional array by means of the Fast Fourier Transform (FFT). In other words, `irfftn(rfftn(a), a.shape) == a` to within numerical accuracy. (The `a.shape` is necessary like `len(a)` is for [`irfft`](numpy.fft.irfft#numpy.fft.irfft "numpy.fft.irfft"), and for the same reason.) The input should be ordered in the same way as is returned by [`rfftn`](numpy.fft.rfftn#numpy.fft.rfftn "numpy.fft.rfftn"), i.e. as for [`irfft`](numpy.fft.irfft#numpy.fft.irfft "numpy.fft.irfft") for the final transformation axis, and as for [`ifftn`](numpy.fft.ifftn#numpy.fft.ifftn "numpy.fft.ifftn") along all the other axes. Parameters **a**array_like Input array. **s**sequence of ints, optional Shape (length of each transformed axis) of the output (`s[0]` refers to axis 0, `s[1]` to axis 1, etc.). `s` is also the number of input points used along this axis, except for the last axis, where `s[-1]//2+1` points of the input are used. Along any axis, if the shape indicated by `s` is smaller than that of the input, the input is cropped. If it is larger, the input is padded with zeros. If `s` is not given, the shape of the input along the axes specified by axes is used. 
Except for the last axis, which is taken to be `2*(m-1)` where `m` is the length of the input along that axis. **axes**sequence of ints, optional Axes over which to compute the inverse FFT. If not given, the last `len(s)` axes are used, or all axes if `s` is also not specified. Repeated indices in `axes` mean that the inverse transform over that axis is performed multiple times. **norm**{“backward”, “ortho”, “forward”}, optional New in version 1.10.0. Normalization mode (see [`numpy.fft`](../routines.fft#module-numpy.fft "numpy.fft")). Default is “backward”. Indicates which direction of the forward/backward pair of transforms is scaled and with what normalization factor. New in version 1.20.0: The “backward”, “forward” values were added. Returns **out**ndarray The truncated or zero-padded input, transformed along the axes indicated by `axes`, or by a combination of `s` or `a`, as explained in the parameters section above. The length of each transformed axis is as given by the corresponding element of `s`, or the length of the input in every axis except for the last one if `s` is not given. In the final transformed axis the length of the output when `s` is not given is `2*(m-1)` where `m` is the length of the final transformed axis of the input. To get an odd number of output points in the final axis, `s` must be specified. Raises ValueError If `s` and `axes` have different lengths. IndexError If an element of `axes` is larger than the number of axes of `a`. See also [`rfftn`](numpy.fft.rfftn#numpy.fft.rfftn "numpy.fft.rfftn") The forward n-dimensional FFT of real input, of which [`irfftn`](#numpy.fft.irfftn "numpy.fft.irfftn") is the inverse. [`fft`](../routines.fft#module-numpy.fft "numpy.fft") The one-dimensional FFT, with definitions and conventions used. [`irfft`](numpy.fft.irfft#numpy.fft.irfft "numpy.fft.irfft") The inverse of the one-dimensional FFT of real input.
[`irfft2`](numpy.fft.irfft2#numpy.fft.irfft2 "numpy.fft.irfft2") The inverse of the two-dimensional FFT of real input. #### Notes See [`fft`](../routines.fft#module-numpy.fft "numpy.fft") for definitions and conventions used. See [`rfft`](numpy.fft.rfft#numpy.fft.rfft "numpy.fft.rfft") for definitions and conventions used for real input. The correct interpretation of the hermitian input depends on the shape of the original data, as given by `s`. This is because each input shape could correspond to either an odd or even length signal. By default, [`irfftn`](#numpy.fft.irfftn "numpy.fft.irfftn") assumes an even output length which puts the last entry at the Nyquist frequency; aliasing with its symmetric counterpart. When performing the final complex to real transform, the last value is thus treated as purely real. To avoid losing information, the correct shape of the real input **must** be given. #### Examples ``` >>> a = np.zeros((3, 2, 2)) >>> a[0, 0, 0] = 3 * 2 * 2 >>> np.fft.irfftn(a) array([[[1., 1.], [1., 1.]], [[1., 1.], [1., 1.]], [[1., 1.], [1., 1.]]]) ``` © 2005–2022 NumPy Developers Licensed under the 3-clause BSD License. <https://numpy.org/doc/1.23/reference/generated/numpy.fft.irfftn.htmlnumpy.fft.hfft ============== fft.hfft(*a*, *n=None*, *axis=- 1*, *norm=None*)[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/fft/_pocketfft.py#L517-L612) Compute the FFT of a signal that has Hermitian symmetry, i.e., a real spectrum. Parameters **a**array_like The input array. **n**int, optional Length of the transformed axis of the output. For `n` output points, `n//2 + 1` input points are necessary. If the input is longer than this, it is cropped. If it is shorter than this, it is padded with zeros. If `n` is not given, it is taken to be `2*(m-1)` where `m` is the length of the input along the axis specified by `axis`. **axis**int, optional Axis over which to compute the FFT. If not given, the last axis is used. 
**norm**{“backward”, “ortho”, “forward”}, optional New in version 1.10.0. Normalization mode (see [`numpy.fft`](../routines.fft#module-numpy.fft "numpy.fft")). Default is “backward”. Indicates which direction of the forward/backward pair of transforms is scaled and with what normalization factor. New in version 1.20.0: The “backward”, “forward” values were added. Returns **out**ndarray The truncated or zero-padded input, transformed along the axis indicated by `axis`, or the last one if `axis` is not specified. The length of the transformed axis is `n`, or, if `n` is not given, `2*m - 2` where `m` is the length of the transformed axis of the input. To get an odd number of output points, `n` must be specified, for instance as `2*m - 1` in the typical case. Raises IndexError If `axis` is not a valid axis of `a`. See also [`rfft`](numpy.fft.rfft#numpy.fft.rfft "numpy.fft.rfft") Compute the one-dimensional FFT for real input. [`ihfft`](numpy.fft.ihfft#numpy.fft.ihfft "numpy.fft.ihfft") The inverse of [`hfft`](#numpy.fft.hfft "numpy.fft.hfft"). #### Notes [`hfft`](#numpy.fft.hfft "numpy.fft.hfft")/[`ihfft`](numpy.fft.ihfft#numpy.fft.ihfft "numpy.fft.ihfft") are a pair analogous to [`rfft`](numpy.fft.rfft#numpy.fft.rfft "numpy.fft.rfft")/[`irfft`](numpy.fft.irfft#numpy.fft.irfft "numpy.fft.irfft"), but for the opposite case: here the signal has Hermitian symmetry in the time domain and is real in the frequency domain. So here it’s [`hfft`](#numpy.fft.hfft "numpy.fft.hfft") for which you must supply the length of the result if it is to be odd. * even: `ihfft(hfft(a, 2*len(a) - 2)) == a`, within roundoff error, * odd: `ihfft(hfft(a, 2*len(a) - 1)) == a`, within roundoff error. The correct interpretation of the hermitian input depends on the length of the original data, as given by `n`. This is because each input shape could correspond to either an odd or even length signal.
By default, [`hfft`](#numpy.fft.hfft "numpy.fft.hfft") assumes an even output length which puts the last entry at the Nyquist frequency; aliasing with its symmetric counterpart. By Hermitian symmetry, the value is thus treated as purely real. To avoid losing information, the shape of the full signal **must** be given. #### Examples ``` >>> signal = np.array([1, 2, 3, 4, 3, 2]) >>> np.fft.fft(signal) array([15.+0.j, -4.+0.j, 0.+0.j, -1.-0.j, 0.+0.j, -4.+0.j]) # may vary >>> np.fft.hfft(signal[:4]) # Input first half of signal array([15., -4., 0., -1., 0., -4.]) >>> np.fft.hfft(signal, 6) # Input entire signal and truncate array([15., -4., 0., -1., 0., -4.]) ``` ``` >>> signal = np.array([[1, 1.j], [-1.j, 2]]) >>> np.conj(signal.T) - signal # check Hermitian symmetry array([[ 0.-0.j, -0.+0.j], # may vary [ 0.+0.j, 0.-0.j]]) >>> freq_spectrum = np.fft.hfft(signal) >>> freq_spectrum array([[ 1., 1.], [ 2., -2.]]) ``` © 2005–2022 NumPy Developers Licensed under the 3-clause BSD License. <https://numpy.org/doc/1.23/reference/generated/numpy.fft.hfft.htmlnumpy.fft.ihfft =============== fft.ihfft(*a*, *n=None*, *axis=- 1*, *norm=None*)[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/fft/_pocketfft.py#L615-L679) Compute the inverse FFT of a signal that has Hermitian symmetry. Parameters **a**array_like Input array. **n**int, optional Length of the inverse FFT, the number of points along transformation axis in the input to use. If `n` is smaller than the length of the input, the input is cropped. If it is larger, the input is padded with zeros. If `n` is not given, the length of the input along the axis specified by `axis` is used. **axis**int, optional Axis over which to compute the inverse FFT. If not given, the last axis is used. **norm**{“backward”, “ortho”, “forward”}, optional New in version 1.10.0. Normalization mode (see [`numpy.fft`](../routines.fft#module-numpy.fft "numpy.fft")). Default is “backward”. 
Indicates which direction of the forward/backward pair of transforms is scaled and with what normalization factor. New in version 1.20.0: The “backward”, “forward” values were added. Returns **out**complex ndarray The truncated or zero-padded input, transformed along the axis indicated by `axis`, or the last one if `axis` is not specified. The length of the transformed axis is `n//2 + 1`. See also [`hfft`](numpy.fft.hfft#numpy.fft.hfft "numpy.fft.hfft"), [`irfft`](numpy.fft.irfft#numpy.fft.irfft "numpy.fft.irfft") #### Notes [`hfft`](numpy.fft.hfft#numpy.fft.hfft "numpy.fft.hfft")/[`ihfft`](#numpy.fft.ihfft "numpy.fft.ihfft") are a pair analogous to [`rfft`](numpy.fft.rfft#numpy.fft.rfft "numpy.fft.rfft")/[`irfft`](numpy.fft.irfft#numpy.fft.irfft "numpy.fft.irfft"), but for the opposite case: here the signal has Hermitian symmetry in the time domain and is real in the frequency domain. So here it’s [`hfft`](numpy.fft.hfft#numpy.fft.hfft "numpy.fft.hfft") for which you must supply the length of the result if it is to be odd: * even: `ihfft(hfft(a, 2*len(a) - 2)) == a`, within roundoff error, * odd: `ihfft(hfft(a, 2*len(a) - 1)) == a`, within roundoff error. #### Examples ``` >>> spectrum = np.array([ 15, -4, 0, -1, 0, -4]) >>> np.fft.ifft(spectrum) array([1.+0.j, 2.+0.j, 3.+0.j, 4.+0.j, 3.+0.j, 2.+0.j]) # may vary >>> np.fft.ihfft(spectrum) array([ 1.-0.j, 2.-0.j, 3.-0.j, 4.-0.j]) # may vary ``` © 2005–2022 NumPy Developers Licensed under the 3-clause BSD License. <https://numpy.org/doc/1.23/reference/generated/numpy.fft.ihfft.htmlnumpy.fft.fftfreq ================= fft.fftfreq(*n*, *d=1.0*)[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/fft/helper.py#L123-L169) Return the Discrete Fourier Transform sample frequencies. The returned float array `f` contains the frequency bin centers in cycles per unit of the sample spacing (with zero at the start). For instance, if the sample spacing is in seconds, then the frequency unit is cycles/second. 
Given a window length `n` and a sample spacing `d`: ``` f = [0, 1, ..., n/2-1, -n/2, ..., -1] / (d*n) if n is even f = [0, 1, ..., (n-1)/2, -(n-1)/2, ..., -1] / (d*n) if n is odd ``` Parameters **n**int Window length. **d**scalar, optional Sample spacing (inverse of the sampling rate). Defaults to 1. Returns **f**ndarray Array of length `n` containing the sample frequencies. #### Examples ``` >>> signal = np.array([-2, 8, 6, 4, 1, 0, 3, 5], dtype=float) >>> fourier = np.fft.fft(signal) >>> n = signal.size >>> timestep = 0.1 >>> freq = np.fft.fftfreq(n, d=timestep) >>> freq array([ 0. , 1.25, 2.5 , ..., -3.75, -2.5 , -1.25]) ``` © 2005–2022 NumPy Developers Licensed under the 3-clause BSD License. <https://numpy.org/doc/1.23/reference/generated/numpy.fft.fftfreq.htmlnumpy.fft.rfftfreq ================== fft.rfftfreq(*n*, *d=1.0*)[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/fft/helper.py#L172-L221) Return the Discrete Fourier Transform sample frequencies (for usage with rfft, irfft). The returned float array `f` contains the frequency bin centers in cycles per unit of the sample spacing (with zero at the start). For instance, if the sample spacing is in seconds, then the frequency unit is cycles/second. Given a window length `n` and a sample spacing `d`: ``` f = [0, 1, ..., n/2-1, n/2] / (d*n) if n is even f = [0, 1, ..., (n-1)/2-1, (n-1)/2] / (d*n) if n is odd ``` Unlike [`fftfreq`](numpy.fft.fftfreq#numpy.fft.fftfreq "numpy.fft.fftfreq") (but like [`scipy.fftpack.rfftfreq`](https://docs.scipy.org/doc/scipy/reference/generated/scipy.fftpack.rfftfreq.html#scipy.fftpack.rfftfreq "(in SciPy v1.8.1)")) the Nyquist frequency component is considered to be positive. Parameters **n**int Window length. **d**scalar, optional Sample spacing (inverse of the sampling rate). Defaults to 1. Returns **f**ndarray Array of length `n//2 + 1` containing the sample frequencies. 
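The length and sign conventions above can be checked directly; a small sketch comparing `rfftfreq` with `fftfreq` (window length and spacing chosen arbitrarily):

```python
import numpy as np

# For even n, rfftfreq keeps only the non-negative bins, and the Nyquist
# bin appears with a positive sign, whereas fftfreq reports it as negative.
n, d = 8, 0.1
full = np.fft.fftfreq(n, d=d)
half = np.fft.rfftfreq(n, d=d)

assert half.size == n // 2 + 1                      # 5 bins
assert np.allclose(half[:-1], full[: n // 2])       # shared positive bins
assert half[-1] == -full[n // 2]                    # Nyquist sign flipped
```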
#### Examples ``` >>> signal = np.array([-2, 8, 6, 4, 1, 0, 3, 5, -3, 4], dtype=float) >>> fourier = np.fft.rfft(signal) >>> n = signal.size >>> sample_rate = 100 >>> freq = np.fft.fftfreq(n, d=1./sample_rate) >>> freq array([ 0., 10., 20., ..., -30., -20., -10.]) >>> freq = np.fft.rfftfreq(n, d=1./sample_rate) >>> freq array([ 0., 10., 20., 30., 40., 50.]) ``` © 2005–2022 NumPy Developers Licensed under the 3-clause BSD License. <https://numpy.org/doc/1.23/reference/generated/numpy.fft.rfftfreq.htmlnumpy.fft.fftshift ================== fft.fftshift(*x*, *axes=None*)[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/fft/helper.py#L19-L73) Shift the zero-frequency component to the center of the spectrum. This function swaps half-spaces for all axes listed (defaults to all). Note that `y[0]` is the Nyquist component only if `len(x)` is even. Parameters **x**array_like Input array. **axes**int or shape tuple, optional Axes over which to shift. Default is None, which shifts all axes. Returns **y**ndarray The shifted array. See also [`ifftshift`](numpy.fft.ifftshift#numpy.fft.ifftshift "numpy.fft.ifftshift") The inverse of [`fftshift`](#numpy.fft.fftshift "numpy.fft.fftshift"). #### Examples ``` >>> freqs = np.fft.fftfreq(10, 0.1) >>> freqs array([ 0., 1., 2., ..., -3., -2., -1.]) >>> np.fft.fftshift(freqs) array([-5., -4., -3., -2., -1., 0., 1., 2., 3., 4.]) ``` Shift the zero-frequency component only along the second axis: ``` >>> freqs = np.fft.fftfreq(9, d=1./9).reshape(3, 3) >>> freqs array([[ 0., 1., 2.], [ 3., 4., -4.], [-3., -2., -1.]]) >>> np.fft.fftshift(freqs, axes=(1,)) array([[ 2., 0., 1.], [-4., 3., 4.], [-1., -3., -2.]]) ``` © 2005–2022 NumPy Developers Licensed under the 3-clause BSD License. 
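A common pattern worth noting alongside the examples above: pair `fftshift` on the spectrum with `fftshift` on the frequency axis so both run monotonically from negative to positive frequencies. A minimal sketch (signal and sample rate chosen arbitrarily):

```python
import numpy as np

# A 2.5 Hz tone sampled at 10 Hz; shift both the spectrum and the
# frequency axis so they line up on a centered, ascending axis.
t = np.arange(8) * 0.1
x = np.sin(2 * np.pi * 2.5 * t)
spec = np.fft.fftshift(np.fft.fft(x))
freq = np.fft.fftshift(np.fft.fftfreq(t.size, d=0.1))

assert np.all(np.diff(freq) > 0)          # frequencies now ascending
peak = freq[np.argmax(np.abs(spec))]      # strongest bin
assert np.isclose(abs(peak), 2.5)         # tone recovered at +/- 2.5 Hz
```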
<https://numpy.org/doc/1.23/reference/generated/numpy.fft.fftshift.htmlnumpy.fft.ifftshift =================== fft.ifftshift(*x*, *axes=None*)[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/fft/helper.py#L76-L120) The inverse of [`fftshift`](numpy.fft.fftshift#numpy.fft.fftshift "numpy.fft.fftshift"). Although identical for even-length `x`, the functions differ by one sample for odd-length `x`. Parameters **x**array_like Input array. **axes**int or shape tuple, optional Axes over which to calculate. Defaults to None, which shifts all axes. Returns **y**ndarray The shifted array. See also [`fftshift`](numpy.fft.fftshift#numpy.fft.fftshift "numpy.fft.fftshift") Shift zero-frequency component to the center of the spectrum. #### Examples ``` >>> freqs = np.fft.fftfreq(9, d=1./9).reshape(3, 3) >>> freqs array([[ 0., 1., 2.], [ 3., 4., -4.], [-3., -2., -1.]]) >>> np.fft.ifftshift(np.fft.fftshift(freqs)) array([[ 0., 1., 2.], [ 3., 4., -4.], [-3., -2., -1.]]) ``` © 2005–2022 NumPy Developers Licensed under the 3-clause BSD License. <https://numpy.org/doc/1.23/reference/generated/numpy.fft.ifftshift.htmlnumpy.lookfor ============= numpy.lookfor(*what*, *module=None*, *import_modules=True*, *regenerate=False*, *output=None*)[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/lib/utils.py#L694-L816) Do a keyword search on docstrings. A list of objects that matched the search is displayed, sorted by relevance. All given keywords need to be found in the docstring for it to be returned as a result, but the order does not matter. Parameters **what**str String containing words to look for. **module**str or list, optional Name of module(s) whose docstrings to go through. **import_modules**bool, optional Whether to import sub-modules in packages. Default is True. **regenerate**bool, optional Whether to re-generate the docstring cache. Default is False. **output**file-like, optional File-like object to write the output to. If omitted, use a pager. 
See also [`source`](numpy.source#numpy.source "numpy.source"), [`info`](numpy.info#numpy.info "numpy.info") #### Notes Relevance is determined only roughly, by checking if the keywords occur in the function name, at the start of a docstring, etc. #### Examples ``` >>> np.lookfor('binary representation') Search results for 'binary representation' ------------------------------------------ numpy.binary_repr Return the binary representation of the input number as a string. numpy.core.setup_common.long_double_representation Given a binary dump as given by GNU od -b, look for long double numpy.base_repr Return a string representation of a number in the given base system. ... ``` © 2005–2022 NumPy Developers Licensed under the 3-clause BSD License. <https://numpy.org/doc/1.23/reference/generated/numpy.lookfor.htmlnumpy.info ========== numpy.info(*object=None*, *maxwidth=76*, *output=None*, *toplevel='numpy'*)[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/lib/utils.py#L485-L632) Get help information for a function, class, or module. Parameters **object**object or str, optional Input object or name to get information about. If [`object`](https://docs.python.org/3/library/functions.html#object "(in Python v3.10)") is a numpy object, its docstring is given. If it is a string, available modules are searched for matching objects. If None, information about [`info`](#numpy.info "numpy.info") itself is returned. **maxwidth**int, optional Printing width. **output**file like object, optional File like object that the output is written to, default is `None`, in which case `sys.stdout` will be used. The object has to be opened in ‘w’ or ‘a’ mode. **toplevel**str, optional Start search at this level. See also [`source`](numpy.source#numpy.source "numpy.source"), [`lookfor`](numpy.lookfor#numpy.lookfor "numpy.lookfor") #### Notes When used interactively with an object, `np.info(obj)` is equivalent to `help(obj)` on the Python prompt or `obj?` on the IPython prompt. 
#### Examples ``` >>> np.info(np.polyval) polyval(p, x) Evaluate the polynomial p at x. ... ``` When using a string for [`object`](https://docs.python.org/3/library/functions.html#object "(in Python v3.10)") it is possible to get multiple results. ``` >>> np.info('fft') *** Found in numpy *** Core FFT routines ... *** Found in numpy.fft *** fft(a, n=None, axis=-1) ... *** Repeat reference found in numpy.fft.fftpack *** *** Total of 3 references found. *** ``` © 2005–2022 NumPy Developers Licensed under the 3-clause BSD License. <https://numpy.org/doc/1.23/reference/generated/numpy.info.htmlnumpy.load ========== numpy.load(*file*, *mmap_mode=None*, *allow_pickle=False*, *fix_imports=True*, *encoding='ASCII'*)[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/lib/npyio.py#L254-L424) Load arrays or pickled objects from `.npy`, `.npz` or pickled files. Warning Loading files that contain object arrays uses the `pickle` module, which is not secure against erroneous or maliciously constructed data. Consider passing `allow_pickle=False` to load data that is known not to contain object arrays for the safer handling of untrusted sources. Parameters **file**file-like object, string, or pathlib.Path The file to read. File-like objects must support the `seek()` and `read()` methods and must always be opened in binary mode. Pickled files require that the file-like object support the `readline()` method as well. **mmap_mode**{None, ‘r+’, ‘r’, ‘w+’, ‘c’}, optional If not None, then memory-map the file, using the given mode (see [`numpy.memmap`](numpy.memmap#numpy.memmap "numpy.memmap") for a detailed description of the modes). A memory-mapped array is kept on disk. However, it can be accessed and sliced like any ndarray. Memory mapping is especially useful for accessing small fragments of large files without reading the entire file into memory. **allow_pickle**bool, optional Allow loading pickled object arrays stored in npy files. 
Reasons for disallowing pickles include security, as loading pickled data can execute arbitrary code. If pickles are disallowed, loading object arrays will fail. Default: False Changed in version 1.16.3: Made default False in response to CVE-2019-6446. **fix_imports**bool, optional Only useful when loading Python 2 generated pickled files on Python 3, which includes npy/npz files containing object arrays. If `fix_imports` is True, pickle will try to map the old Python 2 names to the new names used in Python 3. **encoding**str, optional What encoding to use when reading Python 2 strings. Only useful when loading Python 2 generated pickled files in Python 3, which includes npy/npz files containing object arrays. Values other than ‘latin1’, ‘ASCII’, and ‘bytes’ are not allowed, as they can corrupt numerical data. Default: ‘ASCII’ Returns **result**array, tuple, dict, etc. Data stored in the file. For `.npz` files, the returned instance of NpzFile class must be closed to avoid leaking file descriptors. Raises OSError If the input file does not exist or cannot be read. UnpicklingError If `allow_pickle=True`, but the file cannot be loaded as a pickle. ValueError The file contains an object array, but `allow_pickle=False` given. See also [`save`](numpy.save#numpy.save "numpy.save"), [`savez`](numpy.savez#numpy.savez "numpy.savez"), [`savez_compressed`](numpy.savez_compressed#numpy.savez_compressed "numpy.savez_compressed"), [`loadtxt`](numpy.loadtxt#numpy.loadtxt "numpy.loadtxt") [`memmap`](numpy.memmap#numpy.memmap "numpy.memmap") Create a memory-map to an array stored in a file on disk. [`lib.format.open_memmap`](numpy.lib.format.open_memmap#numpy.lib.format.open_memmap "numpy.lib.format.open_memmap") Create or load a memory-mapped `.npy` file. #### Notes * If the file contains pickle data, then whatever object is stored in the pickle is returned. * If the file is a `.npy` file, then a single array is returned. 
* If the file is a `.npz` file, then a dictionary-like object is returned, containing `{filename: array}` key-value pairs, one for each file in the archive. * If the file is a `.npz` file, the returned value supports the context manager protocol in a similar fashion to the open function: ``` with load('foo.npz') as data: a = data['a'] ``` The underlying file descriptor is closed when exiting the ‘with’ block. #### Examples Store data to disk, and load it again: ``` >>> np.save('/tmp/123', np.array([[1, 2, 3], [4, 5, 6]])) >>> np.load('/tmp/123.npy') array([[1, 2, 3], [4, 5, 6]]) ``` Store compressed data to disk, and load it again: ``` >>> a=np.array([[1, 2, 3], [4, 5, 6]]) >>> b=np.array([1, 2]) >>> np.savez('/tmp/123.npz', a=a, b=b) >>> data = np.load('/tmp/123.npz') >>> data['a'] array([[1, 2, 3], [4, 5, 6]]) >>> data['b'] array([1, 2]) >>> data.close() ``` Mem-map the stored array, and then access the second row directly from disk: ``` >>> X = np.load('/tmp/123.npy', mmap_mode='r') >>> X[1, :] memmap([4, 5, 6]) ``` © 2005–2022 NumPy Developers Licensed under the 3-clause BSD License. <https://numpy.org/doc/1.23/reference/generated/numpy.load.htmlnumpy.save ========== numpy.save(*file*, *arr*, *allow_pickle=True*, *fix_imports=True*)[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/lib/npyio.py#L431-L503) Save an array to a binary file in NumPy `.npy` format. Parameters **file**file, str, or pathlib.Path File or filename to which the data is saved. If file is a file-object, then the filename is unchanged. If file is a string or Path, a `.npy` extension will be appended to the filename if it does not already have one. **arr**array_like Array data to be saved. **allow_pickle**bool, optional Allow saving object arrays using Python pickles. 
Reasons for disallowing pickles include security (loading pickled data can execute arbitrary code) and portability (pickled objects may not be loadable on different Python installations, for example if the stored objects require libraries that are not available, and not all pickled data is compatible between Python 2 and Python 3). Default: True **fix_imports**bool, optional Only useful in forcing objects in object arrays on Python 3 to be pickled in a Python 2 compatible way. If `fix_imports` is True, pickle will try to map the new Python 3 names to the old module names used in Python 2, so that the pickle data stream is readable with Python 2. See also [`savez`](numpy.savez#numpy.savez "numpy.savez") Save several arrays into a `.npz` archive [`savetxt`](numpy.savetxt#numpy.savetxt "numpy.savetxt"), [`load`](numpy.load#numpy.load "numpy.load") #### Notes For a description of the `.npy` format, see [`numpy.lib.format`](numpy.lib.format#module-numpy.lib.format "numpy.lib.format"). Any data saved to the file is appended to the end of the file. #### Examples ``` >>> from tempfile import TemporaryFile >>> outfile = TemporaryFile() ``` ``` >>> x = np.arange(10) >>> np.save(outfile, x) ``` ``` >>> _ = outfile.seek(0) # Only needed here to simulate closing & reopening file >>> np.load(outfile) array([0, 1, 2, 3, 4, 5, 6, 7, 8, 9]) ``` ``` >>> with open('test.npy', 'wb') as f: ... np.save(f, np.array([1, 2])) ... np.save(f, np.array([1, 3])) >>> with open('test.npy', 'rb') as f: ... a = np.load(f) ... b = np.load(f) >>> print(a, b) # [1 2] [1 3] ``` © 2005–2022 NumPy Developers Licensed under the 3-clause BSD License. <https://numpy.org/doc/1.23/reference/generated/numpy.save.htmlnumpy.savez =========== numpy.savez(*file*, **args*, ***kwds*)[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/lib/npyio.py#L511-L595) Save several arrays into a single file in uncompressed `.npz` format. 
Provide arrays as keyword arguments to store them under the corresponding name in the output file: `savez(fn, x=x, y=y)`. If arrays are specified as positional arguments, i.e., `savez(fn, x, y)`, their names will be `arr_0`, `arr_1`, etc. Parameters **file**str or file Either the filename (string) or an open file (file-like object) where the data will be saved. If file is a string or a Path, the `.npz` extension will be appended to the filename if it is not already there. **args**Arguments, optional Arrays to save to the file. Please use keyword arguments (see `kwds` below) to assign names to arrays. Arrays specified as args will be named “arr_0”, “arr_1”, and so on. **kwds**Keyword arguments, optional Arrays to save to the file. Each array will be saved to the output file with its corresponding keyword name. Returns None See also [`save`](numpy.save#numpy.save "numpy.save") Save a single array to a binary file in NumPy format. [`savetxt`](numpy.savetxt#numpy.savetxt "numpy.savetxt") Save an array to a file as plain text. [`savez_compressed`](numpy.savez_compressed#numpy.savez_compressed "numpy.savez_compressed") Save several arrays into a compressed `.npz` archive #### Notes The `.npz` file format is a zipped archive of files named after the variables they contain. The archive is not compressed and each file in the archive contains one variable in `.npy` format. For a description of the `.npy` format, see [`numpy.lib.format`](numpy.lib.format#module-numpy.lib.format "numpy.lib.format"). When opening the saved `.npz` file with [`load`](numpy.load#numpy.load "numpy.load") a `NpzFile` object is returned. This is a dictionary-like object which can be queried for its list of arrays (with the `.files` attribute), and for the arrays themselves. Keys passed in `kwds` are used as filenames inside the ZIP archive. Therefore, keys should be valid filenames; e.g., avoid keys that begin with `/` or contain `.`. 
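Because the keyword names become member filenames inside the archive, they can be inspected directly with the standard `zipfile` module. A minimal sketch, using an in-memory buffer instead of a file on disk:

```python
import io
import zipfile

import numpy as np

buf = io.BytesIO()
np.savez(buf, x=np.arange(3), y=np.zeros(2))  # keys name the arrays
buf.seek(0)

# each keyword argument is stored as '<key>.npy' inside the ZIP archive
names = sorted(zipfile.ZipFile(buf).namelist())
print(names)  # ['x.npy', 'y.npy']
```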
When naming variables with keyword arguments, it is not possible to name a variable `file`, as this would cause the `file` argument to be defined twice in the call to `savez`. #### Examples ``` >>> from tempfile import TemporaryFile >>> outfile = TemporaryFile() >>> x = np.arange(10) >>> y = np.sin(x) ``` Using [`savez`](#numpy.savez "numpy.savez") with *args, the arrays are saved with default names. ``` >>> np.savez(outfile, x, y) >>> _ = outfile.seek(0) # Only needed here to simulate closing & reopening file >>> npzfile = np.load(outfile) >>> npzfile.files ['arr_0', 'arr_1'] >>> npzfile['arr_0'] array([0, 1, 2, 3, 4, 5, 6, 7, 8, 9]) ``` Using [`savez`](#numpy.savez "numpy.savez") with **kwds, the arrays are saved with the keyword names. ``` >>> outfile = TemporaryFile() >>> np.savez(outfile, x=x, y=y) >>> _ = outfile.seek(0) >>> npzfile = np.load(outfile) >>> sorted(npzfile.files) ['x', 'y'] >>> npzfile['x'] array([0, 1, 2, 3, 4, 5, 6, 7, 8, 9]) ``` © 2005–2022 NumPy Developers Licensed under the 3-clause BSD License. <https://numpy.org/doc/1.23/reference/generated/numpy.savez.htmlnumpy.fromregex =============== numpy.fromregex(*file*, *regexp*, *dtype*, *encoding=None*)[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/lib/npyio.py#L1568-L1664) Construct an array from a text file, using regular expression parsing. The returned array is always a structured array, and is constructed from all matches of the regular expression in the file. Groups in the regular expression are converted to fields of the structured array. Parameters **file**path or file Filename or file object to read. Changed in version 1.22.0: Now accepts [`os.PathLike`](https://docs.python.org/3/library/os.html#os.PathLike "(in Python v3.10)") implementations. **regexp**str or regexp Regular expression used to parse the file. Groups in the regular expression correspond to fields in the dtype. **dtype**dtype or list of dtypes Dtype for the structured array; must be a structured datatype. 
**encoding**str, optional Encoding used to decode the inputfile. Does not apply to input streams. New in version 1.14.0. Returns **output**ndarray The output array, containing the part of the content of `file` that was matched by `regexp`. `output` is always a structured array. Raises TypeError When [`dtype`](numpy.dtype#numpy.dtype "numpy.dtype") is not a valid dtype for a structured array. See also [`fromstring`](numpy.fromstring#numpy.fromstring "numpy.fromstring"), [`loadtxt`](numpy.loadtxt#numpy.loadtxt "numpy.loadtxt") #### Notes Dtypes for structured arrays can be specified in several forms, but all forms specify at least the data type and field name. For details see `basics.rec`. #### Examples ``` >>> from io import StringIO >>> text = StringIO("1312 foo\n1534 bar\n444 qux") ``` ``` >>> regexp = r"(\d+)\s+(...)" # match [digits, whitespace, anything] >>> output = np.fromregex(text, regexp, ... [('num', np.int64), ('key', 'S3')]) >>> output array([(1312, b'foo'), (1534, b'bar'), ( 444, b'qux')], dtype=[('num', '<i8'), ('key', 'S3')]) >>> output['num'] array([1312, 1534, 444]) ``` © 2005–2022 NumPy Developers Licensed under the 3-clause BSD License. <https://numpy.org/doc/1.23/reference/generated/numpy.fromregex.htmlnumpy.array2string ================== numpy.array2string(*a*, *max_line_width=None*, *precision=None*, *suppress_small=None*, *separator=' '*, *prefix=''*, *style=<no value>*, *formatter=None*, *threshold=None*, *edgeitems=None*, *sign=None*, *floatmode=None*, *suffix=''*, ***, *legacy=None*)[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/core/arrayprint.py#L561-L736) Return a string representation of an array. Parameters **a**ndarray Input array. **max_line_width**int, optional Inserts newlines if text is longer than `max_line_width`. Defaults to `numpy.get_printoptions()['linewidth']`. **precision**int or None, optional Floating point precision. Defaults to `numpy.get_printoptions()['precision']`. 
**suppress_small**bool, optional Represent numbers “very close” to zero as zero; default is False. Very close is defined by precision: if the precision is 8, e.g., numbers smaller (in absolute value) than 5e-9 are represented as zero. Defaults to `numpy.get_printoptions()['suppress']`. **separator**str, optional Inserted between elements. **prefix**str, optional **suffix**str, optional The length of the prefix and suffix strings are used to respectively align and wrap the output. An array is typically printed as: ``` prefix + array2string(a) + suffix ``` The output is left-padded by the length of the prefix string, and wrapping is forced at the column `max_line_width - len(suffix)`. It should be noted that the content of prefix and suffix strings are not included in the output. **style**_NoValue, optional Has no effect, do not use. Deprecated since version 1.14.0. **formatter**dict of callables, optional If not None, the keys should indicate the type(s) that the respective formatting function applies to. Callables should return a string. Types that are not specified (by their corresponding keys) are handled by the default formatters. 
Individual types for which a formatter can be set are: * ‘bool’ * ‘int’ * ‘timedelta’ : a [`numpy.timedelta64`](../arrays.scalars#numpy.timedelta64 "numpy.timedelta64") * ‘datetime’ : a [`numpy.datetime64`](../arrays.scalars#numpy.datetime64 "numpy.datetime64") * ‘float’ * ‘longfloat’ : 128-bit floats * ‘complexfloat’ * ‘longcomplexfloat’ : composed of two 128-bit floats * ‘void’ : type [`numpy.void`](../arrays.scalars#numpy.void "numpy.void") * ‘numpystr’ : types [`numpy.string_`](../arrays.scalars#numpy.string_ "numpy.string_") and [`numpy.unicode_`](../arrays.scalars#numpy.unicode_ "numpy.unicode_") Other keys that can be used to set a group of types at once are: * ‘all’ : sets all types * ‘int_kind’ : sets ‘int’ * ‘float_kind’ : sets ‘float’ and ‘longfloat’ * ‘complex_kind’ : sets ‘complexfloat’ and ‘longcomplexfloat’ * ‘str_kind’ : sets ‘numpystr’ **threshold**int, optional Total number of array elements which trigger summarization rather than full repr. Defaults to `numpy.get_printoptions()['threshold']`. **edgeitems**int, optional Number of array items in summary at beginning and end of each dimension. Defaults to `numpy.get_printoptions()['edgeitems']`. **sign**string, either ‘-’, ‘+’, or ‘ ‘, optional Controls printing of the sign of floating-point types. If ‘+’, always print the sign of positive values. If ‘ ‘, always prints a space (whitespace character) in the sign position of positive values. If ‘-’, omit the sign character of positive values. Defaults to `numpy.get_printoptions()['sign']`. **floatmode**str, optional Controls the interpretation of the `precision` option for floating-point types. Defaults to `numpy.get_printoptions()['floatmode']`. Can take the following values: * ‘fixed’: Always print exactly `precision` fractional digits, even if this would print more or fewer digits than necessary to specify the value uniquely. * ‘unique’: Print the minimum number of fractional digits necessary to represent each value uniquely. 
Different elements may have a different number of digits. The value of the `precision` option is ignored. * ‘maxprec’: Print at most `precision` fractional digits, but if an element can be uniquely represented with fewer digits only print it with that many. * ‘maxprec_equal’: Print at most `precision` fractional digits, but if every element in the array can be uniquely represented with an equal number of fewer digits, use that many digits for all elements. **legacy**string or `False`, optional If set to the string `‘1.13’` enables 1.13 legacy printing mode. This approximates numpy 1.13 print output by including a space in the sign position of floats and different behavior for 0d arrays. If set to `False`, disables legacy mode. Unrecognized strings will be ignored with a warning for forward compatibility. New in version 1.14.0. Returns **array_str**str String representation of the array. Raises TypeError if a callable in `formatter` does not return a string. See also [`array_str`](numpy.array_str#numpy.array_str "numpy.array_str"), [`array_repr`](numpy.array_repr#numpy.array_repr "numpy.array_repr"), [`set_printoptions`](numpy.set_printoptions#numpy.set_printoptions "numpy.set_printoptions"), [`get_printoptions`](numpy.get_printoptions#numpy.get_printoptions "numpy.get_printoptions") #### Notes If a formatter is specified for a certain type, the `precision` keyword is ignored for that type. This is a very flexible function; [`array_repr`](numpy.array_repr#numpy.array_repr "numpy.array_repr") and [`array_str`](numpy.array_str#numpy.array_str "numpy.array_str") are using [`array2string`](#numpy.array2string "numpy.array2string") internally so keywords with the same name should work identically in all three functions. #### Examples ``` >>> x = np.array([1e-16,1,2,3]) >>> np.array2string(x, precision=2, separator=',', ... suppress_small=True) '[0.,1.,2.,3.]' ``` ``` >>> x = np.arange(3.) 
>>> np.array2string(x, formatter={'float_kind':lambda x: "%.2f" % x}) '[0.00 1.00 2.00]' ``` ``` >>> x = np.arange(3) >>> np.array2string(x, formatter={'int':lambda x: hex(x)}) '[0x0 0x1 0x2]' ``` © 2005–2022 NumPy Developers Licensed under the 3-clause BSD License. <https://numpy.org/doc/1.23/reference/generated/numpy.array2string.htmlnumpy.array_repr ================= numpy.array_repr(*arr*, *max_line_width=None*, *precision=None*, *suppress_small=None*)[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/core/arrayprint.py#L1519-L1565) Return the string representation of an array. Parameters **arr**ndarray Input array. **max_line_width**int, optional Inserts newlines if text is longer than `max_line_width`. Defaults to `numpy.get_printoptions()['linewidth']`. **precision**int, optional Floating point precision. Defaults to `numpy.get_printoptions()['precision']`. **suppress_small**bool, optional Represent numbers “very close” to zero as zero; default is False. Very close is defined by precision: if the precision is 8, e.g., numbers smaller (in absolute value) than 5e-9 are represented as zero. Defaults to `numpy.get_printoptions()['suppress']`. Returns **string**str The string representation of an array. See also [`array_str`](numpy.array_str#numpy.array_str "numpy.array_str"), [`array2string`](numpy.array2string#numpy.array2string "numpy.array2string"), [`set_printoptions`](numpy.set_printoptions#numpy.set_printoptions "numpy.set_printoptions") #### Examples ``` >>> np.array_repr(np.array([1,2])) 'array([1, 2])' >>> np.array_repr(np.ma.array([0.])) 'MaskedArray([0.])' >>> np.array_repr(np.array([], np.int32)) 'array([], dtype=int32)' ``` ``` >>> x = np.array([1e-6, 4e-7, 2, 3]) >>> np.array_repr(x, precision=6, suppress_small=True) 'array([0.000001, 0. , 2. , 3. ])' ``` © 2005–2022 NumPy Developers Licensed under the 3-clause BSD License. 
numpy.array_str
================

numpy.array_str(*a*, *max_line_width=None*, *precision=None*, *suppress_small=None*)[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/core/arrayprint.py#L1600-L1637) Return a string representation of the data in an array. The data in the array is returned as a single string. This function is similar to [`array_repr`](numpy.array_repr#numpy.array_repr "numpy.array_repr"), the difference being that [`array_repr`](numpy.array_repr#numpy.array_repr "numpy.array_repr") also returns information on the kind of array and its data type. Parameters **a**ndarray Input array. **max_line_width**int, optional Inserts newlines if text is longer than `max_line_width`. Defaults to `numpy.get_printoptions()['linewidth']`. **precision**int, optional Floating point precision. Defaults to `numpy.get_printoptions()['precision']`. **suppress_small**bool, optional Represent numbers “very close” to zero as zero; default is False. Very close is defined by precision: if the precision is 8, e.g., numbers smaller (in absolute value) than 5e-9 are represented as zero. Defaults to `numpy.get_printoptions()['suppress']`. See also [`array2string`](numpy.array2string#numpy.array2string "numpy.array2string"), [`array_repr`](numpy.array_repr#numpy.array_repr "numpy.array_repr"), [`set_printoptions`](numpy.set_printoptions#numpy.set_printoptions "numpy.set_printoptions") #### Examples ``` >>> np.array_str(np.arange(3)) '[0 1 2]' ```

numpy.lib.format.open_memmap
=============================

lib.format.open_memmap(*filename*, *mode='r+'*, *dtype=None*, *shape=None*, *fortran_order=False*, *version=None*)[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/lib/format.py#L790-L888) Open a .npy file as a memory-mapped array.
This may be used to read an existing file or create a new one. Parameters **filename**str or path-like The name of the file on disk. This may *not* be a file-like object. **mode**str, optional The mode in which to open the file; the default is ‘r+’. In addition to the standard file modes, ‘c’ is also accepted to mean “copy on write.” See [`memmap`](numpy.memmap#numpy.memmap "numpy.memmap") for the available mode strings. **dtype**data-type, optional The data type of the array if we are creating a new file in “write” mode, if not, [`dtype`](numpy.dtype#numpy.dtype "numpy.dtype") is ignored. The default value is None, which results in a data-type of [`float64`](../arrays.scalars#numpy.float64 "numpy.float64"). **shape**tuple of int The shape of the array if we are creating a new file in “write” mode, in which case this parameter is required. Otherwise, this parameter is ignored and is thus optional. **fortran_order**bool, optional Whether the array should be Fortran-contiguous (True) or C-contiguous (False, the default) if we are creating a new file in “write” mode. **version**tuple of int (major, minor) or None If the mode is a “write” mode, then this is the version of the file format used to create the file. None means use the oldest supported version that is able to store the data. Default: None Returns **marray**memmap The memory-mapped array. Raises ValueError If the data or the mode is invalid. OSError If the file is not found or cannot be opened correctly. See also [`numpy.memmap`](numpy.memmap#numpy.memmap "numpy.memmap")
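A minimal create-and-reopen round trip, assuming write access to a temporary directory:

```python
import os
import tempfile

import numpy as np
from numpy.lib.format import open_memmap

path = os.path.join(tempfile.mkdtemp(), 'data.npy')

# create a new .npy file backed by a memory map ('w+' is a "write" mode,
# so dtype and shape are required)
m = open_memmap(path, mode='w+', dtype=np.float64, shape=(3, 4))
m[:] = 1.0
m.flush()
del m

# reopen read-only; the data is paged in from disk on access
r = open_memmap(path, mode='r')
print(r.shape, float(r.sum()))  # (3, 4) 12.0
```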
<https://numpy.org/doc/1.23/reference/generated/numpy.lib.format.open_memmap.htmlnumpy.set_printoptions ======================= numpy.set_printoptions(*precision=None*, *threshold=None*, *edgeitems=None*, *linewidth=None*, *suppress=None*, *nanstr=None*, *infstr=None*, *formatter=None*, *sign=None*, *floatmode=None*, ***, *legacy=None*)[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/core/arrayprint.py#L116-L292) Set printing options. These options determine the way floating point numbers, arrays and other NumPy objects are displayed. Parameters **precision**int or None, optional Number of digits of precision for floating point output (default 8). May be None if `floatmode` is not `fixed`, to print as many digits as necessary to uniquely specify the value. **threshold**int, optional Total number of array elements which trigger summarization rather than full repr (default 1000). To always use the full repr without summarization, pass [`sys.maxsize`](https://docs.python.org/3/library/sys.html#sys.maxsize "(in Python v3.10)"). **edgeitems**int, optional Number of array items in summary at beginning and end of each dimension (default 3). **linewidth**int, optional The number of characters per line for the purpose of inserting line breaks (default 75). **suppress**bool, optional If True, always print floating point numbers using fixed point notation, in which case numbers equal to zero in the current precision will print as zero. If False, then scientific notation is used when absolute value of the smallest number is < 1e-4 or the ratio of the maximum absolute value to the minimum is > 1e3. The default is False. **nanstr**str, optional String representation of floating point not-a-number (default nan). **infstr**str, optional String representation of floating point infinity (default inf). **sign**string, either ‘-’, ‘+’, or ‘ ‘, optional Controls printing of the sign of floating-point types. If ‘+’, always print the sign of positive values. 
If ‘ ‘, always prints a space (whitespace character) in the sign position of positive values. If ‘-’, omit the sign character of positive values. (default ‘-‘) **formatter**dict of callables, optional If not None, the keys should indicate the type(s) that the respective formatting function applies to. Callables should return a string. Types that are not specified (by their corresponding keys) are handled by the default formatters. Individual types for which a formatter can be set are: * ‘bool’ * ‘int’ * ‘timedelta’ : a [`numpy.timedelta64`](../arrays.scalars#numpy.timedelta64 "numpy.timedelta64") * ‘datetime’ : a [`numpy.datetime64`](../arrays.scalars#numpy.datetime64 "numpy.datetime64") * ‘float’ * ‘longfloat’ : 128-bit floats * ‘complexfloat’ * ‘longcomplexfloat’ : composed of two 128-bit floats * ‘numpystr’ : types [`numpy.string_`](../arrays.scalars#numpy.string_ "numpy.string_") and [`numpy.unicode_`](../arrays.scalars#numpy.unicode_ "numpy.unicode_") * ‘object’ : `np.object_` arrays Other keys that can be used to set a group of types at once are: * ‘all’ : sets all types * ‘int_kind’ : sets ‘int’ * ‘float_kind’ : sets ‘float’ and ‘longfloat’ * ‘complex_kind’ : sets ‘complexfloat’ and ‘longcomplexfloat’ * ‘str_kind’ : sets ‘numpystr’ **floatmode**str, optional Controls the interpretation of the `precision` option for floating-point types. Can take the following values (default maxprec_equal): * ‘fixed’: Always print exactly `precision` fractional digits, even if this would print more or fewer digits than necessary to specify the value uniquely. * ‘unique’: Print the minimum number of fractional digits necessary to represent each value uniquely. Different elements may have a different number of digits. The value of the `precision` option is ignored. * ‘maxprec’: Print at most `precision` fractional digits, but if an element can be uniquely represented with fewer digits only print it with that many. 
* ‘maxprec_equal’: Print at most `precision` fractional digits, but if every element in the array can be uniquely represented with an equal number of fewer digits, use that many digits for all elements. **legacy**string or `False`, optional If set to the string `‘1.13’` enables 1.13 legacy printing mode. This approximates numpy 1.13 print output by including a space in the sign position of floats and different behavior for 0d arrays. This also enables 1.21 legacy printing mode (described below). If set to the string `‘1.21’` enables 1.21 legacy printing mode. This approximates numpy 1.21 print output of complex structured dtypes by not inserting spaces after commas that separate fields and after colons. If set to `False`, disables legacy mode. Unrecognized strings will be ignored with a warning for forward compatibility. New in version 1.14.0. Changed in version 1.22.0. See also [`get_printoptions`](numpy.get_printoptions#numpy.get_printoptions "numpy.get_printoptions"), [`printoptions`](numpy.printoptions#numpy.printoptions "numpy.printoptions"), [`set_string_function`](numpy.set_string_function#numpy.set_string_function "numpy.set_string_function"), [`array2string`](numpy.array2string#numpy.array2string "numpy.array2string") #### Notes `formatter` is always reset with a call to [`set_printoptions`](#numpy.set_printoptions "numpy.set_printoptions"). Use [`printoptions`](numpy.printoptions#numpy.printoptions "numpy.printoptions") as a context manager to set the values temporarily. #### Examples Floating point precision can be set: ``` >>> np.set_printoptions(precision=4) >>> np.array([1.123456789]) [1.1235] ``` Long arrays can be summarised: ``` >>> np.set_printoptions(threshold=5) >>> np.arange(10) array([0, 1, 2, ..., 7, 8, 9]) ``` Small results can be suppressed: ``` >>> eps = np.finfo(float).eps >>> x = np.arange(4.) 
>>> x**2 - (x + eps)**2 array([-4.9304e-32, -4.4409e-16, 0.0000e+00, 0.0000e+00]) >>> np.set_printoptions(suppress=True) >>> x**2 - (x + eps)**2 array([-0., -0., 0., 0.]) ``` A custom formatter can be used to display array elements as desired: ``` >>> np.set_printoptions(formatter={'all':lambda x: 'int: '+str(-x)}) >>> x = np.arange(3) >>> x array([int: 0, int: -1, int: -2]) >>> np.set_printoptions() # formatter gets reset >>> x array([0, 1, 2]) ``` To put back the default options, you can use: ``` >>> np.set_printoptions(edgeitems=3, infstr='inf', ... linewidth=75, nanstr='nan', precision=8, ... suppress=False, threshold=1000, formatter=None) ``` Also to temporarily override options, use [`printoptions`](numpy.printoptions#numpy.printoptions "numpy.printoptions") as a context manager: ``` >>> with np.printoptions(precision=2, suppress=True, threshold=5): ... np.linspace(0, 10, 10) array([ 0. , 1.11, 2.22, ..., 7.78, 8.89, 10. ]) ``` © 2005–2022 NumPy Developers Licensed under the 3-clause BSD License. <https://numpy.org/doc/1.23/reference/generated/numpy.set_printoptions.htmlnumpy.get_printoptions ======================= numpy.get_printoptions()[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/core/arrayprint.py#L295-L326) Return the current print options. Returns **print_opts**dict Dictionary of current print options with keys * precision : int * threshold : int * edgeitems : int * linewidth : int * suppress : bool * nanstr : str * infstr : str * formatter : dict of callables * sign : str For a full description of these options, see [`set_printoptions`](numpy.set_printoptions#numpy.set_printoptions "numpy.set_printoptions"). 
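The returned dictionary can serve as a snapshot for saving and restoring the print state, since its keys mirror the keyword arguments of `set_printoptions`. A small sketch:

```python
import numpy as np

opts = np.get_printoptions()   # snapshot of the current options
print(opts['precision'], opts['threshold'])  # 8 1000 with the defaults

np.set_printoptions(precision=3)
assert np.get_printoptions()['precision'] == 3

np.set_printoptions(**opts)    # restore the snapshot
```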
See also [`set_printoptions`](numpy.set_printoptions#numpy.set_printoptions "numpy.set_printoptions"), [`printoptions`](numpy.printoptions#numpy.printoptions "numpy.printoptions"), [`set_string_function`](numpy.set_string_function#numpy.set_string_function "numpy.set_string_function")

numpy.set_string_function
===========================

numpy.set_string_function(*f*, *repr=True*)[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/core/arrayprint.py#L1648-L1705) Set a Python function to be used when pretty printing arrays. Parameters **f**function or None Function to be used to pretty print arrays. The function should expect a single array argument and return a string of the representation of the array. If None, the function is reset to the default NumPy function to print arrays. **repr**bool, optional If True (default), the function for pretty printing (`__repr__`) is set, if False the function that returns the default string representation (`__str__`) is set. See also [`set_printoptions`](numpy.set_printoptions#numpy.set_printoptions "numpy.set_printoptions"), [`get_printoptions`](numpy.get_printoptions#numpy.get_printoptions "numpy.get_printoptions") #### Examples ``` >>> def pprint(arr): ... return 'HA! - What are you going to do now?' ... >>> np.set_string_function(pprint) >>> a = np.arange(10) >>> a HA! - What are you going to do now? >>> _ = a >>> # [0 1 2 3 4 5 6 7 8 9] ``` We can reset the function to the default: ``` >>> np.set_string_function(None) >>> a array([0, 1, 2, 3, 4, 5, 6, 7, 8, 9]) ``` `repr` affects either pretty printing or normal string representation. Note that `__repr__` is still affected by setting `__str__` because the width of each array element in the returned string becomes equal to the length of the result of `__str__()`.
``` >>> x = np.arange(4) >>> np.set_string_function(lambda x:'random', repr=False) >>> x.__str__() 'random' >>> x.__repr__() 'array([0, 1, 2, 3])' ``` © 2005–2022 NumPy Developers Licensed under the 3-clause BSD License. <https://numpy.org/doc/1.23/reference/generated/numpy.set_string_function.htmlnumpy.base_repr ================ numpy.base_repr(*number*, *base=2*, *padding=0*)[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/core/numeric.py#L2069-L2123) Return a string representation of a number in the given base system. Parameters **number**int The value to convert. Positive and negative values are handled. **base**int, optional Convert [`number`](../arrays.scalars#numpy.number "numpy.number") to the `base` number system. The valid range is 2-36, the default value is 2. **padding**int, optional Number of zeros padded on the left. Default is 0 (no padding). Returns **out**str String representation of [`number`](../arrays.scalars#numpy.number "numpy.number") in `base` system. See also [`binary_repr`](numpy.binary_repr#numpy.binary_repr "numpy.binary_repr") Faster version of [`base_repr`](#numpy.base_repr "numpy.base_repr") for base 2. #### Examples ``` >>> np.base_repr(5) '101' >>> np.base_repr(6, 5) '11' >>> np.base_repr(7, base=5, padding=3) '00012' ``` ``` >>> np.base_repr(10, base=16) 'A' >>> np.base_repr(32, base=16) '20' ``` © 2005–2022 NumPy Developers Licensed under the 3-clause BSD License. <https://numpy.org/doc/1.23/reference/generated/numpy.base_repr.htmlnumpy.DataSource ================ *class*numpy.DataSource(*destpath='.'*)[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/__init__.py) A generic data source file (file, http, ftp, 
). DataSources can be local files or remote files/URLs. The files may also be compressed or uncompressed. DataSource hides some of the low-level details of downloading the file, allowing you to simply pass in a valid file path (or URL) and obtain a file object. Parameters **destpath**str or None, optional Path to the directory where the source file gets downloaded to for use. If `destpath` is None, a temporary directory will be created. The default path is the current directory. #### Notes URLs require a scheme string (`http://`) to be used, without it they will fail: ``` >>> repos = np.DataSource() >>> repos.exists('www.google.com/index.html') False >>> repos.exists('http://www.google.com/index.html') True ``` Temporary directories are deleted when the DataSource is deleted. #### Examples ``` >>> ds = np.DataSource('/home/guido') >>> urlname = 'http://www.google.com/' >>> gfile = ds.open('http://www.google.com/') >>> ds.abspath(urlname) '/home/guido/www.google.com/index.html' >>> ds = np.DataSource(None) # use with temporary file >>> ds.open('/home/guido/foobar.txt') <open file '/home/guido.foobar.txt', mode 'r' at 0x91d4430> >>> ds.abspath('/home/guido/foobar.txt') '/tmp/.../home/guido/foobar.txt' ``` #### Methods | | | | --- | --- | | [`abspath`](numpy.datasource.abspath#numpy.DataSource.abspath "numpy.DataSource.abspath")(path) | Return absolute path of file in the DataSource directory. | | [`exists`](numpy.datasource.exists#numpy.DataSource.exists "numpy.DataSource.exists")(path) | Test if path exists. | | [`open`](numpy.datasource.open#numpy.DataSource.open "numpy.DataSource.open")(path[, mode, encoding, newline]) | Open and return file-like object. | © 2005–2022 NumPy Developers Licensed under the 3-clause BSD License. <https://numpy.org/doc/1.23/reference/generated/numpy.DataSource.htmlnumpy.lib.format ================ Binary serialization NPY format ---------- A simple format for saving numpy arrays to disk with the full information about them. 
The `.npy` format is the standard binary file format in NumPy for persisting a *single* arbitrary NumPy array on disk. The format stores all of the shape and dtype information necessary to reconstruct the array correctly even on another machine with a different architecture. The format is designed to be as simple as possible while achieving its limited goals. The `.npz` format is the standard format for persisting *multiple* NumPy arrays on disk. A `.npz` file is a zip file containing multiple `.npy` files, one for each array. ### Capabilities * Can represent all NumPy arrays including nested record arrays and object arrays. * Represents the data in its native binary form. * Supports Fortran-contiguous arrays directly. * Stores all of the necessary information to reconstruct the array including shape and dtype on a machine of a different architecture. Both little-endian and big-endian arrays are supported, and a file with little-endian numbers will yield a little-endian array on any machine reading the file. The types are described in terms of their actual sizes. For example, if a machine with a 64-bit C “long int” writes out an array with “long ints”, a reading machine with 32-bit C “long ints” will yield an array with 64-bit integers. * Is straightforward to reverse engineer. Datasets often live longer than the programs that created them. A competent developer should be able to create a solution in their preferred programming language to read most `.npy` files that they have been given without much documentation. * Allows memory-mapping of the data. See [`open_memmap`](numpy.lib.format.open_memmap#numpy.lib.format.open_memmap "numpy.lib.format.open_memmap"). * Can be read from a filelike stream object instead of an actual file. * Stores object arrays, i.e. arrays containing elements that are arbitrary Python objects. Files with object arrays are not to be mmapable, but can be read and written to disk. 
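Object arrays in particular round-trip through pickle, which is why `numpy.load` demands an explicit `allow_pickle=True` for them. A sketch using an in-memory buffer:

```python
import io

import numpy as np

arr = np.empty(2, dtype=object)
arr[0] = {'a': 1}
arr[1] = [1, 2, 3]

buf = io.BytesIO()
np.save(buf, arr, allow_pickle=True)    # object arrays are written as a pickle
buf.seek(0)

back = np.load(buf, allow_pickle=True)  # pickling must be re-enabled on load
print(back[0], back[1])  # {'a': 1} [1, 2, 3]
```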
### Limitations * Arbitrary subclasses of numpy.ndarray are not completely preserved. Subclasses will be accepted for writing, but only the array data will be written out. A regular numpy.ndarray object will be created upon reading the file. Warning Due to limitations in the interpretation of structured dtypes, dtypes with fields with empty names will have the names replaced by ‘f0’, ‘f1’, etc. Such arrays will not round-trip through the format entirely accurately. The data is intact; only the field names will differ. We are working on a fix for this. This fix will not require a change in the file format. The arrays with such structures can still be saved and restored, and the correct dtype may be restored by using the `loadedarray.view(correct_dtype)` method. ### File extensions We recommend using the `.npy` and `.npz` extensions for files saved in this format. This is by no means a requirement; applications may wish to use these file formats but use an extension specific to the application. In the absence of an obvious alternative, however, we suggest using `.npy` and `.npz`. ### Version numbering The version numbering of these formats is independent of NumPy version numbering. If the format is upgraded, the code in `numpy.io` will still be able to read and write Version 1.0 files. ### Format Version 1.0 The first 6 bytes are a magic string: exactly `\x93NUMPY`. The next 1 byte is an unsigned byte: the major version number of the file format, e.g. `\x01`. The next 1 byte is an unsigned byte: the minor version number of the file format, e.g. `\x00`. Note: the version of the file format is not tied to the version of the numpy package. The next 2 bytes form a little-endian unsigned short int: the length of the header data HEADER_LEN. The next HEADER_LEN bytes form the header data describing the array’s format. It is an ASCII string which contains a Python literal expression of a dictionary. 
It is terminated by a newline (`\n`) and padded with spaces (`\x20`) to make the total of `len(magic string) + 2 + len(length) + HEADER_LEN` be evenly divisible by 64 for alignment purposes.

The dictionary contains three keys:

* “descr” (dtype.descr): An object that can be passed as an argument to the [`numpy.dtype`](numpy.dtype#numpy.dtype "numpy.dtype") constructor to create the array’s dtype.
* “fortran_order” (bool): Whether the array data is Fortran-contiguous or not. Since Fortran-contiguous arrays are a common form of non-C-contiguity, we allow them to be written directly to disk for efficiency.
* “shape” (tuple of int): The shape of the array.

For repeatability and readability, the dictionary keys are sorted in alphabetic order. This is for convenience only. A writer SHOULD implement this if possible. A reader MUST NOT depend on this.

Following the header comes the array data. If the dtype contains Python objects (i.e. `dtype.hasobject is True`), then the data is a Python pickle of the array. Otherwise the data is the contiguous (either C- or Fortran-, depending on `fortran_order`) bytes of the array. Consumers can figure out the number of bytes by multiplying the number of elements given by the shape (noting that `shape=()` means there is 1 element) by `dtype.itemsize`.

### Format Version 2.0

The version 1.0 format only allowed the array header to have a total size of 65535 bytes. This can be exceeded by structured arrays with a large number of columns. The version 2.0 format extends the header size to 4 GiB. [`numpy.save`](numpy.save#numpy.save "numpy.save") will automatically save in 2.0 format if the data requires it, else it will always use the more compatible 1.0 format.
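The Version 1.0 byte layout described above is simple enough to decode by hand. This sketch (offsets taken from the spec; the array contents are arbitrary) serializes an array into a buffer with `np.save` and then parses the magic string, version, header length, and header dictionary using only the standard library:

```python
import ast
import io
import struct

import numpy as np

buf = io.BytesIO()
np.save(buf, np.arange(12, dtype='<i4').reshape(3, 4))
raw = buf.getvalue()

assert raw[:6] == b'\x93NUMPY'                   # magic string
major, minor = raw[6], raw[7]                    # format version bytes
assert (major, minor) == (1, 0)                  # small header -> version 1.0
(header_len,) = struct.unpack('<H', raw[8:10])   # little-endian unsigned short

header = raw[10:10 + header_len].decode('ascii')
assert header.endswith('\n')                     # newline-terminated
assert (10 + header_len) % 64 == 0               # padded to a multiple of 64

d = ast.literal_eval(header)                     # a Python dict literal
assert d['fortran_order'] is False
assert d['shape'] == (3, 4)
assert np.dtype(d['descr']) == np.dtype('<i4')
```

The `10` here is `len(magic string) + 2 + len(length)` = 6 + 2 + 2; the array data follows immediately after the padded header.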
The description of the fourth element of the header therefore has become: “The next 4 bytes form a little-endian unsigned int: the length of the header data HEADER_LEN.” ### Format Version 3.0 This version replaces the ASCII string (which in practice was latin1) with a utf8-encoded string, so supports structured types with any unicode field names. ### Notes The `.npy` format, including motivation for creating it and a comparison of alternatives, is described in the [“npy-format” NEP](https://numpy.org/neps/nep-0001-npy-format.html "(in NumPy Enhancement Proposals)"), however details have evolved with time and this document is more current. #### Functions | | | | --- | --- | | [`descr_to_dtype`](numpy.lib.format.descr_to_dtype#numpy.lib.format.descr_to_dtype "numpy.lib.format.descr_to_dtype")(descr) | Returns a dtype based off the given description. | | [`dtype_to_descr`](numpy.lib.format.dtype_to_descr#numpy.lib.format.dtype_to_descr "numpy.lib.format.dtype_to_descr")(dtype) | Get a serializable descriptor from the dtype. | | [`header_data_from_array_1_0`](numpy.lib.format.header_data_from_array_1_0#numpy.lib.format.header_data_from_array_1_0 "numpy.lib.format.header_data_from_array_1_0")(array) | Get the dictionary of header metadata from a numpy.ndarray. | | [`magic`](numpy.lib.format.magic#numpy.lib.format.magic "numpy.lib.format.magic")(major, minor) | Return the magic string for the given file format version. | | [`open_memmap`](numpy.lib.format.open_memmap#numpy.lib.format.open_memmap "numpy.lib.format.open_memmap")(filename[, mode, dtype, shape, ...]) | Open a .npy file as a memory-mapped array. | | [`read_array`](numpy.lib.format.read_array#numpy.lib.format.read_array "numpy.lib.format.read_array")(fp[, allow_pickle, pickle_kwargs]) | Read an array from an NPY file. 
| | [`read_array_header_1_0`](numpy.lib.format.read_array_header_1_0#numpy.lib.format.read_array_header_1_0 "numpy.lib.format.read_array_header_1_0")(fp) | Read an array header from a filelike object using the 1.0 file format version. | | [`read_array_header_2_0`](numpy.lib.format.read_array_header_2_0#numpy.lib.format.read_array_header_2_0 "numpy.lib.format.read_array_header_2_0")(fp) | Read an array header from a filelike object using the 2.0 file format version. | | [`read_magic`](numpy.lib.format.read_magic#numpy.lib.format.read_magic "numpy.lib.format.read_magic")(fp) | Read the magic string to get the version of the file format. | | [`write_array`](numpy.lib.format.write_array#numpy.lib.format.write_array "numpy.lib.format.write_array")(fp, array[, version, ...]) | Write an array to an NPY file, including a header. | | [`write_array_header_1_0`](numpy.lib.format.write_array_header_1_0#numpy.lib.format.write_array_header_1_0 "numpy.lib.format.write_array_header_1_0")(fp, d) | Write the header for an array using the 1.0 format. | | [`write_array_header_2_0`](numpy.lib.format.write_array_header_2_0#numpy.lib.format.write_array_header_2_0 "numpy.lib.format.write_array_header_2_0")(fp, d) | Write the header for an array using the 2.0 format. |

© 2005–2022 NumPy Developers Licensed under the 3-clause BSD License. <https://numpy.org/doc/1.23/reference/generated/numpy.lib.format.html>

numpy.linalg.multi_dot
======================

linalg.multi_dot(*arrays*, ***, *out=None*)[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/linalg/linalg.py#L2617-L2735)

Compute the dot product of two or more arrays in a single function call, while automatically selecting the fastest evaluation order. [`multi_dot`](#numpy.linalg.multi_dot "numpy.linalg.multi_dot") chains [`numpy.dot`](numpy.dot#numpy.dot "numpy.dot") and uses optimal parenthesization of the matrices [[1]](#r451bed364cc6-1) [[2]](#r451bed364cc6-2).
Depending on the shapes of the matrices, this can speed up the multiplication a lot. If the first argument is 1-D it is treated as a row vector. If the last argument is 1-D it is treated as a column vector. The other arguments must be 2-D. Think of [`multi_dot`](#numpy.linalg.multi_dot "numpy.linalg.multi_dot") as: ``` def multi_dot(arrays): return functools.reduce(np.dot, arrays) ``` Parameters **arrays**sequence of array_like If the first argument is 1-D it is treated as row vector. If the last argument is 1-D it is treated as column vector. The other arguments must be 2-D. **out**ndarray, optional Output argument. This must have the exact kind that would be returned if it was not used. In particular, it must have the right type, must be C-contiguous, and its dtype must be the dtype that would be returned for `dot(a, b)`. This is a performance feature. Therefore, if these conditions are not met, an exception is raised, instead of attempting to be flexible. New in version 1.19.0. Returns **output**ndarray Returns the dot product of the supplied arrays. See also [`numpy.dot`](numpy.dot#numpy.dot "numpy.dot") dot multiplication with two arguments. #### Notes The cost for a matrix multiplication can be calculated with the following function: ``` def cost(A, B): return A.shape[0] * A.shape[1] * B.shape[1] ``` Assume we have three matrices \(A_{10x100}, B_{100x5}, C_{5x50}\). The costs for the two different parenthesizations are as follows: ``` cost((AB)C) = 10*100*5 + 10*5*50 = 5000 + 2500 = 7500 cost(A(BC)) = 10*100*50 + 100*5*50 = 50000 + 25000 = 75000 ``` #### References [1](#id1) Cormen, “Introduction to Algorithms”, Chapter 15.2, p. 
370-378

[2](#id2) <https://en.wikipedia.org/wiki/Matrix_chain_multiplication>

#### Examples

[`multi_dot`](#numpy.linalg.multi_dot "numpy.linalg.multi_dot") allows you to write:

```
>>> from numpy.linalg import multi_dot
>>> # Prepare some data
>>> A = np.random.random((10000, 100))
>>> B = np.random.random((100, 1000))
>>> C = np.random.random((1000, 5))
>>> D = np.random.random((5, 333))
>>> # the actual dot multiplication
>>> _ = multi_dot([A, B, C, D])
```

instead of:

```
>>> _ = np.dot(np.dot(np.dot(A, B), C), D)
>>> # or
>>> _ = A.dot(B).dot(C).dot(D)
```

numpy.tensordot
===============

numpy.tensordot(*a*, *b*, *axes=2*)[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/core/numeric.py#L949-L1139)

Compute tensor dot product along specified axes. Given two tensors, `a` and `b`, and an array_like object containing two array_like objects, `(a_axes, b_axes)`, sum the products of `a`’s and `b`’s elements (components) over the axes specified by `a_axes` and `b_axes`. The third argument can be a single non-negative integer_like scalar, `N`; if it is such, then the last `N` dimensions of `a` and the first `N` dimensions of `b` are summed over.

Parameters

**a, b**array_like Tensors to “dot”.

**axes**int or (2,) array_like

* integer_like If an int N, sum over the last N axes of `a` and the first N axes of `b` in order. The sizes of the corresponding axes must match.
* (2,) array_like Or, a list of axes to be summed over, first sequence applying to `a`, second to `b`. Both elements array_like must be of the same length.

Returns

**output**ndarray The tensor dot product of the input.
See also [`dot`](numpy.dot#numpy.dot "numpy.dot"), [`einsum`](numpy.einsum#numpy.einsum "numpy.einsum") #### Notes Three common use cases are: * `axes = 0` : tensor product \(a\otimes b\) * `axes = 1` : tensor dot product \(a\cdot b\) * `axes = 2` : (default) tensor double contraction \(a:b\) When `axes` is integer_like, the sequence for evaluation will be: first the -Nth axis in `a` and 0th axis in `b`, and the -1th axis in `a` and Nth axis in `b` last. When there is more than one axis to sum over - and they are not the last (first) axes of `a` (`b`) - the argument `axes` should consist of two sequences of the same length, with the first axis to sum over given first in both sequences, the second axis second, and so forth. The shape of the result consists of the non-contracted axes of the first tensor, followed by the non-contracted axes of the second. #### Examples A “traditional” example: ``` >>> a = np.arange(60.).reshape(3,4,5) >>> b = np.arange(24.).reshape(4,3,2) >>> c = np.tensordot(a,b, axes=([1,0],[0,1])) >>> c.shape (5, 2) >>> c array([[4400., 4730.], [4532., 4874.], [4664., 5018.], [4796., 5162.], [4928., 5306.]]) >>> # A slower but equivalent way of computing the same... >>> d = np.zeros((5,2)) >>> for i in range(5): ... for j in range(2): ... for k in range(3): ... for n in range(4): ... 
d[i,j] += a[k,n,i] * b[n,k,j]
>>> c == d
array([[ True, True],
       [ True, True],
       [ True, True],
       [ True, True],
       [ True, True]])
```

An extended example taking advantage of the overloading of + and *:

```
>>> a = np.array(range(1, 9))
>>> a.shape = (2, 2, 2)
>>> A = np.array(('a', 'b', 'c', 'd'), dtype=object)
>>> A.shape = (2, 2)
>>> a; A
array([[[1, 2],
        [3, 4]],
       [[5, 6],
        [7, 8]]])
array([['a', 'b'],
       ['c', 'd']], dtype=object)
```

```
>>> np.tensordot(a, A) # third argument default is 2 for double-contraction
array(['abbcccdddd', 'aaaaabbbbbbcccccccdddddddd'], dtype=object)
```

```
>>> np.tensordot(a, A, 1)
array([[['acc', 'bdd'],
        ['aaacccc', 'bbbdddd']],
       [['aaaaacccccc', 'bbbbbdddddd'],
        ['aaaaaaacccccccc', 'bbbbbbbdddddddd']]], dtype=object)
```

```
>>> np.tensordot(a, A, 0) # tensor product (result too long to incl.)
array([[[[['a', 'b'],
          ['c', 'd']],
         ...
```

```
>>> np.tensordot(a, A, (0, 1))
array([[['abbbbb', 'cddddd'],
        ['aabbbbbb', 'ccdddddd']],
       [['aaabbbbbbb', 'cccddddddd'],
        ['aaaabbbbbbbb', 'ccccdddddddd']]], dtype=object)
```

```
>>> np.tensordot(a, A, (2, 1))
array([[['abb', 'cdd'],
        ['aaabbbb', 'cccdddd']],
       [['aaaaabbbbbb', 'cccccdddddd'],
        ['aaaaaaabbbbbbbb', 'cccccccdddddddd']]], dtype=object)
```

```
>>> np.tensordot(a, A, ((0, 1), (0, 1)))
array(['abbbcccccddddddd', 'aabbbbccccccdddddddd'], dtype=object)
```

```
>>> np.tensordot(a, A, ((2, 1), (1, 0)))
array(['acccbbdddd', 'aaaaacccccccbbbbbbdddddddd'], dtype=object)
```

numpy.einsum
============

numpy.einsum(*subscripts*, **operands*, *out=None*, *dtype=None*, *order='K'*, *casting='safe'*, *optimize=False*)[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/core/einsumfunc.py#L1009-L1443)

Evaluates the Einstein summation convention on the operands.
Using the Einstein summation convention, many common multi-dimensional, linear algebraic array operations can be represented in a simple fashion. In *implicit* mode [`einsum`](#numpy.einsum "numpy.einsum") computes these values. In *explicit* mode, [`einsum`](#numpy.einsum "numpy.einsum") provides further flexibility to compute other array operations that might not be considered classical Einstein summation operations, by disabling, or forcing summation over specified subscript labels. See the notes and examples for clarification. Parameters **subscripts**str Specifies the subscripts for summation as comma separated list of subscript labels. An implicit (classical Einstein summation) calculation is performed unless the explicit indicator ‘->’ is included as well as subscript labels of the precise output form. **operands**list of array_like These are the arrays for the operation. **out**ndarray, optional If provided, the calculation is done into this array. **dtype**{data-type, None}, optional If provided, forces the calculation to use the data type specified. Note that you may have to also give a more liberal `casting` parameter to allow the conversions. Default is None. **order**{‘C’, ‘F’, ‘A’, ‘K’}, optional Controls the memory layout of the output. ‘C’ means it should be C contiguous. ‘F’ means it should be Fortran contiguous, ‘A’ means it should be ‘F’ if the inputs are all ‘F’, ‘C’ otherwise. ‘K’ means it should be as close to the layout as the inputs as is possible, including arbitrarily permuted axes. Default is ‘K’. **casting**{‘no’, ‘equiv’, ‘safe’, ‘same_kind’, ‘unsafe’}, optional Controls what kind of data casting may occur. Setting this to ‘unsafe’ is not recommended, as it can adversely affect accumulations. * ‘no’ means the data types should not be cast at all. * ‘equiv’ means only byte-order changes are allowed. * ‘safe’ means only casts which can preserve values are allowed. 
* ‘same_kind’ means only safe casts or casts within a kind, like float64 to float32, are allowed. * ‘unsafe’ means any data conversions may be done. Default is ‘safe’. **optimize**{False, True, ‘greedy’, ‘optimal’}, optional Controls if intermediate optimization should occur. No optimization will occur if False and True will default to the ‘greedy’ algorithm. Also accepts an explicit contraction list from the `np.einsum_path` function. See `np.einsum_path` for more details. Defaults to False. Returns **output**ndarray The calculation based on the Einstein summation convention. See also [`einsum_path`](numpy.einsum_path#numpy.einsum_path "numpy.einsum_path"), [`dot`](numpy.dot#numpy.dot "numpy.dot"), [`inner`](numpy.inner#numpy.inner "numpy.inner"), [`outer`](numpy.outer#numpy.outer "numpy.outer"), [`tensordot`](numpy.tensordot#numpy.tensordot "numpy.tensordot"), [`linalg.multi_dot`](numpy.linalg.multi_dot#numpy.linalg.multi_dot "numpy.linalg.multi_dot") `einops` similar verbose interface is provided by [einops](https://github.com/arogozhnikov/einops) package to cover additional operations: transpose, reshape/flatten, repeat/tile, squeeze/unsqueeze and reductions. `opt_einsum` [opt_einsum](https://optimized-einsum.readthedocs.io/en/stable/) optimizes contraction order for einsum-like expressions in backend-agnostic manner. #### Notes New in version 1.6.0. The Einstein summation convention can be used to compute many multi-dimensional, linear algebraic array operations. [`einsum`](#numpy.einsum "numpy.einsum") provides a succinct way of representing these. A non-exhaustive list of these operations, which can be computed by [`einsum`](#numpy.einsum "numpy.einsum"), is shown below along with examples: * Trace of an array, [`numpy.trace`](numpy.trace#numpy.trace "numpy.trace"). * Return a diagonal, [`numpy.diag`](numpy.diag#numpy.diag "numpy.diag"). * Array axis summations, [`numpy.sum`](numpy.sum#numpy.sum "numpy.sum"). 
* Transpositions and permutations, [`numpy.transpose`](numpy.transpose#numpy.transpose "numpy.transpose"). * Matrix multiplication and dot product, [`numpy.matmul`](numpy.matmul#numpy.matmul "numpy.matmul") [`numpy.dot`](numpy.dot#numpy.dot "numpy.dot"). * Vector inner and outer products, [`numpy.inner`](numpy.inner#numpy.inner "numpy.inner") [`numpy.outer`](numpy.outer#numpy.outer "numpy.outer"). * Broadcasting, element-wise and scalar multiplication, [`numpy.multiply`](numpy.multiply#numpy.multiply "numpy.multiply"). * Tensor contractions, [`numpy.tensordot`](numpy.tensordot#numpy.tensordot "numpy.tensordot"). * Chained array operations, in efficient calculation order, [`numpy.einsum_path`](numpy.einsum_path#numpy.einsum_path "numpy.einsum_path"). The subscripts string is a comma-separated list of subscript labels, where each label refers to a dimension of the corresponding operand. Whenever a label is repeated it is summed, so `np.einsum('i,i', a, b)` is equivalent to [`np.inner(a,b)`](numpy.inner#numpy.inner "numpy.inner"). If a label appears only once, it is not summed, so `np.einsum('i', a)` produces a view of `a` with no changes. A further example `np.einsum('ij,jk', a, b)` describes traditional matrix multiplication and is equivalent to [`np.matmul(a,b)`](numpy.matmul#numpy.matmul "numpy.matmul"). Repeated subscript labels in one operand take the diagonal. For example, `np.einsum('ii', a)` is equivalent to [`np.trace(a)`](numpy.trace#numpy.trace "numpy.trace"). In *implicit mode*, the chosen subscripts are important since the axes of the output are reordered alphabetically. This means that `np.einsum('ij', a)` doesn’t affect a 2D array, while `np.einsum('ji', a)` takes its transpose. Additionally, `np.einsum('ij,jk', a, b)` returns a matrix multiplication, while, `np.einsum('ij,jh', a, b)` returns the transpose of the multiplication since subscript ‘h’ precedes subscript ‘i’. 
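A short sketch of the implicit-mode rules just described (repeated labels are summed; output axes are ordered alphabetically); the arrays are arbitrary:

```python
import numpy as np

a = np.arange(6).reshape(2, 3)
b = np.arange(3)
sq = np.arange(9).reshape(3, 3)

# A label repeated within one operand takes the diagonal; with no
# surviving output labels, implicit mode sums it, giving the trace.
assert np.einsum('ii', sq) == np.trace(sq)

# Output axes are ordered alphabetically, so 'ji' is the transpose.
assert (np.einsum('ji', a) == a.T).all()

# A label repeated across operands is summed over: matrix-vector product.
assert (np.einsum('ij,j', a, b) == a @ b).all()
```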
In *explicit mode* the output can be directly controlled by specifying output subscript labels. This requires the identifier ‘->’ as well as the list of output subscript labels. This feature increases the flexibility of the function since summing can be disabled or forced when required. The call `np.einsum('i->', a)` is like [`np.sum(a, axis=-1)`](numpy.sum#numpy.sum "numpy.sum"), and `np.einsum('ii->i', a)` is like [`np.diag(a)`](numpy.diag#numpy.diag "numpy.diag"). The difference is that [`einsum`](#numpy.einsum "numpy.einsum") does not allow broadcasting by default. Additionally `np.einsum('ij,jh->ih', a, b)` directly specifies the order of the output subscript labels and therefore returns matrix multiplication, unlike the example above in implicit mode. To enable and control broadcasting, use an ellipsis. Default NumPy-style broadcasting is done by adding an ellipsis to the left of each term, like `np.einsum('...ii->...i', a)`. To take the trace along the first and last axes, you can do `np.einsum('i...i', a)`, or to do a matrix-matrix product with the left-most indices instead of rightmost, one can do `np.einsum('ij...,jk...->ik...', a, b)`. When there is only one operand, no axes are summed, and no output parameter is provided, a view into the operand is returned instead of a new array. Thus, taking the diagonal as `np.einsum('ii->i', a)` produces a view (changed in version 1.10.0). [`einsum`](#numpy.einsum "numpy.einsum") also provides an alternative way to provide the subscripts and operands as `einsum(op0, sublist0, op1, sublist1, ..., [sublistout])`. If the output shape is not provided in this format [`einsum`](#numpy.einsum "numpy.einsum") will be calculated in implicit mode, otherwise it will be performed explicitly. The examples below have corresponding [`einsum`](#numpy.einsum "numpy.einsum") calls with the two parameter methods. New in version 1.10.0. Views returned from einsum are now writeable whenever the input array is writeable. 
For example, `np.einsum('ijk...->kji...', a)` will now have the same effect as [`np.swapaxes(a, 0, 2)`](numpy.swapaxes#numpy.swapaxes "numpy.swapaxes") and `np.einsum('ii->i', a)` will return a writeable view of the diagonal of a 2D array. New in version 1.12.0. Added the `optimize` argument which will optimize the contraction order of an einsum expression. For a contraction with three or more operands this can greatly increase the computational efficiency at the cost of a larger memory footprint during computation. Typically a ‘greedy’ algorithm is applied which empirical tests have shown returns the optimal path in the majority of cases. In some cases ‘optimal’ will return the superlative path through a more expensive, exhaustive search. For iterative calculations it may be advisable to calculate the optimal path once and reuse that path by supplying it as an argument. An example is given below. See [`numpy.einsum_path`](numpy.einsum_path#numpy.einsum_path "numpy.einsum_path") for more details. 
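The `optimize` argument changes only the evaluation order, never the result; a sketch comparing a default call, a greedy-optimized call, and a precomputed path (the shapes are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(0)
a = rng.random((8, 16))
b = rng.random((16, 4))
c = rng.random((4, 12))

base = np.einsum('ij,jk,kl->il', a, b, c)                   # optimize=False
greedy = np.einsum('ij,jk,kl->il', a, b, c, optimize=True)  # 'greedy' path

# Precompute the contraction order once and reuse it on every call.
path = np.einsum_path('ij,jk,kl->il', a, b, c, optimize='optimal')[0]
reused = np.einsum('ij,jk,kl->il', a, b, c, optimize=path)

assert np.allclose(base, greedy) and np.allclose(base, reused)
```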
#### Examples ``` >>> a = np.arange(25).reshape(5,5) >>> b = np.arange(5) >>> c = np.arange(6).reshape(2,3) ``` Trace of a matrix: ``` >>> np.einsum('ii', a) 60 >>> np.einsum(a, [0,0]) 60 >>> np.trace(a) 60 ``` Extract the diagonal (requires explicit form): ``` >>> np.einsum('ii->i', a) array([ 0, 6, 12, 18, 24]) >>> np.einsum(a, [0,0], [0]) array([ 0, 6, 12, 18, 24]) >>> np.diag(a) array([ 0, 6, 12, 18, 24]) ``` Sum over an axis (requires explicit form): ``` >>> np.einsum('ij->i', a) array([ 10, 35, 60, 85, 110]) >>> np.einsum(a, [0,1], [0]) array([ 10, 35, 60, 85, 110]) >>> np.sum(a, axis=1) array([ 10, 35, 60, 85, 110]) ``` For higher dimensional arrays summing a single axis can be done with ellipsis: ``` >>> np.einsum('...j->...', a) array([ 10, 35, 60, 85, 110]) >>> np.einsum(a, [Ellipsis,1], [Ellipsis]) array([ 10, 35, 60, 85, 110]) ``` Compute a matrix transpose, or reorder any number of axes: ``` >>> np.einsum('ji', c) array([[0, 3], [1, 4], [2, 5]]) >>> np.einsum('ij->ji', c) array([[0, 3], [1, 4], [2, 5]]) >>> np.einsum(c, [1,0]) array([[0, 3], [1, 4], [2, 5]]) >>> np.transpose(c) array([[0, 3], [1, 4], [2, 5]]) ``` Vector inner products: ``` >>> np.einsum('i,i', b, b) 30 >>> np.einsum(b, [0], b, [0]) 30 >>> np.inner(b,b) 30 ``` Matrix vector multiplication: ``` >>> np.einsum('ij,j', a, b) array([ 30, 80, 130, 180, 230]) >>> np.einsum(a, [0,1], b, [1]) array([ 30, 80, 130, 180, 230]) >>> np.dot(a, b) array([ 30, 80, 130, 180, 230]) >>> np.einsum('...j,j', a, b) array([ 30, 80, 130, 180, 230]) ``` Broadcasting and scalar multiplication: ``` >>> np.einsum('..., ...', 3, c) array([[ 0, 3, 6], [ 9, 12, 15]]) >>> np.einsum(',ij', 3, c) array([[ 0, 3, 6], [ 9, 12, 15]]) >>> np.einsum(3, [Ellipsis], c, [Ellipsis]) array([[ 0, 3, 6], [ 9, 12, 15]]) >>> np.multiply(3, c) array([[ 0, 3, 6], [ 9, 12, 15]]) ``` Vector outer product: ``` >>> np.einsum('i,j', np.arange(2)+1, b) array([[0, 1, 2, 3, 4], [0, 2, 4, 6, 8]]) >>> np.einsum(np.arange(2)+1, [0], b, [1]) 
array([[0, 1, 2, 3, 4], [0, 2, 4, 6, 8]]) >>> np.outer(np.arange(2)+1, b) array([[0, 1, 2, 3, 4], [0, 2, 4, 6, 8]]) ``` Tensor contraction: ``` >>> a = np.arange(60.).reshape(3,4,5) >>> b = np.arange(24.).reshape(4,3,2) >>> np.einsum('ijk,jil->kl', a, b) array([[4400., 4730.], [4532., 4874.], [4664., 5018.], [4796., 5162.], [4928., 5306.]]) >>> np.einsum(a, [0,1,2], b, [1,0,3], [2,3]) array([[4400., 4730.], [4532., 4874.], [4664., 5018.], [4796., 5162.], [4928., 5306.]]) >>> np.tensordot(a,b, axes=([1,0],[0,1])) array([[4400., 4730.], [4532., 4874.], [4664., 5018.], [4796., 5162.], [4928., 5306.]]) ``` Writeable returned arrays (since version 1.10.0): ``` >>> a = np.zeros((3, 3)) >>> np.einsum('ii->i', a)[:] = 1 >>> a array([[1., 0., 0.], [0., 1., 0.], [0., 0., 1.]]) ``` Example of ellipsis use: ``` >>> a = np.arange(6).reshape((3,2)) >>> b = np.arange(12).reshape((4,3)) >>> np.einsum('ki,jk->ij', a, b) array([[10, 28, 46, 64], [13, 40, 67, 94]]) >>> np.einsum('ki,...k->i...', a, b) array([[10, 28, 46, 64], [13, 40, 67, 94]]) >>> np.einsum('k...,jk', a, b) array([[10, 28, 46, 64], [13, 40, 67, 94]]) ``` Chained array operations. For more complicated contractions, speed ups might be achieved by repeatedly computing a ‘greedy’ path or pre-computing the ‘optimal’ path and repeatedly applying it, using an [`einsum_path`](numpy.einsum_path#numpy.einsum_path "numpy.einsum_path") insertion (since version 1.12.0). Performance improvements can be particularly significant with larger arrays: ``` >>> a = np.ones(64).reshape(2,4,8) ``` Basic [`einsum`](#numpy.einsum "numpy.einsum"): ~1520ms (benchmarked on 3.1GHz Intel i5.) ``` >>> for iteration in range(500): ... _ = np.einsum('ijk,ilm,njm,nlk,abc->',a,a,a,a,a) ``` Sub-optimal [`einsum`](#numpy.einsum "numpy.einsum") (due to repeated path calculation time): ~330ms ``` >>> for iteration in range(500): ... 
_ = np.einsum('ijk,ilm,njm,nlk,abc->',a,a,a,a,a, optimize='optimal')
```

Greedy [`einsum`](#numpy.einsum "numpy.einsum") (faster optimal path approximation): ~160ms

```
>>> for iteration in range(500):
...     _ = np.einsum('ijk,ilm,njm,nlk,abc->',a,a,a,a,a, optimize='greedy')
```

Optimal [`einsum`](#numpy.einsum "numpy.einsum") (best usage pattern in some use cases): ~110ms

```
>>> path = np.einsum_path('ijk,ilm,njm,nlk,abc->',a,a,a,a,a, optimize='optimal')[0]
>>> for iteration in range(500):
...     _ = np.einsum('ijk,ilm,njm,nlk,abc->',a,a,a,a,a, optimize=path)
```

numpy.einsum_path
=================

numpy.einsum_path(*subscripts*, **operands*, *optimize='greedy'*)[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/core/einsumfunc.py#L706-L998)

Evaluates the lowest cost contraction order for an einsum expression by considering the creation of intermediate arrays.

Parameters

**subscripts**str Specifies the subscripts for summation.

***operands**list of array_like These are the arrays for the operation.

**optimize**{bool, list, tuple, ‘greedy’, ‘optimal’} Choose the type of path. If a tuple is provided, the second argument is assumed to be the maximum intermediate size created. If only a single argument is provided the largest input or output array size is used as a maximum intermediate size.

* if a list is given that starts with `einsum_path`, uses this as the contraction path
* if False no optimization is taken
* if True defaults to the ‘greedy’ algorithm
* ‘optimal’ An algorithm that combinatorially explores all possible ways of contracting the listed tensors and chooses the least costly path. Scales exponentially with the number of terms in the contraction.
* ‘greedy’ An algorithm that chooses the best pair contraction at each step.
Effectively, this algorithm searches the largest inner, Hadamard, and then outer products at each step. Scales cubically with the number of terms in the contraction. Equivalent to the ‘optimal’ path for most contractions. Default is ‘greedy’.

Returns

**path**list of tuples A list representation of the einsum path.

**string_repr**str A printable representation of the einsum path.

See also

[`einsum`](numpy.einsum#numpy.einsum "numpy.einsum"), [`linalg.multi_dot`](numpy.linalg.multi_dot#numpy.linalg.multi_dot "numpy.linalg.multi_dot")

#### Notes

The resulting path indicates which terms of the input contraction should be contracted first; the result of this contraction is then appended to the end of the contraction list. This list can then be iterated over until all intermediate contractions are complete.

#### Examples

We can begin with a chain dot example. In this case, it is optimal to contract the `b` and `c` tensors first as represented by the first element of the path `(1, 2)`. The resulting tensor is added to the end of the contraction and the remaining contraction `(0, 1)` is then completed.

```
>>> np.random.seed(123)
>>> a = np.random.rand(2, 2)
>>> b = np.random.rand(2, 5)
>>> c = np.random.rand(5, 2)
>>> path_info = np.einsum_path('ij,jk,kl->il', a, b, c, optimize='greedy')
>>> print(path_info[0])
['einsum_path', (1, 2), (0, 1)]
>>> print(path_info[1])
  Complete contraction:  ij,jk,kl->il # may vary
         Naive scaling:  4
     Optimized scaling:  3
      Naive FLOP count:  1.600e+02
  Optimized FLOP count:  5.600e+01
   Theoretical speedup:  2.857
  Largest intermediate:  4.000e+00 elements
-------------------------------------------------------------------------
scaling        current        remaining
-------------------------------------------------------------------------
   3           kl,jk->jl      ij,jl->il
   3           jl,ij->il      il->il
```

A more complex index transformation example.
```
>>> I = np.random.rand(10, 10, 10, 10)
>>> C = np.random.rand(10, 10)
>>> path_info = np.einsum_path('ea,fb,abcd,gc,hd->efgh', C, C, I, C, C,
...                            optimize='greedy')
```

```
>>> print(path_info[0])
['einsum_path', (0, 2), (0, 3), (0, 2), (0, 1)]
>>> print(path_info[1])
  Complete contraction:  ea,fb,abcd,gc,hd->efgh # may vary
         Naive scaling:  8
     Optimized scaling:  5
      Naive FLOP count:  8.000e+08
  Optimized FLOP count:  8.000e+05
   Theoretical speedup:  1000.000
  Largest intermediate:  1.000e+04 elements
--------------------------------------------------------------------------
scaling        current        remaining
--------------------------------------------------------------------------
   5           abcd,ea->bcde  fb,gc,hd,bcde->efgh
   5           bcde,fb->cdef  gc,hd,cdef->efgh
   5           cdef,gc->defg  hd,defg->efgh
   5           defg,hd->efgh  efgh->efgh
```

numpy.linalg.matrix_power
=========================

linalg.matrix_power(*a*, *n*)[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/linalg/linalg.py#L560-L673)

Raise a square matrix to the (integer) power `n`. For positive integers `n`, the power is computed by repeated matrix squarings and matrix multiplications. If `n == 0`, the identity matrix of the same shape as M is returned. If `n < 0`, the inverse is computed and then raised to the `abs(n)`.

Note Stacks of object matrices are not currently supported.

Parameters

**a**(
..., M, M) array_like Matrix to be “powered”. **n**int The exponent can be any integer or long integer, positive, negative, or zero. Returns **a**n**(
..., M, M) ndarray or matrix object The return value is the same shape and type as `M`; if the exponent is positive or zero then the type of the elements is the same as those of `M`. If the exponent is negative the elements are floating-point.

Raises

LinAlgError For matrices that are not square or that (for negative powers) cannot be inverted numerically.

#### Examples

```
>>> from numpy.linalg import matrix_power
>>> i = np.array([[0, 1], [-1, 0]]) # matrix equiv. of the imaginary unit
>>> matrix_power(i, 3) # should = -i
array([[ 0, -1],
       [ 1,  0]])
>>> matrix_power(i, 0)
array([[1, 0],
       [0, 1]])
>>> matrix_power(i, -3) # should = 1/(-i) = i, but w/ f.p. elements
array([[ 0.,  1.],
       [-1.,  0.]])
```

Somewhat more sophisticated example

```
>>> q = np.zeros((4, 4))
>>> q[0:2, 0:2] = -i
>>> q[2:4, 2:4] = i
>>> q # one of the three quaternion units not equal to 1
array([[ 0., -1.,  0.,  0.],
       [ 1.,  0.,  0.,  0.],
       [ 0.,  0.,  0.,  1.],
       [ 0.,  0., -1.,  0.]])
>>> matrix_power(q, 2) # = -np.eye(4)
array([[-1.,  0.,  0.,  0.],
       [ 0., -1.,  0.,  0.],
       [ 0.,  0., -1.,  0.],
       [ 0.,  0.,  0., -1.]])
```

numpy.kron
==========

numpy.kron(*a*, *b*)[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/lib/shape_base.py#L1073-L1184)

Kronecker product of two arrays. Computes the Kronecker product, a composite array made of blocks of the second array scaled by the first.

Parameters

**a, b**array_like

Returns

**out**ndarray

See also

[`outer`](numpy.outer#numpy.outer "numpy.outer") The outer product

#### Notes

The function assumes that the number of dimensions of `a` and `b` are the same, if necessary prepending the smallest with ones. If `a.shape = (r0,r1,..,rN)` and `b.shape = (s0,s1,...,sN)`, the Kronecker product has shape `(r0*s0, r1*s1, ..., rN*sN)`.
The elements are products of elements from `a` and `b`, organized explicitly by:

```
kron(a,b)[k0,k1,...,kN] = a[i0,i1,...,iN] * b[j0,j1,...,jN]
```

where:

```
kt = it * st + jt,  t = 0,...,N
```

In the common 2-D case (N=1), the block structure can be visualized:

```
[[ a[0,0]*b,  a[0,1]*b,  ... , a[0,-1]*b  ],
 [ ...                                ... ],
 [ a[-1,0]*b, a[-1,1]*b, ... , a[-1,-1]*b ]]
```

#### Examples

```
>>> np.kron([1,10,100], [5,6,7])
array([  5,   6,   7, ..., 500, 600, 700])
>>> np.kron([5,6,7], [1,10,100])
array([  5,  50, 500, ...,   7,  70, 700])
```

```
>>> np.kron(np.eye(2), np.ones((2,2)))
array([[1., 1., 0., 0.],
       [1., 1., 0., 0.],
       [0., 0., 1., 1.],
       [0., 0., 1., 1.]])
```

```
>>> a = np.arange(100).reshape((2,5,2,5))
>>> b = np.arange(24).reshape((2,3,4))
>>> c = np.kron(a,b)
>>> c.shape
(2, 10, 6, 20)
>>> I = (1,3,0,2)
>>> J = (0,2,1)
>>> J1 = (0,) + J # extend to ndim=4
>>> S1 = (1,) + b.shape
>>> K = tuple(np.array(I) * np.array(S1) + np.array(J1))
>>> c[K] == a[I]*b[J]
True
```

© 2005–2022 NumPy Developers Licensed under the 3-clause BSD License.
<https://numpy.org/doc/1.23/reference/generated/numpy.kron.html>

numpy.linalg.qr
===============

linalg.qr(*a*, *mode='reduced'*)[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/linalg/linalg.py#L780-L978)

Compute the qr factorization of a matrix. Factor the matrix `a` as *qr*, where `q` is orthonormal and `r` is upper-triangular.

Parameters **a**array_like, shape (
…, M, N) An array-like object with the dimensionality of at least 2. **mode**{‘reduced’, ‘complete’, ‘r’, ‘raw’}, optional If K = min(M, N), then * ‘reduced’ : returns q, r with dimensions (
…, M, K), (
…, K, N) (default) * ‘complete’ : returns q, r with dimensions (
…, M, M), (
…, M, N) * ‘r’ : returns r only with dimensions (
…, K, N) * ‘raw’ : returns h, tau with dimensions (
…, N, M), (
…, K,) The options ‘reduced’, ‘complete’, and ‘raw’ are new in numpy 1.8, see the notes for more information. The default is ‘reduced’, and to maintain backward compatibility with earlier versions of numpy both it and the old default ‘full’ can be omitted. Note that array h returned in ‘raw’ mode is transposed for calling Fortran. The ‘economic’ mode is deprecated. The modes ‘full’ and ‘economic’ may be passed using only the first letter for backwards compatibility, but all others must be spelled out. See the Notes for more explanation.

Returns **q**ndarray of float or complex, optional A matrix with orthonormal columns. When mode = ‘complete’ the result is an orthogonal/unitary matrix depending on whether or not a is real/complex. The determinant may be either +/- 1 in that case. In case the number of dimensions in the input array is greater than 2 then a stack of the matrices with above properties is returned. **r**ndarray of float or complex, optional The upper-triangular matrix or a stack of upper-triangular matrices if the number of dimensions in the input array is greater than 2. **(h, tau)**ndarrays of np.double or np.cdouble, optional The array h contains the Householder reflectors that generate q along with r. The tau array contains scaling factors for the reflectors. In the deprecated ‘economic’ mode only h is returned.

Raises LinAlgError If factoring fails.

See also [`scipy.linalg.qr`](https://docs.scipy.org/doc/scipy/reference/generated/scipy.linalg.qr.html#scipy.linalg.qr "(in SciPy v1.8.1)") Similar function in SciPy. [`scipy.linalg.rq`](https://docs.scipy.org/doc/scipy/reference/generated/scipy.linalg.rq.html#scipy.linalg.rq "(in SciPy v1.8.1)") Compute RQ decomposition of a matrix.

#### Notes

This is an interface to the LAPACK routines `dgeqrf`, `zgeqrf`, `dorgqr`, and `zungqr`.
For more information on the qr factorization, see for example: <https://en.wikipedia.org/wiki/QR_factorization>

Subclasses of [`ndarray`](numpy.ndarray#numpy.ndarray "numpy.ndarray") are preserved except for the ‘raw’ mode. So if `a` is of type [`matrix`](numpy.matrix#numpy.matrix "numpy.matrix"), all the return values will be matrices too. New ‘reduced’, ‘complete’, and ‘raw’ options for mode were added in NumPy 1.8.0 and the old option ‘full’ was made an alias of ‘reduced’. In addition the options ‘full’ and ‘economic’ were deprecated. Because ‘full’ was the previous default and ‘reduced’ is the new default, backward compatibility can be maintained by letting `mode` default. The ‘raw’ option was added so that LAPACK routines that can multiply arrays by q using the Householder reflectors can be used. Note that in this case the returned arrays are of type np.double or np.cdouble and the h array is transposed to be FORTRAN compatible. No routines using the ‘raw’ return are currently exposed by numpy, but some are available in lapack_lite and just await the necessary work.

#### Examples

```
>>> a = np.random.randn(9, 6)
>>> q, r = np.linalg.qr(a)
>>> np.allclose(a, np.dot(q, r)) # a does equal qr
True
>>> r2 = np.linalg.qr(a, mode='r')
>>> np.allclose(r, r2) # mode='r' returns the same r as mode='full'
True
>>> a = np.random.normal(size=(3, 2, 2)) # Stack of 2 x 2 matrices as input
>>> q, r = np.linalg.qr(a)
>>> q.shape
(3, 2, 2)
>>> r.shape
(3, 2, 2)
>>> np.allclose(a, np.matmul(q, r))
True
```

Example illustrating a common use of [`qr`](#numpy.linalg.qr "numpy.linalg.qr"): solving of least squares problems. What are the least-squares-best `m` and `y0` in `y = y0 + mx` for the following data: {(0,1), (1,0), (1,2), (2,1)}. (Graph the points and you’ll see that it should be y0 = 0, m = 1.)
The answer is provided by solving the over-determined matrix equation `Ax = b`, where:

```
A = array([[0, 1], [1, 1], [1, 1], [2, 1]])
x = array([[y0], [m]])
b = array([[1], [0], [2], [1]])
```

If A = qr such that q is orthonormal (which is always possible via Gram-Schmidt), then `x = inv(r) * (q.T) * b`. (In numpy practice, however, we simply use [`lstsq`](numpy.linalg.lstsq#numpy.linalg.lstsq "numpy.linalg.lstsq").)

```
>>> A = np.array([[0, 1], [1, 1], [1, 1], [2, 1]])
>>> A
array([[0, 1],
       [1, 1],
       [1, 1],
       [2, 1]])
>>> b = np.array([1, 2, 2, 3])
>>> q, r = np.linalg.qr(A)
>>> p = np.dot(q.T, b)
>>> np.dot(np.linalg.inv(r), p)
array([ 1.,  1.])
```

© 2005–2022 NumPy Developers Licensed under the 3-clause BSD License.
<https://numpy.org/doc/1.23/reference/generated/numpy.linalg.qr.html>

numpy.linalg.cond
=================

linalg.cond(*x*, *p=None*)[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/linalg/linalg.py#L1678-L1794)

Compute the condition number of a matrix. This function is capable of returning the condition number using one of seven different norms, depending on the value of `p` (see Parameters below).

Parameters **x**(
…, M, N) array_like The matrix whose condition number is sought. **p**{None, 1, -1, 2, -2, inf, -inf, ‘fro’}, optional Order of the norm used in the condition number computation:

| p | norm for matrices |
| --- | --- |
| None | 2-norm, computed directly using the `SVD` |
| ‘fro’ | Frobenius norm |
| inf | max(sum(abs(x), axis=1)) |
| -inf | min(sum(abs(x), axis=1)) |
| 1 | max(sum(abs(x), axis=0)) |
| -1 | min(sum(abs(x), axis=0)) |
| 2 | 2-norm (largest sing. value) |
| -2 | smallest singular value |

inf means the [`numpy.inf`](../constants#numpy.inf "numpy.inf") object, and the Frobenius norm is the root-of-sum-of-squares norm.

Returns **c**{float, inf} The condition number of the matrix. May be infinite.

See also [`numpy.linalg.norm`](numpy.linalg.norm#numpy.linalg.norm "numpy.linalg.norm")

#### Notes

The condition number of `x` is defined as the norm of `x` times the norm of the inverse of `x` [[1]](#r611900c44d60-1); the norm can be the usual L2-norm (root-of-sum-of-squares) or one of a number of other matrix norms.

#### References

[1](#id1) <NAME>, *Linear Algebra and Its Applications*, Orlando, FL, Academic Press, Inc., 1980, pg. 285.

#### Examples

```
>>> from numpy import linalg as LA
>>> a = np.array([[1, 0, -1], [0, 1, 0], [1, 0, 1]])
>>> a
array([[ 1,  0, -1],
       [ 0,  1,  0],
       [ 1,  0,  1]])
>>> LA.cond(a)
1.4142135623730951
>>> LA.cond(a, 'fro')
3.1622776601683795
>>> LA.cond(a, np.inf)
2.0
>>> LA.cond(a, -np.inf)
1.0
>>> LA.cond(a, 1)
2.0
>>> LA.cond(a, -1)
1.0
>>> LA.cond(a, 2)
1.4142135623730951
>>> LA.cond(a, -2)
0.70710678118654746 # may vary
>>> min(LA.svd(a, compute_uv=False))*min(LA.svd(LA.inv(a), compute_uv=False))
0.70710678118654746 # may vary
```

© 2005–2022 NumPy Developers Licensed under the 3-clause BSD License.
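As a quick sanity check on the definition in the Notes, the condition number can also be computed by hand as the product of the two norms. This sketch is not part of the original docs; it verifies the identity for the 1-norm, reusing the matrix from the examples above:

```python
import numpy as np

# cond(x, p) should equal norm(x, p) * norm(inv(x), p) for an
# invertible matrix; here we check the p=1 case by hand.
a = np.array([[1.0, 0.0, -1.0],
              [0.0, 1.0,  0.0],
              [1.0, 0.0,  1.0]])

direct = np.linalg.cond(a, 1)
manual = np.linalg.norm(a, 1) * np.linalg.norm(np.linalg.inv(a), 1)
print(direct, manual)  # 2.0 2.0
```

Here `direct` and `manual` agree at 2.0, matching `LA.cond(a, 1)` in the examples above.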
<https://numpy.org/doc/1.23/reference/generated/numpy.linalg.cond.html>

numpy.linalg.matrix_rank
========================

linalg.matrix_rank(*A*, *tol=None*, *hermitian=False*)[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/linalg/linalg.py#L1801-L1903)

Return matrix rank of array using SVD method. Rank of the array is the number of singular values of the array that are greater than `tol`.

Changed in version 1.14: Can now operate on stacks of matrices

Parameters **A**{(M,), (
…, M, N)} array_like Input vector or stack of matrices. **tol**(
…) array_like, float, optional Threshold below which SVD values are considered zero. If `tol` is None, and `S` is an array with singular values for `M`, and `eps` is the epsilon value for datatype of `S`, then `tol` is set to `S.max() * max(M, N) * eps`. Changed in version 1.14: Broadcasted against the stack of matrices **hermitian**bool, optional If True, `A` is assumed to be Hermitian (symmetric if real-valued), enabling a more efficient method for finding singular values. Defaults to False. New in version 1.14. Returns **rank**(
…) array_like Rank of A.

#### Notes

The default threshold to detect rank deficiency is a test on the magnitude of the singular values of `A`. By default, we identify singular values less than `S.max() * max(M, N) * eps` as indicating rank deficiency (with the symbols defined above). This is the algorithm MATLAB uses [1]. It also appears in *Numerical recipes* in the discussion of SVD solutions for linear least squares [2]. This default threshold is designed to detect rank deficiency accounting for the numerical errors of the SVD computation. Imagine that there is a column in `A` that is an exact (in floating point) linear combination of other columns in `A`. Computing the SVD on `A` will not produce a singular value exactly equal to 0 in general: any difference of the smallest SVD value from 0 will be caused by numerical imprecision in the calculation of the SVD. Our threshold for small SVD values takes this numerical imprecision into account, and the default threshold will detect such numerical rank deficiency. The threshold may declare a matrix `A` rank deficient even if the linear combination of some columns of `A` is not exactly equal to another column of `A` but only numerically very close to another column of `A`.

We chose our default threshold because it is in wide use. Other thresholds are possible. For example, elsewhere in the 2007 edition of *Numerical recipes* there is an alternative threshold of `S.max() * np.finfo(A.dtype).eps / 2. * np.sqrt(m + n + 1.)`. The authors describe this threshold as being based on “expected roundoff error” (p 71).

The thresholds above deal with floating point roundoff error in the calculation of the SVD. However, you may have more information about the sources of error in `A` that would make you consider other tolerance values to detect *effective* rank deficiency. The most useful measure of the tolerance depends on the operations you intend to use on your matrix.
For example, if your data come from uncertain measurements with uncertainties greater than floating point epsilon, choosing a tolerance near that uncertainty may be preferable. The tolerance may be absolute if the uncertainties are absolute rather than relative.

#### References

1 MATLAB reference documentation, “Rank” <https://www.mathworks.com/help/techdoc/ref/rank.html>

2 <NAME>, <NAME>, <NAME> and <NAME>, “Numerical Recipes (3rd edition)”, Cambridge University Press, 2007, page 795.

#### Examples

```
>>> from numpy.linalg import matrix_rank
>>> matrix_rank(np.eye(4)) # Full rank matrix
4
>>> I=np.eye(4); I[-1,-1] = 0. # rank deficient matrix
>>> matrix_rank(I)
3
>>> matrix_rank(np.ones((4,))) # 1 dimension - rank 1 unless all 0
1
>>> matrix_rank(np.zeros((4,)))
0
```

© 2005–2022 NumPy Developers Licensed under the 3-clause BSD License.
<https://numpy.org/doc/1.23/reference/generated/numpy.linalg.matrix_rank.html>

numpy.linalg.slogdet
====================

linalg.slogdet(*a*)[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/linalg/linalg.py#L2013-L2097)

Compute the sign and (natural) logarithm of the determinant of an array. If an array has a very small or very large determinant, then a call to [`det`](numpy.linalg.det#numpy.linalg.det "numpy.linalg.det") may overflow or underflow. This routine is more robust against such issues, because it computes the logarithm of the determinant rather than the determinant itself.

Parameters **a**(
…, M, M) array_like Input array, has to be a square 2-D array. Returns **sign**(
…) array_like A number representing the sign of the determinant. For a real matrix, this is 1, 0, or -1. For a complex matrix, this is a complex number with absolute value 1 (i.e., it is on the unit circle), or else 0. **logdet**(
…) array_like The natural log of the absolute value of the determinant. If the determinant is zero, then [`sign`](numpy.sign#numpy.sign "numpy.sign") will be 0 and `logdet` will be -Inf. In all cases, the determinant is equal to `sign * np.exp(logdet)`.

See also [`det`](numpy.linalg.det#numpy.linalg.det "numpy.linalg.det")

#### Notes

New in version 1.8.0. Broadcasting rules apply, see the [`numpy.linalg`](../routines.linalg#module-numpy.linalg "numpy.linalg") documentation for details.

New in version 1.6.0. The determinant is computed via LU factorization using the LAPACK routine `z/dgetrf`.

#### Examples

The determinant of a 2-D array `[[a, b], [c, d]]` is `ad - bc`:

```
>>> a = np.array([[1, 2], [3, 4]])
>>> (sign, logdet) = np.linalg.slogdet(a)
>>> (sign, logdet)
(-1, 0.69314718055994529) # may vary
>>> sign * np.exp(logdet)
-2.0
```

Computing log-determinants for a stack of matrices:

```
>>> a = np.array([ [[1, 2], [3, 4]], [[1, 2], [2, 1]], [[1, 3], [3, 1]] ])
>>> a.shape
(3, 2, 2)
>>> sign, logdet = np.linalg.slogdet(a)
>>> (sign, logdet)
(array([-1., -1., -1.]), array([ 0.69314718,  1.09861229,  2.07944154]))
>>> sign * np.exp(logdet)
array([-2., -3., -8.])
```

This routine succeeds where ordinary [`det`](numpy.linalg.det#numpy.linalg.det "numpy.linalg.det") does not:

```
>>> np.linalg.det(np.eye(500) * 0.1)
0.0
>>> np.linalg.slogdet(np.eye(500) * 0.1)
(1, -1151.2925464970228)
```

© 2005–2022 NumPy Developers Licensed under the 3-clause BSD License.
<https://numpy.org/doc/1.23/reference/generated/numpy.linalg.slogdet.html>

numpy.linalg.tensorsolve
========================

linalg.tensorsolve(*a*, *b*, *axes=None*)[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/linalg/linalg.py#L239-L313)

Solve the tensor equation `a x = b` for x. It is assumed that all indices of `x` are summed over in the product, together with the rightmost indices of `a`, as is done in, for example, `tensordot(a, x, axes=b.ndim)`.
Parameters **a**array_like Coefficient tensor, of shape `b.shape + Q`. `Q`, a tuple, equals the shape of that sub-tensor of `a` consisting of the appropriate number of its rightmost indices, and must be such that `prod(Q) == prod(b.shape)` (in which sense `a` is said to be ‘square’). **b**array_like Right-hand tensor, which can be of any shape. **axes**tuple of ints, optional Axes in `a` to reorder to the right, before inversion. If None (default), no reordering is done.

Returns **x**ndarray, shape Q

Raises LinAlgError If `a` is singular or not ‘square’ (in the above sense).

See also [`numpy.tensordot`](numpy.tensordot#numpy.tensordot "numpy.tensordot"), [`tensorinv`](numpy.linalg.tensorinv#numpy.linalg.tensorinv "numpy.linalg.tensorinv"), [`numpy.einsum`](numpy.einsum#numpy.einsum "numpy.einsum")

#### Examples

```
>>> a = np.eye(2*3*4)
>>> a.shape = (2*3, 4, 2, 3, 4)
>>> b = np.random.randn(2*3, 4)
>>> x = np.linalg.tensorsolve(a, b)
>>> x.shape
(2, 3, 4)
>>> np.allclose(np.tensordot(a, x, axes=3), b)
True
```

© 2005–2022 NumPy Developers Licensed under the 3-clause BSD License.
<https://numpy.org/doc/1.23/reference/generated/numpy.linalg.tensorsolve.html>

numpy.linalg.tensorinv
======================

linalg.tensorinv(*a*, *ind=2*)[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/linalg/linalg.py#L409-L474)

Compute the ‘inverse’ of an N-dimensional array. The result is an inverse for `a` relative to the tensordot operation `tensordot(a, b, ind)`, i. e., up to floating-point accuracy, `tensordot(tensorinv(a), a, ind)` is the “identity” tensor for the tensordot operation.

Parameters **a**array_like Tensor to ‘invert’. Its shape must be ‘square’, i. e., `prod(a.shape[:ind]) == prod(a.shape[ind:])`. **ind**int, optional Number of first indices that are involved in the inverse sum. Must be a positive integer, default is 2.

Returns **b**ndarray `a`’s tensordot inverse, shape `a.shape[ind:] + a.shape[:ind]`.
Raises LinAlgError If `a` is singular or not ‘square’ (in the above sense).

See also [`numpy.tensordot`](numpy.tensordot#numpy.tensordot "numpy.tensordot"), [`tensorsolve`](numpy.linalg.tensorsolve#numpy.linalg.tensorsolve "numpy.linalg.tensorsolve")

#### Examples

```
>>> a = np.eye(4*6)
>>> a.shape = (4, 6, 8, 3)
>>> ainv = np.linalg.tensorinv(a, ind=2)
>>> ainv.shape
(8, 3, 4, 6)
>>> b = np.random.randn(4, 6)
>>> np.allclose(np.tensordot(ainv, b), np.linalg.tensorsolve(a, b))
True
```

```
>>> a = np.eye(4*6)
>>> a.shape = (24, 8, 3)
>>> ainv = np.linalg.tensorinv(a, ind=1)
>>> ainv.shape
(8, 3, 24)
>>> b = np.random.randn(24)
>>> np.allclose(np.tensordot(ainv, b, 1), np.linalg.tensorsolve(a, b))
True
```

© 2005–2022 NumPy Developers Licensed under the 3-clause BSD License.
<https://numpy.org/doc/1.23/reference/generated/numpy.linalg.tensorinv.html>

numpy.linalg.LinAlgError
========================

*exception*linalg.LinAlgError[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/linalg/__init__.py)

Generic Python-exception-derived object raised by linalg functions. General purpose exception class, derived from Python’s exception.Exception class, programmatically raised in linalg functions when a Linear Algebra-related condition would prevent further correct execution of the function.

Parameters **None**

#### Examples

```
>>> from numpy import linalg as LA
>>> LA.inv(np.zeros((2,2)))
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "...linalg.py", line 350, in inv
    return wrap(solve(a, identity(a.shape[0], dtype=a.dtype)))
  File "...linalg.py", line 249, in solve
    raise LinAlgError('Singular matrix')
numpy.linalg.LinAlgError: Singular matrix
```

© 2005–2022 NumPy Developers Licensed under the 3-clause BSD License.
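Because `LinAlgError` derives from Python's `Exception`, it can be handled like any other exception. A minimal sketch follows; the fallback strategy here is illustrative, not from the original docs:

```python
import numpy as np

def solve_or_lstsq(a, b):
    """Try an exact solve; fall back to least-squares if `a` is singular."""
    try:
        return np.linalg.solve(a, b)
    except np.linalg.LinAlgError:
        # solve() raised 'Singular matrix'; lstsq handles rank-deficient a.
        return np.linalg.lstsq(a, b, rcond=None)[0]

x = solve_or_lstsq(np.zeros((2, 2)), np.array([1.0, 2.0]))
print(x)  # [0. 0.] -- the minimum-norm least-squares solution
```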
<https://numpy.org/doc/1.23/reference/generated/numpy.linalg.LinAlgError.html>

numpy.lib.format.write_array_header_2_0
=======================================

lib.format.write_array_header_2_0(*fp*, *d*)[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/lib/format.py#L453-L466)

Write the header for an array using the 2.0 format. The 2.0 format allows storing very large structured arrays.

New in version 1.9.0.

Parameters **fp**filelike object **d**dict This has the appropriate entries for writing its string representation to the header of the file.

© 2005–2022 NumPy Developers Licensed under the 3-clause BSD License.
<https://numpy.org/doc/1.23/reference/generated/numpy.lib.format.write_array_header_2_0.html>

numpy.isneginf
==============

numpy.isneginf(*x*, *out=None*)[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/lib/ufunclike.py#L199-L268)

Test element-wise for negative infinity, return result as bool array.

Parameters **x**array_like The input array. **out**array_like, optional A location into which the result is stored. If provided, it must have a shape that the input broadcasts to. If not provided or None, a freshly-allocated boolean array is returned.

Returns **out**ndarray A boolean array with the same dimensions as the input. If second argument is not supplied then a numpy boolean array is returned with values True where the corresponding element of the input is negative infinity and values False where the element of the input is not negative infinity. If a second argument is supplied the result is stored there. If the type of that array is a numeric type the result is represented as zeros and ones, if the type is boolean then as False and True. The return value `out` is then a reference to that array.
See also [`isinf`](numpy.isinf#numpy.isinf "numpy.isinf"), [`isposinf`](numpy.isposinf#numpy.isposinf "numpy.isposinf"), [`isnan`](numpy.isnan#numpy.isnan "numpy.isnan"), [`isfinite`](numpy.isfinite#numpy.isfinite "numpy.isfinite")

#### Notes

NumPy uses the IEEE Standard for Binary Floating-Point for Arithmetic (IEEE 754). Errors result if the second argument is also supplied when x is a scalar input, if first and second arguments have different shapes, or if the first argument has complex values.

#### Examples

```
>>> np.isneginf(np.NINF)
True
>>> np.isneginf(np.inf)
False
>>> np.isneginf(np.PINF)
False
>>> np.isneginf([-np.inf, 0., np.inf])
array([ True, False, False])
```

```
>>> x = np.array([-np.inf, 0., np.inf])
>>> y = np.array([2, 2, 2])
>>> np.isneginf(x, y)
array([1, 0, 0])
>>> y
array([1, 0, 0])
```

© 2005–2022 NumPy Developers Licensed under the 3-clause BSD License.
<https://numpy.org/doc/1.23/reference/generated/numpy.isneginf.html>

numpy.isposinf
==============

numpy.isposinf(*x*, *out=None*)[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/lib/ufunclike.py#L127-L196)

Test element-wise for positive infinity, return result as bool array.

Parameters **x**array_like The input array. **out**array_like, optional A location into which the result is stored. If provided, it must have a shape that the input broadcasts to. If not provided or None, a freshly-allocated boolean array is returned.

Returns **out**ndarray A boolean array with the same dimensions as the input. If second argument is not supplied then a boolean array is returned with values True where the corresponding element of the input is positive infinity and values False where the element of the input is not positive infinity. If a second argument is supplied the result is stored there. If the type of that array is a numeric type the result is represented as zeros and ones, if the type is boolean then as False and True. The return value `out` is then a reference to that array.
See also [`isinf`](numpy.isinf#numpy.isinf "numpy.isinf"), [`isneginf`](numpy.isneginf#numpy.isneginf "numpy.isneginf"), [`isfinite`](numpy.isfinite#numpy.isfinite "numpy.isfinite"), [`isnan`](numpy.isnan#numpy.isnan "numpy.isnan")

#### Notes

NumPy uses the IEEE Standard for Binary Floating-Point for Arithmetic (IEEE 754). Errors result if the second argument is also supplied when x is a scalar input, if first and second arguments have different shapes, or if the first argument has complex values.

#### Examples

```
>>> np.isposinf(np.PINF)
True
>>> np.isposinf(np.inf)
True
>>> np.isposinf(np.NINF)
False
>>> np.isposinf([-np.inf, 0., np.inf])
array([False, False,  True])
```

```
>>> x = np.array([-np.inf, 0., np.inf])
>>> y = np.array([2, 2, 2])
>>> np.isposinf(x, y)
array([0, 0, 1])
>>> y
array([0, 0, 1])
```

© 2005–2022 NumPy Developers Licensed under the 3-clause BSD License.
<https://numpy.org/doc/1.23/reference/generated/numpy.isposinf.html>

numpy.iscomplex
===============

numpy.iscomplex(*x*)[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/lib/type_check.py#L210-L244)

Returns a bool array, where True if input element is complex. What is tested is whether the input has a non-zero imaginary part, not if the input type is complex.

Parameters **x**array_like Input array.

Returns **out**ndarray of bools Output array.

See also [`isreal`](numpy.isreal#numpy.isreal "numpy.isreal") [`iscomplexobj`](numpy.iscomplexobj#numpy.iscomplexobj "numpy.iscomplexobj") Return True if x is a complex type or an array of complex numbers.

#### Examples

```
>>> np.iscomplex([1+1j, 1+0j, 4.5, 3, 2, 2j])
array([ True, False, False, False, False,  True])
```

© 2005–2022 NumPy Developers Licensed under the 3-clause BSD License.
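The distinction drawn above (value, not type) is easiest to see next to `iscomplexobj`, which tests the dtype instead. A small sketch, not from the original docs:

```python
import numpy as np

# Same array, two different questions: iscomplex looks at the values
# (non-zero imaginary part?), iscomplexobj looks at the type.
z = np.array([1+0j, 2+0j])
print(np.iscomplex(z))     # [False False] -> imaginary parts are zero
print(np.iscomplexobj(z))  # True          -> dtype is complex
```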
<https://numpy.org/doc/1.23/reference/generated/numpy.iscomplex.html>

numpy.iscomplexobj
==================

numpy.iscomplexobj(*x*)[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/lib/type_check.py#L303-L341)

Check for a complex type or an array of complex numbers. The type of the input is checked, not the value. Even if the input has an imaginary part equal to zero, [`iscomplexobj`](#numpy.iscomplexobj "numpy.iscomplexobj") evaluates to True.

Parameters **x**any The input can be of any type and shape.

Returns **iscomplexobj**bool The return value, True if `x` is of a complex type or has at least one complex element.

See also [`isrealobj`](numpy.isrealobj#numpy.isrealobj "numpy.isrealobj"), [`iscomplex`](numpy.iscomplex#numpy.iscomplex "numpy.iscomplex")

#### Examples

```
>>> np.iscomplexobj(1)
False
>>> np.iscomplexobj(1+0j)
True
>>> np.iscomplexobj([3, 1+0j, True])
True
```

© 2005–2022 NumPy Developers Licensed under the 3-clause BSD License.
<https://numpy.org/doc/1.23/reference/generated/numpy.iscomplexobj.html>

numpy.isfortran
===============

numpy.isfortran(*a*)[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/core/numeric.py#L505-L570)

Check if the array is Fortran contiguous but *not* C contiguous. This function is obsolete and, because of changes due to relaxed stride checking, its return value for the same array may differ for versions of NumPy >= 1.10.0 and previous versions. If you only want to check if an array is Fortran contiguous use `a.flags.f_contiguous` instead.

Parameters **a**ndarray Input array.

Returns **isfortran**bool Returns True if the array is Fortran contiguous but *not* C contiguous.

#### Examples

np.array allows you to specify whether the array is written in C-contiguous order (last index varies the fastest), or FORTRAN-contiguous order in memory (first index varies the fastest).
```
>>> a = np.array([[1, 2, 3], [4, 5, 6]], order='C')
>>> a
array([[1, 2, 3],
       [4, 5, 6]])
>>> np.isfortran(a)
False
```

```
>>> b = np.array([[1, 2, 3], [4, 5, 6]], order='F')
>>> b
array([[1, 2, 3],
       [4, 5, 6]])
>>> np.isfortran(b)
True
```

The transpose of a C-ordered array is a FORTRAN-ordered array.

```
>>> a = np.array([[1, 2, 3], [4, 5, 6]], order='C')
>>> a
array([[1, 2, 3],
       [4, 5, 6]])
>>> np.isfortran(a)
False
>>> b = a.T
>>> b
array([[1, 4],
       [2, 5],
       [3, 6]])
>>> np.isfortran(b)
True
```

C-ordered arrays evaluate as False even if they are also FORTRAN-ordered.

```
>>> np.isfortran(np.array([1, 2], order='F'))
False
```

© 2005–2022 NumPy Developers Licensed under the 3-clause BSD License.
<https://numpy.org/doc/1.23/reference/generated/numpy.isfortran.html>

numpy.isreal
============

numpy.isreal(*x*)[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/lib/type_check.py#L247-L300)

Returns a bool array, where True if input element is real. If element has complex type with zero complex part, the return value for that element is True.

Parameters **x**array_like Input array.

Returns **out**ndarray, bool Boolean array of same shape as `x`.

See also [`iscomplex`](numpy.iscomplex#numpy.iscomplex "numpy.iscomplex") [`isrealobj`](numpy.isrealobj#numpy.isrealobj "numpy.isrealobj") Return True if x is not a complex type.

#### Notes

[`isreal`](#numpy.isreal "numpy.isreal") may behave unexpectedly for string or object arrays (see examples)

#### Examples

```
>>> a = np.array([1+1j, 1+0j, 4.5, 3, 2, 2j], dtype=complex)
>>> np.isreal(a)
array([False,  True,  True,  True,  True, False])
```

The function does not work on string arrays.

```
>>> a = np.array([2j, "a"], dtype="U")
>>> np.isreal(a) # Warns about non-elementwise comparison
False
```

Returns True for all elements in input array of `dtype=object` even if any of the elements is complex.
```
>>> a = np.array([1, "2", 3+4j], dtype=object)
>>> np.isreal(a)
array([ True,  True,  True])
```

isreal should not be used with object arrays

```
>>> a = np.array([1+2j, 2+1j], dtype=object)
>>> np.isreal(a)
array([ True,  True])
```

© 2005–2022 NumPy Developers Licensed under the 3-clause BSD License.
<https://numpy.org/doc/1.23/reference/generated/numpy.isreal.html>

numpy.isrealobj
===============

numpy.isrealobj(*x*)[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/lib/type_check.py#L344-L390)

Return True if x is not a complex type or an array of complex numbers. The type of the input is checked, not the value. So even if the input has an imaginary part equal to zero, [`isrealobj`](#numpy.isrealobj "numpy.isrealobj") evaluates to False if the data type is complex.

Parameters **x**any The input can be of any type and shape.

Returns **y**bool The return value, False if `x` is of a complex type.

See also [`iscomplexobj`](numpy.iscomplexobj#numpy.iscomplexobj "numpy.iscomplexobj"), [`isreal`](numpy.isreal#numpy.isreal "numpy.isreal")

#### Notes

The function is only meant for arrays with numerical values but it accepts all other objects. Since it assumes array input, the return value of other objects may be True.

```
>>> np.isrealobj('A string')
True
>>> np.isrealobj(False)
True
>>> np.isrealobj(None)
True
```

#### Examples

```
>>> np.isrealobj(1)
True
>>> np.isrealobj(1+0j)
False
>>> np.isrealobj([3, 1+0j, True])
False
```

© 2005–2022 NumPy Developers Licensed under the 3-clause BSD License.
<https://numpy.org/doc/1.23/reference/generated/numpy.isrealobj.html>

numpy.isscalar
==============

numpy.isscalar(*element*)[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/core/numeric.py#L1873-L1951)

Returns True if the type of `element` is a scalar type.

Parameters **element**any Input argument, can be of any type and shape.

Returns **val**bool True if `element` is a scalar type, False if it is not.
See also `ndim` Get the number of dimensions of an array

#### Notes

If you need a stricter way to identify a *numerical* scalar, use `isinstance(x, numbers.Number)`, as that returns `False` for most non-numerical elements such as strings. In most cases `np.ndim(x) == 0` should be used instead of this function, as that will also return true for 0d arrays. This is how numpy overloads functions in the style of the `dx` arguments to [`gradient`](numpy.gradient#numpy.gradient "numpy.gradient") and the `bins` argument to [`histogram`](numpy.histogram#numpy.histogram "numpy.histogram").

Some key differences:

| x | `isscalar(x)` | `np.ndim(x) == 0` |
| --- | --- | --- |
| PEP 3141 numeric objects (including builtins) | `True` | `True` |
| builtin string and buffer objects | `True` | `True` |
| other builtin objects, like [`pathlib.Path`](https://docs.python.org/3/library/pathlib.html#pathlib.Path "(in Python v3.10)"), `Exception`, the result of [`re.compile`](https://docs.python.org/3/library/re.html#re.compile "(in Python v3.10)") | `False` | `True` |
| third-party objects like [`matplotlib.figure.Figure`](https://matplotlib.org/stable/api/figure_api.html#matplotlib.figure.Figure "(in Matplotlib v3.5.2)") | `False` | `True` |
| zero-dimensional numpy arrays | `False` | `True` |
| other numpy arrays | `False` | `False` |
| `list`, `tuple`, and other sequence objects | `False` | `False` |

#### Examples

```
>>> np.isscalar(3.1)
True
>>> np.isscalar(np.array(3.1))
False
>>> np.isscalar([3.1])
False
>>> np.isscalar(False)
True
>>> np.isscalar('numpy')
True
```

NumPy supports PEP 3141 numbers:

```
>>> from fractions import Fraction
>>> np.isscalar(Fraction(5, 17))
True
>>> from numbers import Number
>>> np.isscalar(Number())
True
```

© 2005–2022 NumPy Developers Licensed under the 3-clause BSD License.
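The table in the Notes can be exercised directly; this short sketch (not part of the original docs) shows the row where the two predicates disagree most often in practice, the zero-dimensional array:

```python
import numpy as np

x = np.array(3.1)  # 0-d array wrapping a float
print(np.isscalar(x))   # False -- it is an ndarray, not a scalar type
print(np.ndim(x) == 0)  # True  -- the recommended check accepts it
print(np.isscalar(3.1), np.ndim(3.1) == 0)  # True True
```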
<https://numpy.org/doc/1.23/reference/generated/numpy.isscalar.html>

numpy.allclose
==============

numpy.allclose(*a*, *b*, *rtol=1e-05*, *atol=1e-08*, *equal_nan=False*)[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/core/numeric.py#L2194-L2266)

Returns True if two arrays are element-wise equal within a tolerance. The tolerance values are positive, typically very small numbers. The relative difference (`rtol` * abs(`b`)) and the absolute difference `atol` are added together to compare against the absolute difference between `a` and `b`. NaNs are treated as equal if they are in the same place and if `equal_nan=True`. Infs are treated as equal if they are in the same place and of the same sign in both arrays.

Parameters **a, b**array_like Input arrays to compare. **rtol**float The relative tolerance parameter (see Notes). **atol**float The absolute tolerance parameter (see Notes). **equal_nan**bool Whether to compare NaN’s as equal. If True, NaN’s in `a` will be considered equal to NaN’s in `b` in the output array. New in version 1.10.0.

Returns **allclose**bool Returns True if the two arrays are equal within the given tolerance; False otherwise.

See also [`isclose`](numpy.isclose#numpy.isclose "numpy.isclose"), [`all`](numpy.all#numpy.all "numpy.all"), [`any`](numpy.any#numpy.any "numpy.any"), [`equal`](numpy.equal#numpy.equal "numpy.equal")

#### Notes

If the following equation is element-wise True, then allclose returns True.

absolute(`a` - `b`) <= (`atol` + `rtol` * absolute(`b`))

The above equation is not symmetric in `a` and `b`, so that `allclose(a, b)` might be different from `allclose(b, a)` in some rare cases. The comparison of `a` and `b` uses standard broadcasting, which means that `a` and `b` need not have the same shape in order for `allclose(a, b)` to evaluate to True. The same is true for [`equal`](numpy.equal#numpy.equal "numpy.equal") but not [`array_equal`](numpy.array_equal#numpy.array_equal "numpy.array_equal").
[`allclose`](#numpy.allclose "numpy.allclose") is not defined for non-numeric data types. [`bool`](https://docs.python.org/3/library/functions.html#bool "(in Python v3.10)") is considered a numeric data-type for this purpose. #### Examples ``` >>> np.allclose([1e10,1e-7], [1.00001e10,1e-8]) False >>> np.allclose([1e10,1e-8], [1.00001e10,1e-9]) True >>> np.allclose([1e10,1e-8], [1.0001e10,1e-9]) False >>> np.allclose([1.0, np.nan], [1.0, np.nan]) False >>> np.allclose([1.0, np.nan], [1.0, np.nan], equal_nan=True) True ``` © 2005–2022 NumPy Developers Licensed under the 3-clause BSD License. <https://numpy.org/doc/1.23/reference/generated/numpy.allclose.htmlnumpy.isclose ============= numpy.isclose(*a*, *b*, *rtol=1e-05*, *atol=1e-08*, *equal_nan=False*)[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/core/numeric.py#L2273-L2395) Returns a boolean array where two arrays are element-wise equal within a tolerance. The tolerance values are positive, typically very small numbers. The relative difference (`rtol` * abs(`b`)) and the absolute difference `atol` are added together to compare against the absolute difference between `a` and `b`. Warning The default `atol` is not appropriate for comparing numbers that are much smaller than one (see Notes). Parameters **a, b**array_like Input arrays to compare. **rtol**float The relative tolerance parameter (see Notes). **atol**float The absolute tolerance parameter (see Notes). **equal_nan**bool Whether to compare NaN’s as equal. If True, NaN’s in `a` will be considered equal to NaN’s in `b` in the output array. Returns **y**array_like Returns a boolean array of where `a` and `b` are equal within the given tolerance. If both `a` and `b` are scalars, returns a single boolean value. See also [`allclose`](numpy.allclose#numpy.allclose "numpy.allclose") [`math.isclose`](https://docs.python.org/3/library/math.html#math.isclose "(in Python v3.10)") #### Notes New in version 1.7.0. 
For finite values, isclose uses the following equation to test whether two floating point values are equivalent. absolute(`a` - `b`) <= (`atol` + `rtol` * absolute(`b`)) Unlike the built-in [`math.isclose`](https://docs.python.org/3/library/math.html#math.isclose "(in Python v3.10)"), the above equation is not symmetric in `a` and `b` – it assumes `b` is the reference value – so that `isclose(a, b)` might be different from `isclose(b, a)`. Furthermore, the default value of atol is not zero, and is used to determine what small values should be considered close to zero. The default value is appropriate for expected values of order unity: if the expected values are significantly smaller than one, it can result in false positives. `atol` should be carefully selected for the use case at hand. A zero value for `atol` will result in `False` if either `a` or `b` is zero. [`isclose`](#numpy.isclose "numpy.isclose") is not defined for non-numeric data types. [`bool`](https://docs.python.org/3/library/functions.html#bool "(in Python v3.10)") is considered a numeric data-type for this purpose. #### Examples ``` >>> np.isclose([1e10,1e-7], [1.00001e10,1e-8]) array([ True, False]) >>> np.isclose([1e10,1e-8], [1.00001e10,1e-9]) array([ True, True]) >>> np.isclose([1e10,1e-8], [1.0001e10,1e-9]) array([False, True]) >>> np.isclose([1.0, np.nan], [1.0, np.nan]) array([ True, False]) >>> np.isclose([1.0, np.nan], [1.0, np.nan], equal_nan=True) array([ True, True]) >>> np.isclose([1e-8, 1e-7], [0.0, 0.0]) array([ True, False]) >>> np.isclose([1e-100, 1e-7], [0.0, 0.0], atol=0.0) array([False, False]) >>> np.isclose([1e-10, 1e-10], [1e-20, 0.0]) array([ True, True]) >>> np.isclose([1e-10, 1e-10], [1e-20, 0.999999e-10], atol=0.0) array([False, True]) ``` © 2005–2022 NumPy Developers Licensed under the 3-clause BSD License. 
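For finite inputs, the equation in the Notes above can be evaluated by hand and compared against `np.isclose`. This is a sketch of the comparison rule only, not the library's actual implementation, which additionally handles NaN and inf:

```python
import numpy as np

a = np.array([1e10, 1e-8, 1.0])
b = np.array([1.00001e10, 1e-9, 1.5])

# The defaults documented above.
rtol, atol = 1e-05, 1e-08

# Element-wise version of: absolute(a - b) <= (atol + rtol * absolute(b))
manual = np.abs(a - b) <= atol + rtol * np.abs(b)

assert (manual == np.isclose(a, b)).all()
```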
<https://numpy.org/doc/1.23/reference/generated/numpy.isclose.html>

numpy.array_equal
=================

numpy.array_equal(*a1*, *a2*, *equal_nan=False*)[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/core/numeric.py#L2402-L2470) True if two arrays have the same shape and elements, False otherwise. Parameters **a1, a2**array_like Input arrays. **equal_nan**bool Whether to compare NaN’s as equal. If the dtype of a1 and a2 is complex, values will be considered equal if either the real or the imaginary component of a given value is `nan`. New in version 1.19.0. Returns **b**bool Returns True if the arrays are equal. See also [`allclose`](numpy.allclose#numpy.allclose "numpy.allclose") Returns True if two arrays are element-wise equal within a tolerance. [`array_equiv`](numpy.array_equiv#numpy.array_equiv "numpy.array_equiv") Returns True if input arrays are shape consistent and all elements equal. #### Examples

```
>>> np.array_equal([1, 2], [1, 2])
True
>>> np.array_equal(np.array([1, 2]), np.array([1, 2]))
True
>>> np.array_equal([1, 2], [1, 2, 3])
False
>>> np.array_equal([1, 2], [1, 4])
False
>>> a = np.array([1, np.nan])
>>> np.array_equal(a, a)
False
>>> np.array_equal(a, a, equal_nan=True)
True
```

When `equal_nan` is True, complex values with nan components are considered equal if either the real *or* the imaginary components are nan.

```
>>> a = np.array([1 + 1j])
>>> b = a.copy()
>>> a.real = np.nan
>>> b.imag = np.nan
>>> np.array_equal(a, b, equal_nan=True)
True
```

© 2005–2022 NumPy Developers Licensed under the 3-clause BSD License. <https://numpy.org/doc/1.23/reference/generated/numpy.array_equal.html>

numpy.array_equiv
=================

numpy.array_equiv(*a1*, *a2*)[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/core/numeric.py#L2477-L2522) Returns True if input arrays are shape consistent and all elements equal.
Shape consistent means they are either the same shape, or one input array can be broadcasted to create the same shape as the other one. Parameters **a1, a2**array_like Input arrays. Returns **out**bool True if equivalent, False otherwise. #### Examples ``` >>> np.array_equiv([1, 2], [1, 2]) True >>> np.array_equiv([1, 2], [1, 3]) False ``` Showing the shape equivalence: ``` >>> np.array_equiv([1, 2], [[1, 2], [1, 2]]) True >>> np.array_equiv([1, 2], [[1, 2, 1, 2], [1, 2, 1, 2]]) False ``` ``` >>> np.array_equiv([1, 2], [[1, 2], [1, 3]]) False ``` © 2005–2022 NumPy Developers Licensed under the 3-clause BSD License. <https://numpy.org/doc/1.23/reference/generated/numpy.array_equiv.htmlnumpy.ma.MaskType ================= numpy.ma.MaskType[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/core/src/multiarray/scalartypes.c.src) alias of [`numpy.bool_`](../arrays.scalars#numpy.bool_ "numpy.bool_") © 2005–2022 NumPy Developers Licensed under the 3-clause BSD License. <https://numpy.org/doc/1.23/reference/generated/numpy.ma.MaskType.htmlnumpy.ma.masked_array ====================== numpy.ma.masked_array[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/ma/core.py#L2703-L6272) alias of `numpy.ma.core.MaskedArray` © 2005–2022 NumPy Developers Licensed under the 3-clause BSD License. <https://numpy.org/doc/1.23/reference/generated/numpy.ma.masked_array.htmlnumpy.ma.array ============== ma.array(*data*, *dtype=None*, *copy=False*, *order=None*, *mask=False*, *fill_value=None*, *keep_mask=True*, *hard_mask=False*, *shrink=True*, *subok=True*, *ndmin=0*)[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/ma/core.py#L6600-L6613) An array class with possibly masked values. Masked values of True exclude the corresponding element from any computation. 
Construction: ``` x = MaskedArray(data, mask=nomask, dtype=None, copy=False, subok=True, ndmin=0, fill_value=None, keep_mask=True, hard_mask=None, shrink=True, order=None) ``` Parameters **data**array_like Input data. **mask**sequence, optional Mask. Must be convertible to an array of booleans with the same shape as `data`. True indicates a masked (i.e. invalid) data. **dtype**dtype, optional Data type of the output. If [`dtype`](numpy.dtype#numpy.dtype "numpy.dtype") is None, the type of the data argument (`data.dtype`) is used. If [`dtype`](numpy.dtype#numpy.dtype "numpy.dtype") is not None and different from `data.dtype`, a copy is performed. **copy**bool, optional Whether to copy the input data (True), or to use a reference instead. Default is False. **subok**bool, optional Whether to return a subclass of [`MaskedArray`](../maskedarray.baseclass#numpy.ma.MaskedArray "numpy.ma.MaskedArray") if possible (True) or a plain [`MaskedArray`](../maskedarray.baseclass#numpy.ma.MaskedArray "numpy.ma.MaskedArray"). Default is True. **ndmin**int, optional Minimum number of dimensions. Default is 0. **fill_value**scalar, optional Value used to fill in the masked values when necessary. If None, a default based on the data-type is used. **keep_mask**bool, optional Whether to combine `mask` with the mask of the input data, if any (True), or to use only `mask` for the output (False). Default is True. **hard_mask**bool, optional Whether to use a hard mask or not. With a hard mask, masked values cannot be unmasked. Default is False. **shrink**bool, optional Whether to force compression of an empty mask. Default is True. **order**{‘C’, ‘F’, ‘A’}, optional Specify the order of the array. If order is ‘C’, then the array will be in C-contiguous order (last-index varies the fastest). If order is ‘F’, then the returned array will be in Fortran-contiguous order (first-index varies the fastest). 
If order is ‘A’ (default), then the returned array may be in any order (either C-, Fortran-contiguous, or even discontiguous), unless a copy is required, in which case it will be C-contiguous. #### Examples The `mask` can be initialized with an array of boolean values with the same shape as `data`. ``` >>> data = np.arange(6).reshape((2, 3)) >>> np.ma.MaskedArray(data, mask=[[False, True, False], ... [False, False, True]]) masked_array( data=[[0, --, 2], [3, 4, --]], mask=[[False, True, False], [False, False, True]], fill_value=999999) ``` Alternatively, the `mask` can be initialized to homogeneous boolean array with the same shape as `data` by passing in a scalar boolean value: ``` >>> np.ma.MaskedArray(data, mask=False) masked_array( data=[[0, 1, 2], [3, 4, 5]], mask=[[False, False, False], [False, False, False]], fill_value=999999) ``` ``` >>> np.ma.MaskedArray(data, mask=True) masked_array( data=[[--, --, --], [--, --, --]], mask=[[ True, True, True], [ True, True, True]], fill_value=999999, dtype=int64) ``` Note The recommended practice for initializing `mask` with a scalar boolean value is to use `True`/`False` rather than `np.True_`/`np.False_`. The reason is [`nomask`](../maskedarray.baseclass#numpy.ma.nomask "numpy.ma.nomask") is represented internally as `np.False_`. ``` >>> np.False_ is np.ma.nomask True ``` © 2005–2022 NumPy Developers Licensed under the 3-clause BSD License. <https://numpy.org/doc/1.23/reference/generated/numpy.ma.array.htmlnumpy.ma.copy ============= ma.copy(*self*, **args*, ***params) a.copy(order='C'*)*=<numpy.ma.core._frommethod object>* Return a copy of the array. Parameters **order**{‘C’, ‘F’, ‘A’, ‘K’}, optional Controls the memory layout of the copy. ‘C’ means C-order, ‘F’ means F-order, ‘A’ means ‘F’ if `a` is Fortran contiguous, ‘C’ otherwise. ‘K’ means match the layout of `a` as closely as possible. 
(Note that this function and [`numpy.copy`](numpy.copy#numpy.copy "numpy.copy") are very similar but have different default values for their order= arguments, and this function always passes sub-classes through.) See also [`numpy.copy`](numpy.copy#numpy.copy "numpy.copy") Similar function with different default behavior [`numpy.copyto`](numpy.copyto#numpy.copyto "numpy.copyto") #### Notes This function is the preferred method for creating an array copy. The function [`numpy.copy`](numpy.copy#numpy.copy "numpy.copy") is similar, but it defaults to using order ‘K’, and will not pass sub-classes through by default. #### Examples

```
>>> x = np.array([[1,2,3],[4,5,6]], order='F')
>>> y = x.copy()
>>> x.fill(0)
>>> x
array([[0, 0, 0],
       [0, 0, 0]])
>>> y
array([[1, 2, 3],
       [4, 5, 6]])
>>> y.flags['C_CONTIGUOUS']
True
```

© 2005–2022 NumPy Developers Licensed under the 3-clause BSD License. <https://numpy.org/doc/1.23/reference/generated/numpy.ma.copy.html>

numpy.ma.frombuffer
===================

ma.frombuffer(*buffer*, *dtype=float*, *count=-1*, *offset=0*, ***, *like=None*)*=<numpy.ma.core._convert2ma object>* Interpret a buffer as a 1-dimensional array. Parameters **buffer**buffer_like An object that exposes the buffer interface. **dtype**data-type, optional Data-type of the returned array; default: float. **count**int, optional Number of items to read. `-1` means all data in the buffer. **offset**int, optional Start reading the buffer from this offset (in bytes); default: 0. **like**array_like, optional Reference object to allow the creation of arrays which are not NumPy arrays. If an array-like passed in as `like` supports the `__array_function__` protocol, the result will be defined by it. In this case, it ensures the creation of an array object compatible with that passed in via this argument. New in version 1.20.0.
Returns out: MaskedArray #### Notes If the buffer has data that is not in machine byte-order, this should be specified as part of the data-type, e.g.: ``` >>> dt = np.dtype(int) >>> dt = dt.newbyteorder('>') >>> np.frombuffer(buf, dtype=dt) ``` The data of the resulting array will not be byteswapped, but will be interpreted correctly. This function creates a view into the original object. This should be safe in general, but it may make sense to copy the result when the original object is mutable or untrusted. #### Examples ``` >>> s = b'hello world' >>> np.frombuffer(s, dtype='S1', count=5, offset=6) array([b'w', b'o', b'r', b'l', b'd'], dtype='|S1') ``` ``` >>> np.frombuffer(b'\x01\x02', dtype=np.uint8) array([1, 2], dtype=uint8) >>> np.frombuffer(b'\x01\x02\x03\x04\x05', dtype=np.uint8, count=3) array([1, 2, 3], dtype=uint8) ``` © 2005–2022 NumPy Developers Licensed under the 3-clause BSD License. <https://numpy.org/doc/1.23/reference/generated/numpy.ma.frombuffer.htmlnumpy.ma.fromfunction ===================== ma.fromfunction(*function*, *shape*, ***dtype*)*=<numpy.ma.core._convert2ma object>* Construct an array by executing a function over each coordinate. The resulting array therefore has a value `fn(x, y, z)` at coordinate `(x, y, z)`. Parameters **function**callable The function is called with N parameters, where N is the rank of [`shape`](numpy.shape#numpy.shape "numpy.shape"). Each parameter represents the coordinates of the array varying along a specific axis. For example, if [`shape`](numpy.shape#numpy.shape "numpy.shape") were `(2, 2)`, then the parameters would be `array([[0, 0], [1, 1]])` and `array([[0, 1], [0, 1]])` **shape**(N,) tuple of ints Shape of the output array, which also determines the shape of the coordinate arrays passed to `function`. **dtype**data-type, optional Data-type of the coordinate arrays passed to `function`. By default, [`dtype`](numpy.dtype#numpy.dtype "numpy.dtype") is float. 
**like**array_like, optional Reference object to allow the creation of arrays which are not NumPy arrays. If an array-like passed in as `like` supports the `__array_function__` protocol, the result will be defined by it. In this case, it ensures the creation of an array object compatible with that passed in via this argument. New in version 1.20.0. Returns fromfunction: MaskedArray The result of the call to `function` is passed back directly. Therefore the shape of [`fromfunction`](numpy.fromfunction#numpy.fromfunction "numpy.fromfunction") is completely determined by `function`. If `function` returns a scalar value, the shape of [`fromfunction`](numpy.fromfunction#numpy.fromfunction "numpy.fromfunction") would not match the [`shape`](numpy.shape#numpy.shape "numpy.shape") parameter. See also [`indices`](numpy.indices#numpy.indices "numpy.indices"), [`meshgrid`](numpy.meshgrid#numpy.meshgrid "numpy.meshgrid") #### Notes Keywords other than [`dtype`](numpy.dtype#numpy.dtype "numpy.dtype") are passed to `function`. #### Examples ``` >>> np.fromfunction(lambda i, j: i, (2, 2), dtype=float) array([[0., 0.], [1., 1.]]) ``` ``` >>> np.fromfunction(lambda i, j: j, (2, 2), dtype=float) array([[0., 1.], [0., 1.]]) ``` ``` >>> np.fromfunction(lambda i, j: i == j, (3, 3), dtype=int) array([[ True, False, False], [False, True, False], [False, False, True]]) ``` ``` >>> np.fromfunction(lambda i, j: i + j, (3, 3), dtype=int) array([[0, 1, 2], [1, 2, 3], [2, 3, 4]]) ``` © 2005–2022 NumPy Developers Licensed under the 3-clause BSD License. <https://numpy.org/doc/1.23/reference/generated/numpy.ma.fromfunction.htmlnumpy.ma.MaskedArray.copy ========================= method ma.MaskedArray.copy(*order='C'*)[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/ma/core.py#L2577-L2587) Return a copy of the array. Parameters **order**{‘C’, ‘F’, ‘A’, ‘K’}, optional Controls the memory layout of the copy. 
‘C’ means C-order, ‘F’ means F-order, ‘A’ means ‘F’ if `a` is Fortran contiguous, ‘C’ otherwise. ‘K’ means match the layout of `a` as closely as possible. (Note that this function and [`numpy.copy`](numpy.copy#numpy.copy "numpy.copy") are very similar but have different default values for their order= arguments, and this function always passes sub-classes through.) See also [`numpy.copy`](numpy.copy#numpy.copy "numpy.copy") Similar function with different default behavior [`numpy.copyto`](numpy.copyto#numpy.copyto "numpy.copyto") #### Notes This function is the preferred method for creating an array copy. The function [`numpy.copy`](numpy.copy#numpy.copy "numpy.copy") is similar, but it defaults to using order ‘K’, and will not pass sub-classes through by default. #### Examples ``` >>> x = np.array([[1,2,3],[4,5,6]], order='F') ``` ``` >>> y = x.copy() ``` ``` >>> x.fill(0) ``` ``` >>> x array([[0, 0, 0], [0, 0, 0]]) ``` ``` >>> y array([[1, 2, 3], [4, 5, 6]]) ``` ``` >>> y.flags['C_CONTIGUOUS'] True ``` © 2005–2022 NumPy Developers Licensed under the 3-clause BSD License. <https://numpy.org/doc/1.23/reference/generated/numpy.ma.MaskedArray.copy.htmlnumpy.ma.empty ============== ma.empty(*shape*, *dtype=float*, *order='C'*, ***, *like=None*)*=<numpy.ma.core._convert2ma object>* Return a new array of given shape and type, without initializing entries. Parameters **shape**int or tuple of int Shape of the empty array, e.g., `(2, 3)` or `2`. **dtype**data-type, optional Desired output data-type for the array, e.g, [`numpy.int8`](../arrays.scalars#numpy.int8 "numpy.int8"). Default is [`numpy.float64`](../arrays.scalars#numpy.float64 "numpy.float64"). **order**{‘C’, ‘F’}, optional, default: ‘C’ Whether to store multi-dimensional data in row-major (C-style) or column-major (Fortran-style) order in memory. **like**array_like, optional Reference object to allow the creation of arrays which are not NumPy arrays. 
If an array-like passed in as `like` supports the `__array_function__` protocol, the result will be defined by it. In this case, it ensures the creation of an array object compatible with that passed in via this argument. New in version 1.20.0. Returns **out**MaskedArray Array of uninitialized (arbitrary) data of the given shape, dtype, and order. Object arrays will be initialized to None. See also [`empty_like`](numpy.empty_like#numpy.empty_like "numpy.empty_like") Return an empty array with shape and type of input. [`ones`](numpy.ones#numpy.ones "numpy.ones") Return a new array setting values to one. [`zeros`](numpy.zeros#numpy.zeros "numpy.zeros") Return a new array setting values to zero. [`full`](numpy.full#numpy.full "numpy.full") Return a new array of given shape filled with value. #### Notes [`empty`](numpy.empty#numpy.empty "numpy.empty"), unlike [`zeros`](numpy.zeros#numpy.zeros "numpy.zeros"), does not set the array values to zero, and may therefore be marginally faster. On the other hand, it requires the user to manually set all the values in the array, and should be used with caution. #### Examples ``` >>> np.empty([2, 2]) array([[ -9.74499359e+001, 6.69583040e-309], [ 2.13182611e-314, 3.06959433e-309]]) #uninitialized ``` ``` >>> np.empty([2, 2], dtype=int) array([[-1073741821, -1067949133], [ 496041986, 19249760]]) #uninitialized ``` © 2005–2022 NumPy Developers Licensed under the 3-clause BSD License. <https://numpy.org/doc/1.23/reference/generated/numpy.ma.empty.htmlnumpy.ma.empty_like ==================== ma.empty_like(*prototype*, *dtype=None*, *order='K'*, *subok=True*, *shape=None*)*=<numpy.ma.core._convert2ma object>* Return a new array with the same shape and type as a given array. Parameters **prototype**array_like The shape and data-type of `prototype` define these same attributes of the returned array. **dtype**data-type, optional Overrides the data type of the result. New in version 1.6.0. 
**order**{‘C’, ‘F’, ‘A’, or ‘K’}, optional Overrides the memory layout of the result. ‘C’ means C-order, ‘F’ means F-order, ‘A’ means ‘F’ if `prototype` is Fortran contiguous, ‘C’ otherwise. ‘K’ means match the layout of `prototype` as closely as possible. New in version 1.6.0. **subok**bool, optional. If True, then the newly created array will use the sub-class type of `prototype`, otherwise it will be a base-class array. Defaults to True. **shape**int or sequence of ints, optional. Overrides the shape of the result. If order=’K’ and the number of dimensions is unchanged, will try to keep order, otherwise, order=’C’ is implied. New in version 1.17.0. Returns **out**MaskedArray Array of uninitialized (arbitrary) data with the same shape and type as `prototype`. See also [`ones_like`](numpy.ones_like#numpy.ones_like "numpy.ones_like") Return an array of ones with shape and type of input. [`zeros_like`](numpy.zeros_like#numpy.zeros_like "numpy.zeros_like") Return an array of zeros with shape and type of input. [`full_like`](numpy.full_like#numpy.full_like "numpy.full_like") Return a new array with shape of input filled with value. [`empty`](numpy.empty#numpy.empty "numpy.empty") Return a new uninitialized array. #### Notes This function does *not* initialize the returned array; to do that use [`zeros_like`](numpy.zeros_like#numpy.zeros_like "numpy.zeros_like") or [`ones_like`](numpy.ones_like#numpy.ones_like "numpy.ones_like") instead. It may be marginally faster than the functions that do set the array values. #### Examples ``` >>> a = ([1,2,3], [4,5,6]) # a is array-like >>> np.empty_like(a) array([[-1073741821, -1073741821, 3], # uninitialized [ 0, 0, -1073741821]]) >>> a = np.array([[1., 2., 3.],[4.,5.,6.]]) >>> np.empty_like(a) array([[ -2.00000715e+000, 1.48219694e-323, -2.00000572e+000], # uninitialized [ 4.38791518e-305, -2.00000715e+000, 4.17269252e-309]]) ``` © 2005–2022 NumPy Developers Licensed under the 3-clause BSD License. 
<https://numpy.org/doc/1.23/reference/generated/numpy.ma.empty_like.html>

numpy.ma.masked_all
===================

ma.masked_all(*shape*, *dtype=<class 'float'>*)[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/ma/extras.py#L104-L153) Empty masked array with all elements masked. Return an empty masked array of the given shape and dtype, where all the data are masked. Parameters **shape**int or tuple of ints Shape of the required MaskedArray, e.g., `(2, 3)` or `2`. **dtype**dtype, optional Data type of the output. Returns **a**MaskedArray A masked array with all data masked. See also [`masked_all_like`](numpy.ma.masked_all_like#numpy.ma.masked_all_like "numpy.ma.masked_all_like") Empty masked array modelled on an existing array. #### Examples

```
>>> import numpy.ma as ma
>>> ma.masked_all((3, 3))
masked_array(
  data=[[--, --, --],
        [--, --, --],
        [--, --, --]],
  mask=[[ True, True, True],
        [ True, True, True],
        [ True, True, True]],
  fill_value=1e+20,
  dtype=float64)
```

The [`dtype`](numpy.dtype#numpy.dtype "numpy.dtype") parameter defines the underlying data type.

```
>>> a = ma.masked_all((3, 3))
>>> a.dtype
dtype('float64')
>>> a = ma.masked_all((3, 3), dtype=np.int32)
>>> a.dtype
dtype('int32')
```

© 2005–2022 NumPy Developers Licensed under the 3-clause BSD License. <https://numpy.org/doc/1.23/reference/generated/numpy.ma.masked_all.html>

numpy.ma.masked_all_like
========================

ma.masked_all_like(*arr*)[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/ma/extras.py#L156-L208) Empty masked array with the properties of an existing array. Return an empty masked array of the same shape and dtype as the array `arr`, where all the data are masked. Parameters **arr**ndarray An array describing the shape and dtype of the required MaskedArray. Returns **a**MaskedArray A masked array with all data masked. Raises AttributeError If `arr` doesn’t have a shape attribute (i.e.
not an ndarray) See also [`masked_all`](numpy.ma.masked_all#numpy.ma.masked_all "numpy.ma.masked_all") Empty masked array with all elements masked. #### Examples ``` >>> import numpy.ma as ma >>> arr = np.zeros((2, 3), dtype=np.float32) >>> arr array([[0., 0., 0.], [0., 0., 0.]], dtype=float32) >>> ma.masked_all_like(arr) masked_array( data=[[--, --, --], [--, --, --]], mask=[[ True, True, True], [ True, True, True]], fill_value=1e+20, dtype=float32) ``` The dtype of the masked array matches the dtype of `arr`. ``` >>> arr.dtype dtype('float32') >>> ma.masked_all_like(arr).dtype dtype('float32') ``` © 2005–2022 NumPy Developers Licensed under the 3-clause BSD License. <https://numpy.org/doc/1.23/reference/generated/numpy.ma.masked_all_like.htmlnumpy.ma.ones ============= ma.ones(*shape*, *dtype=None*, *order='C'*)*=<numpy.ma.core._convert2ma object>* Return a new array of given shape and type, filled with ones. Parameters **shape**int or sequence of ints Shape of the new array, e.g., `(2, 3)` or `2`. **dtype**data-type, optional The desired data-type for the array, e.g., [`numpy.int8`](../arrays.scalars#numpy.int8 "numpy.int8"). Default is [`numpy.float64`](../arrays.scalars#numpy.float64 "numpy.float64"). **order**{‘C’, ‘F’}, optional, default: C Whether to store multi-dimensional data in row-major (C-style) or column-major (Fortran-style) order in memory. **like**array_like, optional Reference object to allow the creation of arrays which are not NumPy arrays. If an array-like passed in as `like` supports the `__array_function__` protocol, the result will be defined by it. In this case, it ensures the creation of an array object compatible with that passed in via this argument. New in version 1.20.0. Returns **out**MaskedArray Array of ones with the given shape, dtype, and order. See also [`ones_like`](numpy.ones_like#numpy.ones_like "numpy.ones_like") Return an array of ones with shape and type of input. 
[`empty`](numpy.empty#numpy.empty "numpy.empty") Return a new uninitialized array. [`zeros`](numpy.zeros#numpy.zeros "numpy.zeros") Return a new array setting values to zero. [`full`](numpy.full#numpy.full "numpy.full") Return a new array of given shape filled with value. #### Examples ``` >>> np.ones(5) array([1., 1., 1., 1., 1.]) ``` ``` >>> np.ones((5,), dtype=int) array([1, 1, 1, 1, 1]) ``` ``` >>> np.ones((2, 1)) array([[1.], [1.]]) ``` ``` >>> s = (2,2) >>> np.ones(s) array([[1., 1.], [1., 1.]]) ``` © 2005–2022 NumPy Developers Licensed under the 3-clause BSD License. <https://numpy.org/doc/1.23/reference/generated/numpy.ma.ones.htmlnumpy.ma.ones_like =================== ma.ones_like(**args*, ***kwargs*)*=<numpy.ma.core._convert2ma object>* Return an array of ones with the same shape and type as a given array. Parameters **a**array_like The shape and data-type of `a` define these same attributes of the returned array. **dtype**data-type, optional Overrides the data type of the result. New in version 1.6.0. **order**{‘C’, ‘F’, ‘A’, or ‘K’}, optional Overrides the memory layout of the result. ‘C’ means C-order, ‘F’ means F-order, ‘A’ means ‘F’ if `a` is Fortran contiguous, ‘C’ otherwise. ‘K’ means match the layout of `a` as closely as possible. New in version 1.6.0. **subok**bool, optional. If True, then the newly created array will use the sub-class type of `a`, otherwise it will be a base-class array. Defaults to True. **shape**int or sequence of ints, optional. Overrides the shape of the result. If order=’K’ and the number of dimensions is unchanged, will try to keep order, otherwise, order=’C’ is implied. New in version 1.17.0. Returns **out**MaskedArray Array of ones with the same shape and type as `a`. See also [`empty_like`](numpy.empty_like#numpy.empty_like "numpy.empty_like") Return an empty array with shape and type of input. [`zeros_like`](numpy.zeros_like#numpy.zeros_like "numpy.zeros_like") Return an array of zeros with shape and type of input. 
[`full_like`](numpy.full_like#numpy.full_like "numpy.full_like") Return a new array with shape of input filled with value. [`ones`](numpy.ones#numpy.ones "numpy.ones") Return a new array setting values to one. #### Examples ``` >>> x = np.arange(6) >>> x = x.reshape((2, 3)) >>> x array([[0, 1, 2], [3, 4, 5]]) >>> np.ones_like(x) array([[1, 1, 1], [1, 1, 1]]) ``` ``` >>> y = np.arange(3, dtype=float) >>> y array([0., 1., 2.]) >>> np.ones_like(y) array([1., 1., 1.]) ``` © 2005–2022 NumPy Developers Licensed under the 3-clause BSD License. <https://numpy.org/doc/1.23/reference/generated/numpy.ma.ones_like.htmlnumpy.ma.zeros ============== ma.zeros(*shape*, *dtype=float*, *order='C'*, ***, *like=None*)*=<numpy.ma.core._convert2ma object>* Return a new array of given shape and type, filled with zeros. Parameters **shape**int or tuple of ints Shape of the new array, e.g., `(2, 3)` or `2`. **dtype**data-type, optional The desired data-type for the array, e.g., [`numpy.int8`](../arrays.scalars#numpy.int8 "numpy.int8"). Default is [`numpy.float64`](../arrays.scalars#numpy.float64 "numpy.float64"). **order**{‘C’, ‘F’}, optional, default: ‘C’ Whether to store multi-dimensional data in row-major (C-style) or column-major (Fortran-style) order in memory. **like**array_like, optional Reference object to allow the creation of arrays which are not NumPy arrays. If an array-like passed in as `like` supports the `__array_function__` protocol, the result will be defined by it. In this case, it ensures the creation of an array object compatible with that passed in via this argument. New in version 1.20.0. Returns **out**MaskedArray Array of zeros with the given shape, dtype, and order. See also [`zeros_like`](numpy.zeros_like#numpy.zeros_like "numpy.zeros_like") Return an array of zeros with shape and type of input. [`empty`](numpy.empty#numpy.empty "numpy.empty") Return a new uninitialized array. [`ones`](numpy.ones#numpy.ones "numpy.ones") Return a new array setting values to one. 
[`full`](numpy.full#numpy.full "numpy.full") Return a new array of given shape filled with value. #### Examples ``` >>> np.zeros(5) array([ 0., 0., 0., 0., 0.]) ``` ``` >>> np.zeros((5,), dtype=int) array([0, 0, 0, 0, 0]) ``` ``` >>> np.zeros((2, 1)) array([[ 0.], [ 0.]]) ``` ``` >>> s = (2,2) >>> np.zeros(s) array([[ 0., 0.], [ 0., 0.]]) ``` ``` >>> np.zeros((2,), dtype=[('x', 'i4'), ('y', 'i4')]) # custom dtype array([(0, 0), (0, 0)], dtype=[('x', '<i4'), ('y', '<i4')]) ``` © 2005–2022 NumPy Developers Licensed under the 3-clause BSD License. <https://numpy.org/doc/1.23/reference/generated/numpy.ma.zeros.htmlnumpy.ma.zeros_like ==================== ma.zeros_like(**args*, ***kwargs*)*=<numpy.ma.core._convert2ma object>* Return an array of zeros with the same shape and type as a given array. Parameters **a**array_like The shape and data-type of `a` define these same attributes of the returned array. **dtype**data-type, optional Overrides the data type of the result. New in version 1.6.0. **order**{‘C’, ‘F’, ‘A’, or ‘K’}, optional Overrides the memory layout of the result. ‘C’ means C-order, ‘F’ means F-order, ‘A’ means ‘F’ if `a` is Fortran contiguous, ‘C’ otherwise. ‘K’ means match the layout of `a` as closely as possible. New in version 1.6.0. **subok**bool, optional. If True, then the newly created array will use the sub-class type of `a`, otherwise it will be a base-class array. Defaults to True. **shape**int or sequence of ints, optional. Overrides the shape of the result. If order=’K’ and the number of dimensions is unchanged, will try to keep order, otherwise, order=’C’ is implied. New in version 1.17.0. Returns **out**MaskedArray Array of zeros with the same shape and type as `a`. See also [`empty_like`](numpy.empty_like#numpy.empty_like "numpy.empty_like") Return an empty array with shape and type of input. [`ones_like`](numpy.ones_like#numpy.ones_like "numpy.ones_like") Return an array of ones with shape and type of input. 
[`full_like`](numpy.full_like#numpy.full_like "numpy.full_like")
Return a new array with shape of input filled with value.

[`zeros`](numpy.zeros#numpy.zeros "numpy.zeros")
Return a new array setting values to zero.

#### Examples

```
>>> x = np.arange(6)
>>> x = x.reshape((2, 3))
>>> x
array([[0, 1, 2],
       [3, 4, 5]])
>>> np.zeros_like(x)
array([[0, 0, 0],
       [0, 0, 0]])
```

```
>>> y = np.arange(3, dtype=float)
>>> y
array([0., 1., 2.])
>>> np.zeros_like(y)
array([0., 0., 0.])
```

numpy.ma.all
============

ma.all(self, axis=None, out=None, keepdims=<no value>) = <numpy.ma.core._frommethod object>

Returns True if all elements evaluate to True.

The output array is masked where all the values along the given axis are masked: if the output would have been a scalar and all the values are masked, then the output is [`masked`](../maskedarray.baseclass#numpy.ma.masked "numpy.ma.masked").

Refer to [`numpy.all`](numpy.all#numpy.all "numpy.all") for full documentation.

See also

[`numpy.ndarray.all`](numpy.ndarray.all#numpy.ndarray.all "numpy.ndarray.all")
Corresponding function for ndarrays.

[`numpy.all`](numpy.all#numpy.all "numpy.all")
Equivalent function.

#### Examples

```
>>> np.ma.array([1,2,3]).all()
True
>>> a = np.ma.array([1,2,3], mask=True)
>>> (a.all() is np.ma.masked)
True
```

numpy.ma.any
============

ma.any(self, axis=None, out=None, keepdims=<no value>) = <numpy.ma.core._frommethod object>

Returns True if any of the elements of `a` evaluate to True.

Masked values are considered as False during computation.

Refer to [`numpy.any`](numpy.any#numpy.any "numpy.any") for full documentation.
See also

[`numpy.ndarray.any`](numpy.ndarray.any#numpy.ndarray.any "numpy.ndarray.any")
Corresponding function for ndarrays.

[`numpy.any`](numpy.any#numpy.any "numpy.any")
Equivalent function.

numpy.ma.count
==============

ma.count(self, axis=None, keepdims=<no value>) = <numpy.ma.core._frommethod object>

Count the non-masked elements of the array along the given axis.

Parameters

**axis** : None or int or tuple of ints, optional
Axis or axes along which the count is performed. The default, None, performs the count over all the dimensions of the input array. `axis` may be negative, in which case it counts from the last to the first axis.

New in version 1.10.0.

If this is a tuple of ints, the count is performed on multiple axes, instead of a single axis or all the axes as before.

**keepdims** : bool, optional
If this is set to True, the axes which are reduced are left in the result as dimensions with size one. With this option, the result will broadcast correctly against the array.

Returns

**result** : ndarray or scalar
An array with the same shape as the input array, with the specified axis removed. If the array is a 0-d array, or if `axis` is None, a scalar is returned.

See also

[`ma.count_masked`](numpy.ma.count_masked#numpy.ma.count_masked "numpy.ma.count_masked")
Count masked elements in array or along a given axis.

#### Examples

```
>>> import numpy.ma as ma
>>> a = ma.arange(6).reshape((2, 3))
>>> a[1, :] = ma.masked
>>> a
masked_array(
  data=[[0, 1, 2],
        [--, --, --]],
  mask=[[False, False, False],
        [ True,  True,  True]],
  fill_value=999999)
>>> a.count()
3
```

When the `axis` keyword is specified an array of appropriate size is returned.

```
>>> a.count(axis=0)
array([1, 1, 1])
>>> a.count(axis=1)
array([3, 0])
```
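The `count` examples above do not exercise the `keepdims` option; the following is a minimal sketch (not part of the original page) of how keeping the reduced axis as size one lets the result broadcast against the source array:

```python
import numpy as np
import numpy.ma as ma

a = ma.arange(6).reshape((2, 3))
a[1, :] = ma.masked  # mask the whole second row

# keepdims=True preserves the reduced axis with length 1,
# so the per-row counts broadcast against the original array.
c = a.count(axis=1, keepdims=True)
print(c.shape)    # (2, 1)
print(c.ravel())  # per-row counts: [3 0]
```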
numpy.ma.count_masked
=====================

ma.count_masked(arr, axis=None) [[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/ma/extras.py#L50-L101)

Count the number of masked elements along the given axis.

Parameters

**arr** : array_like
An array with (possibly) masked elements.

**axis** : int, optional
Axis along which to count. If None (default), a flattened version of the array is used.

Returns

**count** : int, ndarray
The total number of masked elements (axis=None) or the number of masked elements along each slice of the given axis.

See also

[`MaskedArray.count`](numpy.ma.maskedarray.count#numpy.ma.MaskedArray.count "numpy.ma.MaskedArray.count")
Count non-masked elements.

#### Examples

```
>>> import numpy.ma as ma
>>> a = np.arange(9).reshape((3,3))
>>> a = ma.array(a)
>>> a[1, 0] = ma.masked
>>> a[1, 2] = ma.masked
>>> a[2, 1] = ma.masked
>>> a
masked_array(
  data=[[0, 1, 2],
        [--, 4, --],
        [6, --, 8]],
  mask=[[False, False, False],
        [ True, False,  True],
        [False,  True, False]],
  fill_value=999999)
>>> ma.count_masked(a)
3
```

When the `axis` keyword is used an array is returned.

```
>>> ma.count_masked(a, axis=0)
array([1, 1, 1])
>>> ma.count_masked(a, axis=1)
array([0, 2, 1])
```

numpy.ma.getmask
================

ma.getmask(a) [[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/ma/core.py#L1355-L1411)

Return the mask of a masked array, or nomask.

Return the mask of `a` as an ndarray if `a` is a [`MaskedArray`](../maskedarray.baseclass#numpy.ma.MaskedArray "numpy.ma.MaskedArray") and the mask is not [`nomask`](../maskedarray.baseclass#numpy.ma.nomask "numpy.ma.nomask"), else return [`nomask`](../maskedarray.baseclass#numpy.ma.nomask "numpy.ma.nomask").
To guarantee a full array of booleans of the same shape as `a`, use [`getmaskarray`](numpy.ma.getmaskarray#numpy.ma.getmaskarray "numpy.ma.getmaskarray").

Parameters

**a** : array_like
Input [`MaskedArray`](../maskedarray.baseclass#numpy.ma.MaskedArray "numpy.ma.MaskedArray") for which the mask is required.

See also

[`getdata`](numpy.ma.getdata#numpy.ma.getdata "numpy.ma.getdata")
Return the data of a masked array as an ndarray.

[`getmaskarray`](numpy.ma.getmaskarray#numpy.ma.getmaskarray "numpy.ma.getmaskarray")
Return the mask of a masked array, or full array of False.

#### Examples

```
>>> import numpy.ma as ma
>>> a = ma.masked_equal([[1,2],[3,4]], 2)
>>> a
masked_array(
  data=[[1, --],
        [3, 4]],
  mask=[[False,  True],
        [False, False]],
  fill_value=2)
>>> ma.getmask(a)
array([[False,  True],
       [False, False]])
```

Equivalently use the [`MaskedArray`](../maskedarray.baseclass#numpy.ma.MaskedArray "numpy.ma.MaskedArray") `mask` attribute.

```
>>> a.mask
array([[False,  True],
       [False, False]])
```

Result when mask == [`nomask`](../maskedarray.baseclass#numpy.ma.nomask "numpy.ma.nomask")

```
>>> b = ma.masked_array([[1,2],[3,4]])
>>> b
masked_array(
  data=[[1, 2],
        [3, 4]],
  mask=False,
  fill_value=999999)
>>> ma.nomask
False
>>> ma.getmask(b) == ma.nomask
True
>>> b.mask == ma.nomask
True
```

numpy.ma.getmaskarray
=====================

ma.getmaskarray(arr) [[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/ma/core.py#L1417-L1467)

Return the mask of a masked array, or full boolean array of False.

Return the mask of `arr` as an ndarray if `arr` is a [`MaskedArray`](../maskedarray.baseclass#numpy.ma.MaskedArray "numpy.ma.MaskedArray") and the mask is not [`nomask`](../maskedarray.baseclass#numpy.ma.nomask "numpy.ma.nomask"), else return a full boolean array of False of the same shape as `arr`.
Parameters

**arr** : array_like
Input [`MaskedArray`](../maskedarray.baseclass#numpy.ma.MaskedArray "numpy.ma.MaskedArray") for which the mask is required.

See also

[`getmask`](numpy.ma.getmask#numpy.ma.getmask "numpy.ma.getmask")
Return the mask of a masked array, or nomask.

[`getdata`](numpy.ma.getdata#numpy.ma.getdata "numpy.ma.getdata")
Return the data of a masked array as an ndarray.

#### Examples

```
>>> import numpy.ma as ma
>>> a = ma.masked_equal([[1,2],[3,4]], 2)
>>> a
masked_array(
  data=[[1, --],
        [3, 4]],
  mask=[[False,  True],
        [False, False]],
  fill_value=2)
>>> ma.getmaskarray(a)
array([[False,  True],
       [False, False]])
```

Result when mask == `nomask`

```
>>> b = ma.masked_array([[1,2],[3,4]])
>>> b
masked_array(
  data=[[1, 2],
        [3, 4]],
  mask=False,
  fill_value=999999)
>>> ma.getmaskarray(b)
array([[False, False],
       [False, False]])
```

numpy.ma.getdata
================

ma.getdata(a, subok=True) [[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/ma/core.py#L664-L712)

Return the data of a masked array as an ndarray.

Return the data of `a` (if any) as an ndarray if `a` is a `MaskedArray`, else return `a` as an ndarray or subclass (depending on `subok`) if not.

Parameters

**a** : array_like
Input `MaskedArray`, alternatively an ndarray or a subclass thereof.

**subok** : bool
Whether to force the output to be a pure ndarray (False) or to return a subclass of ndarray if appropriate (True, default).

See also

[`getmask`](numpy.ma.getmask#numpy.ma.getmask "numpy.ma.getmask")
Return the mask of a masked array, or nomask.

[`getmaskarray`](numpy.ma.getmaskarray#numpy.ma.getmaskarray "numpy.ma.getmaskarray")
Return the mask of a masked array, or full array of False.
#### Examples

```
>>> import numpy.ma as ma
>>> a = ma.masked_equal([[1,2],[3,4]], 2)
>>> a
masked_array(
  data=[[1, --],
        [3, 4]],
  mask=[[False,  True],
        [False, False]],
  fill_value=2)
>>> ma.getdata(a)
array([[1, 2],
       [3, 4]])
```

Equivalently use the `MaskedArray` `data` attribute.

```
>>> a.data
array([[1, 2],
       [3, 4]])
```

numpy.ma.nonzero
================

ma.nonzero(self) = <numpy.ma.core._frommethod object>

Return the indices of unmasked elements that are not zero.

Returns a tuple of arrays, one for each dimension, containing the indices of the non-zero elements in that dimension. The corresponding non-zero values can be obtained with:

```
a[a.nonzero()]
```

To group the indices by element, rather than dimension, use instead:

```
np.transpose(a.nonzero())
```

The result of this is always a 2d array, with a row for each non-zero element.

Parameters

**None**

Returns

**tuple_of_arrays** : tuple
Indices of elements that are non-zero.

See also

[`numpy.nonzero`](numpy.nonzero#numpy.nonzero "numpy.nonzero")
Function operating on ndarrays.

[`flatnonzero`](numpy.flatnonzero#numpy.flatnonzero "numpy.flatnonzero")
Return indices that are non-zero in the flattened version of the input array.

[`numpy.ndarray.nonzero`](numpy.ndarray.nonzero#numpy.ndarray.nonzero "numpy.ndarray.nonzero")
Equivalent ndarray method.

[`count_nonzero`](numpy.count_nonzero#numpy.count_nonzero "numpy.count_nonzero")
Counts the number of non-zero elements in the input array.

#### Examples

```
>>> import numpy.ma as ma
>>> x = ma.array(np.eye(3))
>>> x
masked_array(
  data=[[1., 0., 0.],
        [0., 1., 0.],
        [0., 0., 1.]],
  mask=False,
  fill_value=1e+20)
>>> x.nonzero()
(array([0, 1, 2]), array([0, 1, 2]))
```

Masked elements are ignored.
```
>>> x[1, 1] = ma.masked
>>> x
masked_array(
  data=[[1.0, 0.0, 0.0],
        [0.0, --, 0.0],
        [0.0, 0.0, 1.0]],
  mask=[[False, False, False],
        [False,  True, False],
        [False, False, False]],
  fill_value=1e+20)
>>> x.nonzero()
(array([0, 2]), array([0, 2]))
```

Indices can also be grouped by element.

```
>>> np.transpose(x.nonzero())
array([[0, 0],
       [2, 2]])
```

A common use for `nonzero` is to find the indices of an array where a condition is True. Given an array `a`, the condition `a > 3` is a boolean array and since False is interpreted as 0, `ma.nonzero(a > 3)` yields the indices of `a` where the condition is true.

```
>>> a = ma.array([[1,2,3],[4,5,6],[7,8,9]])
>>> a > 3
masked_array(
  data=[[False, False, False],
        [ True,  True,  True],
        [ True,  True,  True]],
  mask=False,
  fill_value=True)
>>> ma.nonzero(a > 3)
(array([1, 1, 1, 2, 2, 2]), array([0, 1, 2, 0, 1, 2]))
```

The `nonzero` method of the condition array can also be called.

```
>>> (a > 3).nonzero()
(array([1, 1, 1, 2, 2, 2]), array([0, 1, 2, 0, 1, 2]))
```

numpy.ma.shape
==============

ma.shape(obj) [[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/ma/core.py#L7286-L7288)

Return the shape of an array.

Parameters

**a** : array_like
Input array.

Returns

**shape** : tuple of ints
The elements of the shape tuple give the lengths of the corresponding array dimensions.

See also

[`len`](https://docs.python.org/3/library/functions.html#len "(in Python v3.10)")
`len(a)` is equivalent to `np.shape(a)[0]` for N-D arrays with `N>=1`.

[`ndarray.shape`](numpy.ndarray.shape#numpy.ndarray.shape "numpy.ndarray.shape")
Equivalent array method.

#### Examples

```
>>> np.shape(np.eye(3))
(3, 3)
>>> np.shape([[1, 3]])
(1, 2)
>>> np.shape([0])
(1,)
>>> np.shape(0)
()
```

```
>>> a = np.array([(1, 2), (3, 4), (5, 6)],
...              dtype=[('x', 'i4'), ('y', 'i4')])
>>> np.shape(a)
(3,)
>>> a.shape
(3,)
```

numpy.ma.size
=============

ma.size(obj, axis=None) [[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/ma/core.py#L7292-L7294)

Return the number of elements along a given axis.

Parameters

**a** : array_like
Input data.

**axis** : int, optional
Axis along which the elements are counted. By default, give the total number of elements.

Returns

**element_count** : int
Number of elements along the specified axis.

See also

[`shape`](numpy.shape#numpy.shape "numpy.shape")
Dimensions of array.

[`ndarray.shape`](numpy.ndarray.shape#numpy.ndarray.shape "numpy.ndarray.shape")
Dimensions of array.

[`ndarray.size`](numpy.ndarray.size#numpy.ndarray.size "numpy.ndarray.size")
Number of elements in array.

#### Examples

```
>>> a = np.array([[1,2,3],[4,5,6]])
>>> np.size(a)
6
>>> np.size(a,1)
3
>>> np.size(a,0)
2
```

numpy.ma.is_masked
==================

ma.is_masked(x) [[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/ma/core.py#L6617-L6667)

Determine whether input has masked values.

Accepts any object as input, but always returns False unless the input is a MaskedArray containing masked values.

Parameters

**x** : array_like
Array to check for masked values.

Returns

**result** : bool
True if `x` is a MaskedArray with masked values, False otherwise.
#### Examples

```
>>> import numpy.ma as ma
>>> x = ma.masked_equal([0, 1, 0, 2, 3], 0)
>>> x
masked_array(data=[--, 1, --, 2, 3],
             mask=[ True, False,  True, False, False],
       fill_value=0)
>>> ma.is_masked(x)
True
>>> x = ma.masked_equal([0, 1, 0, 2, 3], 42)
>>> x
masked_array(data=[0, 1, 0, 2, 3],
             mask=False,
       fill_value=42)
>>> ma.is_masked(x)
False
```

Always returns False if `x` isn’t a MaskedArray.

```
>>> x = [False, True, False]
>>> ma.is_masked(x)
False
>>> x = 'a string'
>>> ma.is_masked(x)
False
```

numpy.ma.is_mask
================

ma.is_mask(m) [[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/ma/core.py#L1470-L1535)

Return True if m is a valid, standard mask.

This function does not check the contents of the input, only that the type is MaskType. In particular, this function returns False if the mask has a flexible dtype.

Parameters

**m** : array_like
Array to test.

Returns

**result** : bool
True if `m.dtype.type` is MaskType, False otherwise.

See also

[`ma.isMaskedArray`](numpy.ma.ismaskedarray#numpy.ma.isMaskedArray "numpy.ma.isMaskedArray")
Test whether input is an instance of MaskedArray.

#### Examples

```
>>> import numpy.ma as ma
>>> m = ma.masked_equal([0, 1, 0, 2, 3], 0)
>>> m
masked_array(data=[--, 1, --, 2, 3],
             mask=[ True, False,  True, False, False],
       fill_value=0)
>>> ma.is_mask(m)
False
>>> ma.is_mask(m.mask)
True
```

Input must be an ndarray (or have similar attributes) for it to be considered a valid mask.

```
>>> m = [False, True, False]
>>> ma.is_mask(m)
False
>>> m = np.array([False, True, False])
>>> m
array([False,  True, False])
>>> ma.is_mask(m)
True
```

Arrays with complex dtypes don’t return True.

```
>>> dtype = np.dtype({'names':['monty', 'pithon'],
...                   'formats':[bool, bool]})
>>> dtype
dtype([('monty', '|b1'), ('pithon', '|b1')])
>>> m = np.array([(True, False), (False, True), (True, False)],
...              dtype=dtype)
>>> m
array([( True, False), (False,  True), ( True, False)],
      dtype=[('monty', '?'), ('pithon', '?')])
>>> ma.is_mask(m)
False
```

numpy.ma.isMaskedArray
======================

ma.isMaskedArray(x) [[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/ma/core.py#L6425-L6473)

Test whether input is an instance of MaskedArray.

This function returns True if `x` is an instance of MaskedArray and returns False otherwise. Any object is accepted as input.

Parameters

**x** : object
Object to test.

Returns

**result** : bool
True if `x` is a MaskedArray.

See also

[`isMA`](numpy.ma.isma#numpy.ma.isMA "numpy.ma.isMA")
Alias to isMaskedArray.

[`isarray`](numpy.ma.isarray#numpy.ma.isarray "numpy.ma.isarray")
Alias to isMaskedArray.

#### Examples

```
>>> import numpy.ma as ma
>>> a = np.eye(3, 3)
>>> a
array([[ 1.,  0.,  0.],
       [ 0.,  1.,  0.],
       [ 0.,  0.,  1.]])
>>> m = ma.masked_values(a, 0)
>>> m
masked_array(
  data=[[1.0, --, --],
        [--, 1.0, --],
        [--, --, 1.0]],
  mask=[[False,  True,  True],
        [ True, False,  True],
        [ True,  True, False]],
  fill_value=0.0)
>>> ma.isMaskedArray(a)
False
>>> ma.isMaskedArray(m)
True
>>> ma.isMaskedArray([0, 1, 2])
False
```

numpy.ma.isMA
=============

ma.isMA(x) [[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/ma/core.py#L6425-L6473)

Test whether input is an instance of MaskedArray.

This function returns True if `x` is an instance of MaskedArray and returns False otherwise. Any object is accepted as input.

Parameters

**x** : object
Object to test.

Returns

**result** : bool
True if `x` is a MaskedArray.
See also

[`isMA`](#numpy.ma.isMA "numpy.ma.isMA")
Alias to isMaskedArray.

[`isarray`](numpy.ma.isarray#numpy.ma.isarray "numpy.ma.isarray")
Alias to isMaskedArray.

#### Examples

```
>>> import numpy.ma as ma
>>> a = np.eye(3, 3)
>>> a
array([[ 1.,  0.,  0.],
       [ 0.,  1.,  0.],
       [ 0.,  0.,  1.]])
>>> m = ma.masked_values(a, 0)
>>> m
masked_array(
  data=[[1.0, --, --],
        [--, 1.0, --],
        [--, --, 1.0]],
  mask=[[False,  True,  True],
        [ True, False,  True],
        [ True,  True, False]],
  fill_value=0.0)
>>> ma.isMaskedArray(a)
False
>>> ma.isMaskedArray(m)
True
>>> ma.isMaskedArray([0, 1, 2])
False
```

numpy.ma.isarray
================

ma.isarray(x) [[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/ma/core.py#L6425-L6473)

Test whether input is an instance of MaskedArray.

This function returns True if `x` is an instance of MaskedArray and returns False otherwise. Any object is accepted as input.

Parameters

**x** : object
Object to test.

Returns

**result** : bool
True if `x` is a MaskedArray.

See also

[`isMA`](numpy.ma.isma#numpy.ma.isMA "numpy.ma.isMA")
Alias to isMaskedArray.

[`isarray`](#numpy.ma.isarray "numpy.ma.isarray")
Alias to isMaskedArray.

#### Examples

```
>>> import numpy.ma as ma
>>> a = np.eye(3, 3)
>>> a
array([[ 1.,  0.,  0.],
       [ 0.,  1.,  0.],
       [ 0.,  0.,  1.]])
>>> m = ma.masked_values(a, 0)
>>> m
masked_array(
  data=[[1.0, --, --],
        [--, 1.0, --],
        [--, --, 1.0]],
  mask=[[False,  True,  True],
        [ True, False,  True],
        [ True,  True, False]],
  fill_value=0.0)
>>> ma.isMaskedArray(a)
False
>>> ma.isMaskedArray(m)
True
>>> ma.isMaskedArray([0, 1, 2])
False
```
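Since `isMA` and `isarray` are documented as aliases of `isMaskedArray`, the three always agree; a small sketch (not part of the original pages) checking that behaviour:

```python
import numpy as np
import numpy.ma as ma

m = ma.masked_values(np.eye(2), 0)

# The three predicates are aliases, so they return the same answer.
print(ma.isMA(m), ma.isarray(m), ma.isMaskedArray(m))  # True True True

# Plain ndarrays and lists are not MaskedArray instances.
print(ma.isMA(np.eye(2)), ma.isarray([0, 1]))  # False False
```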
numpy.ma.MaskedArray.all
========================

method

ma.MaskedArray.all(axis=None, out=None, keepdims=<no value>) [[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/ma/core.py#L4861-L4899)

Returns True if all elements evaluate to True.

The output array is masked where all the values along the given axis are masked: if the output would have been a scalar and all the values are masked, then the output is `masked`.

Refer to [`numpy.all`](numpy.all#numpy.all "numpy.all") for full documentation.

See also

[`numpy.ndarray.all`](numpy.ndarray.all#numpy.ndarray.all "numpy.ndarray.all")
Corresponding function for ndarrays.

[`numpy.all`](numpy.all#numpy.all "numpy.all")
Equivalent function.

#### Examples

```
>>> np.ma.array([1,2,3]).all()
True
>>> a = np.ma.array([1,2,3], mask=True)
>>> (a.all() is np.ma.masked)
True
```

numpy.ma.MaskedArray.any
========================

method

ma.MaskedArray.any(axis=None, out=None, keepdims=<no value>) [[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/ma/core.py#L4901-L4929)

Returns True if any of the elements of `a` evaluate to True.

Masked values are considered as False during computation.

Refer to [`numpy.any`](numpy.any#numpy.any "numpy.any") for full documentation.

See also

[`numpy.ndarray.any`](numpy.ndarray.any#numpy.ndarray.any "numpy.ndarray.any")
Corresponding function for ndarrays.

[`numpy.any`](numpy.any#numpy.any "numpy.any")
Equivalent function.
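The `MaskedArray.any` page above ships without a doctest; a minimal sketch (not part of the original page) showing that masked values are treated as False:

```python
import numpy as np
import numpy.ma as ma

# The only nonzero element is masked, so it counts as False.
a = ma.array([0, 0, 3], mask=[False, False, True])
print(a.any())  # False

# Here the unmasked 2 is truthy, so any() is True.
b = ma.array([0, 2, 3], mask=[False, False, True])
print(b.any())  # True
```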
numpy.ma.MaskedArray.count
==========================

method

ma.MaskedArray.count(axis=None, keepdims=<no value>) [[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/ma/core.py#L4486-L4583)

Count the non-masked elements of the array along the given axis.

Parameters

**axis** : None or int or tuple of ints, optional
Axis or axes along which the count is performed. The default, None, performs the count over all the dimensions of the input array. `axis` may be negative, in which case it counts from the last to the first axis.

New in version 1.10.0.

If this is a tuple of ints, the count is performed on multiple axes, instead of a single axis or all the axes as before.

**keepdims** : bool, optional
If this is set to True, the axes which are reduced are left in the result as dimensions with size one. With this option, the result will broadcast correctly against the array.

Returns

**result** : ndarray or scalar
An array with the same shape as the input array, with the specified axis removed. If the array is a 0-d array, or if `axis` is None, a scalar is returned.

See also

[`ma.count_masked`](numpy.ma.count_masked#numpy.ma.count_masked "numpy.ma.count_masked")
Count masked elements in array or along a given axis.

#### Examples

```
>>> import numpy.ma as ma
>>> a = ma.arange(6).reshape((2, 3))
>>> a[1, :] = ma.masked
>>> a
masked_array(
  data=[[0, 1, 2],
        [--, --, --]],
  mask=[[False, False, False],
        [ True,  True,  True]],
  fill_value=999999)
>>> a.count()
3
```

When the `axis` keyword is specified an array of appropriate size is returned.

```
>>> a.count(axis=0)
array([1, 1, 1])
>>> a.count(axis=1)
array([3, 0])
```
numpy.ma.MaskedArray.nonzero
============================

method

ma.MaskedArray.nonzero() [[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/ma/core.py#L4931-L5027)

Return the indices of unmasked elements that are not zero.

Returns a tuple of arrays, one for each dimension, containing the indices of the non-zero elements in that dimension. The corresponding non-zero values can be obtained with:

```
a[a.nonzero()]
```

To group the indices by element, rather than dimension, use instead:

```
np.transpose(a.nonzero())
```

The result of this is always a 2d array, with a row for each non-zero element.

Parameters

**None**

Returns

**tuple_of_arrays** : tuple
Indices of elements that are non-zero.

See also

[`numpy.nonzero`](numpy.nonzero#numpy.nonzero "numpy.nonzero")
Function operating on ndarrays.

[`flatnonzero`](numpy.flatnonzero#numpy.flatnonzero "numpy.flatnonzero")
Return indices that are non-zero in the flattened version of the input array.

[`numpy.ndarray.nonzero`](numpy.ndarray.nonzero#numpy.ndarray.nonzero "numpy.ndarray.nonzero")
Equivalent ndarray method.

[`count_nonzero`](numpy.count_nonzero#numpy.count_nonzero "numpy.count_nonzero")
Counts the number of non-zero elements in the input array.

#### Examples

```
>>> import numpy.ma as ma
>>> x = ma.array(np.eye(3))
>>> x
masked_array(
  data=[[1., 0., 0.],
        [0., 1., 0.],
        [0., 0., 1.]],
  mask=False,
  fill_value=1e+20)
>>> x.nonzero()
(array([0, 1, 2]), array([0, 1, 2]))
```

Masked elements are ignored.

```
>>> x[1, 1] = ma.masked
>>> x
masked_array(
  data=[[1.0, 0.0, 0.0],
        [0.0, --, 0.0],
        [0.0, 0.0, 1.0]],
  mask=[[False, False, False],
        [False,  True, False],
        [False, False, False]],
  fill_value=1e+20)
>>> x.nonzero()
(array([0, 2]), array([0, 2]))
```

Indices can also be grouped by element.

```
>>> np.transpose(x.nonzero())
array([[0, 0],
       [2, 2]])
```

A common use for `nonzero` is to find the indices of an array where a condition is True.
Given an array `a`, the condition `a > 3` is a boolean array and since False is interpreted as 0, `ma.nonzero(a > 3)` yields the indices of `a` where the condition is true.

```
>>> a = ma.array([[1,2,3],[4,5,6],[7,8,9]])
>>> a > 3
masked_array(
  data=[[False, False, False],
        [ True,  True,  True],
        [ True,  True,  True]],
  mask=False,
  fill_value=True)
>>> ma.nonzero(a > 3)
(array([1, 1, 1, 2, 2, 2]), array([0, 1, 2, 0, 1, 2]))
```

The `nonzero` method of the condition array can also be called.

```
>>> (a > 3).nonzero()
(array([1, 1, 1, 2, 2, 2]), array([0, 1, 2, 0, 1, 2]))
```

numpy.ma.ravel
==============

ma.ravel(self, order='C') = <numpy.ma.core._frommethod object>

Returns a 1D version of self, as a view.

Parameters

**order** : {‘C’, ‘F’, ‘A’, ‘K’}, optional
The elements of `a` are read using this index order. ‘C’ means to index the elements in C-like order, with the last axis index changing fastest, back to the first axis index changing slowest. ‘F’ means to index the elements in Fortran-like index order, with the first index changing fastest, and the last index changing slowest. Note that the ‘C’ and ‘F’ options take no account of the memory layout of the underlying array, and only refer to the order of axis indexing. ‘A’ means to read the elements in Fortran-like index order if `m` is Fortran *contiguous* in memory, C-like order otherwise. ‘K’ means to read the elements in the order they occur in memory, except for reversing the data when strides are negative. By default, ‘C’ index order is used.

Returns

MaskedArray
Output view is of shape `(self.size,)` (or `(np.ma.product(self.shape),)`).
#### Examples

```
>>> x = np.ma.array([[1,2,3],[4,5,6],[7,8,9]], mask=[0] + [1,0]*4)
>>> x
masked_array(
  data=[[1, --, 3],
        [--, 5, --],
        [7, --, 9]],
  mask=[[False,  True, False],
        [ True, False,  True],
        [False,  True, False]],
  fill_value=999999)
>>> x.ravel()
masked_array(data=[1, --, 3, --, 5, --, 7, --, 9],
             mask=[False,  True, False,  True, False,  True, False,
                    True, False],
       fill_value=999999)
```

numpy.ma.reshape
================

ma.reshape(a, new_shape, order='C') [[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/ma/core.py#L7193-L7209)

Returns an array containing the same data with a new shape.

Refer to [`MaskedArray.reshape`](numpy.ma.maskedarray.reshape#numpy.ma.MaskedArray.reshape "numpy.ma.MaskedArray.reshape") for full documentation.

See also

[`MaskedArray.reshape`](numpy.ma.maskedarray.reshape#numpy.ma.MaskedArray.reshape "numpy.ma.MaskedArray.reshape")
Equivalent function.

numpy.ma.resize
===============

ma.resize(x, new_shape) [[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/ma/core.py#L7212-L7273)

Return a new masked array with the specified size and shape.

This is the masked equivalent of the [`numpy.resize`](numpy.resize#numpy.resize "numpy.resize") function. The new array is filled with repeated copies of `x` (in the order that the data are stored in memory). If `x` is masked, the new array will be masked, and the new mask will be a repetition of the old one.

See also

[`numpy.resize`](numpy.resize#numpy.resize "numpy.resize")
Equivalent function in the top level NumPy module.
#### Examples

```
>>> import numpy.ma as ma
>>> a = ma.array([[1, 2] ,[3, 4]])
>>> a[0, 1] = ma.masked
>>> a
masked_array(
  data=[[1, --],
        [3, 4]],
  mask=[[False,  True],
        [False, False]],
  fill_value=999999)
>>> np.resize(a, (3, 3))
masked_array(
  data=[[1, 2, 3],
        [4, 1, 2],
        [3, 4, 1]],
  mask=False,
  fill_value=999999)
>>> ma.resize(a, (3, 3))
masked_array(
  data=[[1, --, 3],
        [4, 1, --],
        [3, 4, 1]],
  mask=[[False,  True, False],
        [False, False,  True],
        [False, False, False]],
  fill_value=999999)
```

A MaskedArray is always returned, regardless of the input type.

```
>>> a = np.array([[1, 2] ,[3, 4]])
>>> ma.resize(a, (3, 3))
masked_array(
  data=[[1, 2, 3],
        [4, 1, 2],
        [3, 4, 1]],
  mask=False,
  fill_value=999999)
```

numpy.ma.MaskedArray.flatten
============================

method

ma.MaskedArray.flatten(order='C') [[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/ma/core.py#L2577-L2587)

Return a copy of the array collapsed into one dimension.

Parameters

**order** : {‘C’, ‘F’, ‘A’, ‘K’}, optional
‘C’ means to flatten in row-major (C-style) order. ‘F’ means to flatten in column-major (Fortran-style) order. ‘A’ means to flatten in column-major order if `a` is Fortran *contiguous* in memory, row-major order otherwise. ‘K’ means to flatten `a` in the order the elements occur in memory. The default is ‘C’.

Returns

**y** : ndarray
A copy of the input array, flattened to one dimension.

See also

[`ravel`](numpy.ravel#numpy.ravel "numpy.ravel")
Return a flattened array.

[`flat`](numpy.ma.maskedarray.flat#numpy.ma.MaskedArray.flat "numpy.ma.MaskedArray.flat")
A 1-D flat iterator over the array.

#### Examples

```
>>> a = np.array([[1,2], [3,4]])
>>> a.flatten()
array([1, 2, 3, 4])
>>> a.flatten('F')
array([1, 3, 2, 4])
```
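The `flatten` examples above use a plain ndarray; a short sketch (not part of the original page) of the masked-array behaviour, where the mask is flattened along with the data and the result is an independent copy:

```python
import numpy as np
import numpy.ma as ma

a = ma.array([[1, 2], [3, 4]], mask=[[0, 1], [0, 0]])

# flatten() copies; data and mask are both collapsed to 1-D.
f = a.flatten()
print(f)       # [1 -- 3 4]
print(f.mask)  # True only at position 1

# Because flatten copies, writing to it leaves the original untouched.
f[0] = 99
print(a[0, 0])  # still 1
```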
numpy.ma.MaskedArray.ravel
==========================

method

ma.MaskedArray.ravel(order='C') [[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/ma/core.py#L4585-L4636)

Returns a 1D version of self, as a view.

Parameters

**order** : {‘C’, ‘F’, ‘A’, ‘K’}, optional
The elements of `a` are read using this index order. ‘C’ means to index the elements in C-like order, with the last axis index changing fastest, back to the first axis index changing slowest. ‘F’ means to index the elements in Fortran-like index order, with the first index changing fastest, and the last index changing slowest. Note that the ‘C’ and ‘F’ options take no account of the memory layout of the underlying array, and only refer to the order of axis indexing. ‘A’ means to read the elements in Fortran-like index order if `m` is Fortran *contiguous* in memory, C-like order otherwise. ‘K’ means to read the elements in the order they occur in memory, except for reversing the data when strides are negative. By default, ‘C’ index order is used.

Returns

MaskedArray
Output view is of shape `(self.size,)` (or `(np.ma.product(self.shape),)`).

#### Examples

```
>>> x = np.ma.array([[1,2,3],[4,5,6],[7,8,9]], mask=[0] + [1,0]*4)
>>> x
masked_array(
  data=[[1, --, 3],
        [--, 5, --],
        [7, --, 9]],
  mask=[[False,  True, False],
        [ True, False,  True],
        [False,  True, False]],
  fill_value=999999)
>>> x.ravel()
masked_array(data=[1, --, 3, --, 5, --, 7, --, 9],
             mask=[False,  True, False,  True, False,  True, False,
                    True, False],
       fill_value=999999)
```

numpy.ma.MaskedArray.reshape
============================

method

ma.MaskedArray.reshape(*s, **kwargs) [[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/ma/core.py#L4639-L4703)

Give a new shape to the array without changing its data.
Returns a masked array containing the same data, but with a new shape. The result is a view on the original array; if this is not possible, a ValueError is raised. Parameters **shape**int or tuple of ints The new shape should be compatible with the original shape. If an integer is supplied, then the result will be a 1-D array of that length. **order**{‘C’, ‘F’}, optional Determines whether the array data should be viewed as in C (row-major) or FORTRAN (column-major) order. Returns **reshaped_array**array A new view on the array. See also [`reshape`](numpy.reshape#numpy.reshape "numpy.reshape") Equivalent function in the masked array module. [`numpy.ndarray.reshape`](numpy.ndarray.reshape#numpy.ndarray.reshape "numpy.ndarray.reshape") Equivalent method on ndarray object. [`numpy.reshape`](numpy.reshape#numpy.reshape "numpy.reshape") Equivalent function in the NumPy module. #### Notes The reshaping operation cannot guarantee that a copy will not be made, to modify the shape in place, use `a.shape = s` #### Examples ``` >>> x = np.ma.array([[1,2],[3,4]], mask=[1,0,0,1]) >>> x masked_array( data=[[--, 2], [3, --]], mask=[[ True, False], [False, True]], fill_value=999999) >>> x = x.reshape((4,1)) >>> x masked_array( data=[[--], [2], [3], [--]], mask=[[ True], [False], [False], [ True]], fill_value=999999) ``` © 2005–2022 NumPy Developers Licensed under the 3-clause BSD License. <https://numpy.org/doc/1.23/reference/generated/numpy.ma.MaskedArray.reshape.htmlnumpy.ma.MaskedArray.resize =========================== method ma.MaskedArray.resize(*newshape*, *refcheck=True*, *order=False*)[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/ma/core.py#L4705-L4721) Warning This method does nothing, except raise a ValueError exception. A masked array does not own its data and therefore cannot safely be resized in place. Use the [`numpy.ma.resize`](numpy.ma.resize#numpy.ma.resize "numpy.ma.resize") function instead. 
This method is difficult to implement safely and may be deprecated in future releases of NumPy. © 2005–2022 NumPy Developers Licensed under the 3-clause BSD License. <https://numpy.org/doc/1.23/reference/generated/numpy.ma.MaskedArray.resize.html> numpy.ma.swapaxes ================= ma.swapaxes(*self*, **args*, ***params*)*=<numpy.ma.core._frommethod object>* a.swapaxes(*axis1*, *axis2*) Return a view of the array with `axis1` and `axis2` interchanged. Refer to [`numpy.swapaxes`](numpy.swapaxes#numpy.swapaxes "numpy.swapaxes") for full documentation. See also [`numpy.swapaxes`](numpy.swapaxes#numpy.swapaxes "numpy.swapaxes") equivalent function © 2005–2022 NumPy Developers Licensed under the 3-clause BSD License. <https://numpy.org/doc/1.23/reference/generated/numpy.ma.swapaxes.html> numpy.ma.transpose ================== ma.transpose(*a*, *axes=None*)[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/ma/core.py#L7155-L7190) Permute the dimensions of an array. This function is exactly equivalent to [`numpy.transpose`](numpy.transpose#numpy.transpose "numpy.transpose"). See also [`numpy.transpose`](numpy.transpose#numpy.transpose "numpy.transpose") Equivalent function in top-level NumPy module. #### Examples ``` >>> import numpy.ma as ma >>> x = ma.arange(4).reshape((2,2)) >>> x[1, 1] = ma.masked >>> x masked_array( data=[[0, 1], [2, --]], mask=[[False, False], [False, True]], fill_value=999999) ``` ``` >>> ma.transpose(x) masked_array( data=[[0, 2], [1, --]], mask=[[False, False], [False, True]], fill_value=999999) ``` © 2005–2022 NumPy Developers Licensed under the 3-clause BSD License. <https://numpy.org/doc/1.23/reference/generated/numpy.ma.transpose.html> numpy.ma.MaskedArray.swapaxes ============================= method ma.MaskedArray.swapaxes(*axis1*, *axis2*)[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/ma/core.py#L2577-L2587) Return a view of the array with `axis1` and `axis2` interchanged. 
Refer to [`numpy.swapaxes`](numpy.swapaxes#numpy.swapaxes "numpy.swapaxes") for full documentation. See also [`numpy.swapaxes`](numpy.swapaxes#numpy.swapaxes "numpy.swapaxes") equivalent function © 2005–2022 NumPy Developers Licensed under the 3-clause BSD License. <https://numpy.org/doc/1.23/reference/generated/numpy.ma.MaskedArray.swapaxes.htmlnumpy.ma.MaskedArray.transpose ============================== method ma.MaskedArray.transpose(**axes*)[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/ma/core.py#L2577-L2587) Returns a view of the array with axes transposed. For a 1-D array this has no effect, as a transposed vector is simply the same vector. To convert a 1-D array into a 2D column vector, an additional dimension must be added. `np.atleast2d(a).T` achieves this, as does `a[:, np.newaxis]`. For a 2-D array, this is a standard matrix transpose. For an n-D array, if axes are given, their order indicates how the axes are permuted (see Examples). If axes are not provided and `a.shape = (i[0], i[1], ... i[n-2], i[n-1])`, then `a.transpose().shape = (i[n-1], i[n-2], ... i[1], i[0])`. Parameters **axes**None, tuple of ints, or `n` ints * None or no argument: reverses the order of the axes. * tuple of ints: `i` in the `j`-th place in the tuple means `a`’s `i`-th axis becomes `a.transpose()`’s `j`-th axis. * `n` ints: same as an n-tuple of the same ints (this form is intended simply as a “convenience” alternative to the tuple form) Returns **out**ndarray View of `a`, with axes suitably permuted. See also [`transpose`](numpy.transpose#numpy.transpose "numpy.transpose") Equivalent function [`ndarray.T`](numpy.ndarray.t#numpy.ndarray.T "numpy.ndarray.T") Array property returning the array transposed. [`ndarray.reshape`](numpy.ndarray.reshape#numpy.ndarray.reshape "numpy.ndarray.reshape") Give a new shape to an array without changing its data. 
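As a sketch of the masked behaviour (this example is mine, not from the upstream page, whose examples use plain ndarrays): the axes of the mask are permuted together with the data.

```python
import numpy.ma as ma

# Transposing a masked 2-D array transposes its mask as well.
x = ma.array([[1, 2, 3], [4, 5, 6]], mask=[[0, 1, 0], [0, 0, 1]])
xt = x.transpose()

print(xt.shape)  # (3, 2)
print(xt.mask)   # the mask has been transposed identically
```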
#### Examples ``` >>> a = np.array([[1, 2], [3, 4]]) >>> a array([[1, 2], [3, 4]]) >>> a.transpose() array([[1, 3], [2, 4]]) >>> a.transpose((1, 0)) array([[1, 3], [2, 4]]) >>> a.transpose(1, 0) array([[1, 3], [2, 4]]) ``` © 2005–2022 NumPy Developers Licensed under the 3-clause BSD License. <https://numpy.org/doc/1.23/reference/generated/numpy.ma.MaskedArray.transpose.htmlnumpy.ma.atleast_1d ==================== ma.atleast_1d(**args*, ***kwargs*)*=<numpy.ma.extras._fromnxfunction_allargs object>* Convert inputs to arrays with at least one dimension. Scalar inputs are converted to 1-dimensional arrays, whilst higher-dimensional inputs are preserved. Parameters **arys1, arys2, 
…**array_like One or more input arrays. Returns **ret**ndarray An array, or list of arrays, each with `a.ndim >= 1`. Copies are made only if necessary. See also [`atleast_2d`](numpy.atleast_2d#numpy.atleast_2d "numpy.atleast_2d"), [`atleast_3d`](numpy.atleast_3d#numpy.atleast_3d "numpy.atleast_3d") #### Notes The function is applied to both the _data and the _mask, if any. #### Examples ``` >>> np.atleast_1d(1.0) array([1.]) ``` ``` >>> x = np.arange(9.0).reshape(3,3) >>> np.atleast_1d(x) array([[0., 1., 2.], [3., 4., 5.], [6., 7., 8.]]) >>> np.atleast_1d(x) is x True ``` ``` >>> np.atleast_1d(1, [3, 4]) [array([1]), array([3, 4])] ``` © 2005–2022 NumPy Developers Licensed under the 3-clause BSD License. <https://numpy.org/doc/1.23/reference/generated/numpy.ma.atleast_1d.html> numpy.ma.atleast_2d ==================== ma.atleast_2d(**args*, ***kwargs*)*=<numpy.ma.extras._fromnxfunction_allargs object>* View inputs as arrays with at least two dimensions. Parameters **arys1, arys2, …
**array_like One or more array-like sequences. Non-array inputs are converted to arrays. Arrays that already have two or more dimensions are preserved. Returns **res, res2, …
**ndarray An array, or list of arrays, each with `a.ndim >= 2`. Copies are avoided where possible, and views with two or more dimensions are returned. See also [`atleast_1d`](numpy.atleast_1d#numpy.atleast_1d "numpy.atleast_1d"), [`atleast_3d`](numpy.atleast_3d#numpy.atleast_3d "numpy.atleast_3d") #### Notes The function is applied to both the _data and the _mask, if any. #### Examples ``` >>> np.atleast_2d(3.0) array([[3.]]) ``` ``` >>> x = np.arange(3.0) >>> np.atleast_2d(x) array([[0., 1., 2.]]) >>> np.atleast_2d(x).base is x True ``` ``` >>> np.atleast_2d(1, [1, 2], [[1, 2]]) [array([[1]]), array([[1, 2]]), array([[1, 2]])] ``` © 2005–2022 NumPy Developers Licensed under the 3-clause BSD License. <https://numpy.org/doc/1.23/reference/generated/numpy.ma.atleast_2d.htmlnumpy.ma.atleast_3d ==================== ma.atleast_3d(**args*, ***kwargs*)*=<numpy.ma.extras._fromnxfunction_allargs object>* View inputs as arrays with at least three dimensions. Parameters **arys1, arys2, 
…**array_like One or more array-like sequences. Non-array inputs are converted to arrays. Arrays that already have three or more dimensions are preserved. Returns **res1, res2, …
**ndarray An array, or list of arrays, each with `a.ndim >= 3`. Copies are avoided where possible, and views with three or more dimensions are returned. For example, a 1-D array of shape `(N,)` becomes a view of shape `(1, N, 1)`, and a 2-D array of shape `(M, N)` becomes a view of shape `(M, N, 1)`. See also [`atleast_1d`](numpy.atleast_1d#numpy.atleast_1d "numpy.atleast_1d"), [`atleast_2d`](numpy.atleast_2d#numpy.atleast_2d "numpy.atleast_2d") #### Notes The function is applied to both the _data and the _mask, if any. #### Examples ``` >>> np.atleast_3d(3.0) array([[[3.]]]) ``` ``` >>> x = np.arange(3.0) >>> np.atleast_3d(x).shape (1, 3, 1) ``` ``` >>> x = np.arange(12.0).reshape(4,3) >>> np.atleast_3d(x).shape (4, 3, 1) >>> np.atleast_3d(x).base is x.base # x is a reshape, so not base itself True ``` ``` >>> for arr in np.atleast_3d([1, 2], [[1, 2]], [[[1, 2]]]): ... print(arr, arr.shape) ... [[[1] [2]]] (1, 2, 1) [[[1] [2]]] (1, 2, 1) [[[1 2]]] (1, 1, 2) ``` © 2005–2022 NumPy Developers Licensed under the 3-clause BSD License. <https://numpy.org/doc/1.23/reference/generated/numpy.ma.atleast_3d.htmlnumpy.ma.expand_dims ===================== ma.expand_dims(*a*, *axis*)[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/lib/shape_base.py#L512-L602) Expand the shape of an array. Insert a new axis that will appear at the `axis` position in the expanded array shape. Parameters **a**array_like Input array. **axis**int or tuple of ints Position in the expanded axes where the new axis (or axes) is placed. Deprecated since version 1.13.0: Passing an axis where `axis > a.ndim` will be treated as `axis == a.ndim`, and passing `axis < -a.ndim - 1` will be treated as `axis == 0`. This behavior is deprecated. Changed in version 1.18.0: A tuple of axes is now supported. Out of range axes as described above are now forbidden and raise an [`AxisError`](numpy.axiserror#numpy.AxisError "numpy.AxisError"). 
Returns **result**ndarray View of `a` with the number of dimensions increased. See also [`squeeze`](numpy.squeeze#numpy.squeeze "numpy.squeeze") The inverse operation, removing singleton dimensions [`reshape`](numpy.reshape#numpy.reshape "numpy.reshape") Insert, remove, and combine dimensions, and resize existing ones `doc.indexing`, [`atleast_1d`](numpy.atleast_1d#numpy.atleast_1d "numpy.atleast_1d"), [`atleast_2d`](numpy.atleast_2d#numpy.atleast_2d "numpy.atleast_2d"), [`atleast_3d`](numpy.atleast_3d#numpy.atleast_3d "numpy.atleast_3d") #### Examples ``` >>> x = np.array([1, 2]) >>> x.shape (2,) ``` The following is equivalent to `x[np.newaxis, :]` or `x[np.newaxis]`: ``` >>> y = np.expand_dims(x, axis=0) >>> y array([[1, 2]]) >>> y.shape (1, 2) ``` The following is equivalent to `x[:, np.newaxis]`: ``` >>> y = np.expand_dims(x, axis=1) >>> y array([[1], [2]]) >>> y.shape (2, 1) ``` `axis` may also be a tuple: ``` >>> y = np.expand_dims(x, axis=(0, 1)) >>> y array([[[1, 2]]]) ``` ``` >>> y = np.expand_dims(x, axis=(2, 0)) >>> y array([[[1], [2]]]) ``` Note that some examples may use `None` instead of `np.newaxis`. These are the same objects: ``` >>> np.newaxis is None True ``` © 2005–2022 NumPy Developers Licensed under the 3-clause BSD License. <https://numpy.org/doc/1.23/reference/generated/numpy.ma.expand_dims.htmlnumpy.ma.squeeze ================ ma.squeeze(**args*, ***kwargs*)*=<numpy.ma.core._convert2ma object>* Remove axes of length one from `a`. Parameters **a**array_like Input data. **axis**None or int or tuple of ints, optional New in version 1.7.0. Selects a subset of the entries of length one in the shape. If an axis is selected with shape entry greater than one, an error is raised. Returns **squeezed**MaskedArray The input array, but with all or a subset of the dimensions of length 1 removed. This is always `a` itself or a view into `a`. Note that if all axes are squeezed, the result is a 0d array and not a scalar. 
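As noted above, the result is always `a` itself or a view into `a`; a minimal demonstration on a masked array (my sketch, not from the upstream page) shows the mask being squeezed the same way as the data:

```python
import numpy.ma as ma

# Length-one axes are removed from both the data and the mask.
x = ma.array([[[1], [2], [3]]], mask=[[[False], [True], [False]]])
y = ma.squeeze(x)

print(y.shape)  # (3,)
print(y)        # [1 -- 3]
```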
Raises ValueError If `axis` is not None, and an axis being squeezed is not of length 1 See also [`expand_dims`](numpy.expand_dims#numpy.expand_dims "numpy.expand_dims") The inverse operation, adding entries of length one [`reshape`](numpy.reshape#numpy.reshape "numpy.reshape") Insert, remove, and combine dimensions, and resize existing ones #### Examples ``` >>> x = np.array([[[0], [1], [2]]]) >>> x.shape (1, 3, 1) >>> np.squeeze(x).shape (3,) >>> np.squeeze(x, axis=0).shape (3, 1) >>> np.squeeze(x, axis=1).shape Traceback (most recent call last): ... ValueError: cannot select an axis to squeeze out which has size not equal to one >>> np.squeeze(x, axis=2).shape (1, 3) >>> x = np.array([[1234]]) >>> x.shape (1, 1) >>> np.squeeze(x) array(1234) # 0d array >>> np.squeeze(x).shape () >>> np.squeeze(x)[()] 1234 ``` © 2005–2022 NumPy Developers Licensed under the 3-clause BSD License. <https://numpy.org/doc/1.23/reference/generated/numpy.ma.squeeze.htmlnumpy.ma.MaskedArray.squeeze ============================ method ma.MaskedArray.squeeze(*axis=None*)[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/ma/core.py#L2577-L2587) Remove axes of length one from `a`. Refer to [`numpy.squeeze`](numpy.squeeze#numpy.squeeze "numpy.squeeze") for full documentation. See also [`numpy.squeeze`](numpy.squeeze#numpy.squeeze "numpy.squeeze") equivalent function © 2005–2022 NumPy Developers Licensed under the 3-clause BSD License. <https://numpy.org/doc/1.23/reference/generated/numpy.ma.MaskedArray.squeeze.htmlnumpy.ma.stack ============== ma.stack(**args*, ***kwargs*)*=<numpy.ma.extras._fromnxfunction_seq object>* Join a sequence of arrays along a new axis. The `axis` parameter specifies the index of the new axis in the dimensions of the result. For example, if `axis=0` it will be the first dimension and if `axis=-1` it will be the last dimension. New in version 1.10.0. Parameters **arrays**sequence of array_like Each array must have the same shape. 
**axis**int, optional The axis in the result array along which the input arrays are stacked. **out**ndarray, optional If provided, the destination to place the result. The shape must be correct, matching that of what stack would have returned if no out argument were specified. Returns **stacked**ndarray The stacked array has one more dimension than the input arrays. See also [`concatenate`](numpy.concatenate#numpy.concatenate "numpy.concatenate") Join a sequence of arrays along an existing axis. [`block`](numpy.block#numpy.block "numpy.block") Assemble an nd-array from nested lists of blocks. [`split`](numpy.split#numpy.split "numpy.split") Split array into a list of multiple sub-arrays of equal size. #### Notes The function is applied to both the _data and the _mask, if any. #### Examples ``` >>> arrays = [np.random.randn(3, 4) for _ in range(10)] >>> np.stack(arrays, axis=0).shape (10, 3, 4) ``` ``` >>> np.stack(arrays, axis=1).shape (3, 10, 4) ``` ``` >>> np.stack(arrays, axis=2).shape (3, 4, 10) ``` ``` >>> a = np.array([1, 2, 3]) >>> b = np.array([4, 5, 6]) >>> np.stack((a, b)) array([[1, 2, 3], [4, 5, 6]]) ``` ``` >>> np.stack((a, b), axis=-1) array([[1, 4], [2, 5], [3, 6]]) ``` © 2005–2022 NumPy Developers Licensed under the 3-clause BSD License. <https://numpy.org/doc/1.23/reference/generated/numpy.ma.stack.htmlnumpy.ma.column_stack ====================== ma.column_stack(**args*, ***kwargs*)*=<numpy.ma.extras._fromnxfunction_seq object>* Stack 1-D arrays as columns into a 2-D array. Take a sequence of 1-D arrays and stack them as columns to make a single 2-D array. 2-D arrays are stacked as-is, just like with [`hstack`](numpy.hstack#numpy.hstack "numpy.hstack"). 1-D arrays are turned into 2-D columns first. Parameters **tup**sequence of 1-D or 2-D arrays. Arrays to stack. All of them must have the same first dimension. Returns **stacked**2-D array The array formed by stacking the given arrays. 
See also [`stack`](numpy.stack#numpy.stack "numpy.stack"), [`hstack`](numpy.hstack#numpy.hstack "numpy.hstack"), [`vstack`](numpy.vstack#numpy.vstack "numpy.vstack"), [`concatenate`](numpy.concatenate#numpy.concatenate "numpy.concatenate") #### Notes The function is applied to both the _data and the _mask, if any. #### Examples ``` >>> a = np.array((1,2,3)) >>> b = np.array((2,3,4)) >>> np.column_stack((a,b)) array([[1, 2], [2, 3], [3, 4]]) ``` © 2005–2022 NumPy Developers Licensed under the 3-clause BSD License. <https://numpy.org/doc/1.23/reference/generated/numpy.ma.column_stack.htmlnumpy.ma.concatenate ==================== ma.concatenate(*arrays*, *axis=0*)[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/ma/core.py#L6977-L7034) Concatenate a sequence of arrays along the given axis. Parameters **arrays**sequence of array_like The arrays must have the same shape, except in the dimension corresponding to `axis` (the first, by default). **axis**int, optional The axis along which the arrays will be joined. Default is 0. Returns **result**MaskedArray The concatenated array with any masked entries preserved. See also [`numpy.concatenate`](numpy.concatenate#numpy.concatenate "numpy.concatenate") Equivalent function in the top-level NumPy module. #### Examples ``` >>> import numpy.ma as ma >>> a = ma.arange(3) >>> a[1] = ma.masked >>> b = ma.arange(2, 5) >>> a masked_array(data=[0, --, 2], mask=[False, True, False], fill_value=999999) >>> b masked_array(data=[2, 3, 4], mask=False, fill_value=999999) >>> ma.concatenate([a, b]) masked_array(data=[0, --, 2, 2, 3, 4], mask=[False, True, False, False, False, False], fill_value=999999) ``` © 2005–2022 NumPy Developers Licensed under the 3-clause BSD License. <https://numpy.org/doc/1.23/reference/generated/numpy.ma.concatenate.htmlnumpy.ma.dstack =============== ma.dstack(**args*, ***kwargs*)*=<numpy.ma.extras._fromnxfunction_seq object>* Stack arrays in sequence depth wise (along third axis). 
This is equivalent to concatenation along the third axis after 2-D arrays of shape `(M,N)` have been reshaped to `(M,N,1)` and 1-D arrays of shape `(N,)` have been reshaped to `(1,N,1)`. Rebuilds arrays divided by [`dsplit`](numpy.dsplit#numpy.dsplit "numpy.dsplit"). This function makes most sense for arrays with up to 3 dimensions. For instance, for pixel-data with a height (first axis), width (second axis), and r/g/b channels (third axis). The functions [`concatenate`](numpy.concatenate#numpy.concatenate "numpy.concatenate"), [`stack`](numpy.stack#numpy.stack "numpy.stack") and [`block`](numpy.block#numpy.block "numpy.block") provide more general stacking and concatenation operations. Parameters **tup**sequence of arrays The arrays must have the same shape along all but the third axis. 1-D or 2-D arrays must have the same shape. Returns **stacked**ndarray The array formed by stacking the given arrays, will be at least 3-D. See also [`concatenate`](numpy.concatenate#numpy.concatenate "numpy.concatenate") Join a sequence of arrays along an existing axis. [`stack`](numpy.stack#numpy.stack "numpy.stack") Join a sequence of arrays along a new axis. [`block`](numpy.block#numpy.block "numpy.block") Assemble an nd-array from nested lists of blocks. [`vstack`](numpy.vstack#numpy.vstack "numpy.vstack") Stack arrays in sequence vertically (row wise). [`hstack`](numpy.hstack#numpy.hstack "numpy.hstack") Stack arrays in sequence horizontally (column wise). [`column_stack`](numpy.column_stack#numpy.column_stack "numpy.column_stack") Stack 1-D arrays as columns into a 2-D array. [`dsplit`](numpy.dsplit#numpy.dsplit "numpy.dsplit") Split array along third axis. #### Notes The function is applied to both the _data and the _mask, if any. 
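For instance (a sketch of my own, not from the upstream page), stacking two masked 1-D arrays depth-wise applies the same dstack to their masks:

```python
import numpy.ma as ma

# Two 1-D masked arrays of shape (3,) become a (1, 3, 2) result,
# and the masks are stacked in lockstep with the data.
a = ma.array([1, 2, 3], mask=[0, 1, 0])
b = ma.array([4, 5, 6], mask=[0, 0, 1])
c = ma.dstack((a, b))

print(c.shape)  # (1, 3, 2)
print(c.mask)
```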
#### Examples ``` >>> a = np.array((1,2,3)) >>> b = np.array((2,3,4)) >>> np.dstack((a,b)) array([[[1, 2], [2, 3], [3, 4]]]) ``` ``` >>> a = np.array([[1],[2],[3]]) >>> b = np.array([[2],[3],[4]]) >>> np.dstack((a,b)) array([[[1, 2]], [[2, 3]], [[3, 4]]]) ``` © 2005–2022 NumPy Developers Licensed under the 3-clause BSD License. <https://numpy.org/doc/1.23/reference/generated/numpy.ma.dstack.htmlnumpy.ma.hstack =============== ma.hstack(**args*, ***kwargs*)*=<numpy.ma.extras._fromnxfunction_seq object>* Stack arrays in sequence horizontally (column wise). This is equivalent to concatenation along the second axis, except for 1-D arrays where it concatenates along the first axis. Rebuilds arrays divided by [`hsplit`](numpy.hsplit#numpy.hsplit "numpy.hsplit"). This function makes most sense for arrays with up to 3 dimensions. For instance, for pixel-data with a height (first axis), width (second axis), and r/g/b channels (third axis). The functions [`concatenate`](numpy.concatenate#numpy.concatenate "numpy.concatenate"), [`stack`](numpy.stack#numpy.stack "numpy.stack") and [`block`](numpy.block#numpy.block "numpy.block") provide more general stacking and concatenation operations. Parameters **tup**sequence of ndarrays The arrays must have the same shape along all but the second axis, except 1-D arrays which can be any length. Returns **stacked**ndarray The array formed by stacking the given arrays. See also [`concatenate`](numpy.concatenate#numpy.concatenate "numpy.concatenate") Join a sequence of arrays along an existing axis. [`stack`](numpy.stack#numpy.stack "numpy.stack") Join a sequence of arrays along a new axis. [`block`](numpy.block#numpy.block "numpy.block") Assemble an nd-array from nested lists of blocks. [`vstack`](numpy.vstack#numpy.vstack "numpy.vstack") Stack arrays in sequence vertically (row wise). [`dstack`](numpy.dstack#numpy.dstack "numpy.dstack") Stack arrays in sequence depth wise (along third axis). 
[`column_stack`](numpy.column_stack#numpy.column_stack "numpy.column_stack") Stack 1-D arrays as columns into a 2-D array. [`hsplit`](numpy.hsplit#numpy.hsplit "numpy.hsplit") Split an array into multiple sub-arrays horizontally (column-wise). #### Notes The function is applied to both the _data and the _mask, if any. #### Examples ``` >>> a = np.array((1,2,3)) >>> b = np.array((4,5,6)) >>> np.hstack((a,b)) array([1, 2, 3, 4, 5, 6]) >>> a = np.array([[1],[2],[3]]) >>> b = np.array([[4],[5],[6]]) >>> np.hstack((a,b)) array([[1, 4], [2, 5], [3, 6]]) ``` © 2005–2022 NumPy Developers Licensed under the 3-clause BSD License. <https://numpy.org/doc/1.23/reference/generated/numpy.ma.hstack.htmlnumpy.ma.hsplit =============== ma.hsplit(**args*, ***kwargs*)*=<numpy.ma.extras._fromnxfunction_single object>* Split an array into multiple sub-arrays horizontally (column-wise). Please refer to the [`split`](numpy.split#numpy.split "numpy.split") documentation. [`hsplit`](numpy.hsplit#numpy.hsplit "numpy.hsplit") is equivalent to [`split`](numpy.split#numpy.split "numpy.split") with `axis=1`, the array is always split along the second axis except for 1-D arrays, where it is split at `axis=0`. See also [`split`](numpy.split#numpy.split "numpy.split") Split an array into multiple sub-arrays of equal size. #### Notes The function is applied to both the _data and the _mask, if any. #### Examples ``` >>> x = np.arange(16.0).reshape(4, 4) >>> x array([[ 0., 1., 2., 3.], [ 4., 5., 6., 7.], [ 8., 9., 10., 11.], [12., 13., 14., 15.]]) >>> np.hsplit(x, 2) [array([[ 0., 1.], [ 4., 5.], [ 8., 9.], [12., 13.]]), array([[ 2., 3.], [ 6., 7.], [10., 11.], [14., 15.]])] >>> np.hsplit(x, np.array([3, 6])) [array([[ 0., 1., 2.], [ 4., 5., 6.], [ 8., 9., 10.], [12., 13., 14.]]), array([[ 3.], [ 7.], [11.], [15.]]), array([], shape=(4, 0), dtype=float64)] ``` With a higher dimensional array the split is still along the second axis. 
``` >>> x = np.arange(8.0).reshape(2, 2, 2) >>> x array([[[0., 1.], [2., 3.]], [[4., 5.], [6., 7.]]]) >>> np.hsplit(x, 2) [array([[[0., 1.]], [[4., 5.]]]), array([[[2., 3.]], [[6., 7.]]])] ``` With a 1-D array, the split is along axis 0. ``` >>> x = np.array([0, 1, 2, 3, 4, 5]) >>> np.hsplit(x, 2) [array([0, 1, 2]), array([3, 4, 5])] ``` © 2005–2022 NumPy Developers Licensed under the 3-clause BSD License. <https://numpy.org/doc/1.23/reference/generated/numpy.ma.hsplit.htmlnumpy.ma.mr_ ============= ma.mr_*=<numpy.ma.extras.mr_class object>* Translate slice objects to concatenation along the first axis. This is the masked array version of `lib.index_tricks.RClass`. See also `lib.index_tricks.RClass` #### Examples ``` >>> np.ma.mr_[np.ma.array([1,2,3]), 0, 0, np.ma.array([4,5,6])] masked_array(data=[1, 2, 3, ..., 4, 5, 6], mask=False, fill_value=999999) ``` © 2005–2022 NumPy Developers Licensed under the 3-clause BSD License. <https://numpy.org/doc/1.23/reference/generated/numpy.ma.mr_.htmlnumpy.ma.row_stack =================== ma.row_stack(**args*, ***kwargs*)*=<numpy.ma.extras._fromnxfunction_seq object>* Stack arrays in sequence vertically (row wise). This is equivalent to concatenation along the first axis after 1-D arrays of shape `(N,)` have been reshaped to `(1,N)`. Rebuilds arrays divided by [`vsplit`](numpy.vsplit#numpy.vsplit "numpy.vsplit"). This function makes most sense for arrays with up to 3 dimensions. For instance, for pixel-data with a height (first axis), width (second axis), and r/g/b channels (third axis). The functions [`concatenate`](numpy.concatenate#numpy.concatenate "numpy.concatenate"), [`stack`](numpy.stack#numpy.stack "numpy.stack") and [`block`](numpy.block#numpy.block "numpy.block") provide more general stacking and concatenation operations. Parameters **tup**sequence of ndarrays The arrays must have the same shape along all but the first axis. 1-D arrays must have the same length. 
Returns **stacked**ndarray The array formed by stacking the given arrays, will be at least 2-D. See also [`concatenate`](numpy.concatenate#numpy.concatenate "numpy.concatenate") Join a sequence of arrays along an existing axis. [`stack`](numpy.stack#numpy.stack "numpy.stack") Join a sequence of arrays along a new axis. [`block`](numpy.block#numpy.block "numpy.block") Assemble an nd-array from nested lists of blocks. [`hstack`](numpy.hstack#numpy.hstack "numpy.hstack") Stack arrays in sequence horizontally (column wise). [`dstack`](numpy.dstack#numpy.dstack "numpy.dstack") Stack arrays in sequence depth wise (along third axis). [`column_stack`](numpy.column_stack#numpy.column_stack "numpy.column_stack") Stack 1-D arrays as columns into a 2-D array. [`vsplit`](numpy.vsplit#numpy.vsplit "numpy.vsplit") Split an array into multiple sub-arrays vertically (row-wise). #### Notes The function is applied to both the _data and the _mask, if any. #### Examples ``` >>> a = np.array([1, 2, 3]) >>> b = np.array([4, 5, 6]) >>> np.vstack((a,b)) array([[1, 2, 3], [4, 5, 6]]) ``` ``` >>> a = np.array([[1], [2], [3]]) >>> b = np.array([[4], [5], [6]]) >>> np.vstack((a,b)) array([[1], [2], [3], [4], [5], [6]]) ``` © 2005–2022 NumPy Developers Licensed under the 3-clause BSD License. <https://numpy.org/doc/1.23/reference/generated/numpy.ma.row_stack.htmlnumpy.ma.vstack =============== ma.vstack(**args*, ***kwargs*)*=<numpy.ma.extras._fromnxfunction_seq object>* Stack arrays in sequence vertically (row wise). This is equivalent to concatenation along the first axis after 1-D arrays of shape `(N,)` have been reshaped to `(1,N)`. Rebuilds arrays divided by [`vsplit`](numpy.vsplit#numpy.vsplit "numpy.vsplit"). This function makes most sense for arrays with up to 3 dimensions. For instance, for pixel-data with a height (first axis), width (second axis), and r/g/b channels (third axis). 
The functions [`concatenate`](numpy.concatenate#numpy.concatenate "numpy.concatenate"), [`stack`](numpy.stack#numpy.stack "numpy.stack") and [`block`](numpy.block#numpy.block "numpy.block") provide more general stacking and concatenation operations. Parameters **tup**sequence of ndarrays The arrays must have the same shape along all but the first axis. 1-D arrays must have the same length. Returns **stacked**ndarray The array formed by stacking the given arrays, will be at least 2-D. See also [`concatenate`](numpy.concatenate#numpy.concatenate "numpy.concatenate") Join a sequence of arrays along an existing axis. [`stack`](numpy.stack#numpy.stack "numpy.stack") Join a sequence of arrays along a new axis. [`block`](numpy.block#numpy.block "numpy.block") Assemble an nd-array from nested lists of blocks. [`hstack`](numpy.hstack#numpy.hstack "numpy.hstack") Stack arrays in sequence horizontally (column wise). [`dstack`](numpy.dstack#numpy.dstack "numpy.dstack") Stack arrays in sequence depth wise (along third axis). [`column_stack`](numpy.column_stack#numpy.column_stack "numpy.column_stack") Stack 1-D arrays as columns into a 2-D array. [`vsplit`](numpy.vsplit#numpy.vsplit "numpy.vsplit") Split an array into multiple sub-arrays vertically (row-wise). #### Notes The function is applied to both the _data and the _mask, if any. #### Examples ``` >>> a = np.array([1, 2, 3]) >>> b = np.array([4, 5, 6]) >>> np.vstack((a,b)) array([[1, 2, 3], [4, 5, 6]]) ``` ``` >>> a = np.array([[1], [2], [3]]) >>> b = np.array([[4], [5], [6]]) >>> np.vstack((a,b)) array([[1], [2], [3], [4], [5], [6]]) ``` © 2005–2022 NumPy Developers Licensed under the 3-clause BSD License. <https://numpy.org/doc/1.23/reference/generated/numpy.ma.vstack.htmlnumpy.ma.append =============== ma.append(*a*, *b*, *axis=None*)[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/ma/core.py#L8291-L8331) Append values to the end of an array. New in version 1.9.0. 
Parameters **a**array_like Values are appended to a copy of this array. **b**array_like These values are appended to a copy of `a`. It must be of the correct shape (the same shape as `a`, excluding `axis`). If `axis` is not specified, `b` can be any shape and will be flattened before use. **axis**int, optional The axis along which `v` are appended. If `axis` is not given, both `a` and `b` are flattened before use. Returns **append**MaskedArray A copy of `a` with `b` appended to `axis`. Note that [`append`](numpy.append#numpy.append "numpy.append") does not occur in-place: a new array is allocated and filled. If `axis` is None, the result is a flattened array. See also [`numpy.append`](numpy.append#numpy.append "numpy.append") Equivalent function in the top-level NumPy module. #### Examples ``` >>> import numpy.ma as ma >>> a = ma.masked_values([1, 2, 3], 2) >>> b = ma.masked_values([[4, 5, 6], [7, 8, 9]], 7) >>> ma.append(a, b) masked_array(data=[1, --, 3, 4, 5, 6, --, 8, 9], mask=[False, True, False, False, False, False, True, False, False], fill_value=999999) ``` © 2005–2022 NumPy Developers Licensed under the 3-clause BSD License. <https://numpy.org/doc/1.23/reference/generated/numpy.ma.append.htmlnumpy.ma.make_mask =================== ma.make_mask(*m*, *copy=False*, *shrink=True*, *dtype=<class 'numpy.bool_'>*)[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/ma/core.py#L1548-L1634) Create a boolean mask from an array. Return `m` as a boolean mask, creating a copy if necessary or requested. The function can accept any sequence that is convertible to integers, or `nomask`. Does not require that contents must be 0s and 1s, values of 0 are interpreted as False, everything else as True. Parameters **m**array_like Potential mask. **copy**bool, optional Whether to return a copy of `m` (True) or `m` itself (False). **shrink**bool, optional Whether to shrink `m` to `nomask` if all its values are False. **dtype**dtype, optional Data-type of the output mask. 
By default, the output mask has a dtype of MaskType (bool). If the dtype is flexible, each field has a boolean dtype. This is ignored when `m` is `nomask`, in which case `nomask` is always returned. Returns **result**ndarray A boolean mask derived from `m`. #### Examples ``` >>> import numpy.ma as ma >>> m = [True, False, True, True] >>> ma.make_mask(m) array([ True, False, True, True]) >>> m = [1, 0, 1, 1] >>> ma.make_mask(m) array([ True, False, True, True]) >>> m = [1, 0, 2, -3] >>> ma.make_mask(m) array([ True, False, True, True]) ``` Effect of the `shrink` parameter. ``` >>> m = np.zeros(4) >>> m array([0., 0., 0., 0.]) >>> ma.make_mask(m) False >>> ma.make_mask(m, shrink=False) array([False, False, False, False]) ``` Using a flexible [`dtype`](numpy.dtype#numpy.dtype "numpy.dtype"). ``` >>> m = [1, 0, 1, 1] >>> n = [0, 1, 0, 0] >>> arr = [] >>> for man, mouse in zip(m, n): ... arr.append((man, mouse)) >>> arr [(1, 0), (0, 1), (1, 0), (1, 0)] >>> dtype = np.dtype({'names':['man', 'mouse'], ... 'formats':[np.int64, np.int64]}) >>> arr = np.array(arr, dtype=dtype) >>> arr array([(1, 0), (0, 1), (1, 0), (1, 0)], dtype=[('man', '<i8'), ('mouse', '<i8')]) >>> ma.make_mask(arr, dtype=dtype) array([(True, False), (False, True), (True, False), (True, False)], dtype=[('man', '|b1'), ('mouse', '|b1')]) ``` © 2005–2022 NumPy Developers Licensed under the 3-clause BSD License. <https://numpy.org/doc/1.23/reference/generated/numpy.ma.make_mask.htmlnumpy.ma.make_mask_none ========================= ma.make_mask_none(*newshape*, *dtype=None*)[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/ma/core.py#L1637-L1684) Return a boolean mask of the given shape, filled with False. This function returns a boolean ndarray with all entries False, that can be used in common mask manipulations. If a complex dtype is specified, the type of each field is converted to a boolean type. Parameters **newshape**tuple A tuple indicating the shape of the mask. 
**dtype**{None, dtype}, optional If None, use a MaskType instance. Otherwise, use a new datatype with the same fields as [`dtype`](numpy.dtype#numpy.dtype "numpy.dtype"), converted to boolean types. Returns **result**ndarray An ndarray of appropriate shape and dtype, filled with False. See also [`make_mask`](numpy.ma.make_mask#numpy.ma.make_mask "numpy.ma.make_mask") Create a boolean mask from an array. [`make_mask_descr`](numpy.ma.make_mask_descr#numpy.ma.make_mask_descr "numpy.ma.make_mask_descr") Construct a dtype description list from a given dtype. #### Examples ``` >>> import numpy.ma as ma >>> ma.make_mask_none((3,)) array([False, False, False]) ``` Defining a more complex dtype. ``` >>> dtype = np.dtype({'names':['foo', 'bar'], ... 'formats':[np.float32, np.int64]}) >>> dtype dtype([('foo', '<f4'), ('bar', '<i8')]) >>> ma.make_mask_none((3,), dtype=dtype) array([(False, False), (False, False), (False, False)], dtype=[('foo', '|b1'), ('bar', '|b1')]) ``` © 2005–2022 NumPy Developers Licensed under the 3-clause BSD License. <https://numpy.org/doc/1.23/reference/generated/numpy.ma.make_mask_none.htmlnumpy.ma.mask_or ================= ma.mask_or(*m1*, *m2*, *copy=False*, *shrink=True*)[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/ma/core.py#L1697-L1750) Combine two masks with the `logical_or` operator. The result may be a view on `m1` or `m2` if the other is [`nomask`](../maskedarray.baseclass#numpy.ma.nomask "numpy.ma.nomask") (i.e. False). Parameters **m1, m2**array_like Input masks. **copy**bool, optional If copy is False and one of the inputs is [`nomask`](../maskedarray.baseclass#numpy.ma.nomask "numpy.ma.nomask"), return a view of the other input mask. Defaults to False. **shrink**bool, optional Whether to shrink the output to [`nomask`](../maskedarray.baseclass#numpy.ma.nomask "numpy.ma.nomask") if all its values are False. Defaults to True. Returns **mask**output mask The result masks values that are masked in either `m1` or `m2`. 
Raises ValueError If `m1` and `m2` have different flexible dtypes. #### Examples ``` >>> m1 = np.ma.make_mask([0, 1, 1, 0]) >>> m2 = np.ma.make_mask([1, 0, 0, 0]) >>> np.ma.mask_or(m1, m2) array([ True, True, True, False]) ``` © 2005–2022 NumPy Developers Licensed under the 3-clause BSD License. <https://numpy.org/doc/1.23/reference/generated/numpy.ma.mask_or.htmlnumpy.ma.make_mask_descr ========================== ma.make_mask_descr(*ndtype*)[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/ma/core.py#L1322-L1352) Construct a dtype description list from a given dtype. Returns a new dtype object, with the type of all fields in `ndtype` to a boolean type. Field names are not altered. Parameters **ndtype**dtype The dtype to convert. Returns **result**dtype A dtype that looks like `ndtype`, the type of all fields is boolean. #### Examples ``` >>> import numpy.ma as ma >>> dtype = np.dtype({'names':['foo', 'bar'], ... 'formats':[np.float32, np.int64]}) >>> dtype dtype([('foo', '<f4'), ('bar', '<i8')]) >>> ma.make_mask_descr(dtype) dtype([('foo', '|b1'), ('bar', '|b1')]) >>> ma.make_mask_descr(np.float32) dtype('bool') ``` © 2005–2022 NumPy Developers Licensed under the 3-clause BSD License. <https://numpy.org/doc/1.23/reference/generated/numpy.ma.make_mask_descr.htmlnumpy.ma.masked_array.mask =========================== property *property*ma.masked_array.mask Current mask. © 2005–2022 NumPy Developers Licensed under the 3-clause BSD License. <https://numpy.org/doc/1.23/reference/generated/numpy.ma.masked_array.mask.htmlnumpy.ma.ndenumerate ==================== ma.ndenumerate(*a*, *compressed=True*)[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/ma/extras.py#L1555-L1620) Multidimensional index iterator. Return an iterator yielding pairs of array coordinates and values, skipping elements that are masked. With `compressed=False`, [`ma.masked`](../maskedarray.baseclass#numpy.ma.masked "numpy.ma.masked") is yielded as the value of masked elements. 
This behavior differs from that of [`numpy.ndenumerate`](numpy.ndenumerate#numpy.ndenumerate "numpy.ndenumerate"), which yields the value of the underlying data array. Parameters **a**array_like An array with (possibly) masked elements. **compressed**bool, optional If True (default), masked elements are skipped. See also [`numpy.ndenumerate`](numpy.ndenumerate#numpy.ndenumerate "numpy.ndenumerate") Equivalent function ignoring any mask. #### Notes New in version 1.23.0. #### Examples ``` >>> a = np.ma.arange(9).reshape((3, 3)) >>> a[1, 0] = np.ma.masked >>> a[1, 2] = np.ma.masked >>> a[2, 1] = np.ma.masked >>> a masked_array( data=[[0, 1, 2], [--, 4, --], [6, --, 8]], mask=[[False, False, False], [ True, False, True], [False, True, False]], fill_value=999999) >>> for index, x in np.ma.ndenumerate(a): ... print(index, x) (0, 0) 0 (0, 1) 1 (0, 2) 2 (1, 1) 4 (2, 0) 6 (2, 2) 8 ``` ``` >>> for index, x in np.ma.ndenumerate(a, compressed=False): ... print(index, x) (0, 0) 0 (0, 1) 1 (0, 2) 2 (1, 0) -- (1, 1) 4 (1, 2) -- (2, 0) 6 (2, 1) -- (2, 2) 8 ``` © 2005–2022 NumPy Developers Licensed under the 3-clause BSD License. <https://numpy.org/doc/1.23/reference/generated/numpy.ma.ndenumerate.htmlnumpy.ma.flatnotmasked_contiguous ================================== ma.flatnotmasked_contiguous(*a*)[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/ma/extras.py#L1729-L1783) Find contiguous unmasked data in a masked array. Parameters **a**array_like The input array. Returns **slice_list**list A sorted sequence of `slice` objects (start index, end index). 
Changed in version 1.15.0: Now returns an empty list instead of None for a fully masked array See also [`flatnotmasked_edges`](numpy.ma.flatnotmasked_edges#numpy.ma.flatnotmasked_edges "numpy.ma.flatnotmasked_edges"), [`notmasked_contiguous`](numpy.ma.notmasked_contiguous#numpy.ma.notmasked_contiguous "numpy.ma.notmasked_contiguous"), [`notmasked_edges`](numpy.ma.notmasked_edges#numpy.ma.notmasked_edges "numpy.ma.notmasked_edges") [`clump_masked`](numpy.ma.clump_masked#numpy.ma.clump_masked "numpy.ma.clump_masked"), [`clump_unmasked`](numpy.ma.clump_unmasked#numpy.ma.clump_unmasked "numpy.ma.clump_unmasked") #### Notes Only accepts 2-D arrays at most. #### Examples ``` >>> a = np.ma.arange(10) >>> np.ma.flatnotmasked_contiguous(a) [slice(0, 10, None)] ``` ``` >>> mask = (a < 3) | (a > 8) | (a == 5) >>> a[mask] = np.ma.masked >>> np.array(a[~a.mask]) array([3, 4, 6, 7, 8]) ``` ``` >>> np.ma.flatnotmasked_contiguous(a) [slice(3, 5, None), slice(6, 9, None)] >>> a[:] = np.ma.masked >>> np.ma.flatnotmasked_contiguous(a) [] ``` © 2005–2022 NumPy Developers Licensed under the 3-clause BSD License. <https://numpy.org/doc/1.23/reference/generated/numpy.ma.flatnotmasked_contiguous.htmlnumpy.ma.flatnotmasked_edges ============================= ma.flatnotmasked_edges(*a*)[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/ma/extras.py#L1623-L1675) Find the indices of the first and last unmasked values. Expects a 1-D [`MaskedArray`](../maskedarray.baseclass#numpy.ma.MaskedArray "numpy.ma.MaskedArray"), returns None if all values are masked. Parameters **a**array_like Input 1-D [`MaskedArray`](../maskedarray.baseclass#numpy.ma.MaskedArray "numpy.ma.MaskedArray") Returns **edges**ndarray or None The indices of first and last non-masked value in the array. Returns None if all values are masked. 
See also [`flatnotmasked_contiguous`](numpy.ma.flatnotmasked_contiguous#numpy.ma.flatnotmasked_contiguous "numpy.ma.flatnotmasked_contiguous"), [`notmasked_contiguous`](numpy.ma.notmasked_contiguous#numpy.ma.notmasked_contiguous "numpy.ma.notmasked_contiguous"), [`notmasked_edges`](numpy.ma.notmasked_edges#numpy.ma.notmasked_edges "numpy.ma.notmasked_edges") [`clump_masked`](numpy.ma.clump_masked#numpy.ma.clump_masked "numpy.ma.clump_masked"), [`clump_unmasked`](numpy.ma.clump_unmasked#numpy.ma.clump_unmasked "numpy.ma.clump_unmasked") #### Notes Only accepts 1-D arrays. #### Examples ``` >>> a = np.ma.arange(10) >>> np.ma.flatnotmasked_edges(a) array([0, 9]) ``` ``` >>> mask = (a < 3) | (a > 8) | (a == 5) >>> a[mask] = np.ma.masked >>> np.array(a[~a.mask]) array([3, 4, 6, 7, 8]) ``` ``` >>> np.ma.flatnotmasked_edges(a) array([3, 8]) ``` ``` >>> a[:] = np.ma.masked >>> print(np.ma.flatnotmasked_edges(a)) None ``` © 2005–2022 NumPy Developers Licensed under the 3-clause BSD License. <https://numpy.org/doc/1.23/reference/generated/numpy.ma.flatnotmasked_edges.htmlnumpy.ma.notmasked_contiguous ============================== ma.notmasked_contiguous(*a*, *axis=None*)[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/ma/extras.py#L1786-L1860) Find contiguous unmasked data in a masked array along the given axis. Parameters **a**array_like The input array. **axis**int, optional Axis along which to perform the operation. If None (default), applies to a flattened version of the array, and this is the same as [`flatnotmasked_contiguous`](numpy.ma.flatnotmasked_contiguous#numpy.ma.flatnotmasked_contiguous "numpy.ma.flatnotmasked_contiguous"). Returns **endpoints**list A list of slices (start and end indexes) of unmasked indexes in the array. If the input is 2d and axis is specified, the result is a list of lists. 
See also [`flatnotmasked_edges`](numpy.ma.flatnotmasked_edges#numpy.ma.flatnotmasked_edges "numpy.ma.flatnotmasked_edges"), [`flatnotmasked_contiguous`](numpy.ma.flatnotmasked_contiguous#numpy.ma.flatnotmasked_contiguous "numpy.ma.flatnotmasked_contiguous"), [`notmasked_edges`](numpy.ma.notmasked_edges#numpy.ma.notmasked_edges "numpy.ma.notmasked_edges") [`clump_masked`](numpy.ma.clump_masked#numpy.ma.clump_masked "numpy.ma.clump_masked"), [`clump_unmasked`](numpy.ma.clump_unmasked#numpy.ma.clump_unmasked "numpy.ma.clump_unmasked") #### Notes Only accepts 2-D arrays at most. #### Examples ``` >>> a = np.arange(12).reshape((3, 4)) >>> mask = np.zeros_like(a) >>> mask[1:, :-1] = 1; mask[0, 1] = 1; mask[-1, 0] = 0 >>> ma = np.ma.array(a, mask=mask) >>> ma masked_array( data=[[0, --, 2, 3], [--, --, --, 7], [8, --, --, 11]], mask=[[False, True, False, False], [ True, True, True, False], [False, True, True, False]], fill_value=999999) >>> np.array(ma[~ma.mask]) array([ 0, 2, 3, 7, 8, 11]) ``` ``` >>> np.ma.notmasked_contiguous(ma) [slice(0, 1, None), slice(2, 4, None), slice(7, 9, None), slice(11, 12, None)] ``` ``` >>> np.ma.notmasked_contiguous(ma, axis=0) [[slice(0, 1, None), slice(2, 3, None)], [], [slice(0, 1, None)], [slice(0, 3, None)]] ``` ``` >>> np.ma.notmasked_contiguous(ma, axis=1) [[slice(0, 1, None), slice(2, 4, None)], [slice(3, 4, None)], [slice(0, 1, None), slice(3, 4, None)]] ``` © 2005–2022 NumPy Developers Licensed under the 3-clause BSD License. <https://numpy.org/doc/1.23/reference/generated/numpy.ma.notmasked_contiguous.htmlnumpy.ma.notmasked_edges ========================= ma.notmasked_edges(*a*, *axis=None*)[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/ma/extras.py#L1678-L1726) Find the indices of the first and last unmasked values along an axis. If all values are masked, return None. Otherwise, return a list of two tuples, corresponding to the indices of the first and last unmasked values respectively. 
Parameters **a**array_like The input array. **axis**int, optional Axis along which to perform the operation. If None (default), applies to a flattened version of the array. Returns **edges**ndarray or list An array of start and end indexes if there are any masked data in the array. If there are no masked data in the array, `edges` is a list of the first and last index. See also [`flatnotmasked_contiguous`](numpy.ma.flatnotmasked_contiguous#numpy.ma.flatnotmasked_contiguous "numpy.ma.flatnotmasked_contiguous"), [`flatnotmasked_edges`](numpy.ma.flatnotmasked_edges#numpy.ma.flatnotmasked_edges "numpy.ma.flatnotmasked_edges"), [`notmasked_contiguous`](numpy.ma.notmasked_contiguous#numpy.ma.notmasked_contiguous "numpy.ma.notmasked_contiguous") [`clump_masked`](numpy.ma.clump_masked#numpy.ma.clump_masked "numpy.ma.clump_masked"), [`clump_unmasked`](numpy.ma.clump_unmasked#numpy.ma.clump_unmasked "numpy.ma.clump_unmasked") #### Examples ``` >>> a = np.arange(9).reshape((3, 3)) >>> m = np.zeros_like(a) >>> m[1:, 1:] = 1 ``` ``` >>> am = np.ma.array(a, mask=m) >>> np.array(am[~am.mask]) array([0, 1, 2, 3, 6]) ``` ``` >>> np.ma.notmasked_edges(am) array([0, 6]) ``` © 2005–2022 NumPy Developers Licensed under the 3-clause BSD License. <https://numpy.org/doc/1.23/reference/generated/numpy.ma.notmasked_edges.htmlnumpy.ma.clump_masked ====================== ma.clump_masked(*a*)[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/ma/extras.py#L1931-L1967) Returns a list of slices corresponding to the masked clumps of a 1-D array. (A “clump” is defined as a contiguous region of the array). Parameters **a**ndarray A one-dimensional masked array. Returns **slices**list of slice The list of slices, one for each continuous region of masked elements in `a`. 
See also [`flatnotmasked_edges`](numpy.ma.flatnotmasked_edges#numpy.ma.flatnotmasked_edges "numpy.ma.flatnotmasked_edges"), [`flatnotmasked_contiguous`](numpy.ma.flatnotmasked_contiguous#numpy.ma.flatnotmasked_contiguous "numpy.ma.flatnotmasked_contiguous"), [`notmasked_edges`](numpy.ma.notmasked_edges#numpy.ma.notmasked_edges "numpy.ma.notmasked_edges") [`notmasked_contiguous`](numpy.ma.notmasked_contiguous#numpy.ma.notmasked_contiguous "numpy.ma.notmasked_contiguous"), [`clump_unmasked`](numpy.ma.clump_unmasked#numpy.ma.clump_unmasked "numpy.ma.clump_unmasked") #### Notes New in version 1.4.0. #### Examples ``` >>> a = np.ma.masked_array(np.arange(10)) >>> a[[0, 1, 2, 6, 8, 9]] = np.ma.masked >>> np.ma.clump_masked(a) [slice(0, 3, None), slice(6, 7, None), slice(8, 10, None)] ``` © 2005–2022 NumPy Developers Licensed under the 3-clause BSD License. <https://numpy.org/doc/1.23/reference/generated/numpy.ma.clump_masked.htmlnumpy.ma.clump_unmasked ======================== ma.clump_unmasked(*a*)[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/ma/extras.py#L1892-L1928) Return list of slices corresponding to the unmasked clumps of a 1-D array. (A “clump” is defined as a contiguous region of the array). Parameters **a**ndarray A one-dimensional masked array. Returns **slices**list of slice The list of slices, one for each continuous region of unmasked elements in `a`. See also [`flatnotmasked_edges`](numpy.ma.flatnotmasked_edges#numpy.ma.flatnotmasked_edges "numpy.ma.flatnotmasked_edges"), [`flatnotmasked_contiguous`](numpy.ma.flatnotmasked_contiguous#numpy.ma.flatnotmasked_contiguous "numpy.ma.flatnotmasked_contiguous"), [`notmasked_edges`](numpy.ma.notmasked_edges#numpy.ma.notmasked_edges "numpy.ma.notmasked_edges") [`notmasked_contiguous`](numpy.ma.notmasked_contiguous#numpy.ma.notmasked_contiguous "numpy.ma.notmasked_contiguous"), [`clump_masked`](numpy.ma.clump_masked#numpy.ma.clump_masked "numpy.ma.clump_masked") #### Notes New in version 1.4.0. 
#### Examples ``` >>> a = np.ma.masked_array(np.arange(10)) >>> a[[0, 1, 2, 6, 8, 9]] = np.ma.masked >>> np.ma.clump_unmasked(a) [slice(3, 6, None), slice(7, 8, None)] ``` © 2005–2022 NumPy Developers Licensed under the 3-clause BSD License. <https://numpy.org/doc/1.23/reference/generated/numpy.ma.clump_unmasked.htmlnumpy.ma.mask_cols =================== ma.mask_cols(*a*, *axis=<no value>*)[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/ma/extras.py#L1013-L1060) Mask columns of a 2D array that contain masked values. This function is a shortcut to `mask_rowcols` with `axis` equal to 1. See also [`mask_rowcols`](numpy.ma.mask_rowcols#numpy.ma.mask_rowcols "numpy.ma.mask_rowcols") Mask rows and/or columns of a 2D array. [`masked_where`](numpy.ma.masked_where#numpy.ma.masked_where "numpy.ma.masked_where") Mask where a condition is met. #### Examples ``` >>> import numpy.ma as ma >>> a = np.zeros((3, 3), dtype=int) >>> a[1, 1] = 1 >>> a array([[0, 0, 0], [0, 1, 0], [0, 0, 0]]) >>> a = ma.masked_equal(a, 1) >>> a masked_array( data=[[0, 0, 0], [0, --, 0], [0, 0, 0]], mask=[[False, False, False], [False, True, False], [False, False, False]], fill_value=1) >>> ma.mask_cols(a) masked_array( data=[[0, --, 0], [0, --, 0], [0, --, 0]], mask=[[False, True, False], [False, True, False], [False, True, False]], fill_value=1) ``` © 2005–2022 NumPy Developers Licensed under the 3-clause BSD License. <https://numpy.org/doc/1.23/reference/generated/numpy.ma.mask_cols.htmlnumpy.ma.mask_rowcols ====================== ma.mask_rowcols(*a*, *axis=None*)[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/ma/core.py#L7505-L7591) Mask rows and/or columns of a 2D array that contain masked values. Mask whole rows and/or columns of a 2D array that contain masked values. The masking behavior is selected using the `axis` parameter. * If `axis` is None, rows *and* columns are masked. * If `axis` is 0, only rows are masked. * If `axis` is 1 or -1, only columns are masked. 
Parameters

**a** : array_like, MaskedArray
The array to mask. If not a MaskedArray instance (or if no array elements are masked), the result is a MaskedArray with `mask` set to [`nomask`](../maskedarray.baseclass#numpy.ma.nomask "numpy.ma.nomask") (False). Must be a 2D array.

**axis** : int, optional
Axis along which to perform the operation. If None, applies to a flattened version of the array.

Returns

**a** : MaskedArray
A modified version of the input array, masked depending on the value of the `axis` parameter.

Raises

NotImplementedError
If input array `a` is not 2D.

See also

[`mask_rows`](numpy.ma.mask_rows#numpy.ma.mask_rows "numpy.ma.mask_rows")
Mask rows of a 2D array that contain masked values.

[`mask_cols`](numpy.ma.mask_cols#numpy.ma.mask_cols "numpy.ma.mask_cols")
Mask cols of a 2D array that contain masked values.

[`masked_where`](numpy.ma.masked_where#numpy.ma.masked_where "numpy.ma.masked_where")
Mask where a condition is met.

#### Notes

The input array's mask is modified by this function.

#### Examples

```
>>> import numpy.ma as ma
>>> a = np.zeros((3, 3), dtype=int)
>>> a[1, 1] = 1
>>> a
array([[0, 0, 0],
       [0, 1, 0],
       [0, 0, 0]])
>>> a = ma.masked_equal(a, 1)
>>> a
masked_array(
  data=[[0, 0, 0],
        [0, --, 0],
        [0, 0, 0]],
  mask=[[False, False, False],
        [False,  True, False],
        [False, False, False]],
  fill_value=1)
>>> ma.mask_rowcols(a)
masked_array(
  data=[[0, --, 0],
        [--, --, --],
        [0, --, 0]],
  mask=[[False,  True, False],
        [ True,  True,  True],
        [False,  True, False]],
  fill_value=1)
```

<https://numpy.org/doc/1.23/reference/generated/numpy.ma.mask_rowcols.html>

numpy.ma.mask_rows
==================

ma.mask_rows(*a*, *axis=<no value>*)[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/ma/extras.py#L962-L1010)

Mask rows of a 2D array that contain masked values.

This function is a shortcut to `mask_rowcols` with `axis` equal to 0.
See also

[`mask_rowcols`](numpy.ma.mask_rowcols#numpy.ma.mask_rowcols "numpy.ma.mask_rowcols")
Mask rows and/or columns of a 2D array.

[`masked_where`](numpy.ma.masked_where#numpy.ma.masked_where "numpy.ma.masked_where")
Mask where a condition is met.

#### Examples

```
>>> import numpy.ma as ma
>>> a = np.zeros((3, 3), dtype=int)
>>> a[1, 1] = 1
>>> a
array([[0, 0, 0],
       [0, 1, 0],
       [0, 0, 0]])
>>> a = ma.masked_equal(a, 1)
>>> a
masked_array(
  data=[[0, 0, 0],
        [0, --, 0],
        [0, 0, 0]],
  mask=[[False, False, False],
        [False,  True, False],
        [False, False, False]],
  fill_value=1)
```

```
>>> ma.mask_rows(a)
masked_array(
  data=[[0, 0, 0],
        [--, --, --],
        [0, 0, 0]],
  mask=[[False, False, False],
        [ True,  True,  True],
        [False, False, False]],
  fill_value=1)
```

<https://numpy.org/doc/1.23/reference/generated/numpy.ma.mask_rows.html>

numpy.ma.harden_mask
====================

ma.harden_mask(*self*)*=<numpy.ma.core._frommethod object>*

Force the mask to hard, preventing unmasking by assignment.

Whether the mask of a masked array is hard or soft is determined by its [`hardmask`](../maskedarray.baseclass#numpy.ma.MaskedArray.hardmask "numpy.ma.MaskedArray.hardmask") property. [`harden_mask`](#numpy.ma.harden_mask "numpy.ma.harden_mask") sets [`hardmask`](../maskedarray.baseclass#numpy.ma.MaskedArray.hardmask "numpy.ma.MaskedArray.hardmask") to `True` (and returns the modified self).

See also

[`ma.MaskedArray.hardmask`](../maskedarray.baseclass#numpy.ma.MaskedArray.hardmask "numpy.ma.MaskedArray.hardmask")

[`ma.MaskedArray.soften_mask`](numpy.ma.maskedarray.soften_mask#numpy.ma.MaskedArray.soften_mask "numpy.ma.MaskedArray.soften_mask")
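The hard/soft distinction is easiest to see side by side. A minimal sketch (not part of the official docs; variable names are illustrative) contrasting assignment under the default soft mask, a hardened mask, and after softening again:

```python
import numpy as np

# Soft mask (default): assigning to a masked element unmasks it.
soft = np.ma.array([1, 2, 3], mask=[False, True, False])
soft[1] = 99
soft_still_masked = bool(np.ma.getmaskarray(soft)[1])   # assignment cleared the mask

# Hard mask: harden_mask() makes assignment to masked elements a no-op.
hard = np.ma.array([1, 2, 3], mask=[False, True, False])
hard.harden_mask()
hard[1] = 99                                            # silently ignored while hard
hard_still_masked = bool(np.ma.getmaskarray(hard)[1])

# soften_mask() restores the default, so assignment unmasks again.
hard.soften_mask()
hard[1] = 99
softened_masked = bool(np.ma.getmaskarray(hard)[1])
```

With a soft mask the element is unmasked by the assignment; while the mask is hard the same assignment leaves both data and mask untouched.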
<https://numpy.org/doc/1.23/reference/generated/numpy.ma.harden_mask.html>

numpy.ma.soften_mask
====================

ma.soften_mask(*self*)*=<numpy.ma.core._frommethod object>*

Force the mask to soft (default), allowing unmasking by assignment.

Whether the mask of a masked array is hard or soft is determined by its [`hardmask`](../maskedarray.baseclass#numpy.ma.MaskedArray.hardmask "numpy.ma.MaskedArray.hardmask") property. [`soften_mask`](#numpy.ma.soften_mask "numpy.ma.soften_mask") sets [`hardmask`](../maskedarray.baseclass#numpy.ma.MaskedArray.hardmask "numpy.ma.MaskedArray.hardmask") to `False` (and returns the modified self).

See also

[`ma.MaskedArray.hardmask`](../maskedarray.baseclass#numpy.ma.MaskedArray.hardmask "numpy.ma.MaskedArray.hardmask")

[`ma.MaskedArray.harden_mask`](numpy.ma.maskedarray.harden_mask#numpy.ma.MaskedArray.harden_mask "numpy.ma.MaskedArray.harden_mask")

<https://numpy.org/doc/1.23/reference/generated/numpy.ma.soften_mask.html>

numpy.ma.MaskedArray.harden_mask
================================

method

ma.MaskedArray.harden_mask()[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/ma/core.py#L3543-L3559)

Force the mask to hard, preventing unmasking by assignment.

Whether the mask of a masked array is hard or soft is determined by its [`hardmask`](../maskedarray.baseclass#numpy.ma.MaskedArray.hardmask "numpy.ma.MaskedArray.hardmask") property. [`harden_mask`](#numpy.ma.MaskedArray.harden_mask "numpy.ma.MaskedArray.harden_mask") sets [`hardmask`](../maskedarray.baseclass#numpy.ma.MaskedArray.hardmask "numpy.ma.MaskedArray.hardmask") to `True` (and returns the modified self).
See also

[`ma.MaskedArray.hardmask`](../maskedarray.baseclass#numpy.ma.MaskedArray.hardmask "numpy.ma.MaskedArray.hardmask")

[`ma.MaskedArray.soften_mask`](numpy.ma.maskedarray.soften_mask#numpy.ma.MaskedArray.soften_mask "numpy.ma.MaskedArray.soften_mask")

<https://numpy.org/doc/1.23/reference/generated/numpy.ma.MaskedArray.harden_mask.html>

numpy.ma.MaskedArray.soften_mask
================================

method

ma.MaskedArray.soften_mask()[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/ma/core.py#L3561-L3577)

Force the mask to soft (default), allowing unmasking by assignment.

Whether the mask of a masked array is hard or soft is determined by its [`hardmask`](../maskedarray.baseclass#numpy.ma.MaskedArray.hardmask "numpy.ma.MaskedArray.hardmask") property. [`soften_mask`](#numpy.ma.MaskedArray.soften_mask "numpy.ma.MaskedArray.soften_mask") sets [`hardmask`](../maskedarray.baseclass#numpy.ma.MaskedArray.hardmask "numpy.ma.MaskedArray.hardmask") to `False` (and returns the modified self).

See also

[`ma.MaskedArray.hardmask`](../maskedarray.baseclass#numpy.ma.MaskedArray.hardmask "numpy.ma.MaskedArray.hardmask")

[`ma.MaskedArray.harden_mask`](numpy.ma.maskedarray.harden_mask#numpy.ma.MaskedArray.harden_mask "numpy.ma.MaskedArray.harden_mask")

<https://numpy.org/doc/1.23/reference/generated/numpy.ma.MaskedArray.soften_mask.html>

numpy.ma.MaskedArray.shrink_mask
================================

method

ma.MaskedArray.shrink_mask()[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/ma/core.py#L3646-L3675)

Reduce a mask to nomask when possible.
Parameters

None

Returns

None

#### Examples

```
>>> x = np.ma.array([[1,2 ], [3, 4]], mask=[0]*4)
>>> x.mask
array([[False, False],
       [False, False]])
>>> x.shrink_mask()
masked_array(
  data=[[1, 2],
        [3, 4]],
  mask=False,
  fill_value=999999)
>>> x.mask
False
```

<https://numpy.org/doc/1.23/reference/generated/numpy.ma.MaskedArray.shrink_mask.html>

numpy.ma.MaskedArray.unshare_mask
=================================

method

ma.MaskedArray.unshare_mask()[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/ma/core.py#L3623-L3639)

Copy the mask and set the [`sharedmask`](../maskedarray.baseclass#numpy.ma.MaskedArray.sharedmask "numpy.ma.MaskedArray.sharedmask") flag to `False`.

Whether the mask is shared between masked arrays can be seen from the [`sharedmask`](../maskedarray.baseclass#numpy.ma.MaskedArray.sharedmask "numpy.ma.MaskedArray.sharedmask") property. [`unshare_mask`](#numpy.ma.MaskedArray.unshare_mask "numpy.ma.MaskedArray.unshare_mask") ensures the mask is not shared. A copy of the mask is only made if it was shared.

See also

[`sharedmask`](../maskedarray.baseclass#numpy.ma.MaskedArray.sharedmask "numpy.ma.MaskedArray.sharedmask")

<https://numpy.org/doc/1.23/reference/generated/numpy.ma.MaskedArray.unshare_mask.html>

numpy.ma.asarray
================

ma.asarray(*a*, *dtype=None*, *order=None*)[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/ma/core.py#L7958-L8004)

Convert the input to a masked array of the given data-type.

No copy is performed if the input is already an [`ndarray`](numpy.ndarray#numpy.ndarray "numpy.ndarray"). If `a` is a subclass of [`MaskedArray`](../maskedarray.baseclass#numpy.ma.MaskedArray "numpy.ma.MaskedArray"), a base class [`MaskedArray`](../maskedarray.baseclass#numpy.ma.MaskedArray "numpy.ma.MaskedArray") is returned.
Parameters

**a** : array_like
Input data, in any form that can be converted to a masked array. This includes lists, lists of tuples, tuples, tuples of tuples, tuples of lists, ndarrays and masked arrays.

**dtype** : dtype, optional
By default, the data-type is inferred from the input data.

**order** : {'C', 'F'}, optional
Whether to use row-major ('C') or column-major ('FORTRAN') memory representation. Default is 'C'.

Returns

**out** : MaskedArray
Masked array interpretation of `a`.

See also

[`asanyarray`](numpy.asanyarray#numpy.asanyarray "numpy.asanyarray")
Similar to [`asarray`](numpy.asarray#numpy.asarray "numpy.asarray"), but conserves subclasses.

#### Examples

```
>>> x = np.arange(10.).reshape(2, 5)
>>> x
array([[0., 1., 2., 3., 4.],
       [5., 6., 7., 8., 9.]])
>>> np.ma.asarray(x)
masked_array(
  data=[[0., 1., 2., 3., 4.],
        [5., 6., 7., 8., 9.]],
  mask=False,
  fill_value=1e+20)
>>> type(np.ma.asarray(x))
<class 'numpy.ma.core.MaskedArray'>
```

<https://numpy.org/doc/1.23/reference/generated/numpy.ma.asarray.html>

numpy.ma.asanyarray
===================

ma.asanyarray(*a*, *dtype=None*)[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/ma/core.py#L8007-L8053)

Convert the input to a masked array, conserving subclasses.

If `a` is a subclass of [`MaskedArray`](../maskedarray.baseclass#numpy.ma.MaskedArray "numpy.ma.MaskedArray"), its class is conserved. No copy is performed if the input is already an [`ndarray`](numpy.ndarray#numpy.ndarray "numpy.ndarray").

Parameters

**a** : array_like
Input data, in any form that can be converted to an array.

**dtype** : dtype, optional
By default, the data-type is inferred from the input data.

Returns

**out** : MaskedArray
MaskedArray interpretation of `a`.
See also

[`asarray`](numpy.asarray#numpy.asarray "numpy.asarray")
Similar to [`asanyarray`](numpy.asanyarray#numpy.asanyarray "numpy.asanyarray"), but does not conserve subclasses.

#### Examples

```
>>> x = np.arange(10.).reshape(2, 5)
>>> x
array([[0., 1., 2., 3., 4.],
       [5., 6., 7., 8., 9.]])
>>> np.ma.asanyarray(x)
masked_array(
  data=[[0., 1., 2., 3., 4.],
        [5., 6., 7., 8., 9.]],
  mask=False,
  fill_value=1e+20)
>>> type(np.ma.asanyarray(x))
<class 'numpy.ma.core.MaskedArray'>
```

<https://numpy.org/doc/1.23/reference/generated/numpy.ma.asanyarray.html>

numpy.ma.fix_invalid
====================

ma.fix_invalid(*a*, *mask=False*, *copy=True*, *fill_value=None*)[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/ma/core.py#L718-L774)

Return input with invalid data masked and replaced by a fill value.

Invalid data means values of [`nan`](../constants#numpy.nan "numpy.nan"), [`inf`](../constants#numpy.inf "numpy.inf"), etc.

Parameters

**a** : array_like
Input array, a (subclass of) ndarray.

**mask** : sequence, optional
Mask. Must be convertible to an array of booleans with the same shape as `data`. True indicates masked (i.e. invalid) data.

**copy** : bool, optional
Whether to use a copy of `a` (True) or to fix `a` in place (False). Default is True.

**fill_value** : scalar, optional
Value used for fixing invalid data. Default is None, in which case `a.fill_value` is used.

Returns

**b** : MaskedArray
The input array with invalid entries fixed.

#### Notes

A copy is performed by default.
#### Examples ``` >>> x = np.ma.array([1., -1, np.nan, np.inf], mask=[1] + [0]*3) >>> x masked_array(data=[--, -1.0, nan, inf], mask=[ True, False, False, False], fill_value=1e+20) >>> np.ma.fix_invalid(x) masked_array(data=[--, -1.0, --, --], mask=[ True, False, True, True], fill_value=1e+20) ``` ``` >>> fixed = np.ma.fix_invalid(x) >>> fixed.data array([ 1.e+00, -1.e+00, 1.e+20, 1.e+20]) >>> x.data array([ 1., -1., nan, inf]) ``` © 2005–2022 NumPy Developers Licensed under the 3-clause BSD License. <https://numpy.org/doc/1.23/reference/generated/numpy.ma.fix_invalid.htmlnumpy.ma.masked_equal ====================== ma.masked_equal(*x*, *value*, *copy=True*)[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/ma/core.py#L2073-L2100) Mask an array where equal to a given value. This function is a shortcut to `masked_where`, with `condition` = (x == value). For floating point arrays, consider using `masked_values(x, value)`. See also [`masked_where`](numpy.ma.masked_where#numpy.ma.masked_where "numpy.ma.masked_where") Mask where a condition is met. [`masked_values`](numpy.ma.masked_values#numpy.ma.masked_values "numpy.ma.masked_values") Mask using floating point equality. #### Examples ``` >>> import numpy.ma as ma >>> a = np.arange(4) >>> a array([0, 1, 2, 3]) >>> ma.masked_equal(a, 2) masked_array(data=[0, 1, --, 3], mask=[False, False, True, False], fill_value=2) ``` © 2005–2022 NumPy Developers Licensed under the 3-clause BSD License. <https://numpy.org/doc/1.23/reference/generated/numpy.ma.masked_equal.htmlnumpy.ma.masked_greater ======================== ma.masked_greater(*x*, *value*, *copy=True*)[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/ma/core.py#L1943-L1966) Mask an array where greater than a given value. This function is a shortcut to `masked_where`, with `condition` = (x > value). See also [`masked_where`](numpy.ma.masked_where#numpy.ma.masked_where "numpy.ma.masked_where") Mask where a condition is met. 
#### Examples ``` >>> import numpy.ma as ma >>> a = np.arange(4) >>> a array([0, 1, 2, 3]) >>> ma.masked_greater(a, 2) masked_array(data=[0, 1, 2, --], mask=[False, False, False, True], fill_value=999999) ``` © 2005–2022 NumPy Developers Licensed under the 3-clause BSD License. <https://numpy.org/doc/1.23/reference/generated/numpy.ma.masked_greater.htmlnumpy.ma.masked_greater_equal =============================== ma.masked_greater_equal(*x*, *value*, *copy=True*)[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/ma/core.py#L1969-L1992) Mask an array where greater than or equal to a given value. This function is a shortcut to `masked_where`, with `condition` = (x >= value). See also [`masked_where`](numpy.ma.masked_where#numpy.ma.masked_where "numpy.ma.masked_where") Mask where a condition is met. #### Examples ``` >>> import numpy.ma as ma >>> a = np.arange(4) >>> a array([0, 1, 2, 3]) >>> ma.masked_greater_equal(a, 2) masked_array(data=[0, 1, --, --], mask=[False, False, True, True], fill_value=999999) ``` © 2005–2022 NumPy Developers Licensed under the 3-clause BSD License. <https://numpy.org/doc/1.23/reference/generated/numpy.ma.masked_greater_equal.htmlnumpy.ma.masked_inside ======================= ma.masked_inside(*x*, *v1*, *v2*, *copy=True*)[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/ma/core.py#L2103-L2140) Mask an array inside a given interval. Shortcut to `masked_where`, where `condition` is True for `x` inside the interval [v1,v2] (v1 <= x <= v2). The boundaries `v1` and `v2` can be given in either order. See also [`masked_where`](numpy.ma.masked_where#numpy.ma.masked_where "numpy.ma.masked_where") Mask where a condition is met. #### Notes The array `x` is prefilled with its filling value. 
#### Examples ``` >>> import numpy.ma as ma >>> x = [0.31, 1.2, 0.01, 0.2, -0.4, -1.1] >>> ma.masked_inside(x, -0.3, 0.3) masked_array(data=[0.31, 1.2, --, --, -0.4, -1.1], mask=[False, False, True, True, False, False], fill_value=1e+20) ``` The order of `v1` and `v2` doesn’t matter. ``` >>> ma.masked_inside(x, 0.3, -0.3) masked_array(data=[0.31, 1.2, --, --, -0.4, -1.1], mask=[False, False, True, True, False, False], fill_value=1e+20) ``` © 2005–2022 NumPy Developers Licensed under the 3-clause BSD License. <https://numpy.org/doc/1.23/reference/generated/numpy.ma.masked_inside.htmlnumpy.ma.masked_invalid ======================== ma.masked_invalid(*a*, *copy=True*)[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/ma/core.py#L2331-L2370) Mask an array where invalid values occur (NaNs or infs). This function is a shortcut to `masked_where`, with `condition` = ~(np.isfinite(a)). Any pre-existing mask is conserved. Only applies to arrays with a dtype where NaNs or infs make sense (i.e. floating point types), but accepts any array_like object. See also [`masked_where`](numpy.ma.masked_where#numpy.ma.masked_where "numpy.ma.masked_where") Mask where a condition is met. #### Examples ``` >>> import numpy.ma as ma >>> a = np.arange(5, dtype=float) >>> a[2] = np.NaN >>> a[3] = np.PINF >>> a array([ 0., 1., nan, inf, 4.]) >>> ma.masked_invalid(a) masked_array(data=[0.0, 1.0, --, --, 4.0], mask=[False, False, True, True, False], fill_value=1e+20) ``` © 2005–2022 NumPy Developers Licensed under the 3-clause BSD License. <https://numpy.org/doc/1.23/reference/generated/numpy.ma.masked_invalid.htmlnumpy.ma.masked_less ===================== ma.masked_less(*x*, *value*, *copy=True*)[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/ma/core.py#L1995-L2018) Mask an array where less than a given value. This function is a shortcut to `masked_where`, with `condition` = (x < value). 
See also [`masked_where`](numpy.ma.masked_where#numpy.ma.masked_where "numpy.ma.masked_where") Mask where a condition is met. #### Examples ``` >>> import numpy.ma as ma >>> a = np.arange(4) >>> a array([0, 1, 2, 3]) >>> ma.masked_less(a, 2) masked_array(data=[--, --, 2, 3], mask=[ True, True, False, False], fill_value=999999) ``` © 2005–2022 NumPy Developers Licensed under the 3-clause BSD License. <https://numpy.org/doc/1.23/reference/generated/numpy.ma.masked_less.htmlnumpy.ma.masked_less_equal ============================ ma.masked_less_equal(*x*, *value*, *copy=True*)[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/ma/core.py#L2021-L2044) Mask an array where less than or equal to a given value. This function is a shortcut to `masked_where`, with `condition` = (x <= value). See also [`masked_where`](numpy.ma.masked_where#numpy.ma.masked_where "numpy.ma.masked_where") Mask where a condition is met. #### Examples ``` >>> import numpy.ma as ma >>> a = np.arange(4) >>> a array([0, 1, 2, 3]) >>> ma.masked_less_equal(a, 2) masked_array(data=[--, --, --, 3], mask=[ True, True, True, False], fill_value=999999) ``` © 2005–2022 NumPy Developers Licensed under the 3-clause BSD License. <https://numpy.org/doc/1.23/reference/generated/numpy.ma.masked_less_equal.htmlnumpy.ma.masked_not_equal =========================== ma.masked_not_equal(*x*, *value*, *copy=True*)[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/ma/core.py#L2047-L2070) Mask an array where `not` equal to a given value. This function is a shortcut to `masked_where`, with `condition` = (x != value). See also [`masked_where`](numpy.ma.masked_where#numpy.ma.masked_where "numpy.ma.masked_where") Mask where a condition is met. 
#### Examples ``` >>> import numpy.ma as ma >>> a = np.arange(4) >>> a array([0, 1, 2, 3]) >>> ma.masked_not_equal(a, 2) masked_array(data=[--, --, 2, --], mask=[ True, True, False, True], fill_value=999999) ``` © 2005–2022 NumPy Developers Licensed under the 3-clause BSD License. <https://numpy.org/doc/1.23/reference/generated/numpy.ma.masked_not_equal.htmlnumpy.ma.masked_object ======================= ma.masked_object(*x*, *value*, *copy=True*, *shrink=True*)[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/ma/core.py#L2183-L2248) Mask the array `x` where the data are exactly equal to value. This function is similar to [`masked_values`](numpy.ma.masked_values#numpy.ma.masked_values "numpy.ma.masked_values"), but only suitable for object arrays: for floating point, use [`masked_values`](numpy.ma.masked_values#numpy.ma.masked_values "numpy.ma.masked_values") instead. Parameters **x**array_like Array to mask **value**object Comparison value **copy**{True, False}, optional Whether to return a copy of `x`. **shrink**{True, False}, optional Whether to collapse a mask full of False to nomask Returns **result**MaskedArray The result of masking `x` where equal to `value`. See also [`masked_where`](numpy.ma.masked_where#numpy.ma.masked_where "numpy.ma.masked_where") Mask where a condition is met. [`masked_equal`](numpy.ma.masked_equal#numpy.ma.masked_equal "numpy.ma.masked_equal") Mask where equal to a given value (integers). [`masked_values`](numpy.ma.masked_values#numpy.ma.masked_values "numpy.ma.masked_values") Mask using floating point equality. 
#### Examples ``` >>> import numpy.ma as ma >>> food = np.array(['green_eggs', 'ham'], dtype=object) >>> # don't eat spoiled food >>> eat = ma.masked_object(food, 'green_eggs') >>> eat masked_array(data=[--, 'ham'], mask=[ True, False], fill_value='green_eggs', dtype=object) >>> # plain ol` ham is boring >>> fresh_food = np.array(['cheese', 'ham', 'pineapple'], dtype=object) >>> eat = ma.masked_object(fresh_food, 'green_eggs') >>> eat masked_array(data=['cheese', 'ham', 'pineapple'], mask=False, fill_value='green_eggs', dtype=object) ``` Note that `mask` is set to `nomask` if possible. ``` >>> eat masked_array(data=['cheese', 'ham', 'pineapple'], mask=False, fill_value='green_eggs', dtype=object) ``` © 2005–2022 NumPy Developers Licensed under the 3-clause BSD License. <https://numpy.org/doc/1.23/reference/generated/numpy.ma.masked_object.htmlnumpy.ma.masked_outside ======================== ma.masked_outside(*x*, *v1*, *v2*, *copy=True*)[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/ma/core.py#L2143-L2180) Mask an array outside a given interval. Shortcut to `masked_where`, where `condition` is True for `x` outside the interval [v1,v2] (x < v1)|(x > v2). The boundaries `v1` and `v2` can be given in either order. See also [`masked_where`](numpy.ma.masked_where#numpy.ma.masked_where "numpy.ma.masked_where") Mask where a condition is met. #### Notes The array `x` is prefilled with its filling value. #### Examples ``` >>> import numpy.ma as ma >>> x = [0.31, 1.2, 0.01, 0.2, -0.4, -1.1] >>> ma.masked_outside(x, -0.3, 0.3) masked_array(data=[--, --, 0.01, 0.2, --, --], mask=[ True, True, False, False, True, True], fill_value=1e+20) ``` The order of `v1` and `v2` doesn’t matter. ``` >>> ma.masked_outside(x, 0.3, -0.3) masked_array(data=[--, --, 0.01, 0.2, --, --], mask=[ True, True, False, False, True, True], fill_value=1e+20) ``` © 2005–2022 NumPy Developers Licensed under the 3-clause BSD License. 
<https://numpy.org/doc/1.23/reference/generated/numpy.ma.masked_outside.htmlnumpy.ma.masked_values ======================= ma.masked_values(*x*, *value*, *rtol=1e-05*, *atol=1e-08*, *copy=True*, *shrink=True*)[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/ma/core.py#L2251-L2328) Mask using floating point equality. Return a MaskedArray, masked where the data in array `x` are approximately equal to `value`, determined using [`isclose`](numpy.isclose#numpy.isclose "numpy.isclose"). The default tolerances for [`masked_values`](#numpy.ma.masked_values "numpy.ma.masked_values") are the same as those for [`isclose`](numpy.isclose#numpy.isclose "numpy.isclose"). For integer types, exact equality is used, in the same way as [`masked_equal`](numpy.ma.masked_equal#numpy.ma.masked_equal "numpy.ma.masked_equal"). The fill_value is set to `value` and the mask is set to `nomask` if possible. Parameters **x**array_like Array to mask. **value**float Masking value. **rtol, atol**float, optional Tolerance parameters passed on to [`isclose`](numpy.isclose#numpy.isclose "numpy.isclose") **copy**bool, optional Whether to return a copy of `x`. **shrink**bool, optional Whether to collapse a mask full of False to `nomask`. Returns **result**MaskedArray The result of masking `x` where approximately equal to `value`. See also [`masked_where`](numpy.ma.masked_where#numpy.ma.masked_where "numpy.ma.masked_where") Mask where a condition is met. [`masked_equal`](numpy.ma.masked_equal#numpy.ma.masked_equal "numpy.ma.masked_equal") Mask where equal to a given value (integers). #### Examples ``` >>> import numpy.ma as ma >>> x = np.array([1, 1.1, 2, 1.1, 3]) >>> ma.masked_values(x, 1.1) masked_array(data=[1.0, --, 2.0, --, 3.0], mask=[False, True, False, True, False], fill_value=1.1) ``` Note that `mask` is set to `nomask` if possible. ``` >>> ma.masked_values(x, 1.5) masked_array(data=[1. , 1.1, 2. , 1.1, 3. 
], mask=False, fill_value=1.5) ``` For integers, the fill value will be different in general to the result of `masked_equal`. ``` >>> x = np.arange(5) >>> x array([0, 1, 2, 3, 4]) >>> ma.masked_values(x, 2) masked_array(data=[0, 1, --, 3, 4], mask=[False, False, True, False, False], fill_value=2) >>> ma.masked_equal(x, 2) masked_array(data=[0, 1, --, 3, 4], mask=[False, False, True, False, False], fill_value=2) ``` © 2005–2022 NumPy Developers Licensed under the 3-clause BSD License. <https://numpy.org/doc/1.23/reference/generated/numpy.ma.masked_values.htmlnumpy.ma.masked_where ====================== ma.masked_where(*condition*, *a*, *copy=True*)[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/ma/core.py#L1821-L1940) Mask an array where a condition is met. Return `a` as an array masked where `condition` is True. Any masked values of `a` or `condition` are also masked in the output. Parameters **condition**array_like Masking condition. When `condition` tests floating point values for equality, consider using `masked_values` instead. **a**array_like Array to mask. **copy**bool If True (default) make a copy of `a` in the result. If False modify `a` in place and return a view. Returns **result**MaskedArray The result of masking `a` where `condition` is True. See also [`masked_values`](numpy.ma.masked_values#numpy.ma.masked_values "numpy.ma.masked_values") Mask using floating point equality. [`masked_equal`](numpy.ma.masked_equal#numpy.ma.masked_equal "numpy.ma.masked_equal") Mask where equal to a given value. [`masked_not_equal`](numpy.ma.masked_not_equal#numpy.ma.masked_not_equal "numpy.ma.masked_not_equal") Mask where `not` equal to a given value. [`masked_less_equal`](numpy.ma.masked_less_equal#numpy.ma.masked_less_equal "numpy.ma.masked_less_equal") Mask where less than or equal to a given value. 
[`masked_greater_equal`](numpy.ma.masked_greater_equal#numpy.ma.masked_greater_equal "numpy.ma.masked_greater_equal") Mask where greater than or equal to a given value. [`masked_less`](numpy.ma.masked_less#numpy.ma.masked_less "numpy.ma.masked_less") Mask where less than a given value. [`masked_greater`](numpy.ma.masked_greater#numpy.ma.masked_greater "numpy.ma.masked_greater") Mask where greater than a given value. [`masked_inside`](numpy.ma.masked_inside#numpy.ma.masked_inside "numpy.ma.masked_inside") Mask inside a given interval. [`masked_outside`](numpy.ma.masked_outside#numpy.ma.masked_outside "numpy.ma.masked_outside") Mask outside a given interval. [`masked_invalid`](numpy.ma.masked_invalid#numpy.ma.masked_invalid "numpy.ma.masked_invalid") Mask invalid values (NaNs or infs). #### Examples ``` >>> import numpy.ma as ma >>> a = np.arange(4) >>> a array([0, 1, 2, 3]) >>> ma.masked_where(a <= 2, a) masked_array(data=[--, --, --, 3], mask=[ True, True, True, False], fill_value=999999) ``` Mask array `b` conditional on `a`. ``` >>> b = ['a', 'b', 'c', 'd'] >>> ma.masked_where(a == 2, b) masked_array(data=['a', 'b', --, 'd'], mask=[False, False, True, False], fill_value='N/A', dtype='<U1') ``` Effect of the [`copy`](numpy.copy#numpy.copy "numpy.copy") argument. ``` >>> c = ma.masked_where(a <= 2, a) >>> c masked_array(data=[--, --, --, 3], mask=[ True, True, True, False], fill_value=999999) >>> c[0] = 99 >>> c masked_array(data=[99, --, --, 3], mask=[False, True, True, False], fill_value=999999) >>> a array([0, 1, 2, 3]) >>> c = ma.masked_where(a <= 2, a, copy=False) >>> c[0] = 99 >>> c masked_array(data=[99, --, --, 3], mask=[False, True, True, False], fill_value=999999) >>> a array([99, 1, 2, 3]) ``` When `condition` or `a` contain masked values. 
``` >>> a = np.arange(4) >>> a = ma.masked_where(a == 2, a) >>> a masked_array(data=[0, 1, --, 3], mask=[False, False, True, False], fill_value=999999) >>> b = np.arange(4) >>> b = ma.masked_where(b == 0, b) >>> b masked_array(data=[--, 1, 2, 3], mask=[ True, False, False, False], fill_value=999999) >>> ma.masked_where(a == 3, b) masked_array(data=[--, 1, --, --], mask=[ True, False, True, True], fill_value=999999) ``` © 2005–2022 NumPy Developers Licensed under the 3-clause BSD License. <https://numpy.org/doc/1.23/reference/generated/numpy.ma.masked_where.htmlnumpy.ma.compress_cols ======================= ma.compress_cols(*a*)[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/ma/extras.py#L944-L959) Suppress whole columns of a 2-D array that contain masked values. This is equivalent to `np.ma.compress_rowcols(a, 1)`, see [`compress_rowcols`](numpy.ma.compress_rowcols#numpy.ma.compress_rowcols "numpy.ma.compress_rowcols") for details. See also [`compress_rowcols`](numpy.ma.compress_rowcols#numpy.ma.compress_rowcols "numpy.ma.compress_rowcols") © 2005–2022 NumPy Developers Licensed under the 3-clause BSD License. <https://numpy.org/doc/1.23/reference/generated/numpy.ma.compress_cols.htmlnumpy.ma.compress_rowcols ========================== ma.compress_rowcols(*x*, *axis=None*)[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/ma/extras.py#L871-L923) Suppress the rows and/or columns of a 2-D array that contain masked values. The suppression behavior is selected with the `axis` parameter. * If axis is None, both rows and columns are suppressed. * If axis is 0, only rows are suppressed. * If axis is 1 or -1, only columns are suppressed. Parameters **x**array_like, MaskedArray The array to operate on. If not a MaskedArray instance (or if no array elements are masked), `x` is interpreted as a MaskedArray with `mask` set to [`nomask`](../maskedarray.baseclass#numpy.ma.nomask "numpy.ma.nomask"). Must be a 2D array. 
**axis**int, optional Axis along which to perform the operation. Default is None. Returns **compressed_array**ndarray The compressed array. #### Examples ``` >>> x = np.ma.array(np.arange(9).reshape(3, 3), mask=[[1, 0, 0], ... [1, 0, 0], ... [0, 0, 0]]) >>> x masked_array( data=[[--, 1, 2], [--, 4, 5], [6, 7, 8]], mask=[[ True, False, False], [ True, False, False], [False, False, False]], fill_value=999999) ``` ``` >>> np.ma.compress_rowcols(x) array([[7, 8]]) >>> np.ma.compress_rowcols(x, 0) array([[6, 7, 8]]) >>> np.ma.compress_rowcols(x, 1) array([[1, 2], [4, 5], [7, 8]]) ``` © 2005–2022 NumPy Developers Licensed under the 3-clause BSD License. <https://numpy.org/doc/1.23/reference/generated/numpy.ma.compress_rowcols.htmlnumpy.ma.compress_rows ======================= ma.compress_rows(*a*)[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/ma/extras.py#L926-L941) Suppress whole rows of a 2-D array that contain masked values. This is equivalent to `np.ma.compress_rowcols(a, 0)`, see [`compress_rowcols`](numpy.ma.compress_rowcols#numpy.ma.compress_rowcols "numpy.ma.compress_rowcols") for details. See also [`compress_rowcols`](numpy.ma.compress_rowcols#numpy.ma.compress_rowcols "numpy.ma.compress_rowcols") © 2005–2022 NumPy Developers Licensed under the 3-clause BSD License. <https://numpy.org/doc/1.23/reference/generated/numpy.ma.compress_rows.htmlnumpy.ma.compressed =================== ma.compressed(*x*)[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/ma/core.py#L6962-L6974) Return all the non-masked data as a 1-D array. This function is equivalent to calling the “compressed” method of a [`ma.MaskedArray`](../maskedarray.baseclass#numpy.ma.MaskedArray "numpy.ma.MaskedArray"), see [`ma.MaskedArray.compressed`](numpy.ma.maskedarray.compressed#numpy.ma.MaskedArray.compressed "numpy.ma.MaskedArray.compressed") for details. 
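The `compressed` page above gives no example of its own; a minimal sketch (not from the reference documentation):

```python
import numpy as np

# mask two entries, then collect only the unmasked data
x = np.ma.array([10, 20, 30, 40], mask=[0, 1, 0, 1])
flat = np.ma.compressed(x)

print(flat)        # [10 30]
print(type(flat))  # a plain numpy.ndarray, not a MaskedArray
```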
See also [`ma.MaskedArray.compressed`](numpy.ma.maskedarray.compressed#numpy.ma.MaskedArray.compressed "numpy.ma.MaskedArray.compressed") Equivalent method. © 2005–2022 NumPy Developers Licensed under the 3-clause BSD License. <https://numpy.org/doc/1.23/reference/generated/numpy.ma.compressed.htmlnumpy.ma.filled =============== ma.filled(*a*, *fill_value=None*)[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/ma/core.py#L580-L634) Return input as an array with masked data replaced by a fill value. If `a` is not a [`MaskedArray`](../maskedarray.baseclass#numpy.ma.MaskedArray "numpy.ma.MaskedArray"), `a` itself is returned. If `a` is a [`MaskedArray`](../maskedarray.baseclass#numpy.ma.MaskedArray "numpy.ma.MaskedArray") and `fill_value` is None, `fill_value` is set to `a.fill_value`. Parameters **a**MaskedArray or array_like An input object. **fill_value**array_like, optional. Can be scalar or non-scalar. If non-scalar, the resulting filled array should be broadcastable over input array. Default is None. Returns **a**ndarray The filled array. See also [`compressed`](numpy.ma.compressed#numpy.ma.compressed "numpy.ma.compressed") #### Examples ``` >>> x = np.ma.array(np.arange(9).reshape(3, 3), mask=[[1, 0, 0], ... [1, 0, 0], ... [0, 0, 0]]) >>> x.filled() array([[999999, 1, 2], [999999, 4, 5], [ 6, 7, 8]]) >>> x.filled(fill_value=333) array([[333, 1, 2], [333, 4, 5], [ 6, 7, 8]]) >>> x.filled(fill_value=np.arange(3)) array([[0, 1, 2], [0, 4, 5], [6, 7, 8]]) ``` © 2005–2022 NumPy Developers Licensed under the 3-clause BSD License. <https://numpy.org/doc/1.23/reference/generated/numpy.ma.filled.htmlnumpy.ma.MaskedArray.compressed =============================== method ma.MaskedArray.compressed()[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/ma/core.py#L3856-L3881) Return all the non-masked data as a 1-D array. Returns **data**ndarray A new [`ndarray`](numpy.ndarray#numpy.ndarray "numpy.ndarray") holding the non-masked data is returned. 
#### Notes The result is **not** a MaskedArray! #### Examples ``` >>> x = np.ma.array(np.arange(5), mask=[0]*2 + [1]*3) >>> x.compressed() array([0, 1]) >>> type(x.compressed()) <class 'numpy.ndarray'> ``` © 2005–2022 NumPy Developers Licensed under the 3-clause BSD License. <https://numpy.org/doc/1.23/reference/generated/numpy.ma.MaskedArray.compressed.html>
numpy.ma.MaskedArray.filled =========================== method ma.MaskedArray.filled(*fill_value=None*)[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/ma/core.py#L3776-L3854) Return a copy of self, with masked values filled with a given value. **However**, if there are no masked values to fill, self will be returned instead as an ndarray. Parameters **fill_value**array_like, optional The value to use for invalid entries. Can be scalar or non-scalar. If non-scalar, the resulting ndarray must be broadcastable over input array. Default is None, in which case the [`fill_value`](../maskedarray.baseclass#numpy.ma.MaskedArray.fill_value "numpy.ma.MaskedArray.fill_value") attribute of the array is used instead. Returns **filled_array**ndarray A copy of `self` with invalid entries replaced by *fill_value* (be it the function argument or the attribute of `self`), or `self` itself as an ndarray if there are no invalid entries to be replaced. #### Notes The result is **not** a MaskedArray! #### Examples ``` >>> x = np.ma.array([1,2,3,4,5], mask=[0,0,1,0,1], fill_value=-999) >>> x.filled() array([ 1, 2, -999, 4, -999]) >>> x.filled(fill_value=1000) array([ 1, 2, 1000, 4, 1000]) >>> type(x.filled()) <class 'numpy.ndarray'> ``` Subclassing is preserved.
This means that if, e.g., the data part of the masked array is a recarray, [`filled`](#numpy.ma.MaskedArray.filled "numpy.ma.MaskedArray.filled") returns a recarray: ``` >>> x = np.array([(-1, 2), (-3, 4)], dtype='i8,i8').view(np.recarray) >>> m = np.ma.array(x, mask=[(True, False), (False, True)]) >>> m.filled() rec.array([(999999, 2), ( -3, 999999)], dtype=[('f0', '<i8'), ('f1', '<i8')]) ``` © 2005–2022 NumPy Developers Licensed under the 3-clause BSD License. <https://numpy.org/doc/1.23/reference/generated/numpy.ma.MaskedArray.filled.htmlnumpy.ma.MaskedArray.tofile =========================== method ma.MaskedArray.tofile(*fid*, *sep=''*, *format='%s'*)[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/ma/core.py#L6154-L6167) Save a masked array to a file in binary format. Warning This function is not implemented yet. Raises NotImplementedError When [`tofile`](#numpy.ma.MaskedArray.tofile "numpy.ma.MaskedArray.tofile") is called. © 2005–2022 NumPy Developers Licensed under the 3-clause BSD License. <https://numpy.org/doc/1.23/reference/generated/numpy.ma.MaskedArray.tofile.htmlnumpy.ma.MaskedArray.tolist =========================== method ma.MaskedArray.tolist(*fill_value=None*)[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/ma/core.py#L6048-L6097) Return the data portion of the masked array as a hierarchical Python list. Data items are converted to the nearest compatible Python type. Masked values are converted to [`fill_value`](../maskedarray.baseclass#numpy.ma.MaskedArray.fill_value "numpy.ma.MaskedArray.fill_value"). If [`fill_value`](../maskedarray.baseclass#numpy.ma.MaskedArray.fill_value "numpy.ma.MaskedArray.fill_value") is None, the corresponding entries in the output list will be `None`. Parameters **fill_value**scalar, optional The value to use for invalid entries. Default is None. Returns **result**list The Python list representation of the masked array. 
#### Examples ``` >>> x = np.ma.array([[1,2,3], [4,5,6], [7,8,9]], mask=[0] + [1,0]*4) >>> x.tolist() [[1, None, 3], [None, 5, None], [7, None, 9]] >>> x.tolist(-999) [[1, -999, 3], [-999, 5, -999], [7, -999, 9]] ``` © 2005–2022 NumPy Developers Licensed under the 3-clause BSD License. <https://numpy.org/doc/1.23/reference/generated/numpy.ma.MaskedArray.tolist.html>
numpy.ma.MaskedArray.torecords ============================== method ma.MaskedArray.torecords()[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/ma/core.py#L6169-L6226) Transforms a masked array into a flexible-type array. The flexible type array that is returned will have two fields: * the `_data` field stores the `_data` part of the array. * the `_mask` field stores the `_mask` part of the array. Parameters **None** Returns **record**ndarray A new flexible-type [`ndarray`](numpy.ndarray#numpy.ndarray "numpy.ndarray") with two fields: the first element containing a value, the second element containing the corresponding mask boolean. The returned record shape matches self.shape. #### Notes A side-effect of transforming a masked array into a flexible [`ndarray`](numpy.ndarray#numpy.ndarray "numpy.ndarray") is that meta information (`fill_value`, ...) will be lost. #### Examples ``` >>> x = np.ma.array([[1,2,3],[4,5,6],[7,8,9]], mask=[0] + [1,0]*4) >>> x masked_array( data=[[1, --, 3], [--, 5, --], [7, --, 9]], mask=[[False, True, False], [ True, False, True], [False, True, False]], fill_value=999999) >>> x.toflex() array([[(1, False), (2, True), (3, False)], [(4, True), (5, False), (6, True)], [(7, False), (8, True), (9, False)]], dtype=[('_data', '<i8'), ('_mask', '?')]) ``` © 2005–2022 NumPy Developers Licensed under the 3-clause BSD License. <https://numpy.org/doc/1.23/reference/generated/numpy.ma.MaskedArray.torecords.html>
numpy.ma.MaskedArray.tobytes ============================ method ma.MaskedArray.tobytes(*fill_value=None*, *order='C'*)[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/ma/core.py#L6114-L6152) Return the array data as a string containing the raw bytes in the array. The array is filled with a fill value before the string conversion. New in version 1.9.0. Parameters **fill_value**scalar, optional Value used to fill in the masked values. Default is None, in which case `MaskedArray.fill_value` is used. **order**{‘C’,’F’,’A’}, optional Order of the data item in the copy. Default is ‘C’. * ‘C’ – C order (row major). * ‘F’ – Fortran order (column major). * ‘A’ – Any, current order of array. * None – Same as ‘A’. See also [`numpy.ndarray.tobytes`](numpy.ndarray.tobytes#numpy.ndarray.tobytes "numpy.ndarray.tobytes") [`tolist`](numpy.ma.maskedarray.tolist#numpy.ma.MaskedArray.tolist "numpy.ma.MaskedArray.tolist"), [`tofile`](numpy.ma.maskedarray.tofile#numpy.ma.MaskedArray.tofile "numpy.ma.MaskedArray.tofile") #### Notes As for [`ndarray.tobytes`](numpy.ndarray.tobytes#numpy.ndarray.tobytes "numpy.ndarray.tobytes"), information about the shape, dtype, etc., but also about [`fill_value`](../maskedarray.baseclass#numpy.ma.MaskedArray.fill_value "numpy.ma.MaskedArray.fill_value"), will be lost.
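As a supplementary sketch (not part of the reference page), the effect of the `fill_value` argument is easiest to see with a one-byte dtype, where each element maps to exactly one output byte:

```python
import numpy as np

# int8 keeps the byte layout easy to read: one byte per element
x = np.ma.array([1, 2, 3], mask=[0, 1, 0], dtype=np.int8)

# the masked entry is replaced by the given fill value before conversion
print(x.tobytes(fill_value=0))  # b'\x01\x00\x03'
```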
#### Examples ``` >>> x = np.ma.array(np.array([[1, 2], [3, 4]]), mask=[[0, 1], [1, 0]]) >>> x.tobytes() b'\x01\x00\x00\x00\x00\x00\x00\x00?B\x0f\x00\x00\x00\x00\x00?B\x0f\x00\x00\x00\x00\x00\x04\x00\x00\x00\x00\x00\x00\x00' ``` © 2005–2022 NumPy Developers Licensed under the 3-clause BSD License. <https://numpy.org/doc/1.23/reference/generated/numpy.ma.MaskedArray.tobytes.htmlnumpy.ma.common_fill_value ============================ ma.common_fill_value(*a*, *b*)[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/ma/core.py#L548-L577) Return the common filling value of two masked arrays, if any. If `a.fill_value == b.fill_value`, return the fill value, otherwise return None. Parameters **a, b**MaskedArray The masked arrays for which to compare fill values. Returns **fill_value**scalar or None The common fill value, or None. #### Examples ``` >>> x = np.ma.array([0, 1.], fill_value=3) >>> y = np.ma.array([0, 1.], fill_value=3) >>> np.ma.common_fill_value(x, y) 3.0 ``` © 2005–2022 NumPy Developers Licensed under the 3-clause BSD License. <https://numpy.org/doc/1.23/reference/generated/numpy.ma.common_fill_value.htmlnumpy.ma.default_fill_value ============================= ma.default_fill_value(*obj*)[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/ma/core.py#L223-L275) Return the default fill value for the argument object. The default filling value depends on the datatype of the input array or the type of the input scalar: | datatype | default | | --- | --- | | bool | True | | int | 999999 | | float | 1.e20 | | complex | 1.e20+0j | | object | ‘?’ | | string | ‘N/A’ | For structured types, a structured scalar is returned, with each field the default fill value for its type. For subarray types, the fill value is an array of the same size containing the default scalar fill value. Parameters **obj**ndarray, dtype or scalar The array data-type or scalar for which the default fill value is returned. Returns **fill_value**scalar The default fill value. 
#### Examples ``` >>> np.ma.default_fill_value(1) 999999 >>> np.ma.default_fill_value(np.array([1.1, 2., np.pi])) 1e+20 >>> np.ma.default_fill_value(np.dtype(complex)) (1e+20+0j) ``` © 2005–2022 NumPy Developers Licensed under the 3-clause BSD License. <https://numpy.org/doc/1.23/reference/generated/numpy.ma.default_fill_value.html>
numpy.ma.maximum_fill_value ============================= ma.maximum_fill_value(*obj*)[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/ma/core.py#L343-L391) Return the minimum value that can be represented by the dtype of an object. This function is useful for calculating a fill value suitable for taking the maximum of an array with a given dtype. Parameters **obj**ndarray, dtype or scalar An object that can be queried for its numeric type. Returns **val**scalar The minimum representable value. Raises TypeError If `obj` isn’t a suitable numeric type. See also [`minimum_fill_value`](numpy.ma.minimum_fill_value#numpy.ma.minimum_fill_value "numpy.ma.minimum_fill_value") The inverse function. [`set_fill_value`](numpy.ma.set_fill_value#numpy.ma.set_fill_value "numpy.ma.set_fill_value") Set the filling value of a masked array. [`MaskedArray.fill_value`](../maskedarray.baseclass#numpy.ma.MaskedArray.fill_value "numpy.ma.MaskedArray.fill_value") Return current fill value. #### Examples ``` >>> import numpy.ma as ma >>> a = np.int8() >>> ma.maximum_fill_value(a) -128 >>> a = np.int32() >>> ma.maximum_fill_value(a) -2147483648 ``` An array of numeric data can also be passed. ``` >>> a = np.array([1, 2, 3], dtype=np.int8) >>> ma.maximum_fill_value(a) -128 >>> a = np.array([1, 2, 3], dtype=np.float32) >>> ma.maximum_fill_value(a) -inf ``` © 2005–2022 NumPy Developers Licensed under the 3-clause BSD License.
<https://numpy.org/doc/1.23/reference/generated/numpy.ma.maximum_fill_value.html>
numpy.ma.minimum_fill_value ============================= ma.minimum_fill_value(*obj*)[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/ma/core.py#L292-L340) Return the maximum value that can be represented by the dtype of an object. This function is useful for calculating a fill value suitable for taking the minimum of an array with a given dtype. Parameters **obj**ndarray, dtype or scalar An object that can be queried for its numeric type. Returns **val**scalar The maximum representable value. Raises TypeError If `obj` isn’t a suitable numeric type. See also [`maximum_fill_value`](numpy.ma.maximum_fill_value#numpy.ma.maximum_fill_value "numpy.ma.maximum_fill_value") The inverse function. [`set_fill_value`](numpy.ma.set_fill_value#numpy.ma.set_fill_value "numpy.ma.set_fill_value") Set the filling value of a masked array. [`MaskedArray.fill_value`](../maskedarray.baseclass#numpy.ma.MaskedArray.fill_value "numpy.ma.MaskedArray.fill_value") Return current fill value. #### Examples ``` >>> import numpy.ma as ma >>> a = np.int8() >>> ma.minimum_fill_value(a) 127 >>> a = np.int32() >>> ma.minimum_fill_value(a) 2147483647 ``` An array of numeric data can also be passed. ``` >>> a = np.array([1, 2, 3], dtype=np.int8) >>> ma.minimum_fill_value(a) 127 >>> a = np.array([1, 2, 3], dtype=np.float32) >>> ma.minimum_fill_value(a) inf ``` © 2005–2022 NumPy Developers Licensed under the 3-clause BSD License. <https://numpy.org/doc/1.23/reference/generated/numpy.ma.minimum_fill_value.html>
numpy.ma.set_fill_value ========================= ma.set_fill_value(*a*, *fill_value*)[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/ma/core.py#L470-L532) Set the filling value of `a`, if `a` is a masked array. This function changes the fill value of the masked array `a` in place. If `a` is not a masked array, the function returns silently, without doing anything.
Parameters **a**array_like Input array. **fill_value**dtype Filling value. A consistency test is performed to make sure the value is compatible with the dtype of `a`. Returns None Nothing is returned by this function. See also [`maximum_fill_value`](numpy.ma.maximum_fill_value#numpy.ma.maximum_fill_value "numpy.ma.maximum_fill_value") Return the minimum value that can be represented by a dtype. [`MaskedArray.fill_value`](../maskedarray.baseclass#numpy.ma.MaskedArray.fill_value "numpy.ma.MaskedArray.fill_value") Return current fill value. [`MaskedArray.set_fill_value`](numpy.ma.maskedarray.set_fill_value#numpy.ma.MaskedArray.set_fill_value "numpy.ma.MaskedArray.set_fill_value") Equivalent method. #### Examples ``` >>> import numpy.ma as ma >>> a = np.arange(5) >>> a array([0, 1, 2, 3, 4]) >>> a = ma.masked_where(a < 3, a) >>> a masked_array(data=[--, --, --, 3, 4], mask=[ True, True, True, False, False], fill_value=999999) >>> ma.set_fill_value(a, -999) >>> a masked_array(data=[--, --, --, 3, 4], mask=[ True, True, True, False, False], fill_value=-999) ``` Nothing happens if `a` is not a masked array. ``` >>> a = list(range(5)) >>> a [0, 1, 2, 3, 4] >>> ma.set_fill_value(a, 100) >>> a [0, 1, 2, 3, 4] >>> a = np.arange(5) >>> a array([0, 1, 2, 3, 4]) >>> ma.set_fill_value(a, 100) >>> a array([0, 1, 2, 3, 4]) ``` © 2005–2022 NumPy Developers Licensed under the 3-clause BSD License. <https://numpy.org/doc/1.23/reference/generated/numpy.ma.set_fill_value.html>
numpy.ma.MaskedArray.get_fill_value ===================================== method ma.MaskedArray.get_fill_value()[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/ma/core.py#L3712-L3751) The filling value of the masked array is a scalar. When setting, None will set to a default based on the data type. #### Examples ``` >>> for dt in [np.int32, np.int64, np.float64, np.complex128]: ... np.ma.array([0, 1], dtype=dt).get_fill_value() ... 
999999 999999 1e+20 (1e+20+0j) ``` ``` >>> x = np.ma.array([0, 1.], fill_value=-np.inf) >>> x.fill_value -inf >>> x.fill_value = np.pi >>> x.fill_value 3.1415926535897931 # may vary ``` Reset to default: ``` >>> x.fill_value = None >>> x.fill_value 1e+20 ``` © 2005–2022 NumPy Developers Licensed under the 3-clause BSD License. <https://numpy.org/doc/1.23/reference/generated/numpy.ma.MaskedArray.get_fill_value.htmlnumpy.ma.MaskedArray.set_fill_value ===================================== method ma.MaskedArray.set_fill_value(*value=None*)[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/ma/core.py#L3753-L3770) © 2005–2022 NumPy Developers Licensed under the 3-clause BSD License. <https://numpy.org/doc/1.23/reference/generated/numpy.ma.MaskedArray.set_fill_value.htmlnumpy.ma.anom ============= ma.anom(*self*, *axis=None*, *dtype=None*)*=<numpy.ma.core._frommethod object>* Compute the anomalies (deviations from the arithmetic mean) along the given axis. Returns an array of anomalies, with the same shape as the input and where the arithmetic mean is computed along the given axis. Parameters **axis**int, optional Axis over which the anomalies are taken. The default is to use the mean of the flattened array as reference. **dtype**dtype, optional Type to use in computing the variance. For arrays of integer type the default is float32; for arrays of float types it is the same as the array type. See also [`mean`](numpy.mean#numpy.mean "numpy.mean") Compute the mean of the array. #### Examples ``` >>> a = np.ma.array([1,2,3]) >>> a.anom() masked_array(data=[-1., 0., 1.], mask=False, fill_value=1e+20) ``` © 2005–2022 NumPy Developers Licensed under the 3-clause BSD License. <https://numpy.org/doc/1.23/reference/generated/numpy.ma.anom.htmlnumpy.ma.anomalies ================== ma.anomalies(*self*, *axis=None*, *dtype=None*)*=<numpy.ma.core._frommethod object>* Compute the anomalies (deviations from the arithmetic mean) along the given axis. 
Returns an array of anomalies, with the same shape as the input and where the arithmetic mean is computed along the given axis. Parameters **axis**int, optional Axis over which the anomalies are taken. The default is to use the mean of the flattened array as reference. **dtype**dtype, optional Type to use in computing the variance. For arrays of integer type the default is float32; for arrays of float types it is the same as the array type. See also [`mean`](numpy.mean#numpy.mean "numpy.mean") Compute the mean of the array. #### Examples ``` >>> a = np.ma.array([1,2,3]) >>> a.anom() masked_array(data=[-1., 0., 1.], mask=False, fill_value=1e+20) ``` © 2005–2022 NumPy Developers Licensed under the 3-clause BSD License. <https://numpy.org/doc/1.23/reference/generated/numpy.ma.anomalies.htmlnumpy.ma.average ================ ma.average(*a*, *axis=None*, *weights=None*, *returned=False*, ***, *keepdims=<no value>*)[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/ma/extras.py#L528-L657) Return the weighted average of array over the given axis. Parameters **a**array_like Data to be averaged. Masked entries are not taken into account in the computation. **axis**int, optional Axis along which to average `a`. If None, averaging is done over the flattened array. **weights**array_like, optional The importance that each element has in the computation of the average. The weights array can either be 1-D (in which case its length must be the size of `a` along the given axis) or of the same shape as `a`. If `weights=None`, then all data in `a` are assumed to have a weight equal to one. The 1-D calculation is: ``` avg = sum(a * weights) / sum(weights) ``` The only constraint on `weights` is that `sum(weights)` must not be 0. **returned**bool, optional Flag indicating whether a tuple `(result, sum of weights)` should be returned as output (True), or just the result (False). Default is False. 
**keepdims**bool, optional If this is set to True, the axes which are reduced are left in the result as dimensions with size one. With this option, the result will broadcast correctly against the original `a`. *Note:* `keepdims` will not work with instances of [`numpy.matrix`](numpy.matrix#numpy.matrix "numpy.matrix") or other classes whose methods do not support `keepdims`. New in version 1.23.0. Returns **average, [sum_of_weights]**(tuple of) scalar or MaskedArray The average along the specified axis. When returned is `True`, return a tuple with the average as the first element and the sum of the weights as the second element. The return type is `np.float64` if `a` is of integer type and floats smaller than [`float64`](../arrays.scalars#numpy.float64 "numpy.float64"), or the input data-type, otherwise. If returned, `sum_of_weights` is always [`float64`](../arrays.scalars#numpy.float64 "numpy.float64"). #### Examples ``` >>> a = np.ma.array([1., 2., 3., 4.], mask=[False, False, True, True]) >>> np.ma.average(a, weights=[3, 1, 0, 0]) 1.25 ``` ``` >>> x = np.ma.arange(6.).reshape(3, 2) >>> x masked_array( data=[[0., 1.], [2., 3.], [4., 5.]], mask=False, fill_value=1e+20) >>> avg, sumweights = np.ma.average(x, axis=0, weights=[1, 2, 3], ... returned=True) >>> avg masked_array(data=[2.6666666666666665, 3.6666666666666665], mask=[False, False], fill_value=1e+20) ``` With `keepdims=True`, the following result has shape (3, 1). ``` >>> np.ma.average(x, axis=1, keepdims=True) masked_array( data=[[0.5], [2.5], [4.5]], mask=False, fill_value=1e+20) ``` © 2005–2022 NumPy Developers Licensed under the 3-clause BSD License. <https://numpy.org/doc/1.23/reference/generated/numpy.ma.average.htmlnumpy.ma.conjugate ================== ma.conjugate(*x*, */*, *out=None*, ***, *where=True*, *casting='same_kind'*, *order='K'*, *dtype=None*, *subok=True*[, *signature*, *extobj*])*=<numpy.ma.core._MaskedUnaryOperation object>* Return the complex conjugate, element-wise. 
The complex conjugate of a complex number is obtained by changing the sign of its imaginary part. Parameters **x**array_like Input value. **out**ndarray, None, or tuple of ndarray and None, optional A location into which the result is stored. If provided, it must have a shape that the inputs broadcast to. If not provided or None, a freshly-allocated array is returned. A tuple (possible only as a keyword argument) must have length equal to the number of outputs. **where**array_like, optional This condition is broadcast over the input. At locations where the condition is True, the `out` array will be set to the ufunc result. Elsewhere, the `out` array will retain its original value. Note that if an uninitialized `out` array is created via the default `out=None`, locations within it where the condition is False will remain uninitialized. ****kwargs** For other keyword-only arguments, see the [ufunc docs](../ufuncs#ufuncs-kwargs). Returns **y**ndarray The complex conjugate of `x`, with same dtype as `y`. This is a scalar if `x` is a scalar. #### Notes [`conj`](numpy.conj#numpy.conj "numpy.conj") is an alias for [`conjugate`](numpy.conjugate#numpy.conjugate "numpy.conjugate"): ``` >>> np.conj is np.conjugate True ``` #### Examples ``` >>> np.conjugate(1+2j) (1-2j) ``` ``` >>> x = np.eye(2) + 1j * np.eye(2) >>> np.conjugate(x) array([[ 1.-1.j, 0.-0.j], [ 0.-0.j, 1.-1.j]]) ``` © 2005–2022 NumPy Developers Licensed under the 3-clause BSD License. <https://numpy.org/doc/1.23/reference/generated/numpy.ma.conjugate.htmlnumpy.ma.corrcoef ================= ma.corrcoef(*x*, *y=None*, *rowvar=True*, *bias=<no value>*, *allow_masked=True*, *ddof=<no value>*)[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/ma/extras.py#L1407-L1491) Return Pearson product-moment correlation coefficients. Except for the handling of missing data this function does the same as [`numpy.corrcoef`](numpy.corrcoef#numpy.corrcoef "numpy.corrcoef"). 
For more details and examples, see [`numpy.corrcoef`](numpy.corrcoef#numpy.corrcoef "numpy.corrcoef"). Parameters **x**array_like A 1-D or 2-D array containing multiple variables and observations. Each row of `x` represents a variable, and each column a single observation of all those variables. Also see `rowvar` below. **y**array_like, optional An additional set of variables and observations. `y` has the same shape as `x`. **rowvar**bool, optional If `rowvar` is True (default), then each row represents a variable, with observations in the columns. Otherwise, the relationship is transposed: each column represents a variable, while the rows contain observations. **bias**_NoValue, optional Has no effect, do not use. Deprecated since version 1.10.0. **allow_masked**bool, optional If True, masked values are propagated pair-wise: if a value is masked in `x`, the corresponding value is masked in `y`. If False, raises an exception. Because `bias` is deprecated, this argument needs to be treated as keyword only to avoid a warning. **ddof**_NoValue, optional Has no effect, do not use. Deprecated since version 1.10.0. See also [`numpy.corrcoef`](numpy.corrcoef#numpy.corrcoef "numpy.corrcoef") Equivalent function in top-level NumPy module. [`cov`](numpy.cov#numpy.cov "numpy.cov") Estimate the covariance matrix. #### Notes This function accepts but discards arguments `bias` and `ddof`. This is for backwards compatibility with previous versions of this function. These arguments had no effect on the return values of the function and can be safely ignored in this and previous versions of numpy. © 2005–2022 NumPy Developers Licensed under the 3-clause BSD License. <https://numpy.org/doc/1.23/reference/generated/numpy.ma.corrcoef.htmlnumpy.ma.cov ============ ma.cov(*x*, *y=None*, *rowvar=True*, *bias=False*, *allow_masked=True*, *ddof=None*)[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/ma/extras.py#L1334-L1404) Estimate the covariance matrix. 
Except for the handling of missing data this function does the same as [`numpy.cov`](numpy.cov#numpy.cov "numpy.cov"). For more details and examples, see [`numpy.cov`](numpy.cov#numpy.cov "numpy.cov"). By default, masked values are recognized as such. If `x` and `y` have the same shape, a common mask is allocated: if `x[i,j]` is masked, then `y[i,j]` will also be masked. Setting `allow_masked` to False will raise an exception if values are missing in either of the input arrays. Parameters **x**array_like A 1-D or 2-D array containing multiple variables and observations. Each row of `x` represents a variable, and each column a single observation of all those variables. Also see `rowvar` below. **y**array_like, optional An additional set of variables and observations. `y` has the same shape as `x`. **rowvar**bool, optional If `rowvar` is True (default), then each row represents a variable, with observations in the columns. Otherwise, the relationship is transposed: each column represents a variable, while the rows contain observations. **bias**bool, optional Default normalization (False) is by `(N-1)`, where `N` is the number of observations given (unbiased estimate). If `bias` is True, then normalization is by `N`. This keyword can be overridden by the keyword `ddof` in numpy versions >= 1.5. **allow_masked**bool, optional If True, masked values are propagated pair-wise: if a value is masked in `x`, the corresponding value is masked in `y`. If False, raises a `ValueError` exception when some values are missing. **ddof**{None, int}, optional If not `None` normalization is by `(N - ddof)`, where `N` is the number of observations; this overrides the value implied by `bias`. The default value is `None`. New in version 1.5. Raises ValueError Raised if some values are missing and `allow_masked` is False. See also [`numpy.cov`](numpy.cov#numpy.cov "numpy.cov") © 2005–2022 NumPy Developers Licensed under the 3-clause BSD License. 
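Since `ma.cov` defers its examples to `numpy.cov`, a minimal sketch of the pair-wise mask propagation described above may help (illustrative values only):

```python
import numpy.ma as ma

x = ma.array([1.0, 2.0, 3.0, 4.0], mask=[False, False, False, True])
y = ma.array([1.0, 2.0, 3.0, 9.0])
# With allow_masked=True (the default) a common mask is allocated:
# the masked x[3] also masks y[3], so only the first three pairs count.
c = ma.cov(x, y)
print(float(c[0, 0]))  # variance of [1, 2, 3] with (N-1) normalization: 1.0
```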
<https://numpy.org/doc/1.23/reference/generated/numpy.ma.cov.htmlnumpy.ma.cumsum =============== ma.cumsum(*self*, *axis=None*, *dtype=None*, *out=None*)*=<numpy.ma.core._frommethod object>* Return the cumulative sum of the array elements over the given axis. Masked values are set to 0 internally during the computation. However, their position is saved, and the result will be masked at the same locations. Refer to [`numpy.cumsum`](numpy.cumsum#numpy.cumsum "numpy.cumsum") for full documentation. See also [`numpy.ndarray.cumsum`](numpy.ndarray.cumsum#numpy.ndarray.cumsum "numpy.ndarray.cumsum") corresponding function for ndarrays [`numpy.cumsum`](numpy.cumsum#numpy.cumsum "numpy.cumsum") equivalent function #### Notes The mask is lost if `out` is not a valid [`ma.MaskedArray`](../maskedarray.baseclass#numpy.ma.MaskedArray "numpy.ma.MaskedArray") ! Arithmetic is modular when using integer types, and no error is raised on overflow. #### Examples ``` >>> marr = np.ma.array(np.arange(10), mask=[0,0,0,1,1,1,0,0,0,0]) >>> marr.cumsum() masked_array(data=[0, 1, 3, --, --, --, 9, 16, 24, 33], mask=[False, False, False, True, True, True, False, False, False, False], fill_value=999999) ``` © 2005–2022 NumPy Developers Licensed under the 3-clause BSD License. <https://numpy.org/doc/1.23/reference/generated/numpy.ma.cumsum.htmlnumpy.ma.cumprod ================ ma.cumprod(*self*, *axis=None*, *dtype=None*, *out=None*)*=<numpy.ma.core._frommethod object>* Return the cumulative product of the array elements over the given axis. Masked values are set to 1 internally during the computation. However, their position is saved, and the result will be masked at the same locations. Refer to [`numpy.cumprod`](numpy.cumprod#numpy.cumprod "numpy.cumprod") for full documentation. 
See also [`numpy.ndarray.cumprod`](numpy.ndarray.cumprod#numpy.ndarray.cumprod "numpy.ndarray.cumprod") corresponding function for ndarrays [`numpy.cumprod`](numpy.cumprod#numpy.cumprod "numpy.cumprod") equivalent function #### Notes The mask is lost if `out` is not a valid MaskedArray ! Arithmetic is modular when using integer types, and no error is raised on overflow. © 2005–2022 NumPy Developers Licensed under the 3-clause BSD License. <https://numpy.org/doc/1.23/reference/generated/numpy.ma.cumprod.htmlnumpy.ma.mean ============= ma.mean(*self*, *axis=None*, *dtype=None*, *out=None*, *keepdims=<no value>*)*=<numpy.ma.core._frommethod object>* Returns the average of the array elements along given axis. Masked entries are ignored, and result elements which are not finite will be masked. Refer to [`numpy.mean`](numpy.mean#numpy.mean "numpy.mean") for full documentation. See also [`numpy.ndarray.mean`](numpy.ndarray.mean#numpy.ndarray.mean "numpy.ndarray.mean") corresponding function for ndarrays [`numpy.mean`](numpy.mean#numpy.mean "numpy.mean") Equivalent function [`numpy.ma.average`](numpy.ma.average#numpy.ma.average "numpy.ma.average") Weighted average. #### Examples ``` >>> a = np.ma.array([1,2,3], mask=[False, False, True]) >>> a masked_array(data=[1, 2, --], mask=[False, False, True], fill_value=999999) >>> a.mean() 1.5 ``` © 2005–2022 NumPy Developers Licensed under the 3-clause BSD License. <https://numpy.org/doc/1.23/reference/generated/numpy.ma.mean.htmlnumpy.ma.median =============== ma.median(*a*, *axis=None*, *out=None*, *overwrite_input=False*, *keepdims=False*)[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/ma/extras.py#L660-L740) Compute the median along the specified axis. Returns the median of the array elements. Parameters **a**array_like Input array or object that can be converted to an array. **axis**int, optional Axis along which the medians are computed. 
The default (None) is to compute the median along a flattened version of the array. **out**ndarray, optional Alternative output array in which to place the result. It must have the same shape and buffer length as the expected output but the type will be cast if necessary. **overwrite_input**bool, optional If True, then allow use of memory of input array (a) for calculations. The input array will be modified by the call to median. This will save memory when you do not need to preserve the contents of the input array. Treat the input as undefined, but it will probably be fully or partially sorted. Default is False. Note that, if `overwrite_input` is True, and the input is not already an [`ndarray`](numpy.ndarray#numpy.ndarray "numpy.ndarray"), an error will be raised. **keepdims**bool, optional If this is set to True, the axes which are reduced are left in the result as dimensions with size one. With this option, the result will broadcast correctly against the input array. New in version 1.10.0. Returns **median**ndarray A new array holding the result is returned unless out is specified, in which case a reference to out is returned. Return data-type is [`float64`](../arrays.scalars#numpy.float64 "numpy.float64") for integers and floats smaller than [`float64`](../arrays.scalars#numpy.float64 "numpy.float64"), or the input data-type, otherwise. See also [`mean`](numpy.mean#numpy.mean "numpy.mean") #### Notes Given a vector `V` with `N` non masked values, the median of `V` is the middle value of a sorted copy of `V` (`Vs`) - i.e. `Vs[(N-1)/2]`, when `N` is odd, or `{Vs[N/2 - 1] + Vs[N/2]}/2` when `N` is even. 
#### Examples ``` >>> x = np.ma.array(np.arange(8), mask=[0]*4 + [1]*4) >>> np.ma.median(x) 1.5 ``` ``` >>> x = np.ma.array(np.arange(10).reshape(2, 5), mask=[0]*6 + [1]*4) >>> np.ma.median(x) 2.5 >>> np.ma.median(x, axis=-1, overwrite_input=True) masked_array(data=[2.0, 5.0], mask=[False, False], fill_value=1e+20) ``` © 2005–2022 NumPy Developers Licensed under the 3-clause BSD License. <https://numpy.org/doc/1.23/reference/generated/numpy.ma.median.htmlnumpy.ma.power ============== ma.power(*a*, *b*, *third=None*)[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/ma/core.py#L6867-L6916) Returns element-wise base array raised to power from second array. This is the masked array version of [`numpy.power`](numpy.power#numpy.power "numpy.power"). For details see [`numpy.power`](numpy.power#numpy.power "numpy.power"). See also [`numpy.power`](numpy.power#numpy.power "numpy.power") #### Notes The *out* argument to [`numpy.power`](numpy.power#numpy.power "numpy.power") is not supported, `third` has to be None. © 2005–2022 NumPy Developers Licensed under the 3-clause BSD License. <https://numpy.org/doc/1.23/reference/generated/numpy.ma.power.htmlnumpy.ma.prod ============= ma.prod(*self*, *axis=None*, *dtype=None*, *out=None*, *keepdims=<no value>*)*=<numpy.ma.core._frommethod object>* Return the product of the array elements over the given axis. Masked elements are set to 1 internally for computation. Refer to [`numpy.prod`](numpy.prod#numpy.prod "numpy.prod") for full documentation. See also [`numpy.ndarray.prod`](numpy.ndarray.prod#numpy.ndarray.prod "numpy.ndarray.prod") corresponding function for ndarrays [`numpy.prod`](numpy.prod#numpy.prod "numpy.prod") equivalent function #### Notes Arithmetic is modular when using integer types, and no error is raised on overflow. © 2005–2022 NumPy Developers Licensed under the 3-clause BSD License. 
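The `ma.prod` entry above ships without an Examples section; a minimal sketch of the "masked elements are set to 1 internally" behavior:

```python
import numpy.ma as ma

a = ma.array([2, 3, 4], mask=[False, True, False])
# The masked 3 is treated as 1 during the computation,
# so it does not contribute to the product.
print(int(ma.prod(a)))  # 2 * 4 = 8
```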
<https://numpy.org/doc/1.23/reference/generated/numpy.ma.prod.html>
numpy.ma.std ============ ma.std(*self*, *axis=None*, *dtype=None*, *out=None*, *ddof=0*, *keepdims=<no value>*)*=<numpy.ma.core._frommethod object>* Returns the standard deviation of the array elements along the given axis. Masked entries are ignored. Refer to [`numpy.std`](numpy.std#numpy.std "numpy.std") for full documentation. See also [`numpy.ndarray.std`](numpy.ndarray.std#numpy.ndarray.std "numpy.ndarray.std") corresponding function for ndarrays [`numpy.std`](numpy.std#numpy.std "numpy.std") Equivalent function © 2005–2022 NumPy Developers Licensed under the 3-clause BSD License. <https://numpy.org/doc/1.23/reference/generated/numpy.ma.std.html>
numpy.ma.sum ============ ma.sum(*self*, *axis=None*, *dtype=None*, *out=None*, *keepdims=<no value>*)*=<numpy.ma.core._frommethod object>* Return the sum of the array elements over the given axis. Masked elements are set to 0 internally. Refer to [`numpy.sum`](numpy.sum#numpy.sum "numpy.sum") for full documentation. See also [`numpy.ndarray.sum`](numpy.ndarray.sum#numpy.ndarray.sum "numpy.ndarray.sum") corresponding function for ndarrays [`numpy.sum`](numpy.sum#numpy.sum "numpy.sum") equivalent function #### Examples ``` >>> x = np.ma.array([[1,2,3],[4,5,6],[7,8,9]], mask=[0] + [1,0]*4) >>> x masked_array( data=[[1, --, 3], [--, 5, --], [7, --, 9]], mask=[[False, True, False], [ True, False, True], [False, True, False]], fill_value=999999) >>> x.sum() 25 >>> x.sum(axis=1) masked_array(data=[4, 5, 16], mask=[False, False, False], fill_value=999999) >>> x.sum(axis=0) masked_array(data=[8, 5, 12], mask=[False, False, False], fill_value=999999) >>> print(type(x.sum(axis=0, dtype=np.int64)[0])) <class 'numpy.int64'> ``` © 2005–2022 NumPy Developers Licensed under the 3-clause BSD License.
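The `ma.std` entry above likewise has no Examples section; a minimal sketch showing that masked entries are ignored (with the default `ddof=0`):

```python
import numpy.ma as ma

a = ma.array([1.0, 2.0, 3.0, 100.0], mask=[False, False, False, True])
# The masked 100.0 is ignored: std is computed over [1, 2, 3].
print(float(ma.std(a)))  # sqrt(2/3), about 0.8165
```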
<https://numpy.org/doc/1.23/reference/generated/numpy.ma.sum.htmlnumpy.ma.var ============ ma.var(*self*, *axis=None*, *dtype=None*, *out=None*, *ddof=0*, *keepdims=<no value>*)*=<numpy.ma.core._frommethod object>* Compute the variance along the specified axis. Returns the variance of the array elements, a measure of the spread of a distribution. The variance is computed for the flattened array by default, otherwise over the specified axis. Parameters **a**array_like Array containing numbers whose variance is desired. If `a` is not an array, a conversion is attempted. **axis**None or int or tuple of ints, optional Axis or axes along which the variance is computed. The default is to compute the variance of the flattened array. New in version 1.7.0. If this is a tuple of ints, a variance is performed over multiple axes, instead of a single axis or all the axes as before. **dtype**data-type, optional Type to use in computing the variance. For arrays of integer type the default is [`float64`](../arrays.scalars#numpy.float64 "numpy.float64"); for arrays of float types it is the same as the array type. **out**ndarray, optional Alternate output array in which to place the result. It must have the same shape as the expected output, but the type is cast if necessary. **ddof**int, optional “Delta Degrees of Freedom”: the divisor used in the calculation is `N - ddof`, where `N` represents the number of elements. By default `ddof` is zero. **keepdims**bool, optional If this is set to True, the axes which are reduced are left in the result as dimensions with size one. With this option, the result will broadcast correctly against the input array. If the default value is passed, then `keepdims` will not be passed through to the [`var`](numpy.var#numpy.var "numpy.var") method of sub-classes of [`ndarray`](numpy.ndarray#numpy.ndarray "numpy.ndarray"), however any non-default value will be. If the sub-class’ method does not implement `keepdims` any exceptions will be raised. 
**where**array_like of bool, optional Elements to include in the variance. See [`reduce`](numpy.ufunc.reduce#numpy.ufunc.reduce "numpy.ufunc.reduce") for details. New in version 1.20.0. Returns **variance**ndarray, see dtype parameter above If `out=None`, returns a new array containing the variance; otherwise, a reference to the output array is returned. See also [`std`](numpy.std#numpy.std "numpy.std"), [`mean`](numpy.mean#numpy.mean "numpy.mean"), [`nanmean`](numpy.nanmean#numpy.nanmean "numpy.nanmean"), [`nanstd`](numpy.nanstd#numpy.nanstd "numpy.nanstd"), [`nanvar`](numpy.nanvar#numpy.nanvar "numpy.nanvar") [Output type determination](../../user/basics.ufuncs#ufuncs-output-type) #### Notes The variance is the average of the squared deviations from the mean, i.e., `var = mean(x)`, where `x = abs(a - a.mean())**2`. The mean is typically calculated as `x.sum() / N`, where `N = len(x)`. If, however, `ddof` is specified, the divisor `N - ddof` is used instead. In standard statistical practice, `ddof=1` provides an unbiased estimator of the variance of a hypothetical infinite population. `ddof=0` provides a maximum likelihood estimate of the variance for normally distributed variables. Note that for complex numbers, the absolute value is taken before squaring, so that the result is always real and nonnegative. For floating-point input, the variance is computed using the same precision the input has. Depending on the input data, this can cause the results to be inaccurate, especially for [`float32`](../arrays.scalars#numpy.float32 "numpy.float32") (see example below). Specifying a higher-accuracy accumulator using the `dtype` keyword can alleviate this issue. 
#### Examples ``` >>> a = np.array([[1, 2], [3, 4]]) >>> np.var(a) 1.25 >>> np.var(a, axis=0) array([1., 1.]) >>> np.var(a, axis=1) array([0.25, 0.25]) ``` In single precision, var() can be inaccurate: ``` >>> a = np.zeros((2, 512*512), dtype=np.float32) >>> a[0, :] = 1.0 >>> a[1, :] = 0.1 >>> np.var(a) 0.20250003 ``` Computing the variance in float64 is more accurate: ``` >>> np.var(a, dtype=np.float64) 0.20249999932944759 # may vary >>> ((1-0.55)**2 + (0.1-0.55)**2)/2 0.2025 ``` Specifying a where argument: ``` >>> a = np.array([[14, 8, 11, 10], [7, 9, 10, 11], [10, 15, 5, 10]]) >>> np.var(a) 6.833333333333333 # may vary >>> np.var(a, where=[[True], [True], [False]]) 4.0 ``` © 2005–2022 NumPy Developers Licensed under the 3-clause BSD License. <https://numpy.org/doc/1.23/reference/generated/numpy.ma.var.htmlnumpy.ma.MaskedArray.anom ========================= method ma.MaskedArray.anom(*axis=None*, *dtype=None*)[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/ma/core.py#L5306-L5341) Compute the anomalies (deviations from the arithmetic mean) along the given axis. Returns an array of anomalies, with the same shape as the input and where the arithmetic mean is computed along the given axis. Parameters **axis**int, optional Axis over which the anomalies are taken. The default is to use the mean of the flattened array as reference. **dtype**dtype, optional Type to use in computing the variance. For arrays of integer type the default is float32; for arrays of float types it is the same as the array type. See also [`mean`](numpy.mean#numpy.mean "numpy.mean") Compute the mean of the array. #### Examples ``` >>> a = np.ma.array([1,2,3]) >>> a.anom() masked_array(data=[-1., 0., 1.], mask=False, fill_value=1e+20) ``` © 2005–2022 NumPy Developers Licensed under the 3-clause BSD License. 
<https://numpy.org/doc/1.23/reference/generated/numpy.ma.MaskedArray.anom.htmlnumpy.ma.MaskedArray.cumprod ============================ method ma.MaskedArray.cumprod(*axis=None*, *dtype=None*, *out=None*)[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/ma/core.py#L5228-L5257) Return the cumulative product of the array elements over the given axis. Masked values are set to 1 internally during the computation. However, their position is saved, and the result will be masked at the same locations. Refer to [`numpy.cumprod`](numpy.cumprod#numpy.cumprod "numpy.cumprod") for full documentation. See also [`numpy.ndarray.cumprod`](numpy.ndarray.cumprod#numpy.ndarray.cumprod "numpy.ndarray.cumprod") corresponding function for ndarrays [`numpy.cumprod`](numpy.cumprod#numpy.cumprod "numpy.cumprod") equivalent function #### Notes The mask is lost if `out` is not a valid MaskedArray ! Arithmetic is modular when using integer types, and no error is raised on overflow. © 2005–2022 NumPy Developers Licensed under the 3-clause BSD License. <https://numpy.org/doc/1.23/reference/generated/numpy.ma.MaskedArray.cumprod.htmlnumpy.ma.MaskedArray.cumsum =========================== method ma.MaskedArray.cumsum(*axis=None*, *dtype=None*, *out=None*)[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/ma/core.py#L5145-L5184) Return the cumulative sum of the array elements over the given axis. Masked values are set to 0 internally during the computation. However, their position is saved, and the result will be masked at the same locations. Refer to [`numpy.cumsum`](numpy.cumsum#numpy.cumsum "numpy.cumsum") for full documentation. 
See also [`numpy.ndarray.cumsum`](numpy.ndarray.cumsum#numpy.ndarray.cumsum "numpy.ndarray.cumsum") corresponding function for ndarrays [`numpy.cumsum`](numpy.cumsum#numpy.cumsum "numpy.cumsum") equivalent function #### Notes The mask is lost if `out` is not a valid [`ma.MaskedArray`](../maskedarray.baseclass#numpy.ma.MaskedArray "numpy.ma.MaskedArray") ! Arithmetic is modular when using integer types, and no error is raised on overflow. #### Examples ``` >>> marr = np.ma.array(np.arange(10), mask=[0,0,0,1,1,1,0,0,0,0]) >>> marr.cumsum() masked_array(data=[0, 1, 3, --, --, --, 9, 16, 24, 33], mask=[False, False, False, True, True, True, False, False, False, False], fill_value=999999) ``` © 2005–2022 NumPy Developers Licensed under the 3-clause BSD License. <https://numpy.org/doc/1.23/reference/generated/numpy.ma.MaskedArray.cumsum.htmlnumpy.ma.MaskedArray.mean ========================= method ma.MaskedArray.mean(*axis=None*, *dtype=None*, *out=None*, *keepdims=<no value>*)[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/ma/core.py#L5259-L5304) Returns the average of the array elements along given axis. Masked entries are ignored, and result elements which are not finite will be masked. Refer to [`numpy.mean`](numpy.mean#numpy.mean "numpy.mean") for full documentation. See also [`numpy.ndarray.mean`](numpy.ndarray.mean#numpy.ndarray.mean "numpy.ndarray.mean") corresponding function for ndarrays [`numpy.mean`](numpy.mean#numpy.mean "numpy.mean") Equivalent function [`numpy.ma.average`](numpy.ma.average#numpy.ma.average "numpy.ma.average") Weighted average. #### Examples ``` >>> a = np.ma.array([1,2,3], mask=[False, False, True]) >>> a masked_array(data=[1, 2, --], mask=[False, False, True], fill_value=999999) >>> a.mean() 1.5 ``` © 2005–2022 NumPy Developers Licensed under the 3-clause BSD License. 
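The `MaskedArray.cumprod` entry above has no example; a minimal sketch of how masked positions are treated as 1 during the running product yet stay masked in the result:

```python
import numpy.ma as ma

marr = ma.array([1, 2, 3, 4], mask=[False, True, False, False])
out = marr.cumprod()
# The masked 2 is treated as 1 internally; the result is masked there.
print(out)  # [1 -- 3 12]
```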
<https://numpy.org/doc/1.23/reference/generated/numpy.ma.MaskedArray.mean.htmlnumpy.ma.MaskedArray.prod ========================= method ma.MaskedArray.prod(*axis=None*, *dtype=None*, *out=None*, *keepdims=<no value>*)[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/ma/core.py#L5186-L5225) Return the product of the array elements over the given axis. Masked elements are set to 1 internally for computation. Refer to [`numpy.prod`](numpy.prod#numpy.prod "numpy.prod") for full documentation. See also [`numpy.ndarray.prod`](numpy.ndarray.prod#numpy.ndarray.prod "numpy.ndarray.prod") corresponding function for ndarrays [`numpy.prod`](numpy.prod#numpy.prod "numpy.prod") equivalent function #### Notes Arithmetic is modular when using integer types, and no error is raised on overflow. © 2005–2022 NumPy Developers Licensed under the 3-clause BSD License. <https://numpy.org/doc/1.23/reference/generated/numpy.ma.MaskedArray.prod.htmlnumpy.ma.MaskedArray.std ======================== method ma.MaskedArray.std(*axis=None*, *dtype=None*, *out=None*, *ddof=0*, *keepdims=<no value>*)[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/ma/core.py#L5407-L5429) Returns the standard deviation of the array elements along given axis. Masked entries are ignored. Refer to [`numpy.std`](numpy.std#numpy.std "numpy.std") for full documentation. See also [`numpy.ndarray.std`](numpy.ndarray.std#numpy.ndarray.std "numpy.ndarray.std") corresponding function for ndarrays [`numpy.std`](numpy.std#numpy.std "numpy.std") Equivalent function © 2005–2022 NumPy Developers Licensed under the 3-clause BSD License. <https://numpy.org/doc/1.23/reference/generated/numpy.ma.MaskedArray.std.htmlnumpy.ma.MaskedArray.sum ======================== method ma.MaskedArray.sum(*axis=None*, *dtype=None*, *out=None*, *keepdims=<no value>*)[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/ma/core.py#L5083-L5143) Return the sum of the array elements over the given axis. 
Masked elements are set to 0 internally. Refer to [`numpy.sum`](numpy.sum#numpy.sum "numpy.sum") for full documentation.

See also [`numpy.ndarray.sum`](numpy.ndarray.sum#numpy.ndarray.sum "numpy.ndarray.sum") corresponding function for ndarrays [`numpy.sum`](numpy.sum#numpy.sum "numpy.sum") equivalent function

#### Examples

```
>>> x = np.ma.array([[1,2,3],[4,5,6],[7,8,9]], mask=[0] + [1,0]*4)
>>> x
masked_array(
  data=[[1, --, 3],
        [--, 5, --],
        [7, --, 9]],
  mask=[[False, True, False],
        [ True, False, True],
        [False, True, False]],
  fill_value=999999)
>>> x.sum()
25
>>> x.sum(axis=1)
masked_array(data=[4, 5, 16],
             mask=[False, False, False],
       fill_value=999999)
>>> x.sum(axis=0)
masked_array(data=[8, 5, 12],
             mask=[False, False, False],
       fill_value=999999)
>>> print(type(x.sum(axis=0, dtype=np.int64)[0]))
<class 'numpy.int64'>
```

<https://numpy.org/doc/1.23/reference/generated/numpy.ma.MaskedArray.sum.html>

numpy.ma.MaskedArray.var
========================

method ma.MaskedArray.var(*axis=None*, *dtype=None*, *out=None*, *ddof=0*, *keepdims=<no value>*)[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/ma/core.py#L5343-L5404)

Compute the variance along the specified axis. Returns the variance of the array elements, a measure of the spread of a distribution. The variance is computed for the flattened array by default, otherwise over the specified axis.

Parameters

**a**array_like Array containing numbers whose variance is desired. If `a` is not an array, a conversion is attempted.

**axis**None or int or tuple of ints, optional Axis or axes along which the variance is computed. The default is to compute the variance of the flattened array. New in version 1.7.0. If this is a tuple of ints, a variance is performed over multiple axes, instead of a single axis or all the axes as before.

**dtype**data-type, optional Type to use in computing the variance.
For arrays of integer type the default is [`float64`](../arrays.scalars#numpy.float64 "numpy.float64"); for arrays of float types it is the same as the array type. **out**ndarray, optional Alternate output array in which to place the result. It must have the same shape as the expected output, but the type is cast if necessary. **ddof**int, optional “Delta Degrees of Freedom”: the divisor used in the calculation is `N - ddof`, where `N` represents the number of elements. By default `ddof` is zero. **keepdims**bool, optional If this is set to True, the axes which are reduced are left in the result as dimensions with size one. With this option, the result will broadcast correctly against the input array. If the default value is passed, then `keepdims` will not be passed through to the [`var`](numpy.var#numpy.var "numpy.var") method of sub-classes of [`ndarray`](numpy.ndarray#numpy.ndarray "numpy.ndarray"), however any non-default value will be. If the sub-class’ method does not implement `keepdims` any exceptions will be raised. **where**array_like of bool, optional Elements to include in the variance. See [`reduce`](numpy.ufunc.reduce#numpy.ufunc.reduce "numpy.ufunc.reduce") for details. New in version 1.20.0. Returns **variance**ndarray, see dtype parameter above If `out=None`, returns a new array containing the variance; otherwise, a reference to the output array is returned. See also [`std`](numpy.std#numpy.std "numpy.std"), [`mean`](numpy.mean#numpy.mean "numpy.mean"), [`nanmean`](numpy.nanmean#numpy.nanmean "numpy.nanmean"), [`nanstd`](numpy.nanstd#numpy.nanstd "numpy.nanstd"), [`nanvar`](numpy.nanvar#numpy.nanvar "numpy.nanvar") [Output type determination](../../user/basics.ufuncs#ufuncs-output-type) #### Notes The variance is the average of the squared deviations from the mean, i.e., `var = mean(x)`, where `x = abs(a - a.mean())**2`. The mean is typically calculated as `x.sum() / N`, where `N = len(x)`. 
If, however, `ddof` is specified, the divisor `N - ddof` is used instead. In standard statistical practice, `ddof=1` provides an unbiased estimator of the variance of a hypothetical infinite population. `ddof=0` provides a maximum likelihood estimate of the variance for normally distributed variables. Note that for complex numbers, the absolute value is taken before squaring, so that the result is always real and nonnegative. For floating-point input, the variance is computed using the same precision the input has. Depending on the input data, this can cause the results to be inaccurate, especially for [`float32`](../arrays.scalars#numpy.float32 "numpy.float32") (see example below). Specifying a higher-accuracy accumulator using the `dtype` keyword can alleviate this issue. #### Examples ``` >>> a = np.array([[1, 2], [3, 4]]) >>> np.var(a) 1.25 >>> np.var(a, axis=0) array([1., 1.]) >>> np.var(a, axis=1) array([0.25, 0.25]) ``` In single precision, var() can be inaccurate: ``` >>> a = np.zeros((2, 512*512), dtype=np.float32) >>> a[0, :] = 1.0 >>> a[1, :] = 0.1 >>> np.var(a) 0.20250003 ``` Computing the variance in float64 is more accurate: ``` >>> np.var(a, dtype=np.float64) 0.20249999932944759 # may vary >>> ((1-0.55)**2 + (0.1-0.55)**2)/2 0.2025 ``` Specifying a where argument: ``` >>> a = np.array([[14, 8, 11, 10], [7, 9, 10, 11], [10, 15, 5, 10]]) >>> np.var(a) 6.833333333333333 # may vary >>> np.var(a, where=[[True], [True], [False]]) 4.0 ``` © 2005–2022 NumPy Developers Licensed under the 3-clause BSD License. <https://numpy.org/doc/1.23/reference/generated/numpy.ma.MaskedArray.var.htmlnumpy.ma.argmax =============== ma.argmax(*self*, *axis=None*, *fill_value=None*, *out=None*)*=<numpy.ma.core._frommethod object>* Returns array of indices of the maximum values along the given axis. Masked values are treated as if they had the value fill_value. 
Parameters **axis**{None, integer} If None, the index is into the flattened array, otherwise along the specified axis **fill_value**scalar or None, optional Value used to fill in the masked values. If None, the output of maximum_fill_value(self._data) is used instead. **out**{None, array}, optional Array into which the result can be placed. Its type is preserved and it must be of the right shape to hold the output. Returns **index_array**{integer_array} #### Examples ``` >>> a = np.arange(6).reshape(2,3) >>> a.argmax() 5 >>> a.argmax(0) array([1, 1, 1]) >>> a.argmax(1) array([2, 2]) ``` © 2005–2022 NumPy Developers Licensed under the 3-clause BSD License. <https://numpy.org/doc/1.23/reference/generated/numpy.ma.argmax.htmlnumpy.ma.argmin =============== ma.argmin(*self*, *axis=None*, *fill_value=None*, *out=None*)*=<numpy.ma.core._frommethod object>* Return array of indices to the minimum values along the given axis. Parameters **axis**{None, integer} If None, the index is into the flattened array, otherwise along the specified axis **fill_value**scalar or None, optional Value used to fill in the masked values. If None, the output of minimum_fill_value(self._data) is used instead. **out**{None, array}, optional Array into which the result can be placed. Its type is preserved and it must be of the right shape to hold the output. Returns ndarray or scalar If multi-dimension input, returns a new ndarray of indices to the minimum values along the given axis. Otherwise, returns a scalar of index to the minimum values along the given axis. #### Examples ``` >>> x = np.ma.array(np.arange(4), mask=[1,1,0,0]) >>> x.shape = (2,2) >>> x masked_array( data=[[--, --], [2, 3]], mask=[[ True, True], [False, False]], fill_value=999999) >>> x.argmin(axis=0, fill_value=-1) array([0, 0]) >>> x.argmin(axis=0, fill_value=9) array([1, 1]) ``` © 2005–2022 NumPy Developers Licensed under the 3-clause BSD License. 
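To complement the argmax/argmin entries above, a short supplementary sketch (values here are illustrative) showing how `fill_value` decides whether a masked entry can win the comparison:

```python
import numpy as np

a = np.ma.array([3, 7, 1, 9], mask=[0, 0, 0, 1])
# By default the masked 9 is filled with a value that can never win,
# so the unmasked 7 at index 1 is the argmax.
i_default = a.argmax()                # 1
# An explicit large fill_value lets the masked slot win instead.
i_filled = a.argmax(fill_value=100)   # 3
```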
<https://numpy.org/doc/1.23/reference/generated/numpy.ma.argmin.html>

numpy.ma.max
============

ma.max(*obj*, *axis=None*, *out=None*, *fill_value=None*, *keepdims=<no value>*)[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/ma/core.py#L6760-L6769)

Return the maximum along a given axis.

Parameters

**axis**None or int or tuple of ints, optional Axis along which to operate. By default, `axis` is None and the flattened input is used. New in version 1.7.0. If this is a tuple of ints, the maximum is selected over multiple axes, instead of a single axis or all the axes as before.

**out**array_like, optional Alternative output array in which to place the result. Must be of the same shape and buffer length as the expected output.

**fill_value**scalar or None, optional Value used to fill in the masked values. If None, use the output of maximum_fill_value().

**keepdims**bool, optional If this is set to True, the axes which are reduced are left in the result as dimensions with size one. With this option, the result will broadcast correctly against the array.

Returns

**amax**array_like New array holding the result. If `out` was specified, `out` is returned.

See also [`ma.maximum_fill_value`](numpy.ma.maximum_fill_value#numpy.ma.maximum_fill_value "numpy.ma.maximum_fill_value") Returns the maximum filling value for a given datatype.

<https://numpy.org/doc/1.23/reference/generated/numpy.ma.max.html>

numpy.ma.min
============

ma.min(*obj*, *axis=None*, *out=None*, *fill_value=None*, *keepdims=<no value>*)[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/ma/core.py#L6748-L6757)

Return the minimum along a given axis.

Parameters

**axis**None or int or tuple of ints, optional Axis along which to operate. By default, `axis` is None and the flattened input is used. New in version 1.7.0. If this is a tuple of ints, the minimum is selected over multiple axes, instead of a single axis or all the axes as before.

**out**array_like, optional Alternative output array in which to place the result. Must be of the same shape and buffer length as the expected output.

**fill_value**scalar or None, optional Value used to fill in the masked values. If None, use the output of [`minimum_fill_value`](numpy.ma.minimum_fill_value#numpy.ma.minimum_fill_value "numpy.ma.minimum_fill_value").

**keepdims**bool, optional If this is set to True, the axes which are reduced are left in the result as dimensions with size one. With this option, the result will broadcast correctly against the array.

Returns

**amin**array_like New array holding the result. If `out` was specified, `out` is returned.

See also [`ma.minimum_fill_value`](numpy.ma.minimum_fill_value#numpy.ma.minimum_fill_value "numpy.ma.minimum_fill_value") Returns the minimum filling value for a given datatype.

<https://numpy.org/doc/1.23/reference/generated/numpy.ma.min.html>

numpy.ma.ptp
============

ma.ptp(*obj*, *axis=None*, *out=None*, *fill_value=None*, *keepdims=<no value>*)[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/ma/core.py#L6773-L6781)

Return (maximum - minimum) along the given dimension (i.e. peak-to-peak value).

Warning

[`ptp`](numpy.ptp#numpy.ptp "numpy.ptp") preserves the data type of the array. This means the return value for an input of signed integers with n bits (e.g. `np.int8`, `np.int16`, etc) is also a signed integer with n bits. In that case, peak-to-peak values greater than `2**(n-1)-1` will be returned as negative values. An example with a work-around is shown below.

Parameters

**axis**{None, int}, optional Axis along which to find the peaks. If None (default) the flattened array is used.

**out**{None, array_like}, optional Alternative output array in which to place the result.
It must have the same shape and buffer length as the expected output but the type will be cast if necessary. **fill_value**scalar or None, optional Value used to fill in the masked values. **keepdims**bool, optional If this is set to True, the axes which are reduced are left in the result as dimensions with size one. With this option, the result will broadcast correctly against the array. Returns **ptp**ndarray. A new array holding the result, unless `out` was specified, in which case a reference to `out` is returned. #### Examples ``` >>> x = np.ma.MaskedArray([[4, 9, 2, 10], ... [6, 9, 7, 12]]) ``` ``` >>> x.ptp(axis=1) masked_array(data=[8, 6], mask=False, fill_value=999999) ``` ``` >>> x.ptp(axis=0) masked_array(data=[2, 0, 5, 2], mask=False, fill_value=999999) ``` ``` >>> x.ptp() 10 ``` This example shows that a negative value can be returned when the input is an array of signed integers. ``` >>> y = np.ma.MaskedArray([[1, 127], ... [0, 127], ... [-1, 127], ... [-2, 127]], dtype=np.int8) >>> y.ptp(axis=1) masked_array(data=[ 126, 127, -128, -127], mask=False, fill_value=999999, dtype=int8) ``` A work-around is to use the `view()` method to view the result as unsigned integers with the same bit width: ``` >>> y.ptp(axis=1).view(np.uint8) masked_array(data=[126, 127, 128, 129], mask=False, fill_value=999999, dtype=uint8) ``` © 2005–2022 NumPy Developers Licensed under the 3-clause BSD License. <https://numpy.org/doc/1.23/reference/generated/numpy.ma.ptp.htmlnumpy.ma.diff ============= ma.diff(**args*, ***kwargs*)*=<numpy.ma.core._convert2ma object>* Calculate the n-th discrete difference along the given axis. The first difference is given by `out[i] = a[i+1] - a[i]` along the given axis, higher differences are calculated by using [`diff`](numpy.diff#numpy.diff "numpy.diff") recursively. Parameters **a**array_like Input array **n**int, optional The number of times values are differenced. If zero, the input is returned as-is. 
**axis**int, optional The axis along which the difference is taken, default is the last axis. **prepend, append**array_like, optional Values to prepend or append to `a` along axis prior to performing the difference. Scalar values are expanded to arrays with length 1 in the direction of axis and the shape of the input array in along all other axes. Otherwise the dimension and shape must match `a` except along axis. New in version 1.16.0. Returns **diff**MaskedArray The n-th differences. The shape of the output is the same as `a` except along `axis` where the dimension is smaller by `n`. The type of the output is the same as the type of the difference between any two elements of `a`. This is the same as the type of `a` in most cases. A notable exception is [`datetime64`](../arrays.scalars#numpy.datetime64 "numpy.datetime64"), which results in a [`timedelta64`](../arrays.scalars#numpy.timedelta64 "numpy.timedelta64") output array. See also [`gradient`](numpy.gradient#numpy.gradient "numpy.gradient"), [`ediff1d`](numpy.ediff1d#numpy.ediff1d "numpy.ediff1d"), [`cumsum`](numpy.cumsum#numpy.cumsum "numpy.cumsum") #### Notes Type is preserved for boolean arrays, so the result will contain `False` when consecutive elements are the same and `True` when they differ. For unsigned integer arrays, the results will also be unsigned. This should not be surprising, as the result is consistent with calculating the difference directly: ``` >>> u8_arr = np.array([1, 0], dtype=np.uint8) >>> np.diff(u8_arr) array([255], dtype=uint8) >>> u8_arr[1,...] - u8_arr[0,...] 
255 ``` If this is not desirable, then the array should be cast to a larger integer type first: ``` >>> i16_arr = u8_arr.astype(np.int16) >>> np.diff(i16_arr) array([-1], dtype=int16) ``` #### Examples ``` >>> x = np.array([1, 2, 4, 7, 0]) >>> np.diff(x) array([ 1, 2, 3, -7]) >>> np.diff(x, n=2) array([ 1, 1, -10]) ``` ``` >>> x = np.array([[1, 3, 6, 10], [0, 5, 6, 8]]) >>> np.diff(x) array([[2, 3, 4], [5, 1, 2]]) >>> np.diff(x, axis=0) array([[-1, 2, 0, -2]]) ``` ``` >>> x = np.arange('1066-10-13', '1066-10-16', dtype=np.datetime64) >>> np.diff(x) array([1, 1], dtype='timedelta64[D]') ``` © 2005–2022 NumPy Developers Licensed under the 3-clause BSD License. <https://numpy.org/doc/1.23/reference/generated/numpy.ma.diff.htmlnumpy.ma.MaskedArray.argmax =========================== method ma.MaskedArray.argmax(*axis=None*, *fill_value=None*, *out=None*, ***, *keepdims=<no value>*)[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/ma/core.py#L5583-L5620) Returns array of indices of the maximum values along the given axis. Masked values are treated as if they had the value fill_value. Parameters **axis**{None, integer} If None, the index is into the flattened array, otherwise along the specified axis **fill_value**scalar or None, optional Value used to fill in the masked values. If None, the output of maximum_fill_value(self._data) is used instead. **out**{None, array}, optional Array into which the result can be placed. Its type is preserved and it must be of the right shape to hold the output. Returns **index_array**{integer_array} #### Examples ``` >>> a = np.arange(6).reshape(2,3) >>> a.argmax() 5 >>> a.argmax(0) array([1, 1, 1]) >>> a.argmax(1) array([2, 2]) ``` © 2005–2022 NumPy Developers Licensed under the 3-clause BSD License. 
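Supplementing the `ma.diff` entry above (its examples use plain ndarrays): with a masked input, any difference that touches a masked element comes out masked, since the differences are formed by masked subtraction. A small sketch:

```python
import numpy as np

x = np.ma.array([1, 2, 4, 7, 0], mask=[0, 0, 1, 0, 0])
d = np.ma.diff(x)
# Differences involving the masked 4 (output positions 1 and 2)
# are masked: data -> [1, --, --, -7]
```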
<https://numpy.org/doc/1.23/reference/generated/numpy.ma.MaskedArray.argmax.html>

numpy.ma.MaskedArray.argmin
===========================

method ma.MaskedArray.argmin(*axis=None*, *fill_value=None*, *out=None*, ***, *keepdims=<no value>*)[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/ma/core.py#L5536-L5581)

Return array of indices to the minimum values along the given axis.

Parameters

**axis**{None, integer} If None, the index is into the flattened array, otherwise along the specified axis

**fill_value**scalar or None, optional Value used to fill in the masked values. If None, the output of minimum_fill_value(self._data) is used instead.

**out**{None, array}, optional Array into which the result can be placed. Its type is preserved and it must be of the right shape to hold the output.

Returns

ndarray or scalar If multi-dimension input, returns a new ndarray of indices to the minimum values along the given axis. Otherwise, returns a scalar of index to the minimum values along the given axis.

#### Examples

```
>>> x = np.ma.array(np.arange(4), mask=[1,1,0,0])
>>> x.shape = (2,2)
>>> x
masked_array(
  data=[[--, --],
        [2, 3]],
  mask=[[ True, True],
        [False, False]],
  fill_value=999999)
>>> x.argmin(axis=0, fill_value=-1)
array([0, 0])
>>> x.argmin(axis=0, fill_value=9)
array([1, 1])
```

<https://numpy.org/doc/1.23/reference/generated/numpy.ma.MaskedArray.argmin.html>

numpy.ma.MaskedArray.max
========================

method ma.MaskedArray.max(*axis=None*, *out=None*, *fill_value=None*, *keepdims=<no value>*)[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/ma/core.py#L5842-L5910)

Return the maximum along a given axis.

Parameters

**axis**None or int or tuple of ints, optional Axis along which to operate. By default, `axis` is None and the flattened input is used. New in version 1.7.0. If this is a tuple of ints, the maximum is selected over multiple axes, instead of a single axis or all the axes as before.

**out**array_like, optional Alternative output array in which to place the result. Must be of the same shape and buffer length as the expected output.

**fill_value**scalar or None, optional Value used to fill in the masked values. If None, use the output of maximum_fill_value().

**keepdims**bool, optional If this is set to True, the axes which are reduced are left in the result as dimensions with size one. With this option, the result will broadcast correctly against the array.

Returns

**amax**array_like New array holding the result. If `out` was specified, `out` is returned.

See also [`ma.maximum_fill_value`](numpy.ma.maximum_fill_value#numpy.ma.maximum_fill_value "numpy.ma.maximum_fill_value") Returns the maximum filling value for a given datatype.

<https://numpy.org/doc/1.23/reference/generated/numpy.ma.MaskedArray.max.html>

numpy.ma.MaskedArray.min
========================

method ma.MaskedArray.min(*axis=None*, *out=None*, *fill_value=None*, *keepdims=<no value>*)[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/ma/core.py#L5705-L5772)

Return the minimum along a given axis.

Parameters

**axis**None or int or tuple of ints, optional Axis along which to operate. By default, `axis` is None and the flattened input is used. New in version 1.7.0. If this is a tuple of ints, the minimum is selected over multiple axes, instead of a single axis or all the axes as before.

**out**array_like, optional Alternative output array in which to place the result. Must be of the same shape and buffer length as the expected output.

**fill_value**scalar or None, optional Value used to fill in the masked values. If None, use the output of `minimum_fill_value`.
**keepdims**bool, optional If this is set to True, the axes which are reduced are left in the result as dimensions with size one. With this option, the result will broadcast correctly against the array. Returns **amin**array_like New array holding the result. If `out` was specified, `out` is returned. See also [`ma.minimum_fill_value`](numpy.ma.minimum_fill_value#numpy.ma.minimum_fill_value "numpy.ma.minimum_fill_value") Returns the minimum filling value for a given datatype. © 2005–2022 NumPy Developers Licensed under the 3-clause BSD License. <https://numpy.org/doc/1.23/reference/generated/numpy.ma.MaskedArray.min.htmlnumpy.ma.MaskedArray.ptp ======================== method ma.MaskedArray.ptp(*axis=None*, *out=None*, *fill_value=None*, *keepdims=False*)[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/ma/core.py#L5912-L5998) Return (maximum - minimum) along the given dimension (i.e. peak-to-peak value). Warning [`ptp`](numpy.ptp#numpy.ptp "numpy.ptp") preserves the data type of the array. This means the return value for an input of signed integers with n bits (e.g. `np.int8`, `np.int16`, etc) is also a signed integer with n bits. In that case, peak-to-peak values greater than `2**(n-1)-1` will be returned as negative values. An example with a work-around is shown below. Parameters **axis**{None, int}, optional Axis along which to find the peaks. If None (default) the flattened array is used. **out**{None, array_like}, optional Alternative output array in which to place the result. It must have the same shape and buffer length as the expected output but the type will be cast if necessary. **fill_value**scalar or None, optional Value used to fill in the masked values. **keepdims**bool, optional If this is set to True, the axes which are reduced are left in the result as dimensions with size one. With this option, the result will broadcast correctly against the array. Returns **ptp**ndarray. 
A new array holding the result, unless `out` was specified, in which case a reference to `out` is returned. #### Examples ``` >>> x = np.ma.MaskedArray([[4, 9, 2, 10], ... [6, 9, 7, 12]]) ``` ``` >>> x.ptp(axis=1) masked_array(data=[8, 6], mask=False, fill_value=999999) ``` ``` >>> x.ptp(axis=0) masked_array(data=[2, 0, 5, 2], mask=False, fill_value=999999) ``` ``` >>> x.ptp() 10 ``` This example shows that a negative value can be returned when the input is an array of signed integers. ``` >>> y = np.ma.MaskedArray([[1, 127], ... [0, 127], ... [-1, 127], ... [-2, 127]], dtype=np.int8) >>> y.ptp(axis=1) masked_array(data=[ 126, 127, -128, -127], mask=False, fill_value=999999, dtype=int8) ``` A work-around is to use the `view()` method to view the result as unsigned integers with the same bit width: ``` >>> y.ptp(axis=1).view(np.uint8) masked_array(data=[126, 127, 128, 129], mask=False, fill_value=999999, dtype=uint8) ``` © 2005–2022 NumPy Developers Licensed under the 3-clause BSD License. <https://numpy.org/doc/1.23/reference/generated/numpy.ma.MaskedArray.ptp.htmlnumpy.ma.argsort ================ ma.argsort(*a*, *axis=<no value>*, *kind=None*, *order=None*, *endwith=True*, *fill_value=None*)[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/ma/core.py#L6921-L6933) Return an ndarray of indices that sort the array along the specified axis. Masked values are filled beforehand to `fill_value`. Parameters **axis**int, optional Axis along which to sort. If None, the default, the flattened array is used. Changed in version 1.13.0: Previously, the default was documented to be -1, but that was in error. At some future date, the default will change to -1, as originally intended. Until then, the axis should be given explicitly when `arr.ndim > 1`, to avoid a FutureWarning. **kind**{‘quicksort’, ‘mergesort’, ‘heapsort’, ‘stable’}, optional The sorting algorithm used. 
**order**list, optional When `a` is an array with fields defined, this argument specifies which fields to compare first, second, etc. Not all fields need be specified.

**endwith**{True, False}, optional Whether missing values (if any) should be treated as the largest values (True) or the smallest values (False) When the array contains unmasked values at the same extremes of the datatype, the ordering of these values and the masked values is undefined.

**fill_value**scalar or None, optional Value used internally for the masked values. If `fill_value` is not None, it supersedes `endwith`.

Returns

**index_array**ndarray, int Array of indices that sort `a` along the specified axis. In other words, `a[index_array]` yields a sorted `a`.

See also [`ma.MaskedArray.sort`](numpy.ma.maskedarray.sort#numpy.ma.MaskedArray.sort "numpy.ma.MaskedArray.sort") Describes sorting algorithms used. [`lexsort`](numpy.lexsort#numpy.lexsort "numpy.lexsort") Indirect stable sort with multiple keys. [`numpy.ndarray.sort`](numpy.ndarray.sort#numpy.ndarray.sort "numpy.ndarray.sort") Inplace sort.

#### Notes

See [`sort`](numpy.sort#numpy.sort "numpy.sort") for notes on the different sorting algorithms.

#### Examples

```
>>> a = np.ma.array([3,2,1], mask=[False, False, True])
>>> a
masked_array(data=[3, 2, --],
             mask=[False, False, True],
       fill_value=999999)
>>> a.argsort()
array([1, 0, 2])
```

<https://numpy.org/doc/1.23/reference/generated/numpy.ma.argsort.html>

numpy.ma.sort
=============

ma.sort(*a*, *axis=-1*, *kind=None*, *order=None*, *endwith=True*, *fill_value=None*)[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/ma/core.py#L6936-L6959)

Return a sorted copy of the masked array. Equivalent to creating a copy of the array and applying the MaskedArray `sort()` method.
Refer to `MaskedArray.sort` for the full documentation See also [`MaskedArray.sort`](numpy.ma.maskedarray.sort#numpy.ma.MaskedArray.sort "numpy.ma.MaskedArray.sort") equivalent method © 2005–2022 NumPy Developers Licensed under the 3-clause BSD License. <https://numpy.org/doc/1.23/reference/generated/numpy.ma.sort.htmlnumpy.ma.MaskedArray.argsort ============================ method ma.MaskedArray.argsort(*axis=<no value>*, *kind=None*, *order=None*, *endwith=True*, *fill_value=None*)[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/ma/core.py#L5456-L5534) Return an ndarray of indices that sort the array along the specified axis. Masked values are filled beforehand to [`fill_value`](../maskedarray.baseclass#numpy.ma.MaskedArray.fill_value "numpy.ma.MaskedArray.fill_value"). Parameters **axis**int, optional Axis along which to sort. If None, the default, the flattened array is used. Changed in version 1.13.0: Previously, the default was documented to be -1, but that was in error. At some future date, the default will change to -1, as originally intended. Until then, the axis should be given explicitly when `arr.ndim > 1`, to avoid a FutureWarning. **kind**{‘quicksort’, ‘mergesort’, ‘heapsort’, ‘stable’}, optional The sorting algorithm used. **order**list, optional When `a` is an array with fields defined, this argument specifies which fields to compare first, second, etc. Not all fields need be specified. **endwith**{True, False}, optional Whether missing values (if any) should be treated as the largest values (True) or the smallest values (False) When the array contains unmasked values at the same extremes of the datatype, the ordering of these values and the masked values is undefined. **fill_value**scalar or None, optional Value used internally for the masked values. If `fill_value` is not None, it supersedes `endwith`. Returns **index_array**ndarray, int Array of indices that sort `a` along the specified axis. 
In other words, `a[index_array]` yields a sorted `a`.

See also [`ma.MaskedArray.sort`](numpy.ma.maskedarray.sort#numpy.ma.MaskedArray.sort "numpy.ma.MaskedArray.sort") Describes sorting algorithms used. [`lexsort`](numpy.lexsort#numpy.lexsort "numpy.lexsort") Indirect stable sort with multiple keys. [`numpy.ndarray.sort`](numpy.ndarray.sort#numpy.ndarray.sort "numpy.ndarray.sort") Inplace sort.

#### Notes

See [`sort`](numpy.sort#numpy.sort "numpy.sort") for notes on the different sorting algorithms.

#### Examples

```
>>> a = np.ma.array([3,2,1], mask=[False, False, True])
>>> a
masked_array(data=[3, 2, --],
             mask=[False, False, True],
       fill_value=999999)
>>> a.argsort()
array([1, 0, 2])
```

<https://numpy.org/doc/1.23/reference/generated/numpy.ma.MaskedArray.argsort.html>

numpy.ma.MaskedArray.sort
=========================

method ma.MaskedArray.sort(*axis=-1*, *kind=None*, *order=None*, *endwith=True*, *fill_value=None*)[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/ma/core.py#L5622-L5703)

Sort the array, in-place.

Parameters

**a**array_like Array to be sorted.

**axis**int, optional Axis along which to sort. If None, the array is flattened before sorting. The default is -1, which sorts along the last axis.

**kind**{‘quicksort’, ‘mergesort’, ‘heapsort’, ‘stable’}, optional The sorting algorithm used.

**order**list, optional When `a` is a structured array, this argument specifies which fields to compare first, second, and so on. This list does not need to include all of the fields.

**endwith**{True, False}, optional Whether missing values (if any) should be treated as the largest values (True) or the smallest values (False) When the array contains unmasked values sorting at the same extremes of the datatype, the ordering of these values and the masked values is undefined.

**fill_value**scalar or None, optional Value used internally for the masked values.
If `fill_value` is not None, it supersedes `endwith`. Returns **sorted_array**ndarray Array of the same type and shape as `a`. See also [`numpy.ndarray.sort`](numpy.ndarray.sort#numpy.ndarray.sort "numpy.ndarray.sort") Method to sort an array in-place. [`argsort`](numpy.argsort#numpy.argsort "numpy.argsort") Indirect sort. [`lexsort`](numpy.lexsort#numpy.lexsort "numpy.lexsort") Indirect stable sort on multiple keys. [`searchsorted`](numpy.searchsorted#numpy.searchsorted "numpy.searchsorted") Find elements in a sorted array. #### Notes See `sort` for notes on the different sorting algorithms. #### Examples ``` >>> a = np.ma.array([1, 2, 5, 4, 3],mask=[0, 1, 0, 1, 0]) >>> # Default >>> a.sort() >>> a masked_array(data=[1, 3, 5, --, --], mask=[False, False, False, True, True], fill_value=999999) ``` ``` >>> a = np.ma.array([1, 2, 5, 4, 3],mask=[0, 1, 0, 1, 0]) >>> # Put missing values in the front >>> a.sort(endwith=False) >>> a masked_array(data=[--, --, 1, 3, 5], mask=[ True, True, False, False, False], fill_value=999999) ``` ``` >>> a = np.ma.array([1, 2, 5, 4, 3],mask=[0, 1, 0, 1, 0]) >>> # fill_value takes over endwith >>> a.sort(endwith=False, fill_value=3) >>> a masked_array(data=[1, --, --, 3, 5], mask=[False, True, True, False, False], fill_value=999999) ``` © 2005–2022 NumPy Developers Licensed under the 3-clause BSD License. <https://numpy.org/doc/1.23/reference/generated/numpy.ma.MaskedArray.sort.htmlnumpy.ma.diag ============= ma.diag(*v*, *k=0*)[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/ma/core.py#L7037-L7052) Extract a diagonal or construct a diagonal array. This function is the equivalent of [`numpy.diag`](numpy.diag#numpy.diag "numpy.diag") that takes masked values into account, see [`numpy.diag`](numpy.diag#numpy.diag "numpy.diag") for details. See also [`numpy.diag`](numpy.diag#numpy.diag "numpy.diag") Equivalent function for ndarrays. © 2005–2022 NumPy Developers Licensed under the 3-clause BSD License. 
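The `ma.diag` entry above carries no example; as a small supplementary sketch, the masked-aware behaviour is that masked entries stay masked on the extracted diagonal:

```python
import numpy as np

m = np.ma.array([[1, 2, 3],
                 [4, 5, 6],
                 [7, 8, 9]],
                mask=[[0, 0, 0],
                      [0, 1, 0],
                      [0, 0, 0]])
d = np.ma.diag(m)   # data [1, --, 9]; the masked 5 remains masked
```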
<https://numpy.org/doc/1.23/reference/generated/numpy.ma.diag.htmlnumpy.ma.dot ============ ma.dot(*a*, *b*, *strict=False*, *out=None*)[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/ma/core.py#L7597-L7676) Return the dot product of two arrays. This function is the equivalent of [`numpy.dot`](numpy.dot#numpy.dot "numpy.dot") that takes masked values into account. Note that `strict` and `out` are in different position than in the method version. In order to maintain compatibility with the corresponding method, it is recommended that the optional arguments be treated as keyword only. At some point that may be mandatory. Note Works only with 2-D arrays at the moment. Parameters **a, b**masked_array_like Inputs arrays. **strict**bool, optional Whether masked data are propagated (True) or set to 0 (False) for the computation. Default is False. Propagating the mask means that if a masked value appears in a row or column, the whole row or column is considered masked. **out**masked_array, optional Output argument. This must have the exact kind that would be returned if it was not used. In particular, it must have the right type, must be C-contiguous, and its dtype must be the dtype that would be returned for `dot(a,b)`. This is a performance feature. Therefore, if these conditions are not met, an exception is raised, instead of attempting to be flexible. New in version 1.10.2. See also [`numpy.dot`](numpy.dot#numpy.dot "numpy.dot") Equivalent function for ndarrays. #### Examples ``` >>> a = np.ma.array([[1, 2, 3], [4, 5, 6]], mask=[[1, 0, 0], [0, 0, 0]]) >>> b = np.ma.array([[1, 2], [3, 4], [5, 6]], mask=[[1, 0], [0, 0], [0, 0]]) >>> np.ma.dot(a, b) masked_array( data=[[21, 26], [45, 64]], mask=[[False, False], [False, False]], fill_value=999999) >>> np.ma.dot(a, b, strict=True) masked_array( data=[[--, --], [--, 64]], mask=[[ True, True], [ True, False]], fill_value=999999) ``` © 2005–2022 NumPy Developers Licensed under the 3-clause BSD License. 
<https://numpy.org/doc/1.23/reference/generated/numpy.ma.dot.htmlnumpy.ma.identity ================= ma.identity(*n*, *dtype=None*)*=<numpy.ma.core._convert2ma object>* Return the identity array. The identity array is a square array with ones on the main diagonal. Parameters **n**int Number of rows (and columns) in `n` x `n` output. **dtype**data-type, optional Data-type of the output. Defaults to `float`. **like**array_like, optional Reference object to allow the creation of arrays which are not NumPy arrays. If an array-like passed in as `like` supports the `__array_function__` protocol, the result will be defined by it. In this case, it ensures the creation of an array object compatible with that passed in via this argument. New in version 1.20.0. Returns **out**MaskedArray `n` x `n` array with its main diagonal set to one, and all other elements 0. #### Examples ``` >>> np.identity(3) array([[1., 0., 0.], [0., 1., 0.], [0., 0., 1.]]) ``` © 2005–2022 NumPy Developers Licensed under the 3-clause BSD License. <https://numpy.org/doc/1.23/reference/generated/numpy.ma.identity.htmlnumpy.ma.inner ============== ma.inner(*a*, *b*, */*)[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/ma/core.py#L7679-L7693) Inner product of two arrays. Ordinary inner product of vectors for 1-D arrays (without complex conjugation), in higher dimensions a sum product over the last axes. Parameters **a, b**array_like If `a` and `b` are nonscalar, their last dimensions must match. Returns **out**ndarray If `a` and `b` are both scalars or both 1-D arrays then a scalar is returned; otherwise an array is returned. `out.shape = (*a.shape[:-1], *b.shape[:-1])` Raises ValueError If both `a` and `b` are nonscalar and their last dimensions have different sizes. See also [`tensordot`](numpy.tensordot#numpy.tensordot "numpy.tensordot") Sum products over arbitrary axes. [`dot`](numpy.dot#numpy.dot "numpy.dot") Generalised matrix product, using second last dimension of `b`. 
[`einsum`](numpy.einsum#numpy.einsum "numpy.einsum") Einstein summation convention. #### Notes Masked values are replaced by 0. For vectors (1-D arrays) it computes the ordinary inner-product: ``` np.inner(a, b) = sum(a[:]*b[:]) ``` More generally, if `ndim(a) = r > 0` and `ndim(b) = s > 0`: ``` np.inner(a, b) = np.tensordot(a, b, axes=(-1,-1)) ``` or explicitly: ``` np.inner(a, b)[i0,...,ir-2,j0,...,js-2] = sum(a[i0,...,ir-2,:]*b[j0,...,js-2,:]) ``` In addition `a` or `b` may be scalars, in which case: ``` np.inner(a,b) = a*b ``` #### Examples Ordinary inner product for vectors: ``` >>> a = np.array([1,2,3]) >>> b = np.array([0,1,0]) >>> np.inner(a, b) 2 ``` Some multidimensional examples: ``` >>> a = np.arange(24).reshape((2,3,4)) >>> b = np.arange(4) >>> c = np.inner(a, b) >>> c.shape (2, 3) >>> c array([[ 14, 38, 62], [ 86, 110, 134]]) ``` ``` >>> a = np.arange(2).reshape((1,1,2)) >>> b = np.arange(6).reshape((3,2)) >>> c = np.inner(a, b) >>> c.shape (1, 1, 3) >>> c array([[[1, 3, 5]]]) ``` An example where `b` is a scalar: ``` >>> np.inner(np.eye(2), 7) array([[7., 0.], [0., 7.]]) ``` © 2005–2022 NumPy Developers Licensed under the 3-clause BSD License. <https://numpy.org/doc/1.23/reference/generated/numpy.ma.inner.htmlnumpy.ma.innerproduct ===================== ma.innerproduct(*a*, *b*, */*)[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/ma/core.py#L7679-L7693) Inner product of two arrays. Ordinary inner product of vectors for 1-D arrays (without complex conjugation), in higher dimensions a sum product over the last axes. Parameters **a, b**array_like If `a` and `b` are nonscalar, their last dimensions must match. Returns **out**ndarray If `a` and `b` are both scalars or both 1-D arrays then a scalar is returned; otherwise an array is returned. `out.shape = (*a.shape[:-1], *b.shape[:-1])` Raises ValueError If both `a` and `b` are nonscalar and their last dimensions have different sizes. 
See also [`tensordot`](numpy.tensordot#numpy.tensordot "numpy.tensordot") Sum products over arbitrary axes. [`dot`](numpy.dot#numpy.dot "numpy.dot") Generalised matrix product, using second last dimension of `b`. [`einsum`](numpy.einsum#numpy.einsum "numpy.einsum") Einstein summation convention. #### Notes Masked values are replaced by 0. For vectors (1-D arrays) it computes the ordinary inner-product: ``` np.inner(a, b) = sum(a[:]*b[:]) ``` More generally, if `ndim(a) = r > 0` and `ndim(b) = s > 0`: ``` np.inner(a, b) = np.tensordot(a, b, axes=(-1,-1)) ``` or explicitly: ``` np.inner(a, b)[i0,...,ir-2,j0,...,js-2] = sum(a[i0,...,ir-2,:]*b[j0,...,js-2,:]) ``` In addition `a` or `b` may be scalars, in which case: ``` np.inner(a,b) = a*b ``` #### Examples Ordinary inner product for vectors: ``` >>> a = np.array([1,2,3]) >>> b = np.array([0,1,0]) >>> np.inner(a, b) 2 ``` Some multidimensional examples: ``` >>> a = np.arange(24).reshape((2,3,4)) >>> b = np.arange(4) >>> c = np.inner(a, b) >>> c.shape (2, 3) >>> c array([[ 14, 38, 62], [ 86, 110, 134]]) ``` ``` >>> a = np.arange(2).reshape((1,1,2)) >>> b = np.arange(6).reshape((3,2)) >>> c = np.inner(a, b) >>> c.shape (1, 1, 3) >>> c array([[[1, 3, 5]]]) ``` An example where `b` is a scalar: ``` >>> np.inner(np.eye(2), 7) array([[7., 0.], [0., 7.]]) ``` © 2005–2022 NumPy Developers Licensed under the 3-clause BSD License. <https://numpy.org/doc/1.23/reference/generated/numpy.ma.innerproduct.htmlnumpy.ma.outer ============== ma.outer(*a*, *b*)[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/ma/core.py#L7699-L7711) Compute the outer product of two vectors. Given two vectors, `a = [a0, a1, ..., aM]` and `b = [b0, b1, ..., bN]`, the outer product [[1]](#r863504129d6e-1) is: ``` [[a0*b0 a0*b1 ... a0*bN ] [a1*b0 . [ ... . [aM*b0 aM*bN ]] ``` Parameters **a**(M,) array_like First input vector. Input is flattened if not already 1-dimensional. **b**(N,) array_like Second input vector. 
Input is flattened if not already 1-dimensional. **out**(M, N) ndarray, optional A location where the result is stored New in version 1.9.0. Returns **out**(M, N) ndarray `out[i, j] = a[i] * b[j]` See also [`inner`](numpy.inner#numpy.inner "numpy.inner") [`einsum`](numpy.einsum#numpy.einsum "numpy.einsum") `einsum('i,j->ij', a.ravel(), b.ravel())` is the equivalent. [`ufunc.outer`](numpy.ufunc.outer#numpy.ufunc.outer "numpy.ufunc.outer") A generalization to dimensions other than 1D and other operations. `np.multiply.outer(a.ravel(), b.ravel())` is the equivalent. [`tensordot`](numpy.tensordot#numpy.tensordot "numpy.tensordot") `np.tensordot(a.ravel(), b.ravel(), axes=((), ()))` is the equivalent. #### Notes Masked values are replaced by 0. #### References [1](#id1) : <NAME> and <NAME>, *Matrix Computations*, 3rd ed., Baltimore, MD, Johns Hopkins University Press, 1996, pg. 8. #### Examples Make a (*very* coarse) grid for computing a Mandelbrot set: ``` >>> rl = np.outer(np.ones((5,)), np.linspace(-2, 2, 5)) >>> rl array([[-2., -1., 0., 1., 2.], [-2., -1., 0., 1., 2.], [-2., -1., 0., 1., 2.], [-2., -1., 0., 1., 2.], [-2., -1., 0., 1., 2.]]) >>> im = np.outer(1j*np.linspace(2, -2, 5), np.ones((5,))) >>> im array([[0.+2.j, 0.+2.j, 0.+2.j, 0.+2.j, 0.+2.j], [0.+1.j, 0.+1.j, 0.+1.j, 0.+1.j, 0.+1.j], [0.+0.j, 0.+0.j, 0.+0.j, 0.+0.j, 0.+0.j], [0.-1.j, 0.-1.j, 0.-1.j, 0.-1.j, 0.-1.j], [0.-2.j, 0.-2.j, 0.-2.j, 0.-2.j, 0.-2.j]]) >>> grid = rl + im >>> grid array([[-2.+2.j, -1.+2.j, 0.+2.j, 1.+2.j, 2.+2.j], [-2.+1.j, -1.+1.j, 0.+1.j, 1.+1.j, 2.+1.j], [-2.+0.j, -1.+0.j, 0.+0.j, 1.+0.j, 2.+0.j], [-2.-1.j, -1.-1.j, 0.-1.j, 1.-1.j, 2.-1.j], [-2.-2.j, -1.-2.j, 0.-2.j, 1.-2.j, 2.-2.j]]) ``` An example using a “vector” of letters: ``` >>> x = np.array(['a', 'b', 'c'], dtype=object) >>> np.outer(x, [1, 2, 3]) array([['a', 'aa', 'aaa'], ['b', 'bb', 'bbb'], ['c', 'cc', 'ccc']], dtype=object) ``` © 2005–2022 NumPy Developers Licensed under the 3-clause BSD License. 
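The "masked values are replaced by 0" note above can be sketched with masked inputs as follows (this example is ours, not part of the upstream page):

```python
import numpy as np

a = np.ma.array([1, 2, 3], mask=[False, True, False])
b = np.array([1, 10])

# Internally the masked entry is filled with 0 before the outer product
# is computed, and the corresponding row of the result is masked.
res = np.ma.outer(a, b)
print(res)
```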
<https://numpy.org/doc/1.23/reference/generated/numpy.ma.outer.htmlnumpy.ma.outerproduct ===================== ma.outerproduct(*a*, *b*)[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/ma/core.py#L7699-L7711) Compute the outer product of two vectors. Given two vectors, `a = [a0, a1, ..., aM]` and `b = [b0, b1, ..., bN]`, the outer product [[1]](#rf0d57dd5badd-1) is: ``` [[a0*b0 a0*b1 ... a0*bN ] [a1*b0 . [ ... . [aM*b0 aM*bN ]] ``` Parameters **a**(M,) array_like First input vector. Input is flattened if not already 1-dimensional. **b**(N,) array_like Second input vector. Input is flattened if not already 1-dimensional. **out**(M, N) ndarray, optional A location where the result is stored New in version 1.9.0. Returns **out**(M, N) ndarray `out[i, j] = a[i] * b[j]` See also [`inner`](numpy.inner#numpy.inner "numpy.inner") [`einsum`](numpy.einsum#numpy.einsum "numpy.einsum") `einsum('i,j->ij', a.ravel(), b.ravel())` is the equivalent. [`ufunc.outer`](numpy.ufunc.outer#numpy.ufunc.outer "numpy.ufunc.outer") A generalization to dimensions other than 1D and other operations. `np.multiply.outer(a.ravel(), b.ravel())` is the equivalent. [`tensordot`](numpy.tensordot#numpy.tensordot "numpy.tensordot") `np.tensordot(a.ravel(), b.ravel(), axes=((), ()))` is the equivalent. #### Notes Masked values are replaced by 0. #### References [1](#id1) : <NAME> and <NAME>, *Matrix Computations*, 3rd ed., Baltimore, MD, Johns Hopkins University Press, 1996, pg. 8. 
#### Examples Make a (*very* coarse) grid for computing a Mandelbrot set: ``` >>> rl = np.outer(np.ones((5,)), np.linspace(-2, 2, 5)) >>> rl array([[-2., -1., 0., 1., 2.], [-2., -1., 0., 1., 2.], [-2., -1., 0., 1., 2.], [-2., -1., 0., 1., 2.], [-2., -1., 0., 1., 2.]]) >>> im = np.outer(1j*np.linspace(2, -2, 5), np.ones((5,))) >>> im array([[0.+2.j, 0.+2.j, 0.+2.j, 0.+2.j, 0.+2.j], [0.+1.j, 0.+1.j, 0.+1.j, 0.+1.j, 0.+1.j], [0.+0.j, 0.+0.j, 0.+0.j, 0.+0.j, 0.+0.j], [0.-1.j, 0.-1.j, 0.-1.j, 0.-1.j, 0.-1.j], [0.-2.j, 0.-2.j, 0.-2.j, 0.-2.j, 0.-2.j]]) >>> grid = rl + im >>> grid array([[-2.+2.j, -1.+2.j, 0.+2.j, 1.+2.j, 2.+2.j], [-2.+1.j, -1.+1.j, 0.+1.j, 1.+1.j, 2.+1.j], [-2.+0.j, -1.+0.j, 0.+0.j, 1.+0.j, 2.+0.j], [-2.-1.j, -1.-1.j, 0.-1.j, 1.-1.j, 2.-1.j], [-2.-2.j, -1.-2.j, 0.-2.j, 1.-2.j, 2.-2.j]]) ``` An example using a “vector” of letters: ``` >>> x = np.array(['a', 'b', 'c'], dtype=object) >>> np.outer(x, [1, 2, 3]) array([['a', 'aa', 'aaa'], ['b', 'bb', 'bbb'], ['c', 'cc', 'ccc']], dtype=object) ``` © 2005–2022 NumPy Developers Licensed under the 3-clause BSD License. <https://numpy.org/doc/1.23/reference/generated/numpy.ma.outerproduct.htmlnumpy.ma.trace ============== ma.trace(*self*, *offset=0*, *axis1=0*, *axis2=1*, *dtype=None*, *out=None) a.trace(offset=0*, *axis1=0*, *axis2=1*, *dtype=None*, *out=None*)*=<numpy.ma.core._frommethod object>* Return the sum along diagonals of the array. Refer to [`numpy.trace`](numpy.trace#numpy.trace "numpy.trace") for full documentation. See also [`numpy.trace`](numpy.trace#numpy.trace "numpy.trace") equivalent function © 2005–2022 NumPy Developers Licensed under the 3-clause BSD License. <https://numpy.org/doc/1.23/reference/generated/numpy.ma.trace.htmlnumpy.ma.MaskedArray.trace ========================== method ma.MaskedArray.trace(*offset=0*, *axis1=0*, *axis2=1*, *dtype=None*, *out=None*)[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/ma/core.py#L5029-L5041) Return the sum along diagonals of the array. 
Refer to [`numpy.trace`](numpy.trace#numpy.trace "numpy.trace") for full documentation. See also [`numpy.trace`](numpy.trace#numpy.trace "numpy.trace") equivalent function © 2005–2022 NumPy Developers Licensed under the 3-clause BSD License. <https://numpy.org/doc/1.23/reference/generated/numpy.ma.MaskedArray.trace.htmlnumpy.ma.vander =============== ma.vander(*x*, *n=None*)[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/ma/extras.py#L1975-L1984) Generate a Vandermonde matrix. The columns of the output matrix are powers of the input vector. The order of the powers is determined by the `increasing` boolean argument. Specifically, when `increasing` is False, the `i`-th output column is the input vector raised element-wise to the power of `N - i - 1`. Such a matrix with a geometric progression in each row is named for Alexandre-Théophile Vandermonde. Parameters **x**array_like 1-D input array. **N**int, optional Number of columns in the output. If `N` is not specified, a square array is returned (`N = len(x)`). **increasing**bool, optional Order of the powers of the columns. If True, the powers increase from left to right, if False (the default) they are reversed. New in version 1.9.0. Returns **out**ndarray Vandermonde matrix. If `increasing` is False, the first column is `x^(N-1)`, the second `x^(N-2)` and so forth. If `increasing` is True, the columns are `x^0, x^1, ..., x^(N-1)`. See also [`polynomial.polynomial.polyvander`](numpy.polynomial.polynomial.polyvander#numpy.polynomial.polynomial.polyvander "numpy.polynomial.polynomial.polyvander") #### Notes Masked values in the input array result in rows of zeros. 
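The note just above ("masked values … result in rows of zeros") can be sketched as follows; this small example is ours, not one of the upstream examples that follow:

```python
import numpy as np

x = np.ma.array([1, 2, 3], mask=[False, True, False])

# The row corresponding to the masked input value is set to zero;
# the other rows are the usual decreasing powers of the input.
v = np.ma.vander(x)
print(v)
```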
#### Examples ``` >>> x = np.array([1, 2, 3, 5]) >>> N = 3 >>> np.vander(x, N) array([[ 1, 1, 1], [ 4, 2, 1], [ 9, 3, 1], [25, 5, 1]]) ``` ``` >>> np.column_stack([x**(N-1-i) for i in range(N)]) array([[ 1, 1, 1], [ 4, 2, 1], [ 9, 3, 1], [25, 5, 1]]) ``` ``` >>> x = np.array([1, 2, 3, 5]) >>> np.vander(x) array([[ 1, 1, 1, 1], [ 8, 4, 2, 1], [ 27, 9, 3, 1], [125, 25, 5, 1]]) >>> np.vander(x, increasing=True) array([[ 1, 1, 1, 1], [ 1, 2, 4, 8], [ 1, 3, 9, 27], [ 1, 5, 25, 125]]) ``` The determinant of a square Vandermonde matrix is the product of the differences between the values of the input vector: ``` >>> np.linalg.det(np.vander(x)) 48.000000000000043 # may vary >>> (5-3)*(5-2)*(5-1)*(3-2)*(3-1)*(2-1) 48 ``` © 2005–2022 NumPy Developers Licensed under the 3-clause BSD License. <https://numpy.org/doc/1.23/reference/generated/numpy.ma.vander.htmlnumpy.ma.polyfit ================ ma.polyfit(*x*, *y*, *deg*, *rcond=None*, *full=False*, *w=None*, *cov=False*)[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/ma/extras.py#L1989-L2021) Least squares polynomial fit. Note This forms part of the old polynomial API. Since version 1.4, the new polynomial API defined in [`numpy.polynomial`](../routines.polynomials.package#module-numpy.polynomial "numpy.polynomial") is preferred. A summary of the differences can be found in the [transition guide](../routines.polynomials). Fit a polynomial `p(x) = p[0] * x**deg + ... + p[deg]` of degree `deg` to points `(x, y)`. Returns a vector of coefficients `p` that minimises the squared error in the order `deg`, `deg-1`, 
… `0`. The [`Polynomial.fit`](numpy.polynomial.polynomial.polynomial.fit#numpy.polynomial.polynomial.Polynomial.fit "numpy.polynomial.polynomial.Polynomial.fit") class method is recommended for new code as it is more stable numerically. See the documentation of the method for more information. Parameters **x**array_like, shape (M,) x-coordinates of the M sample points `(x[i], y[i])`. **y**array_like, shape (M,) or (M, K) y-coordinates of the sample points. Several data sets of sample points sharing the same x-coordinates can be fitted at once by passing in a 2D-array that contains one dataset per column. **deg**int Degree of the fitting polynomial. **rcond**float, optional Relative condition number of the fit. Singular values smaller than this relative to the largest singular value will be ignored. The default value is len(x)*eps, where eps is the relative precision of the float type, about 2e-16 in most cases. **full**bool, optional Switch determining nature of return value. When it is False (the default) just the coefficients are returned, when True diagnostic information from the singular value decomposition is also returned. **w**array_like, shape (M,), optional Weights. If not None, the weight `w[i]` applies to the unsquared residual `y[i] - y_hat[i]` at `x[i]`. Ideally the weights are chosen so that the errors of the products `w[i]*y[i]` all have the same variance. When using inverse-variance weighting, use `w[i] = 1/sigma(y[i])`. The default value is None. **cov**bool or str, optional If given and not `False`, return not just the estimate but also its covariance matrix. By default, the covariance is scaled by chi2/dof, where dof = M - (deg + 1), i.e., the weights are presumed to be unreliable except in a relative sense and everything is scaled such that the reduced chi2 is unity. This scaling is omitted if `cov='unscaled'`, as is relevant for the case that the weights are w = 1/sigma, with sigma known to be a reliable estimate of the uncertainty. 
Returns **p**ndarray, shape (deg + 1,) or (deg + 1, K) Polynomial coefficients, highest power first. If `y` was 2-D, the coefficients for `k`-th data set are in `p[:,k]`. residuals, rank, singular_values, rcond These values are only returned if `full == True` * residuals – sum of squared residuals of the least squares fit * rank – the effective rank of the scaled Vandermonde coefficient matrix * singular_values – singular values of the scaled Vandermonde coefficient matrix * rcond – value of `rcond`. For more details, see [`numpy.linalg.lstsq`](numpy.linalg.lstsq#numpy.linalg.lstsq "numpy.linalg.lstsq"). **V**ndarray, shape (M,M) or (M,M,K) Present only if `full == False` and `cov == True`. The covariance matrix of the polynomial coefficient estimates. The diagonal of this matrix holds the variance estimates for each coefficient. If y is a 2-D array, then the covariance matrix for the `k`-th data set is in `V[:,:,k]` Warns RankWarning The rank of the coefficient matrix in the least-squares fit is deficient. The warning is only raised if `full == False`. The warnings can be turned off by ``` >>> import warnings >>> warnings.simplefilter('ignore', np.RankWarning) ``` See also [`polyval`](numpy.polyval#numpy.polyval "numpy.polyval") Compute polynomial values. [`linalg.lstsq`](numpy.linalg.lstsq#numpy.linalg.lstsq "numpy.linalg.lstsq") Computes a least-squares fit. [`scipy.interpolate.UnivariateSpline`](https://docs.scipy.org/doc/scipy/reference/generated/scipy.interpolate.UnivariateSpline.html#scipy.interpolate.UnivariateSpline "(in SciPy v1.8.1)") Computes spline fits. #### Notes Any masked values in x are propagated in y, and vice-versa. The solution minimizes the squared error \[E = \sum_{j=0}^k |p(x_j) - y_j|^2\] in the equations: ``` x[0]**n * p[0] + ... + x[0] * p[n-1] + p[n] = y[0] x[1]**n * p[0] + ... + x[1] * p[n-1] + p[n] = y[1] ... x[k]**n * p[0] + ... + x[k] * p[n-1] + p[n] = y[k] ``` The coefficient matrix of the coefficients `p` is a Vandermonde matrix. 
[`polyfit`](numpy.polyfit#numpy.polyfit "numpy.polyfit") issues a [`RankWarning`](numpy.rankwarning#numpy.RankWarning "numpy.RankWarning") when the least-squares fit is badly conditioned. This implies that the best fit is not well-defined due to numerical error. The results may be improved by lowering the polynomial degree or by replacing `x` by `x` - `x`.mean(). The `rcond` parameter can also be set to a value smaller than its default, but the resulting fit may be spurious: including contributions from the small singular values can add numerical noise to the result. Note that fitting polynomial coefficients is inherently badly conditioned when the degree of the polynomial is large or the interval of sample points is badly centered. The quality of the fit should always be checked in these cases. When polynomial fits are not satisfactory, splines may be a good alternative. #### References 1 Wikipedia, “Curve fitting”, <https://en.wikipedia.org/wiki/Curve_fitting 2 Wikipedia, “Polynomial interpolation”, <https://en.wikipedia.org/wiki/Polynomial_interpolation #### Examples ``` >>> import warnings >>> x = np.array([0.0, 1.0, 2.0, 3.0, 4.0, 5.0]) >>> y = np.array([0.0, 0.8, 0.9, 0.1, -0.8, -1.0]) >>> z = np.polyfit(x, y, 3) >>> z array([ 0.08703704, -0.81349206, 1.69312169, -0.03968254]) # may vary ``` It is convenient to use [`poly1d`](numpy.poly1d#numpy.poly1d "numpy.poly1d") objects for dealing with polynomials: ``` >>> p = np.poly1d(z) >>> p(0.5) 0.6143849206349179 # may vary >>> p(3.5) -0.34732142857143039 # may vary >>> p(10) 22.579365079365115 # may vary ``` High-order polynomials may oscillate wildly: ``` >>> with warnings.catch_warnings(): ... warnings.simplefilter('ignore', np.RankWarning) ... p30 = np.poly1d(np.polyfit(x, y, 30)) ... 
>>> p30(4) -0.80000000000000204 # may vary >>> p30(5) -0.99999999999999445 # may vary >>> p30(4.5) -0.10547061179440398 # may vary ``` Illustration: ``` >>> import matplotlib.pyplot as plt >>> xp = np.linspace(-2, 6, 100) >>> _ = plt.plot(x, y, '.', xp, p(xp), '-', xp, p30(xp), '--') >>> plt.ylim(-2,2) (-2, 2) >>> plt.show() ``` © 2005–2022 NumPy Developers Licensed under the 3-clause BSD License. <https://numpy.org/doc/1.23/reference/generated/numpy.ma.polyfit.htmlnumpy.ma.around =============== ma.around*=<numpy.ma.core._MaskedUnaryOperation object>* Round an array to the given number of decimals. See also [`around`](numpy.around#numpy.around "numpy.around") equivalent function; see for details. © 2005–2022 NumPy Developers Licensed under the 3-clause BSD License. 
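Since the `ma.around` page gives no example, here is a minimal sketch of our own showing that masked entries survive rounding:

```python
import numpy as np

a = np.ma.array([1.234, 5.678, 9.876], mask=[False, False, True])

# Round to 2 decimals: unmasked values are rounded,
# the masked entry stays masked.
print(np.ma.around(a, 2))
```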
Returns **clipped_array**MaskedArray An array with the elements of `a`, but where values < `a_min` are replaced with `a_min`, and those > `a_max` with `a_max`. See also [Output type determination](../../user/basics.ufuncs#ufuncs-output-type) #### Notes When `a_min` is greater than `a_max`, [`clip`](numpy.clip#numpy.clip "numpy.clip") returns an array in which all values are equal to `a_max`, as shown in the second example. #### Examples ``` >>> a = np.arange(10) >>> a array([0, 1, 2, 3, 4, 5, 6, 7, 8, 9]) >>> np.clip(a, 1, 8) array([1, 1, 2, 3, 4, 5, 6, 7, 8, 8]) >>> np.clip(a, 8, 1) array([1, 1, 1, 1, 1, 1, 1, 1, 1, 1]) >>> np.clip(a, 3, 6, out=a) array([3, 3, 3, 3, 4, 5, 6, 6, 6, 6]) >>> a array([3, 3, 3, 3, 4, 5, 6, 6, 6, 6]) >>> a = np.arange(10) >>> a array([0, 1, 2, 3, 4, 5, 6, 7, 8, 9]) >>> np.clip(a, [3, 4, 1, 1, 1, 4, 4, 4, 4, 4], 8) array([3, 4, 2, 3, 4, 5, 6, 7, 8, 8]) ``` © 2005–2022 NumPy Developers Licensed under the 3-clause BSD License. <https://numpy.org/doc/1.23/reference/generated/numpy.ma.clip.htmlnumpy.ma.round ============== ma.round(*a*, *decimals=0*, *out=None*)[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/ma/core.py#L7469-L7499) Return a copy of a, rounded to ‘decimals’ places. When ‘decimals’ is negative, it specifies the number of positions to the left of the decimal point. The real and imaginary parts of complex numbers are rounded separately. Nothing is done if the array is not of float type and ‘decimals’ is greater than or equal to 0. Parameters **decimals**int Number of decimals to round to. May be negative. **out**array_like Existing array to use for output. If not given, returns a default copy of a. #### Notes If out is given and does not have a mask attribute, the mask of a is lost! © 2005–2022 NumPy Developers Licensed under the 3-clause BSD License. 
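A short sketch of our own, illustrating positive and negative `decimals` for `ma.round` on a masked array:

```python
import numpy as np

a = np.ma.array([123.456, 654.321, 999.0], mask=[False, False, True])

# decimals=1 rounds to the right of the decimal point,
# decimals=-2 rounds to the nearest hundred; the mask is preserved.
print(np.ma.round(a, decimals=1))
print(np.ma.round(a, decimals=-2))
```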
<https://numpy.org/doc/1.23/reference/generated/numpy.ma.round.htmlnumpy.ma.MaskedArray.clip ========================= method ma.MaskedArray.clip(*min=None*, *max=None*, *out=None*, ***kwargs*) Return an array whose values are limited to `[min, max]`. One of max or min must be given. Refer to [`numpy.clip`](numpy.clip#numpy.clip "numpy.clip") for full documentation. See also [`numpy.clip`](numpy.clip#numpy.clip "numpy.clip") equivalent function © 2005–2022 NumPy Developers Licensed under the 3-clause BSD License. <https://numpy.org/doc/1.23/reference/generated/numpy.ma.MaskedArray.clip.htmlnumpy.ma.MaskedArray.round ========================== method ma.MaskedArray.round(*decimals=0*, *out=None*)[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/ma/core.py#L5431-L5454) Return each element rounded to the given number of decimals. Refer to [`numpy.around`](numpy.around#numpy.around "numpy.around") for full documentation. See also [`numpy.ndarray.round`](numpy.ndarray.round#numpy.ndarray.round "numpy.ndarray.round") corresponding function for ndarrays [`numpy.around`](numpy.around#numpy.around "numpy.around") equivalent function © 2005–2022 NumPy Developers Licensed under the 3-clause BSD License. <https://numpy.org/doc/1.23/reference/generated/numpy.ma.MaskedArray.round.htmlnumpy.ma.allequal ================= ma.allequal(*a*, *b*, *fill_value=True*)[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/ma/core.py#L7792-L7847) Return True if all entries of a and b are equal, using fill_value as a truth value where either or both are masked. Parameters **a, b**array_like Input arrays to compare. **fill_value**bool, optional Whether masked values in a or b are considered equal (True) or not (False). Returns **y**bool Returns True if the two arrays are equal within the given tolerance, False otherwise. If either array contains NaN, then False is returned. 
See also [`all`](numpy.all#numpy.all "numpy.all"), [`any`](numpy.any#numpy.any "numpy.any") [`numpy.ma.allclose`](numpy.ma.allclose#numpy.ma.allclose "numpy.ma.allclose") #### Examples ``` >>> a = np.ma.array([1e10, 1e-7, 42.0], mask=[0, 0, 1]) >>> a masked_array(data=[10000000000.0, 1e-07, --], mask=[False, False, True], fill_value=1e+20) ``` ``` >>> b = np.array([1e10, 1e-7, -42.0]) >>> b array([ 1.00000000e+10, 1.00000000e-07, -4.20000000e+01]) >>> np.ma.allequal(a, b, fill_value=False) False >>> np.ma.allequal(a, b) True ``` © 2005–2022 NumPy Developers Licensed under the 3-clause BSD License. <https://numpy.org/doc/1.23/reference/generated/numpy.ma.allequal.htmlnumpy.ma.allclose ================= ma.allclose(*a*, *b*, *masked_equal=True*, *rtol=1e-05*, *atol=1e-08*)[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/ma/core.py#L7850-L7955) Returns True if two arrays are element-wise equal within a tolerance. This function is equivalent to [`allclose`](numpy.allclose#numpy.allclose "numpy.allclose") except that masked values are treated as equal (default) or unequal, depending on the [`masked_equal`](numpy.ma.masked_equal#numpy.ma.masked_equal "numpy.ma.masked_equal") argument. Parameters **a, b**array_like Input arrays to compare. **masked_equal**bool, optional Whether masked values in `a` and `b` are considered equal (True) or not (False). They are considered equal by default. **rtol**float, optional Relative tolerance. The relative difference is equal to `rtol * b`. Default is 1e-5. **atol**float, optional Absolute tolerance. The absolute difference is equal to `atol`. Default is 1e-8. Returns **y**bool Returns True if the two arrays are equal within the given tolerance, False otherwise. If either array contains NaN, then False is returned. 
See also [`all`](numpy.all#numpy.all "numpy.all"), [`any`](numpy.any#numpy.any "numpy.any") [`numpy.allclose`](numpy.allclose#numpy.allclose "numpy.allclose") the non-masked [`allclose`](numpy.allclose#numpy.allclose "numpy.allclose"). #### Notes If the following equation is element-wise True, then [`allclose`](numpy.allclose#numpy.allclose "numpy.allclose") returns True: ``` absolute(`a` - `b`) <= (`atol` + `rtol` * absolute(`b`)) ``` Return True if all elements of `a` and `b` are equal subject to given tolerances. #### Examples ``` >>> a = np.ma.array([1e10, 1e-7, 42.0], mask=[0, 0, 1]) >>> a masked_array(data=[10000000000.0, 1e-07, --], mask=[False, False, True], fill_value=1e+20) >>> b = np.ma.array([1e10, 1e-8, -42.0], mask=[0, 0, 1]) >>> np.ma.allclose(a, b) False ``` ``` >>> a = np.ma.array([1e10, 1e-8, 42.0], mask=[0, 0, 1]) >>> b = np.ma.array([1.00001e10, 1e-9, -42.0], mask=[0, 0, 1]) >>> np.ma.allclose(a, b) True >>> np.ma.allclose(a, b, masked_equal=False) False ``` Masked values are not compared directly. ``` >>> a = np.ma.array([1e10, 1e-8, 42.0], mask=[0, 0, 1]) >>> b = np.ma.array([1.00001e10, 1e-9, 42.0], mask=[0, 0, 1]) >>> np.ma.allclose(a, b) True >>> np.ma.allclose(a, b, masked_equal=False) False ``` © 2005–2022 NumPy Developers Licensed under the 3-clause BSD License. <https://numpy.org/doc/1.23/reference/generated/numpy.ma.allclose.htmlnumpy.ma.apply_along_axis =========================== ma.apply_along_axis(*func1d*, *axis*, *arr*, **args*, ***kwargs*)[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/ma/extras.py#L371-L450) Apply a function to 1-D slices along the given axis. Execute `func1d(a, *args, **kwargs)` where `func1d` operates on 1-D arrays and `a` is a 1-D slice of `arr` along `axis`. 
This is equivalent to (but faster than) the following use of [`ndindex`](numpy.ndindex#numpy.ndindex "numpy.ndindex") and [`s_`](numpy.s_#numpy.s_ "numpy.s_"), which sets each of `ii`, `jj`, and `kk` to a tuple of indices:

```
Ni, Nk = a.shape[:axis], a.shape[axis+1:]
for ii in ndindex(Ni):
    for kk in ndindex(Nk):
        f = func1d(arr[ii + s_[:,] + kk])
        Nj = f.shape
        for jj in ndindex(Nj):
            out[ii + jj + kk] = f[jj]
```

Equivalently, eliminating the inner loop, this can be expressed as:

```
Ni, Nk = a.shape[:axis], a.shape[axis+1:]
for ii in ndindex(Ni):
    for kk in ndindex(Nk):
        out[ii + s_[...,] + kk] = func1d(arr[ii + s_[:,] + kk])
```

Parameters

**func1d**function (M,) -> (Nj…)

This function should accept 1-D arrays. It is applied to 1-D slices of `arr` along the specified axis.

**axis**integer

Axis along which `arr` is sliced.

**arr**ndarray (Ni…, M, Nk…)

Input array.

**args**any

Additional arguments to `func1d`.

**kwargs**any

Additional named arguments to `func1d`.

New in version 1.9.0.

Returns

**out**ndarray (Ni…, Nj…, Nk…)

The output array. The shape of `out` is identical to the shape of `arr`, except along the `axis` dimension. This axis is removed, and replaced with new dimensions equal to the shape of the return value of `func1d`. So if `func1d` returns a scalar, `out` will have one fewer dimensions than `arr`.

See also

[`apply_over_axes`](numpy.apply_over_axes#numpy.apply_over_axes "numpy.apply_over_axes")

Apply a function repeatedly over multiple axes.

#### Examples

```
>>> def my_func(a):
...     """Average first and last element of a 1-D array"""
...     return (a[0] + a[-1]) * 0.5
>>> b = np.array([[1,2,3], [4,5,6], [7,8,9]])
>>> np.apply_along_axis(my_func, 0, b)
array([4., 5., 6.])
>>> np.apply_along_axis(my_func, 1, b)
array([2., 5., 8.])
```

For a function that returns a 1D array, the number of dimensions in `outarr` is the same as `arr`.

```
>>> b = np.array([[8,1,7], [4,3,9], [5,2,6]])
>>> np.apply_along_axis(sorted, 1, b)
array([[1, 7, 8],
       [3, 4, 9],
       [2, 5, 6]])
```

For a function that returns a higher dimensional array, those dimensions are inserted in place of the `axis` dimension.

```
>>> b = np.array([[1,2,3], [4,5,6], [7,8,9]])
>>> np.apply_along_axis(np.diag, -1, b)
array([[[1, 0, 0],
        [0, 2, 0],
        [0, 0, 3]],
       [[4, 0, 0],
        [0, 5, 0],
        [0, 0, 6]],
       [[7, 0, 0],
        [0, 8, 0],
        [0, 0, 9]]])
```

<https://numpy.org/doc/1.23/reference/generated/numpy.ma.apply_along_axis.html>

numpy.ma.apply_over_axes
========================

ma.apply_over_axes(*func*, *a*, *axes*)[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/ma/extras.py#L454-L476)

Apply a function repeatedly over multiple axes.

`func` is called as `res = func(a, axis)`, where `axis` is the first element of `axes`. The result `res` of the function call must have either the same dimensions as `a` or one less dimension. If `res` has one less dimension than `a`, a dimension is inserted before `axis`.
The call to `func` is then repeated for each axis in `axes`, with `res` as the first argument. Parameters **func**function This function must take two arguments, `func(a, axis)`. **a**array_like Input array. **axes**array_like Axes over which `func` is applied; the elements must be integers. Returns **apply_over_axis**ndarray The output array. The number of dimensions is the same as `a`, but the shape can be different. This depends on whether `func` changes the shape of its output with respect to its input. See also [`apply_along_axis`](numpy.apply_along_axis#numpy.apply_along_axis "numpy.apply_along_axis") Apply a function to 1-D slices of an array along the given axis. #### Examples ``` >>> a = np.ma.arange(24).reshape(2,3,4) >>> a[:,0,1] = np.ma.masked >>> a[:,1,:] = np.ma.masked >>> a masked_array( data=[[[0, --, 2, 3], [--, --, --, --], [8, 9, 10, 11]], [[12, --, 14, 15], [--, --, --, --], [20, 21, 22, 23]]], mask=[[[False, True, False, False], [ True, True, True, True], [False, False, False, False]], [[False, True, False, False], [ True, True, True, True], [False, False, False, False]]], fill_value=999999) >>> np.ma.apply_over_axes(np.ma.sum, a, [0,2]) masked_array( data=[[[46], [--], [124]]], mask=[[[False], [ True], [False]]], fill_value=999999) ``` Tuple axis arguments to ufuncs are equivalent: ``` >>> np.ma.sum(a, axis=(0,2)).reshape((1,-1,1)) masked_array( data=[[[46], [--], [124]]], mask=[[[False], [ True], [False]]], fill_value=999999) ``` © 2005–2022 NumPy Developers Licensed under the 3-clause BSD License. <https://numpy.org/doc/1.23/reference/generated/numpy.ma.apply_over_axes.htmlnumpy.ma.arange =============== ma.arange([*start*, ]*stop*, [*step*, ]*dtype=None*, ***, *like=None*)*=<numpy.ma.core._convert2ma object>* Return evenly spaced values within a given interval. 
`arange` can be called with a varying number of positional arguments:

* `arange(stop)`: Values are generated within the half-open interval `[0, stop)` (in other words, the interval including `start` but excluding `stop`).
* `arange(start, stop)`: Values are generated within the half-open interval `[start, stop)`.
* `arange(start, stop, step)`: Values are generated within the half-open interval `[start, stop)`, with spacing between values given by `step`.

For integer arguments the function is roughly equivalent to the Python built-in [`range`](https://docs.python.org/3/library/stdtypes.html#range "(in Python v3.10)"), but returns an ndarray rather than a `range` instance.

When using a non-integer step, such as 0.1, it is often better to use [`numpy.linspace`](numpy.linspace#numpy.linspace "numpy.linspace"). See the Warning sections below for more information.

Parameters

**start**integer or real, optional

Start of interval. The interval includes this value. The default start value is 0.

**stop**integer or real

End of interval. The interval does not include this value, except in some cases where `step` is not an integer and floating point round-off affects the length of `out`.

**step**integer or real, optional

Spacing between values. For any output `out`, this is the distance between two adjacent values, `out[i+1] - out[i]`. The default step size is 1. If `step` is specified as a positional argument, `start` must also be given.

**dtype**dtype, optional

The type of the output array. If [`dtype`](numpy.dtype#numpy.dtype "numpy.dtype") is not given, infer the data type from the other input arguments.

**like**array_like, optional

Reference object to allow the creation of arrays which are not NumPy arrays. If an array-like passed in as `like` supports the `__array_function__` protocol, the result will be defined by it. In this case, it ensures the creation of an array object compatible with that passed in via this argument.

New in version 1.20.0.
Returns **arange**MaskedArray Array of evenly spaced values. For floating point arguments, the length of the result is `ceil((stop - start)/step)`. Because of floating point overflow, this rule may result in the last element of `out` being greater than `stop`. Warning The length of the output might not be numerically stable. Another stability issue is due to the internal implementation of [`numpy.arange`](numpy.arange#numpy.arange "numpy.arange"). The actual step value used to populate the array is `dtype(start + step) - dtype(start)` and not `step`. Precision loss can occur here, due to casting or due to using floating points when `start` is much larger than `step`. This can lead to unexpected behaviour. For example: ``` >>> np.arange(0, 5, 0.5, dtype=int) array([0, 0, 0, 0, 0, 0, 0, 0, 0, 0]) >>> np.arange(-3, 3, 0.5, dtype=int) array([-3, -2, -1, 0, 1, 2, 3, 4, 5, 6, 7, 8]) ``` In such cases, the use of [`numpy.linspace`](numpy.linspace#numpy.linspace "numpy.linspace") should be preferred. The built-in [`range`](https://docs.python.org/3/library/stdtypes.html#range "(in Python v3.10)") generates [Python built-in integers that have arbitrary size](https://docs.python.org/3/c-api/long.html "(in Python v3.10)"), while [`numpy.arange`](numpy.arange#numpy.arange "numpy.arange") produces [`numpy.int32`](../arrays.scalars#numpy.int32 "numpy.int32") or [`numpy.int64`](../arrays.scalars#numpy.int64 "numpy.int64") numbers. This may result in incorrect results for large integer values: ``` >>> power = 40 >>> modulo = 10000 >>> x1 = [(n ** power) % modulo for n in range(8)] >>> x2 = [(n ** power) % modulo for n in np.arange(8)] >>> print(x1) [0, 1, 7776, 8801, 6176, 625, 6576, 4001] # correct >>> print(x2) [0, 1, 7776, 7185, 0, 5969, 4816, 3361] # incorrect ``` See also [`numpy.linspace`](numpy.linspace#numpy.linspace "numpy.linspace") Evenly spaced numbers with careful handling of endpoints. 
[`numpy.ogrid`](numpy.ogrid#numpy.ogrid "numpy.ogrid") Arrays of evenly spaced numbers in N-dimensions. [`numpy.mgrid`](numpy.mgrid#numpy.mgrid "numpy.mgrid") Grid-shaped arrays of evenly spaced numbers in N-dimensions. #### Examples ``` >>> np.arange(3) array([0, 1, 2]) >>> np.arange(3.0) array([ 0., 1., 2.]) >>> np.arange(3,7) array([3, 4, 5, 6]) >>> np.arange(3,7,2) array([3, 5]) ``` © 2005–2022 NumPy Developers Licensed under the 3-clause BSD License. <https://numpy.org/doc/1.23/reference/generated/numpy.ma.arange.htmlnumpy.ma.choose =============== ma.choose(*indices*, *choices*, *out=None*, *mode='raise'*)[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/ma/core.py#L7394-L7466) Use an index array to construct a new array from a list of choices. Given an array of integers and a list of n choice arrays, this method will create a new array that merges each of the choice arrays. Where a value in `index` is i, the new array will have the value that choices[i] contains in the same place. Parameters **indices**ndarray of ints This array must contain integers in `[0, n-1]`, where n is the number of choices. **choices**sequence of arrays Choice arrays. The index array and all of the choices should be broadcastable to the same shape. **out**array, optional If provided, the result will be inserted into this array. It should be of the appropriate shape and [`dtype`](numpy.dtype#numpy.dtype "numpy.dtype"). **mode**{‘raise’, ‘wrap’, ‘clip’}, optional Specifies how out-of-bounds indices will behave. * ‘raise’ : raise an error * ‘wrap’ : wrap around * ‘clip’ : clip to the range Returns **merged_array**array See also [`choose`](numpy.choose#numpy.choose "numpy.choose") equivalent function #### Examples ``` >>> choice = np.array([[1,1,1], [2,2,2], [3,3,3]]) >>> a = np.array([2, 1, 0]) >>> np.ma.choose(a, choice) masked_array(data=[3, 2, 1], mask=False, fill_value=999999) ``` © 2005–2022 NumPy Developers Licensed under the 3-clause BSD License. 
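One behavior of `ma.choose` the example above does not show: a masked entry in the index array propagates to the result. A small sketch of this, reusing the same `choice` array:

```python
import numpy as np

choice = np.array([[1, 1, 1], [2, 2, 2], [3, 3, 3]])

# The middle index is masked, so the middle output entry is masked too.
idx = np.ma.array([2, 1, 0], mask=[False, True, False])
result = np.ma.choose(idx, choice)
print(result)  # [3 -- 1]
```

Unmasked positions behave exactly like `numpy.choose`; only the mask of `indices` is carried through to the output.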
<https://numpy.org/doc/1.23/reference/generated/numpy.ma.choose.html>

numpy.ma.ediff1d
================

ma.ediff1d(*arr*, *to_end=None*, *to_begin=None*)[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/ma/extras.py#L1067-L1093)

Compute the differences between consecutive elements of an array.

This function is the equivalent of [`numpy.ediff1d`](numpy.ediff1d#numpy.ediff1d "numpy.ediff1d") that takes masked values into account, see [`numpy.ediff1d`](numpy.ediff1d#numpy.ediff1d "numpy.ediff1d") for details.

See also

[`numpy.ediff1d`](numpy.ediff1d#numpy.ediff1d "numpy.ediff1d")

Equivalent function for ndarrays.

<https://numpy.org/doc/1.23/reference/generated/numpy.ma.ediff1d.html>

numpy.ma.indices
================

ma.indices(*dimensions*, *dtype=<class 'int'>*, *sparse=False*)*=<numpy.ma.core._convert2ma object>*

Return an array representing the indices of a grid.

Compute an array where the subarrays contain index values 0, 1, …,
 varying only along the corresponding axis. Parameters **dimensions**sequence of ints The shape of the grid. **dtype**dtype, optional Data type of the result. **sparse**boolean, optional Return a sparse representation of the grid instead of a dense representation. Default is False. New in version 1.17. Returns **grid**one MaskedArray or tuple of MaskedArrays If sparse is False: Returns one array of grid indices, `grid.shape = (len(dimensions),) + tuple(dimensions)`. If sparse is True: Returns a tuple of arrays, with `grid[i].shape = (1, ..., 1, dimensions[i], 1, ..., 1)` with dimensions[i] in the ith place See also [`mgrid`](numpy.mgrid#numpy.mgrid "numpy.mgrid"), [`ogrid`](numpy.ogrid#numpy.ogrid "numpy.ogrid"), [`meshgrid`](numpy.meshgrid#numpy.meshgrid "numpy.meshgrid") #### Notes The output shape in the dense case is obtained by prepending the number of dimensions in front of the tuple of dimensions, i.e. if `dimensions` is a tuple `(r0, ..., rN-1)` of length `N`, the output shape is `(N, r0, ..., rN-1)`. The subarrays `grid[k]` contains the N-D array of indices along the `k-th` axis. Explicitly: ``` grid[k, i0, i1, ..., iN-1] = ik ``` #### Examples ``` >>> grid = np.indices((2, 3)) >>> grid.shape (2, 2, 3) >>> grid[0] # row indices array([[0, 0, 0], [1, 1, 1]]) >>> grid[1] # column indices array([[0, 1, 2], [0, 1, 2]]) ``` The indices can be used as an index into an array. ``` >>> x = np.arange(20).reshape(5, 4) >>> row, col = np.indices((2, 3)) >>> x[row, col] array([[0, 1, 2], [4, 5, 6]]) ``` Note that it would be more straightforward in the above example to extract the required elements directly with `x[:2, :3]`. If sparse is set to true, the grid will be returned in a sparse representation. ``` >>> i, j = np.indices((2, 3), sparse=True) >>> i.shape (2, 1) >>> j.shape (1, 3) >>> i # row indices array([[0], [1]]) >>> j # column indices array([[0, 1, 2]]) ``` © 2005–2022 NumPy Developers Licensed under the 3-clause BSD License. 
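The identity stated in the Notes, `grid[k, i0, i1, ..., iN-1] = ik`, can be checked directly. The sketch below uses the plain `np.indices` (which `ma.indices` wraps), since the grid values are the same:

```python
import numpy as np

# Verify grid[k, i0, i1] == ik at every position of a small 2x3 grid.
grid = np.indices((2, 3))
for i0 in range(2):
    for i1 in range(3):
        assert grid[0, i0, i1] == i0  # axis-0 (row) index
        assert grid[1, i0, i1] == i1  # axis-1 (column) index
```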
<https://numpy.org/doc/1.23/reference/generated/numpy.ma.indices.htmlnumpy.unwrap ============ numpy.unwrap(*p*, *discont=None*, *axis=- 1*, ***, *period=6.283185307179586*)[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/lib/function_base.py#L1658-L1751) Unwrap by taking the complement of large deltas with respect to the period. This unwraps a signal `p` by changing elements which have an absolute difference from their predecessor of more than `max(discont, period/2)` to their `period`-complementary values. For the default case where `period` is \(2\pi\) and `discont` is \(\pi\), this unwraps a radian phase `p` such that adjacent differences are never greater than \(\pi\) by adding \(2k\pi\) for some integer \(k\). Parameters **p**array_like Input array. **discont**float, optional Maximum discontinuity between values, default is `period/2`. Values below `period/2` are treated as if they were `period/2`. To have an effect different from the default, `discont` should be larger than `period/2`. **axis**int, optional Axis along which unwrap will operate, default is the last axis. **period**float, optional Size of the range over which the input wraps. By default, it is `2 pi`. New in version 1.21.0. Returns **out**ndarray Output array. See also [`rad2deg`](numpy.rad2deg#numpy.rad2deg "numpy.rad2deg"), [`deg2rad`](numpy.deg2rad#numpy.deg2rad "numpy.deg2rad") #### Notes If the discontinuity in `p` is smaller than `period/2`, but larger than `discont`, no unwrapping is done because taking the complement would only make the discontinuity larger. #### Examples ``` >>> phase = np.linspace(0, np.pi, num=5) >>> phase[3:] += np.pi >>> phase array([ 0. , 0.78539816, 1.57079633, 5.49778714, 6.28318531]) # may vary >>> np.unwrap(phase) array([ 0. , 0.78539816, 1.57079633, -0.78539816, 0. 
]) # may vary >>> np.unwrap([0, 1, 2, -1, 0], period=4) array([0, 1, 2, 3, 4]) >>> np.unwrap([ 1, 2, 3, 4, 5, 6, 1, 2, 3], period=6) array([1, 2, 3, 4, 5, 6, 7, 8, 9]) >>> np.unwrap([2, 3, 4, 5, 2, 3, 4, 5], period=4) array([2, 3, 4, 5, 6, 7, 8, 9]) >>> phase_deg = np.mod(np.linspace(0 ,720, 19), 360) - 180 >>> np.unwrap(phase_deg, period=360) array([-180., -140., -100., -60., -20., 20., 60., 100., 140., 180., 220., 260., 300., 340., 380., 420., 460., 500., 540.]) ``` © 2005–2022 NumPy Developers Licensed under the 3-clause BSD License. <https://numpy.org/doc/1.23/reference/generated/numpy.unwrap.htmlnumpy.round_ ============= numpy.round_(*a*, *decimals=0*, *out=None*)[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/core/fromnumeric.py#L3722-L3731) Round an array to the given number of decimals. See also [`around`](numpy.around#numpy.around "numpy.around") equivalent function; see for details. © 2005–2022 NumPy Developers Licensed under the 3-clause BSD License. <https://numpy.org/doc/1.23/reference/generated/numpy.round_.htmlnumpy.fix ========= numpy.fix(*x*, *out=None*)[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/lib/ufunclike.py#L73-L124) Round to nearest integer towards zero. Round an array of floats element-wise to nearest integer towards zero. The rounded values are returned as floats. Parameters **x**array_like An array of floats to be rounded **out**ndarray, optional A location into which the result is stored. If provided, it must have a shape that the input broadcasts to. If not provided or None, a freshly-allocated array is returned. Returns **out**ndarray of floats A float array with the same dimensions as the input. If second argument is not supplied then a float array is returned with the rounded values. If a second argument is supplied the result is stored there. The return value `out` is then a reference to that array. 
See also [`rint`](numpy.rint#numpy.rint "numpy.rint"), [`trunc`](numpy.trunc#numpy.trunc "numpy.trunc"), [`floor`](numpy.floor#numpy.floor "numpy.floor"), [`ceil`](numpy.ceil#numpy.ceil "numpy.ceil") [`around`](numpy.around#numpy.around "numpy.around") Round to given number of decimals #### Examples ``` >>> np.fix(3.14) 3.0 >>> np.fix(3) 3.0 >>> np.fix([2.1, 2.9, -2.1, -2.9]) array([ 2., 2., -2., -2.]) ``` © 2005–2022 NumPy Developers Licensed under the 3-clause BSD License. <https://numpy.org/doc/1.23/reference/generated/numpy.fix.htmlnumpy.nanprod ============= numpy.nanprod(*a*, *axis=None*, *dtype=None*, *out=None*, *keepdims=<no value>*, *initial=<no value>*, *where=<no value>*)[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/lib/nanfunctions.py#L732-L807) Return the product of array elements over a given axis treating Not a Numbers (NaNs) as ones. One is returned for slices that are all-NaN or empty. New in version 1.10.0. Parameters **a**array_like Array containing numbers whose product is desired. If `a` is not an array, a conversion is attempted. **axis**{int, tuple of int, None}, optional Axis or axes along which the product is computed. The default is to compute the product of the flattened array. **dtype**data-type, optional The type of the returned array and of the accumulator in which the elements are summed. By default, the dtype of `a` is used. An exception is when `a` has an integer type with less precision than the platform (u)intp. In that case, the default will be either (u)int32 or (u)int64 depending on whether the platform is 32 or 64 bits. For inexact inputs, dtype must be inexact. **out**ndarray, optional Alternate output array in which to place the result. The default is `None`. If provided, it must have the same shape as the expected output, but the type will be cast if necessary. See [Output type determination](../../user/basics.ufuncs#ufuncs-output-type) for more details. 
The casting of NaN to integer can yield unexpected results. **keepdims**bool, optional If True, the axes which are reduced are left in the result as dimensions with size one. With this option, the result will broadcast correctly against the original `arr`. **initial**scalar, optional The starting value for this product. See [`reduce`](numpy.ufunc.reduce#numpy.ufunc.reduce "numpy.ufunc.reduce") for details. New in version 1.22.0. **where**array_like of bool, optional Elements to include in the product. See [`reduce`](numpy.ufunc.reduce#numpy.ufunc.reduce "numpy.ufunc.reduce") for details. New in version 1.22.0. Returns **nanprod**ndarray A new array holding the result is returned unless `out` is specified, in which case it is returned. See also [`numpy.prod`](numpy.prod#numpy.prod "numpy.prod") Product across array propagating NaNs. [`isnan`](numpy.isnan#numpy.isnan "numpy.isnan") Show which elements are NaN. #### Examples ``` >>> np.nanprod(1) 1 >>> np.nanprod([1]) 1 >>> np.nanprod([1, np.nan]) 1.0 >>> a = np.array([[1, 2], [3, np.nan]]) >>> np.nanprod(a) 6.0 >>> np.nanprod(a, axis=0) array([3., 2.]) ``` © 2005–2022 NumPy Developers Licensed under the 3-clause BSD License. <https://numpy.org/doc/1.23/reference/generated/numpy.nanprod.htmlnumpy.nansum ============ numpy.nansum(*a*, *axis=None*, *dtype=None*, *out=None*, *keepdims=<no value>*, *initial=<no value>*, *where=<no value>*)[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/lib/nanfunctions.py#L623-L724) Return the sum of array elements over a given axis treating Not a Numbers (NaNs) as zero. In NumPy versions <= 1.9.0 Nan is returned for slices that are all-NaN or empty. In later versions zero is returned. Parameters **a**array_like Array containing numbers whose sum is desired. If `a` is not an array, a conversion is attempted. **axis**{int, tuple of int, None}, optional Axis or axes along which the sum is computed. The default is to compute the sum of the flattened array. 
**dtype**data-type, optional

The type of the returned array and of the accumulator in which the elements are summed. By default, the dtype of `a` is used. An exception is when `a` has an integer type with less precision than the platform (u)intp. In that case, the default will be either (u)int32 or (u)int64 depending on whether the platform is 32 or 64 bits. For inexact inputs, dtype must be inexact.

New in version 1.8.0.

**out**ndarray, optional

Alternate output array in which to place the result. The default is `None`. If provided, it must have the same shape as the expected output, but the type will be cast if necessary. See [Output type determination](../../user/basics.ufuncs#ufuncs-output-type) for more details. The casting of NaN to integer can yield unexpected results.

New in version 1.8.0.

**keepdims**bool, optional

If this is set to True, the axes which are reduced are left in the result as dimensions with size one. With this option, the result will broadcast correctly against the original `a`. If the value is anything but the default, then `keepdims` will be passed through to the [`mean`](numpy.mean#numpy.mean "numpy.mean") or [`sum`](numpy.sum#numpy.sum "numpy.sum") methods of sub-classes of [`ndarray`](numpy.ndarray#numpy.ndarray "numpy.ndarray"). If the sub-class's method does not implement `keepdims`, any exceptions will be raised.

New in version 1.8.0.

**initial**scalar, optional

Starting value for the sum. See [`reduce`](numpy.ufunc.reduce#numpy.ufunc.reduce "numpy.ufunc.reduce") for details.

New in version 1.22.0.

**where**array_like of bool, optional

Elements to include in the sum. See [`reduce`](numpy.ufunc.reduce#numpy.ufunc.reduce "numpy.ufunc.reduce") for details.

New in version 1.22.0.

Returns

**nansum**ndarray

A new array holding the result is returned unless `out` is specified, in which case it is returned. The result has the same size as `a`, and the same shape as `a` if `axis` is not None or `a` is a 1-d array.
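The `initial` and `where` parameters compose with the NaN handling described above; a small sketch (the column selector here is just an illustrative mask):

```python
import numpy as np

a = np.array([[1.0, np.nan],
              [2.0, 3.0]])

# `where` broadcasts across rows, so only the first column participates;
# `initial` seeds the sum once; NaNs are still treated as zero.
s = np.nansum(a, where=[True, False], initial=10.0)
print(s)  # 13.0  (1.0 + 2.0 + 10.0)
```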
See also [`numpy.sum`](numpy.sum#numpy.sum "numpy.sum") Sum across array propagating NaNs. [`isnan`](numpy.isnan#numpy.isnan "numpy.isnan") Show which elements are NaN. [`isfinite`](numpy.isfinite#numpy.isfinite "numpy.isfinite") Show which elements are not NaN or +/-inf. #### Notes If both positive and negative infinity are present, the sum will be Not A Number (NaN). #### Examples ``` >>> np.nansum(1) 1 >>> np.nansum([1]) 1 >>> np.nansum([1, np.nan]) 1.0 >>> a = np.array([[1, 1], [1, np.nan]]) >>> np.nansum(a) 3.0 >>> np.nansum(a, axis=0) array([2., 1.]) >>> np.nansum([1, np.nan, np.inf]) inf >>> np.nansum([1, np.nan, np.NINF]) -inf >>> from numpy.testing import suppress_warnings >>> with suppress_warnings() as sup: ... sup.filter(RuntimeWarning) ... np.nansum([1, np.nan, np.inf, -np.inf]) # both +/- infinity present nan ``` © 2005–2022 NumPy Developers Licensed under the 3-clause BSD License. <https://numpy.org/doc/1.23/reference/generated/numpy.nansum.htmlnumpy.nancumprod ================ numpy.nancumprod(*a*, *axis=None*, *dtype=None*, *out=None*)[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/lib/nanfunctions.py#L884-L944) Return the cumulative product of array elements over a given axis treating Not a Numbers (NaNs) as one. The cumulative product does not change when NaNs are encountered and leading NaNs are replaced by ones. Ones are returned for slices that are all-NaN or empty. New in version 1.12.0. Parameters **a**array_like Input array. **axis**int, optional Axis along which the cumulative product is computed. By default the input is flattened. **dtype**dtype, optional Type of the returned array, as well as of the accumulator in which the elements are multiplied. If *dtype* is not specified, it defaults to the dtype of `a`, unless `a` has an integer dtype with a precision less than that of the default platform integer. In that case, the default platform integer is used instead. 
**out**ndarray, optional Alternative output array in which to place the result. It must have the same shape and buffer length as the expected output but the type of the resulting values will be cast if necessary. Returns **nancumprod**ndarray A new array holding the result is returned unless `out` is specified, in which case it is returned. See also [`numpy.cumprod`](numpy.cumprod#numpy.cumprod "numpy.cumprod") Cumulative product across array propagating NaNs. [`isnan`](numpy.isnan#numpy.isnan "numpy.isnan") Show which elements are NaN. #### Examples ``` >>> np.nancumprod(1) array([1]) >>> np.nancumprod([1]) array([1]) >>> np.nancumprod([1, np.nan]) array([1., 1.]) >>> a = np.array([[1, 2], [3, np.nan]]) >>> np.nancumprod(a) array([1., 2., 6., 6.]) >>> np.nancumprod(a, axis=0) array([[1., 2.], [3., 2.]]) >>> np.nancumprod(a, axis=1) array([[1., 2.], [3., 3.]]) ``` © 2005–2022 NumPy Developers Licensed under the 3-clause BSD License. <https://numpy.org/doc/1.23/reference/generated/numpy.nancumprod.htmlnumpy.nancumsum =============== numpy.nancumsum(*a*, *axis=None*, *dtype=None*, *out=None*)[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/lib/nanfunctions.py#L814-L877) Return the cumulative sum of array elements over a given axis treating Not a Numbers (NaNs) as zero. The cumulative sum does not change when NaNs are encountered and leading NaNs are replaced by zeros. Zeros are returned for slices that are all-NaN or empty. New in version 1.12.0. Parameters **a**array_like Input array. **axis**int, optional Axis along which the cumulative sum is computed. The default (None) is to compute the cumsum over the flattened array. **dtype**dtype, optional Type of the returned array and of the accumulator in which the elements are summed. If [`dtype`](numpy.dtype#numpy.dtype "numpy.dtype") is not specified, it defaults to the dtype of `a`, unless `a` has an integer dtype with a precision less than that of the default platform integer. 
In that case, the default platform integer is used.

**out**ndarray, optional

Alternative output array in which to place the result. It must have the same shape and buffer length as the expected output but the type will be cast if necessary. See [Output type determination](../../user/basics.ufuncs#ufuncs-output-type) for more details.

Returns

**nancumsum**ndarray

A new array holding the result is returned unless `out` is specified, in which case it is returned. The result has the same size as `a`, and the same shape as `a` if `axis` is not None or `a` is a 1-d array.

See also

[`numpy.cumsum`](numpy.cumsum#numpy.cumsum "numpy.cumsum")

Cumulative sum across array propagating NaNs.

[`isnan`](numpy.isnan#numpy.isnan "numpy.isnan")

Show which elements are NaN.

#### Examples

```
>>> np.nancumsum(1)
array([1])
>>> np.nancumsum([1])
array([1])
>>> np.nancumsum([1, np.nan])
array([1., 1.])
>>> a = np.array([[1, 2], [3, np.nan]])
>>> np.nancumsum(a)
array([1., 3., 6., 6.])
>>> np.nancumsum(a, axis=0)
array([[1., 2.],
       [4., 2.]])
>>> np.nancumsum(a, axis=1)
array([[1., 3.],
       [3., 3.]])
```

<https://numpy.org/doc/1.23/reference/generated/numpy.nancumsum.html>

numpy.ediff1d
=============

numpy.ediff1d(*ary*, *to_end=None*, *to_begin=None*)[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/lib/arraysetops.py#L37-L122)

The differences between consecutive elements of an array.

Parameters

**ary**array_like

If necessary, will be flattened before the differences are taken.

**to_end**array_like, optional

Number(s) to append at the end of the returned differences.

**to_begin**array_like, optional

Number(s) to prepend at the beginning of the returned differences.

Returns

**ediff1d**ndarray

The differences. Loosely, this is `ary.flat[1:] - ary.flat[:-1]`.
See also

[`diff`](numpy.diff#numpy.diff "numpy.diff"), [`gradient`](numpy.gradient#numpy.gradient "numpy.gradient")

#### Notes

When applied to masked arrays, this function drops the mask information if the `to_begin` and/or `to_end` parameters are used.

#### Examples

```
>>> x = np.array([1, 2, 4, 7, 0])
>>> np.ediff1d(x)
array([ 1,  2,  3, -7])
```

```
>>> np.ediff1d(x, to_begin=-99, to_end=np.array([88, 99]))
array([-99,   1,   2, ...,  -7,  88,  99])
```

The returned array is always 1D.

```
>>> y = [[1, 2, 4], [1, 6, 24]]
>>> np.ediff1d(y)
array([ 1,  2, -3,  5, 18])
```

<https://numpy.org/doc/1.23/reference/generated/numpy.ediff1d.html>

numpy.gradient
==============

numpy.gradient(*f*, **varargs*, *axis=None*, *edge_order=1*)[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/lib/function_base.py#L969-L1312)

Return the gradient of an N-dimensional array.

The gradient is computed using second order accurate central differences in the interior points and either first or second order accurate one-sided (forward or backward) differences at the boundaries. The returned gradient hence has the same shape as the input array.

Parameters

**f**array_like

An N-dimensional array containing samples of a scalar function.

**varargs**list of scalar or array, optional

Spacing between f values. Default unitary spacing for all dimensions. Spacing can be specified using:

1. single scalar to specify a sample distance for all dimensions.
2. N scalars to specify a constant sample distance for each dimension. i.e. `dx`, `dy`, `dz`, …
3. N arrays to specify the coordinates of the values along each dimension of F. The length of the array must match the size of the corresponding dimension.
4. Any combination of N scalars/arrays with the meaning of 2. and 3.

If `axis` is given, the number of varargs must equal the number of axes. Default: 1.

**edge_order**{1, 2}, optional

Gradient is calculated using N-th order accurate differences at the boundaries. Default: 1.

New in version 1.9.1.

**axis**None or int or tuple of ints, optional

Gradient is calculated only along the given axis or axes. The default (axis = None) is to calculate the gradient for all the axes of the input array. axis may be negative, in which case it counts from the last to the first axis.

New in version 1.11.0.

Returns

**gradient**ndarray or list of ndarray

A list of ndarrays (or a single ndarray if there is only one dimension) corresponding to the derivatives of f with respect to each dimension. Each derivative has the same shape as f.

#### Notes

Assuming that \(f\in C^{3}\) (i.e., \(f\) has at least 3 continuous derivatives) and letting \(h_{*}\) be a non-homogeneous stepsize, we minimize the “consistency error” \(\eta_{i}\) between the true gradient and its estimate from a linear combination of the neighboring grid-points:

\[\eta_{i} = f_{i}^{\left(1\right)} - \left[ \alpha f\left(x_{i}\right) + \beta f\left(x_{i} + h_{d}\right) + \gamma f\left(x_{i}-h_{s}\right) \right]\]

By substituting \(f(x_{i} + h_{d})\) and \(f(x_{i} - h_{s})\) with their Taylor series expansion, this translates into solving the following linear system:

\[\begin{split}\left\{ \begin{array}{r} \alpha+\beta+\gamma=0 \\ \beta h_{d}-\gamma h_{s}=1 \\ \beta h_{d}^{2}+\gamma h_{s}^{2}=0 \end{array} \right.\end{split}\]

The resulting approximation of \(f_{i}^{(1)}\) is the following:

\[\hat f_{i}^{(1)} = \frac{ h_{s}^{2}f\left(x_{i} + h_{d}\right) + \left(h_{d}^{2} - h_{s}^{2}\right)f\left(x_{i}\right) - h_{d}^{2}f\left(x_{i}-h_{s}\right)} { h_{s}h_{d}\left(h_{d} +
h_{s}\right)} + \mathcal{O}\left(\frac{h_{d}h_{s}^{2} + h_{s}h_{d}^{2}}{h_{d} + h_{s}}\right)\]

It is worth noting that if \(h_{s}=h_{d}\) (i.e., data are evenly spaced) we find the standard second order approximation:

\[\hat f_{i}^{(1)}= \frac{f\left(x_{i+1}\right) - f\left(x_{i-1}\right)}{2h} + \mathcal{O}\left(h^{2}\right)\]

With a similar procedure the forward/backward approximations used for boundaries can be derived.

#### References

1. Quarteroni A., Sacco R., Saleri F. (2007) Numerical Mathematics (Texts in Applied Mathematics). New York: Springer.
2. Durran D. R. (1999) Numerical Methods for Wave Equations in Geophysical Fluid Dynamics. New York: Springer.
3. Fornberg B. (1988) Generation of Finite Difference Formulas on Arbitrarily Spaced Grids, Mathematics of Computation 51, no. 184 : 699-706. [PDF](http://www.ams.org/journals/mcom/1988-51-184/S0025-5718-1988-0935077-0/S0025-5718-1988-0935077-0.pdf).

#### Examples

```
>>> f = np.array([1, 2, 4, 7, 11, 16], dtype=float)
>>> np.gradient(f)
array([1. , 1.5, 2.5, 3.5, 4.5, 5. ])
>>> np.gradient(f, 2)
array([0.5 , 0.75, 1.25, 1.75, 2.25, 2.5 ])
```

Spacing can also be specified with an array that represents the coordinates of the values F along the dimensions. For instance a uniform spacing:

```
>>> x = np.arange(f.size)
>>> np.gradient(f, x)
array([1. , 1.5, 2.5, 3.5, 4.5, 5. ])
```

Or a non-uniform one:

```
>>> x = np.array([0., 1., 1.5, 3.5, 4., 6.], dtype=float)
>>> np.gradient(f, x)
array([1. , 3. , 3.5, 6.7, 6.9, 2.5])
```

For two dimensional arrays, the return will be two arrays ordered by axis. In this example the first array stands for the gradient in rows and the second one in columns direction:

```
>>> np.gradient(np.array([[1, 2, 6], [3, 4, 5]], dtype=float))
[array([[ 2.,  2., -1.],
       [ 2.,  2., -1.]]), array([[1. , 2.5, 4. ],
       [1. , 1. , 1. ]])]
```

In this example the spacing is also specified: uniform for axis=0 and non-uniform for axis=1

```
>>> dx = 2.
>>> y = [1., 1.5, 3.5] >>> np.gradient(np.array([[1, 2, 6], [3, 4, 5]], dtype=float), dx, y) [array([[ 1. , 1. , -0.5], [ 1. , 1. , -0.5]]), array([[2. , 2. , 2. ], [2. , 1.7, 0.5]])] ``` It is possible to specify how boundaries are treated using `edge_order` ``` >>> x = np.array([0, 1, 2, 3, 4]) >>> f = x**2 >>> np.gradient(f, edge_order=1) array([1., 2., 4., 6., 7.]) >>> np.gradient(f, edge_order=2) array([0., 2., 4., 6., 8.]) ``` The `axis` keyword can be used to specify a subset of axes of which the gradient is calculated ``` >>> np.gradient(np.array([[1, 2, 6], [3, 4, 5]], dtype=float), axis=0) array([[ 2., 2., -1.], [ 2., 2., -1.]]) ``` © 2005–2022 NumPy Developers Licensed under the 3-clause BSD License. <https://numpy.org/doc/1.23/reference/generated/numpy.gradient.html

numpy.trapz =========== numpy.trapz(*y*, *x=None*, *dx=1.0*, *axis=-1*)[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/lib/function_base.py#L4727-L4838) Integrate along the given axis using the composite trapezoidal rule. If `x` is provided, the integration happens in sequence along its elements - they are not sorted. Integrate `y` (`x`) along each 1d slice on the given axis, compute \(\int y(x) dx\). When `x` is specified, this integrates along the parametric curve, computing \(\int_t y(t) dt = \int_t y(t) \left.\frac{dx}{dt}\right|_{x=x(t)} dt\). Parameters **y**array_like Input array to integrate. **x**array_like, optional The sample points corresponding to the `y` values. If `x` is None, the sample points are assumed to be evenly spaced `dx` apart. The default is None. **dx**scalar, optional The spacing between sample points when `x` is None. The default is 1. **axis**int, optional The axis along which to integrate. Returns **trapz**float or ndarray Definite integral of `y` = n-dimensional array as approximated along a single axis by the trapezoidal rule. If `y` is a 1-dimensional array, then the result is a float. 
If `n` is greater than 1, then the result is an `n`-1 dimensional array. See also [`sum`](numpy.sum#numpy.sum "numpy.sum"), [`cumsum`](numpy.cumsum#numpy.cumsum "numpy.cumsum") #### Notes Image [[2]](#r7aa6c77779c0-2) illustrates trapezoidal rule – y-axis locations of points will be taken from `y` array, by default x-axis distances between points will be 1.0, alternatively they can be provided with `x` array or with `dx` scalar. Return value will be equal to combined area under the red lines. #### References 1 Wikipedia page: <https://en.wikipedia.org/wiki/Trapezoidal_rule [2](#id1) Illustration image: <https://en.wikipedia.org/wiki/File:Composite_trapezoidal_rule_illustration.png #### Examples ``` >>> np.trapz([1,2,3]) 4.0 >>> np.trapz([1,2,3], x=[4,6,8]) 8.0 >>> np.trapz([1,2,3], dx=2) 8.0 ``` Using a decreasing `x` corresponds to integrating in reverse: ``` >>> np.trapz([1,2,3], x=[8,6,4]) -8.0 ``` More generally `x` is used to integrate along a parametric curve. This finds the area of a circle, noting we repeat the sample which closes the curve: ``` >>> theta = np.linspace(0, 2 * np.pi, num=1000, endpoint=True) >>> np.trapz(np.cos(theta), x=np.sin(theta)) 3.141571941375841 ``` ``` >>> a = np.arange(6).reshape(2, 3) >>> a array([[0, 1, 2], [3, 4, 5]]) >>> np.trapz(a, axis=0) array([1.5, 2.5, 3.5]) >>> np.trapz(a, axis=1) array([2., 8.]) ``` © 2005–2022 NumPy Developers Licensed under the 3-clause BSD License. <https://numpy.org/doc/1.23/reference/generated/numpy.trapz.htmlnumpy.sinc ========== numpy.sinc(*x*)[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/lib/function_base.py#L3560-L3638) Return the normalized sinc function. The sinc function is equal to \(\sin(\pi x)/(\pi x)\) for any argument \(x\ne 0\). `sinc(0)` takes the limit value 1, making `sinc` not only everywhere continuous but also infinitely differentiable. Note Note the normalization factor of `pi` used in the definition. This is the most commonly used definition in signal processing. 
Use `sinc(x / np.pi)` to obtain the unnormalized sinc function \(\sin(x)/x\) that is more common in mathematics. Parameters **x**ndarray Array (possibly multi-dimensional) of values for which to calculate `sinc(x)`. Returns **out**ndarray `sinc(x)`, which has the same shape as the input. #### Notes The name sinc is short for “sine cardinal” or “sinus cardinalis”. The sinc function is used in various signal processing applications, including in anti-aliasing, in the construction of a Lanczos resampling filter, and in interpolation. For bandlimited interpolation of discrete-time signals, the ideal interpolation kernel is proportional to the sinc function. #### References 1 Weisstein, <NAME>. “Sinc Function.” From MathWorld–A Wolfram Web Resource. <http://mathworld.wolfram.com/SincFunction.html 2 Wikipedia, “Sinc function”, <https://en.wikipedia.org/wiki/Sinc_function #### Examples ``` >>> import matplotlib.pyplot as plt >>> x = np.linspace(-4, 4, 41) >>> np.sinc(x) array([-3.89804309e-17, -4.92362781e-02, -8.40918587e-02, # may vary -8.90384387e-02, -5.84680802e-02, 3.89804309e-17, 6.68206631e-02, 1.16434881e-01, 1.26137788e-01, 8.50444803e-02, -3.89804309e-17, -1.03943254e-01, -1.89206682e-01, -2.16236208e-01, -1.55914881e-01, 3.89804309e-17, 2.33872321e-01, 5.04551152e-01, 7.56826729e-01, 9.35489284e-01, 1.00000000e+00, 9.35489284e-01, 7.56826729e-01, 5.04551152e-01, 2.33872321e-01, 3.89804309e-17, -1.55914881e-01, -2.16236208e-01, -1.89206682e-01, -1.03943254e-01, -3.89804309e-17, 8.50444803e-02, 1.26137788e-01, 1.16434881e-01, 6.68206631e-02, 3.89804309e-17, -5.84680802e-02, -8.90384387e-02, -8.40918587e-02, -4.92362781e-02, -3.89804309e-17]) ``` ``` >>> plt.plot(x, np.sinc(x)) [<matplotlib.lines.Line2D object at 0x...>] >>> plt.title("Sinc Function") Text(0.5, 1.0, 'Sinc Function') >>> plt.ylabel("Amplitude") Text(0, 0.5, 'Amplitude') >>> plt.xlabel("X") Text(0.5, 0, 'X') >>> plt.show() ``` ![../../_images/numpy-sinc-1.png] © 2005–2022 NumPy Developers Licensed 
under the 3-clause BSD License. <https://numpy.org/doc/1.23/reference/generated/numpy.sinc.htmlnumpy.angle =========== numpy.angle(*z*, *deg=False*)[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/lib/function_base.py#L1601-L1651) Return the angle of the complex argument. Parameters **z**array_like A complex number or sequence of complex numbers. **deg**bool, optional Return angle in degrees if True, radians if False (default). Returns **angle**ndarray or scalar The counterclockwise angle from the positive real axis on the complex plane in the range `(-pi, pi]`, with dtype as numpy.float64. Changed in version 1.16.0: This function works on subclasses of ndarray like [`ma.array`](numpy.ma.array#numpy.ma.array "numpy.ma.array"). See also [`arctan2`](numpy.arctan2#numpy.arctan2 "numpy.arctan2") [`absolute`](numpy.absolute#numpy.absolute "numpy.absolute") #### Notes Although the angle of the complex number 0 is undefined, `numpy.angle(0)` returns the value 0. #### Examples ``` >>> np.angle([1.0, 1.0j, 1+1j]) # in radians array([ 0. , 1.57079633, 0.78539816]) # may vary >>> np.angle(1+1j, deg=True) # in degrees 45.0 ``` © 2005–2022 NumPy Developers Licensed under the 3-clause BSD License. <https://numpy.org/doc/1.23/reference/generated/numpy.angle.htmlnumpy.nanmax ============ numpy.nanmax(*a*, *axis=None*, *out=None*, *keepdims=<no value>*, *initial=<no value>*, *where=<no value>*)[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/lib/nanfunctions.py#L369-L494) Return the maximum of an array or maximum along an axis, ignoring any NaNs. When all-NaN slices are encountered a `RuntimeWarning` is raised and NaN is returned for that slice. Parameters **a**array_like Array containing numbers whose maximum is desired. If `a` is not an array, a conversion is attempted. **axis**{int, tuple of int, None}, optional Axis or axes along which the maximum is computed. The default is to compute the maximum of the flattened array. 
**out**ndarray, optional Alternate output array in which to place the result. The default is `None`; if provided, it must have the same shape as the expected output, but the type will be cast if necessary. See [Output type determination](../../user/basics.ufuncs#ufuncs-output-type) for more details. New in version 1.8.0. **keepdims**bool, optional If this is set to True, the axes which are reduced are left in the result as dimensions with size one. With this option, the result will broadcast correctly against the original `a`. If the value is anything but the default, then `keepdims` will be passed through to the [`max`](https://docs.python.org/3/library/functions.html#max "(in Python v3.10)") method of sub-classes of [`ndarray`](numpy.ndarray#numpy.ndarray "numpy.ndarray"). If the sub-classes methods does not implement `keepdims` any exceptions will be raised. New in version 1.8.0. **initial**scalar, optional The minimum value of an output element. Must be present to allow computation on empty slice. See [`reduce`](numpy.ufunc.reduce#numpy.ufunc.reduce "numpy.ufunc.reduce") for details. New in version 1.22.0. **where**array_like of bool, optional Elements to compare for the maximum. See [`reduce`](numpy.ufunc.reduce#numpy.ufunc.reduce "numpy.ufunc.reduce") for details. New in version 1.22.0. Returns **nanmax**ndarray An array with the same shape as `a`, with the specified axis removed. If `a` is a 0-d array, or if axis is None, an ndarray scalar is returned. The same dtype as `a` is returned. See also [`nanmin`](numpy.nanmin#numpy.nanmin "numpy.nanmin") The minimum value of an array along a given axis, ignoring any NaNs. [`amax`](numpy.amax#numpy.amax "numpy.amax") The maximum value of an array along a given axis, propagating any NaNs. [`fmax`](numpy.fmax#numpy.fmax "numpy.fmax") Element-wise maximum of two arrays, ignoring any NaNs. [`maximum`](numpy.maximum#numpy.maximum "numpy.maximum") Element-wise maximum of two arrays, propagating any NaNs. 
[`isnan`](numpy.isnan#numpy.isnan "numpy.isnan") Shows which elements are Not a Number (NaN). [`isfinite`](numpy.isfinite#numpy.isfinite "numpy.isfinite") Shows which elements are neither NaN nor infinity. [`amin`](numpy.amin#numpy.amin "numpy.amin"), [`fmin`](numpy.fmin#numpy.fmin "numpy.fmin"), [`minimum`](numpy.minimum#numpy.minimum "numpy.minimum") #### Notes NumPy uses the IEEE Standard for Binary Floating-Point for Arithmetic (IEEE 754). This means that Not a Number is not equivalent to infinity. Positive infinity is treated as a very large number and negative infinity is treated as a very small (i.e. negative) number. If the input has an integer type the function is equivalent to np.max. #### Examples ``` >>> a = np.array([[1, 2], [3, np.nan]]) >>> np.nanmax(a) 3.0 >>> np.nanmax(a, axis=0) array([3., 2.]) >>> np.nanmax(a, axis=1) array([2., 3.]) ``` When positive infinity and negative infinity are present: ``` >>> np.nanmax([1, 2, np.nan, np.NINF]) 2.0 >>> np.nanmax([1, 2, np.nan, np.inf]) inf ```

numpy.nanmin ============ numpy.nanmin(*a*, *axis=None*, *out=None*, *keepdims=<no value>*, *initial=<no value>*, *where=<no value>*)[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/lib/nanfunctions.py#L236-L361) Return minimum of an array or minimum along an axis, ignoring any NaNs. When all-NaN slices are encountered a `RuntimeWarning` is raised and NaN is returned for that slice. Parameters **a**array_like Array containing numbers whose minimum is desired. If `a` is not an array, a conversion is attempted. **axis**{int, tuple of int, None}, optional Axis or axes along which the minimum is computed. The default is to compute the minimum of the flattened array. **out**ndarray, optional Alternate output array in which to place the result. 
The default is `None`; if provided, it must have the same shape as the expected output, but the type will be cast if necessary. See [Output type determination](../../user/basics.ufuncs#ufuncs-output-type) for more details. New in version 1.8.0. **keepdims**bool, optional If this is set to True, the axes which are reduced are left in the result as dimensions with size one. With this option, the result will broadcast correctly against the original `a`. If the value is anything but the default, then `keepdims` will be passed through to the [`min`](https://docs.python.org/3/library/functions.html#min "(in Python v3.10)") method of sub-classes of [`ndarray`](numpy.ndarray#numpy.ndarray "numpy.ndarray"). If the sub-classes methods does not implement `keepdims` any exceptions will be raised. New in version 1.8.0. **initial**scalar, optional The maximum value of an output element. Must be present to allow computation on empty slice. See [`reduce`](numpy.ufunc.reduce#numpy.ufunc.reduce "numpy.ufunc.reduce") for details. New in version 1.22.0. **where**array_like of bool, optional Elements to compare for the minimum. See [`reduce`](numpy.ufunc.reduce#numpy.ufunc.reduce "numpy.ufunc.reduce") for details. New in version 1.22.0. Returns **nanmin**ndarray An array with the same shape as `a`, with the specified axis removed. If `a` is a 0-d array, or if axis is None, an ndarray scalar is returned. The same dtype as `a` is returned. See also [`nanmax`](numpy.nanmax#numpy.nanmax "numpy.nanmax") The maximum value of an array along a given axis, ignoring any NaNs. [`amin`](numpy.amin#numpy.amin "numpy.amin") The minimum value of an array along a given axis, propagating any NaNs. [`fmin`](numpy.fmin#numpy.fmin "numpy.fmin") Element-wise minimum of two arrays, ignoring any NaNs. [`minimum`](numpy.minimum#numpy.minimum "numpy.minimum") Element-wise minimum of two arrays, propagating any NaNs. [`isnan`](numpy.isnan#numpy.isnan "numpy.isnan") Shows which elements are Not a Number (NaN). 
[`isfinite`](numpy.isfinite#numpy.isfinite "numpy.isfinite") Shows which elements are neither NaN nor infinity. [`amax`](numpy.amax#numpy.amax "numpy.amax"), [`fmax`](numpy.fmax#numpy.fmax "numpy.fmax"), [`maximum`](numpy.maximum#numpy.maximum "numpy.maximum") #### Notes NumPy uses the IEEE Standard for Binary Floating-Point for Arithmetic (IEEE 754). This means that Not a Number is not equivalent to infinity. Positive infinity is treated as a very large number and negative infinity is treated as a very small (i.e. negative) number. If the input has an integer type the function is equivalent to np.min. #### Examples ``` >>> a = np.array([[1, 2], [3, np.nan]]) >>> np.nanmin(a) 1.0 >>> np.nanmin(a, axis=0) array([1., 2.]) >>> np.nanmin(a, axis=1) array([1., 3.]) ``` When positive infinity and negative infinity are present: ``` >>> np.nanmin([1, 2, np.nan, np.inf]) 1.0 >>> np.nanmin([1, 2, np.nan, np.NINF]) -inf ```

numpy.convolve ============== numpy.convolve(*a*, *v*, *mode='full'*)[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/core/numeric.py#L754-L850) Returns the discrete, linear convolution of two one-dimensional sequences. The convolution operator is often seen in signal processing, where it models the effect of a linear time-invariant system on a signal [[1]](#r95849f33d2b1-1). In probability theory, the sum of two independent random variables is distributed according to the convolution of their individual distributions. If `v` is longer than `a`, the arrays are swapped before computation. Parameters **a**(N,) array_like First one-dimensional input array. **v**(M,) array_like Second one-dimensional input array. **mode**{‘full’, ‘valid’, ‘same’}, optional ‘full’: By default, mode is ‘full’. This returns the convolution at each point of overlap, with an output shape of (N+M-1,). 
At the end-points of the convolution, the signals do not overlap completely, and boundary effects may be seen. ‘same’: Mode ‘same’ returns output of length `max(M, N)`. Boundary effects are still visible. ‘valid’: Mode ‘valid’ returns output of length `max(M, N) - min(M, N) + 1`. The convolution product is only given for points where the signals overlap completely. Values outside the signal boundary have no effect. Returns **out**ndarray Discrete, linear convolution of `a` and `v`. See also [`scipy.signal.fftconvolve`](https://docs.scipy.org/doc/scipy/reference/generated/scipy.signal.fftconvolve.html#scipy.signal.fftconvolve "(in SciPy v1.8.1)") Convolve two arrays using the Fast Fourier Transform. [`scipy.linalg.toeplitz`](https://docs.scipy.org/doc/scipy/reference/generated/scipy.linalg.toeplitz.html#scipy.linalg.toeplitz "(in SciPy v1.8.1)") Used to construct the convolution operator. [`polymul`](numpy.polymul#numpy.polymul "numpy.polymul") Polynomial multiplication. Same output as convolve, but also accepts poly1d objects as input. #### Notes The discrete convolution operation is defined as \[(a * v)_n = \sum_{m = -\infty}^{\infty} a_m v_{n - m}\] It can be shown that a convolution \(x(t) * y(t)\) in time/space is equivalent to the multiplication \(X(f) Y(f)\) in the Fourier domain, after appropriate padding (padding is necessary to prevent circular convolution). Since multiplication is more efficient (faster) than convolution, the function [`scipy.signal.fftconvolve`](https://docs.scipy.org/doc/scipy/reference/generated/scipy.signal.fftconvolve.html#scipy.signal.fftconvolve "(in SciPy v1.8.1)") exploits the FFT to calculate the convolution of large data-sets. #### References [1](#id1) Wikipedia, “Convolution”, <https://en.wikipedia.org/wiki/Convolution #### Examples Note how the convolution operator flips the second array before “sliding” the two across one another: ``` >>> np.convolve([1, 2, 3], [0, 1, 0.5]) array([0. , 1. , 2.5, 4. 
, 1.5]) ``` Only return the middle values of the convolution. Contains boundary effects, where zeros are taken into account: ``` >>> np.convolve([1,2,3],[0,1,0.5], 'same') array([1. , 2.5, 4. ]) ``` The two arrays are of the same length, so there is only one position where they completely overlap: ``` >>> np.convolve([1,2,3],[0,1,0.5], 'valid') array([2.5]) ``` © 2005–2022 NumPy Developers Licensed under the 3-clause BSD License. <https://numpy.org/doc/1.23/reference/generated/numpy.convolve.htmlnumpy.nan_to_num ================== numpy.nan_to_num(*x*, *copy=True*, *nan=0.0*, *posinf=None*, *neginf=None*)[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/lib/type_check.py#L404-L521) Replace NaN with zero and infinity with large finite numbers (default behaviour) or with the numbers defined by the user using the [`nan`](../constants#numpy.nan "numpy.nan"), `posinf` and/or `neginf` keywords. If `x` is inexact, NaN is replaced by zero or by the user defined value in [`nan`](../constants#numpy.nan "numpy.nan") keyword, infinity is replaced by the largest finite floating point values representable by `x.dtype` or by the user defined value in `posinf` keyword and -infinity is replaced by the most negative finite floating point values representable by `x.dtype` or by the user defined value in `neginf` keyword. For complex dtypes, the above is applied to each of the real and imaginary components of `x` separately. If `x` is not inexact, then no replacements are made. Parameters **x**scalar or array_like Input data. **copy**bool, optional Whether to create a copy of `x` (True) or to replace values in-place (False). The in-place operation only occurs if casting to an array does not require a copy. Default is True. New in version 1.13. **nan**int, float, optional Value to be used to fill NaN values. If no value is passed then NaN values will be replaced with 0.0. New in version 1.17. **posinf**int, float, optional Value to be used to fill positive infinity values. 
If no value is passed then positive infinity values will be replaced with a very large number. New in version 1.17. **neginf**int, float, optional Value to be used to fill negative infinity values. If no value is passed then negative infinity values will be replaced with a very small (or negative) number. New in version 1.17. Returns **out**ndarray `x`, with the non-finite values replaced. If [`copy`](numpy.copy#numpy.copy "numpy.copy") is False, this may be `x` itself. See also [`isinf`](numpy.isinf#numpy.isinf "numpy.isinf") Shows which elements are positive or negative infinity. [`isneginf`](numpy.isneginf#numpy.isneginf "numpy.isneginf") Shows which elements are negative infinity. [`isposinf`](numpy.isposinf#numpy.isposinf "numpy.isposinf") Shows which elements are positive infinity. [`isnan`](numpy.isnan#numpy.isnan "numpy.isnan") Shows which elements are Not a Number (NaN). [`isfinite`](numpy.isfinite#numpy.isfinite "numpy.isfinite") Shows which elements are finite (not NaN, not infinity) #### Notes NumPy uses the IEEE Standard for Binary Floating-Point for Arithmetic (IEEE 754). This means that Not a Number is not equivalent to infinity. 
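The replacement rules described above can be sketched with explicit masks. `nan_to_num_sketch` below is a hypothetical helper written for illustration only (it is not part of NumPy, and it assumes a real floating-point input); it mirrors the keyword defaults of `np.nan_to_num`:

```python
import numpy as np

def nan_to_num_sketch(x, nan=0.0, posinf=None, neginf=None):
    # Hypothetical re-implementation of the default replacement rules,
    # for real floating-point input only.
    x = np.array(x, dtype=float)  # always copies, like copy=True
    info = np.finfo(x.dtype)
    nan_mask = np.isnan(x)
    pos_mask = np.isposinf(x)
    neg_mask = np.isneginf(x)
    x[nan_mask] = nan
    x[pos_mask] = info.max if posinf is None else posinf  # largest finite value
    x[neg_mask] = info.min if neginf is None else neginf  # most negative finite value
    return x

x = np.array([np.inf, -np.inf, np.nan, -128.0, 128.0])
print(np.array_equal(nan_to_num_sketch(x), np.nan_to_num(x)))  # True
```

The masks also make the integer-input behaviour plausible: for an integer array, `isnan`/`isposinf`/`isneginf` are all False, so no replacements are made.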
#### Examples ``` >>> np.nan_to_num(np.inf) 1.7976931348623157e+308 >>> np.nan_to_num(-np.inf) -1.7976931348623157e+308 >>> np.nan_to_num(np.nan) 0.0 >>> x = np.array([np.inf, -np.inf, np.nan, -128, 128]) >>> np.nan_to_num(x) array([ 1.79769313e+308, -1.79769313e+308, 0.00000000e+000, # may vary -1.28000000e+002, 1.28000000e+002]) >>> np.nan_to_num(x, nan=-9999, posinf=33333333, neginf=33333333) array([ 3.3333333e+07, 3.3333333e+07, -9.9990000e+03, -1.2800000e+02, 1.2800000e+02]) >>> y = np.array([complex(np.inf, np.nan), np.nan, complex(np.nan, np.inf)]) >>> np.nan_to_num(y) array([ 1.79769313e+308 +0.00000000e+000j, # may vary 0.00000000e+000 +0.00000000e+000j, 0.00000000e+000 +1.79769313e+308j]) >>> np.nan_to_num(y, nan=111111, posinf=222222) array([222222.+111111.j, 111111. +0.j, 111111.+222222.j]) ```

numpy.real_if_close ===================== numpy.real_if_close(*a*, *tol=100*)[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/lib/type_check.py#L529-L583) If input is complex with all imaginary parts close to zero, return real parts. “Close to zero” is defined as `tol` * (machine epsilon of the type for `a`). Parameters **a**array_like Input array. **tol**float Tolerance in machine epsilons for the complex part of the elements in the array. Returns **out**ndarray If `a` is real, the type of `a` is used for the output. If `a` has complex elements, the returned type is float. See also [`real`](numpy.real#numpy.real "numpy.real"), [`imag`](numpy.imag#numpy.imag "numpy.imag"), [`angle`](numpy.angle#numpy.angle "numpy.angle") #### Notes Machine epsilon varies from machine to machine and between data types but Python floats on most platforms have a machine epsilon equal to 2.2204460492503131e-16. 
You can use ‘np.finfo(float).eps’ to print out the machine epsilon for floats. #### Examples ``` >>> np.finfo(float).eps 2.2204460492503131e-16 # may vary ``` ``` >>> np.real_if_close([2.1 + 4e-14j, 5.2 + 3e-15j], tol=1000) array([2.1, 5.2]) >>> np.real_if_close([2.1 + 4e-13j, 5.2 + 3e-15j], tol=1000) array([2.1+4.e-13j, 5.2 + 3e-15j]) ``` © 2005–2022 NumPy Developers Licensed under the 3-clause BSD License. <https://numpy.org/doc/1.23/reference/generated/numpy.real_if_close.htmlnumpy.interp ============ numpy.interp(*x*, *xp*, *fp*, *left=None*, *right=None*, *period=None*)[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/lib/function_base.py#L1456-L1594) One-dimensional linear interpolation for monotonically increasing sample points. Returns the one-dimensional piecewise linear interpolant to a function with given discrete data points (`xp`, `fp`), evaluated at `x`. Parameters **x**array_like The x-coordinates at which to evaluate the interpolated values. **xp**1-D sequence of floats The x-coordinates of the data points, must be increasing if argument `period` is not specified. Otherwise, `xp` is internally sorted after normalizing the periodic boundaries with `xp = xp % period`. **fp**1-D sequence of float or complex The y-coordinates of the data points, same length as `xp`. **left**optional float or complex corresponding to fp Value to return for `x < xp[0]`, default is `fp[0]`. **right**optional float or complex corresponding to fp Value to return for `x > xp[-1]`, default is `fp[-1]`. **period**None or float, optional A period for the x-coordinates. This parameter allows the proper interpolation of angular x-coordinates. Parameters `left` and `right` are ignored if `period` is specified. New in version 1.10.0. Returns **y**float or complex (corresponding to fp) or ndarray The interpolated values, same shape as `x`. 
Raises ValueError If `xp` and `fp` have different length If `xp` or `fp` are not 1-D sequences If `period == 0` Warning The x-coordinate sequence is expected to be increasing, but this is not explicitly enforced. However, if the sequence `xp` is non-increasing, interpolation results are meaningless. Note that, since NaN is unsortable, `xp` also cannot contain NaNs. A simple check for `xp` being strictly increasing is: ``` np.all(np.diff(xp) > 0) ``` See also [`scipy.interpolate`](https://docs.scipy.org/doc/scipy/reference/interpolate.html#module-scipy.interpolate "(in SciPy v1.8.1)") #### Examples ``` >>> xp = [1, 2, 3] >>> fp = [3, 2, 0] >>> np.interp(2.5, xp, fp) 1.0 >>> np.interp([0, 1, 1.5, 2.72, 3.14], xp, fp) array([3. , 3. , 2.5 , 0.56, 0. ]) >>> UNDEF = -99.0 >>> np.interp(3.14, xp, fp, right=UNDEF) -99.0 ``` Plot an interpolant to the sine function: ``` >>> x = np.linspace(0, 2*np.pi, 10) >>> y = np.sin(x) >>> xvals = np.linspace(0, 2*np.pi, 50) >>> yinterp = np.interp(xvals, x, y) >>> import matplotlib.pyplot as plt >>> plt.plot(x, y, 'o') [<matplotlib.lines.Line2D object at 0x...>] >>> plt.plot(xvals, yinterp, '-x') [<matplotlib.lines.Line2D object at 0x...>] >>> plt.show() ``` ![../../_images/numpy-interp-1_00_00.png] Interpolation with periodic x-coordinates: ``` >>> x = [-180, -170, -185, 185, -10, -5, 0, 365] >>> xp = [190, -190, 350, -350] >>> fp = [5, 10, 3, 4] >>> np.interp(x, xp, fp, period=360) array([7.5 , 5. , 8.75, 6.25, 3. , 3.25, 3.5 , 3.75]) ``` Complex interpolation: ``` >>> x = [1.5, 4.0] >>> xp = [2,3,5] >>> fp = [1.0j, 0, 2+3j] >>> np.interp(x, xp, fp) array([0.+1.j , 1.+1.5j]) ``` © 2005–2022 NumPy Developers Licensed under the 3-clause BSD License. <https://numpy.org/doc/1.23/reference/generated/numpy.interp.htmlnumpy.setbufsize ================ numpy.setbufsize(*size*)[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/core/_ufunc_config.py#L181-L203) Set the size of the buffer used in ufuncs. 
Parameters **size**int Size of buffer. © 2005–2022 NumPy Developers Licensed under the 3-clause BSD License. <https://numpy.org/doc/1.23/reference/generated/numpy.setbufsize.htmlnumpy.getbufsize ================ numpy.getbufsize()[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/core/_ufunc_config.py#L206-L217) Return the size of the buffer used in ufuncs. Returns **getbufsize**int Size of ufunc buffer in bytes. © 2005–2022 NumPy Developers Licensed under the 3-clause BSD License. <https://numpy.org/doc/1.23/reference/generated/numpy.getbufsize.htmlnumpy.may_share_memory ======================== numpy.may_share_memory(*a*, *b*, */*, *max_work=None*) Determine if two arrays might share memory A return of True does not necessarily mean that the two arrays share any element. It just means that they *might*. Only the memory bounds of a and b are checked by default. Parameters **a, b**ndarray Input arrays **max_work**int, optional Effort to spend on solving the overlap problem. See [`shares_memory`](numpy.shares_memory#numpy.shares_memory "numpy.shares_memory") for details. Default for `may_share_memory` is to do a bounds check. Returns **out**bool See also [`shares_memory`](numpy.shares_memory#numpy.shares_memory "numpy.shares_memory") #### Examples ``` >>> np.may_share_memory(np.array([1,2]), np.array([5,8,9])) False >>> x = np.zeros([3, 4]) >>> np.may_share_memory(x[:,0], x[:,1]) True ``` © 2005–2022 NumPy Developers Licensed under the 3-clause BSD License. <https://numpy.org/doc/1.23/reference/generated/numpy.may_share_memory.htmlnumpy.byte_bounds ================== numpy.byte_bounds(*a*)[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/lib/utils.py#L227-L276) Returns pointers to the end-points of an array. Parameters **a**ndarray Input array. It must conform to the Python-side of the array interface. 
Returns **(low, high)**tuple of 2 integers The first integer is the first byte of the array, the second integer is just past the last byte of the array. If `a` is not contiguous it will not use every byte between the (`low`, `high`) values. #### Examples ``` >>> I = np.eye(2, dtype='f'); I.dtype dtype('float32') >>> low, high = np.byte_bounds(I) >>> high - low == I.size*I.itemsize True >>> I = np.eye(2); I.dtype dtype('float64') >>> low, high = np.byte_bounds(I) >>> high - low == I.size*I.itemsize True ``` © 2005–2022 NumPy Developers Licensed under the 3-clause BSD License. <https://numpy.org/doc/1.23/reference/generated/numpy.byte_bounds.htmlnumpy.lib.NumpyVersion ====================== *class*numpy.lib.NumpyVersion(*vstring*)[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/lib/_version.py#L14-L155) Parse and compare numpy version strings. NumPy has the following versioning scheme (numbers given are examples; they can be > 9 in principle): * Released version: ‘1.8.0’, ‘1.8.1’, etc. * Alpha: ‘1.8.0a1’, ‘1.8.0a2’, etc. * Beta: ‘1.8.0b1’, ‘1.8.0b2’, etc. * Release candidates: ‘1.8.0rc1’, ‘1.8.0rc2’, etc. * Development versions: ‘1.8.0.dev-f1234afa’ (git commit hash appended) * Development versions after a1: ‘1.8.0a1.dev-f1234afa’, ‘1.8.0b2.dev-f1234afa’, ‘1.8.1rc1.dev-f1234afa’, etc. * Development versions (no git hash available): ‘1.8.0.dev-Unknown’ Comparing needs to be done against a valid version string or other [`NumpyVersion`](#numpy.lib.NumpyVersion "numpy.lib.NumpyVersion") instance. Note that all development versions of the same (pre-)release compare equal. New in version 1.9.0. Parameters **vstring**str NumPy version string (`np.__version__`). #### Examples ``` >>> from numpy.lib import NumpyVersion >>> if NumpyVersion(np.__version__) < '1.7.0': ... print('skip') >>> # skip ``` ``` >>> NumpyVersion('1.7') # raises ValueError, add ".0" Traceback (most recent call last): ... 
ValueError: Not a valid numpy version string ``` © 2005–2022 NumPy Developers Licensed under the 3-clause BSD License. <https://numpy.org/doc/1.23/reference/generated/numpy.lib.NumpyVersion.htmlnumpy.get_include ================== numpy.get_include()[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/lib/utils.py#L19-L45) Return the directory that contains the NumPy *.h header files. Extension modules that need to compile against NumPy should use this function to locate the appropriate include directory. #### Notes When using `distutils`, for example in `setup.py`: ``` import numpy as np ... Extension('extension_name', ... include_dirs=[np.get_include()]) ... ``` © 2005–2022 NumPy Developers Licensed under the 3-clause BSD License. <https://numpy.org/doc/1.23/reference/generated/numpy.get_include.htmlnumpy.show_config ================== numpy.show_config()[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/__config__.py#L28-L119) Show libraries in the system on which NumPy was built. Print information about various resources (libraries, library directories, include directories, etc.) in the system on which NumPy was built. See also [`get_include`](numpy.get_include#numpy.get_include "numpy.get_include") Returns the directory containing NumPy C header files. #### Notes 1. Classes specifying the information to be printed are defined in the `numpy.distutils.system_info` module. Information may include: * `language`: language used to write the libraries (mostly C or f77) * `libraries`: names of libraries found in the system * `library_dirs`: directories containing the libraries * `include_dirs`: directories containing library header files * `src_dirs`: directories containing library source files * `define_macros`: preprocessor macros used by `distutils.setup` * `baseline`: minimum CPU features required * `found`: dispatched features supported in the system * `not found`: dispatched features that are not supported in the system 2. 
NumPy BLAS/LAPACK Installation Notes Installing a numpy wheel (`pip install numpy` or force it via `pip install numpy --only-binary :numpy: numpy`) includes an OpenBLAS implementation of the BLAS and LAPACK linear algebra APIs. In this case, `library_dirs` reports the original build time configuration as compiled with gcc/gfortran; at run time the OpenBLAS library is in `site-packages/numpy.libs/` (linux), or `site-packages/numpy/.dylibs/` (macOS), or `site-packages/numpy/.libs/` (windows). Installing numpy from source (`pip install numpy --no-binary numpy`) searches for BLAS and LAPACK dynamic link libraries at build time as influenced by environment variables NPY_BLAS_LIBS, NPY_CBLAS_LIBS, and NPY_LAPACK_LIBS; or NPY_BLAS_ORDER and NPY_LAPACK_ORDER; or the optional file `~/.numpy-site.cfg`. NumPy remembers those locations and expects to load the same libraries at run-time. In NumPy 1.21+ on macOS, ‘accelerate’ (Apple’s Accelerate BLAS library) is in the default build-time search order after ‘openblas’. #### Examples ``` >>> import numpy as np >>> np.show_config() blas_opt_info: language = c define_macros = [('HAVE_CBLAS', None)] libraries = ['openblas', 'openblas'] library_dirs = ['/usr/local/lib'] ``` © 2005–2022 NumPy Developers Licensed under the 3-clause BSD License. <https://numpy.org/doc/1.23/reference/generated/numpy.show_config.htmlnumpy.deprecate =============== numpy.deprecate(**args*, ***kwargs*)[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/lib/utils.py#L143-L193) Issues a DeprecationWarning, adds warning to `old_name`’s docstring, rebinds `old_name.__name__` and returns the new function object. This function may also be used as a decorator. Parameters **func**function The function to be deprecated. **old_name**str, optional The name of the function to be deprecated. Default is None, in which case the name of `func` is used. **new_name**str, optional The new name for the function. 
Default is None, in which case the deprecation message is that `old_name` is deprecated. If given, the deprecation message is that `old_name` is deprecated and `new_name` should be used instead. **message**str, optional Additional explanation of the deprecation. Displayed in the docstring after the warning. Returns **old_func**function The deprecated function. #### Examples Note that `olduint` returns a value after printing Deprecation Warning: ``` >>> olduint = np.deprecate(np.uint) DeprecationWarning: `uint64` is deprecated! # may vary >>> olduint(6) 6 ``` © 2005–2022 NumPy Developers Licensed under the 3-clause BSD License. <https://numpy.org/doc/1.23/reference/generated/numpy.deprecate.htmlnumpy.deprecate_with_doc ========================== numpy.deprecate_with_doc(*msg*)[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/lib/utils.py#L196-L220) Deprecates a function and includes the deprecation in its docstring. This function is used as a decorator. It returns an object that can be used to issue a DeprecationWarning, by passing the to-be decorated function as argument, this adds warning to the to-be decorated function’s docstring and returns the new function object. Parameters **msg**str Additional explanation of the deprecation. Displayed in the docstring after the warning. Returns **obj**object See also [`deprecate`](numpy.deprecate#numpy.deprecate "numpy.deprecate") Decorate a function such that it issues a `DeprecationWarning` © 2005–2022 NumPy Developers Licensed under the 3-clause BSD License. <https://numpy.org/doc/1.23/reference/generated/numpy.deprecate_with_doc.htmlnumpy.broadcast_shapes ======================= numpy.broadcast_shapes(**args*)[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/lib/stride_tricks.py#L433-L473) Broadcast the input shapes into a single shape. [Learn more about broadcasting here](../../user/basics.broadcasting#basics-broadcasting). New in version 1.20.0. 
Parameters **`*args`**tuples of ints, or ints The shapes to be broadcast against each other. Returns tuple Broadcasted shape. Raises ValueError If the shapes are not compatible and cannot be broadcast according to NumPy’s broadcasting rules. See also [`broadcast`](numpy.broadcast#numpy.broadcast "numpy.broadcast") [`broadcast_arrays`](numpy.broadcast_arrays#numpy.broadcast_arrays "numpy.broadcast_arrays") [`broadcast_to`](numpy.broadcast_to#numpy.broadcast_to "numpy.broadcast_to") #### Examples ``` >>> np.broadcast_shapes((1, 2), (3, 1), (3, 2)) (3, 2) ``` ``` >>> np.broadcast_shapes((6, 7), (5, 6, 1), (7,), (5, 1, 7)) (5, 6, 7) ``` © 2005–2022 NumPy Developers Licensed under the 3-clause BSD License. <https://numpy.org/doc/1.23/reference/generated/numpy.broadcast_shapes.htmlnumpy.who ========= numpy.who(*vardict=None*)[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/lib/utils.py#L284-L376) Print the NumPy arrays in the given dictionary. If there is no dictionary passed in or `vardict` is None then returns NumPy arrays in the globals() dictionary (all NumPy arrays in the namespace). Parameters **vardict**dict, optional A dictionary possibly containing ndarrays. Default is globals(). Returns **out**None Returns ‘None’. #### Notes Prints out the name, shape, bytes and type of all of the ndarrays present in `vardict`. #### Examples ``` >>> a = np.arange(10) >>> b = np.ones(20) >>> np.who() Name Shape Bytes Type =========================================================== a 10 80 int64 b 20 160 float64 Upper bound on total bytes = 240 ``` ``` >>> d = {'x': np.arange(2.0), 'y': np.arange(3.0), 'txt': 'Some str', ... 'idx':5} >>> np.who(d) Name Shape Bytes Type =========================================================== x 2 16 float64 y 3 24 float64 Upper bound on total bytes = 40 ``` © 2005–2022 NumPy Developers Licensed under the 3-clause BSD License. 
<https://numpy.org/doc/1.23/reference/generated/numpy.who.htmlnumpy.disp ========== numpy.disp(*mesg*, *device=None*, *linefeed=True*)[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/lib/function_base.py#L1956-L1995) Display a message on a device. Parameters **mesg**str Message to display. **device**object Device to write message. If None, defaults to `sys.stdout` which is very similar to `print`. `device` needs to have `write()` and `flush()` methods. **linefeed**bool, optional Option whether to print a line feed or not. Defaults to True. Raises AttributeError If `device` does not have a `write()` or `flush()` method. #### Examples Besides `sys.stdout`, a file-like object can also be used as it has both required methods: ``` >>> from io import StringIO >>> buf = StringIO() >>> np.disp(u'"Display" in a file', device=buf) >>> buf.getvalue() '"Display" in a file\n' ``` © 2005–2022 NumPy Developers Licensed under the 3-clause BSD License. <https://numpy.org/doc/1.23/reference/generated/numpy.disp.htmlnumpy.AxisError =============== *exception*numpy.AxisError(*axis*, *ndim=None*, *msg_prefix=None*)[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/__init__.py) Axis supplied was invalid. This is raised whenever an `axis` parameter is specified that is larger than the number of array dimensions. For compatibility with code written against older numpy versions, which raised a mixture of `ValueError` and `IndexError` for this situation, this exception subclasses both to ensure that `except ValueError` and `except IndexError` statements continue to catch [`AxisError`](#numpy.AxisError "numpy.AxisError"). New in version 1.13. Parameters **axis**int or str The out of bounds axis or a custom exception message. If an axis is provided, then `ndim` should be specified as well. **ndim**int, optional The number of array dimensions. **msg_prefix**str, optional A prefix for the exception message. 
#### Examples ``` >>> array_1d = np.arange(10) >>> np.cumsum(array_1d, axis=1) Traceback (most recent call last): ... numpy.AxisError: axis 1 is out of bounds for array of dimension 1 ``` Negative axes are preserved: ``` >>> np.cumsum(array_1d, axis=-2) Traceback (most recent call last): ... numpy.AxisError: axis -2 is out of bounds for array of dimension 1 ``` The class constructor generally takes the axis and arrays’ dimensionality as arguments: ``` >>> print(np.AxisError(2, 1, msg_prefix='error')) error: axis 2 is out of bounds for array of dimension 1 ``` Alternatively, a custom exception message can be passed: ``` >>> print(np.AxisError('Custom error message')) Custom error message ``` Attributes **axis**int, optional The out of bounds axis or `None` if a custom exception message was provided. This should be the axis as passed by the user, before any normalization to resolve negative indices. New in version 1.22. **ndim**int, optional The number of array dimensions or `None` if a custom exception message was provided. New in version 1.22. © 2005–2022 NumPy Developers Licensed under the 3-clause BSD License. <https://numpy.org/doc/1.23/reference/generated/numpy.AxisError.htmlUsing the Convenience Classes ============================= The convenience classes provided by the polynomial package are: | Name | Provides | | --- | --- | | Polynomial | Power series | | Chebyshev | Chebyshev series | | Legendre | Legendre series | | Laguerre | Laguerre series | | Hermite | Hermite series | | HermiteE | HermiteE series | The series in this context are finite sums of the corresponding polynomial basis functions multiplied by coefficients. For instance, a power series looks like \[p(x) = 1 + 2x + 3x^2\] and has coefficients \([1, 2, 3]\). 
The Chebyshev series with the same coefficients looks like \[p(x) = 1 T_0(x) + 2 T_1(x) + 3 T_2(x)\] and more generally \[p(x) = \sum_{i=0}^n c_i T_i(x)\] where in this case the \(T_n\) are the Chebyshev functions of degree \(n\), but could just as easily be the basis functions of any of the other classes. The convention for all the classes is that the coefficient \(c[i]\) goes with the basis function of degree i. All of the classes are immutable and have the same methods, and especially they implement the Python numeric operators +, -, *, //, %, divmod, **, ==, and !=. The last two can be a bit problematic due to floating point roundoff errors. We now give a quick demonstration of the various operations using NumPy version 1.7.0. Basics ------ First we need a polynomial class and a polynomial instance to play with. The classes can be imported directly from the polynomial package or from the module of the relevant type. Here we import from the package and use the conventional Polynomial class because of its familiarity: ``` >>> from numpy.polynomial import Polynomial as P >>> p = P([1,2,3]) >>> p Polynomial([1., 2., 3.], domain=[-1, 1], window=[-1, 1]) ``` Note that there are three parts to the long version of the printout. The first is the coefficients, the second is the domain, and the third is the window: ``` >>> p.coef array([1., 2., 3.]) >>> p.domain array([-1, 1]) >>> p.window array([-1, 1]) ``` Printing a polynomial yields the polynomial expression in a more familiar format: ``` >>> print(p) 1.0 + 2.0·x¹ + 3.0·x² ``` Note that the string representation of polynomials uses Unicode characters by default (except on Windows) to express powers and subscripts. An ASCII-based representation is also available (default on Windows). 
The polynomial string format can be toggled at the package-level with the [`set_default_printstyle`](generated/numpy.polynomial.set_default_printstyle#numpy.polynomial.set_default_printstyle "numpy.polynomial.set_default_printstyle") function: ``` >>> np.polynomial.set_default_printstyle('ascii') >>> print(p) 1.0 + 2.0 x**1 + 3.0 x**2 ``` or controlled for individual polynomial instances with string formatting: ``` >>> print(f"{p:unicode}") 1.0 + 2.0·x¹ + 3.0·x² ``` We will deal with the domain and window when we get to fitting, for the moment we ignore them and run through the basic algebraic and arithmetic operations. Addition and Subtraction: ``` >>> p + p Polynomial([2., 4., 6.], domain=[-1., 1.], window=[-1., 1.]) >>> p - p Polynomial([0.], domain=[-1., 1.], window=[-1., 1.]) ``` Multiplication: ``` >>> p * p Polynomial([ 1., 4., 10., 12., 9.], domain=[-1., 1.], window=[-1., 1.]) ``` Powers: ``` >>> p**2 Polynomial([ 1., 4., 10., 12., 9.], domain=[-1., 1.], window=[-1., 1.]) ``` Division: Floor division, ‘//’, is the division operator for the polynomial classes, polynomials are treated like integers in this regard. For Python versions < 3.x the ‘/’ operator maps to ‘//’, as it does for Python, for later versions the ‘/’ will only work for division by scalars. At some point it will be deprecated: ``` >>> p // P([-1, 1]) Polynomial([5., 3.], domain=[-1., 1.], window=[-1., 1.]) ``` Remainder: ``` >>> p % P([-1, 1]) Polynomial([6.], domain=[-1., 1.], window=[-1., 1.]) ``` Divmod: ``` >>> quo, rem = divmod(p, P([-1, 1])) >>> quo Polynomial([5., 3.], domain=[-1., 1.], window=[-1., 1.]) >>> rem Polynomial([6.], domain=[-1., 1.], window=[-1., 1.]) ``` Evaluation: ``` >>> x = np.arange(5) >>> p(x) array([ 1., 6., 17., 34., 57.]) >>> x = np.arange(6).reshape(3,2) >>> p(x) array([[ 1., 6.], [17., 34.], [57., 86.]]) ``` Substitution: Substitute a polynomial for x and expand the result. Here we substitute p in itself leading to a new polynomial of degree 4 after expansion. 
If the polynomials are regarded as functions this is composition of functions: ``` >>> p(p) Polynomial([ 6., 16., 36., 36., 27.], domain=[-1., 1.], window=[-1., 1.]) ``` Roots: ``` >>> p.roots() array([-0.33333333-0.47140452j, -0.33333333+0.47140452j]) ``` It isn’t always convenient to explicitly use Polynomial instances, so tuples, lists, arrays, and scalars are automatically cast in the arithmetic operations: ``` >>> p + [1, 2, 3] Polynomial([2., 4., 6.], domain=[-1., 1.], window=[-1., 1.]) >>> [1, 2, 3] * p Polynomial([ 1., 4., 10., 12., 9.], domain=[-1., 1.], window=[-1., 1.]) >>> p / 2 Polynomial([0.5, 1. , 1.5], domain=[-1., 1.], window=[-1., 1.]) ``` Polynomials that differ in domain, window, or class can’t be mixed in arithmetic: ``` >>> from numpy.polynomial import Chebyshev as T >>> p + P([1], domain=[0,1]) Traceback (most recent call last): File "<stdin>", line 1, in <module> File "<string>", line 213, in __add__ TypeError: Domains differ >>> p + P([1], window=[0,1]) Traceback (most recent call last): File "<stdin>", line 1, in <module> File "<string>", line 215, in __add__ TypeError: Windows differ >>> p + T([1]) Traceback (most recent call last): File "<stdin>", line 1, in <module> File "<string>", line 211, in __add__ TypeError: Polynomial types differ ``` But different types can be used for substitution. In fact, this is how conversion of Polynomial classes among themselves is done for type, domain, and window casting: ``` >>> p(T([0, 1])) Chebyshev([2.5, 2. , 1.5], domain=[-1., 1.], window=[-1., 1.]) ``` Which gives the polynomial `p` in Chebyshev form. This works because \(T_1(x) = x\) and substituting \(x\) for \(x\) doesn’t change the original polynomial. However, all the multiplications and divisions will be done using Chebyshev series, hence the type of the result. It is intended that all polynomial instances are immutable, therefore augmented operations (`+=`, `-=`, etc.) 
and any other functionality that would violate the immutability of a polynomial instance are intentionally unimplemented. Calculus -------- Polynomial instances can be integrated and differentiated: ``` >>> from numpy.polynomial import Polynomial as P >>> p = P([2, 6]) >>> p.integ() Polynomial([0., 2., 3.], domain=[-1., 1.], window=[-1., 1.]) >>> p.integ(2) Polynomial([0., 0., 1., 1.], domain=[-1., 1.], window=[-1., 1.]) ``` The first example integrates `p` once, the second example integrates it twice. By default, the lower bound of the integration and the integration constant are 0, but both can be specified: ``` >>> p.integ(lbnd=-1) Polynomial([-1., 2., 3.], domain=[-1., 1.], window=[-1., 1.]) >>> p.integ(lbnd=-1, k=1) Polynomial([0., 2., 3.], domain=[-1., 1.], window=[-1., 1.]) ``` In the first case the lower bound of the integration is set to -1 and the integration constant is 0. In the second the constant of integration is set to 1 as well. Differentiation is simpler since the only option is the number of times the polynomial is differentiated: ``` >>> p = P([1, 2, 3]) >>> p.deriv(1) Polynomial([2., 6.], domain=[-1., 1.], window=[-1., 1.]) >>> p.deriv(2) Polynomial([6.], domain=[-1., 1.], window=[-1., 1.]) ``` Other Polynomial Constructors ----------------------------- Constructing polynomials by specifying coefficients is just one way of obtaining a polynomial instance; they may also be created by specifying their roots, by conversion from other polynomial types, and by least squares fits. Fitting is discussed in its own section; the other methods are demonstrated below: ``` >>> from numpy.polynomial import Polynomial as P >>> from numpy.polynomial import Chebyshev as T >>> p = P.fromroots([1, 2, 3]) >>> p Polynomial([-6., 11., -6., 1.], domain=[-1., 1.], window=[-1., 1.]) >>> p.convert(kind=T) Chebyshev([-9. , 11.75, -3. 
, 0.25], domain=[-1., 1.], window=[-1., 1.]) ``` The convert method can also convert domain and window: ``` >>> p.convert(kind=T, domain=[0, 1]) Chebyshev([-2.4375 , 2.96875, -0.5625 , 0.03125], domain=[0., 1.], window=[-1., 1.]) >>> p.convert(kind=P, domain=[0, 1]) Polynomial([-1.875, 2.875, -1.125, 0.125], domain=[0., 1.], window=[-1., 1.]) ``` In numpy versions >= 1.7.0 the `basis` and `cast` class methods are also available. The cast method works like the convert method while the basis method returns the basis polynomial of given degree: ``` >>> P.basis(3) Polynomial([0., 0., 0., 1.], domain=[-1., 1.], window=[-1., 1.]) >>> T.cast(p) Chebyshev([-9. , 11.75, -3. , 0.25], domain=[-1., 1.], window=[-1., 1.]) ``` Conversions between types can be useful, but they are *not* recommended for routine use. The loss of numerical precision in passing from a Chebyshev series of degree 50 to a Polynomial series of the same degree can make the results of numerical evaluation essentially random. Fitting ------- Fitting is the reason that the `domain` and `window` attributes are part of the convenience classes. To illustrate the problem, the values of the Chebyshev polynomials up to degree 5 are plotted below. ``` >>> import matplotlib.pyplot as plt >>> from numpy.polynomial import Chebyshev as T >>> x = np.linspace(-1, 1, 100) >>> for i in range(6): ... ax = plt.plot(x, T.basis(i)(x), lw=2, label=f"$T_{i}$") ... >>> plt.legend(loc="upper left") >>> plt.show() ``` ![](../_images/routines-polynomials-classes-1.png) In the range -1 <= `x` <= 1 they are nice, equiripple functions lying between +/- 1. The same plots over the range -2 <= `x` <= 2 look very different: ``` >>> import matplotlib.pyplot as plt >>> from numpy.polynomial import Chebyshev as T >>> x = np.linspace(-2, 2, 100) >>> for i in range(6): ... ax = plt.plot(x, T.basis(i)(x), lw=2, label=f"$T_{i}$") ... 
>>> plt.legend(loc="lower right") >>> plt.show() ``` ![](../_images/routines-polynomials-classes-2.png) As can be seen, the “good” parts have shrunk to insignificance. In using Chebyshev polynomials for fitting we want to use the region where `x` is between -1 and 1 and that is what the `window` specifies. However, it is unlikely that the data to be fit has all its data points in that interval, so we use `domain` to specify the interval where the data points lie. When the fit is done, the domain is first mapped to the window by a linear transformation and the usual least squares fit is done using the mapped data points. The window and domain of the fit are part of the returned series and are automatically used when computing values, derivatives, and such. If they aren’t specified in the call the fitting routine will use the default window and the smallest domain that holds all the data points. This is illustrated below for a fit to a noisy sine curve. ``` >>> import numpy as np >>> import matplotlib.pyplot as plt >>> from numpy.polynomial import Chebyshev as T >>> np.random.seed(11) >>> x = np.linspace(0, 2*np.pi, 20) >>> y = np.sin(x) + np.random.normal(scale=.1, size=x.shape) >>> p = T.fit(x, y, 5) >>> plt.plot(x, y, 'o') >>> xx, yy = p.linspace() >>> plt.plot(xx, yy, lw=2) >>> p.domain array([0. , 6.28318531]) >>> p.window array([-1., 1.]) >>> plt.show() ``` ![](../_images/routines-polynomials-classes-3.png) © 2005–2022 NumPy Developers Licensed under the 3-clause BSD License. <https://numpy.org/doc/1.23/reference/routines.polynomials.classes.html> Power Series (numpy.polynomial.polynomial) ========================================== This module provides a number of objects (mostly functions) useful for dealing with polynomials, including a [`Polynomial`](generated/numpy.polynomial.polynomial.polynomial#numpy.polynomial.polynomial.Polynomial "numpy.polynomial.polynomial.Polynomial") class that encapsulates the usual arithmetic operations. 
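As a minimal sketch of the functional interface summarized in the tables below, here is how the coefficient-array functions `polyadd`, `polymul`, and `polyval` compose (the `Poly` alias is our own shorthand, not part of the API):

```python
import numpy as np
from numpy.polynomial import polynomial as Poly  # "Poly" alias is our own shorthand

# Coefficient arrays run from degree 0 upward: [1, 2, 3] is 1 + 2x + 3x^2
c1 = [1, 2, 3]
c2 = [0, 1]  # the polynomial x

print(Poly.polyadd(c1, c2))   # [1. 3. 3.]     i.e. 1 + 3x + 3x^2
print(Poly.polymul(c1, c2))   # [0. 1. 2. 3.]  i.e. x + 2x^2 + 3x^3
print(Poly.polyval(2.0, c1))  # 17.0           i.e. 1 + 2*2 + 3*4
```

Unlike the `Polynomial` convenience class, these functions carry no domain or window; they operate directly on plain coefficient arrays.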
(General information on how this module represents and works with polynomial objects is in the docstring for its “parent” sub-package, [`numpy.polynomial`](routines.polynomials.package#module-numpy.polynomial "numpy.polynomial")). Classes ------- | | | | --- | --- | | [`Polynomial`](generated/numpy.polynomial.polynomial.polynomial#numpy.polynomial.polynomial.Polynomial "numpy.polynomial.polynomial.Polynomial")(coef[, domain, window]) | A power series class. | Constants --------- | | | | --- | --- | | [`polydomain`](generated/numpy.polynomial.polynomial.polydomain#numpy.polynomial.polynomial.polydomain "numpy.polynomial.polynomial.polydomain") | An array object represents a multidimensional, homogeneous array of fixed-size items. | | [`polyzero`](generated/numpy.polynomial.polynomial.polyzero#numpy.polynomial.polynomial.polyzero "numpy.polynomial.polynomial.polyzero") | An array object represents a multidimensional, homogeneous array of fixed-size items. | | [`polyone`](generated/numpy.polynomial.polynomial.polyone#numpy.polynomial.polynomial.polyone "numpy.polynomial.polynomial.polyone") | An array object represents a multidimensional, homogeneous array of fixed-size items. | | [`polyx`](generated/numpy.polynomial.polynomial.polyx#numpy.polynomial.polynomial.polyx "numpy.polynomial.polynomial.polyx") | An array object represents a multidimensional, homogeneous array of fixed-size items. | Arithmetic ---------- | | | | --- | --- | | [`polyadd`](generated/numpy.polynomial.polynomial.polyadd#numpy.polynomial.polynomial.polyadd "numpy.polynomial.polynomial.polyadd")(c1, c2) | Add one polynomial to another. | | [`polysub`](generated/numpy.polynomial.polynomial.polysub#numpy.polynomial.polynomial.polysub "numpy.polynomial.polynomial.polysub")(c1, c2) | Subtract one polynomial from another. | | [`polymulx`](generated/numpy.polynomial.polynomial.polymulx#numpy.polynomial.polynomial.polymulx "numpy.polynomial.polynomial.polymulx")(c) | Multiply a polynomial by x. 
| | [`polymul`](generated/numpy.polynomial.polynomial.polymul#numpy.polynomial.polynomial.polymul "numpy.polynomial.polynomial.polymul")(c1, c2) | Multiply one polynomial by another. | | [`polydiv`](generated/numpy.polynomial.polynomial.polydiv#numpy.polynomial.polynomial.polydiv "numpy.polynomial.polynomial.polydiv")(c1, c2) | Divide one polynomial by another. | | [`polypow`](generated/numpy.polynomial.polynomial.polypow#numpy.polynomial.polynomial.polypow "numpy.polynomial.polynomial.polypow")(c, pow[, maxpower]) | Raise a polynomial to a power. | | [`polyval`](generated/numpy.polynomial.polynomial.polyval#numpy.polynomial.polynomial.polyval "numpy.polynomial.polynomial.polyval")(x, c[, tensor]) | Evaluate a polynomial at points x. | | [`polyval2d`](generated/numpy.polynomial.polynomial.polyval2d#numpy.polynomial.polynomial.polyval2d "numpy.polynomial.polynomial.polyval2d")(x, y, c) | Evaluate a 2-D polynomial at points (x, y). | | [`polyval3d`](generated/numpy.polynomial.polynomial.polyval3d#numpy.polynomial.polynomial.polyval3d "numpy.polynomial.polynomial.polyval3d")(x, y, z, c) | Evaluate a 3-D polynomial at points (x, y, z). | | [`polygrid2d`](generated/numpy.polynomial.polynomial.polygrid2d#numpy.polynomial.polynomial.polygrid2d "numpy.polynomial.polynomial.polygrid2d")(x, y, c) | Evaluate a 2-D polynomial on the Cartesian product of x and y. | | [`polygrid3d`](generated/numpy.polynomial.polynomial.polygrid3d#numpy.polynomial.polynomial.polygrid3d "numpy.polynomial.polynomial.polygrid3d")(x, y, z, c) | Evaluate a 3-D polynomial on the Cartesian product of x, y and z. | Calculus -------- | | | | --- | --- | | [`polyder`](generated/numpy.polynomial.polynomial.polyder#numpy.polynomial.polynomial.polyder "numpy.polynomial.polynomial.polyder")(c[, m, scl, axis]) | Differentiate a polynomial. 
| | [`polyint`](generated/numpy.polynomial.polynomial.polyint#numpy.polynomial.polynomial.polyint "numpy.polynomial.polynomial.polyint")(c[, m, k, lbnd, scl, axis]) | Integrate a polynomial. | Misc Functions -------------- | | | | --- | --- | | [`polyfromroots`](generated/numpy.polynomial.polynomial.polyfromroots#numpy.polynomial.polynomial.polyfromroots "numpy.polynomial.polynomial.polyfromroots")(roots) | Generate a monic polynomial with given roots. | | [`polyroots`](generated/numpy.polynomial.polynomial.polyroots#numpy.polynomial.polynomial.polyroots "numpy.polynomial.polynomial.polyroots")(c) | Compute the roots of a polynomial. | | [`polyvalfromroots`](generated/numpy.polynomial.polynomial.polyvalfromroots#numpy.polynomial.polynomial.polyvalfromroots "numpy.polynomial.polynomial.polyvalfromroots")(x, r[, tensor]) | Evaluate a polynomial specified by its roots at points x. | | [`polyvander`](generated/numpy.polynomial.polynomial.polyvander#numpy.polynomial.polynomial.polyvander "numpy.polynomial.polynomial.polyvander")(x, deg) | Vandermonde matrix of given degree. | | [`polyvander2d`](generated/numpy.polynomial.polynomial.polyvander2d#numpy.polynomial.polynomial.polyvander2d "numpy.polynomial.polynomial.polyvander2d")(x, y, deg) | Pseudo-Vandermonde matrix of given degrees. | | [`polyvander3d`](generated/numpy.polynomial.polynomial.polyvander3d#numpy.polynomial.polynomial.polyvander3d "numpy.polynomial.polynomial.polyvander3d")(x, y, z, deg) | Pseudo-Vandermonde matrix of given degrees. | | [`polycompanion`](generated/numpy.polynomial.polynomial.polycompanion#numpy.polynomial.polynomial.polycompanion "numpy.polynomial.polynomial.polycompanion")(c) | Return the companion matrix of c. | | [`polyfit`](generated/numpy.polynomial.polynomial.polyfit#numpy.polynomial.polynomial.polyfit "numpy.polynomial.polynomial.polyfit")(x, y, deg[, rcond, full, w]) | Least-squares fit of a polynomial to data. 
| | [`polytrim`](generated/numpy.polynomial.polynomial.polytrim#numpy.polynomial.polynomial.polytrim "numpy.polynomial.polynomial.polytrim")(c[, tol]) | Remove "small" "trailing" coefficients from a polynomial. | | [`polyline`](generated/numpy.polynomial.polynomial.polyline#numpy.polynomial.polynomial.polyline "numpy.polynomial.polynomial.polyline")(off, scl) | Returns an array representing a linear polynomial. | See Also -------- [`numpy.polynomial`](routines.polynomials.package#module-numpy.polynomial "numpy.polynomial") © 2005–2022 NumPy Developers Licensed under the 3-clause BSD License. <https://numpy.org/doc/1.23/reference/routines.polynomials.polynomial.htmlChebyshev Series (numpy.polynomial.chebyshev) ============================================= This module provides a number of objects (mostly functions) useful for dealing with Chebyshev series, including a [`Chebyshev`](generated/numpy.polynomial.chebyshev.chebyshev#numpy.polynomial.chebyshev.Chebyshev "numpy.polynomial.chebyshev.Chebyshev") class that encapsulates the usual arithmetic operations. (General information on how this module represents and works with such polynomials is in the docstring for its “parent” sub-package, [`numpy.polynomial`](routines.polynomials.package#module-numpy.polynomial "numpy.polynomial")). Classes ------- | | | | --- | --- | | [`Chebyshev`](generated/numpy.polynomial.chebyshev.chebyshev#numpy.polynomial.chebyshev.Chebyshev "numpy.polynomial.chebyshev.Chebyshev")(coef[, domain, window]) | A Chebyshev series class. | Constants --------- | | | | --- | --- | | [`chebdomain`](generated/numpy.polynomial.chebyshev.chebdomain#numpy.polynomial.chebyshev.chebdomain "numpy.polynomial.chebyshev.chebdomain") | An array object represents a multidimensional, homogeneous array of fixed-size items. 
| | [`chebzero`](generated/numpy.polynomial.chebyshev.chebzero#numpy.polynomial.chebyshev.chebzero "numpy.polynomial.chebyshev.chebzero") | An array object represents a multidimensional, homogeneous array of fixed-size items. | | [`chebone`](generated/numpy.polynomial.chebyshev.chebone#numpy.polynomial.chebyshev.chebone "numpy.polynomial.chebyshev.chebone") | An array object represents a multidimensional, homogeneous array of fixed-size items. | | [`chebx`](generated/numpy.polynomial.chebyshev.chebx#numpy.polynomial.chebyshev.chebx "numpy.polynomial.chebyshev.chebx") | An array object represents a multidimensional, homogeneous array of fixed-size items. | Arithmetic ---------- | | | | --- | --- | | [`chebadd`](generated/numpy.polynomial.chebyshev.chebadd#numpy.polynomial.chebyshev.chebadd "numpy.polynomial.chebyshev.chebadd")(c1, c2) | Add one Chebyshev series to another. | | [`chebsub`](generated/numpy.polynomial.chebyshev.chebsub#numpy.polynomial.chebyshev.chebsub "numpy.polynomial.chebyshev.chebsub")(c1, c2) | Subtract one Chebyshev series from another. | | [`chebmulx`](generated/numpy.polynomial.chebyshev.chebmulx#numpy.polynomial.chebyshev.chebmulx "numpy.polynomial.chebyshev.chebmulx")(c) | Multiply a Chebyshev series by x. | | [`chebmul`](generated/numpy.polynomial.chebyshev.chebmul#numpy.polynomial.chebyshev.chebmul "numpy.polynomial.chebyshev.chebmul")(c1, c2) | Multiply one Chebyshev series by another. | | [`chebdiv`](generated/numpy.polynomial.chebyshev.chebdiv#numpy.polynomial.chebyshev.chebdiv "numpy.polynomial.chebyshev.chebdiv")(c1, c2) | Divide one Chebyshev series by another. | | [`chebpow`](generated/numpy.polynomial.chebyshev.chebpow#numpy.polynomial.chebyshev.chebpow "numpy.polynomial.chebyshev.chebpow")(c, pow[, maxpower]) | Raise a Chebyshev series to a power. 
| | [`chebval`](generated/numpy.polynomial.chebyshev.chebval#numpy.polynomial.chebyshev.chebval "numpy.polynomial.chebyshev.chebval")(x, c[, tensor]) | Evaluate a Chebyshev series at points x. | | [`chebval2d`](generated/numpy.polynomial.chebyshev.chebval2d#numpy.polynomial.chebyshev.chebval2d "numpy.polynomial.chebyshev.chebval2d")(x, y, c) | Evaluate a 2-D Chebyshev series at points (x, y). | | [`chebval3d`](generated/numpy.polynomial.chebyshev.chebval3d#numpy.polynomial.chebyshev.chebval3d "numpy.polynomial.chebyshev.chebval3d")(x, y, z, c) | Evaluate a 3-D Chebyshev series at points (x, y, z). | | [`chebgrid2d`](generated/numpy.polynomial.chebyshev.chebgrid2d#numpy.polynomial.chebyshev.chebgrid2d "numpy.polynomial.chebyshev.chebgrid2d")(x, y, c) | Evaluate a 2-D Chebyshev series on the Cartesian product of x and y. | | [`chebgrid3d`](generated/numpy.polynomial.chebyshev.chebgrid3d#numpy.polynomial.chebyshev.chebgrid3d "numpy.polynomial.chebyshev.chebgrid3d")(x, y, z, c) | Evaluate a 3-D Chebyshev series on the Cartesian product of x, y, and z. | Calculus -------- | | | | --- | --- | | [`chebder`](generated/numpy.polynomial.chebyshev.chebder#numpy.polynomial.chebyshev.chebder "numpy.polynomial.chebyshev.chebder")(c[, m, scl, axis]) | Differentiate a Chebyshev series. | | [`chebint`](generated/numpy.polynomial.chebyshev.chebint#numpy.polynomial.chebyshev.chebint "numpy.polynomial.chebyshev.chebint")(c[, m, k, lbnd, scl, axis]) | Integrate a Chebyshev series. | Misc Functions -------------- | | | | --- | --- | | [`chebfromroots`](generated/numpy.polynomial.chebyshev.chebfromroots#numpy.polynomial.chebyshev.chebfromroots "numpy.polynomial.chebyshev.chebfromroots")(roots) | Generate a Chebyshev series with given roots. | | [`chebroots`](generated/numpy.polynomial.chebyshev.chebroots#numpy.polynomial.chebyshev.chebroots "numpy.polynomial.chebyshev.chebroots")(c) | Compute the roots of a Chebyshev series. 
| | [`chebvander`](generated/numpy.polynomial.chebyshev.chebvander#numpy.polynomial.chebyshev.chebvander "numpy.polynomial.chebyshev.chebvander")(x, deg) | Pseudo-Vandermonde matrix of given degree. | | [`chebvander2d`](generated/numpy.polynomial.chebyshev.chebvander2d#numpy.polynomial.chebyshev.chebvander2d "numpy.polynomial.chebyshev.chebvander2d")(x, y, deg) | Pseudo-Vandermonde matrix of given degrees. | | [`chebvander3d`](generated/numpy.polynomial.chebyshev.chebvander3d#numpy.polynomial.chebyshev.chebvander3d "numpy.polynomial.chebyshev.chebvander3d")(x, y, z, deg) | Pseudo-Vandermonde matrix of given degrees. | | [`chebgauss`](generated/numpy.polynomial.chebyshev.chebgauss#numpy.polynomial.chebyshev.chebgauss "numpy.polynomial.chebyshev.chebgauss")(deg) | Gauss-Chebyshev quadrature. | | [`chebweight`](generated/numpy.polynomial.chebyshev.chebweight#numpy.polynomial.chebyshev.chebweight "numpy.polynomial.chebyshev.chebweight")(x) | The weight function of the Chebyshev polynomials. | | [`chebcompanion`](generated/numpy.polynomial.chebyshev.chebcompanion#numpy.polynomial.chebyshev.chebcompanion "numpy.polynomial.chebyshev.chebcompanion")(c) | Return the scaled companion matrix of c. | | [`chebfit`](generated/numpy.polynomial.chebyshev.chebfit#numpy.polynomial.chebyshev.chebfit "numpy.polynomial.chebyshev.chebfit")(x, y, deg[, rcond, full, w]) | Least squares fit of Chebyshev series to data. | | [`chebpts1`](generated/numpy.polynomial.chebyshev.chebpts1#numpy.polynomial.chebyshev.chebpts1 "numpy.polynomial.chebyshev.chebpts1")(npts) | Chebyshev points of the first kind. | | [`chebpts2`](generated/numpy.polynomial.chebyshev.chebpts2#numpy.polynomial.chebyshev.chebpts2 "numpy.polynomial.chebyshev.chebpts2")(npts) | Chebyshev points of the second kind. | | [`chebtrim`](generated/numpy.polynomial.chebyshev.chebtrim#numpy.polynomial.chebyshev.chebtrim "numpy.polynomial.chebyshev.chebtrim")(c[, tol]) | Remove "small" "trailing" coefficients from a polynomial. 
| | [`chebline`](generated/numpy.polynomial.chebyshev.chebline#numpy.polynomial.chebyshev.chebline "numpy.polynomial.chebyshev.chebline")(off, scl) | Chebyshev series whose graph is a straight line. | | [`cheb2poly`](generated/numpy.polynomial.chebyshev.cheb2poly#numpy.polynomial.chebyshev.cheb2poly "numpy.polynomial.chebyshev.cheb2poly")(c) | Convert a Chebyshev series to a polynomial. | | [`poly2cheb`](generated/numpy.polynomial.chebyshev.poly2cheb#numpy.polynomial.chebyshev.poly2cheb "numpy.polynomial.chebyshev.poly2cheb")(pol) | Convert a polynomial to a Chebyshev series. | | [`chebinterpolate`](generated/numpy.polynomial.chebyshev.chebinterpolate#numpy.polynomial.chebyshev.chebinterpolate "numpy.polynomial.chebyshev.chebinterpolate")(func, deg[, args]) | Interpolate a function at the Chebyshev points of the first kind. | See also -------- [`numpy.polynomial`](routines.polynomials.package#module-numpy.polynomial "numpy.polynomial") Notes ----- The implementations of multiplication, division, integration, and differentiation use the algebraic identities [[1]](#r3f3efff98d00-1): \[\begin{split}T_n(x) = \frac{z^n + z^{-n}}{2} \\ z\frac{dx}{dz} = \frac{z - z^{-1}}{2}.\end{split}\] where \[x = \frac{z + z^{-1}}{2}.\] These identities allow a Chebyshev series to be expressed as a finite, symmetric Laurent series. In this module, this sort of Laurent series is referred to as a “z-series.” References ---------- [1](#id1) <NAME>, et al., “Combinatorial Trigonometry with Chebyshev Polynomials,” *Journal of Statistical Planning and Inference 14*, 2008 (<https://web.archive.org/web/20080221202153/https://www.math.hmc.edu/~benjamin/papers/CombTrig.pdf>, pg. 4) © 2005–2022 NumPy Developers Licensed under the 3-clause BSD License. 
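The module-level routines above all operate on plain 1-D arrays of Chebyshev coefficients, ordered from low to high degree. A minimal sketch (the coefficient values are arbitrary examples; the function names are the module's own):

```python
import numpy as np
from numpy.polynomial import chebyshev as C

c1 = np.array([1.0, 2.0, 3.0])   # 1*T_0 + 2*T_1 + 3*T_2
c2 = np.array([3.0, 2.0, 1.0])

# Arithmetic on coefficient arrays stays in the Chebyshev basis.
s = C.chebadd(c1, c2)            # elementwise sum of coefficients
p = C.chebmul(c1, c2)            # product, re-expressed in the T_n basis

# Evaluating the product series agrees with multiplying the evaluations.
x = 0.5
assert np.isclose(C.chebval(x, p), C.chebval(x, c1) * C.chebval(x, c2))
```

The same pattern (a `*add`/`*mul`/`*val` family acting on coefficient arrays) repeats in every basis-specific module below.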
<https://numpy.org/doc/1.23/reference/routines.polynomials.chebyshev.html>

Hermite Series, “Physicists” (numpy.polynomial.hermite)
=======================================================

This module provides a number of objects (mostly functions) useful for dealing with Hermite series, including a [`Hermite`](generated/numpy.polynomial.hermite.hermite#numpy.polynomial.hermite.Hermite "numpy.polynomial.hermite.Hermite") class that encapsulates the usual arithmetic operations. (General information on how this module represents and works with such polynomials is in the docstring for its “parent” sub-package, [`numpy.polynomial`](routines.polynomials.package#module-numpy.polynomial "numpy.polynomial")). Classes ------- | | | | --- | --- | | [`Hermite`](generated/numpy.polynomial.hermite.hermite#numpy.polynomial.hermite.Hermite "numpy.polynomial.hermite.Hermite")(coef[, domain, window]) | An Hermite series class. | Constants --------- | | | | --- | --- | | [`hermdomain`](generated/numpy.polynomial.hermite.hermdomain#numpy.polynomial.hermite.hermdomain "numpy.polynomial.hermite.hermdomain") | An array object represents a multidimensional, homogeneous array of fixed-size items. | | [`hermzero`](generated/numpy.polynomial.hermite.hermzero#numpy.polynomial.hermite.hermzero "numpy.polynomial.hermite.hermzero") | An array object represents a multidimensional, homogeneous array of fixed-size items. | | [`hermone`](generated/numpy.polynomial.hermite.hermone#numpy.polynomial.hermite.hermone "numpy.polynomial.hermite.hermone") | An array object represents a multidimensional, homogeneous array of fixed-size items. | | [`hermx`](generated/numpy.polynomial.hermite.hermx#numpy.polynomial.hermite.hermx "numpy.polynomial.hermite.hermx") | An array object represents a multidimensional, homogeneous array of fixed-size items. 
| Arithmetic ---------- | | | | --- | --- | | [`hermadd`](generated/numpy.polynomial.hermite.hermadd#numpy.polynomial.hermite.hermadd "numpy.polynomial.hermite.hermadd")(c1, c2) | Add one Hermite series to another. | | [`hermsub`](generated/numpy.polynomial.hermite.hermsub#numpy.polynomial.hermite.hermsub "numpy.polynomial.hermite.hermsub")(c1, c2) | Subtract one Hermite series from another. | | [`hermmulx`](generated/numpy.polynomial.hermite.hermmulx#numpy.polynomial.hermite.hermmulx "numpy.polynomial.hermite.hermmulx")(c) | Multiply a Hermite series by x. | | [`hermmul`](generated/numpy.polynomial.hermite.hermmul#numpy.polynomial.hermite.hermmul "numpy.polynomial.hermite.hermmul")(c1, c2) | Multiply one Hermite series by another. | | [`hermdiv`](generated/numpy.polynomial.hermite.hermdiv#numpy.polynomial.hermite.hermdiv "numpy.polynomial.hermite.hermdiv")(c1, c2) | Divide one Hermite series by another. | | [`hermpow`](generated/numpy.polynomial.hermite.hermpow#numpy.polynomial.hermite.hermpow "numpy.polynomial.hermite.hermpow")(c, pow[, maxpower]) | Raise a Hermite series to a power. | | [`hermval`](generated/numpy.polynomial.hermite.hermval#numpy.polynomial.hermite.hermval "numpy.polynomial.hermite.hermval")(x, c[, tensor]) | Evaluate an Hermite series at points x. | | [`hermval2d`](generated/numpy.polynomial.hermite.hermval2d#numpy.polynomial.hermite.hermval2d "numpy.polynomial.hermite.hermval2d")(x, y, c) | Evaluate a 2-D Hermite series at points (x, y). | | [`hermval3d`](generated/numpy.polynomial.hermite.hermval3d#numpy.polynomial.hermite.hermval3d "numpy.polynomial.hermite.hermval3d")(x, y, z, c) | Evaluate a 3-D Hermite series at points (x, y, z). | | [`hermgrid2d`](generated/numpy.polynomial.hermite.hermgrid2d#numpy.polynomial.hermite.hermgrid2d "numpy.polynomial.hermite.hermgrid2d")(x, y, c) | Evaluate a 2-D Hermite series on the Cartesian product of x and y. 
| | [`hermgrid3d`](generated/numpy.polynomial.hermite.hermgrid3d#numpy.polynomial.hermite.hermgrid3d "numpy.polynomial.hermite.hermgrid3d")(x, y, z, c) | Evaluate a 3-D Hermite series on the Cartesian product of x, y, and z. | Calculus -------- | | | | --- | --- | | [`hermder`](generated/numpy.polynomial.hermite.hermder#numpy.polynomial.hermite.hermder "numpy.polynomial.hermite.hermder")(c[, m, scl, axis]) | Differentiate a Hermite series. | | [`hermint`](generated/numpy.polynomial.hermite.hermint#numpy.polynomial.hermite.hermint "numpy.polynomial.hermite.hermint")(c[, m, k, lbnd, scl, axis]) | Integrate a Hermite series. | Misc Functions -------------- | | | | --- | --- | | [`hermfromroots`](generated/numpy.polynomial.hermite.hermfromroots#numpy.polynomial.hermite.hermfromroots "numpy.polynomial.hermite.hermfromroots")(roots) | Generate a Hermite series with given roots. | | [`hermroots`](generated/numpy.polynomial.hermite.hermroots#numpy.polynomial.hermite.hermroots "numpy.polynomial.hermite.hermroots")(c) | Compute the roots of a Hermite series. | | [`hermvander`](generated/numpy.polynomial.hermite.hermvander#numpy.polynomial.hermite.hermvander "numpy.polynomial.hermite.hermvander")(x, deg) | Pseudo-Vandermonde matrix of given degree. | | [`hermvander2d`](generated/numpy.polynomial.hermite.hermvander2d#numpy.polynomial.hermite.hermvander2d "numpy.polynomial.hermite.hermvander2d")(x, y, deg) | Pseudo-Vandermonde matrix of given degrees. | | [`hermvander3d`](generated/numpy.polynomial.hermite.hermvander3d#numpy.polynomial.hermite.hermvander3d "numpy.polynomial.hermite.hermvander3d")(x, y, z, deg) | Pseudo-Vandermonde matrix of given degrees. | | [`hermgauss`](generated/numpy.polynomial.hermite.hermgauss#numpy.polynomial.hermite.hermgauss "numpy.polynomial.hermite.hermgauss")(deg) | Gauss-Hermite quadrature. 
| | [`hermweight`](generated/numpy.polynomial.hermite.hermweight#numpy.polynomial.hermite.hermweight "numpy.polynomial.hermite.hermweight")(x) | Weight function of the Hermite polynomials. | | [`hermcompanion`](generated/numpy.polynomial.hermite.hermcompanion#numpy.polynomial.hermite.hermcompanion "numpy.polynomial.hermite.hermcompanion")(c) | Return the scaled companion matrix of c. | | [`hermfit`](generated/numpy.polynomial.hermite.hermfit#numpy.polynomial.hermite.hermfit "numpy.polynomial.hermite.hermfit")(x, y, deg[, rcond, full, w]) | Least squares fit of Hermite series to data. | | [`hermtrim`](generated/numpy.polynomial.hermite.hermtrim#numpy.polynomial.hermite.hermtrim "numpy.polynomial.hermite.hermtrim")(c[, tol]) | Remove "small" "trailing" coefficients from a polynomial. | | [`hermline`](generated/numpy.polynomial.hermite.hermline#numpy.polynomial.hermite.hermline "numpy.polynomial.hermite.hermline")(off, scl) | Hermite series whose graph is a straight line. | | [`herm2poly`](generated/numpy.polynomial.hermite.herm2poly#numpy.polynomial.hermite.herm2poly "numpy.polynomial.hermite.herm2poly")(c) | Convert a Hermite series to a polynomial. | | [`poly2herm`](generated/numpy.polynomial.hermite.poly2herm#numpy.polynomial.hermite.poly2herm "numpy.polynomial.hermite.poly2herm")(pol) | Convert a polynomial to a Hermite series. | See also -------- [`numpy.polynomial`](routines.polynomials.package#module-numpy.polynomial "numpy.polynomial") 
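`hermder` and `hermint` undo each other up to the constant of integration; `hermint`'s `k` argument fixes the value of the antiderivative at the lower bound (default 0). A small round-trip sketch using the calculus routines listed above:

```python
import numpy as np
from numpy.polynomial import hermite as H

c = np.array([1.0, 2.0, 3.0])            # 1*H_0 + 2*H_1 + 3*H_2 (physicists')

d = H.hermder(c)                         # coefficients of the derivative
# Choose k so the antiderivative takes the original series' value at 0,
# restoring the constant term lost by differentiation.
r = H.hermint(d, k=H.hermval(0.0, c))
assert np.allclose(r, c)
```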
<https://numpy.org/doc/1.23/reference/routines.polynomials.hermite.html>

HermiteE Series, “Probabilists” (numpy.polynomial.hermite_e)
============================================================

This module provides a number of objects (mostly functions) useful for dealing with Hermite_e series, including a [`HermiteE`](generated/numpy.polynomial.hermite_e.hermitee#numpy.polynomial.hermite_e.HermiteE "numpy.polynomial.hermite_e.HermiteE") class that encapsulates the usual arithmetic operations. (General information on how this module represents and works with such polynomials is in the docstring for its “parent” sub-package, [`numpy.polynomial`](routines.polynomials.package#module-numpy.polynomial "numpy.polynomial")). Classes ------- | | | | --- | --- | | [`HermiteE`](generated/numpy.polynomial.hermite_e.hermitee#numpy.polynomial.hermite_e.HermiteE "numpy.polynomial.hermite_e.HermiteE")(coef[, domain, window]) | An HermiteE series class. | Constants --------- | | | | --- | --- | | [`hermedomain`](generated/numpy.polynomial.hermite_e.hermedomain#numpy.polynomial.hermite_e.hermedomain "numpy.polynomial.hermite_e.hermedomain") | An array object represents a multidimensional, homogeneous array of fixed-size items. | | [`hermezero`](generated/numpy.polynomial.hermite_e.hermezero#numpy.polynomial.hermite_e.hermezero "numpy.polynomial.hermite_e.hermezero") | An array object represents a multidimensional, homogeneous array of fixed-size items. | | [`hermeone`](generated/numpy.polynomial.hermite_e.hermeone#numpy.polynomial.hermite_e.hermeone "numpy.polynomial.hermite_e.hermeone") | An array object represents a multidimensional, homogeneous array of fixed-size items. | | [`hermex`](generated/numpy.polynomial.hermite_e.hermex#numpy.polynomial.hermite_e.hermex "numpy.polynomial.hermite_e.hermex") | An array object represents a multidimensional, homogeneous array of fixed-size items. 
| Arithmetic ---------- | | | | --- | --- | | [`hermeadd`](generated/numpy.polynomial.hermite_e.hermeadd#numpy.polynomial.hermite_e.hermeadd "numpy.polynomial.hermite_e.hermeadd")(c1, c2) | Add one Hermite series to another. | | [`hermesub`](generated/numpy.polynomial.hermite_e.hermesub#numpy.polynomial.hermite_e.hermesub "numpy.polynomial.hermite_e.hermesub")(c1, c2) | Subtract one Hermite series from another. | | [`hermemulx`](generated/numpy.polynomial.hermite_e.hermemulx#numpy.polynomial.hermite_e.hermemulx "numpy.polynomial.hermite_e.hermemulx")(c) | Multiply a Hermite series by x. | | [`hermemul`](generated/numpy.polynomial.hermite_e.hermemul#numpy.polynomial.hermite_e.hermemul "numpy.polynomial.hermite_e.hermemul")(c1, c2) | Multiply one Hermite series by another. | | [`hermediv`](generated/numpy.polynomial.hermite_e.hermediv#numpy.polynomial.hermite_e.hermediv "numpy.polynomial.hermite_e.hermediv")(c1, c2) | Divide one Hermite series by another. | | [`hermepow`](generated/numpy.polynomial.hermite_e.hermepow#numpy.polynomial.hermite_e.hermepow "numpy.polynomial.hermite_e.hermepow")(c, pow[, maxpower]) | Raise a Hermite series to a power. | | [`hermeval`](generated/numpy.polynomial.hermite_e.hermeval#numpy.polynomial.hermite_e.hermeval "numpy.polynomial.hermite_e.hermeval")(x, c[, tensor]) | Evaluate an HermiteE series at points x. | | [`hermeval2d`](generated/numpy.polynomial.hermite_e.hermeval2d#numpy.polynomial.hermite_e.hermeval2d "numpy.polynomial.hermite_e.hermeval2d")(x, y, c) | Evaluate a 2-D HermiteE series at points (x, y). | | [`hermeval3d`](generated/numpy.polynomial.hermite_e.hermeval3d#numpy.polynomial.hermite_e.hermeval3d "numpy.polynomial.hermite_e.hermeval3d")(x, y, z, c) | Evaluate a 3-D Hermite_e series at points (x, y, z). 
| | [`hermegrid2d`](generated/numpy.polynomial.hermite_e.hermegrid2d#numpy.polynomial.hermite_e.hermegrid2d "numpy.polynomial.hermite_e.hermegrid2d")(x, y, c) | Evaluate a 2-D HermiteE series on the Cartesian product of x and y. | | [`hermegrid3d`](generated/numpy.polynomial.hermite_e.hermegrid3d#numpy.polynomial.hermite_e.hermegrid3d "numpy.polynomial.hermite_e.hermegrid3d")(x, y, z, c) | Evaluate a 3-D HermiteE series on the Cartesian product of x, y, and z. | Calculus -------- | | | | --- | --- | | [`hermeder`](generated/numpy.polynomial.hermite_e.hermeder#numpy.polynomial.hermite_e.hermeder "numpy.polynomial.hermite_e.hermeder")(c[, m, scl, axis]) | Differentiate a Hermite_e series. | | [`hermeint`](generated/numpy.polynomial.hermite_e.hermeint#numpy.polynomial.hermite_e.hermeint "numpy.polynomial.hermite_e.hermeint")(c[, m, k, lbnd, scl, axis]) | Integrate a Hermite_e series. | Misc Functions -------------- | | | | --- | --- | | [`hermefromroots`](generated/numpy.polynomial.hermite_e.hermefromroots#numpy.polynomial.hermite_e.hermefromroots "numpy.polynomial.hermite_e.hermefromroots")(roots) | Generate a HermiteE series with given roots. | | [`hermeroots`](generated/numpy.polynomial.hermite_e.hermeroots#numpy.polynomial.hermite_e.hermeroots "numpy.polynomial.hermite_e.hermeroots")(c) | Compute the roots of a HermiteE series. | | [`hermevander`](generated/numpy.polynomial.hermite_e.hermevander#numpy.polynomial.hermite_e.hermevander "numpy.polynomial.hermite_e.hermevander")(x, deg) | Pseudo-Vandermonde matrix of given degree. | | [`hermevander2d`](generated/numpy.polynomial.hermite_e.hermevander2d#numpy.polynomial.hermite_e.hermevander2d "numpy.polynomial.hermite_e.hermevander2d")(x, y, deg) | Pseudo-Vandermonde matrix of given degrees. | | [`hermevander3d`](generated/numpy.polynomial.hermite_e.hermevander3d#numpy.polynomial.hermite_e.hermevander3d "numpy.polynomial.hermite_e.hermevander3d")(x, y, z, deg) | Pseudo-Vandermonde matrix of given degrees. 
| | [`hermegauss`](generated/numpy.polynomial.hermite_e.hermegauss#numpy.polynomial.hermite_e.hermegauss "numpy.polynomial.hermite_e.hermegauss")(deg) | Gauss-HermiteE quadrature. | | [`hermeweight`](generated/numpy.polynomial.hermite_e.hermeweight#numpy.polynomial.hermite_e.hermeweight "numpy.polynomial.hermite_e.hermeweight")(x) | Weight function of the Hermite_e polynomials. | | [`hermecompanion`](generated/numpy.polynomial.hermite_e.hermecompanion#numpy.polynomial.hermite_e.hermecompanion "numpy.polynomial.hermite_e.hermecompanion")(c) | Return the scaled companion matrix of c. | | [`hermefit`](generated/numpy.polynomial.hermite_e.hermefit#numpy.polynomial.hermite_e.hermefit "numpy.polynomial.hermite_e.hermefit")(x, y, deg[, rcond, full, w]) | Least squares fit of Hermite series to data. | | [`hermetrim`](generated/numpy.polynomial.hermite_e.hermetrim#numpy.polynomial.hermite_e.hermetrim "numpy.polynomial.hermite_e.hermetrim")(c[, tol]) | Remove "small" "trailing" coefficients from a polynomial. | | [`hermeline`](generated/numpy.polynomial.hermite_e.hermeline#numpy.polynomial.hermite_e.hermeline "numpy.polynomial.hermite_e.hermeline")(off, scl) | Hermite series whose graph is a straight line. | | [`herme2poly`](generated/numpy.polynomial.hermite_e.herme2poly#numpy.polynomial.hermite_e.herme2poly "numpy.polynomial.hermite_e.herme2poly")(c) | Convert a Hermite series to a polynomial. | | [`poly2herme`](generated/numpy.polynomial.hermite_e.poly2herme#numpy.polynomial.hermite_e.poly2herme "numpy.polynomial.hermite_e.poly2herme")(pol) | Convert a polynomial to a Hermite series. | See also -------- [`numpy.polynomial`](routines.polynomials.package#module-numpy.polynomial "numpy.polynomial") 
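`hermegauss` returns sample points and weights for Gauss-HermiteE quadrature, which integrates `f(x) * exp(-x**2/2)` over the real line exactly when `f` is a polynomial of degree at most `2*deg - 1`. A quick sanity check against the known moments of the weight function (a sketch; the choice `deg=20` is arbitrary):

```python
import numpy as np
from numpy.polynomial import hermite_e as He

x, w = He.hermegauss(20)

# Zeroth moment: integral of exp(-x**2/2) over the real line is sqrt(2*pi).
assert np.isclose(np.sum(w), np.sqrt(2 * np.pi))
# Second moment: integral of x**2 * exp(-x**2/2) is also sqrt(2*pi).
assert np.isclose(np.sum(w * x**2), np.sqrt(2 * np.pi))
```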
<https://numpy.org/doc/1.23/reference/routines.polynomials.hermite_e.html>

Laguerre Series (numpy.polynomial.laguerre)
===========================================

This module provides a number of objects (mostly functions) useful for dealing with Laguerre series, including a [`Laguerre`](generated/numpy.polynomial.laguerre.laguerre#numpy.polynomial.laguerre.Laguerre "numpy.polynomial.laguerre.Laguerre") class that encapsulates the usual arithmetic operations. (General information on how this module represents and works with such polynomials is in the docstring for its “parent” sub-package, [`numpy.polynomial`](routines.polynomials.package#module-numpy.polynomial "numpy.polynomial")). Classes ------- | | | | --- | --- | | [`Laguerre`](generated/numpy.polynomial.laguerre.laguerre#numpy.polynomial.laguerre.Laguerre "numpy.polynomial.laguerre.Laguerre")(coef[, domain, window]) | A Laguerre series class. | Constants --------- | | | | --- | --- | | [`lagdomain`](generated/numpy.polynomial.laguerre.lagdomain#numpy.polynomial.laguerre.lagdomain "numpy.polynomial.laguerre.lagdomain") | An array object represents a multidimensional, homogeneous array of fixed-size items. | | [`lagzero`](generated/numpy.polynomial.laguerre.lagzero#numpy.polynomial.laguerre.lagzero "numpy.polynomial.laguerre.lagzero") | An array object represents a multidimensional, homogeneous array of fixed-size items. | | [`lagone`](generated/numpy.polynomial.laguerre.lagone#numpy.polynomial.laguerre.lagone "numpy.polynomial.laguerre.lagone") | An array object represents a multidimensional, homogeneous array of fixed-size items. | | [`lagx`](generated/numpy.polynomial.laguerre.lagx#numpy.polynomial.laguerre.lagx "numpy.polynomial.laguerre.lagx") | An array object represents a multidimensional, homogeneous array of fixed-size items. 
| Arithmetic ---------- | | | | --- | --- | | [`lagadd`](generated/numpy.polynomial.laguerre.lagadd#numpy.polynomial.laguerre.lagadd "numpy.polynomial.laguerre.lagadd")(c1, c2) | Add one Laguerre series to another. | | [`lagsub`](generated/numpy.polynomial.laguerre.lagsub#numpy.polynomial.laguerre.lagsub "numpy.polynomial.laguerre.lagsub")(c1, c2) | Subtract one Laguerre series from another. | | [`lagmulx`](generated/numpy.polynomial.laguerre.lagmulx#numpy.polynomial.laguerre.lagmulx "numpy.polynomial.laguerre.lagmulx")(c) | Multiply a Laguerre series by x. | | [`lagmul`](generated/numpy.polynomial.laguerre.lagmul#numpy.polynomial.laguerre.lagmul "numpy.polynomial.laguerre.lagmul")(c1, c2) | Multiply one Laguerre series by another. | | [`lagdiv`](generated/numpy.polynomial.laguerre.lagdiv#numpy.polynomial.laguerre.lagdiv "numpy.polynomial.laguerre.lagdiv")(c1, c2) | Divide one Laguerre series by another. | | [`lagpow`](generated/numpy.polynomial.laguerre.lagpow#numpy.polynomial.laguerre.lagpow "numpy.polynomial.laguerre.lagpow")(c, pow[, maxpower]) | Raise a Laguerre series to a power. | | [`lagval`](generated/numpy.polynomial.laguerre.lagval#numpy.polynomial.laguerre.lagval "numpy.polynomial.laguerre.lagval")(x, c[, tensor]) | Evaluate a Laguerre series at points x. | | [`lagval2d`](generated/numpy.polynomial.laguerre.lagval2d#numpy.polynomial.laguerre.lagval2d "numpy.polynomial.laguerre.lagval2d")(x, y, c) | Evaluate a 2-D Laguerre series at points (x, y). | | [`lagval3d`](generated/numpy.polynomial.laguerre.lagval3d#numpy.polynomial.laguerre.lagval3d "numpy.polynomial.laguerre.lagval3d")(x, y, z, c) | Evaluate a 3-D Laguerre series at points (x, y, z). | | [`laggrid2d`](generated/numpy.polynomial.laguerre.laggrid2d#numpy.polynomial.laguerre.laggrid2d "numpy.polynomial.laguerre.laggrid2d")(x, y, c) | Evaluate a 2-D Laguerre series on the Cartesian product of x and y. 
| | [`laggrid3d`](generated/numpy.polynomial.laguerre.laggrid3d#numpy.polynomial.laguerre.laggrid3d "numpy.polynomial.laguerre.laggrid3d")(x, y, z, c) | Evaluate a 3-D Laguerre series on the Cartesian product of x, y, and z. | Calculus -------- | | | | --- | --- | | [`lagder`](generated/numpy.polynomial.laguerre.lagder#numpy.polynomial.laguerre.lagder "numpy.polynomial.laguerre.lagder")(c[, m, scl, axis]) | Differentiate a Laguerre series. | | [`lagint`](generated/numpy.polynomial.laguerre.lagint#numpy.polynomial.laguerre.lagint "numpy.polynomial.laguerre.lagint")(c[, m, k, lbnd, scl, axis]) | Integrate a Laguerre series. | Misc Functions -------------- | | | | --- | --- | | [`lagfromroots`](generated/numpy.polynomial.laguerre.lagfromroots#numpy.polynomial.laguerre.lagfromroots "numpy.polynomial.laguerre.lagfromroots")(roots) | Generate a Laguerre series with given roots. | | [`lagroots`](generated/numpy.polynomial.laguerre.lagroots#numpy.polynomial.laguerre.lagroots "numpy.polynomial.laguerre.lagroots")(c) | Compute the roots of a Laguerre series. | | [`lagvander`](generated/numpy.polynomial.laguerre.lagvander#numpy.polynomial.laguerre.lagvander "numpy.polynomial.laguerre.lagvander")(x, deg) | Pseudo-Vandermonde matrix of given degree. | | [`lagvander2d`](generated/numpy.polynomial.laguerre.lagvander2d#numpy.polynomial.laguerre.lagvander2d "numpy.polynomial.laguerre.lagvander2d")(x, y, deg) | Pseudo-Vandermonde matrix of given degrees. | | [`lagvander3d`](generated/numpy.polynomial.laguerre.lagvander3d#numpy.polynomial.laguerre.lagvander3d "numpy.polynomial.laguerre.lagvander3d")(x, y, z, deg) | Pseudo-Vandermonde matrix of given degrees. | | [`laggauss`](generated/numpy.polynomial.laguerre.laggauss#numpy.polynomial.laguerre.laggauss "numpy.polynomial.laguerre.laggauss")(deg) | Gauss-Laguerre quadrature. 
| | [`lagweight`](generated/numpy.polynomial.laguerre.lagweight#numpy.polynomial.laguerre.lagweight "numpy.polynomial.laguerre.lagweight")(x) | Weight function of the Laguerre polynomials. | | [`lagcompanion`](generated/numpy.polynomial.laguerre.lagcompanion#numpy.polynomial.laguerre.lagcompanion "numpy.polynomial.laguerre.lagcompanion")(c) | Return the companion matrix of c. | | [`lagfit`](generated/numpy.polynomial.laguerre.lagfit#numpy.polynomial.laguerre.lagfit "numpy.polynomial.laguerre.lagfit")(x, y, deg[, rcond, full, w]) | Least squares fit of Laguerre series to data. | | [`lagtrim`](generated/numpy.polynomial.laguerre.lagtrim#numpy.polynomial.laguerre.lagtrim "numpy.polynomial.laguerre.lagtrim")(c[, tol]) | Remove "small" "trailing" coefficients from a polynomial. | | [`lagline`](generated/numpy.polynomial.laguerre.lagline#numpy.polynomial.laguerre.lagline "numpy.polynomial.laguerre.lagline")(off, scl) | Laguerre series whose graph is a straight line. | | [`lag2poly`](generated/numpy.polynomial.laguerre.lag2poly#numpy.polynomial.laguerre.lag2poly "numpy.polynomial.laguerre.lag2poly")(c) | Convert a Laguerre series to a polynomial. | | [`poly2lag`](generated/numpy.polynomial.laguerre.poly2lag#numpy.polynomial.laguerre.poly2lag "numpy.polynomial.laguerre.poly2lag")(pol) | Convert a polynomial to a Laguerre series. | See also -------- [`numpy.polynomial`](routines.polynomials.package#module-numpy.polynomial "numpy.polynomial")
<https://numpy.org/doc/1.23/reference/routines.polynomials.laguerre.html>

Legendre Series (numpy.polynomial.legendre)
===========================================

This module provides a number of objects (mostly functions) useful for dealing with Legendre series, including a [`Legendre`](generated/numpy.polynomial.legendre.legendre#numpy.polynomial.legendre.Legendre "numpy.polynomial.legendre.Legendre") class that encapsulates the usual arithmetic operations. 
(General information on how this module represents and works with such polynomials is in the docstring for its “parent” sub-package, [`numpy.polynomial`](routines.polynomials.package#module-numpy.polynomial "numpy.polynomial")). Classes ------- | | | | --- | --- | | [`Legendre`](generated/numpy.polynomial.legendre.legendre#numpy.polynomial.legendre.Legendre "numpy.polynomial.legendre.Legendre")(coef[, domain, window]) | A Legendre series class. | Constants --------- | | | | --- | --- | | [`legdomain`](generated/numpy.polynomial.legendre.legdomain#numpy.polynomial.legendre.legdomain "numpy.polynomial.legendre.legdomain") | An array object represents a multidimensional, homogeneous array of fixed-size items. | | [`legzero`](generated/numpy.polynomial.legendre.legzero#numpy.polynomial.legendre.legzero "numpy.polynomial.legendre.legzero") | An array object represents a multidimensional, homogeneous array of fixed-size items. | | [`legone`](generated/numpy.polynomial.legendre.legone#numpy.polynomial.legendre.legone "numpy.polynomial.legendre.legone") | An array object represents a multidimensional, homogeneous array of fixed-size items. | | [`legx`](generated/numpy.polynomial.legendre.legx#numpy.polynomial.legendre.legx "numpy.polynomial.legendre.legx") | An array object represents a multidimensional, homogeneous array of fixed-size items. | Arithmetic ---------- | | | | --- | --- | | [`legadd`](generated/numpy.polynomial.legendre.legadd#numpy.polynomial.legendre.legadd "numpy.polynomial.legendre.legadd")(c1, c2) | Add one Legendre series to another. | | [`legsub`](generated/numpy.polynomial.legendre.legsub#numpy.polynomial.legendre.legsub "numpy.polynomial.legendre.legsub")(c1, c2) | Subtract one Legendre series from another. | | [`legmulx`](generated/numpy.polynomial.legendre.legmulx#numpy.polynomial.legendre.legmulx "numpy.polynomial.legendre.legmulx")(c) | Multiply a Legendre series by x. 
| | [`legmul`](generated/numpy.polynomial.legendre.legmul#numpy.polynomial.legendre.legmul "numpy.polynomial.legendre.legmul")(c1, c2) | Multiply one Legendre series by another. | | [`legdiv`](generated/numpy.polynomial.legendre.legdiv#numpy.polynomial.legendre.legdiv "numpy.polynomial.legendre.legdiv")(c1, c2) | Divide one Legendre series by another. | | [`legpow`](generated/numpy.polynomial.legendre.legpow#numpy.polynomial.legendre.legpow "numpy.polynomial.legendre.legpow")(c, pow[, maxpower]) | Raise a Legendre series to a power. | | [`legval`](generated/numpy.polynomial.legendre.legval#numpy.polynomial.legendre.legval "numpy.polynomial.legendre.legval")(x, c[, tensor]) | Evaluate a Legendre series at points x. | | [`legval2d`](generated/numpy.polynomial.legendre.legval2d#numpy.polynomial.legendre.legval2d "numpy.polynomial.legendre.legval2d")(x, y, c) | Evaluate a 2-D Legendre series at points (x, y). | | [`legval3d`](generated/numpy.polynomial.legendre.legval3d#numpy.polynomial.legendre.legval3d "numpy.polynomial.legendre.legval3d")(x, y, z, c) | Evaluate a 3-D Legendre series at points (x, y, z). | | [`leggrid2d`](generated/numpy.polynomial.legendre.leggrid2d#numpy.polynomial.legendre.leggrid2d "numpy.polynomial.legendre.leggrid2d")(x, y, c) | Evaluate a 2-D Legendre series on the Cartesian product of x and y. | | [`leggrid3d`](generated/numpy.polynomial.legendre.leggrid3d#numpy.polynomial.legendre.leggrid3d "numpy.polynomial.legendre.leggrid3d")(x, y, z, c) | Evaluate a 3-D Legendre series on the Cartesian product of x, y, and z. | Calculus -------- | | | | --- | --- | | [`legder`](generated/numpy.polynomial.legendre.legder#numpy.polynomial.legendre.legder "numpy.polynomial.legendre.legder")(c[, m, scl, axis]) | Differentiate a Legendre series. | | [`legint`](generated/numpy.polynomial.legendre.legint#numpy.polynomial.legendre.legint "numpy.polynomial.legendre.legint")(c[, m, k, lbnd, scl, axis]) | Integrate a Legendre series. 
| Misc Functions -------------- | | | | --- | --- | | [`legfromroots`](generated/numpy.polynomial.legendre.legfromroots#numpy.polynomial.legendre.legfromroots "numpy.polynomial.legendre.legfromroots")(roots) | Generate a Legendre series with given roots. | | [`legroots`](generated/numpy.polynomial.legendre.legroots#numpy.polynomial.legendre.legroots "numpy.polynomial.legendre.legroots")(c) | Compute the roots of a Legendre series. | | [`legvander`](generated/numpy.polynomial.legendre.legvander#numpy.polynomial.legendre.legvander "numpy.polynomial.legendre.legvander")(x, deg) | Pseudo-Vandermonde matrix of given degree. | | [`legvander2d`](generated/numpy.polynomial.legendre.legvander2d#numpy.polynomial.legendre.legvander2d "numpy.polynomial.legendre.legvander2d")(x, y, deg) | Pseudo-Vandermonde matrix of given degrees. | | [`legvander3d`](generated/numpy.polynomial.legendre.legvander3d#numpy.polynomial.legendre.legvander3d "numpy.polynomial.legendre.legvander3d")(x, y, z, deg) | Pseudo-Vandermonde matrix of given degrees. | | [`leggauss`](generated/numpy.polynomial.legendre.leggauss#numpy.polynomial.legendre.leggauss "numpy.polynomial.legendre.leggauss")(deg) | Gauss-Legendre quadrature. | | [`legweight`](generated/numpy.polynomial.legendre.legweight#numpy.polynomial.legendre.legweight "numpy.polynomial.legendre.legweight")(x) | Weight function of the Legendre polynomials. | | [`legcompanion`](generated/numpy.polynomial.legendre.legcompanion#numpy.polynomial.legendre.legcompanion "numpy.polynomial.legendre.legcompanion")(c) | Return the scaled companion matrix of c. | | [`legfit`](generated/numpy.polynomial.legendre.legfit#numpy.polynomial.legendre.legfit "numpy.polynomial.legendre.legfit")(x, y, deg[, rcond, full, w]) | Least squares fit of Legendre series to data. | | [`legtrim`](generated/numpy.polynomial.legendre.legtrim#numpy.polynomial.legendre.legtrim "numpy.polynomial.legendre.legtrim")(c[, tol]) | Remove "small" "trailing" coefficients from a polynomial. 
| | [`legline`](generated/numpy.polynomial.legendre.legline#numpy.polynomial.legendre.legline "numpy.polynomial.legendre.legline")(off, scl) | Legendre series whose graph is a straight line. | | [`leg2poly`](generated/numpy.polynomial.legendre.leg2poly#numpy.polynomial.legendre.leg2poly "numpy.polynomial.legendre.leg2poly")(c) | Convert a Legendre series to a polynomial. | | [`poly2leg`](generated/numpy.polynomial.legendre.poly2leg#numpy.polynomial.legendre.poly2leg "numpy.polynomial.legendre.poly2leg")(pol) | Convert a polynomial to a Legendre series. | See also -------- [`numpy.polynomial`](routines.polynomials.package#module-numpy.polynomial "numpy.polynomial")
<https://numpy.org/doc/1.23/reference/routines.polynomials.legendre.html>

Polyutils
=========

Utility classes and functions for the polynomial modules. This module provides: error and warning objects; a polynomial base class; and some routines used in both the `polynomial` and `chebyshev` modules. Warning objects --------------- | | | | --- | --- | | [`RankWarning`](generated/numpy.polynomial.polyutils.rankwarning#numpy.polynomial.polyutils.RankWarning "numpy.polynomial.polyutils.RankWarning") | Issued by chebfit when the design matrix is rank deficient. | Functions --------- | | | | --- | --- | | [`as_series`](generated/numpy.polynomial.polyutils.as_series#numpy.polynomial.polyutils.as_series "numpy.polynomial.polyutils.as_series")(alist[, trim]) | Return argument as a list of 1-d arrays. | | [`trimseq`](generated/numpy.polynomial.polyutils.trimseq#numpy.polynomial.polyutils.trimseq "numpy.polynomial.polyutils.trimseq")(seq) | Remove small Poly series coefficients. | | [`trimcoef`](generated/numpy.polynomial.polyutils.trimcoef#numpy.polynomial.polyutils.trimcoef "numpy.polynomial.polyutils.trimcoef")(c[, tol]) | Remove "small" "trailing" coefficients from a polynomial. 
| [`getdomain`](generated/numpy.polynomial.polyutils.getdomain#numpy.polynomial.polyutils.getdomain "numpy.polynomial.polyutils.getdomain")(x) | Return a domain suitable for given abscissae. |
| [`mapdomain`](generated/numpy.polynomial.polyutils.mapdomain#numpy.polynomial.polyutils.mapdomain "numpy.polynomial.polyutils.mapdomain")(x, old, new) | Apply linear map to input points. |
| [`mapparms`](generated/numpy.polynomial.polyutils.mapparms#numpy.polynomial.polyutils.mapparms "numpy.polynomial.polyutils.mapparms")(old, new) | Linear map parameters between domains. |

<https://numpy.org/doc/1.23/reference/routines.polynomials.polyutils.html>

Poly1d
======

Basics
------

| | |
| --- | --- |
| [`poly1d`](generated/numpy.poly1d#numpy.poly1d "numpy.poly1d")(c_or_r[, r, variable]) | A one-dimensional polynomial class. |
| [`polyval`](generated/numpy.polyval#numpy.polyval "numpy.polyval")(p, x) | Evaluate a polynomial at specific values. |
| [`poly`](generated/numpy.poly#numpy.poly "numpy.poly")(seq_of_zeros) | Find the coefficients of a polynomial with the given sequence of roots. |
| [`roots`](generated/numpy.roots#numpy.roots "numpy.roots")(p) | Return the roots of a polynomial with coefficients given in p. |

Fitting
-------

| | |
| --- | --- |
| [`polyfit`](generated/numpy.polyfit#numpy.polyfit "numpy.polyfit")(x, y, deg[, rcond, full, w, cov]) | Least squares polynomial fit. |

Calculus
--------

| | |
| --- | --- |
| [`polyder`](generated/numpy.polyder#numpy.polyder "numpy.polyder")(p[, m]) | Return the derivative of the specified order of a polynomial. |
| [`polyint`](generated/numpy.polyint#numpy.polyint "numpy.polyint")(p[, m, k]) | Return an antiderivative (indefinite integral) of a polynomial. |

Arithmetic
----------

| | |
| --- | --- |
| [`polyadd`](generated/numpy.polyadd#numpy.polyadd "numpy.polyadd")(a1, a2) | Find the sum of two polynomials. |
| [`polydiv`](generated/numpy.polydiv#numpy.polydiv "numpy.polydiv")(u, v) | Returns the quotient and remainder of polynomial division. |
| [`polymul`](generated/numpy.polymul#numpy.polymul "numpy.polymul")(a1, a2) | Find the product of two polynomials. |
| [`polysub`](generated/numpy.polysub#numpy.polysub "numpy.polysub")(a1, a2) | Difference (subtraction) of two polynomials. |

Warnings
--------

| | |
| --- | --- |
| [`RankWarning`](generated/numpy.rankwarning#numpy.RankWarning "numpy.RankWarning") | Issued by [`polyfit`](generated/numpy.polyfit#numpy.polyfit "numpy.polyfit") when the Vandermonde matrix is rank deficient. |

<https://numpy.org/doc/1.23/reference/routines.polynomials.poly1d.html>

Convenience Classes
===================

The following lists the various constants and methods common to all of the classes representing the various kinds of polynomials. In the following, the term `Poly` represents any one of the convenience classes (e.g. [`Polynomial`](generated/numpy.polynomial.polynomial.polynomial#numpy.polynomial.polynomial.Polynomial "numpy.polynomial.polynomial.Polynomial"), [`Chebyshev`](generated/numpy.polynomial.chebyshev.chebyshev#numpy.polynomial.chebyshev.Chebyshev "numpy.polynomial.chebyshev.Chebyshev"), [`Hermite`](generated/numpy.polynomial.hermite.hermite#numpy.polynomial.hermite.Hermite "numpy.polynomial.hermite.Hermite"), etc.), while the lowercase `p` represents an **instance** of a polynomial class.

Constants
---------

* `Poly.domain` – Default domain
* `Poly.window` – Default window
* `Poly.basis_name` – String used to represent the basis
* `Poly.maxpower` – Maximum value `n` such that `p**n` is allowed
* `Poly.nickname` – String used in printing

Creation
--------

Methods for creating polynomial instances.
* `Poly.basis(degree)` – Basis polynomial of given degree
* `Poly.identity()` – `p` where `p(x) = x` for all `x`
* `Poly.fit(x, y, deg)` – `p` of degree `deg` with coefficients determined by the least-squares fit to the data `x`, `y`
* `Poly.fromroots(roots)` – `p` with specified roots
* `p.copy()` – Create a copy of `p`

Conversion
----------

Methods for converting a polynomial instance of one kind to another.

* `p.cast(Poly)` – Convert `p` to instance of kind `Poly`
* `p.convert(Poly)` – Convert `p` to instance of kind `Poly` or map between `domain` and `window`

Calculus
--------

* `p.deriv()` – Take the derivative of `p`
* `p.integ()` – Integrate `p`

Validation
----------

* `Poly.has_samecoef(p1, p2)` – Check if coefficients match
* `Poly.has_samedomain(p1, p2)` – Check if domains match
* `Poly.has_sametype(p1, p2)` – Check if types match
* `Poly.has_samewindow(p1, p2)` – Check if windows match

Misc
----

* `p.linspace()` – Return `x, p(x)` at equally-spaced points in `domain`
* `p.mapparms()` – Return the parameters for the linear mapping between `domain` and `window`
* `p.roots()` – Return the roots of `p`
* `p.trim()` – Remove trailing coefficients
* `p.cutdeg(degree)` – Truncate `p` to given degree
* `p.truncate(size)` – Truncate `p` to given size

<https://numpy.org/doc/1.23/reference/routines.polynomials.package.html>

numpy.poly1d
============

*class* numpy.poly1d(*c_or_r*, *r=False*, *variable=None*)[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/__init__.py)

A one-dimensional polynomial class.

Note

This forms part of the old polynomial API. Since version 1.4, the new polynomial API defined in [`numpy.polynomial`](../routines.polynomials.package#module-numpy.polynomial "numpy.polynomial") is preferred. A summary of the differences can be found in the [transition guide](../routines.polynomials).
A convenience class, used to encapsulate “natural” operations on polynomials so that said operations may take on their customary form in code (see Examples).

Parameters

**c_or_r** : array_like
The polynomial’s coefficients, in decreasing powers, or, if the value of the second parameter is True, the polynomial’s roots (values where the polynomial evaluates to 0). For example, `poly1d([1, 2, 3])` returns an object that represents \(x^2 + 2x + 3\), whereas `poly1d([1, 2, 3], True)` returns one that represents \((x-1)(x-2)(x-3) = x^3 - 6x^2 + 11x - 6\).

**r** : bool, optional
If True, `c_or_r` specifies the polynomial’s roots; the default is False.

**variable** : str, optional
Changes the variable used when printing `p` from `x` to [`variable`](numpy.poly1d.variable#numpy.poly1d.variable "numpy.poly1d.variable") (see Examples).

#### Examples

Construct the polynomial \(x^2 + 2x + 3\):

```
>>> p = np.poly1d([1, 2, 3])
>>> print(np.poly1d(p))
   2
1 x + 2 x + 3
```

Evaluate the polynomial at \(x = 0.5\):

```
>>> p(0.5)
4.25
```

Find the roots:

```
>>> p.r
array([-1.+1.41421356j, -1.-1.41421356j])
>>> p(p.r)
array([ -4.44089210e-16+0.j,  -4.44089210e-16+0.j])  # may vary
```

These numbers in the previous line represent (0, 0) to machine precision.

Show the coefficients:

```
>>> p.c
array([1, 2, 3])
```

Display the order (the leading zero-coefficients are removed):

```
>>> p.order
2
```

Show the coefficient of the k-th power in the polynomial (which is equivalent to `p.c[-(k+1)]`):

```
>>> p[1]
2
```

Polynomials can be added, subtracted, multiplied, and divided (division returns quotient and remainder):

```
>>> p * p
poly1d([ 1,  4, 10, 12,  9])
```

```
>>> (p**3 + 4) / p
(poly1d([ 1.,  4., 10., 12.,  9.]), poly1d([4.]))
```

`asarray(p)` gives the coefficient array, so polynomials can be used in all functions that accept arrays:

```
>>> p**2  # square of polynomial
poly1d([ 1,  4, 10, 12,  9])
```

```
>>> np.square(p)  # square of individual coefficients
array([1, 4, 9])
```

The variable used in the string representation of `p` can be modified, using the [`variable`](numpy.poly1d.variable#numpy.poly1d.variable "numpy.poly1d.variable") parameter:

```
>>> p = np.poly1d([1, 2, 3], variable='z')
>>> print(p)
   2
1 z + 2 z + 3
```

Construct a polynomial from its roots:

```
>>> np.poly1d([1, 2], True)
poly1d([ 1., -3.,  2.])
```

This is the same polynomial as obtained by:

```
>>> np.poly1d([1, -1]) * np.poly1d([1, -2])
poly1d([ 1, -3,  2])
```

Attributes

[`c`](numpy.poly1d.c#numpy.poly1d.c "numpy.poly1d.c")
The polynomial coefficients

[`coef`](numpy.poly1d.coef#numpy.poly1d.coef "numpy.poly1d.coef")
The polynomial coefficients

[`coefficients`](numpy.poly1d.coefficients#numpy.poly1d.coefficients "numpy.poly1d.coefficients")
The polynomial coefficients

[`coeffs`](numpy.poly1d.coeffs#numpy.poly1d.coeffs "numpy.poly1d.coeffs")
The polynomial coefficients

[`o`](numpy.poly1d.o#numpy.poly1d.o "numpy.poly1d.o")
The order or degree of the polynomial

[`order`](numpy.poly1d.order#numpy.poly1d.order "numpy.poly1d.order")
The order or degree of the polynomial

[`r`](numpy.poly1d.r#numpy.poly1d.r "numpy.poly1d.r")
The roots of the polynomial, where self(x) == 0

[`roots`](numpy.roots#numpy.roots "numpy.roots")
The roots of the polynomial, where self(x) == 0

[`variable`](numpy.poly1d.variable#numpy.poly1d.variable "numpy.poly1d.variable")
The name of the polynomial variable

#### Methods

| | |
| --- | --- |
| [`__call__`](numpy.poly1d.__call__#numpy.poly1d.__call__ "numpy.poly1d.__call__")(val) | Call self as a function. |
| [`deriv`](numpy.poly1d.deriv#numpy.poly1d.deriv "numpy.poly1d.deriv")([m]) | Return a derivative of this polynomial. |
| [`integ`](numpy.poly1d.integ#numpy.poly1d.integ "numpy.poly1d.integ")([m, k]) | Return an antiderivative (indefinite integral) of this polynomial. |
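The `deriv` and `integ` methods listed above invert each other up to the constant of integration; a minimal sketch:

```python
import numpy as np

p = np.poly1d([1, 2, 3])   # x^2 + 2x + 3

dp = p.deriv()             # 2x + 2
assert list(dp.c) == [2, 2]

# Integrating the derivative with integration constant k=3 recovers p
assert list(dp.integ(k=3).c) == [1.0, 2.0, 3.0]
```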
<https://numpy.org/doc/1.23/reference/generated/numpy.poly1d.html>

numpy.polyadd
=============

numpy.polyadd(*a1*, *a2*)[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/lib/polynomial.py#L787-L852)

Find the sum of two polynomials.

Note

This forms part of the old polynomial API. Since version 1.4, the new polynomial API defined in [`numpy.polynomial`](../routines.polynomials.package#module-numpy.polynomial "numpy.polynomial") is preferred. A summary of the differences can be found in the [transition guide](../routines.polynomials).

Returns the polynomial resulting from the sum of two input polynomials. Each input must be either a poly1d object or a 1D sequence of polynomial coefficients, from highest to lowest degree.

Parameters

**a1, a2** : array_like or poly1d object
Input polynomials.

Returns

**out** : ndarray or poly1d object
The sum of the inputs. If either input is a poly1d object, then the output is also a poly1d object. Otherwise, it is a 1D array of polynomial coefficients from highest to lowest degree.

See also

[`poly1d`](numpy.poly1d#numpy.poly1d "numpy.poly1d")
A one-dimensional polynomial class.

[`poly`](numpy.poly#numpy.poly "numpy.poly"), [`polyadd`](#numpy.polyadd "numpy.polyadd"), [`polyder`](numpy.polyder#numpy.polyder "numpy.polyder"), [`polydiv`](numpy.polydiv#numpy.polydiv "numpy.polydiv"), [`polyfit`](numpy.polyfit#numpy.polyfit "numpy.polyfit"), [`polyint`](numpy.polyint#numpy.polyint "numpy.polyint"), [`polysub`](numpy.polysub#numpy.polysub "numpy.polysub"), [`polyval`](numpy.polyval#numpy.polyval "numpy.polyval")

#### Examples

```
>>> np.polyadd([1, 2], [9, 5, 4])
array([9, 6, 6])
```

Using poly1d objects:

```
>>> p1 = np.poly1d([1, 2])
>>> p2 = np.poly1d([9, 5, 4])
>>> print(p1)
1 x + 2
>>> print(p2)
   2
9 x + 5 x + 4
>>> print(np.polyadd(p1, p2))
   2
9 x + 6 x + 6
```
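Since the note above recommends the `numpy.polynomial` API for new code, here is a sketch of the same addition with the new API. Note the reversed convention: the new `polyadd` orders coefficients from lowest to highest degree.

```python
import numpy as np
from numpy.polynomial import polynomial as P

# Old API: coefficients highest -> lowest degree; (x + 2) + (9x^2 + 5x + 4)
old = np.polyadd([1, 2], [9, 5, 4])

# New API: the same polynomials with coefficients lowest -> highest degree
new = P.polyadd([2, 1], [4, 5, 9])

# Both represent 9x^2 + 6x + 6, just with opposite coefficient order
assert list(old) == [9, 6, 6]
assert list(new) == [6, 6, 9]
```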
<https://numpy.org/doc/1.23/reference/generated/numpy.polyadd.html>

numpy.polyval
=============

numpy.polyval(*p*, *x*)[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/lib/polynomial.py#L704-L780)

Evaluate a polynomial at specific values.

Note

This forms part of the old polynomial API. Since version 1.4, the new polynomial API defined in [`numpy.polynomial`](../routines.polynomials.package#module-numpy.polynomial "numpy.polynomial") is preferred. A summary of the differences can be found in the [transition guide](../routines.polynomials).

If `p` is of length N, this function returns the value:

`p[0]*x**(N-1) + p[1]*x**(N-2) + ... + p[N-2]*x + p[N-1]`

If `x` is a sequence, then `p(x)` is returned for each element of `x`. If `x` is another polynomial then the composite polynomial `p(x(t))` is returned.

Parameters

**p** : array_like or poly1d object
1D array of polynomial coefficients (including coefficients equal to zero) from highest degree to the constant term, or an instance of poly1d.

**x** : array_like or poly1d object
A number, an array of numbers, or an instance of poly1d, at which to evaluate `p`.

Returns

**values** : ndarray or poly1d
If `x` is a poly1d instance, the result is the composition of the two polynomials, i.e., `x` is “substituted” in `p` and the simplified result is returned. In addition, the type of `x` (array_like or poly1d) governs the type of the output: if `x` is array_like, `values` is array_like; if `x` is a poly1d object, `values` is also a poly1d object.

See also

[`poly1d`](numpy.poly1d#numpy.poly1d "numpy.poly1d")
A polynomial class.

#### Notes

Horner’s scheme [[1]](#r138ee7027ddf-1) is used to evaluate the polynomial. Even so, for polynomials of high degree the values may be inaccurate due to rounding errors. Use carefully. If `x` is a subtype of [`ndarray`](numpy.ndarray#numpy.ndarray "numpy.ndarray") the return value will be of the same type.

#### References

[1](#id1) <NAME>, <NAME>, and <NAME> (Eng. trans.
Ed.), *Handbook of Mathematics*, New York, Van Nostrand Reinhold Co., 1985, pg. 720.

#### Examples

```
>>> np.polyval([3, 0, 1], 5)  # 3 * 5**2 + 0 * 5**1 + 1
76
>>> np.polyval([3, 0, 1], np.poly1d(5))
poly1d([76])
>>> np.polyval(np.poly1d([3, 0, 1]), 5)
76
>>> np.polyval(np.poly1d([3, 0, 1]), np.poly1d(5))
poly1d([76])
```

<https://numpy.org/doc/1.23/reference/generated/numpy.polyval.html>

numpy.polyfit
=============

numpy.polyfit(*x*, *y*, *deg*, *rcond=None*, *full=False*, *w=None*, *cov=False*)[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/lib/polynomial.py#L452-L697)

Least squares polynomial fit.

Note

This forms part of the old polynomial API. Since version 1.4, the new polynomial API defined in [`numpy.polynomial`](../routines.polynomials.package#module-numpy.polynomial "numpy.polynomial") is preferred. A summary of the differences can be found in the [transition guide](../routines.polynomials).

Fit a polynomial `p(x) = p[0] * x**deg + ... + p[deg]` of degree `deg` to points `(x, y)`. Returns a vector of coefficients `p` that minimises the squared error in the order `deg`, `deg-1`, …,
`0`. The [`Polynomial.fit`](numpy.polynomial.polynomial.polynomial.fit#numpy.polynomial.polynomial.Polynomial.fit "numpy.polynomial.polynomial.Polynomial.fit") class method is recommended for new code as it is more stable numerically. See the documentation of the method for more information.

Parameters

**x** : array_like, shape (M,)
x-coordinates of the M sample points `(x[i], y[i])`.

**y** : array_like, shape (M,) or (M, K)
y-coordinates of the sample points. Several data sets of sample points sharing the same x-coordinates can be fitted at once by passing in a 2D-array that contains one dataset per column.

**deg** : int
Degree of the fitting polynomial.

**rcond** : float, optional
Relative condition number of the fit. Singular values smaller than this relative to the largest singular value will be ignored. The default value is len(x)*eps, where eps is the relative precision of the float type, about 2e-16 in most cases.

**full** : bool, optional
Switch determining the nature of the return value. When it is False (the default) just the coefficients are returned; when True, diagnostic information from the singular value decomposition is also returned.

**w** : array_like, shape (M,), optional
Weights. If not None, the weight `w[i]` applies to the unsquared residual `y[i] - y_hat[i]` at `x[i]`. Ideally the weights are chosen so that the errors of the products `w[i]*y[i]` all have the same variance. When using inverse-variance weighting, use `w[i] = 1/sigma(y[i])`. The default value is None.

**cov** : bool or str, optional
If given and not `False`, return not just the estimate but also its covariance matrix. By default, the covariance is scaled by chi2/dof, where dof = M - (deg + 1), i.e., the weights are presumed to be unreliable except in a relative sense, and everything is scaled such that the reduced chi2 is unity. This scaling is omitted if `cov='unscaled'`, as is relevant for the case that the weights are w = 1/sigma, with sigma known to be a reliable estimate of the uncertainty.
Returns

**p** : ndarray, shape (deg + 1,) or (deg + 1, K)
Polynomial coefficients, highest power first. If `y` was 2-D, the coefficients for the `k`-th data set are in `p[:,k]`.

residuals, rank, singular_values, rcond
These values are only returned if `full == True`.

* residuals – sum of squared residuals of the least squares fit
* rank – the effective rank of the scaled Vandermonde coefficient matrix
* singular_values – singular values of the scaled Vandermonde coefficient matrix
* rcond – value of `rcond`

For more details, see [`numpy.linalg.lstsq`](numpy.linalg.lstsq#numpy.linalg.lstsq "numpy.linalg.lstsq").

**V** : ndarray, shape (M,M) or (M,M,K)
Present only if `full == False` and `cov == True`. The covariance matrix of the polynomial coefficient estimates. The diagonal of this matrix contains the variance estimates for each coefficient. If y is a 2-D array, then the covariance matrix for the `k`-th data set is in `V[:,:,k]`.

Warns

RankWarning
The rank of the coefficient matrix in the least-squares fit is deficient. The warning is only raised if `full == False`. The warnings can be turned off by

```
>>> import warnings
>>> warnings.simplefilter('ignore', np.RankWarning)
```

See also

[`polyval`](numpy.polyval#numpy.polyval "numpy.polyval")
Compute polynomial values.

[`linalg.lstsq`](numpy.linalg.lstsq#numpy.linalg.lstsq "numpy.linalg.lstsq")
Computes a least-squares fit.

[`scipy.interpolate.UnivariateSpline`](https://docs.scipy.org/doc/scipy/reference/generated/scipy.interpolate.UnivariateSpline.html#scipy.interpolate.UnivariateSpline "(in SciPy v1.8.1)")
Computes spline fits.

#### Notes

The solution minimizes the squared error

\[E = \sum_{j=0}^k |p(x_j) - y_j|^2\]

in the equations:

```
x[0]**n * p[0] + ... + x[0] * p[n-1] + p[n] = y[0]
x[1]**n * p[0] + ... + x[1] * p[n-1] + p[n] = y[1]
...
x[k]**n * p[0] + ... + x[k] * p[n-1] + p[n] = y[k]
```

The coefficient matrix of the coefficients `p` is a Vandermonde matrix.
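The system of equations above is an ordinary linear least-squares problem on a Vandermonde matrix; a sketch showing that `np.polyfit` agrees with solving it directly via `np.linalg.lstsq`:

```python
import numpy as np

x = np.array([0.0, 1.0, 2.0, 3.0, 4.0, 5.0])
y = np.array([0.0, 0.8, 0.9, 0.1, -0.8, -1.0])
deg = 3

# np.vander builds columns x**deg, ..., x, 1 -- the matrix of the equations above,
# with coefficients ordered highest power first, matching polyfit's output.
A = np.vander(x, deg + 1)
coef, *_ = np.linalg.lstsq(A, y, rcond=None)

assert np.allclose(coef, np.polyfit(x, y, deg))
```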
[`polyfit`](#numpy.polyfit "numpy.polyfit") issues a [`RankWarning`](numpy.rankwarning#numpy.RankWarning "numpy.RankWarning") when the least-squares fit is badly conditioned. This implies that the best fit is not well-defined due to numerical error. The results may be improved by lowering the polynomial degree or by replacing `x` by `x - x.mean()`. The `rcond` parameter can also be set to a value smaller than its default, but the resulting fit may be spurious: including contributions from the small singular values can add numerical noise to the result.

Note that fitting polynomial coefficients is inherently badly conditioned when the degree of the polynomial is large or the interval of sample points is badly centered. The quality of the fit should always be checked in these cases. When polynomial fits are not satisfactory, splines may be a good alternative.

#### References

1. Wikipedia, “Curve fitting”, <https://en.wikipedia.org/wiki/Curve_fitting>
2. Wikipedia, “Polynomial interpolation”, <https://en.wikipedia.org/wiki/Polynomial_interpolation>

#### Examples

```
>>> import warnings
>>> x = np.array([0.0, 1.0, 2.0, 3.0, 4.0, 5.0])
>>> y = np.array([0.0, 0.8, 0.9, 0.1, -0.8, -1.0])
>>> z = np.polyfit(x, y, 3)
>>> z
array([ 0.08703704, -0.81349206,  1.69312169, -0.03968254])  # may vary
```

It is convenient to use [`poly1d`](numpy.poly1d#numpy.poly1d "numpy.poly1d") objects for dealing with polynomials:

```
>>> p = np.poly1d(z)
>>> p(0.5)
0.6143849206349179  # may vary
>>> p(3.5)
-0.34732142857143039  # may vary
>>> p(10)
22.579365079365115  # may vary
```

High-order polynomials may oscillate wildly:

```
>>> with warnings.catch_warnings():
...     warnings.simplefilter('ignore', np.RankWarning)
...     p30 = np.poly1d(np.polyfit(x, y, 30))
...
>>> p30(4)
-0.80000000000000204  # may vary
>>> p30(5)
-0.99999999999999445  # may vary
>>> p30(4.5)
-0.10547061179440398  # may vary
```

Illustration:

```
>>> import matplotlib.pyplot as plt
>>> xp = np.linspace(-2, 6, 100)
>>> _ = plt.plot(x, y, '.', xp, p(xp), '-', xp, p30(xp), '--')
>>> plt.ylim(-2, 2)
(-2, 2)
>>> plt.show()
```

*(Figure: the sample data with the degree-3 and degree-30 fits.)*

<https://numpy.org/doc/1.23/reference/generated/numpy.polyfit.html>

numpy.poly
==========

numpy.poly(*seq_of_zeros*)[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/lib/polynomial.py#L44-L164)

Find the coefficients of a polynomial with the given sequence of roots.

Note

This forms part of the old polynomial API. Since version 1.4, the new polynomial API defined in [`numpy.polynomial`](../routines.polynomials.package#module-numpy.polynomial "numpy.polynomial") is preferred. A summary of the differences can be found in the [transition guide](../routines.polynomials).

Returns the coefficients of the polynomial whose leading coefficient is one for the given sequence of zeros (multiple roots must be included in the sequence as many times as their multiplicity; see Examples). A square matrix (or array, which will be treated as a matrix) can also be given, in which case the coefficients of the characteristic polynomial of the matrix are returned.

Parameters

**seq_of_zeros** : array_like, shape (N,) or (N, N)
A sequence of polynomial roots, or a square array or matrix object.

Returns

**c** : ndarray
1D array of polynomial coefficients from highest to lowest degree: `c[0] * x**(N) + c[1] * x**(N-1) + ... + c[N-1] * x + c[N]` where c[0] always equals 1.

Raises

ValueError
If input is the wrong shape (the input must be a 1-D or square 2-D array).

See also

[`polyval`](numpy.polyval#numpy.polyval "numpy.polyval")
Compute polynomial values.

[`roots`](numpy.roots#numpy.roots "numpy.roots")
Return the roots of a polynomial.
[`polyfit`](numpy.polyfit#numpy.polyfit "numpy.polyfit")
Least squares polynomial fit.

[`poly1d`](numpy.poly1d#numpy.poly1d "numpy.poly1d")
A one-dimensional polynomial class.

#### Notes

Specifying the roots of a polynomial still leaves one degree of freedom, typically represented by an undetermined leading coefficient. [[1]](#r6c2ffae921d1-1) In the case of this function, that coefficient - the first one in the returned array - is always taken as one. (If for some reason you have one other point, the only automatic way presently to leverage that information is to use `polyfit`.)

The characteristic polynomial, \(p_a(t)\), of an `n`-by-`n` matrix **A** is given by \(p_a(t) = \mathrm{det}(t\, \mathbf{I} - \mathbf{A})\), where **I** is the `n`-by-`n` identity matrix. [[2]](#r6c2ffae921d1-2)

#### References

[1](#id1) <NAME> and <NAME>, III, “Algebra and Trigonometry, Enhanced With Graphing Utilities,” Prentice-Hall, pg. 318, 1996.

[2](#id2) <NAME>, “Linear Algebra and Its Applications, 2nd Edition,” Academic Press, pg. 182, 1980.

#### Examples

Given a sequence of a polynomial’s zeros:

```
>>> np.poly((0, 0, 0))  # Multiple root example
array([1., 0., 0., 0.])
```

The line above represents z**3 + 0*z**2 + 0*z + 0.

```
>>> np.poly((-1./2, 0, 1./2))
array([ 1.  ,  0.  , -0.25,  0.  ])
```

The line above represents z**3 - z/4.

```
>>> np.poly((np.random.random(1)[0], 0, np.random.random(1)[0]))
array([ 1.        , -0.77086955,  0.08618131,  0.        ])  # random
```

Given a square array object:

```
>>> P = np.array([[0, 1./3], [-1./2, 0]])
>>> np.poly(P)
array([1.        , 0.        , 0.16666667])
```

Note how in all cases the leading coefficient is always 1.
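Since `np.poly` maps roots to coefficients and `np.roots` maps coefficients back to roots, the two can be checked against each other with a short round trip (a sketch using only the functions documented on this page):

```python
import numpy as np

# roots -> coefficients -> roots should round-trip
# (up to ordering and floating-point precision)
roots = np.array([2.0, -1.0, 0.5])
coeffs = np.poly(roots)                 # monic polynomial with those roots
recovered = np.sort(np.roots(coeffs))

assert np.allclose(np.sort(roots), recovered)
assert coeffs[0] == 1.0                 # leading coefficient is always 1
```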
<https://numpy.org/doc/1.23/reference/generated/numpy.poly.html>

numpy.polynomial.polynomial.Polynomial
======================================

*class* numpy.polynomial.polynomial.Polynomial(*coef*, *domain=None*, *window=None*)[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/polynomial/polynomial.py#L1472-L1530)

A power series class.

The Polynomial class provides the standard Python numerical methods ‘+’, ‘-’, ‘*’, ‘//’, ‘%’, ‘divmod’, ‘**’, and ‘()’ as well as the attributes and methods listed in the `ABCPolyBase` documentation.

Parameters

**coef** : array_like
Polynomial coefficients in order of increasing degree, i.e., `(1, 2, 3)` gives `1 + 2*x + 3*x**2`.

**domain** : (2,) array_like, optional
Domain to use. The interval `[domain[0], domain[1]]` is mapped to the interval `[window[0], window[1]]` by shifting and scaling. The default value is [-1, 1].

**window** : (2,) array_like, optional
Window, see [`domain`](numpy.polynomial.polynomial.polynomial.domain#numpy.polynomial.polynomial.Polynomial.domain "numpy.polynomial.polynomial.Polynomial.domain") for its use. The default value is [-1, 1]. New in version 1.6.0.

Attributes

**basis_name**

#### Methods

| | |
| --- | --- |
| [`__call__`](numpy.polynomial.polynomial.polynomial.__call__#numpy.polynomial.polynomial.Polynomial.__call__ "numpy.polynomial.polynomial.Polynomial.__call__")(arg) | Call self as a function. |
| [`basis`](numpy.polynomial.polynomial.polynomial.basis#numpy.polynomial.polynomial.Polynomial.basis "numpy.polynomial.polynomial.Polynomial.basis")(deg[, domain, window]) | Series basis polynomial of degree `deg`. |
| [`cast`](numpy.polynomial.polynomial.polynomial.cast#numpy.polynomial.polynomial.Polynomial.cast "numpy.polynomial.polynomial.Polynomial.cast")(series[, domain, window]) | Convert series to series of this class. |
| [`convert`](numpy.polynomial.polynomial.polynomial.convert#numpy.polynomial.polynomial.Polynomial.convert "numpy.polynomial.polynomial.Polynomial.convert")([domain, kind, window]) | Convert series to a different kind and/or domain and/or window. |
| [`copy`](numpy.polynomial.polynomial.polynomial.copy#numpy.polynomial.polynomial.Polynomial.copy "numpy.polynomial.polynomial.Polynomial.copy")() | Return a copy. |
| [`cutdeg`](numpy.polynomial.polynomial.polynomial.cutdeg#numpy.polynomial.polynomial.Polynomial.cutdeg "numpy.polynomial.polynomial.Polynomial.cutdeg")(deg) | Truncate series to the given degree. |
| [`degree`](numpy.polynomial.polynomial.polynomial.degree#numpy.polynomial.polynomial.Polynomial.degree "numpy.polynomial.polynomial.Polynomial.degree")() | The degree of the series. |
| [`deriv`](numpy.polynomial.polynomial.polynomial.deriv#numpy.polynomial.polynomial.Polynomial.deriv "numpy.polynomial.polynomial.Polynomial.deriv")([m]) | Differentiate. |
| [`fit`](numpy.polynomial.polynomial.polynomial.fit#numpy.polynomial.polynomial.Polynomial.fit "numpy.polynomial.polynomial.Polynomial.fit")(x, y, deg[, domain, rcond, full, w, window]) | Least squares fit to data. |
| [`fromroots`](numpy.polynomial.polynomial.polynomial.fromroots#numpy.polynomial.polynomial.Polynomial.fromroots "numpy.polynomial.polynomial.Polynomial.fromroots")(roots[, domain, window]) | Return series instance that has the specified roots. |
| [`has_samecoef`](numpy.polynomial.polynomial.polynomial.has_samecoef#numpy.polynomial.polynomial.Polynomial.has_samecoef "numpy.polynomial.polynomial.Polynomial.has_samecoef")(other) | Check if coefficients match. |
| [`has_samedomain`](numpy.polynomial.polynomial.polynomial.has_samedomain#numpy.polynomial.polynomial.Polynomial.has_samedomain "numpy.polynomial.polynomial.Polynomial.has_samedomain")(other) | Check if domains match. |
| [`has_sametype`](numpy.polynomial.polynomial.polynomial.has_sametype#numpy.polynomial.polynomial.Polynomial.has_sametype "numpy.polynomial.polynomial.Polynomial.has_sametype")(other) | Check if types match. |
| [`has_samewindow`](numpy.polynomial.polynomial.polynomial.has_samewindow#numpy.polynomial.polynomial.Polynomial.has_samewindow "numpy.polynomial.polynomial.Polynomial.has_samewindow")(other) | Check if windows match. |
| [`identity`](numpy.polynomial.polynomial.polynomial.identity#numpy.polynomial.polynomial.Polynomial.identity "numpy.polynomial.polynomial.Polynomial.identity")([domain, window]) | Identity function. |
| [`integ`](numpy.polynomial.polynomial.polynomial.integ#numpy.polynomial.polynomial.Polynomial.integ "numpy.polynomial.polynomial.Polynomial.integ")([m, k, lbnd]) | Integrate. |
| [`linspace`](numpy.polynomial.polynomial.polynomial.linspace#numpy.polynomial.polynomial.Polynomial.linspace "numpy.polynomial.polynomial.Polynomial.linspace")([n, domain]) | Return x, y values at equally spaced points in domain. |
| [`mapparms`](numpy.polynomial.polynomial.polynomial.mapparms#numpy.polynomial.polynomial.Polynomial.mapparms "numpy.polynomial.polynomial.Polynomial.mapparms")() | Return the mapping parameters. |
| [`roots`](numpy.polynomial.polynomial.polynomial.roots#numpy.polynomial.polynomial.Polynomial.roots "numpy.polynomial.polynomial.Polynomial.roots")() | Return the roots of the series polynomial. |
| [`trim`](numpy.polynomial.polynomial.polynomial.trim#numpy.polynomial.polynomial.Polynomial.trim "numpy.polynomial.polynomial.Polynomial.trim")([tol]) | Remove trailing coefficients. |
| [`truncate`](numpy.polynomial.polynomial.polynomial.truncate#numpy.polynomial.polynomial.Polynomial.truncate "numpy.polynomial.polynomial.Polynomial.truncate")(size) | Truncate series to length `size`. |
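A short sketch of the class in use, tying together construction, evaluation, `fromroots`, and `deriv` from the method table above:

```python
import numpy as np
from numpy.polynomial import Polynomial

# Coefficients are given low -> high degree: 1 + 2x + 3x^2
p = Polynomial([1, 2, 3])
assert p(0.0) == 1.0
assert p(1.0) == 6.0

# Build a monic polynomial from its roots and differentiate it
q = Polynomial.fromroots([1, -1])            # x^2 - 1
assert np.allclose(q.coef, [-1.0, 0.0, 1.0])
assert np.allclose(q.deriv().coef, [0.0, 2.0])   # d/dx (x^2 - 1) = 2x
```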
<https://numpy.org/doc/1.23/reference/generated/numpy.polynomial.polynomial.Polynomial.html>

numpy.polynomial.polynomial.Polynomial.fit
==========================================

method

*classmethod* polynomial.polynomial.Polynomial.fit(*x*, *y*, *deg*, *domain=None*, *rcond=None*, *full=False*, *w=None*, *window=None*)[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/polynomial/_polybase.py#L900-L986)

Least squares fit to data.

Return a series instance that is the least squares fit to the data `y` sampled at `x`. The domain of the returned instance can be specified and this will often result in a superior fit with less chance of ill conditioning.

Parameters

**x** : array_like, shape (M,)
x-coordinates of the M sample points `(x[i], y[i])`.

**y** : array_like, shape (M,)
y-coordinates of the M sample points `(x[i], y[i])`.

**deg** : int or 1-D array_like
Degree(s) of the fitting polynomials. If `deg` is a single integer, all terms up to and including the `deg`’th term are included in the fit. For NumPy versions >= 1.11.0 a list of integers specifying the degrees of the terms to include may be used instead.

**domain** : {None, [beg, end], []}, optional
Domain to use for the returned series. If `None`, then a minimal domain that covers the points `x` is chosen. If `[]` the class domain is used. The default value was the class domain in NumPy 1.4 and `None` in later versions. The `[]` option was added in numpy 1.5.0.

**rcond** : float, optional
Relative condition number of the fit. Singular values smaller than this relative to the largest singular value will be ignored. The default value is len(x)*eps, where eps is the relative precision of the float type, about 2e-16 in most cases.

**full** : bool, optional
Switch determining the nature of the return value. When it is False (the default) just the coefficients are returned; when True, diagnostic information from the singular value decomposition is also returned.

**w** : array_like, shape (M,), optional
Weights.
If not None, the weight `w[i]` applies to the unsquared residual `y[i] - y_hat[i]` at `x[i]`. Ideally the weights are chosen so that the errors of the products `w[i]*y[i]` all have the same variance. When using inverse-variance weighting, use `w[i] = 1/sigma(y[i])`. The default value is None. New in version 1.5.0.

**window** : {[beg, end]}, optional
Window to use for the returned series. The default value is the default class domain. New in version 1.6.0.

Returns

**new_series** : series
A series that represents the least squares fit to the data and has the domain and window specified in the call. If the coefficients for the unscaled and unshifted basis polynomials are of interest, do `new_series.convert().coef`.

**[resid, rank, sv, rcond]** : list
These values are only returned if `full == True`.

* resid – sum of squared residuals of the least squares fit
* rank – the numerical rank of the scaled Vandermonde matrix
* sv – singular values of the scaled Vandermonde matrix
* rcond – value of `rcond`

For more details, see [`linalg.lstsq`](numpy.linalg.lstsq#numpy.linalg.lstsq "numpy.linalg.lstsq").

<https://numpy.org/doc/1.23/reference/generated/numpy.polynomial.polynomial.Polynomial.fit.html>

numpy.polynomial.polynomial.Polynomial.convert
==============================================

method

polynomial.polynomial.Polynomial.convert(*domain=None*, *kind=None*, *window=None*)[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/polynomial/_polybase.py#L732-L767)

Convert series to a different kind and/or domain and/or window.

Parameters

**domain** : array_like, optional
The domain of the converted series. If the value is None, the default domain of `kind` is used.

**kind** : class, optional
The polynomial series type class to which the current instance should be converted. If kind is None, then the class of the current instance is used.

**window** : array_like, optional
The window of the converted series.
If the value is None, the default window of `kind` is used.

Returns

**new_series**series The returned class can be of different type than the current instance and/or have a different domain and/or different window.

#### Notes

Conversion between domains and class types can result in numerically ill defined series.

<https://numpy.org/doc/1.23/reference/generated/numpy.polynomial.polynomial.Polynomial.convert.html>

Random Generator
================

The [`Generator`](#numpy.random.Generator "numpy.random.Generator") provides access to a wide range of distributions, and served as a replacement for [`RandomState`](legacy#numpy.random.RandomState "numpy.random.RandomState"). The main difference between the two is that `Generator` relies on an additional BitGenerator to manage state and generate the random bits, which are then transformed into random values from useful distributions. The default BitGenerator used by `Generator` is [`PCG64`](bit_generators/pcg64#numpy.random.PCG64 "numpy.random.PCG64"). The BitGenerator can be changed by passing an instantized BitGenerator to `Generator`.

numpy.random.default_rng()

Construct a new Generator with the default BitGenerator (PCG64).

Parameters

**seed**{None, int, array_like[ints], SeedSequence, BitGenerator, Generator}, optional A seed to initialize the [`BitGenerator`](bit_generators/generated/numpy.random.bitgenerator#numpy.random.BitGenerator "numpy.random.BitGenerator"). If None, then fresh, unpredictable entropy will be pulled from the OS. If an `int` or `array_like[ints]` is passed, then it will be passed to [`SeedSequence`](bit_generators/generated/numpy.random.seedsequence#numpy.random.SeedSequence "numpy.random.SeedSequence") to derive the initial [`BitGenerator`](bit_generators/generated/numpy.random.bitgenerator#numpy.random.BitGenerator "numpy.random.BitGenerator") state.
One may also pass in a [`SeedSequence`](bit_generators/generated/numpy.random.seedsequence#numpy.random.SeedSequence "numpy.random.SeedSequence") instance. Additionally, when passed a [`BitGenerator`](bit_generators/generated/numpy.random.bitgenerator#numpy.random.BitGenerator "numpy.random.BitGenerator"), it will be wrapped by [`Generator`](#numpy.random.Generator "numpy.random.Generator"). If passed a [`Generator`](#numpy.random.Generator "numpy.random.Generator"), it will be returned unaltered. Returns Generator The initialized generator object. #### Notes If `seed` is not a [`BitGenerator`](bit_generators/generated/numpy.random.bitgenerator#numpy.random.BitGenerator "numpy.random.BitGenerator") or a [`Generator`](#numpy.random.Generator "numpy.random.Generator"), a new [`BitGenerator`](bit_generators/generated/numpy.random.bitgenerator#numpy.random.BitGenerator "numpy.random.BitGenerator") is instantiated. This function does not manage a default global instance. #### Examples `default_rng` is the recommended constructor for the random number class `Generator`. Here are several ways we can construct a random number generator using `default_rng` and the `Generator` class. 
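Before the page's own examples, the seed-type handling just described (int, `SeedSequence`, `BitGenerator`, `Generator`) can be exercised directly. This sketch is not part of the original page; it only checks the documented behaviors:

```python
import numpy as np
from numpy.random import Generator, PCG64, SeedSequence, default_rng

# An int is fed through SeedSequence, so these two generators
# start from identical state and produce the same first draw:
a = default_rng(12345).random()
b = default_rng(SeedSequence(12345)).random()
assert a == b

# A BitGenerator instance is wrapped in a new Generator:
rng = default_rng(PCG64(12345))
assert isinstance(rng, Generator)
assert rng.random() == a  # PCG64(12345) also seeds via SeedSequence(12345)

# A Generator passed in is returned unaltered:
g = Generator(PCG64())
assert default_rng(g) is g
```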
Here we use `default_rng` to generate a random float:

```
>>> import numpy as np
>>> rng = np.random.default_rng(12345)
>>> print(rng)
Generator(PCG64)
>>> rfloat = rng.random()
>>> rfloat
0.22733602246716966
>>> type(rfloat)
<class 'float'>
```

Here we use `default_rng` to generate 3 random integers between 0 (inclusive) and 10 (exclusive):

```
>>> import numpy as np
>>> rng = np.random.default_rng(12345)
>>> rints = rng.integers(low=0, high=10, size=3)
>>> rints
array([6, 2, 7])
>>> type(rints[0])
<class 'numpy.int64'>
```

Here we specify a seed so that we have reproducible results:

```
>>> import numpy as np
>>> rng = np.random.default_rng(seed=42)
>>> print(rng)
Generator(PCG64)
>>> arr1 = rng.random((3, 3))
>>> arr1
array([[0.77395605, 0.43887844, 0.85859792],
       [0.69736803, 0.09417735, 0.97562235],
       [0.7611397 , 0.78606431, 0.12811363]])
```

If we exit and restart our Python interpreter, we’ll see that we generate the same random numbers again:

```
>>> import numpy as np
>>> rng = np.random.default_rng(seed=42)
>>> arr2 = rng.random((3, 3))
>>> arr2
array([[0.77395605, 0.43887844, 0.85859792],
       [0.69736803, 0.09417735, 0.97562235],
       [0.7611397 , 0.78606431, 0.12811363]])
```

*class* numpy.random.Generator(*bit_generator*)

Container for the BitGenerators. `Generator` exposes a number of methods for generating random numbers drawn from a variety of probability distributions. In addition to the distribution-specific arguments, each method takes a keyword argument `size` that defaults to `None`. If `size` is `None`, then a single value is generated and returned. If `size` is an integer, then a 1-D array filled with generated values is returned. If `size` is a tuple, then an array with that shape is filled and returned.
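The `size` convention described above can be illustrated with a small sketch (not part of the original page):

```python
import numpy as np

rng = np.random.default_rng(0)

x = rng.random()          # size=None -> a single Python float
v = rng.random(5)         # int -> 1-D array of 5 values
m = rng.random((2, 3))    # tuple -> array with that shape

assert isinstance(x, float)
assert v.shape == (5,)
assert m.shape == (2, 3)

# The same convention applies to the other methods, e.g. integers():
assert rng.integers(0, 10, size=(4, 2)).shape == (4, 2)
```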
The function [`numpy.random.default_rng`](#numpy.random.default_rng "numpy.random.default_rng") will instantiate a [`Generator`](#numpy.random.Generator "numpy.random.Generator") with numpy’s default [`BitGenerator`](bit_generators/generated/numpy.random.bitgenerator#numpy.random.BitGenerator "numpy.random.BitGenerator").

**No Compatibility Guarantee**

`Generator` does not provide a version compatibility guarantee. In particular, as better algorithms evolve the bit stream may change.

Parameters

**bit_generator**BitGenerator BitGenerator to use as the core generator.

See also

[`default_rng`](#numpy.random.default_rng "numpy.random.default_rng") Recommended constructor for [`Generator`](#numpy.random.Generator "numpy.random.Generator").

#### Notes

The Python stdlib module `random` contains a pseudo-random number generator with a number of methods that are similar to the ones available in `Generator`. It uses Mersenne Twister, and this bit generator can be accessed using `MT19937`. `Generator`, besides being NumPy-aware, has the advantage that it provides a much larger number of probability distributions to choose from.

#### Examples

```
>>> from numpy.random import Generator, PCG64
>>> rng = Generator(PCG64())
>>> rng.standard_normal()
-0.203  # random
```

Accessing the BitGenerator -------------------------- | | | | --- | --- | | [`bit_generator`](generated/numpy.random.generator.bit_generator#numpy.random.Generator.bit_generator "numpy.random.Generator.bit_generator") | Gets the bit generator instance used by the generator | Simple random data ------------------ | | | | --- | --- | | [`integers`](generated/numpy.random.generator.integers#numpy.random.Generator.integers "numpy.random.Generator.integers")(low[, high, size, dtype, endpoint]) | Return random integers from `low` (inclusive) to `high` (exclusive), or if endpoint=True, `low` (inclusive) to `high` (inclusive).
| | [`random`](generated/numpy.random.generator.random#numpy.random.Generator.random "numpy.random.Generator.random")([size, dtype, out]) | Return random floats in the half-open interval [0.0, 1.0). | | [`choice`](generated/numpy.random.generator.choice#numpy.random.Generator.choice "numpy.random.Generator.choice")(a[, size, replace, p, axis, shuffle]) | Generates a random sample from a given array | | [`bytes`](generated/numpy.random.generator.bytes#numpy.random.Generator.bytes "numpy.random.Generator.bytes")(length) | Return random bytes. | Permutations ------------ The methods for randomly permuting a sequence are | | | | --- | --- | | [`shuffle`](generated/numpy.random.generator.shuffle#numpy.random.Generator.shuffle "numpy.random.Generator.shuffle")(x[, axis]) | Modify an array or sequence in-place by shuffling its contents. | | [`permutation`](generated/numpy.random.generator.permutation#numpy.random.Generator.permutation "numpy.random.Generator.permutation")(x[, axis]) | Randomly permute a sequence, or return a permuted range. | | [`permuted`](generated/numpy.random.generator.permuted#numpy.random.Generator.permuted "numpy.random.Generator.permuted")(x[, axis, out]) | Randomly permute `x` along axis `axis`. | The following table summarizes the behaviors of the methods. | method | copy/in-place | axis handling | | --- | --- | --- | | shuffle | in-place | as if 1d | | permutation | copy | as if 1d | | permuted | either (use ‘out’ for in-place) | axis independent | The following subsections provide more details about the differences. ### In-place vs. 
copy The main difference between [`Generator.shuffle`](generated/numpy.random.generator.shuffle#numpy.random.Generator.shuffle "numpy.random.Generator.shuffle") and [`Generator.permutation`](generated/numpy.random.generator.permutation#numpy.random.Generator.permutation "numpy.random.Generator.permutation") is that [`Generator.shuffle`](generated/numpy.random.generator.shuffle#numpy.random.Generator.shuffle "numpy.random.Generator.shuffle") operates in-place, while [`Generator.permutation`](generated/numpy.random.generator.permutation#numpy.random.Generator.permutation "numpy.random.Generator.permutation") returns a copy. By default, [`Generator.permuted`](generated/numpy.random.generator.permuted#numpy.random.Generator.permuted "numpy.random.Generator.permuted") returns a copy. To operate in-place with [`Generator.permuted`](generated/numpy.random.generator.permuted#numpy.random.Generator.permuted "numpy.random.Generator.permuted"), pass the same array as the first argument *and* as the value of the `out` parameter. For example, ``` >>> rng = np.random.default_rng() >>> x = np.arange(0, 15).reshape(3, 5) >>> x array([[ 0, 1, 2, 3, 4], [ 5, 6, 7, 8, 9], [10, 11, 12, 13, 14]]) >>> y = rng.permuted(x, axis=1, out=x) >>> x array([[ 1, 0, 2, 4, 3], # random [ 6, 7, 8, 9, 5], [10, 14, 11, 13, 12]]) ``` Note that when `out` is given, the return value is `out`: ``` >>> y is x True ``` ### Handling the `axis` parameter An important distinction for these methods is how they handle the `axis` parameter. Both [`Generator.shuffle`](generated/numpy.random.generator.shuffle#numpy.random.Generator.shuffle "numpy.random.Generator.shuffle") and [`Generator.permutation`](generated/numpy.random.generator.permutation#numpy.random.Generator.permutation "numpy.random.Generator.permutation") treat the input as a one-dimensional sequence, and the `axis` parameter determines which dimension of the input array to use as the sequence. 
In the case of a two-dimensional array, `axis=0` will, in effect, rearrange the rows of the array, and `axis=1` will rearrange the columns. For example ``` >>> rng = np.random.default_rng() >>> x = np.arange(0, 15).reshape(3, 5) >>> x array([[ 0, 1, 2, 3, 4], [ 5, 6, 7, 8, 9], [10, 11, 12, 13, 14]]) >>> rng.permutation(x, axis=1) array([[ 1, 3, 2, 0, 4], # random [ 6, 8, 7, 5, 9], [11, 13, 12, 10, 14]]) ``` Note that the columns have been rearranged “in bulk”: the values within each column have not changed. The method [`Generator.permuted`](generated/numpy.random.generator.permuted#numpy.random.Generator.permuted "numpy.random.Generator.permuted") treats the `axis` parameter similar to how [`numpy.sort`](../generated/numpy.sort#numpy.sort "numpy.sort") treats it. Each slice along the given axis is shuffled independently of the others. Compare the following example of the use of [`Generator.permuted`](generated/numpy.random.generator.permuted#numpy.random.Generator.permuted "numpy.random.Generator.permuted") to the above example of [`Generator.permutation`](generated/numpy.random.generator.permutation#numpy.random.Generator.permutation "numpy.random.Generator.permutation"): ``` >>> rng.permuted(x, axis=1) array([[ 1, 0, 2, 4, 3], # random [ 5, 7, 6, 9, 8], [10, 14, 12, 13, 11]]) ``` In this example, the values within each row (i.e. the values along `axis=1`) have been shuffled independently. This is not a “bulk” shuffle of the columns. ### Shuffling non-NumPy sequences [`Generator.shuffle`](generated/numpy.random.generator.shuffle#numpy.random.Generator.shuffle "numpy.random.Generator.shuffle") works on non-NumPy sequences. That is, if it is given a sequence that is not a NumPy array, it shuffles that sequence in-place. 
For example, ``` >>> rng = np.random.default_rng() >>> a = ['A', 'B', 'C', 'D', 'E'] >>> rng.shuffle(a) # shuffle the list in-place >>> a ['B', 'D', 'A', 'E', 'C'] # random ``` Distributions ------------- | | | | --- | --- | | [`beta`](generated/numpy.random.generator.beta#numpy.random.Generator.beta "numpy.random.Generator.beta")(a, b[, size]) | Draw samples from a Beta distribution. | | [`binomial`](generated/numpy.random.generator.binomial#numpy.random.Generator.binomial "numpy.random.Generator.binomial")(n, p[, size]) | Draw samples from a binomial distribution. | | [`chisquare`](generated/numpy.random.generator.chisquare#numpy.random.Generator.chisquare "numpy.random.Generator.chisquare")(df[, size]) | Draw samples from a chi-square distribution. | | [`dirichlet`](generated/numpy.random.generator.dirichlet#numpy.random.Generator.dirichlet "numpy.random.Generator.dirichlet")(alpha[, size]) | Draw samples from the Dirichlet distribution. | | [`exponential`](generated/numpy.random.generator.exponential#numpy.random.Generator.exponential "numpy.random.Generator.exponential")([scale, size]) | Draw samples from an exponential distribution. | | [`f`](generated/numpy.random.generator.f#numpy.random.Generator.f "numpy.random.Generator.f")(dfnum, dfden[, size]) | Draw samples from an F distribution. | | [`gamma`](generated/numpy.random.generator.gamma#numpy.random.Generator.gamma "numpy.random.Generator.gamma")(shape[, scale, size]) | Draw samples from a Gamma distribution. | | [`geometric`](generated/numpy.random.generator.geometric#numpy.random.Generator.geometric "numpy.random.Generator.geometric")(p[, size]) | Draw samples from the geometric distribution. | | [`gumbel`](generated/numpy.random.generator.gumbel#numpy.random.Generator.gumbel "numpy.random.Generator.gumbel")([loc, scale, size]) | Draw samples from a Gumbel distribution. 
| | [`hypergeometric`](generated/numpy.random.generator.hypergeometric#numpy.random.Generator.hypergeometric "numpy.random.Generator.hypergeometric")(ngood, nbad, nsample[, size]) | Draw samples from a Hypergeometric distribution. | | [`laplace`](generated/numpy.random.generator.laplace#numpy.random.Generator.laplace "numpy.random.Generator.laplace")([loc, scale, size]) | Draw samples from the Laplace or double exponential distribution with specified location (or mean) and scale (decay). | | [`logistic`](generated/numpy.random.generator.logistic#numpy.random.Generator.logistic "numpy.random.Generator.logistic")([loc, scale, size]) | Draw samples from a logistic distribution. | | [`lognormal`](generated/numpy.random.generator.lognormal#numpy.random.Generator.lognormal "numpy.random.Generator.lognormal")([mean, sigma, size]) | Draw samples from a log-normal distribution. | | [`logseries`](generated/numpy.random.generator.logseries#numpy.random.Generator.logseries "numpy.random.Generator.logseries")(p[, size]) | Draw samples from a logarithmic series distribution. | | [`multinomial`](generated/numpy.random.generator.multinomial#numpy.random.Generator.multinomial "numpy.random.Generator.multinomial")(n, pvals[, size]) | Draw samples from a multinomial distribution. | | [`multivariate_hypergeometric`](generated/numpy.random.generator.multivariate_hypergeometric#numpy.random.Generator.multivariate_hypergeometric "numpy.random.Generator.multivariate_hypergeometric")(colors, nsample) | Generate variates from a multivariate hypergeometric distribution. | | [`multivariate_normal`](generated/numpy.random.generator.multivariate_normal#numpy.random.Generator.multivariate_normal "numpy.random.Generator.multivariate_normal")(mean, cov[, size, ...]) | Draw random samples from a multivariate normal distribution. 
| | [`negative_binomial`](generated/numpy.random.generator.negative_binomial#numpy.random.Generator.negative_binomial "numpy.random.Generator.negative_binomial")(n, p[, size]) | Draw samples from a negative binomial distribution. | | [`noncentral_chisquare`](generated/numpy.random.generator.noncentral_chisquare#numpy.random.Generator.noncentral_chisquare "numpy.random.Generator.noncentral_chisquare")(df, nonc[, size]) | Draw samples from a noncentral chi-square distribution. | | [`noncentral_f`](generated/numpy.random.generator.noncentral_f#numpy.random.Generator.noncentral_f "numpy.random.Generator.noncentral_f")(dfnum, dfden, nonc[, size]) | Draw samples from the noncentral F distribution. | | [`normal`](generated/numpy.random.generator.normal#numpy.random.Generator.normal "numpy.random.Generator.normal")([loc, scale, size]) | Draw random samples from a normal (Gaussian) distribution. | | [`pareto`](generated/numpy.random.generator.pareto#numpy.random.Generator.pareto "numpy.random.Generator.pareto")(a[, size]) | Draw samples from a Pareto II or Lomax distribution with specified shape. | | [`poisson`](generated/numpy.random.generator.poisson#numpy.random.Generator.poisson "numpy.random.Generator.poisson")([lam, size]) | Draw samples from a Poisson distribution. | | [`power`](generated/numpy.random.generator.power#numpy.random.Generator.power "numpy.random.Generator.power")(a[, size]) | Draws samples in [0, 1] from a power distribution with positive exponent a - 1. | | [`rayleigh`](generated/numpy.random.generator.rayleigh#numpy.random.Generator.rayleigh "numpy.random.Generator.rayleigh")([scale, size]) | Draw samples from a Rayleigh distribution. | | [`standard_cauchy`](generated/numpy.random.generator.standard_cauchy#numpy.random.Generator.standard_cauchy "numpy.random.Generator.standard_cauchy")([size]) | Draw samples from a standard Cauchy distribution with mode = 0. 
| | [`standard_exponential`](generated/numpy.random.generator.standard_exponential#numpy.random.Generator.standard_exponential "numpy.random.Generator.standard_exponential")([size, dtype, method, out]) | Draw samples from the standard exponential distribution. | | [`standard_gamma`](generated/numpy.random.generator.standard_gamma#numpy.random.Generator.standard_gamma "numpy.random.Generator.standard_gamma")(shape[, size, dtype, out]) | Draw samples from a standard Gamma distribution. | | [`standard_normal`](generated/numpy.random.generator.standard_normal#numpy.random.Generator.standard_normal "numpy.random.Generator.standard_normal")([size, dtype, out]) | Draw samples from a standard Normal distribution (mean=0, stdev=1). | | [`standard_t`](generated/numpy.random.generator.standard_t#numpy.random.Generator.standard_t "numpy.random.Generator.standard_t")(df[, size]) | Draw samples from a standard Student's t distribution with `df` degrees of freedom. | | [`triangular`](generated/numpy.random.generator.triangular#numpy.random.Generator.triangular "numpy.random.Generator.triangular")(left, mode, right[, size]) | Draw samples from the triangular distribution over the interval `[left, right]`. | | [`uniform`](generated/numpy.random.generator.uniform#numpy.random.Generator.uniform "numpy.random.Generator.uniform")([low, high, size]) | Draw samples from a uniform distribution. | | [`vonmises`](generated/numpy.random.generator.vonmises#numpy.random.Generator.vonmises "numpy.random.Generator.vonmises")(mu, kappa[, size]) | Draw samples from a von Mises distribution. | | [`wald`](generated/numpy.random.generator.wald#numpy.random.Generator.wald "numpy.random.Generator.wald")(mean, scale[, size]) | Draw samples from a Wald, or inverse Gaussian, distribution. | | [`weibull`](generated/numpy.random.generator.weibull#numpy.random.Generator.weibull "numpy.random.Generator.weibull")(a[, size]) | Draw samples from a Weibull distribution. 
| | [`zipf`](generated/numpy.random.generator.zipf#numpy.random.Generator.zipf "numpy.random.Generator.zipf")(a[, size]) | Draw samples from a Zipf distribution. |

<https://numpy.org/doc/1.23/reference/random/generator.html>

Legacy Random Generation
========================

The [`RandomState`](#numpy.random.RandomState "numpy.random.RandomState") provides access to legacy generators. This generator is considered frozen and will have no further improvements. It is guaranteed to produce the same values as the final point release of NumPy v1.16. These all depend on Box-Muller normals or inverse CDF exponentials or gammas. This class should only be used if it is essential to have randoms that are identical to what would have been produced by previous versions of NumPy.

[`RandomState`](#numpy.random.RandomState "numpy.random.RandomState") adds additional information to the state which is required when using Box-Muller normals since these are produced in pairs. It is important to use [`RandomState.get_state`](generated/numpy.random.randomstate.get_state#numpy.random.RandomState.get_state "numpy.random.RandomState.get_state"), and not the underlying bit generator’s `state`, when accessing the state so that these extra values are saved.

Although we provide the [`MT19937`](bit_generators/mt19937#numpy.random.MT19937 "numpy.random.MT19937") BitGenerator for use independent of [`RandomState`](#numpy.random.RandomState "numpy.random.RandomState"), note that its default seeding uses [`SeedSequence`](bit_generators/generated/numpy.random.seedsequence#numpy.random.SeedSequence "numpy.random.SeedSequence") rather than the legacy seeding algorithm. [`RandomState`](#numpy.random.RandomState "numpy.random.RandomState") will use the legacy seeding algorithm.
The methods to use the legacy seeding algorithm are currently private as the main reason to use them is just to implement [`RandomState`](#numpy.random.RandomState "numpy.random.RandomState"). However, one can reset the state of [`MT19937`](bit_generators/mt19937#numpy.random.MT19937 "numpy.random.MT19937") using the state of the [`RandomState`](#numpy.random.RandomState "numpy.random.RandomState"):

```
from numpy.random import MT19937
from numpy.random import RandomState

rs = RandomState(12345)
mt19937 = MT19937()
mt19937.state = rs.get_state()
rs2 = RandomState(mt19937)

# Same output
rs.standard_normal()
rs2.standard_normal()

rs.random()
rs2.random()

rs.standard_exponential()
rs2.standard_exponential()
```

*class* numpy.random.RandomState(*seed=None*)

Container for the slow Mersenne Twister pseudo-random number generator. Consider using a different BitGenerator with the Generator container instead.

[`RandomState`](#numpy.random.RandomState "numpy.random.RandomState") and [`Generator`](generator#numpy.random.Generator "numpy.random.Generator") expose a number of methods for generating random numbers drawn from a variety of probability distributions. In addition to the distribution-specific arguments, each method takes a keyword argument `size` that defaults to `None`. If `size` is `None`, then a single value is generated and returned. If `size` is an integer, then a 1-D array filled with generated values is returned. If `size` is a tuple, then an array with that shape is filled and returned.

**Compatibility Guarantee**

A fixed bit generator using a fixed seed and a fixed series of calls to ‘RandomState’ methods using the same parameters will always produce the same results up to roundoff error except when the values were incorrect. [`RandomState`](#numpy.random.RandomState "numpy.random.RandomState") is effectively frozen and will only receive updates that are required by changes in the internals of NumPy.
More substantial changes, including algorithmic improvements, are reserved for [`Generator`](generator#numpy.random.Generator "numpy.random.Generator"). Parameters **seed**{None, int, array_like, BitGenerator}, optional Random seed used to initialize the pseudo-random number generator or an instantized BitGenerator. If an integer or array, used as a seed for the MT19937 BitGenerator. Values can be any integer between 0 and 2**32 - 1 inclusive, an array (or other sequence) of such integers, or `None` (the default). If [`seed`](generated/numpy.random.seed#numpy.random.seed "numpy.random.seed") is `None`, then the [`MT19937`](bit_generators/mt19937#numpy.random.MT19937 "numpy.random.MT19937") BitGenerator is initialized by reading data from `/dev/urandom` (or the Windows analogue) if available or seed from the clock otherwise. See also [`Generator`](generator#numpy.random.Generator "numpy.random.Generator") [`MT19937`](bit_generators/mt19937#numpy.random.MT19937 "numpy.random.MT19937") [`numpy.random.BitGenerator`](bit_generators/generated/numpy.random.bitgenerator#numpy.random.BitGenerator "numpy.random.BitGenerator") #### Notes The Python stdlib module “random” also contains a Mersenne Twister pseudo-random number generator with a number of methods that are similar to the ones available in [`RandomState`](#numpy.random.RandomState "numpy.random.RandomState"). [`RandomState`](#numpy.random.RandomState "numpy.random.RandomState"), besides being NumPy-aware, has the advantage that it provides a much larger number of probability distributions to choose from. Seeding and State ----------------- | | | | --- | --- | | [`get_state`](generated/numpy.random.randomstate.get_state#numpy.random.RandomState.get_state "numpy.random.RandomState.get_state")() | Return a tuple representing the internal state of the generator. 
| | [`set_state`](generated/numpy.random.randomstate.set_state#numpy.random.RandomState.set_state "numpy.random.RandomState.set_state")(state) | Set the internal state of the generator from a tuple. | | [`seed`](generated/numpy.random.randomstate.seed#numpy.random.RandomState.seed "numpy.random.RandomState.seed")(self[, seed]) | Reseed a legacy MT19937 BitGenerator | Simple random data ------------------ | | | | --- | --- | | [`rand`](generated/numpy.random.randomstate.rand#numpy.random.RandomState.rand "numpy.random.RandomState.rand")(d0, d1, ..., dn) | Random values in a given shape. | | [`randn`](generated/numpy.random.randomstate.randn#numpy.random.RandomState.randn "numpy.random.RandomState.randn")(d0, d1, ..., dn) | Return a sample (or samples) from the "standard normal" distribution. | | [`randint`](generated/numpy.random.randomstate.randint#numpy.random.RandomState.randint "numpy.random.RandomState.randint")(low[, high, size, dtype]) | Return random integers from `low` (inclusive) to `high` (exclusive). | | [`random_integers`](generated/numpy.random.randomstate.random_integers#numpy.random.RandomState.random_integers "numpy.random.RandomState.random_integers")(low[, high, size]) | Random integers of type `np.int_` between `low` and `high`, inclusive. | | [`random_sample`](generated/numpy.random.randomstate.random_sample#numpy.random.RandomState.random_sample "numpy.random.RandomState.random_sample")([size]) | Return random floats in the half-open interval [0.0, 1.0). | | [`choice`](generated/numpy.random.randomstate.choice#numpy.random.RandomState.choice "numpy.random.RandomState.choice")(a[, size, replace, p]) | Generates a random sample from a given 1-D array | | [`bytes`](generated/numpy.random.randomstate.bytes#numpy.random.RandomState.bytes "numpy.random.RandomState.bytes")(length) | Return random bytes. 
| Permutations ------------ | | | | --- | --- | | [`shuffle`](generated/numpy.random.randomstate.shuffle#numpy.random.RandomState.shuffle "numpy.random.RandomState.shuffle")(x) | Modify a sequence in-place by shuffling its contents. | | [`permutation`](generated/numpy.random.randomstate.permutation#numpy.random.RandomState.permutation "numpy.random.RandomState.permutation")(x) | Randomly permute a sequence, or return a permuted range. | Distributions ------------- | | | | --- | --- | | [`beta`](generated/numpy.random.randomstate.beta#numpy.random.RandomState.beta "numpy.random.RandomState.beta")(a, b[, size]) | Draw samples from a Beta distribution. | | [`binomial`](generated/numpy.random.randomstate.binomial#numpy.random.RandomState.binomial "numpy.random.RandomState.binomial")(n, p[, size]) | Draw samples from a binomial distribution. | | [`chisquare`](generated/numpy.random.randomstate.chisquare#numpy.random.RandomState.chisquare "numpy.random.RandomState.chisquare")(df[, size]) | Draw samples from a chi-square distribution. | | [`dirichlet`](generated/numpy.random.randomstate.dirichlet#numpy.random.RandomState.dirichlet "numpy.random.RandomState.dirichlet")(alpha[, size]) | Draw samples from the Dirichlet distribution. | | [`exponential`](generated/numpy.random.randomstate.exponential#numpy.random.RandomState.exponential "numpy.random.RandomState.exponential")([scale, size]) | Draw samples from an exponential distribution. | | [`f`](generated/numpy.random.randomstate.f#numpy.random.RandomState.f "numpy.random.RandomState.f")(dfnum, dfden[, size]) | Draw samples from an F distribution. | | [`gamma`](generated/numpy.random.randomstate.gamma#numpy.random.RandomState.gamma "numpy.random.RandomState.gamma")(shape[, scale, size]) | Draw samples from a Gamma distribution. | | [`geometric`](generated/numpy.random.randomstate.geometric#numpy.random.RandomState.geometric "numpy.random.RandomState.geometric")(p[, size]) | Draw samples from the geometric distribution. 
| [`gumbel`](generated/numpy.random.randomstate.gumbel#numpy.random.RandomState.gumbel "numpy.random.RandomState.gumbel")([loc, scale, size]) | Draw samples from a Gumbel distribution. |
| [`hypergeometric`](generated/numpy.random.randomstate.hypergeometric#numpy.random.RandomState.hypergeometric "numpy.random.RandomState.hypergeometric")(ngood, nbad, nsample[, size]) | Draw samples from a Hypergeometric distribution. |
| [`laplace`](generated/numpy.random.randomstate.laplace#numpy.random.RandomState.laplace "numpy.random.RandomState.laplace")([loc, scale, size]) | Draw samples from the Laplace or double exponential distribution with specified location (or mean) and scale (decay). |
| [`logistic`](generated/numpy.random.randomstate.logistic#numpy.random.RandomState.logistic "numpy.random.RandomState.logistic")([loc, scale, size]) | Draw samples from a logistic distribution. |
| [`lognormal`](generated/numpy.random.randomstate.lognormal#numpy.random.RandomState.lognormal "numpy.random.RandomState.lognormal")([mean, sigma, size]) | Draw samples from a log-normal distribution. |
| [`logseries`](generated/numpy.random.randomstate.logseries#numpy.random.RandomState.logseries "numpy.random.RandomState.logseries")(p[, size]) | Draw samples from a logarithmic series distribution. |
| [`multinomial`](generated/numpy.random.randomstate.multinomial#numpy.random.RandomState.multinomial "numpy.random.RandomState.multinomial")(n, pvals[, size]) | Draw samples from a multinomial distribution. |
| [`multivariate_normal`](generated/numpy.random.randomstate.multivariate_normal#numpy.random.RandomState.multivariate_normal "numpy.random.RandomState.multivariate_normal")(mean, cov[, size, ...]) | Draw random samples from a multivariate normal distribution. |
| [`negative_binomial`](generated/numpy.random.randomstate.negative_binomial#numpy.random.RandomState.negative_binomial "numpy.random.RandomState.negative_binomial")(n, p[, size]) | Draw samples from a negative binomial distribution. |
| [`noncentral_chisquare`](generated/numpy.random.randomstate.noncentral_chisquare#numpy.random.RandomState.noncentral_chisquare "numpy.random.RandomState.noncentral_chisquare")(df, nonc[, size]) | Draw samples from a noncentral chi-square distribution. |
| [`noncentral_f`](generated/numpy.random.randomstate.noncentral_f#numpy.random.RandomState.noncentral_f "numpy.random.RandomState.noncentral_f")(dfnum, dfden, nonc[, size]) | Draw samples from the noncentral F distribution. |
| [`normal`](generated/numpy.random.randomstate.normal#numpy.random.RandomState.normal "numpy.random.RandomState.normal")([loc, scale, size]) | Draw random samples from a normal (Gaussian) distribution. |
| [`pareto`](generated/numpy.random.randomstate.pareto#numpy.random.RandomState.pareto "numpy.random.RandomState.pareto")(a[, size]) | Draw samples from a Pareto II or Lomax distribution with specified shape. |
| [`poisson`](generated/numpy.random.randomstate.poisson#numpy.random.RandomState.poisson "numpy.random.RandomState.poisson")([lam, size]) | Draw samples from a Poisson distribution. |
| [`power`](generated/numpy.random.randomstate.power#numpy.random.RandomState.power "numpy.random.RandomState.power")(a[, size]) | Draws samples in [0, 1] from a power distribution with positive exponent a - 1. |
| [`rayleigh`](generated/numpy.random.randomstate.rayleigh#numpy.random.RandomState.rayleigh "numpy.random.RandomState.rayleigh")([scale, size]) | Draw samples from a Rayleigh distribution. |
| [`standard_cauchy`](generated/numpy.random.randomstate.standard_cauchy#numpy.random.RandomState.standard_cauchy "numpy.random.RandomState.standard_cauchy")([size]) | Draw samples from a standard Cauchy distribution with mode = 0. |
| [`standard_exponential`](generated/numpy.random.randomstate.standard_exponential#numpy.random.RandomState.standard_exponential "numpy.random.RandomState.standard_exponential")([size]) | Draw samples from the standard exponential distribution. |
| [`standard_gamma`](generated/numpy.random.randomstate.standard_gamma#numpy.random.RandomState.standard_gamma "numpy.random.RandomState.standard_gamma")(shape[, size]) | Draw samples from a standard Gamma distribution. |
| [`standard_normal`](generated/numpy.random.randomstate.standard_normal#numpy.random.RandomState.standard_normal "numpy.random.RandomState.standard_normal")([size]) | Draw samples from a standard Normal distribution (mean=0, stdev=1). |
| [`standard_t`](generated/numpy.random.randomstate.standard_t#numpy.random.RandomState.standard_t "numpy.random.RandomState.standard_t")(df[, size]) | Draw samples from a standard Student's t distribution with `df` degrees of freedom. |
| [`triangular`](generated/numpy.random.randomstate.triangular#numpy.random.RandomState.triangular "numpy.random.RandomState.triangular")(left, mode, right[, size]) | Draw samples from the triangular distribution over the interval `[left, right]`. |
| [`uniform`](generated/numpy.random.randomstate.uniform#numpy.random.RandomState.uniform "numpy.random.RandomState.uniform")([low, high, size]) | Draw samples from a uniform distribution. |
| [`vonmises`](generated/numpy.random.randomstate.vonmises#numpy.random.RandomState.vonmises "numpy.random.RandomState.vonmises")(mu, kappa[, size]) | Draw samples from a von Mises distribution. |
| [`wald`](generated/numpy.random.randomstate.wald#numpy.random.RandomState.wald "numpy.random.RandomState.wald")(mean, scale[, size]) | Draw samples from a Wald, or inverse Gaussian, distribution. |
| [`weibull`](generated/numpy.random.randomstate.weibull#numpy.random.RandomState.weibull "numpy.random.RandomState.weibull")(a[, size]) | Draw samples from a Weibull distribution. |
| [`zipf`](generated/numpy.random.randomstate.zipf#numpy.random.RandomState.zipf "numpy.random.RandomState.zipf")(a[, size]) | Draw samples from a Zipf distribution. |

Functions in numpy.random
-------------------------

Many of the RandomState methods above are exported as functions in [`numpy.random`](index#module-numpy.random "numpy.random"). This usage is discouraged, as it is implemented via a global [`RandomState`](#numpy.random.RandomState "numpy.random.RandomState") instance, which is not advised on two counts:

* It uses global state, which means results will change as the code changes.
* It uses a [`RandomState`](#numpy.random.RandomState "numpy.random.RandomState") rather than the more modern [`Generator`](generator#numpy.random.Generator "numpy.random.Generator").

For backward compatible legacy reasons, we cannot change this. See [Quick Start](index#random-quick-start).

| | |
| --- | --- |
| [`beta`](generated/numpy.random.beta#numpy.random.beta "numpy.random.beta")(a, b[, size]) | Draw samples from a Beta distribution. |
| [`binomial`](generated/numpy.random.binomial#numpy.random.binomial "numpy.random.binomial")(n, p[, size]) | Draw samples from a binomial distribution. |
| [`bytes`](generated/numpy.random.bytes#numpy.random.bytes "numpy.random.bytes")(length) | Return random bytes. |
| [`chisquare`](generated/numpy.random.chisquare#numpy.random.chisquare "numpy.random.chisquare")(df[, size]) | Draw samples from a chi-square distribution. |
| [`choice`](generated/numpy.random.choice#numpy.random.choice "numpy.random.choice")(a[, size, replace, p]) | Generates a random sample from a given 1-D array. |
| [`dirichlet`](generated/numpy.random.dirichlet#numpy.random.dirichlet "numpy.random.dirichlet")(alpha[, size]) | Draw samples from the Dirichlet distribution. |
| [`exponential`](generated/numpy.random.exponential#numpy.random.exponential "numpy.random.exponential")([scale, size]) | Draw samples from an exponential distribution. |
| [`f`](generated/numpy.random.f#numpy.random.f "numpy.random.f")(dfnum, dfden[, size]) | Draw samples from an F distribution. |
| [`gamma`](generated/numpy.random.gamma#numpy.random.gamma "numpy.random.gamma")(shape[, scale, size]) | Draw samples from a Gamma distribution. |
| [`geometric`](generated/numpy.random.geometric#numpy.random.geometric "numpy.random.geometric")(p[, size]) | Draw samples from the geometric distribution. |
| [`get_state`](generated/numpy.random.get_state#numpy.random.get_state "numpy.random.get_state")() | Return a tuple representing the internal state of the generator. |
| [`gumbel`](generated/numpy.random.gumbel#numpy.random.gumbel "numpy.random.gumbel")([loc, scale, size]) | Draw samples from a Gumbel distribution. |
| [`hypergeometric`](generated/numpy.random.hypergeometric#numpy.random.hypergeometric "numpy.random.hypergeometric")(ngood, nbad, nsample[, size]) | Draw samples from a Hypergeometric distribution. |
| [`laplace`](generated/numpy.random.laplace#numpy.random.laplace "numpy.random.laplace")([loc, scale, size]) | Draw samples from the Laplace or double exponential distribution with specified location (or mean) and scale (decay). |
| [`logistic`](generated/numpy.random.logistic#numpy.random.logistic "numpy.random.logistic")([loc, scale, size]) | Draw samples from a logistic distribution. |
| [`lognormal`](generated/numpy.random.lognormal#numpy.random.lognormal "numpy.random.lognormal")([mean, sigma, size]) | Draw samples from a log-normal distribution. |
| [`logseries`](generated/numpy.random.logseries#numpy.random.logseries "numpy.random.logseries")(p[, size]) | Draw samples from a logarithmic series distribution. |
| [`multinomial`](generated/numpy.random.multinomial#numpy.random.multinomial "numpy.random.multinomial")(n, pvals[, size]) | Draw samples from a multinomial distribution. |
| [`multivariate_normal`](generated/numpy.random.multivariate_normal#numpy.random.multivariate_normal "numpy.random.multivariate_normal")(mean, cov[, size, ...]) | Draw random samples from a multivariate normal distribution. |
| [`negative_binomial`](generated/numpy.random.negative_binomial#numpy.random.negative_binomial "numpy.random.negative_binomial")(n, p[, size]) | Draw samples from a negative binomial distribution. |
| [`noncentral_chisquare`](generated/numpy.random.noncentral_chisquare#numpy.random.noncentral_chisquare "numpy.random.noncentral_chisquare")(df, nonc[, size]) | Draw samples from a noncentral chi-square distribution. |
| [`noncentral_f`](generated/numpy.random.noncentral_f#numpy.random.noncentral_f "numpy.random.noncentral_f")(dfnum, dfden, nonc[, size]) | Draw samples from the noncentral F distribution. |
| [`normal`](generated/numpy.random.normal#numpy.random.normal "numpy.random.normal")([loc, scale, size]) | Draw random samples from a normal (Gaussian) distribution. |
| [`pareto`](generated/numpy.random.pareto#numpy.random.pareto "numpy.random.pareto")(a[, size]) | Draw samples from a Pareto II or Lomax distribution with specified shape. |
| [`permutation`](generated/numpy.random.permutation#numpy.random.permutation "numpy.random.permutation")(x) | Randomly permute a sequence, or return a permuted range. |
| [`poisson`](generated/numpy.random.poisson#numpy.random.poisson "numpy.random.poisson")([lam, size]) | Draw samples from a Poisson distribution. |
| [`power`](generated/numpy.random.power#numpy.random.power "numpy.random.power")(a[, size]) | Draws samples in [0, 1] from a power distribution with positive exponent a - 1. |
| [`rand`](generated/numpy.random.rand#numpy.random.rand "numpy.random.rand")(d0, d1, ..., dn) | Random values in a given shape. |
| [`randint`](generated/numpy.random.randint#numpy.random.randint "numpy.random.randint")(low[, high, size, dtype]) | Return random integers from `low` (inclusive) to `high` (exclusive). |
| [`randn`](generated/numpy.random.randn#numpy.random.randn "numpy.random.randn")(d0, d1, ..., dn) | Return a sample (or samples) from the "standard normal" distribution. |
| [`random`](generated/numpy.random.random#numpy.random.random "numpy.random.random")([size]) | Return random floats in the half-open interval [0.0, 1.0). |
| [`random_integers`](generated/numpy.random.random_integers#numpy.random.random_integers "numpy.random.random_integers")(low[, high, size]) | Random integers of type `np.int_` between `low` and `high`, inclusive. |
| [`random_sample`](generated/numpy.random.random_sample#numpy.random.random_sample "numpy.random.random_sample")([size]) | Return random floats in the half-open interval [0.0, 1.0). |
| [`ranf`](generated/numpy.random.ranf#numpy.random.ranf "numpy.random.ranf") | This is an alias of [`random_sample`](generated/numpy.random.random_sample#numpy.random.random_sample "numpy.random.random_sample"). |
| [`rayleigh`](generated/numpy.random.rayleigh#numpy.random.rayleigh "numpy.random.rayleigh")([scale, size]) | Draw samples from a Rayleigh distribution. |
| [`sample`](generated/numpy.random.sample#numpy.random.sample "numpy.random.sample") | This is an alias of [`random_sample`](generated/numpy.random.random_sample#numpy.random.random_sample "numpy.random.random_sample"). |
| [`seed`](generated/numpy.random.seed#numpy.random.seed "numpy.random.seed")([seed]) | Reseed a legacy MT19937 BitGenerator. |
| [`set_state`](generated/numpy.random.set_state#numpy.random.set_state "numpy.random.set_state")(state) | Set the internal state of the generator from a tuple. |
| [`shuffle`](generated/numpy.random.shuffle#numpy.random.shuffle "numpy.random.shuffle")(x) | Modify a sequence in-place by shuffling its contents. |
| [`standard_cauchy`](generated/numpy.random.standard_cauchy#numpy.random.standard_cauchy "numpy.random.standard_cauchy")([size]) | Draw samples from a standard Cauchy distribution with mode = 0. |
| [`standard_exponential`](generated/numpy.random.standard_exponential#numpy.random.standard_exponential "numpy.random.standard_exponential")([size]) | Draw samples from the standard exponential distribution. |
| [`standard_gamma`](generated/numpy.random.standard_gamma#numpy.random.standard_gamma "numpy.random.standard_gamma")(shape[, size]) | Draw samples from a standard Gamma distribution. |
| [`standard_normal`](generated/numpy.random.standard_normal#numpy.random.standard_normal "numpy.random.standard_normal")([size]) | Draw samples from a standard Normal distribution (mean=0, stdev=1). |
| [`standard_t`](generated/numpy.random.standard_t#numpy.random.standard_t "numpy.random.standard_t")(df[, size]) | Draw samples from a standard Student's t distribution with `df` degrees of freedom. |
| [`triangular`](generated/numpy.random.triangular#numpy.random.triangular "numpy.random.triangular")(left, mode, right[, size]) | Draw samples from the triangular distribution over the interval `[left, right]`. |
| [`uniform`](generated/numpy.random.uniform#numpy.random.uniform "numpy.random.uniform")([low, high, size]) | Draw samples from a uniform distribution. |
| [`vonmises`](generated/numpy.random.vonmises#numpy.random.vonmises "numpy.random.vonmises")(mu, kappa[, size]) | Draw samples from a von Mises distribution. |
| [`wald`](generated/numpy.random.wald#numpy.random.wald "numpy.random.wald")(mean, scale[, size]) | Draw samples from a Wald, or inverse Gaussian, distribution. |
| [`weibull`](generated/numpy.random.weibull#numpy.random.weibull "numpy.random.weibull")(a[, size]) | Draw samples from a Weibull distribution. |
| [`zipf`](generated/numpy.random.zipf#numpy.random.zipf "numpy.random.zipf")(a[, size]) | Draw samples from a Zipf distribution. |

© 2005–2022 NumPy Developers Licensed under the 3-clause BSD License.
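The practical difference between the legacy module-level functions and the modern `Generator` API can be sketched as follows. This is an illustrative example only; the seed value is arbitrary:

```python
import numpy as np

# Legacy pattern (discouraged): module-level functions draw from one hidden,
# global RandomState, so any other code touching numpy.random can shift results.
np.random.seed(12345)
legacy_draw = np.random.normal(size=3)

# Modern pattern: an explicit Generator instance with its own private state.
rng = np.random.default_rng(12345)
modern_draw = rng.normal(size=3)

# Same seed, but different algorithms (MT19937 vs. PCG64), so the two streams
# are unrelated; reproducibility comes from reseeding the same object.
```

Either stream is reproducible on its own, but only the `Generator` form keeps that reproducibility local to one object rather than depending on global state.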
<https://numpy.org/doc/1.23/reference/random/legacy.html>

Bit Generators
==============

The random values produced by [`Generator`](../generator#numpy.random.Generator "numpy.random.Generator") originate in a BitGenerator. The BitGenerators do not directly provide random numbers and only contain methods used for seeding, getting or setting the state, jumping or advancing the state, and for accessing low-level wrappers for consumption by code that can efficiently access the functions provided, e.g., [numba](https://numba.pydata.org).

Supported BitGenerators
-----------------------

The included BitGenerators are:

* PCG-64 - The default. A fast generator that can be advanced by an arbitrary amount. See the documentation for [`advance`](generated/numpy.random.pcg64.advance#numpy.random.PCG64.advance "numpy.random.PCG64.advance"). PCG-64 has a period of \(2^{128}\). See the [PCG author’s page](http://www.pcg-random.org/) for more details about this class of PRNG.
* PCG-64 DXSM - An upgraded version of PCG-64 with better statistical properties in parallel contexts. See [Upgrading PCG64 with PCG64DXSM](../upgrading-pcg64#upgrading-pcg64) for more information on these improvements.
* MT19937 - The standard Python BitGenerator. Adds a [`MT19937.jumped`](generated/numpy.random.mt19937.jumped#numpy.random.MT19937.jumped "numpy.random.MT19937.jumped") function that returns a new generator with state as-if \(2^{128}\) draws have been made.
* Philox - A counter-based generator capable of being advanced an arbitrary number of steps or generating independent streams. See the [Random123](https://www.deshawresearch.com/resources_random123.html) page for more details about this class of bit generators.
* SFC64 - A fast generator based on random invertible mappings. Usually the fastest generator of the four. See the [SFC author’s page](http://pracrand.sourceforge.net/RNG_engines.txt) for (a little) more detail.
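To make the choice of algorithm explicit, any of these BitGenerators can be wrapped in a `Generator` by hand. A minimal sketch (the seed is an arbitrary illustration):

```python
import numpy as np
from numpy.random import Generator, MT19937, PCG64DXSM

# default_rng() wraps PCG64, the current default BitGenerator.
rng_default = np.random.default_rng(12345)

# Any other BitGenerator can be chosen explicitly by wrapping it yourself:
rng_dxsm = Generator(PCG64DXSM(12345))   # upgraded PCG64 variant
rng_mt = Generator(MT19937(12345))       # classic Mersenne Twister

# All three expose the same Generator API on top of different bit streams.
sample = rng_dxsm.standard_normal(5)
```

The wrapped BitGenerator remains inspectable through the `Generator.bit_generator` attribute, which is how code can verify which algorithm is actually in use.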
| | |
| --- | --- |
| [`BitGenerator`](generated/numpy.random.bitgenerator#numpy.random.BitGenerator "numpy.random.BitGenerator")([seed]) | Base Class for generic BitGenerators, which provide a stream of random bits based on different algorithms. |

* [MT19937](mt19937)
* [PCG64](pcg64)
* [PCG64DXSM](pcg64dxsm)
* [Philox](philox)
* [SFC64](sfc64)

Upgrading PCG64 with PCG64DXSM
==============================

Uses of the [`PCG64`](bit_generators/pcg64#numpy.random.PCG64 "numpy.random.PCG64") [`BitGenerator`](bit_generators/generated/numpy.random.bitgenerator#numpy.random.BitGenerator "numpy.random.BitGenerator") in a massively-parallel context have been shown to have statistical weaknesses that were not apparent at the first release in numpy 1.17. Most users will never observe this weakness and are safe to continue to use [`PCG64`](bit_generators/pcg64#numpy.random.PCG64 "numpy.random.PCG64"). We have introduced a new [`PCG64DXSM`](bit_generators/pcg64dxsm#numpy.random.PCG64DXSM "numpy.random.PCG64DXSM") [`BitGenerator`](bit_generators/generated/numpy.random.bitgenerator#numpy.random.BitGenerator "numpy.random.BitGenerator") that will eventually become the new default [`BitGenerator`](bit_generators/generated/numpy.random.bitgenerator#numpy.random.BitGenerator "numpy.random.BitGenerator") implementation used by [`default_rng`](generator#numpy.random.default_rng "numpy.random.default_rng") in future releases. [`PCG64DXSM`](bit_generators/pcg64dxsm#numpy.random.PCG64DXSM "numpy.random.PCG64DXSM") solves the statistical weakness while preserving the performance and the features of [`PCG64`](bit_generators/pcg64#numpy.random.PCG64 "numpy.random.PCG64").

Does this affect me?
--------------------

If you

1. only use a single [`Generator`](generator#numpy.random.Generator "numpy.random.Generator") instance,
2. only use [`RandomState`](legacy#numpy.random.RandomState "numpy.random.RandomState") or the functions in [`numpy.random`](index#module-numpy.random "numpy.random"),
3. only use the [`PCG64.jumped`](bit_generators/generated/numpy.random.pcg64.jumped#numpy.random.PCG64.jumped "numpy.random.PCG64.jumped") method to generate parallel streams, or
4. explicitly use a [`BitGenerator`](bit_generators/generated/numpy.random.bitgenerator#numpy.random.BitGenerator "numpy.random.BitGenerator") other than [`PCG64`](bit_generators/pcg64#numpy.random.PCG64 "numpy.random.PCG64"),

then this weakness does not affect you at all. Carry on.

If you use moderate numbers of parallel streams created with [`default_rng`](generator#numpy.random.default_rng "numpy.random.default_rng") or [`SeedSequence.spawn`](bit_generators/generated/numpy.random.seedsequence.spawn#numpy.random.SeedSequence.spawn "numpy.random.SeedSequence.spawn"), in the 1000s, then the chance of observing this weakness is negligibly small. You can continue to use [`PCG64`](bit_generators/pcg64#numpy.random.PCG64 "numpy.random.PCG64") comfortably.

If you use very large numbers of parallel streams, in the millions, and draw large amounts of numbers from each, then the chance of observing this weakness can become non-negligible, if still small. An example of such a use case would be a very large distributed reinforcement learning problem with millions of long Monte Carlo playouts each generating billions of random number draws. Such use cases should consider using [`PCG64DXSM`](bit_generators/pcg64dxsm#numpy.random.PCG64DXSM "numpy.random.PCG64DXSM") explicitly or another modern [`BitGenerator`](bit_generators/generated/numpy.random.bitgenerator#numpy.random.BitGenerator "numpy.random.BitGenerator") like [`SFC64`](bit_generators/sfc64#numpy.random.SFC64 "numpy.random.SFC64") or [`Philox`](bit_generators/philox#numpy.random.Philox "numpy.random.Philox"), but it is unlikely that any old results you may have calculated are invalid.
In any case, the weakness is a kind of [Birthday Paradox](https://en.wikipedia.org/wiki/Birthday_problem) collision. That is, a single pair of parallel streams out of the millions, considered together, might fail a stringent set of statistical tests of randomness. The remaining millions of streams would all be perfectly fine, and the effect of the bad pair in the whole calculation is very likely to be swamped by the remaining streams in most applications.

Technical Details
-----------------

Like many PRNG algorithms, [`PCG64`](bit_generators/pcg64#numpy.random.PCG64 "numpy.random.PCG64") is constructed from a transition function, which advances a 128-bit state, and an output function that mixes the 128-bit state into a 64-bit integer to be output. One of the guiding design principles of the PCG family of PRNGs is to balance the computational cost (and pseudorandomness strength) between the transition function and the output function.

The transition function is a 128-bit linear congruential generator (LCG), which consists of multiplying the 128-bit state with a fixed multiplication constant and then adding a user-chosen increment, in 128-bit modular arithmetic. LCGs are well-analyzed PRNGs with known weaknesses, though 128-bit LCGs are large enough to pass stringent statistical tests on their own, with only the trivial output function. The output function of [`PCG64`](bit_generators/pcg64#numpy.random.PCG64 "numpy.random.PCG64") is intended to patch up some of those known weaknesses by doing “just enough” scrambling of the bits to assist in the statistical properties without adding too much computational cost.

One of these known weaknesses is that advancing the state of the LCG by steps numbering a power of two (`bg.advance(2**N)`) will leave the lower `N` bits identical to the state that was just left. For a single stream drawn from sequentially, this is of little consequence.
The remaining \(128-N\) bits provide plenty of pseudorandomness that will be mixed in for any practical `N` that can be observed in a single stream, which is why one does not need to worry about this if you only use a single stream in your application. Similarly, the [`PCG64.jumped`](bit_generators/generated/numpy.random.pcg64.jumped#numpy.random.PCG64.jumped "numpy.random.PCG64.jumped") method uses a carefully chosen number of steps to avoid creating these collisions. However, once you start creating “randomly-initialized” parallel streams, either using OS entropy by calling [`default_rng`](generator#numpy.random.default_rng "numpy.random.default_rng") repeatedly or using [`SeedSequence.spawn`](bit_generators/generated/numpy.random.seedsequence.spawn#numpy.random.SeedSequence.spawn "numpy.random.SeedSequence.spawn"), then we need to consider how many lower bits need to “collide” in order to create a bad pair of streams, and then evaluate the probability of creating such a collision. [Empirically](https://github.com/numpy/numpy/issues/16313), it has been determined that if one shares the lower 58 bits of state and shares an increment, then the pair of streams, when interleaved, will fail [PractRand](http://pracrand.sourceforge.net/) in a reasonable amount of time, after drawing a few gigabytes of data. Following the standard Birthday Paradox calculations for a collision of 58 bits, we can see that we can create \(2^{29}\), or about half a billion, streams which is when the probability of such a collision becomes high. Half a billion streams is quite high, and the amount of data each stream needs to draw before the statistical correlations become apparent to even the strict `PractRand` tests is in the gigabytes. But this is on the horizon for very large applications like distributed reinforcement learning. 
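This power-of-two stepping property can be observed directly, since PCG64 exposes its raw 128-bit LCG state through the `state` property. A small sketch (the seed and `N` are arbitrary illustrations):

```python
from numpy.random import PCG64

N = 40
bg = PCG64(12345)                          # arbitrary seed
state_before = bg.state["state"]["state"]  # raw 128-bit LCG state

bg.advance(2**N)                           # step the LCG 2**N times
state_after = bg.state["state"]["state"]

# The lower N bits of the LCG state come back to where they started,
# while the upper 128 - N bits have moved on.
low_bits_unchanged = (state_before ^ state_after) % 2**N == 0
```

Note that this inspects the internal LCG state, not the scrambled 64-bit output; the output function is precisely what hides this regularity from a single stream.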
There are reasons to expect that even in these applications a collision probably will not have a practical effect in the total result, since the statistical problem is constrained to just the colliding pair. Now, let us consider the case when the increment is not constrained to be the same. Our implementation of [`PCG64`](bit_generators/pcg64#numpy.random.PCG64 "numpy.random.PCG64") seeds both the state and the increment; that is, two calls to [`default_rng`](generator#numpy.random.default_rng "numpy.random.default_rng") (almost certainly) have different states and increments. Upon our first release, we believed that having the seeded increment would provide a certain amount of extra protection, that one would have to be “close” in both the state space and increment space in order to observe correlations (`PractRand` failures) in a pair of streams. If that were true, then the “bottleneck” for collisions would be the 128-bit entropy pool size inside of [`SeedSequence`](bit_generators/generated/numpy.random.seedsequence#numpy.random.SeedSequence "numpy.random.SeedSequence") (and 128-bit collisions are in the “preposterously unlikely” category). Unfortunately, this is not true. One of the known properties of an LCG is that different increments create *distinct* streams, but with a known relationship. Each LCG has an orbit that traverses all \(2^{128}\) different 128-bit states. Two LCGs with different increments are related in that one can “rotate” the orbit of the first LCG (advance it by a number of steps that we can compute from the two increments) such that both LCGs will then always have the same state, up to an additive constant and maybe an inversion of the bits. If you then iterate both streams in lockstep, then the states will *always* remain related by that same additive constant (and the inversion, if present). 
Recall that [`PCG64`](bit_generators/pcg64#numpy.random.PCG64 "numpy.random.PCG64") is constructed from both a transition function (the LCG) and an output function. It was expected that the scrambling effect of the output function would have been strong enough to make the distinct streams practically independent (i.e. “passing the `PractRand` tests”) unless the two increments were pathologically related to each other (e.g. 1 and 3). The output function XSL-RR of the then-standard PCG algorithm that we implemented in [`PCG64`](bit_generators/pcg64#numpy.random.PCG64 "numpy.random.PCG64") turns out to be too weak to cover up for the 58-bit collision of the underlying LCG that we described above. For any given pair of increments, the size of the “colliding” space of states is the same, so for this weakness, the extra distinctness provided by the increments does not translate into extra protection from statistical correlations that `PractRand` can detect. Fortunately, strengthening the output function is able to correct this weakness and *does* turn the extra distinctness provided by differing increments into additional protection from these low-bit collisions. To the [PCG author’s credit](https://github.com/numpy/numpy/issues/13635#issuecomment-506088698), she had developed a stronger output function in response to related discussions during the long birth of the new [`BitGenerator`](bit_generators/generated/numpy.random.bitgenerator#numpy.random.BitGenerator "numpy.random.BitGenerator") system. We NumPy developers chose to be “conservative” and use the XSL-RR variant that had undergone a longer period of testing at that time. The DXSM output function adopts a “xorshift-multiply” construction used in strong integer hashes that has much better avalanche properties than the XSL-RR output function. 
While there are “pathological” pairs of increments that induce “bad” additive constants that relate the two streams, the vast majority of pairs induce “good” additive constants that make the merely-distinct streams of LCG states into practically-independent output streams. Indeed, now the claim we once made about [`PCG64`](bit_generators/pcg64#numpy.random.PCG64 "numpy.random.PCG64") is actually true of [`PCG64DXSM`](bit_generators/pcg64dxsm#numpy.random.PCG64DXSM "numpy.random.PCG64DXSM"): collisions are possible, but both streams have to simultaneously be both “close” in the 128 bit state space *and* “close” in the 127-bit increment space, so that would be less likely than the negligible chance of colliding in the 128-bit internal [`SeedSequence`](bit_generators/generated/numpy.random.seedsequence#numpy.random.SeedSequence "numpy.random.SeedSequence") pool. The DXSM output function is more computationally intensive than XSL-RR, but some optimizations in the LCG more than make up for the performance hit on most machines, so [`PCG64DXSM`](bit_generators/pcg64dxsm#numpy.random.PCG64DXSM "numpy.random.PCG64DXSM") is a good, safe upgrade. There are, of course, an infinite number of stronger output functions that one could consider, but most will have a greater computational cost, and the DXSM output function has now received many CPU cycles of testing via `PractRand` at this time.

Parallel Random Number Generation
=================================

There are three strategies implemented that can be used to produce repeatable pseudo-random numbers across multiple processes (local or distributed). 
SeedSequence spawning
---------------------

[`SeedSequence`](bit_generators/generated/numpy.random.seedsequence#numpy.random.SeedSequence "numpy.random.SeedSequence") [implements an algorithm](http://www.pcg-random.org/posts/developing-a-seed_seq-alternative.html) to process a user-provided seed, typically as an integer of some size, and to convert it into an initial state for a [`BitGenerator`](bit_generators/generated/numpy.random.bitgenerator#numpy.random.BitGenerator "numpy.random.BitGenerator"). It uses hashing techniques to ensure that low-quality seeds are turned into high quality initial states (at least, with very high probability).

For example, [`MT19937`](bit_generators/mt19937#numpy.random.MT19937 "numpy.random.MT19937") has a state consisting of 624 `uint32` integers. A naive way to take a 32-bit integer seed would be to just set the last element of the state to the 32-bit seed and leave the rest 0s. This is a valid state for [`MT19937`](bit_generators/mt19937#numpy.random.MT19937 "numpy.random.MT19937"), but not a good one. The Mersenne Twister algorithm [suffers if there are too many 0s](http://www.math.sci.hiroshima-u.ac.jp/~m-mat/MT/MT2002/emt19937ar.html). Similarly, two adjacent 32-bit integer seeds (i.e. `12345` and `12346`) would produce very similar streams. [`SeedSequence`](bit_generators/generated/numpy.random.seedsequence#numpy.random.SeedSequence "numpy.random.SeedSequence") avoids these problems by using successions of integer hashes with good [avalanche properties](https://en.wikipedia.org/wiki/Avalanche_effect) to ensure that flipping any bit in the input has about a 50% chance of flipping any bit in the output. Two input seeds that are very close to each other will produce initial states that are very far from each other (with very high probability). It is also constructed in such a way that you can provide arbitrary-sized integers or lists of integers. 
[`SeedSequence`](bit_generators/generated/numpy.random.seedsequence#numpy.random.SeedSequence "numpy.random.SeedSequence") will take all of the bits that you provide and mix them together to produce however many bits the consuming [`BitGenerator`](bit_generators/generated/numpy.random.bitgenerator#numpy.random.BitGenerator "numpy.random.BitGenerator") needs to initialize itself.

These properties together mean that we can safely mix together the usual user-provided seed with simple incrementing counters to get [`BitGenerator`](bit_generators/generated/numpy.random.bitgenerator#numpy.random.BitGenerator "numpy.random.BitGenerator") states that are (to very high probability) independent of each other. We can wrap this together into an API that is easy to use and difficult to misuse.

```
from numpy.random import SeedSequence, default_rng

ss = SeedSequence(12345)

# Spawn off 10 child SeedSequences to pass to child processes.
child_seeds = ss.spawn(10)
streams = [default_rng(s) for s in child_seeds]
```

Child [`SeedSequence`](bit_generators/generated/numpy.random.seedsequence#numpy.random.SeedSequence "numpy.random.SeedSequence") objects can also spawn to make grandchildren, and so on. Each [`SeedSequence`](bit_generators/generated/numpy.random.seedsequence#numpy.random.SeedSequence "numpy.random.SeedSequence") has its position in the tree of spawned [`SeedSequence`](bit_generators/generated/numpy.random.seedsequence#numpy.random.SeedSequence "numpy.random.SeedSequence") objects mixed in with the user-provided seed to generate independent (with very high probability) streams.

```
grandchildren = child_seeds[0].spawn(4)
grand_streams = [default_rng(s) for s in grandchildren]
```

This feature lets you make local decisions about when and how to split up streams without coordination between processes. You do not have to preallocate space to avoid overlapping or request streams from a common global service. 
This general “tree-hashing” scheme is [not unique to numpy](https://www.iro.umontreal.ca/~lecuyer/myftp/papers/parallel-rng-imacs.pdf) but not yet widespread. Python has increasingly-flexible mechanisms for parallelization available, and this scheme fits in very well with that kind of use.

Using this scheme, an upper bound on the probability of a collision can be estimated if one knows the number of streams that you derive. [`SeedSequence`](bit_generators/generated/numpy.random.seedsequence#numpy.random.SeedSequence "numpy.random.SeedSequence") hashes its inputs, both the seed and the spawn-tree-path, down to a 128-bit pool by default. The probability that there is a collision in that pool, pessimistically-estimated ([1](#id3)), will be about \(n^2*2^{-128}\) where `n` is the number of streams spawned. If a program uses an aggressive million streams, about \(2^{20}\), then the probability that at least one pair of them are identical is about \(2^{-88}\), which is in solidly-ignorable territory ([2](#id4)).

[1](#id1) The algorithm is carefully designed to eliminate a number of possible ways to collide. For example, if one only does one level of spawning, it is guaranteed that all states will be unique. But it’s easier to estimate the naive upper bound on a napkin and take comfort knowing that the probability is actually lower.

[2](#id2) In this calculation, we can mostly ignore the amount of numbers drawn from each stream. See [Upgrading PCG64 with PCG64DXSM](upgrading-pcg64#upgrading-pcg64) for the technical details about [`PCG64`](bit_generators/pcg64#numpy.random.PCG64 "numpy.random.PCG64"). The other PRNGs we provide have some extra protection built in that avoids overlaps if the [`SeedSequence`](bit_generators/generated/numpy.random.seedsequence#numpy.random.SeedSequence "numpy.random.SeedSequence") pools differ in the slightest bit. 
[`PCG64DXSM`](bit_generators/pcg64dxsm#numpy.random.PCG64DXSM "numpy.random.PCG64DXSM") has \(2^{127}\) separate cycles determined by the seed in addition to the position in the \(2^{128}\) long period for each cycle, so one has to both get on or near the same cycle *and* seed a nearby position in the cycle. [`Philox`](bit_generators/philox#numpy.random.Philox "numpy.random.Philox") has completely independent cycles determined by the seed. [`SFC64`](bit_generators/sfc64#numpy.random.SFC64 "numpy.random.SFC64") incorporates a 64-bit counter so every unique seed is at least \(2^{64}\) iterations away from any other seed. And finally, [`MT19937`](bit_generators/mt19937#numpy.random.MT19937 "numpy.random.MT19937") has just an unimaginably huge period. Getting a collision internal to [`SeedSequence`](bit_generators/generated/numpy.random.seedsequence#numpy.random.SeedSequence "numpy.random.SeedSequence") is the way a failure would be observed.

Independent Streams
-------------------

[`Philox`](bit_generators/philox#numpy.random.Philox "numpy.random.Philox") is a counter-based RNG that generates values by encrypting an incrementing counter using weak cryptographic primitives. The seed determines the key that is used for the encryption. Unique keys create unique, independent streams. [`Philox`](bit_generators/philox#numpy.random.Philox "numpy.random.Philox") lets you bypass the seeding algorithm to directly set the 128-bit key. Similar, but different, keys will still create independent streams.

```
import secrets
from numpy.random import Philox

# 128-bit number as a seed
root_seed = secrets.getrandbits(128)
streams = [Philox(key=root_seed + stream_id) for stream_id in range(10)]
```

This scheme does require that you avoid reusing stream IDs. This may require coordination between the parallel processes.
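A minimal sketch of the key-uniqueness requirement, using a fixed root seed for reproducibility (in practice you would draw it with `secrets.getrandbits(128)` as above; the hex value below is illustrative):

```python
from numpy.random import Philox

root_seed = 0x123456789ABCDEF0123456789ABCDEF  # fixed illustrative value
keys = [root_seed + stream_id for stream_id in range(10)]
assert len(set(keys)) == len(keys)   # no stream ID reused

# Each key deterministically defines its own stream of raw 64-bit draws.
streams = [Philox(key=k) for k in keys]
first_draws = [int(s.random_raw()) for s in streams]
assert first_draws == [int(Philox(key=k).random_raw()) for k in keys]
```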
Jumping the BitGenerator state
------------------------------

`jumped` advances the state of the BitGenerator *as-if* a large number of random numbers have been drawn, and returns a new instance with this state. The specific number of draws varies by BitGenerator, and ranges from \(2^{64}\) to \(2^{128}\). Additionally, the *as-if* draws also depend on the size of the default random number produced by the specific BitGenerator. The BitGenerators that support `jumped`, along with the period of the BitGenerator, the size of the jump and the bits in the default unsigned random are listed below.

| BitGenerator | Period | Jump Size | Bits per Draw |
| --- | --- | --- | --- |
| MT19937 | \(2^{19937}-1\) | \(2^{128}\) | 32 |
| PCG64 | \(2^{128}\) | \(~2^{127}\) ([3](#id8)) | 64 |
| PCG64DXSM | \(2^{128}\) | \(~2^{127}\) ([3](#id8)) | 64 |
| Philox | \(2^{256}\) | \(2^{128}\) | 64 |

[3] The jump size is \((\phi-1)*2^{128}\) where \(\phi\) is the golden ratio. As the jumps wrap around the period, the actual distances between neighboring streams will slowly grow smaller than the jump size, but using the golden ratio this way is a classic method of constructing a low-discrepancy sequence that spreads out the states around the period optimally. You will not be able to jump enough to make those distances small enough to overlap in your lifetime.

`jumped` can be used to produce long blocks which should be long enough to not overlap.

```
import secrets
from numpy.random import PCG64

seed = secrets.getrandbits(128)
blocked_rng = []
rng = PCG64(seed)
for i in range(10):
    blocked_rng.append(rng.jumped(i))
```

When using `jumped`, one does have to take care not to jump to a stream that was already used. In the above example, one could not later use `blocked_rng[0].jumped()` as it would overlap with `blocked_rng[1]`.
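Two properties worth noting, sketched below with an arbitrary fixed seed: `jumped` never mutates the generator it is called on, and a second batch of jumped streams must continue the numbering (e.g. `range(10, 20)`) to avoid recreating the first batch:

```python
from numpy.random import PCG64

rng = PCG64(20230405)  # arbitrary fixed seed
state_before = rng.state
first_batch = [rng.jumped(i) for i in range(10)]

# jumped() returns new instances; the source generator is untouched.
assert rng.state == state_before

# Continue the numbering for the next batch, or you recreate the same streams.
second_batch = [rng.jumped(i) for i in range(10, 20)]
states = [bg.state["state"]["state"] for bg in first_batch + second_batch]
assert len(set(states)) == len(states)   # all 20 streams are distinct
```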
Like with the independent streams, if the main process here wants to split off 10 more streams by jumping, then it needs to start with `range(10, 20)`, otherwise it would recreate the same streams. On the other hand, if you carefully construct the streams, then you are guaranteed to have streams that do not overlap.

© 2005–2022 NumPy Developers Licensed under the 3-clause BSD License.
<https://numpy.org/doc/1.23/reference/random/parallel.html>

Multithreaded Generation
========================

The four core distributions ([`random`](generated/numpy.random.generator.random#numpy.random.Generator.random "numpy.random.Generator.random"), [`standard_normal`](generated/numpy.random.generator.standard_normal#numpy.random.Generator.standard_normal "numpy.random.Generator.standard_normal"), [`standard_exponential`](generated/numpy.random.generator.standard_exponential#numpy.random.Generator.standard_exponential "numpy.random.Generator.standard_exponential"), and [`standard_gamma`](generated/numpy.random.generator.standard_gamma#numpy.random.Generator.standard_gamma "numpy.random.Generator.standard_gamma")) all allow existing arrays to be filled using the `out` keyword argument. Existing arrays need to be contiguous and well-behaved (writable and aligned). Under normal circumstances, arrays created using the common constructors such as [`numpy.empty`](../generated/numpy.empty#numpy.empty "numpy.empty") will satisfy these requirements. This example makes use of Python 3 [`concurrent.futures`](https://docs.python.org/3/library/concurrent.futures.html#module-concurrent.futures "(in Python v3.10)") to fill an array using multiple threads. Threads are long-lived so that repeated calls do not require any additional overheads from thread creation. The random numbers generated are reproducible in the sense that the same seed will produce the same outputs, given that the number of threads does not change.
```
from numpy.random import default_rng, SeedSequence
import multiprocessing
import concurrent.futures
import numpy as np

class MultithreadedRNG:
    def __init__(self, n, seed=None, threads=None):
        if threads is None:
            threads = multiprocessing.cpu_count()
        self.threads = threads

        seq = SeedSequence(seed)
        self._random_generators = [default_rng(s)
                                   for s in seq.spawn(threads)]

        self.n = n
        self.executor = concurrent.futures.ThreadPoolExecutor(threads)
        self.values = np.empty(n)
        self.step = np.ceil(n / threads).astype(np.int_)

    def fill(self):
        def _fill(random_state, out, first, last):
            random_state.standard_normal(out=out[first:last])

        futures = {}
        for i in range(self.threads):
            args = (_fill,
                    self._random_generators[i],
                    self.values,
                    i * self.step,
                    (i + 1) * self.step)
            futures[self.executor.submit(*args)] = i
        concurrent.futures.wait(futures)

    def __del__(self):
        self.executor.shutdown(False)
```

The multithreaded random number generator can be used to fill an array. The `values` attribute shows the zero value before the fill and the random value after.

```
In [2]: mrng = MultithreadedRNG(10000000, seed=12345)
   ...: print(mrng.values[-1])
Out[2]: 0.0

In [3]: mrng.fill()
   ...: print(mrng.values[-1])
Out[3]: 2.4545724517479104
```

The time required to produce an array using multiple threads can be compared to the time required to generate the same array using a single thread.

```
In [4]: print(mrng.threads)
   ...: %timeit mrng.fill()
Out[4]: 4
   ...: 32.8 ms ± 2.71 ms per loop (mean ± std. dev. of 7 runs, 10 loops each)
```

The single threaded call directly uses the BitGenerator.

```
In [5]: values = np.empty(10000000)
   ...: rg = default_rng()
   ...: %timeit rg.standard_normal(out=values)
Out[5]: 99.6 ms ± 222 µs per loop (mean ± std. dev. of 7 runs, 10 loops each)
```

The gains are substantial and the scaling is reasonable even for arrays that are only moderately large. The gains are even larger when compared to a call that does not use an existing array due to array creation overhead.
```
In [6]: rg = default_rng()
   ...: %timeit rg.standard_normal(10000000)
Out[6]: 125 ms ± 309 µs per loop (mean ± std. dev. of 7 runs, 10 loops each)
```

Note that if `threads` is not set by the user, it will be determined by `multiprocessing.cpu_count()`.

```
In [7]: # simulate the behavior for `threads=None`, if the machine had only one thread
   ...: mrng = MultithreadedRNG(10000000, seed=12345, threads=1)
   ...: print(mrng.values[-1])
Out[7]: 1.1800150052158556
```

<https://numpy.org/doc/1.23/reference/random/multithreading.html>

What’s New or Different
=======================

Warning

The Box-Muller method used to produce NumPy’s normals is no longer available in [`Generator`](generator#numpy.random.Generator "numpy.random.Generator"). It is not possible to reproduce the exact random values using `Generator` for the normal distribution or any other distribution that relies on the normal such as the [`Generator.gamma`](generated/numpy.random.generator.gamma#numpy.random.Generator.gamma "numpy.random.Generator.gamma") or [`Generator.standard_t`](generated/numpy.random.generator.standard_t#numpy.random.Generator.standard_t "numpy.random.Generator.standard_t"). If you require bitwise backward compatible streams, use [`RandomState`](legacy#numpy.random.RandomState "numpy.random.RandomState"), i.e., [`RandomState.gamma`](generated/numpy.random.randomstate.gamma#numpy.random.RandomState.gamma "numpy.random.RandomState.gamma") or [`RandomState.standard_t`](generated/numpy.random.randomstate.standard_t#numpy.random.RandomState.standard_t "numpy.random.RandomState.standard_t").
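The compatibility point in the warning can be sketched directly: `RandomState` streams are bitwise reproducible, while `Generator` produces different normals even when seeded with the same `MT19937` value, because it uses the ziggurat method rather than Box-Muller:

```python
import numpy as np

# RandomState: same seed, bitwise-identical stream.
a = np.random.RandomState(12345).standard_normal(3)
b = np.random.RandomState(12345).standard_normal(3)
assert (a == b).all()

# Generator over MT19937 uses a different (ziggurat) transform,
# so its normals do not match RandomState's Box-Muller output.
g = np.random.Generator(np.random.MT19937(12345)).standard_normal(3)
assert not np.array_equal(g, a)
```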
Quick comparison of legacy [mtrand](legacy#legacy) to the new [`Generator`](generator#numpy.random.Generator "numpy.random.Generator")

| Feature | Older Equivalent | Notes |
| --- | --- | --- |
| [`Generator`](generator#numpy.random.Generator "numpy.random.Generator") | [`RandomState`](legacy#numpy.random.RandomState "numpy.random.RandomState") | `Generator` requires a stream source, called a [`BitGenerator`](bit_generators/generated/numpy.random.bitgenerator#numpy.random.BitGenerator "numpy.random.BitGenerator"). A number of these are provided. `RandomState` uses the Mersenne Twister [`MT19937`](bit_generators/mt19937#numpy.random.MT19937 "numpy.random.MT19937") by default, but can also be instantiated with any BitGenerator. |
| `random` | `random_sample`, `rand` | Access the values in a BitGenerator, convert them to `float64` in the interval `[0.0, 1.0)`. In addition to the `size` kwarg, now supports `dtype='d'` or `dtype='f'`, and an `out` kwarg to fill a user-supplied array. Many other distributions are also supported. |
| `integers` | `randint`, `random_integers` | Use the `endpoint` kwarg to adjust the inclusion or exclusion of the `high` interval endpoint |

And in more detail:

* Simulate from the complex normal distribution (`complex_normal`)
* The normal, exponential and gamma generators use 256-step Ziggurat methods which are 2-10 times faster than NumPy’s default implementation in [`standard_normal`](generated/numpy.random.generator.standard_normal#numpy.random.Generator.standard_normal "numpy.random.Generator.standard_normal"), [`standard_exponential`](generated/numpy.random.generator.standard_exponential#numpy.random.Generator.standard_exponential "numpy.random.Generator.standard_exponential") or [`standard_gamma`](generated/numpy.random.generator.standard_gamma#numpy.random.Generator.standard_gamma "numpy.random.Generator.standard_gamma").
```
In [1]: from numpy.random import Generator, PCG64

In [2]: import numpy.random

In [3]: rng = Generator(PCG64())

In [4]: %timeit -n 1 rng.standard_normal(100000)
   ...: %timeit -n 1 numpy.random.standard_normal(100000)
1.11 ms +- 23.8 us per loop (mean +- std. dev. of 7 runs, 1 loop each)
1.97 ms +- 23.8 us per loop (mean +- std. dev. of 7 runs, 1 loop each)
```

```
In [5]: %timeit -n 1 rng.standard_exponential(100000)
   ...: %timeit -n 1 numpy.random.standard_exponential(100000)
603 us +- 11.4 us per loop (mean +- std. dev. of 7 runs, 1 loop each)
1.45 ms +- 15.6 us per loop (mean +- std. dev. of 7 runs, 1 loop each)
```

```
In [6]: %timeit -n 1 rng.standard_gamma(3.0, 100000)
   ...: %timeit -n 1 numpy.random.standard_gamma(3.0, 100000)
2.16 ms +- 22.8 us per loop (mean +- std. dev. of 7 runs, 1 loop each)
4.07 ms +- 15.4 us per loop (mean +- std. dev. of 7 runs, 1 loop each)
```

* [`integers`](generated/numpy.random.generator.integers#numpy.random.Generator.integers "numpy.random.Generator.integers") is now the canonical way to generate integer random numbers from a discrete uniform distribution. The `rand` and `randn` methods are only available through the legacy [`RandomState`](legacy#numpy.random.RandomState "numpy.random.RandomState"). This replaces both `randint` and the deprecated `random_integers`.
* The Box-Muller method used to produce NumPy’s normals is no longer available.
* All bit generators can produce doubles, uint64s and uint32s via CTypes ([`ctypes`](bit_generators/generated/numpy.random.pcg64.ctypes#numpy.random.PCG64.ctypes "numpy.random.PCG64.ctypes")) and CFFI ([`cffi`](bit_generators/generated/numpy.random.pcg64.cffi#numpy.random.PCG64.cffi "numpy.random.PCG64.cffi")). This allows these bit generators to be used in numba.
* The bit generators can be used in downstream projects via Cython.
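The `integers` replacement for `randint`/`random_integers` can be sketched, including the `endpoint` kwarg noted in the comparison table (seed value arbitrary):

```python
from numpy.random import default_rng

rng = default_rng(12345)

# Default: the high endpoint is excluded, like legacy randint.
draws = rng.integers(5, 6, size=100)
assert (draws == 5).all()

# endpoint=True includes high, like the deprecated random_integers.
draws = rng.integers(5, 6, size=100, endpoint=True)
assert set(draws.tolist()) <= {5, 6}
```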
* Optional `dtype` argument that accepts `np.float32` or `np.float64` to produce either single or double precision uniform random variables for select distributions
  + Uniforms ([`random`](generated/numpy.random.generator.random#numpy.random.Generator.random "numpy.random.Generator.random") and [`integers`](generated/numpy.random.generator.integers#numpy.random.Generator.integers "numpy.random.Generator.integers"))
  + Normals ([`standard_normal`](generated/numpy.random.generator.standard_normal#numpy.random.Generator.standard_normal "numpy.random.Generator.standard_normal"))
  + Standard Gammas ([`standard_gamma`](generated/numpy.random.generator.standard_gamma#numpy.random.Generator.standard_gamma "numpy.random.Generator.standard_gamma"))
  + Standard Exponentials ([`standard_exponential`](generated/numpy.random.generator.standard_exponential#numpy.random.Generator.standard_exponential "numpy.random.Generator.standard_exponential"))

```
In [7]: rng = Generator(PCG64(0))

In [8]: rng.random(3, dtype='d')
Out[8]: array([0.63696169, 0.26978671, 0.04097352])

In [9]: rng.random(3, dtype='f')
Out[9]: array([0.07524014, 0.01652759, 0.17526728], dtype=float32)
```

* Optional `out` argument that allows existing arrays to be filled for select distributions
  + Uniforms ([`random`](generated/numpy.random.generator.random#numpy.random.Generator.random "numpy.random.Generator.random"))
  + Normals ([`standard_normal`](generated/numpy.random.generator.standard_normal#numpy.random.Generator.standard_normal "numpy.random.Generator.standard_normal"))
  + Standard Gammas ([`standard_gamma`](generated/numpy.random.generator.standard_gamma#numpy.random.Generator.standard_gamma "numpy.random.Generator.standard_gamma"))
  + Standard Exponentials ([`standard_exponential`](generated/numpy.random.generator.standard_exponential#numpy.random.Generator.standard_exponential "numpy.random.Generator.standard_exponential"))

  This allows multithreading to fill large arrays in chunks using suitable BitGenerators
in parallel.

```
In [10]: existing = np.zeros(4)

In [11]: rng.random(out=existing[:2])
Out[11]: array([0.91275558, 0.60663578])

In [12]: print(existing)
[0.91275558 0.60663578 0.         0.        ]
```

* Optional `axis` argument for methods like [`choice`](generated/numpy.random.generator.choice#numpy.random.Generator.choice "numpy.random.Generator.choice"), [`permutation`](generated/numpy.random.generator.permutation#numpy.random.Generator.permutation "numpy.random.Generator.permutation") and [`shuffle`](generated/numpy.random.generator.shuffle#numpy.random.Generator.shuffle "numpy.random.Generator.shuffle") that controls which axis an operation is performed over for multi-dimensional arrays.

```
In [13]: rng = Generator(PCG64(123456789))

In [14]: a = np.arange(12).reshape((3, 4))

In [15]: a
Out[15]:
array([[ 0,  1,  2,  3],
       [ 4,  5,  6,  7],
       [ 8,  9, 10, 11]])

In [16]: rng.choice(a, axis=1, size=5)
Out[16]:
array([[ 3,  0,  2,  3,  1],
       [ 7,  4,  6,  7,  5],
       [11,  8, 10, 11,  9]])

In [17]: rng.shuffle(a, axis=1)  # Shuffle in-place

In [18]: a
Out[18]:
array([[ 3,  1,  2,  0],
       [ 7,  5,  6,  4],
       [11,  9, 10,  8]])
```

<https://numpy.org/doc/1.23/reference/random/new-or-different.html>

Performance
===========

Recommendation
--------------

The recommended generator for general use is [`PCG64`](bit_generators/pcg64#numpy.random.PCG64 "numpy.random.PCG64") or its upgraded variant [`PCG64DXSM`](bit_generators/pcg64dxsm#numpy.random.PCG64DXSM "numpy.random.PCG64DXSM") for heavily-parallel use cases. They are statistically high quality, full-featured, and fast on most platforms, but somewhat slow when compiled for 32-bit processes. See [Upgrading PCG64 with PCG64DXSM](upgrading-pcg64#upgrading-pcg64) for details on when heavy parallelism would indicate using [`PCG64DXSM`](bit_generators/pcg64dxsm#numpy.random.PCG64DXSM "numpy.random.PCG64DXSM").
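Constructing a `Generator` around an explicitly chosen bit generator is a one-liner; a minimal sketch (note that `PCG64DXSM` is only available in NumPy >= 1.21):

```python
import numpy as np

rng_default = np.random.default_rng(0)                # PCG64 under the hood
rng_parallel = np.random.Generator(np.random.PCG64DXSM(0))

assert 0.0 <= rng_default.random() < 1.0
assert 0.0 <= rng_parallel.random() < 1.0
```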
[`Philox`](bit_generators/philox#numpy.random.Philox "numpy.random.Philox") is fairly slow, but its statistical properties have very high quality, and it is easy to get an assuredly-independent stream by using unique keys. If that is the style you wish to use for parallel streams, or you are porting from another system that uses that style, then [`Philox`](bit_generators/philox#numpy.random.Philox "numpy.random.Philox") is your choice. [`SFC64`](bit_generators/sfc64#numpy.random.SFC64 "numpy.random.SFC64") is statistically high quality and very fast. However, it lacks jumpability. If you are not using that capability and want lots of speed, even on 32-bit processes, this is your choice. [`MT19937`](bit_generators/mt19937#numpy.random.MT19937 "numpy.random.MT19937") [fails some statistical tests](https://www.iro.umontreal.ca/~lecuyer/myftp/papers/testu01.pdf) and is not especially fast compared to modern PRNGs. For these reasons, we mostly do not recommend using it on its own, only through the legacy [`RandomState`](legacy#numpy.random.RandomState "numpy.random.RandomState") for reproducing old results. That said, it has a very long history as a default in many systems. Timings ------- The timings below are the time in ns to produce 1 random value from a specific distribution. The original [`MT19937`](bit_generators/mt19937#numpy.random.MT19937 "numpy.random.MT19937") generator is much slower since it requires 2 32-bit values to equal the output of the faster generators. Integer performance has a similar ordering. The pattern is similar for other, more complex generators. The normal performance of the legacy [`RandomState`](legacy#numpy.random.RandomState "numpy.random.RandomState") generator is much lower than the other since it uses the Box-Muller transform rather than the Ziggurat method. The performance gap for Exponentials is also large due to the cost of computing the log function to invert the CDF. 
The column labeled MT19937 uses the same 32-bit generator as [`RandomState`](legacy#numpy.random.RandomState "numpy.random.RandomState") but produces random variates using [`Generator`](generator#numpy.random.Generator "numpy.random.Generator").

| | MT19937 | PCG64 | PCG64DXSM | Philox | SFC64 | RandomState |
| --- | --- | --- | --- | --- | --- | --- |
| 32-bit Unsigned Ints | 3.3 | 1.9 | 2.0 | 3.3 | 1.8 | 3.1 |
| 64-bit Unsigned Ints | 5.6 | 3.2 | 2.9 | 4.9 | 2.5 | 5.5 |
| Uniforms | 5.9 | 3.1 | 2.9 | 5.0 | 2.6 | 6.0 |
| Normals | 13.9 | 10.8 | 10.5 | 12.0 | 8.3 | 56.8 |
| Exponentials | 9.1 | 6.0 | 5.8 | 8.1 | 5.4 | 63.9 |
| Gammas | 37.2 | 30.8 | 28.9 | 34.0 | 27.5 | 77.0 |
| Binomials | 21.3 | 17.4 | 17.6 | 19.3 | 15.6 | 21.4 |
| Laplaces | 73.2 | 72.3 | 76.1 | 73.0 | 72.3 | 82.5 |
| Poissons | 111.7 | 103.4 | 100.5 | 109.4 | 90.7 | 115.2 |

The next table presents the performance in percentage relative to values generated by the legacy generator, `RandomState(MT19937())`. The overall performance was computed using a geometric mean.

| | MT19937 | PCG64 | PCG64DXSM | Philox | SFC64 |
| --- | --- | --- | --- | --- | --- |
| 32-bit Unsigned Ints | 96 | 162 | 160 | 96 | 175 |
| 64-bit Unsigned Ints | 97 | 171 | 188 | 113 | 218 |
| Uniforms | 102 | 192 | 206 | 121 | 233 |
| Normals | 409 | 526 | 541 | 471 | 684 |
| Exponentials | 701 | 1071 | 1101 | 784 | 1179 |
| Gammas | 207 | 250 | 266 | 227 | 281 |
| Binomials | 100 | 123 | 122 | 111 | 138 |
| Laplaces | 113 | 114 | 108 | 113 | 114 |
| Poissons | 103 | 111 | 115 | 105 | 127 |
| Overall | 159 | 219 | 225 | 174 | 251 |

Note All timings were taken using Linux on an AMD Ryzen 9 3900X processor.

Performance on different Operating Systems
------------------------------------------

Performance differs across platforms due to compiler and hardware availability (e.g., register width) differences. The default bit generator has been chosen to perform well on 64-bit platforms.
Performance on 32-bit operating systems is very different. The values reported are normalized relative to the speed of MT19937 in each table. A value of 100 indicates that the performance matches the MT19937. Higher values indicate improved performance. These values cannot be compared across tables. ### 64-bit Linux | Distribution | MT19937 | PCG64 | PCG64DXSM | Philox | SFC64 | | --- | --- | --- | --- | --- | --- | | 32-bit Unsigned Ints | 100 | 168 | 166 | 100 | 182 | | 64-bit Unsigned Ints | 100 | 176 | 193 | 116 | 224 | | Uniforms | 100 | 188 | 202 | 118 | 228 | | Normals | 100 | 128 | 132 | 115 | 167 | | Exponentials | 100 | 152 | 157 | 111 | 168 | | Overall | 100 | 161 | 168 | 112 | 192 | ### 64-bit Windows The relative performance on 64-bit Linux and 64-bit Windows is broadly similar with the notable exception of the Philox generator. | Distribution | MT19937 | PCG64 | PCG64DXSM | Philox | SFC64 | | --- | --- | --- | --- | --- | --- | | 32-bit Unsigned Ints | 100 | 155 | 131 | 29 | 150 | | 64-bit Unsigned Ints | 100 | 157 | 143 | 25 | 154 | | Uniforms | 100 | 151 | 144 | 24 | 155 | | Normals | 100 | 129 | 128 | 37 | 150 | | Exponentials | 100 | 150 | 145 | 28 | 159 | | **Overall** | 100 | 148 | 138 | 28 | 154 | ### 32-bit Windows The performance of 64-bit generators on 32-bit Windows is much lower than on 64-bit operating systems due to register width. MT19937, the generator that has been in NumPy since 2005, operates on 32-bit integers. | Distribution | MT19937 | PCG64 | PCG64DXSM | Philox | SFC64 | | --- | --- | --- | --- | --- | --- | | 32-bit Unsigned Ints | 100 | 24 | 34 | 14 | 57 | | 64-bit Unsigned Ints | 100 | 21 | 32 | 14 | 74 | | Uniforms | 100 | 21 | 34 | 16 | 73 | | Normals | 100 | 36 | 57 | 28 | 101 | | Exponentials | 100 | 28 | 44 | 20 | 88 | | **Overall** | 100 | 25 | 39 | 18 | 77 | Note Linux timings used Ubuntu 20.04 and GCC 9.3.0. 
Windows timings were made on Windows 10 using Microsoft C/C++ Optimizing Compiler Version 19 (Visual Studio 2019). All timings were produced on an AMD Ryzen 9 3900X processor.

<https://numpy.org/doc/1.23/reference/random/performance.html>

C API for random
================

New in version 1.19.0.

Access to various distributions below is available via Cython or C-wrapper libraries like CFFI. All the functions accept a [`bitgen_t`](#c.bitgen_t "bitgen_t") as their first argument. To access these from Cython or C, you must link with the `npyrandom` library which is part of the NumPy distribution, located in `numpy/random/lib`.

type bitgen_t

The [`bitgen_t`](#c.bitgen_t "bitgen_t") holds the current state of the BitGenerator and pointers to functions that return standard C types while advancing the state.

```
struct bitgen:
    void *state
    npy_uint64 (*next_uint64)(void *st) nogil
    uint32_t (*next_uint32)(void *st) nogil
    double (*next_double)(void *st) nogil
    npy_uint64 (*next_raw)(void *st) nogil

ctypedef bitgen bitgen_t
```

See [Extending](extending) for examples of using these functions.

The functions are named with the following conventions:

* “standard” refers to the reference values for any parameters. For instance “standard_uniform” means a uniform distribution on the interval `0.0` to `1.0`
* “fill” functions will fill the provided `out` with `cnt` values.
* The functions without “standard” in their name require additional parameters to describe the distributions.
* Functions with `inv` in their name are based on the slower inverse method instead of the significantly faster ziggurat lookup algorithm. The non-ziggurat variants are used in corner cases and for legacy compatibility.
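The same `next_double`/`next_uint64` function pointers described here are also reachable from Python through a bit generator's `ctypes` interface, which can be handy for a quick sanity check before dropping down to C (seed value arbitrary):

```python
from numpy.random import PCG64

bg = PCG64(1234)
iface = bg.ctypes   # holds the state pointer plus the next_* function pointers

u = iface.next_uint64(iface.state)
x = iface.next_double(iface.state)
assert 0 <= u < 2**64
assert 0.0 <= x < 1.0
```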
```
double random_standard_uniform(bitgen_t *bitgen_state)
void random_standard_uniform_fill(bitgen_t *bitgen_state, npy_intp cnt, double *out)
double random_standard_exponential(bitgen_t *bitgen_state)
void random_standard_exponential_fill(bitgen_t *bitgen_state, npy_intp cnt, double *out)
void random_standard_exponential_inv_fill(bitgen_t *bitgen_state, npy_intp cnt, double *out)
double random_standard_normal(bitgen_t *bitgen_state)
void random_standard_normal_fill(bitgen_t *bitgen_state, npy_intp count, double *out)
void random_standard_normal_fill_f(bitgen_t *bitgen_state, npy_intp count, float *out)
double random_standard_gamma(bitgen_t *bitgen_state, double shape)
float random_standard_uniform_f(bitgen_t *bitgen_state)
void random_standard_uniform_fill_f(bitgen_t *bitgen_state, npy_intp cnt, float *out)
float random_standard_exponential_f(bitgen_t *bitgen_state)
void random_standard_exponential_fill_f(bitgen_t *bitgen_state, npy_intp cnt, float *out)
void random_standard_exponential_inv_fill_f(bitgen_t *bitgen_state, npy_intp cnt, float *out)
float random_standard_normal_f(bitgen_t *bitgen_state)
float random_standard_gamma_f(bitgen_t *bitgen_state, float shape)
double random_normal(bitgen_t *bitgen_state, double loc, double scale)
double random_gamma(bitgen_t *bitgen_state, double shape, double scale)
float random_gamma_f(bitgen_t *bitgen_state, float shape, float scale)
double random_exponential(bitgen_t *bitgen_state, double scale)
double random_uniform(bitgen_t *bitgen_state, double lower, double range)
double random_beta(bitgen_t *bitgen_state, double a, double b)
double random_chisquare(bitgen_t *bitgen_state, double df)
double random_f(bitgen_t *bitgen_state, double dfnum, double dfden)
double random_standard_cauchy(bitgen_t *bitgen_state)
double random_pareto(bitgen_t *bitgen_state, double a)
double random_weibull(bitgen_t *bitgen_state, double a)
double random_power(bitgen_t *bitgen_state, double a)
double random_laplace(bitgen_t *bitgen_state, double loc, double scale)
double random_gumbel(bitgen_t *bitgen_state, double loc, double scale)
double random_logistic(bitgen_t *bitgen_state, double loc, double scale)
double random_lognormal(bitgen_t *bitgen_state, double mean, double sigma)
double random_rayleigh(bitgen_t *bitgen_state, double mode)
double random_standard_t(bitgen_t *bitgen_state, double df)
double random_noncentral_chisquare(bitgen_t *bitgen_state, double df, double nonc)
double random_noncentral_f(bitgen_t *bitgen_state, double dfnum, double dfden, double nonc)
double random_wald(bitgen_t *bitgen_state, double mean, double scale)
double random_vonmises(bitgen_t *bitgen_state, double mu, double kappa)
double random_triangular(bitgen_t *bitgen_state, double left, double mode, double right)
npy_int64 random_poisson(bitgen_t *bitgen_state, double lam)
npy_int64 random_negative_binomial(bitgen_t *bitgen_state, double n, double p)
```

type binomial_t

```
typedef struct s_binomial_t {
  int has_binomial; /* !=0: following parameters initialized for binomial */
  double psave;
  RAND_INT_TYPE nsave;
  double r;
  double q;
  double fm;
  RAND_INT_TYPE m;
  double p1;
  double xm;
  double xl;
  double xr;
  double c;
  double laml;
  double lamr;
  double p2;
  double p3;
  double p4;
} binomial_t;
```

```
npy_int64 random_binomial(bitgen_t *bitgen_state, double p, npy_int64 n, binomial_t *binomial)
npy_int64 random_logseries(bitgen_t *bitgen_state, double p)
npy_int64 random_geometric_search(bitgen_t *bitgen_state, double p)
npy_int64 random_geometric_inversion(bitgen_t *bitgen_state, double p)
npy_int64 random_geometric(bitgen_t *bitgen_state, double p)
npy_int64 random_zipf(bitgen_t *bitgen_state, double a)
npy_int64 random_hypergeometric(bitgen_t *bitgen_state, npy_int64 good, npy_int64 bad, npy_int64 sample)
npy_uint64 random_interval(bitgen_t *bitgen_state, npy_uint64 max)
void random_multinomial(bitgen_t *bitgen_state, npy_int64 n, npy_int64 *mnix, double *pix, npy_intp d, binomial_t *binomial)
int random_multivariate_hypergeometric_count(bitgen_t *bitgen_state, npy_int64 total, size_t num_colors, npy_int64 *colors, npy_int64 nsample, size_t num_variates, npy_int64 *variates)
void random_multivariate_hypergeometric_marginals(bitgen_t *bitgen_state, npy_int64 total, size_t num_colors, npy_int64 *colors, npy_int64 nsample, size_t num_variates, npy_int64 *variates)
```

Generate a single integer:

```
npy_int64 random_positive_int64(bitgen_t *bitgen_state)
npy_int32 random_positive_int32(bitgen_t *bitgen_state)
npy_int64 random_positive_int(bitgen_t *bitgen_state)
npy_uint64 random_uint(bitgen_t *bitgen_state)
```

Generate random uint64 numbers in closed interval [off, off + rng]:

```
npy_uint64 random_bounded_uint64(bitgen_t *bitgen_state, npy_uint64 off, npy_uint64 rng, npy_uint64 mask, bool use_masked)
```
Extending
=========

The BitGenerators have been designed to be extendable using standard tools for high-performance Python – numba and Cython. The [`Generator`](generator#numpy.random.Generator "numpy.random.Generator") object can also be used with user-provided BitGenerators as long as these export a small set of required functions.

Numba
-----

Numba can be used with either CTypes or CFFI. The current iteration of the BitGenerators all export a small set of functions through both interfaces. This example shows how numba can be used to produce gaussian samples using a pure Python implementation which is then compiled. The random numbers are provided by `cffi.next_double`.

```
import numpy as np
import numba as nb

from numpy.random import PCG64
from timeit import timeit

bit_gen = PCG64()
next_d = bit_gen.cffi.next_double
state_addr = bit_gen.cffi.state_address

def normals(n, state):
    out = np.empty(n)
    for i in range((n + 1) // 2):
        x1 = 2.0 * next_d(state) - 1.0
        x2 = 2.0 * next_d(state) - 1.0
        r2 = x1 * x1 + x2 * x2
        while r2 >= 1.0 or r2 == 0.0:
            x1 = 2.0 * next_d(state) - 1.0
            x2 = 2.0 * next_d(state) - 1.0
            r2 = x1 * x1 + x2 * x2
        f = np.sqrt(-2.0 * np.log(r2) / r2)
        out[2 * i] = f * x1
        if 2 * i + 1 < n:
            out[2 * i + 1] = f * x2
    return out

# Compile using Numba
normalsj = nb.jit(normals, nopython=True)
# Must use state address not state with numba
n = 10000

def numbacall():
    return normalsj(n, state_addr)

rg = np.random.Generator(PCG64())

def numpycall():
    return rg.normal(size=n)

# Check that the functions work
r1 = numbacall()
r2 = numpycall()
assert r1.shape == (n,)
assert r1.shape == r2.shape

t1 = timeit(numbacall, number=1000)
print(f'{t1:.2f} secs for {n} PCG64 (Numba/PCG64) gaussian randoms')
t2 = timeit(numpycall, number=1000)
print(f'{t2:.2f} secs for {n} PCG64 (NumPy/PCG64) gaussian randoms')
```

Both CTypes and CFFI allow the more complicated distributions to be used directly in Numba after compiling the
file distributions.c into a `DLL` or `so`. An example showing the use of a more complicated distribution is in the `examples` section below.

Cython
------

Cython can be used to unpack the `PyCapsule` provided by a BitGenerator. This example uses [`PCG64`](bit_generators/pcg64#numpy.random.PCG64 "numpy.random.PCG64") and the example from above. The usual caveats for writing high-performance code using Cython – removing bounds checks and wrap around, providing array alignment information – still apply.

```
#!/usr/bin/env python3
#cython: language_level=3
"""
This file shows how to use a BitGenerator to create a distribution.
"""
import numpy as np
cimport numpy as np
cimport cython
from cpython.pycapsule cimport PyCapsule_IsValid, PyCapsule_GetPointer
from libc.stdint cimport uint16_t, uint64_t
from numpy.random cimport bitgen_t
from numpy.random import PCG64
from numpy.random.c_distributions cimport (
    random_standard_uniform_fill, random_standard_uniform_fill_f)


@cython.boundscheck(False)
@cython.wraparound(False)
def uniforms(Py_ssize_t n):
    """
    Create an array of `n` uniformly distributed doubles.
    A 'real' distribution would want to process the values into
    some non-uniform distribution
    """
    cdef Py_ssize_t i
    cdef bitgen_t *rng
    cdef const char *capsule_name = "BitGenerator"
    cdef double[::1] random_values

    x = PCG64()
    capsule = x.capsule
    # Optional check that the capsule is from a BitGenerator
    if not PyCapsule_IsValid(capsule, capsule_name):
        raise ValueError("Invalid pointer to anon_func_state")
    # Cast the pointer
    rng = <bitgen_t *> PyCapsule_GetPointer(capsule, capsule_name)
    random_values = np.empty(n, dtype='float64')
    with x.lock, nogil:
        for i in range(n):
            # Call the function
            random_values[i] = rng.next_double(rng.state)
    randoms = np.asarray(random_values)

    return randoms
```

The BitGenerator can also be directly accessed using the members of the `bitgen_t` struct.
```
@cython.boundscheck(False)
@cython.wraparound(False)
def uint10_uniforms(Py_ssize_t n):
    """Uniform 10 bit integers stored as 16-bit unsigned integers"""
    cdef Py_ssize_t i
    cdef bitgen_t *rng
    cdef const char *capsule_name = "BitGenerator"
    cdef uint16_t[::1] random_values
    cdef int bits_remaining
    cdef int width = 10
    cdef uint64_t buff, mask = 0x3FF

    x = PCG64()
    capsule = x.capsule
    if not PyCapsule_IsValid(capsule, capsule_name):
        raise ValueError("Invalid pointer to anon_func_state")
    rng = <bitgen_t *> PyCapsule_GetPointer(capsule, capsule_name)

    random_values = np.empty(n, dtype='uint16')
    # Best practice is to release GIL and acquire the lock
    bits_remaining = 0
    with x.lock, nogil:
        for i in range(n):
            # Refill the 64-bit buffer once too few bits remain for a draw
            if bits_remaining < width:
                buff = rng.next_uint64(rng.state)
                bits_remaining = 64
            random_values[i] = buff & mask
            buff >>= width
            bits_remaining -= width

    randoms = np.asarray(random_values)
    return randoms
```

Cython can be used to directly access the functions in `numpy/random/c_distributions.pxd`. This requires linking with the `npyrandom` library located in `numpy/random/lib`.

```
def uniforms_ex(bit_generator, Py_ssize_t n, dtype=np.float64):
    """
    Create an array of `n` uniformly distributed doubles via a "fill" function.

    A 'real' distribution would want to process the values into
    some non-uniform distribution

    Parameters
    ----------
    bit_generator: BitGenerator instance
    n: int
        Output vector length
    dtype: {str, dtype}, optional
        Desired dtype, either 'd' (or 'float64') or 'f' (or 'float32').
        The default dtype value is 'd'
    """
    cdef Py_ssize_t i
    cdef bitgen_t *rng
    cdef const char *capsule_name = "BitGenerator"
    cdef np.ndarray randoms

    capsule = bit_generator.capsule
    # Optional check that the capsule is from a BitGenerator
    if not PyCapsule_IsValid(capsule, capsule_name):
        raise ValueError("Invalid pointer to anon_func_state")
    # Cast the pointer
    rng = <bitgen_t *> PyCapsule_GetPointer(capsule, capsule_name)

    _dtype = np.dtype(dtype)
    randoms = np.empty(n, dtype=_dtype)
    if _dtype == np.float32:
        with bit_generator.lock:
            random_standard_uniform_fill_f(rng, n, <float*>np.PyArray_DATA(randoms))
    elif _dtype == np.float64:
        with bit_generator.lock:
            random_standard_uniform_fill(rng, n, <double*>np.PyArray_DATA(randoms))
    else:
        raise TypeError('Unsupported dtype %r for random' % _dtype)
    return randoms
```

See [Extending numpy.random via Cython](examples/cython/index#extending-cython-example) for the complete listings of these examples and a minimal `setup.py` to build the c-extension modules.

CFFI
----

CFFI can be used to directly access the functions in `include/numpy/random/distributions.h`. Some “massaging” of the header file is required:

```
"""
Use cffi to access any of the underlying C functions from
distributions.h
"""
import os
import numpy as np
import cffi
from .parse import parse_distributions_h

ffi = cffi.FFI()

inc_dir = os.path.join(np.get_include(), 'numpy')

# Basic numpy types
ffi.cdef('''
    typedef intptr_t npy_intp;
    typedef unsigned char npy_bool;
''')

parse_distributions_h(ffi, inc_dir)
```

Once the header is parsed by `ffi.cdef`, the functions can be accessed directly from the `_generator` shared object, using the [`BitGenerator.cffi`](bit_generators/generated/numpy.random.bitgenerator.cffi#numpy.random.BitGenerator.cffi "numpy.random.BitGenerator.cffi") interface.
```
# Compare the distributions.h random_standard_normal_fill to
# Generator.standard_normal
bit_gen = np.random.PCG64()
rng = np.random.Generator(bit_gen)
state = bit_gen.state

interface = rng.bit_generator.cffi
n = 100
vals_cffi = ffi.new('double[%d]' % n)
# lib is the shared object opened with ffi.dlopen (see the complete CFFI example)
lib.random_standard_normal_fill(interface.bit_generator, n, vals_cffi)

# reset the state
bit_gen.state = state

vals = rng.standard_normal(n)

for i in range(n):
    assert vals[i] == vals_cffi[i]
```

New Bit Generators
------------------

[`Generator`](generator#numpy.random.Generator "numpy.random.Generator") can be used with user-provided [`BitGenerator`](bit_generators/generated/numpy.random.bitgenerator#numpy.random.BitGenerator "numpy.random.BitGenerator")s. The simplest way to write a new BitGenerator is to examine the pyx file of one of the existing BitGenerators. The key structure that must be provided is the `capsule` which contains a `PyCapsule` to a struct pointer of type `bitgen_t`,

```
typedef struct bitgen {
    void *state;
    uint64_t (*next_uint64)(void *st);
    uint32_t (*next_uint32)(void *st);
    double (*next_double)(void *st);
    uint64_t (*next_raw)(void *st);
} bitgen_t;
```

which provides 5 pointers. The first is an opaque pointer to the data structure used by the BitGenerators. The next three are function pointers which return the next 64- and 32-bit unsigned integers, the next random double and the next raw value. This final function is used for testing and so can be set to the next 64-bit unsigned integer function if not needed. Functions inside `Generator` use this structure as in

```
bitgen_state->next_uint64(bitgen_state->state)
```

Examples
--------

* [Numba](examples/numba)
* [CFFI + Numba](examples/numba_cffi)
* [Cython](examples/cython/index)
  + [setup.py](examples/cython/setup.py)
  + [extending.pyx](examples/cython/extending.pyx)
  + [extending_distributions.pyx](examples/cython/extending_distributions.pyx)
* [CFFI](examples/cffi)
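The `bitgen_t` function pointers can also be exercised from pure Python through a BitGenerator's `ctypes` interface, with no compilation step. The following sketch (not from the NumPy docs; seed and sample count are arbitrary) re-implements the polar Box–Muller loop from the Numba example above using `next_double` directly:

```python
import numpy as np
from numpy.random import PCG64

bit_gen = PCG64(1234)
iface = bit_gen.ctypes        # named tuple wrapping the bitgen_t pointers
next_d = iface.next_double
state = iface.state           # opaque pointer to the state struct

# Raw draws behave as documented: uint64s, and doubles in [0, 1)
assert 0 <= iface.next_uint64(state) < 2**64
assert 0.0 <= next_d(state) < 1.0

def normals(n):
    """Polar Box-Muller using next_double, as in the Numba example."""
    out = np.empty(n)
    for i in range((n + 1) // 2):
        x1 = 2.0 * next_d(state) - 1.0
        x2 = 2.0 * next_d(state) - 1.0
        r2 = x1 * x1 + x2 * x2
        while r2 >= 1.0 or r2 == 0.0:
            x1 = 2.0 * next_d(state) - 1.0
            x2 = 2.0 * next_d(state) - 1.0
            r2 = x1 * x1 + x2 * x2
        f = np.sqrt(-2.0 * np.log(r2) / r2)
        out[2 * i] = f * x1
        if 2 * i + 1 < n:
            out[2 * i + 1] = f * x2
    return out

vals = normals(4000)
assert abs(vals.mean()) < 0.1 and abs(vals.std() - 1.0) < 0.1
```

Calling through ctypes is far slower than the compiled paths above, but it is a convenient way to verify an algorithm before moving it into Numba or Cython.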
numpy.random.BitGenerator
=========================

*class* numpy.random.BitGenerator(*seed=None*)

Base Class for generic BitGenerators, which provide a stream of random bits based on different algorithms. Must be overridden.

Parameters

**seed**{None, int, array_like[ints], SeedSequence}, optional

A seed to initialize the [`BitGenerator`](#numpy.random.BitGenerator "numpy.random.BitGenerator"). If None, then fresh, unpredictable entropy will be pulled from the OS. If an `int` or `array_like[ints]` is passed, then it will be passed to [`SeedSequence`](numpy.random.seedsequence#numpy.random.SeedSequence "numpy.random.SeedSequence") to derive the initial [`BitGenerator`](#numpy.random.BitGenerator "numpy.random.BitGenerator") state. One may also pass in a [`SeedSequence`](numpy.random.seedsequence#numpy.random.SeedSequence "numpy.random.SeedSequence") instance.

See also

[`SeedSequence`](numpy.random.seedsequence#numpy.random.SeedSequence "numpy.random.SeedSequence")

Attributes

**lock**threading.Lock

Lock instance that is shared so that the same BitGenerator can be used in multiple Generators without corrupting the state. Code that generates values from a bit generator should hold the bit generator’s lock.

#### Methods

| | |
| --- | --- |
| [`random_raw`](numpy.random.bitgenerator.random_raw#numpy.random.BitGenerator.random_raw "numpy.random.BitGenerator.random_raw")(self[, size]) | Return randoms as generated by the underlying BitGenerator |

Permuted Congruential Generator (64-bit, PCG64)
===============================================

*class* numpy.random.PCG64(*seed=None*)

BitGenerator for the PCG-64 pseudo-random number generator.
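The base-class behavior documented above — an int seed routed through `SeedSequence`, and `random_raw` exposing the underlying `next_raw` stream — can be checked directly with `PCG64` (a sketch; the seed value is arbitrary):

```python
import numpy as np
from numpy.random import PCG64, SeedSequence

# An int seed is fed through SeedSequence internally,
# so both constructions yield the same initial state
a = PCG64(1234)
b = PCG64(SeedSequence(1234))
assert a.state == b.state

# random_raw returns uint64 draws from the next_raw pointer,
# and a fixed seed gives a reproducible raw stream
raw = a.random_raw(5)
assert raw.shape == (5,) and raw.dtype == np.uint64
assert np.array_equal(raw, PCG64(1234).random_raw(5))
```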
Parameters

**seed**{None, int, array_like[ints], SeedSequence}, optional

A seed to initialize the [`BitGenerator`](generated/numpy.random.bitgenerator#numpy.random.BitGenerator "numpy.random.BitGenerator"). If None, then fresh, unpredictable entropy will be pulled from the OS. If an `int` or `array_like[ints]` is passed, then it will be passed to [`SeedSequence`](generated/numpy.random.seedsequence#numpy.random.SeedSequence "numpy.random.SeedSequence") to derive the initial [`BitGenerator`](generated/numpy.random.bitgenerator#numpy.random.BitGenerator "numpy.random.BitGenerator") state. One may also pass in a [`SeedSequence`](generated/numpy.random.seedsequence#numpy.random.SeedSequence "numpy.random.SeedSequence") instance.

#### Notes

PCG-64 is a 128-bit implementation of O’Neill’s permuted congruential generator ([[1]](#r4523891264fe-1), [[2]](#r4523891264fe-2)). PCG-64 has a period of \(2^{128}\) and supports advancing an arbitrary number of steps as well as \(2^{127}\) streams. The specific member of the PCG family that we use is PCG XSL RR 128/64 as described in the paper ([[2]](#r4523891264fe-2)).

`PCG64` provides a capsule containing function pointers that produce doubles, and unsigned 32- and 64-bit integers. These are not directly consumable in Python and must be consumed by a `Generator` or similar object that supports low-level access.

Supports the method [`advance`](generated/numpy.random.pcg64.advance#numpy.random.PCG64.advance "numpy.random.PCG64.advance") to advance the RNG an arbitrary number of steps. The state of the PCG-64 RNG is represented by 2 128-bit unsigned integers.

**State and Seeding**

The `PCG64` state vector consists of 2 unsigned 128-bit values, which are represented externally as Python ints. One is the state of the PRNG, which is advanced by a linear congruential generator (LCG). The second is a fixed odd increment used in the LCG.
The input seed is processed by [`SeedSequence`](generated/numpy.random.seedsequence#numpy.random.SeedSequence "numpy.random.SeedSequence") to generate both values. The increment is not independently settable.

**Parallel Features**

The preferred way to use a BitGenerator in parallel applications is to use the [`SeedSequence.spawn`](generated/numpy.random.seedsequence.spawn#numpy.random.SeedSequence.spawn "numpy.random.SeedSequence.spawn") method to obtain entropy values, and to use these to generate new BitGenerators:

```
>>> from numpy.random import Generator, PCG64, SeedSequence
>>> sg = SeedSequence(1234)
>>> rg = [Generator(PCG64(s)) for s in sg.spawn(10)]
```

**Compatibility Guarantee**

`PCG64` makes a guarantee that a fixed seed will always produce the same random integer stream.

#### References

[1](#id1) [“PCG, A Family of Better Random Number Generators”](http://www.pcg-random.org/)

2([1](#id2),[2](#id3)) O’Neill, Melissa E. [“PCG: A Family of Simple Fast Space-Efficient Statistically Good Algorithms for Random Number Generation”](https://www.cs.hmc.edu/tr/hmc-cs-2014-0905.pdf)

State
-----

| | |
| --- | --- |
| [`state`](generated/numpy.random.pcg64.state#numpy.random.PCG64.state "numpy.random.PCG64.state") | Get or set the PRNG state |

Parallel generation
-------------------

| | |
| --- | --- |
| [`advance`](generated/numpy.random.pcg64.advance#numpy.random.PCG64.advance "numpy.random.PCG64.advance")(delta) | Advance the underlying RNG as-if delta draws have occurred. |
| [`jumped`](generated/numpy.random.pcg64.jumped#numpy.random.PCG64.jumped "numpy.random.PCG64.jumped")([jumps]) | Returns a new bit generator with the state jumped. |
Extending
---------

| | |
| --- | --- |
| [`cffi`](generated/numpy.random.pcg64.cffi#numpy.random.PCG64.cffi "numpy.random.PCG64.cffi") | CFFI interface |
| [`ctypes`](generated/numpy.random.pcg64.ctypes#numpy.random.PCG64.ctypes "numpy.random.PCG64.ctypes") | ctypes interface |

Mersenne Twister (MT19937)
==========================

*class* numpy.random.MT19937(*seed=None*)

Container for the Mersenne Twister pseudo-random number generator.

Parameters

**seed**{None, int, array_like[ints], SeedSequence}, optional

A seed to initialize the [`BitGenerator`](generated/numpy.random.bitgenerator#numpy.random.BitGenerator "numpy.random.BitGenerator"). If None, then fresh, unpredictable entropy will be pulled from the OS. If an `int` or `array_like[ints]` is passed, then it will be passed to [`SeedSequence`](generated/numpy.random.seedsequence#numpy.random.SeedSequence "numpy.random.SeedSequence") to derive the initial [`BitGenerator`](generated/numpy.random.bitgenerator#numpy.random.BitGenerator "numpy.random.BitGenerator") state. One may also pass in a [`SeedSequence`](generated/numpy.random.seedsequence#numpy.random.SeedSequence "numpy.random.SeedSequence") instance.

#### Notes

`MT19937` provides a capsule containing function pointers that produce doubles, and unsigned 32- and 64-bit integers [[1]](#r312276d80bfa-1). These are not directly consumable in Python and must be consumed by a `Generator` or similar object that supports low-level access. The Python stdlib module “random” also contains a Mersenne Twister pseudo-random number generator.

**State and Seeding**

The `MT19937` state vector consists of a 624-element array of 32-bit unsigned integers plus a single integer value between 0 and 624 that indexes the current position within the main array.
The input seed is processed by [`SeedSequence`](generated/numpy.random.seedsequence#numpy.random.SeedSequence "numpy.random.SeedSequence") to fill the whole state. The first element is reset such that only its most significant bit is set.

**Parallel Features**

The preferred way to use a BitGenerator in parallel applications is to use the [`SeedSequence.spawn`](generated/numpy.random.seedsequence.spawn#numpy.random.SeedSequence.spawn "numpy.random.SeedSequence.spawn") method to obtain entropy values, and to use these to generate new BitGenerators:

```
>>> from numpy.random import Generator, MT19937, SeedSequence
>>> sg = SeedSequence(1234)
>>> rg = [Generator(MT19937(s)) for s in sg.spawn(10)]
```

Another method is to use [`MT19937.jumped`](generated/numpy.random.mt19937.jumped#numpy.random.MT19937.jumped "numpy.random.MT19937.jumped") which advances the state as-if \(2^{128}\) random numbers have been generated ([[1]](#r312276d80bfa-1), [[2]](#r312276d80bfa-2)). This allows the original sequence to be split so that distinct segments can be used in each worker process. All generators should be chained to ensure that the segments come from the same sequence.

```
>>> from numpy.random import Generator, MT19937, SeedSequence
>>> sg = SeedSequence(1234)
>>> bit_generator = MT19937(sg)
>>> rg = []
>>> for _ in range(10):
...    rg.append(Generator(bit_generator))
...    # Chain the BitGenerators
...    bit_generator = bit_generator.jumped()
```

**Compatibility Guarantee**

`MT19937` makes a guarantee that a fixed seed will always produce the same random integer stream.

#### References

1([1](#id1),[2](#id2)) Haramoto, H., Matsumoto, M., and Nishimura, T., “A Fast Jump Ahead Algorithm for Linear Recurrences in a Polynomial Space”, Sequences and Their Applications - SETA, 290–298, 2008.

[2](#id3) Haramoto, H., Matsumoto, M., Nishimura, T., Panneton, F., L’Ecuyer, P., “Efficient Jump Ahead for F2-Linear Random Number Generators”, INFORMS JOURNAL ON COMPUTING, Vol. 20, No. 3, Summer 2008, pp. 385-390.
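Both parallelization patterns just described — spawning child `SeedSequence`s, and chaining `jumped()` bit generators — can be condensed into a short check (a sketch; seeds and draw counts are arbitrary):

```python
import numpy as np
from numpy.random import Generator, MT19937, PCG64, SeedSequence

# Preferred approach: spawn child SeedSequences and seed one bit generator each
sg = SeedSequence(1234)
gens = [Generator(PCG64(s)) for s in sg.spawn(3)]
draws = [g.random(4) for g in gens]
assert not np.allclose(draws[0], draws[1])   # independent streams

# MT19937 alternative: chain jumped() bit generators
bg = MT19937(1234)
bg2 = bg.jumped()            # state advanced as-if 2**128 draws occurred
x1 = Generator(bg).random()
x2 = Generator(bg2).random()
assert x1 != x2              # the jumped segment starts elsewhere in the sequence
```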
Attributes

**lock**threading.Lock

Lock instance that is shared so that the same bit generator can be used in multiple Generators without corrupting the state. Code that generates values from a bit generator should hold the bit generator’s lock.

State
-----

| | |
| --- | --- |
| [`state`](generated/numpy.random.mt19937.state#numpy.random.MT19937.state "numpy.random.MT19937.state") | Get or set the PRNG state |

Parallel generation
-------------------

| | |
| --- | --- |
| [`jumped`](generated/numpy.random.mt19937.jumped#numpy.random.MT19937.jumped "numpy.random.MT19937.jumped")([jumps]) | Returns a new bit generator with the state jumped |

Extending
---------

| | |
| --- | --- |
| [`cffi`](generated/numpy.random.mt19937.cffi#numpy.random.MT19937.cffi "numpy.random.MT19937.cffi") | CFFI interface |
| [`ctypes`](generated/numpy.random.mt19937.ctypes#numpy.random.MT19937.ctypes "numpy.random.MT19937.ctypes") | ctypes interface |

numpy.random.SeedSequence.spawn
===============================

method

random.SeedSequence.spawn(*n_children*)

Spawn a number of child `SeedSequence` s by extending the [`spawn_key`](numpy.random.seedsequence.spawn_key#numpy.random.SeedSequence.spawn_key "numpy.random.SeedSequence.spawn_key").

Parameters

**n_children**int

Returns

**seqs**list of `SeedSequence` s

numpy.random.SeedSequence
=========================

*class* numpy.random.SeedSequence(*entropy=None*, ***, *spawn_key=()*, *pool_size=4*)

SeedSequence mixes sources of entropy in a reproducible way to set the initial state for independent and very probably non-overlapping BitGenerators.
Once the SeedSequence is instantiated, you can call the [`generate_state`](numpy.random.seedsequence.generate_state#numpy.random.SeedSequence.generate_state "numpy.random.SeedSequence.generate_state") method to get an appropriately sized seed. Calling [`spawn(n)`](numpy.random.seedsequence.spawn#numpy.random.SeedSequence.spawn "numpy.random.SeedSequence.spawn") will create `n` SeedSequences that can be used to seed independent BitGenerators, i.e. for different threads. Parameters **entropy**{None, int, sequence[int]}, optional The entropy for creating a [`SeedSequence`](#numpy.random.SeedSequence "numpy.random.SeedSequence"). **spawn_key**{(), sequence[int]}, optional A third source of entropy, used internally when calling [`SeedSequence.spawn`](numpy.random.seedsequence.spawn#numpy.random.SeedSequence.spawn "numpy.random.SeedSequence.spawn") **pool_size**{int}, optional Size of the pooled entropy to store. Default is 4 to give a 128-bit entropy pool. 8 (for 256 bits) is another reasonable choice if working with larger PRNGs, but there is very little to be gained by selecting another value. **n_children_spawned**{int}, optional The number of children already spawned. Only pass this if reconstructing a [`SeedSequence`](#numpy.random.SeedSequence "numpy.random.SeedSequence") from a serialized form. 
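A brief sketch of these behaviors (the asserted `spawn_key` values follow from `spawn` extending the parent key with each child's index; the entropy value is arbitrary):

```python
import numpy as np
from numpy.random import SeedSequence

ss = SeedSequence(42)
state = ss.generate_state(4)     # four 32-bit words by default
assert state.dtype == np.uint32 and state.shape == (4,)

# spawn(n) extends spawn_key, so children are reproducible yet distinct
children = ss.spawn(2)
assert children[0].spawn_key == (0,)
assert children[1].spawn_key == (1,)
assert children[0].generate_state(4).tolist() != children[1].generate_state(4).tolist()
```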
#### Notes

Best practice for achieving reproducible bit streams is to use the default `None` for the initial entropy, and then use [`SeedSequence.entropy`](numpy.random.seedsequence.entropy#numpy.random.SeedSequence.entropy "numpy.random.SeedSequence.entropy") to log/pickle the [`entropy`](numpy.random.seedsequence.entropy#numpy.random.SeedSequence.entropy "numpy.random.SeedSequence.entropy") for reproducibility:

```
>>> sq1 = np.random.SeedSequence()
>>> sq1.entropy
243799254704924441050048792905230269161  # random
>>> sq2 = np.random.SeedSequence(sq1.entropy)
>>> np.all(sq1.generate_state(10) == sq2.generate_state(10))
True
```

Attributes

**entropy**

**n_children_spawned**

**pool**

**pool_size**

**spawn_key**

**state**

#### Methods

| | |
| --- | --- |
| [`generate_state`](numpy.random.seedsequence.generate_state#numpy.random.SeedSequence.generate_state "numpy.random.SeedSequence.generate_state")(n_words[, dtype]) | Return the requested number of words for PRNG seeding. |
| [`spawn`](numpy.random.seedsequence.spawn#numpy.random.SeedSequence.spawn "numpy.random.SeedSequence.spawn")(n_children) | Spawn a number of child [`SeedSequence`](#numpy.random.SeedSequence "numpy.random.SeedSequence") s by extending the [`spawn_key`](numpy.random.seedsequence.spawn_key#numpy.random.SeedSequence.spawn_key "numpy.random.SeedSequence.spawn_key"). |

numpy.random.RandomState.gamma
==============================

method

random.RandomState.gamma(*shape*, *scale=1.0*, *size=None*)

Draw samples from a Gamma distribution.

Samples are drawn from a Gamma distribution with specified parameters, [`shape`](../../generated/numpy.shape#numpy.shape "numpy.shape") (sometimes designated “k”) and `scale` (sometimes designated “theta”), where both parameters are > 0.
Note: New code should use the `gamma` method of a `default_rng()` instance instead; please see the [Quick Start](../index#random-quick-start).

Parameters

**shape**float or array_like of floats

The shape of the gamma distribution. Must be non-negative.

**scale**float or array_like of floats, optional

The scale of the gamma distribution. Must be non-negative. Default is equal to 1.

**size**int or tuple of ints, optional

Output shape. If the given shape is, e.g., `(m, n, k)`, then `m * n * k` samples are drawn. If size is `None` (default), a single value is returned if `shape` and `scale` are both scalars. Otherwise, `np.broadcast(shape, scale).size` samples are drawn.

Returns

**out**ndarray or scalar

Drawn samples from the parameterized gamma distribution.

See also

[`scipy.stats.gamma`](https://docs.scipy.org/doc/scipy/reference/generated/scipy.stats.gamma.html#scipy.stats.gamma "(in SciPy v1.8.1)") probability density function, distribution or cumulative density function, etc.

[`random.Generator.gamma`](numpy.random.generator.gamma#numpy.random.Generator.gamma "numpy.random.Generator.gamma") which should be used for new code.

#### Notes

The probability density for the Gamma distribution is

\[p(x) = x^{k-1}\frac{e^{-x/\theta}}{\theta^k\Gamma(k)},\]

where \(k\) is the shape and \(\theta\) the scale, and \(\Gamma\) is the Gamma function.

The Gamma distribution is often used to model the times to failure of electronic components, and arises naturally in processes for which the waiting times between Poisson distributed events are relevant.

#### References

1 Weisstein, Eric W. “Gamma Distribution.” From MathWorld–A Wolfram Web Resource. <http://mathworld.wolfram.com/GammaDistribution.html>

2 Wikipedia, “Gamma distribution”, <https://en.wikipedia.org/wiki/Gamma_distribution>

#### Examples

Draw samples from the distribution:

```
>>> shape, scale = 2., 2.
# mean=4, std=2*sqrt(2)
>>> s = np.random.gamma(shape, scale, 1000)
```

Display the histogram of the samples, along with the probability density function:

```
>>> import matplotlib.pyplot as plt
>>> import scipy.special as sps
>>> count, bins, ignored = plt.hist(s, 50, density=True)
>>> y = bins**(shape-1)*(np.exp(-bins/scale) /
...                      (sps.gamma(shape)*scale**shape))
>>> plt.plot(bins, y, linewidth=2, color='r')
>>> plt.show()
```

(Figure: histogram of the gamma samples with the analytic density overlaid.)

numpy.random.RandomState.standard_t
===================================

method

random.RandomState.standard_t(*df*, *size=None*)

Draw samples from a standard Student’s t distribution with `df` degrees of freedom.

A special case of the hyperbolic distribution. As `df` gets large, the result resembles that of the standard normal distribution ([`standard_normal`](numpy.random.randomstate.standard_normal#numpy.random.RandomState.standard_normal "numpy.random.RandomState.standard_normal")).

Note: New code should use the `standard_t` method of a `default_rng()` instance instead; please see the [Quick Start](../index#random-quick-start).

Parameters

**df**float or array_like of floats

Degrees of freedom, must be > 0.

**size**int or tuple of ints, optional

Output shape. If the given shape is, e.g., `(m, n, k)`, then `m * n * k` samples are drawn. If size is `None` (default), a single value is returned if `df` is a scalar. Otherwise, `np.array(df).size` samples are drawn.

Returns

**out**ndarray or scalar

Drawn samples from the parameterized standard Student’s t distribution.

See also

[`random.Generator.standard_t`](numpy.random.generator.standard_t#numpy.random.Generator.standard_t "numpy.random.Generator.standard_t") which should be used for new code.
#### Notes

The probability density function for the t distribution is

\[P(x, df) = \frac{\Gamma(\frac{df+1}{2})}{\sqrt{\pi df} \Gamma(\frac{df}{2})}\Bigl( 1+\frac{x^2}{df} \Bigr)^{-(df+1)/2}\]

The t test is based on an assumption that the data come from a Normal distribution. The t test provides a way to test whether the sample mean (that is the mean calculated from the data) is a good estimate of the true mean.

The derivation of the t-distribution was first published in 1908 by William Sealy Gosset while working for the Guinness Brewery in Dublin. Due to proprietary issues, he had to publish under a pseudonym, and so he used the name Student.

#### References

[1](#id3) Dalgaard, Peter, “Introductory Statistics With R”, Springer, 2002.

2 Wikipedia, “Student’s t-distribution”, [https://en.wikipedia.org/wiki/Student’s_t-distribution](https://en.wikipedia.org/wiki/Student's_t-distribution)

#### Examples

From Dalgaard page 83 [[1]](#r89f5270d198b-1), suppose the daily energy intake for 11 women in kilojoules (kJ) is:

```
>>> intake = np.array([5260., 5470, 5640, 6180, 6390, 6515, 6805, 7515, \
...                    7515, 8230, 8770])
```

Does their energy intake deviate systematically from the recommended value of 7725 kJ? Our null hypothesis will be the absence of deviation, and the alternate hypothesis will be the presence of an effect that could be either positive or negative, hence making our test 2-tailed.

Because we are estimating the mean and we have N=11 values in our sample, we have N-1=10 degrees of freedom. We set our significance level to 95% and compute the t statistic using the empirical mean and empirical standard deviation of our intake. We use a ddof of 1 to base the computation of our empirical standard deviation on an unbiased estimate of the variance (note: the final estimate is not unbiased due to the concave nature of the square root).
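This arithmetic can be reproduced as a self-contained script (expected values are the ones quoted in this example):

```python
import numpy as np

intake = np.array([5260., 5470, 5640, 6180, 6390, 6515, 6805, 7515,
                   7515, 8230, 8770])

mean = intake.mean()
sd = intake.std(ddof=1)                 # ddof=1: unbiased variance estimate
t = (mean - 7725) / (sd / np.sqrt(len(intake)))

assert abs(mean - 6753.6364) < 1e-3     # empirical mean
assert abs(t + 2.8208) < 1e-3           # t statistic, matches the value below
```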
```
>>> np.mean(intake)
6753.636363636364
>>> intake.std(ddof=1)
1142.1232221373727
>>> t = (np.mean(intake)-7725)/(intake.std(ddof=1)/np.sqrt(len(intake)))
>>> t
-2.8207540608310198
```

We draw 1000000 samples from Student’s t distribution with the adequate degrees of freedom.

```
>>> import matplotlib.pyplot as plt
>>> s = np.random.standard_t(10, size=1000000)
>>> h = plt.hist(s, bins=100, density=True)
```

Does our t statistic land in one of the two critical regions found at both tails of the distribution?

```
>>> np.sum(np.abs(t) < np.abs(s)) / float(len(s))
0.018318  # random < 0.05, statistic is in critical region
```

The probability value for this 2-tailed test is about 1.83%, which is lower than the 5% pre-determined significance threshold. Therefore, the probability of observing values as extreme as our intake conditionally on the null hypothesis being true is too low, and we reject the null hypothesis of no deviation.

(Figure: histogram of the Student’s t samples.)

numpy.random.PCG64.ctypes
=========================

attribute

random.PCG64.ctypes

ctypes interface

Returns

**interface**namedtuple

Named tuple containing ctypes wrapper

* state_address - Memory address of the state struct
* state - pointer to the state struct
* next_uint64 - function pointer to produce 64 bit integers
* next_uint32 - function pointer to produce 32 bit integers
* next_double - function pointer to produce doubles
* bitgen - pointer to the bit generator struct
numpy.random.PCG64.cffi
=======================

attribute

random.PCG64.cffi

CFFI interface

Returns

**interface**namedtuple

Named tuple containing CFFI wrapper

* state_address - Memory address of the state struct
* state - pointer to the state struct
* next_uint64 - function pointer to produce 64 bit integers
* next_uint32 - function pointer to produce 32 bit integers
* next_double - function pointer to produce doubles
* bitgen - pointer to the bit generator struct

numpy.random.Generator.integers
===============================

method

random.Generator.integers(*low*, *high=None*, *size=None*, *dtype=np.int64*, *endpoint=False*)

Return random integers from `low` (inclusive) to `high` (exclusive), or if endpoint=True, `low` (inclusive) to `high` (inclusive). Replaces `RandomState.randint` (with endpoint=False) and `RandomState.random_integers` (with endpoint=True).

Return random integers from the “discrete uniform” distribution of the specified dtype. If `high` is None (the default), then results are from 0 to `low`.

Parameters

**low**int or array-like of ints

Lowest (signed) integers to be drawn from the distribution (unless `high=None`, in which case this parameter is 0 and this value is used for `high`).

**high**int or array-like of ints, optional

If provided, one above the largest (signed) integer to be drawn from the distribution (see above for behavior if `high=None`). If array-like, must contain integer values.

**size**int or tuple of ints, optional

Output shape. If the given shape is, e.g., `(m, n, k)`, then `m * n * k` samples are drawn. Default is None, in which case a single value is returned.

**dtype**dtype, optional

Desired dtype of the result. Byteorder must be native.
The default value is np.int64. **endpoint**bool, optional If true, sample from the interval [low, high] instead of the default [low, high) Defaults to False Returns **out**int or ndarray of ints `size`-shaped array of random integers from the appropriate distribution, or a single such random int if `size` not provided. #### Notes When using broadcasting with uint64 dtypes, the maximum value (2**64) cannot be represented as a standard integer type. The high array (or low if high is None) must have object dtype, e.g., array([2**64]). #### References 1 <NAME>., “Fast Random Integer Generation in an Interval”, ACM Transactions on Modeling and Computer Simulation 29 (1), 2019, <http://arxiv.org/abs/1805.10941>. #### Examples ``` >>> rng = np.random.default_rng() >>> rng.integers(2, size=10) array([1, 0, 0, 0, 1, 1, 0, 0, 1, 0]) # random >>> rng.integers(1, size=10) array([0, 0, 0, 0, 0, 0, 0, 0, 0, 0]) ``` Generate a 2 x 4 array of ints between 0 and 4, inclusive: ``` >>> rng.integers(5, size=(2, 4)) array([[4, 0, 2, 1], [3, 2, 2, 0]]) # random ``` Generate a 1 x 3 array with 3 different upper bounds ``` >>> rng.integers(1, [3, 5, 10]) array([2, 2, 9]) # random ``` Generate a 1 by 3 array with 3 different lower bounds ``` >>> rng.integers([1, 5, 7], 10) array([9, 8, 7]) # random ``` Generate a 2 by 4 array using broadcasting with dtype of uint8 ``` >>> rng.integers([1, 3, 5, 7], [[10], [20]], dtype=np.uint8) array([[ 8, 6, 9, 7], [ 1, 16, 9, 12]], dtype=uint8) # random ``` © 2005–2022 NumPy Developers Licensed under the 3-clause BSD License. <https://numpy.org/doc/1.23/reference/random/generated/numpy.random.Generator.integers.htmlnumpy.random.Generator.random ============================= method random.Generator.random(*size=None*, *dtype=np.float64*, *out=None*) Return random floats in the half-open interval [0.0, 1.0). Results are from the “continuous uniform” distribution over the stated interval. 
To sample \(Unif[a, b), b > a\) multiply the output of [`random`](../index#module-numpy.random "numpy.random") by `(b-a)` and add `a`: ``` (b - a) * random() + a ``` Parameters **size**int or tuple of ints, optional Output shape. If the given shape is, e.g., `(m, n, k)`, then `m * n * k` samples are drawn. Default is None, in which case a single value is returned. **dtype**dtype, optional Desired dtype of the result, only [`float64`](../../arrays.scalars#numpy.float64 "numpy.float64") and [`float32`](../../arrays.scalars#numpy.float32 "numpy.float32") are supported. Byteorder must be native. The default value is np.float64. **out**ndarray, optional Alternative output array in which to place the result. If size is not None, it must have the same shape as the provided size and must match the type of the output values. Returns **out**float or ndarray of floats Array of random floats of shape `size` (unless `size=None`, in which case a single float is returned). #### Examples ``` >>> rng = np.random.default_rng() >>> rng.random() 0.47108547995356098 # random >>> type(rng.random()) <class 'float'> >>> rng.random((5,)) array([ 0.30220482, 0.86820401, 0.1654503 , 0.11659149, 0.54323428]) # random ``` Three-by-two array of random numbers from [-5, 0): ``` >>> 5 * rng.random((3, 2)) - 5 array([[-3.99149989, -0.52338984], # random [-2.99091858, -0.79479508], [-1.23204345, -1.75224494]]) ``` © 2005–2022 NumPy Developers Licensed under the 3-clause BSD License. <https://numpy.org/doc/1.23/reference/random/generated/numpy.random.Generator.random.htmlnumpy.random.RandomState.random_sample ======================================= method random.RandomState.random_sample(*size=None*) Return random floats in the half-open interval [0.0, 1.0). Results are from the “continuous uniform” distribution over the stated interval. 
To sample \(Unif[a, b), b > a\) multiply the output of [`random_sample`](#numpy.random.RandomState.random_sample "numpy.random.RandomState.random_sample") by `(b-a)` and add `a`: ``` (b - a) * random_sample() + a ``` Note New code should use the `random` method of a `default_rng()` instance instead; please see the [Quick Start](../index#random-quick-start). Parameters **size**int or tuple of ints, optional Output shape. If the given shape is, e.g., `(m, n, k)`, then `m * n * k` samples are drawn. Default is None, in which case a single value is returned. Returns **out**float or ndarray of floats Array of random floats of shape `size` (unless `size=None`, in which case a single float is returned). See also [`random.Generator.random`](numpy.random.generator.random#numpy.random.Generator.random "numpy.random.Generator.random") which should be used for new code. #### Examples ``` >>> np.random.random_sample() 0.47108547995356098 # random >>> type(np.random.random_sample()) <class 'float'> >>> np.random.random_sample((5,)) array([ 0.30220482, 0.86820401, 0.1654503 , 0.11659149, 0.54323428]) # random ``` Three-by-two array of random numbers from [-5, 0): ``` >>> 5 * np.random.random_sample((3, 2)) - 5 array([[-3.99149989, -0.52338984], # random [-2.99091858, -0.79479508], [-1.23204345, -1.75224494]]) ``` © 2005–2022 NumPy Developers Licensed under the 3-clause BSD License. <https://numpy.org/doc/1.23/reference/random/generated/numpy.random.RandomState.random_sample.htmlnumpy.random.Generator.choice ============================= method random.Generator.choice(*a*, *size=None*, *replace=True*, *p=None*, *axis=0*, *shuffle=True*) Generates a random sample from a given array Parameters **a**{array_like, int} If an ndarray, a random sample is generated from its elements. If an int, the random sample is generated from np.arange(a). **size**{int, tuple[int]}, optional Output shape. If the given shape is, e.g., `(m, n, k)`, then `m * n * k` samples are drawn from the 1-d `a`. 
If `a` has more than one dimension, the `size` shape will be inserted into the `axis` dimension, so the output `ndim` will be `a.ndim - 1 + len(size)`. Default is None, in which case a single value is returned. **replace**bool, optional Whether the sample is with or without replacement. Default is True, meaning that a value of `a` can be selected multiple times. **p**1-D array_like, optional The probabilities associated with each entry in a. If not given, the sample assumes a uniform distribution over all entries in `a`. **axis**int, optional The axis along which the selection is performed. The default, 0, selects by row. **shuffle**bool, optional Whether the sample is shuffled when sampling without replacement. Default is True, False provides a speedup. Returns **samples**single item or ndarray The generated random samples Raises ValueError If a is an int and less than zero, if p is not 1-dimensional, if a is array-like with a size 0, if p is not a vector of probabilities, if a and p have different lengths, or if replace=False and the sample size is greater than the population size. See also [`integers`](numpy.random.generator.integers#numpy.random.Generator.integers "numpy.random.Generator.integers"), [`shuffle`](numpy.random.generator.shuffle#numpy.random.Generator.shuffle "numpy.random.Generator.shuffle"), [`permutation`](numpy.random.generator.permutation#numpy.random.Generator.permutation "numpy.random.Generator.permutation") #### Notes Setting user-specified probabilities through `p` uses a more general but less efficient sampler than the default. The general sampler produces a different sample than the optimized sampler even if each element of `p` is 1 / len(a). 
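A small sketch of that note: with the same seed, the default uniform draw and an explicit uniform `p` go through different samplers, so the concrete values generally differ. Only structural properties are asserted here, since the exact streams are implementation-dependent:

```python
import numpy as np

a = np.random.default_rng(12345).choice(10, size=8)
b = np.random.default_rng(12345).choice(10, size=8, p=np.full(10, 0.1))

# Both are valid uniform samples over arange(10) ...
assert a.shape == b.shape == (8,)
assert ((0 <= a) & (a < 10)).all() and ((0 <= b) & (b < 10)).all()
# ... but the general sampler used for `p` typically yields different values
# than the optimized default, even from identical generator state.
```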
#### Examples Generate a uniform random sample from np.arange(5) of size 3: ``` >>> rng = np.random.default_rng() >>> rng.choice(5, 3) array([0, 3, 4]) # random >>> #This is equivalent to rng.integers(0,5,3) ``` Generate a non-uniform random sample from np.arange(5) of size 3: ``` >>> rng.choice(5, 3, p=[0.1, 0, 0.3, 0.6, 0]) array([3, 3, 0]) # random ``` Generate a uniform random sample from np.arange(5) of size 3 without replacement: ``` >>> rng.choice(5, 3, replace=False) array([3,1,0]) # random >>> #This is equivalent to rng.permutation(np.arange(5))[:3] ``` Generate a uniform random sample from a 2-D array along the first axis (the default), without replacement: ``` >>> rng.choice([[0, 1, 2], [3, 4, 5], [6, 7, 8]], 2, replace=False) array([[3, 4, 5], # random [0, 1, 2]]) ``` Generate a non-uniform random sample from np.arange(5) of size 3 without replacement: ``` >>> rng.choice(5, 3, replace=False, p=[0.1, 0, 0.3, 0.6, 0]) array([2, 3, 0]) # random ``` Any of the above can be repeated with an arbitrary array-like instead of just integers. For instance: ``` >>> aa_milne_arr = ['pooh', 'rabbit', 'piglet', 'Christopher'] >>> rng.choice(aa_milne_arr, 5, p=[0.5, 0.1, 0.1, 0.3]) array(['pooh', 'pooh', 'pooh', 'Christopher', 'piglet'], # random dtype='<U11') ``` © 2005–2022 NumPy Developers Licensed under the 3-clause BSD License. <https://numpy.org/doc/1.23/reference/random/generated/numpy.random.Generator.choice.htmlnumpy.random.Generator.permutation ================================== method random.Generator.permutation(*x*, *axis=0*) Randomly permute a sequence, or return a permuted range. Parameters **x**int or array_like If `x` is an integer, randomly permute `np.arange(x)`. If `x` is an array, make a copy and shuffle the elements randomly. **axis**int, optional The axis which `x` is shuffled along. Default is 0. Returns **out**ndarray Permuted sequence or array range. 
#### Examples ``` >>> rng = np.random.default_rng() >>> rng.permutation(10) array([1, 7, 4, 3, 0, 9, 2, 5, 8, 6]) # random ``` ``` >>> rng.permutation([1, 4, 9, 12, 15]) array([15, 1, 9, 4, 12]) # random ``` ``` >>> arr = np.arange(9).reshape((3, 3)) >>> rng.permutation(arr) array([[6, 7, 8], # random [0, 1, 2], [3, 4, 5]]) ``` ``` >>> rng.permutation("abc") Traceback (most recent call last): ... numpy.AxisError: axis 0 is out of bounds for array of dimension 0 ``` ``` >>> arr = np.arange(9).reshape((3, 3)) >>> rng.permutation(arr, axis=1) array([[0, 2, 1], # random [3, 5, 4], [6, 8, 7]]) ``` © 2005–2022 NumPy Developers Licensed under the 3-clause BSD License. <https://numpy.org/doc/1.23/reference/random/generated/numpy.random.Generator.permutation.htmlnumpy.random.Generator.shuffle ============================== method random.Generator.shuffle(*x*, *axis=0*) Modify an array or sequence in-place by shuffling its contents. The order of sub-arrays is changed but their contents remains the same. Parameters **x**ndarray or MutableSequence The array, list or mutable sequence to be shuffled. **axis**int, optional The axis which `x` is shuffled along. Default is 0. It is only supported on [`ndarray`](../../generated/numpy.ndarray#numpy.ndarray "numpy.ndarray") objects. Returns None See also [`permuted`](numpy.random.generator.permuted#numpy.random.Generator.permuted "numpy.random.Generator.permuted") [`permutation`](numpy.random.generator.permutation#numpy.random.Generator.permutation "numpy.random.Generator.permutation") #### Notes An important distinction between methods `shuffle` and `permuted` is how they both treat the `axis` parameter which can be found at [Handling the axis parameter](../generator#generator-handling-axis-parameter). 
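A small sketch of that distinction (the concrete orderings below are random, so only structural properties are asserted): `shuffle` with `axis=1` moves whole columns as units, while `Generator.permuted` with `axis=1` rearranges each row independently.

```python
import numpy as np

rng = np.random.default_rng()
arr = np.arange(9).reshape(3, 3)

# shuffle(axis=1): the same column permutation is applied to every row,
# so the constant offset of 3 between consecutive rows survives.
a = arr.copy()
rng.shuffle(a, axis=1)
assert (a[1] - a[0] == 3).all() and (a[2] - a[1] == 3).all()

# permuted(axis=1): each row is shuffled independently, but every row
# still holds exactly its own original values.
b = rng.permuted(arr, axis=1)
for row, orig in zip(b, arr):
    assert sorted(row) == sorted(orig)
```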
#### Examples ``` >>> rng = np.random.default_rng() >>> arr = np.arange(10) >>> arr array([0, 1, 2, 3, 4, 5, 6, 7, 8, 9]) >>> rng.shuffle(arr) >>> arr array([2, 0, 7, 5, 1, 4, 8, 9, 3, 6]) # random ``` ``` >>> arr = np.arange(9).reshape((3, 3)) >>> arr array([[0, 1, 2], [3, 4, 5], [6, 7, 8]]) >>> rng.shuffle(arr) >>> arr array([[3, 4, 5], # random [6, 7, 8], [0, 1, 2]]) ``` ``` >>> arr = np.arange(9).reshape((3, 3)) >>> arr array([[0, 1, 2], [3, 4, 5], [6, 7, 8]]) >>> rng.shuffle(arr, axis=1) >>> arr array([[2, 0, 1], # random [5, 3, 4], [8, 6, 7]]) ``` © 2005–2022 NumPy Developers Licensed under the 3-clause BSD License. <https://numpy.org/doc/1.23/reference/random/generated/numpy.random.Generator.shuffle.htmlnumpy.RankWarning ================= *exception*numpy.RankWarning[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/__init__.py) Issued by [`polyfit`](numpy.polyfit#numpy.polyfit "numpy.polyfit") when the Vandermonde matrix is rank deficient. For more information, a way to suppress the warning, and an example of [`RankWarning`](#numpy.RankWarning "numpy.RankWarning") being issued, see [`polyfit`](numpy.polyfit#numpy.polyfit "numpy.polyfit"). © 2005–2022 NumPy Developers Licensed under the 3-clause BSD License. <https://numpy.org/doc/1.23/reference/generated/numpy.RankWarning.htmlnumpy.in1d ========== numpy.in1d(*ar1*, *ar2*, *assume_unique=False*, *invert=False*)[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/lib/arraysetops.py#L523-L637) Test whether each element of a 1-D array is also present in a second array. Returns a boolean array the same length as `ar1` that is True where an element of `ar1` is in `ar2` and False otherwise. We recommend using [`isin`](numpy.isin#numpy.isin "numpy.isin") instead of [`in1d`](#numpy.in1d "numpy.in1d") for new code. Parameters **ar1**(M,) array_like Input array. **ar2**array_like The values against which to test each value of `ar1`. 
**assume_unique**bool, optional If True, the input arrays are both assumed to be unique, which can speed up the calculation. Default is False. **invert**bool, optional If True, the values in the returned array are inverted (that is, False where an element of `ar1` is in `ar2` and True otherwise). Default is False. `np.in1d(a, b, invert=True)` is equivalent to (but is faster than) `np.invert(in1d(a, b))`. New in version 1.8.0. Returns **in1d**(M,) ndarray, bool The values `ar1[in1d]` are in `ar2`. See also [`isin`](numpy.isin#numpy.isin "numpy.isin") Version of this function that preserves the shape of ar1. [`numpy.lib.arraysetops`](numpy.lib.arraysetops#module-numpy.lib.arraysetops "numpy.lib.arraysetops") Module with a number of other functions for performing set operations on arrays. #### Notes [`in1d`](#numpy.in1d "numpy.in1d") can be considered as an element-wise function version of the python keyword `in`, for 1-D sequences. `in1d(a, b)` is roughly equivalent to `np.array([item in b for item in a])`. However, this idea fails if `ar2` is a set, or similar (non-sequence) container: As `ar2` is converted to an array, in those cases `asarray(ar2)` is an object array rather than the expected array of contained values. New in version 1.4.0. #### Examples ``` >>> test = np.array([0, 1, 2, 5, 0]) >>> states = [0, 2] >>> mask = np.in1d(test, states) >>> mask array([ True, False, True, False, True]) >>> test[mask] array([0, 2, 0]) >>> mask = np.in1d(test, states, invert=True) >>> mask array([False, True, False, True, False]) >>> test[mask] array([1, 5]) ``` © 2005–2022 NumPy Developers Licensed under the 3-clause BSD License. <https://numpy.org/doc/1.23/reference/generated/numpy.in1d.htmlnumpy.intersect1d ================= numpy.intersect1d(*ar1*, *ar2*, *assume_unique=False*, *return_indices=False*)[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/lib/arraysetops.py#L373-L469) Find the intersection of two arrays. 
Return the sorted, unique values that are in both of the input arrays. Parameters **ar1, ar2**array_like Input arrays. Will be flattened if not already 1D. **assume_unique**bool If True, the input arrays are both assumed to be unique, which can speed up the calculation. If True but `ar1` or `ar2` are not unique, incorrect results and out-of-bounds indices could result. Default is False. **return_indices**bool If True, the indices which correspond to the intersection of the two arrays are returned. The first instance of a value is used if there are multiple. Default is False. New in version 1.15.0. Returns **intersect1d**ndarray Sorted 1D array of common and unique elements. **comm1**ndarray The indices of the first occurrences of the common values in `ar1`. Only provided if `return_indices` is True. **comm2**ndarray The indices of the first occurrences of the common values in `ar2`. Only provided if `return_indices` is True. See also [`numpy.lib.arraysetops`](numpy.lib.arraysetops#module-numpy.lib.arraysetops "numpy.lib.arraysetops") Module with a number of other functions for performing set operations on arrays. #### Examples ``` >>> np.intersect1d([1, 3, 4, 3], [3, 1, 2, 1]) array([1, 3]) ``` To intersect more than two arrays, use functools.reduce: ``` >>> from functools import reduce >>> reduce(np.intersect1d, ([1, 3, 4, 3], [3, 1, 2, 1], [6, 3, 4, 2])) array([3]) ``` To return the indices of the values common to the input arrays along with the intersected values: ``` >>> x = np.array([1, 1, 2, 3, 4]) >>> y = np.array([2, 1, 4, 6]) >>> xy, x_ind, y_ind = np.intersect1d(x, y, return_indices=True) >>> x_ind, y_ind (array([0, 2, 4]), array([1, 0, 2])) >>> xy, x[x_ind], y[y_ind] (array([1, 2, 4]), array([1, 2, 4]), array([1, 2, 4])) ``` © 2005–2022 NumPy Developers Licensed under the 3-clause BSD License. 
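The `assume_unique` fast path is safe only when the inputs really are unique; a short sketch of the recommended pattern is to deduplicate with `np.unique` first:

```python
import numpy as np

a = np.array([1, 3, 4, 3])  # contains duplicates
b = np.array([3, 1, 2, 1])

# Deduplicate first, then the faster assume_unique path is safe.
au, bu = np.unique(a), np.unique(b)
result = np.intersect1d(au, bu, assume_unique=True)

# Same answer as the default (safe) path: [1, 3]
assert np.array_equal(result, np.intersect1d(a, b))
```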
<https://numpy.org/doc/1.23/reference/generated/numpy.intersect1d.htmlnumpy.isin ========== numpy.isin(*element*, *test_elements*, *assume_unique=False*, *invert=False*)[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/lib/arraysetops.py#L644-L740) Calculates `element in test_elements`, broadcasting over `element` only. Returns a boolean array of the same shape as `element` that is True where an element of `element` is in `test_elements` and False otherwise. Parameters **element**array_like Input array. **test_elements**array_like The values against which to test each value of `element`. This argument is flattened if it is an array or array_like. See notes for behavior with non-array-like parameters. **assume_unique**bool, optional If True, the input arrays are both assumed to be unique, which can speed up the calculation. Default is False. **invert**bool, optional If True, the values in the returned array are inverted, as if calculating `element not in test_elements`. Default is False. `np.isin(a, b, invert=True)` is equivalent to (but faster than) `np.invert(np.isin(a, b))`. Returns **isin**ndarray, bool Has the same shape as `element`. The values `element[isin]` are in `test_elements`. See also [`in1d`](numpy.in1d#numpy.in1d "numpy.in1d") Flattened version of this function. [`numpy.lib.arraysetops`](numpy.lib.arraysetops#module-numpy.lib.arraysetops "numpy.lib.arraysetops") Module with a number of other functions for performing set operations on arrays. #### Notes [`isin`](#numpy.isin "numpy.isin") is an element-wise function version of the python keyword `in`. `isin(a, b)` is roughly equivalent to `np.array([item in b for item in a])` if `a` and `b` are 1-D sequences. `element` and `test_elements` are converted to arrays if they are not already. If `test_elements` is a set (or other non-sequence collection) it will be converted to an object array with one element, rather than an array of the values contained in `test_elements`. 
This is a consequence of the [`array`](numpy.array#numpy.array "numpy.array") constructor’s way of handling non-sequence collections. Converting the set to a list usually gives the desired behavior. New in version 1.13.0. #### Examples ``` >>> element = 2*np.arange(4).reshape((2, 2)) >>> element array([[0, 2], [4, 6]]) >>> test_elements = [1, 2, 4, 8] >>> mask = np.isin(element, test_elements) >>> mask array([[False, True], [ True, False]]) >>> element[mask] array([2, 4]) ``` The indices of the matched values can be obtained with [`nonzero`](numpy.nonzero#numpy.nonzero "numpy.nonzero"): ``` >>> np.nonzero(mask) (array([0, 1]), array([1, 0])) ``` The test can also be inverted: ``` >>> mask = np.isin(element, test_elements, invert=True) >>> mask array([[ True, False], [False, True]]) >>> element[mask] array([0, 6]) ``` Because of how [`array`](numpy.array#numpy.array "numpy.array") handles sets, the following does not work as expected: ``` >>> test_set = {1, 2, 4, 8} >>> np.isin(element, test_set) array([[False, False], [False, False]]) ``` Casting the set to a list gives the expected result: ``` >>> np.isin(element, list(test_set)) array([[False, True], [ True, False]]) ``` © 2005–2022 NumPy Developers Licensed under the 3-clause BSD License. <https://numpy.org/doc/1.23/reference/generated/numpy.isin.htmlnumpy.setdiff1d =============== numpy.setdiff1d(*ar1*, *ar2*, *assume_unique=False*)[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/lib/arraysetops.py#L788-L830) Find the set difference of two arrays. Return the unique values in `ar1` that are not in `ar2`. Parameters **ar1**array_like Input array. **ar2**array_like Input comparison array. **assume_unique**bool If True, the input arrays are both assumed to be unique, which can speed up the calculation. Default is False. Returns **setdiff1d**ndarray 1D array of values in `ar1` that are not in `ar2`. The result is sorted when `assume_unique=False`, but otherwise only sorted if the input is sorted. 
See also [`numpy.lib.arraysetops`](numpy.lib.arraysetops#module-numpy.lib.arraysetops "numpy.lib.arraysetops") Module with a number of other functions for performing set operations on arrays. #### Examples ``` >>> a = np.array([1, 2, 3, 2, 4, 1]) >>> b = np.array([3, 4, 5, 6]) >>> np.setdiff1d(a, b) array([1, 2]) ``` © 2005–2022 NumPy Developers Licensed under the 3-clause BSD License. <https://numpy.org/doc/1.23/reference/generated/numpy.setdiff1d.htmlnumpy.setxor1d ============== numpy.setxor1d(*ar1*, *ar2*, *assume_unique=False*)[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/lib/arraysetops.py#L476-L516) Find the set exclusive-or of two arrays. Return the sorted, unique values that are in only one (not both) of the input arrays. Parameters **ar1, ar2**array_like Input arrays. **assume_unique**bool If True, the input arrays are both assumed to be unique, which can speed up the calculation. Default is False. Returns **setxor1d**ndarray Sorted 1D array of unique values that are in only one of the input arrays. #### Examples ``` >>> a = np.array([1, 2, 3, 2, 4]) >>> b = np.array([2, 3, 5, 7, 5]) >>> np.setxor1d(a,b) array([1, 4, 5, 7]) ``` © 2005–2022 NumPy Developers Licensed under the 3-clause BSD License. <https://numpy.org/doc/1.23/reference/generated/numpy.setxor1d.htmlnumpy.union1d ============= numpy.union1d(*ar1*, *ar2*)[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/lib/arraysetops.py#L747-L781) Find the union of two arrays. Return the unique, sorted array of values that are in either of the two input arrays. Parameters **ar1, ar2**array_like Input arrays. They are flattened if they are not already 1D. Returns **union1d**ndarray Unique, sorted union of the input arrays. See also [`numpy.lib.arraysetops`](numpy.lib.arraysetops#module-numpy.lib.arraysetops "numpy.lib.arraysetops") Module with a number of other functions for performing set operations on arrays. 
#### Examples ``` >>> np.union1d([-1, 0, 1], [-2, 0, 2]) array([-2, -1, 0, 1, 2]) ``` To find the union of more than two arrays, use functools.reduce: ``` >>> from functools import reduce >>> reduce(np.union1d, ([1, 3, 4, 3], [3, 1, 2, 1], [6, 3, 4, 2])) array([1, 2, 3, 4, 6]) ``` © 2005–2022 NumPy Developers Licensed under the 3-clause BSD License. <https://numpy.org/doc/1.23/reference/generated/numpy.union1d.htmlExtending via CFFI ================== ``` """ Use cffi to access any of the underlying C functions from distributions.h """ import os import numpy as np import cffi from .parse import parse_distributions_h ffi = cffi.FFI() inc_dir = os.path.join(np.get_include(), 'numpy') # Basic numpy types ffi.cdef(''' typedef intptr_t npy_intp; typedef unsigned char npy_bool; ''') parse_distributions_h(ffi, inc_dir) lib = ffi.dlopen(np.random._generator.__file__) # Compare the distributions.h random_standard_normal_fill to # Generator.standard_random bit_gen = np.random.PCG64() rng = np.random.Generator(bit_gen) state = bit_gen.state interface = rng.bit_generator.cffi n = 100 vals_cffi = ffi.new('double[%d]' % n) lib.random_standard_normal_fill(interface.bit_generator, n, vals_cffi) # reset the state bit_gen.state = state vals = rng.standard_normal(n) for i in range(n): assert vals[i] == vals_cffi[i] ``` © 2005–2022 NumPy Developers Licensed under the 3-clause BSD License. <https://numpy.org/doc/1.23/reference/random/examples/cffi.htmlnumpy.msort =========== numpy.msort(*a*)[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/lib/function_base.py#L3645-L3671) Return a copy of an array sorted along the first axis. Parameters **a**array_like Array to be sorted. Returns **sorted_array**ndarray Array of the same type and shape as `a`. See also [`sort`](numpy.sort#numpy.sort "numpy.sort") #### Notes `np.msort(a)` is equivalent to `np.sort(a, axis=0)`. © 2005–2022 NumPy Developers Licensed under the 3-clause BSD License. 
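The equivalence stated in the `msort` notes can be checked directly. A small sketch; `msort` is documented here for NumPy 1.23 and may be deprecated or absent in later releases, so the call is guarded:

```python
import numpy as np

a = np.array([[3, 1],
              [2, 4]])

col_sorted = np.sort(a, axis=0)  # sort along the first axis
assert np.array_equal(col_sorted, [[2, 1], [3, 4]])

# np.msort(a) is documented as equivalent to np.sort(a, axis=0)
if hasattr(np, "msort"):
    assert np.array_equal(np.msort(a), col_sorted)
```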
<https://numpy.org/doc/1.23/reference/generated/numpy.msort.htmlnumpy.sort_complex =================== numpy.sort_complex(*a*)[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/lib/function_base.py#L1758-L1792) Sort a complex array using the real part first, then the imaginary part. Parameters **a**array_like Input array Returns **out**complex ndarray Always returns a sorted complex array. #### Examples ``` >>> np.sort_complex([5, 3, 6, 2, 1]) array([1.+0.j, 2.+0.j, 3.+0.j, 5.+0.j, 6.+0.j]) ``` ``` >>> np.sort_complex([1 + 2j, 2 - 1j, 3 - 2j, 3 - 3j, 3 + 5j]) array([1.+2.j, 2.-1.j, 3.-3.j, 3.-2.j, 3.+5.j]) ``` © 2005–2022 NumPy Developers Licensed under the 3-clause BSD License. <https://numpy.org/doc/1.23/reference/generated/numpy.sort_complex.htmlnumpy.nanargmax =============== numpy.nanargmax(*a*, *axis=None*, *out=None*, ***, *keepdims=<no value>*)[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/lib/nanfunctions.py#L561-L615) Return the indices of the maximum values in the specified axis ignoring NaNs. For all-NaN slices `ValueError` is raised. Warning: the results cannot be trusted if a slice contains only NaNs and -Infs. Parameters **a**array_like Input data. **axis**int, optional Axis along which to operate. By default flattened input is used. **out**array, optional If provided, the result will be inserted into this array. It should be of the appropriate shape and dtype. New in version 1.22.0. **keepdims**bool, optional If this is set to True, the axes which are reduced are left in the result as dimensions with size one. With this option, the result will broadcast correctly against the array. New in version 1.22.0. Returns **index_array**ndarray An array of indices or a single index value. 
See also [`argmax`](numpy.argmax#numpy.argmax "numpy.argmax"), [`nanargmin`](numpy.nanargmin#numpy.nanargmin "numpy.nanargmin") #### Examples ``` >>> a = np.array([[np.nan, 4], [2, 3]]) >>> np.argmax(a) 0 >>> np.nanargmax(a) 1 >>> np.nanargmax(a, axis=0) array([1, 0]) >>> np.nanargmax(a, axis=1) array([1, 1]) ``` © 2005–2022 NumPy Developers Licensed under the 3-clause BSD License. <https://numpy.org/doc/1.23/reference/generated/numpy.nanargmax.htmlnumpy.nanargmin =============== numpy.nanargmin(*a*, *axis=None*, *out=None*, ***, *keepdims=<no value>*)[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/lib/nanfunctions.py#L501-L554) Return the indices of the minimum values in the specified axis ignoring NaNs. For all-NaN slices `ValueError` is raised. Warning: the results cannot be trusted if a slice contains only NaNs and Infs. Parameters **a**array_like Input data. **axis**int, optional Axis along which to operate. By default flattened input is used. **out**array, optional If provided, the result will be inserted into this array. It should be of the appropriate shape and dtype. New in version 1.22.0. **keepdims**bool, optional If this is set to True, the axes which are reduced are left in the result as dimensions with size one. With this option, the result will broadcast correctly against the array. New in version 1.22.0. Returns **index_array**ndarray An array of indices or a single index value. See also [`argmin`](numpy.argmin#numpy.argmin "numpy.argmin"), [`nanargmax`](numpy.nanargmax#numpy.nanargmax "numpy.nanargmax") #### Examples ``` >>> a = np.array([[np.nan, 4], [2, 3]]) >>> np.argmin(a) 0 >>> np.nanargmin(a) 2 >>> np.nanargmin(a, axis=0) array([1, 1]) >>> np.nanargmin(a, axis=1) array([1, 0]) ``` © 2005–2022 NumPy Developers Licensed under the 3-clause BSD License. 
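The `keepdims` option (new in 1.22) has no example above; a short sketch using the same array, assuming NumPy >= 1.22:

```python
import numpy as np

a = np.array([[np.nan, 4.0],
              [2.0, 3.0]])

idx = np.nanargmin(a, axis=1, keepdims=True)
assert idx.shape == (2, 1)              # reduced axis kept with size one
assert np.array_equal(idx, [[1], [0]])  # NaN in row 0 is ignored

# The keepdims result broadcasts cleanly against the original array,
# e.g. with take_along_axis:
mins = np.take_along_axis(a, idx, axis=1)
assert np.array_equal(mins, [[4.0], [2.0]])
```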
<https://numpy.org/doc/1.23/reference/generated/numpy.nanargmin.htmlnumpy.argwhere ============== numpy.argwhere(*a*)[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/core/numeric.py#L577-L624) Find the indices of array elements that are non-zero, grouped by element. Parameters **a**array_like Input data. Returns **index_array**(N, a.ndim) ndarray Indices of elements that are non-zero. Indices are grouped by element. This array will have shape `(N, a.ndim)` where `N` is the number of non-zero items. See also [`where`](numpy.where#numpy.where "numpy.where"), [`nonzero`](numpy.nonzero#numpy.nonzero "numpy.nonzero") #### Notes `np.argwhere(a)` is almost the same as `np.transpose(np.nonzero(a))`, but produces a result of the correct shape for a 0D array. The output of `argwhere` is not suitable for indexing arrays. For this purpose use `nonzero(a)` instead. #### Examples ``` >>> x = np.arange(6).reshape(2,3) >>> x array([[0, 1, 2], [3, 4, 5]]) >>> np.argwhere(x>1) array([[0, 2], [1, 0], [1, 1], [1, 2]]) ``` © 2005–2022 NumPy Developers Licensed under the 3-clause BSD License. <https://numpy.org/doc/1.23/reference/generated/numpy.argwhere.htmlnumpy.flatnonzero ================= numpy.flatnonzero(*a*)[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/core/numeric.py#L631-L669) Return indices that are non-zero in the flattened version of a. This is equivalent to `np.nonzero(np.ravel(a))[0]`. Parameters **a**array_like Input data. Returns **res**ndarray Output array, containing the indices of the elements of `a.ravel()` that are non-zero. See also [`nonzero`](numpy.nonzero#numpy.nonzero "numpy.nonzero") Return the indices of the non-zero elements of the input array. [`ravel`](numpy.ravel#numpy.ravel "numpy.ravel") Return a 1-D array containing the elements of the input array. 
#### Examples ``` >>> x = np.arange(-2, 3) >>> x array([-2, -1, 0, 1, 2]) >>> np.flatnonzero(x) array([0, 1, 3, 4]) ``` Use the indices of the non-zero elements as an index array to extract these elements: ``` >>> x.ravel()[np.flatnonzero(x)] array([-2, -1, 1, 2]) ``` © 2005–2022 NumPy Developers Licensed under the 3-clause BSD License. <https://numpy.org/doc/1.23/reference/generated/numpy.flatnonzero.htmlnumpy.extract ============= numpy.extract(*condition*, *arr*)[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/lib/function_base.py#L1856-L1905) Return the elements of an array that satisfy some condition. This is equivalent to `np.compress(ravel(condition), ravel(arr))`. If `condition` is boolean `np.extract` is equivalent to `arr[condition]`. Note that [`place`](numpy.place#numpy.place "numpy.place") does the exact opposite of [`extract`](#numpy.extract "numpy.extract"). Parameters **condition**array_like An array whose nonzero or True entries indicate the elements of `arr` to extract. **arr**array_like Input array of the same size as `condition`. Returns **extract**ndarray Rank 1 array of values from `arr` where `condition` is True. See also [`take`](numpy.take#numpy.take "numpy.take"), [`put`](numpy.put#numpy.put "numpy.put"), [`copyto`](numpy.copyto#numpy.copyto "numpy.copyto"), [`compress`](numpy.compress#numpy.compress "numpy.compress"), [`place`](numpy.place#numpy.place "numpy.place") #### Examples ``` >>> arr = np.arange(12).reshape((3, 4)) >>> arr array([[ 0, 1, 2, 3], [ 4, 5, 6, 7], [ 8, 9, 10, 11]]) >>> condition = np.mod(arr, 3)==0 >>> condition array([[ True, False, False, True], [False, False, True, False], [False, True, False, False]]) >>> np.extract(condition, arr) array([0, 3, 6, 9]) ``` If `condition` is boolean: ``` >>> arr[condition] array([0, 3, 6, 9]) ``` © 2005–2022 NumPy Developers Licensed under the 3-clause BSD License. 
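Since `place` is described as the exact opposite of `extract`, the two compose into a round trip; a small sketch:

```python
import numpy as np

arr = np.arange(12).reshape(3, 4)
cond = np.mod(arr, 3) == 0

vals = np.extract(cond, arr)
assert np.array_equal(vals, [0, 3, 6, 9])

# place() writes the values back at the True positions (modifies in place)
out = np.zeros_like(arr)
np.place(out, cond, vals)
assert np.array_equal(out[cond], vals)      # extracted values restored
assert np.count_nonzero(out[~cond]) == 0    # everything else untouched
```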
<https://numpy.org/doc/1.23/reference/generated/numpy.extract.htmlnumpy.count_nonzero ==================== numpy.count_nonzero(*a*, *axis=None*, ***, *keepdims=False*)[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/core/numeric.py#L431-L502) Counts the number of non-zero values in the array `a`. The word “non-zero” is in reference to the Python 2.x built-in method `__nonzero__()` (renamed `__bool__()` in Python 3.x) of Python objects that tests an object’s “truthfulness”. For example, any number is considered truthful if it is nonzero, whereas any string is considered truthful if it is not the empty string. Thus, this function (recursively) counts how many elements in `a` (and in sub-arrays thereof) have their `__nonzero__()` or `__bool__()` method evaluated to `True`. Parameters **a**array_like The array for which to count non-zeros. **axis**int or tuple, optional Axis or tuple of axes along which to count non-zeros. Default is None, meaning that non-zeros will be counted along a flattened version of `a`. New in version 1.12.0. **keepdims**bool, optional If this is set to True, the axes that are counted are left in the result as dimensions with size one. With this option, the result will broadcast correctly against the input array. New in version 1.19.0. Returns **count**int or array of int Number of non-zero values in the array along a given axis. Otherwise, the total number of non-zero values in the array is returned. See also [`nonzero`](numpy.nonzero#numpy.nonzero "numpy.nonzero") Return the coordinates of all the non-zero values. #### Examples ``` >>> np.count_nonzero(np.eye(4)) 4 >>> a = np.array([[0, 1, 7, 0], ... [3, 0, 2, 19]]) >>> np.count_nonzero(a) 5 >>> np.count_nonzero(a, axis=0) array([1, 1, 2, 1]) >>> np.count_nonzero(a, axis=1) array([2, 3]) >>> np.count_nonzero(a, axis=1, keepdims=True) array([[2], [3]]) ``` © 2005–2022 NumPy Developers Licensed under the 3-clause BSD License. 
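The truthiness rule described above applies beyond numbers; for instance, with an array of strings only the non-empty ones count. A small sketch:

```python
import numpy as np

# Non-zero numbers and True are truthy ...
assert np.count_nonzero([0, 1, 7, 0, -2]) == 3
assert np.count_nonzero([True, False, True]) == 2

# ... and for strings, "non-zero" means non-empty.
s = np.array(["hello", "", "world", ""])
assert np.count_nonzero(s) == 2
```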
<https://numpy.org/doc/1.23/reference/generated/numpy.count_nonzero.html>

numpy.percentile
================

numpy.percentile(*a*, *q*, *axis=None*, *out=None*, *overwrite_input=False*, *method='linear'*, *keepdims=False*, ***, *interpolation=None*)[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/lib/function_base.py#L3884-L4167) Compute the q-th percentile of the data along the specified axis. Returns the q-th percentile(s) of the array elements. Parameters **a**array_like Input array or object that can be converted to an array. **q**array_like of float Percentile or sequence of percentiles to compute, which must be between 0 and 100 inclusive. **axis**{int, tuple of int, None}, optional Axis or axes along which the percentiles are computed. The default is to compute the percentile(s) along a flattened version of the array. Changed in version 1.9.0: A tuple of axes is supported. **out**ndarray, optional Alternative output array in which to place the result. It must have the same shape and buffer length as the expected output, but the type (of the output) will be cast if necessary. **overwrite_input**bool, optional If True, then allow the input array `a` to be modified by intermediate calculations, to save memory. In this case, the contents of the input `a` after this function completes are undefined. **method**str, optional This parameter specifies the method to use for estimating the percentile. There are many different methods, some unique to NumPy. See the notes for explanation. The options sorted by their R type as summarized in the H&F paper [[1]](#r08bde0ebf37b-1) are: 1. ‘inverted_cdf’ 2. ‘averaged_inverted_cdf’ 3. ‘closest_observation’ 4. ‘interpolated_inverted_cdf’ 5. ‘hazen’ 6. ‘weibull’ 7. ‘linear’ (default) 8. ‘median_unbiased’ 9. ‘normal_unbiased’ The first three methods are discontinuous. NumPy further defines the following discontinuous variations of the default ‘linear’ (7.)
option: * ‘lower’ * ‘higher’, * ‘midpoint’ * ‘nearest’ Changed in version 1.22.0: This argument was previously called “interpolation” and only offered the “linear” default and last four options. **keepdims**bool, optional If this is set to True, the axes which are reduced are left in the result as dimensions with size one. With this option, the result will broadcast correctly against the original array `a`. New in version 1.9.0. **interpolation**str, optional Deprecated name for the method keyword argument. Deprecated since version 1.22.0. Returns **percentile**scalar or ndarray If `q` is a single percentile and `axis=None`, then the result is a scalar. If multiple percentiles are given, first axis of the result corresponds to the percentiles. The other axes are the axes that remain after the reduction of `a`. If the input contains integers or floats smaller than `float64`, the output data-type is `float64`. Otherwise, the output data-type is the same as that of the input. If `out` is specified, that array is returned instead. See also [`mean`](numpy.mean#numpy.mean "numpy.mean") [`median`](numpy.median#numpy.median "numpy.median") equivalent to `percentile(..., 50)` [`nanpercentile`](numpy.nanpercentile#numpy.nanpercentile "numpy.nanpercentile") [`quantile`](numpy.quantile#numpy.quantile "numpy.quantile") equivalent to percentile, except q in the range [0, 1]. #### Notes Given a vector `V` of length `N`, the q-th percentile of `V` is the value `q/100` of the way from the minimum to the maximum in a sorted copy of `V`. The values and distances of the two nearest neighbors as well as the `method` parameter will determine the percentile if the normalized ranking does not match the location of `q` exactly. This function is the same as the median if `q=50`, the same as the minimum if `q=0` and the same as the maximum if `q=100`. This optional `method` parameter specifies the method to use when the desired quantile lies between two data points `i < j`. 
If `g` is the fractional part of the index surrounded by `i` and `j`, then alpha and beta are correction constants modifying `i` and `j`. Below, ‘q’ is the quantile value, ‘n’ is the sample size, and alpha and beta are constants. The following formula gives an interpolation “i + g” of where the quantile would be in the sorted sample, with ‘i’ being the floor and ‘g’ the fractional part of the result: \[i + g = (q - alpha) / ( n - alpha - beta + 1 )\] The different methods then work as follows. inverted_cdf: method 1 of H&F [[1]](#r08bde0ebf37b-1). This method gives discontinuous results: * if g > 0 ; then take j * if g = 0 ; then take i averaged_inverted_cdf: method 2 of H&F [[1]](#r08bde0ebf37b-1). This method gives discontinuous results: * if g > 0 ; then take j * if g = 0 ; then average between bounds closest_observation: method 3 of H&F [[1]](#r08bde0ebf37b-1). This method gives discontinuous results: * if g > 0 ; then take j * if g = 0 and index is odd ; then take j * if g = 0 and index is even ; then take i interpolated_inverted_cdf: method 4 of H&F [[1]](#r08bde0ebf37b-1). This method gives continuous results using: * alpha = 0 * beta = 1 hazen: method 5 of H&F [[1]](#r08bde0ebf37b-1). This method gives continuous results using: * alpha = 1/2 * beta = 1/2 weibull: method 6 of H&F [[1]](#r08bde0ebf37b-1). This method gives continuous results using: * alpha = 0 * beta = 0 linear: method 7 of H&F [[1]](#r08bde0ebf37b-1). This method gives continuous results using: * alpha = 1 * beta = 1 median_unbiased: method 8 of H&F [[1]](#r08bde0ebf37b-1). This method is probably the best method if the sample distribution function is unknown (see reference). This method gives continuous results using: * alpha = 1/3 * beta = 1/3 normal_unbiased: method 9 of H&F [[1]](#r08bde0ebf37b-1). This method is probably the best method if the sample distribution function is known to be normal.
This method gives continuous results using: * alpha = 3/8 * beta = 3/8 lower: NumPy method kept for backwards compatibility. Takes `i` as the interpolation point. higher: NumPy method kept for backwards compatibility. Takes `j` as the interpolation point. nearest: NumPy method kept for backwards compatibility. Takes `i` or `j`, whichever is nearest. midpoint: NumPy method kept for backwards compatibility. Uses `(i + j) / 2`. #### References 1([1](#id1),[2](#id2),[3](#id3),[4](#id4),[5](#id5),[6](#id6),[7](#id7),[8](#id8),[9](#id9),[10](#id10)) <NAME> and <NAME>, “Sample quantiles in statistical packages,” The American Statistician, 50(4), pp. 361-365, 1996 #### Examples ``` >>> a = np.array([[10, 7, 4], [3, 2, 1]]) >>> a array([[10, 7, 4], [ 3, 2, 1]]) >>> np.percentile(a, 50) 3.5 >>> np.percentile(a, 50, axis=0) array([6.5, 4.5, 2.5]) >>> np.percentile(a, 50, axis=1) array([7., 2.]) >>> np.percentile(a, 50, axis=1, keepdims=True) array([[7.], [2.]]) ``` ``` >>> m = np.percentile(a, 50, axis=0) >>> out = np.zeros_like(m) >>> np.percentile(a, 50, axis=0, out=out) array([6.5, 4.5, 2.5]) >>> m array([6.5, 4.5, 2.5]) ``` ``` >>> b = a.copy() >>> np.percentile(b, 50, axis=1, overwrite_input=True) array([7., 2.]) >>> assert not np.all(a == b) ``` The different methods can be visualized graphically:

```
import matplotlib.pyplot as plt

a = np.arange(4)
p = np.linspace(0, 100, 6001)
ax = plt.gca()
lines = [
    ('linear', '-', 'C0'),
    ('inverted_cdf', ':', 'C1'),
    # Almost the same as `inverted_cdf`:
    ('averaged_inverted_cdf', '-.', 'C1'),
    ('closest_observation', ':', 'C2'),
    ('interpolated_inverted_cdf', '--', 'C1'),
    ('hazen', '--', 'C3'),
    ('weibull', '-.', 'C4'),
    ('median_unbiased', '--', 'C5'),
    ('normal_unbiased', '-.', 'C6'),
]
for method, style, color in lines:
    ax.plot(
        p, np.percentile(a, p, method=method),
        label=method, linestyle=style, color=color)
ax.set(
    title='Percentiles for different methods and data: ' + str(a),
    xlabel='Percentile',
    ylabel='Estimated percentile value',
    yticks=a)
ax.legend()
plt.show()
```

*(figure omitted: percentile estimates for each method on the data `[0 1 2 3]`)*

© 2005–2022 NumPy Developers Licensed under the 3-clause BSD License. <https://numpy.org/doc/1.23/reference/generated/numpy.percentile.html>

numpy.nanpercentile
===================

numpy.nanpercentile(*a*, *q*, *axis=None*, *out=None*, *overwrite_input=False*, *method='linear'*, *keepdims=<no value>*, ***, *interpolation=None*)[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/lib/nanfunctions.py#L1231-L1385) Compute the qth percentile of the data along the specified axis, while ignoring nan values. Returns the qth percentile(s) of the array elements. New in version 1.9.0. Parameters **a**array_like Input array or object that can be converted to an array, containing nan values to be ignored. **q**array_like of float Percentile or sequence of percentiles to compute, which must be between 0 and 100 inclusive. **axis**{int, tuple of int, None}, optional Axis or axes along which the percentiles are computed. The default is to compute the percentile(s) along a flattened version of the array. **out**ndarray, optional Alternative output array in which to place the result. It must have the same shape and buffer length as the expected output, but the type (of the output) will be cast if necessary. **overwrite_input**bool, optional If True, then allow the input array `a` to be modified by intermediate calculations, to save memory. In this case, the contents of the input `a` after this function completes are undefined. **method**str, optional This parameter specifies the method to use for estimating the percentile. There are many different methods, some unique to NumPy. See the notes for explanation. The options sorted by their R type as summarized in the H&F paper [[1]](#re21b1d0b0470-1) are: 1. ‘inverted_cdf’ 2. ‘averaged_inverted_cdf’ 3. ‘closest_observation’ 4. ‘interpolated_inverted_cdf’ 5. ‘hazen’ 6. ‘weibull’ 7. ‘linear’ (default) 8. ‘median_unbiased’ 9.
‘normal_unbiased’ The first three methods are discontinuous. NumPy further defines the following discontinuous variations of the default ‘linear’ (7.) option: * ‘lower’ * ‘higher’ * ‘midpoint’ * ‘nearest’ Changed in version 1.22.0: This argument was previously called “interpolation” and only offered the “linear” default and last four options. **keepdims**bool, optional If this is set to True, the axes which are reduced are left in the result as dimensions with size one. With this option, the result will broadcast correctly against the original array `a`. If this is anything but the default value it will be passed through (in the special case of an empty array) to the [`mean`](numpy.mean#numpy.mean "numpy.mean") function of the underlying array. If the array is a sub-class and [`mean`](numpy.mean#numpy.mean "numpy.mean") does not have the kwarg `keepdims`, this will raise a RuntimeError. **interpolation**str, optional Deprecated name for the method keyword argument. Deprecated since version 1.22.0. Returns **percentile**scalar or ndarray If `q` is a single percentile and `axis=None`, then the result is a scalar. If multiple percentiles are given, first axis of the result corresponds to the percentiles. The other axes are the axes that remain after the reduction of `a`. If the input contains integers or floats smaller than `float64`, the output data-type is `float64`. Otherwise, the output data-type is the same as that of the input. If `out` is specified, that array is returned instead. See also [`nanmean`](numpy.nanmean#numpy.nanmean "numpy.nanmean") [`nanmedian`](numpy.nanmedian#numpy.nanmedian "numpy.nanmedian") equivalent to `nanpercentile(..., 50)` [`percentile`](numpy.percentile#numpy.percentile "numpy.percentile"), [`median`](numpy.median#numpy.median "numpy.median"), [`mean`](numpy.mean#numpy.mean "numpy.mean") [`nanquantile`](numpy.nanquantile#numpy.nanquantile "numpy.nanquantile") equivalent to nanpercentile, except q in range [0, 1].
#### Notes For more information please see [`numpy.percentile`](numpy.percentile#numpy.percentile "numpy.percentile") #### References [1](#id1) <NAME> and <NAME>, “Sample quantiles in statistical packages,” The American Statistician, 50(4), pp. 361-365, 1996 #### Examples ``` >>> a = np.array([[10., 7., 4.], [3., 2., 1.]]) >>> a[0][1] = np.nan >>> a array([[10., nan, 4.], [ 3., 2., 1.]]) >>> np.percentile(a, 50) nan >>> np.nanpercentile(a, 50) 3.0 >>> np.nanpercentile(a, 50, axis=0) array([6.5, 2. , 2.5]) >>> np.nanpercentile(a, 50, axis=1, keepdims=True) array([[7.], [2.]]) >>> m = np.nanpercentile(a, 50, axis=0) >>> out = np.zeros_like(m) >>> np.nanpercentile(a, 50, axis=0, out=out) array([6.5, 2. , 2.5]) >>> m array([6.5, 2. , 2.5]) ``` ``` >>> b = a.copy() >>> np.nanpercentile(b, 50, axis=1, overwrite_input=True) array([7., 2.]) >>> assert not np.all(a==b) ``` © 2005–2022 NumPy Developers Licensed under the 3-clause BSD License. <https://numpy.org/doc/1.23/reference/generated/numpy.nanpercentile.htmlnumpy.quantile ============== numpy.quantile(*a*, *q*, *axis=None*, *out=None*, *overwrite_input=False*, *method='linear'*, *keepdims=False*, ***, *interpolation=None*)[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/lib/function_base.py#L4175-L4413) Compute the q-th quantile of the data along the specified axis. New in version 1.15.0. Parameters **a**array_like Input array or object that can be converted to an array. **q**array_like of float Quantile or sequence of quantiles to compute, which must be between 0 and 1 inclusive. **axis**{int, tuple of int, None}, optional Axis or axes along which the quantiles are computed. The default is to compute the quantile(s) along a flattened version of the array. **out**ndarray, optional Alternative output array in which to place the result. It must have the same shape and buffer length as the expected output, but the type (of the output) will be cast if necessary. 
**overwrite_input**bool, optional If True, then allow the input array `a` to be modified by intermediate calculations, to save memory. In this case, the contents of the input `a` after this function completes is undefined. **method**str, optional This parameter specifies the method to use for estimating the quantile. There are many different methods, some unique to NumPy. See the notes for explanation. The options sorted by their R type as summarized in the H&F paper [[1]](#re01cd3f3acfe-1) are: 1. ‘inverted_cdf’ 2. ‘averaged_inverted_cdf’ 3. ‘closest_observation’ 4. ‘interpolated_inverted_cdf’ 5. ‘hazen’ 6. ‘weibull’ 7. ‘linear’ (default) 8. ‘median_unbiased’ 9. ‘normal_unbiased’ The first three methods are discontinuous. NumPy further defines the following discontinuous variations of the default ‘linear’ (7.) option: * ‘lower’ * ‘higher’, * ‘midpoint’ * ‘nearest’ Changed in version 1.22.0: This argument was previously called “interpolation” and only offered the “linear” default and last four options. **keepdims**bool, optional If this is set to True, the axes which are reduced are left in the result as dimensions with size one. With this option, the result will broadcast correctly against the original array `a`. **interpolation**str, optional Deprecated name for the method keyword argument. Deprecated since version 1.22.0. Returns **quantile**scalar or ndarray If `q` is a single quantile and `axis=None`, then the result is a scalar. If multiple quantiles are given, first axis of the result corresponds to the quantiles. The other axes are the axes that remain after the reduction of `a`. If the input contains integers or floats smaller than `float64`, the output data-type is `float64`. Otherwise, the output data-type is the same as that of the input. If `out` is specified, that array is returned instead. 
See also [`mean`](numpy.mean#numpy.mean "numpy.mean") [`percentile`](numpy.percentile#numpy.percentile "numpy.percentile") equivalent to quantile, but with q in the range [0, 100]. [`median`](numpy.median#numpy.median "numpy.median") equivalent to `quantile(..., 0.5)` [`nanquantile`](numpy.nanquantile#numpy.nanquantile "numpy.nanquantile") #### Notes Given a vector `V` of length `N`, the q-th quantile of `V` is the value `q` of the way from the minimum to the maximum in a sorted copy of `V`. The values and distances of the two nearest neighbors as well as the `method` parameter will determine the quantile if the normalized ranking does not match the location of `q` exactly. This function is the same as the median if `q=0.5`, the same as the minimum if `q=0.0` and the same as the maximum if `q=1.0`. The optional `method` parameter specifies the method to use when the desired quantile lies between two data points `i < j`. If `g` is the fractional part of the index surrounded by `i` and `j`, and alpha and beta are correction constants modifying i and j: \[i + g = (q - alpha) / ( n - alpha - beta + 1 )\] The different methods then work as follows inverted_cdf: method 1 of H&F [[1]](#re01cd3f3acfe-1). This method gives discontinuous results: * if g > 0 ; then take j * if g = 0 ; then take i averaged_inverted_cdf: method 2 of H&F [[1]](#re01cd3f3acfe-1). This method gives discontinuous results: * if g > 0 ; then take j * if g = 0 ; then average between bounds closest_observation: method 3 of H&F [[1]](#re01cd3f3acfe-1). This method gives discontinuous results: * if g > 0 ; then take j * if g = 0 and index is odd ; then take j * if g = 0 and index is even ; then take i interpolated_inverted_cdf: method 4 of H&F [[1]](#re01cd3f3acfe-1). This method gives continuous results using: * alpha = 0 * beta = 1 hazen: method 5 of H&F [[1]](#re01cd3f3acfe-1). This method gives continuous results using: * alpha = 1/2 * beta = 1/2 weibull: method 6 of H&F [[1]](#re01cd3f3acfe-1). 
This method gives continuous results using: * alpha = 0 * beta = 0 linear: method 7 of H&F [[1]](#re01cd3f3acfe-1). This method gives continuous results using: * alpha = 1 * beta = 1 median_unbiased: method 8 of H&F [[1]](#re01cd3f3acfe-1). This method is probably the best method if the sample distribution function is unknown (see reference). This method gives continuous results using: * alpha = 1/3 * beta = 1/3 normal_unbiased: method 9 of H&F [[1]](#re01cd3f3acfe-1). This method is probably the best method if the sample distribution function is known to be normal. This method gives continuous results using: * alpha = 3/8 * beta = 3/8 lower: NumPy method kept for backwards compatibility. Takes `i` as the interpolation point. higher: NumPy method kept for backwards compatibility. Takes `j` as the interpolation point. nearest: NumPy method kept for backwards compatibility. Takes `i` or `j`, whichever is nearest. midpoint: NumPy method kept for backwards compatibility. Uses `(i + j) / 2`. #### References 1([1](#id1),[2](#id2),[3](#id3),[4](#id4),[5](#id5),[6](#id6),[7](#id7),[8](#id8),[9](#id9),[10](#id10)) <NAME> and <NAME>, “Sample quantiles in statistical packages,” The American Statistician, 50(4), pp. 361-365, 1996 #### Examples ``` >>> a = np.array([[10, 7, 4], [3, 2, 1]]) >>> a array([[10, 7, 4], [ 3, 2, 1]]) >>> np.quantile(a, 0.5) 3.5 >>> np.quantile(a, 0.5, axis=0) array([6.5, 4.5, 2.5]) >>> np.quantile(a, 0.5, axis=1) array([7., 2.]) >>> np.quantile(a, 0.5, axis=1, keepdims=True) array([[7.], [2.]]) >>> m = np.quantile(a, 0.5, axis=0) >>> out = np.zeros_like(m) >>> np.quantile(a, 0.5, axis=0, out=out) array([6.5, 4.5, 2.5]) >>> m array([6.5, 4.5, 2.5]) >>> b = a.copy() >>> np.quantile(b, 0.5, axis=1, overwrite_input=True) array([7., 2.]) >>> assert not np.all(a == b) ``` See also [`numpy.percentile`](numpy.percentile#numpy.percentile "numpy.percentile") for a visualization of most methods. 
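The ‘linear’ rule above (alpha = beta = 1) can be checked by hand: the interpolation index for a sorted sample of size `n` lands at `q * (n - 1)`, with `i` its floor and `g` its fractional part. A sketch (the `method` keyword requires NumPy >= 1.22):

```python
import numpy as np

a = np.array([1.0, 2.0, 3.0, 4.0])   # already sorted
q = 0.5

pos = q * (len(a) - 1)               # 1.5: between i=1 and j=2
i, g = int(pos), pos - int(pos)
by_hand = a[i] + g * (a[i + 1] - a[i])        # 2.0 + 0.5 * 1.0 = 2.5

# The compatibility methods pick one of the two neighbours instead:
lower = np.quantile(a, q, method='lower')     # takes a[i] -> 2.0
higher = np.quantile(a, q, method='higher')   # takes a[j] -> 3.0
```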
© 2005–2022 NumPy Developers Licensed under the 3-clause BSD License. <https://numpy.org/doc/1.23/reference/generated/numpy.quantile.html>

numpy.nanquantile
=================

numpy.nanquantile(*a*, *q*, *axis=None*, *out=None*, *overwrite_input=False*, *method='linear'*, *keepdims=<no value>*, ***, *interpolation=None*)[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/lib/nanfunctions.py#L1393-L1542) Compute the qth quantile of the data along the specified axis, while ignoring nan values. Returns the qth quantile(s) of the array elements. New in version 1.15.0. Parameters **a**array_like Input array or object that can be converted to an array, containing nan values to be ignored. **q**array_like of float Quantile or sequence of quantiles to compute, which must be between 0 and 1 inclusive. **axis**{int, tuple of int, None}, optional Axis or axes along which the quantiles are computed. The default is to compute the quantile(s) along a flattened version of the array. **out**ndarray, optional Alternative output array in which to place the result. It must have the same shape and buffer length as the expected output, but the type (of the output) will be cast if necessary. **overwrite_input**bool, optional If True, then allow the input array `a` to be modified by intermediate calculations, to save memory. In this case, the contents of the input `a` after this function completes are undefined. **method**str, optional This parameter specifies the method to use for estimating the quantile. There are many different methods, some unique to NumPy. See the notes for explanation. The options sorted by their R type as summarized in the H&F paper [[1]](#r02de30f409d2-1) are: 1. ‘inverted_cdf’ 2. ‘averaged_inverted_cdf’ 3. ‘closest_observation’ 4. ‘interpolated_inverted_cdf’ 5. ‘hazen’ 6. ‘weibull’ 7. ‘linear’ (default) 8. ‘median_unbiased’ 9. ‘normal_unbiased’ The first three methods are discontinuous.
NumPy further defines the following discontinuous variations of the default ‘linear’ (7.) option: * ‘lower’ * ‘higher’, * ‘midpoint’ * ‘nearest’ Changed in version 1.22.0: This argument was previously called “interpolation” and only offered the “linear” default and last four options. **keepdims**bool, optional If this is set to True, the axes which are reduced are left in the result as dimensions with size one. With this option, the result will broadcast correctly against the original array `a`. If this is anything but the default value it will be passed through (in the special case of an empty array) to the [`mean`](numpy.mean#numpy.mean "numpy.mean") function of the underlying array. If the array is a sub-class and [`mean`](numpy.mean#numpy.mean "numpy.mean") does not have the kwarg `keepdims` this will raise a RuntimeError. **interpolation**str, optional Deprecated name for the method keyword argument. Deprecated since version 1.22.0. Returns **quantile**scalar or ndarray If `q` is a single percentile and `axis=None`, then the result is a scalar. If multiple quantiles are given, first axis of the result corresponds to the quantiles. The other axes are the axes that remain after the reduction of `a`. If the input contains integers or floats smaller than `float64`, the output data-type is `float64`. Otherwise, the output data-type is the same as that of the input. If `out` is specified, that array is returned instead. See also [`quantile`](numpy.quantile#numpy.quantile "numpy.quantile") [`nanmean`](numpy.nanmean#numpy.nanmean "numpy.nanmean"), [`nanmedian`](numpy.nanmedian#numpy.nanmedian "numpy.nanmedian") [`nanmedian`](numpy.nanmedian#numpy.nanmedian "numpy.nanmedian") equivalent to `nanquantile(..., 0.5)` [`nanpercentile`](numpy.nanpercentile#numpy.nanpercentile "numpy.nanpercentile") same as nanquantile, but with q in the range [0, 100]. 
#### Notes For more information please see [`numpy.quantile`](numpy.quantile#numpy.quantile "numpy.quantile") #### References [1](#id1) <NAME> and <NAME>, “Sample quantiles in statistical packages,” The American Statistician, 50(4), pp. 361-365, 1996 #### Examples ``` >>> a = np.array([[10., 7., 4.], [3., 2., 1.]]) >>> a[0][1] = np.nan >>> a array([[10., nan, 4.], [ 3., 2., 1.]]) >>> np.quantile(a, 0.5) nan >>> np.nanquantile(a, 0.5) 3.0 >>> np.nanquantile(a, 0.5, axis=0) array([6.5, 2. , 2.5]) >>> np.nanquantile(a, 0.5, axis=1, keepdims=True) array([[7.], [2.]]) >>> m = np.nanquantile(a, 0.5, axis=0) >>> out = np.zeros_like(m) >>> np.nanquantile(a, 0.5, axis=0, out=out) array([6.5, 2. , 2.5]) >>> m array([6.5, 2. , 2.5]) >>> b = a.copy() >>> np.nanquantile(b, 0.5, axis=1, overwrite_input=True) array([7., 2.]) >>> assert not np.all(a==b) ``` © 2005–2022 NumPy Developers Licensed under the 3-clause BSD License. <https://numpy.org/doc/1.23/reference/generated/numpy.nanquantile.htmlnumpy.nanmedian =============== numpy.nanmedian(*a*, *axis=None*, *out=None*, *overwrite_input=False*, *keepdims=<no value>*)[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/lib/nanfunctions.py#L1126-L1222) Compute the median along the specified axis, while ignoring NaNs. Returns the median of the array elements. New in version 1.9.0. Parameters **a**array_like Input array or object that can be converted to an array. **axis**{int, sequence of int, None}, optional Axis or axes along which the medians are computed. The default is to compute the median along a flattened version of the array. A sequence of axes is supported since version 1.9.0. **out**ndarray, optional Alternative output array in which to place the result. It must have the same shape and buffer length as the expected output, but the type (of the output) will be cast if necessary. **overwrite_input**bool, optional If True, then allow use of memory of input array `a` for calculations. 
The input array will be modified by the call to [`median`](numpy.median#numpy.median "numpy.median"). This will save memory when you do not need to preserve the contents of the input array. Treat the input as undefined, but it will probably be fully or partially sorted. Default is False. If `overwrite_input` is `True` and `a` is not already an [`ndarray`](numpy.ndarray#numpy.ndarray "numpy.ndarray"), an error will be raised. **keepdims**bool, optional If this is set to True, the axes which are reduced are left in the result as dimensions with size one. With this option, the result will broadcast correctly against the original `a`. If this is anything but the default value it will be passed through (in the special case of an empty array) to the [`mean`](numpy.mean#numpy.mean "numpy.mean") function of the underlying array. If the array is a sub-class and [`mean`](numpy.mean#numpy.mean "numpy.mean") does not have the kwarg `keepdims` this will raise a RuntimeError. Returns **median**ndarray A new array holding the result. If the input contains integers or floats smaller than `float64`, then the output data-type is `np.float64`. Otherwise, the data-type of the output is the same as that of the input. If `out` is specified, that array is returned instead. See also [`mean`](numpy.mean#numpy.mean "numpy.mean"), [`median`](numpy.median#numpy.median "numpy.median"), [`percentile`](numpy.percentile#numpy.percentile "numpy.percentile") #### Notes Given a vector `V` of length `N`, the median of `V` is the middle value of a sorted copy of `V`, `V_sorted` - i.e., `V_sorted[(N-1)/2]`, when `N` is odd and the average of the two middle values of `V_sorted` when `N` is even. #### Examples ``` >>> a = np.array([[10.0, 7, 4], [3, 2, 1]]) >>> a[0, 1] = np.nan >>> a array([[10., nan, 4.], [ 3., 2., 1.]]) >>> np.median(a) nan >>> np.nanmedian(a) 3.0 >>> np.nanmedian(a, axis=0) array([6.5, 2. 
, 2.5]) >>> np.median(a, axis=1) array([nan, 2.]) >>> b = a.copy() >>> np.nanmedian(b, axis=1, overwrite_input=True) array([7., 2.]) >>> assert not np.all(a==b) >>> b = a.copy() >>> np.nanmedian(b, axis=None, overwrite_input=True) 3.0 >>> assert not np.all(a==b) ``` © 2005–2022 NumPy Developers Licensed under the 3-clause BSD License. <https://numpy.org/doc/1.23/reference/generated/numpy.nanmedian.htmlnumpy.nanmean ============= numpy.nanmean(*a*, *axis=None*, *dtype=None*, *out=None*, *keepdims=<no value>*, ***, *where=<no value>*)[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/lib/nanfunctions.py#L952-L1055) Compute the arithmetic mean along the specified axis, ignoring NaNs. Returns the average of the array elements. The average is taken over the flattened array by default, otherwise over the specified axis. [`float64`](../arrays.scalars#numpy.float64 "numpy.float64") intermediate and return values are used for integer inputs. For all-NaN slices, NaN is returned and a `RuntimeWarning` is raised. New in version 1.8.0. Parameters **a**array_like Array containing numbers whose mean is desired. If `a` is not an array, a conversion is attempted. **axis**{int, tuple of int, None}, optional Axis or axes along which the means are computed. The default is to compute the mean of the flattened array. **dtype**data-type, optional Type to use in computing the mean. For integer inputs, the default is [`float64`](../arrays.scalars#numpy.float64 "numpy.float64"); for inexact inputs, it is the same as the input dtype. **out**ndarray, optional Alternate output array in which to place the result. The default is `None`; if provided, it must have the same shape as the expected output, but the type will be cast if necessary. See [Output type determination](../../user/basics.ufuncs#ufuncs-output-type) for more details. **keepdims**bool, optional If this is set to True, the axes which are reduced are left in the result as dimensions with size one. 
With this option, the result will broadcast correctly against the original `a`. If the value is anything but the default, then `keepdims` will be passed through to the [`mean`](numpy.mean#numpy.mean "numpy.mean") or [`sum`](numpy.sum#numpy.sum "numpy.sum") methods of sub-classes of [`ndarray`](numpy.ndarray#numpy.ndarray "numpy.ndarray"). If the sub-classes methods does not implement `keepdims` any exceptions will be raised. **where**array_like of bool, optional Elements to include in the mean. See [`reduce`](numpy.ufunc.reduce#numpy.ufunc.reduce "numpy.ufunc.reduce") for details. New in version 1.22.0. Returns **m**ndarray, see dtype parameter above If `out=None`, returns a new array containing the mean values, otherwise a reference to the output array is returned. Nan is returned for slices that contain only NaNs. See also [`average`](numpy.average#numpy.average "numpy.average") Weighted average [`mean`](numpy.mean#numpy.mean "numpy.mean") Arithmetic mean taken while not ignoring NaNs [`var`](numpy.var#numpy.var "numpy.var"), [`nanvar`](numpy.nanvar#numpy.nanvar "numpy.nanvar") #### Notes The arithmetic mean is the sum of the non-NaN elements along the axis divided by the number of non-NaN elements. Note that for floating-point input, the mean is computed using the same precision the input has. Depending on the input data, this can cause the results to be inaccurate, especially for [`float32`](../arrays.scalars#numpy.float32 "numpy.float32"). Specifying a higher-precision accumulator using the [`dtype`](numpy.dtype#numpy.dtype "numpy.dtype") keyword can alleviate this issue. #### Examples ``` >>> a = np.array([[1, np.nan], [3, 4]]) >>> np.nanmean(a) 2.6666666666666665 >>> np.nanmean(a, axis=0) array([2., 4.]) >>> np.nanmean(a, axis=1) array([1., 3.5]) # may vary ``` © 2005–2022 NumPy Developers Licensed under the 3-clause BSD License. 
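The note above — the mean is the sum of the non-NaN elements divided by their count — can be verified directly with `nansum` and `isnan`:

```python
import numpy as np

a = np.array([[1.0, np.nan], [3.0, 4.0]])

# Count only the finite entries, then divide the NaN-ignoring sum by that count.
n_valid = np.count_nonzero(~np.isnan(a))   # 3
by_hand = np.nansum(a) / n_valid           # (1 + 3 + 4) / 3

assert np.isclose(by_hand, np.nanmean(a))
```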
<https://numpy.org/doc/1.23/reference/generated/numpy.nanmean.htmlnumpy.nanstd ============ numpy.nanstd(*a*, *axis=None*, *dtype=None*, *out=None*, *ddof=0*, *keepdims=<no value>*, ***, *where=<no value>*)[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/lib/nanfunctions.py#L1777-L1886) Compute the standard deviation along the specified axis, while ignoring NaNs. Returns the standard deviation, a measure of the spread of a distribution, of the non-NaN array elements. The standard deviation is computed for the flattened array by default, otherwise over the specified axis. For all-NaN slices or slices with zero degrees of freedom, NaN is returned and a `RuntimeWarning` is raised. New in version 1.8.0. Parameters **a**array_like Calculate the standard deviation of the non-NaN values. **axis**{int, tuple of int, None}, optional Axis or axes along which the standard deviation is computed. The default is to compute the standard deviation of the flattened array. **dtype**dtype, optional Type to use in computing the standard deviation. For arrays of integer type the default is float64, for arrays of float types it is the same as the array type. **out**ndarray, optional Alternative output array in which to place the result. It must have the same shape as the expected output but the type (of the calculated values) will be cast if necessary. **ddof**int, optional Means Delta Degrees of Freedom. The divisor used in calculations is `N - ddof`, where `N` represents the number of non-NaN elements. By default `ddof` is zero. **keepdims**bool, optional If this is set to True, the axes which are reduced are left in the result as dimensions with size one. With this option, the result will broadcast correctly against the original `a`. If this value is anything but the default it is passed through as-is to the relevant functions of the sub-classes. If these functions do not have a `keepdims` kwarg, a RuntimeError will be raised. 
**where**array_like of bool, optional Elements to include in the standard deviation. See [`reduce`](numpy.ufunc.reduce#numpy.ufunc.reduce "numpy.ufunc.reduce") for details. New in version 1.22.0. Returns **standard_deviation**ndarray, see dtype parameter above. If `out` is None, return a new array containing the standard deviation, otherwise return a reference to the output array. If ddof is >= the number of non-NaN elements in a slice or the slice contains only NaNs, then the result for that slice is NaN. See also [`var`](numpy.var#numpy.var "numpy.var"), [`mean`](numpy.mean#numpy.mean "numpy.mean"), [`std`](numpy.std#numpy.std "numpy.std") [`nanvar`](numpy.nanvar#numpy.nanvar "numpy.nanvar"), [`nanmean`](numpy.nanmean#numpy.nanmean "numpy.nanmean") [Output type determination](../../user/basics.ufuncs#ufuncs-output-type) #### Notes The standard deviation is the square root of the average of the squared deviations from the mean: `std = sqrt(mean(abs(x - x.mean())**2))`. The average squared deviation is normally calculated as `x.sum() / N`, where `N = len(x)`. If, however, `ddof` is specified, the divisor `N - ddof` is used instead. In standard statistical practice, `ddof=1` provides an unbiased estimator of the variance of the infinite population. `ddof=0` provides a maximum likelihood estimate of the variance for normally distributed variables. The standard deviation computed in this function is the square root of the estimated variance, so even with `ddof=1`, it will not be an unbiased estimate of the standard deviation per se. Note that, for complex numbers, [`std`](numpy.std#numpy.std "numpy.std") takes the absolute value before squaring, so that the result is always real and nonnegative. For floating-point input, the *std* is computed using the same precision the input has. Depending on the input data, this can cause the results to be inaccurate, especially for float32 (see example below). 
Specifying a higher-accuracy accumulator using the [`dtype`](numpy.dtype#numpy.dtype "numpy.dtype") keyword can alleviate this issue. #### Examples ``` >>> a = np.array([[1, np.nan], [3, 4]]) >>> np.nanstd(a) 1.247219128924647 >>> np.nanstd(a, axis=0) array([1., 0.]) >>> np.nanstd(a, axis=1) array([0., 0.5]) # may vary ```

numpy.nanvar
============

numpy.nanvar(*a*, *axis=None*, *dtype=None*, *out=None*, *ddof=0*, *keepdims=<no value>*, ***, *where=<no value>*)[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/lib/nanfunctions.py#L1616-L1769) Compute the variance along the specified axis, while ignoring NaNs. Returns the variance of the array elements, a measure of the spread of a distribution. The variance is computed for the flattened array by default, otherwise over the specified axis. For all-NaN slices or slices with zero degrees of freedom, NaN is returned and a `RuntimeWarning` is raised. New in version 1.8.0. Parameters **a**array_like Array containing numbers whose variance is desired. If `a` is not an array, a conversion is attempted. **axis**{int, tuple of int, None}, optional Axis or axes along which the variance is computed. The default is to compute the variance of the flattened array. **dtype**data-type, optional Type to use in computing the variance. For arrays of integer type the default is [`float64`](../arrays.scalars#numpy.float64 "numpy.float64"); for arrays of float types it is the same as the array type. **out**ndarray, optional Alternate output array in which to place the result. It must have the same shape as the expected output, but the type is cast if necessary. **ddof**int, optional “Delta Degrees of Freedom”: the divisor used in the calculation is `N - ddof`, where `N` represents the number of non-NaN elements. By default `ddof` is zero.
**keepdims**bool, optional If this is set to True, the axes which are reduced are left in the result as dimensions with size one. With this option, the result will broadcast correctly against the original `a`. **where**array_like of bool, optional Elements to include in the variance. See [`reduce`](numpy.ufunc.reduce#numpy.ufunc.reduce "numpy.ufunc.reduce") for details. New in version 1.22.0. Returns **variance**ndarray, see dtype parameter above If `out` is None, return a new array containing the variance, otherwise return a reference to the output array. If ddof is >= the number of non-NaN elements in a slice or the slice contains only NaNs, then the result for that slice is NaN. See also [`std`](numpy.std#numpy.std "numpy.std") Standard deviation [`mean`](numpy.mean#numpy.mean "numpy.mean") Average [`var`](numpy.var#numpy.var "numpy.var") Variance while not ignoring NaNs [`nanstd`](numpy.nanstd#numpy.nanstd "numpy.nanstd"), [`nanmean`](numpy.nanmean#numpy.nanmean "numpy.nanmean") [Output type determination](../../user/basics.ufuncs#ufuncs-output-type) #### Notes The variance is the average of the squared deviations from the mean, i.e., `var = mean(abs(x - x.mean())**2)`. The mean is normally calculated as `x.sum() / N`, where `N = len(x)`. If, however, `ddof` is specified, the divisor `N - ddof` is used instead. In standard statistical practice, `ddof=1` provides an unbiased estimator of the variance of a hypothetical infinite population. `ddof=0` provides a maximum likelihood estimate of the variance for normally distributed variables. Note that for complex numbers, the absolute value is taken before squaring, so that the result is always real and nonnegative. For floating-point input, the variance is computed using the same precision the input has. Depending on the input data, this can cause the results to be inaccurate, especially for [`float32`](../arrays.scalars#numpy.float32 "numpy.float32") (see example below). 
Specifying a higher-accuracy accumulator using the `dtype` keyword can alleviate this issue. For this function to work on sub-classes of ndarray, they must define [`sum`](numpy.sum#numpy.sum "numpy.sum") with the kwarg `keepdims`. #### Examples ``` >>> a = np.array([[1, np.nan], [3, 4]]) >>> np.nanvar(a) 1.5555555555555554 >>> np.nanvar(a, axis=0) array([1., 0.]) >>> np.nanvar(a, axis=1) array([0., 0.25]) # may vary ```

numpy.correlate
===============

numpy.correlate(*a*, *v*, *mode='valid'*)[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/core/numeric.py#L676-L747) Cross-correlation of two 1-dimensional sequences. This function computes the correlation as generally defined in signal processing texts: \[c_k = \sum_n a_{n+k} \cdot \overline{v_n}\] with a and v sequences being zero-padded where necessary and \(\overline x\) denoting complex conjugation. Parameters **a, v**array_like Input sequences. **mode**{‘valid’, ‘same’, ‘full’}, optional Refer to the [`convolve`](numpy.convolve#numpy.convolve "numpy.convolve") docstring. Note that the default is ‘valid’, unlike [`convolve`](numpy.convolve#numpy.convolve "numpy.convolve"), which uses ‘full’. **old_behavior**bool `old_behavior` was removed in NumPy 1.10. If you need the old behavior, use `multiarray.correlate`. Returns **out**ndarray Discrete cross-correlation of `a` and `v`. See also [`convolve`](numpy.convolve#numpy.convolve "numpy.convolve") Discrete, linear convolution of two one-dimensional sequences. `multiarray.correlate` Old, no conjugate, version of correlate. [`scipy.signal.correlate`](https://docs.scipy.org/doc/scipy/reference/generated/scipy.signal.correlate.html#scipy.signal.correlate "(in SciPy v1.8.1)") uses FFT which has superior performance on large arrays.
#### Notes The definition of correlation above is not unique and sometimes correlation may be defined differently. Another common definition is: \[c'_k = \sum_n a_{n} \cdot \overline{v_{n+k}}\] which is related to \(c_k\) by \(c'_k = c_{-k}\). [`numpy.correlate`](#numpy.correlate "numpy.correlate") may perform slowly in large arrays (i.e. n = 1e5) because it does not use the FFT to compute the convolution; in that case, [`scipy.signal.correlate`](https://docs.scipy.org/doc/scipy/reference/generated/scipy.signal.correlate.html#scipy.signal.correlate "(in SciPy v1.8.1)") might be preferable. #### Examples ``` >>> np.correlate([1, 2, 3], [0, 1, 0.5]) array([3.5]) >>> np.correlate([1, 2, 3], [0, 1, 0.5], "same") array([2. , 3.5, 3. ]) >>> np.correlate([1, 2, 3], [0, 1, 0.5], "full") array([0.5, 2. , 3.5, 3. , 0. ]) ``` Using complex sequences: ``` >>> np.correlate([1+1j, 2, 3-1j], [0, 1, 0.5j], 'full') array([ 0.5-0.5j, 1.0+0.j , 1.5-1.5j, 3.0-1.j , 0.0+0.j ]) ``` Note that you get the time reversed, complex conjugated result (\(\overline{c_{-k}}\)) when the two input sequences a and v change places: ``` >>> np.correlate([0, 1, 0.5j], [1+1j, 2, 3-1j], 'full') array([ 0.0+0.j , 3.0+1.j , 1.5+1.5j, 1.0+0.j , 0.5+0.5j]) ```

numpy.histogram
===============

numpy.histogram(*a*, *bins=10*, *range=None*, *normed=None*, *weights=None*, *density=None*)[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/lib/histograms.py#L678-L929) Compute the histogram of a dataset. Parameters **a**array_like Input data. The histogram is computed over the flattened array. **bins**int or sequence of scalars or str, optional If `bins` is an int, it defines the number of equal-width bins in the given range (10, by default).
If `bins` is a sequence, it defines a monotonically increasing array of bin edges, including the rightmost edge, allowing for non-uniform bin widths. New in version 1.11.0. If `bins` is a string, it defines the method used to calculate the optimal bin width, as defined by [`histogram_bin_edges`](numpy.histogram_bin_edges#numpy.histogram_bin_edges "numpy.histogram_bin_edges"). **range**(float, float), optional The lower and upper range of the bins. If not provided, range is simply `(a.min(), a.max())`. Values outside the range are ignored. The first element of the range must be less than or equal to the second. `range` affects the automatic bin computation as well. While bin width is computed to be optimal based on the actual data within `range`, the bin count will fill the entire range including portions containing no data. **normed**bool, optional Deprecated since version 1.6.0. This is equivalent to the `density` argument, but produces incorrect results for unequal bin widths. It should not be used. Changed in version 1.15.0: DeprecationWarnings are actually emitted. **weights**array_like, optional An array of weights, of the same shape as `a`. Each value in `a` only contributes its associated weight towards the bin count (instead of 1). If `density` is True, the weights are normalized, so that the integral of the density over the range remains 1. **density**bool, optional If `False`, the result will contain the number of samples in each bin. If `True`, the result is the value of the probability *density* function at the bin, normalized such that the *integral* over the range is 1. Note that the sum of the histogram values will not be equal to 1 unless bins of unity width are chosen; it is not a probability *mass* function. Overrides the `normed` keyword if given. Returns **hist**array The values of the histogram. See `density` and `weights` for a description of the possible semantics. **bin_edges**array of dtype float Return the bin edges `(length(hist)+1)`. 
See also [`histogramdd`](numpy.histogramdd#numpy.histogramdd "numpy.histogramdd"), [`bincount`](numpy.bincount#numpy.bincount "numpy.bincount"), [`searchsorted`](numpy.searchsorted#numpy.searchsorted "numpy.searchsorted"), [`digitize`](numpy.digitize#numpy.digitize "numpy.digitize"), [`histogram_bin_edges`](numpy.histogram_bin_edges#numpy.histogram_bin_edges "numpy.histogram_bin_edges") #### Notes All but the last (righthand-most) bin is half-open. In other words, if `bins` is: ``` [1, 2, 3, 4] ``` then the first bin is `[1, 2)` (including 1, but excluding 2) and the second `[2, 3)`. The last bin, however, is `[3, 4]`, which *includes* 4. #### Examples ``` >>> np.histogram([1, 2, 1], bins=[0, 1, 2, 3]) (array([0, 2, 1]), array([0, 1, 2, 3])) >>> np.histogram(np.arange(4), bins=np.arange(5), density=True) (array([0.25, 0.25, 0.25, 0.25]), array([0, 1, 2, 3, 4])) >>> np.histogram([[1, 2, 1], [1, 0, 1]], bins=[0,1,2,3]) (array([1, 4, 1]), array([0, 1, 2, 3])) ``` ``` >>> a = np.arange(5) >>> hist, bin_edges = np.histogram(a, density=True) >>> hist array([0.5, 0. , 0.5, 0. , 0. , 0.5, 0. , 0.5, 0. , 0.5]) >>> hist.sum() 2.4999999999999996 >>> np.sum(hist * np.diff(bin_edges)) 1.0 ``` New in version 1.11.0. Automated Bin Selection Methods example, using 2 peak random data with 2000 points: ``` >>> import matplotlib.pyplot as plt >>> rng = np.random.RandomState(10) # deterministic random data >>> a = np.hstack((rng.normal(size=1000), ... rng.normal(loc=5, scale=2, size=1000))) >>> _ = plt.hist(a, bins='auto') # arguments are passed to np.histogram >>> plt.title("Histogram with 'auto' bins") Text(0.5, 1.0, "Histogram with 'auto' bins") >>> plt.show() ``` ![Histogram with 'auto' bins](../../_images/numpy-histogram-1.png)
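As a brief sketch of the `weights` and `density` parameters described above (a hedged illustration, not one of the official examples):

```python
import numpy as np

# With weights, each sample adds its weight to a bin instead of 1:
# both 1s fall in [1, 2), contributing 2.0 + 1.0 = 3.0.
hist, edges = np.histogram([1, 2, 1], bins=[0, 1, 2, 3],
                           weights=[2.0, 1.0, 1.0])
print(hist)  # [0. 3. 1.]

# With density=True the values integrate to 1 over the range,
# i.e. sum(hist * bin_widths) == 1.
d, edges = np.histogram([1, 2, 1], bins=[0, 1, 2, 3], density=True)
print(np.sum(d * np.diff(edges)))  # 1.0
```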
numpy.histogram2d
=================

numpy.histogram2d(*x*, *y*, *bins=10*, *range=None*, *normed=None*, *weights=None*, *density=None*)[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/lib/twodim_base.py#L655-L826) Compute the bi-dimensional histogram of two data samples. Parameters **x**array_like, shape (N,) An array containing the x coordinates of the points to be histogrammed. **y**array_like, shape (N,) An array containing the y coordinates of the points to be histogrammed. **bins**int or array_like or [int, int] or [array, array], optional The bin specification: * If int, the number of bins for the two dimensions (nx=ny=bins). * If array_like, the bin edges for the two dimensions (x_edges=y_edges=bins). * If [int, int], the number of bins in each dimension (nx, ny = bins). * If [array, array], the bin edges in each dimension (x_edges, y_edges = bins). * A combination [int, array] or [array, int], where int is the number of bins and array is the bin edges. **range**array_like, shape(2,2), optional The leftmost and rightmost edges of the bins along each dimension (if not specified explicitly in the `bins` parameters): `[[xmin, xmax], [ymin, ymax]]`. All values outside of this range will be considered outliers and not tallied in the histogram. **density**bool, optional If False, the default, returns the number of samples in each bin. If True, returns the probability *density* function at the bin, `bin_count / sample_count / bin_area`. **normed**bool, optional An alias for the density argument that behaves identically. To avoid confusion with the broken normed argument to [`histogram`](numpy.histogram#numpy.histogram "numpy.histogram"), `density` should be preferred. **weights**array_like, shape(N,), optional An array of values `w_i` weighing each sample `(x_i, y_i)`. Weights are normalized to 1 if `normed` is True.
If `normed` is False, the values of the returned histogram are equal to the sum of the weights belonging to the samples falling into each bin. Returns **H**ndarray, shape(nx, ny) The bi-dimensional histogram of samples `x` and `y`. Values in `x` are histogrammed along the first dimension and values in `y` are histogrammed along the second dimension. **xedges**ndarray, shape(nx+1,) The bin edges along the first dimension. **yedges**ndarray, shape(ny+1,) The bin edges along the second dimension. See also [`histogram`](numpy.histogram#numpy.histogram "numpy.histogram") 1D histogram [`histogramdd`](numpy.histogramdd#numpy.histogramdd "numpy.histogramdd") Multidimensional histogram #### Notes When `normed` is True, then the returned histogram is the sample density, defined such that the sum over bins of the product `bin_value * bin_area` is 1. Please note that the histogram does not follow the Cartesian convention where `x` values are on the abscissa and `y` values on the ordinate axis. Rather, `x` is histogrammed along the first dimension of the array (vertical), and `y` along the second dimension of the array (horizontal). This ensures compatibility with [`histogramdd`](numpy.histogramdd#numpy.histogramdd "numpy.histogramdd"). #### Examples ``` >>> from matplotlib.image import NonUniformImage >>> import matplotlib.pyplot as plt ``` Construct a 2-D histogram with variable bin width. First define the bin edges: ``` >>> xedges = [0, 1, 3, 5] >>> yedges = [0, 2, 3, 4, 6] ``` Next we create a histogram H with random bin content: ``` >>> x = np.random.normal(2, 1, 100) >>> y = np.random.normal(1, 1, 100) >>> H, xedges, yedges = np.histogram2d(x, y, bins=(xedges, yedges)) >>> # Histogram does not follow Cartesian convention (see Notes), >>> # therefore transpose H for visualization purposes. 
>>> H = H.T ``` [`imshow`](https://matplotlib.org/stable/api/_as_gen/matplotlib.pyplot.imshow.html#matplotlib.pyplot.imshow "(in Matplotlib v3.5.2)") can only display square bins: ``` >>> fig = plt.figure(figsize=(7, 3)) >>> ax = fig.add_subplot(131, title='imshow: square bins') >>> plt.imshow(H, interpolation='nearest', origin='lower', ... extent=[xedges[0], xedges[-1], yedges[0], yedges[-1]]) <matplotlib.image.AxesImage object at 0x...> ``` [`pcolormesh`](https://matplotlib.org/stable/api/_as_gen/matplotlib.pyplot.pcolormesh.html#matplotlib.pyplot.pcolormesh "(in Matplotlib v3.5.2)") can display actual edges: ``` >>> ax = fig.add_subplot(132, title='pcolormesh: actual edges', ... aspect='equal') >>> X, Y = np.meshgrid(xedges, yedges) >>> ax.pcolormesh(X, Y, H) <matplotlib.collections.QuadMesh object at 0x...> ``` [`NonUniformImage`](https://matplotlib.org/stable/api/image_api.html#matplotlib.image.NonUniformImage "(in Matplotlib v3.5.2)") can be used to display actual bin edges with interpolation: ``` >>> ax = fig.add_subplot(133, title='NonUniformImage: interpolated', ... aspect='equal', xlim=xedges[[0, -1]], ylim=yedges[[0, -1]]) >>> im = NonUniformImage(ax, interpolation='bilinear') >>> xcenters = (xedges[:-1] + xedges[1:]) / 2 >>> ycenters = (yedges[:-1] + yedges[1:]) / 2 >>> im.set_data(xcenters, ycenters, H) >>> ax.images.append(im) >>> plt.show() ``` ![2-D histogram shown with imshow, pcolormesh, and NonUniformImage](../../_images/numpy-histogram2d-1_00_00.png) It is also possible to construct a 2-D histogram without specifying bin edges: ``` >>> # Generate non-symmetric test data >>> n = 10000 >>> x = np.linspace(1, 100, n) >>> y = 2*np.log(x) + np.random.rand(n) - 0.5 >>> # Compute 2d histogram. Note the order of x/y and xedges/yedges >>> H, yedges, xedges = np.histogram2d(y, x, bins=20) ``` Now we can plot the histogram using [`pcolormesh`](https://matplotlib.org/stable/api/_as_gen/matplotlib.pyplot.pcolormesh.html#matplotlib.pyplot.pcolormesh "(in Matplotlib v3.5.2)"), and a [`hexbin`](https://matplotlib.org/stable/api/_as_gen/matplotlib.pyplot.hexbin.html#matplotlib.pyplot.hexbin "(in Matplotlib v3.5.2)") for comparison. ``` >>> # Plot histogram using pcolormesh >>> fig, (ax1, ax2) = plt.subplots(ncols=2, sharey=True) >>> ax1.pcolormesh(xedges, yedges, H, cmap='rainbow') >>> ax1.plot(x, 2*np.log(x), 'k-') >>> ax1.set_xlim(x.min(), x.max()) >>> ax1.set_ylim(y.min(), y.max()) >>> ax1.set_xlabel('x') >>> ax1.set_ylabel('y') >>> ax1.set_title('histogram2d') >>> ax1.grid() ``` ``` >>> # Create hexbin plot for comparison >>> ax2.hexbin(x, y, gridsize=20, cmap='rainbow') >>> ax2.plot(x, 2*np.log(x), 'k-') >>> ax2.set_title('hexbin') >>> ax2.set_xlim(x.min(), x.max()) >>> ax2.set_xlabel('x') >>> ax2.grid() ``` ``` >>> plt.show() ``` ![histogram2d plotted with pcolormesh next to a hexbin plot](../../_images/numpy-histogram2d-1_01_00.png)

numpy.histogramdd
=================

numpy.histogramdd(*sample*, *bins=10*, *range=None*, *normed=None*, *weights=None*, *density=None*)[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/lib/histograms.py#L943-L1129) Compute the multidimensional histogram of some data. Parameters **sample**(N, D) array, or (D, N) array_like The data to be histogrammed. Note the unusual interpretation of sample when an array_like: * When an array, each row is a coordinate in a D-dimensional space - such as `histogramdd(np.array([p1, p2, p3]))`. * When an array_like, each element is the list of values for single coordinate - such as `histogramdd((X, Y, Z))`. The first form should be preferred.
**bins**sequence or int, optional The bin specification: * A sequence of arrays describing the monotonically increasing bin edges along each dimension. * The number of bins for each dimension (nx, ny, … = bins) * The number of bins for all dimensions (nx=ny=…=bins). **range**sequence, optional A sequence of length D, each an optional (lower, upper) tuple giving the outer bin edges to be used if the edges are not given explicitly in `bins`. An entry of None in the sequence results in the minimum and maximum values being used for the corresponding dimension. The default, None, is equivalent to passing a tuple of D None values. **density**bool, optional If False, the default, returns the number of samples in each bin. If True, returns the probability *density* function at the bin, `bin_count / sample_count / bin_volume`. **normed**bool, optional An alias for the density argument that behaves identically. To avoid confusion with the broken normed argument to [`histogram`](numpy.histogram#numpy.histogram "numpy.histogram"), `density` should be preferred. **weights**(N,) array_like, optional An array of values `w_i` weighing each sample `(x_i, y_i, z_i, …)`. Weights are normalized to 1 if normed is True. If normed is False, the values of the returned histogram are equal to the sum of the weights belonging to the samples falling into each bin. Returns **H**ndarray The multidimensional histogram of sample x. See normed and weights for the different possible semantics. **edges**list A list of D arrays describing the bin edges for each dimension. See also [`histogram`](numpy.histogram#numpy.histogram "numpy.histogram") 1-D histogram [`histogram2d`](numpy.histogram2d#numpy.histogram2d "numpy.histogram2d") 2-D histogram #### Examples ``` >>> r = np.random.randn(100,3) >>> H, edges = np.histogramdd(r, bins = (5, 8, 4)) >>> H.shape, edges[0].size, edges[1].size, edges[2].size ((5, 8, 4), 6, 9, 5) ```

numpy.histogram_bin_edges
=========================

numpy.histogram_bin_edges(*a*, *bins=10*, *range=None*, *weights=None*)[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/lib/histograms.py#L470-L670) Function to calculate only the edges of the bins used by the [`histogram`](numpy.histogram#numpy.histogram "numpy.histogram") function. Parameters **a**array_like Input data. The histogram is computed over the flattened array. **bins**int or sequence of scalars or str, optional If `bins` is an int, it defines the number of equal-width bins in the given range (10, by default). If `bins` is a sequence, it defines the bin edges, including the rightmost edge, allowing for non-uniform bin widths. If `bins` is a string from the list below, [`histogram_bin_edges`](#numpy.histogram_bin_edges "numpy.histogram_bin_edges") will use the method chosen to calculate the optimal bin width and consequently the number of bins (see `Notes` for more detail on the estimators) from the data that falls within the requested range.
While the bin width will be optimal for the actual data in the range, the number of bins will be computed to fill the entire range, including the empty portions. For visualisation, using the ‘auto’ option is suggested. Weighted data is not supported for automated bin size selection. ‘auto’ Maximum of the ‘sturges’ and ‘fd’ estimators. Provides good all around performance. ‘fd’ (Freedman Diaconis Estimator) Robust (resilient to outliers) estimator that takes into account data variability and data size. ‘doane’ An improved version of Sturges’ estimator that works better with non-normal datasets. ‘scott’ Less robust estimator that takes into account data variability and data size. ‘stone’ Estimator based on leave-one-out cross-validation estimate of the integrated squared error. Can be regarded as a generalization of Scott’s rule. ‘rice’ Estimator does not take variability into account, only data size. Commonly overestimates number of bins required. ‘sturges’ R’s default method, only accounts for data size. Only optimal for gaussian data and underestimates number of bins for large non-gaussian datasets. ‘sqrt’ Square root (of data size) estimator, used by Excel and other programs for its speed and simplicity. **range**(float, float), optional The lower and upper range of the bins. If not provided, range is simply `(a.min(), a.max())`. Values outside the range are ignored. The first element of the range must be less than or equal to the second. `range` affects the automatic bin computation as well. While bin width is computed to be optimal based on the actual data within `range`, the bin count will fill the entire range including portions containing no data. **weights**array_like, optional An array of weights, of the same shape as `a`. Each value in `a` only contributes its associated weight towards the bin count (instead of 1). This is currently not used by any of the bin estimators, but may be in the future. 
Returns **bin_edges**array of dtype float The edges to pass into [`histogram`](numpy.histogram#numpy.histogram "numpy.histogram") See also [`histogram`](numpy.histogram#numpy.histogram "numpy.histogram") #### Notes The methods to estimate the optimal number of bins are well founded in literature, and are inspired by the choices R provides for histogram visualisation. Note that having the number of bins proportional to \(n^{1/3}\) is asymptotically optimal, which is why it appears in most estimators. These are simply plug-in methods that give good starting points for number of bins. In the equations below, \(h\) is the binwidth and \(n_h\) is the number of bins. All estimators that compute bin counts are recast to bin width using the [`ptp`](numpy.ptp#numpy.ptp "numpy.ptp") of the data. The final bin count is obtained from `np.round(np.ceil(range / h))`. The final bin width is often less than what is returned by the estimators below. ‘auto’ (maximum of the ‘sturges’ and ‘fd’ estimators) A compromise to get a good value. For small datasets the Sturges value will usually be chosen, while larger datasets will usually default to FD. Avoids the overly conservative behaviour of FD and Sturges for small and large datasets respectively. Switchover point is usually \(a.size \approx 1000\). ‘fd’ (Freedman Diaconis Estimator) \[h = 2 \frac{IQR}{n^{1/3}}\] The binwidth is proportional to the interquartile range (IQR) and inversely proportional to cube root of a.size. Can be too conservative for small datasets, but is quite good for large datasets. The IQR is very robust to outliers. ‘scott’ \[h = \sigma \sqrt[3]{\frac{24 \sqrt{\pi}}{n}}\] The binwidth is proportional to the standard deviation of the data and inversely proportional to cube root of `x.size`. Can be too conservative for small datasets, but is quite good for large datasets. The standard deviation is not very robust to outliers. Values are very similar to the Freedman-Diaconis estimator in the absence of outliers. 
‘rice’ \[n_h = 2n^{1/3}\] The number of bins is only proportional to cube root of `a.size`. It tends to overestimate the number of bins and it does not take into account data variability. ‘sturges’ \[n_h = \log _{2}(n) + 1\] The number of bins is the base 2 log of `a.size`. This estimator assumes normality of data and is too conservative for larger, non-normal datasets. This is the default method in R’s `hist` method. ‘doane’ \[ \begin{align}\begin{aligned}n_h = 1 + \log_{2}(n) + \log_{2}\left(1 + \frac{|g_1|}{\sigma_{g_1}}\right)\\g_1 = mean\left[\left(\frac{x - \mu}{\sigma}\right)^3\right]\\\sigma_{g_1} = \sqrt{\frac{6(n - 2)}{(n + 1)(n + 3)}}\end{aligned}\end{align} \] An improved version of Sturges’ formula that produces better estimates for non-normal datasets. This estimator attempts to account for the skew of the data. ‘sqrt’ \[n_h = \sqrt n\] The simplest and fastest estimator. Only takes into account the data size. #### Examples ``` >>> arr = np.array([0, 0, 0, 1, 2, 3, 3, 4, 5]) >>> np.histogram_bin_edges(arr, bins='auto', range=(0, 1)) array([0. , 0.25, 0.5 , 0.75, 1. ]) >>> np.histogram_bin_edges(arr, bins=2) array([0. , 2.5, 5. ]) ``` For consistency with histogram, an array of pre-computed bins is passed through unmodified: ``` >>> np.histogram_bin_edges(arr, [1, 2]) array([1, 2]) ``` This function allows one set of bins to be computed, and reused across multiple histograms: ``` >>> shared_bins = np.histogram_bin_edges(arr, bins='auto') >>> shared_bins array([0., 1., 2., 3., 4., 5.]) ``` ``` >>> group_id = np.array([0, 1, 1, 0, 1, 1, 0, 1, 1]) >>> hist_0, _ = np.histogram(arr[group_id == 0], bins=shared_bins) >>> hist_1, _ = np.histogram(arr[group_id == 1], bins=shared_bins) ``` ``` >>> hist_0; hist_1 array([1, 1, 0, 1, 0]) array([2, 0, 1, 1, 2]) ``` Which gives more easily comparable results than using separate bins for each histogram: ``` >>> hist_0, bins_0 = np.histogram(arr[group_id == 0], bins='auto') >>> hist_1, bins_1 = np.histogram(arr[group_id == 1], bins='auto') >>> hist_0; hist_1 array([1, 1, 1]) array([2, 1, 1, 2]) >>> bins_0; bins_1 array([0., 1., 2., 3.]) array([0. , 1.25, 2.5 , 3.75, 5. ]) ```

numpy.digitize
==============

numpy.digitize(*x*, *bins*, *right=False*)[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/lib/function_base.py#L5447-L5555) Return the indices of the bins to which each value in input array belongs. | `right` | order of bins | returned index `i` satisfies | | --- | --- | --- | | `False` | increasing | `bins[i-1] <= x < bins[i]` | | `True` | increasing | `bins[i-1] < x <= bins[i]` | | `False` | decreasing | `bins[i-1] > x >= bins[i]` | | `True` | decreasing | `bins[i-1] >= x > bins[i]` | If values in `x` are beyond the bounds of `bins`, 0 or `len(bins)` is returned as appropriate. Parameters **x**array_like Input array to be binned. Prior to NumPy 1.10.0, this array had to be 1-dimensional, but can now have any shape. **bins**array_like Array of bins. It has to be 1-dimensional and monotonic.
**right**bool, optional Indicating whether the intervals include the right or the left bin edge. Default behavior is (right==False) indicating that the interval does not include the right edge. The left bin end is open in this case, i.e., bins[i-1] <= x < bins[i] is the default behavior for monotonically increasing bins. Returns **indices**ndarray of ints Output array of indices, of same shape as `x`. Raises ValueError If `bins` is not monotonic. TypeError If the type of the input is complex. See also [`bincount`](numpy.bincount#numpy.bincount "numpy.bincount"), [`histogram`](numpy.histogram#numpy.histogram "numpy.histogram"), [`unique`](numpy.unique#numpy.unique "numpy.unique"), [`searchsorted`](numpy.searchsorted#numpy.searchsorted "numpy.searchsorted") #### Notes If values in `x` are such that they fall outside the bin range, attempting to index `bins` with the indices that [`digitize`](#numpy.digitize "numpy.digitize") returns will result in an IndexError. New in version 1.10.0. `np.digitize` is implemented in terms of `np.searchsorted`. This means that a binary search is used to bin the values, which scales much better for larger number of bins than the previous linear search. It also removes the requirement for the input array to be 1-dimensional. For monotonically _increasing_ `bins`, the following are equivalent: ``` np.digitize(x, bins, right=True) np.searchsorted(bins, x, side='left') ``` Note that as the order of the arguments are reversed, the side must be too. The [`searchsorted`](numpy.searchsorted#numpy.searchsorted "numpy.searchsorted") call is marginally faster, as it does not do any monotonicity checks. Perhaps more importantly, it supports all dtypes. #### Examples ``` >>> x = np.array([0.2, 6.4, 3.0, 1.6]) >>> bins = np.array([0.0, 1.0, 2.5, 4.0, 10.0]) >>> inds = np.digitize(x, bins) >>> inds array([1, 4, 3, 2]) >>> for n in range(x.size): ... print(bins[inds[n]-1], "<=", x[n], "<", bins[inds[n]]) ... 
0.0 <= 0.2 < 1.0 4.0 <= 6.4 < 10.0 2.5 <= 3.0 < 4.0 1.0 <= 1.6 < 2.5 ``` ``` >>> x = np.array([1.2, 10.0, 12.4, 15.5, 20.]) >>> bins = np.array([0, 5, 10, 15, 20]) >>> np.digitize(x,bins,right=True) array([1, 2, 3, 4, 4]) >>> np.digitize(x,bins,right=False) array([1, 3, 3, 4, 5]) ``` © 2005–2022 NumPy Developers Licensed under the 3-clause BSD License. <https://numpy.org/doc/1.23/reference/generated/numpy.digitize.htmlnumpy.testing.assert_allclose ============================== testing.assert_allclose(*actual*, *desired*, *rtol=1e-07*, *atol=0*, *equal_nan=True*, *err_msg=''*, *verbose=True*)[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/testing/_private/utils.py#L1473-L1528) Raises an AssertionError if two objects are not equal up to desired tolerance. The test is equivalent to `allclose(actual, desired, rtol, atol)` (note that `allclose` has different default values). It compares the difference between `actual` and `desired` to `atol + rtol * abs(desired)`. New in version 1.5.0. Parameters **actual**array_like Array obtained. **desired**array_like Array desired. **rtol**float, optional Relative tolerance. **atol**float, optional Absolute tolerance. **equal_nan**bool, optional. If True, NaNs will compare equal. **err_msg**str, optional The error message to be printed in case of failure. **verbose**bool, optional If True, the conflicting values are appended to the error message. Raises AssertionError If actual and desired are not equal up to specified precision. 
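The tolerance rule described above, `atol + rtol * abs(desired)`, can be exercised directly. The following is an editor's sketch (variable names are illustrative, not from the original page), showing one comparison that satisfies the bound and one that does not:

```python
import numpy as np

# assert_allclose passes exactly when
# abs(actual - desired) <= atol + rtol * abs(desired).
actual, desired = 1.0 + 1e-8, 1.0
rtol, atol = 1e-7, 0.0

assert abs(actual - desired) <= atol + rtol * abs(desired)
np.testing.assert_allclose(actual, desired, rtol=rtol, atol=atol)  # passes

# Tightening rtol below the true relative error makes it raise.
try:
    np.testing.assert_allclose(actual, desired, rtol=1e-9, atol=atol)
    raised = False
except AssertionError:
    raised = True
assert raised
```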
See also [`assert_array_almost_equal_nulp`](numpy.testing.assert_array_almost_equal_nulp#numpy.testing.assert_array_almost_equal_nulp "numpy.testing.assert_array_almost_equal_nulp"), [`assert_array_max_ulp`](numpy.testing.assert_array_max_ulp#numpy.testing.assert_array_max_ulp "numpy.testing.assert_array_max_ulp") #### Examples ``` >>> x = [1e-5, 1e-3, 1e-1] >>> y = np.arccos(np.cos(x)) >>> np.testing.assert_allclose(x, y, rtol=1e-5, atol=0) ``` © 2005–2022 NumPy Developers Licensed under the 3-clause BSD License. <https://numpy.org/doc/1.23/reference/generated/numpy.testing.assert_allclose.htmlnumpy.testing.assert_array_almost_equal_nulp ================================================ testing.assert_array_almost_equal_nulp(*x*, *y*, *nulp=1*)[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/testing/_private/utils.py#L1531-L1591) Compare two arrays relatively to their spacing. This is a relatively robust method to compare two arrays whose amplitude is variable. Parameters **x, y**array_like Input arrays. **nulp**int, optional The maximum number of unit in the last place for tolerance (see Notes). Default is 1. Returns None Raises AssertionError If the spacing between `x` and `y` for one or more elements is larger than `nulp`. See also [`assert_array_max_ulp`](numpy.testing.assert_array_max_ulp#numpy.testing.assert_array_max_ulp "numpy.testing.assert_array_max_ulp") Check that all items of arrays differ in at most N Units in the Last Place. [`spacing`](numpy.spacing#numpy.spacing "numpy.spacing") Return the distance between x and the nearest adjacent number. #### Notes An assertion is raised if the following condition is not met: ``` abs(x - y) <= nulps * spacing(maximum(abs(x), abs(y))) ``` #### Examples ``` >>> x = np.array([1., 1e-10, 1e-20]) >>> eps = np.finfo(x.dtype).eps >>> np.testing.assert_array_almost_equal_nulp(x, x*eps/2 + x) ``` ``` >>> np.testing.assert_array_almost_equal_nulp(x, x*eps + x) Traceback (most recent call last): ... 
AssertionError: X and Y are not equal to 1 ULP (max is 2) ``` © 2005–2022 NumPy Developers Licensed under the 3-clause BSD License. <https://numpy.org/doc/1.23/reference/generated/numpy.testing.assert_array_almost_equal_nulp.htmlnumpy.testing.assert_array_max_ulp ===================================== testing.assert_array_max_ulp(*a*, *b*, *maxulp=1*, *dtype=None*)[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/testing/_private/utils.py#L1594-L1643) Check that all items of arrays differ in at most N Units in the Last Place. Parameters **a, b**array_like Input arrays to be compared. **maxulp**int, optional The maximum number of units in the last place that elements of `a` and `b` can differ. Default is 1. **dtype**dtype, optional Data-type to convert `a` and `b` to if given. Default is None. Returns **ret**ndarray Array containing number of representable floating point numbers between items in `a` and `b`. Raises AssertionError If one or more elements differ by more than `maxulp`. See also [`assert_array_almost_equal_nulp`](numpy.testing.assert_array_almost_equal_nulp#numpy.testing.assert_array_almost_equal_nulp "numpy.testing.assert_array_almost_equal_nulp") Compare two arrays relatively to their spacing. #### Notes For computing the ULP difference, this API does not differentiate between various representations of NAN (ULP difference between 0x7fc00000 and 0xffc00000 is zero). #### Examples ``` >>> a = np.linspace(0., 1., 100) >>> res = np.testing.assert_array_max_ulp(a, np.arcsin(np.sin(a))) ``` © 2005–2022 NumPy Developers Licensed under the 3-clause BSD License. <https://numpy.org/doc/1.23/reference/generated/numpy.testing.assert_array_max_ulp.htmlnumpy.testing.assert_array_equal ================================== testing.assert_array_equal(*x*, *y*, *err_msg=''*, *verbose=True*)[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/testing/_private/utils.py#L855-L935) Raises an AssertionError if two array_like objects are not equal. 
Given two array_like objects, check that the shape is equal and all elements of these objects are equal (but see the Notes for the special handling of a scalar). An exception is raised at shape mismatch or conflicting values. In contrast to the standard usage in numpy, NaNs are compared like numbers, no assertion is raised if both objects have NaNs in the same positions. The usual caution for verifying equality with floating point numbers is advised. Parameters **x**array_like The actual object to check. **y**array_like The desired, expected object. **err_msg**str, optional The error message to be printed in case of failure. **verbose**bool, optional If True, the conflicting values are appended to the error message. Raises AssertionError If actual and desired objects are not equal. See also [`assert_allclose`](numpy.testing.assert_allclose#numpy.testing.assert_allclose "numpy.testing.assert_allclose") Compare two array_like objects for equality with desired relative and/or absolute precision. [`assert_array_almost_equal_nulp`](numpy.testing.assert_array_almost_equal_nulp#numpy.testing.assert_array_almost_equal_nulp "numpy.testing.assert_array_almost_equal_nulp"), [`assert_array_max_ulp`](numpy.testing.assert_array_max_ulp#numpy.testing.assert_array_max_ulp "numpy.testing.assert_array_max_ulp"), [`assert_equal`](numpy.testing.assert_equal#numpy.testing.assert_equal "numpy.testing.assert_equal") #### Notes When one of `x` and `y` is a scalar and the other is array_like, the function checks that each element of the array_like object is equal to the scalar. #### Examples The first assert does not raise an exception: ``` >>> np.testing.assert_array_equal([1.0,2.33333,np.nan], ... [np.exp(0),2.33333, np.nan]) ``` Assert fails with numerical imprecision with floats: ``` >>> np.testing.assert_array_equal([1.0,np.pi,np.nan], ... [1, np.sqrt(np.pi)**2, np.nan]) Traceback (most recent call last): ... 
AssertionError: Arrays are not equal Mismatched elements: 1 / 3 (33.3%) Max absolute difference: 4.4408921e-16 Max relative difference: 1.41357986e-16 x: array([1. , 3.141593, nan]) y: array([1. , 3.141593, nan]) ``` Use [`assert_allclose`](numpy.testing.assert_allclose#numpy.testing.assert_allclose "numpy.testing.assert_allclose") or one of the nulp (number of floating point values) functions for these cases instead: ``` >>> np.testing.assert_allclose([1.0,np.pi,np.nan], ... [1, np.sqrt(np.pi)**2, np.nan], ... rtol=1e-10, atol=0) ``` As mentioned in the Notes section, [`assert_array_equal`](#numpy.testing.assert_array_equal "numpy.testing.assert_array_equal") has special handling for scalars. Here the test checks that each value in `x` is 3: ``` >>> x = np.full((2, 5), fill_value=3) >>> np.testing.assert_array_equal(x, 3) ``` © 2005–2022 NumPy Developers Licensed under the 3-clause BSD License. <https://numpy.org/doc/1.23/reference/generated/numpy.testing.assert_array_equal.htmlnumpy.testing.assert_array_less ================================= testing.assert_array_less(*x*, *y*, *err_msg=''*, *verbose=True*)[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/testing/_private/utils.py#L1051-L1131) Raises an AssertionError if two array_like objects are not ordered by less than. Given two array_like objects, check that the shape is equal and all elements of the first object are strictly smaller than those of the second object. An exception is raised at shape mismatch or incorrectly ordered values. Shape mismatch does not raise if an object has zero dimension. In contrast to the standard usage in numpy, NaNs are compared, no assertion is raised if both objects have NaNs in the same positions. Parameters **x**array_like The smaller object to check. **y**array_like The larger object to compare. **err_msg**string The error message to be printed in case of failure. **verbose**bool If True, the conflicting values are appended to the error message. 
Raises AssertionError If actual and desired objects are not equal. See also [`assert_array_equal`](numpy.testing.assert_array_equal#numpy.testing.assert_array_equal "numpy.testing.assert_array_equal") tests objects for equality [`assert_array_almost_equal`](numpy.testing.assert_array_almost_equal#numpy.testing.assert_array_almost_equal "numpy.testing.assert_array_almost_equal") test objects for equality up to precision #### Examples ``` >>> np.testing.assert_array_less([1.0, 1.0, np.nan], [1.1, 2.0, np.nan]) >>> np.testing.assert_array_less([1.0, 1.0, np.nan], [1, 2.0, np.nan]) Traceback (most recent call last): ... AssertionError: Arrays are not less-ordered Mismatched elements: 1 / 3 (33.3%) Max absolute difference: 1. Max relative difference: 0.5 x: array([ 1., 1., nan]) y: array([ 1., 2., nan]) ``` ``` >>> np.testing.assert_array_less([1.0, 4.0], 3) Traceback (most recent call last): ... AssertionError: Arrays are not less-ordered Mismatched elements: 1 / 2 (50%) Max absolute difference: 2. Max relative difference: 0.66666667 x: array([1., 4.]) y: array(3) ``` ``` >>> np.testing.assert_array_less([1.0, 2.0, 3.0], [4]) Traceback (most recent call last): ... AssertionError: Arrays are not less-ordered (shapes (3,), (1,) mismatch) x: array([1., 2., 3.]) y: array([4]) ``` © 2005–2022 NumPy Developers Licensed under the 3-clause BSD License. <https://numpy.org/doc/1.23/reference/generated/numpy.testing.assert_array_less.htmlnumpy.testing.assert_equal =========================== testing.assert_equal(*actual*, *desired*, *err_msg=''*, *verbose=True*)[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/testing/_private/utils.py#L275-L432) Raises an AssertionError if two objects are not equal. Given two objects (scalars, lists, tuples, dictionaries or numpy arrays), check that all elements of these objects are equal. An exception is raised at the first conflicting values. 
When one of `actual` and `desired` is a scalar and the other is array_like, the function checks that each element of the array_like object is equal to the scalar. This function handles NaN comparisons as if NaN was a “normal” number. That is, AssertionError is not raised if both objects have NaNs in the same positions. This is in contrast to the IEEE standard on NaNs, which says that NaN compared to anything must return False. Parameters **actual**array_like The object to check. **desired**array_like The expected object. **err_msg**str, optional The error message to be printed in case of failure. **verbose**bool, optional If True, the conflicting values are appended to the error message. Raises AssertionError If actual and desired are not equal. #### Examples ``` >>> np.testing.assert_equal([4,5], [4,6]) Traceback (most recent call last): ... AssertionError: Items are not equal: item=1 ACTUAL: 5 DESIRED: 6 ``` The following comparison does not raise an exception. There are NaNs in the inputs, but they are in the same positions. ``` >>> np.testing.assert_equal(np.array([1.0, 2.0, np.nan]), [1, 2, np.nan]) ``` © 2005–2022 NumPy Developers Licensed under the 3-clause BSD License. <https://numpy.org/doc/1.23/reference/generated/numpy.testing.assert_equal.htmlnumpy.testing.assert_raises ============================ testing.assert_raises(*exception_class*, *callable*, **args*, ***kwargs) assert_raises(exception_class*)[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/testing/_private/utils.py#L1304-L1330) testing.assert_raises(*exception_class*) → [None](https://docs.python.org/3/library/constants.html#None "(in Python v3.10)") Fail unless an exception of class exception_class is thrown by callable when invoked with arguments args and keyword arguments kwargs. If a different type of exception is thrown, it will not be caught, and the test case will be deemed to have suffered an error, exactly as for an unexpected exception. 
Alternatively, [`assert_raises`](#numpy.testing.assert_raises "numpy.testing.assert_raises") can be used as a context manager: ``` >>> from numpy.testing import assert_raises >>> with assert_raises(ZeroDivisionError): ... 1 / 0 ``` is equivalent to ``` >>> def div(x, y): ... return x / y >>> assert_raises(ZeroDivisionError, div, 1, 0) ``` © 2005–2022 NumPy Developers Licensed under the 3-clause BSD License. <https://numpy.org/doc/1.23/reference/generated/numpy.testing.assert_raises.htmlnumpy.testing.assert_raises_regex =================================== testing.assert_raises_regex(*exception_class*, *expected_regexp*, *callable*, **args*, ***kwargs) assert_raises_regex(exception_class*, *expected_regexp*)[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/testing/_private/utils.py#L1333-L1351) Fail unless an exception of class exception_class and with message that matches expected_regexp is thrown by callable when invoked with arguments args and keyword arguments kwargs. Alternatively, can be used as a context manager like [`assert_raises`](numpy.testing.assert_raises#numpy.testing.assert_raises "numpy.testing.assert_raises"). #### Notes New in version 1.9.0. © 2005–2022 NumPy Developers Licensed under the 3-clause BSD License. <https://numpy.org/doc/1.23/reference/generated/numpy.testing.assert_raises_regex.htmlnumpy.testing.assert_warns =========================== testing.assert_warns(*warning_class*, **args*, ***kwargs*)[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/testing/_private/utils.py#L1749-L1800) Fail unless the given callable throws the specified warning. A warning of class warning_class should be thrown by the callable when invoked with arguments args and keyword arguments kwargs. If a different type of warning is thrown, it will not be caught. 
If called with all arguments other than the warning class omitted, may be used as a context manager:

```
with assert_warns(SomeWarning):
    do_something()
```

The ability to be used as a context manager is new in NumPy v1.11.0. New in version 1.4.0. Parameters **warning_class**class The class defining the warning that `func` is expected to throw. **func**callable, optional Callable to test ***args**Arguments Arguments for `func`. ****kwargs**Kwargs Keyword arguments for `func`. Returns The value returned by `func`. #### Examples

```
>>> import warnings
>>> def deprecated_func(num):
...     warnings.warn("Please upgrade", DeprecationWarning)
...     return num*num
>>> with np.testing.assert_warns(DeprecationWarning):
...     assert deprecated_func(4) == 16
>>> # or passing a func
>>> ret = np.testing.assert_warns(DeprecationWarning, deprecated_func, 4)
>>> assert ret == 16
```

© 2005–2022 NumPy Developers Licensed under the 3-clause BSD License. <https://numpy.org/doc/1.23/reference/generated/numpy.testing.assert_warns.html>

numpy.testing.assert_no_warnings
================================

testing.assert_no_warnings(**args*, ***kwargs*)[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/testing/_private/utils.py#L1814-L1847) Fail if the given callable produces any warnings. If called with all arguments omitted, may be used as a context manager:

```
with assert_no_warnings():
    do_something()
```

The ability to be used as a context manager is new in NumPy v1.11.0. New in version 1.7.0. Parameters **func**callable The callable to test. ***args**Arguments Arguments passed to `func`. ****kwargs**Kwargs Keyword arguments passed to `func`. Returns The value returned by `func`. © 2005–2022 NumPy Developers Licensed under the 3-clause BSD License.
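The page for `assert_no_warnings` carries no example; the following usage sketch is an editor's addition (the function `quiet` is hypothetical), showing both the callable and the context-manager forms:

```python
import warnings
import numpy as np

def quiet(x):
    # Emits no warnings, so assert_no_warnings passes and
    # forwards the return value.
    return x + 1

# Callable form: returns quiet(1).
assert np.testing.assert_no_warnings(quiet, 1) == 2

# Context-manager form: raises AssertionError if anything inside warns.
try:
    with np.testing.assert_no_warnings():
        warnings.warn("unexpected", UserWarning)
    caught = False
except AssertionError:
    caught = True
assert caught
```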
<https://numpy.org/doc/1.23/reference/generated/numpy.testing.assert_no_warnings.html>

numpy.testing.assert_no_gc_cycles
=================================

testing.assert_no_gc_cycles(**args*, ***kwargs*)[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/testing/_private/utils.py#L2357-L2389) Fail if the given callable produces any reference cycles. If called with all arguments omitted, may be used as a context manager:

```
with assert_no_gc_cycles():
    do_something()
```

New in version 1.15.0. Parameters **func**callable The callable to test. ***args**Arguments Arguments passed to `func`. ****kwargs**Kwargs Keyword arguments passed to `func`. Returns Nothing. The result is deliberately discarded to ensure that all cycles are found. © 2005–2022 NumPy Developers Licensed under the 3-clause BSD License. <https://numpy.org/doc/1.23/reference/generated/numpy.testing.assert_no_gc_cycles.html>

numpy.testing.assert_string_equal
=================================

testing.assert_string_equal(*actual*, *desired*)[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/testing/_private/utils.py#L1138-L1206) Test if two strings are equal. If the given strings are equal, [`assert_string_equal`](#numpy.testing.assert_string_equal "numpy.testing.assert_string_equal") does nothing. If they are not equal, an AssertionError is raised, and the diff between the strings is shown. Parameters **actual**str The string to test for equality against the expected string. **desired**str The expected string. #### Examples

```
>>> np.testing.assert_string_equal('abc', 'abc')
>>> np.testing.assert_string_equal('abc', 'abcd')
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
...
AssertionError: Differences in strings:
- abc
+ abcd
?    +
```

© 2005–2022 NumPy Developers Licensed under the 3-clause BSD License.
<https://numpy.org/doc/1.23/reference/generated/numpy.testing.assert_string_equal.html>

numpy.testing.assert_
=====================

testing.assert_(*val*, *msg=''*)[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/testing/_private/utils.py#L78-L95) Assert that works in release mode. Accepts callable msg to allow deferring evaluation until failure. The Python built-in `assert` does not work when executing code in optimized mode (the `-O` flag): no byte-code is generated for it. For documentation on usage, refer to the Python documentation. © 2005–2022 NumPy Developers Licensed under the 3-clause BSD License. <https://numpy.org/doc/1.23/reference/generated/numpy.testing.assert_.html>

numpy.testing.assert_almost_equal
=================================

testing.assert_almost_equal(*actual*, *desired*, *decimal=7*, *err_msg=''*, *verbose=True*)[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/testing/_private/utils.py#L476-L599) Raises an AssertionError if two items are not equal up to desired precision. Note It is recommended to use one of [`assert_allclose`](numpy.testing.assert_allclose#numpy.testing.assert_allclose "numpy.testing.assert_allclose"), [`assert_array_almost_equal_nulp`](numpy.testing.assert_array_almost_equal_nulp#numpy.testing.assert_array_almost_equal_nulp "numpy.testing.assert_array_almost_equal_nulp") or [`assert_array_max_ulp`](numpy.testing.assert_array_max_ulp#numpy.testing.assert_array_max_ulp "numpy.testing.assert_array_max_ulp") instead of this function for more consistent floating point comparisons. The test verifies that the elements of `actual` and `desired` satisfy `abs(desired-actual) < 1.5 * 10**(-decimal)`. That is a looser test than originally documented, but agrees with what the actual implementation in [`assert_array_almost_equal`](numpy.testing.assert_array_almost_equal#numpy.testing.assert_array_almost_equal "numpy.testing.assert_array_almost_equal") did up to rounding vagaries.
An exception is raised at conflicting values. For ndarrays this delegates to assert_array_almost_equal Parameters **actual**array_like The object to check. **desired**array_like The expected object. **decimal**int, optional Desired precision, default is 7. **err_msg**str, optional The error message to be printed in case of failure. **verbose**bool, optional If True, the conflicting values are appended to the error message. Raises AssertionError If actual and desired are not equal up to specified precision. See also [`assert_allclose`](numpy.testing.assert_allclose#numpy.testing.assert_allclose "numpy.testing.assert_allclose") Compare two array_like objects for equality with desired relative and/or absolute precision. [`assert_array_almost_equal_nulp`](numpy.testing.assert_array_almost_equal_nulp#numpy.testing.assert_array_almost_equal_nulp "numpy.testing.assert_array_almost_equal_nulp"), [`assert_array_max_ulp`](numpy.testing.assert_array_max_ulp#numpy.testing.assert_array_max_ulp "numpy.testing.assert_array_max_ulp"), [`assert_equal`](numpy.testing.assert_equal#numpy.testing.assert_equal "numpy.testing.assert_equal") #### Examples ``` >>> from numpy.testing import assert_almost_equal >>> assert_almost_equal(2.3333333333333, 2.33333334) >>> assert_almost_equal(2.3333333333333, 2.33333334, decimal=10) Traceback (most recent call last): ... AssertionError: Arrays are not almost equal to 10 decimals ACTUAL: 2.3333333333333 DESIRED: 2.33333334 ``` ``` >>> assert_almost_equal(np.array([1.0,2.3333333333333]), ... np.array([1.0,2.33333334]), decimal=9) Traceback (most recent call last): ... AssertionError: Arrays are not almost equal to 9 decimals Mismatched elements: 1 / 2 (50%) Max absolute difference: 6.66669964e-09 Max relative difference: 2.85715698e-09 x: array([1. , 2.333333333]) y: array([1. , 2.33333334]) ``` © 2005–2022 NumPy Developers Licensed under the 3-clause BSD License. 
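As an editor's illustration of the documented bound `abs(desired-actual) < 1.5 * 10**(-decimal)`: a difference of `1.4e-7` still passes at `decimal=7` (looser than strict rounding to 7 decimal places), while `1.6e-7` does not:

```python
import numpy as np

# 1.4e-7 < 1.5 * 10**-7, so this passes at decimal=7.
np.testing.assert_almost_equal(1.0, 1.0 + 1.4e-7, decimal=7)

# 1.6e-7 >= 1.5 * 10**-7, so this raises.
try:
    np.testing.assert_almost_equal(1.0, 1.0 + 1.6e-7, decimal=7)
    raised = False
except AssertionError:
    raised = True
assert raised
```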
<https://numpy.org/doc/1.23/reference/generated/numpy.testing.assert_almost_equal.htmlnumpy.testing.assert_approx_equal =================================== testing.assert_approx_equal(*actual*, *desired*, *significant=7*, *err_msg=''*, *verbose=True*)[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/testing/_private/utils.py#L602-L698) Raises an AssertionError if two items are not equal up to significant digits. Note It is recommended to use one of [`assert_allclose`](numpy.testing.assert_allclose#numpy.testing.assert_allclose "numpy.testing.assert_allclose"), [`assert_array_almost_equal_nulp`](numpy.testing.assert_array_almost_equal_nulp#numpy.testing.assert_array_almost_equal_nulp "numpy.testing.assert_array_almost_equal_nulp") or [`assert_array_max_ulp`](numpy.testing.assert_array_max_ulp#numpy.testing.assert_array_max_ulp "numpy.testing.assert_array_max_ulp") instead of this function for more consistent floating point comparisons. Given two numbers, check that they are approximately equal. Approximately equal is defined as the number of significant digits that agree. Parameters **actual**scalar The object to check. **desired**scalar The expected object. **significant**int, optional Desired precision, default is 7. **err_msg**str, optional The error message to be printed in case of failure. **verbose**bool, optional If True, the conflicting values are appended to the error message. Raises AssertionError If actual and desired are not equal up to specified precision. See also [`assert_allclose`](numpy.testing.assert_allclose#numpy.testing.assert_allclose "numpy.testing.assert_allclose") Compare two array_like objects for equality with desired relative and/or absolute precision. 
[`assert_array_almost_equal_nulp`](numpy.testing.assert_array_almost_equal_nulp#numpy.testing.assert_array_almost_equal_nulp "numpy.testing.assert_array_almost_equal_nulp"), [`assert_array_max_ulp`](numpy.testing.assert_array_max_ulp#numpy.testing.assert_array_max_ulp "numpy.testing.assert_array_max_ulp"), [`assert_equal`](numpy.testing.assert_equal#numpy.testing.assert_equal "numpy.testing.assert_equal") #### Examples ``` >>> np.testing.assert_approx_equal(0.12345677777777e-20, 0.1234567e-20) >>> np.testing.assert_approx_equal(0.12345670e-20, 0.12345671e-20, ... significant=8) >>> np.testing.assert_approx_equal(0.12345670e-20, 0.12345672e-20, ... significant=8) Traceback (most recent call last): ... AssertionError: Items are not equal to 8 significant digits: ACTUAL: 1.234567e-21 DESIRED: 1.2345672e-21 ``` the evaluated condition that raises the exception is ``` >>> abs(0.12345670e-20/1e-21 - 0.12345672e-20/1e-21) >= 10**-(8-1) True ``` © 2005–2022 NumPy Developers Licensed under the 3-clause BSD License. <https://numpy.org/doc/1.23/reference/generated/numpy.testing.assert_approx_equal.htmlnumpy.testing.assert_array_almost_equal ========================================== testing.assert_array_almost_equal(*x*, *y*, *decimal=6*, *err_msg=''*, *verbose=True*)[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/testing/_private/utils.py#L938-L1048) Raises an AssertionError if two objects are not equal up to desired precision. Note It is recommended to use one of [`assert_allclose`](numpy.testing.assert_allclose#numpy.testing.assert_allclose "numpy.testing.assert_allclose"), [`assert_array_almost_equal_nulp`](numpy.testing.assert_array_almost_equal_nulp#numpy.testing.assert_array_almost_equal_nulp "numpy.testing.assert_array_almost_equal_nulp") or [`assert_array_max_ulp`](numpy.testing.assert_array_max_ulp#numpy.testing.assert_array_max_ulp "numpy.testing.assert_array_max_ulp") instead of this function for more consistent floating point comparisons. 
The test verifies identical shapes and that the elements of `actual` and `desired` satisfy. `abs(desired-actual) < 1.5 * 10**(-decimal)` That is a looser test than originally documented, but agrees with what the actual implementation did up to rounding vagaries. An exception is raised at shape mismatch or conflicting values. In contrast to the standard usage in numpy, NaNs are compared like numbers, no assertion is raised if both objects have NaNs in the same positions. Parameters **x**array_like The actual object to check. **y**array_like The desired, expected object. **decimal**int, optional Desired precision, default is 6. **err_msg**str, optional The error message to be printed in case of failure. **verbose**bool, optional If True, the conflicting values are appended to the error message. Raises AssertionError If actual and desired are not equal up to specified precision. See also [`assert_allclose`](numpy.testing.assert_allclose#numpy.testing.assert_allclose "numpy.testing.assert_allclose") Compare two array_like objects for equality with desired relative and/or absolute precision. [`assert_array_almost_equal_nulp`](numpy.testing.assert_array_almost_equal_nulp#numpy.testing.assert_array_almost_equal_nulp "numpy.testing.assert_array_almost_equal_nulp"), [`assert_array_max_ulp`](numpy.testing.assert_array_max_ulp#numpy.testing.assert_array_max_ulp "numpy.testing.assert_array_max_ulp"), [`assert_equal`](numpy.testing.assert_equal#numpy.testing.assert_equal "numpy.testing.assert_equal") #### Examples the first assert does not raise an exception ``` >>> np.testing.assert_array_almost_equal([1.0,2.333,np.nan], ... [1.0,2.333,np.nan]) ``` ``` >>> np.testing.assert_array_almost_equal([1.0,2.33333,np.nan], ... [1.0,2.33339,np.nan], decimal=5) Traceback (most recent call last): ... AssertionError: Arrays are not almost equal to 5 decimals Mismatched elements: 1 / 3 (33.3%) Max absolute difference: 6.e-05 Max relative difference: 2.57136612e-05 x: array([1. 
, 2.33333, nan]) y: array([1. , 2.33339, nan]) ``` ``` >>> np.testing.assert_array_almost_equal([1.0,2.33333,np.nan], ... [1.0,2.33333, 5], decimal=5) Traceback (most recent call last): ... AssertionError: Arrays are not almost equal to 5 decimals x and y nan location mismatch: x: array([1. , 2.33333, nan]) y: array([1. , 2.33333, 5. ]) ``` © 2005–2022 NumPy Developers Licensed under the 3-clause BSD License. <https://numpy.org/doc/1.23/reference/generated/numpy.testing.assert_array_almost_equal.htmlnumpy.testing.print_assert_equal ================================== testing.print_assert_equal(*test_string*, *actual*, *desired*)[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/testing/_private/utils.py#L435-L473) Test if two objects are equal, and print an error message if test fails. The test is performed with `actual == desired`. Parameters **test_string**str The message supplied to AssertionError. **actual**object The object to test for equality against `desired`. **desired**object The expected result. #### Examples ``` >>> np.testing.print_assert_equal('Test XYZ of func xyz', [0, 1], [0, 1]) >>> np.testing.print_assert_equal('Test XYZ of func xyz', [0, 1], [0, 2]) Traceback (most recent call last): ... AssertionError: Test XYZ of func xyz failed ACTUAL: [0, 1] DESIRED: [0, 2] ``` © 2005–2022 NumPy Developers Licensed under the 3-clause BSD License. <https://numpy.org/doc/1.23/reference/generated/numpy.testing.print_assert_equal.htmlnumpy.testing.dec.deprecated ============================ testing.dec.deprecated(*conditional=True*)[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/testing/_private/decorators.py#L253-L304) Deprecated since version 1.21: This decorator is retained for compatibility with the nose testing framework, which is being phased out. Please use the nose2 or pytest frameworks instead. Filter deprecation warnings while running the test suite. 
This decorator can be used to filter DeprecationWarning’s, to avoid printing them during the test suite run, while checking that the test actually raises a DeprecationWarning. Parameters **conditional**bool or callable, optional Flag to determine whether to mark test as deprecated or not. If the condition is a callable, it is used at runtime to dynamically make the decision. Default is True. Returns **decorator**function The [`deprecated`](#numpy.testing.dec.deprecated "numpy.testing.dec.deprecated") decorator itself. #### Notes New in version 1.4.0. © 2005–2022 NumPy Developers Licensed under the 3-clause BSD License. <https://numpy.org/doc/1.23/reference/generated/numpy.testing.dec.deprecated.htmlnumpy.testing.dec.knownfailureif ================================ testing.dec.knownfailureif(*fail_condition*, *msg=None*)[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/testing/_private/decorators.py#L191-L251) Deprecated since version 1.21: This decorator is retained for compatibility with the nose testing framework, which is being phased out. Please use the nose2 or pytest frameworks instead. Make function raise KnownFailureException exception if given condition is true. If the condition is a callable, it is used at runtime to dynamically make the decision. This is useful for tests that may require costly imports, to delay the cost until the test suite is actually executed. Parameters **fail_condition**bool or callable Flag to determine whether to mark the decorated test as a known failure (if True) or not (if False). **msg**str, optional Message to give on raising a KnownFailureException exception. Default is None. Returns **decorator**function Decorator, which, when applied to a function, causes KnownFailureException to be raised when `fail_condition` is True, and the function to be called normally otherwise. 
#### Notes The decorator itself is decorated with the `nose.tools.make_decorator` function in order to transmit function name, and various other metadata. © 2005–2022 NumPy Developers Licensed under the 3-clause BSD License. <https://numpy.org/doc/1.23/reference/generated/numpy.testing.dec.knownfailureif.htmlnumpy.testing.dec.setastest =========================== testing.dec.setastest(*tf=True*)[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/testing/_private/decorators.py#L67-L105) Deprecated since version 1.21: This decorator is retained for compatibility with the nose testing framework, which is being phased out. Please use the nose2 or pytest frameworks instead. Signals to nose that this function is or is not a test. Parameters **tf**bool If True, specifies that the decorated callable is a test. If False, specifies that the decorated callable is not a test. Default is True. #### Notes This decorator can’t use the nose namespace, because it can be called from a non-test module. See also `istest` and `nottest` in `nose.tools`. #### Examples [`setastest`](#numpy.testing.dec.setastest "numpy.testing.dec.setastest") can be used in the following way: ``` from numpy.testing import dec @dec.setastest(False) def func_with_test_in_name(arg1, arg2): pass ``` © 2005–2022 NumPy Developers Licensed under the 3-clause BSD License. <https://numpy.org/doc/1.23/reference/generated/numpy.testing.dec.setastest.htmlnumpy.testing.dec.skipif ======================== testing.dec.skipif(*skip_condition*, *msg=None*)[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/testing/_private/decorators.py#L107-L188) Deprecated since version 1.21: This decorator is retained for compatibility with the nose testing framework, which is being phased out. Please use the nose2 or pytest frameworks instead. Make function raise SkipTest exception if a given condition is true. If the condition is a callable, it is used at runtime to dynamically make the decision. 
This is useful for tests that may require costly imports, to delay the cost until the test suite is actually executed. Parameters **skip_condition**bool or callable Flag to determine whether to skip the decorated test. **msg**str, optional Message to give on raising a SkipTest exception. Default is None. Returns **decorator**function Decorator which, when applied to a function, causes SkipTest to be raised when `skip_condition` is True, and the function to be called normally otherwise. #### Notes The decorator itself is decorated with the `nose.tools.make_decorator` function in order to transmit function name, and various other metadata. © 2005–2022 NumPy Developers Licensed under the 3-clause BSD License. <https://numpy.org/doc/1.23/reference/generated/numpy.testing.dec.skipif.htmlnumpy.testing.dec.slow ====================== testing.dec.slow(*t*)[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/testing/_private/decorators.py#L25-L65) Deprecated since version 1.21: This decorator is retained for compatibility with the nose testing framework, which is being phased out. Please use the nose2 or pytest frameworks instead. Label a test as ‘slow’. The exact definition of a slow test is obviously both subjective and hardware-dependent, but in general any individual test that requires more than a second or two should be labeled as slow (the whole suite consists of thousands of tests, so even a second is significant). Parameters **t**callable The test to label as slow. Returns **t**callable The decorated test `t`. #### Examples The [`numpy.testing`](../routines.testing#module-numpy.testing "numpy.testing") module includes `import decorators as dec`. A test can be decorated as slow like this: ``` from numpy.testing import * @dec.slow def test_big(self): print('Big, slow test') ``` © 2005–2022 NumPy Developers Licensed under the 3-clause BSD License. 
<https://numpy.org/doc/1.23/reference/generated/numpy.testing.dec.slow.htmlnumpy.testing.decorate_methods =============================== testing.decorate_methods(*cls*, *decorator*, *testmatch=None*)[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/testing/_private/utils.py#L1354-L1398) Apply a decorator to all methods in a class matching a regular expression. The given decorator is applied to all public methods of `cls` that are matched by the regular expression `testmatch` (`testmatch.search(methodname)`). Methods that are private, i.e. start with an underscore, are ignored. Parameters **cls**class Class whose methods to decorate. **decorator**function Decorator to apply to methods **testmatch**compiled regexp or str, optional The regular expression. Default value is None, in which case the nose default (`re.compile(r'(?:^|[\b_\.%s-])[Tt]est' % os.sep)`) is used. If `testmatch` is a string, it is compiled to a regular expression first. © 2005–2022 NumPy Developers Licensed under the 3-clause BSD License. <https://numpy.org/doc/1.23/reference/generated/numpy.testing.decorate_methods.htmlnumpy.testing.Tester ==================== numpy.testing.Tester[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/testing/_private/nosetester.py#L112-L536) alias of `numpy.testing._private.nosetester.NoseTester` © 2005–2022 NumPy Developers Licensed under the 3-clause BSD License. <https://numpy.org/doc/1.23/reference/generated/numpy.testing.Tester.htmlnumpy.testing.clear_and_catch_warnings ========================================= *class*numpy.testing.clear_and_catch_warnings(*record=False*, *modules=()*)[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/testing/_private/utils.py#L1961-L2024) Context manager that resets warning registry for catching warnings Warnings can be slippery, because, whenever a warning is triggered, Python adds a `__warningregistry__` member to the *calling* module. 
This makes it impossible to retrigger the warning in this module, whatever you put in the warnings filters. This context manager accepts a sequence of `modules` as a keyword argument to its constructor and: * stores and removes any `__warningregistry__` entries in given `modules` on entry; * resets `__warningregistry__` to its previous state on exit. This makes it possible to trigger any warning afresh inside the context manager without disturbing the state of warnings outside. For compatibility with Python 3.0, please consider all arguments to be keyword-only. Parameters **record**bool, optional Specifies whether warnings should be captured by a custom implementation of `warnings.showwarning()` and be appended to a list returned by the context manager. Otherwise None is returned by the context manager. The objects appended to the list are arguments whose attributes mirror the arguments to `showwarning()`. **modules**sequence, optional Sequence of modules for which to reset warnings registry on entry and restore on exit. To work correctly, all ‘ignore’ filters should filter by one of these modules. #### Examples ``` >>> import warnings >>> with np.testing.clear_and_catch_warnings( ... modules=[np.core.fromnumeric]): ... warnings.simplefilter('always') ... warnings.filterwarnings('ignore', module='np.core.fromnumeric') ... # do something that raises a warning but ignore those in ... # np.core.fromnumeric ``` © 2005–2022 NumPy Developers Licensed under the 3-clause BSD License. <https://numpy.org/doc/1.23/reference/generated/numpy.testing.clear_and_catch_warnings.htmlnumpy.testing.measure ===================== testing.measure(*code_str*, *times=1*, *label=None*)[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/testing/_private/utils.py#L1401-L1444) Return elapsed time for executing code in the namespace of the caller. The supplied code string is compiled with the Python builtin `compile`. The precision of the timing is 10 milli-seconds. 
If the code will execute fast on this timescale, it can be executed many times to get reasonable timing accuracy. Parameters **code_str**str The code to be timed. **times**int, optional The number of times the code is executed. Default is 1. The code is only compiled once. **label**str, optional A label to identify `code_str` with. This is passed into `compile` as the second argument (for run-time error messages). Returns **elapsed**float Total elapsed time in seconds for executing `code_str` `times` times. #### Examples ``` >>> times = 10 >>> etime = np.testing.measure('for i in range(1000): np.sqrt(i**2)', times=times) >>> print("Time for a single execution : ", etime / times, "s") Time for a single execution : 0.005 s ``` © 2005–2022 NumPy Developers Licensed under the 3-clause BSD License. <https://numpy.org/doc/1.23/reference/generated/numpy.testing.measure.htmlnumpy.testing.run_module_suite ================================ testing.run_module_suite(*file_to_run=None*, *argv=None*)[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/testing/_private/nosetester.py#L61-L109) Run a test module. Equivalent to calling `$ nosetests <argv> <file_to_run>` from the command line Parameters **file_to_run**str, optional Path to test module, or None. By default, run the module from which this function is called. **argv**list of strings Arguments to be passed to the nose test runner. `argv[0]` is ignored. All command line arguments accepted by `nosetests` will work. If it is the default value None, sys.argv is used. New in version 1.9.0. #### Examples Adding the following: ``` if __name__ == "__main__" : run_module_suite(argv=sys.argv) ``` at the end of a test module will run the tests when that module is called in the python interpreter. Alternatively, calling: ``` >>> run_module_suite(file_to_run="numpy/tests/test_matlib.py") ``` from an interpreter will run all the test routine in ‘test_matlib.py’. 
© 2005–2022 NumPy Developers Licensed under the 3-clause BSD License. <https://numpy.org/doc/1.23/reference/generated/numpy.testing.run_module_suite.htmlnumpy.testing.rundocs ===================== testing.rundocs(*filename=None*, *raise_on_error=True*)[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/testing/_private/utils.py#L1209-L1252) Run doctests found in the given file. By default [`rundocs`](#numpy.testing.rundocs "numpy.testing.rundocs") raises an AssertionError on failure. Parameters **filename**str The path to the file for which the doctests are run. **raise_on_error**bool Whether to raise an AssertionError when a doctest fails. Default is True. #### Notes The doctests can be run by the user/developer by adding the `doctests` argument to the `test()` call. For example, to run all tests (including doctests) for `numpy.lib`: ``` >>> np.lib.test(doctests=True) ``` © 2005–2022 NumPy Developers Licensed under the 3-clause BSD License. <https://numpy.org/doc/1.23/reference/generated/numpy.testing.rundocs.htmlnumpy.testing.suppress_warnings ================================ *class*numpy.testing.suppress_warnings(*forwarding_rule='always'*)[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/testing/_private/utils.py#L2027-L2302) Context manager and decorator doing much the same as `warnings.catch_warnings`. However, it also provides a filter mechanism to work around <https://bugs.python.org/issue4180>. This bug causes Python before 3.4 to not reliably show warnings again after they have been ignored once (even within catch_warnings). It means that no “ignore” filter can be used easily, since following tests might need to see the warning. Additionally it allows easier specificity for testing warnings and can be nested. Parameters **forwarding_rule**str, optional One of “always”, “once”, “module”, or “location”. Analogous to the usual warnings module filter mode, it is useful to reduce noise mostly on the outmost level. 
Unsuppressed and unrecorded warnings will be forwarded based on this rule. Defaults to “always”. “location” is equivalent to the warnings “default”: match by the exact location the warning originated from.

#### Notes

Filters added inside the context manager will be discarded again when leaving it. Upon entering, all filters defined outside a context will be applied automatically. When a recording filter is added, matching warnings are stored in the `log` attribute as well as in the list returned by `record`. If filters are added and the `module` keyword is given, the warning registry of this module will additionally be cleared when applying it, entering the context, or exiting it. This could cause warnings to appear a second time after leaving the context if they were configured to be printed once (default) and were already printed before the context was entered. Nesting this context manager will work as expected when the forwarding rule is “always” (default). Unfiltered and unrecorded warnings will be passed out and be matched by the outer level. On the outmost level they will be printed (or caught by another warnings context). The forwarding rule argument can modify this behaviour. Like `catch_warnings` this context manager is not threadsafe.

#### Examples

With a context manager:

```
with np.testing.suppress_warnings() as sup:
    sup.filter(DeprecationWarning, "Some text")
    sup.filter(module=np.ma.core)
    log = sup.record(FutureWarning, "Does this occur?")
    command_giving_warnings()
    # The FutureWarning was given once, the filtered warnings were
    # ignored. All other warnings abide outside settings (may be
    # printed/error)
    assert_(len(log) == 1)
    assert_(len(sup.log) == 1)  # also stored in log attribute
```

Or as a decorator:

```
sup = np.testing.suppress_warnings()
sup.filter(module=np.ma.core)  # module must match exactly
@sup
def some_function():
    # do something which causes a warning in np.ma.core
    pass
```

#### Methods

| | |
| --- | --- |
| [`__call__`](numpy.testing.suppress_warnings.__call__#numpy.testing.suppress_warnings.__call__ "numpy.testing.suppress_warnings.__call__")(func) | Function decorator to apply certain suppressions to a whole function. |
| [`filter`](numpy.testing.suppress_warnings.filter#numpy.testing.suppress_warnings.filter "numpy.testing.suppress_warnings.filter")([category, message, module]) | Add a new suppressing filter or apply it if the state is entered. |
| [`record`](numpy.testing.suppress_warnings.record#numpy.testing.suppress_warnings.record "numpy.testing.suppress_warnings.record")([category, message, module]) | Append a new recording filter or apply it if the state is entered. |

© 2005–2022 NumPy Developers Licensed under the 3-clause BSD License. <https://numpy.org/doc/1.23/reference/generated/numpy.testing.suppress_warnings.html>

Testing Guidelines
==================

Introduction
------------

Until the 1.15 release, NumPy used the [nose](https://nose.readthedocs.io/en/latest/) testing framework; it now uses the [pytest](https://pytest.readthedocs.io) framework. The older framework is still maintained in order to support downstream projects that use the old NumPy framework, but all tests for NumPy should use pytest. Our goal is that every module and package in NumPy should have a thorough set of unit tests. These tests should exercise the full functionality of a given routine as well as its robustness to erroneous or unexpected input arguments. Well-designed tests with good coverage make an enormous difference to the ease of refactoring.
Whenever a new bug is found in a routine, you should write a new test for that specific case and add it to the test suite to prevent that bug from creeping back in unnoticed.

Note: SciPy uses the testing framework from [`numpy.testing`](routines.testing#module-numpy.testing "numpy.testing"), so all of the NumPy examples shown below are also applicable to SciPy.

Testing NumPy
-------------

NumPy can be tested in a number of ways; choose whichever you feel comfortable with.

### Running tests from inside Python

You can test an installed NumPy with [`numpy.test`](#numpy.test "numpy.test"). For example, to run NumPy’s full test suite, use the following:

```
>>> import numpy
>>> numpy.test(label='full')
```

The test method may take two or more arguments; the first, `label`, is a string specifying what should be tested and the second, `verbose`, is an integer giving the level of output verbosity. See the docstring of [`numpy.test`](#numpy.test "numpy.test") for details. The default value for `label` is ‘fast’, which will run the standard tests. The string ‘full’ will run the full battery of tests, including those identified as being slow to run. If `verbose` is 1 or less, the tests will just show information messages about the tests that are run; but if it is greater than 1, then the tests will also provide warnings on missing tests.
So if you want to run every test and get messages about which modules don’t have tests:

```
>>> numpy.test(label='full', verbose=2) # or numpy.test('full', 2)
```

Finally, if you are only interested in testing a subset of NumPy, for example, the `core` module, use the following:

```
>>> numpy.core.test()
```

### Running tests from the command line

If you want to build NumPy in order to work on NumPy itself, use `runtests.py`. To run NumPy’s full test suite:

```
$ python runtests.py
```

Testing a subset of NumPy:

```
$ python runtests.py -t numpy/core/tests
```

For detailed info on testing, see [Testing builds](../dev/development_environment#testing-builds).

### Other methods of running tests

Run tests using your favourite IDE, such as [vscode](https://code.visualstudio.com/docs/python/testing#_enable-a-test-framework) or [pycharm](https://www.jetbrains.com/help/pycharm/testing-your-first-python-application.html).

Writing your own tests
----------------------

If you are writing a package that you’d like to become part of NumPy, please write the tests as you develop the package. Every Python module, extension module, or subpackage in the NumPy package directory should have a corresponding `test_<name>.py` file. Pytest examines these files for test methods (named `test*`) and test classes (named `Test*`). Suppose you have a NumPy module `numpy/xxx/yyy.py` containing a function `zzz()`. To test this function you would create a test module called `test_yyy.py`. If you only need to test one aspect of `zzz`, you can simply add a test function:

```
def test_zzz():
    assert zzz() == 'Hello from zzz'
```

More often, we need to group a number of tests together, so we create a test class:

```
import pytest

# import xxx symbols
from numpy.xxx.yyy import zzz

class TestZzz:
    def test_simple(self):
        assert zzz() == 'Hello from zzz'

    def test_invalid_parameter(self):
        with pytest.raises(ValueError, match='.*some matching regex.*'):
            ...
``` Within these test methods, `assert` and related functions are used to test whether a certain assumption is valid. If the assertion fails, the test fails. `pytest` internally rewrites the `assert` statement to give informative output when it fails, so should be preferred over the legacy variant `numpy.testing.assert_`. Whereas plain `assert` statements are ignored when running Python in optimized mode with `-O`, this is not an issue when running tests with pytest. Similarly, the pytest functions [`pytest.raises`](https://docs.pytest.org/en/stable/reference/reference.html#pytest.raises "(in pytest v7.1.2)") and [`pytest.warns`](https://docs.pytest.org/en/stable/reference/reference.html#pytest.warns "(in pytest v7.1.2)") should be preferred over their legacy counterparts [`numpy.testing.assert_raises`](generated/numpy.testing.assert_raises#numpy.testing.assert_raises "numpy.testing.assert_raises") and [`numpy.testing.assert_warns`](generated/numpy.testing.assert_warns#numpy.testing.assert_warns "numpy.testing.assert_warns"), since the pytest variants are more broadly used and allow more explicit targeting of warnings and errors when used with the `match` regex. Note that `test_` functions or methods should not have a docstring, because that makes it hard to identify the test from the output of running the test suite with `verbose=2` (or similar verbosity setting). Use plain comments (`#`) if necessary. Also since much of NumPy is legacy code that was originally written without unit tests, there are still several modules that don’t have tests yet. Please feel free to choose one of these modules and develop tests for it. ### Using C code in tests NumPy exposes a rich [C-API](c-api/index#c-api) . These are tested using c-extension modules written “as-if” they know nothing about the internals of NumPy, rather using the official C-API interfaces only. 
Examples of such modules are tests for a user-defined `rational` dtype in `_rational_tests` or the ufunc machinery tests in `_umath_tests`, which are part of the binary distribution. Starting from version 1.21, you can also write snippets of C code in tests that will be compiled locally into C-extension modules and loaded into Python.

numpy.testing.extbuild.build_and_import_extension(*modname*, *functions*, ***, *prologue=''*, *build_dir=None*, *include_dirs=[]*, *more_init=''*)

Builds and imports a C-extension module `modname` from a list of function fragments `functions`.

Parameters

**functions**list of fragments

Each fragment is a sequence of func_name, calling convention, snippet.

**prologue**string

Code to precede the rest, usually extra `#include` or `#define` macros.

**build_dir**pathlib.Path

Where to build the module, usually a temporary directory.

**include_dirs**list

Extra directories to find include files when compiling.

**more_init**string

Code to appear in the module PyMODINIT_FUNC.

Returns

out: module

The module will have been loaded and is ready for use.

#### Examples

```
>>> functions = [("test_bytes", "METH_O", """
    if ( !PyBytes_Check(args)) {
        Py_RETURN_FALSE;
    }
    Py_RETURN_TRUE;
""")]
>>> mod = build_and_import_extension("testme", functions)
>>> assert not mod.test_bytes(u'abc')
>>> assert mod.test_bytes(b'abc')
```

### Labeling tests

Unlabeled tests like the ones above are run in the default `numpy.test()` run.
If you want to label your test as slow, and therefore have it reserved for a full `numpy.test(label='full')` run, you can label it with `pytest.mark.slow`:

```
import pytest

@pytest.mark.slow
def test_big():
    print('Big, slow test')
```

Similarly for methods:

```
class test_zzz:
    @pytest.mark.slow
    def test_simple(self):
        assert_(zzz() == 'Hello from zzz')
```

### Easier setup and teardown functions / methods

Testing looks for module-level or class-level setup and teardown functions by name; thus:

```
def setup():
    """Module-level setup"""
    print('doing setup')

def teardown():
    """Module-level teardown"""
    print('doing teardown')

class TestMe:
    def setup(self):
        """Class-level setup"""
        print('doing setup')

    def teardown(self):
        """Class-level teardown"""
        print('doing teardown')
```

Setup and teardown functions attached in this way to functions and methods are known as “fixtures”; their use is not encouraged.

### Parametric tests

One very nice feature of testing is allowing easy testing across a range of parameters, a nasty problem for standard unit tests. Use the `pytest.mark.parametrize` decorator.

### Doctests

Doctests are a convenient way of documenting the behavior of a function and allowing that behavior to be tested at the same time. The output of an interactive Python session can be included in the docstring of a function, and the test framework can run the example and compare the actual output to the expected output. The doctests can be run by adding the `doctests` argument to the `test()` call; for example, to run all tests (including doctests) for numpy.lib:

```
>>> import numpy as np
>>> np.lib.test(doctests=True)
```

The doctests are run as if they are in a fresh Python instance which has executed `import numpy as np`. Tests that are part of a NumPy subpackage will have that subpackage already imported. E.g. for a test in `numpy/linalg/tests/`, the namespace will be created such that `from numpy import linalg` has already executed.
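The `pytest.mark.parametrize` decorator mentioned under “Parametric tests” above can be sketched like this (`square` is a hypothetical stand-in for a function under test, not a NumPy API):

```python
import pytest

# Hypothetical function under test -- a stand-in, not part of NumPy.
def square(x):
    return x * x

# One decorated function expands into four independent test cases,
# one per (x, expected) pair.
@pytest.mark.parametrize("x, expected", [(0, 0), (1, 1), (2, 4), (-3, 9)])
def test_square(x, expected):
    assert square(x) == expected
```

pytest reports each `(x, expected)` pair as its own test, so a failure pinpoints the exact input that broke.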
### `tests/`

Rather than keeping the code and the tests in the same directory, we put all the tests for a given subpackage in a `tests/` subdirectory. For our example, if it doesn’t already exist you will need to create a `tests/` directory in `numpy/xxx/`. So the path for `test_yyy.py` is `numpy/xxx/tests/test_yyy.py`. Once `numpy/xxx/tests/test_yyy.py` is written, it’s possible to run the tests by going to the `tests/` directory and typing:

```
python test_yyy.py
```

Or if you add `numpy/xxx/tests/` to the Python path, you could run the tests interactively in the interpreter like this:

```
>>> import test_yyy
>>> test_yyy.test()
```

### `__init__.py` and `setup.py`

Usually, however, adding the `tests/` directory to the Python path isn’t desirable. Instead it would be better to invoke the test straight from the module `xxx`. To this end, simply place the following lines at the end of your package’s `__init__.py` file:

```
...
def test(level=1, verbosity=1):
    from numpy.testing import Tester
    return Tester().test(level, verbosity)
```

You will also need to add the tests directory in the configuration section of your setup.py:

```
...
def configuration(parent_package='', top_path=None):
    ...
    config.add_subpackage('tests')
    return config
...
```

Now you can do the following to test your module:

```
>>> import numpy
>>> numpy.xxx.test()
```

Also, when invoking the entire NumPy test suite, your tests will be found and run:

```
>>> import numpy
>>> numpy.test()  # your tests are included and run automatically!
```
Several examples of this technique exist in NumPy; below are excerpts from one in [numpy/linalg/tests/test_linalg.py](https://github.com/numpy/numpy/blob/main/numpy/linalg/tests/test_linalg.py):

```
class LinalgTestCase:
    def test_single(self):
        a = array([[1., 2.], [3., 4.]], dtype=single)
        b = array([2., 1.], dtype=single)
        self.do(a, b)

    def test_double(self):
        a = array([[1., 2.], [3., 4.]], dtype=double)
        b = array([2., 1.], dtype=double)
        self.do(a, b)

    ...

class TestSolve(LinalgTestCase):
    def do(self, a, b):
        x = linalg.solve(a, b)
        assert_allclose(b, dot(a, x))
        assert imply(isinstance(b, matrix), isinstance(x, matrix))

class TestInv(LinalgTestCase):
    def do(self, a, b):
        a_inv = linalg.inv(a)
        assert_allclose(dot(a, a_inv), identity(asarray(a).shape[0]))
        assert imply(isinstance(a, matrix), isinstance(a_inv, matrix))
```

In this case, we wanted to test solving a linear algebra problem using matrices of several data types, using `linalg.solve` and `linalg.inv`. The common test cases (for single-precision, double-precision, etc. matrices) are collected in `LinalgTestCase`.

### Known failures & skipping tests

Sometimes you might want to skip a test or mark it as a known failure, such as when the test suite is being written before the code it’s meant to test, or if a test only fails on a particular architecture. To skip a test, simply use `skipif`:

```
import pytest

@pytest.mark.skipif(SkipMyTest, reason="Skipping this test because...")
def test_something(foo):
    ...
```

The test is marked as skipped if `SkipMyTest` evaluates to nonzero, and the message in verbose test output is the second argument given to `skipif`. Similarly, a test can be marked as a known failure by using `xfail`:

```
import pytest

@pytest.mark.xfail(MyTestFails, reason="This test is known to fail because...")
def test_something_else(foo):
    ...
```

Of course, a test can be unconditionally skipped or marked as a known failure by using `skip` or `xfail` without argument, respectively.
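The unconditional forms mentioned above take no condition argument, only an optional reason; a minimal sketch (the test names here are hypothetical):

```python
import pytest

# Skip unconditionally: no condition, only a reason string.
@pytest.mark.skip(reason="API not implemented yet")
def test_not_ported():
    ...

# Mark as a known failure unconditionally.
@pytest.mark.xfail(reason="fails pending an upstream fix")
def test_known_broken():
    assert False
```

When pytest collects these, the first is reported as skipped and the second as an expected failure rather than an error.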
The total number of skipped and known failing tests is displayed at the end of the test run. Skipped tests are marked as `'S'` in the test results (or `'SKIPPED'` for `verbose > 1`), and known failing tests are marked as `'x'` (or `'XFAIL'` if `verbose > 1`).

### Tests on random data

Tests on random data are good, but since test failures are meant to expose new bugs or regressions, a test that passes most of the time but fails occasionally with no code changes is not helpful. Make the random data deterministic by setting the random number seed before generating it. Use either Python’s `random.seed(some_number)` or NumPy’s `numpy.random.seed(some_number)`, depending on the source of random numbers. Alternatively, you can use [Hypothesis](https://hypothesis.readthedocs.io/en/latest/) to generate arbitrary data. Hypothesis manages both Python’s and NumPy’s random seeds for you, and provides a very concise and powerful way to describe data (including `hypothesis.extra.numpy`, e.g. for a set of mutually-broadcastable shapes). The advantages over random generation include tools to replay and share failures without requiring a fixed seed, reporting *minimal* examples for each failure, and better-than-naive-random techniques for triggering bugs.

### Documentation for `numpy.test`

numpy.test(*label='fast'*, *verbose=1*, *extra_argv=None*, *doctests=False*, *coverage=False*, *durations=-1*, *tests=None*)

Pytest test runner. A test function is typically added to a package’s __init__.py like so:

```
from numpy._pytesttester import PytestTester
test = PytestTester(__name__).test
del PytestTester
```

Calling this test function finds and runs all tests associated with the module and all its sub-modules.

Parameters

**module_name**module name

The name of the module to test.

#### Notes

Unlike the previous `nose`-based implementation, this class is not publicly exposed as it performs some `numpy`-specific warning suppression.
Attributes **module_name**str Full path to the package to test. © 2005–2022 NumPy Developers Licensed under the 3-clause BSD License. <https://numpy.org/doc/1.23/reference/testing.htmlnumpy.bartlett ============== numpy.bartlett(*M*)[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/lib/function_base.py#L2966-L3071) Return the Bartlett window. The Bartlett window is very similar to a triangular window, except that the end points are at zero. It is often used in signal processing for tapering a signal, without generating too much ripple in the frequency domain. Parameters **M**int Number of points in the output window. If zero or less, an empty array is returned. Returns **out**array The triangular window, with the maximum value normalized to one (the value one appears only if the number of samples is odd), with the first and last samples equal to zero. See also [`blackman`](numpy.blackman#numpy.blackman "numpy.blackman"), [`hamming`](numpy.hamming#numpy.hamming "numpy.hamming"), [`hanning`](numpy.hanning#numpy.hanning "numpy.hanning"), [`kaiser`](numpy.kaiser#numpy.kaiser "numpy.kaiser") #### Notes The Bartlett window is defined as \[w(n) = \frac{2}{M-1} \left( \frac{M-1}{2} - \left|n - \frac{M-1}{2}\right| \right)\] Most references to the Bartlett window come from the signal processing literature, where it is used as one of many windowing functions for smoothing values. Note that convolution with this window produces linear interpolation. It is also known as an apodization (which means “removing the foot”, i.e. smoothing discontinuities at the beginning and end of the sampled signal) or tapering function. The Fourier transform of the Bartlett window is the product of two sinc functions. Note the excellent discussion in Kanasewich [[2]](#r3a7a5a2c0d2a-2). #### References 1 <NAME>, “Periodogram Analysis and Continuous Spectra”, Biometrika 37, 1-16, 1950. [2](#id1) <NAME>, “Time Sequence Analysis in Geophysics”, The University of Alberta Press, 1975, pp. 
109-110. 3 <NAME> and <NAME>, “Discrete-Time Signal Processing”, Prentice-Hall, 1999, pp. 468-471. 4 Wikipedia, “Window function”, <https://en.wikipedia.org/wiki/Window_function 5 <NAME>, <NAME>, <NAME>, and <NAME>, “Numerical Recipes”, Cambridge University Press, 1986, page 429. #### Examples ``` >>> import matplotlib.pyplot as plt >>> np.bartlett(12) array([ 0. , 0.18181818, 0.36363636, 0.54545455, 0.72727273, # may vary 0.90909091, 0.90909091, 0.72727273, 0.54545455, 0.36363636, 0.18181818, 0. ]) ``` Plot the window and its frequency response (requires SciPy and matplotlib): ``` >>> from numpy.fft import fft, fftshift >>> window = np.bartlett(51) >>> plt.plot(window) [<matplotlib.lines.Line2D object at 0x...>] >>> plt.title("Bartlett window") Text(0.5, 1.0, 'Bartlett window') >>> plt.ylabel("Amplitude") Text(0, 0.5, 'Amplitude') >>> plt.xlabel("Sample") Text(0.5, 0, 'Sample') >>> plt.show() ``` ![../../_images/numpy-bartlett-1_00_00.png] ``` >>> plt.figure() <Figure size 640x480 with 0 Axes> >>> A = fft(window, 2048) / 25.5 >>> mag = np.abs(fftshift(A)) >>> freq = np.linspace(-0.5, 0.5, len(A)) >>> with np.errstate(divide='ignore', invalid='ignore'): ... response = 20 * np.log10(mag) ... >>> response = np.clip(response, -100, 100) >>> plt.plot(freq, response) [<matplotlib.lines.Line2D object at 0x...>] >>> plt.title("Frequency response of Bartlett window") Text(0.5, 1.0, 'Frequency response of Bartlett window') >>> plt.ylabel("Magnitude [dB]") Text(0, 0.5, 'Magnitude [dB]') >>> plt.xlabel("Normalized frequency [cycles per sample]") Text(0.5, 0, 'Normalized frequency [cycles per sample]') >>> _ = plt.axis('tight') >>> plt.show() ``` ![../../_images/numpy-bartlett-1_01_00.png] © 2005–2022 NumPy Developers Licensed under the 3-clause BSD License. 
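As a sanity check on the definition given in the Notes above, the Bartlett formula can be written in a few lines of pure Python (a sketch, not a replacement for `np.bartlett`; the `M == 1` branch mirrors NumPy's handling of that degenerate case):

```python
def bartlett(M):
    """Bartlett window via w(n) = 2/(M-1) * ((M-1)/2 - |n - (M-1)/2|)."""
    if M < 1:
        return []           # empty window for zero or negative M
    if M == 1:
        return [1.0]        # degenerate case: a single sample of 1.0
    half = (M - 1) / 2
    return [(half - abs(n - half)) / half for n in range(M)]
```

The endpoints come out exactly zero and, for odd `M`, the centre sample is exactly one, matching the description of the returned window above.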
<https://numpy.org/doc/1.23/reference/generated/numpy.bartlett.htmlnumpy.blackman ============== numpy.blackman(*M*)[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/lib/function_base.py#L2866-L2963) Return the Blackman window. The Blackman window is a taper formed by using the first three terms of a summation of cosines. It was designed to have close to the minimal leakage possible. It is close to optimal, only slightly worse than a Kaiser window. Parameters **M**int Number of points in the output window. If zero or less, an empty array is returned. Returns **out**ndarray The window, with the maximum value normalized to one (the value one appears only if the number of samples is odd). See also [`bartlett`](numpy.bartlett#numpy.bartlett "numpy.bartlett"), [`hamming`](numpy.hamming#numpy.hamming "numpy.hamming"), [`hanning`](numpy.hanning#numpy.hanning "numpy.hanning"), [`kaiser`](numpy.kaiser#numpy.kaiser "numpy.kaiser") #### Notes The Blackman window is defined as \[w(n) = 0.42 - 0.5 \cos(2\pi n/M) + 0.08 \cos(4\pi n/M)\] Most references to the Blackman window come from the signal processing literature, where it is used as one of many windowing functions for smoothing values. It is also known as an apodization (which means “removing the foot”, i.e. smoothing discontinuities at the beginning and end of the sampled signal) or tapering function. It is known as a “near optimal” tapering function, almost as good (by some measures) as the kaiser window. #### References <NAME>. and <NAME>., (1958) The measurement of power spectra, Dover Publications, New York. <NAME>., and <NAME>. Discrete-Time Signal Processing. Upper Saddle River, NJ: Prentice-Hall, 1999, pp. 468-471. 
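The three-term cosine sum above can be checked numerically. A quick sketch; note that NumPy's implementation divides by `M - 1` (the number of sample intervals), where the displayed formula writes `M`:

```python
import numpy as np

M = 12
n = np.arange(M)
# Three-term Blackman cosine sum, evaluated over M - 1 intervals.
w = (0.42
     - 0.5 * np.cos(2 * np.pi * n / (M - 1))
     + 0.08 * np.cos(4 * np.pi * n / (M - 1)))

assert np.allclose(w, np.blackman(M))
```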
#### Examples ``` >>> import matplotlib.pyplot as plt >>> np.blackman(12) array([-1.38777878e-17, 3.26064346e-02, 1.59903635e-01, # may vary 4.14397981e-01, 7.36045180e-01, 9.67046769e-01, 9.67046769e-01, 7.36045180e-01, 4.14397981e-01, 1.59903635e-01, 3.26064346e-02, -1.38777878e-17]) ``` Plot the window and the frequency response: ``` >>> from numpy.fft import fft, fftshift >>> window = np.blackman(51) >>> plt.plot(window) [<matplotlib.lines.Line2D object at 0x...>] >>> plt.title("Blackman window") Text(0.5, 1.0, 'Blackman window') >>> plt.ylabel("Amplitude") Text(0, 0.5, 'Amplitude') >>> plt.xlabel("Sample") Text(0.5, 0, 'Sample') >>> plt.show() ``` ![../../_images/numpy-blackman-1_00_00.png] ``` >>> plt.figure() <Figure size 640x480 with 0 Axes> >>> A = fft(window, 2048) / 25.5 >>> mag = np.abs(fftshift(A)) >>> freq = np.linspace(-0.5, 0.5, len(A)) >>> with np.errstate(divide='ignore', invalid='ignore'): ... response = 20 * np.log10(mag) ... >>> response = np.clip(response, -100, 100) >>> plt.plot(freq, response) [<matplotlib.lines.Line2D object at 0x...>] >>> plt.title("Frequency response of Blackman window") Text(0.5, 1.0, 'Frequency response of Blackman window') >>> plt.ylabel("Magnitude [dB]") Text(0, 0.5, 'Magnitude [dB]') >>> plt.xlabel("Normalized frequency [cycles per sample]") Text(0.5, 0, 'Normalized frequency [cycles per sample]') >>> _ = plt.axis('tight') >>> plt.show() ``` ![../../_images/numpy-blackman-1_01_00.png] © 2005–2022 NumPy Developers Licensed under the 3-clause BSD License. <https://numpy.org/doc/1.23/reference/generated/numpy.blackman.htmlnumpy.hamming ============= numpy.hamming(*M*)[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/lib/function_base.py#L3178-L3275) Return the Hamming window. The Hamming window is a taper formed by using a weighted cosine. Parameters **M**int Number of points in the output window. If zero or less, an empty array is returned. 
Returns **out**ndarray The window, with the maximum value normalized to one (the value one appears only if the number of samples is odd). See also [`bartlett`](numpy.bartlett#numpy.bartlett "numpy.bartlett"), [`blackman`](numpy.blackman#numpy.blackman "numpy.blackman"), [`hanning`](numpy.hanning#numpy.hanning "numpy.hanning"), [`kaiser`](numpy.kaiser#numpy.kaiser "numpy.kaiser") #### Notes The Hamming window is defined as \[w(n) = 0.54 - 0.46\cos\left(\frac{2\pi{n}}{M-1}\right) \qquad 0 \leq n \leq M-1\] The Hamming was named for <NAME>, an associate of <NAME> and is described in Blackman and Tukey. It was recommended for smoothing the truncated autocovariance function in the time domain. Most references to the Hamming window come from the signal processing literature, where it is used as one of many windowing functions for smoothing values. It is also known as an apodization (which means “removing the foot”, i.e. smoothing discontinuities at the beginning and end of the sampled signal) or tapering function. #### References 1 <NAME>. and <NAME>., (1958) The measurement of power spectra, Dover Publications, New York. 2 <NAME>, “Time Sequence Analysis in Geophysics”, The University of Alberta Press, 1975, pp. 109-110. 3 Wikipedia, “Window function”, <https://en.wikipedia.org/wiki/Window_function 4 <NAME>, <NAME>, <NAME>, and <NAME>, “Numerical Recipes”, Cambridge University Press, 1986, page 425. 
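The raised-cosine definition above can be verified directly against `np.hamming` (a quick numerical sketch):

```python
import numpy as np

M = 12
n = np.arange(M)
# Hamming definition: w(n) = 0.54 - 0.46 cos(2*pi*n / (M - 1))
w = 0.54 - 0.46 * np.cos(2 * np.pi * n / (M - 1))

assert np.allclose(w, np.hamming(M))
```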
#### Examples ``` >>> np.hamming(12) array([ 0.08 , 0.15302337, 0.34890909, 0.60546483, 0.84123594, # may vary 0.98136677, 0.98136677, 0.84123594, 0.60546483, 0.34890909, 0.15302337, 0.08 ]) ``` Plot the window and the frequency response: ``` >>> import matplotlib.pyplot as plt >>> from numpy.fft import fft, fftshift >>> window = np.hamming(51) >>> plt.plot(window) [<matplotlib.lines.Line2D object at 0x...>] >>> plt.title("Hamming window") Text(0.5, 1.0, 'Hamming window') >>> plt.ylabel("Amplitude") Text(0, 0.5, 'Amplitude') >>> plt.xlabel("Sample") Text(0.5, 0, 'Sample') >>> plt.show() ``` ![../../_images/numpy-hamming-1_00_00.png] ``` >>> plt.figure() <Figure size 640x480 with 0 Axes> >>> A = fft(window, 2048) / 25.5 >>> mag = np.abs(fftshift(A)) >>> freq = np.linspace(-0.5, 0.5, len(A)) >>> response = 20 * np.log10(mag) >>> response = np.clip(response, -100, 100) >>> plt.plot(freq, response) [<matplotlib.lines.Line2D object at 0x...>] >>> plt.title("Frequency response of Hamming window") Text(0.5, 1.0, 'Frequency response of Hamming window') >>> plt.ylabel("Magnitude [dB]") Text(0, 0.5, 'Magnitude [dB]') >>> plt.xlabel("Normalized frequency [cycles per sample]") Text(0.5, 0, 'Normalized frequency [cycles per sample]') >>> plt.axis('tight') ... >>> plt.show() ``` ![../../_images/numpy-hamming-1_01_00.png] © 2005–2022 NumPy Developers Licensed under the 3-clause BSD License. <https://numpy.org/doc/1.23/reference/generated/numpy.hamming.htmlnumpy.hanning ============= numpy.hanning(*M*)[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/lib/function_base.py#L3074-L3175) Return the Hanning window. The Hanning window is a taper formed by using a weighted cosine. Parameters **M**int Number of points in the output window. If zero or less, an empty array is returned. Returns **out**ndarray, shape(M,) The window, with the maximum value normalized to one (the value one appears only if `M` is odd). 
See also [`bartlett`](numpy.bartlett#numpy.bartlett "numpy.bartlett"), [`blackman`](numpy.blackman#numpy.blackman "numpy.blackman"), [`hamming`](numpy.hamming#numpy.hamming "numpy.hamming"), [`kaiser`](numpy.kaiser#numpy.kaiser "numpy.kaiser") #### Notes The Hanning window is defined as \[w(n) = 0.5 - 0.5\cos\left(\frac{2\pi{n}}{M-1}\right) \qquad 0 \leq n \leq M-1\] The Hanning was named for <NAME>, an Austrian meteorologist. It is also known as the Cosine Bell. Some authors prefer that it be called a Hann window, to help avoid confusion with the very similar Hamming window. Most references to the Hanning window come from the signal processing literature, where it is used as one of many windowing functions for smoothing values. It is also known as an apodization (which means “removing the foot”, i.e. smoothing discontinuities at the beginning and end of the sampled signal) or tapering function. #### References 1 <NAME>. and <NAME>., (1958) The measurement of power spectra, Dover Publications, New York. 2 <NAME>, “Time Sequence Analysis in Geophysics”, The University of Alberta Press, 1975, pp. 106-108. 3 Wikipedia, “Window function”, <https://en.wikipedia.org/wiki/Window_function 4 <NAME>, <NAME>, <NAME>, and <NAME>, “Numerical Recipes”, Cambridge University Press, 1986, page 425. #### Examples ``` >>> np.hanning(12) array([0. , 0.07937323, 0.29229249, 0.57115742, 0.82743037, 0.97974649, 0.97974649, 0.82743037, 0.57115742, 0.29229249, 0.07937323, 0. 
]) ``` Plot the window and its frequency response: ``` >>> import matplotlib.pyplot as plt >>> from numpy.fft import fft, fftshift >>> window = np.hanning(51) >>> plt.plot(window) [<matplotlib.lines.Line2D object at 0x...>] >>> plt.title("Hann window") Text(0.5, 1.0, 'Hann window') >>> plt.ylabel("Amplitude") Text(0, 0.5, 'Amplitude') >>> plt.xlabel("Sample") Text(0.5, 0, 'Sample') >>> plt.show() ``` ![../../_images/numpy-hanning-1_00_00.png] ``` >>> plt.figure() <Figure size 640x480 with 0 Axes> >>> A = fft(window, 2048) / 25.5 >>> mag = np.abs(fftshift(A)) >>> freq = np.linspace(-0.5, 0.5, len(A)) >>> with np.errstate(divide='ignore', invalid='ignore'): ... response = 20 * np.log10(mag) ... >>> response = np.clip(response, -100, 100) >>> plt.plot(freq, response) [<matplotlib.lines.Line2D object at 0x...>] >>> plt.title("Frequency response of the Hann window") Text(0.5, 1.0, 'Frequency response of the Hann window') >>> plt.ylabel("Magnitude [dB]") Text(0, 0.5, 'Magnitude [dB]') >>> plt.xlabel("Normalized frequency [cycles per sample]") Text(0.5, 0, 'Normalized frequency [cycles per sample]') >>> plt.axis('tight') ... >>> plt.show() ``` ![../../_images/numpy-hanning-1_01_00.png] © 2005–2022 NumPy Developers Licensed under the 3-clause BSD License. <https://numpy.org/doc/1.23/reference/generated/numpy.hanning.htmlnumpy.ndarray.setfield ====================== method ndarray.setfield(*val*, *dtype*, *offset=0*) Put a value into a specified place in a field defined by a data-type. Place `val` into `a`’s field defined by [`dtype`](numpy.dtype#numpy.dtype "numpy.dtype") and beginning `offset` bytes into the field. Parameters **val**object Value to be placed in field. **dtype**dtype object Data-type of the field in which to place `val`. **offset**int, optional The number of bytes into the field at which to place `val`. 
Returns None See also [`getfield`](numpy.ndarray.getfield#numpy.ndarray.getfield "numpy.ndarray.getfield") #### Examples ``` >>> x = np.eye(3) >>> x.getfield(np.float64) array([[1., 0., 0.], [0., 1., 0.], [0., 0., 1.]]) >>> x.setfield(3, np.int32) >>> x.getfield(np.int32) array([[3, 3, 3], [3, 3, 3], [3, 3, 3]], dtype=int32) >>> x array([[1.0e+000, 1.5e-323, 1.5e-323], [1.5e-323, 1.0e+000, 1.5e-323], [1.5e-323, 1.5e-323, 1.0e+000]]) >>> x.setfield(np.eye(3), np.int32) >>> x array([[1., 0., 0.], [0., 1., 0.], [0., 0., 1.]]) ``` © 2005–2022 NumPy Developers Licensed under the 3-clause BSD License. <https://numpy.org/doc/1.23/reference/generated/numpy.ndarray.setfield.htmlnumpy.ndarray.conjugate ======================= method ndarray.conjugate() Return the complex conjugate, element-wise. Refer to [`numpy.conjugate`](numpy.conjugate#numpy.conjugate "numpy.conjugate") for full documentation. See also [`numpy.conjugate`](numpy.conjugate#numpy.conjugate "numpy.conjugate") equivalent function © 2005–2022 NumPy Developers Licensed under the 3-clause BSD License. <https://numpy.org/doc/1.23/reference/generated/numpy.ndarray.conjugate.htmlnumpy.vectorize.__call__ ============================ method vectorize.__call__(**args*, ***kwargs*)[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/lib/function_base.py#L2300-L2328) Return arrays with the results of `pyfunc` broadcast (vectorized) over `args` and `kwargs` not in `excluded`. © 2005–2022 NumPy Developers Licensed under the 3-clause BSD License. <https://numpy.org/doc/1.23/reference/generated/numpy.vectorize.__call__.htmlnumpy.nditer.reset ================== method nditer.reset() Reset the iterator to its initial state. © 2005–2022 NumPy Developers Licensed under the 3-clause BSD License. 
<https://numpy.org/doc/1.23/reference/generated/numpy.nditer.reset.html

numpy.ndarray.newbyteorder
==========================

method

ndarray.newbyteorder(*new_order='S'*, */*)

Return the array with the same data viewed with a different byte order. Equivalent to:

```
arr.view(arr.dtype.newbyteorder(new_order))
```

Changes are also made in all fields and sub-arrays of the array data type.

Parameters

**new_order**string, optional

Byte order to force; a value from the byte order specifications below. `new_order` codes can be any of:

* ‘S’ - swap dtype from current to opposite endian
* {‘<’, ‘little’} - little endian
* {‘>’, ‘big’} - big endian
* {‘=’, ‘native’} - native order, equivalent to [`sys.byteorder`](https://docs.python.org/3/library/sys.html#sys.byteorder "(in Python v3.10)")
* {‘|’, ‘I’} - ignore (no change to byte order)

The default value (‘S’) results in swapping the current byte order.

Returns

**new_arr**array

New array object with the dtype reflecting given change to the byte order.

<https://numpy.org/doc/1.23/reference/generated/numpy.ndarray.newbyteorder.html

Importing data with genfromtxt
==============================

NumPy provides several functions to create arrays from tabular data. We focus here on the [`genfromtxt`](../reference/generated/numpy.genfromtxt#numpy.genfromtxt "numpy.genfromtxt") function.

In a nutshell, [`genfromtxt`](../reference/generated/numpy.genfromtxt#numpy.genfromtxt "numpy.genfromtxt") runs two main loops. The first loop converts each line of the file into a sequence of strings. The second loop converts each string to the appropriate data type. This mechanism is slower than a single loop, but gives more flexibility.
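As a quick illustration of that flexibility, a minimal sketch showing that an empty field is tolerated rather than raising an error:

```python
import numpy as np
from io import StringIO

data = "1, 2, 3\n4, , 6"  # second row has an empty field

arr = np.genfromtxt(StringIO(data), delimiter=",")
# The empty entry is read as nan instead of aborting the parse.
assert np.isnan(arr[1, 1])
assert arr.shape == (2, 3)
```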
In particular, [`genfromtxt`](../reference/generated/numpy.genfromtxt#numpy.genfromtxt "numpy.genfromtxt") is able to take missing data into account, when other faster and simpler functions like [`loadtxt`](../reference/generated/numpy.loadtxt#numpy.loadtxt "numpy.loadtxt") cannot. Note When giving examples, we will use the following conventions: ``` >>> import numpy as np >>> from io import StringIO ``` Defining the input ------------------ The only mandatory argument of [`genfromtxt`](../reference/generated/numpy.genfromtxt#numpy.genfromtxt "numpy.genfromtxt") is the source of the data. It can be a string, a list of strings, a generator or an open file-like object with a `read` method, for example, a file or [`io.StringIO`](https://docs.python.org/3/library/io.html#io.StringIO "(in Python v3.10)") object. If a single string is provided, it is assumed to be the name of a local or remote file. If a list of strings or a generator returning strings is provided, each string is treated as one line in a file. When the URL of a remote file is passed, the file is automatically downloaded to the current directory and opened. Recognized file types are text files and archives. Currently, the function recognizes `gzip` and `bz2` (`bzip2`) archives. The type of the archive is determined from the extension of the file: if the filename ends with `'.gz'`, a `gzip` archive is expected; if it ends with `'bz2'`, a `bzip2` archive is assumed. Splitting the lines into columns -------------------------------- ### The `delimiter` argument Once the file is defined and open for reading, [`genfromtxt`](../reference/generated/numpy.genfromtxt#numpy.genfromtxt "numpy.genfromtxt") splits each non-empty line into a sequence of strings. Empty or commented lines are just skipped. The `delimiter` keyword is used to define how the splitting should take place. Quite often, a single character marks the separation between columns. 
For example, comma-separated files (CSV) use a comma (`,`) or a semicolon (`;`) as delimiter: ``` >>> data = u"1, 2, 3\n4, 5, 6" >>> np.genfromtxt(StringIO(data), delimiter=",") array([[1., 2., 3.], [4., 5., 6.]]) ``` Another common separator is `"\t"`, the tabulation character. However, we are not limited to a single character, any string will do. By default, [`genfromtxt`](../reference/generated/numpy.genfromtxt#numpy.genfromtxt "numpy.genfromtxt") assumes `delimiter=None`, meaning that the line is split along white spaces (including tabs) and that consecutive white spaces are considered as a single white space. Alternatively, we may be dealing with a fixed-width file, where columns are defined as a given number of characters. In that case, we need to set `delimiter` to a single integer (if all the columns have the same size) or to a sequence of integers (if columns can have different sizes): ``` >>> data = u" 1 2 3\n 4 5 67\n890123 4" >>> np.genfromtxt(StringIO(data), delimiter=3) array([[ 1., 2., 3.], [ 4., 5., 67.], [890., 123., 4.]]) >>> data = u"123456789\n 4 7 9\n 4567 9" >>> np.genfromtxt(StringIO(data), delimiter=(4, 3, 2)) array([[1234., 567., 89.], [ 4., 7., 9.], [ 4., 567., 9.]]) ``` ### The `autostrip` argument By default, when a line is decomposed into a series of strings, the individual entries are not stripped of leading nor trailing white spaces. This behavior can be overwritten by setting the optional argument `autostrip` to a value of `True`: ``` >>> data = u"1, abc , 2\n 3, xxx, 4" >>> # Without autostrip >>> np.genfromtxt(StringIO(data), delimiter=",", dtype="|U5") array([['1', ' abc ', ' 2'], ['3', ' xxx', ' 4']], dtype='<U5') >>> # With autostrip >>> np.genfromtxt(StringIO(data), delimiter=",", dtype="|U5", autostrip=True) array([['1', 'abc', '2'], ['3', 'xxx', '4']], dtype='<U5') ``` ### The `comments` argument The optional argument `comments` is used to define a character string that marks the beginning of a comment. 
By default, [`genfromtxt`](../reference/generated/numpy.genfromtxt#numpy.genfromtxt "numpy.genfromtxt") assumes `comments='#'`. The comment marker may occur anywhere on the line. Any character present after the comment marker(s) is simply ignored: ``` >>> data = u"""# ... # Skip me ! ... # Skip me too ! ... 1, 2 ... 3, 4 ... 5, 6 #This is the third line of the data ... 7, 8 ... # And here comes the last line ... 9, 0 ... """ >>> np.genfromtxt(StringIO(data), comments="#", delimiter=",") array([[1., 2.], [3., 4.], [5., 6.], [7., 8.], [9., 0.]]) ``` New in version 1.7.0: When `comments` is set to `None`, no lines are treated as comments. Note There is one notable exception to this behavior: if the optional argument `names=True`, the first commented line will be examined for names. Skipping lines and choosing columns ----------------------------------- ### The `skip_header` and `skip_footer` arguments The presence of a header in the file can hinder data processing. In that case, we need to use the `skip_header` optional argument. The values of this argument must be an integer which corresponds to the number of lines to skip at the beginning of the file, before any other action is performed. Similarly, we can skip the last `n` lines of the file by using the `skip_footer` attribute and giving it a value of `n`: ``` >>> data = u"\n".join(str(i) for i in range(10)) >>> np.genfromtxt(StringIO(data),) array([0., 1., 2., 3., 4., 5., 6., 7., 8., 9.]) >>> np.genfromtxt(StringIO(data), ... skip_header=3, skip_footer=5) array([3., 4.]) ``` By default, `skip_header=0` and `skip_footer=0`, meaning that no lines are skipped. ### The `usecols` argument In some cases, we are not interested in all the columns of the data but only a few of them. We can select which columns to import with the `usecols` argument. This argument accepts a single integer or a sequence of integers corresponding to the indices of the columns to import. 
Remember that by convention, the first column has an index of 0. Negative integers behave the same as regular Python negative indexes. For example, if we want to import only the first and the last columns, we can use `usecols=(0, -1)`: ``` >>> data = u"1 2 3\n4 5 6" >>> np.genfromtxt(StringIO(data), usecols=(0, -1)) array([[1., 3.], [4., 6.]]) ``` If the columns have names, we can also select which columns to import by giving their name to the `usecols` argument, either as a sequence of strings or a comma-separated string: ``` >>> data = u"1 2 3\n4 5 6" >>> np.genfromtxt(StringIO(data), ... names="a, b, c", usecols=("a", "c")) array([(1., 3.), (4., 6.)], dtype=[('a', '<f8'), ('c', '<f8')]) >>> np.genfromtxt(StringIO(data), ... names="a, b, c", usecols=("a, c")) array([(1., 3.), (4., 6.)], dtype=[('a', '<f8'), ('c', '<f8')]) ``` Choosing the data type ---------------------- The main way to control how the sequences of strings we have read from the file are converted to other types is to set the `dtype` argument. Acceptable values for this argument are: * a single type, such as `dtype=float`. The output will be 2D with the given dtype, unless a name has been associated with each column with the use of the `names` argument (see below). Note that `dtype=float` is the default for [`genfromtxt`](../reference/generated/numpy.genfromtxt#numpy.genfromtxt "numpy.genfromtxt"). * a sequence of types, such as `dtype=(int, float, float)`. * a comma-separated string, such as `dtype="i4,f8,|U3"`. * a dictionary with two keys `'names'` and `'formats'`. * a sequence of tuples `(name, type)`, such as `dtype=[('A', int), ('B', float)]`. * an existing [`numpy.dtype`](../reference/generated/numpy.dtype#numpy.dtype "numpy.dtype") object. * the special value `None`. In that case, the type of the columns will be determined from the data itself (see below). In all the cases but the first one, the output will be a 1D array with a structured dtype. 
This dtype has as many fields as items in the sequence. The field names are defined with the `names` keyword. When `dtype=None`, the type of each column is determined iteratively from its data. We start by checking whether a string can be converted to a boolean (that is, if the string matches `true` or `false` in lower cases); then whether it can be converted to an integer, then to a float, then to a complex and eventually to a string. The option `dtype=None` is provided for convenience. However, it is significantly slower than setting the dtype explicitly. Setting the names ----------------- ### The `names` argument A natural approach when dealing with tabular data is to allocate a name to each column. A first possibility is to use an explicit structured dtype, as mentioned previously: ``` >>> data = StringIO("1 2 3\n 4 5 6") >>> np.genfromtxt(data, dtype=[(_, int) for _ in "abc"]) array([(1, 2, 3), (4, 5, 6)], dtype=[('a', '<i8'), ('b', '<i8'), ('c', '<i8')]) ``` Another simpler possibility is to use the `names` keyword with a sequence of strings or a comma-separated string: ``` >>> data = StringIO("1 2 3\n 4 5 6") >>> np.genfromtxt(data, names="A, B, C") array([(1., 2., 3.), (4., 5., 6.)], dtype=[('A', '<f8'), ('B', '<f8'), ('C', '<f8')]) ``` In the example above, we used the fact that by default, `dtype=float`. By giving a sequence of names, we are forcing the output to a structured dtype. We may sometimes need to define the column names from the data itself. In that case, we must use the `names` keyword with a value of `True`. The names will then be read from the first line (after the `skip_header` ones), even if the line is commented out: ``` >>> data = StringIO("So it goes\n#a b c\n1 2 3\n 4 5 6") >>> np.genfromtxt(data, skip_header=1, names=True) array([(1., 2., 3.), (4., 5., 6.)], dtype=[('a', '<f8'), ('b', '<f8'), ('c', '<f8')]) ``` The default value of `names` is `None`. 
If we give any other value to the keyword, the new names will overwrite the field names we may have defined with the dtype: ``` >>> data = StringIO("1 2 3\n 4 5 6") >>> ndtype=[('a',int), ('b', float), ('c', int)] >>> names = ["A", "B", "C"] >>> np.genfromtxt(data, names=names, dtype=ndtype) array([(1, 2., 3), (4, 5., 6)], dtype=[('A', '<i8'), ('B', '<f8'), ('C', '<i8')]) ``` ### The `defaultfmt` argument If `names=None` but a structured dtype is expected, names are defined with the standard NumPy default of `"f%i"`, yielding names like `f0`, `f1` and so forth: ``` >>> data = StringIO("1 2 3\n 4 5 6") >>> np.genfromtxt(data, dtype=(int, float, int)) array([(1, 2., 3), (4, 5., 6)], dtype=[('f0', '<i8'), ('f1', '<f8'), ('f2', '<i8')]) ``` In the same way, if we don’t give enough names to match the length of the dtype, the missing names will be defined with this default template: ``` >>> data = StringIO("1 2 3\n 4 5 6") >>> np.genfromtxt(data, dtype=(int, float, int), names="a") array([(1, 2., 3), (4, 5., 6)], dtype=[('a', '<i8'), ('f0', '<f8'), ('f1', '<i8')]) ``` We can overwrite this default with the `defaultfmt` argument, that takes any format string: ``` >>> data = StringIO("1 2 3\n 4 5 6") >>> np.genfromtxt(data, dtype=(int, float, int), defaultfmt="var_%02i") array([(1, 2., 3), (4, 5., 6)], dtype=[('var_00', '<i8'), ('var_01', '<f8'), ('var_02', '<i8')]) ``` Note We need to keep in mind that `defaultfmt` is used only if some names are expected but not defined. ### Validating names NumPy arrays with a structured dtype can also be viewed as [`recarray`](../reference/generated/numpy.recarray#numpy.recarray "numpy.recarray"), where a field can be accessed as if it were an attribute. For that reason, we may need to make sure that the field name doesn’t contain any space or invalid character, or that it does not correspond to the name of a standard attribute (like `size` or `shape`), which would confuse the interpreter. 
[`genfromtxt`](../reference/generated/numpy.genfromtxt#numpy.genfromtxt "numpy.genfromtxt") accepts three optional arguments that provide finer control over the names:

`deletechars`

Gives a string combining all the characters that must be deleted from the name. By default, invalid characters are `~!@#$%^&*()-=+~\|]}[{';: /?.>,<`.

`excludelist`

Gives a list of the names to exclude, such as `return`, `file`, `print`.
If one of the input names is part of this list, an underscore character (`'_'`) will be appended to it.

`case_sensitive`

Whether the names should be case-sensitive (`case_sensitive=True`), converted to upper case (`case_sensitive=False` or `case_sensitive='upper'`) or to lower case (`case_sensitive='lower'`).

Tweaking the conversion
-----------------------

### The `converters` argument

Usually, defining a dtype is sufficient to define how the sequence of strings must be converted. However, some additional control may sometimes be required. For example, we may want to make sure that a date in a format `YYYY/MM/DD` is converted to a [`datetime`](https://docs.python.org/3/library/datetime.html#datetime.datetime "(in Python v3.10)") object, or that a string like `xx%` is properly converted to a float between 0 and 1.

In such cases, we should define conversion functions with the `converters` argument. The value of this argument is typically a dictionary with column indices or column names as keys and conversion functions as values. These conversion functions can be either actual functions or lambda functions. In any case, they should accept only a string as input and output only a single element of the wanted type.

In the following example, the second column is converted from a string representing a percentage to a float between 0 and 1:

```
>>> convertfunc = lambda x: float(x.strip(b"%"))/100.
>>> data = u"1, 2.3%, 45.\n6, 78.9%, 0"
>>> names = ("i", "p", "n")
>>> # General case .....
>>> np.genfromtxt(StringIO(data), delimiter=",", names=names)
array([(1., nan, 45.), (6., nan, 0.)],
      dtype=[('i', '<f8'), ('p', '<f8'), ('n', '<f8')])
```

We need to keep in mind that by default, `dtype=float`. A float is therefore expected for the second column. However, the strings `' 2.3%'` and `' 78.9%'` cannot be converted to float and we end up having `np.nan` instead. Let’s now use a converter:

```
>>> # Converted case ...
>>> np.genfromtxt(StringIO(data), delimiter=",", names=names, ... converters={1: convertfunc}) array([(1., 0.023, 45.), (6., 0.789, 0.)], dtype=[('i', '<f8'), ('p', '<f8'), ('n', '<f8')]) ``` The same results can be obtained by using the name of the second column (`"p"`) as key instead of its index (1): ``` >>> # Using a name for the converter ... >>> np.genfromtxt(StringIO(data), delimiter=",", names=names, ... converters={"p": convertfunc}) array([(1., 0.023, 45.), (6., 0.789, 0.)], dtype=[('i', '<f8'), ('p', '<f8'), ('n', '<f8')]) ``` Converters can also be used to provide a default for missing entries. In the following example, the converter `convert` transforms a stripped string into the corresponding float or into -999 if the string is empty. We need to explicitly strip the string from white spaces as it is not done by default: ``` >>> data = u"1, , 3\n 4, 5, 6" >>> convert = lambda x: float(x.strip() or -999) >>> np.genfromtxt(StringIO(data), delimiter=",", ... converters={1: convert}) array([[ 1., -999., 3.], [ 4., 5., 6.]]) ``` ### Using missing and filling values Some entries may be missing in the dataset we are trying to import. In a previous example, we used a converter to transform an empty string into a float. However, user-defined converters may rapidly become cumbersome to manage. The [`genfromtxt`](../reference/generated/numpy.genfromtxt#numpy.genfromtxt "numpy.genfromtxt") function provides two other complementary mechanisms: the `missing_values` argument is used to recognize missing data and a second argument, `filling_values`, is used to process these missing data. ### `missing_values` By default, any empty string is marked as missing. We can also consider more complex strings, such as `"N/A"` or `"???"` to represent missing or invalid data. 
The `missing_values` argument accepts three kinds of values:

a string or a comma-separated string

This string will be used as the marker for missing data for all the columns.

a sequence of strings

In that case, each item is associated with a column, in order.

a dictionary

Values of the dictionary are strings or sequences of strings. The corresponding keys can be column indices (integers) or column names (strings). In addition, the special key `None` can be used to define a default applicable to all columns.

### `filling_values`

We know how to recognize missing data, but we still need to provide a value for these missing entries. By default, this value is determined from the expected dtype according to this table:

| Expected type | Default |
| --- | --- |
| `bool` | `False` |
| `int` | `-1` |
| `float` | `np.nan` |
| `complex` | `np.nan+0j` |
| `string` | `'???'` |

We can get finer control over the conversion of missing values with the `filling_values` optional argument. Like `missing_values`, this argument accepts different kinds of values:

a single value

This will be the default for all columns.

a sequence of values

Each entry will be the default for the corresponding column.

a dictionary

Each key can be a column index or a column name, and the corresponding value should be a single object. We can use the special key `None` to define a default for all columns.

In the following example, we suppose that the missing values are flagged with `"N/A"` in the first column and by `"???"` in the third column. We wish to transform these missing values to 0 if they occur in the first and second column, and to -999 if they occur in the last column:

```
>>> data = u"N/A, 2, 3\n4, ,???"
>>> kwargs = dict(delimiter=",",
...               dtype=int,
...               names="a,b,c",
...               missing_values={0:"N/A", 'b':" ", 2:"???"},
...
filling_values={0:0, 'b':0, 2:-999})
>>> np.genfromtxt(StringIO(data), **kwargs)
array([(0, 2, 3), (4, 0, -999)],
      dtype=[('a', '<i8'), ('b', '<i8'), ('c', '<i8')])
```

### `usemask`

We may also want to keep track of the occurrence of missing data by constructing a boolean mask, with `True` entries where data was missing and `False` otherwise. To do that, we just have to set the optional argument `usemask` to `True` (the default is `False`). The output array will then be a [`MaskedArray`](../reference/maskedarray.baseclass#numpy.ma.MaskedArray "numpy.ma.MaskedArray").

Shortcut functions
------------------

In addition to [`genfromtxt`](../reference/generated/numpy.genfromtxt#numpy.genfromtxt "numpy.genfromtxt"), the `numpy.lib.npyio` module provides several convenience functions derived from [`genfromtxt`](../reference/generated/numpy.genfromtxt#numpy.genfromtxt "numpy.genfromtxt"). These functions work the same way as the original, but they have different default values.

`numpy.lib.npyio.recfromtxt`

Returns a standard [`numpy.recarray`](../reference/generated/numpy.recarray#numpy.recarray "numpy.recarray") (if `usemask=False`) or a `numpy.ma.mrecords.MaskedRecords` array (if `usemask=True`). The default dtype is `dtype=None`, meaning that the types of each column will be automatically determined.

`numpy.lib.npyio.recfromcsv`

Like `numpy.lib.npyio.recfromtxt`, but with a default `delimiter=","`.

<https://numpy.org/doc/1.23/user/basics.io.genfromtxt.html

numpy.matrix.all
================

method

matrix.all(*axis=None*, *out=None*)[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/matrixlib/defmatrix.py#L571-L609)

Test whether all matrix elements along a given axis evaluate to True.
Parameters

**See `numpy.all` for complete descriptions**

See also

[`numpy.all`](numpy.all#numpy.all "numpy.all")

#### Notes

This is the same as [`ndarray.all`](numpy.ndarray.all#numpy.ndarray.all "numpy.ndarray.all"), but it returns a [`matrix`](numpy.matrix#numpy.matrix "numpy.matrix") object.

#### Examples

```
>>> x = np.matrix(np.arange(12).reshape((3,4))); x
matrix([[ 0,  1,  2,  3],
        [ 4,  5,  6,  7],
        [ 8,  9, 10, 11]])
>>> y = x[0]; y
matrix([[0, 1, 2, 3]])
>>> (x == y)
matrix([[ True,  True,  True,  True],
        [False, False, False, False],
        [False, False, False, False]])
>>> (x == y).all()
False
>>> (x == y).all(0)
matrix([[False, False, False, False]])
>>> (x == y).all(1)
matrix([[ True],
        [False],
        [False]])
```

numpy.matrix.any
================

method

matrix.any(*axis=None*, *out=None*)[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/matrixlib/defmatrix.py#L548-L569)

Test whether any array element along a given axis evaluates to True.

Refer to [`numpy.any`](numpy.any#numpy.any "numpy.any") for full documentation.

Parameters

**axis**int, optional
Axis along which logical OR is performed.

**out**ndarray, optional
Output to existing array instead of creating a new one; must have the same shape as the expected output.

Returns

**any**bool, ndarray
Returns a single bool if `axis` is `None`; otherwise, returns an [`ndarray`](numpy.ndarray#numpy.ndarray "numpy.ndarray").

numpy.matrix.argmax
===================

method

matrix.argmax(*axis=None*, *out=None*)[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/matrixlib/defmatrix.py#L646-L683)

Indexes of the maximum values along an axis. Return the indexes of the first occurrences of the maximum values along the specified axis.
If axis is None, the index is for the flattened matrix.

Parameters

**See `numpy.argmax` for complete descriptions**

See also

[`numpy.argmax`](numpy.argmax#numpy.argmax "numpy.argmax")

#### Notes

This is the same as [`ndarray.argmax`](numpy.ndarray.argmax#numpy.ndarray.argmax "numpy.ndarray.argmax"), but returns a [`matrix`](numpy.matrix#numpy.matrix "numpy.matrix") object where [`ndarray.argmax`](numpy.ndarray.argmax#numpy.ndarray.argmax "numpy.ndarray.argmax") would return an [`ndarray`](numpy.ndarray#numpy.ndarray "numpy.ndarray").

#### Examples

```
>>> x = np.matrix(np.arange(12).reshape((3,4))); x
matrix([[ 0,  1,  2,  3],
        [ 4,  5,  6,  7],
        [ 8,  9, 10, 11]])
>>> x.argmax()
11
>>> x.argmax(0)
matrix([[2, 2, 2, 2]])
>>> x.argmax(1)
matrix([[3],
        [3],
        [3]])
```

numpy.matrix.argmin
===================

method

matrix.argmin(*axis=None*, *out=None*)[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/matrixlib/defmatrix.py#L720-L757)

Indexes of the minimum values along an axis. Return the indexes of the first occurrences of the minimum values along the specified axis. If axis is None, the index is for the flattened matrix.

Parameters

**See `numpy.argmin` for complete descriptions.**

See also

[`numpy.argmin`](numpy.argmin#numpy.argmin "numpy.argmin")

#### Notes

This is the same as [`ndarray.argmin`](numpy.ndarray.argmin#numpy.ndarray.argmin "numpy.ndarray.argmin"), but returns a [`matrix`](numpy.matrix#numpy.matrix "numpy.matrix") object where [`ndarray.argmin`](numpy.ndarray.argmin#numpy.ndarray.argmin "numpy.ndarray.argmin") would return an [`ndarray`](numpy.ndarray#numpy.ndarray "numpy.ndarray").
#### Examples

```
>>> x = -np.matrix(np.arange(12).reshape((3,4))); x
matrix([[  0,  -1,  -2,  -3],
        [ -4,  -5,  -6,  -7],
        [ -8,  -9, -10, -11]])
>>> x.argmin()
11
>>> x.argmin(0)
matrix([[2, 2, 2, 2]])
>>> x.argmin(1)
matrix([[3],
        [3],
        [3]])
```

numpy.matrix.argpartition
=========================

method

matrix.argpartition(*kth*, *axis=-1*, *kind='introselect'*, *order=None*)

Returns the indices that would partition this array.

Refer to [`numpy.argpartition`](numpy.argpartition#numpy.argpartition "numpy.argpartition") for full documentation.

New in version 1.8.0.

See also

[`numpy.argpartition`](numpy.argpartition#numpy.argpartition "numpy.argpartition")
equivalent function

numpy.matrix.argsort
====================

method

matrix.argsort(*axis=-1*, *kind=None*, *order=None*)

Returns the indices that would sort this array.

Refer to [`numpy.argsort`](numpy.argsort#numpy.argsort "numpy.argsort") for full documentation.

See also

[`numpy.argsort`](numpy.argsort#numpy.argsort "numpy.argsort")
equivalent function

numpy.matrix.astype
===================

method

matrix.astype(*dtype*, *order='K'*, *casting='unsafe'*, *subok=True*, *copy=True*)

Copy of the array, cast to a specified type.

Parameters

**dtype**str or dtype
Typecode or data-type to which the array is cast.

**order**{‘C’, ‘F’, ‘A’, ‘K’}, optional
Controls the memory layout order of the result.
‘C’ means C order, ‘F’ means Fortran order, ‘A’ means ‘F’ order if all the arrays are Fortran contiguous, ‘C’ order otherwise, and ‘K’ means as close to the order the array elements appear in memory as possible. Default is ‘K’.

**casting**{‘no’, ‘equiv’, ‘safe’, ‘same_kind’, ‘unsafe’}, optional
Controls what kind of data casting may occur. Defaults to ‘unsafe’ for backwards compatibility.

* ‘no’ means the data types should not be cast at all.
* ‘equiv’ means only byte-order changes are allowed.
* ‘safe’ means only casts which can preserve values are allowed.
* ‘same_kind’ means only safe casts or casts within a kind, like float64 to float32, are allowed.
* ‘unsafe’ means any data conversions may be done.

**subok**bool, optional
If True, then sub-classes will be passed-through (default), otherwise the returned array will be forced to be a base-class array.

**copy**bool, optional
By default, astype always returns a newly allocated array. If this is set to false, and the [`dtype`](numpy.dtype#numpy.dtype "numpy.dtype"), `order`, and `subok` requirements are satisfied, the input array is returned instead of a copy.

Returns

**arr_t**ndarray
Unless [`copy`](numpy.copy#numpy.copy "numpy.copy") is False and the other conditions for returning the input array are satisfied (see description for the [`copy`](numpy.copy#numpy.copy "numpy.copy") input parameter), `arr_t` is a new array of the same shape as the input array, with dtype and order given by [`dtype`](numpy.dtype#numpy.dtype "numpy.dtype") and `order`.

Raises

ComplexWarning
When casting from complex to float or int. To avoid this, one should use `a.real.astype(t)`.

#### Notes

Changed in version 1.17.0: Casting between a simple data type and a structured one is possible only for “unsafe” casting. Casting to multiple fields is allowed, but casting from multiple fields is not.
Changed in version 1.9.0: Casting from numeric to string types in ‘safe’ casting mode requires that the string dtype length is long enough to store the max integer/float value converted.

#### Examples

```
>>> x = np.array([1, 2, 2.5])
>>> x
array([1. , 2. , 2.5])
>>> x.astype(int)
array([1, 2, 2])
```

numpy.matrix.byteswap
=====================

method

matrix.byteswap(*inplace=False*)

Swap the bytes of the array elements. Toggle between low-endian and big-endian data representation by returning a byteswapped array, optionally swapped in-place. Arrays of byte-strings are not swapped. The real and imaginary parts of a complex number are swapped individually.

Parameters

**inplace**bool, optional
If `True`, swap bytes in-place; default is `False`.

Returns

**out**ndarray
The byteswapped array. If `inplace` is `True`, this is a view to self.

#### Examples

```
>>> A = np.array([1, 256, 8755], dtype=np.int16)
>>> list(map(hex, A))
['0x1', '0x100', '0x2233']
>>> A.byteswap(inplace=True)
array([  256,     1, 13090], dtype=int16)
>>> list(map(hex, A))
['0x100', '0x1', '0x3322']
```

Arrays of byte-strings are not swapped

```
>>> A = np.array([b'ceg', b'fac'])
>>> A.byteswap()
array([b'ceg', b'fac'], dtype='|S3')
```

`A.newbyteorder().byteswap()` produces an array with the same values but different representation in memory

```
>>> A = np.array([1, 2, 3])
>>> A.view(np.uint8)
array([1, 0, 0, 0, 0, 0, 0, 0, 2, 0, 0, 0, 0, 0, 0, 0, 3, 0, 0, 0, 0, 0,
       0, 0], dtype=uint8)
>>> A.newbyteorder().byteswap(inplace=True)
array([1, 2, 3])
>>> A.view(np.uint8)
array([0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 2, 0, 0, 0, 0, 0, 0,
       0, 3], dtype=uint8)
```
numpy.matrix.choose
===================

method

matrix.choose(*choices*, *out=None*, *mode='raise'*)

Use an index array to construct a new array from a set of choices.

Refer to [`numpy.choose`](numpy.choose#numpy.choose "numpy.choose") for full documentation.

See also

[`numpy.choose`](numpy.choose#numpy.choose "numpy.choose")
equivalent function

numpy.matrix.clip
=================

method

matrix.clip(*min=None*, *max=None*, *out=None*, ***kwargs*)

Return an array whose values are limited to `[min, max]`. One of max or min must be given.

Refer to [`numpy.clip`](numpy.clip#numpy.clip "numpy.clip") for full documentation.

See also

[`numpy.clip`](numpy.clip#numpy.clip "numpy.clip")
equivalent function

numpy.matrix.compress
=====================

method

matrix.compress(*condition*, *axis=None*, *out=None*)

Return selected slices of this array along given axis.

Refer to [`numpy.compress`](numpy.compress#numpy.compress "numpy.compress") for full documentation.

See also

[`numpy.compress`](numpy.compress#numpy.compress "numpy.compress")
equivalent function

numpy.matrix.conj
=================

method

matrix.conj()

Complex-conjugate all elements.

Refer to [`numpy.conjugate`](numpy.conjugate#numpy.conjugate "numpy.conjugate") for full documentation.

See also

[`numpy.conjugate`](numpy.conjugate#numpy.conjugate "numpy.conjugate")
equivalent function
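The `conj` entry above, unlike most method pages here, carries no Examples section; the following is a minimal sketch (an addition, not part of the original docs) of the elementwise behavior and the preserved `matrix` type:

```python
import numpy as np

# conj() negates the imaginary part of every element and, like the
# other matrix methods, returns a matrix rather than a plain ndarray.
m = np.matrix([[1 + 2j, 3 - 1j]])
c = m.conj()
assert (c == np.matrix([[1 - 2j, 3 + 1j]])).all()
assert type(c) is np.matrix
```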
numpy.matrix.conjugate
======================

method

matrix.conjugate()

Return the complex conjugate, element-wise.

Refer to [`numpy.conjugate`](numpy.conjugate#numpy.conjugate "numpy.conjugate") for full documentation.

See also

[`numpy.conjugate`](numpy.conjugate#numpy.conjugate "numpy.conjugate")
equivalent function

numpy.matrix.copy
=================

method

matrix.copy(*order='C'*)

Return a copy of the array.

Parameters

**order**{‘C’, ‘F’, ‘A’, ‘K’}, optional
Controls the memory layout of the copy. ‘C’ means C-order, ‘F’ means F-order, ‘A’ means ‘F’ if `a` is Fortran contiguous, ‘C’ otherwise. ‘K’ means match the layout of `a` as closely as possible. (Note that this function and [`numpy.copy`](numpy.copy#numpy.copy "numpy.copy") are very similar but have different default values for their order= arguments, and this function always passes sub-classes through.)

See also

[`numpy.copy`](numpy.copy#numpy.copy "numpy.copy")
Similar function with different default behavior

[`numpy.copyto`](numpy.copyto#numpy.copyto "numpy.copyto")

#### Notes

This function is the preferred method for creating an array copy. The function [`numpy.copy`](numpy.copy#numpy.copy "numpy.copy") is similar, but it defaults to using order ‘K’, and will not pass sub-classes through by default.

#### Examples

```
>>> x = np.array([[1,2,3],[4,5,6]], order='F')
>>> y = x.copy()
>>> x.fill(0)
>>> x
array([[0, 0, 0],
       [0, 0, 0]])
>>> y
array([[1, 2, 3],
       [4, 5, 6]])
>>> y.flags['C_CONTIGUOUS']
True
```
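The Notes for `copy` above state that the method passes sub-classes through while `numpy.copy` does not by default; a short sketch illustrating that difference:

```python
import numpy as np

m = np.matrix([[1, 2], [3, 4]])

# The method keeps the matrix subclass...
assert type(m.copy()) is np.matrix
# ...while the module-level function drops it (its subok defaults to False).
assert type(np.copy(m)) is np.ndarray
```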
numpy.matrix.cumprod
====================

method

matrix.cumprod(*axis=None*, *dtype=None*, *out=None*)

Return the cumulative product of the elements along the given axis.

Refer to [`numpy.cumprod`](numpy.cumprod#numpy.cumprod "numpy.cumprod") for full documentation.

See also

[`numpy.cumprod`](numpy.cumprod#numpy.cumprod "numpy.cumprod")
equivalent function

numpy.matrix.cumsum
===================

method

matrix.cumsum(*axis=None*, *dtype=None*, *out=None*)

Return the cumulative sum of the elements along the given axis.

Refer to [`numpy.cumsum`](numpy.cumsum#numpy.cumsum "numpy.cumsum") for full documentation.

See also

[`numpy.cumsum`](numpy.cumsum#numpy.cumsum "numpy.cumsum")
equivalent function

numpy.matrix.diagonal
=====================

method

matrix.diagonal(*offset=0*, *axis1=0*, *axis2=1*)

Return specified diagonals. In NumPy 1.9 the returned array is a read-only view instead of a copy as in previous NumPy versions. In a future version the read-only restriction will be removed.

Refer to [`numpy.diagonal`](numpy.diagonal#numpy.diagonal "numpy.diagonal") for full documentation.

See also

[`numpy.diagonal`](numpy.diagonal#numpy.diagonal "numpy.diagonal")
equivalent function

numpy.matrix.dump
=================

method

matrix.dump(*file*)

Dump a pickle of the array to the specified file. The array can be read back with `pickle.load` or `numpy.load`.

Parameters

**file**str or Path
A string naming the dump file.
Changed in version 1.17.0: [`pathlib.Path`](https://docs.python.org/3/library/pathlib.html#pathlib.Path "(in Python v3.10)") objects are now accepted.

numpy.matrix.dumps
==================

method

matrix.dumps()

Returns the pickle of the array as a string. `pickle.loads` will convert the string back to an array.

Parameters

**None**

numpy.matrix.fill
=================

method

matrix.fill(*value*)

Fill the array with a scalar value.

Parameters

**value**scalar
All elements of `a` will be assigned this value.

#### Examples

```
>>> a = np.array([1, 2])
>>> a.fill(0)
>>> a
array([0, 0])
>>> a = np.empty(2)
>>> a.fill(1)
>>> a
array([1., 1.])
```

numpy.matrix.flatten
====================

method

matrix.flatten(*order='C'*)[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/matrixlib/defmatrix.py#L376-L411)

Return a flattened copy of the matrix. All `N` elements of the matrix are placed into a single row.

Parameters

**order**{‘C’, ‘F’, ‘A’, ‘K’}, optional
‘C’ means to flatten in row-major (C-style) order. ‘F’ means to flatten in column-major (Fortran-style) order. ‘A’ means to flatten in column-major order if `m` is Fortran *contiguous* in memory, row-major order otherwise. ‘K’ means to flatten `m` in the order the elements occur in memory. The default is ‘C’.

Returns

**y**matrix
A copy of the matrix, flattened to a `(1, N)` matrix where `N` is the number of elements in the original matrix.

See also

[`ravel`](numpy.ravel#numpy.ravel "numpy.ravel")
Return a flattened array.
[`flat`](numpy.matrix.flat#numpy.matrix.flat "numpy.matrix.flat")
A 1-D flat iterator over the matrix.

#### Examples

```
>>> m = np.matrix([[1,2], [3,4]])
>>> m.flatten()
matrix([[1, 2, 3, 4]])
>>> m.flatten('F')
matrix([[1, 3, 2, 4]])
```

numpy.matrix.getA
=================

method

matrix.getA()[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/matrixlib/defmatrix.py#L837-L865)

Return `self` as an [`ndarray`](numpy.ndarray#numpy.ndarray "numpy.ndarray") object. Equivalent to `np.asarray(self)`.

Parameters

**None**

Returns

**ret**ndarray
`self` as an [`ndarray`](numpy.ndarray#numpy.ndarray "numpy.ndarray")

#### Examples

```
>>> x = np.matrix(np.arange(12).reshape((3,4))); x
matrix([[ 0,  1,  2,  3],
        [ 4,  5,  6,  7],
        [ 8,  9, 10, 11]])
>>> x.getA()
array([[ 0,  1,  2,  3],
       [ 4,  5,  6,  7],
       [ 8,  9, 10, 11]])
```

numpy.matrix.getA1
==================

method

matrix.getA1()[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/matrixlib/defmatrix.py#L867-L894)

Return `self` as a flattened [`ndarray`](numpy.ndarray#numpy.ndarray "numpy.ndarray"). Equivalent to `np.asarray(x).ravel()`.

Parameters

**None**

Returns

**ret**ndarray
`self`, 1-D, as an [`ndarray`](numpy.ndarray#numpy.ndarray "numpy.ndarray")

#### Examples

```
>>> x = np.matrix(np.arange(12).reshape((3,4))); x
matrix([[ 0,  1,  2,  3],
        [ 4,  5,  6,  7],
        [ 8,  9, 10, 11]])
>>> x.getA1()
array([ 0,  1,  2, ...,  9, 10, 11])
```
numpy.matrix.getH
=================

method

matrix.getH()[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/matrixlib/defmatrix.py#L968-L1001)

Returns the (complex) conjugate transpose of `self`. Equivalent to `np.transpose(self)` if `self` is real-valued.

Parameters

**None**

Returns

**ret**matrix object
complex conjugate transpose of `self`

#### Examples

```
>>> x = np.matrix(np.arange(12).reshape((3,4)))
>>> z = x - 1j*x; z
matrix([[  0. +0.j,   1. -1.j,   2. -2.j,   3. -3.j],
        [  4. -4.j,   5. -5.j,   6. -6.j,   7. -7.j],
        [  8. -8.j,   9. -9.j,  10.-10.j,  11.-11.j]])
>>> z.getH()
matrix([[ 0. -0.j,  4. +4.j,  8. +8.j],
        [ 1. +1.j,  5. +5.j,  9. +9.j],
        [ 2. +2.j,  6. +6.j, 10.+10.j],
        [ 3. +3.j,  7. +7.j, 11.+11.j]])
```

numpy.matrix.getI
=================

method

matrix.getI()[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/matrixlib/defmatrix.py#L792-L835)

Returns the (multiplicative) inverse of invertible `self`.

Parameters

**None**

Returns

**ret**matrix object
If `self` is non-singular, `ret` is such that `ret * self` == `self * ret` == `np.matrix(np.eye(self[0,:].size))` all return `True`.

Raises

numpy.linalg.LinAlgError: Singular matrix
If `self` is singular.

See also

[`linalg.inv`](numpy.linalg.inv#numpy.linalg.inv "numpy.linalg.inv")

#### Examples

```
>>> m = np.matrix('[1, 2; 3, 4]'); m
matrix([[1, 2],
        [3, 4]])
>>> m.getI()
matrix([[-2. ,  1. ],
        [ 1.5, -0.5]])
>>> m.getI() * m
matrix([[ 1.,  0.],  # may vary
        [ 0.,  1.]])
```

numpy.matrix.getT
=================

method

matrix.getT()[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/matrixlib/defmatrix.py#L935-L966)

Returns the transpose of the matrix.
Does *not* conjugate! For the complex conjugate transpose, use `.H`.

Parameters

**None**

Returns

**ret**matrix object
The (non-conjugated) transpose of the matrix.

See also

[`transpose`](numpy.transpose#numpy.transpose "numpy.transpose"), [`getH`](numpy.matrix.geth#numpy.matrix.getH "numpy.matrix.getH")

#### Examples

```
>>> m = np.matrix('[1, 2; 3, 4]')
>>> m
matrix([[1, 2],
        [3, 4]])
>>> m.getT()
matrix([[1, 3],
        [2, 4]])
```

numpy.matrix.getfield
=====================

method

matrix.getfield(*dtype*, *offset=0*)

Returns a field of the given array as a certain type. A field is a view of the array data with a given data-type. The values in the view are determined by the given type and the offset into the current array in bytes. The offset needs to be such that the view dtype fits in the array dtype; for example an array of dtype complex128 has 16-byte elements. If taking a view with a 32-bit integer (4 bytes), the offset needs to be between 0 and 12 bytes.

Parameters

**dtype**str or dtype
The data type of the view. The dtype size of the view can not be larger than that of the array itself.

**offset**int
Number of bytes to skip before beginning the element view.

#### Examples

```
>>> x = np.diag([1.+1.j]*2)
>>> x[1, 1] = 2 + 4.j
>>> x
array([[1.+1.j, 0.+0.j],
       [0.+0.j, 2.+4.j]])
>>> x.getfield(np.float64)
array([[1., 0.],
       [0., 2.]])
```

By choosing an offset of 8 bytes we can select the complex part of the array for our view:

```
>>> x.getfield(np.float64, offset=8)
array([[1., 0.],
       [0., 4.]])
```

numpy.matrix.item
=================

method

matrix.item(**args*)

Copy an element of an array to a standard Python scalar and return it.
Parameters

***args**Arguments (variable number and type)

* none: in this case, the method only works for arrays with one element (`a.size == 1`), whose element is copied into a standard Python scalar object and returned.
* int_type: this argument is interpreted as a flat index into the array, specifying which element to copy and return.
* tuple of int_types: functions as does a single int_type argument, except that the argument is interpreted as an nd-index into the array.

Returns

**z**Standard Python scalar object
A copy of the specified element of the array as a suitable Python scalar

#### Notes

When the data type of `a` is longdouble or clongdouble, item() returns a scalar array object because there is no available Python scalar that would not lose information. Void arrays return a buffer object for item(), unless fields are defined, in which case a tuple is returned.

[`item`](#numpy.matrix.item "numpy.matrix.item") is very similar to a[args], except, instead of an array scalar, a standard Python scalar is returned. This can be useful for speeding up access to elements of the array and doing arithmetic on elements of the array using Python’s optimized math.

#### Examples

```
>>> np.random.seed(123)
>>> x = np.random.randint(9, size=(3, 3))
>>> x
array([[2, 2, 6],
       [1, 3, 6],
       [1, 0, 1]])
>>> x.item(3)
1
>>> x.item(7)
0
>>> x.item((0, 1))
2
>>> x.item((2, 2))
1
```

numpy.matrix.itemset
====================

method

matrix.itemset(**args*)

Insert scalar into an array (scalar is cast to array’s dtype, if possible). There must be at least 1 argument, and define the last argument as *item*. Then, `a.itemset(*args)` is equivalent to but faster than `a[args] = item`. The item should be a scalar value and `args` must select a single item in the array `a`.
Parameters

***args**Arguments
If one argument: a scalar, only used in case `a` is of size 1. If two arguments: the last argument is the value to be set and must be a scalar, the first argument specifies a single array element location. It is either an int or a tuple.

#### Notes

Compared to indexing syntax, [`itemset`](#numpy.matrix.itemset "numpy.matrix.itemset") provides some speed increase for placing a scalar into a particular location in an [`ndarray`](numpy.ndarray#numpy.ndarray "numpy.ndarray"), if you must do this. However, generally this is discouraged: among other problems, it complicates the appearance of the code. Also, when using [`itemset`](#numpy.matrix.itemset "numpy.matrix.itemset") (and [`item`](numpy.matrix.item#numpy.matrix.item "numpy.matrix.item")) inside a loop, be sure to assign the methods to a local variable to avoid the attribute look-up at each loop iteration.

#### Examples

```
>>> np.random.seed(123)
>>> x = np.random.randint(9, size=(3, 3))
>>> x
array([[2, 2, 6],
       [1, 3, 6],
       [1, 0, 1]])
>>> x.itemset(4, 0)
>>> x.itemset((2, 2), 9)
>>> x
array([[2, 2, 6],
       [1, 0, 6],
       [1, 0, 9]])
```

numpy.matrix.max
================

method

matrix.max(*axis=None*, *out=None*)[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/matrixlib/defmatrix.py#L611-L644)

Return the maximum value along an axis.

Parameters

**See `amax` for complete descriptions**

See also

[`amax`](numpy.amax#numpy.amax "numpy.amax"), [`ndarray.max`](numpy.ndarray.max#numpy.ndarray.max "numpy.ndarray.max")

#### Notes

This is the same as [`ndarray.max`](numpy.ndarray.max#numpy.ndarray.max "numpy.ndarray.max"), but returns a [`matrix`](numpy.matrix#numpy.matrix "numpy.matrix") object where [`ndarray.max`](numpy.ndarray.max#numpy.ndarray.max "numpy.ndarray.max") would return an ndarray.
#### Examples

```
>>> x = np.matrix(np.arange(12).reshape((3,4))); x
matrix([[ 0,  1,  2,  3],
        [ 4,  5,  6,  7],
        [ 8,  9, 10, 11]])
>>> x.max()
11
>>> x.max(0)
matrix([[ 8,  9, 10, 11]])
>>> x.max(1)
matrix([[ 3],
        [ 7],
        [11]])
```

numpy.matrix.mean
=================

method

matrix.mean(*axis=None*, *dtype=None*, *out=None*)[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/matrixlib/defmatrix.py#L413-L445)

Returns the average of the matrix elements along the given axis.

Refer to [`numpy.mean`](numpy.mean#numpy.mean "numpy.mean") for full documentation.

See also

[`numpy.mean`](numpy.mean#numpy.mean "numpy.mean")

#### Notes

Same as [`ndarray.mean`](numpy.ndarray.mean#numpy.ndarray.mean "numpy.ndarray.mean") except that, where that returns an [`ndarray`](numpy.ndarray#numpy.ndarray "numpy.ndarray"), this returns a [`matrix`](numpy.matrix#numpy.matrix "numpy.matrix") object.

#### Examples

```
>>> x = np.matrix(np.arange(12).reshape((3, 4)))
>>> x
matrix([[ 0,  1,  2,  3],
        [ 4,  5,  6,  7],
        [ 8,  9, 10, 11]])
>>> x.mean()
5.5
>>> x.mean(0)
matrix([[4., 5., 6., 7.]])
>>> x.mean(1)
matrix([[ 1.5],
        [ 5.5],
        [ 9.5]])
```

numpy.matrix.min
================

method

matrix.min(*axis=None*, *out=None*)[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/matrixlib/defmatrix.py#L685-L718)

Return the minimum value along an axis.
Parameters

**See `amin` for complete descriptions.**

See also

[`amin`](numpy.amin#numpy.amin "numpy.amin"), [`ndarray.min`](numpy.ndarray.min#numpy.ndarray.min "numpy.ndarray.min")

#### Notes

This is the same as [`ndarray.min`](numpy.ndarray.min#numpy.ndarray.min "numpy.ndarray.min"), but returns a [`matrix`](numpy.matrix#numpy.matrix "numpy.matrix") object where [`ndarray.min`](numpy.ndarray.min#numpy.ndarray.min "numpy.ndarray.min") would return an ndarray.

#### Examples

```
>>> x = -np.matrix(np.arange(12).reshape((3,4))); x
matrix([[  0,  -1,  -2,  -3],
        [ -4,  -5,  -6,  -7],
        [ -8,  -9, -10, -11]])
>>> x.min()
-11
>>> x.min(0)
matrix([[ -8,  -9, -10, -11]])
>>> x.min(1)
matrix([[ -3],
        [ -7],
        [-11]])
```

numpy.matrix.newbyteorder
=========================

method

matrix.newbyteorder(*new_order='S'*, */*)

Return the array with the same data viewed with a different byte order. Equivalent to:

```
arr.view(arr.dtype.newbyteorder(new_order))
```

Changes are also made in all fields and sub-arrays of the array data type.

Parameters

**new_order**string, optional
Byte order to force; a value from the byte order specifications below. `new_order` codes can be any of:

* ‘S’ - swap dtype from current to opposite endian
* {‘<’, ‘little’} - little endian
* {‘>’, ‘big’} - big endian
* {‘=’, ‘native’} - native order, equivalent to [`sys.byteorder`](https://docs.python.org/3/library/sys.html#sys.byteorder "(in Python v3.10)")
* {‘|’, ‘I’} - ignore (no change to byte order)

The default value (‘S’) results in swapping the current byte order.

Returns

**new_arr**array
New array object with the dtype reflecting given change to the byte order.
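The `newbyteorder` page above has no Examples section; here is a small sketch (an addition, not from the original docs) using the documented equivalence `arr.view(arr.dtype.newbyteorder(new_order))`. The dtype changes endianness while the underlying bytes stay put, so the stored values reinterpret:

```python
import numpy as np

# Explicit little-endian int16 input, so the result is machine-independent.
a = np.array([1, 256], dtype='<i2')
# View the same bytes under the opposite-endian dtype ('S' means swap).
swapped = a.view(a.dtype.newbyteorder('S'))
# 1 is stored as bytes 01 00; read big-endian, those bytes mean
# 0x0100 == 256 (and 256, stored as 00 01, reads back as 1).
assert swapped[0] == 256 and swapped[1] == 1
```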
numpy.matrix.nonzero
====================

method

matrix.nonzero()

Return the indices of the elements that are non-zero.

Refer to [`numpy.nonzero`](numpy.nonzero#numpy.nonzero "numpy.nonzero") for full documentation.

See also

[`numpy.nonzero`](numpy.nonzero#numpy.nonzero "numpy.nonzero")
equivalent function

numpy.matrix.partition
======================

method

matrix.partition(*kth*, *axis=-1*, *kind='introselect'*, *order=None*)

Rearranges the elements in the array in such a way that the value of the element in kth position is in the position it would be in a sorted array. All elements smaller than the kth element are moved before this element and all equal or greater are moved behind it. The ordering of the elements in the two partitions is undefined.

New in version 1.8.0.

Parameters

**kth**int or sequence of ints
Element index to partition by. The kth element value will be in its final sorted position and all smaller elements will be moved before it and all equal or greater elements behind it. The order of all elements in the partitions is undefined. If provided with a sequence of kth it will partition all elements indexed by kth of them into their sorted position at once.

Deprecated since version 1.22.0: Passing booleans as index is deprecated.

**axis**int, optional
Axis along which to sort. Default is -1, which means sort along the last axis.

**kind**{‘introselect’}, optional
Selection algorithm. Default is ‘introselect’.

**order**str or list of str, optional
When `a` is an array with fields defined, this argument specifies which fields to compare first, second, etc.
A single field can be specified as a string, and not all fields need to be specified, but unspecified fields will still be used, in the order in which they come up in the dtype, to break ties.

See also

[`numpy.partition`](numpy.partition#numpy.partition "numpy.partition")
Return a partitioned copy of an array.

[`argpartition`](numpy.argpartition#numpy.argpartition "numpy.argpartition")
Indirect partition.

[`sort`](numpy.sort#numpy.sort "numpy.sort")
Full sort.

#### Notes

See `np.partition` for notes on the different algorithms.

#### Examples

```
>>> a = np.array([3, 4, 2, 1])
>>> a.partition(3)
>>> a
array([2, 1, 3, 4])
>>> a.partition((1, 3))
>>> a
array([1, 2, 3, 4])
```

numpy.matrix.prod
=================

method

matrix.prod(*axis=None*, *dtype=None*, *out=None*)[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/matrixlib/defmatrix.py#L515-L546)

Return the product of the array elements over the given axis.

Refer to [`prod`](numpy.prod#numpy.prod "numpy.prod") for full documentation.

See also

[`prod`](numpy.prod#numpy.prod "numpy.prod"), [`ndarray.prod`](numpy.ndarray.prod#numpy.ndarray.prod "numpy.ndarray.prod")

#### Notes

Same as [`ndarray.prod`](numpy.ndarray.prod#numpy.ndarray.prod "numpy.ndarray.prod"), except, where that returns an [`ndarray`](numpy.ndarray#numpy.ndarray "numpy.ndarray"), this returns a [`matrix`](numpy.matrix#numpy.matrix "numpy.matrix") object instead.

#### Examples

```
>>> x = np.matrix(np.arange(12).reshape((3,4))); x
matrix([[ 0,  1,  2,  3],
        [ 4,  5,  6,  7],
        [ 8,  9, 10, 11]])
>>> x.prod()
0
>>> x.prod(0)
matrix([[  0,  45, 120, 231]])
>>> x.prod(1)
matrix([[   0],
        [ 840],
        [7920]])
```
<https://numpy.org/doc/1.23/reference/generated/numpy.matrix.prod.htmlnumpy.matrix.ptp ================ method matrix.ptp(*axis=None*, *out=None*)[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/matrixlib/defmatrix.py#L759-L790) Peak-to-peak (maximum - minimum) value along the given axis. Refer to [`numpy.ptp`](numpy.ptp#numpy.ptp "numpy.ptp") for full documentation. See also [`numpy.ptp`](numpy.ptp#numpy.ptp "numpy.ptp") #### Notes Same as [`ndarray.ptp`](numpy.ndarray.ptp#numpy.ndarray.ptp "numpy.ndarray.ptp"), except, where that would return an [`ndarray`](numpy.ndarray#numpy.ndarray "numpy.ndarray") object, this returns a [`matrix`](numpy.matrix#numpy.matrix "numpy.matrix") object. #### Examples ``` >>> x = np.matrix(np.arange(12).reshape((3,4))); x matrix([[ 0, 1, 2, 3], [ 4, 5, 6, 7], [ 8, 9, 10, 11]]) >>> x.ptp() 11 >>> x.ptp(0) matrix([[8, 8, 8, 8]]) >>> x.ptp(1) matrix([[3], [3], [3]]) ``` © 2005–2022 NumPy Developers Licensed under the 3-clause BSD License. <https://numpy.org/doc/1.23/reference/generated/numpy.matrix.ptp.htmlnumpy.matrix.put ================ method matrix.put(*indices*, *values*, *mode='raise'*) Set `a.flat[n] = values[n]` for all `n` in indices. Refer to [`numpy.put`](numpy.put#numpy.put "numpy.put") for full documentation. See also [`numpy.put`](numpy.put#numpy.put "numpy.put") equivalent function © 2005–2022 NumPy Developers Licensed under the 3-clause BSD License. <https://numpy.org/doc/1.23/reference/generated/numpy.matrix.put.htmlnumpy.matrix.ravel ================== method matrix.ravel(*order='C'*)[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/matrixlib/defmatrix.py#L897-L933) Return a flattened matrix. Refer to [`numpy.ravel`](numpy.ravel#numpy.ravel "numpy.ravel") for more documentation. Parameters **order**{‘C’, ‘F’, ‘A’, ‘K’}, optional The elements of `m` are read using this index order. 
‘C’ means to index the elements in C-like order, with the last axis index changing fastest, back to the first axis index changing slowest. ‘F’ means to index the elements in Fortran-like index order, with the first index changing fastest, and the last index changing slowest. Note that the ‘C’ and ‘F’ options take no account of the memory layout of the underlying array, and only refer to the order of axis indexing. ‘A’ means to read the elements in Fortran-like index order if `m` is Fortran *contiguous* in memory, C-like order otherwise. ‘K’ means to read the elements in the order they occur in memory, except for reversing the data when strides are negative. By default, ‘C’ index order is used. Returns **ret**matrix Return the matrix flattened to shape `(1, N)` where `N` is the number of elements in the original matrix. A copy is made only if necessary. See also [`matrix.flatten`](numpy.matrix.flatten#numpy.matrix.flatten "numpy.matrix.flatten") returns a similar output matrix but always a copy [`matrix.flat`](numpy.matrix.flat#numpy.matrix.flat "numpy.matrix.flat") a flat iterator on the array. [`numpy.ravel`](numpy.ravel#numpy.ravel "numpy.ravel") related function which returns an ndarray © 2005–2022 NumPy Developers Licensed under the 3-clause BSD License. <https://numpy.org/doc/1.23/reference/generated/numpy.matrix.ravel.htmlnumpy.matrix.repeat =================== method matrix.repeat(*repeats*, *axis=None*) Repeat elements of an array. Refer to [`numpy.repeat`](numpy.repeat#numpy.repeat "numpy.repeat") for full documentation. See also [`numpy.repeat`](numpy.repeat#numpy.repeat "numpy.repeat") equivalent function © 2005–2022 NumPy Developers Licensed under the 3-clause BSD License. <https://numpy.org/doc/1.23/reference/generated/numpy.matrix.repeat.htmlnumpy.matrix.reshape ==================== method matrix.reshape(*shape*, *order='C'*) Returns an array containing the same data with a new shape. 
Refer to [`numpy.reshape`](numpy.reshape#numpy.reshape "numpy.reshape") for full documentation. See also [`numpy.reshape`](numpy.reshape#numpy.reshape "numpy.reshape") equivalent function #### Notes Unlike the free function [`numpy.reshape`](numpy.reshape#numpy.reshape "numpy.reshape"), this method on [`ndarray`](numpy.ndarray#numpy.ndarray "numpy.ndarray") allows the elements of the shape parameter to be passed in as separate arguments. For example, `a.reshape(10, 11)` is equivalent to `a.reshape((10, 11))`. © 2005–2022 NumPy Developers Licensed under the 3-clause BSD License. <https://numpy.org/doc/1.23/reference/generated/numpy.matrix.reshape.htmlnumpy.matrix.resize =================== method matrix.resize(*new_shape*, *refcheck=True*) Change shape and size of array in-place. Parameters **new_shape**tuple of ints, or `n` ints Shape of resized array. **refcheck**bool, optional If False, reference count will not be checked. Default is True. Returns None Raises ValueError If `a` does not own its own data or references or views to it exist, and the data memory must be changed. PyPy only: will always raise if the data memory must be changed, since there is no reliable way to determine if references or views to it exist. SystemError If the `order` keyword argument is specified. This behaviour is a bug in NumPy. See also [`resize`](numpy.resize#numpy.resize "numpy.resize") Return a new array with the specified shape. #### Notes This reallocates space for the data area if necessary. Only contiguous arrays (data elements consecutive in memory) can be resized. The purpose of the reference count check is to make sure you do not use this array as a buffer for another Python object and then reallocate the memory. However, reference counts can increase in other ways so if you are sure that you have not shared the memory for this array with another Python object, then you may safely set `refcheck` to False. 
#### Examples Shrinking an array: array is flattened (in the order that the data are stored in memory), resized, and reshaped: ``` >>> a = np.array([[0, 1], [2, 3]], order='C') >>> a.resize((2, 1)) >>> a array([[0], [1]]) ``` ``` >>> a = np.array([[0, 1], [2, 3]], order='F') >>> a.resize((2, 1)) >>> a array([[0], [2]]) ``` Enlarging an array: as above, but missing entries are filled with zeros: ``` >>> b = np.array([[0, 1], [2, 3]]) >>> b.resize(2, 3) # new_shape parameter doesn't have to be a tuple >>> b array([[0, 1, 2], [3, 0, 0]]) ``` Referencing an array prevents resizing:
 ``` >>> c = a >>> a.resize((1, 1)) Traceback (most recent call last): ... ValueError: cannot resize an array that references or is referenced ... ``` Unless `refcheck` is False: ``` >>> a.resize((1, 1), refcheck=False) >>> a array([[0]]) >>> c array([[0]]) ``` © 2005–2022 NumPy Developers Licensed under the 3-clause BSD License. <https://numpy.org/doc/1.23/reference/generated/numpy.matrix.resize.htmlnumpy.matrix.round ================== method matrix.round(*decimals=0*, *out=None*) Return `a` with each element rounded to the given number of decimals. Refer to [`numpy.around`](numpy.around#numpy.around "numpy.around") for full documentation. See also [`numpy.around`](numpy.around#numpy.around "numpy.around") equivalent function © 2005–2022 NumPy Developers Licensed under the 3-clause BSD License. <https://numpy.org/doc/1.23/reference/generated/numpy.matrix.round.htmlnumpy.matrix.searchsorted ========================= method matrix.searchsorted(*v*, *side='left'*, *sorter=None*) Find indices where elements of v should be inserted in a to maintain order. For full documentation, see [`numpy.searchsorted`](numpy.searchsorted#numpy.searchsorted "numpy.searchsorted") See also [`numpy.searchsorted`](numpy.searchsorted#numpy.searchsorted "numpy.searchsorted") equivalent function © 2005–2022 NumPy Developers Licensed under the 3-clause BSD License. <https://numpy.org/doc/1.23/reference/generated/numpy.matrix.searchsorted.htmlnumpy.matrix.setfield ===================== method matrix.setfield(*val*, *dtype*, *offset=0*) Put a value into a specified place in a field defined by a data-type. Place `val` into `a`’s field defined by [`dtype`](numpy.dtype#numpy.dtype "numpy.dtype") and beginning `offset` bytes into the field. Parameters **val**object Value to be placed in field. **dtype**dtype object Data-type of the field in which to place `val`. **offset**int, optional The number of bytes into the field at which to place `val`. 
Returns None See also [`getfield`](numpy.matrix.getfield#numpy.matrix.getfield "numpy.matrix.getfield") #### Examples ``` >>> x = np.eye(3) >>> x.getfield(np.float64) array([[1., 0., 0.], [0., 1., 0.], [0., 0., 1.]]) >>> x.setfield(3, np.int32) >>> x.getfield(np.int32) array([[3, 3, 3], [3, 3, 3], [3, 3, 3]], dtype=int32) >>> x array([[1.0e+000, 1.5e-323, 1.5e-323], [1.5e-323, 1.0e+000, 1.5e-323], [1.5e-323, 1.5e-323, 1.0e+000]]) >>> x.setfield(np.eye(3), np.int32) >>> x array([[1., 0., 0.], [0., 1., 0.], [0., 0., 1.]]) ``` © 2005–2022 NumPy Developers Licensed under the 3-clause BSD License. <https://numpy.org/doc/1.23/reference/generated/numpy.matrix.setfield.htmlnumpy.matrix.setflags ===================== method matrix.setflags(*write=None*, *align=None*, *uic=None*) Set array flags WRITEABLE, ALIGNED, WRITEBACKIFCOPY, respectively. These Boolean-valued flags affect how numpy interprets the memory area used by `a` (see Notes below). The ALIGNED flag can only be set to True if the data is actually aligned according to the type. The WRITEBACKIFCOPY flag can never be set to True. The flag WRITEABLE can only be set to True if the array owns its own memory, or the ultimate owner of the memory exposes a writeable buffer interface, or is a string. (The exception for string is made so that unpickling can be done without copying memory.) Parameters **write**bool, optional Describes whether or not `a` can be written to. **align**bool, optional Describes whether or not `a` is aligned properly for its type. **uic**bool, optional Describes whether or not `a` is a copy of another “base” array. #### Notes Array flags provide information about how the memory area used for the array is to be interpreted. There are 7 Boolean flags in use, only three of which can be changed by the user: WRITEBACKIFCOPY, WRITEABLE, and ALIGNED.
WRITEABLE (W) the data area can be written to; ALIGNED (A) the data and strides are aligned appropriately for the hardware (as determined by the compiler); WRITEBACKIFCOPY (X) this array is a copy of some other array (referenced by .base). When the C-API function PyArray_ResolveWritebackIfCopy is called, the base array will be updated with the contents of this array. All flags can be accessed using the single (upper case) letter as well as the full name. #### Examples ``` >>> y = np.array([[3, 1, 7], ... [2, 0, 0], ... [8, 5, 9]]) >>> y array([[3, 1, 7], [2, 0, 0], [8, 5, 9]]) >>> y.flags C_CONTIGUOUS : True F_CONTIGUOUS : False OWNDATA : True WRITEABLE : True ALIGNED : True WRITEBACKIFCOPY : False >>> y.setflags(write=0, align=0) >>> y.flags C_CONTIGUOUS : True F_CONTIGUOUS : False OWNDATA : True WRITEABLE : False ALIGNED : False WRITEBACKIFCOPY : False >>> y.setflags(uic=1) Traceback (most recent call last): File "<stdin>", line 1, in <module> ValueError: cannot set WRITEBACKIFCOPY flag to True ``` © 2005–2022 NumPy Developers Licensed under the 3-clause BSD License. <https://numpy.org/doc/1.23/reference/generated/numpy.matrix.setflags.htmlnumpy.matrix.sort ================= method matrix.sort(*axis=- 1*, *kind=None*, *order=None*) Sort an array in-place. Refer to [`numpy.sort`](numpy.sort#numpy.sort "numpy.sort") for full documentation. Parameters **axis**int, optional Axis along which to sort. Default is -1, which means sort along the last axis. **kind**{‘quicksort’, ‘mergesort’, ‘heapsort’, ‘stable’}, optional Sorting algorithm. The default is ‘quicksort’. Note that both ‘stable’ and ‘mergesort’ use timsort under the covers and, in general, the actual implementation will vary with datatype. The ‘mergesort’ option is retained for backwards compatibility. Changed in version 1.15.0: The ‘stable’ option was added. **order**str or list of str, optional When `a` is an array with fields defined, this argument specifies which fields to compare first, second, etc. 
A single field can be specified as a string, and not all fields need be specified, but unspecified fields will still be used, in the order in which they come up in the dtype, to break ties. See also [`numpy.sort`](numpy.sort#numpy.sort "numpy.sort") Return a sorted copy of an array. [`numpy.argsort`](numpy.argsort#numpy.argsort "numpy.argsort") Indirect sort. [`numpy.lexsort`](numpy.lexsort#numpy.lexsort "numpy.lexsort") Indirect stable sort on multiple keys. [`numpy.searchsorted`](numpy.searchsorted#numpy.searchsorted "numpy.searchsorted") Find elements in sorted array. [`numpy.partition`](numpy.partition#numpy.partition "numpy.partition") Partial sort. #### Notes See [`numpy.sort`](numpy.sort#numpy.sort "numpy.sort") for notes on the different sorting algorithms. #### Examples ``` >>> a = np.array([[1,4], [3,1]]) >>> a.sort(axis=1) >>> a array([[1, 4], [1, 3]]) >>> a.sort(axis=0) >>> a array([[1, 3], [1, 4]]) ``` Use the `order` keyword to specify a field to use when sorting a structured array: ``` >>> a = np.array([('a', 2), ('c', 1)], dtype=[('x', 'S1'), ('y', int)]) >>> a.sort(order='y') >>> a array([(b'c', 1), (b'a', 2)], dtype=[('x', 'S1'), ('y', '<i8')]) ``` © 2005–2022 NumPy Developers Licensed under the 3-clause BSD License. <https://numpy.org/doc/1.23/reference/generated/numpy.matrix.sort.htmlnumpy.matrix.squeeze ==================== method matrix.squeeze(*axis=None*)[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/matrixlib/defmatrix.py#L323-L372) Return a possibly reshaped matrix. Refer to [`numpy.squeeze`](numpy.squeeze#numpy.squeeze "numpy.squeeze") for more documentation. Parameters **axis**None or int or tuple of ints, optional Selects a subset of the axes of length one in the shape. If an axis is selected with shape entry greater than one, an error is raised. Returns **squeezed**matrix The matrix, but as a (1, N) matrix if it had shape (N, 1). 
See also [`numpy.squeeze`](numpy.squeeze#numpy.squeeze "numpy.squeeze") related function #### Notes If `m` has a single column then that column is returned as the single row of a matrix. Otherwise `m` is returned. The returned matrix is always either `m` itself or a view into `m`. Supplying an axis keyword argument will not affect the returned matrix but it may cause an error to be raised. #### Examples ``` >>> c = np.matrix([[1], [2]]) >>> c matrix([[1], [2]]) >>> c.squeeze() matrix([[1, 2]]) >>> r = c.T >>> r matrix([[1, 2]]) >>> r.squeeze() matrix([[1, 2]]) >>> m = np.matrix([[1, 2], [3, 4]]) >>> m.squeeze() matrix([[1, 2], [3, 4]]) ``` © 2005–2022 NumPy Developers Licensed under the 3-clause BSD License. <https://numpy.org/doc/1.23/reference/generated/numpy.matrix.squeeze.htmlnumpy.matrix.std ================ method matrix.std(*axis=None*, *dtype=None*, *out=None*, *ddof=0*)[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/matrixlib/defmatrix.py#L447-L479) Return the standard deviation of the array elements along the given axis. Refer to [`numpy.std`](numpy.std#numpy.std "numpy.std") for full documentation. See also [`numpy.std`](numpy.std#numpy.std "numpy.std") #### Notes This is the same as [`ndarray.std`](numpy.ndarray.std#numpy.ndarray.std "numpy.ndarray.std"), except that where an [`ndarray`](numpy.ndarray#numpy.ndarray "numpy.ndarray") would be returned, a [`matrix`](numpy.matrix#numpy.matrix "numpy.matrix") object is returned instead. #### Examples ``` >>> x = np.matrix(np.arange(12).reshape((3, 4))) >>> x matrix([[ 0, 1, 2, 3], [ 4, 5, 6, 7], [ 8, 9, 10, 11]]) >>> x.std() 3.4520525295346629 # may vary >>> x.std(0) matrix([[ 3.26598632, 3.26598632, 3.26598632, 3.26598632]]) # may vary >>> x.std(1) matrix([[ 1.11803399], [ 1.11803399], [ 1.11803399]]) ``` © 2005–2022 NumPy Developers Licensed under the 3-clause BSD License. 
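A supplementary sketch (not part of the original page) of the `ddof` argument, which the `std` signature above accepts but the examples do not exercise: `ddof` shifts the divisor from `N` to `N - ddof`, so `ddof=1` gives the sample standard deviation.

```python
import numpy as np

# Sketch: population vs. sample standard deviation via ddof.
x = np.matrix([[1.0, 2.0, 3.0, 4.0]])
pop = x.std()           # divisor N = 4      -> sqrt(1.25)
samp = x.std(ddof=1)    # divisor N - 1 = 3  -> sqrt(5/3)
```

With `axis=None` both calls reduce to a scalar; with an axis argument the result is a `matrix`, as in the examples above.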
<https://numpy.org/doc/1.23/reference/generated/numpy.matrix.std.htmlnumpy.matrix.sum ================ method matrix.sum(*axis=None*, *dtype=None*, *out=None*)[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/matrixlib/defmatrix.py#L287-L319) Returns the sum of the matrix elements, along the given axis. Refer to [`numpy.sum`](numpy.sum#numpy.sum "numpy.sum") for full documentation. See also [`numpy.sum`](numpy.sum#numpy.sum "numpy.sum") #### Notes This is the same as [`ndarray.sum`](numpy.ndarray.sum#numpy.ndarray.sum "numpy.ndarray.sum"), except that where an [`ndarray`](numpy.ndarray#numpy.ndarray "numpy.ndarray") would be returned, a [`matrix`](numpy.matrix#numpy.matrix "numpy.matrix") object is returned instead. #### Examples ``` >>> x = np.matrix([[1, 2], [4, 3]]) >>> x.sum() 10 >>> x.sum(axis=1) matrix([[3], [7]]) >>> x.sum(axis=1, dtype='float') matrix([[3.], [7.]]) >>> out = np.zeros((2, 1), dtype='float') >>> x.sum(axis=1, dtype='float', out=np.asmatrix(out)) matrix([[3.], [7.]]) ``` © 2005–2022 NumPy Developers Licensed under the 3-clause BSD License. <https://numpy.org/doc/1.23/reference/generated/numpy.matrix.sum.htmlnumpy.matrix.swapaxes ===================== method matrix.swapaxes(*axis1*, *axis2*) Return a view of the array with `axis1` and `axis2` interchanged. Refer to [`numpy.swapaxes`](numpy.swapaxes#numpy.swapaxes "numpy.swapaxes") for full documentation. See also [`numpy.swapaxes`](numpy.swapaxes#numpy.swapaxes "numpy.swapaxes") equivalent function © 2005–2022 NumPy Developers Licensed under the 3-clause BSD License. <https://numpy.org/doc/1.23/reference/generated/numpy.matrix.swapaxes.htmlnumpy.matrix.take ================= method matrix.take(*indices*, *axis=None*, *out=None*, *mode='raise'*) Return an array formed from the elements of `a` at the given indices. Refer to [`numpy.take`](numpy.take#numpy.take "numpy.take") for full documentation. 
See also [`numpy.take`](numpy.take#numpy.take "numpy.take") equivalent function © 2005–2022 NumPy Developers Licensed under the 3-clause BSD License. <https://numpy.org/doc/1.23/reference/generated/numpy.matrix.take.htmlnumpy.matrix.tobytes ==================== method matrix.tobytes(*order='C'*) Construct Python bytes containing the raw data bytes in the array. Constructs Python bytes showing a copy of the raw contents of data memory. The bytes object is produced in C-order by default. This behavior is controlled by the `order` parameter. New in version 1.9.0. Parameters **order**{‘C’, ‘F’, ‘A’}, optional Controls the memory layout of the bytes object. ‘C’ means C-order, ‘F’ means F-order, ‘A’ (short for *Any*) means ‘F’ if `a` is Fortran contiguous, ‘C’ otherwise. Default is ‘C’. Returns **s**bytes Python bytes exhibiting a copy of `a`’s raw data. #### Examples ``` >>> x = np.array([[0, 1], [2, 3]], dtype='<u2') >>> x.tobytes() b'\x00\x00\x01\x00\x02\x00\x03\x00' >>> x.tobytes('C') == x.tobytes() True >>> x.tobytes('F') b'\x00\x00\x02\x00\x01\x00\x03\x00' ``` © 2005–2022 NumPy Developers Licensed under the 3-clause BSD License. <https://numpy.org/doc/1.23/reference/generated/numpy.matrix.tobytes.htmlnumpy.matrix.tofile =================== method matrix.tofile(*fid*, *sep=''*, *format='%s'*) Write array to a file as text or binary (default). Data is always written in ‘C’ order, independent of the order of `a`. The data produced by this method can be recovered using the function fromfile(). Parameters **fid**file or str or Path An open file object, or a string containing a filename. Changed in version 1.17.0: [`pathlib.Path`](https://docs.python.org/3/library/pathlib.html#pathlib.Path "(in Python v3.10)") objects are now accepted. **sep**str Separator between array items for text output. If “” (empty), a binary file is written, equivalent to `file.write(a.tobytes())`. **format**str Format string for text file output. 
Each entry in the array is formatted to text by first converting it to the closest Python type, and then using “format” % item. #### Notes This is a convenience function for quick storage of array data. Information on endianness and precision is lost, so this method is not a good choice for files intended to archive data or transport data between machines with different endianness. Some of these problems can be overcome by outputting the data as text files, at the expense of speed and file size. When fid is a file object, array contents are directly written to the file, bypassing the file object’s `write` method. As a result, tofile cannot be used with files objects supporting compression (e.g., GzipFile) or file-like objects that do not support `fileno()` (e.g., BytesIO). © 2005–2022 NumPy Developers Licensed under the 3-clause BSD License. <https://numpy.org/doc/1.23/reference/generated/numpy.matrix.tofile.htmlnumpy.matrix.tolist =================== method matrix.tolist()[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/matrixlib/defmatrix.py#L264-L284) Return the matrix as a (possibly nested) list. See [`ndarray.tolist`](numpy.ndarray.tolist#numpy.ndarray.tolist "numpy.ndarray.tolist") for full documentation. See also [`ndarray.tolist`](numpy.ndarray.tolist#numpy.ndarray.tolist "numpy.ndarray.tolist") #### Examples ``` >>> x = np.matrix(np.arange(12).reshape((3,4))); x matrix([[ 0, 1, 2, 3], [ 4, 5, 6, 7], [ 8, 9, 10, 11]]) >>> x.tolist() [[0, 1, 2, 3], [4, 5, 6, 7], [8, 9, 10, 11]] ``` © 2005–2022 NumPy Developers Licensed under the 3-clause BSD License. <https://numpy.org/doc/1.23/reference/generated/numpy.matrix.tolist.htmlnumpy.matrix.tostring ===================== method matrix.tostring(*order='C'*) A compatibility alias for [`tobytes`](numpy.matrix.tobytes#numpy.matrix.tobytes "numpy.matrix.tobytes"), with exactly the same behavior. 
Despite its name, it returns `bytes` not [`str`](https://docs.python.org/3/library/stdtypes.html#str "(in Python v3.10)")s. Deprecated since version 1.19.0. © 2005–2022 NumPy Developers Licensed under the 3-clause BSD License. <https://numpy.org/doc/1.23/reference/generated/numpy.matrix.tostring.htmlnumpy.matrix.trace ================== method matrix.trace(*offset=0*, *axis1=0*, *axis2=1*, *dtype=None*, *out=None*) Return the sum along diagonals of the array. Refer to [`numpy.trace`](numpy.trace#numpy.trace "numpy.trace") for full documentation. See also [`numpy.trace`](numpy.trace#numpy.trace "numpy.trace") equivalent function © 2005–2022 NumPy Developers Licensed under the 3-clause BSD License. <https://numpy.org/doc/1.23/reference/generated/numpy.matrix.trace.htmlnumpy.matrix.transpose ====================== method matrix.transpose(**axes*) Returns a view of the array with axes transposed. For a 1-D array this has no effect, as a transposed vector is simply the same vector. To convert a 1-D array into a 2-D column vector, an additional dimension must be added. `np.atleast_2d(a).T` achieves this, as does `a[:, np.newaxis]`. For a 2-D array, this is a standard matrix transpose. For an n-D array, if axes are given, their order indicates how the axes are permuted (see Examples). If axes are not provided and `a.shape = (i[0], i[1], ... i[n-2], i[n-1])`, then `a.transpose().shape = (i[n-1], i[n-2], ... i[1], i[0])`. Parameters **axes**None, tuple of ints, or `n` ints * None or no argument: reverses the order of the axes. * tuple of ints: `i` in the `j`-th place in the tuple means `a`’s `i`-th axis becomes `a.transpose()`’s `j`-th axis. * `n` ints: same as an n-tuple of the same ints (this form is intended simply as a “convenience” alternative to the tuple form) Returns **out**ndarray View of `a`, with axes suitably permuted.
See also [`transpose`](numpy.transpose#numpy.transpose "numpy.transpose") Equivalent function [`ndarray.T`](numpy.ndarray.t#numpy.ndarray.T "numpy.ndarray.T") Array property returning the array transposed. [`ndarray.reshape`](numpy.ndarray.reshape#numpy.ndarray.reshape "numpy.ndarray.reshape") Give a new shape to an array without changing its data. #### Examples ``` >>> a = np.array([[1, 2], [3, 4]]) >>> a array([[1, 2], [3, 4]]) >>> a.transpose() array([[1, 3], [2, 4]]) >>> a.transpose((1, 0)) array([[1, 3], [2, 4]]) >>> a.transpose(1, 0) array([[1, 3], [2, 4]]) ``` © 2005–2022 NumPy Developers Licensed under the 3-clause BSD License. <https://numpy.org/doc/1.23/reference/generated/numpy.matrix.transpose.htmlnumpy.matrix.var ================ method matrix.var(*axis=None*, *dtype=None*, *out=None*, *ddof=0*)[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/matrixlib/defmatrix.py#L481-L513) Returns the variance of the matrix elements, along the given axis. Refer to [`numpy.var`](numpy.var#numpy.var "numpy.var") for full documentation. See also [`numpy.var`](numpy.var#numpy.var "numpy.var") #### Notes This is the same as [`ndarray.var`](numpy.ndarray.var#numpy.ndarray.var "numpy.ndarray.var"), except that where an [`ndarray`](numpy.ndarray#numpy.ndarray "numpy.ndarray") would be returned, a [`matrix`](numpy.matrix#numpy.matrix "numpy.matrix") object is returned instead. #### Examples ``` >>> x = np.matrix(np.arange(12).reshape((3, 4))) >>> x matrix([[ 0, 1, 2, 3], [ 4, 5, 6, 7], [ 8, 9, 10, 11]]) >>> x.var() 11.916666666666666 >>> x.var(0) matrix([[ 10.66666667, 10.66666667, 10.66666667, 10.66666667]]) # may vary >>> x.var(1) matrix([[1.25], [1.25], [1.25]]) ``` © 2005–2022 NumPy Developers Licensed under the 3-clause BSD License. <https://numpy.org/doc/1.23/reference/generated/numpy.matrix.var.htmlnumpy.matrix.view ================= method matrix.view(*[dtype][, type]*) New view of array with the same data. 
Note Passing None for `dtype` is different from omitting the parameter, since the former invokes `dtype(None)` which is an alias for `dtype('float_')`. Parameters **dtype**data-type or ndarray sub-class, optional Data-type descriptor of the returned view, e.g., float32 or int16. Omitting it results in the view having the same data-type as `a`. This argument can also be specified as an ndarray sub-class, which then specifies the type of the returned object (this is equivalent to setting the `type` parameter). **type**Python type, optional Type of the returned view, e.g., ndarray or matrix. Again, omission of the parameter results in type preservation. #### Notes `a.view()` is used two different ways: `a.view(some_dtype)` or `a.view(dtype=some_dtype)` constructs a view of the array’s memory with a different data-type. This can cause a reinterpretation of the bytes of memory. `a.view(ndarray_subclass)` or `a.view(type=ndarray_subclass)` just returns an instance of `ndarray_subclass` that looks at the same array (same shape, dtype, etc.) This does not cause a reinterpretation of the memory. For `a.view(some_dtype)`, if `some_dtype` has a different number of bytes per entry than the previous dtype (for example, converting a regular array to a structured array), then the last axis of `a` must be contiguous. This axis will be resized in the result. Changed in version 1.23.0: Only the last axis needs to be contiguous. Previously, the entire array had to be C-contiguous. 
#### Examples ``` >>> x = np.array([(1, 2)], dtype=[('a', np.int8), ('b', np.int8)]) ``` Viewing array data using a different type and dtype: ``` >>> y = x.view(dtype=np.int16, type=np.matrix) >>> y matrix([[513]], dtype=int16) >>> print(type(y)) <class 'numpy.matrix'> ``` Creating a view on a structured array so it can be used in calculations: ``` >>> x = np.array([(1, 2),(3,4)], dtype=[('a', np.int8), ('b', np.int8)]) >>> xv = x.view(dtype=np.int8).reshape(-1,2) >>> xv array([[1, 2], [3, 4]], dtype=int8) >>> xv.mean(0) array([2., 3.]) ``` Making changes to the view changes the underlying array: ``` >>> xv[0,1] = 20 >>> x array([(1, 20), (3, 4)], dtype=[('a', 'i1'), ('b', 'i1')]) ``` Using a view to convert an array to a recarray: ``` >>> z = x.view(np.recarray) >>> z.a array([1, 3], dtype=int8) ``` Views share data: ``` >>> x[0] = (9, 10) >>> z[0] (9, 10) ``` Views that change the dtype size (bytes per entry) should normally be avoided on arrays defined by slices, transposes, fortran-ordering, etc.: ``` >>> x = np.array([[1, 2, 3], [4, 5, 6]], dtype=np.int16) >>> y = x[:, ::2] >>> y array([[1, 3], [4, 6]], dtype=int16) >>> y.view(dtype=[('width', np.int16), ('length', np.int16)]) Traceback (most recent call last): ... ValueError: To change to a dtype of a different size, the last axis must be contiguous >>> z = y.copy() >>> z.view(dtype=[('width', np.int16), ('length', np.int16)]) array([[(1, 3)], [(4, 6)]], dtype=[('width', '<i2'), ('length', '<i2')]) ``` However, views that change dtype are totally fine for arrays with a contiguous last axis, even if the rest of the axes are not C-contiguous: ``` >>> x = np.arange(2 * 3 * 4, dtype=np.int8).reshape(2, 3, 4) >>> x.transpose(1, 0, 2).view(np.int16) array([[[ 256, 770], [3340, 3854]], [[1284, 1798], [4368, 4882]], [[2312, 2826], [5396, 5910]]], dtype=int16) ``` © 2005–2022 NumPy Developers Licensed under the 3-clause BSD License.
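A supplementary sketch (not part of the original page) contrasting `view` with `astype`: a view reinterprets the existing bytes in place, while `astype` converts the values into a fresh buffer, so writes through a view are visible in the original array.

```python
import numpy as np

# Sketch: view shares memory, astype copies and converts.
x = np.array([1, 2, 3], dtype=np.int32)
v = x.view(np.uint32)       # same bytes, reinterpreted dtype
c = x.astype(np.float64)    # new buffer, converted values
v[0] = 99                   # writes through to x (shared memory)
```

After the assignment, `x[0]` is 99 while the `astype` copy still holds the original values.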
<https://numpy.org/doc/1.23/reference/generated/numpy.matrix.view.htmlnumpy.matrix.data ================= attribute matrix.data Python buffer object pointing to the start of the array’s data. © 2005–2022 NumPy Developers Licensed under the 3-clause BSD License. <https://numpy.org/doc/1.23/reference/generated/numpy.matrix.data.htmlnumpy.matrix.A1 =============== property *property*matrix.A1 Return `self` as a flattened [`ndarray`](numpy.ndarray#numpy.ndarray "numpy.ndarray"). Equivalent to `np.asarray(x).ravel()` Parameters **None** Returns **ret**ndarray `self`, 1-D, as an [`ndarray`](numpy.ndarray#numpy.ndarray "numpy.ndarray") #### Examples ``` >>> x = np.matrix(np.arange(12).reshape((3,4))); x matrix([[ 0, 1, 2, 3], [ 4, 5, 6, 7], [ 8, 9, 10, 11]]) >>> x.getA1() array([ 0, 1, 2, ..., 9, 10, 11]) ``` © 2005–2022 NumPy Developers Licensed under the 3-clause BSD License. <https://numpy.org/doc/1.23/reference/generated/numpy.matrix.A1.htmlnumpy.matrix.base ================= attribute matrix.base Base object if memory is from some other object. #### Examples The base of an array that owns its memory is None: ``` >>> x = np.array([1,2,3,4]) >>> x.base is None True ``` Slicing creates a view, whose memory is shared with x: ``` >>> y = x[2:] >>> y.base is x True ``` © 2005–2022 NumPy Developers Licensed under the 3-clause BSD License. <https://numpy.org/doc/1.23/reference/generated/numpy.matrix.base.htmlnumpy.matrix.ctypes =================== attribute matrix.ctypes An object to simplify the interaction of the array with the ctypes module. This attribute creates an object that makes it easier to use arrays when calling shared libraries with the ctypes module. The returned object has, among others, data, shape, and strides attributes (see Notes below) which themselves return ctypes objects that can be used as arguments to a shared library. Parameters **None** Returns **c**Python object Possessing attributes data, shape, strides, etc. 
See also [`numpy.ctypeslib`](../routines.ctypeslib#module-numpy.ctypeslib "numpy.ctypeslib") #### Notes Below are the public attributes of this object which were documented in “Guide to NumPy” (we have omitted undocumented public attributes, as well as documented private attributes): _ctypes.data A pointer to the memory area of the array as a Python integer. This memory area may contain data that is not aligned, or not in correct byte-order. The memory area may not even be writeable. The array flags and data-type of this array should be respected when passing this attribute to arbitrary C-code to avoid trouble that can include Python crashing. User Beware! The value of this attribute is exactly the same as `self._array_interface_['data'][0]`. Note that unlike `data_as`, a reference will not be kept to the array: code like `ctypes.c_void_p((a + b).ctypes.data)` will result in a pointer to a deallocated array, and should be spelt `(a + b).ctypes.data_as(ctypes.c_void_p)` _ctypes.shape (c_intp*self.ndim): A ctypes array of length self.ndim where the basetype is the C-integer corresponding to `dtype('p')` on this platform (see [`c_intp`](../routines.ctypeslib#numpy.ctypeslib.c_intp "numpy.ctypeslib.c_intp")). This base-type could be [`ctypes.c_int`](https://docs.python.org/3/library/ctypes.html#ctypes.c_int "(in Python v3.10)"), [`ctypes.c_long`](https://docs.python.org/3/library/ctypes.html#ctypes.c_long "(in Python v3.10)"), or [`ctypes.c_longlong`](https://docs.python.org/3/library/ctypes.html#ctypes.c_longlong "(in Python v3.10)") depending on the platform. The ctypes array contains the shape of the underlying array. _ctypes.strides (c_intp*self.ndim): A ctypes array of length self.ndim where the basetype is the same as for the shape attribute. This ctypes array contains the strides information from the underlying array. This strides information is important for showing how many bytes must be jumped to get to the next element in the array. 
_ctypes.data_as(*obj*) [[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/core/_internal.py#L267-L284)

Return the data pointer cast to a particular c-types object. For example, calling `self._as_parameter_` is equivalent to `self.data_as(ctypes.c_void_p)`. Perhaps you want to use the data as a pointer to a ctypes array of floating-point data: `self.data_as(ctypes.POINTER(ctypes.c_double))`.

The returned pointer will keep a reference to the array.

_ctypes.shape_as(*obj*) [[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/core/_internal.py#L286-L293)

Return the shape tuple as an array of some other c-types type. For example: `self.shape_as(ctypes.c_short)`.

_ctypes.strides_as(*obj*) [[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/core/_internal.py#L295-L302)

Return the strides tuple as an array of some other c-types type. For example: `self.strides_as(ctypes.c_longlong)`.

If the ctypes module is not available, then the ctypes attribute of array objects still returns something useful, but ctypes objects are not returned and errors may be raised instead. In particular, the object will still have the `as_parameter` attribute which will return an integer equal to the data attribute.

#### Examples

```
>>> import ctypes
>>> x = np.array([[0, 1], [2, 3]], dtype=np.int32)
>>> x
array([[0, 1],
       [2, 3]], dtype=int32)
>>> x.ctypes.data
31962608 # may vary
>>> x.ctypes.data_as(ctypes.POINTER(ctypes.c_uint32))
<__main__.LP_c_uint object at 0x7ff2fc1fc200> # may vary
>>> x.ctypes.data_as(ctypes.POINTER(ctypes.c_uint32)).contents
c_uint(0)
>>> x.ctypes.data_as(ctypes.POINTER(ctypes.c_uint64)).contents
c_ulong(4294967296)
>>> x.ctypes.shape
<numpy.core._internal.c_long_Array_2 object at 0x7ff2fc1fce60> # may vary
>>> x.ctypes.strides
<numpy.core._internal.c_long_Array_2 object at 0x7ff2fc1ff320> # may vary
```
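As a quick supplement to the examples above, the `shape_as`, `strides_as`, and `data_as` helpers can be exercised directly. This is a minimal sketch; the stride values shown assume a freshly allocated, C-contiguous int32 array, which is what `np.array` produces here:

```python
import ctypes
import numpy as np

x = np.array([[0, 1], [2, 3]], dtype=np.int32)

# shape_as/strides_as return ctypes arrays that can be indexed like sequences
shape = x.ctypes.shape_as(ctypes.c_long)
strides = x.ctypes.strides_as(ctypes.c_long)
print(list(shape))    # [2, 2]
print(list(strides))  # [8, 4]: 8 bytes to the next row, 4 to the next column

# data_as keeps a reference to x, so the pointer stays valid
ptr = x.ctypes.data_as(ctypes.POINTER(ctypes.c_int32))
print(ptr[0], ptr[3])  # first and last elements in memory order: 0 3
```

Unlike raw `x.ctypes.data`, the `data_as` pointer holds a reference to the array, so it is safe to pass to a shared library even if the original name goes out of scope.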
<https://numpy.org/doc/1.23/reference/generated/numpy.matrix.ctypes.html>

numpy.matrix.flags
==================

attribute `matrix.flags`

Information about the memory layout of the array.

#### Notes

The [`flags`](#numpy.matrix.flags "numpy.matrix.flags") object can be accessed dictionary-like (as in `a.flags['WRITEABLE']`), or by using lowercased attribute names (as in `a.flags.writeable`). Short flag names are only supported in dictionary access.

Only the WRITEBACKIFCOPY, WRITEABLE, and ALIGNED flags can be changed by the user, via direct assignment to the attribute or dictionary entry, or by calling [`ndarray.setflags`](numpy.ndarray.setflags#numpy.ndarray.setflags "numpy.ndarray.setflags"). The array flags cannot be set arbitrarily:

* WRITEBACKIFCOPY can only be set `False`.
* ALIGNED can only be set `True` if the data is truly aligned.
* WRITEABLE can only be set `True` if the array owns its own memory or the ultimate owner of the memory exposes a writeable buffer interface or is a string.

Arrays can be both C-style and Fortran-style contiguous simultaneously. This is clear for 1-dimensional arrays, but can also be true for higher dimensional arrays.

Even for contiguous arrays a stride for a given dimension `arr.strides[dim]` may be *arbitrary* if `arr.shape[dim] == 1` or the array has no elements. It does *not* generally hold that `self.strides[-1] == self.itemsize` for C-style contiguous arrays or that `self.strides[0] == self.itemsize` for Fortran-style contiguous arrays.

Attributes

**C_CONTIGUOUS (C)**
The data is in a single, C-style contiguous segment.

**F_CONTIGUOUS (F)**
The data is in a single, Fortran-style contiguous segment.

**OWNDATA (O)**
The array owns the memory it uses or borrows it from another object.

**WRITEABLE (W)**
The data area can be written to. Setting this to False locks the data, making it read-only. A view (slice, etc.)
inherits WRITEABLE from its base array at creation time, but a view of a writeable array may be subsequently locked while the base array remains writeable. (The opposite is not true, in that a view of a locked array may not be made writeable. However, currently, locking a base object does not lock any views that already reference it, so under that circumstance it is possible to alter the contents of a locked array via a previously created writeable view onto it.) Attempting to change a non-writeable array raises a RuntimeError exception.

**ALIGNED (A)**
The data and all elements are aligned appropriately for the hardware.

**WRITEBACKIFCOPY (X)**
This array is a copy of some other array. The C-API function PyArray_ResolveWritebackIfCopy must be called before deallocating, so that the base array is updated with the contents of this array.

**FNC**
F_CONTIGUOUS and not C_CONTIGUOUS.

**FORC**
F_CONTIGUOUS or C_CONTIGUOUS (one-segment test).

**BEHAVED (B)**
ALIGNED and WRITEABLE.

**CARRAY (CA)**
BEHAVED and C_CONTIGUOUS.

**FARRAY (FA)**
BEHAVED and F_CONTIGUOUS and not C_CONTIGUOUS.

<https://numpy.org/doc/1.23/reference/generated/numpy.matrix.flags.html>

numpy.matrix.flat
=================

attribute `matrix.flat`

A 1-D iterator over the array.

This is a [`numpy.flatiter`](numpy.flatiter#numpy.flatiter "numpy.flatiter") instance, which acts similarly to, but is not a subclass of, Python’s built-in iterator object.

See also

[`flatten`](numpy.matrix.flatten#numpy.matrix.flatten "numpy.matrix.flatten")
Return a copy of the array collapsed into one dimension.
[`flatiter`](numpy.flatiter#numpy.flatiter "numpy.flatiter")

#### Examples

```
>>> x = np.arange(1, 7).reshape(2, 3)
>>> x
array([[1, 2, 3],
       [4, 5, 6]])
>>> x.flat[3]
4
>>> x.T
array([[1, 4],
       [2, 5],
       [3, 6]])
>>> x.T.flat[3]
5
>>> type(x.flat)
<class 'numpy.flatiter'>
```

An assignment example:

```
>>> x.flat = 3; x
array([[3, 3, 3],
       [3, 3, 3]])
>>> x.flat[[1,4]] = 1; x
array([[3, 1, 3],
       [3, 1, 3]])
```

<https://numpy.org/doc/1.23/reference/generated/numpy.matrix.flat.html>

numpy.matrix.itemsize
=====================

attribute `matrix.itemsize`

Length of one array element in bytes.

#### Examples

```
>>> x = np.array([1,2,3], dtype=np.float64)
>>> x.itemsize
8
>>> x = np.array([1,2,3], dtype=np.complex128)
>>> x.itemsize
16
```

<https://numpy.org/doc/1.23/reference/generated/numpy.matrix.itemsize.html>

numpy.matrix.nbytes
===================

attribute `matrix.nbytes`

Total bytes consumed by the elements of the array.

#### Notes

Does not include memory consumed by non-element attributes of the array object.

#### Examples

```
>>> x = np.zeros((3,5,2), dtype=np.complex128)
>>> x.nbytes
480
>>> np.prod(x.shape) * x.itemsize
480
```

<https://numpy.org/doc/1.23/reference/generated/numpy.matrix.nbytes.html>

numpy.matrix.ndim
=================

attribute `matrix.ndim`

Number of array dimensions.

#### Examples

```
>>> x = np.array([1, 2, 3])
>>> x.ndim
1
>>> y = np.zeros((2, 3, 4))
>>> y.ndim
3
```

<https://numpy.org/doc/1.23/reference/generated/numpy.matrix.ndim.html>

numpy.matrix.size
=================

attribute `matrix.size`

Number of elements in the array.

Equal to `np.prod(a.shape)`, i.e., the product of the array’s dimensions.

#### Notes

`a.size` returns a standard arbitrary precision Python integer.
This may not be the case with other methods of obtaining the same value (like the suggested `np.prod(a.shape)`, which returns an instance of `np.int_`), and may be relevant if the value is used further in calculations that may overflow a fixed size integer type.

#### Examples

```
>>> x = np.zeros((3, 5, 2), dtype=np.complex128)
>>> x.size
30
>>> np.prod(x.shape)
30
```

<https://numpy.org/doc/1.23/reference/generated/numpy.matrix.size.html>

numpy.matrix.strides
====================

attribute `matrix.strides`

Tuple of bytes to step in each dimension when traversing an array.

The byte offset of element `(i[0], i[1], ..., i[n])` in an array `a` is:

```
offset = sum(np.array(i) * a.strides)
```

A more detailed explanation of strides can be found in the “ndarray.rst” file in the NumPy reference guide.

Warning: Setting `arr.strides` is discouraged and may be deprecated in the future. [`numpy.lib.stride_tricks.as_strided`](numpy.lib.stride_tricks.as_strided#numpy.lib.stride_tricks.as_strided "numpy.lib.stride_tricks.as_strided") should be preferred to create a new view of the same data in a safer way.

See also

[`numpy.lib.stride_tricks.as_strided`](numpy.lib.stride_tricks.as_strided#numpy.lib.stride_tricks.as_strided "numpy.lib.stride_tricks.as_strided")

#### Notes

Imagine an array of 32-bit integers (each 4 bytes):

```
x = np.array([[0, 1, 2, 3, 4],
              [5, 6, 7, 8, 9]], dtype=np.int32)
```

This array is stored in memory as 40 bytes, one after the other (known as a contiguous block of memory). The strides of an array tell us how many bytes we have to skip in memory to move to the next position along a certain axis. For example, we have to skip 4 bytes (1 value) to move to the next column, but 20 bytes (5 values) to get to the same position in the next row. As such, the strides for the array `x` will be `(20, 4)`.
#### Examples

```
>>> y = np.reshape(np.arange(2*3*4), (2,3,4))
>>> y
array([[[ 0,  1,  2,  3],
        [ 4,  5,  6,  7],
        [ 8,  9, 10, 11]],
       [[12, 13, 14, 15],
        [16, 17, 18, 19],
        [20, 21, 22, 23]]])
>>> y.strides
(48, 16, 4)
>>> y[1,1,1]
17
>>> offset = sum(y.strides * np.array((1,1,1)))
>>> offset / y.itemsize
17
```

```
>>> x = np.reshape(np.arange(5*6*7*8), (5,6,7,8)).transpose(2,3,1,0)
>>> x.strides
(32, 4, 224, 1344)
>>> i = np.array([3,5,2,2])
>>> offset = sum(i * x.strides)
>>> x[3,5,2,2]
813
>>> offset / x.itemsize
813
```

<https://numpy.org/doc/1.23/reference/generated/numpy.matrix.strides.html>

numpy.recarray.all
==================

method `recarray.all(axis=None, out=None, keepdims=False, *, where=True)`

Returns True if all elements evaluate to True.

Refer to [`numpy.all`](numpy.all#numpy.all "numpy.all") for full documentation.

See also

[`numpy.all`](numpy.all#numpy.all "numpy.all")
equivalent function

<https://numpy.org/doc/1.23/reference/generated/numpy.recarray.all.html>

numpy.recarray.any
==================

method `recarray.any(axis=None, out=None, keepdims=False, *, where=True)`

Returns True if any of the elements of `a` evaluate to True.

Refer to [`numpy.any`](numpy.any#numpy.any "numpy.any") for full documentation.

See also

[`numpy.any`](numpy.any#numpy.any "numpy.any")
equivalent function

<https://numpy.org/doc/1.23/reference/generated/numpy.recarray.any.html>

numpy.recarray.argmax
=====================

method `recarray.argmax(axis=None, out=None, *, keepdims=False)`

Return indices of the maximum values along the given axis.

Refer to [`numpy.argmax`](numpy.argmax#numpy.argmax "numpy.argmax") for full documentation.
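Since the `argmax` entry only defers to `numpy.argmax`, here is a small illustrative sketch; the record array, field names, and values are made up for demonstration:

```python
import numpy as np

# argmax on a record array field: find the row holding the largest value
r = np.rec.array([(1, 2.0), (3, 0.5), (2, 9.9)],
                 dtype=[('a', 'i4'), ('b', 'f8')])
print(r.a.argmax())      # 'a' values are [1, 3, 2], so the index is 1
print(r.b.argmax())      # 'b' values are [2.0, 0.5, 9.9], so the index is 2

# the axis argument works exactly as in numpy.argmax
m = np.array([[1, 5], [7, 3]])
print(m.argmax(axis=0))  # per-column winners: [1 0]
print(m.argmax())        # flattened index of the global maximum 7: 2
```

With no `axis`, the index refers to the flattened array; use `np.unravel_index` to map it back to an nd-index.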
See also

[`numpy.argmax`](numpy.argmax#numpy.argmax "numpy.argmax")
equivalent function

<https://numpy.org/doc/1.23/reference/generated/numpy.recarray.argmax.html>

numpy.recarray.argmin
=====================

method `recarray.argmin(axis=None, out=None, *, keepdims=False)`

Return indices of the minimum values along the given axis.

Refer to [`numpy.argmin`](numpy.argmin#numpy.argmin "numpy.argmin") for detailed documentation.

See also

[`numpy.argmin`](numpy.argmin#numpy.argmin "numpy.argmin")
equivalent function

<https://numpy.org/doc/1.23/reference/generated/numpy.recarray.argmin.html>

numpy.recarray.argpartition
===========================

method `recarray.argpartition(kth, axis=-1, kind='introselect', order=None)`

Returns the indices that would partition this array.

Refer to [`numpy.argpartition`](numpy.argpartition#numpy.argpartition "numpy.argpartition") for full documentation.

New in version 1.8.0.

See also

[`numpy.argpartition`](numpy.argpartition#numpy.argpartition "numpy.argpartition")
equivalent function

<https://numpy.org/doc/1.23/reference/generated/numpy.recarray.argpartition.html>

numpy.recarray.argsort
======================

method `recarray.argsort(axis=-1, kind=None, order=None)`

Returns the indices that would sort this array.

Refer to [`numpy.argsort`](numpy.argsort#numpy.argsort "numpy.argsort") for full documentation.

See also

[`numpy.argsort`](numpy.argsort#numpy.argsort "numpy.argsort")
equivalent function

<https://numpy.org/doc/1.23/reference/generated/numpy.recarray.argsort.html>

numpy.recarray.astype
=====================

method `recarray.astype(dtype, order='K', casting='unsafe', subok=True, copy=True)`

Copy of the array, cast to a specified type.
Parameters

**dtype** : str or dtype
Typecode or data-type to which the array is cast.

**order** : {‘C’, ‘F’, ‘A’, ‘K’}, optional
Controls the memory layout order of the result. ‘C’ means C order, ‘F’ means Fortran order, ‘A’ means ‘F’ order if all the arrays are Fortran contiguous, ‘C’ order otherwise, and ‘K’ means as close to the order the array elements appear in memory as possible. Default is ‘K’.

**casting** : {‘no’, ‘equiv’, ‘safe’, ‘same_kind’, ‘unsafe’}, optional
Controls what kind of data casting may occur. Defaults to ‘unsafe’ for backwards compatibility.

* ‘no’ means the data types should not be cast at all.
* ‘equiv’ means only byte-order changes are allowed.
* ‘safe’ means only casts which can preserve values are allowed.
* ‘same_kind’ means only safe casts or casts within a kind, like float64 to float32, are allowed.
* ‘unsafe’ means any data conversions may be done.

**subok** : bool, optional
If True, then sub-classes will be passed-through (default), otherwise the returned array will be forced to be a base-class array.

**copy** : bool, optional
By default, astype always returns a newly allocated array. If this is set to false, and the [`dtype`](numpy.dtype#numpy.dtype "numpy.dtype"), `order`, and `subok` requirements are satisfied, the input array is returned instead of a copy.

Returns

**arr_t** : ndarray
Unless [`copy`](numpy.copy#numpy.copy "numpy.copy") is False and the other conditions for returning the input array are satisfied (see description for the [`copy`](numpy.copy#numpy.copy "numpy.copy") input parameter), `arr_t` is a new array of the same shape as the input array, with dtype and order given by `dtype`, `order`.

Raises

ComplexWarning
When casting from complex to float or int. To avoid this, one should use `a.real.astype(t)`.

#### Notes

Changed in version 1.17.0: Casting between a simple data type and a structured one is possible only for “unsafe” casting.
Casting to multiple fields is allowed, but casting from multiple fields is not.

Changed in version 1.9.0: Casting from numeric to string types in ‘safe’ casting mode requires that the string dtype length is long enough to store the max integer/float value converted.

#### Examples

```
>>> x = np.array([1, 2, 2.5])
>>> x
array([1. ,  2. ,  2.5])
>>> x.astype(int)
array([1, 2, 2])
```

<https://numpy.org/doc/1.23/reference/generated/numpy.recarray.astype.html>

numpy.recarray.byteswap
=======================

method `recarray.byteswap(inplace=False)`

Swap the bytes of the array elements.

Toggle between low-endian and big-endian data representation by returning a byteswapped array, optionally swapped in-place. Arrays of byte-strings are not swapped. The real and imaginary parts of a complex number are swapped individually.

Parameters

**inplace** : bool, optional
If `True`, swap bytes in-place, default is `False`.

Returns

**out** : ndarray
The byteswapped array. If `inplace` is `True`, this is a view to self.

#### Examples

```
>>> A = np.array([1, 256, 8755], dtype=np.int16)
>>> list(map(hex, A))
['0x1', '0x100', '0x2233']
>>> A.byteswap(inplace=True)
array([  256,     1, 13090], dtype=int16)
>>> list(map(hex, A))
['0x100', '0x1', '0x3322']
```

Arrays of byte-strings are not swapped:

```
>>> A = np.array([b'ceg', b'fac'])
>>> A.byteswap()
array([b'ceg', b'fac'], dtype='|S3')
```

`A.newbyteorder().byteswap()` produces an array with the same values but different representation in memory:

```
>>> A = np.array([1, 2, 3])
>>> A.view(np.uint8)
array([1, 0, 0, 0, 0, 0, 0, 0, 2, 0, 0, 0, 0, 0, 0, 0, 3, 0, 0, 0, 0,
       0, 0, 0], dtype=uint8)
>>> A.newbyteorder().byteswap(inplace=True)
array([1, 2, 3])
>>> A.view(np.uint8)
array([0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 2, 0, 0, 0, 0, 0,
       0, 0, 3], dtype=uint8)
```
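The `casting` modes listed under `astype` above can be exercised directly. This is a minimal sketch with made-up sample values:

```python
import numpy as np

x = np.array([1.5, 2.7, -3.1])

# default 'unsafe' casting silently truncates toward zero
truncated = x.astype(np.int64)
print(truncated)          # [ 1  2 -3]

# 'safe' casting refuses value-losing conversions with a TypeError
try:
    x.astype(np.int64, casting='safe')
    refused = False
except TypeError:
    refused = True
print('safe cast refused:', refused)

# 'same_kind' still allows float64 -> float32 (a cast within the float kind)
y = x.astype(np.float32, casting='same_kind')
print(y.dtype)            # float32
```

`np.can_cast(from_dtype, to_dtype, casting=...)` can be used to check a conversion ahead of time instead of catching the exception.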
<https://numpy.org/doc/1.23/reference/generated/numpy.recarray.byteswap.html>

numpy.recarray.choose
=====================

method `recarray.choose(choices, out=None, mode='raise')`

Use an index array to construct a new array from a set of choices.

Refer to [`numpy.choose`](numpy.choose#numpy.choose "numpy.choose") for full documentation.

See also

[`numpy.choose`](numpy.choose#numpy.choose "numpy.choose")
equivalent function

<https://numpy.org/doc/1.23/reference/generated/numpy.recarray.choose.html>

numpy.recarray.clip
===================

method `recarray.clip(min=None, max=None, out=None, **kwargs)`

Return an array whose values are limited to `[min, max]`. One of max or min must be given.

Refer to [`numpy.clip`](numpy.clip#numpy.clip "numpy.clip") for full documentation.

See also

[`numpy.clip`](numpy.clip#numpy.clip "numpy.clip")
equivalent function

<https://numpy.org/doc/1.23/reference/generated/numpy.recarray.clip.html>

numpy.recarray.compress
=======================

method `recarray.compress(condition, axis=None, out=None)`

Return selected slices of this array along given axis.

Refer to [`numpy.compress`](numpy.compress#numpy.compress "numpy.compress") for full documentation.

See also

[`numpy.compress`](numpy.compress#numpy.compress "numpy.compress")
equivalent function

<https://numpy.org/doc/1.23/reference/generated/numpy.recarray.compress.html>

numpy.recarray.conj
===================

method `recarray.conj()`

Complex-conjugate all elements.

Refer to [`numpy.conjugate`](numpy.conjugate#numpy.conjugate "numpy.conjugate") for full documentation.

See also

[`numpy.conjugate`](numpy.conjugate#numpy.conjugate "numpy.conjugate")
equivalent function
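The `clip` and `compress` entries above only defer to their `numpy.*` equivalents, so a short sketch may help; the sample array is made up for demonstration:

```python
import numpy as np

a = np.arange(10)

# clip: saturate values into the interval [2, 7]
clipped = a.clip(2, 7)
print(clipped)        # [2 2 2 3 4 5 6 7 7 7]

# one bound may be omitted
lower = a.clip(min=4)
print(lower)          # [4 4 4 4 4 5 6 7 8 9]

# compress: select elements where a boolean condition holds
evens = a.compress(a % 2 == 0)
print(evens)          # [0 2 4 6 8]
```

For 1-D boolean selection, `a[a % 2 == 0]` is the more common spelling; `compress` is mainly useful with an explicit `axis` argument.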
<https://numpy.org/doc/1.23/reference/generated/numpy.recarray.conj.html>

numpy.recarray.conjugate
========================

method `recarray.conjugate()`

Return the complex conjugate, element-wise.

Refer to [`numpy.conjugate`](numpy.conjugate#numpy.conjugate "numpy.conjugate") for full documentation.

See also

[`numpy.conjugate`](numpy.conjugate#numpy.conjugate "numpy.conjugate")
equivalent function

<https://numpy.org/doc/1.23/reference/generated/numpy.recarray.conjugate.html>

numpy.recarray.copy
===================

method `recarray.copy(order='C')`

Return a copy of the array.

Parameters

**order** : {‘C’, ‘F’, ‘A’, ‘K’}, optional
Controls the memory layout of the copy. ‘C’ means C-order, ‘F’ means F-order, ‘A’ means ‘F’ if `a` is Fortran contiguous, ‘C’ otherwise. ‘K’ means match the layout of `a` as closely as possible. (Note that this function and [`numpy.copy`](numpy.copy#numpy.copy "numpy.copy") are very similar but have different default values for their order= arguments, and this function always passes sub-classes through.)

See also

[`numpy.copy`](numpy.copy#numpy.copy "numpy.copy")
Similar function with different default behavior

[`numpy.copyto`](numpy.copyto#numpy.copyto "numpy.copyto")

#### Notes

This function is the preferred method for creating an array copy. The function [`numpy.copy`](numpy.copy#numpy.copy "numpy.copy") is similar, but it defaults to using order ‘K’, and will not pass sub-classes through by default.

#### Examples

```
>>> x = np.array([[1,2,3],[4,5,6]], order='F')
>>> y = x.copy()
>>> x.fill(0)
>>> x
array([[0, 0, 0],
       [0, 0, 0]])
>>> y
array([[1, 2, 3],
       [4, 5, 6]])
>>> y.flags['C_CONTIGUOUS']
True
```
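To round out the `conjugate` and `copy` entries above, here is a minimal sketch of both; the values are illustrative:

```python
import numpy as np

# conjugate negates the imaginary part, element-wise
z = np.array([1 + 2j, 3 - 4j])
print(z.conjugate())   # [1.-2.j 3.+4.j]

# copy() detaches memory: mutating the copy leaves the original intact
a = np.array([1, 2, 3])
b = a.copy()
b[0] = 99
print(a[0], b[0])      # 1 99
print(b.base is None)  # True: the copy owns its memory
```

Contrast this with a slice like `a[:]`, which returns a view whose `base` is `a` and which shares the original memory.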
<https://numpy.org/doc/1.23/reference/generated/numpy.recarray.copy.html>

numpy.recarray.cumprod
======================

method `recarray.cumprod(axis=None, dtype=None, out=None)`

Return the cumulative product of the elements along the given axis.

Refer to [`numpy.cumprod`](numpy.cumprod#numpy.cumprod "numpy.cumprod") for full documentation.

See also

[`numpy.cumprod`](numpy.cumprod#numpy.cumprod "numpy.cumprod")
equivalent function

<https://numpy.org/doc/1.23/reference/generated/numpy.recarray.cumprod.html>

numpy.recarray.cumsum
=====================

method `recarray.cumsum(axis=None, dtype=None, out=None)`

Return the cumulative sum of the elements along the given axis.

Refer to [`numpy.cumsum`](numpy.cumsum#numpy.cumsum "numpy.cumsum") for full documentation.

See also

[`numpy.cumsum`](numpy.cumsum#numpy.cumsum "numpy.cumsum")
equivalent function

<https://numpy.org/doc/1.23/reference/generated/numpy.recarray.cumsum.html>

numpy.recarray.diagonal
=======================

method `recarray.diagonal(offset=0, axis1=0, axis2=1)`

Return specified diagonals. In NumPy 1.9 the returned array is a read-only view instead of a copy as in previous NumPy versions. In a future version the read-only restriction will be removed.

Refer to [`numpy.diagonal`](numpy.diagonal#numpy.diagonal "numpy.diagonal") for full documentation.

See also

[`numpy.diagonal`](numpy.diagonal#numpy.diagonal "numpy.diagonal")
equivalent function

<https://numpy.org/doc/1.23/reference/generated/numpy.recarray.diagonal.html>

numpy.recarray.dump
===================

method `recarray.dump(file)`

Dump a pickle of the array to the specified file. The array can be read back with pickle.load or numpy.load.

Parameters

**file** : str or Path
A string naming the dump file.
Changed in version 1.17.0: [`pathlib.Path`](https://docs.python.org/3/library/pathlib.html#pathlib.Path "(in Python v3.10)") objects are now accepted.

<https://numpy.org/doc/1.23/reference/generated/numpy.recarray.dump.html>

numpy.recarray.dumps
====================

method `recarray.dumps()`

Returns the pickle of the array as a string. pickle.loads will convert the string back to an array.

Parameters

**None**

<https://numpy.org/doc/1.23/reference/generated/numpy.recarray.dumps.html>

numpy.recarray.fill
===================

method `recarray.fill(value)`

Fill the array with a scalar value.

Parameters

**value** : scalar
All elements of `a` will be assigned this value.

#### Examples

```
>>> a = np.array([1, 2])
>>> a.fill(0)
>>> a
array([0, 0])
>>> a = np.empty(2)
>>> a.fill(1)
>>> a
array([1., 1.])
```

<https://numpy.org/doc/1.23/reference/generated/numpy.recarray.fill.html>

numpy.recarray.flatten
======================

method `recarray.flatten(order='C')`

Return a copy of the array collapsed into one dimension.

Parameters

**order** : {‘C’, ‘F’, ‘A’, ‘K’}, optional
‘C’ means to flatten in row-major (C-style) order. ‘F’ means to flatten in column-major (Fortran-style) order. ‘A’ means to flatten in column-major order if `a` is Fortran *contiguous* in memory, row-major order otherwise. ‘K’ means to flatten `a` in the order the elements occur in memory. The default is ‘C’.

Returns

**y** : ndarray
A copy of the input array, flattened to one dimension.

See also

[`ravel`](numpy.ravel#numpy.ravel "numpy.ravel")
Return a flattened array.

[`flat`](numpy.recarray.flat#numpy.recarray.flat "numpy.recarray.flat")
A 1-D flat iterator over the array.
#### Examples

```
>>> a = np.array([[1,2], [3,4]])
>>> a.flatten()
array([1, 2, 3, 4])
>>> a.flatten('F')
array([1, 3, 2, 4])
```

<https://numpy.org/doc/1.23/reference/generated/numpy.recarray.flatten.html>

numpy.recarray.getfield
=======================

method `recarray.getfield(dtype, offset=0)`

Returns a field of the given array as a certain type.

A field is a view of the array data with a given data-type. The values in the view are determined by the given type and the offset into the current array in bytes. The offset needs to be such that the view dtype fits in the array dtype; for example an array of dtype complex128 has 16-byte elements. If taking a view with a 32-bit integer (4 bytes), the offset needs to be between 0 and 12 bytes.

Parameters

**dtype** : str or dtype
The data type of the view. The dtype size of the view can not be larger than that of the array itself.

**offset** : int
Number of bytes to skip before beginning the element view.

#### Examples

```
>>> x = np.diag([1.+1.j]*2)
>>> x[1, 1] = 2 + 4.j
>>> x
array([[1.+1.j,  0.+0.j],
       [0.+0.j,  2.+4.j]])
>>> x.getfield(np.float64)
array([[1.,  0.],
       [0.,  2.]])
```

By choosing an offset of 8 bytes we can select the complex part of the array for our view:

```
>>> x.getfield(np.float64, offset=8)
array([[1.,  0.],
       [0.,  4.]])
```

<https://numpy.org/doc/1.23/reference/generated/numpy.recarray.getfield.html>

numpy.recarray.item
===================

method `recarray.item(*args)`

Copy an element of an array to a standard Python scalar and return it.

Parameters

***args** : Arguments (variable number and type)

* none: in this case, the method only works for arrays with one element (`a.size == 1`), which element is copied into a standard Python scalar object and returned.
* int_type: this argument is interpreted as a flat index into the array, specifying which element to copy and return.
* tuple of int_types: functions as does a single int_type argument, except that the argument is interpreted as an nd-index into the array.

Returns

**z** : Standard Python scalar object
A copy of the specified element of the array as a suitable Python scalar

#### Notes

When the data type of `a` is longdouble or clongdouble, item() returns a scalar array object because there is no available Python scalar that would not lose information. Void arrays return a buffer object for item(), unless fields are defined, in which case a tuple is returned.

[`item`](#numpy.recarray.item "numpy.recarray.item") is very similar to a[args], except, instead of an array scalar, a standard Python scalar is returned. This can be useful for speeding up access to elements of the array and doing arithmetic on elements of the array using Python’s optimized math.

#### Examples

```
>>> np.random.seed(123)
>>> x = np.random.randint(9, size=(3, 3))
>>> x
array([[2, 2, 6],
       [1, 3, 6],
       [1, 0, 1]])
>>> x.item(3)
1
>>> x.item(7)
0
>>> x.item((0, 1))
2
>>> x.item((2, 2))
1
```

<https://numpy.org/doc/1.23/reference/generated/numpy.recarray.item.html>

numpy.recarray.itemset
======================

method `recarray.itemset(*args)`

Insert scalar into an array (scalar is cast to array’s dtype, if possible).

There must be at least 1 argument, and define the last argument as *item*. Then, `a.itemset(*args)` is equivalent to but faster than `a[args] = item`. The item should be a scalar value and `args` must select a single item in the array `a`.

Parameters

***args** : Arguments
If one argument: a scalar, only used in case `a` is of size 1. If two arguments: the last argument is the value to be set and must be a scalar, the first argument specifies a single array element location. It is either an int or a tuple.
#### Notes

Compared to indexing syntax, [`itemset`](#numpy.recarray.itemset "numpy.recarray.itemset") provides some speed increase for placing a scalar into a particular location in an [`ndarray`](numpy.ndarray#numpy.ndarray "numpy.ndarray"), if you must do this. However, generally this is discouraged: among other problems, it complicates the appearance of the code. Also, when using [`itemset`](#numpy.recarray.itemset "numpy.recarray.itemset") (and [`item`](numpy.recarray.item#numpy.recarray.item "numpy.recarray.item")) inside a loop, be sure to assign the methods to a local variable to avoid the attribute look-up at each loop iteration.

#### Examples

```
>>> np.random.seed(123)
>>> x = np.random.randint(9, size=(3, 3))
>>> x
array([[2, 2, 6],
       [1, 3, 6],
       [1, 0, 1]])
>>> x.itemset(4, 0)
>>> x.itemset((2, 2), 9)
>>> x
array([[2, 2, 6],
       [1, 0, 6],
       [1, 0, 9]])
```

<https://numpy.org/doc/1.23/reference/generated/numpy.recarray.itemset.html>

numpy.recarray.max
==================

method `recarray.max(axis=None, out=None, keepdims=False, initial=<no value>, where=True)`

Return the maximum along a given axis.

Refer to [`numpy.amax`](numpy.amax#numpy.amax "numpy.amax") for full documentation.

See also

[`numpy.amax`](numpy.amax#numpy.amax "numpy.amax")
equivalent function

<https://numpy.org/doc/1.23/reference/generated/numpy.recarray.max.html>

numpy.recarray.mean
===================

method `recarray.mean(axis=None, dtype=None, out=None, keepdims=False, *, where=True)`

Returns the average of the array elements along given axis.

Refer to [`numpy.mean`](numpy.mean#numpy.mean "numpy.mean") for full documentation.

See also

[`numpy.mean`](numpy.mean#numpy.mean "numpy.mean")
equivalent function
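The `max` and `mean` signatures above include `keepdims`, `initial`, and `where` arguments; a minimal sketch of how they interact (sample values are made up, and `where=` on these reductions requires a reasonably recent NumPy):

```python
import numpy as np

m = np.array([[1.0, 2.0], [3.0, 4.0]])
print(m.max())          # 4.0
print(m.max(axis=1))    # per-row maxima: [2. 4.]
print(m.mean())         # 2.5

# keepdims keeps the reduced axis, so the result broadcasts back against m
col_means = m.mean(axis=0, keepdims=True)
print(col_means)        # [[2. 3.]], shape (1, 2)

# where restricts which elements participate; max then needs an initial value
masked_max = m.max(initial=-np.inf, where=m < 4)
print(masked_max)       # 3.0: the 4.0 entry is excluded
```

Without `keepdims`, `m.mean(axis=0)` would have shape `(2,)`, which still broadcasts here but fails for reductions over trailing axes.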
<https://numpy.org/doc/1.23/reference/generated/numpy.recarray.mean.html>

numpy.recarray.min
==================

method `recarray.min(axis=None, out=None, keepdims=False, initial=<no value>, where=True)`

Return the minimum along a given axis.

Refer to [`numpy.amin`](numpy.amin#numpy.amin "numpy.amin") for full documentation.

See also

[`numpy.amin`](numpy.amin#numpy.amin "numpy.amin")
equivalent function

<https://numpy.org/doc/1.23/reference/generated/numpy.recarray.min.html>

numpy.recarray.newbyteorder
===========================

method `recarray.newbyteorder(new_order='S', /)`

Return the array with the same data viewed with a different byte order.

Equivalent to:

```
arr.view(arr.dtype.newbyteorder(new_order))
```

Changes are also made in all fields and sub-arrays of the array data type.

Parameters

**new_order** : string, optional
Byte order to force; a value from the byte order specifications below. `new_order` codes can be any of:

* ‘S’ - swap dtype from current to opposite endian
* {‘<’, ‘little’} - little endian
* {‘>’, ‘big’} - big endian
* {‘=’, ‘native’} - native order, equivalent to [`sys.byteorder`](https://docs.python.org/3/library/sys.html#sys.byteorder "(in Python v3.10)")
* {‘|’, ‘I’} - ignore (no change to byte order)

The default value (‘S’) results in swapping the current byte order.

Returns

**new_arr** : array
New array object with the dtype reflecting given change to the byte order.

<https://numpy.org/doc/1.23/reference/generated/numpy.recarray.newbyteorder.html>

numpy.recarray.nonzero
======================

method `recarray.nonzero()`

Return the indices of the elements that are non-zero.

Refer to [`numpy.nonzero`](numpy.nonzero#numpy.nonzero "numpy.nonzero") for full documentation.
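Since the `nonzero` entry only defers to `numpy.nonzero`, a small sketch of the return shape may help; the sample array is made up:

```python
import numpy as np

a = np.array([[3, 0, 0],
              [0, 4, 0],
              [5, 6, 0]])

# nonzero returns one index array per dimension; zipping them pairwise
# gives the coordinates of the non-zero entries
rows, cols = a.nonzero()
print(rows)            # [0 1 2 2]
print(cols)            # [0 1 0 1]

# the index arrays can be fed straight back in as fancy indices
print(a[rows, cols])   # [3 4 5 6]
```

`np.argwhere(a)` returns the same coordinates grouped row-wise, which is often more convenient for iteration.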
See also

[`numpy.nonzero`](numpy.nonzero#numpy.nonzero "numpy.nonzero")

equivalent function

numpy.recarray.partition
========================

method

recarray.partition(*kth*, *axis=-1*, *kind='introselect'*, *order=None*)

Rearranges the elements in the array in such a way that the value of the element in kth position is in the position it would be in a sorted array. All elements smaller than the kth element are moved before this element and all equal or greater are moved behind it. The ordering of the elements in the two partitions is undefined.

New in version 1.8.0.

Parameters

**kth**int or sequence of ints

Element index to partition by. The kth element value will be in its final sorted position and all smaller elements will be moved before it and all equal or greater elements behind it. The order of all elements in the partitions is undefined. If provided with a sequence of kth it will partition all elements indexed by kth of them into their sorted position at once.

Deprecated since version 1.22.0: Passing booleans as index is deprecated.

**axis**int, optional

Axis along which to sort. Default is -1, which means sort along the last axis.

**kind**{‘introselect’}, optional

Selection algorithm. Default is ‘introselect’.

**order**str or list of str, optional

When `a` is an array with fields defined, this argument specifies which fields to compare first, second, etc. A single field can be specified as a string, and not all fields need to be specified, but unspecified fields will still be used, in the order in which they come up in the dtype, to break ties.

See also

[`numpy.partition`](numpy.partition#numpy.partition "numpy.partition")

Return a partitioned copy of an array.

[`argpartition`](numpy.argpartition#numpy.argpartition "numpy.argpartition")

Indirect partition.

[`sort`](numpy.sort#numpy.sort "numpy.sort")

Full sort.
#### Notes

See `np.partition` for notes on the different algorithms.

#### Examples

```
>>> a = np.array([3, 4, 2, 1])
>>> a.partition(3)
>>> a
array([2, 1, 3, 4])
```

```
>>> a.partition((1, 3))
>>> a
array([1, 2, 3, 4])
```

numpy.recarray.prod
===================

method

recarray.prod(*axis=None*, *dtype=None*, *out=None*, *keepdims=False*, *initial=1*, *where=True*)

Return the product of the array elements over the given axis. Refer to [`numpy.prod`](numpy.prod#numpy.prod "numpy.prod") for full documentation.

See also

[`numpy.prod`](numpy.prod#numpy.prod "numpy.prod")

equivalent function

numpy.recarray.ptp
==================

method

recarray.ptp(*axis=None*, *out=None*, *keepdims=False*)

Peak to peak (maximum - minimum) value along a given axis. Refer to [`numpy.ptp`](numpy.ptp#numpy.ptp "numpy.ptp") for full documentation.

See also

[`numpy.ptp`](numpy.ptp#numpy.ptp "numpy.ptp")

equivalent function

numpy.recarray.put
==================

method

recarray.put(*indices*, *values*, *mode='raise'*)

Set `a.flat[n] = values[n]` for all `n` in indices. Refer to [`numpy.put`](numpy.put#numpy.put "numpy.put") for full documentation.

See also

[`numpy.put`](numpy.put#numpy.put "numpy.put")

equivalent function

numpy.recarray.ravel
====================

method

recarray.ravel([*order*])

Return a flattened array. Refer to [`numpy.ravel`](numpy.ravel#numpy.ravel "numpy.ravel") for full documentation.
See also

[`numpy.ravel`](numpy.ravel#numpy.ravel "numpy.ravel")

equivalent function

[`ndarray.flat`](numpy.ndarray.flat#numpy.ndarray.flat "numpy.ndarray.flat")

a flat iterator on the array.

numpy.recarray.repeat
=====================

method

recarray.repeat(*repeats*, *axis=None*)

Repeat elements of an array. Refer to [`numpy.repeat`](numpy.repeat#numpy.repeat "numpy.repeat") for full documentation.

See also

[`numpy.repeat`](numpy.repeat#numpy.repeat "numpy.repeat")

equivalent function

numpy.recarray.reshape
======================

method

recarray.reshape(*shape*, *order='C'*)

Returns an array containing the same data with a new shape. Refer to [`numpy.reshape`](numpy.reshape#numpy.reshape "numpy.reshape") for full documentation.

See also

[`numpy.reshape`](numpy.reshape#numpy.reshape "numpy.reshape")

equivalent function

#### Notes

Unlike the free function [`numpy.reshape`](numpy.reshape#numpy.reshape "numpy.reshape"), this method on [`ndarray`](numpy.ndarray#numpy.ndarray "numpy.ndarray") allows the elements of the shape parameter to be passed in as separate arguments. For example, `a.reshape(10, 11)` is equivalent to `a.reshape((10, 11))`.

numpy.recarray.resize
=====================

method

recarray.resize(*new_shape*, *refcheck=True*)

Change shape and size of array in-place.

Parameters

**new_shape**tuple of ints, or `n` ints

Shape of resized array.

**refcheck**bool, optional

If False, reference count will not be checked. Default is True.
Returns

None

Raises

ValueError

If `a` does not own its own data or references or views to it exist, and the data memory must be changed. PyPy only: will always raise if the data memory must be changed, since there is no reliable way to determine if references or views to it exist.

SystemError

If the `order` keyword argument is specified. This behaviour is a bug in NumPy.

See also

[`resize`](numpy.resize#numpy.resize "numpy.resize")

Return a new array with the specified shape.

#### Notes

This reallocates space for the data area if necessary. Only contiguous arrays (data elements consecutive in memory) can be resized.

The purpose of the reference count check is to make sure you do not use this array as a buffer for another Python object and then reallocate the memory. However, reference counts can increase in other ways so if you are sure that you have not shared the memory for this array with another Python object, then you may safely set `refcheck` to False.

#### Examples

Shrinking an array: array is flattened (in the order that the data are stored in memory), resized, and reshaped:

```
>>> a = np.array([[0, 1], [2, 3]], order='C')
>>> a.resize((2, 1))
>>> a
array([[0],
       [1]])
```

```
>>> a = np.array([[0, 1], [2, 3]], order='F')
>>> a.resize((2, 1))
>>> a
array([[0],
       [2]])
```

Enlarging an array: as above, but missing entries are filled with zeros:

```
>>> b = np.array([[0, 1], [2, 3]])
>>> b.resize(2, 3) # new_shape parameter doesn't have to be a tuple
>>> b
array([[0, 1, 2],
       [3, 0, 0]])
```

Referencing an array prevents resizing:
```
>>> c = a
>>> a.resize((1, 1))
Traceback (most recent call last):
...
ValueError: cannot resize an array that references or is referenced ...
```

Unless `refcheck` is False:

```
>>> a.resize((1, 1), refcheck=False)
>>> a
array([[0]])
>>> c
array([[0]])
```

numpy.recarray.round
====================

method

recarray.round(*decimals=0*, *out=None*)

Return `a` with each element rounded to the given number of decimals. Refer to [`numpy.around`](numpy.around#numpy.around "numpy.around") for full documentation.

See also

[`numpy.around`](numpy.around#numpy.around "numpy.around")

equivalent function

numpy.recarray.searchsorted
===========================

method

recarray.searchsorted(*v*, *side='left'*, *sorter=None*)

Find indices where elements of v should be inserted in a to maintain order. For full documentation, see [`numpy.searchsorted`](numpy.searchsorted#numpy.searchsorted "numpy.searchsorted").

See also

[`numpy.searchsorted`](numpy.searchsorted#numpy.searchsorted "numpy.searchsorted")

equivalent function

numpy.recarray.setfield
=======================

method

recarray.setfield(*val*, *dtype*, *offset=0*)

Put a value into a specified place in a field defined by a data-type. Place `val` into `a`’s field defined by [`dtype`](numpy.dtype#numpy.dtype "numpy.dtype") and beginning `offset` bytes into the field.

Parameters

**val**object

Value to be placed in field.

**dtype**dtype object

Data-type of the field in which to place `val`.

**offset**int, optional

The number of bytes into the field at which to place `val`.
Returns

None

See also

[`getfield`](numpy.recarray.getfield#numpy.recarray.getfield "numpy.recarray.getfield")

#### Examples

```
>>> x = np.eye(3)
>>> x.getfield(np.float64)
array([[1., 0., 0.],
       [0., 1., 0.],
       [0., 0., 1.]])
>>> x.setfield(3, np.int32)
>>> x.getfield(np.int32)
array([[3, 3, 3],
       [3, 3, 3],
       [3, 3, 3]], dtype=int32)
>>> x
array([[1.0e+000, 1.5e-323, 1.5e-323],
       [1.5e-323, 1.0e+000, 1.5e-323],
       [1.5e-323, 1.5e-323, 1.0e+000]])
>>> x.setfield(np.eye(3), np.int32)
>>> x
array([[1., 0., 0.],
       [0., 1., 0.],
       [0., 0., 1.]])
```

numpy.recarray.setflags
=======================

method

recarray.setflags(*write=None*, *align=None*, *uic=None*)

Set array flags WRITEABLE, ALIGNED, WRITEBACKIFCOPY, respectively. These Boolean-valued flags affect how numpy interprets the memory area used by `a` (see Notes below). The ALIGNED flag can only be set to True if the data is actually aligned according to the type. The WRITEBACKIFCOPY flag can never be set to True. The flag WRITEABLE can only be set to True if the array owns its own memory, or the ultimate owner of the memory exposes a writeable buffer interface, or is a string. (The exception for string is made so that unpickling can be done without copying memory.)

Parameters

**write**bool, optional

Describes whether or not `a` can be written to.

**align**bool, optional

Describes whether or not `a` is aligned properly for its type.

**uic**bool, optional

Describes whether or not `a` is a copy of another “base” array.

#### Notes

Array flags provide information about how the memory area used for the array is to be interpreted. There are 7 Boolean flags in use, only three of which can be changed by the user: WRITEBACKIFCOPY, WRITEABLE, and ALIGNED.
WRITEABLE (W)

the data area can be written to;

ALIGNED (A)

the data and strides are aligned appropriately for the hardware (as determined by the compiler);

WRITEBACKIFCOPY (X)

this array is a copy of some other array (referenced by .base). When the C-API function PyArray_ResolveWritebackIfCopy is called, the base array will be updated with the contents of this array.

All flags can be accessed using the single (upper case) letter as well as the full name.

#### Examples

```
>>> y = np.array([[3, 1, 7],
...               [2, 0, 0],
...               [8, 5, 9]])
>>> y
array([[3, 1, 7],
       [2, 0, 0],
       [8, 5, 9]])
>>> y.flags
  C_CONTIGUOUS : True
  F_CONTIGUOUS : False
  OWNDATA : True
  WRITEABLE : True
  ALIGNED : True
  WRITEBACKIFCOPY : False
>>> y.setflags(write=0, align=0)
>>> y.flags
  C_CONTIGUOUS : True
  F_CONTIGUOUS : False
  OWNDATA : True
  WRITEABLE : False
  ALIGNED : False
  WRITEBACKIFCOPY : False
>>> y.setflags(uic=1)
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
ValueError: cannot set WRITEBACKIFCOPY flag to True
```

numpy.recarray.sort
===================

method

recarray.sort(*axis=-1*, *kind=None*, *order=None*)

Sort an array in-place. Refer to [`numpy.sort`](numpy.sort#numpy.sort "numpy.sort") for full documentation.

Parameters

**axis**int, optional

Axis along which to sort. Default is -1, which means sort along the last axis.

**kind**{‘quicksort’, ‘mergesort’, ‘heapsort’, ‘stable’}, optional

Sorting algorithm. The default is ‘quicksort’. Note that both ‘stable’ and ‘mergesort’ use timsort under the covers and, in general, the actual implementation will vary with datatype. The ‘mergesort’ option is retained for backwards compatibility.

Changed in version 1.15.0: The ‘stable’ option was added.
**order**str or list of str, optional

When `a` is an array with fields defined, this argument specifies which fields to compare first, second, etc. A single field can be specified as a string, and not all fields need be specified, but unspecified fields will still be used, in the order in which they come up in the dtype, to break ties.

See also

[`numpy.sort`](numpy.sort#numpy.sort "numpy.sort")

Return a sorted copy of an array.

[`numpy.argsort`](numpy.argsort#numpy.argsort "numpy.argsort")

Indirect sort.

[`numpy.lexsort`](numpy.lexsort#numpy.lexsort "numpy.lexsort")

Indirect stable sort on multiple keys.

[`numpy.searchsorted`](numpy.searchsorted#numpy.searchsorted "numpy.searchsorted")

Find elements in sorted array.

[`numpy.partition`](numpy.partition#numpy.partition "numpy.partition")

Partial sort.

#### Notes

See [`numpy.sort`](numpy.sort#numpy.sort "numpy.sort") for notes on the different sorting algorithms.

#### Examples

```
>>> a = np.array([[1,4], [3,1]])
>>> a.sort(axis=1)
>>> a
array([[1, 4],
       [1, 3]])
>>> a.sort(axis=0)
>>> a
array([[1, 3],
       [1, 4]])
```

Use the `order` keyword to specify a field to use when sorting a structured array:

```
>>> a = np.array([('a', 2), ('c', 1)], dtype=[('x', 'S1'), ('y', int)])
>>> a.sort(order='y')
>>> a
array([(b'c', 1), (b'a', 2)], dtype=[('x', 'S1'), ('y', '<i8')])
```

numpy.recarray.squeeze
======================

method

recarray.squeeze(*axis=None*)

Remove axes of length one from `a`. Refer to [`numpy.squeeze`](numpy.squeeze#numpy.squeeze "numpy.squeeze") for full documentation.

See also

[`numpy.squeeze`](numpy.squeeze#numpy.squeeze "numpy.squeeze")

equivalent function
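The `order` keyword for `sort` described above also accepts a list of fields, with later fields breaking ties among earlier ones; a sketch, not from the NumPy docs:

```
import numpy as np

# Sort a structured array by field 'y' first, breaking ties with 'x'.
a = np.array([(b'b', 2), (b'a', 2), (b'a', 1)],
             dtype=[('x', 'S1'), ('y', int)])
a.sort(order=['y', 'x'])  # in-place sort on two keys
```

After the call, the row with `y == 1` comes first, and the two `y == 2` rows are ordered by `x`.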
numpy.recarray.std
==================

method

recarray.std(*axis=None*, *dtype=None*, *out=None*, *ddof=0*, *keepdims=False*, ***, *where=True*)

Returns the standard deviation of the array elements along given axis. Refer to [`numpy.std`](numpy.std#numpy.std "numpy.std") for full documentation.

See also

[`numpy.std`](numpy.std#numpy.std "numpy.std")

equivalent function

numpy.recarray.sum
==================

method

recarray.sum(*axis=None*, *dtype=None*, *out=None*, *keepdims=False*, *initial=0*, *where=True*)

Return the sum of the array elements over the given axis. Refer to [`numpy.sum`](numpy.sum#numpy.sum "numpy.sum") for full documentation.

See also

[`numpy.sum`](numpy.sum#numpy.sum "numpy.sum")

equivalent function

numpy.recarray.swapaxes
=======================

method

recarray.swapaxes(*axis1*, *axis2*)

Return a view of the array with `axis1` and `axis2` interchanged. Refer to [`numpy.swapaxes`](numpy.swapaxes#numpy.swapaxes "numpy.swapaxes") for full documentation.

See also

[`numpy.swapaxes`](numpy.swapaxes#numpy.swapaxes "numpy.swapaxes")

equivalent function

numpy.recarray.take
===================

method

recarray.take(*indices*, *axis=None*, *out=None*, *mode='raise'*)

Return an array formed from the elements of `a` at the given indices. Refer to [`numpy.take`](numpy.take#numpy.take "numpy.take") for full documentation.

See also

[`numpy.take`](numpy.take#numpy.take "numpy.take")

equivalent function
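As a quick illustration of `take` above, including the `mode` parameter (a sketch, not from the NumPy docs):

```
import numpy as np

a = np.array([10, 20, 30, 40])

picked = a.take([0, 3, 1])             # default mode='raise' on bad indices
clipped = a.take([0, 7], mode='clip')  # out-of-range 7 is clipped to the last index
```

`mode='wrap'` would instead wrap out-of-range indices around; see `numpy.take` for the full semantics.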
numpy.recarray.tobytes
======================

method

recarray.tobytes(*order='C'*)

Construct Python bytes containing the raw data bytes in the array. Constructs Python bytes showing a copy of the raw contents of data memory. The bytes object is produced in C-order by default. This behavior is controlled by the `order` parameter.

New in version 1.9.0.

Parameters

**order**{‘C’, ‘F’, ‘A’}, optional

Controls the memory layout of the bytes object. ‘C’ means C-order, ‘F’ means F-order, ‘A’ (short for *Any*) means ‘F’ if `a` is Fortran contiguous, ‘C’ otherwise. Default is ‘C’.

Returns

**s**bytes

Python bytes exhibiting a copy of `a`’s raw data.

#### Examples

```
>>> x = np.array([[0, 1], [2, 3]], dtype='<u2')
>>> x.tobytes()
b'\x00\x00\x01\x00\x02\x00\x03\x00'
>>> x.tobytes('C') == x.tobytes()
True
>>> x.tobytes('F')
b'\x00\x00\x02\x00\x01\x00\x03\x00'
```

numpy.recarray.tofile
=====================

method

recarray.tofile(*fid*, *sep=''*, *format='%s'*)

Write array to a file as text or binary (default). Data is always written in ‘C’ order, independent of the order of `a`. The data produced by this method can be recovered using the function fromfile().

Parameters

**fid**file or str or Path

An open file object, or a string containing a filename.

Changed in version 1.17.0: [`pathlib.Path`](https://docs.python.org/3/library/pathlib.html#pathlib.Path "(in Python v3.10)") objects are now accepted.

**sep**str

Separator between array items for text output. If “” (empty), a binary file is written, equivalent to `file.write(a.tobytes())`.

**format**str

Format string for text file output. Each entry in the array is formatted to text by first converting it to the closest Python type, and then using “format” % item.
#### Notes

This is a convenience function for quick storage of array data. Information on endianness and precision is lost, so this method is not a good choice for files intended to archive data or transport data between machines with different endianness. Some of these problems can be overcome by outputting the data as text files, at the expense of speed and file size.

When fid is a file object, array contents are directly written to the file, bypassing the file object’s `write` method. As a result, tofile cannot be used with file objects supporting compression (e.g., GzipFile) or file-like objects that do not support `fileno()` (e.g., BytesIO).

numpy.recarray.tolist
=====================

method

recarray.tolist()

Return the array as an `a.ndim`-levels deep nested list of Python scalars. Return a copy of the array data as a (nested) Python list. Data items are converted to the nearest compatible builtin Python type, via the [`item`](numpy.ndarray.item#numpy.ndarray.item "numpy.ndarray.item") function. If `a.ndim` is 0, then since the depth of the nested list is 0, it will not be a list at all, but a simple Python scalar.

Parameters

**none**

Returns

**y**object, or list of object, or list of list of object, or
The possibly nested list of array elements.

#### Notes

The array may be recreated via `a = np.array(a.tolist())`, although this may sometimes lose precision.

#### Examples

For a 1D array, `a.tolist()` is almost the same as `list(a)`, except that `tolist` changes numpy scalars to Python scalars:

```
>>> a = np.uint32([1, 2])
>>> a_list = list(a)
>>> a_list
[1, 2]
>>> type(a_list[0])
<class 'numpy.uint32'>
>>> a_tolist = a.tolist()
>>> a_tolist
[1, 2]
>>> type(a_tolist[0])
<class 'int'>
```

Additionally, for a 2D array, `tolist` applies recursively:

```
>>> a = np.array([[1, 2], [3, 4]])
>>> list(a)
[array([1, 2]), array([3, 4])]
>>> a.tolist()
[[1, 2], [3, 4]]
```

The base case for this recursion is a 0D array:

```
>>> a = np.array(1)
>>> list(a)
Traceback (most recent call last):
  ...
TypeError: iteration over a 0-d array
>>> a.tolist()
1
```

numpy.recarray.tostring
=======================

method

recarray.tostring(*order='C'*)

A compatibility alias for [`tobytes`](numpy.recarray.tobytes#numpy.recarray.tobytes "numpy.recarray.tobytes"), with exactly the same behavior. Despite its name, it returns `bytes` not [`str`](https://docs.python.org/3/library/stdtypes.html#str "(in Python v3.10)")s.

Deprecated since version 1.19.0.

numpy.recarray.trace
====================

method

recarray.trace(*offset=0*, *axis1=0*, *axis2=1*, *dtype=None*, *out=None*)

Return the sum along diagonals of the array. Refer to [`numpy.trace`](numpy.trace#numpy.trace "numpy.trace") for full documentation.

See also

[`numpy.trace`](numpy.trace#numpy.trace "numpy.trace")

equivalent function
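The `offset` parameter of `trace` above selects which diagonal is summed; a sketch, not from the NumPy docs:

```
import numpy as np

a = np.arange(9).reshape(3, 3)

main = a.trace()          # main diagonal: 0 + 4 + 8
upper = a.trace(offset=1) # diagonal above the main one: 1 + 5
```

A negative `offset` would select a diagonal below the main one instead.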
numpy.recarray.transpose
========================

method

recarray.transpose(**axes*)

Returns a view of the array with axes transposed. For a 1-D array this has no effect, as a transposed vector is simply the same vector. To convert a 1-D array into a 2D column vector, an additional dimension must be added. `np.atleast_2d(a).T` achieves this, as does `a[:, np.newaxis]`. For a 2-D array, this is a standard matrix transpose. For an n-D array, if axes are given, their order indicates how the axes are permuted (see Examples). If axes are not provided and `a.shape = (i[0], i[1], ... i[n-2], i[n-1])`, then `a.transpose().shape = (i[n-1], i[n-2], ... i[1], i[0])`.

Parameters

**axes**None, tuple of ints, or `n` ints

* None or no argument: reverses the order of the axes.
* tuple of ints: `i` in the `j`-th place in the tuple means `a`’s `i`-th axis becomes `a.transpose()`’s `j`-th axis.
* `n` ints: same as an n-tuple of the same ints (this form is intended simply as a “convenience” alternative to the tuple form)

Returns

**out**ndarray

View of `a`, with axes suitably permuted.

See also

[`transpose`](numpy.transpose#numpy.transpose "numpy.transpose")

Equivalent function

[`ndarray.T`](numpy.ndarray.t#numpy.ndarray.T "numpy.ndarray.T")

Array property returning the array transposed.

[`ndarray.reshape`](numpy.ndarray.reshape#numpy.ndarray.reshape "numpy.ndarray.reshape")

Give a new shape to an array without changing its data.

#### Examples

```
>>> a = np.array([[1, 2], [3, 4]])
>>> a
array([[1, 2],
       [3, 4]])
>>> a.transpose()
array([[1, 3],
       [2, 4]])
>>> a.transpose((1, 0))
array([[1, 3],
       [2, 4]])
>>> a.transpose(1, 0)
array([[1, 3],
       [2, 4]])
```
numpy.recarray.var
==================

method

recarray.var(*axis=None*, *dtype=None*, *out=None*, *ddof=0*, *keepdims=False*, ***, *where=True*)

Returns the variance of the array elements, along given axis. Refer to [`numpy.var`](numpy.var#numpy.var "numpy.var") for full documentation.

See also

[`numpy.var`](numpy.var#numpy.var "numpy.var")

equivalent function

numpy.recarray.view
===================

method

recarray.view(*[dtype][, type]*)

New view of array with the same data.

Note

Passing None for `dtype` is different from omitting the parameter, since the former invokes `dtype(None)` which is an alias for `dtype('float_')`.

Parameters

**dtype**data-type or ndarray sub-class, optional

Data-type descriptor of the returned view, e.g., float32 or int16. Omitting it results in the view having the same data-type as `a`. This argument can also be specified as an ndarray sub-class, which then specifies the type of the returned object (this is equivalent to setting the `type` parameter).

**type**Python type, optional

Type of the returned view, e.g., ndarray or matrix. Again, omission of the parameter results in type preservation.

#### Notes

`a.view()` is used two different ways:

`a.view(some_dtype)` or `a.view(dtype=some_dtype)` constructs a view of the array’s memory with a different data-type. This can cause a reinterpretation of the bytes of memory.

`a.view(ndarray_subclass)` or `a.view(type=ndarray_subclass)` just returns an instance of `ndarray_subclass` that looks at the same array (same shape, dtype, etc.) This does not cause a reinterpretation of the memory.
For `a.view(some_dtype)`, if `some_dtype` has a different number of bytes per entry than the previous dtype (for example, converting a regular array to a structured array), then the last axis of `a` must be contiguous. This axis will be resized in the result.

Changed in version 1.23.0: Only the last axis needs to be contiguous. Previously, the entire array had to be C-contiguous.

#### Examples

```
>>> x = np.array([(1, 2)], dtype=[('a', np.int8), ('b', np.int8)])
```

Viewing array data using a different type and dtype:

```
>>> y = x.view(dtype=np.int16, type=np.matrix)
>>> y
matrix([[513]], dtype=int16)
>>> print(type(y))
<class 'numpy.matrix'>
```

Creating a view on a structured array so it can be used in calculations:

```
>>> x = np.array([(1, 2),(3,4)], dtype=[('a', np.int8), ('b', np.int8)])
>>> xv = x.view(dtype=np.int8).reshape(-1,2)
>>> xv
array([[1, 2],
       [3, 4]], dtype=int8)
>>> xv.mean(0)
array([2., 3.])
```

Making changes to the view changes the underlying array:

```
>>> xv[0,1] = 20
>>> x
array([(1, 20), (3, 4)], dtype=[('a', 'i1'), ('b', 'i1')])
```

Using a view to convert an array to a recarray:

```
>>> z = x.view(np.recarray)
>>> z.a
array([1, 3], dtype=int8)
```

Views share data:

```
>>> x[0] = (9, 10)
>>> z[0]
(9, 10)
```

Views that change the dtype size (bytes per entry) should normally be avoided on arrays defined by slices, transposes, fortran-ordering, etc.:

```
>>> x = np.array([[1, 2, 3], [4, 5, 6]], dtype=np.int16)
>>> y = x[:, ::2]
>>> y
array([[1, 3],
       [4, 6]], dtype=int16)
>>> y.view(dtype=[('width', np.int16), ('length', np.int16)])
Traceback (most recent call last):
    ...
ValueError: To change to a dtype of a different size, the last axis must be contiguous
>>> z = y.copy()
>>> z.view(dtype=[('width', np.int16), ('length', np.int16)])
array([[(1, 3)],
       [(4, 6)]], dtype=[('width', '<i2'), ('length', '<i2')])
```

However, views that change dtype are totally fine for arrays with a contiguous last axis, even if the rest of the axes are not C-contiguous:

```
>>> x = np.arange(2 * 3 * 4, dtype=np.int8).reshape(2, 3, 4)
>>> x.transpose(1, 0, 2).view(np.int16)
array([[[ 256,  770],
        [3340, 3854]],

       [[1284, 1798],
        [4368, 4882]],

       [[2312, 2826],
        [5396, 5910]]], dtype=int16)
```

numpy.recarray.strides
======================

attribute

recarray.strides

Tuple of bytes to step in each dimension when traversing an array. The byte offset of element `(i[0], i[1], ..., i[n])` in an array `a` is:

```
offset = sum(np.array(i) * a.strides)
```

A more detailed explanation of strides can be found in the “ndarray.rst” file in the NumPy reference guide.

Warning

Setting `arr.strides` is discouraged and may be deprecated in the future. [`numpy.lib.stride_tricks.as_strided`](numpy.lib.stride_tricks.as_strided#numpy.lib.stride_tricks.as_strided "numpy.lib.stride_tricks.as_strided") should be preferred to create a new view of the same data in a safer way.

See also

[`numpy.lib.stride_tricks.as_strided`](numpy.lib.stride_tricks.as_strided#numpy.lib.stride_tricks.as_strided "numpy.lib.stride_tricks.as_strided")

#### Notes

Imagine an array of 32-bit integers (each 4 bytes):

```
x = np.array([[0, 1, 2, 3, 4],
              [5, 6, 7, 8, 9]], dtype=np.int32)
```

This array is stored in memory as 40 bytes, one after the other (known as a contiguous block of memory). The strides of an array tell us how many bytes we have to skip in memory to move to the next position along a certain axis.
For example, we have to skip 4 bytes (1 value) to move to the next column, but 20 bytes (5 values) to get to the same position in the next row. As such, the strides for the array `x` will be `(20, 4)`.

#### Examples

```
>>> y = np.reshape(np.arange(2*3*4), (2,3,4))
>>> y
array([[[ 0,  1,  2,  3],
        [ 4,  5,  6,  7],
        [ 8,  9, 10, 11]],

       [[12, 13, 14, 15],
        [16, 17, 18, 19],
        [20, 21, 22, 23]]])
>>> y.strides
(48, 16, 4)
>>> y[1,1,1]
17
>>> offset=sum(y.strides * np.array((1,1,1)))
>>> offset/y.itemsize
17
```

```
>>> x = np.reshape(np.arange(5*6*7*8), (5,6,7,8)).transpose(2,3,1,0)
>>> x.strides
(32, 4, 224, 1344)
>>> i = np.array([3,5,2,2])
>>> offset = sum(i * x.strides)
>>> x[3,5,2,2]
813
>>> offset / x.itemsize
813
```

numpy.recarray.T
================

attribute

recarray.T

The transposed array. Same as `self.transpose()`.

See also

[`transpose`](numpy.transpose#numpy.transpose "numpy.transpose")

#### Examples

```
>>> x = np.array([[1.,2.],[3.,4.]])
>>> x
array([[ 1.,  2.],
       [ 3.,  4.]])
>>> x.T
array([[ 1.,  3.],
       [ 2.,  4.]])
>>> x = np.array([1.,2.,3.,4.])
>>> x
array([ 1.,  2.,  3.,  4.])
>>> x.T
array([ 1.,  2.,  3.,  4.])
```

numpy.recarray.base
===================

attribute

recarray.base

Base object if memory is from some other object.

#### Examples

The base of an array that owns its memory is None:

```
>>> x = np.array([1,2,3,4])
>>> x.base is None
True
```

Slicing creates a view, whose memory is shared with x:

```
>>> y = x[2:]
>>> y.base is x
True
```
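The stride arithmetic described in the strides section above (skip 4 bytes per column, 20 per row, so strides `(20, 4)`) can be verified directly; a sketch, not from the NumPy docs:

```
import numpy as np

x = np.array([[0, 1, 2, 3, 4],
              [5, 6, 7, 8, 9]], dtype=np.int32)

# Row step: 5 values * 4 bytes = 20; column step: 4 bytes.
assert x.strides == (20, 4)

# Byte offset of x[1, 2] via offset = sum(index * strides).
offset = sum(np.array((1, 2)) * x.strides)  # 1*20 + 2*4 = 28
assert offset // x.itemsize == 7            # flat index of x[1, 2], whose value is 7
```

Dividing the byte offset by `itemsize` recovers the flat element index, exactly as in the Examples above.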
numpy.recarray.ctypes
=====================

attribute

recarray.ctypes

An object to simplify the interaction of the array with the ctypes module. This attribute creates an object that makes it easier to use arrays when calling shared libraries with the ctypes module. The returned object has, among others, data, shape, and strides attributes (see Notes below) which themselves return ctypes objects that can be used as arguments to a shared library.

Parameters

**None**

Returns

**c**Python object

Possessing attributes data, shape, strides, etc.

See also

[`numpy.ctypeslib`](../routines.ctypeslib#module-numpy.ctypeslib "numpy.ctypeslib")

#### Notes

Below are the public attributes of this object which were documented in “Guide to NumPy” (we have omitted undocumented public attributes, as well as documented private attributes):

_ctypes.data

A pointer to the memory area of the array as a Python integer. This memory area may contain data that is not aligned, or not in correct byte-order. The memory area may not even be writeable. The array flags and data-type of this array should be respected when passing this attribute to arbitrary C-code to avoid trouble that can include Python crashing. User Beware! The value of this attribute is exactly the same as `self.__array_interface__['data'][0]`.

Note that unlike `data_as`, a reference will not be kept to the array: code like `ctypes.c_void_p((a + b).ctypes.data)` will result in a pointer to a deallocated array, and should be spelt `(a + b).ctypes.data_as(ctypes.c_void_p)`

_ctypes.shape

(c_intp*self.ndim): A ctypes array of length self.ndim where the basetype is the C-integer corresponding to `dtype('p')` on this platform (see [`c_intp`](../routines.ctypeslib#numpy.ctypeslib.c_intp "numpy.ctypeslib.c_intp")).
This base-type could be [`ctypes.c_int`](https://docs.python.org/3/library/ctypes.html#ctypes.c_int "(in Python v3.10)"), [`ctypes.c_long`](https://docs.python.org/3/library/ctypes.html#ctypes.c_long "(in Python v3.10)"), or [`ctypes.c_longlong`](https://docs.python.org/3/library/ctypes.html#ctypes.c_longlong "(in Python v3.10)") depending on the platform. The ctypes array contains the shape of the underlying array. _ctypes.strides (c_intp*self.ndim): A ctypes array of length self.ndim where the basetype is the same as for the shape attribute. This ctypes array contains the strides information from the underlying array. This strides information is important for showing how many bytes must be jumped to get to the next element in the array. _ctypes.data_as(*obj*)[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/core/_internal.py#L267-L284) Return the data pointer cast to a particular c-types object. For example, calling `self._as_parameter_` is equivalent to `self.data_as(ctypes.c_void_p)`. Perhaps you want to use the data as a pointer to a ctypes array of floating-point data: `self.data_as(ctypes.POINTER(ctypes.c_double))`. The returned pointer will keep a reference to the array. _ctypes.shape_as(*obj*)[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/core/_internal.py#L286-L293) Return the shape tuple as an array of some other c-types type. For example: `self.shape_as(ctypes.c_short)`. _ctypes.strides_as(*obj*)[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/core/_internal.py#L295-L302) Return the strides tuple as an array of some other c-types type. For example: `self.strides_as(ctypes.c_longlong)`. If the ctypes module is not available, then the ctypes attribute of array objects still returns something useful, but ctypes objects are not returned and errors may be raised instead. In particular, the object will still have the `as_parameter` attribute which will return an integer equal to the data attribute. 
#### Examples ``` >>> import ctypes >>> x = np.array([[0, 1], [2, 3]], dtype=np.int32) >>> x array([[0, 1], [2, 3]], dtype=int32) >>> x.ctypes.data 31962608 # may vary >>> x.ctypes.data_as(ctypes.POINTER(ctypes.c_uint32)) <__main__.LP_c_uint object at 0x7ff2fc1fc200> # may vary >>> x.ctypes.data_as(ctypes.POINTER(ctypes.c_uint32)).contents c_uint(0) >>> x.ctypes.data_as(ctypes.POINTER(ctypes.c_uint64)).contents c_ulong(4294967296) >>> x.ctypes.shape <numpy.core._internal.c_long_Array_2 object at 0x7ff2fc1fce60> # may vary >>> x.ctypes.strides <numpy.core._internal.c_long_Array_2 object at 0x7ff2fc1ff320> # may vary ``` © 2005–2022 NumPy Developers Licensed under the 3-clause BSD License. <https://numpy.org/doc/1.23/reference/generated/numpy.recarray.ctypes.htmlnumpy.recarray.data =================== attribute recarray.data Python buffer object pointing to the start of the array’s data. © 2005–2022 NumPy Developers Licensed under the 3-clause BSD License. <https://numpy.org/doc/1.23/reference/generated/numpy.recarray.data.htmlnumpy.recarray.flags ==================== attribute recarray.flags Information about the memory layout of the array. #### Notes The [`flags`](#numpy.recarray.flags "numpy.recarray.flags") object can be accessed dictionary-like (as in `a.flags['WRITEABLE']`), or by using lowercased attribute names (as in `a.flags.writeable`). Short flag names are only supported in dictionary access. Only the WRITEBACKIFCOPY, WRITEABLE, and ALIGNED flags can be changed by the user, via direct assignment to the attribute or dictionary entry, or by calling [`ndarray.setflags`](numpy.ndarray.setflags#numpy.ndarray.setflags "numpy.ndarray.setflags"). The array flags cannot be set arbitrarily: * WRITEBACKIFCOPY can only be set `False`. * ALIGNED can only be set `True` if the data is truly aligned. * WRITEABLE can only be set `True` if the array owns its own memory or the ultimate owner of the memory exposes a writeable buffer interface or is a string. 
Arrays can be both C-style and Fortran-style contiguous simultaneously. This is clear for 1-dimensional arrays, but can also be true for higher dimensional arrays. Even for contiguous arrays a stride for a given dimension `arr.strides[dim]` may be *arbitrary* if `arr.shape[dim] == 1` or the array has no elements. It does *not* generally hold that `self.strides[-1] == self.itemsize` for C-style contiguous arrays or `self.strides[0] == self.itemsize` for Fortran-style contiguous arrays is true. Attributes **C_CONTIGUOUS (C)** The data is in a single, C-style contiguous segment. **F_CONTIGUOUS (F)** The data is in a single, Fortran-style contiguous segment. **OWNDATA (O)** The array owns the memory it uses or borrows it from another object. **WRITEABLE (W)** The data area can be written to. Setting this to False locks the data, making it read-only. A view (slice, etc.) inherits WRITEABLE from its base array at creation time, but a view of a writeable array may be subsequently locked while the base array remains writeable. (The opposite is not true, in that a view of a locked array may not be made writeable. However, currently, locking a base object does not lock any views that already reference it, so under that circumstance it is possible to alter the contents of a locked array via a previously created writeable view onto it.) Attempting to change a non-writeable array raises a RuntimeError exception. **ALIGNED (A)** The data and all elements are aligned appropriately for the hardware. **WRITEBACKIFCOPY (X)** This array is a copy of some other array. The C-API function PyArray_ResolveWritebackIfCopy must be called before deallocating to the base array will be updated with the contents of this array. **FNC** F_CONTIGUOUS and not C_CONTIGUOUS. **FORC** F_CONTIGUOUS or C_CONTIGUOUS (one-segment test). **BEHAVED (B)** ALIGNED and WRITEABLE. **CARRAY (CA)** BEHAVED and C_CONTIGUOUS. **FARRAY (FA)** BEHAVED and F_CONTIGUOUS and not C_CONTIGUOUS. 
<https://numpy.org/doc/1.23/reference/generated/numpy.recarray.flags.html>

numpy.recarray.flat
===================

attribute

recarray.flat

A 1-D iterator over the array.

This is a [`numpy.flatiter`](numpy.flatiter#numpy.flatiter "numpy.flatiter") instance, which acts similarly to, but is not a subclass of, Python’s built-in iterator object.

See also

[`flatten`](numpy.recarray.flatten#numpy.recarray.flatten "numpy.recarray.flatten")

Return a copy of the array collapsed into one dimension.

[`flatiter`](numpy.flatiter#numpy.flatiter "numpy.flatiter")

#### Examples

```
>>> x = np.arange(1, 7).reshape(2, 3)
>>> x
array([[1, 2, 3],
       [4, 5, 6]])
>>> x.flat[3]
4
>>> x.T
array([[1, 4],
       [2, 5],
       [3, 6]])
>>> x.T.flat[3]
5
>>> type(x.flat)
<class 'numpy.flatiter'>
```

An assignment example:

```
>>> x.flat = 3; x
array([[3, 3, 3],
       [3, 3, 3]])
>>> x.flat[[1,4]] = 1; x
array([[3, 1, 3],
       [3, 1, 3]])
```

<https://numpy.org/doc/1.23/reference/generated/numpy.recarray.flat.html>

numpy.recarray.itemsize
=======================

attribute

recarray.itemsize

Length of one array element in bytes.

#### Examples

```
>>> x = np.array([1,2,3], dtype=np.float64)
>>> x.itemsize
8
>>> x = np.array([1,2,3], dtype=np.complex128)
>>> x.itemsize
16
```

<https://numpy.org/doc/1.23/reference/generated/numpy.recarray.itemsize.html>

numpy.recarray.nbytes
=====================

attribute

recarray.nbytes

Total bytes consumed by the elements of the array.

#### Notes

Does not include memory consumed by non-element attributes of the array object.

#### Examples

```
>>> x = np.zeros((3,5,2), dtype=np.complex128)
>>> x.nbytes
480
>>> np.prod(x.shape) * x.itemsize
480
```
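The `flat`, `itemsize`, and `nbytes` attributes above fit together simply: `nbytes` is element count times bytes per element, and `flat` always iterates in C (row-major) order regardless of the array's memory layout. An illustrative sketch, not taken from the NumPy reference:

```python
import numpy as np

x = np.zeros((3, 5, 2), dtype=np.complex128)
# nbytes counts only the element buffer: count * bytes-per-element
assert x.nbytes == x.size * x.itemsize == 480

# .flat iterates in C (row-major) order, even over a transposed view
m = np.arange(6).reshape(2, 3)
assert list(m.flat) == [0, 1, 2, 3, 4, 5]
assert list(m.T.flat) == [0, 3, 1, 4, 2, 5]
```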
<https://numpy.org/doc/1.23/reference/generated/numpy.recarray.nbytes.html>

numpy.recarray.ndim
===================

attribute

recarray.ndim

Number of array dimensions.

#### Examples

```
>>> x = np.array([1, 2, 3])
>>> x.ndim
1
>>> y = np.zeros((2, 3, 4))
>>> y.ndim
3
```

<https://numpy.org/doc/1.23/reference/generated/numpy.recarray.ndim.html>

numpy.recarray.size
===================

attribute

recarray.size

Number of elements in the array.

Equal to `np.prod(a.shape)`, i.e., the product of the array’s dimensions.

#### Notes

`a.size` returns a standard arbitrary precision Python integer. This may not be the case with other methods of obtaining the same value (like the suggested `np.prod(a.shape)`, which returns an instance of `np.int_`), and may be relevant if the value is used further in calculations that may overflow a fixed size integer type.

#### Examples

```
>>> x = np.zeros((3, 5, 2), dtype=np.complex128)
>>> x.size
30
>>> np.prod(x.shape)
30
```

<https://numpy.org/doc/1.23/reference/generated/numpy.recarray.size.html>

numpy.dtype.isalignedstruct
===========================

attribute

dtype.isalignedstruct

Boolean indicating whether the dtype is a struct which maintains field alignment. This flag is sticky, so when combining multiple structs together, it is preserved and produces new dtypes which are also aligned.

<https://numpy.org/doc/1.23/reference/generated/numpy.dtype.isalignedstruct.html>

numpy.ma.MaskedArray.__array__
==============================

method

ma.MaskedArray.__array__([*dtype*, ]*/*) → reference if type unchanged, copy otherwise.

Returns either a new reference to self if dtype is not given or a new array of provided data type if dtype is different from the current dtype of the array.
© 2005–2022 NumPy Developers Licensed under the 3-clause BSD License. <https://numpy.org/doc/1.23/reference/generated/numpy.ma.MaskedArray.__array__.htmlnumpy.ma.MaskedArray.base ========================= attribute ma.MaskedArray.base Base object if memory is from some other object. #### Examples The base of an array that owns its memory is None: ``` >>> x = np.array([1,2,3,4]) >>> x.base is None True ``` Slicing creates a view, whose memory is shared with x: ``` >>> y = x[2:] >>> y.base is x True ``` © 2005–2022 NumPy Developers Licensed under the 3-clause BSD License. <https://numpy.org/doc/1.23/reference/generated/numpy.ma.MaskedArray.base.htmlnumpy.ma.MaskedArray.ctypes =========================== attribute ma.MaskedArray.ctypes An object to simplify the interaction of the array with the ctypes module. This attribute creates an object that makes it easier to use arrays when calling shared libraries with the ctypes module. The returned object has, among others, data, shape, and strides attributes (see Notes below) which themselves return ctypes objects that can be used as arguments to a shared library. Parameters **None** Returns **c**Python object Possessing attributes data, shape, strides, etc. See also [`numpy.ctypeslib`](../routines.ctypeslib#module-numpy.ctypeslib "numpy.ctypeslib") #### Notes Below are the public attributes of this object which were documented in “Guide to NumPy” (we have omitted undocumented public attributes, as well as documented private attributes): _ctypes.data A pointer to the memory area of the array as a Python integer. This memory area may contain data that is not aligned, or not in correct byte-order. The memory area may not even be writeable. The array flags and data-type of this array should be respected when passing this attribute to arbitrary C-code to avoid trouble that can include Python crashing. User Beware! The value of this attribute is exactly the same as `self._array_interface_['data'][0]`. 
Note that unlike `data_as`, a reference will not be kept to the array: code like `ctypes.c_void_p((a + b).ctypes.data)` will result in a pointer to a deallocated array, and should be spelt `(a + b).ctypes.data_as(ctypes.c_void_p)` _ctypes.shape (c_intp*self.ndim): A ctypes array of length self.ndim where the basetype is the C-integer corresponding to `dtype('p')` on this platform (see [`c_intp`](../routines.ctypeslib#numpy.ctypeslib.c_intp "numpy.ctypeslib.c_intp")). This base-type could be [`ctypes.c_int`](https://docs.python.org/3/library/ctypes.html#ctypes.c_int "(in Python v3.10)"), [`ctypes.c_long`](https://docs.python.org/3/library/ctypes.html#ctypes.c_long "(in Python v3.10)"), or [`ctypes.c_longlong`](https://docs.python.org/3/library/ctypes.html#ctypes.c_longlong "(in Python v3.10)") depending on the platform. The ctypes array contains the shape of the underlying array. _ctypes.strides (c_intp*self.ndim): A ctypes array of length self.ndim where the basetype is the same as for the shape attribute. This ctypes array contains the strides information from the underlying array. This strides information is important for showing how many bytes must be jumped to get to the next element in the array. _ctypes.data_as(*obj*)[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/core/_internal.py#L267-L284) Return the data pointer cast to a particular c-types object. For example, calling `self._as_parameter_` is equivalent to `self.data_as(ctypes.c_void_p)`. Perhaps you want to use the data as a pointer to a ctypes array of floating-point data: `self.data_as(ctypes.POINTER(ctypes.c_double))`. The returned pointer will keep a reference to the array. _ctypes.shape_as(*obj*)[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/core/_internal.py#L286-L293) Return the shape tuple as an array of some other c-types type. For example: `self.shape_as(ctypes.c_short)`. 
_ctypes.strides_as(*obj*)[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/core/_internal.py#L295-L302) Return the strides tuple as an array of some other c-types type. For example: `self.strides_as(ctypes.c_longlong)`. If the ctypes module is not available, then the ctypes attribute of array objects still returns something useful, but ctypes objects are not returned and errors may be raised instead. In particular, the object will still have the `as_parameter` attribute which will return an integer equal to the data attribute. #### Examples ``` >>> import ctypes >>> x = np.array([[0, 1], [2, 3]], dtype=np.int32) >>> x array([[0, 1], [2, 3]], dtype=int32) >>> x.ctypes.data 31962608 # may vary >>> x.ctypes.data_as(ctypes.POINTER(ctypes.c_uint32)) <__main__.LP_c_uint object at 0x7ff2fc1fc200> # may vary >>> x.ctypes.data_as(ctypes.POINTER(ctypes.c_uint32)).contents c_uint(0) >>> x.ctypes.data_as(ctypes.POINTER(ctypes.c_uint64)).contents c_ulong(4294967296) >>> x.ctypes.shape <numpy.core._internal.c_long_Array_2 object at 0x7ff2fc1fce60> # may vary >>> x.ctypes.strides <numpy.core._internal.c_long_Array_2 object at 0x7ff2fc1ff320> # may vary ``` © 2005–2022 NumPy Developers Licensed under the 3-clause BSD License. <https://numpy.org/doc/1.23/reference/generated/numpy.ma.MaskedArray.ctypes.htmlnumpy.ma.MaskedArray.dtype ========================== property *property*ma.MaskedArray.dtype Data-type of the array’s elements. Warning Setting `arr.dtype` is discouraged and may be deprecated in the future. Setting will replace the `dtype` without modifying the memory (see also [`ndarray.view`](numpy.ndarray.view#numpy.ndarray.view "numpy.ndarray.view") and [`ndarray.astype`](numpy.ndarray.astype#numpy.ndarray.astype "numpy.ndarray.astype")). Parameters **None** Returns **d**numpy dtype object See also [`ndarray.astype`](numpy.ndarray.astype#numpy.ndarray.astype "numpy.ndarray.astype") Cast the values contained in the array to a new data-type. 
[`ndarray.view`](numpy.ndarray.view#numpy.ndarray.view "numpy.ndarray.view") Create a view of the same data but a different data-type. [`numpy.dtype`](numpy.dtype#numpy.dtype "numpy.dtype") #### Examples ``` >>> x array([[0, 1], [2, 3]]) >>> x.dtype dtype('int32') >>> type(x.dtype) <type 'numpy.dtype'``` © 2005–2022 NumPy Developers Licensed under the 3-clause BSD License. <https://numpy.org/doc/1.23/reference/generated/numpy.ma.MaskedArray.dtype.htmlnumpy.ma.MaskedArray.flags ========================== attribute ma.MaskedArray.flags Information about the memory layout of the array. #### Notes The [`flags`](#numpy.ma.MaskedArray.flags "numpy.ma.MaskedArray.flags") object can be accessed dictionary-like (as in `a.flags['WRITEABLE']`), or by using lowercased attribute names (as in `a.flags.writeable`). Short flag names are only supported in dictionary access. Only the WRITEBACKIFCOPY, WRITEABLE, and ALIGNED flags can be changed by the user, via direct assignment to the attribute or dictionary entry, or by calling [`ndarray.setflags`](numpy.ndarray.setflags#numpy.ndarray.setflags "numpy.ndarray.setflags"). The array flags cannot be set arbitrarily: * WRITEBACKIFCOPY can only be set `False`. * ALIGNED can only be set `True` if the data is truly aligned. * WRITEABLE can only be set `True` if the array owns its own memory or the ultimate owner of the memory exposes a writeable buffer interface or is a string. Arrays can be both C-style and Fortran-style contiguous simultaneously. This is clear for 1-dimensional arrays, but can also be true for higher dimensional arrays. Even for contiguous arrays a stride for a given dimension `arr.strides[dim]` may be *arbitrary* if `arr.shape[dim] == 1` or the array has no elements. It does *not* generally hold that `self.strides[-1] == self.itemsize` for C-style contiguous arrays or `self.strides[0] == self.itemsize` for Fortran-style contiguous arrays is true. 
Attributes **C_CONTIGUOUS (C)** The data is in a single, C-style contiguous segment. **F_CONTIGUOUS (F)** The data is in a single, Fortran-style contiguous segment. **OWNDATA (O)** The array owns the memory it uses or borrows it from another object. **WRITEABLE (W)** The data area can be written to. Setting this to False locks the data, making it read-only. A view (slice, etc.) inherits WRITEABLE from its base array at creation time, but a view of a writeable array may be subsequently locked while the base array remains writeable. (The opposite is not true, in that a view of a locked array may not be made writeable. However, currently, locking a base object does not lock any views that already reference it, so under that circumstance it is possible to alter the contents of a locked array via a previously created writeable view onto it.) Attempting to change a non-writeable array raises a RuntimeError exception. **ALIGNED (A)** The data and all elements are aligned appropriately for the hardware. **WRITEBACKIFCOPY (X)** This array is a copy of some other array. The C-API function PyArray_ResolveWritebackIfCopy must be called before deallocating to the base array will be updated with the contents of this array. **FNC** F_CONTIGUOUS and not C_CONTIGUOUS. **FORC** F_CONTIGUOUS or C_CONTIGUOUS (one-segment test). **BEHAVED (B)** ALIGNED and WRITEABLE. **CARRAY (CA)** BEHAVED and C_CONTIGUOUS. **FARRAY (FA)** BEHAVED and F_CONTIGUOUS and not C_CONTIGUOUS. © 2005–2022 NumPy Developers Licensed under the 3-clause BSD License. <https://numpy.org/doc/1.23/reference/generated/numpy.ma.MaskedArray.flags.htmlnumpy.ma.MaskedArray.itemsize ============================= attribute ma.MaskedArray.itemsize Length of one array element in bytes. #### Examples ``` >>> x = np.array([1,2,3], dtype=np.float64) >>> x.itemsize 8 >>> x = np.array([1,2,3], dtype=np.complex128) >>> x.itemsize 16 ``` © 2005–2022 NumPy Developers Licensed under the 3-clause BSD License. 
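The `MaskedArray` attribute pages in this stretch mirror their `ndarray` counterparts: the memory-layout attributes come straight from the underlying data array. A small sketch to make that concrete (illustrative only, not from the NumPy reference):

```python
import numpy as np
import numpy.ma as ma

m = ma.array([1.0, 2.0, 3.0], mask=[False, True, False])

# memory-layout attributes are inherited unchanged from ndarray
assert m.itemsize == 8                  # float64
assert m.ndim == 1 and m.size == 3
assert m.nbytes == m.size * m.itemsize  # the separate mask buffer is not counted
assert m.flags.c_contiguous
```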
<https://numpy.org/doc/1.23/reference/generated/numpy.ma.MaskedArray.itemsize.htmlnumpy.ma.MaskedArray.nbytes =========================== attribute ma.MaskedArray.nbytes Total bytes consumed by the elements of the array. #### Notes Does not include memory consumed by non-element attributes of the array object. #### Examples ``` >>> x = np.zeros((3,5,2), dtype=np.complex128) >>> x.nbytes 480 >>> np.prod(x.shape) * x.itemsize 480 ``` © 2005–2022 NumPy Developers Licensed under the 3-clause BSD License. <https://numpy.org/doc/1.23/reference/generated/numpy.ma.MaskedArray.nbytes.htmlnumpy.ma.MaskedArray.ndim ========================= attribute ma.MaskedArray.ndim Number of array dimensions. #### Examples ``` >>> x = np.array([1, 2, 3]) >>> x.ndim 1 >>> y = np.zeros((2, 3, 4)) >>> y.ndim 3 ``` © 2005–2022 NumPy Developers Licensed under the 3-clause BSD License. <https://numpy.org/doc/1.23/reference/generated/numpy.ma.MaskedArray.ndim.htmlnumpy.ma.MaskedArray.shape ========================== property *property*ma.MaskedArray.shape Tuple of array dimensions. The shape property is usually used to get the current shape of an array, but may also be used to reshape the array in-place by assigning a tuple of array dimensions to it. As with [`numpy.reshape`](numpy.reshape#numpy.reshape "numpy.reshape"), one of the new shape dimensions can be -1, in which case its value is inferred from the size of the array and the remaining dimensions. Reshaping an array in-place will fail if a copy is required. Warning Setting `arr.shape` is discouraged and may be deprecated in the future. Using [`ndarray.reshape`](numpy.ndarray.reshape#numpy.ndarray.reshape "numpy.ndarray.reshape") is the preferred approach. See also [`numpy.shape`](numpy.shape#numpy.shape "numpy.shape") Equivalent getter function. [`numpy.reshape`](numpy.reshape#numpy.reshape "numpy.reshape") Function similar to setting `shape`. 
[`ndarray.reshape`](numpy.ndarray.reshape#numpy.ndarray.reshape "numpy.ndarray.reshape") Method similar to setting `shape`. #### Examples ``` >>> x = np.array([1, 2, 3, 4]) >>> x.shape (4,) >>> y = np.zeros((2, 3, 4)) >>> y.shape (2, 3, 4) >>> y.shape = (3, 8) >>> y array([[ 0., 0., 0., 0., 0., 0., 0., 0.], [ 0., 0., 0., 0., 0., 0., 0., 0.], [ 0., 0., 0., 0., 0., 0., 0., 0.]]) >>> y.shape = (3, 6) Traceback (most recent call last): File "<stdin>", line 1, in <module> ValueError: total size of new array must be unchanged >>> np.zeros((4,2))[::2].shape = (-1,) Traceback (most recent call last): File "<stdin>", line 1, in <module> AttributeError: Incompatible shape for in-place modification. Use `.reshape()` to make a copy with the desired shape. ``` © 2005–2022 NumPy Developers Licensed under the 3-clause BSD License. <https://numpy.org/doc/1.23/reference/generated/numpy.ma.MaskedArray.shape.htmlnumpy.ma.MaskedArray.size ========================= attribute ma.MaskedArray.size Number of elements in the array. Equal to `np.prod(a.shape)`, i.e., the product of the array’s dimensions. #### Notes `a.size` returns a standard arbitrary precision Python integer. This may not be the case with other methods of obtaining the same value (like the suggested `np.prod(a.shape)`, which returns an instance of `np.int_`), and may be relevant if the value is used further in calculations that may overflow a fixed size integer type. #### Examples ``` >>> x = np.zeros((3, 5, 2), dtype=np.complex128) >>> x.size 30 >>> np.prod(x.shape) 30 ``` © 2005–2022 NumPy Developers Licensed under the 3-clause BSD License. <https://numpy.org/doc/1.23/reference/generated/numpy.ma.MaskedArray.size.htmlnumpy.ma.MaskedArray.strides ============================ attribute ma.MaskedArray.strides Tuple of bytes to step in each dimension when traversing an array. 
The byte offset of element `(i[0], i[1], ..., i[n])` in an array `a` is: ``` offset = sum(np.array(i) * a.strides) ``` A more detailed explanation of strides can be found in the “ndarray.rst” file in the NumPy reference guide. Warning Setting `arr.strides` is discouraged and may be deprecated in the future. [`numpy.lib.stride_tricks.as_strided`](numpy.lib.stride_tricks.as_strided#numpy.lib.stride_tricks.as_strided "numpy.lib.stride_tricks.as_strided") should be preferred to create a new view of the same data in a safer way. See also [`numpy.lib.stride_tricks.as_strided`](numpy.lib.stride_tricks.as_strided#numpy.lib.stride_tricks.as_strided "numpy.lib.stride_tricks.as_strided") #### Notes Imagine an array of 32-bit integers (each 4 bytes): ``` x = np.array([[0, 1, 2, 3, 4], [5, 6, 7, 8, 9]], dtype=np.int32) ``` This array is stored in memory as 40 bytes, one after the other (known as a contiguous block of memory). The strides of an array tell us how many bytes we have to skip in memory to move to the next position along a certain axis. For example, we have to skip 4 bytes (1 value) to move to the next column, but 20 bytes (5 values) to get to the same position in the next row. As such, the strides for the array `x` will be `(20, 4)`. #### Examples ``` >>> y = np.reshape(np.arange(2*3*4), (2,3,4)) >>> y array([[[ 0, 1, 2, 3], [ 4, 5, 6, 7], [ 8, 9, 10, 11]], [[12, 13, 14, 15], [16, 17, 18, 19], [20, 21, 22, 23]]]) >>> y.strides (48, 16, 4) >>> y[1,1,1] 17 >>> offset=sum(y.strides * np.array((1,1,1))) >>> offset/y.itemsize 17 ``` ``` >>> x = np.reshape(np.arange(5*6*7*8), (5,6,7,8)).transpose(2,3,1,0) >>> x.strides (32, 4, 224, 1344) >>> i = np.array([3,5,2,2]) >>> offset = sum(i * x.strides) >>> x[3,5,2,2] 813 >>> offset / x.itemsize 813 ``` © 2005–2022 NumPy Developers Licensed under the 3-clause BSD License. 
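The `as_strided` function recommended above builds a new view purely from a shape and a strides tuple. A classic use is a zero-copy sliding window, sketched below; this example is illustrative and assumes a 1-D contiguous input (newer NumPy versions also offer `sliding_window_view` for this purpose).

```python
import numpy as np
from numpy.lib.stride_tricks import as_strided

x = np.arange(8)
win = 3
# Successive windows start one element apart, so both output axes can
# reuse the original stride; no data is copied.
view = as_strided(x,
                  shape=(len(x) - win + 1, win),
                  strides=(x.strides[0], x.strides[0]))
assert view.shape == (6, 3)
assert view[0].tolist() == [0, 1, 2]
assert view[-1].tolist() == [5, 6, 7]
```

Because the windows overlap in memory, the result should be treated as read-only: writing through one window silently changes its neighbours.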
<https://numpy.org/doc/1.23/reference/generated/numpy.ma.MaskedArray.strides.html>

numpy.ma.MaskedArray.imag
=========================

property

*property* ma.MaskedArray.imag

The imaginary part of the masked array. This property is a view on the imaginary part of this `MaskedArray`.

See also

[`real`](numpy.real#numpy.real "numpy.real")

#### Examples

```
>>> x = np.ma.array([1+1.j, -2j, 3.45+1.6j], mask=[False, True, False])
>>> x.imag
masked_array(data=[1.0, --, 1.6],
             mask=[False, True, False],
       fill_value=1e+20)
```

<https://numpy.org/doc/1.23/reference/generated/numpy.ma.MaskedArray.imag.html>

numpy.ma.MaskedArray.real
=========================

property

*property* ma.MaskedArray.real

The real part of the masked array. This property is a view on the real part of this `MaskedArray`.

See also

[`imag`](numpy.imag#numpy.imag "numpy.imag")

#### Examples

```
>>> x = np.ma.array([1+1.j, -2j, 3.45+1.6j], mask=[False, True, False])
>>> x.real
masked_array(data=[1.0, --, 3.45],
             mask=[False, True, False],
       fill_value=1e+20)
```

<https://numpy.org/doc/1.23/reference/generated/numpy.ma.MaskedArray.real.html>

numpy.ma.MaskedArray.flat
=========================

property

*property* ma.MaskedArray.flat

Return a flat iterator, or set a flattened version of self to value.

<https://numpy.org/doc/1.23/reference/generated/numpy.ma.MaskedArray.flat.html>

numpy.ma.MaskedArray.__array_priority__
=======================================

attribute

ma.MaskedArray.__array_priority__ = 15
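Since `real` and `imag` are documented as views, writes made through them propagate back to the parent masked array, and the mask is carried along. A sketch of that behaviour, assuming the view semantics described above (not an example from the NumPy reference):

```python
import numpy as np
import numpy.ma as ma

x = ma.array([1 + 1.j, -2j, 3.45 + 1.6j], mask=[False, True, False])
r = x.real                   # a view: shares the parent's buffer and mask
r[0] = 10.0                  # writing through the view updates the parent
assert x[0] == 10.0 + 1.0j
assert r.mask.tolist() == [False, True, False]
```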
<https://numpy.org/doc/1.23/reference/generated/numpy.ma.MaskedArray.__array_priority__.htmlnumpy.ma.MaskedArray.__float__ ================================== method ma.MaskedArray.__float__()[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/ma/core.py#L4407-L4418) Convert to float. © 2005–2022 NumPy Developers Licensed under the 3-clause BSD License. <https://numpy.org/doc/1.23/reference/generated/numpy.ma.MaskedArray.__float__.htmlnumpy.ma.MaskedArray.__int__ ================================ method ma.MaskedArray.__int__()[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/ma/core.py#L4420-L4430) Convert to int. © 2005–2022 NumPy Developers Licensed under the 3-clause BSD License. <https://numpy.org/doc/1.23/reference/generated/numpy.ma.MaskedArray.__int__.htmlnumpy.ma.MaskedArray.view ========================= method ma.MaskedArray.view(*dtype=None*, *type=None*, *fill_value=None*)[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/ma/core.py#L3121-L3209) Return a view of the MaskedArray data. Parameters **dtype**data-type or ndarray sub-class, optional Data-type descriptor of the returned view, e.g., float32 or int16. The default, None, results in the view having the same data-type as `a`. As with `ndarray.view`, dtype can also be specified as an ndarray sub-class, which then specifies the type of the returned object (this is equivalent to setting the `type` parameter). **type**Python type, optional Type of the returned view, either ndarray or a subclass. The default None results in type preservation. **fill_value**scalar, optional The value to use for invalid entries (None by default). If None, then this argument is inferred from the passed [`dtype`](numpy.dtype#numpy.dtype "numpy.dtype"), or in its absence the original array, as discussed in the notes below. See also [`numpy.ndarray.view`](numpy.ndarray.view#numpy.ndarray.view "numpy.ndarray.view") Equivalent method on ndarray object. 
#### Notes `a.view()` is used two different ways: `a.view(some_dtype)` or `a.view(dtype=some_dtype)` constructs a view of the array’s memory with a different data-type. This can cause a reinterpretation of the bytes of memory. `a.view(ndarray_subclass)` or `a.view(type=ndarray_subclass)` just returns an instance of `ndarray_subclass` that looks at the same array (same shape, dtype, etc.) This does not cause a reinterpretation of the memory. If [`fill_value`](../maskedarray.baseclass#numpy.ma.MaskedArray.fill_value "numpy.ma.MaskedArray.fill_value") is not specified, but [`dtype`](numpy.dtype#numpy.dtype "numpy.dtype") is specified (and is not an ndarray sub-class), the [`fill_value`](../maskedarray.baseclass#numpy.ma.MaskedArray.fill_value "numpy.ma.MaskedArray.fill_value") of the MaskedArray will be reset. If neither [`fill_value`](../maskedarray.baseclass#numpy.ma.MaskedArray.fill_value "numpy.ma.MaskedArray.fill_value") nor [`dtype`](numpy.dtype#numpy.dtype "numpy.dtype") are specified (or if [`dtype`](numpy.dtype#numpy.dtype "numpy.dtype") is an ndarray sub-class), then the fill value is preserved. Finally, if [`fill_value`](../maskedarray.baseclass#numpy.ma.MaskedArray.fill_value "numpy.ma.MaskedArray.fill_value") is specified, but [`dtype`](numpy.dtype#numpy.dtype "numpy.dtype") is not, the fill value is set to the specified value. For `a.view(some_dtype)`, if `some_dtype` has a different number of bytes per entry than the previous dtype (for example, converting a regular array to a structured array), then the behavior of the view cannot be predicted just from the superficial appearance of `a` (shown by `print(a)`). It also depends on exactly how `a` is stored in memory. Therefore if `a` is C-ordered versus fortran-ordered, versus defined as a slice or transpose, etc., the view may give different results. © 2005–2022 NumPy Developers Licensed under the 3-clause BSD License. 
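The two uses of `view` described in the notes above can be shown side by side: passing a dtype reinterprets the same bytes, while passing an ndarray subclass changes only the Python type of the wrapper. An illustrative sketch (little-endian byte layout is assumed but not relied on):

```python
import numpy as np
import numpy.ma as ma

a = np.array([1, 2, 3], dtype=np.int64)
v = a.view(np.int32)         # same bytes reinterpreted: element count doubles
assert v.size == 2 * a.size

m = ma.array([1, 2, 3], mask=[False, True, False])
plain = m.view(np.ndarray)   # type-only view: strips the MaskedArray wrapper
assert type(plain) is np.ndarray
assert plain.tolist() == [1, 2, 3]
```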
<https://numpy.org/doc/1.23/reference/generated/numpy.ma.MaskedArray.view.htmlnumpy.ma.MaskedArray.astype =========================== method ma.MaskedArray.astype(*dtype*, *order='K'*, *casting='unsafe'*, *subok=True*, *copy=True*) Copy of the array, cast to a specified type. Parameters **dtype**str or dtype Typecode or data-type to which the array is cast. **order**{‘C’, ‘F’, ‘A’, ‘K’}, optional Controls the memory layout order of the result. ‘C’ means C order, ‘F’ means Fortran order, ‘A’ means ‘F’ order if all the arrays are Fortran contiguous, ‘C’ order otherwise, and ‘K’ means as close to the order the array elements appear in memory as possible. Default is ‘K’. **casting**{‘no’, ‘equiv’, ‘safe’, ‘same_kind’, ‘unsafe’}, optional Controls what kind of data casting may occur. Defaults to ‘unsafe’ for backwards compatibility. * ‘no’ means the data types should not be cast at all. * ‘equiv’ means only byte-order changes are allowed. * ‘safe’ means only casts which can preserve values are allowed. * ‘same_kind’ means only safe casts or casts within a kind, like float64 to float32, are allowed. * ‘unsafe’ means any data conversions may be done. **subok**bool, optional If True, then sub-classes will be passed-through (default), otherwise the returned array will be forced to be a base-class array. **copy**bool, optional By default, astype always returns a newly allocated array. If this is set to false, and the [`dtype`](numpy.dtype#numpy.dtype "numpy.dtype"), `order`, and `subok` requirements are satisfied, the input array is returned instead of a copy. Returns **arr_t**ndarray Unless [`copy`](numpy.copy#numpy.copy "numpy.copy") is False and the other conditions for returning the input array are satisfied (see description for [`copy`](numpy.copy#numpy.copy "numpy.copy") input parameter), `arr_t` is a new array of the same shape as the input array, with dtype, order given by [`dtype`](numpy.dtype#numpy.dtype "numpy.dtype"), `order`. 
Raises

ComplexWarning

When casting from complex to float or int. To avoid this, one should use `a.real.astype(t)`.

#### Notes

Changed in version 1.17.0: Casting between a simple data type and a structured one is possible only for “unsafe” casting. Casting to multiple fields is allowed, but casting from multiple fields is not.

Changed in version 1.9.0: Casting from numeric to string types in ‘safe’ casting mode requires that the string dtype length is long enough to store the max integer/float value converted.

#### Examples

```
>>> x = np.array([1, 2, 2.5])
>>> x
array([1. , 2. , 2.5])
```

```
>>> x.astype(int)
array([1, 2, 2])
```

numpy.ma.MaskedArray.byteswap
=============================

method

ma.MaskedArray.byteswap(*inplace=False*)

Swap the bytes of the array elements.

Toggle between low-endian and big-endian data representation by returning a byteswapped array, optionally swapped in-place. Arrays of byte-strings are not swapped. The real and imaginary parts of a complex number are swapped individually.

Parameters

**inplace**bool, optional

If `True`, swap bytes in-place, default is `False`.

Returns

**out**ndarray

The byteswapped array. If `inplace` is `True`, this is a view to self.
#### Examples

```
>>> A = np.array([1, 256, 8755], dtype=np.int16)
>>> list(map(hex, A))
['0x1', '0x100', '0x2233']
>>> A.byteswap(inplace=True)
array([  256,     1, 13090], dtype=int16)
>>> list(map(hex, A))
['0x100', '0x1', '0x3322']
```

Arrays of byte-strings are not swapped

```
>>> A = np.array([b'ceg', b'fac'])
>>> A.byteswap()
array([b'ceg', b'fac'], dtype='|S3')
```

`A.newbyteorder().byteswap()` produces an array with the same values but different representation in memory

```
>>> A = np.array([1, 2, 3])
>>> A.view(np.uint8)
array([1, 0, 0, 0, 0, 0, 0, 0, 2, 0, 0, 0, 0, 0, 0, 0, 3, 0, 0, 0,
       0, 0, 0, 0], dtype=uint8)
>>> A.newbyteorder().byteswap(inplace=True)
array([1, 2, 3])
>>> A.view(np.uint8)
array([0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 2, 0, 0, 0, 0,
       0, 0, 0, 3], dtype=uint8)
```

numpy.ma.MaskedArray.toflex
===========================

method

ma.MaskedArray.toflex()[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/ma/core.py#L6169-L6226)

Transforms a masked array into a flexible-type array.

The flexible type array that is returned will have two fields:

* the `_data` field stores the `_data` part of the array.
* the `_mask` field stores the `_mask` part of the array.

Parameters

**None**

Returns

**record**ndarray

A new flexible-type [`ndarray`](numpy.ndarray#numpy.ndarray "numpy.ndarray") with two fields: the first element containing a value, the second element containing the corresponding mask boolean. The returned record shape matches self.shape.

#### Notes

A side-effect of transforming a masked array into a flexible [`ndarray`](numpy.ndarray#numpy.ndarray "numpy.ndarray") is that meta information (`fill_value`,
) will be lost.

#### Examples

```
>>> x = np.ma.array([[1,2,3],[4,5,6],[7,8,9]], mask=[0] + [1,0]*4)
>>> x
masked_array(
  data=[[1, --, 3],
        [--, 5, --],
        [7, --, 9]],
  mask=[[False,  True, False],
        [ True, False,  True],
        [False,  True, False]],
  fill_value=999999)
>>> x.toflex()
array([[(1, False), (2,  True), (3, False)],
       [(4,  True), (5, False), (6,  True)],
       [(7, False), (8,  True), (9, False)]],
      dtype=[('_data', '<i8'), ('_mask', '?')])
```

numpy.ma.MaskedArray.tostring
=============================

method

ma.MaskedArray.tostring(*fill_value=None*, *order='C'*)[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/ma/core.py#L6099-L6112)

A compatibility alias for [`tobytes`](numpy.ma.maskedarray.tobytes#numpy.ma.MaskedArray.tobytes "numpy.ma.MaskedArray.tobytes"), with exactly the same behavior.

Despite its name, it returns `bytes` not [`str`](https://docs.python.org/3/library/stdtypes.html#str "(in Python v3.10)")s.

Deprecated since version 1.19.0.

numpy.ma.MaskedArray.T
======================

property

*property*ma.MaskedArray.T

The transposed array.

Same as `self.transpose()`.

See also

[`transpose`](numpy.transpose#numpy.transpose "numpy.transpose")

#### Examples

```
>>> x = np.array([[1.,2.],[3.,4.]])
>>> x
array([[ 1.,  2.],
       [ 3.,  4.]])
>>> x.T
array([[ 1.,  3.],
       [ 2.,  4.]])
>>> x = np.array([1.,2.,3.,4.])
>>> x
array([ 1.,  2.,  3.,  4.])
>>> x.T
array([ 1.,  2.,  3.,  4.])
```
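The flexible-type array produced by `toflex` above can be turned back into a masked array with `numpy.ma.fromflex`; a small round-trip sketch (recall from the notes that `fill_value` is not preserved):

```python
import numpy as np

x = np.ma.array([1, 2, 3], mask=[0, 1, 0])
rec = x.toflex()            # structured array with _data and _mask fields
y = np.ma.fromflex(rec)     # rebuild an equivalent masked array

print(rec.dtype.names)      # ('_data', '_mask')
print(y)                    # [1 -- 3]
```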
numpy.ma.MaskedArray.choose
===========================

method

ma.MaskedArray.choose(*choices*, *out=None*, *mode='raise'*)

Use an index array to construct a new array from a set of choices.

Refer to [`numpy.choose`](numpy.choose#numpy.choose "numpy.choose") for full documentation.

See also

[`numpy.choose`](numpy.choose#numpy.choose "numpy.choose")

equivalent function

numpy.ma.MaskedArray.compress
=============================

method

ma.MaskedArray.compress(*condition*, *axis=None*, *out=None*)[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/ma/core.py#L3883-L3953)

Return `a` where condition is `True`.

If condition is a [`MaskedArray`](../maskedarray.baseclass#numpy.ma.MaskedArray "numpy.ma.MaskedArray"), missing values are considered as `False`.

Parameters

**condition**var

Boolean 1-d array selecting which entries to return. If len(condition) is less than the size of a along the axis, then the output is truncated to the length of the condition array.

**axis**{None, int}, optional

Axis along which the operation must be performed.

**out**{None, ndarray}, optional

Alternative output array in which to place the result. It must have the same shape as the expected output but the type will be cast if necessary.

Returns

**result**MaskedArray

A [`MaskedArray`](../maskedarray.baseclass#numpy.ma.MaskedArray "numpy.ma.MaskedArray") object.

#### Notes

Please note the difference with [`compressed`](numpy.ma.maskedarray.compressed#numpy.ma.MaskedArray.compressed "numpy.ma.MaskedArray.compressed")! The output of [`compress`](numpy.compress#numpy.compress "numpy.compress") has a mask, the output of [`compressed`](numpy.ma.maskedarray.compressed#numpy.ma.MaskedArray.compressed "numpy.ma.MaskedArray.compressed") does not.
#### Examples

```
>>> x = np.ma.array([[1,2,3],[4,5,6],[7,8,9]], mask=[0] + [1,0]*4)
>>> x
masked_array(
  data=[[1, --, 3],
        [--, 5, --],
        [7, --, 9]],
  mask=[[False,  True, False],
        [ True, False,  True],
        [False,  True, False]],
  fill_value=999999)
>>> x.compress([1, 0, 1])
masked_array(data=[1, 3], mask=[False, False], fill_value=999999)
```

```
>>> x.compress([1, 0, 1], axis=1)
masked_array(
  data=[[1, 3],
        [--, --],
        [7, 9]],
  mask=[[False, False],
        [ True,  True],
        [False, False]],
  fill_value=999999)
```

numpy.ma.MaskedArray.diagonal
=============================

method

ma.MaskedArray.diagonal(*offset=0*, *axis1=0*, *axis2=1*)[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/ma/core.py#L2577-L2587)

Return specified diagonals.

In NumPy 1.9 the returned array is a read-only view instead of a copy as in previous NumPy versions. In a future version the read-only restriction will be removed.

Refer to [`numpy.diagonal`](numpy.diagonal#numpy.diagonal "numpy.diagonal") for full documentation.

See also

[`numpy.diagonal`](numpy.diagonal#numpy.diagonal "numpy.diagonal")

equivalent function

numpy.ma.MaskedArray.fill
=========================

method

ma.MaskedArray.fill(*value*)

Fill the array with a scalar value.

Parameters

**value**scalar

All elements of `a` will be assigned this value.

#### Examples

```
>>> a = np.array([1, 2])
>>> a.fill(0)
>>> a
array([0, 0])
>>> a = np.empty(2)
>>> a.fill(1)
>>> a
array([1., 1.])
```
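For the `diagonal` method above, the masked-array version carries the mask of the diagonal entries through to the result; a small sketch of our own:

```python
import numpy as np

x = np.ma.array([[1, 2], [3, 4]], mask=[[True, False], [False, False]])
d = x.diagonal()

print(d)   # [-- 4]: the masked top-left element stays masked
```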
numpy.ma.MaskedArray.item
=========================

method

ma.MaskedArray.item(**args*)

Copy an element of an array to a standard Python scalar and return it.

Parameters

***args**Arguments (variable number and type)

* none: in this case, the method only works for arrays with one element (`a.size == 1`), which element is copied into a standard Python scalar object and returned.
* int_type: this argument is interpreted as a flat index into the array, specifying which element to copy and return.
* tuple of int_types: functions as does a single int_type argument, except that the argument is interpreted as an nd-index into the array.

Returns

**z**Standard Python scalar object

A copy of the specified element of the array as a suitable Python scalar

#### Notes

When the data type of `a` is longdouble or clongdouble, item() returns a scalar array object because there is no available Python scalar that would not lose information.

Void arrays return a buffer object for item(), unless fields are defined, in which case a tuple is returned.

[`item`](#numpy.ma.MaskedArray.item "numpy.ma.MaskedArray.item") is very similar to a[args], except, instead of an array scalar, a standard Python scalar is returned. This can be useful for speeding up access to elements of the array and doing arithmetic on elements of the array using Python’s optimized math.

#### Examples

```
>>> np.random.seed(123)
>>> x = np.random.randint(9, size=(3, 3))
>>> x
array([[2, 2, 6],
       [1, 3, 6],
       [1, 0, 1]])
>>> x.item(3)
1
>>> x.item(7)
0
>>> x.item((0, 1))
2
>>> x.item((2, 2))
1
```
numpy.ma.MaskedArray.put
========================

method

ma.MaskedArray.put(*indices*, *values*, *mode='raise'*)[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/ma/core.py#L4723-L4806)

Set storage-indexed locations to corresponding values.

Sets self._data.flat[n] = values[n] for each n in indices. If `values` is shorter than [`indices`](numpy.indices#numpy.indices "numpy.indices") then it will repeat. If `values` has some masked values, the initial mask is updated accordingly; otherwise the corresponding values are unmasked.

Parameters

**indices**1-D array_like

Target indices, interpreted as integers.

**values**array_like

Values to place in self._data copy at target indices.

**mode**{‘raise’, ‘wrap’, ‘clip’}, optional

Specifies how out-of-bounds indices will behave. ‘raise’ : raise an error. ‘wrap’ : wrap around. ‘clip’ : clip to the range.

#### Notes

`values` can be a scalar or length 1 array.

#### Examples

```
>>> x = np.ma.array([[1,2,3],[4,5,6],[7,8,9]], mask=[0] + [1,0]*4)
>>> x
masked_array(
  data=[[1, --, 3],
        [--, 5, --],
        [7, --, 9]],
  mask=[[False,  True, False],
        [ True, False,  True],
        [False,  True, False]],
  fill_value=999999)
>>> x.put([0,4,8],[10,20,30])
>>> x
masked_array(
  data=[[10, --, 3],
        [--, 20, --],
        [7, --, 30]],
  mask=[[False,  True, False],
        [ True, False,  True],
        [False,  True, False]],
  fill_value=999999)
```

```
>>> x.put(4,999)
>>> x
masked_array(
  data=[[10, --, 3],
        [--, 999, --],
        [7, --, 30]],
  mask=[[False,  True, False],
        [ True, False,  True],
        [False,  True, False]],
  fill_value=999999)
```

numpy.ma.MaskedArray.repeat
===========================

method

ma.MaskedArray.repeat(*repeats*, *axis=None*)[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/ma/core.py#L2577-L2587)

Repeat elements of an array.
Refer to [`numpy.repeat`](numpy.repeat#numpy.repeat "numpy.repeat") for full documentation.

See also

[`numpy.repeat`](numpy.repeat#numpy.repeat "numpy.repeat")

equivalent function

numpy.ma.MaskedArray.searchsorted
=================================

method

ma.MaskedArray.searchsorted(*v*, *side='left'*, *sorter=None*)

Find indices where elements of v should be inserted in a to maintain order.

For full documentation, see [`numpy.searchsorted`](numpy.searchsorted#numpy.searchsorted "numpy.searchsorted")

See also

[`numpy.searchsorted`](numpy.searchsorted#numpy.searchsorted "numpy.searchsorted")

equivalent function

numpy.ma.MaskedArray.take
=========================

method

ma.MaskedArray.take(*indices*, *axis=None*, *out=None*, *mode='raise'*)[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/ma/core.py#L6012-L6036)

numpy.ma.MaskedArray.dump
=========================

method

ma.MaskedArray.dump(*file*)

Dump a pickle of the array to the specified file. The array can be read back with pickle.load or numpy.load.

Parameters

**file**str or Path

A string naming the dump file.

Changed in version 1.17.0: [`pathlib.Path`](https://docs.python.org/3/library/pathlib.html#pathlib.Path "(in Python v3.10)") objects are now accepted.

numpy.ma.MaskedArray.dumps
==========================

method

ma.MaskedArray.dumps()

Returns the pickle of the array as a string.
pickle.loads will convert the string back to an array.

Parameters

**None**

numpy.ma.MaskedArray.conj
=========================

method

ma.MaskedArray.conj()

Complex-conjugate all elements.

Refer to [`numpy.conjugate`](numpy.conjugate#numpy.conjugate "numpy.conjugate") for full documentation.

See also

[`numpy.conjugate`](numpy.conjugate#numpy.conjugate "numpy.conjugate")

equivalent function

numpy.ma.MaskedArray.conjugate
==============================

method

ma.MaskedArray.conjugate()

Return the complex conjugate, element-wise.

Refer to [`numpy.conjugate`](numpy.conjugate#numpy.conjugate "numpy.conjugate") for full documentation.

See also

[`numpy.conjugate`](numpy.conjugate#numpy.conjugate "numpy.conjugate")

equivalent function

numpy.ma.MaskedArray.product
============================

method

ma.MaskedArray.product(*axis=None*, *dtype=None*, *out=None*, *keepdims=<no value>*)[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/ma/core.py#L5186-L5225)

Return the product of the array elements over the given axis.

Masked elements are set to 1 internally for computation.

Refer to [`numpy.prod`](numpy.prod#numpy.prod "numpy.prod") for full documentation.

See also

[`numpy.ndarray.prod`](numpy.ndarray.prod#numpy.ndarray.prod "numpy.ndarray.prod")

corresponding function for ndarrays

[`numpy.prod`](numpy.prod#numpy.prod "numpy.prod")

equivalent function

#### Notes

Arithmetic is modular when using integer types, and no error is raised on overflow.
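Because masked elements are set to 1 internally, they simply drop out of the product; a quick sketch of our own:

```python
import numpy as np

x = np.ma.array([2, 3, 4], mask=[False, True, False])

print(x.product())   # 8: the masked 3 is treated as 1
```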
numpy.ma.MaskedArray.__lt__
===========================

method

ma.MaskedArray.__lt__(*value*, */*)

Return self<value.

numpy.ma.MaskedArray.__le__
===========================

method

ma.MaskedArray.__le__(*value*, */*)

Return self<=value.

numpy.ma.MaskedArray.__gt__
===========================

method

ma.MaskedArray.__gt__(*value*, */*)

Return self>value.

numpy.ma.MaskedArray.__ge__
===========================

method

ma.MaskedArray.__ge__(*value*, */*)

Return self>=value.

numpy.ma.MaskedArray.__eq__
===========================

method

ma.MaskedArray.__eq__(*other*)[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/ma/core.py#L4151-L4162)

Check whether other equals self elementwise.

When either of the elements is masked, the result is masked as well, but the underlying boolean data are still set, with self and other considered equal if both are masked, and unequal otherwise.

For structured arrays, all fields are combined, with masked values ignored. The result is masked if all fields were masked, with self and other considered equal only if both were fully masked.
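A sketch of the `__eq__` semantics described above: masked pairs compare as masked, but the underlying boolean data records them as equal.

```python
import numpy as np

a = np.ma.array([1, 2, 3], mask=[False, False, True])
b = np.ma.array([1, 9, 99], mask=[False, False, True])
eq = (a == b)

print(eq)   # [True False --]
# The masked pair is "considered equal" in the underlying data even
# though the hidden values (3 and 99) differ.
print(bool(eq.data[2]))   # True
```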
numpy.ma.MaskedArray.__ne__
===========================

method

ma.MaskedArray.__ne__(*other*)[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/ma/core.py#L4164-L4175)

Check whether other does not equal self elementwise.

When either of the elements is masked, the result is masked as well, but the underlying boolean data are still set, with self and other considered equal if both are masked, and unequal otherwise.

For structured arrays, all fields are combined, with masked values ignored. The result is masked if all fields were masked, with self and other considered equal only if both were fully masked.

numpy.ma.MaskedArray.__bool__
=============================

method

ma.MaskedArray.__bool__(*/*)

True if self else False

numpy.ma.MaskedArray.__abs__
============================

method

ma.MaskedArray.__abs__(*self*)

numpy.ma.MaskedArray.__add__
============================

method

ma.MaskedArray.__add__(*other*)[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/ma/core.py#L4177-L4184)

Add self to other, and return a new masked array.
numpy.ma.MaskedArray.__radd__
=============================

method

ma.MaskedArray.__radd__(*other*)[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/ma/core.py#L4186-L4193)

Add other to self, and return a new masked array.

numpy.ma.MaskedArray.__sub__
============================

method

ma.MaskedArray.__sub__(*other*)[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/ma/core.py#L4195-L4202)

Subtract other from self, and return a new masked array.

numpy.ma.MaskedArray.__rsub__
=============================

method

ma.MaskedArray.__rsub__(*other*)[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/ma/core.py#L4204-L4209)

Subtract self from other, and return a new masked array.

numpy.ma.MaskedArray.__mul__
============================

method

ma.MaskedArray.__mul__(*other*)[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/ma/core.py#L4211-L4215)

Multiply self by other, and return a new masked array.

numpy.ma.MaskedArray.__rmul__
=============================

method

ma.MaskedArray.__rmul__(*other*)[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/ma/core.py#L4217-L4224)

Multiply other by self, and return a new masked array.
numpy.ma.MaskedArray.__div__
============================

method

ma.MaskedArray.__div__(*other*)[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/ma/core.py#L4226-L4233)

Divide other into self, and return a new masked array.

numpy.ma.MaskedArray.__truediv__
================================

method

ma.MaskedArray.__truediv__(*other*)[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/ma/core.py#L4235-L4242)

Divide other into self, and return a new masked array.

numpy.ma.MaskedArray.__rtruediv__
=================================

method

ma.MaskedArray.__rtruediv__(*other*)[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/ma/core.py#L4244-L4249)

Divide self into other, and return a new masked array.

numpy.ma.MaskedArray.__floordiv__
=================================

method

ma.MaskedArray.__floordiv__(*other*)[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/ma/core.py#L4251-L4258)

Divide other into self, and return a new masked array.

numpy.ma.MaskedArray.__rfloordiv__
==================================

method

ma.MaskedArray.__rfloordiv__(*other*)[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/ma/core.py#L4260-L4265)

Divide self into other, and return a new masked array.
numpy.ma.MaskedArray.__mod__
============================

method

ma.MaskedArray.__mod__(*value*, */*)

Return self%value.

numpy.ma.MaskedArray.__rmod__
=============================

method

ma.MaskedArray.__rmod__(*value*, */*)

Return value%self.

numpy.ma.MaskedArray.__divmod__
===============================

method

ma.MaskedArray.__divmod__(*value*, */*)

Return divmod(self, value).

numpy.ma.MaskedArray.__rdivmod__
================================

method

ma.MaskedArray.__rdivmod__(*value*, */*)

Return divmod(value, self).

numpy.ma.MaskedArray.__pow__
============================

method

ma.MaskedArray.__pow__(*other*)[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/ma/core.py#L4267-L4274)

Raise self to the power other, masking the potential NaNs/Infs
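The "masking the potential NaNs/Infs" note on `__pow__` means invalid results are masked rather than propagated as nan; a sketch of our own:

```python
import numpy as np

x = np.ma.array([-1.0, 4.0])
r = x ** 0.5

print(r)   # [-- 2.0]: the invalid (-1.0)**0.5 is masked, not nan
```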
numpy.ma.MaskedArray.__rpow__
=============================

method

ma.MaskedArray.__rpow__(*other*)[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/ma/core.py#L4276-L4281)

Raise other to the power self, masking the potential NaNs/Infs

numpy.ma.MaskedArray.__lshift__
===============================

method

ma.MaskedArray.__lshift__(*value*, */*)

Return self<<value.

numpy.ma.MaskedArray.__rlshift__
================================

method

ma.MaskedArray.__rlshift__(*value*, */*)

Return value<<self.

numpy.ma.MaskedArray.__rshift__
===============================

method

ma.MaskedArray.__rshift__(*value*, */*)

Return self>>value.

numpy.ma.MaskedArray.__rrshift__
================================

method

ma.MaskedArray.__rrshift__(*value*, */*)

Return value>>self.

numpy.ma.MaskedArray.__and__
============================

method

ma.MaskedArray.__and__(*value*, */*)

Return self&value.
numpy.ma.MaskedArray.__rand__
=============================

method

ma.MaskedArray.__rand__(*value*, */*)

Return value&self.

numpy.ma.MaskedArray.__or__
===========================

method

ma.MaskedArray.__or__(*value*, */*)

Return self|value.

numpy.ma.MaskedArray.__ror__
============================

method

ma.MaskedArray.__ror__(*value*, */*)

Return value|self.

numpy.ma.MaskedArray.__xor__
============================

method

ma.MaskedArray.__xor__(*value*, */*)

Return self^value.

numpy.ma.MaskedArray.__rxor__
=============================

method

ma.MaskedArray.__rxor__(*value*, */*)

Return value^self.

numpy.ma.MaskedArray.__iadd__
=============================

method

ma.MaskedArray.__iadd__(*other*)[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/ma/core.py#L4283-L4298)

Add other to self in-place.
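The in-place operators update only the unmasked entries and leave the mask intact; a sketch of our own for `__iadd__`:

```python
import numpy as np

x = np.ma.array([1.0, 2.0, 3.0], mask=[False, True, False])
x += 10    # calls MaskedArray.__iadd__

print(x)   # [11.0 -- 13.0]
```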
numpy.ma.MaskedArray.__isub__
=============================

method

ma.MaskedArray.__isub__(*other*)[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/ma/core.py#L4300-L4314)

Subtract other from self in-place.

numpy.ma.MaskedArray.__imul__
=============================

method

ma.MaskedArray.__imul__(*other*)[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/ma/core.py#L4316-L4330)

Multiply self by other in-place.

numpy.ma.MaskedArray.__idiv__
=============================

method

ma.MaskedArray.__idiv__(*other*)[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/ma/core.py#L4332-L4348)

Divide self by other in-place.

numpy.ma.MaskedArray.__itruediv__
=================================

method

ma.MaskedArray.__itruediv__(*other*)[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/ma/core.py#L4368-L4384)

True divide self by other in-place.

numpy.ma.MaskedArray.__ifloordiv__
==================================

method

ma.MaskedArray.__ifloordiv__(*other*)[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/ma/core.py#L4350-L4366)

Floor divide self by other in-place.
numpy.ma.MaskedArray.__imod__
=============================

method

ma.MaskedArray.__imod__(*value*, */*)

Return self%=value.

numpy.ma.MaskedArray.__ipow__
=============================

method

ma.MaskedArray.__ipow__(*other*)[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/ma/core.py#L4386-L4405)

Raise self to the power other, in place.

numpy.ma.MaskedArray.__ilshift__
================================

method

ma.MaskedArray.__ilshift__(*value*, */*)

Return self<<=value.

numpy.ma.MaskedArray.__irshift__
================================

method

ma.MaskedArray.__irshift__(*value*, */*)

Return self>>=value.

numpy.ma.MaskedArray.__iand__
=============================

method

ma.MaskedArray.__iand__(*value*, */*)

Return self&=value.

numpy.ma.MaskedArray.__ior__
============================

method

ma.MaskedArray.__ior__(*value*, */*)

Return self|=value.
<https://numpy.org/doc/1.23/reference/generated/numpy.ma.MaskedArray.__ior__.htmlnumpy.ma.MaskedArray.__ixor__ ================================= method ma.MaskedArray.__ixor__(*value*, */*) Return self^=value. © 2005–2022 NumPy Developers Licensed under the 3-clause BSD License. <https://numpy.org/doc/1.23/reference/generated/numpy.ma.MaskedArray.__ixor__.htmlnumpy.ma.MaskedArray.__repr__ ================================= method ma.MaskedArray.__repr__()[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/ma/core.py#L3989-L4071) Literal string representation. © 2005–2022 NumPy Developers Licensed under the 3-clause BSD License. <https://numpy.org/doc/1.23/reference/generated/numpy.ma.MaskedArray.__repr__.htmlnumpy.ma.MaskedArray.__str__ ================================ method ma.MaskedArray.__str__()[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/ma/core.py#L3986-L3987) Return str(self). © 2005–2022 NumPy Developers Licensed under the 3-clause BSD License. <https://numpy.org/doc/1.23/reference/generated/numpy.ma.MaskedArray.__str__.htmlnumpy.ma.MaskedArray.ids ======================== method ma.MaskedArray.ids()[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/ma/core.py#L4808-L4832) Return the addresses of the data and mask areas. Parameters **None** #### Examples ``` >>> x = np.ma.array([1, 2, 3], mask=[0, 1, 1]) >>> x.ids() (166670640, 166659832) # may vary ``` If the array has no mask, the address of `nomask` is returned. This address is typically not close to the data in memory: ``` >>> x = np.ma.array([1, 2, 3]) >>> x.ids() (166691080, 3083169284) # may vary ``` © 2005–2022 NumPy Developers Licensed under the 3-clause BSD License. 
<https://numpy.org/doc/1.23/reference/generated/numpy.ma.MaskedArray.ids.htmlnumpy.ma.MaskedArray.iscontiguous ================================= method ma.MaskedArray.iscontiguous()[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/ma/core.py#L4834-L4859) Return a boolean indicating whether the data is contiguous. Parameters **None** #### Examples ``` >>> x = np.ma.array([1, 2, 3]) >>> x.iscontiguous() True ``` [`iscontiguous`](#numpy.ma.MaskedArray.iscontiguous "numpy.ma.MaskedArray.iscontiguous") returns one of the flags of the masked array: ``` >>> x.flags C_CONTIGUOUS : True F_CONTIGUOUS : True OWNDATA : False WRITEABLE : True ALIGNED : True WRITEBACKIFCOPY : False ``` © 2005–2022 NumPy Developers Licensed under the 3-clause BSD License. <https://numpy.org/doc/1.23/reference/generated/numpy.ma.MaskedArray.iscontiguous.htmlnumpy.ma.MaskedArray.__copy__ ================================= method ma.MaskedArray.__copy__() Used if [`copy.copy`](https://docs.python.org/3/library/copy.html#copy.copy "(in Python v3.10)") is called on an array. Returns a copy of the array. Equivalent to `a.copy(order='K')`. © 2005–2022 NumPy Developers Licensed under the 3-clause BSD License. <https://numpy.org/doc/1.23/reference/generated/numpy.ma.MaskedArray.__copy__.htmlnumpy.ma.MaskedArray.__deepcopy__ ===================================== method ma.MaskedArray.__deepcopy__(*memo*, */*) → Deep copy of array.[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/ma/core.py#L6264-L6272) Used if [`copy.deepcopy`](https://docs.python.org/3/library/copy.html#copy.deepcopy "(in Python v3.10)") is called on an array. © 2005–2022 NumPy Developers Licensed under the 3-clause BSD License. 
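As the `__copy__`/`__deepcopy__` hooks above imply, the standard `copy` module works on masked arrays: a deep copy duplicates both the data and the mask, so mutating the copy leaves the original untouched. A minimal sketch:

```python
import copy
import numpy.ma as ma

a = ma.array([1, 2, 3], mask=[0, 1, 0])

b = copy.deepcopy(a)  # duplicates data *and* mask
b[0] = 99             # mutate the copy...
b[2] = ma.masked      # ...and mask an extra slot in it

print(a.filled(-1))   # original data and mask are untouched
print(b.filled(-1))
```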
<https://numpy.org/doc/1.23/reference/generated/numpy.ma.MaskedArray.__deepcopy__.htmlnumpy.ma.MaskedArray.__getstate__ ===================================== method ma.MaskedArray.__getstate__()[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/ma/core.py#L6230-L6237) Return the internal state of the masked array, for pickling purposes. © 2005–2022 NumPy Developers Licensed under the 3-clause BSD License. <https://numpy.org/doc/1.23/reference/generated/numpy.ma.MaskedArray.__getstate__.htmlnumpy.ma.MaskedArray.__reduce__ =================================== method ma.MaskedArray.__reduce__()[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/ma/core.py#L6256-L6262) Return a 3-tuple for pickling a MaskedArray. © 2005–2022 NumPy Developers Licensed under the 3-clause BSD License. <https://numpy.org/doc/1.23/reference/generated/numpy.ma.MaskedArray.__reduce__.htmlnumpy.ma.MaskedArray.__setstate__ ===================================== method ma.MaskedArray.__setstate__(*state*)[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/ma/core.py#L6239-L6254) Restore the internal state of the masked array, for pickling purposes. `state` is typically the output of the `__getstate__` output, and is a 5-tuple: * class name * a tuple giving the shape of the data * a typecode for the data * a binary string for the data * a binary string for the mask. © 2005–2022 NumPy Developers Licensed under the 3-clause BSD License. <https://numpy.org/doc/1.23/reference/generated/numpy.ma.MaskedArray.__setstate__.htmlnumpy.ma.MaskedArray.__new__ ================================ method *static*ma.MaskedArray.__new__(*cls*, *data=None*, *mask=False*, *dtype=None*, *copy=False*, *subok=True*, *ndmin=0*, *fill_value=None*, *keep_mask=True*, *hard_mask=None*, *shrink=True*, *order=None*)[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/ma/core.py#L2814-L2943) Create a new masked array from scratch. 
#### Notes A masked array can also be created by taking a .view(MaskedArray). © 2005–2022 NumPy Developers Licensed under the 3-clause BSD License. <https://numpy.org/doc/1.23/reference/generated/numpy.ma.MaskedArray.__new__.htmlnumpy.ma.MaskedArray.__array_wrap__ ======================================== method ma.MaskedArray.__array_wrap__(*obj*, *context=None*)[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/ma/core.py#L3065-L3119) Special hook for ufuncs. Wraps the numpy array and sets the mask according to context. © 2005–2022 NumPy Developers Licensed under the 3-clause BSD License. <https://numpy.org/doc/1.23/reference/generated/numpy.ma.MaskedArray.__array_wrap__.htmlnumpy.ma.MaskedArray.__len__ ================================ method ma.MaskedArray.__len__(*/*) Return len(self). © 2005–2022 NumPy Developers Licensed under the 3-clause BSD License. <https://numpy.org/doc/1.23/reference/generated/numpy.ma.MaskedArray.__len__.htmlnumpy.ma.MaskedArray.__getitem__ ==================================== method ma.MaskedArray.__getitem__(*indx*)[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/ma/core.py#L3211-L3335) x.__getitem__(y) <==> x[y] Return the item described by i, as a masked array. © 2005–2022 NumPy Developers Licensed under the 3-clause BSD License. <https://numpy.org/doc/1.23/reference/generated/numpy.ma.MaskedArray.__getitem__.htmlnumpy.ma.MaskedArray.__setitem__ ==================================== method ma.MaskedArray.__setitem__(*indx*, *value*)[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/ma/core.py#L3337-L3404) x.__setitem__(i, y) <==> x[i]=y Set item described by index. If value is masked, masks those locations. © 2005–2022 NumPy Developers Licensed under the 3-clause BSD License. <https://numpy.org/doc/1.23/reference/generated/numpy.ma.MaskedArray.__setitem__.htmlnumpy.ma.MaskedArray.__delitem__ ==================================== method ma.MaskedArray.__delitem__(*key*, */*) Delete self[key]. 
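The `__setitem__` behaviour above ("If value is masked, masks those locations") is the idiomatic way to mask individual entries; with the default soft mask, assigning an ordinary value unmasks the slot again. A short sketch:

```python
import numpy.ma as ma

x = ma.array([1, 2, 3])

# Assigning the `masked` constant masks that location...
x[1] = ma.masked
print(x.mask.tolist())  # position 1 is now masked

# ...and assigning an ordinary value unmasks it (soft mask only).
x[1] = 20
print(x.filled(-1).tolist())
```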
© 2005–2022 NumPy Developers Licensed under the 3-clause BSD License. <https://numpy.org/doc/1.23/reference/generated/numpy.ma.MaskedArray.__delitem__.htmlnumpy.ma.MaskedArray.__contains__ ===================================== method ma.MaskedArray.__contains__(*key*, */*) Return key in self. © 2005–2022 NumPy Developers Licensed under the 3-clause BSD License. <https://numpy.org/doc/1.23/reference/generated/numpy.ma.MaskedArray.__contains__.htmlnumpy.ma.MaskedArray.__setmask__ ==================================== method ma.MaskedArray.__setmask__(*mask*, *copy=False*)[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/ma/core.py#L3435-L3502) Set the mask. © 2005–2022 NumPy Developers Licensed under the 3-clause BSD License. <https://numpy.org/doc/1.23/reference/generated/numpy.ma.MaskedArray.__setmask__.htmlnumpy.ufunc.__call__ ======================== method ufunc.__call__(**args*, ***kwargs*) Call self as a function. © 2005–2022 NumPy Developers Licensed under the 3-clause BSD License. <https://numpy.org/doc/1.23/reference/generated/numpy.ufunc.__call__.htmlnumpy.dtype.ndim ================ attribute dtype.ndim Number of dimensions of the sub-array if this data type describes a sub-array, and `0` otherwise. New in version 1.13.0. #### Examples ``` >>> x = np.dtype(float) >>> x.ndim 0 ``` ``` >>> x = np.dtype((float, 8)) >>> x.ndim 1 ``` ``` >>> x = np.dtype(('i4', (3, 4))) >>> x.ndim 2 ``` © 2005–2022 NumPy Developers Licensed under the 3-clause BSD License. <https://numpy.org/doc/1.23/reference/generated/numpy.dtype.ndim.htmlnumpy.random.RandomState.rand ============================= method random.RandomState.rand(*d0*, *d1*, *...*, *dn*) Random values in a given shape. Note This is a convenience function for users porting code from Matlab, and wraps [`random_sample`](numpy.random.randomstate.random_sample#numpy.random.RandomState.random_sample "numpy.random.RandomState.random_sample"). 
That function takes a tuple to specify the size of the output, which is consistent with other NumPy functions like [`numpy.zeros`](../../generated/numpy.zeros#numpy.zeros "numpy.zeros") and [`numpy.ones`](../../generated/numpy.ones#numpy.ones "numpy.ones"). Create an array of the given shape and populate it with random samples from a uniform distribution over `[0, 1)`. Parameters **d0, d1, …
, dn**int, optional The dimensions of the returned array, must be non-negative. If no argument is given a single Python float is returned. Returns **out**ndarray, shape `(d0, d1, ..., dn)` Random values. See also [`random`](../index#module-numpy.random "numpy.random") #### Examples ``` >>> np.random.rand(3,2) array([[ 0.14022471, 0.96360618], #random [ 0.37601032, 0.25528411], #random [ 0.49313049, 0.94909878]]) #random ``` © 2005–2022 NumPy Developers Licensed under the 3-clause BSD License. <https://numpy.org/doc/1.23/reference/random/generated/numpy.random.RandomState.rand.htmlnumpy.random.RandomState.randn ============================== method random.RandomState.randn(*d0*, *d1*, *...*, *dn*) Return a sample (or samples) from the “standard normal” distribution. Note This is a convenience function for users porting code from Matlab, and wraps [`standard_normal`](numpy.random.randomstate.standard_normal#numpy.random.RandomState.standard_normal "numpy.random.RandomState.standard_normal"). That function takes a tuple to specify the size of the output, which is consistent with other NumPy functions like [`numpy.zeros`](../../generated/numpy.zeros#numpy.zeros "numpy.zeros") and [`numpy.ones`](../../generated/numpy.ones#numpy.ones "numpy.ones"). Note New code should use the `standard_normal` method of a `default_rng()` instance instead; please see the [Quick Start](../index#random-quick-start). If positive int_like arguments are provided, [`randn`](#numpy.random.RandomState.randn "numpy.random.RandomState.randn") generates an array of shape `(d0, d1, ..., dn)`, filled with random floats sampled from a univariate “normal” (Gaussian) distribution of mean 0 and variance 1. A single float randomly sampled from the distribution is returned if no argument is provided. Parameters **d0, d1, …
, dn**int, optional The dimensions of the returned array, must be non-negative. If no argument is given a single Python float is returned. Returns **Z**ndarray or float A `(d0, d1, ..., dn)`-shaped array of floating-point samples from the standard normal distribution, or a single such float if no parameters were supplied. See also [`standard_normal`](numpy.random.randomstate.standard_normal#numpy.random.RandomState.standard_normal "numpy.random.RandomState.standard_normal") Similar, but takes a tuple as its argument. [`normal`](numpy.random.randomstate.normal#numpy.random.RandomState.normal "numpy.random.RandomState.normal") Also accepts mu and sigma arguments. [`random.Generator.standard_normal`](numpy.random.generator.standard_normal#numpy.random.Generator.standard_normal "numpy.random.Generator.standard_normal") which should be used for new code. #### Notes For random samples from \(N(\mu, \sigma^2)\), use: `sigma * np.random.randn(...) + mu` #### Examples ``` >>> np.random.randn() 2.1923875335537315 # random ``` Two-by-four array of samples from N(3, 6.25): ``` >>> 3 + 2.5 * np.random.randn(2, 4) array([[-4.49401501, 4.00950034, -1.81814867, 7.29718677], # random [ 0.39924804, 4.68456316, 4.99394529, 4.84057254]]) # random ``` © 2005–2022 NumPy Developers Licensed under the 3-clause BSD License. <https://numpy.org/doc/1.23/reference/random/generated/numpy.random.RandomState.randn.htmlnumpy.distutils.ccompiler.CCompiler_compile ============================================ distutils.ccompiler.CCompiler_compile(*self*, *sources*, *output_dir=None*, *macros=None*, *include_dirs=None*, *debug=0*, *extra_preargs=None*, *extra_postargs=None*, *depends=None*)[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/distutils/ccompiler.py#L232-L368) Compile one or more source files. Please refer to the Python distutils API reference for more details. Parameters **sources**list of str A list of filenames **output_dir**str, optional Path to the output directory. 
**macros**list of tuples A list of macro definitions. **include_dirs**list of str, optional The directories to add to the default include file search path for this compilation only. **debug**bool, optional Whether or not to output debug symbols in or alongside the object file(s). **extra_preargs, extra_postargs**? Extra pre- and post-arguments. **depends**list of str, optional A list of file names that all targets depend on. Returns **objects**list of str A list of object file names, one per source file `sources`. Raises CompileError If compilation fails. © 2005–2022 NumPy Developers Licensed under the 3-clause BSD License. <https://numpy.org/doc/1.23/reference/generated/numpy.distutils.ccompiler.CCompiler_compile.htmlnumpy.distutils.ccompiler.CCompiler_customize ============================================== distutils.ccompiler.CCompiler_customize(*self*, *dist*, *need_cxx=0*)[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/distutils/ccompiler.py#L469-L548) Do any platform-specific customization of a compiler instance. This method calls [`distutils.sysconfig.customize_compiler`](https://docs.python.org/3/distutils/apiref.html#distutils.sysconfig.customize_compiler "(in Python v3.10)") for platform-specific customization, as well as optionally remove a flag to suppress spurious warnings in case C++ code is being compiled. Parameters **dist**object This parameter is not used for anything. **need_cxx**bool, optional Whether or not C++ has to be compiled. If so (True), the `"-Wstrict-prototypes"` option is removed to prevent spurious warnings. Default is False. Returns None #### Notes All the default options used by distutils can be extracted with: ``` from distutils import sysconfig sysconfig.get_config_vars('CC', 'CXX', 'OPT', 'BASECFLAGS', 'CCSHARED', 'LDSHARED', 'SO') ``` © 2005–2022 NumPy Developers Licensed under the 3-clause BSD License. 
<https://numpy.org/doc/1.23/reference/generated/numpy.distutils.ccompiler.CCompiler_customize.htmlnumpy.distutils.ccompiler.CCompiler_customize_cmd =================================================== distutils.ccompiler.CCompiler_customize_cmd(*self*, *cmd*, *ignore=()*)[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/distutils/ccompiler.py#L372-L418) Customize compiler using distutils command. Parameters **cmd**class instance An instance inheriting from [`distutils.cmd.Command`](https://docs.python.org/3/distutils/apiref.html#distutils.cmd.Command "(in Python v3.10)"). **ignore**sequence of str, optional List of `CCompiler` commands (without `'set_'`) that should not be altered. Strings that are checked for are: `('include_dirs', 'define', 'undef', 'libraries', 'library_dirs', 'rpath', 'link_objects')`. Returns None © 2005–2022 NumPy Developers Licensed under the 3-clause BSD License. <https://numpy.org/doc/1.23/reference/generated/numpy.distutils.ccompiler.CCompiler_customize_cmd.htmlnumpy.distutils.ccompiler.CCompiler_cxx_compiler ================================================== distutils.ccompiler.CCompiler_cxx_compiler(*self*)[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/distutils/ccompiler.py#L668-L695) Return the C++ compiler. Parameters **None** Returns **cxx**class instance The C++ compiler, as a `CCompiler` instance. © 2005–2022 NumPy Developers Licensed under the 3-clause BSD License. <https://numpy.org/doc/1.23/reference/generated/numpy.distutils.ccompiler.CCompiler_cxx_compiler.htmlnumpy.distutils.ccompiler.CCompiler_find_executables ====================================================== distutils.ccompiler.CCompiler_find_executables(*self*)[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/distutils/ccompiler.py#L98-L105) Does nothing here, but is called by the get_version method and can be overridden by subclasses. In particular it is redefined in the `FCompiler` class where more documentation can be found. 
© 2005–2022 NumPy Developers Licensed under the 3-clause BSD License. <https://numpy.org/doc/1.23/reference/generated/numpy.distutils.ccompiler.CCompiler_find_executables.htmlnumpy.distutils.ccompiler.CCompiler_get_version ================================================= distutils.ccompiler.CCompiler_get_version(*self*, *force=False*, *ok_status=[0]*)[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/distutils/ccompiler.py#L599-L664) Return compiler version, or None if compiler is not available. Parameters **force**bool, optional If True, force a new determination of the version, even if the compiler already has a version attribute. Default is False. **ok_status**list of int, optional The list of status values returned by the version look-up process for which a version string is returned. If the status value is not in `ok_status`, None is returned. Default is `[0]`. Returns **version**str or None Version string, in the format of `distutils.version.LooseVersion`. © 2005–2022 NumPy Developers Licensed under the 3-clause BSD License. <https://numpy.org/doc/1.23/reference/generated/numpy.distutils.ccompiler.CCompiler_get_version.htmlnumpy.distutils.ccompiler.CCompiler_object_filenames ====================================================== distutils.ccompiler.CCompiler_object_filenames(*self*, *source_filenames*, *strip_dir=0*, *output_dir=''*)[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/distutils/ccompiler.py#L185-L228) Return the name of the object files for the given source files. Parameters **source_filenames**list of str The list of paths to source files. Paths can be either relative or absolute, this is handled transparently. **strip_dir**bool, optional Whether to strip the directory from the returned paths. If True, the file name prepended by `output_dir` is returned. Default is False. **output_dir**str, optional If given, this path is prepended to the returned paths to the object files. 
Returns **obj_names**list of str The list of paths to the object files corresponding to the source files in `source_filenames`. © 2005–2022 NumPy Developers Licensed under the 3-clause BSD License. <https://numpy.org/doc/1.23/reference/generated/numpy.distutils.ccompiler.CCompiler_object_filenames.htmlnumpy.distutils.ccompiler.CCompiler_show_customization ======================================================== distutils.ccompiler.CCompiler_show_customization(*self*)[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/distutils/ccompiler.py#L440-L465) Print the compiler customizations to stdout. Parameters **None** Returns None #### Notes Printing is only done if the distutils log threshold is < 2. © 2005–2022 NumPy Developers Licensed under the 3-clause BSD License. <https://numpy.org/doc/1.23/reference/generated/numpy.distutils.ccompiler.CCompiler_show_customization.htmlnumpy.distutils.ccompiler.CCompiler_spawn ========================================== distutils.ccompiler.CCompiler_spawn(*self*, *cmd*, *display=None*, *env=None*)[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/distutils/ccompiler.py#L112-L181) Execute a command in a sub-process. Parameters **cmd**str The command to execute. **display**str or sequence of str, optional The text to add to the log file kept by [`numpy.distutils`](../distutils#module-numpy.distutils "numpy.distutils"). If not given, `display` is equal to [`cmd`](https://docs.python.org/3/library/cmd.html#module-cmd "(in Python v3.10)"). **env**a dictionary for environment variables, optional Returns None Raises DistutilsExecError If the command failed, i.e. the exit status was not 0. © 2005–2022 NumPy Developers Licensed under the 3-clause BSD License. 
<https://numpy.org/doc/1.23/reference/generated/numpy.distutils.ccompiler.CCompiler_spawn.htmlnumpy.distutils.ccompiler.gen_lib_options =========================================== distutils.ccompiler.gen_lib_options(*compiler*, *library_dirs*, *runtime_library_dirs*, *libraries*)[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/distutils/ccompiler.py#L781-L797) © 2005–2022 NumPy Developers Licensed under the 3-clause BSD License. <https://numpy.org/doc/1.23/reference/generated/numpy.distutils.ccompiler.gen_lib_options.htmlnumpy.distutils.ccompiler.new_compiler ======================================= distutils.ccompiler.new_compiler(*plat=None*, *compiler=None*, *verbose=None*, *dry_run=0*, *force=0*)[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/distutils/ccompiler.py#L734-L776) © 2005–2022 NumPy Developers Licensed under the 3-clause BSD License. <https://numpy.org/doc/1.23/reference/generated/numpy.distutils.ccompiler.new_compiler.htmlnumpy.distutils.ccompiler.replace_method ========================================= distutils.ccompiler.replace_method(*klass*, *method_name*, *func*)[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/distutils/ccompiler.py#L87-L90) © 2005–2022 NumPy Developers Licensed under the 3-clause BSD License. <https://numpy.org/doc/1.23/reference/generated/numpy.distutils.ccompiler.replace_method.htmlnumpy.distutils.ccompiler.simple_version_match ================================================ distutils.ccompiler.simple_version_match(*pat='[-.\\d]+'*, *ignore=''*, *start=''*)[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/distutils/ccompiler.py#L552-L597) Simple matching of version numbers, for use in CCompiler and FCompiler. Parameters **pat**str, optional A regular expression matching version numbers. Default is `r'[-.\d]+'`. **ignore**str, optional A regular expression matching patterns to skip. Default is `''`, in which case nothing is skipped. 
**start**str, optional A regular expression matching the start of where to start looking for version numbers. Default is `''`, in which case searching is started at the beginning of the version string given to `matcher`. Returns **matcher**callable A function that is appropriate to use as the `.version_match` attribute of a `CCompiler` class. `matcher` takes a single parameter, a version string. © 2005–2022 NumPy Developers Licensed under the 3-clause BSD License. <https://numpy.org/doc/1.23/reference/generated/numpy.distutils.ccompiler.simple_version_match.htmlnumpy.distutils.ccompiler_opt.new_ccompiler_opt ================================================== distutils.ccompiler_opt.new_ccompiler_opt(*compiler*, *dispatch_hpath*, ***kwargs*)[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/distutils/ccompiler_opt.py#L2635-L2655) Create a new instance of ‘CCompilerOpt’ and generate the dispatch header which contains the #definitions and headers of platform-specific instruction-sets for the enabled CPU baseline and dispatch-able features. Parameters **compiler**CCompiler instance **dispatch_hpath**str path of the dispatch header ****kwargs: passed as-is to `CCompilerOpt(
)`** **Returns** **——-** **new instance of CCompilerOpt** © 2005–2022 NumPy Developers Licensed under the 3-clause BSD License. <https://numpy.org/doc/1.23/reference/generated/numpy.distutils.ccompiler_opt.new_ccompiler_opt.htmlnumpy.distutils.ccompiler_opt.CCompilerOpt =========================================== *class*numpy.distutils.ccompiler_opt.CCompilerOpt(*ccompiler*, *cpu_baseline='min'*, *cpu_dispatch='max'*, *cache_path=None*)[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/distutils/ccompiler_opt.py#L2200-L2633) A helper class for `CCompiler` aims to provide extra build options to effectively control of compiler optimizations that are directly related to CPU features. Attributes **conf_cache_factors** **conf_tmp_path** #### Methods | | | | --- | --- | | [`cache_flush`](numpy.distutils.ccompiler_opt.ccompileropt.cache_flush#numpy.distutils.ccompiler_opt.CCompilerOpt.cache_flush "numpy.distutils.ccompiler_opt.CCompilerOpt.cache_flush")() | Force update the cache. | | [`cc_normalize_flags`](numpy.distutils.ccompiler_opt.ccompileropt.cc_normalize_flags#numpy.distutils.ccompiler_opt.CCompilerOpt.cc_normalize_flags "numpy.distutils.ccompiler_opt.CCompilerOpt.cc_normalize_flags")(flags) | Remove the conflicts that caused due gathering implied features flags. 
| | [`conf_features_partial`](numpy.distutils.ccompiler_opt.ccompileropt.conf_features_partial#numpy.distutils.ccompiler_opt.CCompilerOpt.conf_features_partial "numpy.distutils.ccompiler_opt.CCompilerOpt.conf_features_partial")() | Return a dictionary of supported CPU features by the platform, and accumulate the rest of undefined options in [`conf_features`](numpy.distutils.ccompiler_opt.ccompileropt.conf_features#numpy.distutils.ccompiler_opt.CCompilerOpt.conf_features "numpy.distutils.ccompiler_opt.CCompilerOpt.conf_features"), the returned dict has same rules and notes in class attribute [`conf_features`](numpy.distutils.ccompiler_opt.ccompileropt.conf_features#numpy.distutils.ccompiler_opt.CCompilerOpt.conf_features "numpy.distutils.ccompiler_opt.CCompilerOpt.conf_features"), also its override any options that been set in 'conf_features'. | | [`cpu_baseline_flags`](numpy.distutils.ccompiler_opt.ccompileropt.cpu_baseline_flags#numpy.distutils.ccompiler_opt.CCompilerOpt.cpu_baseline_flags "numpy.distutils.ccompiler_opt.CCompilerOpt.cpu_baseline_flags")() | Returns a list of final CPU baseline compiler flags | | [`cpu_baseline_names`](numpy.distutils.ccompiler_opt.ccompileropt.cpu_baseline_names#numpy.distutils.ccompiler_opt.CCompilerOpt.cpu_baseline_names "numpy.distutils.ccompiler_opt.CCompilerOpt.cpu_baseline_names")() | return a list of final CPU baseline feature names | | [`cpu_dispatch_names`](numpy.distutils.ccompiler_opt.ccompileropt.cpu_dispatch_names#numpy.distutils.ccompiler_opt.CCompilerOpt.cpu_dispatch_names "numpy.distutils.ccompiler_opt.CCompilerOpt.cpu_dispatch_names")() | return a list of final CPU dispatch feature names | | [`dist_compile`](numpy.distutils.ccompiler_opt.ccompileropt.dist_compile#numpy.distutils.ccompiler_opt.CCompilerOpt.dist_compile "numpy.distutils.ccompiler_opt.CCompilerOpt.dist_compile")(sources, flags[, ccompiler]) | Wrap CCompiler.compile() | | 
[`dist_error`](numpy.distutils.ccompiler_opt.ccompileropt.dist_error#numpy.distutils.ccompiler_opt.CCompilerOpt.dist_error "numpy.distutils.ccompiler_opt.CCompilerOpt.dist_error")(*args) | Raise a compiler error | | [`dist_fatal`](numpy.distutils.ccompiler_opt.ccompileropt.dist_fatal#numpy.distutils.ccompiler_opt.CCompilerOpt.dist_fatal "numpy.distutils.ccompiler_opt.CCompilerOpt.dist_fatal")(*args) | Raise a distutils error | | [`dist_info`](numpy.distutils.ccompiler_opt.ccompileropt.dist_info#numpy.distutils.ccompiler_opt.CCompilerOpt.dist_info "numpy.distutils.ccompiler_opt.CCompilerOpt.dist_info")() | Return a tuple containing info about (platform, compiler, extra_args), required by the abstract class '_CCompiler' for discovering the platform environment. | | [`dist_load_module`](numpy.distutils.ccompiler_opt.ccompileropt.dist_load_module#numpy.distutils.ccompiler_opt.CCompilerOpt.dist_load_module "numpy.distutils.ccompiler_opt.CCompilerOpt.dist_load_module")(name, path) | Load a module from file, required by the abstract class '_Cache'. | | [`dist_log`](numpy.distutils.ccompiler_opt.ccompileropt.dist_log#numpy.distutils.ccompiler_opt.CCompilerOpt.dist_log "numpy.distutils.ccompiler_opt.CCompilerOpt.dist_log")(*args[, stderr]) | Print a console message | | [`dist_test`](numpy.distutils.ccompiler_opt.ccompileropt.dist_test#numpy.distutils.ccompiler_opt.CCompilerOpt.dist_test "numpy.distutils.ccompiler_opt.CCompilerOpt.dist_test")(source, flags[, macros]) | Return True if 'CCompiler.compile()' able to compile a source file with certain flags. | | [`feature_ahead`](numpy.distutils.ccompiler_opt.ccompileropt.feature_ahead#numpy.distutils.ccompiler_opt.CCompilerOpt.feature_ahead "numpy.distutils.ccompiler_opt.CCompilerOpt.feature_ahead")(names) | Return list of features in 'names' after remove any implied features and keep the origins. 
| | [`feature_c_preprocessor`](numpy.distutils.ccompiler_opt.ccompileropt.feature_c_preprocessor#numpy.distutils.ccompiler_opt.CCompilerOpt.feature_c_preprocessor "numpy.distutils.ccompiler_opt.CCompilerOpt.feature_c_preprocessor")(feature_name[, tabs]) | Generate C preprocessor definitions and include headers of a CPU feature. | | [`feature_detect`](numpy.distutils.ccompiler_opt.ccompileropt.feature_detect#numpy.distutils.ccompiler_opt.CCompilerOpt.feature_detect "numpy.distutils.ccompiler_opt.CCompilerOpt.feature_detect")(names) | Return a list of CPU features that required to be detected sorted from the lowest to highest interest. | | [`feature_get_til`](numpy.distutils.ccompiler_opt.ccompileropt.feature_get_til#numpy.distutils.ccompiler_opt.CCompilerOpt.feature_get_til "numpy.distutils.ccompiler_opt.CCompilerOpt.feature_get_til")(names, keyisfalse) | same as `feature_implies_c()` but stop collecting implied features when feature's option that provided through parameter 'keyisfalse' is False, also sorting the returned features. | | [`feature_implies`](numpy.distutils.ccompiler_opt.ccompileropt.feature_implies#numpy.distutils.ccompiler_opt.CCompilerOpt.feature_implies "numpy.distutils.ccompiler_opt.CCompilerOpt.feature_implies")(names[, keep_origins]) | Return a set of CPU features that implied by 'names' | | [`feature_implies_c`](numpy.distutils.ccompiler_opt.ccompileropt.feature_implies_c#numpy.distutils.ccompiler_opt.CCompilerOpt.feature_implies_c "numpy.distutils.ccompiler_opt.CCompilerOpt.feature_implies_c")(names) | same as feature_implies() but combining 'names' | | [`feature_is_exist`](numpy.distutils.ccompiler_opt.ccompileropt.feature_is_exist#numpy.distutils.ccompiler_opt.CCompilerOpt.feature_is_exist "numpy.distutils.ccompiler_opt.CCompilerOpt.feature_is_exist")(name) | Returns True if a certain feature is exist and covered within `_Config.conf_features`. 
| | [`feature_names`](numpy.distutils.ccompiler_opt.ccompileropt.feature_names#numpy.distutils.ccompiler_opt.CCompilerOpt.feature_names "numpy.distutils.ccompiler_opt.CCompilerOpt.feature_names")([names, force_flags, macros]) | Returns a set of CPU feature names that supported by platform and the **C** compiler. | | [`feature_sorted`](numpy.distutils.ccompiler_opt.ccompileropt.feature_sorted#numpy.distutils.ccompiler_opt.CCompilerOpt.feature_sorted "numpy.distutils.ccompiler_opt.CCompilerOpt.feature_sorted")(names[, reverse]) | Sort a list of CPU features ordered by the lowest interest. | | [`feature_untied`](numpy.distutils.ccompiler_opt.ccompileropt.feature_untied#numpy.distutils.ccompiler_opt.CCompilerOpt.feature_untied "numpy.distutils.ccompiler_opt.CCompilerOpt.feature_untied")(names) | same as 'feature_ahead()' but if both features implied each other and keep the highest interest. | | [`generate_dispatch_header`](numpy.distutils.ccompiler_opt.ccompileropt.generate_dispatch_header#numpy.distutils.ccompiler_opt.CCompilerOpt.generate_dispatch_header "numpy.distutils.ccompiler_opt.CCompilerOpt.generate_dispatch_header")(header_path) | Generate the dispatch header which contains the #definitions and headers for platform-specific instruction-sets for the enabled CPU baseline and dispatch-able features. | | [`is_cached`](numpy.distutils.ccompiler_opt.ccompileropt.is_cached#numpy.distutils.ccompiler_opt.CCompilerOpt.is_cached "numpy.distutils.ccompiler_opt.CCompilerOpt.is_cached")() | Returns True if the class loaded from the cache file | | [`me`](numpy.distutils.ccompiler_opt.ccompileropt.me#numpy.distutils.ccompiler_opt.CCompilerOpt.me "numpy.distutils.ccompiler_opt.CCompilerOpt.me")(cb) | A static method that can be treated as a decorator to dynamically cache certain methods. 
| [`parse_targets`](numpy.distutils.ccompiler_opt.ccompileropt.parse_targets#numpy.distutils.ccompiler_opt.CCompilerOpt.parse_targets "numpy.distutils.ccompiler_opt.CCompilerOpt.parse_targets")(source) | Fetch and parse the configuration statements required for defining the targeted CPU features; the statements should be declared at the top of the source inside a **C** comment and start with the special mark **@targets**. |
| [`try_dispatch`](numpy.distutils.ccompiler_opt.ccompileropt.try_dispatch#numpy.distutils.ccompiler_opt.CCompilerOpt.try_dispatch "numpy.distutils.ccompiler_opt.CCompilerOpt.try_dispatch")(sources[, src_dir, ccompiler]) | Compile one or more dispatch-able sources into object files, and generate the abstract C config headers and macros used later for the final runtime dispatching process. |

| | |
| --- | --- |
| **cache_hash** | |
| **cc_test_cexpr** | |
| **cc_test_flags** | |
| **feature_can_autovec** | |
| **feature_extra_checks** | |
| **feature_flags** | |
| **feature_is_supported** | |
| **feature_test** | |
| **report** | |

© 2005–2022 NumPy Developers Licensed under the 3-clause BSD License.

numpy.distutils.ccompiler_opt.CCompilerOpt.try_dispatch
=======================================================

method

distutils.ccompiler_opt.CCompilerOpt.try_dispatch(*sources*, *src_dir=None*, *ccompiler=None*, ***kwargs*)[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/distutils/ccompiler_opt.py#L2256-L2341)

Compile one or more dispatch-able sources into object files, and generate the abstract C config headers and macros used later for the final runtime dispatching process.
The mechanism behind it is to take each source file specified in 'sources' and branch it into several files, depending on the special configuration statements that must be declared at the top of each source and that name the targeted CPU features; every branched source is then compiled with the proper compiler flags.

Parameters

**sources** : list
    A list of dispatch-able source file paths; configuration statements must be declared inside each file.

**src_dir** : str
    Path of the parent directory for the generated headers and wrapped sources. If None (default), the files are generated in place.

**ccompiler** : CCompiler
    Distutils `CCompiler` instance to be used for compilation. If None (default), the instance provided during initialization is used instead.

****kwargs** : any
    Arguments to pass on to `CCompiler.compile()`.

Returns

**list** : generated object files

Raises

CompileError
    Raised by `CCompiler.compile()` on compilation failure.

DistutilsError
    Raised on errors while checking the sanity of the configuration statements.

See also

[`parse_targets`](numpy.distutils.ccompiler_opt.ccompileropt.parse_targets#numpy.distutils.ccompiler_opt.CCompilerOpt.parse_targets "numpy.distutils.ccompiler_opt.CCompilerOpt.parse_targets")
    Parse the configuration statements of dispatch-able sources.

numpy.distutils.exec_command.exec_command
=========================================

distutils.exec_command.exec_command(*command*, *execute_in=''*, *use_shell=None*, *use_tee=None*, *_with_python=1*, ***env*)[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/distutils/exec_command.py#L177-L250)

Return (status, output) of the executed command.

Deprecated since version 1.17: Use subprocess.Popen instead.

Parameters

**command** : str
    A concatenated string of the executable and its arguments.
**execute_in** : str
    Run `cd execute_in` before the command and `cd -` afterwards.

**use_shell** : {bool, None}, optional
    If True, execute `sh -c command`. Default None (True).

**use_tee** : {bool, None}, optional
    If True, use tee. Default None (True).

Returns

**res** : str
    Both stdout and stderr messages.

#### Notes

On NT and DOS systems the returned status is correct only for external commands. Wild cards will not work on non-POSIX systems or when use_shell=0.

numpy.distutils.exec_command.filepath_from_subprocess_output
============================================================

distutils.exec_command.filepath_from_subprocess_output(*output*)[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/distutils/exec_command.py#L63-L77)

Convert `bytes` in the encoding used by a subprocess into a filesystem-appropriate [`str`](https://docs.python.org/3/library/stdtypes.html#str "(in Python v3.10)").

Inherited from [`exec_command`](numpy.distutils.exec_command.exec_command#numpy.distutils.exec_command.exec_command "numpy.distutils.exec_command.exec_command"), and possibly incorrect.

numpy.distutils.exec_command.find_executable
============================================

distutils.exec_command.find_executable(*exe*, *path=None*, *_cache={}*)[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/distutils/exec_command.py#L116-L163)

Return the full path of an executable, or None. Symbolic links are not followed.
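Since `exec_command` is deprecated, the docs above point to `subprocess`. A minimal stand-in that reproduces the `(status, output)` return shape using only the standard library might look like this (the helper name `run_command` is ours, not NumPy's, and it covers only the basic case):

```python
import subprocess

def run_command(command):
    """Rough replacement for the deprecated exec_command: run `command`
    through the shell and return (status, output), with stderr folded
    into stdout like the original."""
    proc = subprocess.Popen(
        command,
        shell=True,
        stdout=subprocess.PIPE,
        stderr=subprocess.STDOUT,
        text=True,
    )
    output, _ = proc.communicate()
    return proc.returncode, output

status, out = run_command("echo hello")
```

Unlike `exec_command`, this does not implement `execute_in` or tee-style echoing; add a `cwd=` argument to `Popen` if you need the former.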
numpy.distutils.exec_command.forward_bytes_to_stdout
====================================================

distutils.exec_command.forward_bytes_to_stdout(*val*)[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/distutils/exec_command.py#L80-L96)

Forward bytes from a subprocess call to the console, without attempting to decode them. The assumption is that the subprocess call already returned bytes in a suitable encoding.

numpy.distutils.exec_command.get_pythonexe
==========================================

distutils.exec_command.get_pythonexe()[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/distutils/exec_command.py#L107-L114)

numpy.distutils.exec_command.temp_file_name
===========================================

distutils.exec_command.temp_file_name()[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/distutils/exec_command.py#L99-L105)

numpy.nditer.close
==================

method

nditer.close()

Resolve all writeback semantics in writeable operands.

New in version 1.15.0.

See also

[Modifying Array Values](../arrays.nditer#nditer-context-manager)

numpy.nditer.copy
=================

method

nditer.copy()

Get a copy of the iterator in its current state.
#### Examples

```
>>> x = np.arange(10)
>>> y = x + 1
>>> it = np.nditer([x, y])
>>> next(it)
(array(0), array(1))
>>> it2 = it.copy()
>>> next(it2)
(array(1), array(2))
```

numpy.nditer.debug_print
========================

method

nditer.debug_print()

Print the current state of the [`nditer`](numpy.nditer#numpy.nditer "numpy.nditer") instance and debug info to stdout.

numpy.nditer.enable_external_loop
=================================

method

nditer.enable_external_loop()

When "external_loop" was not used during construction but is desired, this modifies the iterator to behave as if the flag had been specified.

numpy.nditer.iternext
=====================

method

nditer.iternext()

Check whether iterations are left, and perform a single internal iteration without returning the result. Used in the C-style do-while pattern. For an example, see [`nditer`](numpy.nditer#numpy.nditer "numpy.nditer").

Returns

**iternext** : bool
    Whether or not there are iterations left.

numpy.nditer.remove_axis
========================

method

nditer.remove_axis(*i*, */*)

Removes axis `i` from the iterator. Requires that the flag "multi_index" be enabled.
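The C-style do-while pattern that `iternext` supports can be sketched as follows (a minimal example of our own, not taken from the NumPy docs):

```python
import numpy as np

a = np.arange(6).reshape(2, 3)
it = np.nditer(a)

# Do-while: process the current element first, then advance;
# iternext() returns False once the iterator is exhausted.
total = 0
while True:
    total += int(it[0])
    if not it.iternext():
        break

print(total)  # sum of 0..5, i.e. 15
```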
numpy.nditer.remove_multi_index
===============================

method

nditer.remove_multi_index()

When the "multi_index" flag was specified, this removes it, allowing the internal iteration structure to be optimized further.

numpy.nditer.itersize
=====================

attribute

nditer.itersize

numpy.nditer.value
==================

attribute

nditer.value

numpy.nditer.index
==================

attribute

nditer.index

numpy.nditer.multi_index
========================

attribute

nditer.multi_index

numpy.ndindex.ndincr
====================

method

ndindex.ndincr()[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/lib/index_tricks.py#L668-L682)

Increment the multi-dimensional index by one.

This method is for backward compatibility only: do not use.

Deprecated since version 1.20.0: This method has been advised against since numpy 1.8.0, but only started emitting DeprecationWarning as of this version.

numpy.flatiter.copy
===================

method

flatiter.copy()

Get a copy of the iterator as a 1-D array.
#### Examples

```
>>> x = np.arange(6).reshape(2, 3)
>>> x
array([[0, 1, 2],
       [3, 4, 5]])
>>> fl = x.flat
>>> fl.copy()
array([0, 1, 2, 3, 4, 5])
```

numpy.flatiter.base
===================

attribute

flatiter.base

A reference to the array that is iterated over.

#### Examples

```
>>> x = np.arange(5)
>>> fl = x.flat
>>> fl.base is x
True
```

numpy.flatiter.coords
=====================

attribute

flatiter.coords

An N-dimensional tuple of current coordinates.

#### Examples

```
>>> x = np.arange(6).reshape(2, 3)
>>> fl = x.flat
>>> fl.coords
(0, 0)
>>> next(fl)
0
>>> fl.coords
(0, 1)
```

numpy.flatiter.index
====================

attribute

flatiter.index

Current flat index into the array.

#### Examples

```
>>> x = np.arange(6).reshape(2, 3)
>>> fl = x.flat
>>> fl.index
0
>>> next(fl)
0
>>> fl.index
1
```

numpy.lib.Arrayterator.shape
============================

property

*property* lib.Arrayterator.shape

The shape of the array to be iterated over. For an example, see `Arrayterator`.

numpy.lib.Arrayterator.flat
===========================

property

*property* lib.Arrayterator.flat

A 1-D flat iterator for Arrayterator objects.

This iterator returns elements of the array to be iterated over in `Arrayterator` one by one.
It is similar to [`flatiter`](numpy.flatiter#numpy.flatiter "numpy.flatiter").

See also

`Arrayterator`

[`flatiter`](numpy.flatiter#numpy.flatiter "numpy.flatiter")

#### Examples

```
>>> a = np.arange(3 * 4 * 5 * 6).reshape(3, 4, 5, 6)
>>> a_itor = np.lib.Arrayterator(a, 2)
```

```
>>> for subarr in a_itor.flat:
...     if not subarr:
...         print(subarr, type(subarr))
...
0 <class 'numpy.int64'>
```

numpy.chararray.astype
======================

method

chararray.astype(*dtype*, *order='K'*, *casting='unsafe'*, *subok=True*, *copy=True*)

Copy of the array, cast to a specified type.

Parameters

**dtype** : str or dtype
    Typecode or data-type to which the array is cast.

**order** : {'C', 'F', 'A', 'K'}, optional
    Controls the memory layout order of the result. 'C' means C order, 'F' means Fortran order, 'A' means 'F' order if all the arrays are Fortran contiguous, 'C' order otherwise, and 'K' means as close to the order the array elements appear in memory as possible. Default is 'K'.

**casting** : {'no', 'equiv', 'safe', 'same_kind', 'unsafe'}, optional
    Controls what kind of data casting may occur. Defaults to 'unsafe' for backwards compatibility.

    * 'no' means the data types should not be cast at all.
    * 'equiv' means only byte-order changes are allowed.
    * 'safe' means only casts which can preserve values are allowed.
    * 'same_kind' means only safe casts or casts within a kind, like float64 to float32, are allowed.
    * 'unsafe' means any data conversions may be done.

**subok** : bool, optional
    If True, then sub-classes will be passed through (default); otherwise the returned array will be forced to be a base-class array.

**copy** : bool, optional
    By default, astype always returns a newly allocated array.
    If this is set to false, and the [`dtype`](numpy.dtype#numpy.dtype "numpy.dtype"), `order`, and `subok` requirements are satisfied, the input array is returned instead of a copy.

Returns

**arr_t** : ndarray
    Unless [`copy`](numpy.copy#numpy.copy "numpy.copy") is False and the other conditions for returning the input array are satisfied (see the description of the [`copy`](numpy.copy#numpy.copy "numpy.copy") input parameter), `arr_t` is a new array of the same shape as the input array, with dtype and order given by [`dtype`](numpy.dtype#numpy.dtype "numpy.dtype") and `order`.

Raises

ComplexWarning
    When casting from complex to float or int. To avoid this, one should use `a.real.astype(t)`.

#### Notes

Changed in version 1.17.0: Casting between a simple data type and a structured one is possible only for "unsafe" casting. Casting to multiple fields is allowed, but casting from multiple fields is not.

Changed in version 1.9.0: Casting from numeric to string types in 'safe' casting mode requires that the string dtype length is long enough to store the max integer/float value converted.

#### Examples

```
>>> x = np.array([1, 2, 2.5])
>>> x
array([1. , 2. , 2.5])
```

```
>>> x.astype(int)
array([1, 2, 2])
```

numpy.chararray.argsort
=======================

method

chararray.argsort(*axis=-1*, *kind=None*, *order=None*)[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/core/defchararray.py#L2121-L2139)

Returns the indices that would sort this array.

Refer to [`numpy.argsort`](numpy.argsort#numpy.argsort "numpy.argsort") for full documentation.

See also

[`numpy.argsort`](numpy.argsort#numpy.argsort "numpy.argsort")
    equivalent function
numpy.chararray.copy
====================

method

chararray.copy(*order='C'*)

Return a copy of the array.

Parameters

**order** : {'C', 'F', 'A', 'K'}, optional
    Controls the memory layout of the copy. 'C' means C-order, 'F' means F-order, 'A' means 'F' if `a` is Fortran contiguous, 'C' otherwise. 'K' means match the layout of `a` as closely as possible. (Note that this function and [`numpy.copy`](numpy.copy#numpy.copy "numpy.copy") are very similar but have different default values for their order= arguments, and this function always passes sub-classes through.)

See also

[`numpy.copy`](numpy.copy#numpy.copy "numpy.copy")
    Similar function with different default behavior

[`numpy.copyto`](numpy.copyto#numpy.copyto "numpy.copyto")

#### Notes

This function is the preferred method for creating an array copy. The function [`numpy.copy`](numpy.copy#numpy.copy "numpy.copy") is similar, but it defaults to using order 'K', and will not pass sub-classes through by default.

#### Examples

```
>>> x = np.array([[1,2,3],[4,5,6]], order='F')
>>> y = x.copy()
>>> x.fill(0)
>>> x
array([[0, 0, 0],
       [0, 0, 0]])
>>> y
array([[1, 2, 3],
       [4, 5, 6]])
>>> y.flags['C_CONTIGUOUS']
True
```

numpy.chararray.count
=====================

method

chararray.count(*sub*, *start=0*, *end=None*)[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/core/defchararray.py#L2165-L2175)

Returns an array with the number of non-overlapping occurrences of substring `sub` in the range [`start`, `end`].

See also

[`char.count`](numpy.char.count#numpy.char.count "numpy.char.count")
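A quick illustration of element-wise `count` (our own example, not from the NumPy docs):

```python
import numpy as np

# chararray string methods operate element-wise.
a = np.char.array(['hello', 'world'])
counts = a.count('l')
print(counts.tolist())  # [2, 1]
```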
numpy.chararray.decode
======================

method

chararray.decode(*encoding=None*, *errors=None*)[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/core/defchararray.py#L2177-L2186)

Calls `str.decode` element-wise.

See also

[`char.decode`](numpy.char.decode#numpy.char.decode "numpy.char.decode")

numpy.chararray.dump
====================

method

chararray.dump(*file*)

Dump a pickle of the array to the specified file. The array can be read back with pickle.load or numpy.load.

Parameters

**file** : str or Path
    A string naming the dump file.

    Changed in version 1.17.0: [`pathlib.Path`](https://docs.python.org/3/library/pathlib.html#pathlib.Path "(in Python v3.10)") objects are now accepted.

numpy.chararray.dumps
=====================

method

chararray.dumps()

Returns the pickle of the array as a string. pickle.loads will convert the string back to an array.

Parameters

**None**

numpy.chararray.encode
======================

method

chararray.encode(*encoding=None*, *errors=None*)[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/core/defchararray.py#L2188-L2197)

Calls [`str.encode`](https://docs.python.org/3/library/stdtypes.html#str.encode "(in Python v3.10)") element-wise.

See also

[`char.encode`](numpy.char.encode#numpy.char.encode "numpy.char.encode")
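A small round-trip sketch for `dumps` (our own example; note that despite the wording above, under Python 3 `dumps` returns pickled `bytes`, not a `str`):

```python
import pickle
import numpy as np

a = np.char.array(['spam', 'eggs'])
blob = a.dumps()        # pickled bytes of the array
b = pickle.loads(blob)  # reconstructs an equal chararray
print((a == b).all())   # True
```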
numpy.chararray.endswith
========================

method

chararray.endswith(*suffix*, *start=0*, *end=None*)[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/core/defchararray.py#L2199-L2209)

Returns a boolean array which is `True` where the string element in `self` ends with `suffix`, and `False` otherwise.

See also

[`char.endswith`](numpy.char.endswith#numpy.char.endswith "numpy.char.endswith")

numpy.chararray.expandtabs
==========================

method

chararray.expandtabs(*tabsize=8*)[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/core/defchararray.py#L2211-L2221)

Return a copy of each string element where all tab characters are replaced by one or more spaces.

See also

[`char.expandtabs`](numpy.char.expandtabs#numpy.char.expandtabs "numpy.char.expandtabs")

numpy.chararray.fill
====================

method

chararray.fill(*value*)

Fill the array with a scalar value.

Parameters

**value** : scalar
    All elements of `a` will be assigned this value.

#### Examples

```
>>> a = np.array([1, 2])
>>> a.fill(0)
>>> a
array([0, 0])
>>> a = np.empty(2)
>>> a.fill(1)
>>> a
array([1., 1.])
```

numpy.chararray.find
====================

method

chararray.find(*sub*, *start=0*, *end=None*)[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/core/defchararray.py#L2223-L2233)

For each element, return the lowest index in the string where substring `sub` is found.
See also

[`char.find`](numpy.char.find#numpy.char.find "numpy.char.find")

numpy.chararray.flatten
=======================

method

chararray.flatten(*order='C'*)

Return a copy of the array collapsed into one dimension.

Parameters

**order** : {'C', 'F', 'A', 'K'}, optional
    'C' means to flatten in row-major (C-style) order. 'F' means to flatten in column-major (Fortran-style) order. 'A' means to flatten in column-major order if `a` is Fortran *contiguous* in memory, row-major order otherwise. 'K' means to flatten `a` in the order the elements occur in memory. The default is 'C'.

Returns

**y** : ndarray
    A copy of the input array, flattened to one dimension.

See also

[`ravel`](numpy.ravel#numpy.ravel "numpy.ravel")
    Return a flattened array.

[`flat`](numpy.chararray.flat#numpy.chararray.flat "numpy.chararray.flat")
    A 1-D flat iterator over the array.

#### Examples

```
>>> a = np.array([[1,2], [3,4]])
>>> a.flatten()
array([1, 2, 3, 4])
>>> a.flatten('F')
array([1, 3, 2, 4])
```

numpy.chararray.getfield
========================

method

chararray.getfield(*dtype*, *offset=0*)

Returns a field of the given array as a certain type.

A field is a view of the array data with a given data-type. The values in the view are determined by the given type and the offset into the current array in bytes. The offset needs to be such that the view dtype fits in the array dtype; for example, an array of dtype complex128 has 16-byte elements. If taking a view with a 32-bit integer (4 bytes), the offset needs to be between 0 and 12 bytes.

Parameters

**dtype** : str or dtype
    The data type of the view. The dtype size of the view can not be larger than that of the array itself.
**offset** : int
    Number of bytes to skip before beginning the element view.

#### Examples

```
>>> x = np.diag([1.+1.j]*2)
>>> x[1, 1] = 2 + 4.j
>>> x
array([[1.+1.j, 0.+0.j],
       [0.+0.j, 2.+4.j]])
>>> x.getfield(np.float64)
array([[1., 0.],
       [0., 2.]])
```

By choosing an offset of 8 bytes we can select the complex part of the array for our view:

```
>>> x.getfield(np.float64, offset=8)
array([[1., 0.],
       [0., 4.]])
```

numpy.chararray.index
=====================

method

chararray.index(*sub*, *start=0*, *end=None*)[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/core/defchararray.py#L2235-L2244)

Like [`find`](numpy.chararray.find#numpy.chararray.find "numpy.chararray.find"), but raises `ValueError` when the substring is not found.

See also

[`char.index`](numpy.char.index#numpy.char.index "numpy.char.index")

numpy.chararray.isalnum
=======================

method

chararray.isalnum()[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/core/defchararray.py#L2246-L2257)

Returns true for each element if all characters in the string are alphanumeric and there is at least one character, false otherwise.

See also

[`char.isalnum`](numpy.char.isalnum#numpy.char.isalnum "numpy.char.isalnum")

numpy.chararray.isalpha
=======================

method

chararray.isalpha()[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/core/defchararray.py#L2259-L2270)

Returns true for each element if all characters in the string are alphabetic and there is at least one character, false otherwise.
See also

[`char.isalpha`](numpy.char.isalpha#numpy.char.isalpha "numpy.char.isalpha")

numpy.chararray.isdecimal
=========================

method

chararray.isdecimal()[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/core/defchararray.py#L2599-L2609)

For each element in `self`, return True if there are only decimal characters in the element.

See also

[`char.isdecimal`](numpy.char.isdecimal#numpy.char.isdecimal "numpy.char.isdecimal")

numpy.chararray.isdigit
=======================

method

chararray.isdigit()[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/core/defchararray.py#L2272-L2282)

Returns true for each element if all characters in the string are digits and there is at least one character, false otherwise.

See also

[`char.isdigit`](numpy.char.isdigit#numpy.char.isdigit "numpy.char.isdigit")

numpy.chararray.islower
=======================

method

chararray.islower()[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/core/defchararray.py#L2284-L2295)

Returns true for each element if all cased characters in the string are lowercase and there is at least one cased character, false otherwise.

See also

[`char.islower`](numpy.char.islower#numpy.char.islower "numpy.char.islower")
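These element-wise predicates follow Python's `str` semantics; a small illustration of our own (not from the NumPy docs):

```python
import numpy as np

a = np.char.array(['abc', '123', 'Abc'])
print(a.isalpha().tolist())  # [True, False, True]
print(a.isdigit().tolist())  # [False, True, False]
# '123' has no cased characters, so islower() is False for it,
# matching str.islower().
print(a.islower().tolist())  # [True, False, False]
```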
numpy.chararray.isnumeric
=========================

method

chararray.isnumeric()[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/core/defchararray.py#L2587-L2597)

For each element in `self`, return True if there are only numeric characters in the element.

See also

[`char.isnumeric`](numpy.char.isnumeric#numpy.char.isnumeric "numpy.char.isnumeric")

numpy.chararray.isspace
=======================

method

chararray.isspace()[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/core/defchararray.py#L2297-L2308)

Returns true for each element if there are only whitespace characters in the string and there is at least one character, false otherwise.

See also

[`char.isspace`](numpy.char.isspace#numpy.char.isspace "numpy.char.isspace")

numpy.chararray.istitle
=======================

method

chararray.istitle()[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/core/defchararray.py#L2310-L2320)

Returns true for each element if the element is a titlecased string and there is at least one character, false otherwise.

See also

[`char.istitle`](numpy.char.istitle#numpy.char.istitle "numpy.char.istitle")

numpy.chararray.isupper
=======================

method

chararray.isupper()[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/core/defchararray.py#L2322-L2333)

Returns true for each element if all cased characters in the string are uppercase and there is at least one character, false otherwise.
See also

[`char.isupper`](numpy.char.isupper#numpy.char.isupper "numpy.char.isupper")

numpy.chararray.item
====================

method

chararray.item(**args*)

Copy an element of an array to a standard Python scalar and return it.

Parameters

***args** : Arguments (variable number and type)
    * none: in this case, the method only works for arrays with one element (`a.size == 1`), whose element is copied into a standard Python scalar object and returned.
    * int_type: this argument is interpreted as a flat index into the array, specifying which element to copy and return.
    * tuple of int_types: functions as does a single int_type argument, except that the argument is interpreted as an nd-index into the array.

Returns

**z** : Standard Python scalar object
    A copy of the specified element of the array as a suitable Python scalar

#### Notes

When the data type of `a` is longdouble or clongdouble, item() returns a scalar array object because there is no available Python scalar that would not lose information. Void arrays return a buffer object for item(), unless fields are defined, in which case a tuple is returned.

[`item`](#numpy.chararray.item "numpy.chararray.item") is very similar to a[args], except that, instead of an array scalar, a standard Python scalar is returned. This can be useful for speeding up access to elements of the array and doing arithmetic on elements of the array using Python's optimized math.

#### Examples

```
>>> np.random.seed(123)
>>> x = np.random.randint(9, size=(3, 3))
>>> x
array([[2, 2, 6],
       [1, 3, 6],
       [1, 0, 1]])
>>> x.item(3)
1
>>> x.item(7)
0
>>> x.item((0, 1))
2
>>> x.item((2, 2))
1
```
numpy.chararray.join
====================

method

chararray.join(seq)[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/core/defchararray.py#L2335-L2345)

Return a string which is the concatenation of the strings in the sequence `seq`.

See also

[`char.join`](numpy.char.join#numpy.char.join "numpy.char.join")

numpy.chararray.ljust
=====================

method

chararray.ljust(width, fillchar=' ')[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/core/defchararray.py#L2347-L2357)

Return an array with the elements of `self` left-justified in a string of length `width`.

See also

[`char.ljust`](numpy.char.ljust#numpy.char.ljust "numpy.char.ljust")

numpy.chararray.lower
=====================

method

chararray.lower()[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/core/defchararray.py#L2359-L2369)

Return an array with the elements of `self` converted to lowercase.

See also

[`char.lower`](numpy.char.lower#numpy.char.lower "numpy.char.lower")

numpy.chararray.lstrip
======================

method

chararray.lstrip(chars=None)[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/core/defchararray.py#L2371-L2381)

For each element in `self`, return a copy with the leading characters removed.

See also

[`char.lstrip`](numpy.char.lstrip#numpy.char.lstrip "numpy.char.lstrip")
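A short sketch of the four methods above (contents illustrative). Note that for `join` the chararray itself supplies the separator for each element, mirroring `sep.join(...)` in plain Python:

```python
import numpy as np

sep = np.char.array(['-', ':'])
print(sep.join(['ab', 'cd']))        # element-wise str.join: ['a-b', 'c:d']

b = np.char.array(['Hi', 'NumPy'])
print(b.ljust(6, '.'))               # pad on the right to width 6
print(b.lower())                     # ['hi', 'numpy']

print(np.char.array(['--x', '  y']).lstrip(' -'))  # leading chars removed
```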
numpy.chararray.nonzero
=======================

method

chararray.nonzero()

Return the indices of the elements that are non-zero. Refer to [`numpy.nonzero`](numpy.nonzero#numpy.nonzero "numpy.nonzero") for full documentation.

See also

[`numpy.nonzero`](numpy.nonzero#numpy.nonzero "numpy.nonzero")

equivalent function

numpy.chararray.put
===================

method

chararray.put(indices, values, mode='raise')

Set `a.flat[n] = values[n]` for all `n` in indices. Refer to [`numpy.put`](numpy.put#numpy.put "numpy.put") for full documentation.

See also

[`numpy.put`](numpy.put#numpy.put "numpy.put")

equivalent function

numpy.chararray.ravel
=====================

method

chararray.ravel([order])

Return a flattened array. Refer to [`numpy.ravel`](numpy.ravel#numpy.ravel "numpy.ravel") for full documentation.

See also

[`numpy.ravel`](numpy.ravel#numpy.ravel "numpy.ravel")

equivalent function

[`ndarray.flat`](numpy.ndarray.flat#numpy.ndarray.flat "numpy.ndarray.flat")

a flat iterator on the array.

numpy.chararray.repeat
======================

method

chararray.repeat(repeats, axis=None)

Repeat elements of an array. Refer to [`numpy.repeat`](numpy.repeat#numpy.repeat "numpy.repeat") for full documentation.

See also

[`numpy.repeat`](numpy.repeat#numpy.repeat "numpy.repeat")

equivalent function
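The four methods above are the generic ndarray methods that chararray inherits. A small sketch (values illustrative; `put` is shown on a plain integer array):

```python
import numpy as np

a = np.char.array([['a', 'b'], ['c', 'd']])
print(a.ravel())             # flattened to a 1-D chararray
print(a.repeat(2, axis=1))   # each element repeated along the columns

b = np.array([0, 0, 0, 0])
b.put([1, 3], [9, 7])        # b.flat[1] = 9, b.flat[3] = 7
print(b)                     # [0 9 0 7]
print(b.nonzero())           # indices of the non-zero entries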
numpy.chararray.replace
=======================

method

chararray.replace(old, new, count=None)[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/core/defchararray.py#L2393-L2403)

For each element in `self`, return a copy of the string with all occurrences of substring `old` replaced by `new`.

See also

[`char.replace`](numpy.char.replace#numpy.char.replace "numpy.char.replace")

numpy.chararray.reshape
=======================

method

chararray.reshape(shape, order='C')

Returns an array containing the same data with a new shape. Refer to [`numpy.reshape`](numpy.reshape#numpy.reshape "numpy.reshape") for full documentation.

See also

[`numpy.reshape`](numpy.reshape#numpy.reshape "numpy.reshape")

equivalent function

#### Notes

Unlike the free function [`numpy.reshape`](numpy.reshape#numpy.reshape "numpy.reshape"), this method on [`ndarray`](numpy.ndarray#numpy.ndarray "numpy.ndarray") allows the elements of the shape parameter to be passed in as separate arguments. For example, `a.reshape(10, 11)` is equivalent to `a.reshape((10, 11))`.

numpy.chararray.resize
======================

method

chararray.resize(new_shape, refcheck=True)

Change shape and size of array in-place.

Parameters

**new_shape** : tuple of ints, or `n` ints

Shape of resized array.

**refcheck** : bool, optional

If False, reference count will not be checked. Default is True.

Returns

None

Raises

ValueError

If `a` does not own its own data or references or views to it exist, and the data memory must be changed. PyPy only: will always raise if the data memory must be changed, since there is no reliable way to determine if references or views to it exist.

SystemError

If the `order` keyword argument is specified. This behaviour is a bug in NumPy.

See also

[`resize`](numpy.resize#numpy.resize "numpy.resize")

Return a new array with the specified shape.

#### Notes

This reallocates space for the data area if necessary. Only contiguous arrays (data elements consecutive in memory) can be resized.

The purpose of the reference count check is to make sure you do not use this array as a buffer for another Python object and then reallocate the memory. However, reference counts can increase in other ways so if you are sure that you have not shared the memory for this array with another Python object, then you may safely set `refcheck` to False.

#### Examples

Shrinking an array: array is flattened (in the order that the data are stored in memory), resized, and reshaped:

```
>>> a = np.array([[0, 1], [2, 3]], order='C')
>>> a.resize((2, 1))
>>> a
array([[0],
       [1]])
```

```
>>> a = np.array([[0, 1], [2, 3]], order='F')
>>> a.resize((2, 1))
>>> a
array([[0],
       [2]])
```

Enlarging an array: as above, but missing entries are filled with zeros:

```
>>> b = np.array([[0, 1], [2, 3]])
>>> b.resize(2, 3) # new_shape parameter doesn't have to be a tuple
>>> b
array([[0, 1, 2],
       [3, 0, 0]])
```

Referencing an array prevents resizing:

```
>>> c = a
>>> a.resize((1, 1))
Traceback (most recent call last):
...
ValueError: cannot resize an array that references or is referenced ...
```

Unless `refcheck` is False:

```
>>> a.resize((1, 1), refcheck=False)
>>> a
array([[0]])
>>> c
array([[0]])
```

numpy.chararray.rfind
=====================

method

chararray.rfind(sub, start=0, end=None)[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/core/defchararray.py#L2405-L2416)

For each element in `self`, return the highest index in the string where substring `sub` is found, such that `sub` is contained within [`start`, `end`].

See also

[`char.rfind`](numpy.char.rfind#numpy.char.rfind "numpy.char.rfind")

numpy.chararray.rindex
======================

method

chararray.rindex(sub, start=0, end=None)[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/core/defchararray.py#L2418-L2428)

Like [`rfind`](numpy.chararray.rfind#numpy.chararray.rfind "numpy.chararray.rfind"), but raises `ValueError` when the substring `sub` is not found.

See also

[`char.rindex`](numpy.char.rindex#numpy.char.rindex "numpy.char.rindex")

numpy.chararray.rjust
=====================

method

chararray.rjust(width, fillchar=' ')[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/core/defchararray.py#L2430-L2440)

Return an array with the elements of `self` right-justified in a string of length `width`.

See also

[`char.rjust`](numpy.char.rjust#numpy.char.rjust "numpy.char.rjust")
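A quick sketch of `replace`, `rfind`, and `rjust` (contents illustrative; `rindex` behaves like `rfind` but raises `ValueError` instead of returning -1):

```python
import numpy as np

a = np.char.array(['banana', 'bandana'])
print(a.replace('an', 'AN'))   # every occurrence replaced, per element
print(a.rfind('an'))           # highest index of 'an' in each element
print(a.rjust(9, '*'))         # right-justify to width 9, pad with '*'
```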
numpy.chararray.rsplit
======================

method

chararray.rsplit(sep=None, maxsplit=None)[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/core/defchararray.py#L2452-L2462)

For each element in `self`, return a list of the words in the string, using `sep` as the delimiter string.

See also

[`char.rsplit`](numpy.char.rsplit#numpy.char.rsplit "numpy.char.rsplit")

numpy.chararray.rstrip
======================

method

chararray.rstrip(chars=None)[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/core/defchararray.py#L2464-L2474)

For each element in `self`, return a copy with the trailing characters removed.

See also

[`char.rstrip`](numpy.char.rstrip#numpy.char.rstrip "numpy.char.rstrip")

numpy.chararray.searchsorted
============================

method

chararray.searchsorted(v, side='left', sorter=None)

Find indices where elements of v should be inserted in a to maintain order. For full documentation, see [`numpy.searchsorted`](numpy.searchsorted#numpy.searchsorted "numpy.searchsorted").

See also

[`numpy.searchsorted`](numpy.searchsorted#numpy.searchsorted "numpy.searchsorted")

equivalent function

numpy.chararray.setfield
========================

method

chararray.setfield(val, dtype, offset=0)

Put a value into a specified place in a field defined by a data-type.

Place `val` into `a`’s field defined by [`dtype`](numpy.dtype#numpy.dtype "numpy.dtype") and beginning `offset` bytes into the field.

Parameters

**val** : object

Value to be placed in field.

**dtype** : dtype object

Data-type of the field in which to place `val`.

**offset** : int, optional

The number of bytes into the field at which to place `val`.

Returns

None

See also

[`getfield`](numpy.chararray.getfield#numpy.chararray.getfield "numpy.chararray.getfield")

#### Examples

```
>>> x = np.eye(3)
>>> x.getfield(np.float64)
array([[1., 0., 0.],
       [0., 1., 0.],
       [0., 0., 1.]])
>>> x.setfield(3, np.int32)
>>> x.getfield(np.int32)
array([[3, 3, 3],
       [3, 3, 3],
       [3, 3, 3]], dtype=int32)
>>> x
array([[1.0e+000, 1.5e-323, 1.5e-323],
       [1.5e-323, 1.0e+000, 1.5e-323],
       [1.5e-323, 1.5e-323, 1.0e+000]])
>>> x.setfield(np.eye(3), np.int32)
>>> x
array([[1., 0., 0.],
       [0., 1., 0.],
       [0., 0., 1.]])
```

numpy.chararray.setflags
========================

method

chararray.setflags(write=None, align=None, uic=None)

Set array flags WRITEABLE, ALIGNED, WRITEBACKIFCOPY, respectively.

These Boolean-valued flags affect how numpy interprets the memory area used by `a` (see Notes below). The ALIGNED flag can only be set to True if the data is actually aligned according to the type. The WRITEBACKIFCOPY flag can never be set to True. The flag WRITEABLE can only be set to True if the array owns its own memory, or the ultimate owner of the memory exposes a writeable buffer interface, or is a string. (The exception for string is made so that unpickling can be done without copying memory.)

Parameters

**write** : bool, optional

Describes whether or not `a` can be written to.

**align** : bool, optional

Describes whether or not `a` is aligned properly for its type.

**uic** : bool, optional

Describes whether or not `a` is a copy of another “base” array.

#### Notes

Array flags provide information about how the memory area used for the array is to be interpreted. There are 7 Boolean flags in use, only three of which can be changed by the user: WRITEBACKIFCOPY, WRITEABLE, and ALIGNED.

WRITEABLE (W): the data area can be written to;

ALIGNED (A): the data and strides are aligned appropriately for the hardware (as determined by the compiler);

WRITEBACKIFCOPY (X): this array is a copy of some other array (referenced by .base). When the C-API function PyArray_ResolveWritebackIfCopy is called, the base array will be updated with the contents of this array.

All flags can be accessed using the single (upper case) letter as well as the full name.

#### Examples

```
>>> y = np.array([[3, 1, 7],
...               [2, 0, 0],
...               [8, 5, 9]])
>>> y
array([[3, 1, 7],
       [2, 0, 0],
       [8, 5, 9]])
>>> y.flags
  C_CONTIGUOUS : True
  F_CONTIGUOUS : False
  OWNDATA : True
  WRITEABLE : True
  ALIGNED : True
  WRITEBACKIFCOPY : False
>>> y.setflags(write=0, align=0)
>>> y.flags
  C_CONTIGUOUS : True
  F_CONTIGUOUS : False
  OWNDATA : True
  WRITEABLE : False
  ALIGNED : False
  WRITEBACKIFCOPY : False
>>> y.setflags(uic=1)
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
ValueError: cannot set WRITEBACKIFCOPY flag to True
```

numpy.chararray.sort
====================

method

chararray.sort(axis=-1, kind=None, order=None)

Sort an array in-place. Refer to [`numpy.sort`](numpy.sort#numpy.sort "numpy.sort") for full documentation.

Parameters

**axis** : int, optional

Axis along which to sort. Default is -1, which means sort along the last axis.

**kind** : {‘quicksort’, ‘mergesort’, ‘heapsort’, ‘stable’}, optional

Sorting algorithm. The default is ‘quicksort’. Note that both ‘stable’ and ‘mergesort’ use timsort under the covers and, in general, the actual implementation will vary with datatype. The ‘mergesort’ option is retained for backwards compatibility.

Changed in version 1.15.0: The ‘stable’ option was added.

**order** : str or list of str, optional

When `a` is an array with fields defined, this argument specifies which fields to compare first, second, etc. A single field can be specified as a string, and not all fields need be specified, but unspecified fields will still be used, in the order in which they come up in the dtype, to break ties.

See also

[`numpy.sort`](numpy.sort#numpy.sort "numpy.sort")

Return a sorted copy of an array.

[`numpy.argsort`](numpy.argsort#numpy.argsort "numpy.argsort")

Indirect sort.

[`numpy.lexsort`](numpy.lexsort#numpy.lexsort "numpy.lexsort")

Indirect stable sort on multiple keys.

[`numpy.searchsorted`](numpy.searchsorted#numpy.searchsorted "numpy.searchsorted")

Find elements in sorted array.

[`numpy.partition`](numpy.partition#numpy.partition "numpy.partition")

Partial sort.

#### Notes

See [`numpy.sort`](numpy.sort#numpy.sort "numpy.sort") for notes on the different sorting algorithms.

#### Examples

```
>>> a = np.array([[1,4], [3,1]])
>>> a.sort(axis=1)
>>> a
array([[1, 4],
       [1, 3]])
>>> a.sort(axis=0)
>>> a
array([[1, 3],
       [1, 4]])
```

Use the `order` keyword to specify a field to use when sorting a structured array:

```
>>> a = np.array([('a', 2), ('c', 1)], dtype=[('x', 'S1'), ('y', int)])
>>> a.sort(order='y')
>>> a
array([(b'c', 1), (b'a', 2)],
      dtype=[('x', 'S1'), ('y', '<i8')])
```

numpy.chararray.split
=====================

method

chararray.split(sep=None, maxsplit=None)[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/core/defchararray.py#L2476-L2486)

For each element in `self`, return a list of the words in the string, using `sep` as the delimiter string.

See also

[`char.split`](numpy.char.split#numpy.char.split "numpy.char.split")
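`split` and `rsplit` return an object array whose elements are Python lists, mirroring `str.split` applied element-wise. A minimal sketch (contents illustrative):

```python
import numpy as np

a = np.char.array(['a,b,c', 'x,y'])
print(a.split(','))       # per-element lists: [['a', 'b', 'c'], ['x', 'y']]
print(a.rsplit(',', 1))   # at most one split, taken from the right

print(np.char.array(['ab--', 'cd-']).rstrip('-'))  # trailing chars removed
```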
numpy.chararray.splitlines
==========================

method

chararray.splitlines(keepends=None)[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/core/defchararray.py#L2488-L2498)

For each element in `self`, return a list of the lines in the element, breaking at line boundaries.

See also

[`char.splitlines`](numpy.char.splitlines#numpy.char.splitlines "numpy.char.splitlines")

numpy.chararray.squeeze
=======================

method

chararray.squeeze(axis=None)

Remove axes of length one from `a`. Refer to [`numpy.squeeze`](numpy.squeeze#numpy.squeeze "numpy.squeeze") for full documentation.

See also

[`numpy.squeeze`](numpy.squeeze#numpy.squeeze "numpy.squeeze")

equivalent function

numpy.chararray.startswith
==========================

method

chararray.startswith(prefix, start=0, end=None)[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/core/defchararray.py#L2500-L2510)

Returns a boolean array which is `True` where the string element in `self` starts with `prefix`, otherwise `False`.

See also

[`char.startswith`](numpy.char.startswith#numpy.char.startswith "numpy.char.startswith")

numpy.chararray.strip
=====================

method

chararray.strip(chars=None)[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/core/defchararray.py#L2512-L2522)

For each element in `self`, return a copy with the leading and trailing characters removed.

See also

[`char.strip`](numpy.char.strip#numpy.char.strip "numpy.char.strip")

numpy.chararray.swapaxes
========================

method

chararray.swapaxes(axis1, axis2)

Return a view of the array with `axis1` and `axis2` interchanged. Refer to [`numpy.swapaxes`](numpy.swapaxes#numpy.swapaxes "numpy.swapaxes") for full documentation.

See also

[`numpy.swapaxes`](numpy.swapaxes#numpy.swapaxes "numpy.swapaxes")

equivalent function

numpy.chararray.swapcase
========================

method

chararray.swapcase()[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/core/defchararray.py#L2524-L2534)

For each element in `self`, return a copy of the string with uppercase characters converted to lowercase and vice versa.

See also

[`char.swapcase`](numpy.char.swapcase#numpy.char.swapcase "numpy.char.swapcase")

numpy.chararray.take
====================

method

chararray.take(indices, axis=None, out=None, mode='raise')

Return an array formed from the elements of `a` at the given indices. Refer to [`numpy.take`](numpy.take#numpy.take "numpy.take") for full documentation.

See also

[`numpy.take`](numpy.take#numpy.take "numpy.take")

equivalent function
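A brief sketch of the string methods above together with the generic `take` (values illustrative):

```python
import numpy as np

a = np.char.array(['  Hello', 'NumPy'])
print(a.startswith('Num'))   # boolean array, one entry per element
print(a.strip())             # whitespace removed from both ends
print(a.swapcase())          # case of every cased character flipped

b = np.array([10, 20, 30, 40])
print(b.take([0, 3]))        # elements at the given flat indices
```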
numpy.chararray.title
=====================

method

chararray.title()[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/core/defchararray.py#L2536-L2547)

For each element in `self`, return a titlecased version of the string: words start with uppercase characters, all remaining cased characters are lowercase.

See also

[`char.title`](numpy.char.title#numpy.char.title "numpy.char.title")

numpy.chararray.tofile
======================

method

chararray.tofile(fid, sep='', format='%s')

Write array to a file as text or binary (default). Data is always written in ‘C’ order, independent of the order of `a`. The data produced by this method can be recovered using the function fromfile().

Parameters

**fid** : file or str or Path

An open file object, or a string containing a filename.

Changed in version 1.17.0: [`pathlib.Path`](https://docs.python.org/3/library/pathlib.html#pathlib.Path "(in Python v3.10)") objects are now accepted.

**sep** : str

Separator between array items for text output. If “” (empty), a binary file is written, equivalent to `file.write(a.tobytes())`.

**format** : str

Format string for text file output. Each entry in the array is formatted to text by first converting it to the closest Python type, and then using “format” % item.

#### Notes

This is a convenience function for quick storage of array data. Information on endianness and precision is lost, so this method is not a good choice for files intended to archive data or transport data between machines with different endianness. Some of these problems can be overcome by outputting the data as text files, at the expense of speed and file size.

When fid is a file object, array contents are directly written to the file, bypassing the file object’s `write` method. As a result, tofile cannot be used with file objects supporting compression (e.g., GzipFile) or file-like objects that do not support `fileno()` (e.g., BytesIO).

numpy.chararray.tolist
======================

method

chararray.tolist()

Return the array as an `a.ndim`-levels deep nested list of Python scalars.

Return a copy of the array data as a (nested) Python list. Data items are converted to the nearest compatible builtin Python type, via the [`item`](numpy.ndarray.item#numpy.ndarray.item "numpy.ndarray.item") function.

If `a.ndim` is 0, then since the depth of the nested list is 0, it will not be a list at all, but a simple Python scalar.

Parameters

none

Returns

**y** : object, or list of object, or list of list of object, or …

The possibly nested list of array elements.

#### Notes

The array may be recreated via `a = np.array(a.tolist())`, although this may sometimes lose precision.

#### Examples

For a 1D array, `a.tolist()` is almost the same as `list(a)`, except that `tolist` changes numpy scalars to Python scalars:

```
>>> a = np.uint32([1, 2])
>>> a_list = list(a)
>>> a_list
[1, 2]
>>> type(a_list[0])
<class 'numpy.uint32'>
>>> a_tolist = a.tolist()
>>> a_tolist
[1, 2]
>>> type(a_tolist[0])
<class 'int'>
```

Additionally, for a 2D array, `tolist` applies recursively:

```
>>> a = np.array([[1, 2], [3, 4]])
>>> list(a)
[array([1, 2]), array([3, 4])]
>>> a.tolist()
[[1, 2], [3, 4]]
```

The base case for this recursion is a 0D array:

```
>>> a = np.array(1)
>>> list(a)
Traceback (most recent call last):
...
TypeError: iteration over a 0-d array
>>> a.tolist()
1
```

numpy.chararray.tostring
========================

method

chararray.tostring(order='C')

A compatibility alias for [`tobytes`](numpy.chararray.tobytes#numpy.chararray.tobytes "numpy.chararray.tobytes"), with exactly the same behavior. Despite its name, it returns `bytes` not [`str`](https://docs.python.org/3/library/stdtypes.html#str "(in Python v3.10)")s.

Deprecated since version 1.19.0.

numpy.chararray.translate
=========================

method

chararray.translate(table, deletechars=None)[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/core/defchararray.py#L2549-L2561)

For each element in `self`, return a copy of the string where all characters occurring in the optional argument `deletechars` are removed, and the remaining characters have been mapped through the given translation table.
See also

[`char.translate`](numpy.char.translate#numpy.char.translate "numpy.char.translate")

numpy.chararray.transpose
=========================

method

chararray.transpose(*axes)

Returns a view of the array with axes transposed.

For a 1-D array this has no effect, as a transposed vector is simply the same vector. To convert a 1-D array into a 2D column vector, an additional dimension must be added. `np.atleast_2d(a).T` achieves this, as does `a[:, np.newaxis]`. For a 2-D array, this is a standard matrix transpose. For an n-D array, if axes are given, their order indicates how the axes are permuted (see Examples). If axes are not provided and `a.shape = (i[0], i[1], ... i[n-2], i[n-1])`, then `a.transpose().shape = (i[n-1], i[n-2], ... i[1], i[0])`.

Parameters

**axes** : None, tuple of ints, or `n` ints

* None or no argument: reverses the order of the axes.
* tuple of ints: `i` in the `j`-th place in the tuple means `a`’s `i`-th axis becomes `a.transpose()`’s `j`-th axis.
* `n` ints: same as an n-tuple of the same ints (this form is intended simply as a “convenience” alternative to the tuple form)

Returns

**out** : ndarray

View of `a`, with axes suitably permuted.

See also

[`transpose`](numpy.transpose#numpy.transpose "numpy.transpose")

Equivalent function

[`ndarray.T`](numpy.ndarray.t#numpy.ndarray.T "numpy.ndarray.T")

Array property returning the array transposed.

[`ndarray.reshape`](numpy.ndarray.reshape#numpy.ndarray.reshape "numpy.ndarray.reshape")

Give a new shape to an array without changing its data.

#### Examples

```
>>> a = np.array([[1, 2], [3, 4]])
>>> a
array([[1, 2],
       [3, 4]])
>>> a.transpose()
array([[1, 3],
       [2, 4]])
>>> a.transpose((1, 0))
array([[1, 3],
       [2, 4]])
>>> a.transpose(1, 0)
array([[1, 3],
       [2, 4]])
```

numpy.chararray.upper
=====================

method

chararray.upper()[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/core/defchararray.py#L2563-L2573)

Return an array with the elements of `self` converted to uppercase.

See also

[`char.upper`](numpy.char.upper#numpy.char.upper "numpy.char.upper")

numpy.chararray.view
====================

method

chararray.view([dtype][, type])

New view of array with the same data.

Note: Passing None for `dtype` is different from omitting the parameter, since the former invokes `dtype(None)` which is an alias for `dtype('float_')`.

Parameters

**dtype** : data-type or ndarray sub-class, optional

Data-type descriptor of the returned view, e.g., float32 or int16. Omitting it results in the view having the same data-type as `a`. This argument can also be specified as an ndarray sub-class, which then specifies the type of the returned object (this is equivalent to setting the `type` parameter).

**type** : Python type, optional

Type of the returned view, e.g., ndarray or matrix. Again, omission of the parameter results in type preservation.

#### Notes

`a.view()` is used two different ways:

`a.view(some_dtype)` or `a.view(dtype=some_dtype)` constructs a view of the array’s memory with a different data-type. This can cause a reinterpretation of the bytes of memory.

`a.view(ndarray_subclass)` or `a.view(type=ndarray_subclass)` just returns an instance of `ndarray_subclass` that looks at the same array (same shape, dtype, etc.) This does not cause a reinterpretation of the memory.

For `a.view(some_dtype)`, if `some_dtype` has a different number of bytes per entry than the previous dtype (for example, converting a regular array to a structured array), then the last axis of `a` must be contiguous. This axis will be resized in the result.

Changed in version 1.23.0: Only the last axis needs to be contiguous. Previously, the entire array had to be C-contiguous.

#### Examples

```
>>> x = np.array([(1, 2)], dtype=[('a', np.int8), ('b', np.int8)])
```

Viewing array data using a different type and dtype:

```
>>> y = x.view(dtype=np.int16, type=np.matrix)
>>> y
matrix([[513]], dtype=int16)
>>> print(type(y))
<class 'numpy.matrix'>
```

Creating a view on a structured array so it can be used in calculations:

```
>>> x = np.array([(1, 2),(3,4)], dtype=[('a', np.int8), ('b', np.int8)])
>>> xv = x.view(dtype=np.int8).reshape(-1,2)
>>> xv
array([[1, 2],
       [3, 4]], dtype=int8)
>>> xv.mean(0)
array([2., 3.])
```

Making changes to the view changes the underlying array:

```
>>> xv[0,1] = 20
>>> x
array([(1, 20), (3, 4)], dtype=[('a', 'i1'), ('b', 'i1')])
```

Using a view to convert an array to a recarray:

```
>>> z = x.view(np.recarray)
>>> z.a
array([1, 3], dtype=int8)
```

Views share data:

```
>>> x[0] = (9, 10)
>>> z[0]
(9, 10)
```

Views that change the dtype size (bytes per entry) should normally be avoided on arrays defined by slices, transposes, fortran-ordering, etc.:

```
>>> x = np.array([[1, 2, 3], [4, 5, 6]], dtype=np.int16)
>>> y = x[:, ::2]
>>> y
array([[1, 3],
       [4, 6]], dtype=int16)
>>> y.view(dtype=[('width', np.int16), ('length', np.int16)])
Traceback (most recent call last):
...
ValueError: To change to a dtype of a different size, the last axis must be contiguous
>>> z = y.copy()
>>> z.view(dtype=[('width', np.int16), ('length', np.int16)])
array([[(1, 3)],
       [(4, 6)]], dtype=[('width', '<i2'), ('length', '<i2')])
```

However, views that change dtype are totally fine for arrays with a contiguous last axis, even if the rest of the axes are not C-contiguous:

```
>>> x = np.arange(2 * 3 * 4, dtype=np.int8).reshape(2, 3, 4)
>>> x.transpose(1, 0, 2).view(np.int16)
array([[[ 256,  770],
        [3340, 3854]],
       [[1284, 1798],
        [4368, 4882]],
       [[2312, 2826],
        [5396, 5910]]], dtype=int16)
```

numpy.chararray.zfill
=====================

method

chararray.zfill(width)[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/core/defchararray.py#L2575-L2585)

Return the numeric string left-filled with zeros in a string of length `width`.

See also

[`char.zfill`](numpy.char.zfill#numpy.char.zfill "numpy.char.zfill")

numpy.chararray.strides
=======================

attribute

chararray.strides

Tuple of bytes to step in each dimension when traversing an array.

The byte offset of element `(i[0], i[1], ..., i[n])` in an array `a` is:

```
offset = sum(np.array(i) * a.strides)
```

A more detailed explanation of strides can be found in the “ndarray.rst” file in the NumPy reference guide.

Warning: Setting `arr.strides` is discouraged and may be deprecated in the future. [`numpy.lib.stride_tricks.as_strided`](numpy.lib.stride_tricks.as_strided#numpy.lib.stride_tricks.as_strided "numpy.lib.stride_tricks.as_strided") should be preferred to create a new view of the same data in a safer way.
See also [`numpy.lib.stride_tricks.as_strided`](numpy.lib.stride_tricks.as_strided#numpy.lib.stride_tricks.as_strided "numpy.lib.stride_tricks.as_strided") #### Notes Imagine an array of 32-bit integers (each 4 bytes): ``` x = np.array([[0, 1, 2, 3, 4], [5, 6, 7, 8, 9]], dtype=np.int32) ``` This array is stored in memory as 40 bytes, one after the other (known as a contiguous block of memory). The strides of an array tell us how many bytes we have to skip in memory to move to the next position along a certain axis. For example, we have to skip 4 bytes (1 value) to move to the next column, but 20 bytes (5 values) to get to the same position in the next row. As such, the strides for the array `x` will be `(20, 4)`. #### Examples ``` >>> y = np.reshape(np.arange(2*3*4), (2,3,4)) >>> y array([[[ 0, 1, 2, 3], [ 4, 5, 6, 7], [ 8, 9, 10, 11]], [[12, 13, 14, 15], [16, 17, 18, 19], [20, 21, 22, 23]]]) >>> y.strides (48, 16, 4) >>> y[1,1,1] 17 >>> offset=sum(y.strides * np.array((1,1,1))) >>> offset/y.itemsize 17 ``` ``` >>> x = np.reshape(np.arange(5*6*7*8), (5,6,7,8)).transpose(2,3,1,0) >>> x.strides (32, 4, 224, 1344) >>> i = np.array([3,5,2,2]) >>> offset = sum(i * x.strides) >>> x[3,5,2,2] 813 >>> offset / x.itemsize 813 ``` © 2005–2022 NumPy Developers Licensed under the 3-clause BSD License. <https://numpy.org/doc/1.23/reference/generated/numpy.chararray.strides.htmlnumpy.chararray.T ================= attribute chararray.T The transposed array. Same as `self.transpose()`. See also [`transpose`](numpy.transpose#numpy.transpose "numpy.transpose") #### Examples ``` >>> x = np.array([[1.,2.],[3.,4.]]) >>> x array([[ 1., 2.], [ 3., 4.]]) >>> x.T array([[ 1., 3.], [ 2., 4.]]) >>> x = np.array([1.,2.,3.,4.]) >>> x array([ 1., 2., 3., 4.]) >>> x.T array([ 1., 2., 3., 4.]) ``` © 2005–2022 NumPy Developers Licensed under the 3-clause BSD License. 
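As with `transpose`, the `T` attribute returns a view rather than a copy, so writes through the transpose are visible in the original array. A minimal sketch (variable names are illustrative):

```python
import numpy as np

x = np.array([[1., 2.], [3., 4.]])
y = x.T            # a view: no data is copied
y[0, 1] = 99.0     # element (0, 1) of the transpose is (1, 0) of x
print(x[1, 0])     # 99.0
print(np.shares_memory(x, y))  # True
```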
<https://numpy.org/doc/1.23/reference/generated/numpy.chararray.T.htmlnumpy.chararray.base ==================== attribute chararray.base Base object if memory is from some other object. #### Examples The base of an array that owns its memory is None: ``` >>> x = np.array([1,2,3,4]) >>> x.base is None True ``` Slicing creates a view, whose memory is shared with x: ``` >>> y = x[2:] >>> y.base is x True ``` © 2005–2022 NumPy Developers Licensed under the 3-clause BSD License. <https://numpy.org/doc/1.23/reference/generated/numpy.chararray.base.htmlnumpy.chararray.ctypes ====================== attribute chararray.ctypes An object to simplify the interaction of the array with the ctypes module. This attribute creates an object that makes it easier to use arrays when calling shared libraries with the ctypes module. The returned object has, among others, data, shape, and strides attributes (see Notes below) which themselves return ctypes objects that can be used as arguments to a shared library. Parameters **None** Returns **c**Python object Possessing attributes data, shape, strides, etc. See also [`numpy.ctypeslib`](../routines.ctypeslib#module-numpy.ctypeslib "numpy.ctypeslib") #### Notes Below are the public attributes of this object which were documented in “Guide to NumPy” (we have omitted undocumented public attributes, as well as documented private attributes): _ctypes.data A pointer to the memory area of the array as a Python integer. This memory area may contain data that is not aligned, or not in correct byte-order. The memory area may not even be writeable. The array flags and data-type of this array should be respected when passing this attribute to arbitrary C-code to avoid trouble that can include Python crashing. User Beware! The value of this attribute is exactly the same as `self.__array_interface__['data'][0]`.
Note that unlike `data_as`, a reference will not be kept to the array: code like `ctypes.c_void_p((a + b).ctypes.data)` will result in a pointer to a deallocated array, and should be spelt `(a + b).ctypes.data_as(ctypes.c_void_p)` _ctypes.shape (c_intp*self.ndim): A ctypes array of length self.ndim where the basetype is the C-integer corresponding to `dtype('p')` on this platform (see [`c_intp`](../routines.ctypeslib#numpy.ctypeslib.c_intp "numpy.ctypeslib.c_intp")). This base-type could be [`ctypes.c_int`](https://docs.python.org/3/library/ctypes.html#ctypes.c_int "(in Python v3.10)"), [`ctypes.c_long`](https://docs.python.org/3/library/ctypes.html#ctypes.c_long "(in Python v3.10)"), or [`ctypes.c_longlong`](https://docs.python.org/3/library/ctypes.html#ctypes.c_longlong "(in Python v3.10)") depending on the platform. The ctypes array contains the shape of the underlying array. _ctypes.strides (c_intp*self.ndim): A ctypes array of length self.ndim where the basetype is the same as for the shape attribute. This ctypes array contains the strides information from the underlying array. This strides information is important for showing how many bytes must be jumped to get to the next element in the array. _ctypes.data_as(*obj*)[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/core/_internal.py#L267-L284) Return the data pointer cast to a particular c-types object. For example, calling `self._as_parameter_` is equivalent to `self.data_as(ctypes.c_void_p)`. Perhaps you want to use the data as a pointer to a ctypes array of floating-point data: `self.data_as(ctypes.POINTER(ctypes.c_double))`. The returned pointer will keep a reference to the array. _ctypes.shape_as(*obj*)[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/core/_internal.py#L286-L293) Return the shape tuple as an array of some other c-types type. For example: `self.shape_as(ctypes.c_short)`. 
_ctypes.strides_as(*obj*)[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/core/_internal.py#L295-L302) Return the strides tuple as an array of some other c-types type. For example: `self.strides_as(ctypes.c_longlong)`. If the ctypes module is not available, then the ctypes attribute of array objects still returns something useful, but ctypes objects are not returned and errors may be raised instead. In particular, the object will still have the `as_parameter` attribute which will return an integer equal to the data attribute. #### Examples ``` >>> import ctypes >>> x = np.array([[0, 1], [2, 3]], dtype=np.int32) >>> x array([[0, 1], [2, 3]], dtype=int32) >>> x.ctypes.data 31962608 # may vary >>> x.ctypes.data_as(ctypes.POINTER(ctypes.c_uint32)) <__main__.LP_c_uint object at 0x7ff2fc1fc200> # may vary >>> x.ctypes.data_as(ctypes.POINTER(ctypes.c_uint32)).contents c_uint(0) >>> x.ctypes.data_as(ctypes.POINTER(ctypes.c_uint64)).contents c_ulong(4294967296) >>> x.ctypes.shape <numpy.core._internal.c_long_Array_2 object at 0x7ff2fc1fce60> # may vary >>> x.ctypes.strides <numpy.core._internal.c_long_Array_2 object at 0x7ff2fc1ff320> # may vary ``` © 2005–2022 NumPy Developers Licensed under the 3-clause BSD License. <https://numpy.org/doc/1.23/reference/generated/numpy.chararray.ctypes.htmlnumpy.chararray.data ==================== attribute chararray.data Python buffer object pointing to the start of the array’s data. © 2005–2022 NumPy Developers Licensed under the 3-clause BSD License. <https://numpy.org/doc/1.23/reference/generated/numpy.chararray.data.htmlnumpy.chararray.flags ===================== attribute chararray.flags Information about the memory layout of the array. #### Notes The [`flags`](#numpy.chararray.flags "numpy.chararray.flags") object can be accessed dictionary-like (as in `a.flags['WRITEABLE']`), or by using lowercased attribute names (as in `a.flags.writeable`). Short flag names are only supported in dictionary access. 
Only the WRITEBACKIFCOPY, WRITEABLE, and ALIGNED flags can be changed by the user, via direct assignment to the attribute or dictionary entry, or by calling [`ndarray.setflags`](numpy.ndarray.setflags#numpy.ndarray.setflags "numpy.ndarray.setflags"). The array flags cannot be set arbitrarily: * WRITEBACKIFCOPY can only be set `False`. * ALIGNED can only be set `True` if the data is truly aligned. * WRITEABLE can only be set `True` if the array owns its own memory or the ultimate owner of the memory exposes a writeable buffer interface or is a string. Arrays can be both C-style and Fortran-style contiguous simultaneously. This is clear for 1-dimensional arrays, but can also be true for higher dimensional arrays. Even for contiguous arrays a stride for a given dimension `arr.strides[dim]` may be *arbitrary* if `arr.shape[dim] == 1` or the array has no elements. It does *not* generally hold that `self.strides[-1] == self.itemsize` for C-style contiguous arrays or `self.strides[0] == self.itemsize` for Fortran-style contiguous arrays is true. Attributes **C_CONTIGUOUS (C)** The data is in a single, C-style contiguous segment. **F_CONTIGUOUS (F)** The data is in a single, Fortran-style contiguous segment. **OWNDATA (O)** The array owns the memory it uses or borrows it from another object. **WRITEABLE (W)** The data area can be written to. Setting this to False locks the data, making it read-only. A view (slice, etc.) inherits WRITEABLE from its base array at creation time, but a view of a writeable array may be subsequently locked while the base array remains writeable. (The opposite is not true, in that a view of a locked array may not be made writeable. However, currently, locking a base object does not lock any views that already reference it, so under that circumstance it is possible to alter the contents of a locked array via a previously created writeable view onto it.) Attempting to change a non-writeable array raises a RuntimeError exception. 
**ALIGNED (A)** The data and all elements are aligned appropriately for the hardware. **WRITEBACKIFCOPY (X)** This array is a copy of some other array. The C-API function PyArray_ResolveWritebackIfCopy must be called before deallocating this array, so that the base array is updated with the contents of this array. **FNC** F_CONTIGUOUS and not C_CONTIGUOUS. **FORC** F_CONTIGUOUS or C_CONTIGUOUS (one-segment test). **BEHAVED (B)** ALIGNED and WRITEABLE. **CARRAY (CA)** BEHAVED and C_CONTIGUOUS. **FARRAY (FA)** BEHAVED and F_CONTIGUOUS and not C_CONTIGUOUS. © 2005–2022 NumPy Developers Licensed under the 3-clause BSD License. <https://numpy.org/doc/1.23/reference/generated/numpy.chararray.flags.htmlnumpy.chararray.flat ==================== attribute chararray.flat A 1-D iterator over the array. This is a [`numpy.flatiter`](numpy.flatiter#numpy.flatiter "numpy.flatiter") instance, which acts similarly to, but is not a subclass of, Python’s built-in iterator object. See also [`flatten`](numpy.chararray.flatten#numpy.chararray.flatten "numpy.chararray.flatten") Return a copy of the array collapsed into one dimension. [`flatiter`](numpy.flatiter#numpy.flatiter "numpy.flatiter") #### Examples ``` >>> x = np.arange(1, 7).reshape(2, 3) >>> x array([[1, 2, 3], [4, 5, 6]]) >>> x.flat[3] 4 >>> x.T array([[1, 4], [2, 5], [3, 6]]) >>> x.T.flat[3] 5 >>> type(x.flat) <class 'numpy.flatiter'> ``` An assignment example: ``` >>> x.flat = 3; x array([[3, 3, 3], [3, 3, 3]]) >>> x.flat[[1,4]] = 1; x array([[3, 1, 3], [3, 1, 3]]) ``` © 2005–2022 NumPy Developers Licensed under the 3-clause BSD License. <https://numpy.org/doc/1.23/reference/generated/numpy.chararray.flat.htmlnumpy.chararray.itemsize ======================== attribute chararray.itemsize Length of one array element in bytes. #### Examples ``` >>> x = np.array([1,2,3], dtype=np.float64) >>> x.itemsize 8 >>> x = np.array([1,2,3], dtype=np.complex128) >>> x.itemsize 16 ``` © 2005–2022 NumPy Developers Licensed under the 3-clause BSD License.
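For structured dtypes like the chararray examples earlier, `itemsize` is the total size of one record, and `nbytes` is simply `size * itemsize`. A small sketch, with an assumed two-field dtype:

```python
import numpy as np

# one int32 (4 bytes) plus one float64 (8 bytes) per element
a = np.zeros(5, dtype=[('a', np.int32), ('b', np.float64)])
print(a.itemsize)                       # 12
print(a.nbytes)                         # 60
print(a.nbytes == a.size * a.itemsize)  # True
```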
<https://numpy.org/doc/1.23/reference/generated/numpy.chararray.itemsize.htmlnumpy.chararray.nbytes ====================== attribute chararray.nbytes Total bytes consumed by the elements of the array. #### Notes Does not include memory consumed by non-element attributes of the array object. #### Examples ``` >>> x = np.zeros((3,5,2), dtype=np.complex128) >>> x.nbytes 480 >>> np.prod(x.shape) * x.itemsize 480 ``` © 2005–2022 NumPy Developers Licensed under the 3-clause BSD License. <https://numpy.org/doc/1.23/reference/generated/numpy.chararray.nbytes.htmlnumpy.chararray.ndim ==================== attribute chararray.ndim Number of array dimensions. #### Examples ``` >>> x = np.array([1, 2, 3]) >>> x.ndim 1 >>> y = np.zeros((2, 3, 4)) >>> y.ndim 3 ``` © 2005–2022 NumPy Developers Licensed under the 3-clause BSD License. <https://numpy.org/doc/1.23/reference/generated/numpy.chararray.ndim.htmlnumpy.chararray.size ==================== attribute chararray.size Number of elements in the array. Equal to `np.prod(a.shape)`, i.e., the product of the array’s dimensions. #### Notes `a.size` returns a standard arbitrary precision Python integer. This may not be the case with other methods of obtaining the same value (like the suggested `np.prod(a.shape)`, which returns an instance of `np.int_`), and may be relevant if the value is used further in calculations that may overflow a fixed size integer type. #### Examples ``` >>> x = np.zeros((3, 5, 2), dtype=np.complex128) >>> x.size 30 >>> np.prod(x.shape) 30 ``` © 2005–2022 NumPy Developers Licensed under the 3-clause BSD License. <https://numpy.org/doc/1.23/reference/generated/numpy.chararray.size.htmlnumpy.chararray.tobytes ======================= method chararray.tobytes(*order='C'*) Construct Python bytes containing the raw data bytes in the array. Constructs Python bytes showing a copy of the raw contents of data memory. The bytes object is produced in C-order by default. 
This behavior is controlled by the `order` parameter. New in version 1.9.0. Parameters **order**{‘C’, ‘F’, ‘A’}, optional Controls the memory layout of the bytes object. ‘C’ means C-order, ‘F’ means F-order, ‘A’ (short for *Any*) means ‘F’ if `a` is Fortran contiguous, ‘C’ otherwise. Default is ‘C’. Returns **s**bytes Python bytes exhibiting a copy of `a`’s raw data. #### Examples ``` >>> x = np.array([[0, 1], [2, 3]], dtype='<u2') >>> x.tobytes() b'\x00\x00\x01\x00\x02\x00\x03\x00' >>> x.tobytes('C') == x.tobytes() True >>> x.tobytes('F') b'\x00\x00\x02\x00\x01\x00\x03\x00' ``` © 2005–2022 NumPy Developers Licensed under the 3-clause BSD License. <https://numpy.org/doc/1.23/reference/generated/numpy.chararray.tobytes.htmlnumpy.record.all ================ method record.all() Scalar method identical to the corresponding array attribute. Please see [`ndarray.all`](numpy.ndarray.all#numpy.ndarray.all "numpy.ndarray.all"). © 2005–2022 NumPy Developers Licensed under the 3-clause BSD License. <https://numpy.org/doc/1.23/reference/generated/numpy.record.all.htmlnumpy.record.any ================ method record.any() Scalar method identical to the corresponding array attribute. Please see [`ndarray.any`](numpy.ndarray.any#numpy.ndarray.any "numpy.ndarray.any"). © 2005–2022 NumPy Developers Licensed under the 3-clause BSD License. <https://numpy.org/doc/1.23/reference/generated/numpy.record.any.htmlnumpy.record.argmax =================== method record.argmax() Scalar method identical to the corresponding array attribute. Please see [`ndarray.argmax`](numpy.ndarray.argmax#numpy.ndarray.argmax "numpy.ndarray.argmax"). © 2005–2022 NumPy Developers Licensed under the 3-clause BSD License. <https://numpy.org/doc/1.23/reference/generated/numpy.record.argmax.htmlnumpy.record.argmin =================== method record.argmin() Scalar method identical to the corresponding array attribute. Please see [`ndarray.argmin`](numpy.ndarray.argmin#numpy.ndarray.argmin "numpy.ndarray.argmin"). 
© 2005–2022 NumPy Developers Licensed under the 3-clause BSD License. <https://numpy.org/doc/1.23/reference/generated/numpy.record.argmin.htmlnumpy.record.argsort ==================== method record.argsort() Scalar method identical to the corresponding array attribute. Please see [`ndarray.argsort`](numpy.ndarray.argsort#numpy.ndarray.argsort "numpy.ndarray.argsort"). © 2005–2022 NumPy Developers Licensed under the 3-clause BSD License. <https://numpy.org/doc/1.23/reference/generated/numpy.record.argsort.htmlnumpy.record.astype =================== method record.astype() Scalar method identical to the corresponding array attribute. Please see [`ndarray.astype`](numpy.ndarray.astype#numpy.ndarray.astype "numpy.ndarray.astype"). © 2005–2022 NumPy Developers Licensed under the 3-clause BSD License. <https://numpy.org/doc/1.23/reference/generated/numpy.record.astype.htmlnumpy.record.byteswap ===================== method record.byteswap() Scalar method identical to the corresponding array attribute. Please see [`ndarray.byteswap`](numpy.ndarray.byteswap#numpy.ndarray.byteswap "numpy.ndarray.byteswap"). © 2005–2022 NumPy Developers Licensed under the 3-clause BSD License. <https://numpy.org/doc/1.23/reference/generated/numpy.record.byteswap.htmlnumpy.record.choose =================== method record.choose() Scalar method identical to the corresponding array attribute. Please see [`ndarray.choose`](numpy.ndarray.choose#numpy.ndarray.choose "numpy.ndarray.choose"). © 2005–2022 NumPy Developers Licensed under the 3-clause BSD License. <https://numpy.org/doc/1.23/reference/generated/numpy.record.choose.htmlnumpy.record.clip ================= method record.clip() Scalar method identical to the corresponding array attribute. Please see [`ndarray.clip`](numpy.ndarray.clip#numpy.ndarray.clip "numpy.ndarray.clip"). © 2005–2022 NumPy Developers Licensed under the 3-clause BSD License. 
<https://numpy.org/doc/1.23/reference/generated/numpy.record.clip.htmlnumpy.record.compress ===================== method record.compress() Scalar method identical to the corresponding array attribute. Please see [`ndarray.compress`](numpy.ndarray.compress#numpy.ndarray.compress "numpy.ndarray.compress"). © 2005–2022 NumPy Developers Licensed under the 3-clause BSD License. <https://numpy.org/doc/1.23/reference/generated/numpy.record.compress.htmlnumpy.record.conjugate ====================== method record.conjugate() Scalar method identical to the corresponding array attribute. Please see [`ndarray.conjugate`](numpy.ndarray.conjugate#numpy.ndarray.conjugate "numpy.ndarray.conjugate"). © 2005–2022 NumPy Developers Licensed under the 3-clause BSD License. <https://numpy.org/doc/1.23/reference/generated/numpy.record.conjugate.htmlnumpy.record.copy ================= method record.copy() Scalar method identical to the corresponding array attribute. Please see [`ndarray.copy`](numpy.ndarray.copy#numpy.ndarray.copy "numpy.ndarray.copy"). © 2005–2022 NumPy Developers Licensed under the 3-clause BSD License. <https://numpy.org/doc/1.23/reference/generated/numpy.record.copy.htmlnumpy.record.cumprod ==================== method record.cumprod() Scalar method identical to the corresponding array attribute. Please see [`ndarray.cumprod`](numpy.ndarray.cumprod#numpy.ndarray.cumprod "numpy.ndarray.cumprod"). © 2005–2022 NumPy Developers Licensed under the 3-clause BSD License. <https://numpy.org/doc/1.23/reference/generated/numpy.record.cumprod.htmlnumpy.record.cumsum =================== method record.cumsum() Scalar method identical to the corresponding array attribute. Please see [`ndarray.cumsum`](numpy.ndarray.cumsum#numpy.ndarray.cumsum "numpy.ndarray.cumsum"). © 2005–2022 NumPy Developers Licensed under the 3-clause BSD License. 
<https://numpy.org/doc/1.23/reference/generated/numpy.record.cumsum.htmlnumpy.record.diagonal ===================== method record.diagonal() Scalar method identical to the corresponding array attribute. Please see [`ndarray.diagonal`](numpy.ndarray.diagonal#numpy.ndarray.diagonal "numpy.ndarray.diagonal"). © 2005–2022 NumPy Developers Licensed under the 3-clause BSD License. <https://numpy.org/doc/1.23/reference/generated/numpy.record.diagonal.htmlnumpy.record.dump ================= method record.dump() Scalar method identical to the corresponding array attribute. Please see [`ndarray.dump`](numpy.ndarray.dump#numpy.ndarray.dump "numpy.ndarray.dump"). © 2005–2022 NumPy Developers Licensed under the 3-clause BSD License. <https://numpy.org/doc/1.23/reference/generated/numpy.record.dump.htmlnumpy.record.dumps ================== method record.dumps() Scalar method identical to the corresponding array attribute. Please see [`ndarray.dumps`](numpy.ndarray.dumps#numpy.ndarray.dumps "numpy.ndarray.dumps"). © 2005–2022 NumPy Developers Licensed under the 3-clause BSD License. <https://numpy.org/doc/1.23/reference/generated/numpy.record.dumps.htmlnumpy.record.fill ================= method record.fill() Scalar method identical to the corresponding array attribute. Please see [`ndarray.fill`](numpy.ndarray.fill#numpy.ndarray.fill "numpy.ndarray.fill"). © 2005–2022 NumPy Developers Licensed under the 3-clause BSD License. <https://numpy.org/doc/1.23/reference/generated/numpy.record.fill.htmlnumpy.record.flatten ==================== method record.flatten() Scalar method identical to the corresponding array attribute. Please see [`ndarray.flatten`](numpy.ndarray.flatten#numpy.ndarray.flatten "numpy.ndarray.flatten"). © 2005–2022 NumPy Developers Licensed under the 3-clause BSD License. 
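The `record.*` entries here are thin wrappers that defer to the corresponding `ndarray` methods; in practice a `record` scalar is what indexing a recarray returns, and its fields are accessible as attributes. A brief sketch (the field names `x` and `y` are made up for illustration):

```python
import numpy as np

rec = np.rec.array([(1, 2.0), (3, 4.0)],
                   dtype=[('x', np.int32), ('y', np.float64)])
r = rec[0]                  # a numpy.record scalar
print(type(r).__name__)     # record
print(r.x, r.y)             # 1 2.0
print(r.item())             # the whole record as a Python tuple
r.pprint()                  # pretty-prints all fields
```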
<https://numpy.org/doc/1.23/reference/generated/numpy.record.flatten.htmlnumpy.record.getfield ===================== method record.getfield() Scalar method identical to the corresponding array attribute. Please see [`ndarray.getfield`](numpy.ndarray.getfield#numpy.ndarray.getfield "numpy.ndarray.getfield"). © 2005–2022 NumPy Developers Licensed under the 3-clause BSD License. <https://numpy.org/doc/1.23/reference/generated/numpy.record.getfield.htmlnumpy.record.item ================= method record.item() Scalar method identical to the corresponding array attribute. Please see [`ndarray.item`](numpy.ndarray.item#numpy.ndarray.item "numpy.ndarray.item"). © 2005–2022 NumPy Developers Licensed under the 3-clause BSD License. <https://numpy.org/doc/1.23/reference/generated/numpy.record.item.htmlnumpy.record.itemset ==================== method record.itemset() Scalar method identical to the corresponding array attribute. Please see [`ndarray.itemset`](numpy.ndarray.itemset#numpy.ndarray.itemset "numpy.ndarray.itemset"). © 2005–2022 NumPy Developers Licensed under the 3-clause BSD License. <https://numpy.org/doc/1.23/reference/generated/numpy.record.itemset.htmlnumpy.record.max ================ method record.max() Scalar method identical to the corresponding array attribute. Please see [`ndarray.max`](numpy.ndarray.max#numpy.ndarray.max "numpy.ndarray.max"). © 2005–2022 NumPy Developers Licensed under the 3-clause BSD License. <https://numpy.org/doc/1.23/reference/generated/numpy.record.max.htmlnumpy.record.mean ================= method record.mean() Scalar method identical to the corresponding array attribute. Please see [`ndarray.mean`](numpy.ndarray.mean#numpy.ndarray.mean "numpy.ndarray.mean"). © 2005–2022 NumPy Developers Licensed under the 3-clause BSD License. <https://numpy.org/doc/1.23/reference/generated/numpy.record.mean.htmlnumpy.record.min ================ method record.min() Scalar method identical to the corresponding array attribute. 
Please see [`ndarray.min`](numpy.ndarray.min#numpy.ndarray.min "numpy.ndarray.min"). © 2005–2022 NumPy Developers Licensed under the 3-clause BSD License. <https://numpy.org/doc/1.23/reference/generated/numpy.record.min.htmlnumpy.record.newbyteorder ========================= method record.newbyteorder(*new_order='S'*, */*) Return a new [`dtype`](numpy.dtype#numpy.dtype "numpy.dtype") with a different byte order. Changes are also made in all fields and sub-arrays of the data type. The `new_order` code can be any from the following: * ‘S’ - swap dtype from current to opposite endian * {‘<’, ‘little’} - little endian * {‘>’, ‘big’} - big endian * {‘=’, ‘native’} - native order * {‘|’, ‘I’} - ignore (no change to byte order) Parameters **new_order**str, optional Byte order to force; a value from the byte order specifications above. The default value (‘S’) results in swapping the current byte order. Returns **new_dtype**dtype New [`dtype`](numpy.dtype#numpy.dtype "numpy.dtype") object with the given change to the byte order. © 2005–2022 NumPy Developers Licensed under the 3-clause BSD License. <https://numpy.org/doc/1.23/reference/generated/numpy.record.newbyteorder.htmlnumpy.record.nonzero ==================== method record.nonzero() Scalar method identical to the corresponding array attribute. Please see [`ndarray.nonzero`](numpy.ndarray.nonzero#numpy.ndarray.nonzero "numpy.ndarray.nonzero"). © 2005–2022 NumPy Developers Licensed under the 3-clause BSD License. <https://numpy.org/doc/1.23/reference/generated/numpy.record.nonzero.htmlnumpy.record.pprint =================== method record.pprint()[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/core/records.py#L291-L298) Pretty-print all fields. © 2005–2022 NumPy Developers Licensed under the 3-clause BSD License. <https://numpy.org/doc/1.23/reference/generated/numpy.record.pprint.htmlnumpy.record.prod ================= method record.prod() Scalar method identical to the corresponding array attribute. 
Please see [`ndarray.prod`](numpy.ndarray.prod#numpy.ndarray.prod "numpy.ndarray.prod"). © 2005–2022 NumPy Developers Licensed under the 3-clause BSD License. <https://numpy.org/doc/1.23/reference/generated/numpy.record.prod.htmlnumpy.record.ptp ================ method record.ptp() Scalar method identical to the corresponding array attribute. Please see [`ndarray.ptp`](numpy.ndarray.ptp#numpy.ndarray.ptp "numpy.ndarray.ptp"). © 2005–2022 NumPy Developers Licensed under the 3-clause BSD License. <https://numpy.org/doc/1.23/reference/generated/numpy.record.ptp.htmlnumpy.record.put ================ method record.put() Scalar method identical to the corresponding array attribute. Please see [`ndarray.put`](numpy.ndarray.put#numpy.ndarray.put "numpy.ndarray.put"). © 2005–2022 NumPy Developers Licensed under the 3-clause BSD License. <https://numpy.org/doc/1.23/reference/generated/numpy.record.put.htmlnumpy.record.ravel ================== method record.ravel() Scalar method identical to the corresponding array attribute. Please see [`ndarray.ravel`](numpy.ndarray.ravel#numpy.ndarray.ravel "numpy.ndarray.ravel"). © 2005–2022 NumPy Developers Licensed under the 3-clause BSD License. <https://numpy.org/doc/1.23/reference/generated/numpy.record.ravel.htmlnumpy.record.repeat =================== method record.repeat() Scalar method identical to the corresponding array attribute. Please see [`ndarray.repeat`](numpy.ndarray.repeat#numpy.ndarray.repeat "numpy.ndarray.repeat"). © 2005–2022 NumPy Developers Licensed under the 3-clause BSD License. <https://numpy.org/doc/1.23/reference/generated/numpy.record.repeat.htmlnumpy.record.reshape ==================== method record.reshape() Scalar method identical to the corresponding array attribute. Please see [`ndarray.reshape`](numpy.ndarray.reshape#numpy.ndarray.reshape "numpy.ndarray.reshape"). © 2005–2022 NumPy Developers Licensed under the 3-clause BSD License. 
<https://numpy.org/doc/1.23/reference/generated/numpy.record.reshape.htmlnumpy.record.resize =================== method record.resize() Scalar method identical to the corresponding array attribute. Please see [`ndarray.resize`](numpy.ndarray.resize#numpy.ndarray.resize "numpy.ndarray.resize"). © 2005–2022 NumPy Developers Licensed under the 3-clause BSD License. <https://numpy.org/doc/1.23/reference/generated/numpy.record.resize.htmlnumpy.record.round ================== method record.round() Scalar method identical to the corresponding array attribute. Please see [`ndarray.round`](numpy.ndarray.round#numpy.ndarray.round "numpy.ndarray.round"). © 2005–2022 NumPy Developers Licensed under the 3-clause BSD License. <https://numpy.org/doc/1.23/reference/generated/numpy.record.round.htmlnumpy.record.searchsorted ========================= method record.searchsorted() Scalar method identical to the corresponding array attribute. Please see [`ndarray.searchsorted`](numpy.ndarray.searchsorted#numpy.ndarray.searchsorted "numpy.ndarray.searchsorted"). © 2005–2022 NumPy Developers Licensed under the 3-clause BSD License. <https://numpy.org/doc/1.23/reference/generated/numpy.record.searchsorted.htmlnumpy.record.setfield ===================== method record.setfield() Scalar method identical to the corresponding array attribute. Please see [`ndarray.setfield`](numpy.ndarray.setfield#numpy.ndarray.setfield "numpy.ndarray.setfield"). © 2005–2022 NumPy Developers Licensed under the 3-clause BSD License. <https://numpy.org/doc/1.23/reference/generated/numpy.record.setfield.htmlnumpy.record.setflags ===================== method record.setflags() Scalar method identical to the corresponding array attribute. Please see [`ndarray.setflags`](numpy.ndarray.setflags#numpy.ndarray.setflags "numpy.ndarray.setflags"). © 2005–2022 NumPy Developers Licensed under the 3-clause BSD License. 
numpy.record.sort
=================

method

record.sort()

Scalar method identical to the corresponding array attribute. Please see [`ndarray.sort`](numpy.ndarray.sort#numpy.ndarray.sort "numpy.ndarray.sort").

© 2005–2022 NumPy Developers. Licensed under the 3-clause BSD License.

numpy.record.squeeze
====================

method

record.squeeze()

Scalar method identical to the corresponding array attribute. Please see [`ndarray.squeeze`](numpy.ndarray.squeeze#numpy.ndarray.squeeze "numpy.ndarray.squeeze").

numpy.record.std
================

method

record.std()

Scalar method identical to the corresponding array attribute. Please see [`ndarray.std`](numpy.ndarray.std#numpy.ndarray.std "numpy.ndarray.std").

numpy.record.sum
================

method

record.sum()

Scalar method identical to the corresponding array attribute. Please see [`ndarray.sum`](numpy.ndarray.sum#numpy.ndarray.sum "numpy.ndarray.sum").

numpy.record.swapaxes
=====================

method

record.swapaxes()

Scalar method identical to the corresponding array attribute. Please see [`ndarray.swapaxes`](numpy.ndarray.swapaxes#numpy.ndarray.swapaxes "numpy.ndarray.swapaxes").

numpy.record.take
=================

method

record.take()

Scalar method identical to the corresponding array attribute. Please see [`ndarray.take`](numpy.ndarray.take#numpy.ndarray.take "numpy.ndarray.take").

numpy.record.tofile
===================

method

record.tofile()

Scalar method identical to the corresponding array attribute. Please see [`ndarray.tofile`](numpy.ndarray.tofile#numpy.ndarray.tofile "numpy.ndarray.tofile").

numpy.record.tolist
===================

method

record.tolist()

Scalar method identical to the corresponding array attribute. Please see [`ndarray.tolist`](numpy.ndarray.tolist#numpy.ndarray.tolist "numpy.ndarray.tolist").

numpy.record.tostring
=====================

method

record.tostring()

Scalar method identical to the corresponding array attribute. Please see [`ndarray.tostring`](numpy.ndarray.tostring#numpy.ndarray.tostring "numpy.ndarray.tostring").

numpy.record.trace
==================

method

record.trace()

Scalar method identical to the corresponding array attribute. Please see [`ndarray.trace`](numpy.ndarray.trace#numpy.ndarray.trace "numpy.ndarray.trace").

numpy.record.transpose
======================

method

record.transpose()

Scalar method identical to the corresponding array attribute. Please see [`ndarray.transpose`](numpy.ndarray.transpose#numpy.ndarray.transpose "numpy.ndarray.transpose").

numpy.record.var
================

method

record.var()

Scalar method identical to the corresponding array attribute. Please see [`ndarray.var`](numpy.ndarray.var#numpy.ndarray.var "numpy.ndarray.var").

numpy.record.view
=================

method

record.view()

Scalar method identical to the corresponding array attribute. Please see [`ndarray.view`](numpy.ndarray.view#numpy.ndarray.view "numpy.ndarray.view").

numpy.record.T
==============

attribute

record.T

Scalar attribute identical to the corresponding array attribute. Please see [`ndarray.T`](numpy.ndarray.t#numpy.ndarray.T "numpy.ndarray.T").

numpy.record.base
=================

attribute

record.base

Base object.

numpy.record.data
=================

attribute

record.data

Pointer to start of data.

numpy.record.flags
==================

attribute

record.flags

Integer value of flags.

numpy.record.flat
=================

attribute

record.flat

A 1-D view of the scalar.
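A quick sketch of how `numpy.record` scalars and their array-mirroring attributes behave in practice (the field names and values here are illustrative, not from the pages above):

```python
import numpy as np

# A record scalar is obtained by indexing a record array.
arr = np.rec.array([(1, 2.0), (3, 4.0)], dtype=[("x", "i4"), ("y", "f8")])
rec = arr[0]            # a numpy.record scalar

print(rec.x, rec.y)     # field access by attribute
print(rec.itemsize)     # one i4 + one f8 -> 12 bytes
print(rec.ndim)         # scalars are zero-dimensional -> 0
```

As the pages above note, these attributes and methods simply mirror their `ndarray` counterparts on a single structured element.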
numpy.record.itemsize
=====================

attribute

record.itemsize

The length of one element in bytes.

numpy.record.nbytes
===================

attribute

record.nbytes

The length of the scalar in bytes.

numpy.record.ndim
=================

attribute

record.ndim

The number of array dimensions.

numpy.record.size
=================

attribute

record.size

The number of elements in the gentype.

numpy.record.strides
====================

attribute

record.strides

Tuple of bytes steps in each dimension.

numpy.broadcast.index
=====================

attribute

broadcast.index

Current index in the broadcasted result.

#### Examples

```
>>> x = np.array([[1], [2], [3]])
>>> y = np.array([4, 5, 6])
>>> b = np.broadcast(x, y)
>>> b.index
0
>>> next(b), next(b), next(b)
((1, 4), (1, 5), (1, 6))
>>> b.index
3
```

numpy.broadcast.iters
=====================

attribute

broadcast.iters

Tuple of iterators along `self`’s “components.”

Returns a tuple of [`numpy.flatiter`](numpy.flatiter#numpy.flatiter "numpy.flatiter") objects, one for each “component” of `self`.

See also

[`numpy.flatiter`](numpy.flatiter#numpy.flatiter "numpy.flatiter")

#### Examples

```
>>> x = np.array([1, 2, 3])
>>> y = np.array([[4], [5], [6]])
>>> b = np.broadcast(x, y)
>>> row, col = b.iters
>>> next(row), next(col)
(1, 4)
```

numpy.broadcast.nd
==================

attribute

broadcast.nd

Number of dimensions of the broadcasted result. For code intended for NumPy 1.12.0 and later the more consistent [`ndim`](numpy.broadcast.ndim#numpy.broadcast.ndim "numpy.broadcast.ndim") is preferred.

#### Examples

```
>>> x = np.array([1, 2, 3])
>>> y = np.array([[4], [5], [6]])
>>> b = np.broadcast(x, y)
>>> b.nd
2
```

numpy.broadcast.ndim
====================

attribute

broadcast.ndim

Number of dimensions of the broadcasted result. Alias for [`nd`](numpy.broadcast.nd#numpy.broadcast.nd "numpy.broadcast.nd").

New in version 1.12.0.

#### Examples

```
>>> x = np.array([1, 2, 3])
>>> y = np.array([[4], [5], [6]])
>>> b = np.broadcast(x, y)
>>> b.ndim
2
```

numpy.broadcast.numiter
=======================

attribute

broadcast.numiter

Number of iterators possessed by the broadcasted result.

#### Examples

```
>>> x = np.array([1, 2, 3])
>>> y = np.array([[4], [5], [6]])
>>> b = np.broadcast(x, y)
>>> b.numiter
2
```

numpy.broadcast.size
====================

attribute

broadcast.size

Total size of the broadcasted result.

#### Examples

```
>>> x = np.array([1, 2, 3])
>>> y = np.array([[4], [5], [6]])
>>> b = np.broadcast(x, y)
>>> b.size
9
```

numpy.busdaycalendar.weekmask
=============================

attribute

busdaycalendar.weekmask

A copy of the seven-element boolean mask indicating valid days.

numpy.busdaycalendar.holidays
=============================

attribute

busdaycalendar.holidays

A copy of the holiday array indicating additional invalid days.

numpy.polynomial.polynomial.polyvander
======================================

polynomial.polynomial.polyvander(*x*, *deg*)[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/polynomial/polynomial.py#L1058-L1109)

Vandermonde matrix of given degree.

Returns the Vandermonde matrix of degree `deg` and sample points `x`. The Vandermonde matrix is defined by

\[V[..., i] = x^i,\]

where `0 <= i <= deg`. The leading indices of `V` index the elements of `x` and the last index is the power of `x`.

If `c` is a 1-D array of coefficients of length `n + 1` and `V` is the matrix `V = polyvander(x, n)`, then `np.dot(V, c)` and `polyval(x, c)` are the same up to roundoff. This equivalence is useful both for least squares fitting and for the evaluation of a large number of polynomials of the same degree and sample points.

Parameters

**x** : array_like

Array of points. The dtype is converted to float64 or complex128 depending on whether any of the elements are complex. If `x` is scalar it is converted to a 1-D array.

**deg** : int

Degree of the resulting matrix.

Returns

**vander** : ndarray

The Vandermonde matrix.
The shape of the returned matrix is `x.shape + (deg + 1,)`, where the last index is the power of `x`. The dtype will be the same as the converted `x`.

See also

[`polyvander2d`](numpy.polynomial.polynomial.polyvander2d#numpy.polynomial.polynomial.polyvander2d "numpy.polynomial.polynomial.polyvander2d"), [`polyvander3d`](numpy.polynomial.polynomial.polyvander3d#numpy.polynomial.polynomial.polyvander3d "numpy.polynomial.polynomial.polyvander3d")

numpy.char.chararray.astype
===========================

method

char.chararray.astype(*dtype*, *order='K'*, *casting='unsafe'*, *subok=True*, *copy=True*)

Copy of the array, cast to a specified type.

Parameters

**dtype** : str or dtype

Typecode or data-type to which the array is cast.

**order** : {‘C’, ‘F’, ‘A’, ‘K’}, optional

Controls the memory layout order of the result. ‘C’ means C order, ‘F’ means Fortran order, ‘A’ means ‘F’ order if all the arrays are Fortran contiguous, ‘C’ order otherwise, and ‘K’ means as close to the order the array elements appear in memory as possible. Default is ‘K’.

**casting** : {‘no’, ‘equiv’, ‘safe’, ‘same_kind’, ‘unsafe’}, optional

Controls what kind of data casting may occur. Defaults to ‘unsafe’ for backwards compatibility.

* ‘no’ means the data types should not be cast at all.
* ‘equiv’ means only byte-order changes are allowed.
* ‘safe’ means only casts which can preserve values are allowed.
* ‘same_kind’ means only safe casts or casts within a kind, like float64 to float32, are allowed.
* ‘unsafe’ means any data conversions may be done.

**subok** : bool, optional

If True, then sub-classes will be passed through (default); otherwise the returned array will be forced to be a base-class array.

**copy** : bool, optional

By default, astype always returns a newly allocated array. If this is set to False, and the [`dtype`](numpy.dtype#numpy.dtype "numpy.dtype"), `order`, and `subok` requirements are satisfied, the input array is returned instead of a copy.

Returns

**arr_t** : ndarray

Unless [`copy`](numpy.copy#numpy.copy "numpy.copy") is False and the other conditions for returning the input array are satisfied (see description for the [`copy`](numpy.copy#numpy.copy "numpy.copy") input parameter), `arr_t` is a new array of the same shape as the input array, with dtype and order given by [`dtype`](numpy.dtype#numpy.dtype "numpy.dtype") and `order`.

Raises

ComplexWarning

When casting from complex to float or int. To avoid this, one should use `a.real.astype(t)`.

#### Notes

Changed in version 1.17.0: Casting between a simple data type and a structured one is possible only for “unsafe” casting. Casting to multiple fields is allowed, but casting from multiple fields is not.

Changed in version 1.9.0: Casting from numeric to string types in ‘safe’ casting mode requires that the string dtype length is long enough to store the max integer/float value converted.

#### Examples

```
>>> x = np.array([1, 2, 2.5])
>>> x
array([1. , 2. , 2.5])
```

```
>>> x.astype(int)
array([1, 2, 2])
```

numpy.char.chararray.argsort
============================

method

char.chararray.argsort(*axis=-1*, *kind=None*, *order=None*)[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/core/defchararray.py#L2121-L2139)

Returns the indices that would sort this array.

Refer to [`numpy.argsort`](numpy.argsort#numpy.argsort "numpy.argsort") for full documentation.

See also

[`numpy.argsort`](numpy.argsort#numpy.argsort "numpy.argsort")

equivalent function
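Since `chararray.argsort` defers to `numpy.argsort` and the reference page gives no example, here is a minimal sketch (the sample strings are illustrative); string comparison is lexicographic:

```python
import numpy as np

a = np.char.array(["pear", "apple", "banana"])
order = a.argsort()   # indices that would sort the array
print(order)          # apple (1) < banana (2) < pear (0)
print(a[order])       # the array in sorted order
```

Indexing with the returned index array is the usual way to obtain the sorted view, exactly as with a plain `ndarray`.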
numpy.char.chararray.copy
=========================

method

char.chararray.copy(*order='C'*)

Return a copy of the array.

Parameters

**order** : {‘C’, ‘F’, ‘A’, ‘K’}, optional

Controls the memory layout of the copy. ‘C’ means C-order, ‘F’ means F-order, ‘A’ means ‘F’ if `a` is Fortran contiguous, ‘C’ otherwise. ‘K’ means match the layout of `a` as closely as possible. (Note that this function and [`numpy.copy`](numpy.copy#numpy.copy "numpy.copy") are very similar but have different default values for their order= arguments, and this function always passes sub-classes through.)

See also

[`numpy.copy`](numpy.copy#numpy.copy "numpy.copy")

Similar function with different default behavior

[`numpy.copyto`](numpy.copyto#numpy.copyto "numpy.copyto")

#### Notes

This function is the preferred method for creating an array copy. The function [`numpy.copy`](numpy.copy#numpy.copy "numpy.copy") is similar, but it defaults to using order ‘K’, and will not pass sub-classes through by default.

#### Examples

```
>>> x = np.array([[1,2,3],[4,5,6]], order='F')
>>> y = x.copy()
>>> x.fill(0)
>>> x
array([[0, 0, 0],
       [0, 0, 0]])
>>> y
array([[1, 2, 3],
       [4, 5, 6]])
>>> y.flags['C_CONTIGUOUS']
True
```

numpy.char.chararray.count
==========================

method

char.chararray.count(*sub*, *start=0*, *end=None*)[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/core/defchararray.py#L2165-L2175)

Returns an array with the number of non-overlapping occurrences of substring `sub` in the range [`start`, `end`].

See also

[`char.count`](numpy.char.count#numpy.char.count "numpy.char.count")
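A short sketch of `chararray.count` (the sample strings are illustrative); like `str.count`, occurrences are counted without overlap, per element:

```python
import numpy as np

a = np.char.array(["banana", "ananas", "pear"])
c = a.count("an")   # non-overlapping "an" occurrences in each element
print(c)
```

The result is an integer array with one count per string element.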
numpy.char.chararray.decode
===========================

method

char.chararray.decode(*encoding=None*, *errors=None*)[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/core/defchararray.py#L2177-L2186)

Calls `str.decode` element-wise.

See also

[`char.decode`](numpy.char.decode#numpy.char.decode "numpy.char.decode")

numpy.char.chararray.dump
=========================

method

char.chararray.dump(*file*)

Dump a pickle of the array to the specified file. The array can be read back with pickle.load or numpy.load.

Parameters

**file** : str or Path

A string naming the dump file.

Changed in version 1.17.0: [`pathlib.Path`](https://docs.python.org/3/library/pathlib.html#pathlib.Path "(in Python v3.10)") objects are now accepted.

numpy.char.chararray.dumps
==========================

method

char.chararray.dumps()

Returns the pickle of the array as a string. pickle.loads will convert the string back to an array.

Parameters

**None**

numpy.char.chararray.encode
===========================

method

char.chararray.encode(*encoding=None*, *errors=None*)[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/core/defchararray.py#L2188-L2197)

Calls [`str.encode`](https://docs.python.org/3/library/stdtypes.html#str.encode "(in Python v3.10)") element-wise.

See also

[`char.encode`](numpy.char.encode#numpy.char.encode "numpy.char.encode")
numpy.char.chararray.endswith
=============================

method

char.chararray.endswith(*suffix*, *start=0*, *end=None*)[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/core/defchararray.py#L2199-L2209)

Returns a boolean array which is `True` where the string element in `self` ends with `suffix`, otherwise `False`.

See also

[`char.endswith`](numpy.char.endswith#numpy.char.endswith "numpy.char.endswith")

numpy.char.chararray.expandtabs
===============================

method

char.chararray.expandtabs(*tabsize=8*)[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/core/defchararray.py#L2211-L2221)

Return a copy of each string element where all tab characters are replaced by one or more spaces.

See also

[`char.expandtabs`](numpy.char.expandtabs#numpy.char.expandtabs "numpy.char.expandtabs")

numpy.char.chararray.fill
=========================

method

char.chararray.fill(*value*)

Fill the array with a scalar value.

Parameters

**value** : scalar

All elements of `a` will be assigned this value.

#### Examples

```
>>> a = np.array([1, 2])
>>> a.fill(0)
>>> a
array([0, 0])
>>> a = np.empty(2)
>>> a.fill(1)
>>> a
array([1., 1.])
```

numpy.char.chararray.find
=========================

method

char.chararray.find(*sub*, *start=0*, *end=None*)[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/core/defchararray.py#L2223-L2233)

For each element, return the lowest index in the string where substring `sub` is found.

See also

[`char.find`](numpy.char.find#numpy.char.find "numpy.char.find")

numpy.char.chararray.flatten
============================

method

char.chararray.flatten(*order='C'*)

Return a copy of the array collapsed into one dimension.

Parameters

**order** : {‘C’, ‘F’, ‘A’, ‘K’}, optional

‘C’ means to flatten in row-major (C-style) order. ‘F’ means to flatten in column-major (Fortran-style) order. ‘A’ means to flatten in column-major order if `a` is Fortran *contiguous* in memory, row-major order otherwise. ‘K’ means to flatten `a` in the order the elements occur in memory. The default is ‘C’.

Returns

**y** : ndarray

A copy of the input array, flattened to one dimension.

See also

[`ravel`](numpy.ravel#numpy.ravel "numpy.ravel")

Return a flattened array.

[`flat`](numpy.char.chararray.flat#numpy.char.chararray.flat "numpy.char.chararray.flat")

A 1-D flat iterator over the array.

#### Examples

```
>>> a = np.array([[1,2], [3,4]])
>>> a.flatten()
array([1, 2, 3, 4])
>>> a.flatten('F')
array([1, 3, 2, 4])
```

numpy.char.chararray.getfield
=============================

method

char.chararray.getfield(*dtype*, *offset=0*)

Returns a field of the given array as a certain type.

A field is a view of the array data with a given data-type. The values in the view are determined by the given type and the offset into the current array in bytes. The offset needs to be such that the view dtype fits in the array dtype; for example an array of dtype complex128 has 16-byte elements. If taking a view with a 32-bit integer (4 bytes), the offset needs to be between 0 and 12 bytes.

Parameters

**dtype** : str or dtype

The data type of the view. The dtype size of the view cannot be larger than that of the array itself.

**offset** : int

Number of bytes to skip before beginning the element view.

#### Examples

```
>>> x = np.diag([1.+1.j]*2)
>>> x[1, 1] = 2 + 4.j
>>> x
array([[1.+1.j, 0.+0.j],
       [0.+0.j, 2.+4.j]])
>>> x.getfield(np.float64)
array([[1., 0.],
       [0., 2.]])
```

By choosing an offset of 8 bytes we can select the complex part of the array for our view:

```
>>> x.getfield(np.float64, offset=8)
array([[1., 0.],
       [0., 4.]])
```

numpy.char.chararray.index
==========================

method

char.chararray.index(*sub*, *start=0*, *end=None*)[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/core/defchararray.py#L2235-L2244)

Like [`find`](numpy.char.chararray.find#numpy.char.chararray.find "numpy.char.chararray.find"), but raises `ValueError` when the substring is not found.

See also

[`char.index`](numpy.char.index#numpy.char.index "numpy.char.index")

numpy.char.chararray.isalnum
============================

method

char.chararray.isalnum()[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/core/defchararray.py#L2246-L2257)

Returns true for each element if all characters in the string are alphanumeric and there is at least one character, false otherwise.

See also

[`char.isalnum`](numpy.char.isalnum#numpy.char.isalnum "numpy.char.isalnum")
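A small sketch of `chararray.isalnum` (sample strings are illustrative); as documented above, an empty string yields `False` because at least one character is required:

```python
import numpy as np

a = np.char.array(["abc123", "abc!", ""])
mask = a.isalnum()   # True only for non-empty, fully alphanumeric elements
print(mask)
```

The other `is*` predicates on the following pages behave analogously, each mirroring the corresponding `str` method element-wise.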
numpy.char.chararray.isalpha
============================

method

char.chararray.isalpha()[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/core/defchararray.py#L2259-L2270)

Returns true for each element if all characters in the string are alphabetic and there is at least one character, false otherwise.

See also

[`char.isalpha`](numpy.char.isalpha#numpy.char.isalpha "numpy.char.isalpha")

numpy.char.chararray.isdecimal
==============================

method

char.chararray.isdecimal()[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/core/defchararray.py#L2599-L2609)

For each element in `self`, return True if there are only decimal characters in the element.

See also

[`char.isdecimal`](numpy.char.isdecimal#numpy.char.isdecimal "numpy.char.isdecimal")

numpy.char.chararray.isdigit
============================

method

char.chararray.isdigit()[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/core/defchararray.py#L2272-L2282)

Returns true for each element if all characters in the string are digits and there is at least one character, false otherwise.

See also

[`char.isdigit`](numpy.char.isdigit#numpy.char.isdigit "numpy.char.isdigit")

numpy.char.chararray.islower
============================

method

char.chararray.islower()[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/core/defchararray.py#L2284-L2295)

Returns true for each element if all cased characters in the string are lowercase and there is at least one cased character, false otherwise.

See also

[`char.islower`](numpy.char.islower#numpy.char.islower "numpy.char.islower")

numpy.char.chararray.isnumeric
==============================

method

char.chararray.isnumeric()[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/core/defchararray.py#L2587-L2597)

For each element in `self`, return True if there are only numeric characters in the element.

See also

[`char.isnumeric`](numpy.char.isnumeric#numpy.char.isnumeric "numpy.char.isnumeric")

numpy.char.chararray.isspace
============================

method

char.chararray.isspace()[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/core/defchararray.py#L2297-L2308)

Returns true for each element if there are only whitespace characters in the string and there is at least one character, false otherwise.

See also

[`char.isspace`](numpy.char.isspace#numpy.char.isspace "numpy.char.isspace")
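The distinction between `isdecimal`, `isdigit`, and `isnumeric` follows the underlying `str` methods; a sketch with illustrative inputs (the superscript `'³'` counts as a digit and as numeric, but not as decimal):

```python
import numpy as np

a = np.char.array(["123", "³", "abc"])
print(a.isdecimal())   # only plain decimal characters
print(a.isdigit())     # decimals plus characters like superscripts
print(a.isnumeric())   # broadest category of numeric characters
```

All three return boolean arrays with one entry per string element.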
numpy.char.chararray.istitle
============================

method

char.chararray.istitle()[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/core/defchararray.py#L2310-L2320)

Returns true for each element if the element is a titlecased string and there is at least one character, false otherwise.

See also

[`char.istitle`](numpy.char.istitle#numpy.char.istitle "numpy.char.istitle")

numpy.char.chararray.isupper
============================

method

char.chararray.isupper()[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/core/defchararray.py#L2322-L2333)

Returns true for each element if all cased characters in the string are uppercase and there is at least one character, false otherwise.

See also

[`char.isupper`](numpy.char.isupper#numpy.char.isupper "numpy.char.isupper")

numpy.char.chararray.item
=========================

method

char.chararray.item(**args*)

Copy an element of an array to a standard Python scalar and return it.

Parameters

***args** : Arguments (variable number and type)

* none: in this case, the method only works for arrays with one element (`a.size == 1`), whose element is copied into a standard Python scalar object and returned.
* int_type: this argument is interpreted as a flat index into the array, specifying which element to copy and return.
* tuple of int_types: functions as does a single int_type argument, except that the argument is interpreted as an nd-index into the array.
Returns

**z** : Standard Python scalar object

A copy of the specified element of the array as a suitable Python scalar.

#### Notes

When the data type of `a` is longdouble or clongdouble, item() returns a scalar array object because there is no available Python scalar that would not lose information. Void arrays return a buffer object for item(), unless fields are defined, in which case a tuple is returned.

[`item`](#numpy.char.chararray.item "numpy.char.chararray.item") is very similar to a[args], except that, instead of an array scalar, a standard Python scalar is returned. This can be useful for speeding up access to elements of the array and doing arithmetic on elements of the array using Python’s optimized math.

#### Examples

```
>>> np.random.seed(123)
>>> x = np.random.randint(9, size=(3, 3))
>>> x
array([[2, 2, 6],
       [1, 3, 6],
       [1, 0, 1]])
>>> x.item(3)
1
>>> x.item(7)
0
>>> x.item((0, 1))
2
>>> x.item((2, 2))
1
```

numpy.char.chararray.join
=========================

method

char.chararray.join(*seq*)[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/core/defchararray.py#L2335-L2345)

Return a string which is the concatenation of the strings in the sequence `seq`.

See also

[`char.join`](numpy.char.join#numpy.char.join "numpy.char.join")

numpy.char.chararray.ljust
==========================

method

char.chararray.ljust(*width*, *fillchar=' '*)[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/core/defchararray.py#L2347-L2357)

Return an array with the elements of `self` left-justified in a string of length `width`.

See also

[`char.ljust`](numpy.char.ljust#numpy.char.ljust "numpy.char.ljust")
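A short sketch of `chararray.ljust` (inputs are illustrative); each element is padded on the right with `fillchar` up to `width`:

```python
import numpy as np

a = np.char.array(["ab", "c"])
out = a.ljust(4, ".")   # pad each element to width 4 with '.'
print(out)
```

A non-whitespace fill character is used here deliberately, since `chararray` strips trailing whitespace on item retrieval.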
numpy.char.chararray.lower
==========================

method

char.chararray.lower()[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/core/defchararray.py#L2359-L2369)

Return an array with the elements of `self` converted to lowercase.

See also

[`char.lower`](numpy.char.lower#numpy.char.lower "numpy.char.lower")

numpy.char.chararray.lstrip
===========================

method

char.chararray.lstrip(*chars=None*)[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/core/defchararray.py#L2371-L2381)

For each element in `self`, return a copy with the leading characters removed.

See also

[`char.lstrip`](numpy.char.lstrip#numpy.char.lstrip "numpy.char.lstrip")

numpy.char.chararray.nonzero
============================

method

char.chararray.nonzero()

Return the indices of the elements that are non-zero.

Refer to [`numpy.nonzero`](numpy.nonzero#numpy.nonzero "numpy.nonzero") for full documentation.

See also

[`numpy.nonzero`](numpy.nonzero#numpy.nonzero "numpy.nonzero")

equivalent function

numpy.char.chararray.put
========================

method

char.chararray.put(*indices*, *values*, *mode='raise'*)

Set `a.flat[n] = values[n]` for all `n` in indices.

Refer to [`numpy.put`](numpy.put#numpy.put "numpy.put") for full documentation.

See also

[`numpy.put`](numpy.put#numpy.put "numpy.put")

equivalent function
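A minimal sketch of the element-wise `lower` and `lstrip` methods documented above (inputs are illustrative); `lstrip` with no argument removes leading whitespace only:

```python
import numpy as np

a = np.char.array(["  Hello", "World"])
print(a.lower())    # case conversion per element, leading spaces kept
print(a.lstrip())   # leading whitespace removed per element
```

Both return new arrays; the original `chararray` is unchanged.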
numpy.char.chararray.ravel
==========================

method

char.chararray.ravel([*order*])

Return a flattened array.

Refer to [`numpy.ravel`](numpy.ravel#numpy.ravel "numpy.ravel") for full documentation.

See also

[`numpy.ravel`](numpy.ravel#numpy.ravel "numpy.ravel")

equivalent function

[`ndarray.flat`](numpy.ndarray.flat#numpy.ndarray.flat "numpy.ndarray.flat")

a flat iterator on the array.

numpy.char.chararray.repeat
===========================

method

char.chararray.repeat(*repeats*, *axis=None*)

Repeat elements of an array.

Refer to [`numpy.repeat`](numpy.repeat#numpy.repeat "numpy.repeat") for full documentation.

See also

[`numpy.repeat`](numpy.repeat#numpy.repeat "numpy.repeat")

equivalent function

numpy.char.chararray.replace
============================

method

char.chararray.replace(*old*, *new*, *count=None*)[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/core/defchararray.py#L2393-L2403)

For each element in `self`, return a copy of the string with all occurrences of substring `old` replaced by `new`.

See also

[`char.replace`](numpy.char.replace#numpy.char.replace "numpy.char.replace")

numpy.char.chararray.reshape
============================

method

char.chararray.reshape(*shape*, *order='C'*)

Returns an array containing the same data with a new shape.

Refer to [`numpy.reshape`](numpy.reshape#numpy.reshape "numpy.reshape") for full documentation.

See also

[`numpy.reshape`](numpy.reshape#numpy.reshape "numpy.reshape")

equivalent function

#### Notes

Unlike the free function [`numpy.reshape`](numpy.reshape#numpy.reshape "numpy.reshape"), this method on [`ndarray`](numpy.ndarray#numpy.ndarray "numpy.ndarray") allows the elements of the shape parameter to be passed in as separate arguments. For example, `a.reshape(10, 11)` is equivalent to `a.reshape((10, 11))`.

numpy.char.chararray.resize
===========================

method

char.chararray.resize(*new_shape*, *refcheck=True*)

Change shape and size of array in-place.

Parameters

**new_shape** : tuple of ints, or `n` ints

Shape of resized array.

**refcheck** : bool, optional

If False, reference count will not be checked. Default is True.

Returns

None

Raises

ValueError

If `a` does not own its own data, or references or views to it exist, and the data memory must be changed. PyPy only: will always raise if the data memory must be changed, since there is no reliable way to determine if references or views to it exist.

SystemError

If the `order` keyword argument is specified. This behaviour is a bug in NumPy.

See also

[`resize`](numpy.resize#numpy.resize "numpy.resize")

Return a new array with the specified shape.

#### Notes

This reallocates space for the data area if necessary.

Only contiguous arrays (data elements consecutive in memory) can be resized.

The purpose of the reference count check is to make sure you do not use this array as a buffer for another Python object and then reallocate the memory. However, reference counts can increase in other ways so if you are sure that you have not shared the memory for this array with another Python object, then you may safely set `refcheck` to False.

#### Examples

Shrinking an array: array is flattened (in the order that the data are stored in memory), resized, and reshaped:

```
>>> a = np.array([[0, 1], [2, 3]], order='C')
>>> a.resize((2, 1))
>>> a
array([[0],
       [1]])
```

```
>>> a = np.array([[0, 1], [2, 3]], order='F')
>>> a.resize((2, 1))
>>> a
array([[0],
       [2]])
```

Enlarging an array: as above, but missing entries are filled with zeros:

```
>>> b = np.array([[0, 1], [2, 3]])
>>> b.resize(2, 3) # new_shape parameter doesn't have to be a tuple
>>> b
array([[0, 1, 2],
       [3, 0, 0]])
```

Referencing an array prevents resizing
 ``` >>> c = a >>> a.resize((1, 1)) Traceback (most recent call last): ... ValueError: cannot resize an array that references or is referenced ... ``` Unless `refcheck` is False: ``` >>> a.resize((1, 1), refcheck=False) >>> a array([[0]]) >>> c array([[0]]) ``` © 2005–2022 NumPy Developers Licensed under the 3-clause BSD License. <https://numpy.org/doc/1.23/reference/generated/numpy.char.chararray.resize.htmlnumpy.char.chararray.rfind ========================== method char.chararray.rfind(*sub*, *start=0*, *end=None*)[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/core/defchararray.py#L2405-L2416) For each element in `self`, return the highest index in the string where substring `sub` is found, such that `sub` is contained within [`start`, `end`]. See also [`char.rfind`](numpy.char.rfind#numpy.char.rfind "numpy.char.rfind") © 2005–2022 NumPy Developers Licensed under the 3-clause BSD License. <https://numpy.org/doc/1.23/reference/generated/numpy.char.chararray.rfind.htmlnumpy.char.chararray.rindex =========================== method char.chararray.rindex(*sub*, *start=0*, *end=None*)[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/core/defchararray.py#L2418-L2428) Like [`rfind`](numpy.char.chararray.rfind#numpy.char.chararray.rfind "numpy.char.chararray.rfind"), but raises `ValueError` when the substring `sub` is not found. See also [`char.rindex`](numpy.char.rindex#numpy.char.rindex "numpy.char.rindex") © 2005–2022 NumPy Developers Licensed under the 3-clause BSD License. <https://numpy.org/doc/1.23/reference/generated/numpy.char.chararray.rindex.htmlnumpy.char.chararray.rjust ========================== method char.chararray.rjust(*width*, *fillchar=' '*)[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/core/defchararray.py#L2430-L2440) Return an array with the elements of `self` right-justified in a string of length `width`. 
See also [`char.rjust`](numpy.char.rjust#numpy.char.rjust "numpy.char.rjust") © 2005–2022 NumPy Developers Licensed under the 3-clause BSD License. <https://numpy.org/doc/1.23/reference/generated/numpy.char.chararray.rjust.htmlnumpy.char.chararray.rsplit =========================== method char.chararray.rsplit(*sep=None*, *maxsplit=None*)[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/core/defchararray.py#L2452-L2462) For each element in `self`, return a list of the words in the string, using `sep` as the delimiter string. See also [`char.rsplit`](numpy.char.rsplit#numpy.char.rsplit "numpy.char.rsplit") © 2005–2022 NumPy Developers Licensed under the 3-clause BSD License. <https://numpy.org/doc/1.23/reference/generated/numpy.char.chararray.rsplit.htmlnumpy.char.chararray.rstrip =========================== method char.chararray.rstrip(*chars=None*)[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/core/defchararray.py#L2464-L2474) For each element in `self`, return a copy with the trailing characters removed. See also [`char.rstrip`](numpy.char.rstrip#numpy.char.rstrip "numpy.char.rstrip") © 2005–2022 NumPy Developers Licensed under the 3-clause BSD License. <https://numpy.org/doc/1.23/reference/generated/numpy.char.chararray.rstrip.htmlnumpy.char.chararray.searchsorted ================================= method char.chararray.searchsorted(*v*, *side='left'*, *sorter=None*) Find indices where elements of v should be inserted in a to maintain order. For full documentation, see [`numpy.searchsorted`](numpy.searchsorted#numpy.searchsorted "numpy.searchsorted") See also [`numpy.searchsorted`](numpy.searchsorted#numpy.searchsorted "numpy.searchsorted") equivalent function © 2005–2022 NumPy Developers Licensed under the 3-clause BSD License. 
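As a small sketch of the `rstrip` and `rsplit` methods documented above: `rstrip` removes the given trailing characters per element, while `rsplit` returns an object array whose entries are Python lists of the split words.

```python
import numpy as np

a = np.char.array(['a,b,c,', 'x,y,'])

# rstrip removes the given trailing characters from every element;
# rsplit returns an object array whose entries are Python lists.
trimmed = a.rstrip(',')
parts = trimmed.rsplit(',')

print(trimmed)
print(parts[0])
```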
<https://numpy.org/doc/1.23/reference/generated/numpy.char.chararray.searchsorted.htmlnumpy.char.chararray.setfield ============================= method char.chararray.setfield(*val*, *dtype*, *offset=0*) Put a value into a specified place in a field defined by a data-type. Place `val` into `a`’s field defined by [`dtype`](numpy.dtype#numpy.dtype "numpy.dtype") and beginning `offset` bytes into the field. Parameters **val**object Value to be placed in field. **dtype**dtype object Data-type of the field in which to place `val`. **offset**int, optional The number of bytes into the field at which to place `val`. Returns None See also [`getfield`](numpy.char.chararray.getfield#numpy.char.chararray.getfield "numpy.char.chararray.getfield") #### Examples ``` >>> x = np.eye(3) >>> x.getfield(np.float64) array([[1., 0., 0.], [0., 1., 0.], [0., 0., 1.]]) >>> x.setfield(3, np.int32) >>> x.getfield(np.int32) array([[3, 3, 3], [3, 3, 3], [3, 3, 3]], dtype=int32) >>> x array([[1.0e+000, 1.5e-323, 1.5e-323], [1.5e-323, 1.0e+000, 1.5e-323], [1.5e-323, 1.5e-323, 1.0e+000]]) >>> x.setfield(np.eye(3), np.int32) >>> x array([[1., 0., 0.], [0., 1., 0.], [0., 0., 1.]]) ``` <https://numpy.org/doc/1.23/reference/generated/numpy.char.chararray.setfield.htmlnumpy.char.chararray.setflags ============================= method char.chararray.setflags(*write=None*, *align=None*, *uic=None*) Set array flags WRITEABLE, ALIGNED, WRITEBACKIFCOPY, respectively. These Boolean-valued flags affect how numpy interprets the memory area used by `a` (see Notes below). The ALIGNED flag can only be set to True if the data is actually aligned according to the type. The WRITEBACKIFCOPY flag can never be set to True. The WRITEABLE flag can only be set to True if the array owns its own memory, or the ultimate owner of the memory exposes a writeable buffer interface, or is a string.
(The exception for string is made so that unpickling can be done without copying memory.) Parameters **write**bool, optional Describes whether or not `a` can be written to. **align**bool, optional Describes whether or not `a` is aligned properly for its type. **uic**bool, optional Describes whether or not `a` is a copy of another “base” array. #### Notes Array flags provide information about how the memory area used for the array is to be interpreted. There are 7 Boolean flags in use, only three of which can be changed by the user: WRITEBACKIFCOPY, WRITEABLE, and ALIGNED. WRITEABLE (W) the data area can be written to; ALIGNED (A) the data and strides are aligned appropriately for the hardware (as determined by the compiler); WRITEBACKIFCOPY (X) this array is a copy of some other array (referenced by .base). When the C-API function PyArray_ResolveWritebackIfCopy is called, the base array will be updated with the contents of this array. All flags can be accessed using the single (upper case) letter as well as the full name. #### Examples ``` >>> y = np.array([[3, 1, 7], ... [2, 0, 0], ... [8, 5, 9]]) >>> y array([[3, 1, 7], [2, 0, 0], [8, 5, 9]]) >>> y.flags C_CONTIGUOUS : True F_CONTIGUOUS : False OWNDATA : True WRITEABLE : True ALIGNED : True WRITEBACKIFCOPY : False >>> y.setflags(write=0, align=0) >>> y.flags C_CONTIGUOUS : True F_CONTIGUOUS : False OWNDATA : True WRITEABLE : False ALIGNED : False WRITEBACKIFCOPY : False >>> y.setflags(uic=1) Traceback (most recent call last): File "<stdin>", line 1, in <module> ValueError: cannot set WRITEBACKIFCOPY flag to True ``` <https://numpy.org/doc/1.23/reference/generated/numpy.char.chararray.setflags.htmlnumpy.char.chararray.sort ========================= method char.chararray.sort(*axis=-1*, *kind=None*, *order=None*) Sort an array in-place. Refer to [`numpy.sort`](numpy.sort#numpy.sort "numpy.sort") for full documentation.
Parameters **axis**int, optional Axis along which to sort. Default is -1, which means sort along the last axis. **kind**{‘quicksort’, ‘mergesort’, ‘heapsort’, ‘stable’}, optional Sorting algorithm. The default is ‘quicksort’. Note that both ‘stable’ and ‘mergesort’ use timsort under the covers and, in general, the actual implementation will vary with datatype. The ‘mergesort’ option is retained for backwards compatibility. Changed in version 1.15.0: The ‘stable’ option was added. **order**str or list of str, optional When `a` is an array with fields defined, this argument specifies which fields to compare first, second, etc. A single field can be specified as a string, and not all fields need be specified, but unspecified fields will still be used, in the order in which they come up in the dtype, to break ties. See also [`numpy.sort`](numpy.sort#numpy.sort "numpy.sort") Return a sorted copy of an array. [`numpy.argsort`](numpy.argsort#numpy.argsort "numpy.argsort") Indirect sort. [`numpy.lexsort`](numpy.lexsort#numpy.lexsort "numpy.lexsort") Indirect stable sort on multiple keys. [`numpy.searchsorted`](numpy.searchsorted#numpy.searchsorted "numpy.searchsorted") Find elements in sorted array. [`numpy.partition`](numpy.partition#numpy.partition "numpy.partition") Partial sort. #### Notes See [`numpy.sort`](numpy.sort#numpy.sort "numpy.sort") for notes on the different sorting algorithms. #### Examples ``` >>> a = np.array([[1,4], [3,1]]) >>> a.sort(axis=1) >>> a array([[1, 4], [1, 3]]) >>> a.sort(axis=0) >>> a array([[1, 3], [1, 4]]) ``` Use the `order` keyword to specify a field to use when sorting a structured array: ``` >>> a = np.array([('a', 2), ('c', 1)], dtype=[('x', 'S1'), ('y', int)]) >>> a.sort(order='y') >>> a array([(b'c', 1), (b'a', 2)], dtype=[('x', 'S1'), ('y', '<i8')]) ``` © 2005–2022 NumPy Developers Licensed under the 3-clause BSD License. 
<https://numpy.org/doc/1.23/reference/generated/numpy.char.chararray.sort.htmlnumpy.char.chararray.split ========================== method char.chararray.split(*sep=None*, *maxsplit=None*)[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/core/defchararray.py#L2476-L2486) For each element in `self`, return a list of the words in the string, using `sep` as the delimiter string. See also [`char.split`](numpy.char.split#numpy.char.split "numpy.char.split") © 2005–2022 NumPy Developers Licensed under the 3-clause BSD License. <https://numpy.org/doc/1.23/reference/generated/numpy.char.chararray.split.htmlnumpy.char.chararray.splitlines =============================== method char.chararray.splitlines(*keepends=None*)[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/core/defchararray.py#L2488-L2498) For each element in `self`, return a list of the lines in the element, breaking at line boundaries. See also [`char.splitlines`](numpy.char.splitlines#numpy.char.splitlines "numpy.char.splitlines") © 2005–2022 NumPy Developers Licensed under the 3-clause BSD License. <https://numpy.org/doc/1.23/reference/generated/numpy.char.chararray.splitlines.htmlnumpy.char.chararray.squeeze ============================ method char.chararray.squeeze(*axis=None*) Remove axes of length one from `a`. Refer to [`numpy.squeeze`](numpy.squeeze#numpy.squeeze "numpy.squeeze") for full documentation. See also [`numpy.squeeze`](numpy.squeeze#numpy.squeeze "numpy.squeeze") equivalent function © 2005–2022 NumPy Developers Licensed under the 3-clause BSD License. <https://numpy.org/doc/1.23/reference/generated/numpy.char.chararray.squeeze.htmlnumpy.char.chararray.startswith =============================== method char.chararray.startswith(*prefix*, *start=0*, *end=None*)[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/core/defchararray.py#L2500-L2510) Returns a boolean array which is `True` where the string element in `self` starts with `prefix`, otherwise `False`. 
See also [`char.startswith`](numpy.char.startswith#numpy.char.startswith "numpy.char.startswith") © 2005–2022 NumPy Developers Licensed under the 3-clause BSD License. <https://numpy.org/doc/1.23/reference/generated/numpy.char.chararray.startswith.htmlnumpy.char.chararray.strip ========================== method char.chararray.strip(*chars=None*)[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/core/defchararray.py#L2512-L2522) For each element in `self`, return a copy with the leading and trailing characters removed. See also [`char.strip`](numpy.char.strip#numpy.char.strip "numpy.char.strip") © 2005–2022 NumPy Developers Licensed under the 3-clause BSD License. <https://numpy.org/doc/1.23/reference/generated/numpy.char.chararray.strip.htmlnumpy.char.chararray.swapaxes ============================= method char.chararray.swapaxes(*axis1*, *axis2*) Return a view of the array with `axis1` and `axis2` interchanged. Refer to [`numpy.swapaxes`](numpy.swapaxes#numpy.swapaxes "numpy.swapaxes") for full documentation. See also [`numpy.swapaxes`](numpy.swapaxes#numpy.swapaxes "numpy.swapaxes") equivalent function © 2005–2022 NumPy Developers Licensed under the 3-clause BSD License. <https://numpy.org/doc/1.23/reference/generated/numpy.char.chararray.swapaxes.htmlnumpy.char.chararray.swapcase ============================= method char.chararray.swapcase()[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/core/defchararray.py#L2524-L2534) For each element in `self`, return a copy of the string with uppercase characters converted to lowercase and vice versa. See also [`char.swapcase`](numpy.char.swapcase#numpy.char.swapcase "numpy.char.swapcase") © 2005–2022 NumPy Developers Licensed under the 3-clause BSD License. 
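A short sketch tying together `startswith`, `strip`, and `swapcase` from the sections above; note that `startswith` yields a boolean array rather than a chararray:

```python
import numpy as np

a = np.char.array(['xxHello Worldxx', 'xxGOODBYExx'])

mask = a.startswith('xxH')   # boolean ndarray, one entry per element
core = a.strip('x')          # leading and trailing 'x' characters removed
flipped = core.swapcase()    # upper <-> lower, per character

print(mask)
print(core)
print(flipped)
```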
<https://numpy.org/doc/1.23/reference/generated/numpy.char.chararray.swapcase.htmlnumpy.char.chararray.take ========================= method char.chararray.take(*indices*, *axis=None*, *out=None*, *mode='raise'*) Return an array formed from the elements of `a` at the given indices. Refer to [`numpy.take`](numpy.take#numpy.take "numpy.take") for full documentation. See also [`numpy.take`](numpy.take#numpy.take "numpy.take") equivalent function © 2005–2022 NumPy Developers Licensed under the 3-clause BSD License. <https://numpy.org/doc/1.23/reference/generated/numpy.char.chararray.take.htmlnumpy.char.chararray.title ========================== method char.chararray.title()[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/core/defchararray.py#L2536-L2547) For each element in `self`, return a titlecased version of the string: words start with uppercase characters, all remaining cased characters are lowercase. See also [`char.title`](numpy.char.title#numpy.char.title "numpy.char.title") © 2005–2022 NumPy Developers Licensed under the 3-clause BSD License. <https://numpy.org/doc/1.23/reference/generated/numpy.char.chararray.title.htmlnumpy.char.chararray.tofile =========================== method char.chararray.tofile(*fid*, *sep=''*, *format='%s'*) Write array to a file as text or binary (default). Data is always written in ‘C’ order, independent of the order of `a`. The data produced by this method can be recovered using the function fromfile(). Parameters **fid**file or str or Path An open file object, or a string containing a filename. Changed in version 1.17.0: [`pathlib.Path`](https://docs.python.org/3/library/pathlib.html#pathlib.Path "(in Python v3.10)") objects are now accepted. **sep**str Separator between array items for text output. If “” (empty), a binary file is written, equivalent to `file.write(a.tobytes())`. **format**str Format string for text file output. 
Each entry in the array is formatted to text by first converting it to the closest Python type, and then using “format” % item. #### Notes This is a convenience function for quick storage of array data. Information on endianness and precision is lost, so this method is not a good choice for files intended to archive data or transport data between machines with different endianness. Some of these problems can be overcome by outputting the data as text files, at the expense of speed and file size. When fid is a file object, array contents are directly written to the file, bypassing the file object’s `write` method. As a result, tofile cannot be used with file objects supporting compression (e.g., GzipFile) or file-like objects that do not support `fileno()` (e.g., BytesIO). <https://numpy.org/doc/1.23/reference/generated/numpy.char.chararray.tofile.htmlnumpy.char.chararray.tolist =========================== method char.chararray.tolist() Return the array as an `a.ndim`-levels deep nested list of Python scalars. Return a copy of the array data as a (nested) Python list. Data items are converted to the nearest compatible builtin Python type, via the [`item`](numpy.ndarray.item#numpy.ndarray.item "numpy.ndarray.item") function. If `a.ndim` is 0, then since the depth of the nested list is 0, it will not be a list at all, but a simple Python scalar. Parameters **none** Returns **y**object, or list of object, or list of list of object, or

The possibly nested list of array elements. #### Notes The array may be recreated via `a = np.array(a.tolist())`, although this may sometimes lose precision. #### Examples For a 1D array, `a.tolist()` is almost the same as `list(a)`, except that `tolist` changes numpy scalars to Python scalars: ``` >>> a = np.uint32([1, 2]) >>> a_list = list(a) >>> a_list [1, 2] >>> type(a_list[0]) <class 'numpy.uint32'> >>> a_tolist = a.tolist() >>> a_tolist [1, 2] >>> type(a_tolist[0]) <class 'int'> ``` Additionally, for a 2D array, `tolist` applies recursively: ``` >>> a = np.array([[1, 2], [3, 4]]) >>> list(a) [array([1, 2]), array([3, 4])] >>> a.tolist() [[1, 2], [3, 4]] ``` The base case for this recursion is a 0D array: ``` >>> a = np.array(1) >>> list(a) Traceback (most recent call last): ... TypeError: iteration over a 0-d array >>> a.tolist() 1 ``` <https://numpy.org/doc/1.23/reference/generated/numpy.char.chararray.tolist.htmlnumpy.char.chararray.tostring ============================= method char.chararray.tostring(*order='C'*) A compatibility alias for [`tobytes`](numpy.char.chararray.tobytes#numpy.char.chararray.tobytes "numpy.char.chararray.tobytes"), with exactly the same behavior. Despite its name, it returns `bytes` not [`str`](https://docs.python.org/3/library/stdtypes.html#str "(in Python v3.10)")s. Deprecated since version 1.19.0.
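Because `tostring` is only a deprecated alias, new code should spell it `tobytes`; both return the raw array buffer as `bytes`. A minimal sketch:

```python
import numpy as np

a = np.char.array([b'ab', b'cd'])   # bytes input gives dtype 'S2'

# tobytes is the preferred spelling; despite the old name, tostring()
# also returned bytes, not str.
raw = a.tobytes()
print(raw)   # b'abcd'
```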
<https://numpy.org/doc/1.23/reference/generated/numpy.char.chararray.tostring.htmlnumpy.char.chararray.translate ============================== method char.chararray.translate(*table*, *deletechars=None*)[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/core/defchararray.py#L2549-L2561) For each element in `self`, return a copy of the string where all characters occurring in the optional argument `deletechars` are removed, and the remaining characters have been mapped through the given translation table. See also [`char.translate`](numpy.char.translate#numpy.char.translate "numpy.char.translate") <https://numpy.org/doc/1.23/reference/generated/numpy.char.chararray.translate.htmlnumpy.char.chararray.transpose ============================== method char.chararray.transpose(**axes*) Returns a view of the array with axes transposed. For a 1-D array this has no effect, as a transposed vector is simply the same vector. To convert a 1-D array into a 2-D column vector, an additional dimension must be added. `np.atleast_2d(a).T` achieves this, as does `a[:, np.newaxis]`. For a 2-D array, this is a standard matrix transpose. For an n-D array, if axes are given, their order indicates how the axes are permuted (see Examples). If axes are not provided and `a.shape = (i[0], i[1], ... i[n-2], i[n-1])`, then `a.transpose().shape = (i[n-1], i[n-2], ... i[1], i[0])`. Parameters **axes**None, tuple of ints, or `n` ints * None or no argument: reverses the order of the axes. * tuple of ints: `i` in the `j`-th place in the tuple means `a`’s `i`-th axis becomes `a.transpose()`’s `j`-th axis. * `n` ints: same as an n-tuple of the same ints (this form is intended simply as a “convenience” alternative to the tuple form) Returns **out**ndarray View of `a`, with axes suitably permuted.
See also [`transpose`](numpy.transpose#numpy.transpose "numpy.transpose") Equivalent function [`ndarray.T`](numpy.ndarray.t#numpy.ndarray.T "numpy.ndarray.T") Array property returning the array transposed. [`ndarray.reshape`](numpy.ndarray.reshape#numpy.ndarray.reshape "numpy.ndarray.reshape") Give a new shape to an array without changing its data. #### Examples ``` >>> a = np.array([[1, 2], [3, 4]]) >>> a array([[1, 2], [3, 4]]) >>> a.transpose() array([[1, 3], [2, 4]]) >>> a.transpose((1, 0)) array([[1, 3], [2, 4]]) >>> a.transpose(1, 0) array([[1, 3], [2, 4]]) ``` © 2005–2022 NumPy Developers Licensed under the 3-clause BSD License. <https://numpy.org/doc/1.23/reference/generated/numpy.char.chararray.transpose.htmlnumpy.char.chararray.upper ========================== method char.chararray.upper()[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/core/defchararray.py#L2563-L2573) Return an array with the elements of `self` converted to uppercase. See also [`char.upper`](numpy.char.upper#numpy.char.upper "numpy.char.upper") © 2005–2022 NumPy Developers Licensed under the 3-clause BSD License. <https://numpy.org/doc/1.23/reference/generated/numpy.char.chararray.upper.htmlnumpy.char.chararray.view ========================= method char.chararray.view(*[dtype][, type]*) New view of array with the same data. Note Passing None for `dtype` is different from omitting the parameter, since the former invokes `dtype(None)` which is an alias for `dtype('float_')`. Parameters **dtype**data-type or ndarray sub-class, optional Data-type descriptor of the returned view, e.g., float32 or int16. Omitting it results in the view having the same data-type as `a`. This argument can also be specified as an ndarray sub-class, which then specifies the type of the returned object (this is equivalent to setting the `type` parameter). **type**Python type, optional Type of the returned view, e.g., ndarray or matrix. Again, omission of the parameter results in type preservation. 
#### Notes `a.view()` is used two different ways: `a.view(some_dtype)` or `a.view(dtype=some_dtype)` constructs a view of the array’s memory with a different data-type. This can cause a reinterpretation of the bytes of memory. `a.view(ndarray_subclass)` or `a.view(type=ndarray_subclass)` just returns an instance of `ndarray_subclass` that looks at the same array (same shape, dtype, etc.) This does not cause a reinterpretation of the memory. For `a.view(some_dtype)`, if `some_dtype` has a different number of bytes per entry than the previous dtype (for example, converting a regular array to a structured array), then the last axis of `a` must be contiguous. This axis will be resized in the result. Changed in version 1.23.0: Only the last axis needs to be contiguous. Previously, the entire array had to be C-contiguous. #### Examples ``` >>> x = np.array([(1, 2)], dtype=[('a', np.int8), ('b', np.int8)]) ``` Viewing array data using a different type and dtype: ``` >>> y = x.view(dtype=np.int16, type=np.matrix) >>> y matrix([[513]], dtype=int16) >>> print(type(y)) <class 'numpy.matrix'> ``` Creating a view on a structured array so it can be used in calculations: ``` >>> x = np.array([(1, 2),(3,4)], dtype=[('a', np.int8), ('b', np.int8)]) >>> xv = x.view(dtype=np.int8).reshape(-1,2) >>> xv array([[1, 2], [3, 4]], dtype=int8) >>> xv.mean(0) array([2., 3.]) ``` Making changes to the view changes the underlying array: ``` >>> xv[0,1] = 20 >>> x array([(1, 20), (3, 4)], dtype=[('a', 'i1'), ('b', 'i1')]) ``` Using a view to convert an array to a recarray: ``` >>> z = x.view(np.recarray) >>> z.a array([1, 3], dtype=int8) ``` Views share data: ``` >>> x[0] = (9, 10) >>> z[0] (9, 10) ``` Views that change the dtype size (bytes per entry) should normally be avoided on arrays defined by slices, transposes, fortran-ordering, etc.: ``` >>> x = np.array([[1, 2, 3], [4, 5, 6]], dtype=np.int16) >>> y = x[:, ::2] >>> y array([[1, 3], [4, 6]], dtype=int16) >>> y.view(dtype=[('width', np.int16), ('length', np.int16)]) Traceback (most recent call last): ... ValueError: To change to a dtype of a different size, the last axis must be contiguous >>> z = y.copy() >>> z.view(dtype=[('width', np.int16), ('length', np.int16)]) array([[(1, 3)], [(4, 6)]], dtype=[('width', '<i2'), ('length', '<i2')]) ``` However, views that change dtype are totally fine for arrays with a contiguous last axis, even if the rest of the axes are not C-contiguous: ``` >>> x = np.arange(2 * 3 * 4, dtype=np.int8).reshape(2, 3, 4) >>> x.transpose(1, 0, 2).view(np.int16) array([[[ 256, 770], [3340, 3854]], [[1284, 1798], [4368, 4882]], [[2312, 2826], [5396, 5910]]], dtype=int16) ``` <https://numpy.org/doc/1.23/reference/generated/numpy.char.chararray.view.htmlnumpy.char.chararray.dtype ========================== attribute char.chararray.dtype Data-type of the array’s elements. Warning Setting `arr.dtype` is discouraged and may be deprecated in the future. Setting will replace the `dtype` without modifying the memory (see also [`ndarray.view`](numpy.ndarray.view#numpy.ndarray.view "numpy.ndarray.view") and [`ndarray.astype`](numpy.ndarray.astype#numpy.ndarray.astype "numpy.ndarray.astype")). Parameters **None** Returns **d**numpy dtype object See also [`ndarray.astype`](numpy.ndarray.astype#numpy.ndarray.astype "numpy.ndarray.astype") Cast the values contained in the array to a new data-type. [`ndarray.view`](numpy.ndarray.view#numpy.ndarray.view "numpy.ndarray.view") Create a view of the same data but a different data-type. [`numpy.dtype`](numpy.dtype#numpy.dtype "numpy.dtype") #### Examples ``` >>> x array([[0, 1], [2, 3]]) >>> x.dtype dtype('int32') >>> type(x.dtype) <type 'numpy.dtype'> ```
<https://numpy.org/doc/1.23/reference/generated/numpy.char.chararray.dtype.htmlnumpy.char.chararray.strides ============================ attribute char.chararray.strides Tuple of bytes to step in each dimension when traversing an array. The byte offset of element `(i[0], i[1], ..., i[n])` in an array `a` is: ``` offset = sum(np.array(i) * a.strides) ``` A more detailed explanation of strides can be found in the “ndarray.rst” file in the NumPy reference guide. Warning Setting `arr.strides` is discouraged and may be deprecated in the future. [`numpy.lib.stride_tricks.as_strided`](numpy.lib.stride_tricks.as_strided#numpy.lib.stride_tricks.as_strided "numpy.lib.stride_tricks.as_strided") should be preferred to create a new view of the same data in a safer way. See also [`numpy.lib.stride_tricks.as_strided`](numpy.lib.stride_tricks.as_strided#numpy.lib.stride_tricks.as_strided "numpy.lib.stride_tricks.as_strided") #### Notes Imagine an array of 32-bit integers (each 4 bytes): ``` x = np.array([[0, 1, 2, 3, 4], [5, 6, 7, 8, 9]], dtype=np.int32) ``` This array is stored in memory as 40 bytes, one after the other (known as a contiguous block of memory). The strides of an array tell us how many bytes we have to skip in memory to move to the next position along a certain axis. For example, we have to skip 4 bytes (1 value) to move to the next column, but 20 bytes (5 values) to get to the same position in the next row. As such, the strides for the array `x` will be `(20, 4)`. 
#### Examples ``` >>> y = np.reshape(np.arange(2*3*4), (2,3,4)) >>> y array([[[ 0, 1, 2, 3], [ 4, 5, 6, 7], [ 8, 9, 10, 11]], [[12, 13, 14, 15], [16, 17, 18, 19], [20, 21, 22, 23]]]) >>> y.strides (48, 16, 4) >>> y[1,1,1] 17 >>> offset=sum(y.strides * np.array((1,1,1))) >>> offset/y.itemsize 17 ``` ``` >>> x = np.reshape(np.arange(5*6*7*8), (5,6,7,8)).transpose(2,3,1,0) >>> x.strides (32, 4, 224, 1344) >>> i = np.array([3,5,2,2]) >>> offset = sum(i * x.strides) >>> x[3,5,2,2] 813 >>> offset / x.itemsize 813 ``` © 2005–2022 NumPy Developers Licensed under the 3-clause BSD License. <https://numpy.org/doc/1.23/reference/generated/numpy.char.chararray.strides.htmlnumpy.char.chararray.T ====================== attribute char.chararray.T The transposed array. Same as `self.transpose()`. See also [`transpose`](numpy.transpose#numpy.transpose "numpy.transpose") #### Examples ``` >>> x = np.array([[1.,2.],[3.,4.]]) >>> x array([[ 1., 2.], [ 3., 4.]]) >>> x.T array([[ 1., 3.], [ 2., 4.]]) >>> x = np.array([1.,2.,3.,4.]) >>> x array([ 1., 2., 3., 4.]) >>> x.T array([ 1., 2., 3., 4.]) ``` © 2005–2022 NumPy Developers Licensed under the 3-clause BSD License. <https://numpy.org/doc/1.23/reference/generated/numpy.char.chararray.T.htmlnumpy.char.chararray.base ========================= attribute char.chararray.base Base object if memory is from some other object. #### Examples The base of an array that owns its memory is None: ``` >>> x = np.array([1,2,3,4]) >>> x.base is None True ``` Slicing creates a view, whose memory is shared with x: ``` >>> y = x[2:] >>> y.base is x True ``` © 2005–2022 NumPy Developers Licensed under the 3-clause BSD License. <https://numpy.org/doc/1.23/reference/generated/numpy.char.chararray.base.htmlnumpy.char.chararray.ctypes =========================== attribute char.chararray.ctypes An object to simplify the interaction of the array with the ctypes module. 
This attribute creates an object that makes it easier to use arrays when calling shared libraries with the ctypes module. The returned object has, among others, data, shape, and strides attributes (see Notes below) which themselves return ctypes objects that can be used as arguments to a shared library. Parameters **None** Returns **c**Python object Possessing attributes data, shape, strides, etc. See also [`numpy.ctypeslib`](../routines.ctypeslib#module-numpy.ctypeslib "numpy.ctypeslib") #### Notes Below are the public attributes of this object which were documented in “Guide to NumPy” (we have omitted undocumented public attributes, as well as documented private attributes): _ctypes.data A pointer to the memory area of the array as a Python integer. This memory area may contain data that is not aligned, or not in correct byte-order. The memory area may not even be writeable. The array flags and data-type of this array should be respected when passing this attribute to arbitrary C-code to avoid trouble that can include Python crashing. User Beware! The value of this attribute is exactly the same as `self.__array_interface__['data'][0]`. Note that unlike `data_as`, a reference will not be kept to the array: code like `ctypes.c_void_p((a + b).ctypes.data)` will result in a pointer to a deallocated array, and should be spelt `(a + b).ctypes.data_as(ctypes.c_void_p)` _ctypes.shape (c_intp*self.ndim): A ctypes array of length self.ndim where the basetype is the C-integer corresponding to `dtype('p')` on this platform (see [`c_intp`](../routines.ctypeslib#numpy.ctypeslib.c_intp "numpy.ctypeslib.c_intp")). This base-type could be [`ctypes.c_int`](https://docs.python.org/3/library/ctypes.html#ctypes.c_int "(in Python v3.10)"), [`ctypes.c_long`](https://docs.python.org/3/library/ctypes.html#ctypes.c_long "(in Python v3.10)"), or [`ctypes.c_longlong`](https://docs.python.org/3/library/ctypes.html#ctypes.c_longlong "(in Python v3.10)") depending on the platform. The ctypes array contains the shape of the underlying array. _ctypes.strides (c_intp*self.ndim): A ctypes array of length self.ndim where the basetype is the same as for the shape attribute. This ctypes array contains the strides information from the underlying array. This strides information is important for showing how many bytes must be jumped to get to the next element in the array. _ctypes.data_as(*obj*)[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/core/_internal.py#L267-L284) Return the data pointer cast to a particular c-types object. For example, calling `self._as_parameter_` is equivalent to `self.data_as(ctypes.c_void_p)`. Perhaps you want to use the data as a pointer to a ctypes array of floating-point data: `self.data_as(ctypes.POINTER(ctypes.c_double))`. The returned pointer will keep a reference to the array. _ctypes.shape_as(*obj*)[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/core/_internal.py#L286-L293) Return the shape tuple as an array of some other c-types type. For example: `self.shape_as(ctypes.c_short)`. _ctypes.strides_as(*obj*)[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/core/_internal.py#L295-L302) Return the strides tuple as an array of some other c-types type. For example: `self.strides_as(ctypes.c_longlong)`. If the ctypes module is not available, then the ctypes attribute of array objects still returns something useful, but ctypes objects are not returned and errors may be raised instead. In particular, the object will still have the `_as_parameter_` attribute which will return an integer equal to the data attribute.
#### Examples ``` >>> import ctypes >>> x = np.array([[0, 1], [2, 3]], dtype=np.int32) >>> x array([[0, 1], [2, 3]], dtype=int32) >>> x.ctypes.data 31962608 # may vary >>> x.ctypes.data_as(ctypes.POINTER(ctypes.c_uint32)) <__main__.LP_c_uint object at 0x7ff2fc1fc200> # may vary >>> x.ctypes.data_as(ctypes.POINTER(ctypes.c_uint32)).contents c_uint(0) >>> x.ctypes.data_as(ctypes.POINTER(ctypes.c_uint64)).contents c_ulong(4294967296) >>> x.ctypes.shape <numpy.core._internal.c_long_Array_2 object at 0x7ff2fc1fce60> # may vary >>> x.ctypes.strides <numpy.core._internal.c_long_Array_2 object at 0x7ff2fc1ff320> # may vary ``` © 2005–2022 NumPy Developers Licensed under the 3-clause BSD License. <https://numpy.org/doc/1.23/reference/generated/numpy.char.chararray.ctypes.htmlnumpy.char.chararray.data ========================= attribute char.chararray.data Python buffer object pointing to the start of the array’s data. © 2005–2022 NumPy Developers Licensed under the 3-clause BSD License. <https://numpy.org/doc/1.23/reference/generated/numpy.char.chararray.data.htmlnumpy.char.chararray.flags ========================== attribute char.chararray.flags Information about the memory layout of the array. #### Notes The [`flags`](#numpy.char.chararray.flags "numpy.char.chararray.flags") object can be accessed dictionary-like (as in `a.flags['WRITEABLE']`), or by using lowercased attribute names (as in `a.flags.writeable`). Short flag names are only supported in dictionary access. Only the WRITEBACKIFCOPY, WRITEABLE, and ALIGNED flags can be changed by the user, via direct assignment to the attribute or dictionary entry, or by calling [`ndarray.setflags`](numpy.ndarray.setflags#numpy.ndarray.setflags "numpy.ndarray.setflags"). The array flags cannot be set arbitrarily: * WRITEBACKIFCOPY can only be set `False`. * ALIGNED can only be set `True` if the data is truly aligned. 
* WRITEABLE can only be set `True` if the array owns its own memory or the ultimate owner of the memory exposes a writeable buffer interface or is a string. Arrays can be both C-style and Fortran-style contiguous simultaneously. This is clear for 1-dimensional arrays, but can also be true for higher dimensional arrays. Even for contiguous arrays a stride for a given dimension `arr.strides[dim]` may be *arbitrary* if `arr.shape[dim] == 1` or the array has no elements. It does *not* generally hold that `self.strides[-1] == self.itemsize` for C-style contiguous arrays or `self.strides[0] == self.itemsize` for Fortran-style contiguous arrays is true. Attributes **C_CONTIGUOUS (C)** The data is in a single, C-style contiguous segment. **F_CONTIGUOUS (F)** The data is in a single, Fortran-style contiguous segment. **OWNDATA (O)** The array owns the memory it uses or borrows it from another object. **WRITEABLE (W)** The data area can be written to. Setting this to False locks the data, making it read-only. A view (slice, etc.) inherits WRITEABLE from its base array at creation time, but a view of a writeable array may be subsequently locked while the base array remains writeable. (The opposite is not true, in that a view of a locked array may not be made writeable. However, currently, locking a base object does not lock any views that already reference it, so under that circumstance it is possible to alter the contents of a locked array via a previously created writeable view onto it.) Attempting to change a non-writeable array raises a RuntimeError exception. **ALIGNED (A)** The data and all elements are aligned appropriately for the hardware. **WRITEBACKIFCOPY (X)** This array is a copy of some other array. The C-API function PyArray_ResolveWritebackIfCopy must be called before deallocating to the base array will be updated with the contents of this array. **FNC** F_CONTIGUOUS and not C_CONTIGUOUS. **FORC** F_CONTIGUOUS or C_CONTIGUOUS (one-segment test). 
**BEHAVED (B)** ALIGNED and WRITEABLE. **CARRAY (CA)** BEHAVED and C_CONTIGUOUS. **FARRAY (FA)** BEHAVED and F_CONTIGUOUS and not C_CONTIGUOUS. © 2005–2022 NumPy Developers Licensed under the 3-clause BSD License. <https://numpy.org/doc/1.23/reference/generated/numpy.char.chararray.flags.htmlnumpy.char.chararray.flat ========================= attribute char.chararray.flat A 1-D iterator over the array. This is a [`numpy.flatiter`](numpy.flatiter#numpy.flatiter "numpy.flatiter") instance, which acts similarly to, but is not a subclass of, Python’s built-in iterator object. See also [`flatten`](numpy.char.chararray.flatten#numpy.char.chararray.flatten "numpy.char.chararray.flatten") Return a copy of the array collapsed into one dimension. [`flatiter`](numpy.flatiter#numpy.flatiter "numpy.flatiter") #### Examples ``` >>> x = np.arange(1, 7).reshape(2, 3) >>> x array([[1, 2, 3], [4, 5, 6]]) >>> x.flat[3] 4 >>> x.T array([[1, 4], [2, 5], [3, 6]]) >>> x.T.flat[3] 5 >>> type(x.flat) <class 'numpy.flatiter'``` An assignment example: ``` >>> x.flat = 3; x array([[3, 3, 3], [3, 3, 3]]) >>> x.flat[[1,4]] = 1; x array([[3, 1, 3], [3, 1, 3]]) ``` © 2005–2022 NumPy Developers Licensed under the 3-clause BSD License. <https://numpy.org/doc/1.23/reference/generated/numpy.char.chararray.flat.htmlnumpy.char.chararray.imag ========================= attribute char.chararray.imag The imaginary part of the array. #### Examples ``` >>> x = np.sqrt([1+0j, 0+1j]) >>> x.imag array([ 0. , 0.70710678]) >>> x.imag.dtype dtype('float64') ``` © 2005–2022 NumPy Developers Licensed under the 3-clause BSD License. <https://numpy.org/doc/1.23/reference/generated/numpy.char.chararray.imag.htmlnumpy.char.chararray.itemsize ============================= attribute char.chararray.itemsize Length of one array element in bytes. 
#### Examples ``` >>> x = np.array([1,2,3], dtype=np.float64) >>> x.itemsize 8 >>> x = np.array([1,2,3], dtype=np.complex128) >>> x.itemsize 16 ``` © 2005–2022 NumPy Developers Licensed under the 3-clause BSD License. <https://numpy.org/doc/1.23/reference/generated/numpy.char.chararray.itemsize.htmlnumpy.char.chararray.nbytes =========================== attribute char.chararray.nbytes Total bytes consumed by the elements of the array. #### Notes Does not include memory consumed by non-element attributes of the array object. #### Examples ``` >>> x = np.zeros((3,5,2), dtype=np.complex128) >>> x.nbytes 480 >>> np.prod(x.shape) * x.itemsize 480 ``` © 2005–2022 NumPy Developers Licensed under the 3-clause BSD License. <https://numpy.org/doc/1.23/reference/generated/numpy.char.chararray.nbytes.htmlnumpy.char.chararray.ndim ========================= attribute char.chararray.ndim Number of array dimensions. #### Examples ``` >>> x = np.array([1, 2, 3]) >>> x.ndim 1 >>> y = np.zeros((2, 3, 4)) >>> y.ndim 3 ``` © 2005–2022 NumPy Developers Licensed under the 3-clause BSD License. <https://numpy.org/doc/1.23/reference/generated/numpy.char.chararray.ndim.htmlnumpy.char.chararray.real ========================= attribute char.chararray.real The real part of the array. See also [`numpy.real`](numpy.real#numpy.real "numpy.real") equivalent function #### Examples ``` >>> x = np.sqrt([1+0j, 0+1j]) >>> x.real array([ 1. , 0.70710678]) >>> x.real.dtype dtype('float64') ``` © 2005–2022 NumPy Developers Licensed under the 3-clause BSD License. <https://numpy.org/doc/1.23/reference/generated/numpy.char.chararray.real.htmlnumpy.char.chararray.shape ========================== attribute char.chararray.shape Tuple of array dimensions. The shape property is usually used to get the current shape of an array, but may also be used to reshape the array in-place by assigning a tuple of array dimensions to it. 
As with [`numpy.reshape`](numpy.reshape#numpy.reshape "numpy.reshape"), one of the new shape dimensions can be -1, in which case its value is inferred from the size of the array and the remaining dimensions. Reshaping an array in-place will fail if a copy is required. Warning Setting `arr.shape` is discouraged and may be deprecated in the future. Using [`ndarray.reshape`](numpy.ndarray.reshape#numpy.ndarray.reshape "numpy.ndarray.reshape") is the preferred approach. See also [`numpy.shape`](numpy.shape#numpy.shape "numpy.shape") Equivalent getter function. [`numpy.reshape`](numpy.reshape#numpy.reshape "numpy.reshape") Function similar to setting `shape`. [`ndarray.reshape`](numpy.ndarray.reshape#numpy.ndarray.reshape "numpy.ndarray.reshape") Method similar to setting `shape`. #### Examples ``` >>> x = np.array([1, 2, 3, 4]) >>> x.shape (4,) >>> y = np.zeros((2, 3, 4)) >>> y.shape (2, 3, 4) >>> y.shape = (3, 8) >>> y array([[ 0., 0., 0., 0., 0., 0., 0., 0.], [ 0., 0., 0., 0., 0., 0., 0., 0.], [ 0., 0., 0., 0., 0., 0., 0., 0.]]) >>> y.shape = (3, 6) Traceback (most recent call last): File "<stdin>", line 1, in <module> ValueError: total size of new array must be unchanged >>> np.zeros((4,2))[::2].shape = (-1,) Traceback (most recent call last): File "<stdin>", line 1, in <module> AttributeError: Incompatible shape for in-place modification. Use `.reshape()` to make a copy with the desired shape. ``` © 2005–2022 NumPy Developers Licensed under the 3-clause BSD License. <https://numpy.org/doc/1.23/reference/generated/numpy.char.chararray.shape.htmlnumpy.char.chararray.size ========================= attribute char.chararray.size Number of elements in the array. Equal to `np.prod(a.shape)`, i.e., the product of the array’s dimensions. #### Notes `a.size` returns a standard arbitrary precision Python integer. 
This may not be the case with other methods of obtaining the same value (like the suggested `np.prod(a.shape)`, which returns an instance of `np.int_`), and may be relevant if the value is used further in calculations that may overflow a fixed size integer type. #### Examples ``` >>> x = np.zeros((3, 5, 2), dtype=np.complex128) >>> x.size 30 >>> np.prod(x.shape) 30 ``` © 2005–2022 NumPy Developers Licensed under the 3-clause BSD License. <https://numpy.org/doc/1.23/reference/generated/numpy.char.chararray.size.htmlnumpy.char.chararray.tobytes ============================ method char.chararray.tobytes(*order='C'*) Construct Python bytes containing the raw data bytes in the array. Constructs Python bytes showing a copy of the raw contents of data memory. The bytes object is produced in C-order by default. This behavior is controlled by the `order` parameter. New in version 1.9.0. Parameters **order**{‘C’, ‘F’, ‘A’}, optional Controls the memory layout of the bytes object. ‘C’ means C-order, ‘F’ means F-order, ‘A’ (short for *Any*) means ‘F’ if `a` is Fortran contiguous, ‘C’ otherwise. Default is ‘C’. Returns **s**bytes Python bytes exhibiting a copy of `a`’s raw data. #### Examples ``` >>> x = np.array([[0, 1], [2, 3]], dtype='<u2') >>> x.tobytes() b'\x00\x00\x01\x00\x02\x00\x03\x00' >>> x.tobytes('C') == x.tobytes() True >>> x.tobytes('F') b'\x00\x00\x02\x00\x01\x00\x03\x00' ``` © 2005–2022 NumPy Developers Licensed under the 3-clause BSD License. <https://numpy.org/doc/1.23/reference/generated/numpy.char.chararray.tobytes.htmlnumpy.finfo.machar ================== property *property*finfo.machar The object which calculated these parameters and holds more detailed information. Deprecated since version 1.22. © 2005–2022 NumPy Developers Licensed under the 3-clause BSD License. <https://numpy.org/doc/1.23/reference/generated/numpy.finfo.machar.htmlnumpy.finfo.tiny ================ property *property*finfo.tiny Return the value for tiny, alias of smallest_normal. 
Returns **tiny**float Value for the smallest normal, alias of smallest_normal. Warns UserWarning If the calculated value for the smallest normal is requested for double-double. © 2005–2022 NumPy Developers Licensed under the 3-clause BSD License. <https://numpy.org/doc/1.23/reference/generated/numpy.finfo.tiny.htmlnumpy.finfo.smallest_normal ============================ property *property*finfo.smallest_normal Return the value for the smallest normal. Returns **smallest_normal**float Value for the smallest normal. Warns UserWarning If the calculated value for the smallest normal is requested for double-double. © 2005–2022 NumPy Developers Licensed under the 3-clause BSD License. <https://numpy.org/doc/1.23/reference/generated/numpy.finfo.smallest_normal.htmlnumpy.iinfo.min =============== property *property*iinfo.min Minimum value of given dtype. © 2005–2022 NumPy Developers Licensed under the 3-clause BSD License. <https://numpy.org/doc/1.23/reference/generated/numpy.iinfo.min.htmlnumpy.iinfo.max =============== property *property*iinfo.max Maximum value of given dtype. © 2005–2022 NumPy Developers Licensed under the 3-clause BSD License. <https://numpy.org/doc/1.23/reference/generated/numpy.iinfo.max.htmlnumpy.errstate.__call__ =========================== method errstate.__call__(*func*) Call self as a function. © 2005–2022 NumPy Developers Licensed under the 3-clause BSD License. <https://numpy.org/doc/1.23/reference/generated/numpy.errstate.__call__.htmlnumpy.DataSource.abspath ======================== method DataSource.abspath(*path*)[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/lib/_datasource.py#L375-L415) Return absolute path of file in the DataSource directory. 
If `path` is an URL, then [`abspath`](#numpy.DataSource.abspath "numpy.DataSource.abspath") will return either the location the file exists locally or the location it would exist when opened using the [`open`](numpy.datasource.open#numpy.DataSource.open "numpy.DataSource.open") method. Parameters **path**str Can be a local file or a remote URL. Returns **out**str Complete path, including the [`DataSource`](numpy.datasource#numpy.DataSource "numpy.DataSource") destination directory. #### Notes The functionality is based on [`os.path.abspath`](https://docs.python.org/3/library/os.path.html#os.path.abspath "(in Python v3.10)"). © 2005–2022 NumPy Developers Licensed under the 3-clause BSD License. <https://numpy.org/doc/1.23/reference/generated/numpy.DataSource.abspath.htmlnumpy.DataSource.exists ======================= method DataSource.exists(*path*)[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/lib/_datasource.py#L431-L485) Test if path exists. Test if `path` exists as (and in this order): * a local file. * a remote URL that has been downloaded and stored locally in the [`DataSource`](numpy.datasource#numpy.DataSource "numpy.DataSource") directory. * a remote URL that has not been downloaded, but is valid and accessible. Parameters **path**str Can be a local file or a remote URL. Returns **out**bool True if `path` exists. #### Notes When `path` is an URL, [`exists`](#numpy.DataSource.exists "numpy.DataSource.exists") will return True if it’s either stored locally in the [`DataSource`](numpy.datasource#numpy.DataSource "numpy.DataSource") directory, or is a valid remote URL. [`DataSource`](numpy.datasource#numpy.DataSource "numpy.DataSource") does not discriminate between the two, the file is accessible if it exists in either location. © 2005–2022 NumPy Developers Licensed under the 3-clause BSD License. 
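The `abspath`/`exists`/`open` behavior above can be exercised entirely with a local file. A minimal sketch (the temp-file path is invented for illustration; with `destpath=None`, `DataSource` creates a temporary download directory, while local files are opened in place):

```python
import os
import tempfile

import numpy as np

# Create a local file to act as the data source.
tmpdir = tempfile.mkdtemp()
path = os.path.join(tmpdir, "values.txt")
with open(path, "w") as f:
    f.write("1 2 3\n")

# destpath=None: remote URLs would be cached in a temporary directory;
# a local path like this one is used directly.
ds = np.DataSource(None)
print(ds.exists(path))          # True: found as a local file
with ds.open(path) as fh:
    print(fh.read().split())    # ['1', '2', '3']
```

For a remote URL, the same `open` call would first download the file into the `DataSource` directory and then open the local copy.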
<https://numpy.org/doc/1.23/reference/generated/numpy.DataSource.exists.htmlnumpy.DataSource.open ===================== method DataSource.open(*path*, *mode='r'*, *encoding=None*, *newline=None*)[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/lib/_datasource.py#L487-L533) Open and return file-like object. If `path` is an URL, it will be downloaded, stored in the [`DataSource`](numpy.datasource#numpy.DataSource "numpy.DataSource") directory and opened from there. Parameters **path**str Local file path or URL to open. **mode**{‘r’, ‘w’, ‘a’}, optional Mode to open `path`. Mode ‘r’ for reading, ‘w’ for writing, ‘a’ to append. Available modes depend on the type of object specified by `path`. Default is ‘r’. **encoding**{None, str}, optional Open text file with given encoding. The default encoding will be what [`io.open`](https://docs.python.org/3/library/io.html#io.open "(in Python v3.10)") uses. **newline**{None, str}, optional Newline to use when reading text file. Returns **out**file object File object. © 2005–2022 NumPy Developers Licensed under the 3-clause BSD License. <https://numpy.org/doc/1.23/reference/generated/numpy.DataSource.open.htmlnumpy.lib.format.descr_to_dtype ================================= lib.format.descr_to_dtype(*descr*)[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/lib/format.py#L282-L336) Returns a dtype based off the given description. This is essentially the reverse of `dtype_to_descr()`. It will remove the valueless padding fields created by, i.e. simple fields like dtype(‘float32’), and then convert the description to its corresponding dtype. Parameters **descr**object The object retrieved by dtype.descr. Can be passed to `numpy.dtype()` in order to replicate the input dtype. Returns **dtype**dtype The dtype constructed by the description. © 2005–2022 NumPy Developers Licensed under the 3-clause BSD License. 
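Since `descr_to_dtype` is described as the reverse of `dtype_to_descr`, the two should round-trip a dtype. A minimal sketch of that round trip (the field names `x` and `y` are arbitrary):

```python
import numpy as np
from numpy.lib import format as npy_format

# A structured dtype: dtype_to_descr yields a descriptor that
# descr_to_dtype (or numpy.dtype) can turn back into the same dtype.
dt = np.dtype([("x", np.float32), ("y", np.int16)])
descr = npy_format.dtype_to_descr(dt)
assert npy_format.descr_to_dtype(descr) == dt

# Simple dtypes round-trip through a plain format string such as '<f8'.
simple = np.dtype("<f8")
assert npy_format.descr_to_dtype(npy_format.dtype_to_descr(simple)) == simple
```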
<https://numpy.org/doc/1.23/reference/generated/numpy.lib.format.descr_to_dtype.htmlnumpy.lib.format.dtype_to_descr ================================= lib.format.dtype_to_descr(*dtype*)[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/lib/format.py#L246-L280) Get a serializable descriptor from the dtype. The .descr attribute of a dtype object cannot be round-tripped through the dtype() constructor. Simple types, like dtype(‘float32’), have a descr which looks like a record array with one field with ‘’ as a name. The dtype() constructor interprets this as a request to give a default name. Instead, we construct descriptor that can be passed to dtype(). Parameters **dtype**dtype The dtype of the array that will be written to disk. Returns **descr**object An object that can be passed to `numpy.dtype()` in order to replicate the input dtype. © 2005–2022 NumPy Developers Licensed under the 3-clause BSD License. <https://numpy.org/doc/1.23/reference/generated/numpy.lib.format.dtype_to_descr.htmlnumpy.lib.format.header_data_from_array_1_0 ================================================ lib.format.header_data_from_array_1_0(*array*)[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/lib/format.py#L338-L363) Get the dictionary of header metadata from a numpy.ndarray. Parameters **array**numpy.ndarray Returns **d**dict This has the appropriate entries for writing its string representation to the header of the file. © 2005–2022 NumPy Developers Licensed under the 3-clause BSD License. <https://numpy.org/doc/1.23/reference/generated/numpy.lib.format.header_data_from_array_1_0.htmlnumpy.lib.format.magic ====================== lib.format.magic(*major*, *minor*)[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/lib/format.py#L195-L215) Return the magic string for the given file format version. Parameters **major**int in [0, 255] **minor**int in [0, 255] Returns **magic**str Raises ValueError if the version cannot be formatted. 
© 2005–2022 NumPy Developers Licensed under the 3-clause BSD License. <https://numpy.org/doc/1.23/reference/generated/numpy.lib.format.magic.htmlnumpy.lib.format.read_array ============================ lib.format.read_array(*fp*, *allow_pickle=False*, *pickle_kwargs=None*)[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/lib/format.py#L697-L787) Read an array from an NPY file. Parameters **fp**file_like object If this is not a real file object, then this may take extra memory and time. **allow_pickle**bool, optional Whether to allow writing pickled data. Default: False Changed in version 1.16.3: Made default False in response to CVE-2019-6446. **pickle_kwargs**dict Additional keyword arguments to pass to pickle.load. These are only useful when loading object arrays saved on Python 2 when using Python 3. Returns **array**ndarray The array from the data on disk. Raises ValueError If the data is invalid, or allow_pickle=False and the file contains an object array. © 2005–2022 NumPy Developers Licensed under the 3-clause BSD License. <https://numpy.org/doc/1.23/reference/generated/numpy.lib.format.read_array.htmlnumpy.lib.format.read_array_header_1_0 ========================================== lib.format.read_array_header_1_0(*fp*)[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/lib/format.py#L468-L497) Read an array header from a filelike object using the 1.0 file format version. This will leave the file object located just after the header. Parameters **fp**filelike object A file object or something with a `read()` method like a file. Returns **shape**tuple of int The shape of the array. **fortran_order**bool The array data will be written out directly if it is either C-contiguous or Fortran-contiguous. Otherwise, it will be made contiguous before writing it out. **dtype**dtype The dtype of the file’s data. Raises ValueError If the data is invalid. © 2005–2022 NumPy Developers Licensed under the 3-clause BSD License. 
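The magic-string and header readers above compose naturally with `write_array` (documented below). A minimal in-memory round trip, assuming a plain float64 array is stored in the 1.0 format (the oldest version that can hold it):

```python
import io

import numpy as np
from numpy.lib import format as npy_format

arr = np.arange(6, dtype=np.float64).reshape(2, 3)

# Write magic string + header + data into an in-memory buffer.
buf = io.BytesIO()
npy_format.write_array(buf, arr)
buf.seek(0)

# Read it back piecewise: first the format version, then the header.
major, minor = npy_format.read_magic(buf)
shape, fortran_order, dtype = npy_format.read_array_header_1_0(buf)
print((major, minor), shape, fortran_order, dtype)
```

After `read_array_header_1_0` returns, the file position sits at the first data byte, so the raw array contents can be read directly from `buf`.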
<https://numpy.org/doc/1.23/reference/generated/numpy.lib.format.read_array_header_1_0.htmlnumpy.lib.format.read_array_header_2_0 ========================================== lib.format.read_array_header_2_0(*fp*)[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/lib/format.py#L499-L530) Read an array header from a filelike object using the 2.0 file format version. This will leave the file object located just after the header. New in version 1.9.0. Parameters **fp**filelike object A file object or something with a `read()` method like a file. Returns **shape**tuple of int The shape of the array. **fortran_order**bool The array data will be written out directly if it is either C-contiguous or Fortran-contiguous. Otherwise, it will be made contiguous before writing it out. **dtype**dtype The dtype of the file’s data. Raises ValueError If the data is invalid. © 2005–2022 NumPy Developers Licensed under the 3-clause BSD License. <https://numpy.org/doc/1.23/reference/generated/numpy.lib.format.read_array_header_2_0.htmlnumpy.lib.format.read_magic ============================ lib.format.read_magic(*fp*)[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/lib/format.py#L217-L234) Read the magic string to get the version of the file format. Parameters **fp**filelike object Returns **major**int **minor**int © 2005–2022 NumPy Developers Licensed under the 3-clause BSD License. <https://numpy.org/doc/1.23/reference/generated/numpy.lib.format.read_magic.htmlnumpy.lib.format.write_array ============================= lib.format.write_array(*fp*, *array*, *version=None*, *allow_pickle=True*, *pickle_kwargs=None*)[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/lib/format.py#L625-L694) Write an array to an NPY file, including a header. If the array is neither C-contiguous nor Fortran-contiguous AND the file_like object is not a real file object, this function will have to copy data in memory. 
Parameters **fp**file_like object An open, writable file object, or similar object with a `.write()` method. **array**ndarray The array to write to disk. **version**(int, int) or None, optional The version number of the format. None means use the oldest supported version that is able to store the data. Default: None **allow_pickle**bool, optional Whether to allow writing pickled data. Default: True **pickle_kwargs**dict, optional Additional keyword arguments to pass to pickle.dump, excluding ‘protocol’. These are only useful when pickling objects in object arrays on Python 3 to Python 2 compatible format. Raises ValueError If the array cannot be persisted. This includes the case of allow_pickle=False and array being an object array. Various other errors If the array contains Python objects as part of its dtype, the process of pickling them may raise various errors if the objects are not picklable. © 2005–2022 NumPy Developers Licensed under the 3-clause BSD License. <https://numpy.org/doc/1.23/reference/generated/numpy.lib.format.write_array.htmlnumpy.lib.format.write_array_header_1_0 =========================================== lib.format.write_array_header_1_0(*fp*, *d*)[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/lib/format.py#L440-L450) Write the header for an array using the 1.0 format. Parameters **fp**filelike object **d**dict This has the appropriate entries for writing its string representation to the header of the file. © 2005–2022 NumPy Developers Licensed under the 3-clause BSD License. <https://numpy.org/doc/1.23/reference/generated/numpy.lib.format.write_array_header_1_0.htmlnumpy.polymul ============= numpy.polymul(*a1*, *a2*)[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/lib/polynomial.py#L909-L969) Find the product of two polynomials. Note This forms part of the old polynomial API. 
Since version 1.4, the new polynomial API defined in [`numpy.polynomial`](../routines.polynomials.package#module-numpy.polynomial "numpy.polynomial") is preferred. A summary of the differences can be found in the [transition guide](../routines.polynomials). Finds the polynomial resulting from the multiplication of the two input polynomials. Each input must be either a poly1d object or a 1D sequence of polynomial coefficients, from highest to lowest degree. Parameters **a1, a2**array_like or poly1d object Input polynomials. Returns **out**ndarray or poly1d object The polynomial resulting from the multiplication of the inputs. If either inputs is a poly1d object, then the output is also a poly1d object. Otherwise, it is a 1D array of polynomial coefficients from highest to lowest degree. See also [`poly1d`](numpy.poly1d#numpy.poly1d "numpy.poly1d") A one-dimensional polynomial class. [`poly`](numpy.poly#numpy.poly "numpy.poly"), [`polyadd`](numpy.polyadd#numpy.polyadd "numpy.polyadd"), [`polyder`](numpy.polyder#numpy.polyder "numpy.polyder"), [`polydiv`](numpy.polydiv#numpy.polydiv "numpy.polydiv"), [`polyfit`](numpy.polyfit#numpy.polyfit "numpy.polyfit"), [`polyint`](numpy.polyint#numpy.polyint "numpy.polyint"), [`polysub`](numpy.polysub#numpy.polysub "numpy.polysub"), [`polyval`](numpy.polyval#numpy.polyval "numpy.polyval") [`convolve`](numpy.convolve#numpy.convolve "numpy.convolve") Array convolution. Same output as polymul, but has parameter for overlap mode. #### Examples ``` >>> np.polymul([1, 2, 3], [9, 5, 1]) array([ 9, 23, 38, 17, 3]) ``` Using poly1d objects: ``` >>> p1 = np.poly1d([1, 2, 3]) >>> p2 = np.poly1d([9, 5, 1]) >>> print(p1) 2 1 x + 2 x + 3 >>> print(p2) 2 9 x + 5 x + 1 >>> print(np.polymul(p1, p2)) 4 3 2 9 x + 23 x + 38 x + 17 x + 3 ``` © 2005–2022 NumPy Developers Licensed under the 3-clause BSD License. 
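As the See Also entry notes, `convolve` produces the same output as `polymul`: multiplying polynomials is a convolution of their coefficient sequences. A quick check using the coefficients from the example above:

```python
import numpy as np

a, b = [1, 2, 3], [9, 5, 1]

# Polynomial multiplication == coefficient convolution.
assert np.array_equal(np.polymul(a, b), np.convolve(a, b))
print(np.polymul(a, b))  # [ 9 23 38 17  3]
```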
<https://numpy.org/doc/1.23/reference/generated/numpy.polymul.htmlnumpy.polynomial.set_default_printstyle ========================================= polynomial.set_default_printstyle(*style*)[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/polynomial/__init__.py#L134-L180) Set the default format for the string representation of polynomials. Values for `style` must be valid inputs to `__format__`, i.e. ‘ascii’ or ‘unicode’. Parameters **style**str Format string for default printing style. Must be either ‘ascii’ or ‘unicode’. #### Notes The default format depends on the platform: ‘unicode’ is used on Unix-based systems and ‘ascii’ on Windows. This determination is based on default font support for the unicode superscript and subscript ranges. #### Examples ``` >>> p = np.polynomial.Polynomial([1, 2, 3]) >>> c = np.polynomial.Chebyshev([1, 2, 3]) >>> np.polynomial.set_default_printstyle('unicode') >>> print(p) 1.0 + 2.0·x¹ + 3.0·x² >>> print(c) 1.0 + 2.0·T₁(x) + 3.0·T₂(x) >>> np.polynomial.set_default_printstyle('ascii') >>> print(p) 1.0 + 2.0 x**1 + 3.0 x**2 >>> print(c) 1.0 + 2.0 T_1(x) + 3.0 T_2(x) >>> # Formatting supersedes all class/package-level defaults >>> print(f"{p:unicode}") 1.0 + 2.0·x¹ + 3.0·x² ``` © 2005–2022 NumPy Developers Licensed under the 3-clause BSD License. <https://numpy.org/doc/1.23/reference/generated/numpy.polynomial.set_default_printstyle.htmlnumpy.polynomial.polynomial.polydomain ====================================== polynomial.polynomial.polydomain*=array([-1, 1])* An array object represents a multidimensional, homogeneous array of fixed-size items. An associated data-type object describes the format of each element in the array (its byte-order, how many bytes it occupies in memory, whether it is an integer, a floating point number, or something else, etc.) 
Arrays should be constructed using [`array`](numpy.array#numpy.array "numpy.array"), [`zeros`](numpy.zeros#numpy.zeros "numpy.zeros") or [`empty`](numpy.empty#numpy.empty "numpy.empty") (refer to the See Also section below). The parameters given here refer to a low-level method (`ndarray(...)`) for instantiating an array. For more information, refer to the [`numpy`](../index#module-numpy "numpy") module and examine the methods and attributes of an array. Parameters **(for the __new__ method; see Notes below)** **shape**tuple of ints Shape of created array. **dtype**data-type, optional Any object that can be interpreted as a numpy data type. **buffer**object exposing buffer interface, optional Used to fill the array with data. **offset**int, optional Offset of array data in buffer. **strides**tuple of ints, optional Strides of data in memory. **order**{‘C’, ‘F’}, optional Row-major (C-style) or column-major (Fortran-style) order. See also [`array`](numpy.array#numpy.array "numpy.array") Construct an array. [`zeros`](numpy.zeros#numpy.zeros "numpy.zeros") Create an array, each element of which is zero. [`empty`](numpy.empty#numpy.empty "numpy.empty") Create an array, but leave its allocated memory unchanged (i.e., it contains “garbage”). [`dtype`](numpy.dtype#numpy.dtype "numpy.dtype") Create a data-type. [`numpy.typing.NDArray`](../typing#numpy.typing.NDArray "numpy.typing.NDArray") An ndarray alias [generic](https://docs.python.org/3/glossary.html#term-generic-type "(in Python v3.10)") w.r.t. its [`dtype.type`](numpy.dtype.type#numpy.dtype.type "numpy.dtype.type"). #### Notes There are two modes of creating an array using `__new__`: 1. If `buffer` is None, then only [`shape`](numpy.shape#numpy.shape "numpy.shape"), [`dtype`](numpy.dtype#numpy.dtype "numpy.dtype"), and `order` are used. 2. If `buffer` is an object exposing the buffer interface, then all keywords are interpreted. No `__init__` method is needed because the array is fully initialized after the `__new__` method. #### Examples These examples illustrate the low-level [`ndarray`](numpy.ndarray#numpy.ndarray "numpy.ndarray") constructor. Refer to the `See Also` section above for easier ways of constructing an ndarray. 
First mode, `buffer` is None: ``` >>> np.ndarray(shape=(2,2), dtype=float, order='F') array([[0.0e+000, 0.0e+000], # random [ nan, 2.5e-323]]) ``` Second mode: ``` >>> np.ndarray((2,), buffer=np.array([1,2,3]), ... offset=np.int_().itemsize, ... dtype=int) # offset = 1*itemsize, i.e. skip first element array([2, 3]) ``` Attributes **T**ndarray Transpose of the array. **data**buffer The array’s elements, in memory. **dtype**dtype object Describes the format of the elements in the array. **flags**dict Dictionary containing information related to memory use, e.g., ‘C_CONTIGUOUS’, ‘OWNDATA’, ‘WRITEABLE’, etc. **flat**numpy.flatiter object Flattened version of the array as an iterator. The iterator allows assignments, e.g., `x.flat = 3` (See [`ndarray.flat`](numpy.ndarray.flat#numpy.ndarray.flat "numpy.ndarray.flat") for assignment examples; TODO). **imag**ndarray Imaginary part of the array. **real**ndarray Real part of the array. **size**int Number of elements in the array. **itemsize**int The memory use of each array element in bytes. **nbytes**int The total number of bytes required to store the array data, i.e., `itemsize * size`. **ndim**int The array’s number of dimensions. **shape**tuple of ints Shape of the array. **strides**tuple of ints The step-size required to move from one element to the next in memory. For example, a contiguous `(3, 4)` array of type `int16` in C-order has strides `(8, 2)`. This implies that to move from element to element in memory requires jumps of 2 bytes. To move from row-to-row, one needs to jump 8 bytes at a time (`2 * 4`). **ctypes**ctypes object Class containing properties of the array needed for interaction with ctypes. **base**ndarray If the array is a view into another array, that array is its `base` (unless that array is also a view). The `base` array is where the array data is actually stored. © 2005–2022 NumPy Developers Licensed under the 3-clause BSD License. 
<https://numpy.org/doc/1.23/reference/generated/numpy.polynomial.polynomial.polydomain.html>

numpy.polynomial.polynomial.polyzero
====================================

polynomial.polynomial.polyzero *= array([0])*

An array object represents a multidimensional, homogeneous array of fixed-size items. An associated data-type object describes the format of each element in the array (its byte-order, how many bytes it occupies in memory, whether it is an integer, a floating point number, or something else, etc.) In this module, `polyzero` is the coefficient array of the zero polynomial, `p(x) = 0`, with coefficients ordered from lowest degree to highest.
<https://numpy.org/doc/1.23/reference/generated/numpy.polynomial.polynomial.polyzero.html>

numpy.polynomial.polynomial.polyone
===================================

polynomial.polynomial.polyone *= array([1])*

The coefficient array of the constant polynomial `p(x) = 1`, with coefficients ordered from lowest degree to highest.
<https://numpy.org/doc/1.23/reference/generated/numpy.polynomial.polynomial.polyone.html>

numpy.polynomial.polynomial.polyx
=================================

polynomial.polynomial.polyx *= array([0, 1])*

The coefficient array of the identity polynomial `p(x) = x`, with coefficients ordered from lowest degree to highest.
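Taken together, these three constants are ordinary coefficient arrays and compose directly with the other `poly*` routines. A short sketch (not part of the upstream docs):

```python
import numpy as np
from numpy.polynomial import polynomial as P

# Each constant is a plain ndarray of coefficients, lowest-order term first.
print(P.polyzero)  # coefficients of p(x) = 0
print(P.polyone)   # coefficients of p(x) = 1
print(P.polyx)     # coefficients of p(x) = x

# They can be fed to any of the poly* routines, e.g. build 1 + x:
print(P.polyadd(P.polyone, P.polyx))  # [1. 1.]
```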
<https://numpy.org/doc/1.23/reference/generated/numpy.polynomial.polynomial.polyx.html>

numpy.polynomial.polynomial.polyadd
===================================

polynomial.polynomial.polyadd(*c1*, *c2*)[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/polynomial/polynomial.py#L215-L248)

Add one polynomial to another.

Returns the sum of two polynomials `c1` + `c2`. The arguments are sequences of coefficients from lowest order term to highest, i.e., [1,2,3] represents the polynomial `1 + 2*x + 3*x**2`.

Parameters

**c1, c2** : array_like
1-D arrays of polynomial coefficients ordered from low to high.

Returns

**out** : ndarray
The coefficient array representing their sum.

See also

[`polysub`](numpy.polysub#numpy.polysub "numpy.polysub"), [`polymulx`](numpy.polynomial.polynomial.polymulx#numpy.polynomial.polynomial.polymulx "numpy.polynomial.polynomial.polymulx"), [`polymul`](numpy.polymul#numpy.polymul "numpy.polymul"), [`polydiv`](numpy.polydiv#numpy.polydiv "numpy.polydiv"), [`polypow`](numpy.polynomial.polynomial.polypow#numpy.polynomial.polynomial.polypow "numpy.polynomial.polynomial.polypow")

#### Examples

```
>>> from numpy.polynomial import polynomial as P
>>> c1 = (1,2,3)
>>> c2 = (3,2,1)
>>> sum = P.polyadd(c1,c2); sum
array([4., 4., 4.])
>>> P.polyval(2, sum) # 4 + 4(2) + 4(2**2)
28.0
```

<https://numpy.org/doc/1.23/reference/generated/numpy.polynomial.polynomial.polyadd.html>

numpy.polynomial.polynomial.polysub
===================================

polynomial.polynomial.polysub(*c1*, *c2*)[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/polynomial/polynomial.py#L251-L285)

Subtract one polynomial from another.

Returns the difference of two polynomials `c1` - `c2`. The arguments are sequences of coefficients from lowest order term to highest, i.e., [1,2,3] represents the polynomial `1 + 2*x + 3*x**2`.
Parameters

**c1, c2** : array_like
1-D arrays of polynomial coefficients ordered from low to high.

Returns

**out** : ndarray
Of coefficients representing their difference.

See also

[`polyadd`](numpy.polyadd#numpy.polyadd "numpy.polyadd"), [`polymulx`](numpy.polynomial.polynomial.polymulx#numpy.polynomial.polynomial.polymulx "numpy.polynomial.polynomial.polymulx"), [`polymul`](numpy.polymul#numpy.polymul "numpy.polymul"), [`polydiv`](numpy.polydiv#numpy.polydiv "numpy.polydiv"), [`polypow`](numpy.polynomial.polynomial.polypow#numpy.polynomial.polynomial.polypow "numpy.polynomial.polynomial.polypow")

#### Examples

```
>>> from numpy.polynomial import polynomial as P
>>> c1 = (1,2,3)
>>> c2 = (3,2,1)
>>> P.polysub(c1,c2)
array([-2., 0., 2.])
>>> P.polysub(c2,c1) # -P.polysub(c1,c2)
array([ 2., 0., -2.])
```

<https://numpy.org/doc/1.23/reference/generated/numpy.polynomial.polynomial.polysub.html>

numpy.polynomial.polynomial.polymulx
====================================

polynomial.polynomial.polymulx(*c*)[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/polynomial/polynomial.py#L288-L325)

Multiply a polynomial by x.

Multiply the polynomial `c` by x, where x is the independent variable.

Parameters

**c** : array_like
1-D array of polynomial coefficients ordered from low to high.

Returns

**out** : ndarray
Array representing the result of the multiplication.

See also

[`polyadd`](numpy.polyadd#numpy.polyadd "numpy.polyadd"), [`polysub`](numpy.polysub#numpy.polysub "numpy.polysub"), [`polymul`](numpy.polymul#numpy.polymul "numpy.polymul"), [`polydiv`](numpy.polydiv#numpy.polydiv "numpy.polydiv"), [`polypow`](numpy.polynomial.polynomial.polypow#numpy.polynomial.polynomial.polypow "numpy.polynomial.polynomial.polypow")

#### Notes

New in version 1.5.0.
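`polymulx` carries no doctest on this page; a minimal sketch of its behavior (multiplying by x shifts the coefficient array up one degree):

```python
import numpy as np
from numpy.polynomial import polynomial as P

# x * (1 + 2*x + 3*x**2) = x + 2*x**2 + 3*x**3,
# so the coefficients [1, 2, 3] shift up to [0, 1, 2, 3].
c = [1, 2, 3]
print(P.polymulx(c))  # [0. 1. 2. 3.]
```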
<https://numpy.org/doc/1.23/reference/generated/numpy.polynomial.polynomial.polymulx.html>

numpy.polynomial.polynomial.polymul
===================================

polynomial.polynomial.polymul(*c1*, *c2*)[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/polynomial/polynomial.py#L328-L363)

Multiply one polynomial by another.

Returns the product of two polynomials `c1` * `c2`. The arguments are sequences of coefficients, from lowest order term to highest, e.g., [1,2,3] represents the polynomial `1 + 2*x + 3*x**2`.

Parameters

**c1, c2** : array_like
1-D arrays of coefficients representing a polynomial, relative to the "standard" basis, and ordered from lowest order term to highest.

Returns

**out** : ndarray
Of the coefficients of their product.

See also

[`polyadd`](numpy.polyadd#numpy.polyadd "numpy.polyadd"), [`polysub`](numpy.polysub#numpy.polysub "numpy.polysub"), [`polymulx`](numpy.polynomial.polynomial.polymulx#numpy.polynomial.polynomial.polymulx "numpy.polynomial.polynomial.polymulx"), [`polydiv`](numpy.polydiv#numpy.polydiv "numpy.polydiv"), [`polypow`](numpy.polynomial.polynomial.polypow#numpy.polynomial.polynomial.polypow "numpy.polynomial.polynomial.polypow")

#### Examples

```
>>> from numpy.polynomial import polynomial as P
>>> c1 = (1,2,3)
>>> c2 = (3,2,1)
>>> P.polymul(c1,c2)
array([ 3., 8., 14., 8., 3.])
```

<https://numpy.org/doc/1.23/reference/generated/numpy.polynomial.polynomial.polymul.html>

numpy.polynomial.polynomial.polydiv
===================================

polynomial.polynomial.polydiv(*c1*, *c2*)[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/polynomial/polynomial.py#L366-L421)

Divide one polynomial by another.

Returns the quotient-with-remainder of two polynomials `c1` / `c2`. The arguments are sequences of coefficients, from lowest order term to highest, e.g., [1,2,3] represents `1 + 2*x + 3*x**2`.
Parameters

**c1, c2** : array_like
1-D arrays of polynomial coefficients ordered from low to high.

Returns

**[quo, rem]** : ndarrays
Of coefficient series representing the quotient and remainder.

See also

[`polyadd`](numpy.polyadd#numpy.polyadd "numpy.polyadd"), [`polysub`](numpy.polysub#numpy.polysub "numpy.polysub"), [`polymulx`](numpy.polynomial.polynomial.polymulx#numpy.polynomial.polynomial.polymulx "numpy.polynomial.polynomial.polymulx"), [`polymul`](numpy.polymul#numpy.polymul "numpy.polymul"), [`polypow`](numpy.polynomial.polynomial.polypow#numpy.polynomial.polynomial.polypow "numpy.polynomial.polynomial.polypow")

#### Examples

```
>>> from numpy.polynomial import polynomial as P
>>> c1 = (1,2,3)
>>> c2 = (3,2,1)
>>> P.polydiv(c1,c2)
(array([3.]), array([-8., -4.]))
>>> P.polydiv(c2,c1)
(array([ 0.33333333]), array([ 2.66666667, 1.33333333])) # may vary
```

<https://numpy.org/doc/1.23/reference/generated/numpy.polynomial.polynomial.polydiv.html>

numpy.polynomial.polynomial.polypow
===================================

polynomial.polynomial.polypow(*c*, *pow*, *maxpower=None*)[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/polynomial/polynomial.py#L424-L460)

Raise a polynomial to a power.

Returns the polynomial `c` raised to the power `pow`. The argument `c` is a sequence of coefficients ordered from low to high, i.e., [1,2,3] is the series `1 + 2*x + 3*x**2`.

Parameters

**c** : array_like
1-D array of series coefficients ordered from low to high degree.

**pow** : integer
Power to which the series will be raised.

**maxpower** : integer, optional
Maximum power allowed. This is mainly to limit growth of the series to unmanageable size. Default is 16.

Returns

**coef** : ndarray
Coefficient array of the series raised to the given power.
See also

[`polyadd`](numpy.polyadd#numpy.polyadd "numpy.polyadd"), [`polysub`](numpy.polysub#numpy.polysub "numpy.polysub"), [`polymulx`](numpy.polynomial.polynomial.polymulx#numpy.polynomial.polynomial.polymulx "numpy.polynomial.polynomial.polymulx"), [`polymul`](numpy.polymul#numpy.polymul "numpy.polymul"), [`polydiv`](numpy.polydiv#numpy.polydiv "numpy.polydiv")

#### Examples

```
>>> from numpy.polynomial import polynomial as P
>>> P.polypow([1,2,3], 2)
array([ 1., 4., 10., 12., 9.])
```

<https://numpy.org/doc/1.23/reference/generated/numpy.polynomial.polynomial.polypow.html>

numpy.polynomial.polynomial.polyval
===================================

polynomial.polynomial.polyval(*x*, *c*, *tensor=True*)[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/polynomial/polynomial.py#L664-L757)

Evaluate a polynomial at points x.

If `c` is of length `n + 1`, this function returns the value

\[p(x) = c_0 + c_1 * x + ... + c_n * x^n\]

The parameter `x` is converted to an array only if it is a tuple or a list, otherwise it is treated as a scalar. In either case, either `x` or its elements must support multiplication and addition both with themselves and with the elements of `c`.

If `c` is a 1-D array, then `p(x)` will have the same shape as `x`. If `c` is multidimensional, then the shape of the result depends on the value of `tensor`. If `tensor` is true the shape will be c.shape[1:] + x.shape. If `tensor` is false the shape will be c.shape[1:]. Note that scalars have shape `()`.

Trailing zeros in the coefficients will be used in the evaluation, so they should be avoided if efficiency is a concern.

Parameters

**x** : array_like, compatible object
If `x` is a list or tuple, it is converted to an ndarray, otherwise it is left unchanged and treated as a scalar. In either case, `x` or its elements must support addition and multiplication with themselves and with the elements of `c`.
**c** : array_like
Array of coefficients ordered so that the coefficients for terms of degree n are contained in c[n]. If `c` is multidimensional the remaining indices enumerate multiple polynomials. In the two dimensional case the coefficients may be thought of as stored in the columns of `c`.

**tensor** : boolean, optional
If True, the shape of the coefficient array is extended with ones on the right, one for each dimension of `x`. Scalars have dimension 0 for this action. The result is that every column of coefficients in `c` is evaluated for every element of `x`. If False, `x` is broadcast over the columns of `c` for the evaluation. This keyword is useful when `c` is multidimensional. The default value is True.

New in version 1.7.0.

Returns

**values** : ndarray, compatible object
The shape of the returned array is described above.

See also

[`polyval2d`](numpy.polynomial.polynomial.polyval2d#numpy.polynomial.polynomial.polyval2d "numpy.polynomial.polynomial.polyval2d"), [`polygrid2d`](numpy.polynomial.polynomial.polygrid2d#numpy.polynomial.polynomial.polygrid2d "numpy.polynomial.polynomial.polygrid2d"), [`polyval3d`](numpy.polynomial.polynomial.polyval3d#numpy.polynomial.polynomial.polyval3d "numpy.polynomial.polynomial.polyval3d"), [`polygrid3d`](numpy.polynomial.polynomial.polygrid3d#numpy.polynomial.polynomial.polygrid3d "numpy.polynomial.polynomial.polygrid3d")

#### Notes

The evaluation uses Horner's method.

#### Examples

```
>>> from numpy.polynomial.polynomial import polyval
>>> polyval(1, [1,2,3])
6.0
>>> a = np.arange(4).reshape(2,2)
>>> a
array([[0, 1],
       [2, 3]])
>>> polyval(a, [1,2,3])
array([[ 1., 6.],
       [17., 34.]])
>>> coef = np.arange(4).reshape(2,2) # multidimensional coefficients
>>> coef
array([[0, 1],
       [2, 3]])
>>> polyval([1,2], coef, tensor=True)
array([[2., 4.],
       [4., 7.]])
>>> polyval([1,2], coef, tensor=False)
array([2., 7.])
```
<https://numpy.org/doc/1.23/reference/generated/numpy.polynomial.polynomial.polyval.html>

numpy.polynomial.polynomial.polyval2d
=====================================

polynomial.polynomial.polyval2d(*x*, *y*, *c*)[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/polynomial/polynomial.py#L848-L895)

Evaluate a 2-D polynomial at points (x, y).

This function returns the value

\[p(x,y) = \sum_{i,j} c_{i,j} * x^i * y^j\]

The parameters `x` and `y` are converted to arrays only if they are tuples or lists; otherwise they are treated as scalars, and they must have the same shape after conversion. In either case, either `x` and `y` or their elements must support multiplication and addition both with themselves and with the elements of `c`.

If `c` has fewer than two dimensions, ones are implicitly appended to its shape to make it 2-D. The shape of the result will be c.shape[2:] + x.shape.

Parameters

**x, y** : array_like, compatible objects
The two dimensional series is evaluated at the points `(x, y)`, where `x` and `y` must have the same shape. If `x` or `y` is a list or tuple, it is first converted to an ndarray, otherwise it is left unchanged and, if it isn't an ndarray, it is treated as a scalar.

**c** : array_like
Array of coefficients ordered so that the coefficient of the term of multi-degree i,j is contained in `c[i,j]`. If `c` has dimension greater than two the remaining indices enumerate multiple sets of coefficients.

Returns

**values** : ndarray, compatible object
The values of the two dimensional polynomial at points formed with pairs of corresponding values from `x` and `y`.
See also

[`polyval`](numpy.polyval#numpy.polyval "numpy.polyval"), [`polygrid2d`](numpy.polynomial.polynomial.polygrid2d#numpy.polynomial.polynomial.polygrid2d "numpy.polynomial.polynomial.polygrid2d"), [`polyval3d`](numpy.polynomial.polynomial.polyval3d#numpy.polynomial.polynomial.polyval3d "numpy.polynomial.polynomial.polyval3d"), [`polygrid3d`](numpy.polynomial.polynomial.polygrid3d#numpy.polynomial.polynomial.polygrid3d "numpy.polynomial.polynomial.polygrid3d")

#### Notes

New in version 1.7.0.

<https://numpy.org/doc/1.23/reference/generated/numpy.polynomial.polynomial.polyval2d.html>

numpy.polynomial.polynomial.polyval3d
=====================================

polynomial.polynomial.polyval3d(*x*, *y*, *z*, *c*)[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/polynomial/polynomial.py#L951-L999)

Evaluate a 3-D polynomial at points (x, y, z).

This function returns the values:

\[p(x,y,z) = \sum_{i,j,k} c_{i,j,k} * x^i * y^j * z^k\]

The parameters `x`, `y`, and `z` are converted to arrays only if they are tuples or lists; otherwise they are treated as scalars, and they must have the same shape after conversion. In either case, either `x`, `y`, and `z` or their elements must support multiplication and addition both with themselves and with the elements of `c`.

If `c` has fewer than 3 dimensions, ones are implicitly appended to its shape to make it 3-D. The shape of the result will be c.shape[3:] + x.shape.

Parameters

**x, y, z** : array_like, compatible object
The three dimensional series is evaluated at the points `(x, y, z)`, where `x`, `y`, and `z` must have the same shape. If any of `x`, `y`, or `z` is a list or tuple, it is first converted to an ndarray, otherwise it is left unchanged and if it isn't an ndarray it is treated as a scalar.

**c** : array_like
Array of coefficients ordered so that the coefficient of the term of multi-degree i,j,k is contained in `c[i,j,k]`.
If `c` has dimension greater than 3 the remaining indices enumerate multiple sets of coefficients.

Returns

**values** : ndarray, compatible object
The values of the multidimensional polynomial on points formed with triples of corresponding values from `x`, `y`, and `z`.

See also

[`polyval`](numpy.polyval#numpy.polyval "numpy.polyval"), [`polyval2d`](numpy.polynomial.polynomial.polyval2d#numpy.polynomial.polynomial.polyval2d "numpy.polynomial.polynomial.polyval2d"), [`polygrid2d`](numpy.polynomial.polynomial.polygrid2d#numpy.polynomial.polynomial.polygrid2d "numpy.polynomial.polynomial.polygrid2d"), [`polygrid3d`](numpy.polynomial.polynomial.polygrid3d#numpy.polynomial.polynomial.polygrid3d "numpy.polynomial.polynomial.polygrid3d")

#### Notes

New in version 1.7.0.

<https://numpy.org/doc/1.23/reference/generated/numpy.polynomial.polynomial.polyval3d.html>

numpy.polynomial.polynomial.polygrid2d
======================================

polynomial.polynomial.polygrid2d(*x*, *y*, *c*)[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/polynomial/polynomial.py#L898-L948)

Evaluate a 2-D polynomial on the Cartesian product of x and y.

This function returns the values:

\[p(a,b) = \sum_{i,j} c_{i,j} * a^i * b^j\]

where the points `(a, b)` consist of all pairs formed by taking `a` from `x` and `b` from `y`. The resulting points form a grid with `x` in the first dimension and `y` in the second.

The parameters `x` and `y` are converted to arrays only if they are tuples or lists; otherwise they are treated as scalars. In either case, either `x` and `y` or their elements must support multiplication and addition both with themselves and with the elements of `c`.

If `c` has fewer than two dimensions, ones are implicitly appended to its shape to make it 2-D. The shape of the result will be c.shape[2:] + x.shape + y.shape.
Parameters **x, y**array_like, compatible objects The two dimensional series is evaluated at the points in the Cartesian product of `x` and `y`. If `x` or `y` is a list or tuple, it is first converted to an ndarray, otherwise it is left unchanged and, if it isn’t an ndarray, it is treated as a scalar. **c**array_like Array of coefficients ordered so that the coefficients for terms of degree i,j are contained in `c[i,j]`. If `c` has dimension greater than two the remaining indices enumerate multiple sets of coefficients. Returns **values**ndarray, compatible object The values of the two dimensional polynomial at points in the Cartesian product of `x` and `y`. See also [`polyval`](numpy.polyval#numpy.polyval "numpy.polyval"), [`polyval2d`](numpy.polynomial.polynomial.polyval2d#numpy.polynomial.polynomial.polyval2d "numpy.polynomial.polynomial.polyval2d"), [`polyval3d`](numpy.polynomial.polynomial.polyval3d#numpy.polynomial.polynomial.polyval3d "numpy.polynomial.polynomial.polyval3d"), [`polygrid3d`](numpy.polynomial.polynomial.polygrid3d#numpy.polynomial.polynomial.polygrid3d "numpy.polynomial.polynomial.polygrid3d") #### Notes New in version 1.7.0. © 2005–2022 NumPy Developers Licensed under the 3-clause BSD License. <https://numpy.org/doc/1.23/reference/generated/numpy.polynomial.polynomial.polygrid2d.htmlnumpy.polynomial.polynomial.polygrid3d ====================================== polynomial.polynomial.polygrid3d(*x*, *y*, *z*, *c*)[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/polynomial/polynomial.py#L1002-L1055) Evaluate a 3-D polynomial on the Cartesian product of x, y and z. This function returns the values: \[p(a,b,c) = \sum_{i,j,k} c_{i,j,k} * a^i * b^j * c^k\] where the points `(a, b, c)` consist of all triples formed by taking `a` from `x`, `b` from `y`, and `c` from `z`. The resulting points form a grid with `x` in the first dimension, `y` in the second, and `z` in the third. 
The parameters `x`, `y`, and `z` are converted to arrays only if they are tuples or lists; otherwise they are treated as scalars. In either case, either `x`, `y`, and `z` or their elements must support multiplication and addition both with themselves and with the elements of `c`. If `c` has fewer than three dimensions, ones are implicitly appended to its shape to make it 3-D. The shape of the result will be c.shape[3:] + x.shape + y.shape + z.shape. Parameters **x, y, z**array_like, compatible objects The three dimensional series is evaluated at the points in the Cartesian product of `x`, `y`, and `z`. If `x`, `y`, or `z` is a list or tuple, it is first converted to an ndarray, otherwise it is left unchanged and, if it isn’t an ndarray, it is treated as a scalar. **c**array_like Array of coefficients ordered so that the coefficients for terms of degree i,j,k are contained in `c[i,j,k]`. If `c` has dimension greater than three the remaining indices enumerate multiple sets of coefficients. Returns **values**ndarray, compatible object The values of the three dimensional polynomial at points in the Cartesian product of `x`, `y`, and `z`. See also [`polyval`](numpy.polyval#numpy.polyval "numpy.polyval"), [`polyval2d`](numpy.polynomial.polynomial.polyval2d#numpy.polynomial.polynomial.polyval2d "numpy.polynomial.polynomial.polyval2d"), [`polygrid2d`](numpy.polynomial.polynomial.polygrid2d#numpy.polynomial.polynomial.polygrid2d "numpy.polynomial.polynomial.polygrid2d"), [`polyval3d`](numpy.polynomial.polynomial.polyval3d#numpy.polynomial.polynomial.polyval3d "numpy.polynomial.polynomial.polyval3d") #### Notes New in version 1.7.0. © 2005–2022 NumPy Developers Licensed under the 3-clause BSD License.
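A minimal sketch of `polygrid3d` (arrays invented for illustration), cross-checking one grid entry against `polyval3d`:

```python
import numpy as np
from numpy.polynomial import polynomial as P

# c[i, j, k] is the coefficient of x**i * y**j * z**k.
rng = np.random.default_rng(0)
c = rng.standard_normal((2, 3, 4))

x = np.array([0.5, 1.5])
y = np.array([-1.0, 2.0])
z = np.array([0.0, 1.0, 3.0])

# The grid has shape x.shape + y.shape + z.shape.
grid = P.polygrid3d(x, y, z, c)
print(grid.shape)  # (2, 2, 3)

# Entry [i, j, k] is the polynomial at (x[i], y[j], z[k]).
expected = P.polyval3d(x[1], y[0], z[2], c)
print(np.allclose(grid[1, 0, 2], expected))  # True
```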
<https://numpy.org/doc/1.23/reference/generated/numpy.polynomial.polynomial.polygrid3d.htmlnumpy.polynomial.polynomial.polyder =================================== polynomial.polynomial.polyder(*c*, *m=1*, *scl=1*, *axis=0*)[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/polynomial/polynomial.py#L463-L542) Differentiate a polynomial. Returns the polynomial coefficients `c` differentiated `m` times along `axis`. At each iteration the result is multiplied by `scl` (the scaling factor is for use in a linear change of variable). The argument `c` is an array of coefficients from low to high degree along each axis, e.g., [1,2,3] represents the polynomial `1 + 2*x + 3*x**2` while [[1,2],[1,2]] represents `1 + 1*x + 2*y + 2*x*y` if axis=0 is `x` and axis=1 is `y`. Parameters **c**array_like Array of polynomial coefficients. If `c` is multidimensional, the different axes correspond to different variables with the degree in each axis given by the corresponding index. **m**int, optional Number of derivatives taken, must be non-negative. (Default: 1) **scl**scalar, optional Each differentiation is multiplied by `scl`. The end result is multiplication by `scl**m`. This is for use in a linear change of variable. (Default: 1) **axis**int, optional Axis over which the derivative is taken. (Default: 0). New in version 1.7.0. Returns **der**ndarray Polynomial coefficients of the derivative. See also [`polyint`](numpy.polyint#numpy.polyint "numpy.polyint") #### Examples ``` >>> from numpy.polynomial import polynomial as P >>> c = (1,2,3,4) # 1 + 2x + 3x**2 + 4x**3 >>> P.polyder(c) # (d/dx)(c) = 2 + 6x + 12x**2 array([ 2., 6., 12.]) >>> P.polyder(c,3) # (d**3/dx**3)(c) = 24 array([24.]) >>> P.polyder(c,scl=-1) # (d/d(-x))(c) = -2 - 6x - 12x**2 array([ -2., -6., -12.]) >>> P.polyder(c,2,-1) # (d**2/d(-x)**2)(c) = 6 + 24x array([ 6., 24.]) ``` © 2005–2022 NumPy Developers Licensed under the 3-clause BSD License.
<https://numpy.org/doc/1.23/reference/generated/numpy.polynomial.polynomial.polyder.htmlnumpy.polynomial.polynomial.polyint =================================== polynomial.polynomial.polyint(*c*, *m=1*, *k=[]*, *lbnd=0*, *scl=1*, *axis=0*)[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/polynomial/polynomial.py#L545-L661) Integrate a polynomial. Returns the polynomial coefficients `c` integrated `m` times from `lbnd` along `axis`. At each iteration the resulting series is **multiplied** by `scl` and an integration constant, `k`, is added. The scaling factor is for use in a linear change of variable. (“Buyer beware”: note that, depending on what one is doing, one may want `scl` to be the reciprocal of what one might expect; for more information, see the Notes section below.) The argument `c` is an array of coefficients, from low to high degree along each axis, e.g., [1,2,3] represents the polynomial `1 + 2*x + 3*x**2` while [[1,2],[1,2]] represents `1 + 1*x + 2*y + 2*x*y` if axis=0 is `x` and axis=1 is `y`. Parameters **c**array_like 1-D array of polynomial coefficients, ordered from low to high. **m**int, optional Order of integration, must be positive. (Default: 1) **k**{[], list, scalar}, optional Integration constant(s). The value of the first integral at zero is the first value in the list, the value of the second integral at zero is the second value, etc. If `k == []` (the default), all constants are set to zero. If `m == 1`, a single scalar can be given instead of a list. **lbnd**scalar, optional The lower bound of the integral. (Default: 0) **scl**scalar, optional Following each integration the result is *multiplied* by `scl` before the integration constant is added. (Default: 1) **axis**int, optional Axis over which the integral is taken. (Default: 0). New in version 1.7.0. Returns **S**ndarray Coefficient array of the integral. Raises ValueError If `m < 1`, `len(k) > m`, `np.ndim(lbnd) != 0`, or `np.ndim(scl) != 0`. 
See also [`polyder`](numpy.polyder#numpy.polyder "numpy.polyder") #### Notes Note that the result of each integration is *multiplied* by `scl`. Why is this important to note? Say one is making a linear change of variable \(u = ax + b\) in an integral relative to `x`. Then \(dx = du/a\), so one will need to set `scl` equal to \(1/a\) - perhaps not what one would have first thought. #### Examples ``` >>> from numpy.polynomial import polynomial as P >>> c = (1,2,3) >>> P.polyint(c) # should return array([0, 1, 1, 1]) array([0., 1., 1., 1.]) >>> P.polyint(c,3) # should return array([0, 0, 0, 1/6, 1/12, 1/20]) array([ 0. , 0. , 0. , 0.16666667, 0.08333333, # may vary 0.05 ]) >>> P.polyint(c,k=3) # should return array([3, 1, 1, 1]) array([3., 1., 1., 1.]) >>> P.polyint(c,lbnd=-2) # should return array([6, 1, 1, 1]) array([6., 1., 1., 1.]) >>> P.polyint(c,scl=-2) # should return array([0, -2, -2, -2]) array([ 0., -2., -2., -2.]) ``` © 2005–2022 NumPy Developers Licensed under the 3-clause BSD License. <https://numpy.org/doc/1.23/reference/generated/numpy.polynomial.polynomial.polyint.htmlnumpy.polynomial.polynomial.polyfromroots ========================================= polynomial.polynomial.polyfromroots(*roots*)[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/polynomial/polynomial.py#L151-L212) Generate a monic polynomial with given roots. Return the coefficients of the polynomial \[p(x) = (x - r_0) * (x - r_1) * ... * (x - r_n),\] where the `r_n` are the roots specified in `roots`. If a zero has multiplicity n, then it must appear in `roots` n times. For instance, if 2 is a root of multiplicity three and 3 is a root of multiplicity 2, then `roots` looks something like [2, 2, 2, 3, 3]. The roots can appear in any order. If the returned coefficients are `c`, then \[p(x) = c_0 + c_1 * x + ...
+ x^n\] The coefficient of the last term is 1 for monic polynomials in this form. Parameters **roots**array_like Sequence containing the roots. Returns **out**ndarray 1-D array of the polynomial’s coefficients. If all the roots are real, then `out` is also real, otherwise it is complex. (see Examples below). See also [`numpy.polynomial.chebyshev.chebfromroots`](numpy.polynomial.chebyshev.chebfromroots#numpy.polynomial.chebyshev.chebfromroots "numpy.polynomial.chebyshev.chebfromroots") [`numpy.polynomial.legendre.legfromroots`](numpy.polynomial.legendre.legfromroots#numpy.polynomial.legendre.legfromroots "numpy.polynomial.legendre.legfromroots") [`numpy.polynomial.laguerre.lagfromroots`](numpy.polynomial.laguerre.lagfromroots#numpy.polynomial.laguerre.lagfromroots "numpy.polynomial.laguerre.lagfromroots") [`numpy.polynomial.hermite.hermfromroots`](numpy.polynomial.hermite.hermfromroots#numpy.polynomial.hermite.hermfromroots "numpy.polynomial.hermite.hermfromroots") [`numpy.polynomial.hermite_e.hermefromroots`](numpy.polynomial.hermite_e.hermefromroots#numpy.polynomial.hermite_e.hermefromroots "numpy.polynomial.hermite_e.hermefromroots") #### Notes The coefficients are determined by multiplying together linear factors of the form `(x - r_i)`, i.e. \[p(x) = (x - r_0) (x - r_1) ... (x - r_n)\] where `n == len(roots) - 1`; note that this implies that `1` is always returned for \(a_n\). #### Examples ``` >>> from numpy.polynomial import polynomial as P >>> P.polyfromroots((-1,0,1)) # x(x - 1)(x + 1) = x^3 - x array([ 0., -1., 0., 1.]) >>> j = complex(0,1) >>> P.polyfromroots((-j,j)) # complex returned, though values are real array([1.+0.j, 0.+0.j, 1.+0.j]) ``` © 2005–2022 NumPy Developers Licensed under the 3-clause BSD License.
<https://numpy.org/doc/1.23/reference/generated/numpy.polynomial.polynomial.polyfromroots.htmlnumpy.polynomial.polynomial.polyroots ===================================== polynomial.polynomial.polyroots(*c*)[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/polynomial/polynomial.py#L1405-L1465) Compute the roots of a polynomial. Return the roots (a.k.a. “zeros”) of the polynomial \[p(x) = \sum_i c[i] * x^i.\] Parameters **c**1-D array_like 1-D array of polynomial coefficients. Returns **out**ndarray Array of the roots of the polynomial. If all the roots are real, then `out` is also real, otherwise it is complex. See also [`numpy.polynomial.chebyshev.chebroots`](numpy.polynomial.chebyshev.chebroots#numpy.polynomial.chebyshev.chebroots "numpy.polynomial.chebyshev.chebroots") [`numpy.polynomial.legendre.legroots`](numpy.polynomial.legendre.legroots#numpy.polynomial.legendre.legroots "numpy.polynomial.legendre.legroots") [`numpy.polynomial.laguerre.lagroots`](numpy.polynomial.laguerre.lagroots#numpy.polynomial.laguerre.lagroots "numpy.polynomial.laguerre.lagroots") [`numpy.polynomial.hermite.hermroots`](numpy.polynomial.hermite.hermroots#numpy.polynomial.hermite.hermroots "numpy.polynomial.hermite.hermroots") [`numpy.polynomial.hermite_e.hermeroots`](numpy.polynomial.hermite_e.hermeroots#numpy.polynomial.hermite_e.hermeroots "numpy.polynomial.hermite_e.hermeroots") #### Notes The root estimates are obtained as the eigenvalues of the companion matrix. Roots far from the origin of the complex plane may have large errors due to the numerical instability of the power series for such values. Roots with multiplicity greater than 1 will also show larger errors as the value of the series near such points is relatively insensitive to errors in the roots. Isolated roots near the origin can be improved by a few iterations of Newton’s method.
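The Newton polishing mentioned in the Notes can be sketched with a small hypothetical helper (`refine_root` is not part of NumPy; it is only an illustration of the idea):

```python
import numpy as np
from numpy.polynomial import polynomial as P

def refine_root(c, r, iters=3):
    """Polish one root estimate r of the polynomial with coefficients c
    (ordered low to high degree) via Newton's method: r <- r - p(r)/p'(r)."""
    dc = P.polyder(c)
    for _ in range(iters):
        r = r - P.polyval(r, c) / P.polyval(r, dc)
    return r

# p(x) = x**2 - 2 has roots +-sqrt(2); start from a rough estimate.
c = np.array([-2.0, 0.0, 1.0])
r = refine_root(c, 1.4)
print(abs(r - np.sqrt(2.0)) < 1e-12)  # True
```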
#### Examples ``` >>> import numpy.polynomial.polynomial as poly >>> poly.polyroots(poly.polyfromroots((-1,0,1))) array([-1., 0., 1.]) >>> poly.polyroots(poly.polyfromroots((-1,0,1))).dtype dtype('float64') >>> j = complex(0,1) >>> poly.polyroots(poly.polyfromroots((-j,0,j))) array([ 0.00000000e+00+0.j, 0.00000000e+00+1.j, 2.77555756e-17-1.j]) # may vary ``` © 2005–2022 NumPy Developers Licensed under the 3-clause BSD License. <https://numpy.org/doc/1.23/reference/generated/numpy.polynomial.polynomial.polyroots.htmlnumpy.polynomial.polynomial.polyvalfromroots ============================================ polynomial.polynomial.polyvalfromroots(*x*, *r*, *tensor=True*)[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/polynomial/polynomial.py#L760-L845) Evaluate a polynomial specified by its roots at points x. If `r` is of length `N`, this function returns the value \[p(x) = \prod_{n=1}^{N} (x - r_n)\] The parameter `x` is converted to an array only if it is a tuple or a list, otherwise it is treated as a scalar. In either case, either `x` or its elements must support multiplication and addition both with themselves and with the elements of `r`. If `r` is a 1-D array, then `p(x)` will have the same shape as `x`. If `r` is multidimensional, then the shape of the result depends on the value of `tensor`. If `tensor` is `True` the shape will be r.shape[1:] + x.shape; that is, each polynomial is evaluated at every value of `x`. If `tensor` is `False`, the shape will be r.shape[1:]; that is, each polynomial is evaluated only for the corresponding broadcast value of `x`. Note that scalars have shape `()`. New in version 1.12. Parameters **x**array_like, compatible object If `x` is a list or tuple, it is converted to an ndarray, otherwise it is left unchanged and treated as a scalar. In either case, `x` or its elements must support addition and multiplication with themselves and with the elements of `r`. **r**array_like Array of roots.
If `r` is multidimensional the first index is the root index, while the remaining indices enumerate multiple polynomials. For instance, in the two dimensional case the roots of each polynomial may be thought of as stored in the columns of `r`. **tensor**boolean, optional If True, the shape of the roots array is extended with ones on the right, one for each dimension of `x`. Scalars have dimension 0 for this action. The result is that every column of roots in `r` is evaluated for every element of `x`. If False, `x` is broadcast over the columns of `r` for the evaluation. This keyword is useful when `r` is multidimensional. The default value is True. Returns **values**ndarray, compatible object The shape of the returned array is described above. See also [`polyroots`](numpy.polynomial.polynomial.polyroots#numpy.polynomial.polynomial.polyroots "numpy.polynomial.polynomial.polyroots"), [`polyfromroots`](numpy.polynomial.polynomial.polyfromroots#numpy.polynomial.polynomial.polyfromroots "numpy.polynomial.polynomial.polyfromroots"), [`polyval`](numpy.polyval#numpy.polyval "numpy.polyval") #### Examples ``` >>> from numpy.polynomial.polynomial import polyvalfromroots >>> import numpy as np >>> polyvalfromroots(1, [1,2,3]) 0.0 >>> a = np.arange(4).reshape(2,2) >>> a array([[0, 1], [2, 3]]) >>> polyvalfromroots(a, [-1, 0, 1]) array([[-0., 0.], [ 6., 24.]]) >>> r = np.arange(-2, 2).reshape(2,2) # multidimensional roots >>> r # each column of r defines one polynomial array([[-2, -1], [ 0, 1]]) >>> b = [-2, 1] >>> polyvalfromroots(b, r, tensor=True) array([[-0., 3.], [ 3., 0.]]) >>> polyvalfromroots(b, r, tensor=False) array([-0., 0.]) ``` © 2005–2022 NumPy Developers Licensed under the 3-clause BSD License.
<https://numpy.org/doc/1.23/reference/generated/numpy.polynomial.polynomial.polyvalfromroots.htmlnumpy.polynomial.polynomial.polyvander2d ======================================== polynomial.polynomial.polyvander2d(*x*, *y*, *deg*)[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/polynomial/polynomial.py#L1112-L1157) Pseudo-Vandermonde matrix of given degrees. Returns the pseudo-Vandermonde matrix of degrees `deg` and sample points `(x, y)`. The pseudo-Vandermonde matrix is defined by \[V[..., (deg[1] + 1)*i + j] = x^i * y^j,\] where `0 <= i <= deg[0]` and `0 <= j <= deg[1]`. The leading indices of `V` index the points `(x, y)` and the last index encodes the powers of `x` and `y`. If `V = polyvander2d(x, y, [xdeg, ydeg])`, then the columns of `V` correspond to the elements of a 2-D coefficient array `c` of shape (xdeg + 1, ydeg + 1) in the order \[c_{00}, c_{01}, c_{02} ... , c_{10}, c_{11}, c_{12} ...\] and `np.dot(V, c.flat)` and `polyval2d(x, y, c)` will be the same up to roundoff. This equivalence is useful both for least squares fitting and for the evaluation of a large number of 2-D polynomials of the same degrees and sample points. Parameters **x, y**array_like Arrays of point coordinates, all of the same shape. The dtypes will be converted to either float64 or complex128 depending on whether any of the elements are complex. Scalars are converted to 1-D arrays. **deg**list of ints List of maximum degrees of the form [x_deg, y_deg]. Returns **vander2d**ndarray The shape of the returned matrix is `x.shape + (order,)`, where \(order = (deg[0]+1)*(deg[1]+1)\). The dtype will be the same as the converted `x` and `y`.
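The `np.dot(V, c.flat)` equivalence described above can be checked with a short sketch (shapes and values invented for illustration):

```python
import numpy as np
from numpy.polynomial import polynomial as P

x = np.array([0.0, 0.5, 1.0, 2.0])
y = np.array([1.0, -1.0, 0.5, 3.0])
xdeg, ydeg = 2, 3

# One row per point; (xdeg + 1) * (ydeg + 1) == 12 columns of x**i * y**j.
V = P.polyvander2d(x, y, [xdeg, ydeg])
print(V.shape)  # (4, 12)

# Any coefficient array of the matching shape works; arange is arbitrary.
c = np.arange(12, dtype=float).reshape(xdeg + 1, ydeg + 1)

# The columns of V line up with c.flat, so the matrix product
# reproduces polyval2d up to roundoff.
print(np.allclose(V @ c.ravel(), P.polyval2d(x, y, c)))  # True
```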
See also [`polyvander`](numpy.polynomial.polynomial.polyvander#numpy.polynomial.polynomial.polyvander "numpy.polynomial.polynomial.polyvander"), [`polyvander3d`](numpy.polynomial.polynomial.polyvander3d#numpy.polynomial.polynomial.polyvander3d "numpy.polynomial.polynomial.polyvander3d"), [`polyval2d`](numpy.polynomial.polynomial.polyval2d#numpy.polynomial.polynomial.polyval2d "numpy.polynomial.polynomial.polyval2d"), [`polyval3d`](numpy.polynomial.polynomial.polyval3d#numpy.polynomial.polynomial.polyval3d "numpy.polynomial.polynomial.polyval3d") © 2005–2022 NumPy Developers Licensed under the 3-clause BSD License. <https://numpy.org/doc/1.23/reference/generated/numpy.polynomial.polynomial.polyvander2d.htmlnumpy.polynomial.polynomial.polyvander3d ======================================== polynomial.polynomial.polyvander3d(*x*, *y*, *z*, *deg*)[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/polynomial/polynomial.py#L1160-L1211) Pseudo-Vandermonde matrix of given degrees. Returns the pseudo-Vandermonde matrix of degrees `deg` and sample points `(x, y, z)`. If `l, m, n` are the given degrees in `x, y, z`, then the pseudo-Vandermonde matrix is defined by \[V[..., (m+1)(n+1)i + (n+1)j + k] = x^i * y^j * z^k,\] where `0 <= i <= l`, `0 <= j <= m`, and `0 <= k <= n`. The leading indices of `V` index the points `(x, y, z)` and the last index encodes the powers of `x`, `y`, and `z`. If `V = polyvander3d(x, y, z, [xdeg, ydeg, zdeg])`, then the columns of `V` correspond to the elements of a 3-D coefficient array `c` of shape (xdeg + 1, ydeg + 1, zdeg + 1) in the order \[c_{000}, c_{001}, c_{002},... , c_{010}, c_{011}, c_{012},...\] and `np.dot(V, c.flat)` and `polyval3d(x, y, z, c)` will be the same up to roundoff. This equivalence is useful both for least squares fitting and for the evaluation of a large number of 3-D polynomials of the same degrees and sample points. Parameters **x, y, z**array_like Arrays of point coordinates, all of the same shape.
The dtypes will be converted to either float64 or complex128 depending on whether any of the elements are complex. Scalars are converted to 1-D arrays. **deg**list of ints List of maximum degrees of the form [x_deg, y_deg, z_deg]. Returns **vander3d**ndarray The shape of the returned matrix is `x.shape + (order,)`, where \(order = (deg[0]+1)*(deg[1]+1)*(deg[2]+1)\). The dtype will be the same as the converted `x`, `y`, and `z`. See also [`polyvander`](numpy.polynomial.polynomial.polyvander#numpy.polynomial.polynomial.polyvander "numpy.polynomial.polynomial.polyvander"), [`polyvander2d`](numpy.polynomial.polynomial.polyvander2d#numpy.polynomial.polynomial.polyvander2d "numpy.polynomial.polynomial.polyvander2d"), [`polyval2d`](numpy.polynomial.polynomial.polyval2d#numpy.polynomial.polynomial.polyval2d "numpy.polynomial.polynomial.polyval2d"), [`polyval3d`](numpy.polynomial.polynomial.polyval3d#numpy.polynomial.polynomial.polyval3d "numpy.polynomial.polynomial.polyval3d") #### Notes New in version 1.7.0. © 2005–2022 NumPy Developers Licensed under the 3-clause BSD License. <https://numpy.org/doc/1.23/reference/generated/numpy.polynomial.polynomial.polyvander3d.htmlnumpy.polynomial.polynomial.polycompanion ========================================= polynomial.polynomial.polycompanion(*c*)[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/polynomial/polynomial.py#L1365-L1402) Return the companion matrix of c. The companion matrix for power series cannot be made symmetric by scaling the basis, so this function differs from those for the orthogonal polynomials. Parameters **c**array_like 1-D array of polynomial coefficients ordered from low to high degree. Returns **mat**ndarray Companion matrix of dimensions (deg, deg). #### Notes New in version 1.7.0. © 2005–2022 NumPy Developers Licensed under the 3-clause BSD License.
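A minimal sketch (values invented for illustration) of the eigenvalue connection that the Notes for `polyroots` describe:

```python
import numpy as np
from numpy.polynomial import polynomial as P

# p(x) = (x - 1)(x - 2)(x - 3) = -6 + 11x - 6x**2 + x**3,
# coefficients ordered from low to high degree.
c = np.array([-6.0, 11.0, -6.0, 1.0])

mat = P.polycompanion(c)
print(mat.shape)  # (deg, deg) == (3, 3)

# The eigenvalues of the companion matrix are the roots of p.
eigs = np.sort(np.linalg.eigvals(mat).real)
print(np.allclose(eigs, [1.0, 2.0, 3.0]))  # True
```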
<https://numpy.org/doc/1.23/reference/generated/numpy.polynomial.polynomial.polycompanion.htmlnumpy.polynomial.polynomial.polyfit =================================== polynomial.polynomial.polyfit(*x*, *y*, *deg*, *rcond=None*, *full=False*, *w=None*)[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/polynomial/polynomial.py#L1214-L1362) Least-squares fit of a polynomial to data. Return the coefficients of a polynomial of degree `deg` that is the least squares fit to the data values `y` given at points `x`. If `y` is 1-D the returned coefficients will also be 1-D. If `y` is 2-D multiple fits are done, one for each column of `y`, and the resulting coefficients are stored in the corresponding columns of a 2-D return. The fitted polynomial(s) are in the form \[p(x) = c_0 + c_1 * x + ... + c_n * x^n,\] where `n` is `deg`. Parameters **x**array_like, shape (`M`,) x-coordinates of the `M` sample (data) points `(x[i], y[i])`. **y**array_like, shape (`M`,) or (`M`, `K`) y-coordinates of the sample points. Several sets of sample points sharing the same x-coordinates can be (independently) fit with one call to [`polyfit`](numpy.polyfit#numpy.polyfit "numpy.polyfit") by passing in for `y` a 2-D array that contains one data set per column. **deg**int or 1-D array_like Degree(s) of the fitting polynomials. If `deg` is a single integer all terms up to and including the `deg`’th term are included in the fit. For NumPy versions >= 1.11.0 a list of integers specifying the degrees of the terms to include may be used instead. **rcond**float, optional Relative condition number of the fit. Singular values smaller than `rcond`, relative to the largest singular value, will be ignored. The default value is `len(x)*eps`, where `eps` is the relative precision of the platform’s float type, about 2e-16 in most cases. **full**bool, optional Switch determining the nature of the return value. 
When `False` (the default) just the coefficients are returned; when `True`, diagnostic information from the singular value decomposition (used to solve the fit’s matrix equation) is also returned. **w**array_like, shape (`M`,), optional Weights. If not None, the weight `w[i]` applies to the unsquared residual `y[i] - y_hat[i]` at `x[i]`. Ideally the weights are chosen so that the errors of the products `w[i]*y[i]` all have the same variance. When using inverse-variance weighting, use `w[i] = 1/sigma(y[i])`. The default value is None. New in version 1.5.0. Returns **coef**ndarray, shape (`deg` + 1,) or (`deg` + 1, `K`) Polynomial coefficients ordered from low to high. If `y` was 2-D, the coefficients in column `k` of `coef` represent the polynomial fit to the data in `y`’s `k`-th column. **[residuals, rank, singular_values, rcond]**list These values are only returned if `full == True` * residuals – sum of squared residuals of the least squares fit * rank – the numerical rank of the scaled Vandermonde matrix * singular_values – singular values of the scaled Vandermonde matrix * rcond – value of `rcond`. For more details, see [`numpy.linalg.lstsq`](numpy.linalg.lstsq#numpy.linalg.lstsq "numpy.linalg.lstsq"). Raises RankWarning Raised if the matrix in the least-squares fit is rank deficient. The warning is only raised if `full == False`. 
The warnings can be turned off by: ``` >>> import warnings >>> warnings.simplefilter('ignore', np.RankWarning) ``` See also [`numpy.polynomial.chebyshev.chebfit`](numpy.polynomial.chebyshev.chebfit#numpy.polynomial.chebyshev.chebfit "numpy.polynomial.chebyshev.chebfit") [`numpy.polynomial.legendre.legfit`](numpy.polynomial.legendre.legfit#numpy.polynomial.legendre.legfit "numpy.polynomial.legendre.legfit") [`numpy.polynomial.laguerre.lagfit`](numpy.polynomial.laguerre.lagfit#numpy.polynomial.laguerre.lagfit "numpy.polynomial.laguerre.lagfit") [`numpy.polynomial.hermite.hermfit`](numpy.polynomial.hermite.hermfit#numpy.polynomial.hermite.hermfit "numpy.polynomial.hermite.hermfit") [`numpy.polynomial.hermite_e.hermefit`](numpy.polynomial.hermite_e.hermefit#numpy.polynomial.hermite_e.hermefit "numpy.polynomial.hermite_e.hermefit") [`polyval`](numpy.polyval#numpy.polyval "numpy.polyval") Evaluates a polynomial. [`polyvander`](numpy.polynomial.polynomial.polyvander#numpy.polynomial.polynomial.polyvander "numpy.polynomial.polynomial.polyvander") Vandermonde matrix for powers. [`numpy.linalg.lstsq`](numpy.linalg.lstsq#numpy.linalg.lstsq "numpy.linalg.lstsq") Computes a least-squares fit from the matrix. [`scipy.interpolate.UnivariateSpline`](https://docs.scipy.org/doc/scipy/reference/generated/scipy.interpolate.UnivariateSpline.html#scipy.interpolate.UnivariateSpline "(in SciPy v1.8.1)") Computes spline fits. #### Notes The solution is the coefficients of the polynomial `p` that minimizes the sum of the weighted squared errors \[E = \sum_j w_j^2 * |y_j - p(x_j)|^2,\] where the \(w_j\) are the weights. This problem is solved by setting up the (typically) over-determined matrix equation: \[V(x) * c = w * y,\] where `V` is the weighted pseudo Vandermonde matrix of `x`, `c` are the coefficients to be solved for, `w` are the weights, and `y` are the observed values. This equation is then solved using the singular value decomposition of `V`. 
If some of the singular values of `V` are so small that they are neglected (and `full` == `False`), a [`RankWarning`](numpy.rankwarning#numpy.RankWarning "numpy.RankWarning") will be raised. This means that the coefficient values may be poorly determined. Fitting to a lower order polynomial will usually get rid of the warning (but may not be what you want, of course; if you have independent reason(s) for choosing the degree which isn’t working, you may have to: a) reconsider those reasons, and/or b) reconsider the quality of your data). The `rcond` parameter can also be set to a value smaller than its default, but the resulting fit may be spurious and have large contributions from roundoff error. Polynomial fits using double precision tend to “fail” at about (polynomial) degree 20. Fits using Chebyshev or Legendre series are generally better conditioned, but much can still depend on the distribution of the sample points and the smoothness of the data. If the quality of the fit is inadequate, splines may be a good alternative. #### Examples ``` >>> import numpy as np >>> np.random.seed(123) >>> from numpy.polynomial import polynomial as P >>> x = np.linspace(-1,1,51) # x "data": [-1, -0.96, ..., 0.96, 1] >>> y = x**3 - x + np.random.randn(len(x)) # x^3 - x + N(0,1) "noise" >>> c, stats = P.polyfit(x,y,3,full=True) >>> np.random.seed(123) >>> c # c[0], c[2] should be approx. 0, c[1] approx. -1, c[3] approx.
1 array([ 0.01909725, -1.30598256, -0.00577963, 1.02644286]) # may vary >>> stats # note the large SSR, explaining the rather poor results [array([ 38.06116253]), 4, array([ 1.38446749, 1.32119158, 0.50443316, # may vary 0.28853036]), 1.1324274851176597e-014] ``` Same thing without the added noise ``` >>> y = x**3 - x >>> c, stats = P.polyfit(x,y,3,full=True) >>> c # c[0], c[2] should be "very close to 0", c[1] ~= -1, c[3] ~= 1 array([-6.36925336e-18, -1.00000000e+00, -4.08053781e-16, 1.00000000e+00]) >>> stats # note the minuscule SSR [array([ 7.46346754e-31]), 4, array([ 1.38446749, 1.32119158, # may vary 0.50443316, 0.28853036]), 1.1324274851176597e-014] ``` © 2005–2022 NumPy Developers Licensed under the 3-clause BSD License. <https://numpy.org/doc/1.23/reference/generated/numpy.polynomial.polynomial.polyfit.htmlnumpy.polynomial.polynomial.polytrim ==================================== polynomial.polynomial.polytrim(*c*, *tol=0*)[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/polynomial/polyutils.py#L156-L208) Remove “small” “trailing” coefficients from a polynomial. “Small” means “small in absolute value” and is controlled by the parameter `tol`; “trailing” means highest order coefficient(s), e.g., in `[0, 1, 1, 0, 0]` (which represents `0 + x + x**2 + 0*x**3 + 0*x**4`) both the 3-rd and 4-th order coefficients would be “trimmed.” Parameters **c**array_like 1-d array of coefficients, ordered from lowest order to highest. **tol**number, optional Trailing (i.e., highest order) elements with absolute value less than or equal to `tol` (default value is zero) are removed. Returns **trimmed**ndarray 1-d array with trailing zeros removed. If the resulting series would be empty, a series containing a single zero is returned. 
Raises ValueError If `tol` < 0 See also `trimseq` #### Examples ``` >>> from numpy.polynomial import polyutils as pu >>> pu.trimcoef((0,0,3,0,5,0,0)) array([0., 0., 3., 0., 5.]) >>> pu.trimcoef((0,0,1e-3,0,1e-5,0,0),1e-3) # item == tol is trimmed array([0.]) >>> i = complex(0,1) # works for complex >>> pu.trimcoef((3e-4,1e-3*(1-i),5e-4,2e-5*(1+i)), 1e-3) array([0.0003+0.j , 0.001 -0.001j]) ``` © 2005–2022 NumPy Developers Licensed under the 3-clause BSD License. <https://numpy.org/doc/1.23/reference/generated/numpy.polynomial.polynomial.polytrim.htmlnumpy.polynomial.polynomial.polyline ==================================== polynomial.polynomial.polyline(*off*, *scl*)[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/polynomial/polynomial.py#L113-L148) Returns an array representing a linear polynomial. Parameters **off, scl**scalars The “y-intercept” and “slope” of the line, respectively. Returns **y**ndarray This module’s representation of the linear polynomial `off + scl*x`. See also [`numpy.polynomial.chebyshev.chebline`](numpy.polynomial.chebyshev.chebline#numpy.polynomial.chebyshev.chebline "numpy.polynomial.chebyshev.chebline") [`numpy.polynomial.legendre.legline`](numpy.polynomial.legendre.legline#numpy.polynomial.legendre.legline "numpy.polynomial.legendre.legline") [`numpy.polynomial.laguerre.lagline`](numpy.polynomial.laguerre.lagline#numpy.polynomial.laguerre.lagline "numpy.polynomial.laguerre.lagline") [`numpy.polynomial.hermite.hermline`](numpy.polynomial.hermite.hermline#numpy.polynomial.hermite.hermline "numpy.polynomial.hermite.hermline") [`numpy.polynomial.hermite_e.hermeline`](numpy.polynomial.hermite_e.hermeline#numpy.polynomial.hermite_e.hermeline "numpy.polynomial.hermite_e.hermeline") #### Examples ``` >>> from numpy.polynomial import polynomial as P >>> P.polyline(1,-1) array([ 1, -1]) >>> P.polyval(1, P.polyline(1,-1)) # should be 0 0.0 ``` © 2005–2022 NumPy Developers Licensed under the 3-clause BSD License. 
<https://numpy.org/doc/1.23/reference/generated/numpy.polynomial.polynomial.polyline.htmlnumpy.polynomial.chebyshev.Chebyshev ==================================== *class*numpy.polynomial.chebyshev.Chebyshev(*coef*, *domain=None*, *window=None*)[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/polynomial/chebyshev.py#L1994-L2075) A Chebyshev series class. The Chebyshev class provides the standard Python numerical methods ‘+’, ‘-’, ‘*’, ‘//’, ‘%’, ‘divmod’, ‘**’, and ‘()’ as well as the methods listed below. Parameters **coef**array_like Chebyshev coefficients in order of increasing degree, i.e., `(1, 2, 3)` gives `1*T_0(x) + 2*T_1(x) + 3*T_2(x)`. **domain**(2,) array_like, optional Domain to use. The interval `[domain[0], domain[1]]` is mapped to the interval `[window[0], window[1]]` by shifting and scaling. The default value is [-1, 1]. **window**(2,) array_like, optional Window, see [`domain`](numpy.polynomial.chebyshev.chebyshev.domain#numpy.polynomial.chebyshev.Chebyshev.domain "numpy.polynomial.chebyshev.Chebyshev.domain") for its use. The default value is [-1, 1]. New in version 1.6.0. #### Methods | | | | --- | --- | | [`__call__`](numpy.polynomial.chebyshev.chebyshev.__call__#numpy.polynomial.chebyshev.Chebyshev.__call__ "numpy.polynomial.chebyshev.Chebyshev.__call__")(arg) | Call self as a function. | | [`basis`](numpy.polynomial.chebyshev.chebyshev.basis#numpy.polynomial.chebyshev.Chebyshev.basis "numpy.polynomial.chebyshev.Chebyshev.basis")(deg[, domain, window]) | Series basis polynomial of degree `deg`. | | [`cast`](numpy.polynomial.chebyshev.chebyshev.cast#numpy.polynomial.chebyshev.Chebyshev.cast "numpy.polynomial.chebyshev.Chebyshev.cast")(series[, domain, window]) | Convert series to series of this class. 
| | [`convert`](numpy.polynomial.chebyshev.chebyshev.convert#numpy.polynomial.chebyshev.Chebyshev.convert "numpy.polynomial.chebyshev.Chebyshev.convert")([domain, kind, window]) | Convert series to a different kind and/or domain and/or window. | | [`copy`](numpy.polynomial.chebyshev.chebyshev.copy#numpy.polynomial.chebyshev.Chebyshev.copy "numpy.polynomial.chebyshev.Chebyshev.copy")() | Return a copy. | | [`cutdeg`](numpy.polynomial.chebyshev.chebyshev.cutdeg#numpy.polynomial.chebyshev.Chebyshev.cutdeg "numpy.polynomial.chebyshev.Chebyshev.cutdeg")(deg) | Truncate series to the given degree. | | [`degree`](numpy.polynomial.chebyshev.chebyshev.degree#numpy.polynomial.chebyshev.Chebyshev.degree "numpy.polynomial.chebyshev.Chebyshev.degree")() | The degree of the series. | | [`deriv`](numpy.polynomial.chebyshev.chebyshev.deriv#numpy.polynomial.chebyshev.Chebyshev.deriv "numpy.polynomial.chebyshev.Chebyshev.deriv")([m]) | Differentiate. | | [`fit`](numpy.polynomial.chebyshev.chebyshev.fit#numpy.polynomial.chebyshev.Chebyshev.fit "numpy.polynomial.chebyshev.Chebyshev.fit")(x, y, deg[, domain, rcond, full, w, window]) | Least squares fit to data. | | [`fromroots`](numpy.polynomial.chebyshev.chebyshev.fromroots#numpy.polynomial.chebyshev.Chebyshev.fromroots "numpy.polynomial.chebyshev.Chebyshev.fromroots")(roots[, domain, window]) | Return series instance that has the specified roots. | | [`has_samecoef`](numpy.polynomial.chebyshev.chebyshev.has_samecoef#numpy.polynomial.chebyshev.Chebyshev.has_samecoef "numpy.polynomial.chebyshev.Chebyshev.has_samecoef")(other) | Check if coefficients match. | | [`has_samedomain`](numpy.polynomial.chebyshev.chebyshev.has_samedomain#numpy.polynomial.chebyshev.Chebyshev.has_samedomain "numpy.polynomial.chebyshev.Chebyshev.has_samedomain")(other) | Check if domains match. 
| | [`has_sametype`](numpy.polynomial.chebyshev.chebyshev.has_sametype#numpy.polynomial.chebyshev.Chebyshev.has_sametype "numpy.polynomial.chebyshev.Chebyshev.has_sametype")(other) | Check if types match. | | [`has_samewindow`](numpy.polynomial.chebyshev.chebyshev.has_samewindow#numpy.polynomial.chebyshev.Chebyshev.has_samewindow "numpy.polynomial.chebyshev.Chebyshev.has_samewindow")(other) | Check if windows match. | | [`identity`](numpy.polynomial.chebyshev.chebyshev.identity#numpy.polynomial.chebyshev.Chebyshev.identity "numpy.polynomial.chebyshev.Chebyshev.identity")([domain, window]) | Identity function. | | [`integ`](numpy.polynomial.chebyshev.chebyshev.integ#numpy.polynomial.chebyshev.Chebyshev.integ "numpy.polynomial.chebyshev.Chebyshev.integ")([m, k, lbnd]) | Integrate. | | [`interpolate`](numpy.polynomial.chebyshev.chebyshev.interpolate#numpy.polynomial.chebyshev.Chebyshev.interpolate "numpy.polynomial.chebyshev.Chebyshev.interpolate")(func, deg[, domain, args]) | Interpolate a function at the Chebyshev points of the first kind. | | [`linspace`](numpy.polynomial.chebyshev.chebyshev.linspace#numpy.polynomial.chebyshev.Chebyshev.linspace "numpy.polynomial.chebyshev.Chebyshev.linspace")([n, domain]) | Return x, y values at equally spaced points in domain. | | [`mapparms`](numpy.polynomial.chebyshev.chebyshev.mapparms#numpy.polynomial.chebyshev.Chebyshev.mapparms "numpy.polynomial.chebyshev.Chebyshev.mapparms")() | Return the mapping parameters. | | [`roots`](numpy.polynomial.chebyshev.chebyshev.roots#numpy.polynomial.chebyshev.Chebyshev.roots "numpy.polynomial.chebyshev.Chebyshev.roots")() | Return the roots of the series polynomial. 
| | [`trim`](numpy.polynomial.chebyshev.chebyshev.trim#numpy.polynomial.chebyshev.Chebyshev.trim "numpy.polynomial.chebyshev.Chebyshev.trim")([tol]) | Remove trailing coefficients | | [`truncate`](numpy.polynomial.chebyshev.chebyshev.truncate#numpy.polynomial.chebyshev.Chebyshev.truncate "numpy.polynomial.chebyshev.Chebyshev.truncate")(size) | Truncate series to length `size`. | © 2005–2022 NumPy Developers Licensed under the 3-clause BSD License. <https://numpy.org/doc/1.23/reference/generated/numpy.polynomial.chebyshev.Chebyshev.htmlnumpy.polynomial.chebyshev.chebdomain ===================================== polynomial.chebyshev.chebdomain*=array([-1, 1])* An array object represents a multidimensional, homogeneous array of fixed-size items. An associated data-type object describes the format of each element in the array (its byte-order, how many bytes it occupies in memory, whether it is an integer, a floating point number, or something else, etc.) Arrays should be constructed using [`array`](numpy.array#numpy.array "numpy.array"), [`zeros`](numpy.zeros#numpy.zeros "numpy.zeros") or [`empty`](numpy.empty#numpy.empty "numpy.empty") (refer to the See Also section below). The parameters given here refer to a low-level method (`ndarray(
...)`) for instantiating an array. For more information, refer to the [`numpy`](../index#module-numpy "numpy") module and examine the methods and attributes of an array. Parameters **(for the __new__ method; see Notes below)** **shape**tuple of ints Shape of created array. **dtype**data-type, optional Any object that can be interpreted as a numpy data type. **buffer**object exposing buffer interface, optional Used to fill the array with data. **offset**int, optional Offset of array data in buffer. **strides**tuple of ints, optional Strides of data in memory. **order**{‘C’, ‘F’}, optional Row-major (C-style) or column-major (Fortran-style) order. See also [`array`](numpy.array#numpy.array "numpy.array") Construct an array. [`zeros`](numpy.zeros#numpy.zeros "numpy.zeros") Create an array, each element of which is zero. [`empty`](numpy.empty#numpy.empty "numpy.empty") Create an array, but leave its allocated memory unchanged (i.e., it contains “garbage”). [`dtype`](numpy.dtype#numpy.dtype "numpy.dtype") Create a data-type. [`numpy.typing.NDArray`](../typing#numpy.typing.NDArray "numpy.typing.NDArray") An ndarray alias [generic](https://docs.python.org/3/glossary.html#term-generic-type "(in Python v3.10)") w.r.t. its [`dtype.type`](numpy.dtype.type#numpy.dtype.type "numpy.dtype.type"). #### Notes There are two modes of creating an array using `__new__`: 1. If `buffer` is None, then only [`shape`](numpy.shape#numpy.shape "numpy.shape"), [`dtype`](numpy.dtype#numpy.dtype "numpy.dtype"), and `order` are used. 2. If `buffer` is an object exposing the buffer interface, then all keywords are interpreted. No `__init__` method is needed because the array is fully initialized after the `__new__` method. #### Examples These examples illustrate the low-level [`ndarray`](numpy.ndarray#numpy.ndarray "numpy.ndarray") constructor. Refer to the `See Also` section above for easier ways of constructing an ndarray. 
First mode, `buffer` is None: ``` >>> np.ndarray(shape=(2,2), dtype=float, order='F') array([[0.0e+000, 0.0e+000], # random [ nan, 2.5e-323]]) ``` Second mode: ``` >>> np.ndarray((2,), buffer=np.array([1,2,3]), ... offset=np.int_().itemsize, ... dtype=int) # offset = 1*itemsize, i.e. skip first element array([2, 3]) ``` Attributes **T**ndarray Transpose of the array. **data**buffer The array’s elements, in memory. **dtype**dtype object Describes the format of the elements in the array. **flags**dict Dictionary containing information related to memory use, e.g., ‘C_CONTIGUOUS’, ‘OWNDATA’, ‘WRITEABLE’, etc. **flat**numpy.flatiter object Flattened version of the array as an iterator. The iterator allows assignments, e.g., `x.flat = 3` (See [`ndarray.flat`](numpy.ndarray.flat#numpy.ndarray.flat "numpy.ndarray.flat") for assignment examples; TODO). **imag**ndarray Imaginary part of the array. **real**ndarray Real part of the array. **size**int Number of elements in the array. **itemsize**int The memory use of each array element in bytes. **nbytes**int The total number of bytes required to store the array data, i.e., `itemsize * size`. **ndim**int The array’s number of dimensions. **shape**tuple of ints Shape of the array. **strides**tuple of ints The step-size required to move from one element to the next in memory. For example, a contiguous `(3, 4)` array of type `int16` in C-order has strides `(8, 2)`. This implies that to move from element to element in memory requires jumps of 2 bytes. To move from row-to-row, one needs to jump 8 bytes at a time (`2 * 4`). **ctypes**ctypes object Class containing properties of the array needed for interaction with ctypes. **base**ndarray If the array is a view into another array, that array is its `base` (unless that array is also a view). The `base` array is where the array data is actually stored. © 2005–2022 NumPy Developers Licensed under the 3-clause BSD License. 
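The module-level constants documented here and just below (`chebdomain`, `chebzero`, `chebone`, `chebx`) are ordinary coefficient arrays; a short sketch of the identities they encode:

```python
import numpy as np
from numpy.polynomial import chebyshev as C

# The basis constants are plain ndarrays of Chebyshev coefficients:
#   chebzero -> [0]     the zero series
#   chebone  -> [1]     the constant 1 (T_0)
#   chebx    -> [0, 1]  the identity x (T_1)
assert list(C.chebdomain) == [-1, 1]
assert list(C.chebzero) == [0]
assert list(C.chebone) == [1]
assert list(C.chebx) == [0, 1]

# Evaluating chebx returns x itself, since T_1(x) = x
xs = np.linspace(-1, 1, 5)
assert np.allclose(C.chebval(xs, C.chebx), xs)
```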
<https://numpy.org/doc/1.23/reference/generated/numpy.polynomial.chebyshev.chebdomain.html>

numpy.polynomial.chebyshev.chebzero
===================================

polynomial.chebyshev.chebzero*=array([0])*

An ndarray constant holding the Chebyshev coefficients of the zero series. The full ndarray description attached to this constant is identical to the one given under `chebdomain` above.

© 2005–2022 NumPy Developers Licensed under the 3-clause BSD License.

<https://numpy.org/doc/1.23/reference/generated/numpy.polynomial.chebyshev.chebzero.html>

numpy.polynomial.chebyshev.chebone
==================================

polynomial.chebyshev.chebone*=array([1])*

An ndarray constant holding the Chebyshev coefficients of the constant series 1 (`T_0`). The ndarray description is identical to the one under `chebdomain`.

© 2005–2022 NumPy Developers Licensed under the 3-clause BSD License.

<https://numpy.org/doc/1.23/reference/generated/numpy.polynomial.chebyshev.chebone.html>

numpy.polynomial.chebyshev.chebx
================================

polynomial.chebyshev.chebx*=array([0, 1])*

An ndarray constant holding the Chebyshev coefficients of the identity `x` (`T_1`). The ndarray description is identical to the one under `chebdomain`.

© 2005–2022 NumPy Developers Licensed under the 3-clause BSD License.
<https://numpy.org/doc/1.23/reference/generated/numpy.polynomial.chebyshev.chebx.html>

numpy.polynomial.chebyshev.chebadd
==================================

polynomial.chebyshev.chebadd(*c1*, *c2*)[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/polynomial/chebyshev.py#L569-L608) Add one Chebyshev series to another. Returns the sum of two Chebyshev series `c1` + `c2`. The arguments are sequences of coefficients ordered from lowest order term to highest, i.e., [1,2,3] represents the series `T_0 + 2*T_1 + 3*T_2`. Parameters **c1, c2**array_like 1-D arrays of Chebyshev series coefficients ordered from low to high. Returns **out**ndarray Array representing the Chebyshev series of their sum. See also [`chebsub`](numpy.polynomial.chebyshev.chebsub#numpy.polynomial.chebyshev.chebsub "numpy.polynomial.chebyshev.chebsub"), [`chebmulx`](numpy.polynomial.chebyshev.chebmulx#numpy.polynomial.chebyshev.chebmulx "numpy.polynomial.chebyshev.chebmulx"), [`chebmul`](numpy.polynomial.chebyshev.chebmul#numpy.polynomial.chebyshev.chebmul "numpy.polynomial.chebyshev.chebmul"), [`chebdiv`](numpy.polynomial.chebyshev.chebdiv#numpy.polynomial.chebyshev.chebdiv "numpy.polynomial.chebyshev.chebdiv"), [`chebpow`](numpy.polynomial.chebyshev.chebpow#numpy.polynomial.chebyshev.chebpow "numpy.polynomial.chebyshev.chebpow") #### Notes Unlike multiplication, division, etc., the sum of two Chebyshev series is a Chebyshev series (without having to “reproject” the result onto the basis set) so addition, just like that of “standard” polynomials, is simply “component-wise.” #### Examples ``` >>> from numpy.polynomial import chebyshev as C >>> c1 = (1,2,3) >>> c2 = (3,2,1) >>> C.chebadd(c1,c2) array([4., 4., 4.]) ``` © 2005–2022 NumPy Developers Licensed under the 3-clause BSD License. 
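Because addition is component-wise, the result of `chebadd` can be cross-checked by evaluating both sides with `chebval`; a quick sketch:

```python
import numpy as np
from numpy.polynomial import chebyshev as C

c1, c2 = (1, 2, 3), (3, 2, 1)
s = C.chebadd(c1, c2)             # component-wise sum: [4., 4., 4.]
assert np.allclose(s, [4.0, 4.0, 4.0])

# No "reprojection" is needed: the sum series evaluates to the
# sum of the two series at every point.
xs = np.linspace(-1, 1, 7)
assert np.allclose(C.chebval(xs, s),
                   C.chebval(xs, c1) + C.chebval(xs, c2))
```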
<https://numpy.org/doc/1.23/reference/generated/numpy.polynomial.chebyshev.chebadd.htmlnumpy.polynomial.chebyshev.chebsub ================================== polynomial.chebyshev.chebsub(*c1*, *c2*)[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/polynomial/chebyshev.py#L611-L652) Subtract one Chebyshev series from another. Returns the difference of two Chebyshev series `c1` - `c2`. The sequences of coefficients are from lowest order term to highest, i.e., [1,2,3] represents the series `T_0 + 2*T_1 + 3*T_2`. Parameters **c1, c2**array_like 1-D arrays of Chebyshev series coefficients ordered from low to high. Returns **out**ndarray Of Chebyshev series coefficients representing their difference. See also [`chebadd`](numpy.polynomial.chebyshev.chebadd#numpy.polynomial.chebyshev.chebadd "numpy.polynomial.chebyshev.chebadd"), [`chebmulx`](numpy.polynomial.chebyshev.chebmulx#numpy.polynomial.chebyshev.chebmulx "numpy.polynomial.chebyshev.chebmulx"), [`chebmul`](numpy.polynomial.chebyshev.chebmul#numpy.polynomial.chebyshev.chebmul "numpy.polynomial.chebyshev.chebmul"), [`chebdiv`](numpy.polynomial.chebyshev.chebdiv#numpy.polynomial.chebyshev.chebdiv "numpy.polynomial.chebyshev.chebdiv"), [`chebpow`](numpy.polynomial.chebyshev.chebpow#numpy.polynomial.chebyshev.chebpow "numpy.polynomial.chebyshev.chebpow") #### Notes Unlike multiplication, division, etc., the difference of two Chebyshev series is a Chebyshev series (without having to “reproject” the result onto the basis set) so subtraction, just like that of “standard” polynomials, is simply “component-wise.” #### Examples ``` >>> from numpy.polynomial import chebyshev as C >>> c1 = (1,2,3) >>> c2 = (3,2,1) >>> C.chebsub(c1,c2) array([-2., 0., 2.]) >>> C.chebsub(c2,c1) # -C.chebsub(c1,c2) array([ 2., 0., -2.]) ``` © 2005–2022 NumPy Developers Licensed under the 3-clause BSD License. 
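Subtraction is likewise component-wise, which contrasts with `chebmul` (documented below), where the product must be reprojected onto the Chebyshev basis even though the evaluated values still agree. A sketch of both behaviours:

```python
import numpy as np
from numpy.polynomial import chebyshev as C

c1, c2 = (1, 2, 3), (3, 2, 1)

# Component-wise difference, and its antisymmetry
assert np.allclose(C.chebsub(c1, c2), [-2.0, 0.0, 2.0])
assert np.allclose(C.chebsub(c2, c1), -C.chebsub(c1, c2))

# Multiplication needs reprojection, but the pointwise values agree
xs = np.linspace(-1, 1, 9)
prod = C.chebmul(c1, c2)
assert np.allclose(C.chebval(xs, prod),
                   C.chebval(xs, c1) * C.chebval(xs, c2))
```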
<https://numpy.org/doc/1.23/reference/generated/numpy.polynomial.chebyshev.chebsub.htmlnumpy.polynomial.chebyshev.chebmulx =================================== polynomial.chebyshev.chebmulx(*c*)[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/polynomial/chebyshev.py#L655-L698) Multiply a Chebyshev series by x. Multiply the polynomial `c` by x, where x is the independent variable. Parameters **c**array_like 1-D array of Chebyshev series coefficients ordered from low to high. Returns **out**ndarray Array representing the result of the multiplication. #### Notes New in version 1.5.0. #### Examples ``` >>> from numpy.polynomial import chebyshev as C >>> C.chebmulx([1,2,3]) array([1. , 2.5, 1. , 1.5]) ``` © 2005–2022 NumPy Developers Licensed under the 3-clause BSD License. <https://numpy.org/doc/1.23/reference/generated/numpy.polynomial.chebyshev.chebmulx.htmlnumpy.polynomial.chebyshev.chebmul ================================== polynomial.chebyshev.chebmul(*c1*, *c2*)[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/polynomial/chebyshev.py#L701-L747) Multiply one Chebyshev series by another. Returns the product of two Chebyshev series `c1` * `c2`. The arguments are sequences of coefficients, from lowest order “term” to highest, e.g., [1,2,3] represents the series `T_0 + 2*T_1 + 3*T_2`. Parameters **c1, c2**array_like 1-D arrays of Chebyshev series coefficients ordered from low to high. Returns **out**ndarray Of Chebyshev series coefficients representing their product. 
See also [`chebadd`](numpy.polynomial.chebyshev.chebadd#numpy.polynomial.chebyshev.chebadd "numpy.polynomial.chebyshev.chebadd"), [`chebsub`](numpy.polynomial.chebyshev.chebsub#numpy.polynomial.chebyshev.chebsub "numpy.polynomial.chebyshev.chebsub"), [`chebmulx`](numpy.polynomial.chebyshev.chebmulx#numpy.polynomial.chebyshev.chebmulx "numpy.polynomial.chebyshev.chebmulx"), [`chebdiv`](numpy.polynomial.chebyshev.chebdiv#numpy.polynomial.chebyshev.chebdiv "numpy.polynomial.chebyshev.chebdiv"), [`chebpow`](numpy.polynomial.chebyshev.chebpow#numpy.polynomial.chebyshev.chebpow "numpy.polynomial.chebyshev.chebpow") #### Notes In general, the (polynomial) product of two C-series results in terms that are not in the Chebyshev polynomial basis set. Thus, to express the product as a C-series, it is typically necessary to “reproject” the product onto said basis set, which typically produces “unintuitive” (but correct) results; see Examples section below. #### Examples ``` >>> from numpy.polynomial import chebyshev as C >>> c1 = (1,2,3) >>> c2 = (3,2,1) >>> C.chebmul(c1,c2) # multiplication requires "reprojection" array([ 6.5, 12. , 12. , 4. , 1.5]) ``` © 2005–2022 NumPy Developers Licensed under the 3-clause BSD License. <https://numpy.org/doc/1.23/reference/generated/numpy.polynomial.chebyshev.chebmul.html>

numpy.polynomial.chebyshev.chebdiv
==================================

polynomial.chebyshev.chebdiv(*c1*, *c2*)[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/polynomial/chebyshev.py#L750-L814) Divide one Chebyshev series by another. Returns the quotient-with-remainder of two Chebyshev series `c1` / `c2`. The arguments are sequences of coefficients from lowest order “term” to highest, e.g., [1,2,3] represents the series `T_0 + 2*T_1 + 3*T_2`. Parameters **c1, c2**array_like 1-D arrays of Chebyshev series coefficients ordered from low to high. Returns **[quo, rem]**ndarrays Of Chebyshev series coefficients representing the quotient and remainder. 
See also [`chebadd`](numpy.polynomial.chebyshev.chebadd#numpy.polynomial.chebyshev.chebadd "numpy.polynomial.chebyshev.chebadd"), [`chebsub`](numpy.polynomial.chebyshev.chebsub#numpy.polynomial.chebyshev.chebsub "numpy.polynomial.chebyshev.chebsub"), [`chebmulx`](numpy.polynomial.chebyshev.chebmulx#numpy.polynomial.chebyshev.chebmulx "numpy.polynomial.chebyshev.chebmulx"), [`chebmul`](numpy.polynomial.chebyshev.chebmul#numpy.polynomial.chebyshev.chebmul "numpy.polynomial.chebyshev.chebmul"), [`chebpow`](numpy.polynomial.chebyshev.chebpow#numpy.polynomial.chebyshev.chebpow "numpy.polynomial.chebyshev.chebpow") #### Notes In general, the (polynomial) division of one C-series by another results in quotient and remainder terms that are not in the Chebyshev polynomial basis set. Thus, to express these results as C-series, it is typically necessary to “reproject” the results onto said basis set, which typically produces “unintuitive” (but correct) results; see Examples section below. #### Examples ``` >>> from numpy.polynomial import chebyshev as C >>> c1 = (1,2,3) >>> c2 = (3,2,1) >>> C.chebdiv(c1,c2) # quotient "intuitive," remainder not (array([3.]), array([-8., -4.])) >>> c2 = (0,1,2,3) >>> C.chebdiv(c2,c1) # neither "intuitive" (array([0., 2.]), array([-2., -4.])) ``` © 2005–2022 NumPy Developers Licensed under the 3-clause BSD License. <https://numpy.org/doc/1.23/reference/generated/numpy.polynomial.chebyshev.chebdiv.htmlnumpy.polynomial.chebyshev.chebpow ================================== polynomial.chebyshev.chebpow(*c*, *pow*, *maxpower=16*)[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/polynomial/chebyshev.py#L817-L872) Raise a Chebyshev series to a power. Returns the Chebyshev series `c` raised to the power `pow`. The argument `c` is a sequence of coefficients ordered from low to high. i.e., [1,2,3] is the series `T_0 + 2*T_1 + 3*T_2.` Parameters **c**array_like 1-D array of Chebyshev series coefficients ordered from low to high. 
**pow**integer

Power to which the series will be raised.

**maxpower**integer, optional

Maximum power allowed. This is mainly to limit growth of the series to unmanageable size. Default is 16.

Returns

**coef**ndarray

Chebyshev series of `c` raised to the power `pow`.

See also [`chebadd`](numpy.polynomial.chebyshev.chebadd#numpy.polynomial.chebyshev.chebadd "numpy.polynomial.chebyshev.chebadd"), [`chebsub`](numpy.polynomial.chebyshev.chebsub#numpy.polynomial.chebyshev.chebsub "numpy.polynomial.chebyshev.chebsub"), [`chebmulx`](numpy.polynomial.chebyshev.chebmulx#numpy.polynomial.chebyshev.chebmulx "numpy.polynomial.chebyshev.chebmulx"), [`chebmul`](numpy.polynomial.chebyshev.chebmul#numpy.polynomial.chebyshev.chebmul "numpy.polynomial.chebyshev.chebmul"), [`chebdiv`](numpy.polynomial.chebyshev.chebdiv#numpy.polynomial.chebyshev.chebdiv "numpy.polynomial.chebyshev.chebdiv")

#### Examples

```
>>> from numpy.polynomial import chebyshev as C
>>> C.chebpow([1, 2, 3, 4], 2)
array([15.5, 22. , 16. , ..., 12.5, 12. ,  8. ])
```

<https://numpy.org/doc/1.23/reference/generated/numpy.polynomial.chebyshev.chebpow.html>

numpy.polynomial.chebyshev.chebval
==================================

polynomial.chebyshev.chebval(*x*, *c*, *tensor=True*)[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/polynomial/chebyshev.py#L1094-L1175)

Evaluate a Chebyshev series at points x.

If `c` is of length `n + 1`, this function returns the value:

\[p(x) = c_0 * T_0(x) + c_1 * T_1(x) + ... + c_n * T_n(x)\]

The parameter `x` is converted to an array only if it is a tuple or a list, otherwise it is treated as a scalar. In either case, either `x` or its elements must support multiplication and addition both with themselves and with the elements of `c`.

If `c` is a 1-D array, then `p(x)` will have the same shape as `x`. If `c` is multidimensional, then the shape of the result depends on the value of `tensor`.
If `tensor` is true the shape will be c.shape[1:] + x.shape. If `tensor` is false the shape will be c.shape[1:]. Note that scalars have shape (,). Trailing zeros in the coefficients will be used in the evaluation, so they should be avoided if efficiency is a concern. Parameters **x**array_like, compatible object If `x` is a list or tuple, it is converted to an ndarray, otherwise it is left unchanged and treated as a scalar. In either case, `x` or its elements must support addition and multiplication with themselves and with the elements of `c`. **c**array_like Array of coefficients ordered so that the coefficients for terms of degree n are contained in c[n]. If `c` is multidimensional the remaining indices enumerate multiple polynomials. In the two dimensional case the coefficients may be thought of as stored in the columns of `c`. **tensor**boolean, optional If True, the shape of the coefficient array is extended with ones on the right, one for each dimension of `x`. Scalars have dimension 0 for this action. The result is that every column of coefficients in `c` is evaluated for every element of `x`. If False, `x` is broadcast over the columns of `c` for the evaluation. This keyword is useful when `c` is multidimensional. The default value is True. New in version 1.7.0. Returns **values**ndarray, algebra_like The shape of the return value is described above. See also [`chebval2d`](numpy.polynomial.chebyshev.chebval2d#numpy.polynomial.chebyshev.chebval2d "numpy.polynomial.chebyshev.chebval2d"), [`chebgrid2d`](numpy.polynomial.chebyshev.chebgrid2d#numpy.polynomial.chebyshev.chebgrid2d "numpy.polynomial.chebyshev.chebgrid2d"), [`chebval3d`](numpy.polynomial.chebyshev.chebval3d#numpy.polynomial.chebyshev.chebval3d "numpy.polynomial.chebyshev.chebval3d"), [`chebgrid3d`](numpy.polynomial.chebyshev.chebgrid3d#numpy.polynomial.chebyshev.chebgrid3d "numpy.polynomial.chebyshev.chebgrid3d") #### Notes The evaluation uses Clenshaw recursion, aka synthetic division. 
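The shape rules above can be checked concretely. The sketch below evaluates a scalar point against a 1-D coefficient array, then shows how `tensor` changes the result shape for a multidimensional `c` (axis 0 of `c` is the degree axis, the remaining axes enumerate separate series):

```python
import numpy as np
from numpy.polynomial import chebyshev as C

# Scalar evaluation: T_0 + 2*T_1 + 3*T_2 at x = 0.5.
# T_2(0.5) = 2*0.5**2 - 1 = -0.5, so p(0.5) = 1 + 1 - 1.5 = 0.5
assert np.isclose(C.chebval(0.5, [1, 2, 3]), 0.5)

# Three degree-1 series stored in the columns of c
c = np.arange(6.0).reshape(2, 3)
x = np.linspace(-1, 1, 4)

# tensor=True: result shape is c.shape[1:] + x.shape
assert C.chebval(x, c, tensor=True).shape == (3, 4)

# tensor=False: x is broadcast over the columns of c
assert C.chebval(np.zeros(3), c, tensor=False).shape == (3,)
```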
© 2005–2022 NumPy Developers Licensed under the 3-clause BSD License. <https://numpy.org/doc/1.23/reference/generated/numpy.polynomial.chebyshev.chebval.htmlnumpy.polynomial.chebyshev.chebval2d ==================================== polynomial.chebyshev.chebval2d(*x*, *y*, *c*)[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/polynomial/chebyshev.py#L1178-L1224) Evaluate a 2-D Chebyshev series at points (x, y). This function returns the values: \[p(x,y) = \sum_{i,j} c_{i,j} * T_i(x) * T_j(y)\] The parameters `x` and `y` are converted to arrays only if they are tuples or a lists, otherwise they are treated as a scalars and they must have the same shape after conversion. In either case, either `x` and `y` or their elements must support multiplication and addition both with themselves and with the elements of `c`. If `c` is a 1-D array a one is implicitly appended to its shape to make it 2-D. The shape of the result will be c.shape[2:] + x.shape. Parameters **x, y**array_like, compatible objects The two dimensional series is evaluated at the points `(x, y)`, where `x` and `y` must have the same shape. If `x` or `y` is a list or tuple, it is first converted to an ndarray, otherwise it is left unchanged and if it isn’t an ndarray it is treated as a scalar. **c**array_like Array of coefficients ordered so that the coefficient of the term of multi-degree i,j is contained in `c[i,j]`. If `c` has dimension greater than 2 the remaining indices enumerate multiple sets of coefficients. Returns **values**ndarray, compatible object The values of the two dimensional Chebyshev series at points formed from pairs of corresponding values from `x` and `y`. 
See also [`chebval`](numpy.polynomial.chebyshev.chebval#numpy.polynomial.chebyshev.chebval "numpy.polynomial.chebyshev.chebval"), [`chebgrid2d`](numpy.polynomial.chebyshev.chebgrid2d#numpy.polynomial.chebyshev.chebgrid2d "numpy.polynomial.chebyshev.chebgrid2d"), [`chebval3d`](numpy.polynomial.chebyshev.chebval3d#numpy.polynomial.chebyshev.chebval3d "numpy.polynomial.chebyshev.chebval3d"), [`chebgrid3d`](numpy.polynomial.chebyshev.chebgrid3d#numpy.polynomial.chebyshev.chebgrid3d "numpy.polynomial.chebyshev.chebgrid3d") #### Notes New in version 1.7.0. © 2005–2022 NumPy Developers Licensed under the 3-clause BSD License. <https://numpy.org/doc/1.23/reference/generated/numpy.polynomial.chebyshev.chebval2d.htmlnumpy.polynomial.chebyshev.chebval3d ==================================== polynomial.chebyshev.chebval3d(*x*, *y*, *z*, *c*)[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/polynomial/chebyshev.py#L1280-L1328) Evaluate a 3-D Chebyshev series at points (x, y, z). This function returns the values: \[p(x,y,z) = \sum_{i,j,k} c_{i,j,k} * T_i(x) * T_j(y) * T_k(z)\] The parameters `x`, `y`, and `z` are converted to arrays only if they are tuples or a lists, otherwise they are treated as a scalars and they must have the same shape after conversion. In either case, either `x`, `y`, and `z` or their elements must support multiplication and addition both with themselves and with the elements of `c`. If `c` has fewer than 3 dimensions, ones are implicitly appended to its shape to make it 3-D. The shape of the result will be c.shape[3:] + x.shape. Parameters **x, y, z**array_like, compatible object The three dimensional series is evaluated at the points `(x, y, z)`, where `x`, `y`, and `z` must have the same shape. If any of `x`, `y`, or `z` is a list or tuple, it is first converted to an ndarray, otherwise it is left unchanged and if it isn’t an ndarray it is treated as a scalar. 
**c**array_like Array of coefficients ordered so that the coefficient of the term of multi-degree i,j,k is contained in `c[i,j,k]`. If `c` has dimension greater than 3 the remaining indices enumerate multiple sets of coefficients. Returns **values**ndarray, compatible object The values of the multidimensional polynomial on points formed with triples of corresponding values from `x`, `y`, and `z`. See also [`chebval`](numpy.polynomial.chebyshev.chebval#numpy.polynomial.chebyshev.chebval "numpy.polynomial.chebyshev.chebval"), [`chebval2d`](numpy.polynomial.chebyshev.chebval2d#numpy.polynomial.chebyshev.chebval2d "numpy.polynomial.chebyshev.chebval2d"), [`chebgrid2d`](numpy.polynomial.chebyshev.chebgrid2d#numpy.polynomial.chebyshev.chebgrid2d "numpy.polynomial.chebyshev.chebgrid2d"), [`chebgrid3d`](numpy.polynomial.chebyshev.chebgrid3d#numpy.polynomial.chebyshev.chebgrid3d "numpy.polynomial.chebyshev.chebgrid3d") #### Notes New in version 1.7.0. © 2005–2022 NumPy Developers Licensed under the 3-clause BSD License. <https://numpy.org/doc/1.23/reference/generated/numpy.polynomial.chebyshev.chebval3d.htmlnumpy.polynomial.chebyshev.chebgrid2d ===================================== polynomial.chebyshev.chebgrid2d(*x*, *y*, *c*)[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/polynomial/chebyshev.py#L1227-L1277) Evaluate a 2-D Chebyshev series on the Cartesian product of x and y. This function returns the values: \[p(a,b) = \sum_{i,j} c_{i,j} * T_i(a) * T_j(b),\] where the points `(a, b)` consist of all pairs formed by taking `a` from `x` and `b` from `y`. The resulting points form a grid with `x` in the first dimension and `y` in the second. The parameters `x` and `y` are converted to arrays only if they are tuples or a lists, otherwise they are treated as a scalars. In either case, either `x` and `y` or their elements must support multiplication and addition both with themselves and with the elements of `c`. 
If `c` has fewer than two dimensions, ones are implicitly appended to its shape to make it 2-D. The shape of the result will be c.shape[2:] + x.shape + y.shape. Parameters **x, y**array_like, compatible objects The two dimensional series is evaluated at the points in the Cartesian product of `x` and `y`. If `x` or `y` is a list or tuple, it is first converted to an ndarray, otherwise it is left unchanged and, if it isn’t an ndarray, it is treated as a scalar. **c**array_like Array of coefficients ordered so that the coefficient of the term of multi-degree i,j is contained in `c[i,j]`. If `c` has dimension greater than two the remaining indices enumerate multiple sets of coefficients. Returns **values**ndarray, compatible object The values of the two dimensional Chebyshev series at points in the Cartesian product of `x` and `y`. See also [`chebval`](numpy.polynomial.chebyshev.chebval#numpy.polynomial.chebyshev.chebval "numpy.polynomial.chebyshev.chebval"), [`chebval2d`](numpy.polynomial.chebyshev.chebval2d#numpy.polynomial.chebyshev.chebval2d "numpy.polynomial.chebyshev.chebval2d"), [`chebval3d`](numpy.polynomial.chebyshev.chebval3d#numpy.polynomial.chebyshev.chebval3d "numpy.polynomial.chebyshev.chebval3d"), [`chebgrid3d`](numpy.polynomial.chebyshev.chebgrid3d#numpy.polynomial.chebyshev.chebgrid3d "numpy.polynomial.chebyshev.chebgrid3d") #### Notes New in version 1.7.0. © 2005–2022 NumPy Developers Licensed under the 3-clause BSD License. <https://numpy.org/doc/1.23/reference/generated/numpy.polynomial.chebyshev.chebgrid2d.htmlnumpy.polynomial.chebyshev.chebgrid3d ===================================== polynomial.chebyshev.chebgrid3d(*x*, *y*, *z*, *c*)[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/polynomial/chebyshev.py#L1331-L1384) Evaluate a 3-D Chebyshev series on the Cartesian product of x, y, and z. 
This function returns the values:

\[p(a,b,c) = \sum_{i,j,k} c_{i,j,k} * T_i(a) * T_j(b) * T_k(c)\]

where the points `(a, b, c)` consist of all triples formed by taking `a` from `x`, `b` from `y`, and `c` from `z`. The resulting points form a grid with `x` in the first dimension, `y` in the second, and `z` in the third.

The parameters `x`, `y`, and `z` are converted to arrays only if they are tuples or lists, otherwise they are treated as scalars. In either case, either `x`, `y`, and `z` or their elements must support multiplication and addition both with themselves and with the elements of `c`.

If `c` has fewer than three dimensions, ones are implicitly appended to its shape to make it 3-D. The shape of the result will be c.shape[3:] + x.shape + y.shape + z.shape.

Parameters

**x, y, z**array_like, compatible objects

The three dimensional series is evaluated at the points in the Cartesian product of `x`, `y`, and `z`. If `x`, `y`, or `z` is a list or tuple, it is first converted to an ndarray, otherwise it is left unchanged and, if it isn’t an ndarray, it is treated as a scalar.

**c**array_like

Array of coefficients ordered so that the coefficient of the term of multi-degree i,j,k is contained in `c[i,j,k]`. If `c` has dimension greater than three the remaining indices enumerate multiple sets of coefficients.

Returns

**values**ndarray, compatible object

The values of the three dimensional Chebyshev series at points in the Cartesian product of `x`, `y`, and `z`.
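The difference between grid evaluation and pointwise evaluation is easiest to see in the shapes: `chebgrid3d` appends `x.shape + y.shape + z.shape`, and it agrees with `chebval3d` applied to a full meshgrid (a sketch):

```python
import numpy as np
from numpy.polynomial import chebyshev as C

c = np.ones((2, 3, 4))          # degrees 1, 2, 3 in x, y, z
x = np.linspace(-1, 1, 5)
y = np.linspace(-1, 1, 6)
z = np.linspace(-1, 1, 7)

# Grid evaluation: result shape is x.shape + y.shape + z.shape
assert C.chebgrid3d(x, y, z, c).shape == (5, 6, 7)

# chebval3d on a meshgrid of the same axes produces the same values
X, Y, Z = np.meshgrid(x, y, z, indexing="ij")
assert np.allclose(C.chebgrid3d(x, y, z, c), C.chebval3d(X, Y, Z, c))
```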
See also [`chebval`](numpy.polynomial.chebyshev.chebval#numpy.polynomial.chebyshev.chebval "numpy.polynomial.chebyshev.chebval"), [`chebval2d`](numpy.polynomial.chebyshev.chebval2d#numpy.polynomial.chebyshev.chebval2d "numpy.polynomial.chebyshev.chebval2d"), [`chebgrid2d`](numpy.polynomial.chebyshev.chebgrid2d#numpy.polynomial.chebyshev.chebgrid2d "numpy.polynomial.chebyshev.chebgrid2d"), [`chebval3d`](numpy.polynomial.chebyshev.chebval3d#numpy.polynomial.chebyshev.chebval3d "numpy.polynomial.chebyshev.chebval3d") #### Notes New in version 1.7.0. © 2005–2022 NumPy Developers Licensed under the 3-clause BSD License. <https://numpy.org/doc/1.23/reference/generated/numpy.polynomial.chebyshev.chebgrid3d.htmlnumpy.polynomial.chebyshev.chebder ================================== polynomial.chebyshev.chebder(*c*, *m=1*, *scl=1*, *axis=0*)[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/polynomial/chebyshev.py#L875-L964) Differentiate a Chebyshev series. Returns the Chebyshev series coefficients `c` differentiated `m` times along `axis`. At each iteration the result is multiplied by `scl` (the scaling factor is for use in a linear change of variable). The argument `c` is an array of coefficients from low to high degree along each axis, e.g., [1,2,3] represents the series `1*T_0 + 2*T_1 + 3*T_2` while [[1,2],[1,2]] represents `1*T_0(x)*T_0(y) + 1*T_1(x)*T_0(y) + 2*T_0(x)*T_1(y) + 2*T_1(x)*T_1(y)` if axis=0 is `x` and axis=1 is `y`. Parameters **c**array_like Array of Chebyshev series coefficients. If c is multidimensional the different axis correspond to different variables with the degree in each axis given by the corresponding index. **m**int, optional Number of derivatives taken, must be non-negative. (Default: 1) **scl**scalar, optional Each differentiation is multiplied by `scl`. The end result is multiplication by `scl**m`. This is for use in a linear change of variable. (Default: 1) **axis**int, optional Axis over which the derivative is taken. 
(Default: 0). New in version 1.7.0. Returns **der**ndarray Chebyshev series of the derivative. See also [`chebint`](numpy.polynomial.chebyshev.chebint#numpy.polynomial.chebyshev.chebint "numpy.polynomial.chebyshev.chebint") #### Notes In general, the result of differentiating a C-series needs to be “reprojected” onto the C-series basis set. Thus, typically, the result of this function is “unintuitive,” albeit correct; see Examples section below. #### Examples ``` >>> from numpy.polynomial import chebyshev as C >>> c = (1,2,3,4) >>> C.chebder(c) array([14., 12., 24.]) >>> C.chebder(c,3) array([96.]) >>> C.chebder(c,scl=-1) array([-14., -12., -24.]) >>> C.chebder(c,2,-1) array([12., 96.]) ``` © 2005–2022 NumPy Developers Licensed under the 3-clause BSD License. <https://numpy.org/doc/1.23/reference/generated/numpy.polynomial.chebyshev.chebder.htmlnumpy.polynomial.chebyshev.chebint ================================== polynomial.chebyshev.chebint(*c*, *m=1*, *k=[]*, *lbnd=0*, *scl=1*, *axis=0*)[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/polynomial/chebyshev.py#L967-L1091) Integrate a Chebyshev series. Returns the Chebyshev series coefficients `c` integrated `m` times from `lbnd` along `axis`. At each iteration the resulting series is **multiplied** by `scl` and an integration constant, `k`, is added. The scaling factor is for use in a linear change of variable. (“Buyer beware”: note that, depending on what one is doing, one may want `scl` to be the reciprocal of what one might expect; for more information, see the Notes section below.) The argument `c` is an array of coefficients from low to high degree along each axis, e.g., [1,2,3] represents the series `T_0 + 2*T_1 + 3*T_2` while [[1,2],[1,2]] represents `1*T_0(x)*T_0(y) + 1*T_1(x)*T_0(y) + 2*T_0(x)*T_1(y) + 2*T_1(x)*T_1(y)` if axis=0 is `x` and axis=1 is `y`. Parameters **c**array_like Array of Chebyshev series coefficients. 
If c is multidimensional the different axis correspond to different variables with the degree in each axis given by the corresponding index. **m**int, optional Order of integration, must be positive. (Default: 1) **k**{[], list, scalar}, optional Integration constant(s). The value of the first integral at zero is the first value in the list, the value of the second integral at zero is the second value, etc. If `k == []` (the default), all constants are set to zero. If `m == 1`, a single scalar can be given instead of a list. **lbnd**scalar, optional The lower bound of the integral. (Default: 0) **scl**scalar, optional Following each integration the result is *multiplied* by `scl` before the integration constant is added. (Default: 1) **axis**int, optional Axis over which the integral is taken. (Default: 0). New in version 1.7.0. Returns **S**ndarray C-series coefficients of the integral. Raises ValueError If `m < 1`, `len(k) > m`, `np.ndim(lbnd) != 0`, or `np.ndim(scl) != 0`. See also [`chebder`](numpy.polynomial.chebyshev.chebder#numpy.polynomial.chebyshev.chebder "numpy.polynomial.chebyshev.chebder") #### Notes Note that the result of each integration is *multiplied* by `scl`. Why is this important to note? Say one is making a linear change of variable \(u = ax + b\) in an integral relative to `x`. Then \(dx = du/a\), so one will need to set `scl` equal to \(1/a\)- perhaps not what one would have first thought. Also note that, in general, the result of integrating a C-series needs to be “reprojected” onto the C-series basis set. Thus, typically, the result of this function is “unintuitive,” albeit correct; see Examples section below. 
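The `scl` caveat can be checked with a round trip: differentiating the integral recovers the original series, and a change-of-variable scaling passed to `chebint` is undone by the reciprocal scaling in `chebder` (a sketch):

```python
import numpy as np
from numpy.polynomial import chebyshev as C

c = np.array([1.0, 2.0, 3.0])

# Differentiating the integral recovers the original series
assert np.allclose(C.chebder(C.chebint(c)), c)

# For u = 2*x we have dx = du/2, so chebint needs scl=0.5 while
# chebder needs scl=2.0; the two scalings cancel on the round trip
assert np.allclose(C.chebder(C.chebint(c, scl=0.5), scl=2.0), c)
```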
#### Examples ``` >>> from numpy.polynomial import chebyshev as C >>> c = (1,2,3) >>> C.chebint(c) array([ 0.5, -0.5, 0.5, 0.5]) >>> C.chebint(c,3) array([ 0.03125 , -0.1875 , 0.04166667, -0.05208333, 0.01041667, # may vary 0.00625 ]) >>> C.chebint(c, k=3) array([ 3.5, -0.5, 0.5, 0.5]) >>> C.chebint(c,lbnd=-2) array([ 8.5, -0.5, 0.5, 0.5]) >>> C.chebint(c,scl=-2) array([-1., 1., -1., -1.]) ``` © 2005–2022 NumPy Developers Licensed under the 3-clause BSD License. <https://numpy.org/doc/1.23/reference/generated/numpy.polynomial.chebyshev.chebint.htmlnumpy.polynomial.chebyshev.chebfromroots ======================================== polynomial.chebyshev.chebfromroots(*roots*)[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/polynomial/chebyshev.py#L514-L566) Generate a Chebyshev series with given roots. The function returns the coefficients of the polynomial \[p(x) = (x - r_0) * (x - r_1) * ... * (x - r_n),\] in Chebyshev form, where the `r_n` are the roots specified in [`roots`](numpy.roots#numpy.roots "numpy.roots"). If a zero has multiplicity n, then it must appear in [`roots`](numpy.roots#numpy.roots "numpy.roots") n times. For instance, if 2 is a root of multiplicity three and 3 is a root of multiplicity 2, then [`roots`](numpy.roots#numpy.roots "numpy.roots") looks something like [2, 2, 2, 3, 3]. The roots can appear in any order. If the returned coefficients are `c`, then \[p(x) = c_0 + c_1 * T_1(x) + ... + c_n * T_n(x)\] The coefficient of the last term is not generally 1 for monic polynomials in Chebyshev form. Parameters **roots**array_like Sequence containing the roots. Returns **out**ndarray 1-D array of coefficients. If all roots are real then `out` is a real array, if some of the roots are complex, then `out` is complex even if all the coefficients in the result are real (see Examples below). 
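`chebfromroots` and `chebroots` are inverses up to root ordering and roundoff, which gives a handy self-check of the coefficients shown in the example above (a sketch):

```python
import numpy as np
from numpy.polynomial import chebyshev as C

roots = [-1.0, 0.0, 1.0]
c = C.chebfromroots(roots)          # x**3 - x in the Chebyshev basis
assert np.allclose(c, [0.0, -0.25, 0.0, 0.25])

# chebroots inverts the construction (up to ordering and roundoff)
assert np.allclose(np.sort(C.chebroots(c)), roots)
```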
See also [`numpy.polynomial.polynomial.polyfromroots`](numpy.polynomial.polynomial.polyfromroots#numpy.polynomial.polynomial.polyfromroots "numpy.polynomial.polynomial.polyfromroots") [`numpy.polynomial.legendre.legfromroots`](numpy.polynomial.legendre.legfromroots#numpy.polynomial.legendre.legfromroots "numpy.polynomial.legendre.legfromroots") [`numpy.polynomial.laguerre.lagfromroots`](numpy.polynomial.laguerre.lagfromroots#numpy.polynomial.laguerre.lagfromroots "numpy.polynomial.laguerre.lagfromroots") [`numpy.polynomial.hermite.hermfromroots`](numpy.polynomial.hermite.hermfromroots#numpy.polynomial.hermite.hermfromroots "numpy.polynomial.hermite.hermfromroots") [`numpy.polynomial.hermite_e.hermefromroots`](numpy.polynomial.hermite_e.hermefromroots#numpy.polynomial.hermite_e.hermefromroots "numpy.polynomial.hermite_e.hermefromroots") #### Examples ``` >>> import numpy.polynomial.chebyshev as C >>> C.chebfromroots((-1,0,1)) # x^3 - x relative to the standard basis array([ 0. , -0.25, 0. , 0.25]) >>> j = complex(0,1) >>> C.chebfromroots((-j,j)) # x^2 + 1 relative to the standard basis array([1.5+0.j, 0. +0.j, 0.5+0.j]) ``` © 2005–2022 NumPy Developers Licensed under the 3-clause BSD License. <https://numpy.org/doc/1.23/reference/generated/numpy.polynomial.chebyshev.chebfromroots.htmlnumpy.polynomial.chebyshev.chebroots ==================================== polynomial.chebyshev.chebroots(*c*)[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/polynomial/chebyshev.py#L1719-L1777) Compute the roots of a Chebyshev series. Return the roots (a.k.a. “zeros”) of the polynomial \[p(x) = \sum_i c[i] * T_i(x).\] Parameters **c**1-D array_like 1-D array of coefficients. Returns **out**ndarray Array of the roots of the series. If all the roots are real, then `out` is also real, otherwise it is complex. 
See also

[`numpy.polynomial.polynomial.polyroots`](numpy.polynomial.polynomial.polyroots#numpy.polynomial.polynomial.polyroots "numpy.polynomial.polynomial.polyroots")

[`numpy.polynomial.legendre.legroots`](numpy.polynomial.legendre.legroots#numpy.polynomial.legendre.legroots "numpy.polynomial.legendre.legroots")

[`numpy.polynomial.laguerre.lagroots`](numpy.polynomial.laguerre.lagroots#numpy.polynomial.laguerre.lagroots "numpy.polynomial.laguerre.lagroots")

[`numpy.polynomial.hermite.hermroots`](numpy.polynomial.hermite.hermroots#numpy.polynomial.hermite.hermroots "numpy.polynomial.hermite.hermroots")

[`numpy.polynomial.hermite_e.hermeroots`](numpy.polynomial.hermite_e.hermeroots#numpy.polynomial.hermite_e.hermeroots "numpy.polynomial.hermite_e.hermeroots")

#### Notes

The root estimates are obtained as the eigenvalues of the companion matrix. Roots far from the origin of the complex plane may have large errors due to the numerical instability of the series for such values. Roots with multiplicity greater than 1 will also show larger errors as the value of the series near such points is relatively insensitive to errors in the roots. Isolated roots near the origin can be improved by a few iterations of Newton’s method.

The Chebyshev series basis polynomials aren’t powers of `x` so the results of this function may seem unintuitive.

#### Examples

```
>>> import numpy.polynomial.chebyshev as cheb
>>> cheb.chebroots((-1, 1,-1, 1)) # T3 - T2 + T1 - T0 has real roots
array([ -5.00000000e-01, 2.60860684e-17, 1.00000000e+00]) # may vary
```

<https://numpy.org/doc/1.23/reference/generated/numpy.polynomial.chebyshev.chebroots.html>

numpy.polynomial.chebyshev.chebvander
=====================================

polynomial.chebyshev.chebvander(*x*, *deg*)[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/polynomial/chebyshev.py#L1387-L1437)

Pseudo-Vandermonde matrix of given degree.
Returns the pseudo-Vandermonde matrix of degree `deg` and sample points `x`. The pseudo-Vandermonde matrix is defined by

\[V[..., i] = T_i(x),\]

where `0 <= i <= deg`. The leading indices of `V` index the elements of `x` and the last index is the degree of the Chebyshev polynomial.

If `c` is a 1-D array of coefficients of length `n + 1` and `V` is the matrix `V = chebvander(x, n)`, then `np.dot(V, c)` and `chebval(x, c)` are the same up to roundoff. This equivalence is useful both for least squares fitting and for the evaluation of a large number of Chebyshev series of the same degree and sample points.

Parameters

**x**array_like

Array of points. The dtype is converted to float64 or complex128 depending on whether any of the elements are complex. If `x` is scalar it is converted to a 1-D array.

**deg**int

Degree of the resulting matrix.

Returns

**vander**ndarray

The pseudo-Vandermonde matrix. The shape of the returned matrix is `x.shape + (deg + 1,)`, where the last index is the degree of the corresponding Chebyshev polynomial. The dtype will be the same as the converted `x`.

<https://numpy.org/doc/1.23/reference/generated/numpy.polynomial.chebyshev.chebvander.html>

numpy.polynomial.chebyshev.chebvander2d
=======================================

polynomial.chebyshev.chebvander2d(*x*, *y*, *deg*)[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/polynomial/chebyshev.py#L1440-L1490)

Pseudo-Vandermonde matrix of given degrees.

Returns the pseudo-Vandermonde matrix of degrees `deg` and sample points `(x, y)`. The pseudo-Vandermonde matrix is defined by

\[V[..., (deg[1] + 1)*i + j] = T_i(x) * T_j(y),\]

where `0 <= i <= deg[0]` and `0 <= j <= deg[1]`. The leading indices of `V` index the points `(x, y)` and the last index encodes the degrees of the Chebyshev polynomials.
If `V = chebvander2d(x, y, [xdeg, ydeg])`, then the columns of `V` correspond to the elements of a 2-D coefficient array `c` of shape (xdeg + 1, ydeg + 1) in the order \[c_{00}, c_{01}, c_{02} ... , c_{10}, c_{11}, c_{12} ...\] and `np.dot(V, c.flat)` and `chebval2d(x, y, c)` will be the same up to roundoff. This equivalence is useful both for least squares fitting and for the evaluation of a large number of 2-D Chebyshev series of the same degrees and sample points. Parameters **x, y**array_like Arrays of point coordinates, all of the same shape. The dtypes will be converted to either float64 or complex128 depending on whether any of the elements are complex. Scalars are converted to 1-D arrays. **deg**list of ints List of maximum degrees of the form [x_deg, y_deg]. Returns **vander2d**ndarray The shape of the returned matrix is `x.shape + (order,)`, where \(order = (deg[0]+1)*(deg[1]+1)\). The dtype will be the same as the converted `x` and `y`. See also [`chebvander`](numpy.polynomial.chebyshev.chebvander#numpy.polynomial.chebyshev.chebvander "numpy.polynomial.chebyshev.chebvander"), [`chebvander3d`](numpy.polynomial.chebyshev.chebvander3d#numpy.polynomial.chebyshev.chebvander3d "numpy.polynomial.chebyshev.chebvander3d"), [`chebval2d`](numpy.polynomial.chebyshev.chebval2d#numpy.polynomial.chebyshev.chebval2d "numpy.polynomial.chebyshev.chebval2d"), [`chebval3d`](numpy.polynomial.chebyshev.chebval3d#numpy.polynomial.chebyshev.chebval3d "numpy.polynomial.chebyshev.chebval3d") #### Notes New in version 1.7.0. © 2005–2022 NumPy Developers Licensed under the 3-clause BSD License. <https://numpy.org/doc/1.23/reference/generated/numpy.polynomial.chebyshev.chebvander2d.htmlnumpy.polynomial.chebyshev.chebvander3d ======================================= polynomial.chebyshev.chebvander3d(*x*, *y*, *z*, *deg*)[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/polynomial/chebyshev.py#L1493-L1544) Pseudo-Vandermonde matrix of given degrees. 
Returns the pseudo-Vandermonde matrix of degrees `deg` and sample points `(x, y, z)`. If `l, m, n` are the given degrees in `x, y, z`, then the pseudo-Vandermonde matrix is defined by

\[V[..., (m+1)(n+1)i + (n+1)j + k] = T_i(x)*T_j(y)*T_k(z),\]

where `0 <= i <= l`, `0 <= j <= m`, and `0 <= k <= n`. The leading indices of `V` index the points `(x, y, z)` and the last index encodes the degrees of the Chebyshev polynomials.

If `V = chebvander3d(x, y, z, [xdeg, ydeg, zdeg])`, then the columns of `V` correspond to the elements of a 3-D coefficient array `c` of shape (xdeg + 1, ydeg + 1, zdeg + 1) in the order

\[c_{000}, c_{001}, c_{002},... , c_{010}, c_{011}, c_{012},...\]

and `np.dot(V, c.flat)` and `chebval3d(x, y, z, c)` will be the same up to roundoff. This equivalence is useful both for least squares fitting and for the evaluation of a large number of 3-D Chebyshev series of the same degrees and sample points.

Parameters

**x, y, z**array_like

Arrays of point coordinates, all of the same shape. The dtypes will be converted to either float64 or complex128 depending on whether any of the elements are complex. Scalars are converted to 1-D arrays.

**deg**list of ints

List of maximum degrees of the form [x_deg, y_deg, z_deg].

Returns

**vander3d**ndarray

The shape of the returned matrix is `x.shape + (order,)`, where \(order = (deg[0]+1)*(deg[1]+1)*(deg[2]+1)\). The dtype will be the same as the converted `x`, `y`, and `z`.

See also

[`chebvander`](numpy.polynomial.chebyshev.chebvander#numpy.polynomial.chebyshev.chebvander "numpy.polynomial.chebyshev.chebvander"), [`chebvander2d`](numpy.polynomial.chebyshev.chebvander2d#numpy.polynomial.chebyshev.chebvander2d "numpy.polynomial.chebyshev.chebvander2d"), [`chebval2d`](numpy.polynomial.chebyshev.chebval2d#numpy.polynomial.chebyshev.chebval2d "numpy.polynomial.chebyshev.chebval2d"), [`chebval3d`](numpy.polynomial.chebyshev.chebval3d#numpy.polynomial.chebyshev.chebval3d "numpy.polynomial.chebyshev.chebval3d")

#### Notes

New in version 1.7.0.
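The stated equivalence between `np.dot(V, c.flat)` and `chebval3d(x, y, z, c)` can be confirmed numerically (a sketch with arbitrary coefficients and sample points):

```python
import numpy as np
from numpy.polynomial import chebyshev as C

rng = np.random.default_rng(0)
x, y, z = rng.uniform(-1, 1, (3, 10))
deg = [2, 1, 3]
c = rng.uniform(-1, 1, (deg[0] + 1, deg[1] + 1, deg[2] + 1))

V = C.chebvander3d(x, y, z, deg)
# order = (deg[0]+1)*(deg[1]+1)*(deg[2]+1) = 3*2*4 = 24
assert V.shape == (10, 24)

# The matrix-vector product reproduces pointwise evaluation
assert np.allclose(V @ c.ravel(), C.chebval3d(x, y, z, c))
```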
© 2005–2022 NumPy Developers Licensed under the 3-clause BSD License. <https://numpy.org/doc/1.23/reference/generated/numpy.polynomial.chebyshev.chebvander3d.htmlnumpy.polynomial.chebyshev.chebgauss ==================================== polynomial.chebyshev.chebgauss(*deg*)[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/polynomial/chebyshev.py#L1847-L1889) Gauss-Chebyshev quadrature. Computes the sample points and weights for Gauss-Chebyshev quadrature. These sample points and weights will correctly integrate polynomials of degree \(2*deg - 1\) or less over the interval \([-1, 1]\) with the weight function \(f(x) = 1/\sqrt{1 - x^2}\). Parameters **deg**int Number of sample points and weights. It must be >= 1. Returns **x**ndarray 1-D ndarray containing the sample points. **y**ndarray 1-D ndarray containing the weights. #### Notes New in version 1.7.0. The results have only been tested up to degree 100, higher degrees may be problematic. For Gauss-Chebyshev there are closed form solutions for the sample points and weights. If n = `deg`, then \[x_i = \cos(\pi (2 i - 1) / (2 n))\] \[w_i = \pi / n\] © 2005–2022 NumPy Developers Licensed under the 3-clause BSD License. <https://numpy.org/doc/1.23/reference/generated/numpy.polynomial.chebyshev.chebgauss.htmlnumpy.polynomial.chebyshev.chebweight ===================================== polynomial.chebyshev.chebweight(*x*)[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/polynomial/chebyshev.py#L1892-L1917) The weight function of the Chebyshev polynomials. The weight function is \(1/\sqrt{1 - x^2}\) and the interval of integration is \([-1, 1]\). The Chebyshev polynomials are orthogonal, but not normalized, with respect to this weight function. Parameters **x**array_like Values at which the weight function will be computed. Returns **w**ndarray The weight function at `x`. #### Notes New in version 1.7.0. © 2005–2022 NumPy Developers Licensed under the 3-clause BSD License. 
<https://numpy.org/doc/1.23/reference/generated/numpy.polynomial.chebyshev.chebweight.htmlnumpy.polynomial.chebyshev.chebcompanion ======================================== polynomial.chebyshev.chebcompanion(*c*)[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/polynomial/chebyshev.py#L1674-L1716) Return the scaled companion matrix of c. The basis polynomials are scaled so that the companion matrix is symmetric when `c` is a Chebyshev basis polynomial. This provides better eigenvalue estimates than the unscaled case and for basis polynomials the eigenvalues are guaranteed to be real if [`numpy.linalg.eigvalsh`](numpy.linalg.eigvalsh#numpy.linalg.eigvalsh "numpy.linalg.eigvalsh") is used to obtain them. Parameters **c**array_like 1-D array of Chebyshev series coefficients ordered from low to high degree. Returns **mat**ndarray Scaled companion matrix of dimensions (deg, deg). #### Notes New in version 1.7.0. © 2005–2022 NumPy Developers Licensed under the 3-clause BSD License. <https://numpy.org/doc/1.23/reference/generated/numpy.polynomial.chebyshev.chebcompanion.htmlnumpy.polynomial.chebyshev.chebfit ================================== polynomial.chebyshev.chebfit(*x*, *y*, *deg*, *rcond=None*, *full=False*, *w=None*)[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/polynomial/chebyshev.py#L1547-L1671) Least squares fit of Chebyshev series to data. Return the coefficients of a Chebyshev series of degree `deg` that is the least squares fit to the data values `y` given at points `x`. If `y` is 1-D the returned coefficients will also be 1-D. If `y` is 2-D multiple fits are done, one for each column of `y`, and the resulting coefficients are stored in the corresponding columns of a 2-D return. The fitted polynomial(s) are in the form \[p(x) = c_0 + c_1 * T_1(x) + ... + c_n * T_n(x),\] where `n` is `deg`. Parameters **x**array_like, shape (M,) x-coordinates of the M sample points `(x[i], y[i])`. 
**y** : array_like, shape (M,) or (M, K)
y-coordinates of the sample points. Several data sets of sample points sharing the same x-coordinates can be fitted at once by passing in a 2D-array that contains one dataset per column.

**deg** : int or 1-D array_like
Degree(s) of the fitting polynomials. If `deg` is a single integer, all terms up to and including the `deg`'th term are included in the fit. For NumPy versions >= 1.11.0 a list of integers specifying the degrees of the terms to include may be used instead.

**rcond** : float, optional
Relative condition number of the fit. Singular values smaller than this relative to the largest singular value will be ignored. The default value is `len(x)*eps`, where `eps` is the relative precision of the float type, about 2e-16 in most cases.

**full** : bool, optional
Switch determining the nature of the return value. When it is False (the default) just the coefficients are returned; when True, diagnostic information from the singular value decomposition is also returned.

**w** : array_like, shape (M,), optional
Weights. If not None, the weight `w[i]` applies to the unsquared residual `y[i] - y_hat[i]` at `x[i]`. Ideally the weights are chosen so that the errors of the products `w[i]*y[i]` all have the same variance. When using inverse-variance weighting, use `w[i] = 1/sigma(y[i])`. The default value is None.

New in version 1.5.0.

Returns

**coef** : ndarray, shape (M,) or (M, K)
Chebyshev coefficients ordered from low to high. If `y` was 2-D, the coefficients for the data in column `k` of `y` are in column `k`.

**[residuals, rank, singular_values, rcond]** : list
These values are only returned if `full == True`:

* residuals – sum of squared residuals of the least squares fit
* rank – the numerical rank of the scaled Vandermonde matrix
* singular_values – singular values of the scaled Vandermonde matrix
* rcond – value of `rcond`

For more details, see [`numpy.linalg.lstsq`](numpy.linalg.lstsq#numpy.linalg.lstsq "numpy.linalg.lstsq").
Warns

**RankWarning**
The rank of the coefficient matrix in the least-squares fit is deficient. The warning is only raised if `full == False`. The warnings can be turned off by

```
>>> import warnings
>>> warnings.simplefilter('ignore', np.RankWarning)
```

See also

[`numpy.polynomial.polynomial.polyfit`](numpy.polynomial.polynomial.polyfit#numpy.polynomial.polynomial.polyfit "numpy.polynomial.polynomial.polyfit")

[`numpy.polynomial.legendre.legfit`](numpy.polynomial.legendre.legfit#numpy.polynomial.legendre.legfit "numpy.polynomial.legendre.legfit")

[`numpy.polynomial.laguerre.lagfit`](numpy.polynomial.laguerre.lagfit#numpy.polynomial.laguerre.lagfit "numpy.polynomial.laguerre.lagfit")

[`numpy.polynomial.hermite.hermfit`](numpy.polynomial.hermite.hermfit#numpy.polynomial.hermite.hermfit "numpy.polynomial.hermite.hermfit")

[`numpy.polynomial.hermite_e.hermefit`](numpy.polynomial.hermite_e.hermefit#numpy.polynomial.hermite_e.hermefit "numpy.polynomial.hermite_e.hermefit")

[`chebval`](numpy.polynomial.chebyshev.chebval#numpy.polynomial.chebyshev.chebval "numpy.polynomial.chebyshev.chebval")
Evaluates a Chebyshev series.

[`chebvander`](numpy.polynomial.chebyshev.chebvander#numpy.polynomial.chebyshev.chebvander "numpy.polynomial.chebyshev.chebvander")
Vandermonde matrix of Chebyshev series.

[`chebweight`](numpy.polynomial.chebyshev.chebweight#numpy.polynomial.chebyshev.chebweight "numpy.polynomial.chebyshev.chebweight")
Chebyshev weight function.

[`numpy.linalg.lstsq`](numpy.linalg.lstsq#numpy.linalg.lstsq "numpy.linalg.lstsq")
Computes a least-squares fit from the matrix.

[`scipy.interpolate.UnivariateSpline`](https://docs.scipy.org/doc/scipy/reference/generated/scipy.interpolate.UnivariateSpline.html#scipy.interpolate.UnivariateSpline "(in SciPy v1.8.1)")
Computes spline fits.

#### Notes

The solution is the coefficients of the Chebyshev series `p` that minimizes the sum of the weighted squared errors

\[E = \sum_j w_j^2 * |y_j - p(x_j)|^2,\]

where \(w_j\) are the weights.
This problem is solved by setting up as the (typically) overdetermined matrix equation

\[V(x) * c = w * y,\]

where `V` is the weighted pseudo Vandermonde matrix of `x`, `c` are the coefficients to be solved for, `w` are the weights, and `y` are the observed values. This equation is then solved using the singular value decomposition of `V`.

If some of the singular values of `V` are so small that they are neglected, then a [`RankWarning`](numpy.rankwarning#numpy.RankWarning "numpy.RankWarning") will be issued. This means that the coefficient values may be poorly determined. Using a lower order fit will usually get rid of the warning. The `rcond` parameter can also be set to a value smaller than its default, but the resulting fit may be spurious and have large contributions from roundoff error.

Fits using Chebyshev series are usually better conditioned than fits using power series, but much can depend on the distribution of the sample points and the smoothness of the data. If the quality of the fit is inadequate, splines may be a good alternative.

#### References

1. Wikipedia, "Curve fitting", <https://en.wikipedia.org/wiki/Curve_fitting>

<https://numpy.org/doc/1.23/reference/generated/numpy.polynomial.chebyshev.chebfit.html>

numpy.polynomial.chebyshev.chebpts1
===================================

polynomial.chebyshev.chebpts1(*npts*)[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/polynomial/chebyshev.py#L1920-L1954)

Chebyshev points of the first kind.

The Chebyshev points of the first kind are the points `cos(x)`, where `x = [pi*(k + .5)/npts for k in range(npts)]`.

Parameters

**npts** : int
Number of sample points desired.

Returns

**pts** : ndarray
The Chebyshev points of the first kind.

See also

[`chebpts2`](numpy.polynomial.chebyshev.chebpts2#numpy.polynomial.chebyshev.chebpts2 "numpy.polynomial.chebyshev.chebpts2")

#### Notes

New in version 1.5.0.
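The two functions documented above combine naturally: `chebpts1` supplies well-conditioned sample nodes, and `chebfit` builds the least-squares series from samples taken there. A minimal sketch (the degree, node count, and target function are illustrative choices, not from the original docs):

```python
import numpy as np
from numpy.polynomial import chebyshev as C

# Sample a smooth function at the Chebyshev points of the first kind,
# which are good nodes for a well-conditioned fit.
x = C.chebpts1(20)
y = np.exp(x)

# Degree-10 least-squares Chebyshev fit to the samples.
coef = C.chebfit(x, y, 10)

# The fitted series tracks exp(x) closely across all of [-1, 1],
# not just at the sample nodes.
t = np.linspace(-1, 1, 101)
err = np.max(np.abs(C.chebval(t, coef) - np.exp(t)))
assert err < 1e-8
```

Because the coefficients of a smooth function decay rapidly in the Chebyshev basis, a modest degree already reaches near machine-level accuracy here.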
<https://numpy.org/doc/1.23/reference/generated/numpy.polynomial.chebyshev.chebpts1.html>

numpy.polynomial.chebyshev.chebpts2
===================================

polynomial.chebyshev.chebpts2(*npts*)[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/polynomial/chebyshev.py#L1957-L1987)

Chebyshev points of the second kind.

The Chebyshev points of the second kind are the points `cos(x)`, where `x = [pi*k/(npts - 1) for k in range(npts)]`.

Parameters

**npts** : int
Number of sample points desired.

Returns

**pts** : ndarray
The Chebyshev points of the second kind.

#### Notes

New in version 1.5.0.

<https://numpy.org/doc/1.23/reference/generated/numpy.polynomial.chebyshev.chebpts2.html>

numpy.polynomial.chebyshev.chebtrim
===================================

polynomial.chebyshev.chebtrim(*c*, *tol=0*)[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/polynomial/polyutils.py#L156-L208)

Remove "small" "trailing" coefficients from a polynomial.

"Small" means "small in absolute value" and is controlled by the parameter `tol`; "trailing" means highest order coefficient(s), e.g., in `[0, 1, 1, 0, 0]` (which represents `0 + x + x**2 + 0*x**3 + 0*x**4`) both the 3rd- and 4th-order coefficients would be "trimmed."

Parameters

**c** : array_like
1-d array of coefficients, ordered from lowest order to highest.

**tol** : number, optional
Trailing (i.e., highest order) elements with absolute value less than or equal to `tol` (default value is zero) are removed.

Returns

**trimmed** : ndarray
1-d array with trailing zeros removed. If the resulting series would be empty, a series containing a single zero is returned.
Raises

**ValueError**
If `tol` < 0.

See also

`trimseq`

#### Examples

```
>>> from numpy.polynomial import polyutils as pu
>>> pu.trimcoef((0,0,3,0,5,0,0))
array([0.,  0.,  3.,  0.,  5.])
>>> pu.trimcoef((0,0,1e-3,0,1e-5,0,0),1e-3) # item == tol is trimmed
array([0.])
>>> i = complex(0,1) # works for complex
>>> pu.trimcoef((3e-4,1e-3*(1-i),5e-4,2e-5*(1+i)), 1e-3)
array([0.0003+0.j   , 0.001 -0.001j])
```

<https://numpy.org/doc/1.23/reference/generated/numpy.polynomial.chebyshev.chebtrim.html>

numpy.polynomial.chebyshev.chebline
===================================

polynomial.chebyshev.chebline(*off*, *scl*)[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/polynomial/chebyshev.py#L476-L511)

Chebyshev series whose graph is a straight line.

Parameters

**off, scl** : scalars
The specified line is given by `off + scl*x`.

Returns

**y** : ndarray
This module's representation of the Chebyshev series for `off + scl*x`.

See also

[`numpy.polynomial.polynomial.polyline`](numpy.polynomial.polynomial.polyline#numpy.polynomial.polynomial.polyline "numpy.polynomial.polynomial.polyline")

[`numpy.polynomial.legendre.legline`](numpy.polynomial.legendre.legline#numpy.polynomial.legendre.legline "numpy.polynomial.legendre.legline")

[`numpy.polynomial.laguerre.lagline`](numpy.polynomial.laguerre.lagline#numpy.polynomial.laguerre.lagline "numpy.polynomial.laguerre.lagline")

[`numpy.polynomial.hermite.hermline`](numpy.polynomial.hermite.hermline#numpy.polynomial.hermite.hermline "numpy.polynomial.hermite.hermline")

[`numpy.polynomial.hermite_e.hermeline`](numpy.polynomial.hermite_e.hermeline#numpy.polynomial.hermite_e.hermeline "numpy.polynomial.hermite_e.hermeline")

#### Examples

```
>>> import numpy.polynomial.chebyshev as C
>>> C.chebline(3,2)
array([3, 2])
>>> C.chebval(-3, C.chebline(3,2)) # should be -3
-3.0
```
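The `chebcompanion` entry earlier in this section has no worked example, so here is a small sketch of the companion-matrix/eigenvalue connection it describes, using `numpy.linalg.eigvalsh` as the docs recommend for basis polynomials:

```python
import numpy as np
from numpy.polynomial import chebyshev as C

# Coefficients of the Chebyshev basis polynomial T_3.
c = [0, 0, 0, 1]

# For a basis polynomial the scaled companion matrix is symmetric,
# so eigvalsh applies and the eigenvalues are guaranteed to be real.
m = C.chebcompanion(c)
assert np.allclose(m, m.T)

# The eigenvalues are the roots of the series: for T_3 these are
# cos(pi*(2k+1)/6), which coincide with the Chebyshev points of the
# first kind returned by chebpts1(3).
roots = np.linalg.eigvalsh(m)
assert np.allclose(np.sort(roots), np.sort(C.chebpts1(3)))
```

This eigenvalue route is how root finding for Chebyshev series is implemented in practice; `chebroots` wraps essentially this computation.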
<https://numpy.org/doc/1.23/reference/generated/numpy.polynomial.chebyshev.chebline.html>

numpy.polynomial.chebyshev.cheb2poly
====================================

polynomial.chebyshev.cheb2poly(*c*)[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/polynomial/chebyshev.py#L397-L455)

Convert a Chebyshev series to a polynomial.

Convert an array representing the coefficients of a Chebyshev series, ordered from lowest degree to highest, to an array of the coefficients of the equivalent polynomial (relative to the "standard" basis) ordered from lowest to highest degree.

Parameters

**c** : array_like
1-D array containing the Chebyshev series coefficients, ordered from lowest order term to highest.

Returns

**pol** : ndarray
1-D array containing the coefficients of the equivalent polynomial (relative to the "standard" basis) ordered from lowest order term to highest.

See also

[`poly2cheb`](numpy.polynomial.chebyshev.poly2cheb#numpy.polynomial.chebyshev.poly2cheb "numpy.polynomial.chebyshev.poly2cheb")

#### Notes

The easy way to do conversions between polynomial basis sets is to use the convert method of a class instance.

#### Examples

```
>>> from numpy import polynomial as P
>>> c = P.Chebyshev(range(4))
>>> c
Chebyshev([0., 1., 2., 3.], domain=[-1, 1], window=[-1, 1])
>>> p = c.convert(kind=P.Polynomial)
>>> p
Polynomial([-2., -8., 4., 12.], domain=[-1., 1.], window=[-1., 1.])
>>> P.chebyshev.cheb2poly(range(4))
array([-2., -8.,  4., 12.])
```

<https://numpy.org/doc/1.23/reference/generated/numpy.polynomial.chebyshev.cheb2poly.html>

numpy.polynomial.chebyshev.poly2cheb
====================================

polynomial.chebyshev.poly2cheb(*pol*)[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/polynomial/chebyshev.py#L347-L394)

Convert a polynomial to a Chebyshev series.
Convert an array representing the coefficients of a polynomial (relative to the "standard" basis) ordered from lowest degree to highest, to an array of the coefficients of the equivalent Chebyshev series, ordered from lowest to highest degree.

Parameters

**pol** : array_like
1-D array containing the polynomial coefficients.

Returns

**c** : ndarray
1-D array containing the coefficients of the equivalent Chebyshev series.

See also

[`cheb2poly`](numpy.polynomial.chebyshev.cheb2poly#numpy.polynomial.chebyshev.cheb2poly "numpy.polynomial.chebyshev.cheb2poly")

#### Notes

The easy way to do conversions between polynomial basis sets is to use the convert method of a class instance.

#### Examples

```
>>> from numpy import polynomial as P
>>> p = P.Polynomial(range(4))
>>> p
Polynomial([0., 1., 2., 3.], domain=[-1, 1], window=[-1, 1])
>>> c = p.convert(kind=P.Chebyshev)
>>> c
Chebyshev([1.  , 3.25, 1.  , 0.75], domain=[-1., 1.], window=[-1., 1.])
>>> P.chebyshev.poly2cheb(range(4))
array([1.  , 3.25, 1.  , 0.75])
```

<https://numpy.org/doc/1.23/reference/generated/numpy.polynomial.chebyshev.poly2cheb.html>

numpy.polynomial.chebyshev.chebinterpolate
==========================================

polynomial.chebyshev.chebinterpolate(*func*, *deg*, *args=()*)[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/polynomial/chebyshev.py#L1780-L1844)

Interpolate a function at the Chebyshev points of the first kind.

Returns the Chebyshev series that interpolates `func` at the Chebyshev points of the first kind in the interval [-1, 1]. The interpolating series tends to a minmax approximation to `func` with increasing `deg` if the function is continuous in the interval.

New in version 1.14.0.

Parameters

**func** : function
The function to be approximated. It must be a function of a single variable of the form `f(x, a, b, c...)`, where `a, b, c...` are extra arguments passed in the `args` parameter.
**deg** : int
Degree of the interpolating polynomial.

**args** : tuple, optional
Extra arguments to be used in the function call. Default is no extra arguments.

Returns

**coef** : ndarray, shape (deg + 1,)
Chebyshev coefficients of the interpolating series ordered from low to high.

#### Notes

The Chebyshev polynomials used in the interpolation are orthogonal when sampled at the Chebyshev points of the first kind. If it is desired to constrain some of the coefficients they can simply be set to the desired value after the interpolation; no new interpolation or fit is needed. This is especially useful if it is known a priori that some of the coefficients are zero. For instance, if the function is even then the coefficients of the terms of odd degree in the result can be set to zero.

#### Examples

```
>>> import numpy.polynomial.chebyshev as C
>>> C.chebinterpolate(lambda x: np.tanh(x) + 0.5, 8)
array([ 5.00000000e-01,  8.11675684e-01, -9.86864911e-17, -5.42457905e-02,
       -2.71387850e-16,  4.51658839e-03,  2.46716228e-17, -3.79694221e-04,
       -3.26899002e-16])
```

<https://numpy.org/doc/1.23/reference/generated/numpy.polynomial.chebyshev.chebinterpolate.html>

numpy.polynomial.hermite.Hermite
================================

*class* numpy.polynomial.hermite.Hermite(*coef*, *domain=None*, *window=None*)[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/polynomial/hermite.py#L1658-L1697)

A Hermite series class.

The Hermite class provides the standard Python numerical methods `'+'`, `'-'`, `'*'`, `'//'`, `'%'`, `'divmod'`, `'**'`, and `'()'` as well as the attributes and methods listed in the `ABCPolyBase` documentation.

Parameters

**coef** : array_like
Hermite coefficients in order of increasing degree, i.e., `(1, 2, 3)` gives `1*H_0(x) + 2*H_1(x) + 3*H_2(x)`.

**domain** : (2,) array_like, optional
Domain to use. The interval `[domain[0], domain[1]]` is mapped to the interval `[window[0], window[1]]` by shifting and scaling.
The default value is [-1, 1].

**window** : (2,) array_like, optional
Window, see [`domain`](numpy.polynomial.hermite.hermite.domain#numpy.polynomial.hermite.Hermite.domain "numpy.polynomial.hermite.Hermite.domain") for its use. The default value is [-1, 1].

New in version 1.6.0.

#### Methods

| Method | Description |
| --- | --- |
| [`__call__`](numpy.polynomial.hermite.hermite.__call__#numpy.polynomial.hermite.Hermite.__call__ "numpy.polynomial.hermite.Hermite.__call__")(arg) | Call self as a function. |
| [`basis`](numpy.polynomial.hermite.hermite.basis#numpy.polynomial.hermite.Hermite.basis "numpy.polynomial.hermite.Hermite.basis")(deg[, domain, window]) | Series basis polynomial of degree `deg`. |
| [`cast`](numpy.polynomial.hermite.hermite.cast#numpy.polynomial.hermite.Hermite.cast "numpy.polynomial.hermite.Hermite.cast")(series[, domain, window]) | Convert series to series of this class. |
| [`convert`](numpy.polynomial.hermite.hermite.convert#numpy.polynomial.hermite.Hermite.convert "numpy.polynomial.hermite.Hermite.convert")([domain, kind, window]) | Convert series to a different kind and/or domain and/or window. |
| [`copy`](numpy.polynomial.hermite.hermite.copy#numpy.polynomial.hermite.Hermite.copy "numpy.polynomial.hermite.Hermite.copy")() | Return a copy. |
| [`cutdeg`](numpy.polynomial.hermite.hermite.cutdeg#numpy.polynomial.hermite.Hermite.cutdeg "numpy.polynomial.hermite.Hermite.cutdeg")(deg) | Truncate series to the given degree. |
| [`degree`](numpy.polynomial.hermite.hermite.degree#numpy.polynomial.hermite.Hermite.degree "numpy.polynomial.hermite.Hermite.degree")() | The degree of the series. |
| [`deriv`](numpy.polynomial.hermite.hermite.deriv#numpy.polynomial.hermite.Hermite.deriv "numpy.polynomial.hermite.Hermite.deriv")([m]) | Differentiate. |
| [`fit`](numpy.polynomial.hermite.hermite.fit#numpy.polynomial.hermite.Hermite.fit "numpy.polynomial.hermite.Hermite.fit")(x, y, deg[, domain, rcond, full, w, window]) | Least squares fit to data. |
| [`fromroots`](numpy.polynomial.hermite.hermite.fromroots#numpy.polynomial.hermite.Hermite.fromroots "numpy.polynomial.hermite.Hermite.fromroots")(roots[, domain, window]) | Return series instance that has the specified roots. |
| [`has_samecoef`](numpy.polynomial.hermite.hermite.has_samecoef#numpy.polynomial.hermite.Hermite.has_samecoef "numpy.polynomial.hermite.Hermite.has_samecoef")(other) | Check if coefficients match. |
| [`has_samedomain`](numpy.polynomial.hermite.hermite.has_samedomain#numpy.polynomial.hermite.Hermite.has_samedomain "numpy.polynomial.hermite.Hermite.has_samedomain")(other) | Check if domains match. |
| [`has_sametype`](numpy.polynomial.hermite.hermite.has_sametype#numpy.polynomial.hermite.Hermite.has_sametype "numpy.polynomial.hermite.Hermite.has_sametype")(other) | Check if types match. |
| [`has_samewindow`](numpy.polynomial.hermite.hermite.has_samewindow#numpy.polynomial.hermite.Hermite.has_samewindow "numpy.polynomial.hermite.Hermite.has_samewindow")(other) | Check if windows match. |
| [`identity`](numpy.polynomial.hermite.hermite.identity#numpy.polynomial.hermite.Hermite.identity "numpy.polynomial.hermite.Hermite.identity")([domain, window]) | Identity function. |
| [`integ`](numpy.polynomial.hermite.hermite.integ#numpy.polynomial.hermite.Hermite.integ "numpy.polynomial.hermite.Hermite.integ")([m, k, lbnd]) | Integrate. |
| [`linspace`](numpy.polynomial.hermite.hermite.linspace#numpy.polynomial.hermite.Hermite.linspace "numpy.polynomial.hermite.Hermite.linspace")([n, domain]) | Return x, y values at equally spaced points in domain. |
| [`mapparms`](numpy.polynomial.hermite.hermite.mapparms#numpy.polynomial.hermite.Hermite.mapparms "numpy.polynomial.hermite.Hermite.mapparms")() | Return the mapping parameters. |
| [`roots`](numpy.polynomial.hermite.hermite.roots#numpy.polynomial.hermite.Hermite.roots "numpy.polynomial.hermite.Hermite.roots")() | Return the roots of the series polynomial. |
| [`trim`](numpy.polynomial.hermite.hermite.trim#numpy.polynomial.hermite.Hermite.trim "numpy.polynomial.hermite.Hermite.trim")([tol]) | Remove trailing coefficients. |
| [`truncate`](numpy.polynomial.hermite.hermite.truncate#numpy.polynomial.hermite.Hermite.truncate "numpy.polynomial.hermite.Hermite.truncate")(size) | Truncate series to length `size`. |

<https://numpy.org/doc/1.23/reference/generated/numpy.polynomial.hermite.Hermite.html>

numpy.polynomial.hermite.hermdomain
===================================

polynomial.hermite.hermdomain *= array([-1, 1])*

An array object represents a multidimensional, homogeneous array of fixed-size items. An associated data-type object describes the format of each element in the array (its byte-order, how many bytes it occupies in memory, whether it is an integer, a floating point number, or something else, etc.)

Arrays should be constructed using [`array`](numpy.array#numpy.array "numpy.array"), [`zeros`](numpy.zeros#numpy.zeros "numpy.zeros") or [`empty`](numpy.empty#numpy.empty "numpy.empty") (refer to the See Also section below). The parameters given here refer to a low-level method (`ndarray(...)`) for instantiating an array. For more information, refer to the [`numpy`](../index#module-numpy "numpy") module and examine the methods and attributes of an array.

Parameters

**(for the __new__ method; see Notes below)**

**shape** : tuple of ints
Shape of created array.

**dtype** : data-type, optional
Any object that can be interpreted as a numpy data type.

**buffer** : object exposing buffer interface, optional
Used to fill the array with data.

**offset** : int, optional
Offset of array data in buffer.

**strides** : tuple of ints, optional
Strides of data in memory.

**order** : {'C', 'F'}, optional
Row-major (C-style) or column-major (Fortran-style) order.

See also

[`array`](numpy.array#numpy.array "numpy.array")
Construct an array.

[`zeros`](numpy.zeros#numpy.zeros "numpy.zeros")
Create an array, each element of which is zero.

[`empty`](numpy.empty#numpy.empty "numpy.empty")
Create an array, but leave its allocated memory unchanged (i.e., it contains "garbage").

[`dtype`](numpy.dtype#numpy.dtype "numpy.dtype")
Create a data-type.

[`numpy.typing.NDArray`](../typing#numpy.typing.NDArray "numpy.typing.NDArray")
An ndarray alias [generic](https://docs.python.org/3/glossary.html#term-generic-type "(in Python v3.10)") w.r.t. its [`dtype.type`](numpy.dtype.type#numpy.dtype.type "numpy.dtype.type").

#### Notes

There are two modes of creating an array using `__new__`:

1. If `buffer` is None, then only [`shape`](numpy.shape#numpy.shape "numpy.shape"), [`dtype`](numpy.dtype#numpy.dtype "numpy.dtype"), and `order` are used.
2. If `buffer` is an object exposing the buffer interface, then all keywords are interpreted.

No `__init__` method is needed because the array is fully initialized after the `__new__` method.

#### Examples

These examples illustrate the low-level [`ndarray`](numpy.ndarray#numpy.ndarray "numpy.ndarray") constructor. Refer to the `See Also` section above for easier ways of constructing an ndarray.

First mode, `buffer` is None:

```
>>> np.ndarray(shape=(2,2), dtype=float, order='F')
array([[0.0e+000, 0.0e+000], # random
       [     nan, 2.5e-323]])
```

Second mode:

```
>>> np.ndarray((2,), buffer=np.array([1,2,3]),
...            offset=np.int_().itemsize,
...            dtype=int) # offset = 1*itemsize, i.e. skip first element
array([2, 3])
```

Attributes

**T** : ndarray
Transpose of the array.

**data** : buffer
The array's elements, in memory.

**dtype** : dtype object
Describes the format of the elements in the array.

**flags** : dict
Dictionary containing information related to memory use, e.g., 'C_CONTIGUOUS', 'OWNDATA', 'WRITEABLE', etc.

**flat** : numpy.flatiter object
Flattened version of the array as an iterator. The iterator allows assignments, e.g., `x.flat = 3` (See [`ndarray.flat`](numpy.ndarray.flat#numpy.ndarray.flat "numpy.ndarray.flat") for assignment examples; TODO).

**imag** : ndarray
Imaginary part of the array.

**real** : ndarray
Real part of the array.

**size** : int
Number of elements in the array.

**itemsize** : int
The memory use of each array element in bytes.

**nbytes** : int
The total number of bytes required to store the array data, i.e., `itemsize * size`.

**ndim** : int
The array's number of dimensions.

**shape** : tuple of ints
Shape of the array.

**strides** : tuple of ints
The step-size required to move from one element to the next in memory. For example, a contiguous `(3, 4)` array of type `int16` in C-order has strides `(8, 2)`. This implies that to move from element to element in memory requires jumps of 2 bytes. To move from row-to-row, one needs to jump 8 bytes at a time (`2 * 4`).

**ctypes** : ctypes object
Class containing properties of the array needed for interaction with ctypes.

**base** : ndarray
If the array is a view into another array, that array is its `base` (unless that array is also a view). The `base` array is where the array data is actually stored.
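A brief usage sketch of the `Hermite` class documented above (physicists' convention: `H_0(x) = 1`, `H_1(x) = 2x`, `H_2(x) = 4x**2 - 2`):

```python
import numpy as np
from numpy.polynomial import Hermite

# 1*H_0(x) + 2*H_1(x) + 3*H_2(x) = 1 + 4x + 12x**2 - 6 = 12x**2 + 4x - 5
h = Hermite([1, 2, 3])

# With the default domain and window the mapping is the identity,
# so calling the instance evaluates the series directly: h(1) = 11.
assert np.isclose(h(1.0), 11.0)

# Calculus methods come from ABCPolyBase: the derivative of the series
# is 24x + 4 = 4*H_0(x) + 12*H_1(x).
assert np.allclose(h.deriv().coef, [4.0, 12.0])
```

The same pattern (construct from low-to-high coefficients, then call, differentiate, integrate, or fit) applies to every series class in `numpy.polynomial`.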
<https://numpy.org/doc/1.23/reference/generated/numpy.polynomial.hermite.hermdomain.html>

numpy.polynomial.hermite.hermzero
=================================

polynomial.hermite.hermzero *= array([0])*

An ndarray holding the coefficients of the zero Hermite series. See the [`ndarray`](numpy.ndarray#numpy.ndarray "numpy.ndarray") documentation under `hermdomain` above for the generic array description.
<https://numpy.org/doc/1.23/reference/generated/numpy.polynomial.hermite.hermzero.html>

numpy.polynomial.hermite.hermone
================================

polynomial.hermite.hermone *= array([1])*

An ndarray holding the coefficients of the constant Hermite series `1`. See the [`ndarray`](numpy.ndarray#numpy.ndarray "numpy.ndarray") documentation under `hermdomain` above for the generic array description.
<https://numpy.org/doc/1.23/reference/generated/numpy.polynomial.hermite.hermone.htmlnumpy.polynomial.hermite.hermx ============================== polynomial.hermite.hermx*=array([0. , 0.5])* Hermite series coefficients representing the identity x: since `H_1(x) = 2x`, x equals `0.5*H_1(x)`, giving the coefficients `[0., 0.5]`. © 2005–2022 NumPy Developers Licensed under the 3-clause BSD License. 
<https://numpy.org/doc/1.23/reference/generated/numpy.polynomial.hermite.hermx.htmlnumpy.polynomial.hermite.hermadd ================================ polynomial.hermite.hermadd(*c1*, *c2*)[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/polynomial/hermite.py#L313-L350) Add one Hermite series to another. Returns the sum of two Hermite series `c1` + `c2`. The arguments are sequences of coefficients ordered from lowest order term to highest, i.e., [1,2,3] represents the series `P_0 + 2*P_1 + 3*P_2`. Parameters **c1, c2**array_like 1-D arrays of Hermite series coefficients ordered from low to high. Returns **out**ndarray Array representing the Hermite series of their sum. See also [`hermsub`](numpy.polynomial.hermite.hermsub#numpy.polynomial.hermite.hermsub "numpy.polynomial.hermite.hermsub"), [`hermmulx`](numpy.polynomial.hermite.hermmulx#numpy.polynomial.hermite.hermmulx "numpy.polynomial.hermite.hermmulx"), [`hermmul`](numpy.polynomial.hermite.hermmul#numpy.polynomial.hermite.hermmul "numpy.polynomial.hermite.hermmul"), [`hermdiv`](numpy.polynomial.hermite.hermdiv#numpy.polynomial.hermite.hermdiv "numpy.polynomial.hermite.hermdiv"), [`hermpow`](numpy.polynomial.hermite.hermpow#numpy.polynomial.hermite.hermpow "numpy.polynomial.hermite.hermpow") #### Notes Unlike multiplication, division, etc., the sum of two Hermite series is a Hermite series (without having to “reproject” the result onto the basis set) so addition, just like that of “standard” polynomials, is simply “component-wise.” #### Examples ``` >>> from numpy.polynomial.hermite import hermadd >>> hermadd([1, 2, 3], [1, 2, 3, 4]) array([2., 4., 6., 4.]) ``` © 2005–2022 NumPy Developers Licensed under the 3-clause BSD License. 
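Because addition of Hermite series is componentwise, as described above, it can be sketched in plain Python (hypothetical names, coefficients as plain lists; subtraction is just addition of the negated series):

```python
def hermadd_sketch(c1, c2):
    # Componentwise sum; the shorter coefficient list is zero-padded.
    n = max(len(c1), len(c2))
    c1 = list(c1) + [0.0] * (n - len(c1))
    c2 = list(c2) + [0.0] * (n - len(c2))
    return [a + b for a, b in zip(c1, c2)]

def hermsub_sketch(c1, c2):
    # Difference as a sum with the negated second series.
    return hermadd_sketch(c1, [-b for b in c2])

hermadd_sketch([1, 2, 3], [1, 2, 3, 4])  # coefficients 2, 4, 6, 4
```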
<https://numpy.org/doc/1.23/reference/generated/numpy.polynomial.hermite.hermadd.htmlnumpy.polynomial.hermite.hermsub ================================ polynomial.hermite.hermsub(*c1*, *c2*)[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/polynomial/hermite.py#L353-L390) Subtract one Hermite series from another. Returns the difference of two Hermite series `c1` - `c2`. The sequences of coefficients are from lowest order term to highest, i.e., [1,2,3] represents the series `P_0 + 2*P_1 + 3*P_2`. Parameters **c1, c2**array_like 1-D arrays of Hermite series coefficients ordered from low to high. Returns **out**ndarray Of Hermite series coefficients representing their difference. See also [`hermadd`](numpy.polynomial.hermite.hermadd#numpy.polynomial.hermite.hermadd "numpy.polynomial.hermite.hermadd"), [`hermmulx`](numpy.polynomial.hermite.hermmulx#numpy.polynomial.hermite.hermmulx "numpy.polynomial.hermite.hermmulx"), [`hermmul`](numpy.polynomial.hermite.hermmul#numpy.polynomial.hermite.hermmul "numpy.polynomial.hermite.hermmul"), [`hermdiv`](numpy.polynomial.hermite.hermdiv#numpy.polynomial.hermite.hermdiv "numpy.polynomial.hermite.hermdiv"), [`hermpow`](numpy.polynomial.hermite.hermpow#numpy.polynomial.hermite.hermpow "numpy.polynomial.hermite.hermpow") #### Notes Unlike multiplication, division, etc., the difference of two Hermite series is a Hermite series (without having to “reproject” the result onto the basis set) so subtraction, just like that of “standard” polynomials, is simply “component-wise.” #### Examples ``` >>> from numpy.polynomial.hermite import hermsub >>> hermsub([1, 2, 3, 4], [1, 2, 3]) array([0., 0., 0., 4.]) ``` © 2005–2022 NumPy Developers Licensed under the 3-clause BSD License. 
<https://numpy.org/doc/1.23/reference/generated/numpy.polynomial.hermite.hermsub.htmlnumpy.polynomial.hermite.hermmulx ================================= polynomial.hermite.hermmulx(*c*)[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/polynomial/hermite.py#L393-L443) Multiply a Hermite series by x. Multiply the Hermite series `c` by x, where x is the independent variable. Parameters **c**array_like 1-D array of Hermite series coefficients ordered from low to high. Returns **out**ndarray Array representing the result of the multiplication. See also [`hermadd`](numpy.polynomial.hermite.hermadd#numpy.polynomial.hermite.hermadd "numpy.polynomial.hermite.hermadd"), [`hermsub`](numpy.polynomial.hermite.hermsub#numpy.polynomial.hermite.hermsub "numpy.polynomial.hermite.hermsub"), [`hermmul`](numpy.polynomial.hermite.hermmul#numpy.polynomial.hermite.hermmul "numpy.polynomial.hermite.hermmul"), [`hermdiv`](numpy.polynomial.hermite.hermdiv#numpy.polynomial.hermite.hermdiv "numpy.polynomial.hermite.hermdiv"), [`hermpow`](numpy.polynomial.hermite.hermpow#numpy.polynomial.hermite.hermpow "numpy.polynomial.hermite.hermpow") #### Notes The multiplication uses the recursion relationship for Hermite polynomials in the form \[xP_i(x) = (P_{i + 1}(x)/2 + i*P_{i - 1}(x))\] #### Examples ``` >>> from numpy.polynomial.hermite import hermmulx >>> hermmulx([1, 2, 3]) array([2. , 6.5, 1. , 1.5]) ``` © 2005–2022 NumPy Developers Licensed under the 3-clause BSD License. <https://numpy.org/doc/1.23/reference/generated/numpy.polynomial.hermite.hermmulx.htmlnumpy.polynomial.hermite.hermmul ================================ polynomial.hermite.hermmul(*c1*, *c2*)[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/polynomial/hermite.py#L446-L509) Multiply one Hermite series by another. Returns the product of two Hermite series `c1` * `c2`. The arguments are sequences of coefficients, from lowest order “term” to highest, e.g., [1,2,3] represents the series `P_0 + 2*P_1 + 3*P_2`. 
Parameters **c1, c2**array_like 1-D arrays of Hermite series coefficients ordered from low to high. Returns **out**ndarray Of Hermite series coefficients representing their product. See also [`hermadd`](numpy.polynomial.hermite.hermadd#numpy.polynomial.hermite.hermadd "numpy.polynomial.hermite.hermadd"), [`hermsub`](numpy.polynomial.hermite.hermsub#numpy.polynomial.hermite.hermsub "numpy.polynomial.hermite.hermsub"), [`hermmulx`](numpy.polynomial.hermite.hermmulx#numpy.polynomial.hermite.hermmulx "numpy.polynomial.hermite.hermmulx"), [`hermdiv`](numpy.polynomial.hermite.hermdiv#numpy.polynomial.hermite.hermdiv "numpy.polynomial.hermite.hermdiv"), [`hermpow`](numpy.polynomial.hermite.hermpow#numpy.polynomial.hermite.hermpow "numpy.polynomial.hermite.hermpow") #### Notes In general, the (polynomial) product of two C-series results in terms that are not in the Hermite polynomial basis set. Thus, to express the product as a Hermite series, it is necessary to “reproject” the product onto said basis set, which may produce “unintuitive” (but correct) results; see Examples section below. #### Examples ``` >>> from numpy.polynomial.hermite import hermmul >>> hermmul([1, 2, 3], [0, 1, 2]) array([52., 29., 52., 7., 6.]) ``` © 2005–2022 NumPy Developers Licensed under the 3-clause BSD License. <https://numpy.org/doc/1.23/reference/generated/numpy.polynomial.hermite.hermmul.htmlnumpy.polynomial.hermite.hermdiv ================================ polynomial.hermite.hermdiv(*c1*, *c2*)[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/polynomial/hermite.py#L512-L557) Divide one Hermite series by another. Returns the quotient-with-remainder of two Hermite series `c1` / `c2`. The arguments are sequences of coefficients from lowest order “term” to highest, e.g., [1,2,3] represents the series `P_0 + 2*P_1 + 3*P_2`. Parameters **c1, c2**array_like 1-D arrays of Hermite series coefficients ordered from low to high. 
Returns **[quo, rem]**ndarrays Of Hermite series coefficients representing the quotient and remainder. See also [`hermadd`](numpy.polynomial.hermite.hermadd#numpy.polynomial.hermite.hermadd "numpy.polynomial.hermite.hermadd"), [`hermsub`](numpy.polynomial.hermite.hermsub#numpy.polynomial.hermite.hermsub "numpy.polynomial.hermite.hermsub"), [`hermmulx`](numpy.polynomial.hermite.hermmulx#numpy.polynomial.hermite.hermmulx "numpy.polynomial.hermite.hermmulx"), [`hermmul`](numpy.polynomial.hermite.hermmul#numpy.polynomial.hermite.hermmul "numpy.polynomial.hermite.hermmul"), [`hermpow`](numpy.polynomial.hermite.hermpow#numpy.polynomial.hermite.hermpow "numpy.polynomial.hermite.hermpow") #### Notes In general, the (polynomial) division of one Hermite series by another results in quotient and remainder terms that are not in the Hermite polynomial basis set. Thus, to express these results as a Hermite series, it is necessary to “reproject” the results onto the Hermite basis set, which may produce “unintuitive” (but correct) results; see Examples section below. #### Examples ``` >>> from numpy.polynomial.hermite import hermdiv >>> hermdiv([ 52., 29., 52., 7., 6.], [0, 1, 2]) (array([1., 2., 3.]), array([0.])) >>> hermdiv([ 54., 31., 52., 7., 6.], [0, 1, 2]) (array([1., 2., 3.]), array([2., 2.])) >>> hermdiv([ 53., 30., 52., 7., 6.], [0, 1, 2]) (array([1., 2., 3.]), array([1., 1.])) ``` © 2005–2022 NumPy Developers Licensed under the 3-clause BSD License. <https://numpy.org/doc/1.23/reference/generated/numpy.polynomial.hermite.hermdiv.htmlnumpy.polynomial.hermite.hermpow ================================ polynomial.hermite.hermpow(*c*, *pow*, *maxpower=16*)[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/polynomial/hermite.py#L560-L594) Raise a Hermite series to a power. Returns the Hermite series `c` raised to the power `pow`. The argument `c` is a sequence of coefficients ordered from low to high. 
i.e., [1,2,3] is the series `P_0 + 2*P_1 + 3*P_2`. Parameters **c**array_like 1-D array of Hermite series coefficients ordered from low to high. **pow**integer Power to which the series will be raised. **maxpower**integer, optional Maximum power allowed. This is mainly to limit growth of the series to unmanageable size. Default is 16. Returns **coef**ndarray Hermite series of power. See also [`hermadd`](numpy.polynomial.hermite.hermadd#numpy.polynomial.hermite.hermadd "numpy.polynomial.hermite.hermadd"), [`hermsub`](numpy.polynomial.hermite.hermsub#numpy.polynomial.hermite.hermsub "numpy.polynomial.hermite.hermsub"), [`hermmulx`](numpy.polynomial.hermite.hermmulx#numpy.polynomial.hermite.hermmulx "numpy.polynomial.hermite.hermmulx"), [`hermmul`](numpy.polynomial.hermite.hermmul#numpy.polynomial.hermite.hermmul "numpy.polynomial.hermite.hermmul"), [`hermdiv`](numpy.polynomial.hermite.hermdiv#numpy.polynomial.hermite.hermdiv "numpy.polynomial.hermite.hermdiv") #### Examples ``` >>> from numpy.polynomial.hermite import hermpow >>> hermpow([1, 2, 3], 2) array([81., 52., 82., 12., 9.]) ``` © 2005–2022 NumPy Developers Licensed under the 3-clause BSD License. <https://numpy.org/doc/1.23/reference/generated/numpy.polynomial.hermite.hermpow.htmlnumpy.polynomial.hermite.hermval ================================ polynomial.hermite.hermval(*x*, *c*, *tensor=True*)[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/polynomial/hermite.py#L802-L895) Evaluate a Hermite series at points x. If `c` is of length `n + 1`, this function returns the value: \[p(x) = c_0 * H_0(x) + c_1 * H_1(x) + ... + c_n * H_n(x)\] The parameter `x` is converted to an array only if it is a tuple or a list, otherwise it is treated as a scalar. In either case, either `x` or its elements must support multiplication and addition both with themselves and with the elements of `c`. If `c` is a 1-D array, then `p(x)` will have the same shape as `x`. 
If `c` is multidimensional, then the shape of the result depends on the value of `tensor`. If `tensor` is true the shape will be c.shape[1:] + x.shape. If `tensor` is false the shape will be c.shape[1:]. Note that scalars have shape (,). Trailing zeros in the coefficients will be used in the evaluation, so they should be avoided if efficiency is a concern. Parameters **x**array_like, compatible object If `x` is a list or tuple, it is converted to an ndarray, otherwise it is left unchanged and treated as a scalar. In either case, `x` or its elements must support addition and multiplication with themselves and with the elements of `c`. **c**array_like Array of coefficients ordered so that the coefficients for terms of degree n are contained in c[n]. If `c` is multidimensional the remaining indices enumerate multiple polynomials. In the two dimensional case the coefficients may be thought of as stored in the columns of `c`. **tensor**boolean, optional If True, the shape of the coefficient array is extended with ones on the right, one for each dimension of `x`. Scalars have dimension 0 for this action. The result is that every column of coefficients in `c` is evaluated for every element of `x`. If False, `x` is broadcast over the columns of `c` for the evaluation. This keyword is useful when `c` is multidimensional. The default value is True. New in version 1.7.0. Returns **values**ndarray, algebra_like The shape of the return value is described above. 
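The 1-D scalar case of the evaluation described above can be sketched in plain Python (hypothetical name; numpy's implementation uses Clenshaw recursion, while this sketch evaluates the basis directly via the physicists' Hermite recurrence `H_{k+1}(x) = 2*x*H_k(x) - 2*k*H_{k-1}(x)`):

```python
def hermval_sketch(x, c):
    # p(x) = c[0]*H_0(x) + c[1]*H_1(x) + ... with H_0 = 1, H_1 = 2x and
    # the recurrence H_{k+1}(x) = 2*x*H_k(x) - 2*k*H_{k-1}(x).
    h_prev, h_cur = 1.0, 2.0 * x  # H_0(x), H_1(x)
    total = c[0] * h_prev
    if len(c) > 1:
        total += c[1] * h_cur
    for k in range(1, len(c) - 1):
        h_prev, h_cur = h_cur, 2.0 * x * h_cur - 2.0 * k * h_prev
        total += c[k + 1] * h_cur
    return total

hermval_sketch(1, [1, 2, 3])  # 11.0
```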
See also [`hermval2d`](numpy.polynomial.hermite.hermval2d#numpy.polynomial.hermite.hermval2d "numpy.polynomial.hermite.hermval2d"), [`hermgrid2d`](numpy.polynomial.hermite.hermgrid2d#numpy.polynomial.hermite.hermgrid2d "numpy.polynomial.hermite.hermgrid2d"), [`hermval3d`](numpy.polynomial.hermite.hermval3d#numpy.polynomial.hermite.hermval3d "numpy.polynomial.hermite.hermval3d"), [`hermgrid3d`](numpy.polynomial.hermite.hermgrid3d#numpy.polynomial.hermite.hermgrid3d "numpy.polynomial.hermite.hermgrid3d") #### Notes The evaluation uses Clenshaw recursion, aka synthetic division. #### Examples ``` >>> from numpy.polynomial.hermite import hermval >>> coef = [1,2,3] >>> hermval(1, coef) 11.0 >>> hermval([[1,2],[3,4]], coef) array([[ 11., 51.], [115., 203.]]) ``` © 2005–2022 NumPy Developers Licensed under the 3-clause BSD License. <https://numpy.org/doc/1.23/reference/generated/numpy.polynomial.hermite.hermval.htmlnumpy.polynomial.hermite.hermval2d ================================== polynomial.hermite.hermval2d(*x*, *y*, *c*)[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/polynomial/hermite.py#L898-L944) Evaluate a 2-D Hermite series at points (x, y). This function returns the values: \[p(x,y) = \sum_{i,j} c_{i,j} * H_i(x) * H_j(y)\] The parameters `x` and `y` are converted to arrays only if they are tuples or lists; otherwise they are treated as scalars, and they must have the same shape after conversion. In either case, either `x` and `y` or their elements must support multiplication and addition both with themselves and with the elements of `c`. If `c` is a 1-D array a one is implicitly appended to its shape to make it 2-D. The shape of the result will be c.shape[2:] + x.shape. Parameters **x, y**array_like, compatible objects The two dimensional series is evaluated at the points `(x, y)`, where `x` and `y` must have the same shape. 
If `x` or `y` is a list or tuple, it is first converted to an ndarray, otherwise it is left unchanged and if it isn’t an ndarray it is treated as a scalar. **c**array_like Array of coefficients ordered so that the coefficient of the term of multi-degree i,j is contained in `c[i,j]`. If `c` has dimension greater than two the remaining indices enumerate multiple sets of coefficients. Returns **values**ndarray, compatible object The values of the two dimensional polynomial at points formed with pairs of corresponding values from `x` and `y`. See also [`hermval`](numpy.polynomial.hermite.hermval#numpy.polynomial.hermite.hermval "numpy.polynomial.hermite.hermval"), [`hermgrid2d`](numpy.polynomial.hermite.hermgrid2d#numpy.polynomial.hermite.hermgrid2d "numpy.polynomial.hermite.hermgrid2d"), [`hermval3d`](numpy.polynomial.hermite.hermval3d#numpy.polynomial.hermite.hermval3d "numpy.polynomial.hermite.hermval3d"), [`hermgrid3d`](numpy.polynomial.hermite.hermgrid3d#numpy.polynomial.hermite.hermgrid3d "numpy.polynomial.hermite.hermgrid3d") #### Notes New in version 1.7.0. © 2005–2022 NumPy Developers Licensed under the 3-clause BSD License. <https://numpy.org/doc/1.23/reference/generated/numpy.polynomial.hermite.hermval2d.htmlnumpy.polynomial.hermite.hermval3d ================================== polynomial.hermite.hermval3d(*x*, *y*, *z*, *c*)[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/polynomial/hermite.py#L1000-L1048) Evaluate a 3-D Hermite series at points (x, y, z). This function returns the values: \[p(x,y,z) = \sum_{i,j,k} c_{i,j,k} * H_i(x) * H_j(y) * H_k(z)\] The parameters `x`, `y`, and `z` are converted to arrays only if they are tuples or lists; otherwise they are treated as scalars, and they must have the same shape after conversion. In either case, either `x`, `y`, and `z` or their elements must support multiplication and addition both with themselves and with the elements of `c`. 
If `c` has fewer than 3 dimensions, ones are implicitly appended to its shape to make it 3-D. The shape of the result will be c.shape[3:] + x.shape. Parameters **x, y, z**array_like, compatible object The three dimensional series is evaluated at the points `(x, y, z)`, where `x`, `y`, and `z` must have the same shape. If any of `x`, `y`, or `z` is a list or tuple, it is first converted to an ndarray, otherwise it is left unchanged and if it isn’t an ndarray it is treated as a scalar. **c**array_like Array of coefficients ordered so that the coefficient of the term of multi-degree i,j,k is contained in `c[i,j,k]`. If `c` has dimension greater than 3 the remaining indices enumerate multiple sets of coefficients. Returns **values**ndarray, compatible object The values of the multidimensional polynomial on points formed with triples of corresponding values from `x`, `y`, and `z`. See also [`hermval`](numpy.polynomial.hermite.hermval#numpy.polynomial.hermite.hermval "numpy.polynomial.hermite.hermval"), [`hermval2d`](numpy.polynomial.hermite.hermval2d#numpy.polynomial.hermite.hermval2d "numpy.polynomial.hermite.hermval2d"), [`hermgrid2d`](numpy.polynomial.hermite.hermgrid2d#numpy.polynomial.hermite.hermgrid2d "numpy.polynomial.hermite.hermgrid2d"), [`hermgrid3d`](numpy.polynomial.hermite.hermgrid3d#numpy.polynomial.hermite.hermgrid3d "numpy.polynomial.hermite.hermgrid3d") #### Notes New in version 1.7.0. © 2005–2022 NumPy Developers Licensed under the 3-clause BSD License. <https://numpy.org/doc/1.23/reference/generated/numpy.polynomial.hermite.hermval3d.htmlnumpy.polynomial.hermite.hermgrid2d =================================== polynomial.hermite.hermgrid2d(*x*, *y*, *c*)[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/polynomial/hermite.py#L947-L997) Evaluate a 2-D Hermite series on the Cartesian product of x and y. 
This function returns the values: \[p(a,b) = \sum_{i,j} c_{i,j} * H_i(a) * H_j(b)\] where the points `(a, b)` consist of all pairs formed by taking `a` from `x` and `b` from `y`. The resulting points form a grid with `x` in the first dimension and `y` in the second. The parameters `x` and `y` are converted to arrays only if they are tuples or lists; otherwise they are treated as scalars. In either case, either `x` and `y` or their elements must support multiplication and addition both with themselves and with the elements of `c`. If `c` has fewer than two dimensions, ones are implicitly appended to its shape to make it 2-D. The shape of the result will be c.shape[2:] + x.shape + y.shape. Parameters **x, y**array_like, compatible objects The two dimensional series is evaluated at the points in the Cartesian product of `x` and `y`. If `x` or `y` is a list or tuple, it is first converted to an ndarray, otherwise it is left unchanged and, if it isn’t an ndarray, it is treated as a scalar. **c**array_like Array of coefficients ordered so that the coefficients for terms of degree i,j are contained in `c[i,j]`. If `c` has dimension greater than two the remaining indices enumerate multiple sets of coefficients. Returns **values**ndarray, compatible object The values of the two dimensional polynomial at points in the Cartesian product of `x` and `y`. See also [`hermval`](numpy.polynomial.hermite.hermval#numpy.polynomial.hermite.hermval "numpy.polynomial.hermite.hermval"), [`hermval2d`](numpy.polynomial.hermite.hermval2d#numpy.polynomial.hermite.hermval2d "numpy.polynomial.hermite.hermval2d"), [`hermval3d`](numpy.polynomial.hermite.hermval3d#numpy.polynomial.hermite.hermval3d "numpy.polynomial.hermite.hermval3d"), [`hermgrid3d`](numpy.polynomial.hermite.hermgrid3d#numpy.polynomial.hermite.hermgrid3d "numpy.polynomial.hermite.hermgrid3d") #### Notes New in version 1.7.0. © 2005–2022 NumPy Developers Licensed under the 3-clause BSD License. 
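The grid evaluation just described can be sketched in plain Python (hypothetical names; assumes `c` is a list of lists of coefficients and `xs`, `ys` are sequences of scalars, with the Hermite basis built from the recurrence `H_{k+1}(x) = 2*x*H_k(x) - 2*k*H_{k-1}(x)`):

```python
def herm_basis(x, n):
    # H_0(x) .. H_{n-1}(x) via H_{k+1}(x) = 2*x*H_k(x) - 2*k*H_{k-1}(x).
    vals = [1.0]
    if n > 1:
        vals.append(2.0 * x)
    for k in range(1, n - 1):
        vals.append(2.0 * x * vals[k] - 2.0 * k * vals[k - 1])
    return vals

def hermgrid2d_sketch(xs, ys, c):
    # p(a, b) = sum_{i,j} c[i][j] * H_i(a) * H_j(b) for every (a, b) in
    # the Cartesian product of xs and ys; result is len(xs) x len(ys).
    ni, nj = len(c), len(c[0])
    out = []
    for a in xs:
        hx = herm_basis(a, ni)
        row = []
        for b in ys:
            hy = herm_basis(b, nj)
            row.append(sum(c[i][j] * hx[i] * hy[j]
                           for i in range(ni) for j in range(nj)))
        out.append(row)
    return out
```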
<https://numpy.org/doc/1.23/reference/generated/numpy.polynomial.hermite.hermgrid2d.htmlnumpy.polynomial.hermite.hermgrid3d =================================== polynomial.hermite.hermgrid3d(*x*, *y*, *z*, *c*)[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/polynomial/hermite.py#L1051-L1104) Evaluate a 3-D Hermite series on the Cartesian product of x, y, and z. This function returns the values: \[p(a,b,c) = \sum_{i,j,k} c_{i,j,k} * H_i(a) * H_j(b) * H_k(c)\] where the points `(a, b, c)` consist of all triples formed by taking `a` from `x`, `b` from `y`, and `c` from `z`. The resulting points form a grid with `x` in the first dimension, `y` in the second, and `z` in the third. The parameters `x`, `y`, and `z` are converted to arrays only if they are tuples or lists; otherwise they are treated as scalars. In either case, either `x`, `y`, and `z` or their elements must support multiplication and addition both with themselves and with the elements of `c`. If `c` has fewer than three dimensions, ones are implicitly appended to its shape to make it 3-D. The shape of the result will be c.shape[3:] + x.shape + y.shape + z.shape. Parameters **x, y, z**array_like, compatible objects The three dimensional series is evaluated at the points in the Cartesian product of `x`, `y`, and `z`. If `x`, `y`, or `z` is a list or tuple, it is first converted to an ndarray, otherwise it is left unchanged and, if it isn’t an ndarray, it is treated as a scalar. **c**array_like Array of coefficients ordered so that the coefficients for terms of degree i,j,k are contained in `c[i,j,k]`. If `c` has dimension greater than three the remaining indices enumerate multiple sets of coefficients. Returns **values**ndarray, compatible object The values of the three dimensional polynomial at points in the Cartesian product of `x`, `y`, and `z`. 
See also [`hermval`](numpy.polynomial.hermite.hermval#numpy.polynomial.hermite.hermval "numpy.polynomial.hermite.hermval"), [`hermval2d`](numpy.polynomial.hermite.hermval2d#numpy.polynomial.hermite.hermval2d "numpy.polynomial.hermite.hermval2d"), [`hermgrid2d`](numpy.polynomial.hermite.hermgrid2d#numpy.polynomial.hermite.hermgrid2d "numpy.polynomial.hermite.hermgrid2d"), [`hermval3d`](numpy.polynomial.hermite.hermval3d#numpy.polynomial.hermite.hermval3d "numpy.polynomial.hermite.hermval3d") #### Notes New in version 1.7.0. © 2005–2022 NumPy Developers Licensed under the 3-clause BSD License. <https://numpy.org/doc/1.23/reference/generated/numpy.polynomial.hermite.hermgrid3d.htmlnumpy.polynomial.hermite.hermder ================================ polynomial.hermite.hermder(*c*, *m=1*, *scl=1*, *axis=0*)[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/polynomial/hermite.py#L597-L677) Differentiate a Hermite series. Returns the Hermite series coefficients `c` differentiated `m` times along `axis`. At each iteration the result is multiplied by `scl` (the scaling factor is for use in a linear change of variable). The argument `c` is an array of coefficients from low to high degree along each axis, e.g., [1,2,3] represents the series `1*H_0 + 2*H_1 + 3*H_2` while [[1,2],[1,2]] represents `1*H_0(x)*H_0(y) + 1*H_1(x)*H_0(y) + 2*H_0(x)*H_1(y) + 2*H_1(x)*H_1(y)` if axis=0 is `x` and axis=1 is `y`. Parameters **c**array_like Array of Hermite series coefficients. If `c` is multidimensional the different axis correspond to different variables with the degree in each axis given by the corresponding index. **m**int, optional Number of derivatives taken, must be non-negative. (Default: 1) **scl**scalar, optional Each differentiation is multiplied by `scl`. The end result is multiplication by `scl**m`. This is for use in a linear change of variable. (Default: 1) **axis**int, optional Axis over which the derivative is taken. (Default: 0). New in version 1.7.0. 
Returns **der**ndarray Hermite series of the derivative. See also [`hermint`](numpy.polynomial.hermite.hermint#numpy.polynomial.hermite.hermint "numpy.polynomial.hermite.hermint") #### Notes In general, the result of differentiating a Hermite series does not resemble the same operation on a power series. Thus the result of this function may be “unintuitive,” albeit correct; see Examples section below. #### Examples ``` >>> from numpy.polynomial.hermite import hermder >>> hermder([ 1. , 0.5, 0.5, 0.5]) array([1., 2., 3.]) >>> hermder([-0.5, 1./2., 1./8., 1./12., 1./16.], m=2) array([1., 2., 3.]) ``` © 2005–2022 NumPy Developers Licensed under the 3-clause BSD License. <https://numpy.org/doc/1.23/reference/generated/numpy.polynomial.hermite.hermder.htmlnumpy.polynomial.hermite.hermint ================================ polynomial.hermite.hermint(*c*, *m=1*, *k=[]*, *lbnd=0*, *scl=1*, *axis=0*)[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/polynomial/hermite.py#L680-L799) Integrate a Hermite series. Returns the Hermite series coefficients `c` integrated `m` times from `lbnd` along `axis`. At each iteration the resulting series is **multiplied** by `scl` and an integration constant, `k`, is added. The scaling factor is for use in a linear change of variable. (“Buyer beware”: note that, depending on what one is doing, one may want `scl` to be the reciprocal of what one might expect; for more information, see the Notes section below.) The argument `c` is an array of coefficients from low to high degree along each axis, e.g., [1,2,3] represents the series `H_0 + 2*H_1 + 3*H_2` while [[1,2],[1,2]] represents `1*H_0(x)*H_0(y) + 1*H_1(x)*H_0(y) + 2*H_0(x)*H_1(y) + 2*H_1(x)*H_1(y)` if axis=0 is `x` and axis=1 is `y`. Parameters **c**array_like Array of Hermite series coefficients. If c is multidimensional the different axis correspond to different variables with the degree in each axis given by the corresponding index. 
**m**int, optional Order of integration, must be positive. (Default: 1) **k**{[], list, scalar}, optional Integration constant(s). The value of the first integral at `lbnd` is the first value in the list, the value of the second integral at `lbnd` is the second value, etc. If `k == []` (the default), all constants are set to zero. If `m == 1`, a single scalar can be given instead of a list. **lbnd**scalar, optional The lower bound of the integral. (Default: 0) **scl**scalar, optional Following each integration the result is *multiplied* by `scl` before the integration constant is added. (Default: 1) **axis**int, optional Axis over which the integral is taken. (Default: 0). New in version 1.7.0. Returns **S**ndarray Hermite series coefficients of the integral. Raises ValueError If `m < 0`, `len(k) > m`, `np.ndim(lbnd) != 0`, or `np.ndim(scl) != 0`. See also [`hermder`](numpy.polynomial.hermite.hermder#numpy.polynomial.hermite.hermder "numpy.polynomial.hermite.hermder") #### Notes Note that the result of each integration is *multiplied* by `scl`. Why is this important to note? Say one is making a linear change of variable \(u = ax + b\) in an integral relative to `x`. Then \(dx = du/a\), so one will need to set `scl` equal to \(1/a\) - perhaps not what one would have first thought. Also note that, in general, the result of integrating a C-series needs to be “reprojected” onto the C-series basis set. Thus, typically, the result of this function is “unintuitive,” albeit correct; see Examples section below. #### Examples ``` >>> from numpy.polynomial.hermite import hermint >>> hermint([1,2,3]) # integrate once, value 0 at 0. array([1. , 0.5, 0.5, 0.5]) >>> hermint([1,2,3], m=2) # integrate twice, value & deriv 0 at 0 array([-0.5 , 0.5 , 0.125 , 0.08333333, 0.0625 ]) # may vary >>> hermint([1,2,3], k=1) # integrate once, value 1 at 0. array([2. , 0.5, 0.5, 0.5]) >>> hermint([1,2,3], lbnd=-1) # integrate once, value 0 at -1 array([-2. 
, 0.5, 0.5, 0.5]) >>> hermint([1,2,3], m=2, k=[1,2], lbnd=-1) array([ 1.66666667, -0.5 , 0.125 , 0.08333333, 0.0625 ]) # may vary ```

numpy.polynomial.hermite.hermfromroots
======================================

polynomial.hermite.hermfromroots(*roots*)[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/polynomial/hermite.py#L257-L310) Generate a Hermite series with given roots. The function returns the coefficients of the polynomial \[p(x) = (x - r_0) * (x - r_1) * ... * (x - r_n),\] in Hermite form, where the `r_n` are the roots specified in [`roots`](numpy.roots#numpy.roots "numpy.roots"). If a zero has multiplicity n, then it must appear in [`roots`](numpy.roots#numpy.roots "numpy.roots") n times. For instance, if 2 is a root of multiplicity three and 3 is a root of multiplicity 2, then [`roots`](numpy.roots#numpy.roots "numpy.roots") looks something like [2, 2, 2, 3, 3]. The roots can appear in any order. If the returned coefficients are `c`, then \[p(x) = c_0 + c_1 * H_1(x) + ... + c_n * H_n(x)\] The coefficient of the last term is not generally 1 for monic polynomials in Hermite form. Parameters **roots**array_like Sequence containing the roots. Returns **out**ndarray 1-D array of coefficients. If all roots are real then `out` is a real array, if some of the roots are complex, then `out` is complex even if all the coefficients in the result are real (see Examples below).
See also [`numpy.polynomial.polynomial.polyfromroots`](numpy.polynomial.polynomial.polyfromroots#numpy.polynomial.polynomial.polyfromroots "numpy.polynomial.polynomial.polyfromroots") [`numpy.polynomial.legendre.legfromroots`](numpy.polynomial.legendre.legfromroots#numpy.polynomial.legendre.legfromroots "numpy.polynomial.legendre.legfromroots") [`numpy.polynomial.laguerre.lagfromroots`](numpy.polynomial.laguerre.lagfromroots#numpy.polynomial.laguerre.lagfromroots "numpy.polynomial.laguerre.lagfromroots") [`numpy.polynomial.chebyshev.chebfromroots`](numpy.polynomial.chebyshev.chebfromroots#numpy.polynomial.chebyshev.chebfromroots "numpy.polynomial.chebyshev.chebfromroots") [`numpy.polynomial.hermite_e.hermefromroots`](numpy.polynomial.hermite_e.hermefromroots#numpy.polynomial.hermite_e.hermefromroots "numpy.polynomial.hermite_e.hermefromroots") #### Examples ``` >>> from numpy.polynomial.hermite import hermfromroots, hermval >>> coef = hermfromroots((-1, 0, 1)) >>> hermval((-1, 0, 1), coef) array([0., 0., 0.]) >>> coef = hermfromroots((-1j, 1j)) >>> hermval((-1j, 1j), coef) array([0.+0.j, 0.+0.j]) ```

numpy.polynomial.hermite.hermroots
==================================

polynomial.hermite.hermroots(*c*)[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/polynomial/hermite.py#L1452-L1513) Compute the roots of a Hermite series. Return the roots (a.k.a. “zeros”) of the polynomial \[p(x) = \sum_i c[i] * H_i(x).\] Parameters **c**1-D array_like 1-D array of coefficients. Returns **out**ndarray Array of the roots of the series. If all the roots are real, then `out` is also real, otherwise it is complex.
See also [`numpy.polynomial.polynomial.polyroots`](numpy.polynomial.polynomial.polyroots#numpy.polynomial.polynomial.polyroots "numpy.polynomial.polynomial.polyroots") [`numpy.polynomial.legendre.legroots`](numpy.polynomial.legendre.legroots#numpy.polynomial.legendre.legroots "numpy.polynomial.legendre.legroots") [`numpy.polynomial.laguerre.lagroots`](numpy.polynomial.laguerre.lagroots#numpy.polynomial.laguerre.lagroots "numpy.polynomial.laguerre.lagroots") [`numpy.polynomial.chebyshev.chebroots`](numpy.polynomial.chebyshev.chebroots#numpy.polynomial.chebyshev.chebroots "numpy.polynomial.chebyshev.chebroots") [`numpy.polynomial.hermite_e.hermeroots`](numpy.polynomial.hermite_e.hermeroots#numpy.polynomial.hermite_e.hermeroots "numpy.polynomial.hermite_e.hermeroots") #### Notes The root estimates are obtained as the eigenvalues of the companion matrix. Roots far from the origin of the complex plane may have large errors due to the numerical instability of the series for such values. Roots with multiplicity greater than 1 will also show larger errors as the value of the series near such points is relatively insensitive to errors in the roots. Isolated roots near the origin can be improved by a few iterations of Newton’s method. The Hermite series basis polynomials aren’t powers of `x` so the results of this function may seem unintuitive. #### Examples ``` >>> from numpy.polynomial.hermite import hermroots, hermfromroots >>> coef = hermfromroots([-1, 0, 1]) >>> coef array([0. , 0.25 , 0. , 0.125]) >>> hermroots(coef) array([-1.00000000e+00, -1.38777878e-17, 1.00000000e+00]) ```
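As the Notes mention, root estimates can be improved by a few iterations of Newton’s method; a minimal sketch using `hermval` and `hermder` (the fixed three-iteration count is an arbitrary choice for illustration):

```python
import numpy as np
from numpy.polynomial.hermite import hermder, hermfromroots, hermroots, hermval

# Series with known simple roots -1, 0, 1 (i.e. x**3 - x in Hermite form).
coef = hermfromroots([-1.0, 0.0, 1.0])
roots = hermroots(coef)

# Newton polish: x <- x - p(x)/p'(x), with p and p' evaluated
# in the Hermite basis via hermval.
dcoef = hermder(coef)
for _ in range(3):
    roots = roots - hermval(roots, coef) / hermval(roots, dcoef)

print(np.sort(roots))  # close to [-1., 0., 1.]
```

This works for simple roots, where `p'` is nonzero at each root; for multiple roots Newton’s method converges only linearly and a modified update would be needed.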
numpy.polynomial.hermite.hermvander
===================================

polynomial.hermite.hermvander(*x*, *deg*)[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/polynomial/hermite.py#L1107-L1165) Pseudo-Vandermonde matrix of given degree. Returns the pseudo-Vandermonde matrix of degree `deg` and sample points `x`. The pseudo-Vandermonde matrix is defined by \[V[..., i] = H_i(x),\] where `0 <= i <= deg`. The leading indices of `V` index the elements of `x` and the last index is the degree of the Hermite polynomial. If `c` is a 1-D array of coefficients of length `n + 1` and `V` is the array `V = hermvander(x, n)`, then `np.dot(V, c)` and `hermval(x, c)` are the same up to roundoff. This equivalence is useful both for least squares fitting and for the evaluation of a large number of Hermite series of the same degree and sample points. Parameters **x**array_like Array of points. The dtype is converted to float64 or complex128 depending on whether any of the elements are complex. If `x` is scalar it is converted to a 1-D array. **deg**int Degree of the resulting matrix. Returns **vander**ndarray The pseudo-Vandermonde matrix. The shape of the returned matrix is `x.shape + (deg + 1,)`, where the last index is the degree of the corresponding Hermite polynomial. The dtype will be the same as the converted `x`. #### Examples ``` >>> from numpy.polynomial.hermite import hermvander >>> x = np.array([-1, 0, 1]) >>> hermvander(x, 3) array([[ 1., -2., 2., 4.], [ 1., 0., -2., -0.], [ 1., 2., 2., -4.]]) ```
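The equivalence of `np.dot(V, c)` and `hermval(x, c)` stated above is easy to check numerically; a small sketch:

```python
import numpy as np
from numpy.polynomial.hermite import hermval, hermvander

x = np.linspace(-1.0, 1.0, 5)
c = np.array([1.0, 2.0, 3.0])      # coefficients of H_0, H_1, H_2

V = hermvander(x, len(c) - 1)      # shape (5, 3)
print(np.allclose(np.dot(V, c), hermval(x, c)))  # True up to roundoff
```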
numpy.polynomial.hermite.hermvander2d
=====================================

polynomial.hermite.hermvander2d(*x*, *y*, *deg*)[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/polynomial/hermite.py#L1168-L1218) Pseudo-Vandermonde matrix of given degrees. Returns the pseudo-Vandermonde matrix of degrees `deg` and sample points `(x, y)`. The pseudo-Vandermonde matrix is defined by \[V[..., (deg[1] + 1)*i + j] = H_i(x) * H_j(y),\] where `0 <= i <= deg[0]` and `0 <= j <= deg[1]`. The leading indices of `V` index the points `(x, y)` and the last index encodes the degrees of the Hermite polynomials. If `V = hermvander2d(x, y, [xdeg, ydeg])`, then the columns of `V` correspond to the elements of a 2-D coefficient array `c` of shape (xdeg + 1, ydeg + 1) in the order \[c_{00}, c_{01}, c_{02} ... , c_{10}, c_{11}, c_{12} ...\] and `np.dot(V, c.flat)` and `hermval2d(x, y, c)` will be the same up to roundoff. This equivalence is useful both for least squares fitting and for the evaluation of a large number of 2-D Hermite series of the same degrees and sample points. Parameters **x, y**array_like Arrays of point coordinates, all of the same shape. The dtypes will be converted to either float64 or complex128 depending on whether any of the elements are complex. Scalars are converted to 1-D arrays. **deg**list of ints List of maximum degrees of the form [x_deg, y_deg]. Returns **vander2d**ndarray The shape of the returned matrix is `x.shape + (order,)`, where \(order = (deg[0]+1)*(deg[1]+1)\). The dtype will be the same as the converted `x` and `y`.
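The column ordering described above can likewise be checked against `hermval2d`; a minimal sketch (using `c.ravel()` for the flattened coefficients):

```python
import numpy as np
from numpy.polynomial.hermite import hermval2d, hermvander2d

x = np.array([0.0, 0.5, 1.0])
y = np.array([1.0, 0.5, 0.0])
c = np.arange(6.0).reshape(2, 3)   # shape (xdeg + 1, ydeg + 1) = (2, 3)

V = hermvander2d(x, y, [1, 2])     # shape (3, (1+1)*(2+1)) = (3, 6)
print(np.allclose(np.dot(V, c.ravel()), hermval2d(x, y, c)))  # True
```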
See also [`hermvander`](numpy.polynomial.hermite.hermvander#numpy.polynomial.hermite.hermvander "numpy.polynomial.hermite.hermvander"), [`hermvander3d`](numpy.polynomial.hermite.hermvander3d#numpy.polynomial.hermite.hermvander3d "numpy.polynomial.hermite.hermvander3d"), [`hermval2d`](numpy.polynomial.hermite.hermval2d#numpy.polynomial.hermite.hermval2d "numpy.polynomial.hermite.hermval2d"), [`hermval3d`](numpy.polynomial.hermite.hermval3d#numpy.polynomial.hermite.hermval3d "numpy.polynomial.hermite.hermval3d") #### Notes New in version 1.7.0.

numpy.polynomial.hermite.hermvander3d
=====================================

polynomial.hermite.hermvander3d(*x*, *y*, *z*, *deg*)[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/polynomial/hermite.py#L1221-L1272) Pseudo-Vandermonde matrix of given degrees. Returns the pseudo-Vandermonde matrix of degrees `deg` and sample points `(x, y, z)`. If `l, m, n` are the given degrees in `x, y, z`, then the pseudo-Vandermonde matrix is defined by \[V[..., (m+1)(n+1)i + (n+1)j + k] = H_i(x)*H_j(y)*H_k(z),\] where `0 <= i <= l`, `0 <= j <= m`, and `0 <= k <= n`. The leading indices of `V` index the points `(x, y, z)` and the last index encodes the degrees of the Hermite polynomials. If `V = hermvander3d(x, y, z, [xdeg, ydeg, zdeg])`, then the columns of `V` correspond to the elements of a 3-D coefficient array `c` of shape (xdeg + 1, ydeg + 1, zdeg + 1) in the order \[c_{000}, c_{001}, c_{002},... , c_{010}, c_{011}, c_{012},...\] and `np.dot(V, c.flat)` and `hermval3d(x, y, z, c)` will be the same up to roundoff. This equivalence is useful both for least squares fitting and for the evaluation of a large number of 3-D Hermite series of the same degrees and sample points. Parameters **x, y, z**array_like Arrays of point coordinates, all of the same shape.
The dtypes will be converted to either float64 or complex128 depending on whether any of the elements are complex. Scalars are converted to 1-D arrays. **deg**list of ints List of maximum degrees of the form [x_deg, y_deg, z_deg]. Returns **vander3d**ndarray The shape of the returned matrix is `x.shape + (order,)`, where \(order = (deg[0]+1)*(deg[1]+1)*(deg[2]+1)\). The dtype will be the same as the converted `x`, `y`, and `z`. See also [`hermvander`](numpy.polynomial.hermite.hermvander#numpy.polynomial.hermite.hermvander "numpy.polynomial.hermite.hermvander"), [`hermvander2d`](numpy.polynomial.hermite.hermvander2d#numpy.polynomial.hermite.hermvander2d "numpy.polynomial.hermite.hermvander2d"), [`hermval2d`](numpy.polynomial.hermite.hermval2d#numpy.polynomial.hermite.hermval2d "numpy.polynomial.hermite.hermval2d"), [`hermval3d`](numpy.polynomial.hermite.hermval3d#numpy.polynomial.hermite.hermval3d "numpy.polynomial.hermite.hermval3d") #### Notes New in version 1.7.0.

numpy.polynomial.hermite.hermgauss
==================================

polynomial.hermite.hermgauss(*deg*)[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/polynomial/hermite.py#L1559-L1623) Gauss-Hermite quadrature. Computes the sample points and weights for Gauss-Hermite quadrature. These sample points and weights will correctly integrate polynomials of degree \(2*deg - 1\) or less over the interval \([-\inf, \inf]\) with the weight function \(f(x) = \exp(-x^2)\). Parameters **deg**int Number of sample points and weights. It must be >= 1. Returns **x**ndarray 1-D ndarray containing the sample points. **y**ndarray 1-D ndarray containing the weights. #### Notes New in version 1.7.0. The results have only been tested up to degree 100; higher degrees may be problematic.
The weights are determined by using the fact that \[w_k = c / (H'_n(x_k) * H_{n-1}(x_k))\] where \(c\) is a constant independent of \(k\) and \(x_k\) is the k’th root of \(H_n\), and then scaling the results to get the right value when integrating 1.

numpy.polynomial.hermite.hermweight
===================================

polynomial.hermite.hermweight(*x*)[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/polynomial/hermite.py#L1626-L1651) Weight function of the Hermite polynomials. The weight function is \(\exp(-x^2)\) and the interval of integration is \([-\inf, \inf]\). The Hermite polynomials are orthogonal, but not normalized, with respect to this weight function. Parameters **x**array_like Values at which the weight function will be computed. Returns **w**ndarray The weight function at `x`. #### Notes New in version 1.7.0.

numpy.polynomial.hermite.hermcompanion
======================================

polynomial.hermite.hermcompanion(*c*)[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/polynomial/hermite.py#L1407-L1449) Return the scaled companion matrix of c. The basis polynomials are scaled so that the companion matrix is symmetric when `c` is an Hermite basis polynomial. This provides better eigenvalue estimates than the unscaled case and for basis polynomials the eigenvalues are guaranteed to be real if [`numpy.linalg.eigvalsh`](numpy.linalg.eigvalsh#numpy.linalg.eigvalsh "numpy.linalg.eigvalsh") is used to obtain them. Parameters **c**array_like 1-D array of Hermite series coefficients ordered from low to high degree. Returns **mat**ndarray Scaled companion matrix of dimensions (deg, deg). #### Notes New in version 1.7.0.
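For a pure basis polynomial the scaled companion matrix is symmetric, so `numpy.linalg.eigvalsh` applies and its (guaranteed real) eigenvalues are the roots; a sketch for `H_3`, whose roots are 0 and ±sqrt(3/2):

```python
import numpy as np
from numpy.polynomial.hermite import hermcompanion

# Coefficients of the basis polynomial H_3 = 8*x**3 - 12*x.
m = hermcompanion([0.0, 0.0, 0.0, 1.0])

print(np.allclose(m, m.T))         # symmetric for a basis polynomial
roots = np.linalg.eigvalsh(m)      # ascending order, guaranteed real
print(np.allclose(roots, [-np.sqrt(1.5), 0.0, np.sqrt(1.5)]))
```

Because the scaling is a similarity transform, the eigenvalues are unchanged; only the conditioning of the eigenvalue problem improves.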
numpy.polynomial.hermite.hermfit
================================

polynomial.hermite.hermfit(*x*, *y*, *deg*, *rcond=None*, *full=False*, *w=None*)[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/polynomial/hermite.py#L1275-L1404) Least squares fit of Hermite series to data. Return the coefficients of a Hermite series of degree `deg` that is the least squares fit to the data values `y` given at points `x`. If `y` is 1-D the returned coefficients will also be 1-D. If `y` is 2-D multiple fits are done, one for each column of `y`, and the resulting coefficients are stored in the corresponding columns of a 2-D return. The fitted polynomial(s) are in the form \[p(x) = c_0 + c_1 * H_1(x) + ... + c_n * H_n(x),\] where `n` is `deg`. Parameters **x**array_like, shape (M,) x-coordinates of the M sample points `(x[i], y[i])`. **y**array_like, shape (M,) or (M, K) y-coordinates of the sample points. Several data sets of sample points sharing the same x-coordinates can be fitted at once by passing in a 2D-array that contains one dataset per column. **deg**int or 1-D array_like Degree(s) of the fitting polynomials. If `deg` is a single integer all terms up to and including the `deg`’th term are included in the fit. For NumPy versions >= 1.11.0 a list of integers specifying the degrees of the terms to include may be used instead. **rcond**float, optional Relative condition number of the fit. Singular values smaller than this relative to the largest singular value will be ignored. The default value is len(x)*eps, where eps is the relative precision of the float type, about 2e-16 in most cases. **full**bool, optional Switch determining nature of return value. When it is False (the default) just the coefficients are returned, when True diagnostic information from the singular value decomposition is also returned.
**w**array_like, shape (`M`,), optional Weights. If not None, the weight `w[i]` applies to the unsquared residual `y[i] - y_hat[i]` at `x[i]`. Ideally the weights are chosen so that the errors of the products `w[i]*y[i]` all have the same variance. When using inverse-variance weighting, use `w[i] = 1/sigma(y[i])`. The default value is None. Returns **coef**ndarray, shape (deg + 1,) or (deg + 1, K) Hermite coefficients ordered from low to high. If `y` was 2-D, the coefficients for the data in column k of `y` are in column `k`. **[residuals, rank, singular_values, rcond]**list These values are only returned if `full == True` * residuals – sum of squared residuals of the least squares fit * rank – the numerical rank of the scaled Vandermonde matrix * singular_values – singular values of the scaled Vandermonde matrix * rcond – value of `rcond`. For more details, see [`numpy.linalg.lstsq`](numpy.linalg.lstsq#numpy.linalg.lstsq "numpy.linalg.lstsq"). Warns RankWarning The rank of the coefficient matrix in the least-squares fit is deficient. The warning is only raised if `full == False`.
The warnings can be turned off by ``` >>> import warnings >>> warnings.simplefilter('ignore', np.RankWarning) ``` See also [`numpy.polynomial.chebyshev.chebfit`](numpy.polynomial.chebyshev.chebfit#numpy.polynomial.chebyshev.chebfit "numpy.polynomial.chebyshev.chebfit") [`numpy.polynomial.legendre.legfit`](numpy.polynomial.legendre.legfit#numpy.polynomial.legendre.legfit "numpy.polynomial.legendre.legfit") [`numpy.polynomial.laguerre.lagfit`](numpy.polynomial.laguerre.lagfit#numpy.polynomial.laguerre.lagfit "numpy.polynomial.laguerre.lagfit") [`numpy.polynomial.polynomial.polyfit`](numpy.polynomial.polynomial.polyfit#numpy.polynomial.polynomial.polyfit "numpy.polynomial.polynomial.polyfit") [`numpy.polynomial.hermite_e.hermefit`](numpy.polynomial.hermite_e.hermefit#numpy.polynomial.hermite_e.hermefit "numpy.polynomial.hermite_e.hermefit") [`hermval`](numpy.polynomial.hermite.hermval#numpy.polynomial.hermite.hermval "numpy.polynomial.hermite.hermval") Evaluates a Hermite series. [`hermvander`](numpy.polynomial.hermite.hermvander#numpy.polynomial.hermite.hermvander "numpy.polynomial.hermite.hermvander") Vandermonde matrix of Hermite series. [`hermweight`](numpy.polynomial.hermite.hermweight#numpy.polynomial.hermite.hermweight "numpy.polynomial.hermite.hermweight") Hermite weight function [`numpy.linalg.lstsq`](numpy.linalg.lstsq#numpy.linalg.lstsq "numpy.linalg.lstsq") Computes a least-squares fit from the matrix. [`scipy.interpolate.UnivariateSpline`](https://docs.scipy.org/doc/scipy/reference/generated/scipy.interpolate.UnivariateSpline.html#scipy.interpolate.UnivariateSpline "(in SciPy v1.8.1)") Computes spline fits. #### Notes The solution is the coefficients of the Hermite series `p` that minimizes the sum of the weighted squared errors \[E = \sum_j w_j^2 * |y_j - p(x_j)|^2,\] where the \(w_j\) are the weights. 
This problem is solved by setting up the (typically) overdetermined matrix equation \[V(x) * c = w * y,\] where `V` is the weighted pseudo Vandermonde matrix of `x`, `c` are the coefficients to be solved for, `w` are the weights, `y` are the observed values. This equation is then solved using the singular value decomposition of `V`. If some of the singular values of `V` are so small that they are neglected, then a [`RankWarning`](numpy.rankwarning#numpy.RankWarning "numpy.RankWarning") will be issued. This means that the coefficient values may be poorly determined. Using a lower order fit will usually get rid of the warning. The `rcond` parameter can also be set to a value smaller than its default, but the resulting fit may be spurious and have large contributions from roundoff error. Fits using Hermite series are probably most useful when the data can be approximated by `sqrt(w(x)) * p(x)`, where `w(x)` is the Hermite weight. In that case the weight `sqrt(w(x[i]))` should be used together with data values `y[i]/sqrt(w(x[i]))`. The weight function is available as [`hermweight`](numpy.polynomial.hermite.hermweight#numpy.polynomial.hermite.hermweight "numpy.polynomial.hermite.hermweight"). #### References 1 Wikipedia, “Curve fitting”, <https://en.wikipedia.org/wiki/Curve_fitting> #### Examples ``` >>> from numpy.polynomial.hermite import hermfit, hermval >>> x = np.linspace(-10, 10) >>> err = np.random.randn(len(x))/10 >>> y = hermval(x, [1, 2, 3]) + err >>> hermfit(x, y, 2) array([1.0218, 1.9986, 2.9999]) # may vary ```

numpy.polynomial.hermite.hermtrim
=================================

polynomial.hermite.hermtrim(*c*, *tol=0*)[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/polynomial/polyutils.py#L156-L208) Remove “small” “trailing” coefficients from a polynomial.
“Small” means “small in absolute value” and is controlled by the parameter `tol`; “trailing” means highest order coefficient(s), e.g., in `[0, 1, 1, 0, 0]` (which represents `0 + x + x**2 + 0*x**3 + 0*x**4`) both the 3-rd and 4-th order coefficients would be “trimmed.” Parameters **c**array_like 1-d array of coefficients, ordered from lowest order to highest. **tol**number, optional Trailing (i.e., highest order) elements with absolute value less than or equal to `tol` (default value is zero) are removed. Returns **trimmed**ndarray 1-d array with trailing zeros removed. If the resulting series would be empty, a series containing a single zero is returned. Raises ValueError If `tol` < 0 See also `trimseq` #### Examples ``` >>> from numpy.polynomial import polyutils as pu >>> pu.trimcoef((0,0,3,0,5,0,0)) array([0., 0., 3., 0., 5.]) >>> pu.trimcoef((0,0,1e-3,0,1e-5,0,0),1e-3) # item == tol is trimmed array([0.]) >>> i = complex(0,1) # works for complex >>> pu.trimcoef((3e-4,1e-3*(1-i),5e-4,2e-5*(1+i)), 1e-3) array([0.0003+0.j , 0.001 -0.001j]) ```

numpy.polynomial.hermite.hermline
=================================

polynomial.hermite.hermline(*off*, *scl*)[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/polynomial/hermite.py#L217-L254) Hermite series whose graph is a straight line. Parameters **off, scl**scalars The specified line is given by `off + scl*x`. Returns **y**ndarray This module’s representation of the Hermite series for `off + scl*x`.
See also [`numpy.polynomial.polynomial.polyline`](numpy.polynomial.polynomial.polyline#numpy.polynomial.polynomial.polyline "numpy.polynomial.polynomial.polyline") [`numpy.polynomial.chebyshev.chebline`](numpy.polynomial.chebyshev.chebline#numpy.polynomial.chebyshev.chebline "numpy.polynomial.chebyshev.chebline") [`numpy.polynomial.legendre.legline`](numpy.polynomial.legendre.legline#numpy.polynomial.legendre.legline "numpy.polynomial.legendre.legline") [`numpy.polynomial.laguerre.lagline`](numpy.polynomial.laguerre.lagline#numpy.polynomial.laguerre.lagline "numpy.polynomial.laguerre.lagline") [`numpy.polynomial.hermite_e.hermeline`](numpy.polynomial.hermite_e.hermeline#numpy.polynomial.hermite_e.hermeline "numpy.polynomial.hermite_e.hermeline") #### Examples ``` >>> from numpy.polynomial.hermite import hermline, hermval >>> hermval(0,hermline(3, 2)) 3.0 >>> hermval(1,hermline(3, 2)) 5.0 ```

numpy.polynomial.hermite.herm2poly
==================================

polynomial.hermite.herm2poly(*c*)[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/polynomial/hermite.py#L142-L197) Convert a Hermite series to a polynomial. Convert an array representing the coefficients of a Hermite series, ordered from lowest degree to highest, to an array of the coefficients of the equivalent polynomial (relative to the “standard” basis) ordered from lowest to highest degree. Parameters **c**array_like 1-D array containing the Hermite series coefficients, ordered from lowest order term to highest. Returns **pol**ndarray 1-D array containing the coefficients of the equivalent polynomial (relative to the “standard” basis) ordered from lowest order term to highest.
See also [`poly2herm`](numpy.polynomial.hermite.poly2herm#numpy.polynomial.hermite.poly2herm "numpy.polynomial.hermite.poly2herm") #### Notes The easy way to do conversions between polynomial basis sets is to use the convert method of a class instance. #### Examples ``` >>> from numpy.polynomial.hermite import herm2poly >>> herm2poly([ 1. , 2.75 , 0.5 , 0.375]) array([0., 1., 2., 3.]) ```

numpy.polynomial.hermite.poly2herm
==================================

polynomial.hermite.poly2herm(*pol*)[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/polynomial/hermite.py#L96-L139) Convert a polynomial to a Hermite series. Convert an array representing the coefficients of a polynomial (relative to the “standard” basis) ordered from lowest degree to highest, to an array of the coefficients of the equivalent Hermite series, ordered from lowest to highest degree. Parameters **pol**array_like 1-D array containing the polynomial coefficients Returns **c**ndarray 1-D array containing the coefficients of the equivalent Hermite series. See also [`herm2poly`](numpy.polynomial.hermite.herm2poly#numpy.polynomial.hermite.herm2poly "numpy.polynomial.hermite.herm2poly") #### Notes The easy way to do conversions between polynomial basis sets is to use the convert method of a class instance. #### Examples ``` >>> from numpy.polynomial.hermite import poly2herm >>> poly2herm(np.arange(4)) array([1. , 2.75 , 0.5 , 0.375]) ```
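`poly2herm` and `herm2poly` are mutual inverses up to roundoff, which makes a convenient sanity check; a brief sketch:

```python
import numpy as np
from numpy.polynomial.hermite import herm2poly, poly2herm

p = np.array([0.0, 1.0, 2.0, 3.0])   # 0 + x + 2*x**2 + 3*x**3
c = poly2herm(p)                     # Hermite coefficients [1., 2.75, 0.5, 0.375]

print(np.allclose(herm2poly(c), p))  # True: round trip recovers p
```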
numpy.polynomial.hermite_e.HermiteE
===================================

*class*numpy.polynomial.hermite_e.HermiteE(*coef*, *domain=None*, *window=None*)[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/polynomial/hermite_e.py#L1650-L1689) An HermiteE series class. The HermiteE class provides the standard Python numerical methods ‘+’, ‘-’, ‘*’, ‘//’, ‘%’, ‘divmod’, ‘**’, and ‘()’ as well as the attributes and methods listed in the `ABCPolyBase` documentation. Parameters **coef**array_like HermiteE coefficients in order of increasing degree, i.e., `(1, 2, 3)` gives `1*He_0(x) + 2*He_1(x) + 3*He_2(x)`. **domain**(2,) array_like, optional Domain to use. The interval `[domain[0], domain[1]]` is mapped to the interval `[window[0], window[1]]` by shifting and scaling. The default value is [-1, 1]. **window**(2,) array_like, optional Window, see [`domain`](numpy.polynomial.hermite_e.hermitee.domain#numpy.polynomial.hermite_e.HermiteE.domain "numpy.polynomial.hermite_e.HermiteE.domain") for its use. The default value is [-1, 1]. New in version 1.6.0. #### Methods | | | | --- | --- | | [`__call__`](numpy.polynomial.hermite_e.hermitee.__call__#numpy.polynomial.hermite_e.HermiteE.__call__ "numpy.polynomial.hermite_e.HermiteE.__call__")(arg) | Call self as a function. | | [`basis`](numpy.polynomial.hermite_e.hermitee.basis#numpy.polynomial.hermite_e.HermiteE.basis "numpy.polynomial.hermite_e.HermiteE.basis")(deg[, domain, window]) | Series basis polynomial of degree `deg`. | | [`cast`](numpy.polynomial.hermite_e.hermitee.cast#numpy.polynomial.hermite_e.HermiteE.cast "numpy.polynomial.hermite_e.HermiteE.cast")(series[, domain, window]) | Convert series to series of this class.
| | [`convert`](numpy.polynomial.hermite_e.hermitee.convert#numpy.polynomial.hermite_e.HermiteE.convert "numpy.polynomial.hermite_e.HermiteE.convert")([domain, kind, window]) | Convert series to a different kind and/or domain and/or window. | | [`copy`](numpy.polynomial.hermite_e.hermitee.copy#numpy.polynomial.hermite_e.HermiteE.copy "numpy.polynomial.hermite_e.HermiteE.copy")() | Return a copy. | | [`cutdeg`](numpy.polynomial.hermite_e.hermitee.cutdeg#numpy.polynomial.hermite_e.HermiteE.cutdeg "numpy.polynomial.hermite_e.HermiteE.cutdeg")(deg) | Truncate series to the given degree. | | [`degree`](numpy.polynomial.hermite_e.hermitee.degree#numpy.polynomial.hermite_e.HermiteE.degree "numpy.polynomial.hermite_e.HermiteE.degree")() | The degree of the series. | | [`deriv`](numpy.polynomial.hermite_e.hermitee.deriv#numpy.polynomial.hermite_e.HermiteE.deriv "numpy.polynomial.hermite_e.HermiteE.deriv")([m]) | Differentiate. | | [`fit`](numpy.polynomial.hermite_e.hermitee.fit#numpy.polynomial.hermite_e.HermiteE.fit "numpy.polynomial.hermite_e.HermiteE.fit")(x, y, deg[, domain, rcond, full, w, window]) | Least squares fit to data. | | [`fromroots`](numpy.polynomial.hermite_e.hermitee.fromroots#numpy.polynomial.hermite_e.HermiteE.fromroots "numpy.polynomial.hermite_e.HermiteE.fromroots")(roots[, domain, window]) | Return series instance that has the specified roots. | | [`has_samecoef`](numpy.polynomial.hermite_e.hermitee.has_samecoef#numpy.polynomial.hermite_e.HermiteE.has_samecoef "numpy.polynomial.hermite_e.HermiteE.has_samecoef")(other) | Check if coefficients match. | | [`has_samedomain`](numpy.polynomial.hermite_e.hermitee.has_samedomain#numpy.polynomial.hermite_e.HermiteE.has_samedomain "numpy.polynomial.hermite_e.HermiteE.has_samedomain")(other) | Check if domains match. 
| | [`has_sametype`](numpy.polynomial.hermite_e.hermitee.has_sametype#numpy.polynomial.hermite_e.HermiteE.has_sametype "numpy.polynomial.hermite_e.HermiteE.has_sametype")(other) | Check if types match. | | [`has_samewindow`](numpy.polynomial.hermite_e.hermitee.has_samewindow#numpy.polynomial.hermite_e.HermiteE.has_samewindow "numpy.polynomial.hermite_e.HermiteE.has_samewindow")(other) | Check if windows match. | | [`identity`](numpy.polynomial.hermite_e.hermitee.identity#numpy.polynomial.hermite_e.HermiteE.identity "numpy.polynomial.hermite_e.HermiteE.identity")([domain, window]) | Identity function. | | [`integ`](numpy.polynomial.hermite_e.hermitee.integ#numpy.polynomial.hermite_e.HermiteE.integ "numpy.polynomial.hermite_e.HermiteE.integ")([m, k, lbnd]) | Integrate. | | [`linspace`](numpy.polynomial.hermite_e.hermitee.linspace#numpy.polynomial.hermite_e.HermiteE.linspace "numpy.polynomial.hermite_e.HermiteE.linspace")([n, domain]) | Return x, y values at equally spaced points in domain. | | [`mapparms`](numpy.polynomial.hermite_e.hermitee.mapparms#numpy.polynomial.hermite_e.HermiteE.mapparms "numpy.polynomial.hermite_e.HermiteE.mapparms")() | Return the mapping parameters. | | [`roots`](numpy.polynomial.hermite_e.hermitee.roots#numpy.polynomial.hermite_e.HermiteE.roots "numpy.polynomial.hermite_e.HermiteE.roots")() | Return the roots of the series polynomial. | | [`trim`](numpy.polynomial.hermite_e.hermitee.trim#numpy.polynomial.hermite_e.HermiteE.trim "numpy.polynomial.hermite_e.HermiteE.trim")([tol]) | Remove trailing coefficients | | [`truncate`](numpy.polynomial.hermite_e.hermitee.truncate#numpy.polynomial.hermite_e.HermiteE.truncate "numpy.polynomial.hermite_e.HermiteE.truncate")(size) | Truncate series to length `size`. |
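A brief usage sketch of the class: construct a series, evaluate it with the ‘()’ method, and check that `deriv` undoes `integ` (the coefficient values here are arbitrary):

```python
import numpy as np
from numpy.polynomial.hermite_e import HermiteE

# 1*He_0 + 2*He_1 + 3*He_2, where He_2(x) = x**2 - 1.
p = HermiteE([1.0, 2.0, 3.0])

print(p(2.0))   # 1 + 2*2 + 3*(4 - 1) = 14.0
print(np.allclose(p.integ().deriv().coef, p.coef))  # True
```

The integ-then-deriv round trip is constant-free, so it recovers the original coefficients exactly up to roundoff regardless of the integration constant.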
numpy.polynomial.hermite_e.hermedomain
======================================

polynomial.hermite_e.hermedomain*=array([-1, 1])* An array object represents a multidimensional, homogeneous array of fixed-size items. An associated data-type object describes the format of each element in the array (its byte-order, how many bytes it occupies in memory, whether it is an integer, a floating point number, or something else, etc.) Arrays should be constructed using [`array`](numpy.array#numpy.array "numpy.array"), [`zeros`](numpy.zeros#numpy.zeros "numpy.zeros") or [`empty`](numpy.empty#numpy.empty "numpy.empty") (refer to the See Also section below). The parameters given here refer to a low-level method (`ndarray(...)`) for instantiating an array. For more information, refer to the [`numpy`](../index#module-numpy "numpy") module and examine the methods and attributes of an array. Parameters **(for the __new__ method; see Notes below)** **shape**tuple of ints Shape of created array. **dtype**data-type, optional Any object that can be interpreted as a numpy data type. **buffer**object exposing buffer interface, optional Used to fill the array with data. **offset**int, optional Offset of array data in buffer. **strides**tuple of ints, optional Strides of data in memory. **order**{‘C’, ‘F’}, optional Row-major (C-style) or column-major (Fortran-style) order. See also [`array`](numpy.array#numpy.array "numpy.array") Construct an array. [`zeros`](numpy.zeros#numpy.zeros "numpy.zeros") Create an array, each element of which is zero. [`empty`](numpy.empty#numpy.empty "numpy.empty") Create an array, but leave its allocated memory unchanged (i.e., it contains “garbage”). [`dtype`](numpy.dtype#numpy.dtype "numpy.dtype") Create a data-type. [`numpy.typing.NDArray`](../typing#numpy.typing.NDArray "numpy.typing.NDArray") An ndarray alias [generic](https://docs.python.org/3/glossary.html#term-generic-type "(in Python v3.10)") w.r.t. its [`dtype.type`](numpy.dtype.type#numpy.dtype.type "numpy.dtype.type"). #### Notes There are two modes of creating an array using `__new__`: 1. If `buffer` is None, then only [`shape`](numpy.shape#numpy.shape "numpy.shape"), [`dtype`](numpy.dtype#numpy.dtype "numpy.dtype"), and `order` are used. 2. If `buffer` is an object exposing the buffer interface, then all keywords are interpreted. No `__init__` method is needed because the array is fully initialized after the `__new__` method. #### Examples These examples illustrate the low-level [`ndarray`](numpy.ndarray#numpy.ndarray "numpy.ndarray") constructor. Refer to the `See Also` section above for easier ways of constructing an ndarray.
First mode, `buffer` is None: ``` >>> np.ndarray(shape=(2,2), dtype=float, order='F') array([[0.0e+000, 0.0e+000], # random [ nan, 2.5e-323]]) ``` Second mode: ``` >>> np.ndarray((2,), buffer=np.array([1,2,3]), ... offset=np.int_().itemsize, ... dtype=int) # offset = 1*itemsize, i.e. skip first element array([2, 3]) ``` Attributes **T**ndarray Transpose of the array. **data**buffer The array’s elements, in memory. **dtype**dtype object Describes the format of the elements in the array. **flags**dict Dictionary containing information related to memory use, e.g., ‘C_CONTIGUOUS’, ‘OWNDATA’, ‘WRITEABLE’, etc. **flat**numpy.flatiter object Flattened version of the array as an iterator. The iterator allows assignments, e.g., `x.flat = 3` (See [`ndarray.flat`](numpy.ndarray.flat#numpy.ndarray.flat "numpy.ndarray.flat") for assignment examples; TODO). **imag**ndarray Imaginary part of the array. **real**ndarray Real part of the array. **size**int Number of elements in the array. **itemsize**int The memory use of each array element in bytes. **nbytes**int The total number of bytes required to store the array data, i.e., `itemsize * size`. **ndim**int The array’s number of dimensions. **shape**tuple of ints Shape of the array. **strides**tuple of ints The step-size required to move from one element to the next in memory. For example, a contiguous `(3, 4)` array of type `int16` in C-order has strides `(8, 2)`. This implies that to move from element to element in memory requires jumps of 2 bytes. To move from row-to-row, one needs to jump 8 bytes at a time (`2 * 4`). **ctypes**ctypes object Class containing properties of the array needed for interaction with ctypes. **base**ndarray If the array is a view into another array, that array is its `base` (unless that array is also a view). The `base` array is where the array data is actually stored. © 2005–2022 NumPy Developers Licensed under the 3-clause BSD License. 
<https://numpy.org/doc/1.23/reference/generated/numpy.polynomial.hermite_e.hermedomain.html>

numpy.polynomial.hermite_e.hermezero
=====================================

polynomial.hermite_e.hermezero*=array([0])*

An `ndarray` holding the single coefficient of the HermiteE series that is identically zero. The generic `ndarray` class description reproduced under `hermedomain` above applies to this constant unchanged.
<https://numpy.org/doc/1.23/reference/generated/numpy.polynomial.hermite_e.hermezero.html>

numpy.polynomial.hermite_e.hermeone
====================================

polynomial.hermite_e.hermeone*=array([1])*

An `ndarray` holding the single coefficient of the HermiteE series for the constant 1 (i.e. `1*He_0`). The generic `ndarray` class description reproduced under `hermedomain` above applies to this constant unchanged.
<https://numpy.org/doc/1.23/reference/generated/numpy.polynomial.hermite_e.hermeone.html>

numpy.polynomial.hermite_e.hermex
==================================

polynomial.hermite_e.hermex*=array([0, 1])*

An `ndarray` holding the coefficients of the HermiteE series for the identity function x (i.e. `1*He_1`). The generic `ndarray` class description reproduced under `hermedomain` above applies to this constant unchanged.
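The four module-level constants documented above are plain coefficient arrays and can be inspected directly; a short sketch:

```python
import numpy as np
from numpy.polynomial import hermite_e as H

# The constants are ordinary ndarrays of HermiteE coefficients.
assert np.array_equal(H.hermedomain, [-1, 1])  # default domain
assert np.array_equal(H.hermezero, [0])        # series for 0
assert np.array_equal(H.hermeone, [1])         # series for 1
assert np.array_equal(H.hermex, [0, 1])        # series for He_1(x) = x

# hermex really is the identity series: evaluating it returns x itself.
assert H.hermeval(0.5, H.hermex) == 0.5
```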
<https://numpy.org/doc/1.23/reference/generated/numpy.polynomial.hermite_e.hermex.htmlnumpy.polynomial.hermite_e.hermeadd ==================================== polynomial.hermite_e.hermeadd(*c1*, *c2*)[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/polynomial/hermite_e.py#L312-L349) Add one Hermite series to another. Returns the sum of two Hermite series `c1` + `c2`. The arguments are sequences of coefficients ordered from lowest order term to highest, i.e., [1,2,3] represents the series `P_0 + 2*P_1 + 3*P_2`. Parameters **c1, c2**array_like 1-D arrays of Hermite series coefficients ordered from low to high. Returns **out**ndarray Array representing the Hermite series of their sum. See also [`hermesub`](numpy.polynomial.hermite_e.hermesub#numpy.polynomial.hermite_e.hermesub "numpy.polynomial.hermite_e.hermesub"), [`hermemulx`](numpy.polynomial.hermite_e.hermemulx#numpy.polynomial.hermite_e.hermemulx "numpy.polynomial.hermite_e.hermemulx"), [`hermemul`](numpy.polynomial.hermite_e.hermemul#numpy.polynomial.hermite_e.hermemul "numpy.polynomial.hermite_e.hermemul"), [`hermediv`](numpy.polynomial.hermite_e.hermediv#numpy.polynomial.hermite_e.hermediv "numpy.polynomial.hermite_e.hermediv"), [`hermepow`](numpy.polynomial.hermite_e.hermepow#numpy.polynomial.hermite_e.hermepow "numpy.polynomial.hermite_e.hermepow") #### Notes Unlike multiplication, division, etc., the sum of two Hermite series is a Hermite series (without having to “reproject” the result onto the basis set) so addition, just like that of “standard” polynomials, is simply “component-wise.” #### Examples ``` >>> from numpy.polynomial.hermite_e import hermeadd >>> hermeadd([1, 2, 3], [1, 2, 3, 4]) array([2., 4., 6., 4.]) ``` © 2005–2022 NumPy Developers Licensed under the 3-clause BSD License. 
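The Notes above say that addition is simply "component-wise"; that claim can be checked directly, since the shorter coefficient array is in effect zero-padded before adding:

```python
import numpy as np
from numpy.polynomial.hermite_e import hermeadd

# Component-wise addition: pad the shorter series with zeros, then the
# HermiteE sum equals ordinary elementwise addition of coefficients.
c1, c2 = [1, 2, 3], [1, 2, 3, 4]
padded = np.pad(c1, (0, len(c2) - len(c1)))  # [1, 2, 3, 0]
assert np.array_equal(hermeadd(c1, c2), padded + np.asarray(c2))
```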
<https://numpy.org/doc/1.23/reference/generated/numpy.polynomial.hermite_e.hermeadd.htmlnumpy.polynomial.hermite_e.hermesub ==================================== polynomial.hermite_e.hermesub(*c1*, *c2*)[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/polynomial/hermite_e.py#L352-L389) Subtract one Hermite series from another. Returns the difference of two Hermite series `c1` - `c2`. The sequences of coefficients are from lowest order term to highest, i.e., [1,2,3] represents the series `P_0 + 2*P_1 + 3*P_2`. Parameters **c1, c2**array_like 1-D arrays of Hermite series coefficients ordered from low to high. Returns **out**ndarray Of Hermite series coefficients representing their difference. See also [`hermeadd`](numpy.polynomial.hermite_e.hermeadd#numpy.polynomial.hermite_e.hermeadd "numpy.polynomial.hermite_e.hermeadd"), [`hermemulx`](numpy.polynomial.hermite_e.hermemulx#numpy.polynomial.hermite_e.hermemulx "numpy.polynomial.hermite_e.hermemulx"), [`hermemul`](numpy.polynomial.hermite_e.hermemul#numpy.polynomial.hermite_e.hermemul "numpy.polynomial.hermite_e.hermemul"), [`hermediv`](numpy.polynomial.hermite_e.hermediv#numpy.polynomial.hermite_e.hermediv "numpy.polynomial.hermite_e.hermediv"), [`hermepow`](numpy.polynomial.hermite_e.hermepow#numpy.polynomial.hermite_e.hermepow "numpy.polynomial.hermite_e.hermepow") #### Notes Unlike multiplication, division, etc., the difference of two Hermite series is a Hermite series (without having to “reproject” the result onto the basis set) so subtraction, just like that of “standard” polynomials, is simply “component-wise.” #### Examples ``` >>> from numpy.polynomial.hermite_e import hermesub >>> hermesub([1, 2, 3, 4], [1, 2, 3]) array([0., 0., 0., 4.]) ``` © 2005–2022 NumPy Developers Licensed under the 3-clause BSD License. 
<https://numpy.org/doc/1.23/reference/generated/numpy.polynomial.hermite_e.hermesub.html>

numpy.polynomial.hermite_e.hermemulx
=====================================

polynomial.hermite_e.hermemulx(*c*)[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/polynomial/hermite_e.py#L392-L438)

Multiply a Hermite series by x.

Multiply the Hermite series `c` by x, where x is the independent variable.

Parameters

**c**array_like

1-D array of Hermite series coefficients ordered from low to high.

Returns

**out**ndarray

Array representing the result of the multiplication.

#### Notes

The multiplication uses the recursion relationship for Hermite polynomials in the form

\[xP_i(x) = (P_{i + 1}(x) + iP_{i - 1}(x))\]

#### Examples

```
>>> from numpy.polynomial.hermite_e import hermemulx
>>> hermemulx([1, 2, 3])
array([2., 7., 2., 3.])
```

<https://numpy.org/doc/1.23/reference/generated/numpy.polynomial.hermite_e.hermemulx.html>

numpy.polynomial.hermite_e.hermemul
====================================

polynomial.hermite_e.hermemul(*c1*, *c2*)[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/polynomial/hermite_e.py#L441-L504)

Multiply one Hermite series by another.

Returns the product of two Hermite series `c1` * `c2`. The arguments are sequences of coefficients, from lowest order "term" to highest, e.g., [1,2,3] represents the series `P_0 + 2*P_1 + 3*P_2`.

Parameters

**c1, c2**array_like

1-D arrays of Hermite series coefficients ordered from low to high.

Returns

**out**ndarray

Of Hermite series coefficients representing their product.
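Although the product must be reprojected onto the HermiteE basis (see the Notes below), the result still evaluates to the pointwise product of the two factor series, which gives a quick sanity check:

```python
import numpy as np
from numpy.polynomial.hermite_e import hermemul, hermeval

# The reprojected product series evaluates to the product of the factors.
c1, c2 = [1, 2, 3], [0, 1, 2]
prod = hermemul(c1, c2)
x = np.linspace(-2.0, 2.0, 9)
assert np.allclose(hermeval(x, prod), hermeval(x, c1) * hermeval(x, c2))
```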
See also [`hermeadd`](numpy.polynomial.hermite_e.hermeadd#numpy.polynomial.hermite_e.hermeadd "numpy.polynomial.hermite_e.hermeadd"), [`hermesub`](numpy.polynomial.hermite_e.hermesub#numpy.polynomial.hermite_e.hermesub "numpy.polynomial.hermite_e.hermesub"), [`hermemulx`](numpy.polynomial.hermite_e.hermemulx#numpy.polynomial.hermite_e.hermemulx "numpy.polynomial.hermite_e.hermemulx"), [`hermediv`](numpy.polynomial.hermite_e.hermediv#numpy.polynomial.hermite_e.hermediv "numpy.polynomial.hermite_e.hermediv"), [`hermepow`](numpy.polynomial.hermite_e.hermepow#numpy.polynomial.hermite_e.hermepow "numpy.polynomial.hermite_e.hermepow") #### Notes In general, the (polynomial) product of two C-series results in terms that are not in the Hermite polynomial basis set. Thus, to express the product as a Hermite series, it is necessary to “reproject” the product onto said basis set, which may produce “unintuitive” (but correct) results; see Examples section below. #### Examples ``` >>> from numpy.polynomial.hermite_e import hermemul >>> hermemul([1, 2, 3], [0, 1, 2]) array([14., 15., 28., 7., 6.]) ``` © 2005–2022 NumPy Developers Licensed under the 3-clause BSD License. <https://numpy.org/doc/1.23/reference/generated/numpy.polynomial.hermite_e.hermemul.htmlnumpy.polynomial.hermite_e.hermediv ==================================== polynomial.hermite_e.hermediv(*c1*, *c2*)[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/polynomial/hermite_e.py#L507-L550) Divide one Hermite series by another. Returns the quotient-with-remainder of two Hermite series `c1` / `c2`. The arguments are sequences of coefficients from lowest order “term” to highest, e.g., [1,2,3] represents the series `P_0 + 2*P_1 + 3*P_2`. Parameters **c1, c2**array_like 1-D arrays of Hermite series coefficients ordered from low to high. Returns **[quo, rem]**ndarrays Of Hermite series coefficients representing the quotient and remainder. 
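The quotient and remainder satisfy the usual division identity `c1 = quo*c2 + rem` (as series), which can be verified by recombining them with `hermemul` and `hermeadd`:

```python
import numpy as np
from numpy.polynomial.hermite_e import hermediv, hermemul, hermeadd

# Round-trip check of the division identity c1 == quo*c2 + rem.
c1, c2 = [15.0, 17.0, 28.0, 7.0, 6.0], [0.0, 1.0, 2.0]
quo, rem = hermediv(c1, c2)
recon = hermeadd(hermemul(quo, c2), rem)
assert np.allclose(recon, c1)
```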
See also [`hermeadd`](numpy.polynomial.hermite_e.hermeadd#numpy.polynomial.hermite_e.hermeadd "numpy.polynomial.hermite_e.hermeadd"), [`hermesub`](numpy.polynomial.hermite_e.hermesub#numpy.polynomial.hermite_e.hermesub "numpy.polynomial.hermite_e.hermesub"), [`hermemulx`](numpy.polynomial.hermite_e.hermemulx#numpy.polynomial.hermite_e.hermemulx "numpy.polynomial.hermite_e.hermemulx"), [`hermemul`](numpy.polynomial.hermite_e.hermemul#numpy.polynomial.hermite_e.hermemul "numpy.polynomial.hermite_e.hermemul"), [`hermepow`](numpy.polynomial.hermite_e.hermepow#numpy.polynomial.hermite_e.hermepow "numpy.polynomial.hermite_e.hermepow") #### Notes In general, the (polynomial) division of one Hermite series by another results in quotient and remainder terms that are not in the Hermite polynomial basis set. Thus, to express these results as a Hermite series, it is necessary to “reproject” the results onto the Hermite basis set, which may produce “unintuitive” (but correct) results; see Examples section below. #### Examples ``` >>> from numpy.polynomial.hermite_e import hermediv >>> hermediv([ 14., 15., 28., 7., 6.], [0, 1, 2]) (array([1., 2., 3.]), array([0.])) >>> hermediv([ 15., 17., 28., 7., 6.], [0, 1, 2]) (array([1., 2., 3.]), array([1., 2.])) ``` © 2005–2022 NumPy Developers Licensed under the 3-clause BSD License. <https://numpy.org/doc/1.23/reference/generated/numpy.polynomial.hermite_e.hermediv.htmlnumpy.polynomial.hermite_e.hermepow ==================================== polynomial.hermite_e.hermepow(*c*, *pow*, *maxpower=16*)[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/polynomial/hermite_e.py#L553-L587) Raise a Hermite series to a power. Returns the Hermite series `c` raised to the power `pow`. The argument `c` is a sequence of coefficients ordered from low to high. i.e., [1,2,3] is the series `P_0 + 2*P_1 + 3*P_2.` Parameters **c**array_like 1-D array of Hermite series coefficients ordered from low to high. 
**pow**integer Power to which the series will be raised **maxpower**integer, optional Maximum power allowed. This is mainly to limit growth of the series to unmanageable size. Default is 16 Returns **coef**ndarray Hermite series of power. See also [`hermeadd`](numpy.polynomial.hermite_e.hermeadd#numpy.polynomial.hermite_e.hermeadd "numpy.polynomial.hermite_e.hermeadd"), [`hermesub`](numpy.polynomial.hermite_e.hermesub#numpy.polynomial.hermite_e.hermesub "numpy.polynomial.hermite_e.hermesub"), [`hermemulx`](numpy.polynomial.hermite_e.hermemulx#numpy.polynomial.hermite_e.hermemulx "numpy.polynomial.hermite_e.hermemulx"), [`hermemul`](numpy.polynomial.hermite_e.hermemul#numpy.polynomial.hermite_e.hermemul "numpy.polynomial.hermite_e.hermemul"), [`hermediv`](numpy.polynomial.hermite_e.hermediv#numpy.polynomial.hermite_e.hermediv "numpy.polynomial.hermite_e.hermediv") #### Examples ``` >>> from numpy.polynomial.hermite_e import hermepow >>> hermepow([1, 2, 3], 2) array([23., 28., 46., 12., 9.]) ``` © 2005–2022 NumPy Developers Licensed under the 3-clause BSD License. <https://numpy.org/doc/1.23/reference/generated/numpy.polynomial.hermite_e.hermepow.htmlnumpy.polynomial.hermite_e.hermeval ==================================== polynomial.hermite_e.hermeval(*x*, *c*, *tensor=True*)[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/polynomial/hermite_e.py#L795-L887) Evaluate an HermiteE series at points x. If `c` is of length `n + 1`, this function returns the value: \[p(x) = c_0 * He_0(x) + c_1 * He_1(x) + ... + c_n * He_n(x)\] The parameter `x` is converted to an array only if it is a tuple or a list, otherwise it is treated as a scalar. In either case, either `x` or its elements must support multiplication and addition both with themselves and with the elements of `c`. If `c` is a 1-D array, then `p(x)` will have the same shape as `x`. If `c` is multidimensional, then the shape of the result depends on the value of `tensor`. 
If `tensor` is true the shape will be c.shape[1:] + x.shape. If `tensor` is false the shape will be c.shape[1:]. Note that scalars have shape (). Trailing zeros in the coefficients will be used in the evaluation, so they should be avoided if efficiency is a concern.

Parameters

**x**array_like, compatible object

If `x` is a list or tuple, it is converted to an ndarray, otherwise it is left unchanged and treated as a scalar. In either case, `x` or its elements must support addition and multiplication with themselves and with the elements of `c`.

**c**array_like

Array of coefficients ordered so that the coefficients for terms of degree n are contained in c[n]. If `c` is multidimensional the remaining indices enumerate multiple polynomials. In the two dimensional case the coefficients may be thought of as stored in the columns of `c`.

**tensor**boolean, optional

If True, the shape of the coefficient array is extended with ones on the right, one for each dimension of `x`. Scalars have dimension 0 for this action. The result is that every column of coefficients in `c` is evaluated for every element of `x`. If False, `x` is broadcast over the columns of `c` for the evaluation. This keyword is useful when `c` is multidimensional. The default value is True.

New in version 1.7.0.

Returns

**values**ndarray, algebra_like

The shape of the return value is described above.
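The Notes below state that the evaluation uses Clenshaw recursion. As an illustration (not the library's exact implementation, which also handles array-valued `x` and multidimensional `c`), a scalar-only sketch runs the HermiteE recurrence He_{k+1}(x) = x·He_k(x) − k·He_{k−1}(x) backwards over the coefficients:

```python
import numpy as np
from numpy.polynomial.hermite_e import hermeval

def hermeval_clenshaw(x, c):
    """Scalar-only Clenshaw evaluation of sum_k c[k]*He_k(x).

    Sketch for illustration; runs the three-term HermiteE recurrence
    backwards so that no He_k values are formed explicitly.
    """
    if len(c) == 1:
        return c[0]
    c0, c1 = c[-2], c[-1]
    nd = len(c)
    for i in range(3, len(c) + 1):
        tmp = c0
        nd = nd - 1
        c0 = c[-i] - c1 * (nd - 1)
        c1 = tmp + c1 * x
    return c0 + c1 * x

# Agrees with the library on the documented example hermeval(1, [1,2,3]) == 3.
assert np.isclose(hermeval_clenshaw(1.0, [1, 2, 3]), hermeval(1.0, [1, 2, 3]))
```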
See also

[`hermeval2d`](numpy.polynomial.hermite_e.hermeval2d#numpy.polynomial.hermite_e.hermeval2d "numpy.polynomial.hermite_e.hermeval2d"), [`hermegrid2d`](numpy.polynomial.hermite_e.hermegrid2d#numpy.polynomial.hermite_e.hermegrid2d "numpy.polynomial.hermite_e.hermegrid2d"), [`hermeval3d`](numpy.polynomial.hermite_e.hermeval3d#numpy.polynomial.hermite_e.hermeval3d "numpy.polynomial.hermite_e.hermeval3d"), [`hermegrid3d`](numpy.polynomial.hermite_e.hermegrid3d#numpy.polynomial.hermite_e.hermegrid3d "numpy.polynomial.hermite_e.hermegrid3d")

#### Notes

The evaluation uses Clenshaw recursion, aka synthetic division.

#### Examples

```
>>> from numpy.polynomial.hermite_e import hermeval
>>> coef = [1,2,3]
>>> hermeval(1, coef)
3.0
>>> hermeval([[1,2],[3,4]], coef)
array([[ 3., 14.],
       [31., 54.]])
```

<https://numpy.org/doc/1.23/reference/generated/numpy.polynomial.hermite_e.hermeval.html>

numpy.polynomial.hermite_e.hermeval2d
======================================

polynomial.hermite_e.hermeval2d(*x*, *y*, *c*)[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/polynomial/hermite_e.py#L890-L936)

Evaluate a 2-D HermiteE series at points (x, y).

This function returns the values:

\[p(x,y) = \sum_{i,j} c_{i,j} * He_i(x) * He_j(y)\]

The parameters `x` and `y` are converted to arrays only if they are tuples or lists, otherwise they are treated as scalars and they must have the same shape after conversion. In either case, either `x` and `y` or their elements must support multiplication and addition both with themselves and with the elements of `c`.

If `c` is a 1-D array a one is implicitly appended to its shape to make it 2-D. The shape of the result will be c.shape[2:] + x.shape.

Parameters

**x, y**array_like, compatible objects

The two dimensional series is evaluated at the points `(x, y)`, where `x` and `y` must have the same shape.
If `x` or `y` is a list or tuple, it is first converted to an ndarray, otherwise it is left unchanged and if it isn’t an ndarray it is treated as a scalar. **c**array_like Array of coefficients ordered so that the coefficient of the term of multi-degree i,j is contained in `c[i,j]`. If `c` has dimension greater than two the remaining indices enumerate multiple sets of coefficients. Returns **values**ndarray, compatible object The values of the two dimensional polynomial at points formed with pairs of corresponding values from `x` and `y`. See also [`hermeval`](numpy.polynomial.hermite_e.hermeval#numpy.polynomial.hermite_e.hermeval "numpy.polynomial.hermite_e.hermeval"), [`hermegrid2d`](numpy.polynomial.hermite_e.hermegrid2d#numpy.polynomial.hermite_e.hermegrid2d "numpy.polynomial.hermite_e.hermegrid2d"), [`hermeval3d`](numpy.polynomial.hermite_e.hermeval3d#numpy.polynomial.hermite_e.hermeval3d "numpy.polynomial.hermite_e.hermeval3d"), [`hermegrid3d`](numpy.polynomial.hermite_e.hermegrid3d#numpy.polynomial.hermite_e.hermegrid3d "numpy.polynomial.hermite_e.hermegrid3d") #### Notes New in version 1.7.0. <https://numpy.org/doc/1.23/reference/generated/numpy.polynomial.hermite_e.hermeval2d.html> numpy.polynomial.hermite_e.hermeval3d ====================================== polynomial.hermite_e.hermeval3d(*x*, *y*, *z*, *c*)[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/polynomial/hermite_e.py#L992-L1040) Evaluate a 3-D Hermite_e series at points (x, y, z). This function returns the values: \[p(x,y,z) = \sum_{i,j,k} c_{i,j,k} * He_i(x) * He_j(y) * He_k(z)\] The parameters `x`, `y`, and `z` are converted to arrays only if they are tuples or lists, otherwise they are treated as scalars and they must have the same shape after conversion. In either case, either `x`, `y`, and `z` or their elements must support multiplication and addition both with themselves and with the elements of `c`.
If `c` has fewer than 3 dimensions, ones are implicitly appended to its shape to make it 3-D. The shape of the result will be c.shape[3:] + x.shape. Parameters **x, y, z**array_like, compatible object The three dimensional series is evaluated at the points `(x, y, z)`, where `x`, `y`, and `z` must have the same shape. If any of `x`, `y`, or `z` is a list or tuple, it is first converted to an ndarray, otherwise it is left unchanged and if it isn’t an ndarray it is treated as a scalar. **c**array_like Array of coefficients ordered so that the coefficient of the term of multi-degree i,j,k is contained in `c[i,j,k]`. If `c` has dimension greater than 3 the remaining indices enumerate multiple sets of coefficients. Returns **values**ndarray, compatible object The values of the multidimensional polynomial on points formed with triples of corresponding values from `x`, `y`, and `z`. See also [`hermeval`](numpy.polynomial.hermite_e.hermeval#numpy.polynomial.hermite_e.hermeval "numpy.polynomial.hermite_e.hermeval"), [`hermeval2d`](numpy.polynomial.hermite_e.hermeval2d#numpy.polynomial.hermite_e.hermeval2d "numpy.polynomial.hermite_e.hermeval2d"), [`hermegrid2d`](numpy.polynomial.hermite_e.hermegrid2d#numpy.polynomial.hermite_e.hermegrid2d "numpy.polynomial.hermite_e.hermegrid2d"), [`hermegrid3d`](numpy.polynomial.hermite_e.hermegrid3d#numpy.polynomial.hermite_e.hermegrid3d "numpy.polynomial.hermite_e.hermegrid3d") #### Notes New in version 1.7.0. © 2005–2022 NumPy Developers Licensed under the 3-clause BSD License. <https://numpy.org/doc/1.23/reference/generated/numpy.polynomial.hermite_e.hermeval3d.htmlnumpy.polynomial.hermite_e.hermegrid2d ======================================= polynomial.hermite_e.hermegrid2d(*x*, *y*, *c*)[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/polynomial/hermite_e.py#L939-L989) Evaluate a 2-D HermiteE series on the Cartesian product of x and y. 
This function returns the values: \[p(a,b) = \sum_{i,j} c_{i,j} * He_i(a) * He_j(b)\] where the points `(a, b)` consist of all pairs formed by taking `a` from `x` and `b` from `y`. The resulting points form a grid with `x` in the first dimension and `y` in the second. The parameters `x` and `y` are converted to arrays only if they are tuples or lists, otherwise they are treated as scalars. In either case, either `x` and `y` or their elements must support multiplication and addition both with themselves and with the elements of `c`. If `c` has fewer than two dimensions, ones are implicitly appended to its shape to make it 2-D. The shape of the result will be c.shape[2:] + x.shape + y.shape. Parameters **x, y**array_like, compatible objects The two dimensional series is evaluated at the points in the Cartesian product of `x` and `y`. If `x` or `y` is a list or tuple, it is first converted to an ndarray, otherwise it is left unchanged and, if it isn’t an ndarray, it is treated as a scalar. **c**array_like Array of coefficients ordered so that the coefficients for terms of degree i,j are contained in `c[i,j]`. If `c` has dimension greater than two the remaining indices enumerate multiple sets of coefficients. Returns **values**ndarray, compatible object The values of the two dimensional polynomial at points in the Cartesian product of `x` and `y`. See also [`hermeval`](numpy.polynomial.hermite_e.hermeval#numpy.polynomial.hermite_e.hermeval "numpy.polynomial.hermite_e.hermeval"), [`hermeval2d`](numpy.polynomial.hermite_e.hermeval2d#numpy.polynomial.hermite_e.hermeval2d "numpy.polynomial.hermite_e.hermeval2d"), [`hermeval3d`](numpy.polynomial.hermite_e.hermeval3d#numpy.polynomial.hermite_e.hermeval3d "numpy.polynomial.hermite_e.hermeval3d"), [`hermegrid3d`](numpy.polynomial.hermite_e.hermegrid3d#numpy.polynomial.hermite_e.hermegrid3d "numpy.polynomial.hermite_e.hermegrid3d") #### Notes New in version 1.7.0.
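The Cartesian-product behaviour of `hermegrid2d` can be checked with a short sketch; the coefficients and sample points below are illustrative, not from the original page:

```python
import numpy as np
from numpy.polynomial.hermite_e import hermegrid2d, hermeval2d

# c[i, j] multiplies He_i(x) * He_j(y); He_0 = 1, He_1(t) = t.
c = np.array([[1.0, 2.0],
              [3.0, 4.0]])
x = np.array([0.0, 1.0, 2.0])
y = np.array([-1.0, 1.0])

# hermegrid2d evaluates on the Cartesian product of x and y.
grid = hermegrid2d(x, y, c)
assert grid.shape == (3, 2)

# Equivalent to evaluating hermeval2d at every (x[i], y[j]) pair.
pairwise = np.array([[hermeval2d(xi, yj, c) for yj in y] for xi in x])
assert np.allclose(grid, pairwise)
```

The grid form avoids building the full set of point pairs by hand when the evaluation points factor into per-axis coordinates.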
<https://numpy.org/doc/1.23/reference/generated/numpy.polynomial.hermite_e.hermegrid2d.html> numpy.polynomial.hermite_e.hermegrid3d ======================================= polynomial.hermite_e.hermegrid3d(*x*, *y*, *z*, *c*)[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/polynomial/hermite_e.py#L1043-L1096) Evaluate a 3-D HermiteE series on the Cartesian product of x, y, and z. This function returns the values: \[p(a,b,c) = \sum_{i,j,k} c_{i,j,k} * He_i(a) * He_j(b) * He_k(c)\] where the points `(a, b, c)` consist of all triples formed by taking `a` from `x`, `b` from `y`, and `c` from `z`. The resulting points form a grid with `x` in the first dimension, `y` in the second, and `z` in the third. The parameters `x`, `y`, and `z` are converted to arrays only if they are tuples or lists, otherwise they are treated as scalars. In either case, either `x`, `y`, and `z` or their elements must support multiplication and addition both with themselves and with the elements of `c`. If `c` has fewer than three dimensions, ones are implicitly appended to its shape to make it 3-D. The shape of the result will be c.shape[3:] + x.shape + y.shape + z.shape. Parameters **x, y, z**array_like, compatible objects The three dimensional series is evaluated at the points in the Cartesian product of `x`, `y`, and `z`. If `x`, `y`, or `z` is a list or tuple, it is first converted to an ndarray, otherwise it is left unchanged and, if it isn’t an ndarray, it is treated as a scalar. **c**array_like Array of coefficients ordered so that the coefficients for terms of degree i,j,k are contained in `c[i,j,k]`. If `c` has dimension greater than three the remaining indices enumerate multiple sets of coefficients. Returns **values**ndarray, compatible object The values of the three dimensional polynomial at points in the Cartesian product of `x`, `y`, and `z`.
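The result-shape rule for `hermegrid3d` (c.shape[3:] + x.shape + y.shape + z.shape) can be verified with a small sketch; the values here are illustrative:

```python
import numpy as np
from numpy.polynomial.hermite_e import hermegrid3d, hermeval3d

# c[i, j, k] multiplies He_i(x) * He_j(y) * He_k(z).
c = np.ones((2, 2, 2))
x, y, z = [0.0, 1.0], [0.5], [-1.0, 0.0, 1.0]

grid = hermegrid3d(x, y, z, c)
# Once c's three degree axes are consumed, the shape is x.shape + y.shape + z.shape.
assert grid.shape == (2, 1, 3)

# Spot-check one grid point against the pointwise evaluator.
assert np.isclose(grid[1, 0, 2], hermeval3d(1.0, 0.5, 1.0, c))
```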
See also [`hermeval`](numpy.polynomial.hermite_e.hermeval#numpy.polynomial.hermite_e.hermeval "numpy.polynomial.hermite_e.hermeval"), [`hermeval2d`](numpy.polynomial.hermite_e.hermeval2d#numpy.polynomial.hermite_e.hermeval2d "numpy.polynomial.hermite_e.hermeval2d"), [`hermegrid2d`](numpy.polynomial.hermite_e.hermegrid2d#numpy.polynomial.hermite_e.hermegrid2d "numpy.polynomial.hermite_e.hermegrid2d"), [`hermeval3d`](numpy.polynomial.hermite_e.hermeval3d#numpy.polynomial.hermite_e.hermeval3d "numpy.polynomial.hermite_e.hermeval3d") #### Notes New in version 1.7.0. <https://numpy.org/doc/1.23/reference/generated/numpy.polynomial.hermite_e.hermegrid3d.html> numpy.polynomial.hermite_e.hermeder ==================================== polynomial.hermite_e.hermeder(*c*, *m=1*, *scl=1*, *axis=0*)[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/polynomial/hermite_e.py#L590-L670) Differentiate a Hermite_e series. Returns the series coefficients `c` differentiated `m` times along `axis`. At each iteration the result is multiplied by `scl` (the scaling factor is for use in a linear change of variable). The argument `c` is an array of coefficients from low to high degree along each axis, e.g., [1,2,3] represents the series `1*He_0 + 2*He_1 + 3*He_2` while [[1,2],[1,2]] represents `1*He_0(x)*He_0(y) + 1*He_1(x)*He_0(y) + 2*He_0(x)*He_1(y) + 2*He_1(x)*He_1(y)` if axis=0 is `x` and axis=1 is `y`. Parameters **c**array_like Array of Hermite_e series coefficients. If `c` is multidimensional the different axes correspond to different variables with the degree in each axis given by the corresponding index. **m**int, optional Number of derivatives taken, must be non-negative. (Default: 1) **scl**scalar, optional Each differentiation is multiplied by `scl`. The end result is multiplication by `scl**m`. This is for use in a linear change of variable.
(Default: 1) **axis**int, optional Axis over which the derivative is taken. (Default: 0). New in version 1.7.0. Returns **der**ndarray Hermite_e series of the derivative. See also [`hermeint`](numpy.polynomial.hermite_e.hermeint#numpy.polynomial.hermite_e.hermeint "numpy.polynomial.hermite_e.hermeint") #### Notes In general, the result of differentiating a Hermite_e series does not resemble the same operation on a power series. Thus the result of this function may be “unintuitive,” albeit correct; see Examples section below. #### Examples ``` >>> from numpy.polynomial.hermite_e import hermeder >>> hermeder([ 1., 1., 1., 1.]) array([1., 2., 3.]) >>> hermeder([-0.25, 1., 1./2., 1./3., 1./4 ], m=2) array([1., 2., 3.]) ``` <https://numpy.org/doc/1.23/reference/generated/numpy.polynomial.hermite_e.hermeder.html> numpy.polynomial.hermite_e.hermeint ==================================== polynomial.hermite_e.hermeint(*c*, *m=1*, *k=[]*, *lbnd=0*, *scl=1*, *axis=0*)[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/polynomial/hermite_e.py#L673-L792) Integrate a Hermite_e series. Returns the Hermite_e series coefficients `c` integrated `m` times from `lbnd` along `axis`. At each iteration the resulting series is **multiplied** by `scl` and an integration constant, `k`, is added. The scaling factor is for use in a linear change of variable. (“Buyer beware”: note that, depending on what one is doing, one may want `scl` to be the reciprocal of what one might expect; for more information, see the Notes section below.) The argument `c` is an array of coefficients from low to high degree along each axis, e.g., [1,2,3] represents the series `He_0 + 2*He_1 + 3*He_2` while [[1,2],[1,2]] represents `1*He_0(x)*He_0(y) + 1*He_1(x)*He_0(y) + 2*He_0(x)*He_1(y) + 2*He_1(x)*He_1(y)` if axis=0 is `x` and axis=1 is `y`. Parameters **c**array_like Array of Hermite_e series coefficients.
If c is multidimensional the different axes correspond to different variables with the degree in each axis given by the corresponding index. **m**int, optional Order of integration, must be positive. (Default: 1) **k**{[], list, scalar}, optional Integration constant(s). The value of the first integral at `lbnd` is the first value in the list, the value of the second integral at `lbnd` is the second value, etc. If `k == []` (the default), all constants are set to zero. If `m == 1`, a single scalar can be given instead of a list. **lbnd**scalar, optional The lower bound of the integral. (Default: 0) **scl**scalar, optional Following each integration the result is *multiplied* by `scl` before the integration constant is added. (Default: 1) **axis**int, optional Axis over which the integral is taken. (Default: 0). New in version 1.7.0. Returns **S**ndarray Hermite_e series coefficients of the integral. Raises ValueError If `m < 0`, `len(k) > m`, `np.ndim(lbnd) != 0`, or `np.ndim(scl) != 0`. See also [`hermeder`](numpy.polynomial.hermite_e.hermeder#numpy.polynomial.hermite_e.hermeder "numpy.polynomial.hermite_e.hermeder") #### Notes Note that the result of each integration is *multiplied* by `scl`. Why is this important to note? Say one is making a linear change of variable \(u = ax + b\) in an integral relative to `x`. Then \(dx = du/a\), so one will need to set `scl` equal to \(1/a\) - perhaps not what one would have first thought. Also note that, in general, the result of integrating a C-series needs to be “reprojected” onto the C-series basis set. Thus, typically, the result of this function is “unintuitive,” albeit correct; see Examples section below. #### Examples ``` >>> from numpy.polynomial.hermite_e import hermeint >>> hermeint([1, 2, 3]) # integrate once, value 0 at 0. array([1., 1., 1., 1.]) >>> hermeint([1, 2, 3], m=2) # integrate twice, value & deriv 0 at 0 array([-0.25 , 1. , 0.5 , 0.33333333, 0.25 ]) # may vary >>> hermeint([1, 2, 3], k=1) # integrate once, value 1 at 0. array([2., 1., 1., 1.]) >>> hermeint([1, 2, 3], lbnd=-1) # integrate once, value 0 at -1 array([-1., 1., 1., 1.]) >>> hermeint([1, 2, 3], m=2, k=[1, 2], lbnd=-1) array([ 1.83333333, 0. , 0.5 , 0.33333333, 0.25 ]) # may vary ``` <https://numpy.org/doc/1.23/reference/generated/numpy.polynomial.hermite_e.hermeint.html> numpy.polynomial.hermite_e.hermefromroots ========================================== polynomial.hermite_e.hermefromroots(*roots*)[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/polynomial/hermite_e.py#L256-L309) Generate a HermiteE series with given roots. The function returns the coefficients of the polynomial \[p(x) = (x - r_0) * (x - r_1) * ... * (x - r_n),\] in HermiteE form, where the `r_n` are the roots specified in [`roots`](numpy.roots#numpy.roots "numpy.roots"). If a zero has multiplicity n, then it must appear in [`roots`](numpy.roots#numpy.roots "numpy.roots") n times. For instance, if 2 is a root of multiplicity three and 3 is a root of multiplicity 2, then [`roots`](numpy.roots#numpy.roots "numpy.roots") looks something like [2, 2, 2, 3, 3]. The roots can appear in any order. If the returned coefficients are `c`, then \[p(x) = c_0 + c_1 * He_1(x) + ... + c_n * He_n(x)\] The coefficient of the last term is not generally 1 for monic polynomials in HermiteE form. Parameters **roots**array_like Sequence containing the roots. Returns **out**ndarray 1-D array of coefficients. If all roots are real then `out` is a real array, if some of the roots are complex, then `out` is complex even if all the coefficients in the result are real (see Examples below).
See also [`numpy.polynomial.polynomial.polyfromroots`](numpy.polynomial.polynomial.polyfromroots#numpy.polynomial.polynomial.polyfromroots "numpy.polynomial.polynomial.polyfromroots") [`numpy.polynomial.legendre.legfromroots`](numpy.polynomial.legendre.legfromroots#numpy.polynomial.legendre.legfromroots "numpy.polynomial.legendre.legfromroots") [`numpy.polynomial.laguerre.lagfromroots`](numpy.polynomial.laguerre.lagfromroots#numpy.polynomial.laguerre.lagfromroots "numpy.polynomial.laguerre.lagfromroots") [`numpy.polynomial.hermite.hermfromroots`](numpy.polynomial.hermite.hermfromroots#numpy.polynomial.hermite.hermfromroots "numpy.polynomial.hermite.hermfromroots") [`numpy.polynomial.chebyshev.chebfromroots`](numpy.polynomial.chebyshev.chebfromroots#numpy.polynomial.chebyshev.chebfromroots "numpy.polynomial.chebyshev.chebfromroots") #### Examples ``` >>> from numpy.polynomial.hermite_e import hermefromroots, hermeval >>> coef = hermefromroots((-1, 0, 1)) >>> hermeval((-1, 0, 1), coef) array([0., 0., 0.]) >>> coef = hermefromroots((-1j, 1j)) >>> hermeval((-1j, 1j), coef) array([0.+0.j, 0.+0.j]) ``` © 2005–2022 NumPy Developers Licensed under the 3-clause BSD License. <https://numpy.org/doc/1.23/reference/generated/numpy.polynomial.hermite_e.hermefromroots.htmlnumpy.polynomial.hermite_e.hermeroots ====================================== polynomial.hermite_e.hermeroots(*c*)[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/polynomial/hermite_e.py#L1445-L1506) Compute the roots of a HermiteE series. Return the roots (a.k.a. “zeros”) of the polynomial \[p(x) = \sum_i c[i] * He_i(x).\] Parameters **c**1-D array_like 1-D array of coefficients. Returns **out**ndarray Array of the roots of the series. If all the roots are real, then `out` is also real, otherwise it is complex. 
See also [`numpy.polynomial.polynomial.polyroots`](numpy.polynomial.polynomial.polyroots#numpy.polynomial.polynomial.polyroots "numpy.polynomial.polynomial.polyroots") [`numpy.polynomial.legendre.legroots`](numpy.polynomial.legendre.legroots#numpy.polynomial.legendre.legroots "numpy.polynomial.legendre.legroots") [`numpy.polynomial.laguerre.lagroots`](numpy.polynomial.laguerre.lagroots#numpy.polynomial.laguerre.lagroots "numpy.polynomial.laguerre.lagroots") [`numpy.polynomial.hermite.hermroots`](numpy.polynomial.hermite.hermroots#numpy.polynomial.hermite.hermroots "numpy.polynomial.hermite.hermroots") [`numpy.polynomial.chebyshev.chebroots`](numpy.polynomial.chebyshev.chebroots#numpy.polynomial.chebyshev.chebroots "numpy.polynomial.chebyshev.chebroots") #### Notes The root estimates are obtained as the eigenvalues of the companion matrix. Roots far from the origin of the complex plane may have large errors due to the numerical instability of the series for such values. Roots with multiplicity greater than 1 will also show larger errors as the value of the series near such points is relatively insensitive to errors in the roots. Isolated roots near the origin can be improved by a few iterations of Newton’s method. The HermiteE series basis polynomials aren’t powers of `x` so the results of this function may seem unintuitive. #### Examples ``` >>> from numpy.polynomial.hermite_e import hermeroots, hermefromroots >>> coef = hermefromroots([-1, 0, 1]) >>> coef array([0., 2., 0., 1.]) >>> hermeroots(coef) array([-1., 0., 1.]) # may vary ``` <https://numpy.org/doc/1.23/reference/generated/numpy.polynomial.hermite_e.hermeroots.html> numpy.polynomial.hermite_e.hermevander ======================================= polynomial.hermite_e.hermevander(*x*, *deg*)[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/polynomial/hermite_e.py#L1099-L1156) Pseudo-Vandermonde matrix of given degree.
Returns the pseudo-Vandermonde matrix of degree `deg` and sample points `x`. The pseudo-Vandermonde matrix is defined by \[V[..., i] = He_i(x),\] where `0 <= i <= deg`. The leading indices of `V` index the elements of `x` and the last index is the degree of the HermiteE polynomial. If `c` is a 1-D array of coefficients of length `n + 1` and `V` is the array `V = hermevander(x, n)`, then `np.dot(V, c)` and `hermeval(x, c)` are the same up to roundoff. This equivalence is useful both for least squares fitting and for the evaluation of a large number of HermiteE series of the same degree and sample points. Parameters **x**array_like Array of points. The dtype is converted to float64 or complex128 depending on whether any of the elements are complex. If `x` is scalar it is converted to a 1-D array. **deg**int Degree of the resulting matrix. Returns **vander**ndarray The pseudo-Vandermonde matrix. The shape of the returned matrix is `x.shape + (deg + 1,)`, where the last index is the degree of the corresponding HermiteE polynomial. The dtype will be the same as the converted `x`. #### Examples ``` >>> from numpy.polynomial.hermite_e import hermevander >>> x = np.array([-1, 0, 1]) >>> hermevander(x, 3) array([[ 1., -1., 0., 2.], [ 1., 0., -1., -0.], [ 1., 1., 0., -2.]]) ``` <https://numpy.org/doc/1.23/reference/generated/numpy.polynomial.hermite_e.hermevander.html> numpy.polynomial.hermite_e.hermevander2d ========================================= polynomial.hermite_e.hermevander2d(*x*, *y*, *deg*)[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/polynomial/hermite_e.py#L1159-L1209) Pseudo-Vandermonde matrix of given degrees. Returns the pseudo-Vandermonde matrix of degrees `deg` and sample points `(x, y)`. The pseudo-Vandermonde matrix is defined by \[V[..., (deg[1] + 1)*i + j] = He_i(x) * He_j(y),\] where `0 <= i <= deg[0]` and `0 <= j <= deg[1]`.
The leading indices of `V` index the points `(x, y)` and the last index encodes the degrees of the HermiteE polynomials. If `V = hermevander2d(x, y, [xdeg, ydeg])`, then the columns of `V` correspond to the elements of a 2-D coefficient array `c` of shape (xdeg + 1, ydeg + 1) in the order \[c_{00}, c_{01}, c_{02} ... , c_{10}, c_{11}, c_{12} ...\] and `np.dot(V, c.flat)` and `hermeval2d(x, y, c)` will be the same up to roundoff. This equivalence is useful both for least squares fitting and for the evaluation of a large number of 2-D HermiteE series of the same degrees and sample points. Parameters **x, y**array_like Arrays of point coordinates, all of the same shape. The dtypes will be converted to either float64 or complex128 depending on whether any of the elements are complex. Scalars are converted to 1-D arrays. **deg**list of ints List of maximum degrees of the form [x_deg, y_deg]. Returns **vander2d**ndarray The shape of the returned matrix is `x.shape + (order,)`, where \(order = (deg[0]+1)*(deg[1]+1)\). The dtype will be the same as the converted `x` and `y`. See also [`hermevander`](numpy.polynomial.hermite_e.hermevander#numpy.polynomial.hermite_e.hermevander "numpy.polynomial.hermite_e.hermevander"), [`hermevander3d`](numpy.polynomial.hermite_e.hermevander3d#numpy.polynomial.hermite_e.hermevander3d "numpy.polynomial.hermite_e.hermevander3d"), [`hermeval2d`](numpy.polynomial.hermite_e.hermeval2d#numpy.polynomial.hermite_e.hermeval2d "numpy.polynomial.hermite_e.hermeval2d"), [`hermeval3d`](numpy.polynomial.hermite_e.hermeval3d#numpy.polynomial.hermite_e.hermeval3d "numpy.polynomial.hermite_e.hermeval3d") #### Notes New in version 1.7.0. © 2005–2022 NumPy Developers Licensed under the 3-clause BSD License. 
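The equivalence between `np.dot(V, c.flat)` and `hermeval2d(x, y, c)` stated above can be demonstrated with a short sketch (random illustrative data, not from the original page):

```python
import numpy as np
from numpy.polynomial.hermite_e import hermevander2d, hermeval2d

rng = np.random.default_rng(0)
x = rng.standard_normal(5)
y = rng.standard_normal(5)
c = rng.standard_normal((3, 2))  # degrees (2, 1) in x and y

V = hermevander2d(x, y, [2, 1])
# Shape is x.shape + (order,), with order = (deg[0]+1)*(deg[1]+1) = 6.
assert V.shape == (5, 6)

# The flattened coefficient array reproduces the direct evaluation.
assert np.allclose(V @ c.ravel(), hermeval2d(x, y, c))
```

Building `V` once and reusing it is the point: the same matrix serves both least squares fitting and repeated evaluation at fixed sample points.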
<https://numpy.org/doc/1.23/reference/generated/numpy.polynomial.hermite_e.hermevander2d.html> numpy.polynomial.hermite_e.hermevander3d ========================================= polynomial.hermite_e.hermevander3d(*x*, *y*, *z*, *deg*)[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/polynomial/hermite_e.py#L1212-L1263) Pseudo-Vandermonde matrix of given degrees. Returns the pseudo-Vandermonde matrix of degrees `deg` and sample points `(x, y, z)`. If `l, m, n` are the given degrees in `x, y, z`, then the pseudo-Vandermonde matrix is defined by \[V[..., (m+1)(n+1)i + (n+1)j + k] = He_i(x)*He_j(y)*He_k(z),\] where `0 <= i <= l`, `0 <= j <= m`, and `0 <= k <= n`. The leading indices of `V` index the points `(x, y, z)` and the last index encodes the degrees of the HermiteE polynomials. If `V = hermevander3d(x, y, z, [xdeg, ydeg, zdeg])`, then the columns of `V` correspond to the elements of a 3-D coefficient array `c` of shape (xdeg + 1, ydeg + 1, zdeg + 1) in the order \[c_{000}, c_{001}, c_{002},... , c_{010}, c_{011}, c_{012},...\] and `np.dot(V, c.flat)` and `hermeval3d(x, y, z, c)` will be the same up to roundoff. This equivalence is useful both for least squares fitting and for the evaluation of a large number of 3-D HermiteE series of the same degrees and sample points. Parameters **x, y, z**array_like Arrays of point coordinates, all of the same shape. The dtypes will be converted to either float64 or complex128 depending on whether any of the elements are complex. Scalars are converted to 1-D arrays. **deg**list of ints List of maximum degrees of the form [x_deg, y_deg, z_deg]. Returns **vander3d**ndarray The shape of the returned matrix is `x.shape + (order,)`, where \(order = (deg[0]+1)*(deg[1]+1)*(deg[2]+1)\). The dtype will be the same as the converted `x`, `y`, and `z`.
See also [`hermevander`](numpy.polynomial.hermite_e.hermevander#numpy.polynomial.hermite_e.hermevander "numpy.polynomial.hermite_e.hermevander"), [`hermevander2d`](numpy.polynomial.hermite_e.hermevander2d#numpy.polynomial.hermite_e.hermevander2d "numpy.polynomial.hermite_e.hermevander2d"), [`hermeval2d`](numpy.polynomial.hermite_e.hermeval2d#numpy.polynomial.hermite_e.hermeval2d "numpy.polynomial.hermite_e.hermeval2d"), [`hermeval3d`](numpy.polynomial.hermite_e.hermeval3d#numpy.polynomial.hermite_e.hermeval3d "numpy.polynomial.hermite_e.hermeval3d") #### Notes New in version 1.7.0. <https://numpy.org/doc/1.23/reference/generated/numpy.polynomial.hermite_e.hermevander3d.html> numpy.polynomial.hermite_e.hermegauss ====================================== polynomial.hermite_e.hermegauss(*deg*)[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/polynomial/hermite_e.py#L1552-L1616) Gauss-HermiteE quadrature. Computes the sample points and weights for Gauss-HermiteE quadrature. These sample points and weights will correctly integrate polynomials of degree \(2*deg - 1\) or less over the interval \([-\inf, \inf]\) with the weight function \(f(x) = \exp(-x^2/2)\). Parameters **deg**int Number of sample points and weights. It must be >= 1. Returns **x**ndarray 1-D ndarray containing the sample points. **y**ndarray 1-D ndarray containing the weights. #### Notes New in version 1.7.0. The results have only been tested up to degree 100, higher degrees may be problematic. The weights are determined by using the fact that \[w_k = c / (He'_n(x_k) * He_{n-1}(x_k))\] where \(c\) is a constant independent of \(k\) and \(x_k\) is the k’th root of \(He_n\), and then scaling the results to get the right value when integrating 1.
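The exactness claim for `hermegauss` can be checked with a small quadrature sketch: three points suffice to integrate any polynomial of degree 5 or less against the weight \(\exp(-x^2/2)\), and the integral of \(x^2\) against that weight over the real line is \(\sqrt{2\pi}\):

```python
import numpy as np
from numpy.polynomial.hermite_e import hermegauss

# deg points integrate polynomials of degree <= 2*deg - 1 exactly
# against exp(-x**2 / 2) over (-inf, inf).
x, w = hermegauss(3)

# Integral of x**2 * exp(-x**2/2) over the real line is sqrt(2*pi).
assert np.isclose(np.sum(w * x**2), np.sqrt(2 * np.pi))

# The weights are scaled so that integrating 1 gives sqrt(2*pi).
assert np.isclose(np.sum(w), np.sqrt(2 * np.pi))
```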
<https://numpy.org/doc/1.23/reference/generated/numpy.polynomial.hermite_e.hermegauss.html> numpy.polynomial.hermite_e.hermeweight ======================================= polynomial.hermite_e.hermeweight(*x*)[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/polynomial/hermite_e.py#L1619-L1643) Weight function of the Hermite_e polynomials. The weight function is \(\exp(-x^2/2)\) and the interval of integration is \([-\inf, \inf]\). The HermiteE polynomials are orthogonal, but not normalized, with respect to this weight function. Parameters **x**array_like Values at which the weight function will be computed. Returns **w**ndarray The weight function at `x`. #### Notes New in version 1.7.0. <https://numpy.org/doc/1.23/reference/generated/numpy.polynomial.hermite_e.hermeweight.html> numpy.polynomial.hermite_e.hermecompanion ========================================== polynomial.hermite_e.hermecompanion(*c*)[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/polynomial/hermite_e.py#L1399-L1442) Return the scaled companion matrix of c. The basis polynomials are scaled so that the companion matrix is symmetric when `c` is a HermiteE basis polynomial. This provides better eigenvalue estimates than the unscaled case and for basis polynomials the eigenvalues are guaranteed to be real if [`numpy.linalg.eigvalsh`](numpy.linalg.eigvalsh#numpy.linalg.eigvalsh "numpy.linalg.eigvalsh") is used to obtain them. Parameters **c**array_like 1-D array of HermiteE series coefficients ordered from low to high degree. Returns **mat**ndarray Scaled companion matrix of dimensions (deg, deg). #### Notes New in version 1.7.0.
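A short sketch of how the companion matrix relates to root finding: the eigenvalues of `hermecompanion(c)` are the roots of the series, which is exactly what `hermeroots` computes internally. The series here is built from known roots for illustration; since it is a general series rather than a basis polynomial, plain `eigvals` is used instead of `eigvalsh`:

```python
import numpy as np
from numpy.polynomial.hermite_e import hermecompanion, hermefromroots

# Build a degree-3 series with known roots, then recover them as
# eigenvalues of the scaled companion matrix.
coef = hermefromroots([-2.0, 0.0, 1.0])
mat = hermecompanion(coef)
assert mat.shape == (3, 3)

roots = np.sort(np.linalg.eigvals(mat).real)
assert np.allclose(roots, [-2.0, 0.0, 1.0])
```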
<https://numpy.org/doc/1.23/reference/generated/numpy.polynomial.hermite_e.hermecompanion.htmlnumpy.polynomial.hermite_e.hermefit ==================================== polynomial.hermite_e.hermefit(*x*, *y*, *deg*, *rcond=None*, *full=False*, *w=None*)[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/polynomial/hermite_e.py#L1266-L1396) Least squares fit of Hermite series to data. Return the coefficients of a HermiteE series of degree `deg` that is the least squares fit to the data values `y` given at points `x`. If `y` is 1-D the returned coefficients will also be 1-D. If `y` is 2-D multiple fits are done, one for each column of `y`, and the resulting coefficients are stored in the corresponding columns of a 2-D return. The fitted polynomial(s) are in the form \[p(x) = c_0 + c_1 * He_1(x) + ... + c_n * He_n(x),\] where `n` is `deg`. Parameters **x**array_like, shape (M,) x-coordinates of the M sample points `(x[i], y[i])`. **y**array_like, shape (M,) or (M, K) y-coordinates of the sample points. Several data sets of sample points sharing the same x-coordinates can be fitted at once by passing in a 2D-array that contains one dataset per column. **deg**int or 1-D array_like Degree(s) of the fitting polynomials. If `deg` is a single integer all terms up to and including the `deg`’th term are included in the fit. For NumPy versions >= 1.11.0 a list of integers specifying the degrees of the terms to include may be used instead. **rcond**float, optional Relative condition number of the fit. Singular values smaller than this relative to the largest singular value will be ignored. The default value is len(x)*eps, where eps is the relative precision of the float type, about 2e-16 in most cases. **full**bool, optional Switch determining nature of return value. When it is False (the default) just the coefficients are returned, when True diagnostic information from the singular value decomposition is also returned. **w**array_like, shape (`M`,), optional Weights. 
If not None, the weight `w[i]` applies to the unsquared residual `y[i] - y_hat[i]` at `x[i]`. Ideally the weights are chosen so that the errors of the products `w[i]*y[i]` all have the same variance. When using inverse-variance weighting, use `w[i] = 1/sigma(y[i])`. The default value is None. Returns **coef**ndarray, shape (M,) or (M, K) Hermite coefficients ordered from low to high. If `y` was 2-D, the coefficients for the data in column k of `y` are in column `k`. **[residuals, rank, singular_values, rcond]**list These values are only returned if `full == True` * residuals – sum of squared residuals of the least squares fit * rank – the numerical rank of the scaled Vandermonde matrix * singular_values – singular values of the scaled Vandermonde matrix * rcond – value of `rcond`. For more details, see [`numpy.linalg.lstsq`](numpy.linalg.lstsq#numpy.linalg.lstsq "numpy.linalg.lstsq"). Warns RankWarning The rank of the coefficient matrix in the least-squares fit is deficient. The warning is only raised if `full = False`. 
The warnings can be turned off by ``` >>> import warnings >>> warnings.simplefilter('ignore', np.RankWarning) ``` See also [`numpy.polynomial.chebyshev.chebfit`](numpy.polynomial.chebyshev.chebfit#numpy.polynomial.chebyshev.chebfit "numpy.polynomial.chebyshev.chebfit") [`numpy.polynomial.legendre.legfit`](numpy.polynomial.legendre.legfit#numpy.polynomial.legendre.legfit "numpy.polynomial.legendre.legfit") [`numpy.polynomial.polynomial.polyfit`](numpy.polynomial.polynomial.polyfit#numpy.polynomial.polynomial.polyfit "numpy.polynomial.polynomial.polyfit") [`numpy.polynomial.hermite.hermfit`](numpy.polynomial.hermite.hermfit#numpy.polynomial.hermite.hermfit "numpy.polynomial.hermite.hermfit") [`numpy.polynomial.laguerre.lagfit`](numpy.polynomial.laguerre.lagfit#numpy.polynomial.laguerre.lagfit "numpy.polynomial.laguerre.lagfit") [`hermeval`](numpy.polynomial.hermite_e.hermeval#numpy.polynomial.hermite_e.hermeval "numpy.polynomial.hermite_e.hermeval") Evaluates a Hermite series. [`hermevander`](numpy.polynomial.hermite_e.hermevander#numpy.polynomial.hermite_e.hermevander "numpy.polynomial.hermite_e.hermevander") pseudo Vandermonde matrix of Hermite series. [`hermeweight`](numpy.polynomial.hermite_e.hermeweight#numpy.polynomial.hermite_e.hermeweight "numpy.polynomial.hermite_e.hermeweight") HermiteE weight function. [`numpy.linalg.lstsq`](numpy.linalg.lstsq#numpy.linalg.lstsq "numpy.linalg.lstsq") Computes a least-squares fit from the matrix. [`scipy.interpolate.UnivariateSpline`](https://docs.scipy.org/doc/scipy/reference/generated/scipy.interpolate.UnivariateSpline.html#scipy.interpolate.UnivariateSpline "(in SciPy v1.8.1)") Computes spline fits. #### Notes The solution is the coefficients of the HermiteE series `p` that minimizes the sum of the weighted squared errors \[E = \sum_j w_j^2 * |y_j - p(x_j)|^2,\] where the \(w_j\) are the weights. 
This problem is solved by setting up the (typically) overdetermined matrix equation \[V(x) * c = w * y,\] where `V` is the pseudo Vandermonde matrix of `x`, the elements of `c` are the coefficients to be solved for, and the elements of `y` are the observed values. This equation is then solved using the singular value decomposition of `V`. If some of the singular values of `V` are so small that they are neglected, then a [`RankWarning`](numpy.rankwarning#numpy.RankWarning "numpy.RankWarning") will be issued. This means that the coefficient values may be poorly determined. Using a lower order fit will usually get rid of the warning. The `rcond` parameter can also be set to a value smaller than its default, but the resulting fit may be spurious and have large contributions from roundoff error. Fits using HermiteE series are probably most useful when the data can be approximated by `sqrt(w(x)) * p(x)`, where `w(x)` is the HermiteE weight. In that case the weight `sqrt(w(x[i]))` should be used together with data values `y[i]/sqrt(w(x[i]))`. The weight function is available as [`hermeweight`](numpy.polynomial.hermite_e.hermeweight#numpy.polynomial.hermite_e.hermeweight "numpy.polynomial.hermite_e.hermeweight"). #### References 1 Wikipedia, “Curve fitting”, <https://en.wikipedia.org/wiki/Curve_fitting> #### Examples ``` >>> from numpy.polynomial.hermite_e import hermefit, hermeval >>> x = np.linspace(-10, 10) >>> np.random.seed(123) >>> err = np.random.randn(len(x))/10 >>> y = hermeval(x, [1, 2, 3]) + err >>> hermefit(x, y, 2) array([ 1.01690445, 1.99951418, 2.99948696]) # may vary ``` © 2005–2022 NumPy Developers Licensed under the 3-clause BSD License. 
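As a sketch of the inverse-variance weighting described above (the noise model, point count, and variable names here are illustrative assumptions, not part of the NumPy docs):

```python
import numpy as np
from numpy.polynomial.hermite_e import hermefit, hermeval

rng = np.random.default_rng(0)
x = np.linspace(-3.0, 3.0, 200)
sigma = 0.05 + 0.02 * np.abs(x)              # assumed known per-point noise level
y = hermeval(x, [1.0, 2.0, 3.0]) + rng.normal(0.0, sigma)

# w[i] = 1/sigma(y[i]) gives inverse-variance weighting of the residuals
coef = hermefit(x, y, 2, w=1.0 / sigma)
```

With this weighting, noisier points contribute proportionally less to the least-squares objective, and `coef` should recover something close to `[1, 2, 3]`.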
<https://numpy.org/doc/1.23/reference/generated/numpy.polynomial.hermite_e.hermefit.html>

numpy.polynomial.hermite_e.hermetrim
====================================

polynomial.hermite_e.hermetrim(*c*, *tol=0*)[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/polynomial/polyutils.py#L156-L208) Remove “small” “trailing” coefficients from a polynomial. “Small” means “small in absolute value” and is controlled by the parameter `tol`; “trailing” means highest order coefficient(s), e.g., in `[0, 1, 1, 0, 0]` (which represents `0 + x + x**2 + 0*x**3 + 0*x**4`) both the 3rd and 4th order coefficients would be “trimmed.” Parameters **c**array_like 1-d array of coefficients, ordered from lowest order to highest. **tol**number, optional Trailing (i.e., highest order) elements with absolute value less than or equal to `tol` (default value is zero) are removed. Returns **trimmed**ndarray 1-d array with trailing zeros removed. If the resulting series would be empty, a series containing a single zero is returned. Raises ValueError If `tol` < 0 See also `trimseq` #### Examples ``` >>> from numpy.polynomial import polyutils as pu >>> pu.trimcoef((0,0,3,0,5,0,0)) array([0., 0., 3., 0., 5.]) >>> pu.trimcoef((0,0,1e-3,0,1e-5,0,0),1e-3) # item == tol is trimmed array([0.]) >>> i = complex(0,1) # works for complex >>> pu.trimcoef((3e-4,1e-3*(1-i),5e-4,2e-5*(1+i)), 1e-3) array([0.0003+0.j , 0.001 -0.001j]) ``` © 2005–2022 NumPy Developers Licensed under the 3-clause BSD License. <https://numpy.org/doc/1.23/reference/generated/numpy.polynomial.hermite_e.hermetrim.html>

numpy.polynomial.hermite_e.hermeline
====================================

polynomial.hermite_e.hermeline(*off*, *scl*)[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/polynomial/hermite_e.py#L217-L253) Hermite series whose graph is a straight line. Parameters **off, scl**scalars The specified line is given by `off + scl*x`. 
Returns **y**ndarray This module’s representation of the Hermite series for `off + scl*x`. See also [`numpy.polynomial.polynomial.polyline`](numpy.polynomial.polynomial.polyline#numpy.polynomial.polynomial.polyline "numpy.polynomial.polynomial.polyline") [`numpy.polynomial.chebyshev.chebline`](numpy.polynomial.chebyshev.chebline#numpy.polynomial.chebyshev.chebline "numpy.polynomial.chebyshev.chebline") [`numpy.polynomial.legendre.legline`](numpy.polynomial.legendre.legline#numpy.polynomial.legendre.legline "numpy.polynomial.legendre.legline") [`numpy.polynomial.laguerre.lagline`](numpy.polynomial.laguerre.lagline#numpy.polynomial.laguerre.lagline "numpy.polynomial.laguerre.lagline") [`numpy.polynomial.hermite.hermline`](numpy.polynomial.hermite.hermline#numpy.polynomial.hermite.hermline "numpy.polynomial.hermite.hermline") #### Examples ``` >>> from numpy.polynomial.hermite_e import hermeline, hermeval >>> hermeval(0,hermeline(3, 2)) 3.0 >>> hermeval(1,hermeline(3, 2)) 5.0 ``` © 2005–2022 NumPy Developers Licensed under the 3-clause BSD License. <https://numpy.org/doc/1.23/reference/generated/numpy.polynomial.hermite_e.hermeline.html>

numpy.polynomial.hermite_e.herme2poly
=====================================

polynomial.hermite_e.herme2poly(*c*)[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/polynomial/hermite_e.py#L143-L197) Convert a Hermite series to a polynomial. Convert an array representing the coefficients of a Hermite series, ordered from lowest degree to highest, to an array of the coefficients of the equivalent polynomial (relative to the “standard” basis) ordered from lowest to highest degree. Parameters **c**array_like 1-D array containing the Hermite series coefficients, ordered from lowest order term to highest. Returns **pol**ndarray 1-D array containing the coefficients of the equivalent polynomial (relative to the “standard” basis) ordered from lowest order term to highest. 
See also [`poly2herme`](numpy.polynomial.hermite_e.poly2herme#numpy.polynomial.hermite_e.poly2herme "numpy.polynomial.hermite_e.poly2herme") #### Notes The easy way to do conversions between polynomial basis sets is to use the convert method of a class instance. #### Examples ``` >>> from numpy.polynomial.hermite_e import herme2poly >>> herme2poly([ 2., 10., 2., 3.]) array([0., 1., 2., 3.]) ``` © 2005–2022 NumPy Developers Licensed under the 3-clause BSD License. <https://numpy.org/doc/1.23/reference/generated/numpy.polynomial.hermite_e.herme2poly.html>

numpy.polynomial.hermite_e.poly2herme
=====================================

polynomial.hermite_e.poly2herme(*pol*)[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/polynomial/hermite_e.py#L97-L140) Convert a polynomial to a Hermite series. Convert an array representing the coefficients of a polynomial (relative to the “standard” basis) ordered from lowest degree to highest, to an array of the coefficients of the equivalent Hermite series, ordered from lowest to highest degree. Parameters **pol**array_like 1-D array containing the polynomial coefficients Returns **c**ndarray 1-D array containing the coefficients of the equivalent Hermite series. See also [`herme2poly`](numpy.polynomial.hermite_e.herme2poly#numpy.polynomial.hermite_e.herme2poly "numpy.polynomial.hermite_e.herme2poly") #### Notes The easy way to do conversions between polynomial basis sets is to use the convert method of a class instance. #### Examples ``` >>> from numpy.polynomial.hermite_e import poly2herme >>> poly2herme(np.arange(4)) array([ 2., 10., 2., 3.]) ``` © 2005–2022 NumPy Developers Licensed under the 3-clause BSD License. 
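The two converters above are inverses of each other; a quick round-trip check, using the same coefficients as the examples:

```python
import numpy as np
from numpy.polynomial.hermite_e import herme2poly, poly2herme

pol = np.arange(4, dtype=float)     # 0 + 1*x + 2*x**2 + 3*x**3
c = poly2herme(pol)                 # HermiteE coefficients: [2., 10., 2., 3.]
back = herme2poly(c)                # round trip recovers the original polynomial
assert np.allclose(back, pol)
```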
<https://numpy.org/doc/1.23/reference/generated/numpy.polynomial.hermite_e.poly2herme.html>

numpy.polynomial.laguerre.Laguerre
==================================

*class*numpy.polynomial.laguerre.Laguerre(*coef*, *domain=None*, *window=None*)[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/polynomial/laguerre.py#L1606-L1645) A Laguerre series class. The Laguerre class provides the standard Python numerical methods ‘+’, ‘-’, ‘*’, ‘//’, ‘%’, ‘divmod’, ‘**’, and ‘()’ as well as the attributes and methods listed in the `ABCPolyBase` documentation. Parameters **coef**array_like Laguerre coefficients in order of increasing degree, i.e., `(1, 2, 3)` gives `1*L_0(x) + 2*L_1(x) + 3*L_2(x)`. **domain**(2,) array_like, optional Domain to use. The interval `[domain[0], domain[1]]` is mapped to the interval `[window[0], window[1]]` by shifting and scaling. The default value is [0, 1]. **window**(2,) array_like, optional Window, see [`domain`](numpy.polynomial.laguerre.laguerre.domain#numpy.polynomial.laguerre.Laguerre.domain "numpy.polynomial.laguerre.Laguerre.domain") for its use. The default value is [0, 1]. New in version 1.6.0. #### Methods | | | | --- | --- | | [`__call__`](numpy.polynomial.laguerre.laguerre.__call__#numpy.polynomial.laguerre.Laguerre.__call__ "numpy.polynomial.laguerre.Laguerre.__call__")(arg) | Call self as a function. | | [`basis`](numpy.polynomial.laguerre.laguerre.basis#numpy.polynomial.laguerre.Laguerre.basis "numpy.polynomial.laguerre.Laguerre.basis")(deg[, domain, window]) | Series basis polynomial of degree `deg`. | | [`cast`](numpy.polynomial.laguerre.laguerre.cast#numpy.polynomial.laguerre.Laguerre.cast "numpy.polynomial.laguerre.Laguerre.cast")(series[, domain, window]) | Convert series to series of this class. 
| | [`convert`](numpy.polynomial.laguerre.laguerre.convert#numpy.polynomial.laguerre.Laguerre.convert "numpy.polynomial.laguerre.Laguerre.convert")([domain, kind, window]) | Convert series to a different kind and/or domain and/or window. | | [`copy`](numpy.polynomial.laguerre.laguerre.copy#numpy.polynomial.laguerre.Laguerre.copy "numpy.polynomial.laguerre.Laguerre.copy")() | Return a copy. | | [`cutdeg`](numpy.polynomial.laguerre.laguerre.cutdeg#numpy.polynomial.laguerre.Laguerre.cutdeg "numpy.polynomial.laguerre.Laguerre.cutdeg")(deg) | Truncate series to the given degree. | | [`degree`](numpy.polynomial.laguerre.laguerre.degree#numpy.polynomial.laguerre.Laguerre.degree "numpy.polynomial.laguerre.Laguerre.degree")() | The degree of the series. | | [`deriv`](numpy.polynomial.laguerre.laguerre.deriv#numpy.polynomial.laguerre.Laguerre.deriv "numpy.polynomial.laguerre.Laguerre.deriv")([m]) | Differentiate. | | [`fit`](numpy.polynomial.laguerre.laguerre.fit#numpy.polynomial.laguerre.Laguerre.fit "numpy.polynomial.laguerre.Laguerre.fit")(x, y, deg[, domain, rcond, full, w, window]) | Least squares fit to data. | | [`fromroots`](numpy.polynomial.laguerre.laguerre.fromroots#numpy.polynomial.laguerre.Laguerre.fromroots "numpy.polynomial.laguerre.Laguerre.fromroots")(roots[, domain, window]) | Return series instance that has the specified roots. | | [`has_samecoef`](numpy.polynomial.laguerre.laguerre.has_samecoef#numpy.polynomial.laguerre.Laguerre.has_samecoef "numpy.polynomial.laguerre.Laguerre.has_samecoef")(other) | Check if coefficients match. | | [`has_samedomain`](numpy.polynomial.laguerre.laguerre.has_samedomain#numpy.polynomial.laguerre.Laguerre.has_samedomain "numpy.polynomial.laguerre.Laguerre.has_samedomain")(other) | Check if domains match. | | [`has_sametype`](numpy.polynomial.laguerre.laguerre.has_sametype#numpy.polynomial.laguerre.Laguerre.has_sametype "numpy.polynomial.laguerre.Laguerre.has_sametype")(other) | Check if types match. 
| | [`has_samewindow`](numpy.polynomial.laguerre.laguerre.has_samewindow#numpy.polynomial.laguerre.Laguerre.has_samewindow "numpy.polynomial.laguerre.Laguerre.has_samewindow")(other) | Check if windows match. | | [`identity`](numpy.polynomial.laguerre.laguerre.identity#numpy.polynomial.laguerre.Laguerre.identity "numpy.polynomial.laguerre.Laguerre.identity")([domain, window]) | Identity function. | | [`integ`](numpy.polynomial.laguerre.laguerre.integ#numpy.polynomial.laguerre.Laguerre.integ "numpy.polynomial.laguerre.Laguerre.integ")([m, k, lbnd]) | Integrate. | | [`linspace`](numpy.polynomial.laguerre.laguerre.linspace#numpy.polynomial.laguerre.Laguerre.linspace "numpy.polynomial.laguerre.Laguerre.linspace")([n, domain]) | Return x, y values at equally spaced points in domain. | | [`mapparms`](numpy.polynomial.laguerre.laguerre.mapparms#numpy.polynomial.laguerre.Laguerre.mapparms "numpy.polynomial.laguerre.Laguerre.mapparms")() | Return the mapping parameters. | | [`roots`](numpy.polynomial.laguerre.laguerre.roots#numpy.polynomial.laguerre.Laguerre.roots "numpy.polynomial.laguerre.Laguerre.roots")() | Return the roots of the series polynomial. | | [`trim`](numpy.polynomial.laguerre.laguerre.trim#numpy.polynomial.laguerre.Laguerre.trim "numpy.polynomial.laguerre.Laguerre.trim")([tol]) | Remove trailing coefficients | | [`truncate`](numpy.polynomial.laguerre.laguerre.truncate#numpy.polynomial.laguerre.Laguerre.truncate "numpy.polynomial.laguerre.Laguerre.truncate")(size) | Truncate series to length `size`. | © 2005–2022 NumPy Developers Licensed under the 3-clause BSD License. <https://numpy.org/doc/1.23/reference/generated/numpy.polynomial.laguerre.Laguerre.html>

numpy.polynomial.laguerre.lagdomain
===================================

polynomial.laguerre.lagdomain*=array([0, 1])* An array object represents a multidimensional, homogeneous array of fixed-size items. 
An associated data-type object describes the format of each element in the array (its byte-order, how many bytes it occupies in memory, whether it is an integer, a floating point number, or something else, etc.) Arrays should be constructed using [`array`](numpy.array#numpy.array "numpy.array"), [`zeros`](numpy.zeros#numpy.zeros "numpy.zeros") or [`empty`](numpy.empty#numpy.empty "numpy.empty") (refer to the See Also section below). The parameters given here refer to a low-level method (`ndarray(...)`) for instantiating an array. For more information, refer to the [`numpy`](../index#module-numpy "numpy") module and examine the methods and attributes of an array. Parameters **(for the __new__ method; see Notes below)** **shape**tuple of ints Shape of created array. **dtype**data-type, optional Any object that can be interpreted as a numpy data type. **buffer**object exposing buffer interface, optional Used to fill the array with data. **offset**int, optional Offset of array data in buffer. **strides**tuple of ints, optional Strides of data in memory. **order**{‘C’, ‘F’}, optional Row-major (C-style) or column-major (Fortran-style) order. See also [`array`](numpy.array#numpy.array "numpy.array") Construct an array. [`zeros`](numpy.zeros#numpy.zeros "numpy.zeros") Create an array, each element of which is zero. [`empty`](numpy.empty#numpy.empty "numpy.empty") Create an array, but leave its allocated memory unchanged (i.e., it contains “garbage”). [`dtype`](numpy.dtype#numpy.dtype "numpy.dtype") Create a data-type. [`numpy.typing.NDArray`](../typing#numpy.typing.NDArray "numpy.typing.NDArray") An ndarray alias [generic](https://docs.python.org/3/glossary.html#term-generic-type "(in Python v3.10)") w.r.t. its [`dtype.type`](numpy.dtype.type#numpy.dtype.type "numpy.dtype.type"). #### Notes There are two modes of creating an array using `__new__`: 1. If `buffer` is None, then only [`shape`](numpy.shape#numpy.shape "numpy.shape"), [`dtype`](numpy.dtype#numpy.dtype "numpy.dtype"), and `order` are used. 2. If `buffer` is an object exposing the buffer interface, then all keywords are interpreted. No `__init__` method is needed because the array is fully initialized after the `__new__` method. #### Examples These examples illustrate the low-level [`ndarray`](numpy.ndarray#numpy.ndarray "numpy.ndarray") constructor. Refer to the `See Also` section above for easier ways of constructing an ndarray. 
First mode, `buffer` is None: ``` >>> np.ndarray(shape=(2,2), dtype=float, order='F') array([[0.0e+000, 0.0e+000], # random [ nan, 2.5e-323]]) ``` Second mode: ``` >>> np.ndarray((2,), buffer=np.array([1,2,3]), ... offset=np.int_().itemsize, ... dtype=int) # offset = 1*itemsize, i.e. skip first element array([2, 3]) ``` Attributes **T**ndarray Transpose of the array. **data**buffer The array’s elements, in memory. **dtype**dtype object Describes the format of the elements in the array. **flags**dict Dictionary containing information related to memory use, e.g., ‘C_CONTIGUOUS’, ‘OWNDATA’, ‘WRITEABLE’, etc. **flat**numpy.flatiter object Flattened version of the array as an iterator. The iterator allows assignments, e.g., `x.flat = 3` (See [`ndarray.flat`](numpy.ndarray.flat#numpy.ndarray.flat "numpy.ndarray.flat") for assignment examples; TODO). **imag**ndarray Imaginary part of the array. **real**ndarray Real part of the array. **size**int Number of elements in the array. **itemsize**int The memory use of each array element in bytes. **nbytes**int The total number of bytes required to store the array data, i.e., `itemsize * size`. **ndim**int The array’s number of dimensions. **shape**tuple of ints Shape of the array. **strides**tuple of ints The step-size required to move from one element to the next in memory. For example, a contiguous `(3, 4)` array of type `int16` in C-order has strides `(8, 2)`. This implies that to move from element to element in memory requires jumps of 2 bytes. To move from row-to-row, one needs to jump 8 bytes at a time (`2 * 4`). **ctypes**ctypes object Class containing properties of the array needed for interaction with ctypes. **base**ndarray If the array is a view into another array, that array is its `base` (unless that array is also a view). The `base` array is where the array data is actually stored. © 2005–2022 NumPy Developers Licensed under the 3-clause BSD License. 
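The module-level constants documented on this and the following pages are simply coefficient arrays for special Laguerre series. For example, `lagx` encodes the identity function because `L_0(x) = 1` and `L_1(x) = 1 - x`, so `x = L_0(x) - L_1(x)`:

```python
import numpy as np
from numpy.polynomial import laguerre as L

assert np.array_equal(L.lagdomain, [0, 1])   # default domain
assert np.array_equal(L.lagzero, [0])        # the zero series
assert np.array_equal(L.lagone, [1])         # the constant 1
assert np.array_equal(L.lagx, [1, -1])       # x = L_0(x) - L_1(x)

# Check numerically that the coefficients [1, -1] really evaluate to x:
x = np.linspace(0.0, 5.0, 11)
assert np.allclose(L.lagval(x, L.lagx), x)
```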
<https://numpy.org/doc/1.23/reference/generated/numpy.polynomial.laguerre.lagdomain.html>

numpy.polynomial.laguerre.lagzero
=================================

polynomial.laguerre.lagzero*=array([0])*

An `ndarray` holding the coefficients of the Laguerre series that is identically zero (see the `ndarray` class description under `lagdomain` above).

© 2005–2022 NumPy Developers Licensed under the 3-clause BSD License. 
<https://numpy.org/doc/1.23/reference/generated/numpy.polynomial.laguerre.lagzero.html>

numpy.polynomial.laguerre.lagone
================================

polynomial.laguerre.lagone*=array([1])*

An `ndarray` holding the coefficients of the Laguerre series that is identically one (see the `ndarray` class description under `lagdomain` above).

© 2005–2022 NumPy Developers Licensed under the 3-clause BSD License. 
<https://numpy.org/doc/1.23/reference/generated/numpy.polynomial.laguerre.lagone.html>

numpy.polynomial.laguerre.lagx
==============================

polynomial.laguerre.lagx*=array([ 1, -1])*

An `ndarray` holding the Laguerre coefficients of the identity function: since `L_0(x) = 1` and `L_1(x) = 1 - x`, `x = L_0(x) - L_1(x)` (see the `ndarray` class description under `lagdomain` above).

© 2005–2022 NumPy Developers Licensed under the 3-clause BSD License. 
<https://numpy.org/doc/1.23/reference/generated/numpy.polynomial.laguerre.lagx.html>

numpy.polynomial.laguerre.lagadd
================================

polynomial.laguerre.lagadd(*c1*, *c2*)[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/polynomial/laguerre.py#L307-L345) Add one Laguerre series to another. Returns the sum of two Laguerre series `c1` + `c2`. The arguments are sequences of coefficients ordered from lowest order term to highest, i.e., [1,2,3] represents the series `P_0 + 2*P_1 + 3*P_2`. Parameters **c1, c2**array_like 1-D arrays of Laguerre series coefficients ordered from low to high. Returns **out**ndarray Array representing the Laguerre series of their sum. See also [`lagsub`](numpy.polynomial.laguerre.lagsub#numpy.polynomial.laguerre.lagsub "numpy.polynomial.laguerre.lagsub"), [`lagmulx`](numpy.polynomial.laguerre.lagmulx#numpy.polynomial.laguerre.lagmulx "numpy.polynomial.laguerre.lagmulx"), [`lagmul`](numpy.polynomial.laguerre.lagmul#numpy.polynomial.laguerre.lagmul "numpy.polynomial.laguerre.lagmul"), [`lagdiv`](numpy.polynomial.laguerre.lagdiv#numpy.polynomial.laguerre.lagdiv "numpy.polynomial.laguerre.lagdiv"), [`lagpow`](numpy.polynomial.laguerre.lagpow#numpy.polynomial.laguerre.lagpow "numpy.polynomial.laguerre.lagpow") #### Notes Unlike multiplication, division, etc., the sum of two Laguerre series is a Laguerre series (without having to “reproject” the result onto the basis set) so addition, just like that of “standard” polynomials, is simply “component-wise.” #### Examples ``` >>> from numpy.polynomial.laguerre import lagadd >>> lagadd([1, 2, 3], [1, 2, 3, 4]) array([2., 4., 6., 4.]) ``` © 2005–2022 NumPy Developers Licensed under the 3-clause BSD License. 
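Because addition is component-wise, the summed coefficient array evaluates to the sum of the two series at every point; a quick check (the sample points are chosen arbitrarily):

```python
import numpy as np
from numpy.polynomial.laguerre import lagadd, lagval

c1, c2 = [1, 2, 3], [1, 2, 3, 4]
s = lagadd(c1, c2)                      # component-wise sum: [2., 4., 6., 4.]
x = np.linspace(0.0, 4.0, 9)
assert np.allclose(lagval(x, s), lagval(x, c1) + lagval(x, c2))
```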
<https://numpy.org/doc/1.23/reference/generated/numpy.polynomial.laguerre.lagadd.htmlnumpy.polynomial.laguerre.lagsub ================================ polynomial.laguerre.lagsub(*c1*, *c2*)[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/polynomial/laguerre.py#L348-L385) Subtract one Laguerre series from another. Returns the difference of two Laguerre series `c1` - `c2`. The sequences of coefficients are from lowest order term to highest, i.e., [1,2,3] represents the series `P_0 + 2*P_1 + 3*P_2`. Parameters **c1, c2**array_like 1-D arrays of Laguerre series coefficients ordered from low to high. Returns **out**ndarray Of Laguerre series coefficients representing their difference. See also [`lagadd`](numpy.polynomial.laguerre.lagadd#numpy.polynomial.laguerre.lagadd "numpy.polynomial.laguerre.lagadd"), [`lagmulx`](numpy.polynomial.laguerre.lagmulx#numpy.polynomial.laguerre.lagmulx "numpy.polynomial.laguerre.lagmulx"), [`lagmul`](numpy.polynomial.laguerre.lagmul#numpy.polynomial.laguerre.lagmul "numpy.polynomial.laguerre.lagmul"), [`lagdiv`](numpy.polynomial.laguerre.lagdiv#numpy.polynomial.laguerre.lagdiv "numpy.polynomial.laguerre.lagdiv"), [`lagpow`](numpy.polynomial.laguerre.lagpow#numpy.polynomial.laguerre.lagpow "numpy.polynomial.laguerre.lagpow") #### Notes Unlike multiplication, division, etc., the difference of two Laguerre series is a Laguerre series (without having to “reproject” the result onto the basis set) so subtraction, just like that of “standard” polynomials, is simply “component-wise.” #### Examples ``` >>> from numpy.polynomial.laguerre import lagsub >>> lagsub([1, 2, 3, 4], [1, 2, 3]) array([0., 0., 0., 4.]) ``` © 2005–2022 NumPy Developers Licensed under the 3-clause BSD License. 
<https://numpy.org/doc/1.23/reference/generated/numpy.polynomial.laguerre.lagsub.htmlnumpy.polynomial.laguerre.lagmulx ================================= polynomial.laguerre.lagmulx(*c*)[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/polynomial/laguerre.py#L388-L439) Multiply a Laguerre series by x. Multiply the Laguerre series `c` by x, where x is the independent variable. Parameters **c**array_like 1-D array of Laguerre series coefficients ordered from low to high. Returns **out**ndarray Array representing the result of the multiplication. See also [`lagadd`](numpy.polynomial.laguerre.lagadd#numpy.polynomial.laguerre.lagadd "numpy.polynomial.laguerre.lagadd"), [`lagsub`](numpy.polynomial.laguerre.lagsub#numpy.polynomial.laguerre.lagsub "numpy.polynomial.laguerre.lagsub"), [`lagmul`](numpy.polynomial.laguerre.lagmul#numpy.polynomial.laguerre.lagmul "numpy.polynomial.laguerre.lagmul"), [`lagdiv`](numpy.polynomial.laguerre.lagdiv#numpy.polynomial.laguerre.lagdiv "numpy.polynomial.laguerre.lagdiv"), [`lagpow`](numpy.polynomial.laguerre.lagpow#numpy.polynomial.laguerre.lagpow "numpy.polynomial.laguerre.lagpow") #### Notes The multiplication uses the recursion relationship for Laguerre polynomials in the form \[xP_i(x) = (-(i + 1)*P_{i + 1}(x) + (2i + 1)P_{i}(x) - iP_{i - 1}(x))\] #### Examples ``` >>> from numpy.polynomial.laguerre import lagmulx >>> lagmulx([1, 2, 3]) array([-1., -1., 11., -9.]) ``` © 2005–2022 NumPy Developers Licensed under the 3-clause BSD License. <https://numpy.org/doc/1.23/reference/generated/numpy.polynomial.laguerre.lagmulx.htmlnumpy.polynomial.laguerre.lagmul ================================ polynomial.laguerre.lagmul(*c1*, *c2*)[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/polynomial/laguerre.py#L442-L505) Multiply one Laguerre series by another. Returns the product of two Laguerre series `c1` * `c2`. 
The arguments are sequences of coefficients, from lowest order “term” to highest, e.g., [1,2,3] represents the series `P_0 + 2*P_1 + 3*P_2`. Parameters **c1, c2**array_like 1-D arrays of Laguerre series coefficients ordered from low to high. Returns **out**ndarray Of Laguerre series coefficients representing their product. See also [`lagadd`](numpy.polynomial.laguerre.lagadd#numpy.polynomial.laguerre.lagadd "numpy.polynomial.laguerre.lagadd"), [`lagsub`](numpy.polynomial.laguerre.lagsub#numpy.polynomial.laguerre.lagsub "numpy.polynomial.laguerre.lagsub"), [`lagmulx`](numpy.polynomial.laguerre.lagmulx#numpy.polynomial.laguerre.lagmulx "numpy.polynomial.laguerre.lagmulx"), [`lagdiv`](numpy.polynomial.laguerre.lagdiv#numpy.polynomial.laguerre.lagdiv "numpy.polynomial.laguerre.lagdiv"), [`lagpow`](numpy.polynomial.laguerre.lagpow#numpy.polynomial.laguerre.lagpow "numpy.polynomial.laguerre.lagpow") #### Notes In general, the (polynomial) product of two C-series results in terms that are not in the Laguerre polynomial basis set. Thus, to express the product as a Laguerre series, it is necessary to “reproject” the product onto said basis set, which may produce “unintuitive” (but correct) results; see Examples section below. #### Examples ``` >>> from numpy.polynomial.laguerre import lagmul >>> lagmul([1, 2, 3], [0, 1, 2]) array([ 8., -13., 38., -51., 36.]) ``` © 2005–2022 NumPy Developers Licensed under the 3-clause BSD License. <https://numpy.org/doc/1.23/reference/generated/numpy.polynomial.laguerre.lagmul.htmlnumpy.polynomial.laguerre.lagdiv ================================ polynomial.laguerre.lagdiv(*c1*, *c2*)[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/polynomial/laguerre.py#L508-L551) Divide one Laguerre series by another. Returns the quotient-with-remainder of two Laguerre series `c1` / `c2`. The arguments are sequences of coefficients from lowest order “term” to highest, e.g., [1,2,3] represents the series `P_0 + 2*P_1 + 3*P_2`. 
Parameters **c1, c2**array_like 1-D arrays of Laguerre series coefficients ordered from low to high. Returns **[quo, rem]**ndarrays Of Laguerre series coefficients representing the quotient and remainder. See also [`lagadd`](numpy.polynomial.laguerre.lagadd#numpy.polynomial.laguerre.lagadd "numpy.polynomial.laguerre.lagadd"), [`lagsub`](numpy.polynomial.laguerre.lagsub#numpy.polynomial.laguerre.lagsub "numpy.polynomial.laguerre.lagsub"), [`lagmulx`](numpy.polynomial.laguerre.lagmulx#numpy.polynomial.laguerre.lagmulx "numpy.polynomial.laguerre.lagmulx"), [`lagmul`](numpy.polynomial.laguerre.lagmul#numpy.polynomial.laguerre.lagmul "numpy.polynomial.laguerre.lagmul"), [`lagpow`](numpy.polynomial.laguerre.lagpow#numpy.polynomial.laguerre.lagpow "numpy.polynomial.laguerre.lagpow") #### Notes In general, the (polynomial) division of one Laguerre series by another results in quotient and remainder terms that are not in the Laguerre polynomial basis set. Thus, to express these results as a Laguerre series, it is necessary to “reproject” the results onto the Laguerre basis set, which may produce “unintuitive” (but correct) results; see Examples section below. #### Examples ``` >>> from numpy.polynomial.laguerre import lagdiv >>> lagdiv([ 8., -13., 38., -51., 36.], [0, 1, 2]) (array([1., 2., 3.]), array([0.])) >>> lagdiv([ 9., -12., 38., -51., 36.], [0, 1, 2]) (array([1., 2., 3.]), array([1., 1.])) ``` © 2005–2022 NumPy Developers Licensed under the 3-clause BSD License. <https://numpy.org/doc/1.23/reference/generated/numpy.polynomial.laguerre.lagdiv.htmlnumpy.polynomial.laguerre.lagpow ================================ polynomial.laguerre.lagpow(*c*, *pow*, *maxpower=16*)[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/polynomial/laguerre.py#L554-L588) Raise a Laguerre series to a power. Returns the Laguerre series `c` raised to the power `pow`. The argument `c` is a sequence of coefficients ordered from low to high. 
i.e., [1,2,3] is the series `P_0 + 2*P_1 + 3*P_2.` Parameters **c**array_like 1-D array of Laguerre series coefficients ordered from low to high. **pow**integer Power to which the series will be raised **maxpower**integer, optional Maximum power allowed. This is mainly to limit growth of the series to unmanageable size. Default is 16 Returns **coef**ndarray Laguerre series of power. See also [`lagadd`](numpy.polynomial.laguerre.lagadd#numpy.polynomial.laguerre.lagadd "numpy.polynomial.laguerre.lagadd"), [`lagsub`](numpy.polynomial.laguerre.lagsub#numpy.polynomial.laguerre.lagsub "numpy.polynomial.laguerre.lagsub"), [`lagmulx`](numpy.polynomial.laguerre.lagmulx#numpy.polynomial.laguerre.lagmulx "numpy.polynomial.laguerre.lagmulx"), [`lagmul`](numpy.polynomial.laguerre.lagmul#numpy.polynomial.laguerre.lagmul "numpy.polynomial.laguerre.lagmul"), [`lagdiv`](numpy.polynomial.laguerre.lagdiv#numpy.polynomial.laguerre.lagdiv "numpy.polynomial.laguerre.lagdiv") #### Examples ``` >>> from numpy.polynomial.laguerre import lagpow >>> lagpow([1, 2, 3], 2) array([ 14., -16., 56., -72., 54.]) ``` © 2005–2022 NumPy Developers Licensed under the 3-clause BSD License. <https://numpy.org/doc/1.23/reference/generated/numpy.polynomial.laguerre.lagpow.htmlnumpy.polynomial.laguerre.lagval ================================ polynomial.laguerre.lagval(*x*, *c*, *tensor=True*)[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/polynomial/laguerre.py#L801-L893) Evaluate a Laguerre series at points x. If `c` is of length `n + 1`, this function returns the value: \[p(x) = c_0 * L_0(x) + c_1 * L_1(x) + ... + c_n * L_n(x)\] The parameter `x` is converted to an array only if it is a tuple or a list, otherwise it is treated as a scalar. In either case, either `x` or its elements must support multiplication and addition both with themselves and with the elements of `c`. If `c` is a 1-D array, then `p(x)` will have the same shape as `x`. 
If `c` is multidimensional, then the shape of the result depends on the value of `tensor`. If `tensor` is true the shape will be c.shape[1:] + x.shape. If `tensor` is false the shape will be c.shape[1:]. Note that scalars have shape `()`. Trailing zeros in the coefficients will be used in the evaluation, so they should be avoided if efficiency is a concern. Parameters **x**array_like, compatible object If `x` is a list or tuple, it is converted to an ndarray, otherwise it is left unchanged and treated as a scalar. In either case, `x` or its elements must support addition and multiplication with themselves and with the elements of `c`. **c**array_like Array of coefficients ordered so that the coefficients for terms of degree n are contained in c[n]. If `c` is multidimensional the remaining indices enumerate multiple polynomials. In the two dimensional case the coefficients may be thought of as stored in the columns of `c`. **tensor**boolean, optional If True, the shape of the coefficient array is extended with ones on the right, one for each dimension of `x`. Scalars have dimension 0 for this action. The result is that every column of coefficients in `c` is evaluated for every element of `x`. If False, `x` is broadcast over the columns of `c` for the evaluation. This keyword is useful when `c` is multidimensional. The default value is True. New in version 1.7.0. Returns **values**ndarray, algebra_like The shape of the return value is described above. 
See also [`lagval2d`](numpy.polynomial.laguerre.lagval2d#numpy.polynomial.laguerre.lagval2d "numpy.polynomial.laguerre.lagval2d"), [`laggrid2d`](numpy.polynomial.laguerre.laggrid2d#numpy.polynomial.laguerre.laggrid2d "numpy.polynomial.laguerre.laggrid2d"), [`lagval3d`](numpy.polynomial.laguerre.lagval3d#numpy.polynomial.laguerre.lagval3d "numpy.polynomial.laguerre.lagval3d"), [`laggrid3d`](numpy.polynomial.laguerre.laggrid3d#numpy.polynomial.laguerre.laggrid3d "numpy.polynomial.laguerre.laggrid3d") #### Notes The evaluation uses Clenshaw recursion, aka synthetic division. #### Examples ``` >>> from numpy.polynomial.laguerre import lagval >>> coef = [1,2,3] >>> lagval(1, coef) -0.5 >>> lagval([[1,2],[3,4]], coef) array([[-0.5, -4. ], [-4.5, -2. ]]) ``` © 2005–2022 NumPy Developers Licensed under the 3-clause BSD License. <https://numpy.org/doc/1.23/reference/generated/numpy.polynomial.laguerre.lagval.htmlnumpy.polynomial.laguerre.lagval2d ================================== polynomial.laguerre.lagval2d(*x*, *y*, *c*)[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/polynomial/laguerre.py#L896-L942) Evaluate a 2-D Laguerre series at points (x, y). This function returns the values: \[p(x,y) = \sum_{i,j} c_{i,j} * L_i(x) * L_j(y)\] The parameters `x` and `y` are converted to arrays only if they are tuples or a lists, otherwise they are treated as a scalars and they must have the same shape after conversion. In either case, either `x` and `y` or their elements must support multiplication and addition both with themselves and with the elements of `c`. If `c` is a 1-D array a one is implicitly appended to its shape to make it 2-D. The shape of the result will be c.shape[2:] + x.shape. Parameters **x, y**array_like, compatible objects The two dimensional series is evaluated at the points `(x, y)`, where `x` and `y` must have the same shape. 
If `x` or `y` is a list or tuple, it is first converted to an ndarray, otherwise it is left unchanged and if it isn’t an ndarray it is treated as a scalar. **c**array_like Array of coefficients ordered so that the coefficient of the term of multi-degree i,j is contained in `c[i,j]`. If `c` has dimension greater than two the remaining indices enumerate multiple sets of coefficients. Returns **values**ndarray, compatible object The values of the two dimensional polynomial at points formed with pairs of corresponding values from `x` and `y`. See also [`lagval`](numpy.polynomial.laguerre.lagval#numpy.polynomial.laguerre.lagval "numpy.polynomial.laguerre.lagval"), [`laggrid2d`](numpy.polynomial.laguerre.laggrid2d#numpy.polynomial.laguerre.laggrid2d "numpy.polynomial.laguerre.laggrid2d"), [`lagval3d`](numpy.polynomial.laguerre.lagval3d#numpy.polynomial.laguerre.lagval3d "numpy.polynomial.laguerre.lagval3d"), [`laggrid3d`](numpy.polynomial.laguerre.laggrid3d#numpy.polynomial.laguerre.laggrid3d "numpy.polynomial.laguerre.laggrid3d") #### Notes New in version 1.7.0. © 2005–2022 NumPy Developers Licensed under the 3-clause BSD License. <https://numpy.org/doc/1.23/reference/generated/numpy.polynomial.laguerre.lagval2d.htmlnumpy.polynomial.laguerre.lagval3d ================================== polynomial.laguerre.lagval3d(*x*, *y*, *z*, *c*)[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/polynomial/laguerre.py#L998-L1046) Evaluate a 3-D Laguerre series at points (x, y, z). This function returns the values: \[p(x,y,z) = \sum_{i,j,k} c_{i,j,k} * L_i(x) * L_j(y) * L_k(z)\] The parameters `x`, `y`, and `z` are converted to arrays only if they are tuples or a lists, otherwise they are treated as a scalars and they must have the same shape after conversion. In either case, either `x`, `y`, and `z` or their elements must support multiplication and addition both with themselves and with the elements of `c`. 
If `c` has fewer than 3 dimensions, ones are implicitly appended to its shape to make it 3-D. The shape of the result will be c.shape[3:] + x.shape. Parameters **x, y, z**array_like, compatible object The three dimensional series is evaluated at the points `(x, y, z)`, where `x`, `y`, and `z` must have the same shape. If any of `x`, `y`, or `z` is a list or tuple, it is first converted to an ndarray, otherwise it is left unchanged and if it isn’t an ndarray it is treated as a scalar. **c**array_like Array of coefficients ordered so that the coefficient of the term of multi-degree i,j,k is contained in `c[i,j,k]`. If `c` has dimension greater than 3 the remaining indices enumerate multiple sets of coefficients. Returns **values**ndarray, compatible object The values of the multidimensional polynomial on points formed with triples of corresponding values from `x`, `y`, and `z`. See also [`lagval`](numpy.polynomial.laguerre.lagval#numpy.polynomial.laguerre.lagval "numpy.polynomial.laguerre.lagval"), [`lagval2d`](numpy.polynomial.laguerre.lagval2d#numpy.polynomial.laguerre.lagval2d "numpy.polynomial.laguerre.lagval2d"), [`laggrid2d`](numpy.polynomial.laguerre.laggrid2d#numpy.polynomial.laguerre.laggrid2d "numpy.polynomial.laguerre.laggrid2d"), [`laggrid3d`](numpy.polynomial.laguerre.laggrid3d#numpy.polynomial.laguerre.laggrid3d "numpy.polynomial.laguerre.laggrid3d") #### Notes New in version 1.7.0. © 2005–2022 NumPy Developers Licensed under the 3-clause BSD License. <https://numpy.org/doc/1.23/reference/generated/numpy.polynomial.laguerre.lagval3d.htmlnumpy.polynomial.laguerre.laggrid2d =================================== polynomial.laguerre.laggrid2d(*x*, *y*, *c*)[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/polynomial/laguerre.py#L945-L995) Evaluate a 2-D Laguerre series on the Cartesian product of x and y. 
This function returns the values: \[p(a,b) = \sum_{i,j} c_{i,j} * L_i(a) * L_j(b)\] where the points `(a, b)` consist of all pairs formed by taking `a` from `x` and `b` from `y`. The resulting points form a grid with `x` in the first dimension and `y` in the second. The parameters `x` and `y` are converted to arrays only if they are tuples or a lists, otherwise they are treated as a scalars. In either case, either `x` and `y` or their elements must support multiplication and addition both with themselves and with the elements of `c`. If `c` has fewer than two dimensions, ones are implicitly appended to its shape to make it 2-D. The shape of the result will be c.shape[2:] + x.shape + y.shape. Parameters **x, y**array_like, compatible objects The two dimensional series is evaluated at the points in the Cartesian product of `x` and `y`. If `x` or `y` is a list or tuple, it is first converted to an ndarray, otherwise it is left unchanged and, if it isn’t an ndarray, it is treated as a scalar. **c**array_like Array of coefficients ordered so that the coefficient of the term of multi-degree i,j is contained in `c[i,j]`. If `c` has dimension greater than two the remaining indices enumerate multiple sets of coefficients. Returns **values**ndarray, compatible object The values of the two dimensional Chebyshev series at points in the Cartesian product of `x` and `y`. See also [`lagval`](numpy.polynomial.laguerre.lagval#numpy.polynomial.laguerre.lagval "numpy.polynomial.laguerre.lagval"), [`lagval2d`](numpy.polynomial.laguerre.lagval2d#numpy.polynomial.laguerre.lagval2d "numpy.polynomial.laguerre.lagval2d"), [`lagval3d`](numpy.polynomial.laguerre.lagval3d#numpy.polynomial.laguerre.lagval3d "numpy.polynomial.laguerre.lagval3d"), [`laggrid3d`](numpy.polynomial.laguerre.laggrid3d#numpy.polynomial.laguerre.laggrid3d "numpy.polynomial.laguerre.laggrid3d") #### Notes New in version 1.7.0. © 2005–2022 NumPy Developers Licensed under the 3-clause BSD License. 
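As a check of the grid semantics described above (an editorial sketch, not part of the upstream page; the coefficient values are arbitrary), the `laggrid2d` result matches evaluating `lagval2d` at every pair from the Cartesian product of `x` and `y`:

```python
import numpy as np
from numpy.polynomial.laguerre import laggrid2d, lagval2d

c = np.array([[1.0, 2.0], [3.0, 4.0]])  # coefficients c[i, j] for L_i(x)*L_j(y)
x = [0.0, 1.0]
y = [0.0, 2.0]

# laggrid2d evaluates on the grid formed by the Cartesian product of x and y...
grid = laggrid2d(x, y, c)               # shape x.shape + y.shape = (2, 2)

# ...which matches pairing every element of x with every element of y.
pairs = np.array([[lagval2d(a, b, c) for b in y] for a in x])
print(np.allclose(grid, pairs))         # True
```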
<https://numpy.org/doc/1.23/reference/generated/numpy.polynomial.laguerre.laggrid2d.html>
numpy.polynomial.laguerre.laggrid3d =================================== polynomial.laguerre.laggrid3d(*x*, *y*, *z*, *c*)[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/polynomial/laguerre.py#L1049-L1102) Evaluate a 3-D Laguerre series on the Cartesian product of x, y, and z. This function returns the values: \[p(a,b,c) = \sum_{i,j,k} c_{i,j,k} * L_i(a) * L_j(b) * L_k(c)\] where the points `(a, b, c)` consist of all triples formed by taking `a` from `x`, `b` from `y`, and `c` from `z`. The resulting points form a grid with `x` in the first dimension, `y` in the second, and `z` in the third. The parameters `x`, `y`, and `z` are converted to arrays only if they are tuples or lists, otherwise they are treated as scalars. In either case, either `x`, `y`, and `z` or their elements must support multiplication and addition both with themselves and with the elements of `c`. If `c` has fewer than three dimensions, ones are implicitly appended to its shape to make it 3-D. The shape of the result will be c.shape[3:] + x.shape + y.shape + z.shape. Parameters **x, y, z**array_like, compatible objects The three dimensional series is evaluated at the points in the Cartesian product of `x`, `y`, and `z`. If `x`, `y`, or `z` is a list or tuple, it is first converted to an ndarray, otherwise it is left unchanged and, if it isn’t an ndarray, it is treated as a scalar. **c**array_like Array of coefficients ordered so that the coefficient of the term of multi-degree i,j,k is contained in `c[i,j,k]`. If `c` has dimension greater than three the remaining indices enumerate multiple sets of coefficients. Returns **values**ndarray, compatible object The values of the three dimensional polynomial at points in the Cartesian product of `x`, `y`, and `z`. 
See also [`lagval`](numpy.polynomial.laguerre.lagval#numpy.polynomial.laguerre.lagval "numpy.polynomial.laguerre.lagval"), [`lagval2d`](numpy.polynomial.laguerre.lagval2d#numpy.polynomial.laguerre.lagval2d "numpy.polynomial.laguerre.lagval2d"), [`laggrid2d`](numpy.polynomial.laguerre.laggrid2d#numpy.polynomial.laguerre.laggrid2d "numpy.polynomial.laguerre.laggrid2d"), [`lagval3d`](numpy.polynomial.laguerre.lagval3d#numpy.polynomial.laguerre.lagval3d "numpy.polynomial.laguerre.lagval3d") #### Notes New in version 1.7.0. © 2005–2022 NumPy Developers Licensed under the 3-clause BSD License. <https://numpy.org/doc/1.23/reference/generated/numpy.polynomial.laguerre.laggrid3d.htmlnumpy.polynomial.laguerre.lagder ================================ polynomial.laguerre.lagder(*c*, *m=1*, *scl=1*, *axis=0*)[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/polynomial/laguerre.py#L591-L674) Differentiate a Laguerre series. Returns the Laguerre series coefficients `c` differentiated `m` times along `axis`. At each iteration the result is multiplied by `scl` (the scaling factor is for use in a linear change of variable). The argument `c` is an array of coefficients from low to high degree along each axis, e.g., [1,2,3] represents the series `1*L_0 + 2*L_1 + 3*L_2` while [[1,2],[1,2]] represents `1*L_0(x)*L_0(y) + 1*L_1(x)*L_0(y) + 2*L_0(x)*L_1(y) + 2*L_1(x)*L_1(y)` if axis=0 is `x` and axis=1 is `y`. Parameters **c**array_like Array of Laguerre series coefficients. If `c` is multidimensional the different axis correspond to different variables with the degree in each axis given by the corresponding index. **m**int, optional Number of derivatives taken, must be non-negative. (Default: 1) **scl**scalar, optional Each differentiation is multiplied by `scl`. The end result is multiplication by `scl**m`. This is for use in a linear change of variable. (Default: 1) **axis**int, optional Axis over which the derivative is taken. (Default: 0). New in version 1.7.0. 
Returns **der**ndarray Laguerre series of the derivative. See also [`lagint`](numpy.polynomial.laguerre.lagint#numpy.polynomial.laguerre.lagint "numpy.polynomial.laguerre.lagint") #### Notes In general, the result of differentiating a Laguerre series does not resemble the same operation on a power series. Thus the result of this function may be “unintuitive,” albeit correct; see Examples section below. #### Examples ``` >>> from numpy.polynomial.laguerre import lagder >>> lagder([ 1., 1., 1., -3.]) array([1., 2., 3.]) >>> lagder([ 1., 0., 0., -4., 3.], m=2) array([1., 2., 3.]) ``` © 2005–2022 NumPy Developers Licensed under the 3-clause BSD License. <https://numpy.org/doc/1.23/reference/generated/numpy.polynomial.laguerre.lagder.htmlnumpy.polynomial.laguerre.lagint ================================ polynomial.laguerre.lagint(*c*, *m=1*, *k=[]*, *lbnd=0*, *scl=1*, *axis=0*)[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/polynomial/laguerre.py#L677-L798) Integrate a Laguerre series. Returns the Laguerre series coefficients `c` integrated `m` times from `lbnd` along `axis`. At each iteration the resulting series is **multiplied** by `scl` and an integration constant, `k`, is added. The scaling factor is for use in a linear change of variable. (“Buyer beware”: note that, depending on what one is doing, one may want `scl` to be the reciprocal of what one might expect; for more information, see the Notes section below.) The argument `c` is an array of coefficients from low to high degree along each axis, e.g., [1,2,3] represents the series `L_0 + 2*L_1 + 3*L_2` while [[1,2],[1,2]] represents `1*L_0(x)*L_0(y) + 1*L_1(x)*L_0(y) + 2*L_0(x)*L_1(y) + 2*L_1(x)*L_1(y)` if axis=0 is `x` and axis=1 is `y`. Parameters **c**array_like Array of Laguerre series coefficients. If `c` is multidimensional the different axis correspond to different variables with the degree in each axis given by the corresponding index. 
**m**int, optional Order of integration, must be positive. (Default: 1) **k**{[], list, scalar}, optional Integration constant(s). The value of the first integral at `lbnd` is the first value in the list, the value of the second integral at `lbnd` is the second value, etc. If `k == []` (the default), all constants are set to zero. If `m == 1`, a single scalar can be given instead of a list. **lbnd**scalar, optional The lower bound of the integral. (Default: 0) **scl**scalar, optional Following each integration the result is *multiplied* by `scl` before the integration constant is added. (Default: 1) **axis**int, optional Axis over which the integral is taken. (Default: 0). New in version 1.7.0. Returns **S**ndarray Laguerre series coefficients of the integral. Raises ValueError If `m < 0`, `len(k) > m`, `np.ndim(lbnd) != 0`, or `np.ndim(scl) != 0`. See also [`lagder`](numpy.polynomial.laguerre.lagder#numpy.polynomial.laguerre.lagder "numpy.polynomial.laguerre.lagder") #### Notes Note that the result of each integration is *multiplied* by `scl`. Why is this important to note? Say one is making a linear change of variable \(u = ax + b\) in an integral relative to `x`. Then \(dx = du/a\), so one will need to set `scl` equal to \(1/a\) - perhaps not what one would have first thought. Also note that, in general, the result of integrating a C-series needs to be “reprojected” onto the C-series basis set. Thus, typically, the result of this function is “unintuitive,” albeit correct; see Examples section below. #### Examples ``` >>> from numpy.polynomial.laguerre import lagint >>> lagint([1,2,3]) array([ 1., 1., 1., -3.]) >>> lagint([1,2,3], m=2) array([ 1., 0., 0., -4., 3.]) >>> lagint([1,2,3], k=1) array([ 2., 1., 1., -3.]) >>> lagint([1,2,3], lbnd=-1) array([11.5, 1. , 1. , -3. ]) >>> lagint([1,2], m=2, k=[1,2], lbnd=-1) array([ 11.16666667, -5. , -3. , 2. ]) # may vary ``` © 2005–2022 NumPy Developers Licensed under the 3-clause BSD License. 
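A small consistency check tying `lagint` to `lagder` (an editorial sketch, not from the upstream page): differentiating the integral recovers the original coefficients, since the integration constant `k` only shifts the `L_0` term, whose derivative is zero.

```python
import numpy as np
from numpy.polynomial.laguerre import lagint, lagder

c = [1.0, 2.0, 3.0]
ci = lagint(c, m=1, k=1)           # one integration, constant k=1
print(ci)                          # [ 2.  1.  1. -3.]
print(np.allclose(lagder(ci), c))  # True
```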
<https://numpy.org/doc/1.23/reference/generated/numpy.polynomial.laguerre.lagint.htmlnumpy.polynomial.laguerre.lagfromroots ====================================== polynomial.laguerre.lagfromroots(*roots*)[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/polynomial/laguerre.py#L251-L304) Generate a Laguerre series with given roots. The function returns the coefficients of the polynomial \[p(x) = (x - r_0) * (x - r_1) * ... * (x - r_n),\] in Laguerre form, where the `r_n` are the roots specified in [`roots`](numpy.roots#numpy.roots "numpy.roots"). If a zero has multiplicity n, then it must appear in [`roots`](numpy.roots#numpy.roots "numpy.roots") n times. For instance, if 2 is a root of multiplicity three and 3 is a root of multiplicity 2, then [`roots`](numpy.roots#numpy.roots "numpy.roots") looks something like [2, 2, 2, 3, 3]. The roots can appear in any order. If the returned coefficients are `c`, then \[p(x) = c_0 + c_1 * L_1(x) + ... + c_n * L_n(x)\] The coefficient of the last term is not generally 1 for monic polynomials in Laguerre form. Parameters **roots**array_like Sequence containing the roots. Returns **out**ndarray 1-D array of coefficients. If all roots are real then `out` is a real array, if some of the roots are complex, then `out` is complex even if all the coefficients in the result are real (see Examples below). 
See also [`numpy.polynomial.polynomial.polyfromroots`](numpy.polynomial.polynomial.polyfromroots#numpy.polynomial.polynomial.polyfromroots "numpy.polynomial.polynomial.polyfromroots") [`numpy.polynomial.legendre.legfromroots`](numpy.polynomial.legendre.legfromroots#numpy.polynomial.legendre.legfromroots "numpy.polynomial.legendre.legfromroots") [`numpy.polynomial.chebyshev.chebfromroots`](numpy.polynomial.chebyshev.chebfromroots#numpy.polynomial.chebyshev.chebfromroots "numpy.polynomial.chebyshev.chebfromroots") [`numpy.polynomial.hermite.hermfromroots`](numpy.polynomial.hermite.hermfromroots#numpy.polynomial.hermite.hermfromroots "numpy.polynomial.hermite.hermfromroots") [`numpy.polynomial.hermite_e.hermefromroots`](numpy.polynomial.hermite_e.hermefromroots#numpy.polynomial.hermite_e.hermefromroots "numpy.polynomial.hermite_e.hermefromroots") #### Examples ``` >>> from numpy.polynomial.laguerre import lagfromroots, lagval >>> coef = lagfromroots((-1, 0, 1)) >>> lagval((-1, 0, 1), coef) array([0., 0., 0.]) >>> coef = lagfromroots((-1j, 1j)) >>> lagval((-1j, 1j), coef) array([0.+0.j, 0.+0.j]) ``` © 2005–2022 NumPy Developers Licensed under the 3-clause BSD License. <https://numpy.org/doc/1.23/reference/generated/numpy.polynomial.laguerre.lagfromroots.htmlnumpy.polynomial.laguerre.lagroots ================================== polynomial.laguerre.lagroots(*c*)[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/polynomial/laguerre.py#L1448-L1509) Compute the roots of a Laguerre series. Return the roots (a.k.a. “zeros”) of the polynomial \[p(x) = \sum_i c[i] * L_i(x).\] Parameters **c**1-D array_like 1-D array of coefficients. Returns **out**ndarray Array of the roots of the series. If all the roots are real, then `out` is also real, otherwise it is complex. 
See also [`numpy.polynomial.polynomial.polyroots`](numpy.polynomial.polynomial.polyroots#numpy.polynomial.polynomial.polyroots "numpy.polynomial.polynomial.polyroots") [`numpy.polynomial.legendre.legroots`](numpy.polynomial.legendre.legroots#numpy.polynomial.legendre.legroots "numpy.polynomial.legendre.legroots") [`numpy.polynomial.chebyshev.chebroots`](numpy.polynomial.chebyshev.chebroots#numpy.polynomial.chebyshev.chebroots "numpy.polynomial.chebyshev.chebroots") [`numpy.polynomial.hermite.hermroots`](numpy.polynomial.hermite.hermroots#numpy.polynomial.hermite.hermroots "numpy.polynomial.hermite.hermroots") [`numpy.polynomial.hermite_e.hermeroots`](numpy.polynomial.hermite_e.hermeroots#numpy.polynomial.hermite_e.hermeroots "numpy.polynomial.hermite_e.hermeroots") #### Notes The root estimates are obtained as the eigenvalues of the companion matrix, Roots far from the origin of the complex plane may have large errors due to the numerical instability of the series for such values. Roots with multiplicity greater than 1 will also show larger errors as the value of the series near such points is relatively insensitive to errors in the roots. Isolated roots near the origin can be improved by a few iterations of Newton’s method. The Laguerre series basis polynomials aren’t powers of `x` so the results of this function may seem unintuitive. #### Examples ``` >>> from numpy.polynomial.laguerre import lagroots, lagfromroots >>> coef = lagfromroots([0, 1, 2]) >>> coef array([ 2., -8., 12., -6.]) >>> lagroots(coef) array([-4.4408921e-16, 1.0000000e+00, 2.0000000e+00]) ``` © 2005–2022 NumPy Developers Licensed under the 3-clause BSD License. <https://numpy.org/doc/1.23/reference/generated/numpy.polynomial.laguerre.lagroots.htmlnumpy.polynomial.laguerre.lagvander =================================== polynomial.laguerre.lagvander(*x*, *deg*)[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/polynomial/laguerre.py#L1105-L1162) Pseudo-Vandermonde matrix of given degree. 
Returns the pseudo-Vandermonde matrix of degree `deg` and sample points `x`. The pseudo-Vandermonde matrix is defined by \[V[..., i] = L_i(x)\] where `0 <= i <= deg`. The leading indices of `V` index the elements of `x` and the last index is the degree of the Laguerre polynomial. If `c` is a 1-D array of coefficients of length `n + 1` and `V` is the array `V = lagvander(x, n)`, then `np.dot(V, c)` and `lagval(x, c)` are the same up to roundoff. This equivalence is useful both for least squares fitting and for the evaluation of a large number of Laguerre series of the same degree and sample points. Parameters **x**array_like Array of points. The dtype is converted to float64 or complex128 depending on whether any of the elements are complex. If `x` is scalar it is converted to a 1-D array. **deg**int Degree of the resulting matrix. Returns **vander**ndarray The pseudo-Vandermonde matrix. The shape of the returned matrix is `x.shape + (deg + 1,)`, where the last index is the degree of the corresponding Laguerre polynomial. The dtype will be the same as the converted `x`. #### Examples ``` >>> from numpy.polynomial.laguerre import lagvander >>> x = np.array([0, 1, 2]) >>> lagvander(x, 3) array([[ 1. , 1. , 1. , 1. ], [ 1. , 0. , -0.5 , -0.66666667], [ 1. , -1. , -1. , -0.33333333]]) ``` © 2005–2022 NumPy Developers Licensed under the 3-clause BSD License. <https://numpy.org/doc/1.23/reference/generated/numpy.polynomial.laguerre.lagvander.html>
numpy.polynomial.laguerre.lagvander2d ===================================== polynomial.laguerre.lagvander2d(*x*, *y*, *deg*)[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/polynomial/laguerre.py#L1165-L1215) Pseudo-Vandermonde matrix of given degrees. Returns the pseudo-Vandermonde matrix of degrees `deg` and sample points `(x, y)`. The pseudo-Vandermonde matrix is defined by \[V[..., (deg[1] + 1)*i + j] = L_i(x) * L_j(y),\] where `0 <= i <= deg[0]` and `0 <= j <= deg[1]`. 
The leading indices of `V` index the points `(x, y)` and the last index encodes the degrees of the Laguerre polynomials.

If `V = lagvander2d(x, y, [xdeg, ydeg])`, then the columns of `V` correspond to the elements of a 2-D coefficient array `c` of shape (xdeg + 1, ydeg + 1) in the order

\[c_{00}, c_{01}, c_{02} ... , c_{10}, c_{11}, c_{12} ...\]

and `np.dot(V, c.flat)` and `lagval2d(x, y, c)` will be the same up to roundoff. This equivalence is useful both for least squares fitting and for the evaluation of a large number of 2-D Laguerre series of the same degrees and sample points.

Parameters

**x, y** : array_like
Arrays of point coordinates, all of the same shape. The dtypes will be converted to either float64 or complex128 depending on whether any of the elements are complex. Scalars are converted to 1-D arrays.

**deg** : list of ints
List of maximum degrees of the form [x_deg, y_deg].

Returns

**vander2d** : ndarray
The shape of the returned matrix is `x.shape + (order,)`, where \(order = (deg[0]+1)*(deg[1]+1)\). The dtype will be the same as the converted `x` and `y`.

See also

[`lagvander`](numpy.polynomial.laguerre.lagvander#numpy.polynomial.laguerre.lagvander "numpy.polynomial.laguerre.lagvander"), [`lagvander3d`](numpy.polynomial.laguerre.lagvander3d#numpy.polynomial.laguerre.lagvander3d "numpy.polynomial.laguerre.lagvander3d"), [`lagval2d`](numpy.polynomial.laguerre.lagval2d#numpy.polynomial.laguerre.lagval2d "numpy.polynomial.laguerre.lagval2d"), [`lagval3d`](numpy.polynomial.laguerre.lagval3d#numpy.polynomial.laguerre.lagval3d "numpy.polynomial.laguerre.lagval3d")

#### Notes

New in version 1.7.0.
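The `lagvander2d` page has no Examples section of its own, so here is a short sketch of the `np.dot(V, c.flat)` / `lagval2d` equivalence described above; the sample points and coefficients are arbitrary illustrative values.

```python
import numpy as np
from numpy.polynomial.laguerre import lagvander2d, lagval2d

# Arbitrary sample points (same shape) and a 2-D coefficient array
# of shape (xdeg + 1, ydeg + 1) with xdeg = ydeg = 1.
x = np.array([0.0, 1.0, 2.0])
y = np.array([0.5, 1.5, 2.5])
c = np.array([[1.0, 2.0],
              [3.0, 4.0]])

V = lagvander2d(x, y, [1, 1])
# x.shape + ((1+1)*(1+1),) == (3, 4)
assert V.shape == (3, 4)

# Columns of V pair with the flattened c, so V @ c.ravel()
# reproduces lagval2d up to roundoff.
assert np.allclose(V @ c.ravel(), lagval2d(x, y, c))
```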
<https://numpy.org/doc/1.23/reference/generated/numpy.polynomial.laguerre.lagvander2d.html>

numpy.polynomial.laguerre.lagvander3d
=====================================

polynomial.laguerre.lagvander3d(*x*, *y*, *z*, *deg*)[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/polynomial/laguerre.py#L1218-L1269)

Pseudo-Vandermonde matrix of given degrees.

Returns the pseudo-Vandermonde matrix of degrees `deg` and sample points `(x, y, z)`. If `l, m, n` are the given degrees in `x, y, z`, then the pseudo-Vandermonde matrix is defined by

\[V[..., (m+1)(n+1)i + (n+1)j + k] = L_i(x)*L_j(y)*L_k(z),\]

where `0 <= i <= l`, `0 <= j <= m`, and `0 <= k <= n`. The leading indices of `V` index the points `(x, y, z)` and the last index encodes the degrees of the Laguerre polynomials.

If `V = lagvander3d(x, y, z, [xdeg, ydeg, zdeg])`, then the columns of `V` correspond to the elements of a 3-D coefficient array `c` of shape (xdeg + 1, ydeg + 1, zdeg + 1) in the order

\[c_{000}, c_{001}, c_{002},... , c_{010}, c_{011}, c_{012},...\]

and `np.dot(V, c.flat)` and `lagval3d(x, y, z, c)` will be the same up to roundoff. This equivalence is useful both for least squares fitting and for the evaluation of a large number of 3-D Laguerre series of the same degrees and sample points.

Parameters

**x, y, z** : array_like
Arrays of point coordinates, all of the same shape. The dtypes will be converted to either float64 or complex128 depending on whether any of the elements are complex. Scalars are converted to 1-D arrays.

**deg** : list of ints
List of maximum degrees of the form [x_deg, y_deg, z_deg].

Returns

**vander3d** : ndarray
The shape of the returned matrix is `x.shape + (order,)`, where \(order = (deg[0]+1)*(deg[1]+1)*(deg[2]+1)\). The dtype will be the same as the converted `x`, `y`, and `z`.
See also

[`lagvander`](numpy.polynomial.laguerre.lagvander#numpy.polynomial.laguerre.lagvander "numpy.polynomial.laguerre.lagvander"), [`lagvander2d`](numpy.polynomial.laguerre.lagvander2d#numpy.polynomial.laguerre.lagvander2d "numpy.polynomial.laguerre.lagvander2d"), [`lagval2d`](numpy.polynomial.laguerre.lagval2d#numpy.polynomial.laguerre.lagval2d "numpy.polynomial.laguerre.lagval2d"), [`lagval3d`](numpy.polynomial.laguerre.lagval3d#numpy.polynomial.laguerre.lagval3d "numpy.polynomial.laguerre.lagval3d")

#### Notes

New in version 1.7.0.

<https://numpy.org/doc/1.23/reference/generated/numpy.polynomial.laguerre.lagvander3d.html>

numpy.polynomial.laguerre.laggauss
==================================

polynomial.laguerre.laggauss(*deg*)[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/polynomial/laguerre.py#L1512-L1573)

Gauss-Laguerre quadrature.

Computes the sample points and weights for Gauss-Laguerre quadrature. These sample points and weights will correctly integrate polynomials of degree \(2*deg - 1\) or less over the interval \([0, \infty)\) with the weight function \(f(x) = \exp(-x)\).

Parameters

**deg** : int
Number of sample points and weights. It must be >= 1.

Returns

**x** : ndarray
1-D ndarray containing the sample points.

**y** : ndarray
1-D ndarray containing the weights.

#### Notes

New in version 1.7.0.

The results have only been tested up to degree 100; higher degrees may be problematic. The weights are determined by using the fact that

\[w_k = c / (L'_n(x_k) * L_{n-1}(x_k))\]

where \(c\) is a constant independent of \(k\) and \(x_k\) is the k’th root of \(L_n\), and then scaling the results to get the right value when integrating 1.
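A minimal check of the quadrature rule described above (the node count and integrand are arbitrary illustrative choices): a 5-point rule is exact for polynomials up to degree 9, and \(\int_0^\infty x^2 e^{-x}\,dx = 2! = 2\).

```python
import numpy as np
from numpy.polynomial.laguerre import laggauss

# 5-point Gauss-Laguerre rule: exact for polynomials up to degree 9
# against the weight exp(-x) on [0, inf).
x, w = laggauss(5)

# Integrating 1 recovers the mass of the weight: int exp(-x) dx = 1.
assert np.isclose(np.sum(w), 1.0)

# int x**2 * exp(-x) dx over [0, inf) equals 2! = 2.
assert np.isclose(np.sum(w * x**2), 2.0)
```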
<https://numpy.org/doc/1.23/reference/generated/numpy.polynomial.laguerre.laggauss.html>

numpy.polynomial.laguerre.lagweight
===================================

polynomial.laguerre.lagweight(*x*)[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/polynomial/laguerre.py#L1576-L1600)

Weight function of the Laguerre polynomials.

The weight function is \(\exp(-x)\) and the interval of integration is \([0, \infty)\). The Laguerre polynomials are orthogonal, but not normalized, with respect to this weight function.

Parameters

**x** : array_like
Values at which the weight function will be computed.

Returns

**w** : ndarray
The weight function at `x`.

#### Notes

New in version 1.7.0.

<https://numpy.org/doc/1.23/reference/generated/numpy.polynomial.laguerre.lagweight.html>

numpy.polynomial.laguerre.lagcompanion
======================================

polynomial.laguerre.lagcompanion(*c*)[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/polynomial/laguerre.py#L1404-L1445)

Return the companion matrix of c.

The usual companion matrix of the Laguerre polynomials is already symmetric when `c` is a basis Laguerre polynomial, so no scaling is applied.

Parameters

**c** : array_like
1-D array of Laguerre series coefficients ordered from low to high degree.

Returns

**mat** : ndarray
Companion matrix of dimensions (deg, deg).

#### Notes

New in version 1.7.0.

<https://numpy.org/doc/1.23/reference/generated/numpy.polynomial.laguerre.lagcompanion.html>

numpy.polynomial.laguerre.lagfit
================================

polynomial.laguerre.lagfit(*x*, *y*, *deg*, *rcond=None*, *full=False*, *w=None*)[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/polynomial/laguerre.py#L1272-L1401)

Least squares fit of Laguerre series to data.
Return the coefficients of a Laguerre series of degree `deg` that is the least squares fit to the data values `y` given at points `x`. If `y` is 1-D the returned coefficients will also be 1-D. If `y` is 2-D multiple fits are done, one for each column of `y`, and the resulting coefficients are stored in the corresponding columns of a 2-D return. The fitted polynomial(s) are in the form

\[p(x) = c_0 + c_1 * L_1(x) + ... + c_n * L_n(x),\]

where `n` is `deg`.

Parameters

**x** : array_like, shape (M,)
x-coordinates of the M sample points `(x[i], y[i])`.

**y** : array_like, shape (M,) or (M, K)
y-coordinates of the sample points. Several data sets of sample points sharing the same x-coordinates can be fitted at once by passing in a 2D-array that contains one dataset per column.

**deg** : int or 1-D array_like
Degree(s) of the fitting polynomials. If `deg` is a single integer all terms up to and including the `deg`’th term are included in the fit. For NumPy versions >= 1.11.0 a list of integers specifying the degrees of the terms to include may be used instead.

**rcond** : float, optional
Relative condition number of the fit. Singular values smaller than this relative to the largest singular value will be ignored. The default value is len(x)*eps, where eps is the relative precision of the float type, about 2e-16 in most cases.

**full** : bool, optional
Switch determining nature of return value. When it is False (the default) just the coefficients are returned; when True, diagnostic information from the singular value decomposition is also returned.

**w** : array_like, shape (`M`,), optional
Weights. If not None, the weight `w[i]` applies to the unsquared residual `y[i] - y_hat[i]` at `x[i]`. Ideally the weights are chosen so that the errors of the products `w[i]*y[i]` all have the same variance. When using inverse-variance weighting, use `w[i] = 1/sigma(y[i])`. The default value is None.

Returns

**coef** : ndarray, shape (deg + 1,) or (deg + 1, K)
Laguerre coefficients ordered from low to high.
If `y` was 2-D, the coefficients for the data in column k of `y` are in column `k`.

**[residuals, rank, singular_values, rcond]** : list
These values are only returned if `full == True`:

* residuals – sum of squared residuals of the least squares fit
* rank – the numerical rank of the scaled Vandermonde matrix
* singular_values – singular values of the scaled Vandermonde matrix
* rcond – value of `rcond`

For more details, see [`numpy.linalg.lstsq`](numpy.linalg.lstsq#numpy.linalg.lstsq "numpy.linalg.lstsq").

Warns

RankWarning
The rank of the coefficient matrix in the least-squares fit is deficient. The warning is only raised if `full == False`. The warnings can be turned off by

```
>>> import warnings
>>> warnings.simplefilter('ignore', np.RankWarning)
```

See also

[`numpy.polynomial.polynomial.polyfit`](numpy.polynomial.polynomial.polyfit#numpy.polynomial.polynomial.polyfit "numpy.polynomial.polynomial.polyfit"),
[`numpy.polynomial.legendre.legfit`](numpy.polynomial.legendre.legfit#numpy.polynomial.legendre.legfit "numpy.polynomial.legendre.legfit"),
[`numpy.polynomial.chebyshev.chebfit`](numpy.polynomial.chebyshev.chebfit#numpy.polynomial.chebyshev.chebfit "numpy.polynomial.chebyshev.chebfit"),
[`numpy.polynomial.hermite.hermfit`](numpy.polynomial.hermite.hermfit#numpy.polynomial.hermite.hermfit "numpy.polynomial.hermite.hermfit"),
[`numpy.polynomial.hermite_e.hermefit`](numpy.polynomial.hermite_e.hermefit#numpy.polynomial.hermite_e.hermefit "numpy.polynomial.hermite_e.hermefit")

[`lagval`](numpy.polynomial.laguerre.lagval#numpy.polynomial.laguerre.lagval "numpy.polynomial.laguerre.lagval")
Evaluates a Laguerre series.

[`lagvander`](numpy.polynomial.laguerre.lagvander#numpy.polynomial.laguerre.lagvander "numpy.polynomial.laguerre.lagvander")
Pseudo-Vandermonde matrix of Laguerre series.

[`lagweight`](numpy.polynomial.laguerre.lagweight#numpy.polynomial.laguerre.lagweight "numpy.polynomial.laguerre.lagweight")
Laguerre weight function.
[`numpy.linalg.lstsq`](numpy.linalg.lstsq#numpy.linalg.lstsq "numpy.linalg.lstsq")
Computes a least-squares fit from the matrix.

[`scipy.interpolate.UnivariateSpline`](https://docs.scipy.org/doc/scipy/reference/generated/scipy.interpolate.UnivariateSpline.html#scipy.interpolate.UnivariateSpline "(in SciPy v1.8.1)")
Computes spline fits.

#### Notes

The solution is the coefficients of the Laguerre series `p` that minimizes the sum of the weighted squared errors

\[E = \sum_j w_j^2 * |y_j - p(x_j)|^2,\]

where the \(w_j\) are the weights. This problem is solved by setting up the (typically) overdetermined matrix equation

\[V(x) * c = w * y,\]

where `V` is the weighted pseudo Vandermonde matrix of `x`, `c` are the coefficients to be solved for, `w` are the weights, and `y` are the observed values. This equation is then solved using the singular value decomposition of `V`.

If some of the singular values of `V` are so small that they are neglected, then a [`RankWarning`](numpy.rankwarning#numpy.RankWarning "numpy.RankWarning") will be issued. This means that the coefficient values may be poorly determined. Using a lower order fit will usually get rid of the warning. The `rcond` parameter can also be set to a value smaller than its default, but the resulting fit may be spurious and have large contributions from roundoff error.

Fits using Laguerre series are probably most useful when the data can be approximated by `sqrt(w(x)) * p(x)`, where `w(x)` is the Laguerre weight. In that case the weight `sqrt(w(x[i]))` should be used together with data values `y[i]/sqrt(w(x[i]))`. The weight function is available as [`lagweight`](numpy.polynomial.laguerre.lagweight#numpy.polynomial.laguerre.lagweight "numpy.polynomial.laguerre.lagweight").
#### References

1. Wikipedia, “Curve fitting”, <https://en.wikipedia.org/wiki/Curve_fitting>

#### Examples

```
>>> from numpy.polynomial.laguerre import lagfit, lagval
>>> x = np.linspace(0, 10)
>>> err = np.random.randn(len(x))/10
>>> y = lagval(x, [1, 2, 3]) + err
>>> lagfit(x, y, 2)
array([ 0.96971004,  2.00193749,  3.00288744]) # may vary
```

<https://numpy.org/doc/1.23/reference/generated/numpy.polynomial.laguerre.lagfit.html>

numpy.polynomial.laguerre.lagtrim
=================================

polynomial.laguerre.lagtrim(*c*, *tol=0*)[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/polynomial/polyutils.py#L156-L208)

Remove “small” “trailing” coefficients from a polynomial.

“Small” means “small in absolute value” and is controlled by the parameter `tol`; “trailing” means highest order coefficient(s), e.g., in `[0, 1, 1, 0, 0]` (which represents `0 + x + x**2 + 0*x**3 + 0*x**4`) both the 3rd and 4th order coefficients would be “trimmed.”

Parameters

**c** : array_like
1-d array of coefficients, ordered from lowest order to highest.

**tol** : number, optional
Trailing (i.e., highest order) elements with absolute value less than or equal to `tol` (default value is zero) are removed.

Returns

**trimmed** : ndarray
1-d array with trailing zeros removed. If the resulting series would be empty, a series containing a single zero is returned.

Raises

ValueError
If `tol` < 0

See also

`trimseq`

#### Examples

```
>>> from numpy.polynomial import polyutils as pu
>>> pu.trimcoef((0,0,3,0,5,0,0))
array([0., 0., 3., 0., 5.])
>>> pu.trimcoef((0,0,1e-3,0,1e-5,0,0),1e-3) # item == tol is trimmed
array([0.])
>>> i = complex(0,1) # works for complex
>>> pu.trimcoef((3e-4,1e-3*(1-i),5e-4,2e-5*(1+i)), 1e-3)
array([0.0003+0.j , 0.001 -0.001j])
```
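The Examples above exercise the shared `polyutils.trimcoef` helper; for clarity, here is the same behaviour through `lagtrim` itself, on arbitrary illustrative coefficients:

```python
import numpy as np
from numpy.polynomial.laguerre import lagtrim

c = np.array([1.0, 2.0, 0.0, 1e-6, 0.0])

# tol=0 (the default) removes only exact trailing zeros.
assert np.allclose(lagtrim(c), [1.0, 2.0, 0.0, 1e-6])

# A larger tol trims the whole trailing run with |coef| <= tol.
assert np.allclose(lagtrim(c, tol=1e-3), [1.0, 2.0])
```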
<https://numpy.org/doc/1.23/reference/generated/numpy.polynomial.laguerre.lagtrim.html>

numpy.polynomial.laguerre.lagline
=================================

polynomial.laguerre.lagline(*off*, *scl*)[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/polynomial/laguerre.py#L213-L248)

Laguerre series whose graph is a straight line.

Parameters

**off, scl** : scalars
The specified line is given by `off + scl*x`.

Returns

**y** : ndarray
This module’s representation of the Laguerre series for `off + scl*x`.

See also

[`numpy.polynomial.polynomial.polyline`](numpy.polynomial.polynomial.polyline#numpy.polynomial.polynomial.polyline "numpy.polynomial.polynomial.polyline"),
[`numpy.polynomial.chebyshev.chebline`](numpy.polynomial.chebyshev.chebline#numpy.polynomial.chebyshev.chebline "numpy.polynomial.chebyshev.chebline"),
[`numpy.polynomial.legendre.legline`](numpy.polynomial.legendre.legline#numpy.polynomial.legendre.legline "numpy.polynomial.legendre.legline"),
[`numpy.polynomial.hermite.hermline`](numpy.polynomial.hermite.hermline#numpy.polynomial.hermite.hermline "numpy.polynomial.hermite.hermline"),
[`numpy.polynomial.hermite_e.hermeline`](numpy.polynomial.hermite_e.hermeline#numpy.polynomial.hermite_e.hermeline "numpy.polynomial.hermite_e.hermeline")

#### Examples

```
>>> from numpy.polynomial.laguerre import lagline, lagval
>>> lagval(0, lagline(3, 2))
3.0
>>> lagval(1, lagline(3, 2))
5.0
```

<https://numpy.org/doc/1.23/reference/generated/numpy.polynomial.laguerre.lagline.html>

numpy.polynomial.laguerre.lag2poly
==================================

polynomial.laguerre.lag2poly(*c*)[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/polynomial/laguerre.py#L141-L193)

Convert a Laguerre series to a polynomial.
Convert an array representing the coefficients of a Laguerre series, ordered from lowest degree to highest, to an array of the coefficients of the equivalent polynomial (relative to the “standard” basis) ordered from lowest to highest degree.

Parameters

**c** : array_like
1-D array containing the Laguerre series coefficients, ordered from lowest order term to highest.

Returns

**pol** : ndarray
1-D array containing the coefficients of the equivalent polynomial (relative to the “standard” basis) ordered from lowest order term to highest.

See also

[`poly2lag`](numpy.polynomial.laguerre.poly2lag#numpy.polynomial.laguerre.poly2lag "numpy.polynomial.laguerre.poly2lag")

#### Notes

The easy way to do conversions between polynomial basis sets is to use the convert method of a class instance.

#### Examples

```
>>> from numpy.polynomial.laguerre import lag2poly
>>> lag2poly([ 23., -63., 58., -18.])
array([0., 1., 2., 3.])
```

<https://numpy.org/doc/1.23/reference/generated/numpy.polynomial.laguerre.lag2poly.html>

numpy.polynomial.laguerre.poly2lag
==================================

polynomial.laguerre.poly2lag(*pol*)[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/polynomial/laguerre.py#L96-L138)

Convert a polynomial to a Laguerre series.

Convert an array representing the coefficients of a polynomial (relative to the “standard” basis) ordered from lowest degree to highest, to an array of the coefficients of the equivalent Laguerre series, ordered from lowest to highest degree.

Parameters

**pol** : array_like
1-D array containing the polynomial coefficients

Returns

**c** : ndarray
1-D array containing the coefficients of the equivalent Laguerre series.

See also

[`lag2poly`](numpy.polynomial.laguerre.lag2poly#numpy.polynomial.laguerre.lag2poly "numpy.polynomial.laguerre.lag2poly")

#### Notes

The easy way to do conversions between polynomial basis sets is to use the convert method of a class instance.
#### Examples

```
>>> from numpy.polynomial.laguerre import poly2lag
>>> poly2lag(np.arange(4))
array([ 23., -63.,  58., -18.])
```

<https://numpy.org/doc/1.23/reference/generated/numpy.polynomial.laguerre.poly2lag.html>

numpy.polynomial.legendre.Legendre
==================================

*class*numpy.polynomial.legendre.Legendre(*coef*, *domain=None*, *window=None*)[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/polynomial/legendre.py#L1619-L1658)

A Legendre series class.

The Legendre class provides the standard Python numerical methods ‘+’, ‘-’, ‘*’, ‘//’, ‘%’, ‘divmod’, ‘**’, and ‘()’ as well as the attributes and methods listed in the `ABCPolyBase` documentation.

Parameters

**coef** : array_like
Legendre coefficients in order of increasing degree, i.e., `(1, 2, 3)` gives `1*P_0(x) + 2*P_1(x) + 3*P_2(x)`.

**domain** : (2,) array_like, optional
Domain to use. The interval `[domain[0], domain[1]]` is mapped to the interval `[window[0], window[1]]` by shifting and scaling. The default value is [-1, 1].

**window** : (2,) array_like, optional
Window, see [`domain`](numpy.polynomial.legendre.legendre.domain#numpy.polynomial.legendre.Legendre.domain "numpy.polynomial.legendre.Legendre.domain") for its use. The default value is [-1, 1].

New in version 1.6.0.

#### Methods

| Method | Description |
| --- | --- |
| [`__call__`](numpy.polynomial.legendre.legendre.__call__#numpy.polynomial.legendre.Legendre.__call__ "numpy.polynomial.legendre.Legendre.__call__")(arg) | Call self as a function. |
| [`basis`](numpy.polynomial.legendre.legendre.basis#numpy.polynomial.legendre.Legendre.basis "numpy.polynomial.legendre.Legendre.basis")(deg[, domain, window]) | Series basis polynomial of degree `deg`. |
| [`cast`](numpy.polynomial.legendre.legendre.cast#numpy.polynomial.legendre.Legendre.cast "numpy.polynomial.legendre.Legendre.cast")(series[, domain, window]) | Convert series to series of this class. |
| [`convert`](numpy.polynomial.legendre.legendre.convert#numpy.polynomial.legendre.Legendre.convert "numpy.polynomial.legendre.Legendre.convert")([domain, kind, window]) | Convert series to a different kind and/or domain and/or window. |
| [`copy`](numpy.polynomial.legendre.legendre.copy#numpy.polynomial.legendre.Legendre.copy "numpy.polynomial.legendre.Legendre.copy")() | Return a copy. |
| [`cutdeg`](numpy.polynomial.legendre.legendre.cutdeg#numpy.polynomial.legendre.Legendre.cutdeg "numpy.polynomial.legendre.Legendre.cutdeg")(deg) | Truncate series to the given degree. |
| [`degree`](numpy.polynomial.legendre.legendre.degree#numpy.polynomial.legendre.Legendre.degree "numpy.polynomial.legendre.Legendre.degree")() | The degree of the series. |
| [`deriv`](numpy.polynomial.legendre.legendre.deriv#numpy.polynomial.legendre.Legendre.deriv "numpy.polynomial.legendre.Legendre.deriv")([m]) | Differentiate. |
| [`fit`](numpy.polynomial.legendre.legendre.fit#numpy.polynomial.legendre.Legendre.fit "numpy.polynomial.legendre.Legendre.fit")(x, y, deg[, domain, rcond, full, w, window]) | Least squares fit to data. |
| [`fromroots`](numpy.polynomial.legendre.legendre.fromroots#numpy.polynomial.legendre.Legendre.fromroots "numpy.polynomial.legendre.Legendre.fromroots")(roots[, domain, window]) | Return series instance that has the specified roots. |
| [`has_samecoef`](numpy.polynomial.legendre.legendre.has_samecoef#numpy.polynomial.legendre.Legendre.has_samecoef "numpy.polynomial.legendre.Legendre.has_samecoef")(other) | Check if coefficients match. |
| [`has_samedomain`](numpy.polynomial.legendre.legendre.has_samedomain#numpy.polynomial.legendre.Legendre.has_samedomain "numpy.polynomial.legendre.Legendre.has_samedomain")(other) | Check if domains match. |
| [`has_sametype`](numpy.polynomial.legendre.legendre.has_sametype#numpy.polynomial.legendre.Legendre.has_sametype "numpy.polynomial.legendre.Legendre.has_sametype")(other) | Check if types match. |
| [`has_samewindow`](numpy.polynomial.legendre.legendre.has_samewindow#numpy.polynomial.legendre.Legendre.has_samewindow "numpy.polynomial.legendre.Legendre.has_samewindow")(other) | Check if windows match. |
| [`identity`](numpy.polynomial.legendre.legendre.identity#numpy.polynomial.legendre.Legendre.identity "numpy.polynomial.legendre.Legendre.identity")([domain, window]) | Identity function. |
| [`integ`](numpy.polynomial.legendre.legendre.integ#numpy.polynomial.legendre.Legendre.integ "numpy.polynomial.legendre.Legendre.integ")([m, k, lbnd]) | Integrate. |
| [`linspace`](numpy.polynomial.legendre.legendre.linspace#numpy.polynomial.legendre.Legendre.linspace "numpy.polynomial.legendre.Legendre.linspace")([n, domain]) | Return x, y values at equally spaced points in domain. |
| [`mapparms`](numpy.polynomial.legendre.legendre.mapparms#numpy.polynomial.legendre.Legendre.mapparms "numpy.polynomial.legendre.Legendre.mapparms")() | Return the mapping parameters. |
| [`roots`](numpy.polynomial.legendre.legendre.roots#numpy.polynomial.legendre.Legendre.roots "numpy.polynomial.legendre.Legendre.roots")() | Return the roots of the series polynomial. |
| [`trim`](numpy.polynomial.legendre.legendre.trim#numpy.polynomial.legendre.Legendre.trim "numpy.polynomial.legendre.Legendre.trim")([tol]) | Remove trailing coefficients. |
| [`truncate`](numpy.polynomial.legendre.legendre.truncate#numpy.polynomial.legendre.Legendre.truncate "numpy.polynomial.legendre.Legendre.truncate")(size) | Truncate series to length `size`. |

<https://numpy.org/doc/1.23/reference/generated/numpy.polynomial.legendre.Legendre.html>

numpy.polynomial.legendre.legdomain
===================================

polynomial.legendre.legdomain *= array([-1, 1])*

An array object represents a multidimensional, homogeneous array of fixed-size items.
An associated data-type object describes the format of each element in the array (its byte-order, how many bytes it occupies in memory, whether it is an integer, a floating point number, or something else, etc.)

Arrays should be constructed using [`array`](numpy.array#numpy.array "numpy.array"), [`zeros`](numpy.zeros#numpy.zeros "numpy.zeros") or [`empty`](numpy.empty#numpy.empty "numpy.empty") (refer to the See Also section below). The parameters given here refer to a low-level method (`ndarray(...)`) for instantiating an array.

For more information, refer to the [`numpy`](../index#module-numpy "numpy") module and examine the methods and attributes of an array.

Parameters

**(for the __new__ method; see Notes below)**

**shape** : tuple of ints
Shape of created array.

**dtype** : data-type, optional
Any object that can be interpreted as a numpy data type.

**buffer** : object exposing buffer interface, optional
Used to fill the array with data.

**offset** : int, optional
Offset of array data in buffer.

**strides** : tuple of ints, optional
Strides of data in memory.

**order** : {‘C’, ‘F’}, optional
Row-major (C-style) or column-major (Fortran-style) order.

See also

[`array`](numpy.array#numpy.array "numpy.array")
Construct an array.

[`zeros`](numpy.zeros#numpy.zeros "numpy.zeros")
Create an array, each element of which is zero.

[`empty`](numpy.empty#numpy.empty "numpy.empty")
Create an array, but leave its allocated memory unchanged (i.e., it contains “garbage”).

[`dtype`](numpy.dtype#numpy.dtype "numpy.dtype")
Create a data-type.

[`numpy.typing.NDArray`](../typing#numpy.typing.NDArray "numpy.typing.NDArray")
An ndarray alias [generic](https://docs.python.org/3/glossary.html#term-generic-type "(in Python v3.10)") w.r.t. its [`dtype.type`](numpy.dtype.type#numpy.dtype.type "numpy.dtype.type").

#### Notes

There are two modes of creating an array using `__new__`:

1. If `buffer` is None, then only [`shape`](numpy.shape#numpy.shape "numpy.shape"), [`dtype`](numpy.dtype#numpy.dtype "numpy.dtype"), and `order` are used.
2. If `buffer` is an object exposing the buffer interface, then all keywords are interpreted.

No `__init__` method is needed because the array is fully initialized after the `__new__` method.

#### Examples

These examples illustrate the low-level [`ndarray`](numpy.ndarray#numpy.ndarray "numpy.ndarray") constructor. Refer to the `See Also` section above for easier ways of constructing an ndarray.
First mode, `buffer` is None:

```
>>> np.ndarray(shape=(2,2), dtype=float, order='F')
array([[0.0e+000, 0.0e+000], # random
       [     nan, 2.5e-323]])
```

Second mode:

```
>>> np.ndarray((2,), buffer=np.array([1,2,3]),
...            offset=np.int_().itemsize,
...            dtype=int) # offset = 1*itemsize, i.e. skip first element
array([2, 3])
```

Attributes

**T** : ndarray
Transpose of the array.

**data** : buffer
The array’s elements, in memory.

**dtype** : dtype object
Describes the format of the elements in the array.

**flags** : dict
Dictionary containing information related to memory use, e.g., ‘C_CONTIGUOUS’, ‘OWNDATA’, ‘WRITEABLE’, etc.

**flat** : numpy.flatiter object
Flattened version of the array as an iterator. The iterator allows assignments, e.g., `x.flat = 3` (See [`ndarray.flat`](numpy.ndarray.flat#numpy.ndarray.flat "numpy.ndarray.flat") for assignment examples; TODO).

**imag** : ndarray
Imaginary part of the array.

**real** : ndarray
Real part of the array.

**size** : int
Number of elements in the array.

**itemsize** : int
The memory use of each array element in bytes.

**nbytes** : int
The total number of bytes required to store the array data, i.e., `itemsize * size`.

**ndim** : int
The array’s number of dimensions.

**shape** : tuple of ints
Shape of the array.

**strides** : tuple of ints
The step-size required to move from one element to the next in memory. For example, a contiguous `(3, 4)` array of type `int16` in C-order has strides `(8, 2)`. This implies that to move from element to element in memory requires jumps of 2 bytes. To move from row-to-row, one needs to jump 8 bytes at a time (`2 * 4`).

**ctypes** : ctypes object
Class containing properties of the array needed for interaction with ctypes.

**base** : ndarray
If the array is a view into another array, that array is its `base` (unless that array is also a view). The `base` array is where the array data is actually stored.
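`legdomain` is just a module-level ndarray holding the default Legendre domain; a quick sketch of how it relates to the `Legendre` class documented earlier (the coefficients are arbitrary illustrative values):

```python
import numpy as np
from numpy.polynomial import legendre as L

# legdomain is a plain ndarray with the default domain [-1, 1].
assert np.array_equal(L.legdomain, [-1, 1])

# A Legendre instance constructed without an explicit domain
# picks up this default.
p = L.Legendre([1, 2, 3])
assert np.array_equal(p.domain, L.legdomain)
```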
<https://numpy.org/doc/1.23/reference/generated/numpy.polynomial.legendre.legdomain.html>

numpy.polynomial.legendre.legzero
=================================

polynomial.legendre.legzero *= array([0])*

An array object represents a multidimensional, homogeneous array of fixed-size items. The generic `ndarray` documentation rendered on this page is identical to the copy reproduced under `legdomain` above.
<https://numpy.org/doc/1.23/reference/generated/numpy.polynomial.legendre.legzero.html>

numpy.polynomial.legendre.legone
================================

polynomial.legendre.legone = array([1])
<https://numpy.org/doc/1.23/reference/generated/numpy.polynomial.legendre.legone.html>

numpy.polynomial.legendre.legx
==============================

polynomial.legendre.legx = array([0, 1])
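A minimal sketch of how the three module constants behave: they are simply coefficient arrays for the series 0, 1, and x, which can be confirmed with `legval`.

```python
import numpy as np
from numpy.polynomial import legendre as L

x = np.linspace(-1, 1, 5)

# legzero = array([0]): the zero series.
# legone  = array([1]): the constant series 1 (coefficient of P_0).
# legx    = array([0, 1]): the series x (coefficient of P_1, since P_1(x) = x).
print(L.legval(x, L.legzero))  # all zeros
print(L.legval(x, L.legone))   # all ones
print(L.legval(x, L.legx))     # x itself
```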
<https://numpy.org/doc/1.23/reference/generated/numpy.polynomial.legendre.legx.htmlnumpy.polynomial.legendre.legadd ================================ polynomial.legendre.legadd(*c1*, *c2*)[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/polynomial/legendre.py#L322-L361) Add one Legendre series to another. Returns the sum of two Legendre series `c1` + `c2`. The arguments are sequences of coefficients ordered from lowest order term to highest, i.e., [1,2,3] represents the series `P_0 + 2*P_1 + 3*P_2`. Parameters **c1, c2**array_like 1-D arrays of Legendre series coefficients ordered from low to high. Returns **out**ndarray Array representing the Legendre series of their sum. See also [`legsub`](numpy.polynomial.legendre.legsub#numpy.polynomial.legendre.legsub "numpy.polynomial.legendre.legsub"), [`legmulx`](numpy.polynomial.legendre.legmulx#numpy.polynomial.legendre.legmulx "numpy.polynomial.legendre.legmulx"), [`legmul`](numpy.polynomial.legendre.legmul#numpy.polynomial.legendre.legmul "numpy.polynomial.legendre.legmul"), [`legdiv`](numpy.polynomial.legendre.legdiv#numpy.polynomial.legendre.legdiv "numpy.polynomial.legendre.legdiv"), [`legpow`](numpy.polynomial.legendre.legpow#numpy.polynomial.legendre.legpow "numpy.polynomial.legendre.legpow") #### Notes Unlike multiplication, division, etc., the sum of two Legendre series is a Legendre series (without having to “reproject” the result onto the basis set) so addition, just like that of “standard” polynomials, is simply “component-wise.” #### Examples ``` >>> from numpy.polynomial import legendre as L >>> c1 = (1,2,3) >>> c2 = (3,2,1) >>> L.legadd(c1,c2) array([4., 4., 4.]) ``` © 2005–2022 NumPy Developers Licensed under the 3-clause BSD License. 
<https://numpy.org/doc/1.23/reference/generated/numpy.polynomial.legendre.legadd.htmlnumpy.polynomial.legendre.legsub ================================ polynomial.legendre.legsub(*c1*, *c2*)[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/polynomial/legendre.py#L364-L405) Subtract one Legendre series from another. Returns the difference of two Legendre series `c1` - `c2`. The sequences of coefficients are from lowest order term to highest, i.e., [1,2,3] represents the series `P_0 + 2*P_1 + 3*P_2`. Parameters **c1, c2**array_like 1-D arrays of Legendre series coefficients ordered from low to high. Returns **out**ndarray Of Legendre series coefficients representing their difference. See also [`legadd`](numpy.polynomial.legendre.legadd#numpy.polynomial.legendre.legadd "numpy.polynomial.legendre.legadd"), [`legmulx`](numpy.polynomial.legendre.legmulx#numpy.polynomial.legendre.legmulx "numpy.polynomial.legendre.legmulx"), [`legmul`](numpy.polynomial.legendre.legmul#numpy.polynomial.legendre.legmul "numpy.polynomial.legendre.legmul"), [`legdiv`](numpy.polynomial.legendre.legdiv#numpy.polynomial.legendre.legdiv "numpy.polynomial.legendre.legdiv"), [`legpow`](numpy.polynomial.legendre.legpow#numpy.polynomial.legendre.legpow "numpy.polynomial.legendre.legpow") #### Notes Unlike multiplication, division, etc., the difference of two Legendre series is a Legendre series (without having to “reproject” the result onto the basis set) so subtraction, just like that of “standard” polynomials, is simply “component-wise.” #### Examples ``` >>> from numpy.polynomial import legendre as L >>> c1 = (1,2,3) >>> c2 = (3,2,1) >>> L.legsub(c1,c2) array([-2., 0., 2.]) >>> L.legsub(c2,c1) # -C.legsub(c1,c2) array([ 2., 0., -2.]) ``` © 2005–2022 NumPy Developers Licensed under the 3-clause BSD License. 
<https://numpy.org/doc/1.23/reference/generated/numpy.polynomial.legendre.legsub.htmlnumpy.polynomial.legendre.legmulx ================================= polynomial.legendre.legmulx(*c*)[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/polynomial/legendre.py#L408-L461) Multiply a Legendre series by x. Multiply the Legendre series `c` by x, where x is the independent variable. Parameters **c**array_like 1-D array of Legendre series coefficients ordered from low to high. Returns **out**ndarray Array representing the result of the multiplication. See also [`legadd`](numpy.polynomial.legendre.legadd#numpy.polynomial.legendre.legadd "numpy.polynomial.legendre.legadd"), [`legmul`](numpy.polynomial.legendre.legmul#numpy.polynomial.legendre.legmul "numpy.polynomial.legendre.legmul"), [`legdiv`](numpy.polynomial.legendre.legdiv#numpy.polynomial.legendre.legdiv "numpy.polynomial.legendre.legdiv"), [`legpow`](numpy.polynomial.legendre.legpow#numpy.polynomial.legendre.legpow "numpy.polynomial.legendre.legpow") #### Notes The multiplication uses the recursion relationship for Legendre polynomials in the form \[xP_i(x) = ((i + 1)*P_{i + 1}(x) + i*P_{i - 1}(x))/(2i + 1)\] #### Examples ``` >>> from numpy.polynomial import legendre as L >>> L.legmulx([1,2,3]) array([ 0.66666667, 2.2, 1.33333333, 1.8]) # may vary ``` © 2005–2022 NumPy Developers Licensed under the 3-clause BSD License. <https://numpy.org/doc/1.23/reference/generated/numpy.polynomial.legendre.legmulx.htmlnumpy.polynomial.legendre.legmul ================================ polynomial.legendre.legmul(*c1*, *c2*)[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/polynomial/legendre.py#L464-L529) Multiply one Legendre series by another. Returns the product of two Legendre series `c1` * `c2`. The arguments are sequences of coefficients, from lowest order “term” to highest, e.g., [1,2,3] represents the series `P_0 + 2*P_1 + 3*P_2`. 
Parameters **c1, c2**array_like 1-D arrays of Legendre series coefficients ordered from low to high. Returns **out**ndarray Of Legendre series coefficients representing their product. See also [`legadd`](numpy.polynomial.legendre.legadd#numpy.polynomial.legendre.legadd "numpy.polynomial.legendre.legadd"), [`legsub`](numpy.polynomial.legendre.legsub#numpy.polynomial.legendre.legsub "numpy.polynomial.legendre.legsub"), [`legmulx`](numpy.polynomial.legendre.legmulx#numpy.polynomial.legendre.legmulx "numpy.polynomial.legendre.legmulx"), [`legdiv`](numpy.polynomial.legendre.legdiv#numpy.polynomial.legendre.legdiv "numpy.polynomial.legendre.legdiv"), [`legpow`](numpy.polynomial.legendre.legpow#numpy.polynomial.legendre.legpow "numpy.polynomial.legendre.legpow") #### Notes In general, the (polynomial) product of two C-series results in terms that are not in the Legendre polynomial basis set. Thus, to express the product as a Legendre series, it is necessary to “reproject” the product onto said basis set, which may produce “unintuitive” (but correct) results; see Examples section below. #### Examples ``` >>> from numpy.polynomial import legendre as L >>> c1 = (1,2,3) >>> c2 = (3,2) >>> L.legmul(c1,c2) # multiplication requires "reprojection" array([ 4.33333333, 10.4 , 11.66666667, 3.6 ]) # may vary ``` © 2005–2022 NumPy Developers Licensed under the 3-clause BSD License. <https://numpy.org/doc/1.23/reference/generated/numpy.polynomial.legendre.legmul.htmlnumpy.polynomial.legendre.legdiv ================================ polynomial.legendre.legdiv(*c1*, *c2*)[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/polynomial/legendre.py#L532-L578) Divide one Legendre series by another. Returns the quotient-with-remainder of two Legendre series `c1` / `c2`. The arguments are sequences of coefficients from lowest order “term” to highest, e.g., [1,2,3] represents the series `P_0 + 2*P_1 + 3*P_2`. 
Parameters **c1, c2**array_like 1-D arrays of Legendre series coefficients ordered from low to high. Returns **quo, rem**ndarrays Of Legendre series coefficients representing the quotient and remainder. See also [`legadd`](numpy.polynomial.legendre.legadd#numpy.polynomial.legendre.legadd "numpy.polynomial.legendre.legadd"), [`legsub`](numpy.polynomial.legendre.legsub#numpy.polynomial.legendre.legsub "numpy.polynomial.legendre.legsub"), [`legmulx`](numpy.polynomial.legendre.legmulx#numpy.polynomial.legendre.legmulx "numpy.polynomial.legendre.legmulx"), [`legmul`](numpy.polynomial.legendre.legmul#numpy.polynomial.legendre.legmul "numpy.polynomial.legendre.legmul"), [`legpow`](numpy.polynomial.legendre.legpow#numpy.polynomial.legendre.legpow "numpy.polynomial.legendre.legpow") #### Notes In general, the (polynomial) division of one Legendre series by another results in quotient and remainder terms that are not in the Legendre polynomial basis set. Thus, to express these results as a Legendre series, it is necessary to “reproject” the results onto the Legendre basis set, which may produce “unintuitive” (but correct) results; see Examples section below. #### Examples ``` >>> from numpy.polynomial import legendre as L >>> c1 = (1,2,3) >>> c2 = (3,2,1) >>> L.legdiv(c1,c2) # quotient "intuitive," remainder not (array([3.]), array([-8., -4.])) >>> c2 = (0,1,2,3) >>> L.legdiv(c2,c1) # neither "intuitive" (array([-0.07407407, 1.66666667]), array([-1.03703704, -2.51851852])) # may vary ``` © 2005–2022 NumPy Developers Licensed under the 3-clause BSD License. <https://numpy.org/doc/1.23/reference/generated/numpy.polynomial.legendre.legdiv.htmlnumpy.polynomial.legendre.legpow ================================ polynomial.legendre.legpow(*c*, *pow*, *maxpower=16*)[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/polynomial/legendre.py#L581-L609) Raise a Legendre series to a power. Returns the Legendre series `c` raised to the power `pow`. 
The argument `c` is a sequence of coefficients ordered from low to high. i.e., [1,2,3] is the series `P_0 + 2*P_1 + 3*P_2.` Parameters **c**array_like 1-D array of Legendre series coefficients ordered from low to high. **pow**integer Power to which the series will be raised **maxpower**integer, optional Maximum power allowed. This is mainly to limit growth of the series to unmanageable size. Default is 16 Returns **coef**ndarray Legendre series of power. See also [`legadd`](numpy.polynomial.legendre.legadd#numpy.polynomial.legendre.legadd "numpy.polynomial.legendre.legadd"), [`legsub`](numpy.polynomial.legendre.legsub#numpy.polynomial.legendre.legsub "numpy.polynomial.legendre.legsub"), [`legmulx`](numpy.polynomial.legendre.legmulx#numpy.polynomial.legendre.legmulx "numpy.polynomial.legendre.legmulx"), [`legmul`](numpy.polynomial.legendre.legmul#numpy.polynomial.legendre.legmul "numpy.polynomial.legendre.legmul"), [`legdiv`](numpy.polynomial.legendre.legdiv#numpy.polynomial.legendre.legdiv "numpy.polynomial.legendre.legdiv") © 2005–2022 NumPy Developers Licensed under the 3-clause BSD License. <https://numpy.org/doc/1.23/reference/generated/numpy.polynomial.legendre.legpow.htmlnumpy.polynomial.legendre.legval ================================ polynomial.legendre.legval(*x*, *c*, *tensor=True*)[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/polynomial/legendre.py#L832-L914) Evaluate a Legendre series at points x. If `c` is of length `n + 1`, this function returns the value: \[p(x) = c_0 * L_0(x) + c_1 * L_1(x) + ... + c_n * L_n(x)\] The parameter `x` is converted to an array only if it is a tuple or a list, otherwise it is treated as a scalar. In either case, either `x` or its elements must support multiplication and addition both with themselves and with the elements of `c`. If `c` is a 1-D array, then `p(x)` will have the same shape as `x`. If `c` is multidimensional, then the shape of the result depends on the value of `tensor`. 
If `tensor` is true the shape will be c.shape[1:] + x.shape. If `tensor` is false the shape will be c.shape[1:]. Note that scalars have shape (,). Trailing zeros in the coefficients will be used in the evaluation, so they should be avoided if efficiency is a concern. Parameters **x**array_like, compatible object If `x` is a list or tuple, it is converted to an ndarray, otherwise it is left unchanged and treated as a scalar. In either case, `x` or its elements must support addition and multiplication with themselves and with the elements of `c`. **c**array_like Array of coefficients ordered so that the coefficients for terms of degree n are contained in c[n]. If `c` is multidimensional the remaining indices enumerate multiple polynomials. In the two dimensional case the coefficients may be thought of as stored in the columns of `c`. **tensor**boolean, optional If True, the shape of the coefficient array is extended with ones on the right, one for each dimension of `x`. Scalars have dimension 0 for this action. The result is that every column of coefficients in `c` is evaluated for every element of `x`. If False, `x` is broadcast over the columns of `c` for the evaluation. This keyword is useful when `c` is multidimensional. The default value is True. New in version 1.7.0. Returns **values**ndarray, algebra_like The shape of the return value is described above. See also [`legval2d`](numpy.polynomial.legendre.legval2d#numpy.polynomial.legendre.legval2d "numpy.polynomial.legendre.legval2d"), [`leggrid2d`](numpy.polynomial.legendre.leggrid2d#numpy.polynomial.legendre.leggrid2d "numpy.polynomial.legendre.leggrid2d"), [`legval3d`](numpy.polynomial.legendre.legval3d#numpy.polynomial.legendre.legval3d "numpy.polynomial.legendre.legval3d"), [`leggrid3d`](numpy.polynomial.legendre.leggrid3d#numpy.polynomial.legendre.leggrid3d "numpy.polynomial.legendre.leggrid3d") #### Notes The evaluation uses Clenshaw recursion, aka synthetic division. 
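A minimal usage sketch of `legval`, covering both the 1-D case and the shape rule for multidimensional coefficients described above:

```python
import numpy as np
from numpy.polynomial import legendre as L

x = np.array([-1.0, 0.0, 1.0])

# 1-D coefficients: p(x) = 1*P_0(x) + 2*P_1(x) + 3*P_2(x).
# Since P_n(1) = 1 for every n, p(1) = 1 + 2 + 3 = 6.
c = np.array([1.0, 2.0, 3.0])
vals = L.legval(x, c)

# 2-D coefficients: each column is an independent series; with the
# default tensor=True the result has shape c.shape[1:] + x.shape.
c2 = np.array([[1.0, 0.0],   # degree-0 coefficients of the two series
               [0.0, 1.0]])  # degree-1 coefficients
grid = L.legval(x, c2)       # shape (2, 3): rows are P_0(x) and P_1(x)
```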
© 2005–2022 NumPy Developers Licensed under the 3-clause BSD License. <https://numpy.org/doc/1.23/reference/generated/numpy.polynomial.legendre.legval.htmlnumpy.polynomial.legendre.legval2d ================================== polynomial.legendre.legval2d(*x*, *y*, *c*)[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/polynomial/legendre.py#L917-L963) Evaluate a 2-D Legendre series at points (x, y). This function returns the values: \[p(x,y) = \sum_{i,j} c_{i,j} * L_i(x) * L_j(y)\] The parameters `x` and `y` are converted to arrays only if they are tuples or a lists, otherwise they are treated as a scalars and they must have the same shape after conversion. In either case, either `x` and `y` or their elements must support multiplication and addition both with themselves and with the elements of `c`. If `c` is a 1-D array a one is implicitly appended to its shape to make it 2-D. The shape of the result will be c.shape[2:] + x.shape. Parameters **x, y**array_like, compatible objects The two dimensional series is evaluated at the points `(x, y)`, where `x` and `y` must have the same shape. If `x` or `y` is a list or tuple, it is first converted to an ndarray, otherwise it is left unchanged and if it isn’t an ndarray it is treated as a scalar. **c**array_like Array of coefficients ordered so that the coefficient of the term of multi-degree i,j is contained in `c[i,j]`. If `c` has dimension greater than two the remaining indices enumerate multiple sets of coefficients. Returns **values**ndarray, compatible object The values of the two dimensional Legendre series at points formed from pairs of corresponding values from `x` and `y`. 
See also [`legval`](numpy.polynomial.legendre.legval#numpy.polynomial.legendre.legval "numpy.polynomial.legendre.legval"), [`leggrid2d`](numpy.polynomial.legendre.leggrid2d#numpy.polynomial.legendre.leggrid2d "numpy.polynomial.legendre.leggrid2d"), [`legval3d`](numpy.polynomial.legendre.legval3d#numpy.polynomial.legendre.legval3d "numpy.polynomial.legendre.legval3d"), [`leggrid3d`](numpy.polynomial.legendre.leggrid3d#numpy.polynomial.legendre.leggrid3d "numpy.polynomial.legendre.leggrid3d") #### Notes New in version 1.7.0. © 2005–2022 NumPy Developers Licensed under the 3-clause BSD License. <https://numpy.org/doc/1.23/reference/generated/numpy.polynomial.legendre.legval2d.htmlnumpy.polynomial.legendre.legval3d ================================== polynomial.legendre.legval3d(*x*, *y*, *z*, *c*)[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/polynomial/legendre.py#L1019-L1067) Evaluate a 3-D Legendre series at points (x, y, z). This function returns the values: \[p(x,y,z) = \sum_{i,j,k} c_{i,j,k} * L_i(x) * L_j(y) * L_k(z)\] The parameters `x`, `y`, and `z` are converted to arrays only if they are tuples or a lists, otherwise they are treated as a scalars and they must have the same shape after conversion. In either case, either `x`, `y`, and `z` or their elements must support multiplication and addition both with themselves and with the elements of `c`. If `c` has fewer than 3 dimensions, ones are implicitly appended to its shape to make it 3-D. The shape of the result will be c.shape[3:] + x.shape. Parameters **x, y, z**array_like, compatible object The three dimensional series is evaluated at the points `(x, y, z)`, where `x`, `y`, and `z` must have the same shape. If any of `x`, `y`, or `z` is a list or tuple, it is first converted to an ndarray, otherwise it is left unchanged and if it isn’t an ndarray it is treated as a scalar. 
**c**array_like Array of coefficients ordered so that the coefficient of the term of multi-degree i,j,k is contained in `c[i,j,k]`. If `c` has dimension greater than 3 the remaining indices enumerate multiple sets of coefficients. Returns **values**ndarray, compatible object The values of the multidimensional polynomial on points formed with triples of corresponding values from `x`, `y`, and `z`. See also [`legval`](numpy.polynomial.legendre.legval#numpy.polynomial.legendre.legval "numpy.polynomial.legendre.legval"), [`legval2d`](numpy.polynomial.legendre.legval2d#numpy.polynomial.legendre.legval2d "numpy.polynomial.legendre.legval2d"), [`leggrid2d`](numpy.polynomial.legendre.leggrid2d#numpy.polynomial.legendre.leggrid2d "numpy.polynomial.legendre.leggrid2d"), [`leggrid3d`](numpy.polynomial.legendre.leggrid3d#numpy.polynomial.legendre.leggrid3d "numpy.polynomial.legendre.leggrid3d") #### Notes New in version 1.7.0. © 2005–2022 NumPy Developers Licensed under the 3-clause BSD License. <https://numpy.org/doc/1.23/reference/generated/numpy.polynomial.legendre.legval3d.htmlnumpy.polynomial.legendre.leggrid2d =================================== polynomial.legendre.leggrid2d(*x*, *y*, *c*)[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/polynomial/legendre.py#L966-L1016) Evaluate a 2-D Legendre series on the Cartesian product of x and y. This function returns the values: \[p(a,b) = \sum_{i,j} c_{i,j} * L_i(a) * L_j(b)\] where the points `(a, b)` consist of all pairs formed by taking `a` from `x` and `b` from `y`. The resulting points form a grid with `x` in the first dimension and `y` in the second. The parameters `x` and `y` are converted to arrays only if they are tuples or a lists, otherwise they are treated as a scalars. In either case, either `x` and `y` or their elements must support multiplication and addition both with themselves and with the elements of `c`. 
If `c` has fewer than two dimensions, ones are implicitly appended to its shape to make it 2-D. The shape of the result will be c.shape[2:] + x.shape + y.shape. Parameters **x, y**array_like, compatible objects The two dimensional series is evaluated at the points in the Cartesian product of `x` and `y`. If `x` or `y` is a list or tuple, it is first converted to an ndarray, otherwise it is left unchanged and, if it isn’t an ndarray, it is treated as a scalar. **c**array_like Array of coefficients ordered so that the coefficient of the term of multi-degree i,j is contained in `c[i,j]`. If `c` has dimension greater than two the remaining indices enumerate multiple sets of coefficients. Returns **values**ndarray, compatible object The values of the two dimensional Legendre series at points in the Cartesian product of `x` and `y`. See also [`legval`](numpy.polynomial.legendre.legval#numpy.polynomial.legendre.legval "numpy.polynomial.legendre.legval"), [`legval2d`](numpy.polynomial.legendre.legval2d#numpy.polynomial.legendre.legval2d "numpy.polynomial.legendre.legval2d"), [`legval3d`](numpy.polynomial.legendre.legval3d#numpy.polynomial.legendre.legval3d "numpy.polynomial.legendre.legval3d"), [`leggrid3d`](numpy.polynomial.legendre.leggrid3d#numpy.polynomial.legendre.leggrid3d "numpy.polynomial.legendre.leggrid3d") #### Notes New in version 1.7.0. © 2005–2022 NumPy Developers Licensed under the 3-clause BSD License. <https://numpy.org/doc/1.23/reference/generated/numpy.polynomial.legendre.leggrid2d.htmlnumpy.polynomial.legendre.leggrid3d =================================== polynomial.legendre.leggrid3d(*x*, *y*, *z*, *c*)[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/polynomial/legendre.py#L1070-L1123) Evaluate a 3-D Legendre series on the Cartesian product of x, y, and z. 
This function returns the values: \[p(a,b,c) = \sum_{i,j,k} c_{i,j,k} * L_i(a) * L_j(b) * L_k(c)\] where the points `(a, b, c)` consist of all triples formed by taking `a` from `x`, `b` from `y`, and `c` from `z`. The resulting points form a grid with `x` in the first dimension, `y` in the second, and `z` in the third. The parameters `x`, `y`, and `z` are converted to arrays only if they are tuples or lists, otherwise they are treated as scalars. In either case, either `x`, `y`, and `z` or their elements must support multiplication and addition both with themselves and with the elements of `c`. If `c` has fewer than three dimensions, ones are implicitly appended to its shape to make it 3-D. The shape of the result will be c.shape[3:] + x.shape + y.shape + z.shape. Parameters **x, y, z**array_like, compatible objects The three dimensional series is evaluated at the points in the Cartesian product of `x`, `y`, and `z`. If `x`, `y`, or `z` is a list or tuple, it is first converted to an ndarray, otherwise it is left unchanged and, if it isn’t an ndarray, it is treated as a scalar. **c**array_like Array of coefficients ordered so that the coefficient of the term of multi-degree i,j,k is contained in `c[i,j,k]`. If `c` has dimension greater than three the remaining indices enumerate multiple sets of coefficients. Returns **values**ndarray, compatible object The values of the three dimensional polynomial at points in the Cartesian product of `x`, `y`, and `z`. See also [`legval`](numpy.polynomial.legendre.legval#numpy.polynomial.legendre.legval "numpy.polynomial.legendre.legval"), [`legval2d`](numpy.polynomial.legendre.legval2d#numpy.polynomial.legendre.legval2d "numpy.polynomial.legendre.legval2d"), [`leggrid2d`](numpy.polynomial.legendre.leggrid2d#numpy.polynomial.legendre.leggrid2d "numpy.polynomial.legendre.leggrid2d"), [`legval3d`](numpy.polynomial.legendre.legval3d#numpy.polynomial.legendre.legval3d "numpy.polynomial.legendre.legval3d") #### Notes New in version 1.7.0. 
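A small sketch contrasting the pairwise evaluators with the grid evaluators above; here the coefficient array encodes p(x, y) = x*y, since L_1 is the identity function:

```python
import numpy as np
from numpy.polynomial import legendre as L

# c[i, j] multiplies L_i(x) * L_j(y).  With only c[1, 1] = 1 the series
# is p(x, y) = x * y, because L_1(t) = t.
c = np.zeros((2, 2))
c[1, 1] = 1.0

x = np.array([0.0, 0.5, 1.0])
y = np.array([2.0, 3.0])

# legval2d pairs its inputs element-wise (shapes must match):
pairwise = L.legval2d(x, x, c)   # p(x_i, x_i) = x_i**2, shape (3,)

# leggrid2d evaluates on the Cartesian product instead:
grid = L.leggrid2d(x, y, c)      # p(x_i, y_j) = x_i * y_j, shape (3, 2)
```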
© 2005–2022 NumPy Developers Licensed under the 3-clause BSD License. <https://numpy.org/doc/1.23/reference/generated/numpy.polynomial.legendre.leggrid3d.htmlnumpy.polynomial.legendre.legder ================================ polynomial.legendre.legder(*c*, *m=1*, *scl=1*, *axis=0*)[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/polynomial/legendre.py#L612-L701) Differentiate a Legendre series. Returns the Legendre series coefficients `c` differentiated `m` times along `axis`. At each iteration the result is multiplied by `scl` (the scaling factor is for use in a linear change of variable). The argument `c` is an array of coefficients from low to high degree along each axis, e.g., [1,2,3] represents the series `1*L_0 + 2*L_1 + 3*L_2` while [[1,2],[1,2]] represents `1*L_0(x)*L_0(y) + 1*L_1(x)*L_0(y) + 2*L_0(x)*L_1(y) + 2*L_1(x)*L_1(y)` if axis=0 is `x` and axis=1 is `y`. Parameters **c**array_like Array of Legendre series coefficients. If c is multidimensional the different axis correspond to different variables with the degree in each axis given by the corresponding index. **m**int, optional Number of derivatives taken, must be non-negative. (Default: 1) **scl**scalar, optional Each differentiation is multiplied by `scl`. The end result is multiplication by `scl**m`. This is for use in a linear change of variable. (Default: 1) **axis**int, optional Axis over which the derivative is taken. (Default: 0). New in version 1.7.0. Returns **der**ndarray Legendre series of the derivative. See also [`legint`](numpy.polynomial.legendre.legint#numpy.polynomial.legendre.legint "numpy.polynomial.legendre.legint") #### Notes In general, the result of differentiating a Legendre series does not resemble the same operation on a power series. Thus the result of this function may be “unintuitive,” albeit correct; see Examples section below. 
#### Examples ``` >>> from numpy.polynomial import legendre as L >>> c = (1,2,3,4) >>> L.legder(c) array([ 6., 9., 20.]) >>> L.legder(c, 3) array([60.]) >>> L.legder(c, scl=-1) array([ -6., -9., -20.]) >>> L.legder(c, 2,-1) array([ 9., 60.]) ``` © 2005–2022 NumPy Developers Licensed under the 3-clause BSD License. <https://numpy.org/doc/1.23/reference/generated/numpy.polynomial.legendre.legder.htmlnumpy.polynomial.legendre.legint ================================ polynomial.legendre.legint(*c*, *m=1*, *k=[]*, *lbnd=0*, *scl=1*, *axis=0*)[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/polynomial/legendre.py#L704-L829) Integrate a Legendre series. Returns the Legendre series coefficients `c` integrated `m` times from `lbnd` along `axis`. At each iteration the resulting series is **multiplied** by `scl` and an integration constant, `k`, is added. The scaling factor is for use in a linear change of variable. (“Buyer beware”: note that, depending on what one is doing, one may want `scl` to be the reciprocal of what one might expect; for more information, see the Notes section below.) The argument `c` is an array of coefficients from low to high degree along each axis, e.g., [1,2,3] represents the series `L_0 + 2*L_1 + 3*L_2` while [[1,2],[1,2]] represents `1*L_0(x)*L_0(y) + 1*L_1(x)*L_0(y) + 2*L_0(x)*L_1(y) + 2*L_1(x)*L_1(y)` if axis=0 is `x` and axis=1 is `y`. Parameters **c**array_like Array of Legendre series coefficients. If c is multidimensional the different axis correspond to different variables with the degree in each axis given by the corresponding index. **m**int, optional Order of integration, must be positive. (Default: 1) **k**{[], list, scalar}, optional Integration constant(s). The value of the first integral at `lbnd` is the first value in the list, the value of the second integral at `lbnd` is the second value, etc. If `k == []` (the default), all constants are set to zero. If `m == 1`, a single scalar can be given instead of a list. 
**lbnd**scalar, optional The lower bound of the integral. (Default: 0) **scl**scalar, optional Following each integration the result is *multiplied* by `scl` before the integration constant is added. (Default: 1) **axis**int, optional Axis over which the integral is taken. (Default: 0). New in version 1.7.0. Returns **S**ndarray Legendre series coefficient array of the integral. Raises ValueError If `m < 0`, `len(k) > m`, `np.ndim(lbnd) != 0`, or `np.ndim(scl) != 0`. See also [`legder`](numpy.polynomial.legendre.legder#numpy.polynomial.legendre.legder "numpy.polynomial.legendre.legder") #### Notes Note that the result of each integration is *multiplied* by `scl`. Why is this important to note? Say one is making a linear change of variable \(u = ax + b\) in an integral relative to `x`. Then \(dx = du/a\), so one will need to set `scl` equal to \(1/a\) - perhaps not what one would have first thought. Also note that, in general, the result of integrating a C-series needs to be “reprojected” onto the C-series basis set. Thus, typically, the result of this function is “unintuitive,” albeit correct; see Examples section below. #### Examples ``` >>> from numpy.polynomial import legendre as L >>> c = (1,2,3) >>> L.legint(c) array([ 0.33333333, 0.4 , 0.66666667, 0.6 ]) # may vary >>> L.legint(c, 3) array([ 1.66666667e-02, -1.78571429e-02, 4.76190476e-02, # may vary -1.73472348e-18, 1.90476190e-02, 9.52380952e-03]) >>> L.legint(c, k=3) array([ 3.33333333, 0.4 , 0.66666667, 0.6 ]) # may vary >>> L.legint(c, lbnd=-2) array([ 7.33333333, 0.4 , 0.66666667, 0.6 ]) # may vary >>> L.legint(c, scl=2) array([ 0.66666667, 0.8 , 1.33333333, 1.2 ]) # may vary ``` © 2005–2022 NumPy Developers Licensed under the 3-clause BSD License. 
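The interplay between `legint` and `legder`, and the reciprocal `scl` convention discussed in the Notes, can be sketched as follows (editor-added; the constant and change-of-variable values are illustrative):

```python
import numpy as np
from numpy.polynomial import legendre as L

c = np.array([1.0, 2.0, 3.0])

# Differentiation undoes integration: the constant k only shifts the
# value at lbnd, and the derivative discards it.
ci = L.legint(c, m=1, k=7)
cd = L.legder(ci, m=1)
print(np.allclose(cd, c))  # True

# Linear change of variable u = 2x: dx = du/2, so integrate with
# scl=0.5; differentiating with the reciprocal scale recovers c.
ci_scaled = L.legint(c, scl=0.5)
print(np.allclose(L.legder(ci_scaled, scl=2.0), c))  # True
```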
<https://numpy.org/doc/1.23/reference/generated/numpy.polynomial.legendre.legint.htmlnumpy.polynomial.legendre.legfromroots ====================================== polynomial.legendre.legfromroots(*roots*)[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/polynomial/legendre.py#L267-L319) Generate a Legendre series with given roots. The function returns the coefficients of the polynomial \[p(x) = (x - r_0) * (x - r_1) * ... * (x - r_n),\] in Legendre form, where the `r_n` are the roots specified in [`roots`](numpy.roots#numpy.roots "numpy.roots"). If a zero has multiplicity n, then it must appear in [`roots`](numpy.roots#numpy.roots "numpy.roots") n times. For instance, if 2 is a root of multiplicity three and 3 is a root of multiplicity 2, then [`roots`](numpy.roots#numpy.roots "numpy.roots") looks something like [2, 2, 2, 3, 3]. The roots can appear in any order. If the returned coefficients are `c`, then \[p(x) = c_0 + c_1 * L_1(x) + ... + c_n * L_n(x)\] The coefficient of the last term is not generally 1 for monic polynomials in Legendre form. Parameters **roots**array_like Sequence containing the roots. Returns **out**ndarray 1-D array of coefficients. If all roots are real then `out` is a real array, if some of the roots are complex, then `out` is complex even if all the coefficients in the result are real (see Examples below). 
See also [`numpy.polynomial.polynomial.polyfromroots`](numpy.polynomial.polynomial.polyfromroots#numpy.polynomial.polynomial.polyfromroots "numpy.polynomial.polynomial.polyfromroots") [`numpy.polynomial.chebyshev.chebfromroots`](numpy.polynomial.chebyshev.chebfromroots#numpy.polynomial.chebyshev.chebfromroots "numpy.polynomial.chebyshev.chebfromroots") [`numpy.polynomial.laguerre.lagfromroots`](numpy.polynomial.laguerre.lagfromroots#numpy.polynomial.laguerre.lagfromroots "numpy.polynomial.laguerre.lagfromroots") [`numpy.polynomial.hermite.hermfromroots`](numpy.polynomial.hermite.hermfromroots#numpy.polynomial.hermite.hermfromroots "numpy.polynomial.hermite.hermfromroots") [`numpy.polynomial.hermite_e.hermefromroots`](numpy.polynomial.hermite_e.hermefromroots#numpy.polynomial.hermite_e.hermefromroots "numpy.polynomial.hermite_e.hermefromroots") #### Examples ``` >>> import numpy.polynomial.legendre as L >>> L.legfromroots((-1,0,1)) # x^3 - x relative to the standard basis array([ 0. , -0.4, 0. , 0.4]) >>> j = complex(0,1) >>> L.legfromroots((-j,j)) # x^2 + 1 relative to the standard basis array([ 1.33333333+0.j, 0.00000000+0.j, 0.66666667+0.j]) # may vary ``` © 2005–2022 NumPy Developers Licensed under the 3-clause BSD License. <https://numpy.org/doc/1.23/reference/generated/numpy.polynomial.legendre.legfromroots.htmlnumpy.polynomial.legendre.legroots ================================== polynomial.legendre.legroots(*c*)[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/polynomial/legendre.py#L1459-L1517) Compute the roots of a Legendre series. Return the roots (a.k.a. “zeros”) of the polynomial \[p(x) = \sum_i c[i] * L_i(x).\] Parameters **c**1-D array_like 1-D array of coefficients. Returns **out**ndarray Array of the roots of the series. If all the roots are real, then `out` is also real, otherwise it is complex. 
See also [`numpy.polynomial.polynomial.polyroots`](numpy.polynomial.polynomial.polyroots#numpy.polynomial.polynomial.polyroots "numpy.polynomial.polynomial.polyroots") [`numpy.polynomial.chebyshev.chebroots`](numpy.polynomial.chebyshev.chebroots#numpy.polynomial.chebyshev.chebroots "numpy.polynomial.chebyshev.chebroots") [`numpy.polynomial.laguerre.lagroots`](numpy.polynomial.laguerre.lagroots#numpy.polynomial.laguerre.lagroots "numpy.polynomial.laguerre.lagroots") [`numpy.polynomial.hermite.hermroots`](numpy.polynomial.hermite.hermroots#numpy.polynomial.hermite.hermroots "numpy.polynomial.hermite.hermroots") [`numpy.polynomial.hermite_e.hermeroots`](numpy.polynomial.hermite_e.hermeroots#numpy.polynomial.hermite_e.hermeroots "numpy.polynomial.hermite_e.hermeroots") #### Notes The root estimates are obtained as the eigenvalues of the companion matrix, Roots far from the origin of the complex plane may have large errors due to the numerical instability of the series for such values. Roots with multiplicity greater than 1 will also show larger errors as the value of the series near such points is relatively insensitive to errors in the roots. Isolated roots near the origin can be improved by a few iterations of Newton’s method. The Legendre series basis polynomials aren’t powers of `x` so the results of this function may seem unintuitive. #### Examples ``` >>> import numpy.polynomial.legendre as leg >>> leg.legroots((1, 2, 3, 4)) # 4L_3 + 3L_2 + 2L_1 + 1L_0, all real roots array([-0.85099543, -0.11407192, 0.51506735]) # may vary ``` © 2005–2022 NumPy Developers Licensed under the 3-clause BSD License. <https://numpy.org/doc/1.23/reference/generated/numpy.polynomial.legendre.legroots.htmlnumpy.polynomial.legendre.legvander =================================== polynomial.legendre.legvander(*x*, *deg*)[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/polynomial/legendre.py#L1126-L1176) Pseudo-Vandermonde matrix of given degree. 
Returns the pseudo-Vandermonde matrix of degree `deg` and sample points `x`. The pseudo-Vandermonde matrix is defined by \[V[..., i] = L_i(x)\] where `0 <= i <= deg`. The leading indices of `V` index the elements of `x` and the last index is the degree of the Legendre polynomial. If `c` is a 1-D array of coefficients of length `n + 1` and `V` is the array `V = legvander(x, n)`, then `np.dot(V, c)` and `legval(x, c)` are the same up to roundoff. This equivalence is useful both for least squares fitting and for the evaluation of a large number of Legendre series of the same degree and sample points. Parameters **x**array_like Array of points. The dtype is converted to float64 or complex128 depending on whether any of the elements are complex. If `x` is scalar it is converted to a 1-D array. **deg**int Degree of the resulting matrix. Returns **vander**ndarray The pseudo-Vandermonde matrix. The shape of the returned matrix is `x.shape + (deg + 1,)`, where the last index is the degree of the corresponding Legendre polynomial. The dtype will be the same as the converted `x`. <https://numpy.org/doc/1.23/reference/generated/numpy.polynomial.legendre.legvander.html>

numpy.polynomial.legendre.legvander2d
=====================================

polynomial.legendre.legvander2d(*x*, *y*, *deg*)[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/polynomial/legendre.py#L1179-L1229) Pseudo-Vandermonde matrix of given degrees. Returns the pseudo-Vandermonde matrix of degrees `deg` and sample points `(x, y)`. The pseudo-Vandermonde matrix is defined by \[V[..., (deg[1] + 1)*i + j] = L_i(x) * L_j(y),\] where `0 <= i <= deg[0]` and `0 <= j <= deg[1]`. The leading indices of `V` index the points `(x, y)` and the last index encodes the degrees of the Legendre polynomials. 
If `V = legvander2d(x, y, [xdeg, ydeg])`, then the columns of `V` correspond to the elements of a 2-D coefficient array `c` of shape (xdeg + 1, ydeg + 1) in the order \[c_{00}, c_{01}, c_{02} ... , c_{10}, c_{11}, c_{12} ...\] and `np.dot(V, c.flat)` and `legval2d(x, y, c)` will be the same up to roundoff. This equivalence is useful both for least squares fitting and for the evaluation of a large number of 2-D Legendre series of the same degrees and sample points. Parameters **x, y**array_like Arrays of point coordinates, all of the same shape. The dtypes will be converted to either float64 or complex128 depending on whether any of the elements are complex. Scalars are converted to 1-D arrays. **deg**list of ints List of maximum degrees of the form [x_deg, y_deg]. Returns **vander2d**ndarray The shape of the returned matrix is `x.shape + (order,)`, where \(order = (deg[0]+1)*(deg[1]+1)\). The dtype will be the same as the converted `x` and `y`. See also [`legvander`](numpy.polynomial.legendre.legvander#numpy.polynomial.legendre.legvander "numpy.polynomial.legendre.legvander"), [`legvander3d`](numpy.polynomial.legendre.legvander3d#numpy.polynomial.legendre.legvander3d "numpy.polynomial.legendre.legvander3d"), [`legval2d`](numpy.polynomial.legendre.legval2d#numpy.polynomial.legendre.legval2d "numpy.polynomial.legendre.legval2d"), [`legval3d`](numpy.polynomial.legendre.legval3d#numpy.polynomial.legendre.legval3d "numpy.polynomial.legendre.legval3d") #### Notes New in version 1.7.0. © 2005–2022 NumPy Developers Licensed under the 3-clause BSD License. <https://numpy.org/doc/1.23/reference/generated/numpy.polynomial.legendre.legvander2d.htmlnumpy.polynomial.legendre.legvander3d ===================================== polynomial.legendre.legvander3d(*x*, *y*, *z*, *deg*)[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/polynomial/legendre.py#L1232-L1283) Pseudo-Vandermonde matrix of given degrees. 
Returns the pseudo-Vandermonde matrix of degrees `deg` and sample points `(x, y, z)`. If `l, m, n` are the given degrees in `x, y, z`, then the pseudo-Vandermonde matrix is defined by \[V[..., (m+1)(n+1)i + (n+1)j + k] = L_i(x)*L_j(y)*L_k(z),\] where `0 <= i <= l`, `0 <= j <= m`, and `0 <= k <= n`. The leading indices of `V` index the points `(x, y, z)` and the last index encodes the degrees of the Legendre polynomials. If `V = legvander3d(x, y, z, [xdeg, ydeg, zdeg])`, then the columns of `V` correspond to the elements of a 3-D coefficient array `c` of shape (xdeg + 1, ydeg + 1, zdeg + 1) in the order \[c_{000}, c_{001}, c_{002},... , c_{010}, c_{011}, c_{012},...\] and `np.dot(V, c.flat)` and `legval3d(x, y, z, c)` will be the same up to roundoff. This equivalence is useful both for least squares fitting and for the evaluation of a large number of 3-D Legendre series of the same degrees and sample points. Parameters **x, y, z**array_like Arrays of point coordinates, all of the same shape. The dtypes will be converted to either float64 or complex128 depending on whether any of the elements are complex. Scalars are converted to 1-D arrays. **deg**list of ints List of maximum degrees of the form [x_deg, y_deg, z_deg]. Returns **vander3d**ndarray The shape of the returned matrix is `x.shape + (order,)`, where \(order = (deg[0]+1)*(deg[1]+1)*(deg[2]+1)\). The dtype will be the same as the converted `x`, `y`, and `z`. See also [`legvander`](numpy.polynomial.legendre.legvander#numpy.polynomial.legendre.legvander "numpy.polynomial.legendre.legvander"), [`legvander2d`](numpy.polynomial.legendre.legvander2d#numpy.polynomial.legendre.legvander2d "numpy.polynomial.legendre.legvander2d"), [`legval2d`](numpy.polynomial.legendre.legval2d#numpy.polynomial.legendre.legval2d "numpy.polynomial.legendre.legval2d"), [`legval3d`](numpy.polynomial.legendre.legval3d#numpy.polynomial.legendre.legval3d "numpy.polynomial.legendre.legval3d") #### Notes New in version 1.7.0. 
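The stated equivalence between `legvander3d` and `legval3d` can be verified directly (editor-added sketch; the degrees and sample points are arbitrary):

```python
import numpy as np
from numpy.polynomial import legendre as L

rng = np.random.default_rng(0)
x, y, z = rng.uniform(-1, 1, (3, 5))  # five sample points per coordinate
c = rng.uniform(-1, 1, (2, 3, 4))     # degrees [1, 2, 3]

V = L.legvander3d(x, y, z, [1, 2, 3])
print(V.shape)  # (5, 24): order = (1+1)*(2+1)*(3+1)
print(np.allclose(V @ c.ravel(), L.legval3d(x, y, z, c)))  # True
```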
© 2005–2022 NumPy Developers Licensed under the 3-clause BSD License. <https://numpy.org/doc/1.23/reference/generated/numpy.polynomial.legendre.legvander3d.htmlnumpy.polynomial.legendre.leggauss ================================== polynomial.legendre.leggauss(*deg*)[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/polynomial/legendre.py#L1520-L1585) Gauss-Legendre quadrature. Computes the sample points and weights for Gauss-Legendre quadrature. These sample points and weights will correctly integrate polynomials of degree \(2*deg - 1\) or less over the interval \([-1, 1]\) with the weight function \(f(x) = 1\). Parameters **deg**int Number of sample points and weights. It must be >= 1. Returns **x**ndarray 1-D ndarray containing the sample points. **y**ndarray 1-D ndarray containing the weights. #### Notes New in version 1.7.0. The results have only been tested up to degree 100, higher degrees may be problematic. The weights are determined by using the fact that \[w_k = c / (L'_n(x_k) * L_{n-1}(x_k))\] where \(c\) is a constant independent of \(k\) and \(x_k\) is the k’th root of \(L_n\), and then scaling the results to get the right value when integrating 1. © 2005–2022 NumPy Developers Licensed under the 3-clause BSD License. <https://numpy.org/doc/1.23/reference/generated/numpy.polynomial.legendre.leggauss.htmlnumpy.polynomial.legendre.legweight =================================== polynomial.legendre.legweight(*x*)[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/polynomial/legendre.py#L1588-L1613) Weight function of the Legendre polynomials. The weight function is \(1\) and the interval of integration is \([-1, 1]\). The Legendre polynomials are orthogonal, but not normalized, with respect to this weight function. Parameters **x**array_like Values at which the weight function will be computed. Returns **w**ndarray The weight function at `x`. #### Notes New in version 1.7.0. 
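A short check of the quadrature rule (editor-added sketch): an n-point Gauss-Legendre rule integrates polynomials of degree up to 2n - 1 exactly over [-1, 1] with weight function 1.

```python
import numpy as np
from numpy.polynomial import legendre as L

x, w = L.leggauss(3)  # 3-point rule: exact through degree 5

print(np.isclose(w.sum(), 2.0))       # True: integral of 1 over [-1, 1]
print(np.isclose(w @ x**4, 2.0 / 5))  # True: integral of x**4 is 2/5
```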
© 2005–2022 NumPy Developers Licensed under the 3-clause BSD License. <https://numpy.org/doc/1.23/reference/generated/numpy.polynomial.legendre.legweight.htmlnumpy.polynomial.legendre.legcompanion ====================================== polynomial.legendre.legcompanion(*c*)[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/polynomial/legendre.py#L1415-L1456) Return the scaled companion matrix of c. The basis polynomials are scaled so that the companion matrix is symmetric when `c` is an Legendre basis polynomial. This provides better eigenvalue estimates than the unscaled case and for basis polynomials the eigenvalues are guaranteed to be real if [`numpy.linalg.eigvalsh`](numpy.linalg.eigvalsh#numpy.linalg.eigvalsh "numpy.linalg.eigvalsh") is used to obtain them. Parameters **c**array_like 1-D array of Legendre series coefficients ordered from low to high degree. Returns **mat**ndarray Scaled companion matrix of dimensions (deg, deg). #### Notes New in version 1.7.0. © 2005–2022 NumPy Developers Licensed under the 3-clause BSD License. <https://numpy.org/doc/1.23/reference/generated/numpy.polynomial.legendre.legcompanion.htmlnumpy.polynomial.legendre.legfit ================================ polynomial.legendre.legfit(*x*, *y*, *deg*, *rcond=None*, *full=False*, *w=None*)[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/polynomial/legendre.py#L1286-L1412) Least squares fit of Legendre series to data. Return the coefficients of a Legendre series of degree `deg` that is the least squares fit to the data values `y` given at points `x`. If `y` is 1-D the returned coefficients will also be 1-D. If `y` is 2-D multiple fits are done, one for each column of `y`, and the resulting coefficients are stored in the corresponding columns of a 2-D return. The fitted polynomial(s) are in the form \[p(x) = c_0 + c_1 * L_1(x) + ... + c_n * L_n(x),\] where `n` is `deg`. Parameters **x**array_like, shape (M,) x-coordinates of the M sample points `(x[i], y[i])`. 
**y**array_like, shape (M,) or (M, K) y-coordinates of the sample points. Several data sets of sample points sharing the same x-coordinates can be fitted at once by passing in a 2D-array that contains one dataset per column. **deg**int or 1-D array_like Degree(s) of the fitting polynomials. If `deg` is a single integer all terms up to and including the `deg`’th term are included in the fit. For NumPy versions >= 1.11.0 a list of integers specifying the degrees of the terms to include may be used instead. **rcond**float, optional Relative condition number of the fit. Singular values smaller than this relative to the largest singular value will be ignored. The default value is len(x)*eps, where eps is the relative precision of the float type, about 2e-16 in most cases. **full**bool, optional Switch determining nature of return value. When it is False (the default) just the coefficients are returned, when True diagnostic information from the singular value decomposition is also returned. **w**array_like, shape (`M`,), optional Weights. If not None, the weight `w[i]` applies to the unsquared residual `y[i] - y_hat[i]` at `x[i]`. Ideally the weights are chosen so that the errors of the products `w[i]*y[i]` all have the same variance. When using inverse-variance weighting, use `w[i] = 1/sigma(y[i])`. The default value is None. New in version 1.5.0. Returns **coef**ndarray, shape (deg + 1,) or (deg + 1, K) Legendre coefficients ordered from low to high. If `y` was 2-D, the coefficients for the data in column k of `y` are in column `k`. If `deg` is specified as a list, coefficients for terms not included in the fit are set equal to zero in the returned `coef`. **[residuals, rank, singular_values, rcond]**list These values are only returned if `full == True` * residuals – sum of squared residuals of the least squares fit * rank – the numerical rank of the scaled Vandermonde matrix * singular_values – singular values of the scaled Vandermonde matrix * rcond – value of `rcond`. 
For more details, see [`numpy.linalg.lstsq`](numpy.linalg.lstsq#numpy.linalg.lstsq "numpy.linalg.lstsq"). Warns RankWarning The rank of the coefficient matrix in the least-squares fit is deficient. The warning is only raised if `full == False`. The warnings can be turned off by ``` >>> import warnings >>> warnings.simplefilter('ignore', np.RankWarning) ``` See also [`numpy.polynomial.polynomial.polyfit`](numpy.polynomial.polynomial.polyfit#numpy.polynomial.polynomial.polyfit "numpy.polynomial.polynomial.polyfit") [`numpy.polynomial.chebyshev.chebfit`](numpy.polynomial.chebyshev.chebfit#numpy.polynomial.chebyshev.chebfit "numpy.polynomial.chebyshev.chebfit") [`numpy.polynomial.laguerre.lagfit`](numpy.polynomial.laguerre.lagfit#numpy.polynomial.laguerre.lagfit "numpy.polynomial.laguerre.lagfit") [`numpy.polynomial.hermite.hermfit`](numpy.polynomial.hermite.hermfit#numpy.polynomial.hermite.hermfit "numpy.polynomial.hermite.hermfit") [`numpy.polynomial.hermite_e.hermefit`](numpy.polynomial.hermite_e.hermefit#numpy.polynomial.hermite_e.hermefit "numpy.polynomial.hermite_e.hermefit") [`legval`](numpy.polynomial.legendre.legval#numpy.polynomial.legendre.legval "numpy.polynomial.legendre.legval") Evaluates a Legendre series. [`legvander`](numpy.polynomial.legendre.legvander#numpy.polynomial.legendre.legvander "numpy.polynomial.legendre.legvander") Vandermonde matrix of Legendre series. [`legweight`](numpy.polynomial.legendre.legweight#numpy.polynomial.legendre.legweight "numpy.polynomial.legendre.legweight") Legendre weight function (= 1). [`numpy.linalg.lstsq`](numpy.linalg.lstsq#numpy.linalg.lstsq "numpy.linalg.lstsq") Computes a least-squares fit from the matrix. [`scipy.interpolate.UnivariateSpline`](https://docs.scipy.org/doc/scipy/reference/generated/scipy.interpolate.UnivariateSpline.html#scipy.interpolate.UnivariateSpline "(in SciPy v1.8.1)") Computes spline fits. 
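A minimal round trip (editor-added sketch; the coefficients are arbitrary): fitting noise-free samples of a known degree-2 Legendre series recovers its coefficients to roundoff.

```python
import numpy as np
from numpy.polynomial import legendre as L

true_coef = np.array([1.0, -2.0, 0.5])
x = np.linspace(-1, 1, 20)
y = L.legval(x, true_coef)  # noise-free samples

coef = L.legfit(x, y, deg=2)
print(np.allclose(coef, true_coef))  # True
```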
#### Notes The solution is the coefficients of the Legendre series `p` that minimizes the sum of the weighted squared errors \[E = \sum_j w_j^2 * |y_j - p(x_j)|^2,\] where \(w_j\) are the weights. This problem is solved by setting up as the (typically) overdetermined matrix equation \[V(x) * c = w * y,\] where `V` is the weighted pseudo Vandermonde matrix of `x`, `c` are the coefficients to be solved for, `w` are the weights, and `y` are the observed values. This equation is then solved using the singular value decomposition of `V`. If some of the singular values of `V` are so small that they are neglected, then a [`RankWarning`](numpy.rankwarning#numpy.RankWarning "numpy.RankWarning") will be issued. This means that the coefficient values may be poorly determined. Using a lower order fit will usually get rid of the warning. The `rcond` parameter can also be set to a value smaller than its default, but the resulting fit may be spurious and have large contributions from roundoff error. Fits using Legendre series are usually better conditioned than fits using power series, but much can depend on the distribution of the sample points and the smoothness of the data. If the quality of the fit is inadequate splines may be a good alternative. #### References 1 Wikipedia, “Curve fitting”, <https://en.wikipedia.org/wiki/Curve_fitting © 2005–2022 NumPy Developers Licensed under the 3-clause BSD License. <https://numpy.org/doc/1.23/reference/generated/numpy.polynomial.legendre.legfit.htmlnumpy.polynomial.legendre.legtrim ================================= polynomial.legendre.legtrim(*c*, *tol=0*)[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/polynomial/polyutils.py#L156-L208) Remove “small” “trailing” coefficients from a polynomial. 
“Small” means “small in absolute value” and is controlled by the parameter `tol`; “trailing” means highest order coefficient(s), e.g., in `[0, 1, 1, 0, 0]` (which represents `0 + x + x**2 + 0*x**3 + 0*x**4`) both the 3-rd and 4-th order coefficients would be “trimmed.” Parameters **c**array_like 1-d array of coefficients, ordered from lowest order to highest. **tol**number, optional Trailing (i.e., highest order) elements with absolute value less than or equal to `tol` (default value is zero) are removed. Returns **trimmed**ndarray 1-d array with trailing zeros removed. If the resulting series would be empty, a series containing a single zero is returned. Raises ValueError If `tol` < 0 See also `trimseq` #### Examples ``` >>> from numpy.polynomial import polyutils as pu >>> pu.trimcoef((0,0,3,0,5,0,0)) array([0., 0., 3., 0., 5.]) >>> pu.trimcoef((0,0,1e-3,0,1e-5,0,0),1e-3) # item == tol is trimmed array([0.]) >>> i = complex(0,1) # works for complex >>> pu.trimcoef((3e-4,1e-3*(1-i),5e-4,2e-5*(1+i)), 1e-3) array([0.0003+0.j , 0.001 -0.001j]) ``` © 2005–2022 NumPy Developers Licensed under the 3-clause BSD License. <https://numpy.org/doc/1.23/reference/generated/numpy.polynomial.legendre.legtrim.htmlnumpy.polynomial.legendre.legline ================================= polynomial.legendre.legline(*off*, *scl*)[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/polynomial/legendre.py#L227-L264) Legendre series whose graph is a straight line. Parameters **off, scl**scalars The specified line is given by `off + scl*x`. Returns **y**ndarray This module’s representation of the Legendre series for `off + scl*x`. 
See also [`numpy.polynomial.polynomial.polyline`](numpy.polynomial.polynomial.polyline#numpy.polynomial.polynomial.polyline "numpy.polynomial.polynomial.polyline") [`numpy.polynomial.chebyshev.chebline`](numpy.polynomial.chebyshev.chebline#numpy.polynomial.chebyshev.chebline "numpy.polynomial.chebyshev.chebline") [`numpy.polynomial.laguerre.lagline`](numpy.polynomial.laguerre.lagline#numpy.polynomial.laguerre.lagline "numpy.polynomial.laguerre.lagline") [`numpy.polynomial.hermite.hermline`](numpy.polynomial.hermite.hermline#numpy.polynomial.hermite.hermline "numpy.polynomial.hermite.hermline") [`numpy.polynomial.hermite_e.hermeline`](numpy.polynomial.hermite_e.hermeline#numpy.polynomial.hermite_e.hermeline "numpy.polynomial.hermite_e.hermeline") #### Examples ``` >>> import numpy.polynomial.legendre as L >>> L.legline(3,2) array([3, 2]) >>> L.legval(-3, L.legline(3,2)) # should be -3 -3.0 ``` © 2005–2022 NumPy Developers Licensed under the 3-clause BSD License. <https://numpy.org/doc/1.23/reference/generated/numpy.polynomial.legendre.legline.htmlnumpy.polynomial.legendre.leg2poly ================================== polynomial.legendre.leg2poly(*c*)[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/polynomial/legendre.py#L148-L207) Convert a Legendre series to a polynomial. Convert an array representing the coefficients of a Legendre series, ordered from lowest degree to highest, to an array of the coefficients of the equivalent polynomial (relative to the “standard” basis) ordered from lowest to highest degree. Parameters **c**array_like 1-D array containing the Legendre series coefficients, ordered from lowest order term to highest. Returns **pol**ndarray 1-D array containing the coefficients of the equivalent polynomial (relative to the “standard” basis) ordered from lowest order term to highest. 
See also [`poly2leg`](numpy.polynomial.legendre.poly2leg#numpy.polynomial.legendre.poly2leg "numpy.polynomial.legendre.poly2leg") #### Notes The easy way to do conversions between polynomial basis sets is to use the convert method of a class instance. #### Examples ``` >>> from numpy import polynomial as P >>> c = P.Legendre(range(4)) >>> c Legendre([0., 1., 2., 3.], domain=[-1, 1], window=[-1, 1]) >>> p = c.convert(kind=P.Polynomial) >>> p Polynomial([-1. , -3.5, 3. , 7.5], domain=[-1., 1.], window=[-1., 1.]) >>> P.legendre.leg2poly(range(4)) array([-1. , -3.5, 3. , 7.5]) ``` © 2005–2022 NumPy Developers Licensed under the 3-clause BSD License. <https://numpy.org/doc/1.23/reference/generated/numpy.polynomial.legendre.leg2poly.htmlnumpy.polynomial.legendre.poly2leg ================================== polynomial.legendre.poly2leg(*pol*)[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/polynomial/legendre.py#L100-L145) Convert a polynomial to a Legendre series. Convert an array representing the coefficients of a polynomial (relative to the “standard” basis) ordered from lowest degree to highest, to an array of the coefficients of the equivalent Legendre series, ordered from lowest to highest degree. Parameters **pol**array_like 1-D array containing the polynomial coefficients Returns **c**ndarray 1-D array containing the coefficients of the equivalent Legendre series. See also [`leg2poly`](numpy.polynomial.legendre.leg2poly#numpy.polynomial.legendre.leg2poly "numpy.polynomial.legendre.leg2poly") #### Notes The easy way to do conversions between polynomial basis sets is to use the convert method of a class instance. #### Examples ``` >>> from numpy import polynomial as P >>> p = P.Polynomial(np.arange(4)) >>> p Polynomial([0., 1., 2., 3.], domain=[-1, 1], window=[-1, 1]) >>> c = P.Legendre(P.legendre.poly2leg(p.coef)) >>> c Legendre([ 1. , 3.25, 1. 
, 0.75], domain=[-1, 1], window=[-1, 1]) # may vary
```

<https://numpy.org/doc/1.23/reference/generated/numpy.polynomial.legendre.poly2leg.html>

numpy.polynomial.polyutils.RankWarning
======================================

*exception*polynomial.polyutils.RankWarning[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/polynomial/polyutils.py#L43-L45) Issued by chebfit when the design matrix is rank deficient. <https://numpy.org/doc/1.23/reference/generated/numpy.polynomial.polyutils.RankWarning.html>

numpy.polynomial.polyutils.as_series
====================================

polynomial.polyutils.as_series(*alist*, *trim=True*)[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/polynomial/polyutils.py#L80-L153) Return argument as a list of 1-d arrays. The returned list contains array(s) of dtype double, complex double, or object. A 1-d argument of shape `(N,)` is parsed into `N` arrays of size one; a 2-d argument of shape `(M,N)` is parsed into `M` arrays of size `N` (i.e., is “parsed by row”); and a higher dimensional array raises a ValueError if it is not first reshaped into either a 1-d or 2-d array. Parameters **alist**array_like A 1- or 2-d array_like **trim**boolean, optional When True, trailing zeros are removed from the inputs. When False, the inputs are passed through intact. Returns **[a1, a2, ...]**list of 1-D arrays A copy of the input data as a list of 1-d arrays. Raises ValueError Raised when [`as_series`](#numpy.polynomial.polyutils.as_series "numpy.polynomial.polyutils.as_series") cannot convert its input to 1-d arrays, or at least one of the resulting arrays is empty. #### Examples ``` >>> from numpy.polynomial import polyutils as pu >>> a = np.arange(4) >>> pu.as_series(a) [array([0.]), array([1.]), array([2.]), array([3.])] >>> b = np.arange(6).reshape((2,3)) >>> pu.as_series(b) [array([0., 1., 2.]), array([3., 4., 5.])] ``` ``` >>> pu.as_series((1, np.arange(3), np.arange(2, dtype=np.float16))) [array([1.]), array([0., 1., 2.]), array([0., 1.])] ``` ``` >>> pu.as_series([2, [1.1, 0.]]) [array([2.]), array([1.1])] ``` ``` >>> pu.as_series([2, [1.1, 0.]], trim=False) [array([2.]), array([1.1, 0. ])] ``` <https://numpy.org/doc/1.23/reference/generated/numpy.polynomial.polyutils.as_series.html>

numpy.polynomial.polyutils.trimseq
==================================

polynomial.polyutils.trimseq(*seq*)[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/polynomial/polyutils.py#L50-L77) Remove small Poly series coefficients. Parameters **seq**sequence Sequence of Poly series coefficients. This routine fails for empty sequences. Returns **series**sequence Subsequence with trailing zeros removed. If the resulting sequence would be empty, return the first element. The returned sequence may or may not be a view. #### Notes Do not lose the type info if the sequence contains unknown objects. 
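A small illustration of `trimseq` (editor-added; note that it strips only exact trailing zeros, unlike `trimcoef`, which takes a tolerance):

```python
from numpy.polynomial import polyutils as pu

print(pu.trimseq((1, 2, 0, 0)))  # (1, 2): trailing zeros removed, type kept
print(pu.trimseq([0, 0]))        # [0]: the first element survives if all are zero
```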
<https://numpy.org/doc/1.23/reference/generated/numpy.polynomial.polyutils.trimseq.htmlnumpy.polynomial.polyutils.trimcoef =================================== polynomial.polyutils.trimcoef(*c*, *tol=0*)[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/polynomial/polyutils.py#L156-L208) Remove “small” “trailing” coefficients from a polynomial. “Small” means “small in absolute value” and is controlled by the parameter `tol`; “trailing” means highest order coefficient(s), e.g., in `[0, 1, 1, 0, 0]` (which represents `0 + x + x**2 + 0*x**3 + 0*x**4`) both the 3-rd and 4-th order coefficients would be “trimmed.” Parameters **c**array_like 1-d array of coefficients, ordered from lowest order to highest. **tol**number, optional Trailing (i.e., highest order) elements with absolute value less than or equal to `tol` (default value is zero) are removed. Returns **trimmed**ndarray 1-d array with trailing zeros removed. If the resulting series would be empty, a series containing a single zero is returned. Raises ValueError If `tol` < 0 See also [`trimseq`](numpy.polynomial.polyutils.trimseq#numpy.polynomial.polyutils.trimseq "numpy.polynomial.polyutils.trimseq") #### Examples ``` >>> from numpy.polynomial import polyutils as pu >>> pu.trimcoef((0,0,3,0,5,0,0)) array([0., 0., 3., 0., 5.]) >>> pu.trimcoef((0,0,1e-3,0,1e-5,0,0),1e-3) # item == tol is trimmed array([0.]) >>> i = complex(0,1) # works for complex >>> pu.trimcoef((3e-4,1e-3*(1-i),5e-4,2e-5*(1+i)), 1e-3) array([0.0003+0.j , 0.001 -0.001j]) ``` © 2005–2022 NumPy Developers Licensed under the 3-clause BSD License. <https://numpy.org/doc/1.23/reference/generated/numpy.polynomial.polyutils.trimcoef.htmlnumpy.polynomial.polyutils.getdomain ==================================== polynomial.polyutils.getdomain(*x*)[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/polynomial/polyutils.py#L210-L254) Return a domain suitable for given abscissae. 
Find a domain suitable for a polynomial or Chebyshev series defined at the values supplied. Parameters **x**array_like 1-d array of abscissae whose domain will be determined. Returns **domain**ndarray 1-d array containing two values. If the inputs are complex, then the two returned points are the lower left and upper right corners of the smallest rectangle (aligned with the axes) in the complex plane containing the points `x`. If the inputs are real, then the two points are the ends of the smallest interval containing the points `x`. See also [`mapparms`](numpy.polynomial.polyutils.mapparms#numpy.polynomial.polyutils.mapparms "numpy.polynomial.polyutils.mapparms"), [`mapdomain`](numpy.polynomial.polyutils.mapdomain#numpy.polynomial.polyutils.mapdomain "numpy.polynomial.polyutils.mapdomain") #### Examples ``` >>> from numpy.polynomial import polyutils as pu >>> points = np.arange(4)**2 - 5; points array([-5, -4, -1, 4]) >>> pu.getdomain(points) array([-5., 4.]) >>> c = np.exp(complex(0,1)*np.pi*np.arange(12)/6) # unit circle >>> pu.getdomain(c) array([-1.-1.j, 1.+1.j]) ``` © 2005–2022 NumPy Developers Licensed under the 3-clause BSD License. <https://numpy.org/doc/1.23/reference/generated/numpy.polynomial.polyutils.getdomain.htmlnumpy.polynomial.polyutils.mapdomain ==================================== polynomial.polyutils.mapdomain(*x*, *old*, *new*)[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/polynomial/polyutils.py#L303-L368) Apply linear map to input points. The linear map `offset + scale*x` that maps the domain `old` to the domain `new` is applied to the points `x`. Parameters **x**array_like Points to be mapped. If `x` is a subtype of ndarray the subtype will be preserved. **old, new**array_like The two domains that determine the map. Each must (successfully) convert to 1-d arrays containing precisely two values. Returns **x_out**ndarray Array of points of the same shape as `x`, after application of the linear map between the two domains. 
See also [`getdomain`](numpy.polynomial.polyutils.getdomain#numpy.polynomial.polyutils.getdomain "numpy.polynomial.polyutils.getdomain"), [`mapparms`](numpy.polynomial.polyutils.mapparms#numpy.polynomial.polyutils.mapparms "numpy.polynomial.polyutils.mapparms") #### Notes Effectively, this implements: \[x\_out = new[0] + m(x - old[0])\] where \[m = \frac{new[1]-new[0]}{old[1]-old[0]}\] #### Examples ``` >>> from numpy.polynomial import polyutils as pu >>> old_domain = (-1,1) >>> new_domain = (0,2*np.pi) >>> x = np.linspace(-1,1,6); x array([-1. , -0.6, -0.2, 0.2, 0.6, 1. ]) >>> x_out = pu.mapdomain(x, old_domain, new_domain); x_out array([ 0. , 1.25663706, 2.51327412, 3.76991118, 5.02654825, # may vary 6.28318531]) >>> x - pu.mapdomain(x_out, new_domain, old_domain) array([0., 0., 0., 0., 0., 0.]) ``` Also works for complex numbers (and thus can be used to map any line in the complex plane to any other line therein). ``` >>> i = complex(0,1) >>> old = (-1 - i, 1 + i) >>> new = (-1 + i, 1 - i) >>> z = np.linspace(old[0], old[1], 6); z array([-1. -1.j , -0.6-0.6j, -0.2-0.2j, 0.2+0.2j, 0.6+0.6j, 1. +1.j ]) >>> new_z = pu.mapdomain(z, old, new); new_z array([-1.0+1.j , -0.6+0.6j, -0.2+0.2j, 0.2-0.2j, 0.6-0.6j, 1.0-1.j ]) # may vary ``` © 2005–2022 NumPy Developers Licensed under the 3-clause BSD License. <https://numpy.org/doc/1.23/reference/generated/numpy.polynomial.polyutils.mapdomain.htmlnumpy.polynomial.polyutils.mapparms =================================== polynomial.polyutils.mapparms(*old*, *new*)[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/polynomial/polyutils.py#L256-L301) Linear map parameters between domains. Return the parameters of the linear map `offset + scale*x` that maps `old` to `new` such that `old[i] -> new[i]`, `i = 0, 1`. Parameters **old, new**array_like Domains. Each domain must (successfully) convert to a 1-d array containing precisely two values. 
Returns **offset, scale**scalars The map `L(x) = offset + scale*x` maps the first domain to the second. See also [`getdomain`](numpy.polynomial.polyutils.getdomain#numpy.polynomial.polyutils.getdomain "numpy.polynomial.polyutils.getdomain"), [`mapdomain`](numpy.polynomial.polyutils.mapdomain#numpy.polynomial.polyutils.mapdomain "numpy.polynomial.polyutils.mapdomain") #### Notes Also works for complex numbers, and thus can be used to calculate the parameters required to map any line in the complex plane to any other line therein. #### Examples ``` >>> from numpy.polynomial import polyutils as pu >>> pu.mapparms((-1,1),(-1,1)) (0.0, 1.0) >>> pu.mapparms((1,-1),(-1,1)) (-0.0, -1.0) >>> i = complex(0,1) >>> pu.mapparms((-i,-1),(1,i)) ((1+1j), (1-0j)) ``` © 2005–2022 NumPy Developers Licensed under the 3-clause BSD License. <https://numpy.org/doc/1.23/reference/generated/numpy.polynomial.polyutils.mapparms.htmlnumpy.roots =========== numpy.roots(*p*)[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/lib/polynomial.py#L171-L260) Return the roots of a polynomial with coefficients given in p. Note This forms part of the old polynomial API. Since version 1.4, the new polynomial API defined in [`numpy.polynomial`](../routines.polynomials.package#module-numpy.polynomial "numpy.polynomial") is preferred. A summary of the differences can be found in the [transition guide](../routines.polynomials). The values in the rank-1 array `p` are coefficients of a polynomial. If the length of `p` is n+1 then the polynomial is described by: ``` p[0] * x**n + p[1] * x**(n-1) + ... + p[n-1]*x + p[n] ``` Parameters **p**array_like Rank-1 array of polynomial coefficients. Returns **out**ndarray An array containing the roots of the polynomial. Raises ValueError When `p` cannot be converted to a rank-1 array. See also [`poly`](numpy.poly#numpy.poly "numpy.poly") Find the coefficients of a polynomial with a given sequence of roots. 
[`polyval`](numpy.polyval#numpy.polyval "numpy.polyval") Compute polynomial values. [`polyfit`](numpy.polyfit#numpy.polyfit "numpy.polyfit") Least squares polynomial fit. [`poly1d`](numpy.poly1d#numpy.poly1d "numpy.poly1d") A one-dimensional polynomial class. #### Notes The algorithm relies on computing the eigenvalues of the companion matrix [[1]](#r01a8f58ef25b-1). #### References [1](#id1) <NAME> & <NAME>, *Matrix Analysis*. Cambridge, UK: Cambridge University Press, 1999, pp. 146-7. #### Examples ``` >>> coeff = [3.2, 2, 1] >>> np.roots(coeff) array([-0.3125+0.46351241j, -0.3125-0.46351241j]) ``` © 2005–2022 NumPy Developers Licensed under the 3-clause BSD License. <https://numpy.org/doc/1.23/reference/generated/numpy.roots.htmlnumpy.polyder ============= numpy.polyder(*p*, *m=1*)[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/lib/polynomial.py#L372-L445) Return the derivative of the specified order of a polynomial. Note This forms part of the old polynomial API. Since version 1.4, the new polynomial API defined in [`numpy.polynomial`](../routines.polynomials.package#module-numpy.polynomial "numpy.polynomial") is preferred. A summary of the differences can be found in the [transition guide](../routines.polynomials). Parameters **p**poly1d or sequence Polynomial to differentiate. A sequence is interpreted as polynomial coefficients, see [`poly1d`](numpy.poly1d#numpy.poly1d "numpy.poly1d"). **m**int, optional Order of differentiation (default: 1) Returns **der**poly1d A new polynomial representing the derivative. See also [`polyint`](numpy.polyint#numpy.polyint "numpy.polyint") Anti-derivative of a polynomial. [`poly1d`](numpy.poly1d#numpy.poly1d "numpy.poly1d") Class for one-dimensional polynomials. #### Examples The derivative of the polynomial \(x^3 + x^2 + x^1 + 1\) is: ``` >>> p = np.poly1d([1,1,1,1]) >>> p2 = np.polyder(p) >>> p2 poly1d([3, 2, 1]) ``` which evaluates to: ``` >>> p2(2.) 
17.0 ``` We can verify this, approximating the derivative with `(f(x + h) - f(x))/h`: ``` >>> (p(2. + 0.001) - p(2.)) / 0.001 17.007000999997857 ``` The fourth-order derivative of a 3rd-order polynomial is zero: ``` >>> np.polyder(p, 2) poly1d([6, 2]) >>> np.polyder(p, 3) poly1d([6]) >>> np.polyder(p, 4) poly1d([0]) ``` © 2005–2022 NumPy Developers Licensed under the 3-clause BSD License. <https://numpy.org/doc/1.23/reference/generated/numpy.polyder.htmlnumpy.polyint ============= numpy.polyint(*p*, *m=1*, *k=None*)[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/lib/polynomial.py#L267-L365) Return an antiderivative (indefinite integral) of a polynomial. Note This forms part of the old polynomial API. Since version 1.4, the new polynomial API defined in [`numpy.polynomial`](../routines.polynomials.package#module-numpy.polynomial "numpy.polynomial") is preferred. A summary of the differences can be found in the [transition guide](../routines.polynomials). The returned order `m` antiderivative `P` of polynomial `p` satisfies \(\frac{d^m}{dx^m}P(x) = p(x)\) and is defined up to `m - 1` integration constants `k`. The constants determine the low-order polynomial part \[\frac{k_{m-1}}{0!} x^0 + \ldots + \frac{k_0}{(m-1)!}x^{m-1}\] of `P` so that \(P^{(j)}(0) = k_{m-j-1}\). Parameters **p**array_like or poly1d Polynomial to integrate. A sequence is interpreted as polynomial coefficients, see [`poly1d`](numpy.poly1d#numpy.poly1d "numpy.poly1d"). **m**int, optional Order of the antiderivative. (Default: 1) **k**list of `m` scalars or scalar, optional Integration constants. They are given in the order of integration: those corresponding to highest-order terms come first. If `None` (default), all constants are assumed to be zero. If `m = 1`, a single scalar can be given instead of a list. 
See also [`polyder`](numpy.polyder#numpy.polyder "numpy.polyder") derivative of a polynomial [`poly1d.integ`](numpy.poly1d.integ#numpy.poly1d.integ "numpy.poly1d.integ") equivalent method #### Examples The defining property of the antiderivative: ``` >>> p = np.poly1d([1,1,1]) >>> P = np.polyint(p) >>> P poly1d([ 0.33333333, 0.5 , 1. , 0. ]) # may vary >>> np.polyder(P) == p True ``` The integration constants default to zero, but can be specified: ``` >>> P = np.polyint(p, 3) >>> P(0) 0.0 >>> np.polyder(P)(0) 0.0 >>> np.polyder(P, 2)(0) 0.0 >>> P = np.polyint(p, 3, k=[6,5,3]) >>> P poly1d([ 0.01666667, 0.04166667, 0.16666667, 3. , 5. , 3. ]) # may vary ``` Note that 3 = 6 / 2!, and that the constants are given in the order of integrations. Constant of the highest-order polynomial term comes first: ``` >>> np.polyder(P, 2)(0) 6.0 >>> np.polyder(P, 1)(0) 5.0 >>> P(0) 3.0 ``` © 2005–2022 NumPy Developers Licensed under the 3-clause BSD License. <https://numpy.org/doc/1.23/reference/generated/numpy.polyint.htmlnumpy.polydiv ============= numpy.polydiv(*u*, *v*)[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/lib/polynomial.py#L976-L1046) Returns the quotient and remainder of polynomial division. Note This forms part of the old polynomial API. Since version 1.4, the new polynomial API defined in [`numpy.polynomial`](../routines.polynomials.package#module-numpy.polynomial "numpy.polynomial") is preferred. A summary of the differences can be found in the [transition guide](../routines.polynomials). The input arrays are the coefficients (including any coefficients equal to zero) of the “numerator” (dividend) and “denominator” (divisor) polynomials, respectively. Parameters **u**array_like or poly1d Dividend polynomial’s coefficients. **v**array_like or poly1d Divisor polynomial’s coefficients. Returns **q**ndarray Coefficients, including those equal to zero, of the quotient. **r**ndarray Coefficients, including those equal to zero, of the remainder. 
See also [`poly`](numpy.poly#numpy.poly "numpy.poly"), [`polyadd`](numpy.polyadd#numpy.polyadd "numpy.polyadd"), [`polyder`](numpy.polyder#numpy.polyder "numpy.polyder"), [`polydiv`](#numpy.polydiv "numpy.polydiv"), [`polyfit`](numpy.polyfit#numpy.polyfit "numpy.polyfit"), [`polyint`](numpy.polyint#numpy.polyint "numpy.polyint"), [`polymul`](numpy.polymul#numpy.polymul "numpy.polymul"), [`polysub`](numpy.polysub#numpy.polysub "numpy.polysub") [`polyval`](numpy.polyval#numpy.polyval "numpy.polyval") #### Notes Both `u` and `v` must be 0-d or 1-d (ndim = 0 or 1), but `u.ndim` need not equal `v.ndim`. In other words, all four possible combinations - `u.ndim = v.ndim = 0`, `u.ndim = v.ndim = 1`, `u.ndim = 1, v.ndim = 0`, and `u.ndim = 0, v.ndim = 1` - work. #### Examples \[\frac{3x^2 + 5x + 2}{2x + 1} = 1.5x + 1.75, remainder 0.25\] ``` >>> x = np.array([3.0, 5.0, 2.0]) >>> y = np.array([2.0, 1.0]) >>> np.polydiv(x, y) (array([1.5 , 1.75]), array([0.25])) ``` © 2005–2022 NumPy Developers Licensed under the 3-clause BSD License. <https://numpy.org/doc/1.23/reference/generated/numpy.polydiv.htmlnumpy.polysub ============= numpy.polysub(*a1*, *a2*)[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/lib/polynomial.py#L855-L906) Difference (subtraction) of two polynomials. Note This forms part of the old polynomial API. Since version 1.4, the new polynomial API defined in [`numpy.polynomial`](../routines.polynomials.package#module-numpy.polynomial "numpy.polynomial") is preferred. A summary of the differences can be found in the [transition guide](../routines.polynomials). Given two polynomials `a1` and `a2`, returns `a1 - a2`. `a1` and `a2` can be either array_like sequences of the polynomials’ coefficients (including coefficients equal to zero), or [`poly1d`](numpy.poly1d#numpy.poly1d "numpy.poly1d") objects. Parameters **a1, a2**array_like or poly1d Minuend and subtrahend polynomials, respectively. 
Returns **out**ndarray or poly1d Array or [`poly1d`](numpy.poly1d#numpy.poly1d "numpy.poly1d") object of the difference polynomial’s coefficients. See also [`polyval`](numpy.polyval#numpy.polyval "numpy.polyval"), [`polydiv`](numpy.polydiv#numpy.polydiv "numpy.polydiv"), [`polymul`](numpy.polymul#numpy.polymul "numpy.polymul"), [`polyadd`](numpy.polyadd#numpy.polyadd "numpy.polyadd") #### Examples \[(2 x^2 + 10 x - 2) - (3 x^2 + 10 x -4) = (-x^2 + 2)\] ``` >>> np.polysub([2, 10, -2], [3, 10, -4]) array([-1, 0, 2]) ``` © 2005–2022 NumPy Developers Licensed under the 3-clause BSD License. <https://numpy.org/doc/1.23/reference/generated/numpy.polysub.htmlnumpy.polynomial.chebyshev.Chebyshev.fit ======================================== method *classmethod*polynomial.chebyshev.Chebyshev.fit(*x*, *y*, *deg*, *domain=None*, *rcond=None*, *full=False*, *w=None*, *window=None*)[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/polynomial/_polybase.py#L900-L986) Least squares fit to data. Return a series instance that is the least squares fit to the data `y` sampled at `x`. The domain of the returned instance can be specified and this will often result in a superior fit with less chance of ill conditioning. Parameters **x**array_like, shape (M,) x-coordinates of the M sample points `(x[i], y[i])`. **y**array_like, shape (M,) y-coordinates of the M sample points `(x[i], y[i])`. **deg**int or 1-D array_like Degree(s) of the fitting polynomials. If `deg` is a single integer all terms up to and including the `deg`’th term are included in the fit. For NumPy versions >= 1.11.0 a list of integers specifying the degrees of the terms to include may be used instead. **domain**{None, [beg, end], []}, optional Domain to use for the returned series. If `None`, then a minimal domain that covers the points `x` is chosen. If `[]` the class domain is used. The default value was the class domain in NumPy 1.4 and `None` in later versions. The `[]` option was added in numpy 1.5.0. 
**rcond**float, optional Relative condition number of the fit. Singular values smaller than this relative to the largest singular value will be ignored. The default value is len(x)*eps, where eps is the relative precision of the float type, about 2e-16 in most cases. **full**bool, optional Switch determining nature of return value. When it is False (the default) just the coefficients are returned, when True diagnostic information from the singular value decomposition is also returned. **w**array_like, shape (M,), optional Weights. If not None, the weight `w[i]` applies to the unsquared residual `y[i] - y_hat[i]` at `x[i]`. Ideally the weights are chosen so that the errors of the products `w[i]*y[i]` all have the same variance. When using inverse-variance weighting, use `w[i] = 1/sigma(y[i])`. The default value is None. New in version 1.5.0. **window**{[beg, end]}, optional Window to use for the returned series. The default value is the default class domain New in version 1.6.0. Returns **new_series**series A series that represents the least squares fit to the data and has the domain and window specified in the call. If the coefficients for the unscaled and unshifted basis polynomials are of interest, do `new_series.convert().coef`. **[resid, rank, sv, rcond]**list These values are only returned if `full == True` * resid – sum of squared residuals of the least squares fit * rank – the numerical rank of the scaled Vandermonde matrix * sv – singular values of the scaled Vandermonde matrix * rcond – value of `rcond`. For more details, see [`linalg.lstsq`](numpy.linalg.lstsq#numpy.linalg.lstsq "numpy.linalg.lstsq"). © 2005–2022 NumPy Developers Licensed under the 3-clause BSD License. <https://numpy.org/doc/1.23/reference/generated/numpy.polynomial.chebyshev.Chebyshev.fit.htmlnumpy.poly1d.variable ===================== property *property*poly1d.variable The name of the polynomial variable © 2005–2022 NumPy Developers Licensed under the 3-clause BSD License. 
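A short sketch (not part of the original page) of the `variable` property just described: the name defaults to `'x'` and can be set via the `poly1d` constructor, affecting only how the polynomial is printed.

```python
import numpy as np

p = np.poly1d([1, 2, 3])                 # printed using the default variable 'x'
q = np.poly1d([1, 2, 3], variable="z")   # printed using the variable 'z'
```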
<https://numpy.org/doc/1.23/reference/generated/numpy.poly1d.variable.htmlnumpy.poly1d.c ============== property *property*poly1d.c The polynomial coefficients © 2005–2022 NumPy Developers Licensed under the 3-clause BSD License. <https://numpy.org/doc/1.23/reference/generated/numpy.poly1d.c.htmlnumpy.poly1d.coef ================= property *property*poly1d.coef The polynomial coefficients © 2005–2022 NumPy Developers Licensed under the 3-clause BSD License. <https://numpy.org/doc/1.23/reference/generated/numpy.poly1d.coef.htmlnumpy.poly1d.coefficients ========================= property *property*poly1d.coefficients The polynomial coefficients © 2005–2022 NumPy Developers Licensed under the 3-clause BSD License. <https://numpy.org/doc/1.23/reference/generated/numpy.poly1d.coefficients.htmlnumpy.poly1d.coeffs =================== property *property*poly1d.coeffs The polynomial coefficients © 2005–2022 NumPy Developers Licensed under the 3-clause BSD License. <https://numpy.org/doc/1.23/reference/generated/numpy.poly1d.coeffs.htmlnumpy.poly1d.o ============== property *property*poly1d.o The order or degree of the polynomial © 2005–2022 NumPy Developers Licensed under the 3-clause BSD License. <https://numpy.org/doc/1.23/reference/generated/numpy.poly1d.o.htmlnumpy.poly1d.order ================== property *property*poly1d.order The order or degree of the polynomial © 2005–2022 NumPy Developers Licensed under the 3-clause BSD License. <https://numpy.org/doc/1.23/reference/generated/numpy.poly1d.order.htmlnumpy.poly1d.r ============== property *property*poly1d.r The roots of the polynomial, where self(x) == 0 © 2005–2022 NumPy Developers Licensed under the 3-clause BSD License. <https://numpy.org/doc/1.23/reference/generated/numpy.poly1d.r.htmlnumpy.poly1d.__call__ ========================= method poly1d.__call__(*val*)[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/lib/polynomial.py#L1324-L1325) Call self as a function. 
© 2005–2022 NumPy Developers Licensed under the 3-clause BSD License. <https://numpy.org/doc/1.23/reference/generated/numpy.poly1d.__call__.htmlnumpy.poly1d.deriv ================== method poly1d.deriv(*m=1*)[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/lib/polynomial.py#L1437-L1448) Return a derivative of this polynomial. Refer to [`polyder`](numpy.polyder#numpy.polyder "numpy.polyder") for full documentation. See also [`polyder`](numpy.polyder#numpy.polyder "numpy.polyder") equivalent function © 2005–2022 NumPy Developers Licensed under the 3-clause BSD License. <https://numpy.org/doc/1.23/reference/generated/numpy.poly1d.deriv.htmlnumpy.poly1d.integ ================== method poly1d.integ(*m=1*, *k=0*)[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/lib/polynomial.py#L1424-L1435) Return an antiderivative (indefinite integral) of this polynomial. Refer to [`polyint`](numpy.polyint#numpy.polyint "numpy.polyint") for full documentation. See also [`polyint`](numpy.polyint#numpy.polyint "numpy.polyint") equivalent function © 2005–2022 NumPy Developers Licensed under the 3-clause BSD License. <https://numpy.org/doc/1.23/reference/generated/numpy.poly1d.integ.htmlnumpy.polynomial.polynomial.Polynomial.domain ============================================= attribute polynomial.polynomial.Polynomial.domain*=array([-1, 1])* © 2005–2022 NumPy Developers Licensed under the 3-clause BSD License. <https://numpy.org/doc/1.23/reference/generated/numpy.polynomial.polynomial.Polynomial.domain.htmlnumpy.polynomial.polynomial.Polynomial.__call__ =================================================== method polynomial.polynomial.Polynomial.__call__(*arg*)[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/polynomial/_polybase.py#L480-L483) Call self as a function. © 2005–2022 NumPy Developers Licensed under the 3-clause BSD License. 
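As a sketch of `Polynomial.__call__` in action (keeping in mind that `numpy.polynomial.Polynomial` coefficients run from lowest to highest degree, the opposite of `poly1d`):

```python
from numpy.polynomial import Polynomial

# 1 + 2*x + 3*x**2, evaluated at x = 2 via __call__
p = Polynomial([1, 2, 3])
value = p(2)
```

With the default domain and window both `[-1, 1]`, the internal domain-to-window map is the identity, so `p(2)` is simply `1 + 2*2 + 3*4 = 17.0`.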
<https://numpy.org/doc/1.23/reference/generated/numpy.polynomial.polynomial.Polynomial.__call__.htmlnumpy.polynomial.polynomial.Polynomial.basis ============================================ method *classmethod*polynomial.polynomial.Polynomial.basis(*deg*, *domain=None*, *window=None*)[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/polynomial/_polybase.py#L1062-L1099) Series basis polynomial of degree `deg`. Returns the series representing the basis polynomial of degree `deg`. New in version 1.7.0. Parameters **deg**int Degree of the basis polynomial for the series. Must be >= 0. **domain**{None, array_like}, optional If given, the array must be of the form `[beg, end]`, where `beg` and `end` are the endpoints of the domain. If None is given then the class domain is used. The default is None. **window**{None, array_like}, optional If given, the resulting array must be of the form `[beg, end]`, where `beg` and `end` are the endpoints of the window. If None is given then the class window is used. The default is None. Returns **new_series**series A series with the coefficient of the `deg` term set to one and all others zero. © 2005–2022 NumPy Developers Licensed under the 3-clause BSD License. <https://numpy.org/doc/1.23/reference/generated/numpy.polynomial.polynomial.Polynomial.basis.htmlnumpy.polynomial.polynomial.Polynomial.cast =========================================== method *classmethod*polynomial.polynomial.Polynomial.cast(*series*, *domain=None*, *window=None*)[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/polynomial/_polybase.py#L1101-L1141) Convert series to series of this class. The `series` is expected to be an instance of some polynomial series of one of the types supported by the numpy.polynomial module, but could be some other class that supports the convert method. New in version 1.7.0. Parameters **series**series The series instance to be converted. 
**domain**{None, array_like}, optional If given, the array must be of the form `[beg, end]`, where `beg` and `end` are the endpoints of the domain. If None is given then the class domain is used. The default is None. **window**{None, array_like}, optional If given, the resulting array must be of the form `[beg, end]`, where `beg` and `end` are the endpoints of the window. If None is given then the class window is used. The default is None. Returns **new_series**series A series of the same kind as the calling class and equal to `series` when evaluated. See also [`convert`](numpy.polynomial.polynomial.polynomial.convert#numpy.polynomial.polynomial.Polynomial.convert "numpy.polynomial.polynomial.Polynomial.convert") similar instance method © 2005–2022 NumPy Developers Licensed under the 3-clause BSD License. <https://numpy.org/doc/1.23/reference/generated/numpy.polynomial.polynomial.Polynomial.cast.htmlnumpy.polynomial.polynomial.Polynomial.copy =========================================== method polynomial.polynomial.Polynomial.copy()[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/polynomial/_polybase.py#L631-L640) Return a copy. Returns **new_series**series Copy of self. © 2005–2022 NumPy Developers Licensed under the 3-clause BSD License. <https://numpy.org/doc/1.23/reference/generated/numpy.polynomial.polynomial.Polynomial.copy.htmlnumpy.polynomial.polynomial.Polynomial.cutdeg ============================================= method polynomial.polynomial.Polynomial.cutdeg(*deg*)[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/polynomial/_polybase.py#L655-L678) Truncate series to the given degree. Reduce the degree of the series to `deg` by discarding the high order terms. If `deg` is greater than the current degree a copy of the current series is returned. This can be useful in least squares where the coefficients of the high degree terms may be very small. New in version 1.5.0. 
Parameters **deg**non-negative int The series is reduced to degree `deg` by discarding the high order terms. The value of `deg` must be a non-negative integer. Returns **new_series**series New instance of series with reduced degree. © 2005–2022 NumPy Developers Licensed under the 3-clause BSD License. <https://numpy.org/doc/1.23/reference/generated/numpy.polynomial.polynomial.Polynomial.cutdeg.htmlnumpy.polynomial.polynomial.Polynomial.degree ============================================= method polynomial.polynomial.Polynomial.degree()[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/polynomial/_polybase.py#L642-L653) The degree of the series. New in version 1.5.0. Returns **degree**int Degree of the series, one less than the number of coefficients. © 2005–2022 NumPy Developers Licensed under the 3-clause BSD License. <https://numpy.org/doc/1.23/reference/generated/numpy.polynomial.polynomial.Polynomial.degree.htmlnumpy.polynomial.polynomial.Polynomial.deriv ============================================ method polynomial.polynomial.Polynomial.deriv(*m=1*)[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/polynomial/_polybase.py#L831-L851) Differentiate. Return a series instance that is the derivative of the current series. Parameters **m**non-negative int Find the derivative of order `m`. Returns **new_series**series A new series representing the derivative. The domain is the same as the domain of the differentiated series. © 2005–2022 NumPy Developers Licensed under the 3-clause BSD License. <https://numpy.org/doc/1.23/reference/generated/numpy.polynomial.polynomial.Polynomial.deriv.htmlnumpy.polynomial.polynomial.Polynomial.fromroots ================================================ method *classmethod*polynomial.polynomial.Polynomial.fromroots(*roots*, *domain=[]*, *window=None*)[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/polynomial/_polybase.py#L988-L1027) Return series instance that has the specified roots. 
Returns a series representing the product `(x - r[0])*(x - r[1])*...*(x - r[n-1])`, where `r` is a list of roots. Parameters **roots**array_like List of roots. **domain**{[], None, array_like}, optional Domain for the resulting series. If None the domain is the interval from the smallest root to the largest. If [] the domain is the class domain. The default is []. **window**{None, array_like}, optional Window for the returned series. If None the class window is used. The default is None. Returns **new_series**series Series with the specified roots. © 2005–2022 NumPy Developers Licensed under the 3-clause BSD License. <https://numpy.org/doc/1.23/reference/generated/numpy.polynomial.polynomial.Polynomial.fromroots.htmlnumpy.polynomial.polynomial.Polynomial.has_samecoef ==================================================== method polynomial.polynomial.Polynomial.has_samecoef(*other*)[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/polynomial/_polybase.py#L177-L198) Check if coefficients match. New in version 1.6.0. Parameters **other**class instance The other class must have the `coef` attribute. Returns **bool**boolean True if the coefficients are the same, False otherwise. © 2005–2022 NumPy Developers Licensed under the 3-clause BSD License. <https://numpy.org/doc/1.23/reference/generated/numpy.polynomial.polynomial.Polynomial.has_samecoef.htmlnumpy.polynomial.polynomial.Polynomial.has_samedomain ====================================================== method polynomial.polynomial.Polynomial.has_samedomain(*other*)[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/polynomial/_polybase.py#L200-L216) Check if domains match. New in version 1.6.0. Parameters **other**class instance The other class must have the `domain` attribute. Returns **bool**boolean True if the domains are the same, False otherwise. © 2005–2022 NumPy Developers Licensed under the 3-clause BSD License. 
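To illustrate the `has_samecoef` and `has_samedomain` checks above, a small sketch comparing two series that share coefficients but not domains:

```python
from numpy.polynomial import Polynomial

a = Polynomial([1, 2, 3])                  # default domain [-1, 1]
b = Polynomial([1, 2, 3], domain=[0, 2])

same_coef = a.has_samecoef(b)      # coefficient arrays match
same_domain = a.has_samedomain(b)  # domains differ
```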
<https://numpy.org/doc/1.23/reference/generated/numpy.polynomial.polynomial.Polynomial.has_samedomain.htmlnumpy.polynomial.polynomial.Polynomial.has_sametype ==================================================== method polynomial.polynomial.Polynomial.has_sametype(*other*)[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/polynomial/_polybase.py#L236-L252) Check if types match. New in version 1.7.0. Parameters **other**object Class instance. Returns **bool**boolean True if other is same class as self © 2005–2022 NumPy Developers Licensed under the 3-clause BSD License. <https://numpy.org/doc/1.23/reference/generated/numpy.polynomial.polynomial.Polynomial.has_sametype.htmlnumpy.polynomial.polynomial.Polynomial.has_samewindow ====================================================== method polynomial.polynomial.Polynomial.has_samewindow(*other*)[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/polynomial/_polybase.py#L218-L234) Check if windows match. New in version 1.6.0. Parameters **other**class instance The other class must have the `window` attribute. Returns **bool**boolean True if the windows are the same, False otherwise. © 2005–2022 NumPy Developers Licensed under the 3-clause BSD License. <https://numpy.org/doc/1.23/reference/generated/numpy.polynomial.polynomial.Polynomial.has_samewindow.htmlnumpy.polynomial.polynomial.Polynomial.identity =============================================== method *classmethod*polynomial.polynomial.Polynomial.identity(*domain=None*, *window=None*)[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/polynomial/_polybase.py#L1029-L1060) Identity function. If `p` is the returned series, then `p(x) == x` for all values of x. Parameters **domain**{None, array_like}, optional If given, the array must be of the form `[beg, end]`, where `beg` and `end` are the endpoints of the domain. If None is given then the class domain is used. The default is None. 
**window**{None, array_like}, optional If given, the resulting array must be of the form `[beg, end]`, where `beg` and `end` are the endpoints of the window. If None is given then the class window is used. The default is None. Returns **new_series**series Series representing the identity.

<https://numpy.org/doc/1.23/reference/generated/numpy.polynomial.polynomial.Polynomial.identity.html>

numpy.polynomial.polynomial.Polynomial.integ
============================================

method polynomial.polynomial.Polynomial.integ(*m=1*, *k=[]*, *lbnd=None*)[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/polynomial/_polybase.py#L798-L829) Integrate. Return a series instance that is the definite integral of the current series. Parameters **m**non-negative int The number of integrations to perform. **k**array_like Integration constants. The first constant is applied to the first integration, the second to the second, and so on. The list of values must be less than or equal to `m` in length and any missing values are set to zero. **lbnd**Scalar The lower bound of the definite integral. Returns **new_series**series A new series representing the integral. The domain is the same as the domain of the integrated series.

<https://numpy.org/doc/1.23/reference/generated/numpy.polynomial.polynomial.Polynomial.integ.html>

numpy.polynomial.polynomial.Polynomial.linspace
===============================================

method polynomial.polynomial.Polynomial.linspace(*n=100*, *domain=None*)[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/polynomial/_polybase.py#L868-L898) Return x, y values at equally spaced points in domain. Returns the x, y values at `n` linearly spaced points across the domain. Here y is the value of the polynomial at the points x. By default the domain is the same as that of the series instance.
This method is intended mostly as a plotting aid. New in version 1.5.0. Parameters **n**int, optional Number of point pairs to return. The default value is 100. **domain**{None, array_like}, optional If not None, the specified domain is used instead of that of the calling instance. It should be of the form `[beg, end]`. The default is None, in which case the class domain is used. Returns **x, y**ndarray x is equal to linspace(self.domain[0], self.domain[1], n) and y is the series evaluated at each element of x.

<https://numpy.org/doc/1.23/reference/generated/numpy.polynomial.polynomial.Polynomial.linspace.html>

numpy.polynomial.polynomial.Polynomial.mapparms
===============================================

method polynomial.polynomial.Polynomial.mapparms()[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/polynomial/_polybase.py#L769-L796) Return the mapping parameters. The returned values define a linear map `off + scl*x` that is applied to the input arguments before the series is evaluated. The map depends on the `domain` and `window`; if the current `domain` is equal to the `window` the resulting map is the identity. If the coefficients of the series instance are to be used by themselves outside this class, then the linear function must be substituted for the `x` in the standard representation of the base polynomials. Returns **off, scl**float or complex The mapping function is defined by `off + scl*x`.

#### Notes

If the current domain is the interval `[l1, r1]` and the window is `[l2, r2]`, then the linear mapping function `L` is defined by the equations:

```
L(l1) = l2
L(r1) = r2
```
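The mapping described above is easy to see with a small example; the domain and window values below are arbitrary choices for illustration:

```python
import numpy as np
from numpy.polynomial import Polynomial

# Map the (arbitrary) domain [0, 4] onto the default window [-1, 1].
p = Polynomial([0, 1], domain=[0, 4], window=[-1, 1])
off, scl = p.mapparms()
print(off, scl)  # -1.0 0.5, i.e. L(x) = -1 + 0.5*x sends 0 -> -1 and 4 -> 1

# linspace samples the series at n equally spaced points across its domain.
x, y = p.linspace(5)
print(x)  # [0. 1. 2. 3. 4.]
```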
<https://numpy.org/doc/1.23/reference/generated/numpy.polynomial.polynomial.Polynomial.mapparms.html>

numpy.polynomial.polynomial.Polynomial.roots
============================================

method polynomial.polynomial.Polynomial.roots()[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/polynomial/_polybase.py#L853-L866) Return the roots of the series polynomial. Compute the roots for the series. Note that the accuracy of the roots decreases the further outside the domain they lie. Returns **roots**ndarray Array containing the roots of the series.

<https://numpy.org/doc/1.23/reference/generated/numpy.polynomial.polynomial.Polynomial.roots.html>

numpy.polynomial.polynomial.Polynomial.trim
===========================================

method polynomial.polynomial.Polynomial.trim(*tol=0*)[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/polynomial/_polybase.py#L680-L701) Remove trailing coefficients. Remove trailing coefficients until a coefficient is reached whose absolute value is greater than `tol` or the beginning of the series is reached. If all the coefficients would be removed the series is set to `[0]`. A new series instance is returned with the new coefficients. The current instance remains unchanged. Parameters **tol**non-negative number All trailing coefficients less than `tol` will be removed. Returns **new_series**series New instance of series with trimmed coefficients.

<https://numpy.org/doc/1.23/reference/generated/numpy.polynomial.polynomial.Polynomial.trim.html>

numpy.polynomial.polynomial.Polynomial.truncate
===============================================

method polynomial.polynomial.Polynomial.truncate(*size*)[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/polynomial/_polybase.py#L703-L730) Truncate series to length `size`.
Reduce the series to length `size` by discarding the high degree terms. The value of `size` must be a positive integer. This can be useful in least squares where the coefficients of the high degree terms may be very small. Parameters **size**positive int The series is reduced to length `size` by discarding the high degree terms. The value of `size` must be a positive integer. Returns **new_series**series New instance of series with truncated coefficients. © 2005–2022 NumPy Developers Licensed under the 3-clause BSD License. <https://numpy.org/doc/1.23/reference/generated/numpy.polynomial.polynomial.Polynomial.truncate.htmlnumpy.random.Generator.bit_generator ===================================== attribute random.Generator.bit_generator Gets the bit generator instance used by the generator Returns **bit_generator**BitGenerator The bit generator instance used by the generator © 2005–2022 NumPy Developers Licensed under the 3-clause BSD License. <https://numpy.org/doc/1.23/reference/random/generated/numpy.random.Generator.bit_generator.htmlnumpy.random.Generator.bytes ============================ method random.Generator.bytes(*length*) Return random bytes. Parameters **length**int Number of random bytes. Returns **out**bytes String of length `length`. #### Examples ``` >>> np.random.default_rng().bytes(10) b'\xfeC\x9b\x86\x17\xf2\xa1\xafcp' # random ``` © 2005–2022 NumPy Developers Licensed under the 3-clause BSD License. <https://numpy.org/doc/1.23/reference/random/generated/numpy.random.Generator.bytes.htmlnumpy.random.Generator.permuted =============================== method random.Generator.permuted(*x*, *axis=None*, *out=None*) Randomly permute `x` along axis `axis`. Unlike [`shuffle`](numpy.random.generator.shuffle#numpy.random.Generator.shuffle "numpy.random.Generator.shuffle"), each slice along the given axis is shuffled independently of the others. Parameters **x**array_like, at least one-dimensional Array to be shuffled. 
**axis**int, optional Slices of `x` in this axis are shuffled. Each slice is shuffled independently of the others. If `axis` is None, the flattened array is shuffled. **out**ndarray, optional If given, this is the destination of the shuffled array. If `out` is None, a shuffled copy of the array is returned. Returns ndarray If `out` is None, a shuffled copy of `x` is returned. Otherwise, the shuffled array is stored in `out`, and `out` is returned. See also [`shuffle`](numpy.random.generator.shuffle#numpy.random.Generator.shuffle "numpy.random.Generator.shuffle") [`permutation`](numpy.random.generator.permutation#numpy.random.Generator.permutation "numpy.random.Generator.permutation")

#### Notes

An important distinction between the methods `shuffle` and `permuted` is how they treat the `axis` parameter, which is discussed at [Handling the axis parameter](../generator#generator-handling-axis-parameter).

#### Examples

Create a [`numpy.random.Generator`](../generator#numpy.random.Generator "numpy.random.Generator") instance:

```
>>> rng = np.random.default_rng()
```

Create a test array:

```
>>> x = np.arange(24).reshape(3, 8)
>>> x
array([[ 0,  1,  2,  3,  4,  5,  6,  7],
       [ 8,  9, 10, 11, 12, 13, 14, 15],
       [16, 17, 18, 19, 20, 21, 22, 23]])
```

Shuffle the rows of `x`:

```
>>> y = rng.permuted(x, axis=1)
>>> y
array([[ 4,  3,  6,  7,  1,  2,  5,  0], # random
       [15, 10, 14,  9, 12, 11,  8, 13],
       [17, 16, 20, 21, 18, 22, 23, 19]])
```

`x` has not been modified:

```
>>> x
array([[ 0,  1,  2,  3,  4,  5,  6,  7],
       [ 8,  9, 10, 11, 12, 13, 14, 15],
       [16, 17, 18, 19, 20, 21, 22, 23]])
```

To shuffle the rows of `x` in-place, pass `x` as the `out` parameter:

```
>>> y = rng.permuted(x, axis=1, out=x)
>>> x
array([[ 3,  0,  4,  7,  1,  6,  2,  5], # random
       [ 8, 14, 13,  9, 12, 11, 15, 10],
       [17, 18, 16, 22, 19, 23, 20, 21]])
```

Note that when the `out` parameter is given, the return value is `out`:

```
>>> y is x
True
```
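The per-slice independence described above is a checkable property: with `axis=0`, each column of the result must be a permutation of the matching column of the input. A small sketch:

```python
import numpy as np

rng = np.random.default_rng()
x = np.arange(12).reshape(3, 4)

# With axis=0, each column is shuffled independently of the others,
# so every column of y is a permutation of the matching column of x.
y = rng.permuted(x, axis=0)
for col in range(x.shape[1]):
    print(sorted(y[:, col]) == sorted(x[:, col]))  # True for every column
```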
<https://numpy.org/doc/1.23/reference/random/generated/numpy.random.Generator.permuted.htmlnumpy.random.Generator.beta =========================== method random.Generator.beta(*a*, *b*, *size=None*) Draw samples from a Beta distribution. The Beta distribution is a special case of the Dirichlet distribution, and is related to the Gamma distribution. It has the probability distribution function \[f(x; a,b) = \frac{1}{B(\alpha, \beta)} x^{\alpha - 1} (1 - x)^{\beta - 1},\] where the normalization, B, is the beta function, \[B(\alpha, \beta) = \int_0^1 t^{\alpha - 1} (1 - t)^{\beta - 1} dt.\] It is often seen in Bayesian inference and order statistics. Parameters **a**float or array_like of floats Alpha, positive (>0). **b**float or array_like of floats Beta, positive (>0). **size**int or tuple of ints, optional Output shape. If the given shape is, e.g., `(m, n, k)`, then `m * n * k` samples are drawn. If size is `None` (default), a single value is returned if `a` and `b` are both scalars. Otherwise, `np.broadcast(a, b).size` samples are drawn. Returns **out**ndarray or scalar Drawn samples from the parameterized beta distribution. © 2005–2022 NumPy Developers Licensed under the 3-clause BSD License. <https://numpy.org/doc/1.23/reference/random/generated/numpy.random.Generator.beta.htmlnumpy.random.Generator.binomial =============================== method random.Generator.binomial(*n*, *p*, *size=None*) Draw samples from a binomial distribution. Samples are drawn from a binomial distribution with specified parameters, n trials and p probability of success where n an integer >= 0 and p is in the interval [0,1]. (n may be input as a float, but it is truncated to an integer in use) Parameters **n**int or array_like of ints Parameter of the distribution, >= 0. Floats are also accepted, but they will be truncated to integers. **p**float or array_like of floats Parameter of the distribution, >= 0 and <=1. **size**int or tuple of ints, optional Output shape. 
If the given shape is, e.g., `(m, n, k)`, then `m * n * k` samples are drawn. If size is `None` (default), a single value is returned if `n` and `p` are both scalars. Otherwise, `np.broadcast(n, p).size` samples are drawn. Returns **out**ndarray or scalar Drawn samples from the parameterized binomial distribution, where each sample is equal to the number of successes over the n trials. See also [`scipy.stats.binom`](https://docs.scipy.org/doc/scipy/reference/generated/scipy.stats.binom.html#scipy.stats.binom "(in SciPy v1.8.1)") probability density function, distribution or cumulative density function, etc. #### Notes The probability density for the binomial distribution is \[P(N) = \binom{n}{N}p^N(1-p)^{n-N},\] where \(n\) is the number of trials, \(p\) is the probability of success, and \(N\) is the number of successes. When estimating the standard error of a proportion in a population by using a random sample, the normal distribution works well unless the product p*n <=5, where p = population proportion estimate, and n = number of samples, in which case the binomial distribution is used instead. For example, a sample of 15 people shows 4 who are left handed, and 11 who are right handed. Then p = 4/15 = 27%. 0.27*15 = 4, so the binomial distribution should be used in this case. #### References 1 Dalgaard, Peter, “Introductory Statistics with R”, Springer-Verlag, 2002. 2 Glantz, <NAME>. “Primer of Biostatistics.”, McGraw-Hill, Fifth Edition, 2002. 3 Lentner, Marvin, “Elementary Applied Statistics”, Bogden and Quigley, 1972. 4 Weisstein, <NAME>. “Binomial Distribution.” From MathWorld–A Wolfram Web Resource. 
<http://mathworld.wolfram.com/BinomialDistribution.html 5 Wikipedia, “Binomial distribution”, <https://en.wikipedia.org/wiki/Binomial_distribution #### Examples Draw samples from the distribution: ``` >>> rng = np.random.default_rng() >>> n, p = 10, .5 # number of trials, probability of each trial >>> s = rng.binomial(n, p, 1000) # result of flipping a coin 10 times, tested 1000 times. ``` A real world example. A company drills 9 wild-cat oil exploration wells, each with an estimated probability of success of 0.1. All nine wells fail. What is the probability of that happening? Let’s do 20,000 trials of the model, and count the number that generate zero positive results. ``` >>> sum(rng.binomial(9, 0.1, 20000) == 0)/20000. # answer = 0.38885, or 39%. ``` © 2005–2022 NumPy Developers Licensed under the 3-clause BSD License. <https://numpy.org/doc/1.23/reference/random/generated/numpy.random.Generator.binomial.htmlnumpy.random.Generator.chisquare ================================ method random.Generator.chisquare(*df*, *size=None*) Draw samples from a chi-square distribution. When `df` independent random variables, each with standard normal distributions (mean 0, variance 1), are squared and summed, the resulting distribution is chi-square (see Notes). This distribution is often used in hypothesis testing. Parameters **df**float or array_like of floats Number of degrees of freedom, must be > 0. **size**int or tuple of ints, optional Output shape. If the given shape is, e.g., `(m, n, k)`, then `m * n * k` samples are drawn. If size is `None` (default), a single value is returned if `df` is a scalar. Otherwise, `np.array(df).size` samples are drawn. Returns **out**ndarray or scalar Drawn samples from the parameterized chi-square distribution. Raises ValueError When `df` <= 0 or when an inappropriate `size` (e.g. `size=-1`) is given. 
#### Notes

The variable obtained by summing the squares of `df` independent, standard normally distributed random variables:

\[Q = \sum_{i=1}^{\mathtt{df}} X^2_i\]

is chi-square distributed, denoted

\[Q \sim \chi^2_k.\]

The probability density function of the chi-squared distribution is

\[p(x) = \frac{(1/2)^{k/2}}{\Gamma(k/2)} x^{k/2 - 1} e^{-x/2},\]

where \(\Gamma\) is the gamma function,

\[\Gamma(x) = \int_0^{\infty} t^{x - 1} e^{-t} dt.\]

#### References

1 NIST “Engineering Statistics Handbook” <https://www.itl.nist.gov/div898/handbook/eda/section3/eda3666.htm>

#### Examples

```
>>> np.random.default_rng().chisquare(2,4)
array([ 1.89920014,  9.00867716,  3.13710533,  5.62318272]) # random
```

<https://numpy.org/doc/1.23/reference/random/generated/numpy.random.Generator.chisquare.html>

numpy.random.Generator.dirichlet
================================

method random.Generator.dirichlet(*alpha*, *size=None*) Draw samples from the Dirichlet distribution. Draw `size` samples of dimension k from a Dirichlet distribution. A Dirichlet-distributed random variable can be seen as a multivariate generalization of a Beta distribution. The Dirichlet distribution is a conjugate prior of a multinomial distribution in Bayesian inference. Parameters **alpha**sequence of floats, length k Parameter of the distribution (length `k` for sample of length `k`). **size**int or tuple of ints, optional Output shape. If the given shape is, e.g., `(m, n)`, then `m * n * k` samples are drawn. Default is None, in which case a vector of length `k` is returned. Returns **samples**ndarray The drawn samples, of shape `(size, k)`. Raises ValueError If any value in `alpha` is less than or equal to zero.

#### Notes

The Dirichlet distribution is a distribution over vectors \(x\) that fulfil the conditions \(x_i>0\) and \(\sum_{i=1}^k x_i = 1\).
The probability density function \(p\) of a Dirichlet-distributed random vector \(X\) is proportional to \[p(x) \propto \prod_{i=1}^{k}{x^{\alpha_i-1}_i},\] where \(\alpha\) is a vector containing the positive concentration parameters. The method uses the following property for computation: let \(Y\) be a random vector which has components that follow a standard gamma distribution, then \(X = \frac{1}{\sum_{i=1}^k{Y_i}} Y\) is Dirichlet-distributed #### References 1 <NAME>, “Information Theory, Inference and Learning Algorithms,” chapter 23, <http://www.inference.org.uk/mackay/itila/ 2 Wikipedia, “Dirichlet distribution”, <https://en.wikipedia.org/wiki/Dirichlet_distribution #### Examples Taking an example cited in Wikipedia, this distribution can be used if one wanted to cut strings (each of initial length 1.0) into K pieces with different lengths, where each piece had, on average, a designated average length, but allowing some variation in the relative sizes of the pieces. ``` >>> s = np.random.default_rng().dirichlet((10, 5, 3), 20).transpose() ``` ``` >>> import matplotlib.pyplot as plt >>> plt.barh(range(20), s[0]) >>> plt.barh(range(20), s[1], left=s[0], color='g') >>> plt.barh(range(20), s[2], left=s[0]+s[1], color='r') >>> plt.title("Lengths of Strings") ``` ![../../../_images/numpy-random-Generator-dirichlet-1.png] © 2005–2022 NumPy Developers Licensed under the 3-clause BSD License. <https://numpy.org/doc/1.23/reference/random/generated/numpy.random.Generator.dirichlet.htmlnumpy.random.Generator.exponential ================================== method random.Generator.exponential(*scale=1.0*, *size=None*) Draw samples from an exponential distribution. Its probability density function is \[f(x; \frac{1}{\beta}) = \frac{1}{\beta} \exp(-\frac{x}{\beta}),\] for `x > 0` and 0 elsewhere. \(\beta\) is the scale parameter, which is the inverse of the rate parameter \(\lambda = 1/\beta\). 
The rate parameter is an alternative, widely used parameterization of the exponential distribution [[3]](#r0dbb9b01ef9c-3). The exponential distribution is a continuous analogue of the geometric distribution. It describes many common situations, such as the size of raindrops measured over many rainstorms [[1]](#r0dbb9b01ef9c-1), or the time between page requests to Wikipedia [[2]](#r0dbb9b01ef9c-2). Parameters **scale**float or array_like of floats The scale parameter, \(\beta = 1/\lambda\). Must be non-negative. **size**int or tuple of ints, optional Output shape. If the given shape is, e.g., `(m, n, k)`, then `m * n * k` samples are drawn. If size is `None` (default), a single value is returned if `scale` is a scalar. Otherwise, `np.array(scale).size` samples are drawn. Returns **out**ndarray or scalar Drawn samples from the parameterized exponential distribution. #### References [1](#id2) <NAME>., “Probability, Random Variables and Random Signal Principles”, 4th ed, 2001, p. 57. [2](#id3) Wikipedia, “Poisson process”, <https://en.wikipedia.org/wiki/Poisson_process [3](#id1) Wikipedia, “Exponential distribution”, <https://en.wikipedia.org/wiki/Exponential_distribution © 2005–2022 NumPy Developers Licensed under the 3-clause BSD License. <https://numpy.org/doc/1.23/reference/random/generated/numpy.random.Generator.exponential.htmlnumpy.random.Generator.f ======================== method random.Generator.f(*dfnum*, *dfden*, *size=None*) Draw samples from an F distribution. Samples are drawn from an F distribution with specified parameters, `dfnum` (degrees of freedom in numerator) and `dfden` (degrees of freedom in denominator), where both parameters must be greater than zero. The random variate of the F distribution (also known as the Fisher distribution) is a continuous probability distribution that arises in ANOVA tests, and is the ratio of two chi-square variates. Parameters **dfnum**float or array_like of floats Degrees of freedom in numerator, must be > 0. 
**dfden**float or array_like of float Degrees of freedom in denominator, must be > 0. **size**int or tuple of ints, optional Output shape. If the given shape is, e.g., `(m, n, k)`, then `m * n * k` samples are drawn. If size is `None` (default), a single value is returned if `dfnum` and `dfden` are both scalars. Otherwise, `np.broadcast(dfnum, dfden).size` samples are drawn. Returns **out**ndarray or scalar Drawn samples from the parameterized Fisher distribution. See also [`scipy.stats.f`](https://docs.scipy.org/doc/scipy/reference/generated/scipy.stats.f.html#scipy.stats.f "(in SciPy v1.8.1)") probability density function, distribution or cumulative density function, etc. #### Notes The F statistic is used to compare in-group variances to between-group variances. Calculating the distribution depends on the sampling, and so it is a function of the respective degrees of freedom in the problem. The variable `dfnum` is the number of samples minus one, the between-groups degrees of freedom, while `dfden` is the within-groups degrees of freedom, the sum of the number of samples in each group minus the number of groups. #### References 1 Glantz, <NAME>. “Primer of Biostatistics.”, McGraw-Hill, Fifth Edition, 2002. 2 Wikipedia, “F-distribution”, <https://en.wikipedia.org/wiki/F-distribution #### Examples An example from Glantz[1], pp 47-40: Two groups, children of diabetics (25 people) and children from people without diabetes (25 controls). Fasting blood glucose was measured, case group had a mean value of 86.1, controls had a mean value of 82.2. Standard deviations were 2.09 and 2.49 respectively. Are these data consistent with the null hypothesis that the parents diabetic status does not affect their children’s blood glucose levels? Calculating the F statistic from the data gives a value of 36.01. Draw samples from the distribution: ``` >>> dfnum = 1. # between group degrees of freedom >>> dfden = 48. 
# within groups degrees of freedom >>> s = np.random.default_rng().f(dfnum, dfden, 1000) ``` The lower bound for the top 1% of the samples is : ``` >>> np.sort(s)[-10] 7.61988120985 # random ``` So there is about a 1% chance that the F statistic will exceed 7.62, the measured value is 36, so the null hypothesis is rejected at the 1% level. © 2005–2022 NumPy Developers Licensed under the 3-clause BSD License. <https://numpy.org/doc/1.23/reference/random/generated/numpy.random.Generator.f.htmlnumpy.random.Generator.gamma ============================ method random.Generator.gamma(*shape*, *scale=1.0*, *size=None*) Draw samples from a Gamma distribution. Samples are drawn from a Gamma distribution with specified parameters, [`shape`](../../generated/numpy.shape#numpy.shape "numpy.shape") (sometimes designated “k”) and `scale` (sometimes designated “theta”), where both parameters are > 0. Parameters **shape**float or array_like of floats The shape of the gamma distribution. Must be non-negative. **scale**float or array_like of floats, optional The scale of the gamma distribution. Must be non-negative. Default is equal to 1. **size**int or tuple of ints, optional Output shape. If the given shape is, e.g., `(m, n, k)`, then `m * n * k` samples are drawn. If size is `None` (default), a single value is returned if `shape` and `scale` are both scalars. Otherwise, `np.broadcast(shape, scale).size` samples are drawn. Returns **out**ndarray or scalar Drawn samples from the parameterized gamma distribution. See also [`scipy.stats.gamma`](https://docs.scipy.org/doc/scipy/reference/generated/scipy.stats.gamma.html#scipy.stats.gamma "(in SciPy v1.8.1)") probability density function, distribution or cumulative density function, etc. #### Notes The probability density for the Gamma distribution is \[p(x) = x^{k-1}\frac{e^{-x/\theta}}{\theta^k\Gamma(k)},\] where \(k\) is the shape and \(\theta\) the scale, and \(\Gamma\) is the Gamma function. 
The Gamma distribution is often used to model the times to failure of electronic components, and arises naturally in processes for which the waiting times between Poisson distributed events are relevant. #### References 1 Weisstein, <NAME>. “Gamma Distribution.” From MathWorld–A Wolfram Web Resource. <http://mathworld.wolfram.com/GammaDistribution.html 2 Wikipedia, “Gamma distribution”, <https://en.wikipedia.org/wiki/Gamma_distribution #### Examples Draw samples from the distribution: ``` >>> shape, scale = 2., 2. # mean=4, std=2*sqrt(2) >>> s = np.random.default_rng().gamma(shape, scale, 1000) ``` Display the histogram of the samples, along with the probability density function: ``` >>> import matplotlib.pyplot as plt >>> import scipy.special as sps >>> count, bins, ignored = plt.hist(s, 50, density=True) >>> y = bins**(shape-1)*(np.exp(-bins/scale) / ... (sps.gamma(shape)*scale**shape)) >>> plt.plot(bins, y, linewidth=2, color='r') >>> plt.show() ``` ![../../../_images/numpy-random-Generator-gamma-1.png] © 2005–2022 NumPy Developers Licensed under the 3-clause BSD License. <https://numpy.org/doc/1.23/reference/random/generated/numpy.random.Generator.gamma.htmlnumpy.random.Generator.geometric ================================ method random.Generator.geometric(*p*, *size=None*) Draw samples from the geometric distribution. Bernoulli trials are experiments with one of two outcomes: success or failure (an example of such an experiment is flipping a coin). The geometric distribution models the number of trials that must be run in order to achieve success. It is therefore supported on the positive integers, `k = 1, 2, ...`. The probability mass function of the geometric distribution is \[f(k) = (1 - p)^{k - 1} p\] where `p` is the probability of success of an individual trial. Parameters **p**float or array_like of floats The probability of success of an individual trial. **size**int or tuple of ints, optional Output shape. 
If the given shape is, e.g., `(m, n, k)`, then `m * n * k` samples are drawn. If size is `None` (default), a single value is returned if `p` is a scalar. Otherwise, `np.array(p).size` samples are drawn. Returns **out**ndarray or scalar Drawn samples from the parameterized geometric distribution. #### Examples Draw ten thousand values from the geometric distribution, with the probability of an individual success equal to 0.35: ``` >>> z = np.random.default_rng().geometric(p=0.35, size=10000) ``` How many trials succeeded after a single run? ``` >>> (z == 1).sum() / 10000. 0.34889999999999999 # random ``` © 2005–2022 NumPy Developers Licensed under the 3-clause BSD License. <https://numpy.org/doc/1.23/reference/random/generated/numpy.random.Generator.geometric.htmlnumpy.random.Generator.gumbel ============================= method random.Generator.gumbel(*loc=0.0*, *scale=1.0*, *size=None*) Draw samples from a Gumbel distribution. Draw samples from a Gumbel distribution with specified location and scale. For more information on the Gumbel distribution, see Notes and References below. Parameters **loc**float or array_like of floats, optional The location of the mode of the distribution. Default is 0. **scale**float or array_like of floats, optional The scale parameter of the distribution. Default is 1. Must be non- negative. **size**int or tuple of ints, optional Output shape. If the given shape is, e.g., `(m, n, k)`, then `m * n * k` samples are drawn. If size is `None` (default), a single value is returned if `loc` and `scale` are both scalars. Otherwise, `np.broadcast(loc, scale).size` samples are drawn. Returns **out**ndarray or scalar Drawn samples from the parameterized Gumbel distribution. 
See also [`scipy.stats.gumbel_l`](https://docs.scipy.org/doc/scipy/reference/generated/scipy.stats.gumbel_l.html#scipy.stats.gumbel_l "(in SciPy v1.8.1)") [`scipy.stats.gumbel_r`](https://docs.scipy.org/doc/scipy/reference/generated/scipy.stats.gumbel_r.html#scipy.stats.gumbel_r "(in SciPy v1.8.1)") [`scipy.stats.genextreme`](https://docs.scipy.org/doc/scipy/reference/generated/scipy.stats.genextreme.html#scipy.stats.genextreme "(in SciPy v1.8.1)") [`weibull`](numpy.random.generator.weibull#numpy.random.Generator.weibull "numpy.random.Generator.weibull") #### Notes The Gumbel (or Smallest Extreme Value (SEV) or the Smallest Extreme Value Type I) distribution is one of a class of Generalized Extreme Value (GEV) distributions used in modeling extreme value problems. The Gumbel is a special case of the Extreme Value Type I distribution for maximums from distributions with “exponential-like” tails. The probability density for the Gumbel distribution is \[p(x) = \frac{e^{-(x - \mu)/ \beta}}{\beta} e^{ -e^{-(x - \mu)/ \beta}},\] where \(\mu\) is the mode, a location parameter, and \(\beta\) is the scale parameter. The Gumbel (named for German mathematician <NAME>) was used very early in the hydrology literature, for modeling the occurrence of flood events. It is also used for modeling maximum wind speed and rainfall rates. It is a “fat-tailed” distribution - the probability of an event in the tail of the distribution is larger than if one used a Gaussian, hence the surprisingly frequent occurrence of 100-year floods. Floods were initially modeled as a Gaussian process, which underestimated the frequency of extreme events. It is one of a class of extreme value distributions, the Generalized Extreme Value (GEV) distributions, which also includes the Weibull and Frechet. The function has a mean of \(\mu + 0.57721\beta\) and a variance of \(\frac{\pi^2}{6}\beta^2\). #### References 1 <NAME>., “Statistics of Extremes,” New York: Columbia University Press, 1958. 2 <NAME>. 
and <NAME>., “Statistical Analysis of Extreme Values from Insurance, Finance, Hydrology and Other Fields,” Basel: Birkhauser Verlag, 2001. #### Examples Draw samples from the distribution: ``` >>> rng = np.random.default_rng() >>> mu, beta = 0, 0.1 # location and scale >>> s = rng.gumbel(mu, beta, 1000) ``` Display the histogram of the samples, along with the probability density function: ``` >>> import matplotlib.pyplot as plt >>> count, bins, ignored = plt.hist(s, 30, density=True) >>> plt.plot(bins, (1/beta)*np.exp(-(bins - mu)/beta) ... * np.exp( -np.exp( -(bins - mu) /beta) ), ... linewidth=2, color='r') >>> plt.show() ``` ![../../../_images/numpy-random-Generator-gumbel-1_00_00.png] Show how an extreme value distribution can arise from a Gaussian process and compare to a Gaussian: ``` >>> means = [] >>> maxima = [] >>> for i in range(0,1000) : ... a = rng.normal(mu, beta, 1000) ... means.append(a.mean()) ... maxima.append(a.max()) >>> count, bins, ignored = plt.hist(maxima, 30, density=True) >>> beta = np.std(maxima) * np.sqrt(6) / np.pi >>> mu = np.mean(maxima) - 0.57721*beta >>> plt.plot(bins, (1/beta)*np.exp(-(bins - mu)/beta) ... * np.exp(-np.exp(-(bins - mu)/beta)), ... linewidth=2, color='r') >>> plt.plot(bins, 1/(beta * np.sqrt(2 * np.pi)) ... * np.exp(-(bins - mu)**2 / (2 * beta**2)), ... linewidth=2, color='g') >>> plt.show() ``` ![../../../_images/numpy-random-Generator-gumbel-1_01_00.png] © 2005–2022 NumPy Developers Licensed under the 3-clause BSD License. <https://numpy.org/doc/1.23/reference/random/generated/numpy.random.Generator.gumbel.htmlnumpy.random.Generator.hypergeometric ===================================== method random.Generator.hypergeometric(*ngood*, *nbad*, *nsample*, *size=None*) Draw samples from a Hypergeometric distribution. 
Samples are drawn from a hypergeometric distribution with specified parameters, `ngood` (ways to make a good selection), `nbad` (ways to make a bad selection), and `nsample` (number of items sampled, which is less than or equal to the sum `ngood + nbad`). Parameters **ngood**int or array_like of ints Number of ways to make a good selection. Must be nonnegative and less than 10**9. **nbad**int or array_like of ints Number of ways to make a bad selection. Must be nonnegative and less than 10**9. **nsample**int or array_like of ints Number of items sampled. Must be nonnegative and less than `ngood + nbad`. **size**int or tuple of ints, optional Output shape. If the given shape is, e.g., `(m, n, k)`, then `m * n * k` samples are drawn. If size is `None` (default), a single value is returned if `ngood`, `nbad`, and `nsample` are all scalars. Otherwise, `np.broadcast(ngood, nbad, nsample).size` samples are drawn. Returns **out**ndarray or scalar Drawn samples from the parameterized hypergeometric distribution. Each sample is the number of good items within a randomly selected subset of size `nsample` taken from a set of `ngood` good items and `nbad` bad items. See also [`multivariate_hypergeometric`](numpy.random.generator.multivariate_hypergeometric#numpy.random.Generator.multivariate_hypergeometric "numpy.random.Generator.multivariate_hypergeometric") Draw samples from the multivariate hypergeometric distribution. [`scipy.stats.hypergeom`](https://docs.scipy.org/doc/scipy/reference/generated/scipy.stats.hypergeom.html#scipy.stats.hypergeom "(in SciPy v1.8.1)") probability density function, distribution or cumulative density function, etc. #### Notes The probability density for the Hypergeometric distribution is \[P(x) = \frac{\binom{g}{x}\binom{b}{n-x}}{\binom{g+b}{n}},\] where \(0 \le x \le n\) and \(n-b \le x \le g\) for P(x) the probability of `x` good results in the drawn sample, g = `ngood`, b = `nbad`, and n = `nsample`. 
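As a quick sanity check (not part of the original examples; the parameter values and seed below are arbitrary), the pmf above can be evaluated directly with `math.comb` and compared against empirical frequencies from a seeded `Generator.hypergeometric`, written here as a standalone script rather than a doctest:

```python
import math
import numpy as np

def hypergeom_pmf(x, ngood, nbad, nsample):
    # P(x) = C(g, x) * C(b, n - x) / C(g + b, n), from the Notes above.
    return (math.comb(ngood, x) * math.comb(nbad, nsample - x)
            / math.comb(ngood + nbad, nsample))

ngood, nbad, nsample = 10, 20, 8

# The pmf sums to 1 over its support.
total = sum(hypergeom_pmf(x, ngood, nbad, nsample) for x in range(nsample + 1))
assert abs(total - 1.0) < 1e-12

# Empirical frequencies from a seeded generator track the analytic pmf.
rng = np.random.default_rng(12345)
draws = rng.hypergeometric(ngood, nbad, nsample, size=200_000)
for x in range(nsample + 1):
    assert abs(np.mean(draws == x) - hypergeom_pmf(x, ngood, nbad, nsample)) < 5e-3
```

The loose `5e-3` tolerance accounts for sampling noise at 200,000 draws.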
Consider an urn with black and white marbles in it, `ngood` of them are black and `nbad` are white. If you draw `nsample` balls without replacement, then the hypergeometric distribution describes the distribution of black balls in the drawn sample. Note that this distribution is very similar to the binomial distribution, except that in this case, samples are drawn without replacement, whereas in the Binomial case samples are drawn with replacement (or the sample space is infinite). As the sample space becomes large, this distribution approaches the binomial. The arguments `ngood` and `nbad` each must be less than `10**9`. For extremely large arguments, the algorithm that is used to compute the samples [[4]](#r688e4aa3bfc3-4) breaks down because of loss of precision in floating point calculations. For such large values, if `nsample` is not also large, the distribution can be approximated with the binomial distribution, `binomial(n=nsample, p=ngood/(ngood + nbad))`. #### References 1 <NAME>, “Elementary Applied Statistics”, Bogden and Quigley, 1972. 2 Weisstein, <NAME>. “Hypergeometric Distribution.” From MathWorld–A Wolfram Web Resource. <http://mathworld.wolfram.com/HypergeometricDistribution.html 3 Wikipedia, “Hypergeometric distribution”, <https://en.wikipedia.org/wiki/Hypergeometric_distribution [4](#id1) <NAME>, “The ratio of uniforms approach for generating discrete random variates”, Journal of Computational and Applied Mathematics, 31, pp. 181-189 (1990). #### Examples Draw samples from the distribution: ``` >>> rng = np.random.default_rng() >>> ngood, nbad, nsamp = 100, 2, 10 # number of good, number of bad, and number of samples >>> s = rng.hypergeometric(ngood, nbad, nsamp, 1000) >>> from matplotlib.pyplot import hist >>> hist(s) # note that it is very unlikely to grab both bad items ``` Suppose you have an urn with 15 white and 15 black marbles. If you pull 15 marbles at random, how likely is it that 12 or more of them are one color? 
``` >>> s = rng.hypergeometric(15, 15, 15, 100000) >>> sum(s>=12)/100000. + sum(s<=3)/100000. # answer = 0.003 ... pretty unlikely! ``` © 2005–2022 NumPy Developers Licensed under the 3-clause BSD License. <https://numpy.org/doc/1.23/reference/random/generated/numpy.random.Generator.hypergeometric.html

numpy.random.Generator.laplace
==============================

method random.Generator.laplace(*loc=0.0*, *scale=1.0*, *size=None*) Draw samples from the Laplace or double exponential distribution with specified location (or mean) and scale (decay). The Laplace distribution is similar to the Gaussian/normal distribution, but is sharper at the peak and has fatter tails. It represents the difference between two independent, identically distributed exponential random variables. Parameters **loc**float or array_like of floats, optional The position, \(\mu\), of the distribution peak. Default is 0. **scale**float or array_like of floats, optional \(\lambda\), the exponential decay. Default is 1. Must be non-negative. **size**int or tuple of ints, optional Output shape. If the given shape is, e.g., `(m, n, k)`, then `m * n * k` samples are drawn. If size is `None` (default), a single value is returned if `loc` and `scale` are both scalars. Otherwise, `np.broadcast(loc, scale).size` samples are drawn. Returns **out**ndarray or scalar Drawn samples from the parameterized Laplace distribution. #### Notes It has the probability density function \[f(x; \mu, \lambda) = \frac{1}{2\lambda} \exp\left(-\frac{|x - \mu|}{\lambda}\right).\] The first law of Laplace, from 1774, states that the frequency of an error can be expressed as an exponential function of the absolute magnitude of the error, which leads to the Laplace distribution. For many problems in economics and health sciences, this distribution seems to model the data better than the standard Gaussian distribution. #### References 1 <NAME>. and <NAME>. (Eds.).
“Handbook of Mathematical Functions with Formulas, Graphs, and Mathematical Tables, 9th printing,” New York: Dover, 1972. 2 <NAME>, et al. “The Laplace Distribution and Generalizations,” Birkhauser, 2001. 3 Weisstein, <NAME>. “Laplace Distribution.” From MathWorld–A Wolfram Web Resource. <http://mathworld.wolfram.com/LaplaceDistribution.html 4 Wikipedia, “Laplace distribution”, <https://en.wikipedia.org/wiki/Laplace_distribution #### Examples Draw samples from the distribution: ``` >>> loc, scale = 0., 1. >>> s = np.random.default_rng().laplace(loc, scale, 1000) ``` Display the histogram of the samples, along with the probability density function: ``` >>> import matplotlib.pyplot as plt >>> count, bins, ignored = plt.hist(s, 30, density=True) >>> x = np.arange(-8., 8., .01) >>> pdf = np.exp(-abs(x-loc)/scale)/(2.*scale) >>> plt.plot(x, pdf) ``` Plot Gaussian for comparison: ``` >>> g = (1/(scale * np.sqrt(2 * np.pi)) * ... np.exp(-(x - loc)**2 / (2 * scale**2))) >>> plt.plot(x,g) ``` ![../../../_images/numpy-random-Generator-laplace-1.png] © 2005–2022 NumPy Developers Licensed under the 3-clause BSD License. <https://numpy.org/doc/1.23/reference/random/generated/numpy.random.Generator.laplace.html

numpy.random.Generator.logistic
===============================

method random.Generator.logistic(*loc=0.0*, *scale=1.0*, *size=None*) Draw samples from a logistic distribution. Samples are drawn from a logistic distribution with specified parameters, loc (location or mean, also median), and scale (>0). Parameters **loc**float or array_like of floats, optional Parameter of the distribution. Default is 0. **scale**float or array_like of floats, optional Parameter of the distribution. Must be non-negative. Default is 1. **size**int or tuple of ints, optional Output shape. If the given shape is, e.g., `(m, n, k)`, then `m * n * k` samples are drawn. If size is `None` (default), a single value is returned if `loc` and `scale` are both scalars.
Otherwise, `np.broadcast(loc, scale).size` samples are drawn. Returns **out**ndarray or scalar Drawn samples from the parameterized logistic distribution. See also [`scipy.stats.logistic`](https://docs.scipy.org/doc/scipy/reference/generated/scipy.stats.logistic.html#scipy.stats.logistic "(in SciPy v1.8.1)") probability density function, distribution or cumulative density function, etc. #### Notes The probability density for the Logistic distribution is \[P(x) = \frac{e^{-(x-\mu)/s}}{s(1+e^{-(x-\mu)/s})^2},\] where \(\mu\) = location and \(s\) = scale. The Logistic distribution is used in Extreme Value problems where it can act as a mixture of Gumbel distributions, in Epidemiology, and by the World Chess Federation (FIDE) where it is used in the Elo ranking system, assuming the performance of each player is a logistically distributed random variable. #### References 1 <NAME>. and <NAME>. (2001), “Statistical Analysis of Extreme Values, from Insurance, Finance, Hydrology and Other Fields,” Birkhauser Verlag, Basel, pp 132-133. 2 Weisstein, <NAME>. “Logistic Distribution.” From MathWorld–A Wolfram Web Resource. <http://mathworld.wolfram.com/LogisticDistribution.html 3 Wikipedia, “Logistic-distribution”, <https://en.wikipedia.org/wiki/Logistic_distribution #### Examples Draw samples from the distribution: ``` >>> loc, scale = 10, 1 >>> s = np.random.default_rng().logistic(loc, scale, 10000) >>> import matplotlib.pyplot as plt >>> count, bins, ignored = plt.hist(s, bins=50) ``` # plot against distribution ``` >>> def logist(x, loc, scale): ... return np.exp((loc-x)/scale)/(scale*(1+np.exp((loc-x)/scale))**2) >>> lgst_val = logist(bins, loc, scale) >>> plt.plot(bins, lgst_val * count.max() / lgst_val.max()) >>> plt.show() ``` ![../../../_images/numpy-random-Generator-logistic-1.png] © 2005–2022 NumPy Developers Licensed under the 3-clause BSD License.
<https://numpy.org/doc/1.23/reference/random/generated/numpy.random.Generator.logistic.html

numpy.random.Generator.lognormal
================================

method random.Generator.lognormal(*mean=0.0*, *sigma=1.0*, *size=None*) Draw samples from a log-normal distribution. Draw samples from a log-normal distribution with specified mean, standard deviation, and array shape. Note that the mean and standard deviation are not the values for the distribution itself, but of the underlying normal distribution it is derived from. Parameters **mean**float or array_like of floats, optional Mean value of the underlying normal distribution. Default is 0. **sigma**float or array_like of floats, optional Standard deviation of the underlying normal distribution. Must be non-negative. Default is 1. **size**int or tuple of ints, optional Output shape. If the given shape is, e.g., `(m, n, k)`, then `m * n * k` samples are drawn. If size is `None` (default), a single value is returned if `mean` and `sigma` are both scalars. Otherwise, `np.broadcast(mean, sigma).size` samples are drawn. Returns **out**ndarray or scalar Drawn samples from the parameterized log-normal distribution. See also [`scipy.stats.lognorm`](https://docs.scipy.org/doc/scipy/reference/generated/scipy.stats.lognorm.html#scipy.stats.lognorm "(in SciPy v1.8.1)") probability density function, distribution, cumulative density function, etc. #### Notes A variable `x` has a log-normal distribution if `log(x)` is normally distributed. The probability density function for the log-normal distribution is: \[p(x) = \frac{1}{\sigma x \sqrt{2\pi}} e^{(-\frac{(ln(x)-\mu)^2}{2\sigma^2})}\] where \(\mu\) is the mean and \(\sigma\) is the standard deviation of the normally distributed logarithm of the variable.
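A small seeded check of the relationship just stated — that the log of log-normal draws is normally distributed with the given `mean` and `sigma` — can be written as a standalone script (the parameter values and seed here are arbitrary, chosen only for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)
mu, sigma = 1.5, 0.4
s = rng.lognormal(mean=mu, sigma=sigma, size=100_000)

# log(x) should be approximately Normal(mu, sigma): check sample moments.
logs = np.log(s)
assert abs(logs.mean() - mu) < 0.01
assert abs(logs.std() - sigma) < 0.01

# Equivalently, exponentiating normal draws yields the same distribution;
# the median of a log-normal is exp(mu).
t = np.exp(rng.normal(loc=mu, scale=sigma, size=100_000))
assert abs(np.median(t) - np.exp(mu)) < 0.05
```

With 100,000 draws the sampling error of these moment estimates is well inside the tolerances used.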
A log-normal distribution results if a random variable is the *product* of a large number of independent, identically-distributed variables in the same way that a normal distribution results if the variable is the *sum* of a large number of independent, identically-distributed variables. #### References 1 <NAME>., <NAME>., and <NAME>., “Log-normal Distributions across the Sciences: Keys and Clues,” BioScience, Vol. 51, No. 5, May, 2001. <https://stat.ethz.ch/~stahel/lognormal/bioscience.pdf 2 <NAME>. and <NAME>., “Statistical Analysis of Extreme Values,” Basel: Birkhauser Verlag, 2001, pp. 31-32. #### Examples Draw samples from the distribution: ``` >>> rng = np.random.default_rng() >>> mu, sigma = 3., 1. # mean and standard deviation >>> s = rng.lognormal(mu, sigma, 1000) ``` Display the histogram of the samples, along with the probability density function: ``` >>> import matplotlib.pyplot as plt >>> count, bins, ignored = plt.hist(s, 100, density=True, align='mid') ``` ``` >>> x = np.linspace(min(bins), max(bins), 10000) >>> pdf = (np.exp(-(np.log(x) - mu)**2 / (2 * sigma**2)) ... / (x * sigma * np.sqrt(2 * np.pi))) ``` ``` >>> plt.plot(x, pdf, linewidth=2, color='r') >>> plt.axis('tight') >>> plt.show() ``` ![../../../_images/numpy-random-Generator-lognormal-1_00_00.png] Demonstrate that taking the products of random samples from a uniform distribution can be fit well by a log-normal probability density function. ``` >>> # Generate a thousand samples: each is the product of 100 random >>> # values, drawn from a normal distribution. >>> rng = rng >>> b = [] >>> for i in range(1000): ... a = 10. + rng.standard_normal(100) ... b.append(np.product(a)) ``` ``` >>> b = np.array(b) / np.min(b) # scale values to be positive >>> count, bins, ignored = plt.hist(b, 100, density=True, align='mid') >>> sigma = np.std(np.log(b)) >>> mu = np.mean(np.log(b)) ``` ``` >>> x = np.linspace(min(bins), max(bins), 10000) >>> pdf = (np.exp(-(np.log(x) - mu)**2 / (2 * sigma**2)) ... 
/ (x * sigma * np.sqrt(2 * np.pi))) ``` ``` >>> plt.plot(x, pdf, color='r', linewidth=2) >>> plt.show() ``` ![../../../_images/numpy-random-Generator-lognormal-1_01_00.png] © 2005–2022 NumPy Developers Licensed under the 3-clause BSD License. <https://numpy.org/doc/1.23/reference/random/generated/numpy.random.Generator.lognormal.html

numpy.random.Generator.logseries
================================

method random.Generator.logseries(*p*, *size=None*) Draw samples from a logarithmic series distribution. Samples are drawn from a log series distribution with specified shape parameter, 0 < `p` < 1. Parameters **p**float or array_like of floats Shape parameter for the distribution. Must be in the range (0, 1). **size**int or tuple of ints, optional Output shape. If the given shape is, e.g., `(m, n, k)`, then `m * n * k` samples are drawn. If size is `None` (default), a single value is returned if `p` is a scalar. Otherwise, `np.array(p).size` samples are drawn. Returns **out**ndarray or scalar Drawn samples from the parameterized logarithmic series distribution. See also [`scipy.stats.logser`](https://docs.scipy.org/doc/scipy/reference/generated/scipy.stats.logser.html#scipy.stats.logser "(in SciPy v1.8.1)") probability density function, distribution or cumulative density function, etc. #### Notes The probability mass function for the Log Series distribution is \[P(k) = \frac{-p^k}{k \ln(1-p)},\] where p = probability. The log series distribution is frequently used to represent species richness and occurrence, first proposed by <NAME>, and Williams in 1943 [2]. It may also be used to model the numbers of occupants seen in cars [3]. #### References 1 <NAME>.; Culver, <NAME>., Understanding regional species diversity through the log series distribution of occurrences: BIODIVERSITY RESEARCH Diversity & Distributions, Volume 5, Number 5, September 1999, pp. 187-195(9). 2 <NAME>, <NAME>, and <NAME>. 1943.
The relation between the number of species and the number of individuals in a random sample of an animal population. Journal of Animal Ecology, 12:42-58. 3 <NAME>, <NAME>, <NAME>, <NAME>, A Handbook of Small Data Sets, CRC Press, 1994. 4 Wikipedia, “Logarithmic distribution”, <https://en.wikipedia.org/wiki/Logarithmic_distribution #### Examples Draw samples from the distribution: ``` >>> a = .6 >>> s = np.random.default_rng().logseries(a, 10000) >>> import matplotlib.pyplot as plt >>> count, bins, ignored = plt.hist(s) ``` # plot against distribution ``` >>> def logseries(k, p): ... return -p**k/(k*np.log(1-p)) >>> plt.plot(bins, logseries(bins, a) * count.max()/ ... logseries(bins, a).max(), 'r') >>> plt.show() ``` ![../../../_images/numpy-random-Generator-logseries-1.png] © 2005–2022 NumPy Developers Licensed under the 3-clause BSD License. <https://numpy.org/doc/1.23/reference/random/generated/numpy.random.Generator.logseries.html

numpy.random.Generator.multinomial
==================================

method random.Generator.multinomial(*n*, *pvals*, *size=None*) Draw samples from a multinomial distribution. The multinomial distribution is a multivariate generalization of the binomial distribution. Take an experiment with one of `p` possible outcomes. An example of such an experiment is throwing a dice, where the outcome can be 1 through 6. Each sample drawn from the distribution represents `n` such experiments. Its values, `X_i = [X_0, X_1, ..., X_p]`, represent the number of times the outcome was `i`. Parameters **n**int or array-like of ints Number of experiments. **pvals**array-like of floats Probabilities of each of the `p` different outcomes with shape `(k0, k1, ..., kn, p)`. Each element `pvals[i,j,...,:]` must sum to 1 (however, the last element is always assumed to account for the remaining probability, as long as `sum(pvals[..., :-1], axis=-1) <= 1.0`). Must have at least 1 dimension where pvals.shape[-1] > 0.
**size**int or tuple of ints, optional Output shape. If the given shape is, e.g., `(m, n, k)`, then `m * n * k` samples are drawn each with `p` elements. Default is None where the output size is determined by the broadcast shape of `n` and all but the final dimension of `pvals`, which is denoted as `b=(b0, b1, ..., bq)`. If size is not None, then it must be compatible with the broadcast shape `b`. Specifically, size must have `q` or more elements and size[-(q-j):] must equal `bj`. Returns **out**ndarray The drawn samples, of shape size, if provided. When size is provided, the output shape is size + (p,). If not specified, the shape is determined by the broadcast shape of `n` and `pvals`, `(b0, b1, ..., bq)` augmented with the dimension of the multinomial, `p`, so that the output shape is `(b0, b1, ..., bq, p)`. Each entry `out[i,j,...,:]` is a `p`-dimensional value drawn from the distribution. #### Examples Throw a dice 20 times: ``` >>> rng = np.random.default_rng() >>> rng.multinomial(20, [1/6.]*6, size=1) array([[4, 1, 7, 5, 2, 1]]) # random ``` It landed 4 times on 1, once on 2, etc. Now, throw the dice 20 times, and 20 times again: ``` >>> rng.multinomial(20, [1/6.]*6, size=2) array([[3, 4, 3, 3, 4, 3], [2, 4, 3, 4, 0, 7]]) # random ``` For the first run, we threw 3 times 1, 4 times 2, etc. For the second, we threw 2 times 1, 4 times 2, etc. Now, do one experiment throwing the dice 10 times, and 10 times again, and another throwing the dice 20 times, and 20 times again: ``` >>> rng.multinomial([[10], [20]], [1/6.]*6, size=(2, 2)) array([[[2, 4, 0, 1, 2, 1], [1, 3, 0, 3, 1, 2]], [[1, 4, 4, 4, 4, 3], [3, 3, 2, 5, 5, 2]]]) # random ``` The first array shows the outcomes of throwing the dice 10 times, and the second shows the outcomes from throwing the dice 20 times.
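The shape and mean behavior described above can be checked with a small seeded script (not from the original page; the values of `n`, `pvals`, and the seed are arbitrary): each draw's category counts sum to `n`, and averaging many draws recovers the distribution's mean `n * pvals`:

```python
import numpy as np

rng = np.random.default_rng(42)
n, pvals = 60, [0.2, 0.3, 0.5]
draws = rng.multinomial(n, pvals, size=50_000)  # shape (50000, 3)

# Every sample's category counts sum to the number of experiments.
assert (draws.sum(axis=1) == n).all()

# The sample mean of the counts approaches n * pvals = [12, 18, 30].
assert np.allclose(draws.mean(axis=0), np.multiply(n, pvals), atol=0.1)
```

The `atol=0.1` margin is generous relative to the standard error of the mean at 50,000 draws.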
A loaded die is more likely to land on number 6: ``` >>> rng.multinomial(100, [1/7.]*5 + [2/7.]) array([11, 16, 14, 17, 16, 26]) # random ``` Simulate 10 throws of a 4-sided die and 20 throws of a 6-sided die ``` >>> rng.multinomial([10, 20],[[1/4]*4 + [0]*2, [1/6]*6]) array([[2, 1, 4, 3, 0, 0], [3, 3, 3, 6, 1, 4]], dtype=int64) # random ``` Generate categorical random variates from two categories where the first has 3 outcomes and the second has 2. ``` >>> rng.multinomial(1, [[.1, .5, .4 ], [.3, .7, .0]]) array([[0, 0, 1], [0, 1, 0]], dtype=int64) # random ``` `argmax(axis=-1)` is then used to return the categories. ``` >>> pvals = [[.1, .5, .4 ], [.3, .7, .0]] >>> rvs = rng.multinomial(1, pvals, size=(4,2)) >>> rvs.argmax(axis=-1) array([[0, 1], [2, 0], [2, 1], [2, 0]], dtype=int64) # random ``` The same output dimension can be produced using broadcasting. ``` >>> rvs = rng.multinomial([[1]] * 4, pvals) >>> rvs.argmax(axis=-1) array([[0, 1], [2, 0], [2, 1], [2, 0]], dtype=int64) # random ``` The probability inputs should be normalized. As an implementation detail, the value of the last entry is ignored and assumed to take up any leftover probability mass, but this should not be relied on. A biased coin which has twice as much weight on one side as on the other should be sampled like so: ``` >>> rng.multinomial(100, [1.0 / 3, 2.0 / 3]) # RIGHT array([38, 62]) # random ``` not like: ``` >>> rng.multinomial(100, [1.0, 2.0]) # WRONG Traceback (most recent call last): ValueError: pvals < 0, pvals > 1 or pvals contains NaNs ``` © 2005–2022 NumPy Developers Licensed under the 3-clause BSD License. <https://numpy.org/doc/1.23/reference/random/generated/numpy.random.Generator.multinomial.html

numpy.random.Generator.multivariate_hypergeometric
==================================================

method random.Generator.multivariate_hypergeometric(*colors*, *nsample*, *size=None*, *method='marginals'*) Generate variates from a multivariate hypergeometric distribution.
The multivariate hypergeometric distribution is a generalization of the hypergeometric distribution. Choose `nsample` items at random without replacement from a collection with `N` distinct types. `N` is the length of `colors`, and the values in `colors` are the number of occurrences of that type in the collection. The total number of items in the collection is `sum(colors)`. Each random variate generated by this function is a vector of length `N` holding the counts of the different types that occurred in the `nsample` items. The name `colors` comes from a common description of the distribution: it is the probability distribution of the number of marbles of each color selected without replacement from an urn containing marbles of different colors; `colors[i]` is the number of marbles in the urn with color `i`. Parameters **colors**sequence of integers The number of each type of item in the collection from which a sample is drawn. The values in `colors` must be nonnegative. To avoid loss of precision in the algorithm, `sum(colors)` must be less than `10**9` when `method` is “marginals”. **nsample**int The number of items selected. `nsample` must not be greater than `sum(colors)`. **size**int or tuple of ints, optional The number of variates to generate, either an integer or a tuple holding the shape of the array of variates. If the given size is, e.g., `(k, m)`, then `k * m` variates are drawn, where one variate is a vector of length `len(colors)`, and the return value has shape `(k, m, len(colors))`. If `size` is an integer, the output has shape `(size, len(colors))`. Default is None, in which case a single variate is returned as an array with shape `(len(colors),)`. **method**string, optional Specify the algorithm that is used to generate the variates. Must be ‘count’ or ‘marginals’ (the default). See the Notes for a description of the methods. Returns **variates**ndarray Array of variates drawn from the multivariate hypergeometric distribution. 
See also [`hypergeometric`](numpy.random.generator.hypergeometric#numpy.random.Generator.hypergeometric "numpy.random.Generator.hypergeometric") Draw samples from the (univariate) hypergeometric distribution. #### Notes The two methods do not return the same sequence of variates. The “count” algorithm is roughly equivalent to the following numpy code: ``` choices = np.repeat(np.arange(len(colors)), colors) selection = np.random.choice(choices, nsample, replace=False) variate = np.bincount(selection, minlength=len(colors)) ``` The “count” algorithm uses a temporary array of integers with length `sum(colors)`. The “marginals” algorithm generates a variate by using repeated calls to the univariate hypergeometric sampler. It is roughly equivalent to: ``` variate = np.zeros(len(colors), dtype=np.int64) # `remaining` is the cumulative sum of `colors` from the last # element to the first; e.g. if `colors` is [3, 1, 5], then # `remaining` is [9, 6, 5]. remaining = np.cumsum(colors[::-1])[::-1] for i in range(len(colors)-1): if nsample < 1: break variate[i] = hypergeometric(colors[i], remaining[i+1], nsample) nsample -= variate[i] variate[-1] = nsample ``` The default method is “marginals”. For some cases (e.g. when `colors` contains relatively small integers), the “count” method can be significantly faster than the “marginals” method. If performance of the algorithm is important, test the two methods with typical inputs to decide which works best. New in version 1.18.0. #### Examples ``` >>> colors = [16, 8, 4] >>> seed = 4861946401452 >>> gen = np.random.Generator(np.random.PCG64(seed)) >>> gen.multivariate_hypergeometric(colors, 6) array([5, 0, 1]) >>> gen.multivariate_hypergeometric(colors, 6, size=3) array([[5, 0, 1], [2, 2, 2], [3, 3, 0]]) >>> gen.multivariate_hypergeometric(colors, 6, size=(2, 2)) array([[[3, 2, 1], [3, 2, 1]], [[4, 1, 1], [3, 2, 1]]]) ``` © 2005–2022 NumPy Developers Licensed under the 3-clause BSD License. 
<https://numpy.org/doc/1.23/reference/random/generated/numpy.random.Generator.multivariate_hypergeometric.html

numpy.random.Generator.multivariate_normal
==========================================

method random.Generator.multivariate_normal(*mean*, *cov*, *size=None*, *check_valid='warn'*, *tol=1e-8*, ***, *method='svd'*) Draw random samples from a multivariate normal distribution. The multivariate normal, multinormal or Gaussian distribution is a generalization of the one-dimensional normal distribution to higher dimensions. Such a distribution is specified by its mean and covariance matrix. These parameters are analogous to the mean (average or “center”) and variance (standard deviation, or “width,” squared) of the one-dimensional normal distribution. Parameters **mean**1-D array_like, of length N Mean of the N-dimensional distribution. **cov**2-D array_like, of shape (N, N) Covariance matrix of the distribution. It must be symmetric and positive-semidefinite for proper sampling. **size**int or tuple of ints, optional Given a shape of, for example, `(m,n,k)`, `m*n*k` samples are generated, and packed in an `m`-by-`n`-by-`k` arrangement. Because each sample is `N`-dimensional, the output shape is `(m,n,k,N)`. If no shape is specified, a single (`N`-D) sample is returned. **check_valid**{ ‘warn’, ‘raise’, ‘ignore’ }, optional Behavior when the covariance matrix is not positive semidefinite. **tol**float, optional Tolerance when checking the singular values in covariance matrix. cov is cast to double before the check. **method**{ ‘svd’, ‘eigh’, ‘cholesky’}, optional The cov input is used to compute a factor matrix A such that `A @ A.T = cov`. This argument is used to select the method used to compute the factor matrix A. The default method ‘svd’ is the slowest, while ‘cholesky’ is the fastest but less robust than the slowest method. The method `eigh` uses eigen decomposition to compute A and is faster than svd but slower than cholesky. New in version 1.18.0.
Returns **out**ndarray The drawn samples, of shape *size*, if that was provided. If not, the shape is `(N,)`. In other words, each entry `out[i,j,...,:]` is an N-dimensional value drawn from the distribution. #### Notes The mean is a coordinate in N-dimensional space, which represents the location where samples are most likely to be generated. This is analogous to the peak of the bell curve for the one-dimensional or univariate normal distribution. Covariance indicates the level to which two variables vary together. From the multivariate normal distribution, we draw N-dimensional samples, \(X = [x_1, x_2, ... x_N]\). The covariance matrix element \(C_{ij}\) is the covariance of \(x_i\) and \(x_j\). The element \(C_{ii}\) is the variance of \(x_i\) (i.e. its “spread”). Instead of specifying the full covariance matrix, popular approximations include: * Spherical covariance ([`cov`](../../generated/numpy.cov#numpy.cov "numpy.cov") is a multiple of the identity matrix) * Diagonal covariance ([`cov`](../../generated/numpy.cov#numpy.cov "numpy.cov") has non-negative elements, and only on the diagonal) This geometrical property can be seen in two dimensions by plotting generated data-points: ``` >>> mean = [0, 0] >>> cov = [[1, 0], [0, 100]] # diagonal covariance ``` Diagonal covariance means that points are oriented along x or y-axis: ``` >>> import matplotlib.pyplot as plt >>> x, y = np.random.default_rng().multivariate_normal(mean, cov, 5000).T >>> plt.plot(x, y, 'x') >>> plt.axis('equal') >>> plt.show() ``` Note that the covariance matrix must be positive semidefinite (a.k.a. nonnegative-definite). Otherwise, the behavior of this method is undefined and backwards compatibility is not guaranteed. #### References 1 <NAME>., “Probability, Random Variables, and Stochastic Processes,” 3rd ed., New York: McGraw-Hill, 1991. 2 <NAME>., <NAME>., and <NAME>., “Pattern Classification,” 2nd ed., New York: Wiley, 2001. 
#### Examples ``` >>> mean = (1, 2) >>> cov = [[1, 0], [0, 1]] >>> rng = np.random.default_rng() >>> x = rng.multivariate_normal(mean, cov, (3, 3)) >>> x.shape (3, 3, 2) ``` We can use a different method other than the default to factorize cov: ``` >>> y = rng.multivariate_normal(mean, cov, (3, 3), method='cholesky') >>> y.shape (3, 3, 2) ``` Here we generate 800 samples from the bivariate normal distribution with mean [0, 0] and covariance matrix [[6, -3], [-3, 3.5]]. The expected variances of the first and second components of the sample are 6 and 3.5, respectively, and the expected correlation coefficient is -3/sqrt(6*3.5) ≈ -0.65465. ``` >>> cov = np.array([[6, -3], [-3, 3.5]]) >>> pts = rng.multivariate_normal([0, 0], cov, size=800) ``` Check that the mean, covariance, and correlation coefficient of the sample are close to the expected values: ``` >>> pts.mean(axis=0) array([ 0.0326911 , -0.01280782]) # may vary >>> np.cov(pts.T) array([[ 5.96202397, -2.85602287], [-2.85602287, 3.47613949]]) # may vary >>> np.corrcoef(pts.T)[0, 1] -0.6273591314603949 # may vary ``` We can visualize this data with a scatter plot. The orientation of the point cloud illustrates the negative correlation of the components of this sample. ``` >>> import matplotlib.pyplot as plt >>> plt.plot(pts[:, 0], pts[:, 1], '.', alpha=0.5) >>> plt.axis('equal') >>> plt.grid() >>> plt.show() ``` ![../../../_images/numpy-random-Generator-multivariate_normal-1.png] © 2005–2022 NumPy Developers Licensed under the 3-clause BSD License. <https://numpy.org/doc/1.23/reference/random/generated/numpy.random.Generator.multivariate_normal.html

numpy.random.Generator.negative_binomial
========================================

method random.Generator.negative_binomial(*n*, *p*, *size=None*) Draw samples from a negative binomial distribution.
Samples are drawn from a negative binomial distribution with specified parameters, `n` successes and `p` probability of success where `n` is > 0 and `p` is in the interval (0, 1]. Parameters **n**float or array_like of floats Parameter of the distribution, > 0. **p**float or array_like of floats Parameter of the distribution. Must satisfy 0 < p <= 1. **size**int or tuple of ints, optional Output shape. If the given shape is, e.g., `(m, n, k)`, then `m * n * k` samples are drawn. If size is `None` (default), a single value is returned if `n` and `p` are both scalars. Otherwise, `np.broadcast(n, p).size` samples are drawn. Returns **out**ndarray or scalar Drawn samples from the parameterized negative binomial distribution, where each sample is equal to N, the number of failures that occurred before a total of n successes was reached. #### Notes The probability mass function of the negative binomial distribution is \[P(N;n,p) = \frac{\Gamma(N+n)}{N!\Gamma(n)}p^{n}(1-p)^{N},\] where \(n\) is the number of successes, \(p\) is the probability of success, \(N+n\) is the number of trials, and \(\Gamma\) is the gamma function. When \(n\) is an integer, \(\frac{\Gamma(N+n)}{N!\Gamma(n)} = \binom{N+n-1}{N}\), which is the more common form of this term in the pmf. The negative binomial distribution gives the probability of N failures given n successes, with a success on the last trial. If one throws a die repeatedly until the third time a “1” appears, then the probability distribution of the number of non-“1”s that appear before the third “1” is a negative binomial distribution. Because this method internally calls `Generator.poisson` with an intermediate random value, a ValueError is raised when the choice of \(n\) and \(p\) would result in the mean + 10 sigma of the sampled intermediate distribution exceeding the max acceptable value of the `Generator.poisson` method.
This happens when \(p\) is too low (a lot of failures happen for every success) and \(n\) is too big (a lot of successes are allowed). Therefore, the \(n\) and \(p\) values must satisfy the constraint: \[n\frac{1-p}{p}+10n\sqrt{n}\frac{1-p}{p}<2^{63}-1-10\sqrt{2^{63}-1},\] where the left side of the equation is the derived mean + 10 sigma of a sample from the gamma distribution internally used as the \(lam\) parameter of a poisson sample, and the right side of the equation is the constraint for the maximum value of \(lam\) in `Generator.poisson`. #### References 1 Weisstein, <NAME>. “Negative Binomial Distribution.” From MathWorld–A Wolfram Web Resource. <http://mathworld.wolfram.com/NegativeBinomialDistribution.html 2 Wikipedia, “Negative binomial distribution”, <https://en.wikipedia.org/wiki/Negative_binomial_distribution #### Examples Draw samples from the distribution: A real world example. A company drills wild-cat oil exploration wells, each with an estimated probability of success of 0.1. What is the probability of having one success for each successive well, that is what is the probability of a single success after drilling 5 wells, after 6 wells, etc.? ``` >>> s = np.random.default_rng().negative_binomial(1, 0.1, 100000) >>> for i in range(1, 11): ... probability = sum(s<i) / 100000. ... print(i, "wells drilled, probability of one success =", probability) ``` <https://numpy.org/doc/1.23/reference/random/generated/numpy.random.Generator.negative_binomial.html
numpy.random.Generator.noncentral_chisquare ============================================ method random.Generator.noncentral_chisquare(*df*, *nonc*, *size=None*) Draw samples from a noncentral chi-square distribution. The noncentral \(\chi^2\) distribution is a generalization of the \(\chi^2\) distribution. Parameters **df**float or array_like of floats Degrees of freedom, must be > 0.
Changed in version 1.10.0: Earlier NumPy versions required dfnum > 1. **nonc**float or array_like of floats Non-centrality, must be non-negative. **size**int or tuple of ints, optional Output shape. If the given shape is, e.g., `(m, n, k)`, then `m * n * k` samples are drawn. If size is `None` (default), a single value is returned if `df` and `nonc` are both scalars. Otherwise, `np.broadcast(df, nonc).size` samples are drawn. Returns **out**ndarray or scalar Drawn samples from the parameterized noncentral chi-square distribution. #### Notes The probability density function for the noncentral Chi-square distribution is \[P(x;df,nonc) = \sum^{\infty}_{i=0} \frac{e^{-nonc/2}(nonc/2)^{i}}{i!} P_{Y_{df+2i}}(x),\] where \(Y_{q}\) is the Chi-square with q degrees of freedom. #### References 1 Wikipedia, “Noncentral chi-squared distribution” <https://en.wikipedia.org/wiki/Noncentral_chi-squared_distribution #### Examples Draw values from the distribution and plot the histogram ``` >>> rng = np.random.default_rng() >>> import matplotlib.pyplot as plt >>> values = plt.hist(rng.noncentral_chisquare(3, 20, 100000), ... bins=200, density=True) >>> plt.show() ``` ![../../../_images/numpy-random-Generator-noncentral_chisquare-1_00_00.png] Draw values from a noncentral chisquare with very small noncentrality, and compare to a chisquare. ``` >>> plt.figure() >>> values = plt.hist(rng.noncentral_chisquare(3, .0000001, 100000), ... bins=np.arange(0., 25, .1), density=True) >>> values2 = plt.hist(rng.chisquare(3, 100000), ... bins=np.arange(0., 25, .1), density=True) >>> plt.plot(values[1][0:-1], values[0]-values2[0], 'ob') >>> plt.show() ``` ![../../../_images/numpy-random-Generator-noncentral_chisquare-1_01_00.png] Demonstrate how large values of non-centrality lead to a more symmetric distribution. ``` >>> plt.figure() >>> values = plt.hist(rng.noncentral_chisquare(3, 20, 100000), ... 
bins=200, density=True) >>> plt.show() ``` ![../../../_images/numpy-random-Generator-noncentral_chisquare-1_02_00.png] © 2005–2022 NumPy Developers Licensed under the 3-clause BSD License. <https://numpy.org/doc/1.23/reference/random/generated/numpy.random.Generator.noncentral_chisquare.htmlnumpy.random.Generator.noncentral_f ==================================== method random.Generator.noncentral_f(*dfnum*, *dfden*, *nonc*, *size=None*) Draw samples from the noncentral F distribution. Samples are drawn from an F distribution with specified parameters, `dfnum` (degrees of freedom in numerator) and `dfden` (degrees of freedom in denominator), where both parameters > 1. `nonc` is the non-centrality parameter. Parameters **dfnum**float or array_like of floats Numerator degrees of freedom, must be > 0. Changed in version 1.14.0: Earlier NumPy versions required dfnum > 1. **dfden**float or array_like of floats Denominator degrees of freedom, must be > 0. **nonc**float or array_like of floats Non-centrality parameter, the sum of the squares of the numerator means, must be >= 0. **size**int or tuple of ints, optional Output shape. If the given shape is, e.g., `(m, n, k)`, then `m * n * k` samples are drawn. If size is `None` (default), a single value is returned if `dfnum`, `dfden`, and `nonc` are all scalars. Otherwise, `np.broadcast(dfnum, dfden, nonc).size` samples are drawn. Returns **out**ndarray or scalar Drawn samples from the parameterized noncentral Fisher distribution. #### Notes When calculating the power of an experiment (power = probability of rejecting the null hypothesis when a specific alternative is true) the non-central F statistic becomes important. When the null hypothesis is true, the F statistic follows a central F distribution. When the null hypothesis is not true, then it follows a non-central F statistic. #### References 1 Weisstein, <NAME>. “Noncentral F-Distribution.” From MathWorld–A Wolfram Web Resource. 
<http://mathworld.wolfram.com/NoncentralF-Distribution.html 2 Wikipedia, “Noncentral F-distribution”, <https://en.wikipedia.org/wiki/Noncentral_F-distribution #### Examples In a study, testing for a specific alternative to the null hypothesis requires use of the Noncentral F distribution. We need to calculate the area in the tail of the distribution that exceeds the value of the F distribution for the null hypothesis. We’ll plot the two probability distributions for comparison. ``` >>> rng = np.random.default_rng() >>> dfnum = 3 # between group deg of freedom >>> dfden = 20 # within groups degrees of freedom >>> nonc = 3.0 >>> nc_vals = rng.noncentral_f(dfnum, dfden, nonc, 1000000) >>> NF = np.histogram(nc_vals, bins=50, density=True) >>> c_vals = rng.f(dfnum, dfden, 1000000) >>> F = np.histogram(c_vals, bins=50, density=True) >>> import matplotlib.pyplot as plt >>> plt.plot(F[1][1:], F[0]) >>> plt.plot(NF[1][1:], NF[0]) >>> plt.show() ``` ![../../../_images/numpy-random-Generator-noncentral_f-1.png] © 2005–2022 NumPy Developers Licensed under the 3-clause BSD License. <https://numpy.org/doc/1.23/reference/random/generated/numpy.random.Generator.noncentral_f.htmlnumpy.random.Generator.normal ============================= method random.Generator.normal(*loc=0.0*, *scale=1.0*, *size=None*) Draw random samples from a normal (Gaussian) distribution. The probability density function of the normal distribution, first derived by De Moivre and 200 years later by both Gauss and Laplace independently [[2]](#r1536f9c044a3-2), is often called the bell curve because of its characteristic shape (see the example below). The normal distributions occurs often in nature. For example, it describes the commonly occurring distribution of samples influenced by a large number of tiny, random disturbances, each with its own unique distribution [[2]](#r1536f9c044a3-2). Parameters **loc**float or array_like of floats Mean (“centre”) of the distribution. 
**scale**float or array_like of floats Standard deviation (spread or “width”) of the distribution. Must be non-negative. **size**int or tuple of ints, optional Output shape. If the given shape is, e.g., `(m, n, k)`, then `m * n * k` samples are drawn. If size is `None` (default), a single value is returned if `loc` and `scale` are both scalars. Otherwise, `np.broadcast(loc, scale).size` samples are drawn. Returns **out**ndarray or scalar Drawn samples from the parameterized normal distribution. See also [`scipy.stats.norm`](https://docs.scipy.org/doc/scipy/reference/generated/scipy.stats.norm.html#scipy.stats.norm "(in SciPy v1.8.1)") probability density function, distribution or cumulative density function, etc. #### Notes The probability density for the Gaussian distribution is \[p(x) = \frac{1}{\sqrt{ 2 \pi \sigma^2 }} e^{ - \frac{ (x - \mu)^2 } {2 \sigma^2} },\] where \(\mu\) is the mean and \(\sigma\) the standard deviation. The square of the standard deviation, \(\sigma^2\), is called the variance. The function has its peak at the mean, and its “spread” increases with the standard deviation (the function reaches 0.607 times its maximum at \(x + \sigma\) and \(x - \sigma\) [[2]](#r1536f9c044a3-2)). This implies that [`normal`](#numpy.random.Generator.normal "numpy.random.Generator.normal") is more likely to return samples lying close to the mean, rather than those far away. #### References 1 Wikipedia, “Normal distribution”, <https://en.wikipedia.org/wiki/Normal_distribution2([1](#id1),[2](#id2),[3](#id3)) <NAME>., “Central Limit Theorem” in “Probability, Random Variables and Random Signal Principles”, 4th ed., 2001, pp. 51, 51, 125. 
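The notes above can be checked numerically: because the density falls off symmetrically around the mean, roughly 68.3% of normal draws land within one standard deviation of it. A minimal sketch (the seed and sample size are arbitrary choices, not from the original docs):

```python
import numpy as np

# Draw from N(mu, sigma^2) and measure the fraction within one sigma.
mu, sigma = 2.0, 3.0
rng = np.random.default_rng(12345)
s = rng.normal(mu, sigma, 200_000)

# Theory predicts ~0.6827 for any normal distribution.
within_one_sigma = np.mean(np.abs(s - mu) < sigma)
```

The fraction is independent of `loc` and `scale`, which is one way to see that `normal(mu, sigma)` is just a shifted and rescaled standard normal.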
#### Examples Draw samples from the distribution: ``` >>> mu, sigma = 0, 0.1 # mean and standard deviation >>> s = np.random.default_rng().normal(mu, sigma, 1000) ``` Verify the mean and the variance: ``` >>> abs(mu - np.mean(s)) 0.0 # may vary ``` ``` >>> abs(sigma - np.std(s, ddof=1)) 0.0 # may vary ``` Display the histogram of the samples, along with the probability density function: ``` >>> import matplotlib.pyplot as plt >>> count, bins, ignored = plt.hist(s, 30, density=True) >>> plt.plot(bins, 1/(sigma * np.sqrt(2 * np.pi)) * ... np.exp( - (bins - mu)**2 / (2 * sigma**2) ), ... linewidth=2, color='r') >>> plt.show() ``` ![../../../_images/numpy-random-Generator-normal-1_00_00.png] Two-by-four array of samples from \(N(3, 6.25)\): ``` >>> np.random.default_rng().normal(3, 2.5, size=(2, 4)) array([[-4.49401501, 4.00950034, -1.81814867, 7.29718677], # random [ 0.39924804, 4.68456316, 4.99394529, 4.84057254]]) # random ``` <https://numpy.org/doc/1.23/reference/random/generated/numpy.random.Generator.normal.html
numpy.random.Generator.pareto ============================= method random.Generator.pareto(*a*, *size=None*) Draw samples from a Pareto II or Lomax distribution with specified shape. The Lomax or Pareto II distribution is a shifted Pareto distribution. The classical Pareto distribution can be obtained from the Lomax distribution by adding 1 and multiplying by the scale parameter `m` (see Notes). The smallest value of the Lomax distribution is zero while for the classical Pareto distribution it is `mu`, where the standard Pareto distribution has location `mu = 1`. Lomax can also be considered as a simplified version of the Generalized Pareto distribution (available in SciPy), with the scale set to one and the location set to zero. The Pareto distribution must be greater than zero, and is unbounded above. It is also known as the “80-20 rule”.
In this distribution, 80 percent of the weights are in the lowest 20 percent of the range, while the other 20 percent fill the remaining 80 percent of the range. Parameters **a**float or array_like of floats Shape of the distribution. Must be positive. **size**int or tuple of ints, optional Output shape. If the given shape is, e.g., `(m, n, k)`, then `m * n * k` samples are drawn. If size is `None` (default), a single value is returned if `a` is a scalar. Otherwise, `np.array(a).size` samples are drawn. Returns **out**ndarray or scalar Drawn samples from the parameterized Pareto distribution. See also [`scipy.stats.lomax`](https://docs.scipy.org/doc/scipy/reference/generated/scipy.stats.lomax.html#scipy.stats.lomax "(in SciPy v1.8.1)") probability density function, distribution or cumulative density function, etc. [`scipy.stats.genpareto`](https://docs.scipy.org/doc/scipy/reference/generated/scipy.stats.genpareto.html#scipy.stats.genpareto "(in SciPy v1.8.1)") probability density function, distribution or cumulative density function, etc. #### Notes The probability density for the Pareto distribution is \[p(x) = \frac{am^a}{x^{a+1}}\] where \(a\) is the shape and \(m\) the scale. The Pareto distribution, named after the Italian economist <NAME>, is a power law probability distribution useful in many real world problems. Outside the field of economics it is generally referred to as the Bradford distribution. Pareto developed the distribution to describe the distribution of wealth in an economy. It has also found use in insurance, web page access statistics, oil field sizes, and many other problems, including the download frequency for projects in Sourceforge [[1]](#rc338d9f74bfc-1). It is one of the so-called “fat-tailed” distributions. #### References [1](#id1) <NAME> and <NAME>, On the Pareto Distribution of Sourceforge projects. 2 <NAME>. (1896). Course of Political Economy. Lausanne. 
3 <NAME>., <NAME>.(2001), Statistical Analysis of Extreme Values, <NAME>, Basel, pp 23-30. 4 Wikipedia, “Pareto distribution”, <https://en.wikipedia.org/wiki/Pareto_distribution #### Examples Draw samples from the distribution: ``` >>> a, m = 3., 2. # shape and mode >>> s = (np.random.default_rng().pareto(a, 1000) + 1) * m ``` Display the histogram of the samples, along with the probability density function: ``` >>> import matplotlib.pyplot as plt >>> count, bins, _ = plt.hist(s, 100, density=True) >>> fit = a*m**a / bins**(a+1) >>> plt.plot(bins, max(count)*fit/max(fit), linewidth=2, color='r') >>> plt.show() ``` ![../../../_images/numpy-random-Generator-pareto-1.png] © 2005–2022 NumPy Developers Licensed under the 3-clause BSD License. <https://numpy.org/doc/1.23/reference/random/generated/numpy.random.Generator.pareto.htmlnumpy.random.Generator.poisson ============================== method random.Generator.poisson(*lam=1.0*, *size=None*) Draw samples from a Poisson distribution. The Poisson distribution is the limit of the binomial distribution for large N. Parameters **lam**float or array_like of floats Expected number of events occurring in a fixed-time interval, must be >= 0. A sequence must be broadcastable over the requested size. **size**int or tuple of ints, optional Output shape. If the given shape is, e.g., `(m, n, k)`, then `m * n * k` samples are drawn. If size is `None` (default), a single value is returned if `lam` is a scalar. Otherwise, `np.array(lam).size` samples are drawn. Returns **out**ndarray or scalar Drawn samples from the parameterized Poisson distribution. #### Notes The Poisson distribution \[f(k; \lambda)=\frac{\lambda^k e^{-\lambda}}{k!}\] For events with an expected separation \(\lambda\) the Poisson distribution \(f(k; \lambda)\) describes the probability of \(k\) events occurring within the observed interval \(\lambda\). 
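The pmf above can be checked empirically by comparing the observed frequency of \(k\) events against \(\lambda^k e^{-\lambda}/k!\). A quick sketch (the choices of `lam`, `k`, and seed are arbitrary, not from the original docs):

```python
import math
import numpy as np

lam = 4.0
rng = np.random.default_rng(0)
s = rng.poisson(lam, 500_000)

# Empirical frequency of exactly k events vs. the Poisson pmf.
k = 3
empirical = np.mean(s == k)
theoretical = lam**k * math.exp(-lam) / math.factorial(k)
```

With half a million draws the two values should agree to roughly three decimal places.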
Because the output is limited to the range of the C int64 type, a ValueError is raised when `lam` is within 10 sigma of the maximum representable value. #### References 1 Weisstein, <NAME>. “Poisson Distribution.” From MathWorld–A Wolfram Web Resource. <http://mathworld.wolfram.com/PoissonDistribution.html 2 Wikipedia, “Poisson distribution”, <https://en.wikipedia.org/wiki/Poisson_distribution #### Examples Draw samples from the distribution: ``` >>> import numpy as np >>> rng = np.random.default_rng() >>> s = rng.poisson(5, 10000) ``` Display histogram of the sample: ``` >>> import matplotlib.pyplot as plt >>> count, bins, ignored = plt.hist(s, 14, density=True) >>> plt.show() ``` ![../../../_images/numpy-random-Generator-poisson-1_00_00.png] Draw each 100 values for lambda 100 and 500: ``` >>> s = rng.poisson(lam=(100., 500.), size=(100, 2)) ``` © 2005–2022 NumPy Developers Licensed under the 3-clause BSD License. <https://numpy.org/doc/1.23/reference/random/generated/numpy.random.Generator.poisson.htmlnumpy.random.Generator.power ============================ method random.Generator.power(*a*, *size=None*) Draws samples in [0, 1] from a power distribution with positive exponent a - 1. Also known as the power function distribution. Parameters **a**float or array_like of floats Parameter of the distribution. Must be non-negative. **size**int or tuple of ints, optional Output shape. If the given shape is, e.g., `(m, n, k)`, then `m * n * k` samples are drawn. If size is `None` (default), a single value is returned if `a` is a scalar. Otherwise, `np.array(a).size` samples are drawn. Returns **out**ndarray or scalar Drawn samples from the parameterized power distribution. Raises ValueError If a <= 0. #### Notes The probability density function is \[P(x; a) = ax^{a-1}, 0 \le x \le 1, a>0.\] The power function distribution is just the inverse of the Pareto distribution. It may also be seen as a special case of the Beta distribution. 
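From the density \(P(x; a) = ax^{a-1}\) on \([0, 1]\), the mean works out to \(a/(a+1)\). A quick empirical check (a sketch with an arbitrary shape and seed, not part of the original docs):

```python
import numpy as np

a = 5.0
rng = np.random.default_rng(42)
s = rng.power(a, 300_000)

# All samples lie in [0, 1]; the mean of the power distribution is a/(a+1).
sample_mean = s.mean()
expected_mean = a / (a + 1)
```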
It is used, for example, in modeling the over-reporting of insurance claims. #### References 1 <NAME>, <NAME>, “Statistical size distributions in economics and actuarial sciences”, Wiley, 2003. 2 <NAME>. and Filliben, <NAME>. “NIST Handbook 148: Dataplot Reference Manual, Volume 2: Let Subcommands and Library Functions”, National Institute of Standards and Technology Handbook Series, June 2003. <https://www.itl.nist.gov/div898/software/dataplot/refman2/auxillar/powpdf.pdf #### Examples Draw samples from the distribution: ``` >>> rng = np.random.default_rng() >>> a = 5. # shape >>> samples = 1000 >>> s = rng.power(a, samples) ``` Display the histogram of the samples, along with the probability density function: ``` >>> import matplotlib.pyplot as plt >>> count, bins, ignored = plt.hist(s, bins=30) >>> x = np.linspace(0, 1, 100) >>> y = a*x**(a-1.) >>> normed_y = samples*np.diff(bins)[0]*y >>> plt.plot(x, normed_y) >>> plt.show() ``` ![../../../_images/numpy-random-Generator-power-1_00_00.png] Compare the power function distribution to the inverse of the Pareto. ``` >>> from scipy import stats >>> rvs = rng.power(5, 1000000) >>> rvsp = rng.pareto(5, 1000000) >>> xx = np.linspace(0,1,100) >>> powpdf = stats.powerlaw.pdf(xx,5) ``` ``` >>> plt.figure() >>> plt.hist(rvs, bins=50, density=True) >>> plt.plot(xx,powpdf,'r-') >>> plt.title('power(5)') ``` ``` >>> plt.figure() >>> plt.hist(1./(1.+rvsp), bins=50, density=True) >>> plt.plot(xx,powpdf,'r-') >>> plt.title('inverse of 1 + Generator.pareto(5)') ``` ``` >>> plt.figure() >>> plt.hist(1./(1.+rvsp), bins=50, density=True) >>> plt.plot(xx,powpdf,'r-') >>> plt.title('inverse of stats.pareto(5)') ``` ![../../../_images/numpy-random-Generator-power-1_01_00.png] © 2005–2022 NumPy Developers Licensed under the 3-clause BSD License. 
<https://numpy.org/doc/1.23/reference/random/generated/numpy.random.Generator.power.htmlnumpy.random.Generator.rayleigh =============================== method random.Generator.rayleigh(*scale=1.0*, *size=None*) Draw samples from a Rayleigh distribution. The \(\chi\) and Weibull distributions are generalizations of the Rayleigh. Parameters **scale**float or array_like of floats, optional Scale, also equals the mode. Must be non-negative. Default is 1. **size**int or tuple of ints, optional Output shape. If the given shape is, e.g., `(m, n, k)`, then `m * n * k` samples are drawn. If size is `None` (default), a single value is returned if `scale` is a scalar. Otherwise, `np.array(scale).size` samples are drawn. Returns **out**ndarray or scalar Drawn samples from the parameterized Rayleigh distribution. #### Notes The probability density function for the Rayleigh distribution is \[P(x;scale) = \frac{x}{scale^2}e^{\frac{-x^2}{2 \cdotp scale^2}}\] The Rayleigh distribution would arise, for example, if the East and North components of the wind velocity had identical zero-mean Gaussian distributions. Then the wind speed would have a Rayleigh distribution. #### References 1 Brighton Webs Ltd., “Rayleigh Distribution,” <https://web.archive.org/web/20090514091424/http://brighton-webs.co.uk:80/distributions/rayleigh.asp 2 Wikipedia, “Rayleigh distribution” <https://en.wikipedia.org/wiki/Rayleigh_distribution #### Examples Draw values from the distribution and plot the histogram ``` >>> from matplotlib.pyplot import hist >>> rng = np.random.default_rng() >>> values = hist(rng.rayleigh(3, 100000), bins=200, density=True) ``` Wave heights tend to follow a Rayleigh distribution. If the mean wave height is 1 meter, what fraction of waves are likely to be larger than 3 meters? ``` >>> meanvalue = 1 >>> modevalue = np.sqrt(2 / np.pi) * meanvalue >>> s = rng.rayleigh(modevalue, 1000000) ``` The percentage of waves larger than 3 meters is: ``` >>> 100.*sum(s>3)/1000000. 
0.087300000000000003 # random ``` © 2005–2022 NumPy Developers Licensed under the 3-clause BSD License. <https://numpy.org/doc/1.23/reference/random/generated/numpy.random.Generator.rayleigh.htmlnumpy.random.Generator.standard_cauchy ======================================= method random.Generator.standard_cauchy(*size=None*) Draw samples from a standard Cauchy distribution with mode = 0. Also known as the Lorentz distribution. Parameters **size**int or tuple of ints, optional Output shape. If the given shape is, e.g., `(m, n, k)`, then `m * n * k` samples are drawn. Default is None, in which case a single value is returned. Returns **samples**ndarray or scalar The drawn samples. #### Notes The probability density function for the full Cauchy distribution is \[P(x; x_0, \gamma) = \frac{1}{\pi \gamma \bigl[ 1+ (\frac{x-x_0}{\gamma})^2 \bigr] }\] and the Standard Cauchy distribution just sets \(x_0=0\) and \(\gamma=1\) The Cauchy distribution arises in the solution to the driven harmonic oscillator problem, and also describes spectral line broadening. It also describes the distribution of values at which a line tilted at a random angle will cut the x axis. When studying hypothesis tests that assume normality, seeing how the tests perform on data from a Cauchy distribution is a good indicator of their sensitivity to a heavy-tailed distribution, since the Cauchy looks very much like a Gaussian distribution, but with heavier tails. #### References 1 NIST/SEMATECH e-Handbook of Statistical Methods, “Cauchy Distribution”, <https://www.itl.nist.gov/div898/handbook/eda/section3/eda3663.htm 2 Weisstein, <NAME>. “Cauchy Distribution.” From MathWorld–A Wolfram Web Resource. 
<http://mathworld.wolfram.com/CauchyDistribution.html 3 Wikipedia, “Cauchy distribution” <https://en.wikipedia.org/wiki/Cauchy_distribution #### Examples Draw samples and plot the distribution: ``` >>> import matplotlib.pyplot as plt >>> s = np.random.default_rng().standard_cauchy(1000000) >>> s = s[(s>-25) & (s<25)] # truncate distribution so it plots well >>> plt.hist(s, bins=100) >>> plt.show() ``` ![../../../_images/numpy-random-Generator-standard_cauchy-1.png] © 2005–2022 NumPy Developers Licensed under the 3-clause BSD License. <https://numpy.org/doc/1.23/reference/random/generated/numpy.random.Generator.standard_cauchy.htmlnumpy.random.Generator.standard_exponential ============================================ method random.Generator.standard_exponential(*size=None*, *dtype=np.float64*, *method='zig'*, *out=None*) Draw samples from the standard exponential distribution. [`standard_exponential`](#numpy.random.Generator.standard_exponential "numpy.random.Generator.standard_exponential") is identical to the exponential distribution with a scale parameter of 1. Parameters **size**int or tuple of ints, optional Output shape. If the given shape is, e.g., `(m, n, k)`, then `m * n * k` samples are drawn. Default is None, in which case a single value is returned. **dtype**dtype, optional Desired dtype of the result, only [`float64`](../../arrays.scalars#numpy.float64 "numpy.float64") and [`float32`](../../arrays.scalars#numpy.float32 "numpy.float32") are supported. Byteorder must be native. The default value is np.float64. **method**str, optional Either ‘inv’ or ‘zig’. ‘inv’ uses the default inverse CDF method. ‘zig’ uses the much faster Ziggurat method of Marsaglia and Tsang. **out**ndarray, optional Alternative output array in which to place the result. If size is not None, it must have the same shape as the provided size and must match the type of the output values. Returns **out**float or ndarray Drawn samples. 
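The `method` and `dtype` parameters described above can be exercised directly. A minimal sketch (seed and sample sizes are arbitrary): both factorization methods draw from the same standard exponential, whose mean is 1.

```python
import numpy as np

rng = np.random.default_rng(7)

# Ziggurat (default) and inverse-CDF methods sample the same distribution.
zig = rng.standard_exponential(100_000, method='zig')
inv = rng.standard_exponential(100_000, method='inv')

# float32 output is supported; the scale is fixed at 1,
# so the sample mean should be close to 1 either way.
f32 = rng.standard_exponential(1_000, dtype=np.float32)
```

The 'zig' method is usually faster; 'inv' is mainly useful for reproducing streams that were generated with the inverse-CDF approach.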
#### Examples Output a 3x8000 array: ``` >>> n = np.random.default_rng().standard_exponential((3, 8000)) ``` © 2005–2022 NumPy Developers Licensed under the 3-clause BSD License. <https://numpy.org/doc/1.23/reference/random/generated/numpy.random.Generator.standard_exponential.htmlnumpy.random.Generator.standard_gamma ====================================== method random.Generator.standard_gamma(*shape*, *size=None*, *dtype=np.float64*, *out=None*) Draw samples from a standard Gamma distribution. Samples are drawn from a Gamma distribution with specified parameters, shape (sometimes designated “k”) and scale=1. Parameters **shape**float or array_like of floats Parameter, must be non-negative. **size**int or tuple of ints, optional Output shape. If the given shape is, e.g., `(m, n, k)`, then `m * n * k` samples are drawn. If size is `None` (default), a single value is returned if `shape` is a scalar. Otherwise, `np.array(shape).size` samples are drawn. **dtype**dtype, optional Desired dtype of the result, only [`float64`](../../arrays.scalars#numpy.float64 "numpy.float64") and [`float32`](../../arrays.scalars#numpy.float32 "numpy.float32") are supported. Byteorder must be native. The default value is np.float64. **out**ndarray, optional Alternative output array in which to place the result. If size is not None, it must have the same shape as the provided size and must match the type of the output values. Returns **out**ndarray or scalar Drawn samples from the parameterized standard gamma distribution. See also [`scipy.stats.gamma`](https://docs.scipy.org/doc/scipy/reference/generated/scipy.stats.gamma.html#scipy.stats.gamma "(in SciPy v1.8.1)") probability density function, distribution or cumulative density function, etc. #### Notes The probability density for the Gamma distribution is \[p(x) = x^{k-1}\frac{e^{-x/\theta}}{\theta^k\Gamma(k)},\] where \(k\) is the shape and \(\theta\) the scale, and \(\Gamma\) is the Gamma function. 
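With \(\theta = 1\), the density above gives mean and variance both equal to the shape \(k\). A quick numerical check (a sketch with an arbitrary shape and seed, not from the original docs):

```python
import numpy as np

k = 2.5  # shape parameter; with scale fixed at 1, mean = k and variance = k
rng = np.random.default_rng(3)
s = rng.standard_gamma(k, 400_000)

sample_mean = s.mean()
sample_var = s.var()
```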
The Gamma distribution is often used to model the times to failure of electronic components, and arises naturally in processes for which the waiting times between Poisson distributed events are relevant. #### References 1 Weisstein, <NAME>. “Gamma Distribution.” From MathWorld–A Wolfram Web Resource. <http://mathworld.wolfram.com/GammaDistribution.html 2 Wikipedia, “Gamma distribution”, <https://en.wikipedia.org/wiki/Gamma_distribution #### Examples Draw samples from the distribution: ``` >>> shape, scale = 2., 1. # mean and width >>> s = np.random.default_rng().standard_gamma(shape, 1000000) ``` Display the histogram of the samples, along with the probability density function: ``` >>> import matplotlib.pyplot as plt >>> import scipy.special as sps >>> count, bins, ignored = plt.hist(s, 50, density=True) >>> y = bins**(shape-1) * ((np.exp(-bins/scale))/ ... (sps.gamma(shape) * scale**shape)) >>> plt.plot(bins, y, linewidth=2, color='r') >>> plt.show() ``` ![../../../_images/numpy-random-Generator-standard_gamma-1.png] © 2005–2022 NumPy Developers Licensed under the 3-clause BSD License. <https://numpy.org/doc/1.23/reference/random/generated/numpy.random.Generator.standard_gamma.htmlnumpy.random.Generator.standard_normal ======================================= method random.Generator.standard_normal(*size=None*, *dtype=np.float64*, *out=None*) Draw samples from a standard Normal distribution (mean=0, stdev=1). Parameters **size**int or tuple of ints, optional Output shape. If the given shape is, e.g., `(m, n, k)`, then `m * n * k` samples are drawn. Default is None, in which case a single value is returned. **dtype**dtype, optional Desired dtype of the result, only [`float64`](../../arrays.scalars#numpy.float64 "numpy.float64") and [`float32`](../../arrays.scalars#numpy.float32 "numpy.float32") are supported. Byteorder must be native. The default value is np.float64. **out**ndarray, optional Alternative output array in which to place the result. 
If size is not None, it must have the same shape as the provided size and must match the type of the output values. Returns **out**float or ndarray A floating-point array of shape `size` of drawn samples, or a single sample if `size` was not specified. See also [`normal`](numpy.random.generator.normal#numpy.random.Generator.normal "numpy.random.Generator.normal") Equivalent function with additional `loc` and `scale` arguments for setting the mean and standard deviation. #### Notes For random samples from \(N(\mu, \sigma^2)\), use one of: ``` mu + sigma * rng.standard_normal(size=...) rng.normal(mu, sigma, size=...) ``` #### Examples ``` >>> rng = np.random.default_rng() >>> rng.standard_normal() 2.1923875335537315 # random ``` ``` >>> s = rng.standard_normal(8000) >>> s array([ 0.6888893 , 0.78096262, -0.89086505, ..., 0.49876311, # random -0.38672696, -0.4685006 ]) # random >>> s.shape (8000,) >>> s = rng.standard_normal(size=(3, 4, 2)) >>> s.shape (3, 4, 2) ``` Two-by-four array of samples from \(N(3, 6.25)\): ``` >>> 3 + 2.5 * rng.standard_normal(size=(2, 4)) array([[-4.49401501, 4.00950034, -1.81814867, 7.29718677], # random [ 0.39924804, 4.68456316, 4.99394529, 4.84057254]]) # random ``` © 2005–2022 NumPy Developers Licensed under the 3-clause BSD License. <https://numpy.org/doc/1.23/reference/random/generated/numpy.random.Generator.standard_normal.htmlnumpy.random.Generator.standard_t ================================== method random.Generator.standard_t(*df*, *size=None*) Draw samples from a standard Student’s t distribution with `df` degrees of freedom. A special case of the hyperbolic distribution. As `df` gets large, the result resembles that of the standard normal distribution ([`standard_normal`](numpy.random.generator.standard_normal#numpy.random.Generator.standard_normal "numpy.random.Generator.standard_normal")). Parameters **df**float or array_like of floats Degrees of freedom, must be > 0. **size**int or tuple of ints, optional Output shape. 
If the given shape is, e.g., `(m, n, k)`, then `m * n * k` samples are drawn. If size is `None` (default), a single value is returned if `df` is a scalar. Otherwise, `np.array(df).size` samples are drawn. Returns **out**ndarray or scalar Drawn samples from the parameterized standard Student’s t distribution. #### Notes The probability density function for the t distribution is \[P(x, df) = \frac{\Gamma(\frac{df+1}{2})}{\sqrt{\pi df} \Gamma(\frac{df}{2})}\Bigl( 1+\frac{x^2}{df} \Bigr)^{-(df+1)/2}\] The t test is based on an assumption that the data come from a Normal distribution. The t test provides a way to test whether the sample mean (that is the mean calculated from the data) is a good estimate of the true mean. The derivation of the t-distribution was first published in 1908 by <NAME> while working for the Guinness Brewery in Dublin. Due to proprietary issues, he had to publish under a pseudonym, and so he used the name Student. #### References [1](#id3) Dalgaard, Peter, “Introductory Statistics With R”, Springer, 2002. 2 Wikipedia, “Student’s t-distribution” [https://en.wikipedia.org/wiki/Student’s_t-distribution](https://en.wikipedia.org/wiki/Student's_t-distribution) #### Examples From Dalgaard page 83 [[1]](#rb7c952f3992e-1), suppose the daily energy intake for 11 women in kilojoules (kJ) is: ``` >>> intake = np.array([5260., 5470, 5640, 6180, 6390, 6515, 6805, 7515, \ ... 7515, 8230, 8770]) ``` Does their energy intake deviate systematically from the recommended value of 7725 kJ? Our null hypothesis will be the absence of deviation, and the alternate hypothesis will be the presence of an effect that could be either positive or negative, hence making our test 2-tailed. Because we are estimating the mean and we have N=11 values in our sample, we have N-1=10 degrees of freedom. We set our significance level to 95% and compute the t statistic using the empirical mean and empirical standard deviation of our intake. 
We use a ddof of 1 to base the computation of our empirical standard deviation on an unbiased estimate of the variance (note: the final estimate is not unbiased due to the concave nature of the square root).

```
>>> np.mean(intake)
6753.636363636364
>>> intake.std(ddof=1)
1142.1232221373727
>>> t = (np.mean(intake)-7725)/(intake.std(ddof=1)/np.sqrt(len(intake)))
>>> t
-2.8207540608310198
```

We draw 1000000 samples from Student's t distribution with the adequate degrees of freedom.

```
>>> import matplotlib.pyplot as plt
>>> s = np.random.default_rng().standard_t(10, size=1000000)
>>> h = plt.hist(s, bins=100, density=True)
```

Does our t statistic land in one of the two critical regions found at both tails of the distribution?

```
>>> np.sum(np.abs(t) < np.abs(s)) / float(len(s))
0.018318  # random < 0.05, statistic is in critical region
```

The probability value for this 2-tailed test is about 1.83%, which is lower than the 5% pre-determined significance threshold. Therefore, the probability of observing values as extreme as our intake conditionally on the null hypothesis being true is too low, and we reject the null hypothesis of no deviation.

![histogram](../../../_images/numpy-random-Generator-standard_t-1.png)

<https://numpy.org/doc/1.23/reference/random/generated/numpy.random.Generator.standard_t.html>

numpy.random.Generator.triangular
=================================

method

random.Generator.triangular(*left*, *mode*, *right*, *size=None*)

Draw samples from the triangular distribution over the interval `[left, right]`. The triangular distribution is a continuous probability distribution with lower limit left, peak at mode, and upper limit right. Unlike the other distributions, these parameters directly define the shape of the pdf.

Parameters

**left**float or array_like of floats

Lower limit.

**mode**float or array_like of floats

The value where the peak of the distribution occurs.
The value must fulfill the condition `left <= mode <= right`.

**right**float or array_like of floats

Upper limit, must be larger than `left`.

**size**int or tuple of ints, optional

Output shape. If the given shape is, e.g., `(m, n, k)`, then `m * n * k` samples are drawn. If size is `None` (default), a single value is returned if `left`, `mode`, and `right` are all scalars. Otherwise, `np.broadcast(left, mode, right).size` samples are drawn.

Returns

**out**ndarray or scalar

Drawn samples from the parameterized triangular distribution.

#### Notes

The probability density function for the triangular distribution is

\[\begin{split}P(x;l, m, r) = \begin{cases} \frac{2(x-l)}{(r-l)(m-l)}& \text{for $l \leq x \leq m$},\\ \frac{2(r-x)}{(r-l)(r-m)}& \text{for $m \leq x \leq r$},\\ 0& \text{otherwise}. \end{cases}\end{split}\]

The triangular distribution is often used in ill-defined problems where the underlying distribution is not known, but some knowledge of the limits and mode exists. Often it is used in simulations.

#### References

1 Wikipedia, "Triangular distribution" <https://en.wikipedia.org/wiki/Triangular_distribution>

#### Examples

Draw values from the distribution and plot the histogram:

```
>>> import matplotlib.pyplot as plt
>>> h = plt.hist(np.random.default_rng().triangular(-3, 0, 8, 100000), bins=200,
...              density=True)
>>> plt.show()
```

![histogram](../../../_images/numpy-random-Generator-triangular-1.png)

<https://numpy.org/doc/1.23/reference/random/generated/numpy.random.Generator.triangular.html>

numpy.random.Generator.uniform
==============================

method

random.Generator.uniform(*low=0.0*, *high=1.0*, *size=None*)

Draw samples from a uniform distribution. Samples are uniformly distributed over the half-open interval `[low, high)` (includes low, but excludes high).
In other words, any value within the given interval is equally likely to be drawn by [`uniform`](#numpy.random.Generator.uniform "numpy.random.Generator.uniform"). Parameters **low**float or array_like of floats, optional Lower boundary of the output interval. All values generated will be greater than or equal to low. The default value is 0. **high**float or array_like of floats Upper boundary of the output interval. All values generated will be less than high. The high limit may be included in the returned array of floats due to floating-point rounding in the equation `low + (high-low) * random_sample()`. high - low must be non-negative. The default value is 1.0. **size**int or tuple of ints, optional Output shape. If the given shape is, e.g., `(m, n, k)`, then `m * n * k` samples are drawn. If size is `None` (default), a single value is returned if `low` and `high` are both scalars. Otherwise, `np.broadcast(low, high).size` samples are drawn. Returns **out**ndarray or scalar Drawn samples from the parameterized uniform distribution. See also [`integers`](numpy.random.generator.integers#numpy.random.Generator.integers "numpy.random.Generator.integers") Discrete uniform distribution, yielding integers. [`random`](../index#module-numpy.random "numpy.random") Floats uniformly distributed over `[0, 1)`. #### Notes The probability density function of the uniform distribution is \[p(x) = \frac{1}{b - a}\] anywhere within the interval `[a, b)`, and zero elsewhere. When `high` == `low`, values of `low` will be returned. 
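The affine relationship in the equation `low + (high-low) * random_sample()` quoted above can be checked directly. A minimal sketch, relying on the implementation detail (not a documented guarantee) that both calls consume one uniform double per sample, so identically seeded generators produce matching streams:

```python
import numpy as np

low, high = -1.0, 3.0

# two generators with the same seed draw from the same underlying stream
a = np.random.default_rng(0).uniform(low, high, size=5)
b = low + (high - low) * np.random.default_rng(0).uniform(0.0, 1.0, size=5)

print(np.allclose(a, b))                 # the affine transform reproduces uniform(low, high)
print(np.all((a >= low) & (a < high)))   # samples lie in the half-open interval
```

This also illustrates the footnote about floating-point rounding: `high` can occasionally be hit exactly because the product `(high - low) * random_sample()` rounds up.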
#### Examples

Draw samples from the distribution:

```
>>> s = np.random.default_rng().uniform(-1,0,1000)
```

All values are within the given interval:

```
>>> np.all(s >= -1)
True
>>> np.all(s < 0)
True
```

Display the histogram of the samples, along with the probability density function:

```
>>> import matplotlib.pyplot as plt
>>> count, bins, ignored = plt.hist(s, 15, density=True)
>>> plt.plot(bins, np.ones_like(bins), linewidth=2, color='r')
>>> plt.show()
```

![histogram](../../../_images/numpy-random-Generator-uniform-1.png)

<https://numpy.org/doc/1.23/reference/random/generated/numpy.random.Generator.uniform.html>

numpy.random.Generator.vonmises
===============================

method

random.Generator.vonmises(*mu*, *kappa*, *size=None*)

Draw samples from a von Mises distribution. Samples are drawn from a von Mises distribution with specified mode (mu) and dispersion (kappa), on the interval [-pi, pi]. The von Mises distribution (also known as the circular normal distribution) is a continuous probability distribution on the unit circle. It may be thought of as the circular analogue of the normal distribution.

Parameters

**mu**float or array_like of floats

Mode ("center") of the distribution.

**kappa**float or array_like of floats

Dispersion of the distribution, has to be >= 0.

**size**int or tuple of ints, optional

Output shape. If the given shape is, e.g., `(m, n, k)`, then `m * n * k` samples are drawn. If size is `None` (default), a single value is returned if `mu` and `kappa` are both scalars. Otherwise, `np.broadcast(mu, kappa).size` samples are drawn.

Returns

**out**ndarray or scalar

Drawn samples from the parameterized von Mises distribution.

See also

[`scipy.stats.vonmises`](https://docs.scipy.org/doc/scipy/reference/generated/scipy.stats.vonmises.html#scipy.stats.vonmises "(in SciPy v1.8.1)")

probability density function, distribution, or cumulative density function, etc.
#### Notes

The probability density for the von Mises distribution is

\[p(x) = \frac{e^{\kappa \cos(x-\mu)}}{2\pi I_0(\kappa)},\]

where \(\mu\) is the mode and \(\kappa\) the dispersion, and \(I_0(\kappa)\) is the modified Bessel function of order 0.

The von Mises is named for <NAME>, who was born in Austria-Hungary, in what is now the Ukraine. He fled to the United States in 1939 and became a professor at Harvard. He worked in probability theory, aerodynamics, fluid mechanics, and philosophy of science.

#### References

1 <NAME>. and <NAME>. (Eds.). "Handbook of Mathematical Functions with Formulas, Graphs, and Mathematical Tables, 9th printing," New York: Dover, 1972.

2 von <NAME>., "Mathematical Theory of Probability and Statistics", New York: Academic Press, 1964.

#### Examples

Draw samples from the distribution:

```
>>> mu, kappa = 0.0, 4.0  # mean and dispersion
>>> s = np.random.default_rng().vonmises(mu, kappa, 1000)
```

Display the histogram of the samples, along with the probability density function:

```
>>> import matplotlib.pyplot as plt
>>> from scipy.special import i0
>>> plt.hist(s, 50, density=True)
>>> x = np.linspace(-np.pi, np.pi, num=51)
>>> y = np.exp(kappa*np.cos(x-mu))/(2*np.pi*i0(kappa))
>>> plt.plot(x, y, linewidth=2, color='r')
>>> plt.show()
```

![histogram](../../../_images/numpy-random-Generator-vonmises-1.png)

<https://numpy.org/doc/1.23/reference/random/generated/numpy.random.Generator.vonmises.html>

numpy.random.Generator.wald
===========================

method

random.Generator.wald(*mean*, *scale*, *size=None*)

Draw samples from a Wald, or inverse Gaussian, distribution. As the scale approaches infinity, the distribution becomes more like a Gaussian. Some references claim that the Wald is an inverse Gaussian with mean equal to 1, but this is by no means universal. The inverse Gaussian distribution was first studied in relationship to Brownian motion.
In 1956 <NAME> used the name inverse Gaussian because there is an inverse relationship between the time to cover a unit distance and distance covered in unit time.

Parameters

**mean**float or array_like of floats

Distribution mean, must be > 0.

**scale**float or array_like of floats

Scale parameter, must be > 0.

**size**int or tuple of ints, optional

Output shape. If the given shape is, e.g., `(m, n, k)`, then `m * n * k` samples are drawn. If size is `None` (default), a single value is returned if `mean` and `scale` are both scalars. Otherwise, `np.broadcast(mean, scale).size` samples are drawn.

Returns

**out**ndarray or scalar

Drawn samples from the parameterized Wald distribution.

#### Notes

The probability density function for the Wald distribution is

\[P(x; mean, scale) = \sqrt{\frac{scale}{2\pi x^3}}\, e^{\frac{-scale(x-mean)^2}{2 \cdot mean^2 x}}\]

As noted above, the inverse Gaussian distribution first arose from attempts to model Brownian motion. It is also a competitor to the Weibull for use in reliability modeling and modeling stock returns and interest rate processes.

#### References

1 Brighton Webs Ltd., Wald Distribution, <https://web.archive.org/web/20090423014010/http://www.brighton-webs.co.uk:80/distributions/wald.asp>

2 <NAME>., and <NAME>, "The Inverse Gaussian Distribution: Theory, Methodology, and Applications", CRC Press, 1988.

3 Wikipedia, "Inverse Gaussian distribution" <https://en.wikipedia.org/wiki/Inverse_Gaussian_distribution>

#### Examples

Draw values from the distribution and plot the histogram:

```
>>> import matplotlib.pyplot as plt
>>> h = plt.hist(np.random.default_rng().wald(3, 2, 100000), bins=200, density=True)
>>> plt.show()
```

![histogram](../../../_images/numpy-random-Generator-wald-1.png)
<https://numpy.org/doc/1.23/reference/random/generated/numpy.random.Generator.wald.html>

numpy.random.Generator.weibull
==============================

method

random.Generator.weibull(*a*, *size=None*)

Draw samples from a Weibull distribution. Draw samples from a 1-parameter Weibull distribution with the given shape parameter `a`.

\[X = (-\ln(U))^{1/a}\]

Here, U is drawn from the uniform distribution over (0,1]. The more common 2-parameter Weibull, including a scale parameter \(\lambda\), is just \(X = \lambda(-\ln(U))^{1/a}\).

Parameters

**a**float or array_like of floats

Shape parameter of the distribution. Must be nonnegative.

**size**int or tuple of ints, optional

Output shape. If the given shape is, e.g., `(m, n, k)`, then `m * n * k` samples are drawn. If size is `None` (default), a single value is returned if `a` is a scalar. Otherwise, `np.array(a).size` samples are drawn.

Returns

**out**ndarray or scalar

Drawn samples from the parameterized Weibull distribution.

See also

[`scipy.stats.weibull_max`](https://docs.scipy.org/doc/scipy/reference/generated/scipy.stats.weibull_max.html#scipy.stats.weibull_max "(in SciPy v1.8.1)")

[`scipy.stats.weibull_min`](https://docs.scipy.org/doc/scipy/reference/generated/scipy.stats.weibull_min.html#scipy.stats.weibull_min "(in SciPy v1.8.1)")

[`scipy.stats.genextreme`](https://docs.scipy.org/doc/scipy/reference/generated/scipy.stats.genextreme.html#scipy.stats.genextreme "(in SciPy v1.8.1)")

[`gumbel`](numpy.random.generator.gumbel#numpy.random.Generator.gumbel "numpy.random.Generator.gumbel")

#### Notes

The Weibull (or Type III asymptotic extreme value distribution for smallest values, SEV Type III, or Rosin-Rammler distribution) is one of a class of Generalized Extreme Value (GEV) distributions used in modeling extreme value problems. This class includes the Gumbel and Frechet distributions.
The probability density for the Weibull distribution is

\[p(x) = \frac{a}{\lambda}\left(\frac{x}{\lambda}\right)^{a-1}e^{-(x/\lambda)^a},\]

where \(a\) is the shape and \(\lambda\) the scale. The function has its peak (the mode) at \(\lambda(\frac{a-1}{a})^{1/a}\). When `a = 1`, the Weibull distribution reduces to the exponential distribution.

#### References

1 <NAME>, Royal Technical University, Stockholm, 1939 "A Statistical Theory Of The Strength Of Materials", Ingeniorsvetenskapsakademiens Handlingar Nr 151, 1939, Generalstabens Litografiska Anstalts Forlag, Stockholm.

2 <NAME>, "A Statistical Distribution Function of Wide Applicability", Journal Of Applied Mechanics ASME Paper 1951.

3 Wikipedia, "Weibull distribution", <https://en.wikipedia.org/wiki/Weibull_distribution>

#### Examples

Draw samples from the distribution:

```
>>> rng = np.random.default_rng()
>>> a = 5.  # shape
>>> s = rng.weibull(a, 1000)
```

Display the histogram of the samples, along with the probability density function:

```
>>> import matplotlib.pyplot as plt
>>> x = np.arange(1,100.)/50.
>>> def weib(x,n,a):
...     return (a / n) * (x / n)**(a - 1) * np.exp(-(x / n)**a)
```

```
>>> count, bins, ignored = plt.hist(rng.weibull(5.,1000))
>>> x = np.arange(1,100.)/50.
>>> scale = count.max()/weib(x, 1., 5.).max()
>>> plt.plot(x, weib(x, 1., 5.)*scale)
>>> plt.show()
```

![histogram](../../../_images/numpy-random-Generator-weibull-1.png)

<https://numpy.org/doc/1.23/reference/random/generated/numpy.random.Generator.weibull.html>

numpy.random.Generator.zipf
===========================

method

random.Generator.zipf(*a*, *size=None*)

Draw samples from a Zipf distribution. Samples are drawn from a Zipf distribution with specified parameter `a` > 1. The Zipf distribution (also known as the zeta distribution) is a discrete probability distribution that satisfies Zipf's law: the frequency of an item is inversely proportional to its rank in a frequency table.
Parameters **a**float or array_like of floats Distribution parameter. Must be greater than 1. **size**int or tuple of ints, optional Output shape. If the given shape is, e.g., `(m, n, k)`, then `m * n * k` samples are drawn. If size is `None` (default), a single value is returned if `a` is a scalar. Otherwise, `np.array(a).size` samples are drawn. Returns **out**ndarray or scalar Drawn samples from the parameterized Zipf distribution. See also [`scipy.stats.zipf`](https://docs.scipy.org/doc/scipy/reference/generated/scipy.stats.zipf.html#scipy.stats.zipf "(in SciPy v1.8.1)") probability density function, distribution, or cumulative density function, etc. #### Notes The probability density for the Zipf distribution is \[p(k) = \frac{k^{-a}}{\zeta(a)},\] for integers \(k \geq 1\), where \(\zeta\) is the Riemann Zeta function. It is named for the American linguist <NAME>, who noted that the frequency of any word in a sample of a language is inversely proportional to its rank in the frequency table. #### References 1 <NAME>., “Selected Studies of the Principle of Relative Frequency in Language,” Cambridge, MA: Harvard Univ. Press, 1932. #### Examples Draw samples from the distribution: ``` >>> a = 4.0 >>> n = 20000 >>> s = np.random.default_rng().zipf(a, size=n) ``` Display the histogram of the samples, along with the expected histogram based on the probability density function: ``` >>> import matplotlib.pyplot as plt >>> from scipy.special import zeta ``` [`bincount`](../../generated/numpy.bincount#numpy.bincount "numpy.bincount") provides a fast histogram for small integers. ``` >>> count = np.bincount(s) >>> k = np.arange(1, s.max() + 1) ``` ``` >>> plt.bar(k, count[1:], alpha=0.5, label='sample count') >>> plt.plot(k, n*(k**-a)/zeta(a), 'k.-', alpha=0.5, ... 
label='expected count')
>>> plt.semilogy()
>>> plt.grid(alpha=0.4)
>>> plt.legend()
>>> plt.title(f'Zipf sample, a={a}, size={n}')
>>> plt.show()
```

![histogram](../../../_images/numpy-random-Generator-zipf-1.png)

<https://numpy.org/doc/1.23/reference/random/generated/numpy.random.Generator.zipf.html>

numpy.random.random
===================

random.random(*size=None*)

Return random floats in the half-open interval [0.0, 1.0). Alias for [`random_sample`](numpy.random.random_sample#numpy.random.random_sample "numpy.random.random_sample") to ease forward-porting to the new random API.

<https://numpy.org/doc/1.23/reference/random/generated/numpy.random.random.html>

numpy.random.RandomState.get_state
==================================

method

random.RandomState.get_state(*legacy=True*)

Return a tuple representing the internal state of the generator. For more details, see [`set_state`](numpy.random.randomstate.set_state#numpy.random.RandomState.set_state "numpy.random.RandomState.set_state").

Parameters

**legacy**bool, optional

Flag indicating to return a legacy tuple state when the BitGenerator is MT19937, instead of a dict.

Returns

**out**{tuple(str, ndarray of 624 uints, int, int, float), dict}

The returned tuple has the following items:

1. the string 'MT19937'.
2. a 1-D array of 624 unsigned integer keys.
3. an integer `pos`.
4. an integer `has_gauss`.
5. a float `cached_gaussian`.

If `legacy` is False, or the BitGenerator is not MT19937, then state is returned as a dictionary.
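A short round-trip sketch (seed value arbitrary): capturing the state with `get_state` and restoring it with `set_state` rewinds the stream, so the same draws come out again.

```python
import numpy as np

rs = np.random.RandomState(12345)   # legacy generator
state = rs.get_state()              # snapshot of the MT19937 state

first = rs.rand(3)                  # advances the stream
rs.set_state(state)                 # rewind to the snapshot
second = rs.rand(3)                 # replays the same values

print(np.array_equal(first, second))
```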
See also

[`set_state`](numpy.random.randomstate.set_state#numpy.random.RandomState.set_state "numpy.random.RandomState.set_state")

#### Notes

[`set_state`](numpy.random.randomstate.set_state#numpy.random.RandomState.set_state "numpy.random.RandomState.set_state") and [`get_state`](#numpy.random.RandomState.get_state "numpy.random.RandomState.get_state") are not needed to work with any of the random distributions in NumPy. If the internal state is manually altered, the user should know exactly what they are doing.

<https://numpy.org/doc/1.23/reference/random/generated/numpy.random.RandomState.get_state.html>

numpy.random.RandomState.set_state
==================================

method

random.RandomState.set_state(*state*)

Set the internal state of the generator from a tuple. For use if one has reason to manually (re-)set the internal state of the bit generator used by the RandomState instance. By default, RandomState uses the "Mersenne Twister"[[1]](#rd62dfb5ffa26-1) pseudo-random number generating algorithm.

Parameters

**state**{tuple(str, ndarray of 624 uints, int, int, float), dict}

The `state` tuple has the following items:

1. the string 'MT19937', specifying the Mersenne Twister algorithm.
2. a 1-D array of 624 unsigned integers `keys`.
3. an integer `pos`.
4. an integer `has_gauss`.
5. a float `cached_gaussian`.

If state is a dictionary, it is directly set using the BitGenerator's `state` property.

Returns

**out**None

Returns 'None' on success.

See also

[`get_state`](numpy.random.randomstate.get_state#numpy.random.RandomState.get_state "numpy.random.RandomState.get_state")

#### Notes

[`set_state`](#numpy.random.RandomState.set_state "numpy.random.RandomState.set_state") and [`get_state`](numpy.random.randomstate.get_state#numpy.random.RandomState.get_state "numpy.random.RandomState.get_state") are not needed to work with any of the random distributions in NumPy.
If the internal state is manually altered, the user should know exactly what they are doing. For backwards compatibility, the form (str, array of 624 uints, int) is also accepted although it is missing some information about the cached Gaussian value: `state = ('MT19937', keys, pos)`.

#### References

[1](#id1) <NAME> and <NAME>, "Mersenne Twister: A 623-dimensionally equidistributed uniform pseudorandom number generator," *ACM Trans. on Modeling and Computer Simulation*, Vol. 8, No. 1, pp. 3-30, Jan. 1998.

<https://numpy.org/doc/1.23/reference/random/generated/numpy.random.RandomState.set_state.html>

numpy.random.RandomState.seed
=============================

method

random.RandomState.seed(*self*, *seed=None*)

Reseed a legacy MT19937 BitGenerator.

#### Notes

This is a convenience, legacy function. The best practice is to **not** reseed a BitGenerator, rather to recreate a new one. This method is here for legacy reasons. This example demonstrates best practice.

```
>>> from numpy.random import MT19937
>>> from numpy.random import RandomState, SeedSequence
>>> rs = RandomState(MT19937(SeedSequence(123456789)))
>>> # Later, you want to restart the stream
>>> rs = RandomState(MT19937(SeedSequence(987654321)))
```

<https://numpy.org/doc/1.23/reference/random/generated/numpy.random.RandomState.seed.html>

numpy.random.RandomState.randint
================================

method

random.RandomState.randint(*low*, *high=None*, *size=None*, *dtype=int*)

Return random integers from `low` (inclusive) to `high` (exclusive). Return random integers from the "discrete uniform" distribution of the specified dtype in the "half-open" interval [`low`, `high`). If `high` is None (the default), then results are from [0, `low`).
Note New code should use the `integers` method of a `default_rng()` instance instead; please see the [Quick Start](../index#random-quick-start). Parameters **low**int or array-like of ints Lowest (signed) integers to be drawn from the distribution (unless `high=None`, in which case this parameter is one above the *highest* such integer). **high**int or array-like of ints, optional If provided, one above the largest (signed) integer to be drawn from the distribution (see above for behavior if `high=None`). If array-like, must contain integer values **size**int or tuple of ints, optional Output shape. If the given shape is, e.g., `(m, n, k)`, then `m * n * k` samples are drawn. Default is None, in which case a single value is returned. **dtype**dtype, optional Desired dtype of the result. Byteorder must be native. The default value is int. New in version 1.11.0. Returns **out**int or ndarray of ints `size`-shaped array of random integers from the appropriate distribution, or a single such random int if `size` not provided. See also [`random_integers`](numpy.random.randomstate.random_integers#numpy.random.RandomState.random_integers "numpy.random.RandomState.random_integers") similar to [`randint`](#numpy.random.RandomState.randint "numpy.random.RandomState.randint"), only for the closed interval [`low`, `high`], and 1 is the lowest value if `high` is omitted. [`random.Generator.integers`](numpy.random.generator.integers#numpy.random.Generator.integers "numpy.random.Generator.integers") which should be used for new code. 
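As the See also entry says, `Generator.integers` is the replacement for new code. A minimal side-by-side sketch (seeds arbitrary; the two APIs use different algorithms, so only the interval matches, not the values):

```python
import numpy as np

# legacy: half-open interval [1, 7) -> simulated die rolls 1..6
legacy = np.random.RandomState(0).randint(1, 7, size=10)

# replacement: Generator.integers, also half-open by default
modern = np.random.default_rng(0).integers(1, 7, size=10)

print(legacy.min() >= 1 and legacy.max() < 7)   # both draw from [1, 7)
print(modern.min() >= 1 and modern.max() < 7)
```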
#### Examples

```
>>> np.random.randint(2, size=10)
array([1, 0, 0, 0, 1, 1, 0, 0, 1, 0])  # random
>>> np.random.randint(1, size=10)
array([0, 0, 0, 0, 0, 0, 0, 0, 0, 0])
```

Generate a 2 x 4 array of ints between 0 and 4, inclusive:

```
>>> np.random.randint(5, size=(2, 4))
array([[4, 0, 2, 1],  # random
       [3, 2, 2, 0]])
```

Generate a 1 x 3 array with 3 different upper bounds:

```
>>> np.random.randint(1, [3, 5, 10])
array([2, 2, 9])  # random
```

Generate a 1 by 3 array with 3 different lower bounds:

```
>>> np.random.randint([1, 5, 7], 10)
array([9, 8, 7])  # random
```

Generate a 2 by 4 array using broadcasting with dtype of uint8:

```
>>> np.random.randint([1, 3, 5, 7], [[10], [20]], dtype=np.uint8)
array([[ 8,  6,  9,  7],  # random
       [ 1, 16,  9, 12]], dtype=uint8)
```

<https://numpy.org/doc/1.23/reference/random/generated/numpy.random.RandomState.randint.html>

numpy.random.RandomState.random_integers
========================================

method

random.RandomState.random_integers(*low*, *high=None*, *size=None*)

Random integers of type `np.int_` between `low` and `high`, inclusive. Return random integers of type `np.int_` from the "discrete uniform" distribution in the closed interval [`low`, `high`]. If `high` is None (the default), then results are from [1, `low`]. The `np.int_` type translates to the C long integer type and its precision is platform dependent. This function has been deprecated. Use randint instead.

Deprecated since version 1.11.0.

Parameters

**low**int

Lowest (signed) integer to be drawn from the distribution (unless `high=None`, in which case this parameter is the *highest* such integer).

**high**int, optional

If provided, the largest (signed) integer to be drawn from the distribution (see above for behavior if `high=None`).

**size**int or tuple of ints, optional

Output shape. If the given shape is, e.g., `(m, n, k)`, then `m * n * k` samples are drawn.
Default is None, in which case a single value is returned.

Returns

**out**int or ndarray of ints

`size`-shaped array of random integers from the appropriate distribution, or a single such random int if `size` not provided.

See also

[`randint`](numpy.random.randomstate.randint#numpy.random.RandomState.randint "numpy.random.RandomState.randint")

Similar to [`random_integers`](#numpy.random.RandomState.random_integers "numpy.random.RandomState.random_integers"), only for the half-open interval [`low`, `high`), and 0 is the lowest value if `high` is omitted.

#### Notes

To sample from N evenly spaced floating-point numbers between a and b, use:

```
a + (b - a) * (np.random.random_integers(N) - 1) / (N - 1.)
```

#### Examples

```
>>> np.random.random_integers(5)
4  # random
>>> type(np.random.random_integers(5))
<class 'numpy.int64'>
>>> np.random.random_integers(5, size=(3,2))
array([[5, 4],  # random
       [3, 3],
       [4, 5]])
```

Choose five random numbers from the set of five evenly-spaced numbers between 0 and 2.5, inclusive (*i.e.*, from the set \(\{0, 5/8, 10/8, 15/8, 20/8\}\)):

```
>>> 2.5 * (np.random.random_integers(5, size=(5,)) - 1) / 4.
array([ 0.625,  1.25 ,  0.625,  0.625,  2.5  ])  # random
```

Roll two six sided dice 1000 times and sum the results:

```
>>> d1 = np.random.random_integers(1, 6, 1000)
>>> d2 = np.random.random_integers(1, 6, 1000)
>>> dsums = d1 + d2
```

Display results as a histogram:

```
>>> import matplotlib.pyplot as plt
>>> count, bins, ignored = plt.hist(dsums, 11, density=True)
>>> plt.show()
```

![histogram](../../../_images/numpy-random-RandomState-random_integers-1.png)

<https://numpy.org/doc/1.23/reference/random/generated/numpy.random.RandomState.random_integers.html>

numpy.random.RandomState.choice
===============================

method

random.RandomState.choice(*a*, *size=None*, *replace=True*, *p=None*)

Generates a random sample from a given 1-D array.

New in version 1.7.0.
Note New code should use the `choice` method of a `default_rng()` instance instead; please see the [Quick Start](../index#random-quick-start). Parameters **a**1-D array-like or int If an ndarray, a random sample is generated from its elements. If an int, the random sample is generated as if it were `np.arange(a)` **size**int or tuple of ints, optional Output shape. If the given shape is, e.g., `(m, n, k)`, then `m * n * k` samples are drawn. Default is None, in which case a single value is returned. **replace**boolean, optional Whether the sample is with or without replacement. Default is True, meaning that a value of `a` can be selected multiple times. **p**1-D array-like, optional The probabilities associated with each entry in a. If not given, the sample assumes a uniform distribution over all entries in `a`. Returns **samples**single item or ndarray The generated random samples Raises ValueError If a is an int and less than zero, if a or p are not 1-dimensional, if a is an array-like of size 0, if p is not a vector of probabilities, if a and p have different lengths, or if replace=False and the sample size is greater than the population size See also [`randint`](numpy.random.randomstate.randint#numpy.random.RandomState.randint "numpy.random.RandomState.randint"), [`shuffle`](numpy.random.randomstate.shuffle#numpy.random.RandomState.shuffle "numpy.random.RandomState.shuffle"), [`permutation`](numpy.random.randomstate.permutation#numpy.random.RandomState.permutation "numpy.random.RandomState.permutation") [`random.Generator.choice`](numpy.random.generator.choice#numpy.random.Generator.choice "numpy.random.Generator.choice") which should be used in new code #### Notes Setting user-specified probabilities through `p` uses a more general but less efficient sampler than the default. The general sampler produces a different sample than the optimized sampler even if each element of `p` is 1 / len(a). 
Sampling random rows from a 2-D array is not possible with this function, but is possible with `Generator.choice` through its `axis` keyword.

#### Examples

Generate a uniform random sample from np.arange(5) of size 3:

```
>>> np.random.choice(5, 3)
array([0, 3, 4])  # random
>>> # This is equivalent to np.random.randint(0,5,3)
```

Generate a non-uniform random sample from np.arange(5) of size 3:

```
>>> np.random.choice(5, 3, p=[0.1, 0, 0.3, 0.6, 0])
array([3, 3, 0])  # random
```

Generate a uniform random sample from np.arange(5) of size 3 without replacement:

```
>>> np.random.choice(5, 3, replace=False)
array([3, 1, 0])  # random
>>> # This is equivalent to np.random.permutation(np.arange(5))[:3]
```

Generate a non-uniform random sample from np.arange(5) of size 3 without replacement:

```
>>> np.random.choice(5, 3, replace=False, p=[0.1, 0, 0.3, 0.6, 0])
array([2, 3, 0])  # random
```

Any of the above can be repeated with an arbitrary array-like instead of just integers. For instance:

```
>>> aa_milne_arr = ['pooh', 'rabbit', 'piglet', 'Christopher']
>>> np.random.choice(aa_milne_arr, 5, p=[0.5, 0.1, 0.1, 0.3])
array(['pooh', 'pooh', 'pooh', 'Christopher', 'piglet'],  # random
      dtype='<U11')
```

<https://numpy.org/doc/1.23/reference/random/generated/numpy.random.RandomState.choice.html>

numpy.random.RandomState.bytes
==============================

method

random.RandomState.bytes(*length*)

Return random bytes.

Note

New code should use the `bytes` method of a `default_rng()` instance instead; please see the [Quick Start](../index#random-quick-start).

Parameters

**length**int

Number of random bytes.

Returns

**out**bytes

String of length `length`.

See also

[`random.Generator.bytes`](numpy.random.generator.bytes#numpy.random.Generator.bytes "numpy.random.Generator.bytes")

which should be used for new code.
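The Generator replacement named in the note is a drop-in rename; a minimal sketch:

```python
import numpy as np

rng = np.random.default_rng()
buf = rng.bytes(10)   # Generator.bytes(length), same contract as RandomState.bytes

print(type(buf).__name__, len(buf))   # a bytes object of the requested length
```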
#### Examples

```
>>> np.random.bytes(10)
b' eh\x85\x022SZ\xbf\xa4'  # random
```

<https://numpy.org/doc/1.23/reference/random/generated/numpy.random.RandomState.bytes.html>

numpy.random.RandomState.shuffle
================================

method

random.RandomState.shuffle(*x*)

Modify a sequence in-place by shuffling its contents. This function only shuffles the array along the first axis of a multi-dimensional array. The order of sub-arrays is changed but their contents remain the same.

Note

New code should use the `shuffle` method of a `default_rng()` instance instead; please see the [Quick Start](../index#random-quick-start).

Parameters

**x**ndarray or MutableSequence

The array, list or mutable sequence to be shuffled.

Returns

None

See also

[`random.Generator.shuffle`](numpy.random.generator.shuffle#numpy.random.Generator.shuffle "numpy.random.Generator.shuffle")

which should be used for new code.

#### Examples

```
>>> arr = np.arange(10)
>>> np.random.shuffle(arr)
>>> arr
array([1, 7, 5, 2, 9, 4, 3, 6, 0, 8])  # random
```

Multi-dimensional arrays are only shuffled along the first axis:

```
>>> arr = np.arange(9).reshape((3, 3))
>>> np.random.shuffle(arr)
>>> arr
array([[3, 4, 5],  # random
       [6, 7, 8],
       [0, 1, 2]])
```

<https://numpy.org/doc/1.23/reference/random/generated/numpy.random.RandomState.shuffle.html>

numpy.random.RandomState.permutation
====================================

method

random.RandomState.permutation(*x*)

Randomly permute a sequence, or return a permuted range. If `x` is a multi-dimensional array, it is only shuffled along its first index.

Note

New code should use the `permutation` method of a `default_rng()` instance instead; please see the [Quick Start](../index#random-quick-start).

Parameters

**x**int or array_like

If `x` is an integer, randomly permute `np.arange(x)`.
If `x` is an array, make a copy and shuffle the elements randomly. Returns **out**ndarray Permuted sequence or array range. See also [`random.Generator.permutation`](numpy.random.generator.permutation#numpy.random.Generator.permutation "numpy.random.Generator.permutation") which should be used for new code. #### Examples ``` >>> np.random.permutation(10) array([1, 7, 4, 3, 0, 9, 2, 5, 8, 6]) # random ``` ``` >>> np.random.permutation([1, 4, 9, 12, 15]) array([15, 1, 9, 4, 12]) # random ``` ``` >>> arr = np.arange(9).reshape((3, 3)) >>> np.random.permutation(arr) array([[6, 7, 8], # random [0, 1, 2], [3, 4, 5]]) ``` © 2005–2022 NumPy Developers Licensed under the 3-clause BSD License. <https://numpy.org/doc/1.23/reference/random/generated/numpy.random.RandomState.permutation.htmlnumpy.random.RandomState.beta ============================= method random.RandomState.beta(*a*, *b*, *size=None*) Draw samples from a Beta distribution. The Beta distribution is a special case of the Dirichlet distribution, and is related to the Gamma distribution. It has the probability distribution function \[f(x; a,b) = \frac{1}{B(\alpha, \beta)} x^{\alpha - 1} (1 - x)^{\beta - 1},\] where the normalization, B, is the beta function, \[B(\alpha, \beta) = \int_0^1 t^{\alpha - 1} (1 - t)^{\beta - 1} dt.\] It is often seen in Bayesian inference and order statistics. Note New code should use the `beta` method of a `default_rng()` instance instead; please see the [Quick Start](../index#random-quick-start). Parameters **a**float or array_like of floats Alpha, positive (>0). **b**float or array_like of floats Beta, positive (>0). **size**int or tuple of ints, optional Output shape. If the given shape is, e.g., `(m, n, k)`, then `m * n * k` samples are drawn. If size is `None` (default), a single value is returned if `a` and `b` are both scalars. Otherwise, `np.broadcast(a, b).size` samples are drawn. Returns **out**ndarray or scalar Drawn samples from the parameterized beta distribution. 
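The copy-vs-in-place contrast between `permutation` and `shuffle` can be sketched as follows (seed is illustrative):

```python
import numpy as np

np.random.seed(0)
a = np.arange(10)
p = np.random.permutation(a)  # returns a shuffled copy

# The input array is untouched; the output is a rearrangement of it
```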
See also [`random.Generator.beta`](numpy.random.generator.beta#numpy.random.Generator.beta "numpy.random.Generator.beta") which should be used for new code. © 2005–2022 NumPy Developers Licensed under the 3-clause BSD License. <https://numpy.org/doc/1.23/reference/random/generated/numpy.random.RandomState.beta.htmlnumpy.random.RandomState.binomial ================================= method random.RandomState.binomial(*n*, *p*, *size=None*) Draw samples from a binomial distribution. Samples are drawn from a binomial distribution with specified parameters, n trials and p probability of success where n an integer >= 0 and p is in the interval [0,1]. (n may be input as a float, but it is truncated to an integer in use) Note New code should use the `binomial` method of a `default_rng()` instance instead; please see the [Quick Start](../index#random-quick-start). Parameters **n**int or array_like of ints Parameter of the distribution, >= 0. Floats are also accepted, but they will be truncated to integers. **p**float or array_like of floats Parameter of the distribution, >= 0 and <=1. **size**int or tuple of ints, optional Output shape. If the given shape is, e.g., `(m, n, k)`, then `m * n * k` samples are drawn. If size is `None` (default), a single value is returned if `n` and `p` are both scalars. Otherwise, `np.broadcast(n, p).size` samples are drawn. Returns **out**ndarray or scalar Drawn samples from the parameterized binomial distribution, where each sample is equal to the number of successes over the n trials. See also [`scipy.stats.binom`](https://docs.scipy.org/doc/scipy/reference/generated/scipy.stats.binom.html#scipy.stats.binom "(in SciPy v1.8.1)") probability density function, distribution or cumulative density function, etc. [`random.Generator.binomial`](numpy.random.generator.binomial#numpy.random.Generator.binomial "numpy.random.Generator.binomial") which should be used for new code. 
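A minimal sanity check on `beta` draws uses the textbook mean a / (a + b); the parameter values, sample size, and tolerance below are illustrative:

```python
import numpy as np

np.random.seed(0)
a, b = 2.0, 5.0
s = np.random.beta(a, b, size=100_000)

# Draws lie in [0, 1]; the sample mean approaches a / (a + b) ~= 0.2857
mean_est = s.mean()
```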
#### Notes The probability density for the binomial distribution is \[P(N) = \binom{n}{N}p^N(1-p)^{n-N},\] where \(n\) is the number of trials, \(p\) is the probability of success, and \(N\) is the number of successes. When estimating the standard error of a proportion in a population by using a random sample, the normal distribution works well unless the product p*n <=5, where p = population proportion estimate, and n = number of samples, in which case the binomial distribution is used instead. For example, a sample of 15 people shows 4 who are left handed, and 11 who are right handed. Then p = 4/15 = 27%. 0.27*15 = 4, so the binomial distribution should be used in this case. #### References 1 <NAME>, “Introductory Statistics with R”, Springer-Verlag, 2002. 2 Glantz, <NAME>. “Primer of Biostatistics.”, McGraw-Hill, Fifth Edition, 2002. 3 Lentner, Marvin, “Elementary Applied Statistics”, Bogden and Quigley, 1972. 4 Weisstein, <NAME>. “Binomial Distribution.” From MathWorld–A Wolfram Web Resource. <http://mathworld.wolfram.com/BinomialDistribution.html 5 Wikipedia, “Binomial distribution”, <https://en.wikipedia.org/wiki/Binomial_distribution #### Examples Draw samples from the distribution: ``` >>> n, p = 10, .5 # number of trials, probability of each trial >>> s = np.random.binomial(n, p, 1000) # result of flipping a coin 10 times, tested 1000 times. ``` A real world example. A company drills 9 wild-cat oil exploration wells, each with an estimated probability of success of 0.1. All nine wells fail. What is the probability of that happening? Let’s do 20,000 trials of the model, and count the number that generate zero positive results. ``` >>> sum(np.random.binomial(9, 0.1, 20000) == 0)/20000. # answer = 0.38885, or 38%. ``` © 2005–2022 NumPy Developers Licensed under the 3-clause BSD License. 
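The oil-well simulation above has a closed-form answer, P(0 successes in 9 trials) = 0.9**9 ≈ 0.387, which the estimate should approach (trial count and tolerance are illustrative):

```python
import numpy as np

np.random.seed(0)
# Fraction of 200,000 simulated 9-well campaigns in which every well fails
est = (np.random.binomial(9, 0.1, 200_000) == 0).mean()
exact = 0.9 ** 9  # ~0.3874
```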
<https://numpy.org/doc/1.23/reference/random/generated/numpy.random.RandomState.binomial.htmlnumpy.random.RandomState.chisquare ================================== method random.RandomState.chisquare(*df*, *size=None*) Draw samples from a chi-square distribution. When `df` independent random variables, each with standard normal distributions (mean 0, variance 1), are squared and summed, the resulting distribution is chi-square (see Notes). This distribution is often used in hypothesis testing. Note New code should use the `chisquare` method of a `default_rng()` instance instead; please see the [Quick Start](../index#random-quick-start). Parameters **df**float or array_like of floats Number of degrees of freedom, must be > 0. **size**int or tuple of ints, optional Output shape. If the given shape is, e.g., `(m, n, k)`, then `m * n * k` samples are drawn. If size is `None` (default), a single value is returned if `df` is a scalar. Otherwise, `np.array(df).size` samples are drawn. Returns **out**ndarray or scalar Drawn samples from the parameterized chi-square distribution. Raises ValueError When `df` <= 0 or when an inappropriate `size` (e.g. `size=-1`) is given. See also [`random.Generator.chisquare`](numpy.random.generator.chisquare#numpy.random.Generator.chisquare "numpy.random.Generator.chisquare") which should be used for new code. 
#### Notes The variable obtained by summing the squares of `df` independent, standard normally distributed random variables: \[Q = \sum_{i=1}^{\mathtt{df}} X^2_i\] is chi-square distributed, denoted \[Q \sim \chi^2_k.\] The probability density function of the chi-squared distribution is \[p(x) = \frac{(1/2)^{k/2}}{\Gamma(k/2)} x^{k/2 - 1} e^{-x/2},\] where \(\Gamma\) is the gamma function, \[\Gamma(x) = \int_0^{\infty} t^{x - 1} e^{-t} dt.\] #### References 1 NIST “Engineering Statistics Handbook” <https://www.itl.nist.gov/div898/handbook/eda/section3/eda3666.htm #### Examples ``` >>> np.random.chisquare(2,4) array([ 1.89920014, 9.00867716, 3.13710533, 5.62318272]) # random ``` © 2005–2022 NumPy Developers Licensed under the 3-clause BSD License. <https://numpy.org/doc/1.23/reference/random/generated/numpy.random.RandomState.chisquare.htmlnumpy.random.RandomState.dirichlet ================================== method random.RandomState.dirichlet(*alpha*, *size=None*) Draw samples from the Dirichlet distribution. Draw `size` samples of dimension k from a Dirichlet distribution. A Dirichlet-distributed random variable can be seen as a multivariate generalization of a Beta distribution. The Dirichlet distribution is a conjugate prior of a multinomial distribution in Bayesian inference. Note New code should use the `dirichlet` method of a `default_rng()` instance instead; please see the [Quick Start](../index#random-quick-start). Parameters **alpha**sequence of floats, length k Parameter of the distribution (length `k` for sample of length `k`). **size**int or tuple of ints, optional Output shape. If the given shape is, e.g., `(m, n)`, then `m * n * k` samples are drawn. Default is None, in which case a vector of length `k` is returned. Returns **samples**ndarray, The drawn samples, of shape `(size, k)`.
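A quick moment check for `chisquare`: the distribution has mean df and variance 2*df (the sample size and tolerances below are illustrative):

```python
import numpy as np

np.random.seed(0)
df = 4
s = np.random.chisquare(df, 200_000)

# Sample mean should approach df; sample variance should approach 2 * df
```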
Raises ValueError If any value in `alpha` is less than or equal to zero See also [`random.Generator.dirichlet`](numpy.random.generator.dirichlet#numpy.random.Generator.dirichlet "numpy.random.Generator.dirichlet") which should be used for new code. #### Notes The Dirichlet distribution is a distribution over vectors \(x\) that fulfil the conditions \(x_i>0\) and \(\sum_{i=1}^k x_i = 1\). The probability density function \(p\) of a Dirichlet-distributed random vector \(X\) is proportional to \[p(x) \propto \prod_{i=1}^{k}{x^{\alpha_i-1}_i},\] where \(\alpha\) is a vector containing the positive concentration parameters. The method uses the following property for computation: let \(Y\) be a random vector which has components that follow a standard gamma distribution, then \(X = \frac{1}{\sum_{i=1}^k{Y_i}} Y\) is Dirichlet-distributed #### References 1 <NAME>, “Information Theory, Inference and Learning Algorithms,” chapter 23, <http://www.inference.org.uk/mackay/itila/ 2 Wikipedia, “Dirichlet distribution”, <https://en.wikipedia.org/wiki/Dirichlet_distribution #### Examples Taking an example cited in Wikipedia, this distribution can be used if one wanted to cut strings (each of initial length 1.0) into K pieces with different lengths, where each piece had, on average, a designated average length, but allowing some variation in the relative sizes of the pieces. ``` >>> s = np.random.dirichlet((10, 5, 3), 20).transpose() ``` ``` >>> import matplotlib.pyplot as plt >>> plt.barh(range(20), s[0]) >>> plt.barh(range(20), s[1], left=s[0], color='g') >>> plt.barh(range(20), s[2], left=s[0]+s[1], color='r') >>> plt.title("Lengths of Strings") ``` ![../../../_images/numpy-random-RandomState-dirichlet-1.png] © 2005–2022 NumPy Developers Licensed under the 3-clause BSD License. 
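The simplex property stated in the Dirichlet Notes can be checked directly (the alpha vector and sample size are illustrative):

```python
import numpy as np

np.random.seed(0)
s = np.random.dirichlet((10, 5, 3), size=20)

# Each row is a point on the 2-simplex: nonnegative components summing to 1
```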
<https://numpy.org/doc/1.23/reference/random/generated/numpy.random.RandomState.dirichlet.htmlnumpy.random.RandomState.exponential ==================================== method random.RandomState.exponential(*scale=1.0*, *size=None*) Draw samples from an exponential distribution. Its probability density function is \[f(x; \frac{1}{\beta}) = \frac{1}{\beta} \exp(-\frac{x}{\beta}),\] for `x > 0` and 0 elsewhere. \(\beta\) is the scale parameter, which is the inverse of the rate parameter \(\lambda = 1/\beta\). The rate parameter is an alternative, widely used parameterization of the exponential distribution [[3]](#rcfd3e98ffb09-3). The exponential distribution is a continuous analogue of the geometric distribution. It describes many common situations, such as the size of raindrops measured over many rainstorms [[1]](#rcfd3e98ffb09-1), or the time between page requests to Wikipedia [[2]](#rcfd3e98ffb09-2). Note New code should use the `exponential` method of a `default_rng()` instance instead; please see the [Quick Start](../index#random-quick-start). Parameters **scale**float or array_like of floats The scale parameter, \(\beta = 1/\lambda\). Must be non-negative. **size**int or tuple of ints, optional Output shape. If the given shape is, e.g., `(m, n, k)`, then `m * n * k` samples are drawn. If size is `None` (default), a single value is returned if `scale` is a scalar. Otherwise, `np.array(scale).size` samples are drawn. Returns **out**ndarray or scalar Drawn samples from the parameterized exponential distribution. See also [`random.Generator.exponential`](numpy.random.generator.exponential#numpy.random.Generator.exponential "numpy.random.Generator.exponential") which should be used for new code. #### References [1](#id2) <NAME>r., “Probability, Random Variables and Random Signal Principles”, 4th ed, 2001, p. 57. 
[2](#id3) Wikipedia, “Poisson process”, <https://en.wikipedia.org/wiki/Poisson_process [3](#id1) Wikipedia, “Exponential distribution”, <https://en.wikipedia.org/wiki/Exponential_distribution © 2005–2022 NumPy Developers Licensed under the 3-clause BSD License. <https://numpy.org/doc/1.23/reference/random/generated/numpy.random.RandomState.exponential.htmlnumpy.random.RandomState.f ========================== method random.RandomState.f(*dfnum*, *dfden*, *size=None*) Draw samples from an F distribution. Samples are drawn from an F distribution with specified parameters, `dfnum` (degrees of freedom in numerator) and `dfden` (degrees of freedom in denominator), where both parameters must be greater than zero. The random variate of the F distribution (also known as the Fisher distribution) is a continuous probability distribution that arises in ANOVA tests, and is the ratio of two chi-square variates. Note New code should use the `f` method of a `default_rng()` instance instead; please see the [Quick Start](../index#random-quick-start). Parameters **dfnum**float or array_like of floats Degrees of freedom in numerator, must be > 0. **dfden**float or array_like of float Degrees of freedom in denominator, must be > 0. **size**int or tuple of ints, optional Output shape. If the given shape is, e.g., `(m, n, k)`, then `m * n * k` samples are drawn. If size is `None` (default), a single value is returned if `dfnum` and `dfden` are both scalars. Otherwise, `np.broadcast(dfnum, dfden).size` samples are drawn. Returns **out**ndarray or scalar Drawn samples from the parameterized Fisher distribution. See also [`scipy.stats.f`](https://docs.scipy.org/doc/scipy/reference/generated/scipy.stats.f.html#scipy.stats.f "(in SciPy v1.8.1)") probability density function, distribution or cumulative density function, etc. [`random.Generator.f`](numpy.random.generator.f#numpy.random.Generator.f "numpy.random.Generator.f") which should be used for new code. 
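For `exponential`, both the mean and the standard deviation equal the scale parameter beta, which a large sample should reproduce (scale, sample size, and tolerances are illustrative):

```python
import numpy as np

np.random.seed(0)
scale = 2.0  # beta = 1 / lambda
s = np.random.exponential(scale, 200_000)
```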
#### Notes The F statistic is used to compare in-group variances to between-group variances. Calculating the distribution depends on the sampling, and so it is a function of the respective degrees of freedom in the problem. The variable `dfnum` is the number of samples minus one, the between-groups degrees of freedom, while `dfden` is the within-groups degrees of freedom, the sum of the number of samples in each group minus the number of groups. #### References 1 Glantz, <NAME>. “Primer of Biostatistics.”, McGraw-Hill, Fifth Edition, 2002. 2 Wikipedia, “F-distribution”, <https://en.wikipedia.org/wiki/F-distribution #### Examples An example from Glantz[1], pp 47-40: Two groups, children of diabetics (25 people) and children from people without diabetes (25 controls). Fasting blood glucose was measured, case group had a mean value of 86.1, controls had a mean value of 82.2. Standard deviations were 2.09 and 2.49 respectively. Are these data consistent with the null hypothesis that the parents diabetic status does not affect their children’s blood glucose levels? Calculating the F statistic from the data gives a value of 36.01. Draw samples from the distribution: ``` >>> dfnum = 1. # between group degrees of freedom >>> dfden = 48. # within groups degrees of freedom >>> s = np.random.f(dfnum, dfden, 1000) ``` The lower bound for the top 1% of the samples is : ``` >>> np.sort(s)[-10] 7.61988120985 # random ``` So there is about a 1% chance that the F statistic will exceed 7.62, the measured value is 36, so the null hypothesis is rejected at the 1% level. © 2005–2022 NumPy Developers Licensed under the 3-clause BSD License. <https://numpy.org/doc/1.23/reference/random/generated/numpy.random.RandomState.f.htmlnumpy.random.RandomState.geometric ================================== method random.RandomState.geometric(*p*, *size=None*) Draw samples from the geometric distribution. 
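For dfden > 2, the F distribution has mean dfden / (dfden - 2), giving a quick sanity check on the draw above (sample size and tolerance are illustrative):

```python
import numpy as np

np.random.seed(0)
dfnum, dfden = 1.0, 48.0
s = np.random.f(dfnum, dfden, 200_000)

# Sample mean should approach dfden / (dfden - 2) = 48 / 46 ~= 1.043
```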
Bernoulli trials are experiments with one of two outcomes: success or failure (an example of such an experiment is flipping a coin). The geometric distribution models the number of trials that must be run in order to achieve success. It is therefore supported on the positive integers, `k = 1, 2, ...`. The probability mass function of the geometric distribution is \[f(k) = (1 - p)^{k - 1} p\] where `p` is the probability of success of an individual trial. Note New code should use the `geometric` method of a `default_rng()` instance instead; please see the [Quick Start](../index#random-quick-start). Parameters **p**float or array_like of floats The probability of success of an individual trial. **size**int or tuple of ints, optional Output shape. If the given shape is, e.g., `(m, n, k)`, then `m * n * k` samples are drawn. If size is `None` (default), a single value is returned if `p` is a scalar. Otherwise, `np.array(p).size` samples are drawn. Returns **out**ndarray or scalar Drawn samples from the parameterized geometric distribution. See also [`random.Generator.geometric`](numpy.random.generator.geometric#numpy.random.Generator.geometric "numpy.random.Generator.geometric") which should be used for new code. #### Examples Draw ten thousand values from the geometric distribution, with the probability of an individual success equal to 0.35: ``` >>> z = np.random.geometric(p=0.35, size=10000) ``` How many trials succeeded after a single run? ``` >>> (z == 1).sum() / 10000. 0.34889999999999999 #random ``` © 2005–2022 NumPy Developers Licensed under the 3-clause BSD License. <https://numpy.org/doc/1.23/reference/random/generated/numpy.random.RandomState.geometric.htmlnumpy.random.RandomState.gumbel =============================== method random.RandomState.gumbel(*loc=0.0*, *scale=1.0*, *size=None*) Draw samples from a Gumbel distribution. Draw samples from a Gumbel distribution with specified location and scale. 
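The support and first-trial probability of the geometric distribution described above can be verified numerically (the sample size and tolerance are illustrative):

```python
import numpy as np

np.random.seed(0)
z = np.random.geometric(p=0.35, size=100_000)

# Support starts at k = 1, and P(success on the first trial) = p
frac_first = (z == 1).mean()
```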
For more information on the Gumbel distribution, see Notes and References below. Note New code should use the `gumbel` method of a `default_rng()` instance instead; please see the [Quick Start](../index#random-quick-start). Parameters **loc**float or array_like of floats, optional The location of the mode of the distribution. Default is 0. **scale**float or array_like of floats, optional The scale parameter of the distribution. Default is 1. Must be non- negative. **size**int or tuple of ints, optional Output shape. If the given shape is, e.g., `(m, n, k)`, then `m * n * k` samples are drawn. If size is `None` (default), a single value is returned if `loc` and `scale` are both scalars. Otherwise, `np.broadcast(loc, scale).size` samples are drawn. Returns **out**ndarray or scalar Drawn samples from the parameterized Gumbel distribution. See also [`scipy.stats.gumbel_l`](https://docs.scipy.org/doc/scipy/reference/generated/scipy.stats.gumbel_l.html#scipy.stats.gumbel_l "(in SciPy v1.8.1)") [`scipy.stats.gumbel_r`](https://docs.scipy.org/doc/scipy/reference/generated/scipy.stats.gumbel_r.html#scipy.stats.gumbel_r "(in SciPy v1.8.1)") [`scipy.stats.genextreme`](https://docs.scipy.org/doc/scipy/reference/generated/scipy.stats.genextreme.html#scipy.stats.genextreme "(in SciPy v1.8.1)") [`weibull`](numpy.random.randomstate.weibull#numpy.random.RandomState.weibull "numpy.random.RandomState.weibull") [`random.Generator.gumbel`](numpy.random.generator.gumbel#numpy.random.Generator.gumbel "numpy.random.Generator.gumbel") which should be used for new code. #### Notes The Gumbel (or Smallest Extreme Value (SEV) or the Smallest Extreme Value Type I) distribution is one of a class of Generalized Extreme Value (GEV) distributions used in modeling extreme value problems. The Gumbel is a special case of the Extreme Value Type I distribution for maximums from distributions with “exponential-like” tails. 
The probability density for the Gumbel distribution is \[p(x) = \frac{e^{-(x - \mu)/ \beta}}{\beta} e^{ -e^{-(x - \mu)/ \beta}},\] where \(\mu\) is the mode, a location parameter, and \(\beta\) is the scale parameter. The Gumbel (named for German mathematician Emil <NAME>) was used very early in the hydrology literature, for modeling the occurrence of flood events. It is also used for modeling maximum wind speed and rainfall rates. It is a “fat-tailed” distribution - the probability of an event in the tail of the distribution is larger than if one used a Gaussian, hence the surprisingly frequent occurrence of 100-year floods. Floods were initially modeled as a Gaussian process, which underestimated the frequency of extreme events. It is one of a class of extreme value distributions, the Generalized Extreme Value (GEV) distributions, which also includes the Weibull and Frechet. The function has a mean of \(\mu + 0.57721\beta\) and a variance of \(\frac{\pi^2}{6}\beta^2\). #### References 1 <NAME>., “Statistics of Extremes,” New York: Columbia University Press, 1958. 2 <NAME>. and <NAME>., “Statistical Analysis of Extreme Values from Insurance, Finance, Hydrology and Other Fields,” Basel: Birkhauser Verlag, 2001. #### Examples Draw samples from the distribution: ``` >>> mu, beta = 0, 0.1 # location and scale >>> s = np.random.gumbel(mu, beta, 1000) ``` Display the histogram of the samples, along with the probability density function: ``` >>> import matplotlib.pyplot as plt >>> count, bins, ignored = plt.hist(s, 30, density=True) >>> plt.plot(bins, (1/beta)*np.exp(-(bins - mu)/beta) ... * np.exp( -np.exp( -(bins - mu) /beta) ), ... linewidth=2, color='r') >>> plt.show() ``` ![../../../_images/numpy-random-RandomState-gumbel-1_00_00.png] Show how an extreme value distribution can arise from a Gaussian process and compare to a Gaussian: ``` >>> means = [] >>> maxima = [] >>> for i in range(0,1000) : ... a = np.random.normal(mu, beta, 1000) ... means.append(a.mean()) ... 
maxima.append(a.max()) >>> count, bins, ignored = plt.hist(maxima, 30, density=True) >>> beta = np.std(maxima) * np.sqrt(6) / np.pi >>> mu = np.mean(maxima) - 0.57721*beta >>> plt.plot(bins, (1/beta)*np.exp(-(bins - mu)/beta) ... * np.exp(-np.exp(-(bins - mu)/beta)), ... linewidth=2, color='r') >>> plt.plot(bins, 1/(beta * np.sqrt(2 * np.pi)) ... * np.exp(-(bins - mu)**2 / (2 * beta**2)), ... linewidth=2, color='g') >>> plt.show() ``` ![../../../_images/numpy-random-RandomState-gumbel-1_01_00.png] © 2005–2022 NumPy Developers Licensed under the 3-clause BSD License. <https://numpy.org/doc/1.23/reference/random/generated/numpy.random.RandomState.gumbel.htmlnumpy.random.RandomState.hypergeometric ======================================= method random.RandomState.hypergeometric(*ngood*, *nbad*, *nsample*, *size=None*) Draw samples from a Hypergeometric distribution. Samples are drawn from a hypergeometric distribution with specified parameters, `ngood` (ways to make a good selection), `nbad` (ways to make a bad selection), and `nsample` (number of items sampled, which is less than or equal to the sum `ngood + nbad`). Note New code should use the `hypergeometric` method of a `default_rng()` instance instead; please see the [Quick Start](../index#random-quick-start). Parameters **ngood**int or array_like of ints Number of ways to make a good selection. Must be nonnegative. **nbad**int or array_like of ints Number of ways to make a bad selection. Must be nonnegative. **nsample**int or array_like of ints Number of items sampled. Must be at least 1 and at most `ngood + nbad`. **size**int or tuple of ints, optional Output shape. If the given shape is, e.g., `(m, n, k)`, then `m * n * k` samples are drawn. If size is `None` (default), a single value is returned if `ngood`, `nbad`, and `nsample` are all scalars. Otherwise, `np.broadcast(ngood, nbad, nsample).size` samples are drawn. 
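The mean and variance formulas quoted in the Gumbel Notes (mu + 0.57721*beta and pi**2/6 * beta**2) can be checked numerically (seed, sample size, and tolerances are illustrative):

```python
import numpy as np

np.random.seed(0)
mu, beta = 0.0, 0.1
s = np.random.gumbel(mu, beta, 200_000)

# Sample moments should approach mu + 0.57721*beta and (pi**2 / 6) * beta**2
```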
Returns **out**ndarray or scalar Drawn samples from the parameterized hypergeometric distribution. Each sample is the number of good items within a randomly selected subset of size `nsample` taken from a set of `ngood` good items and `nbad` bad items. See also [`scipy.stats.hypergeom`](https://docs.scipy.org/doc/scipy/reference/generated/scipy.stats.hypergeom.html#scipy.stats.hypergeom "(in SciPy v1.8.1)") probability density function, distribution or cumulative density function, etc. [`random.Generator.hypergeometric`](numpy.random.generator.hypergeometric#numpy.random.Generator.hypergeometric "numpy.random.Generator.hypergeometric") which should be used for new code. #### Notes The probability density for the Hypergeometric distribution is \[P(x) = \frac{\binom{g}{x}\binom{b}{n-x}}{\binom{g+b}{n}},\] where \(0 \le x \le n\) and \(n-b \le x \le g\) for P(x) the probability of `x` good results in the drawn sample, g = `ngood`, b = `nbad`, and n = `nsample`. Consider an urn with black and white marbles in it, `ngood` of them are black and `nbad` are white. If you draw `nsample` balls without replacement, then the hypergeometric distribution describes the distribution of black balls in the drawn sample. Note that this distribution is very similar to the binomial distribution, except that in this case, samples are drawn without replacement, whereas in the Binomial case samples are drawn with replacement (or the sample space is infinite). As the sample space becomes large, this distribution approaches the binomial. #### References 1 <NAME>, “Elementary Applied Statistics”, Bogden and Quigley, 1972. 2 Weisstein, <NAME>. “Hypergeometric Distribution.” From MathWorld–A Wolfram Web Resource. 
<http://mathworld.wolfram.com/HypergeometricDistribution.html 3 Wikipedia, “Hypergeometric distribution”, <https://en.wikipedia.org/wiki/Hypergeometric_distribution #### Examples Draw samples from the distribution: ``` >>> ngood, nbad, nsamp = 100, 2, 10 # number of good, number of bad, and number of samples >>> s = np.random.hypergeometric(ngood, nbad, nsamp, 1000) >>> from matplotlib.pyplot import hist >>> hist(s) # note that it is very unlikely to grab both bad items ``` Suppose you have an urn with 15 white and 15 black marbles. If you pull 15 marbles at random, how likely is it that 12 or more of them are one color? ``` >>> s = np.random.hypergeometric(15, 15, 15, 100000) >>> sum(s>=12)/100000. + sum(s<=3)/100000. # answer = 0.003 ... pretty unlikely! ``` © 2005–2022 NumPy Developers Licensed under the 3-clause BSD License. <https://numpy.org/doc/1.23/reference/random/generated/numpy.random.RandomState.hypergeometric.htmlnumpy.random.RandomState.laplace ================================ method random.RandomState.laplace(*loc=0.0*, *scale=1.0*, *size=None*) Draw samples from the Laplace or double exponential distribution with specified location (or mean) and scale (decay). The Laplace distribution is similar to the Gaussian/normal distribution, but is sharper at the peak and has fatter tails. It represents the difference between two independent, identically distributed exponential random variables. Note New code should use the `laplace` method of a `default_rng()` instance instead; please see the [Quick Start](../index#random-quick-start). Parameters **loc**float or array_like of floats, optional The position, \(\mu\), of the distribution peak. Default is 0. **scale**float or array_like of floats, optional \(\lambda\), the exponential decay. Default is 1. Must be non- negative. **size**int or tuple of ints, optional Output shape. If the given shape is, e.g., `(m, n, k)`, then `m * n * k` samples are drawn. 
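The urn example above can be complemented with a moment check: the hypergeometric mean is nsample * ngood / (ngood + nbad), and each draw is bounded by the sample composition (values and tolerance are illustrative):

```python
import numpy as np

np.random.seed(0)
ngood, nbad, nsamp = 100, 2, 10
s = np.random.hypergeometric(ngood, nbad, nsamp, 100_000)

# Each draw counts good items in the sample: at least nsamp - nbad, at most nsamp
```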
If size is `None` (default), a single value is returned if `loc` and `scale` are both scalars. Otherwise, `np.broadcast(loc, scale).size` samples are drawn. Returns **out**ndarray or scalar Drawn samples from the parameterized Laplace distribution. See also [`random.Generator.laplace`](numpy.random.generator.laplace#numpy.random.Generator.laplace "numpy.random.Generator.laplace") which should be used for new code. #### Notes It has the probability density function \[f(x; \mu, \lambda) = \frac{1}{2\lambda} \exp\left(-\frac{|x - \mu|}{\lambda}\right).\] The first law of Laplace, from 1774, states that the frequency of an error can be expressed as an exponential function of the absolute magnitude of the error, which leads to the Laplace distribution. For many problems in economics and health sciences, this distribution seems to model the data better than the standard Gaussian distribution. #### References 1 <NAME> <NAME>. (Eds.). “Handbook of Mathematical Functions with Formulas, Graphs, and Mathematical Tables, 9th printing,” New York: Dover, 1972. 2 <NAME>, et. al. “The Laplace Distribution and Generalizations, ” Birkhauser, 2001. 3 Weisstein, <NAME>. “Laplace Distribution.” From MathWorld–A Wolfram Web Resource. <http://mathworld.wolfram.com/LaplaceDistribution.html 4 Wikipedia, “Laplace distribution”, <https://en.wikipedia.org/wiki/Laplace_distribution #### Examples Draw samples from the distribution ``` >>> loc, scale = 0., 1. >>> s = np.random.laplace(loc, scale, 1000) ``` Display the histogram of the samples, along with the probability density function: ``` >>> import matplotlib.pyplot as plt >>> count, bins, ignored = plt.hist(s, 30, density=True) >>> x = np.arange(-8., 8., .01) >>> pdf = np.exp(-abs(x-loc)/scale)/(2.*scale) >>> plt.plot(x, pdf) ``` Plot Gaussian for comparison: ``` >>> g = (1/(scale * np.sqrt(2 * np.pi)) * ... 
np.exp(-(x - loc)**2 / (2 * scale**2))) >>> plt.plot(x,g) ``` ![../../../_images/numpy-random-RandomState-laplace-1.png] © 2005–2022 NumPy Developers Licensed under the 3-clause BSD License. <https://numpy.org/doc/1.23/reference/random/generated/numpy.random.RandomState.laplace.htmlnumpy.random.RandomState.logistic ================================= method random.RandomState.logistic(*loc=0.0*, *scale=1.0*, *size=None*) Draw samples from a logistic distribution. Samples are drawn from a logistic distribution with specified parameters, loc (location or mean, also median), and scale (>0). Note New code should use the `logistic` method of a `default_rng()` instance instead; please see the [Quick Start](../index#random-quick-start). Parameters **loc**float or array_like of floats, optional Parameter of the distribution. Default is 0. **scale**float or array_like of floats, optional Parameter of the distribution. Must be non-negative. Default is 1. **size**int or tuple of ints, optional Output shape. If the given shape is, e.g., `(m, n, k)`, then `m * n * k` samples are drawn. If size is `None` (default), a single value is returned if `loc` and `scale` are both scalars. Otherwise, `np.broadcast(loc, scale).size` samples are drawn. Returns **out**ndarray or scalar Drawn samples from the parameterized logistic distribution. See also [`scipy.stats.logistic`](https://docs.scipy.org/doc/scipy/reference/generated/scipy.stats.logistic.html#scipy.stats.logistic "(in SciPy v1.8.1)") probability density function, distribution or cumulative density function, etc. [`random.Generator.logistic`](numpy.random.generator.logistic#numpy.random.Generator.logistic "numpy.random.Generator.logistic") which should be used for new code. #### Notes The probability density for the Logistic distribution is \[P(x) = \frac{e^{-(x-\mu)/s}}{s(1+e^{-(x-\mu)/s})^2},\] where \(\mu\) = location and \(s\) = scale.
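A numerical check of the Laplace moments implied by its density (mean loc, variance 2 * scale**2); the parameters and tolerances below are illustrative:

```python
import numpy as np

np.random.seed(0)
loc, scale = 0.0, 1.0
s = np.random.laplace(loc, scale, 200_000)

# Sample mean should approach loc; sample variance should approach 2 * scale**2
```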
The Logistic distribution is used in Extreme Value problems where it can act as a mixture of Gumbel distributions, in Epidemiology, and by the World Chess Federation (FIDE) where it is used in the Elo ranking system, assuming the performance of each player is a logistically distributed random variable. #### References 1 <NAME>. and <NAME>. (2001), “Statistical Analysis of Extreme Values, from Insurance, Finance, Hydrology and Other Fields,” <NAME>, Basel, pp 132-133. 2 Weisstein, <NAME>. “Logistic Distribution.” From MathWorld–A Wolfram Web Resource. <http://mathworld.wolfram.com/LogisticDistribution.html 3 Wikipedia, “Logistic-distribution”, <https://en.wikipedia.org/wiki/Logistic_distribution #### Examples Draw samples from the distribution: ``` >>> loc, scale = 10, 1 >>> s = np.random.logistic(loc, scale, 10000) >>> import matplotlib.pyplot as plt >>> count, bins, ignored = plt.hist(s, bins=50) ``` # plot against distribution ``` >>> def logist(x, loc, scale): ... return np.exp((loc-x)/scale)/(scale*(1+np.exp((loc-x)/scale))**2) >>> lgst_val = logist(bins, loc, scale) >>> plt.plot(bins, lgst_val * count.max() / lgst_val.max()) >>> plt.show() ``` ![../../../_images/numpy-random-RandomState-logistic-1.png] © 2005–2022 NumPy Developers Licensed under the 3-clause BSD License. <https://numpy.org/doc/1.23/reference/random/generated/numpy.random.RandomState.logistic.htmlnumpy.random.RandomState.lognormal ================================== method random.RandomState.lognormal(*mean=0.0*, *sigma=1.0*, *size=None*) Draw samples from a log-normal distribution. Draw samples from a log-normal distribution with specified mean, standard deviation, and array shape. Note that the mean and standard deviation are not the values for the distribution itself, but of the underlying normal distribution it is derived from. Note New code should use the `lognormal` method of a `default_rng()` instance instead; please see the [Quick Start](../index#random-quick-start). 
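For the logistic distribution, the mean is loc and the variance is (pi * scale)**2 / 3, which a large sample at the example's parameters should reproduce (sample size and tolerances are illustrative):

```python
import numpy as np

np.random.seed(0)
loc, scale = 10.0, 1.0
s = np.random.logistic(loc, scale, 200_000)

# Sample mean should approach loc; sample variance should approach pi**2 / 3
```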
Parameters **mean**float or array_like of floats, optional Mean value of the underlying normal distribution. Default is 0. **sigma**float or array_like of floats, optional Standard deviation of the underlying normal distribution. Must be non-negative. Default is 1. **size**int or tuple of ints, optional Output shape. If the given shape is, e.g., `(m, n, k)`, then `m * n * k` samples are drawn. If size is `None` (default), a single value is returned if `mean` and `sigma` are both scalars. Otherwise, `np.broadcast(mean, sigma).size` samples are drawn. Returns **out**ndarray or scalar Drawn samples from the parameterized log-normal distribution. See also [`scipy.stats.lognorm`](https://docs.scipy.org/doc/scipy/reference/generated/scipy.stats.lognorm.html#scipy.stats.lognorm "(in SciPy v1.8.1)") probability density function, distribution, cumulative density function, etc. [`random.Generator.lognormal`](numpy.random.generator.lognormal#numpy.random.Generator.lognormal "numpy.random.Generator.lognormal") which should be used for new code. #### Notes A variable `x` has a log-normal distribution if `log(x)` is normally distributed. The probability density function for the log-normal distribution is: \[p(x) = \frac{1}{\sigma x \sqrt{2\pi}} e^{(-\frac{(ln(x)-\mu)^2}{2\sigma^2})}\] where \(\mu\) is the mean and \(\sigma\) is the standard deviation of the normally distributed logarithm of the variable. A log-normal distribution results if a random variable is the *product* of a large number of independent, identically-distributed variables in the same way that a normal distribution results if the variable is the *sum* of a large number of independent, identically-distributed variables. #### References 1 <NAME>., <NAME>., and <NAME>., “Log-normal Distributions across the Sciences: Keys and Clues,” BioScience, Vol. 51, No. 5, May, 2001. <https://stat.ethz.ch/~stahel/lognormal/bioscience.pdf 2 <NAME>. 
and <NAME>., “Statistical Analysis of Extreme Values,” Basel: B<NAME>, 2001, pp. 31-32. #### Examples Draw samples from the distribution: ``` >>> mu, sigma = 3., 1. # mean and standard deviation >>> s = np.random.lognormal(mu, sigma, 1000) ``` Display the histogram of the samples, along with the probability density function: ``` >>> import matplotlib.pyplot as plt >>> count, bins, ignored = plt.hist(s, 100, density=True, align='mid') ``` ``` >>> x = np.linspace(min(bins), max(bins), 10000) >>> pdf = (np.exp(-(np.log(x) - mu)**2 / (2 * sigma**2)) ... / (x * sigma * np.sqrt(2 * np.pi))) ``` ``` >>> plt.plot(x, pdf, linewidth=2, color='r') >>> plt.axis('tight') >>> plt.show() ``` ![../../../_images/numpy-random-RandomState-lognormal-1_00_00.png] Demonstrate that taking the products of random samples from a uniform distribution can be fit well by a log-normal probability density function. ``` >>> # Generate a thousand samples: each is the product of 100 random >>> # values, drawn from a normal distribution. >>> b = [] >>> for i in range(1000): ... a = 10. + np.random.standard_normal(100) ... b.append(np.product(a)) ``` ``` >>> b = np.array(b) / np.min(b) # scale values to be positive >>> count, bins, ignored = plt.hist(b, 100, density=True, align='mid') >>> sigma = np.std(np.log(b)) >>> mu = np.mean(np.log(b)) ``` ``` >>> x = np.linspace(min(bins), max(bins), 10000) >>> pdf = (np.exp(-(np.log(x) - mu)**2 / (2 * sigma**2)) ... / (x * sigma * np.sqrt(2 * np.pi))) ``` ``` >>> plt.plot(x, pdf, color='r', linewidth=2) >>> plt.show() ``` ![../../../_images/numpy-random-RandomState-lognormal-1_01_00.png] © 2005–2022 NumPy Developers Licensed under the 3-clause BSD License. <https://numpy.org/doc/1.23/reference/random/generated/numpy.random.RandomState.lognormal.htmlnumpy.random.RandomState.logseries ================================== method random.RandomState.logseries(*p*, *size=None*) Draw samples from a logarithmic series distribution. 
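The log series pmf, \(P(k) = \frac{-p^k}{k\ln(1-p)}\) for \(k = 1, 2, \ldots\), sums to 1 because \(\sum_k p^k/k = -\ln(1-p)\). A quick numerical confirmation, using a seeded Generator (seed arbitrary):

```python
import numpy as np

p = 0.6
k = np.arange(1, 200)  # tail terms beyond k ~ 200 are negligible for p = 0.6
pmf = -p**k / (k * np.log(1 - p))
assert abs(pmf.sum() - 1.0) < 1e-12

# the pmf matches the empirical frequencies of logseries draws
rng = np.random.default_rng(0)
s = rng.logseries(p, 100_000)
emp_p1 = np.mean(s == 1)       # P(k=1) = -p / ln(1-p)
assert abs(emp_p1 - pmf[0]) < 0.01
```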
Samples are drawn from a log series distribution with specified shape parameter, 0 < `p` < 1. Note New code should use the `logseries` method of a `default_rng()` instance instead; please see the [Quick Start](../index#random-quick-start). Parameters **p**float or array_like of floats Shape parameter for the distribution. Must be in the range (0, 1). **size**int or tuple of ints, optional Output shape. If the given shape is, e.g., `(m, n, k)`, then `m * n * k` samples are drawn. If size is `None` (default), a single value is returned if `p` is a scalar. Otherwise, `np.array(p).size` samples are drawn. Returns **out**ndarray or scalar Drawn samples from the parameterized logarithmic series distribution. See also [`scipy.stats.logser`](https://docs.scipy.org/doc/scipy/reference/generated/scipy.stats.logser.html#scipy.stats.logser "(in SciPy v1.8.1)") probability density function, distribution or cumulative density function, etc. [`random.Generator.logseries`](numpy.random.generator.logseries#numpy.random.Generator.logseries "numpy.random.Generator.logseries") which should be used for new code. #### Notes The probability density for the Log Series distribution is \[P(k) = \frac{-p^k}{k \ln(1-p)},\] where p = probability. The log series distribution is frequently used to represent species richness and occurrence, first proposed by Fisher, Corbet, and Williams in 1943 [2]. It may also be used to model the numbers of occupants seen in cars [3]. #### References 1 <NAME>.; <NAME>., Understanding regional species diversity through the log series distribution of occurrences: BIODIVERSITY RESEARCH Diversity & Distributions, Volume 5, Number 5, September 1999 , pp. 187-195(9). 2 <NAME>,, <NAME>, and <NAME>. 1943. The relation between the number of species and the number of individuals in a random sample of an animal population. Journal of Animal Ecology, 12:42-58. 3 <NAME>, <NAME>, <NAME>, <NAME>, A Handbook of Small Data Sets, CRC Press, 1994. 
4 Wikipedia, “Logarithmic distribution”, <https://en.wikipedia.org/wiki/Logarithmic_distribution #### Examples Draw samples from the distribution: ``` >>> a = .6 >>> s = np.random.logseries(a, 10000) >>> import matplotlib.pyplot as plt >>> count, bins, ignored = plt.hist(s) ``` # plot against distribution ``` >>> def logseries(k, p): ... return -p**k/(k*np.log(1-p)) >>> plt.plot(bins, logseries(bins, a)*count.max()/ ... logseries(bins, a).max(), 'r') >>> plt.show() ``` ![../../../_images/numpy-random-RandomState-logseries-1.png] <https://numpy.org/doc/1.23/reference/random/generated/numpy.random.RandomState.logseries.html> numpy.random.RandomState.multinomial ==================================== method random.RandomState.multinomial(*n*, *pvals*, *size=None*) Draw samples from a multinomial distribution. The multinomial distribution is a multivariate generalization of the binomial distribution. Take an experiment with one of `p` possible outcomes. An example of such an experiment is throwing a die, where the outcome can be 1 through 6. Each sample drawn from the distribution represents `n` such experiments. Its values, `X_i = [X_0, X_1, ..., X_p]`, represent the number of times the outcome was `i`. Note New code should use the `multinomial` method of a `default_rng()` instance instead; please see the [Quick Start](../index#random-quick-start). Parameters **n**int Number of experiments. **pvals**sequence of floats, length p Probabilities of each of the `p` different outcomes. These must sum to 1 (however, the last element is always assumed to account for the remaining probability, as long as `sum(pvals[:-1]) <= 1`). **size**int or tuple of ints, optional Output shape. If the given shape is, e.g., `(m, n, k)`, then `m * n * k` samples are drawn. Default is None, in which case a single value is returned. Returns **out**ndarray The drawn samples, of shape *size*, if that was provided.
If not, the shape is `(N,)`. In other words, each entry `out[i,j,...,:]` is an N-dimensional value drawn from the distribution. See also [`random.Generator.multinomial`](numpy.random.generator.multinomial#numpy.random.Generator.multinomial "numpy.random.Generator.multinomial") which should be used for new code. #### Examples Throw a dice 20 times: ``` >>> np.random.multinomial(20, [1/6.]*6, size=1) array([[4, 1, 7, 5, 2, 1]]) # random ``` It landed 4 times on 1, once on 2, etc. Now, throw the dice 20 times, and 20 times again: ``` >>> np.random.multinomial(20, [1/6.]*6, size=2) array([[3, 4, 3, 3, 4, 3], # random [2, 4, 3, 4, 0, 7]]) ``` For the first run, we threw 3 times 1, 4 times 2, etc. For the second, we threw 2 times 1, 4 times 2, etc. A loaded die is more likely to land on number 6: ``` >>> np.random.multinomial(100, [1/7.]*5 + [2/7.]) array([11, 16, 14, 17, 16, 26]) # random ``` The probability inputs should be normalized. As an implementation detail, the value of the last entry is ignored and assumed to take up any leftover probability mass, but this should not be relied on. A biased coin which has twice as much weight on one side as on the other should be sampled like so: ``` >>> np.random.multinomial(100, [1.0 / 3, 2.0 / 3]) # RIGHT array([38, 62]) # random ``` not like: ``` >>> np.random.multinomial(100, [1.0, 2.0]) # WRONG Traceback (most recent call last): ValueError: pvals < 0, pvals > 1 or pvals contains NaNs ``` © 2005–2022 NumPy Developers Licensed under the 3-clause BSD License. <https://numpy.org/doc/1.23/reference/random/generated/numpy.random.RandomState.multinomial.htmlnumpy.random.RandomState.multivariate_normal ============================================= method random.RandomState.multivariate_normal(*mean*, *cov*, *size=None*, *check_valid='warn'*, *tol=1e-8*) Draw random samples from a multivariate normal distribution. 
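A quick sanity check on `multinomial`, which the die-throwing examples above rely on implicitly: every drawn row partitions exactly `n` trials among the outcomes, so each row sums to `n`. A sketch using a seeded Generator (seed arbitrary):

```python
import numpy as np

rng = np.random.default_rng(42)  # arbitrary seed
draws = rng.multinomial(20, [1/6] * 6, size=1000)

# every row partitions exactly n = 20 trials among the 6 outcomes
assert draws.shape == (1000, 6)
assert (draws.sum(axis=1) == 20).all()

# empirical outcome frequencies approach pvals for a fair die
freq = draws.mean(axis=0) / 20
assert np.allclose(freq, 1/6, atol=0.02)
```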
The multivariate normal, multinormal or Gaussian distribution is a generalization of the one-dimensional normal distribution to higher dimensions. Such a distribution is specified by its mean and covariance matrix. These parameters are analogous to the mean (average or “center”) and variance (standard deviation, or “width,” squared) of the one-dimensional normal distribution. Note New code should use the `multivariate_normal` method of a `default_rng()` instance instead; please see the [Quick Start](../index#random-quick-start). Parameters **mean**1-D array_like, of length N Mean of the N-dimensional distribution. **cov**2-D array_like, of shape (N, N) Covariance matrix of the distribution. It must be symmetric and positive-semidefinite for proper sampling. **size**int or tuple of ints, optional Given a shape of, for example, `(m,n,k)`, `m*n*k` samples are generated, and packed in an `m`-by-`n`-by-`k` arrangement. Because each sample is `N`-dimensional, the output shape is `(m,n,k,N)`. If no shape is specified, a single (`N`-D) sample is returned. **check_valid**{ ‘warn’, ‘raise’, ‘ignore’ }, optional Behavior when the covariance matrix is not positive semidefinite. **tol**float, optional Tolerance when checking the singular values in covariance matrix. cov is cast to double before the check. Returns **out**ndarray The drawn samples, of shape *size*, if that was provided. If not, the shape is `(N,)`. In other words, each entry `out[i,j,...,:]` is an N-dimensional value drawn from the distribution. See also [`random.Generator.multivariate_normal`](numpy.random.generator.multivariate_normal#numpy.random.Generator.multivariate_normal "numpy.random.Generator.multivariate_normal") which should be used for new code. #### Notes The mean is a coordinate in N-dimensional space, which represents the location where samples are most likely to be generated. This is analogous to the peak of the bell curve for the one-dimensional or univariate normal distribution. 
Covariance indicates the level to which two variables vary together. From the multivariate normal distribution, we draw N-dimensional samples, \(X = [x_1, x_2, ... x_N]\). The covariance matrix element \(C_{ij}\) is the covariance of \(x_i\) and \(x_j\). The element \(C_{ii}\) is the variance of \(x_i\) (i.e. its “spread”). Instead of specifying the full covariance matrix, popular approximations include: * Spherical covariance ([`cov`](../../generated/numpy.cov#numpy.cov "numpy.cov") is a multiple of the identity matrix) * Diagonal covariance ([`cov`](../../generated/numpy.cov#numpy.cov "numpy.cov") has non-negative elements, and only on the diagonal) This geometrical property can be seen in two dimensions by plotting generated data-points: ``` >>> mean = [0, 0] >>> cov = [[1, 0], [0, 100]] # diagonal covariance ``` Diagonal covariance means that points are oriented along x or y-axis: ``` >>> import matplotlib.pyplot as plt >>> x, y = np.random.multivariate_normal(mean, cov, 5000).T >>> plt.plot(x, y, 'x') >>> plt.axis('equal') >>> plt.show() ``` Note that the covariance matrix must be positive semidefinite (a.k.a. nonnegative-definite). Otherwise, the behavior of this method is undefined and backwards compatibility is not guaranteed. #### References 1 <NAME>., “Probability, Random Variables, and Stochastic Processes,” 3rd ed., New York: McGraw-Hill, 1991. 2 <NAME>., <NAME>., and <NAME>., “Pattern Classification,” 2nd ed., New York: Wiley, 2001. #### Examples ``` >>> mean = (1, 2) >>> cov = [[1, 0], [0, 1]] >>> x = np.random.multivariate_normal(mean, cov, (3, 3)) >>> x.shape (3, 3, 2) ``` Here we generate 800 samples from the bivariate normal distribution with mean [0, 0] and covariance matrix [[6, -3], [-3, 3.5]]. The expected variances of the first and second components of the sample are 6 and 3.5, respectively, and the expected correlation coefficient is -3/sqrt(6*3.5) ≈ -0.65465. 
``` >>> cov = np.array([[6, -3], [-3, 3.5]]) >>> pts = np.random.multivariate_normal([0, 0], cov, size=800) ``` Check that the mean, covariance, and correlation coefficient of the sample are close to the expected values: ``` >>> pts.mean(axis=0) array([ 0.0326911 , -0.01280782]) # may vary >>> np.cov(pts.T) array([[ 5.96202397, -2.85602287], [-2.85602287, 3.47613949]]) # may vary >>> np.corrcoef(pts.T)[0, 1] -0.6273591314603949 # may vary ``` We can visualize this data with a scatter plot. The orientation of the point cloud illustrates the negative correlation of the components of this sample. ``` >>> import matplotlib.pyplot as plt >>> plt.plot(pts[:, 0], pts[:, 1], '.', alpha=0.5) >>> plt.axis('equal') >>> plt.grid() >>> plt.show() ``` ![../../../_images/numpy-random-RandomState-multivariate_normal-1.png] © 2005–2022 NumPy Developers Licensed under the 3-clause BSD License. <https://numpy.org/doc/1.23/reference/random/generated/numpy.random.RandomState.multivariate_normal.htmlnumpy.random.RandomState.negative_binomial =========================================== method random.RandomState.negative_binomial(*n*, *p*, *size=None*) Draw samples from a negative binomial distribution. Samples are drawn from a negative binomial distribution with specified parameters, `n` successes and `p` probability of success where `n` is > 0 and `p` is in the interval [0, 1]. Note New code should use the `negative_binomial` method of a `default_rng()` instance instead; please see the [Quick Start](../index#random-quick-start). Parameters **n**float or array_like of floats Parameter of the distribution, > 0. **p**float or array_like of floats Parameter of the distribution, >= 0 and <=1. **size**int or tuple of ints, optional Output shape. If the given shape is, e.g., `(m, n, k)`, then `m * n * k` samples are drawn. If size is `None` (default), a single value is returned if `n` and `p` are both scalars. Otherwise, `np.broadcast(n, p).size` samples are drawn. 
Returns **out**ndarray or scalar Drawn samples from the parameterized negative binomial distribution, where each sample is equal to N, the number of failures that occurred before a total of n successes was reached. See also [`random.Generator.negative_binomial`](numpy.random.generator.negative_binomial#numpy.random.Generator.negative_binomial "numpy.random.Generator.negative_binomial") which should be used for new code. #### Notes The probability mass function of the negative binomial distribution is \[P(N;n,p) = \frac{\Gamma(N+n)}{N!\Gamma(n)}p^{n}(1-p)^{N},\] where \(n\) is the number of successes, \(p\) is the probability of success, \(N+n\) is the number of trials, and \(\Gamma\) is the gamma function. When \(n\) is an integer, \(\frac{\Gamma(N+n)}{N!\Gamma(n)} = \binom{N+n-1}{N}\), which is the more common form of this term in the pmf. The negative binomial distribution gives the probability of N failures given n successes, with a success on the last trial. If one throws a die repeatedly until the third time a “1” appears, then the probability distribution of the number of non-“1”s that appear before the third “1” is a negative binomial distribution. #### References 1 Weisstein, <NAME>. “Negative Binomial Distribution.” From MathWorld–A Wolfram Web Resource. <http://mathworld.wolfram.com/NegativeBinomialDistribution.html 2 Wikipedia, “Negative binomial distribution”, <https://en.wikipedia.org/wiki/Negative_binomial_distribution #### Examples Draw samples from the distribution: A real world example. A company drills wild-cat oil exploration wells, each with an estimated probability of success of 0.1. What is the probability of having one success for each successive well, that is, what is the probability of a single success after drilling 5 wells, after 6 wells, etc.? ``` >>> s = np.random.negative_binomial(1, 0.1, 100000) >>> for i in range(1, 11): ... probability = sum(s<i) / 100000. ...
print(i, "wells drilled, probability of one success =", probability) ``` © 2005–2022 NumPy Developers Licensed under the 3-clause BSD License. <https://numpy.org/doc/1.23/reference/random/generated/numpy.random.RandomState.negative_binomial.htmlnumpy.random.RandomState.noncentral_chisquare ============================================== method random.RandomState.noncentral_chisquare(*df*, *nonc*, *size=None*) Draw samples from a noncentral chi-square distribution. The noncentral \(\chi^2\) distribution is a generalization of the \(\chi^2\) distribution. Note New code should use the `noncentral_chisquare` method of a `default_rng()` instance instead; please see the [Quick Start](../index#random-quick-start). Parameters **df**float or array_like of floats Degrees of freedom, must be > 0. Changed in version 1.10.0: Earlier NumPy versions required dfnum > 1. **nonc**float or array_like of floats Non-centrality, must be non-negative. **size**int or tuple of ints, optional Output shape. If the given shape is, e.g., `(m, n, k)`, then `m * n * k` samples are drawn. If size is `None` (default), a single value is returned if `df` and `nonc` are both scalars. Otherwise, `np.broadcast(df, nonc).size` samples are drawn. Returns **out**ndarray or scalar Drawn samples from the parameterized noncentral chi-square distribution. See also [`random.Generator.noncentral_chisquare`](numpy.random.generator.noncentral_chisquare#numpy.random.Generator.noncentral_chisquare "numpy.random.Generator.noncentral_chisquare") which should be used for new code. #### Notes The probability density function for the noncentral Chi-square distribution is \[P(x;df,nonc) = \sum^{\infty}_{i=0} \frac{e^{-nonc/2}(nonc/2)^{i}}{i!} P_{Y_{df+2i}}(x),\] where \(Y_{q}\) is the Chi-square with q degrees of freedom. 
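Two moments of this distribution, standard results not stated above, are mean df + nonc and variance 2·(df + 2·nonc). An empirical sketch with a seeded Generator (seed and parameter values arbitrary):

```python
import numpy as np

rng = np.random.default_rng(0)
df, nonc = 3.0, 20.0
s = rng.noncentral_chisquare(df, nonc, 1_000_000)

# mean of a noncentral chi-square is df + nonc
assert abs(s.mean() - (df + nonc)) < 0.1
# variance is 2*(df + 2*nonc)
assert abs(s.var() - 2 * (df + 2 * nonc)) < 1.0
```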
#### References 1 Wikipedia, “Noncentral chi-squared distribution” <https://en.wikipedia.org/wiki/Noncentral_chi-squared_distribution #### Examples Draw values from the distribution and plot the histogram ``` >>> import matplotlib.pyplot as plt >>> values = plt.hist(np.random.noncentral_chisquare(3, 20, 100000), ... bins=200, density=True) >>> plt.show() ``` ![../../../_images/numpy-random-RandomState-noncentral_chisquare-1_00_00.png] Draw values from a noncentral chisquare with very small noncentrality, and compare to a chisquare. ``` >>> plt.figure() >>> values = plt.hist(np.random.noncentral_chisquare(3, .0000001, 100000), ... bins=np.arange(0., 25, .1), density=True) >>> values2 = plt.hist(np.random.chisquare(3, 100000), ... bins=np.arange(0., 25, .1), density=True) >>> plt.plot(values[1][0:-1], values[0]-values2[0], 'ob') >>> plt.show() ``` ![../../../_images/numpy-random-RandomState-noncentral_chisquare-1_01_00.png] Demonstrate how large values of non-centrality lead to a more symmetric distribution. ``` >>> plt.figure() >>> values = plt.hist(np.random.noncentral_chisquare(3, 20, 100000), ... bins=200, density=True) >>> plt.show() ``` ![../../../_images/numpy-random-RandomState-noncentral_chisquare-1_02_00.png] <https://numpy.org/doc/1.23/reference/random/generated/numpy.random.RandomState.noncentral_chisquare.html> numpy.random.RandomState.noncentral_f ====================================== method random.RandomState.noncentral_f(*dfnum*, *dfden*, *nonc*, *size=None*) Draw samples from the noncentral F distribution. Samples are drawn from an F distribution with specified parameters, `dfnum` (degrees of freedom in numerator) and `dfden` (degrees of freedom in denominator), where both parameters must be greater than zero. `nonc` is the non-centrality parameter. Note New code should use the `noncentral_f` method of a `default_rng()` instance instead; please see the [Quick Start](../index#random-quick-start).
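A related standard identity, not given in this page and so worth verifying for yourself: for dfden > 2, the mean of the noncentral F distribution is dfden·(dfnum + nonc) / (dfnum·(dfden − 2)). An empirical check with a seeded Generator (seed and parameter values arbitrary):

```python
import numpy as np

rng = np.random.default_rng(1)
dfnum, dfden, nonc = 3.0, 20.0, 3.0
s = rng.noncentral_f(dfnum, dfden, nonc, 500_000)

# for dfden > 2, E[X] = dfden * (dfnum + nonc) / (dfnum * (dfden - 2))
expected = dfden * (dfnum + nonc) / (dfnum * (dfden - 2))
assert abs(s.mean() - expected) < 0.05
```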
Parameters **dfnum**float or array_like of floats Numerator degrees of freedom, must be > 0. Changed in version 1.14.0: Earlier NumPy versions required dfnum > 1. **dfden**float or array_like of floats Denominator degrees of freedom, must be > 0. **nonc**float or array_like of floats Non-centrality parameter, the sum of the squares of the numerator means, must be >= 0. **size**int or tuple of ints, optional Output shape. If the given shape is, e.g., `(m, n, k)`, then `m * n * k` samples are drawn. If size is `None` (default), a single value is returned if `dfnum`, `dfden`, and `nonc` are all scalars. Otherwise, `np.broadcast(dfnum, dfden, nonc).size` samples are drawn. Returns **out**ndarray or scalar Drawn samples from the parameterized noncentral Fisher distribution. See also [`random.Generator.noncentral_f`](numpy.random.generator.noncentral_f#numpy.random.Generator.noncentral_f "numpy.random.Generator.noncentral_f") which should be used for new code. #### Notes When calculating the power of an experiment (power = probability of rejecting the null hypothesis when a specific alternative is true) the non-central F statistic becomes important. When the null hypothesis is true, the F statistic follows a central F distribution. When the null hypothesis is not true, then it follows a non-central F statistic. #### References 1 Weisstein, <NAME>. “Noncentral F-Distribution.” From MathWorld–A Wolfram Web Resource. <http://mathworld.wolfram.com/NoncentralF-Distribution.html 2 Wikipedia, “Noncentral F-distribution”, <https://en.wikipedia.org/wiki/Noncentral_F-distribution #### Examples In a study, testing for a specific alternative to the null hypothesis requires use of the Noncentral F distribution. We need to calculate the area in the tail of the distribution that exceeds the value of the F distribution for the null hypothesis. We’ll plot the two probability distributions for comparison. 
``` >>> dfnum = 3 # between group degrees of freedom >>> dfden = 20 # within groups degrees of freedom >>> nonc = 3.0 >>> nc_vals = np.random.noncentral_f(dfnum, dfden, nonc, 1000000) >>> NF = np.histogram(nc_vals, bins=50, density=True) >>> c_vals = np.random.f(dfnum, dfden, 1000000) >>> F = np.histogram(c_vals, bins=50, density=True) >>> import matplotlib.pyplot as plt >>> plt.plot(F[1][1:], F[0]) >>> plt.plot(NF[1][1:], NF[0]) >>> plt.show() ``` ![../../../_images/numpy-random-RandomState-noncentral_f-1.png] <https://numpy.org/doc/1.23/reference/random/generated/numpy.random.RandomState.noncentral_f.html> numpy.random.RandomState.normal =============================== method random.RandomState.normal(*loc=0.0*, *scale=1.0*, *size=None*) Draw random samples from a normal (Gaussian) distribution. The probability density function of the normal distribution, first derived by De Moivre and 200 years later by both Gauss and Laplace independently [[2]](#ra2e838c5ea87-2), is often called the bell curve because of its characteristic shape (see the example below). The normal distribution occurs often in nature. For example, it describes the commonly occurring distribution of samples influenced by a large number of tiny, random disturbances, each with its own unique distribution [[2]](#ra2e838c5ea87-2). Note New code should use the `normal` method of a `default_rng()` instance instead; please see the [Quick Start](../index#random-quick-start). Parameters **loc**float or array_like of floats Mean (“centre”) of the distribution. **scale**float or array_like of floats Standard deviation (spread or “width”) of the distribution. Must be non-negative. **size**int or tuple of ints, optional Output shape. If the given shape is, e.g., `(m, n, k)`, then `m * n * k` samples are drawn. If size is `None` (default), a single value is returned if `loc` and `scale` are both scalars.
Otherwise, `np.broadcast(loc, scale).size` samples are drawn. Returns **out**ndarray or scalar Drawn samples from the parameterized normal distribution. See also [`scipy.stats.norm`](https://docs.scipy.org/doc/scipy/reference/generated/scipy.stats.norm.html#scipy.stats.norm "(in SciPy v1.8.1)") probability density function, distribution or cumulative density function, etc. [`random.Generator.normal`](numpy.random.generator.normal#numpy.random.Generator.normal "numpy.random.Generator.normal") which should be used for new code. #### Notes The probability density for the Gaussian distribution is \[p(x) = \frac{1}{\sqrt{ 2 \pi \sigma^2 }} e^{ - \frac{ (x - \mu)^2 } {2 \sigma^2} },\] where \(\mu\) is the mean and \(\sigma\) the standard deviation. The square of the standard deviation, \(\sigma^2\), is called the variance. The function has its peak at the mean, and its “spread” increases with the standard deviation (the function reaches 0.607 times its maximum at \(x + \sigma\) and \(x - \sigma\) [[2]](#ra2e838c5ea87-2)). This implies that normal is more likely to return samples lying close to the mean, rather than those far away. #### References 1 Wikipedia, “Normal distribution”, <https://en.wikipedia.org/wiki/Normal_distribution2([1](#id1),[2](#id2),[3](#id3)) <NAME>., “Central Limit Theorem” in “Probability, Random Variables and Random Signal Principles”, 4th ed., 2001, pp. 51, 51, 125. #### Examples Draw samples from the distribution: ``` >>> mu, sigma = 0, 0.1 # mean and standard deviation >>> s = np.random.normal(mu, sigma, 1000) ``` Verify the mean and the variance: ``` >>> abs(mu - np.mean(s)) 0.0 # may vary ``` ``` >>> abs(sigma - np.std(s, ddof=1)) 0.1 # may vary ``` Display the histogram of the samples, along with the probability density function: ``` >>> import matplotlib.pyplot as plt >>> count, bins, ignored = plt.hist(s, 30, density=True) >>> plt.plot(bins, 1/(sigma * np.sqrt(2 * np.pi)) * ... np.exp( - (bins - mu)**2 / (2 * sigma**2) ), ... 
linewidth=2, color='r') >>> plt.show() ``` ![../../../_images/numpy-random-RandomState-normal-1_00_00.png] Two-by-four array of samples from N(3, 6.25): ``` >>> np.random.normal(3, 2.5, size=(2, 4)) array([[-4.49401501, 4.00950034, -1.81814867, 7.29718677], # random [ 0.39924804, 4.68456316, 4.99394529, 4.84057254]]) # random ``` <https://numpy.org/doc/1.23/reference/random/generated/numpy.random.RandomState.normal.html> numpy.random.RandomState.pareto =============================== method random.RandomState.pareto(*a*, *size=None*) Draw samples from a Pareto II or Lomax distribution with specified shape. The Lomax or Pareto II distribution is a shifted Pareto distribution. The classical Pareto distribution can be obtained from the Lomax distribution by adding 1 and multiplying by the scale parameter `m` (see Notes). The smallest value of the Lomax distribution is zero while for the classical Pareto distribution it is `mu`, where the standard Pareto distribution has location `mu = 1`. Lomax can also be considered as a simplified version of the Generalized Pareto distribution (available in SciPy), with the scale set to one and the location set to zero. The Pareto distribution must be greater than zero, and is unbounded above. It is also known as the “80-20 rule”. In this distribution, 80 percent of the weights are in the lowest 20 percent of the range, while the other 20 percent fill the remaining 80 percent of the range. Note New code should use the `pareto` method of a `default_rng()` instance instead; please see the [Quick Start](../index#random-quick-start). Parameters **a**float or array_like of floats Shape of the distribution. Must be positive. **size**int or tuple of ints, optional Output shape. If the given shape is, e.g., `(m, n, k)`, then `m * n * k` samples are drawn. If size is `None` (default), a single value is returned if `a` is a scalar. Otherwise, `np.array(a).size` samples are drawn.
Returns **out**ndarray or scalar Drawn samples from the parameterized Pareto distribution. See also [`scipy.stats.lomax`](https://docs.scipy.org/doc/scipy/reference/generated/scipy.stats.lomax.html#scipy.stats.lomax "(in SciPy v1.8.1)") probability density function, distribution or cumulative density function, etc. [`scipy.stats.genpareto`](https://docs.scipy.org/doc/scipy/reference/generated/scipy.stats.genpareto.html#scipy.stats.genpareto "(in SciPy v1.8.1)") probability density function, distribution or cumulative density function, etc. [`random.Generator.pareto`](numpy.random.generator.pareto#numpy.random.Generator.pareto "numpy.random.Generator.pareto") which should be used for new code. #### Notes The probability density for the Pareto distribution is \[p(x) = \frac{am^a}{x^{a+1}}\] where \(a\) is the shape and \(m\) the scale. The Pareto distribution, named after the Italian economist <NAME>, is a power law probability distribution useful in many real world problems. Outside the field of economics it is generally referred to as the Bradford distribution. Pareto developed the distribution to describe the distribution of wealth in an economy. It has also found use in insurance, web page access statistics, oil field sizes, and many other problems, including the download frequency for projects in Sourceforge [[1]](#r4338a4b3d731-1). It is one of the so-called “fat-tailed” distributions. #### References [1](#id1) <NAME> and <NAME>, On the Pareto Distribution of Sourceforge projects. 2 <NAME>. (1896). Course of Political Economy. Lausanne. 3 <NAME>., <NAME>.(2001), Statistical Analysis of Extreme Values, <NAME>, pp 23-30. 4 Wikipedia, “Pareto distribution”, <https://en.wikipedia.org/wiki/Pareto_distribution #### Examples Draw samples from the distribution: ``` >>> a, m = 3., 2. 
# shape and mode >>> s = (np.random.pareto(a, 1000) + 1) * m ``` Display the histogram of the samples, along with the probability density function: ``` >>> import matplotlib.pyplot as plt >>> count, bins, _ = plt.hist(s, 100, density=True) >>> fit = a*m**a / bins**(a+1) >>> plt.plot(bins, max(count)*fit/max(fit), linewidth=2, color='r') >>> plt.show() ``` ![../../../_images/numpy-random-RandomState-pareto-1.png] © 2005–2022 NumPy Developers Licensed under the 3-clause BSD License. <https://numpy.org/doc/1.23/reference/random/generated/numpy.random.RandomState.pareto.htmlnumpy.random.RandomState.poisson ================================ method random.RandomState.poisson(*lam=1.0*, *size=None*) Draw samples from a Poisson distribution. The Poisson distribution is the limit of the binomial distribution for large N. Note New code should use the `poisson` method of a `default_rng()` instance instead; please see the [Quick Start](../index#random-quick-start). Parameters **lam**float or array_like of floats Expected number of events occurring in a fixed-time interval, must be >= 0. A sequence must be broadcastable over the requested size. **size**int or tuple of ints, optional Output shape. If the given shape is, e.g., `(m, n, k)`, then `m * n * k` samples are drawn. If size is `None` (default), a single value is returned if `lam` is a scalar. Otherwise, `np.array(lam).size` samples are drawn. Returns **out**ndarray or scalar Drawn samples from the parameterized Poisson distribution. See also [`random.Generator.poisson`](numpy.random.generator.poisson#numpy.random.Generator.poisson "numpy.random.Generator.poisson") which should be used for new code. #### Notes The Poisson distribution \[f(k; \lambda)=\frac{\lambda^k e^{-\lambda}}{k!}\] For events with an expected separation \(\lambda\) the Poisson distribution \(f(k; \lambda)\) describes the probability of \(k\) events occurring within the observed interval \(\lambda\). 
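A direct consequence of the pmf above is that both the mean and the variance of a Poisson variable equal \(\lambda\). A quick empirical sketch with a seeded Generator (seed arbitrary), including the broadcast-lam form used in the examples:

```python
import numpy as np

rng = np.random.default_rng(7)
lam = 5.0
s = rng.poisson(lam, 1_000_000)

# Poisson mean and variance are both lam
assert abs(s.mean() - lam) < 0.05
assert abs(s.var() - lam) < 0.1

# broadcasting lam, as in the example: 100 draws each for lam = 100 and 500
t = rng.poisson(lam=(100.0, 500.0), size=(100, 2))
assert t.shape == (100, 2)
assert t[:, 1].mean() > t[:, 0].mean()
```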
Because the output is limited to the range of the C int64 type, a ValueError is raised when `lam` is within 10 sigma of the maximum representable value. #### References 1 Weisstein, <NAME>. “Poisson Distribution.” From MathWorld–A Wolfram Web Resource. <http://mathworld.wolfram.com/PoissonDistribution.html 2 Wikipedia, “Poisson distribution”, <https://en.wikipedia.org/wiki/Poisson_distribution #### Examples Draw samples from the distribution: ``` >>> import numpy as np >>> s = np.random.poisson(5, 10000) ``` Display histogram of the sample: ``` >>> import matplotlib.pyplot as plt >>> count, bins, ignored = plt.hist(s, 14, density=True) >>> plt.show() ``` ![../../../_images/numpy-random-RandomState-poisson-1_00_00.png] Draw each 100 values for lambda 100 and 500: ``` >>> s = np.random.poisson(lam=(100., 500.), size=(100, 2)) ``` © 2005–2022 NumPy Developers Licensed under the 3-clause BSD License. <https://numpy.org/doc/1.23/reference/random/generated/numpy.random.RandomState.poisson.htmlnumpy.random.RandomState.power ============================== method random.RandomState.power(*a*, *size=None*) Draws samples in [0, 1] from a power distribution with positive exponent a - 1. Also known as the power function distribution. Note New code should use the `power` method of a `default_rng()` instance instead; please see the [Quick Start](../index#random-quick-start). Parameters **a**float or array_like of floats Parameter of the distribution. Must be non-negative. **size**int or tuple of ints, optional Output shape. If the given shape is, e.g., `(m, n, k)`, then `m * n * k` samples are drawn. If size is `None` (default), a single value is returned if `a` is a scalar. Otherwise, `np.array(a).size` samples are drawn. Returns **out**ndarray or scalar Drawn samples from the parameterized power distribution. Raises ValueError If a <= 0. 
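The `ValueError` for `a <= 0` noted above can be exercised directly; a minimal sketch:

```python
import numpy as np

# Sketch: np.random.power rejects non-positive shape parameters,
# as documented in the Raises section above.
try:
    np.random.power(-1.0)
    raised = False
except ValueError:
    raised = True
```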
See also [`random.Generator.power`](numpy.random.generator.power#numpy.random.Generator.power "numpy.random.Generator.power") which should be used for new code. #### Notes The probability density function is \[P(x; a) = ax^{a-1}, 0 \le x \le 1, a>0.\] The power function distribution is just the inverse of the Pareto distribution. It may also be seen as a special case of the Beta distribution. It is used, for example, in modeling the over-reporting of insurance claims. #### References 1 <NAME>, <NAME>, “Statistical size distributions in economics and actuarial sciences”, Wiley, 2003. 2 <NAME>. and Filliben, <NAME>. “NIST Handbook 148: Dataplot Reference Manual, Volume 2: Let Subcommands and Library Functions”, National Institute of Standards and Technology Handbook Series, June 2003. <https://www.itl.nist.gov/div898/software/dataplot/refman2/auxillar/powpdf.pdf #### Examples Draw samples from the distribution: ``` >>> a = 5. # shape >>> samples = 1000 >>> s = np.random.power(a, samples) ``` Display the histogram of the samples, along with the probability density function: ``` >>> import matplotlib.pyplot as plt >>> count, bins, ignored = plt.hist(s, bins=30) >>> x = np.linspace(0, 1, 100) >>> y = a*x**(a-1.) >>> normed_y = samples*np.diff(bins)[0]*y >>> plt.plot(x, normed_y) >>> plt.show() ``` ![../../../_images/numpy-random-RandomState-power-1_00_00.png] Compare the power function distribution to the inverse of the Pareto. 
``` >>> from scipy import stats >>> rvs = np.random.power(5, 1000000) >>> rvsp = np.random.pareto(5, 1000000) >>> xx = np.linspace(0,1,100) >>> powpdf = stats.powerlaw.pdf(xx,5) ``` ``` >>> plt.figure() >>> plt.hist(rvs, bins=50, density=True) >>> plt.plot(xx,powpdf,'r-') >>> plt.title('np.random.power(5)') ``` ``` >>> plt.figure() >>> plt.hist(1./(1.+rvsp), bins=50, density=True) >>> plt.plot(xx,powpdf,'r-') >>> plt.title('inverse of 1 + np.random.pareto(5)') ``` ``` >>> plt.figure() >>> plt.hist(1./(1.+rvsp), bins=50, density=True) >>> plt.plot(xx,powpdf,'r-') >>> plt.title('inverse of stats.pareto(5)') ``` ![../../../_images/numpy-random-RandomState-power-1_01_00.png] © 2005–2022 NumPy Developers Licensed under the 3-clause BSD License. <https://numpy.org/doc/1.23/reference/random/generated/numpy.random.RandomState.power.htmlnumpy.random.RandomState.rayleigh ================================= method random.RandomState.rayleigh(*scale=1.0*, *size=None*) Draw samples from a Rayleigh distribution. The \(\chi\) and Weibull distributions are generalizations of the Rayleigh. Note New code should use the `rayleigh` method of a `default_rng()` instance instead; please see the [Quick Start](../index#random-quick-start). Parameters **scale**float or array_like of floats, optional Scale, also equals the mode. Must be non-negative. Default is 1. **size**int or tuple of ints, optional Output shape. If the given shape is, e.g., `(m, n, k)`, then `m * n * k` samples are drawn. If size is `None` (default), a single value is returned if `scale` is a scalar. Otherwise, `np.array(scale).size` samples are drawn. Returns **out**ndarray or scalar Drawn samples from the parameterized Rayleigh distribution. See also [`random.Generator.rayleigh`](numpy.random.generator.rayleigh#numpy.random.Generator.rayleigh "numpy.random.Generator.rayleigh") which should be used for new code. 
#### Notes The probability density function for the Rayleigh distribution is \[P(x;scale) = \frac{x}{scale^2}e^{\frac{-x^2}{2 \cdotp scale^2}}\] The Rayleigh distribution would arise, for example, if the East and North components of the wind velocity had identical zero-mean Gaussian distributions. Then the wind speed would have a Rayleigh distribution. #### References 1 Brighton Webs Ltd., “Rayleigh Distribution,” <https://web.archive.org/web/20090514091424/http://brighton-webs.co.uk:80/distributions/rayleigh.asp 2 Wikipedia, “Rayleigh distribution” <https://en.wikipedia.org/wiki/Rayleigh_distribution #### Examples Draw values from the distribution and plot the histogram ``` >>> from matplotlib.pyplot import hist >>> values = hist(np.random.rayleigh(3, 100000), bins=200, density=True) ``` Wave heights tend to follow a Rayleigh distribution. If the mean wave height is 1 meter, what fraction of waves are likely to be larger than 3 meters? ``` >>> meanvalue = 1 >>> modevalue = np.sqrt(2 / np.pi) * meanvalue >>> s = np.random.rayleigh(modevalue, 1000000) ``` The percentage of waves larger than 3 meters is: ``` >>> 100.*sum(s>3)/1000000. 0.087300000000000003 # random ``` © 2005–2022 NumPy Developers Licensed under the 3-clause BSD License. <https://numpy.org/doc/1.23/reference/random/generated/numpy.random.RandomState.rayleigh.htmlnumpy.random.RandomState.standard_cauchy ========================================= method random.RandomState.standard_cauchy(*size=None*) Draw samples from a standard Cauchy distribution with mode = 0. Also known as the Lorentz distribution. Note New code should use the `standard_cauchy` method of a `default_rng()` instance instead; please see the [Quick Start](../index#random-quick-start). Parameters **size**int or tuple of ints, optional Output shape. If the given shape is, e.g., `(m, n, k)`, then `m * n * k` samples are drawn. Default is None, in which case a single value is returned. Returns **samples**ndarray or scalar The drawn samples. 
See also [`random.Generator.standard_cauchy`](numpy.random.generator.standard_cauchy#numpy.random.Generator.standard_cauchy "numpy.random.Generator.standard_cauchy") which should be used for new code. #### Notes The probability density function for the full Cauchy distribution is \[P(x; x_0, \gamma) = \frac{1}{\pi \gamma \bigl[ 1+ (\frac{x-x_0}{\gamma})^2 \bigr] }\] and the Standard Cauchy distribution just sets \(x_0=0\) and \(\gamma=1\) The Cauchy distribution arises in the solution to the driven harmonic oscillator problem, and also describes spectral line broadening. It also describes the distribution of values at which a line tilted at a random angle will cut the x axis. When studying hypothesis tests that assume normality, seeing how the tests perform on data from a Cauchy distribution is a good indicator of their sensitivity to a heavy-tailed distribution, since the Cauchy looks very much like a Gaussian distribution, but with heavier tails. #### References 1 NIST/SEMATECH e-Handbook of Statistical Methods, “Cauchy Distribution”, <https://www.itl.nist.gov/div898/handbook/eda/section3/eda3663.htm 2 Weisstein, <NAME>. “Cauchy Distribution.” From MathWorld–A Wolfram Web Resource. <http://mathworld.wolfram.com/CauchyDistribution.html 3 Wikipedia, “Cauchy distribution” <https://en.wikipedia.org/wiki/Cauchy_distribution #### Examples Draw samples and plot the distribution: ``` >>> import matplotlib.pyplot as plt >>> s = np.random.standard_cauchy(1000000) >>> s = s[(s>-25) & (s<25)] # truncate distribution so it plots well >>> plt.hist(s, bins=100) >>> plt.show() ``` ![../../../_images/numpy-random-RandomState-standard_cauchy-1.png] © 2005–2022 NumPy Developers Licensed under the 3-clause BSD License. 
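The sensitivity to heavy tails described in the Notes can also be seen numerically. A small sketch (the seed and sample size are illustrative choices, not from the docs): the sample median of standard Cauchy draws is stable near 0, while extreme draws keep the moments from settling:

```python
import numpy as np

np.random.seed(0)                 # legacy RandomState API, as on this page
s = np.random.standard_cauchy(100_000)

# The median of a standard Cauchy is 0 and is estimated stably...
med = np.median(s)

# ...but the mean and variance are undefined: a few enormous draws dominate.
largest = np.max(np.abs(s))
```

With 10^5 draws, values beyond ±100 appear hundreds of times in expectation, which is why the example on this page truncates to (-25, 25) before plotting.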
<https://numpy.org/doc/1.23/reference/random/generated/numpy.random.RandomState.standard_cauchy.htmlnumpy.random.RandomState.standard_exponential ============================================== method random.RandomState.standard_exponential(*size=None*) Draw samples from the standard exponential distribution. [`standard_exponential`](#numpy.random.RandomState.standard_exponential "numpy.random.RandomState.standard_exponential") is identical to the exponential distribution with a scale parameter of 1. Note New code should use the `standard_exponential` method of a `default_rng()` instance instead; please see the [Quick Start](../index#random-quick-start). Parameters **size**int or tuple of ints, optional Output shape. If the given shape is, e.g., `(m, n, k)`, then `m * n * k` samples are drawn. Default is None, in which case a single value is returned. Returns **out**float or ndarray Drawn samples. See also [`random.Generator.standard_exponential`](numpy.random.generator.standard_exponential#numpy.random.Generator.standard_exponential "numpy.random.Generator.standard_exponential") which should be used for new code. #### Examples Output a 3x8000 array: ``` >>> n = np.random.standard_exponential((3, 8000)) ``` © 2005–2022 NumPy Developers Licensed under the 3-clause BSD License. <https://numpy.org/doc/1.23/reference/random/generated/numpy.random.RandomState.standard_exponential.htmlnumpy.random.RandomState.standard_gamma ======================================== method random.RandomState.standard_gamma(*shape*, *size=None*) Draw samples from a standard Gamma distribution. Samples are drawn from a Gamma distribution with specified parameters, shape (sometimes designated “k”) and scale=1. Note New code should use the `standard_gamma` method of a `default_rng()` instance instead; please see the [Quick Start](../index#random-quick-start). Parameters **shape**float or array_like of floats Parameter, must be non-negative. **size**int or tuple of ints, optional Output shape. 
If the given shape is, e.g., `(m, n, k)`, then `m * n * k` samples are drawn. If size is `None` (default), a single value is returned if `shape` is a scalar. Otherwise, `np.array(shape).size` samples are drawn. Returns **out**ndarray or scalar Drawn samples from the parameterized standard gamma distribution. See also [`scipy.stats.gamma`](https://docs.scipy.org/doc/scipy/reference/generated/scipy.stats.gamma.html#scipy.stats.gamma "(in SciPy v1.8.1)") probability density function, distribution or cumulative density function, etc. [`random.Generator.standard_gamma`](numpy.random.generator.standard_gamma#numpy.random.Generator.standard_gamma "numpy.random.Generator.standard_gamma") which should be used for new code. #### Notes The probability density for the Gamma distribution is \[p(x) = x^{k-1}\frac{e^{-x/\theta}}{\theta^k\Gamma(k)},\] where \(k\) is the shape and \(\theta\) the scale, and \(\Gamma\) is the Gamma function. The Gamma distribution is often used to model the times to failure of electronic components, and arises naturally in processes for which the waiting times between Poisson distributed events are relevant. #### References 1 Weisstein, <NAME>. “Gamma Distribution.” From MathWorld–A Wolfram Web Resource. <http://mathworld.wolfram.com/GammaDistribution.html 2 Wikipedia, “Gamma distribution”, <https://en.wikipedia.org/wiki/Gamma_distribution #### Examples Draw samples from the distribution: ``` >>> shape, scale = 2., 1. # mean and width >>> s = np.random.standard_gamma(shape, 1000000) ``` Display the histogram of the samples, along with the probability density function: ``` >>> import matplotlib.pyplot as plt >>> import scipy.special as sps >>> count, bins, ignored = plt.hist(s, 50, density=True) >>> y = bins**(shape-1) * ((np.exp(-bins/scale))/ ... 
(sps.gamma(shape) * scale**shape)) >>> plt.plot(bins, y, linewidth=2, color='r') >>> plt.show() ``` ![../../../_images/numpy-random-RandomState-standard_gamma-1.png] © 2005–2022 NumPy Developers Licensed under the 3-clause BSD License. <https://numpy.org/doc/1.23/reference/random/generated/numpy.random.RandomState.standard_gamma.htmlnumpy.random.RandomState.standard_normal ========================================= method random.RandomState.standard_normal(*size=None*) Draw samples from a standard Normal distribution (mean=0, stdev=1). Note New code should use the `standard_normal` method of a `default_rng()` instance instead; please see the [Quick Start](../index#random-quick-start). Parameters **size**int or tuple of ints, optional Output shape. If the given shape is, e.g., `(m, n, k)`, then `m * n * k` samples are drawn. Default is None, in which case a single value is returned. Returns **out**float or ndarray A floating-point array of shape `size` of drawn samples, or a single sample if `size` was not specified. See also [`normal`](numpy.random.randomstate.normal#numpy.random.RandomState.normal "numpy.random.RandomState.normal") Equivalent function with additional `loc` and `scale` arguments for setting the mean and standard deviation. [`random.Generator.standard_normal`](numpy.random.generator.standard_normal#numpy.random.Generator.standard_normal "numpy.random.Generator.standard_normal") which should be used for new code. #### Notes For random samples from \(N(\mu, \sigma^2)\), use one of: ``` mu + sigma * np.random.standard_normal(size=...) np.random.normal(mu, sigma, size=...) 
``` #### Examples ``` >>> np.random.standard_normal() 2.1923875335537315 #random ``` ``` >>> s = np.random.standard_normal(8000) >>> s array([ 0.6888893 , 0.78096262, -0.89086505, ..., 0.49876311, # random -0.38672696, -0.4685006 ]) # random >>> s.shape (8000,) >>> s = np.random.standard_normal(size=(3, 4, 2)) >>> s.shape (3, 4, 2) ``` Two-by-four array of samples from \(N(3, 6.25)\): ``` >>> 3 + 2.5 * np.random.standard_normal(size=(2, 4)) array([[-4.49401501, 4.00950034, -1.81814867, 7.29718677], # random [ 0.39924804, 4.68456316, 4.99394529, 4.84057254]]) # random ``` © 2005–2022 NumPy Developers Licensed under the 3-clause BSD License. <https://numpy.org/doc/1.23/reference/random/generated/numpy.random.RandomState.standard_normal.htmlnumpy.random.RandomState.triangular =================================== method random.RandomState.triangular(*left*, *mode*, *right*, *size=None*) Draw samples from the triangular distribution over the interval `[left, right]`. The triangular distribution is a continuous probability distribution with lower limit left, peak at mode, and upper limit right. Unlike the other distributions, these parameters directly define the shape of the pdf. Note New code should use the `triangular` method of a `default_rng()` instance instead; please see the [Quick Start](../index#random-quick-start). Parameters **left**float or array_like of floats Lower limit. **mode**float or array_like of floats The value where the peak of the distribution occurs. The value must fulfill the condition `left <= mode <= right`. **right**float or array_like of floats Upper limit, must be larger than `left`. **size**int or tuple of ints, optional Output shape. If the given shape is, e.g., `(m, n, k)`, then `m * n * k` samples are drawn. If size is `None` (default), a single value is returned if `left`, `mode`, and `right` are all scalars. Otherwise, `np.broadcast(left, mode, right).size` samples are drawn. 
Returns **out**ndarray or scalar Drawn samples from the parameterized triangular distribution. See also [`random.Generator.triangular`](numpy.random.generator.triangular#numpy.random.Generator.triangular "numpy.random.Generator.triangular") which should be used for new code. #### Notes The probability density function for the triangular distribution is \[\begin{split}P(x;l, m, r) = \begin{cases} \frac{2(x-l)}{(r-l)(m-l)}& \text{for $l \leq x \leq m$},\\ \frac{2(r-x)}{(r-l)(r-m)}& \text{for $m \leq x \leq r$},\\ 0& \text{otherwise}. \end{cases}\end{split}\] The triangular distribution is often used in ill-defined problems where the underlying distribution is not known, but some knowledge of the limits and mode exists. Often it is used in simulations. #### References 1 Wikipedia, “Triangular distribution” <https://en.wikipedia.org/wiki/Triangular_distribution #### Examples Draw values from the distribution and plot the histogram: ``` >>> import matplotlib.pyplot as plt >>> h = plt.hist(np.random.triangular(-3, 0, 8, 100000), bins=200, ... density=True) >>> plt.show() ``` ![../../../_images/numpy-random-RandomState-triangular-1.png] © 2005–2022 NumPy Developers Licensed under the 3-clause BSD License. <https://numpy.org/doc/1.23/reference/random/generated/numpy.random.RandomState.triangular.htmlnumpy.random.RandomState.uniform ================================ method random.RandomState.uniform(*low=0.0*, *high=1.0*, *size=None*) Draw samples from a uniform distribution. Samples are uniformly distributed over the half-open interval `[low, high)` (includes low, but excludes high). In other words, any value within the given interval is equally likely to be drawn by [`uniform`](#numpy.random.RandomState.uniform "numpy.random.RandomState.uniform"). Note New code should use the `uniform` method of a `default_rng()` instance instead; please see the [Quick Start](../index#random-quick-start). 
Parameters **low**float or array_like of floats, optional Lower boundary of the output interval. All values generated will be greater than or equal to low. The default value is 0. **high**float or array_like of floats Upper boundary of the output interval. All values generated will be less than or equal to high. The high limit may be included in the returned array of floats due to floating-point rounding in the equation `low + (high-low) * random_sample()`. The default value is 1.0. **size**int or tuple of ints, optional Output shape. If the given shape is, e.g., `(m, n, k)`, then `m * n * k` samples are drawn. If size is `None` (default), a single value is returned if `low` and `high` are both scalars. Otherwise, `np.broadcast(low, high).size` samples are drawn. Returns **out**ndarray or scalar Drawn samples from the parameterized uniform distribution. See also [`randint`](numpy.random.randomstate.randint#numpy.random.RandomState.randint "numpy.random.RandomState.randint") Discrete uniform distribution, yielding integers. [`random_integers`](numpy.random.randomstate.random_integers#numpy.random.RandomState.random_integers "numpy.random.RandomState.random_integers") Discrete uniform distribution over the closed interval `[low, high]`. [`random_sample`](numpy.random.randomstate.random_sample#numpy.random.RandomState.random_sample "numpy.random.RandomState.random_sample") Floats uniformly distributed over `[0, 1)`. [`random`](../index#module-numpy.random "numpy.random") Alias for [`random_sample`](numpy.random.randomstate.random_sample#numpy.random.RandomState.random_sample "numpy.random.RandomState.random_sample"). [`rand`](numpy.random.randomstate.rand#numpy.random.RandomState.rand "numpy.random.RandomState.rand") Convenience function that accepts dimensions as input, e.g., `rand(2,2)` would generate a 2-by-2 array of floats, uniformly distributed over `[0, 1)`. 
[`random.Generator.uniform`](numpy.random.generator.uniform#numpy.random.Generator.uniform "numpy.random.Generator.uniform") which should be used for new code. #### Notes The probability density function of the uniform distribution is \[p(x) = \frac{1}{b - a}\] anywhere within the interval `[a, b)`, and zero elsewhere. When `high` == `low`, values of `low` will be returned. If `high` < `low`, the results are officially undefined and may eventually raise an error, i.e. do not rely on this function to behave when passed arguments satisfying that inequality condition. The `high` limit may be included in the returned array of floats due to floating-point rounding in the equation `low + (high-low) * random_sample()`. For example: ``` >>> x = np.float32(5*0.99999999) >>> x 5.0 ``` #### Examples Draw samples from the distribution: ``` >>> s = np.random.uniform(-1,0,1000) ``` All values are within the given interval: ``` >>> np.all(s >= -1) True >>> np.all(s < 0) True ``` Display the histogram of the samples, along with the probability density function: ``` >>> import matplotlib.pyplot as plt >>> count, bins, ignored = plt.hist(s, 15, density=True) >>> plt.plot(bins, np.ones_like(bins), linewidth=2, color='r') >>> plt.show() ``` ![../../../_images/numpy-random-RandomState-uniform-1.png] © 2005–2022 NumPy Developers Licensed under the 3-clause BSD License. <https://numpy.org/doc/1.23/reference/random/generated/numpy.random.RandomState.uniform.htmlnumpy.random.RandomState.vonmises ================================= method random.RandomState.vonmises(*mu*, *kappa*, *size=None*) Draw samples from a von Mises distribution. Samples are drawn from a von Mises distribution with specified mode (mu) and dispersion (kappa), on the interval [-pi, pi]. The von Mises distribution (also known as the circular normal distribution) is a continuous probability distribution on the unit circle. It may be thought of as the circular analogue of the normal distribution. 
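A numerical sketch of that circular-normal analogy (an illustrative addition, not from the docs): for large `kappa` the von Mises distribution is approximately normal with standard deviation `1/sqrt(kappa)`:

```python
import numpy as np

np.random.seed(0)                     # legacy RandomState API, as on this page
kappa = 100.0                         # large concentration
s = np.random.vonmises(0.0, kappa, 200_000)

# For large kappa, von Mises ~ Normal(mu, 1/kappa); compare the spread.
sample_std = s.std()
expected_std = 1.0 / np.sqrt(kappa)   # 0.1 here
```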
Note New code should use the `vonmises` method of a `default_rng()` instance instead; please see the [Quick Start](../index#random-quick-start). Parameters **mu**float or array_like of floats Mode (“center”) of the distribution. **kappa**float or array_like of floats Dispersion of the distribution, has to be >=0. **size**int or tuple of ints, optional Output shape. If the given shape is, e.g., `(m, n, k)`, then `m * n * k` samples are drawn. If size is `None` (default), a single value is returned if `mu` and `kappa` are both scalars. Otherwise, `np.broadcast(mu, kappa).size` samples are drawn. Returns **out**ndarray or scalar Drawn samples from the parameterized von Mises distribution. See also [`scipy.stats.vonmises`](https://docs.scipy.org/doc/scipy/reference/generated/scipy.stats.vonmises.html#scipy.stats.vonmises "(in SciPy v1.8.1)") probability density function, distribution, or cumulative density function, etc. [`random.Generator.vonmises`](numpy.random.generator.vonmises#numpy.random.Generator.vonmises "numpy.random.Generator.vonmises") which should be used for new code. #### Notes The probability density for the von Mises distribution is \[p(x) = \frac{e^{\kappa cos(x-\mu)}}{2\pi I_0(\kappa)},\] where \(\mu\) is the mode and \(\kappa\) the dispersion, and \(I_0(\kappa)\) is the modified Bessel function of order 0. The von Mises is named for <NAME>, who was born in Austria-Hungary, in what is now the Ukraine. He fled to the United States in 1939 and became a professor at Harvard. He worked in probability theory, aerodynamics, fluid mechanics, and philosophy of science. #### References 1 <NAME>. and <NAME>. (Eds.). “Handbook of Mathematical Functions with Formulas, Graphs, and Mathematical Tables, 9th printing,” New York: Dover, 1972. 2 von <NAME>., “Mathematical Theory of Probability and Statistics”, New York: Academic Press, 1964. 
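As a quick check on the density above (a sketch; `np.i0` is NumPy's modified Bessel function of order 0, matching \(I_0\) in the formula), the pdf integrates to 1 over one full period:

```python
import numpy as np

mu, kappa = 0.0, 4.0
n = 2000
x = np.linspace(-np.pi, np.pi, n, endpoint=False)   # one full period

# p(x) = exp(kappa*cos(x - mu)) / (2*pi*I0(kappa))
pdf = np.exp(kappa * np.cos(x - mu)) / (2 * np.pi * np.i0(kappa))

# Periodic rectangle rule: spectrally accurate for smooth periodic integrands.
total = pdf.sum() * (2 * np.pi / n)
```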
#### Examples Draw samples from the distribution: ``` >>> mu, kappa = 0.0, 4.0 # mean and dispersion >>> s = np.random.vonmises(mu, kappa, 1000) ``` Display the histogram of the samples, along with the probability density function: ``` >>> import matplotlib.pyplot as plt >>> from scipy.special import i0 >>> plt.hist(s, 50, density=True) >>> x = np.linspace(-np.pi, np.pi, num=51) >>> y = np.exp(kappa*np.cos(x-mu))/(2*np.pi*i0(kappa)) >>> plt.plot(x, y, linewidth=2, color='r') >>> plt.show() ``` ![../../../_images/numpy-random-RandomState-vonmises-1.png] © 2005–2022 NumPy Developers Licensed under the 3-clause BSD License. <https://numpy.org/doc/1.23/reference/random/generated/numpy.random.RandomState.vonmises.htmlnumpy.random.RandomState.wald ============================= method random.RandomState.wald(*mean*, *scale*, *size=None*) Draw samples from a Wald, or inverse Gaussian, distribution. As the scale approaches infinity, the distribution becomes more like a Gaussian. Some references claim that the Wald is an inverse Gaussian with mean equal to 1, but this is by no means universal. The inverse Gaussian distribution was first studied in relationship to Brownian motion. In 1956 <NAME> used the name inverse Gaussian because there is an inverse relationship between the time to cover a unit distance and distance covered in unit time. Note New code should use the `wald` method of a `default_rng()` instance instead; please see the [Quick Start](../index#random-quick-start). Parameters **mean**float or array_like of floats Distribution mean, must be > 0. **scale**float or array_like of floats Scale parameter, must be > 0. **size**int or tuple of ints, optional Output shape. If the given shape is, e.g., `(m, n, k)`, then `m * n * k` samples are drawn. If size is `None` (default), a single value is returned if `mean` and `scale` are both scalars. Otherwise, `np.broadcast(mean, scale).size` samples are drawn. 
Returns **out**ndarray or scalar Drawn samples from the parameterized Wald distribution. See also [`random.Generator.wald`](numpy.random.generator.wald#numpy.random.Generator.wald "numpy.random.Generator.wald") which should be used for new code. #### Notes The probability density function for the Wald distribution is \[P(x;mean,scale) = \sqrt{\frac{scale}{2\pi x^3}}e^{\frac{-scale(x-mean)^2}{2\cdot mean^2 x}}\] As noted above, the inverse Gaussian distribution first arose from attempts to model Brownian motion. It is also a competitor to the Weibull for use in reliability modeling and in modeling stock returns and interest rate processes. #### References 1 Brighton Webs Ltd., Wald Distribution, <https://web.archive.org/web/20090423014010/http://www.brighton-webs.co.uk:80/distributions/wald.asp 2 Chhikara, <NAME>., and <NAME>, “The Inverse Gaussian Distribution: Theory, Methodology, and Applications”, CRC Press, 1988. 3 Wikipedia, “Inverse Gaussian distribution” <https://en.wikipedia.org/wiki/Inverse_Gaussian_distribution #### Examples Draw values from the distribution and plot the histogram: ``` >>> import matplotlib.pyplot as plt >>> h = plt.hist(np.random.wald(3, 2, 100000), bins=200, density=True) >>> plt.show() ``` ![../../../_images/numpy-random-RandomState-wald-1.png] © 2005–2022 NumPy Developers Licensed under the 3-clause BSD License. <https://numpy.org/doc/1.23/reference/random/generated/numpy.random.RandomState.wald.htmlnumpy.random.RandomState.weibull ================================ method random.RandomState.weibull(*a*, *size=None*) Draw samples from a Weibull distribution. Draw samples from a 1-parameter Weibull distribution with the given shape parameter `a`. \[X = (-\ln(U))^{1/a}\] Here, U is drawn from the uniform distribution over (0,1]. The more common 2-parameter Weibull, including a scale parameter \(\lambda\) is just \(X = \lambda(-\ln(U))^{1/a}\).
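The inverse-transform relation above is easy to verify empirically. A sketch (seed and sample size are illustrative; `1 - U` maps the half-open `[0, 1)` output of `random_sample` onto the `(0, 1]` interval the formula assumes):

```python
import math

import numpy as np

np.random.seed(0)
a, n = 5.0, 200_000
u = np.random.random_sample(n)          # U uniform on [0, 1)
x = (-np.log(1.0 - u)) ** (1.0 / a)     # X = (-ln(U))**(1/a), U on (0, 1]

# The Weibull(a) mean is Gamma(1 + 1/a); both routes should match it.
analytic_mean = math.gamma(1.0 + 1.0 / a)
builtin_mean = np.random.weibull(a, n).mean()
transform_mean = x.mean()
```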
Note New code should use the `weibull` method of a `default_rng()` instance instead; please see the [Quick Start](../index#random-quick-start). Parameters **a**float or array_like of floats Shape parameter of the distribution. Must be nonnegative. **size**int or tuple of ints, optional Output shape. If the given shape is, e.g., `(m, n, k)`, then `m * n * k` samples are drawn. If size is `None` (default), a single value is returned if `a` is a scalar. Otherwise, `np.array(a).size` samples are drawn. Returns **out**ndarray or scalar Drawn samples from the parameterized Weibull distribution. See also [`scipy.stats.weibull_max`](https://docs.scipy.org/doc/scipy/reference/generated/scipy.stats.weibull_max.html#scipy.stats.weibull_max "(in SciPy v1.8.1)") [`scipy.stats.weibull_min`](https://docs.scipy.org/doc/scipy/reference/generated/scipy.stats.weibull_min.html#scipy.stats.weibull_min "(in SciPy v1.8.1)") [`scipy.stats.genextreme`](https://docs.scipy.org/doc/scipy/reference/generated/scipy.stats.genextreme.html#scipy.stats.genextreme "(in SciPy v1.8.1)") [`gumbel`](numpy.random.randomstate.gumbel#numpy.random.RandomState.gumbel "numpy.random.RandomState.gumbel") [`random.Generator.weibull`](numpy.random.generator.weibull#numpy.random.Generator.weibull "numpy.random.Generator.weibull") which should be used for new code. #### Notes The Weibull (or Type III asymptotic extreme value distribution for smallest values, SEV Type III, or Rosin-Rammler distribution) is one of a class of Generalized Extreme Value (GEV) distributions used in modeling extreme value problems. This class includes the Gumbel and Frechet distributions. The probability density for the Weibull distribution is \[p(x) = \frac{a} {\lambda}(\frac{x}{\lambda})^{a-1}e^{-(x/\lambda)^a},\] where \(a\) is the shape and \(\lambda\) the scale. The function has its peak (the mode) at \(\lambda(\frac{a-1}{a})^{1/a}\). When `a = 1`, the Weibull distribution reduces to the exponential distribution. 
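As a numerical illustration of the last point (an added sketch, not part of the upstream docstring): with `a = 1` the draws behave like a standard exponential, whose mean and standard deviation are both 1:

```python
import numpy as np

np.random.seed(1)
s = np.random.weibull(1.0, 500_000)   # a = 1: reduces to Exponential(1)

mean, std = s.mean(), s.std()         # both should be close to 1
```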
#### References 1 <NAME>, Royal Technical University, Stockholm, 1939 “A Statistical Theory Of The Strength Of Materials”, Ingeniorsvetenskapsakademiens Handlingar Nr 151, 1939, Generalstabens Litografiska Anstalts Forlag, Stockholm. 2 <NAME>, “A Statistical Distribution Function of Wide Applicability”, Journal Of Applied Mechanics ASME Paper 1951. 3 Wikipedia, “Weibull distribution”, <https://en.wikipedia.org/wiki/Weibull_distribution #### Examples Draw samples from the distribution: ``` >>> a = 5. # shape >>> s = np.random.weibull(a, 1000) ``` Display the histogram of the samples, along with the probability density function: ``` >>> import matplotlib.pyplot as plt >>> x = np.arange(1,100.)/50. >>> def weib(x,n,a): ... return (a / n) * (x / n)**(a - 1) * np.exp(-(x / n)**a) ``` ``` >>> count, bins, ignored = plt.hist(np.random.weibull(5.,1000)) >>> x = np.arange(1,100.)/50. >>> scale = count.max()/weib(x, 1., 5.).max() >>> plt.plot(x, weib(x, 1., 5.)*scale) >>> plt.show() ``` ![../../../_images/numpy-random-RandomState-weibull-1.png] © 2005–2022 NumPy Developers Licensed under the 3-clause BSD License. <https://numpy.org/doc/1.23/reference/random/generated/numpy.random.RandomState.weibull.htmlnumpy.random.RandomState.zipf ============================= method random.RandomState.zipf(*a*, *size=None*) Draw samples from a Zipf distribution. Samples are drawn from a Zipf distribution with specified parameter `a` > 1. The Zipf distribution (also known as the zeta distribution) is a discrete probability distribution that satisfies Zipf’s law: the frequency of an item is inversely proportional to its rank in a frequency table. Note New code should use the `zipf` method of a `default_rng()` instance instead; please see the [Quick Start](../index#random-quick-start). Parameters **a**float or array_like of floats Distribution parameter. Must be greater than 1. **size**int or tuple of ints, optional Output shape. 
If the given shape is, e.g., `(m, n, k)`, then `m * n * k` samples are drawn. If size is `None` (default), a single value is returned if `a` is a scalar. Otherwise, `np.array(a).size` samples are drawn. Returns **out**ndarray or scalar Drawn samples from the parameterized Zipf distribution. See also [`scipy.stats.zipf`](https://docs.scipy.org/doc/scipy/reference/generated/scipy.stats.zipf.html#scipy.stats.zipf "(in SciPy v1.8.1)") probability density function, distribution, or cumulative density function, etc. [`random.Generator.zipf`](numpy.random.generator.zipf#numpy.random.Generator.zipf "numpy.random.Generator.zipf") which should be used for new code. #### Notes The probability density for the Zipf distribution is \[p(k) = \frac{k^{-a}}{\zeta(a)},\] for integers \(k \geq 1\), where \(\zeta\) is the Riemann Zeta function. It is named for the American linguist <NAME>, who noted that the frequency of any word in a sample of a language is inversely proportional to its rank in the frequency table. #### References 1 <NAME>., “Selected Studies of the Principle of Relative Frequency in Language,” Cambridge, MA: Harvard Univ. Press, 1932. #### Examples Draw samples from the distribution: ``` >>> a = 4.0 >>> n = 20000 >>> s = np.random.zipf(a, n) ``` Display the histogram of the samples, along with the expected histogram based on the probability density function: ``` >>> import matplotlib.pyplot as plt >>> from scipy.special import zeta ``` [`bincount`](../../generated/numpy.bincount#numpy.bincount "numpy.bincount") provides a fast histogram for small integers. ``` >>> count = np.bincount(s) >>> k = np.arange(1, s.max() + 1) ``` ``` >>> plt.bar(k, count[1:], alpha=0.5, label='sample count') >>> plt.plot(k, n*(k**-a)/zeta(a), 'k.-', alpha=0.5, ... 
label='expected count') >>> plt.semilogy() >>> plt.grid(alpha=0.4) >>> plt.legend() >>> plt.title(f'Zipf sample, a={a}, size={n}') >>> plt.show() ``` ![../../../_images/numpy-random-RandomState-zipf-1.png] © 2005–2022 NumPy Developers Licensed under the 3-clause BSD License. <https://numpy.org/doc/1.23/reference/random/generated/numpy.random.RandomState.zipf.htmlnumpy.random.beta ================= random.beta(*a*, *b*, *size=None*) Draw samples from a Beta distribution. The Beta distribution is a special case of the Dirichlet distribution, and is related to the Gamma distribution. It has the probability distribution function \[f(x; a,b) = \frac{1}{B(\alpha, \beta)} x^{\alpha - 1} (1 - x)^{\beta - 1},\] where the normalization, B, is the beta function, \[B(\alpha, \beta) = \int_0^1 t^{\alpha - 1} (1 - t)^{\beta - 1} dt.\] It is often seen in Bayesian inference and order statistics. Note New code should use the `beta` method of a `default_rng()` instance instead; please see the [Quick Start](../index#random-quick-start). Parameters **a**float or array_like of floats Alpha, positive (>0). **b**float or array_like of floats Beta, positive (>0). **size**int or tuple of ints, optional Output shape. If the given shape is, e.g., `(m, n, k)`, then `m * n * k` samples are drawn. If size is `None` (default), a single value is returned if `a` and `b` are both scalars. Otherwise, `np.broadcast(a, b).size` samples are drawn. Returns **out**ndarray or scalar Drawn samples from the parameterized beta distribution. See also [`random.Generator.beta`](numpy.random.generator.beta#numpy.random.Generator.beta "numpy.random.Generator.beta") which should be used for new code. © 2005–2022 NumPy Developers Licensed under the 3-clause BSD License. <https://numpy.org/doc/1.23/reference/random/generated/numpy.random.beta.htmlnumpy.random.binomial ===================== random.binomial(*n*, *p*, *size=None*) Draw samples from a binomial distribution. 
Samples are drawn from a binomial distribution with specified parameters, n trials and p probability of success where n an integer >= 0 and p is in the interval [0,1]. (n may be input as a float, but it is truncated to an integer in use) Note New code should use the `binomial` method of a `default_rng()` instance instead; please see the [Quick Start](../index#random-quick-start). Parameters **n**int or array_like of ints Parameter of the distribution, >= 0. Floats are also accepted, but they will be truncated to integers. **p**float or array_like of floats Parameter of the distribution, >= 0 and <=1. **size**int or tuple of ints, optional Output shape. If the given shape is, e.g., `(m, n, k)`, then `m * n * k` samples are drawn. If size is `None` (default), a single value is returned if `n` and `p` are both scalars. Otherwise, `np.broadcast(n, p).size` samples are drawn. Returns **out**ndarray or scalar Drawn samples from the parameterized binomial distribution, where each sample is equal to the number of successes over the n trials. See also [`scipy.stats.binom`](https://docs.scipy.org/doc/scipy/reference/generated/scipy.stats.binom.html#scipy.stats.binom "(in SciPy v1.8.1)") probability density function, distribution or cumulative density function, etc. [`random.Generator.binomial`](numpy.random.generator.binomial#numpy.random.Generator.binomial "numpy.random.Generator.binomial") which should be used for new code. #### Notes The probability density for the binomial distribution is \[P(N) = \binom{n}{N}p^N(1-p)^{n-N},\] where \(n\) is the number of trials, \(p\) is the probability of success, and \(N\) is the number of successes. When estimating the standard error of a proportion in a population by using a random sample, the normal distribution works well unless the product p*n <=5, where p = population proportion estimate, and n = number of samples, in which case the binomial distribution is used instead. 
For example, a sample of 15 people shows 4 who are left handed, and 11 who are right handed. Then p = 4/15 = 27%. 0.27*15 = 4, so the binomial distribution should be used in this case. #### References 1 <NAME>, “Introductory Statistics with R”, Springer-Verlag, 2002. 2 Glantz, <NAME>. “Primer of Biostatistics.”, McGraw-Hill, Fifth Edition, 2002. 3 <NAME>, “Elementary Applied Statistics”, Bogden and Quigley, 1972. 4 Weisstein, <NAME>. “Binomial Distribution.” From MathWorld–A Wolfram Web Resource. <http://mathworld.wolfram.com/BinomialDistribution.html 5 Wikipedia, “Binomial distribution”, <https://en.wikipedia.org/wiki/Binomial_distribution #### Examples Draw samples from the distribution: ``` >>> n, p = 10, .5 # number of trials, probability of each trial >>> s = np.random.binomial(n, p, 1000) # result of flipping a coin 10 times, tested 1000 times. ``` A real world example. A company drills 9 wild-cat oil exploration wells, each with an estimated probability of success of 0.1. All nine wells fail. What is the probability of that happening? Let’s do 20,000 trials of the model, and count the number that generate zero positive results. ``` >>> sum(np.random.binomial(9, 0.1, 20000) == 0)/20000. # answer = 0.38885, or 38%. ``` © 2005–2022 NumPy Developers Licensed under the 3-clause BSD License. <https://numpy.org/doc/1.23/reference/random/generated/numpy.random.binomial.htmlnumpy.random.bytes ================== random.bytes(*length*) Return random bytes. Note New code should use the `bytes` method of a `default_rng()` instance instead; please see the [Quick Start](../index#random-quick-start). Parameters **length**int Number of random bytes. Returns **out**bytes String of length `length`. See also [`random.Generator.bytes`](numpy.random.generator.bytes#numpy.random.Generator.bytes "numpy.random.Generator.bytes") which should be used for new code. 
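As a quick sanity check of the contract stated above (an illustrative sketch, not part of the original page; the seed and length are arbitrary), `bytes` returns a plain Python `bytes` object of exactly the requested length:

```python
import numpy as np

np.random.seed(0)  # the seed affects which bytes come back, not how many
b = np.random.bytes(16)

# the result is a Python bytes object of the requested length
assert isinstance(b, bytes)
assert len(b) == 16
```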
#### Examples ``` >>> np.random.bytes(10) b' eh\x85\x022SZ\xbf\xa4' #random ``` © 2005–2022 NumPy Developers Licensed under the 3-clause BSD License. <https://numpy.org/doc/1.23/reference/random/generated/numpy.random.bytes.htmlnumpy.random.chisquare ====================== random.chisquare(*df*, *size=None*) Draw samples from a chi-square distribution. When `df` independent random variables, each with standard normal distributions (mean 0, variance 1), are squared and summed, the resulting distribution is chi-square (see Notes). This distribution is often used in hypothesis testing. Note New code should use the `chisquare` method of a `default_rng()` instance instead; please see the [Quick Start](../index#random-quick-start). Parameters **df**float or array_like of floats Number of degrees of freedom, must be > 0. **size**int or tuple of ints, optional Output shape. If the given shape is, e.g., `(m, n, k)`, then `m * n * k` samples are drawn. If size is `None` (default), a single value is returned if `df` is a scalar. Otherwise, `np.array(df).size` samples are drawn. Returns **out**ndarray or scalar Drawn samples from the parameterized chi-square distribution. Raises ValueError When `df` <= 0 or when an inappropriate `size` (e.g. `size=-1`) is given. See also [`random.Generator.chisquare`](numpy.random.generator.chisquare#numpy.random.Generator.chisquare "numpy.random.Generator.chisquare") which should be used for new code. 
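The construction described above — a chi-square variate is the sum of `df` squared standard normals — can be verified empirically. This is an illustrative sketch, not part of the original page; the seed and sample size are arbitrary:

```python
import numpy as np

np.random.seed(0)
df, n = 4, 100_000

# draw directly, and build the same distribution from squared normals
direct = np.random.chisquare(df, n)
summed = (np.random.standard_normal((n, df)) ** 2).sum(axis=1)

# both constructions have mean df and variance 2*df
print(direct.mean(), summed.mean())  # both close to 4
print(direct.var(), summed.var())    # both close to 8
```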
#### Notes The variable obtained by summing the squares of `df` independent, standard normally distributed random variables: \[Q = \sum_{i=1}^{\mathtt{df}} X^2_i\] is chi-square distributed, denoted \[Q \sim \chi^2_k.\] The probability density function of the chi-squared distribution is \[p(x) = \frac{(1/2)^{k/2}}{\Gamma(k/2)} x^{k/2 - 1} e^{-x/2},\] where \(\Gamma\) is the gamma function, \[\Gamma(x) = \int_0^{\infty} t^{x - 1} e^{-t} dt.\] #### References 1 NIST “Engineering Statistics Handbook” <https://www.itl.nist.gov/div898/handbook/eda/section3/eda3666.htm #### Examples ``` >>> np.random.chisquare(2,4) array([ 1.89920014, 9.00867716, 3.13710533, 5.62318272]) # random ``` © 2005–2022 NumPy Developers Licensed under the 3-clause BSD License. <https://numpy.org/doc/1.23/reference/random/generated/numpy.random.chisquare.htmlnumpy.random.choice =================== random.choice(*a*, *size=None*, *replace=True*, *p=None*) Generates a random sample from a given 1-D array New in version 1.7.0. Note New code should use the `choice` method of a `default_rng()` instance instead; please see the [Quick Start](../index#random-quick-start). Parameters **a**1-D array-like or int If an ndarray, a random sample is generated from its elements. If an int, the random sample is generated as if it were `np.arange(a)` **size**int or tuple of ints, optional Output shape. If the given shape is, e.g., `(m, n, k)`, then `m * n * k` samples are drawn. Default is None, in which case a single value is returned. **replace**boolean, optional Whether the sample is with or without replacement. Default is True, meaning that a value of `a` can be selected multiple times. **p**1-D array-like, optional The probabilities associated with each entry in a. If not given, the sample assumes a uniform distribution over all entries in `a`. 
Returns **samples**single item or ndarray The generated random samples Raises ValueError If a is an int and less than zero, if a or p are not 1-dimensional, if a is an array-like of size 0, if p is not a vector of probabilities, if a and p have different lengths, or if replace=False and the sample size is greater than the population size See also [`randint`](numpy.random.randint#numpy.random.randint "numpy.random.randint"), [`shuffle`](numpy.random.shuffle#numpy.random.shuffle "numpy.random.shuffle"), [`permutation`](numpy.random.permutation#numpy.random.permutation "numpy.random.permutation") [`random.Generator.choice`](numpy.random.generator.choice#numpy.random.Generator.choice "numpy.random.Generator.choice") which should be used in new code #### Notes Setting user-specified probabilities through `p` uses a more general but less efficient sampler than the default. The general sampler produces a different sample than the optimized sampler even if each element of `p` is 1 / len(a). Sampling random rows from a 2-D array is not possible with this function, but is possible with [`Generator.choice`](numpy.random.generator.choice#numpy.random.Generator.choice "numpy.random.Generator.choice") through its `axis` keyword. 
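The row-sampling limitation mentioned in the Notes can be illustrated with a short sketch (the array and seed here are hypothetical; `Generator.choice` with its `axis` keyword is the documented way to sample whole rows):

```python
import numpy as np

rng = np.random.default_rng(42)
arr = np.arange(12).reshape(4, 3)  # 4 rows of 3 columns

# legacy np.random.choice(arr, ...) raises because arr is not 1-D;
# Generator.choice can sample whole rows via its axis keyword
rows = rng.choice(arr, size=2, replace=False, axis=0)

assert rows.shape == (2, 3)
# every sampled row is one of the original rows
assert all(any(np.array_equal(r, row) for row in arr) for r in rows)
```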
#### Examples Generate a uniform random sample from np.arange(5) of size 3: ``` >>> np.random.choice(5, 3) array([0, 3, 4]) # random >>> #This is equivalent to np.random.randint(0,5,3) ``` Generate a non-uniform random sample from np.arange(5) of size 3: ``` >>> np.random.choice(5, 3, p=[0.1, 0, 0.3, 0.6, 0]) array([3, 3, 0]) # random ``` Generate a uniform random sample from np.arange(5) of size 3 without replacement: ``` >>> np.random.choice(5, 3, replace=False) array([3,1,0]) # random >>> #This is equivalent to np.random.permutation(np.arange(5))[:3] ``` Generate a non-uniform random sample from np.arange(5) of size 3 without replacement: ``` >>> np.random.choice(5, 3, replace=False, p=[0.1, 0, 0.3, 0.6, 0]) array([2, 3, 0]) # random ``` Any of the above can be repeated with an arbitrary array-like instead of just integers. For instance: ``` >>> aa_milne_arr = ['pooh', 'rabbit', 'piglet', 'Christopher'] >>> np.random.choice(aa_milne_arr, 5, p=[0.5, 0.1, 0.1, 0.3]) array(['pooh', 'pooh', 'pooh', 'Christopher', 'piglet'], # random dtype='<U11') ``` © 2005–2022 NumPy Developers Licensed under the 3-clause BSD License. <https://numpy.org/doc/1.23/reference/random/generated/numpy.random.choice.htmlnumpy.random.dirichlet ====================== random.dirichlet(*alpha*, *size=None*) Draw samples from the Dirichlet distribution. Draw `size` samples of dimension k from a Dirichlet distribution. A Dirichlet-distributed random variable can be seen as a multivariate generalization of a Beta distribution. The Dirichlet distribution is a conjugate prior of a multinomial distribution in Bayesian inference. Note New code should use the `dirichlet` method of a `default_rng()` instance instead; please see the [Quick Start](../index#random-quick-start). Parameters **alpha**sequence of floats, length k Parameter of the distribution (length `k` for sample of length `k`). **size**int or tuple of ints, optional Output shape. 
If the given shape is, e.g., `(m, n)`, then `m * n * k` samples are drawn. Default is None, in which case a vector of length `k` is returned. Returns **samples**ndarray, The drawn samples, of shape `(size, k)`. Raises ValueError If any value in `alpha` is less than or equal to zero See also [`random.Generator.dirichlet`](numpy.random.generator.dirichlet#numpy.random.Generator.dirichlet "numpy.random.Generator.dirichlet") which should be used for new code. #### Notes The Dirichlet distribution is a distribution over vectors \(x\) that fulfil the conditions \(x_i>0\) and \(\sum_{i=1}^k x_i = 1\). The probability density function \(p\) of a Dirichlet-distributed random vector \(X\) is proportional to \[p(x) \propto \prod_{i=1}^{k}{x^{\alpha_i-1}_i},\] where \(\alpha\) is a vector containing the positive concentration parameters. The method uses the following property for computation: let \(Y\) be a random vector which has components that follow a standard gamma distribution, then \(X = \frac{1}{\sum_{i=1}^k{Y_i}} Y\) is Dirichlet-distributed #### References 1 <NAME>, “Information Theory, Inference and Learning Algorithms,” chapter 23, <http://www.inference.org.uk/mackay/itila/ 2 Wikipedia, “Dirichlet distribution”, <https://en.wikipedia.org/wiki/Dirichlet_distribution #### Examples Taking an example cited in Wikipedia, this distribution can be used if one wanted to cut strings (each of initial length 1.0) into K pieces with different lengths, where each piece had, on average, a designated average length, but allowing some variation in the relative sizes of the pieces. ``` >>> s = np.random.dirichlet((10, 5, 3), 20).transpose() ``` ``` >>> import matplotlib.pyplot as plt >>> plt.barh(range(20), s[0]) >>> plt.barh(range(20), s[1], left=s[0], color='g') >>> plt.barh(range(20), s[2], left=s[0]+s[1], color='r') >>> plt.title("Lengths of Strings") ``` ![../../../_images/numpy-random-dirichlet-1.png] © 2005–2022 NumPy Developers Licensed under the 3-clause BSD License. 
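Because each Dirichlet draw is a probability vector, the components of every sample are positive and sum to one — a quick check (illustrative seed; the concentration parameters are the same ones used in the example above):

```python
import numpy as np

np.random.seed(0)
s = np.random.dirichlet((10, 5, 3), 20)  # 20 samples of dimension k = 3

assert s.shape == (20, 3)
assert np.all(s > 0)                    # each component is positive
assert np.allclose(s.sum(axis=1), 1.0)  # each sample sums to 1
```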
<https://numpy.org/doc/1.23/reference/random/generated/numpy.random.dirichlet.htmlnumpy.random.exponential ======================== random.exponential(*scale=1.0*, *size=None*) Draw samples from an exponential distribution. Its probability density function is \[f(x; \frac{1}{\beta}) = \frac{1}{\beta} \exp(-\frac{x}{\beta}),\] for `x > 0` and 0 elsewhere. \(\beta\) is the scale parameter, which is the inverse of the rate parameter \(\lambda = 1/\beta\). The rate parameter is an alternative, widely used parameterization of the exponential distribution [[3]](#r3cbd6af2d0d3-3). The exponential distribution is a continuous analogue of the geometric distribution. It describes many common situations, such as the size of raindrops measured over many rainstorms [[1]](#r3cbd6af2d0d3-1), or the time between page requests to Wikipedia [[2]](#r3cbd6af2d0d3-2). Note New code should use the `exponential` method of a `default_rng()` instance instead; please see the [Quick Start](../index#random-quick-start). Parameters **scale**float or array_like of floats The scale parameter, \(\beta = 1/\lambda\). Must be non-negative. **size**int or tuple of ints, optional Output shape. If the given shape is, e.g., `(m, n, k)`, then `m * n * k` samples are drawn. If size is `None` (default), a single value is returned if `scale` is a scalar. Otherwise, `np.array(scale).size` samples are drawn. Returns **out**ndarray or scalar Drawn samples from the parameterized exponential distribution. See also [`random.Generator.exponential`](numpy.random.generator.exponential#numpy.random.Generator.exponential "numpy.random.Generator.exponential") which should be used for new code. #### References [1](#id2) <NAME>r., “Probability, Random Variables and Random Signal Principles”, 4th ed, 2001, p. 57. 
[2](#id3) Wikipedia, “Poisson process”, <https://en.wikipedia.org/wiki/Poisson_process [3](#id1) Wikipedia, “Exponential distribution”, <https://en.wikipedia.org/wiki/Exponential_distribution © 2005–2022 NumPy Developers Licensed under the 3-clause BSD License. <https://numpy.org/doc/1.23/reference/random/generated/numpy.random.exponential.htmlnumpy.random.f ============== random.f(*dfnum*, *dfden*, *size=None*) Draw samples from an F distribution. Samples are drawn from an F distribution with specified parameters, `dfnum` (degrees of freedom in numerator) and `dfden` (degrees of freedom in denominator), where both parameters must be greater than zero. The random variate of the F distribution (also known as the Fisher distribution) is a continuous probability distribution that arises in ANOVA tests, and is the ratio of two chi-square variates. Note New code should use the `f` method of a `default_rng()` instance instead; please see the [Quick Start](../index#random-quick-start). Parameters **dfnum**float or array_like of floats Degrees of freedom in numerator, must be > 0. **dfden**float or array_like of float Degrees of freedom in denominator, must be > 0. **size**int or tuple of ints, optional Output shape. If the given shape is, e.g., `(m, n, k)`, then `m * n * k` samples are drawn. If size is `None` (default), a single value is returned if `dfnum` and `dfden` are both scalars. Otherwise, `np.broadcast(dfnum, dfden).size` samples are drawn. Returns **out**ndarray or scalar Drawn samples from the parameterized Fisher distribution. See also [`scipy.stats.f`](https://docs.scipy.org/doc/scipy/reference/generated/scipy.stats.f.html#scipy.stats.f "(in SciPy v1.8.1)") probability density function, distribution or cumulative density function, etc. [`random.Generator.f`](numpy.random.generator.f#numpy.random.Generator.f "numpy.random.Generator.f") which should be used for new code. 
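The page above describes the F variate as a ratio of two chi-square variates; that construction can be checked empirically. An illustrative sketch with arbitrary seed, degrees of freedom, and sample size:

```python
import numpy as np

np.random.seed(1)
dfnum, dfden, n = 5.0, 10.0, 200_000

direct = np.random.f(dfnum, dfden, n)
# an F variate is (chi2(dfnum)/dfnum) / (chi2(dfden)/dfden)
ratio = (np.random.chisquare(dfnum, n) / dfnum) / \
        (np.random.chisquare(dfden, n) / dfden)

# for dfden > 2 the F distribution has mean dfden / (dfden - 2) = 1.25
print(direct.mean())  # close to 1.25
print(ratio.mean())   # close to 1.25
```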
#### Notes The F statistic is used to compare in-group variances to between-group variances. Calculating the distribution depends on the sampling, and so it is a function of the respective degrees of freedom in the problem. The variable `dfnum` is the number of samples minus one, the between-groups degrees of freedom, while `dfden` is the within-groups degrees of freedom, the sum of the number of samples in each group minus the number of groups. #### References 1 Glantz, <NAME>. “Primer of Biostatistics.”, McGraw-Hill, Fifth Edition, 2002. 2 Wikipedia, “F-distribution”, <https://en.wikipedia.org/wiki/F-distribution #### Examples An example from Glantz[1], pp 47-40: Two groups, children of diabetics (25 people) and children from people without diabetes (25 controls). Fasting blood glucose was measured, case group had a mean value of 86.1, controls had a mean value of 82.2. Standard deviations were 2.09 and 2.49 respectively. Are these data consistent with the null hypothesis that the parents diabetic status does not affect their children’s blood glucose levels? Calculating the F statistic from the data gives a value of 36.01. Draw samples from the distribution: ``` >>> dfnum = 1. # between group degrees of freedom >>> dfden = 48. # within groups degrees of freedom >>> s = np.random.f(dfnum, dfden, 1000) ``` The lower bound for the top 1% of the samples is : ``` >>> np.sort(s)[-10] 7.61988120985 # random ``` So there is about a 1% chance that the F statistic will exceed 7.62, the measured value is 36, so the null hypothesis is rejected at the 1% level. © 2005–2022 NumPy Developers Licensed under the 3-clause BSD License. <https://numpy.org/doc/1.23/reference/random/generated/numpy.random.f.htmlnumpy.random.gamma ================== random.gamma(*shape*, *scale=1.0*, *size=None*) Draw samples from a Gamma distribution. 
Samples are drawn from a Gamma distribution with specified parameters, [`shape`](../../generated/numpy.shape#numpy.shape "numpy.shape") (sometimes designated “k”) and `scale` (sometimes designated “theta”), where both parameters are > 0. Note New code should use the `gamma` method of a `default_rng()` instance instead; please see the [Quick Start](../index#random-quick-start). Parameters **shape**float or array_like of floats The shape of the gamma distribution. Must be non-negative. **scale**float or array_like of floats, optional The scale of the gamma distribution. Must be non-negative. Default is equal to 1. **size**int or tuple of ints, optional Output shape. If the given shape is, e.g., `(m, n, k)`, then `m * n * k` samples are drawn. If size is `None` (default), a single value is returned if `shape` and `scale` are both scalars. Otherwise, `np.broadcast(shape, scale).size` samples are drawn. Returns **out**ndarray or scalar Drawn samples from the parameterized gamma distribution. See also [`scipy.stats.gamma`](https://docs.scipy.org/doc/scipy/reference/generated/scipy.stats.gamma.html#scipy.stats.gamma "(in SciPy v1.8.1)") probability density function, distribution or cumulative density function, etc. [`random.Generator.gamma`](numpy.random.generator.gamma#numpy.random.Generator.gamma "numpy.random.Generator.gamma") which should be used for new code. #### Notes The probability density for the Gamma distribution is \[p(x) = x^{k-1}\frac{e^{-x/\theta}}{\theta^k\Gamma(k)},\] where \(k\) is the shape and \(\theta\) the scale, and \(\Gamma\) is the Gamma function. The Gamma distribution is often used to model the times to failure of electronic components, and arises naturally in processes for which the waiting times between Poisson distributed events are relevant. #### References 1 Weisstein, <NAME>. “Gamma Distribution.” From MathWorld–A Wolfram Web Resource. 
<http://mathworld.wolfram.com/GammaDistribution.html 2 Wikipedia, “Gamma distribution”, <https://en.wikipedia.org/wiki/Gamma_distribution #### Examples Draw samples from the distribution: ``` >>> shape, scale = 2., 2. # mean=4, std=2*sqrt(2) >>> s = np.random.gamma(shape, scale, 1000) ``` Display the histogram of the samples, along with the probability density function: ``` >>> import matplotlib.pyplot as plt >>> import scipy.special as sps >>> count, bins, ignored = plt.hist(s, 50, density=True) >>> y = bins**(shape-1)*(np.exp(-bins/scale) / ... (sps.gamma(shape)*scale**shape)) >>> plt.plot(bins, y, linewidth=2, color='r') >>> plt.show() ``` ![../../../_images/numpy-random-gamma-1.png] © 2005–2022 NumPy Developers Licensed under the 3-clause BSD License. <https://numpy.org/doc/1.23/reference/random/generated/numpy.random.gamma.htmlnumpy.random.geometric ====================== random.geometric(*p*, *size=None*) Draw samples from the geometric distribution. Bernoulli trials are experiments with one of two outcomes: success or failure (an example of such an experiment is flipping a coin). The geometric distribution models the number of trials that must be run in order to achieve success. It is therefore supported on the positive integers, `k = 1, 2, ...`. The probability mass function of the geometric distribution is \[f(k) = (1 - p)^{k - 1} p\] where `p` is the probability of success of an individual trial. Note New code should use the `geometric` method of a `default_rng()` instance instead; please see the [Quick Start](../index#random-quick-start). Parameters **p**float or array_like of floats The probability of success of an individual trial. **size**int or tuple of ints, optional Output shape. If the given shape is, e.g., `(m, n, k)`, then `m * n * k` samples are drawn. If size is `None` (default), a single value is returned if `p` is a scalar. Otherwise, `np.array(p).size` samples are drawn. 
Returns **out**ndarray or scalar Drawn samples from the parameterized geometric distribution. See also [`random.Generator.geometric`](numpy.random.generator.geometric#numpy.random.Generator.geometric "numpy.random.Generator.geometric") which should be used for new code. #### Examples Draw ten thousand values from the geometric distribution, with the probability of an individual success equal to 0.35: ``` >>> z = np.random.geometric(p=0.35, size=10000) ``` How many trials succeeded after a single run? ``` >>> (z == 1).sum() / 10000. 0.34889999999999999 #random ``` © 2005–2022 NumPy Developers Licensed under the 3-clause BSD License. <https://numpy.org/doc/1.23/reference/random/generated/numpy.random.geometric.htmlnumpy.random.get_state ======================= random.get_state(*legacy=True*) Return a tuple representing the internal state of the generator. For more details, see [`set_state`](numpy.random.set_state#numpy.random.set_state "numpy.random.set_state"). Parameters **legacy**bool, optional Flag indicating to return a legacy tuple state when the BitGenerator is MT19937, instead of a dict. Returns **out**{tuple(str, ndarray of 624 uints, int, int, float), dict} The returned tuple has the following items: 1. the string ‘MT19937’. 2. a 1-D array of 624 unsigned integer keys. 3. an integer `pos`. 4. an integer `has_gauss`. 5. a float `cached_gaussian`. If `legacy` is False, or the BitGenerator is not MT19937, then state is returned as a dictionary. See also [`set_state`](numpy.random.set_state#numpy.random.set_state "numpy.random.set_state") #### Notes [`set_state`](numpy.random.set_state#numpy.random.set_state "numpy.random.set_state") and [`get_state`](#numpy.random.get_state "numpy.random.get_state") are not needed to work with any of the random distributions in NumPy. If the internal state is manually altered, the user should know exactly what he/she is doing. © 2005–2022 NumPy Developers Licensed under the 3-clause BSD License. 
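The save-and-rewind pattern that `get_state`/`set_state` enable looks like this (a minimal sketch; the seed is arbitrary):

```python
import numpy as np

np.random.seed(123)
state = np.random.get_state()  # snapshot the generator's internal state

first = np.random.random(3)    # advance the stream
np.random.set_state(state)     # rewind to the snapshot
second = np.random.random(3)

# the rewound stream reproduces exactly the same values
assert np.array_equal(first, second)
```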
<https://numpy.org/doc/1.23/reference/random/generated/numpy.random.get_state.htmlnumpy.random.gumbel =================== random.gumbel(*loc=0.0*, *scale=1.0*, *size=None*) Draw samples from a Gumbel distribution. Draw samples from a Gumbel distribution with specified location and scale. For more information on the Gumbel distribution, see Notes and References below. Note New code should use the `gumbel` method of a `default_rng()` instance instead; please see the [Quick Start](../index#random-quick-start). Parameters **loc**float or array_like of floats, optional The location of the mode of the distribution. Default is 0. **scale**float or array_like of floats, optional The scale parameter of the distribution. Default is 1. Must be non- negative. **size**int or tuple of ints, optional Output shape. If the given shape is, e.g., `(m, n, k)`, then `m * n * k` samples are drawn. If size is `None` (default), a single value is returned if `loc` and `scale` are both scalars. Otherwise, `np.broadcast(loc, scale).size` samples are drawn. Returns **out**ndarray or scalar Drawn samples from the parameterized Gumbel distribution. See also [`scipy.stats.gumbel_l`](https://docs.scipy.org/doc/scipy/reference/generated/scipy.stats.gumbel_l.html#scipy.stats.gumbel_l "(in SciPy v1.8.1)") [`scipy.stats.gumbel_r`](https://docs.scipy.org/doc/scipy/reference/generated/scipy.stats.gumbel_r.html#scipy.stats.gumbel_r "(in SciPy v1.8.1)") [`scipy.stats.genextreme`](https://docs.scipy.org/doc/scipy/reference/generated/scipy.stats.genextreme.html#scipy.stats.genextreme "(in SciPy v1.8.1)") [`weibull`](numpy.random.weibull#numpy.random.weibull "numpy.random.weibull") [`random.Generator.gumbel`](numpy.random.generator.gumbel#numpy.random.Generator.gumbel "numpy.random.Generator.gumbel") which should be used for new code. 
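The Gumbel's moments — mean \(\mu + \gamma\beta\) with \(\gamma \approx 0.57721\) (the Euler–Mascheroni constant) and variance \(\pi^2\beta^2/6\) — can be checked against a large sample. An illustrative sketch with arbitrary parameters and seed:

```python
import numpy as np

np.random.seed(7)
mu, beta = 0.5, 2.0
s = np.random.gumbel(mu, beta, 500_000)

euler_gamma = 0.57721  # Euler–Mascheroni constant
print(s.mean())  # close to mu + euler_gamma * beta ≈ 1.654
print(s.var())   # close to (np.pi**2 / 6) * beta**2 ≈ 6.580
```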
#### Notes The Gumbel (or Smallest Extreme Value (SEV) or the Smallest Extreme Value Type I) distribution is one of a class of Generalized Extreme Value (GEV) distributions used in modeling extreme value problems. The Gumbel is a special case of the Extreme Value Type I distribution for maximums from distributions with “exponential-like” tails. The probability density for the Gumbel distribution is \[p(x) = \frac{e^{-(x - \mu)/ \beta}}{\beta} e^{ -e^{-(x - \mu)/ \beta}},\] where \(\mu\) is the mode, a location parameter, and \(\beta\) is the scale parameter. The Gumbel (named for German mathematician <NAME>) was used very early in the hydrology literature, for modeling the occurrence of flood events. It is also used for modeling maximum wind speed and rainfall rates. It is a “fat-tailed” distribution - the probability of an event in the tail of the distribution is larger than if one used a Gaussian, hence the surprisingly frequent occurrence of 100-year floods. Floods were initially modeled as a Gaussian process, which underestimated the frequency of extreme events. It is one of a class of extreme value distributions, the Generalized Extreme Value (GEV) distributions, which also includes the Weibull and Frechet. The function has a mean of \(\mu + 0.57721\beta\) and a variance of \(\frac{\pi^2}{6}\beta^2\). #### References 1 <NAME>., “Statistics of Extremes,” New York: Columbia University Press, 1958. 2 <NAME>. and <NAME>., “Statistical Analysis of Extreme Values from Insurance, Finance, Hydrology and Other Fields,” Basel: Birkhauser Verlag, 2001. #### Examples Draw samples from the distribution: ``` >>> mu, beta = 0, 0.1 # location and scale >>> s = np.random.gumbel(mu, beta, 1000) ``` Display the histogram of the samples, along with the probability density function: ``` >>> import matplotlib.pyplot as plt >>> count, bins, ignored = plt.hist(s, 30, density=True) >>> plt.plot(bins, (1/beta)*np.exp(-(bins - mu)/beta) ... * np.exp( -np.exp( -(bins - mu) /beta) ), ... 
linewidth=2, color='r') >>> plt.show() ``` ![../../../_images/numpy-random-gumbel-1_00_00.png] Show how an extreme value distribution can arise from a Gaussian process and compare to a Gaussian: ``` >>> means = [] >>> maxima = [] >>> for i in range(0,1000) : ... a = np.random.normal(mu, beta, 1000) ... means.append(a.mean()) ... maxima.append(a.max()) >>> count, bins, ignored = plt.hist(maxima, 30, density=True) >>> beta = np.std(maxima) * np.sqrt(6) / np.pi >>> mu = np.mean(maxima) - 0.57721*beta >>> plt.plot(bins, (1/beta)*np.exp(-(bins - mu)/beta) ... * np.exp(-np.exp(-(bins - mu)/beta)), ... linewidth=2, color='r') >>> plt.plot(bins, 1/(beta * np.sqrt(2 * np.pi)) ... * np.exp(-(bins - mu)**2 / (2 * beta**2)), ... linewidth=2, color='g') >>> plt.show() ``` ![../../../_images/numpy-random-gumbel-1_01_00.png] © 2005–2022 NumPy Developers Licensed under the 3-clause BSD License. <https://numpy.org/doc/1.23/reference/random/generated/numpy.random.gumbel.htmlnumpy.random.hypergeometric =========================== random.hypergeometric(*ngood*, *nbad*, *nsample*, *size=None*) Draw samples from a Hypergeometric distribution. Samples are drawn from a hypergeometric distribution with specified parameters, `ngood` (ways to make a good selection), `nbad` (ways to make a bad selection), and `nsample` (number of items sampled, which is less than or equal to the sum `ngood + nbad`). Note New code should use the `hypergeometric` method of a `default_rng()` instance instead; please see the [Quick Start](../index#random-quick-start). Parameters **ngood**int or array_like of ints Number of ways to make a good selection. Must be nonnegative. **nbad**int or array_like of ints Number of ways to make a bad selection. Must be nonnegative. **nsample**int or array_like of ints Number of items sampled. Must be at least 1 and at most `ngood + nbad`. **size**int or tuple of ints, optional Output shape. If the given shape is, e.g., `(m, n, k)`, then `m * n * k` samples are drawn. 
If size is `None` (default), a single value is returned if `ngood`, `nbad`, and `nsample` are all scalars. Otherwise, `np.broadcast(ngood, nbad, nsample).size` samples are drawn. Returns **out**ndarray or scalar Drawn samples from the parameterized hypergeometric distribution. Each sample is the number of good items within a randomly selected subset of size `nsample` taken from a set of `ngood` good items and `nbad` bad items. See also [`scipy.stats.hypergeom`](https://docs.scipy.org/doc/scipy/reference/generated/scipy.stats.hypergeom.html#scipy.stats.hypergeom "(in SciPy v1.8.1)") probability density function, distribution or cumulative density function, etc. [`random.Generator.hypergeometric`](numpy.random.generator.hypergeometric#numpy.random.Generator.hypergeometric "numpy.random.Generator.hypergeometric") which should be used for new code. #### Notes The probability density for the Hypergeometric distribution is \[P(x) = \frac{\binom{g}{x}\binom{b}{n-x}}{\binom{g+b}{n}},\] where \(0 \le x \le n\) and \(n-b \le x \le g\) for P(x) the probability of `x` good results in the drawn sample, g = `ngood`, b = `nbad`, and n = `nsample`. Consider an urn with black and white marbles in it, `ngood` of them are black and `nbad` are white. If you draw `nsample` balls without replacement, then the hypergeometric distribution describes the distribution of black balls in the drawn sample. Note that this distribution is very similar to the binomial distribution, except that in this case, samples are drawn without replacement, whereas in the Binomial case samples are drawn with replacement (or the sample space is infinite). As the sample space becomes large, this distribution approaches the binomial. #### References 1 Lentner, Marvin, “Elementary Applied Statistics”, Bogden and Quigley, 1972. 2 Weisstein, <NAME>. “Hypergeometric Distribution.” From MathWorld–A Wolfram Web Resource. 
<http://mathworld.wolfram.com/HypergeometricDistribution.html>
3 Wikipedia, "Hypergeometric distribution", <https://en.wikipedia.org/wiki/Hypergeometric_distribution>

#### Examples

Draw samples from the distribution:

```
>>> ngood, nbad, nsamp = 100, 2, 10
# number of good, number of bad, and number of samples
>>> s = np.random.hypergeometric(ngood, nbad, nsamp, 1000)
>>> from matplotlib.pyplot import hist
>>> hist(s)   # note that it is very unlikely to grab both bad items
```

Suppose you have an urn with 15 white and 15 black marbles. If you pull 15 marbles at random, how likely is it that 12 or more of them are one color?

```
>>> s = np.random.hypergeometric(15, 15, 15, 100000)
>>> sum(s>=12)/100000. + sum(s<=3)/100000.  # answer = 0.003 ... pretty unlikely!
```

numpy.random.laplace
====================

random.laplace(*loc=0.0*, *scale=1.0*, *size=None*)

Draw samples from the Laplace or double exponential distribution with specified location (or mean) and scale (decay).

The Laplace distribution is similar to the Gaussian/normal distribution, but is sharper at the peak and has fatter tails. It represents the difference between two independent, identically distributed exponential random variables.

Note

New code should use the `laplace` method of a `default_rng()` instance instead; please see the [Quick Start](../index#random-quick-start).

Parameters

**loc** : float or array_like of floats, optional
    The position, \(\mu\), of the distribution peak. Default is 0.

**scale** : float or array_like of floats, optional
    \(\lambda\), the exponential decay. Default is 1. Must be non-negative.

**size** : int or tuple of ints, optional
    Output shape. If the given shape is, e.g., `(m, n, k)`, then `m * n * k` samples are drawn. If size is `None` (default), a single value is returned if `loc` and `scale` are both scalars.
Otherwise, `np.broadcast(loc, scale).size` samples are drawn.

Returns

**out** : ndarray or scalar
    Drawn samples from the parameterized Laplace distribution.

See also

[`random.Generator.laplace`](numpy.random.generator.laplace#numpy.random.Generator.laplace "numpy.random.Generator.laplace")
    which should be used for new code.

#### Notes

It has the probability density function

\[f(x; \mu, \lambda) = \frac{1}{2\lambda} \exp\left(-\frac{|x - \mu|}{\lambda}\right).\]

The first law of Laplace, from 1774, states that the frequency of an error can be expressed as an exponential function of the absolute magnitude of the error, which leads to the Laplace distribution. For many problems in economics and health sciences, this distribution seems to model the data better than the standard Gaussian distribution.

#### References

1 <NAME> <NAME>. (Eds.). "Handbook of Mathematical Functions with Formulas, Graphs, and Mathematical Tables, 9th printing," New York: Dover, 1972.
2 <NAME>, et al. "The Laplace Distribution and Generalizations," Birkhauser, 2001.
3 Weisstein, <NAME>. "Laplace Distribution." From MathWorld–A Wolfram Web Resource. <http://mathworld.wolfram.com/LaplaceDistribution.html>
4 Wikipedia, "Laplace distribution", <https://en.wikipedia.org/wiki/Laplace_distribution>

#### Examples

Draw samples from the distribution:

```
>>> loc, scale = 0., 1.
>>> s = np.random.laplace(loc, scale, 1000)
```

Display the histogram of the samples, along with the probability density function:

```
>>> import matplotlib.pyplot as plt
>>> count, bins, ignored = plt.hist(s, 30, density=True)
>>> x = np.arange(-8., 8., .01)
>>> pdf = np.exp(-abs(x-loc)/scale)/(2.*scale)
>>> plt.plot(x, pdf)
```

Plot Gaussian for comparison:

```
>>> g = (1/(scale * np.sqrt(2 * np.pi)) *
...      np.exp(-(x - loc)**2 / (2 * scale**2)))
>>> plt.plot(x,g)
```

![../../../_images/numpy-random-laplace-1.png]
numpy.random.logistic
=====================

random.logistic(*loc=0.0*, *scale=1.0*, *size=None*)

Draw samples from a logistic distribution.

Samples are drawn from a logistic distribution with specified parameters, loc (location or mean, also median), and scale (>0).

Note

New code should use the `logistic` method of a `default_rng()` instance instead; please see the [Quick Start](../index#random-quick-start).

Parameters

**loc** : float or array_like of floats, optional
    Parameter of the distribution. Default is 0.

**scale** : float or array_like of floats, optional
    Parameter of the distribution. Must be non-negative. Default is 1.

**size** : int or tuple of ints, optional
    Output shape. If the given shape is, e.g., `(m, n, k)`, then `m * n * k` samples are drawn. If size is `None` (default), a single value is returned if `loc` and `scale` are both scalars. Otherwise, `np.broadcast(loc, scale).size` samples are drawn.

Returns

**out** : ndarray or scalar
    Drawn samples from the parameterized logistic distribution.

See also

[`scipy.stats.logistic`](https://docs.scipy.org/doc/scipy/reference/generated/scipy.stats.logistic.html#scipy.stats.logistic "(in SciPy v1.8.1)")
    probability density function, distribution or cumulative density function, etc.

[`random.Generator.logistic`](numpy.random.generator.logistic#numpy.random.Generator.logistic "numpy.random.Generator.logistic")
    which should be used for new code.

#### Notes

The probability density for the Logistic distribution is

\[P(x) = \frac{e^{-(x-\mu)/s}}{s(1+e^{-(x-\mu)/s})^2},\]

where \(\mu\) = location and \(s\) = scale.

The Logistic distribution is used in Extreme Value problems where it can act as a mixture of Gumbel distributions, in Epidemiology, and by the World Chess Federation (FIDE) where it is used in the Elo ranking system, assuming the performance of each player is a logistically distributed random variable.

#### References

1 <NAME>.
and <NAME>. (2001), "Statistical Analysis of Extreme Values, from Insurance, Finance, Hydrology and Other Fields," Birkhauser Verlag, Basel, pp 132-133.
2 Weisstein, <NAME>. "Logistic Distribution." From MathWorld–A Wolfram Web Resource. <http://mathworld.wolfram.com/LogisticDistribution.html>
3 Wikipedia, "Logistic-distribution", <https://en.wikipedia.org/wiki/Logistic_distribution>

#### Examples

Draw samples from the distribution:

```
>>> loc, scale = 10, 1
>>> s = np.random.logistic(loc, scale, 10000)
>>> import matplotlib.pyplot as plt
>>> count, bins, ignored = plt.hist(s, bins=50)
```

Plot against the distribution:

```
>>> def logist(x, loc, scale):
...     return np.exp((loc-x)/scale)/(scale*(1+np.exp((loc-x)/scale))**2)
>>> lgst_val = logist(bins, loc, scale)
>>> plt.plot(bins, lgst_val * count.max() / lgst_val.max())
>>> plt.show()
```

![../../../_images/numpy-random-logistic-1.png]

numpy.random.lognormal
======================

random.lognormal(*mean=0.0*, *sigma=1.0*, *size=None*)

Draw samples from a log-normal distribution.

Draw samples from a log-normal distribution with specified mean, standard deviation, and array shape. Note that the mean and standard deviation are not the values for the distribution itself, but of the underlying normal distribution it is derived from.

Note

New code should use the `lognormal` method of a `default_rng()` instance instead; please see the [Quick Start](../index#random-quick-start).

Parameters

**mean** : float or array_like of floats, optional
    Mean value of the underlying normal distribution. Default is 0.

**sigma** : float or array_like of floats, optional
    Standard deviation of the underlying normal distribution. Must be non-negative. Default is 1.

**size** : int or tuple of ints, optional
    Output shape. If the given shape is, e.g., `(m, n, k)`, then `m * n * k` samples are drawn.
If size is `None` (default), a single value is returned if `mean` and `sigma` are both scalars. Otherwise, `np.broadcast(mean, sigma).size` samples are drawn.

Returns

**out** : ndarray or scalar
    Drawn samples from the parameterized log-normal distribution.

See also

[`scipy.stats.lognorm`](https://docs.scipy.org/doc/scipy/reference/generated/scipy.stats.lognorm.html#scipy.stats.lognorm "(in SciPy v1.8.1)")
    probability density function, distribution, cumulative density function, etc.

[`random.Generator.lognormal`](numpy.random.generator.lognormal#numpy.random.Generator.lognormal "numpy.random.Generator.lognormal")
    which should be used for new code.

#### Notes

A variable `x` has a log-normal distribution if `log(x)` is normally distributed. The probability density function for the log-normal distribution is:

\[p(x) = \frac{1}{\sigma x \sqrt{2\pi}} e^{-\frac{(\ln(x)-\mu)^2}{2\sigma^2}}\]

where \(\mu\) is the mean and \(\sigma\) is the standard deviation of the normally distributed logarithm of the variable. A log-normal distribution results if a random variable is the *product* of a large number of independent, identically-distributed variables in the same way that a normal distribution results if the variable is the *sum* of a large number of independent, identically-distributed variables.

#### References

1 <NAME>., <NAME>., and <NAME>., "Log-normal Distributions across the Sciences: Keys and Clues," BioScience, Vol. 51, No. 5, May, 2001. <https://stat.ethz.ch/~stahel/lognormal/bioscience.pdf>
2 <NAME>. and <NAME>., "Statistical Analysis of Extreme Values," Basel: Birkhauser Verlag, 2001, pp. 31-32.

#### Examples

Draw samples from the distribution:

```
>>> mu, sigma = 3., 1.
# mean and standard deviation
>>> s = np.random.lognormal(mu, sigma, 1000)
```

Display the histogram of the samples, along with the probability density function:

```
>>> import matplotlib.pyplot as plt
>>> count, bins, ignored = plt.hist(s, 100, density=True, align='mid')
```

```
>>> x = np.linspace(min(bins), max(bins), 10000)
>>> pdf = (np.exp(-(np.log(x) - mu)**2 / (2 * sigma**2))
...        / (x * sigma * np.sqrt(2 * np.pi)))
```

```
>>> plt.plot(x, pdf, linewidth=2, color='r')
>>> plt.axis('tight')
>>> plt.show()
```

![../../../_images/numpy-random-lognormal-1_00_00.png]

Demonstrate that taking the products of random samples from a normal distribution can be fit well by a log-normal probability density function.

```
>>> # Generate a thousand samples: each is the product of 100 random
>>> # values, drawn from a normal distribution.
>>> b = []
>>> for i in range(1000):
...     a = 10. + np.random.standard_normal(100)
...     b.append(np.product(a))
```

```
>>> b = np.array(b) / np.min(b)  # scale values to be positive
>>> count, bins, ignored = plt.hist(b, 100, density=True, align='mid')
>>> sigma = np.std(np.log(b))
>>> mu = np.mean(np.log(b))
```

```
>>> x = np.linspace(min(bins), max(bins), 10000)
>>> pdf = (np.exp(-(np.log(x) - mu)**2 / (2 * sigma**2))
...        / (x * sigma * np.sqrt(2 * np.pi)))
```

```
>>> plt.plot(x, pdf, color='r', linewidth=2)
>>> plt.show()
```

![../../../_images/numpy-random-lognormal-1_01_00.png]

numpy.random.logseries
======================

random.logseries(*p*, *size=None*)

Draw samples from a logarithmic series distribution.

Samples are drawn from a log series distribution with specified shape parameter, 0 < `p` < 1.

Note

New code should use the `logseries` method of a `default_rng()` instance instead; please see the [Quick Start](../index#random-quick-start).
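Since the note above recommends the Generator API for new code, here is a minimal sketch of the equivalent `default_rng()` call; the seed value is an arbitrary choice for reproducibility:

```python
import numpy as np

# Generator-based equivalent of np.random.logseries, per the note above.
rng = np.random.default_rng(12345)  # arbitrary seed for reproducibility
s = rng.logseries(0.6, size=10_000)

# Log-series samples are integers k >= 1, and k = 1 carries the most mass.
assert s.min() >= 1
counts = np.bincount(s)
print(counts[1:6])  # frequencies of k = 1..5
```

The same pattern (construct one `default_rng()` and call its distribution methods) applies to the other distributions on this page.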
Parameters

**p** : float or array_like of floats
    Shape parameter for the distribution. Must be in the range (0, 1).

**size** : int or tuple of ints, optional
    Output shape. If the given shape is, e.g., `(m, n, k)`, then `m * n * k` samples are drawn. If size is `None` (default), a single value is returned if `p` is a scalar. Otherwise, `np.array(p).size` samples are drawn.

Returns

**out** : ndarray or scalar
    Drawn samples from the parameterized logarithmic series distribution.

See also

[`scipy.stats.logser`](https://docs.scipy.org/doc/scipy/reference/generated/scipy.stats.logser.html#scipy.stats.logser "(in SciPy v1.8.1)")
    probability density function, distribution or cumulative density function, etc.

[`random.Generator.logseries`](numpy.random.generator.logseries#numpy.random.Generator.logseries "numpy.random.Generator.logseries")
    which should be used for new code.

#### Notes

The probability density for the Log Series distribution is

\[P(k) = \frac{-p^k}{k \ln(1-p)},\]

where p = probability.

The log series distribution is frequently used to represent species richness and occurrence, first proposed by <NAME>, and Williams in 1943 [2]. It may also be used to model the numbers of occupants seen in cars [3].

#### References

1 <NAME>.; <NAME>., Understanding regional species diversity through the log series distribution of occurrences: BIODIVERSITY RESEARCH Diversity & Distributions, Volume 5, Number 5, September 1999, pp. 187-195(9).
2 <NAME>, <NAME>, and <NAME>. 1943. The relation between the number of species and the number of individuals in a random sample of an animal population. Journal of Animal Ecology, 12:42-58.
3 <NAME>, <NAME>, <NAME>, <NAME>, A Handbook of Small Data Sets, CRC Press, 1994.
4 Wikipedia, "Logarithmic distribution", <https://en.wikipedia.org/wiki/Logarithmic_distribution>

#### Examples

Draw samples from the distribution:

```
>>> a = .6
>>> s = np.random.logseries(a, 10000)
>>> import matplotlib.pyplot as plt
>>> count, bins, ignored = plt.hist(s)
```

Plot against the distribution:

```
>>> def logseries(k, p):
...     return -p**k/(k*np.log(1-p))
>>> plt.plot(bins, logseries(bins, a)*count.max()/
...          logseries(bins, a).max(), 'r')
>>> plt.show()
```

![../../../_images/numpy-random-logseries-1.png]

numpy.random.multinomial
========================

random.multinomial(*n*, *pvals*, *size=None*)

Draw samples from a multinomial distribution.

The multinomial distribution is a multivariate generalization of the binomial distribution. Take an experiment with one of `p` possible outcomes. An example of such an experiment is throwing a die, where the outcome can be 1 through 6. Each sample drawn from the distribution represents `n` such experiments. Its values, `X_i = [X_0, X_1, ..., X_p]`, represent the number of times the outcome was `i`.

Note

New code should use the `multinomial` method of a `default_rng()` instance instead; please see the [Quick Start](../index#random-quick-start).

Parameters

**n** : int
    Number of experiments.

**pvals** : sequence of floats, length p
    Probabilities of each of the `p` different outcomes. These must sum to 1 (however, the last element is always assumed to account for the remaining probability, as long as `sum(pvals[:-1]) <= 1`).

**size** : int or tuple of ints, optional
    Output shape. If the given shape is, e.g., `(m, n, k)`, then `m * n * k` samples are drawn. Default is None, in which case a single value is returned.

Returns

**out** : ndarray
    The drawn samples, of shape *size*, if that was provided. If not, the shape is `(N,)`.
In other words, each entry `out[i,j,...,:]` is an N-dimensional value drawn from the distribution.

See also

[`random.Generator.multinomial`](numpy.random.generator.multinomial#numpy.random.Generator.multinomial "numpy.random.Generator.multinomial")
    which should be used for new code.

#### Examples

Throw a die 20 times:

```
>>> np.random.multinomial(20, [1/6.]*6, size=1)
array([[4, 1, 7, 5, 2, 1]]) # random
```

It landed 4 times on 1, once on 2, etc.

Now, throw the die 20 times, and 20 times again:

```
>>> np.random.multinomial(20, [1/6.]*6, size=2)
array([[3, 4, 3, 3, 4, 3], # random
       [2, 4, 3, 4, 0, 7]])
```

For the first run, we threw 3 times 1, 4 times 2, etc. For the second, we threw 2 times 1, 4 times 2, etc.

A loaded die is more likely to land on number 6:

```
>>> np.random.multinomial(100, [1/7.]*5 + [2/7.])
array([11, 16, 14, 17, 16, 26]) # random
```

The probability inputs should be normalized. As an implementation detail, the value of the last entry is ignored and assumed to take up any leftover probability mass, but this should not be relied on. A biased coin which has twice as much weight on one side as on the other should be sampled like so:

```
>>> np.random.multinomial(100, [1.0 / 3, 2.0 / 3])  # RIGHT
array([38, 62]) # random
```

not like:

```
>>> np.random.multinomial(100, [1.0, 2.0])  # WRONG
Traceback (most recent call last):
ValueError: pvals < 0, pvals > 1 or pvals contains NaNs
```

numpy.random.multivariate_normal
================================

random.multivariate_normal(*mean*, *cov*, *size=None*, *check_valid='warn'*, *tol=1e-8*)

Draw random samples from a multivariate normal distribution.

The multivariate normal, multinormal or Gaussian distribution is a generalization of the one-dimensional normal distribution to higher dimensions.
Such a distribution is specified by its mean and covariance matrix. These parameters are analogous to the mean (average or "center") and variance (standard deviation, or "width," squared) of the one-dimensional normal distribution.

Note

New code should use the `multivariate_normal` method of a `default_rng()` instance instead; please see the [Quick Start](../index#random-quick-start).

Parameters

**mean** : 1-D array_like, of length N
    Mean of the N-dimensional distribution.

**cov** : 2-D array_like, of shape (N, N)
    Covariance matrix of the distribution. It must be symmetric and positive-semidefinite for proper sampling.

**size** : int or tuple of ints, optional
    Given a shape of, for example, `(m,n,k)`, `m*n*k` samples are generated, and packed in an `m`-by-`n`-by-`k` arrangement. Because each sample is `N`-dimensional, the output shape is `(m,n,k,N)`. If no shape is specified, a single (`N`-D) sample is returned.

**check_valid** : { 'warn', 'raise', 'ignore' }, optional
    Behavior when the covariance matrix is not positive semidefinite.

**tol** : float, optional
    Tolerance when checking the singular values in covariance matrix. cov is cast to double before the check.

Returns

**out** : ndarray
    The drawn samples, of shape *size*, if that was provided. If not, the shape is `(N,)`. In other words, each entry `out[i,j,...,:]` is an N-dimensional value drawn from the distribution.

See also

[`random.Generator.multivariate_normal`](numpy.random.generator.multivariate_normal#numpy.random.Generator.multivariate_normal "numpy.random.Generator.multivariate_normal")
    which should be used for new code.

#### Notes

The mean is a coordinate in N-dimensional space, which represents the location where samples are most likely to be generated. This is analogous to the peak of the bell curve for the one-dimensional or univariate normal distribution.

Covariance indicates the level to which two variables vary together.
From the multivariate normal distribution, we draw N-dimensional samples, \(X = [x_1, x_2, ... x_N]\). The covariance matrix element \(C_{ij}\) is the covariance of \(x_i\) and \(x_j\). The element \(C_{ii}\) is the variance of \(x_i\) (i.e. its "spread").

Instead of specifying the full covariance matrix, popular approximations include:

* Spherical covariance ([`cov`](../../generated/numpy.cov#numpy.cov "numpy.cov") is a multiple of the identity matrix)
* Diagonal covariance ([`cov`](../../generated/numpy.cov#numpy.cov "numpy.cov") has non-negative elements, and only on the diagonal)

This geometrical property can be seen in two dimensions by plotting generated data-points:

```
>>> mean = [0, 0]
>>> cov = [[1, 0], [0, 100]]  # diagonal covariance
```

Diagonal covariance means that points are oriented along x or y-axis:

```
>>> import matplotlib.pyplot as plt
>>> x, y = np.random.multivariate_normal(mean, cov, 5000).T
>>> plt.plot(x, y, 'x')
>>> plt.axis('equal')
>>> plt.show()
```

Note that the covariance matrix must be positive semidefinite (a.k.a. nonnegative-definite). Otherwise, the behavior of this method is undefined and backwards compatibility is not guaranteed.

#### References

1 <NAME>., "Probability, Random Variables, and Stochastic Processes," 3rd ed., New York: McGraw-Hill, 1991.
2 <NAME>., <NAME>., and <NAME>., "Pattern Classification," 2nd ed., New York: Wiley, 2001.

#### Examples

```
>>> mean = (1, 2)
>>> cov = [[1, 0], [0, 1]]
>>> x = np.random.multivariate_normal(mean, cov, (3, 3))
>>> x.shape
(3, 3, 2)
```

Here we generate 800 samples from the bivariate normal distribution with mean [0, 0] and covariance matrix [[6, -3], [-3, 3.5]]. The expected variances of the first and second components of the sample are 6 and 3.5, respectively, and the expected correlation coefficient is -3/sqrt(6*3.5) ≈ -0.65465.
```
>>> cov = np.array([[6, -3], [-3, 3.5]])
>>> pts = np.random.multivariate_normal([0, 0], cov, size=800)
```

Check that the mean, covariance, and correlation coefficient of the sample are close to the expected values:

```
>>> pts.mean(axis=0)
array([ 0.0326911 , -0.01280782])  # may vary
>>> np.cov(pts.T)
array([[ 5.96202397, -2.85602287],
       [-2.85602287,  3.47613949]])  # may vary
>>> np.corrcoef(pts.T)[0, 1]
-0.6273591314603949  # may vary
```

We can visualize this data with a scatter plot. The orientation of the point cloud illustrates the negative correlation of the components of this sample.

```
>>> import matplotlib.pyplot as plt
>>> plt.plot(pts[:, 0], pts[:, 1], '.', alpha=0.5)
>>> plt.axis('equal')
>>> plt.grid()
>>> plt.show()
```

![../../../_images/numpy-random-multivariate_normal-1.png]

numpy.random.negative_binomial
==============================

random.negative_binomial(*n*, *p*, *size=None*)

Draw samples from a negative binomial distribution.

Samples are drawn from a negative binomial distribution with specified parameters, `n` successes and `p` probability of success where `n` is > 0 and `p` is in the interval [0, 1].

Note

New code should use the `negative_binomial` method of a `default_rng()` instance instead; please see the [Quick Start](../index#random-quick-start).

Parameters

**n** : float or array_like of floats
    Parameter of the distribution, > 0.

**p** : float or array_like of floats
    Parameter of the distribution, >= 0 and <= 1.

**size** : int or tuple of ints, optional
    Output shape. If the given shape is, e.g., `(m, n, k)`, then `m * n * k` samples are drawn. If size is `None` (default), a single value is returned if `n` and `p` are both scalars. Otherwise, `np.broadcast(n, p).size` samples are drawn.
Returns

**out** : ndarray or scalar
    Drawn samples from the parameterized negative binomial distribution, where each sample is equal to N, the number of failures that occurred before a total of n successes was reached.

See also

[`random.Generator.negative_binomial`](numpy.random.generator.negative_binomial#numpy.random.Generator.negative_binomial "numpy.random.Generator.negative_binomial")
    which should be used for new code.

#### Notes

The probability mass function of the negative binomial distribution is

\[P(N;n,p) = \frac{\Gamma(N+n)}{N!\Gamma(n)}p^{n}(1-p)^{N},\]

where \(n\) is the number of successes, \(p\) is the probability of success, \(N+n\) is the number of trials, and \(\Gamma\) is the gamma function. When \(n\) is an integer, \(\frac{\Gamma(N+n)}{N!\Gamma(n)} = \binom{N+n-1}{N}\), which is the more common form of this term in the pmf. The negative binomial distribution gives the probability of N failures given n successes, with a success on the last trial.

If one throws a die repeatedly until the third time a "1" appears, then the probability distribution of the number of non-"1"s that appear before the third "1" is a negative binomial distribution.

#### References

1 Weisstein, <NAME>. "Negative Binomial Distribution." From MathWorld–A Wolfram Web Resource. <http://mathworld.wolfram.com/NegativeBinomialDistribution.html>
2 Wikipedia, "Negative binomial distribution", <https://en.wikipedia.org/wiki/Negative_binomial_distribution>

#### Examples

Draw samples from the distribution:

A real world example. A company drills wild-cat oil exploration wells, each with an estimated probability of success of 0.1. What is the probability of having one success for each successive well, that is what is the probability of a single success after drilling 5 wells, after 6 wells, etc.?

```
>>> s = np.random.negative_binomial(1, 0.1, 100000)
>>> for i in range(1, 11):
...     probability = sum(s<i) / 100000.
...
print(i, "wells drilled, probability of one success =", probability)
```

numpy.random.noncentral_chisquare
=================================

random.noncentral_chisquare(*df*, *nonc*, *size=None*)

Draw samples from a noncentral chi-square distribution.

The noncentral \(\chi^2\) distribution is a generalization of the \(\chi^2\) distribution.

Note

New code should use the `noncentral_chisquare` method of a `default_rng()` instance instead; please see the [Quick Start](../index#random-quick-start).

Parameters

**df** : float or array_like of floats
    Degrees of freedom, must be > 0.

    Changed in version 1.10.0: Earlier NumPy versions required dfnum > 1.

**nonc** : float or array_like of floats
    Non-centrality, must be non-negative.

**size** : int or tuple of ints, optional
    Output shape. If the given shape is, e.g., `(m, n, k)`, then `m * n * k` samples are drawn. If size is `None` (default), a single value is returned if `df` and `nonc` are both scalars. Otherwise, `np.broadcast(df, nonc).size` samples are drawn.

Returns

**out** : ndarray or scalar
    Drawn samples from the parameterized noncentral chi-square distribution.

See also

[`random.Generator.noncentral_chisquare`](numpy.random.generator.noncentral_chisquare#numpy.random.Generator.noncentral_chisquare "numpy.random.Generator.noncentral_chisquare")
    which should be used for new code.

#### Notes

The probability density function for the noncentral Chi-square distribution is

\[P(x;df,nonc) = \sum^{\infty}_{i=0} \frac{e^{-nonc/2}(nonc/2)^{i}}{i!} P_{Y_{df+2i}}(x),\]

where \(Y_{q}\) is the Chi-square with q degrees of freedom.
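A useful sanity check is that the mean of this distribution is `df + nonc`; a quick simulation with the Generator API recommended above can confirm this. A minimal sketch (seed chosen arbitrarily):

```python
import numpy as np

rng = np.random.default_rng(0)  # arbitrary seed
df, nonc = 3.0, 20.0
samples = rng.noncentral_chisquare(df, nonc, size=200_000)

# The theoretical mean of the noncentral chi-square is df + nonc = 23.
print(abs(samples.mean() - (df + nonc)))  # small sampling error
```

With 200,000 samples the sample mean typically lands within a few hundredths of the theoretical value.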
#### References

1 Wikipedia, "Noncentral chi-squared distribution", <https://en.wikipedia.org/wiki/Noncentral_chi-squared_distribution>

#### Examples

Draw values from the distribution and plot the histogram:

```
>>> import matplotlib.pyplot as plt
>>> values = plt.hist(np.random.noncentral_chisquare(3, 20, 100000),
...                   bins=200, density=True)
>>> plt.show()
```

![../../../_images/numpy-random-noncentral_chisquare-1_00_00.png]

Draw values from a noncentral chisquare with very small noncentrality, and compare to a chisquare.

```
>>> plt.figure()
>>> values = plt.hist(np.random.noncentral_chisquare(3, .0000001, 100000),
...                   bins=np.arange(0., 25, .1), density=True)
>>> values2 = plt.hist(np.random.chisquare(3, 100000),
...                    bins=np.arange(0., 25, .1), density=True)
>>> plt.plot(values[1][0:-1], values[0]-values2[0], 'ob')
>>> plt.show()
```

![../../../_images/numpy-random-noncentral_chisquare-1_01_00.png]

Demonstrate how large values of non-centrality lead to a more symmetric distribution.

```
>>> plt.figure()
>>> values = plt.hist(np.random.noncentral_chisquare(3, 20, 100000),
...                   bins=200, density=True)
>>> plt.show()
```

![../../../_images/numpy-random-noncentral_chisquare-1_02_00.png]

numpy.random.noncentral_f
=========================

random.noncentral_f(*dfnum*, *dfden*, *nonc*, *size=None*)

Draw samples from the noncentral F distribution.

Samples are drawn from an F distribution with specified parameters, `dfnum` (degrees of freedom in numerator) and `dfden` (degrees of freedom in denominator), where both parameters > 1. `nonc` is the non-centrality parameter.

Note

New code should use the `noncentral_f` method of a `default_rng()` instance instead; please see the [Quick Start](../index#random-quick-start).

Parameters

**dfnum** : float or array_like of floats
    Numerator degrees of freedom, must be > 0.
Changed in version 1.14.0: Earlier NumPy versions required dfnum > 1.

**dfden** : float or array_like of floats
    Denominator degrees of freedom, must be > 0.

**nonc** : float or array_like of floats
    Non-centrality parameter, the sum of the squares of the numerator means, must be >= 0.

**size** : int or tuple of ints, optional
    Output shape. If the given shape is, e.g., `(m, n, k)`, then `m * n * k` samples are drawn. If size is `None` (default), a single value is returned if `dfnum`, `dfden`, and `nonc` are all scalars. Otherwise, `np.broadcast(dfnum, dfden, nonc).size` samples are drawn.

Returns

**out** : ndarray or scalar
    Drawn samples from the parameterized noncentral Fisher distribution.

See also

[`random.Generator.noncentral_f`](numpy.random.generator.noncentral_f#numpy.random.Generator.noncentral_f "numpy.random.Generator.noncentral_f")
    which should be used for new code.

#### Notes

When calculating the power of an experiment (power = probability of rejecting the null hypothesis when a specific alternative is true) the non-central F statistic becomes important. When the null hypothesis is true, the F statistic follows a central F distribution. When the null hypothesis is not true, then it follows a non-central F statistic.

#### References

1 Weisstein, <NAME>. "Noncentral F-Distribution." From MathWorld–A Wolfram Web Resource. <http://mathworld.wolfram.com/NoncentralF-Distribution.html>
2 Wikipedia, "Noncentral F-distribution", <https://en.wikipedia.org/wiki/Noncentral_F-distribution>

#### Examples

In a study, testing for a specific alternative to the null hypothesis requires use of the Noncentral F distribution. We need to calculate the area in the tail of the distribution that exceeds the value of the F distribution for the null hypothesis. We'll plot the two probability distributions for comparison.
```
>>> dfnum = 3  # between group deg of freedom
>>> dfden = 20  # within groups degrees of freedom
>>> nonc = 3.0
>>> nc_vals = np.random.noncentral_f(dfnum, dfden, nonc, 1000000)
>>> NF = np.histogram(nc_vals, bins=50, density=True)
>>> c_vals = np.random.f(dfnum, dfden, 1000000)
>>> F = np.histogram(c_vals, bins=50, density=True)
>>> import matplotlib.pyplot as plt
>>> plt.plot(F[1][1:], F[0])
>>> plt.plot(NF[1][1:], NF[0])
>>> plt.show()
```

![../../../_images/numpy-random-noncentral_f-1.png]

numpy.random.normal
===================

random.normal(*loc=0.0*, *scale=1.0*, *size=None*)

Draw random samples from a normal (Gaussian) distribution.

The probability density function of the normal distribution, first derived by De Moivre and 200 years later by both Gauss and Laplace independently [[2]](#rf578abb8fba2-2), is often called the bell curve because of its characteristic shape (see the example below).

The normal distribution occurs often in nature. For example, it describes the commonly occurring distribution of samples influenced by a large number of tiny, random disturbances, each with its own unique distribution [[2]](#rf578abb8fba2-2).

Note

New code should use the `normal` method of a `default_rng()` instance instead; please see the [Quick Start](../index#random-quick-start).

Parameters

**loc** : float or array_like of floats
    Mean ("centre") of the distribution.

**scale** : float or array_like of floats
    Standard deviation (spread or "width") of the distribution. Must be non-negative.

**size** : int or tuple of ints, optional
    Output shape. If the given shape is, e.g., `(m, n, k)`, then `m * n * k` samples are drawn. If size is `None` (default), a single value is returned if `loc` and `scale` are both scalars. Otherwise, `np.broadcast(loc, scale).size` samples are drawn.
Returns **out**ndarray or scalar Drawn samples from the parameterized normal distribution. See also [`scipy.stats.norm`](https://docs.scipy.org/doc/scipy/reference/generated/scipy.stats.norm.html#scipy.stats.norm "(in SciPy v1.8.1)") probability density function, distribution or cumulative density function, etc. [`random.Generator.normal`](numpy.random.generator.normal#numpy.random.Generator.normal "numpy.random.Generator.normal") which should be used for new code. #### Notes The probability density for the Gaussian distribution is \[p(x) = \frac{1}{\sqrt{ 2 \pi \sigma^2 }} e^{ - \frac{ (x - \mu)^2 } {2 \sigma^2} },\] where \(\mu\) is the mean and \(\sigma\) the standard deviation. The square of the standard deviation, \(\sigma^2\), is called the variance. The function has its peak at the mean, and its “spread” increases with the standard deviation (the function reaches 0.607 times its maximum at \(x + \sigma\) and \(x - \sigma\) [[2]](#rf578abb8fba2-2)). This implies that normal is more likely to return samples lying close to the mean, rather than those far away. #### References 1 Wikipedia, “Normal distribution”, <https://en.wikipedia.org/wiki/Normal_distribution2([1](#id1),[2](#id2),[3](#id3)) <NAME> Jr., “Central Limit Theorem” in “Probability, Random Variables and Random Signal Principles”, 4th ed., 2001, pp. 51, 51, 125. #### Examples Draw samples from the distribution: ``` >>> mu, sigma = 0, 0.1 # mean and standard deviation >>> s = np.random.normal(mu, sigma, 1000) ``` Verify the mean and the variance: ``` >>> abs(mu - np.mean(s)) 0.0 # may vary ``` ``` >>> abs(sigma - np.std(s, ddof=1)) 0.1 # may vary ``` Display the histogram of the samples, along with the probability density function: ``` >>> import matplotlib.pyplot as plt >>> count, bins, ignored = plt.hist(s, 30, density=True) >>> plt.plot(bins, 1/(sigma * np.sqrt(2 * np.pi)) * ... np.exp( - (bins - mu)**2 / (2 * sigma**2) ), ... 
linewidth=2, color='r')
>>> plt.show()
```

Two-by-four array of samples from N(3, 6.25):

```
>>> np.random.normal(3, 2.5, size=(2, 4))
array([[-4.49401501, 4.00950034, -1.81814867, 7.29718677], # random
 [ 0.39924804, 4.68456316, 4.99394529, 4.84057254]]) # random
```

© 2005–2022 NumPy Developers Licensed under the 3-clause BSD License. <https://numpy.org/doc/1.23/reference/random/generated/numpy.random.normal.html>

numpy.random.pareto
===================

random.pareto(*a*, *size=None*)

Draw samples from a Pareto II or Lomax distribution with specified shape. The Lomax or Pareto II distribution is a shifted Pareto distribution. The classical Pareto distribution can be obtained from the Lomax distribution by adding 1 and multiplying by the scale parameter `m` (see Notes). The smallest value of the Lomax distribution is zero, while for the classical Pareto distribution it is `mu`, where the standard Pareto distribution has location `mu = 1`. Lomax can also be considered as a simplified version of the Generalized Pareto distribution (available in SciPy), with the scale set to one and the location set to zero. The Pareto distribution must be greater than zero, and is unbounded above. It is also known as the “80-20 rule”. In this distribution, 80 percent of the weights are in the lowest 20 percent of the range, while the other 20 percent fill the remaining 80 percent of the range.

Note New code should use the `pareto` method of a `default_rng()` instance instead; please see the [Quick Start](../index#random-quick-start).

Parameters **a**float or array_like of floats Shape of the distribution. Must be positive. **size**int or tuple of ints, optional Output shape. If the given shape is, e.g., `(m, n, k)`, then `m * n * k` samples are drawn. If size is `None` (default), a single value is returned if `a` is a scalar. Otherwise, `np.array(a).size` samples are drawn.

Returns **out**ndarray or scalar Drawn samples from the parameterized Pareto distribution.
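The Lomax-to-classical-Pareto shift described above can be verified numerically. The following sketch (with illustrative shape and scale values, not taken from this page) checks the supports of the two forms:

```python
import numpy as np

np.random.seed(0)  # legacy seeding, matching the API documented here

a, m = 3.0, 2.0                          # illustrative shape and scale
lomax = np.random.pareto(a, 100_000)     # Lomax / Pareto II: support starts at 0
classical = (lomax + 1) * m              # shifted and scaled: support starts at m

print(lomax.min() >= 0.0)        # Lomax samples are non-negative
print(classical.min() >= m)      # classical Pareto samples never fall below m
```

Both checks print `True`: the Lomax form is bounded below by zero, while the classical form is bounded below by the scale `m`.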
See also [`scipy.stats.lomax`](https://docs.scipy.org/doc/scipy/reference/generated/scipy.stats.lomax.html#scipy.stats.lomax "(in SciPy v1.8.1)") probability density function, distribution or cumulative density function, etc. [`scipy.stats.genpareto`](https://docs.scipy.org/doc/scipy/reference/generated/scipy.stats.genpareto.html#scipy.stats.genpareto "(in SciPy v1.8.1)") probability density function, distribution or cumulative density function, etc. [`random.Generator.pareto`](numpy.random.generator.pareto#numpy.random.Generator.pareto "numpy.random.Generator.pareto") which should be used for new code. #### Notes The probability density for the Pareto distribution is \[p(x) = \frac{am^a}{x^{a+1}}\] where \(a\) is the shape and \(m\) the scale. The Pareto distribution, named after the Italian economist <NAME>, is a power law probability distribution useful in many real world problems. Outside the field of economics it is generally referred to as the Bradford distribution. Pareto developed the distribution to describe the distribution of wealth in an economy. It has also found use in insurance, web page access statistics, oil field sizes, and many other problems, including the download frequency for projects in Sourceforge [[1]](#r3973533a530a-1). It is one of the so-called “fat-tailed” distributions. #### References [1](#id1) <NAME> and <NAME>, On the Pareto Distribution of Sourceforge projects. 2 <NAME>. (1896). Course of Political Economy. Lausanne. 3 <NAME>., <NAME>.(2001), Statistical Analysis of Extreme Values, <NAME>l, pp 23-30. 4 Wikipedia, “Pareto distribution”, <https://en.wikipedia.org/wiki/Pareto_distribution #### Examples Draw samples from the distribution: ``` >>> a, m = 3., 2. 
>>> s = (np.random.pareto(a, 1000) + 1) * m  # a is the shape, m the mode
```

Display the histogram of the samples, along with the probability density function:

```
>>> import matplotlib.pyplot as plt
>>> count, bins, _ = plt.hist(s, 100, density=True)
>>> fit = a*m**a / bins**(a+1)
>>> plt.plot(bins, max(count)*fit/max(fit), linewidth=2, color='r')
>>> plt.show()
```

© 2005–2022 NumPy Developers Licensed under the 3-clause BSD License. <https://numpy.org/doc/1.23/reference/random/generated/numpy.random.pareto.html>

numpy.random.permutation
========================

random.permutation(*x*)

Randomly permute a sequence, or return a permuted range. If `x` is a multi-dimensional array, it is only shuffled along its first index.

Note New code should use the `permutation` method of a `default_rng()` instance instead; please see the [Quick Start](../index#random-quick-start).

Parameters **x**int or array_like If `x` is an integer, randomly permute `np.arange(x)`. If `x` is an array, make a copy and shuffle the elements randomly.

Returns **out**ndarray Permuted sequence or array range.

See also [`random.Generator.permutation`](numpy.random.generator.permutation#numpy.random.Generator.permutation "numpy.random.Generator.permutation") which should be used for new code.

#### Examples

```
>>> np.random.permutation(10)
array([1, 7, 4, 3, 0, 9, 2, 5, 8, 6]) # random
```

```
>>> np.random.permutation([1, 4, 9, 12, 15])
array([15, 1, 9, 4, 12]) # random
```

```
>>> arr = np.arange(9).reshape((3, 3))
>>> np.random.permutation(arr)
array([[6, 7, 8], # random
 [0, 1, 2],
 [3, 4, 5]])
```

© 2005–2022 NumPy Developers Licensed under the 3-clause BSD License. <https://numpy.org/doc/1.23/reference/random/generated/numpy.random.permutation.html>

numpy.random.poisson
====================

random.poisson(*lam=1.0*, *size=None*)

Draw samples from a Poisson distribution. The Poisson distribution is the limit of the binomial distribution for large N.
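The limit statement above can be illustrated by comparing a binomial with large `n` and small `p = lam/n` against a Poisson draw with the same mean — a minimal sketch; the parameter values are illustrative, not from this page:

```python
import numpy as np

np.random.seed(0)
lam, n, draws = 4.0, 10_000, 200_000   # n large, p = lam/n small

binom = np.random.binomial(n, lam / n, draws)
poiss = np.random.poisson(lam, draws)

# Binomial(n, lam/n) has mean lam and variance lam*(1 - lam/n) -> lam,
# so both sample means and variances should land near lam = 4.
print(abs(binom.mean() - lam) < 0.05)
print(abs(poiss.mean() - lam) < 0.05)
print(abs(binom.var() - lam) < 0.1 and abs(poiss.var() - lam) < 0.1)
```

With 200,000 draws the sampling noise is far smaller than the tolerances, so all three comparisons hold.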
Note New code should use the `poisson` method of a `default_rng()` instance instead; please see the [Quick Start](../index#random-quick-start). Parameters **lam**float or array_like of floats Expected number of events occurring in a fixed-time interval, must be >= 0. A sequence must be broadcastable over the requested size. **size**int or tuple of ints, optional Output shape. If the given shape is, e.g., `(m, n, k)`, then `m * n * k` samples are drawn. If size is `None` (default), a single value is returned if `lam` is a scalar. Otherwise, `np.array(lam).size` samples are drawn. Returns **out**ndarray or scalar Drawn samples from the parameterized Poisson distribution. See also [`random.Generator.poisson`](numpy.random.generator.poisson#numpy.random.Generator.poisson "numpy.random.Generator.poisson") which should be used for new code. #### Notes The Poisson distribution \[f(k; \lambda)=\frac{\lambda^k e^{-\lambda}}{k!}\] For events with an expected separation \(\lambda\) the Poisson distribution \(f(k; \lambda)\) describes the probability of \(k\) events occurring within the observed interval \(\lambda\). Because the output is limited to the range of the C int64 type, a ValueError is raised when `lam` is within 10 sigma of the maximum representable value. #### References 1 Weisstein, <NAME>. “Poisson Distribution.” From MathWorld–A Wolfram Web Resource. 
<http://mathworld.wolfram.com/PoissonDistribution.html 2 Wikipedia, “Poisson distribution”, <https://en.wikipedia.org/wiki/Poisson_distribution #### Examples Draw samples from the distribution: ``` >>> import numpy as np >>> s = np.random.poisson(5, 10000) ``` Display histogram of the sample: ``` >>> import matplotlib.pyplot as plt >>> count, bins, ignored = plt.hist(s, 14, density=True) >>> plt.show() ``` ![../../../_images/numpy-random-poisson-1_00_00.png] Draw each 100 values for lambda 100 and 500: ``` >>> s = np.random.poisson(lam=(100., 500.), size=(100, 2)) ``` © 2005–2022 NumPy Developers Licensed under the 3-clause BSD License. <https://numpy.org/doc/1.23/reference/random/generated/numpy.random.poisson.htmlnumpy.random.power ================== random.power(*a*, *size=None*) Draws samples in [0, 1] from a power distribution with positive exponent a - 1. Also known as the power function distribution. Note New code should use the `power` method of a `default_rng()` instance instead; please see the [Quick Start](../index#random-quick-start). Parameters **a**float or array_like of floats Parameter of the distribution. Must be non-negative. **size**int or tuple of ints, optional Output shape. If the given shape is, e.g., `(m, n, k)`, then `m * n * k` samples are drawn. If size is `None` (default), a single value is returned if `a` is a scalar. Otherwise, `np.array(a).size` samples are drawn. Returns **out**ndarray or scalar Drawn samples from the parameterized power distribution. Raises ValueError If a <= 0. See also [`random.Generator.power`](numpy.random.generator.power#numpy.random.Generator.power "numpy.random.Generator.power") which should be used for new code. #### Notes The probability density function is \[P(x; a) = ax^{a-1}, 0 \le x \le 1, a>0.\] The power function distribution is just the inverse of the Pareto distribution. It may also be seen as a special case of the Beta distribution. 
It is used, for example, in modeling the over-reporting of insurance claims. #### References 1 <NAME>, <NAME>, “Statistical size distributions in economics and actuarial sciences”, Wiley, 2003. 2 <NAME>. and <NAME>. “NIST Handbook 148: Dataplot Reference Manual, Volume 2: Let Subcommands and Library Functions”, National Institute of Standards and Technology Handbook Series, June 2003. <https://www.itl.nist.gov/div898/software/dataplot/refman2/auxillar/powpdf.pdf #### Examples Draw samples from the distribution: ``` >>> a = 5. # shape >>> samples = 1000 >>> s = np.random.power(a, samples) ``` Display the histogram of the samples, along with the probability density function: ``` >>> import matplotlib.pyplot as plt >>> count, bins, ignored = plt.hist(s, bins=30) >>> x = np.linspace(0, 1, 100) >>> y = a*x**(a-1.) >>> normed_y = samples*np.diff(bins)[0]*y >>> plt.plot(x, normed_y) >>> plt.show() ``` ![../../../_images/numpy-random-power-1_00_00.png] Compare the power function distribution to the inverse of the Pareto. ``` >>> from scipy import stats >>> rvs = np.random.power(5, 1000000) >>> rvsp = np.random.pareto(5, 1000000) >>> xx = np.linspace(0,1,100) >>> powpdf = stats.powerlaw.pdf(xx,5) ``` ``` >>> plt.figure() >>> plt.hist(rvs, bins=50, density=True) >>> plt.plot(xx,powpdf,'r-') >>> plt.title('np.random.power(5)') ``` ``` >>> plt.figure() >>> plt.hist(1./(1.+rvsp), bins=50, density=True) >>> plt.plot(xx,powpdf,'r-') >>> plt.title('inverse of 1 + np.random.pareto(5)') ``` ``` >>> plt.figure() >>> plt.hist(1./(1.+rvsp), bins=50, density=True) >>> plt.plot(xx,powpdf,'r-') >>> plt.title('inverse of stats.pareto(5)') ``` ![../../../_images/numpy-random-power-1_01_00.png] © 2005–2022 NumPy Developers Licensed under the 3-clause BSD License. <https://numpy.org/doc/1.23/reference/random/generated/numpy.random.power.htmlnumpy.random.rand ================= random.rand(*d0*, *d1*, *...*, *dn*) Random values in a given shape. 
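As noted just below, `rand` simply wraps `random_sample`, taking the dimensions as separate arguments instead of a tuple. A quick sketch of the equivalence (the seed value is illustrative):

```python
import numpy as np

np.random.seed(42)
a = np.random.rand(3, 2)             # dimensions as separate arguments

np.random.seed(42)
b = np.random.random_sample((3, 2))  # same draws, shape given as a tuple

print(a.shape == (3, 2))
print(np.array_equal(a, b))            # identical streams under the same seed
print(((a >= 0.0) & (a < 1.0)).all())  # samples lie in [0, 1)
```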
Note This is a convenience function for users porting code from Matlab, and wraps [`random_sample`](numpy.random.random_sample#numpy.random.random_sample "numpy.random.random_sample"). That function takes a tuple to specify the size of the output, which is consistent with other NumPy functions like [`numpy.zeros`](../../generated/numpy.zeros#numpy.zeros "numpy.zeros") and [`numpy.ones`](../../generated/numpy.ones#numpy.ones "numpy.ones").

Create an array of the given shape and populate it with random samples from a uniform distribution over `[0, 1)`.

Parameters **d0, d1, …, dn**int, optional The dimensions of the returned array, must be non-negative. If no argument is given a single Python float is returned.

Returns **out**ndarray, shape `(d0, d1, ..., dn)` Random values.

See also [`random`](../index#module-numpy.random "numpy.random")

#### Examples

```
>>> np.random.rand(3,2)
array([[ 0.14022471, 0.96360618], #random
 [ 0.37601032, 0.25528411], #random
 [ 0.49313049, 0.94909878]]) #random
```

© 2005–2022 NumPy Developers Licensed under the 3-clause BSD License. <https://numpy.org/doc/1.23/reference/random/generated/numpy.random.rand.html>

numpy.random.randint
====================

random.randint(*low*, *high=None*, *size=None*, *dtype=int*)

Return random integers from `low` (inclusive) to `high` (exclusive). Return random integers from the “discrete uniform” distribution of the specified dtype in the “half-open” interval [`low`, `high`). If `high` is None (the default), then results are from [0, `low`).

Note New code should use the `integers` method of a `default_rng()` instance instead; please see the [Quick Start](../index#random-quick-start).

Parameters **low**int or array-like of ints Lowest (signed) integers to be drawn from the distribution (unless `high=None`, in which case this parameter is one above the *highest* such integer). **high**int or array-like of ints, optional If provided, one above the largest (signed) integer to be drawn from the distribution (see above for behavior if `high=None`). If array-like, must contain integer values. **size**int or tuple of ints, optional Output shape. If the given shape is, e.g., `(m, n, k)`, then `m * n * k` samples are drawn. Default is None, in which case a single value is returned. **dtype**dtype, optional Desired dtype of the result. Byteorder must be native. The default value is int. New in version 1.11.0.

Returns **out**int or ndarray of ints `size`-shaped array of random integers from the appropriate distribution, or a single such random int if `size` not provided.
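The half-open interval is the most common stumbling block with `randint`; a short sketch simulating a six-sided die makes it concrete (seed and values illustrative):

```python
import numpy as np

np.random.seed(0)
rolls = np.random.randint(1, 7, size=10_000)  # half-open [1, 7): faces 1..6

print(rolls.min() >= 1 and rolls.max() <= 6)  # the upper bound 7 is never drawn
print(sorted(set(rolls.tolist())) == [1, 2, 3, 4, 5, 6])  # every face appears
```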
See also [`random_integers`](numpy.random.random_integers#numpy.random.random_integers "numpy.random.random_integers") similar to [`randint`](#numpy.random.randint "numpy.random.randint"), only for the closed interval [`low`, `high`], and 1 is the lowest value if `high` is omitted. [`random.Generator.integers`](numpy.random.generator.integers#numpy.random.Generator.integers "numpy.random.Generator.integers") which should be used for new code. #### Examples ``` >>> np.random.randint(2, size=10) array([1, 0, 0, 0, 1, 1, 0, 0, 1, 0]) # random >>> np.random.randint(1, size=10) array([0, 0, 0, 0, 0, 0, 0, 0, 0, 0]) ``` Generate a 2 x 4 array of ints between 0 and 4, inclusive: ``` >>> np.random.randint(5, size=(2, 4)) array([[4, 0, 2, 1], # random [3, 2, 2, 0]]) ``` Generate a 1 x 3 array with 3 different upper bounds ``` >>> np.random.randint(1, [3, 5, 10]) array([2, 2, 9]) # random ``` Generate a 1 by 3 array with 3 different lower bounds ``` >>> np.random.randint([1, 5, 7], 10) array([9, 8, 7]) # random ``` Generate a 2 by 4 array using broadcasting with dtype of uint8 ``` >>> np.random.randint([1, 3, 5, 7], [[10], [20]], dtype=np.uint8) array([[ 8, 6, 9, 7], # random [ 1, 16, 9, 12]], dtype=uint8) ``` © 2005–2022 NumPy Developers Licensed under the 3-clause BSD License. <https://numpy.org/doc/1.23/reference/random/generated/numpy.random.randint.htmlnumpy.random.randn ================== random.randn(*d0*, *d1*, *...*, *dn*) Return a sample (or samples) from the “standard normal” distribution. Note This is a convenience function for users porting code from Matlab, and wraps [`standard_normal`](numpy.random.standard_normal#numpy.random.standard_normal "numpy.random.standard_normal"). That function takes a tuple to specify the size of the output, which is consistent with other NumPy functions like [`numpy.zeros`](../../generated/numpy.zeros#numpy.zeros "numpy.zeros") and [`numpy.ones`](../../generated/numpy.ones#numpy.ones "numpy.ones"). 
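The Matlab-style wrapper relationship in the note can be checked directly against `standard_normal` — a small sketch (seed illustrative):

```python
import numpy as np

np.random.seed(7)
x = np.random.randn(2, 3)               # dimensions as separate arguments

np.random.seed(7)
y = np.random.standard_normal((2, 3))   # same draws, tuple-shaped argument

print(x.shape == (2, 3))
print(np.array_equal(x, y))   # randn draws from the same underlying stream
```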
Note New code should use the `standard_normal` method of a `default_rng()` instance instead; please see the [Quick Start](../index#random-quick-start).

If positive int_like arguments are provided, [`randn`](#numpy.random.randn "numpy.random.randn") generates an array of shape `(d0, d1, ..., dn)`, filled with random floats sampled from a univariate “normal” (Gaussian) distribution of mean 0 and variance 1. A single float randomly sampled from the distribution is returned if no argument is provided.

Parameters **d0, d1, …, dn**int, optional The dimensions of the returned array, must be non-negative. If no argument is given a single Python float is returned.

Returns **Z**ndarray or float A `(d0, d1, ..., dn)`-shaped array of floating-point samples from the standard normal distribution, or a single such float if no parameters were supplied.

See also [`standard_normal`](numpy.random.standard_normal#numpy.random.standard_normal "numpy.random.standard_normal") Similar, but takes a tuple as its argument. [`normal`](numpy.random.normal#numpy.random.normal "numpy.random.normal") Also accepts mu and sigma arguments. [`random.Generator.standard_normal`](numpy.random.generator.standard_normal#numpy.random.Generator.standard_normal "numpy.random.Generator.standard_normal") which should be used for new code.

#### Notes

For random samples from \(N(\mu, \sigma^2)\), use: `sigma * np.random.randn(...) + mu`

#### Examples

```
>>> np.random.randn()
2.1923875335537315 # random
```

Two-by-four array of samples from N(3, 6.25):

```
>>> 3 + 2.5 * np.random.randn(2, 4)
array([[-4.49401501, 4.00950034, -1.81814867, 7.29718677], # random
 [ 0.39924804, 4.68456316, 4.99394529, 4.84057254]]) # random
```

© 2005–2022 NumPy Developers Licensed under the 3-clause BSD License. <https://numpy.org/doc/1.23/reference/random/generated/numpy.random.randn.html>

numpy.random.random_integers
============================

random.random_integers(*low*, *high=None*, *size=None*)

Random integers of type `np.int_` between `low` and `high`, inclusive. Return random integers of type `np.int_` from the “discrete uniform” distribution in the closed interval [`low`, `high`]. If `high` is None (the default), then results are from [1, `low`]. The `np.int_` type translates to the C long integer type and its precision is platform dependent. This function has been deprecated. Use randint instead. Deprecated since version 1.11.0.
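Since `random_integers` uses a closed interval while `randint` uses a half-open one, migrating means passing `high + 1` — a sketch of the replacement (values illustrative):

```python
import numpy as np

# random_integers(1, 6) drew from the closed interval [1, 6];
# the drop-in replacement with randint passes high + 1.
np.random.seed(1)
rolls = np.random.randint(1, 6 + 1, size=1_000)

print(rolls.min() >= 1 and rolls.max() <= 6)  # same closed-interval behavior
```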
Parameters **low**int Lowest (signed) integer to be drawn from the distribution (unless `high=None`, in which case this parameter is the *highest* such integer). **high**int, optional If provided, the largest (signed) integer to be drawn from the distribution (see above for behavior if `high=None`). **size**int or tuple of ints, optional Output shape. If the given shape is, e.g., `(m, n, k)`, then `m * n * k` samples are drawn. Default is None, in which case a single value is returned. Returns **out**int or ndarray of ints `size`-shaped array of random integers from the appropriate distribution, or a single such random int if `size` not provided. See also [`randint`](numpy.random.randint#numpy.random.randint "numpy.random.randint") Similar to [`random_integers`](#numpy.random.random_integers "numpy.random.random_integers"), only for the half-open interval [`low`, `high`), and 0 is the lowest value if `high` is omitted. #### Notes To sample from N evenly spaced floating-point numbers between a and b, use: ``` a + (b - a) * (np.random.random_integers(N) - 1) / (N - 1.) ``` #### Examples ``` >>> np.random.random_integers(5) 4 # random >>> type(np.random.random_integers(5)) <class 'numpy.int64'> >>> np.random.random_integers(5, size=(3,2)) array([[5, 4], # random [3, 3], [4, 5]]) ``` Choose five random numbers from the set of five evenly-spaced numbers between 0 and 2.5, inclusive (*i.e.*, from the set \({0, 5/8, 10/8, 15/8, 20/8}\)): ``` >>> 2.5 * (np.random.random_integers(5, size=(5,)) - 1) / 4. 
array([ 0.625, 1.25 , 0.625, 0.625, 2.5 ]) # random ``` Roll two six sided dice 1000 times and sum the results: ``` >>> d1 = np.random.random_integers(1, 6, 1000) >>> d2 = np.random.random_integers(1, 6, 1000) >>> dsums = d1 + d2 ``` Display results as a histogram: ``` >>> import matplotlib.pyplot as plt >>> count, bins, ignored = plt.hist(dsums, 11, density=True) >>> plt.show() ``` ![../../../_images/numpy-random-random_integers-1.png] © 2005–2022 NumPy Developers Licensed under the 3-clause BSD License. <https://numpy.org/doc/1.23/reference/random/generated/numpy.random.random_integers.htmlnumpy.random.random_sample =========================== random.random_sample(*size=None*) Return random floats in the half-open interval [0.0, 1.0). Results are from the “continuous uniform” distribution over the stated interval. To sample \(Unif[a, b), b > a\) multiply the output of [`random_sample`](#numpy.random.random_sample "numpy.random.random_sample") by `(b-a)` and add `a`: ``` (b - a) * random_sample() + a ``` Note New code should use the `random` method of a `default_rng()` instance instead; please see the [Quick Start](../index#random-quick-start). Parameters **size**int or tuple of ints, optional Output shape. If the given shape is, e.g., `(m, n, k)`, then `m * n * k` samples are drawn. Default is None, in which case a single value is returned. Returns **out**float or ndarray of floats Array of random floats of shape `size` (unless `size=None`, in which case a single float is returned). See also [`random.Generator.random`](numpy.random.generator.random#numpy.random.Generator.random "numpy.random.Generator.random") which should be used for new code. 
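The `[a, b)` scaling recipe above is worth checking once in full — a minimal sketch (interval endpoints illustrative):

```python
import numpy as np

np.random.seed(3)
a, b = -5.0, 0.0                                  # target interval [a, b)
u = (b - a) * np.random.random_sample(10_000) + a

print(u.min() >= a and u.max() < b)               # samples stay inside [a, b)
print(abs(u.mean() - (a + b) / 2.0) < 0.1)        # mean near the midpoint -2.5
```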
#### Examples ``` >>> np.random.random_sample() 0.47108547995356098 # random >>> type(np.random.random_sample()) <class 'float'> >>> np.random.random_sample((5,)) array([ 0.30220482, 0.86820401, 0.1654503 , 0.11659149, 0.54323428]) # random ``` Three-by-two array of random numbers from [-5, 0): ``` >>> 5 * np.random.random_sample((3, 2)) - 5 array([[-3.99149989, -0.52338984], # random [-2.99091858, -0.79479508], [-1.23204345, -1.75224494]]) ``` © 2005–2022 NumPy Developers Licensed under the 3-clause BSD License. <https://numpy.org/doc/1.23/reference/random/generated/numpy.random.random_sample.htmlnumpy.random.ranf ================= random.ranf() This is an alias of [`random_sample`](numpy.random.random_sample#numpy.random.random_sample "numpy.random.random_sample"). See [`random_sample`](numpy.random.random_sample#numpy.random.random_sample "numpy.random.random_sample") for the complete documentation. © 2005–2022 NumPy Developers Licensed under the 3-clause BSD License. <https://numpy.org/doc/1.23/reference/random/generated/numpy.random.ranf.htmlnumpy.random.rayleigh ===================== random.rayleigh(*scale=1.0*, *size=None*) Draw samples from a Rayleigh distribution. The \(\chi\) and Weibull distributions are generalizations of the Rayleigh. Note New code should use the `rayleigh` method of a `default_rng()` instance instead; please see the [Quick Start](../index#random-quick-start). Parameters **scale**float or array_like of floats, optional Scale, also equals the mode. Must be non-negative. Default is 1. **size**int or tuple of ints, optional Output shape. If the given shape is, e.g., `(m, n, k)`, then `m * n * k` samples are drawn. If size is `None` (default), a single value is returned if `scale` is a scalar. Otherwise, `np.array(scale).size` samples are drawn. Returns **out**ndarray or scalar Drawn samples from the parameterized Rayleigh distribution. 
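One way to sanity-check the scale parameter: the magnitude of two independent zero-mean Gaussian components is Rayleigh-distributed (the wind-speed construction mentioned in this section's notes). A sketch with illustrative values:

```python
import numpy as np

np.random.seed(5)
sigma, n = 2.0, 100_000

# Build speeds from two zero-mean Gaussian velocity components ...
east = np.random.normal(0.0, sigma, n)
north = np.random.normal(0.0, sigma, n)
speed = np.hypot(east, north)

# ... and draw directly from the Rayleigh distribution for comparison.
direct = np.random.rayleigh(sigma, n)

expected_mean = sigma * np.sqrt(np.pi / 2.0)  # Rayleigh mean, ~2.507 here
print(abs(speed.mean() - expected_mean) < 0.05)
print(abs(direct.mean() - expected_mean) < 0.05)
```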
See also [`random.Generator.rayleigh`](numpy.random.generator.rayleigh#numpy.random.Generator.rayleigh "numpy.random.Generator.rayleigh") which should be used for new code. #### Notes The probability density function for the Rayleigh distribution is \[P(x;scale) = \frac{x}{scale^2}e^{\frac{-x^2}{2 \cdotp scale^2}}\] The Rayleigh distribution would arise, for example, if the East and North components of the wind velocity had identical zero-mean Gaussian distributions. Then the wind speed would have a Rayleigh distribution. #### References 1 Brighton Webs Ltd., “Rayleigh Distribution,” <https://web.archive.org/web/20090514091424/http://brighton-webs.co.uk:80/distributions/rayleigh.asp 2 Wikipedia, “Rayleigh distribution” <https://en.wikipedia.org/wiki/Rayleigh_distribution #### Examples Draw values from the distribution and plot the histogram ``` >>> from matplotlib.pyplot import hist >>> values = hist(np.random.rayleigh(3, 100000), bins=200, density=True) ``` Wave heights tend to follow a Rayleigh distribution. If the mean wave height is 1 meter, what fraction of waves are likely to be larger than 3 meters? ``` >>> meanvalue = 1 >>> modevalue = np.sqrt(2 / np.pi) * meanvalue >>> s = np.random.rayleigh(modevalue, 1000000) ``` The percentage of waves larger than 3 meters is: ``` >>> 100.*sum(s>3)/1000000. 0.087300000000000003 # random ``` © 2005–2022 NumPy Developers Licensed under the 3-clause BSD License. <https://numpy.org/doc/1.23/reference/random/generated/numpy.random.rayleigh.htmlnumpy.random.sample =================== random.sample() This is an alias of [`random_sample`](numpy.random.random_sample#numpy.random.random_sample "numpy.random.random_sample"). See [`random_sample`](numpy.random.random_sample#numpy.random.random_sample "numpy.random.random_sample") for the complete documentation. © 2005–2022 NumPy Developers Licensed under the 3-clause BSD License. 
<https://numpy.org/doc/1.23/reference/random/generated/numpy.random.sample.html>

numpy.random.seed
=================

random.seed(*seed=None*)

Reseed a legacy MT19937 BitGenerator.

#### Notes

This is a convenience, legacy function. The best practice is to **not** reseed a BitGenerator, but rather to create a new one. This method is here for legacy reasons. This example demonstrates best practice.

```
>>> from numpy.random import MT19937
>>> from numpy.random import RandomState, SeedSequence
>>> rs = RandomState(MT19937(SeedSequence(123456789)))
# Later, you want to restart the stream
>>> rs = RandomState(MT19937(SeedSequence(987654321)))
```

© 2005–2022 NumPy Developers Licensed under the 3-clause BSD License. <https://numpy.org/doc/1.23/reference/random/generated/numpy.random.seed.html>

numpy.random.set_state
======================

random.set_state(*state*)

Set the internal state of the generator from a tuple. For use if one has reason to manually (re-)set the internal state of the bit generator used by the RandomState instance. By default, RandomState uses the “Mersenne Twister” [[1]](#rf0f3f75f485b-1) pseudo-random number generating algorithm.

Parameters **state**{tuple(str, ndarray of 624 uints, int, int, float), dict} The `state` tuple has the following items: 1. the string ‘MT19937’, specifying the Mersenne Twister algorithm. 2. a 1-D array of 624 unsigned integers `keys`. 3. an integer `pos`. 4. an integer `has_gauss`. 5. a float `cached_gaussian`. If state is a dictionary, it is directly set using the BitGenerators `state` property.

Returns **out**None Returns ‘None’ on success.

See also [`get_state`](numpy.random.get_state#numpy.random.get_state "numpy.random.get_state")

#### Notes

[`set_state`](#numpy.random.set_state "numpy.random.set_state") and [`get_state`](numpy.random.get_state#numpy.random.get_state "numpy.random.get_state") are not needed to work with any of the random distributions in NumPy.
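When the legacy functions are used anyway, `get_state`/`set_state` give exact replay — a small round-trip sketch (seed illustrative):

```python
import numpy as np

np.random.seed(10)
state = np.random.get_state()        # snapshot the MT19937 state tuple

first = np.random.random_sample(5)
np.random.set_state(state)           # rewind the generator to the snapshot
replay = np.random.random_sample(5)

print(np.array_equal(first, replay)) # identical stream after restoring state
```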
If the internal state is manually altered, the user should know exactly what he/she is doing. For backwards compatibility, the form (str, array of 624 uints, int) is also accepted although it is missing some information about the cached Gaussian value: `state = ('MT19937', keys, pos)`. #### References [1](#id1) <NAME> and <NAME>, “Mersenne Twister: A 623-dimensionally equidistributed uniform pseudorandom number generator,” *ACM Trans. on Modeling and Computer Simulation*, Vol. 8, No. 1, pp. 3-30, Jan. 1998. © 2005–2022 NumPy Developers Licensed under the 3-clause BSD License. <https://numpy.org/doc/1.23/reference/random/generated/numpy.random.set_state.htmlnumpy.random.shuffle ==================== random.shuffle(*x*) Modify a sequence in-place by shuffling its contents. This function only shuffles the array along the first axis of a multi-dimensional array. The order of sub-arrays is changed but their contents remains the same. Note New code should use the `shuffle` method of a `default_rng()` instance instead; please see the [Quick Start](../index#random-quick-start). Parameters **x**ndarray or MutableSequence The array, list or mutable sequence to be shuffled. Returns None See also [`random.Generator.shuffle`](numpy.random.generator.shuffle#numpy.random.Generator.shuffle "numpy.random.Generator.shuffle") which should be used for new code. #### Examples ``` >>> arr = np.arange(10) >>> np.random.shuffle(arr) >>> arr [1 7 5 2 9 4 3 6 0 8] # random ``` Multi-dimensional arrays are only shuffled along the first axis: ``` >>> arr = np.arange(9).reshape((3, 3)) >>> np.random.shuffle(arr) >>> arr array([[3, 4, 5], # random [6, 7, 8], [0, 1, 2]]) ``` © 2005–2022 NumPy Developers Licensed under the 3-clause BSD License. <https://numpy.org/doc/1.23/reference/random/generated/numpy.random.shuffle.htmlnumpy.random.standard_cauchy ============================= random.standard_cauchy(*size=None*) Draw samples from a standard Cauchy distribution with mode = 0. 
Also known as the Lorentz distribution. Note New code should use the `standard_cauchy` method of a `default_rng()` instance instead; please see the [Quick Start](../index#random-quick-start). Parameters **size**int or tuple of ints, optional Output shape. If the given shape is, e.g., `(m, n, k)`, then `m * n * k` samples are drawn. Default is None, in which case a single value is returned. Returns **samples**ndarray or scalar The drawn samples. See also [`random.Generator.standard_cauchy`](numpy.random.generator.standard_cauchy#numpy.random.Generator.standard_cauchy "numpy.random.Generator.standard_cauchy") which should be used for new code. #### Notes The probability density function for the full Cauchy distribution is \[P(x; x_0, \gamma) = \frac{1}{\pi \gamma \bigl[ 1+ (\frac{x-x_0}{\gamma})^2 \bigr] }\] and the Standard Cauchy distribution just sets \(x_0=0\) and \(\gamma=1\) The Cauchy distribution arises in the solution to the driven harmonic oscillator problem, and also describes spectral line broadening. It also describes the distribution of values at which a line tilted at a random angle will cut the x axis. When studying hypothesis tests that assume normality, seeing how the tests perform on data from a Cauchy distribution is a good indicator of their sensitivity to a heavy-tailed distribution, since the Cauchy looks very much like a Gaussian distribution, but with heavier tails. #### References 1 NIST/SEMATECH e-Handbook of Statistical Methods, “Cauchy Distribution”, <https://www.itl.nist.gov/div898/handbook/eda/section3/eda3663.htm 2 Weisstein, <NAME>. “Cauchy Distribution.” From MathWorld–A Wolfram Web Resource. 
<http://mathworld.wolfram.com/CauchyDistribution.html 3 Wikipedia, “Cauchy distribution” <https://en.wikipedia.org/wiki/Cauchy_distribution #### Examples Draw samples and plot the distribution: ``` >>> import matplotlib.pyplot as plt >>> s = np.random.standard_cauchy(1000000) >>> s = s[(s>-25) & (s<25)] # truncate distribution so it plots well >>> plt.hist(s, bins=100) >>> plt.show() ``` ![../../../_images/numpy-random-standard_cauchy-1.png] © 2005–2022 NumPy Developers Licensed under the 3-clause BSD License. <https://numpy.org/doc/1.23/reference/random/generated/numpy.random.standard_cauchy.htmlnumpy.random.standard_exponential ================================== random.standard_exponential(*size=None*) Draw samples from the standard exponential distribution. [`standard_exponential`](#numpy.random.standard_exponential "numpy.random.standard_exponential") is identical to the exponential distribution with a scale parameter of 1. Note New code should use the `standard_exponential` method of a `default_rng()` instance instead; please see the [Quick Start](../index#random-quick-start). Parameters **size**int or tuple of ints, optional Output shape. If the given shape is, e.g., `(m, n, k)`, then `m * n * k` samples are drawn. Default is None, in which case a single value is returned. Returns **out**float or ndarray Drawn samples. See also [`random.Generator.standard_exponential`](numpy.random.generator.standard_exponential#numpy.random.Generator.standard_exponential "numpy.random.Generator.standard_exponential") which should be used for new code. #### Examples Output a 3x8000 array: ``` >>> n = np.random.standard_exponential((3, 8000)) ``` © 2005–2022 NumPy Developers Licensed under the 3-clause BSD License. <https://numpy.org/doc/1.23/reference/random/generated/numpy.random.standard_exponential.htmlnumpy.random.standard_gamma ============================ random.standard_gamma(*shape*, *size=None*) Draw samples from a standard Gamma distribution. 
Samples are drawn from a Gamma distribution with specified parameters, shape (sometimes designated “k”) and scale=1. Note New code should use the `standard_gamma` method of a `default_rng()` instance instead; please see the [Quick Start](../index#random-quick-start). Parameters **shape**float or array_like of floats Parameter, must be non-negative. **size**int or tuple of ints, optional Output shape. If the given shape is, e.g., `(m, n, k)`, then `m * n * k` samples are drawn. If size is `None` (default), a single value is returned if `shape` is a scalar. Otherwise, `np.array(shape).size` samples are drawn. Returns **out**ndarray or scalar Drawn samples from the parameterized standard gamma distribution. See also [`scipy.stats.gamma`](https://docs.scipy.org/doc/scipy/reference/generated/scipy.stats.gamma.html#scipy.stats.gamma "(in SciPy v1.8.1)") probability density function, distribution or cumulative density function, etc. [`random.Generator.standard_gamma`](numpy.random.generator.standard_gamma#numpy.random.Generator.standard_gamma "numpy.random.Generator.standard_gamma") which should be used for new code. #### Notes The probability density for the Gamma distribution is \[p(x) = x^{k-1}\frac{e^{-x/\theta}}{\theta^k\Gamma(k)},\] where \(k\) is the shape and \(\theta\) the scale, and \(\Gamma\) is the Gamma function. The Gamma distribution is often used to model the times to failure of electronic components, and arises naturally in processes for which the waiting times between Poisson distributed events are relevant. #### References 1 Weisstein, <NAME>. “Gamma Distribution.” From MathWorld–A Wolfram Web Resource. <http://mathworld.wolfram.com/GammaDistribution.html 2 Wikipedia, “Gamma distribution”, <https://en.wikipedia.org/wiki/Gamma_distribution #### Examples Draw samples from the distribution: ``` >>> shape, scale = 2., 1. 
# mean and width >>> s = np.random.standard_gamma(shape, 1000000) ``` Display the histogram of the samples, along with the probability density function: ``` >>> import matplotlib.pyplot as plt >>> import scipy.special as sps >>> count, bins, ignored = plt.hist(s, 50, density=True) >>> y = bins**(shape-1) * ((np.exp(-bins/scale))/ ... (sps.gamma(shape) * scale**shape)) >>> plt.plot(bins, y, linewidth=2, color='r') >>> plt.show() ``` ![../../../_images/numpy-random-standard_gamma-1.png] © 2005–2022 NumPy Developers Licensed under the 3-clause BSD License. <https://numpy.org/doc/1.23/reference/random/generated/numpy.random.standard_gamma.htmlnumpy.random.standard_normal ============================= random.standard_normal(*size=None*) Draw samples from a standard Normal distribution (mean=0, stdev=1). Note New code should use the `standard_normal` method of a `default_rng()` instance instead; please see the [Quick Start](../index#random-quick-start). Parameters **size**int or tuple of ints, optional Output shape. If the given shape is, e.g., `(m, n, k)`, then `m * n * k` samples are drawn. Default is None, in which case a single value is returned. Returns **out**float or ndarray A floating-point array of shape `size` of drawn samples, or a single sample if `size` was not specified. See also [`normal`](numpy.random.normal#numpy.random.normal "numpy.random.normal") Equivalent function with additional `loc` and `scale` arguments for setting the mean and standard deviation. [`random.Generator.standard_normal`](numpy.random.generator.standard_normal#numpy.random.Generator.standard_normal "numpy.random.Generator.standard_normal") which should be used for new code. #### Notes For random samples from \(N(\mu, \sigma^2)\), use one of: ``` mu + sigma * np.random.standard_normal(size=...) np.random.normal(mu, sigma, size=...) 
``` #### Examples ``` >>> np.random.standard_normal() 2.1923875335537315 #random ``` ``` >>> s = np.random.standard_normal(8000) >>> s array([ 0.6888893 , 0.78096262, -0.89086505, ..., 0.49876311, # random -0.38672696, -0.4685006 ]) # random >>> s.shape (8000,) >>> s = np.random.standard_normal(size=(3, 4, 2)) >>> s.shape (3, 4, 2) ``` Two-by-four array of samples from \(N(3, 6.25)\): ``` >>> 3 + 2.5 * np.random.standard_normal(size=(2, 4)) array([[-4.49401501, 4.00950034, -1.81814867, 7.29718677], # random [ 0.39924804, 4.68456316, 4.99394529, 4.84057254]]) # random ``` © 2005–2022 NumPy Developers Licensed under the 3-clause BSD License. <https://numpy.org/doc/1.23/reference/random/generated/numpy.random.standard_normal.htmlnumpy.random.standard_t ======================== random.standard_t(*df*, *size=None*) Draw samples from a standard Student’s t distribution with `df` degrees of freedom. A special case of the hyperbolic distribution. As `df` gets large, the result resembles that of the standard normal distribution ([`standard_normal`](numpy.random.standard_normal#numpy.random.standard_normal "numpy.random.standard_normal")). Note New code should use the `standard_t` method of a `default_rng()` instance instead; please see the [Quick Start](../index#random-quick-start). Parameters **df**float or array_like of floats Degrees of freedom, must be > 0. **size**int or tuple of ints, optional Output shape. If the given shape is, e.g., `(m, n, k)`, then `m * n * k` samples are drawn. If size is `None` (default), a single value is returned if `df` is a scalar. Otherwise, `np.array(df).size` samples are drawn. Returns **out**ndarray or scalar Drawn samples from the parameterized standard Student’s t distribution. See also [`random.Generator.standard_t`](numpy.random.generator.standard_t#numpy.random.Generator.standard_t "numpy.random.Generator.standard_t") which should be used for new code. 
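The See also entry above points new code at the `Generator` method; a minimal equivalent sketch (the seed value is illustrative, not part of this page):

```python
import numpy as np

# recommended modern API; the seed is only for reproducibility
rng = np.random.default_rng(12345)
s = rng.standard_t(df=10, size=1000)

# heavy-tailed but symmetric about zero
print(s.shape)           # (1000,)
print(abs(s.mean()) < 0.5)
```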
#### Notes The probability density function for the t distribution is \[P(x, df) = \frac{\Gamma(\frac{df+1}{2})}{\sqrt{\pi df} \Gamma(\frac{df}{2})}\Bigl( 1+\frac{x^2}{df} \Bigr)^{-(df+1)/2}\] The t test is based on an assumption that the data come from a Normal distribution. The t test provides a way to test whether the sample mean (that is the mean calculated from the data) is a good estimate of the true mean. The derivation of the t-distribution was first published in 1908 by <NAME> while working for the Guinness Brewery in Dublin. Due to proprietary issues, he had to publish under a pseudonym, and so he used the name Student. #### References [1](#id3) Dalgaard, Peter, “Introductory Statistics With R”, Springer, 2002. 2 Wikipedia, “Student’s t-distribution” [https://en.wikipedia.org/wiki/Student’s_t-distribution](https://en.wikipedia.org/wiki/Student's_t-distribution) #### Examples From Dalgaard page 83 [[1]](#r755c9bae090e-1), suppose the daily energy intake for 11 women in kilojoules (kJ) is: ``` >>> intake = np.array([5260., 5470, 5640, 6180, 6390, 6515, 6805, 7515, \ ... 7515, 8230, 8770]) ``` Does their energy intake deviate systematically from the recommended value of 7725 kJ? Our null hypothesis will be the absence of deviation, and the alternate hypothesis will be the presence of an effect that could be either positive or negative, hence making our test 2-tailed. Because we are estimating the mean and we have N=11 values in our sample, we have N-1=10 degrees of freedom. We set our significance level to 95% and compute the t statistic using the empirical mean and empirical standard deviation of our intake. We use a ddof of 1 to base the computation of our empirical standard deviation on an unbiased estimate of the variance (note: the final estimate is not unbiased due to the concave nature of the square root). 
``` >>> np.mean(intake) 6753.636363636364 >>> intake.std(ddof=1) 1142.1232221373727 >>> t = (np.mean(intake)-7725)/(intake.std(ddof=1)/np.sqrt(len(intake))) >>> t -2.8207540608310198 ``` We draw 1000000 samples from Student’s t distribution with the adequate degrees of freedom. ``` >>> import matplotlib.pyplot as plt >>> s = np.random.standard_t(10, size=1000000) >>> h = plt.hist(s, bins=100, density=True) ``` Does our t statistic land in one of the two critical regions found at both tails of the distribution? ``` >>> np.sum(np.abs(t) < np.abs(s)) / float(len(s)) 0.018318 #random < 0.05, statistic is in critical region ``` The probability value for this 2-tailed test is about 1.83%, which is lower than the 5% pre-determined significance threshold. Therefore, the probability of observing values as extreme as our intake conditionally on the null hypothesis being true is too low, and we reject the null hypothesis of no deviation. ![../../../_images/numpy-random-standard_t-1.png] © 2005–2022 NumPy Developers Licensed under the 3-clause BSD License. <https://numpy.org/doc/1.23/reference/random/generated/numpy.random.standard_t.htmlnumpy.random.triangular ======================= random.triangular(*left*, *mode*, *right*, *size=None*) Draw samples from the triangular distribution over the interval `[left, right]`. The triangular distribution is a continuous probability distribution with lower limit left, peak at mode, and upper limit right. Unlike the other distributions, these parameters directly define the shape of the pdf. Note New code should use the `triangular` method of a `default_rng()` instance instead; please see the [Quick Start](../index#random-quick-start). Parameters **left**float or array_like of floats Lower limit. **mode**float or array_like of floats The value where the peak of the distribution occurs. The value must fulfill the condition `left <= mode <= right`. **right**float or array_like of floats Upper limit, must be larger than `left`. 
**size**int or tuple of ints, optional Output shape. If the given shape is, e.g., `(m, n, k)`, then `m * n * k` samples are drawn. If size is `None` (default), a single value is returned if `left`, `mode`, and `right` are all scalars. Otherwise, `np.broadcast(left, mode, right).size` samples are drawn. Returns **out**ndarray or scalar Drawn samples from the parameterized triangular distribution. See also [`random.Generator.triangular`](numpy.random.generator.triangular#numpy.random.Generator.triangular "numpy.random.Generator.triangular") which should be used for new code. #### Notes The probability density function for the triangular distribution is \[\begin{split}P(x;l, m, r) = \begin{cases} \frac{2(x-l)}{(r-l)(m-l)}& \text{for $l \leq x \leq m$},\\ \frac{2(r-x)}{(r-l)(r-m)}& \text{for $m \leq x \leq r$},\\ 0& \text{otherwise}. \end{cases}\end{split}\] The triangular distribution is often used in ill-defined problems where the underlying distribution is not known, but some knowledge of the limits and mode exists. Often it is used in simulations. #### References 1 Wikipedia, “Triangular distribution” <https://en.wikipedia.org/wiki/Triangular_distribution #### Examples Draw values from the distribution and plot the histogram: ``` >>> import matplotlib.pyplot as plt >>> h = plt.hist(np.random.triangular(-3, 0, 8, 100000), bins=200, ... density=True) >>> plt.show() ``` ![../../../_images/numpy-random-triangular-1.png] © 2005–2022 NumPy Developers Licensed under the 3-clause BSD License. <https://numpy.org/doc/1.23/reference/random/generated/numpy.random.triangular.htmlnumpy.random.uniform ==================== random.uniform(*low=0.0*, *high=1.0*, *size=None*) Draw samples from a uniform distribution. Samples are uniformly distributed over the half-open interval `[low, high)` (includes low, but excludes high). In other words, any value within the given interval is equally likely to be drawn by [`uniform`](#numpy.random.uniform "numpy.random.uniform"). 
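As a quick check of the half-open interval, a sketch using the legacy API documented on this page (the seed and bounds are illustrative):

```python
import numpy as np

np.random.seed(0)  # legacy global seeding, only for reproducibility
s = np.random.uniform(low=2.0, high=5.0, size=10_000)

# samples fall in [low, high); note the page's caveat that `high`
# can occasionally appear due to floating-point rounding
print(s.min() >= 2.0)  # True
print(s.max() < 5.0)   # True
```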
Note New code should use the `uniform` method of a `default_rng()` instance instead; please see the [Quick Start](../index#random-quick-start). Parameters **low**float or array_like of floats, optional Lower boundary of the output interval. All values generated will be greater than or equal to low. The default value is 0. **high**float or array_like of floats Upper boundary of the output interval. All values generated will be less than or equal to high. The high limit may be included in the returned array of floats due to floating-point rounding in the equation `low + (high-low) * random_sample()`. The default value is 1.0. **size**int or tuple of ints, optional Output shape. If the given shape is, e.g., `(m, n, k)`, then `m * n * k` samples are drawn. If size is `None` (default), a single value is returned if `low` and `high` are both scalars. Otherwise, `np.broadcast(low, high).size` samples are drawn. Returns **out**ndarray or scalar Drawn samples from the parameterized uniform distribution. See also [`randint`](numpy.random.randint#numpy.random.randint "numpy.random.randint") Discrete uniform distribution, yielding integers. [`random_integers`](numpy.random.random_integers#numpy.random.random_integers "numpy.random.random_integers") Discrete uniform distribution over the closed interval `[low, high]`. [`random_sample`](numpy.random.random_sample#numpy.random.random_sample "numpy.random.random_sample") Floats uniformly distributed over `[0, 1)`. [`random`](../index#module-numpy.random "numpy.random") Alias for [`random_sample`](numpy.random.random_sample#numpy.random.random_sample "numpy.random.random_sample"). [`rand`](numpy.random.rand#numpy.random.rand "numpy.random.rand") Convenience function that accepts dimensions as input, e.g., `rand(2,2)` would generate a 2-by-2 array of floats, uniformly distributed over `[0, 1)`. 
[`random.Generator.uniform`](numpy.random.generator.uniform#numpy.random.Generator.uniform "numpy.random.Generator.uniform") which should be used for new code. #### Notes The probability density function of the uniform distribution is \[p(x) = \frac{1}{b - a}\] anywhere within the interval `[a, b)`, and zero elsewhere. When `high` == `low`, values of `low` will be returned. If `high` < `low`, the results are officially undefined and may eventually raise an error, i.e. do not rely on this function to behave when passed arguments satisfying that inequality condition. The `high` limit may be included in the returned array of floats due to floating-point rounding in the equation `low + (high-low) * random_sample()`. For example: ``` >>> x = np.float32(5*0.99999999) >>> x 5.0 ``` #### Examples Draw samples from the distribution: ``` >>> s = np.random.uniform(-1,0,1000) ``` All values are within the given interval: ``` >>> np.all(s >= -1) True >>> np.all(s < 0) True ``` Display the histogram of the samples, along with the probability density function: ``` >>> import matplotlib.pyplot as plt >>> count, bins, ignored = plt.hist(s, 15, density=True) >>> plt.plot(bins, np.ones_like(bins), linewidth=2, color='r') >>> plt.show() ``` ![../../../_images/numpy-random-uniform-1.png] © 2005–2022 NumPy Developers Licensed under the 3-clause BSD License. <https://numpy.org/doc/1.23/reference/random/generated/numpy.random.uniform.htmlnumpy.random.vonmises ===================== random.vonmises(*mu*, *kappa*, *size=None*) Draw samples from a von Mises distribution. Samples are drawn from a von Mises distribution with specified mode (mu) and dispersion (kappa), on the interval [-pi, pi]. The von Mises distribution (also known as the circular normal distribution) is a continuous probability distribution on the unit circle. It may be thought of as the circular analogue of the normal distribution. 
Note New code should use the `vonmises` method of a `default_rng()` instance instead; please see the [Quick Start](../index#random-quick-start). Parameters **mu**float or array_like of floats Mode (“center”) of the distribution. **kappa**float or array_like of floats Dispersion of the distribution, has to be >=0. **size**int or tuple of ints, optional Output shape. If the given shape is, e.g., `(m, n, k)`, then `m * n * k` samples are drawn. If size is `None` (default), a single value is returned if `mu` and `kappa` are both scalars. Otherwise, `np.broadcast(mu, kappa).size` samples are drawn. Returns **out**ndarray or scalar Drawn samples from the parameterized von Mises distribution. See also [`scipy.stats.vonmises`](https://docs.scipy.org/doc/scipy/reference/generated/scipy.stats.vonmises.html#scipy.stats.vonmises "(in SciPy v1.8.1)") probability density function, distribution, or cumulative density function, etc. [`random.Generator.vonmises`](numpy.random.generator.vonmises#numpy.random.Generator.vonmises "numpy.random.Generator.vonmises") which should be used for new code. #### Notes The probability density for the von Mises distribution is \[p(x) = \frac{e^{\kappa cos(x-\mu)}}{2\pi I_0(\kappa)},\] where \(\mu\) is the mode and \(\kappa\) the dispersion, and \(I_0(\kappa)\) is the modified Bessel function of order 0. The von Mises is named for <NAME>, who was born in Austria-Hungary, in what is now the Ukraine. He fled to the United States in 1939 and became a professor at Harvard. He worked in probability theory, aerodynamics, fluid mechanics, and philosophy of science. #### References 1 <NAME>. and <NAME>. (Eds.). “Handbook of Mathematical Functions with Formulas, Graphs, and Mathematical Tables, 9th printing,” New York: Dover, 1972. 2 <NAME>., “Mathematical Theory of Probability and Statistics”, New York: Academic Press, 1964. 
#### Examples Draw samples from the distribution: ``` >>> mu, kappa = 0.0, 4.0 # mean and dispersion >>> s = np.random.vonmises(mu, kappa, 1000) ``` Display the histogram of the samples, along with the probability density function: ``` >>> import matplotlib.pyplot as plt >>> from scipy.special import i0 >>> plt.hist(s, 50, density=True) >>> x = np.linspace(-np.pi, np.pi, num=51) >>> y = np.exp(kappa*np.cos(x-mu))/(2*np.pi*i0(kappa)) >>> plt.plot(x, y, linewidth=2, color='r') >>> plt.show() ``` ![../../../_images/numpy-random-vonmises-1.png] © 2005–2022 NumPy Developers Licensed under the 3-clause BSD License. <https://numpy.org/doc/1.23/reference/random/generated/numpy.random.vonmises.htmlnumpy.random.wald ================= random.wald(*mean*, *scale*, *size=None*) Draw samples from a Wald, or inverse Gaussian, distribution. As the scale approaches infinity, the distribution becomes more like a Gaussian. Some references claim that the Wald is an inverse Gaussian with mean equal to 1, but this is by no means universal. The inverse Gaussian distribution was first studied in relationship to Brownian motion. In 1956 <NAME> used the name inverse Gaussian because there is an inverse relationship between the time to cover a unit distance and distance covered in unit time. Note New code should use the `wald` method of a `default_rng()` instance instead; please see the [Quick Start](../index#random-quick-start). Parameters **mean**float or array_like of floats Distribution mean, must be > 0. **scale**float or array_like of floats Scale parameter, must be > 0. **size**int or tuple of ints, optional Output shape. If the given shape is, e.g., `(m, n, k)`, then `m * n * k` samples are drawn. If size is `None` (default), a single value is returned if `mean` and `scale` are both scalars. Otherwise, `np.broadcast(mean, scale).size` samples are drawn. Returns **out**ndarray or scalar Drawn samples from the parameterized Wald distribution. 
See also [`random.Generator.wald`](numpy.random.generator.wald#numpy.random.Generator.wald "numpy.random.Generator.wald") which should be used for new code. #### Notes The probability density function for the Wald distribution is \[P(x;mean,scale) = \sqrt{\frac{scale}{2\pi x^3}}e^{\frac{-scale(x-mean)^2}{2\cdot mean^2 x}}\] As noted above, the inverse Gaussian distribution first arose from attempts to model Brownian motion. It is also a competitor to the Weibull for use in reliability modeling and modeling stock returns and interest rate processes. #### References 1 Brighton Webs Ltd., Wald Distribution, <https://web.archive.org/web/20090423014010/http://www.brighton-webs.co.uk:80/distributions/wald.asp 2 <NAME>., and <NAME>, “The Inverse Gaussian Distribution: Theory, Methodology, and Applications”, CRC Press, 1988. 3 Wikipedia, “Inverse Gaussian distribution” <https://en.wikipedia.org/wiki/Inverse_Gaussian_distribution #### Examples Draw values from the distribution and plot the histogram: ``` >>> import matplotlib.pyplot as plt >>> h = plt.hist(np.random.wald(3, 2, 100000), bins=200, density=True) >>> plt.show() ``` ![../../../_images/numpy-random-wald-1.png] © 2005–2022 NumPy Developers Licensed under the 3-clause BSD License. <https://numpy.org/doc/1.23/reference/random/generated/numpy.random.wald.html>
numpy.random.weibull
====================
random.weibull(*a*, *size=None*) Draw samples from a Weibull distribution. Draw samples from a 1-parameter Weibull distribution with the given shape parameter `a`. \[X = (-ln(U))^{1/a}\] Here, U is drawn from the uniform distribution over (0,1]. The more common 2-parameter Weibull, including a scale parameter \(\lambda\), is just \(X = \lambda(-ln(U))^{1/a}\). Note New code should use the `weibull` method of a `default_rng()` instance instead; please see the [Quick Start](../index#random-quick-start). Parameters **a**float or array_like of floats Shape parameter of the distribution. Must be nonnegative.
**size**int or tuple of ints, optional Output shape. If the given shape is, e.g., `(m, n, k)`, then `m * n * k` samples are drawn. If size is `None` (default), a single value is returned if `a` is a scalar. Otherwise, `np.array(a).size` samples are drawn. Returns **out**ndarray or scalar Drawn samples from the parameterized Weibull distribution. See also [`scipy.stats.weibull_max`](https://docs.scipy.org/doc/scipy/reference/generated/scipy.stats.weibull_max.html#scipy.stats.weibull_max "(in SciPy v1.8.1)") [`scipy.stats.weibull_min`](https://docs.scipy.org/doc/scipy/reference/generated/scipy.stats.weibull_min.html#scipy.stats.weibull_min "(in SciPy v1.8.1)") [`scipy.stats.genextreme`](https://docs.scipy.org/doc/scipy/reference/generated/scipy.stats.genextreme.html#scipy.stats.genextreme "(in SciPy v1.8.1)") [`gumbel`](numpy.random.gumbel#numpy.random.gumbel "numpy.random.gumbel") [`random.Generator.weibull`](numpy.random.generator.weibull#numpy.random.Generator.weibull "numpy.random.Generator.weibull") which should be used for new code. #### Notes The Weibull (or Type III asymptotic extreme value distribution for smallest values, SEV Type III, or Rosin-Rammler distribution) is one of a class of Generalized Extreme Value (GEV) distributions used in modeling extreme value problems. This class includes the Gumbel and Frechet distributions. The probability density for the Weibull distribution is \[p(x) = \frac{a} {\lambda}(\frac{x}{\lambda})^{a-1}e^{-(x/\lambda)^a},\] where \(a\) is the shape and \(\lambda\) the scale. The function has its peak (the mode) at \(\lambda(\frac{a-1}{a})^{1/a}\). When `a = 1`, the Weibull distribution reduces to the exponential distribution. #### References 1 <NAME>, Royal Technical University, Stockholm, 1939 “A Statistical Theory Of The Strength Of Materials”, Ingeniorsvetenskapsakademiens Handlingar Nr 151, 1939, Generalstabens Litografiska Anstalts Forlag, Stockholm. 
2 <NAME>, “A Statistical Distribution Function of Wide Applicability”, Journal Of Applied Mechanics ASME Paper 1951. 3 Wikipedia, “Weibull distribution”, <https://en.wikipedia.org/wiki/Weibull_distribution #### Examples Draw samples from the distribution: ``` >>> a = 5. # shape >>> s = np.random.weibull(a, 1000) ``` Display the histogram of the samples, along with the probability density function: ``` >>> import matplotlib.pyplot as plt >>> x = np.arange(1,100.)/50. >>> def weib(x,n,a): ... return (a / n) * (x / n)**(a - 1) * np.exp(-(x / n)**a) ``` ``` >>> count, bins, ignored = plt.hist(np.random.weibull(5.,1000)) >>> x = np.arange(1,100.)/50. >>> scale = count.max()/weib(x, 1., 5.).max() >>> plt.plot(x, weib(x, 1., 5.)*scale) >>> plt.show() ``` ![../../../_images/numpy-random-weibull-1.png] © 2005–2022 NumPy Developers Licensed under the 3-clause BSD License. <https://numpy.org/doc/1.23/reference/random/generated/numpy.random.weibull.htmlnumpy.random.zipf ================= random.zipf(*a*, *size=None*) Draw samples from a Zipf distribution. Samples are drawn from a Zipf distribution with specified parameter `a` > 1. The Zipf distribution (also known as the zeta distribution) is a discrete probability distribution that satisfies Zipf’s law: the frequency of an item is inversely proportional to its rank in a frequency table. Note New code should use the `zipf` method of a `default_rng()` instance instead; please see the [Quick Start](../index#random-quick-start). Parameters **a**float or array_like of floats Distribution parameter. Must be greater than 1. **size**int or tuple of ints, optional Output shape. If the given shape is, e.g., `(m, n, k)`, then `m * n * k` samples are drawn. If size is `None` (default), a single value is returned if `a` is a scalar. Otherwise, `np.array(a).size` samples are drawn. Returns **out**ndarray or scalar Drawn samples from the parameterized Zipf distribution. 
See also [`scipy.stats.zipf`](https://docs.scipy.org/doc/scipy/reference/generated/scipy.stats.zipf.html#scipy.stats.zipf "(in SciPy v1.8.1)") probability density function, distribution, or cumulative density function, etc. [`random.Generator.zipf`](numpy.random.generator.zipf#numpy.random.Generator.zipf "numpy.random.Generator.zipf") which should be used for new code. #### Notes The probability density for the Zipf distribution is \[p(k) = \frac{k^{-a}}{\zeta(a)},\] for integers \(k \geq 1\), where \(\zeta\) is the Riemann Zeta function. It is named for the American linguist <NAME>, who noted that the frequency of any word in a sample of a language is inversely proportional to its rank in the frequency table. #### References 1 <NAME>., “Selected Studies of the Principle of Relative Frequency in Language,” Cambridge, MA: Harvard Univ. Press, 1932. #### Examples Draw samples from the distribution: ``` >>> a = 4.0 >>> n = 20000 >>> s = np.random.zipf(a, n) ``` Display the histogram of the samples, along with the expected histogram based on the probability density function: ``` >>> import matplotlib.pyplot as plt >>> from scipy.special import zeta ``` [`bincount`](../../generated/numpy.bincount#numpy.bincount "numpy.bincount") provides a fast histogram for small integers. ``` >>> count = np.bincount(s) >>> k = np.arange(1, s.max() + 1) ``` ``` >>> plt.bar(k, count[1:], alpha=0.5, label='sample count') >>> plt.plot(k, n*(k**-a)/zeta(a), 'k.-', alpha=0.5, ... label='expected count') >>> plt.semilogy() >>> plt.grid(alpha=0.4) >>> plt.legend() >>> plt.title(f'Zipf sample, a={a}, size={n}') >>> plt.show() ``` ![../../../_images/numpy-random-zipf-1.png] © 2005–2022 NumPy Developers Licensed under the 3-clause BSD License. 
<https://numpy.org/doc/1.23/reference/random/generated/numpy.random.zipf.htmlPermuted Congruential Generator (64-bit, PCG64 DXSM) ==================================================== *class*numpy.random.PCG64DXSM(*seed=None*) BitGenerator for the PCG-64 DXSM pseudo-random number generator. Parameters **seed**{None, int, array_like[ints], SeedSequence}, optional A seed to initialize the [`BitGenerator`](generated/numpy.random.bitgenerator#numpy.random.BitGenerator "numpy.random.BitGenerator"). If None, then fresh, unpredictable entropy will be pulled from the OS. If an `int` or `array_like[ints]` is passed, then it will be passed to [`SeedSequence`](generated/numpy.random.seedsequence#numpy.random.SeedSequence "numpy.random.SeedSequence") to derive the initial [`BitGenerator`](generated/numpy.random.bitgenerator#numpy.random.BitGenerator "numpy.random.BitGenerator") state. One may also pass in a [`SeedSequence`](generated/numpy.random.seedsequence#numpy.random.SeedSequence "numpy.random.SeedSequence") instance. #### Notes PCG-64 DXSM is a 128-bit implementation of O’Neill’s permutation congruential generator ([[1]](#rad26d0f7d970-1), [[2]](#rad26d0f7d970-2)). PCG-64 DXSM has a period of \(2^{128}\) and supports advancing an arbitrary number of steps as well as \(2^{127}\) streams. The specific member of the PCG family that we use is PCG CM DXSM 128/64. It differs from `PCG64` in that it uses the stronger DXSM output function, a 64-bit “cheap multiplier” in the LCG, and outputs from the state before advancing it rather than advance-then-output. `PCG64DXSM` provides a capsule containing function pointers that produce doubles, and unsigned 32 and 64- bit integers. These are not directly consumable in Python and must be consumed by a `Generator` or similar object that supports low-level access. 
Supports the method [`advance`](generated/numpy.random.pcg64dxsm.advance#numpy.random.PCG64DXSM.advance "numpy.random.PCG64DXSM.advance") to advance the RNG an arbitrary number of steps. The state of the PCG-64 DXSM RNG is represented by 2 128-bit unsigned integers. **State and Seeding** The `PCG64DXSM` state vector consists of 2 unsigned 128-bit values, which are represented externally as Python ints. One is the state of the PRNG, which is advanced by a linear congruential generator (LCG). The second is a fixed odd increment used in the LCG. The input seed is processed by [`SeedSequence`](generated/numpy.random.seedsequence#numpy.random.SeedSequence "numpy.random.SeedSequence") to generate both values. The increment is not independently settable. **Parallel Features** The preferred way to use a BitGenerator in parallel applications is to use the [`SeedSequence.spawn`](generated/numpy.random.seedsequence.spawn#numpy.random.SeedSequence.spawn "numpy.random.SeedSequence.spawn") method to obtain entropy values, and to use these to generate new BitGenerators: ``` >>> from numpy.random import Generator, PCG64DXSM, SeedSequence >>> sg = SeedSequence(1234) >>> rg = [Generator(PCG64DXSM(s)) for s in sg.spawn(10)] ``` **Compatibility Guarantee** `PCG64DXSM` makes a guarantee that a fixed seed will always produce the same random integer stream. #### References [1](#id1) [“PCG, A Family of Better Random Number Generators”](http://www.pcg-random.org/) [2](#id2) O’Neill, <NAME>. 
[“PCG: A Family of Simple Fast Space-Efficient Statistically Good Algorithms for Random Number Generation”](https://www.cs.hmc.edu/tr/hmc-cs-2014-0905.pdf) State ----- | | | | --- | --- | | [`state`](generated/numpy.random.pcg64dxsm.state#numpy.random.PCG64DXSM.state "numpy.random.PCG64DXSM.state") | Get or set the PRNG state | Parallel generation ------------------- | | | | --- | --- | | [`advance`](generated/numpy.random.pcg64dxsm.advance#numpy.random.PCG64DXSM.advance "numpy.random.PCG64DXSM.advance")(delta) | Advance the underlying RNG as-if delta draws have occurred. | | [`jumped`](generated/numpy.random.pcg64dxsm.jumped#numpy.random.PCG64DXSM.jumped "numpy.random.PCG64DXSM.jumped")([jumps]) | Returns a new bit generator with the state jumped. | Extending --------- | | | | --- | --- | | [`cffi`](generated/numpy.random.pcg64dxsm.cffi#numpy.random.PCG64DXSM.cffi "numpy.random.PCG64DXSM.cffi") | CFFI interface | | [`ctypes`](generated/numpy.random.pcg64dxsm.ctypes#numpy.random.PCG64DXSM.ctypes "numpy.random.PCG64DXSM.ctypes") | ctypes interface | © 2005–2022 NumPy Developers Licensed under the 3-clause BSD License. <https://numpy.org/doc/1.23/reference/random/bit_generators/pcg64dxsm.htmlPhilox Counter-based RNG ======================== *class*numpy.random.Philox(*seed=None*, *counter=None*, *key=None*) Container for the Philox (4x64) pseudo-random number generator. Parameters **seed**{None, int, array_like[ints], SeedSequence}, optional A seed to initialize the [`BitGenerator`](generated/numpy.random.bitgenerator#numpy.random.BitGenerator "numpy.random.BitGenerator"). If None, then fresh, unpredictable entropy will be pulled from the OS. If an `int` or `array_like[ints]` is passed, then it will be passed to [`SeedSequence`](generated/numpy.random.seedsequence#numpy.random.SeedSequence "numpy.random.SeedSequence") to derive the initial [`BitGenerator`](generated/numpy.random.bitgenerator#numpy.random.BitGenerator "numpy.random.BitGenerator") state. 
One may also pass in a [`SeedSequence`](generated/numpy.random.seedsequence#numpy.random.SeedSequence "numpy.random.SeedSequence") instance. **counter**{None, int, array_like}, optional Counter to use in the Philox state. Can be either a Python int in [0, 2**256) or a 4-element uint64 array. If not provided, the RNG is initialized at 0. **key**{None, int, array_like}, optional Key to use in the Philox state. Unlike `seed`, the value in key is directly set. Can be either a Python int in [0, 2**128) or a 2-element uint64 array. `key` and `seed` cannot both be used.

#### Notes

Philox is a 64-bit PRNG that uses a counter-based design based on weaker (and faster) versions of cryptographic functions [[1]](#r40f8e2ad755a-1). Instances using different values of the key produce independent sequences. Philox has a period of \(2^{256} - 1\) and supports arbitrary advancing and jumping the sequence in increments of \(2^{128}\). These features allow multiple non-overlapping sequences to be generated. `Philox` provides a capsule containing function pointers that produce doubles, and unsigned 32 and 64-bit integers. These are not directly consumable in Python and must be consumed by a `Generator` or similar object that supports low-level access.

**State and Seeding**

The `Philox` state vector consists of a 256-bit value encoded as a 4-element uint64 array and a 128-bit value encoded as a 2-element uint64 array. The former is a counter which is incremented by 1 for every 4 64-bit randoms produced. The second is a key which determines the sequence produced. Using different keys produces independent sequences. The input `seed` is processed by [`SeedSequence`](generated/numpy.random.seedsequence#numpy.random.SeedSequence "numpy.random.SeedSequence") to generate the key. The counter is set to 0. Alternatively, one can omit the `seed` parameter and set the `key` and `counter` directly.
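As a minimal sketch of that last point, the seedless form can be constructed by setting `key` and `counter` directly (reusing the key value from the parallel-keys example below; the round-trip through the `state` dict is how the values can be inspected):

```python
from numpy.random import Generator, Philox

# Set the key directly instead of deriving it from a seed; the counter
# picks the starting point in the 2**256-long cycle.
bg = Philox(key=2**96 + 2**33 + 2**17 + 2**9, counter=12345)
rg = Generator(bg)

# Before any draws, the counter round-trips through the state dict
# (stored as a 4-element little-endian uint64 array).
assert int(bg.state['state']['counter'][0]) == 12345
```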
**Parallel Features**

The preferred way to use a BitGenerator in parallel applications is to use the [`SeedSequence.spawn`](generated/numpy.random.seedsequence.spawn#numpy.random.SeedSequence.spawn "numpy.random.SeedSequence.spawn") method to obtain entropy values, and to use these to generate new BitGenerators:

```
>>> from numpy.random import Generator, Philox, SeedSequence
>>> sg = SeedSequence(1234)
>>> rg = [Generator(Philox(s)) for s in sg.spawn(10)]
```

`Philox` can be used in parallel applications by calling the `jumped` method to advance the state as-if \(2^{128}\) random numbers have been generated. Alternatively, `advance` can be used to advance the counter for any positive step in [0, 2**256). When using `jumped`, all generators should be chained to ensure that the segments come from the same sequence.

```
>>> from numpy.random import Generator, Philox
>>> bit_generator = Philox(1234)
>>> rg = []
>>> for _ in range(10):
...     rg.append(Generator(bit_generator))
...     bit_generator = bit_generator.jumped()
```

Alternatively, `Philox` can be used in parallel applications by using a sequence of distinct keys where each instance uses a different key.

```
>>> key = 2**96 + 2**33 + 2**17 + 2**9
>>> rg = [Generator(Philox(key=key+i)) for i in range(10)]
```

**Compatibility Guarantee**

`Philox` makes a guarantee that a fixed `seed` will always produce the same random integer stream.

#### References

[1](#id1) <NAME>, <NAME>, <NAME>, and <NAME>, “Parallel Random Numbers: As Easy as 1, 2, 3,” Proceedings of the International Conference for High Performance Computing, Networking, Storage and Analysis (SC11), New York, NY: ACM, 2011.

#### Examples

```
>>> from numpy.random import Generator, Philox
>>> rg = Generator(Philox(1234))
>>> rg.standard_normal()
0.123  # random
```

Attributes **lock: threading.Lock** Lock instance that is shared so that the same bit generator can be used in multiple Generators without corrupting the state.
Code that generates values from a bit generator should hold the bit generator’s lock. State ----- | | | | --- | --- | | [`state`](generated/numpy.random.philox.state#numpy.random.Philox.state "numpy.random.Philox.state") | Get or set the PRNG state | Parallel generation ------------------- | | | | --- | --- | | [`advance`](generated/numpy.random.philox.advance#numpy.random.Philox.advance "numpy.random.Philox.advance")(delta) | Advance the underlying RNG as-if delta draws have occurred. | | [`jumped`](generated/numpy.random.philox.jumped#numpy.random.Philox.jumped "numpy.random.Philox.jumped")([jumps]) | Returns a new bit generator with the state jumped | Extending --------- | | | | --- | --- | | [`cffi`](generated/numpy.random.philox.cffi#numpy.random.Philox.cffi "numpy.random.Philox.cffi") | CFFI interface | | [`ctypes`](generated/numpy.random.philox.ctypes#numpy.random.Philox.ctypes "numpy.random.Philox.ctypes") | ctypes interface | © 2005–2022 NumPy Developers Licensed under the 3-clause BSD License. <https://numpy.org/doc/1.23/reference/random/bit_generators/philox.htmlSFC64 Small Fast Chaotic PRNG ============================= *class*numpy.random.SFC64(*seed=None*) BitGenerator for <NAME>’s Small Fast Chaotic PRNG. Parameters **seed**{None, int, array_like[ints], SeedSequence}, optional A seed to initialize the [`BitGenerator`](generated/numpy.random.bitgenerator#numpy.random.BitGenerator "numpy.random.BitGenerator"). If None, then fresh, unpredictable entropy will be pulled from the OS. If an `int` or `array_like[ints]` is passed, then it will be passed to [`SeedSequence`](generated/numpy.random.seedsequence#numpy.random.SeedSequence "numpy.random.SeedSequence") to derive the initial [`BitGenerator`](generated/numpy.random.bitgenerator#numpy.random.BitGenerator "numpy.random.BitGenerator") state. One may also pass in a [`SeedSequence`](generated/numpy.random.seedsequence#numpy.random.SeedSequence "numpy.random.SeedSequence") instance. 
#### Notes `SFC64` is a 256-bit implementation of <NAME>’s Small Fast Chaotic PRNG ([[1]](#r50352647a6aa-1)). `SFC64` has a few different cycles that one might be on, depending on the seed; the expected period will be about \(2^{255}\) ([[2]](#r50352647a6aa-2)). `SFC64` incorporates a 64-bit counter which means that the absolute minimum cycle length is \(2^{64}\) and that distinct seeds will not run into each other for at least \(2^{64}\) iterations. `SFC64` provides a capsule containing function pointers that produce doubles, and unsigned 32 and 64- bit integers. These are not directly consumable in Python and must be consumed by a `Generator` or similar object that supports low-level access. **State and Seeding** The `SFC64` state vector consists of 4 unsigned 64-bit values. The last is a 64-bit counter that increments by 1 each iteration. The input seed is processed by [`SeedSequence`](generated/numpy.random.seedsequence#numpy.random.SeedSequence "numpy.random.SeedSequence") to generate the first 3 values, then the `SFC64` algorithm is iterated a small number of times to mix. **Compatibility Guarantee** `SFC64` makes a guarantee that a fixed seed will always produce the same random integer stream. #### References [1](#id1) [“PractRand”](http://pracrand.sourceforge.net/RNG_engines.txt) [2](#id2) [“Random Invertible Mapping Statistics”](http://www.pcg-random.org/posts/random-invertible-mapping-statistics.html) State ----- | | | | --- | --- | | [`state`](generated/numpy.random.sfc64.state#numpy.random.SFC64.state "numpy.random.SFC64.state") | Get or set the PRNG state | Extending --------- | | | | --- | --- | | [`cffi`](generated/numpy.random.sfc64.cffi#numpy.random.SFC64.cffi "numpy.random.SFC64.cffi") | CFFI interface | | [`ctypes`](generated/numpy.random.sfc64.ctypes#numpy.random.SFC64.ctypes "numpy.random.SFC64.ctypes") | ctypes interface | © 2005–2022 NumPy Developers Licensed under the 3-clause BSD License. 
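The `SFC64` page has no usage example of its own; a minimal sketch, mirroring the other bit generators' pages, of wiring it into a `Generator` and exercising the fixed-seed guarantee stated above:

```python
from numpy.random import Generator, SFC64

# Wrap the bit generator in a Generator to get distribution methods
rg = Generator(SFC64(1234))
x = rg.standard_normal(5)
assert x.shape == (5,)

# Compatibility guarantee: the same seed reproduces the same stream
assert (Generator(SFC64(1234)).standard_normal(5) == x).all()
```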
<https://numpy.org/doc/1.23/reference/random/bit_generators/sfc64.html>

numpy.random.PCG64.advance
==========================

method random.PCG64.advance(*delta*) Advance the underlying RNG as-if delta draws have occurred. Parameters **delta**integer, positive Number of draws to advance the RNG. Must be less than the size of the state variable in the underlying RNG. Returns **self**PCG64 RNG advanced delta steps

#### Notes

Advancing an RNG updates the underlying RNG state as-if a given number of calls to the underlying RNG have been made. In general there is not a one-to-one relationship between the number of output random values from a particular distribution and the number of draws from the core RNG. This occurs for two reasons:

* The random values are simulated using a rejection-based method and so, on average, more than one value from the underlying RNG is required to generate a single draw.
* The number of bits required to generate a simulated value differs from the number of bits generated by the underlying RNG. For example, two 16-bit integer values can be simulated from a single draw of a 32-bit RNG.

Advancing the RNG state resets any pre-computed random numbers. This is required to ensure exact reproducibility.

© 2005–2022 NumPy Developers Licensed under the 3-clause BSD License.
<https://numpy.org/doc/1.23/reference/random/bit_generators/generated/numpy.random.PCG64.advance.html>

numpy.random.MT19937.jumped
===========================

method random.MT19937.jumped(*jumps=1*) Returns a new bit generator with the state jumped. The state of the returned bit generator is jumped as-if 2**(128 * jumps) random numbers have been generated. Parameters **jumps**integer, positive Number of times to jump the state of the bit generator returned Returns **bit_generator**MT19937 New instance of generator jumped `jumps` times

#### Notes

The jump step is computed using a modified version of Matsumoto’s implementation of Horner’s method.
The step polynomial is precomputed to perform 2**128 steps. The jumped state has been verified to match the state produced using Matsumoto’s original code.

#### References

1 Matsumoto, M, Generating multiple disjoint streams of pseudorandom number sequences. Accessed on: May 6, 2020. <http://www.math.sci.hiroshima-u.ac.jp/~m-mat/MT/JUMP/>

2 <NAME>, <NAME>, <NAME>, <NAME>, <NAME>, “Efficient Jump Ahead for F2-Linear Random Number Generators”, INFORMS JOURNAL ON COMPUTING, Vol. 20, No. 3, Summer 2008, pp. 385-390.

© 2005–2022 NumPy Developers Licensed under the 3-clause BSD License.
<https://numpy.org/doc/1.23/reference/random/bit_generators/generated/numpy.random.MT19937.jumped.html>

numpy.random.PCG64.jumped
=========================

method random.PCG64.jumped(*jumps=1*) Returns a new bit generator with the state jumped. Jumps the state as-if jumps * 210306068529402873165736369884012333109 random numbers have been generated. Parameters **jumps**integer, positive Number of times to jump the state of the bit generator returned Returns **bit_generator**PCG64 New instance of generator jumped `jumps` times

#### Notes

The step size is (phi - 1) multiplied by 2**128, where phi is the golden ratio.

© 2005–2022 NumPy Developers Licensed under the 3-clause BSD License.
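Taken together, the `advance` and `jumped` pages imply that a single `jumped()` call should be equivalent to `advance` with the documented per-jump constant. A quick sketch to check this (the equivalence is inferred from the docstrings, not stated outright; `JUMP` is just a local name for the documented constant):

```python
from numpy.random import PCG64

JUMP = 210306068529402873165736369884012333109  # per-jump step from the docs

a = PCG64(1234).jumped()
b = PCG64(1234)
b.advance(JUMP)

# Both paths should land on the same underlying LCG state
assert a.state['state'] == b.state['state']
```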
<https://numpy.org/doc/1.23/reference/random/bit_generators/generated/numpy.random.PCG64.jumped.html>

Extending via Numba
===================

```
import numpy as np
import numba as nb

from numpy.random import PCG64
from timeit import timeit

bit_gen = PCG64()
next_d = bit_gen.cffi.next_double
state_addr = bit_gen.cffi.state_address

def normals(n, state):
    out = np.empty(n)
    for i in range((n + 1) // 2):
        x1 = 2.0 * next_d(state) - 1.0
        x2 = 2.0 * next_d(state) - 1.0
        r2 = x1 * x1 + x2 * x2
        while r2 >= 1.0 or r2 == 0.0:
            x1 = 2.0 * next_d(state) - 1.0
            x2 = 2.0 * next_d(state) - 1.0
            r2 = x1 * x1 + x2 * x2
        f = np.sqrt(-2.0 * np.log(r2) / r2)
        out[2 * i] = f * x1
        if 2 * i + 1 < n:
            out[2 * i + 1] = f * x2
    return out

# Compile using Numba
normalsj = nb.jit(normals, nopython=True)
# Must use state address not state with numba
n = 10000

def numbacall():
    return normalsj(n, state_addr)

rg = np.random.Generator(PCG64())

def numpycall():
    return rg.normal(size=n)

# Check that the functions work
r1 = numbacall()
r2 = numpycall()
assert r1.shape == (n,)
assert r1.shape == r2.shape

t1 = timeit(numbacall, number=1000)
print(f'{t1:.2f} secs for {n} PCG64 (Numba/PCG64) gaussian randoms')
t2 = timeit(numpycall, number=1000)
print(f'{t2:.2f} secs for {n} PCG64 (NumPy/PCG64) gaussian randoms')

# example 2

next_u32 = bit_gen.ctypes.next_uint32
ctypes_state = bit_gen.ctypes.state

@nb.jit(nopython=True)
def bounded_uint(lb, ub, state):
    mask = delta = ub - lb
    mask |= mask >> 1
    mask |= mask >> 2
    mask |= mask >> 4
    mask |= mask >> 8
    mask |= mask >> 16

    val = next_u32(state) & mask
    while val > delta:
        val = next_u32(state) & mask

    return lb + val

print(bounded_uint(323, 2394691, ctypes_state.value))

@nb.jit(nopython=True)
def bounded_uints(lb, ub, n, state):
    out = np.empty(n, dtype=np.uint32)
    for i in range(n):
        out[i] = bounded_uint(lb, ub, state)
    return out

bounded_uints(323, 2394691, 10000000, ctypes_state.value)
```

© 2005–2022 NumPy Developers Licensed under the 3-clause BSD License.
<https://numpy.org/doc/1.23/reference/random/examples/numba.html>

Extending via Numba and CFFI
============================

```
r"""
Building the required library in this example requires a source distribution
of NumPy or clone of the NumPy git repository since distributions.c is not
included in binary distributions.

On *nix, execute in numpy/random/src/distributions

export ${PYTHON_VERSION}=3.8 # Python version
export PYTHON_INCLUDE=#path to Python's include folder, usually \
    ${PYTHON_HOME}/include/python${PYTHON_VERSION}m
export NUMPY_INCLUDE=#path to numpy's include folder, usually \
    ${PYTHON_HOME}/lib/python${PYTHON_VERSION}/site-packages/numpy/core/include
gcc -shared -o libdistributions.so -fPIC distributions.c \
    -I${NUMPY_INCLUDE} -I${PYTHON_INCLUDE}
mv libdistributions.so ../../_examples/numba/

On Windows

rem PYTHON_HOME and PYTHON_VERSION are setup dependent, this is an example
set PYTHON_HOME=c:\Anaconda
set PYTHON_VERSION=38
cl.exe /LD .\distributions.c -DDLL_EXPORT \
    -I%PYTHON_HOME%\lib\site-packages\numpy\core\include \
    -I%PYTHON_HOME%\include %PYTHON_HOME%\libs\python%PYTHON_VERSION%.lib
move distributions.dll ../../_examples/numba/
"""
import os

import numba as nb
import numpy as np
from cffi import FFI

from numpy.random import PCG64

ffi = FFI()
if os.path.exists('./distributions.dll'):
    lib = ffi.dlopen('./distributions.dll')
elif os.path.exists('./libdistributions.so'):
    lib = ffi.dlopen('./libdistributions.so')
else:
    raise RuntimeError('Required DLL/so file was not found.')

ffi.cdef("""
double random_standard_normal(void *bitgen_state);
""")
x = PCG64()
xffi = x.cffi
bit_generator = xffi.bit_generator

random_standard_normal = lib.random_standard_normal

def normals(n, bit_generator):
    out = np.empty(n)
    for i in range(n):
        out[i] = random_standard_normal(bit_generator)
    return out

normalsj = nb.jit(normals, nopython=True)

# Numba requires a memory address for void *
# Can also get address from x.ctypes.bit_generator.value
bit_generator_address = int(ffi.cast('uintptr_t', bit_generator))

norm = normalsj(1000, bit_generator_address)
print(norm[:12])
```

© 2005–2022 NumPy Developers Licensed under the 3-clause BSD License.
<https://numpy.org/doc/1.23/reference/random/examples/numba_cffi.html>

Extending numpy.random via Cython
=================================

* <setup.py>
* <extending.pyx>
* <extending_distributions.pyx>

© 2005–2022 NumPy Developers Licensed under the 3-clause BSD License.
<https://numpy.org/doc/1.23/reference/random/examples/cython/index.html>

numpy.random.BitGenerator.cffi
==============================

attribute random.BitGenerator.cffi CFFI interface Returns **interface**namedtuple Named tuple containing CFFI wrapper

* state_address - Memory address of the state struct
* state - pointer to the state struct
* next_uint64 - function pointer to produce 64 bit integers
* next_uint32 - function pointer to produce 32 bit integers
* next_double - function pointer to produce doubles
* bitgen - pointer to the bit generator struct

© 2005–2022 NumPy Developers Licensed under the 3-clause BSD License.
<https://numpy.org/doc/1.23/reference/random/bit_generators/generated/numpy.random.BitGenerator.cffi.html>

setup.py
========

```
#!/usr/bin/env python3
"""
Build the Cython demonstrations of low-level access to NumPy random

Usage: python setup.py build_ext -i
"""
import setuptools  # triggers monkeypatching distutils
from distutils.core import setup
from os.path import dirname, join, abspath

import numpy as np
from Cython.Build import cythonize
from numpy.distutils.misc_util import get_info
from setuptools.extension import Extension

path = dirname(__file__)
src_dir = join(dirname(path), '..', 'src')
defs = [('NPY_NO_DEPRECATED_API', 0)]
inc_path = np.get_include()
lib_path = [abspath(join(np.get_include(), '..', '..', 'random', 'lib'))]
lib_path += get_info('npymath')['library_dirs']

extending = Extension("extending",
                      sources=[join('.', 'extending.pyx')],
                      include_dirs=[
                          np.get_include(),
                          join(path, '..', '..')
                      ],
                      define_macros=defs,
                      )
distributions = Extension("extending_distributions",
                          sources=[join('.', 'extending_distributions.pyx')],
                          include_dirs=[inc_path],
                          library_dirs=lib_path,
                          libraries=['npyrandom', 'npymath'],
                          define_macros=defs,
                          )

extensions = [extending, distributions]

setup(
    ext_modules=cythonize(extensions)
)
```

© 2005–2022 NumPy Developers Licensed under the 3-clause BSD License.
<https://numpy.org/doc/1.23/reference/random/examples/cython/setup.py.html>

extending.pyx
=============

```
#!/usr/bin/env python3
#cython: language_level=3
from libc.stdint cimport uint32_t
from cpython.pycapsule cimport PyCapsule_IsValid, PyCapsule_GetPointer

import numpy as np
cimport numpy as np
cimport cython

from numpy.random cimport bitgen_t
from numpy.random import PCG64

np.import_array()


@cython.boundscheck(False)
@cython.wraparound(False)
def uniform_mean(Py_ssize_t n):
    cdef Py_ssize_t i
    cdef bitgen_t *rng
    cdef const char *capsule_name = "BitGenerator"
    cdef double[::1] random_values
    cdef np.ndarray randoms

    x = PCG64()
    capsule = x.capsule
    if not PyCapsule_IsValid(capsule, capsule_name):
        raise ValueError("Invalid pointer to anon_func_state")
    rng = <bitgen_t *> PyCapsule_GetPointer(capsule, capsule_name)
    random_values = np.empty(n)
    # Best practice is to acquire the lock whenever generating random values.
    # This prevents other threads from modifying the state. Acquiring the lock
    # is only necessary if the GIL is also released, as in this example.
    with x.lock, nogil:
        for i in range(n):
            random_values[i] = rng.next_double(rng.state)
    randoms = np.asarray(random_values)
    return randoms.mean()


# This function is declared nogil so it can be used without the GIL below
cdef uint32_t bounded_uint(uint32_t lb, uint32_t ub, bitgen_t *rng) nogil:
    cdef uint32_t mask, delta, val
    mask = delta = ub - lb
    mask |= mask >> 1
    mask |= mask >> 2
    mask |= mask >> 4
    mask |= mask >> 8
    mask |= mask >> 16

    val = rng.next_uint32(rng.state) & mask
    while val > delta:
        val = rng.next_uint32(rng.state) & mask

    return lb + val


@cython.boundscheck(False)
@cython.wraparound(False)
def bounded_uints(uint32_t lb, uint32_t ub, Py_ssize_t n):
    cdef Py_ssize_t i
    cdef bitgen_t *rng
    cdef uint32_t[::1] out
    cdef const char *capsule_name = "BitGenerator"

    x = PCG64()
    out = np.empty(n, dtype=np.uint32)
    capsule = x.capsule

    if not PyCapsule_IsValid(capsule, capsule_name):
        raise ValueError("Invalid pointer to anon_func_state")
    rng = <bitgen_t *>PyCapsule_GetPointer(capsule, capsule_name)

    with x.lock, nogil:
        for i in range(n):
            out[i] = bounded_uint(lb, ub, rng)
    return np.asarray(out)
```

© 2005–2022 NumPy Developers Licensed under the 3-clause BSD License.
<https://numpy.org/doc/1.23/reference/random/examples/cython/extending.pyx.html>

extending_distributions.pyx
===========================

```
#!/usr/bin/env python3
#cython: language_level=3
"""
This file shows how to use a BitGenerator to create a distribution.
"""
import numpy as np
cimport numpy as np
cimport cython
from cpython.pycapsule cimport PyCapsule_IsValid, PyCapsule_GetPointer
from libc.stdint cimport uint16_t, uint64_t
from numpy.random cimport bitgen_t
from numpy.random import PCG64
from numpy.random.c_distributions cimport (
      random_standard_uniform_fill, random_standard_uniform_fill_f)


@cython.boundscheck(False)
@cython.wraparound(False)
def uniforms(Py_ssize_t n):
    """
    Create an array of `n` uniformly distributed doubles.
    A 'real' distribution would want to process the values into
    some non-uniform distribution
    """
    cdef Py_ssize_t i
    cdef bitgen_t *rng
    cdef const char *capsule_name = "BitGenerator"
    cdef double[::1] random_values

    x = PCG64()
    capsule = x.capsule
    # Optional check that the capsule is from a BitGenerator
    if not PyCapsule_IsValid(capsule, capsule_name):
        raise ValueError("Invalid pointer to anon_func_state")
    # Cast the pointer
    rng = <bitgen_t *> PyCapsule_GetPointer(capsule, capsule_name)
    random_values = np.empty(n, dtype='float64')
    with x.lock, nogil:
        for i in range(n):
            # Call the function
            random_values[i] = rng.next_double(rng.state)
    randoms = np.asarray(random_values)

    return randoms


# cython example 2
@cython.boundscheck(False)
@cython.wraparound(False)
def uint10_uniforms(Py_ssize_t n):
    """Uniform 10 bit integers stored as 16-bit unsigned integers"""
    cdef Py_ssize_t i
    cdef bitgen_t *rng
    cdef const char *capsule_name = "BitGenerator"
    cdef uint16_t[::1] random_values
    cdef int bits_remaining
    cdef int width = 10
    cdef uint64_t buff, mask = 0x3FF

    x = PCG64()
    capsule = x.capsule
    if not PyCapsule_IsValid(capsule, capsule_name):
        raise ValueError("Invalid pointer to anon_func_state")
    rng = <bitgen_t *> PyCapsule_GetPointer(capsule, capsule_name)
    random_values = np.empty(n, dtype='uint16')
    # Best practice is to release GIL and acquire the lock
    bits_remaining = 0
    with x.lock, nogil:
        for i in range(n):
            if bits_remaining < width:
                buff = rng.next_uint64(rng.state)
                # Track the refilled buffer so bits are actually reused
                bits_remaining = 64
            random_values[i] = buff & mask
            buff >>= width
            bits_remaining -= width
    randoms = np.asarray(random_values)
    return randoms


# cython example 3
def uniforms_ex(bit_generator, Py_ssize_t n, dtype=np.float64):
    """
    Create an array of `n` uniformly distributed doubles via a "fill" function.

    A 'real' distribution would want to process the values into
    some non-uniform distribution

    Parameters
    ----------
    bit_generator: BitGenerator instance
    n: int
        Output vector length
    dtype: {str, dtype}, optional
        Desired dtype, either 'd' (or 'float64') or 'f' (or 'float32'). The
        default dtype value is 'd'
    """
    cdef Py_ssize_t i
    cdef bitgen_t *rng
    cdef const char *capsule_name = "BitGenerator"
    cdef np.ndarray randoms

    capsule = bit_generator.capsule
    # Optional check that the capsule is from a BitGenerator
    if not PyCapsule_IsValid(capsule, capsule_name):
        raise ValueError("Invalid pointer to anon_func_state")
    # Cast the pointer
    rng = <bitgen_t *> PyCapsule_GetPointer(capsule, capsule_name)

    _dtype = np.dtype(dtype)
    randoms = np.empty(n, dtype=_dtype)
    if _dtype == np.float32:
        with bit_generator.lock:
            random_standard_uniform_fill_f(rng, n, <float*>np.PyArray_DATA(randoms))
    elif _dtype == np.float64:
        with bit_generator.lock:
            random_standard_uniform_fill(rng, n, <double*>np.PyArray_DATA(randoms))
    else:
        raise TypeError('Unsupported dtype %r for random' % _dtype)

    return randoms
```

© 2005–2022 NumPy Developers Licensed under the 3-clause BSD License.
<https://numpy.org/doc/1.23/reference/random/examples/cython/extending_distributions.pyx.html>

numpy.random.BitGenerator.random_raw
====================================

method random.BitGenerator.random_raw(*self*, *size=None*) Return randoms as generated by the underlying BitGenerator Parameters **size**int or tuple of ints, optional Output shape. If the given shape is, e.g., `(m, n, k)`, then `m * n * k` samples are drawn. Default is None, in which case a single value is returned. **output**bool, optional Output values. Used for performance testing since the generated values are not returned. Returns **out**uint or ndarray Drawn samples.

#### Notes

This method directly exposes the raw underlying pseudo-random number generator. All values are returned as unsigned 64-bit values irrespective of the number of bits produced by the PRNG. See the class docstring for the number of bits returned.

© 2005–2022 NumPy Developers Licensed under the 3-clause BSD License.
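A short sketch of `random_raw` in use, exercising only the documented signature (uint64 output regardless of the PRNG width, and the fixed-seed reproducibility guarantee):

```python
import numpy as np
from numpy.random import PCG64

bg = PCG64(42)
block = bg.random_raw(4)          # ndarray of uint64, whatever the PRNG width
assert block.dtype == np.uint64 and block.shape == (4,)

# A fresh bit generator with the same seed replays the identical raw stream
assert (PCG64(42).random_raw(4) == block).all()
```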
<https://numpy.org/doc/1.23/reference/random/bit_generators/generated/numpy.random.BitGenerator.random_raw.htmlnumpy.random.PCG64.state ======================== attribute random.PCG64.state Get or set the PRNG state Returns **state**dict Dictionary containing the information required to describe the state of the PRNG © 2005–2022 NumPy Developers Licensed under the 3-clause BSD License. <https://numpy.org/doc/1.23/reference/random/bit_generators/generated/numpy.random.PCG64.state.htmlnumpy.random.MT19937.ctypes =========================== attribute random.MT19937.ctypes ctypes interface Returns **interface**namedtuple Named tuple containing ctypes wrapper * state_address - Memory address of the state struct * state - pointer to the state struct * next_uint64 - function pointer to produce 64 bit integers * next_uint32 - function pointer to produce 32 bit integers * next_double - function pointer to produce doubles * bitgen - pointer to the bit generator struct © 2005–2022 NumPy Developers Licensed under the 3-clause BSD License. <https://numpy.org/doc/1.23/reference/random/bit_generators/generated/numpy.random.MT19937.ctypes.htmlnumpy.random.MT19937.state ========================== attribute random.MT19937.state Get or set the PRNG state Returns **state**dict Dictionary containing the information required to describe the state of the PRNG © 2005–2022 NumPy Developers Licensed under the 3-clause BSD License. 
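Since the `state` attribute is both readable and assignable, it can be used to snapshot and replay a stream; a minimal sketch using MT19937:

```python
from numpy.random import MT19937

bg = MT19937(2021)
saved = bg.state            # plain dict snapshot of the PRNG state
first = bg.random_raw(5)

bg.state = saved            # restore the snapshot; the stream replays exactly
assert (bg.random_raw(5) == first).all()
```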
<https://numpy.org/doc/1.23/reference/random/bit_generators/generated/numpy.random.MT19937.state.htmlnumpy.random.MT19937.cffi ========================= attribute random.MT19937.cffi CFFI interface Returns **interface**namedtuple Named tuple containing CFFI wrapper * state_address - Memory address of the state struct * state - pointer to the state struct * next_uint64 - function pointer to produce 64 bit integers * next_uint32 - function pointer to produce 32 bit integers * next_double - function pointer to produce doubles * bitgen - pointer to the bit generator struct © 2005–2022 NumPy Developers Licensed under the 3-clause BSD License. <https://numpy.org/doc/1.23/reference/random/bit_generators/generated/numpy.random.MT19937.cffi.htmlnumpy.random.SeedSequence.spawn_key ==================================== attribute random.SeedSequence.spawn_key © 2005–2022 NumPy Developers Licensed under the 3-clause BSD License. <https://numpy.org/doc/1.23/reference/random/bit_generators/generated/numpy.random.SeedSequence.spawn_key.htmlnumpy.random.SeedSequence.generate_state ========================================= method random.SeedSequence.generate_state(*n_words*, *dtype=np.uint32*) Return the requested number of words for PRNG seeding. A BitGenerator should call this method in its constructor with an appropriate `n_words` parameter to properly seed itself. Parameters **n_words**int **dtype**np.uint32 or np.uint64, optional The size of each word. This should only be either [`uint32`](../../../arrays.scalars#numpy.uint32 "numpy.uint32") or [`uint64`](../../../arrays.scalars#numpy.uint64 "numpy.uint64"). Strings (`‘uint32’`, `‘uint64’`) are fine. Note that requesting [`uint64`](../../../arrays.scalars#numpy.uint64 "numpy.uint64") will draw twice as many bits as [`uint32`](../../../arrays.scalars#numpy.uint32 "numpy.uint32") for the same `n_words`. This is a convenience for `BitGenerator`s that express their states as `uint64` arrays. 
Returns **state**uint32 or uint64 array, shape=(n_words,) © 2005–2022 NumPy Developers Licensed under the 3-clause BSD License. <https://numpy.org/doc/1.23/reference/random/bit_generators/generated/numpy.random.SeedSequence.generate_state.htmlnumpy.random.SeedSequence.entropy ================================= attribute random.SeedSequence.entropy © 2005–2022 NumPy Developers Licensed under the 3-clause BSD License. <https://numpy.org/doc/1.23/reference/random/bit_generators/generated/numpy.random.SeedSequence.entropy.htmlnumpy.random.SFC64.ctypes ========================= attribute random.SFC64.ctypes ctypes interface Returns **interface**namedtuple Named tuple containing ctypes wrapper * state_address - Memory address of the state struct * state - pointer to the state struct * next_uint64 - function pointer to produce 64 bit integers * next_uint32 - function pointer to produce 32 bit integers * next_double - function pointer to produce doubles * bitgen - pointer to the bit generator struct © 2005–2022 NumPy Developers Licensed under the 3-clause BSD License. <https://numpy.org/doc/1.23/reference/random/bit_generators/generated/numpy.random.SFC64.ctypes.htmlnumpy.testing.suppress_warnings.__call__ ============================================= method testing.suppress_warnings.__call__(*func*)[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/testing/_private/utils.py#L2292-L2302) Function decorator to apply certain suppressions to a whole function. © 2005–2022 NumPy Developers Licensed under the 3-clause BSD License. <https://numpy.org/doc/1.23/reference/generated/numpy.testing.suppress_warnings.__call__.htmlnumpy.testing.suppress_warnings.filter ======================================= method testing.suppress_warnings.filter(*category=<class 'Warning'>*, *message=''*, *module=None*)[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/testing/_private/utils.py#L2148-L2169) Add a new suppressing filter or apply it if the state is entered. 
Parameters **category**class, optional Warning class to filter **message**string, optional Regular expression matching the warning message. **module**module, optional Module to filter for. Note that the module (and its file) must match exactly and cannot be a submodule. This may make it unreliable for external modules. #### Notes When added within a context, filters are only added inside the context and will be forgotten when the context is exited. © 2005–2022 NumPy Developers Licensed under the 3-clause BSD License. <https://numpy.org/doc/1.23/reference/generated/numpy.testing.suppress_warnings.filter.htmlnumpy.testing.suppress_warnings.record ======================================= method testing.suppress_warnings.record(*category=<class 'Warning'>*, *message=''*, *module=None*)[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/testing/_private/utils.py#L2171-L2199) Append a new recording filter or apply it if the state is entered. All warnings matching will be appended to the `log` attribute. Parameters **category**class, optional Warning class to filter **message**string, optional Regular expression matching the warning message. **module**module, optional Module to filter for. Note that the module (and its file) must match exactly and cannot be a submodule. This may make it unreliable for external modules. Returns **log**list A list which will be filled with all matched warnings. #### Notes When added within a context, filters are only added inside the context and will be forgotten when the context is exited. © 2005–2022 NumPy Developers Licensed under the 3-clause BSD License. 
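A sketch combining `filter` and `record` inside a context, illustrating the regex-matching behaviour described above (the warning messages here are arbitrary):

```python
import warnings
from numpy.testing import suppress_warnings

with suppress_warnings() as sup:
    sup.filter(DeprecationWarning)              # silence these entirely
    log = sup.record(UserWarning, "invalid")    # collect matching warnings
    warnings.warn("deprecated thing", DeprecationWarning)
    warnings.warn("invalid value encountered", UserWarning)

# Only the matching UserWarning was recorded; both were suppressed
assert len(log) == 1
assert "invalid" in str(log[0].message)
```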
<https://numpy.org/doc/1.23/reference/generated/numpy.testing.suppress_warnings.record.html>

numpy.distutils.ccompiler_opt.CCompilerOpt.cache_flush
========================================================

method distutils.ccompiler_opt.CCompilerOpt.cache_flush()[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/distutils/ccompiler_opt.py#L850-L876) Force update the cache.

© 2005–2022 NumPy Developers Licensed under the 3-clause BSD License.
<https://numpy.org/doc/1.23/reference/generated/numpy.distutils.ccompiler_opt.CCompilerOpt.cache_flush.html>

numpy.distutils.ccompiler_opt.CCompilerOpt.cc_normalize_flags
================================================================

method distutils.ccompiler_opt.CCompilerOpt.cc_normalize_flags(*flags*)[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/distutils/ccompiler_opt.py#L1099-L1128) Remove the conflicts caused by gathering implied feature flags. Parameters **flags**list, compiler flags Flags should be sorted from the lowest to the highest interest. Returns list, with any conflicting flags removed.

#### Examples

```
>>> self.cc_normalize_flags(['-march=armv8.2-a+fp16', '-march=armv8.2-a+dotprod'])
['armv8.2-a+fp16+dotprod']
```

```
>>> self.cc_normalize_flags(
    ['-msse', '-msse2', '-msse3', '-mssse3', '-msse4.1', '-msse4.2', '-mavx', '-march=core-avx2']
)
['-march=core-avx2']
```

© 2005–2022 NumPy Developers Licensed under the 3-clause BSD License.
<https://numpy.org/doc/1.23/reference/generated/numpy.distutils.ccompiler_opt.CCompilerOpt.cc_normalize_flags.html>

numpy.distutils.ccompiler_opt.CCompilerOpt.conf_features_partial
================================================================

method

distutils.ccompiler_opt.CCompilerOpt.conf_features_partial()[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/distutils/ccompiler_opt.py#L329-L562)

Return a dictionary of the CPU features supported by the platform, accumulating the remaining undefined options in [`conf_features`](numpy.distutils.ccompiler_opt.ccompileropt.conf_features#numpy.distutils.ccompiler_opt.CCompilerOpt.conf_features "numpy.distutils.ccompiler_opt.CCompilerOpt.conf_features"). The returned dict follows the same rules and notes as the class attribute [`conf_features`](numpy.distutils.ccompiler_opt.ccompileropt.conf_features#numpy.distutils.ccompiler_opt.CCompilerOpt.conf_features "numpy.distutils.ccompiler_opt.CCompilerOpt.conf_features"), and it overrides any options already set in `conf_features`.

<https://numpy.org/doc/1.23/reference/generated/numpy.distutils.ccompiler_opt.CCompilerOpt.conf_features_partial.html>

numpy.distutils.ccompiler_opt.CCompilerOpt.cpu_baseline_flags
=============================================================

method

distutils.ccompiler_opt.CCompilerOpt.cpu_baseline_flags()[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/distutils/ccompiler_opt.py#L2238-L2242)

Return a list of the final CPU baseline compiler flags.
<https://numpy.org/doc/1.23/reference/generated/numpy.distutils.ccompiler_opt.CCompilerOpt.cpu_baseline_flags.html>

numpy.distutils.ccompiler_opt.CCompilerOpt.cpu_baseline_names
=============================================================

method

distutils.ccompiler_opt.CCompilerOpt.cpu_baseline_names()[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/distutils/ccompiler_opt.py#L2244-L2248)

Return a list of the final CPU baseline feature names.

<https://numpy.org/doc/1.23/reference/generated/numpy.distutils.ccompiler_opt.CCompilerOpt.cpu_baseline_names.html>

numpy.distutils.ccompiler_opt.CCompilerOpt.cpu_dispatch_names
=============================================================

method

distutils.ccompiler_opt.CCompilerOpt.cpu_dispatch_names()[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/distutils/ccompiler_opt.py#L2250-L2254)

Return a list of the final CPU dispatch feature names.

<https://numpy.org/doc/1.23/reference/generated/numpy.distutils.ccompiler_opt.CCompilerOpt.cpu_dispatch_names.html>

numpy.distutils.ccompiler_opt.CCompilerOpt.dist_compile
=======================================================

method

distutils.ccompiler_opt.CCompilerOpt.dist_compile(*sources*, *flags*, *ccompiler=None*, ***kwargs*)[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/distutils/ccompiler_opt.py#L599-L607)

Wrap CCompiler.compile().
<https://numpy.org/doc/1.23/reference/generated/numpy.distutils.ccompiler_opt.CCompilerOpt.dist_compile.html>

numpy.distutils.ccompiler_opt.CCompilerOpt.dist_error
=====================================================

static method

distutils.ccompiler_opt.CCompilerOpt.dist_error(**args*)[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/distutils/ccompiler_opt.py#L672-L676)

Raise a compiler error.

<https://numpy.org/doc/1.23/reference/generated/numpy.distutils.ccompiler_opt.CCompilerOpt.dist_error.html>

numpy.distutils.ccompiler_opt.CCompilerOpt.dist_fatal
=====================================================

static method

distutils.ccompiler_opt.CCompilerOpt.dist_fatal(**args*)[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/distutils/ccompiler_opt.py#L678-L682)

Raise a distutils error.

<https://numpy.org/doc/1.23/reference/generated/numpy.distutils.ccompiler_opt.CCompilerOpt.dist_fatal.html>

numpy.distutils.ccompiler_opt.CCompilerOpt.dist_info
====================================================

method

distutils.ccompiler_opt.CCompilerOpt.dist_info()[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/distutils/ccompiler_opt.py#L635-L670)

Return a tuple containing info about (platform, compiler, extra_args), required by the abstract class '_CCompiler' for discovering the platform environment. This is also used as a cache factor in order to detect any changes coming from outside.
<https://numpy.org/doc/1.23/reference/generated/numpy.distutils.ccompiler_opt.CCompilerOpt.dist_info.html>

numpy.distutils.ccompiler_opt.CCompilerOpt.dist_load_module
===========================================================

static method

distutils.ccompiler_opt.CCompilerOpt.dist_load_module(*name*, *path*)[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/distutils/ccompiler_opt.py#L694-L702)

Load a module from file, required by the abstract class '_Cache'.

<https://numpy.org/doc/1.23/reference/generated/numpy.distutils.ccompiler_opt.CCompilerOpt.dist_load_module.html>

numpy.distutils.ccompiler_opt.CCompilerOpt.dist_log
===================================================

static method

distutils.ccompiler_opt.CCompilerOpt.dist_log(**args*, *stderr=False*)[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/distutils/ccompiler_opt.py#L684-L692)

Print a console message.

<https://numpy.org/doc/1.23/reference/generated/numpy.distutils.ccompiler_opt.CCompilerOpt.dist_log.html>

numpy.distutils.ccompiler_opt.CCompilerOpt.dist_test
====================================================

method

distutils.ccompiler_opt.CCompilerOpt.dist_test(*source*, *flags*, *macros=[]*)[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/distutils/ccompiler_opt.py#L609-L633)

Return True if 'CCompiler.compile()' is able to compile a source file with certain flags.
<https://numpy.org/doc/1.23/reference/generated/numpy.distutils.ccompiler_opt.CCompilerOpt.dist_test.html>

numpy.distutils.ccompiler_opt.CCompilerOpt.feature_ahead
========================================================

method

distutils.ccompiler_opt.CCompilerOpt.feature_ahead(*names*)[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/distutils/ccompiler_opt.py#L1392-L1428)

Return the list of features in 'names' after removing any implied features, keeping the origins.

Parameters

**names** : sequence
Sequence of CPU feature names in uppercase.

Returns

list of CPU features, sorted as in 'names'.

#### Examples

```
>>> self.feature_ahead(["SSE2", "SSE3", "SSE41"])
["SSE41"]
# assume AVX2 and FMA3 imply each other and AVX2
# is the highest interest
>>> self.feature_ahead(["SSE2", "SSE3", "SSE41", "AVX2", "FMA3"])
["AVX2"]
# assume AVX2 and FMA3 don't imply each other
>>> self.feature_ahead(["SSE2", "SSE3", "SSE41", "AVX2", "FMA3"])
["AVX2", "FMA3"]
```

<https://numpy.org/doc/1.23/reference/generated/numpy.distutils.ccompiler_opt.CCompilerOpt.feature_ahead.html>

numpy.distutils.ccompiler_opt.CCompilerOpt.feature_c_preprocessor
=================================================================

method

distutils.ccompiler_opt.CCompilerOpt.feature_c_preprocessor(*feature_name*, *tabs=0*)[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/distutils/ccompiler_opt.py#L1649-L1696)

Generate C preprocessor definitions and include headers of a CPU feature.

Parameters

**feature_name** : str
CPU feature name in uppercase.

**tabs** : int
If > 0, align the generated strings to the right, depending on the number of tabs.

Returns

str, generated C preprocessor.

#### Examples

```
>>> self.feature_c_preprocessor("SSE3")
/** SSE3 **/
#define NPY_HAVE_SSE3 1
#include <pmmintrin.h>
```
<https://numpy.org/doc/1.23/reference/generated/numpy.distutils.ccompiler_opt.CCompilerOpt.feature_c_preprocessor.html>

numpy.distutils.ccompiler_opt.CCompilerOpt.feature_detect
=========================================================

method

distutils.ccompiler_opt.CCompilerOpt.feature_detect(*names*)[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/distutils/ccompiler_opt.py#L1498-L1508)

Return a list of the CPU features that are required to be detected, sorted from the lowest to the highest interest.

<https://numpy.org/doc/1.23/reference/generated/numpy.distutils.ccompiler_opt.CCompilerOpt.feature_detect.html>

numpy.distutils.ccompiler_opt.CCompilerOpt.feature_get_til
==========================================================

method

distutils.ccompiler_opt.CCompilerOpt.feature_get_til(*names*, *keyisfalse*)[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/distutils/ccompiler_opt.py#L1471-L1496)

Same as `feature_implies_c()`, but stop collecting implied features when the feature option provided through the parameter 'keyisfalse' is False; the returned features are also sorted.

<https://numpy.org/doc/1.23/reference/generated/numpy.distutils.ccompiler_opt.CCompilerOpt.feature_get_til.html>

numpy.distutils.ccompiler_opt.CCompilerOpt.feature_implies
==========================================================

method

distutils.ccompiler_opt.CCompilerOpt.feature_implies(*names*, *keep_origins=False*)[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/distutils/ccompiler_opt.py#L1335-L1382)

Return a set of the CPU features that are implied by 'names'.

Parameters

**names** : str or sequence of str
CPU feature name(s) in uppercase.

**keep_origins** : bool
If False (default), the returned set will not contain any features from 'names'. This case happens only when two features imply each other.
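The transitive implication this method computes can be sketched in plain Python. The mapping below is a hypothetical miniature of the 'implies' relation from `conf_features`, not NumPy's real table, and the function is a simplified stand-in for `feature_implies(..., keep_origins=False)`:

```python
# Hypothetical miniature of the 'implies' relation (SSE and SSE2 imply each other).
IMPLIES = {"SSE": ["SSE2"], "SSE2": ["SSE"], "SSE3": ["SSE2"]}

def implied(name):
    """Transitive closure of IMPLIES, excluding the starting feature."""
    seen, stack = set(), [name]
    while stack:
        for dep in IMPLIES.get(stack.pop(), []):
            if dep != name and dep not in seen:
                seen.add(dep)
                stack.append(dep)
    return seen

assert implied("SSE3") == {"SSE", "SSE2"}
assert implied("SSE2") == {"SSE"}  # the origin itself is dropped
```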
#### Examples

```
>>> self.feature_implies("SSE3")
{'SSE', 'SSE2'}
>>> self.feature_implies("SSE2")
{'SSE'}
>>> self.feature_implies("SSE2", keep_origins=True)
# 'SSE2' found here since 'SSE' and 'SSE2' imply each other
{'SSE', 'SSE2'}
```

<https://numpy.org/doc/1.23/reference/generated/numpy.distutils.ccompiler_opt.CCompilerOpt.feature_implies.html>

numpy.distutils.ccompiler_opt.CCompilerOpt.feature_implies_c
============================================================

method

distutils.ccompiler_opt.CCompilerOpt.feature_implies_c(*names*)[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/distutils/ccompiler_opt.py#L1384-L1390)

Same as feature_implies(), but combining 'names'.

<https://numpy.org/doc/1.23/reference/generated/numpy.distutils.ccompiler_opt.CCompilerOpt.feature_implies_c.html>

numpy.distutils.ccompiler_opt.CCompilerOpt.feature_is_exist
===========================================================

method

distutils.ccompiler_opt.CCompilerOpt.feature_is_exist(*name*)[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/distutils/ccompiler_opt.py#L1296-L1307)

Returns True if a certain feature exists and is covered within `_Config.conf_features`.

Parameters

**name** : str
Feature name in uppercase.

<https://numpy.org/doc/1.23/reference/generated/numpy.distutils.ccompiler_opt.CCompilerOpt.feature_is_exist.html>

numpy.distutils.ccompiler_opt.CCompilerOpt.feature_names
========================================================

method

distutils.ccompiler_opt.CCompilerOpt.feature_names(*names=None*, *force_flags=None*, *macros=[]*)[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/distutils/ccompiler_opt.py#L1261-L1294)

Returns a set of the CPU feature names that are supported by the platform and the **C** compiler.
Parameters

**names** : sequence or None, optional
Specify certain CPU features to test against the **C** compiler. If None (default), all currently supported features are tested. **Note**: feature names must be in upper-case.

**force_flags** : list or None, optional
If None (default), the default compiler flags for every CPU feature will be used during the test.

**macros** : list of tuples, optional
A list of C macro definitions.

<https://numpy.org/doc/1.23/reference/generated/numpy.distutils.ccompiler_opt.CCompilerOpt.feature_names.html>

numpy.distutils.ccompiler_opt.CCompilerOpt.feature_sorted
=========================================================

method

distutils.ccompiler_opt.CCompilerOpt.feature_sorted(*names*, *reverse=False*)[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/distutils/ccompiler_opt.py#L1309-L1333)

Sort a list of CPU features, ordered from the lowest interest.

Parameters

**names** : sequence
Sequence of supported feature names in uppercase.

**reverse** : bool, optional
If True, the sort order is reversed (highest interest first).

Returns

list, sorted CPU features.

<https://numpy.org/doc/1.23/reference/generated/numpy.distutils.ccompiler_opt.CCompilerOpt.feature_sorted.html>

numpy.distutils.ccompiler_opt.CCompilerOpt.feature_untied
=========================================================

method

distutils.ccompiler_opt.CCompilerOpt.feature_untied(*names*)[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/distutils/ccompiler_opt.py#L1430-L1469)

Same as 'feature_ahead()', but when two features imply each other, keep only the one with the highest interest.

Parameters

**names** : sequence
Sequence of CPU feature names in uppercase.
Returns

list of CPU features, sorted as in 'names'.

#### Examples

```
>>> self.feature_untied(["SSE2", "SSE3", "SSE41"])
["SSE2", "SSE3", "SSE41"]
# assume AVX2 and FMA3 imply each other
>>> self.feature_untied(["SSE2", "SSE3", "SSE41", "FMA3", "AVX2"])
["SSE2", "SSE3", "SSE41", "AVX2"]
```

<https://numpy.org/doc/1.23/reference/generated/numpy.distutils.ccompiler_opt.CCompilerOpt.feature_untied.html>

numpy.distutils.ccompiler_opt.CCompilerOpt.generate_dispatch_header
===================================================================

method

distutils.ccompiler_opt.CCompilerOpt.generate_dispatch_header(*header_path*)[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/distutils/ccompiler_opt.py#L2343-L2424)

Generate the dispatch header, which contains the #definitions and headers of the platform-specific instruction sets for the enabled CPU baseline and dispatch-able features. It's highly recommended to take a look at the generated header, as well as the source files generated via `try_dispatch()`, in order to get the full picture.

<https://numpy.org/doc/1.23/reference/generated/numpy.distutils.ccompiler_opt.CCompilerOpt.generate_dispatch_header.html>

numpy.distutils.ccompiler_opt.CCompilerOpt.is_cached
====================================================

method

distutils.ccompiler_opt.CCompilerOpt.is_cached()[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/distutils/ccompiler_opt.py#L2232-L2236)

Returns True if the class was loaded from the cache file.
<https://numpy.org/doc/1.23/reference/generated/numpy.distutils.ccompiler_opt.CCompilerOpt.is_cached.html>

numpy.distutils.ccompiler_opt.CCompilerOpt.me
=============================================

static method

distutils.ccompiler_opt.CCompilerOpt.me(*cb*)[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/distutils/ccompiler_opt.py#L888-L904)

A static method that can be treated as a decorator to dynamically cache certain methods.

<https://numpy.org/doc/1.23/reference/generated/numpy.distutils.ccompiler_opt.CCompilerOpt.me.html>

numpy.distutils.ccompiler_opt.CCompilerOpt.parse_targets
========================================================

method

distutils.ccompiler_opt.CCompilerOpt.parse_targets(*source*)[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/distutils/ccompiler_opt.py#L1827-L1879)

Fetch and parse the configuration statements required for defining the targeted CPU features. Statements should be declared at the top of the source, inside a **C** comment, and start with the special mark **@targets**. Configuration statements are a sort of keywords representing CPU feature names, groups of statements, and policies, combined together to determine the required optimization.

Parameters

**source** : str
The path of the **C** source file.

Returns

* bool, True if the group has the 'baseline' option
* list, list of CPU features
* list, list of extra compiler flags
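To illustrate the kind of header `parse_targets` consumes, here is a toy extraction with a regular expression. This regex is an illustration only, not the parser NumPy actually uses, and the feature names are made up for the example:

```python
import re

# A minimal C source with the special '@targets' comment at the top.
source = """/*@targets baseline avx2 asimd */
int main(void) { return 0; }
"""

m = re.search(r"@targets(.*?)\*/", source, re.S)
tokens = m.group(1).split()
has_baseline = "baseline" in tokens
features = [t.upper() for t in tokens if t != "baseline"]

assert has_baseline
assert features == ["AVX2", "ASIMD"]
```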
<https://numpy.org/doc/1.23/reference/generated/numpy.distutils.ccompiler_opt.CCompilerOpt.parse_targets.html>

numpy.distutils.ccompiler_opt.CCompilerOpt.conf_features
========================================================

attribute

distutils.ccompiler_opt.CCompilerOpt.conf_features*={'ASIMD': {'implies': 'NEON_FP16 NEON_VFPV4', 'implies_detect': False, 'interest': 4}, 'ASIMDDP': {'implies': 'ASIMD', 'interest': 6}, 'ASIMDFHM': {'implies': 'ASIMDHP', 'interest': 7}, 'ASIMDHP': {'implies': 'ASIMD', 'interest': 5}, 'AVX': {'headers': 'immintrin.h', 'implies': 'SSE42', 'implies_detect': False, 'interest': 8}, 'AVX2': {'implies': 'F16C', 'interest': 13}, 'AVX512CD': {'implies': 'AVX512F', 'interest': 21}, 'AVX512F': {'extra_checks': 'AVX512F_REDUCE', 'implies': 'FMA3 AVX2', 'implies_detect': False, 'interest': 20}, 'AVX512_CLX': {'detect': 'AVX512_CLX', 'group': 'AVX512VNNI', 'implies': 'AVX512_SKX', 'interest': 43}, 'AVX512_CNL': {'detect': 'AVX512_CNL', 'group': 'AVX512IFMA AVX512VBMI', 'implies': 'AVX512_SKX', 'implies_detect': False, 'interest': 44}, 'AVX512_ICL': {'detect': 'AVX512_ICL', 'group': 'AVX512VBMI2 AVX512BITALG AVX512VPOPCNTDQ', 'implies': 'AVX512_CLX AVX512_CNL', 'implies_detect': False, 'interest': 45}, 'AVX512_KNL': {'detect': 'AVX512_KNL', 'group': 'AVX512ER AVX512PF', 'implies': 'AVX512CD', 'implies_detect': False, 'interest': 40}, 'AVX512_KNM': {'detect': 'AVX512_KNM', 'group': 'AVX5124FMAPS AVX5124VNNIW AVX512VPOPCNTDQ', 'implies': 'AVX512_KNL', 'implies_detect': False, 'interest': 41}, 'AVX512_SKX': {'detect': 'AVX512_SKX', 'extra_checks': 'AVX512BW_MASK AVX512DQ_MASK', 'group': 'AVX512VL AVX512BW AVX512DQ', 'implies': 'AVX512CD', 'implies_detect': False, 'interest': 42}, 'F16C': {'implies': 'AVX', 'interest': 11}, 'FMA3': {'implies': 'F16C', 'interest': 12}, 'FMA4': {'headers': 'x86intrin.h', 'implies': 'AVX', 'interest': 10}, 'NEON': {'headers': 'arm_neon.h', 'interest': 1}, 'NEON_FP16': {'implies': 'NEON', 'interest': 2}, 'NEON_VFPV4': {'implies': 'NEON_FP16', 'interest': 3}, 'POPCNT': {'headers': 'popcntintrin.h', 'implies': 'SSE41', 'interest': 6}, 'SSE': {'headers': 'xmmintrin.h', 'implies': 'SSE2', 'interest': 1}, 'SSE2': {'headers': 'emmintrin.h', 'implies': 'SSE', 'interest': 2}, 'SSE3': {'headers': 'pmmintrin.h', 'implies': 'SSE2', 'interest': 3}, 'SSE41': {'headers': 'smmintrin.h', 'implies': 'SSSE3', 'interest': 5}, 'SSE42': {'implies': 'POPCNT', 'interest': 7}, 'SSSE3': {'headers': 'tmmintrin.h', 'implies': 'SSE3', 'interest': 4}, 'VSX': {'extra_checks': 'VSX_ASM', 'headers': 'altivec.h', 'interest': 1}, 'VSX2': {'implies': 'VSX', 'implies_detect': False, 'interest': 2}, 'VSX3': {'implies': 'VSX2', 'implies_detect': False, 'interest': 3}, 'VSX4': {'extra_checks': 'VSX4_MMA', 'implies': 'VSX3', 'implies_detect': False, 'interest': 4}, 'VX': {'headers': 'vecintrin.h', 'interest': 1}, 'VXE': {'implies': 'VX', 'implies_detect': False, 'interest': 2}, 'VXE2': {'implies': 'VXE', 'implies_detect': False, 'interest': 3}, 'XOP': {'headers': 'x86intrin.h', 'implies': 'AVX', 'interest': 9}}*

<https://numpy.org/doc/1.23/reference/generated/numpy.distutils.ccompiler_opt.CCompilerOpt.conf_features.html>

numpy.polynomial.chebyshev.Chebyshev.domain
===========================================

attribute

polynomial.chebyshev.Chebyshev.domain*=array([-1, 1])*

<https://numpy.org/doc/1.23/reference/generated/numpy.polynomial.chebyshev.Chebyshev.domain.html>

numpy.polynomial.chebyshev.Chebyshev.__call__
=============================================

method

polynomial.chebyshev.Chebyshev.__call__(*arg*)[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/polynomial/_polybase.py#L480-L483)

Call self as a function.
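A quick sketch of `__call__` in action (an illustration, not taken from the page above): evaluating a Chebyshev series at a scalar or an array.

```python
import numpy as np
from numpy.polynomial import Chebyshev

# Coefficients are in the Chebyshev basis: p = 1*T_2, and T_2(x) = 2x^2 - 1.
p = Chebyshev([0, 0, 1])

assert np.allclose(p(0.5), 2 * 0.5**2 - 1)                # scalar argument
assert np.allclose(p(np.array([0.0, 1.0])), [-1.0, 1.0])  # array argument
```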
<https://numpy.org/doc/1.23/reference/generated/numpy.polynomial.chebyshev.Chebyshev.__call__.html>

numpy.polynomial.chebyshev.Chebyshev.basis
==========================================

classmethod

polynomial.chebyshev.Chebyshev.basis(*deg*, *domain=None*, *window=None*)[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/polynomial/_polybase.py#L1062-L1099)

Series basis polynomial of degree `deg`.

Returns the series representing the basis polynomial of degree `deg`.

New in version 1.7.0.

Parameters

**deg** : int
Degree of the basis polynomial for the series. Must be >= 0.

**domain** : {None, array_like}, optional
If given, the array must be of the form `[beg, end]`, where `beg` and `end` are the endpoints of the domain. If None is given then the class domain is used. The default is None.

**window** : {None, array_like}, optional
If given, the array must be of the form `[beg, end]`, where `beg` and `end` are the endpoints of the window. If None is given then the class window is used. The default is None.

Returns

**new_series** : series
A series with the coefficient of the `deg` term set to one and all others zero.

<https://numpy.org/doc/1.23/reference/generated/numpy.polynomial.chebyshev.Chebyshev.basis.html>

numpy.polynomial.chebyshev.Chebyshev.cast
=========================================

classmethod

polynomial.chebyshev.Chebyshev.cast(*series*, *domain=None*, *window=None*)[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/polynomial/_polybase.py#L1101-L1141)

Convert series to a series of this class.

The `series` is expected to be an instance of some polynomial series of one of the types supported by the numpy.polynomial module, but could be some other class that supports the convert method.

New in version 1.7.0.

Parameters

**series** : series
The series instance to be converted.
**domain** : {None, array_like}, optional
If given, the array must be of the form `[beg, end]`, where `beg` and `end` are the endpoints of the domain. If None is given then the class domain is used. The default is None.

**window** : {None, array_like}, optional
If given, the array must be of the form `[beg, end]`, where `beg` and `end` are the endpoints of the window. If None is given then the class window is used. The default is None.

Returns

**new_series** : series
A series of the same kind as the calling class and equal to `series` when evaluated.

See also

[`convert`](numpy.polynomial.chebyshev.chebyshev.convert#numpy.polynomial.chebyshev.Chebyshev.convert "numpy.polynomial.chebyshev.Chebyshev.convert") — similar instance method.

<https://numpy.org/doc/1.23/reference/generated/numpy.polynomial.chebyshev.Chebyshev.cast.html>

numpy.polynomial.chebyshev.Chebyshev.convert
============================================

method

polynomial.chebyshev.Chebyshev.convert(*domain=None*, *kind=None*, *window=None*)[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/polynomial/_polybase.py#L732-L767)

Convert series to a different kind and/or domain and/or window.

Parameters

**domain** : array_like, optional
The domain of the converted series. If the value is None, the default domain of `kind` is used.

**kind** : class, optional
The polynomial series type class to which the current instance should be converted. If kind is None, then the class of the current instance is used.

**window** : array_like, optional
The window of the converted series. If the value is None, the default window of `kind` is used.

Returns

**new_series** : series
The returned class can be of a different type than the current instance and/or have a different domain and/or a different window.

#### Notes

Conversion between domains and class types can result in numerically ill-defined series.
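As a hedged illustration of `convert` (not taken from the page above), the same function can be re-expressed in the power basis and evaluated identically:

```python
import numpy as np
from numpy.polynomial import Chebyshev, Polynomial

c = Chebyshev([1, 2, 3])          # 1*T_0 + 2*T_1 + 3*T_2
p = c.convert(kind=Polynomial)    # same function, power basis

x = np.linspace(-1, 1, 7)
assert np.allclose(c(x), p(x))    # identical values at every point
```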
<https://numpy.org/doc/1.23/reference/generated/numpy.polynomial.chebyshev.Chebyshev.convert.html>

numpy.polynomial.chebyshev.Chebyshev.copy
=========================================

method

polynomial.chebyshev.Chebyshev.copy()[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/polynomial/_polybase.py#L631-L640)

Return a copy.

Returns

**new_series** : series
Copy of self.

<https://numpy.org/doc/1.23/reference/generated/numpy.polynomial.chebyshev.Chebyshev.copy.html>

numpy.polynomial.chebyshev.Chebyshev.cutdeg
===========================================

method

polynomial.chebyshev.Chebyshev.cutdeg(*deg*)[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/polynomial/_polybase.py#L655-L678)

Truncate series to the given degree.

Reduce the degree of the series to `deg` by discarding the high order terms. If `deg` is greater than the current degree, a copy of the current series is returned. This can be useful in least squares, where the coefficients of the high degree terms may be very small.

New in version 1.5.0.

Parameters

**deg** : non-negative int
The series is reduced to degree `deg` by discarding the high order terms. The value of `deg` must be a non-negative integer.

Returns

**new_series** : series
New instance of series with reduced degree.

<https://numpy.org/doc/1.23/reference/generated/numpy.polynomial.chebyshev.Chebyshev.cutdeg.html>

numpy.polynomial.chebyshev.Chebyshev.degree
===========================================

method

polynomial.chebyshev.Chebyshev.degree()[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/polynomial/_polybase.py#L642-L653)

The degree of the series.

New in version 1.5.0.

Returns

**degree** : int
Degree of the series, one less than the number of coefficients.
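A small hedged sketch combining `cutdeg` and `degree` (an illustration, not taken from the pages above):

```python
from numpy.polynomial import Chebyshev

p = Chebyshev([1, 2, 3, 4])   # degree 3
q = p.cutdeg(1)               # discard the T_2 and T_3 terms

assert q.degree() == 1
assert list(q.coef) == [1, 2]
```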
<https://numpy.org/doc/1.23/reference/generated/numpy.polynomial.chebyshev.Chebyshev.degree.html>

numpy.polynomial.chebyshev.Chebyshev.deriv
==========================================

method

polynomial.chebyshev.Chebyshev.deriv(*m=1*)[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/polynomial/_polybase.py#L831-L851)

Differentiate.

Return a series instance that is the derivative of the current series.

Parameters

**m** : non-negative int
Find the derivative of order `m`.

Returns

**new_series** : series
A new series representing the derivative. The domain is the same as the domain of the differentiated series.

<https://numpy.org/doc/1.23/reference/generated/numpy.polynomial.chebyshev.Chebyshev.deriv.html>

numpy.polynomial.chebyshev.Chebyshev.fromroots
==============================================

classmethod

polynomial.chebyshev.Chebyshev.fromroots(*roots*, *domain=[]*, *window=None*)[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/polynomial/_polybase.py#L988-L1027)

Return a series instance that has the specified roots.

Returns a series representing the product `(x - r[0])*(x - r[1])*...*(x - r[n-1])`, where `r` is a list of roots.

Parameters

**roots** : array_like
List of roots.

**domain** : {[], None, array_like}, optional
Domain for the resulting series. If None, the domain is the interval from the smallest root to the largest. If [], the domain is the class domain. The default is [].

**window** : {None, array_like}, optional
Window for the returned series. If None, the class window is used. The default is None.

Returns

**new_series** : series
Series with the specified roots.
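A hedged usage sketch of `fromroots` (not taken from the page above): the product of the root factors vanishes exactly at the given roots.

```python
import numpy as np
from numpy.polynomial import Chebyshev

# The cubic (x + 1) * x * (x - 1) expressed in the Chebyshev basis.
p = Chebyshev.fromroots([-1, 0, 1])

# Evaluating at the roots gives zero.
assert np.allclose(p(np.array([-1.0, 0.0, 1.0])), 0.0)
```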
<https://numpy.org/doc/1.23/reference/generated/numpy.polynomial.chebyshev.Chebyshev.fromroots.html>

numpy.polynomial.chebyshev.Chebyshev.has_samecoef
=================================================

method

polynomial.chebyshev.Chebyshev.has_samecoef(*other*)[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/polynomial/_polybase.py#L177-L198)

Check if coefficients match.

New in version 1.6.0.

Parameters

**other** : class instance
The other class must have the `coef` attribute.

Returns

**bool** : boolean
True if the coefficients are the same, False otherwise.

<https://numpy.org/doc/1.23/reference/generated/numpy.polynomial.chebyshev.Chebyshev.has_samecoef.html>

numpy.polynomial.chebyshev.Chebyshev.has_samedomain
===================================================

method

polynomial.chebyshev.Chebyshev.has_samedomain(*other*)[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/polynomial/_polybase.py#L200-L216)

Check if domains match.

New in version 1.6.0.

Parameters

**other** : class instance
The other class must have the `domain` attribute.

Returns

**bool** : boolean
True if the domains are the same, False otherwise.

<https://numpy.org/doc/1.23/reference/generated/numpy.polynomial.chebyshev.Chebyshev.has_samedomain.html>

numpy.polynomial.chebyshev.Chebyshev.has_sametype
=================================================

method

polynomial.chebyshev.Chebyshev.has_sametype(*other*)[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/polynomial/_polybase.py#L236-L252)

Check if types match.

New in version 1.7.0.

Parameters

**other** : object
Class instance.

Returns

**bool** : boolean
True if `other` is the same class as self, False otherwise.
<https://numpy.org/doc/1.23/reference/generated/numpy.polynomial.chebyshev.Chebyshev.has_sametype.html>

numpy.polynomial.chebyshev.Chebyshev.has_samewindow
===================================================

method

polynomial.chebyshev.Chebyshev.has_samewindow(*other*)[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/polynomial/_polybase.py#L218-L234)

Check if windows match.

New in version 1.6.0.

Parameters

**other** : class instance
The other class must have the `window` attribute.

Returns

**bool** : boolean
True if the windows are the same, False otherwise.

<https://numpy.org/doc/1.23/reference/generated/numpy.polynomial.chebyshev.Chebyshev.has_samewindow.html>

numpy.polynomial.chebyshev.Chebyshev.identity
=============================================

classmethod

polynomial.chebyshev.Chebyshev.identity(*domain=None*, *window=None*)[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/polynomial/_polybase.py#L1029-L1060)

Identity function.

If `p` is the returned series, then `p(x) == x` for all values of x.

Parameters

**domain** : {None, array_like}, optional
If given, the array must be of the form `[beg, end]`, where `beg` and `end` are the endpoints of the domain. If None is given then the class domain is used. The default is None.

**window** : {None, array_like}, optional
If given, the array must be of the form `[beg, end]`, where `beg` and `end` are the endpoints of the window. If None is given then the class window is used. The default is None.

Returns

**new_series** : series
Series representing the identity.
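A quick check of the `p(x) == x` property of `identity` (a hedged illustration, not taken from the page above):

```python
import numpy as np
from numpy.polynomial import Chebyshev

p = Chebyshev.identity()
x = np.linspace(-1, 1, 5)

assert np.allclose(p(x), x)   # p(x) == x across the default domain
```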
numpy.polynomial.chebyshev.Chebyshev.integ
==========================================

method

polynomial.chebyshev.Chebyshev.integ(*m=1*, *k=[]*, *lbnd=None*)[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/polynomial/_polybase.py#L798-L829)

Integrate.

Return a series instance that is the definite integral of the current series.

Parameters

**m** : non-negative int
The number of integrations to perform.

**k** : array_like
Integration constants. The first constant is applied to the first integration, the second to the second, and so on. The list of values must be less than or equal to `m` in length and any missing values are set to zero.

**lbnd** : scalar
The lower bound of the definite integral.

Returns

**new_series** : series
A new series representing the integral. The domain is the same as the domain of the integrated series.

numpy.polynomial.chebyshev.Chebyshev.interpolate
================================================

method

*classmethod* polynomial.chebyshev.Chebyshev.interpolate(*func*, *deg*, *domain=None*, *args=()*)[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/polynomial/chebyshev.py#L2030-L2070)

Interpolate a function at the Chebyshev points of the first kind.

Returns the series that interpolates `func` at the Chebyshev points of the first kind scaled and shifted to the [`domain`](numpy.polynomial.chebyshev.chebyshev.domain#numpy.polynomial.chebyshev.Chebyshev.domain "numpy.polynomial.chebyshev.Chebyshev.domain"). The resulting series tends to a minmax approximation of `func` when the function is continuous in the domain.

New in version 1.14.0.

Parameters

**func** : function
The function to be interpolated. It must be a function of a single variable of the form `f(x, a, b, c...)`, where `a, b, c...` are extra arguments passed in the `args` parameter.

**deg** : int
Degree of the interpolating polynomial.

**domain** : {None, [beg, end]}, optional
Domain over which `func` is interpolated. The default is None, in which case the domain is [-1, 1].

**args** : tuple, optional
Extra arguments to be used in the function call. Default is no extra arguments.

Returns

**polynomial** : Chebyshev instance
Interpolating Chebyshev instance.

#### Notes

See `numpy.polynomial.chebyshev.chebinterpolate` for more details.

numpy.polynomial.chebyshev.Chebyshev.linspace
=============================================

method

polynomial.chebyshev.Chebyshev.linspace(*n=100*, *domain=None*)[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/polynomial/_polybase.py#L868-L898)

Return x, y values at equally spaced points in domain.

Returns the x, y values at `n` linearly spaced points across the domain. Here y is the value of the polynomial at the points x. By default the domain is the same as that of the series instance. This method is intended mostly as a plotting aid.

New in version 1.5.0.

Parameters

**n** : int, optional
Number of point pairs to return. The default value is 100.

**domain** : {None, array_like}, optional
If not None, the specified domain is used instead of that of the calling instance. It should be of the form `[beg, end]`. The default is None, in which case the class domain is used.

Returns

**x, y** : ndarray
x is equal to linspace(self.domain[0], self.domain[1], n) and y is the series evaluated at each element of x.
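`interpolate` and `linspace` combine naturally: build an approximant, then sample it across the domain. A brief sketch (not from the page itself), using `exp` as the target function:

```python
import numpy as np
from numpy.polynomial import Chebyshev

# Interpolate exp at the Chebyshev points of a degree-10 series on [0, 2].
p = Chebyshev.interpolate(np.exp, 10, domain=[0, 2])

# linspace returns (x, y) pairs across the domain -- handy for plotting.
x, y = p.linspace(50)
max_err = np.max(np.abs(y - np.exp(x)))
print(max_err)  # very small: the series closely approximates exp on [0, 2]
```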
numpy.polynomial.chebyshev.Chebyshev.mapparms
=============================================

method

polynomial.chebyshev.Chebyshev.mapparms()[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/polynomial/_polybase.py#L769-L796)

Return the mapping parameters.

The returned values define a linear map `off + scl*x` that is applied to the input arguments before the series is evaluated. The map depends on the `domain` and `window`; if the current `domain` is equal to the `window` the resulting map is the identity. If the coefficients of the series instance are to be used by themselves outside this class, then the linear function must be substituted for the `x` in the standard representation of the base polynomials.

Returns

**off, scl** : float or complex
The mapping function is defined by `off + scl*x`.

#### Notes

If the current domain is the interval `[l1, r1]` and the window is `[l2, r2]`, then the linear mapping function `L` is defined by the equations:

```
L(l1) = l2
L(r1) = r2
```

numpy.polynomial.chebyshev.Chebyshev.roots
==========================================

method

polynomial.chebyshev.Chebyshev.roots()[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/polynomial/_polybase.py#L853-L866)

Return the roots of the series polynomial.

Compute the roots for the series. Note that the accuracy of the roots decreases the further outside the domain they lie.

Returns

**roots** : ndarray
Array containing the roots of the series.

numpy.polynomial.chebyshev.Chebyshev.trim
=========================================

method

polynomial.chebyshev.Chebyshev.trim(*tol=0*)[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/polynomial/_polybase.py#L680-L701)

Remove trailing coefficients

Remove trailing coefficients until a coefficient is reached whose absolute value is greater than `tol` or the beginning of the series is reached. If all the coefficients would be removed the series is set to `[0]`. A new series instance is returned with the new coefficients. The current instance remains unchanged.

Parameters

**tol** : non-negative number
All trailing coefficients less than `tol` will be removed.

Returns

**new_series** : series
New instance of series with trimmed coefficients.

numpy.polynomial.chebyshev.Chebyshev.truncate
=============================================

method

polynomial.chebyshev.Chebyshev.truncate(*size*)[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/polynomial/_polybase.py#L703-L730)

Truncate series to length `size`.

Reduce the series to length `size` by discarding the high degree terms. The value of `size` must be a positive integer. This can be useful in least squares where the coefficients of the high degree terms may be very small.

Parameters

**size** : positive int
The series is reduced to length `size` by discarding the high degree terms. The value of `size` must be a positive integer.

Returns

**new_series** : series
New instance of series with truncated coefficients.
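The `L(l1) = l2`, `L(r1) = r2` equations above pin down `off` and `scl` completely. A short sketch (not from the page itself):

```python
from numpy.polynomial import Chebyshev

p = Chebyshev([1, 2, 3], domain=[0, 4])   # window is the default [-1, 1]
off, scl = p.mapparms()
# The map off + scl*x sends the domain endpoints onto the window endpoints.
print(off + scl * 0, off + scl * 4)   # -1.0 1.0
```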
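`trim` removes trailing coefficients by magnitude, while `truncate` cuts to a fixed length regardless of magnitude; here both happen to give the same result. A minimal sketch (not from the page itself):

```python
from numpy.polynomial import Chebyshev

p = Chebyshev([1, 2, 0, 1e-10])
print(p.trim(tol=1e-6).coef)   # [1. 2.] -- trailing near-zero terms dropped
print(p.truncate(2).coef)      # [1. 2.] -- series cut to length 2
print(p.coef)                  # original instance is unchanged
```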
numpy.polynomial.hermite.Hermite.domain
=======================================

attribute

polynomial.hermite.Hermite.domain = array([-1, 1])

numpy.polynomial.hermite.Hermite.__call__
=========================================

method

polynomial.hermite.Hermite.__call__(*arg*)[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/polynomial/_polybase.py#L480-L483)

Call self as a function.

numpy.polynomial.hermite.Hermite.basis
======================================

method

*classmethod* polynomial.hermite.Hermite.basis(*deg*, *domain=None*, *window=None*)[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/polynomial/_polybase.py#L1062-L1099)

Series basis polynomial of degree `deg`.

Returns the series representing the basis polynomial of degree `deg`.

New in version 1.7.0.

Parameters

**deg** : int
Degree of the basis polynomial for the series. Must be >= 0.

**domain** : {None, array_like}, optional
If given, the array must be of the form `[beg, end]`, where `beg` and `end` are the endpoints of the domain. If None is given then the class domain is used. The default is None.

**window** : {None, array_like}, optional
If given, the resulting array must be of the form `[beg, end]`, where `beg` and `end` are the endpoints of the window. If None is given then the class window is used. The default is None.

Returns

**new_series** : series
A series with the coefficient of the `deg` term set to one and all others zero.
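A short sketch of `basis` (not from the page itself): the degree-2 basis series has a single nonzero coefficient, and evaluating it gives the physicists' Hermite polynomial H_2(x) = 4x² − 2.

```python
from numpy.polynomial import Hermite

p = Hermite.basis(2)     # coefficient of the deg term is 1, all others 0
print(p.coef)            # [0. 0. 1.]
print(p(1.0))            # H_2(1) = 4*1**2 - 2 = 2.0
```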
numpy.polynomial.hermite.Hermite.cast
=====================================

method

*classmethod* polynomial.hermite.Hermite.cast(*series*, *domain=None*, *window=None*)[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/polynomial/_polybase.py#L1101-L1141)

Convert series to series of this class.

The `series` is expected to be an instance of some polynomial series of one of the types supported by the numpy.polynomial module, but could be some other class that supports the convert method.

New in version 1.7.0.

Parameters

**series** : series
The series instance to be converted.

**domain** : {None, array_like}, optional
If given, the array must be of the form `[beg, end]`, where `beg` and `end` are the endpoints of the domain. If None is given then the class domain is used. The default is None.

**window** : {None, array_like}, optional
If given, the resulting array must be of the form `[beg, end]`, where `beg` and `end` are the endpoints of the window. If None is given then the class window is used. The default is None.

Returns

**new_series** : series
A series of the same kind as the calling class and equal to `series` when evaluated.

See also

[`convert`](numpy.polynomial.hermite.hermite.convert#numpy.polynomial.hermite.Hermite.convert "numpy.polynomial.hermite.Hermite.convert")
similar instance method

numpy.polynomial.hermite.Hermite.convert
========================================

method

polynomial.hermite.Hermite.convert(*domain=None*, *kind=None*, *window=None*)[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/polynomial/_polybase.py#L732-L767)

Convert series to a different kind and/or domain and/or window.

Parameters

**domain** : array_like, optional
The domain of the converted series. If the value is None, the default domain of `kind` is used.

**kind** : class, optional
The polynomial series type class to which the current instance should be converted. If kind is None, then the class of the current instance is used.

**window** : array_like, optional
The window of the converted series. If the value is None, the default window of `kind` is used.

Returns

**new_series** : series
The returned class can be of different type than the current instance and/or have a different domain and/or different window.

#### Notes

Conversion between domains and class types can result in numerically ill defined series.

numpy.polynomial.hermite.Hermite.copy
=====================================

method

polynomial.hermite.Hermite.copy()[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/polynomial/_polybase.py#L631-L640)

Return a copy.

Returns

**new_series** : series
Copy of self.

numpy.polynomial.hermite.Hermite.cutdeg
=======================================

method

polynomial.hermite.Hermite.cutdeg(*deg*)[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/polynomial/_polybase.py#L655-L678)

Truncate series to the given degree.

Reduce the degree of the series to `deg` by discarding the high order terms. If `deg` is greater than the current degree a copy of the current series is returned. This can be useful in least squares where the coefficients of the high degree terms may be very small.

New in version 1.5.0.

Parameters

**deg** : non-negative int
The series is reduced to degree `deg` by discarding the high order terms. The value of `deg` must be a non-negative integer.

Returns

**new_series** : series
New instance of series with reduced degree.
numpy.polynomial.hermite.Hermite.degree
=======================================

method

polynomial.hermite.Hermite.degree()[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/polynomial/_polybase.py#L642-L653)

The degree of the series.

New in version 1.5.0.

Returns

**degree** : int
Degree of the series, one less than the number of coefficients.

numpy.polynomial.hermite.Hermite.deriv
======================================

method

polynomial.hermite.Hermite.deriv(*m=1*)[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/polynomial/_polybase.py#L831-L851)

Differentiate.

Return a series instance that is the derivative of the current series.

Parameters

**m** : non-negative int
Find the derivative of order `m`.

Returns

**new_series** : series
A new series representing the derivative. The domain is the same as the domain of the differentiated series.

numpy.polynomial.hermite.Hermite.fit
====================================

method

*classmethod* polynomial.hermite.Hermite.fit(*x*, *y*, *deg*, *domain=None*, *rcond=None*, *full=False*, *w=None*, *window=None*)[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/polynomial/_polybase.py#L900-L986)

Least squares fit to data.

Return a series instance that is the least squares fit to the data `y` sampled at `x`. The domain of the returned instance can be specified and this will often result in a superior fit with less chance of ill conditioning.

Parameters

**x** : array_like, shape (M,)
x-coordinates of the M sample points `(x[i], y[i])`.

**y** : array_like, shape (M,)
y-coordinates of the M sample points `(x[i], y[i])`.

**deg** : int or 1-D array_like
Degree(s) of the fitting polynomials. If `deg` is a single integer all terms up to and including the `deg`'th term are included in the fit. For NumPy versions >= 1.11.0 a list of integers specifying the degrees of the terms to include may be used instead.

**domain** : {None, [beg, end], []}, optional
Domain to use for the returned series. If `None`, then a minimal domain that covers the points `x` is chosen. If `[]` the class domain is used. The default value was the class domain in NumPy 1.4 and `None` in later versions. The `[]` option was added in numpy 1.5.0.

**rcond** : float, optional
Relative condition number of the fit. Singular values smaller than this relative to the largest singular value will be ignored. The default value is len(x)*eps, where eps is the relative precision of the float type, about 2e-16 in most cases.

**full** : bool, optional
Switch determining nature of return value. When it is False (the default) just the coefficients are returned, when True diagnostic information from the singular value decomposition is also returned.

**w** : array_like, shape (M,), optional
Weights. If not None, the weight `w[i]` applies to the unsquared residual `y[i] - y_hat[i]` at `x[i]`. Ideally the weights are chosen so that the errors of the products `w[i]*y[i]` all have the same variance. When using inverse-variance weighting, use `w[i] = 1/sigma(y[i])`. The default value is None.

New in version 1.5.0.

**window** : {[beg, end]}, optional
Window to use for the returned series. The default value is the default class domain.

New in version 1.6.0.

Returns

**new_series** : series
A series that represents the least squares fit to the data and has the domain and window specified in the call. If the coefficients for the unscaled and unshifted basis polynomials are of interest, do `new_series.convert().coef`.

**[resid, rank, sv, rcond]** : list
These values are only returned if `full == True`

* resid – sum of squared residuals of the least squares fit
* rank – the numerical rank of the scaled Vandermonde matrix
* sv – singular values of the scaled Vandermonde matrix
* rcond – value of `rcond`.

For more details, see [`linalg.lstsq`](numpy.linalg.lstsq#numpy.linalg.lstsq "numpy.linalg.lstsq").

numpy.polynomial.hermite.Hermite.fromroots
==========================================

method

*classmethod* polynomial.hermite.Hermite.fromroots(*roots*, *domain=[]*, *window=None*)[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/polynomial/_polybase.py#L988-L1027)

Return series instance that has the specified roots.

Returns a series representing the product `(x - r[0])*(x - r[1])*...*(x - r[n-1])`, where `r` is a list of roots.

Parameters

**roots** : array_like
List of roots.

**domain** : {[], None, array_like}, optional
Domain for the resulting series. If None the domain is the interval from the smallest root to the largest. If [] the domain is the class domain. The default is [].

**window** : {None, array_like}, optional
Window for the returned series. If None the class window is used. The default is None.

Returns

**new_series** : series
Series with the specified roots.

numpy.polynomial.hermite.Hermite.has_samecoef
=============================================

method

polynomial.hermite.Hermite.has_samecoef(*other*)[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/polynomial/_polybase.py#L177-L198)

Check if coefficients match.

New in version 1.6.0.

Parameters

**other** : class instance
The other class must have the `coef` attribute.
Returns

**bool** : boolean
True if the coefficients are the same, False otherwise.

numpy.polynomial.hermite.Hermite.has_samedomain
===============================================

method

polynomial.hermite.Hermite.has_samedomain(*other*)[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/polynomial/_polybase.py#L200-L216)

Check if domains match.

New in version 1.6.0.

Parameters

**other** : class instance
The other class must have the `domain` attribute.

Returns

**bool** : boolean
True if the domains are the same, False otherwise.

numpy.polynomial.hermite.Hermite.has_sametype
=============================================

method

polynomial.hermite.Hermite.has_sametype(*other*)[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/polynomial/_polybase.py#L236-L252)

Check if types match.

New in version 1.7.0.

Parameters

**other** : object
Class instance.

Returns

**bool** : boolean
True if `other` is the same class as self, False otherwise.

numpy.polynomial.hermite.Hermite.has_samewindow
===============================================

method

polynomial.hermite.Hermite.has_samewindow(*other*)[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/polynomial/_polybase.py#L218-L234)

Check if windows match.

New in version 1.6.0.

Parameters

**other** : class instance
The other class must have the `window` attribute.

Returns

**bool** : boolean
True if the windows are the same, False otherwise.
numpy.polynomial.hermite.Hermite.identity
=========================================

method

*classmethod* polynomial.hermite.Hermite.identity(*domain=None*, *window=None*)[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/polynomial/_polybase.py#L1029-L1060)

Identity function.

If `p` is the returned series, then `p(x) == x` for all values of x.

Parameters

**domain** : {None, array_like}, optional
If given, the array must be of the form `[beg, end]`, where `beg` and `end` are the endpoints of the domain. If None is given then the class domain is used. The default is None.

**window** : {None, array_like}, optional
If given, the resulting array must be of the form `[beg, end]`, where `beg` and `end` are the endpoints of the window. If None is given then the class window is used. The default is None.

Returns

**new_series** : series
Series representing the identity.

numpy.polynomial.hermite.Hermite.integ
======================================

method

polynomial.hermite.Hermite.integ(*m=1*, *k=[]*, *lbnd=None*)[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/polynomial/_polybase.py#L798-L829)

Integrate.

Return a series instance that is the definite integral of the current series.

Parameters

**m** : non-negative int
The number of integrations to perform.

**k** : array_like
Integration constants. The first constant is applied to the first integration, the second to the second, and so on. The list of values must be less than or equal to `m` in length and any missing values are set to zero.

**lbnd** : scalar
The lower bound of the definite integral.

Returns

**new_series** : series
A new series representing the integral. The domain is the same as the domain of the integrated series.

numpy.polynomial.hermite.Hermite.linspace
=========================================

method

polynomial.hermite.Hermite.linspace(*n=100*, *domain=None*)[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/polynomial/_polybase.py#L868-L898)

Return x, y values at equally spaced points in domain.

Returns the x, y values at `n` linearly spaced points across the domain. Here y is the value of the polynomial at the points x. By default the domain is the same as that of the series instance. This method is intended mostly as a plotting aid.

New in version 1.5.0.

Parameters

**n** : int, optional
Number of point pairs to return. The default value is 100.

**domain** : {None, array_like}, optional
If not None, the specified domain is used instead of that of the calling instance. It should be of the form `[beg, end]`. The default is None, in which case the class domain is used.

Returns

**x, y** : ndarray
x is equal to linspace(self.domain[0], self.domain[1], n) and y is the series evaluated at each element of x.

numpy.polynomial.hermite.Hermite.mapparms
=========================================

method

polynomial.hermite.Hermite.mapparms()[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/polynomial/_polybase.py#L769-L796)

Return the mapping parameters.

The returned values define a linear map `off + scl*x` that is applied to the input arguments before the series is evaluated. The map depends on the `domain` and `window`; if the current `domain` is equal to the `window` the resulting map is the identity.
If the coefficients of the series instance are to be used by themselves outside this class, then the linear function must be substituted for the `x` in the standard representation of the base polynomials.

Returns

**off, scl** : float or complex
The mapping function is defined by `off + scl*x`.

#### Notes

If the current domain is the interval `[l1, r1]` and the window is `[l2, r2]`, then the linear mapping function `L` is defined by the equations:

```
L(l1) = l2
L(r1) = r2
```

numpy.polynomial.hermite.Hermite.roots
======================================

method

polynomial.hermite.Hermite.roots()[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/polynomial/_polybase.py#L853-L866)

Return the roots of the series polynomial.

Compute the roots for the series. Note that the accuracy of the roots decreases the further outside the domain they lie.

Returns

**roots** : ndarray
Array containing the roots of the series.

numpy.polynomial.hermite.Hermite.trim
=====================================

method

polynomial.hermite.Hermite.trim(*tol=0*)[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/polynomial/_polybase.py#L680-L701)

Remove trailing coefficients

Remove trailing coefficients until a coefficient is reached whose absolute value is greater than `tol` or the beginning of the series is reached. If all the coefficients would be removed the series is set to `[0]`. A new series instance is returned with the new coefficients. The current instance remains unchanged.

Parameters

**tol** : non-negative number
All trailing coefficients less than `tol` will be removed.

Returns

**new_series** : series
New instance of series with trimmed coefficients.
numpy.polynomial.hermite.Hermite.truncate
=========================================

method

polynomial.hermite.Hermite.truncate(*size*)[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/polynomial/_polybase.py#L703-L730)

Truncate series to length `size`.

Reduce the series to length `size` by discarding the high degree terms. The value of `size` must be a positive integer. This can be useful in least squares where the coefficients of the high degree terms may be very small.

Parameters

**size** : positive int
The series is reduced to length `size` by discarding the high degree terms. The value of `size` must be a positive integer.

Returns

**new_series** : series
New instance of series with truncated coefficients.

numpy.polynomial.hermite_e.HermiteE.domain
==========================================

attribute

polynomial.hermite_e.HermiteE.domain = array([-1, 1])

numpy.polynomial.hermite_e.HermiteE.__call__
============================================

method

polynomial.hermite_e.HermiteE.__call__(*arg*)[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/polynomial/_polybase.py#L480-L483)

Call self as a function.
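`__call__` means a HermiteE instance is used like an ordinary function. A minimal sketch (not from the page itself), using the probabilists' Hermite polynomial He_2(x) = x² − 1:

```python
from numpy.polynomial import HermiteE

p = HermiteE([0, 0, 1])   # the He_2 basis term
print(p(2.0))             # He_2(2) = 2**2 - 1 = 3.0
print(p(0.0))             # He_2(0) = -1.0
```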
numpy.polynomial.hermite_e.HermiteE.basis
=========================================

method

*classmethod* polynomial.hermite_e.HermiteE.basis(*deg*, *domain=None*, *window=None*)[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/polynomial/_polybase.py#L1062-L1099)

Series basis polynomial of degree `deg`.

Returns the series representing the basis polynomial of degree `deg`.

New in version 1.7.0.

Parameters

**deg** : int
Degree of the basis polynomial for the series. Must be >= 0.

**domain** : {None, array_like}, optional
If given, the array must be of the form `[beg, end]`, where `beg` and `end` are the endpoints of the domain. If None is given then the class domain is used. The default is None.

**window** : {None, array_like}, optional
If given, the resulting array must be of the form `[beg, end]`, where `beg` and `end` are the endpoints of the window. If None is given then the class window is used. The default is None.

Returns

**new_series** : series
A series with the coefficient of the `deg` term set to one and all others zero.

numpy.polynomial.hermite_e.HermiteE.cast
========================================

method

*classmethod* polynomial.hermite_e.HermiteE.cast(*series*, *domain=None*, *window=None*)[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/polynomial/_polybase.py#L1101-L1141)

Convert series to series of this class.

The `series` is expected to be an instance of some polynomial series of one of the types supported by the numpy.polynomial module, but could be some other class that supports the convert method.

New in version 1.7.0.

Parameters

**series** : series
The series instance to be converted.

**domain** : {None, array_like}, optional
If given, the array must be of the form `[beg, end]`, where `beg` and `end` are the endpoints of the domain. If None is given then the class domain is used. The default is None.

**window** : {None, array_like}, optional
If given, the resulting array must be of the form `[beg, end]`, where `beg` and `end` are the endpoints of the window. If None is given then the class window is used. The default is None.

Returns

**new_series** : series
A series of the same kind as the calling class and equal to `series` when evaluated.

See also

[`convert`](numpy.polynomial.hermite_e.hermitee.convert#numpy.polynomial.hermite_e.HermiteE.convert "numpy.polynomial.hermite_e.HermiteE.convert")
similar instance method

numpy.polynomial.hermite_e.HermiteE.convert
===========================================

method

polynomial.hermite_e.HermiteE.convert(*domain=None*, *kind=None*, *window=None*)[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/polynomial/_polybase.py#L732-L767)

Convert series to a different kind and/or domain and/or window.

Parameters

**domain** : array_like, optional
The domain of the converted series. If the value is None, the default domain of `kind` is used.

**kind** : class, optional
The polynomial series type class to which the current instance should be converted. If kind is None, then the class of the current instance is used.

**window** : array_like, optional
The window of the converted series. If the value is None, the default window of `kind` is used.

Returns

**new_series** : series
The returned class can be of different type than the current instance and/or have a different domain and/or different window.

#### Notes

Conversion between domains and class types can result in numerically ill defined series.
<https://numpy.org/doc/1.23/reference/generated/numpy.polynomial.hermite_e.HermiteE.convert.html>

numpy.polynomial.hermite_e.HermiteE.copy
========================================

method

polynomial.hermite_e.HermiteE.copy()[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/polynomial/_polybase.py#L631-L640)

Return a copy.

Returns

**new_series**series

Copy of self.

<https://numpy.org/doc/1.23/reference/generated/numpy.polynomial.hermite_e.HermiteE.copy.html>

numpy.polynomial.hermite_e.HermiteE.cutdeg
==========================================

method

polynomial.hermite_e.HermiteE.cutdeg(*deg*)[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/polynomial/_polybase.py#L655-L678)

Truncate series to the given degree.

Reduce the degree of the series to `deg` by discarding the high order terms. If `deg` is greater than the current degree a copy of the current series is returned. This can be useful in least squares where the coefficients of the high degree terms may be very small.

New in version 1.5.0.

Parameters

**deg**non-negative int

The series is reduced to degree `deg` by discarding the high order terms. The value of `deg` must be a non-negative integer.

Returns

**new_series**series

New instance of series with reduced degree.

<https://numpy.org/doc/1.23/reference/generated/numpy.polynomial.hermite_e.HermiteE.cutdeg.html>

numpy.polynomial.hermite_e.HermiteE.degree
==========================================

method

polynomial.hermite_e.HermiteE.degree()[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/polynomial/_polybase.py#L642-L653)

The degree of the series.

New in version 1.5.0.

Returns

**degree**int

Degree of the series, one less than the number of coefficients.
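A quick sketch of `degree` and `cutdeg` together (the coefficients are illustrative, not from the docs):

```python
from numpy.polynomial import HermiteE

s = HermiteE([1, 2, 3, 4])   # four coefficients, so degree 3
print(s.degree())            # 3

low = s.cutdeg(1)            # discard all terms above degree 1
print(low.coef)              # [1. 2.]

same = s.cutdeg(10)          # deg above the current degree: just a copy
print(same.coef)             # [1. 2. 3. 4.]
```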
<https://numpy.org/doc/1.23/reference/generated/numpy.polynomial.hermite_e.HermiteE.degree.html>

numpy.polynomial.hermite_e.HermiteE.deriv
=========================================

method

polynomial.hermite_e.HermiteE.deriv(*m=1*)[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/polynomial/_polybase.py#L831-L851)

Differentiate.

Return a series instance that is the derivative of the current series.

Parameters

**m**non-negative int

Find the derivative of order `m`.

Returns

**new_series**series

A new series representing the derivative. The domain is the same as the domain of the differentiated series.

<https://numpy.org/doc/1.23/reference/generated/numpy.polynomial.hermite_e.HermiteE.deriv.html>

numpy.polynomial.hermite_e.HermiteE.fit
=======================================

method

*classmethod* polynomial.hermite_e.HermiteE.fit(*x*, *y*, *deg*, *domain=None*, *rcond=None*, *full=False*, *w=None*, *window=None*)[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/polynomial/_polybase.py#L900-L986)

Least squares fit to data.

Return a series instance that is the least squares fit to the data `y` sampled at `x`. The domain of the returned instance can be specified and this will often result in a superior fit with less chance of ill conditioning.

Parameters

**x**array_like, shape (M,)

x-coordinates of the M sample points `(x[i], y[i])`.

**y**array_like, shape (M,)

y-coordinates of the M sample points `(x[i], y[i])`.

**deg**int or 1-D array_like

Degree(s) of the fitting polynomials. If `deg` is a single integer all terms up to and including the `deg`’th term are included in the fit. For NumPy versions >= 1.11.0 a list of integers specifying the degrees of the terms to include may be used instead.

**domain**{None, [beg, end], []}, optional

Domain to use for the returned series. If `None`, then a minimal domain that covers the points `x` is chosen. If `[]` the class domain is used.
The default value was the class domain in NumPy 1.4 and `None` in later versions. The `[]` option was added in numpy 1.5.0.

**rcond**float, optional

Relative condition number of the fit. Singular values smaller than this relative to the largest singular value will be ignored. The default value is `len(x)*eps`, where `eps` is the relative precision of the float type, about 2e-16 in most cases.

**full**bool, optional

Switch determining the nature of the return value. When it is False (the default) just the coefficients are returned; when True diagnostic information from the singular value decomposition is also returned.

**w**array_like, shape (M,), optional

Weights. If not None, the weight `w[i]` applies to the unsquared residual `y[i] - y_hat[i]` at `x[i]`. Ideally the weights are chosen so that the errors of the products `w[i]*y[i]` all have the same variance. When using inverse-variance weighting, use `w[i] = 1/sigma(y[i])`. The default value is None.

New in version 1.5.0.

**window**{[beg, end]}, optional

Window to use for the returned series. The default value is the default class domain.

New in version 1.6.0.

Returns

**new_series**series

A series that represents the least squares fit to the data and has the domain and window specified in the call. If the coefficients for the unscaled and unshifted basis polynomials are of interest, do `new_series.convert().coef`.

**[resid, rank, sv, rcond]**list

These values are only returned if `full == True`

* resid – sum of squared residuals of the least squares fit
* rank – the numerical rank of the scaled Vandermonde matrix
* sv – singular values of the scaled Vandermonde matrix
* rcond – value of `rcond`.

For more details, see [`linalg.lstsq`](numpy.linalg.lstsq#numpy.linalg.lstsq "numpy.linalg.lstsq").
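A minimal sketch of `fit`, using synthetic data invented for this example (samples of x³ − x plus a little noise): the fit is carried out in a scaled domain, and `convert()` recovers the coefficients of the unscaled, unshifted HermiteE basis, as the Returns section notes.

```python
import numpy as np
from numpy.polynomial import HermiteE

# Illustrative data, not from the docs
rng = np.random.default_rng(42)
x = np.linspace(-2, 2, 60)
y = x**3 - x + rng.normal(scale=1e-3, size=x.size)

fit = HermiteE.fit(x, y, deg=3)

# x**3 - x = 2*He_1(x) + He_3(x), so the unscaled coefficients
# should be close to [0, 2, 0, 1]
print(np.round(fit.convert().coef, 2))
```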
<https://numpy.org/doc/1.23/reference/generated/numpy.polynomial.hermite_e.HermiteE.fit.html>

numpy.polynomial.hermite_e.HermiteE.fromroots
=============================================

method

*classmethod* polynomial.hermite_e.HermiteE.fromroots(*roots*, *domain=[]*, *window=None*)[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/polynomial/_polybase.py#L988-L1027)

Return series instance that has the specified roots.

Returns a series representing the product `(x - r[0])*(x - r[1])*...*(x - r[n-1])`, where `r` is a list of roots.

Parameters

**roots**array_like

List of roots.

**domain**{[], None, array_like}, optional

Domain for the resulting series. If None the domain is the interval from the smallest root to the largest. If [] the domain is the class domain. The default is [].

**window**{None, array_like}, optional

Window for the returned series. If None the class window is used. The default is None.

Returns

**new_series**series

Series with the specified roots.

<https://numpy.org/doc/1.23/reference/generated/numpy.polynomial.hermite_e.HermiteE.fromroots.html>

numpy.polynomial.hermite_e.HermiteE.has_samecoef
================================================

method

polynomial.hermite_e.HermiteE.has_samecoef(*other*)[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/polynomial/_polybase.py#L177-L198)

Check if coefficients match.

New in version 1.6.0.

Parameters

**other**class instance

The other class must have the `coef` attribute.

Returns

**bool**boolean

True if the coefficients are the same, False otherwise.
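The `fromroots` constructor pairs naturally with the `roots` method; a short sketch with roots chosen for this example:

```python
import numpy as np
from numpy.polynomial import HermiteE

# A series with roots at -1, 0 and 1, i.e. (x + 1)*x*(x - 1) = x**3 - x
p = HermiteE.fromroots([-1, 0, 1])

print(p(2.0))              # (2 + 1) * 2 * (2 - 1), i.e. 6
print(np.sort(p.roots()))  # recovers the roots -1, 0, 1
```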
<https://numpy.org/doc/1.23/reference/generated/numpy.polynomial.hermite_e.HermiteE.has_samecoef.html>

numpy.polynomial.hermite_e.HermiteE.has_samedomain
==================================================

method

polynomial.hermite_e.HermiteE.has_samedomain(*other*)[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/polynomial/_polybase.py#L200-L216)

Check if domains match.

New in version 1.6.0.

Parameters

**other**class instance

The other class must have the `domain` attribute.

Returns

**bool**boolean

True if the domains are the same, False otherwise.

<https://numpy.org/doc/1.23/reference/generated/numpy.polynomial.hermite_e.HermiteE.has_samedomain.html>

numpy.polynomial.hermite_e.HermiteE.has_sametype
================================================

method

polynomial.hermite_e.HermiteE.has_sametype(*other*)[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/polynomial/_polybase.py#L236-L252)

Check if types match.

New in version 1.7.0.

Parameters

**other**object

Class instance.

Returns

**bool**boolean

True if `other` is the same class as `self`, False otherwise.

<https://numpy.org/doc/1.23/reference/generated/numpy.polynomial.hermite_e.HermiteE.has_sametype.html>

numpy.polynomial.hermite_e.HermiteE.has_samewindow
==================================================

method

polynomial.hermite_e.HermiteE.has_samewindow(*other*)[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/polynomial/_polybase.py#L218-L234)

Check if windows match.

New in version 1.6.0.

Parameters

**other**class instance

The other class must have the `window` attribute.

Returns

**bool**boolean

True if the windows are the same, False otherwise.
<https://numpy.org/doc/1.23/reference/generated/numpy.polynomial.hermite_e.HermiteE.has_samewindow.html>

numpy.polynomial.hermite_e.HermiteE.identity
============================================

method

*classmethod* polynomial.hermite_e.HermiteE.identity(*domain=None*, *window=None*)[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/polynomial/_polybase.py#L1029-L1060)

Identity function.

If `p` is the returned series, then `p(x) == x` for all values of x.

Parameters

**domain**{None, array_like}, optional

If given, the array must be of the form `[beg, end]`, where `beg` and `end` are the endpoints of the domain. If None is given then the class domain is used. The default is None.

**window**{None, array_like}, optional

If given, the resulting array must be of the form `[beg, end]`, where `beg` and `end` are the endpoints of the window. If None is given then the class window is used. The default is None.

Returns

**new_series**series

A series representing the identity.

<https://numpy.org/doc/1.23/reference/generated/numpy.polynomial.hermite_e.HermiteE.identity.html>

numpy.polynomial.hermite_e.HermiteE.integ
=========================================

method

polynomial.hermite_e.HermiteE.integ(*m=1*, *k=[]*, *lbnd=None*)[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/polynomial/_polybase.py#L798-L829)

Integrate.

Return a series instance that is the definite integral of the current series.

Parameters

**m**non-negative int

The number of integrations to perform.

**k**array_like

Integration constants. The first constant is applied to the first integration, the second to the second, and so on. The list of values must be less than or equal to `m` in length and any missing values are set to zero.

**lbnd**Scalar

The lower bound of the definite integral.

Returns

**new_series**series

A new series representing the integral. The domain is the same as the domain of the integrated series.
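`integ` and `deriv` are inverses of one another up to the integration constant; a small sketch with a series chosen for this example:

```python
from numpy.polynomial import HermiteE

s = HermiteE([0, 1])      # the series "x"

# Definite integral with lower bound 0: x**2 / 2
anti = s.integ(lbnd=0)
print(anti(2.0))          # 2**2 / 2 = 2.0

# Differentiating the antiderivative recovers the original series
print(anti.deriv()(2.0))  # 2.0 again
```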
<https://numpy.org/doc/1.23/reference/generated/numpy.polynomial.hermite_e.HermiteE.integ.html>

numpy.polynomial.hermite_e.HermiteE.linspace
============================================

method

polynomial.hermite_e.HermiteE.linspace(*n=100*, *domain=None*)[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/polynomial/_polybase.py#L868-L898)

Return x, y values at equally spaced points in domain.

Returns the x, y values at `n` linearly spaced points across the domain. Here y is the value of the polynomial at the points x. By default the domain is the same as that of the series instance. This method is intended mostly as a plotting aid.

New in version 1.5.0.

Parameters

**n**int, optional

Number of point pairs to return. The default value is 100.

**domain**{None, array_like}, optional

If not None, the specified domain is used instead of that of the calling instance. It should be of the form `[beg, end]`. The default is None, in which case the class domain is used.

Returns

**x, y**ndarray

x is equal to linspace(self.domain[0], self.domain[1], n) and y is the series evaluated at each element of x.

<https://numpy.org/doc/1.23/reference/generated/numpy.polynomial.hermite_e.HermiteE.linspace.html>

numpy.polynomial.hermite_e.HermiteE.mapparms
============================================

method

polynomial.hermite_e.HermiteE.mapparms()[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/polynomial/_polybase.py#L769-L796)

Return the mapping parameters.

The returned values define a linear map `off + scl*x` that is applied to the input arguments before the series is evaluated. The map depends on the `domain` and `window`; if the current `domain` is equal to the `window` the resulting map is the identity.
If the coefficients of the series instance are to be used by themselves outside this class, then the linear function must be substituted for the `x` in the standard representation of the base polynomials.

Returns

**off, scl**float or complex

The mapping function is defined by `off + scl*x`.

#### Notes

If the current domain is the interval `[l1, r1]` and the window is `[l2, r2]`, then the linear mapping function `L` is defined by the equations:

```
L(l1) = l2
L(r1) = r2
```

<https://numpy.org/doc/1.23/reference/generated/numpy.polynomial.hermite_e.HermiteE.mapparms.html>

numpy.polynomial.hermite_e.HermiteE.roots
=========================================

method

polynomial.hermite_e.HermiteE.roots()[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/polynomial/_polybase.py#L853-L866)

Return the roots of the series polynomial.

Compute the roots for the series. Note that the accuracy of the roots decreases the further outside the domain they lie.

Returns

**roots**ndarray

Array containing the roots of the series.

<https://numpy.org/doc/1.23/reference/generated/numpy.polynomial.hermite_e.HermiteE.roots.html>

numpy.polynomial.hermite_e.HermiteE.trim
========================================

method

polynomial.hermite_e.HermiteE.trim(*tol=0*)[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/polynomial/_polybase.py#L680-L701)

Remove trailing coefficients.

Remove trailing coefficients until a coefficient is reached whose absolute value is greater than `tol` or the beginning of the series is reached. If all the coefficients would be removed the series is set to `[0]`. A new series instance is returned with the new coefficients. The current instance remains unchanged.

Parameters

**tol**non-negative number

All trailing coefficients less than `tol` will be removed.
Returns

**new_series**series

New instance of series with trimmed coefficients.

<https://numpy.org/doc/1.23/reference/generated/numpy.polynomial.hermite_e.HermiteE.trim.html>

numpy.polynomial.hermite_e.HermiteE.truncate
============================================

method

polynomial.hermite_e.HermiteE.truncate(*size*)[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/polynomial/_polybase.py#L703-L730)

Truncate series to length `size`.

Reduce the series to length `size` by discarding the high degree terms. This can be useful in least squares where the coefficients of the high degree terms may be very small.

Parameters

**size**positive int

The series is reduced to length `size` by discarding the high degree terms. The value of `size` must be a positive integer.

Returns

**new_series**series

New instance of series with truncated coefficients.

<https://numpy.org/doc/1.23/reference/generated/numpy.polynomial.hermite_e.HermiteE.truncate.html>

numpy.polynomial.laguerre.Laguerre.domain
=========================================

attribute

polynomial.laguerre.Laguerre.domain *= array([0, 1])*

<https://numpy.org/doc/1.23/reference/generated/numpy.polynomial.laguerre.Laguerre.domain.html>

numpy.polynomial.laguerre.Laguerre.__call__
===========================================

method

polynomial.laguerre.Laguerre.__call__(*arg*)[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/polynomial/_polybase.py#L480-L483)

Call self as a function.
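The `trim` and `truncate` methods above, and evaluation via `__call__`, can be sketched together (the coefficients are illustrative, not from the docs):

```python
from numpy.polynomial import Laguerre

s = Laguerre([1.0, 2.0, 3.0, 1e-12])

print(s.trim(tol=1e-9).coef)  # trailing near-zero term dropped: [1. 2. 3.]
print(s.truncate(2).coef)     # only the first two coefficients kept: [1. 2.]

# Instances are evaluated by calling them (Laguerre.__call__)
print(s.truncate(2)(0.0))     # L_0(0) + 2*L_1(0) = 1 + 2 = 3.0
```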
<https://numpy.org/doc/1.23/reference/generated/numpy.polynomial.laguerre.Laguerre.__call__.html>

numpy.polynomial.laguerre.Laguerre.basis
========================================

method

*classmethod* polynomial.laguerre.Laguerre.basis(*deg*, *domain=None*, *window=None*)[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/polynomial/_polybase.py#L1062-L1099)

Series basis polynomial of degree `deg`.

Returns the series representing the basis polynomial of degree `deg`.

New in version 1.7.0.

Parameters

**deg**int

Degree of the basis polynomial for the series. Must be >= 0.

**domain**{None, array_like}, optional

If given, the array must be of the form `[beg, end]`, where `beg` and `end` are the endpoints of the domain. If None is given then the class domain is used. The default is None.

**window**{None, array_like}, optional

If given, the resulting array must be of the form `[beg, end]`, where `beg` and `end` are the endpoints of the window. If None is given then the class window is used. The default is None.

Returns

**new_series**series

A series with the coefficient of the `deg` term set to one and all others zero.

<https://numpy.org/doc/1.23/reference/generated/numpy.polynomial.laguerre.Laguerre.basis.html>

numpy.polynomial.laguerre.Laguerre.cast
=======================================

method

*classmethod* polynomial.laguerre.Laguerre.cast(*series*, *domain=None*, *window=None*)[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/polynomial/_polybase.py#L1101-L1141)

Convert series to series of this class.

The `series` is expected to be an instance of some polynomial series of one of the types supported by the numpy.polynomial module, but could be some other class that supports the convert method.

New in version 1.7.0.

Parameters

**series**series

The series instance to be converted.
**domain**{None, array_like}, optional

If given, the array must be of the form `[beg, end]`, where `beg` and `end` are the endpoints of the domain. If None is given then the class domain is used. The default is None.

**window**{None, array_like}, optional

If given, the resulting array must be of the form `[beg, end]`, where `beg` and `end` are the endpoints of the window. If None is given then the class window is used. The default is None.

Returns

**new_series**series

A series of the same kind as the calling class and equal to `series` when evaluated.

See also

[`convert`](numpy.polynomial.laguerre.laguerre.convert#numpy.polynomial.laguerre.Laguerre.convert "numpy.polynomial.laguerre.Laguerre.convert")

similar instance method

<https://numpy.org/doc/1.23/reference/generated/numpy.polynomial.laguerre.Laguerre.cast.html>

numpy.polynomial.laguerre.Laguerre.convert
==========================================

method

polynomial.laguerre.Laguerre.convert(*domain=None*, *kind=None*, *window=None*)[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/polynomial/_polybase.py#L732-L767)

Convert series to a different kind and/or domain and/or window.

Parameters

**domain**array_like, optional

The domain of the converted series. If the value is None, the default domain of `kind` is used.

**kind**class, optional

The polynomial series type class to which the current instance should be converted. If kind is None, then the class of the current instance is used.

**window**array_like, optional

The window of the converted series. If the value is None, the default window of `kind` is used.

Returns

**new_series**series

The returned series may be of a different type than the current instance and/or have a different domain and/or a different window.

#### Notes

Conversion between domains and class types can result in numerically ill-defined series.
<https://numpy.org/doc/1.23/reference/generated/numpy.polynomial.laguerre.Laguerre.convert.html>

numpy.polynomial.laguerre.Laguerre.copy
=======================================

method

polynomial.laguerre.Laguerre.copy()[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/polynomial/_polybase.py#L631-L640)

Return a copy.

Returns

**new_series**series

Copy of self.

<https://numpy.org/doc/1.23/reference/generated/numpy.polynomial.laguerre.Laguerre.copy.html>

numpy.polynomial.laguerre.Laguerre.cutdeg
=========================================

method

polynomial.laguerre.Laguerre.cutdeg(*deg*)[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/polynomial/_polybase.py#L655-L678)

Truncate series to the given degree.

Reduce the degree of the series to `deg` by discarding the high order terms. If `deg` is greater than the current degree a copy of the current series is returned. This can be useful in least squares where the coefficients of the high degree terms may be very small.

New in version 1.5.0.

Parameters

**deg**non-negative int

The series is reduced to degree `deg` by discarding the high order terms. The value of `deg` must be a non-negative integer.

Returns

**new_series**series

New instance of series with reduced degree.

<https://numpy.org/doc/1.23/reference/generated/numpy.polynomial.laguerre.Laguerre.cutdeg.html>

numpy.polynomial.laguerre.Laguerre.degree
=========================================

method

polynomial.laguerre.Laguerre.degree()[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/polynomial/_polybase.py#L642-L653)

The degree of the series.

New in version 1.5.0.

Returns

**degree**int

Degree of the series, one less than the number of coefficients.
<https://numpy.org/doc/1.23/reference/generated/numpy.polynomial.laguerre.Laguerre.degree.html>

numpy.polynomial.laguerre.Laguerre.deriv
========================================

method

polynomial.laguerre.Laguerre.deriv(*m=1*)[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/polynomial/_polybase.py#L831-L851)

Differentiate.

Return a series instance that is the derivative of the current series.

Parameters

**m**non-negative int

Find the derivative of order `m`.

Returns

**new_series**series

A new series representing the derivative. The domain is the same as the domain of the differentiated series.

<https://numpy.org/doc/1.23/reference/generated/numpy.polynomial.laguerre.Laguerre.deriv.html>

numpy.polynomial.laguerre.Laguerre.fit
======================================

method

*classmethod* polynomial.laguerre.Laguerre.fit(*x*, *y*, *deg*, *domain=None*, *rcond=None*, *full=False*, *w=None*, *window=None*)[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/polynomial/_polybase.py#L900-L986)

Least squares fit to data.

Return a series instance that is the least squares fit to the data `y` sampled at `x`. The domain of the returned instance can be specified and this will often result in a superior fit with less chance of ill conditioning.

Parameters

**x**array_like, shape (M,)

x-coordinates of the M sample points `(x[i], y[i])`.

**y**array_like, shape (M,)

y-coordinates of the M sample points `(x[i], y[i])`.

**deg**int or 1-D array_like

Degree(s) of the fitting polynomials. If `deg` is a single integer all terms up to and including the `deg`’th term are included in the fit. For NumPy versions >= 1.11.0 a list of integers specifying the degrees of the terms to include may be used instead.

**domain**{None, [beg, end], []}, optional

Domain to use for the returned series. If `None`, then a minimal domain that covers the points `x` is chosen. If `[]` the class domain is used.
The default value was the class domain in NumPy 1.4 and `None` in later versions. The `[]` option was added in numpy 1.5.0.

**rcond**float, optional

Relative condition number of the fit. Singular values smaller than this relative to the largest singular value will be ignored. The default value is `len(x)*eps`, where `eps` is the relative precision of the float type, about 2e-16 in most cases.

**full**bool, optional

Switch determining the nature of the return value. When it is False (the default) just the coefficients are returned; when True diagnostic information from the singular value decomposition is also returned.

**w**array_like, shape (M,), optional

Weights. If not None, the weight `w[i]` applies to the unsquared residual `y[i] - y_hat[i]` at `x[i]`. Ideally the weights are chosen so that the errors of the products `w[i]*y[i]` all have the same variance. When using inverse-variance weighting, use `w[i] = 1/sigma(y[i])`. The default value is None.

New in version 1.5.0.

**window**{[beg, end]}, optional

Window to use for the returned series. The default value is the default class domain.

New in version 1.6.0.

Returns

**new_series**series

A series that represents the least squares fit to the data and has the domain and window specified in the call. If the coefficients for the unscaled and unshifted basis polynomials are of interest, do `new_series.convert().coef`.

**[resid, rank, sv, rcond]**list

These values are only returned if `full == True`

* resid – sum of squared residuals of the least squares fit
* rank – the numerical rank of the scaled Vandermonde matrix
* sv – singular values of the scaled Vandermonde matrix
* rcond – value of `rcond`.

For more details, see [`linalg.lstsq`](numpy.linalg.lstsq#numpy.linalg.lstsq "numpy.linalg.lstsq").
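A sketch of the domain mechanics behind `fit`, using data invented for this example: with `domain=None` the fit domain becomes the span of `x`, and `mapparms` exposes the resulting linear map onto the Laguerre class window.

```python
import numpy as np
from numpy.polynomial import Laguerre

# Illustrative data: an exact straight line, so a degree-1 fit is exact
x = np.linspace(0, 5, 40)
y = 1 + 2 * x

fit = Laguerre.fit(x, y, deg=1)
print(np.allclose(fit(x), y))  # True

# The fit lives on domain [0, 5], mapped linearly onto the class window [0, 1]
off, scl = fit.mapparms()
print(off, scl)                # 0.0 0.2
```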
<https://numpy.org/doc/1.23/reference/generated/numpy.polynomial.laguerre.Laguerre.fit.html>

numpy.polynomial.laguerre.Laguerre.fromroots
============================================

method

*classmethod* polynomial.laguerre.Laguerre.fromroots(*roots*, *domain=[]*, *window=None*)[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/polynomial/_polybase.py#L988-L1027)

Return series instance that has the specified roots.

Returns a series representing the product `(x - r[0])*(x - r[1])*...*(x - r[n-1])`, where `r` is a list of roots.

Parameters

**roots**array_like

List of roots.

**domain**{[], None, array_like}, optional

Domain for the resulting series. If None the domain is the interval from the smallest root to the largest. If [] the domain is the class domain. The default is [].

**window**{None, array_like}, optional

Window for the returned series. If None the class window is used. The default is None.

Returns

**new_series**series

Series with the specified roots.

<https://numpy.org/doc/1.23/reference/generated/numpy.polynomial.laguerre.Laguerre.fromroots.html>

numpy.polynomial.laguerre.Laguerre.has_samecoef
===============================================

method

polynomial.laguerre.Laguerre.has_samecoef(*other*)[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/polynomial/_polybase.py#L177-L198)

Check if coefficients match.

New in version 1.6.0.

Parameters

**other**class instance

The other class must have the `coef` attribute.

Returns

**bool**boolean

True if the coefficients are the same, False otherwise.
<https://numpy.org/doc/1.23/reference/generated/numpy.polynomial.laguerre.Laguerre.has_samecoef.html>

numpy.polynomial.laguerre.Laguerre.has_samedomain
=================================================

method

polynomial.laguerre.Laguerre.has_samedomain(*other*)[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/polynomial/_polybase.py#L200-L216)

Check if domains match.

New in version 1.6.0.

Parameters

**other**class instance

The other class must have the `domain` attribute.

Returns

**bool**boolean

True if the domains are the same, False otherwise.

<https://numpy.org/doc/1.23/reference/generated/numpy.polynomial.laguerre.Laguerre.has_samedomain.html>

numpy.polynomial.laguerre.Laguerre.has_sametype
===============================================

method

polynomial.laguerre.Laguerre.has_sametype(*other*)[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/polynomial/_polybase.py#L236-L252)

Check if types match.

New in version 1.7.0.

Parameters

**other**object

Class instance.

Returns

**bool**boolean

True if `other` is the same class as `self`, False otherwise.

<https://numpy.org/doc/1.23/reference/generated/numpy.polynomial.laguerre.Laguerre.has_sametype.html>

numpy.polynomial.laguerre.Laguerre.has_samewindow
=================================================

method

polynomial.laguerre.Laguerre.has_samewindow(*other*)[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/polynomial/_polybase.py#L218-L234)

Check if windows match.

New in version 1.6.0.

Parameters

**other**class instance

The other class must have the `window` attribute.

Returns

**bool**boolean

True if the windows are the same, False otherwise.
<https://numpy.org/doc/1.23/reference/generated/numpy.polynomial.laguerre.Laguerre.has_samewindow.html>

numpy.polynomial.laguerre.Laguerre.identity
===========================================

method

*classmethod* polynomial.laguerre.Laguerre.identity(*domain=None*, *window=None*)[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/polynomial/_polybase.py#L1029-L1060)

Identity function.

If `p` is the returned series, then `p(x) == x` for all values of x.

Parameters

**domain**{None, array_like}, optional

If given, the array must be of the form `[beg, end]`, where `beg` and `end` are the endpoints of the domain. If None is given then the class domain is used. The default is None.

**window**{None, array_like}, optional

If given, the resulting array must be of the form `[beg, end]`, where `beg` and `end` are the endpoints of the window. If None is given then the class window is used. The default is None.

Returns

**new_series**series

A series representing the identity.

<https://numpy.org/doc/1.23/reference/generated/numpy.polynomial.laguerre.Laguerre.identity.html>

numpy.polynomial.laguerre.Laguerre.integ
========================================

method

polynomial.laguerre.Laguerre.integ(*m=1*, *k=[]*, *lbnd=None*)[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/polynomial/_polybase.py#L798-L829)

Integrate.

Return a series instance that is the definite integral of the current series.

Parameters

**m**non-negative int

The number of integrations to perform.

**k**array_like

Integration constants. The first constant is applied to the first integration, the second to the second, and so on. The list of values must be less than or equal to `m` in length and any missing values are set to zero.

**lbnd**Scalar

The lower bound of the definite integral.

Returns

**new_series**series

A new series representing the integral. The domain is the same as the domain of the integrated series.
<https://numpy.org/doc/1.23/reference/generated/numpy.polynomial.laguerre.Laguerre.integ.html>

numpy.polynomial.laguerre.Laguerre.linspace
===========================================

method

polynomial.laguerre.Laguerre.linspace(*n=100*, *domain=None*)[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/polynomial/_polybase.py#L868-L898)

Return x, y values at equally spaced points in domain.

Returns the x, y values at `n` linearly spaced points across the domain. Here y is the value of the polynomial at the points x. By default the domain is the same as that of the series instance. This method is intended mostly as a plotting aid.

New in version 1.5.0.

Parameters

**n**int, optional

Number of point pairs to return. The default value is 100.

**domain**{None, array_like}, optional

If not None, the specified domain is used instead of that of the calling instance. It should be of the form `[beg, end]`. The default is None, in which case the class domain is used.

Returns

**x, y**ndarray

x is equal to linspace(self.domain[0], self.domain[1], n) and y is the series evaluated at each element of x.

<https://numpy.org/doc/1.23/reference/generated/numpy.polynomial.laguerre.Laguerre.linspace.html>

numpy.polynomial.laguerre.Laguerre.mapparms
===========================================

method

polynomial.laguerre.Laguerre.mapparms()[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/polynomial/_polybase.py#L769-L796)

Return the mapping parameters.

The returned values define a linear map `off + scl*x` that is applied to the input arguments before the series is evaluated. The map depends on the `domain` and `window`; if the current `domain` is equal to the `window` the resulting map is the identity.
If the coefficients of the series instance are to be used by themselves outside this class, then the linear function must be substituted for the `x` in the standard representation of the base polynomials.
Returns
**off, scl**float or complex
The mapping function is defined by `off + scl*x`.
#### Notes
If the current domain is the interval `[l1, r1]` and the window is `[l2, r2]`, then the linear mapping function `L` is defined by the equations:
```
L(l1) = l2
L(r1) = r2
```
numpy.polynomial.laguerre.Laguerre.roots
========================================
method
polynomial.laguerre.Laguerre.roots()[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/polynomial/_polybase.py#L853-L866)
Return the roots of the series polynomial. Compute the roots for the series. Note that the accuracy of the roots decreases the further outside the domain they lie.
Returns
**roots**ndarray
Array containing the roots of the series.
numpy.polynomial.laguerre.Laguerre.trim
=======================================
method
polynomial.laguerre.Laguerre.trim(*tol=0*)[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/polynomial/_polybase.py#L680-L701)
Remove trailing coefficients. Remove trailing coefficients until a coefficient is reached whose absolute value is greater than `tol` or the beginning of the series is reached. If all the coefficients would be removed the series is set to `[0]`. A new series instance is returned with the new coefficients. The current instance remains unchanged.
Parameters
**tol**non-negative number
All trailing coefficients less than `tol` will be removed.
Returns
**new_series**series
New instance of series with trimmed coefficients.
numpy.polynomial.laguerre.Laguerre.truncate
===========================================
method
polynomial.laguerre.Laguerre.truncate(*size*)[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/polynomial/_polybase.py#L703-L730)
Truncate series to length `size`. Reduce the series to length `size` by discarding the high degree terms. The value of `size` must be a positive integer. This can be useful in least squares where the coefficients of the high degree terms may be very small.
Parameters
**size**positive int
The series is reduced to length `size` by discarding the high degree terms.
Returns
**new_series**series
New instance of series with truncated coefficients.
numpy.polynomial.legendre.Legendre.domain
=========================================
attribute
polynomial.legendre.Legendre.domain = array([-1, 1])
numpy.polynomial.legendre.Legendre.__call__
===========================================
method
polynomial.legendre.Legendre.__call__(*arg*)[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/polynomial/_polybase.py#L480-L483)
Call self as a function.
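As a quick sketch of the `domain` attribute and `__call__` together (my example, not from the reference pages): `Legendre([0, 1])` is `0*P_0 + 1*P_1`, i.e. the polynomial `x` on the default domain `[-1, 1]`, and instances are evaluated by calling them directly:

```python
import numpy as np
from numpy.polynomial import Legendre

p = Legendre([0.0, 1.0])          # P_1(x) = x
assert np.allclose(p.domain, [-1, 1])
assert np.isclose(p(0.25), 0.25)  # calling the instance evaluates the series
```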
numpy.polynomial.legendre.Legendre.basis
========================================
method
*classmethod* polynomial.legendre.Legendre.basis(*deg*, *domain=None*, *window=None*)[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/polynomial/_polybase.py#L1062-L1099)
Series basis polynomial of degree `deg`. Returns the series representing the basis polynomial of degree `deg`.
New in version 1.7.0.
Parameters
**deg**int
Degree of the basis polynomial for the series. Must be >= 0.
**domain**{None, array_like}, optional
If given, the array must be of the form `[beg, end]`, where `beg` and `end` are the endpoints of the domain. If None is given then the class domain is used. The default is None.
**window**{None, array_like}, optional
If given, the array must be of the form `[beg, end]`, where `beg` and `end` are the endpoints of the window. If None is given then the class window is used. The default is None.
Returns
**new_series**series
A series with the coefficient of the `deg` term set to one and all others zero.
numpy.polynomial.legendre.Legendre.cast
=======================================
method
*classmethod* polynomial.legendre.Legendre.cast(*series*, *domain=None*, *window=None*)[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/polynomial/_polybase.py#L1101-L1141)
Convert series to series of this class. The `series` is expected to be an instance of some polynomial series of one of the types supported by the numpy.polynomial module, but could be some other class that supports the convert method.
New in version 1.7.0.
Parameters
**series**series
The series instance to be converted.
**domain**{None, array_like}, optional
If given, the array must be of the form `[beg, end]`, where `beg` and `end` are the endpoints of the domain. If None is given then the class domain is used. The default is None.
**window**{None, array_like}, optional
If given, the array must be of the form `[beg, end]`, where `beg` and `end` are the endpoints of the window. If None is given then the class window is used. The default is None.
Returns
**new_series**series
A series of the same kind as the calling class and equal to `series` when evaluated.
See also
[`convert`](numpy.polynomial.legendre.legendre.convert#numpy.polynomial.legendre.Legendre.convert "numpy.polynomial.legendre.Legendre.convert")
similar instance method
numpy.polynomial.legendre.Legendre.convert
==========================================
method
polynomial.legendre.Legendre.convert(*domain=None*, *kind=None*, *window=None*)[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/polynomial/_polybase.py#L732-L767)
Convert series to a different kind and/or domain and/or window.
Parameters
**domain**array_like, optional
The domain of the converted series. If the value is None, the default domain of `kind` is used.
**kind**class, optional
The polynomial series type class to which the current instance should be converted. If kind is None, then the class of the current instance is used.
**window**array_like, optional
The window of the converted series. If the value is None, the default window of `kind` is used.
Returns
**new_series**series
The returned series can be of a different type than the current instance and/or have a different domain and/or different window.
#### Notes
Conversion between domains and class types can result in numerically ill-defined series.
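A short sketch of `cast` and `convert` as inverses (my example, not from the reference pages). Since `P_2(x) = (3x^2 - 1)/2`, the power-basis monomial `x**2` equals `(1/3)*P_0 + (2/3)*P_2`:

```python
import numpy as np
from numpy.polynomial import Legendre, Polynomial

P = Polynomial([0.0, 0.0, 1.0])   # x**2 in the power basis
L = Legendre.cast(P)              # same function, Legendre basis

# x**2 == (1/3)*P_0 + (2/3)*P_2
assert np.allclose(L.coef, [1/3, 0.0, 2/3])

# convert goes the other way, back to the power basis
assert np.allclose(L.convert(kind=Polynomial).coef, P.coef)
```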
numpy.polynomial.legendre.Legendre.copy
=======================================
method
polynomial.legendre.Legendre.copy()[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/polynomial/_polybase.py#L631-L640)
Return a copy.
Returns
**new_series**series
Copy of self.
numpy.polynomial.legendre.Legendre.cutdeg
=========================================
method
polynomial.legendre.Legendre.cutdeg(*deg*)[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/polynomial/_polybase.py#L655-L678)
Truncate series to the given degree. Reduce the degree of the series to `deg` by discarding the high order terms. If `deg` is greater than the current degree a copy of the current series is returned. This can be useful in least squares where the coefficients of the high degree terms may be very small.
New in version 1.5.0.
Parameters
**deg**non-negative int
The series is reduced to degree `deg` by discarding the high order terms. The value of `deg` must be a non-negative integer.
Returns
**new_series**series
New instance of series with reduced degree.
numpy.polynomial.legendre.Legendre.degree
=========================================
method
polynomial.legendre.Legendre.degree()[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/polynomial/_polybase.py#L642-L653)
The degree of the series.
New in version 1.5.0.
Returns
**degree**int
Degree of the series, one less than the number of coefficients.
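The three methods above can be sketched together (my example, not from the reference pages):

```python
import numpy as np
from numpy.polynomial import Legendre

p = Legendre([1.0, 2.0, 3.0, 4.0])
assert p.degree() == 3              # one less than the number of coefficients

q = p.cutdeg(1)                     # discard terms above degree 1
assert np.allclose(q.coef, [1.0, 2.0])

r = p.copy()                        # independent copy with the same data
assert r.has_samecoef(p) and r is not p
```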
numpy.polynomial.legendre.Legendre.deriv
========================================
method
polynomial.legendre.Legendre.deriv(*m=1*)[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/polynomial/_polybase.py#L831-L851)
Differentiate. Return a series instance that is the derivative of the current series.
Parameters
**m**non-negative int
Find the derivative of order `m`.
Returns
**new_series**series
A new series representing the derivative. The domain is the same as the domain of the differentiated series.
numpy.polynomial.legendre.Legendre.fit
======================================
method
*classmethod* polynomial.legendre.Legendre.fit(*x*, *y*, *deg*, *domain=None*, *rcond=None*, *full=False*, *w=None*, *window=None*)[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/polynomial/_polybase.py#L900-L986)
Least squares fit to data. Return a series instance that is the least squares fit to the data `y` sampled at `x`. The domain of the returned instance can be specified and this will often result in a superior fit with less chance of ill conditioning.
Parameters
**x**array_like, shape (M,)
x-coordinates of the M sample points `(x[i], y[i])`.
**y**array_like, shape (M,)
y-coordinates of the M sample points `(x[i], y[i])`.
**deg**int or 1-D array_like
Degree(s) of the fitting polynomials. If `deg` is a single integer all terms up to and including the `deg`'th term are included in the fit. For NumPy versions >= 1.11.0 a list of integers specifying the degrees of the terms to include may be used instead.
**domain**{None, [beg, end], []}, optional
Domain to use for the returned series. If `None`, then a minimal domain that covers the points `x` is chosen. If `[]` the class domain is used.
The default value was the class domain in NumPy 1.4 and `None` in later versions. The `[]` option was added in numpy 1.5.0.
**rcond**float, optional
Relative condition number of the fit. Singular values smaller than this relative to the largest singular value will be ignored. The default value is len(x)*eps, where eps is the relative precision of the float type, about 2e-16 in most cases.
**full**bool, optional
Switch determining the nature of the return value. When it is False (the default) just the coefficients are returned; when True, diagnostic information from the singular value decomposition is also returned.
**w**array_like, shape (M,), optional
Weights. If not None, the weight `w[i]` applies to the unsquared residual `y[i] - y_hat[i]` at `x[i]`. Ideally the weights are chosen so that the errors of the products `w[i]*y[i]` all have the same variance. When using inverse-variance weighting, use `w[i] = 1/sigma(y[i])`. The default value is None.
New in version 1.5.0.
**window**{[beg, end]}, optional
Window to use for the returned series. The default value is the default class domain.
New in version 1.6.0.
Returns
**new_series**series
A series that represents the least squares fit to the data and has the domain and window specified in the call. If the coefficients for the unscaled and unshifted basis polynomials are of interest, do `new_series.convert().coef`.
**[resid, rank, sv, rcond]**list
These values are only returned if `full == True`
* resid – sum of squared residuals of the least squares fit
* rank – the numerical rank of the scaled Vandermonde matrix
* sv – singular values of the scaled Vandermonde matrix
* rcond – value of `rcond`.
For more details, see [`linalg.lstsq`](numpy.linalg.lstsq#numpy.linalg.lstsq "numpy.linalg.lstsq").
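A minimal sketch of `fit` on noise-free quadratic data (my example, not from the reference pages); with `deg=2` the fit is exact, and `convert` recovers the power-basis coefficients:

```python
import numpy as np
from numpy.polynomial import Legendre, Polynomial

x = np.linspace(-1, 1, 50)
y = 2.0 * x**2 - x + 0.5           # exact quadratic, no noise

f = Legendre.fit(x, y, deg=2)      # least squares fit in the Legendre basis
assert np.allclose(f(x), y)        # exact data -> exact fit

# coefficients of the unscaled power basis, via convert
c = f.convert(kind=Polynomial).coef
assert np.allclose(c, [0.5, -1.0, 2.0])
```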
numpy.polynomial.legendre.Legendre.fromroots
============================================
method
*classmethod* polynomial.legendre.Legendre.fromroots(*roots*, *domain=[]*, *window=None*)[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/polynomial/_polybase.py#L988-L1027)
Return series instance that has the specified roots. Returns a series representing the product `(x - r[0])*(x - r[1])*...*(x - r[n-1])`, where `r` is a list of roots.
Parameters
**roots**array_like
List of roots.
**domain**{[], None, array_like}, optional
Domain for the resulting series. If None the domain is the interval from the smallest root to the largest. If [] the domain is the class domain. The default is [].
**window**{None, array_like}, optional
Window for the returned series. If None the class window is used. The default is None.
Returns
**new_series**series
Series with the specified roots.
numpy.polynomial.legendre.Legendre.has_samecoef
===============================================
method
polynomial.legendre.Legendre.has_samecoef(*other*)[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/polynomial/_polybase.py#L177-L198)
Check if coefficients match.
New in version 1.6.0.
Parameters
**other**class instance
The other class must have the `coef` attribute.
Returns
**bool**boolean
True if the coefficients are the same, False otherwise.
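A quick round-trip between `fromroots` and `roots` (my example, not from the reference pages); with roots inside the class domain `[-1, 1]` the recovered values are accurate:

```python
import numpy as np
from numpy.polynomial import Legendre

p = Legendre.fromroots([-0.5, 0.25])

# the constructed series vanishes at the given roots...
assert np.isclose(p(-0.5), 0.0) and np.isclose(p(0.25), 0.0)

# ...and roots() recovers them (sorted for a stable comparison)
assert np.allclose(np.sort(p.roots()), [-0.5, 0.25])
```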
numpy.polynomial.legendre.Legendre.has_samedomain
=================================================
method
polynomial.legendre.Legendre.has_samedomain(*other*)[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/polynomial/_polybase.py#L200-L216)
Check if domains match.
New in version 1.6.0.
Parameters
**other**class instance
The other class must have the `domain` attribute.
Returns
**bool**boolean
True if the domains are the same, False otherwise.
numpy.polynomial.legendre.Legendre.has_sametype
===============================================
method
polynomial.legendre.Legendre.has_sametype(*other*)[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/polynomial/_polybase.py#L236-L252)
Check if types match.
New in version 1.7.0.
Parameters
**other**object
Class instance.
Returns
**bool**boolean
True if other is the same class as self, False otherwise.
numpy.polynomial.legendre.Legendre.has_samewindow
=================================================
method
polynomial.legendre.Legendre.has_samewindow(*other*)[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/polynomial/_polybase.py#L218-L234)
Check if windows match.
New in version 1.6.0.
Parameters
**other**class instance
The other class must have the `window` attribute.
Returns
**bool**boolean
True if the windows are the same, False otherwise.
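The four `has_same*` predicates can be sketched side by side (my example, not from the reference pages):

```python
from numpy.polynomial import Legendre, Chebyshev

a = Legendre([1.0, 2.0])                 # default domain [-1, 1]
b = Legendre([1.0, 2.0], domain=[0, 1])  # same coefficients, shifted domain
c = Chebyshev([1.0, 2.0])                # different series type

assert a.has_samecoef(b)
assert not a.has_samedomain(b)           # [-1, 1] vs [0, 1]
assert a.has_samewindow(b)               # both use the class window [-1, 1]
assert not a.has_sametype(c)
```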
numpy.polynomial.legendre.Legendre.identity
===========================================
method
*classmethod* polynomial.legendre.Legendre.identity(*domain=None*, *window=None*)[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/polynomial/_polybase.py#L1029-L1060)
Identity function. If `p` is the returned series, then `p(x) == x` for all values of x.
Parameters
**domain**{None, array_like}, optional
If given, the array must be of the form `[beg, end]`, where `beg` and `end` are the endpoints of the domain. If None is given then the class domain is used. The default is None.
**window**{None, array_like}, optional
If given, the array must be of the form `[beg, end]`, where `beg` and `end` are the endpoints of the window. If None is given then the class window is used. The default is None.
Returns
**new_series**series
Series representing the identity.
numpy.polynomial.legendre.Legendre.integ
========================================
method
polynomial.legendre.Legendre.integ(*m=1*, *k=[]*, *lbnd=None*)[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/polynomial/_polybase.py#L798-L829)
Integrate. Return a series instance that is the definite integral of the current series.
Parameters
**m**non-negative int
The number of integrations to perform.
**k**array_like
Integration constants. The first constant is applied to the first integration, the second to the second, and so on. The list of values must be less than or equal to `m` in length, and any missing values are set to zero.
**lbnd**Scalar
The lower bound of the definite integral.
Returns
**new_series**series
A new series representing the integral. The domain is the same as the domain of the integrated series.
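A one-line check of the `identity` contract `p(x) == x` (my example, not from the reference pages):

```python
import numpy as np
from numpy.polynomial import Legendre

ident = Legendre.identity()        # class domain and window, [-1, 1]
x = np.linspace(-1, 1, 7)
assert np.allclose(ident(x), x)    # p(x) == x across the whole domain
```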
numpy.polynomial.legendre.Legendre.linspace
===========================================
method
polynomial.legendre.Legendre.linspace(*n=100*, *domain=None*)[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/polynomial/_polybase.py#L868-L898)
Return x, y values at equally spaced points in domain. Returns the x, y values at `n` linearly spaced points across the domain. Here y is the value of the polynomial at the points x. By default the domain is the same as that of the series instance. This method is intended mostly as a plotting aid.
New in version 1.5.0.
Parameters
**n**int, optional
Number of point pairs to return. The default value is 100.
**domain**{None, array_like}, optional
If not None, the specified domain is used instead of that of the calling instance. It should be of the form `[beg, end]`. The default is None, in which case the class domain is used.
Returns
**x, y**ndarray
x is equal to linspace(self.domain[0], self.domain[1], n) and y is the series evaluated at each element of x.
numpy.polynomial.legendre.Legendre.mapparms
===========================================
method
polynomial.legendre.Legendre.mapparms()[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/polynomial/_polybase.py#L769-L796)
Return the mapping parameters. The returned values define a linear map `off + scl*x` that is applied to the input arguments before the series is evaluated. The map depends on the `domain` and `window`; if the current `domain` is equal to the `window` the resulting map is the identity.
If the coefficients of the series instance are to be used by themselves outside this class, then the linear function must be substituted for the `x` in the standard representation of the base polynomials.
Returns
**off, scl**float or complex
The mapping function is defined by `off + scl*x`.
#### Notes
If the current domain is the interval `[l1, r1]` and the window is `[l2, r2]`, then the linear mapping function `L` is defined by the equations:
```
L(l1) = l2
L(r1) = r2
```
numpy.polynomial.legendre.Legendre.roots
========================================
method
polynomial.legendre.Legendre.roots()[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/polynomial/_polybase.py#L853-L866)
Return the roots of the series polynomial. Compute the roots for the series. Note that the accuracy of the roots decreases the further outside the domain they lie.
Returns
**roots**ndarray
Array containing the roots of the series.
numpy.polynomial.legendre.Legendre.trim
=======================================
method
polynomial.legendre.Legendre.trim(*tol=0*)[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/polynomial/_polybase.py#L680-L701)
Remove trailing coefficients. Remove trailing coefficients until a coefficient is reached whose absolute value is greater than `tol` or the beginning of the series is reached. If all the coefficients would be removed the series is set to `[0]`. A new series instance is returned with the new coefficients. The current instance remains unchanged.
Parameters
**tol**non-negative number
All trailing coefficients less than `tol` will be removed.
Returns
**new_series**series
New instance of series with trimmed coefficients.
numpy.polynomial.legendre.Legendre.truncate
===========================================
method
polynomial.legendre.Legendre.truncate(*size*)[[source]](https://github.com/numpy/numpy/blob/v1.23.0/numpy/polynomial/_polybase.py#L703-L730)
Truncate series to length `size`. Reduce the series to length `size` by discarding the high degree terms. The value of `size` must be a positive integer. This can be useful in least squares where the coefficients of the high degree terms may be very small.
Parameters
**size**positive int
The series is reduced to length `size` by discarding the high degree terms.
Returns
**new_series**series
New instance of series with truncated coefficients.
numpy.random.PCG64DXSM.advance
==============================
method
random.PCG64DXSM.advance(*delta*)
Advance the underlying RNG as-if delta draws have occurred.
Parameters
**delta**integer, positive
Number of draws to advance the RNG. Must be less than the size state variable in the underlying RNG.
Returns
**self**PCG64
RNG advanced delta steps
#### Notes
Advancing a RNG updates the underlying RNG state as-if a given number of calls to the underlying RNG have been made. In general there is not a one-to-one relationship between the number of output random values from a particular distribution and the number of draws from the core RNG. This occurs for two reasons:
* The random values are simulated using a rejection-based method and so, on average, more than one value from the underlying RNG is required to generate a single draw.
* The number of bits required to generate a simulated value differs from the number of bits generated by the underlying RNG. For example, two 16-bit integer values can be simulated from a single draw of a 32-bit RNG.
Advancing the RNG state resets any pre-computed random numbers. This is required to ensure exact reproducibility.
numpy.random.PCG64DXSM.state
============================
attribute
random.PCG64DXSM.state
Get or set the PRNG state
Returns
**state**dict
Dictionary containing the information required to describe the state of the PRNG
numpy.random.PCG64DXSM.jumped
=============================
method
random.PCG64DXSM.jumped(*jumps=1*)
Returns a new bit generator with the state jumped. Jumps the state as-if jumps * 210306068529402873165736369884012333109 random numbers have been generated.
Parameters
**jumps**integer, positive
Number of times to jump the state of the bit generator returned
Returns
**bit_generator**PCG64DXSM
New instance of generator jumped `jumps` times
#### Notes
The step size is phi-1 when multiplied by 2**128, where phi is the golden ratio.
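A small sketch of `state`, `advance`, and `jumped` together (my example, not from the reference pages). Two identically seeded bit generators start with equal state dictionaries; `advance` mutates in place, while `jumped` returns a new instance and leaves the original untouched:

```python
from numpy.random import PCG64DXSM

a = PCG64DXSM(seed=42)
b = PCG64DXSM(seed=42)
assert a.state == b.state        # identical seeds -> identical state dicts

a.advance(10)                    # as-if 10 draws had occurred; mutates a
assert a.state != b.state

far = b.jumped(1)                # new bit generator, far along the stream
assert far.state != b.state      # b itself is unchanged
```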
numpy.random.PCG64DXSM.cffi
===========================
attribute
random.PCG64DXSM.cffi
CFFI interface
Returns
**interface**namedtuple
Named tuple containing CFFI wrapper
* state_address - Memory address of the state struct
* state - pointer to the state struct
* next_uint64 - function pointer to produce 64 bit integers
* next_uint32 - function pointer to produce 32 bit integers
* next_double - function pointer to produce doubles
* bitgen - pointer to the bit generator struct
numpy.random.PCG64DXSM.ctypes
=============================
attribute
random.PCG64DXSM.ctypes
ctypes interface
Returns
**interface**namedtuple
Named tuple containing ctypes wrapper
* state_address - Memory address of the state struct
* state - pointer to the state struct
* next_uint64 - function pointer to produce 64 bit integers
* next_uint32 - function pointer to produce 32 bit integers
* next_double - function pointer to produce doubles
* bitgen - pointer to the bit generator struct
numpy.random.Philox.state
=========================
attribute
random.Philox.state
Get or set the PRNG state
Returns
**state**dict
Dictionary containing the information required to describe the state of the PRNG
numpy.random.Philox.advance
===========================
method
random.Philox.advance(*delta*)
Advance the underlying RNG as-if delta draws have occurred.
Parameters
**delta**integer, positive
Number of draws to advance the RNG. Must be less than the size state variable in the underlying RNG.
Returns
**self**Philox
RNG advanced delta steps
#### Notes
Advancing a RNG updates the underlying RNG state as-if a given number of calls to the underlying RNG have been made. In general there is not a one-to-one relationship between the number of output random values from a particular distribution and the number of draws from the core RNG. This occurs for two reasons:
* The random values are simulated using a rejection-based method and so, on average, more than one value from the underlying RNG is required to generate a single draw.
* The number of bits required to generate a simulated value differs from the number of bits generated by the underlying RNG. For example, two 16-bit integer values can be simulated from a single draw of a 32-bit RNG.
Advancing the RNG state resets any pre-computed random numbers. This is required to ensure exact reproducibility.
numpy.random.Philox.jumped
==========================
method
random.Philox.jumped(*jumps=1*)
Returns a new bit generator with the state jumped. The state of the returned bit generator is jumped as-if (2**128) * jumps random numbers have been generated.
Parameters
**jumps**integer, positive
Number of times to jump the state of the bit generator returned
Returns
**bit_generator**Philox
New instance of generator jumped `jumps` times
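A brief sketch of Philox streams (my example, not from the reference pages): Philox is counter-based, so identically keyed instances reproduce the same draws, and `jumped` returns a separated stream without disturbing the original:

```python
import numpy as np
from numpy.random import Generator, Philox

# identically keyed Philox streams agree draw-for-draw
g1 = Generator(Philox(key=2023))
g2 = Generator(Philox(key=2023))
assert np.array_equal(g1.integers(0, 100, size=5),
                      g2.integers(0, 100, size=5))

# jumped() leaves the original untouched and returns a far-separated stream
bg = Philox(key=2023)
assert bg.jumped(1).state != bg.state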
numpy.random.Philox.cffi
========================
attribute
random.Philox.cffi
CFFI interface
Returns
**interface**namedtuple
Named tuple containing CFFI wrapper
* state_address - Memory address of the state struct
* state - pointer to the state struct
* next_uint64 - function pointer to produce 64 bit integers
* next_uint32 - function pointer to produce 32 bit integers
* next_double - function pointer to produce doubles
* bitgen - pointer to the bit generator struct
numpy.random.Philox.ctypes
==========================
attribute
random.Philox.ctypes
ctypes interface
Returns
**interface**namedtuple
Named tuple containing ctypes wrapper
* state_address - Memory address of the state struct
* state - pointer to the state struct
* next_uint64 - function pointer to produce 64 bit integers
* next_uint32 - function pointer to produce 32 bit integers
* next_double - function pointer to produce doubles
* bitgen - pointer to the bit generator struct
numpy.random.SFC64.state
========================
attribute
random.SFC64.state
Get or set the PRNG state
Returns
**state**dict
Dictionary containing the information required to describe the state of the PRNG
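The `state` attribute is a plain dictionary, so it can be snapshotted and restored to replay a stream exactly; a sketch with SFC64 (my example, not from the reference pages):

```python
import numpy as np
from numpy.random import Generator, SFC64

bg = SFC64(seed=7)
g = Generator(bg)

saved = bg.state                 # dict snapshot of the full PRNG state
first = g.random(4)

bg.state = saved                 # rewind the bit generator
again = g.random(4)
assert np.array_equal(first, again)   # bit-identical replay
```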
numpy.random.SFC64.cffi
=======================
attribute
random.SFC64.cffi
CFFI interface
Returns
**interface**namedtuple
Named tuple containing CFFI wrapper
* state_address - Memory address of the state struct
* state - pointer to the state struct
* next_uint64 - function pointer to produce 64 bit integers
* next_uint32 - function pointer to produce 32 bit integers
* next_double - function pointer to produce doubles
* bitgen - pointer to the bit generator struct
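A sketch of the low-level interface (my example, not from the reference pages; assumes the ctypes wrapper's function pointers are callable from Python, which is how NumPy documents them for interfacing with native code): the `next_uint64` pointer applied to the `state` pointer yields one raw 64-bit draw:

```python
from numpy.random import SFC64

bg = SFC64(seed=11)
iface = bg.ctypes                # namedtuple: state_address, state, next_uint64, ...

# pull one raw 64-bit value straight from the underlying generator
val = iface.next_uint64(iface.state)
assert 0 <= val < 2**64
```

The cffi namedtuple has the same shape and is intended for the same purpose when building against cffi rather than ctypes.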
roxyglobals
cran
R
Package ‘roxyglobals’
August 21, 2023

Title 'Roxygen2' Global Variable Declarations
Version 1.0.0
Description Generate utils::globalVariables() from 'roxygen2' @global and @autoglobal tags.
License MIT + file LICENSE
URL https://github.com/anthonynorth/roxyglobals
BugReports https://github.com/anthonynorth/roxyglobals/issues
Imports brio, codetools, desc, roxygen2
Suggests covr, testthat (>= 3.0.0), withr
Config/testthat/edition 3
Encoding UTF-8
RoxygenNote 7.2.3
NeedsCompilation no
Author <NAME> [aut, cre, cph], <NAME> [ctb] (<https://orcid.org/0000-0003-2865-2548>)
Maintainer <NAME> <<EMAIL>>
Repository CRAN
Date/Publication 2023-08-21 14:20:08 UTC

R topics documented: global_roclet, use_roxyglobals

global_roclet Roclet: global Description This roclet automates utils::globalVariables() declaration from @global and @autoglobal roxygen tags. Package authors will not typically need to invoke global_roclet() directly. Global roclet instances are created by roxygen2 during roxygen2::roxygenise() (or devtools::document()). Usage global_roclet() Value A roxygen2::roclet() instance for declaring utils::globalVariables() during roxygen2::roxygenise() Examples #' @autoglobal foo <- function(x) { # bar isn't declared -> add to utils::globalVariables() subset(x, bar == 4) } #' @global bar foo <- function(x) { # bar is explicitly defined as a global -> add to utils::globalVariables() subset(x, bar == 4) } use_roxyglobals Use roxyglobals Description Configures roxygen to use global_roclet(), adds roxyglobals to Suggests Usage use_roxyglobals() Value nothing Examples ## Not run: use_roxyglobals() ## End(Not run)
SafeVote
cran
R
Package ‘SafeVote’
January 18, 2023

Type Package
Title Election Vote Counting with Safety Features
Version 1.0.0
Date 2023-01-18
Description Fork of 'vote_2.3-2', Raftery et al. (2021) <DOI:10.32614/RJ-2021-086>, with additional support for stochastic experimentation.
Depends R (>= 3.5.0)
Imports formattable, knitr, fields, grDevices, graphics, utils, ggplot2, data.table, stringr, forcats, dplyr
Encoding UTF-8
License GPL (>= 2)
Language EN-GB
NeedsCompilation no
RoxygenNote 7.2.3
LazyData true
URL https://cthombor.github.io/SafeVote/
Suggests testthat (>= 3.0.0), vote, STV
Config/testthat/edition 3
Author <NAME> [cre, aut] (<https://orcid.org/0000-0002-4147-7898>)
Maintainer <NAME> <<EMAIL>>
Repository CRAN
Date/Publication 2023-01-18 11:50:06 UTC

R topics documented: .print.summary.SafeVote, .summary.SafeVote, a3_hil, a4_hil, a53_hil, approval, as.SafeRankExpt, assemble.args.for.check.score, assemble.args.for.check.stv, backwards.tiebreak, check.nseats, check.ranking, check.votes, check.votes.approval, check.votes.condorcet, check.votes.plurality, check.votes.score, check.votes.stv, check.votes.tworound.runoff, combineRankings, completeRankingTable, condorcet, correct.ranking, dublin_west, election.info, extractMargins, extractRank, food_election, forwards.tiebreak, image.SafeVote.condorcet, image.SafeVote.stv, ims_approval, ims_election, ims_plurality, ims_score, ims_stv, invalid.votes, is.SafeRankExpt, is.valid.vote, loserMargin, new_SafeRankExpt, ordered.preferences, ordered.tiebreak, plot.SafeRankExpt, plot.SafeVote.stv, plurality, prepare.votes, print.summary.SafeRankExpt, print.summary.SafeVote.approval, print.summary.SafeVote.condorcet, print.summary.SafeVote.plurality, print.summary.SafeVote.score, print.summary.SafeVote.stv, rbind.SafeRankExpt, readHil, remove.candidate, score, solveTiebreak, stv, summary.SafeRankExpt, summary.SafeVote.approval, summary.SafeVote.condorcet, summary.SafeVote.plurality, summary.SafeVote.score, summary.SafeVote.stv, sumOfVote..., testAddition..., testDeletions, testFraction, translate.tie..., uk_labour_201..., vie..., view.SafeVote.approval, view.SafeVote.condorcet, view.SafeVote.plurality, view.SafeVote.score, view.SafeVote.stv, winnerMargin, yale_ballot...

.print.summary.SafeVote .print method for summary object Description .print method for summary object Usage .print.summary.SafeVote(x, ...) Arguments x, ... undocumented Value undocumented .summary.SafeVote summarises vote-totals for subsequent printing Description summarises vote-totals for subsequent printing Usage .summary.SafeVote(object, larger.wins = TRUE, reorder = TRUE) Arguments object vector of total votes per candidate larger.wins TRUE if candidates are "voted in" rather than voted out reorder TRUE if output data.frame columns should be in rank-order Value a data.frame with three columns and nc+1 rows, where nc is the number of candidates. The first column contains candidate names and a final entry named "Sum". The second column contains vote totals. The third column is a vector of chars which indicate whether the candidate has been elected. The data.frame has four named attributes carrying election parameters. TODO: refactor into a modern dialect of R, perhaps by defining a constructor for an election_info S3 object with a summary method and a print method a3_hil Tideman a3_hil Description This data is one of 87 sets of ballots from the Tideman data collection, as curated by The Center for Range Voting. This set of ballots was collected in 1987 by <NAME>, with support from NSF grant SES86-18328.
"The data are records of ballots from elections of British organizations (mostly trade unions using PR-STV or IRV voting) in which the voters ranked the candidates. The data were gathered under a stipulation that the organizations involved would remain anonymous." The ballots were encoded in David Hill’s format, and have been converted to the preference-vector format of this package. The archival file A4.HIL at rangevoting.org contains eight blank ballot papers (1, 616, 619, 620, 685, 686, 687, 688) which we have retained. This set may be counted by stv(a3_hil, nseats=attr(a3_hil, "nseats")). Usage data(a3_hil) Format A data frame with attribute "nseats" = 7, consisting of 989 observations and 15 candidates. a4_hil Tideman a4_hil Description This data is one of 87 sets of ballots from the Tideman data collection, as curated by The Center for Range Voting. The ballots were archived in David Hill’s format, and have been converted to the preference-vector format of this package. This set of ballots was collected in 1987 by <NAME>, with support from NSF grant SES86-18328. "The data are records of ballots from elections of British organizations (mostly trade unions using PR-STV or IRV voting) in which the voters ranked the candidates. The data were gathered under a stipulation that the organizations involved would remain anonymous." Usage data(a4_hil) Format A data frame with attribute "nseats" = 2, consisting of 43 observations and 14 candidates. a53_hil Tideman a53_hil Description This data is one of 87 sets of ballots from the Tideman data collection, as curated by The Center for Range Voting. This set of ballots was collected in 1988 by <NAME>, with support from NSF grant SES86-18328. "The data are records of ballots from elections of British organizations (mostly trade unions using PR-STV or IRV voting) in which the voters ranked the candidates. The data were gathered under a stipulation that the organizations involved would remain anonymous."
The ballots were encoded in David Hill’s format, and have been converted to the preference-vector format of this package. Candidates have been renamed to letters of the alphabet, for ease of comparison with Table 3 of Tideman (2000). Note: the DOI for this article is 10.1023/A:1005082925477, with an embedded colon which isn’t handled by the usual DOI-to-URL conversions. As noted in this table, it is a very close race between candidates D, F, and B in the final rounds of a Meek count of a53_hil. Tideman’s implementation of Meek’s method excludes B (on 59.02 votes), then elects D in the final round (on 88.33 votes) with a margin of 0.95 votes ahead of F (on 87.38 votes). In v1.0, stv(a53_hil, quota.hare=TRUE) excludes F (on 56.418 votes), then elects D in the final round (on 79.705 votes) with a winning margin of 0.747 votes ahead of B (on 78.958 votes). The result of the election is the same but the vote counts and winning margins differ significantly; so we conclude that stv(quota.hare=TRUE) in SafeVote v1.0 is not a reliable proxy for Tideman’s implementation of Meek’s algorithm. Future researchers may wish to adjust the quota calculation of vote.stv() so that it is no longer biased upward by a "fuzz" of 0.001, to see if this change significantly reduces the discrepancies with Tideman’s implementation of Meek. It would be unreasonable to expect an exact replication of results from two different implementations of an STV method. We leave it to future researchers to develop a formal specification, so that it would be possible to verify the correctness of an implementation. We also leave it to future researchers to develop a set of test cases with appropriate levels of tolerance for the vagaries of floating-point roundoff in optimised (or even unoptimised!) compilations of the same code on different computing systems. We suggest that a53_hil be included in any such test set.
We note in passing that <NAME>, in "Checking two STV programs", Voting Matters 11, 2000, discussed the cross-validation exercise he conducted between the ERBS implementation of its voting rules and the Church of England’s implementation of its voting rules. In both cases, he discovered ambiguities in the specification as well as defects in the implementation. Usage data(a53_hil) Format A data frame with attribute "nseats" = 4, consisting of 460 observations and 10 candidates. approval Count votes using the approval method Description See https://arxiv.org/abs/2102.05801 Usage approval(votes, nseats = 1, fsep = "\t", quiet = FALSE, ...) Arguments votes, nseats, fsep, quiet, ... undocumented Value undocumented as.SafeRankExpt as.SafeRankExpt() Description as.SafeRankExpt() Usage as.SafeRankExpt(df) Arguments df data.frame object Value a SafeRankExpt object, or stop() if df fails some sanity checks assemble.args.for.check.score undocumented internal method Description undocumented internal method Usage assemble.args.for.check.score(x, max.score = NULL, ...) Arguments x, max.score, ... undocumented Value undocumented assemble.args.for.check.stv undocumented internal method Description undocumented internal method Usage assemble.args.for.check.stv(x, equal.ranking = FALSE, ...) Arguments x, equal.ranking, ... 
undocumented Value undocumented backwards.tiebreak Undocumented internal method Description Undocumented internal method Usage backwards.tiebreak(prefs, icans, elim = TRUE) Arguments prefs undocumented icans undocumented elim undocumented check.nseats parameter-checking method for nseats (internal) Description parameter-checking method for nseats (internal) Usage check.nseats( nseats = NULL, ncandidates, default = 1, mcan = NULL, complete.ranking = FALSE ) Arguments nseats initially-specified number of seats to be filled in an election ncandidates the number of candidates standing for election default the return value of this function when nseats=NULL mcan a deprecated name for nseats complete.ranking when TRUE, the return value is in 1..ncandidates When FALSE, the return value is in 1..ncandidates-1 (for backwards compatibility) Value a valid non-NULL value for the number of seats to be filled check.ranking check the validity of a partial ranking Description check the validity of a partial ranking Usage check.ranking(r) Arguments r a numeric vector Value a partial ranking of the elements of r, using ties.method="min" check.votes undocumented internal method Description undocumented internal method Usage check.votes(x, ..., quiet = FALSE) Arguments x, quiet, ... undocumented Value undocumented check.votes.approval undocumented internal method Description undocumented internal method Usage check.votes.approval(record, ...) Arguments record, ... undocumented Value undocumented check.votes.condorcet undocumented internal method Description undocumented internal method Usage check.votes.condorcet(record, ...) Arguments record, ... undocumented Value undocumented check.votes.plurality undocumented internal method Description undocumented internal method Usage check.votes.plurality(record, ...) Arguments record, ...
undocumented Value undocumented check.votes.score undocumented internal method Description undocumented internal method Usage check.votes.score(record, max.score, ...) Arguments record, max.score, ... undocumented Value undocumented check.votes.stv undocumented internal method Description undocumented internal method Usage check.votes.stv(record, equal.ranking = FALSE, ...) Arguments record, equal.ranking, ... undocumented Value undocumented check.votes.tworound.runoff undocumented internal method Description undocumented internal method Usage check.votes.tworound.runoff(record, ...) Arguments record, ... undocumented Value undocumented combineRankings the least upper bound on a pair of rankings Description the least upper bound on a pair of rankings Usage combineRankings(r1, r2) Arguments r1, r2 numeric vectors Value the most complete (but possibly partial) ranking which is consistent with both r1 and r2. Uses ties.method="min" Examples combineRankings( c(3,1,2), c(2,1,3) ) completeRankingTable internal method to analyse the partial results of an stv() ballot count, to discover a complete ranking of all candidates. The ranking may de- pend on the value of nseats, because this affects how votes are trans- ferred. Description internal method to analyse the partial results of an stv() ballot count, to discover a complete ranking of all candidates. The ranking may depend on the value of nseats, because this affects how votes are transferred. Usage completeRankingTable(object, quiet, verbose) Arguments object partial results quiet TRUE to suppress console output verbose TRUE to produce diagnostic output Value data.frame with columns TotalRank, Margin, Candidate, Elected, SafeRank condorcet Count votes using the Condorcet voting method. Description The Condorcet method elects the candidate who wins a majority of the ranked vote in every head to head election against each of the other candidates. 
A Condorcet winner is a candidate who beats all other candidates in pairwise comparisons. Analogously, a Condorcet loser is a candidate who loses against all other candidates. Neither a Condorcet winner nor a Condorcet loser need exist. Usage condorcet( votes, runoff = FALSE, nseats = 1, safety = 1, fsep = "\t", quiet = FALSE, ... ) Arguments votes A matrix or data.frame containing the votes. Rows correspond to the votes, columns correspond to the candidates. If votes is a character string, it is interpreted as a file name from which the votes are to be read. See below. runoff Logical. If TRUE and no Condorcet winner exists, the election goes into a run-off, see below. nseats the number of seats to be filled in this election safety Parameter for a clustering heuristic on a total ranking of the candidates. Conjecture: the default of 1.0 ensures a separation of one s.d. between clusters, when votes are i.u.d. permutations on the candidates. fsep If votes is a file name, this argument gives the column separator in the file. quiet If TRUE no output is printed. ... Undocumented intent (preserved from legacy code) Details If the runoff argument is set to TRUE and no Condorcet winner exists, two or more candidates with the most pairwise wins are selected and the method is applied to that subset. If more than two candidates are in such a run-off, the selection is performed repeatedly, until either a winner is selected or no more selection is possible. The input data votes is structured the same way as for the stv method: Row i contains the preferences of voter i, numbered 1, 2, ..., r, 0, ..., 0, in some order, while equal preferences are allowed. The columns correspond to the candidates. The dimnames of the columns are the names of the candidates; if these are not supplied then the candidates are lettered A, B, C, .... If the dataset contains missing values (NA), they are replaced by zeros.
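The head-to-head rule in the Description above can be sketched independently of the package; a minimal illustration (the helper name is ours, and the ballots use the rank-1-is-best encoding described in the Details):

```python
# Ballots: one row per voter, one column per candidate; rank 1 = most preferred.
ballots = [
    [1, 2, 3],
    [1, 3, 2],
    [2, 1, 3],
]

def condorcet_winner(ballots):
    """Index of the candidate who beats every rival head-to-head, or None."""
    n = len(ballots[0])
    majority = len(ballots) / 2
    for i in range(n):
        if all(sum(b[i] < b[j] for b in ballots) > majority
               for j in range(n) if j != i):
            return i
    return None

print(condorcet_winner(ballots))  # candidate 0 beats both rivals
```

A preference cycle (e.g. rock-paper-scissors ballots) makes the function return None, which is exactly the situation the runoff argument is meant to handle.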
If a ballot has equally-ranked candidates, its rankings are tested for validity: for each preference i which does not have any duplicate, there are exactly i−1 preferences j with 0 < j < i. If any ballot x fails this validity test, it is automatically corrected (aka "converted") into a valid ballot using x <- rank(x, ties.method = "min"), and a warning is issued. This method also computes a Borda ranking of all candidates, using tournament-style scoring. This ranking is "fuzzed" into a safeRank, with approximately 1 s.d. of fuzz when safety=1.0 and voter preferences are i.u.d. A warning is thrown if a safeRank violates the (extended) Condorcet principle: that Candidate i is more highly ranked than Candidate j only if a majority of voters agree with this. Value Object of class SafeVote.condorcet Examples { data(food_election) condorcet(food_election) } correct.ranking Amend ballots with equal or incomplete preferences Description The correct.ranking function returns a modified set of ballots. Its argument partial determines if ballots are partially set to 0 (TRUE), or if it is a complete re-ranking, as allowed when equal.ranking = TRUE. It can be used by calling it explicitly. It is called by stv if equal.ranking = TRUE or invalid.partial = TRUE. It is also called from within the condorcet function with the default value (FALSE) for partial, i.e. interpreting any 0 as a last preference. Usage correct.ranking(votes, partial = FALSE, quiet = FALSE) Arguments votes original contents of ballot box partial if FALSE (default), each ballot is interpreted, if possible, as a complete (but not necessarily total) ranking of the candidates. If TRUE, a ballot will contain a 0 on unranked candidates. quiet suppress diagnostics Value corrected ballots dublin_west Dublin West Description Dataset containing ranked votes for the Dublin West constituency in 2002, Ireland. Usage data(dublin_west) Format A data frame with 29988 observations and 9 candidates.
Each record corresponds to one ballot with candidates being ranked between 1 and 9 with zeros allowed. See Also Wikipedia election.info prints the basic results of an election Description prints the basic results of an election Usage election.info(x) Arguments x basic election results, as named attributes of an R structure or object Value data.frame : an invisible copy of the printed results TODO: refactor into a modern dialect of R, e.g. defining a constructor for an election_info S3 object with a print method extractMargins extract margins from the results of a ballot count Description extract margins from the results of a ballot count Usage extractMargins(marginNames, crRanks, cr) Arguments marginNames list of colnames of the margins in our SafeRank result crRanks ranks of candidates, not necessarily total cr structure returned by a ballot-counting method Margins are adjusted for tied candidates, such that candidates within a tie group have margins indicative of their relative strengths. Extremely small margins are indicative of floating-point roundoff errors. Value named list of margins extractRank Extract a ranking vector by name from the results of a ballot count Description Extract a ranking vector by name from the results of a ballot count Usage extractRank(rankMethod, cr) Arguments rankMethod "safeRank", "elected", or "rank" cr structure returned by a ballot-counting method Value a numeric ranking vector, in order of colnames(cr$data) food_election Food Election Description Sample data for testing SafeVote Usage data(food_election) Format A data frame with 20 observations and 5 candidates (Oranges, Pears, Chocolate, Strawberries, Sweets). Each record corresponds to one ballot with ranking for each of the candidates. 
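The equal-ranking validity test and min-rank correction described under condorcet() can be sketched as follows, a hedged Python rendering of R's rank(x, ties.method = "min") for fully-ranked ballots (helper names are ours):

```python
from collections import Counter

def min_rank(x):
    """Python rendering of R's rank(x, ties.method = "min")."""
    return [1 + sum(v < xi for v in x) for xi in x]

def is_valid_ranking(x):
    """For each preference i > 0 with no duplicate, exactly i - 1
    preferences j with 0 < j < i must appear on the ballot."""
    counts = Counter(x)
    return all(sum(c for v, c in counts.items() if 0 < v < i) == i - 1
               for i, c in counts.items() if c == 1 and i > 0)

ballot = [1, 1, 2, 3]            # invalid: after a tie for 1st, next rank must be 3rd
assert not is_valid_ranking(ballot)
assert min_rank(ballot) == [1, 1, 3, 4]
assert is_valid_ranking(min_rank(ballot))
```

The corrected ballot [1, 1, 3, 4] is exactly the "converted" form the package warns about: two candidates tied for first consume ranks 1 and 2, so the next distinct preference becomes 3.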
forwards.tiebreak Undocumented internal method Description Undocumented internal method Usage forwards.tiebreak(prefs, icans, elim = TRUE) Arguments prefs undocumented icans undocumented elim undocumented image.SafeVote.condorcet The image function visualizes the joint distribution of two preferences (if all.pref=FALSE) given xpref and ypref, as well as the marginal distribution of all preferences (if all.pref=TRUE). The joint distribution can be shown as proportions (if proportion=TRUE) or raw vote counts (if proportion=FALSE). Description The image function visualizes the joint distribution of two preferences (if all.pref=FALSE) given xpref and ypref, as well as the marginal distribution of all preferences (if all.pref=TRUE). The joint distribution can be shown as proportions (if proportion=TRUE) or raw vote counts (if proportion=FALSE). Usage ## S3 method for class 'SafeVote.condorcet' image(x, ...) Arguments x object of type SafeVote.condorcet ... See arguments for image.SafeVote.stv, especially xpref, ypref, all.pref and proportion. Value image object, with side-effect in RStudio Plots pane image.SafeVote.stv visualisation of joint and marginal distributions in STV preferences Description visualisation of joint and marginal distributions in STV preferences Usage ## S3 method for class 'SafeVote.stv' image(x, xpref = 2, ypref = 1, all.pref = FALSE, proportion = TRUE, ...) Arguments x STV results to be visualised xpref, ypref candidates shown in a joint distribution plot all.pref plot the joint distribution of two preferences (if all.pref=FALSE) or the marginal distribution of all preferences (if all.pref=TRUE). proportion The joint distribution can be shown either as proportions (if proportion=TRUE) or raw vote counts (if proportion=FALSE). ... args passed to fields::image.plot() Value image object, with side-effect in RStudio Plots pane ims_approval IMS Approval Description Modified version of ims_election, for use in approval voting.
Usage data(ims_approval) Format A data frame with 620 observations and 10 candidates (names were made up). Each record corresponds to one ballot, with 0 indicating disapproval of a candidate and 1 indicating approval. ims_election IMS Election Description Datasets containing anonymized votes for a past Council election of the Institute of Mathematical Statistics (IMS). The dataset ims_election is the original dataset used with single transferable vote, where candidate names have been changed. Usage data(ims_election) Format A data frame with 620 observations and 10 candidates (names were made up). Each record corresponds to one ballot. The IMS Council voting is done using the STV method, and thus the ims_election dataset contains ballots with candidates being ranked between 1 and 10 with zeros allowed. ims_plurality IMS Plurality Description Modified version of ims_election, for use in plurality voting. Usage data(ims_plurality) Format A data frame with 620 observations and 10 candidates (names were made up). Each record corresponds to one ballot, with 1 against the voter’s most-preferred candidate and 0 against all other candidates. ims_score IMS Score Description Modified version of ims_election, for use in score voting. Usage data(ims_score) Format A data frame with 620 observations and 10 candidates (names were made up). Each record corresponds to one ballot, with higher values indicating the more-preferred candidates. ims_stv IMS STV Description Copy of ims_election, included for backwards compatibility. Usage data(ims_election) Format A data frame with 620 observations and 10 candidates (names were made up). Each record corresponds to one ballot. The IMS Council voting is done using the STV method, and thus the ims_election dataset contains ballots with candidates being ranked between 1 and 10 with zeros allowed.
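The ims_* variants above are all derived from the same ranked ballots; a hedged sketch of such derivations (the cutoff and scoring rule are illustrative assumptions, not the conversion rules actually used to build these datasets):

```python
def to_plurality(ballot):
    # 1 for the most-preferred (rank 1) candidate, 0 elsewhere.
    return [1 if r == 1 else 0 for r in ballot]

def to_approval(ballot, cutoff=3):
    # Approve every candidate ranked at or above the cutoff (0 = unranked).
    return [1 if 0 < r <= cutoff else 0 for r in ballot]

def to_score(ballot, max_score=10):
    # Higher score for more-preferred candidates; unranked (0) scores 0.
    return [max_score + 1 - r if r > 0 else 0 for r in ballot]

ballot = [2, 1, 0, 3]            # rank 1 = most preferred, 0 = unranked
assert to_plurality(ballot) == [0, 1, 0, 0]
assert to_approval(ballot) == [1, 1, 0, 1]
assert to_score(ballot) == [9, 10, 0, 8]
```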
invalid.votes Extracts the invalid.votes member (if any) from the result of a count Description This method was added Jan 2022 – it was named in a warning message but had apparently either never been implemented, or had been "lost" through versioning. Usage invalid.votes(x) Arguments x value returned by stv, condorcet, approval, plurality, or score Value matrix with one column per candidate and one row per invalid ballot is.SafeRankExpt is.SafeRankExpt() Description is.SafeRankExpt() Usage is.SafeRankExpt(x) Arguments x object of unknown class Value TRUE if x is a valid SafeRankExpt object is.valid.vote undocumented internal method Description undocumented internal method Usage is.valid.vote(x, method, ...) Arguments x, method, ... undocumented Value undocumented loserMargin Find a loser and their margin of victory Description Find a loser and their margin of victory Usage loserMargin(votes) Arguments votes cleaned ballots Value length-2 vector: the index of a losing candidate, and their margin of loss (0 if a tie, NA if no winners) new_SafeRankExpt Constructor for the results of a SafeRank experiment Description Constructor for the results of a SafeRank experiment Usage new_SafeRankExpt( rankNames = list(), marginNames = list(), countMethod = character(0), rankMethod = character(0), datasetName = character(0), experimentalMethod = character(0), countArgs = list(), nseats = integer(0), otherFactors = list(), unitFactors = list() ) Arguments rankNames colnames for per-candidate ranks marginNames colnames for per-candidate margins countMethod secondary factor: counting method e.g. "stv" rankMethod secondary factor: ranking method e.g. "elected" datasetName secondary factor: name of the dataset of ballots experimentalMethod secondary factor: name of the method which simulated these elections e.g. "testFraction" countArgs secondary factor: args passed to countMethod nseats secondary factor: number of seats to be filled otherFactors other secondary factors, e.g.
parameters to experimentalMethod unitFactors per-unit factors derived from PRNG of the experimental harness, e.g. describing the ballots randomly deleted during testDeletions Value object of class SafeRankExpt ordered.preferences Undocumented internal method Description Undocumented internal method Usage ordered.preferences(vmat) Arguments vmat undocumented ordered.tiebreak Undocumented internal method Description Undocumented internal method Usage ordered.tiebreak(vmat, seed = NULL) Arguments vmat undocumented seed undocumented plot.SafeRankExpt plot() method for the result of an experiment with varying numbers of ballots Description The "adjusted rank" of a candidate is their ranking r plus their scaled "winning margin". The scaled margin is exp(−cx/√n), where x is the adjusted margin (i.e. the number of votes by which this candidate is ahead of the next-weaker candidate, adjusted for the number of ballots n and the number of seats s), and c > 0 is the margin-scaling parameter cMargin. Usage ## S3 method for class 'SafeRankExpt' plot( x, facetWrap = FALSE, nResults = NA, anBallots = 0, cMargin = 1, xlab = "Ballots", ylab = "Adjusted Rank", title = NULL, subtitle = "(default)", line = TRUE, boxPlot = FALSE, boxPlotCutInterval = 10, pointSize = 1, ... ) Arguments x object containing experimental results facetWrap TRUE provides per-candidate scatterplots nResults number of candidates whose results are plotted (omitting the least-favoured candidates first) anBallots, cMargin parameters in the rank-adjustment formula xlab, ylab axis labels title overall title for the plot. Default: NULL subtitle subtitle for the plot. Default: value of nSeats and any non-zero rank-adjustment parameters line TRUE will connect points with lines, and will disable jitter boxPlot TRUE for a boxplot, rather than the default xy-scatter boxPlotCutInterval parameter of boxplot, default 10 pointSize diameter of points ...
params for generic plot() Details The default value of cMargin=1.0 draws visual attention to candidates with a very small winning margin, as their adjusted rank is very near to r + 1. Candidates with anything more than a small winning margin have only a small rank adjustment, due to the exponential scaling. A scaling linear in s/n is applied to margins when anBallots>0. Such a linear scaling may be a helpful way to visualise the winning margins in STV elections because the margin of victory for an elected candidate is typically not much larger than the quota of n/(s + 1) (Droop) or n/s (Hare). The linear scaling factor is as/n, where a is the value of anBallots, s is the number of seats, and n is the number of ballots. For plotting on the (inverted) adjusted rank scale, the linearly-scaled margin is added to the candidate’s rank. Note that the linearly-scaled margins are zero when a = 0, and thus have no effect on the adjusted rank. You might want to increase the value of anBallots, starting from 1.0, until the winning candidate’s adjusted rank is 1.0 when all ballots are counted, then confirm that the adjusted ranks of other candidates are still congruent with their ranking (i.e. that the rank-adjustment is less than 1 in all cases except perhaps on an initial transient with small numbers of ballots). When both anBallots and cMargin are non-zero, the ranks are adjusted with both exponentially-scaled margins and linearly-scaled margins. The resulting plot would be difficult to interpret in a valid way. Todo: Accept a list of SafeVoteExpt objects. Todo: Multiple counts with the same number of ballots could be summarised with a box-and-whisker graphic, rather than a set of jittered points. Todo: Consider developing a linear scaling that is appropriate for plotting stochastic experimental data derived from Condorcet elections.
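The rank adjustment in the Description and Details above can be made concrete; a minimal numeric sketch of our reading of the two scalings (the function name is ours, not the package's):

```python
import math

def adjusted_rank(rank, margin, n_ballots, n_seats,
                  c_margin=1.0, an_ballots=0.0):
    """rank + exp(-c * x / sqrt(n)), plus (a * s / n) * x when an_ballots > 0."""
    adj = rank + math.exp(-c_margin * margin / math.sqrt(n_ballots))
    if an_ballots > 0:
        adj += (an_ballots * n_seats / n_ballots) * margin  # linear scaling
    return adj

# A vanishing margin pushes the adjusted rank toward rank + 1 ...
assert adjusted_rank(2, 0.0, n_ballots=100, n_seats=3) == 3.0
# ... while a comfortable margin leaves it close to the raw rank.
assert adjusted_rank(2, 50.0, n_ballots=100, n_seats=3) < 2.01
```

This makes the Details' observation quantitative: the exponential term distinguishes "barely ahead" from "safely ahead", and the linear term only participates when anBallots is switched on.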
Value graphics object, with side-effect in RStudio Plots pane plot.SafeVote.stv plot() method for the result of an stv() ballot-count Description The plot function shows the evolution of the total score for each candidate as well as the quota. Usage ## S3 method for class 'SafeVote.stv' plot(x, xlab = "Count", ylab = "Preferences", point.size = 2, ...) Arguments x stv results xlab, ylab axis labels point.size diameter of elected/eliminated points ... params for generic plot() Value graphics object, with side-effect in RStudio’s Plots pane plurality Count votes using the plurality method Description See https://arxiv.org/abs/2102.05801 Usage plurality(votes, nseats = 1, fsep = "\t", quiet = FALSE, ...) Arguments votes, nseats, fsep, quiet, ... undocumented Value undocumented prepare.votes Coerce input ’data’ into a matrix Description Coerce input ’data’ into a matrix Usage prepare.votes(data, fsep = "\n") Arguments data possibly a .csv file, possibly an R object fsep separation character for .csv e.g. tab or comma Value a matrix with one row per ballot, one column per candidate, with named rows and columns print.summary.SafeRankExpt Print method for summary.SafeRankExpt Description Print method for summary.SafeRankExpt Usage ## S3 method for class 'summary.SafeRankExpt' print(x, ...) Arguments x experimental results ... args for generic print() Value invisible(x), with side-effects to console print.summary.SafeVote.approval print method for summary object Description print method for summary object Usage ## S3 method for class 'summary.SafeVote.approval' print(x, ...) Arguments x, ... undocumented Value undocumented print.summary.SafeVote.condorcet print method for summary.SafeVote.condorcet Description print method for summary.SafeVote.condorcet Usage ## S3 method for class 'summary.SafeVote.condorcet' print(x, ...) Arguments x object of type summary.SafeVote.condorcet ...
parameters passed to generic print Value textual description of x print.summary.SafeVote.plurality print method for summary of plurality object Description print method for summary of plurality object Usage ## S3 method for class 'summary.SafeVote.plurality' print(x, ...) Arguments x, ... undocumented Value undocumented print.summary.SafeVote.score print method for summary.score object Description print method for summary.score object Usage ## S3 method for class 'summary.SafeVote.score' print(x, ...) Arguments x, ... undocumented Value undocumented print.summary.SafeVote.stv print() method for a summary() of a SafeVote result Description print() method for a summary() of a SafeVote result Usage ## S3 method for class 'summary.SafeVote.stv' print(x, ...) Arguments x election results ... args to be passed to kable() Value no return value, called for side-effect of printing to console rbind.SafeRankExpt add a row to a SafeRankExpt object Description add a row to a SafeRankExpt object Usage ## S3 method for class 'SafeRankExpt' rbind(object, row) Arguments object prior results of experimentation row new observations Value SafeRankExpt object with an additional row readHil read a set of ballots in .HIL format Description rangevoting.org/TidemanData.html: The data are in a format developed by <NAME>. The first line contains the number of candidates and the number to be elected. (Many but not all elections were multi-winner.) In subsequent lines that represent ballot papers, the first number is always 1. (The format was designed for a counting program that treats the first number as the number of instances of the ordering of the candidates on the line.) Next on these lines is a sequence of numbers representing a voter’s reported ranking: The number of the candidate ranked first, the number of the candidate ranked second, and so on. The end of the reported ranking is signaled by a zero. 
A zero at the beginning of the ranking is a signal that the list of ballot papers has ended. Next come the names of the candidates, each in parentheses, as required by the counting program, and finally the name of the election.

Usage
readHil(filnm, quiet = FALSE)

Arguments
filnm   name of a file in .HIL format
quiet   suppress diagnostic output

Value
a matrix with one row per ballot, one column per candidate, with named rows and columns, and with attributes "nseats" and "ename"

remove.candidate    Remove a candidate, amending ballot papers as required

Description
Remove a candidate, amending ballot papers as required

Usage
remove.candidate(votes, can, quiet = TRUE)

Arguments
votes   ballot box
can     candidate to be removed
quiet   suppress diagnostics

Value
amended ballot box

score    Count votes using the score (or range) method.

Description
See https://arxiv.org/abs/2102.05801

Usage
score(votes, nseats = 1, max.score = NULL, larger.wins = TRUE, fsep = "\t", quiet = FALSE, ...)

Arguments
votes, nseats, max.score, larger.wins, fsep, quiet, ...    undocumented

Value
undocumented

solveTiebreak    Undocumented internal method, renamed from 'solve.tiebreak' to avoid confusion with generic solve()

Description
Undocumented internal method, renamed from 'solve.tiebreak' to avoid confusion with generic solve()

Usage
solveTiebreak(method, prefs, icans, ordered.ranking = NULL, elim = TRUE)

Arguments
method            undocumented
prefs             undocumented
icans             undocumented
ordered.ranking   undocumented
elim              undocumented

Value
undocumented

stv    Count preferential ballots using an STV method

Description
The votes parameter is as described in condorcet() with the following additional semantics.
Usage
stv(votes, nseats = NULL, eps = 0.001, equal.ranking = FALSE, fsep = "\t",
    ties = c("f", "b"), quota.hare = FALSE, constant.quota = FALSE,
    win.by.elim = TRUE, group.nseats = NULL, group.members = NULL,
    complete.ranking = FALSE, invalid.partial = FALSE, verbose = FALSE,
    seed = NULL, quiet = FALSE, digits = 3, backwards.compatible = FALSE,
    safety = 1, ...)

Arguments
votes    an array with one column per candidate and one row per ballot, as described in condorcet()
nseats   the number of seats to be filled in this election
eps      fuzz-factor when comparing fractional votes. The default of 0.001 is preserved from the legacy code, injecting substantial validity hazards into the codebase. We have not attempted to mitigate any of these hazards in SafeVote v1.0.0. We prefer instead to retain backwards-compatibility with the legacy code in vote_2.3-2 in the knowledge that, even if these hazards were adequately addressed, the resulting code is unlikely to be reliable at replicating the results of any other implementation of any of the many variants of "STV" counting methods. Please see the description of the a53_hil dataset in this package for some preliminary findings on the magnitude of the vote-count variances which may be injected by differing implementations of broadly-similar "STV" counting methods.
equal.ranking    if TRUE, equal preferences are allowed.
fsep             column-separator for output
ties             vector of tie-breaking methods: 'f' for forward, 'b' for backward
quota.hare       TRUE if Hare quota, FALSE if Droop quota (default)
constant.quota   TRUE if quota is held constant. Over-rides quota.hare. Default is FALSE
win.by.elim      TRUE (default) if the quota is waived when there are no more candidates than vacant seats. Note: there is no lower limit when the quota is waived, so a candidate may be elected on zero votes.
group.nseats     number of seats reserved to members of a group
group.members    vector of members of the group with reserved seats
complete.ranking is TRUE by default.
This parameter is retained solely for backwards compatibility with vote::stv(). It has no effect on elections in which nseats is explicitly specified in the call to stv().
invalid.partial  TRUE if ballots which do not specify a complete ranking of candidates are informal (aka "invalid"), i.e. ignored (with a warning). Default is FALSE.
verbose          TRUE for diagnostic output
seed             integer seed for tie-breaking. Warning: if non-NULL, the PRNG for R is reseeded prior to every random tie-break among the possibly-elected candidates. We have preserved this functionality in this branch to allow regression against the legacy codebase of vote::stv(). In stv() the default value for seed is NULL rather than the legacy value of 1234, to mitigate the validity hazard of PRNG reseedings during a stochastic experiment.
quiet            TRUE to suppress console output
digits           number of significant digits in the output table
backwards.compatible    TRUE to regress against vote2_3.2 by disabling $margins, $fuzz, $rankingTable, $safeRank
safety           number of standard deviations on vote-counts, when producing a safeRank by clustering near-ties in a complete ranking
...              undocumented intent (preserved from legacy code)

Details
By default the preferences are not allowed to contain duplicates per ballot. However, if the argument equal.ranking is set to TRUE, ballots are allowed to have the same ranking for multiple candidates. The desired format is such that for each preference i that does not have any duplicate, there must be exactly i - 1 preferences j with 0 < j < i. For example, valid ordered preferences are 1, 1, 3, 4, ... or 1, 2, 3, 3, 3, 6, ..., but NOT 1, 1, 2, 3, ... and NOT 1, 2, 3, 3, 3, 5, 6, .... If the data contain such invalid votes, they are automatically corrected and a warning is issued by calling the correct.ranking function.
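The equal-ranking validity rule above (every rank i that appears must be preceded by exactly i - 1 smaller positive ranks, i.e. standard "competition" ranking) can be checked mechanically. The following Python sketch is purely illustrative; it is not the package's correct.ranking implementation, which also repairs invalid ballots rather than merely flagging them:

```python
def valid_equal_ranking(ballot):
    """Check the equal.ranking validity rule: every rank i appearing on the
    ballot must have exactly i - 1 smaller (positive) ranks on that ballot,
    i.e. ranks follow standard competition ('1224') ranking.

    Illustrative sketch only -- not SafeVote's correct.ranking()."""
    ranks = [r for r in ballot if r > 0]
    return all(sum(1 for j in ranks if j < i) == i - 1 for i in set(ranks))

# Worked examples from the documentation:
#   valid:   1,1,3,4   and  1,2,3,3,3,6
#   invalid: 1,1,2,3   and  1,2,3,3,3,5,6
```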
If equal ranking is not allowed (equal.ranking = FALSE), the argument invalid.partial can be used to make ballots containing duplicates or gaps partially valid. If it is TRUE, a ballot is considered valid up to a preference that would normally not be allowed. For example, the ballots 1, 2, 3, 4, 4, 6 and 1, 2, 3, 5, 6, 7 would both be converted into 1, 2, 3, 0, 0, 0, because these ballots contain a valid ranking only up to the third preference.

By default, ties in the STV algorithm are resolved using the forwards tie-breaking method, see Newland and Briton (Section 5.2.5). Argument ties can be set to "b" in order to use the backwards tie-breaking method, see O'Neill (2004). In addition, both methods are complemented by the following "ordered" method: prior to the STV election, candidates are ordered by the number of first preferences. Equal ranks are resolved by moving to the number of second preferences, then third and so on. Remaining ties are broken by random draws. Such a complete ordering is used to break any tie that cannot be resolved by the forwards or backwards method. If there is at least one tie during the processing, the output contains a row indicating in which count a tie-break happened (see the ties element in the Value section for an explanation of the symbols).

The ordered tiebreaking described above can be analysed from outside of the stv function by using the ordered.tiebreak function for viewing the a-priori ordering (the highest number is the best and the lowest is the worst). Such a ranking is produced by comparing candidates along the columns of the matrix returned by ordered.preferences.

Value
object of class vote.stv. Note: the winning margins in this object are valid for the elected candidates and their (total) ranking, but must be adjusted within tiegroups to be valid for the candidates' (possibly partial) safeRank.
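The invalid.partial behaviour described above (a ballot stays valid up to the first duplicate or gap in its ranking) can be sketched as follows. This is an illustrative Python rendering of the documented conversion, not SafeVote's actual code:

```python
from collections import Counter

def truncate_partial(ballot):
    """Zero out preferences beyond the last contiguous, duplicate-free rank,
    mirroring the documented invalid.partial = TRUE conversion.
    Illustrative sketch only -- not the package's implementation."""
    counts = Counter(r for r in ballot if r > 0)
    k = 0
    while counts.get(k + 1, 0) == 1:  # ranks 1..k are each used exactly once
        k += 1
    return [r if 0 < r <= k else 0 for r in ballot]
```

Both documented examples (1, 2, 3, 4, 4, 6 with a duplicate, and 1, 2, 3, 5, 6, 7 with a gap) convert to 1, 2, 3, 0, 0, 0 under this rule.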
Examples
data(food_election)
stv(food_election, safety = 0.0)
stv(food_election, nseats = 2)

summary.SafeRankExpt    summary method for SafeRankExpt

Description
summary method for SafeRankExpt

Usage
## S3 method for class 'SafeRankExpt'
summary(object, ...)

Arguments
object  experimental results to be summarised
...     args for generic summary()

Value
summary.SafeRankExpt object

summary.SafeVote.approval    summary method for approval results

Description
summary method for approval results

Usage
## S3 method for class 'SafeVote.approval'
summary(object, ...)

Arguments
object, ...    undocumented

Value
undocumented

summary.SafeVote.condorcet    Summary method for condorcet() results

Description
Summary method for condorcet() results

Usage
## S3 method for class 'SafeVote.condorcet'
summary(object, ...)

Arguments
object  of type SafeVote.condorcet
...     undocumented, currently unused

Value
data.frame object

summary.SafeVote.plurality    summary method for plurality object

Description
summary method for plurality object

Usage
## S3 method for class 'SafeVote.plurality'
summary(object, ...)

Arguments
object, ...    undocumented

Value
descriptive dataframe

summary.SafeVote.score    summary method for score object

Description
summary method for score object

Usage
## S3 method for class 'SafeVote.score'
summary(object, ...)

Arguments
object, ...    undocumented

Value
undocumented

summary.SafeVote.stv    summary() method for a SafeVote result

Description
summary() method for a SafeVote result

Usage
## S3 method for class 'SafeVote.stv'
summary(object, ..., digits = 3)

Arguments
object  undocumented, legacy code
...
undocumented
digits  undocumented

Value
data.frame summarising object, for use by print method

sumOfVotes    internal method, computes column-sums

Description
Renamed from 'sum.votes' to avoid confusion with the generic sum()

Usage
sumOfVotes(votes)

Arguments
votes   ballots are rows, candidates are columns

Value
vector of votes for each candidate

testAdditions    Test the sensitivity of a result to tactical voting.

Description
Ballots are added until a specified number of simulated elections (arep) have been held. If a favoured candidate is specified, then the ballot-box is stuffed with ballots awarding first-preference to this candidate. Alternatively, a tacticalBallot may be specified. If both favoured and tacticalBallot are NULL, then a random candidate is selected as the favoured one.

Usage
testAdditions(votes, ainc = 1, arep = NULL, favoured = NULL,
    tacticalBallot = NULL, rankMethod = "safeRank", countMethod = "stv",
    countArgs = list(), exptName = NULL, equiet = FALSE, everbose = FALSE)

Arguments
votes          A set of ballots, as in vote_2.3.2
ainc           Number of ballots to be added in each step
arep           Maximum number of ballot-stuffed elections to run
favoured       Name of the candidate being "plumped". If NULL, a random candidate is selected from among the candidates not initially top-ranked. All other candidates are fully-ranked at random, with an identical ballot paper being stuffed multiple times. An integer value for favoured is interpreted as an index into the candidate names.
tacticalBallot A ballot paper, i.e. a vector of length ncol(ballots). If this argument is non-NULL, it takes precedence over favoured when the ballot box is being stuffed.
rankMethod     "safeRank" (default), "elected", or "rank". "rank" is a total ranking of the candidates, with ties broken at random. "elected" assigns rank=1 to elected candidates, rank=2 for eliminated candidates.
countMethod    "stv" (default) or "condorcet"
countArgs      List of args to be passed to countMethod (in addition to votes)
exptName       stem-name of experimental units e.g. "E". If NULL, then a 3-character string of capital letters is chosen at random.
equiet         TRUE to suppress all experimental output
everbose       TRUE to produce diagnostic output from the experiment

Value
A matrix of experimental results, of dimension n by 2m + 1, where n is the number of elections and m is the number of candidates. The first column is named "nBallots". Other columns indicate the ranking of the eponymous candidate, and their margin over the next-lower-ranked candidate.

Examples
data(food_election)
testAdditions(food_election, arep = 2, favoured = "Strawberries", countArgs = list(safety = 0))

testDeletions    Assess the safety of a preliminary result for an election

Description
Ballots are deleted at random from the ballot-box, with election results computed once per dinc ballot-deletions. The experiment terminates after a specified number of ballots have been deleted, or a specified number of ballot-counts have occurred. Note: these ballot-counts are correlated. Use testFraction() to experiment with independently-drawn samples from the ballot-box.

Usage
testDeletions(votes, countMethod = "stv", countArgs = list(), dstart = NULL,
    dinc = NULL, dlimit = NULL, drep = NULL, rankMethod = "safeRank",
    exptName = NULL, equiet = FALSE, everbose = FALSE)

Arguments
votes        A set of ballots, as in vote_2.3.2
countMethod  "stv" (default) or "condorcet"
countArgs    List of args to be passed to countMethod (in addition to votes)
dstart       Number of ballots in the first ballot-count (selected at random from votes, without replacement)
dinc         Number of ballots to be deleted in subsequent steps
dlimit       Maximum number of ballots to delete (in addition to dstart)
drep         Maximum number of elections (required if dinc=0)
rankMethod   "safeRank" (default), "elected", or "rank".
"rank" is a total ranking of the candidates, with ties broken at random. "elected" assigns rank=1 to elected candidates, rank=2 for eliminated candidates.
exptName     stem-name of experimental units e.g. "E". If NULL, then a 3-character string of capital letters is chosen at random.
equiet       TRUE to suppress all experimental output
everbose     TRUE to produce diagnostic output from the experiment

Value
SafeRankExpt object, describing this experiment and its results

Examples
data(food_election)
testDeletions(food_election)
testDeletions(food_election, countMethod="stv", countArgs=list(complete.ranking=TRUE))

testFraction    Bootstrapping experiment, with fractional counts of a ballot box.

Description
Starting from some number (astart) of randomly-selected ballots, an increasingly-large collection of randomly-selected ballots are counted. The ballots are chosen independently without replacement for each experimental unit; if you want to count decreasingly-sized portions of a single sample of ballots, use testDeletions().

Usage
testFraction(votes = NULL, astart = NULL, ainc = NULL, arep = NULL, trep = NULL,
    rankMethod = "safeRank", countMethod = "stv", countArgs = list(),
    exptName = NULL, equiet = FALSE, everbose = FALSE)

Arguments
votes        A numeric matrix: one row per ballot, one column per candidate
astart       Starting number of ballots (min 2)
ainc         Number of ballots to be added in each step. Must be non-negative.
arep         Number of repetitions of the test on each step. Required to be non-NULL if ainc=0 && is.null(trep).
trep         Limit on the total number of simulated elections. Required to be non-NULL if ainc=0 && is.null(arep).
rankMethod   "safeRank" (default), "elected", or "rank". "rank" is a total ranking of the candidates, with ties broken at random. "elected" assigns rank=1 to elected candidates, rank=2 for eliminated candidates.
countMethod  "stv" (default) or "condorcet"
countArgs    List of args to be passed to countMethod (in addition to votes)
exptName     stem-name of experimental units e.g. "E". If NULL, then a 3-character string of capital letters is chosen at random.
equiet       TRUE to suppress all experimental output
everbose     TRUE to produce diagnostic output from the experiment

Value
a SafeRankExpt object of experimental results.

Examples
data(food_election)
testFraction(food_election, countMethod="condorcet", countArgs=list(safety=0.5,complete.ranking=TRUE))
testFraction(dublin_west, astart=20, ainc=10, arep=2, trep=3, countMethod="stv", rankMethod="elected", equiet=FALSE)

translate.ties    Undocumented internal method from original code

Description
Undocumented internal method from original code

Usage
translate.ties(ties, method)

Arguments
ties     undocumented
method   'f' for forward, 'b' for backward

Value
undocumented

uk_labour_2010    UK Labour Party Leader 2010

Description
These are the ballots cast by Labour MPs and MEPs in an election of their party's leader in 2010, as published by the Manchester Guardian. The names of the electors have been suppressed in this file, but are available at rangevoting.org, along with extensive commentary on the election.

Usage
data(uk_labour_2010)

Format
A data frame with 266 observations and 5 candidates.

view    generic view() for classes defined in this package

Description
generic view() for classes defined in this package

Usage
view(object, ...)

Arguments
object  election object to be viewed
...     additional parameters, passed to formattable::formattable()

Value
html-formatted object, with side-effect in RStudio's Viewer pane

view.SafeVote.approval    view method for approval object

Description
view method for approval object

Usage
## S3 method for class 'SafeVote.approval'
view(object, ...)

Arguments
object, ...
undocumented

Value
undocumented

view.SafeVote.condorcet    view method for SafeVote.condorcet

Description
view method for SafeVote.condorcet

Usage
## S3 method for class 'SafeVote.condorcet'
view(object, ...)

Arguments
object  of type SafeVote.condorcet
...     see view.SafeVote.approval

Value
view object

view.SafeVote.plurality    view method for plurality object

Description
view method for plurality object

Usage
## S3 method for class 'SafeVote.plurality'
view(object, ...)

Arguments
object, ...    undocumented

Value
undocumented

view.SafeVote.score    view method for score object

Description
view method for score object

Usage
## S3 method for class 'SafeVote.score'
view(object, ...)

Arguments
object, ...    undocumented

Value
undocumented

view.SafeVote.stv    view method for the result of an stv() ballot-count

Description
view method for the result of an stv() ballot-count

Usage
## S3 method for class 'SafeVote.stv'
view(object, ...)

Arguments
object  object to be viewed
...     additional parameters, passed to formattable::formattable()

Value
html-formatted object

winnerMargin    Find a winner and their margin of victory

Description
Find a winner and their margin of victory

Usage
winnerMargin(votes)

Arguments
votes   cleaned ballots

Value
length-2 vector: the index of a winning candidate, and their margin of victory (0 if a tie, NA if no losers)

yale_ballots    Yale Faculty Senate 2016

Description
This data follows the structure of a 2016 Yale Faculty Senate election, with candidate names anonymised and permuted. Imported to SafeVote from STV v1.0.2, after applying the STV::cleanBallots method to remove the ten empty rows.

Usage
data(yale_ballots)

Format
A data frame with 479 observations and 44 candidates.
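The winnerMargin() contract documented above is small enough to sketch directly. Python is used here only for illustration (with None standing in for R's NA); the behaviour follows the Value description, not the package's source:

```python
def winner_margin(totals):
    """Return (index of a winning candidate, margin of victory).

    Margin is 0 on a tie for first place, and None (playing the role of
    R's NA) when there are no losing candidates. Illustrative sketch of
    winnerMargin()'s documented contract, not SafeVote's implementation."""
    best = max(totals)
    win = totals.index(best)                        # a top-scoring candidate
    rest = [t for i, t in enumerate(totals) if i != win]
    if not rest:
        return (win, None)                          # no losers
    return (win, best - max(rest))                  # 0 when tied for first
```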
github.com/ribice/glice
### glice

[![Build Status](https://travis-ci.org/ribice/glice.svg?branch=master)](https://travis-ci.org/ribice/glice) [![Coverage Status](https://coveralls.io/repos/github/ribice/glice/badge.svg?branch=master)](https://coveralls.io/github/ribice/glice?branch=master) [![Go Report Card](https://goreportcard.com/badge/github.com/ribice/glice)](https://goreportcard.com/report/github.com/ribice/glice)

Golang license and dependency checker. Prints a list of all dependencies (both from the standard library and third-party), the number of times each is used and its license, and saves all the license files in /licenses.

#### Introduction

glice analyzes the source code of your project, gets the list of all dependencies (by default only third-party ones; the standard library can be enabled with a flag) and prints their name, URL (for third-party ones, currently only GitHub is supported) and license short-name (MIT, GPL..) in a tabular format.

#### Installation

Download and install glice by executing:

```
go get github.com/ribice/glice
go install github.com/ribice/glice
```

To update:

```
go get -u github.com/ribice/glice
```

#### Usage

To run glice, navigate to a folder in your GOPATH and execute:

```
glice
```

Alternatively, you can provide the path you want scanned with the -p flag:

```
glice -p"github.com/ribice/glice/"
```

By default glice:

* Prints only to stdout
* Gets dependencies and licenses from the current directory
* Shows only third-party dependencies, grouped under the project name (e.g. github.com/author/repo/http and github.com/author/repo/http/middleware are counted as 2 github.com/author/repo, and that's what is displayed)
* Shows only dependencies of the project where you're currently located (does not go recursively through dependencies)
* Is limited to 60 API calls on GitHub (up to 60 dependencies from github.com)
* Ignores files in the vendor folder

All flags are optional.
Glice supports the following flags:

```
- s [boolean - include standard library] // return standard library dependencies as well as third-party ones
- v [boolean - verbose] // by default, github.com/author/repo/http and github.com/author/repo/http/middleware are counted as 2 times github.com/author/repo, and that's being displayed. With verbose, they are counted and shown separately.
- r [boolean - recursive] // returns dependencies of dependencies (single level only)
- f [boolean - fileWrite] // writes all license files inside the /licenses folder
- i [string - ignoreFolders] // list of comma-separated folders that should be ignored
- p [string - path] // path to be scanned in form of github.com/author/repo/
- gh [string - githubAPIKey] // used to increase GitHub API's rate limit from 60req/h to 5000req/h
- t [boolean - thanks] // if a GitHub API key is provided, setting this flag will star all GitHub repos from the dependency list. __In order to do this, the API key must have access to public_repo__
- c [boolean - count] // print usage count of dependencies in packages (max one count per package)
```

Don't forget the `-help` flag for detailed usage information.
#### Sample output

Executing glice -c on github.com/qiangxue/golang-restful-starter-kit prints (with additional colors for links and licenses):

```
+---+---+---+---+
| DEPENDENCY | COUNT | REPOURL | LICENSE |
+---+---+---+---+
| fmt | 5 | | |
| github.com/Sirupsen/logrus | 2 | https://github.com/Sirupsen/logrus | MIT |
| github.com/go-ozzo/ozzo-dbx | 3 | https://github.com/go-ozzo/ozzo-dbx | MIT |
| github.com/go-ozzo/ozzo-routing | 9 | https://github.com/go-ozzo/ozzo-routing | MIT |
| github.com/lib/pq | 2 | https://github.com/lib/pq | MIT |
| net/http | 3 | | |
| github.com/dgrijalva/jwt-go | 1 | https://github.com/dgrijalva/jwt-go | MIT |
| strconv | 1 | | |
| time | 2 | | |
| database/sql | 1 | | |
| github.com/go-ozzo/ozzo-validation | 3 | https://github.com/go-ozzo/ozzo-validation | MIT |
| github.com/spf13/viper | 1 | https://github.com/spf13/viper | MIT |
| gopkg.in/yaml.v2 | 1 | | |
| io/ioutil | 2 | | |
| sort | 1 | | |
| strings | 3 | | |
| os | 1 | | |
+---+---+---+---+
```

#### To-Do

* Improve tests and code coverage
* Add flag to disable dependencies from test files
* Implement license checking for projects hosted on GitLab.com
* Implement license checking for projects hosted on Bitbucket.org
* Implement license checking for projects hosted on other third party sites (e.g. gitea.io, gogs.io)
* Add ability to send path of project via flag
* Remove dependency on go-github
* Implement concurrency
* Add naming option when saving licenses to files (currently author-repo-license.MD)

#### License

glice is licensed under the MIT license. Check the [LICENSE](https://github.com/ribice/glice/blob/v1.0.0/LICENSE.md) file for details.

#### Author

[<NAME>](https://ribice.ba)
* Optimized Hyper-slicing in PyTables with Blosc2 NDim
* Dynamic plugins in C-Blosc2
* Bytedelta: Enhance Your Compression Toolset
* Introducing Blosc2 NDim
* 100 Trillion Rows Baby
* 20 years of PyTables
* Blosc2 Meets PyTables: Making HDF5 I/O Performance Awesome
* User Defined Pipeline for Python-Blosc2
* New features in Python-Blosc2
* Announcing Support for Lossy ZFP Codec as a Plugin for C-Blosc2
* Caterva Slicing Performance: A Study
* Registering plugins in C-Blosc2
* Wrapping C-Blosc2 in Python (a beginner's view)
* C-Blosc2 Ready for General Review
* Blosc metalayers, where the user metainformation is stored
* Introducing Sparse Frames
* Announcing Blosc Wheels
* Mid 2020 Progress Report
* C-Blosc Beast Release
* Blosc Received a $50,000 USD donation
* Blosc2-Meets-Rome
* C-Blosc2 Enters Beta Stage
* Is ARM Hungry Enough to Eat Intel's Favorite Pie?
* Breaking Down Memory Walls
* New Forward Compatibility Policy
* Blosc Has Won Google's Open Source Peer Bonus Program
* The Lizard Codec
* Testing PGO with LZ4 and Zstd codecs
* Fine Tuning the BloscLZ codec
* Zstd has just landed in Blosc
* ARM is becoming a first-class citizen for Blosc
* Hairy situation of Microsoft Windows compilers
* New 'bitshuffle' filter
* Seeking Sponsorship for Bcolz/Blosc
* Compress Me, Stupid!

Blosc is a high-performance compressor that has been optimized for binary data. Its design allows data to be transmitted to the processor cache faster than the traditional, non-compressed, direct memory fetch approach via a memcpy() OS call. This can be useful not only in reducing the size of large datasets, but also in accelerating I/O, be it on-disk or in-memory (both are supported).

Watch this introductory video to learn more about the main features of Blosc:

Blosc2 is the new iteration of the Blosc 1.x series, which adds more features and better documentation. You can also check out the slides that explain the highlights of Blosc2.
Blosc2 also includes NDim, a container with multi-dimensional capabilities. In particular, Blosc2 NDim excels at reading multi-dimensional slices, thanks to its innovative pineapple-style partitioning. To learn more, watch the video "Why slicing in a pineapple-style is useful". You can also check out the slides that explain the highlights of Blosc2 NDim, especially when dealing with the highly sparse, multidimensional datasets that appear when exploring the Milky Way. Those slides also discuss Btune, a tool that helps you find the best Blosc2 configuration for your data.

When Blosc2 is used in combination with other libraries, magic can happen. For example, when used with HDF5/PyTables, Blosc2 can help to query tables with 100 trillion rows in human time frames. Read more on this in these slides on the latest developments of Blosc2.

## Why it works?

Blosc uses the blocking technique (as described here) to reduce activity on the memory bus as much as possible. The blocking technique divides datasets into blocks small enough to fit in the caches of modern processors and performs compression/decompression there. It also leverages the SIMD (SSE2) and multi-threading capabilities present in modern multi-core processors to accelerate the compression/decompression process to the maximum.

Blosc2 also applies more advanced techniques to improve the compression ratio on sparse datasets, and offers a larger diversity of filters, such as bytedelta. This makes Blosc2 a very versatile compressor that can be used in a wide range of situations.

## Performance

Blosc2 is also designed to be efficient when retrieving blocks and chunks in multidimensional datasets. For comparison purposes, see below the speed that BloscLZ, one of the fastest codecs available in Blosc, can achieve when combined with different libraries supporting Blosc(1)/Blosc2 when accessing a 7.3 TB dataset:

Note how BloscLZ does not need a lot of threads to reach its performance.
Such a low requirement on CPU core count makes it ideal for running on small laptops while guaranteeing reasonable performance. And below is the compression ratio that BloscLZ, and also Zstd (the codec that can typically achieve better compression ratios in Blosc), can achieve when combined with different libraries supporting Blosc(1)/Blosc2:

See how Blosc2 can make better use of the space required to store the compressed data and internal indices, especially when dealing with sparse datasets (as is the case above). More info in these slides.

You can find more benchmarks on our blog. Additionally, you may be interested in reading this article on Breaking Down Memory Walls. Finally, make sure to check out Blosc2, the next generation of Blosc, with support for n-dimensional data as well as more efficient handling of sparse data.

## Blosc as a Meta-Compressor

Blosc is not like other compressors; it should rather be called a meta-compressor. This is because it can use different codecs (libraries that reduce the size of inputs) and filters (libraries that improve compression ratio) under the hood. Nonetheless, it can still be referred to as a compressor because it includes several codecs conveniently packaged and made accessible for you.

Currently, Blosc uses BloscLZ by default, a codec heavily based on FastLZ. Blosc also includes support for LZ4 and LZ4HC, Zlib and Zstd right out-of-the-box. Also, it comes with highly optimized shuffle, bitshuffle, bytedelta and precision truncation filters. These can use SSE2, AVX2 (Intel), NEON (ARM) or VMX/AltiVec/VSX (PowerPC) instructions (if available).

Blosc is responsible for coordinating codecs and filters to leverage the blocking technique (described above) and multi-threaded execution (when several cores are available), while making minimal use of temporary buffers. This ensures that every codec and filter can operate at high speeds, even if it was not initially designed for blocking or multi-threading.
For instance, Blosc allows the use of the LZ4 codec in a multi-threaded manner by default.

## Other Advantages over Existing Compressors

* Meant for binary data: can take advantage of the type-size meta-information to improve the compression ratio by using the integrated shuffle and bitshuffle filters.
* Small overhead on non-compressible data: only a maximum of 32 additional bytes per data chunk for Blosc2 (16 for Blosc1) are needed on non-compressible data.
* 63-bit containers: in Blosc2, we have introduced super-chunks as a way to overcome the limitations of chunks, which can only be up to 2^31 bytes in size. Super-chunks, on the other hand, can host data up to 2^63 bytes in size.
* Frames: Blosc2 has also introduced a way to serialize data either in-memory or on-disk. Frames provide an efficient way to persist or transmit the data in a compressed format.

However, there is much more to Blosc. For an updated list of features, please refer to our ROADMAP. When combined, these features distinguish Blosc from other similar solutions.

## Where Can Blosc Be Used?

Provided that data is compressible enough, applications that use Blosc are expected to surpass expected physical limits for I/O performance, either for network, disk, or in-memory storage, simply because applications need to transmit less (compressed) data, and compression/decompression is very fast and usually happens entirely in CPU caches. For instance, see how Blosc can break down memory walls.

Blosc2 also adds support for sparse and multi-dimensional datasets, which are common in scientific applications. See an example of how Blosc can make efficient access to much larger datasets than the available memory.

Currently, there is support for using Blosc in Zarr, h5py (via hdf5plugin) or PyTables; all of these projects have binary packages, so it is easy to start using it.
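The shuffle filter mentioned above is what lets Blosc exploit the type-size meta-information: it regroups the k-th byte of every element so that the (often nearly constant) high-order bytes end up adjacent before the codec runs. The following pure-Python sketch is only for illustration; the real filter is SIMD-accelerated C inside Blosc:

```python
def shuffle(data: bytes, typesize: int) -> bytes:
    """Byte-shuffle: gather the k-th byte of every element into its own
    stream. For arrays of same-typed numbers this places similar bytes
    side by side, which helps the codec that follows."""
    n = len(data) // typesize
    return bytes(data[i * typesize + k] for k in range(typesize) for i in range(n))

def unshuffle(data: bytes, typesize: int) -> bytes:
    """Inverse transform: re-interleave the byte streams into elements."""
    n = len(data) // typesize
    return bytes(data[k * n + i] for i in range(n) for k in range(typesize))
```

For example, the little-endian int32 values 1 and 2 (bytes 01 00 00 00 02 00 00 00) shuffle to 01 02 followed by six zero bytes, a much friendlier input for the compressor's entropy stage.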
## Adapt Blosc to your needs

We understand that every user has unique needs, so we have made it possible to register your own codecs and filters to better adapt Blosc to different scenarios. Additionally, you can request that they be included in the main C-Blosc2 library, which not only allows for easier deployment, but also contributes to creating a richer and more useful ecosystem.

Additionally, we have created Btune, an innovative AI tool that can automatically determine the best compression parameters for your specific use case. The Blosc Development Team is continuously working on improving it to meet your needs.

## Is Blosc Ready for Production Use?

Yes, it is! Blosc is currently being used in various libraries and is able to compress data at a rate that exceeds several petabytes per month worldwide. Fortunately, there haven't been many reports of failures caused by Blosc itself, but we strive to respond as quickly as possible when such issues do arise. After a long period of testing, C-Blosc2 entered the production stage with version 2.0.0. Additionally, all new releases are guaranteed to read from persistent storage generated by previous releases (as of 2.0.0).

## Git repository, downloads and ticketing

The home of the git repository for all Blosc-related libraries is located at:

You can download the sources and file tickets there too.

## Twitter feed

Keep informed about the latest developments by following the @Blosc2 twitter account:

## Mailing list

There is an official Blosc mailing list at:

## Python wrappers

The official Python wrappers can be found at:

http://github.com/Blosc/python-blosc
http://github.com/Blosc/python-blosc2

## Want To Contribute?

Your contribution is crucial to making Blosc as solid as possible. If you detect a bug or wish to propose an enhancement, feel free to open a new ticket or make yourself heard on the mailing list. Also, please note that we have a Code of Conduct that you should read before contributing in any way.
## Blosc License

Blosc is free software released under the permissive BSD license. This means that you can use it in almost any way you want!

-- The Blosc Development Team

Date: 2009-01-01

## A fast, compressed and persistent data store library for C

The Blosc Development Team

* Contact * URL * Gitter * Actions * NumFOCUS * Code of Conduct

## What is it?

Blosc is a high performance compressor optimized for binary data (i.e. floating point numbers, integers and booleans, although it can handle string data too). It has been designed to transmit data to the processor cache faster than the traditional, non-compressed, direct memory fetch approach via a memcpy() OS call. Blosc's main goal is not just to reduce the size of large datasets on-disk or in-memory, but also to accelerate memory-bound computations.

C-Blosc2 is the new major version of C-Blosc, and is backward compatible with both the C-Blosc1 API and its in-memory format. However, the reverse is generally not true for the format; buffers generated with C-Blosc2 are not format-compatible with C-Blosc1 (i.e. forward compatibility is not supported). If you want to ensure full compatibility with the C-Blosc1 API, define the BLOSC1_COMPAT symbol.

See a 3-minute introductory video to Blosc2.
## Blosc2 NDim: an N-Dimensional store

One of the latest and most exciting additions to C-Blosc2 is the Blosc2 NDim layer (or b2nd for short), which allows creating and reading n-dimensional datasets in an extremely efficient way thanks to an n-dimensional two-level partitioning that allows slicing and dicing arbitrarily large compressed data in a fine-grained way:

To whet your appetite, here is how the NDArray object in the Python wrapper performs when getting slices orthogonal to the different axes of a 4-dim dataset:

We have blogged about this: https://www.blosc.org/posts/blosc2-ndim-intro

## New features in C-Blosc2

* 64-bit containers: the first-class container in C-Blosc2 is the super-chunk or, for brevity, schunk, which is made of smaller chunks that are essentially C-Blosc1 32-bit containers. The super-chunk can optionally be backed by another container called a frame (see later).
* NDim containers (b2nd): allow storing n-dimensional data and efficiently reading slices that can be n-dimensional too. To achieve this, an n-dimensional two-level partitioning has been implemented. These capabilities were formerly part of Caterva, and are now included in C-Blosc2 for convenience. Caterva is now deprecated.
* More filters: besides shuffle and bitshuffle, already present in C-Blosc1, C-Blosc2 implements:
  * bytedelta: calculates the difference between bytes in a block that has already been shuffled. We have blogged about bytedelta.
  * delta: the stored blocks inside a chunk are diff'ed with respect to the first block in the chunk. The idea is that, in some situations, the diff will have more zeros than the original data, leading to better compression.
  * trunc_prec: it zeroes the least significant bits of the mantissa of float32 and float64 types. When combined with the shuffle or bitshuffle filter, this leads to more contiguous zeros, which are compressed better.
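To see why shuffle helps, here is a minimal pure-Python sketch of the byte-shuffle transform (the real filter is SIMD-accelerated C; `shuffle`/`unshuffle` are hypothetical helper names). Grouping the i-th byte of every item together turns a slowly varying int64 sequence into long runs of nearly identical bytes, which a generic codec such as the stdlib's zlib compresses much better:

```python
import struct
import zlib

def shuffle(data: bytes, itemsize: int) -> bytes:
    """Group byte 0 of every item together, then byte 1, and so on."""
    n = len(data) // itemsize
    return bytes(data[i * itemsize + b] for b in range(itemsize) for i in range(n))

def unshuffle(data: bytes, itemsize: int) -> bytes:
    """Inverse transform: re-interleave the byte planes."""
    n = len(data) // itemsize
    return bytes(data[b * n + i] for i in range(n) for b in range(itemsize))

# A slowly varying sequence, like the arange used in the benchmarks.
raw = struct.pack("<1000q", *range(1000))  # 1000 little-endian int64 values

assert unshuffle(shuffle(raw, 8), 8) == raw  # the filter is lossless

plain = len(zlib.compress(raw, 5))
shuffled = len(zlib.compress(shuffle(raw, 8), 5))
print(plain, shuffled)  # the shuffled buffer compresses noticeably better
```

The transform itself stores no fewer bytes; it only rearranges them so that the codec that runs afterwards finds longer runs and closer matches.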
* A filter pipeline: the different filters can be pipelined so that the output of one can be the input for the next. A possible example is a delta followed by shuffle, or, as described above, trunc_prec followed by bitshuffle.
* Prefilters: allow applying user-defined C callbacks prior to the filter pipeline during compression. See test_prefilter.c for an example of use.
* Postfilters: allow applying user-defined C callbacks after the filter pipeline during decompression. The combination of prefilters and postfilters could be interesting for supporting e.g. encryption (via prefilters) and decryption (via postfilters). Also, a postfilter alone can be used to produce on-the-fly computation based on existing data (or other metadata, like e.g. coordinates). See test_postfilter.c for an example of use.
* SIMD support for ARM (NEON): this allows for faster operation on ARM architectures. Only shuffle is supported right now, but the idea is to implement bitshuffle for NEON too. Thanks to <NAME>.
* SIMD support for PowerPC (ALTIVEC): this allows for faster operation on PowerPC architectures. Both shuffle and bitshuffle are supported; however, this has been done via a transparent mapping from SSE2 into ALTIVEC emulation in GCC 8, so performance could be better (but still, it is already a nice improvement over native C code; see PR Blosc/c-blosc2#59 for details). Thanks to <NAME> and ESRF for sponsoring the Blosc team in helping him in this task.
* Dictionaries: when a block is going to be compressed, C-Blosc2 can use a previously made dictionary (stored in the header of the super-chunk) for compressing all the blocks that are part of the chunks. This usually improves the compression ratio, as well as the decompression speed, at the expense of a (small) overhead in compression speed. Currently, it is only supported in the zstd codec, but it would be nice to extend it to lz4 and blosclz at least.
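As a rough illustration, the pipeline mechanics can be sketched in pure Python (hypothetical helper names; in the real library the pipeline is configured through the compression parameters and runs as compiled C). Each stage is a reversible byte transform; compression applies the stages left to right, and decompression applies the inverses right to left:

```python
import struct

def delta_fwd(buf: bytes, bs: int = 8) -> bytes:
    """Sketch of delta: diff blocks 1..n against block 0, keeping block 0 verbatim."""
    first = buf[:bs]
    return buf[:bs] + bytes((buf[i] - first[i % bs]) & 0xFF for i in range(bs, len(buf)))

def delta_inv(buf: bytes, bs: int = 8) -> bytes:
    first = buf[:bs]
    return buf[:bs] + bytes((buf[i] + first[i % bs]) & 0xFF for i in range(bs, len(buf)))

def shuffle_fwd(buf: bytes, itemsize: int = 8) -> bytes:
    """Sketch of shuffle: regroup bytes by their position within each item."""
    n = len(buf) // itemsize
    return bytes(buf[i * itemsize + b] for b in range(itemsize) for i in range(n))

def shuffle_inv(buf: bytes, itemsize: int = 8) -> bytes:
    n = len(buf) // itemsize
    return bytes(buf[b * n + i] for i in range(n) for b in range(itemsize))

# The pipeline from the text: delta followed by shuffle.
PIPELINE = [(delta_fwd, delta_inv), (shuffle_fwd, shuffle_inv)]

def apply_filters(buf: bytes) -> bytes:
    for fwd, _ in PIPELINE:            # forward stages, left to right
        buf = fwd(buf)
    return buf

def reverse_filters(buf: bytes) -> bytes:
    for _, inv in reversed(PIPELINE):  # inverse stages, right to left
        buf = inv(buf)
    return buf

data = struct.pack("<64q", *range(1000, 1064))
assert reverse_filters(apply_filters(data)) == data  # the pipeline round-trips
```

The key property is that every stage is lossless on its own, so any ordering of stages round-trips as long as the inverses are applied in the opposite order.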
* Contiguous frames: allow storing super-chunks contiguously, either on-disk or in-memory. When a super-chunk is backed by a frame, instead of storing all the chunks sparsely in-memory, they are serialized inside the frame container. The frame can be stored on-disk too, meaning that persistence of super-chunks is supported.
* Sparse frames: each chunk in a super-chunk is stored in a separate file or a different memory area, as well as the metadata. This allows for more efficient updates/deletes than in contiguous frames (i.e. avoiding 'holes' in monolithic files). The drawback is that it consumes more inodes when on-disk. Thanks to <NAME> for this contribution.
* Partial chunk reads: there is support for reading just part of a chunk, avoiding reading the whole thing and then discarding the unnecessary data.
* Parallel chunk reads: when several blocks of a chunk are to be read, this is done in parallel by the decompressing machinery. That means that every thread is responsible for reading, post-filtering and decompressing a block by itself, leading to an efficient overlap of I/O and CPU usage that optimizes reads to a maximum.
* Meta-layers: optionally, the user can add meta-data for different uses and in different layers. For example, one may think of providing a meta-layer for NumPy so that most of its meta-data is stored in a meta-layer; then, one can place another meta-layer on top of the latter for adding more high-level info if desired (e.g. geo-spatial, meteorological).
* Variable-length meta-layers: the user may want to add variable-length meta information that can potentially be very large (up to 2 GB). The regular meta-layer described above is very quick to read, but is meant to store fixed-length and relatively small meta information. Variable-length meta-layers are stored in the trailer of a frame, whereas regular meta-layers are in the header.
* Efficient support for special values: large sequences of repeated values can be represented with an efficient, simple and fast run-length representation, without the need to use regular codecs. With that, chunks or super-chunks whose values are all the same (zeros, NaNs or any value in general) can be built in constant time, regardless of the size. This can be useful in situations where a lot of zeros (or NaNs) need to be stored (e.g. sparse matrices).
* Nice markup for documentation: we are currently using a combination of Sphinx + Doxygen + Breathe for documenting the C-API. See https://www.blosc.org/c-blosc2/c-blosc2.html. Thanks to <NAME> and <NAME> for contributing the support for this.
* Plugin capabilities for filters and codecs: we have a plugin register capability in place so that the info about new filters and codecs can be persisted and transmitted to different machines. See Blosc/c-blosc2 for a self-contained example. Thanks to the NumFOCUS foundation for providing a grant for doing this, and <NAME> and <NAME> for the implementation.
* Pluggable tuning capabilities: this allows users with different needs to define an interface so as to better tune different parameters like the codec, the compression level, the filters to use, the blocksize or the shuffle size. Thanks to ironArray for sponsoring us in doing this.
* Support for I/O plugins: so that users can extend the I/O capabilities beyond the current filesystem support. Things like the use of databases or S3 interfaces should be possible by implementing these interfaces.
Thanks to ironArray for sponsoring us in doing this.
* Python wrapper: we have a preliminary wrapper in the works. You can have a look at our ongoing efforts in the python-blosc2 repo. Thanks to the Python Software Foundation for providing a grant for doing this.
* Security: we are actively using OSS-Fuzz and ClusterFuzz for uncovering programming errors in C-Blosc2. Thanks to Google for sponsoring us in doing this, and to <NAME> for most of the work here.

More info about the improved capabilities of C-Blosc2 can be found in this talk.

The C-Blosc2 API and format have been frozen, which means that there is a guarantee that your programs will continue to work with future versions of the library, and that future releases will be able to read from persistent storage generated by previous releases (as of 2.0.0).

## Python wrapper

We are officially supporting (thanks to the Python Software Foundation) a Python wrapper for Blosc2. It supports all the features of the predecessor python-blosc package plus most of the bells and whistles of C-Blosc2, like 64-bit and multidimensional containers. As a bonus, the python-blosc2 package comes with wheels and binary versions of the C-Blosc2 libraries, so anyone, even non-Python users, can install the C-Blosc2 binaries easily with:

`pip install blosc2`

## Compiling the C-Blosc2 library with CMake

Blosc can be built, tested and installed using CMake. The following procedure describes a typical CMake build.

Create the build directory inside the sources and move into it:

```
git clone https://github.com/Blosc/c-blosc2
cd c-blosc2
mkdir build
cd build
```

Now run the CMake configuration and optionally specify the installation directory (e.g. '/usr' or '/usr/local'):

```
cmake -DCMAKE_INSTALL_PREFIX=your_install_prefix_directory ..
```

CMake allows configuring Blosc in many different ways, like preferring internal or external sources for compressors or enabling/disabling them.
Please note that configuration can also be performed using UI tools provided by CMake (ccmake or cmake-gui):

```
ccmake ..     # run a curses-based interface
cmake-gui ..  # run a graphical interface
```

Build, test and install Blosc:

```
cmake --build .
ctest
cmake --build . --target install
```

The static and dynamic versions of the Blosc library, together with header files, will be installed into the specified CMAKE_INSTALL_PREFIX. Once you have compiled your Blosc library, you can easily link your apps with it as shown in the examples/ directory.

### Handling support for codecs (LZ4, LZ4HC, Zstd, Zlib)

C-Blosc2 comes with full sources for LZ4, LZ4HC, Zstd, and Zlib, and in general you should not worry about not having (or CMake not finding) these libraries in your system, because by default the included sources will be automatically compiled and included in the C-Blosc2 library. This means that you can be confident of having complete support for all these codecs in all the official Blosc deployments. Of course, if you consider this too bloated, you can exclude support for some of them. For example, let's suppose that you want to disable support for Zstd:

```
cmake -DDEACTIVATE_ZSTD=ON ..
```

Or, you may want to use a codec from an external library already in the system:

```
cmake -DPREFER_EXTERNAL_LZ4=ON ..
```

### Supported platforms

C-Blosc2 is meant to support all platforms where a C99-compliant C compiler can be found. The ones that are most tested are Intel (Linux, Mac OSX and Windows), ARM (Linux, Mac), and PowerPC (Linux). More on ARM support in README_ARM.rst.

For Windows, you will need at least VS2015 or higher on x86 and x64 targets (i.e. ARM is not supported on Windows). For Mac OSX, make sure that you have the command line developer tools available. You can always install them with:

```
xcode-select --install
```

For Mac OSX on arm64 architecture, you may want to compile it like this:

```
CC="clang -arch arm64" cmake ..
```

### Display error messages

By default error messages are disabled. To display them, you just need to activate the Blosc tracing machinery by setting the `BLOSC_TRACE` environment variable.

## Contributing

If you want to collaborate in this development you are welcome. We need help in the different areas listed in the ROADMAP; also, be sure to read our DEVELOPING-GUIDE and our Code of Conduct. Blosc is distributed under the BSD license.

## Twitter feed

Follow @Blosc2 to stay informed about the latest developments.

## Citing Blosc

You can cite our work on the different libraries under the Blosc umbrella as:

```
@ONLINE{blosc,
  author = {{Blosc Development Team}},
  title = "{A fast, compressed and persistent data store library}",
  year = {2009-2023},
  note = {https://blosc.org}
}
```

## Acknowledgments

See the THANKS document.

Date: 2017-01-27

## A Python wrapper for the extremely fast Blosc compression library

The Blosc development team

* Contact * Github * URL * PyPi * Anaconda * Gitter * Code of Conduct

## What it is

Blosc (https://blosc.org) is a high performance compressor optimized for binary data. It has been designed to transmit data to the processor cache faster than the traditional, non-compressed, direct memory fetch approach via a memcpy() OS call. Blosc works well for compressing numerical arrays that contain data with relatively low entropy, like sparse data, time series, grids with regular-spaced values, etc.

python-blosc is a Python package that wraps Blosc. python-blosc supports Python 3.8 or higher.

## Installing

Blosc is now offering Python wheels for the main OSes (Windows, Mac and Linux) and platforms.
You can install binary packages from PyPI using `pip`:

`$ pip install blosc`

## Documentation

The Sphinx-based documentation is here: https://blosc.org/python-blosc/python-blosc.html

Also, some examples are available on the python-blosc wiki page:

Lastly, here are the recording and the slides from the talk "Compress me stupid" at EuroPython 2014.

## Building

If you need more control, there are different ways to compile python-blosc, depending on whether you want to link with an already installed Blosc library or not.

### Installing via setuptools

python-blosc comes with the Blosc sources included and can be built with:

```
$ python -m pip install -r requirements-dev.txt
$ python setup.py build_ext --inplace
```

Any codec can be enabled (=1) or disabled (=0) on this build path with the appropriate OS environment variables INCLUDE_LZ4, INCLUDE_SNAPPY, INCLUDE_ZLIB, and INCLUDE_ZSTD. By default all the codecs in Blosc are enabled except Snappy (due to some issues with C++ with the gcc toolchain).

Compiler-specific optimizations are automatically enabled by inspecting the CPU flags when building Blosc. They can be manually disabled by setting the following environment variables: DISABLE_BLOSC_SSE2 and DISABLE_BLOSC_AVX2.

setuptools is limited to using the compiler specified in the environment variable CC, which on POSIX systems is usually gcc. This often causes trouble with the Snappy codec, which is written in C++, and as a result Snappy is no longer compiled by default. This problem is not known to affect MSVC or clang. Snappy is considered optional in Blosc as its compression performance is below that of the other codecs.

That's all. You can proceed with the testing section now.

### Compiling with an installed Blosc library

This approach uses pre-built, fully optimized versions of Blosc built via CMake. Go to Blosc/c-blosc and download and install the C-Blosc library.
Then, you can tell python-blosc where the C-Blosc library is in a couple of ways.

Using an environment variable:

```
$ export USE_SYSTEM_BLOSC=1  # or "set USE_SYSTEM_BLOSC=1" on Windows
$ export Blosc_ROOT=/usr/local/customprefix  # If you installed Blosc into a custom location
$ python setup.py build_ext --inplace
```

Using flags:

```
$ python setup.py build_ext --inplace -DUSE_SYSTEM_BLOSC:BOOL=YES -DBlosc_ROOT:PATH=/usr/local/customprefix
```

## Testing

After compiling, you can quickly check that the package is sane by running the doctests in `blosc/test.py`:

```
$ python -m blosc.test (add -v for verbose mode)
```

Once installed, you can re-run the tests at any time with:

```
$ python -c "import blosc; blosc.test()"
```

## Benchmarking

If curious, you may want to run a small benchmark that compares a plain NumPy array copy against compression through the different compressors in your Blosc build:

```
$ PYTHONPATH=. python bench/compress_ptr.py
```

Just to whet your appetite, here are the results for an Intel Xeon E5-2695 v3 @ 2.30GHz, running Python 3.5, CentOS 7, but YMMV (and will vary!):

``` -=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-= python-blosc version: 1.5.1.dev0 Blosc version: 1.11.2 ($Date:: 2017-01-27 #$) Compressors available: ['blosclz', 'lz4', 'lz4hc', 'snappy', 'zlib', 'zstd'] Compressor library versions: BloscLZ: 1.0.5 LZ4: 1.7.5 Snappy: 1.1.1 Zlib: 1.2.7 Zstd: 1.1.2 Python version: 3.5.2 |Continuum Analytics, Inc.| (default, Jul 2 2016, 17:53:06) [GCC 4.4.7 20120313 (Red Hat 4.4.7-1)] Platform: Linux-3.10.0-327.18.2.el7.x86_64-x86_64 (#1 SMP Thu May 12 11:03:55 UTC 2016) Linux dist: CentOS Linux 7.2.1511 Processor: x86_64 Byte-ordering: little Detected cores: 56 Number of threads to use by default: 4 -=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-= Creating NumPy arrays with 10**8 int64/float64 elements: *** ctypes.memmove() *** Time for memcpy(): 0.276 s (2.70 GB/s) Times for 
compressing/decompressing with clevel=5 and 24 threads *** the arange linear distribution *** *** blosclz , noshuffle *** 0.382 s (1.95 GB/s) / 0.300 s (2.48 GB/s) Compr. ratio: 1.0x *** blosclz , shuffle *** 0.042 s (17.77 GB/s) / 0.027 s (27.18 GB/s) Compr. ratio: 57.1x *** blosclz , bitshuffle *** 0.094 s (7.94 GB/s) / 0.041 s (18.28 GB/s) Compr. ratio: 74.0x *** lz4 , noshuffle *** 0.156 s (4.79 GB/s) / 0.052 s (14.30 GB/s) Compr. ratio: 2.0x *** lz4 , shuffle *** 0.033 s (22.58 GB/s) / 0.034 s (22.03 GB/s) Compr. ratio: 68.6x *** lz4 , bitshuffle *** 0.059 s (12.63 GB/s) / 0.053 s (14.18 GB/s) Compr. ratio: 33.1x *** lz4hc , noshuffle *** 0.443 s (1.68 GB/s) / 0.070 s (10.62 GB/s) Compr. ratio: 2.0x *** lz4hc , shuffle *** 0.102 s (7.31 GB/s) / 0.029 s (25.42 GB/s) Compr. ratio: 97.5x *** lz4hc , bitshuffle *** 0.206 s (3.62 GB/s) / 0.038 s (19.85 GB/s) Compr. ratio: 180.5x *** snappy , noshuffle *** 0.154 s (4.84 GB/s) / 0.056 s (13.28 GB/s) Compr. ratio: 2.0x *** snappy , shuffle *** 0.044 s (16.89 GB/s) / 0.047 s (15.95 GB/s) Compr. ratio: 17.4x *** snappy , bitshuffle *** 0.064 s (11.58 GB/s) / 0.061 s (12.26 GB/s) Compr. ratio: 18.2x *** zlib , noshuffle *** 1.172 s (0.64 GB/s) / 0.135 s (5.50 GB/s) Compr. ratio: 5.3x *** zlib , shuffle *** 0.260 s (2.86 GB/s) / 0.086 s (8.67 GB/s) Compr. ratio: 120.8x *** zlib , bitshuffle *** 0.262 s (2.84 GB/s) / 0.094 s (7.96 GB/s) Compr. ratio: 260.1x *** zstd , noshuffle *** 0.973 s (0.77 GB/s) / 0.093 s (8.00 GB/s) Compr. ratio: 7.8x *** zstd , shuffle *** 0.093 s (7.97 GB/s) / 0.023 s (32.71 GB/s) Compr. ratio: 156.7x *** zstd , bitshuffle *** 0.115 s (6.46 GB/s) / 0.029 s (25.60 GB/s) Compr. ratio: 320.6x *** the linspace linear distribution *** *** blosclz , noshuffle *** 0.341 s (2.19 GB/s) / 0.291 s (2.56 GB/s) Compr. ratio: 1.0x *** blosclz , shuffle *** 0.132 s (5.65 GB/s) / 0.023 s (33.10 GB/s) Compr. ratio: 2.0x *** blosclz , bitshuffle *** 0.166 s (4.50 GB/s) / 0.036 s (20.89 GB/s) Compr. 
ratio: 2.8x *** lz4 , noshuffle *** 0.142 s (5.26 GB/s) / 0.028 s (27.07 GB/s) Compr. ratio: 1.0x *** lz4 , shuffle *** 0.093 s (8.01 GB/s) / 0.030 s (24.87 GB/s) Compr. ratio: 3.4x *** lz4 , bitshuffle *** 0.102 s (7.31 GB/s) / 0.039 s (19.13 GB/s) Compr. ratio: 5.3x *** lz4hc , noshuffle *** 0.700 s (1.06 GB/s) / 0.044 s (16.77 GB/s) Compr. ratio: 1.1x *** lz4hc , shuffle *** 0.203 s (3.67 GB/s) / 0.021 s (36.22 GB/s) Compr. ratio: 8.6x *** lz4hc , bitshuffle *** 0.342 s (2.18 GB/s) / 0.028 s (26.50 GB/s) Compr. ratio: 14.2x *** snappy , noshuffle *** 0.271 s (2.75 GB/s) / 0.274 s (2.72 GB/s) Compr. ratio: 1.0x *** snappy , shuffle *** 0.099 s (7.54 GB/s) / 0.042 s (17.55 GB/s) Compr. ratio: 4.2x *** snappy , bitshuffle *** 0.127 s (5.86 GB/s) / 0.043 s (17.20 GB/s) Compr. ratio: 6.1x *** zlib , noshuffle *** 1.525 s (0.49 GB/s) / 0.158 s (4.70 GB/s) Compr. ratio: 1.6x *** zlib , shuffle *** 0.346 s (2.15 GB/s) / 0.098 s (7.59 GB/s) Compr. ratio: 10.7x *** zlib , bitshuffle *** 0.420 s (1.78 GB/s) / 0.104 s (7.20 GB/s) Compr. ratio: 18.0x *** zstd , noshuffle *** 1.061 s (0.70 GB/s) / 0.096 s (7.79 GB/s) Compr. ratio: 1.9x *** zstd , shuffle *** 0.203 s (3.68 GB/s) / 0.052 s (14.21 GB/s) Compr. ratio: 14.2x *** zstd , bitshuffle *** 0.251 s (2.97 GB/s) / 0.047 s (15.84 GB/s) Compr. ratio: 22.2x *** the random distribution *** *** blosclz , noshuffle *** 0.340 s (2.19 GB/s) / 0.285 s (2.61 GB/s) Compr. ratio: 1.0x *** blosclz , shuffle *** 0.091 s (8.21 GB/s) / 0.017 s (44.29 GB/s) Compr. ratio: 3.9x *** blosclz , bitshuffle *** 0.080 s (9.27 GB/s) / 0.029 s (26.12 GB/s) Compr. ratio: 6.1x *** lz4 , noshuffle *** 0.150 s (4.95 GB/s) / 0.027 s (28.05 GB/s) Compr. ratio: 2.4x *** lz4 , shuffle *** 0.068 s (11.02 GB/s) / 0.029 s (26.03 GB/s) Compr. ratio: 4.5x *** lz4 , bitshuffle *** 0.063 s (11.87 GB/s) / 0.054 s (13.70 GB/s) Compr. ratio: 6.2x *** lz4hc , noshuffle *** 0.645 s (1.15 GB/s) / 0.019 s (39.22 GB/s) Compr. 
ratio: 3.5x *** lz4hc , shuffle *** 0.257 s (2.90 GB/s) / 0.022 s (34.62 GB/s) Compr. ratio: 5.1x *** lz4hc , bitshuffle *** 0.128 s (5.80 GB/s) / 0.029 s (25.52 GB/s) Compr. ratio: 6.2x *** snappy , noshuffle *** 0.164 s (4.54 GB/s) / 0.048 s (15.46 GB/s) Compr. ratio: 2.2x *** snappy , shuffle *** 0.082 s (9.09 GB/s) / 0.043 s (17.39 GB/s) Compr. ratio: 4.3x *** snappy , bitshuffle *** 0.071 s (10.48 GB/s) / 0.046 s (16.08 GB/s) Compr. ratio: 5.0x *** zlib , noshuffle *** 1.223 s (0.61 GB/s) / 0.093 s (7.97 GB/s) Compr. ratio: 4.0x *** zlib , shuffle *** 0.636 s (1.17 GB/s) / 0.126 s (5.89 GB/s) Compr. ratio: 5.5x *** zlib , bitshuffle *** 0.327 s (2.28 GB/s) / 0.109 s (6.81 GB/s) Compr. ratio: 6.2x *** zstd , noshuffle *** 1.432 s (0.52 GB/s) / 0.103 s (7.27 GB/s) Compr. ratio: 4.2x *** zstd , shuffle *** 0.388 s (1.92 GB/s) / 0.031 s (23.71 GB/s) Compr. ratio: 5.9x *** zstd , bitshuffle *** 0.127 s (5.86 GB/s) / 0.033 s (22.77 GB/s) Compr. ratio: 6.4x ``` Also, Blosc works quite well on ARM processors (even without NEON support yet): ``` -=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-= python-blosc version: 1.4.4 Blosc version: 1.11.2 ($Date:: 2017-01-27 #$) Compressors available: ['blosclz', 'lz4', 'lz4hc', 'snappy', 'zlib', 'zstd'] Compressor library versions: BloscLZ: 1.0.5 LZ4: 1.7.5 Snappy: 1.1.1 Zlib: 1.2.8 Zstd: 1.1.2 Python version: 3.6.0 (default, Dec 31 2016, 21:20:16) [GCC 4.9.2] Platform: Linux-3.4.113-sun8i-armv7l (#50 SMP PREEMPT Mon Nov 14 08:41:55 CET 2016) Linux dist: debian 9.0 Processor: not recognized Byte-ordering: little Detected cores: 4 Number of threads to use by default: 4 -=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-= *** ctypes.memmove() *** Time for memcpy(): 0.015 s (93.57 MB/s) Times for compressing/decompressing with clevel=5 and 4 threads *** user input *** *** blosclz , noshuffle *** 0.015 s (89.93 MB/s) / 0.010 s (138.32 MB/s) Compr. 
ratio: 2.7x *** blosclz , shuffle *** 0.023 s (60.25 MB/s) / 0.012 s (112.71 MB/s) Compr. ratio: 2.3x *** blosclz , bitshuffle *** 0.018 s (77.63 MB/s) / 0.021 s (66.76 MB/s) Compr. ratio: 7.3x *** lz4 , noshuffle *** 0.008 s (177.14 MB/s) / 0.009 s (159.00 MB/s) Compr. ratio: 3.6x *** lz4 , shuffle *** 0.010 s (131.29 MB/s) / 0.012 s (117.69 MB/s) Compr. ratio: 3.5x *** lz4 , bitshuffle *** 0.015 s (89.97 MB/s) / 0.022 s (63.62 MB/s) Compr. ratio: 8.4x *** lz4hc , noshuffle *** 0.071 s (19.30 MB/s) / 0.007 s (186.64 MB/s) Compr. ratio: 8.6x *** lz4hc , shuffle *** 0.079 s (17.30 MB/s) / 0.014 s (95.99 MB/s) Compr. ratio: 6.2x *** lz4hc , bitshuffle *** 0.062 s (22.23 MB/s) / 0.027 s (51.53 MB/s) Compr. ratio: 9.7x *** snappy , noshuffle *** 0.008 s (173.87 MB/s) / 0.009 s (148.77 MB/s) Compr. ratio: 4.4x *** snappy , shuffle *** 0.011 s (123.22 MB/s) / 0.016 s (85.16 MB/s) Compr. ratio: 4.4x *** snappy , bitshuffle *** 0.015 s (89.02 MB/s) / 0.021 s (64.87 MB/s) Compr. ratio: 6.2x *** zlib , noshuffle *** 0.047 s (29.26 MB/s) / 0.011 s (121.83 MB/s) Compr. ratio: 14.7x *** zlib , shuffle *** 0.080 s (17.20 MB/s) / 0.022 s (63.61 MB/s) Compr. ratio: 9.4x *** zlib , bitshuffle *** 0.059 s (23.50 MB/s) / 0.033 s (41.10 MB/s) Compr. ratio: 10.5x *** zstd , noshuffle *** 0.113 s (12.21 MB/s) / 0.011 s (124.64 MB/s) Compr. ratio: 15.6x *** zstd , shuffle *** 0.154 s (8.92 MB/s) / 0.026 s (52.56 MB/s) Compr. ratio: 9.9x *** zstd , bitshuffle *** 0.116 s (11.86 MB/s) / 0.036 s (38.40 MB/s) Compr. ratio: 11.4x ``` For details on the ARM benchmark see: Blosc/python-blosc#105 In case you find your own results interesting, please report them back to the authors! ## License# The software is licensed under a 3-Clause BSD license. A copy of the python-blosc license can be found in LICENSE.txt. 
## Mailing list# Discussion about this module is welcome in the Blosc list: https://groups.google.com/g/blosc ## Contents# * Introduction * Installation * Tutorials * Library Reference Python-Blosc2 is a Python package that wraps C-Blosc2, the newest version of the Blosc compressor. Getting Started New to Python-Blosc2? Check out the getting started guides. They contain an introduction to Python-Blosc2 main concepts and different tutorials. API Reference The reference guide contains a detailed description of the Python-Blosc2 API. The reference describes how the functions work and which parameters can be used. Development Saw a typo in the documentation? Want to improve existing functionalities? The contributing guidelines will guide you through the process of improving Python-Blosc2. Release Notes Want to see what’s new in the latest release? Check out the release notes to find out! ### What is Btune? Btune is a dynamic plugin for Blosc2 that can help you find the optimal combination of compression parameters for your datasets. Depending on your needs, Btune has three different tiers of support for tuning datasets: Genetic (Btune Free): This genetic algorithm tests different combinations of compression parameters to meet the user's requirements for both compression ratio and speed for each chunk in the dataset. It assigns a score to each combination and, after a number of iterations, the software stops and uses the best score (minimal value) found for the rest of the dataset. For a graphical visualization, click on the image, select an example, and click on the 'play' button (it may require clicking twice). This is best suited for personal use. * Trained (Btune Models): With this approach, the user sends a representative sample of datasets to the Blosc development team and receives back trained neural network models that enable Btune to predict the best compression parameters for similar or related datasets. 
This approach is best for workgroups that need to optimize for a limited variety of datasets. * Fully managed (Btune Studio): The user receives a license to use our training software, which enables on-site training for an unlimited number of datasets. The license also includes a specified number of training/consultancy hours to help the user get the most out of the training process. Refer to the details below for more information. This approach is best suited for organizations that need to optimize for a wide range of datasets. ### Why Btune? Essentially, because compression is not a one-codec-fits-all problem. Compressing data involves a trade-off between compression ratio and speed. A higher compression ratio results in a slower compression process. Depending on your needs, you may want to prioritize one over the other. For instance, if you are storing data from high-speed data acquisition systems, you may want to prioritize compression speed over compression ratio. This is because you will be writing data at speeds near the capacity of your systems. On the other hand, if the goal is to access the data repeatedly from a file system, you may want to prioritize decompression speed over compression ratio for optimal performance. Finally, if you are storing data in the cloud, you may want to prioritize compression ratio over speed. This is because you pay for the storage (and potentially upload/download costs) of data. Finding the optimal compression parameters in Blosc2 can be a slow process due to the large number of combinations of compression parameters (codec, compression level, filter, split mode, number of threads, etc.), and it may require a significant amount of manual trial and error to find the best combinations. However, you can significantly speed up this process by using Btune while compressing your datasets. ### What's in a Model? A neural network is a simplified model of the way the human brain processes information. 
It simulates a large number of interconnected processing units that resemble abstract versions of neurons. These processing units are arranged in layers, which are connected by weights that are adjusted during the training process. To train the network, a large number of examples are fed into it, and the weights are adjusted to minimize the difference between the expected output and the actual output. Once training is complete, the network can be used to predict the output for new inputs. In our context, the "model" refers to the serialization of the layers and weights of the trained neural network. It is delivered to you as a set of small files (in JSON and TensorFlow format) that can be placed anywhere in your filesystem for Btune to access. By using this model, Btune can predict the optimal combination of compression parameters for a given chunk of data. The inference process is very fast, making it suitable for selecting the appropriate compression parameters on a chunk-by-chunk basis while consolidating large amounts of data. ### How To Use Btune? Btune is a plugin for Blosc2 that can be obtained from the PyPI repository. You can learn how to use it in the Btune README. The plugin is currently only available for Linux and macOS, and only for Intel architecture. However, we plan to add support for other architectures in the future. The Btune plugin above can be used for both Btune Free and Btune Models. For Btune Studio, you will need to contact us to get the additional software for training the models. ### Licensing Model There are different licenses available for Btune. Btune Free allows you to explore compression parameters that are better suited to your datasets. However, this process can be slow and may require a large number of iterations before finding the best combination. Additionally, certain chunks in the same dataset may benefit more from a particular combination, while others may benefit more from a different one. 
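The kind of exhaustive search that Btune Free automates can be sketched with stdlib codecs: score every (codec, level) combination on a sample chunk by a weighted mix of compression time and ratio, and keep the minimal score. The scoring function and candidate set below are hypothetical stand-ins; the real Btune searches far more parameters (codec, level, filters, split mode, threads) with a genetic algorithm:

```python
import bz2
import time
import zlib

def score(comp_time: float, ratio: float, ratio_weight: float = 0.5) -> float:
    """Lower is better: trade compression speed against ratio (hypothetical)."""
    return (1 - ratio_weight) * comp_time + ratio_weight / ratio

sample = bytes(range(256)) * 4096  # a 1 MB sample chunk

candidates = [
    (name, level, codec)
    for name, codec in (("zlib", zlib), ("bz2", bz2))
    for level in (1, 5, 9)
]

best = None
for name, level, codec in candidates:
    t0 = time.perf_counter()
    out = codec.compress(sample, level)
    elapsed = time.perf_counter() - t0
    ratio = len(sample) / len(out)
    s = score(elapsed, ratio)
    if best is None or s < best[0]:
        best = (s, name, level)

print("best:", best[1], "level", best[2])
```

Even this tiny search compresses the sample six times; scaling it to every chunk of a real dataset is exactly the cost that a trained model lets you skip.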
Btune Models addresses the limitations of Btune Free by automatically finding the best combination for chunks in a dataset, without requiring any manual operation. This is made possible by using pre-trained neural network models, which allow the best combination to be found on a chunk-by-chunk basis, thereby increasing the effectiveness of the compression process. Finally, for those who need to train a wide range of datasets, Btune Studio provides access to the software necessary for training the datasets yourself. In this way, you have control over all the necessary components to find optimal compression parameters and avoid external dependencies. ### Pricing # Btune Free It is free to use. Please note that it is licensed under an Affero GPLv3 license. This license comes with limited support, as it is mostly a community-supported project. # Btune Models Requires a fee of $1500 USD (or 1500 EUR) for up to 3 trained models per year, including 3 hours of support. You can ask to re-train models for the same or a different set of datasets on a yearly basis. If you need more than 3 models, you can ask for a quote. Renewal is $1200 USD (or 1200 EUR) per year. If you don't renew, you keep the right to use the models you already have forever, but you will not be able to ask for training new models. # Btune Studio Requires a fee of $7500 USD (or 7500 EUR) per year, or $750 USD (or 750 EUR) per month for at least 1 year, whichever fits best for you. This includes 25 hours of support per year, or up to 3 hours of support per month when using the monthly fee. Renewal is $6000 USD (or 6000 EUR) per year, or $600 USD (or 600 EUR) monthly after the 1st year. If you don't renew, you keep the right to use Btune Studio for producing models internally in your organization forever, but you will not have access to newer versions. Note: Btune Studio is not open source software, but we deliver sources with it, so that you can build/fix it yourself. 
However, you cannot include it in your own software and distribute it without permission.

# Priority Support

For all licenses we offer an optional priority support pack that includes up to 3 hours of support per month for a monthly fee of $250 (or 250 EUR). For more support hours, please contact us. The contracted support can be used for training in the use of the software, or for consultation on compression for big data in general.

# How To Pay?

You can make the payments via the donations form for the Blosc project where, at the end of the form, you can specify the kind of license and support you are interested in. If, for some reason, you cannot (or don't want to) donate via NumFOCUS, please contact us; we can invoice you directly as well.

# Why donations via NumFOCUS?

NumFOCUS is a non-profit organization with a mission to promote open practices in research, data, and scientific computing. They serve as a fiscal sponsor for open-source projects and organize community-driven educational programs. The Blosc project has benefited significantly from the NumFOCUS Small Development Grant Program, and they have been instrumental in helping us channel donations. When you pay Btune fees by donating to the Blosc project via NumFOCUS, 15% of the amount goes to them as a fee. We believe this fee is fair and helps repay NumFOCUS for the services, support, and love they have shown us over the years. Your donation will not only strengthen the Blosc project, but also many other open-source projects. If you or your organization encounter issues donating via NumFOCUS, the Blosc development team can also produce invoices directly.

### Practical Example

In the figure below, you can see the most commonly predicted combinations of codecs and filters when optimizing for decompression performance on a subset of the Gaia dataset. The subset contains stars that are less than 10,000 light years away from our Sun (around 500 million).
The data is stored in an array of shape (20,000, 20,000, 20,000), with the number of stars in every cubic light year cell, resulting in a total uncompressed size of 7.3 TB. The following figure displays the speed that can be achieved when obtaining multiple multidimensional slices of the dataset along different axes, using the most efficient codecs and filters for various tradeoffs. The speed is measured in GB/s, so a higher value is better. The results indicate that the fastest combination is BloscLZ (compression level 5), closely followed by Zstd (compression level 9). Also, note how the fastest codecs, BloscLZ and Zstd, are not much affected by the number of threads used, which means that they are not CPU-bound, so small computers or laptops with low core counts will still be able to reach good speeds.

Finally, it is important to compare the compression ratios achieved by the different codecs and filters. In the following figure, we can see the file sizes obtained when using the most commonly predicted codecs and filters for various tradeoffs. The file sizes are measured in GB, so the lower, the better. In this case, the trained model recommends using Zstd (compression level 9) for a good balance between compression ratio and decompression speed, which is confirmed by the large difference in size. However, note that BitShuffle + Zstd (compression level 9) is not a good option in general, unless you are looking for the absolute best compression ratio. You can read more context about this example in our forthcoming article for SciPy 2023.

### Testimonials

Blosc2 and Btune are fantastic tools that allow us to efficiently compress and load large volumes of data for the development of AI algorithms for clinical applications. In particular, the new NDarray structure became immensely useful when dealing with large spectral video sequences. -- <NAME>, Div.
Intelligent Medical Systems, German Cancer Research Center (DKFZ)

Btune is a simple and highly effective tool. We tried this out with @LEAPSinitiative data and found some super useful spots in the parameter space of Blosc2 compression arguments! Awesome work, @Blosc2 team! -- <NAME>, Helmholtz AI Consultants Team Lead for Matter Research @HZDR_Dresden

### Contact

If you are interested in Btune and have any further questions, please contact us at <EMAIL>.

# Cárabos Coop. V.

Back in October 2002 the first version of PyTables was released. It was an attempt to store a large amount of tabular data while being able to provide a hierarchical structure around it. Here is the first public announcement by me:

> Hi!, PyTables is a Python package which allows dealing with HDF5 tables. Such a table is defined as a collection of records whose values are stored in fixed-length fields. PyTables is intended to be easy-to-use, and tried to be a high-performance interface to HDF5. To achieve this, the newest improvements in Python 2.2 (like generators or slots and metaclasses in brand-new classes) has been used. The Pyrex creation extension tool has been chosen to access the HDF5 library. This package should be platform independent, but until now I've tested it only with Linux. It's the first public release (v 0.1), and it is in alpha state.

As noted, PyTables was an early adopter of the generators and metaclasses that were introduced in the (then new) Python 2.2. It turned out that generators proved to be an excellent tool in many libraries related to data science.
Also, the adoption of Pyrex (which had been released just a few months earlier) greatly simplified the wrapping of native C libraries like HDF5. By that time there were not that many Python libraries for persisting tabular data in a format that allowed on-the-fly compression, and that gave PyTables a chance to be considered a good option. Some months later, PyCon 2003 accepted our first talk about PyTables. Since then, we (mainly me, with the support from S<NAME> on the documentation part) gave several presentations at different international conferences, like SciPy or EuroSciPy, and its popularity skyrocketed.

## Cárabos Coop. V.

In 2005, after receiving some good input on PyTables from some customers (including The HDF Group), we decided to try to make a living out of PyTables development and, together with <NAME> and <NAME>, we set out to create a cooperative called Cárabos Coop. V. Unfortunately, after 3 years of enthusiastic (and hard) work, we did not succeed in making the project profitable, and we had to close by 2008. During this period we managed to make a professional version of PyTables that was using out-of-core indexes (aka OPSI) as well as a GUI called ViTables. After closing Cárabos we open sourced both technologies, and we are happy to say that they are still in good use, most especially OPSI indexes, which are meant to perform fast queries in very large datasets; OPSI can still be used straight from pandas.

## Crew renewal

After Cárabos' closure, I (<NAME>) continued to maintain PyTables for a while, but in 2010 I expressed my desire to hand over the project, and shortly after, a new gang of people, including <NAME> and <NAME>, with <NAME> joining shortly after, stepped up and took on the challenge. This is where open source is strong: whenever a project faces difficulties, there are always people eager to jump on the wagon and continue providing traction for it.
## Attempt to merge with h5py

Meanwhile, the h5py package was getting great adoption, especially from the community that valued multidimensional arrays more than the tabular side of things. There was a feeling that we were duplicating efforts, and by 2016, <NAME>, with the help of <NAME>, organized a HackFest in Perth, Australia, where developers of h5py and PyTables gathered to attempt a merge of the two projects. After the initial work there, we continued this effort with a grant from NumFOCUS. Unfortunately, the effort proved to be rather complex, and we could not finish it properly (for the sake of curiosity, the attempt is still available). At any rate, we actively encourage people to use both packages depending on their needs; see, for example, the tutorial on h5py/PyTables that Tom Kooij taught at SciPy 2017.

## Satellite Projects: Blosc and numexpr

Like many other open source libraries, PyTables stands on the shoulders of giants, and makes use of amazing libraries like HDF5 or NumPy for doing its magic. In addition to that, and in order to let PyTables push against the hardware I/O and computational limits, it leverages two high-performance packages: Blosc and numexpr. Blosc is in charge of compressing data efficiently and at very high speeds to overcome the limits imposed by the I/O subsystem, while numexpr allows getting maximum performance from CPU computations when querying large tables. Both projects have been substantially improved by the PyTables crew, and actually, they are quite popular by themselves. Specifically, the Blosc compressor, though born out of the needs of PyTables, spun off as a standalone compressor (or meta-compressor, as it can use several codecs internally) meant to accelerate not just disk I/O, but also memory access in general.
In an unexpected twist, Blosc2 has developed its own multi-level data partitioning system, which goes beyond the single-level partitions in HDF5 and is currently helping PyTables to reach new performance heights. By teaming with the HDF5 library (and hence PyTables), Blosc2 is allowing PyTables to query 100 trillion rows in human timeframes.

## Thank you!

It has been a long way since PyTables started 20 years ago. We are happy to have helped in providing a useful framework for the data storage and querying needs of many people during the journey. Many thanks to all maintainers and contributors (either with code or donations) to the project; they are too numerous to mention all here, but if you are reading this and are among them, you should be proud to have contributed to PyTables. In hindsight, the road has certainly been bumpy, but it somehow worked and many difficulties have been surpassed; such is the magic and grace of Open Source!

# How second partition allows for Big Chunking

PyTables lets users easily handle data tables and array objects in a hierarchical structure. It also supports a variety of different data compression libraries through HDF5 filters. With the release of PyTables 3.8.0, the Blosc Development Team is pleased to announce the availability of Blosc2, acting not only as another HDF5 filter, but also as an additional partition tool (aka 'second partition') that complements the existing HDF5 chunking schema. By providing support for a second partition in HDF5, the chunks (aka the 'first partition') can be made larger, ideally fitting in the level 3 cache of modern CPUs (see below for the advantages of this). Meanwhile, Blosc2 will use its internal blocks (aka the second partition) as the minimum amount of data that has to be read and decompressed during data retrieval, no matter how small the hyperslice to be read is.
When Blosc2 is used to implement a second partition for data (referred to below as 'optimized Blosc2'), it can bypass the HDF5 pipeline for both writing and reading. This brings another degree of freedom when choosing the different internal I/O buffers, which can be of extraordinary importance in terms of performance and/or resource saving.

## How second partition allows for Big Chunking

Blosc2 in PyTables is meant for compressing data in big chunks (typically in the range of level 3 caches in modern CPUs, that is, 10 ~ 1000 MB). This has some interesting advantages:

* It reduces the number of entries in the HDF5 hash table. This means less resource consumption on the HDF5 side, so PyTables can handle larger tables using fewer resources.
* It speeds up compression and decompression because multithreading works better with more blocks. Remember that you can specify the number of threads to use via the MAX_BLOSC_THREADS parameter, or via the BLOSC_NTHREADS environment variable.

However, the traditional drawback of having large chunks is that getting small slices would take a long time, because the whole chunk has to be read completely and decompressed. Blosc2 surmounts that difficulty like this: it asks HDF5 where chunks start on-disk (via H5Dget_chunk_info()), and then accesses the internal blocks (aka the second partition) independently, instead of having to decompress the entire chunk. This effectively avoids penalizing access to small data slices.
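The core trick, decompressing only the blocks that overlap the requested slice, can be sketched in plain Python. The sketch below is an illustrative 1-d model of the two-level partitioning, not the actual C-Blosc2 code:

```python
def blocks_for_slice(chunk_len, block_len, start, stop):
    """Return the indices of the blocks (within one chunk) that must be
    read and decompressed to serve the 1-d slice [start, stop)."""
    assert 0 <= start < stop <= chunk_len
    first = start // block_len        # first block touched by the slice
    last = (stop - 1) // block_len    # last block touched by the slice
    return list(range(first, last + 1))

# A chunk of 1000 items split into 10 blocks of 100 items each:
# a small slice only touches 2 of the 10 blocks, so the other 8
# never need to be decompressed.
print(blocks_for_slice(1000, 100, 250, 380))  # [2, 3]
```

With a single-level partition, serving the same slice would mean decompressing the whole 1000-item chunk.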
In the graphic below you can see the second partition in action where, in order to retrieve the green slice, only blocks 2 and 3 need to be addressed and decompressed, instead of the (potentially much) larger chunks 0 and 1, which would be the case for the traditional single partition in HDF5:

In the benchmarks below we compare the performance of existing filters inside PyTables (like Zlib or Blosc(1)) against Blosc2, both working as a filter and in optimized mode, that is, bypassing the HDF5 filter pipeline completely.

## Benchmarks

The data used in this section have been fetched from the ERA5 database (see downloading script), which provides hourly estimates of a large number of atmospheric, land and oceanic climate variables. To build the tables used for reading and writing operations, we have used five different ERA5 datasets with the same shape (100 x 720 x 1440) and the same variables (latitude, longitude and time). Then, we have built a table with a column for each variable and each dataset, and added the latitude, longitude and time as columns (for a total of 8 cols). Finally, 100 x 720 x 1440 rows (more than 100 million) have been written to this table, which makes for a total data size of 3.1 GB.

We present different scenarios when comparing resource usage for writing and reading between the Blosc and Blosc2 filters, including the Blosc2 optimized versions. The first one is when PyTables chooses the chunkshape automatically (the default); as Blosc2 is geared towards large chunks, PyTables has been tuned to produce far larger chunks for Blosc2 in this case (Blosc and the other filters keep using the same chunk sizes as usual). Second, we will visit the case where the chunkshape is the same for both Blosc and Blosc2. Spoiler alert: we will see how Blosc2 behaves well (and sometimes much better) in both scenarios.
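As a quick sanity check on the table size quoted above, the arithmetic below reproduces the 3.1 GB figure; the 4-byte item size is our assumption (32-bit values per cell), chosen because it is consistent with the stated total:

```python
rows = 100 * 720 * 1440     # rows written to the table
cols = 8                    # 5 dataset variables + latitude, longitude, time
itemsize = 4                # assumed 4-byte (32-bit) values per cell

total_bytes = rows * cols * itemsize
print(rows)                           # 103680000 (more than 100 million)
print(round(total_bytes / 2**30, 1))  # 3.1 (GiB)
```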
### Automatic chunkshape

# Inkernel searches

We start by performing inkernel queries where the chunkshape for the table is chosen automatically by the PyTables machinery (see query script here). This size is the same for Blosc, Zlib and the uncompressed case, which all use 16384 rows (about 512 KB), whereas for Blosc2 the computed chunkshape is much larger: 1179648 rows (about 36 MB; this actually depends on the size of the L3 cache, which is automatically queried in real time by PyTables, and turns out to be exactly 36 MB for our CPU, an Intel i9-13900K). Now, we are going to analyze the memory and time usage of performing six inkernel searches, which means scanning the full table six times, in different cases:

* With no compression; size is 3.1 GB.
* Using HDF5 with ZLIB + Shuffle; size is 407 MB.
* Using the Blosc2 filter with the Zstd codec + Bitshuffle; size is 341 MB.

As we can see, the queries with no compression enabled do not take much time or memory, but they require storing the full 3.1 GB of data. When using ZLIB, which is the HDF5 default, not much memory is required either, but the queries take much more time (about 10x more), although the stored data is more than 6x smaller. When using Blosc, the time spent in (de-)compression is much less, but the queries still take more time (1.7x more) than the no-compression case; in addition, the compression ratio is quite close to the ZLIB case. However, the big jump comes when using Blosc2 with BloscLZ and Bitshuffle: although it uses just a little more memory than Blosc (a consequence of using larger chunks), in exchange it is quite a bit faster than the previous methods while achieving a noticeably better compression ratio. Actually, this combination is 1.3x faster than using no compression; this is one of the main features of Blosc (and even more of Blosc2): it can accelerate operations by using compression.
Finally, in case we want to improve compression further, Blosc2 can be used with the ZSTD codec, which achieves the best compression ratio here, in exchange for a slightly slower time (but still 1.15x faster than not using compression).

# PyTables inkernel vs pandas queries

Now that we have seen how Blosc2 can help PyTables achieve great query performance, we are going to compare it against pandas queries; to make things more interesting, we will be using the same NumExpr engine in both PyTables (where it is used in inkernel queries) and pandas. For this benchmark, we have been exploring the best configuration for speed, so we will be using 16 threads (for both Blosc2 and NumExpr) and the Shuffle filter instead of Bitshuffle; this leads to slightly lower compression ratios (see below), but now the goal is getting full speed, not reducing storage (keep in mind that pandas stores data in memory without compression). Here is how PyTables and pandas behave when doing the same 6 queries as in the previous section. And here is another plot for the same queries, but comparing raw I/O performance for a variety of codecs and filters:

As we can see, the queries using Blosc2 are generally faster (up to 2x faster) than not using compression. Furthermore, Blosc2 + LZ4 achieves times nearly as good as pandas, while the memory consumption is much smaller with Blosc2 (as much as 20x less in this case; more for larger tables indeed). This is remarkable, as it means that Blosc2 compression results in an acceleration that almost compensates for all the additional layers in PyTables (the disk subsystem and the HDF5 library itself). And in case you wonder how much compression ratio we have lost by switching from Bitshuffle to Shuffle, not much actually:

The takeaway message here is that, when used correctly, compression can make out-of-core queries go as fast as pure in-memory ones (even when using a high-performance tool-set like pandas + NumExpr).
# Writing and reading speed with automatic chunkshape

Now, let's have a look at the raw write and read performance. In this case we are going to compare Blosc, Blosc2 as an HDF5 filter, and the optimized Blosc2 (acting as a de facto second partition). Remember that in this section the chunkshape determination is still automatic and different for Blosc (16384 rows, about 512 KB) and Blosc2 (1179648 rows, about 36 MB). For writing, optimized Blosc2 is able to do the job faster and get better compression ratios than the others, mainly because it uses the HDF5 direct chunking mechanism, bypassing the overhead of the HDF5 pipeline. Note: the standard Blosc2 filter cannot make use of HDF5 direct chunking, but it still has an advantage when using bigger chunks, because they allow more threads to work in parallel and hence improve parallel (de-)compression.

The plot below shows how optimized Blosc2 is able to read the table faster and how the performance advantage grows as we use more threads. And now, let's compare the mean times of Blosc and optimized Blosc2 when reading a small slice. In this case, the Blosc chunkshape is much smaller, but optimized Blosc2 can still reach a similar speed since it uses blocks that are similar in size to Blosc chunks.

### Writing and reading speed when using the same chunkshape

In this scenario, we are choosing the same chunkshape (720 x 1440 rows, about 32 MB) for both Blosc and Blosc2. Let's see how this affects performance: The plot above shows how optimized Blosc2 manages to write the table faster (mainly because it can bypass the HDF5 pipeline), with the advantage being larger as more threads are used. Regarding reading, the optimized Blosc2 is able to perform faster too, and we continue to see the same trend of getting more speed when more threads are thrown at the task, with optimized Blosc2 scaling better. Finally, let's compare the mean times of Blosc and Blosc2 when reading a small slice in this same-chunkshape scenario.
In this case, since the chunkshapes are equal and large, optimized Blosc2 is much faster than the others, because it has the ability to decompress just the necessary internal blocks instead of the whole chunks. The Blosc and Blosc2 filters, however, still need to decompress the whole chunk, resulting in much worse times. See this effect below:

By allowing a second partition on top of the HDF5 layer, Blosc2 provides a great boost in PyTables I/O speed, especially when using big chunks (mainly when they fit in the L3 CPU cache). That means that you can read, write and query large compressed datasets in less time. Interestingly, Blosc2 compression can make these operations faster than using no compression at all, and even competitive with a pure in-memory solution like pandas (while consuming vastly less memory). On the other hand, there are situations where using big chunks would not be acceptable. For example, when using other HDF5 apps that do not support the optimized paths for the Blosc2 second partition, one is forced to use the plain Blosc2 filter; in this case, having large chunks would penalize the retrieval of small data slices too much. By the way, you can find a nice assortment of generic filters (including Blosc2) for HDF5 in the hdf5plugin library.

Also note that, in the current implementation, we have only provided optimized Blosc2 paths for the Table object in PyTables. That makes sense because Table is probably the most used entity in PyTables. Other chunked objects in PyTables (like EArray or CArray) could be optimized with Blosc2 in the future too (although that would require providing a *multidimensional* second partition for Blosc2). Last but not least, we would like to thank NumFOCUS and the other PyTables donors for providing the funds required to implement Blosc2 support in PyTables.
If you like what we are doing, and would like our effort to continue, you can support our work by donating to the PyTables project or Blosc project teams. Thank you!

# Going multidimensional in the first and the second partition

One of the latest and most exciting additions in the recently released C-Blosc2 2.7.0 is the Blosc2 NDim layer (or b2nd for short). It allows creating and reading n-dimensional datasets in an extremely efficient way thanks to a completely general n-dim two-level partitioning, allowing you to slice and dice arbitrarily large (and compressed!) data in a more fine-grained way. Remember that having a second partition means that we have better flexibility to fit the different partitions at the different CPU cache levels; typically the first partition (aka chunks) should be made to fit in the L3 cache, whereas the second partition (aka blocks) should rather fit in the L2/L1 caches (depending on whether compression ratio or speed is desired).

This capability was formerly part of Caterva, and now we are including it in C-Blosc2 for convenience. As a consequence, the Caterva and Python-Caterva projects are now officially deprecated, and all the action will happen on the C-Blosc2 / Python-Blosc2 side of things. Last but not least, Blosc2 NDim is gaining support for a full-fledged data type system like NumPy's. Keep reading.

## Going multidimensional in the first and the second partition

Blosc (both Blosc1 and Blosc2) has always had a two-level partition schema to leverage the different cache levels in modern CPUs and make compression happen as quickly as possible. This allows, among other things, creating and querying tables with 100 trillion rows when properly cooperating with existing libraries like HDF5.
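To get a feel for the magnitudes involved in fitting partitions to cache levels, here is a back-of-the-envelope computation using the float64 array partitioned later in this post (chunks of shape (10, 25, 50, 50) and blocks of shape (3, 5, 10, 20)):

```python
from math import prod

ITEMSIZE = 8                   # float64 items
chunks = (10, 25, 50, 50)      # first partition: should fit in the L3 cache
blocks = (3, 5, 10, 20)        # second partition: should fit in L1/L2 caches

chunk_bytes = prod(chunks) * ITEMSIZE
block_bytes = prod(blocks) * ITEMSIZE
print(chunk_bytes)   # 5000000  (~5 MB, a comfortable L3 fit)
print(block_bytes)   # 24000    (~23 KiB, an L1-sized block)
```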
With Blosc2 NDim we are taking this feature a step further: both partitions, known as chunks and blocks, are gaining multidimensional capabilities, meaning that one can split a dataset (super-chunk in Blosc2 parlance) into such n-dim cubes and sub-cubes:

With these more fine-grained cubes (aka partitions), it is possible to retrieve arbitrary n-dim slices more rapidly, because you don't have to decompress all the data required by the more coarse-grained partitions typical in other libraries. For example, for a 4-d array with a shape of (50, 100, 300, 250) with float64 items, we can choose a chunk with shape (10, 25, 50, 50) and a block with shape (3, 5, 10, 20), which makes for about 5 MB and 23 KB respectively. This way, a chunk fits comfortably in the L3 cache of most modern CPUs, and a block in an L1 cache (we are tuning for speed here). With that configuration, the NDArray object in the Python-Blosc2 package can slice the array as fast as shown below:

Of course, the double partition comes with some overhead during the creation of the partitions: more data movement and computation are required in order to place the data in the correct positions. However, we have done our best to minimize the data movement as much as possible. Below we can see how the speed of creation (write) of an array from scratch is still quite competitive: On the other hand, we can also see that, when reading the complete array, the double partitioning overhead is not really a big issue, and actually, it somewhat benefits the Blosc2 NDArray. All the plots above have been generated using the compare_getslice.py script, where we have been using the Zstd codec with compression level 1 (the fastest inside Blosc2) + the Shuffle filter for all the packages. The box used was an Intel i9-13900K CPU with 32 GB of RAM running an up-to-date Clear Linux distro.

## Data types are in!

Another important thing that we are adding to Blosc2 NDim is support for data types.
This was not previously supported in either C-Blosc2 or Caterva, where only a typesize was available to characterize the type. Now, the data type becomes a first class citizen for the b2nd metalayer. Metalayers in Blosc2 are stored in msgpack format, so it is pretty easy to introspect them by using external msgpack tools. For example, the b2nd file created in the section above contains this meta info:

```
$ dd bs=1 skip=112 count=1000 < compare_getslice.b2nd | msgpack2json -b
<snip>
[0,4,[50,100,300,250],[10,25,50,50],[3,5,10,20],0,"<f8"]
```

Here we can see the version of the metalayer (0) and the number of dimensions of the array (4), followed by the shape, chunk shape and block shape. Then comes the version of the dtype representation (it supports up to 127; the default is 0, meaning NumPy). Finally, we can spot the "<f8" string, i.e. a little-endian double precision data type. Note that all NumPy data types are supported by the Python wrapper of Blosc2; that means that with the NDArray object you can store e.g. datetimes (including units), or arbitrarily nested heterogeneous types, which allows creating multidimensional tables.

We have seen how, when sensibly chosen, the double partition provides a formidable boost in retrieving arbitrary slices from potentially large multidimensional arrays. In addition, the new support for arbitrary data types represents a powerful addition as well. Combine that with the excellent compression capabilities of Blosc2, and you will get a first class data container for many types of (numerical, but also textual) data. Finally, we will be releasing the new NDArray object in the forthcoming release of Python-Blosc2 very soon. This will enable full access to these shiny new features of Blosc2 from the convenience of Python. Stay tuned!

## Update (2023-02-24)

We have released Python-Blosc2 2.1.1 with the new NDArray object: https://github.com/Blosc/python-blosc2/releases/

Enjoy the meal!
# Creating a dynamically loaded filter

Updated 2023-08-03: added a new example of a dynamic filter for Python. Also, we have improved the content so that it can work more as a tutorial on how to make dynamic plugins for Blosc2. Finally, there is now support for dynamic plugins on Windows and macOS/ARM64. Enjoy!

The Blosc Development Team is excited to announce that the latest versions of C-Blosc2 and Python-Blosc2 include a great new feature: the ability to dynamically load plugins, such as codecs and filters. This means that these codecs or filters will only be loaded at runtime when they are needed. These C libraries can easily be distributed inside Python wheels and be used from both C and Python code without problems. Keep reading for a gentle introduction to this new feature.

## Creating a dynamically loaded filter

To learn how to create dynamic plugins, we'll use an already created example. Suppose you have a filter that you want Blosc2 to load dynamically only when it is used. In this case, you need to create a Python package to build a wheel and install it as a separate library. You can follow the structure used in blosc2_plugin_example to do this:

```
├── CMakeLists.txt
├── README.md
├── blosc2_plugin_name
│   └── __init__.py
├── examples
│   ├── array_roundtrip.py
│   ├── schunk_roundtrip.py
│   └── test_plugin.c
├── pyproject.toml
├── requirements-build.txt
├── setup.py
└── src
    ├── CMakeLists.txt
    ├── urfilters.c
    └── urfilters.h
```

Note that the project name has to be blosc2_ followed by the plugin name (plugin_example in this case). The corresponding functions will be defined in the src folder, in our case in urfilters.c, following the same format as the functions for user-defined filters (see https://github.com/Blosc/c-blosc2/blob/main/plugins/README.md for more information).
Here is the sample code:

```c
int blosc2_plugin_example_forward(const uint8_t* src, uint8_t* dest,
                                  int32_t size, uint8_t meta,
                                  blosc2_cparams *cparams, uint8_t id) {
  blosc2_schunk *schunk = cparams->schunk;
  for (int i = 0; i < size / schunk->typesize; ++i) {
    switch (schunk->typesize) {
      case 8:
        ((int64_t *) dest)[i] = ((int64_t *) src)[i] + 1;
        break;
      default:
        BLOSC_TRACE_ERROR("Item size %d not supported", schunk->typesize);
        return BLOSC2_ERROR_FAILURE;
    }
  }
  return BLOSC2_ERROR_SUCCESS;
}

int blosc2_plugin_example_backward(const uint8_t* src, uint8_t* dest,
                                   int32_t size, uint8_t meta,
                                   blosc2_dparams *dparams, uint8_t id) {
  blosc2_schunk *schunk = dparams->schunk;
  for (int i = 0; i < size / schunk->typesize; ++i) {
    switch (schunk->typesize) {
      case 8:
        ((int64_t *) dest)[i] = ((int64_t *) src)[i] - 1;
        break;
      default:
        BLOSC_TRACE_ERROR("Item size %d not supported", schunk->typesize);
        return BLOSC2_ERROR_FAILURE;
    }
  }
  return BLOSC2_ERROR_SUCCESS;
}
```

In addition to these functions, we need to create a filter_info (or codec_info or tune_info, as appropriate) named info. This variable will contain the names of the forward and backward functions. In our case, we will have:

```c
filter_info info = {"blosc2_plugin_example_forward", "blosc2_plugin_example_backward"};
```

For the functions to be found, the variable must always be named info. Furthermore, the info symbol and the forward and backward functions must be exported in order for Windows to find them. You can see all the details for doing that in the blosc2_plugin_example repository.

## Creating and installing the wheel

Once the project is done, you can create a wheel and install it locally:

```shell
python setup.py bdist_wheel
pip install dist/*.whl
```

This wheel can be uploaded to PyPI so that anybody can use it. Once tested and stable enough, you can request the Blosc Team to register it globally.
This way, an ID for the filter or codec will be booked so that the data will always be able to be encoded/decoded by the same code, ensuring portability.

## Registering the plugin in C-Blosc2

After installation, and prior to using it, you must register it in C-Blosc2. This step is necessary only if the filter is not already registered globally by C-Blosc2, which is likely if you are testing it or are not ready to share it with other users. To register it, follow the same process as registering a user-defined plugin, but leave the function pointers as NULL:

```c
blosc2_filter plugin_example;
plugin_example.id = 250;
plugin_example.name = "plugin_example";
plugin_example.version = 1;
plugin_example.forward = NULL;
plugin_example.backward = NULL;
blosc2_register_filter(&plugin_example);
```

When the filter is used for the first time, C-Blosc2 will automatically fill in the function pointers.

## Registering the plugin in Python-Blosc2

The same applies to Python-Blosc2. You can register the filter as follows:

```python
import blosc2
blosc2.register_filter(250, None, None, "plugin_example")
```

## Using the plugin in C-Blosc2

To use the plugin, simply set the filter ID in the filters pipeline, as you would do with user-defined filters:

```c
blosc2_cparams cparams = BLOSC2_CPARAMS_DEFAULTS;
cparams.filters[4] = 250;
cparams.filters_meta[4] = 0;
blosc2_dparams dparams = BLOSC2_DPARAMS_DEFAULTS;
blosc2_schunk* schunk;

/* Create a super-chunk container */
cparams.typesize = sizeof(int32_t);
blosc2_storage storage = {.cparams=&cparams, .dparams=&dparams};
schunk = blosc2_schunk_new(&storage);
```

To see a full usage example, refer to https://github.com/Blosc/blosc2_plugin_example/blob/main/examples/test_plugin.c. Keep in mind that the executable using the plugin must be launched from the same virtual environment where the plugin wheel was installed. When compressing or decompressing, C-Blosc2 will dynamically load the library and call its functions automatically (as depicted below).
Once you are satisfied with your plugin, you may choose to request that the Blosc Development Team register it as a global plugin. The only difference (aside from its ID number) is that users won't need to register it locally anymore. Also, a dynamic plugin will not be loaded until it is explicitly requested by a compression or decompression function, saving resources.

## Using the plugin in Python-Blosc2

As in C-Blosc2, just set the filter ID in the filters pipeline, as you would do with user-defined filters:

```python
shape = [100, 100]
size = int(np.prod(shape))
nparray = np.arange(size, dtype=np.int32).reshape(shape)
blosc2_array = blosc2.asarray(nparray, cparams={"filters": [250]})
```

To see a full usage example, refer to https://github.com/Blosc/blosc2_plugin_example/blob/main/examples/array_roundtrip.py.

C-Blosc2's ability to support dynamically loaded plugins allows the library to grow in features without increasing the size and complexity of the library itself. For more information about user-defined plugins, refer to this blog entry.

We appreciate your interest in our project! If you find our work useful and valuable, we would be grateful if you could support us by making a donation. Your contribution will help us continue to develop and improve Blosc packages, making them more accessible and useful for everyone. Our team is committed to creating high-quality and efficient software, and your support will help us to achieve this goal.

# Register for user plugins

Blosc has traditionally supported different filters and codecs for compressing data, and it was up to the user to choose one or another depending on her needs. However, there will always be scenarios where a richer variety of them could be useful.
Blosc2 now has a new plugin register capability in place so that the info about new filters and codecs can be persisted and transmitted to different machines. In this way Blosc can figure out the info of persistent plugins and use them so as to decompress the data without problems. Besides, the Blosc Development Team has implemented a centralized repository so that people can propose new plugins; if these plugins fulfill a series of requirements, they will be officially accepted and distributed within the C-Blosc2 library. This provides an easy path for extending C-Blosc2 and hence, better adapting to user needs.

The plugins that can be registered in the repository can be either codecs or filters:

* A codec is a program able to compress and decompress a data stream with the objective of reducing its size and enabling a faster transmission of data.
* A filter is a program that reorders the data without changing its size, so that the initial and final size are equal. A filter consists of an encoder and a decoder. The filter encoder is applied before the codec compressor (or codec encoder) in order to make data easier to compress, and the filter decoder is used after the codec decompressor (or codec decoder) to restore the original data arrangement.

Here is an example of how the compression process goes:

```
---------   filter encoder   ---------   codec encoder   ---------
|  src  | -----------------> |  tmp  | ---------------> | c_src |
---------                    ---------                   ---------
```

And the decompression process:

```
---------   codec decoder   ---------   filter decoder   ---------
| c_src | ----------------> |  tmp  | ----------------> |  src  |
---------                   ---------                    ---------
```

## Register for user plugins

User registered plugins are plugins that users register locally so that they can be used in the same way as Blosc official codecs and filters. This option is perfect for users that want to try new filters or codecs on their own.
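The two pipeline diagrams above can be sketched in Python, using zlib as a stand-in codec and a byte-reversing toy filter. This is purely illustrative (real Blosc2 filters are things like shuffle), but it shows the key contract: the filter never changes the size, and decompression applies the stages in reverse order:

```python
import zlib

# Toy filter: reversing the bytes keeps the size unchanged, as a filter must.
def filter_encoder(src: bytes) -> bytes:
    return src[::-1]

def filter_decoder(tmp: bytes) -> bytes:
    return tmp[::-1]

def compress(src: bytes) -> bytes:
    tmp = filter_encoder(src)      # src   -> tmp    (filter encoder)
    return zlib.compress(tmp)      # tmp   -> c_src  (codec encoder)

def decompress(c_src: bytes) -> bytes:
    tmp = zlib.decompress(c_src)   # c_src -> tmp    (codec decoder)
    return filter_decoder(tmp)     # tmp   -> src    (filter decoder)

src = b"some repetitive payload " * 100
assert decompress(compress(src)) == src
assert len(filter_encoder(src)) == len(src)  # filters never change the size
```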
The register process is quite simple. You just call the corresponding registration function (e.g. `blosc2_register_filter()`) and then the Blosc2 machinery will store its info alongside the rest of the plugins. After that, you will be able to access your plugin through its ID by setting the Blosc2 compression or decompression params:

```
                                      filters pipeline
                                      ----------------------
                                      | BLOSC_SHUFFLE    1 |
                                      ----------------------
                                      | BLOSC_BITSHUFFLE 2 |
                                      ----------------------
                                      | BLOSC_DELTA      3 |
                                      ----------------------
                                      | BLOSC_TRUNC      4 |
                                      ----------------------
                                      | ...                |
                                      ----------------------
                                      | BLOSC_NDCELL    32 |
                                      ----------------------
                                      | BLOSC_NDMEAN    33 |
                                      ----------------------
                                      | ...                |
                                      ----------------------
                                      | urfilter1      160 |
                                      ----------------------
blosc2_register_filter(urfilter2) --> | urfilter2      161 | --> cparams.filters[4] = 161;  // can be used now
                                      ----------------------
                                      | ...                |
                                      ----------------------
```

## Global register for Blosc plugins

Blosc global registered plugins are Blosc plugins that have passed through a selection process and a review by the Blosc Development Team. These plugins will be available for everybody using the C-Blosc2 library. You should consider this option if you think that your codec or filter could be useful for the community, or you just want to be able to use them with the upstream C-Blosc2 library.
The steps for registering an official Blosc plugin can be seen at: https://github.com/Blosc/c-blosc2/blob/main/plugins/README.md

Some well documented examples of this kind of plugin are the codec `ndlz` and the filters `ndcell` and `ndmean` in the C-Blosc2 GitHub repository: https://github.com/Blosc/c-blosc2/tree/main/plugins

## Compiling plugins examples using Blosc2 wheels

To ease the use of the registered filters, full-fledged C-Blosc2 binary libraries including the plugins functionality can be installed from python-blosc2 (>= 0.1.8) wheels:

```shell
$ pip install blosc2
Collecting blosc2
  Downloading blosc2-0.1.8-cp37-cp37m-manylinux2010_x86_64.whl (3.3 MB)
     |████████████████████████████████| 3.3 MB 4.7 MB/s
Installing collected packages: blosc2
Successfully installed blosc2-0.1.8
```

Once you have installed the C-Blosc2 libraries you can not only use the official Blosc filters and codecs, but also register and use your own. You can find directions on how to compile C files using the Blosc2 libraries inside these wheels at: https://github.com/Blosc/c-blosc2/blob/main/COMPILING_WITH_WHEELS.rst

### Using user plugins

To use your own plugins with the Blosc machinery you first have to register them through the corresponding registration function with an ID between `BLOSC2_USER_DEFINED_FILTERS_START` and `BLOSC2_USER_DEFINED_FILTERS_STOP`. Then you can use this ID in the compression parameters (`cparams.compcode`, `cparams.filters`) and decompression parameters (`dparams.compcode`, `dparams.filters`). For any doubts, you can see the whole process in the examples urcodecs.c and urfilters.c.
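The mechanics of user registration can be sketched as a toy model in plain Python: a registry maps IDs in the user-defined range to (forward, backward) callback pairs, and the pipeline later looks the callbacks up by ID. This is a model of the idea, not the C API; the range bounds and function names here are illustrative, not the real constants:

```python
# Toy model: IDs in the user-defined range map to (forward, backward) callbacks.
USER_START, USER_STOP = 160, 255   # illustrative bounds, not the real constants
_registry = {}

def register_filter(fid, forward, backward):
    """Reject IDs outside the user-defined range, then remember the callbacks."""
    if not USER_START <= fid <= USER_STOP:
        raise ValueError(f"ID {fid} outside the user-defined range")
    _registry[fid] = (forward, backward)

def apply_pipeline(fid, data):
    """Round-trip data through the filter registered under fid."""
    forward, backward = _registry[fid]
    return backward(forward(data))

# XOR with 0xFF is its own inverse, so the same callable serves both roles.
register_filter(250, lambda b: bytes(x ^ 0xFF for x in b),
                     lambda b: bytes(x ^ 0xFF for x in b))
assert apply_pipeline(250, b"blosc") == b"blosc"
```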
### Using Blosc official plugins

To use the Blosc official plugins, you must add the following lines in order to activate the plugins mechanism:

* `#include "blosc2/codecs-registry.h"` or `#include "blosc2/filters-registry.h"` (depending on the plugin type) at the beginning of the file
* `#include "blosc2/blosc2.h"` at the beginning of the file
* Call `blosc_init()` at the beginning of the main() function
* Call `blosc_destroy()` at the end of the main() function

Then you just have to use the ID of the plugin that you want to use in the compression parameters (`cparams.compcode`):

```c
#include "blosc2.h"
#include "../codecs-registry.h"

int main(void) {
    blosc_init();
    ...
    blosc2_cparams cparams = BLOSC2_CPARAMS_DEFAULTS;
    cparams.compcode = BLOSC_CODEC_NDLZ;
    cparams.compcode_meta = 4;
    ...
    blosc_destroy();
}
```

In case of doubts, you can see how the whole process works in working tests like: test_ndlz.c, test_ndcell.c, test_ndmean_mean.c and test_ndmean_repart.c.

## Final remarks

The plugin register functionality lets you use new codecs and filters within Blosc in an easy and quick way. To enhance the plugin experience, we have implemented a centralized plugin repository, so that users can propose their own plugins to be included in the standard C-Blosc2 library for the benefit of the whole Blosc community.

The Blosc Development Team kindly invites you to test the different plugins we already offer, but also to try your own. Besides, if you are willing to contribute one to the community, then apply to register it. This way everyone will be able to enjoy a variety of different and unique plugins. Hope you will enjoy this new and exciting feature!

Last but not least, a big thank you to the NumFOCUS foundation for providing a grant for implementing the register functionality.

# Bytedelta: Enhance Your Compression Toolset

Bytedelta is a new filter that calculates the difference between consecutive bytes in a data stream. Combined with the shuffle filter, it can improve compression for some datasets.
Bytedelta is based on initial work by <NAME>. TL;DR: we have a brief introduction to bytedelta in the 3rd section of this presentation.

The basic concept is simple: after applying the shuffle filter, compute the difference for each byte in the byte streams (also called splits in Blosc terminology).

The key insight enabling the bytedelta algorithm lies in its implementation, especially the use of SIMD on Intel/AMD and ARM NEON CPUs, making the filter overhead minimal. Although Aras's original code implemented shuffle and bytedelta together, it was limited to a specific item size (4 bytes), and making it more general would have required significant effort. Instead, for Blosc2 we built on the existing shuffle filter and created a new one that just does bytedelta. When we insert both into the Blosc2 filter pipeline (it supports up to 6 chained filters), we get a completely general filter that works for any type size supported by the existing shuffle filter. With that said, the implementation of the bytedelta filter has been a breeze thanks to the plugin support in C-Blosc2. You can also implement your own filters and codecs, or if you are too busy, we will be happy to assist you.

## Compressing ERA5 datasets

The best approach to evaluate a new filter is to apply it to real data. For this, we will use some of the ERA5 datasets, representing different measurements and labeled as "wind", "snow", "flux", "pressure" and "precip". They all contain floating point data (float32) and we will use a full month of each one, accounting for 2.8 GB per dataset. The diverse datasets exhibit rather dissimilar complexity, which proves advantageous for testing diverse compression scenarios.

For instance, the wind dataset appears as follows: the image shows the intricate network of winds across the globe on October 1, 1987 (the South American continent is visible on the right side of the map). Another example is the snow dataset; this time the image is quite flat.
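The byte-stream differencing at the heart of bytedelta can be sketched in pure Python: the encoder replaces each byte with its difference from the previous byte (modulo 256), and the decoder undoes this with a running sum. This only illustrates the idea, not Blosc2's SIMD implementation:

```python
def bytedelta(stream: bytes) -> bytes:
    """Encoder: keep the first byte, then store byte-to-byte differences (mod 256)."""
    out = bytearray(len(stream))
    prev = 0
    for i, b in enumerate(stream):
        out[i] = (b - prev) & 0xFF
        prev = b
    return bytes(out)

def bytedelta_inv(stream: bytes) -> bytes:
    """Decoder: a running sum (mod 256) restores the original bytes."""
    out = bytearray(len(stream))
    prev = 0
    for i, d in enumerate(stream):
        prev = (prev + d) & 0xFF
        out[i] = prev
    return bytes(out)

# Slowly varying byte streams turn into long runs of small values, which codecs love.
stream = bytes(range(100, 120))
assert bytedelta(stream)[1:] == b"\x01" * 19
assert bytedelta_inv(bytedelta(stream)) == stream
```

After shuffle, bytes of the same significance sit next to each other and often vary slowly, which is exactly the case where this differencing produces highly compressible runs.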
Here one can spot Antarctica, Greenland, North America and of course Siberia, which was already pretty full of snow by 1987-10-01 23:00:00.

Let's see how the new bytedelta filter performs when compressing these datasets. All the plots below have been made on a box with an Intel i9-13900K processor, 32 GB of RAM, running Clear Linux.

In the box plot above, we summarized the compression ratios for all datasets using different codecs (BLOSCLZ, LZ4, LZ4HC and ZSTD). The main takeaway is that using bytedelta yields the best median compression ratio: bytedelta achieves a median of 5.86x, compared to 5.62x for bitshuffle, 5.1x for shuffle, and 3.86x for codecs without filters. Overall, bytedelta seems to improve compression ratios here, which is good news.

While the compression ratio is a useful metric for evaluating the new bytedelta filter, there is more to consider. For instance, does the filter work better on some datasets than others? How does it impact the performance of different codecs? If you're interested in learning more, read on.

## Effects on various datasets

Let's see how different filters behave on various datasets. Here we see that, for datasets that compress easily (precip, snow), the behavior is quite different from those that are less compressible. For precip, bytedelta actually worsens results, whereas for snow, it slightly improves them. For less compressible datasets, the trend is more apparent, as can be seen in the zoomed-in image: in these cases, bytedelta clearly provides a better compression ratio, most notably with the pressure dataset, where the compression ratio using bytedelta has increased by 25% compared to the second best, bitshuffle (5.0x vs 4.0x, using ZSTD clevel 9). Overall, only one dataset (precip) shows an actual decrease. This is good news for bytedelta indeed.

Furthermore, Blosc2 supports another compression parameter for splitting the compressed streams into bytes with the same significance.
Normally, this leads to better speed but a lower compression ratio, so splitting is automatically activated for the faster codecs and disabled for the slower ones. However, it turns out that, when we activate splitting for all the codecs, we find a welcome surprise: bytedelta enables ZSTD to find significantly better compression paths, resulting in higher compression ratios. As can be seen, in general ZSTD + bytedelta can compress these datasets better. For the pressure dataset in particular, it goes up to 5.7x, 37% more than the second best, bitshuffle (5.7x vs 4.1x, using ZSTD clevel 9). Note also that this new high is 14% more than without splitting (the default). This shows that when compressing, you cannot just trust your intuition for setting compression parameters; there is no substitute for experimentation.

## Effects on different codecs

Now, let's see how bytedelta affects performance for different codecs and compression levels. Interestingly, on average bytedelta proves most useful for ZSTD and the higher compression levels of ZLIB (Blosc2 comes with ZLIB-NG). On the other hand, the fastest codecs (LZ4, BLOSCLZ) seem to benefit more from bitshuffle instead.

Regarding compression speed, in general we can see that bytedelta has little effect on performance: compression algorithms like BLOSCLZ, LZ4 and ZSTD can achieve extremely high speeds. LZ4 reaches and surpasses speeds of 30 GB/s, even when using bytedelta. BLOSCLZ and ZSTD can also exceed 20 GB/s, which is quite impressive.

Here one can see that, to achieve the highest compression ratios when combined with shuffle and bytedelta, the codecs require significant CPU resources; this is especially noticeable in the zoomed-in view, where capable compressors like ZSTD do require up to 2x more time to compress when using bytedelta, especially at high compression levels (6 and 9).

Now, let us examine decompression speeds: in general, decompression is faster than compression.
BLOSCLZ, LZ4 and LZ4HC can achieve over 100 GB/s. BLOSCLZ reaches nearly 180 GB/s using no filters on the snow dataset (the lowest-complexity one). The bytedelta filter noticeably reduces speed for most codecs, by up to 20% or more; ZSTD performance is less impacted.

## Achieving a balance between compression ratio and speed

Often, you want to achieve a good balance of compression and speed, rather than extreme values of either. We will conclude by showing plots depicting a combination of both metrics and how bytedelta influences them.

Let's first look at compression ratio versus compression speed: as we can see, the shuffle filter is typically found on the Pareto frontier (in this case, the points furthest to the right and top), with bytedelta coming next; in contrast, not using a filter at all sits on the opposite side. This is typically the case for most real-world numerical datasets.

Let's now group filters and datasets and calculate the mean values of combining (in this case, multiplying) the compression ratio and compression speed for all codecs. As can be seen, bytedelta works best with the wind dataset (which is quite complex), while bitshuffle does a good job in general for the others. The shuffle filter wins on the snow dataset (low complexity). If we group by compression level, we see that bytedelta works well with LZ4 here, and also with ZSTD at the lowest compression level (1).

Let's revisit the compression ratio versus decompression speed comparison, grouping the datasets together and calculating the mean for all codecs: in this case, shuffle generally prevails, with bitshuffle also doing reasonably well, winning on the precip and pressure datasets. Grouping the data by compression level, we find that bytedelta does not outperform shuffle in any scenario. This is unsurprising, since decompression is typically fast, and bytedelta's extra processing can decrease performance more easily.
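The Pareto frontier mentioned above (configurations where no other configuration is both faster and better-compressing) can be computed with a short sketch; the sample (ratio, speed) points here are made up for illustration, not taken from the benchmarks:

```python
def pareto_frontier(points):
    """Keep the (cratio, speed) points not dominated by any other point."""
    frontier = []
    for p in points:
        # p is dominated if some other point is at least as good on both axes.
        dominated = any(q[0] >= p[0] and q[1] >= p[1] and q != p for q in points)
        if not dominated:
            frontier.append(p)
    return frontier

# (compression ratio, compression speed) -- illustrative values only
points = [(5.8, 2.0), (5.1, 8.0), (3.9, 7.0), (5.0, 1.5)]
assert pareto_frontier(points) == [(5.8, 2.0), (5.1, 8.0)]
```

In the plots, the frontier is the upper-right envelope of the point cloud; filters whose points land there are the ones worth considering for that metric pair.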
We also see that LZ4HC (clevel 6 and 9) + shuffle strikes the best balance in this scenario.

Finally, let's consider the balance between compression ratio, compression speed and decompression speed: here the winners are shuffle and bitshuffle, depending on the dataset, but bytedelta never wins. If we group by compression levels, overall we see LZ4 as the clear winner at any level, especially when combined with shuffle; bytedelta did not win in any scenario here either.

## Benchmarks for other computers

We have run the benchmarks presented here on an assortment of different boxes. Also, find here a couple of runs using the i9-13900K box above, but with the "always split" and "never split" settings.

Reproducing the benchmarks is straightforward. First, download the data; the downloaded files will be in the new era5_pds/ directory. Then perform the series of benchmarks; this takes time, so grab a coffee and wait between 30 min (fast workstations) and 6 hours (slow laptops). Finally, run the plotting Jupyter notebook to explore your results. If you wish to share your results with the Blosc development team, we will appreciate hearing from you!

Bytedelta can achieve higher compression ratios on most datasets, especially in combination with capable codecs like ZSTD, with a maximum gain of 37% (pressure) over the next best option; only in one case (precip) does the compression ratio decrease. By compressing data more efficiently, bytedelta can reduce file sizes even further, accelerating transfer and storage. On the other hand, while bytedelta excels at achieving high compression ratios, this requires more computing power. We have found that for striking a good balance between high compression and fast compression/decompression, other filters, particularly shuffle, are superior overall.

We've learned that no single codec/filter combination is best for all datasets:

* ZSTD (clevel 9) + bytedelta achieves the best absolute compression ratio for most of the datasets.
* LZ4 + shuffle is well-balanced for all metrics (compression ratio, compression speed, decompression speed).
* LZ4 (clevel 6) and ZSTD (clevel 1) + shuffle strike a good balance of compression ratio and speed.
* LZ4HC (clevel 6 and 9) + shuffle balances compression ratio and decompression speed well.
* BLOSCLZ without filters achieves the best decompression speed (at least in one instance).

In summary, the optimal choice depends on your priorities.

As a final note, the Blosc development team is working on BTune, a new deep-learning tuner for Blosc2. BTune can be trained to automatically recognize different kinds of datasets and choose the optimal codec and filters to achieve the best balance, based on the user's needs. This would create a much more intelligent compressor that can adapt itself to your data faster, without requiring time-consuming manual tuning. If interested, contact us; we are looking for beta testers!
| | filter | clevel | codec | cspeed | dspeed | cratio | cratio * cspeed | cratio * dspeed | cratio * cspeed * dspeed |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| 0 | bitshuffle | 1 | BLOSCLZ | 8.521135 | 37.163098 | 12.394004 | 148.448453 | 501.408152 | 6253.902051 |
| 1 | bitshuffle | 1 | LZ4 | 9.909460 | 44.217161 | 12.307460 | 166.358353 | 576.647578 | 7988.992414 |
| 2 | bitshuffle | 1 | LZ4HC | 4.159958 | 43.733051 | 13.208067 | 70.761302 | 605.809677 | 3295.282965 |
| 3 | bitshuffle | 1 | ZLIB | 5.168855 | 13.940715 | 11.603630 | 77.505811 | 186.900015 | 1311.426899 |
| 4 | bitshuffle | 1 | ZSTD | 8.797193 | 26.460997 | 16.284555 | 204.352339 | 486.648768 | 6403.270085 |
| ... | ... | ... | ... | ... | ... | ... | ... | ... | ... |
| 75 | shuffle | 9 | BLOSCLZ | 7.090165 | 44.120614 | 11.680731 | 148.535757 | 703.817960 | 10643.652492 |
| 76 | shuffle | 9 | LZ4 | 10.490418 | 58.993959 | 11.310522 | 198.642207 | 867.525260 | 17219.304417 |
| 77 | shuffle | 9 | LZ4HC | 1.491062 | 67.126168 | 13.315973 | 32.524722 | 1046.584163 | 2841.998079 |
| 78 | shuffle | 9 | ZLIB | 0.601809 | 6.119631 | 16.804231 | 14.217234 | 107.050088 | 87.078144 |
| 79 | shuffle | 9 | ZSTD | 0.064413 | 26.539378 | 18.811941 | 1.190553 | 733.932425 | 47.901377 |

| | filter | clevel | codec | cspeed | dspeed | cratio | cratio * cspeed | cratio * dspeed | cratio * cspeed * dspeed |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| 0 | bitshuffle | 1 | BLOSCLZ | 12.761833 | 66.747672 | 9.173380 | 170.239436 | 675.753748 | 12870.859984 |
| 1 | bitshuffle | 1 | LZ4 | 12.205809 | 73.292183 | 11.894671 | 222.119679 | 922.913309 | 17639.433252 |
| 2 | bitshuffle | 1 | LZ4HC | 6.076772 | 67.928352 | 12.769271 | 98.755855 | 908.162400 | 7154.733080 |
| 3 | bitshuffle | 1 | ZLIB | 7.298479 | 25.029979 | 11.200472 | 106.809085 | 330.309560 | 3332.657153 |
| 4 | bitshuffle | 1 | ZSTD | 11.477392 | 44.646190 | 15.785637 | 273.428811 | 774.338758 | 13816.902657 |
| ... | ... | ... | ... | ... | ... | ... | ... | ... | ... |
| 75 | shuffle | 9 | BLOSCLZ | 10.061763 | 72.198181 | 11.338061 | 209.104856 | 1076.932715 | 22342.209134 |
| 76 | shuffle | 9 | LZ4 | 15.022658 | 92.064490 | 10.963307 | 266.011167 | 1250.285500 | 32369.603352 |
| 77 | shuffle | 9 | LZ4HC | 2.625860 | 94.784550 | 12.928206 | 61.242247 | 1482.265572 | 7769.986737 |
| 78 | shuffle | 9 | ZLIB | 1.053539 | 11.222278 | 16.333023 | 25.617114 | 199.231475 | 319.643959 |
| 79 | shuffle | 9 | ZSTD | 0.103401 | 41.698953 | 18.288814 | 1.900180 | 1087.433286 | 114.708453 |

# Introducing Blosc2 NDim

A Blosc2 NDim (b2nd) array contains both the data and a metalayer that stores the dimensional info for the array. Blosc2 NDim has a managed internal context that stores the different properties of each array.

## Context

* `typedef struct b2nd_context_s b2nd_context_t` — General parameters needed for the creation of a b2nd array.

* `b2nd_context_t *b2nd_create_ctx(const blosc2_storage *b2_storage, int8_t ndim, const int64_t *shape, const int32_t *chunkshape, const int32_t *blockshape, const char *dtype, int8_t dtype_format, const blosc2_metalayer *metalayers, int32_t nmetalayers)` — Create b2nd params.
  * `b2_storage` – The Blosc2 storage params.
  * `ndim` – The number of dimensions.
  * `shape` – The shape.
  * `chunkshape` – The chunk shape.
  * `blockshape` – The block shape.
  * `dtype` – The data type expressed as a string.
  * `dtype_format` – The data type format; the default is DTYPE_NUMPY_FORMAT.
  * `metalayers` – The memory pointer to the list of the metalayers desired.
  * `nmetalayers` – The number of metalayers.
  * Returns: a pointer to the new b2nd params, or NULL on failure. The returned pointer must be freed with `b2nd_free_ctx` when no longer used.

* `int b2nd_free_ctx(b2nd_context_t *ctx)` — Free the resources associated with a `b2nd_context_t`.
  * `ctx` – The b2nd context to free.
  * Note: this is safe in the sense that it will not free the schunk pointer in the internal cparams.
## Array

A Blosc2 NDim array is an n-dimensional object that can be managed by the associated functions. The functions let users perform different operations with these arrays, like copying, getting, setting or converting data into buffers or files and vice-versa. Furthermore, Blosc2 NDim only stores the type size (not the data type), and every item of an array has the same size.

The `b2nd_array_t` type struct is where all data and metadata for an array are stored:

* `struct b2nd_array_t` — A multidimensional array of data that can be compressed.

### Constructors

* `int b2nd_uninit(b2nd_context_t *ctx, b2nd_array_t **array)` — Create an uninitialized array.
* `int b2nd_empty(b2nd_context_t *ctx, b2nd_array_t **array)` — Create an empty array.
* `int b2nd_zeros(b2nd_context_t *ctx, b2nd_array_t **array)` — Create an array filled with zeros.
* `int b2nd_full(b2nd_context_t *ctx, b2nd_array_t **array, const void *fill_value)` — Create an array filled with a value.
  * `fill_value` – Default value for uninitialized portions of the array.

### From/To buffer

* `int b2nd_from_cbuffer(b2nd_context_t *ctx, b2nd_array_t **array, const void *buffer, int64_t buffersize)` — Create a b2nd array from a C buffer.
  * `buffer` – The buffer where the source data is stored.
* `int b2nd_to_cbuffer(const b2nd_array_t *array, void *buffer, int64_t buffersize)` — Extract the data from a b2nd array into a C buffer.

### From/To file

* `int b2nd_open(const char *urlpath, b2nd_array_t **array)` — Open a b2nd array from a file.
* `int b2nd_open_offset(const char *urlpath, b2nd_array_t **array, int64_t offset)` — Open a b2nd array from a file using an offset.
  * `offset` – The offset in the file where the b2nd array frame starts.
* `int b2nd_save(const b2nd_array_t *array, char *urlpath)` — Save a b2nd array into a specific urlpath.
  * `array` – The array to be saved.
  * `urlpath` – The urlpath where the array will be stored.
### From Blosc object

* `int b2nd_from_schunk(blosc2_schunk *schunk, b2nd_array_t **array)` — Create a b2nd array from a super-chunk. It can only be used if the array is backed by a blosc super-chunk.
  * `schunk` – The blosc super-chunk where the b2nd array is stored.
* `int b2nd_from_cframe(uint8_t *cframe, int64_t cframe_len, bool copy, b2nd_array_t **array)` — Create a b2nd array from an in-memory serialized frame.
  * `cframe` – The buffer of the in-memory array.
  * `cframe_len` – The size (in bytes) of the in-memory array.
  * `copy` – Whether b2nd should make a copy of the cframe data or not. The copy will be made to an internal sparse frame.
* `int b2nd_to_cframe(const b2nd_array_t *array, uint8_t **cframe, int64_t *cframe_len, bool *needs_free)` — Create a serialized super-chunk from a b2nd array.
  * `array` – The b2nd array to be serialized.
  * `cframe` – The pointer to the buffer where the in-memory array will be copied.
  * `cframe_len` – The length of the in-memory array buffer.
  * `needs_free` – Whether the buffer should be freed or not.

### Modify data

* `int b2nd_insert(b2nd_array_t *array, const void *buffer, int64_t buffersize, int8_t axis, int64_t insert_start)` — Insert a given buffer into an array, extending the given axis.
  * `array` – The array to insert the data in.
  * `buffer` – The buffer data to be inserted.
  * `axis` – The axis that will be extended.
  * `insert_start` – The position inside the axis at which to start inserting the data.
* `int b2nd_append(b2nd_array_t *array, const void *buffer, int64_t buffersize, int8_t axis)` — Append a buffer at the end of a b2nd array.
  * `array` – The array to append the data to.
  * `buffer` – The buffer data to be appended.
  * `axis` – The axis that will be extended to append the data.
* `int b2nd_delete(b2nd_array_t *array, int8_t axis, int64_t delete_start, int64_t delete_len)` — Delete items, shrinking the given axis by delete_len items.
  * `axis` – The axis to shrink.
  * `delete_start` – The start position in the axis from which to start deleting chunks.
  * `delete_len` – The number of items to delete from array->shape[axis].
The new shape[axis] will be the old array->shape[axis] minus delete_len.

### Copying

* `int b2nd_copy(b2nd_context_t *ctx, const b2nd_array_t *src, b2nd_array_t **array)` — Make a copy of the array data. The copy is done into a new b2nd array.
  * `src` – The array from which data is copied.
  * Note: the ndim and shape in ctx will be overwritten by the src ctx.

### Slicing

* `int b2nd_get_slice(b2nd_context_t *ctx, b2nd_array_t **array, const b2nd_array_t *src, const int64_t *start, const int64_t *stop)` — Get a slice from an array and store it into a new array.
  * `src` – The array from which the slice will be extracted.
  * Note: the ndim and shape from ctx will be overwritten by src and stop-start, respectively.
* `int b2nd_get_slice_cbuffer(const b2nd_array_t *array, const int64_t *start, const int64_t *stop, void *buffer, const int64_t *buffershape, int64_t buffersize)` — Get a slice from an array and store it into a C buffer.
  * `array` – The array from which the slice will be extracted.
* `int b2nd_set_slice_cbuffer(const void *buffer, const int64_t *buffershape, int64_t buffersize, const int64_t *start, const int64_t *stop, b2nd_array_t *array)` — Set a slice in a b2nd array using a C buffer.
  * `buffer` – The buffer where the slice data is.
  * `array` – The b2nd array where the slice will be set.
* `int b2nd_get_orthogonal_selection(const b2nd_array_t *array, int64_t **selection, int64_t *selection_size, void *buffer, int64_t *buffershape, int64_t buffersize)` — Get an orthogonal selection from an array.
  * `array` – The array to get the data from.
  * `buffer` – The buffer for getting the data.
  * See also `b2nd_set_orthogonal_selection`.
* `int b2nd_set_orthogonal_selection(b2nd_array_t *array, int64_t **selection, int64_t *selection_size, const void *buffer, int64_t *buffershape, int64_t buffersize)` — Set an orthogonal selection in an array.
  * `array` – The array to set the data to.
  * `buffer` – The buffer with the data for setting.
  * See also `b2nd_get_orthogonal_selection`.
* `int b2nd_squeeze(b2nd_array_t *array)` — Remove the single-dimensional entries from the shape of an array.
* `int b2nd_squeeze_index(b2nd_array_t *array, const bool *index)` — Remove selected single-dimensional entries from the shape of an array.
  * `index` – Indexes of the single-dimensional entries to remove.

### Utils

* `int b2nd_print_meta(const b2nd_array_t *array)` — Print metalayer parameters.
  * `array` – The array where the metalayer is stored.
* `int b2nd_serialize_meta(int8_t ndim, const int64_t *shape, const int32_t *chunkshape, const int32_t *blockshape, const char *dtype, int8_t dtype_format, uint8_t **smeta)` — Create the metainfo for the b2nd metalayer.
  * `shape` – The shape of the array.
  * `chunkshape` – The shape of the chunks in the array.
  * `blockshape` – The shape of the blocks in the array.
  * `dtype` – A string representation of the data type of the array.
  * `dtype_format` – The format of the dtype representation. 0 means NumPy.
  * `smeta` – The msgpack buffer (output).
* `static inline int b2nd_deserialize_meta(const uint8_t *smeta, int32_t smeta_len, int8_t *ndim, int64_t *shape, int32_t *chunkshape, int32_t *blockshape, char **dtype, int8_t *dtype_format)` — Read the metainfo in the b2nd metalayer.
  * `smeta` – The msgpack buffer (input).
  * `smeta_len` – The length of the smeta buffer (input).
  * `shape` – The shape of the array (output).
  * `chunkshape` – The shape of the chunks in the array (output).
  * `blockshape` – The shape of the blocks in the array (output).
  * `dtype` – A string representation of the data type of the array (output).
  * `dtype_format` – The format of the dtype representation (output). 0 means NumPy (the default).
  * Note: this function is inlined and available even when not linking with libblosc2.
* `int b2nd_resize(b2nd_array_t *array, const int64_t *new_shape, const int64_t *start)` — Resize the shape of an array.
  * `new_shape` – The new shape for the array.
  * `start` – The position at which the array will be extended or shrunk.
* `int b2nd_free(b2nd_array_t *array)` — Free an array.
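The shape/chunkshape/blockshape triple above defines Blosc2 NDim's two-level partitioning: chunks partition the array, and blocks partition each chunk. A quick way to reason about these parameters is to compute how many partitions a given configuration produces; this is a plain-Python sketch of the arithmetic (with made-up shapes), independent of the C API:

```python
from math import ceil, prod

def n_partitions(shape, partshape):
    """Partitions per dimension; partial partitions at the edges count as one."""
    return [ceil(s / p) for s, p in zip(shape, partshape)]

shape      = (1000, 1000)   # illustrative 2-D array
chunkshape = (250, 300)     # chunks partition the array
blockshape = (50, 60)       # blocks partition each chunk

chunks = n_partitions(shape, chunkshape)        # [4, 4]  -> 16 chunks
blocks = n_partitions(chunkshape, blockshape)   # [5, 5]  -> 25 blocks per chunk

assert prod(chunks) == 16
assert prod(blocks) == 25
```

Note the ceiling division: when a partition shape does not divide the shape evenly (1000 / 300 here), the trailing partial chunk still counts.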
# 100 Trillion Rows Baby

In the recently released PyTables 3.8.0 we added support for an optimized path for writing and reading Table instances with Blosc2 cooperating with the HDF5 machinery. In the blog post describing its implementation we showed how it collaborates with the HDF5 library so as to get top-class I/O performance.

Since then, we have become aware (thanks to <NAME>) of the introduction of the H5Dchunk_iter function in the HDF5 1.14 series. This supersedes the functionality of H5Dget_chunk_info, and makes retrieving the offsets of the chunks in the HDF5 file far more efficient, especially on files with a large number of chunks: the cost of H5Dchunk_iter is O(n), whereas iterating with H5Dget_chunk_info is O(n^2).

As we decided to implement support for H5Dchunk_iter in PyTables, we were curious about the sort of boost this could provide when reading tables created from real data. Keep reading for the experiments we've conducted about this.

## Effect on (relatively small) datasets

We start by reading a table with real data coming from our usual ERA5 database. We fetched one year (2000, to be specific) of data from five different ERA5 datasets with the same shape and the same coordinates (latitude, longitude and time). This data has been stored in a table with 8 columns, 32 bytes per row, and 9 billion rows (for a grand total of 270 GB); the number of chunks is about 8K. When using compression, the size is typically reduced by a factor between 6x (LZ4 + shuffle) and 9x (Zstd + bitshuffle); in any case, the resulting file size is larger than the RAM available in our box (32 GB), so we can safely exclude OS filesystem caching effects here.
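The complexity difference between the two chunk-lookup APIs can be illustrated with a toy cost model (not the HDF5 API): if looking up chunk k by index forces the library to walk the chunk list from the start, visiting all n chunks that way costs O(n^2) steps in total, while a single iteration visits each chunk once:

```python
def visit_by_index(n_chunks):
    """Toy model of per-index lookups: fetching chunk k walks k+1 entries."""
    steps = 0
    for k in range(n_chunks):
        steps += k + 1   # walk from the start of the chunk list to chunk k
    return steps

def visit_by_iter(n_chunks):
    """Toy model of a single iteration over all chunks: one step per chunk."""
    return n_chunks

n = 8_000   # roughly the chunk count of the small dataset above
assert visit_by_iter(n) == n                   # O(n)
assert visit_by_index(n) == n * (n + 1) // 2   # O(n^2): ~32 million steps here
```

At 845K or 85M chunks (the larger experiments below), the quadratic term is what turns per-index lookups from an annoyance into a bottleneck.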
Let's have a look at the results of reading this dataset inside PyTables (using shuffle only; for bitshuffle, results are just a bit slower):

We see how the improvement when using HDF5 1.14 (and hence H5Dchunk_iter) for reading data sequentially (via a PyTables query) is not that noticeable, but for random queries, the speedup is way more apparent. For comparison purposes, we added the figures for Blosc1+LZ4; one can notice the great job of Blosc2, especially in terms of random reads, due to the double partitioning and the HDF5 pipeline replacement.

## A trillion rows table

But 8K chunks is not such a large figure, and we are interested in using datasets with a larger number of chunks. As it is very time consuming to download large amounts of real data for our benchmarking purposes, we have decided to use synthetic data (basically, a bunch of zeros) just to explore how the new H5Dchunk_iter function scales when handling extremely large datasets in HDF5. Now we will be creating a large table with 1 trillion rows, with the same 8 fields as in the previous section, but whose values are zeros (remember, we are trying to push HDF5 / Blosc2 to their limits, so data content is not important here). With that, we are getting a table with 845K chunks, which is about 100x more than in the previous section. With this, let's have a look at the plots for the read speed:

As expected, we are getting significantly better results when using HDF5 1.14 (with H5Dchunk_iter) in both sequential and random cases. For comparison purposes, we have added Blosc1-Zstd, which does not make use of the new functionality. In particular, note how Blosc1 gets better results for random reads than Blosc2 with HDF5 1.12; as this is somehow unexpected, if you have an explanation, please chime in. It is worth noting that even though the data are made of zeros, Blosc2 still needs to compress/decompress the full 32 TB thing.
And the same goes for numexpr, which is used internally to perform the computations for the query in the sequential read case. This is testimonial of the optimization efforts in the data flow (i.e. avoiding as many memory copies as possible) inside PyTables.

## 100 trillion rows baby

As a final exercise, we took the previous experiment to the limit, and made a table with 100 trillion (that's a 1 followed by 14 zeros!) rows and measured different interesting aspects. It is worth noting that the total size for this case is 2.8 PB (petabytes), and the number of chunks is around 85 million (finally, large enough to fully demonstrate the scalability of the new H5Dchunk_iter functionality). Here is the speed of random and sequential reads:

As we can see, despite the large number of chunks, the sequential read speed actually improved, up to more than 75 GB/s. Regarding the random read latency, it increased to 60 µs; this is not too bad actually, as in real life the latencies during random reads in such large files are determined by the storage media, which is no less than 100 µs for the fastest SSDs nowadays. The script that creates the table and reads it can be found at bench/100-trillion-rows-baby.py. For the curious, it took about 24 hours to run on a Linux box wearing an Intel 13900K CPU with 32 GB of RAM. The memory consumption during writing was about 110 MB, whereas for reading it was a steady 1.7 GB (pretty good for a multi-petabyte table). The final size for the file has been 17 GB, for a compression ratio of more than 175000x.

As we have seen, the H5Dchunk_iter function recently introduced in HDF5 1.14 is confirmed to be of big help in performing reads more efficiently. We have also demonstrated that scalability is excellent, reaching phenomenal sequential speeds (exceeding 75 GB/s with synthetic data) that cannot be easily achieved by the most modern I/O subsystems, and hence avoiding unnecessary bottlenecks.
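A quick back-of-the-envelope check of the figures quoted in these two experiments, from the 32 bytes/row layout described above (pure Python arithmetic; note the quoted "2.8 PB" matches the binary, PiB, reading of the byte count):

```python
row_size = 32                         # 8 fields, 32 bytes per row

# The 1-trillion-row table: the "full 32 TB thing"
total_1t = 10**12 * row_size
print(total_1t == 32 * 10**12)        # True: 32 TB
print(total_1t // 845_000 // 2**20)   # mean chunk size implied by 845K chunks, in MiB

# The 100-trillion-row table
total_100t = 100 * 10**12 * row_size
print(round(total_100t / 2**50, 1))   # 2.8 (PiB), as quoted
print(total_100t // (17 * 10**9))     # compression ratio vs. the 17 GB file
```

The last line gives a ratio of roughly 188000x, consistent with the "more than 175000x" claimed in the text.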
Indeed, the combo HDF5 / Blosc2 is able to handle monster-sized tables (in the petabyte ballpark) without becoming a significant bottleneck in performance. Not that you need to handle such a sheer amount of data anytime soon, but it is always reassuring to use a tool that is not going to take a step back in daunting scenarios like this.

Date: 2022-12-23 — Blosc2 Meets PyTables: Making HDF5 I/O Performance Awesome

This is the classic API from Blosc1 with 32-bit limited containers.

## Main API#

* void blosc2_init(void)#
  Initialize the Blosc library environment. You must call this prior to any other Blosc call, unless you want Blosc to be used simultaneously in a multi-threaded environment, in which case you can use the blosc2_compress_ctx / blosc2_decompress_ctx pair.

* void blosc2_destroy(void)#
  Destroy the Blosc library environment. You must call this after you are done with all the Blosc calls, unless you have not used blosc2_init() before.

* int blosc1_compress(int clevel, int doshuffle, size_t typesize, size_t nbytes, const void *src, void *dest, size_t destsize)#
  Compress a block of data in the `src` buffer and return the size of the compressed block.

  Remark: Compression is memory safe and guaranteed not to write to `dest` more than what is specified in `destsize`. There is not a minimum for the `src` buffer size `nbytes`.

  Environment variables:
  * BLOSC_CLEVEL=(INTEGER): This will overwrite the `clevel` parameter before the compression process starts.
  * BLOSC_SHUFFLE=[NOSHUFFLE | SHUFFLE | BITSHUFFLE]: This will overwrite the `doshuffle` parameter before the compression process starts.
  * BLOSC_DELTA=(1|0): This will call blosc2_set_delta() before the compression process starts.
  * BLOSC_TYPESIZE=(INTEGER): This will overwrite the `typesize` parameter before the compression process starts.
* BLOSC_COMPRESSOR=[BLOSCLZ | LZ4 | LZ4HC | ZLIB | ZSTD]: This will call blosc1_set_compressor before the compression process starts.
  * BLOSC_SPLITMODE=(ALWAYS | NEVER | AUTO | FORWARD_COMPAT): This will call blosc1_set_splitmode() before the compression process starts.
  * BLOSC_BLOCKSIZE=(INTEGER): This will call blosc1_set_blocksize before the compression process starts. NOTE: The blocksize is a critical parameter with important restrictions in the allowed values, so use this with care.
  * BLOSC_NOLOCK=(ANY VALUE): This will call blosc2_compress_ctx under the hood, with the compressor, blocksize and numinternalthreads parameters set to the same as the last calls to blosc1_set_compressor, blosc1_set_blocksize and blosc2_set_nthreads. BLOSC_CLEVEL, BLOSC_SHUFFLE, BLOSC_DELTA and BLOSC_TYPESIZE environment vars will also be honored.

  Parameters:
  * clevel – The desired compression level; must be a number between 0 (no compression) and 9 (maximum compression).
  * doshuffle – Specifies whether the shuffle compression preconditioner should be applied or not. BLOSC_NOFILTER means not applying filters, BLOSC_SHUFFLE means applying shuffle at a byte level and BLOSC_BITSHUFFLE at a bit level (slower but may achieve better compression).
  * typesize – The number of bytes for the atomic type in the binary `src` buffer. This is mainly useful for the shuffle preconditioner. For implementation reasons, only 1 < typesize < 256 will allow the shuffle filter to work. When typesize is not in this range, shuffle will be silently disabled.
  * nbytes – The number of bytes to compress in the `src` buffer.
  * src – The buffer containing the data to compress.
  * dest – The buffer where the compressed data will be put; must have at least the size of `destsize`.
  * destsize – The size of the dest buffer. Blosc guarantees that if you set `destsize` to, at least, (`nbytes` + BLOSC2_MAX_OVERHEAD), the compression will always succeed.
  * Returns: The number of bytes compressed.
If the `src` buffer cannot be compressed into `destsize`, the return value is zero and you should discard the contents of the `dest` buffer. A negative return value means that an internal error happened. This should never happen; if you see this, please report it back together with the buffer data causing it and the compression settings.

* int blosc1_decompress(const void *src, void *dest, size_t destsize)#
  Decompress a block of compressed data in `src`, put the result in `dest` and return the size of the decompressed block.

  Remark: Decompression is memory safe and guaranteed not to write to the `dest` buffer more than what is specified in `destsize`.

  Remark: In case you want to keep under control the number of bytes read from source, you can call blosc1_cbuffer_sizes first to check whether the `nbytes` (i.e. the number of bytes to be read from the `src` buffer by this function) in the compressed buffer is ok with you.

  BLOSC_NOLOCK=(ANY VALUE): This will call blosc2_decompress_ctx under the hood, with the numinternalthreads parameter set to the same value as the last call to blosc2_set_nthreads.

  * src – The buffer to be decompressed.
  * dest – The buffer where the decompressed data will be put.
  * destsize – The size of the `dest` buffer.
  * Returns: The number of bytes decompressed. If an error occurs, e.g. the compressed data is corrupted or the output buffer is not large enough, then a negative value will be returned instead.

* int blosc1_getitem(const void *src, int start, int nitems, void *dest)#
  Get `nitems` (of `typesize` size) from the `src` buffer, starting at `start`. The items are returned in the `dest` buffer, which has to have enough space for storing all items.
  * src – The compressed buffer from which data will be decompressed.
  * start – The position of the first item (of `typesize` size) from where data will be retrieved.
  * nitems – The number of items (of `typesize` size) that will be retrieved.
  * dest – The buffer where the decompressed data retrieved will be put.
* Returns: The number of bytes copied to `dest`, or a negative value if some error happens.

* int16_t blosc2_get_nthreads(void)#
  Returns the current number of threads that are used for compression/decompression.

* int16_t blosc2_set_nthreads(int16_t nthreads)#
  Initialize a pool of threads for compression/decompression. If `nthreads` is 1, then the serial version is chosen and a possible previous existing pool is ended. If this is not called, `nthreads` is set to 1 internally.
  * nthreads – The number of threads to use.
  * Returns: The previous number of threads.

* const char *blosc1_get_compressor(void)#
  Get the current compressor that is used for compression. Returns the string identifying the compressor being used.

* int blosc1_set_compressor(const char *compname)#
  Select the compressor to be used. The supported ones are "blosclz", "lz4", "lz4hc", "zlib" and "zstd". If this function is not called, then "blosclz" will be used.
  * compname – The name identifier of the compressor to be set.
  * Returns: The code for the compressor (>=0). In case the compressor is not recognized, or there is no support for it in this build, it returns -1.

* void blosc2_set_delta(int dodelta)#
  Select the delta coding filter to be used. This call should always succeed.
  * dodelta – A value >0 will activate the delta filter. If 0, it will be de-activated.

* int blosc1_get_blocksize(void)#
  Get the internal blocksize to be used during compression. 0 means that an automatic blocksize is computed internally. Returns the size in bytes of the internal block size.

* void blosc1_set_blocksize(size_t blocksize)#
  Force the use of a specific blocksize. If 0, an automatic blocksize will be used (the default). The blocksize is a critical parameter with important restrictions in the allowed values, so use this with care.

* void blosc1_set_splitmode(int splitmode)#
  Set the split mode.
BLOSC_FORWARD_COMPAT offers reasonable forward compatibility, BLOSC_AUTO_SPLIT is for nearly optimal results (based on heuristics), and BLOSC_NEVER_SPLIT and BLOSC_ALWAYS_SPLIT are for the user experimenting when trying to get the best compression ratios and/or speed. If not called, the default mode is BLOSC_FORWARD_COMPAT_SPLIT.
  * splitmode – It can take the following values: BLOSC_FORWARD_COMPAT_SPLIT, BLOSC_AUTO_SPLIT, BLOSC_NEVER_SPLIT, BLOSC_ALWAYS_SPLIT.

* int blosc2_free_resources(void)#
  Free possible memory temporaries and thread resources. Use this when you are not going to use Blosc for a long while. Returns 0 if it succeeds; in case of problems releasing the resources, it returns a negative number.

## Compressed buffer information#

* void blosc1_cbuffer_sizes(const void *cbuffer, size_t *nbytes, size_t *cbytes, size_t *blocksize)#
  Get information about a compressed buffer, namely the number of uncompressed bytes (`nbytes`) and compressed bytes (`cbytes`). It also returns the `blocksize` (which is used internally for doing the compression by blocks). You only need to pass the first BLOSC_MIN_HEADER_LENGTH bytes of a compressed buffer for this call to work.
  * cbytes – The pointer where the number of compressed bytes will be put.
  * blocksize – The pointer where the block size will be put.

* void blosc1_cbuffer_metainfo(const void *cbuffer, size_t *typesize, int *flags)#
  Get information about a compressed buffer, namely the type size (`typesize`), as well as some internal `flags`. You can use the `BLOSC_DOSHUFFLE`, `BLOSC_DOBITSHUFFLE`, `BLOSC_DODELTA` and `BLOSC_MEMCPYED` symbols for extracting the interesting bits (e.g. `flags` & `BLOSC_DOSHUFFLE` says whether the buffer is byte-shuffled or not). This function should always succeed.
  * typesize – The pointer where the type size will be put.
  * flags – The pointer to the integer where the additional info is encoded.
The `flags` value is a set of bits, where the currently used ones are:
  * bit 0: whether the shuffle filter has been applied or not
  * bit 1: whether the internal buffer is a pure memcpy or not
  * bit 2: whether the bitshuffle filter has been applied or not
  * bit 3: whether the delta coding filter has been applied or not

* void blosc2_cbuffer_versions(const void *cbuffer, int *version, int *versionlz)#
  Get information about a compressed buffer, namely the internal Blosc format version (`version`) and the format for the internal Lempel-Ziv compressor used (`versionlz`). This function should always succeed.
  * version – The pointer where the Blosc format version will be put.
  * versionlz – The pointer where the Lempel-Ziv version will be put.

* const char *blosc2_cbuffer_complib(const void *cbuffer)#
  Get the compressor library/format used in a compressed buffer. Returns the string identifying the compressor library/format used.

* int blosc1_cbuffer_validate(const void *cbuffer, size_t cbytes, size_t *nbytes)#
  Checks that the compressed buffer starting at `cbuffer` of length `cbytes` may contain valid blosc compressed data, and that it is safe to call blosc1_decompress/blosc1_getitem. On success, returns 0 and sets `nbytes` to the size of the uncompressed data. This does not guarantee that the decompression function won't return an error, but it does guarantee that it is safe to attempt decompression.
  * cbytes – The number of compressed bytes.
  * On failure, returns a negative value.

## Utility functions#

* int blosc2_compcode_to_compname(int compcode, const char **compname)#
  Get the compressor name associated with the compressor code.
  * compcode – The code identifying the compressor.
  * compname – The pointer to a string where the compressor name will be put.
  * Returns: The compressor code. If the compressor code is not recognized, or there is no support for it in this build, -1 is returned.
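The flag bits documented for blosc1_cbuffer_metainfo are easy to decode. As an illustrative aside (Python rather than C; the constant values mirror the bit positions listed above, while real C code would use the BLOSC_* symbols from the headers):

```python
# Bit positions for the *flags* output of blosc1_cbuffer_metainfo
BLOSC_DOSHUFFLE = 0x1     # bit 0: byte shuffle applied
BLOSC_MEMCPYED = 0x2      # bit 1: buffer is a plain memcpy
BLOSC_DOBITSHUFFLE = 0x4  # bit 2: bit shuffle applied
BLOSC_DODELTA = 0x8       # bit 3: delta coding applied

def describe_flags(flags):
    """Return the names of the filters/markers encoded in *flags*."""
    names = []
    if flags & BLOSC_DOSHUFFLE:
        names.append("shuffle")
    if flags & BLOSC_MEMCPYED:
        names.append("memcpyed")
    if flags & BLOSC_DOBITSHUFFLE:
        names.append("bitshuffle")
    if flags & BLOSC_DODELTA:
        names.append("delta")
    return names

print(describe_flags(0b1001))  # ['shuffle', 'delta']
```

Note that shuffle and bitshuffle are mutually exclusive in practice, since only one of them can be selected at compression time.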
* int blosc2_compname_to_compcode(const char *compname)#
  Get the compressor code associated with the compressor name.
  * compname – The string containing the compressor name.
  * Returns: The compressor code. If the compressor name is not recognized, or there is no support for it in this build, -1 is returned instead.

* const char *blosc2_list_compressors(void)#
  Get a list of compressors supported in the current build. This function does not leak, so you should not free() the returned list. Returns the comma-separated string with the list of compressor names supported.

* const char *blosc2_get_version_string(void)#
  Get the version of Blosc in string format. Returns the string with the current Blosc version. Useful for dynamic libraries.

* int blosc2_get_complib_info(const char *compname, char **complib, char **version)#
  Get info from compression libraries included in the current build.
  * compname – The compressor name that you want info from.
  * complib – The pointer to a string where the compression library name, if available, will be put.
  * version – The pointer to a string where the compression library version, if available, will be put.
  * Returns: The code for the compression library (>=0). If it is not supported, this function returns -1. You are in charge of the `complib` and `version` strings; you should free() them so as to avoid leaks.

# The Blosc2 pipeline

The Blosc Development Team is happy to announce that the latest version of Python-Blosc2 allows user-defined Python functions all throughout its compression pipeline: you can use Python for building prefilters, postfilters, filters and codecs for Blosc2 and explore all its capabilities. And if done correctly (by using e.g. NumPy, numba, numexpr...), most of the time you won't even need to translate those into C for speed.

## The Blosc2 pipeline

The Blosc2 pipeline includes all the functions that are applied to the data until it is finally compressed (and decompressed back).
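Among the filters that can sit in that pipeline, the best known is Blosc's shuffle preconditioner. To make the idea concrete, here is a pure-Python byte shuffle and its inverse (illustrative only; the real implementation is vectorized C with SIMD):

```python
def shuffle(data: bytes, typesize: int) -> bytes:
    """Group byte 0 of every item, then byte 1, etc.  Same-significance
    bytes of neighboring values tend to be similar, helping the codec."""
    n = len(data) // typesize
    return bytes(data[i * typesize + b] for b in range(typesize) for i in range(n))

def unshuffle(data: bytes, typesize: int) -> bytes:
    """Inverse of shuffle(): re-interleave the byte planes."""
    n = len(data) // typesize
    return bytes(data[b * n + i] for i in range(n) for b in range(typesize))

# Four little-endian int32 values: 0, 1, 2, 3
raw = bytes([0, 0, 0, 0, 1, 0, 0, 0, 2, 0, 0, 0, 3, 0, 0, 0])
print(shuffle(raw, 4))   # b'\x00\x01\x02\x03' followed by 12 zero bytes
print(unshuffle(shuffle(raw, 4), 4) == raw)  # True: lossless round-trip
```

After shuffling, the twelve high-order zero bytes form one long run, which is exactly the kind of redundancy a fast LZ-style codec exploits.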
As can be seen in the image below, during compression the first function that can be applied to the data is the prefilter (if any), then the filters pipeline (with a maximum of six filters) and, last but not least, the codec itself. For decompressing, the order will be the other way around: first the codec, then the filters pipeline and finally the postfilter (if any).

## Defining prefilters and postfilters

A prefilter is a function that is applied to the SChunk each time you add data into it (e.g. when appending or updating). It is executed for each data block and receives three parameters: input, output and offset. For convenience, the input and output are presented as NumPy arrays; the former is a view of the input data and the latter is an empty NumPy array that must be filled (this is actually what the first filter in the filters pipeline will receive). Regarding the offset, it is an integer which indicates where the corresponding block begins inside the SChunk container. You can easily set a prefilter for a specific SChunk through a decorator. For example:

```python
schunk = blosc2.SChunk()

@schunk.prefilter(np.int64, np.float64)
def pref(input, output, offset):
    output[:] = input - np.pi + offset
```

This decorator requires the data types for the input (original data) and output NumPy arrays, which must be of the same itemsize. If you do not want the prefilter to be applied anymore, you can always remove it:

```python
schunk.remove_prefilter("pref")
```

As for the postfilters, they are applied at the end of the pipeline during decompression. A postfilter receives the same parameters as the prefilter and can be set in the same way:

```python
@schunk.postfilter(np.float64, np.int64)
def postf(input, output, offset):
    output[:] = input + np.pi - offset
```

In this case, the input data comes from the buffer returned by the filter pipeline, and the output data type should be the same as the original data (for a good round-trip).
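To see that round-trip requirement in action without Blosc2 itself, here is a pure-Python emulation of such a prefilter/postfilter pair operating block by block (plain lists stand in for the NumPy arrays and the SChunk, and the offset is the block's start position):

```python
import math

def pref(block, offset):
    # compression side: same transform as the prefilter example
    return [x - math.pi + offset for x in block]

def postf(block, offset):
    # decompression side: undoes pref() for a good round-trip
    return [x + math.pi - offset for x in block]

blocksize = 4
data = [float(i) for i in range(8)]

stored = []
for start in range(0, len(data), blocksize):       # data goes in, block by block
    stored += pref(data[start:start + blocksize], start)

restored = []
for start in range(0, len(stored), blocksize):     # data comes out, block by block
    restored += postf(stored[start:start + blocksize], start)

print(all(math.isclose(a, b, abs_tol=1e-12) for a, b in zip(restored, data)))  # True
```

The comparison uses `math.isclose` rather than exact equality because subtracting and re-adding pi in floating point is only reversible up to rounding error.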
You can also remove postfilters whenever you want:

```python
schunk.remove_postfilter("postf")
```

### Fillers

Before we move on to the next step in the pipeline, we need to introduce the fillers. A filler is similar to a prefilter, but with a twist. It is used to fill an empty SChunk, and you can pass to it any extra parameter you want, as long as it is a NumPy array, SChunk or Python scalar. All these extra parameters will arrive to the filler function as a tuple containing just the corresponding block slice for each parameter (except for the scalars, which are passed untouched). To declare a filler, you will also need to pass the inputs along with their data types. For example:

```python
@schunk.filler(((schunk0, dtype0), (ndarray1, dtype1), (py_scalar3, dtype2)), schunk_dtype)
def filler(inputs_tuple, output, offset):
    output[:] = inputs_tuple[0] - inputs_tuple[1] * inputs_tuple[2]
```

This will automatically append the data to the schunk, but applying the filler function first. Once the schunk is completely filled, the filler will be de-registered, so the next time you update some data it will not be executed; a filler is meant to build new SChunk objects from other containers.

## User-defined filters and codecs

The main difference between prefilters/postfilters and their filters/codecs counterparts is that the former are meant to run for a specific SChunk instance, whereas the latter can be locally registered and hence used in any SChunk. Let's start by describing the user-defined filters. A filter is composed of two functions: one for the compression process (forward), and another one for the decompression process (backward). Such functions receive the input and output as NumPy arrays of type uint8 (bytes), plus the filter meta and the SChunk instance the data belongs to. The forward function will fill the output with the modified data from input.
The backward function will be responsible for reversing the changes done by forward, so that the data returned at the end of the decompression is the same as the one received at the beginning of the compression. Check the drawing below:

So, once we have the pair of forward and backward functions defined, they can be registered locally by associating them with a filter ID between 160 and 255:

```python
blosc2.register_filter(id, forward, backward)
```

Now, we can use the user-defined filter in any SChunk instance by choosing the new local ID in the filters pipeline:

```python
schunk.cparams = {"filters": [id], "filters_meta": [meta]}
```

Regarding the user-defined codecs, they do not differ too much from their filter counterparts. The main difference is that, because their goal is to actually compress data, the corresponding functions (in this case encoder and decoder) will need to return the size in bytes of the compressed/decompressed data, respectively. This time the scheme would be:

To register a codec, you name it, assign an ID to it and pass the user-defined functions:

```python
blosc2.register_codec(codec_name, id, encoder, decoder)
```

And to use it you just use its ID in the cparams:

```python
schunk.cparams = {"codec": id, "codec_meta": meta}
```

We have seen how easily you can define your own filters and codecs for the Blosc2 compression pipeline. They are very easy to use because they conveniently wrap input and output data as NumPy arrays. Now, you can start experimenting with different filter/compression algorithms straight from Python. You can even come up with a library of such filters/codecs that can be used in all your data pipeline processing. Welcome to compression made easy!

See more examples in the repository. Find the complete documentation at: https://www.blosc.org/python-blosc2/python-blosc2.html.

This work has been made possible thanks to a Small Development Grant from NumFOCUS. NumFOCUS is a non-profit organization supporting open code for better science.
If you like this, consider giving a donation to them (and if you like our work, you can nominate it to our project too!). Thanks!

Date: 2022-12-15 — User Defined Pipeline for Python-Blosc2

# Retrieve data with __getitem__ and get_slice

The Blosc Development Team is happy to announce Python-Blosc2 0.4, its latest version. It comes with new and exciting features, like a handier way of setting, expanding and getting the data and metadata from a super-chunk (SChunk) instance. Contrary to chunks, a super-chunk can update and resize the data that it contains, supports user metadata, and does not have the 2 GB storage limitation. Additionally, you can now convert a SChunk into a contiguous, serialized buffer (aka cframe) and vice-versa; as a bonus, this serialization process also works with a NumPy array at blazing speed. Continue reading to get to know the new features a bit more in depth.

## Retrieve data with __getitem__ and get_slice

The most general way to store data in Python-Blosc2 is through a SChunk (super-chunk) object. Here the data is split into chunks of the same size. So until now, the only way of working with it was chunk by chunk (see the basics tutorial). With the new version, you can get general data slices with the handy __getitem__() method without having to mess with chunks manually. The only inconvenience is that this returns a bytes object, which is difficult for humans to read. To overcome this, we have also implemented the get_slice() method; it comes with two optional params: start and stop, for selecting the slice you are interested in. Also, you can pass any Python object supporting the Buffer Protocol to out, and it will be filled with the data slice.
One common example is to pass a NumPy array in the out argument:

```python
out_slice = numpy.empty(chunksize * nchunks, dtype=numpy.int32)
schunk.get_slice(out=out_slice)
```

We now have the out_slice NumPy array filled with the schunk data. Easy and effective.

## Set data with __setitem__

Similarly, if we would like to set data, we had different ways of doing it, e.g. with the update_chunk() or the update_data() methods. But those work, again, chunk by chunk, which was a bummer. That's why we also implemented the convenient __setitem__() method. In a similar way to the get_slice() method, the value to be set can be any Python object supporting the Buffer Protocol. In addition, this method is very flexible because it can not only set the data of an already existing slice of the SChunk, but also expand (and update at the same time) it. To do so, the stop param will set the new number of items in the SChunk:

```python
start = schunk_nelems - 123
stop = start + new_value.size  # new number of items
schunk[start:stop] = new_value
```

In the code above, the data between start and the SChunk current size will be updated, and then the data between the previous SChunk size and the new stop will be appended automatically for you. This is very handy indeed (note that the step parameter is not yet supported though).

## Serialize SChunk from/to a contiguous compressed buffer

Super-chunks can be serialized in two slightly different frame formats: contiguous and sparse. A contiguous frame (aka cframe) serializes the whole super-chunk data and metadata into a sequential buffer, whereas the sparse frame (aka sframe) uses a contiguous frame for the metadata, and the data is stored separately in so-called chunks. Here is how they look:

The contiguous and sparse formats come with their own pros and cons. A contiguous frame is ideal for transmitting / storing data as a whole buffer / file, while the sparse one is better suited to act as a store while a super-chunk is being built.
In this new version of Python-Blosc2 we have added a method to convert from a SChunk to a contiguous, serialized buffer:

```python
buf = schunk.to_cframe()
```

as well as a function to build back a SChunk instance from that buffer:

```python
schunk = schunk_from_cframe(buf)
```

This allows for a nice way to serialize / deserialize super-chunks for transmission / storage purposes. Also, for performance reasons and to reduce memory usage as much as possible, these functions avoid copies whenever they can. For example, the schunk_from_cframe function can build a SChunk instance without copying the data in the cframe. Such a capability makes the use of cframes very desirable whenever you have to transmit and re-create data from one machine to another in an efficient way.

## Serializing NumPy arrays

Last but not least, you can also serialize NumPy arrays with the new pair of functions pack_array2() / unpack_array2(). Although you could already do this with the existing pack_array() / unpack_array() functions, the new ones are much faster and do not have the 2 GB size limitation. To prove this, let's look at some benchmark results obtained with an Intel box (i9-10940X CPU @ 3.30GHz, 14 cores) running Ubuntu 22.04. In this benchmark we are comparing a plain NumPy array copy against compression/decompression through different compressors and functions (compress() / decompress(), pack_array() / unpack_array() and pack_array2() / unpack_array2()). The plots below are for 3 different data distributions: arange, linspace and random:

As can be seen, different codecs offer different compression ratios for the different distributions. Note in particular how linear distributions (arange for int64 and linspace for float64) can reach really high compression ratios (very low entropy).
Let's see the speed for compression / decompression; in order not to show too much info in this post, we will show just the plots for the linspace linear distribution:

Here we can see that the pack_array2() / unpack_array2() pair is consistently (much) faster than its previous version, pack_array() / unpack_array(). Despite that, the fastest is the compress() / decompress() pair; however, this does not serialize all the properties of a NumPy array, and has the limitation of not being able to compress data larger than 2 GB. You can test the speed on your box by running the pack_compress bench. Also, if you would like to store the contiguous buffer on-disk, you can directly use the pair of functions save_array() / load_array().

## Native performance on Apple M1 processors

Contrarily to Blosc1, Blosc2 comes with native support for ARM processors (it leverages the NEON SIMD instruction set there), and that means that it runs very fast on this architecture. As an example, let's see how the new pack_array2() / unpack_array2() pair works on an Apple M1 laptop (MacBook Air).

As can be seen, running Blosc2 in native arm64 mode on the M1 offers quite a bit more performance (especially during compression) than using the i386 emulation. If speed is important to you, and you have an M1/M2 processor, make sure that you are running Blosc2 in native mode (arm64).

The new features added to python-blosc2 offer an easy way of creating, getting, setting and expanding data by using a SChunk instance. Furthermore, you can get a contiguous compressed representation (aka cframe) of it and re-create it again later. And you can do the same with NumPy arrays (either in-memory or on-disk) faster than with the former functions, and even faster than a plain memcpy(). For more info on how to use these useful new features, see this Jupyter notebook tutorial. Finally, the complete documentation is at: https://www.blosc.org/python-blosc2/python-blosc2.html.
Thanks to <NAME> (@datapythonista) for his fine work and enthusiasm in helping us provide a better structure for the Blosc documentation!

This work has been possible thanks to a Small Development Grant from NumFOCUS. NumFOCUS is a non-profit organization supporting open code for better science. If you like the goal, consider giving a donation to NumFOCUS (you can optionally make it go to our project too, for which we would be very grateful indeed :-).

* blosc2.pack_array2(arr, chunksize=None, **kwargs)#
  Pack (compress) a NumPy array. This is faster, and it does not have a 2 GB limitation.
  * out – The serialized version (cframe) of the array. If urlpath is provided, the number of bytes in the file is returned instead.
  * Return type: bytes | int

  Examples

  >>> import numpy as np
  >>> a = np.arange(1e6)
  >>> cframe = blosc2.pack_array2(a)
  >>> len(cframe) < a.size * a.itemsize
  True

* blosc2.unpack_array2(cframe)#
  Unpack (decompress) a packed NumPy array via a cframe.
  * cframe (bytes) – The packed array to be restored.
  * Returns: out – The unpacked NumPy array.
  * Raises:
    * TypeError – If `cframe` is not of type bytes, or not a cframe.
    * RuntimeError – If some other problem is detected.

  Examples

  >>> import numpy as np
  >>> a = np.arange(1e6)
  >>> cframe = blosc2.pack_array2(a)
  >>> len(cframe) < a.size * a.itemsize
  True
  >>> a2 = blosc2.unpack_array2(cframe)
  >>> np.array_equal(a, a2)
  True

* blosc2.pack_array(arr, clevel=9, filter=Filter.SHUFFLE, codec=Codec.BLOSCLZ)#
  Pack (compress) a NumPy array. It is equivalent to the pack function.
* clevel (int (optional)) – The compression level from 0 (no compression) to 9 (maximum compression). The default is 9.
* filter (`Filter` (optional)) – The filter to be activated. The default is `Filter.SHUFFLE`.
* codec (`Codec` (optional)) – The compressor used internally in Blosc. The default is `Codec.BLOSCLZ`.
* Returns:
* out – The packed array in the form of a Python str / bytes object.
* Return type:
* str / bytes
* Raises:
* AttributeError – If `arr` does not have an itemsize attribute, or does not have a size attribute.
* ValueError – If typesize is not within the allowed range. If the pickled object size is larger than the maximum allowed buffer size. If `clevel` is not within the allowed range. If `codec` is not within the supported compressors.

Examples

> >>> import numpy as np >>> a = np.arange(1e6) >>> parray = blosc2.pack_array(a) >>> len(parray) < a.size*a.itemsize True

* blosc2.unpack_array(packed_array, **kwargs)#
* Restore a packed NumPy array.
* packed_array (str / bytes) – The packed array to be restored.
* **kwargs (fix_imports / encoding / errors) – Optional parameters that can be passed to the pickle.loads API.
* Returns:
* out – The decompressed data in the form of a NumPy array.
* Return type:
* ndarray
* Raises:
* TypeError – If `packed_array` is not of type bytes or string.

Examples

> >>> import numpy as np >>> a = np.arange(1e6) >>> parray = blosc2.pack_array(a) >>> len(parray) < a.size*a.itemsize True >>> a2 = blosc2.unpack_array(parray) >>> np.array_equal(a, a2) True >>> a = np.array(['å', 'ç', 'Þ']) >>> parray = blosc2.pack_array(a) >>> a2 = blosc2.unpack_array(parray) >>> np.array_equal(a, a2) True

* blosc2.save_array(arr, urlpath, chunksize=None, **kwargs)#
* Save a serialized NumPy array in urlpath.
* arr (ndarray) – The NumPy array to be saved.
* urlpath (str) – The path for the file where the array is saved.
* Returns:
* out – The number of bytes of the saved array.
* Return type:
* int

Examples

> >>> import numpy as np >>> a = np.arange(1e6) >>> serial_size = blosc2.save_array(a, "test.bl2", mode="w") >>> serial_size < a.size * a.itemsize True

# Announcing Support for Lossy ZFP Codec as a Plugin for C-Blosc2

Blosc supports different filters and codecs for compressing data, like e.g. the lossless NDLZ codec and the NDCELL filter. These have been developed explicitly to be used in multidimensional datasets (via Caterva or ironArray Community Edition). However, a lossy codec like ZFP allows for much better compression ratios at the expense of losing some precision in floating point data. Moreover, while NDLZ is only available for 2-dim datasets, ZFP can be used up to 4-dim datasets.

### How does ZFP work?

ZFP partitions datasets into cells of 4^(number of dimensions) values, i.e., 4, 16, 64, or 256 values for 1D, 2D, 3D, and 4D arrays, respectively. Each cell is then (de)compressed independently, and the resulting bit strings are concatenated into a single stream of bits. Furthermore, ZFP usually truncates each input value either to a fixed number of bits to meet a storage budget or to some variable length needed to meet a chosen error tolerance. For more info on how this works, see the zfp overview docs.

### ZFP implementation

Similarly to other registered Blosc2 official plugins, this codec is now available in the blosc2/plugins directory of the C-Blosc2 repository. However, as there are different modes for working with ZFP, there are several associated codec IDs (see later). So, in order to use ZFP, users just have to choose the ID for the desired ZFP mode among the ones listed in blosc2/codecs-registry.h. For more info on how the plugin selection mechanism works, see https://www.blosc.org/posts/registering-plugins/.

### ZFP modes

ZFP is a lossy codec, but it still lets the user choose the degree of the data loss.
There are different compression modes:

* BLOSC_CODEC_ZFP_FIXED_ACCURACY: The user chooses the absolute error of the truncation. For example, if the desired absolute error is 0.01, the loss for each value must be less than or equal to 0.01. With that, if 23.0567 is a value of the original input, after compressing and decompressing this input with error=0.01, the new value must be between 23.0467 and 23.0667.
* BLOSC_CODEC_ZFP_FIXED_PRECISION: The user specifies the maximum number of bit planes encoded during compression (relative error). That is, for each input value, the number of most significant bits that will be encoded.
* BLOSC_CODEC_ZFP_FIXED_RATE: The user chooses the size that the compressed cells must have based on the input cell size. For example, if the cell size is 2000 bytes and the user chooses ratio=50, the output cell size will be 50% of 2000 = 1000 bytes.

For more info, see: https://github.com/Blosc/c-blosc2/blob/main/plugins/codecs/zfp/README.md

### Benchmark: FIXED-ACCURACY vs FIXED-PRECISION vs FIXED-RATE modes

The dataset used in this benchmark is called precipitation_amount_1hour_Accumulation.zarr and has been fetched from the ERA5 database, which provides hourly estimates of a large number of atmospheric, land and oceanic climate variables. Specifically, the downloaded dataset in Caterva format has these parameters:

* ndim = 3
* type = float32
* shape = [720, 721, 1440]
* chunkshape = [128, 128, 256]
* blockshape = [16, 32, 64]

The next plots represent the compression results obtained by using the different ZFP modes to compress the aforementioned dataset. Note: it is important to remark that this is a specific dataset and the codec may perform differently on other ones. The parameter used for each test is annotated below the bars. For example, for the first column, the different compression modes are set up like this:

* FIXED-ACCURACY: for each input value, the absolute error is 10^(-6) = 0.000001.
* FIXED-PRECISION: for each input value, only the 20 most significant bits of the mantissa will be encoded.
* FIXED-RATE: the size of the output cells is 30% of the input cell size.

Although the FIXED-PRECISION mode does not obtain great results, we see that with the FIXED-ACCURACY mode we do get better performance as the absolute error increases. Similarly, we can see how the FIXED-RATE mode achieves the requested ratios, which is cool but, in exchange, the amount of data loss is unknown. Also, while the FIXED-ACCURACY and FIXED-RATE modes consume similar times, the FIXED-PRECISION mode, which seems to have less data loss, also takes longer to compress. Generally speaking, we can see that the more data loss (more data truncation) a mode achieves, the faster it operates.

### "Third partition"

One of the most appealing features of Caterva, besides supporting multi-dimensionality, is its implementation of a second partition, making slicing more efficient. As one of the distinctive characteristics of ZFP is that it compresses data in independent (and small) cells, we have been toying with the idea of implementing a third partition so that slicing of thin selections or just single-point selections can be made faster.

So, as part of the current ZFP implementation, we have combined the Caterva/Blosc2 partitioning (chunking and blocking) with the independent cell handling of ZFP, allowing single cells to be extracted from within the ZFP streams (blocks in Blosc jargon). Due to the properties and limitations of the different ZFP compression modes, we have been able to implement a sort of "third partition" just for the FIXED-RATE mode when used together with the blosc2_getitem_ctx() function. Such a combination of the existing partitioning and single-cell extraction is useful for selecting the data to extract more narrowly, saving time and memory.
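The cell arithmetic behind this "third partition" can be sketched with a few integer divisions (an illustrative layout only, not the actual Blosc2/ZFP code; the shapes are the ERA5 ones listed above):

```python
# Locate the 4x4x4 ZFP cell that holds one element of a 3-D chunked array.
chunkshape = (128, 128, 256)
blockshape = (16, 32, 64)
cellshape = (4, 4, 4)        # ZFP cells hold 4**ndim values, so 64 for 3-D

point = (100, 200, 300)      # the single element we want to read
chunk_idx = tuple(p // c for p, c in zip(point, chunkshape))     # which chunk
in_chunk = tuple(p % c for p, c in zip(point, chunkshape))
block_idx = tuple(q // b for q, b in zip(in_chunk, blockshape))  # which block
in_block = tuple(q % b for q, b in zip(in_chunk, blockshape))
cell_idx = tuple(r // s for r, s in zip(in_block, cellshape))    # which cell

# Decompressing one 4*4*4 = 64-value cell instead of a whole
# 16*32*64 = 32768-value block is a 512x reduction in decompressed data.
assert (16 * 32 * 64) // (4 * 4 * 4) == 512
```

This also hints at why the trick works only for FIXED-RATE: with a fixed ratio every compressed cell has the same known size, so the offset of cell_idx inside the block can be computed directly instead of decompressing the whole block.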
As an example, below you can see a comparison of the mean times that it takes to retrieve a bunch of single elements out of different multidimensional arrays from the ERA5 dataset (see above). Here we have used Blosc2 with a regular LZ4 codec compared against the FIXED-RATE mode of the new ZFP codec:

As you can see, using the ZFP codec in FIXED-RATE mode allows for a good improvement in speed (up to more than 2x) when retrieving single elements (or, in general, an amount not exceeding the cell size) in comparison with the existing codecs (even the fastest ones, like LZ4) inside Blosc2. As the performance improvement is of the same order as the random access time of modern SSDs, we anticipate that this could be a major win in scenarios where random access is important. If you are curious about how this new functionality performs for your own datasets and computer, you can use/adapt our benchmark code.

### Conclusions

The integration of ZFP as a codec plugin will greatly enhance the capabilities of lossy compression inside C-Blosc2. The current ZFP plugin supports different modes; if users want to specify the data loss during compression, it is recommended to use the FIXED-ACCURACY or FIXED-PRECISION modes (and most especially the former because of its better compression performance). However, if the priority is to get specific compression ratios without paying too much attention to the amount of data loss, one should use the FIXED-RATE mode, which lets you choose the desired compression ratio. This mode also has the advantage that a "third partition" can be used for improving random element access speed.

This work has been done thanks to a Small Development Grant from the NumFOCUS Foundation, to whom we are very grateful indeed. NumFOCUS is doing an excellent job in sponsoring scientific projects and you can donate to the Blosc project (or many others under the NumFOCUS umbrella) via its donation page.
Package ‘rkafkajars’ October 14, 2022

Type Package
Title External Jars Required for Package 'rkafka'
Version 1.2
Date 2021-12-01
Author <NAME> [aut,cre], <NAME> and Yammer Inc. [ctb,cph], Sun Microsystems Inc. [ctb,cph], <NAME> [ctb,cph], <NAME> [ctb], Junit [ctb,cph], The Apache Software Foundation [ctb,cph], Stefan Groschupf [ctb,cph], Taro L. Saito [ctb], EPFL Typesafe Inc. [ctb,cph], QOS.ch [ctb,cph]
Maintainer <NAME> <<EMAIL>>
Description The 'rkafkajars' package collects all the external jars required for the 'rkafka' package.
Depends rJava
Imports RUnit
SystemRequirements Java JDK 1.7 or higher
Copyright See file COPYRIGHTS
License Apache License 2.0 | file LICENSE
NeedsCompilation no
Repository CRAN
Date/Publication 2021-12-05 16:10:02 UTC

R topics documented:
rkafkajars

rkafkajars External Jars Required for Package ’rkafka’

Description

The ’rkafkajars’ package collects all the external jars required for the ’rkafka’ package. These external jars are quite large in size (12MB) and have a slow release cycle. By separating the Java and R development, the storage footprint on CRAN is reduced. By using the ’rJava’ package that links R and Java, we can use the excellent work already done by the folks and provide the functionality of using ’KAFKA’ in R.

Details

Package: rkafkajars
Type: Package
Version: 1.2
Date: 2021-12-01
License: Apache License 2.0

Author(s)

<NAME>
Maintainer: <NAME> <<EMAIL>>
Crate rusoto_pinpoint_sms_voice === Pinpoint SMS and Voice Messaging public facing APIs If you’re using the service, you’re probably looking for PinpointSmsVoiceClient and PinpointSmsVoice. Structs --- CallInstructionsMessageTypeAn object that defines a message that contains text formatted using Amazon Pinpoint Voice Instructions markup. CloudWatchLogsDestinationAn object that contains information about an event destination that sends data to Amazon CloudWatch Logs. CreateConfigurationSetEventDestinationRequestCreate a new event destination in a configuration set. CreateConfigurationSetEventDestinationResponseAn empty object that indicates that the event destination was created successfully. CreateConfigurationSetRequestA request to create a new configuration set. CreateConfigurationSetResponseAn empty object that indicates that the configuration set was successfully created. DeleteConfigurationSetEventDestinationRequestDeleteConfigurationSetEventDestinationResponseAn empty object that indicates that the event destination was deleted successfully. DeleteConfigurationSetRequestDeleteConfigurationSetResponseAn empty object that indicates that the configuration set was deleted successfully. EventDestinationAn object that defines an event destination. EventDestinationDefinitionAn object that defines a single event destination. GetConfigurationSetEventDestinationsRequestGetConfigurationSetEventDestinationsResponseAn object that contains information about an event destination. KinesisFirehoseDestinationAn object that contains information about an event destination that sends data to Amazon Kinesis Data Firehose. PinpointSmsVoiceClientA client for the Pinpoint SMS Voice API. PlainTextMessageTypeAn object that defines a message that contains unformatted text. SSMLMessageTypeAn object that defines a message that contains SSML-formatted text. 
SendVoiceMessageRequestSendVoiceMessageRequest SendVoiceMessageResponseAn object that contains the Message ID of a Voice message that was sent successfully. SnsDestinationAn object that contains information about an event destination that sends data to Amazon SNS. UpdateConfigurationSetEventDestinationRequestUpdateConfigurationSetEventDestinationRequest UpdateConfigurationSetEventDestinationResponseAn empty object that indicates that the event destination was updated successfully. VoiceMessageContentAn object that contains a voice message and information about the recipient that you want to send it to. Enums --- CreateConfigurationSetErrorErrors returned by CreateConfigurationSet CreateConfigurationSetEventDestinationErrorErrors returned by CreateConfigurationSetEventDestination DeleteConfigurationSetErrorErrors returned by DeleteConfigurationSet DeleteConfigurationSetEventDestinationErrorErrors returned by DeleteConfigurationSetEventDestination GetConfigurationSetEventDestinationsErrorErrors returned by GetConfigurationSetEventDestinations SendVoiceMessageErrorErrors returned by SendVoiceMessage UpdateConfigurationSetEventDestinationErrorErrors returned by UpdateConfigurationSetEventDestination Traits --- PinpointSmsVoiceTrait representing the capabilities of the Pinpoint SMS Voice API. Pinpoint SMS Voice clients implement this trait. Struct rusoto_pinpoint_sms_voice::PinpointSmsVoiceClient === ``` pub struct PinpointSmsVoiceClient { /* private fields */ } ``` A client for the Pinpoint SMS Voice API. Implementations --- source### impl PinpointSmsVoiceClient source#### pub fn new(region: Region) -> PinpointSmsVoiceClient Creates a client backed by the default tokio event loop. The client will use the default credentials provider and tls client. source#### pub fn new_with<P, D>(    request_dispatcher: D,     credentials_provider: P,     region: Region) -> PinpointSmsVoiceClient where    P: ProvideAwsCredentials + Send + Sync + 'static,    D: DispatchSignedRequest + Send + Sync + 'static, source#### pub fn new_with_client(client: Client, region: Region) -> PinpointSmsVoiceClient Trait Implementations --- source### impl Clone for PinpointSmsVoiceClient source#### fn clone(&self) -> PinpointSmsVoiceClient Returns a copy of the value. Read more 1.0.0 · source#### fn clone_from(&mut self, source: &Self) Performs copy-assignment from `source`.
Read more source### impl PinpointSmsVoice for PinpointSmsVoiceClient source#### fn create_configuration_set<'life0, 'async_trait>(    &'life0 self,     input: CreateConfigurationSetRequest) -> Pin<Box<dyn Future<Output = Result<CreateConfigurationSetResponse, RusotoError<CreateConfigurationSetError>>> + Send + 'async_trait>> where    'life0: 'async_trait,    Self: 'async_trait, Create a new configuration set. After you create the configuration set, you can add one or more event destinations to it. source#### fn create_configuration_set_event_destination<'life0, 'async_trait>(    &'life0 self,     input: CreateConfigurationSetEventDestinationRequest) -> Pin<Box<dyn Future<Output = Result<CreateConfigurationSetEventDestinationResponse, RusotoError<CreateConfigurationSetEventDestinationError>>> + Send + 'async_trait>> where    'life0: 'async_trait,    Self: 'async_trait, Create a new event destination in a configuration set. source#### fn delete_configuration_set<'life0, 'async_trait>(    &'life0 self,     input: DeleteConfigurationSetRequest) -> Pin<Box<dyn Future<Output = Result<DeleteConfigurationSetResponse, RusotoError<DeleteConfigurationSetError>>> + Send + 'async_trait>> where    'life0: 'async_trait,    Self: 'async_trait, Deletes an existing configuration set. source#### fn delete_configuration_set_event_destination<'life0, 'async_trait>(    &'life0 self,     input: DeleteConfigurationSetEventDestinationRequest) -> Pin<Box<dyn Future<Output = Result<DeleteConfigurationSetEventDestinationResponse, RusotoError<DeleteConfigurationSetEventDestinationError>>> + Send + 'async_trait>> where    'life0: 'async_trait,    Self: 'async_trait, Deletes an event destination in a configuration set. 
source#### fn get_configuration_set_event_destinations<'life0, 'async_trait>(    &'life0 self,     input: GetConfigurationSetEventDestinationsRequest) -> Pin<Box<dyn Future<Output = Result<GetConfigurationSetEventDestinationsResponse, RusotoError<GetConfigurationSetEventDestinationsError>>> + Send + 'async_trait>> where    'life0: 'async_trait,    Self: 'async_trait, Obtain information about an event destination, including the types of events it reports, the Amazon Resource Name (ARN) of the destination, and the name of the event destination. source#### fn send_voice_message<'life0, 'async_trait>(    &'life0 self,     input: SendVoiceMessageRequest) -> Pin<Box<dyn Future<Output = Result<SendVoiceMessageResponse, RusotoError<SendVoiceMessageError>>> + Send + 'async_trait>> where    'life0: 'async_trait,    Self: 'async_trait, Create a new voice message and send it to a recipient's phone number. source#### fn update_configuration_set_event_destination<'life0, 'async_trait>(    &'life0 self,     input: UpdateConfigurationSetEventDestinationRequest) -> Pin<Box<dyn Future<Output = Result<UpdateConfigurationSetEventDestinationResponse, RusotoError<UpdateConfigurationSetEventDestinationError>>> + Send + 'async_trait>> where    'life0: 'async_trait,    Self: 'async_trait, Update an event destination in a configuration set. An event destination is a location that you publish information about your voice calls to. For example, you can log an event to an Amazon CloudWatch destination when a call fails. Auto Trait Implementations --- ### impl !RefUnwindSafe for PinpointSmsVoiceClient ### impl Send for PinpointSmsVoiceClient ### impl Sync for PinpointSmsVoiceClient ### impl Unpin for PinpointSmsVoiceClient ### impl !UnwindSafe for PinpointSmsVoiceClient Blanket Implementations --- source### impl<T> Any for T where    T: 'static + ?Sized, source#### fn type_id(&self) -> TypeId Gets the `TypeId` of `self`. 
Read more source### impl<T> Borrow<T> for T where    T: ?Sized, const: unstable · source#### fn borrow(&self) -> &T Immutably borrows from an owned value. Read more source### impl<T> BorrowMut<T> for T where    T: ?Sized, const: unstable · source#### fn borrow_mut(&mut self) -> &mutT Mutably borrows from an owned value. Read more source### impl<T> From<T> for T const: unstable · source#### fn from(t: T) -> T Returns the argument unchanged. source### impl<T> Instrument for T source#### fn instrument(self, span: Span) -> Instrumented<SelfInstruments this type with the provided `Span`, returning an `Instrumented` wrapper. Read more source#### fn in_current_span(self) -> Instrumented<SelfInstruments this type with the current `Span`, returning an `Instrumented` wrapper. Read more source### impl<T, U> Into<U> for T where    U: From<T>, const: unstable · source#### fn into(self) -> U Calls `U::from(self)`. That is, this conversion is whatever the implementation of `From<T> for U` chooses to do. source### impl<T> Same<T> for T #### type Output = T Should always be `Self` source### impl<T> ToOwned for T where    T: Clone, #### type Owned = T The resulting type after obtaining ownership. source#### fn to_owned(&self) -> T Creates owned data from borrowed data, usually by cloning. Read more source#### fn clone_into(&self, target: &mutT) 🔬 This is a nightly-only experimental API. (`toowned_clone_into`)Uses borrowed data to replace owned data, usually by cloning. Read more source### impl<T, U> TryFrom<U> for T where    U: Into<T>, #### type Error = Infallible The type returned in the event of a conversion error. const: unstable · source#### fn try_from(value: U) -> Result<T, <T as TryFrom<U>>::ErrorPerforms the conversion. source### impl<T, U> TryInto<U> for T where    U: TryFrom<T>, #### type Error = <U as TryFrom<T>>::Error The type returned in the event of a conversion error. 
const: unstable · source#### fn try_into(self) -> Result<U, <U as TryFrom<T>>::ErrorPerforms the conversion. source### impl<T> WithSubscriber for T source#### fn with_subscriber<S>(self, subscriber: S) -> WithDispatch<Self> where    S: Into<Dispatch>, Attaches the provided `Subscriber` to this type, returning a `WithDispatch` wrapper. Read more source#### fn with_current_subscriber(self) -> WithDispatch<SelfAttaches the current default `Subscriber` to this type, returning a `WithDispatch` wrapper. Read more Trait rusoto_pinpoint_sms_voice::PinpointSmsVoice === ``` pub trait PinpointSmsVoice { fn create_configuration_set<'life0, 'async_trait>(         &'life0 self,         input: CreateConfigurationSetRequest     ) -> Pin<Box<dyn Future<Output = Result<CreateConfigurationSetResponse, RusotoError<CreateConfigurationSetError>>> + Send + 'async_trait>    where         'life0: 'async_trait,         Self: 'async_trait; fn create_configuration_set_event_destination<'life0, 'async_trait>(         &'life0 self,         input: CreateConfigurationSetEventDestinationRequest     ) -> Pin<Box<dyn Future<Output = Result<CreateConfigurationSetEventDestinationResponse, RusotoError<CreateConfigurationSetEventDestinationError>>> + Send + 'async_trait>    where         'life0: 'async_trait,         Self: 'async_trait; fn delete_configuration_set<'life0, 'async_trait>(         &'life0 self,         input: DeleteConfigurationSetRequest     ) -> Pin<Box<dyn Future<Output = Result<DeleteConfigurationSetResponse, RusotoError<DeleteConfigurationSetError>>> + Send + 'async_trait>    where         'life0: 'async_trait,         Self: 'async_trait; fn delete_configuration_set_event_destination<'life0, 'async_trait>(         &'life0 self,         input: DeleteConfigurationSetEventDestinationRequest     ) -> Pin<Box<dyn Future<Output = Result<DeleteConfigurationSetEventDestinationResponse, RusotoError<DeleteConfigurationSetEventDestinationError>>> + Send + 'async_trait>    where         'life0: 
'async_trait,         Self: 'async_trait; fn get_configuration_set_event_destinations<'life0, 'async_trait>(         &'life0 self,         input: GetConfigurationSetEventDestinationsRequest     ) -> Pin<Box<dyn Future<Output = Result<GetConfigurationSetEventDestinationsResponse, RusotoError<GetConfigurationSetEventDestinationsError>>> + Send + 'async_trait>    where         'life0: 'async_trait,         Self: 'async_trait; fn send_voice_message<'life0, 'async_trait>(         &'life0 self,         input: SendVoiceMessageRequest     ) -> Pin<Box<dyn Future<Output = Result<SendVoiceMessageResponse, RusotoError<SendVoiceMessageError>>> + Send + 'async_trait>    where         'life0: 'async_trait,         Self: 'async_trait; fn update_configuration_set_event_destination<'life0, 'async_trait>(         &'life0 self,         input: UpdateConfigurationSetEventDestinationRequest     ) -> Pin<Box<dyn Future<Output = Result<UpdateConfigurationSetEventDestinationResponse, RusotoError<UpdateConfigurationSetEventDestinationError>>> + Send + 'async_trait>    where         'life0: 'async_trait,         Self: 'async_trait; } ``` Trait representing the capabilities of the Pinpoint SMS Voice API. Pinpoint SMS Voice clients implement this trait. Required Methods --- source#### fn create_configuration_set<'life0, 'async_trait>(    &'life0 self,     input: CreateConfigurationSetRequest) -> Pin<Box<dyn Future<Output = Result<CreateConfigurationSetResponse, RusotoError<CreateConfigurationSetError>>> + Send + 'async_trait>> where    'life0: 'async_trait,    Self: 'async_trait, Create a new configuration set. After you create the configuration set, you can add one or more event destinations to it. 
source#### fn create_configuration_set_event_destination<'life0, 'async_trait>(    &'life0 self,     input: CreateConfigurationSetEventDestinationRequest) -> Pin<Box<dyn Future<Output = Result<CreateConfigurationSetEventDestinationResponse, RusotoError<CreateConfigurationSetEventDestinationError>>> + Send + 'async_trait>> where    'life0: 'async_trait,    Self: 'async_trait, Create a new event destination in a configuration set. source#### fn delete_configuration_set<'life0, 'async_trait>(    &'life0 self,     input: DeleteConfigurationSetRequest) -> Pin<Box<dyn Future<Output = Result<DeleteConfigurationSetResponse, RusotoError<DeleteConfigurationSetError>>> + Send + 'async_trait>> where    'life0: 'async_trait,    Self: 'async_trait, Deletes an existing configuration set. source#### fn delete_configuration_set_event_destination<'life0, 'async_trait>(    &'life0 self,     input: DeleteConfigurationSetEventDestinationRequest) -> Pin<Box<dyn Future<Output = Result<DeleteConfigurationSetEventDestinationResponse, RusotoError<DeleteConfigurationSetEventDestinationError>>> + Send + 'async_trait>> where    'life0: 'async_trait,    Self: 'async_trait, Deletes an event destination in a configuration set. source#### fn get_configuration_set_event_destinations<'life0, 'async_trait>(    &'life0 self,     input: GetConfigurationSetEventDestinationsRequest) -> Pin<Box<dyn Future<Output = Result<GetConfigurationSetEventDestinationsResponse, RusotoError<GetConfigurationSetEventDestinationsError>>> + Send + 'async_trait>> where    'life0: 'async_trait,    Self: 'async_trait, Obtain information about an event destination, including the types of events it reports, the Amazon Resource Name (ARN) of the destination, and the name of the event destination. 
source#### fn send_voice_message<'life0, 'async_trait>(    &'life0 self,     input: SendVoiceMessageRequest) -> Pin<Box<dyn Future<Output = Result<SendVoiceMessageResponse, RusotoError<SendVoiceMessageError>>> + Send + 'async_trait>> where    'life0: 'async_trait,    Self: 'async_trait, Create a new voice message and send it to a recipient's phone number. source#### fn update_configuration_set_event_destination<'life0, 'async_trait>(    &'life0 self,     input: UpdateConfigurationSetEventDestinationRequest) -> Pin<Box<dyn Future<Output = Result<UpdateConfigurationSetEventDestinationResponse, RusotoError<UpdateConfigurationSetEventDestinationError>>> + Send + 'async_trait>> where    'life0: 'async_trait,    Self: 'async_trait, Update an event destination in a configuration set. An event destination is a location that you publish information about your voice calls to. For example, you can log an event to an Amazon CloudWatch destination when a call fails. Implementors --- source### impl PinpointSmsVoice for PinpointSmsVoiceClient Struct rusoto_pinpoint_sms_voice::CallInstructionsMessageType === ``` pub struct CallInstructionsMessageType { pub text: Option<String>, } ``` An object that defines a message that contains text formatted using Amazon Pinpoint Voice Instructions markup. Fields --- `text: Option<String>`The language to use when delivering the message. For a complete list of supported languages, see the Amazon Polly Developer Guide. Trait Implementations --- source### impl Clone for CallInstructionsMessageType source#### fn clone(&self) -> CallInstructionsMessageType Returns a copy of the value. Read more 1.0.0 · source#### fn clone_from(&mut self, source: &Self) Performs copy-assignment from `source`. Read more source### impl Debug for CallInstructionsMessageType source#### fn fmt(&self, f: &mut Formatter<'_>) -> Result Formats the value using the given formatter. 
### impl Default for CallInstructionsMessageType
#### fn default() -> CallInstructionsMessageType
Returns the "default value" for a type.
### impl PartialEq<CallInstructionsMessageType> for CallInstructionsMessageType
#### fn eq(&self, other: &CallInstructionsMessageType) -> bool
This method tests for `self` and `other` values to be equal, and is used by `==`.
#### fn ne(&self, other: &CallInstructionsMessageType) -> bool
This method tests for `!=`.
### impl Serialize for CallInstructionsMessageType
#### fn serialize<__S>(&self, __serializer: __S) -> Result<__S::Ok, __S::Error> where __S: Serializer
Serialize this value into the given Serde serializer.
### impl StructuralPartialEq for CallInstructionsMessageType
Auto Trait Implementations
---
### impl RefUnwindSafe for CallInstructionsMessageType
### impl Send for CallInstructionsMessageType
### impl Sync for CallInstructionsMessageType
### impl Unpin for CallInstructionsMessageType
### impl UnwindSafe for CallInstructionsMessageType
Blanket Implementations
---
### impl<T> Any for T where T: 'static + ?Sized
#### fn type_id(&self) -> TypeId
Gets the `TypeId` of `self`.
### impl<T> Borrow<T> for T where T: ?Sized
#### fn borrow(&self) -> &T
Immutably borrows from an owned value.
### impl<T> BorrowMut<T> for T where T: ?Sized
#### fn borrow_mut(&mut self) -> &mut T
Mutably borrows from an owned value.
### impl<T> From<T> for T
#### fn from(t: T) -> T
Returns the argument unchanged.
### impl<T> Instrument for T
#### fn instrument(self, span: Span) -> Instrumented<Self>
Instruments this type with the provided `Span`, returning an `Instrumented` wrapper.
#### fn in_current_span(self) -> Instrumented<Self>
Instruments this type with the current `Span`, returning an `Instrumented` wrapper.
### impl<T, U> Into<U> for T where U: From<T>
#### fn into(self) -> U
Calls `U::from(self)`. That is, this conversion is whatever the implementation of `From<T> for U` chooses to do.
### impl<T> Same<T> for T
#### type Output = T
Should always be `Self`.
### impl<T> ToOwned for T where T: Clone
#### type Owned = T
The resulting type after obtaining ownership.
#### fn to_owned(&self) -> T
Creates owned data from borrowed data, usually by cloning.
#### fn clone_into(&self, target: &mut T)
🔬 This is a nightly-only experimental API (`toowned_clone_into`). Uses borrowed data to replace owned data, usually by cloning.
### impl<T, U> TryFrom<U> for T where U: Into<T>
#### type Error = Infallible
The type returned in the event of a conversion error.
#### fn try_from(value: U) -> Result<T, <T as TryFrom<U>>::Error>
Performs the conversion.
### impl<T, U> TryInto<U> for T where U: TryFrom<T>
#### type Error = <U as TryFrom<T>>::Error
The type returned in the event of a conversion error.
#### fn try_into(self) -> Result<U, <U as TryFrom<T>>::Error>
Performs the conversion.
### impl<T> WithSubscriber for T
#### fn with_subscriber<S>(self, subscriber: S) -> WithDispatch<Self> where S: Into<Dispatch>
Attaches the provided `Subscriber` to this type, returning a `WithDispatch` wrapper.
#### fn with_current_subscriber(self) -> WithDispatch<Self>
Attaches the current default `Subscriber` to this type, returning a `WithDispatch` wrapper.
Struct rusoto_pinpoint_sms_voice::CloudWatchLogsDestination
===
```
pub struct CloudWatchLogsDestination {
    pub iam_role_arn: Option<String>,
    pub log_group_arn: Option<String>,
}
```
An object that contains information about an event destination that sends data to Amazon CloudWatch Logs.
Fields
---
`iam_role_arn: Option<String>`: The Amazon Resource Name (ARN) of an AWS Identity and Access Management (IAM) role that is able to write event data to an Amazon CloudWatch destination.
`log_group_arn: Option<String>`: The name of the Amazon CloudWatch Log Group that you want to record events in.
Trait Implementations
---
CloudWatchLogsDestination implements `Clone`, `Debug`, `Default`, `Deserialize<'de>`, and `PartialEq` (derived implementations).
It also implements `Serialize` and `StructuralPartialEq`.
Auto Trait Implementations
---
`RefUnwindSafe`, `Send`, `Sync`, `Unpin`, `UnwindSafe`.
Blanket Implementations
---
Identical to the blanket implementations listed for `CallInstructionsMessageType` above, plus:
### impl<T> DeserializeOwned for T where T: for<'de> Deserialize<'de>
Struct rusoto_pinpoint_sms_voice::CreateConfigurationSetEventDestinationRequest
===
```
pub struct CreateConfigurationSetEventDestinationRequest {
    pub configuration_set_name: String,
    pub event_destination: Option<EventDestinationDefinition>,
    pub event_destination_name: Option<String>,
}
```
Create a new event destination in a configuration set.
Fields
---
`configuration_set_name: String`: ConfigurationSetName
`event_destination: Option<EventDestinationDefinition>`
`event_destination_name: Option<String>`: A name that identifies the event destination.
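As a sketch of how these pieces fit together, the request below wires a `CloudWatchLogsDestination` into a new event destination. The ARNs and names are placeholders, and the exact fields of `EventDestinationDefinition` (assumed here to include `cloud_watch_logs_destination` and `enabled`, with the rest defaulted) should be checked against that struct's own documentation:

```rust
// Hypothetical example: build a CreateConfigurationSetEventDestinationRequest
// that logs call events to CloudWatch Logs. All ARNs/names are placeholders.
use rusoto_pinpoint_sms_voice::{
    CloudWatchLogsDestination, CreateConfigurationSetEventDestinationRequest,
    EventDestinationDefinition,
};

fn build_request() -> CreateConfigurationSetEventDestinationRequest {
    // Destination that receives the event records.
    let cloudwatch = CloudWatchLogsDestination {
        iam_role_arn: Some("arn:aws:iam::111122223333:role/sms-voice-logs".to_string()),
        log_group_arn: Some("VoiceCallEvents".to_string()),
    };
    CreateConfigurationSetEventDestinationRequest {
        configuration_set_name: "MyConfigSet".to_string(),
        event_destination_name: Some("cloudwatch-logs".to_string()),
        event_destination: Some(EventDestinationDefinition {
            cloud_watch_logs_destination: Some(cloudwatch),
            enabled: Some(true),
            // Remaining optional fields left at their defaults.
            ..Default::default()
        }),
    }
}
```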
Trait Implementations
---
CreateConfigurationSetEventDestinationRequest implements `Clone`, `Debug`, `Default`, `PartialEq`, `Serialize`, and `StructuralPartialEq` (derived implementations).
Auto Trait Implementations
---
`RefUnwindSafe`, `Send`, `Sync`, `Unpin`, `UnwindSafe`.
Blanket Implementations
---
Identical to the blanket implementations listed for `CallInstructionsMessageType` above.
Struct rusoto_pinpoint_sms_voice::CreateConfigurationSetEventDestinationResponse
===
```
pub struct CreateConfigurationSetEventDestinationResponse {}
```
An empty object that indicates that the event destination was created successfully.
Trait Implementations
---
### impl Clone for CreateConfigurationSetEventDestinationResponse
### impl Debug for CreateConfigurationSetEventDestinationResponse
### impl Default for CreateConfigurationSetEventDestinationResponse
### impl<'de> Deserialize<'de> for CreateConfigurationSetEventDestinationResponse
Deserialize this value from the given Serde deserializer.
### impl PartialEq<CreateConfigurationSetEventDestinationResponse> for CreateConfigurationSetEventDestinationResponse
### impl StructuralPartialEq for CreateConfigurationSetEventDestinationResponse
Auto Trait Implementations
---
`RefUnwindSafe`, `Send`, `Sync`, `Unpin`, `UnwindSafe`.
Blanket Implementations
---
Identical to the blanket implementations listed for `CallInstructionsMessageType` above, plus:
### impl<T> DeserializeOwned for T where T: for<'de> Deserialize<'de>
Struct rusoto_pinpoint_sms_voice::CreateConfigurationSetRequest
===
```
pub struct CreateConfigurationSetRequest {
    pub configuration_set_name: Option<String>,
}
```
A request to create a new configuration set.
Fields
---
`configuration_set_name: Option<String>`: The name that you want to give the configuration set.
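A minimal sketch of sending this request through the client, assuming the `rusoto_core` and `tokio` crates alongside this one (the region and configuration set name are placeholders):

```rust
// Hypothetical example: create a configuration set via PinpointSmsVoiceClient.
use rusoto_core::Region;
use rusoto_pinpoint_sms_voice::{
    CreateConfigurationSetRequest, PinpointSmsVoice, PinpointSmsVoiceClient,
};

#[tokio::main]
async fn main() {
    // Client resolves credentials from the default provider chain.
    let client = PinpointSmsVoiceClient::new(Region::UsEast1);
    let request = CreateConfigurationSetRequest {
        configuration_set_name: Some("MyConfigSet".to_string()),
    };
    match client.create_configuration_set(request).await {
        // An empty CreateConfigurationSetResponse signals success.
        Ok(_response) => println!("configuration set created"),
        Err(e) => eprintln!("create_configuration_set failed: {}", e),
    }
}
```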
Trait Implementations
---
CreateConfigurationSetRequest implements `Clone`, `Debug`, `Default`, `PartialEq`, `Serialize`, and `StructuralPartialEq` (derived implementations).
Auto Trait Implementations
---
`RefUnwindSafe`, `Send`, `Sync`, `Unpin`, `UnwindSafe`.
Blanket Implementations
---
Identical to the blanket implementations listed for `CallInstructionsMessageType` above.
Struct rusoto_pinpoint_sms_voice::CreateConfigurationSetResponse
===
```
pub struct CreateConfigurationSetResponse {}
```
An empty object that indicates that the configuration set was successfully created.
Trait Implementations
---
### impl Clone for CreateConfigurationSetResponse
### impl Debug for CreateConfigurationSetResponse
### impl Default for CreateConfigurationSetResponse
### impl<'de> Deserialize<'de> for CreateConfigurationSetResponse
### impl PartialEq<CreateConfigurationSetResponse> for CreateConfigurationSetResponse
### impl StructuralPartialEq for CreateConfigurationSetResponse
Auto Trait Implementations
---
`RefUnwindSafe`, `Send`, `Sync`, `Unpin`, `UnwindSafe`.
Blanket Implementations
---
Identical to the blanket implementations listed for `CallInstructionsMessageType` above, plus:
### impl<T> DeserializeOwned for T where T: for<'de> Deserialize<'de>
Struct rusoto_pinpoint_sms_voice::DeleteConfigurationSetEventDestinationRequest
===
```
pub struct DeleteConfigurationSetEventDestinationRequest {
    pub configuration_set_name: String,
    pub event_destination_name: String,
}
```
Fields
---
`configuration_set_name: String`: ConfigurationSetName
`event_destination_name: String`: EventDestinationName
Trait Implementations
---
### impl Clone for DeleteConfigurationSetEventDestinationRequest
### impl Debug for DeleteConfigurationSetEventDestinationRequest
Formats the value using the given formatter.
### impl Default for DeleteConfigurationSetEventDestinationRequest
### impl PartialEq<DeleteConfigurationSetEventDestinationRequest> for DeleteConfigurationSetEventDestinationRequest
### impl Serialize for DeleteConfigurationSetEventDestinationRequest
### impl StructuralPartialEq for DeleteConfigurationSetEventDestinationRequest
Auto Trait Implementations
---
`RefUnwindSafe`, `Send`, `Sync`, `Unpin`, `UnwindSafe`.
Blanket Implementations
---
Identical to the blanket implementations listed for `CallInstructionsMessageType` above.
Struct rusoto_pinpoint_sms_voice::DeleteConfigurationSetEventDestinationResponse
===
```
pub struct DeleteConfigurationSetEventDestinationResponse {}
```
An empty object that indicates that the event destination was deleted successfully.
Trait Implementations
---
### impl Clone for DeleteConfigurationSetEventDestinationResponse
### impl Debug for DeleteConfigurationSetEventDestinationResponse
### impl Default for DeleteConfigurationSetEventDestinationResponse
### impl<'de> Deserialize<'de> for DeleteConfigurationSetEventDestinationResponse
### impl PartialEq<DeleteConfigurationSetEventDestinationResponse> for DeleteConfigurationSetEventDestinationResponse
source### impl StructuralPartialEq for DeleteConfigurationSetEventDestinationResponse Auto Trait Implementations --- ### impl RefUnwindSafe for DeleteConfigurationSetEventDestinationResponse ### impl Send for DeleteConfigurationSetEventDestinationResponse ### impl Sync for DeleteConfigurationSetEventDestinationResponse ### impl Unpin for DeleteConfigurationSetEventDestinationResponse ### impl UnwindSafe for DeleteConfigurationSetEventDestinationResponse Blanket Implementations --- source### impl<T> Any for T where    T: 'static + ?Sized, source#### fn type_id(&self) -> TypeId Gets the `TypeId` of `self`. Read more source### impl<T> Borrow<T> for T where    T: ?Sized, const: unstable · source#### fn borrow(&self) -> &T Immutably borrows from an owned value. Read more source### impl<T> BorrowMut<T> for T where    T: ?Sized, const: unstable · source#### fn borrow_mut(&mut self) -> &mutT Mutably borrows from an owned value. Read more source### impl<T> From<T> for T const: unstable · source#### fn from(t: T) -> T Returns the argument unchanged. source### impl<T> Instrument for T source#### fn instrument(self, span: Span) -> Instrumented<SelfInstruments this type with the provided `Span`, returning an `Instrumented` wrapper. Read more source#### fn in_current_span(self) -> Instrumented<SelfInstruments this type with the current `Span`, returning an `Instrumented` wrapper. Read more source### impl<T, U> Into<U> for T where    U: From<T>, const: unstable · source#### fn into(self) -> U Calls `U::from(self)`. That is, this conversion is whatever the implementation of `From<T> for U` chooses to do. source### impl<T> Same<T> for T #### type Output = T Should always be `Self` source### impl<T> ToOwned for T where    T: Clone, #### type Owned = T The resulting type after obtaining ownership. source#### fn to_owned(&self) -> T Creates owned data from borrowed data, usually by cloning. 
Read more source#### fn clone_into(&self, target: &mutT) 🔬 This is a nightly-only experimental API. (`toowned_clone_into`)Uses borrowed data to replace owned data, usually by cloning. Read more source### impl<T, U> TryFrom<U> for T where    U: Into<T>, #### type Error = Infallible The type returned in the event of a conversion error. const: unstable · source#### fn try_from(value: U) -> Result<T, <T as TryFrom<U>>::ErrorPerforms the conversion. source### impl<T, U> TryInto<U> for T where    U: TryFrom<T>, #### type Error = <U as TryFrom<T>>::Error The type returned in the event of a conversion error. const: unstable · source#### fn try_into(self) -> Result<U, <U as TryFrom<T>>::ErrorPerforms the conversion. source### impl<T> WithSubscriber for T source#### fn with_subscriber<S>(self, subscriber: S) -> WithDispatch<Self> where    S: Into<Dispatch>, Attaches the provided `Subscriber` to this type, returning a `WithDispatch` wrapper. Read more source#### fn with_current_subscriber(self) -> WithDispatch<SelfAttaches the current default `Subscriber` to this type, returning a `WithDispatch` wrapper. Read more source### impl<T> DeserializeOwned for T where    T: for<'de> Deserialize<'de>, Struct rusoto_pinpoint_sms_voice::DeleteConfigurationSetRequest === ``` pub struct DeleteConfigurationSetRequest { pub configuration_set_name: String, } ``` Fields --- `configuration_set_name: String`ConfigurationSetName Trait Implementations --- source### impl Clone for DeleteConfigurationSetRequest source#### fn clone(&self) -> DeleteConfigurationSetRequest Returns a copy of the value. Read more 1.0.0 · source#### fn clone_from(&mut self, source: &Self) Performs copy-assignment from `source`. Read more source### impl Debug for DeleteConfigurationSetRequest source#### fn fmt(&self, f: &mut Formatter<'_>) -> Result Formats the value using the given formatter. 
### impl Default for DeleteConfigurationSetRequest
#### fn default() -> DeleteConfigurationSetRequest
Returns the “default value” for a type.

### impl PartialEq<DeleteConfigurationSetRequest> for DeleteConfigurationSetRequest
#### fn eq(&self, other: &DeleteConfigurationSetRequest) -> bool
This method tests for `self` and `other` values to be equal, and is used by `==`.
#### fn ne(&self, other: &DeleteConfigurationSetRequest) -> bool
This method tests for `!=`.

### impl Serialize for DeleteConfigurationSetRequest
#### fn serialize<__S>(&self, __serializer: __S) -> Result<__S::Ok, __S::Error> where __S: Serializer
Serialize this value into the given Serde serializer.

### impl StructuralPartialEq for DeleteConfigurationSetRequest

Auto Trait Implementations
---

`RefUnwindSafe`, `Send`, `Sync`, `Unpin`, and `UnwindSafe` are all implemented for DeleteConfigurationSetRequest.

Blanket Implementations
---

The standard blanket implementations (`Any`, `Borrow`, `BorrowMut`, `From`, `Instrument`, `Into`, `Same`, `ToOwned`, `TryFrom`, `TryInto`, and `WithSubscriber`) apply exactly as listed for DeleteConfigurationSetEventDestinationResponse above.

Struct rusoto_pinpoint_sms_voice::DeleteConfigurationSetResponse
===

```
pub struct DeleteConfigurationSetResponse {}
```

An empty object that indicates that the configuration set was deleted successfully.
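As a sketch of how this request/response pair is typically used: the caller fills in a `DeleteConfigurationSetRequest`, passes it to the service client, and receives the empty response above on success. The `PinpointSmsVoice` trait, `PinpointSmsVoiceClient`, and the async `delete_configuration_set` method follow rusoto's usual naming conventions; the region and configuration set name below are placeholder assumptions, and the call requires AWS credentials at runtime.

```rust
use rusoto_core::Region;
use rusoto_pinpoint_sms_voice::{
    DeleteConfigurationSetRequest, PinpointSmsVoice, PinpointSmsVoiceClient,
};

#[tokio::main]
async fn main() {
    // Hypothetical configuration set name; substitute your own.
    let request = DeleteConfigurationSetRequest {
        configuration_set_name: "my-configuration-set".to_string(),
    };

    let client = PinpointSmsVoiceClient::new(Region::UsEast1);
    match client.delete_configuration_set(request).await {
        // The response is an empty struct, so success alone signals deletion.
        Ok(_response) => println!("configuration set deleted"),
        Err(err) => eprintln!("delete failed: {}", err),
    }
}
```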
Trait Implementations
---

### impl Clone for DeleteConfigurationSetResponse
#### fn clone(&self) -> DeleteConfigurationSetResponse
Returns a copy of the value.
#### fn clone_from(&mut self, source: &Self)
Performs copy-assignment from `source`.

### impl Debug for DeleteConfigurationSetResponse
#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter.

### impl Default for DeleteConfigurationSetResponse
#### fn default() -> DeleteConfigurationSetResponse
Returns the “default value” for a type.

### impl<'de> Deserialize<'de> for DeleteConfigurationSetResponse
#### fn deserialize<__D>(__deserializer: __D) -> Result<Self, __D::Error> where __D: Deserializer<'de>
Deserialize this value from the given Serde deserializer.

### impl PartialEq<DeleteConfigurationSetResponse> for DeleteConfigurationSetResponse
#### fn eq(&self, other: &DeleteConfigurationSetResponse) -> bool
This method tests for `self` and `other` values to be equal, and is used by `==`.
#### fn ne(&self, other: &Rhs) -> bool
This method tests for `!=`.

### impl StructuralPartialEq for DeleteConfigurationSetResponse

Auto Trait Implementations
---

`RefUnwindSafe`, `Send`, `Sync`, `Unpin`, and `UnwindSafe` are all implemented for DeleteConfigurationSetResponse.

Blanket Implementations
---

The standard blanket implementations (`Any`, `Borrow`, `BorrowMut`, `From`, `Instrument`, `Into`, `Same`, `ToOwned`, `TryFrom`, `TryInto`, and `WithSubscriber`) apply exactly as listed for DeleteConfigurationSetEventDestinationResponse above. Because the type implements `Deserialize`, the `DeserializeOwned` blanket implementation applies as well.

Struct rusoto_pinpoint_sms_voice::EventDestination
===

```
pub struct EventDestination {
    pub cloud_watch_logs_destination: Option<CloudWatchLogsDestination>,
    pub enabled: Option<bool>,
    pub kinesis_firehose_destination: Option<KinesisFirehoseDestination>,
    pub matching_event_types: Option<Vec<String>>,
    pub name: Option<String>,
    pub sns_destination: Option<SnsDestination>,
}
```

An object that defines an event destination.

Fields
---

`cloud_watch_logs_destination: Option<CloudWatchLogsDestination>`
`enabled: Option<bool>`: Indicates whether or not the event destination is enabled. If the event destination is enabled, then Amazon Pinpoint sends response data to the specified event destination.
`kinesis_firehose_destination: Option<KinesisFirehoseDestination>`
`matching_event_types: Option<Vec<String>>`
`name: Option<String>`: A name that identifies the event destination configuration.
`sns_destination: Option<SnsDestination>`

Trait Implementations
---

### impl Clone for EventDestination
#### fn clone(&self) -> EventDestination
Returns a copy of the value.
#### fn clone_from(&mut self, source: &Self)
Performs copy-assignment from `source`.

### impl Debug for EventDestination
#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter.

### impl Default for EventDestination
#### fn default() -> EventDestination
Returns the “default value” for a type.
### impl<'de> Deserialize<'de> for EventDestination
#### fn deserialize<__D>(__deserializer: __D) -> Result<Self, __D::Error> where __D: Deserializer<'de>
Deserialize this value from the given Serde deserializer.

### impl PartialEq<EventDestination> for EventDestination
#### fn eq(&self, other: &EventDestination) -> bool
This method tests for `self` and `other` values to be equal, and is used by `==`.
#### fn ne(&self, other: &EventDestination) -> bool
This method tests for `!=`.

### impl StructuralPartialEq for EventDestination

Auto Trait Implementations
---

`RefUnwindSafe`, `Send`, `Sync`, `Unpin`, and `UnwindSafe` are all implemented for EventDestination.

Blanket Implementations
---

The standard blanket implementations (`Any`, `Borrow`, `BorrowMut`, `From`, `Instrument`, `Into`, `Same`, `ToOwned`, `TryFrom`, `TryInto`, and `WithSubscriber`) apply exactly as listed for DeleteConfigurationSetEventDestinationResponse above. Because the type implements `Deserialize`, the `DeserializeOwned` blanket implementation applies as well.

Struct rusoto_pinpoint_sms_voice::EventDestinationDefinition
===

```
pub struct EventDestinationDefinition {
    pub cloud_watch_logs_destination: Option<CloudWatchLogsDestination>,
    pub enabled: Option<bool>,
    pub kinesis_firehose_destination: Option<KinesisFirehoseDestination>,
    pub matching_event_types: Option<Vec<String>>,
    pub sns_destination: Option<SnsDestination>,
}
```

An object that defines a single event destination.
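Since every field is an `Option`, an `EventDestinationDefinition` is usually built by setting the fields you need and filling the rest via `Default`. A minimal sketch follows; the `topic_arn` field on `SnsDestination`, the `"COMPLETED_CALL"` event type, and the topic ARN are assumptions for illustration, so check the Amazon Pinpoint SMS and Voice API reference for the actual event type list.

```rust
use rusoto_pinpoint_sms_voice::{EventDestinationDefinition, SnsDestination};

fn main() {
    let definition = EventDestinationDefinition {
        enabled: Some(true),
        // Assumed event type name; consult the API reference for valid values.
        matching_event_types: Some(vec!["COMPLETED_CALL".to_string()]),
        sns_destination: Some(SnsDestination {
            // Placeholder topic ARN.
            topic_arn: Some("arn:aws:sns:us-east-1:111122223333:example-topic".to_string()),
        }),
        // Leave the CloudWatch Logs and Kinesis Data Firehose destinations unset.
        ..Default::default()
    };
    println!("{:?}", definition);
}
```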
Fields
---

`cloud_watch_logs_destination: Option<CloudWatchLogsDestination>`
`enabled: Option<bool>`: Indicates whether or not the event destination is enabled. If the event destination is enabled, then Amazon Pinpoint sends response data to the specified event destination.
`kinesis_firehose_destination: Option<KinesisFirehoseDestination>`
`matching_event_types: Option<Vec<String>>`
`sns_destination: Option<SnsDestination>`

Trait Implementations
---

### impl Clone for EventDestinationDefinition
#### fn clone(&self) -> EventDestinationDefinition
Returns a copy of the value.
#### fn clone_from(&mut self, source: &Self)
Performs copy-assignment from `source`.

### impl Debug for EventDestinationDefinition
#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter.

### impl Default for EventDestinationDefinition
#### fn default() -> EventDestinationDefinition
Returns the “default value” for a type.

### impl PartialEq<EventDestinationDefinition> for EventDestinationDefinition
#### fn eq(&self, other: &EventDestinationDefinition) -> bool
This method tests for `self` and `other` values to be equal, and is used by `==`.
#### fn ne(&self, other: &EventDestinationDefinition) -> bool
This method tests for `!=`.

### impl Serialize for EventDestinationDefinition
#### fn serialize<__S>(&self, __serializer: __S) -> Result<__S::Ok, __S::Error> where __S: Serializer
Serialize this value into the given Serde serializer.

### impl StructuralPartialEq for EventDestinationDefinition

Auto Trait Implementations
---

`RefUnwindSafe`, `Send`, `Sync`, `Unpin`, and `UnwindSafe` are all implemented for EventDestinationDefinition.

Blanket Implementations
---

The standard blanket implementations (`Any`, `Borrow`, `BorrowMut`, `From`, `Instrument`, `Into`, `Same`, `ToOwned`, `TryFrom`, `TryInto`, and `WithSubscriber`) apply exactly as listed for DeleteConfigurationSetEventDestinationResponse above.

Struct rusoto_pinpoint_sms_voice::GetConfigurationSetEventDestinationsRequest
===

```
pub struct GetConfigurationSetEventDestinationsRequest {
    pub configuration_set_name: String,
}
```

Fields
---

`configuration_set_name: String`: ConfigurationSetName

Trait Implementations
---

### impl Clone for GetConfigurationSetEventDestinationsRequest
#### fn clone(&self) -> GetConfigurationSetEventDestinationsRequest
Returns a copy of the value.
#### fn clone_from(&mut self, source: &Self)
Performs copy-assignment from `source`.

### impl Debug for GetConfigurationSetEventDestinationsRequest
#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter.

### impl Default for GetConfigurationSetEventDestinationsRequest
#### fn default() -> GetConfigurationSetEventDestinationsRequest
Returns the “default value” for a type.
### impl PartialEq<GetConfigurationSetEventDestinationsRequest> for GetConfigurationSetEventDestinationsRequest
#### fn eq(&self, other: &GetConfigurationSetEventDestinationsRequest) -> bool
This method tests for `self` and `other` values to be equal, and is used by `==`.
#### fn ne(&self, other: &GetConfigurationSetEventDestinationsRequest) -> bool
This method tests for `!=`.

### impl Serialize for GetConfigurationSetEventDestinationsRequest
#### fn serialize<__S>(&self, __serializer: __S) -> Result<__S::Ok, __S::Error> where __S: Serializer
Serialize this value into the given Serde serializer.

### impl StructuralPartialEq for GetConfigurationSetEventDestinationsRequest

Auto Trait Implementations
---

`RefUnwindSafe`, `Send`, `Sync`, `Unpin`, and `UnwindSafe` are all implemented for GetConfigurationSetEventDestinationsRequest.

Blanket Implementations
---

The standard blanket implementations (`Any`, `Borrow`, `BorrowMut`, `From`, `Instrument`, `Into`, `Same`, `ToOwned`, `TryFrom`, `TryInto`, and `WithSubscriber`) apply exactly as listed for DeleteConfigurationSetEventDestinationResponse above.
Struct rusoto_pinpoint_sms_voice::GetConfigurationSetEventDestinationsResponse
===

```
pub struct GetConfigurationSetEventDestinationsResponse {
    pub event_destinations: Option<Vec<EventDestination>>,
}
```

An object that contains information about an event destination.

Fields
---

`event_destinations: Option<Vec<EventDestination>>`

Trait Implementations
---

### impl Clone for GetConfigurationSetEventDestinationsResponse
#### fn clone(&self) -> GetConfigurationSetEventDestinationsResponse
Returns a copy of the value.
#### fn clone_from(&mut self, source: &Self)
Performs copy-assignment from `source`.

### impl Debug for GetConfigurationSetEventDestinationsResponse
#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter.

### impl Default for GetConfigurationSetEventDestinationsResponse
#### fn default() -> GetConfigurationSetEventDestinationsResponse
Returns the “default value” for a type.

### impl<'de> Deserialize<'de> for GetConfigurationSetEventDestinationsResponse
#### fn deserialize<__D>(__deserializer: __D) -> Result<Self, __D::Error> where __D: Deserializer<'de>
Deserialize this value from the given Serde deserializer.

### impl PartialEq<GetConfigurationSetEventDestinationsResponse> for GetConfigurationSetEventDestinationsResponse
#### fn eq(&self, other: &GetConfigurationSetEventDestinationsResponse) -> bool
This method tests for `self` and `other` values to be equal, and is used by `==`.
#### fn ne(&self, other: &GetConfigurationSetEventDestinationsResponse) -> bool
This method tests for `!=`.

### impl StructuralPartialEq for GetConfigurationSetEventDestinationsResponse

Auto Trait Implementations
---

`RefUnwindSafe`, `Send`, `Sync`, `Unpin`, and `UnwindSafe` are all implemented for GetConfigurationSetEventDestinationsResponse.

Blanket Implementations
---

The standard blanket implementations (`Any`, `Borrow`, `BorrowMut`, `From`, `Instrument`, `Into`, `Same`, `ToOwned`, `TryFrom`, `TryInto`, and `WithSubscriber`) apply exactly as listed for DeleteConfigurationSetEventDestinationResponse above. Because the type implements `Deserialize`, the `DeserializeOwned` blanket implementation applies as well.

Struct rusoto_pinpoint_sms_voice::KinesisFirehoseDestination
===

```
pub struct KinesisFirehoseDestination {
    pub delivery_stream_arn: Option<String>,
    pub iam_role_arn: Option<String>,
}
```

An object that contains information about an event destination that sends data to Amazon Kinesis Data Firehose.

Fields
---

`delivery_stream_arn: Option<String>`: The Amazon Resource Name (ARN) of the Amazon Kinesis Data Firehose destination that you want to use in the event destination.
`iam_role_arn: Option<String>`: The Amazon Resource Name (ARN) of an IAM role that can write data to an Amazon Kinesis Data Firehose stream.

Trait Implementations
---

### impl Clone for KinesisFirehoseDestination
#### fn clone(&self) -> KinesisFirehoseDestination
Returns a copy of the value.
#### fn clone_from(&mut self, source: &Self)
Performs copy-assignment from `source`.
Read more source### impl Debug for KinesisFirehoseDestination source#### fn fmt(&self, f: &mut Formatter<'_>) -> Result Formats the value using the given formatter. Read more source### impl Default for KinesisFirehoseDestination source#### fn default() -> KinesisFirehoseDestination Returns the “default value” for a type. Read more source### impl<'de> Deserialize<'de> for KinesisFirehoseDestination source#### fn deserialize<__D>(__deserializer: __D) -> Result<Self, __D::Error> where    __D: Deserializer<'de>, Deserialize this value from the given Serde deserializer. Read more source### impl PartialEq<KinesisFirehoseDestination> for KinesisFirehoseDestination source#### fn eq(&self, other: &KinesisFirehoseDestination) -> bool This method tests for `self` and `other` values to be equal, and is used by `==`. Read more source#### fn ne(&self, other: &KinesisFirehoseDestination) -> bool This method tests for `!=`. source### impl Serialize for KinesisFirehoseDestination source#### fn serialize<__S>(&self, __serializer: __S) -> Result<__S::Ok, __S::Error> where    __S: Serializer, Serialize this value into the given Serde serializer. Read more source### impl StructuralPartialEq for KinesisFirehoseDestination Auto Trait Implementations --- ### impl RefUnwindSafe for KinesisFirehoseDestination ### impl Send for KinesisFirehoseDestination ### impl Sync for KinesisFirehoseDestination ### impl Unpin for KinesisFirehoseDestination ### impl UnwindSafe for KinesisFirehoseDestination Blanket Implementations --- source### impl<T> Any for T where    T: 'static + ?Sized, source#### fn type_id(&self) -> TypeId Gets the `TypeId` of `self`. Read more source### impl<T> Borrow<T> for T where    T: ?Sized, const: unstable · source#### fn borrow(&self) -> &T Immutably borrows from an owned value. Read more source### impl<T> BorrowMut<T> for T where    T: ?Sized, const: unstable · source#### fn borrow_mut(&mut self) -> &mutT Mutably borrows from an owned value. 
Read more source### impl<T> From<T> for T const: unstable · source#### fn from(t: T) -> T Returns the argument unchanged. source### impl<T> Instrument for T source#### fn instrument(self, span: Span) -> Instrumented<SelfInstruments this type with the provided `Span`, returning an `Instrumented` wrapper. Read more source#### fn in_current_span(self) -> Instrumented<SelfInstruments this type with the current `Span`, returning an `Instrumented` wrapper. Read more source### impl<T, U> Into<U> for T where    U: From<T>, const: unstable · source#### fn into(self) -> U Calls `U::from(self)`. That is, this conversion is whatever the implementation of `From<T> for U` chooses to do. source### impl<T> Same<T> for T #### type Output = T Should always be `Self` source### impl<T> ToOwned for T where    T: Clone, #### type Owned = T The resulting type after obtaining ownership. source#### fn to_owned(&self) -> T Creates owned data from borrowed data, usually by cloning. Read more source#### fn clone_into(&self, target: &mutT) 🔬 This is a nightly-only experimental API. (`toowned_clone_into`)Uses borrowed data to replace owned data, usually by cloning. Read more source### impl<T, U> TryFrom<U> for T where    U: Into<T>, #### type Error = Infallible The type returned in the event of a conversion error. const: unstable · source#### fn try_from(value: U) -> Result<T, <T as TryFrom<U>>::ErrorPerforms the conversion. source### impl<T, U> TryInto<U> for T where    U: TryFrom<T>, #### type Error = <U as TryFrom<T>>::Error The type returned in the event of a conversion error. const: unstable · source#### fn try_into(self) -> Result<U, <U as TryFrom<T>>::ErrorPerforms the conversion. source### impl<T> WithSubscriber for T source#### fn with_subscriber<S>(self, subscriber: S) -> WithDispatch<Self> where    S: Into<Dispatch>, Attaches the provided `Subscriber` to this type, returning a `WithDispatch` wrapper. 
Read more source#### fn with_current_subscriber(self) -> WithDispatch<SelfAttaches the current default `Subscriber` to this type, returning a `WithDispatch` wrapper. Read more source### impl<T> DeserializeOwned for T where    T: for<'de> Deserialize<'de>, Struct rusoto_pinpoint_sms_voice::PlainTextMessageType === ``` pub struct PlainTextMessageType { pub language_code: Option<String>, pub text: Option<String>, pub voice_id: Option<String>, } ``` An object that defines a message that contains unformatted text. Fields --- `language_code: Option<String>`The language to use when delivering the message. For a complete list of supported languages, see the Amazon Polly Developer Guide. `text: Option<String>`The plain (not SSML-formatted) text to deliver to the recipient. `voice_id: Option<String>`The name of the voice that you want to use to deliver the message. For a complete list of supported voices, see the Amazon Polly Developer Guide. Trait Implementations --- source### impl Clone for PlainTextMessageType source#### fn clone(&self) -> PlainTextMessageType Returns a copy of the value. Read more 1.0.0 · source#### fn clone_from(&mut self, source: &Self) Performs copy-assignment from `source`. Read more source### impl Debug for PlainTextMessageType source#### fn fmt(&self, f: &mut Formatter<'_>) -> Result Formats the value using the given formatter. Read more source### impl Default for PlainTextMessageType source#### fn default() -> PlainTextMessageType Returns the “default value” for a type. Read more source### impl PartialEq<PlainTextMessageType> for PlainTextMessageType source#### fn eq(&self, other: &PlainTextMessageType) -> bool This method tests for `self` and `other` values to be equal, and is used by `==`. Read more source#### fn ne(&self, other: &PlainTextMessageType) -> bool This method tests for `!=`. 
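Since `PlainTextMessageType` implements `Default`, optional fields can be left as `None` via struct-update syntax. A sketch using a local mirror of the type (the voice name is an illustrative Amazon Polly voice, not something this page specifies):

```rust
// Local mirror of the documented struct, for illustration only.
#[derive(Clone, Debug, Default, PartialEq)]
struct PlainTextMessageType {
    language_code: Option<String>,
    text: Option<String>,
    voice_id: Option<String>,
}

fn main() {
    let msg = PlainTextMessageType {
        text: Some("Your confirmation code is 1234.".into()),
        voice_id: Some("Joanna".into()),
        // Remaining fields stay `None`.
        ..Default::default()
    };
    assert!(msg.language_code.is_none());
    println!("{:?}", msg);
}
```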
source### impl Serialize for PlainTextMessageType source#### fn serialize<__S>(&self, __serializer: __S) -> Result<__S::Ok, __S::Error> where    __S: Serializer, Serialize this value into the given Serde serializer. Read more source### impl StructuralPartialEq for PlainTextMessageType Auto Trait Implementations --- ### impl RefUnwindSafe for PlainTextMessageType ### impl Send for PlainTextMessageType ### impl Sync for PlainTextMessageType ### impl Unpin for PlainTextMessageType ### impl UnwindSafe for PlainTextMessageType Blanket Implementations --- source### impl<T> Any for T where    T: 'static + ?Sized, source#### fn type_id(&self) -> TypeId Gets the `TypeId` of `self`. Read more source### impl<T> Borrow<T> for T where    T: ?Sized, const: unstable · source#### fn borrow(&self) -> &T Immutably borrows from an owned value. Read more source### impl<T> BorrowMut<T> for T where    T: ?Sized, const: unstable · source#### fn borrow_mut(&mut self) -> &mutT Mutably borrows from an owned value. Read more source### impl<T> From<T> for T const: unstable · source#### fn from(t: T) -> T Returns the argument unchanged. source### impl<T> Instrument for T source#### fn instrument(self, span: Span) -> Instrumented<SelfInstruments this type with the provided `Span`, returning an `Instrumented` wrapper. Read more source#### fn in_current_span(self) -> Instrumented<SelfInstruments this type with the current `Span`, returning an `Instrumented` wrapper. Read more source### impl<T, U> Into<U> for T where    U: From<T>, const: unstable · source#### fn into(self) -> U Calls `U::from(self)`. That is, this conversion is whatever the implementation of `From<T> for U` chooses to do. source### impl<T> Same<T> for T #### type Output = T Should always be `Self` source### impl<T> ToOwned for T where    T: Clone, #### type Owned = T The resulting type after obtaining ownership. source#### fn to_owned(&self) -> T Creates owned data from borrowed data, usually by cloning. 
Read more source#### fn clone_into(&self, target: &mutT) 🔬 This is a nightly-only experimental API. (`toowned_clone_into`)Uses borrowed data to replace owned data, usually by cloning. Read more source### impl<T, U> TryFrom<U> for T where    U: Into<T>, #### type Error = Infallible The type returned in the event of a conversion error. const: unstable · source#### fn try_from(value: U) -> Result<T, <T as TryFrom<U>>::ErrorPerforms the conversion. source### impl<T, U> TryInto<U> for T where    U: TryFrom<T>, #### type Error = <U as TryFrom<T>>::Error The type returned in the event of a conversion error. const: unstable · source#### fn try_into(self) -> Result<U, <U as TryFrom<T>>::ErrorPerforms the conversion. source### impl<T> WithSubscriber for T source#### fn with_subscriber<S>(self, subscriber: S) -> WithDispatch<Self> where    S: Into<Dispatch>, Attaches the provided `Subscriber` to this type, returning a `WithDispatch` wrapper. Read more source#### fn with_current_subscriber(self) -> WithDispatch<SelfAttaches the current default `Subscriber` to this type, returning a `WithDispatch` wrapper. Read more Struct rusoto_pinpoint_sms_voice::SSMLMessageType === ``` pub struct SSMLMessageType { pub language_code: Option<String>, pub text: Option<String>, pub voice_id: Option<String>, } ``` An object that defines a message that contains SSML-formatted text. Fields --- `language_code: Option<String>`The language to use when delivering the message. For a complete list of supported languages, see the Amazon Polly Developer Guide. `text: Option<String>`The SSML-formatted text to deliver to the recipient. `voice_id: Option<String>`The name of the voice that you want to use to deliver the message. For a complete list of supported voices, see the Amazon Polly Developer Guide. Trait Implementations --- source### impl Clone for SSMLMessageType source#### fn clone(&self) -> SSMLMessageType Returns a copy of the value. 
Read more 1.0.0 · source#### fn clone_from(&mut self, source: &Self) Performs copy-assignment from `source`. Read more source### impl Debug for SSMLMessageType source#### fn fmt(&self, f: &mut Formatter<'_>) -> Result Formats the value using the given formatter. Read more source### impl Default for SSMLMessageType source#### fn default() -> SSMLMessageType Returns the “default value” for a type. Read more source### impl PartialEq<SSMLMessageType> for SSMLMessageType source#### fn eq(&self, other: &SSMLMessageType) -> bool This method tests for `self` and `other` values to be equal, and is used by `==`. Read more source#### fn ne(&self, other: &SSMLMessageType) -> bool This method tests for `!=`. source### impl Serialize for SSMLMessageType source#### fn serialize<__S>(&self, __serializer: __S) -> Result<__S::Ok, __S::Error> where    __S: Serializer, Serialize this value into the given Serde serializer. Read more source### impl StructuralPartialEq for SSMLMessageType Auto Trait Implementations --- ### impl RefUnwindSafe for SSMLMessageType ### impl Send for SSMLMessageType ### impl Sync for SSMLMessageType ### impl Unpin for SSMLMessageType ### impl UnwindSafe for SSMLMessageType Blanket Implementations --- source### impl<T> Any for T where    T: 'static + ?Sized, source#### fn type_id(&self) -> TypeId Gets the `TypeId` of `self`. Read more source### impl<T> Borrow<T> for T where    T: ?Sized, const: unstable · source#### fn borrow(&self) -> &T Immutably borrows from an owned value. Read more source### impl<T> BorrowMut<T> for T where    T: ?Sized, const: unstable · source#### fn borrow_mut(&mut self) -> &mutT Mutably borrows from an owned value. Read more source### impl<T> From<T> for T const: unstable · source#### fn from(t: T) -> T Returns the argument unchanged. source### impl<T> Instrument for T source#### fn instrument(self, span: Span) -> Instrumented<SelfInstruments this type with the provided `Span`, returning an `Instrumented` wrapper. 
Read more source#### fn in_current_span(self) -> Instrumented<SelfInstruments this type with the current `Span`, returning an `Instrumented` wrapper. Read more source### impl<T, U> Into<U> for T where    U: From<T>, const: unstable · source#### fn into(self) -> U Calls `U::from(self)`. That is, this conversion is whatever the implementation of `From<T> for U` chooses to do. source### impl<T> Same<T> for T #### type Output = T Should always be `Self` source### impl<T> ToOwned for T where    T: Clone, #### type Owned = T The resulting type after obtaining ownership. source#### fn to_owned(&self) -> T Creates owned data from borrowed data, usually by cloning. Read more source#### fn clone_into(&self, target: &mutT) 🔬 This is a nightly-only experimental API. (`toowned_clone_into`)Uses borrowed data to replace owned data, usually by cloning. Read more source### impl<T, U> TryFrom<U> for T where    U: Into<T>, #### type Error = Infallible The type returned in the event of a conversion error. const: unstable · source#### fn try_from(value: U) -> Result<T, <T as TryFrom<U>>::ErrorPerforms the conversion. source### impl<T, U> TryInto<U> for T where    U: TryFrom<T>, #### type Error = <U as TryFrom<T>>::Error The type returned in the event of a conversion error. const: unstable · source#### fn try_into(self) -> Result<U, <U as TryFrom<T>>::ErrorPerforms the conversion. source### impl<T> WithSubscriber for T source#### fn with_subscriber<S>(self, subscriber: S) -> WithDispatch<Self> where    S: Into<Dispatch>, Attaches the provided `Subscriber` to this type, returning a `WithDispatch` wrapper. Read more source#### fn with_current_subscriber(self) -> WithDispatch<SelfAttaches the current default `Subscriber` to this type, returning a `WithDispatch` wrapper. 
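`SSMLMessageType` has the same shape as the plain-text variant, but its `text` field carries SSML markup. A hedged sketch with a local mirror of the type (the SSML snippet is a minimal example, not the full set of supported tags):

```rust
// Local mirror of the documented struct, for illustration only.
#[derive(Clone, Debug, Default, PartialEq)]
struct SSMLMessageType {
    language_code: Option<String>,
    text: Option<String>,
    voice_id: Option<String>,
}

fn main() {
    // SSML lets you control pacing and pronunciation of the delivered speech.
    let ssml = r#"<speak>Your code is <say-as interpret-as="digits">1234</say-as>.</speak>"#;
    let msg = SSMLMessageType {
        text: Some(ssml.to_string()),
        voice_id: Some("Joanna".into()),
        ..Default::default()
    };
    assert!(msg.text.as_deref().unwrap().starts_with("<speak>"));
    println!("{:?}", msg);
}
```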
Read more Struct rusoto_pinpoint_sms_voice::SendVoiceMessageRequest === ``` pub struct SendVoiceMessageRequest { pub caller_id: Option<String>, pub configuration_set_name: Option<String>, pub content: Option<VoiceMessageContent>, pub destination_phone_number: Option<String>, pub origination_phone_number: Option<String>, } ``` SendVoiceMessageRequest Fields --- `caller_id: Option<String>`The phone number that appears on recipients' devices when they receive the message. `configuration_set_name: Option<String>`The name of the configuration set that you want to use to send the message. `content: Option<VoiceMessageContent>``destination_phone_number: Option<String>`The phone number that you want to send the voice message to. `origination_phone_number: Option<String>`The phone number that Amazon Pinpoint should use to send the voice message. This isn't necessarily the phone number that appears on recipients' devices when they receive the message, because you can specify a CallerId parameter in the request. Trait Implementations --- source### impl Clone for SendVoiceMessageRequest source#### fn clone(&self) -> SendVoiceMessageRequest Returns a copy of the value. Read more 1.0.0 · source#### fn clone_from(&mut self, source: &Self) Performs copy-assignment from `source`. Read more source### impl Debug for SendVoiceMessageRequest source#### fn fmt(&self, f: &mut Formatter<'_>) -> Result Formats the value using the given formatter. Read more source### impl Default for SendVoiceMessageRequest source#### fn default() -> SendVoiceMessageRequest Returns the “default value” for a type. Read more source### impl PartialEq<SendVoiceMessageRequest> for SendVoiceMessageRequest source#### fn eq(&self, other: &SendVoiceMessageRequest) -> bool This method tests for `self` and `other` values to be equal, and is used by `==`. Read more source#### fn ne(&self, other: &SendVoiceMessageRequest) -> bool This method tests for `!=`. 
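A request ties the destination and origination numbers to a `VoiceMessageContent` (documented further below). The following sketch wires the pieces together using local mirrors of the documented structs; all phone numbers and names are placeholders, and the SSML/call-instructions variants are omitted from the mirror for brevity:

```rust
// Local mirrors of the documented request types, for illustration only.
#[derive(Clone, Debug, Default, PartialEq)]
struct PlainTextMessageType {
    language_code: Option<String>,
    text: Option<String>,
    voice_id: Option<String>,
}

#[derive(Clone, Debug, Default, PartialEq)]
struct VoiceMessageContent {
    plain_text_message: Option<PlainTextMessageType>,
    // ssml_message / call_instructions_message omitted in this mirror.
}

#[derive(Clone, Debug, Default, PartialEq)]
struct SendVoiceMessageRequest {
    caller_id: Option<String>,
    configuration_set_name: Option<String>,
    content: Option<VoiceMessageContent>,
    destination_phone_number: Option<String>,
    origination_phone_number: Option<String>,
}

fn main() {
    let request = SendVoiceMessageRequest {
        configuration_set_name: Some("my-config-set".into()),
        content: Some(VoiceMessageContent {
            plain_text_message: Some(PlainTextMessageType {
                text: Some("Hello from Pinpoint.".into()),
                ..Default::default()
            }),
        }),
        destination_phone_number: Some("+12065550100".into()),
        origination_phone_number: Some("+12065550199".into()),
        // Without a caller_id, recipients see the origination number.
        ..Default::default()
    };
    assert!(request.caller_id.is_none());
    println!("{:?}", request);
}
```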
source### impl Serialize for SendVoiceMessageRequest source#### fn serialize<__S>(&self, __serializer: __S) -> Result<__S::Ok, __S::Error> where    __S: Serializer, Serialize this value into the given Serde serializer. Read more source### impl StructuralPartialEq for SendVoiceMessageRequest Auto Trait Implementations --- ### impl RefUnwindSafe for SendVoiceMessageRequest ### impl Send for SendVoiceMessageRequest ### impl Sync for SendVoiceMessageRequest ### impl Unpin for SendVoiceMessageRequest ### impl UnwindSafe for SendVoiceMessageRequest Blanket Implementations --- source### impl<T> Any for T where    T: 'static + ?Sized, source#### fn type_id(&self) -> TypeId Gets the `TypeId` of `self`. Read more source### impl<T> Borrow<T> for T where    T: ?Sized, const: unstable · source#### fn borrow(&self) -> &T Immutably borrows from an owned value. Read more source### impl<T> BorrowMut<T> for T where    T: ?Sized, const: unstable · source#### fn borrow_mut(&mut self) -> &mutT Mutably borrows from an owned value. Read more source### impl<T> From<T> for T const: unstable · source#### fn from(t: T) -> T Returns the argument unchanged. source### impl<T> Instrument for T source#### fn instrument(self, span: Span) -> Instrumented<SelfInstruments this type with the provided `Span`, returning an `Instrumented` wrapper. Read more source#### fn in_current_span(self) -> Instrumented<SelfInstruments this type with the current `Span`, returning an `Instrumented` wrapper. Read more source### impl<T, U> Into<U> for T where    U: From<T>, const: unstable · source#### fn into(self) -> U Calls `U::from(self)`. That is, this conversion is whatever the implementation of `From<T> for U` chooses to do. source### impl<T> Same<T> for T #### type Output = T Should always be `Self` source### impl<T> ToOwned for T where    T: Clone, #### type Owned = T The resulting type after obtaining ownership. source#### fn to_owned(&self) -> T Creates owned data from borrowed data, usually by cloning. 
Read more source#### fn clone_into(&self, target: &mut T) 🔬 This is a nightly-only experimental API. (`toowned_clone_into`)Uses borrowed data to replace owned data, usually by cloning. Read more source### impl<T, U> TryFrom<U> for T where    U: Into<T>, #### type Error = Infallible The type returned in the event of a conversion error. const: unstable · source#### fn try_from(value: U) -> Result<T, <T as TryFrom<U>>::Error> Performs the conversion. source### impl<T, U> TryInto<U> for T where    U: TryFrom<T>, #### type Error = <U as TryFrom<T>>::Error The type returned in the event of a conversion error. const: unstable · source#### fn try_into(self) -> Result<U, <U as TryFrom<T>>::Error> Performs the conversion. source### impl<T> WithSubscriber for T source#### fn with_subscriber<S>(self, subscriber: S) -> WithDispatch<Self> where    S: Into<Dispatch>, Attaches the provided `Subscriber` to this type, returning a `WithDispatch` wrapper. Read more source#### fn with_current_subscriber(self) -> WithDispatch<Self> Attaches the current default `Subscriber` to this type, returning a `WithDispatch` wrapper. Read more Struct rusoto_pinpoint_sms_voice::SendVoiceMessageResponse === ``` pub struct SendVoiceMessageResponse { pub message_id: Option<String>, } ``` An object that contains the Message ID of a Voice message that was sent successfully. Fields --- `message_id: Option<String>`A unique identifier for the voice message. Trait Implementations --- source### impl Clone for SendVoiceMessageResponse source#### fn clone(&self) -> SendVoiceMessageResponse Returns a copy of the value. Read more 1.0.0 · source#### fn clone_from(&mut self, source: &Self) Performs copy-assignment from `source`. Read more source### impl Debug for SendVoiceMessageResponse source#### fn fmt(&self, f: &mut Formatter<'_>) -> Result Formats the value using the given formatter.
Read more source### impl Default for SendVoiceMessageResponse source#### fn default() -> SendVoiceMessageResponse Returns the “default value” for a type. Read more source### impl<'de> Deserialize<'de> for SendVoiceMessageResponse source#### fn deserialize<__D>(__deserializer: __D) -> Result<Self, __D::Error> where    __D: Deserializer<'de>, Deserialize this value from the given Serde deserializer. Read more source### impl PartialEq<SendVoiceMessageResponse> for SendVoiceMessageResponse source#### fn eq(&self, other: &SendVoiceMessageResponse) -> bool This method tests for `self` and `other` values to be equal, and is used by `==`. Read more source#### fn ne(&self, other: &SendVoiceMessageResponse) -> bool This method tests for `!=`. source### impl StructuralPartialEq for SendVoiceMessageResponse Auto Trait Implementations --- ### impl RefUnwindSafe for SendVoiceMessageResponse ### impl Send for SendVoiceMessageResponse ### impl Sync for SendVoiceMessageResponse ### impl Unpin for SendVoiceMessageResponse ### impl UnwindSafe for SendVoiceMessageResponse Blanket Implementations --- source### impl<T> Any for T where    T: 'static + ?Sized, source#### fn type_id(&self) -> TypeId Gets the `TypeId` of `self`. Read more source### impl<T> Borrow<T> for T where    T: ?Sized, const: unstable · source#### fn borrow(&self) -> &T Immutably borrows from an owned value. Read more source### impl<T> BorrowMut<T> for T where    T: ?Sized, const: unstable · source#### fn borrow_mut(&mut self) -> &mutT Mutably borrows from an owned value. Read more source### impl<T> From<T> for T const: unstable · source#### fn from(t: T) -> T Returns the argument unchanged. source### impl<T> Instrument for T source#### fn instrument(self, span: Span) -> Instrumented<SelfInstruments this type with the provided `Span`, returning an `Instrumented` wrapper. Read more source#### fn in_current_span(self) -> Instrumented<SelfInstruments this type with the current `Span`, returning an `Instrumented` wrapper. 
Read more source### impl<T, U> Into<U> for T where    U: From<T>, const: unstable · source#### fn into(self) -> U Calls `U::from(self)`. That is, this conversion is whatever the implementation of `From<T> for U` chooses to do. source### impl<T> Same<T> for T #### type Output = T Should always be `Self` source### impl<T> ToOwned for T where    T: Clone, #### type Owned = T The resulting type after obtaining ownership. source#### fn to_owned(&self) -> T Creates owned data from borrowed data, usually by cloning. Read more source#### fn clone_into(&self, target: &mutT) 🔬 This is a nightly-only experimental API. (`toowned_clone_into`)Uses borrowed data to replace owned data, usually by cloning. Read more source### impl<T, U> TryFrom<U> for T where    U: Into<T>, #### type Error = Infallible The type returned in the event of a conversion error. const: unstable · source#### fn try_from(value: U) -> Result<T, <T as TryFrom<U>>::ErrorPerforms the conversion. source### impl<T, U> TryInto<U> for T where    U: TryFrom<T>, #### type Error = <U as TryFrom<T>>::Error The type returned in the event of a conversion error. const: unstable · source#### fn try_into(self) -> Result<U, <U as TryFrom<T>>::ErrorPerforms the conversion. source### impl<T> WithSubscriber for T source#### fn with_subscriber<S>(self, subscriber: S) -> WithDispatch<Self> where    S: Into<Dispatch>, Attaches the provided `Subscriber` to this type, returning a `WithDispatch` wrapper. Read more source#### fn with_current_subscriber(self) -> WithDispatch<SelfAttaches the current default `Subscriber` to this type, returning a `WithDispatch` wrapper. Read more source### impl<T> DeserializeOwned for T where    T: for<'de> Deserialize<'de>, Struct rusoto_pinpoint_sms_voice::SnsDestination === ``` pub struct SnsDestination { pub topic_arn: Option<String>, } ``` An object that contains information about an event destination that sends data to Amazon SNS. 
Fields --- `topic_arn: Option<String>`The Amazon Resource Name (ARN) of the Amazon SNS topic that you want to publish events to. Trait Implementations --- source### impl Clone for SnsDestination source#### fn clone(&self) -> SnsDestination Returns a copy of the value. Read more 1.0.0 · source#### fn clone_from(&mut self, source: &Self) Performs copy-assignment from `source`. Read more source### impl Debug for SnsDestination source#### fn fmt(&self, f: &mut Formatter<'_>) -> Result Formats the value using the given formatter. Read more source### impl Default for SnsDestination source#### fn default() -> SnsDestination Returns the “default value” for a type. Read more source### impl<'de> Deserialize<'de> for SnsDestination source#### fn deserialize<__D>(__deserializer: __D) -> Result<Self, __D::Error> where    __D: Deserializer<'de>, Deserialize this value from the given Serde deserializer. Read more source### impl PartialEq<SnsDestination> for SnsDestination source#### fn eq(&self, other: &SnsDestination) -> bool This method tests for `self` and `other` values to be equal, and is used by `==`. Read more source#### fn ne(&self, other: &SnsDestination) -> bool This method tests for `!=`. source### impl Serialize for SnsDestination source#### fn serialize<__S>(&self, __serializer: __S) -> Result<__S::Ok, __S::Error> where    __S: Serializer, Serialize this value into the given Serde serializer. Read more source### impl StructuralPartialEq for SnsDestination Auto Trait Implementations --- ### impl RefUnwindSafe for SnsDestination ### impl Send for SnsDestination ### impl Sync for SnsDestination ### impl Unpin for SnsDestination ### impl UnwindSafe for SnsDestination Blanket Implementations --- source### impl<T> Any for T where    T: 'static + ?Sized, source#### fn type_id(&self) -> TypeId Gets the `TypeId` of `self`. Read more source### impl<T> Borrow<T> for T where    T: ?Sized, const: unstable · source#### fn borrow(&self) -> &T Immutably borrows from an owned value. 
Read more source### impl<T> BorrowMut<T> for T where    T: ?Sized, const: unstable · source#### fn borrow_mut(&mut self) -> &mutT Mutably borrows from an owned value. Read more source### impl<T> From<T> for T const: unstable · source#### fn from(t: T) -> T Returns the argument unchanged. source### impl<T> Instrument for T source#### fn instrument(self, span: Span) -> Instrumented<SelfInstruments this type with the provided `Span`, returning an `Instrumented` wrapper. Read more source#### fn in_current_span(self) -> Instrumented<SelfInstruments this type with the current `Span`, returning an `Instrumented` wrapper. Read more source### impl<T, U> Into<U> for T where    U: From<T>, const: unstable · source#### fn into(self) -> U Calls `U::from(self)`. That is, this conversion is whatever the implementation of `From<T> for U` chooses to do. source### impl<T> Same<T> for T #### type Output = T Should always be `Self` source### impl<T> ToOwned for T where    T: Clone, #### type Owned = T The resulting type after obtaining ownership. source#### fn to_owned(&self) -> T Creates owned data from borrowed data, usually by cloning. Read more source#### fn clone_into(&self, target: &mutT) 🔬 This is a nightly-only experimental API. (`toowned_clone_into`)Uses borrowed data to replace owned data, usually by cloning. Read more source### impl<T, U> TryFrom<U> for T where    U: Into<T>, #### type Error = Infallible The type returned in the event of a conversion error. const: unstable · source#### fn try_from(value: U) -> Result<T, <T as TryFrom<U>>::ErrorPerforms the conversion. source### impl<T, U> TryInto<U> for T where    U: TryFrom<T>, #### type Error = <U as TryFrom<T>>::Error The type returned in the event of a conversion error. const: unstable · source#### fn try_into(self) -> Result<U, <U as TryFrom<T>>::ErrorPerforms the conversion. 
source### impl<T> WithSubscriber for T source#### fn with_subscriber<S>(self, subscriber: S) -> WithDispatch<Self> where    S: Into<Dispatch>, Attaches the provided `Subscriber` to this type, returning a `WithDispatch` wrapper. Read more source#### fn with_current_subscriber(self) -> WithDispatch<SelfAttaches the current default `Subscriber` to this type, returning a `WithDispatch` wrapper. Read more source### impl<T> DeserializeOwned for T where    T: for<'de> Deserialize<'de>, Struct rusoto_pinpoint_sms_voice::UpdateConfigurationSetEventDestinationRequest === ``` pub struct UpdateConfigurationSetEventDestinationRequest { pub configuration_set_name: String, pub event_destination: Option<EventDestinationDefinition>, pub event_destination_name: String, } ``` UpdateConfigurationSetEventDestinationRequest Fields --- `configuration_set_name: String`ConfigurationSetName `event_destination: Option<EventDestinationDefinition>``event_destination_name: String`EventDestinationName Trait Implementations --- source### impl Clone for UpdateConfigurationSetEventDestinationRequest source#### fn clone(&self) -> UpdateConfigurationSetEventDestinationRequest Returns a copy of the value. Read more 1.0.0 · source#### fn clone_from(&mut self, source: &Self) Performs copy-assignment from `source`. Read more source### impl Debug for UpdateConfigurationSetEventDestinationRequest source#### fn fmt(&self, f: &mut Formatter<'_>) -> Result Formats the value using the given formatter. Read more source### impl Default for UpdateConfigurationSetEventDestinationRequest source#### fn default() -> UpdateConfigurationSetEventDestinationRequest Returns the “default value” for a type. Read more source### impl PartialEq<UpdateConfigurationSetEventDestinationRequest> for UpdateConfigurationSetEventDestinationRequest source#### fn eq(&self, other: &UpdateConfigurationSetEventDestinationRequest) -> bool This method tests for `self` and `other` values to be equal, and is used by `==`. 
Read more source#### fn ne(&self, other: &UpdateConfigurationSetEventDestinationRequest) -> bool This method tests for `!=`. source### impl Serialize for UpdateConfigurationSetEventDestinationRequest source#### fn serialize<__S>(&self, __serializer: __S) -> Result<__S::Ok, __S::Error> where    __S: Serializer, Serialize this value into the given Serde serializer. Read more source### impl StructuralPartialEq for UpdateConfigurationSetEventDestinationRequest Auto Trait Implementations --- ### impl RefUnwindSafe for UpdateConfigurationSetEventDestinationRequest ### impl Send for UpdateConfigurationSetEventDestinationRequest ### impl Sync for UpdateConfigurationSetEventDestinationRequest ### impl Unpin for UpdateConfigurationSetEventDestinationRequest ### impl UnwindSafe for UpdateConfigurationSetEventDestinationRequest Blanket Implementations --- source### impl<T> Any for T where    T: 'static + ?Sized, source#### fn type_id(&self) -> TypeId Gets the `TypeId` of `self`. Read more source### impl<T> Borrow<T> for T where    T: ?Sized, const: unstable · source#### fn borrow(&self) -> &T Immutably borrows from an owned value. Read more source### impl<T> BorrowMut<T> for T where    T: ?Sized, const: unstable · source#### fn borrow_mut(&mut self) -> &mutT Mutably borrows from an owned value. Read more source### impl<T> From<T> for T const: unstable · source#### fn from(t: T) -> T Returns the argument unchanged. source### impl<T> Instrument for T source#### fn instrument(self, span: Span) -> Instrumented<SelfInstruments this type with the provided `Span`, returning an `Instrumented` wrapper. Read more source#### fn in_current_span(self) -> Instrumented<SelfInstruments this type with the current `Span`, returning an `Instrumented` wrapper. Read more source### impl<T, U> Into<U> for T where    U: From<T>, const: unstable · source#### fn into(self) -> U Calls `U::from(self)`. That is, this conversion is whatever the implementation of `From<T> for U` chooses to do. 
### impl<T> Same<T> for T
#### type Output = T
Should always be `Self`.

### impl<T> ToOwned for T where T: Clone
#### type Owned = T
The resulting type after obtaining ownership.
#### fn to_owned(&self) -> T
Creates owned data from borrowed data, usually by cloning.
#### fn clone_into(&self, target: &mut T)
🔬 This is a nightly-only experimental API (`toowned_clone_into`). Uses borrowed data to replace owned data, usually by cloning.

### impl<T, U> TryFrom<U> for T where U: Into<T>
#### type Error = Infallible
The type returned in the event of a conversion error.
#### fn try_from(value: U) -> Result<T, <T as TryFrom<U>>::Error> (const: unstable)
Performs the conversion.

### impl<T, U> TryInto<U> for T where U: TryFrom<T>
#### type Error = <U as TryFrom<T>>::Error
The type returned in the event of a conversion error.
#### fn try_into(self) -> Result<U, <U as TryFrom<T>>::Error> (const: unstable)
Performs the conversion.

### impl<T> WithSubscriber for T
#### fn with_subscriber<S>(self, subscriber: S) -> WithDispatch<Self> where S: Into<Dispatch>
Attaches the provided `Subscriber` to this type, returning a `WithDispatch` wrapper.
#### fn with_current_subscriber(self) -> WithDispatch<Self>
Attaches the current default `Subscriber` to this type, returning a `WithDispatch` wrapper.

Struct rusoto_pinpoint_sms_voice::UpdateConfigurationSetEventDestinationResponse
===

```
pub struct UpdateConfigurationSetEventDestinationResponse {}
```

An empty object that indicates that the event destination was updated successfully.

Trait Implementations
---

### impl Clone for UpdateConfigurationSetEventDestinationResponse
#### fn clone(&self) -> UpdateConfigurationSetEventDestinationResponse
Returns a copy of the value.
#### fn clone_from(&mut self, source: &Self) (1.0.0)
Performs copy-assignment from `source`.

### impl Debug for UpdateConfigurationSetEventDestinationResponse
#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter.

### impl Default for UpdateConfigurationSetEventDestinationResponse
#### fn default() -> UpdateConfigurationSetEventDestinationResponse
Returns the “default value” for a type.

### impl<'de> Deserialize<'de> for UpdateConfigurationSetEventDestinationResponse
#### fn deserialize<__D>(__deserializer: __D) -> Result<Self, __D::Error> where __D: Deserializer<'de>
Deserialize this value from the given Serde deserializer.

### impl PartialEq<UpdateConfigurationSetEventDestinationResponse> for UpdateConfigurationSetEventDestinationResponse
#### fn eq(&self, other: &UpdateConfigurationSetEventDestinationResponse) -> bool
This method tests for `self` and `other` values to be equal, and is used by `==`.
#### fn ne(&self, other: &Rhs) -> bool (1.0.0)
This method tests for `!=`.

### impl StructuralPartialEq for UpdateConfigurationSetEventDestinationResponse

Auto Trait Implementations
---
### impl RefUnwindSafe for UpdateConfigurationSetEventDestinationResponse
### impl Send for UpdateConfigurationSetEventDestinationResponse
### impl Sync for UpdateConfigurationSetEventDestinationResponse
### impl Unpin for UpdateConfigurationSetEventDestinationResponse
### impl UnwindSafe for UpdateConfigurationSetEventDestinationResponse

Blanket Implementations
---

### impl<T> Any for T where T: 'static + ?Sized
#### fn type_id(&self) -> TypeId
Gets the `TypeId` of `self`.

### impl<T> Borrow<T> for T where T: ?Sized
#### fn borrow(&self) -> &T (const: unstable)
Immutably borrows from an owned value.

### impl<T> BorrowMut<T> for T where T: ?Sized
#### fn borrow_mut(&mut self) -> &mut T (const: unstable)
Mutably borrows from an owned value.

### impl<T> From<T> for T
#### fn from(t: T) -> T (const: unstable)
Returns the argument unchanged.

### impl<T> Instrument for T
#### fn instrument(self, span: Span) -> Instrumented<Self>
Instruments this type with the provided `Span`, returning an `Instrumented` wrapper.
#### fn in_current_span(self) -> Instrumented<Self>
Instruments this type with the current `Span`, returning an `Instrumented` wrapper.

### impl<T, U> Into<U> for T where U: From<T>
#### fn into(self) -> U (const: unstable)
Calls `U::from(self)`. That is, this conversion is whatever the implementation of `From<T> for U` chooses to do.

### impl<T> Same<T> for T
#### type Output = T
Should always be `Self`.

### impl<T> ToOwned for T where T: Clone
#### type Owned = T
The resulting type after obtaining ownership.
#### fn to_owned(&self) -> T
Creates owned data from borrowed data, usually by cloning.
#### fn clone_into(&self, target: &mut T)
🔬 This is a nightly-only experimental API (`toowned_clone_into`). Uses borrowed data to replace owned data, usually by cloning.

### impl<T, U> TryFrom<U> for T where U: Into<T>
#### type Error = Infallible
The type returned in the event of a conversion error.
#### fn try_from(value: U) -> Result<T, <T as TryFrom<U>>::Error> (const: unstable)
Performs the conversion.

### impl<T, U> TryInto<U> for T where U: TryFrom<T>
#### type Error = <U as TryFrom<T>>::Error
The type returned in the event of a conversion error.
#### fn try_into(self) -> Result<U, <U as TryFrom<T>>::Error> (const: unstable)
Performs the conversion.

### impl<T> WithSubscriber for T
#### fn with_subscriber<S>(self, subscriber: S) -> WithDispatch<Self> where S: Into<Dispatch>
Attaches the provided `Subscriber` to this type, returning a `WithDispatch` wrapper.
#### fn with_current_subscriber(self) -> WithDispatch<Self>
Attaches the current default `Subscriber` to this type, returning a `WithDispatch` wrapper.

### impl<T> DeserializeOwned for T where T: for<'de> Deserialize<'de>

Struct rusoto_pinpoint_sms_voice::VoiceMessageContent
===

```
pub struct VoiceMessageContent {
    pub call_instructions_message: Option<CallInstructionsMessageType>,
    pub plain_text_message: Option<PlainTextMessageType>,
    pub ssml_message: Option<SSMLMessageType>,
}
```

An object that contains a voice message and information about the recipient that you want to send it to.

Fields
---
`call_instructions_message: Option<CallInstructionsMessageType>`
`plain_text_message: Option<PlainTextMessageType>`
`ssml_message: Option<SSMLMessageType>`

Trait Implementations
---

### impl Clone for VoiceMessageContent
#### fn clone(&self) -> VoiceMessageContent
Returns a copy of the value.
#### fn clone_from(&mut self, source: &Self) (1.0.0)
Performs copy-assignment from `source`.

### impl Debug for VoiceMessageContent
#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter.

### impl Default for VoiceMessageContent
#### fn default() -> VoiceMessageContent
Returns the “default value” for a type.

### impl PartialEq<VoiceMessageContent> for VoiceMessageContent
#### fn eq(&self, other: &VoiceMessageContent) -> bool
This method tests for `self` and `other` values to be equal, and is used by `==`.
#### fn ne(&self, other: &VoiceMessageContent) -> bool
This method tests for `!=`.

### impl Serialize for VoiceMessageContent
#### fn serialize<__S>(&self, __serializer: __S) -> Result<__S::Ok, __S::Error> where __S: Serializer
Serialize this value into the given Serde serializer.
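Because `VoiceMessageContent` derives `Default`, a message with only one content kind set can be built with struct-update syntax instead of spelling out every `None`. A minimal self-contained sketch using local stand-in types (the real `VoiceMessageContent` and `PlainTextMessageType` live in `rusoto_pinpoint_sms_voice`; the stand-in field names here are assumptions for illustration):

```rust
// Hypothetical stand-ins mirroring the struct above; the real types
// ship with the rusoto_pinpoint_sms_voice crate.
#[derive(Clone, Debug, Default, PartialEq)]
struct PlainTextMessageType {
    text: Option<String>,
    voice_id: Option<String>,
}

#[derive(Clone, Debug, Default, PartialEq)]
struct VoiceMessageContent {
    call_instructions_message: Option<String>, // stand-in payload
    plain_text_message: Option<PlainTextMessageType>,
    ssml_message: Option<String>, // stand-in payload
}

// Build a message carrying only plain text, relying on `Default`
// to fill in the other (unused) content kinds.
fn plain_text(text: &str) -> VoiceMessageContent {
    VoiceMessageContent {
        plain_text_message: Some(PlainTextMessageType {
            text: Some(text.to_string()),
            ..Default::default()
        }),
        ..Default::default()
    }
}

fn main() {
    let msg = plain_text("Hello from Pinpoint SMS and Voice");
    // Only the plain-text slot is populated.
    assert!(msg.ssml_message.is_none());
    assert!(msg.call_instructions_message.is_none());
    assert_eq!(
        msg.plain_text_message.as_ref().and_then(|m| m.text.as_deref()),
        Some("Hello from Pinpoint SMS and Voice"),
    );
}
```

The same `..Default::default()` pattern applies to the real struct, since it derives `Default` as listed above.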
### impl StructuralPartialEq for VoiceMessageContent

Auto Trait Implementations
---
### impl RefUnwindSafe for VoiceMessageContent
### impl Send for VoiceMessageContent
### impl Sync for VoiceMessageContent
### impl Unpin for VoiceMessageContent
### impl UnwindSafe for VoiceMessageContent

Blanket Implementations
---
`VoiceMessageContent` has the same blanket implementations (`Any`, `Borrow`, `BorrowMut`, `From`, `Instrument`, `Into`, `Same`, `ToOwned`, `TryFrom`, `TryInto`, `WithSubscriber`) listed for `UpdateConfigurationSetEventDestinationResponse` above.

Enum rusoto_pinpoint_sms_voice::CreateConfigurationSetError
===

```
pub enum CreateConfigurationSetError {
    AlreadyExists(String),
    BadRequest(String),
    InternalServiceError(String),
    LimitExceeded(String),
    TooManyRequests(String),
}
```

Errors returned by CreateConfigurationSet

Variants
---
### `AlreadyExists(String)`
The resource specified in your request already exists.
### `BadRequest(String)`
The input you provided is invalid.
### `InternalServiceError(String)`
The API encountered an unexpected error and couldn't complete the request. You might be able to successfully issue the request again in the future.
### `LimitExceeded(String)`
There are too many instances of the specified resource type.
### `TooManyRequests(String)`
You've issued too many requests to the resource. Wait a few minutes, and then try again.
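The variant docs above distinguish permanent failures (`AlreadyExists`, `BadRequest`, `LimitExceeded`) from transient ones the caller may retry (`InternalServiceError`, `TooManyRequests`). A self-contained sketch of that classification on a local mirror of the enum (the real type ships in `rusoto_pinpoint_sms_voice`):

```rust
// Local mirror of the enum above, for a runnable example.
#[derive(Debug, PartialEq)]
enum CreateConfigurationSetError {
    AlreadyExists(String),
    BadRequest(String),
    InternalServiceError(String),
    LimitExceeded(String),
    TooManyRequests(String),
}

// InternalServiceError: "might be able to successfully issue the request again";
// TooManyRequests: "wait a few minutes, and then try again". Everything else
// is a permanent failure of this particular request.
fn is_retryable(err: &CreateConfigurationSetError) -> bool {
    use CreateConfigurationSetError::*;
    matches!(err, InternalServiceError(_) | TooManyRequests(_))
}

fn main() {
    assert!(is_retryable(&CreateConfigurationSetError::TooManyRequests(
        "throttled".into()
    )));
    assert!(!is_retryable(&CreateConfigurationSetError::AlreadyExists(
        "duplicate set".into()
    )));
}
```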
Implementations
---
### impl CreateConfigurationSetError
#### pub fn from_response(res: BufferedHttpResponse) -> RusotoError<CreateConfigurationSetError>

Trait Implementations
---

### impl Debug for CreateConfigurationSetError
#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter.

### impl Display for CreateConfigurationSetError
#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter.

### impl Error for CreateConfigurationSetError
#### fn source(&self) -> Option<&(dyn Error + 'static)> (1.30.0)
The lower-level source of this error, if any.
#### fn backtrace(&self) -> Option<&Backtrace>
🔬 This is a nightly-only experimental API (`backtrace`). Returns a stack backtrace, if available, of where this error occurred.
#### fn description(&self) -> &str (1.0.0)
👎 Deprecated since 1.42.0: use the Display impl or to_string()
#### fn cause(&self) -> Option<&dyn Error> (1.0.0)
👎 Deprecated since 1.33.0: replaced by Error::source, which can support downcasting

### impl PartialEq<CreateConfigurationSetError> for CreateConfigurationSetError
#### fn eq(&self, other: &CreateConfigurationSetError) -> bool
This method tests for `self` and `other` values to be equal, and is used by `==`.
#### fn ne(&self, other: &CreateConfigurationSetError) -> bool
This method tests for `!=`.

### impl StructuralPartialEq for CreateConfigurationSetError

Auto Trait Implementations
---
### impl RefUnwindSafe for CreateConfigurationSetError
### impl Send for CreateConfigurationSetError
### impl Sync for CreateConfigurationSetError
### impl Unpin for CreateConfigurationSetError
### impl UnwindSafe for CreateConfigurationSetError

Blanket Implementations
---

### impl<T> Any for T where T: 'static + ?Sized
#### fn type_id(&self) -> TypeId
Gets the `TypeId` of `self`.

### impl<T> Borrow<T> for T where T: ?Sized
#### fn borrow(&self) -> &T (const: unstable)
Immutably borrows from an owned value.

### impl<T> BorrowMut<T> for T where T: ?Sized
#### fn borrow_mut(&mut self) -> &mut T (const: unstable)
Mutably borrows from an owned value.

### impl<T> From<T> for T
#### fn from(t: T) -> T (const: unstable)
Returns the argument unchanged.

### impl<T> Instrument for T
#### fn instrument(self, span: Span) -> Instrumented<Self>
Instruments this type with the provided `Span`, returning an `Instrumented` wrapper.
#### fn in_current_span(self) -> Instrumented<Self>
Instruments this type with the current `Span`, returning an `Instrumented` wrapper.

### impl<T, U> Into<U> for T where U: From<T>
#### fn into(self) -> U (const: unstable)
Calls `U::from(self)`. That is, this conversion is whatever the implementation of `From<T> for U` chooses to do.

### impl<T> Same<T> for T
#### type Output = T
Should always be `Self`.

### impl<T> ToString for T where T: Display + ?Sized
#### default fn to_string(&self) -> String
Converts the given value to a `String`.

### impl<T, U> TryFrom<U> for T where U: Into<T>
#### type Error = Infallible
The type returned in the event of a conversion error.
#### fn try_from(value: U) -> Result<T, <T as TryFrom<U>>::Error> (const: unstable)
Performs the conversion.

### impl<T, U> TryInto<U> for T where U: TryFrom<T>
#### type Error = <U as TryFrom<T>>::Error
The type returned in the event of a conversion error.
#### fn try_into(self) -> Result<U, <U as TryFrom<T>>::Error> (const: unstable)
Performs the conversion.

### impl<T> WithSubscriber for T
#### fn with_subscriber<S>(self, subscriber: S) -> WithDispatch<Self> where S: Into<Dispatch>
Attaches the provided `Subscriber` to this type, returning a `WithDispatch` wrapper.
#### fn with_current_subscriber(self) -> WithDispatch<Self>
Attaches the current default `Subscriber` to this type, returning a `WithDispatch` wrapper.

Enum rusoto_pinpoint_sms_voice::CreateConfigurationSetEventDestinationError
===

```
pub enum CreateConfigurationSetEventDestinationError {
    AlreadyExists(String),
    BadRequest(String),
    InternalServiceError(String),
    LimitExceeded(String),
    NotFound(String),
    TooManyRequests(String),
}
```

Errors returned by CreateConfigurationSetEventDestination

Variants
---
### `AlreadyExists(String)`
The resource specified in your request already exists.
### `BadRequest(String)`
The input you provided is invalid.
### `InternalServiceError(String)`
The API encountered an unexpected error and couldn't complete the request. You might be able to successfully issue the request again in the future.
### `LimitExceeded(String)`
There are too many instances of the specified resource type.
### `NotFound(String)`
The resource you attempted to access doesn't exist.
### `TooManyRequests(String)`
You've issued too many requests to the resource. Wait a few minutes, and then try again.

Implementations
---
### impl CreateConfigurationSetEventDestinationError
#### pub fn from_response(res: BufferedHttpResponse) -> RusotoError<CreateConfigurationSetEventDestinationError>

Trait Implementations
---

### impl Debug for CreateConfigurationSetEventDestinationError
#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter.

### impl Display for CreateConfigurationSetEventDestinationError
#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter.

### impl Error for CreateConfigurationSetEventDestinationError
#### fn source(&self) -> Option<&(dyn Error + 'static)> (1.30.0)
The lower-level source of this error, if any.
#### fn backtrace(&self) -> Option<&Backtrace>
🔬 This is a nightly-only experimental API (`backtrace`). Returns a stack backtrace, if available, of where this error occurred.
#### fn description(&self) -> &str (1.0.0)
👎 Deprecated since 1.42.0: use the Display impl or to_string()
#### fn cause(&self) -> Option<&dyn Error> (1.0.0)
👎 Deprecated since 1.33.0: replaced by Error::source, which can support downcasting

### impl PartialEq<CreateConfigurationSetEventDestinationError> for CreateConfigurationSetEventDestinationError
#### fn eq(&self, other: &CreateConfigurationSetEventDestinationError) -> bool
This method tests for `self` and `other` values to be equal, and is used by `==`.
#### fn ne(&self, other: &CreateConfigurationSetEventDestinationError) -> bool
This method tests for `!=`.

### impl StructuralPartialEq for CreateConfigurationSetEventDestinationError

Auto Trait Implementations
---
### impl RefUnwindSafe for CreateConfigurationSetEventDestinationError
### impl Send for CreateConfigurationSetEventDestinationError
### impl Sync for CreateConfigurationSetEventDestinationError
### impl Unpin for CreateConfigurationSetEventDestinationError
### impl UnwindSafe for CreateConfigurationSetEventDestinationError

Blanket Implementations
---
`CreateConfigurationSetEventDestinationError` has the same blanket implementations (`Any`, `Borrow`, `BorrowMut`, `From`, `Instrument`, `Into`, `Same`, `ToString`, `TryFrom`, `TryInto`, `WithSubscriber`) listed for `CreateConfigurationSetError` above.

Enum rusoto_pinpoint_sms_voice::DeleteConfigurationSetError
===

```
pub enum DeleteConfigurationSetError {
    BadRequest(String),
    InternalServiceError(String),
    NotFound(String),
    TooManyRequests(String),
}
```

Errors returned by DeleteConfigurationSet

Variants
---
### `BadRequest(String)`
The input you provided is invalid.
### `InternalServiceError(String)`
The API encountered an unexpected error and couldn't complete the request. You might be able to successfully issue the request again in the future.
### `NotFound(String)`
The resource you attempted to access doesn't exist.
### `TooManyRequests(String)`
You've issued too many requests to the resource. Wait a few minutes, and then try again.

Implementations
---
### impl DeleteConfigurationSetError
#### pub fn from_response(res: BufferedHttpResponse) -> RusotoError<DeleteConfigurationSetError>

Trait Implementations
---

### impl Debug for DeleteConfigurationSetError
#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter.

### impl Display for DeleteConfigurationSetError
#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter.

### impl Error for DeleteConfigurationSetError
#### fn source(&self) -> Option<&(dyn Error + 'static)> (1.30.0)
The lower-level source of this error, if any.
#### fn backtrace(&self) -> Option<&Backtrace>
🔬 This is a nightly-only experimental API (`backtrace`). Returns a stack backtrace, if available, of where this error occurred.
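The `Display` and `Error` implementations listed for these error enums are what let them flow through generic error handling (`to_string`, `Box<dyn Error>`, `source()`). A self-contained sketch on a local mirror of `DeleteConfigurationSetError` (the real impls ship with `rusoto_pinpoint_sms_voice`; the exact message formatting here is an assumption):

```rust
use std::error::Error;
use std::fmt;

// Local mirror of the enum above, for a runnable example.
#[derive(Debug)]
enum DeleteConfigurationSetError {
    BadRequest(String),
    InternalServiceError(String),
    NotFound(String),
    TooManyRequests(String),
}

impl fmt::Display for DeleteConfigurationSetError {
    fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
        // Each variant carries the service message; render it directly.
        match self {
            Self::BadRequest(m)
            | Self::InternalServiceError(m)
            | Self::NotFound(m)
            | Self::TooManyRequests(m) => write!(f, "{}", m),
        }
    }
}

// No underlying cause, so the default `source()` (returning `None`) is fine.
impl Error for DeleteConfigurationSetError {}

// Generic consumers only need `&dyn Error`; `to_string()` goes through
// Display, which is the supported path now that `description()` is deprecated.
fn describe(e: &dyn Error) -> String {
    e.to_string()
}

fn main() {
    let err = DeleteConfigurationSetError::NotFound("no such configuration set".into());
    assert_eq!(describe(&err), "no such configuration set");
    assert!(err.source().is_none());
}
```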
#### fn description(&self) -> &str (1.0.0)
👎 Deprecated since 1.42.0: use the Display impl or to_string()
#### fn cause(&self) -> Option<&dyn Error> (1.0.0)
👎 Deprecated since 1.33.0: replaced by Error::source, which can support downcasting

### impl PartialEq<DeleteConfigurationSetError> for DeleteConfigurationSetError
#### fn eq(&self, other: &DeleteConfigurationSetError) -> bool
This method tests for `self` and `other` values to be equal, and is used by `==`.
#### fn ne(&self, other: &DeleteConfigurationSetError) -> bool
This method tests for `!=`.

### impl StructuralPartialEq for DeleteConfigurationSetError

Auto Trait Implementations
---
### impl RefUnwindSafe for DeleteConfigurationSetError
### impl Send for DeleteConfigurationSetError
### impl Sync for DeleteConfigurationSetError
### impl Unpin for DeleteConfigurationSetError
### impl UnwindSafe for DeleteConfigurationSetError

Blanket Implementations
---
`DeleteConfigurationSetError` has the same blanket implementations (`Any`, `Borrow`, `BorrowMut`, `From`, `Instrument`, `Into`, `Same`, `ToString`, `TryFrom`, `TryInto`, `WithSubscriber`) listed for `CreateConfigurationSetError` above.

Enum rusoto_pinpoint_sms_voice::DeleteConfigurationSetEventDestinationError
===

```
pub enum DeleteConfigurationSetEventDestinationError {
    BadRequest(String),
    InternalServiceError(String),
    NotFound(String),
    TooManyRequests(String),
}
```

Errors returned by DeleteConfigurationSetEventDestination

Variants
---
### `BadRequest(String)`
The input you provided is invalid.
### `InternalServiceError(String)`
The API encountered an unexpected error and couldn't complete the request. You might be able to successfully issue the request again in the future.
### `NotFound(String)`
The resource you attempted to access doesn't exist.
### `TooManyRequests(String)`
You've issued too many requests to the resource. Wait a few minutes, and then try again.

Implementations
---
### impl DeleteConfigurationSetEventDestinationError
#### pub fn from_response(res: BufferedHttpResponse) -> RusotoError<DeleteConfigurationSetEventDestinationError>

Trait Implementations
---

### impl Debug for DeleteConfigurationSetEventDestinationError
#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter.

### impl Display for DeleteConfigurationSetEventDestinationError
#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter.

### impl Error for DeleteConfigurationSetEventDestinationError
#### fn source(&self) -> Option<&(dyn Error + 'static)> (1.30.0)
The lower-level source of this error, if any.
#### fn backtrace(&self) -> Option<&Backtrace>
🔬 This is a nightly-only experimental API (`backtrace`). Returns a stack backtrace, if available, of where this error occurred.
#### fn description(&self) -> &str (1.0.0)
👎 Deprecated since 1.42.0: use the Display impl or to_string()
#### fn cause(&self) -> Option<&dyn Error> (1.0.0)
👎 Deprecated since 1.33.0: replaced by Error::source, which can support downcasting

### impl PartialEq<DeleteConfigurationSetEventDestinationError> for DeleteConfigurationSetEventDestinationError
#### fn eq(&self, other: &DeleteConfigurationSetEventDestinationError) -> bool
This method tests for `self` and `other` values to be equal, and is used by `==`.
#### fn ne(&self, other: &DeleteConfigurationSetEventDestinationError) -> bool
This method tests for `!=`.
### impl StructuralPartialEq for DeleteConfigurationSetEventDestinationError

Auto Trait Implementations
---
### impl RefUnwindSafe for DeleteConfigurationSetEventDestinationError
### impl Send for DeleteConfigurationSetEventDestinationError
### impl Sync for DeleteConfigurationSetEventDestinationError
### impl Unpin for DeleteConfigurationSetEventDestinationError
### impl UnwindSafe for DeleteConfigurationSetEventDestinationError

Blanket Implementations
---
`DeleteConfigurationSetEventDestinationError` has the same blanket implementations (`Any`, `Borrow`, `BorrowMut`, `From`, `Instrument`, `Into`, `Same`, `ToString`, `TryFrom`, `TryInto`, `WithSubscriber`) listed for `CreateConfigurationSetError` above.

Enum rusoto_pinpoint_sms_voice::GetConfigurationSetEventDestinationsError
===

```
pub enum GetConfigurationSetEventDestinationsError {
    BadRequest(String),
    InternalServiceError(String),
    NotFound(String),
    TooManyRequests(String),
}
```

Errors returned by GetConfigurationSetEventDestinations

Variants
---
### `BadRequest(String)`
The input you provided is invalid.
### `InternalServiceError(String)`
The API encountered an unexpected error and couldn't complete the request. You might be able to successfully issue the request again in the future.
### `NotFound(String)`
The resource you attempted to access doesn't exist.
### `TooManyRequests(String)`
You've issued too many requests to the resource. Wait a few minutes, and then try again.

Implementations
---
### impl GetConfigurationSetEventDestinationsError
#### pub fn from_response(res: BufferedHttpResponse) -> RusotoError<GetConfigurationSetEventDestinationsError>

Trait Implementations
---

### impl Debug for GetConfigurationSetEventDestinationsError
#### fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter.
Read more source### impl Display for GetConfigurationSetEventDestinationsError source#### fn fmt(&self, f: &mut Formatter<'_>) -> Result Formats the value using the given formatter. Read more source### impl Error for GetConfigurationSetEventDestinationsError 1.30.0 · source#### fn source(&self) -> Option<&(dyn Error + 'static)The lower-level source of this error, if any. Read more source#### fn backtrace(&self) -> Option<&Backtrace🔬 This is a nightly-only experimental API. (`backtrace`)Returns a stack backtrace, if available, of where this error occurred. Read more 1.0.0 · source#### fn description(&self) -> &str 👎 Deprecated since 1.42.0: use the Display impl or to_string() Read more1.0.0 · source#### fn cause(&self) -> Option<&dyn Error👎 Deprecated since 1.33.0: replaced by Error::source, which can support downcasting source### impl PartialEq<GetConfigurationSetEventDestinationsError> for GetConfigurationSetEventDestinationsError source#### fn eq(&self, other: &GetConfigurationSetEventDestinationsError) -> bool This method tests for `self` and `other` values to be equal, and is used by `==`. Read more source#### fn ne(&self, other: &GetConfigurationSetEventDestinationsError) -> bool This method tests for `!=`. source### impl StructuralPartialEq for GetConfigurationSetEventDestinationsError Auto Trait Implementations --- ### impl RefUnwindSafe for GetConfigurationSetEventDestinationsError ### impl Send for GetConfigurationSetEventDestinationsError ### impl Sync for GetConfigurationSetEventDestinationsError ### impl Unpin for GetConfigurationSetEventDestinationsError ### impl UnwindSafe for GetConfigurationSetEventDestinationsError Blanket Implementations --- source### impl<T> Any for T where    T: 'static + ?Sized, source#### fn type_id(&self) -> TypeId Gets the `TypeId` of `self`. Read more source### impl<T> Borrow<T> for T where    T: ?Sized, const: unstable · source#### fn borrow(&self) -> &T Immutably borrows from an owned value. 
Read more source### impl<T> BorrowMut<T> for T where    T: ?Sized, const: unstable · source#### fn borrow_mut(&mut self) -> &mutT Mutably borrows from an owned value. Read more source### impl<T> From<T> for T const: unstable · source#### fn from(t: T) -> T Returns the argument unchanged. source### impl<T> Instrument for T source#### fn instrument(self, span: Span) -> Instrumented<SelfInstruments this type with the provided `Span`, returning an `Instrumented` wrapper. Read more source#### fn in_current_span(self) -> Instrumented<SelfInstruments this type with the current `Span`, returning an `Instrumented` wrapper. Read more source### impl<T, U> Into<U> for T where    U: From<T>, const: unstable · source#### fn into(self) -> U Calls `U::from(self)`. That is, this conversion is whatever the implementation of `From<T> for U` chooses to do. source### impl<T> Same<T> for T #### type Output = T Should always be `Self` source### impl<T> ToString for T where    T: Display + ?Sized, source#### default fn to_string(&self) -> String Converts the given value to a `String`. Read more source### impl<T, U> TryFrom<U> for T where    U: Into<T>, #### type Error = Infallible The type returned in the event of a conversion error. const: unstable · source#### fn try_from(value: U) -> Result<T, <T as TryFrom<U>>::ErrorPerforms the conversion. source### impl<T, U> TryInto<U> for T where    U: TryFrom<T>, #### type Error = <U as TryFrom<T>>::Error The type returned in the event of a conversion error. const: unstable · source#### fn try_into(self) -> Result<U, <U as TryFrom<T>>::ErrorPerforms the conversion. source### impl<T> WithSubscriber for T source#### fn with_subscriber<S>(self, subscriber: S) -> WithDispatch<Self> where    S: Into<Dispatch>, Attaches the provided `Subscriber` to this type, returning a `WithDispatch` wrapper. 
Read more source#### fn with_current_subscriber(self) -> WithDispatch<Self> Attaches the current default `Subscriber` to this type, returning a `WithDispatch` wrapper. Read more Enum rusoto_pinpoint_sms_voice::SendVoiceMessageError === ``` pub enum SendVoiceMessageError { BadRequest(String), InternalServiceError(String), TooManyRequests(String), } ``` Errors returned by SendVoiceMessage Variants --- ### `BadRequest(String)` The input you provided is invalid. ### `InternalServiceError(String)` The API encountered an unexpected error and couldn't complete the request. You might be able to successfully issue the request again in the future. ### `TooManyRequests(String)` You've issued too many requests to the resource. Wait a few minutes, and then try again. Implementations --- source### impl SendVoiceMessageError source#### pub fn from_response(    res: BufferedHttpResponse) -> RusotoError<SendVoiceMessageError> Trait Implementations --- source### impl Debug for SendVoiceMessageError source#### fn fmt(&self, f: &mut Formatter<'_>) -> Result Formats the value using the given formatter. Read more source### impl Display for SendVoiceMessageError source#### fn fmt(&self, f: &mut Formatter<'_>) -> Result Formats the value using the given formatter. Read more source### impl Error for SendVoiceMessageError 1.30.0 · source#### fn source(&self) -> Option<&(dyn Error + 'static)> The lower-level source of this error, if any. Read more source#### fn backtrace(&self) -> Option<&Backtrace> 🔬 This is a nightly-only experimental API. (`backtrace`) Returns a stack backtrace, if available, of where this error occurred.
Read more 1.0.0 · source#### fn description(&self) -> &str 👎 Deprecated since 1.42.0: use the Display impl or to_string() Read more 1.0.0 · source#### fn cause(&self) -> Option<&dyn Error> 👎 Deprecated since 1.33.0: replaced by Error::source, which can support downcasting source### impl PartialEq<SendVoiceMessageError> for SendVoiceMessageError source#### fn eq(&self, other: &SendVoiceMessageError) -> bool This method tests for `self` and `other` values to be equal, and is used by `==`. Read more source#### fn ne(&self, other: &SendVoiceMessageError) -> bool This method tests for `!=`. source### impl StructuralPartialEq for SendVoiceMessageError Auto Trait Implementations --- ### impl RefUnwindSafe for SendVoiceMessageError ### impl Send for SendVoiceMessageError ### impl Sync for SendVoiceMessageError ### impl Unpin for SendVoiceMessageError ### impl UnwindSafe for SendVoiceMessageError Blanket Implementations --- The same blanket implementations (`Any`, `Borrow`, `BorrowMut`, `From`, `Instrument`, `Into`, `Same`, `ToString`, `TryFrom`, `TryInto`, `WithSubscriber`) listed above apply here as well. Enum rusoto_pinpoint_sms_voice::UpdateConfigurationSetEventDestinationError === ``` pub enum UpdateConfigurationSetEventDestinationError { BadRequest(String), InternalServiceError(String), NotFound(String), TooManyRequests(String), } ``` Errors returned by UpdateConfigurationSetEventDestination Variants --- ### `BadRequest(String)` The input you provided is invalid. ### `InternalServiceError(String)` The API encountered an unexpected error and couldn't complete the request. You might be able to successfully issue the request again in the future. ### `NotFound(String)` The resource you attempted to access doesn't exist. ### `TooManyRequests(String)` You've issued too many requests to the resource.
Wait a few minutes, and then try again. Implementations --- source### impl UpdateConfigurationSetEventDestinationError source#### pub fn from_response(    res: BufferedHttpResponse) -> RusotoError<UpdateConfigurationSetEventDestinationError> Trait Implementations --- source### impl Debug for UpdateConfigurationSetEventDestinationError source#### fn fmt(&self, f: &mut Formatter<'_>) -> Result Formats the value using the given formatter. Read more source### impl Display for UpdateConfigurationSetEventDestinationError source#### fn fmt(&self, f: &mut Formatter<'_>) -> Result Formats the value using the given formatter. Read more source### impl Error for UpdateConfigurationSetEventDestinationError 1.30.0 · source#### fn source(&self) -> Option<&(dyn Error + 'static)> The lower-level source of this error, if any. Read more source#### fn backtrace(&self) -> Option<&Backtrace> 🔬 This is a nightly-only experimental API. (`backtrace`) Returns a stack backtrace, if available, of where this error occurred. Read more 1.0.0 · source#### fn description(&self) -> &str 👎 Deprecated since 1.42.0: use the Display impl or to_string() Read more 1.0.0 · source#### fn cause(&self) -> Option<&dyn Error> 👎 Deprecated since 1.33.0: replaced by Error::source, which can support downcasting source### impl PartialEq<UpdateConfigurationSetEventDestinationError> for UpdateConfigurationSetEventDestinationError source#### fn eq(&self, other: &UpdateConfigurationSetEventDestinationError) -> bool This method tests for `self` and `other` values to be equal, and is used by `==`. Read more source#### fn ne(&self, other: &UpdateConfigurationSetEventDestinationError) -> bool This method tests for `!=`.
source### impl StructuralPartialEq for UpdateConfigurationSetEventDestinationError Auto Trait Implementations --- ### impl RefUnwindSafe for UpdateConfigurationSetEventDestinationError ### impl Send for UpdateConfigurationSetEventDestinationError ### impl Sync for UpdateConfigurationSetEventDestinationError ### impl Unpin for UpdateConfigurationSetEventDestinationError ### impl UnwindSafe for UpdateConfigurationSetEventDestinationError Blanket Implementations --- The same blanket implementations (`Any`, `Borrow`, `BorrowMut`, `From`, `Instrument`, `Into`, `Same`, `ToString`, `TryFrom`, `TryInto`, `WithSubscriber`) listed above apply here as well.
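The `TooManyRequests` and `InternalServiceError` variants above are documented as transient: the caller may succeed by issuing the request again later. A generic retry-with-exponential-backoff loop can be sketched as follows (Python, illustrative only — rusoto itself is a Rust SDK, and `TransientError` here is a hypothetical stand-in for those two variants):

```python
import time

class TransientError(Exception):
    """Stands in for retryable failures such as TooManyRequests
    or InternalServiceError."""

def call_with_retry(send, max_attempts=4, base_delay=1.0):
    """Call `send()`, retrying on TransientError with exponential backoff.

    Waits base_delay, 2*base_delay, 4*base_delay, ... between attempts,
    and re-raises the error once max_attempts is exhausted.
    """
    for attempt in range(max_attempts):
        try:
            return send()
        except TransientError:
            if attempt == max_attempts - 1:
                raise
            time.sleep(base_delay * (2 ** attempt))
```

Non-retryable variants such as `BadRequest` or `NotFound` should not go through such a loop; repeating an invalid request will fail the same way every time.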
bisectr
cran
R
Package ‘bisectr’ October 12, 2022

Title Tools to find bad commits with git bisect
Version 0.1.0
Author <NAME> <<EMAIL>>
Maintainer <NAME> <<EMAIL>>
Description Tools to find bad commits with git bisect. See https://github.com/wch/bisectr for examples and test script templates.
URL https://github.com/wch/bisectr
Depends R (>= 2.14)
Imports devtools
License GPL-2
Collate 'bisect.r'
Repository CRAN
Date/Publication 2012-06-15 03:45:00
NeedsCompilation no

R topics documented: bisectr, bisect_install, bisect_load_all, bisect_require, bisect_return_interactive, bisect_runtest, bisect_source

bisectr: bisectr package

Description
This package is used for creating test scripts to find bad commits with git bisect. For example test scripts, see https://github.com/wch/bisectr.

bisect_install: Install a package from source, for bisect tests.

Description
If the installation fails, the default behavior is to mark this commit as skip.

Usage
bisect_install(pkgdir = ".", on_fail = "skip")

Arguments
pkgdir: The directory to load from
on_fail: What to do if installation fails (default is to mark this commit as "skip")

Details
This function is usually used together with bisect_require.

See Also
bisect_require, bisect_load_all, bisect_source, bisect_runtest, bisect_return_interactive

bisect_load_all: Like load_all, but for bisect tests.

Description
If the package fails to load, the default is to mark this commit as skip.

Usage
bisect_load_all(pkgdir = ".", on_error = "skip")

Arguments
pkgdir: The directory to load from
on_error: What to do if loading throws an error (default is to mark this commit as "skip")

See Also
bisect_source, bisect_install, bisect_runtest, bisect_return_interactive

bisect_require: Load a package like require(), for bisect tests.

Description
If the package fails to load, the default behavior is to mark this commit as skip.

Usage
bisect_require(package, on_fail = "skip")

Arguments
package: Name of package
on_fail: What to do if loading fails (default "skip")

Details
This function is usually used together with bisect_install.

See Also
bisect_install, bisect_load_all, bisect_source, bisect_runtest, bisect_return_interactive

bisect_return_interactive: Prompt the user for an interactive good/bad/skip response and return the appropriate value (to be passed to bisect_runtest).

Usage
bisect_return_interactive()

See Also
bisect_runtest, bisect_load_all, bisect_install, bisect_source

bisect_runtest: Run a test function for git bisect testing.

Description
If the function fun returns "good" or TRUE, quit and return a code to mark this commit as good. If the function returns "bad" or FALSE, quit and return a code to mark this commit as bad. If the function returns "skip" or NA, quit and return a code to mark this commit as skip. If the function returns "ignore" or NULL, do nothing.

Usage
bisect_runtest(fun, on_error = "skip", msg = "Running test...")

Arguments
fun: The test function
on_error: What to do if running fun throws an error (default is to mark this commit as skip)
msg: A message to print to the console when running the test

Details
It is also important to set on_error. This tells it what to do when the test function throws an error. The default behavior is to mark this commit as skip. However, in some cases, it makes sense to mark this commit as bad if an error is thrown.

See Also
bisect_load_all, bisect_install, bisect_source, bisect_return_interactive

bisect_source: Like source, but for bisect tests.

Description
If the file fails to load, the default is to mark this commit as skip.

Usage
bisect_source(file, ..., on_error = "skip")

Arguments
file: The file to load
...: Other arguments to pass to source
on_error: What to do if loading throws an error (default is to mark this commit as "skip")

See Also
source, bisect_load_all, bisect_install, bisect_runtest, bisect_return_interactive
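The good/bad/skip values that bisect_runtest quits with correspond to the exit-code convention of `git bisect run`: exit status 0 marks the commit good, 125 marks it untestable (skip), and any other status from 1 to 127 (except 125) marks it bad. That mapping can be sketched as follows (Python, illustrative; the package itself implements this in R, and here NA is modelled as None):

```python
def bisect_exit_code(result):
    """Map a test outcome to the exit code expected by `git bisect run`.

    "good" / True  -> 0   (mark this commit good)
    "bad"  / False -> 1   (mark this commit bad)
    "skip" / None  -> 125 (commit cannot be tested; skip it)
    """
    if result == "good" or result is True:
        return 0
    if result == "bad" or result is False:
        return 1
    if result == "skip" or result is None:
        return 125
    raise ValueError(f"unrecognized result: {result!r}")
```

Note that exit codes 128 and above abort the bisect session entirely, which is why "bad" is reported as 1 rather than some larger value.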
web-platform-compat
readthedoc
JSON
web-platform-compat 0.1.0 documentation web-platform-compat[¶](#web-platform-compat) === The Web Platform Compatibility API will support compatibility data on the [Mozilla Developer Network](https://developer.mozilla.org). This currently takes the form of browser compatibility tables, such as the one on the [CSS display property](https://developer.mozilla.org/en-US/docs/Web/CSS/display#Browser_compatibility). The API will help centralize this data, and allow it to be kept consistent across languages and different presentations. *Note: This project will be renamed to browsercompat in the near future, to synchronize the project name with the planned production domain name https://browsercompat.org.* The project started in December 2013. The goals, requirements, and current status are documented on the [MozillaWiki](https://wiki.mozilla.org/index.php?title=MDN/Projects/Development/CompatibilityTables). This project will implement the data store and API for compatibility data and related resources. Status[¶](#status) --- We’re defining the API and the split between API-side and client-side functionality - see the [draft API docs](draft/intro.html). The next step is to implement some of the API with sample data, as an aid to the discussion.
Development[¶](#development) --- | Code: | <https://github.com/jwhitlock/web-platform-compat> | | Dev Server: | <https://browsercompat.herokuapp.com> (based on [jwhitlock/browsercompat-data](https://github.com/jwhitlock/browsercompat-data)) | | Issues: | <https://bugzilla.mozilla.org/buglist.cgi?quicksearch=compat-data> (tracking bug) <https://github.com/jwhitlock/web-platform-compat/issues?state=open> (documentation issues) <https://bugzilla.mozilla.org/showdependencytree.cgi?id=996570&hide_resolved=1> (blocking issues for v1) | | Dev Docs: | <https://web-platform-compat.readthedocs.org> <https://github.com/jwhitlock/web-platform-compat/wiki> | | Mailing list: | <https://lists.mozilla.org/listinfo/dev-mdn> | | IRC: | <irc://irc.mozilla.org/mdndev> | Contents: ### Installation[¶](#installation) #### Install Django Project[¶](#install-django-project) For detailed local installation instructions, including OS-specific instructions, see the [Installation page on the wiki](https://github.com/jwhitlock/web-platform-compat/wiki). 1. Install system packages and libraries. The required packages are [Python](https://www.python.org) (2.7, 3.4, or both), [pip](https://pip.pypa.io/en/latest/) (latest), and [virtualenv](https://virtualenv.pypa.io/en/latest/) (latest). To match production and for a smooth installation of Python packages, install [PostgreSQL](http://www.postgresql.org) (9.2 or later recommended) and [Memcached](http://memcached.org) (latest). [virtualenvwrapper](http://virtualenvwrapper.readthedocs.org/en/latest/) and [autoenv](https://github.com/kennethreitz/autoenv) will make your development life easier. 2. Optionally, provision a PostgreSQL database, recommended to match production. The default Django database settings will use a [SQLite](http://sqlite.org) database named `db.sqlite3`. 3. Optionally, run [Memcached](http://memcached.org) for improved read performance and to match production. The default settings will run without a cache. 4.
[Clone project locally](https://help.github.com/articles/which-remote-url-should-i-use/). 5. [Create a virtualenv](https://virtualenv.pypa.io/en/latest/userguide.html). 6. Install dependencies with `pip install -r requirements.txt -r requirements-dev.txt`. 7. Customize the configuration with environment variables. See `wpcsite/settings.py` and `env.dist` for advice and available settings. 8. Initialize the database and a superuser account with `./manage.py migrate`. 9. Verify that tests pass with `./manage.py test` or `make test`. 10. Run it with `./manage.py runserver` or `./manage.py runserver_plus`. #### Install in Heroku[¶](#install-in-heroku) [Heroku](https://www.heroku.com/) allows you to quickly deploy web-platform-compat. Heroku hosts the beta version of the service at <https://browsercompat.herokuapp.com>, using the add-ons: * [heroku-postgresql](https://devcenter.heroku.com/articles/heroku-postgresql) ([hobby-basic tier](https://devcenter.heroku.com/articles/heroku-postgres-plans), $9/month, required for size of dataset) * [memcachier](https://devcenter.heroku.com/articles/memcachier) (free dev tier) To deploy with Heroku, you’ll need to [sign up for a free account](https://signup.heroku.com/) and install the [Heroku Toolbelt](http://toolbelt.heroku.com/). Then you can: 1. Clone project locally 2. `heroku apps:create` 3. `git push heroku master` 4. See the current config with `heroku config`, and then customize with environment variables using `heroku config:set` (see `wpcsite/settings.py` and `env.dist`) 5. Add superuser account (`heroku run ./manage.py createsuperuser`) #### Configuring authentication[¶](#configuring-authentication) The project uses [django-allauth](http://www.intenct.nl/projects/django-allauth/) as a framework for local and social authentication.
The [public service](https://browsercompat.herokuapp.com) uses username and password for local authentication, and [Firefox Accounts](https://developer.mozilla.org/en-US/Firefox_Accounts) (FxA) for social authentication. django-allauth supports multiple emails per user, with one primary email used for communication. Email addresses are validated by sending a confirmation link. For a public server, you’ll need to [configure Django to send email](https://docs.djangoproject.com/en/1.7/topics/email/), by configuring your mail server and setting environment variables. For local development, it is easiest to print emails to the console: ``` export EMAIL_BACKEND="django.core.mail.backends.console.EmailBackend" ``` django-allauth supports many social authentication providers. See the [providers documentation](http://django-allauth.readthedocs.org/en/latest/providers.html) for the current list and hints for configuration. Using a authentication provider is not required, especially for local development. Instead, use local authentication with a username and password. If you need FxA integration, see the [Firefox Accounts page on the wiki](https://github.com/jwhitlock/web-platform-compat/wiki/Firefox%20Accounts) for install hints. #### Load Data[¶](#load-data) There are several ways to get data into your API: 1. Load data from the github export 2. Load data from another webcompat server 3. Load sample data from the [WebPlatform project](http://www.webplatform.org) and [MDN](https://developer.mozilla.org/en-US/) ##### Load from GitHub[¶](#load-from-github) The data on [browsercompat.herokuapp.com](https://browsercompat.herokuapp.com) is archived in the [browsercompat-data](https://github.com/jwhitlock/browsercompat-data) github repo, and this is the fastest way to get data into your empty API: 1. Clone the github repo (`git clone https://github.com/jwhitlock/browsercompat-data.git`) 2. Run the API (`./manage.py runserver`) 3. 
Import the data (`tools/upload_data.py --data /path/to/browsercompat-data/data`) ##### Load from another webcompat server[¶](#load-from-another-webcompat-server) If you have read access to a webcompat server that you’d like to clone, you can grab the data for your own server. 1. Download the data (`tools/download_data.py --api https://browsercompat.example.com`) 2. Run the API (`./manage.py runserver`) 3. Import the data (`tools/upload_data.py`) ##### Load Sample Data[¶](#load-sample-data) The [WebPlatform project](http://www.webplatform.org) imported data from [MDN](https://developer.mozilla.org/en-US/), and stored the formatted compatibility data in a [github project](https://github.com/webplatform/compatibility-data). There is a lot of data that was not imported, so it’s not a good data source for re-displaying on MDN. However, combining this data with specification data from MDN will create a good data set for testing the API at scale. To load sample data: 1. Run the API (`./manage.py runserver`) 2. Load a subset of the WebPlatform data (`tools/load_webcompat_data.py`) or full set of data (`tools/load_webcompat.py --all-data`) 3. Load specification data (`tools/load_spec_data.py`) ### Contributing[¶](#contributing) Contributions should follow the [MDN Contribution Guidelines](https://github.com/mozilla/kuma/blob/master/CONTRIBUTING.md): * You agree to license your contributions under [MPL 2](http://www.mozilla.org/MPL/2.0/) * Discuss large changes on the [dev-mdn mailing list](https://lists.mozilla.org/listinfo/dev-mdn) or on a [bugzilla bug](https://bugzilla.mozilla.org/show_bug.cgi?id=989448) before coding. * Python code style should follow [PEP8 standards](http://www.python.org/dev/peps/pep-0008/) whenever possible. 
* All commit messages must start with “bug NNNNNNN” or “fix bug NNNNNNN” + Reason: Make it easy for someone to consume the commit log and reach originating requests for all changes + Exceptions: “Merge” and “Revert” commits + Notes: - “fix bug NNNNNNN” - will trigger a github bot to automatically mark bug as “RESOLVED:FIXED” - If a pull request has multiple commits, we should squash commits together or re-word commit messages so that each commit message contains a bug number * MDN module [owner or peer](https://wiki.mozilla.org/Modules/All#MDN) must review and merge all pull requests. + Reason: Owners and peers are accountable for the quality of MDN code changes + Exceptions: Owners/peers may commit directly to master for critical security/down-time fixes; they must file a bug for follow-up review. * MDN reviewers must verify sufficient test coverage on all changes - either by new or existing tests. + Reason: Automated tests reduce human error involved in reviews + Notes: The Django site has [good testing docs](https://docs.djangoproject.com/en/dev/topics/testing/), and [Django REST framework](http://www.django-rest-framework.org) has some [additional testing docs](http://www.django-rest-framework.org/api-guide/testing). #### What to work on[¶](#what-to-work-on) There is a [tracking bug](https://bugzilla.mozilla.org/showdependencytree.cgi?id=989448&hide_resolved=1) for this project, and a [specific bug](https://bugzilla.mozilla.org/showdependencytree.cgi?id=996570&hide_resolved=1) for the data store, the primary purpose of this project. The dependent bugs represent areas of work, and are not exhaustive. If you want to contribute at this phase in development, take a look at the bugs and linked documents, to familiarize yourself with the project, and then get in touch with the team on IRC (#mdndev on irc.mozilla.org) to carve out a piece of the project. #### GitHub workflow[¶](#github-workflow) 1. [Get your environment setup](installation.html) 2.
Set up mozilla remote (`$ git remote add mozilla git://github.com/mozilla/web-platform-compat.git`) 3. Create a branch for a bug (`$ git checkout -b new-issue-888888`) 4. Develop on bug branch. Time passes, the mozilla/web-platform-compat repository accumulates new commits 5. Commit changes to bug branch (`$ git add . ; git commit -m 'fix bug 888888 - commit message'`) 6. Fetch mozilla (`$ git fetch mozilla`) 7. Update local master (`$ git checkout master; git pull mozilla master`) Repeat steps 4-7 until development is complete 8. Rebase issue branch (`$ git checkout new-issue-888888; git rebase master`) 9. Push branch to GitHub (`$ git push origin new-issue-888888`) 10. Issue pull request (Click Pull Request button) ### API (DRAFT)[¶](#api-draft) **This is a draft document** The [MDN community](http://developer.mozilla.org) maintains information about web technologies such as HTML and CSS. This includes information about which specifications define the technology, and what browsers support the technology. Browser support is shown in **Browser compatibility** tables in the source. A simple example is for the [HTML element <address>](https://developer.mozilla.org/en-US/docs/Web/HTML/Element/address#Browser_compatibility). A more complex example is the [CSS property display](https://developer.mozilla.org/en-US/docs/Web/CSS/display#Browser_compatibility). There are several issues with the table-based compatibility tables, some of which could be solved by having a database-backed representation of compatibility data, readable and writable from an API. #### Entrypoints[¶](#entrypoints) The API will be reachable at <https://browsersupports.org/api/v1/>. A non-SSL version will be reachable at <http://browsersupports.org/api/v1/>, and will redirect to the SSL version. This site is for applications that read, create, update, and delete compatibility resources. It includes a browsable API to ease application development, but not full documentation.
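A client selects the JSON representation by sending the `application/vnd.api+json` media type in the `Accept` header. Building (without sending) such a request with Python's standard library looks like this — a sketch only, since the entrypoint above is a draft and the server may not be live:

```python
from urllib.request import Request

# Build a request for the draft browsers collection endpoint.
# Host and media type are taken from the draft docs; neither is
# guaranteed to exist yet.
req = Request(
    "https://browsersupports.org/api/v1/browsers",
    headers={"Accept": "application/vnd.api+json"},
)
```

Omitting the `Accept` header (or sending `text/html`, as a browser does) would instead return the Django REST Framework browsable API.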
The API supports two representations: `application/vnd.api+json` *(default)* JSON mostly conforming to the [JSON API](http://jsonapi.org). `text/html` the Django REST Framework browsable API. The API supports user accounts with [Persona](http://www.mozilla.org/en-US/persona/) authentication. Persona credentials can be exchanged for an [OAuth 2.0](http://oauth.net/2/) token for server-side code changes. A developer-centered website will be available at <https://browsersupports.org/>. A non-SSL version will be available at <http://browsersupports.org> and will redirect to the HTTPS version. This site is for documentation, example code, and example presentations. The documentation site is not editable from the browser. It uses gettext-style translations. en-US will be the first supported language. #### Proposed Technologies[¶](#proposed-technologies) The two sites are served from a single codebase, at <https://github.com/mozilla/web-platform-compat>. Technologies include: * [Django 1.7](https://docs.djangoproject.com/en/1.7/), a web framework * [Django REST Framework](http://www.django-rest-framework.org), an API framework * [django-simple-history](https://django-simple-history.readthedocs.org/en/latest/index.html), for recording changes to models * [django-mptt](https://github.com/django-mptt/django-mptt/), for efficiently storing hierarchical data * [django-oauth2-provider](https://github.com/caffeinehit/django-oauth2-provider), for script-based updates of content #### [Resources](#id1)[¶](#resources) Resources are simple objects supporting [CRUD](http://en.wikipedia.org/wiki/Create,_read,_update_and_delete) operations. Read operations can be done anonymously. Creating and updating require account permissions, and deleting requires admin account permissions. 
All resources support similar operations using HTTP methods: * `GET /api/v1/<type>` - List instances (paginated) * `POST /api/v1/<type>` - Create new instance * `GET /api/v1/<type>/<id>` - Retrieve an instance * `PUT /api/v1/<type>/<id>` - Update an instance * `DELETE /api/v1/<type>/<id>` - Delete instance Additional features may be added as needed. See the [JSON API docs](http://jsonapi.org/format/) for ideas and what format they will take. Because the operations are similar, only [browsers](#browsers) has complete operations examples, and others just show retrieving an instance (`GET /api/v1/<type>/<id>`). Contents * [Resources](#resources) + [Browsers](#browsers) - [List](#list) - [Retrieve by ID](#retrieve-by-id) - [Retrieve by Slug](#retrieve-by-slug) - [Create](#create) - [Update](#update) - [Partial Update](#partial-update) - [Update order of related resources](#update-order-of-related-resources) - [Reverting to a previous instance](#reverting-to-a-previous-instance) - [Deletion](#deletion) - [Reverting a deletion](#reverting-a-deletion) + [Versions](#versions) + [Features](#features) + [Supports](#supports) + [Specifications](#specifications) + [Sections](#sections) + [Maturities](#maturities) ##### [Browsers](#id2)[¶](#browsers) A **browser** is a brand of web client that has one or more versions. This follows most users’ understanding of browsers, i.e., `firefox` represents desktop Firefox, `safari` represents desktop Safari, and `firefox-mobile` represents Firefox Mobile. The **browsers** representation includes: * **attributes** + **id** *(server selected)* - Database ID + **slug** *(write-once)* - Unique, human-friendly slug + **name** *(localized)* - Browser name + **note** *(localized)* - Notes, intended for related data like OS, applicable device, engines, etc. * **links** + **versions** *(many)* - Associated [versions](#versions), ordered roughly from earliest to latest. User can change the order. 
+ **history_current** *(one)* - Current [historical_browsers](history.html#historical-browsers). Can be set to a value from **history** to revert changes. + **history** *(many)* - Associated [historical_browsers](history.html#historical-browsers) in time order (most recent first). Changes are ignored. ###### [List](#id3)[¶](#list) To request the paginated list of **browsers**: ``` GET /api/v1/browsers HTTP/1.1 Host: browsercompat.org Accept: application/vnd.api+json ``` A sample response is: ``` HTTP/1.1 200 OK Content-Type: application/vnd.api+json ``` ``` { "browsers": [ { "id": "1", "slug": "android", "name": { "en": "Android" }, "note": null, "links": { "history": [ "1" ], "history_current": "1", "versions": [ "1", "2", "3" ] } }, { "id": "2", "slug": "blackberry", "name": { "en": "BlackBerry" }, "note": null, "links": { "history": [ "2" ], "history_current": "2", "versions": [ "4", "5", "6" ] } }, { "id": "3", "slug": "chrome", "name": { "en": "Chrome" }, "note": null, "links": { "history": [ "3" ], "history_current": "3", "versions": [ "7", "8", "9" ] } }, { "id": "4", "slug": "chrome_for_android", "name": { "en": "Chrome for Android" }, "note": null, "links": { "history": [ "4" ], "history_current": "4", "versions": [ "10", "11", "12" ] } }, { "id": "5", "slug": "chrome_mobile", "name": { "en": "Chrome Mobile" }, "note": null, "links": { "history": [ "5" ], "history_current": "5", "versions": [ "13" ] } }, { "id": "6", "slug": "firefox", "name": { "en": "Firefox" }, "note": null, "links": { "history": [ "6" ], "history_current": "6", "versions": [ "14", "15", "16", "17", "18" ] } }, { "id": "7", "slug": "firefox_mobile", "name": { "en": "Firefox Mobile" }, "note": null, "links": { "history": [ "7" ], "history_current": "7", "versions": [ "19", "20", "21" ] } }, { "id": "8", "slug": "firefox_os", "name": { "en": "Firefox OS" }, "note": null, "links": { "history": [ "8" ], "history_current": "8", "versions": [ "22", "23", "24" ] } }, { "id": "9", "slug": 
"ie_mobile", "name": { "en": "IE Mobile" }, "note": null, "links": { "history": [ "9" ], "history_current": "9", "versions": [ "25", "26", "27" ] } }, { "id": "10", "slug": "internet_explorer", "name": { "en": "Internet Explorer" }, "note": null, "links": { "history": [ "10" ], "history_current": "10", "versions": [ "28", "29", "30", "31", "32" ] } } ], "links": { "browsers.history": { "type": "historical_browsers", "href": "https://browsercompat.org/api/v1/historical_browsers/{browsers.history}" }, "browsers.history_current": { "type": "historical_browsers", "href": "https://browsercompat.org/api/v1/historical_browsers/{browsers.history_current}" }, "browsers.versions": { "type": "versions", "href": "https://browsercompat.org/api/v1/versions/{browsers.versions}" } }, "meta": { "pagination": { "browsers": { "previous": null, "next": "https://browsercompat.org/api/v1/browsers?page=2", "count": 15 } } } } ``` ###### [Retrieve by ID](#id4)[¶](#retrieve-by-id) To request a single **browser** with a known ID: ``` GET /api/v1/browsers/6 HTTP/1.1 Host: browsercompat.org Accept: application/vnd.api+json ``` A sample response is: ``` HTTP/1.1 200 OK Content-Type: application/vnd.api+json ``` ``` { "browsers": { "id": "6", "slug": "firefox", "name": { "en": "Firefox" }, "note": null, "links": { "history": [ "6" ], "history_current": "6", "versions": [ "14", "15", "16", "17", "18" ] } }, "links": { "browsers.history": { "type": "historical_browsers", "href": "https://browsercompat.org/api/v1/historical_browsers/{browsers.history}" }, "browsers.history_current": { "type": "historical_browsers", "href": "https://browsercompat.org/api/v1/historical_browsers/{browsers.history_current}" }, "browsers.versions": { "type": "versions", "href": "https://browsercompat.org/api/v1/versions/{browsers.versions}" } } } ``` ###### [Retrieve by Slug](#id5)[¶](#retrieve-by-slug) To request a **browser** by slug: ``` GET /api/v1/browsers?slug=firefox HTTP/1.1 Host: browsercompat.org Accept: 
application/vnd.api+json ``` The response includes the desired browser, in list format: ``` HTTP/1.1 200 OK Content-Type: application/vnd.api+json ``` ``` { "browsers": [ { "id": "6", "slug": "firefox", "name": { "en": "Firefox" }, "note": null, "links": { "history": [ "6" ], "history_current": "6", "versions": [ "14", "15", "16", "17", "18" ] } } ], "links": { "browsers.history": { "type": "historical_browsers", "href": "https://browsercompat.org/api/v1/historical_browsers/{browsers.history}" }, "browsers.history_current": { "type": "historical_browsers", "href": "https://browsercompat.org/api/v1/historical_browsers/{browsers.history_current}" }, "browsers.versions": { "type": "versions", "href": "https://browsercompat.org/api/v1/versions/{browsers.versions}" } }, "meta": { "pagination": { "browsers": { "previous": null, "next": null, "count": 1 } } } } ``` ###### [Create](#id6)[¶](#create) Creating **browser** instances requires authentication with create privileges. To create a new **browser** instance, `POST` a representation with at least the required parameters. Some items (such as the `id` attribute and the `history_current` link) will be picked by the server, and will be ignored if included.
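Such a create body could be assembled like this (a minimal Python sketch; `make_browser_payload` is a hypothetical helper, and nothing is sent over the network):

```python
import json

# Hypothetical helper: build the JSON API body for creating a browser.
# Server-selected items such as "id" and the "history_current" link are
# deliberately left out, since the server would ignore them anyway.
def make_browser_payload(slug, name_translations):
    return {"browsers": {"slug": slug, "name": name_translations}}

payload = make_browser_payload("amazon-silk-mobile", {"en": "Amazon Silk Mobile"})
body = json.dumps(payload)  # the string to POST as application/vnd.api+json
```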
Here’s an example of creating a **browser** instance: ``` POST /api/v1/browsers HTTP/1.1 Host: browsercompat.org Accept: application/vnd.api+json Content-Length: 132 Content-Type: application/vnd.api+json Cookie: csrftoken=<KEY>; sessionid=wurexa2wq416ftlvd5plesngwa28183h X-Csrftoken: <KEY> ``` ``` { "browsers": { "slug": "amazon-silk-mobile", "name": { "en": "Amazon Silk Mobile" } } } ``` A sample response is: ``` HTTP/1.1 201 CREATED Content-Type: application/vnd.api+json ``` ``` { "browsers": { "id": "16", "slug": "amazon-silk-mobile", "name": { "en": "Amazon Silk Mobile" }, "note": null, "links": { "history": [ "16" ], "history_current": "16", "versions": [] } }, "links": { "browsers.history": { "type": "historical_browsers", "href": "https://browsercompat.org/api/v1/historical_browsers/{browsers.history}" }, "browsers.history_current": { "type": "historical_browsers", "href": "https://browsercompat.org/api/v1/historical_browsers/{browsers.history_current}" }, "browsers.versions": { "type": "versions", "href": "https://browsercompat.org/api/v1/versions/{browsers.versions}" } } } ``` This, and other methods that change resources, will create a new [changeset](change-control.html#changesets), and associate the new [historical_browsers](history.html#historical-browsers) with that [changeset](change-control.html#changesets). 
To assign to an existing changeset, add it to the URI: ``` POST /api/v1/browsers?changeset=4 HTTP/1.1 Host: browsercompat.org Accept: application/vnd.api+json Content-Length: 220 Content-Type: application/vnd.api+json Cookie: csrftoken=<KEY>; sessionid=wurexa2wq416ftlvd5plesngwa28183h X-Csrftoken: <KEY> ``` ``` { "browsers": { "slug": "nintendo-ds", "name": { "en": "Nintendo DS Browser", "ja": "\u30cb\u30f3\u30c6\u30f3\u30c9\u30fc\uff24\uff33\u30d6\u30e9\u30a6\u30b6" } } } ``` A sample response is: ``` HTTP/1.1 201 CREATED Content-Type: application/vnd.api+json ``` ``` { "browsers": { "id": "18", "slug": "nintendo-ds", "name": { "en": "Nintendo DS Browser", "ja": "ニンテンドーＤＳブラウザ" }, "note": null, "links": { "history": [ "18" ], "history_current": "18", "versions": [] } }, "links": { "browsers.history": { "type": "historical_browsers", "href": "https://browsercompat.org/api/v1/historical_browsers/{browsers.history}" }, "browsers.history_current": { "type": "historical_browsers", "href": "https://browsercompat.org/api/v1/historical_browsers/{browsers.history_current}" }, "browsers.versions": { "type": "versions", "href": "https://browsercompat.org/api/v1/versions/{browsers.versions}" } } } ``` ###### [Update](#id7)[¶](#update) Updating a **browser** instance requires authentication with create privileges. Some items (such as the `id` attribute and `history` links) cannot be changed, and will be ignored if included. A successful update will return a `200 OK`, add a new ID to the `history` links list, and update the `history_current` link.
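That bookkeeping can be sketched in Python (an illustration of the documented behavior, not actual server code):

```python
# On a successful update the server prepends the new historical_browsers ID
# to the "history" list and points "history_current" at it.
def apply_update(links, new_history_id):
    updated = dict(links)
    updated["history"] = [new_history_id] + list(links["history"])
    updated["history_current"] = new_history_id
    return updated

# e.g. an instance whose history is ["10"] gains a new entry "19"
after = apply_update({"history": ["10"], "history_current": "10"}, "19")
```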
This update changes the English name from “Internet Explorer” to “Microsoft Internet Explorer”: ``` PUT /api/v1/browsers/10 HTTP/1.1 Host: browsercompat.org Accept: application/vnd.api+json Content-Length: 1010 Content-Type: application/vnd.api+json Cookie: csrftoken=<KEY>; sessionid=wurexa2wq416ftlvd5plesngwa28183h X-Csrftoken: <KEY> ``` ``` { "browsers": { "id": "10", "slug": "internet_explorer", "name": { "en": "Microsoft Internet Explorer" }, "note": null, "links": { "history": [ "10" ], "history_current": "10", "versions": [ "28", "29", "30", "31", "32" ] } }, "links": { "browsers.history": { "type": "historical_browsers", "href": "https://browsercompat.org/api/v1/historical_browsers/{browsers.history}" }, "browsers.history_current": { "type": "historical_browsers", "href": "https://browsercompat.org/api/v1/historical_browsers/{browsers.history_current}" }, "browsers.versions": { "type": "versions", "href": "https://browsercompat.org/api/v1/versions/{browsers.versions}" } } } ``` With this response: ``` HTTP/1.1 200 OK Content-Type: application/vnd.api+json ``` ``` { "browsers": { "id": "10", "slug": "internet_explorer", "name": { "en": "Microsoft Internet Explorer" }, "note": null, "links": { "history": [ "19", "10" ], "history_current": "19", "versions": [ "28", "29", "30", "31", "32" ] } }, "links": { "browsers.history": { "type": "historical_browsers", "href": "https://browsercompat.org/api/v1/historical_browsers/{browsers.history}" }, "browsers.history_current": { "type": "historical_browsers", "href": "https://browsercompat.org/api/v1/historical_browsers/{browsers.history_current}" }, "browsers.versions": { "type": "versions", "href": "https://browsercompat.org/api/v1/versions/{browsers.versions}" } } } ``` ###### [Partial Update](#id8)[¶](#partial-update) An update can include just the fields being changed. This is a further request to change the English name for the Internet Explorer browser.
``` PUT /api/v1/browsers/10 HTTP/1.1 Host: browsercompat.org Accept: application/vnd.api+json Content-Length: 78 Content-Type: application/vnd.api+json Cookie: csrftoken=p7FqFyNp6hZS0FJYKyQxVmLrZILldjqn; sessionid=wurexa2wq416ftlvd5plesngwa28183h X-Csrftoken: p7FqFyNp6hZS0FJYKyQxVmLrZILldjqn ``` ``` { "browsers": { "name": { "en": "IE" } } } ``` With this response: ``` HTTP/1.1 200 OK Content-Type: application/vnd.api+json ``` ``` { "browsers": { "id": "10", "slug": "internet_explorer", "name": { "en": "IE" }, "note": null, "links": { "history": [ "20", "19", "10" ], "history_current": "20", "versions": [ "28", "29", "30", "31", "32" ] } }, "links": { "browsers.history": { "type": "historical_browsers", "href": "https://browsercompat.org/api/v1/historical_browsers/{browsers.history}" }, "browsers.history_current": { "type": "historical_browsers", "href": "https://browsercompat.org/api/v1/historical_browsers/{browsers.history_current}" }, "browsers.versions": { "type": "versions", "href": "https://browsercompat.org/api/v1/versions/{browsers.versions}" } } } ``` ###### [Update order of related resources](#id9)[¶](#update-order-of-related-resources) In many cases, related resources (which appear in the “links” attribute) are sorted by ID. In some cases, the order is significant, and is set on a related field.
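When the order is significant, a client might verify that a proposed reorder keeps exactly the same IDs before sending it (a hypothetical client-side check; the API itself does not require it):

```python
# The reordered list must contain exactly the same IDs as the current one;
# reordering must not add or drop related resources.
def is_valid_reorder(current_ids, new_ids):
    return sorted(current_ids) == sorted(new_ids)

ok = is_valid_reorder(["28", "29", "30", "31", "32"], ["28", "29", "30", "32", "31"])
bad = is_valid_reorder(["28", "29"], ["28", "29", "30"])  # an ID was added
```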
For example, **versions** for a **browser** are ordered by updating the order on the **browser** object. To change just the [versions](#versions) order: ``` PUT /api/v1/browsers/10 HTTP/1.1 Host: browsercompat.org Accept: application/vnd.api+json Content-Length: 205 Content-Type: application/vnd.api+json Cookie: csrftoken=<KEY>; sessionid=wurexa2wq416ftlvd5plesngwa28183h X-Csrftoken: <KEY> ``` ``` { "browsers": { "links": { "versions": [ "28", "29", "30", "32", "31" ] } } } ``` With this response: ``` HTTP/1.1 200 OK Content-Type: application/vnd.api+json ``` ``` { "browsers": { "id": "10", "slug": "internet_explorer", "name": { "en": "IE" }, "note": null, "links": { "history": [ "21", "20", "19", "10" ], "history_current": "21", "versions": [ "28", "29", "30", "32", "31" ] } }, "links": { "browsers.history": { "type": "historical_browsers", "href": "https://browsercompat.org/api/v1/historical_browsers/{browsers.history}" }, "browsers.history_current": { "type": "historical_browsers", "href": "https://browsercompat.org/api/v1/historical_browsers/{browsers.history_current}" }, "browsers.versions": { "type": "versions", "href": "https://browsercompat.org/api/v1/versions/{browsers.versions}" } } } ``` ###### [Reverting to a previous instance](#id10)[¶](#reverting-to-a-previous-instance) To revert to an earlier instance, set the `history_current` link to a previous value.
This resets the content and creates a new [historical_browsers](history.html#historical-browsers) object: ``` PUT /api/v1/browsers/10 HTTP/1.1 Host: browsercompat.org Accept: application/vnd.api+json Content-Length: 92 Content-Type: application/vnd.api+json Cookie: csrftoken=<KEY>; sessionid=wurexa2wq416ftlvd5plesngwa28183h X-Csrftoken: <KEY> ``` ``` { "browsers": { "links": { "history_current": "10" } } } ``` With this response: ``` HTTP/1.1 200 OK Content-Type: application/vnd.api+json ``` ``` { "browsers": { "id": "10", "slug": "internet_explorer", "name": { "en": "Internet Explorer" }, "note": null, "links": { "history": [ "22", "21", "20", "19", "10" ], "history_current": "22", "versions": [ "28", "29", "30", "32", "31" ] } }, "links": { "browsers.history": { "type": "historical_browsers", "href": "https://browsercompat.org/api/v1/historical_browsers/{browsers.history}" }, "browsers.history_current": { "type": "historical_browsers", "href": "https://browsercompat.org/api/v1/historical_browsers/{browsers.history_current}" }, "browsers.versions": { "type": "versions", "href": "https://browsercompat.org/api/v1/versions/{browsers.versions}" } } } ``` ###### [Deletion](#id11)[¶](#deletion) To delete a **browser**: ``` DELETE /api/v1/browsers/16 HTTP/1.1 Host: browsercompat.org Accept: application/vnd.api+json Content-Length: 0 Cookie: csrftoken=<KEY>; sessionid=wurexa2wq416ftlvd5plesngwa28183h X-Csrftoken: <KEY> ``` The response has no body: ``` HTTP/1.1 204 NO CONTENT ``` ###### [Reverting a deletion](#id12)[¶](#reverting-a-deletion) Reverting deletions is not currently possible. ##### [Versions](#id13)[¶](#versions) A **version** is a specific release of a Browser. The **versions** representation includes: * **attributes** + **id** *(server selected)* - Database ID + **version** *(write-once)* - Version of browser. Numeric or text string, depending on the status (see table below). 
+ **release_day** - Day that browser was released in [ISO 8601](http://en.wikipedia.org/wiki/ISO_8601) format, or null if unknown. + **retirement_day** - Approximate day the browser was “retired” (stopped being a current browser), in [ISO 8601](http://en.wikipedia.org/wiki/ISO_8601) format, or null if unknown. + **status** - One of: `beta` (a numbered release candidate suggested for early adopters or testers), `current` (current version, the preferred download or update for users), `future` (a named but unnumbered planned future release), `retired-beta` (old beta version, replaced by a new beta or release), `retired` (old version, no longer the preferred download for any platform), or `unknown` (status of this version is unknown) + **release_notes_uri** *(localized)* - URI of release notes for this version, or null if none. + **note** *(localized)* - Engine, OS, etc. information, or null + **order** *(read-only)* - The relative order among versions for this browser. The order can be changed on the **browser** resource. * **links** + **browser** - The related **browser** + **supports** *(many)* - Associated **supports**, in ID order. Changes are ignored; work on the **supports** to add, change, or remove. + **history_current** *(one)* - Current **historical_versions**. Set to a value from **history** to revert to that version. + **history** *(many)* - Associated **historical_versions**, in time order (most recent first). Changes are ignored. The version is either a numeric value, such as `"11.0"`, or text, such as `"Nightly"`.
The version format depends on the chosen status: | Status | Version | | --- | --- | | `beta` | numeric | | `current` | numeric or the text `"current"` | | `future` | text | | `retired-beta` | numeric | | `retired` | numeric | | `unknown` | numeric | To get a single **version**: ``` GET /api/v1/versions/18 HTTP/1.1 Host: browsercompat.org Accept: application/vnd.api+json ``` A sample response is: ``` HTTP/1.1 200 OK Content-Type: application/vnd.api+json ``` ``` { "versions": { "id": "18", "version": "16.0", "release_day": "2012-10-09", "retirement_day": "2012-11-20", "status": "retired", "release_notes_uri": { "en": "https://developer.mozilla.org/en/Firefox/Releases/16", "de": "https://developer.mozilla.org/de/Firefox/Releases/16", "es": "https://developer.mozilla.org/es/Firefox/Releases/16", "fr": "https://developer.mozilla.org/fr/Firefox/Versions/16", "ja": "https://developer.mozilla.org/ja/Firefox/Releases/16", "ko": "https://developer.mozilla.org/ko/Firefox/Releases/16", "pl": "https://developer.mozilla.org/pl/Firefox/Releases/16", "pt-PT": "https://developer.mozilla.org/pt-PT/Firefox/Releases/16", "ru": "https://developer.mozilla.org/ru/Firefox/Releases/16", "zh-CN": "https://developer.mozilla.org/zh-CN/Firefox/Releases/16", "zh-TW": "https://developer.mozilla.org/zh-TW/Firefox/Releases/16" }, "note": null, "order": 4, "links": { "browser": "6", "supports": [ "12", "22" ], "history": [ "18" ], "history_current": "18" } }, "links": { "versions.browser": { "type": "browsers", "href": "https://browsercompat.org/api/v1/browsers/{versions.browser}" }, "versions.supports": { "type": "supports", "href": "https://browsercompat.org/api/v1/supports/{versions.supports}" }, "versions.history": { "type": "historical_versions", "href": "https://browsercompat.org/api/v1/historical_versions/{versions.history}" }, "versions.history_current": { "type": "historical_versions", "href": "https://browsercompat.org/api/v1/historical_versions/{versions.history_current}" } } } ``` 
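The status/version rules in the table above can be sketched as a small validation routine (“numeric” is interpreted here as digits and dots, which is an assumption about the API’s exact rules):

```python
import re

NUMERIC = re.compile(r"^\d+(\.\d+)*$")  # assumed meaning of "numeric"

def version_format_ok(status, version):
    if status == "future":                     # text only
        return NUMERIC.match(version) is None
    if status == "current":                    # numeric or the text "current"
        return version == "current" or NUMERIC.match(version) is not None
    # beta, retired-beta, retired, unknown: numeric
    return NUMERIC.match(version) is not None

checks = [version_format_ok("retired", "16.0"),
          version_format_ok("future", "Nightly"),
          version_format_ok("beta", "Nightly")]
```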
##### [Features](#id14)[¶](#features) A **feature** is a web technology. This could be a precise technology, such as the value `cover` for the CSS `background-size` property. It could be a hierarchical group of related technologies, such as the CSS `background-size` property or the set of all CSS properties. Some features correspond to a page on [MDN](https://developer.mozilla.org), which will display the list of specifications and a browser compatibility table of the sub-features. The **features** representation includes: * **attributes** + **id** *(server selected)* - Database ID + **slug** *(write-once)* - Unique, human-friendly slug + **mdn_uri** *(optional, localized)* - The URI of the language-specific MDN page that this feature was first scraped from. If the path contains unicode, it should be percent-encoded as in [RFC 3987](http://tools.ietf.org/html/rfc3987.html#section-3.1). May be used in UX or for debugging import scripts. + **experimental** - True if a feature is considered experimental, such as being non-standard or part of a non-ratified spec. + **standardized** - True if a feature is described in a standards-track spec, regardless of the spec’s maturity. + **stable** - True if a feature is considered suitable for production websites. + **obsolete** - True if a feature should not be used in new development. + **name** *(canonical or localized)* - Feature name. If the name is the code used by a developer, then the value is a string, and should be wrapped in a `<code>` block when displayed. If the name is a description of the feature, then the value is the available translations, including at least an `en` translation, and may include HTML markup. For example, `"display"` and `"display: none"` are canonical names for the CSS display property and one of the values for that property, while `"Basic support"`, `"<code>none, inline</code> and <code>block</code>"`, and `"CSS Properties"` are non-canonical names that should be translated.
* **links** + **sections** *(many)* - Associated [sections](#sections). Order can be changed by the user. + **supports** *(many)* - Associated [supports](#supports), in ID order; changes are ignored. + **parent** *(one or null)* - The feature one level up, or null if top-level. Can be changed by user. + **children** *(many)* - The features that have this feature as parent, in display order. Can be an empty list, for “leaf” features. Can be re-ordered by the user. + **history_current** *(one)* - Current [historical_features](history.html#historical-features). The user can set this to a value from **history** to revert to that version. + **history** *(many)* - Associated [historical_features](history.html#historical-features), in time order (most recent first). Changes are ignored. To get a specific **feature** (in this case, a leaf feature with a translated name): ``` GET /api/v1/features/13 HTTP/1.1 Host: browsercompat.org Accept: application/vnd.api+json ``` A sample response is: ``` HTTP/1.1 200 OK Content-Type: application/vnd.api+json ``` ``` { "features": { "id": "13", "slug": "web-css-transform-three-value-syntax", "mdn_uri": null, "experimental": false, "standardized": true, "stable": true, "obsolete": false, "name": { "en": "Three-value syntax", "es": "Sintaxis con tres valores", "ja": "3-値構文" }, "links": { "sections": [], "supports": [ "20", "21", "22", "23", "24", "25", "26" ], "parent": "10", "children": [], "history_current": "13", "history": [ "13" ] } }, "links": { "features.sections": { "type": "sections", "href": "https://browsercompat.org/api/v1/sections/{features.sections}" }, "features.supports": { "type": "supports", "href": "https://browsercompat.org/api/v1/supports/{features.supports}" }, "features.parent": { "type": "features", "href": "https://browsercompat.org/api/v1/features/{features.parent}" }, "features.children": { "type": "features", "href": "https://browsercompat.org/api/v1/features/{features.children}" }, "features.history_current": {
"type": "historical_features", "href": "https://browsercompat.org/api/v1/historical_features/{features.history_current}" }, "features.history": { "type": "historical_features", "href": "https://browsercompat.org/api/v1/historical_features/{features.history}" } } } ``` Here’s an example of a branch feature with a canonical name (the parent of the previous example): ``` GET /api/v1/features/10 HTTP/1.1 Host: browsercompat.org Accept: application/vnd.api+json ``` A sample response is: ``` HTTP/1.1 200 OK Content-Type: application/vnd.api+json ``` ``` { "features": { "id": "10", "slug": "web-css-transform-origin", "mdn_uri": { "en": "https://developer.mozilla.org/en-US/docs/Web/CSS/transform-origin", "es": "https://developer.mozilla.org/es/docs/Web/CSS/transform-origin", "fr": "https://developer.mozilla.org/fr/docs/Web/CSS/transform-origin", "ja": "https://developer.mozilla.org/ja/docs/Web/CSS/transform-origin" }, "experimental": false, "standardized": true, "stable": true, "obsolete": false, "name": "transform-origin", "links": { "sections": [ "4" ], "supports": [], "parent": "2", "children": [ "11", "13" ], "history_current": "10", "history": [ "10" ] } }, "links": { "features.sections": { "type": "sections", "href": "https://browsercompat.org/api/v1/sections/{features.sections}" }, "features.supports": { "type": "supports", "href": "https://browsercompat.org/api/v1/supports/{features.supports}" }, "features.parent": { "type": "features", "href": "https://browsercompat.org/api/v1/features/{features.parent}" }, "features.children": { "type": "features", "href": "https://browsercompat.org/api/v1/features/{features.children}" }, "features.history_current": { "type": "historical_features", "href": "https://browsercompat.org/api/v1/historical_features/{features.history_current}" }, "features.history": { "type": "historical_features", "href": "https://browsercompat.org/api/v1/historical_features/{features.history}" } } } ``` ##### [Supports](#id15)[¶](#supports) A 
**support** is an assertion that a particular Version of a Browser supports (or does not support) a feature. The **support** representation includes: * **attributes** + **id** *(server selected)* - Database ID + **support** - Assertion of support of the [version](#versions) for the [feature](#features), one of `"yes"`, `"no"`, `"partial"`, or `"unknown"` + **prefix** - Prefix used to enable support, such as “moz” + **prefix_mandatory** - True if the prefix is required + **alternate_name** - An alternate name associated with this feature, such as `"RTCPeerConnectionIdentityEvent"` + **alternate_name_mandatory** - True if the alternate name is required + **requires_config** - A configuration string required to enable the feature, such as `"media.peerconnection.enabled=on"` + **default_config** - The configuration string in the shipping browser, such as `"media.peerconnection.enabled=off"` + **protected** - True if the feature requires additional steps to enable in order to protect the user’s security or privacy, such as geolocation and the Bluetooth API. + **note** *(localized)* - Note on support, designed for display after a compatibility table, can contain HTML * **links** + **version** *(one)* - The associated [version](#versions). Cannot be changed by the user after creation. + **feature** *(one)* - The associated [feature](#features). Cannot be changed by the user after creation. The version and feature combo must be unique. + **history_current** *(one)* - Current [historical_supports](history.html#historical-supports). Can be changed to a valid **history** to revert to that version. + **history** *(many)* - Associated [historical_supports](history.html#historical-supports) in time order (most recent first). Changes are ignored. 
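A consumer might reduce a support record to a short display string using the attributes above (a hypothetical rendering helper, not part of the API):

```python
LABELS = {"yes": "Supported", "no": "Not supported",
          "partial": "Partial support", "unknown": "Support unknown"}

def summarise_support(s):
    # Base assertion, plus qualifiers for prefixes and required configuration.
    notes = []
    if s.get("prefix"):
        qualifier = "prefix " + s["prefix"]
        if s.get("prefix_mandatory"):
            qualifier += " (required)"
        notes.append(qualifier)
    if s.get("requires_config"):
        notes.append("requires " + s["requires_config"])
    label = LABELS[s["support"]]
    return label + (" (" + "; ".join(notes) + ")" if notes else "")

text = summarise_support({"support": "yes", "prefix": "moz",
                          "prefix_mandatory": True, "requires_config": None})
```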
To get a single **support**: ``` GET /api/v1/supports/22 HTTP/1.1 Host: browsercompat.org Accept: application/vnd.api+json ``` A sample response is: ``` HTTP/1.1 200 OK Content-Type: application/vnd.api+json ``` ``` { "supports": { "id": "22", "support": "yes", "prefix": null, "prefix_mandatory": false, "alternate_name": null, "alternate_mandatory": false, "requires_config": null, "default_config": null, "protected": false, "note": null, "links": { "version": "18", "feature": "13", "history_current": "22", "history": [ "22" ] } }, "links": { "supports.version": { "type": "versions", "href": "https://browsercompat.org/api/v1/versions/{supports.version}" }, "supports.feature": { "type": "features", "href": "https://browsercompat.org/api/v1/features/{supports.feature}" }, "supports.history_current": { "type": "historical_supports", "href": "https://browsercompat.org/api/v1/historical_supports/{supports.history_current}" }, "supports.history": { "type": "historical_supports", "href": "https://browsercompat.org/api/v1/historical_supports/{supports.history}" } } } ``` ##### [Specifications](#id16)[¶](#specifications) A **specification** is a standards document that specifies a web technology. The **specification** representation includes: * **attributes** + **id** *(server selected)* - Database ID + **slug** - Unique, human-friendly key + **mdn_key** - Key used in the KumaScript macros [SpecName](https://developer.mozilla.org/en-US/docs/Template:SpecName) and [Spec2](https://developer.mozilla.org/en-US/docs/Template:Spec2). + **name** *(localized)* - Specification name + **uri** *(localized)* - Specification URI, without subpath and anchor * **links** + **maturity** *(one)* - Associated [maturity](#maturities). Can be changed by the user. + **sections** *(many)* - Associated [sections](#sections). The order can be changed by the user. + **history_current** *(one)* - Current [historical_specifications](history.html#historical-specifications). 
Can be changed to a valid **history** to revert to that version. + **history** *(many)* - Associated [historical_specifications](history.html#historical-specifications) in time order (most recent first). Changes are ignored. To get a single **specification**: ``` GET /api/v1/specifications/2 HTTP/1.1 Host: browsercompat.org Accept: application/vnd.api+json ``` A sample response is: ``` HTTP/1.1 200 OK Content-Type: application/vnd.api+json ``` ``` { "specifications": { "id": "2", "slug": "css2_1", "mdn_key": "CSS2.1", "name": { "en": "CSS Level&nbsp;2 (Revision&nbsp;1)" }, "uri": { "en": "http://www.w3.org/TR/CSS2/" }, "links": { "maturity": "1", "sections": [ "2" ], "history_current": "2", "history": [ "2" ] } }, "links": { "specifications.maturity": { "type": "maturities", "href": "https://browsercompat.org/api/v1/maturities/{specifications.maturity}" }, "specifications.sections": { "type": "sections", "href": "https://browsercompat.org/api/v1/sections/{specifications.sections}" }, "specifications.history_current": { "type": "historical_specifications", "href": "https://browsercompat.org/api/v1/historical_specifications/{specifications.history_current}" }, "specifications.history": { "type": "historical_specifications", "href": "https://browsercompat.org/api/v1/historical_specifications/{specifications.history}" } } } ``` ##### [Sections](#id17)[¶](#sections) A **section** refers to a specific area of a [specification](#specifications) document. The **section** representation includes: * **attributes** + **id** *(server selected)* - Database ID + **number** *(optional, localized)* - The section number + **name** *(localized)* - Section name + **subpath** *(localized, optional)* - A subpage (possibly with an #anchor) to get to the subsection in the doc. Can be empty string. + **note** *(localized, optional)* - Notes for this section * **links** + **specification** *(one)* - The [specification](#specifications). Can be changed by the user. 
+ **features** *(many)* - The associated [features](#features). In ID order, changes are ignored. + **history_current** *(one)* - Current [historical_sections](history.html#historical-sections). Can be changed to a valid **history** to revert to that version. + **history** *(many)* - Associated [historical_sections](history.html#historical-sections) in time order (most recent first). Changes are ignored. To get a single **section**: ``` GET /api/v1/sections/3 HTTP/1.1 Host: browsercompat.org Accept: application/vnd.api+json ``` A sample response is: ``` HTTP/1.1 200 OK Content-Type: application/vnd.api+json ``` ``` { "sections": { "id": "3", "number": { "en": "16" }, "name": { "en": "The float property" }, "subpath": { "en": "#the-float-property" }, "note": { "en": "Lots of new values, not all clearly defined yet. Any differences in behavior unrelated to new features are expected to be unintentional; please report." }, "links": { "specification": "3", "features": [ "5" ], "history_current": "3", "history": [ "3" ] } }, "links": { "sections.specification": { "type": "specifications", "href": "https://browsercompat.org/api/v1/specifications/{sections.specification}" }, "sections.features": { "type": "features", "href": "https://browsercompat.org/api/v1/features/{sections.features}" }, "sections.history_current": { "type": "historical_sections", "href": "https://browsercompat.org/api/v1/historical_sections/{sections.history_current}" }, "sections.history": { "type": "historical_sections", "href": "https://browsercompat.org/api/v1/historical_sections/{sections.history}" } } } ``` ##### [Maturities](#id18)[¶](#maturities) A **maturity** refers to the maturity of a [specification](#specifications) document. The **maturity** representation includes: * **attributes** + **id** *(server selected)* - Database ID + **slug** - A human-friendly identifier for this maturity. 
When applicable, it matches the key in the KumaScript macro [Spec2](https://developer.mozilla.org/en-US/docs/Template:Spec2) + **name** *(localized)* - Status name * **links** + **specifications** *(many)* - Associated [specifications](#specifications). In ID order, changes are ignored. + **history_current** *(one)* - Current [historical_maturities](history.html#historical-maturities). Can be changed to a valid **history** to revert to that version. + **history** *(many)* - Associated [historical_maturities](history.html#historical-maturities) in time order (most recent first). Changes are ignored. To get a single **maturity**: ``` GET /api/v1/maturities/1 HTTP/1.1 Host: browsercompat.org Accept: application/vnd.api+json ``` A sample response is: ``` HTTP/1.1 200 OK Content-Type: application/vnd.api+json ``` ``` { "maturities": { "id": "1", "slug": "REC", "name": { "en": "Recommendation", "de": "Empfehlung", "ja": "勧告", "ru": "Рекомендация" }, "links": { "specifications": [ "1", "2" ], "history_current": "1", "history": [ "1" ] } }, "links": { "maturities.specifications": { "type": "specifications", "href": "https://browsercompat.org/api/v1/specifications/{maturities.specifications}" }, "maturities.history_current": { "type": "historical_maturities", "href": "https://browsercompat.org/api/v1/historical_maturities/{maturities.history_current}" }, "maturities.history": { "type": "historical_maturities", "href": "https://browsercompat.org/api/v1/historical_maturities/{maturities.history}" } } } ``` #### Change Control Resources[¶](#change-control-resources) Change Control Resources help manage changes to resources. ##### Users[¶](#users) A **user** represents a person or process that creates, changes, or deletes a resource.
The representation includes: * **attributes** + **id** *(server selected)* - Database ID + **username** - The user’s email or ID + **created** *(server selected)* - Time that the account was created, in [ISO 8601](http://en.wikipedia.org/wiki/ISO_8601) format. + **agreement** - The version of the contribution agreement the user has accepted. “0” for not agreed, “1” for first version, etc. + **permissions** - A list of permissions. Permissions include `"change-resource"` (add or change any resource except [users](#users) or history resources), `"delete-resource"` (delete any resource), and `"import-mdn"` (set up import of an MDN page) * **links** + **changesets** *(many)* - Associated [changesets](#changesets), in ID order, changes are ignored. To get a single **user** representation: ``` GET /api/v1/users/1 HTTP/1.1 Host: browsercompat.org Accept: application/vnd.api+json ``` A sample response is: ``` HTTP/1.1 200 OK Content-Type: application/vnd.api+json ``` ``` { "users": { "id": "1", "username": "user", "created": "2015-04-20T18:06:48.567514Z", "agreement": 0, "permissions": [ "change-resource", "delete-resource" ], "links": { "changesets": [ "1" ] } }, "links": { "users.changesets": { "type": "changesets", "href": "https://browsercompat.org/api/v1/changesets/{users.changesets}" } } } ``` If a client is authenticated, the logged-in user’s account can be retrieved with: ``` GET /api/v1/users/me HTTP/1.1 Host: browsercompat.org Accept: application/vnd.api+json ``` ##### Changesets[¶](#changesets) A **changeset** collects history resources into a logical unit, allowing for faster reversions and better history display. The **changeset** can be auto-created through a `POST`, `PUT`, or `DELETE` to a resource, or it can be created independently and specified by adding the `changeset=<ID>` URI parameter (i.e., `PUT /browsers/15?changeset=73`).
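Attaching a write request to an existing changeset is just a query parameter (a minimal sketch; `with_changeset` is a hypothetical client helper):

```python
from urllib.parse import urlencode

def with_changeset(path, changeset_id=None):
    # With no ID the server auto-creates a changeset; pass an ID to reuse one.
    if changeset_id is None:
        return path
    return path + "?" + urlencode({"changeset": changeset_id})

uri = with_changeset("/api/v1/browsers/15", 73)
```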
The representation includes: * **attributes** + **id** *(server selected)* - Database ID + **created** *(server selected)* - When the changeset was created, in [ISO 8601](http://en.wikipedia.org/wiki/ISO_8601) format. + **modified** *(server selected)* - When the changeset was last modified, in [ISO 8601](http://en.wikipedia.org/wiki/ISO_8601) format. + **target_resource_type** *(write-once, optional)* - The name of the primary resource for this changeset, for example “browsers”, “versions”, etc. + **target_resource_id** *(write-once, optional)* - The ID of the primary resource for this changeset. + **closed** - True if the changeset is closed to new changes. Auto-created changesets are auto-closed, and cache invalidation is delayed until manually created changesets are closed. * **links** + **user** *(one)* - The user who initiated this changeset; can not be changed. + **historical_browsers** *(many)* - Associated [historical_browsers](history.html#historical-browsers), in ID order, changes are ignored. + **historical_features** *(many)* - Associated [historical_features](history.html#historical-features), in ID order, changes are ignored. + **historical_maturities** *(many)* - Associated [historical_maturities](history.html#historical-maturities), in ID order, changes are ignored. + **historical_sections** *(many)* - Associated [historical_sections](history.html#historical-sections), in ID order, changes are ignored. + **historical_specifications** *(many)* - Associated [historical_specifications](history.html#historical-specifications), in ID order, changes are ignored. + **historical_supports** *(many)* - Associated [historical_supports](history.html#historical-supports), in ID order, changes are ignored. + **historical_versions** *(many)* - Associated [historical_versions](history.html#historical-versions), in ID order, changes are ignored.
To get a single **changeset** representation: ``` GET /api/v1/changesets/2 HTTP/1.1 Host: browsercompat.org Accept: application/vnd.api+json ``` A sample response is: ``` HTTP/1.1 200 OK Content-Type: application/vnd.api+json ``` ``` { "changesets": { "id": "2", "created": "2015-04-20T18:22:47.046692Z", "modified": "2015-04-20T18:22:47.056433Z", "closed": true, "target_resource_type": null, "target_resource_id": null, "links": { "user": "1", "historical_browsers": [ "16" ], "historical_features": [], "historical_maturities": [], "historical_sections": [], "historical_specifications": [], "historical_supports": [], "historical_versions": [] } }, "links": { "changesets.user": { "type": "users", "href": "https://browsercompat.org/api/v1/users/{changesets.user}" }, "changesets.historical_browsers": { "type": "historical_browsers", "href": "https://browsercompat.org/api/v1/historical_browsers/{changesets.historical_browsers}" }, "changesets.historical_features": { "type": "historical_features", "href": "https://browsercompat.org/api/v1/historical_features/{changesets.historical_features}" }, "changesets.historical_maturities": { "type": "historical_maturities", "href": "https://browsercompat.org/api/v1/historical_maturities/{changesets.historical_maturities}" }, "changesets.historical_sections": { "type": "historical_sections", "href": "https://browsercompat.org/api/v1/historical_sections/{changesets.historical_sections}" }, "changesets.historical_specifications": { "type": "historical_specifications", "href": "https://browsercompat.org/api/v1/historical_specifications/{changesets.historical_specifications}" }, "changesets.historical_supports": { "type": "historical_supports", "href": "https://browsercompat.org/api/v1/historical_supports/{changesets.historical_supports}" }, "changesets.historical_versions": { "type": "historical_versions", "href": "https://browsercompat.org/api/v1/historical_versions/{changesets.historical_versions}" } } } ``` #### History Resources[¶](#history-resources) History Resources are created when a Resource is created, updated, or deleted. By navigating the history chain, a caller can see the changes of a resource over time. All history representations are similar, so one example should be enough to determine the pattern. ##### Historical Browsers[¶](#historical-browsers) A **historical_browser** resource represents the state of a [browser](resources.html#browsers) at a point in time, and who is responsible for that state. The representation includes: * **attributes** + **id** *(server selected)* - Database ID + **date** *(server selected)* - The time of this change in [ISO 8601](http://en.wikipedia.org/wiki/ISO_8601) + **event** *(server selected)* - The type of event, one of `"created"`, `"changed"`, or `"deleted"` + **browsers** - The **browsers** representation at this point in time * **links** + **browser** *(one)* - Associated [browser](resources.html#browsers), can not be changed + **changeset** *(one)* - Associated [changeset](change-control#changesets), can not be changed.
To get a single **historical_browsers** representation: ``` GET /api/v1/historical_browsers/6 HTTP/1.1 Host: browsercompat.org Accept: application/vnd.api+json ``` A sample response is: ``` HTTP/1.1 200 OK Content-Type: application/vnd.api+json ``` ``` { "historical_browsers": { "id": "6", "date": "2015-04-20T18:44:09.905824Z", "event": "created", "browsers": { "id": "6", "slug": "firefox", "name": { "en": "Firefox" }, "note": null, "links": { "history_current": "6" } }, "links": { "changeset": "1", "browser": "6" } }, "links": { "historical_browsers.changeset": { "type": "changesets", "href": "https://browsercompat.org/api/v1/changesets/{historical_browsers.changeset}" }, "historical_browsers.browser": { "type": "browsers", "href": "https://browsercompat.org/api/v1/browsers/{historical_browsers.browser}" } } } ``` ##### Historical Versions[¶](#historical-versions) A **historical_versions** resource represents the state of a [version](resources.html#versions) at a point in time, and who is responsible for that representation. See [historical_browsers](#historical-browsers) and [versions](resources.html#versions) for an idea of the representation. ##### Historical Features[¶](#historical-features) A **historical_features** resource represents the state of a [feature](resources.html#features) at a point in time, and who is responsible for that representation. See [historical_browsers](#historical-browsers) and [features](resources.html#features) for an idea of the representation. ##### Historical Sections[¶](#historical-sections) A **historical_sections** resource represents the state of a [section](resources.html#sections) at a point in time, and who is responsible for that representation. See [historical_browsers](#historical-browsers) and [sections](resources.html#sections) for an idea of the representation.
##### Historical Specifications[¶](#historical-specifications) A **historical_specifications** resource represents the state of a [specification](resources.html#specifications) at a point in time, and who is responsible for that representation. See [historical_browsers](#historical-browsers) and [specifications](resources.html#specifications) for an idea of the representation. ##### Historical Supports[¶](#historical-supports) A **historical_supports** resource represents the state of a [support](resources.html#supports) at a point in time, and who is responsible for that representation. See [historical_browsers](#historical-browsers) and [supports](resources.html#supports) for an idea of the representation. ##### Historical Maturities[¶](#historical-maturities) A **historical_maturities** resource represents the state of a [maturity](resources.html#maturities) at a point in time, and who is responsible for that representation. See [historical_browsers](#historical-browsers) and [maturities](resources.html#maturities) for an idea of the representation. #### Views[¶](#views) A **View** is a combination of resources for a particular presentation. It is suitable for anonymous viewing of content. It is possible that views are unnecessary, and could be constructed by supporting optional parts of the JSON API spec, such as [Inclusion of Linked Resources](http://jsonapi.org/format/#fetching-includes). These views are written as if they are aliases for a fully-fleshed implementation of JSON API. ##### View a Feature[¶](#view-a-feature) This view collects the data for a [feature](resources.html#features), including the related resources needed to display it on MDN.
Here is a simple example, the view for the CSS property [float](https://developer.mozilla.org/en-US/docs/Web/CSS/float): ``` GET /api/v1/view_features/5 HTTP/1.1 Host: browsercompat.org Accept: application/vnd.api+json ``` A sample response is: ``` HTTP/1.1 200 OK Content-Type: application/vnd.api+json ``` ``` { "features": { "id": "5", "slug": "web-css-float", "mdn_uri": { "en": "https://developer.mozilla.org/en-US/docs/Web/CSS/float", "de": "https://developer.mozilla.org/de/docs/Web/CSS/float", "es": "https://developer.mozilla.org/es/docs/Web/CSS/float", "ja": "https://developer.mozilla.org/ja/docs/Web/CSS/float", "ru": "https://developer.mozilla.org/ru/docs/Web/CSS/float" }, "experimental": false, "standardized": true, "stable": true, "obsolete": false, "name": "float", "links": { "sections": [ "1", "2", "3" ], "supports": [], "parent": "2", "children": [ "6" ], "history_current": "5", "history": [ "5" ] } }, "links": { "features.sections": { "type": "sections", "href": "https://browsercompat.org/api/v1/sections/{features.sections}" }, "features.supports": { "type": "supports", "href": "https://browsercompat.org/api/v1/supports/{features.supports}" }, "features.parent": { "type": "features", "href": "https://browsercompat.org/api/v1/features/{features.parent}" }, "features.children": { "type": "features", "href": "https://browsercompat.org/api/v1/features/{features.children}" }, "features.history_current": { "type": "historical_features", "href": "https://browsercompat.org/api/v1/historical_features/{features.history_current}" }, "features.history": { "type": "historical_features", "href": "https://browsercompat.org/api/v1/historical_features/{features.history}" }, "browsers.history": { "type": "historical_browsers", "href": "https://browsercompat.org/api/v1/historical_browsers/{browsers.history}" }, "browsers.history_current": { "type": "historical_browsers", "href": "https://browsercompat.org/api/v1/historical_browsers/{browsers.history_current}" }, 
"versions.browser": { "type": "browsers", "href": "https://browsercompat.org/api/v1/browsers/{versions.browser}" }, "versions.history": { "type": "historical_versions", "href": "https://browsercompat.org/api/v1/historical_versions/{versions.history}" }, "versions.history_current": { "type": "historical_versions", "href": "https://browsercompat.org/api/v1/historical_versions/{versions.history_current}" }, "supports.version": { "type": "versions", "href": "https://browsercompat.org/api/v1/versions/{supports.version}" }, "supports.feature": { "type": "features", "href": "https://browsercompat.org/api/v1/features/{supports.feature}" }, "supports.history_current": { "type": "historical_supports", "href": "https://browsercompat.org/api/v1/historical_supports/{supports.history_current}" }, "supports.history": { "type": "historical_supports", "href": "https://browsercompat.org/api/v1/historical_supports/{supports.history}" }, "maturities.history_current": { "type": "historical_maturities", "href": "https://browsercompat.org/api/v1/historical_maturities/{maturities.history_current}" }, "maturities.history": { "type": "historical_maturities", "href": "https://browsercompat.org/api/v1/historical_maturities/{maturities.history}" }, "specifications.maturity": { "type": "maturities", "href": "https://browsercompat.org/api/v1/maturities/{specifications.maturity}" }, "specifications.history_current": { "type": "historical_specifications", "href": "https://browsercompat.org/api/v1/historical_specifications/{specifications.history_current}" }, "specifications.history": { "type": "historical_specifications", "href": "https://browsercompat.org/api/v1/historical_specifications/{specifications.history}" }, "sections.specification": { "type": "specifications", "href": "https://browsercompat.org/api/v1/specifications/{sections.specification}" }, "sections.history_current": { "type": "historical_sections", "href": 
"https://browsercompat.org/api/v1/historical_sections/{sections.history_current}" }, "sections.history": { "type": "historical_sections", "href": "https://browsercompat.org/api/v1/historical_sections/{sections.history}" } }, "linked": { "browsers": [ { "id": "1", "slug": "android", "name": { "en": "Android" }, "note": null, "links": { "history": [ "1" ], "history_current": "1" } }, { "id": "2", "slug": "blackberry", "name": { "en": "BlackBerry" }, "note": null, "links": { "history": [ "2" ], "history_current": "2" } }, { "id": "3", "slug": "chrome", "name": { "en": "Chrome" }, "note": null, "links": { "history": [ "3" ], "history_current": "3" } }, { "id": "6", "slug": "firefox", "name": { "en": "Firefox" }, "note": null, "links": { "history": [ "6" ], "history_current": "6" } }, { "id": "7", "slug": "firefox_mobile", "name": { "en": "Firefox Mobile" }, "note": null, "links": { "history": [ "7" ], "history_current": "7" } }, { "id": "9", "slug": "ie_mobile", "name": { "en": "IE Mobile" }, "note": null, "links": { "history": [ "9" ], "history_current": "9" } }, { "id": "10", "slug": "internet_explorer", "name": { "en": "Internet Explorer" }, "note": null, "links": { "history": [ "22", "21", "20", "19", "10" ], "history_current": "22" } }, { "id": "11", "slug": "opera", "name": { "en": "Opera" }, "note": null, "links": { "history": [ "11" ], "history_current": "11" } }, { "id": "14", "slug": "safari", "name": { "en": "Safari" }, "note": null, "links": { "history": [ "14" ], "history_current": "14" } }, { "id": "15", "slug": "safari_mobile", "name": { "en": "Safari Mobile" }, "note": null, "links": { "history": [ "15" ], "history_current": "15" } } ], "versions": [ { "id": "2", "version": "1.0", "release_day": null, "retirement_day": null, "status": "unknown", "release_notes_uri": null, "note": null, "order": 1, "links": { "browser": "1", "history": [ "2" ], "history_current": "2" } }, { "id": "4", "version": "current", "release_day": null, "retirement_day": null, 
"status": "current", "release_notes_uri": null, "note": null, "order": 0, "links": { "browser": "2", "history": [ "4" ], "history_current": "4" } }, { "id": "8", "version": "1.0", "release_day": null, "retirement_day": null, "status": "unknown", "release_notes_uri": null, "note": null, "order": 1, "links": { "browser": "3", "history": [ "8" ], "history_current": "8" } }, { "id": "15", "version": "1.0", "release_day": "2004-11-09", "retirement_day": "2005-11-29", "status": "retired", "release_notes_uri": null, "note": null, "order": 1, "links": { "browser": "6", "history": [ "15" ], "history_current": "15" } }, { "id": "20", "version": "1.0", "release_day": null, "retirement_day": null, "status": "unknown", "release_notes_uri": null, "note": null, "order": 1, "links": { "browser": "7", "history": [ "20" ], "history_current": "20" } }, { "id": "27", "version": "6.0", "release_day": null, "retirement_day": null, "status": "unknown", "release_notes_uri": null, "note": null, "order": 2, "links": { "browser": "9", "history": [ "27" ], "history_current": "27" } }, { "id": "29", "version": "4.0", "release_day": null, "retirement_day": null, "status": "unknown", "release_notes_uri": null, "note": null, "order": 1, "links": { "browser": "10", "history": [ "29" ], "history_current": "29" } }, { "id": "34", "version": "7.0", "release_day": null, "retirement_day": null, "status": "unknown", "release_notes_uri": null, "note": null, "order": 1, "links": { "browser": "11", "history": [ "34" ], "history_current": "34" } }, { "id": "42", "version": "1.0", "release_day": null, "retirement_day": null, "status": "unknown", "release_notes_uri": null, "note": null, "order": 1, "links": { "browser": "14", "history": [ "42" ], "history_current": "42" } }, { "id": "44", "version": "5.1", "release_day": null, "retirement_day": null, "status": "unknown", "release_notes_uri": null, "note": null, "order": 3, "links": { "browser": "14", "history": [ "44" ], "history_current": "44" } }, { "id": 
"46", "version": "1.0", "release_day": null, "retirement_day": null, "status": "unknown", "release_notes_uri": null, "note": null, "order": 1, "links": { "browser": "15", "history": [ "46" ], "history_current": "46" } } ], "supports": [ { "id": "1", "support": "yes", "prefix": null, "prefix_mandatory": false, "alternate_name": null, "alternate_mandatory": false, "requires_config": null, "default_config": null, "protected": false, "note": null, "links": { "version": "2", "feature": "6", "history_current": "1", "history": [ "1" ] } }, { "id": "2", "support": "yes", "prefix": null, "prefix_mandatory": false, "alternate_name": null, "alternate_mandatory": false, "requires_config": null, "default_config": null, "protected": false, "note": null, "links": { "version": "8", "feature": "6", "history_current": "2", "history": [ "2" ] } }, { "id": "3", "support": "yes", "prefix": null, "prefix_mandatory": false, "alternate_name": null, "alternate_mandatory": false, "requires_config": null, "default_config": null, "protected": false, "note": null, "links": { "version": "15", "feature": "6", "history_current": "3", "history": [ "3" ] } }, { "id": "4", "support": "yes", "prefix": null, "prefix_mandatory": false, "alternate_name": null, "alternate_mandatory": false, "requires_config": null, "default_config": null, "protected": false, "note": null, "links": { "version": "20", "feature": "6", "history_current": "4", "history": [ "4" ] } }, { "id": "5", "support": "yes", "prefix": null, "prefix_mandatory": false, "alternate_name": null, "alternate_mandatory": false, "requires_config": null, "default_config": null, "protected": false, "note": null, "links": { "version": "27", "feature": "6", "history_current": "5", "history": [ "5" ] } }, { "id": "6", "support": "yes", "prefix": null, "prefix_mandatory": false, "alternate_name": null, "alternate_mandatory": false, "requires_config": null, "default_config": null, "protected": false, "note": null, "links": { "version": "29", "feature": 
"6", "history_current": "6", "history": [ "6" ] } }, { "id": "7", "support": "yes", "prefix": null, "prefix_mandatory": false, "alternate_name": null, "alternate_mandatory": false, "requires_config": null, "default_config": null, "protected": false, "note": null, "links": { "version": "34", "feature": "6", "history_current": "7", "history": [ "7" ] } }, { "id": "8", "support": "yes", "prefix": null, "prefix_mandatory": false, "alternate_name": null, "alternate_mandatory": false, "requires_config": null, "default_config": null, "protected": false, "note": null, "links": { "version": "42", "feature": "6", "history_current": "8", "history": [ "8" ] } }, { "id": "9", "support": "yes", "prefix": null, "prefix_mandatory": false, "alternate_name": null, "alternate_mandatory": false, "requires_config": null, "default_config": null, "protected": false, "note": null, "links": { "version": "44", "feature": "6", "history_current": "9", "history": [ "9" ] } }, { "id": "10", "support": "yes", "prefix": null, "prefix_mandatory": false, "alternate_name": null, "alternate_mandatory": false, "requires_config": null, "default_config": null, "protected": false, "note": null, "links": { "version": "46", "feature": "6", "history_current": "10", "history": [ "10" ] } }, { "id": "27", "support": "yes", "prefix": null, "prefix_mandatory": false, "alternate_name": null, "alternate_mandatory": false, "requires_config": null, "default_config": null, "protected": false, "note": null, "links": { "version": "4", "feature": "6", "history_current": "27", "history": [ "27" ] } } ], "maturities": [ { "id": "1", "slug": "REC", "name": { "en": "Recommendation", "de": "Empfehlung", "ja": "勧告", "ru": "Рекомендация" }, "links": { "history_current": "1", "history": [ "1" ] } }, { "id": "2", "slug": "WD", "name": { "en": "Working Draft", "de": "Arbeitsentwurf", "ja": "草案", "ru": "Рабочий черновик" }, "links": { "history_current": "2", "history": [ "2" ] } } ], "specifications": [ { "id": "1", "slug":
"css1", "mdn_key": "CSS1", "name": { "en": "CSS Level&nbsp;1" }, "uri": { "en": "http://www.w3.org/TR/CSS1/" }, "links": { "maturity": "1", "history_current": "1", "history": [ "1" ] } }, { "id": "2", "slug": "css2_1", "mdn_key": "CSS2.1", "name": { "en": "CSS Level&nbsp;2 (Revision&nbsp;1)" }, "uri": { "en": "http://www.w3.org/TR/CSS2/" }, "links": { "maturity": "1", "history_current": "2", "history": [ "2" ] } }, { "id": "3", "slug": "css3_box", "mdn_key": "CSS3 Box", "name": { "en": "CSS Basic Box Model" }, "uri": { "en": "http://dev.w3.org/csswg/css3-box/" }, "links": { "maturity": "2", "history_current": "3", "history": [ "3" ] } } ], "sections": [ { "id": "1", "number": { "en": "5.5.25" }, "name": { "en": "'float'" }, "subpath": { "en": "#float" }, "note": { "en": "Initial definition." }, "links": { "specification": "1", "history_current": "1", "history": [ "1" ] } }, { "id": "2", "number": { "en": "9.5.1" }, "name": { "en": "Positioning the float: the 'float' property" }, "subpath": { "en": "visuren.html#float-position" }, "note": { "en": "No change." }, "links": { "specification": "2", "history_current": "2", "history": [ "2" ] } }, { "id": "3", "number": { "en": "16" }, "name": { "en": "The float property" }, "subpath": { "en": "#the-float-property" }, "note": { "en": "Lots of new values, not all clearly defined yet. Any differences in behavior unrelated to new features are expected to be unintentional; please report." 
}, "links": { "specification": "3", "history_current": "3", "history": [ "3" ] } } ], "features": [ { "id": "6", "slug": "web-css-float_basic-support", "mdn_uri": null, "experimental": false, "standardized": true, "stable": true, "obsolete": false, "name": { "en": "Basic Support", "ja": "基本サポート" }, "links": { "sections": [], "supports": [ "1", "2", "3", "4", "5", "6", "7", "8", "9", "10", "27" ], "parent": "5", "children": [], "history_current": "6", "history": [ "6" ] } } ] }, "meta": { "compat_table": { "supports": { "5": {}, "6": { "1": [ "1" ], "2": [ "27" ], "3": [ "2" ], "6": [ "3" ], "7": [ "4" ], "9": [ "5" ], "10": [ "6" ], "11": [ "7" ], "14": [ "8" ], "15": [ "10" ] } }, "tabs": [ { "name": { "en": "Desktop Browsers" }, "browsers": [ "3", "6", "10", "11", "14" ] }, { "name": { "en": "Mobile Browsers" }, "browsers": [ "1", "7", "9", "15", "2" ] } ], "pagination": { "linked.features": { "previous": null, "next": null, "count": 1 } }, "languages": [ "en", "de", "es", "ja", "ru" ], "notes": {} } } } ``` The process for using this representation is: 1. Parse into an in-memory object store, 2. Create the “Specifications” section: 1. Add the `Specifications` header 2. Create an HTML table with a header row “Specification”, “Status”, “Comment” 3. For each id in features.links.sections (`["1", "2", "3"]`): * Add the first column: a link to specifications.uri.(lang or en) + sections.subpath.(lang or en), with link text specifications.name.(lang or en), with title based on sections.name.(lang or en) or feature.name.(lang or en). * Add the second column: A span with class “spec-” + maturities.slug, and the text maturities.name.(lang or en). * Add the third column: sections.note.(lang or en), or an empty string 4. Close the table, and add an edit button. 3. Create the Browser Compatibility section: 1. Add the “Browser compatibility” header 2. For each item in meta.compat_table.tabs, create a table with the proper name (“Desktop Browsers”, “Mobile Browsers”) 3.
For each browser id in meta.compat_table.tabs.browsers, add a column with the translated browser name. 4. For each feature in features.features: * Add the first column: the feature name. If it is a string, then wrap in `<code>`. Otherwise, use the best translation of feature.name, in a `lang=(lang)` block. * Add any feature flags, such as an obsolete or experimental icon, based on the feature flags. * For each browser id in meta.compat_table.tabs.browsers: + Get the important support IDs from meta.compat_table.supports.<`feature ID`>.<`browser ID`> + If null, then display “?” + If just one, display “<`version`>”, or “<`support`>”, depending on the defined attributes + If multiple, display as subcells + Add prefixes, alternate names, config, and notes link as appropriate 5. Close each table, add an edit button 6. Add notes for displayed supports This may be done by including the JSON in the page as sent over the wire, or loaded asynchronously, with the tables built after initial page load. This can also be used by a [“caniuse” table layout](https://wiki.mozilla.org/MDN/Development/CompatibilityTables/Data_Requirements#1._CanIUse_table_layout) by ignoring the meta section and displaying all the included data. This will require more client-side processing to generate, or additional data in the `meta` section. ###### Updating Views with Changesets[¶](#updating-views-with-changesets) Updating the page requires a sequence of requests. For example, if a user wants to change Chrome support for `<address>` from an unknown version to version 1, you’ll have to create the [version](resources.html#versions), then add the [support](resources.html#supports) for that version.
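This sequence can be sketched as a client helper. The helper names below (`open_changeset`, `create_version`, `create_support`) are invented for illustration; instead of performing real HTTP calls, each returns the (method, path, body) triple a client would send, using the IDs from the worked example that follows:

```python
import json

# Sketch of the changeset-based update sequence. Each helper (names invented
# for illustration) returns the (method, path, body) triple a client would
# send; real requests would also carry the auth headers shown in the text.
def open_changeset(resource_type, resource_id):
    return ("POST", "/api/v1/changesets",
            {"changesets": {"target_resource_type": resource_type,
                            "target_resource_id": resource_id}})

def create_version(changeset_id, browser_id, version, status):
    return ("POST", "/api/v1/versions?changeset=%s" % changeset_id,
            {"versions": {"version": version, "status": status,
                          "links": {"browser": browser_id}}})

def create_support(changeset_id, version_id, feature_id):
    return ("POST", "/api/v1/supports?changeset=%s" % changeset_id,
            {"supports": {"links": {"version": version_id,
                                    "feature": feature_id}}})

# The sequence from the worked example: open changeset 36 against feature 5,
# create version "2.0" of browser 3, then record support for feature 6 on
# the newly created version (ID 50, taken from the server's response).
steps = [
    open_changeset("features", "5"),
    create_version("36", "3", "2.0", "retired"),
    create_support("36", "50", "6"),
]
for method, path, body in steps:
    print(method, path, json.dumps(body))
```

Note the dependency between steps: the changeset ID and the new version ID come from earlier responses, so a real client must issue the requests in order.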
The first step is to create a [changeset](change-control#changeset) as an authenticated user: ``` POST /api/v1/changesets HTTP/1.1 Host: browsercompat.org Accept: application/vnd.api+json Content-Length: 107 Content-Type: application/vnd.api+json Cookie: csrftoken=<KEY>; sessionid=wurexa2wq416ftlvd5plesngwa28183h X-Csrftoken: p<KEY> ``` ``` { "changesets": { "target_resource_type": "features", "target_resource_id": "5" } } ``` A sample response is: ``` HTTP/1.1 201 CREATED Content-Type: application/vnd.api+json ``` ``` { "changesets": { "id": "36", "created": "2015-04-20T20:36:06.794827Z", "modified": "2015-04-20T20:36:06.795315Z", "closed": false, "target_resource_type": "features", "target_resource_id": 5, "links": { "user": "1", "historical_browsers": [], "historical_features": [], "historical_maturities": [], "historical_sections": [], "historical_specifications": [], "historical_supports": [], "historical_versions": [] } }, "links": { "changesets.user": { "type": "users", "href": "https://browsercompat.org/api/v1/users/{changesets.user}" }, "changesets.historical_browsers": { "type": "historical_browsers", "href": "https://browsercompat.org/api/v1/historical_browsers/{changesets.historical_browsers}" }, "changesets.historical_features": { "type": "historical_features", "href": "https://browsercompat.org/api/v1/historical_features/{changesets.historical_features}" }, "changesets.historical_maturities": { "type": "historical_maturities", "href": "https://browsercompat.org/api/v1/historical_maturities/{changesets.historical_maturities}" }, "changesets.historical_sections": { "type": "historical_sections", "href": "https://browsercompat.org/api/v1/historical_sections/{changesets.historical_sections}" }, "changesets.historical_specifications": { "type": "historical_specifications", "href": "https://browsercompat.org/api/v1/historical_specifications/{changesets.historical_specifications}" }, "changesets.historical_supports": { "type": "historical_supports", "href": 
"https://browsercompat.org/api/v1/historical_supports/{changesets.historical_supports}" }, "changesets.historical_versions": { "type": "historical_versions", "href": "https://browsercompat.org/api/v1/historical_versions/{changesets.historical_versions}" } } } ``` Next, use the [changeset](change-control#changeset) ID when creating the [version](resources.html#versions): ``` POST /api/v1/versions?changeset=36 HTTP/1.1 Host: browsercompat.org Accept: application/vnd.api+json Content-Length: 138 Content-Type: application/vnd.api+json Cookie: csrftoken=p<KEY>n; sessionid=wurexa2wq416ftlvd5plesngwa28183h X-Csrftoken: p<KEY>n ``` ``` { "versions": { "version": "2.0", "status": "retired", "links": { "browser": "3" } } } ``` A sample response is: ``` HTTP/1.1 201 CREATED Content-Type: application/vnd.api+json ``` ``` { "versions": { "id": "50", "version": "2.0", "release_day": null, "retirement_day": null, "status": "retired", "release_notes_uri": null, "note": null, "order": 3, "links": { "browser": "3", "supports": [], "history": [ "53" ], "history_current": "53" } }, "links": { "versions.browser": { "type": "browsers", "href": "https://browsercompat.org/api/v1/browsers/{versions.browser}" }, "versions.supports": { "type": "supports", "href": "https://browsercompat.org/api/v1/supports/{versions.supports}" }, "versions.history": { "type": "historical_versions", "href": "https://browsercompat.org/api/v1/historical_versions/{versions.history}" }, "versions.history_current": { "type": "historical_versions", "href": "https://browsercompat.org/api/v1/historical_versions/{versions.history_current}" } } } ``` Finally, create the [support](resources.html#supports): ``` POST /api/v1/supports?changeset=36 HTTP/1.1 Host: browsercompat.org Accept: application/vnd.api+json Content-Length: 112 Content-Type: application/vnd.api+json Cookie: csrftoken=<KEY>; sessionid=wurexa2wq416ftlvd5plesngwa28183h X-Csrftoken: p<KEY> ``` ``` { "supports": { "links": { "version": "50", "feature": "6" } 
} } ``` A sample response is: ``` HTTP/1.1 201 CREATED Content-Type: application/vnd.api+json ``` ``` { "supports": { "id": "29", "support": "yes", "prefix": null, "prefix_mandatory": false, "alternate_name": null, "alternate_mandatory": false, "requires_config": null, "default_config": null, "protected": false, "note": null, "links": { "version": "50", "feature": "6", "history_current": "32", "history": [ "32" ] } }, "links": { "supports.version": { "type": "versions", "href": "https://browsercompat.org/api/v1/versions/{supports.version}" }, "supports.feature": { "type": "features", "href": "https://browsercompat.org/api/v1/features/{supports.feature}" }, "supports.history_current": { "type": "historical_supports", "href": "https://browsercompat.org/api/v1/historical_supports/{supports.history_current}" }, "supports.history": { "type": "historical_supports", "href": "https://browsercompat.org/api/v1/historical_supports/{supports.history}" } } } ``` The [historical_versions](history.html#historical-versions) and [historical_supports](history.html#historical-supports) resources will both refer to [changeset](change-control#changeset) 36, and this [changeset](change-control#changeset) is linked to [feature](resources.html#features) 5, despite the fact that no changes were made to the [feature](resources.html#features). This will facilitate displaying a history of the compatibility tables, for the purpose of reviewing changes and reverting vandalism. ###### Updating View with PUT[¶](#updating-view-with-put) view_features supports PUT for bulk updates of support data. 
Here is a simple example that adds a new subfeature without support: ``` PUT /api/v1/view_features/html-element-address HTTP/1.1 Host: browsersupports.org Content-Type: application/vnd.api+json Authorization: Bearer mF_9.B5f-4.1JqM ``` ``` { "features": { "id": "816", "slug": "html-element-address", "mdn_uri": { "en": "https://developer.mozilla.org/en-US/docs/Web/HTML/Element/address" }, "experimental": false, "standardized": true, "stable": true, "obsolete": false, "name": "address", "links": { "sections": ["746", "421", "70"], "supports": [], "parent": "800", "children": ["191"], "history_current": "216", "history": ["216"] } }, "linked": { "features": [ { "id": "_New Subfeature", "slug": "html-address-new-subfeature", "name": { "en": "New Subfeature" }, "links": { "parent": "816" } } ] } } ``` The response is the feature view with new and updated items, or an error response. This is a trivial use case, which would be better implemented by creating the [feature](resources.html#features) directly, but it can be extended to bulk updates of existing feature views, or for first-time importing of subfeatures and support data. It has some quirks: * New items should be identified with an ID starting with an underscore (`_`). Relations to new items should use the underscored IDs. * Only [feature](resources.html#features), [support](resources.html#supports), and [section](resources.html#sections) resources can be added or updated. Features must be the target feature or a descendant, and supports and sections are restricted to those features. * Deletions are not supported. * Other resources ([browsers](resources.html#browsers), [versions](resources.html#versions), etc) can not be added or changed. This includes adding links to new resources. Once the MDN import is complete, this PUT interface will be deprecated in favor of direct POST and PUT to the standard resource API. 
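Before issuing the PUT, a client can check the underscored-ID convention described above. This sketch (the function name `new_item_ids` is invented for illustration) just collects the placeholder IDs from the `linked` section so relations to new items can be verified:

```python
# Illustrative check for the PUT quirks above: new items in the "linked"
# section must use IDs starting with an underscore; collect those IDs so
# the client can verify its relations before sending the request.
def new_item_ids(payload):
    ids = set()
    for resources in payload.get("linked", {}).values():
        for item in resources:
            if item.get("id", "").startswith("_"):
                ids.add(item["id"])
    return ids

# Trimmed version of the view_features PUT body shown above.
payload = {
    "features": {"id": "816", "links": {"children": ["191"]}},
    "linked": {
        "features": [
            {"id": "_New Subfeature",
             "slug": "html-address-new-subfeature",
             "name": {"en": "New Subfeature"},
             "links": {"parent": "816"}}
        ]
    },
}
print(new_item_ids(payload))  # {'_New Subfeature'}
```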
#### Services[¶](#services) A **Service** provides server functionality beyond basic manipulation of resources. ##### Authentication[¶](#authentication) Several endpoints handle user authentication. <https://browsersupports.org/accounts/profile/> is an HTML page that summarizes the user’s account, which includes: * The username, which can’t be changed. * Changing or setting the password, which is optional if a linked account is used. * Viewing, adding, and removing linked accounts, which are optional if a password is set. The supported linked account type is [Firefox Accounts](https://accounts.firefox.com/signup). * Viewing, adding, and removing emails. Emails can be verified (a link is clicked or a trusted linked account says it is verified), and one is the primary email used for communication. Additional endpoints for authentication: * `/accounts/` - Redirect to login or profile, depending on login state * `/accounts/signup/` - Create a new account, using username and password * `/accounts/login/` - Login to an existing account, using username and password or selecting a linked account * `/accounts/logout/` - Logout from site * `/accounts/password/change/` - Change an existing password * `/accounts/password/set/` - Set the password for an account using only a linked account * `/accounts/email/` - Manage associated email addresses * `/accounts/password/reset/` - Start a password reset via email * `/accounts/social/connections/` - Manage social accounts * `/accounts/fxa/login/` - Start a [Firefox Accounts](https://accounts.firefox.com/signup) login ##### Browser Identification[¶](#browser-identification) The `/browser_ident` endpoint provides browser identification based on the User Agent and other parameters. Two potential sources for this information: * [WhichBrowser](https://github.com/NielsLeenheer/WhichBrowser) - Very detailed. Uses User Agent header and feature detection to distinguish between similar browsers. Written in PHP. 
* [ua-parser](https://github.com/tobie/ua-parser) - Parses the User Agent. The [reference parser](https://webplatform.github.io/browser-compat-model/#reference-user-agent-parser) for [WebPlatform.org](http://www.webplatform.org). Written in Python. This endpoint will probably require the browser to visit it. It will be further specified as part of the UX around user contributions. #### [Issues](#id1)[¶](#issues) This draft API reflects a good guess at a useful, efficient API for storing user-contributed compatibility data. Some of the guesses are better than others. This section highlights the areas where more experienced opinions are welcome, and areas where more work is expected. Contents * [Issues](#issues) + [Additions to Browser Compatibility Data Architecture](#additions-to-browser-compatibility-data-architecture) + [Unresolved Issues](#unresolved-issues) + [Interesting MDN Pages](#interesting-mdn-pages) + [Translating from MDN wiki format](#translating-from-mdn-wiki-format) + [Data sources for browser versions](#data-sources-for-browser-versions) + [To Do](#to-do) ##### [Additions to Browser Compatibility Data Architecture](#id2)[¶](#additions-to-browser-compatibility-data-architecture) This spec includes changes to the [Browser Compatibility Data Architecture](https://docs.google.com/document/d/1YF7GJ6kgV5_hx6SJjyrgunqznQU1mKxp5FaLAEzMDl4/edit#) developed around March 2014. These seemed like a good idea to me, based on list threads and thinking how to recreate Browser Compatibility tables live on MDN. These changes are: * [browsers](resources.html#browsers) + **slug** - human-friendly unique identifier + **name** - converted to localized text. + **note** - added for engine, OS, etc. information * [versions](resources.html#versions) + Was browser-versions, but multi-word resources are problematic. + **release_day** - Day of release + **retirement_day** - Day of “retirement”, or when it was superseded by a new release. 
+ **status** - One of “retired”, “retired-beta”, “current”, “beta”, “future” + **release_notes_uri** - For Mozilla releases, as specified in [CompatGeckoDesktop](https://developer.mozilla.org/en-US/docs/Template:CompatGeckoDesktop). + **note** - added for engine version, etc. * [features](resources.html#features) + **slug** - human-friendly unique identifier + **mdn_uri** - MDN page that data was scraped from + **experimental** - True if the feature is considered experimental due to being part of an unratified spec such as CSS Transitions, ES6, or the DOM Living Standard. For example, see the run-in value of [Web/CSS/display](https://developer.mozilla.org/en-US/docs/Web/CSS/display#Specifications). + **standardized** - True if the feature is described in a standards-track specification, regardless of the maturity of the specification. Most features are standardized, but some browser-specific features may be non-standard, and some features like the left and right features of [Web/CSS/caption-side](https://developer.mozilla.org/en-US/docs/Web/CSS/caption-side#Specifications) were part of the CSS2 “wishlist” document that was not standardized. + **stable** - True if the feature is considered stable enough for production usage. + **obsolete** - True if the feature should no longer be used in production code. + **name** - converted to localized text, or a string if the name is canonical + **sections** - replaces spec link + **parent**, **children** - tree relations for this feature * [supports](resources.html#supports) + Was browser-version-features, but multi-word resources are problematic. 
+ **prefix** - string prefix to enable, or null if no prefix + **prefix_mandatory** - True if the prefix is required + **alternate_name** - An alternate name associated with this feature, such as “RTCPeerConnectionIdentityEvent” + **alternate_name_mandatory** - True if the alternate name is required + **requires_config** - A configuration string required to enable the feature, such as “media.peerconnection.enabled=on” + **default_config** - The configuration string in the shipping browser, such as “media.peerconnection.enabled=off” + **note** - Note, which can be null, but if included, can be translated, and may contain HTML and code samples. Supports extended footnote in use on MDN. MDN inline notes are not supported, and must be converted to footnotes. There are also additional [Resources](resources.html): * [specifications](resources.html#specifications) - For referring to a specification, with translated titles and URIs. * [sections](resources.html#sections) - For referring to a section of a specification, with translated titles and anchors * [maturities](resources.html#maturities) - For identifying the process stage of a specification * All the [history](history.html) resources ([historical_browsers](history.html#historical-browsers), [historical_versions](history.html#historical-versions), etc.) * [users](change-control#users) - For identifying the user who made a change * [changesets](change-control#changesets) - Collect several history resources into a logical change ##### [Unresolved Issues](#id3)[¶](#unresolved-issues) * We’ve been talking data models. This document talks about APIs. **The service will not have a working SQL interface**. Features like history require that changes are made through the API. Make sure your use case is supported by the API. * overholt wants [availability in Web Workers](https://bugzilla.mozilla.org/show_bug.cgi?id=996570#c14). Is an API enough to support that need? 
* I think <NAME> has good points about browsers being different across operating systems and OS versions - I’m considering adding this to the model: [Mapping browsers to 2.2.1 Dictionary Browser Members](http://lists.w3.org/Archives/Public/public-webplatform-tests/2013OctDec/0007.html). * How should we support versioning the API? There is no Internet consensus. + I expect to break the API as needed while implementing. At some point (late 2014), we’ll call it v1. + Additions, such as new attributes and links, will not cause an API bump + Some people put the version in the URL (/v1/browsers, /v2/browsers) + Some people use a custom header (`X-Api-Version: 2`) + Some people use the Accept header (`Accept: application/vnd.api+json;version=2`) + These people all hate each other. [Read a good blog post on the subject](http://www.troyhunt.com/2014/02/your-api-versioning-is-wrong-which-is.html). * What should be the default permissions for new users? What is the process for upgrading or downgrading permissions? * Is Persona a good fit for creating API accounts? There will need to be a process where an MDN user becomes an API user, and a way for an API user to authenticate directly with the API. * If we succeed, we’ll have a detailed history of browser support for each feature. For example, the datastore will know that every version of Firefox supported the `<h1>` tag. How should version history be summarized for the Browser Compatibility table? Should the API pick the “important” versions, and the KumaScript display them all? Or should the API send all known versions, and the KumaScript parse them for the significant versions, with UX for exposing known versions? The view doc proposes one implementation, with a `<meta>` section for identifying the important bits. * Do we want to add more items to versions? 
Wikipedia has interesting data for [Chrome release history](http://en.wikipedia.org/wiki/Google_Chrome_complete_version_history#Release_history) and [Firefox release history](http://en.wikipedia.org/wiki/Firefox_release_history#Release_history). Some possibly useful additions: release date, retirement date, codename, JS engine version, operating system, notes. It feels like we should import the data from version-specific KumaScripts like [CompatGeckoDesktop](https://developer.mozilla.org/en-US/docs/Template:CompatGeckoDesktop) (versions, release dates, translations, links to release docs). * We’ll need additional models for automated browser testing. Things like user agents, test names, test results for a user / user agent. And, we’ll need a bunch of rules for mapping test results to features, required number of tests before we’ll say a browser supports a feature, what to do with test conflicts, etc. It might be easier to move all those wishlist items to a different project, that talks to this API when it’s ready to assert browser support for a feature. * We need to decide on the URIs for the API and the developer resources. This is being tracked by [Bugzilla 1050458](https://bugzilla.mozilla.org/show_bug.cgi?id=1050458). * In [browsers](resources.html#browsers), it seems like icon won’t be generally useful. What format should the icon be? What size? It may be more useful to use the slug for deciding between icons designed for the local implementation. ##### [Interesting MDN Pages](#id4)[¶](#interesting-mdn-pages) These MDN pages represent use cases for compatibility data. They may suggest features to add, or existing features that will be dropped. * [Web/HTML/Element/address](https://developer.mozilla.org/en-US/docs/Web/HTML/Element/address#Specifications) - A typical “simple” example. However, the name is non-canonical (“Basic Features”) and must be translated, rather than a canonical form (“<address>”) that could be the same for all languages. 
* [Web/CSS/display](https://developer.mozilla.org/en-US/docs/Web/CSS/display#Specifications) - This complex page includes non-canonical names (“`none,inline` and `block`”), experimental features (`run-in`), support changes across versions, prefixes, etc. Everything that makes this project hard. * [Web/CSS/cursor](https://developer.mozilla.org/en-US/docs/Web/CSS/cursor#Specifications) - May be more complex than display. * [Web/HTML/Element/Input](https://developer.mozilla.org/en-US/docs/Web/HTML/Element/Input#Browser_compatibility) - Complex, with lots of attributes. Split by standard may not be as useful as other ways to split it. * [Web/CSS/animation-name](https://developer.mozilla.org/en-US/docs/Web/CSS/animation-name#Specifications) - New property that moved from prefixed support to standard support. * [Web/CSS/caption-side](https://developer.mozilla.org/en-US/docs/Web/CSS/caption-side#Specifications) - Rarely used ‘Non-standard’ tag. Also seen on [Web/CSS/text-align](https://developer.mozilla.org/en-US/docs/Web/CSS/text-align#Specifications). * [Web/CSS/@font-face](https://developer.mozilla.org/en-US/docs/Web/CSS/@font-face#Specifications) - Rarely used ‘Unimplemented’ tag as inline note. Also seen on [Web/CSS/text-decoration-line](https://developer.mozilla.org/en-US/docs/Web/CSS/text-decoration-line#Specifications). * [Web/CSS/length](https://developer.mozilla.org/en-US/docs/Web/CSS/length#Browser_compatibility) - Rarely used “warning” tag. Also seen on [Web/CSS/text-underline-position](https://developer.mozilla.org/en-US/docs/Web/CSS/text-underline-position#Specifications). 
* [Web/CSS/line-break](https://developer.mozilla.org/en-US/docs/Web/CSS/line-break#Specifications) - Rarely used “Fix Me” inline note * [Web/CSS/min-height](https://developer.mozilla.org/en-US/docs/Web/CSS/min-height#Specifications) - “Obsolete since Gecko 22” tag on auto, versus: * [Web/CSS/min-width](https://developer.mozilla.org/en-US/docs/Web/CSS/min-width#Specifications) - Obsolete trash can icon * [Web/CSS/text-transform](https://developer.mozilla.org/en-US/docs/Web/CSS/text-transform#Specifications) - Interesting use of non-ASCII Unicode in feature names, good test case. * [Web/CSS/transform-origin](https://developer.mozilla.org/en-US/docs/Web/CSS/transform-origin#Specifications) - IE may justify an ‘alternate’ value for supports.support, or just ‘no’ with a note. Some pages will require manual intervention to get them into the data store. Here’s a sample: * [Web/CSS/box-decoration-break](https://developer.mozilla.org/en-US/docs/Web/CSS/box-decoration-break#Specifications) - Broken formatting * [Web/CSS/box-sizing](https://developer.mozilla.org/en-US/docs/Web/CSS/box-sizing#Specifications) - In Safari column, link to engine version will become an inline note. * [Web/CSS/break-inside](https://developer.mozilla.org/en-US/docs/Web/CSS/break-inside#Specifications) - Will need to add a skeleton compatibility table. * [Web/CSS/@document](https://developer.mozilla.org/en-US/docs/Web/CSS/@document#Specifications) - Specification paragraph rather than normal table. * [Web/CSS/clip](https://developer.mozilla.org/en-US/docs/Web/CSS/clip#Specifications) - Long inline notes should be converted to notes. * [Web/CSS/:invalid](https://developer.mozilla.org/en-US/docs/Web/CSS/:invalid#Specifications) - Links in feature names to other MDN docs * [Web/CSS/outline-color](https://developer.mozilla.org/en-US/docs/Web/CSS/outline-color#Specifications) - Instead of version, long note about support. Convert to two versions and a footnote. 
* [Web/CSS/radial-gradient](https://developer.mozilla.org/en-US/docs/Web/CSS/radial-gradient#Specifications) - Evolving standard, used version notes instead of marking feature as experimental or deprecated. * [Web/CSS/ratio](https://developer.mozilla.org/en-US/docs/Web/CSS/ratio#Specifications) - Strange Chrome version * [Web/CSS/tab-size](https://developer.mozilla.org/en-US/docs/Web/CSS/tab-size#Specifications) - Lots of interesting versions, including Safari nightly. * [Web/CSS/text-rendering](https://developer.mozilla.org/en-US/docs/Web/CSS/text-rendering#Specifications) - convert to footnotes, other changes needed. Not sure if it belongs under CSS. * [Web/API/IDBObjectStore](https://developer.mozilla.org/en-US/docs/Web/API/IDBObjectStore#Specifications) - apoplectic warning of Chrome behaviour. Maybe convert to regular note, or add a Feature for Chrome prefix with non-standard tag? ##### [Translating from MDN wiki format](#id5)[¶](#translating-from-mdn-wiki-format) The current compatibility data on developer.mozilla.org is in MDN wiki format, a combination of HTML and KumaScript. An MDN page will be imported as a feature with at least one child feature. Here’s the MDN wiki version of the Specifications section for [Web/CSS/border-image-width](http://developer.mozilla.org/en-US/docs/Web/CSS/border-image-width): ``` <h2 id="Specifications" name="Specifications">Specifications</h2> <table class="standard-table"> <thead> <tr> <th scope="col">Specification</th> <th scope="col">Status</th> <th scope="col">Comment</th> </tr> </thead> <tbody> <tr> <td>{{SpecName('CSS3 Backgrounds', '#border-image-width', 'border-image-width')}}</td> <td>{{Spec2('CSS3 Backgrounds')}}</td> <td>Initial specification</td> </tr> </tbody> </table> ``` The elements of this table are converted into API data: * **Body row, first column** - Format is `SpecName('KEY', 'PATH', 'NAME')`. 
`KEY` is the specification.mdn_key, `PATH` is section.subpath, in the page language, and `NAME` is section.name, in the page language. The macro [SpecName](https://developer.mozilla.org/en-US/docs/Template:SpecName) has additional lookups on `KEY` for specification.name and specification.uri (en language only). * **Body row, second column** - Format is `Spec2('KEY')`. `KEY` is the specification.mdn_key, and should match the one from column one. The macro [Spec2](https://developer.mozilla.org/en-US/docs/Template:Spec2) has additional lookups on `KEY` for maturity.mdn_key, and maturity.name (multiple languages). * **Body row, third column** - Format is a text fragment which may include HTML markup, becomes the section.note associated with this feature. And here’s the Browser compatibility section: ``` <h2 id="Browser_compatibility">Browser compatibility</h2> <div>{{CompatibilityTable}}</div> <div id="compat-desktop"> <table class="compat-table"> <tbody> <tr> <th>Feature</th> <th>Chrome</th> <th>Firefox (Gecko)</th> <th>Internet Explorer</th> <th>Opera</th> <th>Safari</th> </tr> <tr> <td>Basic support</td> <td>15.0</td> <td>{{CompatGeckoDesktop("13.0")}}</td> <td>11</td> <td>15</td> <td>6</td> </tr> </tbody> </table> </div> <div id="compat-mobile"> <table class="compat-table"> <tbody> <tr> <th>Feature</th> <th>Android</th> <th>Firefox Mobile (Gecko)</th> <th>IE Phone</th> <th>Opera Mobile</th> <th>Safari Mobile</th> </tr> <tr> <td>Basic support</td> <td>{{CompatUnknown}}</td> <td>{{CompatGeckoMobile("13.0")}}</td> <td>{{CompatNo}}</td> <td>{{CompatUnknown}}</td> <td>{{CompatUnknown}}</td> </tr> </tbody> </table> </div> </div> ``` This will be converted to API resources: * **Table class** - one of `"compat-desktop"` or `"compat-mobile"`. Representation in API is TBD. * **Header row, all but the first column** - Format is either `Browser Name (Engine Name)` or `Browser Name`. Used for browser.name, engine name is discarded. Other formats or KumaScript halt import. 
* **Non-header rows, first column** - If the format is `<code>some text</code>`, then feature.canonical=true and the string is the canonical name. If the format is text w/o KumaScript, it is the non-canonical name. If there is also KumaScript, it varies. **TODO:** doc KumaScript. * **Non-header rows, remaining columns** - Usually KumaScript: + `{{CompatUnknown}}` - version.version is `null`, and support.support is `"unknown"` + `{{CompatVersionUnknown}}` - version.version is `null`, and support.support is `"yes"` + `{{CompatNo}}` - version.version is `null`, and support.support is `"no"` + `{{CompatGeckoDesktop("VAL")}}` - version.version is set to `"VAL"`, support.support is `"yes"`, and version.release_day is set by logic in [CompatGeckoDesktop](https://developer.mozilla.org/en-US/docs/Template:CompatGeckoDesktop). + `{{CompatGeckoMobile("VAL")}}` - version.version is set to `"VAL"`, support.support is `"yes"`, and version.release_day is set by logic in [CompatGeckoMobile](https://developer.mozilla.org/en-US/docs/Template:CompatGeckoMobile). + Numeric string, such as `6`, `15.0`. This becomes the version.version, and support.support is `"yes"`. * **Content after table** - This is usually formatted as a paragraph, containing HTML. If it resembles a footnote, it will be converted to supports.notes. Once the initial conversion has been done for a page, it may be useful to perform additional steps: 1. Split large [features](resources.html#features) into smaller ones. For example, here’s one way to reorganize [Web/CSS/display](https://developer.mozilla.org/en-US/docs/Web/CSS/display#Specifications): ##### [Data sources for browser versions](#id6)[¶](#data-sources-for-browser-versions) The **version** model currently supports a release date and a retirement date, as well as other version data. 
Some sources for this data include: * Google Chrome - [Google Chrome Release History](http://en.wikipedia.org/wiki/Google_Chrome#Release_history) on Wikipedia * Mozilla Firefox - [Firefox Release History](http://en.wikipedia.org/wiki/Firefox_release_history#Release_history) on Wikipedia and KumaScript macro [CompatGeckoDesktop](https://developer.mozilla.org/en-US/docs/Template:CompatGeckoDesktop) * Microsoft Internet Explorer - [Release History of IE](http://en.wikipedia.org/wiki/Internet_Explorer_1#Release_history_for_desktop_Windows_OS_version) on Wikipedia * Opera - [Current Opera version history](http://www.opera.com/docs/history/) and [Presto history](http://www.opera.com/docs/history/presto/) on opera.com * Safari - [Safari version history](http://en.wikipedia.org/wiki/Safari_version_history#Release_history) on Wikipedia ##### [To Do](#id7)[¶](#to-do) * Add multi-get to browser doc, after deciding on `GET /versions/1,2,3,4` vs. `GET /browser/1/versions` * Look at additional MDN content for items in common use * Move to developers.mozilla.org subpath, auth changes * Jeremie’s suggested changes (*italics are done*) + *Add browsers.notes, localized, to note things like engine, applicable OS, execution contexts (web workers, XUL, etc.).* + *Drop browsers.engine attribute. Not important for searching or filtering, instead free text in browsers.notes* + *Add versions.notes, localized, to note things like OS, devices, engines, etc.* + *Drop versions.engine-version, not important for searching or sorting.* + Drop versions.status. Doesn’t think the MDN team will be able to keep up with browser releases. Will instead rely on users figuring out if a browser version is the current release. + *Drop feature.canonical. Instead, name=”string” means it is canonical, and name={“lang”: “translation”} means it is non-canonical.* + Feature-sets is a cloud, not a hierarchy. “color=red” is the same feature as “background-color=red”, so needs to be multiply assigned. 
+ *A feature-set can either have sub-feature sets (middle of cloud), or features (edge of cloud).* - Note - implemented by merging features and feature sets. + *Add support-sets, to make positive assertions about a version supporting a feature-set. Only negative assertions can be made based on features.* - Note - implemented by merging features and feature sets + Drop order of features by feature set. Client will alpha-sort. + *supports.support, drop “prefixed” status. If prefixed, support = ‘yes’, and prefix is set.* + Add examples of filtering (browser versions in 2010, firefox versions before version X). * Holly’s suggestions + Nail down the data, so she has something solid to build a UX on. + sheppy or jms will have experience with how users use tables and contribute to them, how frequently. * Add history resources for specifications, etc. * Add empty resource for deleted items? ### Tools[¶](#tools) Some potentially useful scripts can be found in the /tools folder: #### download_data.py[¶](#download-data-py) Download data from API. Usage: ``` $ tools/download_data.py [--api API] [-vq] [--data DATA] ``` * `--api <API>` (optional): Set the base URL of the API (default: http://localhost:8000) * `-v` (optional): Print debug information * `-q` (optional): Only print warnings * `--data <DATA>` (optional): Set the output data folder (default: data subfolder in the working copy) This will create several files (browsers.json, versions.json, etc.) that represent the API resources without pagination. #### import_mdn.py[¶](#import-mdn-py) Import features from MDN, or reparse imported features. 
Usage: ``` $ tools/import_mdn.py [--api API] [--user USER] [-vq] ``` * `--api <API>` (optional): Set the base URL of the API (default: `http://localhost:8000`) * `--user <USER>` (optional): Set the username to use on the API (default: prompt for username) * `-v` (optional): Print debug information * `-q` (optional): Only print warnings #### load_spec_data.py[¶](#load-spec-data-py) Import specification data from MDN’s [SpecName](https://developer.mozilla.org/en-US/docs/Template:SpecName) and [Spec2](https://developer.mozilla.org/en-US/docs/Template:Spec2). Usage: ``` $ tools/load_spec_data.py [--api <API>] [--user <USER>] [-vq] [--all-data] ``` * `--api <API>` (optional): Set the base URL of the API (default: `http://localhost:8000`) * `--user <USER>` (optional): Set the username to use on the API (default: prompt for username) * `-v` (optional): Print debug information * `-q` (optional): Only print warnings #### load_webcompat_data.py[¶](#load-webcompat-data-py) Initialize with compatibility data from the [WebPlatform](https://github.com/webplatform/compatibility-data) project. Usage: ``` $ tools/load_webcompat_data.py [--api <API>] [--user <USER>] [-vq] [--all-data] ``` * `--api <API>` (optional): Set the base URL of the API (default: `http://localhost:8000`) * `--user <USER>` (optional): Set the username to use on the API (default: prompt for username) * `-v` (optional): Print debug information * `-q` (optional): Only print warnings * `--all-data` (optional): Import all data, rather than a subset #### make_doc_requests.py[¶](#make-doc-requests-py) Make documentation/integration requests against an API. Used by tools/run_integration_tests.sh. Usage: ``` $ tools/integration_requests.py [--mode {display,generate,verify}] [--api API] [--raw RAW] [--cases CASES] [--user USER] [--password PASSWORD] [--include-mod] [-vq] [case name [case name ...]] ``` * `--mode {display,generate,verify}` (optional): Set the mode. 
Values are: + `display` (default): Run GET requests against an API, printing the actual requests and responses. + `generate`: Run all requests against an API. Throw away some headers, such as `Allow` and `Server`. Modify other headers, such as `Cookies`, to make them consistent from run to run. Standardize some response data, such as creation and modification times. Store the cleaned-up requests and responses in the docs folder, for documentation and integration testing. + `verify`: Run all requests against an API, standardizing the requests and responses and comparing them to those in the docs folder. * `--api API` (optional): Set the base URL of the API (default: `http://localhost:8000`) * `--raw RAW` (optional): Set the path to the folder containing raw requests and responses (default: `docs/raw`) * `--cases CASES` (optional): Set the path to the documentation cases JSON file (default `docs/doc_cases.json`) * `--user USER`: Set the username to use for requests (default anonymous requests) * `--password PASSWORD`: Set the password to use for requests (default is prompt if `--user` set, otherwise use anonymous requests) * `--include-mod`: If `--mode display`, then include requests that would modify the data, such as `POST`, `PUT`, and `DELETE`. * `-v`: Be more verbose * `-q`: Be quieter * `case name`: Run the listed cases, not the full suite of cases #### mirror_mdn_features.py[¶](#mirror-mdn-features-py) Create and update API features for MDN pages. This will create the branch features, and then import_mdn.py can be used to generate the leaf features. 
Usage: ``` $ tools/mirror_mdn_features.py [--api API] [--data DATA] [--user USER] [--password PASSWORD] ``` * `--api API` (optional): Set the base URL of the API (default: `http://localhost:8000`) * `--user USER`: Set the username to use for requests (default is prompt for username) * `--password PASSWORD`: Set the password to use for requests (default is prompt for password) * `--data DATA`: Set the data folder for caching MDN page JSON #### run_integration_tests.sh[¶](#run-integration-tests-sh) Run a local API server with known data, make requests against it, and look for issues in the response. Usage: ``` $ tools/run_integration_tests.sh [-ghqv] ``` * `-g`: Generate documentation / integration test samples. If omitted, then responses are checked against the documentation samples. Useful for adding new documentation cases, or updating when the API changes. * `-h`: Show a usage statement * `-q`: Show less output * `-v`: Show more output #### sample_mdn.py[¶](#sample-mdn-py) Load random pages from MDN in your browser. Usage: ``` $ tools/sample_mdn.py ``` #### upload_data.py[¶](#upload-data-py) Upload data to the API. Usage: ``` $ tools/upload_data.py [--api API] [--user USER] [-vq] [--data DATA] ``` * `--api <API>` (optional): Set the base URL of the API (default: `http://localhost:8000`) * `--user <USER>` (optional): Set the username to use on the API (default: prompt for username) * `-v` (optional): Print debug information * `-q` (optional): Only print warnings * `--data <DATA>` (optional): Set the output data folder (default: data subfolder in the working copy) This will load the local resources from files (browsers.json, versions.json, etc), download the resources from the API, and upload the changes to make the API match the local resource files. ### History[¶](#history) web-platform-compat has not been released to production yet. 
The blocking issues for version 1 are tracked on [Bugzilla](https://bugzilla.mozilla.org/showdependencytree.cgi?id=996570&hide_resolved=1). In-progress versions are periodically deployed to Heroku at <https://browsercompat.herokuapp.com>. Here are some of the development deployments: * 2015-02 - Added MDN importer * 2014-12 - Added rest of resources, sample displays. Dropped versioning pre-release. * 0.2.0 - 2014-10-13 - Add features, supports, pagination * 0.1.0d - 2014-09-29 - Add resource-level caching * 0.1.0c - 2014-09-16 - Add sample feature view, simplify draft API * 0.1.0b - 2014-09-05 - Add filtering, more JSON API tuning. * 0.1.0a - 2014-09-02 - First Heroku deployment. Browser and Version data.
github.com/docker/docker/pkg/chrootarchive
Documentation [¶](#section-documentation) --- ### Index [¶](#pkg-index) * [func ApplyLayer(dest string, layer io.Reader) (size int64, err error)](#ApplyLayer) * [func ApplyUncompressedLayer(dest string, layer io.Reader, options *archive.TarOptions) (int64, error)](#ApplyUncompressedLayer) * [func NewArchiver(idMapping idtools.IdentityMapping) *archive.Archiver](#NewArchiver) * [func Tar(srcPath string, options *archive.TarOptions, root string) (io.ReadCloser, error)](#Tar) * [func Untar(tarArchive io.Reader, dest string, options *archive.TarOptions) error](#Untar) * [func UntarUncompressed(tarArchive io.Reader, dest string, options *archive.TarOptions) error](#UntarUncompressed) * [func UntarWithRoot(tarArchive io.Reader, dest string, options *archive.TarOptions, root string) error](#UntarWithRoot) ### Constants [¶](#pkg-constants) This section is empty. ### Variables [¶](#pkg-variables) This section is empty. ### Functions [¶](#pkg-functions) #### func [ApplyLayer](https://github.com/docker/docker/blob/v24.0.6/pkg/chrootarchive/diff.go#L13) [¶](#ApplyLayer) ``` func ApplyLayer(dest [string](/builtin#string), layer [io](/io).[Reader](/io#Reader)) (size [int64](/builtin#int64), err [error](/builtin#error)) ``` ApplyLayer parses a diff in the standard layer format from `layer`, and applies it to the directory `dest`. The stream `layer` can only be uncompressed. Returns the size in bytes of the contents of the layer.
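ApplyLayer consumes a plain, uncompressed tar stream. As a rough, standard-library-only sketch of the kind of stream these functions parse (the file names and contents below are made up for illustration, and the commented-out `chrootarchive.ApplyLayer` call shows the assumed usage rather than running it):

```go
package main

import (
	"archive/tar"
	"bytes"
	"fmt"
)

// buildLayer assembles a minimal uncompressed tar stream of the shape
// ApplyLayer and ApplyUncompressedLayer expect. The entries here are
// purely illustrative.
func buildLayer(files map[string]string) (*bytes.Buffer, error) {
	buf := &bytes.Buffer{}
	tw := tar.NewWriter(buf)
	for name, body := range files {
		hdr := &tar.Header{Name: name, Mode: 0o644, Size: int64(len(body))}
		if err := tw.WriteHeader(hdr); err != nil {
			return nil, err
		}
		if _, err := tw.Write([]byte(body)); err != nil {
			return nil, err
		}
	}
	// Close flushes the trailing zero blocks that terminate the archive.
	if err := tw.Close(); err != nil {
		return nil, err
	}
	return buf, nil
}

func main() {
	layer, err := buildLayer(map[string]string{"etc/hostname": "demo\n"})
	if err != nil {
		panic(err)
	}
	fmt.Println(layer.Len() > 0) // true
	// The reader could then be handed to this package, e.g.:
	//   size, err := chrootarchive.ApplyLayer(dest, layer)
}
```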
#### func [ApplyUncompressedLayer](https://github.com/docker/docker/blob/v24.0.6/pkg/chrootarchive/diff.go#L21) [¶](#ApplyUncompressedLayer) added in v1.8.0 ``` func ApplyUncompressedLayer(dest [string](/builtin#string), layer [io](/io).[Reader](/io#Reader), options *[archive](/github.com/docker/docker@v24.0.6+incompatible/pkg/archive).[TarOptions](/github.com/docker/docker@v24.0.6+incompatible/pkg/archive#TarOptions)) ([int64](/builtin#int64), [error](/builtin#error)) ``` ApplyUncompressedLayer parses a diff in the standard layer format from `layer`, and applies it to the directory `dest`. The stream `layer` can only be uncompressed. Returns the size in bytes of the contents of the layer. #### func [NewArchiver](https://github.com/docker/docker/blob/v24.0.6/pkg/chrootarchive/archive.go#L14) [¶](#NewArchiver) ``` func NewArchiver(idMapping [idtools](/github.com/docker/docker@v24.0.6+incompatible/pkg/idtools).[IdentityMapping](/github.com/docker/docker@v24.0.6+incompatible/pkg/idtools#IdentityMapping)) *[archive](/github.com/docker/docker@v24.0.6+incompatible/pkg/archive).[Archiver](/github.com/docker/docker@v24.0.6+incompatible/pkg/archive#Archiver) ``` NewArchiver returns a new Archiver which uses chrootarchive.Untar #### func [Tar](https://github.com/docker/docker/blob/v24.0.6/pkg/chrootarchive/archive.go#L91) [¶](#Tar) ``` func Tar(srcPath [string](/builtin#string), options *[archive](/github.com/docker/docker@v24.0.6+incompatible/pkg/archive).[TarOptions](/github.com/docker/docker@v24.0.6+incompatible/pkg/archive#TarOptions), root [string](/builtin#string)) ([io](/io).[ReadCloser](/io#ReadCloser), [error](/builtin#error)) ``` Tar tars the requested path while chrooted to the specified root. 
#### func [Untar](https://github.com/docker/docker/blob/v24.0.6/pkg/chrootarchive/archive.go#L25) [¶](#Untar) ``` func Untar(tarArchive [io](/io).[Reader](/io#Reader), dest [string](/builtin#string), options *[archive](/github.com/docker/docker@v24.0.6+incompatible/pkg/archive).[TarOptions](/github.com/docker/docker@v24.0.6+incompatible/pkg/archive#TarOptions)) [error](/builtin#error) ``` Untar reads a stream of bytes from `archive`, parses it as a tar archive, and unpacks it into the directory at `dest`. The archive may be compressed with one of the following algorithms: identity (uncompressed), gzip, bzip2, xz. #### func [UntarUncompressed](https://github.com/docker/docker/blob/v24.0.6/pkg/chrootarchive/archive.go#L48) [¶](#UntarUncompressed) added in v1.8.0 ``` func UntarUncompressed(tarArchive [io](/io).[Reader](/io#Reader), dest [string](/builtin#string), options *[archive](/github.com/docker/docker@v24.0.6+incompatible/pkg/archive).[TarOptions](/github.com/docker/docker@v24.0.6+incompatible/pkg/archive#TarOptions)) [error](/builtin#error) ``` UntarUncompressed reads a stream of bytes from `archive`, parses it as a tar archive, and unpacks it into the directory at `dest`. The archive must be an uncompressed stream. #### func [UntarWithRoot](https://github.com/docker/docker/blob/v24.0.6/pkg/chrootarchive/archive.go#L41) [¶](#UntarWithRoot) ``` func UntarWithRoot(tarArchive [io](/io).[Reader](/io#Reader), dest [string](/builtin#string), options *[archive](/github.com/docker/docker@v24.0.6+incompatible/pkg/archive).[TarOptions](/github.com/docker/docker@v24.0.6+incompatible/pkg/archive#TarOptions), root [string](/builtin#string)) [error](/builtin#error) ``` UntarWithRoot is the same as `Untar`, but allows you to pass in a root directory. The root directory is the directory that will be chrooted to. `dest` must be a path within `root`; if it is not, an error will be returned.
`root` should be set to a directory which is not controlled by any potentially malicious process. This should be used to prevent a potential attacker from manipulating `dest` such that it would provide access to files outside of `dest` through things like symlinks. Normally `ResolveSymlinksInScope` would handle this; however, sanitizing symlinks in this manner is inherently racy: ref: CVE-2018-15664 ### Types [¶](#pkg-types) This section is empty.
capnproto.org/go/capnp/v3
README [¶](#section-readme) --- ### Cap'n Proto bindings for Go ![License](https://img.shields.io/badge/license-MIT-brightgreen?style=flat-square) [![CodeQuality](https://goreportcard.com/badge/capnproto.org/go/capnp)](https://goreportcard.com/report/capnproto.org/go/capnp/v3) [![Go](https://github.com/capnproto/go-capnproto2/actions/workflows/go.yml/badge.svg)](https://github.com/capnproto/go-capnproto2/actions/workflows/go.yml) [![GoDoc](https://godoc.org/capnproto.org/go/capnp/v3?status.svg)](http://pkg.go.dev/capnproto.org/go/capnp/v3) [![Matrix](https://img.shields.io/matrix/go-capnp:matrix.org?color=lightpink&label=Get%20Help&logo=matrix&style=flat-square)](https://matrix.to/#/%23go-capnp:matrix.org) Documentation [¶](#section-documentation) --- ### Overview [¶](#pkg-overview) * [Generating code](#hdr-Generating_code) * [Messages and Segments](#hdr-Messages_and_Segments) * [Pointers](#hdr-Pointers) * [Structs](#hdr-Structs) * [Groups](#hdr-Groups) * [Unions](#hdr-Unions) * [Enums](#hdr-Enums) * [Interfaces](#hdr-Interfaces) Package capnp is a Cap'n Proto library for Go: <https://capnproto.org/>. Read the Getting Started guide for a tutorial on how to use this package: <https://github.com/capnproto/go-capnproto2/wiki/Getting-Started>. #### Generating code [¶](#hdr-Generating_code) capnpc-go provides the compiler backend for capnp. ``` # First, install capnpc-go to $PATH. go install capnproto.org/go/capnp/v3/capnpc-go # Then, check out the go-capnp source code: git clone https://github.com/capnproto/go-capnp /desired/path/to/go-capnp # Then, generate Go files. capnp compile -I /desired/path/to/go-capnp/std -ogo *.capnp ``` capnpc-go requires two annotations for all files: package and import. package is needed to know what package to place at the head of the generated file and what identifier to use when referring to the type from another package.
import should be the fully qualified import path and is used to generate import statements from other packages and to detect when two types are in the same package. For example: ``` using Go = import "/go.capnp"; $Go.package("main"); $Go.import("capnproto.org/go/capnp/v3/example"); ``` For adding documentation comments to the generated code, there's the doc annotation. This annotation adds the comment to a struct, enum or field so that godoc will pick it up. For example: ``` struct Zdate $Go.doc("Zdate represents a calendar date") { year @0 :Int16; month @1 :UInt8; day @2 :UInt8 ; } ``` #### Messages and Segments [¶](#hdr-Messages_and_Segments) In Cap'n Proto, the unit of communication is a message. A message consists of one or more segments -- contiguous blocks of memory. This allows large messages to be split up and loaded independently or lazily. Typically you will use one segment per message. Logically, a message is organized in a tree of objects, with the root always being a struct (as opposed to a list or primitive). Messages can be read from and written to a stream. The Message and Segment types are the main types that application code will use from this package. The Message type has methods for marshaling and unmarshaling its segments to the wire format. If the application needs to read or write from a stream, it should use the Encoder and Decoder types.
See <https://github.com/capnproto/go-capnproto2/wiki/New-Ptr-Type> for details about this API change. Data accessors and setters (i.e. struct primitive fields and list elements) do not return errors, but pointer accessors and setters do. There are a few reasons that a read or write of a pointer can fail, but the most common are bad pointers or allocation failures. For accessors, an invalid object will be returned in case of an error. Since Go doesn't have generics, wrapper types provide type safety on lists. This package provides lists of basic types, and capnpc-go generates list wrappers for named types. However, if you need to use deeper nesting of lists (e.g. List(List(UInt8))), you will need to use a PointerList and wrap the elements. #### Structs [¶](#hdr-Structs) For the following schema: ``` struct Foo @0x8423424e9b01c0af { num @0 :UInt32; bar @1 :Foo; } ``` capnpc-go will generate: ``` // Foo is a pointer to a Foo struct in a segment. // Member functions are provided to get/set members in the // struct. type Foo struct{ capnp.Struct } // Foo_TypeID is the unique identifier for the type Foo. // It remains the same across languages and schema changes. const Foo_TypeID = 0x8423424e9b01c0af // NewFoo creates a new orphaned Foo struct, preferring placement in // s. If there isn't enough space, then another segment in the // message will be used or allocated. You can set a field of type Foo // to this new message, but usually you will want to use the // NewBar()-style method shown below. func NewFoo(s *capnp.Segment) (Foo, error) // NewRootFoo creates a new Foo struct and sets the message's root to // it. func NewRootFoo(s *capnp.Segment) (Foo, error) // ReadRootFoo reads the message's root pointer and converts it to a // Foo struct. func ReadRootFoo(msg *capnp.Message) (Foo, error) // Num returns the value of the num field. func (s Foo) Num() uint32 // SetNum sets the value of the num field to v. 
func (s Foo) SetNum(v uint32) // Bar returns the value of the bar field. This can return an error // if the pointer goes beyond the segment's range, the segment fails // to load, or the pointer recursion limit has been reached. func (s Foo) Bar() (Foo, error) // HasBar reports whether the bar field was initialized (non-null). func (s Foo) HasBar() bool // SetBar sets the value of the bar field to v. func (s Foo) SetBar(v Foo) error // NewBar sets the bar field to a newly allocated Foo struct, // preferring placement in s's segment. func (s Foo) NewBar() (Foo, error) // Foo_List is a value with pointer semantics. It is created for all // structs, and is used for List(Foo) in the capnp file. type Foo_List struct{ capnp.List } // NewFoo_List creates a new orphaned List(Foo), preferring placement // in s. This can then be added to a message by using a Set function // which takes a Foo_List. sz specifies the number of elements in the // list. The list's size cannot be changed after creation. func NewFoo_List(s *capnp.Segment, sz int32) Foo_List // Len returns the number of elements in the list. func (s Foo_List) Len() int // At returns a pointer to the i'th element. If i is an invalid index, // this will return an invalid Foo (all getters will return default // values, setters will fail). func (s Foo_List) At(i int) Foo // Foo_Promise is a promise for a Foo. Methods are provided to get // promises of struct and interface fields. type Foo_Promise struct{ *capnp.Pipeline } // Get waits until the promise is resolved and returns the result. func (p Foo_Promise) Get() (Foo, error) // Bar returns a promise for that bar field. 
func (p Foo_Promise) Bar() Foo_Promise ``` #### Groups [¶](#hdr-Groups) For each group a typedef is created with a different method set for just the group's fields: ``` struct Foo { group :Group { field @0 :Bool; } } ``` generates the following: ``` type Foo struct{ capnp.Struct } type Foo_group Foo func (s Foo) Group() Foo_group func (s Foo_group) Field() bool ``` That way the following may be used to access a field in a group: ``` var f Foo value := f.Group().Field() ``` Note that group accessors just convert the type and so have no overhead. #### Unions [¶](#hdr-Unions) Named unions are treated as a group with an inner unnamed union. Unnamed unions generate an enum Type_Which and a corresponding Which() function: ``` struct Foo { union { a @0 :Bool; b @1 :Bool; } } ``` generates the following: ``` type Foo_Which uint16 const ( Foo_Which_a Foo_Which = 0 Foo_Which_b Foo_Which = 1 ) func (s Foo) A() bool func (s Foo) B() bool func (s Foo) SetA(v bool) func (s Foo) SetB(v bool) func (s Foo) Which() Foo_Which ``` Which() should be checked before using the getters, and the default case must always be handled. Setters for single values will set the union discriminator as well as set the value. For voids in unions, there is a void setter that just sets the discriminator. For example: ``` struct Foo { union { a @0 :Void; b @1 :Void; } } ``` generates the following: ``` func (s Foo) SetA() // Set that we are using A func (s Foo) SetB() // Set that we are using B ``` Similarly, for groups in unions, there is a group setter that just sets the discriminator. This must be called before the group getter can be used to set values. For example: ``` struct Foo { union { a :group { v :Bool } b :group { v :Bool } } } ``` and in usage: ``` f.SetA() // Set that we are using group A f.A().SetV(true) // then we can use the group A getter to set the inner values ``` #### Enums [¶](#hdr-Enums) capnpc-go generates enum values as constants.
For example in the capnp file: ``` enum ElementSize { empty @0; bit @1; byte @2; twoBytes @3; fourBytes @4; eightBytes @5; pointer @6; inlineComposite @7; } ``` In the generated capnp.go file: ``` type ElementSize uint16 const ( ElementSize_empty ElementSize = 0 ElementSize_bit ElementSize = 1 ElementSize_byte ElementSize = 2 ElementSize_twoBytes ElementSize = 3 ElementSize_fourBytes ElementSize = 4 ElementSize_eightBytes ElementSize = 5 ElementSize_pointer ElementSize = 6 ElementSize_inlineComposite ElementSize = 7 ) ``` In addition an enum.String() function is generated that will convert the constants to a string for debugging or logging purposes. By default, the enum name is used as the tag value, but the tags can be customized with a $Go.tag or $Go.notag annotation. For example: ``` enum ElementSize { empty @0 $Go.tag("void"); bit @1 $Go.tag("1 bit"); byte @2 $Go.tag("8 bits"); inlineComposite @7 $Go.notag; } ``` In the generated go file: ``` func (c ElementSize) String() string { switch c { case ElementSize_empty: return "void" case ElementSize_bit: return "1 bit" case ElementSize_byte: return "8 bits" default: return "" } } ``` #### Interfaces [¶](#hdr-Interfaces) capnpc-go generates type-safe Client wrappers for interfaces. For parameter lists and result lists, structs are generated as described above with the names Interface_method_Params and Interface_method_Results, unless a single struct type is used. For example, for this interface: ``` interface Calculator { evaluate @0 (expression :Expression) -> (value :Value); } ``` capnpc-go generates the following Go code (along with the structs Calculator_evaluate_Params and Calculator_evaluate_Results): ``` // Calculator is a client to a Calculator interface. type Calculator struct{ Client capnp.Client } // Evaluate calls `evaluate` on the client. params is called on a newly // allocated Calculator_evaluate_Params struct to fill in the parameters. 
func (c Calculator) Evaluate( ctx context.Context, params func(Calculator_evaluate_Params) error, opts ...capnp.CallOption) *Calculator_evaluate_Results_Promise ``` capnpc-go also generates code to implement the interface: ``` // A Calculator_Server implements the Calculator interface. type Calculator_Server interface { Evaluate(context.Context, Calculator_evaluate_Call) error } // Calculator_evaluate_Call holds the arguments for a Calculator.evaluate server call. type Calculator_evaluate_Call struct { Params Calculator_evaluate_Params Results Calculator_evaluate_Results Options capnp.CallOptions } // Calculator_ServerToClient is equivalent to calling: // NewCalculator(capnp.NewServer(Calculator_Methods(nil, s), s)) // If s does not implement the Close method, then nil is used. func Calculator_ServerToClient(s Calculator_Server) Calculator // Calculator_Methods appends methods from Calculator that call to server and // returns the methods. If methods is nil or the capacity of the underlying // slice is too small, a new slice is returned. func Calculator_Methods(methods []server.Method, s Calculator_Server) []server.Method ``` Since a single capability may want to implement many interfaces, you can use multiple *_Methods functions to build a single slice to send to NewServer. An example of combining the client/server code to communicate with a locally implemented Calculator: ``` var srv Calculator_Server calc := Calculator_ServerToClient(srv) result := calc.Evaluate(ctx, func(params Calculator_evaluate_Params) { params.SetExpression(expr) }) val := result.Value().Get() ``` A note about message ordering: by default, only one method per server will be invoked at a time; when implementing a server method which blocks or takes a long time, you should call the server.Go function to unblock future calls. Example [¶](#example-package) ``` // Make a brand new empty message.
msg, seg, err := capnp.NewMessage(capnp.SingleSegment(nil)) if err != nil { panic(err) } // If you want runtime-type identification, this is easily obtained. Just // wrap everything in a struct that contains a single anonymous union (e.g. struct Z). // Then always set a Z as the root object in your message/first segment. // The cost of the extra word of storage is usually worth it, as // then human readable output is easily obtained via a shell command such as // // $ cat binary.cpz | capnp decode aircraft.capnp Z // // If you need to conserve space, and know your content in advance, it // isn't necessary to use an anonymous union. Just supply the type name // in place of 'Z' in the decode command above. // There can only be one root. Subsequent NewRoot* calls will set the root // pointer and orphan the previous root. z, err := air.NewRootZ(seg) if err != nil { panic(err) } // then non-root objects: aircraft, err := z.NewAircraft() if err != nil { panic(err) } b737, err := aircraft.NewB737() if err != nil { panic(err) } planebase, err := b737.NewBase() if err != nil { panic(err) } // Set primitive fields planebase.SetCanFly(true) planebase.SetName("Henrietta") planebase.SetRating(100) planebase.SetMaxSpeed(876) // km/hr // if we don't set capacity, it will get the default value, in this case 0. //planebase.SetCapacity(26020) // Liters fuel // Creating a list homes, err := planebase.NewHomes(2) if err != nil { panic(err) } homes.Set(0, air.Airport_jfk) homes.Set(1, air.Airport_lax) // Ready to write! // You can write to memory... buf, err := msg.Marshal() if err != nil { panic(err) } _ = buf // ... or write to an io.Writer.
file, err := os.CreateTemp("", "go-capnproto") if err != nil { panic(err) } defer file.Close() defer os.Remove(file.Name()) err = capnp.NewEncoder(file).Encode(msg) if err != nil { panic(err) } ``` ``` Output: ``` ### Index [¶](#pkg-index) * [func Canonicalize(s Struct) ([]byte, error)](#Canonicalize) * [func Disconnected(s string) error](#Disconnected) * [func Equal(p1, p2 Ptr) (bool, error)](#Equal) * [func IsDisconnected(e error) bool](#IsDisconnected) * [func IsUnimplemented(e error) bool](#IsUnimplemented) * [func NewMessage(arena Arena) (*Message, *Segment, error)](#NewMessage) * [func NewMultiSegmentMessage(b [][]byte) (msg *Message, first *Segment)](#NewMultiSegmentMessage) * [func NewPromisedClient(hook ClientHook) (Client, Resolver[Client])](#NewPromisedClient) * [func NewSingleSegmentMessage(b []byte) (msg *Message, first *Segment)](#NewSingleSegmentMessage) * [func SamePtr(p, q Ptr) bool](#SamePtr) * [func SetClientLeakFunc(clientLeakFunc func(msg string))](#SetClientLeakFunc) * [func Unimplemented(s string) error](#Unimplemented) * [type Answer](#Answer) * + [func ErrorAnswer(m Method, e error) *Answer](#ErrorAnswer) + [func ImmediateAnswer(m Method, ptr Ptr) *Answer](#ImmediateAnswer) * + [func (ans *Answer) Client() Client](#Answer.Client) + [func (ans *Answer) Done() <-chan struct{}](#Answer.Done) + [func (ans *Answer) Field(off uint16, def []byte) *Future](#Answer.Field) + [func (ans *Answer) Future() *Future](#Answer.Future) + [func (ans *Answer) List() (List, error)](#Answer.List) + [func (ans *Answer) Metadata() *Metadata](#Answer.Metadata) + [func (ans *Answer) PipelineRecv(ctx context.Context, transform []PipelineOp, r Recv) PipelineCaller](#Answer.PipelineRecv) + [func (ans *Answer) PipelineSend(ctx context.Context, transform []PipelineOp, s Send) (*Answer, ReleaseFunc)](#Answer.PipelineSend) + [func (ans *Answer) Struct() (Struct, error)](#Answer.Struct) * [type AnswerQueue](#AnswerQueue) * + [func NewAnswerQueue(m Method) 
*AnswerQueue](#NewAnswerQueue) * + [func (aq *AnswerQueue) Fulfill(ptr Ptr)](#AnswerQueue.Fulfill) + [func (aq *AnswerQueue) PipelineRecv(ctx context.Context, transform []PipelineOp, r Recv) PipelineCaller](#AnswerQueue.PipelineRecv) + [func (aq *AnswerQueue) PipelineSend(ctx context.Context, transform []PipelineOp, r Send) (*Answer, ReleaseFunc)](#AnswerQueue.PipelineSend) + [func (aq *AnswerQueue) Reject(e error)](#AnswerQueue.Reject) * [type Arena](#Arena) * [type BitList](#BitList) * + [func NewBitList(s *Segment, n int32) (BitList, error)](#NewBitList) * + [func (p BitList) At(i int) bool](#BitList.At) + [func (BitList) DecodeFromPtr(p Ptr) BitList](#BitList.DecodeFromPtr) + [func (l BitList) EncodeAsPtr(seg *Segment) Ptr](#BitList.EncodeAsPtr) + [func (l BitList) IsValid() bool](#BitList.IsValid) + [func (l BitList) Len() int](#BitList.Len) + [func (l BitList) Message() *Message](#BitList.Message) + [func (l BitList) Segment() *Segment](#BitList.Segment) + [func (p BitList) Set(i int, v bool)](#BitList.Set) + [func (p BitList) String() string](#BitList.String) + [func (l BitList) ToPtr() Ptr](#BitList.ToPtr) * [type BitOffset](#BitOffset) * + [func (bit BitOffset) GoString() string](#BitOffset.GoString) + [func (bit BitOffset) String() string](#BitOffset.String) * [type Brand](#Brand) * [type CapList](#CapList) * + [func (c CapList[T]) At(i int) (T, error)](#CapList.At) + [func (CapList[T]) DecodeFromPtr(p Ptr) CapList[T]](#CapList.DecodeFromPtr) + [func (l CapList[T]) EncodeAsPtr(seg *Segment) Ptr](#CapList.EncodeAsPtr) + [func (l CapList[T]) IsValid() bool](#CapList.IsValid) + [func (l CapList[T]) Len() int](#CapList.Len) + [func (l CapList[T]) Message() *Message](#CapList.Message) + [func (l CapList[T]) Segment() *Segment](#CapList.Segment) + [func (c CapList[T]) Set(i int, v T) error](#CapList.Set) + [func (l CapList[T]) ToPtr() Ptr](#CapList.ToPtr) * [type CapTable](#CapTable) * + [func (ct *CapTable) Add(c Client) CapabilityID](#CapTable.Add) + [func 
(ct CapTable) At(i int) Client](#CapTable.At) + [func (ct CapTable) Contains(ifc Interface) bool](#CapTable.Contains) + [func (ct CapTable) Get(ifc Interface) (c Client)](#CapTable.Get) + [func (ct CapTable) Len() int](#CapTable.Len) + [func (ct *CapTable) Reset(cs ...Client)](#CapTable.Reset) + [func (ct CapTable) Set(id CapabilityID, c Client)](#CapTable.Set) * [type CapabilityID](#CapabilityID) * + [func (id CapabilityID) GoString() string](#CapabilityID.GoString) + [func (id CapabilityID) String() string](#CapabilityID.String) * [type Client](#Client) * + [func ErrorClient(e error) Client](#ErrorClient) + [func NewClient(hook ClientHook) Client](#NewClient) * + [func (c Client) AddRef() Client](#Client.AddRef) + [func (Client) DecodeFromPtr(p Ptr) Client](#Client.DecodeFromPtr) + [func (c Client) EncodeAsPtr(seg *Segment) Ptr](#Client.EncodeAsPtr) + [func (c Client) GetFlowLimiter() flowcontrol.FlowLimiter](#Client.GetFlowLimiter) + [func (c Client) IsSame(c2 Client) bool](#Client.IsSame) + [func (c Client) IsValid() bool](#Client.IsValid) + [func (c Client) RecvCall(ctx context.Context, r Recv) PipelineCaller](#Client.RecvCall) + [func (c Client) Release()](#Client.Release) + [func (c Client) Resolve(ctx context.Context) error](#Client.Resolve) + [func (c Client) SendCall(ctx context.Context, s Send) (*Answer, ReleaseFunc)](#Client.SendCall) + [func (c Client) SendStreamCall(ctx context.Context, s Send) error](#Client.SendStreamCall) + [func (c Client) SetFlowLimiter(lim flowcontrol.FlowLimiter)](#Client.SetFlowLimiter) + [func (c Client) State() ClientState](#Client.State) + [func (c Client) String() string](#Client.String) + [func (c Client) WaitStreaming() error](#Client.WaitStreaming) + [func (c Client) WeakRef() *WeakClient](#Client.WeakRef) * [type ClientHook](#ClientHook) * [type ClientKind](#ClientKind) * [type ClientState](#ClientState) * [type DataList](#DataList) * + [func NewDataList(s *Segment, n int32) (DataList, error)](#NewDataList) * + [func 
(l DataList) At(i int) ([]byte, error)](#DataList.At) + [func (DataList) DecodeFromPtr(p Ptr) DataList](#DataList.DecodeFromPtr) + [func (l DataList) EncodeAsPtr(seg *Segment) Ptr](#DataList.EncodeAsPtr) + [func (l DataList) IsValid() bool](#DataList.IsValid) + [func (l DataList) Len() int](#DataList.Len) + [func (l DataList) Message() *Message](#DataList.Message) + [func (l DataList) Segment() *Segment](#DataList.Segment) + [func (l DataList) Set(i int, v []byte) error](#DataList.Set) + [func (l DataList) String() string](#DataList.String) + [func (l DataList) ToPtr() Ptr](#DataList.ToPtr) * [type DataOffset](#DataOffset) * + [func (off DataOffset) GoString() string](#DataOffset.GoString) + [func (off DataOffset) String() string](#DataOffset.String) * [type Decoder](#Decoder) * + [func NewDecoder(r io.Reader) *Decoder](#NewDecoder) + [func NewPackedDecoder(r io.Reader) *Decoder](#NewPackedDecoder) * + [func (d *Decoder) Decode() (*Message, error)](#Decoder.Decode) * [type Encoder](#Encoder) * + [func NewEncoder(w io.Writer) *Encoder](#NewEncoder) + [func NewPackedEncoder(w io.Writer) *Encoder](#NewPackedEncoder) * + [func (e *Encoder) Encode(m *Message) error](#Encoder.Encode) * [type EnumList](#EnumList) * + [func NewEnumList(s *Segment, n int32) (EnumList[T], error)](#NewEnumList) * + [func (l EnumList[T]) At(i int) T](#EnumList.At) + [func (EnumList[T]) DecodeFromPtr(p Ptr) EnumList[T]](#EnumList.DecodeFromPtr) + [func (l EnumList[T]) EncodeAsPtr(seg *Segment) Ptr](#EnumList.EncodeAsPtr) + [func (l EnumList[T]) IsValid() bool](#EnumList.IsValid) + [func (l EnumList[T]) Len() int](#EnumList.Len) + [func (l EnumList[T]) Message() *Message](#EnumList.Message) + [func (l EnumList[T]) Segment() *Segment](#EnumList.Segment) + [func (l EnumList[T]) Set(i int, v T)](#EnumList.Set) + [func (l EnumList[T]) String() string](#EnumList.String) + [func (l EnumList[T]) ToPtr() Ptr](#EnumList.ToPtr) * [type Float32List](#Float32List) * + [func NewFloat32List(s *Segment, n 
int32) (Float32List, error)](#NewFloat32List) * + [func (l Float32List) At(i int) float32](#Float32List.At) + [func (Float32List) DecodeFromPtr(p Ptr) Float32List](#Float32List.DecodeFromPtr) + [func (l Float32List) EncodeAsPtr(seg *Segment) Ptr](#Float32List.EncodeAsPtr) + [func (l Float32List) IsValid() bool](#Float32List.IsValid) + [func (l Float32List) Len() int](#Float32List.Len) + [func (l Float32List) Message() *Message](#Float32List.Message) + [func (l Float32List) Segment() *Segment](#Float32List.Segment) + [func (l Float32List) Set(i int, v float32)](#Float32List.Set) + [func (l Float32List) String() string](#Float32List.String) + [func (l Float32List) ToPtr() Ptr](#Float32List.ToPtr) * [type Float64List](#Float64List) * + [func NewFloat64List(s *Segment, n int32) (Float64List, error)](#NewFloat64List) * + [func (l Float64List) At(i int) float64](#Float64List.At) + [func (Float64List) DecodeFromPtr(p Ptr) Float64List](#Float64List.DecodeFromPtr) + [func (l Float64List) EncodeAsPtr(seg *Segment) Ptr](#Float64List.EncodeAsPtr) + [func (l Float64List) IsValid() bool](#Float64List.IsValid) + [func (l Float64List) Len() int](#Float64List.Len) + [func (l Float64List) Message() *Message](#Float64List.Message) + [func (l Float64List) Segment() *Segment](#Float64List.Segment) + [func (l Float64List) Set(i int, v float64)](#Float64List.Set) + [func (l Float64List) String() string](#Float64List.String) + [func (l Float64List) ToPtr() Ptr](#Float64List.ToPtr) * [type Future](#Future) * + [func (f *Future) Client() Client](#Future.Client) + [func (f *Future) Done() <-chan struct{}](#Future.Done) + [func (f *Future) Field(off uint16, def []byte) *Future](#Future.Field) + [func (f *Future) List() (List, error)](#Future.List) + [func (f *Future) Ptr() (Ptr, error)](#Future.Ptr) + [func (f *Future) Struct() (Struct, error)](#Future.Struct) * [type Int16List](#Int16List) * + [func NewInt16List(s *Segment, n int32) (Int16List, error)](#NewInt16List) * + [func (l Int16List) 
At(i int) int16](#Int16List.At) + [func (Int16List) DecodeFromPtr(p Ptr) Int16List](#Int16List.DecodeFromPtr) + [func (l Int16List) EncodeAsPtr(seg *Segment) Ptr](#Int16List.EncodeAsPtr) + [func (l Int16List) IsValid() bool](#Int16List.IsValid) + [func (l Int16List) Len() int](#Int16List.Len) + [func (l Int16List) Message() *Message](#Int16List.Message) + [func (l Int16List) Segment() *Segment](#Int16List.Segment) + [func (l Int16List) Set(i int, v int16)](#Int16List.Set) + [func (l Int16List) String() string](#Int16List.String) + [func (l Int16List) ToPtr() Ptr](#Int16List.ToPtr) * [type Int32List](#Int32List) * + [func NewInt32List(s *Segment, n int32) (Int32List, error)](#NewInt32List) * + [func (l Int32List) At(i int) int32](#Int32List.At) + [func (Int32List) DecodeFromPtr(p Ptr) Int32List](#Int32List.DecodeFromPtr) + [func (l Int32List) EncodeAsPtr(seg *Segment) Ptr](#Int32List.EncodeAsPtr) + [func (l Int32List) IsValid() bool](#Int32List.IsValid) + [func (l Int32List) Len() int](#Int32List.Len) + [func (l Int32List) Message() *Message](#Int32List.Message) + [func (l Int32List) Segment() *Segment](#Int32List.Segment) + [func (l Int32List) Set(i int, v int32)](#Int32List.Set) + [func (l Int32List) String() string](#Int32List.String) + [func (l Int32List) ToPtr() Ptr](#Int32List.ToPtr) * [type Int64List](#Int64List) * + [func NewInt64List(s *Segment, n int32) (Int64List, error)](#NewInt64List) * + [func (l Int64List) At(i int) int64](#Int64List.At) + [func (Int64List) DecodeFromPtr(p Ptr) Int64List](#Int64List.DecodeFromPtr) + [func (l Int64List) EncodeAsPtr(seg *Segment) Ptr](#Int64List.EncodeAsPtr) + [func (l Int64List) IsValid() bool](#Int64List.IsValid) + [func (l Int64List) Len() int](#Int64List.Len) + [func (l Int64List) Message() *Message](#Int64List.Message) + [func (l Int64List) Segment() *Segment](#Int64List.Segment) + [func (l Int64List) Set(i int, v int64)](#Int64List.Set) + [func (l Int64List) String() string](#Int64List.String) + [func (l 
Int64List) ToPtr() Ptr](#Int64List.ToPtr) * [type Int8List](#Int8List) * + [func NewInt8List(s *Segment, n int32) (Int8List, error)](#NewInt8List) * + [func (l Int8List) At(i int) int8](#Int8List.At) + [func (Int8List) DecodeFromPtr(p Ptr) Int8List](#Int8List.DecodeFromPtr) + [func (l Int8List) EncodeAsPtr(seg *Segment) Ptr](#Int8List.EncodeAsPtr) + [func (l Int8List) IsValid() bool](#Int8List.IsValid) + [func (l Int8List) Len() int](#Int8List.Len) + [func (l Int8List) Message() *Message](#Int8List.Message) + [func (l Int8List) Segment() *Segment](#Int8List.Segment) + [func (l Int8List) Set(i int, v int8)](#Int8List.Set) + [func (l Int8List) String() string](#Int8List.String) + [func (l Int8List) ToPtr() Ptr](#Int8List.ToPtr) * [type Interface](#Interface) * + [func NewInterface(s *Segment, cap CapabilityID) Interface](#NewInterface) * + [func (i Interface) Capability() CapabilityID](#Interface.Capability) + [func (i Interface) Client() (c Client)](#Interface.Client) + [func (Interface) DecodeFromPtr(p Ptr) Interface](#Interface.DecodeFromPtr) + [func (i Interface) EncodeAsPtr(*Segment) Ptr](#Interface.EncodeAsPtr) + [func (i Interface) IsValid() bool](#Interface.IsValid) + [func (i Interface) Message() *Message](#Interface.Message) + [func (i Interface) ToPtr() Ptr](#Interface.ToPtr) * [type List](#List) * + [func NewCompositeList(s *Segment, sz ObjectSize, n int32) (List, error)](#NewCompositeList) * + [func (List) DecodeFromPtr(p Ptr) List](#List.DecodeFromPtr) + [func (l List) EncodeAsPtr(*Segment) Ptr](#List.EncodeAsPtr) + [func (p List) IsValid() bool](#List.IsValid) + [func (p List) Len() int](#List.Len) + [func (p List) Message() *Message](#List.Message) + [func (p List) Segment() *Segment](#List.Segment) + [func (p List) SetStruct(i int, s Struct) error](#List.SetStruct) + [func (p List) Struct(i int) Struct](#List.Struct) + [func (p List) ToPtr() Ptr](#List.ToPtr) * [type ListKind](#ListKind) * [type Message](#Message) * + [func Unmarshal(data []byte) 
(*Message, error)](#Unmarshal) + [func UnmarshalPacked(data []byte) (*Message, error)](#UnmarshalPacked) * + [func (m *Message) CapTable() *CapTable](#Message.CapTable) + [func (m *Message) Marshal() ([]byte, error)](#Message.Marshal) + [func (m *Message) MarshalPacked() ([]byte, error)](#Message.MarshalPacked) + [func (m *Message) NumSegments() int64](#Message.NumSegments) + [func (m *Message) Release()](#Message.Release) + [func (m *Message) Reset(arena Arena) (first *Segment, err error)](#Message.Reset) + [func (m *Message) ResetReadLimit(limit uint64)](#Message.ResetReadLimit) + [func (m *Message) Root() (Ptr, error)](#Message.Root) + [func (m *Message) Segment(id SegmentID) (*Segment, error)](#Message.Segment) + [func (m *Message) SetRoot(p Ptr) error](#Message.SetRoot) + [func (m *Message) TotalSize() (uint64, error)](#Message.TotalSize) + [func (m *Message) Unread(sz Size)](#Message.Unread) + [func (m *Message) WriteTo(w io.Writer) (int64, error)](#Message.WriteTo) * [type Metadata](#Metadata) * + [func NewMetadata() *Metadata](#NewMetadata) * + [func (m *Metadata) Delete(key any)](#Metadata.Delete) + [func (m *Metadata) Get(key any) (value any, ok bool)](#Metadata.Get) + [func (m *Metadata) Lock()](#Metadata.Lock) + [func (m *Metadata) Put(key, value any)](#Metadata.Put) + [func (m *Metadata) Unlock()](#Metadata.Unlock) * [type Method](#Method) * + [func (m *Method) String() string](#Method.String) * [type MultiSegmentArena](#MultiSegmentArena) * + [func MultiSegment(b [][]byte) *MultiSegmentArena](#MultiSegment) * + [func (msa *MultiSegmentArena) Allocate(sz Size, segs map[SegmentID]*Segment) (SegmentID, []byte, error)](#MultiSegmentArena.Allocate) + [func (msa *MultiSegmentArena) Data(id SegmentID) ([]byte, error)](#MultiSegmentArena.Data) + [func (msa *MultiSegmentArena) NumSegments() int64](#MultiSegmentArena.NumSegments) + [func (msa *MultiSegmentArena) Release()](#MultiSegmentArena.Release) + [func (msa *MultiSegmentArena) String() 
string](#MultiSegmentArena.String) * [type ObjectSize](#ObjectSize) * + [func (sz ObjectSize) GoString() string](#ObjectSize.GoString) + [func (sz ObjectSize) String() string](#ObjectSize.String) * [type PipelineCaller](#PipelineCaller) * [type PipelineClient](#PipelineClient) * + [func (pc PipelineClient) Answer() *Answer](#PipelineClient.Answer) + [func (pc PipelineClient) Brand() Brand](#PipelineClient.Brand) + [func (pc PipelineClient) Recv(ctx context.Context, r Recv) PipelineCaller](#PipelineClient.Recv) + [func (pc PipelineClient) Send(ctx context.Context, s Send) (*Answer, ReleaseFunc)](#PipelineClient.Send) + [func (pc PipelineClient) Shutdown()](#PipelineClient.Shutdown) + [func (pc PipelineClient) String() string](#PipelineClient.String) + [func (pc PipelineClient) Transform() []PipelineOp](#PipelineClient.Transform) * [type PipelineOp](#PipelineOp) * + [func (op PipelineOp) String() string](#PipelineOp.String) * [type PointerList](#PointerList) * + [func NewPointerList(s *Segment, n int32) (PointerList, error)](#NewPointerList) * + [func (p PointerList) At(i int) (Ptr, error)](#PointerList.At) + [func (PointerList) DecodeFromPtr(p Ptr) PointerList](#PointerList.DecodeFromPtr) + [func (l PointerList) EncodeAsPtr(seg *Segment) Ptr](#PointerList.EncodeAsPtr) + [func (l PointerList) IsValid() bool](#PointerList.IsValid) + [func (l PointerList) Len() int](#PointerList.Len) + [func (l PointerList) Message() *Message](#PointerList.Message) + [func (l PointerList) Segment() *Segment](#PointerList.Segment) + [func (p PointerList) Set(i int, v Ptr) error](#PointerList.Set) + [func (l PointerList) ToPtr() Ptr](#PointerList.ToPtr) * [type Promise](#Promise) * + [func NewPromise(m Method, pc PipelineCaller) *Promise](#NewPromise) * + [func (p *Promise) Answer() *Answer](#Promise.Answer) + [func (p *Promise) Fulfill(result Ptr)](#Promise.Fulfill) + [func (p *Promise) Reject(e error)](#Promise.Reject) + [func (p *Promise) ReleaseClients()](#Promise.ReleaseClients) + 
[func (p *Promise) Resolve(r Ptr, e error)](#Promise.Resolve) * [type Ptr](#Ptr) * + [func MustUnmarshalRoot(data []byte) Ptr](#MustUnmarshalRoot) + [func Transform(p Ptr, transform []PipelineOp) (Ptr, error)](#Transform) * + [func (p Ptr) Data() []byte](#Ptr.Data) + [func (p Ptr) DataDefault(def []byte) []byte](#Ptr.DataDefault) + [func (Ptr) DecodeFromPtr(p Ptr) Ptr](#Ptr.DecodeFromPtr) + [func (p Ptr) Default(def []byte) (Ptr, error)](#Ptr.Default) + [func (p Ptr) EncodeAsPtr(*Segment) Ptr](#Ptr.EncodeAsPtr) + [func (p Ptr) Interface() Interface](#Ptr.Interface) + [func (p Ptr) IsValid() bool](#Ptr.IsValid) + [func (p Ptr) List() List](#Ptr.List) + [func (p Ptr) ListDefault(def []byte) (List, error)](#Ptr.ListDefault) + [func (p Ptr) Message() *Message](#Ptr.Message) + [func (p Ptr) Segment() *Segment](#Ptr.Segment) + [func (p Ptr) Struct() Struct](#Ptr.Struct) + [func (p Ptr) StructDefault(def []byte) (Struct, error)](#Ptr.StructDefault) + [func (p Ptr) Text() string](#Ptr.Text) + [func (p Ptr) TextBytes() []byte](#Ptr.TextBytes) + [func (p Ptr) TextBytesDefault(def string) []byte](#Ptr.TextBytesDefault) + [func (p Ptr) TextDefault(def string) string](#Ptr.TextDefault) * [type Recv](#Recv) * + [func (r Recv) AllocResults(sz ObjectSize) (Struct, error)](#Recv.AllocResults) + [func (r Recv) Reject(e error)](#Recv.Reject) + [func (r Recv) Return()](#Recv.Return) * [type ReleaseFunc](#ReleaseFunc) * [type Request](#Request) * + [func NewRequest(client Client, method Method, argsSize ObjectSize) (*Request, error)](#NewRequest) * + [func (r *Request) Args() Struct](#Request.Args) + [func (r *Request) Future() *Future](#Request.Future) + [func (r *Request) Release()](#Request.Release) + [func (r *Request) Send(ctx context.Context) *Future](#Request.Send) + [func (r *Request) SendStream(ctx context.Context) error](#Request.SendStream) * [type Resolver](#Resolver) * + [func NewLocalPromise() (C, Resolver[C])](#NewLocalPromise) * [type Returner](#Returner) * [type 
Segment](#Segment) * + [func (s *Segment) Data() []byte](#Segment.Data) + [func (s *Segment) ID() SegmentID](#Segment.ID) + [func (s *Segment) Message() *Message](#Segment.Message) * [type SegmentID](#SegmentID) * [type Send](#Send) * [type SingleSegmentArena](#SingleSegmentArena) * + [func SingleSegment(b []byte) *SingleSegmentArena](#SingleSegment) * + [func (ssa *SingleSegmentArena) Allocate(sz Size, segs map[SegmentID]*Segment) (SegmentID, []byte, error)](#SingleSegmentArena.Allocate) + [func (ssa SingleSegmentArena) Data(id SegmentID) ([]byte, error)](#SingleSegmentArena.Data) + [func (ssa SingleSegmentArena) NumSegments() int64](#SingleSegmentArena.NumSegments) + [func (ssa *SingleSegmentArena) Release()](#SingleSegmentArena.Release) + [func (ssa SingleSegmentArena) String() string](#SingleSegmentArena.String) * [type Size](#Size) * + [func (sz Size) GoString() string](#Size.GoString) + [func (sz Size) String() string](#Size.String) * [type Struct](#Struct) * + [func NewRootStruct(s *Segment, sz ObjectSize) (Struct, error)](#NewRootStruct) + [func NewStruct(s *Segment, sz ObjectSize) (Struct, error)](#NewStruct) * + [func (p Struct) Bit(n BitOffset) bool](#Struct.Bit) + [func (p Struct) CopyFrom(other Struct) error](#Struct.CopyFrom) + [func (Struct) DecodeFromPtr(p Ptr) Struct](#Struct.DecodeFromPtr) + [func (s Struct) EncodeAsPtr(*Segment) Ptr](#Struct.EncodeAsPtr) + [func (p Struct) HasPtr(i uint16) bool](#Struct.HasPtr) + [func (p Struct) IsValid() bool](#Struct.IsValid) + [func (p Struct) Message() *Message](#Struct.Message) + [func (p Struct) Ptr(i uint16) (Ptr, error)](#Struct.Ptr) + [func (p Struct) Segment() *Segment](#Struct.Segment) + [func (p Struct) SetBit(n BitOffset, v bool)](#Struct.SetBit) + [func (p Struct) SetData(i uint16, v []byte) error](#Struct.SetData) + [func (p Struct) SetNewText(i uint16, v string) error](#Struct.SetNewText) + [func (p Struct) SetPtr(i uint16, src Ptr) error](#Struct.SetPtr) + [func (p Struct) SetText(i uint16, v 
string) error](#Struct.SetText) + [func (p Struct) SetTextFromBytes(i uint16, v []byte) error](#Struct.SetTextFromBytes) + [func (p Struct) SetUint16(off DataOffset, v uint16)](#Struct.SetUint16) + [func (p Struct) SetUint32(off DataOffset, v uint32)](#Struct.SetUint32) + [func (p Struct) SetUint64(off DataOffset, v uint64)](#Struct.SetUint64) + [func (p Struct) SetUint8(off DataOffset, v uint8)](#Struct.SetUint8) + [func (p Struct) Size() ObjectSize](#Struct.Size) + [func (p Struct) ToPtr() Ptr](#Struct.ToPtr) + [func (p Struct) Uint16(off DataOffset) uint16](#Struct.Uint16) + [func (p Struct) Uint32(off DataOffset) uint32](#Struct.Uint32) + [func (p Struct) Uint64(off DataOffset) uint64](#Struct.Uint64) + [func (p Struct) Uint8(off DataOffset) uint8](#Struct.Uint8) * [type StructKind](#StructKind) * [type StructList](#StructList) * + [func (s StructList[T]) At(i int) T](#StructList.At) + [func (StructList[T]) DecodeFromPtr(p Ptr) StructList[T]](#StructList.DecodeFromPtr) + [func (l StructList[T]) EncodeAsPtr(seg *Segment) Ptr](#StructList.EncodeAsPtr) + [func (l StructList[T]) IsValid() bool](#StructList.IsValid) + [func (l StructList[T]) Len() int](#StructList.Len) + [func (l StructList[T]) Message() *Message](#StructList.Message) + [func (l StructList[T]) Segment() *Segment](#StructList.Segment) + [func (s StructList[T]) Set(i int, v T) error](#StructList.Set) + [func (l StructList[T]) ToPtr() Ptr](#StructList.ToPtr) * [type StructReturner](#StructReturner) * + [func (sr *StructReturner) AllocResults(sz ObjectSize) (Struct, error)](#StructReturner.AllocResults) + [func (sr *StructReturner) Answer(m Method, pcall PipelineCaller) (*Answer, ReleaseFunc)](#StructReturner.Answer) + [func (sr *StructReturner) PrepareReturn(e error)](#StructReturner.PrepareReturn) + [func (sr *StructReturner) ReleaseResults()](#StructReturner.ReleaseResults) + [func (sr *StructReturner) Return()](#StructReturner.Return) * [type TextList](#TextList) * + [func NewTextList(s *Segment, n 
int32) (TextList, error)](#NewTextList) * + [func (l TextList) At(i int) (string, error)](#TextList.At) + [func (l TextList) BytesAt(i int) ([]byte, error)](#TextList.BytesAt) + [func (TextList) DecodeFromPtr(p Ptr) TextList](#TextList.DecodeFromPtr) + [func (l TextList) EncodeAsPtr(seg *Segment) Ptr](#TextList.EncodeAsPtr) + [func (l TextList) IsValid() bool](#TextList.IsValid) + [func (l TextList) Len() int](#TextList.Len) + [func (l TextList) Message() *Message](#TextList.Message) + [func (l TextList) Segment() *Segment](#TextList.Segment) + [func (l TextList) Set(i int, v string) error](#TextList.Set) + [func (l TextList) String() string](#TextList.String) + [func (l TextList) ToPtr() Ptr](#TextList.ToPtr) * [type TypeParam](#TypeParam) * [type UInt16List](#UInt16List) * + [func NewUInt16List(s *Segment, n int32) (UInt16List, error)](#NewUInt16List) * + [func (l UInt16List) At(i int) uint16](#UInt16List.At) + [func (UInt16List) DecodeFromPtr(p Ptr) UInt16List](#UInt16List.DecodeFromPtr) + [func (l UInt16List) EncodeAsPtr(seg *Segment) Ptr](#UInt16List.EncodeAsPtr) + [func (l UInt16List) IsValid() bool](#UInt16List.IsValid) + [func (l UInt16List) Len() int](#UInt16List.Len) + [func (l UInt16List) Message() *Message](#UInt16List.Message) + [func (l UInt16List) Segment() *Segment](#UInt16List.Segment) + [func (l UInt16List) Set(i int, v uint16)](#UInt16List.Set) + [func (l UInt16List) String() string](#UInt16List.String) + [func (l UInt16List) ToPtr() Ptr](#UInt16List.ToPtr) * [type UInt32List](#UInt32List) * + [func NewUInt32List(s *Segment, n int32) (UInt32List, error)](#NewUInt32List) * + [func (l UInt32List) At(i int) uint32](#UInt32List.At) + [func (UInt32List) DecodeFromPtr(p Ptr) UInt32List](#UInt32List.DecodeFromPtr) + [func (l UInt32List) EncodeAsPtr(seg *Segment) Ptr](#UInt32List.EncodeAsPtr) + [func (l UInt32List) IsValid() bool](#UInt32List.IsValid) + [func (l UInt32List) Len() int](#UInt32List.Len) + [func (l UInt32List) Message() 
*Message](#UInt32List.Message) + [func (l UInt32List) Segment() *Segment](#UInt32List.Segment) + [func (l UInt32List) Set(i int, v uint32)](#UInt32List.Set) + [func (l UInt32List) String() string](#UInt32List.String) + [func (l UInt32List) ToPtr() Ptr](#UInt32List.ToPtr) * [type UInt64List](#UInt64List) * + [func NewUInt64List(s *Segment, n int32) (UInt64List, error)](#NewUInt64List) * + [func (l UInt64List) At(i int) uint64](#UInt64List.At) + [func (UInt64List) DecodeFromPtr(p Ptr) UInt64List](#UInt64List.DecodeFromPtr) + [func (l UInt64List) EncodeAsPtr(seg *Segment) Ptr](#UInt64List.EncodeAsPtr) + [func (l UInt64List) IsValid() bool](#UInt64List.IsValid) + [func (l UInt64List) Len() int](#UInt64List.Len) + [func (l UInt64List) Message() *Message](#UInt64List.Message) + [func (l UInt64List) Segment() *Segment](#UInt64List.Segment) + [func (l UInt64List) Set(i int, v uint64)](#UInt64List.Set) + [func (l UInt64List) String() string](#UInt64List.String) + [func (l UInt64List) ToPtr() Ptr](#UInt64List.ToPtr) * [type UInt8List](#UInt8List) * + [func NewData(s *Segment, v []byte) (UInt8List, error)](#NewData) + [func NewText(s *Segment, v string) (UInt8List, error)](#NewText) + [func NewTextFromBytes(s *Segment, v []byte) (UInt8List, error)](#NewTextFromBytes) + [func NewUInt8List(s *Segment, n int32) (UInt8List, error)](#NewUInt8List) * + [func (l UInt8List) At(i int) uint8](#UInt8List.At) + [func (UInt8List) DecodeFromPtr(p Ptr) UInt8List](#UInt8List.DecodeFromPtr) + [func (l UInt8List) EncodeAsPtr(seg *Segment) Ptr](#UInt8List.EncodeAsPtr) + [func (l UInt8List) IsValid() bool](#UInt8List.IsValid) + [func (l UInt8List) Len() int](#UInt8List.Len) + [func (l UInt8List) Message() *Message](#UInt8List.Message) + [func (l UInt8List) Segment() *Segment](#UInt8List.Segment) + [func (l UInt8List) Set(i int, v uint8)](#UInt8List.Set) + [func (l UInt8List) String() string](#UInt8List.String) + [func (l UInt8List) ToPtr() Ptr](#UInt8List.ToPtr) * [type VoidList](#VoidList) * + 
[func NewVoidList(s *Segment, n int32) VoidList](#NewVoidList) * + [func (VoidList) DecodeFromPtr(p Ptr) VoidList](#VoidList.DecodeFromPtr) + [func (l VoidList) EncodeAsPtr(seg *Segment) Ptr](#VoidList.EncodeAsPtr) + [func (l VoidList) IsValid() bool](#VoidList.IsValid) + [func (l VoidList) Len() int](#VoidList.Len) + [func (l VoidList) Message() *Message](#VoidList.Message) + [func (l VoidList) Segment() *Segment](#VoidList.Segment) + [func (l VoidList) String() string](#VoidList.String) + [func (l VoidList) ToPtr() Ptr](#VoidList.ToPtr) * [type WeakClient](#WeakClient) * + [func (wc *WeakClient) AddRef() (c Client, ok bool)](#WeakClient.AddRef) #### Examples [¶](#pkg-examples) * [Package](#example-package) * [Unmarshal](#example-Unmarshal) ### Constants [¶](#pkg-constants) This section is empty. ### Variables [¶](#pkg-variables) This section is empty. ### Functions [¶](#pkg-functions) #### func [Canonicalize](https://github.com/capnproto/go-capnproto2/blob/v3.0.0-alpha-29/canonical.go#L12) [¶](#Canonicalize) ``` func Canonicalize(s [Struct](#Struct)) ([][byte](/builtin#byte), [error](/builtin#error)) ``` Canonicalize encodes a struct into its canonical form: a single- segment blob without a segment table. The result will be identical for equivalent structs, even as the schema evolves. The blob is suitable for hashing or signing. #### func [Disconnected](https://github.com/capnproto/go-capnproto2/blob/v3.0.0-alpha-29/error.go#L23) [¶](#Disconnected) ``` func Disconnected(s [string](/builtin#string)) [error](/builtin#error) ``` Disconnected returns an error that formats as the given text and will report true when passed to IsDisconnected. #### func [Equal](https://github.com/capnproto/go-capnproto2/blob/v3.0.0-alpha-29/pointer.go#L287) [¶](#Equal) ``` func Equal(p1, p2 [Ptr](#Ptr)) ([bool](/builtin#bool), [error](/builtin#error)) ``` Equal returns true iff p1 and p2 are equal. Equality is defined to be: * Two structs are equal iff all of their fields are equal. 
If one struct has more fields than the other, the extra fields must all be zero. * Two lists are equal iff they have the same length and their corresponding elements are equal. If one list is a list of primitives and the other is a list of structs, then the list of primitives is treated as if it was a list of structs with the element value as the sole field. * Two interfaces are equal iff they point to a capability created by the same call to NewClient or they are referring to the same capability table index in the same message. The latter is significant when the message's capability table has not been populated. * Two null pointers are equal. * All other combinations of things are not equal. #### func [IsDisconnected](https://github.com/capnproto/go-capnproto2/blob/v3.0.0-alpha-29/error.go#L28) [¶](#IsDisconnected) ``` func IsDisconnected(e [error](/builtin#error)) [bool](/builtin#bool) ``` IsDisconnected reports whether e indicates a failure due to loss of a necessary capability. #### func [IsUnimplemented](https://github.com/capnproto/go-capnproto2/blob/v3.0.0-alpha-29/error.go#L17) [¶](#IsUnimplemented) ``` func IsUnimplemented(e [error](/builtin#error)) [bool](/builtin#bool) ``` IsUnimplemented reports whether e indicates that functionality is unimplemented. #### func [NewMessage](https://github.com/capnproto/go-capnproto2/blob/v3.0.0-alpha-29/message.go#L63) [¶](#NewMessage) ``` func NewMessage(arena [Arena](#Arena)) (*[Message](#Message), *[Segment](#Segment), [error](/builtin#error)) ``` NewMessage creates a message with a new root and returns the first segment. It is an error to call NewMessage on an arena with data in it. #### func [NewMultiSegmentMessage](https://github.com/capnproto/go-capnproto2/blob/v3.0.0-alpha-29/message.go#L82) [¶](#NewMultiSegmentMessage) ``` func NewMultiSegmentMessage(b [][][byte](/builtin#byte)) (msg *[Message](#Message), first *[Segment](#Segment)) ``` Analogous to NewSingleSegmentMessage, but using MultiSegment. 
#### func [NewPromisedClient](https://github.com/capnproto/go-capnproto2/blob/v3.0.0-alpha-29/capability.go#L190) [¶](#NewPromisedClient) ``` func NewPromisedClient(hook [ClientHook](#ClientHook)) ([Client](#Client), [Resolver](#Resolver)[[Client](#Client)]) ``` NewPromisedClient creates the first reference to a capability that can resolve to a different capability. The hook will be shut down when the promise is resolved or the client has no more references, whichever comes first. Typically the RPC system will create a client for the application. Most applications will not need to use this directly. #### func [NewSingleSegmentMessage](https://github.com/capnproto/go-capnproto2/blob/v3.0.0-alpha-29/message.go#L73) [¶](#NewSingleSegmentMessage) ``` func NewSingleSegmentMessage(b [][byte](/builtin#byte)) (msg *[Message](#Message), first *[Segment](#Segment)) ``` NewSingleSegmentMessage(b) is equivalent to NewMessage(SingleSegment(b)), except that it panics instead of returning an error. This can only happen if the passed slice contains data, so the caller is responsible for ensuring that it has a length of zero. #### func [SamePtr](https://github.com/capnproto/go-capnproto2/blob/v3.0.0-alpha-29/pointer.go#L202) [¶](#SamePtr) ``` func SamePtr(p, q [Ptr](#Ptr)) [bool](/builtin#bool) ``` SamePtr reports whether p and q refer to the same object. #### func [SetClientLeakFunc](https://github.com/capnproto/go-capnproto2/blob/v3.0.0-alpha-29/capability.go#L696) [¶](#SetClientLeakFunc) ``` func SetClientLeakFunc(clientLeakFunc func(msg [string](/builtin#string))) ``` SetClientLeakFunc sets a callback for reporting Clients that went out of scope without being released. The callback is not guaranteed to be called and must be safe to call concurrently from multiple goroutines. The exact format of the message is unspecified. SetClientLeakFunc must not be called after any calls to NewClient or NewPromisedClient. 
#### func [Unimplemented](https://github.com/capnproto/go-capnproto2/blob/v3.0.0-alpha-29/error.go#L12) [¶](#Unimplemented) ``` func Unimplemented(s [string](/builtin#string)) [error](/builtin#error) ``` Unimplemented returns an error that formats as the given text and will report true when passed to IsUnimplemented. ### Types [¶](#pkg-types) #### type [Answer](https://github.com/capnproto/go-capnproto2/blob/v3.0.0-alpha-29/answer.go#L233) [¶](#Answer)

```
type Answer struct {
	// contains filtered or unexported fields
}
```

An Answer is a deferred result of a client call. Conceptually, this is a future. It is safe to use from multiple goroutines. #### func [ErrorAnswer](https://github.com/capnproto/go-capnproto2/blob/v3.0.0-alpha-29/answer.go#L239) [¶](#ErrorAnswer) ``` func ErrorAnswer(m [Method](#Method), e [error](/builtin#error)) *[Answer](#Answer) ``` ErrorAnswer returns an Answer that always returns error e. #### func [ImmediateAnswer](https://github.com/capnproto/go-capnproto2/blob/v3.0.0-alpha-29/answer.go#L252) [¶](#ImmediateAnswer) ``` func ImmediateAnswer(m [Method](#Method), ptr [Ptr](#Ptr)) *[Answer](#Answer) ``` ImmediateAnswer returns an Answer that accesses ptr. #### func (*Answer) [Client](https://github.com/capnproto/go-capnproto2/blob/v3.0.0-alpha-29/answer.go#L297) [¶](#Answer.Client) ``` func (ans *[Answer](#Answer)) Client() [Client](#Client) ``` Client returns the answer as a client. If the answer's originating call has not completed, then calls will be queued until the original call's completion. The client reference is borrowed: the caller should not call Close. #### func (*Answer) [Done](https://github.com/capnproto/go-capnproto2/blob/v3.0.0-alpha-29/answer.go#L277) [¶](#Answer.Done) ``` func (ans *[Answer](#Answer)) Done() <-chan struct{} ``` Done returns a channel that is closed when the answer's call is finished.
#### func (*Answer) [Field](https://github.com/capnproto/go-capnproto2/blob/v3.0.0-alpha-29/answer.go#L303) [¶](#Answer.Field) ``` func (ans *[Answer](#Answer)) Field(off [uint16](/builtin#uint16), def [][byte](/builtin#byte)) *[Future](#Future) ``` Field returns a derived future which yields the pointer field given, defaulting to the value given. #### func (*Answer) [Future](https://github.com/capnproto/go-capnproto2/blob/v3.0.0-alpha-29/answer.go#L266) [¶](#Answer.Future) ``` func (ans *[Answer](#Answer)) Future() *[Future](#Future) ``` Future returns a future that is equivalent to ans. #### func (*Answer) [List](https://github.com/capnproto/go-capnproto2/blob/v3.0.0-alpha-29/answer.go#L289) [¶](#Answer.List) ``` func (ans *[Answer](#Answer)) List() ([List](#List), [error](/builtin#error)) ``` List waits until the answer is resolved and returns the list this answer represents. #### func (*Answer) [Metadata](https://github.com/capnproto/go-capnproto2/blob/v3.0.0-alpha-29/answer.go#L272) [¶](#Answer.Metadata) ``` func (ans *[Answer](#Answer)) Metadata() *[Metadata](#Metadata) ``` Metadata returns a metadata map where callers can store information about the answer #### func (*Answer) [PipelineRecv](https://github.com/capnproto/go-capnproto2/blob/v3.0.0-alpha-29/answer.go#L336) [¶](#Answer.PipelineRecv) ``` func (ans *[Answer](#Answer)) PipelineRecv(ctx [context](/context).[Context](/context#Context), transform [][PipelineOp](#PipelineOp), r [Recv](#Recv)) [PipelineCaller](#PipelineCaller) ``` PipelineRecv starts a pipelined call. #### func (*Answer) [PipelineSend](https://github.com/capnproto/go-capnproto2/blob/v3.0.0-alpha-29/answer.go#L308) [¶](#Answer.PipelineSend) ``` func (ans *[Answer](#Answer)) PipelineSend(ctx [context](/context).[Context](/context#Context), transform [][PipelineOp](#PipelineOp), s [Send](#Send)) (*[Answer](#Answer), [ReleaseFunc](#ReleaseFunc)) ``` PipelineSend starts a pipelined call. 
#### func (*Answer) [Struct](https://github.com/capnproto/go-capnproto2/blob/v3.0.0-alpha-29/answer.go#L283) [¶](#Answer.Struct) ``` func (ans *[Answer](#Answer)) Struct() ([Struct](#Struct), [error](/builtin#error)) ``` Struct waits until the answer is resolved and returns the struct this answer represents. #### type [AnswerQueue](https://github.com/capnproto/go-capnproto2/blob/v3.0.0-alpha-29/answerqueue.go#L23) [¶](#AnswerQueue)

```
type AnswerQueue struct {
	// contains filtered or unexported fields
}
```

AnswerQueue is a queue of method calls to make after an earlier method call finishes. The queue is unbounded; it is the caller's responsibility to manage/impose backpressure. An AnswerQueue can be in one of three states:

1. Queueing. Incoming method calls will be added to the queue.
2. Draining, entered by calling Fulfill or Reject. Queued method calls will be delivered in sequence, and new incoming method calls will block until the AnswerQueue enters the Drained state.
3. Drained, entered once all queued methods have been delivered. Incoming methods are passthrough.

#### func [NewAnswerQueue](https://github.com/capnproto/go-capnproto2/blob/v3.0.0-alpha-29/answerqueue.go#L47) [¶](#NewAnswerQueue) ``` func NewAnswerQueue(m [Method](#Method)) *[AnswerQueue](#AnswerQueue) ``` NewAnswerQueue creates a new answer queue. #### func (*AnswerQueue) [Fulfill](https://github.com/capnproto/go-capnproto2/blob/v3.0.0-alpha-29/answerqueue.go#L61) [¶](#AnswerQueue.Fulfill) ``` func (aq *[AnswerQueue](#AnswerQueue)) Fulfill(ptr [Ptr](#Ptr)) ``` Fulfill empties the queue, delivering the method calls on the given pointer. After Fulfill returns, pipeline calls will be immediately delivered instead of being queued.
#### func (*AnswerQueue) [PipelineRecv](https://github.com/capnproto/go-capnproto2/blob/v3.0.0-alpha-29/answerqueue.go#L114) [¶](#AnswerQueue.PipelineRecv) ``` func (aq *[AnswerQueue](#AnswerQueue)) PipelineRecv(ctx [context](/context).[Context](/context#Context), transform [][PipelineOp](#PipelineOp), r [Recv](#Recv)) [PipelineCaller](#PipelineCaller) ``` #### func (*AnswerQueue) [PipelineSend](https://github.com/capnproto/go-capnproto2/blob/v3.0.0-alpha-29/answerqueue.go#L118) [¶](#AnswerQueue.PipelineSend) ``` func (aq *[AnswerQueue](#AnswerQueue)) PipelineSend(ctx [context](/context).[Context](/context#Context), transform [][PipelineOp](#PipelineOp), r [Send](#Send)) (*[Answer](#Answer), [ReleaseFunc](#ReleaseFunc)) ``` #### func (*AnswerQueue) [Reject](https://github.com/capnproto/go-capnproto2/blob/v3.0.0-alpha-29/answerqueue.go#L85) [¶](#AnswerQueue.Reject) ``` func (aq *[AnswerQueue](#AnswerQueue)) Reject(e [error](/builtin#error)) ``` Reject empties the queue, returning errors on all the method calls. #### type [Arena](https://github.com/capnproto/go-capnproto2/blob/v3.0.0-alpha-29/arena.go#L12) [¶](#Arena)

```
type Arena interface {
	// NumSegments returns the number of segments in the arena.
	// This must not be larger than 1<<32.
	NumSegments() [int64](/builtin#int64)

	// Data loads the data for the segment with the given ID. IDs
	// must be tightly packed in the range [0, NumSegments()).
	Data(id [SegmentID](#SegmentID)) ([][byte](/builtin#byte), [error](/builtin#error))

	// Allocate selects a segment to place a new object in, creating a
	// segment or growing the capacity of a previously loaded segment if
	// necessary. If Allocate does not return an error, then the
	// difference of the capacity and the length of the returned slice
	// must be at least minsz. segs is a map of segments keyed by ID
	// using arrays returned by the Data method (although the length of
	// these slices may have changed by previous allocations). Allocate
	// must not modify segs.
	//
	// If Allocate creates a new segment, the ID must be one larger than
	// the last segment's ID or zero if it is the first segment.
	//
	// If Allocate returns a previously loaded segment's ID, then the
	// arena is responsible for preserving the existing data in the
	// returned byte slice.
	Allocate(minsz [Size](#Size), segs map[[SegmentID](#SegmentID)]*[Segment](#Segment)) ([SegmentID](#SegmentID), [][byte](/builtin#byte), [error](/builtin#error))

	// Release all resources associated with the Arena. Callers MUST NOT
	// use the Arena after it has been released.
	//
	// Calling Release() is OPTIONAL, but may reduce allocations.
	//
	// Implementations MAY use Release() as a signal to return resources
	// to free lists, or otherwise reuse the Arena. However, they MUST
	// NOT assume Release() will be called.
	Release()
}
```

An Arena loads and allocates segments for a Message. #### type [BitList](https://github.com/capnproto/go-capnproto2/blob/v3.0.0-alpha-29/list.go#L250) [¶](#BitList) ``` type BitList [List](#List) ``` A BitList is a reference to a list of booleans. #### func [NewBitList](https://github.com/capnproto/go-capnproto2/blob/v3.0.0-alpha-29/list.go#L255) [¶](#NewBitList) ``` func NewBitList(s *[Segment](#Segment), n [int32](/builtin#int32)) ([BitList](#BitList), [error](/builtin#error)) ``` NewBitList creates a new bit list, preferring placement in s. #### func (BitList) [At](https://github.com/capnproto/go-capnproto2/blob/v3.0.0-alpha-29/list.go#L279) [¶](#BitList.At) ``` func (p [BitList](#BitList)) At(i [int](/builtin#int)) [bool](/builtin#bool) ``` At returns the i'th bit.
#### func (BitList) [DecodeFromPtr](https://github.com/capnproto/go-capnproto2/blob/v3.0.0-alpha-29/list-gen.go#L49) [¶](#BitList.DecodeFromPtr) ``` func ([BitList](#BitList)) DecodeFromPtr(p [Ptr](#Ptr)) [BitList](#BitList) ``` #### func (BitList) [EncodeAsPtr](https://github.com/capnproto/go-capnproto2/blob/v3.0.0-alpha-29/list-gen.go#L45) [¶](#BitList.EncodeAsPtr) ``` func (l [BitList](#BitList)) EncodeAsPtr(seg *[Segment](#Segment)) [Ptr](#Ptr) ``` #### func (BitList) [IsValid](https://github.com/capnproto/go-capnproto2/blob/v3.0.0-alpha-29/list-gen.go#L37) [¶](#BitList.IsValid) ``` func (l [BitList](#BitList)) IsValid() [bool](/builtin#bool) ``` #### func (BitList) [Len](https://github.com/capnproto/go-capnproto2/blob/v3.0.0-alpha-29/list-gen.go#L41) [¶](#BitList.Len) ``` func (l [BitList](#BitList)) Len() [int](/builtin#int) ``` #### func (BitList) [Message](https://github.com/capnproto/go-capnproto2/blob/v3.0.0-alpha-29/list-gen.go#L53) [¶](#BitList.Message) ``` func (l [BitList](#BitList)) Message() *[Message](#Message) ``` #### func (BitList) [Segment](https://github.com/capnproto/go-capnproto2/blob/v3.0.0-alpha-29/list-gen.go#L57) [¶](#BitList.Segment) ``` func (l [BitList](#BitList)) Segment() *[Segment](#Segment) ``` #### func (BitList) [Set](https://github.com/capnproto/go-capnproto2/blob/v3.0.0-alpha-29/list.go#L293) [¶](#BitList.Set) ``` func (p [BitList](#BitList)) Set(i [int](/builtin#int), v [bool](/builtin#bool)) ``` Set sets the i'th bit to v. #### func (BitList) [String](https://github.com/capnproto/go-capnproto2/blob/v3.0.0-alpha-29/list.go#L313) [¶](#BitList.String) ``` func (p [BitList](#BitList)) String() [string](/builtin#string) ``` String returns the list in Cap'n Proto schema format (e.g. "[true, false]"). 
#### func (BitList) [ToPtr](https://github.com/capnproto/go-capnproto2/blob/v3.0.0-alpha-29/list-gen.go#L61) [¶](#BitList.ToPtr) ``` func (l [BitList](#BitList)) ToPtr() [Ptr](#Ptr) ``` #### type [BitOffset](https://github.com/capnproto/go-capnproto2/blob/v3.0.0-alpha-29/address.go#L197) [¶](#BitOffset) ``` type BitOffset [uint32](/builtin#uint32) ``` BitOffset is an offset in bits from the beginning of a struct's data section. It is bounded to [0, 1<<22). #### func (BitOffset) [GoString](https://github.com/capnproto/go-capnproto2/blob/v3.0.0-alpha-29/address.go#L215) [¶](#BitOffset.GoString) ``` func (bit [BitOffset](#BitOffset)) GoString() [string](/builtin#string) ``` GoString returns the offset as a Go expression. #### func (BitOffset) [String](https://github.com/capnproto/go-capnproto2/blob/v3.0.0-alpha-29/address.go#L210) [¶](#BitOffset.String) ``` func (bit [BitOffset](#BitOffset)) String() [string](/builtin#string) ``` String returns the offset in the format "bit X". #### type [Brand](https://github.com/capnproto/go-capnproto2/blob/v3.0.0-alpha-29/capability.go#L573) [¶](#Brand) ``` type Brand struct { Value [any](/builtin#any) } ``` A Brand is an opaque value used to identify a capability. #### type [CapList](https://github.com/capnproto/go-capnproto2/blob/v3.0.0-alpha-29/list.go#L1082) [¶](#CapList) ``` type CapList[T ~[ClientKind](#ClientKind)] [PointerList](#PointerList) ``` A list of some Cap'n Proto capability type T. 
#### func (CapList[T]) [At](https://github.com/capnproto/go-capnproto2/blob/v3.0.0-alpha-29/list.go#L1084) [¶](#CapList.At) ``` func (c [CapList](#CapList)[T]) At(i [int](/builtin#int)) (T, [error](/builtin#error)) ``` #### func (CapList[T]) [DecodeFromPtr](https://github.com/capnproto/go-capnproto2/blob/v3.0.0-alpha-29/list-gen.go#L305) [¶](#CapList.DecodeFromPtr) ``` func ([CapList](#CapList)[T]) DecodeFromPtr(p [Ptr](#Ptr)) [CapList](#CapList)[T] ``` #### func (CapList[T]) [EncodeAsPtr](https://github.com/capnproto/go-capnproto2/blob/v3.0.0-alpha-29/list-gen.go#L301) [¶](#CapList.EncodeAsPtr) ``` func (l [CapList](#CapList)[T]) EncodeAsPtr(seg *[Segment](#Segment)) [Ptr](#Ptr) ``` #### func (CapList[T]) [IsValid](https://github.com/capnproto/go-capnproto2/blob/v3.0.0-alpha-29/list-gen.go#L293) [¶](#CapList.IsValid) ``` func (l [CapList](#CapList)[T]) IsValid() [bool](/builtin#bool) ``` #### func (CapList[T]) [Len](https://github.com/capnproto/go-capnproto2/blob/v3.0.0-alpha-29/list-gen.go#L297) [¶](#CapList.Len) ``` func (l [CapList](#CapList)[T]) Len() [int](/builtin#int) ``` #### func (CapList[T]) [Message](https://github.com/capnproto/go-capnproto2/blob/v3.0.0-alpha-29/list-gen.go#L309) [¶](#CapList.Message) ``` func (l [CapList](#CapList)[T]) Message() *[Message](#Message) ``` #### func (CapList[T]) [Segment](https://github.com/capnproto/go-capnproto2/blob/v3.0.0-alpha-29/list-gen.go#L313) [¶](#CapList.Segment) ``` func (l [CapList](#CapList)[T]) Segment() *[Segment](#Segment) ``` #### func (CapList[T]) [Set](https://github.com/capnproto/go-capnproto2/blob/v3.0.0-alpha-29/list.go#L1092) [¶](#CapList.Set) ``` func (c [CapList](#CapList)[T]) Set(i [int](/builtin#int), v T) [error](/builtin#error) ``` #### func (CapList[T]) [ToPtr](https://github.com/capnproto/go-capnproto2/blob/v3.0.0-alpha-29/list-gen.go#L317) [¶](#CapList.ToPtr) ``` func (l [CapList](#CapList)[T]) ToPtr() [Ptr](#Ptr) ``` #### type 
[CapTable](https://github.com/capnproto/go-capnproto2/blob/v3.0.0-alpha-29/captable.go#L9) [¶](#CapTable) ``` type CapTable struct { // contains filtered or unexported fields } ``` CapTable is the indexed list of the clients referenced in the message. Capability pointers inside the message will use this table to map pointers to Clients. The table is populated by the RPC system. <https://capnproto.org/encoding.html#capabilities-interfaces>
#### func (*CapTable) [Add](https://github.com/capnproto/go-capnproto2/blob/v3.0.0-alpha-29/captable.go#L61) [¶](#CapTable.Add) ``` func (ct *[CapTable](#CapTable)) Add(c [Client](#Client)) [CapabilityID](#CapabilityID) ``` Add appends a capability to the message's capability table and returns its ID. It "steals" c's reference: the Message will release the client when calling Reset. #### func (CapTable) [At](https://github.com/capnproto/go-capnproto2/blob/v3.0.0-alpha-29/captable.go#L30) [¶](#CapTable.At) ``` func (ct [CapTable](#CapTable)) At(i [int](/builtin#int)) [Client](#Client) ``` At returns the capability at the given index of the table. #### func (CapTable) [Contains](https://github.com/capnproto/go-capnproto2/blob/v3.0.0-alpha-29/captable.go#L36) [¶](#CapTable.Contains) ``` func (ct [CapTable](#CapTable)) Contains(ifc [Interface](#Interface)) [bool](/builtin#bool) ``` Contains returns true if the supplied interface corresponds to a client already present in the table. #### func (CapTable) [Get](https://github.com/capnproto/go-capnproto2/blob/v3.0.0-alpha-29/captable.go#L43) [¶](#CapTable.Get) ``` func (ct [CapTable](#CapTable)) Get(ifc [Interface](#Interface)) (c [Client](#Client)) ``` Get the client corresponding to the supplied interface. It returns a null client if the interface's CapabilityID isn't in the table. 
#### func (CapTable) [Len](https://github.com/capnproto/go-capnproto2/blob/v3.0.0-alpha-29/captable.go#L25) [¶](#CapTable.Len) ``` func (ct [CapTable](#CapTable)) Len() [int](/builtin#int) ``` Len returns the number of capabilities in the table. #### func (*CapTable) [Reset](https://github.com/capnproto/go-capnproto2/blob/v3.0.0-alpha-29/captable.go#L16) [¶](#CapTable.Reset) ``` func (ct *[CapTable](#CapTable)) Reset(cs ...[Client](#Client)) ``` Reset the cap table, releasing all capabilities and setting the length to zero. Clients passed as arguments are added to the table after zeroing, such that ct.Len() == len(cs). #### func (CapTable) [Set](https://github.com/capnproto/go-capnproto2/blob/v3.0.0-alpha-29/captable.go#L54) [¶](#CapTable.Set) ``` func (ct [CapTable](#CapTable)) Set(id [CapabilityID](#CapabilityID), c [Client](#Client)) ``` Set the client for the supplied capability ID. If a client for the given ID already exists, it will be replaced without releasing. #### type [CapabilityID](https://github.com/capnproto/go-capnproto2/blob/v3.0.0-alpha-29/capability.go#L91) [¶](#CapabilityID) ``` type CapabilityID [uint32](/builtin#uint32) ``` A CapabilityID is an index into a message's capability table. #### func (CapabilityID) [GoString](https://github.com/capnproto/go-capnproto2/blob/v3.0.0-alpha-29/capability.go#L99) [¶](#CapabilityID.GoString) ``` func (id [CapabilityID](#CapabilityID)) GoString() [string](/builtin#string) ``` GoString returns the ID as a Go expression. #### func (CapabilityID) [String](https://github.com/capnproto/go-capnproto2/blob/v3.0.0-alpha-29/capability.go#L94) [¶](#CapabilityID.String) ``` func (id [CapabilityID](#CapabilityID)) String() [string](/builtin#string) ``` String returns the ID in the format "capability X". #### type [Client](https://github.com/capnproto/go-capnproto2/blob/v3.0.0-alpha-29/capability.go#L106) [¶](#Client) ``` type Client [ClientKind](#ClientKind) ``` A Client is a reference to a Cap'n Proto capability. 
The zero value is a null capability reference. It is safe to use from multiple goroutines. #### func [ErrorClient](https://github.com/capnproto/go-capnproto2/blob/v3.0.0-alpha-29/capability.go#L1002) [¶](#ErrorClient) ``` func ErrorClient(e [error](/builtin#error)) [Client](#Client) ``` ErrorClient returns a Client that always returns error e. An ErrorClient does not need to be released: it is a sentinel like a nil Client. The returned client's State() method returns a State with its Brand.Value set to e. #### func [NewClient](https://github.com/capnproto/go-capnproto2/blob/v3.0.0-alpha-29/capability.go#L161) [¶](#NewClient) ``` func NewClient(hook [ClientHook](#ClientHook)) [Client](#Client) ``` NewClient creates the first reference to a capability. If hook is nil, then NewClient returns nil. Typically the RPC system will create a client for the application. Most applications will not need to use this directly. #### func (Client) [AddRef](https://github.com/capnproto/go-capnproto2/blob/v3.0.0-alpha-29/capability.go#L517) [¶](#Client.AddRef) ``` func (c [Client](#Client)) AddRef() [Client](#Client) ``` AddRef creates a new Client that refers to the same capability as c. If c is nil or has resolved to null, then AddRef returns nil. 
#### func (Client) [DecodeFromPtr](https://github.com/capnproto/go-capnproto2/blob/v3.0.0-alpha-29/capability.go#L671) [¶](#Client.DecodeFromPtr) ``` func ([Client](#Client)) DecodeFromPtr(p [Ptr](#Ptr)) [Client](#Client) ``` #### func (Client) [EncodeAsPtr](https://github.com/capnproto/go-capnproto2/blob/v3.0.0-alpha-29/capability.go#L666) [¶](#Client.EncodeAsPtr) ``` func (c [Client](#Client)) EncodeAsPtr(seg *[Segment](#Segment)) [Ptr](#Ptr) ``` #### func (Client) [GetFlowLimiter](https://github.com/capnproto/go-capnproto2/blob/v3.0.0-alpha-29/capability.go#L289) [¶](#Client.GetFlowLimiter) ``` func (c [Client](#Client)) GetFlowLimiter() [flowcontrol](/capnproto.org/go/capnp/v3@v3.0.0-alpha-29/flowcontrol).[FlowLimiter](/capnproto.org/go/capnp/v3@v3.0.0-alpha-29/flowcontrol#FlowLimiter) ``` Get the current flowcontrol.FlowLimiter used to manage flow control for this client. #### func (Client) [IsSame](https://github.com/capnproto/go-capnproto2/blob/v3.0.0-alpha-29/capability.go#L477) [¶](#Client.IsSame) ``` func (c [Client](#Client)) IsSame(c2 [Client](#Client)) [bool](/builtin#bool) ``` IsSame reports whether c and c2 refer to a capability created by the same call to NewClient. This can return false negatives if c or c2 are not fully resolved: use Resolve if this is an issue. If either c or c2 are released, then IsSame panics. #### func (Client) [IsValid](https://github.com/capnproto/go-capnproto2/blob/v3.0.0-alpha-29/capability.go#L468) [¶](#Client.IsValid) ``` func (c [Client](#Client)) IsValid() [bool](/builtin#bool) ``` IsValid reports whether c is a valid reference to a capability. A reference is invalid if it is nil, has resolved to null, or has been released. 
#### func (Client) [RecvCall](https://github.com/capnproto/go-capnproto2/blob/v3.0.0-alpha-29/capability.go#L451) [¶](#Client.RecvCall) ``` func (c [Client](#Client)) RecvCall(ctx [context](/context).[Context](/context#Context), r [Recv](#Recv)) [PipelineCaller](#PipelineCaller) ``` RecvCall starts executing a method with the referenced arguments and returns an answer that will hold the result. The hook will call a.Release when it no longer needs to reference the parameters. The caller must call the returned release function when it no longer needs the answer's data. Note that unlike SendCall, this method does *not* respect the flow control policy configured with SetFlowLimiter. #### func (Client) [Release](https://github.com/capnproto/go-capnproto2/blob/v3.0.0-alpha-29/capability.go#L632) [¶](#Client.Release) ``` func (c [Client](#Client)) Release() ``` Release releases a capability reference. If this is the last reference to the capability, then the underlying resources associated with the capability will be released. Release has no effect if c has already been released, or if c is nil or resolved to null. #### func (Client) [Resolve](https://github.com/capnproto/go-capnproto2/blob/v3.0.0-alpha-29/capability.go#L492) [¶](#Client.Resolve) ``` func (c [Client](#Client)) Resolve(ctx [context](/context).[Context](/context#Context)) [error](/builtin#error) ``` Resolve blocks until the capability is fully resolved or the Context is Done. Resolve only returns an error if the context is canceled; it returns nil even if the capability resolves to an error. 
#### func (Client) [SendCall](https://github.com/capnproto/go-capnproto2/blob/v3.0.0-alpha-29/capability.go#L319) [¶](#Client.SendCall) ``` func (c [Client](#Client)) SendCall(ctx [context](/context).[Context](/context#Context), s [Send](#Send)) (*[Answer](#Answer), [ReleaseFunc](#ReleaseFunc)) ``` SendCall allocates space for parameters, calls args.Place to fill out the parameters, then starts executing a method, returning an answer that will hold the result. The caller must call the returned release function when it no longer needs the answer's data. This method respects the flow control policy configured with SetFlowLimiter; it may block if the sender is sending too fast. #### func (Client) [SendStreamCall](https://github.com/capnproto/go-capnproto2/blob/v3.0.0-alpha-29/capability.go#L403) [¶](#Client.SendStreamCall) ``` func (c [Client](#Client)) SendStreamCall(ctx [context](/context).[Context](/context#Context), s [Send](#Send)) [error](/builtin#error) ``` SendStreamCall is like SendCall except that: 1. It does not return an answer for the eventual result. 2. If the call returns an error, all future calls on this client will return the same error (without starting the method or calling PlaceArgs). #### func (Client) [SetFlowLimiter](https://github.com/capnproto/go-capnproto2/blob/v3.0.0-alpha-29/capability.go#L306) [¶](#Client.SetFlowLimiter) ``` func (c [Client](#Client)) SetFlowLimiter(lim [flowcontrol](/capnproto.org/go/capnp/v3@v3.0.0-alpha-29/flowcontrol).[FlowLimiter](/capnproto.org/go/capnp/v3@v3.0.0-alpha-29/flowcontrol#FlowLimiter)) ``` Update the flowcontrol.FlowLimiter used to manage flow control for this client. This affects all future calls, but not calls already waiting to send. Passing nil sets the value to flowcontrol.NopLimiter, which is also the default. When .Release() is called on the client, it will call .Release() on the FlowLimiter in turn. 
#### func (Client) [State](https://github.com/capnproto/go-capnproto2/blob/v3.0.0-alpha-29/capability.go#L557) [¶](#Client.State) ``` func (c [Client](#Client)) State() [ClientState](#ClientState) ``` State reads the current state of the client. It returns the zero ClientState if c is nil, has resolved to null, or has been released. #### func (Client) [String](https://github.com/capnproto/go-capnproto2/blob/v3.0.0-alpha-29/capability.go#L596) [¶](#Client.String) ``` func (c [Client](#Client)) String() [string](/builtin#string) ``` String returns a string that identifies this capability for debugging purposes. Its format should not be depended on: in particular, it should not be used to compare clients. Use IsSame to compare clients for equality. #### func (Client) [WaitStreaming](https://github.com/capnproto/go-capnproto2/blob/v3.0.0-alpha-29/capability.go#L433) [¶](#Client.WaitStreaming) ``` func (c [Client](#Client)) WaitStreaming() [error](/builtin#error) ``` WaitStreaming waits for all outstanding streaming calls (i.e. calls started with SendStreamCall) to complete, and then returns an error if any streaming call has failed. #### func (Client) [WeakRef](https://github.com/capnproto/go-capnproto2/blob/v3.0.0-alpha-29/capability.go#L544) [¶](#Client.WeakRef) ``` func (c [Client](#Client)) WeakRef() *[WeakClient](#WeakClient) ``` WeakRef creates a new WeakClient that refers to the same capability as c. If c is nil or has resolved to null, then WeakRef returns nil. #### type [ClientHook](https://github.com/capnproto/go-capnproto2/blob/v3.0.0-alpha-29/capability.go#L829) [¶](#ClientHook) ``` type ClientHook interface { // Send allocates space for parameters, calls s.PlaceArgs to fill out // the arguments, then starts executing a method, returning an answer // that will hold the result. The hook must call s.PlaceArgs at most // once, and if it does call s.PlaceArgs, it must return before Send // returns. 
// The caller must call the returned release function when // it no longer needs the answer's data. // // Send is typically used when application code is making a call. Send(ctx [context](/context).[Context](/context#Context), s [Send](#Send)) (*[Answer](#Answer), [ReleaseFunc](#ReleaseFunc)) // Recv starts executing a method with the referenced arguments // and places the result in a message controlled by the caller. // The hook will call r.ReleaseArgs when it no longer needs to // reference the parameters and use r.Returner to complete the method // call. If Recv does not call r.Returner.Return before it returns, // then it must return a non-nil PipelineCaller. // // Recv is typically used when the RPC system has received a call. Recv(ctx [context](/context).[Context](/context#Context), r [Recv](#Recv)) [PipelineCaller](#PipelineCaller) // Brand returns an implementation-specific value. This can be used // to introspect and identify kinds of clients. Brand() [Brand](#Brand) // Shutdown releases any resources associated with this capability. // The behavior of calling any methods on the receiver after calling // Shutdown is undefined. It is expected for the ClientHook to reject // any outstanding call futures. Shutdown() // String formats the hook as a string (same as fmt.Stringer) String() [string](/builtin#string) } ``` A ClientHook represents a Cap'n Proto capability. Application code should not pass around ClientHooks; applications should pass around Clients. A ClientHook must be safe to use from multiple goroutines. Calls must be delivered to the capability in the order they are made. This guarantee is based on the concept of a capability acknowledging delivery of a call: this is specific to an implementation of ClientHook. A type that implements ClientHook must guarantee that if foo() then bar() is called on a client, then the capability acknowledging foo() happens before the capability observing bar(). ClientHook is an internal interface. 
Users generally SHOULD NOT supply their own implementations. #### type [ClientKind](https://github.com/capnproto/go-capnproto2/blob/v3.0.0-alpha-29/capability.go#L111) [¶](#ClientKind) ``` type ClientKind = struct { // contains filtered or unexported fields } ``` The underlying type of Client. We expose this so that we can use ~ClientKind as a constraint in generics to capture any capability type. #### type [ClientState](https://github.com/capnproto/go-capnproto2/blob/v3.0.0-alpha-29/capability.go#L578) [¶](#ClientState) ``` type ClientState struct { // Brand is the value returned from the hook's Brand method. Brand [Brand](#Brand) // IsPromise is true if the client has not resolved yet. IsPromise [bool](/builtin#bool) // Arbitrary metadata. Note that, if a Client is a promise, // when it resolves its metadata will be replaced with that // of its resolution. // // TODO: this might change before the v3 API is stabilized; // we are not sure the above is the correct semantics. Metadata *[Metadata](#Metadata) } ``` ClientState is a snapshot of a client's identity. #### type [DataList](https://github.com/capnproto/go-capnproto2/blob/v3.0.0-alpha-29/list.go#L450) [¶](#DataList) ``` type DataList [List](#List) ``` DataList is an array of pointers to data. #### func [NewDataList](https://github.com/capnproto/go-capnproto2/blob/v3.0.0-alpha-29/list.go#L455) [¶](#NewDataList) ``` func NewDataList(s *[Segment](#Segment), n [int32](/builtin#int32)) ([DataList](#DataList), [error](/builtin#error)) ``` NewDataList allocates a new list of data pointers, preferring placement in s. #### func (DataList) [At](https://github.com/capnproto/go-capnproto2/blob/v3.0.0-alpha-29/list.go#L464) [¶](#DataList.At) ``` func (l [DataList](#DataList)) At(i [int](/builtin#int)) ([][byte](/builtin#byte), [error](/builtin#error)) ``` At returns the i'th data in the list. 
#### func (DataList) [DecodeFromPtr](https://github.com/capnproto/go-capnproto2/blob/v3.0.0-alpha-29/list-gen.go#L177) [¶](#DataList.DecodeFromPtr) ``` func ([DataList](#DataList)) DecodeFromPtr(p [Ptr](#Ptr)) [DataList](#DataList) ``` #### func (DataList) [EncodeAsPtr](https://github.com/capnproto/go-capnproto2/blob/v3.0.0-alpha-29/list-gen.go#L173) [¶](#DataList.EncodeAsPtr) ``` func (l [DataList](#DataList)) EncodeAsPtr(seg *[Segment](#Segment)) [Ptr](#Ptr) ``` #### func (DataList) [IsValid](https://github.com/capnproto/go-capnproto2/blob/v3.0.0-alpha-29/list-gen.go#L165) [¶](#DataList.IsValid) ``` func (l [DataList](#DataList)) IsValid() [bool](/builtin#bool) ``` #### func (DataList) [Len](https://github.com/capnproto/go-capnproto2/blob/v3.0.0-alpha-29/list-gen.go#L169) [¶](#DataList.Len) ``` func (l [DataList](#DataList)) Len() [int](/builtin#int) ``` #### func (DataList) [Message](https://github.com/capnproto/go-capnproto2/blob/v3.0.0-alpha-29/list-gen.go#L181) [¶](#DataList.Message) ``` func (l [DataList](#DataList)) Message() *[Message](#Message) ``` #### func (DataList) [Segment](https://github.com/capnproto/go-capnproto2/blob/v3.0.0-alpha-29/list-gen.go#L185) [¶](#DataList.Segment) ``` func (l [DataList](#DataList)) Segment() *[Segment](#Segment) ``` #### func (DataList) [Set](https://github.com/capnproto/go-capnproto2/blob/v3.0.0-alpha-29/list.go#L477) [¶](#DataList.Set) ``` func (l [DataList](#DataList)) Set(i [int](/builtin#int), v [][byte](/builtin#byte)) [error](/builtin#error) ``` Set sets the i'th data in the list to v. #### func (DataList) [String](https://github.com/capnproto/go-capnproto2/blob/v3.0.0-alpha-29/list.go#L493) [¶](#DataList.String) ``` func (l [DataList](#DataList)) String() [string](/builtin#string) ``` String returns the list in Cap'n Proto schema format (e.g. `["foo", "bar"]`). 
#### func (DataList) [ToPtr](https://github.com/capnproto/go-capnproto2/blob/v3.0.0-alpha-29/list-gen.go#L189) [¶](#DataList.ToPtr) ``` func (l [DataList](#DataList)) ToPtr() [Ptr](#Ptr) ``` #### type [DataOffset](https://github.com/capnproto/go-capnproto2/blob/v3.0.0-alpha-29/address.go#L122) [¶](#DataOffset) ``` type DataOffset [uint32](/builtin#uint32) ``` DataOffset is an offset in bytes from the beginning of a struct's data section. It is bounded to [0, 1<<19). #### func (DataOffset) [GoString](https://github.com/capnproto/go-capnproto2/blob/v3.0.0-alpha-29/address.go#L133) [¶](#DataOffset.GoString) ``` func (off [DataOffset](#DataOffset)) GoString() [string](/builtin#string) ``` GoString returns the offset as a Go expression. #### func (DataOffset) [String](https://github.com/capnproto/go-capnproto2/blob/v3.0.0-alpha-29/address.go#L125) [¶](#DataOffset.String) ``` func (off [DataOffset](#DataOffset)) String() [string](/builtin#string) ``` String returns the offset in the format "+X bytes". #### type [Decoder](https://github.com/capnproto/go-capnproto2/blob/v3.0.0-alpha-29/codec.go#L18) [¶](#Decoder) ``` type Decoder struct { // Maximum number of bytes that can be read per call to Decode. // If not set, a reasonable default is used. MaxMessageSize [uint64](/builtin#uint64) // contains filtered or unexported fields } ``` A Decoder represents a framer that deserializes a particular Cap'n Proto input stream. #### func [NewDecoder](https://github.com/capnproto/go-capnproto2/blob/v3.0.0-alpha-29/codec.go#L32) [¶](#NewDecoder) ``` func NewDecoder(r [io](/io).[Reader](/io#Reader)) *[Decoder](#Decoder) ``` NewDecoder creates a new Cap'n Proto framer that reads from r. The returned decoder will only read as much data as necessary to decode the message. 
#### func [NewPackedDecoder](https://github.com/capnproto/go-capnproto2/blob/v3.0.0-alpha-29/codec.go#L39) [¶](#NewPackedDecoder) ``` func NewPackedDecoder(r [io](/io).[Reader](/io#Reader)) *[Decoder](#Decoder) ``` NewPackedDecoder creates a new Cap'n Proto framer that reads from a packed stream r. The returned decoder may read more data than necessary from r. #### func (*Decoder) [Decode](https://github.com/capnproto/go-capnproto2/blob/v3.0.0-alpha-29/codec.go#L45) [¶](#Decoder.Decode) ``` func (d *[Decoder](#Decoder)) Decode() (*[Message](#Message), [error](/builtin#error)) ``` Decode reads a message from the decoder stream. The error is io.EOF only if no bytes were read. #### type [Encoder](https://github.com/capnproto/go-capnproto2/blob/v3.0.0-alpha-29/codec.go#L200) [¶](#Encoder) ``` type Encoder struct { // contains filtered or unexported fields } ``` An Encoder represents a framer for serializing a particular Cap'n Proto stream. #### func [NewEncoder](https://github.com/capnproto/go-capnproto2/blob/v3.0.0-alpha-29/codec.go#L207) [¶](#NewEncoder) ``` func NewEncoder(w [io](/io).[Writer](/io#Writer)) *[Encoder](#Encoder) ``` NewEncoder creates a new Cap'n Proto framer that writes to w. #### func [NewPackedEncoder](https://github.com/capnproto/go-capnproto2/blob/v3.0.0-alpha-29/codec.go#L213) [¶](#NewPackedEncoder) ``` func NewPackedEncoder(w [io](/io).[Writer](/io#Writer)) *[Encoder](#Encoder) ``` NewPackedEncoder creates a new Cap'n Proto framer that writes to a packed stream w. #### func (*Encoder) [Encode](https://github.com/capnproto/go-capnproto2/blob/v3.0.0-alpha-29/codec.go#L218) [¶](#Encoder.Encode) ``` func (e *[Encoder](#Encoder)) Encode(m *[Message](#Message)) [error](/builtin#error) ``` Encode writes a message to the encoder stream. 
#### type [EnumList](https://github.com/capnproto/go-capnproto2/blob/v3.0.0-alpha-29/list.go#L1041) [¶](#EnumList) ``` type EnumList[T ~[uint16](/builtin#uint16)] [UInt16List](#UInt16List) ``` A list of some Cap'n Proto enum type T. #### func [NewEnumList](https://github.com/capnproto/go-capnproto2/blob/v3.0.0-alpha-29/list.go#L1046) [¶](#NewEnumList) ``` func NewEnumList[T ~[uint16](/builtin#uint16)](s *[Segment](#Segment), n [int32](/builtin#int32)) ([EnumList](#EnumList)[T], [error](/builtin#error)) ``` NewEnumList creates a new list of T, preferring placement in s. #### func (EnumList[T]) [At](https://github.com/capnproto/go-capnproto2/blob/v3.0.0-alpha-29/list.go#L1052) [¶](#EnumList.At) ``` func (l [EnumList](#EnumList)[T]) At(i [int](/builtin#int)) T ``` At returns the i'th element. #### func (EnumList[T]) [DecodeFromPtr](https://github.com/capnproto/go-capnproto2/blob/v3.0.0-alpha-29/list-gen.go#L241) [¶](#EnumList.DecodeFromPtr) ``` func ([EnumList](#EnumList)[T]) DecodeFromPtr(p [Ptr](#Ptr)) [EnumList](#EnumList)[T] ``` #### func (EnumList[T]) [EncodeAsPtr](https://github.com/capnproto/go-capnproto2/blob/v3.0.0-alpha-29/list-gen.go#L237) [¶](#EnumList.EncodeAsPtr) ``` func (l [EnumList](#EnumList)[T]) EncodeAsPtr(seg *[Segment](#Segment)) [Ptr](#Ptr) ``` #### func (EnumList[T]) [IsValid](https://github.com/capnproto/go-capnproto2/blob/v3.0.0-alpha-29/list-gen.go#L229) [¶](#EnumList.IsValid) ``` func (l [EnumList](#EnumList)[T]) IsValid() [bool](/builtin#bool) ``` #### func (EnumList[T]) [Len](https://github.com/capnproto/go-capnproto2/blob/v3.0.0-alpha-29/list-gen.go#L233) [¶](#EnumList.Len) ``` func (l [EnumList](#EnumList)[T]) Len() [int](/builtin#int) ``` #### func (EnumList[T]) [Message](https://github.com/capnproto/go-capnproto2/blob/v3.0.0-alpha-29/list-gen.go#L245) [¶](#EnumList.Message) ``` func (l [EnumList](#EnumList)[T]) Message() *[Message](#Message) ``` #### func (EnumList[T]) 
[Segment](https://github.com/capnproto/go-capnproto2/blob/v3.0.0-alpha-29/list-gen.go#L249) [¶](#EnumList.Segment) ``` func (l [EnumList](#EnumList)[T]) Segment() *[Segment](#Segment) ``` #### func (EnumList[T]) [Set](https://github.com/capnproto/go-capnproto2/blob/v3.0.0-alpha-29/list.go#L1057) [¶](#EnumList.Set) ``` func (l [EnumList](#EnumList)[T]) Set(i [int](/builtin#int), v T) ``` Set sets the i'th element to v. #### func (EnumList[T]) [String](https://github.com/capnproto/go-capnproto2/blob/v3.0.0-alpha-29/list.go#L1062) [¶](#EnumList.String) ``` func (l [EnumList](#EnumList)[T]) String() [string](/builtin#string) ``` String returns the list in Cap'n Proto schema format (e.g. "[1, 2, 3]"). #### func (EnumList[T]) [ToPtr](https://github.com/capnproto/go-capnproto2/blob/v3.0.0-alpha-29/list-gen.go#L253) [¶](#EnumList.ToPtr) ``` func (l [EnumList](#EnumList)[T]) ToPtr() [Ptr](#Ptr) ``` #### type [Float32List](https://github.com/capnproto/go-capnproto2/blob/v3.0.0-alpha-29/list.go#L949) [¶](#Float32List) ``` type Float32List [List](#List) ``` Float32List is an array of Float32 values. #### func [NewFloat32List](https://github.com/capnproto/go-capnproto2/blob/v3.0.0-alpha-29/list.go#L954) [¶](#NewFloat32List) ``` func NewFloat32List(s *[Segment](#Segment), n [int32](/builtin#int32)) ([Float32List](#Float32List), [error](/builtin#error)) ``` NewFloat32List creates a new list of Float32, preferring placement in s. #### func (Float32List) [At](https://github.com/capnproto/go-capnproto2/blob/v3.0.0-alpha-29/list.go#L963) [¶](#Float32List.At) ``` func (l [Float32List](#Float32List)) At(i [int](/builtin#int)) [float32](/builtin#float32) ``` At returns the i'th element. 
#### func (Float32List) [DecodeFromPtr](https://github.com/capnproto/go-capnproto2/blob/v3.0.0-alpha-29/list-gen.go#L81) [¶](#Float32List.DecodeFromPtr) ``` func ([Float32List](#Float32List)) DecodeFromPtr(p [Ptr](#Ptr)) [Float32List](#Float32List) ``` #### func (Float32List) [EncodeAsPtr](https://github.com/capnproto/go-capnproto2/blob/v3.0.0-alpha-29/list-gen.go#L77) [¶](#Float32List.EncodeAsPtr) ``` func (l [Float32List](#Float32List)) EncodeAsPtr(seg *[Segment](#Segment)) [Ptr](#Ptr) ``` #### func (Float32List) [IsValid](https://github.com/capnproto/go-capnproto2/blob/v3.0.0-alpha-29/list-gen.go#L69) [¶](#Float32List.IsValid) ``` func (l [Float32List](#Float32List)) IsValid() [bool](/builtin#bool) ``` #### func (Float32List) [Len](https://github.com/capnproto/go-capnproto2/blob/v3.0.0-alpha-29/list-gen.go#L73) [¶](#Float32List.Len) ``` func (l [Float32List](#Float32List)) Len() [int](/builtin#int) ``` #### func (Float32List) [Message](https://github.com/capnproto/go-capnproto2/blob/v3.0.0-alpha-29/list-gen.go#L85) [¶](#Float32List.Message) ``` func (l [Float32List](#Float32List)) Message() *[Message](#Message) ``` #### func (Float32List) [Segment](https://github.com/capnproto/go-capnproto2/blob/v3.0.0-alpha-29/list-gen.go#L89) [¶](#Float32List.Segment) ``` func (l [Float32List](#Float32List)) Segment() *[Segment](#Segment) ``` #### func (Float32List) [Set](https://github.com/capnproto/go-capnproto2/blob/v3.0.0-alpha-29/list.go#L972) [¶](#Float32List.Set) ``` func (l [Float32List](#Float32List)) Set(i [int](/builtin#int), v [float32](/builtin#float32)) ``` Set sets the i'th element to v. #### func (Float32List) [String](https://github.com/capnproto/go-capnproto2/blob/v3.0.0-alpha-29/list.go#L981) [¶](#Float32List.String) ``` func (l [Float32List](#Float32List)) String() [string](/builtin#string) ``` String returns the list in Cap'n Proto schema format (e.g. "[1, 2, 3]"). 
#### func (Float32List) [ToPtr](https://github.com/capnproto/go-capnproto2/blob/v3.0.0-alpha-29/list-gen.go#L93) [¶](#Float32List.ToPtr) ``` func (l [Float32List](#Float32List)) ToPtr() [Ptr](#Ptr) ``` #### type [Float64List](https://github.com/capnproto/go-capnproto2/blob/v3.0.0-alpha-29/list.go#L995) [¶](#Float64List) ``` type Float64List [List](#List) ``` Float64List is an array of Float64 values. #### func [NewFloat64List](https://github.com/capnproto/go-capnproto2/blob/v3.0.0-alpha-29/list.go#L1000) [¶](#NewFloat64List) ``` func NewFloat64List(s *[Segment](#Segment), n [int32](/builtin#int32)) ([Float64List](#Float64List), [error](/builtin#error)) ``` NewFloat64List creates a new list of Float64, preferring placement in s. #### func (Float64List) [At](https://github.com/capnproto/go-capnproto2/blob/v3.0.0-alpha-29/list.go#L1009) [¶](#Float64List.At) ``` func (l [Float64List](#Float64List)) At(i [int](/builtin#int)) [float64](/builtin#float64) ``` At returns the i'th element. #### func (Float64List) [DecodeFromPtr](https://github.com/capnproto/go-capnproto2/blob/v3.0.0-alpha-29/list-gen.go#L113) [¶](#Float64List.DecodeFromPtr) ``` func ([Float64List](#Float64List)) DecodeFromPtr(p [Ptr](#Ptr)) [Float64List](#Float64List) ``` #### func (Float64List) [EncodeAsPtr](https://github.com/capnproto/go-capnproto2/blob/v3.0.0-alpha-29/list-gen.go#L109) [¶](#Float64List.EncodeAsPtr) ``` func (l [Float64List](#Float64List)) EncodeAsPtr(seg *[Segment](#Segment)) [Ptr](#Ptr) ``` #### func (Float64List) [IsValid](https://github.com/capnproto/go-capnproto2/blob/v3.0.0-alpha-29/list-gen.go#L101) [¶](#Float64List.IsValid) ``` func (l [Float64List](#Float64List)) IsValid() [bool](/builtin#bool) ``` #### func (Float64List) [Len](https://github.com/capnproto/go-capnproto2/blob/v3.0.0-alpha-29/list-gen.go#L105) [¶](#Float64List.Len) ``` func (l [Float64List](#Float64List)) Len() [int](/builtin#int) ```
#### func (Float64List) [Message](https://github.com/capnproto/go-capnproto2/blob/v3.0.0-alpha-29/list-gen.go#L117) [¶](#Float64List.Message) ``` func (l [Float64List](#Float64List)) Message() *[Message](#Message) ``` #### func (Float64List) [Segment](https://github.com/capnproto/go-capnproto2/blob/v3.0.0-alpha-29/list-gen.go#L121) [¶](#Float64List.Segment) ``` func (l [Float64List](#Float64List)) Segment() *[Segment](#Segment) ``` #### func (Float64List) [Set](https://github.com/capnproto/go-capnproto2/blob/v3.0.0-alpha-29/list.go#L1018) [¶](#Float64List.Set) ``` func (l [Float64List](#Float64List)) Set(i [int](/builtin#int), v [float64](/builtin#float64)) ``` Set sets the i'th element to v. #### func (Float64List) [String](https://github.com/capnproto/go-capnproto2/blob/v3.0.0-alpha-29/list.go#L1027) [¶](#Float64List.String) ``` func (l [Float64List](#Float64List)) String() [string](/builtin#string) ``` String returns the list in Cap'n Proto schema format (e.g. "[1, 2, 3]"). #### func (Float64List) [ToPtr](https://github.com/capnproto/go-capnproto2/blob/v3.0.0-alpha-29/list-gen.go#L125) [¶](#Float64List.ToPtr) ``` func (l [Float64List](#Float64List)) ToPtr() [Ptr](#Ptr) ``` #### type [Future](https://github.com/capnproto/go-capnproto2/blob/v3.0.0-alpha-29/answer.go#L366) [¶](#Future) ``` type Future struct { // contains filtered or unexported fields } ``` A Future accesses a portion of an Answer. It is safe to use from multiple goroutines. #### func (*Future) [Client](https://github.com/capnproto/go-capnproto2/blob/v3.0.0-alpha-29/answer.go#L423) [¶](#Future.Client) ``` func (f *[Future](#Future)) Client() [Client](#Client) ``` Client returns the future as a client. If the answer's originating call has not completed, then calls will be queued until the original call's completion. The client reference is borrowed: the caller should not call Release.
#### func (*Future) [Done](https://github.com/capnproto/go-capnproto2/blob/v3.0.0-alpha-29/answer.go#L390) [¶](#Future.Done) ``` func (f *[Future](#Future)) Done() <-chan struct{} ``` Done returns a channel that is closed when the answer's call is finished. #### func (*Future) [Field](https://github.com/capnproto/go-capnproto2/blob/v3.0.0-alpha-29/answer.go#L459) [¶](#Future.Field) ``` func (f *[Future](#Future)) Field(off [uint16](/builtin#uint16), def [][byte](/builtin#byte)) *[Future](#Future) ``` Field returns a derived future which yields the pointer field given, defaulting to the value given. #### func (*Future) [List](https://github.com/capnproto/go-capnproto2/blob/v3.0.0-alpha-29/answer.go#L414) [¶](#Future.List) ``` func (f *[Future](#Future)) List() ([List](#List), [error](/builtin#error)) ``` List waits until the answer is resolved and returns the list this answer represents. #### func (*Future) [Ptr](https://github.com/capnproto/go-capnproto2/blob/v3.0.0-alpha-29/answer.go#L396) [¶](#Future.Ptr) ``` func (f *[Future](#Future)) Ptr() ([Ptr](#Ptr), [error](/builtin#error)) ``` Ptr waits until the answer is resolved and returns the pointer this future represents. #### func (*Future) [Struct](https://github.com/capnproto/go-capnproto2/blob/v3.0.0-alpha-29/answer.go#L407) [¶](#Future.Struct) ``` func (f *[Future](#Future)) Struct() ([Struct](#Struct), [error](/builtin#error)) ``` Struct waits until the answer is resolved and returns the struct this answer represents. #### type [Int16List](https://github.com/capnproto/go-capnproto2/blob/v3.0.0-alpha-29/list.go#L719) [¶](#Int16List) ``` type Int16List [List](#List) ``` Int16List is an array of Int16 values. 
#### func [NewInt16List](https://github.com/capnproto/go-capnproto2/blob/v3.0.0-alpha-29/list.go#L724) [¶](#NewInt16List) ``` func NewInt16List(s *[Segment](#Segment), n [int32](/builtin#int32)) ([Int16List](#Int16List), [error](/builtin#error)) ``` NewInt16List creates a new list of Int16, preferring placement in s. #### func (Int16List) [At](https://github.com/capnproto/go-capnproto2/blob/v3.0.0-alpha-29/list.go#L733) [¶](#Int16List.At) ``` func (l [Int16List](#Int16List)) At(i [int](/builtin#int)) [int16](/builtin#int16) ``` At returns the i'th element. #### func (Int16List) [DecodeFromPtr](https://github.com/capnproto/go-capnproto2/blob/v3.0.0-alpha-29/list-gen.go#L401) [¶](#Int16List.DecodeFromPtr) ``` func ([Int16List](#Int16List)) DecodeFromPtr(p [Ptr](#Ptr)) [Int16List](#Int16List) ``` #### func (Int16List) [EncodeAsPtr](https://github.com/capnproto/go-capnproto2/blob/v3.0.0-alpha-29/list-gen.go#L397) [¶](#Int16List.EncodeAsPtr) ``` func (l [Int16List](#Int16List)) EncodeAsPtr(seg *[Segment](#Segment)) [Ptr](#Ptr) ``` #### func (Int16List) [IsValid](https://github.com/capnproto/go-capnproto2/blob/v3.0.0-alpha-29/list-gen.go#L389) [¶](#Int16List.IsValid) ``` func (l [Int16List](#Int16List)) IsValid() [bool](/builtin#bool) ``` #### func (Int16List) [Len](https://github.com/capnproto/go-capnproto2/blob/v3.0.0-alpha-29/list-gen.go#L393) [¶](#Int16List.Len) ``` func (l [Int16List](#Int16List)) Len() [int](/builtin#int) ``` #### func (Int16List) [Message](https://github.com/capnproto/go-capnproto2/blob/v3.0.0-alpha-29/list-gen.go#L405) [¶](#Int16List.Message) ``` func (l [Int16List](#Int16List)) Message() *[Message](#Message) ``` #### func (Int16List) [Segment](https://github.com/capnproto/go-capnproto2/blob/v3.0.0-alpha-29/list-gen.go#L409) [¶](#Int16List.Segment) ``` func (l [Int16List](#Int16List)) Segment() *[Segment](#Segment) ``` #### func (Int16List) [Set](https://github.com/capnproto/go-capnproto2/blob/v3.0.0-alpha-29/list.go#L742) [¶](#Int16List.Set) ``` 
func (l [Int16List](#Int16List)) Set(i [int](/builtin#int), v [int16](/builtin#int16)) ``` Set sets the i'th element to v. #### func (Int16List) [String](https://github.com/capnproto/go-capnproto2/blob/v3.0.0-alpha-29/list.go#L751) [¶](#Int16List.String) ``` func (l [Int16List](#Int16List)) String() [string](/builtin#string) ``` String returns the list in Cap'n Proto schema format (e.g. "[1, 2, 3]"). #### func (Int16List) [ToPtr](https://github.com/capnproto/go-capnproto2/blob/v3.0.0-alpha-29/list-gen.go#L413) [¶](#Int16List.ToPtr) ``` func (l [Int16List](#Int16List)) ToPtr() [Ptr](#Ptr) ``` #### type [Int32List](https://github.com/capnproto/go-capnproto2/blob/v3.0.0-alpha-29/list.go#L811) [¶](#Int32List) ``` type Int32List [List](#List) ``` Int32List is an array of Int32 values. #### func [NewInt32List](https://github.com/capnproto/go-capnproto2/blob/v3.0.0-alpha-29/list.go#L816) [¶](#NewInt32List) ``` func NewInt32List(s *[Segment](#Segment), n [int32](/builtin#int32)) ([Int32List](#Int32List), [error](/builtin#error)) ``` NewInt32List creates a new list of Int32, preferring placement in s. #### func (Int32List) [At](https://github.com/capnproto/go-capnproto2/blob/v3.0.0-alpha-29/list.go#L825) [¶](#Int32List.At) ``` func (l [Int32List](#Int32List)) At(i [int](/builtin#int)) [int32](/builtin#int32) ``` At returns the i'th element. 
#### func (Int32List) [DecodeFromPtr](https://github.com/capnproto/go-capnproto2/blob/v3.0.0-alpha-29/list-gen.go#L465) [¶](#Int32List.DecodeFromPtr) ``` func ([Int32List](#Int32List)) DecodeFromPtr(p [Ptr](#Ptr)) [Int32List](#Int32List) ``` #### func (Int32List) [EncodeAsPtr](https://github.com/capnproto/go-capnproto2/blob/v3.0.0-alpha-29/list-gen.go#L461) [¶](#Int32List.EncodeAsPtr) ``` func (l [Int32List](#Int32List)) EncodeAsPtr(seg *[Segment](#Segment)) [Ptr](#Ptr) ``` #### func (Int32List) [IsValid](https://github.com/capnproto/go-capnproto2/blob/v3.0.0-alpha-29/list-gen.go#L453) [¶](#Int32List.IsValid) ``` func (l [Int32List](#Int32List)) IsValid() [bool](/builtin#bool) ``` #### func (Int32List) [Len](https://github.com/capnproto/go-capnproto2/blob/v3.0.0-alpha-29/list-gen.go#L457) [¶](#Int32List.Len) ``` func (l [Int32List](#Int32List)) Len() [int](/builtin#int) ``` #### func (Int32List) [Message](https://github.com/capnproto/go-capnproto2/blob/v3.0.0-alpha-29/list-gen.go#L469) [¶](#Int32List.Message) ``` func (l [Int32List](#Int32List)) Message() *[Message](#Message) ``` #### func (Int32List) [Segment](https://github.com/capnproto/go-capnproto2/blob/v3.0.0-alpha-29/list-gen.go#L473) [¶](#Int32List.Segment) ``` func (l [Int32List](#Int32List)) Segment() *[Segment](#Segment) ``` #### func (Int32List) [Set](https://github.com/capnproto/go-capnproto2/blob/v3.0.0-alpha-29/list.go#L834) [¶](#Int32List.Set) ``` func (l [Int32List](#Int32List)) Set(i [int](/builtin#int), v [int32](/builtin#int32)) ``` Set sets the i'th element to v. #### func (Int32List) [String](https://github.com/capnproto/go-capnproto2/blob/v3.0.0-alpha-29/list.go#L843) [¶](#Int32List.String) ``` func (l [Int32List](#Int32List)) String() [string](/builtin#string) ``` String returns the list in Cap'n Proto schema format (e.g. "[1, 2, 3]"). 
#### func (Int32List) [ToPtr](https://github.com/capnproto/go-capnproto2/blob/v3.0.0-alpha-29/list-gen.go#L477) [¶](#Int32List.ToPtr) ``` func (l [Int32List](#Int32List)) ToPtr() [Ptr](#Ptr) ``` #### type [Int64List](https://github.com/capnproto/go-capnproto2/blob/v3.0.0-alpha-29/list.go#L903) [¶](#Int64List) ``` type Int64List [List](#List) ``` Int64List is an array of Int64 values. #### func [NewInt64List](https://github.com/capnproto/go-capnproto2/blob/v3.0.0-alpha-29/list.go#L908) [¶](#NewInt64List) ``` func NewInt64List(s *[Segment](#Segment), n [int32](/builtin#int32)) ([Int64List](#Int64List), [error](/builtin#error)) ``` NewInt64List creates a new list of Int64, preferring placement in s. #### func (Int64List) [At](https://github.com/capnproto/go-capnproto2/blob/v3.0.0-alpha-29/list.go#L917) [¶](#Int64List.At) ``` func (l [Int64List](#Int64List)) At(i [int](/builtin#int)) [int64](/builtin#int64) ``` At returns the i'th element. #### func (Int64List) [DecodeFromPtr](https://github.com/capnproto/go-capnproto2/blob/v3.0.0-alpha-29/list-gen.go#L529) [¶](#Int64List.DecodeFromPtr) ``` func ([Int64List](#Int64List)) DecodeFromPtr(p [Ptr](#Ptr)) [Int64List](#Int64List) ``` #### func (Int64List) [EncodeAsPtr](https://github.com/capnproto/go-capnproto2/blob/v3.0.0-alpha-29/list-gen.go#L525) [¶](#Int64List.EncodeAsPtr) ``` func (l [Int64List](#Int64List)) EncodeAsPtr(seg *[Segment](#Segment)) [Ptr](#Ptr) ``` #### func (Int64List) [IsValid](https://github.com/capnproto/go-capnproto2/blob/v3.0.0-alpha-29/list-gen.go#L517) [¶](#Int64List.IsValid) ``` func (l [Int64List](#Int64List)) IsValid() [bool](/builtin#bool) ``` #### func (Int64List) [Len](https://github.com/capnproto/go-capnproto2/blob/v3.0.0-alpha-29/list-gen.go#L521) [¶](#Int64List.Len) ``` func (l [Int64List](#Int64List)) Len() [int](/builtin#int) ```
#### func (Int64List) [Message](https://github.com/capnproto/go-capnproto2/blob/v3.0.0-alpha-29/list-gen.go#L533) [¶](#Int64List.Message) ``` func (l [Int64List](#Int64List)) Message() *[Message](#Message) ``` #### func (Int64List) [Segment](https://github.com/capnproto/go-capnproto2/blob/v3.0.0-alpha-29/list-gen.go#L537) [¶](#Int64List.Segment) ``` func (l [Int64List](#Int64List)) Segment() *[Segment](#Segment) ``` #### func (Int64List) [Set](https://github.com/capnproto/go-capnproto2/blob/v3.0.0-alpha-29/list.go#L926) [¶](#Int64List.Set) ``` func (l [Int64List](#Int64List)) Set(i [int](/builtin#int), v [int64](/builtin#int64)) ``` Set sets the i'th element to v. #### func (Int64List) [String](https://github.com/capnproto/go-capnproto2/blob/v3.0.0-alpha-29/list.go#L935) [¶](#Int64List.String) ``` func (l [Int64List](#Int64List)) String() [string](/builtin#string) ``` String returns the list in Cap'n Proto schema format (e.g. "[1, 2, 3]"). #### func (Int64List) [ToPtr](https://github.com/capnproto/go-capnproto2/blob/v3.0.0-alpha-29/list-gen.go#L541) [¶](#Int64List.ToPtr) ``` func (l [Int64List](#Int64List)) ToPtr() [Ptr](#Ptr) ``` #### type [Int8List](https://github.com/capnproto/go-capnproto2/blob/v3.0.0-alpha-29/list.go#L627) [¶](#Int8List) ``` type Int8List [List](#List) ``` Int8List is an array of Int8 values. #### func [NewInt8List](https://github.com/capnproto/go-capnproto2/blob/v3.0.0-alpha-29/list.go#L632) [¶](#NewInt8List) ``` func NewInt8List(s *[Segment](#Segment), n [int32](/builtin#int32)) ([Int8List](#Int8List), [error](/builtin#error)) ``` NewInt8List creates a new list of Int8, preferring placement in s. #### func (Int8List) [At](https://github.com/capnproto/go-capnproto2/blob/v3.0.0-alpha-29/list.go#L641) [¶](#Int8List.At) ``` func (l [Int8List](#Int8List)) At(i [int](/builtin#int)) [int8](/builtin#int8) ``` At returns the i'th element.
#### func (Int8List) [DecodeFromPtr](https://github.com/capnproto/go-capnproto2/blob/v3.0.0-alpha-29/list-gen.go#L337) [¶](#Int8List.DecodeFromPtr) ``` func ([Int8List](#Int8List)) DecodeFromPtr(p [Ptr](#Ptr)) [Int8List](#Int8List) ``` #### func (Int8List) [EncodeAsPtr](https://github.com/capnproto/go-capnproto2/blob/v3.0.0-alpha-29/list-gen.go#L333) [¶](#Int8List.EncodeAsPtr) ``` func (l [Int8List](#Int8List)) EncodeAsPtr(seg *[Segment](#Segment)) [Ptr](#Ptr) ``` #### func (Int8List) [IsValid](https://github.com/capnproto/go-capnproto2/blob/v3.0.0-alpha-29/list-gen.go#L325) [¶](#Int8List.IsValid) ``` func (l [Int8List](#Int8List)) IsValid() [bool](/builtin#bool) ``` #### func (Int8List) [Len](https://github.com/capnproto/go-capnproto2/blob/v3.0.0-alpha-29/list-gen.go#L329) [¶](#Int8List.Len) ``` func (l [Int8List](#Int8List)) Len() [int](/builtin#int) ``` #### func (Int8List) [Message](https://github.com/capnproto/go-capnproto2/blob/v3.0.0-alpha-29/list-gen.go#L341) [¶](#Int8List.Message) ``` func (l [Int8List](#Int8List)) Message() *[Message](#Message) ``` #### func (Int8List) [Segment](https://github.com/capnproto/go-capnproto2/blob/v3.0.0-alpha-29/list-gen.go#L345) [¶](#Int8List.Segment) ``` func (l [Int8List](#Int8List)) Segment() *[Segment](#Segment) ``` #### func (Int8List) [Set](https://github.com/capnproto/go-capnproto2/blob/v3.0.0-alpha-29/list.go#L650) [¶](#Int8List.Set) ``` func (l [Int8List](#Int8List)) Set(i [int](/builtin#int), v [int8](/builtin#int8)) ``` Set sets the i'th element to v. #### func (Int8List) [String](https://github.com/capnproto/go-capnproto2/blob/v3.0.0-alpha-29/list.go#L659) [¶](#Int8List.String) ``` func (l [Int8List](#Int8List)) String() [string](/builtin#string) ``` String returns the list in Cap'n Proto schema format (e.g. "[1, 2, 3]"). 
#### func (Int8List) [ToPtr](https://github.com/capnproto/go-capnproto2/blob/v3.0.0-alpha-29/list-gen.go#L349) [¶](#Int8List.ToPtr) ``` func (l [Int8List](#Int8List)) ToPtr() [Ptr](#Ptr) ``` #### type [Interface](https://github.com/capnproto/go-capnproto2/blob/v3.0.0-alpha-29/capability.go#L22) [¶](#Interface) ``` type Interface struct { // contains filtered or unexported fields } ``` An Interface is a reference to a client in a message's capability table. #### func [NewInterface](https://github.com/capnproto/go-capnproto2/blob/v3.0.0-alpha-29/capability.go#L40) [¶](#NewInterface) ``` func NewInterface(s *[Segment](#Segment), cap [CapabilityID](#CapabilityID)) [Interface](#Interface) ``` NewInterface creates a new interface pointer. No allocation is performed in the given segment: it is used purely to associate the interface pointer with a message. #### func (Interface) [Capability](https://github.com/capnproto/go-capnproto2/blob/v3.0.0-alpha-29/capability.go#L68) [¶](#Interface.Capability) ``` func (i [Interface](#Interface)) Capability() [CapabilityID](#CapabilityID) ``` Capability returns the capability ID of the interface. #### func (Interface) [Client](https://github.com/capnproto/go-capnproto2/blob/v3.0.0-alpha-29/capability.go#L82) [¶](#Interface.Client) ``` func (i [Interface](#Interface)) Client() (c [Client](#Client)) ``` Client returns the client stored in the message's capability table or nil if the pointer is invalid. #### func (Interface) [DecodeFromPtr](https://github.com/capnproto/go-capnproto2/blob/v3.0.0-alpha-29/capability.go#L32) [¶](#Interface.DecodeFromPtr) ``` func ([Interface](#Interface)) DecodeFromPtr(p [Ptr](#Ptr)) [Interface](#Interface) ``` DecodeFromPtr(p) is equivalent to p.Interface(); for implementing TypeParam. 
#### func (Interface) [EncodeAsPtr](https://github.com/capnproto/go-capnproto2/blob/v3.0.0-alpha-29/capability.go#L29) [¶](#Interface.EncodeAsPtr) ``` func (i [Interface](#Interface)) EncodeAsPtr(*[Segment](#Segment)) [Ptr](#Ptr) ``` i.EncodeAsPtr is equivalent to i.ToPtr(); for implementing TypeParam. The segment argument is ignored. #### func (Interface) [IsValid](https://github.com/capnproto/go-capnproto2/blob/v3.0.0-alpha-29/capability.go#L63) [¶](#Interface.IsValid) ``` func (i [Interface](#Interface)) IsValid() [bool](/builtin#bool) ``` IsValid returns whether the interface is valid. #### func (Interface) [Message](https://github.com/capnproto/go-capnproto2/blob/v3.0.0-alpha-29/capability.go#L55) [¶](#Interface.Message) ``` func (i [Interface](#Interface)) Message() *[Message](#Message) ``` Message returns the message whose capability table the interface references or nil if the pointer is invalid. #### func (Interface) [ToPtr](https://github.com/capnproto/go-capnproto2/blob/v3.0.0-alpha-29/capability.go#L45) [¶](#Interface.ToPtr) ``` func (i [Interface](#Interface)) ToPtr() [Ptr](#Ptr) ``` ToPtr converts the interface to a generic pointer. #### type [List](https://github.com/capnproto/go-capnproto2/blob/v3.0.0-alpha-29/list.go#L14) [¶](#List) ``` type List [ListKind](#ListKind) ``` A List is a reference to an array of values. #### func [NewCompositeList](https://github.com/capnproto/go-capnproto2/blob/v3.0.0-alpha-29/list.go#L51) [¶](#NewCompositeList) ``` func NewCompositeList(s *[Segment](#Segment), sz [ObjectSize](#ObjectSize), n [int32](/builtin#int32)) ([List](#List), [error](/builtin#error)) ``` NewCompositeList creates a new composite list, preferring placement in s. #### func (List) [DecodeFromPtr](https://github.com/capnproto/go-capnproto2/blob/v3.0.0-alpha-29/list.go#L245) [¶](#List.DecodeFromPtr) ``` func ([List](#List)) DecodeFromPtr(p [Ptr](#Ptr)) [List](#List) ``` DecodeFromPtr(p) is equivalent to p.List; for implementing TypeParam. 
#### func (List) [EncodeAsPtr](https://github.com/capnproto/go-capnproto2/blob/v3.0.0-alpha-29/list.go#L242) [¶](#List.EncodeAsPtr) ``` func (l [List](#List)) EncodeAsPtr(*[Segment](#Segment)) [Ptr](#Ptr) ``` l.EncodeAsPtr is equivalent to l.ToPtr(); for implementing TypeParam. The segment argument is ignored. #### func (List) [IsValid](https://github.com/capnproto/go-capnproto2/blob/v3.0.0-alpha-29/list.go#L107) [¶](#List.IsValid) ``` func (p [List](#List)) IsValid() [bool](/builtin#bool) ``` IsValid returns whether the list is valid. #### func (List) [Len](https://github.com/capnproto/go-capnproto2/blob/v3.0.0-alpha-29/list.go#L178) [¶](#List.Len) ``` func (p [List](#List)) Len() [int](/builtin#int) ``` Len returns the length of the list. #### func (List) [Message](https://github.com/capnproto/go-capnproto2/blob/v3.0.0-alpha-29/list.go#L99) [¶](#List.Message) ``` func (p [List](#List)) Message() *[Message](#Message) ``` Message returns the message the referenced list is stored in or nil if the pointer is invalid. #### func (List) [Segment](https://github.com/capnproto/go-capnproto2/blob/v3.0.0-alpha-29/list.go#L93) [¶](#List.Segment) ``` func (p [List](#List)) Segment() *[Segment](#Segment) ``` Segment returns the segment the referenced list is stored in or nil if the pointer is invalid. #### func (List) [SetStruct](https://github.com/capnproto/go-capnproto2/blob/v3.0.0-alpha-29/list.go#L230) [¶](#List.SetStruct) ``` func (p [List](#List)) SetStruct(i [int](/builtin#int), s [Struct](#Struct)) [error](/builtin#error) ``` SetStruct sets the i'th element to the value in s. #### func (List) [Struct](https://github.com/capnproto/go-capnproto2/blob/v3.0.0-alpha-29/list.go#L208) [¶](#List.Struct) ``` func (p [List](#List)) Struct(i [int](/builtin#int)) [Struct](#Struct) ``` Struct returns the i'th element as a struct.
#### func (List) [ToPtr](https://github.com/capnproto/go-capnproto2/blob/v3.0.0-alpha-29/list.go#L80) [¶](#List.ToPtr) ``` func (p [List](#List)) ToPtr() [Ptr](#Ptr) ``` ToPtr converts the list to a generic pointer. #### type [ListKind](https://github.com/capnproto/go-capnproto2/blob/v3.0.0-alpha-29/list.go#L19) [¶](#ListKind) ``` type ListKind = struct { // contains filtered or unexported fields } ``` The underlying type of List. We expose this so that we can use ~ListKind as a constraint in generics to capture any list type. #### type [Message](https://github.com/capnproto/go-capnproto2/blob/v3.0.0-alpha-29/message.go#L30) [¶](#Message) ``` type Message struct { Arena [Arena](#Arena) // TraverseLimit limits how many total bytes of data are allowed to be // traversed while reading. Traversal is counted when a Struct or // List is obtained. This means that calling a getter for the same // sub-struct multiple times will cause it to be double-counted. Once // the traversal limit is reached, pointer accessors will report // errors. See <https://capnproto.org/encoding.html#amplification-attack> // for more details on this security measure. // // If not set, this defaults to 64 MiB. TraverseLimit [uint64](/builtin#uint64) // DepthLimit limits how deeply-nested a message structure can be. // If not set, this defaults to 64. DepthLimit [uint](/builtin#uint) // contains filtered or unexported fields } ``` A Message is a tree of Cap'n Proto objects, split into one or more segments of contiguous memory. The only required field is Arena. A Message is safe to read from multiple goroutines. #### func [Unmarshal](https://github.com/capnproto/go-capnproto2/blob/v3.0.0-alpha-29/codec.go#L144) [¶](#Unmarshal) ``` func Unmarshal(data [][byte](/builtin#byte)) (*[Message](#Message), [error](/builtin#error)) ``` Unmarshal reads an unpacked serialized stream into a message. No copying is performed, so the objects in the returned message read directly from data. 
Example [¶](#example-Unmarshal) ``` msg, s, err := capnp.NewMessage(capnp.SingleSegment(nil)) if err != nil { fmt.Printf("allocation error %v\n", err) return } d, err := air.NewRootZdate(s) if err != nil { fmt.Printf("root error %v\n", err) return } d.SetYear(2004) d.SetMonth(12) d.SetDay(7) data, err := msg.Marshal() if err != nil { fmt.Printf("marshal error %v\n", err) return } // Read msg, err = capnp.Unmarshal(data) if err != nil { fmt.Printf("unmarshal error %v\n", err) return } d, err = air.ReadRootZdate(msg) if err != nil { fmt.Printf("read root error %v\n", err) return } fmt.Printf("year %d, month %d, day %d\n", d.Year(), d.Month(), d.Day()) ``` ``` Output: year 2004, month 12, day 7 ``` #### func [UnmarshalPacked](https://github.com/capnproto/go-capnproto2/blob/v3.0.0-alpha-29/codec.go#L173) [¶](#UnmarshalPacked) ``` func UnmarshalPacked(data [][byte](/builtin#byte)) (*[Message](#Message), [error](/builtin#error)) ``` UnmarshalPacked reads a packed serialized stream into a message. #### func (*Message) [CapTable](https://github.com/capnproto/go-capnproto2/blob/v3.0.0-alpha-29/message.go#L221) [¶](#Message.CapTable) ``` func (m *[Message](#Message)) CapTable() *[CapTable](#CapTable) ``` CapTable is the indexed list of the clients referenced in the message. Capability pointers inside the message will use this table to map pointers to Clients. The table is populated by the RPC system. <https://capnproto.org/encoding.html#capabilities-interfaces> #### func (*Message) [Marshal](https://github.com/capnproto/go-capnproto2/blob/v3.0.0-alpha-29/message.go#L351) [¶](#Message.Marshal) ``` func (m *[Message](#Message)) Marshal() ([][byte](/builtin#byte), [error](/builtin#error)) ``` Marshal concatenates the segments in the message into a single byte slice including framing.
#### func (*Message) [MarshalPacked](https://github.com/capnproto/go-capnproto2/blob/v3.0.0-alpha-29/message.go#L411) [¶](#Message.MarshalPacked) ``` func (m *[Message](#Message)) MarshalPacked() ([][byte](/builtin#byte), [error](/builtin#error)) ``` MarshalPacked marshals the message in packed form. #### func (*Message) [NumSegments](https://github.com/capnproto/go-capnproto2/blob/v3.0.0-alpha-29/message.go#L249) [¶](#Message.NumSegments) ``` func (m *[Message](#Message)) NumSegments() [int64](/builtin#int64) ``` NumSegments returns the number of segments in the message. #### func (*Message) [Release](https://github.com/capnproto/go-capnproto2/blob/v3.0.0-alpha-29/message.go#L92) [¶](#Message.Release) ``` func (m *[Message](#Message)) Release() ``` Release is syntactic sugar for Message.Reset(nil). See docstring for Reset for an important warning. #### func (*Message) [Reset](https://github.com/capnproto/go-capnproto2/blob/v3.0.0-alpha-29/message.go#L100) [¶](#Message.Reset) ``` func (m *[Message](#Message)) Reset(arena [Arena](#Arena)) (first *[Segment](#Segment), err [error](/builtin#error)) ``` Reset the message to use a different arena, allowing it to be reused. This invalidates any existing pointers in the Message, releases all clients in the cap table, and releases the current Arena, so use with caution. #### func (*Message) [ResetReadLimit](https://github.com/capnproto/go-capnproto2/blob/v3.0.0-alpha-29/message.go#L179) [¶](#Message.ResetReadLimit) ``` func (m *[Message](#Message)) ResetReadLimit(limit [uint64](/builtin#uint64)) ``` ResetReadLimit sets the number of bytes allowed to be read from this message. #### func (*Message) [Root](https://github.com/capnproto/go-capnproto2/blob/v3.0.0-alpha-29/message.go#L191) [¶](#Message.Root) ``` func (m *[Message](#Message)) Root() ([Ptr](#Ptr), [error](/builtin#error)) ``` Root returns the pointer to the message's root object. 
#### func (*Message) [Segment](https://github.com/capnproto/go-capnproto2/blob/v3.0.0-alpha-29/message.go#L254) [¶](#Message.Segment) ``` func (m *[Message](#Message)) Segment(id [SegmentID](#SegmentID)) (*[Segment](#Segment), [error](/builtin#error)) ``` Segment returns the segment with the given ID. #### func (*Message) [SetRoot](https://github.com/capnproto/go-capnproto2/blob/v3.0.0-alpha-29/message.go#L204) [¶](#Message.SetRoot) ``` func (m *[Message](#Message)) SetRoot(p [Ptr](#Ptr)) [error](/builtin#error) ``` SetRoot sets the message's root object to p. #### func (*Message) [TotalSize](https://github.com/capnproto/go-capnproto2/blob/v3.0.0-alpha-29/message.go#L228) [¶](#Message.TotalSize) ``` func (m *[Message](#Message)) TotalSize() ([uint64](/builtin#uint64), [error](/builtin#error)) ``` TotalSize computes the total size of the message in bytes when serialized as a stream. This is the same as the length of the slice returned by m.Marshal(). #### func (*Message) [Unread](https://github.com/capnproto/go-capnproto2/blob/v3.0.0-alpha-29/message.go#L185) [¶](#Message.Unread) ``` func (m *[Message](#Message)) Unread(sz [Size](#Size)) ``` Unread increases the read limit by sz. #### func (*Message) [WriteTo](https://github.com/capnproto/go-capnproto2/blob/v3.0.0-alpha-29/message.go#L343) [¶](#Message.WriteTo) ``` func (m *[Message](#Message)) WriteTo(w [io](/io).[Writer](/io#Writer)) ([int64](/builtin#int64), [error](/builtin#error)) ``` #### type [Metadata](https://github.com/capnproto/go-capnproto2/blob/v3.0.0-alpha-29/metadata.go#L13) [¶](#Metadata) ``` type Metadata struct { // contains filtered or unexported fields } ``` Metadata is morally a map[any]any which implements sync.Locker; it is used by the rpc system to attach bookkeeping information to various objects. The zero value is not meaningful, and the Metadata must not be copied after its first use.
#### func [NewMetadata](https://github.com/capnproto/go-capnproto2/blob/v3.0.0-alpha-29/metadata.go#L46) [¶](#NewMetadata) ``` func NewMetadata() *[Metadata](#Metadata) ``` Allocate and return a freshly initialized Metadata. #### func (*Metadata) [Delete](https://github.com/capnproto/go-capnproto2/blob/v3.0.0-alpha-29/metadata.go#L41) [¶](#Metadata.Delete) ``` func (m *[Metadata](#Metadata)) Delete(key [any](/builtin#any)) ``` Delete the key from the map. #### func (*Metadata) [Get](https://github.com/capnproto/go-capnproto2/blob/v3.0.0-alpha-29/metadata.go#L30) [¶](#Metadata.Get) ``` func (m *[Metadata](#Metadata)) Get(key [any](/builtin#any)) (value [any](/builtin#any), ok [bool](/builtin#bool)) ``` Look up key in the map. Returns the value, and a boolean which is false if the key was not present. #### func (*Metadata) [Lock](https://github.com/capnproto/go-capnproto2/blob/v3.0.0-alpha-29/metadata.go#L19) [¶](#Metadata.Lock) ``` func (m *[Metadata](#Metadata)) Lock() ``` Lock the metadata map. #### func (*Metadata) [Put](https://github.com/capnproto/go-capnproto2/blob/v3.0.0-alpha-29/metadata.go#L36) [¶](#Metadata.Put) ``` func (m *[Metadata](#Metadata)) Put(key, value [any](/builtin#any)) ``` Insert the key, value pair into the map. #### func (*Metadata) [Unlock](https://github.com/capnproto/go-capnproto2/blob/v3.0.0-alpha-29/metadata.go#L24) [¶](#Metadata.Unlock) ``` func (m *[Metadata](#Metadata)) Unlock() ``` Unlock the metadata map. #### type [Method](https://github.com/capnproto/go-capnproto2/blob/v3.0.0-alpha-29/capability.go#L961) [¶](#Method) ``` type Method struct { InterfaceID [uint64](/builtin#uint64) MethodID [uint16](/builtin#uint16) // Canonical name of the interface. May be empty. InterfaceName [string](/builtin#string) // Method name as it appears in the schema. May be empty. MethodName [string](/builtin#string) } ``` A Method identifies a method along with an optional human-readable description of the method. 
#### func (*Method) [String](https://github.com/capnproto/go-capnproto2/blob/v3.0.0-alpha-29/capability.go#L974) [¶](#Method.String) ``` func (m *[Method](#Method)) String() [string](/builtin#string) ``` String returns a formatted string containing the interface name or the method name if present, otherwise it uses the raw IDs. This is suitable for use in error messages and logs. #### type [MultiSegmentArena](https://github.com/capnproto/go-capnproto2/blob/v3.0.0-alpha-29/arena.go#L118) [¶](#MultiSegmentArena) ``` type MultiSegmentArena struct { // contains filtered or unexported fields } ``` MultiSegment is an arena that stores object data across multiple []byte buffers, allocating new buffers of exponentially-increasing size when full. This avoids the potentially-expensive slice copying of SingleSegment. #### func [MultiSegment](https://github.com/capnproto/go-capnproto2/blob/v3.0.0-alpha-29/arena.go#L127) [¶](#MultiSegment) ``` func MultiSegment(b [][][byte](/builtin#byte)) *[MultiSegmentArena](#MultiSegmentArena) ``` MultiSegment returns a new arena that allocates new segments when they are full. b MAY be nil. Callers MAY use b to populate the buffer for reading or to reserve memory of a specific size. 
#### func (*MultiSegmentArena) [Allocate](https://github.com/capnproto/go-capnproto2/blob/v3.0.0-alpha-29/arena.go#L209) [¶](#MultiSegmentArena.Allocate) ``` func (msa *[MultiSegmentArena](#MultiSegmentArena)) Allocate(sz [Size](#Size), segs map[[SegmentID](#SegmentID)]*[Segment](#Segment)) ([SegmentID](#SegmentID), [][byte](/builtin#byte), [error](/builtin#error)) ``` #### func (*MultiSegmentArena) [Data](https://github.com/capnproto/go-capnproto2/blob/v3.0.0-alpha-29/arena.go#L201) [¶](#MultiSegmentArena.Data) ``` func (msa *[MultiSegmentArena](#MultiSegmentArena)) Data(id [SegmentID](#SegmentID)) ([][byte](/builtin#byte), [error](/builtin#error)) ``` #### func (*MultiSegmentArena) [NumSegments](https://github.com/capnproto/go-capnproto2/blob/v3.0.0-alpha-29/arena.go#L197) [¶](#MultiSegmentArena.NumSegments) ``` func (msa *[MultiSegmentArena](#MultiSegmentArena)) NumSegments() [int64](/builtin#int64) ``` #### func (*MultiSegmentArena) [Release](https://github.com/capnproto/go-capnproto2/blob/v3.0.0-alpha-29/arena.go#L143) [¶](#MultiSegmentArena.Release) ``` func (msa *[MultiSegmentArena](#MultiSegmentArena)) Release() ``` Return this arena to an internal sync.Pool of arenas that can be re-used. Any time MultiSegment(nil) is called, arenas from this pool will be used if available, which can help reduce memory allocations. All segments will be zeroed before re-use. Calling Release is optional; if not done the garbage collector will release the memory per usual. 
#### func (*MultiSegmentArena) [String](https://github.com/capnproto/go-capnproto2/blob/v3.0.0-alpha-29/arena.go#L240) [¶](#MultiSegmentArena.String) ``` func (msa *[MultiSegmentArena](#MultiSegmentArena)) String() [string](/builtin#string) ``` #### type [ObjectSize](https://github.com/capnproto/go-capnproto2/blob/v3.0.0-alpha-29/address.go#L138) [¶](#ObjectSize) ``` type ObjectSize struct { DataSize [Size](#Size) // must be <= 1<<19 - 8 PointerCount [uint16](/builtin#uint16) } ``` ObjectSize records section sizes for a struct or list. #### func (ObjectSize) [GoString](https://github.com/capnproto/go-capnproto2/blob/v3.0.0-alpha-29/address.go#L190) [¶](#ObjectSize.GoString) ``` func (sz [ObjectSize](#ObjectSize)) GoString() [string](/builtin#string) ``` GoString formats the ObjectSize as a keyed struct literal. #### func (ObjectSize) [String](https://github.com/capnproto/go-capnproto2/blob/v3.0.0-alpha-29/address.go#L185) [¶](#ObjectSize.String) ``` func (sz [ObjectSize](#ObjectSize)) String() [string](/builtin#string) ``` String returns a short, human readable representation of the object size. #### type [PipelineCaller](https://github.com/capnproto/go-capnproto2/blob/v3.0.0-alpha-29/answer.go#L226) [¶](#PipelineCaller) ``` type PipelineCaller interface { PipelineSend(ctx [context](/context).[Context](/context#Context), transform [][PipelineOp](#PipelineOp), s [Send](#Send)) (*[Answer](#Answer), [ReleaseFunc](#ReleaseFunc)) PipelineRecv(ctx [context](/context).[Context](/context#Context), transform [][PipelineOp](#PipelineOp), r [Recv](#Recv)) [PipelineCaller](#PipelineCaller) } ``` A PipelineCaller implements promise pipelining. See the counterpart methods in ClientHook for a description. 
#### type [PipelineClient](https://github.com/capnproto/go-capnproto2/blob/v3.0.0-alpha-29/answer.go#L471) [¶](#PipelineClient) ``` type PipelineClient struct { // contains filtered or unexported fields } ``` PipelineClient implements ClientHook by calling to the pipeline's answer. #### func (PipelineClient) [Answer](https://github.com/capnproto/go-capnproto2/blob/v3.0.0-alpha-29/answer.go#L476) [¶](#PipelineClient.Answer) ``` func (pc [PipelineClient](#PipelineClient)) Answer() *[Answer](#Answer) ``` #### func (PipelineClient) [Brand](https://github.com/capnproto/go-capnproto2/blob/v3.0.0-alpha-29/answer.go#L492) [¶](#PipelineClient.Brand) ``` func (pc [PipelineClient](#PipelineClient)) Brand() [Brand](#Brand) ``` #### func (PipelineClient) [Recv](https://github.com/capnproto/go-capnproto2/blob/v3.0.0-alpha-29/answer.go#L488) [¶](#PipelineClient.Recv) ``` func (pc [PipelineClient](#PipelineClient)) Recv(ctx [context](/context).[Context](/context#Context), r [Recv](#Recv)) [PipelineCaller](#PipelineCaller) ``` #### func (PipelineClient) [Send](https://github.com/capnproto/go-capnproto2/blob/v3.0.0-alpha-29/answer.go#L484) [¶](#PipelineClient.Send) ``` func (pc [PipelineClient](#PipelineClient)) Send(ctx [context](/context).[Context](/context#Context), s [Send](#Send)) (*[Answer](#Answer), [ReleaseFunc](#ReleaseFunc)) ``` #### func (PipelineClient) [Shutdown](https://github.com/capnproto/go-capnproto2/blob/v3.0.0-alpha-29/answer.go#L504) [¶](#PipelineClient.Shutdown) ``` func (pc [PipelineClient](#PipelineClient)) Shutdown() ``` #### func (PipelineClient) [String](https://github.com/capnproto/go-capnproto2/blob/v3.0.0-alpha-29/answer.go#L507) [¶](#PipelineClient.String) ``` func (pc [PipelineClient](#PipelineClient)) String() [string](/builtin#string) ``` #### func (PipelineClient) [Transform](https://github.com/capnproto/go-capnproto2/blob/v3.0.0-alpha-29/answer.go#L480) [¶](#PipelineClient.Transform) ``` func (pc [PipelineClient](#PipelineClient)) Transform() 
[][PipelineOp](#PipelineOp) ``` #### type [PipelineOp](https://github.com/capnproto/go-capnproto2/blob/v3.0.0-alpha-29/answer.go#L516) [¶](#PipelineOp) ``` type PipelineOp struct { Field [uint16](/builtin#uint16) DefaultValue [][byte](/builtin#byte) } ``` A PipelineOp describes a step in transforming a pipeline. It maps closely with the PromisedAnswer.Op struct in rpc.capnp. #### func (PipelineOp) [String](https://github.com/capnproto/go-capnproto2/blob/v3.0.0-alpha-29/answer.go#L522) [¶](#PipelineOp.String) ``` func (op [PipelineOp](#PipelineOp)) String() [string](/builtin#string) ``` String returns a human-readable description of op. #### type [PointerList](https://github.com/capnproto/go-capnproto2/blob/v3.0.0-alpha-29/list.go#L331) [¶](#PointerList) ``` type PointerList [List](#List) ``` A PointerList is a reference to an array of pointers. #### func [NewPointerList](https://github.com/capnproto/go-capnproto2/blob/v3.0.0-alpha-29/list.go#L336) [¶](#NewPointerList) ``` func NewPointerList(s *[Segment](#Segment), n [int32](/builtin#int32)) ([PointerList](#PointerList), [error](/builtin#error)) ``` NewPointerList allocates a new list of pointers, preferring placement in s. #### func (PointerList) [At](https://github.com/capnproto/go-capnproto2/blob/v3.0.0-alpha-29/list.go#L355) [¶](#PointerList.At) ``` func (p [PointerList](#PointerList)) At(i [int](/builtin#int)) ([Ptr](#Ptr), [error](/builtin#error)) ``` At returns the i'th pointer in the list. 
#### func (PointerList) [DecodeFromPtr](https://github.com/capnproto/go-capnproto2/blob/v3.0.0-alpha-29/list-gen.go#L209) [¶](#PointerList.DecodeFromPtr) ``` func ([PointerList](#PointerList)) DecodeFromPtr(p [Ptr](#Ptr)) [PointerList](#PointerList) ``` #### func (PointerList) [EncodeAsPtr](https://github.com/capnproto/go-capnproto2/blob/v3.0.0-alpha-29/list-gen.go#L205) [¶](#PointerList.EncodeAsPtr) ``` func (l [PointerList](#PointerList)) EncodeAsPtr(seg *[Segment](#Segment)) [Ptr](#Ptr) ``` #### func (PointerList) [IsValid](https://github.com/capnproto/go-capnproto2/blob/v3.0.0-alpha-29/list-gen.go#L197) [¶](#PointerList.IsValid) ``` func (l [PointerList](#PointerList)) IsValid() [bool](/builtin#bool) ``` #### func (PointerList) [Len](https://github.com/capnproto/go-capnproto2/blob/v3.0.0-alpha-29/list-gen.go#L201) [¶](#PointerList.Len) ``` func (l [PointerList](#PointerList)) Len() [int](/builtin#int) ``` #### func (PointerList) [Message](https://github.com/capnproto/go-capnproto2/blob/v3.0.0-alpha-29/list-gen.go#L213) [¶](#PointerList.Message) ``` func (l [PointerList](#PointerList)) Message() *[Message](#Message) ``` #### func (PointerList) [Segment](https://github.com/capnproto/go-capnproto2/blob/v3.0.0-alpha-29/list-gen.go#L217) [¶](#PointerList.Segment) ``` func (l [PointerList](#PointerList)) Segment() *[Segment](#Segment) ``` #### func (PointerList) [Set](https://github.com/capnproto/go-capnproto2/blob/v3.0.0-alpha-29/list.go#L365) [¶](#PointerList.Set) ``` func (p [PointerList](#PointerList)) Set(i [int](/builtin#int), v [Ptr](#Ptr)) [error](/builtin#error) ``` Set sets the i'th pointer in the list to v. 
#### func (PointerList) [ToPtr](https://github.com/capnproto/go-capnproto2/blob/v3.0.0-alpha-29/list-gen.go#L221) [¶](#PointerList.ToPtr) ``` func (l [PointerList](#PointerList)) ToPtr() [Ptr](#Ptr) ``` #### type [Promise](https://github.com/capnproto/go-capnproto2/blob/v3.0.0-alpha-29/answer.go#L21) [¶](#Promise) ``` type Promise struct { // contains filtered or unexported fields } ``` A Promise holds the result of an RPC call. Only one of Fulfill or Reject can be called on a Promise. Before the result is written, calls can be queued up using the Answer methods: this is promise pipelining. Promise is most useful for implementing ClientHook. Most applications will use Answer, since that is what is returned by a Client. #### func [NewPromise](https://github.com/capnproto/go-capnproto2/blob/v3.0.0-alpha-29/answer.go#L67) [¶](#NewPromise) ``` func NewPromise(m [Method](#Method), pc [PipelineCaller](#PipelineCaller)) *[Promise](#Promise) ``` NewPromise creates a new unresolved promise. The PipelineCaller will be used to make pipelined calls before the promise resolves. #### func (*Promise) [Answer](https://github.com/capnproto/go-capnproto2/blob/v3.0.0-alpha-29/answer.go#L200) [¶](#Promise.Answer) ``` func (p *[Promise](#Promise)) Answer() *[Answer](#Answer) ``` Answer returns a read-only view of the promise. Answer may be called concurrently by multiple goroutines. #### func (*Promise) [Fulfill](https://github.com/capnproto/go-capnproto2/blob/v3.0.0-alpha-29/answer.go#L117) [¶](#Promise.Fulfill) ``` func (p *[Promise](#Promise)) Fulfill(result [Ptr](#Ptr)) ``` Fulfill resolves the promise with a successful result. Fulfill will wait for any outstanding calls to the underlying PipelineCaller to yield Answers and any pipelined clients to be fulfilled.
#### func (*Promise) [Reject](https://github.com/capnproto/go-capnproto2/blob/v3.0.0-alpha-29/answer.go#L126) [¶](#Promise.Reject) ``` func (p *[Promise](#Promise)) Reject(e [error](/builtin#error)) ``` Reject resolves the promise with a failure. Reject will wait for any outstanding calls to the underlying PipelineCaller to yield Answers and any pipelined clients to be fulfilled. #### func (*Promise) [ReleaseClients](https://github.com/capnproto/go-capnproto2/blob/v3.0.0-alpha-29/answer.go#L211) [¶](#Promise.ReleaseClients) ``` func (p *[Promise](#Promise)) ReleaseClients() ``` ReleaseClients waits until p is resolved and then closes any proxy clients created by the promise's answer. Failure to call this method will result in capability leaks. After the first call, subsequent calls to ReleaseClients do nothing. It is safe to call ReleaseClients concurrently from multiple goroutines. This method is typically used in a ReleaseFunc. #### func (*Promise) [Resolve](https://github.com/capnproto/go-capnproto2/blob/v3.0.0-alpha-29/answer.go#L137) [¶](#Promise.Resolve) ``` func (p *[Promise](#Promise)) Resolve(r [Ptr](#Ptr), e [error](/builtin#error)) ``` Resolve resolves the promise. If e != nil, then this is equivalent to p.Reject(e). Otherwise, it is equivalent to p.Fulfill(r). #### type [Ptr](https://github.com/capnproto/go-capnproto2/blob/v3.0.0-alpha-29/pointer.go#L12) [¶](#Ptr) ``` type Ptr struct { // contains filtered or unexported fields } ``` A Ptr is a reference to a Cap'n Proto struct, list, or interface. The zero value is a null pointer. #### func [MustUnmarshalRoot](https://github.com/capnproto/go-capnproto2/blob/v3.0.0-alpha-29/codec.go#L186) [¶](#MustUnmarshalRoot) ``` func MustUnmarshalRoot(data [][byte](/builtin#byte)) [Ptr](#Ptr) ``` MustUnmarshalRoot reads an unpacked serialized stream and returns its root pointer. If there is any error, it panics. 
#### func [Transform](https://github.com/capnproto/go-capnproto2/blob/v3.0.0-alpha-29/answer.go#L535) [¶](#Transform) ``` func Transform(p [Ptr](#Ptr), transform [][PipelineOp](#PipelineOp)) ([Ptr](#Ptr), [error](/builtin#error)) ``` Transform applies a sequence of pipeline operations to a pointer and returns the result. #### func (Ptr) [Data](https://github.com/capnproto/go-capnproto2/blob/v3.0.0-alpha-29/pointer.go#L155) [¶](#Ptr.Data) ``` func (p [Ptr](#Ptr)) Data() [][byte](/builtin#byte) ``` Data attempts to convert p into Data, returning nil if p is not a valid 1-byte list pointer. #### func (Ptr) [DataDefault](https://github.com/capnproto/go-capnproto2/blob/v3.0.0-alpha-29/pointer.go#L161) [¶](#Ptr.DataDefault) ``` func (p [Ptr](#Ptr)) DataDefault(def [][byte](/builtin#byte)) [][byte](/builtin#byte) ``` DataDefault attempts to convert p into Data, returning def if p is not a valid 1-byte list pointer. #### func (Ptr) [DecodeFromPtr](https://github.com/capnproto/go-capnproto2/blob/v3.0.0-alpha-29/pointer.go#L211) [¶](#Ptr.DecodeFromPtr) ``` func ([Ptr](#Ptr)) DecodeFromPtr(p [Ptr](#Ptr)) [Ptr](#Ptr) ``` DecodeFromPtr returns its argument; for implementing TypeParam. #### func (Ptr) [Default](https://github.com/capnproto/go-capnproto2/blob/v3.0.0-alpha-29/pointer.go#L194) [¶](#Ptr.Default) ``` func (p [Ptr](#Ptr)) Default(def [][byte](/builtin#byte)) ([Ptr](#Ptr), [error](/builtin#error)) ``` Default returns p if it is valid, otherwise it unmarshals def. #### func (Ptr) [EncodeAsPtr](https://github.com/capnproto/go-capnproto2/blob/v3.0.0-alpha-29/pointer.go#L208) [¶](#Ptr.EncodeAsPtr) ``` func (p [Ptr](#Ptr)) EncodeAsPtr(*[Segment](#Segment)) [Ptr](#Ptr) ``` EncodeAsPtr returns the receiver; for implementing TypeParam. The segment argument is ignored. 
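Transform, documented above, walks a chain of PipelineOp field selections from a root pointer. The shape of that traversal can be shown with a toy model; the `node` type and `transform` function here are hypothetical stand-ins, not the library's pointer representation:

```go
package main

import (
	"errors"
	"fmt"
)

// node is a toy stand-in for a decoded pointer: either a leaf value or
// a set of numbered fields. A transform path is a list of field
// indices, like the Field entries of a []PipelineOp.
type node struct {
	leaf   string
	fields map[uint16]*node
}

var errNoField = errors.New("no such field")

// transform follows each field index in turn, failing if any step is
// missing, analogously to how Transform applies a []PipelineOp.
func transform(n *node, path []uint16) (*node, error) {
	for _, f := range path {
		next, ok := n.fields[f]
		if !ok {
			return nil, errNoField
		}
		n = next
	}
	return n, nil
}

func main() {
	root := &node{fields: map[uint16]*node{
		0: {fields: map[uint16]*node{2: {leaf: "hello"}}},
	}}
	got, err := transform(root, []uint16{0, 2})
	fmt.Println(got.leaf, err)
}
```

The real PipelineOp also carries a DefaultValue, which the library substitutes when a step resolves to a null pointer; the sketch omits that.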
#### func (Ptr) [Interface](https://github.com/capnproto/go-capnproto2/blob/v3.0.0-alpha-29/pointer.go#L88) [¶](#Ptr.Interface) ``` func (p [Ptr](#Ptr)) Interface() [Interface](#Interface) ``` Interface converts p to an Interface. If p does not hold an interface pointer, the zero value is returned. #### func (Ptr) [IsValid](https://github.com/capnproto/go-capnproto2/blob/v3.0.0-alpha-29/pointer.go#L174) [¶](#Ptr.IsValid) ``` func (p [Ptr](#Ptr)) IsValid() [bool](/builtin#bool) ``` IsValid reports whether p is valid. #### func (Ptr) [List](https://github.com/capnproto/go-capnproto2/blob/v3.0.0-alpha-29/pointer.go#L55) [¶](#Ptr.List) ``` func (p [Ptr](#Ptr)) List() [List](#List) ``` List converts p to a List. If p does not hold a List pointer, the zero value is returned. #### func (Ptr) [ListDefault](https://github.com/capnproto/go-capnproto2/blob/v3.0.0-alpha-29/pointer.go#L71) [¶](#Ptr.ListDefault) ``` func (p [Ptr](#Ptr)) ListDefault(def [][byte](/builtin#byte)) ([List](#List), [error](/builtin#error)) ``` ListDefault attempts to convert p into a list, reading the default value from def if p is not a list. #### func (Ptr) [Message](https://github.com/capnproto/go-capnproto2/blob/v3.0.0-alpha-29/pointer.go#L186) [¶](#Ptr.Message) ``` func (p [Ptr](#Ptr)) Message() *[Message](#Message) ``` Message returns the message the referenced data is stored in or nil if the pointer is invalid. #### func (Ptr) [Segment](https://github.com/capnproto/go-capnproto2/blob/v3.0.0-alpha-29/pointer.go#L180) [¶](#Ptr.Segment) ``` func (p [Ptr](#Ptr)) Segment() *[Segment](#Segment) ``` Segment returns the segment that the referenced data is stored in or nil if the pointer is invalid. #### func (Ptr) [Struct](https://github.com/capnproto/go-capnproto2/blob/v3.0.0-alpha-29/pointer.go#L23) [¶](#Ptr.Struct) ``` func (p [Ptr](#Ptr)) Struct() [Struct](#Struct) ``` Struct converts p to a Struct. If p does not hold a Struct pointer, the zero value is returned.
#### func (Ptr) [StructDefault](https://github.com/capnproto/go-capnproto2/blob/v3.0.0-alpha-29/pointer.go#L38) [¶](#Ptr.StructDefault) ``` func (p [Ptr](#Ptr)) StructDefault(def [][byte](/builtin#byte)) ([Struct](#Struct), [error](/builtin#error)) ``` StructDefault attempts to convert p into a struct, reading the default value from def if p is not a struct. #### func (Ptr) [Text](https://github.com/capnproto/go-capnproto2/blob/v3.0.0-alpha-29/pointer.go#L100) [¶](#Ptr.Text) ``` func (p [Ptr](#Ptr)) Text() [string](/builtin#string) ``` Text attempts to convert p into Text, returning an empty string if p is not a valid 1-byte list pointer. #### func (Ptr) [TextBytes](https://github.com/capnproto/go-capnproto2/blob/v3.0.0-alpha-29/pointer.go#L121) [¶](#Ptr.TextBytes) ``` func (p [Ptr](#Ptr)) TextBytes() [][byte](/builtin#byte) ``` TextBytes attempts to convert p into Text, returning nil if p is not a valid 1-byte list pointer. It returns a slice directly into the segment. #### func (Ptr) [TextBytesDefault](https://github.com/capnproto/go-capnproto2/blob/v3.0.0-alpha-29/pointer.go#L132) [¶](#Ptr.TextBytesDefault) ``` func (p [Ptr](#Ptr)) TextBytesDefault(def [string](/builtin#string)) [][byte](/builtin#byte) ``` TextBytesDefault attempts to convert p into Text, returning def if p is not a valid 1-byte list pointer. It returns a slice directly into the segment. #### func (Ptr) [TextDefault](https://github.com/capnproto/go-capnproto2/blob/v3.0.0-alpha-29/pointer.go#L110) [¶](#Ptr.TextDefault) ``` func (p [Ptr](#Ptr)) TextDefault(def [string](/builtin#string)) [string](/builtin#string) ``` TextDefault attempts to convert p into Text, returning def if p is not a valid 1-byte list pointer. #### type [Recv](https://github.com/capnproto/go-capnproto2/blob/v3.0.0-alpha-29/capability.go#L878) [¶](#Recv) ``` type Recv struct { // Method must have InterfaceID and MethodID filled in. Method [Method](#Method) // Args is the set of arguments for the RPC. 
Args [Struct](#Struct) // ReleaseArgs is called after Args is no longer referenced. // Must not be nil. If called more than once, subsequent calls // must silently no-op. ReleaseArgs [ReleaseFunc](#ReleaseFunc) // Returner manages the results. Returner [Returner](#Returner) } ``` Recv is the input to ClientHook.Recv. #### func (Recv) [AllocResults](https://github.com/capnproto/go-capnproto2/blob/v3.0.0-alpha-29/capability.go#L896) [¶](#Recv.AllocResults) ``` func (r [Recv](#Recv)) AllocResults(sz [ObjectSize](#ObjectSize)) ([Struct](#Struct), [error](/builtin#error)) ``` AllocResults allocates a result struct. It is the same as calling r.Returner.AllocResults(sz). #### func (Recv) [Reject](https://github.com/capnproto/go-capnproto2/blob/v3.0.0-alpha-29/capability.go#L908) [¶](#Recv.Reject) ``` func (r [Recv](#Recv)) Reject(e [error](/builtin#error)) ``` Reject ends the method call with an error, releasing the arguments. #### func (Recv) [Return](https://github.com/capnproto/go-capnproto2/blob/v3.0.0-alpha-29/capability.go#L901) [¶](#Recv.Return) ``` func (r [Recv](#Recv)) Return() ``` Return ends the method call successfully, releasing the arguments. #### type [ReleaseFunc](https://github.com/capnproto/go-capnproto2/blob/v3.0.0-alpha-29/capability.go#L957) [¶](#ReleaseFunc) ``` type ReleaseFunc func() ``` A ReleaseFunc tells the RPC system that a parameter or result struct is no longer in use and may be reclaimed. After the first call, subsequent calls to a ReleaseFunc do nothing. A ReleaseFunc should not be called concurrently. #### type [Request](https://github.com/capnproto/go-capnproto2/blob/v3.0.0-alpha-29/request.go#L10) [¶](#Request) ``` type Request struct { // contains filtered or unexported fields } ``` A Request is a method call to be sent. Create one with NewRequest, and send it with Request.Send().
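The ReleaseFunc contract above (the first call releases, later calls silently no-op) is exactly what sync.Once provides. A minimal illustrative sketch, not the library's implementation; `once` is a hypothetical helper:

```go
package main

import (
	"fmt"
	"sync"
)

// ReleaseFunc mirrors the type above: call it when a parameter or
// result struct is no longer in use; repeat calls must do nothing.
type ReleaseFunc func()

// once wraps a cleanup function so that only the first call runs it;
// subsequent calls are no-ops, matching the documented contract.
func once(cleanup func()) ReleaseFunc {
	var o sync.Once
	return func() { o.Do(cleanup) }
}

func main() {
	released := 0
	release := once(func() { released++ })
	release()
	release() // second call is a no-op
	fmt.Println(released)
}
```

sync.Once also happens to make repeat calls safe from multiple goroutines, although the docs above only require (and promise) single-threaded use.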
#### func [NewRequest](https://github.com/capnproto/go-capnproto2/blob/v3.0.0-alpha-29/request.go#L20) [¶](#NewRequest) ``` func NewRequest(client [Client](#Client), method [Method](#Method), argsSize [ObjectSize](#ObjectSize)) (*[Request](#Request), [error](/builtin#error)) ``` NewRequest creates a new request calling the specified method on the specified client. argsSize is the size of the arguments struct. #### func (*Request) [Args](https://github.com/capnproto/go-capnproto2/blob/v3.0.0-alpha-29/request.go#L35) [¶](#Request.Args) ``` func (r *[Request](#Request)) Args() [Struct](#Struct) ``` Args returns the arguments struct for this request. The arguments must not be accessed after the request is sent. #### func (*Request) [Future](https://github.com/capnproto/go-capnproto2/blob/v3.0.0-alpha-29/request.go#L74) [¶](#Request.Future) ``` func (r *[Request](#Request)) Future() *[Future](#Future) ``` Future returns a future for the request's results. Returns nil if called before the request is sent. #### func (*Request) [Release](https://github.com/capnproto/go-capnproto2/blob/v3.0.0-alpha-29/request.go#L83) [¶](#Request.Release) ``` func (r *[Request](#Request)) Release() ``` Release releases the resources associated with the request. In particular: * Release the arguments if they have not yet been released. * If the request has been sent, wait for the result and release the results. #### func (*Request) [Send](https://github.com/capnproto/go-capnproto2/blob/v3.0.0-alpha-29/request.go#L52) [¶](#Request.Send) ``` func (r *[Request](#Request)) Send(ctx [context](/context).[Context](/context#Context)) *[Future](#Future) ``` Send sends the request, returning a future for its results.
#### func (*Request) [SendStream](https://github.com/capnproto/go-capnproto2/blob/v3.0.0-alpha-29/request.go#L64) [¶](#Request.SendStream) ``` func (r *[Request](#Request)) SendStream(ctx [context](/context).[Context](/context#Context)) [error](/builtin#error) ``` SendStream is to Send as Client.SendStreamCall is to Client.SendCall: it sends the request without returning a Future for the results. #### type [Resolver](https://github.com/capnproto/go-capnproto2/blob/v3.0.0-alpha-29/resolver.go#L4) [¶](#Resolver) ``` type Resolver[T [any](/builtin#any)] interface { // Fulfill supplies the value for the corresponding // Promise Fulfill(T) // Reject rejects the corresponding promise, with // the specified error. Reject([error](/builtin#error)) } ``` A Resolver supplies a value for a pending promise. #### func [NewLocalPromise](https://github.com/capnproto/go-capnproto2/blob/v3.0.0-alpha-29/localpromise.go#L16) [¶](#NewLocalPromise) ``` func NewLocalPromise[C ~[ClientKind](#ClientKind)]() (C, [Resolver](#Resolver)[C]) ``` NewLocalPromise returns a client that will eventually resolve to a capability, supplied via the returned Resolver. #### type [Returner](https://github.com/capnproto/go-capnproto2/blob/v3.0.0-alpha-29/capability.go#L919) [¶](#Returner) ``` type Returner interface { // AllocResults allocates the results struct that will be sent using // Return. It can be called at most once, and only before calling // Return. The struct returned by AllocResults must not be mutated // after Return is called, and may not be accessed after // ReleaseResults is called. AllocResults(sz [ObjectSize](#ObjectSize)) ([Struct](#Struct), [error](/builtin#error)) // PrepareReturn finalizes the return message. The method call will // resolve successfully if e is nil, or otherwise it will return an // exception to the caller. // // PrepareReturn must be called once. // // After PrepareReturn is invoked, no goroutine may modify the message // containing the results.
PrepareReturn(e [error](/builtin#error)) // Return resolves the method call, using the results finalized in // PrepareReturn. Return must be called once. // // Return must wait for all ongoing pipelined calls to be delivered, // and after it returns, no new calls can be sent to the PipelineCaller // returned from Recv. Return() // ReleaseResults relinquishes the caller's access to the message // containing the results; once this is called the message may be // released or reused, and it is not safe to access. // // If AllocResults has not been called, then this is a no-op. ReleaseResults() } ``` A Returner allocates and sends the results from a received capability method call. #### type [Segment](https://github.com/capnproto/go-capnproto2/blob/v3.0.0-alpha-29/segment.go#L17) [¶](#Segment) ``` type Segment struct { // contains filtered or unexported fields } ``` A Segment is an allocation arena for Cap'n Proto objects. It is part of a Message, which can contain other segments that reference each other. #### func (*Segment) [Data](https://github.com/capnproto/go-capnproto2/blob/v3.0.0-alpha-29/segment.go#L34) [¶](#Segment.Data) ``` func (s *[Segment](#Segment)) Data() [][byte](/builtin#byte) ``` Data returns the raw byte slice for the segment. #### func (*Segment) [ID](https://github.com/capnproto/go-capnproto2/blob/v3.0.0-alpha-29/segment.go#L29) [¶](#Segment.ID) ``` func (s *[Segment](#Segment)) ID() [SegmentID](#SegmentID) ``` ID returns the segment's ID. #### func (*Segment) [Message](https://github.com/capnproto/go-capnproto2/blob/v3.0.0-alpha-29/segment.go#L24) [¶](#Segment.Message) ``` func (s *[Segment](#Segment)) Message() *[Message](#Message) ``` Message returns the message that contains s. #### type [SegmentID](https://github.com/capnproto/go-capnproto2/blob/v3.0.0-alpha-29/segment.go#L12) [¶](#SegmentID) ``` type SegmentID [uint32](/builtin#uint32) ``` A SegmentID is a numeric identifier for a Segment. 
#### type [Send](https://github.com/capnproto/go-capnproto2/blob/v3.0.0-alpha-29/capability.go#L865) [¶](#Send) ``` type Send struct { // Method must have InterfaceID and MethodID filled in. Method [Method](#Method) // PlaceArgs is a function that will be called at most once before Send // returns to populate the arguments for the RPC. PlaceArgs may be nil. PlaceArgs func([Struct](#Struct)) [error](/builtin#error) // ArgsSize specifies the size of the struct to pass to PlaceArgs. ArgsSize [ObjectSize](#ObjectSize) } ``` Send is the input to ClientHook.Send. #### type [SingleSegmentArena](https://github.com/capnproto/go-capnproto2/blob/v3.0.0-alpha-29/arena.go#L54) [¶](#SingleSegmentArena) ``` type SingleSegmentArena [][byte](/builtin#byte) ``` SingleSegmentArena is an Arena implementation that stores message data in a contiguous slice. Allocation is performed by first allocating a new slice and copying existing data. A SingleSegmentArena does not fail unless the caller attempts to access another segment. #### func [SingleSegment](https://github.com/capnproto/go-capnproto2/blob/v3.0.0-alpha-29/arena.go#L59) [¶](#SingleSegment) ``` func SingleSegment(b [][byte](/builtin#byte)) *[SingleSegmentArena](#SingleSegmentArena) ``` SingleSegment constructs a SingleSegmentArena from b. b MAY be nil. Callers MAY use b to populate the segment for reading, or to reserve memory of a specific size.
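In contrast with MultiSegment, the note above says a SingleSegmentArena grows by allocating a bigger slice and copying. A minimal sketch of that grow-by-copy step; `growCopy` is a hypothetical helper, not the library's code:

```go
package main

import "fmt"

// growCopy extends buf by n bytes, reallocating and copying when the
// capacity runs out -- the strategy SingleSegmentArena's docs describe
// (and the step a MultiSegmentArena avoids by opening new buffers).
func growCopy(buf []byte, n int) []byte {
	if len(buf)+n > cap(buf) {
		bigger := make([]byte, len(buf), 2*(len(buf)+n))
		copy(bigger, buf) // the potentially expensive copy
		buf = bigger
	}
	return buf[:len(buf)+n]
}

func main() {
	b := make([]byte, 0, 8)
	for i := 0; i < 4; i++ {
		b = growCopy(b, 8)
	}
	fmt.Println(len(b), cap(b) >= 32)
}
```

The payoff of the copy is that all data stays in one segment, so intra-message pointers never need far-pointer indirection to another segment.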
#### func (*SingleSegmentArena) [Allocate](https://github.com/capnproto/go-capnproto2/blob/v3.0.0-alpha-29/arena.go#L74) [¶](#SingleSegmentArena.Allocate) ``` func (ssa *[SingleSegmentArena](#SingleSegmentArena)) Allocate(sz [Size](#Size), segs map[[SegmentID](#SegmentID)]*[Segment](#Segment)) ([SegmentID](#SegmentID), [][byte](/builtin#byte), [error](/builtin#error)) ``` #### func (SingleSegmentArena) [Data](https://github.com/capnproto/go-capnproto2/blob/v3.0.0-alpha-29/arena.go#L67) [¶](#SingleSegmentArena.Data) ``` func (ssa [SingleSegmentArena](#SingleSegmentArena)) Data(id [SegmentID](#SegmentID)) ([][byte](/builtin#byte), [error](/builtin#error)) ``` #### func (SingleSegmentArena) [NumSegments](https://github.com/capnproto/go-capnproto2/blob/v3.0.0-alpha-29/arena.go#L63) [¶](#SingleSegmentArena.NumSegments) ``` func (ssa [SingleSegmentArena](#SingleSegmentArena)) NumSegments() [int64](/builtin#int64) ``` #### func (*SingleSegmentArena) [Release](https://github.com/capnproto/go-capnproto2/blob/v3.0.0-alpha-29/arena.go#L110) [¶](#SingleSegmentArena.Release) ``` func (ssa *[SingleSegmentArena](#SingleSegmentArena)) Release() ``` Release returns this arena to an internal sync.Pool of arenas that can be re-used. Any time SingleSegment(nil) is called, arenas from this pool will be used if available, which can help reduce memory allocations. All segments will be zeroed before re-use. Calling Release is optional; if not done the garbage collector will release the memory as usual. #### func (SingleSegmentArena) [String](https://github.com/capnproto/go-capnproto2/blob/v3.0.0-alpha-29/arena.go#L97) [¶](#SingleSegmentArena.String) ``` func (ssa [SingleSegmentArena](#SingleSegmentArena)) String() [string](/builtin#string) ``` #### type [Size](https://github.com/capnproto/go-capnproto2/blob/v3.0.0-alpha-29/address.go#L57) [¶](#Size) ``` type Size [uint32](/builtin#uint32) ``` A Size is a size (in bytes).
#### func (Size) [GoString](https://github.com/capnproto/go-capnproto2/blob/v3.0.0-alpha-29/address.go#L71) [¶](#Size.GoString) ``` func (sz [Size](#Size)) GoString() [string](/builtin#string) ``` GoString returns the size as a Go expression. #### func (Size) [String](https://github.com/capnproto/go-capnproto2/blob/v3.0.0-alpha-29/address.go#L63) [¶](#Size.String) ``` func (sz [Size](#Size)) String() [string](/builtin#string) ``` String returns the size in the format "X bytes". #### type [Struct](https://github.com/capnproto/go-capnproto2/blob/v3.0.0-alpha-29/struct.go#L11) [¶](#Struct) ``` type Struct [StructKind](#StructKind) ``` Struct is a pointer to a struct. #### func [NewRootStruct](https://github.com/capnproto/go-capnproto2/blob/v3.0.0-alpha-29/struct.go#L44) [¶](#NewRootStruct) ``` func NewRootStruct(s *[Segment](#Segment), sz [ObjectSize](#ObjectSize)) ([Struct](#Struct), [error](/builtin#error)) ``` NewRootStruct creates a new struct, preferring placement in s, then sets the message's root to the new struct. #### func [NewStruct](https://github.com/capnproto/go-capnproto2/blob/v3.0.0-alpha-29/struct.go#L25) [¶](#NewStruct) ``` func NewStruct(s *[Segment](#Segment), sz [ObjectSize](#ObjectSize)) ([Struct](#Struct), [error](/builtin#error)) ``` NewStruct creates a new struct, preferring placement in s. #### func (Struct) [Bit](https://github.com/capnproto/go-capnproto2/blob/v3.0.0-alpha-29/struct.go#L190) [¶](#Struct.Bit) ``` func (p [Struct](#Struct)) Bit(n [BitOffset](#BitOffset)) [bool](/builtin#bool) ``` Bit returns the bit that is n bits from the start of the struct. #### func (Struct) [CopyFrom](https://github.com/capnproto/go-capnproto2/blob/v3.0.0-alpha-29/struct.go#L95) [¶](#Struct.CopyFrom) ``` func (p [Struct](#Struct)) CopyFrom(other [Struct](#Struct)) [error](/builtin#error) ``` CopyFrom copies content from another struct. 
If the other struct's sections are larger than this struct's, the extra data is not copied, meaning there is a risk of data loss when copying from messages built with future versions of the protocol. #### func (Struct) [DecodeFromPtr](https://github.com/capnproto/go-capnproto2/blob/v3.0.0-alpha-29/struct.go#L367) [¶](#Struct.DecodeFromPtr) ``` func ([Struct](#Struct)) DecodeFromPtr(p [Ptr](#Ptr)) [Struct](#Struct) ``` DecodeFromPtr(p) is equivalent to p.Struct() (the receiver is ignored); for implementing TypeParam. #### func (Struct) [EncodeAsPtr](https://github.com/capnproto/go-capnproto2/blob/v3.0.0-alpha-29/struct.go#L363) [¶](#Struct.EncodeAsPtr) ``` func (s [Struct](#Struct)) EncodeAsPtr(*[Segment](#Segment)) [Ptr](#Ptr) ``` s.EncodeAsPtr is equivalent to s.ToPtr(); for implementing TypeParam. The segment argument is ignored. #### func (Struct) [HasPtr](https://github.com/capnproto/go-capnproto2/blob/v3.0.0-alpha-29/struct.go#L121) [¶](#Struct.HasPtr) ``` func (p [Struct](#Struct)) HasPtr(i [uint16](/builtin#uint16)) [bool](/builtin#bool) ``` HasPtr reports whether the i'th pointer in the struct is non-null. It does not affect the read limit. #### func (Struct) [IsValid](https://github.com/capnproto/go-capnproto2/blob/v3.0.0-alpha-29/struct.go#L82) [¶](#Struct.IsValid) ``` func (p [Struct](#Struct)) IsValid() [bool](/builtin#bool) ``` IsValid reports whether the struct is valid. #### func (Struct) [Message](https://github.com/capnproto/go-capnproto2/blob/v3.0.0-alpha-29/struct.go#L74) [¶](#Struct.Message) ``` func (p [Struct](#Struct)) Message() *[Message](#Message) ``` Message returns the message the referenced struct is stored in or nil if the pointer is invalid. #### func (Struct) [Ptr](https://github.com/capnproto/go-capnproto2/blob/v3.0.0-alpha-29/struct.go#L112) [¶](#Struct.Ptr) ``` func (p [Struct](#Struct)) Ptr(i [uint16](/builtin#uint16)) ([Ptr](#Ptr), [error](/builtin#error)) ``` Ptr returns the i'th pointer in the struct.
#### func (Struct) [Segment](https://github.com/capnproto/go-capnproto2/blob/v3.0.0-alpha-29/struct.go#L68) [¶](#Struct.Segment) ``` func (p [Struct](#Struct)) Segment() *[Segment](#Segment) ``` Segment returns the segment the referenced struct is stored in or nil if the pointer is invalid. #### func (Struct) [SetBit](https://github.com/capnproto/go-capnproto2/blob/v3.0.0-alpha-29/struct.go#L199) [¶](#Struct.SetBit) ``` func (p [Struct](#Struct)) SetBit(n [BitOffset](#BitOffset), v [bool](/builtin#bool)) ``` SetBit sets the bit that is n bits from the start of the struct to v. #### func (Struct) [SetData](https://github.com/capnproto/go-capnproto2/blob/v3.0.0-alpha-29/struct.go#L166) [¶](#Struct.SetData) ``` func (p [Struct](#Struct)) SetData(i [uint16](/builtin#uint16), v [][byte](/builtin#byte)) [error](/builtin#error) ``` SetData sets the i'th pointer to a newly allocated data or null if v is nil. #### func (Struct) [SetNewText](https://github.com/capnproto/go-capnproto2/blob/v3.0.0-alpha-29/struct.go#L145) [¶](#Struct.SetNewText) ``` func (p [Struct](#Struct)) SetNewText(i [uint16](/builtin#uint16), v [string](/builtin#string)) [error](/builtin#error) ``` SetNewText sets the i'th pointer to a newly allocated text. #### func (Struct) [SetPtr](https://github.com/capnproto/go-capnproto2/blob/v3.0.0-alpha-29/struct.go#L129) [¶](#Struct.SetPtr) ``` func (p [Struct](#Struct)) SetPtr(i [uint16](/builtin#uint16), src [Ptr](#Ptr)) [error](/builtin#error) ``` SetPtr sets the i'th pointer in the struct to src. #### func (Struct) [SetText](https://github.com/capnproto/go-capnproto2/blob/v3.0.0-alpha-29/struct.go#L137) [¶](#Struct.SetText) ``` func (p [Struct](#Struct)) SetText(i [uint16](/builtin#uint16), v [string](/builtin#string)) [error](/builtin#error) ``` SetText sets the i'th pointer to a newly allocated text or null if v is empty. 
#### func (Struct) [SetTextFromBytes](https://github.com/capnproto/go-capnproto2/blob/v3.0.0-alpha-29/struct.go#L154) [¶](#Struct.SetTextFromBytes) ``` func (p [Struct](#Struct)) SetTextFromBytes(i [uint16](/builtin#uint16), v [][byte](/builtin#byte)) [error](/builtin#error) ``` SetTextFromBytes sets the i'th pointer to a newly allocated text or null if v is nil. #### func (Struct) [SetUint16](https://github.com/capnproto/go-capnproto2/blob/v3.0.0-alpha-29/struct.go#L266) [¶](#Struct.SetUint16) ``` func (p [Struct](#Struct)) SetUint16(off [DataOffset](#DataOffset), v [uint16](/builtin#uint16)) ``` SetUint16 sets the 16-bit integer that is off bytes from the start of the struct to v. #### func (Struct) [SetUint32](https://github.com/capnproto/go-capnproto2/blob/v3.0.0-alpha-29/struct.go#L275) [¶](#Struct.SetUint32) ``` func (p [Struct](#Struct)) SetUint32(off [DataOffset](#DataOffset), v [uint32](/builtin#uint32)) ``` SetUint32 sets the 32-bit integer that is off bytes from the start of the struct to v. #### func (Struct) [SetUint64](https://github.com/capnproto/go-capnproto2/blob/v3.0.0-alpha-29/struct.go#L284) [¶](#Struct.SetUint64) ``` func (p [Struct](#Struct)) SetUint64(off [DataOffset](#DataOffset), v [uint64](/builtin#uint64)) ``` SetUint64 sets the 64-bit integer that is off bytes from the start of the struct to v. #### func (Struct) [SetUint8](https://github.com/capnproto/go-capnproto2/blob/v3.0.0-alpha-29/struct.go#L257) [¶](#Struct.SetUint8) ``` func (p [Struct](#Struct)) SetUint8(off [DataOffset](#DataOffset), v [uint8](/builtin#uint8)) ``` SetUint8 sets the 8-bit integer that is off bytes from the start of the struct to v. #### func (Struct) [Size](https://github.com/capnproto/go-capnproto2/blob/v3.0.0-alpha-29/struct.go#L87) [¶](#Struct.Size) ``` func (p [Struct](#Struct)) Size() [ObjectSize](#ObjectSize) ``` Size returns the size of the struct. 
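The fixed-width accessors above (SetUint8 through SetUint64, and the matching getters) read and write little-endian integers at byte offsets into the struct's data section. A minimal sketch of the idea — illustration only, not the library's implementation (the real accessors bounds-check against the struct's actual data-section size, and generated code layers field defaults on top):

```go
package main

import (
	"encoding/binary"
	"fmt"
)

// readUint32 mimics what Struct.Uint32 does conceptually: interpret the
// 4 bytes starting off bytes into the data section as little-endian.
func readUint32(data []byte, off uint32) uint32 {
	if int(off)+4 > len(data) {
		return 0 // reads past the data section yield the zero default
	}
	return binary.LittleEndian.Uint32(data[off : off+4])
}

// setUint32 mimics Struct.SetUint32: write v little-endian at off.
func setUint32(data []byte, off uint32, v uint32) {
	binary.LittleEndian.PutUint32(data[off:off+4], v)
}

func main() {
	data := make([]byte, 8) // an 8-byte (one-word) data section
	setUint32(data, 4, 0xDEADBEEF)
	fmt.Printf("%#x %#x\n", readUint32(data, 4), readUint32(data, 100))
}
```

The out-of-range read returning zero matches the documented behavior that fields beyond a struct's size read as their default value.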
#### func (Struct) [ToPtr](https://github.com/capnproto/go-capnproto2/blob/v3.0.0-alpha-29/struct.go#L56) [¶](#Struct.ToPtr) ``` func (p [Struct](#Struct)) ToPtr() [Ptr](#Ptr) ``` ToPtr converts the struct to a generic pointer. #### func (Struct) [Uint16](https://github.com/capnproto/go-capnproto2/blob/v3.0.0-alpha-29/struct.go#L230) [¶](#Struct.Uint16) ``` func (p [Struct](#Struct)) Uint16(off [DataOffset](#DataOffset)) [uint16](/builtin#uint16) ``` Uint16 returns a 16-bit integer from the struct's data section. #### func (Struct) [Uint32](https://github.com/capnproto/go-capnproto2/blob/v3.0.0-alpha-29/struct.go#L239) [¶](#Struct.Uint32) ``` func (p [Struct](#Struct)) Uint32(off [DataOffset](#DataOffset)) [uint32](/builtin#uint32) ``` Uint32 returns a 32-bit integer from the struct's data section. #### func (Struct) [Uint64](https://github.com/capnproto/go-capnproto2/blob/v3.0.0-alpha-29/struct.go#L248) [¶](#Struct.Uint64) ``` func (p [Struct](#Struct)) Uint64(off [DataOffset](#DataOffset)) [uint64](/builtin#uint64) ``` Uint64 returns a 64-bit integer from the struct's data section. #### func (Struct) [Uint8](https://github.com/capnproto/go-capnproto2/blob/v3.0.0-alpha-29/struct.go#L221) [¶](#Struct.Uint8) ``` func (p [Struct](#Struct)) Uint8(off [DataOffset](#DataOffset)) [uint8](/builtin#uint8) ``` Uint8 returns an 8-bit integer from the struct's data section. #### type [StructKind](https://github.com/capnproto/go-capnproto2/blob/v3.0.0-alpha-29/struct.go#L16) [¶](#StructKind) ``` type StructKind = struct { // contains filtered or unexported fields } ``` The underlying type of Struct. We expose this so that we can use ~StructKind as a constraint in generics to capture any struct type. #### type [StructList](https://github.com/capnproto/go-capnproto2/blob/v3.0.0-alpha-29/list.go#L1067) [¶](#StructList) ``` type StructList[T ~[StructKind](#StructKind)] [List](#List) ``` A list of some Cap'n Proto struct type T. 
#### func (StructList[T]) [At](https://github.com/capnproto/go-capnproto2/blob/v3.0.0-alpha-29/list.go#L1072) [¶](#StructList.At) ``` func (s [StructList](#StructList)[T]) At(i [int](/builtin#int)) T ``` At returns the i'th element. #### func (StructList[T]) [DecodeFromPtr](https://github.com/capnproto/go-capnproto2/blob/v3.0.0-alpha-29/list-gen.go#L273) [¶](#StructList.DecodeFromPtr) ``` func ([StructList](#StructList)[T]) DecodeFromPtr(p [Ptr](#Ptr)) [StructList](#StructList)[T] ``` #### func (StructList[T]) [EncodeAsPtr](https://github.com/capnproto/go-capnproto2/blob/v3.0.0-alpha-29/list-gen.go#L269) [¶](#StructList.EncodeAsPtr) ``` func (l [StructList](#StructList)[T]) EncodeAsPtr(seg *[Segment](#Segment)) [Ptr](#Ptr) ``` #### func (StructList[T]) [IsValid](https://github.com/capnproto/go-capnproto2/blob/v3.0.0-alpha-29/list-gen.go#L261) [¶](#StructList.IsValid) ``` func (l [StructList](#StructList)[T]) IsValid() [bool](/builtin#bool) ``` #### func (StructList[T]) [Len](https://github.com/capnproto/go-capnproto2/blob/v3.0.0-alpha-29/list-gen.go#L265) [¶](#StructList.Len) ``` func (l [StructList](#StructList)[T]) Len() [int](/builtin#int) ``` #### func (StructList[T]) [Message](https://github.com/capnproto/go-capnproto2/blob/v3.0.0-alpha-29/list-gen.go#L277) [¶](#StructList.Message) ``` func (l [StructList](#StructList)[T]) Message() *[Message](#Message) ``` #### func (StructList[T]) [Segment](https://github.com/capnproto/go-capnproto2/blob/v3.0.0-alpha-29/list-gen.go#L281) [¶](#StructList.Segment) ``` func (l [StructList](#StructList)[T]) Segment() *[Segment](#Segment) ``` #### func (StructList[T]) [Set](https://github.com/capnproto/go-capnproto2/blob/v3.0.0-alpha-29/list.go#L1077) [¶](#StructList.Set) ``` func (s [StructList](#StructList)[T]) Set(i [int](/builtin#int), v T) [error](/builtin#error) ``` Set sets the i'th element to v. 
#### func (StructList[T]) [ToPtr](https://github.com/capnproto/go-capnproto2/blob/v3.0.0-alpha-29/list-gen.go#L285) [¶](#StructList.ToPtr) ``` func (l [StructList](#StructList)[T]) ToPtr() [Ptr](#Ptr) ``` #### type [StructReturner](https://github.com/capnproto/go-capnproto2/blob/v3.0.0-alpha-29/answerqueue.go#L181) [¶](#StructReturner) ``` type StructReturner struct { // contains filtered or unexported fields } ``` A StructReturner implements Returner by allocating an in-memory message. It is safe to use from multiple goroutines. The zero value is a Returner in its initial state. #### func (*StructReturner) [AllocResults](https://github.com/capnproto/go-capnproto2/blob/v3.0.0-alpha-29/answerqueue.go#L194) [¶](#StructReturner.AllocResults) ``` func (sr *[StructReturner](#StructReturner)) AllocResults(sz [ObjectSize](#ObjectSize)) ([Struct](#Struct), [error](/builtin#error)) ``` #### func (*StructReturner) [Answer](https://github.com/capnproto/go-capnproto2/blob/v3.0.0-alpha-29/answerqueue.go#L265) [¶](#StructReturner.Answer) ``` func (sr *[StructReturner](#StructReturner)) Answer(m [Method](#Method), pcall [PipelineCaller](#PipelineCaller)) (*[Answer](#Answer), [ReleaseFunc](#ReleaseFunc)) ``` Answer returns an Answer that will be resolved when Return is called. Answer must only be called once per StructReturner. 
#### func (*StructReturner) [PrepareReturn](https://github.com/capnproto/go-capnproto2/blob/v3.0.0-alpha-29/answerqueue.go#L210) [¶](#StructReturner.PrepareReturn) ``` func (sr *[StructReturner](#StructReturner)) PrepareReturn(e [error](/builtin#error)) ``` #### func (*StructReturner) [ReleaseResults](https://github.com/capnproto/go-capnproto2/blob/v3.0.0-alpha-29/answerqueue.go#L238) [¶](#StructReturner.ReleaseResults) ``` func (sr *[StructReturner](#StructReturner)) ReleaseResults() ``` #### func (*StructReturner) [Return](https://github.com/capnproto/go-capnproto2/blob/v3.0.0-alpha-29/answerqueue.go#L216) [¶](#StructReturner.Return) ``` func (sr *[StructReturner](#StructReturner)) Return() ``` #### type [TextList](https://github.com/capnproto/go-capnproto2/blob/v3.0.0-alpha-29/list.go#L374) [¶](#TextList) ``` type TextList [List](#List) ``` TextList is an array of pointers to strings. #### func [NewTextList](https://github.com/capnproto/go-capnproto2/blob/v3.0.0-alpha-29/list.go#L379) [¶](#NewTextList) ``` func NewTextList(s *[Segment](#Segment), n [int32](/builtin#int32)) ([TextList](#TextList), [error](/builtin#error)) ``` NewTextList allocates a new list of text pointers, preferring placement in s. #### func (TextList) [At](https://github.com/capnproto/go-capnproto2/blob/v3.0.0-alpha-29/list.go#L388) [¶](#TextList.At) ``` func (l [TextList](#TextList)) At(i [int](/builtin#int)) ([string](/builtin#string), [error](/builtin#error)) ``` At returns the i'th string in the list. #### func (TextList) [BytesAt](https://github.com/capnproto/go-capnproto2/blob/v3.0.0-alpha-29/list.go#L402) [¶](#TextList.BytesAt) ``` func (l [TextList](#TextList)) BytesAt(i [int](/builtin#int)) ([][byte](/builtin#byte), [error](/builtin#error)) ``` BytesAt returns the i'th element in the list as a byte slice. The underlying array of the slice is the segment data. 
#### func (TextList) [DecodeFromPtr](https://github.com/capnproto/go-capnproto2/blob/v3.0.0-alpha-29/list-gen.go#L145) [¶](#TextList.DecodeFromPtr) ``` func ([TextList](#TextList)) DecodeFromPtr(p [Ptr](#Ptr)) [TextList](#TextList) ``` #### func (TextList) [EncodeAsPtr](https://github.com/capnproto/go-capnproto2/blob/v3.0.0-alpha-29/list-gen.go#L141) [¶](#TextList.EncodeAsPtr) ``` func (l [TextList](#TextList)) EncodeAsPtr(seg *[Segment](#Segment)) [Ptr](#Ptr) ``` #### func (TextList) [IsValid](https://github.com/capnproto/go-capnproto2/blob/v3.0.0-alpha-29/list-gen.go#L133) [¶](#TextList.IsValid) ``` func (l [TextList](#TextList)) IsValid() [bool](/builtin#bool) ``` #### func (TextList) [Len](https://github.com/capnproto/go-capnproto2/blob/v3.0.0-alpha-29/list-gen.go#L137) [¶](#TextList.Len) ``` func (l [TextList](#TextList)) Len() [int](/builtin#int) ``` #### func (TextList) [Message](https://github.com/capnproto/go-capnproto2/blob/v3.0.0-alpha-29/list-gen.go#L149) [¶](#TextList.Message) ``` func (l [TextList](#TextList)) Message() *[Message](#Message) ``` #### func (TextList) [Segment](https://github.com/capnproto/go-capnproto2/blob/v3.0.0-alpha-29/list-gen.go#L153) [¶](#TextList.Segment) ``` func (l [TextList](#TextList)) Segment() *[Segment](#Segment) ``` #### func (TextList) [Set](https://github.com/capnproto/go-capnproto2/blob/v3.0.0-alpha-29/list.go#L415) [¶](#TextList.Set) ``` func (l [TextList](#TextList)) Set(i [int](/builtin#int), v [string](/builtin#string)) [error](/builtin#error) ``` Set sets the i'th string in the list to v. #### func (TextList) [String](https://github.com/capnproto/go-capnproto2/blob/v3.0.0-alpha-29/list.go#L431) [¶](#TextList.String) ``` func (l [TextList](#TextList)) String() [string](/builtin#string) ``` String returns the list in Cap'n Proto schema format (e.g. `["foo", "bar"]`). 
#### func (TextList) [ToPtr](https://github.com/capnproto/go-capnproto2/blob/v3.0.0-alpha-29/list-gen.go#L157) [¶](#TextList.ToPtr) ``` func (l [TextList](#TextList)) ToPtr() [Ptr](#Ptr) ``` #### type [TypeParam](https://github.com/capnproto/go-capnproto2/blob/v3.0.0-alpha-29/typeparam.go#L7) [¶](#TypeParam) ``` type TypeParam[T [any](/builtin#any)] interface { // Convert the receiver to a Ptr, storing it in seg if it is not // already associated with some message (only true for Clients and // wrappers around them). EncodeAsPtr(seg *[Segment](#Segment)) [Ptr](#Ptr) // Decode the pointer as the type of the receiver. Generally, // the receiver will be the zero value for the type. DecodeFromPtr(p [Ptr](#Ptr)) T } ``` The TypeParam interface must be satisfied by a type to be used as a capnproto type parameter. This is satisfied by all capnproto pointer types. T should be instantiated by the type itself; in type parameter lists you will typically see parameter/constraints like T TypeParam[T]. #### type [UInt16List](https://github.com/capnproto/go-capnproto2/blob/v3.0.0-alpha-29/list.go#L673) [¶](#UInt16List) ``` type UInt16List [List](#List) ``` A UInt16List is an array of UInt16 values. #### func [NewUInt16List](https://github.com/capnproto/go-capnproto2/blob/v3.0.0-alpha-29/list.go#L678) [¶](#NewUInt16List) ``` func NewUInt16List(s *[Segment](#Segment), n [int32](/builtin#int32)) ([UInt16List](#UInt16List), [error](/builtin#error)) ``` NewUInt16List creates a new list of UInt16, preferring placement in s. #### func (UInt16List) [At](https://github.com/capnproto/go-capnproto2/blob/v3.0.0-alpha-29/list.go#L687) [¶](#UInt16List.At) ``` func (l [UInt16List](#UInt16List)) At(i [int](/builtin#int)) [uint16](/builtin#uint16) ``` At returns the i'th element. 
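The EncodeAsPtr/DecodeFromPtr pairs documented throughout this page are what the TypeParam interface captures. To illustrate the `T TypeParam[T]` constraint shape without pulling in the real package, here is a toy mock-up — `Segment`, `Ptr`, and `Text` below are local stand-ins for the capnp types, not the library itself:

```go
package main

import "fmt"

// Toy stand-ins for capnp.Segment and capnp.Ptr, just to show the shape.
type Segment struct{}
type Ptr struct{ v any }

// Mirrors capnp.TypeParam[T]: a type that can encode itself to a generic
// pointer and decode a generic pointer back into its own concrete type.
type TypeParam[T any] interface {
	EncodeAsPtr(seg *Segment) Ptr
	DecodeFromPtr(p Ptr) T
}

// roundTrip shows the typical constraint form `T TypeParam[T]`: the zero
// value of T is used as the receiver for DecodeFromPtr, as the docs note.
func roundTrip[T TypeParam[T]](v T, seg *Segment) T {
	var zero T
	return zero.DecodeFromPtr(v.EncodeAsPtr(seg))
}

// Text is a toy pointer type satisfying TypeParam[Text].
type Text string

func (t Text) EncodeAsPtr(*Segment) Ptr { return Ptr{v: t} }
func (Text) DecodeFromPtr(p Ptr) Text   { return p.v.(Text) }

func main() {
	fmt.Println(roundTrip(Text("hello"), nil))
}
```

The real library's pointer types (Struct, TextList, the numeric lists, and so on) satisfy this interface with their Segment- and Ptr-based implementations shown above.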
#### func (UInt16List) [DecodeFromPtr](https://github.com/capnproto/go-capnproto2/blob/v3.0.0-alpha-29/list-gen.go#L433) [¶](#UInt16List.DecodeFromPtr) ``` func ([UInt16List](#UInt16List)) DecodeFromPtr(p [Ptr](#Ptr)) [UInt16List](#UInt16List) ``` #### func (UInt16List) [EncodeAsPtr](https://github.com/capnproto/go-capnproto2/blob/v3.0.0-alpha-29/list-gen.go#L429) [¶](#UInt16List.EncodeAsPtr) ``` func (l [UInt16List](#UInt16List)) EncodeAsPtr(seg *[Segment](#Segment)) [Ptr](#Ptr) ``` #### func (UInt16List) [IsValid](https://github.com/capnproto/go-capnproto2/blob/v3.0.0-alpha-29/list-gen.go#L421) [¶](#UInt16List.IsValid) ``` func (l [UInt16List](#UInt16List)) IsValid() [bool](/builtin#bool) ``` #### func (UInt16List) [Len](https://github.com/capnproto/go-capnproto2/blob/v3.0.0-alpha-29/list-gen.go#L425) [¶](#UInt16List.Len) ``` func (l [UInt16List](#UInt16List)) Len() [int](/builtin#int) ``` #### func (UInt16List) [Message](https://github.com/capnproto/go-capnproto2/blob/v3.0.0-alpha-29/list-gen.go#L437) [¶](#UInt16List.Message) ``` func (l [UInt16List](#UInt16List)) Message() *[Message](#Message) ``` #### func (UInt16List) [Segment](https://github.com/capnproto/go-capnproto2/blob/v3.0.0-alpha-29/list-gen.go#L441) [¶](#UInt16List.Segment) ``` func (l [UInt16List](#UInt16List)) Segment() *[Segment](#Segment) ``` #### func (UInt16List) [Set](https://github.com/capnproto/go-capnproto2/blob/v3.0.0-alpha-29/list.go#L696) [¶](#UInt16List.Set) ``` func (l [UInt16List](#UInt16List)) Set(i [int](/builtin#int), v [uint16](/builtin#uint16)) ``` Set sets the i'th element to v. #### func (UInt16List) [String](https://github.com/capnproto/go-capnproto2/blob/v3.0.0-alpha-29/list.go#L705) [¶](#UInt16List.String) ``` func (l [UInt16List](#UInt16List)) String() [string](/builtin#string) ``` String returns the list in Cap'n Proto schema format (e.g. "[1, 2, 3]"). 
#### func (UInt16List) [ToPtr](https://github.com/capnproto/go-capnproto2/blob/v3.0.0-alpha-29/list-gen.go#L445) [¶](#UInt16List.ToPtr) ``` func (l [UInt16List](#UInt16List)) ToPtr() [Ptr](#Ptr) ``` #### type [UInt32List](https://github.com/capnproto/go-capnproto2/blob/v3.0.0-alpha-29/list.go#L765) [¶](#UInt32List) ``` type UInt32List [List](#List) ``` UInt32List is an array of UInt32 values. #### func [NewUInt32List](https://github.com/capnproto/go-capnproto2/blob/v3.0.0-alpha-29/list.go#L770) [¶](#NewUInt32List) ``` func NewUInt32List(s *[Segment](#Segment), n [int32](/builtin#int32)) ([UInt32List](#UInt32List), [error](/builtin#error)) ``` NewUInt32List creates a new list of UInt32, preferring placement in s. #### func (UInt32List) [At](https://github.com/capnproto/go-capnproto2/blob/v3.0.0-alpha-29/list.go#L779) [¶](#UInt32List.At) ``` func (l [UInt32List](#UInt32List)) At(i [int](/builtin#int)) [uint32](/builtin#uint32) ``` At returns the i'th element. #### func (UInt32List) [DecodeFromPtr](https://github.com/capnproto/go-capnproto2/blob/v3.0.0-alpha-29/list-gen.go#L497) [¶](#UInt32List.DecodeFromPtr) ``` func ([UInt32List](#UInt32List)) DecodeFromPtr(p [Ptr](#Ptr)) [UInt32List](#UInt32List) ``` #### func (UInt32List) [EncodeAsPtr](https://github.com/capnproto/go-capnproto2/blob/v3.0.0-alpha-29/list-gen.go#L493) [¶](#UInt32List.EncodeAsPtr) ``` func (l [UInt32List](#UInt32List)) EncodeAsPtr(seg *[Segment](#Segment)) [Ptr](#Ptr) ``` #### func (UInt32List) [IsValid](https://github.com/capnproto/go-capnproto2/blob/v3.0.0-alpha-29/list-gen.go#L485) [¶](#UInt32List.IsValid) ``` func (l [UInt32List](#UInt32List)) IsValid() [bool](/builtin#bool) ``` #### func (UInt32List) [Len](https://github.com/capnproto/go-capnproto2/blob/v3.0.0-alpha-29/list-gen.go#L489) [¶](#UInt32List.Len) ``` func (l [UInt32List](#UInt32List)) Len() [int](/builtin#int) ``` #### func (UInt32List) [Message](https://github.com/capnproto/go-capnproto2/blob/v3.0.0-alpha-29/list-gen.go#L501) 
[¶](#UInt32List.Message) ``` func (l [UInt32List](#UInt32List)) Message() *[Message](#Message) ``` #### func (UInt32List) [Segment](https://github.com/capnproto/go-capnproto2/blob/v3.0.0-alpha-29/list-gen.go#L505) [¶](#UInt32List.Segment) ``` func (l [UInt32List](#UInt32List)) Segment() *[Segment](#Segment) ``` #### func (UInt32List) [Set](https://github.com/capnproto/go-capnproto2/blob/v3.0.0-alpha-29/list.go#L788) [¶](#UInt32List.Set) ``` func (l [UInt32List](#UInt32List)) Set(i [int](/builtin#int), v [uint32](/builtin#uint32)) ``` Set sets the i'th element to v. #### func (UInt32List) [String](https://github.com/capnproto/go-capnproto2/blob/v3.0.0-alpha-29/list.go#L797) [¶](#UInt32List.String) ``` func (l [UInt32List](#UInt32List)) String() [string](/builtin#string) ``` String returns the list in Cap'n Proto schema format (e.g. "[1, 2, 3]"). #### func (UInt32List) [ToPtr](https://github.com/capnproto/go-capnproto2/blob/v3.0.0-alpha-29/list-gen.go#L509) [¶](#UInt32List.ToPtr) ``` func (l [UInt32List](#UInt32List)) ToPtr() [Ptr](#Ptr) ``` #### type [UInt64List](https://github.com/capnproto/go-capnproto2/blob/v3.0.0-alpha-29/list.go#L857) [¶](#UInt64List) ``` type UInt64List [List](#List) ``` UInt64List is an array of UInt64 values. #### func [NewUInt64List](https://github.com/capnproto/go-capnproto2/blob/v3.0.0-alpha-29/list.go#L862) [¶](#NewUInt64List) ``` func NewUInt64List(s *[Segment](#Segment), n [int32](/builtin#int32)) ([UInt64List](#UInt64List), [error](/builtin#error)) ``` NewUInt64List creates a new list of UInt64, preferring placement in s. #### func (UInt64List) [At](https://github.com/capnproto/go-capnproto2/blob/v3.0.0-alpha-29/list.go#L871) [¶](#UInt64List.At) ``` func (l [UInt64List](#UInt64List)) At(i [int](/builtin#int)) [uint64](/builtin#uint64) ``` At returns the i'th element. 
#### func (UInt64List) [DecodeFromPtr](https://github.com/capnproto/go-capnproto2/blob/v3.0.0-alpha-29/list-gen.go#L561) [¶](#UInt64List.DecodeFromPtr) ``` func ([UInt64List](#UInt64List)) DecodeFromPtr(p [Ptr](#Ptr)) [UInt64List](#UInt64List) ``` #### func (UInt64List) [EncodeAsPtr](https://github.com/capnproto/go-capnproto2/blob/v3.0.0-alpha-29/list-gen.go#L557) [¶](#UInt64List.EncodeAsPtr) ``` func (l [UInt64List](#UInt64List)) EncodeAsPtr(seg *[Segment](#Segment)) [Ptr](#Ptr) ``` #### func (UInt64List) [IsValid](https://github.com/capnproto/go-capnproto2/blob/v3.0.0-alpha-29/list-gen.go#L549) [¶](#UInt64List.IsValid) ``` func (l [UInt64List](#UInt64List)) IsValid() [bool](/builtin#bool) ``` #### func (UInt64List) [Len](https://github.com/capnproto/go-capnproto2/blob/v3.0.0-alpha-29/list-gen.go#L553) [¶](#UInt64List.Len) ``` func (l [UInt64List](#UInt64List)) Len() [int](/builtin#int) ``` #### func (UInt64List) [Message](https://github.com/capnproto/go-capnproto2/blob/v3.0.0-alpha-29/list-gen.go#L565) [¶](#UInt64List.Message) ``` func (l [UInt64List](#UInt64List)) Message() *[Message](#Message) ``` #### func (UInt64List) [Segment](https://github.com/capnproto/go-capnproto2/blob/v3.0.0-alpha-29/list-gen.go#L569) [¶](#UInt64List.Segment) ``` func (l [UInt64List](#UInt64List)) Segment() *[Segment](#Segment) ``` #### func (UInt64List) [Set](https://github.com/capnproto/go-capnproto2/blob/v3.0.0-alpha-29/list.go#L880) [¶](#UInt64List.Set) ``` func (l [UInt64List](#UInt64List)) Set(i [int](/builtin#int), v [uint64](/builtin#uint64)) ``` Set sets the i'th element to v. #### func (UInt64List) [String](https://github.com/capnproto/go-capnproto2/blob/v3.0.0-alpha-29/list.go#L889) [¶](#UInt64List.String) ``` func (l [UInt64List](#UInt64List)) String() [string](/builtin#string) ``` String returns the list in Cap'n Proto schema format (e.g. "[1, 2, 3]"). 
#### func (UInt64List) [ToPtr](https://github.com/capnproto/go-capnproto2/blob/v3.0.0-alpha-29/list-gen.go#L573) [¶](#UInt64List.ToPtr) ``` func (l [UInt64List](#UInt64List)) ToPtr() [Ptr](#Ptr) ``` #### type [UInt8List](https://github.com/capnproto/go-capnproto2/blob/v3.0.0-alpha-29/list.go#L544) [¶](#UInt8List) ``` type UInt8List [List](#List) ``` A UInt8List is an array of UInt8 values. #### func [NewData](https://github.com/capnproto/go-capnproto2/blob/v3.0.0-alpha-29/list.go#L580) [¶](#NewData) ``` func NewData(s *[Segment](#Segment), v [][byte](/builtin#byte)) ([UInt8List](#UInt8List), [error](/builtin#error)) ``` NewData creates a new list of UInt8 from a byte slice. #### func [NewText](https://github.com/capnproto/go-capnproto2/blob/v3.0.0-alpha-29/list.go#L558) [¶](#NewText) ``` func NewText(s *[Segment](#Segment), v [string](/builtin#string)) ([UInt8List](#UInt8List), [error](/builtin#error)) ``` NewText creates a new list of UInt8 from a string. #### func [NewTextFromBytes](https://github.com/capnproto/go-capnproto2/blob/v3.0.0-alpha-29/list.go#L569) [¶](#NewTextFromBytes) ``` func NewTextFromBytes(s *[Segment](#Segment), v [][byte](/builtin#byte)) ([UInt8List](#UInt8List), [error](/builtin#error)) ``` NewTextFromBytes creates a NUL-terminated list of UInt8 from a byte slice. #### func [NewUInt8List](https://github.com/capnproto/go-capnproto2/blob/v3.0.0-alpha-29/list.go#L549) [¶](#NewUInt8List) ``` func NewUInt8List(s *[Segment](#Segment), n [int32](/builtin#int32)) ([UInt8List](#UInt8List), [error](/builtin#error)) ``` NewUInt8List creates a new list of UInt8, preferring placement in s. #### func (UInt8List) [At](https://github.com/capnproto/go-capnproto2/blob/v3.0.0-alpha-29/list.go#L595) [¶](#UInt8List.At) ``` func (l [UInt8List](#UInt8List)) At(i [int](/builtin#int)) [uint8](/builtin#uint8) ``` At returns the i'th element. 
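NewText and NewTextFromBytes store text as UTF-8 bytes followed by a trailing NUL, so the resulting UInt8List is one element longer than the input. A sketch of that layout (illustration only, not the library's allocator):

```go
package main

import "fmt"

// textBytes sketches how Cap'n Proto lays out Text: the UTF-8 bytes of v
// plus one trailing NUL byte, giving a list of len(v)+1 elements.
func textBytes(v string) []byte {
	b := make([]byte, len(v)+1)
	copy(b, v)
	return b // b[len(v)] == 0, the NUL terminator
}

func main() {
	b := textBytes("hi")
	fmt.Println(len(b), b[len(b)-1])
}
```

This is also why TextList.BytesAt can hand back a slice of the segment data without the terminator included in normal string reads.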
#### func (UInt8List) [DecodeFromPtr](https://github.com/capnproto/go-capnproto2/blob/v3.0.0-alpha-29/list-gen.go#L369) [¶](#UInt8List.DecodeFromPtr) ``` func ([UInt8List](#UInt8List)) DecodeFromPtr(p [Ptr](#Ptr)) [UInt8List](#UInt8List) ``` #### func (UInt8List) [EncodeAsPtr](https://github.com/capnproto/go-capnproto2/blob/v3.0.0-alpha-29/list-gen.go#L365) [¶](#UInt8List.EncodeAsPtr) ``` func (l [UInt8List](#UInt8List)) EncodeAsPtr(seg *[Segment](#Segment)) [Ptr](#Ptr) ``` #### func (UInt8List) [IsValid](https://github.com/capnproto/go-capnproto2/blob/v3.0.0-alpha-29/list-gen.go#L357) [¶](#UInt8List.IsValid) ``` func (l [UInt8List](#UInt8List)) IsValid() [bool](/builtin#bool) ``` #### func (UInt8List) [Len](https://github.com/capnproto/go-capnproto2/blob/v3.0.0-alpha-29/list-gen.go#L361) [¶](#UInt8List.Len) ``` func (l [UInt8List](#UInt8List)) Len() [int](/builtin#int) ``` #### func (UInt8List) [Message](https://github.com/capnproto/go-capnproto2/blob/v3.0.0-alpha-29/list-gen.go#L373) [¶](#UInt8List.Message) ``` func (l [UInt8List](#UInt8List)) Message() *[Message](#Message) ``` #### func (UInt8List) [Segment](https://github.com/capnproto/go-capnproto2/blob/v3.0.0-alpha-29/list-gen.go#L377) [¶](#UInt8List.Segment) ``` func (l [UInt8List](#UInt8List)) Segment() *[Segment](#Segment) ``` #### func (UInt8List) [Set](https://github.com/capnproto/go-capnproto2/blob/v3.0.0-alpha-29/list.go#L604) [¶](#UInt8List.Set) ``` func (l [UInt8List](#UInt8List)) Set(i [int](/builtin#int), v [uint8](/builtin#uint8)) ``` Set sets the i'th element to v. #### func (UInt8List) [String](https://github.com/capnproto/go-capnproto2/blob/v3.0.0-alpha-29/list.go#L613) [¶](#UInt8List.String) ``` func (l [UInt8List](#UInt8List)) String() [string](/builtin#string) ``` String returns the list in Cap'n Proto schema format (e.g. "[1, 2, 3]"). 
#### func (UInt8List) [ToPtr](https://github.com/capnproto/go-capnproto2/blob/v3.0.0-alpha-29/list-gen.go#L381) [¶](#UInt8List.ToPtr) ``` func (l [UInt8List](#UInt8List)) ToPtr() [Ptr](#Ptr) ``` #### type [VoidList](https://github.com/capnproto/go-capnproto2/blob/v3.0.0-alpha-29/list.go#L512) [¶](#VoidList) ``` type VoidList [List](#List) ``` A VoidList is a list of zero-sized elements. #### func [NewVoidList](https://github.com/capnproto/go-capnproto2/blob/v3.0.0-alpha-29/list.go#L518) [¶](#NewVoidList) ``` func NewVoidList(s *[Segment](#Segment), n [int32](/builtin#int32)) [VoidList](#VoidList) ``` NewVoidList creates a list of voids. No allocation is performed; s is only used for Segment()'s return value. #### func (VoidList) [DecodeFromPtr](https://github.com/capnproto/go-capnproto2/blob/v3.0.0-alpha-29/list-gen.go#L17) [¶](#VoidList.DecodeFromPtr) ``` func ([VoidList](#VoidList)) DecodeFromPtr(p [Ptr](#Ptr)) [VoidList](#VoidList) ``` #### func (VoidList) [EncodeAsPtr](https://github.com/capnproto/go-capnproto2/blob/v3.0.0-alpha-29/list-gen.go#L13) [¶](#VoidList.EncodeAsPtr) ``` func (l [VoidList](#VoidList)) EncodeAsPtr(seg *[Segment](#Segment)) [Ptr](#Ptr) ``` #### func (VoidList) [IsValid](https://github.com/capnproto/go-capnproto2/blob/v3.0.0-alpha-29/list-gen.go#L5) [¶](#VoidList.IsValid) ``` func (l [VoidList](#VoidList)) IsValid() [bool](/builtin#bool) ``` #### func (VoidList) [Len](https://github.com/capnproto/go-capnproto2/blob/v3.0.0-alpha-29/list-gen.go#L9) [¶](#VoidList.Len) ``` func (l [VoidList](#VoidList)) Len() [int](/builtin#int) ``` #### func (VoidList) [Message](https://github.com/capnproto/go-capnproto2/blob/v3.0.0-alpha-29/list-gen.go#L21) [¶](#VoidList.Message) ``` func (l [VoidList](#VoidList)) Message() *[Message](#Message) ``` #### func (VoidList) [Segment](https://github.com/capnproto/go-capnproto2/blob/v3.0.0-alpha-29/list-gen.go#L25) [¶](#VoidList.Segment) ``` func (l [VoidList](#VoidList)) Segment() *[Segment](#Segment) ``` #### 
func (VoidList) [String](https://github.com/capnproto/go-capnproto2/blob/v3.0.0-alpha-29/list.go#L530) [¶](#VoidList.String) ``` func (l [VoidList](#VoidList)) String() [string](/builtin#string) ``` String returns the list in Cap'n Proto schema format (e.g. "[void, void, void]"). #### func (VoidList) [ToPtr](https://github.com/capnproto/go-capnproto2/blob/v3.0.0-alpha-29/list-gen.go#L29) [¶](#VoidList.ToPtr) ``` func (l [VoidList](#VoidList)) ToPtr() [Ptr](#Ptr) ``` #### type [WeakClient](https://github.com/capnproto/go-capnproto2/blob/v3.0.0-alpha-29/capability.go#L786) [¶](#WeakClient) ``` type WeakClient struct { // contains filtered or unexported fields } ``` A WeakClient is a weak reference to a capability: it refers to a capability without preventing it from being shut down. The zero value is a null reference. #### func (*WeakClient) [AddRef](https://github.com/capnproto/go-capnproto2/blob/v3.0.0-alpha-29/capability.go#L792) [¶](#WeakClient.AddRef) ``` func (wc *[WeakClient](#WeakClient)) AddRef() (c [Client](#Client), ok [bool](/builtin#bool)) ``` AddRef creates a new Client that refers to the same capability as c as long as the capability hasn't already been shut down.
# Register

The Covid Act Now API provides access to all of our COVID data tracking US states, counties, and metros. It includes data and metrics for cases, vaccinations, tests, hospitalizations, and deaths. See data definitions for all included data.

The API provides the same data that powers Covid Act Now but in easily digestible CSV or JSON files, intended for consumption by other COVID websites, models, and tools.

## Data endpoints

In the examples below, replace `YOUR_KEY_HERE` with the API key from Step 1 above. The Covid Act Now API has coverage for states, counties, and metro areas (CBSAs). You can query the data in CSV or JSON formats. Learn how to query:

* Current data for all states, counties, and metros
* Historic data for all states, counties, and metros
* Data for an individual state, county, or metro area
* Aggregated US data

Details on the contents of each endpoint are available in the API Reference.

### Current data for all states, counties, or metros

These endpoints contain the latest available data (a single, scalar value per field) for all states, counties, and CBSAs (metros). Each file contains all locations in the region type. For example,

```
https://api.covidactnow.org/v2/counties.csv?apiKey=YOUR_KEY_HERE
```

contains all US counties. Read the API Reference for /v2/states.csv, /v2/counties.csv, /v2/cbsas.csv.

### Historic data for all states, counties, or metros

These files contain the entire history (a timeseries for every field) for all states, counties, and CBSAs (metros). Read the API reference for /v2/states.timeseries.csv, /v2/counties.timeseries.csv, /v2/cbsas.timeseries.csv.

Note: as `/v2/counties.timeseries.json` contains historic data for every county, it is very large. If you are looking for counties in a single state, consider downloading counties for a single state.

### Data for individual locations

It's easy to query a single state, county, or metro area.
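All of the endpoints above share the same URL shape: an `https://api.covidactnow.org/v2/` path plus the required `apiKey` query parameter. A small Go sketch of building such URLs — the `endpointURL` helper is ours for illustration, not part of any official client library:

```go
package main

import (
	"fmt"
	"net/url"
)

// endpointURL builds a Covid Act Now data URL from a path such as
// "counties.csv" or "counties.timeseries.json". Every request must
// carry the apiKey query parameter.
func endpointURL(path, apiKey string) string {
	u := url.URL{
		Scheme:   "https",
		Host:     "api.covidactnow.org",
		Path:     "/v2/" + path,
		RawQuery: url.Values{"apiKey": {apiKey}}.Encode(),
	}
	return u.String()
}

func main() {
	fmt.Println(endpointURL("counties.csv", "YOUR_KEY_HERE"))
}
```

Using net/url (rather than string concatenation) keeps the key properly percent-encoded if it ever contains reserved characters.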
### Data for all counties in a state

You can also query all counties for a single state.

### Aggregated US data

We also calculate country-wide data for the entire US.

## License

Data is licensed under Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International.

We are a volunteer-driven 501(c)3 non-profit whose mission is to provide reliable and trustworthy COVID data, with a focus on U.S. counties. Since we started on March 20th, we've made our model and data open-source and freely available to the public. This data requires a non-trivial amount of work to organize, sanitize, maintain, and develop, especially given the fast-changing nature of COVID and health data in the U.S. In order to ensure that we can continue to deliver best-in-class open-source COVID data, our tiny, volunteer-driven team appreciates donations here.

We also ask that, as legally required by our license, all commercial entities wishing to access our API contact <EMAIL> to acquire a commercial license. We define commercial users as any individual or entity engaged in commercial activities, such as selling goods or services. Our non-commercial users can freely download, use, share, modify, or build upon our source code.

The Covid Act Now API provides historical covid projections updated daily.

## API Key

An API key is required. Register for an API key here.

| Security Scheme Type | API Key |
| --- | --- |
| Query parameter name: | apiKey |

## Single State Summary

Region Summary object for a single state.

## All states summary (json)

## All states summary (csv)

## Single County Summary

Region Summary object for a single county.

## Single County Timeseries

Region Summary with Timeseries for a single county.

## All counties summary (json)

## All counties summary (csv)

## Counties summary in state (json)

## Counties summary in state (csv)

Aggregated data for all core-based statistical areas (CBSA).
CBSAs represent collections of counties that are socioeconomically linked. They are used to represent metropolitan and micropolitan (at least 10,000 people and fewer than 50,000 people) areas. For example, the Seattle-Tacoma-Bellevue, WA CBSA is an aggregation of King County, Pierce County, and Snohomish County. CBSAs are currently in beta and may not contain all metrics or data.

## Single CBSA Summary

Region Summary object for a single CBSA.

## Single CBSA Timeseries

Region Summary with Timeseries for a single CBSA.

## All CBSAs summary (json)

## All CBSAs summary (csv)

## US Summary

Region Summary object for US. Its metrics object contains the fields `overall`, `testPositivityRatio`, `caseDensity`, `contactTracerCapacityRatio`, `infectionRate`, and `icuCapacityRatio`, for example:

```
{
  "overall": 0,
  "testPositivityRatio": 0,
  "caseDensity": 0,
  "contactTracerCapacityRatio": 0,
  "infectionRate": 0,
  "icuCapacityRatio": 0
}
```

# Cases

Read more about the data included in the Covid Act Now API.

## Cases

### Cases

Cumulative confirmed or suspected cases.

* CSV column names: `actuals.cases`
* JSON file fields: `actuals.cases`, `actualsTimeseries.*.cases`

### New Cases

New confirmed or suspected cases. New cases are a processed timeseries of cases - summing new cases may not equal the cumulative case count.

Processing steps:

* If a region does not report cases for a period of time but then begins reporting again, we will exclude the first day that reporting recommences. This first day likely includes multiple days worth of cases and can be misleading to the overall series.
* We remove any days with negative new cases.
* We apply an outlier detection filter to the timeseries, which removes any data points that seem improbable given recent numbers. Many times this is due to backfill of previously unreported cases.

`actualsTimeseries.*.newCases`

### Case Density

The number of cases per 100k population calculated using a 7-day rolling average.
* CSV column names: `metrics.caseDensity`
* JSON file fields: `metrics.caseDensity`, `metricsTimeseries.*.caseDensity`

Infection Rate

R_t, or the estimated number of infections arising from a typical case.

`metricsTimeseries.*.infectionRate`

Infection Rate CI90

90th percentile confidence interval upper endpoint of the infection rate.

`metricsTimeseries.*.infectionRateCI90`

Weekly New Cases Per 100K

The number of new cases per 100k population over the last week.

`metricsTimeseries.*.weeklyNewCasesPer100k`

Tests

Positive Tests

Cumulative positive test results to date.

`actualsTimeseries.*.positiveTests`

Negative Tests

Cumulative negative test results to date.

`actualsTimeseries.*.negativeTests`

Test Positivity Ratio

Ratio of people who test positive, calculated using a 7-day rolling average.

`metricsTimeseries.*.testPositivityRatio`

Test Positivity Ratio Details

Where to access: `metricsTimeseries.*.testPositivityRatioDetails`

Hospitalizations

ICU Beds

Information about ICU bed utilization.

* capacity - Current staffed ICU bed capacity.
* currentUsageTotal - Total number of ICU beds currently in use.
* currentUsageCovid - Number of ICU beds currently in use by COVID patients.

`actualsTimeseries.*.icuBeds`

Hospital Beds

Information about acute bed utilization.

* capacity - Current staffed acute bed capacity.
* currentUsageTotal - Total number of acute beds currently in use.
* currentUsageCovid - Number of acute beds currently in use by COVID patients.
* weeklyCovidAdmissions - Number of COVID patients admitted in the past week.

`actualsTimeseries.*.hospitalBeds`

ICU Capacity Ratio

Ratio of staffed intensive care unit (ICU) beds that are currently in use.

`metricsTimeseries.*.icuCapacityRatio`

Beds With Covid Patients Ratio

Ratio of staffed hospital beds that are currently in use by COVID patients. For counties, this is calculated using HSA-level data for the corresponding area.
For more on HSAs, see https://apidocs.covidactnow.org/data-definitions/#health-service-areas

`metricsTimeseries.*.bedsWithCovidPatientsRatio`

Weekly Covid Admissions Per 100K

Number of COVID patients per 100k population admitted in the past week. For counties, this is calculated using HSA-level data for the corresponding area. For more on HSAs, see https://apidocs.covidactnow.org/data-definitions/#health-service-areas

`metricsTimeseries.*.weeklyCovidAdmissionsPer100k`

Vaccinations

Vaccines Distributed

Number of vaccine doses distributed.

`actualsTimeseries.*.vaccinesDistributed`

Vaccinations Initiated

Number of vaccinations initiated.

`actualsTimeseries.*.vaccinationsInitiated`

Vaccinations Completed

Number of vaccinations completed.

`actualsTimeseries.*.vaccinationsCompleted`

Vaccinations Additional Dose

Number of individuals who are fully vaccinated and have received a booster (or additional) dose.

`actualsTimeseries.*.vaccinationsAdditionalDose`

Vaccinations Fall 2022 Bivalent Booster

Number of individuals who have received a bivalent vaccine dose.

`actualsTimeseries.*.vaccinationsFall2022BivalentBooster`

Vaccines Administered

Total number of vaccine doses administered.

`actualsTimeseries.*.vaccinesAdministered`

Vaccines Administered Demographics

Demographic distributions for administered vaccines.

Vaccinations Initiated Demographics

Demographic distributions for initiated vaccinations.

Vaccinations Initiated Ratio

Ratio of population that has initiated vaccination.

`metricsTimeseries.*.vaccinationsInitiatedRatio`

Vaccinations Completed Ratio

Ratio of population that has completed vaccination.

`metricsTimeseries.*.vaccinationsCompletedRatio`

Vaccinations Additional Dose Ratio

Ratio of population that is fully vaccinated and has received a booster (or additional) dose.
`metricsTimeseries.*.vaccinationsAdditionalDoseRatio`

Vaccinations Fall 2022 Bivalent Booster Ratio

Ratio of population that has received a bivalent vaccine dose.

`metricsTimeseries.*.vaccinationsFall2022BivalentBoosterRatio`

Deaths

Cumulative deaths that are suspected or confirmed to have been caused by COVID-19.

`actualsTimeseries.*.deaths`

Community Levels

Community level for the region. See https://www.cdc.gov/coronavirus/2019-ncov/science/community-levels.html for details about how the Community Level is calculated and should be interpreted. The values correspond to the following levels:

| API value | Community Level |
| --- | --- |
| 0 | Low |
| 1 | Medium |
| 2 | High |

Note that we provide two versions of the Community Level. One is called `canCommunityLevel`, which is calculated using CAN's data sources and is available for states, counties, and metros. It is updated daily, though it depends on hospital data which may only update weekly for counties. The other is called `cdcCommunityLevel` and is the raw Community Level published by the CDC. It is only available for counties and is updated on a weekly basis.

Where to access:

* `communityLevelsTimeseries.*.canCommunityLevel`
* `communityLevelsTimeseries.*.cdcCommunityLevel`

Health Service Areas

A Health Service Area (HSA) is a collection of one or more contiguous counties which are relatively self-contained with respect to hospital care. HSAs are used when calculating county-level hospital metrics in order to correct for instances where an individual county does not have any, or has few, healthcare facilities within its own borders. For more information see https://seer.cancer.gov/seerstat/variables/countyattribs/hsa.html.

The source for our county-to-HSA mappings is `cdc_hsa_mapping.csv`, which follows the HSA definitions used by the CDC in their COVID-19 Community Levels. HSA populations are calculated as the sum of the component county populations.
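The integer-to-label mapping in the Community Level table above can be sketched in Go. This is a minimal illustration; the helper name is ours, not part of the API:

```go
package main

import "fmt"

// communityLevelName maps the API's integer community level to its
// label, per the table above. Values outside 0-2 are not defined by
// the API, so we report them as "undefined".
func communityLevelName(level int) string {
	switch level {
	case 0:
		return "Low"
	case 1:
		return "Medium"
	case 2:
		return "High"
	default:
		return "undefined"
	}
}

func main() {
	fmt.Println(communityLevelName(2)) // High
}
```

The same mapping applies to both `canCommunityLevel` and `cdcCommunityLevel`, since they share the 0-2 encoding.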
Updates to the API will be reflected here.

Sunsetting Demographic Vaccine Data

2023-01-11

Unlike the other data in our API, county-level demographic vaccine data is not actively quality assured, and as such, we cannot guarantee that it is of high quality. We will continue to provide this data until 02/15/2023, at which point we will remove it from the API. Please reach out to <EMAIL> with any concerns or questions.

CDC Community Level data now available

Added on 2022-04-05

The CDC Community Level metric, as well as its subcomponents, is now available within the Covid Act Now API. Learn more about how the CDC Community Level is measured.

* : The raw CDC Community Level metric. It serves as an exact mirror of the CDC's published data, which is available for counties only and typically updates once a week.
* : The Covid Act Now team's version of the CDC Community Level metric. It uses the same methodology as the CDC Community Level, but it is available for states and metros in addition to counties. It also uses different data sources in some cases (New York Times for cases, HHS for state hospitalization data). It updates daily, though county hospitalization data only updates once a week.
* : The number of new COVID cases per week per 100K population.
* : The number of new COVID hospital admissions per week per 100K population. For counties this is calculated at the Health Service Area level.
* : The ratio of staffed inpatient beds that are occupied by COVID patients. For counties this is calculated at the Health Service Area level.
* `actuals.hospitalBeds.weeklyCovidAdmissions`: The number of new COVID hospital admissions per week.
* `actuals.hsaHospitalBeds.weeklyCovidAdmissions`: The number of new COVID hospital admissions per week, measured at the Health Service Area level.
* `hsaName`: The name of the Health Service Area.
* `hsaPopulation`: The population of the Health Service Area.
* `hsaHospitalBeds`: This is a mirror of the existing hospitalBeds field, but measured at the Health Service Area level.
* `hsaIcuBeds`: This is a mirror of the existing icuBeds field, but measured at the Health Service Area level.

Vaccine Booster data now available

Added on 2022-01-13

Vaccine booster shot (or additional dose) data is now available within the Covid Act Now API.

* : Number of individuals who are fully vaccinated and have received a booster (or additional) dose.
* : Ratio of population that are fully vaccinated and have received a booster (or additional) dose.

ICU Headroom and Typical Usage Rate removed

Added on 2021-09-16

The following deprecated fields have been removed from the API: `icuHeadroomRatio`, `icuHeadroomDetails`, and `typicalUsageRate`. Consider using `icuCapacityRatio` instead.

CDC Community Transmission Levels

Added on 2021-07-30

We now expose a CDC Community Transmission Level in the API. The CDC community transmission levels are similar to the Covid Act Now risk levels, but have slightly different thresholds. See definitions of CDC community transmission levels for more details. We calculate the level using the CDC's thresholds and expose it in the field `cdcTransmissionLevel` in all API responses. The values correspond to the following levels:

| API value | CDC community transmission level |
| --- | --- |
| 0 | Low |
| 1 | Moderate |
| 2 | Substantial |
| 3 | High |
| 4 | Unknown |

Note that the value may differ from what the CDC website reports, given we have different data sources. We have also introduced an "Unknown" level for when both case data and test positivity data are missing for at least 15 days. The CDC does not have an "Unknown" level and instead will designate a location as "Low" when case and test positivity data are missing.

Aggregated US data

Added on 2021-03-31

You can now query US aggregated data.
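The `cdcTransmissionLevel` table above can be decoded the same way. A minimal Go sketch (the helper name is ours, not part of the API):

```go
package main

import "fmt"

// transmissionLevelName maps the cdcTransmissionLevel integer to its
// label, per the table above. Note that 4 means "Unknown", a level the
// CDC itself does not publish.
func transmissionLevelName(level int) string {
	names := map[int]string{
		0: "Low",
		1: "Moderate",
		2: "Substantial",
		3: "High",
		4: "Unknown",
	}
	if name, ok := names[level]; ok {
		return name
	}
	return "undefined"
}

func main() {
	fmt.Println(transmissionLevelName(4)) // Unknown
}
```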
Vaccine demographic data

Added on 2021-03-25

We are now starting to include vaccine demographic data; currently we are collecting two fields: and . While we currently only have county-level data in TX and PA (as of 3/25), we are working on adding more regions to provide the most complete vaccine demographic data. Note that demographic buckets may differ by jurisdiction, as different states may collect and bucket demographic data in different ways. We surface the buckets as collected for now, but this may change in the future.

Vaccines administered

Added on 2021-03-23

Added to the API. This represents the total number of doses administered for a region.

New deaths column

Added on 2021-03-08

Added `actuals.newDeaths` and `actualsTimeseries.*.newDeaths` to the API. The processing is similar to `actuals.newCases` - `newDeaths` represents new deaths since the previous report, with erratic values removed by outlier detection.

Field level annotations

Added on 2021-01-19

The Annotations field has a FieldAnnotations entry for each field in `Actuals`. You can now access the data source(s) used to produce a field and a list of dates where an anomalous observation was removed. The exact structure of the `AnomalyAnnotation` may be modified in the future.

Vaccine data now available

Added on 2021-01-14

Vaccine data is now available within the Covid Act Now API. Currently the data is available for states only, but county-level vaccination data is coming soon.

* `vaccinesDistributed`: Total number of vaccine doses distributed.
* `vaccinationsInitiated`: Total number of people initiating vaccination. For a vaccine with a 2-dose regimen, this represents the first dose.
* `vaccinationsCompleted`: Total number of people completing vaccination - currently those completing their second shot.
* `vaccinationsInitiatedRatio`: Ratio of population that has initiated vaccination.
* `vaccinationsCompletedRatio`: Ratio of population that has completed vaccination.
You can access these fields in both the `actuals` field and `actualsTimeseries` fields.

View entire timeseries of risk levels for all regions

Added on 2020-12-22

You can now view the history of a region's overall risk level in all timeseries endpoints under the key `riskLevelsTimeseries`.

Overall risk level now based on 3 key metrics

Added on 2020-12-22

The overall risk level is now based on `caseDensity`, `testPositivityRatio`, and `infectionRate`. Learn more about the changes we made. We will be continuing to calculate all metrics and have no plans of removing `contactTracingCapacityRatio`, `icuCapacityRatio`, or `icuHeadroomRatio` at this time.

Link to Covid Act Now website

Added on 2020-12-03

Each region now includes a field `url` that points to the Covid Act Now location page for that region.

Upcoming risk level update

Added on 2020-12-01

We modified our risk levels to include a 5th level on covidactnow.org for locations experiencing a severe outbreak. On Monday, December 7th, we will be updating the risk levels in the API to reflect this. If you have any code depending on 4 risk levels, you will need to update it to include the fifth risk level. This change directly affects the fields `riskLevels.overall` and `riskLevels.caseDensity`. If you would like to include the new risk level right now to reflect what is currently on covidactnow.org, you can do so by classifying both `overall` and `caseDensity` risk as extreme for any location where `actuals.casesDensity > 75`.

Query by CBSA

Added on 2020-11-09

We added core-based statistical area endpoints to the API. You can now view results aggregated by CBSA. Read the CBSA API Documentation to learn more.

Increase county test positivity coverage

Added on 2020-11-04

We increased our test positivity coverage for counties by adding in data from the U.S. Department of Health & Human Services, as aggregated by the Centers for Medicare & Medicaid Services. See our data sources for more information.
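The workaround suggested in the risk-level update above - treating `actuals.casesDensity > 75` as the new fifth level - can be sketched as follows. This assumes the fifth ("severe outbreak") level is encoded as 4 on top of the existing 0-3 levels; the function name is ours, for illustration only:

```go
package main

import "fmt"

// riskLevel returns the risk level to use for a location, applying the
// casesDensity > 75 rule above to promote a location to the assumed
// fifth ("severe outbreak") level, encoded here as 4.
func riskLevel(existingLevel int, caseDensity float64) int {
	if caseDensity > 75 {
		return 4 // severe outbreak
	}
	return existingLevel
}

func main() {
	fmt.Println(riskLevel(3, 80.2)) // exceeds 75, so promoted to severe
	fmt.Println(riskLevel(2, 10.0)) // unchanged
}
```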
Add new cases field with outlier detection

Added on 2020-10-29

In addition to cumulative case counts, we added a `New Cases` field to all `actuals` and `actualsTimeseries` values. The `New Cases` field computes new cases and applies outlier detection to remove erratic case values.

Add `locationId` field

Added on 2020-10-27

Adds a generic location ID used to represent regions. This will allow for greater flexibility as we add more aggregation levels (such as country).

Add `riskLevels`

Added on 2020-10-15

Added risk levels, as seen on covidactnow.org, to the API.

# Covid Act Now API Overview

On March 7th, The Covid Tracking Project (CTP) will be winding down their daily updates. Throughout the pandemic they have provided an amazing service and resource with their daily data collection and in-depth reporting. For those looking for a replacement, the Covid Act Now API can be used to serve many of the same use cases.

The Covid Act Now API provides access to comprehensive COVID data - both current and historical. The data is available for all US states, counties, and metros, and is aggregated from a number of official sources, quality assured, and updated daily. Our API powers all data available on covidactnow.org, including daily updates to cases, hospitalization, testing, and vaccination data. It includes raw data, risk levels, and calculated metrics to help you understand COVID spread for a location. In addition to state data, we also provide county and metro data where available.

In general, the Covid Act Now API provides much of the same data as The Covid Tracking Project. However, there are some differences:

County and metro data

In addition to state-level data as was provided by The Covid Tracking Project, we also provide county and metro data where available. Typically county data is not as complete as state data, but coverage is improving. Our county data is collected from a wide variety of sources that include federal, state, and local dashboards.
Metrics and risk levels

We calculate metrics and risk levels derived from the raw data. These metrics include daily new cases per 100k, infection rate (R_t), test positivity, percent of population vaccinated, and ICU utilization.

Vaccination data

Our API includes vaccination data. We include doses distributed, vaccinations initiated, and vaccinations completed.

Testing data

For testing data, we focus on test positivity via PCR specimens, which has become the standard metric tracked by most health departments. We have positive and negative PCR tests for all states and a computed test positivity percentage for all states and most counties.

Hospitalization data

We currently ingest hospitalization data at the state and county level from HHS. For both ICU and overall hospitalization, we include total staffed beds, beds in use by COVID patients, and total beds in use.

Many of the fields in the Covid Tracking API do overlap. Cases, deaths, and hospitalization/ICU data have the largest commonalities.

| CTP field name | CAN Field Name |
| --- | --- |
| date | date |
| fips | fips |
| state | state |
| death | actuals.deaths |
| hospitalizedCurrently | actuals.hospitalBeds.currentUsageCovid |
| inIcuCurrently | actuals.icuBeds.currentUsageCovid |
| negativeTestsViral | actuals.negativeTests |
| positiveTestsViral | actuals.positiveTests |
| positive | actuals.cases |
| positiveIncrease | actuals.newCases |

See data definitions for all included fields.

Getting Started

To get started, register to get your API key. All requests should be made to `https://api.covidactnow.org` and include an API key. So, for example, to query a CSV for the current values for all states, the request would be of the form:

Many of the endpoints from The Covid Tracking Project are available in our API.
Below is a summary of similar endpoints:

| Description | Covid Tracking endpoint | Covid Act Now endpoint |
| --- | --- | --- |
| Current values for all states | /v1/states/current.{csv,json} | /v2/states.{csv,json} |
| Historical values for all states | /v1/states/daily.{csv,json} | /v2/states.timeseries.{csv,json} |
| Current values for a single state | /v1/states/ca/current.{csv,json} | /v2/state/CA.json |
| Historical values for a single state | /v1/states/ca/daily.{csv,json} | /v2/state/CA.timeseries.{csv,json} |

If you have any questions about the API, please contact us at <EMAIL>.
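As a sketch of the request format (not an official client), the following builds an endpoint URL with the `apiKey` query parameter described in Getting Started. `YOUR_API_KEY` is a placeholder, and the helper name is ours:

```go
package main

import (
	"fmt"
	"net/url"
)

// endpointURL joins an API path to the base host and attaches the
// apiKey query parameter, which the API requires on every request.
func endpointURL(path, apiKey string) string {
	u := url.URL{
		Scheme: "https",
		Host:   "api.covidactnow.org",
		Path:   path,
	}
	u.RawQuery = url.Values{"apiKey": {apiKey}}.Encode()
	return u.String()
}

func main() {
	// Current values for all states, as CSV.
	fmt.Println(endpointURL("/v2/states.csv", "YOUR_API_KEY"))
}
```

This prints `https://api.covidactnow.org/v2/states.csv?apiKey=YOUR_API_KEY`; substitute any path from the endpoint table above.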
github.com/takama/router
README [¶](#section-readme)

---

### Go Router

[![Build Status](https://travis-ci.org/takama/router.svg?branch=master)](https://travis-ci.org/takama/router) [![GoDoc](https://godoc.org/github.com/takama/router?status.svg)](https://godoc.org/github.com/takama/router) [![Contributions Welcome](https://img.shields.io/badge/contributions-welcome-brightgreen.svg?style=flat)](https://github.com/takama/router/issues) [![Go Report Card](https://goreportcard.com/badge/github.com/takama/router)](https://goreportcard.com/report/github.com/takama/router) [![codecov.io](https://codecov.io/github/takama/router/coverage.svg?branch=master)](https://codecov.io/github/takama/router?branch=master)

A simple, compact, and fast router package to process HTTP requests. It has some sugar from a framework, yet it is still lightweight. The router package is useful to prepare a RESTful API for Go services. It has JSON output, which binds automatically to the relevant type of data. The router has a timer feature to display the duration of request handling in the header.

##### Examples

* Simplest example (serve static route):

```
package main

import (
	"github.com/takama/router"
)

func Hello(c *router.Control) {
	c.Body("Hello world")
}

func main() {
	r := router.New()
	r.GET("/hello", Hello)
	// Listen and serve on 0.0.0.0:8888
	r.Listen(":8888")
}
```

* Check it:

```
curl -i http://localhost:8888/hello/

HTTP/1.1 200 OK
Content-Type: text/plain
Date: Sun, 17 Aug 2014 13:25:50 GMT
Content-Length: 11

Hello world
```

* Serve dynamic route with parameter:

```
package main

import (
	"github.com/takama/router"
)

func main() {
	r := router.New()
	r.GET("/hello/:name", func(c *router.Control) {
		c.Body("Hello " + c.Get(":name"))
	})
	// Listen and serve on 0.0.0.0:8888
	r.Listen(":8888")
}
```

* Check it:

```
curl -i http://localhost:8888/hello/John

HTTP/1.1 200 OK
Content-Type: text/plain
Date: Sun, 17 Aug 2014 13:25:55 GMT
Content-Length: 10

Hello John
```

* Checks JSON Content-Type automatically:

```
package main

import (
	"github.com/takama/router"
)

// Data is helper to construct JSON
type Data map[string]interface{}

func main() {
	r := router.New()
	r.GET("/api/v1/settings/database/:db", func(c *router.Control) {
		data := Data{
			"Database settings": Data{
				"database": c.Get(":db"),
				"host":     "localhost",
				"port":     "3306",
			},
		}
		c.Code(200).Body(data)
	})
	// Listen and serve on 0.0.0.0:8888
	r.Listen(":8888")
}
```

* Check it:

```
curl -i http://localhost:8888/api/v1/settings/database/testdb

HTTP/1.1 200 OK
Content-Type: application/json
Date: Sun, 17 Aug 2014 13:25:58 GMT
Content-Length: 102

{
  "Database settings": {
    "database": "testdb",
    "host": "localhost",
    "port": "3306"
  }
}
```

* Use timer to calculate duration of request handling:

```
package main

import (
	"github.com/takama/router"
)

// Data is helper to construct JSON
type Data map[string]interface{}

func main() {
	r := router.New()
	r.GET("/api/v1/settings/database/:db", func(c *router.Control) {
		c.UseTimer()
		// Do something
		data := Data{
			"Database settings": Data{
				"database": c.Get(":db"),
				"host":     "localhost",
				"port":     "3306",
			},
		}
		c.Code(200).Body(data)
	})
	// Listen and serve on 0.0.0.0:8888
	r.Listen(":8888")
}
```

* Check it:

```
curl -i http://localhost:8888/api/v1/settings/database/testdb

HTTP/1.1 200 OK
Content-Type: application/json
Date: Sun, 17 Aug 2014 13:26:05 GMT
Content-Length: 143

{
  "duration": 5356123,
  "took": "5.356ms",
  "data": {
    "Database settings": {
      "database": "testdb",
      "host": "localhost",
      "port": "3306"
    }
  }
}
```

* Custom handler with "Access-Control-Allow" options and compact JSON:

```
package main

import (
	"github.com/takama/router"
)

// Data is helper to construct JSON
type Data map[string]interface{}

func baseHandler(handle router.Handle) router.Handle {
	return func(c *router.Control) {
		c.CompactJSON(true)
		if origin := c.Request.Header.Get("Origin"); origin != "" {
			c.Writer.Header().Set("Access-Control-Allow-Origin", origin)
			c.Writer.Header().Set("Access-Control-Allow-Credentials", "true")
		}
		handle(c)
	}
}

func Info(c *router.Control) {
	data := Data{
		"debug": true,
		"error": false,
	}
	c.Body(data)
}

func main() {
	r := router.New()
	r.CustomHandler = baseHandler
	r.GET("/info", Info)
	// Listen and serve on 0.0.0.0:8888
	r.Listen(":8888")
}
```

* Check it:

```
curl -i -H 'Origin: http://foo.com' http://localhost:8888/info/

HTTP/1.1 200 OK
Access-Control-Allow-Credentials: true
Access-Control-Allow-Origin: http://foo.com
Content-Type: text/plain
Date: Sun, 17 Aug 2014 13:27:10 GMT
Content-Length: 28

{"debug":true,"error":false}
```

* Use google json style `https://google.github.io/styleguide/jsoncstyleguide.xml`:

```
package main

import (
	"net/http"

	"github.com/takama/router"
)

func main() {
	r := router.New()
	r.GET("/api/v1/people/:action/:id", func(c *router.Control) {
		// Do something
		c.Method("people." + c.Get(":action"))
		c.SetParams(map[string]string{"userId": c.Get(":id")})
		c.SetError(http.StatusNotFound, "UserId not found")
		c.AddError(router.Error{Message: "Group or User not found"})
		c.Code(http.StatusNotFound).Body(nil)
	})
	// Listen and serve on 0.0.0.0:8888
	r.Listen(":8888")
}
```

* Check it:

```
curl -i http://localhost:8888/api/v1/people/get/@me

HTTP/1.1 404 Not Found
Content-Type: application/json
Date: Sat, 22 Oct 2016 14:50:00 GMT
Content-Length: 220

{
  "method": "people.get",
  "params": {
    "userId": "@me"
  },
  "error": {
    "code": 404,
    "message": "UserId not found",
    "errors": [
      {
        "message": "Group or User not found"
      }
    ]
  }
}
```

#### Contributors (unsorted)

* [<NAME>](https://github.com/takama)
* [<NAME>](https://github.com/CSharpRU)

All the contributors are welcome. If you would like to be a contributor, please accept some rules:

* Pull requests will be accepted only into the "develop" branch
* All modifications or additions should be tested
* Sorry, I will not accept code with any dependency, only the standard library

Thank you for your understanding!
#### License

[BSD License](https://github.com/takama/router/raw/master/LICENSE)

Documentation [¶](#section-documentation)

---

### Overview [¶](#pkg-overview)

Package router 0.6.5 provides a fast HTTP request router.

The router matches incoming requests by the request method and the path. If a handle is registered for this path and method, the router delegates the request to the defined handler. The router package is useful to prepare a RESTful API for Go services. It has JSON output, which binds automatically to the relevant type of data. The router has a timer feature to display the duration of request handling in the header.

Simplest example (serve static route):

```
package main

import (
	"github.com/takama/router"
)

func Hello(c *router.Control) {
	c.Body("Hello")
}

func main() {
	r := router.New()
	r.GET("/hello", Hello)
	// Listen and serve on 0.0.0.0:8888
	r.Listen(":8888")
}
```

Check it:

```
curl -i http://localhost:8888/hello
```

Serve dynamic route with parameter:

```
func main() {
	r := router.New()
	r.GET("/hello/:name", func(c *router.Control) {
		c.Code(200).Body("Hello " + c.Get(":name"))
	})
	// Listen and serve on 0.0.0.0:8888
	r.Listen(":8888")
}
```

Checks JSON Content-Type automatically:

```
// Data is helper to construct JSON
type Data map[string]interface{}

func main() {
	r := router.New()
	r.GET("/settings/database/:db", func(c *router.Control) {
		data := map[string]map[string]string{
			"Database settings": {
				"database": c.Get(":db"),
				"host":     "localhost",
				"port":     "3306",
			},
		}
		c.Code(200).Body(data)
	})
	// Listen and serve on 0.0.0.0:8888
	r.Listen(":8888")
}
```

Custom handler with "Access-Control-Allow" options and compact JSON:

```
// Data is helper to construct JSON
type Data map[string]interface{}

func baseHandler(handle router.Handle) router.Handle {
	return func(c *router.Control) {
		c.CompactJSON(true)
		if origin := c.Request.Header.Get("Origin"); origin != "" {
			c.Writer.Header().Set("Access-Control-Allow-Origin", origin)
			c.Writer.Header().Set("Access-Control-Allow-Credentials", "true")
		}
		handle(c)
	}
}

func Info(c *router.Control) {
	data := Data{
		"debug": true,
		"error": false,
	}
	c.Body(data)
}

func main() {
	r := router.New()
	r.CustomHandler = baseHandler
	r.GET("/info", Info)
	// Listen and serve on 0.0.0.0:8888
	r.Listen(":8888")
}
```

Use google json style `<https://google.github.io/styleguide/jsoncstyleguide.xml>`:

```
func main() {
	r := router.New()
	r.GET("/api/v1/people/:action/:id", func(c *router.Control) {
		// Do something
		c.Method("people." + c.Get(":action"))
		c.SetParams(map[string]string{"userId": c.Get(":id")})
		c.SetError(http.StatusNotFound, "UserId not found")
		c.AddError(router.Error{Message: "Group or User not found"})
		c.Code(http.StatusNotFound).Body(nil)
	})
	// Listen and serve on 0.0.0.0:8888
	r.Listen(":8888")
}
```

### Index [¶](#pkg-index)

* [Constants](#pkg-constants)
* [type Control](#Control)
  + [func (c *Control) APIVersion(version string) *Control](#Control.APIVersion)
  + [func (c *Control) AddError(errors ...Error) *Control](#Control.AddError)
  + [func (c *Control) Body(data interface{})](#Control.Body)
  + [func (c *Control) Code(code int) *Control](#Control.Code)
  + [func (c *Control) CompactJSON(mode bool) *Control](#Control.CompactJSON)
  + [func (c *Control) Get(name string) string](#Control.Get)
  + [func (c *Control) GetCode() int](#Control.GetCode)
  + [func (c *Control) GetTimer() time.Time](#Control.GetTimer)
  + [func (c *Control) HeaderContext(context string) *Control](#Control.HeaderContext)
  + [func (c *Control) ID(id string) *Control](#Control.ID)
  + [func (c *Control) Method(method string) *Control](#Control.Method)
  + [func (c *Control) Set(params ...Param) *Control](#Control.Set)
  + [func (c *Control) SetError(code uint16, message string) *Control](#Control.SetError)
  + [func (c *Control) SetParams(params interface{}) *Control](#Control.SetParams)
  + [func (c *Control) UseMetaData() *Control](#Control.UseMetaData)
  + [func (c *Control) 
UseTimer()](#Control.UseTimer)
* [type Error](#Error)
* [type ErrorHeader](#ErrorHeader)
* [type Handle](#Handle)
* [type Header](#Header)
* [type Param](#Param)
* [type Route](#Route)
* [type Router](#Router)
  + [func New() *Router](#New)
  + [func (r *Router) AllowedMethods(path string) []string](#Router.AllowedMethods)
  + [func (r *Router) DELETE(path string, h Handle)](#Router.DELETE)
  + [func (r *Router) GET(path string, h Handle)](#Router.GET)
  + [func (r *Router) HEAD(path string, h Handle)](#Router.HEAD)
  + [func (r *Router) Handle(method, path string, h Handle)](#Router.Handle)
  + [func (r *Router) Handler(method, path string, handler http.Handler)](#Router.Handler)
  + [func (r *Router) HandlerFunc(method, path string, handler http.HandlerFunc)](#Router.HandlerFunc)
  + [func (r *Router) Listen(hostPort string)](#Router.Listen)
  + [func (r *Router) Lookup(method, path string) (Handle, []Param, bool)](#Router.Lookup)
  + [func (r *Router) OPTIONS(path string, h Handle)](#Router.OPTIONS)
  + [func (r *Router) PATCH(path string, handle Handle)](#Router.PATCH)
  + [func (r *Router) POST(path string, h Handle)](#Router.POST)
  + [func (r *Router) PUT(path string, h Handle)](#Router.PUT)
  + [func (r *Router) Routes() []Route](#Router.Routes)
  + [func (r *Router) ServeHTTP(w http.ResponseWriter, req *http.Request)](#Router.ServeHTTP)

### Constants [¶](#pkg-constants)

```
const (
	// MIMEJSON - "Content-type" for JSON
	MIMEJSON = "application/json"
	// MIMETEXT - "Content-type" for TEXT
	MIMETEXT = "text/plain"
)
```

Default content types

### Variables [¶](#pkg-variables)

This section is empty.

### Functions [¶](#pkg-functions)

This section is empty.
### Types [¶](#pkg-types)

#### type [Control](https://github.com/takama/router/blob/v0.6.5/control.go#L26) [¶](#Control)

```
type Control struct {
	// Context embedded
	context.Context

	// Request is an adapter which allows the usage of a http.Request as standard request
	Request *http.Request

	// Writer is an adapter which allows the usage of a http.ResponseWriter as standard writer
	Writer http.ResponseWriter

	// User content type
	ContentType string
	// contains filtered or unexported fields
}
```

Control allows us to pass variables between middleware, assign HTTP codes and render a Body.

#### func (*Control) [APIVersion](https://github.com/takama/router/blob/v0.6.5/control.go#L143) [¶](#Control.APIVersion)

```
func (c *Control) APIVersion(version string) *Control
```

APIVersion adds API version meta data.

#### func (*Control) [AddError](https://github.com/takama/router/blob/v0.6.5/control.go#L186) [¶](#Control.AddError)

```
func (c *Control) AddError(errors ...Error) *Control
```

AddError adds a new error.

#### func (*Control) [Body](https://github.com/takama/router/blob/v0.6.5/control.go#L204) [¶](#Control.Body)

```
func (c *Control) Body(data interface{})
```

Body renders the given data into the response body.

#### func (*Control) [Code](https://github.com/takama/router/blob/v0.6.5/control.go#L118) [¶](#Control.Code)

```
func (c *Control) Code(code int) *Control
```

Code assigns the HTTP status code, which is returned on the HTTP request.

#### func (*Control) [CompactJSON](https://github.com/takama/router/blob/v0.6.5/control.go#L131) [¶](#Control.CompactJSON)

```
func (c *Control) CompactJSON(mode bool) *Control
```

CompactJSON changes the JSON output format (default mode is false).

#### func 
(*Control) [Get](https://github.com/takama/router/blob/v0.6.5/control.go#L101) [¶](#Control.Get)

```
func (c *Control) Get(name string) string
```

Get returns the first value associated with the given name. If there are no values associated with the key, an empty string is returned.

#### func (*Control) [GetCode](https://github.com/takama/router/blob/v0.6.5/control.go#L126) [¶](#Control.GetCode)

```
func (c *Control) GetCode() int
```

GetCode returns the status code.

#### func (*Control) [GetTimer](https://github.com/takama/router/blob/v0.6.5/control.go#L199) [¶](#Control.GetTimer)

```
func (c *Control) GetTimer() time.Time
```

GetTimer returns the timer state.

#### func (*Control) [HeaderContext](https://github.com/takama/router/blob/v0.6.5/control.go#L150) [¶](#Control.HeaderContext)

```
func (c *Control) HeaderContext(context string) *Control
```

HeaderContext adds context meta data.

#### func (*Control) [ID](https://github.com/takama/router/blob/v0.6.5/control.go#L157) [¶](#Control.ID)

```
func (c *Control) ID(id string) *Control
```

ID adds id meta data.

#### func (*Control) [Method](https://github.com/takama/router/blob/v0.6.5/control.go#L164) [¶](#Control.Method)

```
func (c *Control) Method(method string) *Control
```

Method adds method meta data.

#### func (*Control) [Set](https://github.com/takama/router/blob/v0.6.5/control.go#L112) [¶](#Control.Set)

```
func (c *Control) Set(params ...Param) *Control
```

Set adds new parameters, represented as a set of key/value pairs.
#### func (*Control) [SetError](https://github.com/takama/router/blob/v0.6.5/control.go#L178) [¶](#Control.SetError)

```
func (c *Control) SetError(code uint16, message string) *Control
```

SetError sets the error code and error message.

#### func (*Control) [SetParams](https://github.com/takama/router/blob/v0.6.5/control.go#L171) [¶](#Control.SetParams)

```
func (c *Control) SetParams(params interface{}) *Control
```

SetParams adds params meta data in an alternative format.

#### func (*Control) [UseMetaData](https://github.com/takama/router/blob/v0.6.5/control.go#L137) [¶](#Control.UseMetaData)

```
func (c *Control) UseMetaData() *Control
```

UseMetaData shows meta data in the JSON header.

#### func (*Control) [UseTimer](https://github.com/takama/router/blob/v0.6.5/control.go#L193) [¶](#Control.UseTimer)

```
func (c *Control) UseTimer()
```

UseTimer allows calculating the elapsed time of request handling.

#### type [Error](https://github.com/takama/router/blob/v0.6.5/control.go#L89) [¶](#Error)

```
type Error struct {
	Domain       string `json:"domain,omitempty"`
	Reason       string `json:"reason,omitempty"`
	Message      string `json:"message,omitempty"`
	Location     string `json:"location,omitempty"`
	LocationType string `json:"locationType,omitempty"`
	ExtendedHelp string `json:"extendedHelp,omitempty"`
	SendReport   string `json:"sendReport,omitempty"`
}
```

Error is the error report format.

#### type [ErrorHeader](https://github.com/takama/router/blob/v0.6.5/control.go#L82) [¶](#ErrorHeader)

```
type ErrorHeader struct {
	Code    uint16  `json:"code,omitempty"`
	Message string  `json:"message,omitempty"`
	Errors  []Error `json:"errors,omitempty"`
}
```

ErrorHeader contains the error code, message and an array of specified error reports.
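The Control methods documented above return `*Control`, so a status code and a body can be chained in a single expression. Since the package itself is not vendored here, the sketch below is a stdlib-only stand-in: the `control` type and the `renderOK` helper are hypothetical, illustrating only the chainable `Code(...).Body(...)` shape against an `httptest` recorder, not the real implementation.

```go
package main

import (
	"encoding/json"
	"fmt"
	"net/http"
	"net/http/httptest"
)

// control is a stdlib-only stand-in for router.Control: it wraps the
// ResponseWriter and lets Code return the receiver so calls can chain.
type control struct {
	w    http.ResponseWriter
	code int
}

// Code records the HTTP status code and returns the receiver for
// chaining, mirroring the documented (*Control).Code signature.
func (c *control) Code(code int) *control {
	c.code = code
	return c
}

// Body writes the recorded status code and renders data as JSON,
// mirroring the documented (*Control).Body signature.
func (c *control) Body(data interface{}) {
	if c.code != 0 {
		c.w.WriteHeader(c.code)
	}
	json.NewEncoder(c.w).Encode(data)
}

// renderOK exercises the chain against an httptest recorder and
// returns the observed status code and body.
func renderOK() (int, string) {
	rec := httptest.NewRecorder()
	c := &control{w: rec}
	c.Code(http.StatusTeapot).Body(map[string]string{"hello": "world"})
	return rec.Code, rec.Body.String()
}

func main() {
	code, body := renderOK()
	fmt.Println(code, body)
}
```

Inside a real Handle the same chain would read roughly `c.Code(http.StatusOK).Body(data)`, with the metadata setters (APIVersion, ID, Method, ...) slotting into the chain the same way.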
#### type [Handle](https://github.com/takama/router/blob/v0.6.5/router.go#L158) [¶](#Handle)

```
type Handle func(*Control)
```

Handle is the type of a handler function.

#### type [Header](https://github.com/takama/router/blob/v0.6.5/control.go#L69) [¶](#Header)

```
type Header struct {
	Duration   time.Duration `json:"duration,omitempty"`
	Took       string        `json:"took,omitempty"`
	APIVersion string        `json:"apiVersion,omitempty"`
	Context    string        `json:"context,omitempty"`
	ID         string        `json:"id,omitempty"`
	Method     string        `json:"method,omitempty"`
	Params     interface{}   `json:"params,omitempty"`
	Data       interface{}   `json:"data,omitempty"`
	Error      interface{}   `json:"error,omitempty"`
}
```

Header is used to prepare a JSON header with meta data.

#### type [Param](https://github.com/takama/router/blob/v0.6.5/control.go#L63) [¶](#Param)

```
type Param struct {
	Key   string `json:"key,omitempty"`
	Value string `json:"value,omitempty"`
}
```

Param is a URL parameter represented as a key and value.

#### type [Route](https://github.com/takama/router/blob/v0.6.5/router.go#L161) [¶](#Route)

```
type Route struct {
	Method string
	Path   string
}
```

Route contains information about an HTTP method and path.

#### type [Router](https://github.com/takama/router/blob/v0.6.5/router.go#L136) [¶](#Router)

```
type Router struct {
	// NotFound is called when an unknown HTTP method or a handler is not found.
	// If it is not set, http.NotFound is used.
	// Please overwrite this if you need your own NotFound handler.
	NotFound Handle

	// PanicHandler is called when a panic happens.
	// The handler prevents your server from crashing and should be used to return
	// http status code http.StatusInternalServerError (500)
	PanicHandler Handle

	// CustomHandler is always called if defined
	CustomHandler func(Handle) Handle

	// Logger activates a logging user function for each request
	Logger Handle
	// contains filtered or unexported fields
}
```

Router represents a multiplexer for HTTP requests.

#### func [New](https://github.com/takama/router/blob/v0.6.5/router.go#L167) [¶](#New)

```
func New() *Router
```

New returns a new multiplexer (Router).

#### func (*Router) [AllowedMethods](https://github.com/takama/router/blob/v0.6.5/router.go#L241) [¶](#Router.AllowedMethods)

```
func (r *Router) AllowedMethods(path string) []string
```

AllowedMethods returns the list of allowed methods.

#### func (*Router) [DELETE](https://github.com/takama/router/blob/v0.6.5/router.go#L187) [¶](#Router.DELETE)

```
func (r *Router) DELETE(path string, h Handle)
```

DELETE is a shortcut for router.Handle("DELETE", path, handle).

#### func (*Router) [GET](https://github.com/takama/router/blob/v0.6.5/router.go#L172) [¶](#Router.GET)

```
func (r *Router) GET(path string, h Handle)
```

GET is a shortcut for router.Handle("GET", path, handle).

#### func (*Router) [HEAD](https://github.com/takama/router/blob/v0.6.5/router.go#L192) [¶](#Router.HEAD)

```
func (r *Router) HEAD(path string, h Handle)
```

HEAD is a shortcut for router.Handle("HEAD", path, handle).

#### func (*Router) [Handle](https://github.com/takama/router/blob/v0.6.5/router.go#L207) [¶](#Router.Handle)

```
func (r *Router) Handle(method, path string, h Handle)
```

Handle registers a new request handle with the given path and method.
#### func (*Router) [Handler](https://github.com/takama/router/blob/v0.6.5/router.go#L215) [¶](#Router.Handler)

```
func (r *Router) Handler(method, path string, handler http.Handler)
```

Handler allows the usage of an http.Handler as a request handle.

#### func (*Router) [HandlerFunc](https://github.com/takama/router/blob/v0.6.5/router.go#L224) [¶](#Router.HandlerFunc)

```
func (r *Router) HandlerFunc(method, path string, handler http.HandlerFunc)
```

HandlerFunc allows the usage of an http.HandlerFunc as a request handle.

#### func (*Router) [Listen](https://github.com/takama/router/blob/v0.6.5/router.go#L253) [¶](#Router.Listen)

```
func (r *Router) Listen(hostPort string)
```

Listen and serve on the requested host and port.

#### func (*Router) [Lookup](https://github.com/takama/router/blob/v0.6.5/router.go#L233) [¶](#Router.Lookup)

```
func (r *Router) Lookup(method, path string) (Handle, []Param, bool)
```

Lookup returns the handler and URL parameters associated with the given path.
#### func (*Router) [OPTIONS](https://github.com/takama/router/blob/v0.6.5/router.go#L197) [¶](#Router.OPTIONS)

```
func (r *Router) OPTIONS(path string, h Handle)
```

OPTIONS is a shortcut for router.Handle("OPTIONS", path, handle).

#### func (*Router) [PATCH](https://github.com/takama/router/blob/v0.6.5/router.go#L202) [¶](#Router.PATCH)

```
func (r *Router) PATCH(path string, handle Handle)
```

PATCH is a shortcut for router.Handle("PATCH", path, handle).

#### func (*Router) [POST](https://github.com/takama/router/blob/v0.6.5/router.go#L177) [¶](#Router.POST)

```
func (r *Router) POST(path string, h Handle)
```

POST is a shortcut for router.Handle("POST", path, handle).

#### func (*Router) [PUT](https://github.com/takama/router/blob/v0.6.5/router.go#L182) [¶](#Router.PUT)

```
func (r *Router) PUT(path string, h Handle)
```

PUT is a shortcut for router.Handle("PUT", path, handle).

#### func (*Router) [Routes](https://github.com/takama/router/blob/v0.6.5/router.go#L306) [¶](#Router.Routes)

```
func (r *Router) Routes() []Route
```

Routes returns the list of registered HTTP methods with their paths.

#### func (*Router) [ServeHTTP](https://github.com/takama/router/blob/v0.6.5/router.go#L260) [¶](#Router.ServeHTTP)

```
func (r *Router) ServeHTTP(w http.ResponseWriter, req *http.Request)
```

ServeHTTP implements the http.Handler interface.
Package ‘PL94171’

October 12, 2022

Type Package
Title Tabulate P.L. 94-171 Redistricting Data Summary Files
Version 1.1.2
Maintainer <NAME> <<EMAIL>>
Description Tools to process legacy format summary redistricting data files produced by the United States Census Bureau pursuant to P.L. 94-171. These files are generally available earlier but are difficult to work with as-is.
Depends R (>= 4.0.0)
Imports cli, stringr, readr, dplyr (>= 1.0.0), tinytiger, sf, withr, httr
Suggests testthat (>= 3.0.0), lifecycle, knitr, rmarkdown, ggplot2
License MIT + file LICENSE
Encoding UTF-8
LazyData true
RoxygenNote 7.2.1
URL https://corymccartan.com/PL94171/, https://github.com/CoryMcCartan/PL94171/
BugReports https://github.com/CoryMcCartan/PL94171/issues
Config/testthat/edition 3
VignetteBuilder knitr
Language en-US
NeedsCompilation no
Author <NAME> [aut, cre], <NAME> [aut]
Repository CRAN
Date/Publication 2022-09-11 22:13:02 UTC

R topics documented:
pl_crosswalk, pl_ex, pl_geog_levels, pl_get_baf, pl_get_prototype, pl_get_vtd, pl_read, pl_retally, pl_select_standard, pl_subset, pl_tidy_shp, pl_url

pl_crosswalk            Download Block Crosswalk Files

Description

Downloads crosswalks from https://www.census.gov/geographies/reference-files/time-series/geo/relationship-files.html. Adjusts land overlap area to ensure weights sum to 1.

Usage

pl_crosswalk(abbr, from_year = 2010L, to_year = from_year + 10L)

Arguments

abbr        the state to download the crosswalk for.
from_year   the year with the blocks that the data is currently tabulated with respect to.
to_year     the year with the blocks that the data should be tabulated into.

Value

A tibble, with two sets of GEOIDs and overlap information.

Examples

## Not run:
# Takes a bit of time to run
pl_crosswalk("RI", 2010, 2020)
## End(Not run)

pl_ex                   PL Example File

Description

This data contains a subset of the 2020 prototype PL data

Usage

data("pl_ex")

Format

list of tibbles containing the four PL files.
00001   Tables P1 and P2
00002   Tables P3, P4, and H1
00003   Table P5
geo     geographic header file

Examples

data(pl_ex)

pl_geog_levels          List of Summary Levels and Official Descriptions

Description

This dataset is a tibble version of the descriptions of (potentially) available summary levels within the P.L. 94-171 data, as described in the 2018 Redistricting Data Prototype (Public Law 94-171) Summary File documentation.

Usage

pl_geog_levels

Format

a tibble with two columns:

SUMLEV               The three character summary level code
SUMLEV_description   The summary level description

pl_get_baf              Download 2020 Block Assignment Files for a State

Description

[Experimental] From the Census: "The Block Assignment Files (BAFs) are among the geographic products that the Census Bureau provides to states and other data users containing the small area census data necessary for legislative redistricting. The BAFs contain Census tabulation block codes and geographic area codes for a specific geographic entity type."

Usage

pl_get_baf(abbr, geographies = NULL, cache_to = NULL, refresh = FALSE)

Arguments

abbr          the state abbreviation to get the BAF for
geographies   the geographies to get. Defaults to all available.
cache_to      the file name, if any, to cache the results to (as an RDS). If a file exists and refresh = FALSE, will read BAF from this file.
refresh       if TRUE, force a re-download of the BAF data.

Value

A list of tibbles, one for each available BAF geography.

Examples

pl_get_baf("RI")
pl_get_baf("RI", "VTD")

pl_get_prototype        Download TIGER Prototype shapefiles

Description

[Experimental] These prototype shapefiles correspond to the Rhode Island end-to-end Census test and the accompanying prototype P.L. 94-171 data. This function is unlikely to be useful for working with any actual decennial Census data. The corresponding tinytiger or tigris functions should be used instead.
Usage

pl_get_prototype(
  geog,
  year = 2020,
  full_state = TRUE,
  cache_to = NULL,
  clean_names = TRUE,
  refresh = FALSE
)

Arguments

geog          Geography to download data for. See details for full list.
year          year, either 2010 or 2020
full_state    whether to return the full state (TRUE) or the single county subset (FALSE)
cache_to      the file name, if any, to cache the results to (as an RDS). If a file exists and refresh = FALSE, will read from this file.
clean_names   whether to clean and rename columns
refresh       if TRUE, force a re-download of the data.

Details

Current acceptable arguments to geog include:

• block: block
• block_group: block group
• tract: tract
• county: county
• state: state
• sld_low: state legislative district lower house
• sld_up: state legislative district upper house
• congressional_district: federal congressional district
• place: Census place
• voting_district: voting tabulation district

Value

An sf object containing the blocks.

Examples

shp <- pl_get_prototype("block")

pl_get_vtd              Download 2020 Voting District Shapefiles

Description

[Experimental] A (likely temporary) function to download TIGER shapefiles for 2020 voting tabulation districts (VTDs).

Usage

pl_get_vtd(abbr, cache_to = NULL, refresh = FALSE)

Arguments

abbr       the state abbreviation to download VTDs for
cache_to   the file name, if any, to cache the results to (as an RDS). If a file exists and refresh = FALSE, will read from this file.
refresh    if TRUE, force a re-download of the data.

Value

An sf object containing the VTDs.

Examples

shp <- pl_get_vtd("RI")

pl_read                 Read a set of PL Files

Description

PL files come in one of four types and are pipe-delimited with no header row. This function speedily reads in the files and assigns the appropriate column names and types.

Usage

pl_read(path, ...)

read_pl(path, ...)

Arguments

path   a path to a folder containing PL files. Can also be a path or a URL for a ZIP file, which will be downloaded and unzipped.
...
passed on to readr::read_delim()

Value

A list of data frames containing the four PL files.

Examples

pl_ex_path <- system.file('extdata/ri2018_2020Style.pl', package = 'PL94171')
pl <- pl_read(pl_ex_path)
# or try `pl_read(pl_url("RI", 2010))`

pl_retally              Approximately re-tally Census data under new block boundaries

Description

Applies a block crosswalk to a table of block data using areal interpolation. That is, the fraction of land area in the overlapping region between old and new blocks is used to divide the population of the old blocks into the new.

Usage

pl_retally(d_from, crosswalk)

Arguments

d_from      The data frame to process. All numeric columns will be re-tallied. Integer columns will be re-tallied with rounding. Character columns will be preserved if constant across new block geometries.
crosswalk   The crosswalk data frame, from pl_crosswalk()

Details

All numeric columns will be re-tallied. Integer columns will be re-tallied with rounding. Character columns will be preserved if constant across new block geometries. Blocks from other states will be ignored.

Value

A new data frame, like d_from, except with the geometry column dropped, if one exists. New geometry should be loaded, perhaps with tinytiger::tt_blocks().

Examples

crosswalk = pl_crosswalk("RI", 2010, 2020)
RI_2010 = pl_tidy_shp("RI", pl_url("RI", 2010), 2010)
pl_retally(RI_2010, crosswalk)

pl_select_standard      Select the Standard Redistricting Columns

Description

Selects the standard set of basic population groups and VAP groups. Optionally renames them from the PXXXYYYY naming convention (where XXX is the table and YYYY is the variable) to more human readable names. pop_* is the total population, from tables 1 and 2, while vap_* is the 18+ population (voting age population).
Usage

pl_select_standard(pl, clean_names = TRUE)

Arguments

pl            A list of PL tables, as read in by pl_read()
clean_names   whether to clean names

Details

If clean_names = TRUE, then the variables extracted are as follows:

• *_hisp: Hispanic or Latino (of any race)
• *_white: White alone, not Hispanic or Latino
• *_black: Black or African American alone, not Hispanic or Latino
• *_aian: American Indian and Alaska Native alone, not Hispanic or Latino
• *_asian: Asian alone, not Hispanic or Latino
• *_nhpi: Native Hawaiian and Other Pacific Islander alone, not Hispanic or Latino
• *_other: Some Other Race alone, not Hispanic or Latino
• *_two: Population of two or more races, not Hispanic or Latino

where * is pop or vap.

Value

A tibble with the selected and optionally renamed columns

Examples

pl_ex_path <- system.file('extdata/ri2018_2020Style.pl', package = 'PL94171')
pl <- pl_read(pl_ex_path)
pl <- pl_select_standard(pl)

pl_subset               Subset to a Summary Level

Description

This subsets a pl table to a desired summary level. Typical choices include:

• '750': block
• '150': block group
• '630': voting district
• '050': county

Usage

pl_subset(pl, sumlev = "750")

Arguments

pl       A list of PL tables, as read in by pl_read()
sumlev   the summary level to filter to. A 3 character SUMLEV code. Default is '750' for blocks.

Details

All summary levels are listed in pl_geog_levels.

Value

tibble

Examples

pl_ex_path <- system.file('extdata/ri2018_2020Style.pl', package = 'PL94171')
pl <- pl_read(pl_ex_path)
pl <- pl_subset(pl)

pl_tidy_shp             All-in-one Shapefile Function

Description

Downloads block geography and merges with the cleaned PL 94-171 file.

Usage

pl_tidy_shp(abbr, path, year = 2020, type = c("blocks", "vtds"), ...)

Arguments

abbr   The state to make the shapefile for
path   The path to the PL files, as in pl_read()
year   The year to download the block geography for. Should match the year of the PL files.
type   If "blocks", make a Census block shapefile; if "vtds", make a VTD shapefile.
...    passed on to dplyr::filter(); use to subset to a certain county, for example.

Value

an sf object with demographic and shapefile information for the state.

Examples

pl_ex_path <- system.file("extdata/ri2018_2020Style.pl", package = "PL94171")
pl_tidy_shp("RI", pl_ex_path)

pl_url                  Get the URL for PL files for a particular state and year

Description

Get the URL for PL files for a particular state and year

Usage

pl_url(abbr, year = 2010)

Arguments

abbr   The state to download the PL files for
year   The year of PL file to download. Supported years: 2000, 2010, 2020 (after release). 2000 files are in a different format. Earlier years available on tape or CD-ROM only.

Value

a character vector containing the URL to a ZIP containing the PL files.

Examples

pl_url("RI", 2010)
Package ‘littler’

March 26, 2023

Type Package
Title R at the Command-Line via 'r'
Version 0.3.18
Date 2023-03-25
Author <NAME> and <NAME>
Maintainer <NAME> <<EMAIL>>
Description A scripting and command-line front-end is provided by 'r' (aka 'littler') as a lightweight binary wrapper around the GNU R language and environment for statistical computing and graphics. While R can be used in batch mode, the r binary adds full support for both 'shebang'-style scripting (i.e. using a hash-mark-exclamation-path expression as the first line in scripts) as well as command-line use in standard Unix pipelines. In other words, r provides the R language without the environment.
URL https://github.com/eddelbuettel/littler, https://dirk.eddelbuettel.com/code/littler.html, https://eddelbuettel.github.io/littler/
BugReports https://github.com/eddelbuettel/littler/issues
License GPL (>= 2)
OS_type unix
SystemRequirements libR
Suggests simplermarkdown, docopt, rcmdcheck, foghorn
VignetteBuilder simplermarkdown
RoxygenNote 5.0.1
NeedsCompilation yes
Repository CRAN
Date/Publication 2023-03-26 10:20:05 UTC

R topics documented:
littler, r

littler                 Command-line and scripting front-end for R

Description

The r binary provides a convenient and powerful front-end. By embedding R, it permits four distinct ways to leverage the power of R at the shell prompt: scripting, filename execution, piping and direct expression evaluation.

Details

The r front-end was written with four distinct usage modes in mind. First, it allows writing so-called ‘shebang’ scripts starting with #!/usr/bin/env r. These ‘shebang’ scripts are perfectly suited for automation and execution, e.g. via cron. Second, we can use r somefile.R to quickly execute the named R source file. This is useful as r is both easy to type—and quicker to start than either R itself, or its scripting tool Rscript, while still loading the methods package. Third, r can be used in ‘pipes’ which are very common in Unix.
A simple and trivial example is

echo 'cat(2+2)' | r

illustrating that the standard output of one program can be used as the standard input of another program. Fourth, r can be used as a calculator by supplying expressions after the -e or --eval options.

Value

In common with other shell tools and programs, r returns an exit code where a value of zero indicates success.

Note

On OS X one may have to link the binary to, say, lr instead. As OS X insists that files named R and r are the same, we cannot use the latter.

Author(s)

<NAME> and <NAME> wrote littler from 2006 to today, with contributions from several others. <NAME> <<EMAIL>> is the maintainer.

Examples

## Not run:
#!/usr/bin/env r   ## for use in scripts
other input | r    ## for use in pipes
r somefile.R       ## for running files
r -e 'expr'        ## for evaluating expressions
r --help           ## to show a quick synopsis
## End(Not run)

r                       Return Path to r Binary

Description

Return the path of the installed r binary.

Usage

r(usecat = FALSE)

Arguments

usecat   Optional toggle to request output to stdout (useful in Makefiles)

Details

The test for Windows is of course superfluous as we have no binary for Windows. Maybe one day...

Value

The path is returned as a character variable. If the usecat option is set the character variable is displayed via cat instead.

Author(s)

<NAME>
Package ‘daymetr’

September 15, 2023

Title Interface to the 'Daymet' Web Services
Version 1.7.1
Description Programmatic interface to the 'Daymet' web services (<http://daymet.ornl.gov>). Allows for easy downloads of 'Daymet' climate data directly to your R workspace or your computer. Routines for both single pixel data downloads and gridded (netCDF) data are provided.
Depends R (>= 3.6)
Imports sf, terra, ncdf4, httr, tidyr, tibble, tools, utils
License AGPL-3
LazyData true
ByteCompile true
RoxygenNote 7.2.3
Suggests ggplot2, dplyr, knitr, markdown, covr, testthat
VignetteBuilder knitr
URL https://github.com/bluegreen-labs/daymetr
BugReports https://github.com/bluegreen-labs/daymetr/issues
NeedsCompilation no
Author <NAME> [aut, cre] (<https://orcid.org/0000-0002-5070-8109>), BlueGreen Labs [cph, fnd]
Maintainer <NAME> <<EMAIL>>
Repository CRAN
Date/Publication 2023-09-15 14:52:05 UTC

R topics documented:
calc_nd, daymet_grid_agg, daymet_grid_offset, daymet_grid_tmean, download_daymet, download_daymet_batch, download_daymet_ncss, download_daymet_tiles, nc2tif, read_daymet, tile_outlines

calc_nd                 Count days meeting set criteria

Description

Function to count the number of days in a given time period that meet a given set of criteria. This can be used to extract indices such as Growing Degree Days (tmin > 0), or days with precipitation (prcp != 0).

Usage

calc_nd(
  file,
  start_doy = 1,
  end_doy = 365,
  criteria,
  value,
  internal = FALSE,
  path = tempdir()
)

Arguments

file        path of a file containing the daily gridded Daymet data
start_doy   numeric day-of-year at which counting should begin. (default = 1)
end_doy     numeric day of year at which counting should end.
(default = 365)
criteria    logical expression (">=", ">", "<=", "<", "==", "!=") to evaluate
value       the value that the criteria is evaluated against
internal    return to workspace (TRUE) or write to disk (FALSE) (default = FALSE)
path        path to which to write data to disk (default = tempdir())

Value

A raster object in the R workspace or a file on disk with summary statistics for every pixel which meets the predefined criteria. Output files, if written to file, will be named nd_YYYY.tif (with YYYY the year of the processed tile or ncss netCDF file).

Examples

## Not run:
# download daily gridded data
# using default settings (data written to tempdir())
download_daymet_ncss()

# read in the Daymet file and report back the number
# of days in a year with a minimum temperature lower
# than 15 degrees C
r <- calc_nd(file.path(tempdir(),"tmin_daily_1980_ncss.nc"),
             criteria = "<",
             value = 15,
             internal = TRUE)

# plot the output
terra::plot(r)
## End(Not run)

daymet_grid_agg         Aggregate daily Daymet data

Description

Aggregates daily Daymet data by time interval to create convenient seasonal datasets for data exploration or modelling.

Usage

daymet_grid_agg(
  file,
  int = "seasonal",
  fun = "mean",
  internal = FALSE,
  path = tempdir()
)

Arguments

file       The name of the file to be processed. Use daily gridded Daymet data.
int        Interval to aggregate by. Options are "monthly", "seasonal" or "annual". Seasons are defined as the astronomical seasons between solstices and equinoxes (default = "seasonal")
fun        Function to be used to aggregate data. Generic R functions can be used. "mean" and "sum" are suggested. na.rm = TRUE by default. (default = "mean")
internal   logical If FALSE, write the output to a tif file using the Daymet file format protocol.
path       path to a directory where output files should be written. Used only if internal = FALSE (default = tempdir())

Value

aggregated daily Daymet data as a tiff file written to disk or a raster stack when data is returned to the workspace.
Examples

## Not run:
# This code calculates the average minimum temperature by
# season for a subset region.

# download default ncss tiled subset for 1980
# (daily tmin values only), works on tiles as well
download_daymet_ncss()

# Finally, run the function
daymet_grid_agg(
  file = file.path(tempdir(),"/tmin_daily_1980_ncss.nc"),
  int = "seasonal",
  fun = "mean"
)
## End(Not run)

daymet_grid_offset      Returns a time shifted (offset) dataset

Description

Returns an offset dataset with data running from offset DOY in year - 1 to offset DOY in the current year. Two years of data (730 data layers) are required for this function to work. The output serves as input for further data processing and / or ecosystem modelling efforts.

Usage

daymet_grid_offset(data, offset = 264)

Arguments

data     rasterStack or rasterBrick of 730 layers (2 consecutive years)
offset   offset of the time series in DOY (default = 264, sept 21)

Examples

## Not run:
my_subset <- daymet_grid_offset(mystack, offset = 264)
## End(Not run)

daymet_grid_tmean       Averages tmax and tmin 'Daymet' gridded products

Description

Combines data into a single mean daily temperature (tmean) gridded output (geotiff) for easy post processing and modelling. Optionally a raster object is returned to the current workspace.

Usage

daymet_grid_tmean(path = tempdir(), product, year, internal = FALSE)

Arguments

path       full path location of the daymet tiles (default = tempdir())
product    either a tile number or a ncss product name
year       which year to process
internal   TRUE / FALSE (if FALSE, write the output to file) using the Daymet file format protocol.

Examples

## Not run:
# This code calculates the mean temperature
# for all daymet tiles in a user provided
# directory.
# In this example we first
# download tile 11935 for tmin and tmax

# download a tile
download_daymet_tiles(tiles = 11935,
                      start = 1980,
                      end = 1980,
                      param = c("tmin","tmax"),
                      path = tempdir())

# calculate the mean temperature and export
# the result to the R workspace (internal = TRUE)
# If internal = FALSE, a file tmean_11935_1980.tif
# is written into the source path (path_with_daymet_tiles)
tmean <- daymet_grid_tmean(path = tempdir(),
                           product = 11935,
                           year = 1980,
                           internal = TRUE)
## End(Not run)

download_daymet         Function to download single location 'Daymet' data

Description

Function to download single location 'Daymet' data

Usage

download_daymet(
  site = "Daymet",
  lat = 36.0133,
  lon = -84.2625,
  start = 2000,
  end = as.numeric(format(Sys.time(), "%Y")) - 2,
  path = tempdir(),
  internal = TRUE,
  silent = FALSE,
  force = FALSE,
  simplify = FALSE
)

Arguments

site       the site name.
lat        latitude (decimal degrees)
lon        longitude (decimal degrees)
start      start of the range of years over which to download data
end        end of the range of years over which to download data
path       set path where to save the data if internal = FALSE (default = NULL)
internal   TRUE or FALSE, if TRUE returns a list to the R workspace, if FALSE puts the downloaded data into the current working directory (default = FALSE)
silent     TRUE or FALSE (default), to provide verbose output
force      TRUE or FALSE (default), override the conservative end year setting
simplify   output data as a tibble, logical FALSE or TRUE (default = TRUE)

Value

Daymet data for a point location, returned to the R workspace or written to disk as a csv file.

Examples

## Not run:
# The following commands download and process Daymet data
# for 10 years of the >30 years of data available since 1980.
daymet_data <- download_daymet(
  "testsite_name",
  lat = 36.0133,
  lon = -84.2625,
  start = 2000,
  end = 2010,
  internal = TRUE
)

# We can now quickly calculate and plot
# daily mean temperature. Also, take note of
# the weird format of the header.
# This format
# is not altered as to keep compatibility
# with other ways of acquiring Daymet data
# through the ORNL DAAC website.

# The below command lists headers of
# the downloaded nested list.
# This data includes information on the site
# location etc. The true climate data is stored
# in the "data" part of the nested list.
# In this case it can be accessed through
# daymet_data$data. Other attributes include
# for example the tile location (daymet_data$tile),
# the altitude (daymet_data$altitude), etc.
str(daymet_data)

# load the tidyverse (install if necessary)
if(!require(tidyverse)){install.packages("tidyverse")}
library(tidyverse)

# Calculate the mean temperature from min
# max temperatures and convert the year and doy
# to a proper date format.
daymet_data$data <- daymet_data$data |>
  mutate(
    tmean = (tmax..deg.c. + tmin..deg.c.)/2,
    date = as.Date(paste(year, yday, sep = "-"), "%Y-%j")
  )

# show a simple graph of the mean temperature
plot(daymet_data$data$date,
     daymet_data$data$tmean,
     xlab = "Date",
     ylab = "mean temperature")

# For other practical examples consult the included
# vignette.
## End(Not run)

download_daymet_batch   Download 'Daymet' data for several single pixel locations, as specified by a batch file

Description

This function downloads 'Daymet' data for several single pixel locations, as specified by a batch file.
Usage

download_daymet_batch(
  file_location = NULL,
  start = 1980,
  end = as.numeric(format(Sys.time(), "%Y")) - 1,
  internal = TRUE,
  force = FALSE,
  silent = FALSE,
  path = tempdir(),
  simplify = FALSE
)

Arguments

file_location   file with several site locations and coordinates in a comma delimited format: site, latitude, longitude
start           start of the range of years over which to download data
end             end of the range of years over which to download data
internal        TRUE or FALSE, load data into the workspace or save to disk
force           TRUE or FALSE (default), override the conservative end year setting
silent          suppress the verbose output (default = FALSE)
path            set path where to save the data if internal = FALSE (default = tempdir())
simplify        output data to a tibble, logical FALSE or TRUE (default = TRUE)

Value

Daymet data for point locations as a nested list or data written to csv files

Examples

## Not run:
# The download_daymet_batch() routine is a wrapper around
# the download_daymet() function. It queries a file with
# coordinates to easily download a large batch of daymet
# pixel locations. When internal = TRUE, the data is stored
# in a structured list in an R variable. If FALSE, the data
# is written to disk.

# create demo locations (two sites)
locations <- data.frame(site = c("site1", "site2"),
                        lat = rep(36.0133, 2),
                        lon = rep(-84.2625, 2))

# write data to csv file
write.table(locations, paste0(tempdir(),"/locations.csv"),
            sep = ",",
            col.names = TRUE,
            row.names = FALSE,
            quote = FALSE)

# download data, will return nested list of daymet data
df_batch <- download_daymet_batch(file_location = paste0(tempdir(),
                                                         "/locations.csv"),
                                  start = 1980,
                                  end = 1980,
                                  internal = TRUE,
                                  silent = TRUE)

# For other practical examples consult the included
# vignette.
## End(Not run) download_daymet_ncss Function to geographically subset ’Daymet’ regions exceeding tile limits Description Function to geographically subset ’Daymet’ regions exceeding tile limits Usage download_daymet_ncss( location = c(34, -82, 33.75, -81.75), start = 1980, end = 1980, param = "tmin", frequency = "daily", mosaic = "na", path = tempdir(), silent = FALSE, force = FALSE, ssl = TRUE ) Arguments location location of a bounding box c(lat, lon, lat, lon) defined by top-left and bottom-right coordinates start start of the range of years over which to download data end end of the range of years over which to download data param climate variable you want to download vapour pressure (vp), minimum and maximum temperature (tmin, tmax), snow water equivalent (swe), solar radiation (srad), precipitation (prcp), day length (dayl). The default setting is ALL, this will download all the previously mentioned climate variables. frequency frequency of the data requested (default = "daily", other options are "monthly" or "annual"). mosaic which tile mosaic to source from (na = Northern America, hi = Hawaii, pr = Puerto Rico), defaults to "na". path directory where to store the downloaded data (default = tempdir()) silent suppress the verbose output force TRUE or FALSE (default), override the conservative end year setting ssl TRUE (default) or FALSE, override default SSL settings in case of CA issues Value netCDF data file of an area circumscribed by the location bounding box Examples ## Not run: # The following call allows you to subset gridded # Daymet data using a bounding box location. This # is an alternative way to query gridded data. The # routine is particularly helpful if you need certain # data which straddles boundaries of multiple tiles # or a smaller subset of a larger tile. Keep in mind # that there is a 6GB upper limit to the output file # so querying larger regions will result in an error. 
# To download larger areas use the download_daymet_tiles() # function. # Download a subset of a / multiple tiles # into your current working directory. download_daymet_ncss(location = c(34, -82, 33.75, -81.75), start = 1980, end = 1980, param = "tmin", path = tempdir()) # For other practical examples consult the included # vignette. ## End(Not run) download_daymet_tiles Function to batch download gridded ’Daymet’ data tiles Description Function to batch download gridded ’Daymet’ data tiles Usage download_daymet_tiles( location = c(18.9103, -114.6109), tiles, start = 1980, end = 1980, path = tempdir(), param = "ALL", silent = FALSE, force = FALSE ) Arguments location location of a point c(lat, lon) or a bounding box defined by top-left and bottom-right coordinates c(lat, lon, lat, lon) tiles which tiles to download, overrides geographic constraints start start of the range of years over which to download data end end of the range of years over which to download data path where should the downloaded tiles be stored (default = tempdir()) param climate variable you want to download vapour pressure (vp), minimum and maximum temperature (tmin, tmax), snow water equivalent (swe), solar radiation (srad), precipitation (prcp), day length (dayl). The default setting is ALL, this will download all the previously mentioned climate variables. silent suppress the verbose output force TRUE or FALSE (default), override the conservative end year setting Value downloads netCDF tiles as defined by the Daymet tile grid Examples ## Not run: # Download a single tile of minimum temperature download_daymet_tiles(location = c(18.9103, -114.6109), start = 1980, end = 1980, param = "tmin") # For other practical examples consult the included # vignette. ## End(Not run) nc2tif Converts netCDF (nc) files to geotiff Description Conversion to .tif to simplify workflows if the data that has been downloaded is to be handled in other software (e.g. QGIS). 
Usage nc2tif(path = tempdir(), files = NULL, overwrite = FALSE, silent = FALSE) Arguments path a character string showing the path to the directory containing Daymet .nc files (default = tempdir()) files a character vector containing the name of one or more files to be converted (optional) overwrite a logical controlling whether all files will be written, or whether files will not be written in the event that there is already a .tif of that file. (default = FALSE) silent limit verbose output (default = FALSE) Value Converted geotiff files of all netCDF data in the provided directory (path). Examples ## Not run: # The below command converts all netCDF data in # the provided path to geotiff files. Existing # files will be overwritten. If set to FALSE, # files will not be overwritten. # download the data download_daymet_ncss(param = "tmin", frequency = "annual", path = tempdir(), silent = TRUE) # convert files from nc to tif nc2tif(path = tempdir(), overwrite = TRUE) # print converted files print(list.files(tempdir(), "*.tif")) ## End(Not run) read_daymet Read Single Pixel Daymet data Description Reads Single Pixel Daymet data into a nested list or tibble, preserving header data and critical file name information. Usage read_daymet(file, site, skip_header = FALSE, simplify = TRUE) Arguments file a Daymet Single Pixel data file site a sitename (default = NULL) skip_header do not ingest header meta-data, logical FALSE or TRUE (default = FALSE) simplify output tidy data (tibble), logical FALSE or TRUE (default = TRUE) Value A nested data structure including site meta-data, the full header and the data as a data.frame(). 
Examples ## Not run: # download the data download_daymet( site = "Daymet", start = 1980, end = 1980, internal = FALSE, silent = TRUE ) # read in the Daymet file df <- read_daymet(paste0(tempdir(),"/Daymet_1980_1980.csv")) # print data structure print(str(df)) ## End(Not run) tile_outlines tile_outlines Description Large simple feature collection containing the outlines of all the Daymet tiles available as well as projection information. This data was converted from a shapefile as provided on the Daymet main website. Usage tile_outlines Format SpatialPolygonsDataFrame TileID tile ID number XMin minimum longitude XMax maximum longitude YMin minimum latitude YMax maximum latitude
Module opencv::alphamat === Alpha Matting --- Alpha matting is used to extract a foreground object with soft boundaries from a background image. This module is dedicated to computing alpha matte of objects in images from a given input image and a greyscale trimap image that contains information about the foreground, background and unknown pixels. The unknown pixels are assumed to be a combination of foreground and background pixels. The algorithm uses a combination of multiple carefully defined pixel affinities to estimate the opacity of the foreground pixels in the unknown region. The implementation is based on aksoy2017designing. This module was developed by <NAME> and <NAME> as a project for Google Summer of Code 2019 (GSoC 19). Modules --- * prelude Functions --- * info_flowCompute alpha matte of an object in an image Module opencv::aruco === Aruco markers, module functionality was moved to objdetect module --- ArUco Marker Detection, module functionality was moved to objdetect module ### See also ArucoDetector, CharucoDetector, Board, GridBoard, CharucoBoard Modules --- * prelude Structs --- * EstimateParametersPose estimation parameters Enums --- * PatternPositionTypervec/tvec define the right handed coordinate system of the marker. Constants --- * ARUCO_CCW_CENTERThe marker coordinate system is centered on the middle of the marker. * ARUCO_CW_TOP_LEFT_CORNERThe marker coordinate system is centered on the top-left corner of the marker. Traits --- * EstimateParametersTraitMutable methods for crate::aruco::EstimateParameters * EstimateParametersTraitConstConstant methods for crate::aruco::EstimateParameters Functions --- * calibrate_camera_arucoCalibrate a camera using aruco markers * calibrate_camera_aruco_def@overload It’s the same function as calibrate_camera_aruco but without calibration error estimation. 
* calibrate_camera_aruco_extendedCalibrate a camera using aruco markers * calibrate_camera_aruco_extended_defCalibrate a camera using aruco markers * calibrate_camera_charucoIt’s the same function as calibrate_camera_charuco but without calibration error estimation. * calibrate_camera_charuco_defIt’s the same function as calibrate_camera_charuco but without calibration error estimation. * calibrate_camera_charuco_extendedCalibrate a camera using Charuco corners * calibrate_camera_charuco_extended_defCalibrate a camera using Charuco corners * detect_charuco_diamondDeprecatedDetect ChArUco Diamond markers * detect_charuco_diamond_defDeprecatedDetect ChArUco Diamond markers * detect_markersDeprecateddetect markers * detect_markers_defDeprecateddetect markers * draw_charuco_diamondDraw a ChArUco Diamond marker * draw_charuco_diamond_defDraw a ChArUco Diamond marker * draw_planar_boardDeprecateddraw planar board * estimate_pose_boardDeprecated**Deprecated**: Use cv::solvePnP * estimate_pose_board_defDeprecated**Deprecated**: Use cv::solvePnP * estimate_pose_charuco_boardPose estimation for a ChArUco board given some of their corners * estimate_pose_charuco_board_defPose estimation for a ChArUco board given some of their corners * estimate_pose_single_markersDeprecated**Deprecated**: Use cv::solvePnP * estimate_pose_single_markers_defDeprecated**Deprecated**: Use cv::solvePnP * get_board_object_and_image_pointsDeprecatedget board object and image points * interpolate_corners_charucoDeprecatedInterpolate position of ChArUco board corners * interpolate_corners_charuco_defDeprecatedInterpolate position of ChArUco board corners * refine_detected_markersDeprecatedrefine detected markers * refine_detected_markers_defDeprecatedrefine detected markers * test_charuco_corners_collinearDeprecated**Deprecated**: Use CharucoBoard::checkCharucoCornersCollinear Module opencv::bgsegm === Improved Background-Foreground Segmentation Methods --- Modules --- * prelude Structs --- * 
BackgroundSubtractorCNTBackground subtraction based on counting. * BackgroundSubtractorGMGBackground Subtractor module based on the algorithm given in Gold2012 . * BackgroundSubtractorGSOCImplementation of the different yet better algorithm which is called GSOC, as it was implemented during GSOC and was not originated from any paper. * BackgroundSubtractorLSBPBackground Subtraction using Local SVD Binary Pattern. More details about the algorithm can be found at LGuo2016 * BackgroundSubtractorLSBPDescThis is for calculation of the LSBP descriptors. * BackgroundSubtractorMOGGaussian Mixture-based Background/Foreground Segmentation Algorithm. * SyntheticSequenceGeneratorSynthetic frame sequence generator for testing background subtraction algorithms. Enums --- * LSBPCameraMotionCompensation Constants --- * LSBP_CAMERA_MOTION_COMPENSATION_LK * LSBP_CAMERA_MOTION_COMPENSATION_NONE Traits --- * BackgroundSubtractorCNTTraitMutable methods for crate::bgsegm::BackgroundSubtractorCNT * BackgroundSubtractorCNTTraitConstConstant methods for crate::bgsegm::BackgroundSubtractorCNT * BackgroundSubtractorGMGTraitMutable methods for crate::bgsegm::BackgroundSubtractorGMG * BackgroundSubtractorGMGTraitConstConstant methods for crate::bgsegm::BackgroundSubtractorGMG * BackgroundSubtractorGSOCTraitMutable methods for crate::bgsegm::BackgroundSubtractorGSOC * BackgroundSubtractorGSOCTraitConstConstant methods for crate::bgsegm::BackgroundSubtractorGSOC * BackgroundSubtractorLSBPDescTraitMutable methods for crate::bgsegm::BackgroundSubtractorLSBPDesc * BackgroundSubtractorLSBPDescTraitConstConstant methods for crate::bgsegm::BackgroundSubtractorLSBPDesc * BackgroundSubtractorLSBPTraitMutable methods for crate::bgsegm::BackgroundSubtractorLSBP * BackgroundSubtractorLSBPTraitConstConstant methods for crate::bgsegm::BackgroundSubtractorLSBP * BackgroundSubtractorMOGTraitMutable methods for crate::bgsegm::BackgroundSubtractorMOG * BackgroundSubtractorMOGTraitConstConstant methods for 
crate::bgsegm::BackgroundSubtractorMOG * SyntheticSequenceGeneratorTraitMutable methods for crate::bgsegm::SyntheticSequenceGenerator * SyntheticSequenceGeneratorTraitConstConstant methods for crate::bgsegm::SyntheticSequenceGenerator Functions --- * create_background_subtractor_cntCreates a CNT Background Subtractor * create_background_subtractor_cnt_defCreates a CNT Background Subtractor * create_background_subtractor_gmgCreates a GMG Background Subtractor * create_background_subtractor_gmg_defCreates a GMG Background Subtractor * create_background_subtractor_gsocCreates an instance of BackgroundSubtractorGSOC algorithm. * create_background_subtractor_gsoc_defCreates an instance of BackgroundSubtractorGSOC algorithm. * create_background_subtractor_lsbpCreates an instance of BackgroundSubtractorLSBP algorithm. * create_background_subtractor_lsbp_defCreates an instance of BackgroundSubtractorLSBP algorithm. * create_background_subtractor_mogCreates mixture-of-gaussian background subtractor * create_background_subtractor_mog_defCreates mixture-of-gaussian background subtractor * create_synthetic_sequence_generatorCreates an instance of SyntheticSequenceGenerator. * create_synthetic_sequence_generator_defCreates an instance of SyntheticSequenceGenerator. Module opencv::bioinspired === Biologically inspired vision models and derivated tools --- The module provides biological visual systems models (human visual system and others). It also provides derivated objects that take advantage of those bio-inspired models. [bioinspired_retina] Modules --- * prelude Structs --- * Retinaclass which allows the Gipsa/Listic Labs model to be used with OpenCV. * RetinaFastToneMappinga wrapper class which allows the tone mapping algorithm of Meylan&al(2007) to be used with OpenCV. 
* RetinaParametersretina model parameters structure * RetinaParameters_IplMagnoParametersInner Plexiform Layer Magnocellular channel (IplMagno) * RetinaParameters_OPLandIplParvoParametersOuter Plexiform Layer (OPL) and Inner Plexiform Layer Parvocellular (IplParvo) parameters * SegmentationParametersparameter structure that stores the transient events detector setup parameters * TransientAreasSegmentationModuleclass which provides a transient/moving areas segmentation module Constants --- * RETINA_COLOR_BAYERstandard bayer sampling * RETINA_COLOR_DIAGONALcolor sampling is RGBRGBRGB
, line 2 BRGBRGBRG, line 3 GBRGBRGBR
 * RETINA_COLOR_RANDOMeach pixel position is either R, G or B in a random choice Traits --- * RetinaFastToneMappingTraitMutable methods for crate::bioinspired::RetinaFastToneMapping * RetinaFastToneMappingTraitConstConstant methods for crate::bioinspired::RetinaFastToneMapping * RetinaParametersTraitMutable methods for crate::bioinspired::RetinaParameters * RetinaParametersTraitConstConstant methods for crate::bioinspired::RetinaParameters * RetinaTraitMutable methods for crate::bioinspired::Retina * RetinaTraitConstConstant methods for crate::bioinspired::Retina * TransientAreasSegmentationModuleTraitMutable methods for crate::bioinspired::TransientAreasSegmentationModule * TransientAreasSegmentationModuleTraitConstConstant methods for crate::bioinspired::TransientAreasSegmentationModule Module opencv::calib3d === Camera Calibration and 3D Reconstruction --- The functions in this section use a so-called pinhole camera model. The view of a scene is obtained by projecting a scene’s 3D point ![inline formula](https://latex.codecogs.com/png.latex?P%5Fw) into the image plane using a perspective transformation which forms the corresponding pixel ![inline formula](https://latex.codecogs.com/png.latex?p). Both ![inline formula](https://latex.codecogs.com/png.latex?P%5Fw) The distortion-free projective transformation given by a pinhole camera model is shown below. 
and ![inline formula](https://latex.codecogs.com/png.latex?p) are represented in homogeneous coordinates, i.e. as 3D and 2D homogeneous vectors, respectively.
![block formula](https://latex.codecogs.com/png.latex?s%20%5C%3B%20p%20%3D%20A%20%5Cbegin%7Bbmatrix%7D%20R%7Ct%20%5Cend%7Bbmatrix%7D%20P%5Fw%2C) where ![inline formula](https://latex.codecogs.com/png.latex?P%5Fw) is a 3D point expressed with respect to the world coordinate system, ![inline formula](https://latex.codecogs.com/png.latex?p) is a 2D pixel in the image plane, ![inline formula](https://latex.codecogs.com/png.latex?A) is the camera intrinsic matrix, ![inline formula](https://latex.codecogs.com/png.latex?R) and ![inline formula](https://latex.codecogs.com/png.latex?t) are the rotation and translation that describe the change of coordinates from world to camera coordinate systems (or camera frame) and ![inline formula](https://latex.codecogs.com/png.latex?s) is the projective transformation’s arbitrary scaling and not part of the camera model. The camera intrinsic matrix ![inline formula](https://latex.codecogs.com/png.latex?A) (notation used as in Zhang2000 and also generally notated as ![inline formula](https://latex.codecogs.com/png.latex?K)) projects 3D points given in the camera coordinate system to 2D pixel coordinates, i.e. 
![block formula](https://latex.codecogs.com/png.latex?p%20%3D%20A%20P%5Fc%2E) The camera intrinsic matrix ![inline formula](https://latex.codecogs.com/png.latex?A) is composed of the focal lengths ![inline formula](https://latex.codecogs.com/png.latex?f%5Fx) and ![inline formula](https://latex.codecogs.com/png.latex?f%5Fy), which are expressed in pixel units, and the principal point ![inline formula](https://latex.codecogs.com/png.latex?%28c%5Fx%2C%20c%5Fy%29), that is usually close to the image center: ![block formula](https://latex.codecogs.com/png.latex?A%20%3D%20%5Cbegin%7Bbmatrix%7D%20f%5Fx%20%26%200%20%26%20c%5Fx%5C%5C%200%20%26%20f%5Fy%20%26%20c%5Fy%5C%5C%200%20%26%200%20%26%201%20%5Cend%7Bbmatrix%7D%2C) and thus ![block formula](https://latex.codecogs.com/png.latex?s%20%5Cbegin%7Bbmatrix%7D%20u%5C%5C%20v%5C%5C%201%20%5Cend%7Bbmatrix%7D%20%3D%20%5Cbegin%7Bbmatrix%7D%20f%5Fx%20%26%200%20%26%20c%5Fx%5C%5C%200%20%26%20f%5Fy%20%26%20c%5Fy%5C%5C%200%20%26%200%20%26%201%20%5Cend%7Bbmatrix%7D%20%5Cbegin%7Bbmatrix%7D%20X%5Fc%5C%5C%20Y%5Fc%5C%5C%20Z%5Fc%20%5Cend%7Bbmatrix%7D%2E) The matrix of intrinsic parameters does not depend on the scene viewed. So, once estimated, it can be re-used as long as the focal length is fixed (in case of a zoom lens). Thus, if an image from the camera is scaled by a factor, all of these parameters need to be scaled (multiplied/divided, respectively) by the same factor. The joint rotation-translation matrix ![inline formula](https://latex.codecogs.com/png.latex?%5BR%7Ct%5D) is the matrix product of a projective transformation and a homogeneous transformation. 
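The intrinsic projection above, s [u, v, 1]ᵀ = A [X_c, Y_c, Z_c]ᵀ, can be written out in a few lines of plain Rust with the projective scale divided out as s = Z_c. This is an illustrative standalone sketch, not part of the opencv crate's API; the `Intrinsics` struct and `project` function are hypothetical names:

```rust
/// Camera intrinsics: focal lengths (in pixels) and principal point.
/// (Illustrative struct, not part of the opencv crate.)
struct Intrinsics { fx: f64, fy: f64, cx: f64, cy: f64 }

/// Project a point in camera coordinates to pixel coordinates,
/// dividing out the projective scale s = Z_c.
fn project(k: &Intrinsics, p_c: [f64; 3]) -> Option<[f64; 2]> {
    let [x, y, z] = p_c;
    if z == 0.0 {
        return None; // projection is undefined on the Z_c = 0 plane
    }
    Some([k.fx * x / z + k.cx, k.fy * y / z + k.cy])
}

fn main() {
    let k = Intrinsics { fx: 800.0, fy: 800.0, cx: 320.0, cy: 240.0 };
    // u = 800 * 0.1 / 2 + 320 = 360, v = 800 * 0.2 / 2 + 240 = 320
    println!("{:?}", project(&k, [0.1, 0.2, 2.0])); // Some([360.0, 320.0])
}
```

Returning `Option` makes the degenerate case Z_c = 0 explicit, mirroring the "If Z_c ≠ 0" caveat that appears throughout this section.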
The 3-by-4 projective transformation maps 3D points represented in camera coordinates to 2D points in the image plane and represented in normalized camera coordinates ![inline formula](https://latex.codecogs.com/png.latex?x%27%20%3D%20X%5Fc%20%2F%20Z%5Fc) and ![inline formula](https://latex.codecogs.com/png.latex?y%27%20%3D%20Y%5Fc%20%2F%20Z%5Fc): ![block formula](https://latex.codecogs.com/png.latex?Z%5Fc%20%5Cbegin%7Bbmatrix%7D%0Ax%27%20%5C%5C%0Ay%27%20%5C%5C%0A1%0A%5Cend%7Bbmatrix%7D%20%3D%20%5Cbegin%7Bbmatrix%7D%0A1%20%26%200%20%26%200%20%26%200%20%5C%5C%0A0%20%26%201%20%26%200%20%26%200%20%5C%5C%0A0%20%26%200%20%26%201%20%26%200%0A%5Cend%7Bbmatrix%7D%0A%5Cbegin%7Bbmatrix%7D%0AX%5Fc%20%5C%5C%0AY%5Fc%20%5C%5C%0AZ%5Fc%20%5C%5C%0A1%0A%5Cend%7Bbmatrix%7D%2E) The homogeneous transformation is encoded by the extrinsic parameters ![inline formula](https://latex.codecogs.com/png.latex?R) and ![inline formula](https://latex.codecogs.com/png.latex?t) and represents the change of basis from world coordinate system ![inline formula](https://latex.codecogs.com/png.latex?w) to the camera coordinate system ![inline formula](https://latex.codecogs.com/png.latex?c). 
Thus, given the representation of the point ![inline formula](https://latex.codecogs.com/png.latex?P) in world coordinates, ![inline formula](https://latex.codecogs.com/png.latex?P%5Fw), we obtain ![inline formula](https://latex.codecogs.com/png.latex?P)’s representation in the camera coordinate system, ![inline formula](https://latex.codecogs.com/png.latex?P%5Fc), by ![block formula](https://latex.codecogs.com/png.latex?P%5Fc%20%3D%20%5Cbegin%7Bbmatrix%7D%0AR%20%26%20t%20%5C%5C%0A0%20%26%201%0A%5Cend%7Bbmatrix%7D%20P%5Fw%2C) This homogeneous transformation is composed out of ![inline formula](https://latex.codecogs.com/png.latex?R), a 3-by-3 rotation matrix, and ![inline formula](https://latex.codecogs.com/png.latex?t), a 3-by-1 translation vector: ![block formula](https://latex.codecogs.com/png.latex?%5Cbegin%7Bbmatrix%7D%0AR%20%26%20t%20%5C%5C%0A0%20%26%201%0A%5Cend%7Bbmatrix%7D%20%3D%20%5Cbegin%7Bbmatrix%7D%0Ar%5F%7B11%7D%20%26%20r%5F%7B12%7D%20%26%20r%5F%7B13%7D%20%26%20t%5Fx%20%5C%5C%0Ar%5F%7B21%7D%20%26%20r%5F%7B22%7D%20%26%20r%5F%7B23%7D%20%26%20t%5Fy%20%5C%5C%0Ar%5F%7B31%7D%20%26%20r%5F%7B32%7D%20%26%20r%5F%7B33%7D%20%26%20t%5Fz%20%5C%5C%0A0%20%26%200%20%26%200%20%26%201%0A%5Cend%7Bbmatrix%7D%2C%0A) and therefore ![block formula](https://latex.codecogs.com/png.latex?%5Cbegin%7Bbmatrix%7D%0AX%5Fc%20%5C%5C%0AY%5Fc%20%5C%5C%0AZ%5Fc%20%5C%5C%0A1%0A%5Cend%7Bbmatrix%7D%20%3D%20%5Cbegin%7Bbmatrix%7D%0Ar%5F%7B11%7D%20%26%20r%5F%7B12%7D%20%26%20r%5F%7B13%7D%20%26%20t%5Fx%20%5C%5C%0Ar%5F%7B21%7D%20%26%20r%5F%7B22%7D%20%26%20r%5F%7B23%7D%20%26%20t%5Fy%20%5C%5C%0Ar%5F%7B31%7D%20%26%20r%5F%7B32%7D%20%26%20r%5F%7B33%7D%20%26%20t%5Fz%20%5C%5C%0A0%20%26%200%20%26%200%20%26%201%0A%5Cend%7Bbmatrix%7D%0A%5Cbegin%7Bbmatrix%7D%0AX%5Fw%20%5C%5C%0AY%5Fw%20%5C%5C%0AZ%5Fw%20%5C%5C%0A1%0A%5Cend%7Bbmatrix%7D%2E) Combining the projective transformation and the homogeneous transformation, we obtain the projective transformation that maps 3D points in world coordinates into 2D points 
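The extrinsic change of basis P_c = R P_w + t above can likewise be written out directly as a matrix-vector product plus a translation. A minimal plain-Rust sketch (the function name is illustrative, not crate API):

```rust
/// Apply the extrinsic change of basis P_c = R * P_w + t,
/// with R a row-major 3x3 rotation matrix and t a 3-vector.
fn world_to_camera(r: [[f64; 3]; 3], t: [f64; 3], p_w: [f64; 3]) -> [f64; 3] {
    let mut p_c = [0.0; 3];
    for i in 0..3 {
        p_c[i] = r[i][0] * p_w[0] + r[i][1] * p_w[1] + r[i][2] * p_w[2] + t[i];
    }
    p_c
}

fn main() {
    // With the identity rotation, the transform reduces to a pure translation.
    let r = [[1.0, 0.0, 0.0], [0.0, 1.0, 0.0], [0.0, 0.0, 1.0]];
    let p_c = world_to_camera(r, [0.0, 0.0, 5.0], [1.0, 2.0, 3.0]);
    println!("{:?}", p_c); // [1.0, 2.0, 8.0]
}
```

This is exactly the upper three rows of the homogeneous 4x4 form shown above; the bottom row [0 0 0 1] only serves to keep the lifted vector homogeneous.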
in the image plane and in normalized camera coordinates: ![block formula](https://latex.codecogs.com/png.latex?Z%5Fc%20%5Cbegin%7Bbmatrix%7D%0Ax%27%20%5C%5C%0Ay%27%20%5C%5C%0A1%0A%5Cend%7Bbmatrix%7D%20%3D%20%5Cbegin%7Bbmatrix%7D%20R%7Ct%20%5Cend%7Bbmatrix%7D%20%5Cbegin%7Bbmatrix%7D%0AX%5Fw%20%5C%5C%0AY%5Fw%20%5C%5C%0AZ%5Fw%20%5C%5C%0A1%0A%5Cend%7Bbmatrix%7D%20%3D%20%5Cbegin%7Bbmatrix%7D%0Ar%5F%7B11%7D%20%26%20r%5F%7B12%7D%20%26%20r%5F%7B13%7D%20%26%20t%5Fx%20%5C%5C%0Ar%5F%7B21%7D%20%26%20r%5F%7B22%7D%20%26%20r%5F%7B23%7D%20%26%20t%5Fy%20%5C%5C%0Ar%5F%7B31%7D%20%26%20r%5F%7B32%7D%20%26%20r%5F%7B33%7D%20%26%20t%5Fz%0A%5Cend%7Bbmatrix%7D%0A%5Cbegin%7Bbmatrix%7D%0AX%5Fw%20%5C%5C%0AY%5Fw%20%5C%5C%0AZ%5Fw%20%5C%5C%0A1%0A%5Cend%7Bbmatrix%7D%2C) with ![inline formula](https://latex.codecogs.com/png.latex?x%27%20%3D%20X%5Fc%20%2F%20Z%5Fc) and ![inline formula](https://latex.codecogs.com/png.latex?y%27%20%3D%20Y%5Fc%20%2F%20Z%5Fc). Putting the equations for intrinsics and extrinsics together, we can write out ![inline formula](https://latex.codecogs.com/png.latex?s%20%5C%3B%20p%20%3D%20A%20%5Cbegin%7Bbmatrix%7D%20R%7Ct%20%5Cend%7Bbmatrix%7D%20P%5Fw) as ![block formula](https://latex.codecogs.com/png.latex?s%20%5Cbegin%7Bbmatrix%7D%20u%5C%5C%20v%5C%5C%201%20%5Cend%7Bbmatrix%7D%20%3D%20%5Cbegin%7Bbmatrix%7D%20f%5Fx%20%26%200%20%26%20c%5Fx%5C%5C%200%20%26%20f%5Fy%20%26%20c%5Fy%5C%5C%200%20%26%200%20%26%201%20%5Cend%7Bbmatrix%7D%0A%5Cbegin%7Bbmatrix%7D%0Ar%5F%7B11%7D%20%26%20r%5F%7B12%7D%20%26%20r%5F%7B13%7D%20%26%20t%5Fx%20%5C%5C%0Ar%5F%7B21%7D%20%26%20r%5F%7B22%7D%20%26%20r%5F%7B23%7D%20%26%20t%5Fy%20%5C%5C%0Ar%5F%7B31%7D%20%26%20r%5F%7B32%7D%20%26%20r%5F%7B33%7D%20%26%20t%5Fz%0A%5Cend%7Bbmatrix%7D%0A%5Cbegin%7Bbmatrix%7D%0AX%5Fw%20%5C%5C%0AY%5Fw%20%5C%5C%0AZ%5Fw%20%5C%5C%0A1%0A%5Cend%7Bbmatrix%7D%2E) If ![inline formula](https://latex.codecogs.com/png.latex?Z%5Fc%20%5Cne%200), the transformation above is equivalent to the following, ![block 
formula](https://latex.codecogs.com/png.latex?%5Cbegin%7Bbmatrix%7D%0Au%20%5C%5C%0Av%0A%5Cend%7Bbmatrix%7D%20%3D%20%5Cbegin%7Bbmatrix%7D%0Af%5Fx%20X%5Fc%2FZ%5Fc%20%2B%20c%5Fx%20%5C%5C%0Af%5Fy%20Y%5Fc%2FZ%5Fc%20%2B%20c%5Fy%0A%5Cend%7Bbmatrix%7D) with ![block formula](https://latex.codecogs.com/png.latex?%5Cbegin%7Bbmatrix%7D%20X%5Fc%5C%5C%20Y%5Fc%5C%5C%20Z%5Fc%20%5Cend%7Bbmatrix%7D%20%3D%20%5Cbegin%7Bbmatrix%7D%0AR%7Ct%0A%5Cend%7Bbmatrix%7D%20%5Cbegin%7Bbmatrix%7D%0AX%5Fw%20%5C%5C%0AY%5Fw%20%5C%5C%0AZ%5Fw%20%5C%5C%0A1%0A%5Cend%7Bbmatrix%7D%2E) The following figure illustrates the pinhole camera model. ![Pinhole camera model](https://docs.opencv.org/4.8.1/pinhole_camera_model.png) Real lenses usually have some distortion, mostly radial distortion, and slight tangential distortion. So, the above model is extended as: ![block formula](https://latex.codecogs.com/png.latex?%5Cbegin%7Bbmatrix%7D%0Au%20%5C%5C%0Av%0A%5Cend%7Bbmatrix%7D%20%3D%20%5Cbegin%7Bbmatrix%7D%0Af%5Fx%20x%27%27%20%2B%20c%5Fx%20%5C%5C%0Af%5Fy%20y%27%27%20%2B%20c%5Fy%0A%5Cend%7Bbmatrix%7D) where ![block formula](https://latex.codecogs.com/png.latex?%5Cbegin%7Bbmatrix%7D%0Ax%27%27%20%5C%5C%0Ay%27%27%0A%5Cend%7Bbmatrix%7D%20%3D%20%5Cbegin%7Bbmatrix%7D%0Ax%27%20%5Cfrac%7B1%20%2B%20k%5F1%20r%5E2%20%2B%20k%5F2%20r%5E4%20%2B%20k%5F3%20r%5E6%7D%7B1%20%2B%20k%5F4%20r%5E2%20%2B%20k%5F5%20r%5E4%20%2B%20k%5F6%20r%5E6%7D%20%2B%202%20p%5F1%20x%27%20y%27%20%2B%20p%5F2%28r%5E2%20%2B%202%20x%27%5E2%29%20%2B%20s%5F1%20r%5E2%20%2B%20s%5F2%20r%5E4%20%5C%5C%0Ay%27%20%5Cfrac%7B1%20%2B%20k%5F1%20r%5E2%20%2B%20k%5F2%20r%5E4%20%2B%20k%5F3%20r%5E6%7D%7B1%20%2B%20k%5F4%20r%5E2%20%2B%20k%5F5%20r%5E4%20%2B%20k%5F6%20r%5E6%7D%20%2B%20p%5F1%20%28r%5E2%20%2B%202%20y%27%5E2%29%20%2B%202%20p%5F2%20x%27%20y%27%20%2B%20s%5F3%20r%5E2%20%2B%20s%5F4%20r%5E4%20%5C%5C%0A%5Cend%7Bbmatrix%7D) with ![block formula](https://latex.codecogs.com/png.latex?r%5E2%20%3D%20x%27%5E2%20%2B%20y%27%5E2) and ![block 
formula](https://latex.codecogs.com/png.latex?%5Cbegin%7Bbmatrix%7D%0Ax%27%5C%5C%0Ay%27%0A%5Cend%7Bbmatrix%7D%20%3D%20%5Cbegin%7Bbmatrix%7D%0AX%5Fc%2FZ%5Fc%20%5C%5C%0AY%5Fc%2FZ%5Fc%0A%5Cend%7Bbmatrix%7D%2C) if ![inline formula](https://latex.codecogs.com/png.latex?Z%5Fc%20%5Cne%200). The distortion parameters are the radial coefficients ![inline formula](https://latex.codecogs.com/png.latex?k%5F1), ![inline formula](https://latex.codecogs.com/png.latex?k%5F2), ![inline formula](https://latex.codecogs.com/png.latex?k%5F3), ![inline formula](https://latex.codecogs.com/png.latex?k%5F4), ![inline formula](https://latex.codecogs.com/png.latex?k%5F5), and ![inline formula](https://latex.codecogs.com/png.latex?k%5F6) ,![inline formula](https://latex.codecogs.com/png.latex?p%5F1) and ![inline formula](https://latex.codecogs.com/png.latex?p%5F2) are the tangential distortion coefficients, and ![inline formula](https://latex.codecogs.com/png.latex?s%5F1), ![inline formula](https://latex.codecogs.com/png.latex?s%5F2), ![inline formula](https://latex.codecogs.com/png.latex?s%5F3), and ![inline formula](https://latex.codecogs.com/png.latex?s%5F4), are the thin prism distortion coefficients. Higher-order coefficients are not considered in OpenCV. The next figures show two common types of radial distortion: barrel distortion (![inline formula](https://latex.codecogs.com/png.latex?%201%20%2B%20k%5F1%20r%5E2%20%2B%20k%5F2%20r%5E4%20%2B%20k%5F3%20r%5E6%20) monotonically decreasing) and pincushion distortion (![inline formula](https://latex.codecogs.com/png.latex?%201%20%2B%20k%5F1%20r%5E2%20%2B%20k%5F2%20r%5E4%20%2B%20k%5F3%20r%5E6%20) monotonically increasing). Radial distortion is always monotonic for real lenses, and if the estimator produces a non-monotonic result, this should be considered a calibration failure. More generally, radial distortion must be monotonic and the distortion function must be bijective. 
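As a concrete sketch of the distortion model, the snippet below applies the radial polynomial and the tangential terms p_1, p_2 to normalized coordinates (x', y'). For brevity it keeps only the numerator radial terms k_1..k_3 and omits the rational denominator k_4..k_6 and the thin-prism terms s_1..s_4; struct and function names are illustrative, not part of the opencv crate:

```rust
/// Distortion coefficients in OpenCV's leading (k1, k2, p1, p2, k3) ordering.
/// Higher-order terms (k4..k6, s1..s4, tau_x, tau_y) are omitted in this sketch.
struct Distortion { k1: f64, k2: f64, p1: f64, p2: f64, k3: f64 }

/// Map normalized coordinates (x', y') to distorted (x'', y'')
/// using the radial + tangential model.
fn distort(d: &Distortion, x: f64, y: f64) -> (f64, f64) {
    let r2 = x * x + y * y; // r^2 = x'^2 + y'^2
    let radial = 1.0 + d.k1 * r2 + d.k2 * r2 * r2 + d.k3 * r2 * r2 * r2;
    let xd = x * radial + 2.0 * d.p1 * x * y + d.p2 * (r2 + 2.0 * x * x);
    let yd = y * radial + d.p1 * (r2 + 2.0 * y * y) + 2.0 * d.p2 * x * y;
    (xd, yd)
}

fn main() {
    // Zero coefficients leave the point unchanged.
    let none = Distortion { k1: 0.0, k2: 0.0, p1: 0.0, p2: 0.0, k3: 0.0 };
    assert_eq!(distort(&none, 0.1, 0.2), (0.1, 0.2));
    // Mild barrel distortion (negative k1) pulls the point toward the center.
    let barrel = Distortion { k1: -0.2, k2: 0.0, p1: 0.0, p2: 0.0, k3: 0.0 };
    let (xd, yd) = distort(&barrel, 0.1, 0.2);
    assert!(xd < 0.1 && yd < 0.2);
    println!("ok");
}
```

The barrel/pincushion distinction in the text corresponds directly to whether the `radial` factor here decreases or increases monotonically with r².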
A failed estimation result may look deceptively good near the image center but will work poorly in e.g. AR/SFM applications. The optimization method used in OpenCV camera calibration does not include these constraints as the framework does not support the required integer programming and polynomial inequalities. See issue #15992 for additional information. ![](https://docs.opencv.org/4.8.1/distortion_examples.png) ![](https://docs.opencv.org/4.8.1/distortion_examples2.png) In some cases, the image sensor may be tilted in order to focus an oblique plane in front of the camera (Scheimpflug principle). This can be useful for particle image velocimetry (PIV) or triangulation with a laser fan. The tilt causes a perspective distortion of ![inline formula](https://latex.codecogs.com/png.latex?x%27%27) and ![inline formula](https://latex.codecogs.com/png.latex?y%27%27). This distortion can be modeled in the following way, see e.g. Louhichi07. ![block formula](https://latex.codecogs.com/png.latex?%5Cbegin%7Bbmatrix%7D%0Au%20%5C%5C%0Av%0A%5Cend%7Bbmatrix%7D%20%3D%20%5Cbegin%7Bbmatrix%7D%0Af%5Fx%20x%27%27%27%20%2B%20c%5Fx%20%5C%5C%0Af%5Fy%20y%27%27%27%20%2B%20c%5Fy%0A%5Cend%7Bbmatrix%7D%2C) where ![block formula](https://latex.codecogs.com/png.latex?s%5Cbegin%7Bbmatrix%7D%20x%27%27%27%5C%5C%20y%27%27%27%5C%5C%201%20%5Cend%7Bbmatrix%7D%20%3D%0A%5Cvecthreethree%7BR%5F%7B33%7D%28%5Ctau%5Fx%2C%20%5Ctau%5Fy%29%7D%7B0%7D%7B%2DR%5F%7B13%7D%28%5Ctau%5Fx%2C%20%5Ctau%5Fy%29%7D%0A%7B0%7D%7BR%5F%7B33%7D%28%5Ctau%5Fx%2C%20%5Ctau%5Fy%29%7D%7B%2DR%5F%7B23%7D%28%5Ctau%5Fx%2C%20%5Ctau%5Fy%29%7D%0A%7B0%7D%7B0%7D%7B1%7D%20R%28%5Ctau%5Fx%2C%20%5Ctau%5Fy%29%20%5Cbegin%7Bbmatrix%7D%20x%27%27%5C%5C%20y%27%27%5C%5C%201%20%5Cend%7Bbmatrix%7D) and the matrix ![inline formula](https://latex.codecogs.com/png.latex?R%28%5Ctau%5Fx%2C%20%5Ctau%5Fy%29) is defined by two rotations with angular parameter ![inline formula](https://latex.codecogs.com/png.latex?%5Ctau%5Fx) and ![inline 
formula](https://latex.codecogs.com/png.latex?%5Ctau%5Fy), respectively, ![block formula](https://latex.codecogs.com/png.latex?%0AR%28%5Ctau%5Fx%2C%20%5Ctau%5Fy%29%20%3D%0A%5Cbegin%7Bbmatrix%7D%20%5Ccos%28%5Ctau%5Fy%29%20%26%200%20%26%20%2D%5Csin%28%5Ctau%5Fy%29%5C%5C%200%20%26%201%20%26%200%5C%5C%20%5Csin%28%5Ctau%5Fy%29%20%26%200%20%26%20%5Ccos%28%5Ctau%5Fy%29%20%5Cend%7Bbmatrix%7D%0A%5Cbegin%7Bbmatrix%7D%201%20%26%200%20%26%200%5C%5C%200%20%26%20%5Ccos%28%5Ctau%5Fx%29%20%26%20%5Csin%28%5Ctau%5Fx%29%5C%5C%200%20%26%20%2D%5Csin%28%5Ctau%5Fx%29%20%26%20%5Ccos%28%5Ctau%5Fx%29%20%5Cend%7Bbmatrix%7D%20%3D%0A%5Cbegin%7Bbmatrix%7D%20%5Ccos%28%5Ctau%5Fy%29%20%26%20%5Csin%28%5Ctau%5Fy%29%5Csin%28%5Ctau%5Fx%29%20%26%20%2D%5Csin%28%5Ctau%5Fy%29%5Ccos%28%5Ctau%5Fx%29%5C%5C%200%20%26%20%5Ccos%28%5Ctau%5Fx%29%20%26%20%5Csin%28%5Ctau%5Fx%29%5C%5C%20%5Csin%28%5Ctau%5Fy%29%20%26%20%2D%5Ccos%28%5Ctau%5Fy%29%5Csin%28%5Ctau%5Fx%29%20%26%20%5Ccos%28%5Ctau%5Fy%29%5Ccos%28%5Ctau%5Fx%29%20%5Cend%7Bbmatrix%7D%2E%0A) In the functions below the coefficients are passed or returned as ![block formula](https://latex.codecogs.com/png.latex?%28k%5F1%2C%20k%5F2%2C%20p%5F1%2C%20p%5F2%5B%2C%20k%5F3%5B%2C%20k%5F4%2C%20k%5F5%2C%20k%5F6%20%5B%2C%20s%5F1%2C%20s%5F2%2C%20s%5F3%2C%20s%5F4%5B%2C%20%5Ctau%5Fx%2C%20%5Ctau%5Fy%5D%5D%5D%5D%29) vector. That is, if the vector contains four elements, it means that ![inline formula](https://latex.codecogs.com/png.latex?k%5F3%3D0) . The distortion coefficients do not depend on the scene viewed. Thus, they also belong to the intrinsic camera parameters. And they remain the same regardless of the captured image resolution. 
If, for example, a camera has been calibrated on images of 320 x 240 resolution, absolutely the same distortion coefficients can be used for 640 x 480 images from the same camera while ![inline formula](https://latex.codecogs.com/png.latex?f%5Fx), ![inline formula](https://latex.codecogs.com/png.latex?f%5Fy), ![inline formula](https://latex.codecogs.com/png.latex?c%5Fx), and ![inline formula](https://latex.codecogs.com/png.latex?c%5Fy) need to be scaled appropriately. The functions below use the above model to do the following: * Project 3D points to the image plane given intrinsic and extrinsic parameters. * Compute extrinsic parameters given intrinsic parameters, a few 3D points, and their projections. * Estimate intrinsic and extrinsic camera parameters from several views of a known calibration pattern (every view is described by several 3D-2D point correspondences). * Estimate the relative position and orientation of the stereo camera “heads” and compute the *rectification* transformation that makes the camera optical axes parallel. **Homogeneous Coordinates** Homogeneous Coordinates are a system of coordinates that are used in projective geometry. Their use allows to represent points at infinity by finite coordinates and simplifies formulas when compared to the cartesian counterparts, e.g. they have the advantage that affine transformations can be expressed as linear homogeneous transformation. One obtains the homogeneous vector ![inline formula](https://latex.codecogs.com/png.latex?P%5Fh) by appending a 1 along an n-dimensional cartesian vector ![inline formula](https://latex.codecogs.com/png.latex?P) e.g. 
for a 3D cartesian vector the mapping ![inline formula](https://latex.codecogs.com/png.latex?P%20%5Crightarrow%20P%5Fh) is: ![block formula](https://latex.codecogs.com/png.latex?%5Cbegin%7Bbmatrix%7D%0AX%20%5C%5C%0AY%20%5C%5C%0AZ%0A%5Cend%7Bbmatrix%7D%20%5Crightarrow%20%5Cbegin%7Bbmatrix%7D%0AX%20%5C%5C%0AY%20%5C%5C%0AZ%20%5C%5C%0A1%0A%5Cend%7Bbmatrix%7D%2E) For the inverse mapping ![inline formula](https://latex.codecogs.com/png.latex?P%5Fh%20%5Crightarrow%20P), one divides all elements of the homogeneous vector by its last element, e.g. for a 3D homogeneous vector one gets its 2D cartesian counterpart by: ![block formula](https://latex.codecogs.com/png.latex?%5Cbegin%7Bbmatrix%7D%0AX%20%5C%5C%0AY%20%5C%5C%0AW%0A%5Cend%7Bbmatrix%7D%20%5Crightarrow%20%5Cbegin%7Bbmatrix%7D%0AX%20%2F%20W%20%5C%5C%0AY%20%2F%20W%0A%5Cend%7Bbmatrix%7D%2C) if ![inline formula](https://latex.codecogs.com/png.latex?W%20%5Cne%200). Due to this mapping, all multiples ![inline formula](https://latex.codecogs.com/png.latex?k%20P%5Fh), for ![inline formula](https://latex.codecogs.com/png.latex?k%20%5Cne%200), of a homogeneous point represent the same point ![inline formula](https://latex.codecogs.com/png.latex?P%5Fh). An intuitive understanding of this property is that under a projective transformation, all multiples of ![inline formula](https://latex.codecogs.com/png.latex?P%5Fh) are mapped to the same point. This is the physical observation one does for pinhole cameras, as all points along a ray through the camera’s pinhole are projected to the same image point, e.g. all points along the red ray in the image of the pinhole camera model above would be mapped to the same image coordinate. This property is also the source for the scale ambiguity s in the equation of the pinhole camera model. 
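The two mappings above are easy to sketch directly; the crate's `convert_points_to_homogeneous` / `convert_points_from_homogeneous` functions listed in this module perform the same conversions on OpenCV point arrays. A minimal plain-Rust sketch:

```rust
// Append a 1 to map a cartesian vector P to its homogeneous counterpart Ph.
fn to_homogeneous(p: &[f64]) -> Vec<f64> {
    let mut h = p.to_vec();
    h.push(1.0);
    h
}

// Divide by the last element W to map Ph back to cartesian coordinates.
// Returns None when W == 0, i.e. for a point at infinity.
fn from_homogeneous(h: &[f64]) -> Option<Vec<f64>> {
    let w = *h.last()?;
    if w == 0.0 {
        return None; // a point at infinity has no cartesian counterpart
    }
    Some(h[..h.len() - 1].iter().map(|c| c / w).collect())
}

fn main() {
    let ph = to_homogeneous(&[2.0, 4.0, 6.0]); // [2, 4, 6, 1]
    // All non-zero multiples k*Ph map back to the same cartesian point:
    let p = from_homogeneous(&[4.0, 8.0, 12.0, 2.0]); // same point as [2, 4, 6, 1]
    println!("{:?} {:?}", ph, p);
}
```

Note how `from_homogeneous(&[4.0, 8.0, 12.0, 2.0])` and `from_homogeneous(&[2.0, 4.0, 6.0, 1.0])` return the same point, which is the scale ambiguity described above.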
As mentioned, by using homogeneous coordinates we can express any change of basis parameterized by ![inline formula](https://latex.codecogs.com/png.latex?R) and ![inline formula](https://latex.codecogs.com/png.latex?t) as a linear transformation, e.g. the change of basis from coordinate system 0 to coordinate system 1 becomes: ![block formula](https://latex.codecogs.com/png.latex?P%5F1%20%3D%20R%20P%5F0%20%2B%20t%20%5Crightarrow%20P%5F%7Bh%5F1%7D%20%3D%20%5Cbegin%7Bbmatrix%7D%0AR%20%26%20t%20%5C%5C%0A0%20%26%201%0A%5Cend%7Bbmatrix%7D%20P%5F%7Bh%5F0%7D%2E) Note: * Many functions in this module take a camera intrinsic matrix as an input parameter. Although all functions assume the same structure of this parameter, they may name it differently. The parameter’s description, however, will be clear in that a camera intrinsic matrix with the structure shown above is required. * A calibration sample for 3 cameras in a horizontal position can be found at opencv_source_code/samples/cpp/3calibration.cpp * A calibration sample based on a sequence of images can be found at opencv_source_code/samples/cpp/calibration.cpp * A calibration sample in order to do 3D reconstruction can be found at opencv_source_code/samples/cpp/build3dmodel.cpp * A calibration example on stereo calibration can be found at opencv_source_code/samples/cpp/stereo_calib.cpp * A calibration example on stereo matching can be found at opencv_source_code/samples/cpp/stereo_match.cpp * (Python) A camera calibration sample can be found at opencv_source_code/samples/python/calibrate.py Fisheye camera model --- Definitions: Let P be a point in 3D of coordinates X in the world reference frame (stored in the matrix X). The coordinate vector of P in the camera reference frame is: ![block formula](https://latex.codecogs.com/png.latex?Xc%20%3D%20R%20X%20%2B%20T) where R is the rotation matrix corresponding to the rotation vector om: R = rodrigues(om); call x, y and z the 3 coordinates of Xc: ![block
formula](https://latex.codecogs.com/png.latex?x%20%3D%20Xc%5F1%20%5C%5C%20y%20%3D%20Xc%5F2%20%5C%5C%20z%20%3D%20Xc%5F3) The pinhole projection coordinates of P is [a; b] where ![block formula](https://latex.codecogs.com/png.latex?a%20%3D%20x%20%2F%20z%20%5C%20and%20%5C%20b%20%3D%20y%20%2F%20z%20%5C%5C%20r%5E2%20%3D%20a%5E2%20%2B%20b%5E2%20%5C%5C%20%5Ctheta%20%3D%20atan%28r%29) Fisheye distortion: ![block formula](https://latex.codecogs.com/png.latex?%5Ctheta%5Fd%20%3D%20%5Ctheta%20%281%20%2B%20k%5F1%20%5Ctheta%5E2%20%2B%20k%5F2%20%5Ctheta%5E4%20%2B%20k%5F3%20%5Ctheta%5E6%20%2B%20k%5F4%20%5Ctheta%5E8%29) The distorted point coordinates are [x’; y’] where ![block formula](https://latex.codecogs.com/png.latex?x%27%20%3D%20%28%5Ctheta%5Fd%20%2F%20r%29%20a%20%5C%5C%20y%27%20%3D%20%28%5Ctheta%5Fd%20%2F%20r%29%20b%20) Finally, conversion into pixel coordinates: The final pixel coordinates vector [u; v] where: ![block formula](https://latex.codecogs.com/png.latex?u%20%3D%20f%5Fx%20%28x%27%20%2B%20%5Calpha%20y%27%29%20%2B%20c%5Fx%20%5C%5C%0A%20%20%20%20v%20%3D%20f%5Fy%20y%27%20%2B%20c%5Fy) Summary: Generic camera model Kannala2006 with perspective projection and without distortion correction C API --- Modules --- * prelude Structs --- * CirclesGridFinderParameters * LMSolverLevenberg-Marquardt solver. Starting with the specified vector of parameters it optimizes the target vector criteria “err” (finds local minima of each target vector component absolute value). * LMSolver_Callback * StereoBMClass for computing stereo correspondence using the block matching algorithm, introduced and contributed to OpenCV by <NAME>. * StereoMatcherThe base class for stereo correspondence algorithms. * StereoSGBMThe class implements the modified H. 
Hirschmuller algorithm HH08 that differs from the original one as follows: * UsacParams Enums --- * CirclesGridFinderParameters_GridType * HandEyeCalibrationMethod * LocalOptimMethod * NeighborSearchMethod * PolishingMethod * RobotWorldHandEyeCalibrationMethod * SamplingMethod * ScoreMethod * SolvePnPMethod * UndistortTypescv::undistort mode Constants --- * CALIB_CB_ACCURACY * CALIB_CB_ADAPTIVE_THRESH * CALIB_CB_ASYMMETRIC_GRID * CALIB_CB_CLUSTERING * CALIB_CB_EXHAUSTIVE * CALIB_CB_FAST_CHECK * CALIB_CB_FILTER_QUADS * CALIB_CB_LARGER * CALIB_CB_MARKER * CALIB_CB_NORMALIZE_IMAGE * CALIB_CB_SYMMETRIC_GRID * CALIB_FIX_ASPECT_RATIO * CALIB_FIX_FOCAL_LENGTH * CALIB_FIX_INTRINSIC * CALIB_FIX_K1 * CALIB_FIX_K2 * CALIB_FIX_K3 * CALIB_FIX_K4 * CALIB_FIX_K5 * CALIB_FIX_K6 * CALIB_FIX_PRINCIPAL_POINT * CALIB_FIX_S1_S2_S3_S4 * CALIB_FIX_TANGENT_DIST * CALIB_FIX_TAUX_TAUY * CALIB_HAND_EYE_ANDREFFOn-line Hand-Eye Calibration Andreff99 * CALIB_HAND_EYE_DANIILIDISHand-Eye Calibration Using Dual Quaternions Daniilidis98 * CALIB_HAND_EYE_HORAUDHand-eye Calibration Horaud95 * CALIB_HAND_EYE_PARKRobot Sensor Calibration: Solving AX = XB on the Euclidean Group Park94 * CALIB_HAND_EYE_TSAIA New Technique for Fully Autonomous and Efficient 3D Robotics Hand/Eye Calibration Tsai89 * CALIB_NINTRINSIC * CALIB_RATIONAL_MODEL * CALIB_ROBOT_WORLD_HAND_EYE_LISimultaneous robot-world and hand-eye calibration using dual-quaternions and kronecker product Li2010SimultaneousRA * CALIB_ROBOT_WORLD_HAND_EYE_SHAHSolving the robot-world/hand-eye calibration problem using the kronecker product Shah2013SolvingTR * CALIB_SAME_FOCAL_LENGTH * CALIB_THIN_PRISM_MODEL * CALIB_TILTED_MODEL * CALIB_USE_EXTRINSIC_GUESSfor stereoCalibrate * CALIB_USE_INTRINSIC_GUESS * CALIB_USE_LUuse LU instead of SVD decomposition for solving. much faster but potentially less precise * CALIB_USE_QRuse QR instead of SVD decomposition for solving. 
Faster but potentially less precise * CALIB_ZERO_DISPARITY * CALIB_ZERO_TANGENT_DIST * COV_POLISHER * CirclesGridFinderParameters_ASYMMETRIC_GRID * CirclesGridFinderParameters_SYMMETRIC_GRID * FM_7POINT7-point algorithm * FM_8POINT8-point algorithm * FM_LMEDSleast-median algorithm. 7-point algorithm is used. * FM_RANSACRANSAC algorithm. It needs at least 15 points. 7-point algorithm is used. * Fisheye_CALIB_CHECK_COND * Fisheye_CALIB_FIX_FOCAL_LENGTH * Fisheye_CALIB_FIX_INTRINSIC * Fisheye_CALIB_FIX_K1 * Fisheye_CALIB_FIX_K2 * Fisheye_CALIB_FIX_K3 * Fisheye_CALIB_FIX_K4 * Fisheye_CALIB_FIX_PRINCIPAL_POINT * Fisheye_CALIB_FIX_SKEW * Fisheye_CALIB_RECOMPUTE_EXTRINSIC * Fisheye_CALIB_USE_INTRINSIC_GUESS * Fisheye_CALIB_ZERO_DISPARITY * LMEDSleast-median of squares algorithm * LOCAL_OPTIM_GC * LOCAL_OPTIM_INNER_AND_ITER_LO * LOCAL_OPTIM_INNER_LO * LOCAL_OPTIM_NULL * LOCAL_OPTIM_SIGMA * LSQ_POLISHER * MAGSAC * NEIGH_FLANN_KNN * NEIGH_FLANN_RADIUS * NEIGH_GRID * NONE_POLISHER * PROJ_SPHERICAL_EQRECT * PROJ_SPHERICAL_ORTHO * RANSACRANSAC algorithm * RHORHO algorithm * SAMPLING_NAPSAC * SAMPLING_PROGRESSIVE_NAPSAC * SAMPLING_PROSAC * SAMPLING_UNIFORM * SCORE_METHOD_LMEDS * SCORE_METHOD_MAGSAC * SCORE_METHOD_MSAC * SCORE_METHOD_RANSAC * SOLVEPNP_AP3PAn Efficient Algebraic Solution to the Perspective-Three-Point Problem Ke17 * SOLVEPNP_DLS**Broken implementation. 
Using this flag will fall back to EPnP.** * SOLVEPNP_EPNPEPnP: Efficient Perspective-n-Point Camera Pose Estimation lepetit2009epnp * SOLVEPNP_IPPEInfinitesimal Plane-Based Pose Estimation Collins14 * SOLVEPNP_IPPE_SQUAREInfinitesimal Plane-Based Pose Estimation Collins14 * SOLVEPNP_ITERATIVEPose refinement using non-linear Levenberg-Marquardt minimization scheme Madsen04 Eade13 * SOLVEPNP_MAX_COUNTUsed for count * SOLVEPNP_P3PComplete Solution Classification for the Perspective-Three-Point Problem gao2003complete * SOLVEPNP_SQPNPSQPnP: A Consistently Fast and Globally Optimal Solution to the Perspective-n-Point Problem Terzakis2020SQPnP * SOLVEPNP_UPNP**Broken implementation. Using this flag will fall back to EPnP.** * StereoBM_PREFILTER_NORMALIZED_RESPONSE * StereoBM_PREFILTER_XSOBEL * StereoMatcher_DISP_SCALE * StereoMatcher_DISP_SHIFT * StereoSGBM_MODE_HH * StereoSGBM_MODE_HH4 * StereoSGBM_MODE_SGBM * StereoSGBM_MODE_SGBM_3WAY * USAC_ACCURATEUSAC, accurate settings * USAC_DEFAULTUSAC algorithm, default settings * USAC_FASTUSAC, fast settings * USAC_FM_8PTSUSAC, fundamental matrix 8 points * USAC_MAGSACUSAC, runs MAGSAC++ * USAC_PARALLELUSAC, parallel version * USAC_PROSACUSAC, sorted points, runs PROSAC Traits --- * LMSolverTraitMutable methods for crate::calib3d::LMSolver * LMSolverTraitConstConstant methods for crate::calib3d::LMSolver * LMSolver_CallbackTraitMutable methods for crate::calib3d::LMSolver_Callback * LMSolver_CallbackTraitConstConstant methods for crate::calib3d::LMSolver_Callback * StereoBMTraitMutable methods for crate::calib3d::StereoBM * StereoBMTraitConstConstant methods for crate::calib3d::StereoBM * StereoMatcherTraitMutable methods for crate::calib3d::StereoMatcher * StereoMatcherTraitConstConstant methods for crate::calib3d::StereoMatcher * StereoSGBMTraitMutable methods for crate::calib3d::StereoSGBM * StereoSGBMTraitConstConstant methods for crate::calib3d::StereoSGBM Functions --- * calibratePerforms camera calibration *
calibrate_cameraFinds the camera intrinsic and extrinsic parameters from several views of a calibration pattern. * calibrate_camera_def@overload * calibrate_camera_extendedFinds the camera intrinsic and extrinsic parameters from several views of a calibration pattern. * calibrate_camera_extended_defFinds the camera intrinsic and extrinsic parameters from several views of a calibration pattern. * calibrate_camera_roFinds the camera intrinsic and extrinsic parameters from several views of a calibration pattern. * calibrate_camera_ro_def@overload * calibrate_camera_ro_extendedFinds the camera intrinsic and extrinsic parameters from several views of a calibration pattern. * calibrate_camera_ro_extended_defFinds the camera intrinsic and extrinsic parameters from several views of a calibration pattern. * calibrate_defPerforms camera calibration * calibrate_hand_eyeComputes Hand-Eye calibration: inline formula * calibrate_hand_eye_defComputes Hand-Eye calibration: inline formula * calibrate_robot_world_hand_eyeComputes Robot-World/Hand-Eye calibration: inline formula and inline formula * calibrate_robot_world_hand_eye_defComputes Robot-World/Hand-Eye calibration: inline formula and inline formula * calibration_matrix_valuesComputes useful camera characteristics from the camera intrinsic matrix. * check_chessboard * compose_rtCombines two rotation-and-shift transformations. * compose_rt_defCombines two rotation-and-shift transformations. * compute_correspond_epilinesFor points in an image of a stereo pair, computes the corresponding epilines in the other image. * convert_points_from_homogeneousConverts points from homogeneous to Euclidean space. * convert_points_homogeneousConverts points to/from homogeneous coordinates. * convert_points_to_homogeneousConverts points from Euclidean to homogeneous space. * correct_matchesRefines coordinates of corresponding points. * decompose_essential_matDecompose an essential matrix to possible rotations and translation. 
* decompose_homography_matDecompose a homography matrix to rotation(s), translation(s) and plane normal(s). * decompose_projection_matrixDecomposes a projection matrix into a rotation matrix and a camera intrinsic matrix. * decompose_projection_matrix_defDecomposes a projection matrix into a rotation matrix and a camera intrinsic matrix. * draw_chessboard_cornersRenders the detected chessboard corners. * draw_frame_axesDraw axes of the world/object coordinate system from pose estimation. see also: solvePnP * draw_frame_axes_defDraw axes of the world/object coordinate system from pose estimation. see also: solvePnP * estimate_affine_2dComputes an optimal affine transformation between two 2D point sets. * estimate_affine_2d_1 * estimate_affine_2d_defComputes an optimal affine transformation between two 2D point sets. * estimate_affine_3dComputes an optimal affine transformation between two 3D point sets. * estimate_affine_3d_1Computes an optimal affine transformation between two 3D point sets. * estimate_affine_3d_1_defComputes an optimal affine transformation between two 3D point sets. * estimate_affine_3d_defComputes an optimal affine transformation between two 3D point sets. * estimate_affine_partial_2dComputes an optimal limited affine transformation with 4 degrees of freedom between two 2D point sets. * estimate_affine_partial_2d_defComputes an optimal limited affine transformation with 4 degrees of freedom between two 2D point sets. * estimate_chessboard_sharpnessEstimates the sharpness of a detected chessboard. * estimate_chessboard_sharpness_defEstimates the sharpness of a detected chessboard. * estimate_new_camera_matrix_for_undistort_rectifyEstimates new camera intrinsic matrix for undistortion or rectification. * estimate_new_camera_matrix_for_undistort_rectify_defEstimates new camera intrinsic matrix for undistortion or rectification. * estimate_translation_3dComputes an optimal translation between two 3D point sets. 
* estimate_translation_3d_defComputes an optimal translation between two 3D point sets. * filter_homography_decomp_by_visible_refpointsFilters homography decompositions based on additional information. * filter_homography_decomp_by_visible_refpoints_defFilters homography decompositions based on additional information. * filter_specklesFilters off small noise blobs (speckles) in the disparity map * filter_speckles_defFilters off small noise blobs (speckles) in the disparity map * find4_quad_corner_subpixfinds subpixel-accurate positions of the chessboard corners * find_chessboard_cornersFinds the positions of internal corners of the chessboard. * find_chessboard_corners_defFinds the positions of internal corners of the chessboard. * find_chessboard_corners_sbFinds the positions of internal corners of the chessboard using a sector based approach. * find_chessboard_corners_sb_def@overload * find_chessboard_corners_sb_with_metaFinds the positions of internal corners of the chessboard using a sector based approach. * find_circles_gridFinds centers in the grid of circles. * find_circles_grid_1Finds centers in the grid of circles. * find_circles_grid_1_def@overload * find_essential_matCalculates an essential matrix from the corresponding points in two images. * find_essential_mat_1Calculates an essential matrix from the corresponding points in two images. * find_essential_mat_1_def@overload * find_essential_mat_2Calculates an essential matrix from the corresponding points in two images. * find_essential_mat_3Calculates an essential matrix from the corresponding points in two images from potentially two different cameras. * find_essential_mat_3_defCalculates an essential matrix from the corresponding points in two images from potentially two different cameras. * find_essential_mat_4 * find_essential_mat_defCalculates an essential matrix from the corresponding points in two images. 
* find_essential_mat_matrixCalculates an essential matrix from the corresponding points in two images. * find_fundamental_matCalculates a fundamental matrix from the corresponding points in two images. * find_fundamental_mat_1Calculates a fundamental matrix from the corresponding points in two images. * find_fundamental_mat_1_def@overload * find_fundamental_mat_2 * find_fundamental_mat_defCalculates a fundamental matrix from the corresponding points in two images. * find_fundamental_mat_maskCalculates a fundamental matrix from the corresponding points in two images. * find_fundamental_mat_mask_def@overload * find_homographyFinds a perspective transformation between two planes. * find_homography_1 * find_homography_def@overload * find_homography_extFinds a perspective transformation between two planes. * find_homography_ext_defFinds a perspective transformation between two planes. * fisheye_distort_pointsDistorts 2D points using fisheye model. * fisheye_distort_points_defDistorts 2D points using fisheye model. * fisheye_init_undistort_rectify_mapComputes undistortion and rectification maps for image transform by #remap. If D is empty, zero distortion is used; if R or P is empty, identity matrices are used. * fisheye_project_pointsProjects points using fisheye model * fisheye_project_points_defProjects points using fisheye model * fisheye_project_points_vecProjects points using fisheye model * fisheye_project_points_vec_def@overload * fisheye_stereo_calibratePerforms stereo calibration * fisheye_stereo_calibrate_def@overload * fisheye_stereo_rectifyStereo rectification for fisheye camera model * fisheye_stereo_rectify_defStereo rectification for fisheye camera model * fisheye_undistort_imageTransforms an image to compensate for fisheye lens distortion. * fisheye_undistort_image_defTransforms an image to compensate for fisheye lens distortion.
* fisheye_undistort_pointsUndistorts 2D points using fisheye model * fisheye_undistort_points_defUndistorts 2D points using fisheye model * get_default_new_camera_matrixReturns the default new camera matrix. * get_default_new_camera_matrix_defReturns the default new camera matrix. * get_optimal_new_camera_matrixReturns the new camera intrinsic matrix based on the free scaling parameter. * get_optimal_new_camera_matrix_defReturns the new camera intrinsic matrix based on the free scaling parameter. * get_valid_disparity_roicomputes valid disparity ROI from the valid ROIs of the rectified images (that are returned by #stereoRectify) * init_camera_matrix_2dFinds an initial camera intrinsic matrix from 3D-2D point correspondences. * init_camera_matrix_2d_defFinds an initial camera intrinsic matrix from 3D-2D point correspondences. * init_inverse_rectification_mapComputes the projection and inverse-rectification transformation map. In essence, this is the inverse of init_undistort_rectify_map to accommodate stereo-rectification of projectors (‘inverse-cameras’) in projector-camera pairs. * init_undistort_rectify_mapComputes the undistortion and rectification transformation map. * init_wide_angle_proj_mapinitializes maps for [remap] for wide-angle * init_wide_angle_proj_map_definitializes maps for [remap] for wide-angle * mat_mul_derivComputes partial derivatives of the matrix product for each multiplied matrix. * project_pointsProjects 3D points to an image plane. * project_points_defProjects 3D points to an image plane. * recover_poseRecovers the relative camera rotation and the translation from an estimated essential matrix and the corresponding points in two images, using cheirality check. Returns the number of inliers that pass the check. * recover_pose_2_camerasRecovers the relative camera rotation and the translation from corresponding points in two images from two different cameras, using cheirality check. Returns the number of inliers that pass the check.
* recover_pose_2_cameras_defRecovers the relative camera rotation and the translation from corresponding points in two images from two different cameras, using cheirality check. Returns the number of inliers that pass the check. * recover_pose_def@overload * recover_pose_estimatedRecovers the relative camera rotation and the translation from an estimated essential matrix and the corresponding points in two images, using cheirality check. Returns the number of inliers that pass the check. * recover_pose_estimated_defRecovers the relative camera rotation and the translation from an estimated essential matrix and the corresponding points in two images, using cheirality check. Returns the number of inliers that pass the check. * recover_pose_triangulatedRecovers the relative camera rotation and the translation from an estimated essential matrix and the corresponding points in two images, using cheirality check. Returns the number of inliers that pass the check. * recover_pose_triangulated_def@overload * rectify3_collinearcomputes the rectification transformations for a 3-head camera, where all the heads are on the same line. * reproject_image_to_3dReprojects a disparity image to 3D space. * reproject_image_to_3d_defReprojects a disparity image to 3D space. * rodriguesConverts a rotation matrix to a rotation vector or vice versa. * rodrigues_defConverts a rotation matrix to a rotation vector or vice versa. * rq_decomp3x3Computes an RQ decomposition of 3x3 matrices. * rq_decomp3x3_defComputes an RQ decomposition of 3x3 matrices. * sampson_distanceCalculates the Sampson Distance between two points. * solve_p3pFinds an object pose from 3 3D-2D point correspondences. * solve_pnpFinds an object pose from 3D-2D point correspondences. * solve_pnp_defFinds an object pose from 3D-2D point correspondences. * solve_pnp_genericFinds an object pose from 3D-2D point correspondences. * solve_pnp_generic_defFinds an object pose from 3D-2D point correspondences.
* solve_pnp_ransacFinds an object pose from 3D-2D point correspondences using the RANSAC scheme. * solve_pnp_ransac_1C++ default parameters * solve_pnp_ransac_1_defNote * solve_pnp_ransac_defFinds an object pose from 3D-2D point correspondences using the RANSAC scheme. * solve_pnp_refine_lmRefine a pose (the translation and the rotation that transform a 3D point expressed in the object coordinate frame to the camera coordinate frame) from 3D-2D point correspondences and starting from an initial solution. * solve_pnp_refine_lm_defRefine a pose (the translation and the rotation that transform a 3D point expressed in the object coordinate frame to the camera coordinate frame) from 3D-2D point correspondences and starting from an initial solution. * solve_pnp_refine_vvsRefine a pose (the translation and the rotation that transform a 3D point expressed in the object coordinate frame to the camera coordinate frame) from 3D-2D point correspondences and starting from an initial solution. * solve_pnp_refine_vvs_defRefine a pose (the translation and the rotation that transform a 3D point expressed in the object coordinate frame to the camera coordinate frame) from 3D-2D point correspondences and starting from an initial solution. * stereo_calibrateCalibrates a stereo camera set up. This function finds the intrinsic parameters for each of the two cameras and the extrinsic parameters between the two cameras. * stereo_calibrate_1Calibrates a stereo camera set up. This function finds the intrinsic parameters for each of the two cameras and the extrinsic parameters between the two cameras. * stereo_calibrate_1_def@overload * stereo_calibrate_2Performs stereo calibration * stereo_calibrate_2_defPerforms stereo calibration * stereo_calibrate_def@overload * stereo_calibrate_extendedCalibrates a stereo camera set up. This function finds the intrinsic parameters for each of the two cameras and the extrinsic parameters between the two cameras.
* stereo_calibrate_extended_defCalibrates a stereo camera set up. This function finds the intrinsic parameters for each of the two cameras and the extrinsic parameters between the two cameras. * stereo_rectifyComputes rectification transforms for each head of a calibrated stereo camera. * stereo_rectify_defComputes rectification transforms for each head of a calibrated stereo camera. * stereo_rectify_uncalibratedComputes a rectification transform for an uncalibrated stereo camera. * stereo_rectify_uncalibrated_defComputes a rectification transform for an uncalibrated stereo camera. * triangulate_pointsThis function reconstructs 3-dimensional points (in homogeneous coordinates) by using their observations with a stereo camera. * undistortTransforms an image to compensate for lens distortion. * undistort_defTransforms an image to compensate for lens distortion. * undistort_image_pointsCompute undistorted image points position * undistort_image_points_defCompute undistorted image points position * undistort_pointsComputes the ideal point coordinates from the observed point coordinates. * undistort_points_defComputes the ideal point coordinates from the observed point coordinates. * undistort_points_iterComputes the ideal point coordinates from the observed point coordinates. * validate_disparityvalidates disparity using the left-right check. The matrix “cost” should be computed by the stereo correspondence algorithm * validate_disparity_defvalidates disparity using the left-right check. The matrix “cost” should be computed by the stereo correspondence algorithm Type Aliases --- * CirclesGridFinderParameters2 Module opencv::ccalib === Custom Calibration Pattern for 3D reconstruction --- Modules --- * prelude Structs --- * CustomPattern * MultiCameraCalibrationClass for multiple camera calibration that supports pinhole camera and omnidirectional camera. For omnidirectional camera model, please refer to omnidir.hpp in ccalib module.
It first calibrates each camera individually, then applies a bundle-adjustment-like optimization to refine the extrinsic parameters. So far, it only supports the “random” pattern for calibration; see randomPattern.hpp in the ccalib module for details. Images that are used should be named “cameraIdx-timestamp.*”; several images with the same timestamp mean that they show the same pattern being photographed. cameraIdx should start from 0. * MultiCameraCalibration_edge * MultiCameraCalibration_vertex * RandomPatternCornerFinderClass for finding feature points and the corresponding 3D points in world coordinates of a “random” pattern, which can be used in calibration. It is useful when the pattern is partly occluded or when only a part of the pattern can be observed in multiple-camera calibration. The pattern can be generated by the RandomPatternGenerator class described in this file. * RandomPatternGenerator Constants --- * CALIB_FIX_CENTER * CALIB_FIX_GAMMA * CALIB_FIX_K1 * CALIB_FIX_K2 * CALIB_FIX_P1 * CALIB_FIX_P2 * CALIB_FIX_SKEW * CALIB_FIX_XI * CALIB_USE_GUESS * HEAD * INVALID * MultiCameraCalibration_OMNIDIRECTIONAL * MultiCameraCalibration_PINHOLE * RECTIFY_CYLINDRICAL * RECTIFY_LONGLATI * RECTIFY_PERSPECTIVE * RECTIFY_STEREOGRAPHIC * XYZ * XYZRGB Traits --- * CustomPatternTraitMutable methods for crate::ccalib::CustomPattern * CustomPatternTraitConstConstant methods for crate::ccalib::CustomPattern * MultiCameraCalibrationTraitMutable methods for crate::ccalib::MultiCameraCalibration * MultiCameraCalibrationTraitConstConstant methods for crate::ccalib::MultiCameraCalibration * MultiCameraCalibration_edgeTraitMutable methods for crate::ccalib::MultiCameraCalibration_edge * MultiCameraCalibration_edgeTraitConstConstant methods for crate::ccalib::MultiCameraCalibration_edge * MultiCameraCalibration_vertexTraitMutable methods for crate::ccalib::MultiCameraCalibration_vertex * MultiCameraCalibration_vertexTraitConstConstant methods for crate::ccalib::MultiCameraCalibration_vertex *
RandomPatternCornerFinderTraitMutable methods for crate::ccalib::RandomPatternCornerFinder * RandomPatternCornerFinderTraitConstConstant methods for crate::ccalib::RandomPatternCornerFinder * RandomPatternGeneratorTraitMutable methods for crate::ccalib::RandomPatternGenerator * RandomPatternGeneratorTraitConstConstant methods for crate::ccalib::RandomPatternGenerator Functions --- * calibratePerform omnidirectional camera calibration; the default depth of outputs is CV_64F. * calibrate_defPerform omnidirectional camera calibration; the default depth of outputs is CV_64F. * init_undistort_rectify_mapComputes undistortion and rectification maps for omnidirectional camera image transform by a rotation R. It outputs two maps that are used for cv::remap(). If D is empty then zero distortion is used, if R or P is empty then identity matrices are used. * project_pointsProjects points for omnidirectional camera using CMei’s model * project_points_1Projects points for omnidirectional camera using CMei’s model * project_points_1_def@overload * project_points_defProjects points for omnidirectional camera using CMei’s model * stereo_calibrateStereo calibration for omnidirectional camera model. It computes the intrinsic parameters for two cameras and the extrinsic parameters between two cameras. The default depth of outputs is CV_64F. * stereo_calibrate_defStereo calibration for omnidirectional camera model. It computes the intrinsic parameters for two cameras and the extrinsic parameters between two cameras. The default depth of outputs is CV_64F. * stereo_reconstructStereo 3D reconstruction from a pair of images * stereo_reconstruct_defStereo 3D reconstruction from a pair of images * stereo_rectifyStereo rectification for omnidirectional camera model.
It computes the rectification rotations for two cameras * undistort_imageUndistort omnidirectional images to perspective images * undistort_image_defUndistort omnidirectional images to perspective images * undistort_pointsUndistort 2D image points for omnidirectional camera using CMei’s model Module opencv::core === Core functionality --- Basic structures --- C structures and operations --- Operations on arrays --- Asynchronous API --- XML/YAML Persistence --- Clustering --- Utility and system functions and macros --- OpenGL interoperability --- Intel IPP Asynchronous C/C++ Converters --- Optimization Algorithms --- DirectX interoperability --- Eigen support --- OpenCL support --- Intel VA-API/OpenCL (CL-VA) interoperability --- Hardware Acceleration Layer --- Parallel Processing --- Re-exports --- * `pub use ptr_extern::PtrExtern;` * `pub use ptr_extern::PtrExternCtor;` * `pub use vector_extern::VectorExtern;` * `pub use vector_extern::VectorExternCopyNonBool;` Modules --- * prelude Structs --- * Affine3docs.opencv.org * AlgorithmThis is a base class for all more or less complex algorithms in OpenCV * ArraysWrapper for OpenGL Client-Side Vertex arrays. * AsyncArrayReturns result of asynchronous operations * AsyncPromiseProvides result of asynchronous operations * BufferSmart pointer for OpenGL buffer object with reference counting. * BufferPoolBufferPool for use with CUDA streams * ClassWithKeywordProperties * CommandLineParserDesigned for command line parsing * ConjGradSolverThis class is used to perform the non-linear non-constrained minimization of a function with known gradient, * Context * Context_UserContext * DMatchClass for matching keypoint descriptors * Detail_CheckContext * Device * DeviceInfoClass providing functionality for querying the specified GPU properties. * DownhillSolverThis class is used to perform the non-linear non-constrained minimization of a function, * Event * Exception! Class passed to an error. * FileNodeFile Storage Node class. 
* FileNodeIteratorused to iterate through sequences and mappings. * FileStorageXML/YAML/JSON file storage class that encapsulates all the information necessary for writing or reading data to/from a file. * Formatted@todo document * Formatter@todo document * FunctionParams * GpuData * GpuMatBase storage class for GPU memory with reference counting. * GpuMatND * GpuMat_Allocator * Hamming * HostMemClass with reference counting wrapping special memory type allocation functions from CUDA. * Image2D * Kernel * KernelArg * KeyPointData structure for salient point detectors. * LDALinear Discriminant Analysis @todo document this class * LogTag * Matn-dimensional dense array class \anchor CVMat_Details * MatConstIterator/////////////////////////////// MatConstIterator ////////////////////////////////// * MatExprMatrix expression representation @anchor MatrixExpressions This is a list of implemented matrix operations that can be combined in arbitrary complex expressions (here A, B stand for matrices ( Mat ), s for a scalar ( Scalar ), alpha for a real-valued scalar ( double )): * MatIter * MatOp////////////////////////////// Matrix Expressions ///////////////////////////////// * MatSize * MatStep * Mat_docs.opencv.org * Matxdocs.opencv.org * Matx_AddOp@cond IGNORED * Matx_DivOp * Matx_MatMulOp * Matx_MulOp * Matx_ScaleOp * Matx_SubOp * Matx_TOp * MinProblemSolverBasic interface for all solvers * MinProblemSolver_FunctionRepresents function being optimized * Momentsstruct returned by cv::moments * NodeData * OpenCLExecutionContext * OriginalClassName * OriginalClassName_Params * PCAPrincipal Component Analysis * ParallelLoopBodyBase class for parallel data processors * Platform@deprecated * PlatformInfo * Point3_docs.opencv.org * Point_docs.opencv.org * Program * ProgramSource * PtrThis is similar to Rust `Box`, but handled by the C++. Some OpenCV functions insist on accepting `Ptr` instead of a heap allocated object so we need to satisfy those. 
* Queue * RNGRandom Number Generator * RNG_MT19937Mersenne Twister random number generator * RangeTemplate class specifying a continuous subsequence (slice) of a sequence. * Rect_docs.opencv.org * RotatedRectThe class represents rotated (i.e. not up-right) rectangles on a plane. * SVDSingular Value Decomposition * Size_docs.opencv.org * SizedArray12 * SizedArray13 * SizedArray14 * SizedArray16 * SizedArray21 * SizedArray22 * SizedArray23 * SizedArray31 * SizedArray32 * SizedArray33 * SizedArray34 * SizedArray41 * SizedArray43 * SizedArray44 * SizedArray61 * SizedArray66 * SparseMatThe class SparseMat represents multi-dimensional sparse numerical arrays. * SparseMatConstIteratorRead-Only Sparse Matrix Iterator. * SparseMatIteratorRead-write Sparse Matrix Iterator * SparseMat_Hdrthe sparse matrix header * SparseMat_Nodesparse matrix node - element of a hash table * StreamThis class encapsulates a queue of asynchronous calls. * TargetArchsClass providing a set of static methods to check what NVIDIA* card architecture the CUDA module was built for. * TermCriteriaThe class defining termination criteria for iterative algorithms. * Texture2DSmart pointer for OpenGL 2D texture memory with reference counting. * TickMetera class to measure passing time. * Timer * TupleWrapper for C++ std::tuple and std::pair * UMat@todo document * UMatData * VecNdocs.opencv.org Named `VecN` to avoid name clash with std’s `Vec`. * VectorWrapper for C++ std::vector * VectorIterator * VectorRefIterator * WriteStructContext * _InputArrayThis is the proxy class for passing read-only input arrays into OpenCV functions. * _InputOutputArray * _OutputArrayThis type is very similar to InputArray except that it is used for input/output and output function parameters. Enums --- * AccessFlag * BorderTypesVarious border types, image boundaries are denoted with `|` * Buffer_Access * Buffer_TargetThe target defines how you intend to use the buffer object. 
* CmpTypescomparison types * Codeerror codes * CovarFlagsCovariation flags * CpuFeaturesAvailable CPU features. * DecompTypesmatrix decomposition types * Detail_TestOp * DeviceInfo_ComputeMode * DftFlags * Event_CreateFlags * FLAGS * FeatureSetEnumeration providing CUDA computing features. * FileStorage_Modefile storage mode * FileStorage_State * Formatter_FormatType * GemmFlagsgeneralized matrix multiplication flags * HostMem_AllocType * IMPL * KmeansFlagsk-Means flags * LogLevelSupported logging levels and their semantic * MatExprResult * NormTypesnorm types * OclVectorStrategy * PCA_Flags * Param * ReduceTypes * RenderModesrender mode * RotateFlags * SVD_Flags * SolveLPResultreturn codes for cv::solveLP() function * SortFlags * TYPE * TermCriteria_TypeCriteria type, can be one of: COUNT, EPS or COUNT + EPS * Texture2D_FormatAn Image Format describes the way that the images in Textures store their data. * UMatData_MemoryFlag * UMatUsageFlagsUsage flags for allocator * _InputArray_KindFlag * _OutputArray_DepthMask Constants --- * ACCESS_FAST * ACCESS_MASK * ACCESS_READ * ACCESS_RW * ACCESS_WRITE * BORDER_CONSTANT`iiiiii|abcdefgh|iiiiiii` with some specified `i` * BORDER_DEFAULTsame as BORDER_REFLECT_101 * BORDER_ISOLATEDdo not look outside of ROI * BORDER_REFLECT`fedcba|abcdefgh|hgfedcb` * BORDER_REFLECT101same as BORDER_REFLECT_101 * BORDER_REFLECT_101`gfedcb|abcdefgh|gfedcba` * BORDER_REPLICATE`aaaaaa|abcdefgh|hhhhhhh` * BORDER_TRANSPARENT`uvwxyz|abcdefgh|ijklmno` * BORDER_WRAP`cdefgh|abcdefgh|abcdefg` * BadAlignincorrect input align * BadAlphaChannel * BadCOIinput COI is not supported * BadCallBack * BadDataPtr * BadDepthinput image depth is not supported by the function * BadImageSizeimage size is invalid * BadModelOrChSeq * BadNumChannel1U * BadNumChannelsbad number of channels, for example, some functions accept only single channel matrices. 
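The BORDER_* patterns above (e.g. `gfedcb|abcdefgh|gfedcba`) describe how an out-of-range index is mapped back into the image. A plain-Rust sketch of that index mapping for three of the modes — this mirrors what OpenCV's `borderInterpolate` computes, but is not the crate's API:

```rust
// Index mapping behind the border modes; assumes len >= 1 (and len > 1 for
// reflection, matching OpenCV's own precondition). Plain-Rust sketch only.
fn replicate(p: i32, len: i32) -> i32 {
    p.clamp(0, len - 1) // aaaaaa|abcdefgh|hhhhhhh
}

fn reflect_101(mut p: i32, len: i32) -> i32 {
    // gfedcb|abcdefgh|gfedcba -- reflect without repeating the edge pixel
    while p < 0 || p >= len {
        p = if p < 0 { -p } else { 2 * (len - 1) - p };
    }
    p
}

fn wrap(p: i32, len: i32) -> i32 {
    ((p % len) + len) % len // cdefgh|abcdefgh|abcdefg
}

fn main() {
    let row = b"abcdefgh";
    // sample virtual positions just outside the row and check the patterns
    assert_eq!(row[replicate(-2, 8) as usize], b'a');
    assert_eq!(row[reflect_101(-1, 8) as usize], b'b');
    assert_eq!(row[reflect_101(8, 8) as usize], b'g');
    assert_eq!(row[wrap(-1, 8) as usize], b'h');
    println!("border mapping ok");
}
```

BORDER_CONSTANT is the one mode that cannot be expressed as an index remap, since it fabricates a value `i` instead of reading an existing pixel.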
* BadOffsetoffset is invalid * BadOrdernumber of dimensions is out of range * BadOriginincorrect input origin * BadROISizeincorrect input roi * BadStepimage step is wrong, this may happen for a non-continuous matrix. * BadTileSize * Buffer_ARRAY_BUFFERThe buffer will be used as a source for vertex data * Buffer_ELEMENT_ARRAY_BUFFERThe buffer will be used for indices (in glDrawElements, for example) * Buffer_PIXEL_PACK_BUFFERThe buffer will be used for reading from OpenGL textures * Buffer_PIXEL_UNPACK_BUFFERThe buffer will be used for writing to OpenGL textures * Buffer_READ_ONLY * Buffer_READ_WRITE * Buffer_WRITE_ONLY * CMP_EQsrc1 is equal to src2. * CMP_GEsrc1 is greater than or equal to src2. * CMP_GTsrc1 is greater than src2. * CMP_LEsrc1 is less than or equal to src2. * CMP_LTsrc1 is less than src2. * CMP_NEsrc1 is unequal to src2. * COVAR_COLSIf the flag is specified, all the input vectors are stored as columns of the samples matrix. mean should be a single-column vector in this case. * COVAR_NORMALThe output covariance matrix is calculated as: block formula covar will be a square matrix of the same size as the total number of elements in each input vector. One and only one of COVAR_SCRAMBLED and COVAR_NORMAL must be specified. * COVAR_ROWSIf the flag is specified, all the input vectors are stored as rows of the samples matrix. mean should be a single-row vector in this case. * COVAR_SCALEIf the flag is specified, the covariance matrix is scaled. In the “normal” mode, scale is 1./nsamples . In the “scrambled” mode, scale is the reciprocal of the total number of elements in each input vector. By default (if the flag is not specified), the covariance matrix is not scaled ( scale=1 ). * COVAR_SCRAMBLEDThe output covariance matrix is calculated as: block formula The covariance matrix will be nsamples x nsamples. Such an unusual covariance matrix is used for fast PCA of a set of very large vectors (see, for example, the EigenFaces technique for face recognition). 
Eigenvalues of this “scrambled” matrix match the eigenvalues of the true covariance matrix. The “true” eigenvectors can be easily calculated from the eigenvectors of the “scrambled” covariance matrix. * COVAR_USE_AVGIf the flag is specified, the function does not calculate mean from the input vectors but, instead, uses the passed mean vector. This is useful if mean has been pre-calculated or known in advance, or if the covariance matrix is calculated by parts. In this case, mean is not a mean vector of the input sub-set of vectors but rather the mean vector of the whole set. * CPU_AVX * CPU_AVX2 * CPU_AVX512_CLXCascade Lake with AVX-512F/CD/BW/DQ/VL/VNNI * CPU_AVX512_CNLCannon Lake with AVX-512F/CD/BW/DQ/VL/IFMA/VBMI * CPU_AVX512_COMMONCommon instructions AVX-512F/CD for all CPUs that support AVX-512 * CPU_AVX512_ICLIce Lake with AVX-512F/CD/BW/DQ/VL/IFMA/VBMI/VNNI/VBMI2/BITALG/VPOPCNTDQ * CPU_AVX512_KNLKnights Landing with AVX-512F/CD/ER/PF * CPU_AVX512_KNMKnights Mill with AVX-512F/CD/ER/PF/4FMAPS/4VNNIW/VPOPCNTDQ * CPU_AVX512_SKXSkylake-X with AVX-512F/CD/BW/DQ/VL * CPU_AVX_512BITALG * CPU_AVX_512BW * CPU_AVX_512CD * CPU_AVX_512DQ * CPU_AVX_512ER * CPU_AVX_512F * CPU_AVX_512IFMA * CPU_AVX_512IFMA512 * CPU_AVX_512PF * CPU_AVX_512VBMI * CPU_AVX_512VBMI2 * CPU_AVX_512VL * CPU_AVX_512VNNI * CPU_AVX_512VPOPCNTDQ * CPU_AVX_5124FMAPS * CPU_AVX_5124VNNIW * CPU_FMA3 * CPU_FP16 * CPU_LASX * CPU_MAX_FEATURE * CPU_MMX * CPU_MSA * CPU_NEON * CPU_NEON_DOTPROD * CPU_POPCNT * CPU_RISCVV * CPU_RVV * CPU_SSE * CPU_SSE2 * CPU_SSE3 * CPU_SSE4_1 * CPU_SSE4_2 * CPU_SSSE3 * CPU_VSX * CPU_VSX3 * CV_2PI * CV_8S * CV_8SC1 * CV_8SC2 * CV_8SC3 * CV_8SC4 * CV_8U * CV_8UC1 * CV_8UC2 * CV_8UC3 * CV_8UC4 * CV_16F * CV_16FC1 * CV_16FC2 * CV_16FC3 * CV_16FC4 * CV_16S * CV_16SC1 * CV_16SC2 * CV_16SC3 * CV_16SC4 * CV_16U * CV_16UC1 * CV_16UC2 * CV_16UC3 * CV_16UC4 * CV_32F * CV_32FC1 * CV_32FC2 * CV_32FC3 * CV_32FC4 * CV_32S * CV_32SC1 * CV_32SC2 * CV_32SC3 * CV_32SC4 * CV_64F * CV_64FC1 * 
CV_64FC2 * CV_64FC3 * CV_64FC4 * CV_AVX * CV_AVX2 * CV_AVX512_CLX * CV_AVX512_CNL * CV_AVX512_COMMON * CV_AVX512_ICL * CV_AVX512_KNL * CV_AVX512_KNM * CV_AVX512_SKX * CV_AVX_512BITALG * CV_AVX_512BW * CV_AVX_512CD * CV_AVX_512DQ * CV_AVX_512ER * CV_AVX_512F * CV_AVX_512IFMA * CV_AVX_512IFMA512 * CV_AVX_512PF * CV_AVX_512VBMI * CV_AVX_512VBMI2 * CV_AVX_512VL * CV_AVX_512VNNI * CV_AVX_512VPOPCNTDQ * CV_AVX_5124FMAPS * CV_AVX_5124VNNIW * CV_CN_MAX * CV_CN_SHIFT * CV_CPU_AVX * CV_CPU_AVX2 * CV_CPU_AVX512_CLX * CV_CPU_AVX512_CNL * CV_CPU_AVX512_COMMON * CV_CPU_AVX512_ICL * CV_CPU_AVX512_KNL * CV_CPU_AVX512_KNM * CV_CPU_AVX512_SKX * CV_CPU_AVX_512BITALG * CV_CPU_AVX_512BW * CV_CPU_AVX_512CD * CV_CPU_AVX_512DQ * CV_CPU_AVX_512ER * CV_CPU_AVX_512F * CV_CPU_AVX_512IFMA * CV_CPU_AVX_512IFMA512 * CV_CPU_AVX_512PF * CV_CPU_AVX_512VBMI * CV_CPU_AVX_512VBMI2 * CV_CPU_AVX_512VL * CV_CPU_AVX_512VNNI * CV_CPU_AVX_512VPOPCNTDQ * CV_CPU_AVX_5124FMAPS * CV_CPU_AVX_5124VNNIW * CV_CPU_FMA3 * CV_CPU_FP16 * CV_CPU_LASX * CV_CPU_MMX * CV_CPU_MSA * CV_CPU_NEON * CV_CPU_NEON_DOTPROD * CV_CPU_NONE * CV_CPU_POPCNT * CV_CPU_RISCVV * CV_CPU_RVV * CV_CPU_SSE * CV_CPU_SSE2 * CV_CPU_SSE3 * CV_CPU_SSE4_1 * CV_CPU_SSE4_2 * CV_CPU_SSSE3 * CV_CPU_VSX * CV_CPU_VSX3 * CV_CXX11 * CV_CXX_MOVE_SEMANTICS * CV_CXX_STD_ARRAY * CV_DEPTH_MAX * CV_ENABLE_UNROLLED * CV_FMA3 * CV_FP16 * CV_FP16_TYPE * CV_HAL_BORDER_CONSTANT * CV_HAL_BORDER_ISOLATED * CV_HAL_BORDER_REFLECT * CV_HAL_BORDER_REFLECT_101 * CV_HAL_BORDER_REPLICATE * CV_HAL_BORDER_TRANSPARENT * CV_HAL_BORDER_WRAP * CV_HAL_CMP_EQ * CV_HAL_CMP_GE * CV_HAL_CMP_GT * CV_HAL_CMP_LE * CV_HAL_CMP_LT * CV_HAL_CMP_NE * CV_HAL_DFT_COMPLEX_OUTPUT * CV_HAL_DFT_INVERSE * CV_HAL_DFT_IS_CONTINUOUS * CV_HAL_DFT_IS_INPLACE * CV_HAL_DFT_REAL_OUTPUT * CV_HAL_DFT_ROWS * CV_HAL_DFT_SCALE * CV_HAL_DFT_STAGE_COLS * CV_HAL_DFT_TWO_STAGE * CV_HAL_ERROR_NOT_IMPLEMENTED * CV_HAL_ERROR_OK * CV_HAL_ERROR_UNKNOWN * CV_HAL_GEMM_1_T * CV_HAL_GEMM_2_T * CV_HAL_GEMM_3_T * 
CV_HAL_SVD_FULL_UV * CV_HAL_SVD_MODIFY_A * CV_HAL_SVD_NO_UV * CV_HAL_SVD_SHORT_UV * CV_HARDWARE_MAX_FEATURE * CV_IMPL_IPP * CV_IMPL_MT * CV_IMPL_OCL * CV_IMPL_PLAIN * CV_LASX * CV_LOG2 * CV_LOG_LEVEL_DEBUG * CV_LOG_LEVEL_ERROR * CV_LOG_LEVEL_FATAL * CV_LOG_LEVEL_INFO * CV_LOG_LEVEL_SILENT * CV_LOG_LEVEL_VERBOSE * CV_LOG_LEVEL_WARN * CV_LOG_STRIP_LEVEL * CV_MAJOR_VERSION * CV_MAT_CN_MASK * CV_MAT_CONT_FLAG * CV_MAT_CONT_FLAG_SHIFT * CV_MAT_DEPTH_MASK * CV_MAT_TYPE_MASK * CV_MINOR_VERSION * CV_MMX * CV_MSA * CV_NEON * CV_PI * CV_POPCNT * CV_RVV * CV_RVV071 * CV_SSE * CV_SSE2 * CV_SSE3 * CV_SSE4_1 * CV_SSE4_2 * CV_SSSE3 * CV_STRONG_ALIGNMENT * CV_SUBMAT_FLAG * CV_SUBMAT_FLAG_SHIFT * CV_SUBMINOR_VERSION * CV_VERSION * CV_VERSION_MAJOR * CV_VERSION_MINOR * CV_VERSION_REVISION * CV_VERSION_STATUS * CV_VSX * CV_VSX3 * CV_WASM_SIMD * CV__EXCEPTION_PTR * DCT_INVERSEperforms an inverse 1D or 2D transform instead of the default forward transform. * DCT_ROWSperforms a forward or inverse transform of every individual row of the input matrix. This flag enables you to transform multiple vectors simultaneously and can be used to decrease the overhead (which is sometimes several times larger than the processing itself) to perform 3D and higher-dimensional transforms and so forth. * DECOMP_CHOLESKYCholesky inline formula factorization; the matrix src1 must be symmetric and positive definite * DECOMP_EIGeigenvalue decomposition; the matrix src1 must be symmetric * DECOMP_LUGaussian elimination with the optimal pivot element chosen. 
* DECOMP_NORMALwhile all the previous flags are mutually exclusive, this flag can be used together with any of the previous; it means that the normal equations inline formula are solved instead of the original system inline formula * DECOMP_QRQR factorization; the system can be over-defined and/or the matrix src1 can be singular * DECOMP_SVDsingular value decomposition (SVD) method; the system can be over-defined and/or the matrix src1 can be singular * DFT_COMPLEX_INPUTspecifies that input is complex input. If this flag is set, the input must have 2 channels. On the other hand, for backwards compatibility reason, if input has 2 channels, input is already considered complex. * DFT_COMPLEX_OUTPUTperforms a forward transformation of 1D or 2D real array; the result, though being a complex array, has complex-conjugate symmetry (*CCS*, see the function description below for details), and such an array can be packed into a real array of the same size as input, which is the fastest option and which is what the function does by default; however, you may wish to get a full complex array (for simpler spectrum analysis, and so on) - pass the flag to enable the function to produce a full-size complex output array. * DFT_INVERSEperforms an inverse 1D or 2D transform instead of the default forward transform. 
* DFT_REAL_OUTPUTperforms an inverse transformation of a 1D or 2D complex array; the result is normally a complex array of the same size, however, if the input array has conjugate-complex symmetry (for example, it is a result of forward transformation with DFT_COMPLEX_OUTPUT flag), the output is a real array; while the function itself does not check whether the input is symmetrical or not, you can pass the flag and then the function will assume the symmetry and produce the real output array (note that when the input is packed into a real array and inverse transformation is executed, the function treats the input as a packed complex-conjugate symmetrical array, and the output will also be a real array). * DFT_ROWSperforms a forward or inverse transform of every individual row of the input matrix; this flag enables you to transform multiple vectors simultaneously and can be used to decrease the overhead (which is sometimes several times larger than the processing itself) to perform 3D and higher-dimensional transformations and so forth. * DFT_SCALEscales the result: divide it by the number of array elements. Normally, it is combined with DFT_INVERSE. 
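The interaction of DFT_INVERSE and DFT_SCALE above can be illustrated with the textbook O(N²) transform: the inverse flag flips the exponent sign, and the scale flag divides by N, so a forward pass followed by an inverse+scale pass reproduces the input. A plain-Rust sketch of those semantics — not the crate's `cv::dft`, which additionally handles packed (CCS) layouts and real-valued shortcuts:

```rust
// Naive DFT over (re, im) pairs. `inverse` flips the exponent sign
// (DFT_INVERSE); `scale` divides by N (DFT_SCALE, normally combined
// with DFT_INVERSE so that a round trip returns the input).
use std::f64::consts::PI;

fn dft(input: &[(f64, f64)], inverse: bool, scale: bool) -> Vec<(f64, f64)> {
    let n = input.len() as f64;
    let sign = if inverse { 1.0 } else { -1.0 };
    let norm = if scale { 1.0 / n } else { 1.0 };
    (0..input.len())
        .map(|k| {
            let (mut re, mut im) = (0.0, 0.0);
            for (t, &(xr, xi)) in input.iter().enumerate() {
                let ang = sign * 2.0 * PI * (k * t) as f64 / n;
                let (s, c) = ang.sin_cos();
                // complex multiply x * e^{i*ang}
                re += xr * c - xi * s;
                im += xr * s + xi * c;
            }
            (re * norm, im * norm)
        })
        .collect()
}

fn main() {
    let x = vec![(1.0, 0.0), (2.0, 0.0), (3.0, 0.0), (4.0, 0.0)];
    let spectrum = dft(&x, false, false);
    let back = dft(&spectrum, true, true); // DFT_INVERSE + DFT_SCALE
    for (a, b) in x.iter().zip(&back) {
        assert!((a.0 - b.0).abs() < 1e-9 && (a.1 - b.1).abs() < 1e-9);
    }
    println!("round trip ok");
}
```

DFT_ROWS corresponds to applying this 1-D transform to each matrix row independently.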
* DYNAMIC_PARALLELISM * Detail_CV__LAST_TEST_OP * Detail_TEST_CUSTOM * Detail_TEST_EQ * Detail_TEST_GE * Detail_TEST_GT * Detail_TEST_LE * Detail_TEST_LT * Detail_TEST_NE * DeviceInfo_ComputeModeDefault< default compute mode (Multiple threads can use cudaSetDevice with this device) * DeviceInfo_ComputeModeExclusive< compute-exclusive-thread mode (Only one thread in one process will be able to use cudaSetDevice with this device) * DeviceInfo_ComputeModeExclusiveProcess< compute-exclusive-process mode (Many threads in one process will be able to use cudaSetDevice with this device) * DeviceInfo_ComputeModeProhibited< compute-prohibited mode (No threads can use cudaSetDevice with this device) * Device_EXEC_KERNEL * Device_EXEC_NATIVE_KERNEL * Device_FP_CORRECTLY_ROUNDED_DIVIDE_SQRT * Device_FP_DENORM * Device_FP_FMA * Device_FP_INF_NAN * Device_FP_ROUND_TO_INF * Device_FP_ROUND_TO_NEAREST * Device_FP_ROUND_TO_ZERO * Device_FP_SOFT_FLOAT * Device_LOCAL_IS_GLOBAL * Device_LOCAL_IS_LOCAL * Device_NO_CACHE * Device_NO_LOCAL_MEM * Device_READ_ONLY_CACHE * Device_READ_WRITE_CACHE * Device_TYPE_ACCELERATOR * Device_TYPE_ALL * Device_TYPE_CPU * Device_TYPE_DEFAULT * Device_TYPE_DGPU * Device_TYPE_GPU * Device_TYPE_IGPU * Device_UNKNOWN_VENDOR * Device_VENDOR_AMD * Device_VENDOR_INTEL * Device_VENDOR_NVIDIA * ENUM_LOG_LEVEL_FORCE_INT * Event_BLOCKING_SYNC< Event uses blocking synchronization * Event_DEFAULT< Default event flag * Event_DISABLE_TIMING< Event will not record timing data * Event_INTERPROCESS< Event is suitable for interprocess use. 
DisableTiming must be set * FEATURE_SET_COMPUTE_10 * FEATURE_SET_COMPUTE_11 * FEATURE_SET_COMPUTE_12 * FEATURE_SET_COMPUTE_13 * FEATURE_SET_COMPUTE_20 * FEATURE_SET_COMPUTE_21 * FEATURE_SET_COMPUTE_30 * FEATURE_SET_COMPUTE_32 * FEATURE_SET_COMPUTE_35 * FEATURE_SET_COMPUTE_50 * FLAGS_EXPAND_SAME_NAMES * FLAGS_MAPPING * FLAGS_NONE * FileNode_EMPTYempty structure (sequence or mapping) * FileNode_FLOATsynonym for REAL * FileNode_FLOWcompact representation of a sequence or mapping. Used only by YAML writer * FileNode_INTan integer * FileNode_MAPmapping * FileNode_NAMEDthe node has a name (i.e. it is an element of a mapping). * FileNode_NONEempty node * FileNode_REALfloating-point number * FileNode_SEQsequence * FileNode_STRtext string in UTF-8 encoding * FileNode_STRINGsynonym for STR * FileNode_TYPE_MASK * FileNode_UNIFORMif set, means that all the collection elements are numbers of the same type (reals or ints). UNIFORM is used only when reading FileStorage; FLOW is used only when writing. So they share the same bit * FileStorage_APPENDvalue, open the file for appending * FileStorage_BASE64flag, write rawdata in Base64 by default. 
(consider using WRITE_BASE64) * FileStorage_FORMAT_AUTOflag, auto format * FileStorage_FORMAT_JSONflag, JSON format * FileStorage_FORMAT_MASKmask for format flags * FileStorage_FORMAT_XMLflag, XML format * FileStorage_FORMAT_YAMLflag, YAML format * FileStorage_INSIDE_MAP * FileStorage_MEMORY< flag, read data from source or write data to the internal buffer (which is returned by FileStorage::release) * FileStorage_NAME_EXPECTED * FileStorage_READvalue, open the file for reading * FileStorage_UNDEFINED * FileStorage_VALUE_EXPECTED * FileStorage_WRITEvalue, open the file for writing * FileStorage_WRITE_BASE64flag, enable both WRITE and BASE64 * Formatter_FMT_C * Formatter_FMT_CSV * Formatter_FMT_DEFAULT * Formatter_FMT_MATLAB * Formatter_FMT_NUMPY * Formatter_FMT_PYTHON * GEMM_1_Ttransposes src1 * GEMM_2_Ttransposes src2 * GEMM_3_Ttransposes src3 * GLOBAL_ATOMICS * GpuApiCallErrorGPU API call error * GpuNotSupportedno CUDA support * HeaderIsNullimage header is NULL * HostMem_PAGE_LOCKED * HostMem_SHARED * HostMem_WRITE_COMBINED * IMPL_IPP * IMPL_OPENCL * IMPL_PLAIN * KMEANS_PP_CENTERSUse kmeans++ center initialization by Arthur and Vassilvitskii [Arthur2007]. * KMEANS_RANDOM_CENTERSSelect random initial centers in each attempt. * KMEANS_USE_INITIAL_LABELSDuring the first (and possibly the only) attempt, use the user-supplied labels instead of computing them from the initial centers. For the second and further attempts, use the random or semi-random centers. Use one of the KMEANS_*_CENTERS flags to specify the exact method. * KernelArg_CONSTANT * KernelArg_LOCAL * KernelArg_NO_SIZE * KernelArg_PTR_ONLY * KernelArg_READ_ONLY * KernelArg_READ_WRITE * KernelArg_WRITE_ONLY * LINES * LINE_LOOP * LINE_STRIP * LOG_LEVEL_DEBUGDebug message. Disabled in the “Release” build. 
* LOG_LEVEL_ERRORError message * LOG_LEVEL_FATALFatal (critical) error (unrecoverable internal error) * LOG_LEVEL_INFOInfo message * LOG_LEVEL_SILENTfor using in setLogLevel() call * LOG_LEVEL_VERBOSEVerbose (trace) messages. Requires verbosity level. Disabled in the “Release” build. * LOG_LEVEL_WARNINGWarning message * MaskIsTiled * Mat_AUTO_STEP * Mat_CONTINUOUS_FLAG * Mat_DEPTH_MASK * Mat_MAGIC_MASK * Mat_MAGIC_VAL * Mat_SUBMATRIX_FLAG * Mat_TYPE_MASK * NATIVE_DOUBLE * NORM_HAMMINGIn the case of one input array, calculates the Hamming distance of the array from zero; in the case of two input arrays, calculates the Hamming distance between the arrays. * NORM_HAMMING2Similar to NORM_HAMMING, but in the calculation, each two bits of the input sequence will be added and treated as a single bit to be used in the same calculation as NORM_HAMMING. * NORM_INFblock formula * NORM_L1block formula * NORM_L2block formula * NORM_L2SQRblock formula * NORM_MINMAXflag * NORM_RELATIVEflag * NORM_TYPE_MASKbit-mask which can be used to separate norm type from norm flags * OCL_VECTOR_DEFAULT * OCL_VECTOR_MAX * OCL_VECTOR_OWN * OPENCV_ABI_COMPATIBILITY * OPENCV_USE_FASTMATH_BUILTINS * OpenCLApiCallErrorOpenCL API call error * OpenCLDoubleNotSupported * OpenCLInitErrorOpenCL initialization error * OpenCLNoAMDBlasFft * OpenGlApiCallErrorOpenGL API call error * OpenGlNotSupportedno OpenGL support * PCA_DATA_AS_COLindicates that the input samples are stored as matrix columns * PCA_DATA_AS_ROWindicates that the input samples are stored as matrix rows * PCA_USE_AVG * POINTS * POLYGON * Param_ALGORITHM * Param_BOOLEAN * Param_FLOAT * Param_INT * Param_MAT * Param_MAT_VECTOR * Param_REAL * Param_SCALAR * Param_STRING * Param_UCHAR * Param_UINT64 * Param_UNSIGNED_INT * QUADS * QUAD_STRIP * REDUCE_AVGthe output is the mean vector of all rows/columns of the matrix. * REDUCE_MAXthe output is the maximum (column/row-wise) of all rows/columns of the matrix. 
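The norm-type flags above stand for standard formulas: NORM_INF is the maximum absolute value, NORM_L1 the sum of absolute values, NORM_L2 the Euclidean norm, and NORM_HAMMING counts differing bits between two byte sequences. A plain-Rust sketch of those definitions — a conceptual illustration, not the crate's `cv::norm`:

```rust
// The definitions behind NORM_INF / NORM_L1 / NORM_L2 / NORM_HAMMING.
fn norm_inf(v: &[f64]) -> f64 {
    v.iter().fold(0.0, |m, x| m.max(x.abs()))
}
fn norm_l1(v: &[f64]) -> f64 {
    v.iter().map(|x| x.abs()).sum()
}
fn norm_l2(v: &[f64]) -> f64 {
    v.iter().map(|x| x * x).sum::<f64>().sqrt()
}
fn norm_hamming(a: &[u8], b: &[u8]) -> u32 {
    // count of bit positions where the two byte sequences differ
    a.iter().zip(b).map(|(x, y)| (x ^ y).count_ones()).sum()
}

fn main() {
    let v = [3.0, -4.0];
    assert_eq!(norm_inf(&v), 4.0);
    assert_eq!(norm_l1(&v), 7.0);
    assert_eq!(norm_l2(&v), 5.0); // 3-4-5 triangle
    assert_eq!(norm_hamming(&[0b1010], &[0b0110]), 2);
    println!("norms ok");
}
```

NORM_L2SQR is simply the L2 value without the final square root, and NORM_RELATIVE divides the norm of the difference by the norm of the second array.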
* REDUCE_MINthe output is the minimum (column/row-wise) of all rows/columns of the matrix. * REDUCE_SUMthe output is the sum of all rows/columns of the matrix. * REDUCE_SUM2the output is the sum of all squared rows/columns of the matrix. * RNG_NORMAL * RNG_UNIFORM * ROTATE_90_CLOCKWISERotate 90 degrees clockwise * ROTATE_90_COUNTERCLOCKWISERotate 270 degrees clockwise * ROTATE_180Rotate 180 degrees clockwise * SHARED_ATOMICS * SOLVELP_LOSTproblem is feasible, but solver lost solution due to floating-point arithmetic errors * SOLVELP_MULTIthere are multiple maxima for target function - the arbitrary one is returned * SOLVELP_SINGLEthere is only one maximum for target function * SOLVELP_UNBOUNDEDproblem is unbounded (target function can achieve arbitrary high values) * SOLVELP_UNFEASIBLEproblem is unfeasible (there are no points that satisfy all the constraints imposed) * SORT_ASCENDINGeach matrix row is sorted in the ascending order. * SORT_DESCENDINGeach matrix row is sorted in the descending order; this flag and the previous one are also mutually exclusive. * SORT_EVERY_COLUMNeach matrix column is sorted independently; this flag and the previous one are mutually exclusive. * SORT_EVERY_ROWeach matrix row is sorted independently * SVD_FULL_UVwhen the matrix is not square, by default the algorithm produces u and vt matrices of sufficiently large size for the further A reconstruction; if, however, FULL_UV flag is specified, u and vt will be full-size square orthogonal matrices. * SVD_MODIFY_Aallow the algorithm to modify the decomposed matrix; it can save space and speed up processing. currently ignored. 
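The RotateFlags above describe pure index permutations. What ROTATE_90_CLOCKWISE means for a row-major matrix, as a plain-Rust sketch (the crate's `cv::rotate` applies the same mapping to Mats): element (r, c) of an R×C input moves to (c, R − 1 − r) of the C×R output.

```rust
// 90-degree clockwise rotation of a row-major matrix; assumes a non-empty,
// rectangular input. Illustration of the ROTATE_90_CLOCKWISE mapping only.
fn rotate_90_cw(src: &[Vec<i32>]) -> Vec<Vec<i32>> {
    let (rows, cols) = (src.len(), src[0].len());
    let mut dst = vec![vec![0; rows]; cols];
    for r in 0..rows {
        for c in 0..cols {
            dst[c][rows - 1 - r] = src[r][c];
        }
    }
    dst
}

fn main() {
    let m = vec![vec![1, 2, 3], vec![4, 5, 6]]; // 2 x 3
    let rotated = rotate_90_cw(&m); // 3 x 2
    assert_eq!(rotated, vec![vec![4, 1], vec![5, 2], vec![6, 3]]);
    // ROTATE_180 is the same as rotating 90 degrees clockwise twice
    assert_eq!(rotate_90_cw(&rotated), vec![vec![6, 5, 4], vec![3, 2, 1]]);
    println!("rotation ok");
}
```

ROTATE_90_COUNTERCLOCKWISE is then three applications of the same step, which is why its description reads "Rotate 270 degrees clockwise".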
* SVD_NO_UVindicates that only a vector of singular values `w` is to be processed, while u and vt will be set to empty matrices * SparseMat_HASH_BIT * SparseMat_HASH_SCALE * SparseMat_MAGIC_VAL * SparseMat_MAX_DIM * StsAssertassertion failed * StsAutoTracetracing * StsBackTracepseudo error for back trace * StsBadArgfunction arg/param is bad * StsBadFlagflag is wrong or not supported * StsBadFuncunsupported function * StsBadMaskbad format of mask (neither 8uC1 nor 8sC1) * StsBadMemBlockan allocated block has been corrupted * StsBadPointbad CvPoint * StsBadSizethe input/output structure size is incorrect * StsDivByZerodivision by zero * StsErrorunknown /unspecified error * StsFilterOffsetErrincorrect filter offset value * StsFilterStructContentErrincorrect filter structure content * StsInplaceNotSupportedin-place operation is not supported * StsInternalinternal error (bad state) * StsKernelStructContentErrincorrect transform kernel content * StsNoConviteration didn’t converge * StsNoMeminsufficient memory * StsNotImplementedthe requested function/feature is not implemented * StsNullPtrnull pointer * StsObjectNotFoundrequest can’t be completed * StsOkeverything is ok * StsOutOfRangesome of parameters are out of range * StsParseErrorinvalid syntax/structure of the parsed file * StsUnmatchedFormatsformats of input/output arrays differ * StsUnmatchedSizessizes of input/output structures do not match * StsUnsupportedFormatthe data format/type is not supported by the function * StsVecLengthErrincorrect vector length * TRIANGLES * TRIANGLE_FAN * TRIANGLE_STRIP * TYPE_FUN * TYPE_GENERAL * TYPE_MARKER * TYPE_WRAPPER * TermCriteria_COUNTthe maximum number of iterations or elements to compute * TermCriteria_EPSthe desired accuracy or change in parameters at which the iterative algorithm stops * TermCriteria_MAX_ITERditto * Texture2D_DEPTH_COMPONENTDepth * Texture2D_NONE * Texture2D_RGBRed, Green, Blue * Texture2D_RGBARed, Green, Blue, Alpha * UMatData_ASYNC_CLEANUP * 
UMatData_COPY_ON_MAP * UMatData_DEVICE_COPY_OBSOLETE * UMatData_DEVICE_MEM_MAPPED * UMatData_HOST_COPY_OBSOLETE * UMatData_TEMP_COPIED_UMAT * UMatData_TEMP_UMAT * UMatData_USER_ALLOCATED * UMat_AUTO_STEP * UMat_CONTINUOUS_FLAG * UMat_DEPTH_MASK * UMat_MAGIC_MASK * UMat_MAGIC_VAL * UMat_SUBMATRIX_FLAG * UMat_TYPE_MASK * USAGE_ALLOCATE_DEVICE_MEMORY * USAGE_ALLOCATE_HOST_MEMORY * USAGE_ALLOCATE_SHARED_MEMORY * USAGE_DEFAULT * WARP_SHUFFLE_FUNCTIONS * _InputArray_CUDA_GPU_MAT * _InputArray_CUDA_HOST_MEM * _InputArray_EXPRremoved: https://github.com/opencv/opencv/pull/17046 * _InputArray_FIXED_SIZE * _InputArray_FIXED_TYPE * _InputArray_KIND_MASK * _InputArray_KIND_SHIFT * _InputArray_MAT * _InputArray_MATX * _InputArray_NONE * _InputArray_OPENGL_BUFFER * _InputArray_STD_ARRAYremoved: https://github.com/opencv/opencv/issues/18897 * _InputArray_STD_ARRAY_MAT * _InputArray_STD_BOOL_VECTOR * _InputArray_STD_VECTOR * _InputArray_STD_VECTOR_CUDA_GPU_MAT * _InputArray_STD_VECTOR_MAT * _InputArray_STD_VECTOR_UMAT * _InputArray_STD_VECTOR_VECTOR * _InputArray_UMAT * _OutputArray_DEPTH_MASK_8S * _OutputArray_DEPTH_MASK_8U * _OutputArray_DEPTH_MASK_16F * _OutputArray_DEPTH_MASK_16S * _OutputArray_DEPTH_MASK_16U * _OutputArray_DEPTH_MASK_32F * _OutputArray_DEPTH_MASK_32S * _OutputArray_DEPTH_MASK_64F * _OutputArray_DEPTH_MASK_ALL * _OutputArray_DEPTH_MASK_ALL_16F * _OutputArray_DEPTH_MASK_ALL_BUT_8S * _OutputArray_DEPTH_MASK_FLT * __UMAT_USAGE_FLAGS_32BIT Traits --- * AlgorithmTraitMutable methods for core::Algorithm * AlgorithmTraitConstConstant methods for core::Algorithm * ArraysTraitMutable methods for core::Arrays * ArraysTraitConstConstant methods for core::Arrays * AsyncArrayTraitMutable methods for core::AsyncArray * AsyncArrayTraitConstConstant methods for core::AsyncArray * AsyncPromiseTraitMutable methods for core::AsyncPromise * AsyncPromiseTraitConstConstant methods for core::AsyncPromise * BufferPoolTraitMutable methods for core::BufferPool * 
BufferPoolTraitConstConstant methods for core::BufferPool * BufferTraitMutable methods for core::Buffer * BufferTraitConstConstant methods for core::Buffer * CommandLineParserTraitMutable methods for core::CommandLineParser * CommandLineParserTraitConstConstant methods for core::CommandLineParser * ConjGradSolverTraitMutable methods for core::ConjGradSolver * ConjGradSolverTraitConstConstant methods for core::ConjGradSolver * ContextTraitMutable methods for core::Context * ContextTraitConstConstant methods for core::Context * Context_UserContextTraitMutable methods for core::Context_UserContext * Context_UserContextTraitConstConstant methods for core::Context_UserContext * DataTypeImplement this trait for types that are valid to use as Mat elements. * Detail_CheckContextTraitMutable methods for core::Detail_CheckContext * Detail_CheckContextTraitConstConstant methods for core::Detail_CheckContext * DeviceInfoTraitMutable methods for core::DeviceInfo * DeviceInfoTraitConstConstant methods for core::DeviceInfo * DeviceTraitMutable methods for core::Device * DeviceTraitConstConstant methods for core::Device * DownhillSolverTraitMutable methods for core::DownhillSolver * DownhillSolverTraitConstConstant methods for core::DownhillSolver * ElemMulelementwise multiplication * EventTraitMutable methods for core::Event * EventTraitConstConstant methods for core::Event * ExceptionTraitMutable methods for core::Exception * ExceptionTraitConstConstant methods for core::Exception * FileNodeIteratorTraitMutable methods for core::FileNodeIterator * FileNodeIteratorTraitConstConstant methods for core::FileNodeIterator * FileNodeTraitMutable methods for core::FileNode * FileNodeTraitConstConstant methods for core::FileNode * FileStorageTraitMutable methods for core::FileStorage * FileStorageTraitConstConstant methods for core::FileStorage * FormattedTraitMutable methods for core::Formatted * FormattedTraitConstConstant methods for core::Formatted * FormatterTraitMutable methods for 
core::Formatter
* FormatterTraitConst: Constant methods for core::Formatter
* FunctionParamsTrait / FunctionParamsTraitConst: Mutable / constant methods for core::FunctionParams
* GpuDataTrait / GpuDataTraitConst: Mutable / constant methods for core::GpuData
* GpuMatNDTrait / GpuMatNDTraitConst: Mutable / constant methods for core::GpuMatND
* GpuMatTrait / GpuMatTraitConst: Mutable / constant methods for core::GpuMat
* GpuMat_AllocatorTrait / GpuMat_AllocatorTraitConst: Mutable / constant methods for core::GpuMat_Allocator
* HammingTrait / HammingTraitConst: Mutable / constant methods for core::Hamming
* HostMemTrait / HostMemTraitConst: Mutable / constant methods for core::HostMem
* Image2DTrait / Image2DTraitConst: Mutable / constant methods for core::Image2D
* KernelArgTrait / KernelArgTraitConst: Mutable / constant methods for core::KernelArg
* KernelTrait / KernelTraitConst: Mutable / constant methods for core::Kernel
* KeyPointTrait / KeyPointTraitConst: Mutable / constant methods for core::KeyPoint
* LDATrait / LDATraitConst: Mutable / constant methods for core::LDA
* LogTagTrait / LogTagTraitConst: Mutable / constant methods for core::LogTag
* MatConstIteratorTrait / MatConstIteratorTraitConst: Mutable / constant methods for core::MatConstIterator
* MatConstIteratorTraitManual
* MatExprTrait / MatExprTraitConst: Mutable / constant methods for core::MatExpr
* MatOpTrait / MatOpTraitConst: Mutable / constant methods for core::MatOp
* MatSizeTrait / MatSizeTraitConst: Mutable / constant methods for core::MatSize
* MatStepTrait / MatStepTraitConst: Mutable / constant methods for core::MatStep
* MatTrait / MatTraitConst: Mutable / constant methods for core::Mat
* MatTraitConstManual
* MatTraitManual
* MatxTrait
* Matx_AddOpTrait / Matx_AddOpTraitConst: Mutable / constant methods for core::Matx_AddOp
* Matx_DivOpTrait / Matx_DivOpTraitConst: Mutable / constant methods for core::Matx_DivOp
* Matx_MatMulOpTrait / Matx_MatMulOpTraitConst: Mutable / constant methods for core::Matx_MatMulOp
* Matx_MulOpTrait / Matx_MulOpTraitConst: Mutable / constant methods for core::Matx_MulOp
* Matx_ScaleOpTrait / Matx_ScaleOpTraitConst: Mutable / constant methods for core::Matx_ScaleOp
* Matx_SubOpTrait / Matx_SubOpTraitConst: Mutable / constant methods for core::Matx_SubOp
* Matx_TOpTrait / Matx_TOpTraitConst: Mutable / constant methods for core::Matx_TOp
* MinProblemSolverTrait / MinProblemSolverTraitConst: Mutable / constant methods for core::MinProblemSolver
* MinProblemSolver_FunctionTrait / MinProblemSolver_FunctionTraitConst: Mutable / constant methods for core::MinProblemSolver_Function
* NodeDataTrait / NodeDataTraitConst: Mutable / constant methods for core::NodeData
* OpenCLExecutionContextTrait / OpenCLExecutionContextTraitConst: Mutable / constant methods for core::OpenCLExecutionContext
* OriginalClassNameTrait / OriginalClassNameTraitConst: Mutable / constant methods for core::OriginalClassName
* PCATrait / PCATraitConst: Mutable / constant methods for core::PCA
* ParallelLoopBodyTrait / ParallelLoopBodyTraitConst: Mutable / constant methods for core::ParallelLoopBody
* PlatformInfoTrait / PlatformInfoTraitConst: Mutable / constant methods for core::PlatformInfo
* PlatformTrait / PlatformTraitConst: Mutable / constant methods for core::Platform
* ProgramSourceTrait / ProgramSourceTraitConst: Mutable / constant methods for core::ProgramSource
* ProgramTrait / ProgramTraitConst: Mutable / constant methods for core::Program
* QueueTrait / QueueTraitConst: Mutable / constant methods for core::Queue
* RNGTrait / RNGTraitConst: Mutable / constant methods for core::RNG
* RNG_MT19937Trait / RNG_MT19937TraitConst: Mutable / constant methods for core::RNG_MT19937
* RangeTrait / RangeTraitConst: Mutable / constant methods for core::Range
* SVDTrait / SVDTraitConst: Mutable / constant methods for core::SVD
* SizedArray
* SparseMatConstIteratorTrait / SparseMatConstIteratorTraitConst: Mutable / constant methods for core::SparseMatConstIterator
* SparseMatIteratorTrait / SparseMatIteratorTraitConst: Mutable / constant methods for core::SparseMatIterator
* SparseMatTrait / SparseMatTraitConst: Mutable / constant methods for core::SparseMat
* SparseMat_HdrTrait / SparseMat_HdrTraitConst: Mutable / constant methods for core::SparseMat_Hdr
* SparseMat_NodeTrait / SparseMat_NodeTraitConst: Mutable / constant methods for core::SparseMat_Node
* StreamTrait / StreamTraitConst: Mutable / constant methods for core::Stream
* TargetArchsTrait / TargetArchsTraitConst: Mutable / constant methods for core::TargetArchs
* Texture2DTrait / Texture2DTraitConst: Mutable / constant methods for core::Texture2D
* TickMeterTrait / TickMeterTraitConst: Mutable / constant methods for core::TickMeter
* TimerTrait / TimerTraitConst: Mutable / constant methods for core::Timer
* ToInputArray: Trait to serve as a replacement for `InputArray` in C++ OpenCV
* ToInputOutputArray: Trait to serve as a replacement for `InputOutputArray` in C++ OpenCV
* ToOutputArray: Trait to serve as a replacement for `OutputArray` in C++ OpenCV
* TupleExtern
* UMatDataTrait / UMatDataTraitConst: Mutable / constant methods for core::UMatData
* UMatTrait / UMatTraitConst: Mutable / constant methods for core::UMat
* VectorElement: This trait is implemented by any type that can be stored inside `Vector`.
* WriteStructContextTrait / WriteStructContextTraitConst: Mutable / constant methods for core::WriteStructContext
* _InputArrayTrait / _InputArrayTraitConst: Mutable / constant methods for core::_InputArray
* _InputOutputArrayTrait / _InputOutputArrayTraitConst: Mutable / constant methods for core::_InputOutputArray
* _OutputArrayTrait / _OutputArrayTraitConst: Mutable / constant methods for core::_OutputArray

Functions
---
* CV_MAKETYPE, CV_MAKE_TYPE, CV_MAT_DEPTH
* abs / abs_matexpr: Calculates an absolute value of each matrix element.
* absdiff: Calculates the per-element absolute difference between two arrays or between an array and a scalar.
* add / add_def: Calculates the per-element sum of two arrays or an array and a scalar.
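The per-element `add` listed above saturates for integer types (an 8-bit result clamps to [0, 255]) rather than wrapping. A minimal plain-Rust sketch of that semantics on byte slices (an illustrative helper, not the crate's actual API, which goes through `Mat` and the `ToInputArray` traits):

```rust
/// Per-element saturating sum of two 8-bit arrays, mirroring the
/// clamp-to-[0, 255] behaviour OpenCV applies to CV_8U inputs.
fn add_saturate_u8(src1: &[u8], src2: &[u8]) -> Vec<u8> {
    src1.iter()
        .zip(src2)
        .map(|(&a, &b)| a.saturating_add(b)) // 200 + 100 -> 255, not 44
        .collect()
}

fn main() {
    let dst = add_saturate_u8(&[10, 200, 255], &[20, 100, 1]);
    assert_eq!(dst, vec![30, 255, 255]);
}
```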
* add_mat_mat: @relates cv::MatExpr
* add_mat_matexpr, add_mat_scalar, add_matexpr_mat, add_matexpr_matexpr, add_matexpr_scalar
* add_samples_data_search_path: Override search data path by adding new search location
* add_samples_data_search_sub_directory: Append samples search data sub directory
* add_scalar_mat, add_scalar_matexpr
* add_weighted / add_weighted_def: Calculates the weighted sum of two arrays.
* and_mat_mat, and_mat_scalar, and_scalar_mat
* attach_context ⚠: Attaches OpenCL context to OpenCV
* batch_distance / batch_distance_def: naive nearest neighbor finder
* bitwise_and / bitwise_and_def: computes bitwise conjunction of the two arrays (dst = src1 & src2) Calculates the per-element bit-wise conjunction of two arrays or an array and a scalar.
* bitwise_not / bitwise_not_def: Inverts every bit of an array.
* bitwise_or / bitwise_or_def: Calculates the per-element bit-wise disjunction of two arrays or an array and a scalar.
* bitwise_xor / bitwise_xor_def: Calculates the per-element bit-wise "exclusive or" operation on two arrays or an array and a scalar.
* border_interpolate: Computes the source location of an extrapolated pixel.
* build_options_add_matrix_description
* calc_covar_matrix: Calculates the covariance matrix of a set of vectors.
* calc_covar_matrix_def: @overload
* cart_to_polar / cart_to_polar_def: Calculates the magnitude and angle of 2D vectors.
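The magnitude/angle computation behind `cart_to_polar` can be sketched per element in plain Rust (a hypothetical helper, not the crate call; the real function also offers degree output and operates on whole arrays):

```rust
/// Magnitude and angle (radians, in [0, 2*pi)) of a 2D vector,
/// matching the per-element formulas cart_to_polar documents.
fn cart_to_polar_one(x: f64, y: f64) -> (f64, f64) {
    let magnitude = x.hypot(y);              // sqrt(x^2 + y^2)
    let mut angle = y.atan2(x);              // in (-pi, pi]
    if angle < 0.0 {
        angle += 2.0 * std::f64::consts::PI; // shift into [0, 2*pi)
    }
    (magnitude, angle)
}

fn main() {
    let (m, a) = cart_to_polar_one(3.0, 4.0);
    assert!((m - 5.0).abs() < 1e-12);
    assert!(a > 0.92 && a < 0.93); // atan2(4, 3) ~ 0.927 rad
}
```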
* check_failed_auto, check_failed_auto_1, check_failed_auto_2, check_failed_auto_3, check_failed_auto_4, check_failed_auto_5, check_failed_auto_6, check_failed_auto_7, check_failed_auto_8, check_failed_auto_9, check_failed_auto_10, check_failed_auto_11
* check_failed_false, check_failed_mat_channels, check_failed_mat_channels_1, check_failed_mat_depth, check_failed_mat_depth_1, check_failed_mat_type, check_failed_mat_type_1, check_failed_true
* check_hardware_support: Returns true if the specified feature is supported by the host hardware.
* check_optimal_vector_width: C++ default parameters
* check_optimal_vector_width_def: Note
* check_range / check_range_def: Checks every element of an input array for invalid values.
* cholesky / cholesky_f32: proxy for hal::Cholesky
* compare: Performs the per-element comparison of two arrays or an array and scalar value.
* complete_symm / complete_symm_def: Copies the lower or the upper half of a square matrix to its another half.
* convert_fp16: Converts an array to half precision floating number.
* convert_from_buffer ⚠: Convert OpenCL buffer to UMat
* convert_from_gl_texture_2d: Converts Texture2D object to OutputArray.
* convert_from_image ⚠: Convert OpenCL image2d_t to UMat
* convert_from_va_surface ⚠: Converts VASurfaceID object to OutputArray.
* convert_scale_abs / convert_scale_abs_def: Scales, calculates absolute values, and converts the result to 8-bit.
* convert_to_gl_texture_2d: Converts InputArray to Texture2D object.
* convert_to_va_surface ⚠: Converts InputArray to VASurfaceID object.
* convert_type_str, convert_type_str_1
* copy_make_border / copy_make_border_def: Forms a border around an image.
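`border_interpolate` maps an out-of-range coordinate back into [0, len) according to a border mode, and `copy_make_border` applies the same rule when padding an image. A sketch of two common modes, replicate and reflect-101, assuming only those two (OpenCV supports several more):

```rust
/// BORDER_REPLICATE: aaaaaa|abcdefgh|hhhhhhh
fn border_replicate(p: i32, len: i32) -> i32 {
    p.clamp(0, len - 1)
}

/// BORDER_REFLECT_101: gfedcb|abcdefgh|gfedcba
/// (the edge pixel itself is not repeated; assumes len > 1)
fn border_reflect_101(mut p: i32, len: i32) -> i32 {
    while p < 0 || p >= len {
        if p < 0 {
            p = -p;
        }
        if p >= len {
            p = 2 * (len - 1) - p;
        }
    }
    p
}

fn main() {
    assert_eq!(border_replicate(-3, 8), 0);
    assert_eq!(border_replicate(9, 8), 7);
    assert_eq!(border_reflect_101(-2, 8), 2);
    assert_eq!(border_reflect_101(8, 8), 6);
}
```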
* copy_mat_and_dump_named_arguments: C++ default parameters
* copy_mat_and_dump_named_arguments_def: Note
* copy_to: This is an overloaded member function, provided for convenience (python). Copies the matrix to another one. When the operation mask is specified, if the Mat::create call shown above reallocates the matrix, the newly allocated matrix is initialized with all zeros before copying the data.
* count_non_zero: Counts non-zero array elements.
* create_continuous: Creates a continuous matrix.
* create_gpu_mat_from_cuda_memory / create_gpu_mat_from_cuda_memory_1: Bindings overload to create a GpuMat from existing GPU memory.
* create_gpu_mat_from_cuda_memory_1_def: @overload
* create_gpu_mat_from_cuda_memory_def: Bindings overload to create a GpuMat from existing GPU memory.
* cube_root: Computes the cube root of an argument.
* dct / dct_def: Performs a forward or inverse discrete Cosine transform of 1D or 2D array.
* depth_to_string: Returns string of cv::Mat depth value: CV_8U -> "CV_8U" or ""
* determinant: Returns the determinant of a square floating-point matrix.
* device_supports: checks whether current device supports the given feature
* dft / dft_def: Performs a forward or inverse Discrete Fourier transform of a 1D or 2D floating-point array.
* div_f64_mat, div_f64_matexpr, div_mat_f64, div_mat_mat, div_mat_matexpr, div_matexpr_f64, div_matexpr_mat, div_matexpr_matexpr
* divide / divide2 / divide2_def: Performs per-element division of two arrays or a scalar by an array.
* divide_def: @overload
* dump_bool, dump_c_string, dump_double, dump_float, dump_input_array, dump_input_array_of_arrays, dump_input_output_array, dump_input_output_array_of_arrays, dump_int, dump_int64, dump_range, dump_rect, dump_rotated_rect, dump_size_t, dump_string, dump_term_criteria
* dump_vec2i: C++ default parameters
* dump_vec2i_def: Note
* dump_vector_of_double, dump_vector_of_int, dump_vector_of_rect
* eigen / eigen_def: Calculates eigenvalues and eigenvectors of a symmetric matrix.
* eigen_non_symmetric: Calculates eigenvalues and eigenvectors of a non-symmetric matrix (real eigenvalues only).
* ensure_size_is_enough: Ensures that the size of a matrix is big enough and the matrix has a proper type.
* equals_f64_mat
* equals_filenodeiterator_filenodeiterator: @relates cv::FileNodeIterator
* equals_mat_f64, equals_mat_mat
* error: Signals an error and raises the exception.
* error_1: Deprecated. Signals an error and raises the exception.
* exp: Calculates the exponent of every array element.
* extract_channel: Extracts a single channel from src (coi is 0-based index)
* fast_atan2: Calculates the angle of a 2D vector in degrees.
* find_file / find_file_def: Try to find requested data file
* find_file_or_keep: C++ default parameters
* find_file_or_keep_def: Note
* find_non_zero: Returns the list of locations of non-zero pixels
* finish
* flip: Flips a 2D array around vertical, horizontal, or both axes.
* flip_nd: Flips an n-dimensional array at the given axis
* gemm / gemm_def: Performs generalized matrix multiplication.
* generate_vector_of_int, generate_vector_of_mat, generate_vector_of_rect
* get_build_information: Returns full configuration time cmake output.
* get_cache_directory_for_downloads
* get_cpu_features_line: Returns list of CPU features enabled during compilation.
* get_cpu_tick_count: Returns the number of CPU ticks.
* get_cuda_enabled_device_count: Returns the number of installed CUDA-enabled devices.
* get_device: Returns the current device index set by cuda::setDevice or initialized by default.
* get_elem_size, get_flags
* get_global_log_tag: Get global log tag
* get_hardware_feature_name: Returns feature name by ID
* get_ipp_error_location, get_ipp_features, get_ipp_status, get_ipp_version
* get_log_level: Get global logging level
* get_log_level_1, get_log_tag_level
* get_num_threads: Returns the number of threads used by OpenCV for parallel regions.
* get_number_of_cpus: Returns the number of logical CPUs available for the process.
* get_opencl_error_string
* get_optimal_dft_size: Returns the optimal DFT size for a given vector size.
* get_platfoms_info
* get_thread_id
* get_thread_num: Deprecated. Returns the index of the currently executed thread within the current parallel region. Always returns 0 if called outside of parallel region.
* get_tick_count: Returns the number of ticks.
* get_tick_frequency: Returns the number of ticks per second.
* get_type_from_d3d_format / get_type_from_dxgi_format: Get OpenCV type from DirectX type
* get_version_major: Returns major library version
* get_version_minor: Returns minor library version
* get_version_revision: Returns revision field of the library version
* get_version_string: Returns library version string
* glob: C++ default parameters
* glob_def: Note
* greater_than_f64_mat, greater_than_mat_f64, greater_than_mat_mat, greater_than_or_equal_f64_mat, greater_than_or_equal_mat_f64, greater_than_or_equal_mat_mat
* has_non_zero: Checks for the presence of at least one non-zero array element.
* have_amd_blas, have_amd_fft, have_opencl
* have_openvx: Check if use of OpenVX is possible
* have_svm
* hconcat / hconcat2: Applies horizontal concatenation to given matrices.
* idct / idct_def: Calculates the inverse Discrete Cosine Transform of a 1D or 2D array.
* idft / idft_def: Calculates the inverse Discrete Fourier Transform of a 1D or 2D array.
* in_range: Checks if array elements lie between the elements of two other arrays.
* initialize_context_from_gl: Creates OpenCL context from GL.
* initialize_context_from_va / initialize_context_from_va_def ⚠: Creates OpenCL context from VA.
* insert_channel: Inserts a single channel to dst (coi is 0-based index)
* invert / invert_def: Finds the inverse or pseudo-inverse of a matrix.
* kernel_to_str: C++ default parameters
* kernel_to_str_def: Note
* kmeans / kmeans_def: Finds centers of clusters and groups input samples around the clusters.
* less_than_f64_mat, less_than_mat_f64, less_than_mat_mat, less_than_or_equal_f64_mat, less_than_or_equal_mat_f64, less_than_or_equal_mat_mat
* log: Calculates the natural logarithm of every array element.
* lu / lu_f32: proxy for hal::LU
* lut: Performs a look-up table transform of an array.
* magnitude: Calculates the magnitude of 2D vectors.
* mahalanobis: Calculates the Mahalanobis distance between two vectors.
* map_gl_buffer / map_gl_buffer_def: Maps Buffer object to process on CL side (convert to UMat).
* max: Calculates per-element maximum of two arrays or an array and a scalar.
* max_f64_mat, max_mat, max_mat_f64
* max_mat_to / max_umat_to: Calculates per-element maximum of two arrays or an array and a scalar.
* mean / mean_def: Calculates an average (mean) of array elements.
* mean_std_dev: Calculates a mean and standard deviation of array elements.
* mean_std_dev_def: Calculates a mean and standard deviation of array elements.
* memop_type_to_str
* merge: Creates one multi-channel array out of several single-channel ones.
* min: Calculates per-element minimum of two arrays or an array and a scalar.
* min_f64_mat, min_mat, min_mat_f64
* min_mat_to: Calculates per-element minimum of two arrays or an array and a scalar.
* min_max_idx / min_max_idx_def: Finds the global minimum and maximum in an array
* min_max_loc / min_max_loc_def: Finds the global minimum and maximum in an array.
* min_max_loc_sparse: Finds the global minimum and maximum in an array.
* min_max_loc_sparse_def: @overload
* min_umat_to: Calculates per-element minimum of two arrays or an array and a scalar.
* mix_channels / mix_channels_vec: Copies specified channels from input arrays to the specified channels of output arrays.
* mul_f64_mat, mul_f64_matexpr, mul_mat_f64, mul_mat_mat, mul_mat_matexpr, mul_matexpr_f64, mul_matexpr_mat, mul_matexpr_matexpr
* mul_spectrums / mul_spectrums_def: Performs the per-element multiplication of two Fourier spectrums.
* mul_transposed / mul_transposed_def: Calculates the product of a matrix and its transposition.
* multiply / multiply_def: Calculates the per-element scaled product of two arrays.
* negate, no_array
* norm / norm_def: Calculates the absolute norm of an array.
* norm2 / norm2_def: Calculates an absolute difference norm or a relative difference norm.
* norm_sparse: Calculates an absolute difference norm or a relative difference norm.
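The absolute norms computed by `norm` come in L1 (sum of |x|), L2 (Euclidean), and L-infinity (max |x|) flavours; `norm2` applies the same formulas to the element-wise difference of two arrays. A plain-Rust sketch over a flat f64 slice (the crate versions select the flavour via a NormTypes flag and accept an optional mask):

```rust
/// L1 norm: sum of absolute values.
fn norm_l1(a: &[f64]) -> f64 {
    a.iter().map(|x| x.abs()).sum()
}

/// L2 (Euclidean) norm: sqrt of the sum of squares.
fn norm_l2(a: &[f64]) -> f64 {
    a.iter().map(|x| x * x).sum::<f64>().sqrt()
}

/// L-infinity norm: largest absolute value.
fn norm_inf(a: &[f64]) -> f64 {
    a.iter().fold(0.0, |m, x| m.max(x.abs()))
}

fn main() {
    let v = [3.0, -4.0];
    assert_eq!(norm_l1(&v), 7.0);
    assert_eq!(norm_l2(&v), 5.0);
    assert_eq!(norm_inf(&v), 4.0);
}
```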
* normalize / normalize_def: Normalizes the norm or value range of an array.
* normalize_sparse: Normalizes the norm or value range of an array.
* not_equals_f64_mat, not_equals_filenodeiterator_filenodeiterator, not_equals_mat_f64, not_equals_mat_mat
* or_mat_mat, or_mat_scalar, or_scalar_mat
* parallel_for_ / parallel_for__def: Parallel data processor
* patch_na_ns / patch_na_ns_def: converts NaNs to the given number
* pca_back_project: wrap PCA::backProject
* pca_compute / pca_compute_def: wrap PCA::operator()
* pca_compute2 / pca_compute2_def: wrap PCA::operator() and add eigenvalues output parameter
* pca_compute2_variance: wrap PCA::operator() and add eigenvalues output parameter
* pca_compute_variance: wrap PCA::operator()
* pca_project: wrap PCA::project
* perspective_transform: Performs the perspective matrix transformation of vectors.
* phase / phase_def: Calculates the rotation angle of 2D vectors.
* polar_to_cart / polar_to_cart_def: Calculates x and y coordinates of 2D vectors from their magnitude and angle.
* pow: Raises every array element to a power.
* predict_optimal_vector_width: C++ default parameters
* predict_optimal_vector_width_def: Note
* predict_optimal_vector_width_max: C++ default parameters
* predict_optimal_vector_width_max_def: Note
* print_cuda_device_info, print_short_cuda_device_info
* psnr / psnr_def: Computes the Peak Signal-to-Noise Ratio (PSNR) image quality metric.
* rand_shuffle / rand_shuffle_def: Shuffles the array elements randomly.
* randn: Fills the array with normally distributed random numbers.
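The PSNR metric listed above reduces to 10·log10(MAX² / MSE) between two equally sized arrays, where MAX is the largest representable pixel value (255 for 8-bit data). A self-contained sketch of that formula (the crate function takes `Mat` inputs and an `r` parameter for MAX):

```rust
/// PSNR in decibels between two same-length 8-bit buffers.
/// Returns f64::INFINITY when the buffers are identical (MSE = 0).
fn psnr_u8(a: &[u8], b: &[u8]) -> f64 {
    assert_eq!(a.len(), b.len());
    let mse: f64 = a
        .iter()
        .zip(b)
        .map(|(&x, &y)| {
            let d = x as f64 - y as f64;
            d * d
        })
        .sum::<f64>()
        / a.len() as f64;
    if mse == 0.0 {
        f64::INFINITY
    } else {
        10.0 * (255.0_f64 * 255.0 / mse).log10()
    }
}

fn main() {
    assert_eq!(psnr_u8(&[1, 2, 3], &[1, 2, 3]), f64::INFINITY);
    let db = psnr_u8(&[0, 0, 0, 0], &[1, 1, 1, 1]); // MSE = 1
    assert!((db - 48.1308).abs() < 1e-3);           // 20*log10(255)
}
```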
* randu: Generates a single uniformly-distributed random number or an array of random numbers.
* read_dmatch, read_dmatch_vec_legacy, read_f32, read_f64
* read_i32: @relates cv::FileNode
* read_keypoint, read_keypoint_vec_legacy
* read_mat: C++ default parameters
* read_mat_def: Note
* read_sparsemat: C++ default parameters
* read_sparsemat_def: Note
* read_str
* rectangle_intersection_area: Finds out if there is any intersection between two rectangles
* reduce / reduce_def: Reduces a matrix to a vector.
* reduce_arg_max / reduce_arg_max_def: Finds indices of max elements along provided axis
* reduce_arg_min / reduce_arg_min_def: Finds indices of min elements along provided axis
* register_log_tag
* register_page_locked: Page-locks the memory of matrix and maps it for the device(s).
* render / render_1 / render_2: Render OpenGL texture or primitives.
* render_1_def, render_2_def: @overload
* render_def: Render OpenGL texture or primitives.
* repeat / repeat_to: Fills the output array with repeated copies of the input array.
* reset_device: Explicitly destroys and cleans up all resources associated with the current device in the current process.
* reset_trace
* rotate: Rotates a 2D array in multiples of 90 degrees. The function cv::rotate rotates the array in one of three different ways:
* scale_add: Calculates the sum of a scaled array and another array.
* set_break_on_error: Sets/resets the break-on-error mode.
* set_buffer_pool_config
* set_buffer_pool_usage: BufferPool management (must be called before Stream creation)
* set_device: Sets a device and initializes it for the current thread.
* set_flags
* set_gl_device: Sets a CUDA device and initializes it for the current thread with OpenGL interoperability.
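`reduce` collapses a matrix along one dimension, for example summing each row into a single column or each column into a single row. A sketch of the sum reduction on a row-major `Vec<Vec<f64>>` (the crate version selects the operation via a ReduceTypes flag and a `dim` argument):

```rust
/// Sum each row into one value per row (like reduce with dim = 1, REDUCE_SUM).
fn reduce_rows_sum(m: &[Vec<f64>]) -> Vec<f64> {
    m.iter().map(|row| row.iter().sum()).collect()
}

/// Sum each column into one value per column (dim = 0, REDUCE_SUM).
fn reduce_cols_sum(m: &[Vec<f64>]) -> Vec<f64> {
    let cols = m.first().map_or(0, |r| r.len());
    let mut out = vec![0.0; cols];
    for row in m {
        for (acc, v) in out.iter_mut().zip(row) {
            *acc += v;
        }
    }
    out
}

fn main() {
    let m = vec![vec![1.0, 2.0], vec![3.0, 4.0]];
    assert_eq!(reduce_rows_sum(&m), vec![3.0, 7.0]);
    assert_eq!(reduce_cols_sum(&m), vec![4.0, 6.0]);
}
```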
* set_gl_device_def: Sets a CUDA device and initializes it for the current thread with OpenGL interoperability.
* set_identity / set_identity_def: Initializes a scaled identity matrix.
* set_ipp_status: C++ default parameters
* set_ipp_status_def: Note
* set_log_level: Set global logging level
* set_log_level_1, set_log_tag_level
* set_num_threads: OpenCV will try to set the number of threads for subsequent parallel regions.
* set_rng_seed: Sets state of default random number generator.
* set_use_instrumentation, set_use_ipp, set_use_ipp_not_exact, set_use_opencl
* set_use_openvx: Enable/disable use of OpenVX
* set_use_optimized: Enables or disables the optimized code.
* solve / solve_def: Solves one or more linear systems or least-squares problems.
* solve_cubic: Finds the real roots of a cubic equation.
* solve_lp / solve_lp_1: Solve given (non-integer) linear programming problem using the Simplex Algorithm (Simplex Method).
* solve_poly / solve_poly_def: Finds the real or complex roots of a polynomial equation.
* sort: Sorts each row or each column of a matrix.
* sort_idx: Sorts each row or each column of a matrix.
* split / split_slice: Divides a multi-channel array into several single-channel arrays.
* sqrt: Calculates a square root of array elements.
* sub_mat, sub_mat_mat, sub_mat_matexpr, sub_mat_scalar, sub_matexpr, sub_matexpr_mat, sub_matexpr_matexpr, sub_matexpr_scalar, sub_scalar_mat, sub_scalar_matexpr
* subtract / subtract_def: Calculates the per-element difference between two arrays or array and a scalar.
* sum_elems: Calculates the sum of array elements.
* sv_back_subst: wrap SVD::backSubst
* sv_decomp / sv_decomp_def: wrap SVD::compute
* swap / swap_umat: Swaps two matrices
* tempfile: C++ default parameters
* tempfile_def: Note
* test_async_array, test_async_exception, test_echo_boolean_function
* test_overload_resolution: C++ default parameters
* test_overload_resolution_1
* test_overload_resolution_def: Note
* test_overwrite_native_method, test_raise_general_exception
* test_reserved_keyword_conversion: C++ default parameters
* test_reserved_keyword_conversion_def: Note
* test_rotated_rect, test_rotated_rect_vector
* the_rng: Returns the default random number generator.
* trace: Returns the trace of a matrix.
* transform: Performs the matrix transformation of every array element.
* transpose: Transposes a matrix.
* transpose_nd: Transpose for n-dimensional matrices.
* type_to_str
* type_to_string: Returns string of cv::Mat type value: CV_8UC3 -> "CV_8UC3" or ""
* unmap_gl_buffer: Unmaps Buffer object (releases UMat, previously mapped from Buffer).
* unregister_page_locked: Unmaps the memory of matrix and makes it pageable again.
* use_instrumentation, use_ipp, use_ipp_not_exact, use_opencl
* use_openvx: Check if use of OpenVX is enabled
* use_optimized: Returns the status of optimized code usage.
* vconcat / vconcat2: Applies vertical concatenation to given matrices.
* vecop_type_to_str
* wrap_stream: Bindings overload to create a Stream object from the address stored in an existing CUDA Runtime API stream pointer (cudaStream_t).
* write_dmatch_vec, write_f32, write_f64
* write_i32: @relates cv::FileStorage
* write_keypoint_vec
* write_log_message / write_log_message_ex: Write log message
* write_mat, write_scalar_f32, write_scalar_f64, write_scalar_i32, write_scalar_str, write_sparsemat, write_str
* xor_mat_mat, xor_mat_scalar, xor_scalar_mat

Type Aliases
---
* Affine3d, Affine3f
* GpuMatND_IndexArray, GpuMatND_SizeArray, GpuMatND_StepArray
* HammingLUT, Hamming_result_type, Hamming_value_type
* InputArray, InputArrayOfArrays, InputOutputArray, InputOutputArrayOfArrays
* Mat1b, Mat1d, Mat1f, Mat1i, Mat1s, Mat1w, Mat2b, Mat2d, Mat2f, Mat2i, Mat2s, Mat2w, Mat3b, Mat3d, Mat3f, Mat3i, Mat3s, Mat3w, Mat4b, Mat4d, Mat4f, Mat4i, Mat4s, Mat4w
* MatConstIterator_difference_type, MatConstIterator_pointer, MatConstIterator_reference, MatConstIterator_value_type
* MatND
* Matx12, Matx12d, Matx12f, Matx13, Matx13d, Matx13f, Matx14, Matx14d, Matx14f, Matx16, Matx16d, Matx16f, Matx21, Matx21d, Matx21f, Matx22, Matx22d, Matx22f, Matx23, Matx23d, Matx23f, Matx31, Matx31d, Matx31f, Matx32, Matx32d, Matx32f, Matx33, Matx33d, Matx33f, Matx34, Matx34d, Matx34f, Matx41, Matx41d, Matx41f, Matx43, Matx43d, Matx43f, Matx44, Matx44d, Matx44f, Matx61, Matx61d, Matx61f, Matx66, Matx66d, Matx66f
* OutputArray, OutputArrayOfArrays
* Point, Point2d, Point2f, Point2i, Point2l, Point3d, Point3f, Point3i
* ProgramSource_hash_t
* Rect, Rect2d, Rect2f, Rect2i
* Scalar, Scalar_
* Size, Size2d, Size2f, Size2i, Size2l
* SparseMat_const_iterator, SparseMat_iterator
* Stream_StreamCallback
* Vec2b: Shorter aliases for the most popular specializations of Vec<T,n>
* Vec2d, Vec2f, Vec2i, Vec2s, Vec2w, Vec3b, Vec3d, Vec3f, Vec3i, Vec3s, Vec3w, Vec4b, Vec4d, Vec4f, Vec4i, Vec4s, Vec4w, Vec6d, Vec6f, Vec6i, Vec8i
* va_display, va_surface_id

Module opencv::cudaarithm
===
Operations on Matrices

Modules
---
* prelude

Structs
---
* Convolution: Base class for convolution (or cross-correlation) operator.
* DFT: Base class for DFT operator as a cv::Algorithm.
* LookUpTable: Base class for transform using lookup table.

Traits
---
* ConvolutionTrait / ConvolutionTraitConst: Mutable / constant methods for crate::cudaarithm::Convolution
* DFTTrait / DFTTraitConst: Mutable / constant methods for crate::cudaarithm::DFT
* LookUpTableTrait / LookUpTableTraitConst: Mutable / constant methods for crate::cudaarithm::LookUpTable

Functions
---
* abs / abs_def: Computes an absolute value of each matrix element.
* abs_sum / abs_sum_def: Returns the sum of absolute values for matrix elements.
* absdiff / absdiff_def: Computes per-element absolute difference of two matrices (or of a matrix and scalar).
* add / add_def: Computes a matrix-matrix or matrix-scalar sum.
* add_weighted / add_weighted_def: Computes the weighted sum of two arrays.
* bitwise_and / bitwise_and_def: Performs a per-element bitwise conjunction of two matrices (or of matrix and scalar).
* bitwise_not / bitwise_not_def: Performs a per-element bitwise inversion.
* bitwise_or / bitwise_or_def: Performs a per-element bitwise disjunction of two matrices (or of matrix and scalar).
* bitwise_xor: Performs a per-element bitwise exclusive or operation of two matrices (or of matrix and scalar).
* bitwise_xor_defPerforms a per-element bitwise exclusive or operation of two matrices (or of matrix and scalar). * calc_abs_sumThis is an overloaded member function, provided for convenience. It differs from the above function only in what argument(s) it accepts. * calc_abs_sum_def@overload * calc_normThis is an overloaded member function, provided for convenience. It differs from the above function only in what argument(s) it accepts. * calc_norm_def@overload * calc_norm_diffThis is an overloaded member function, provided for convenience. It differs from the above function only in what argument(s) it accepts. * calc_norm_diff_def@overload * calc_sqr_sumThis is an overloaded member function, provided for convenience. It differs from the above function only in what argument(s) it accepts. * calc_sqr_sum_def@overload * calc_sumThis is an overloaded member function, provided for convenience. It differs from the above function only in what argument(s) it accepts. * calc_sum_def@overload * cart_to_polarConverts Cartesian coordinates into polar. * cart_to_polar_defConverts Cartesian coordinates into polar. * compareCompares elements of two matrices (or of a matrix and scalar). * compare_defCompares elements of two matrices (or of a matrix and scalar). * copy_make_borderForms a border around an image. * copy_make_border_defForms a border around an image. * count_non_zeroCounts non-zero matrix elements. * count_non_zero_1Counts non-zero matrix elements. * count_non_zero_1_def@overload * create_convolutionCreates implementation for cuda::Convolution . * create_convolution_defCreates implementation for cuda::Convolution . * create_dftCreates implementation for cuda::DFT. * create_look_up_tableCreates implementation for cuda::LookUpTable . * dftPerforms a forward or inverse discrete Fourier transform (1D or 2D) of the floating point matrix. * dft_defPerforms a forward or inverse discrete Fourier transform (1D or 2D) of the floating point matrix. 
* divideComputes a matrix-matrix or matrix-scalar division. * divide_defComputes a matrix-matrix or matrix-scalar division. * expComputes an exponent of each matrix element. * exp_defComputes an exponent of each matrix element. * find_min_maxThis is an overloaded member function, provided for convenience. It differs from the above function only in what argument(s) it accepts. * find_min_max_def@overload * find_min_max_locThis is an overloaded member function, provided for convenience. It differs from the above function only in what argument(s) it accepts. * find_min_max_loc_def@overload * flipFlips a 2D matrix around vertical, horizontal, or both axes. * flip_defFlips a 2D matrix around vertical, horizontal, or both axes. * gemmPerforms generalized matrix multiplication. * gemm_defPerforms generalized matrix multiplication. * in_rangeChecks if array elements lie between two scalars. * in_range_defChecks if array elements lie between two scalars. * integralComputes an integral image. * integral_defComputes an integral image. * logComputes a natural logarithm of absolute value of each matrix element. * log_defComputes a natural logarithm of absolute value of each matrix element. * lshiftPerforms pixel by pixel right left of an image by a constant value. * lshift_1C++ default parameters * lshift_1_defNote * lshift_defPerforms pixel by pixel right left of an image by a constant value. * magnitudeComputes magnitudes of complex matrix elements. * magnitude_1Computes magnitudes of complex matrix elements. * magnitude_1_def@overload computes magnitude of each (x(i), y(i)) vector supports only floating-point source * magnitude_defComputes magnitudes of complex matrix elements. * magnitude_sqrComputes squared magnitudes of complex matrix elements. * magnitude_sqr_1Computes squared magnitudes of complex matrix elements. 
* magnitude_sqr_1_def@overload computes squared magnitude of each (x(i), y(i)) vector supports only floating-point source * magnitude_sqr_defComputes squared magnitudes of complex matrix elements. * maxComputes the per-element maximum of two matrices (or a matrix and a scalar). * max_defComputes the per-element maximum of two matrices (or a matrix and a scalar). * mean_std_devComputes a mean value and a standard deviation of matrix elements. * mean_std_dev_1Computes a mean value and a standard deviation of matrix elements. * mean_std_dev_1_def@overload * mean_std_dev_2Computes a mean value and a standard deviation of matrix elements. * mean_std_dev_3Computes a mean value and a standard deviation of matrix elements. * mean_std_dev_defComputes a mean value and a standard deviation of matrix elements. * mergeMakes a multi-channel matrix out of several single-channel matrices. * merge_1Makes a multi-channel matrix out of several single-channel matrices. * merge_1_def@overload * merge_defMakes a multi-channel matrix out of several single-channel matrices. * minComputes the per-element minimum of two matrices (or a matrix and a scalar). * min_defComputes the per-element minimum of two matrices (or a matrix and a scalar). * min_maxFinds global minimum and maximum matrix elements and returns their values. * min_max_defFinds global minimum and maximum matrix elements and returns their values. * min_max_locFinds global minimum and maximum matrix elements and returns their values with locations. * min_max_loc_defFinds global minimum and maximum matrix elements and returns their values with locations. * mul_and_scale_spectrumsPerforms a per-element multiplication of two Fourier spectrums and scales the result. * mul_and_scale_spectrums_defPerforms a per-element multiplication of two Fourier spectrums and scales the result. * mul_spectrumsPerforms a per-element multiplication of two Fourier spectrums. 
* mul_spectrums_defPerforms a per-element multiplication of two Fourier spectrums. * multiplyComputes a matrix-matrix or matrix-scalar per-element product. * multiply_defComputes a matrix-matrix or matrix-scalar per-element product. * normReturns the norm of a matrix (or difference of two matrices). * norm_1Returns the norm of the difference of two matrices. * norm_1_defReturns the norm of the difference of two matrices. * norm_defReturns the norm of a matrix (or difference of two matrices). * normalizeNormalizes the norm or value range of an array. * normalize_defNormalizes the norm or value range of an array. * phaseComputes polar angles of complex matrix elements. * phase_defComputes polar angles of complex matrix elements. * polar_to_cartConverts polar coordinates into Cartesian. * polar_to_cart_defConverts polar coordinates into Cartesian. * powRaises every matrix element to a power. * pow_defRaises every matrix element to a power. * rect_std_devComputes a standard deviation of integral images. * rect_std_dev_defComputes a standard deviation of integral images. * reduceReduces a matrix to a vector. * reduce_defReduces a matrix to a vector. * rshiftPerforms pixel by pixel right shift of an image by a constant value. * rshift_1C++ default parameters * rshift_1_defNote * rshift_defPerforms pixel by pixel right shift of an image by a constant value. * splitCopies each plane of a multi-channel matrix into an array. * split_1Copies each plane of a multi-channel matrix into an array. * split_1_def@overload * split_defCopies each plane of a multi-channel matrix into an array. * sqrComputes a square value of each matrix element. * sqr_defComputes a square value of each matrix element. * sqr_integralComputes a squared integral image. * sqr_integral_defComputes a squared integral image. * sqr_sumReturns the squared sum of matrix elements. * sqr_sum_defReturns the squared sum of matrix elements. * sqrtComputes a square root of each matrix element. 
* sqrt_defComputes a square root of each matrix element. * subtractComputes a matrix-matrix or matrix-scalar difference. * subtract_defComputes a matrix-matrix or matrix-scalar difference. * sumReturns the sum of matrix elements. * sum_defReturns the sum of matrix elements. * thresholdApplies a fixed-level threshold to each array element. * threshold_defApplies a fixed-level threshold to each array element. * transposeTransposes a matrix. * transpose_defTransposes a matrix. Module opencv::cudabgsegm === Background Segmentation --- Modules --- * prelude Structs --- * CUDA_BackgroundSubtractorMOGGaussian Mixture-based Background/Foreground Segmentation Algorithm. * CUDA_BackgroundSubtractorMOG2Gaussian Mixture-based Background/Foreground Segmentation Algorithm. Traits --- * CUDA_BackgroundSubtractorMOG2TraitMutable methods for crate::cudabgsegm::CUDA_BackgroundSubtractorMOG2 * CUDA_BackgroundSubtractorMOG2TraitConstConstant methods for crate::cudabgsegm::CUDA_BackgroundSubtractorMOG2 * CUDA_BackgroundSubtractorMOGTraitMutable methods for crate::cudabgsegm::CUDA_BackgroundSubtractorMOG * CUDA_BackgroundSubtractorMOGTraitConstConstant methods for crate::cudabgsegm::CUDA_BackgroundSubtractorMOG Functions --- * create_background_subtractor_mogCreates mixture-of-gaussian background subtractor * create_background_subtractor_mog2Creates MOG2 Background Subtractor * create_background_subtractor_mog2_defCreates MOG2 Background Subtractor * create_background_subtractor_mog_defCreates mixture-of-gaussian background subtractor Module opencv::cudacodec === Video Encoding/Decoding --- Modules --- * prelude Structs --- * CUDA_EncodeQpQuantization Parameter for each type of frame when using ENC_PARAMS_RC_MODE::ENC_PARAMS_RC_CONSTQP. * CUDA_EncoderCallbackInterface for encoder callbacks. * CUDA_EncoderParamsDifferent parameters for CUDA video encoder. * CUDA_FormatInfoStruct providing information about video file format. * CUDA_RawVideoSourceInterface for video demultiplexing. 
* CUDA_VideoReaderVideo reader interface. * CUDA_VideoReaderInitParamsVideoReader initialization parameters * CUDA_VideoWriterVideo writer interface. Enums --- * CUDA_ChromaFormatChroma formats supported by cudacodec::VideoReader. * CUDA_CodecVideo codecs supported by cudacodec::VideoReader and cudacodec::VideoWriter. * CUDA_ColorFormatColorFormat for the frame returned by VideoReader::nextFrame() and VideoReader::retrieve() or used to initialize a VideoWriter. * CUDA_DeinterlaceModeDeinterlacing mode used by decoder. * CUDA_EncodeMultiPassMulti Pass Encoding. * CUDA_EncodeParamsRcModeRate Control Modes. * CUDA_EncodePresetNvidia Encoding Presets. Performance degrades and quality improves as we move from P1 to P7. * CUDA_EncodeProfileSupported Encoder Profiles. * CUDA_EncodeTuningInfoTuning information. * CUDA_VideoReaderPropscv::cudacodec::VideoReader generic properties identifier. Constants --- * CUDA_AV1 * CUDA_Adaptive * CUDA_Bob * CUDA_ColorFormat_BGROpenCV color format, can be used with both VideoReader and VideoWriter. * CUDA_ColorFormat_BGRAOpenCV color format, can be used with both VideoReader and VideoWriter. * CUDA_ColorFormat_GRAYOpenCV color format, can be used with both VideoReader and VideoWriter. * CUDA_ColorFormat_NV_AYUVNvidia Buffer Format - 8 bit Packed A8Y8U8V8. This is a word-ordered format where a pixel is represented by a 32-bit word with V in the lowest 8 bits, U in the next 8 bits, Y in the 8 bits after that and A in the highest 8 bits, can only be used with VideoWriter. * CUDA_ColorFormat_NV_IYUVNvidia Buffer Format - Planar YUV [Y plane followed by U and V planes], use with VideoReader, can only be used with VideoWriter. * CUDA_ColorFormat_NV_NV12Nvidia color format - equivalent to YUV - Semi-Planar YUV [Y plane followed by interleaved UV plane], can be used with both VideoReader and VideoWriter. 
* CUDA_ColorFormat_NV_YUV444Nvidia Buffer Format - Planar YUV [Y plane followed by U and V planes], use with VideoReader, can only be used with VideoWriter. * CUDA_ColorFormat_NV_YV12Nvidia Buffer Format - Planar YUV [Y plane followed by V and U planes], use with VideoReader, can only be used with VideoWriter. * CUDA_ColorFormat_PROP_NOT_SUPPORTED * CUDA_ColorFormat_RGBOpenCV color format, can only be used with VideoWriter. * CUDA_ColorFormat_RGBAOpenCV color format, can only be used with VideoWriter. * CUDA_ColorFormat_UNDEFINED * CUDA_ENC_CODEC_PROFILE_AUTOSELECT * CUDA_ENC_H264_PROFILE_BASELINE * CUDA_ENC_H264_PROFILE_CONSTRAINED_HIGH * CUDA_ENC_H264_PROFILE_HIGH * CUDA_ENC_H264_PROFILE_HIGH_444 * CUDA_ENC_H264_PROFILE_MAIN * CUDA_ENC_H264_PROFILE_PROGRESSIVE_HIGH * CUDA_ENC_H264_PROFILE_STEREO * CUDA_ENC_HEVC_PROFILE_FREXT * CUDA_ENC_HEVC_PROFILE_MAIN * CUDA_ENC_HEVC_PROFILE_MAIN10 * CUDA_ENC_MULTI_PASS_DISABLEDSingle Pass. * CUDA_ENC_PARAMS_RC_CBRConstant bitrate mode. * CUDA_ENC_PARAMS_RC_CONSTQPConstant QP mode. * CUDA_ENC_PARAMS_RC_VBRVariable bitrate mode. * CUDA_ENC_PRESET_P1 * CUDA_ENC_PRESET_P2 * CUDA_ENC_PRESET_P3 * CUDA_ENC_PRESET_P4 * CUDA_ENC_PRESET_P5 * CUDA_ENC_PRESET_P6 * CUDA_ENC_PRESET_P7 * CUDA_ENC_TUNING_INFO_COUNT * CUDA_ENC_TUNING_INFO_HIGH_QUALITYTune presets for latency tolerant encoding. * CUDA_ENC_TUNING_INFO_LOSSLESSTune presets for lossless encoding. * CUDA_ENC_TUNING_INFO_LOW_LATENCYTune presets for low latency streaming. * CUDA_ENC_TUNING_INFO_ULTRA_LOW_LATENCYTune presets for ultra low latency streaming. * CUDA_ENC_TUNING_INFO_UNDEFINEDUndefined tuningInfo. Invalid value for encoding. * CUDA_ENC_TWO_PASS_FULL_RESOLUTIONTwo Pass encoding is enabled where first Pass is full resolution. * CUDA_ENC_TWO_PASS_QUARTER_RESOLUTIONTwo Pass encoding is enabled where first Pass is quarter resolution. 
* CUDA_H264 * CUDA_H264_MVC * CUDA_H264_SVC * CUDA_HEVC * CUDA_JPEG * CUDA_MPEG1 * CUDA_MPEG2 * CUDA_MPEG4 * CUDA_Monochrome * CUDA_NumCodecs * CUDA_NumFormats * CUDA_Uncompressed_NV12Y,UV (4:2:0) * CUDA_Uncompressed_UYVYUYVY (4:2:2) * CUDA_Uncompressed_YUV420Y,U,V (4:2:0) * CUDA_Uncompressed_YUYVYUYV/YUY2 (4:2:2) * CUDA_Uncompressed_YV12Y,V,U (4:2:0) * CUDA_VC1 * CUDA_VP8 * CUDA_VP9 * CUDA_VideoReaderProps_PROP_ALLOW_FRAME_DROPStatus of VideoReaderInitParams::allowFrameDrop initialization. * CUDA_VideoReaderProps_PROP_COLOR_FORMATSet the ColorFormat of the decoded frame. This can be changed before every call to nextFrame() and retrieve(). * CUDA_VideoReaderProps_PROP_DECODED_FRAME_IDXIndex for retrieving the decoded frame using retrieve(). * CUDA_VideoReaderProps_PROP_EXTRA_DATA_INDEXIndex for retrieving the extra data associated with a video source using retrieve(). * CUDA_VideoReaderProps_PROP_LRF_HAS_KEY_FRAMEFFmpeg source only - Indicates whether the Last Raw Frame (LRF), output from VideoReader::retrieve() when VideoReader is initialized in raw mode, contains encoded data for a key frame. * CUDA_VideoReaderProps_PROP_NOT_SUPPORTED * CUDA_VideoReaderProps_PROP_NUMBER_OF_RAW_PACKAGES_SINCE_LAST_GRABNumber of raw packages received since the last call to grab(). * CUDA_VideoReaderProps_PROP_RAW_MODEStatus of raw mode. * CUDA_VideoReaderProps_PROP_RAW_PACKAGES_BASE_INDEXBase index for retrieving raw encoded data using retrieve(). * CUDA_VideoReaderProps_PROP_UDP_SOURCEStatus of VideoReaderInitParams::udpSource initialization. 
* CUDA_Weave * CUDA_YUV420 * CUDA_YUV422 * CUDA_YUV444 Traits --- * CUDA_EncoderCallbackTraitMutable methods for crate::cudacodec::CUDA_EncoderCallback * CUDA_EncoderCallbackTraitConstConstant methods for crate::cudacodec::CUDA_EncoderCallback * CUDA_RawVideoSourceTraitMutable methods for crate::cudacodec::CUDA_RawVideoSource * CUDA_RawVideoSourceTraitConstConstant methods for crate::cudacodec::CUDA_RawVideoSource * CUDA_VideoReaderTraitMutable methods for crate::cudacodec::CUDA_VideoReader * CUDA_VideoReaderTraitConstConstant methods for crate::cudacodec::CUDA_VideoReader * CUDA_VideoWriterTraitMutable methods for crate::cudacodec::CUDA_VideoWriter * CUDA_VideoWriterTraitConstConstant methods for crate::cudacodec::CUDA_VideoWriter Functions --- * create_video_readerCreates video reader. * create_video_reader_1Creates video reader. * create_video_reader_1_def@overload * create_video_reader_defCreates video reader. * create_video_writerCreates video writer. * create_video_writer_1Creates video writer. * create_video_writer_1_defCreates video writer. * create_video_writer_defCreates video writer. * equals_cuda_encoderparams_cuda_encoderparams Module opencv::cudafeatures2d === Feature Detection and Description --- Modules --- * prelude Structs --- * CUDA_DescriptorMatcherAbstract base class for matching keypoint descriptors. * CUDA_FastFeatureDetectorWrapping class for feature detection using the FAST method. * CUDA_Feature2DAsyncAbstract base class for CUDA asynchronous 2D image feature detectors and descriptor extractors. 
* CUDA_ORBClass implementing the ORB (*oriented BRIEF*) keypoint detector and descriptor extractor Traits --- * CUDA_DescriptorMatcherTraitMutable methods for crate::cudafeatures2d::CUDA_DescriptorMatcher * CUDA_DescriptorMatcherTraitConstConstant methods for crate::cudafeatures2d::CUDA_DescriptorMatcher * CUDA_FastFeatureDetectorTraitMutable methods for crate::cudafeatures2d::CUDA_FastFeatureDetector * CUDA_FastFeatureDetectorTraitConstConstant methods for crate::cudafeatures2d::CUDA_FastFeatureDetector * CUDA_Feature2DAsyncTraitMutable methods for crate::cudafeatures2d::CUDA_Feature2DAsync * CUDA_Feature2DAsyncTraitConstConstant methods for crate::cudafeatures2d::CUDA_Feature2DAsync * CUDA_ORBTraitMutable methods for crate::cudafeatures2d::CUDA_ORB * CUDA_ORBTraitConstConstant methods for crate::cudafeatures2d::CUDA_ORB Module opencv::cudafilters === Image Filtering --- Functions and classes described in this section are used to perform various linear or non-linear filtering operations on 2D images. Note: * An example containing all basic morphology operators like erode and dilate can be found at opencv_source_code/samples/gpu/morphology.cpp Modules --- * prelude Structs --- * FilterCommon interface for all CUDA filters Traits --- * FilterTraitMutable methods for crate::cudafilters::Filter * FilterTraitConstConstant methods for crate::cudafilters::Filter Functions --- * create_box_filterCreates a normalized 2D box filter. * create_box_filter_defCreates a normalized 2D box filter. * create_box_max_filterCreates the maximum filter. * create_box_max_filter_defCreates the maximum filter. * create_box_min_filterCreates the minimum filter. * create_box_min_filter_defCreates the minimum filter. * create_column_sum_filterCreates a vertical 1D box filter. * create_column_sum_filter_defCreates a vertical 1D box filter. * create_deriv_filterCreates a generalized Deriv operator. * create_deriv_filter_defCreates a generalized Deriv operator. 
* create_gaussian_filterCreates a Gaussian filter. * create_gaussian_filter_defCreates a Gaussian filter. * create_laplacian_filterCreates a Laplacian operator. * create_laplacian_filter_defCreates a Laplacian operator. * create_linear_filterCreates a non-separable linear 2D filter. * create_linear_filter_defCreates a non-separable linear 2D filter. * create_median_filterPerforms median filtering for each point of the source image. * create_median_filter_defPerforms median filtering for each point of the source image. * create_morphology_filterCreates a 2D morphological filter. * create_morphology_filter_defCreates a 2D morphological filter. * create_row_sum_filterCreates a horizontal 1D box filter. * create_row_sum_filter_defCreates a horizontal 1D box filter. * create_scharr_filterCreates a vertical or horizontal Scharr operator. * create_scharr_filter_defCreates a vertical or horizontal Scharr operator. * create_separable_linear_filterCreates a separable linear filter. * create_separable_linear_filter_defCreates a separable linear filter. * create_sobel_filterCreates a Sobel operator. * create_sobel_filter_defCreates a Sobel operator. Module opencv::cudaimgproc === Image Processing --- Color space processing --- Histogram Calculation --- Hough Transform --- Feature Detection --- Modules --- * prelude Structs --- * CUDA_CLAHEBase class for Contrast Limited Adaptive Histogram Equalization. * CUDA_CannyEdgeDetectorBase class for Canny Edge Detector. * CUDA_CornernessCriteriaBase class for Cornerness Criteria computation. * CUDA_CornersDetectorBase class for Corners Detector. * CUDA_HoughCirclesDetectorBase class for circles detector algorithm. * CUDA_HoughLinesDetectorBase class for lines detector algorithm. * CUDA_HoughSegmentDetectorBase class for line segments detector algorithm. * CUDA_TemplateMatchingBase class for Template Matching. 
Enums --- * CUDA_AlphaCompTypes * CUDA_ConnectedComponentsAlgorithmsTypesConnected Components Algorithm * CUDA_DemosaicTypes Constants --- * CUDA_ALPHA_ATOP * CUDA_ALPHA_ATOP_PREMUL * CUDA_ALPHA_IN * CUDA_ALPHA_IN_PREMUL * CUDA_ALPHA_OUT * CUDA_ALPHA_OUT_PREMUL * CUDA_ALPHA_OVER * CUDA_ALPHA_OVER_PREMUL * CUDA_ALPHA_PLUS * CUDA_ALPHA_PLUS_PREMUL * CUDA_ALPHA_PREMUL * CUDA_ALPHA_XOR * CUDA_ALPHA_XOR_PREMUL * CUDA_CCL_BKEBKE Allegretti2019 algorithm for 8-way connectivity. * CUDA_CCL_DEFAULTBKE Allegretti2019 algorithm for 8-way connectivity. * CUDA_COLOR_BayerBG2BGR_MHTBayer Demosaicing (Malvar, He, and Cutler) * CUDA_COLOR_BayerBG2GRAY_MHTBayer Demosaicing (Malvar, He, and Cutler) * CUDA_COLOR_BayerBG2RGB_MHTBayer Demosaicing (Malvar, He, and Cutler) * CUDA_COLOR_BayerGB2BGR_MHTBayer Demosaicing (Malvar, He, and Cutler) * CUDA_COLOR_BayerGB2GRAY_MHTBayer Demosaicing (Malvar, He, and Cutler) * CUDA_COLOR_BayerGB2RGB_MHTBayer Demosaicing (Malvar, He, and Cutler) * CUDA_COLOR_BayerGR2BGR_MHTBayer Demosaicing (Malvar, He, and Cutler) * CUDA_COLOR_BayerGR2GRAY_MHTBayer Demosaicing (Malvar, He, and Cutler) * CUDA_COLOR_BayerGR2RGB_MHTBayer Demosaicing (Malvar, He, and Cutler) * CUDA_COLOR_BayerRG2BGR_MHTBayer Demosaicing (Malvar, He, and Cutler) * CUDA_COLOR_BayerRG2GRAY_MHTBayer Demosaicing (Malvar, He, and Cutler) * CUDA_COLOR_BayerRG2RGB_MHTBayer Demosaicing (Malvar, He, and Cutler) Traits --- * CUDA_CLAHETraitMutable methods for crate::cudaimgproc::CUDA_CLAHE * CUDA_CLAHETraitConstConstant methods for crate::cudaimgproc::CUDA_CLAHE * CUDA_CannyEdgeDetectorTraitMutable methods for crate::cudaimgproc::CUDA_CannyEdgeDetector * CUDA_CannyEdgeDetectorTraitConstConstant methods for crate::cudaimgproc::CUDA_CannyEdgeDetector * CUDA_CornernessCriteriaTraitMutable methods for crate::cudaimgproc::CUDA_CornernessCriteria * CUDA_CornernessCriteriaTraitConstConstant methods for crate::cudaimgproc::CUDA_CornernessCriteria * CUDA_CornersDetectorTraitMutable methods for 
crate::cudaimgproc::CUDA_CornersDetector * CUDA_CornersDetectorTraitConstConstant methods for crate::cudaimgproc::CUDA_CornersDetector * CUDA_HoughCirclesDetectorTraitMutable methods for crate::cudaimgproc::CUDA_HoughCirclesDetector * CUDA_HoughCirclesDetectorTraitConstConstant methods for crate::cudaimgproc::CUDA_HoughCirclesDetector * CUDA_HoughLinesDetectorTraitMutable methods for crate::cudaimgproc::CUDA_HoughLinesDetector * CUDA_HoughLinesDetectorTraitConstConstant methods for crate::cudaimgproc::CUDA_HoughLinesDetector * CUDA_HoughSegmentDetectorTraitMutable methods for crate::cudaimgproc::CUDA_HoughSegmentDetector * CUDA_HoughSegmentDetectorTraitConstConstant methods for crate::cudaimgproc::CUDA_HoughSegmentDetector * CUDA_TemplateMatchingTraitMutable methods for crate::cudaimgproc::CUDA_TemplateMatching * CUDA_TemplateMatchingTraitConstConstant methods for crate::cudaimgproc::CUDA_TemplateMatching Functions --- * alpha_compComposites two images using alpha opacity values contained in each image. * alpha_comp_defComposites two images using alpha opacity values contained in each image. * bilateral_filterPerforms bilateral filtering of passed image * bilateral_filter_defPerforms bilateral filtering of passed image * blend_linearPerforms linear blending of two images. * blend_linear_defPerforms linear blending of two images. * calc_histCalculates histogram for one channel 8-bit image. * calc_hist_1Calculates histogram for one channel 8-bit image confined in given mask. * calc_hist_1_defCalculates histogram for one channel 8-bit image confined in given mask. * calc_hist_defCalculates histogram for one channel 8-bit image. * connected_componentsComputes the Connected Components Labeled image of a binary image. * connected_components_def@overload * connected_components_with_algorithmComputes the Connected Components Labeled image of a binary image. * create_canny_edge_detectorCreates implementation for cuda::CannyEdgeDetector . 
* create_canny_edge_detector_defCreates implementation for cuda::CannyEdgeDetector . * create_claheCreates implementation for cuda::CLAHE . * create_clahe_defCreates implementation for cuda::CLAHE . * create_generalized_hough_ballardCreates implementation for generalized hough transform from Ballard1981 . * create_generalized_hough_guilCreates implementation for generalized hough transform from Guil1999 . * create_good_features_to_track_detectorCreates implementation for cuda::CornersDetector . * create_good_features_to_track_detector_defCreates implementation for cuda::CornersDetector . * create_harris_cornerCreates implementation for Harris cornerness criteria. * create_harris_corner_defCreates implementation for Harris cornerness criteria. * create_hough_circles_detectorCreates implementation for cuda::HoughCirclesDetector . * create_hough_circles_detector_defCreates implementation for cuda::HoughCirclesDetector . * create_hough_lines_detectorCreates implementation for cuda::HoughLinesDetector . * create_hough_lines_detector_defCreates implementation for cuda::HoughLinesDetector . * create_hough_segment_detectorCreates implementation for cuda::HoughSegmentDetector . * create_hough_segment_detector_defCreates implementation for cuda::HoughSegmentDetector . * create_min_eigen_val_cornerCreates implementation for the minimum eigen value of a 2x2 derivative covariation matrix (the cornerness criteria). * create_min_eigen_val_corner_defCreates implementation for the minimum eigen value of a 2x2 derivative covariation matrix (the cornerness criteria). * create_template_matchingCreates implementation for cuda::TemplateMatching . * create_template_matching_defCreates implementation for cuda::TemplateMatching . * cvt_colorConverts an image from one color space to another. * cvt_color_defConverts an image from one color space to another. * demosaicingConverts an image from Bayer pattern to RGB or grayscale. 
* demosaicing_defConverts an image from Bayer pattern to RGB or grayscale. * equalize_histEqualizes the histogram of a grayscale image. * equalize_hist_defEqualizes the histogram of a grayscale image. * even_levelsComputes levels with even distribution. * even_levels_defComputes levels with even distribution. * gamma_correctionRoutines for correcting image color gamma. * gamma_correction_defRoutines for correcting image color gamma. * hist_evenCalculates a histogram with evenly distributed bins. * hist_even_defCalculates a histogram with evenly distributed bins. * hist_rangeCalculates a histogram with bins determined by the levels array. * hist_range_defCalculates a histogram with bins determined by the levels array. * mean_shift_filteringPerforms mean-shift filtering for each point of the source image. * mean_shift_filtering_defPerforms mean-shift filtering for each point of the source image. * mean_shift_procPerforms a mean-shift procedure and stores information about processed points (their colors and positions) in two images. * mean_shift_proc_defPerforms a mean-shift procedure and stores information about processed points (their colors and positions) in two images. * mean_shift_segmentationPerforms a mean-shift segmentation of the source image and eliminates small segments. * mean_shift_segmentation_defPerforms a mean-shift segmentation of the source image and eliminates small segments. * swap_channelsExchanges the color channels of an image in-place. * swap_channels_defExchanges the color channels of an image in-place. Module opencv::cudaobjdetect === Object Detection --- Modules --- * prelude Structs --- * CUDA_CascadeClassifierCascade classifier class used for object detection. Supports HAAR and LBP cascades. * CUDA_HOGThe class implements Histogram of Oriented Gradients (Dalal2005) object detector. 
Traits --- * CUDA_CascadeClassifierTraitMutable methods for crate::cudaobjdetect::CUDA_CascadeClassifier * CUDA_CascadeClassifierTraitConstConstant methods for crate::cudaobjdetect::CUDA_CascadeClassifier * CUDA_HOGTraitMutable methods for crate::cudaobjdetect::CUDA_HOG * CUDA_HOGTraitConstConstant methods for crate::cudaobjdetect::CUDA_HOG Module opencv::cudaoptflow === Optical Flow --- Modules --- * prelude Structs --- * CUDA_BroxOpticalFlowClass computing the optical flow for two images using Brox et al Optical Flow algorithm (Brox2004). * CUDA_DenseOpticalFlowBase interface for dense optical flow algorithms. * CUDA_DensePyrLKOpticalFlowClass used for calculating a dense optical flow. * CUDA_FarnebackOpticalFlowClass computing a dense optical flow using the Gunnar Farneback’s algorithm. * CUDA_NvidiaHWOpticalFlowBase Interface for optical flow algorithms using NVIDIA Optical Flow SDK. * CUDA_NvidiaOpticalFlow_1_0Class for computing the optical flow vectors between two images using NVIDIA Optical Flow hardware and Optical Flow SDK 1.0. * CUDA_NvidiaOpticalFlow_2_0Class for computing the optical flow vectors between two images using NVIDIA Optical Flow hardware and Optical Flow SDK 2.0. * CUDA_OpticalFlowDual_TVL1Implementation of the Zach, Pock and Bischof Dual TV-L1 Optical Flow method. * CUDA_SparseOpticalFlowBase interface for sparse optical flow algorithms. * CUDA_SparsePyrLKOpticalFlowClass used for calculating a sparse optical flow. Enums --- * CUDA_NvidiaOpticalFlow_1_0_NVIDIA_OF_PERF_LEVELSupported optical flow performance levels. * CUDA_NvidiaOpticalFlow_2_0_NVIDIA_OF_HINT_VECTOR_GRID_SIZESupported grid size for hint buffer. * CUDA_NvidiaOpticalFlow_2_0_NVIDIA_OF_OUTPUT_VECTOR_GRID_SIZESupported grid size for output buffer. * CUDA_NvidiaOpticalFlow_2_0_NVIDIA_OF_PERF_LEVELSupported optical flow performance levels. 
Constants --- * CUDA_NvidiaOpticalFlow_1_0_NV_OF_PERF_LEVEL_FAST Fast perf level results in high performance and low quality * CUDA_NvidiaOpticalFlow_1_0_NV_OF_PERF_LEVEL_MAX * CUDA_NvidiaOpticalFlow_1_0_NV_OF_PERF_LEVEL_MEDIUM Medium perf level results in low performance and medium quality * CUDA_NvidiaOpticalFlow_1_0_NV_OF_PERF_LEVEL_SLOW Slow perf level results in lowest performance and best quality * CUDA_NvidiaOpticalFlow_1_0_NV_OF_PERF_LEVEL_UNDEFINED * CUDA_NvidiaOpticalFlow_2_0_NV_OF_HINT_VECTOR_GRID_SIZE_1 Hint buffer grid size is 1x1. * CUDA_NvidiaOpticalFlow_2_0_NV_OF_HINT_VECTOR_GRID_SIZE_2 Hint buffer grid size is 2x2. * CUDA_NvidiaOpticalFlow_2_0_NV_OF_HINT_VECTOR_GRID_SIZE_4 Hint buffer grid size is 4x4. * CUDA_NvidiaOpticalFlow_2_0_NV_OF_HINT_VECTOR_GRID_SIZE_8 Hint buffer grid size is 8x8. * CUDA_NvidiaOpticalFlow_2_0_NV_OF_HINT_VECTOR_GRID_SIZE_MAX * CUDA_NvidiaOpticalFlow_2_0_NV_OF_HINT_VECTOR_GRID_SIZE_UNDEFINED * CUDA_NvidiaOpticalFlow_2_0_NV_OF_OUTPUT_VECTOR_GRID_SIZE_1 Output buffer grid size is 1x1 * CUDA_NvidiaOpticalFlow_2_0_NV_OF_OUTPUT_VECTOR_GRID_SIZE_2 Output buffer grid size is 2x2 * CUDA_NvidiaOpticalFlow_2_0_NV_OF_OUTPUT_VECTOR_GRID_SIZE_4 Output buffer grid size is 4x4 * CUDA_NvidiaOpticalFlow_2_0_NV_OF_OUTPUT_VECTOR_GRID_SIZE_MAX * CUDA_NvidiaOpticalFlow_2_0_NV_OF_OUTPUT_VECTOR_GRID_SIZE_UNDEFINED * CUDA_NvidiaOpticalFlow_2_0_NV_OF_PERF_LEVEL_FAST Fast perf level results in high performance and low quality * CUDA_NvidiaOpticalFlow_2_0_NV_OF_PERF_LEVEL_MAX * CUDA_NvidiaOpticalFlow_2_0_NV_OF_PERF_LEVEL_MEDIUM Medium perf level results in low performance and medium quality * CUDA_NvidiaOpticalFlow_2_0_NV_OF_PERF_LEVEL_SLOW Slow perf level results in lowest performance and best quality * CUDA_NvidiaOpticalFlow_2_0_NV_OF_PERF_LEVEL_UNDEFINED Traits --- * CUDA_BroxOpticalFlowTraitMutable methods for crate::cudaoptflow::CUDA_BroxOpticalFlow * CUDA_BroxOpticalFlowTraitConstConstant methods for 
crate::cudaoptflow::CUDA_BroxOpticalFlow * CUDA_DenseOpticalFlowTraitMutable methods for crate::cudaoptflow::CUDA_DenseOpticalFlow * CUDA_DenseOpticalFlowTraitConstConstant methods for crate::cudaoptflow::CUDA_DenseOpticalFlow * CUDA_DensePyrLKOpticalFlowTraitMutable methods for crate::cudaoptflow::CUDA_DensePyrLKOpticalFlow * CUDA_DensePyrLKOpticalFlowTraitConstConstant methods for crate::cudaoptflow::CUDA_DensePyrLKOpticalFlow * CUDA_FarnebackOpticalFlowTraitMutable methods for crate::cudaoptflow::CUDA_FarnebackOpticalFlow * CUDA_FarnebackOpticalFlowTraitConstConstant methods for crate::cudaoptflow::CUDA_FarnebackOpticalFlow * CUDA_NvidiaHWOpticalFlowTraitMutable methods for crate::cudaoptflow::CUDA_NvidiaHWOpticalFlow * CUDA_NvidiaHWOpticalFlowTraitConstConstant methods for crate::cudaoptflow::CUDA_NvidiaHWOpticalFlow * CUDA_NvidiaOpticalFlow_1_0TraitMutable methods for crate::cudaoptflow::CUDA_NvidiaOpticalFlow_1_0 * CUDA_NvidiaOpticalFlow_1_0TraitConstConstant methods for crate::cudaoptflow::CUDA_NvidiaOpticalFlow_1_0 * CUDA_NvidiaOpticalFlow_2_0TraitMutable methods for crate::cudaoptflow::CUDA_NvidiaOpticalFlow_2_0 * CUDA_NvidiaOpticalFlow_2_0TraitConstConstant methods for crate::cudaoptflow::CUDA_NvidiaOpticalFlow_2_0 * CUDA_OpticalFlowDual_TVL1TraitMutable methods for crate::cudaoptflow::CUDA_OpticalFlowDual_TVL1 * CUDA_OpticalFlowDual_TVL1TraitConstConstant methods for crate::cudaoptflow::CUDA_OpticalFlowDual_TVL1 * CUDA_SparseOpticalFlowTraitMutable methods for crate::cudaoptflow::CUDA_SparseOpticalFlow * CUDA_SparseOpticalFlowTraitConstConstant methods for crate::cudaoptflow::CUDA_SparseOpticalFlow * CUDA_SparsePyrLKOpticalFlowTraitMutable methods for crate::cudaoptflow::CUDA_SparsePyrLKOpticalFlow * CUDA_SparsePyrLKOpticalFlowTraitConstConstant methods for crate::cudaoptflow::CUDA_SparsePyrLKOpticalFlow Module opencv::cudastereo === Stereo Correspondence --- Modules --- * prelude Structs --- * CUDA_DisparityBilateralFilterClass refining a disparity map 
using joint bilateral filtering. * CUDA_StereoBMClass computing stereo correspondence (disparity map) using the block matching algorithm. * CUDA_StereoBeliefPropagationClass computing stereo correspondence using the belief propagation algorithm. * CUDA_StereoConstantSpaceBPClass computing stereo correspondence using the constant space belief propagation algorithm. * CUDA_StereoSGMThe class implements the modified H. Hirschmuller algorithm HH08. Limitations and differences are as follows: Traits --- * CUDA_DisparityBilateralFilterTraitMutable methods for crate::cudastereo::CUDA_DisparityBilateralFilter * CUDA_DisparityBilateralFilterTraitConstConstant methods for crate::cudastereo::CUDA_DisparityBilateralFilter * CUDA_StereoBMTraitMutable methods for crate::cudastereo::CUDA_StereoBM * CUDA_StereoBMTraitConstConstant methods for crate::cudastereo::CUDA_StereoBM * CUDA_StereoBeliefPropagationTraitMutable methods for crate::cudastereo::CUDA_StereoBeliefPropagation * CUDA_StereoBeliefPropagationTraitConstConstant methods for crate::cudastereo::CUDA_StereoBeliefPropagation * CUDA_StereoConstantSpaceBPTraitMutable methods for crate::cudastereo::CUDA_StereoConstantSpaceBP * CUDA_StereoConstantSpaceBPTraitConstConstant methods for crate::cudastereo::CUDA_StereoConstantSpaceBP * CUDA_StereoSGMTraitMutable methods for crate::cudastereo::CUDA_StereoSGM * CUDA_StereoSGMTraitConstConstant methods for crate::cudastereo::CUDA_StereoSGM Functions --- * create_disparity_bilateral_filterCreates DisparityBilateralFilter object. * create_disparity_bilateral_filter_defCreates DisparityBilateralFilter object. * create_stereo_belief_propagationCreates StereoBeliefPropagation object. * create_stereo_belief_propagation_defCreates StereoBeliefPropagation object. * create_stereo_bmCreates StereoBM object. * create_stereo_bm_defCreates StereoBM object. * create_stereo_constant_space_bpCreates StereoConstantSpaceBP object. 
* create_stereo_constant_space_bp_defCreates StereoConstantSpaceBP object. * create_stereo_sgmCreates StereoSGM object. * create_stereo_sgm_defCreates StereoSGM object. * draw_color_dispColors a disparity image. * draw_color_disp_defColors a disparity image. * reproject_image_to_3dReprojects a disparity image to 3D space. * reproject_image_to_3d_1C++ default parameters * reproject_image_to_3d_1_defNote * reproject_image_to_3d_defReprojects a disparity image to 3D space. Module opencv::cudawarping === Image Warping --- Modules --- * prelude Functions --- * build_warp_affine_mapsBuilds transformation maps for affine transformation. * build_warp_affine_maps_1C++ default parameters * build_warp_affine_maps_1_defNote * build_warp_affine_maps_2C++ default parameters * build_warp_affine_maps_2_defNote * build_warp_affine_maps_defBuilds transformation maps for affine transformation. * build_warp_perspective_mapsBuilds transformation maps for perspective transformation. * build_warp_perspective_maps_1C++ default parameters * build_warp_perspective_maps_1_defNote * build_warp_perspective_maps_2C++ default parameters * build_warp_perspective_maps_2_defNote * build_warp_perspective_maps_defBuilds transformation maps for perspective transformation. * pyr_downSmoothes an image and downsamples it. * pyr_down_defSmoothes an image and downsamples it. * pyr_upUpsamples an image and then smoothes it. * pyr_up_defUpsamples an image and then smoothes it. * remapApplies a generic geometrical transformation to an image. * remap_defApplies a generic geometrical transformation to an image. * resizeResizes an image. * resize_defResizes an image. * rotateRotates an image around the origin (0,0) and then shifts it. * rotate_defRotates an image around the origin (0,0) and then shifts it. * warp_affineApplies an affine transformation to an image. 
* warp_affine_1C++ default parameters * warp_affine_1_defNote * warp_affine_2C++ default parameters * warp_affine_2_defNote * warp_affine_defApplies an affine transformation to an image. * warp_perspectiveApplies a perspective transformation to an image. * warp_perspective_1C++ default parameters * warp_perspective_1_defNote * warp_perspective_2C++ default parameters * warp_perspective_2_defNote * warp_perspective_defApplies a perspective transformation to an image. Module opencv::cvv === GUI for Interactive Visual Debugging of Computer Vision Programs --- The namespace for all functions is **cvv**, i.e. *cvv::showImage()*. Compilation: * For development, i.e. for the cvv GUI to show up, compile your code using cvv with *g++ -DCVVISUAL_DEBUGMODE*. * For release, i.e. cvv calls doing nothing, compile your code without the above flag. See the cvv tutorial for a commented example application using cvv. Modules --- * prelude Structs --- * CallMetaDataOptional information about a location in code. Traits --- * CallMetaDataTraitMutable methods for crate::cvv::CallMetaData * CallMetaDataTraitConstConstant methods for crate::cvv::CallMetaData Functions --- * debug_d_match * debug_filter * final_show * show_image Module opencv::dnn === Deep Neural Network module --- This module contains: - an API for creating new layers, the building bricks of neural networks; - a set of the most useful built-in Layers; - an API to construct and modify comprehensive neural networks from layers; - functionality for loading serialized network models from different frameworks. Functionality of this module is designed only for forward-pass computations (i.e. network testing). Network training is in principle not supported.
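As a concrete illustration of the data these forward passes consume, the module's blob_from_image helpers pack an interleaved H x W x C image into a 4-dimensional N x C x H x W blob (the DNN_LAYOUT_NCHW layout), optionally scaling and mean-subtracting. A minimal, dependency-free Rust sketch of that packing, with toy data and illustrative names (the real helpers also handle resizing, cropping, and optional Blue/Red channel swapping):

```rust
// Sketch of the NCHW packing performed by blob_from_image: an interleaved
// H x W x C image becomes a (1, C, H, W) tensor, mean-subtracted and scaled.
// Names and data are illustrative only; this is not the opencv crate API.
fn hwc_to_nchw_blob(img: &[f32], h: usize, w: usize, c: usize, scale: f32, mean: &[f32]) -> Vec<f32> {
    assert_eq!(img.len(), h * w * c);
    assert_eq!(mean.len(), c);
    let mut blob = vec![0.0f32; c * h * w]; // batch dimension N = 1 is implicit
    for y in 0..h {
        for x in 0..w {
            for ch in 0..c {
                let src = (y * w + x) * c + ch; // interleaved HWC index
                let dst = ch * h * w + y * w + x; // planar CHW index
                blob[dst] = (img[src] - mean[ch]) * scale;
            }
        }
    }
    blob
}

fn main() {
    // 2x2 image, 2 channels, interleaved per pixel: (c0, c1)
    let img = [1.0, 10.0, 2.0, 20.0, 3.0, 30.0, 4.0, 40.0];
    let blob = hwc_to_nchw_blob(&img, 2, 2, 2, 1.0, &[0.0, 0.0]);
    // channel-0 plane first, then the channel-1 plane
    assert_eq!(blob, vec![1.0, 2.0, 3.0, 4.0, 10.0, 20.0, 30.0, 40.0]);
}
```

images_from_blob performs the inverse walk, recovering per-image 2D planes from a 4D blob.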
Modules --- * prelude Structs --- * AbsLayer * AccumLayer * AcosLayer * AcoshLayer * ActivationLayer * ActivationLayerInt8 * ArgLayerArgMax/ArgMin layer * AsinLayer * AsinhLayer * AtanLayer * AtanhLayer * BNLLLayer * BackendNodeDerivatives of this class encapsulate functions of certain backends. * BackendWrapperDerivatives of this class wrap cv::Mat for different backends and targets. * BaseConvolutionLayer * BatchNormLayer * BatchNormLayerInt8 * BlankLayerPartial List of Implemented Layers * CeilLayer * CeluLayer * ChannelsPReLULayer * ClassificationModelThis class represents high-level API for classification models. * CompareLayer * ConcatLayer * ConstLayerConstant layer produces the same data blob at every forward pass. * ConvolutionLayer * ConvolutionLayerInt8 * CorrelationLayer * CosLayer * CoshLayer * CropAndResizeLayer * CropLayer * CumSumLayer * DataAugmentationLayer * DeconvolutionLayer * DequantizeLayer * DetectionModelThis class represents high-level API for object detection networks. * DetectionOutputLayerDetection output layer. * DictThis class implements a name-value dictionary; values are instances of DictValue. * DictValueThis struct stores the scalar value (or array) of one of the following types: double, cv::String or int64. @todo Maybe int64 is useless because double type exactly stores at least 2^52 integers. * ELULayer * EltwiseLayerElement-wise operation on inputs * EltwiseLayerInt8 * ErfLayer * ExpLayer * FlattenLayer * FloorLayer * FlowWarpLayer * GRULayerGRU recurrent one-layer * GatherLayerGather layer * GeluApproximationLayer * GeluLayer * HardSigmoidLayer * HardSwishLayer * Image2BlobParamsProcessing params of image to blob. * InnerProductLayer`InnerProduct`, `MatMul` and `Gemm` operations are all implemented by Fully Connected Layer. Parameter `is_matmul` is used to distinguish `MatMul` and `Gemm` from `InnerProduct`.
* InnerProductLayerInt8 * InterpLayerBilinear resize layer from https://github.com/cdmh/deeplab-public-ver2 * KeypointsModelThis class represents high-level API for keypoints models * LRNLayer * LSTMLayerLSTM recurrent layer * LayerThis interface class allows building new Layers, which are the building blocks of networks. * LayerFactoryLayer factory allows creating instances of registered layers. * LayerNormLayer * LayerParamsThis class provides all data needed to initialize a layer. * LogLayer * MVNLayer * MaxUnpoolLayer * MishLayer * ModelThis class represents a high-level API for neural networks. * NaryEltwiseLayer * NetThis class allows creating and manipulating comprehensive artificial neural networks. * NormalizeBBoxLayerL_p - normalization layer. * NotLayer * PaddingLayerAdds extra values for specific axes. * PermuteLayer * PoolingLayer * PoolingLayerInt8 * PowerLayer * PriorBoxLayer * ProposalLayer * QuantizeLayer * RNNLayerClassical recurrent layer * ReLU6Layer * ReLULayer * ReciprocalLayer * ReduceLayer * RegionLayer * ReorgLayer * RequantizeLayer * ReshapeLayer * ResizeLayerResize input 4-dimensional blob by nearest neighbor or bilinear strategy. * RoundLayer * ScaleLayer * ScaleLayerInt8 * ScatterLayer * ScatterNDLayer * SegmentationModelThis class represents high-level API for segmentation models * SeluLayer * ShiftLayer * ShiftLayerInt8 * ShrinkLayer * ShuffleChannelLayerPermute channels of 4-dimensional input blob. * SigmoidLayer * SignLayer * SinLayer * SinhLayer * SliceLayerSlice layer has several modes: * SoftmaxLayer * SoftmaxLayerInt8 * SoftplusLayer * SoftsignLayer * SplitLayer * SqrtLayer * SwishLayer * TanHLayer * TanLayer * TextDetectionModelBase class for text detection networks * TextDetectionModel_DBThis class represents high-level API for text detection DL networks compatible with DB model. * TextDetectionModel_EASTThis class represents high-level API for text detection DL networks compatible with EAST model.
* TextRecognitionModelThis class represents high-level API for text recognition networks. * ThresholdedReluLayer * TileLayer * _Range Enums --- * BackendEnum of computation backends supported by layers. * DataLayoutEnum of data layout for model inference. * ImagePaddingModeEnum of image processing mode. To facilitate the specialization pre-processing requirements of the dnn model. For example, the `letter box` often used in the Yolo series of models. * SoftNMSMethodEnum of Soft NMS methods. * TargetEnum of target devices for computations. Constants --- * CV_DNN_BACKEND_INFERENCE_ENGINE_NGRAPH * CV_DNN_BACKEND_INFERENCE_ENGINE_NN_BUILDER_API * CV_DNN_INFERENCE_ENGINE_CPU_TYPE_ARM_COMPUTE * CV_DNN_INFERENCE_ENGINE_CPU_TYPE_X86 * CV_DNN_INFERENCE_ENGINE_VPU_TYPE_MYRIAD_2 * CV_DNN_INFERENCE_ENGINE_VPU_TYPE_MYRIAD_X * CV_DNN_INFERENCE_ENGINE_VPU_TYPE_UNSPECIFIED * DNN_BACKEND_CANN * DNN_BACKEND_CUDA * DNN_BACKEND_DEFAULTDNN_BACKEND_DEFAULT equals to DNN_BACKEND_INFERENCE_ENGINE if OpenCV is built with Intel OpenVINO or DNN_BACKEND_OPENCV otherwise. * DNN_BACKEND_HALIDEDNN_BACKEND_DEFAULT equals to DNN_BACKEND_INFERENCE_ENGINE if OpenCV is built with Intel OpenVINO or DNN_BACKEND_OPENCV otherwise. * DNN_BACKEND_INFERENCE_ENGINEIntel OpenVINO computational backend * DNN_BACKEND_OPENCV * DNN_BACKEND_TIMVX * DNN_BACKEND_VKCOM * DNN_BACKEND_WEBNN * DNN_LAYOUT_NCDHWOpenCV data layout for 5D data. * DNN_LAYOUT_NCHWOpenCV data layout for 4D data. * DNN_LAYOUT_NDOpenCV data layout for 2D data. * DNN_LAYOUT_NDHWCTensorflow-like data layout for 5D data. * DNN_LAYOUT_NHWCTensorflow-like data layout for 4D data. * DNN_LAYOUT_PLANARTensorflow-like data layout, it should only be used at tf or tflite model parsing. * DNN_LAYOUT_UNKNOWN * DNN_PMODE_CROP_CENTER * DNN_PMODE_LETTERBOX * DNN_PMODE_NULL * DNN_TARGET_CPU * DNN_TARGET_CPU_FP16 * DNN_TARGET_CUDA * DNN_TARGET_CUDA_FP16 * DNN_TARGET_FPGAFPGA device with CPU fallbacks using Inference Engine’s Heterogeneous plugin. 
* DNN_TARGET_HDDL * DNN_TARGET_MYRIAD * DNN_TARGET_NPU * DNN_TARGET_OPENCL * DNN_TARGET_OPENCL_FP16 * DNN_TARGET_VULKAN * OPENCV_DNN_API_VERSION * SoftNMSMethod_SOFTNMS_GAUSSIAN * SoftNMSMethod_SOFTNMS_LINEAR Traits --- * AbsLayerTraitMutable methods for crate::dnn::AbsLayer * AbsLayerTraitConstConstant methods for crate::dnn::AbsLayer * AccumLayerTraitMutable methods for crate::dnn::AccumLayer * AccumLayerTraitConstConstant methods for crate::dnn::AccumLayer * AcosLayerTraitMutable methods for crate::dnn::AcosLayer * AcosLayerTraitConstConstant methods for crate::dnn::AcosLayer * AcoshLayerTraitMutable methods for crate::dnn::AcoshLayer * AcoshLayerTraitConstConstant methods for crate::dnn::AcoshLayer * ActivationLayerInt8TraitMutable methods for crate::dnn::ActivationLayerInt8 * ActivationLayerInt8TraitConstConstant methods for crate::dnn::ActivationLayerInt8 * ActivationLayerTraitMutable methods for crate::dnn::ActivationLayer * ActivationLayerTraitConstConstant methods for crate::dnn::ActivationLayer * ArgLayerTraitMutable methods for crate::dnn::ArgLayer * ArgLayerTraitConstConstant methods for crate::dnn::ArgLayer * AsinLayerTraitMutable methods for crate::dnn::AsinLayer * AsinLayerTraitConstConstant methods for crate::dnn::AsinLayer * AsinhLayerTraitMutable methods for crate::dnn::AsinhLayer * AsinhLayerTraitConstConstant methods for crate::dnn::AsinhLayer * AtanLayerTraitMutable methods for crate::dnn::AtanLayer * AtanLayerTraitConstConstant methods for crate::dnn::AtanLayer * AtanhLayerTraitMutable methods for crate::dnn::AtanhLayer * AtanhLayerTraitConstConstant methods for crate::dnn::AtanhLayer * BNLLLayerTraitMutable methods for crate::dnn::BNLLLayer * BNLLLayerTraitConstConstant methods for crate::dnn::BNLLLayer * BackendNodeTraitMutable methods for crate::dnn::BackendNode * BackendNodeTraitConstConstant methods for crate::dnn::BackendNode * BackendWrapperTraitMutable methods for crate::dnn::BackendWrapper * BackendWrapperTraitConstConstant methods 
for crate::dnn::BackendWrapper * BaseConvolutionLayerTraitMutable methods for crate::dnn::BaseConvolutionLayer * BaseConvolutionLayerTraitConstConstant methods for crate::dnn::BaseConvolutionLayer * BatchNormLayerInt8TraitMutable methods for crate::dnn::BatchNormLayerInt8 * BatchNormLayerInt8TraitConstConstant methods for crate::dnn::BatchNormLayerInt8 * BatchNormLayerTraitMutable methods for crate::dnn::BatchNormLayer * BatchNormLayerTraitConstConstant methods for crate::dnn::BatchNormLayer * BlankLayerTraitMutable methods for crate::dnn::BlankLayer * BlankLayerTraitConstConstant methods for crate::dnn::BlankLayer * CeilLayerTraitMutable methods for crate::dnn::CeilLayer * CeilLayerTraitConstConstant methods for crate::dnn::CeilLayer * CeluLayerTraitMutable methods for crate::dnn::CeluLayer * CeluLayerTraitConstConstant methods for crate::dnn::CeluLayer * ChannelsPReLULayerTraitMutable methods for crate::dnn::ChannelsPReLULayer * ChannelsPReLULayerTraitConstConstant methods for crate::dnn::ChannelsPReLULayer * ClassificationModelTraitMutable methods for crate::dnn::ClassificationModel * ClassificationModelTraitConstConstant methods for crate::dnn::ClassificationModel * CompareLayerTraitMutable methods for crate::dnn::CompareLayer * CompareLayerTraitConstConstant methods for crate::dnn::CompareLayer * ConcatLayerTraitMutable methods for crate::dnn::ConcatLayer * ConcatLayerTraitConstConstant methods for crate::dnn::ConcatLayer * ConstLayerTraitMutable methods for crate::dnn::ConstLayer * ConstLayerTraitConstConstant methods for crate::dnn::ConstLayer * ConvolutionLayerInt8TraitMutable methods for crate::dnn::ConvolutionLayerInt8 * ConvolutionLayerInt8TraitConstConstant methods for crate::dnn::ConvolutionLayerInt8 * ConvolutionLayerTraitMutable methods for crate::dnn::ConvolutionLayer * ConvolutionLayerTraitConstConstant methods for crate::dnn::ConvolutionLayer * CorrelationLayerTraitMutable methods for crate::dnn::CorrelationLayer * 
CorrelationLayerTraitConstConstant methods for crate::dnn::CorrelationLayer * CosLayerTraitMutable methods for crate::dnn::CosLayer * CosLayerTraitConstConstant methods for crate::dnn::CosLayer * CoshLayerTraitMutable methods for crate::dnn::CoshLayer * CoshLayerTraitConstConstant methods for crate::dnn::CoshLayer * CropAndResizeLayerTraitMutable methods for crate::dnn::CropAndResizeLayer * CropAndResizeLayerTraitConstConstant methods for crate::dnn::CropAndResizeLayer * CropLayerTraitMutable methods for crate::dnn::CropLayer * CropLayerTraitConstConstant methods for crate::dnn::CropLayer * CumSumLayerTraitMutable methods for crate::dnn::CumSumLayer * CumSumLayerTraitConstConstant methods for crate::dnn::CumSumLayer * DataAugmentationLayerTraitMutable methods for crate::dnn::DataAugmentationLayer * DataAugmentationLayerTraitConstConstant methods for crate::dnn::DataAugmentationLayer * DeconvolutionLayerTraitMutable methods for crate::dnn::DeconvolutionLayer * DeconvolutionLayerTraitConstConstant methods for crate::dnn::DeconvolutionLayer * DequantizeLayerTraitMutable methods for crate::dnn::DequantizeLayer * DequantizeLayerTraitConstConstant methods for crate::dnn::DequantizeLayer * DetectionModelTraitMutable methods for crate::dnn::DetectionModel * DetectionModelTraitConstConstant methods for crate::dnn::DetectionModel * DetectionOutputLayerTraitMutable methods for crate::dnn::DetectionOutputLayer * DetectionOutputLayerTraitConstConstant methods for crate::dnn::DetectionOutputLayer * DictTraitMutable methods for crate::dnn::Dict * DictTraitConstConstant methods for crate::dnn::Dict * DictValueTraitMutable methods for crate::dnn::DictValue * DictValueTraitConstConstant methods for crate::dnn::DictValue * ELULayerTraitMutable methods for crate::dnn::ELULayer * ELULayerTraitConstConstant methods for crate::dnn::ELULayer * EltwiseLayerInt8TraitMutable methods for crate::dnn::EltwiseLayerInt8 * EltwiseLayerInt8TraitConstConstant methods for crate::dnn::EltwiseLayerInt8 
* EltwiseLayerTraitMutable methods for crate::dnn::EltwiseLayer * EltwiseLayerTraitConstConstant methods for crate::dnn::EltwiseLayer * ErfLayerTraitMutable methods for crate::dnn::ErfLayer * ErfLayerTraitConstConstant methods for crate::dnn::ErfLayer * ExpLayerTraitMutable methods for crate::dnn::ExpLayer * ExpLayerTraitConstConstant methods for crate::dnn::ExpLayer * FlattenLayerTraitMutable methods for crate::dnn::FlattenLayer * FlattenLayerTraitConstConstant methods for crate::dnn::FlattenLayer * FloorLayerTraitMutable methods for crate::dnn::FloorLayer * FloorLayerTraitConstConstant methods for crate::dnn::FloorLayer * FlowWarpLayerTraitMutable methods for crate::dnn::FlowWarpLayer * FlowWarpLayerTraitConstConstant methods for crate::dnn::FlowWarpLayer * GRULayerTraitMutable methods for crate::dnn::GRULayer * GRULayerTraitConstConstant methods for crate::dnn::GRULayer * GatherLayerTraitMutable methods for crate::dnn::GatherLayer * GatherLayerTraitConstConstant methods for crate::dnn::GatherLayer * GeluApproximationLayerTraitMutable methods for crate::dnn::GeluApproximationLayer * GeluApproximationLayerTraitConstConstant methods for crate::dnn::GeluApproximationLayer * GeluLayerTraitMutable methods for crate::dnn::GeluLayer * GeluLayerTraitConstConstant methods for crate::dnn::GeluLayer * HardSigmoidLayerTraitMutable methods for crate::dnn::HardSigmoidLayer * HardSigmoidLayerTraitConstConstant methods for crate::dnn::HardSigmoidLayer * HardSwishLayerTraitMutable methods for crate::dnn::HardSwishLayer * HardSwishLayerTraitConstConstant methods for crate::dnn::HardSwishLayer * InnerProductLayerInt8TraitMutable methods for crate::dnn::InnerProductLayerInt8 * InnerProductLayerInt8TraitConstConstant methods for crate::dnn::InnerProductLayerInt8 * InnerProductLayerTraitMutable methods for crate::dnn::InnerProductLayer * InnerProductLayerTraitConstConstant methods for crate::dnn::InnerProductLayer * InterpLayerTraitMutable methods for crate::dnn::InterpLayer * 
InterpLayerTraitConstConstant methods for crate::dnn::InterpLayer * KeypointsModelTraitMutable methods for crate::dnn::KeypointsModel * KeypointsModelTraitConstConstant methods for crate::dnn::KeypointsModel * LRNLayerTraitMutable methods for crate::dnn::LRNLayer * LRNLayerTraitConstConstant methods for crate::dnn::LRNLayer * LSTMLayerTraitMutable methods for crate::dnn::LSTMLayer * LSTMLayerTraitConstConstant methods for crate::dnn::LSTMLayer * LayerFactoryTraitMutable methods for crate::dnn::LayerFactory * LayerFactoryTraitConstConstant methods for crate::dnn::LayerFactory * LayerNormLayerTraitMutable methods for crate::dnn::LayerNormLayer * LayerNormLayerTraitConstConstant methods for crate::dnn::LayerNormLayer * LayerParamsTraitMutable methods for crate::dnn::LayerParams * LayerParamsTraitConstConstant methods for crate::dnn::LayerParams * LayerTraitMutable methods for crate::dnn::Layer * LayerTraitConstConstant methods for crate::dnn::Layer * LogLayerTraitMutable methods for crate::dnn::LogLayer * LogLayerTraitConstConstant methods for crate::dnn::LogLayer * MVNLayerTraitMutable methods for crate::dnn::MVNLayer * MVNLayerTraitConstConstant methods for crate::dnn::MVNLayer * MaxUnpoolLayerTraitMutable methods for crate::dnn::MaxUnpoolLayer * MaxUnpoolLayerTraitConstConstant methods for crate::dnn::MaxUnpoolLayer * MishLayerTraitMutable methods for crate::dnn::MishLayer * MishLayerTraitConstConstant methods for crate::dnn::MishLayer * ModelTraitMutable methods for crate::dnn::Model * ModelTraitConstConstant methods for crate::dnn::Model * NaryEltwiseLayerTraitMutable methods for crate::dnn::NaryEltwiseLayer * NaryEltwiseLayerTraitConstConstant methods for crate::dnn::NaryEltwiseLayer * NetTraitMutable methods for crate::dnn::Net * NetTraitConstConstant methods for crate::dnn::Net * NormalizeBBoxLayerTraitMutable methods for crate::dnn::NormalizeBBoxLayer * NormalizeBBoxLayerTraitConstConstant methods for crate::dnn::NormalizeBBoxLayer * NotLayerTraitMutable 
methods for crate::dnn::NotLayer * NotLayerTraitConstConstant methods for crate::dnn::NotLayer * PaddingLayerTraitMutable methods for crate::dnn::PaddingLayer * PaddingLayerTraitConstConstant methods for crate::dnn::PaddingLayer * PermuteLayerTraitMutable methods for crate::dnn::PermuteLayer * PermuteLayerTraitConstConstant methods for crate::dnn::PermuteLayer * PoolingLayerInt8TraitMutable methods for crate::dnn::PoolingLayerInt8 * PoolingLayerInt8TraitConstConstant methods for crate::dnn::PoolingLayerInt8 * PoolingLayerTraitMutable methods for crate::dnn::PoolingLayer * PoolingLayerTraitConstConstant methods for crate::dnn::PoolingLayer * PowerLayerTraitMutable methods for crate::dnn::PowerLayer * PowerLayerTraitConstConstant methods for crate::dnn::PowerLayer * PriorBoxLayerTraitMutable methods for crate::dnn::PriorBoxLayer * PriorBoxLayerTraitConstConstant methods for crate::dnn::PriorBoxLayer * ProposalLayerTraitMutable methods for crate::dnn::ProposalLayer * ProposalLayerTraitConstConstant methods for crate::dnn::ProposalLayer * QuantizeLayerTraitMutable methods for crate::dnn::QuantizeLayer * QuantizeLayerTraitConstConstant methods for crate::dnn::QuantizeLayer * RNNLayerTraitMutable methods for crate::dnn::RNNLayer * RNNLayerTraitConstConstant methods for crate::dnn::RNNLayer * ReLU6LayerTraitMutable methods for crate::dnn::ReLU6Layer * ReLU6LayerTraitConstConstant methods for crate::dnn::ReLU6Layer * ReLULayerTraitMutable methods for crate::dnn::ReLULayer * ReLULayerTraitConstConstant methods for crate::dnn::ReLULayer * ReciprocalLayerTraitMutable methods for crate::dnn::ReciprocalLayer * ReciprocalLayerTraitConstConstant methods for crate::dnn::ReciprocalLayer * ReduceLayerTraitMutable methods for crate::dnn::ReduceLayer * ReduceLayerTraitConstConstant methods for crate::dnn::ReduceLayer * RegionLayerTraitMutable methods for crate::dnn::RegionLayer * RegionLayerTraitConstConstant methods for crate::dnn::RegionLayer * ReorgLayerTraitMutable methods for 
crate::dnn::ReorgLayer * ReorgLayerTraitConstConstant methods for crate::dnn::ReorgLayer * RequantizeLayerTraitMutable methods for crate::dnn::RequantizeLayer * RequantizeLayerTraitConstConstant methods for crate::dnn::RequantizeLayer * ReshapeLayerTraitMutable methods for crate::dnn::ReshapeLayer * ReshapeLayerTraitConstConstant methods for crate::dnn::ReshapeLayer * ResizeLayerTraitMutable methods for crate::dnn::ResizeLayer * ResizeLayerTraitConstConstant methods for crate::dnn::ResizeLayer * RoundLayerTraitMutable methods for crate::dnn::RoundLayer * RoundLayerTraitConstConstant methods for crate::dnn::RoundLayer * ScaleLayerInt8TraitMutable methods for crate::dnn::ScaleLayerInt8 * ScaleLayerInt8TraitConstConstant methods for crate::dnn::ScaleLayerInt8 * ScaleLayerTraitMutable methods for crate::dnn::ScaleLayer * ScaleLayerTraitConstConstant methods for crate::dnn::ScaleLayer * ScatterLayerTraitMutable methods for crate::dnn::ScatterLayer * ScatterLayerTraitConstConstant methods for crate::dnn::ScatterLayer * ScatterNDLayerTraitMutable methods for crate::dnn::ScatterNDLayer * ScatterNDLayerTraitConstConstant methods for crate::dnn::ScatterNDLayer * SegmentationModelTraitMutable methods for crate::dnn::SegmentationModel * SegmentationModelTraitConstConstant methods for crate::dnn::SegmentationModel * SeluLayerTraitMutable methods for crate::dnn::SeluLayer * SeluLayerTraitConstConstant methods for crate::dnn::SeluLayer * ShiftLayerInt8TraitMutable methods for crate::dnn::ShiftLayerInt8 * ShiftLayerInt8TraitConstConstant methods for crate::dnn::ShiftLayerInt8 * ShiftLayerTraitMutable methods for crate::dnn::ShiftLayer * ShiftLayerTraitConstConstant methods for crate::dnn::ShiftLayer * ShrinkLayerTraitMutable methods for crate::dnn::ShrinkLayer * ShrinkLayerTraitConstConstant methods for crate::dnn::ShrinkLayer * ShuffleChannelLayerTraitMutable methods for crate::dnn::ShuffleChannelLayer * ShuffleChannelLayerTraitConstConstant methods for 
crate::dnn::ShuffleChannelLayer * SigmoidLayerTraitMutable methods for crate::dnn::SigmoidLayer * SigmoidLayerTraitConstConstant methods for crate::dnn::SigmoidLayer * SignLayerTraitMutable methods for crate::dnn::SignLayer * SignLayerTraitConstConstant methods for crate::dnn::SignLayer * SinLayerTraitMutable methods for crate::dnn::SinLayer * SinLayerTraitConstConstant methods for crate::dnn::SinLayer * SinhLayerTraitMutable methods for crate::dnn::SinhLayer * SinhLayerTraitConstConstant methods for crate::dnn::SinhLayer * SliceLayerTraitMutable methods for crate::dnn::SliceLayer * SliceLayerTraitConstConstant methods for crate::dnn::SliceLayer * SoftmaxLayerInt8TraitMutable methods for crate::dnn::SoftmaxLayerInt8 * SoftmaxLayerInt8TraitConstConstant methods for crate::dnn::SoftmaxLayerInt8 * SoftmaxLayerTraitMutable methods for crate::dnn::SoftmaxLayer * SoftmaxLayerTraitConstConstant methods for crate::dnn::SoftmaxLayer * SoftplusLayerTraitMutable methods for crate::dnn::SoftplusLayer * SoftplusLayerTraitConstConstant methods for crate::dnn::SoftplusLayer * SoftsignLayerTraitMutable methods for crate::dnn::SoftsignLayer * SoftsignLayerTraitConstConstant methods for crate::dnn::SoftsignLayer * SplitLayerTraitMutable methods for crate::dnn::SplitLayer * SplitLayerTraitConstConstant methods for crate::dnn::SplitLayer * SqrtLayerTraitMutable methods for crate::dnn::SqrtLayer * SqrtLayerTraitConstConstant methods for crate::dnn::SqrtLayer * SwishLayerTraitMutable methods for crate::dnn::SwishLayer * SwishLayerTraitConstConstant methods for crate::dnn::SwishLayer * TanHLayerTraitMutable methods for crate::dnn::TanHLayer * TanHLayerTraitConstConstant methods for crate::dnn::TanHLayer * TanLayerTraitMutable methods for crate::dnn::TanLayer * TanLayerTraitConstConstant methods for crate::dnn::TanLayer * TextDetectionModelTraitMutable methods for crate::dnn::TextDetectionModel * TextDetectionModelTraitConstConstant methods for crate::dnn::TextDetectionModel * 
TextDetectionModel_DBTraitMutable methods for crate::dnn::TextDetectionModel_DB * TextDetectionModel_DBTraitConstConstant methods for crate::dnn::TextDetectionModel_DB * TextDetectionModel_EASTTraitMutable methods for crate::dnn::TextDetectionModel_EAST * TextDetectionModel_EASTTraitConstConstant methods for crate::dnn::TextDetectionModel_EAST * TextRecognitionModelTraitMutable methods for crate::dnn::TextRecognitionModel * TextRecognitionModelTraitConstConstant methods for crate::dnn::TextRecognitionModel * ThresholdedReluLayerTraitMutable methods for crate::dnn::ThresholdedReluLayer * ThresholdedReluLayerTraitConstConstant methods for crate::dnn::ThresholdedReluLayer * TileLayerTraitMutable methods for crate::dnn::TileLayer * TileLayerTraitConstConstant methods for crate::dnn::TileLayer * _RangeTraitMutable methods for crate::dnn::_Range * _RangeTraitConstConstant methods for crate::dnn::_Range Functions --- * blob_from_imageCreates 4-dimensional blob from image. Optionally resizes and crops @p image from center, subtracts @p mean values, scales values by @p scalefactor, and swaps Blue and Red channels. * blob_from_image_defCreates 4-dimensional blob from image. Optionally resizes and crops @p image from center, subtracts @p mean values, scales values by @p scalefactor, and swaps Blue and Red channels. * blob_from_image_toCreates 4-dimensional blob from image. @details This is an overloaded member function, provided for convenience. It differs from the above function only in what argument(s) it accepts. * blob_from_image_to_defCreates 4-dimensional blob from image. @details This is an overloaded member function, provided for convenience. It differs from the above function only in what argument(s) it accepts. * blob_from_image_with_paramsCreates 4-dimensional blob from image with given params. * blob_from_image_with_params_1Creates 4-dimensional blob from image with given params.
* blob_from_image_with_params_1_def@overload * blob_from_image_with_params_defCreates 4-dimensional blob from image with given params. * blob_from_imagesCreates 4-dimensional blob from series of images. Optionally resizes and crops @p images from center, subtracts @p mean values, scales values by @p scalefactor, and swaps Blue and Red channels. * blob_from_images_defCreates 4-dimensional blob from series of images. Optionally resizes and crops @p images from center, subtracts @p mean values, scales values by @p scalefactor, and swaps Blue and Red channels. * blob_from_images_toCreates 4-dimensional blob from series of images. @details This is an overloaded member function, provided for convenience. It differs from the above function only in what argument(s) it accepts. * blob_from_images_to_defCreates 4-dimensional blob from series of images. @details This is an overloaded member function, provided for convenience. It differs from the above function only in what argument(s) it accepts. * blob_from_images_with_paramsCreates 4-dimensional blob from series of images with given params. * blob_from_images_with_params_1Creates 4-dimensional blob from series of images with given params. * blob_from_images_with_params_1_def@overload * blob_from_images_with_params_defCreates 4-dimensional blob from series of images with given params. * concat * enable_model_diagnosticsEnables detailed logging of the DNN model loading with CV DNN API. * get_available_backends * get_available_targets * get_inference_engine_backend_typeReturns Inference Engine internal backend API. * get_inference_engine_cpu_typeReturns Inference Engine CPU type. * get_inference_engine_vpu_typeReturns Inference Engine VPU type. * get_plane * images_from_blobParse a 4D blob and output the images it contains as 2D arrays through a simpler data structure (std::vector<cv::Mat>). * nms_boxesPerforms non maximum suppression given boxes and corresponding scores.
* nms_boxes_batchedPerforms batched non maximum suppression on given boxes and corresponding scores across different classes. * nms_boxes_batched_1C++ default parameters * nms_boxes_batched_1_defNote * nms_boxes_batched_defPerforms batched non maximum suppression on given boxes and corresponding scores across different classes. * nms_boxes_defPerforms non maximum suppression given boxes and corresponding scores. * nms_boxes_f64C++ default parameters * nms_boxes_f64_defNote * nms_boxes_rotatedC++ default parameters * nms_boxes_rotated_defNote * read_netRead deep learning network represented in one of the supported formats. * read_net_1Read deep learning network represented in one of the supported formats. @details This is an overloaded member function, provided for convenience. It differs from the above function only in what argument(s) it accepts. * read_net_1_defRead deep learning network represented in one of the supported formats. @details This is an overloaded member function, provided for convenience. It differs from the above function only in what argument(s) it accepts. * read_net_defRead deep learning network represented in one of the supported formats. * read_net_from_caffeReads a network model stored in Caffe framework’s format. * read_net_from_caffe_bufferReads a network model stored in Caffe model in memory. * read_net_from_caffe_buffer_defReads a network model stored in Caffe model in memory. * read_net_from_caffe_defReads a network model stored in Caffe framework’s format. * read_net_from_caffe_strReads a network model stored in Caffe model in memory. @details This is an overloaded member function, provided for convenience. It differs from the above function only in what argument(s) it accepts. * read_net_from_caffe_str_defReads a network model stored in Caffe model in memory. @details This is an overloaded member function, provided for convenience. It differs from the above function only in what argument(s) it accepts. 
* read_net_from_darknetReads a network model stored in Darknet model files. * read_net_from_darknet_bufferReads a network model stored in Darknet model files. * read_net_from_darknet_buffer_defReads a network model stored in Darknet model files. * read_net_from_darknet_defReads a network model stored in Darknet model files. * read_net_from_darknet_strReads a network model stored in Darknet model files. * read_net_from_darknet_str_defReads a network model stored in Darknet model files. * read_net_from_model_optimizerLoads a network from Intel’s Model Optimizer intermediate representation. * read_net_from_model_optimizer_1Loads a network from Intel’s Model Optimizer intermediate representation. * read_net_from_model_optimizer_2Loads a network from Intel’s Model Optimizer intermediate representation. * read_net_from_onnxReads a network model from ONNX. * read_net_from_onnx_bufferReads a network model from ONNX in-memory buffer. * read_net_from_onnx_strReads a network model from ONNX in-memory buffer. * read_net_from_tensorflowReads a network model stored in TensorFlow framework’s format. * read_net_from_tensorflow_bufferReads a network model stored in TensorFlow framework’s format. * read_net_from_tensorflow_buffer_defReads a network model stored in TensorFlow framework’s format. * read_net_from_tensorflow_defReads a network model stored in TensorFlow framework’s format. * read_net_from_tensorflow_strReads a network model stored in TensorFlow framework’s format. @details This is an overloaded member function, provided for convenience. It differs from the above function only in what argument(s) it accepts. * read_net_from_tensorflow_str_defReads a network model stored in TensorFlow framework’s format. @details This is an overloaded member function, provided for convenience. It differs from the above function only in what argument(s) it accepts. * read_net_from_tf_liteReads a network model stored in TFLite framework’s format.
* read_net_from_tf_lite_1Reads a network model stored in TFLite framework’s format. * read_net_from_tf_lite_2Reads a network model stored in TFLite framework’s format. @details This is an overloaded member function, provided for convenience. It differs from the above function only in what argument(s) it accepts. * read_net_from_torchReads a network model stored in Torch7 framework’s format. * read_net_from_torch_defReads a network model stored in Torch7 framework’s format. * read_tensor_from_onnxCreates blob from .pb file. * read_torch_blobLoads blob which was serialized as torch.Tensor object of Torch7 framework. @warning This function has the same limitations as readNetFromTorch(). * read_torch_blob_defLoads blob which was serialized as torch.Tensor object of Torch7 framework. @warning This function has the same limitations as readNetFromTorch(). * release_hddl_pluginRelease an HDDL plugin. * reset_myriad_deviceRelease a Myriad device (bound by OpenCV). * set_inference_engine_backend_typeSpecify Inference Engine internal backend API. * shape * shape_1 * shape_2 * shape_3 * shape_4C++ default parameters * shape_4_defNote * shrink_caffe_modelConvert all weights of Caffe network to half precision floating point. * shrink_caffe_model_defConvert all weights of Caffe network to half precision floating point. * slice * slice_1 * slice_2 * slice_3 * soft_nms_boxesPerforms soft non maximum suppression given boxes and corresponding scores. Reference: https://arxiv.org/abs/1704.04503 * soft_nms_boxes_defPerforms soft non maximum suppression given boxes and corresponding scores. Reference: https://arxiv.org/abs/1704.04503 * totalC++ default parameters * total_1C++ default parameters * total_1_defNote * total_defNote * write_text_graphCreate a text representation for a binary network stored in protocol buffer format. 
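The nms_boxes family above all implement variants of the same greedy procedure. As a rough sketch of that procedure only (the tuple box type, the `iou` helper, and the thresholds below are illustrative assumptions, not the crate's `VectorOfRect`-based API), greedy non-maximum suppression can be written as:

```rust
// Minimal greedy non-maximum suppression sketch (illustrative only).
// A box is (x, y, w, h) in pixels.
fn iou(a: (f32, f32, f32, f32), b: (f32, f32, f32, f32)) -> f32 {
    let x1 = a.0.max(b.0);
    let y1 = a.1.max(b.1);
    let x2 = (a.0 + a.2).min(b.0 + b.2);
    let y2 = (a.1 + a.3).min(b.1 + b.3);
    let inter = (x2 - x1).max(0.0) * (y2 - y1).max(0.0);
    let union = a.2 * a.3 + b.2 * b.3 - inter;
    if union <= 0.0 { 0.0 } else { inter / union }
}

fn nms(boxes: &[(f32, f32, f32, f32)], scores: &[f32],
       score_threshold: f32, nms_threshold: f32) -> Vec<usize> {
    // Candidate indices above the score threshold, best score first.
    let mut order: Vec<usize> = (0..boxes.len())
        .filter(|&i| scores[i] >= score_threshold).collect();
    order.sort_by(|&a, &b| scores[b].partial_cmp(&scores[a]).unwrap());
    let mut keep = Vec::new();
    for &i in &order {
        // Keep a box only if it does not overlap any kept box too much.
        if keep.iter().all(|&k| iou(boxes[i], boxes[k]) <= nms_threshold) {
            keep.push(i);
        }
    }
    keep
}

fn main() {
    let boxes = [(0.0, 0.0, 10.0, 10.0),    // best detection
                 (1.0, 1.0, 10.0, 10.0),    // heavy overlap with the first
                 (50.0, 50.0, 10.0, 10.0)]; // separate object
    let scores = [0.9, 0.8, 0.7];
    let kept = nms(&boxes, &scores, 0.5, 0.4);
    println!("{:?}", kept); // indices of the surviving boxes
}
```

The batched variants apply the same idea per class, and soft_nms_boxes decays overlapping scores instead of dropping boxes outright.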
Type Aliases --- * LayerFactory_ConstructorEach Layer class must provide this function to the factory * MatShape * Net_LayerIdDeprecatedContainer for strings and integers. Module opencv::dnn_superres === DNN used for super resolution --- This module contains functionality for upscaling an image via convolutional neural networks. The following four models are implemented: * EDSR https://arxiv.org/abs/1707.02921 * ESPCN https://arxiv.org/abs/1609.05158 * FSRCNN https://arxiv.org/abs/1608.00367 * LapSRN https://arxiv.org/abs/1710.01992 Modules --- * prelude Structs --- * DnnSuperResImplA class to upscale images via convolutional neural networks. The following four models are implemented: Traits --- * DnnSuperResImplTraitMutable methods for crate::dnn_superres::DnnSuperResImpl * DnnSuperResImplTraitConstConstant methods for crate::dnn_superres::DnnSuperResImpl Module opencv::dpm === Deformable Part-based Models --- ### Discriminatively Trained Part Based Models for Object Detection The object detector described below was initially proposed by <NAME> in Felzenszwalb2010a . It is based on a Dalal-Triggs detector that uses a single filter on histogram of oriented gradients (HOG) features to represent an object category. This detector uses a sliding window approach, where a filter is applied at all positions and scales of an image. The first innovation is enriching the Dalal-Triggs model using a star-structured part-based model defined by a “root” filter (analogous to the Dalal-Triggs filter) plus a set of part filters and associated deformation models. The score of one of the star models at a particular position and scale within an image is the score of the root filter at the given location plus the sum over parts of the maximum, over placements of that part, of the part filter score on its location minus a deformation cost measuring the deviation of the part from its ideal location relative to the root. 
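The star-model score just described (root response plus, per part, the best placement's filter score minus its deformation cost) can be illustrated with a toy computation. The `star_score` helper and the numbers below are made up for illustration and are not part of the cv::dpm API:

```rust
// Toy star-model score: root response plus, for each part, the best
// placement trading filter response against deformation cost.
// (Illustrative numbers only; not the cv::dpm API.)
fn star_score(root_score: f32, parts: &[Vec<(f32, f32)>]) -> f32 {
    // Each part is a list of candidate placements:
    // (part filter score at that placement, deformation cost of it).
    root_score
        + parts.iter()
            .map(|placements| {
                placements.iter()
                    .map(|&(s, d)| s - d)
                    .fold(f32::NEG_INFINITY, f32::max)
            })
            .sum::<f32>()
}

fn main() {
    let parts = vec![
        vec![(2.0, 0.5), (1.8, 0.0)], // part 0: second placement wins (1.8 vs 1.5)
        vec![(1.0, 0.2)],             // part 1: only placement (0.8)
    ];
    // root 3.0 + best part placements 1.8 + 0.8
    println!("{}", star_score(3.0, &parts));
}
```

A mixture model, as the next paragraph explains, would then take the maximum of such scores over its component star models.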
Both root and part filter scores are defined by the dot product between a filter (a set of weights) and a subwindow of a feature pyramid computed from the input image. Another improvement is a representation of the class of models by a mixture of star models. The score of a mixture model at a particular position and scale is the maximum, over components, of the score of that component model at the given location. The detector was dramatically sped up with the cascade algorithm proposed by <NAME> in Felzenszwalb2010b . The algorithm prunes partial hypotheses using thresholds on their scores. The basic idea of the algorithm is to use a hierarchy of models defined by an ordering of the original model’s parts. For a model with (n+1) parts, including the root, a sequence of (n+1) models is obtained. The i-th model in this sequence is defined by the first i parts from the original model. Using this hierarchy, low-scoring hypotheses can be pruned after looking at the best configuration of a subset of the parts. Hypotheses that score high under a weak model are evaluated further using a richer model. OpenCV includes a C++ implementation of the DPM cascade detector. Modules --- * prelude Structs --- * DPMDetectorThis is a C++ abstract class; it provides external user API to work with DPM. * DPMDetector_ObjectDetection Traits --- * DPMDetectorTraitMutable methods for crate::dpm::DPMDetector * DPMDetectorTraitConstConstant methods for crate::dpm::DPMDetector * DPMDetector_ObjectDetectionTraitMutable methods for crate::dpm::DPMDetector_ObjectDetection * DPMDetector_ObjectDetectionTraitConstConstant methods for crate::dpm::DPMDetector_ObjectDetection Module opencv::face === Face Analysis --- * [face_changelog] * [tutorial_face_main] Modules --- * prelude Structs --- * BIFImplementation of bio-inspired features (BIF) from the paper: <NAME>, et al. “Human age estimation using bio-inspired features.” Computer Vision and Pattern Recognition, 2009. CVPR 2009. 
* BasicFaceRecognizer * CParams * EigenFaceRecognizer * FaceRecognizerAbstract base class for all face recognition models * FacemarkAbstract base class for all facemark models * FacemarkAAM * FacemarkAAM_Config\brief Optional parameter for fitting process. * FacemarkAAM_Data\brief Data container for the facemark::getData function * FacemarkAAM_Model\brief The model of AAM Algorithm * FacemarkAAM_Model_Texture * FacemarkAAM_Params * FacemarkKazemi * FacemarkKazemi_Params * FacemarkLBF * FacemarkLBF_Params * FacemarkTrainAbstract base class for trainable facemark models * FisherFaceRecognizer * LBPHFaceRecognizer * MACEMinimum Average Correlation Energy Filter useful for authentication with (cancellable) biometrical features. (does not need many positives to train (10-50), and no negatives at all, also robust to noise/salting) * PredictCollectorAbstract base class for all strategies of prediction result handling * StandardCollectorDefault predict collector * StandardCollector_PredictResult Traits --- * BIFTraitMutable methods for crate::face::BIF * BIFTraitConstConstant methods for crate::face::BIF * BasicFaceRecognizerTraitMutable methods for crate::face::BasicFaceRecognizer * BasicFaceRecognizerTraitConstConstant methods for crate::face::BasicFaceRecognizer * CParamsTraitMutable methods for crate::face::CParams * CParamsTraitConstConstant methods for crate::face::CParams * EigenFaceRecognizerTraitMutable methods for crate::face::EigenFaceRecognizer * EigenFaceRecognizerTraitConstConstant methods for crate::face::EigenFaceRecognizer * FaceRecognizerTraitMutable methods for crate::face::FaceRecognizer * FaceRecognizerTraitConstConstant methods for crate::face::FaceRecognizer * FacemarkAAMTraitMutable methods for crate::face::FacemarkAAM * FacemarkAAMTraitConstConstant methods for crate::face::FacemarkAAM * FacemarkAAM_ConfigTraitMutable methods for crate::face::FacemarkAAM_Config * FacemarkAAM_ConfigTraitConstConstant methods for crate::face::FacemarkAAM_Config * 
FacemarkAAM_DataTraitMutable methods for crate::face::FacemarkAAM_Data * FacemarkAAM_DataTraitConstConstant methods for crate::face::FacemarkAAM_Data * FacemarkAAM_ModelTraitMutable methods for crate::face::FacemarkAAM_Model * FacemarkAAM_ModelTraitConstConstant methods for crate::face::FacemarkAAM_Model * FacemarkAAM_Model_TextureTraitMutable methods for crate::face::FacemarkAAM_Model_Texture * FacemarkAAM_Model_TextureTraitConstConstant methods for crate::face::FacemarkAAM_Model_Texture * FacemarkAAM_ParamsTraitMutable methods for crate::face::FacemarkAAM_Params * FacemarkAAM_ParamsTraitConstConstant methods for crate::face::FacemarkAAM_Params * FacemarkKazemiTraitMutable methods for crate::face::FacemarkKazemi * FacemarkKazemiTraitConstConstant methods for crate::face::FacemarkKazemi * FacemarkKazemi_ParamsTraitMutable methods for crate::face::FacemarkKazemi_Params * FacemarkKazemi_ParamsTraitConstConstant methods for crate::face::FacemarkKazemi_Params * FacemarkLBFTraitMutable methods for crate::face::FacemarkLBF * FacemarkLBFTraitConstConstant methods for crate::face::FacemarkLBF * FacemarkLBF_ParamsTraitMutable methods for crate::face::FacemarkLBF_Params * FacemarkLBF_ParamsTraitConstConstant methods for crate::face::FacemarkLBF_Params * FacemarkTrainTraitMutable methods for crate::face::FacemarkTrain * FacemarkTrainTraitConstConstant methods for crate::face::FacemarkTrain * FacemarkTraitMutable methods for crate::face::Facemark * FacemarkTraitConstConstant methods for crate::face::Facemark * FisherFaceRecognizerTraitMutable methods for crate::face::FisherFaceRecognizer * FisherFaceRecognizerTraitConstConstant methods for crate::face::FisherFaceRecognizer * LBPHFaceRecognizerTraitMutable methods for crate::face::LBPHFaceRecognizer * LBPHFaceRecognizerTraitConstConstant methods for crate::face::LBPHFaceRecognizer * MACETraitMutable methods for crate::face::MACE * MACETraitConstConstant methods for crate::face::MACE * PredictCollectorTraitMutable methods for 
crate::face::PredictCollector * PredictCollectorTraitConstConstant methods for crate::face::PredictCollector * StandardCollectorTraitMutable methods for crate::face::StandardCollector * StandardCollectorTraitConstConstant methods for crate::face::StandardCollector Functions --- * create_facemark_aamconstruct an AAM facemark detector * create_facemark_kazemiconstruct a Kazemi facemark detector * create_facemark_lbfconstruct an LBF facemark detector * draw_facemarksUtility to draw the detected facial landmark points * draw_facemarks_defUtility to draw the detected facial landmark points * get_facesDefault face detector This function is mainly utilized by the implementation of a Facemark Algorithm. End users are advised to use function Facemark::getFaces which can be manually defined and circumvented to the algorithm by Facemark::setFaceDetector. * get_faces_haar * load_dataset_listA utility to load list of paths to training image and annotation file. * load_face_pointsA utility to load facial landmark information from a given file. * load_face_points_defA utility to load facial landmark information from a given file. * load_training_dataA utility to load facial landmark dataset from a single file. * load_training_data_1A utility to load facial landmark information from the dataset. * load_training_data_1_defA utility to load facial landmark information from the dataset. * load_training_data_2This function extracts the data for training from .txt files which contains the corresponding image name and landmarks. The first file in each file should give the path of the image whose landmarks are being described in the file. Then in the subsequent lines there should be coordinates of the landmarks in the image i.e each line should be of the form x,y where x represents the x coordinate of the landmark and y represents the y coordinate of the landmark. * load_training_data_defA utility to load facial landmark dataset from a single file. 
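The file layout described for load_training_data_2 above (first line an image path, each following line a `x,y` landmark coordinate) is simple to sketch as a parser. The `parse_landmark_file` helper below is a hypothetical pure-Rust illustration of that layout, not the cv::face loader itself:

```rust
// Sketch of parsing the landmark file layout described above: the first
// line names the image, each following line is "x,y".
// (Illustrative parser only; not the cv::face API.)
fn parse_landmark_file(text: &str) -> Option<(String, Vec<(f32, f32)>)> {
    let mut lines = text.lines().filter(|l| !l.trim().is_empty());
    let image = lines.next()?.trim().to_string();
    let mut points = Vec::new();
    for line in lines {
        // Each landmark line is "x,y"; reject malformed lines with None.
        let (x, y) = line.trim().split_once(',')?;
        points.push((x.trim().parse().ok()?, y.trim().parse().ok()?));
    }
    Some((image, points))
}

fn main() {
    let sample = "faces/001.jpg\n12.5,30.0\n14.0,31.5\n";
    let (image, points) = parse_landmark_file(sample).unwrap();
    println!("{} {:?}", image, points);
}
```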
Type Aliases --- * FN_FaceDetector Module opencv::features2d === 2D Features Framework --- Feature Detection and Description --- Descriptor Matchers --- Matchers of keypoint descriptors in OpenCV have wrappers with a common interface that enables you to easily switch between different algorithms solving the same problem. This section is devoted to matching descriptors that are represented as vectors in a multidimensional space. All objects that implement vector descriptor matchers inherit the DescriptorMatcher interface. Drawing Function of Keypoints and Matches --- Object Categorization --- This section describes approaches based on local 2D features and used to categorize objects. Hardware Acceleration Layer --- Modules --- * prelude Structs --- * AKAZEClass implementing the AKAZE keypoint detector and descriptor extractor, described in ANB13. * AffineFeatureClass for implementing the wrapper which makes detectors and extractors to be affine invariant, described as ASIFT in YM11 . * AgastFeatureDetectorWrapping class for feature detection using the AGAST method. : * BFMatcherBrute-force descriptor matcher. * BOWImgDescriptorExtractorClass to compute an image descriptor using the *bag of visual words*. * BOWKMeansTrainerkmeans -based class to train visual vocabulary using the *bag of visual words* approach. : * BOWTrainerAbstract base class for training the *bag of visual words* vocabulary from a set of descriptors. * BRISKClass implementing the BRISK keypoint detector and descriptor extractor, described in LCS11 . * DescriptorMatcherAbstract base class for matching keypoint descriptors. * FastFeatureDetectorWrapping class for feature detection using the FAST method. : * Feature2D * FlannBasedMatcherFlann-based descriptor matcher. * GFTTDetectorWrapping class for feature detection using the goodFeaturesToTrack function. : * KAZEClass implementing the KAZE keypoint detector and descriptor extractor, described in ABD12 . 
* KeyPointsFilterA class filters a vector of keypoints. * MSERMaximally stable extremal region extractor * ORBClass implementing the ORB (*oriented BRIEF*) keypoint detector and descriptor extractor * SIFTClass for extracting keypoints and computing descriptors using the Scale Invariant Feature Transform (SIFT) algorithm by <NAME> . * SimpleBlobDetectorClass for extracting blobs from an image. : * SimpleBlobDetector_Params Enums --- * AKAZE_DescriptorType * AgastFeatureDetector_DetectorType * DescriptorMatcher_MatcherType * DrawMatchesFlags * FastFeatureDetector_DetectorType * KAZE_DiffusivityType * ORB_ScoreType Constants --- * AKAZE_DESCRIPTOR_KAZE * AKAZE_DESCRIPTOR_KAZE_UPRIGHTUpright descriptors, not invariant to rotation * AKAZE_DESCRIPTOR_MLDB * AKAZE_DESCRIPTOR_MLDB_UPRIGHTUpright descriptors, not invariant to rotation * AgastFeatureDetector_AGAST_5_8 * AgastFeatureDetector_AGAST_7_12d * AgastFeatureDetector_AGAST_7_12s * AgastFeatureDetector_NONMAX_SUPPRESSION * AgastFeatureDetector_OAST_9_16 * AgastFeatureDetector_THRESHOLD * DescriptorMatcher_BRUTEFORCE * DescriptorMatcher_BRUTEFORCE_HAMMING * DescriptorMatcher_BRUTEFORCE_HAMMINGLUT * DescriptorMatcher_BRUTEFORCE_L1 * DescriptorMatcher_BRUTEFORCE_SL2 * DescriptorMatcher_FLANNBASED * DrawMatchesFlags_DEFAULTOutput image matrix will be created (Mat::create), i.e. existing memory of output image may be reused. Two source image, matches and single keypoints will be drawn. For each keypoint only the center point will be drawn (without the circle around keypoint with keypoint size and orientation). * DrawMatchesFlags_DRAW_OVER_OUTIMGOutput image matrix will not be created (Mat::create). Matches will be drawn on existing content of output image. * DrawMatchesFlags_DRAW_RICH_KEYPOINTSFor each keypoint the circle around keypoint with keypoint size and orientation will be drawn. * DrawMatchesFlags_NOT_DRAW_SINGLE_POINTSSingle keypoints will not be drawn. 
* FastFeatureDetector_FAST_N * FastFeatureDetector_NONMAX_SUPPRESSION * FastFeatureDetector_THRESHOLD * FastFeatureDetector_TYPE_5_8 * FastFeatureDetector_TYPE_7_12 * FastFeatureDetector_TYPE_9_16 * KAZE_DIFF_CHARBONNIER * KAZE_DIFF_PM_G1 * KAZE_DIFF_PM_G2 * KAZE_DIFF_WEICKERT * ORB_FAST_SCORE * ORB_HARRIS_SCORE Traits --- * AKAZETraitMutable methods for crate::features2d::AKAZE * AKAZETraitConstConstant methods for crate::features2d::AKAZE * AffineFeatureTraitMutable methods for crate::features2d::AffineFeature * AffineFeatureTraitConstConstant methods for crate::features2d::AffineFeature * AgastFeatureDetectorTraitMutable methods for crate::features2d::AgastFeatureDetector * AgastFeatureDetectorTraitConstConstant methods for crate::features2d::AgastFeatureDetector * BFMatcherTraitMutable methods for crate::features2d::BFMatcher * BFMatcherTraitConstConstant methods for crate::features2d::BFMatcher * BOWImgDescriptorExtractorTraitMutable methods for crate::features2d::BOWImgDescriptorExtractor * BOWImgDescriptorExtractorTraitConstConstant methods for crate::features2d::BOWImgDescriptorExtractor * BOWKMeansTrainerTraitMutable methods for crate::features2d::BOWKMeansTrainer * BOWKMeansTrainerTraitConstConstant methods for crate::features2d::BOWKMeansTrainer * BOWTrainerTraitMutable methods for crate::features2d::BOWTrainer * BOWTrainerTraitConstConstant methods for crate::features2d::BOWTrainer * BRISKTraitMutable methods for crate::features2d::BRISK * BRISKTraitConstConstant methods for crate::features2d::BRISK * DescriptorMatcherTraitMutable methods for crate::features2d::DescriptorMatcher * DescriptorMatcherTraitConstConstant methods for crate::features2d::DescriptorMatcher * FastFeatureDetectorTraitMutable methods for crate::features2d::FastFeatureDetector * FastFeatureDetectorTraitConstConstant methods for crate::features2d::FastFeatureDetector * Feature2DTraitMutable methods for crate::features2d::Feature2D * Feature2DTraitConstConstant methods for 
crate::features2d::Feature2D * FlannBasedMatcherTraitMutable methods for crate::features2d::FlannBasedMatcher * FlannBasedMatcherTraitConstConstant methods for crate::features2d::FlannBasedMatcher * GFTTDetectorTraitMutable methods for crate::features2d::GFTTDetector * GFTTDetectorTraitConstConstant methods for crate::features2d::GFTTDetector * KAZETraitMutable methods for crate::features2d::KAZE * KAZETraitConstConstant methods for crate::features2d::KAZE * KeyPointsFilterTraitMutable methods for crate::features2d::KeyPointsFilter * KeyPointsFilterTraitConstConstant methods for crate::features2d::KeyPointsFilter * MSERTraitMutable methods for crate::features2d::MSER * MSERTraitConstConstant methods for crate::features2d::MSER * ORBTraitMutable methods for crate::features2d::ORB * ORBTraitConstConstant methods for crate::features2d::ORB * SIFTTraitMutable methods for crate::features2d::SIFT * SIFTTraitConstConstant methods for crate::features2d::SIFT * SimpleBlobDetectorTraitMutable methods for crate::features2d::SimpleBlobDetector * SimpleBlobDetectorTraitConstConstant methods for crate::features2d::SimpleBlobDetector Functions --- * agastDetects corners using the AGAST algorithm * agast_def@overload * agast_with_typeDetects corners using the AGAST algorithm * compute_recall_precision_curve * draw_keypointsDraws keypoints. * draw_keypoints_defDraws keypoints. * draw_matchesDraws the found matches of keypoints from two images. * draw_matches_1Draws the found matches of keypoints from two images. * draw_matches_1_def@overload * draw_matches_defDraws the found matches of keypoints from two images. 
* draw_matches_knnC++ default parameters * draw_matches_knn_defNote * evaluate_feature_detector *************************************************************************************Functions to evaluate the feature detectors and [generic] descriptor extractors * *************************************************************************************** * evaluate_feature_detector_def *************************************************************************************Functions to evaluate the feature detectors and [generic] descriptor extractors * *************************************************************************************** * fastDetects corners using the FAST algorithm * fast_def@overload * fast_with_typeDetects corners using the FAST algorithm * get_nearest_point * get_recall Type Aliases --- * AffineDescriptorExtractor * AffineFeatureDetector * DescriptorExtractorExtractors of keypoint descriptors in OpenCV have wrappers with a common interface that enables you to easily switch between different algorithms solving the same problem. This section is devoted to computing descriptors represented as vectors in a multidimensional space. All objects that implement the vector descriptor extractors inherit the DescriptorExtractor interface. * FeatureDetectorFeature detectors in OpenCV have wrappers with a common interface that enables you to easily switch between different algorithms solving the same problem. All objects that implement keypoint detectors inherit the FeatureDetector interface. * SiftDescriptorExtractor * SiftFeatureDetector Module opencv::flann === Clustering and Search in Multi-Dimensional Spaces --- This section documents OpenCV’s interface to the FLANN library. FLANN (Fast Library for Approximate Nearest Neighbors) is a library that contains a collection of algorithms optimized for fast nearest neighbor search in large datasets and for high dimensional features. More information about FLANN can be found in Muja2009 . 
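FLANN's index structures (KD-trees, k-means trees, LSH) approximate the exact nearest-neighbor search that a linear scan performs. As a baseline sketch only (toy data, a plain squared-Euclidean metric, and a hypothetical `nearest` helper, not the flann API), the exact search being accelerated looks like:

```rust
// Exact linear nearest-neighbour search over f32 feature vectors; FLANN's
// indexes trade a little accuracy to answer the same query much faster on
// large, high-dimensional datasets. (Illustrative only; not the flann API.)
fn nearest(dataset: &[Vec<f32>], query: &[f32]) -> Option<(usize, f32)> {
    dataset.iter().enumerate()
        .map(|(i, v)| {
            // Squared Euclidean distance to the query vector.
            let d2: f32 = v.iter().zip(query).map(|(a, b)| (a - b) * (a - b)).sum();
            (i, d2)
        })
        .min_by(|a, b| a.1.partial_cmp(&b.1).unwrap())
}

fn main() {
    let dataset = vec![vec![0.0, 0.0], vec![5.0, 5.0], vec![1.0, 1.0]];
    let (idx, d2) = nearest(&dataset, &[0.9, 1.2]).unwrap();
    println!("{} {}", idx, d2); // index 2 is closest
}
```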
Modules --- * prelude Structs --- * AutotunedIndexParams * CompositeIndexParams * HierarchicalClusteringIndexParams * Index * IndexParams * KDTreeIndexParams * KMeansIndexParams * LinearIndexParams * LshIndexParams * SavedIndexParams * SearchParams Enums --- * FlannIndexType * flann_algorithm_t * flann_centers_init_t * flann_datatype_t * flann_distance_t * flann_log_level_t Constants --- * AUTOTUNED * BITS_PER_BASE * BITS_PER_CHAR * BLOCKSIZE * CENTERS_GONZALES * CENTERS_KMEANSPP * CENTERS_RANDOM * COMPOSITE * CS * EUCLIDEAN * FLANN_CENTERS_GONZALES * FLANN_CENTERS_GROUPWISE * FLANN_CENTERS_KMEANSPP * FLANN_CENTERS_RANDOM * FLANN_CHECKS_AUTOTUNED * FLANN_CHECKS_UNLIMITED * FLANN_DIST_CHI_SQUARE * FLANN_DIST_CS * FLANN_DIST_DNAMMING * FLANN_DIST_EUCLIDEAN * FLANN_DIST_HAMMING * FLANN_DIST_HELLINGER * FLANN_DIST_HIST_INTERSECT * FLANN_DIST_KL * FLANN_DIST_KULLBACK_LEIBLER * FLANN_DIST_L1 * FLANN_DIST_L2 * FLANN_DIST_MANHATTAN * FLANN_DIST_MAX * FLANN_DIST_MINKOWSKI * FLANN_FLOAT32 * FLANN_FLOAT64 * FLANN_INDEX_AUTOTUNED * FLANN_INDEX_COMPOSITE * FLANN_INDEX_HIERARCHICAL * FLANN_INDEX_KDTREE * FLANN_INDEX_KDTREE_SINGLE * FLANN_INDEX_KMEANS * FLANN_INDEX_LINEAR * FLANN_INDEX_LSH * FLANN_INDEX_SAVED * FLANN_INDEX_TYPE_8S * FLANN_INDEX_TYPE_8U * FLANN_INDEX_TYPE_16S * FLANN_INDEX_TYPE_16U * FLANN_INDEX_TYPE_32F * FLANN_INDEX_TYPE_32S * FLANN_INDEX_TYPE_64F * FLANN_INDEX_TYPE_ALGORITHM * FLANN_INDEX_TYPE_BOOL * FLANN_INDEX_TYPE_STRING * FLANN_INT8 * FLANN_INT16 * FLANN_INT32 * FLANN_INT64 * FLANN_LOG_ERROR * FLANN_LOG_FATAL * FLANN_LOG_INFO * FLANN_LOG_NONE * FLANN_LOG_WARN * FLANN_SIGNATURE_ * FLANN_UINT8 * FLANN_UINT16 * FLANN_UINT32 * FLANN_UINT64 * FLANN_USE_BOOST * FLANN_VERSION_ * HELLINGER * HIST_INTERSECT * KDTREE * KDTREE_SINGLE * KL * KMEANS * KULLBACK_LEIBLER * LAST_VALUE_FLANN_INDEX_TYPE * LINEAR * MANHATTAN * MAX_DIST * MINKOWSKI * SAVED * USE_UNORDERED_MAP * WORDSIZEPooled storage allocator Traits --- * AutotunedIndexParamsTraitMutable methods for 
crate::flann::AutotunedIndexParams * AutotunedIndexParamsTraitConstConstant methods for crate::flann::AutotunedIndexParams * CompositeIndexParamsTraitMutable methods for crate::flann::CompositeIndexParams * CompositeIndexParamsTraitConstConstant methods for crate::flann::CompositeIndexParams * HierarchicalClusteringIndexParamsTraitMutable methods for crate::flann::HierarchicalClusteringIndexParams * HierarchicalClusteringIndexParamsTraitConstConstant methods for crate::flann::HierarchicalClusteringIndexParams * IndexParamsTraitMutable methods for crate::flann::IndexParams * IndexParamsTraitConstConstant methods for crate::flann::IndexParams * IndexTraitMutable methods for crate::flann::Index * IndexTraitConstConstant methods for crate::flann::Index * KDTreeIndexParamsTraitMutable methods for crate::flann::KDTreeIndexParams * KDTreeIndexParamsTraitConstConstant methods for crate::flann::KDTreeIndexParams * KMeansIndexParamsTraitMutable methods for crate::flann::KMeansIndexParams * KMeansIndexParamsTraitConstConstant methods for crate::flann::KMeansIndexParams * LinearIndexParamsTraitMutable methods for crate::flann::LinearIndexParams * LinearIndexParamsTraitConstConstant methods for crate::flann::LinearIndexParams * LshIndexParamsTraitMutable methods for crate::flann::LshIndexParams * LshIndexParamsTraitConstConstant methods for crate::flann::LshIndexParams * SavedIndexParamsTraitMutable methods for crate::flann::SavedIndexParams * SavedIndexParamsTraitConstConstant methods for crate::flann::SavedIndexParams * SearchParamsTraitMutable methods for crate::flann::SearchParams * SearchParamsTraitConstConstant methods for crate::flann::SearchParams Functions --- * flann_distance_type * set_distance_type Type Aliases --- * BucketA bucket in an LSH table * bucket_keyThe id from which we can get a bucket back in an LSH table * feature_indexWhat is stored in an LSH bucket Module opencv::freetype === Drawing UTF-8 strings with freetype/harfbuzz --- This module is for drawing 
UTF-8 strings with freetype/harfbuzz. 1. Install freetype2 and harfbuzz on your system. 2. Create a FreeType2 instance with the createFreeType2() function. 3. Load a font file with the loadFontData() function. 4. Draw text with the putText() function. * If the thickness parameter is negative, the drawn glyph is filled. * If the thickness parameter is positive, the drawn glyph is outlined with that thickness. * If the line_type parameter is 16 (or CV_AA), the drawn glyph is smoothed. Modules --- * prelude Structs --- * FreeType2 Traits --- * FreeType2TraitMutable methods for crate::freetype::FreeType2 * FreeType2TraitConstConstant methods for crate::freetype::FreeType2 Functions --- * create_free_type2Create FreeType2 Instance Module opencv::fuzzy === Image processing based on fuzzy mathematics --- Namespace for all functions is `ft`. The module provides implementations of the latest image processing algorithms based on fuzzy mathematics. Methods are named based on the pattern `FT`_degree_dimension`_`method. Math with F0-transform support --- Fuzzy transform (![inline formula](https://latex.codecogs.com/png.latex?F%5E0)-transform) of the 0th degree transforms the whole image into a matrix of its components. These components are used in later computations, where each of them represents the average color of a certain subarea. Math with F1-transform support --- Fuzzy transform (![inline formula](https://latex.codecogs.com/png.latex?F%5E1)-transform) of the 1st degree transforms the whole image into a matrix of its components. Each component is a polynomial of the 1st degree carrying information about the average color and average gradient of a certain subarea. Fuzzy image processing --- Image processing based on fuzzy mathematics, namely the F-transform. Modules --- * prelude Constants --- * ITERATIVEprocessing in several iterations * LINEARlinear (triangular) shape * MULTI_STEPprocessing in multiple steps * ONE_STEPprocessing in one step * SINUSsinusoidal shape Functions --- * create_kernelCreates kernel from general functions. 
* create_kernel1Creates kernel from basic functions. * filterImage filtering * ft0_2d_componentsComputes components of the array using the direct F^0-transform. * ft0_2d_components_defComputes components of the array using the direct F^0-transform. * ft0_2d_fl_processSlightly less accurate version of the F^0-transform computation, optimized for higher speed. The method works with the linear basic function. * ft0_2d_fl_process_floatSlightly less accurate version of the F^0-transform computation, optimized for higher speed. The method works with the linear basic function. * ft0_2d_inverse_ftComputes the inverse F^0-transform. * ft0_2d_iterationComputes the F^0-transform and inverse F^0-transform at once and returns the state. * ft0_2d_processComputes the F^0-transform and inverse F^0-transform at once. * ft0_2d_process_defComputes the F^0-transform and inverse F^0-transform at once. * ft1_2d_componentsComputes components of the array using the direct F^1-transform. * ft1_2d_create_polynom_matrix_horizontalCreates the horizontal matrix for F^1-transform computation. * ft1_2d_create_polynom_matrix_verticalCreates the vertical matrix for F^1-transform computation. * ft1_2d_inverse_ftComputes the inverse F^1-transform. * ft1_2d_polynomialComputes elements of F^1-transform components. * ft1_2d_polynomial_defComputes elements of F^1-transform components. * ft1_2d_processComputes the F^1-transform and inverse F^1-transform at once. * ft1_2d_process_defComputes the F^1-transform and inverse F^1-transform at once. 
* inpaintImage inpainting Module opencv::gapi === G-API framework --- G-API Main Classes --- G-API Data Types --- G-API Standard Backends --- G-API Graph Compilation Arguments --- G-API Serialization functionality --- Modules --- * prelude Structs --- * CircleThis structure represents a circle to draw. * DataThis aggregate type represents all types which G-API can handle (via variant). * Detail_ExtractArgsCallback * Detail_ExtractMetaCallback * Detail_GArrayU * Detail_GOpaqueU * GArg * GArrayDesc * GBackend * GCall * GCompileArgRepresents an arbitrary compilation argument. * GCompiled * GComputation * GFrame * GFrameDesc * GFunctor * GKernel * GKernelImpl * GKernelPackageA container class for heterogeneous kernel implementation collections and graph transformations. * GMat * GMatDesc * GMatP * GOpaqueDesc * GRunArg * GScalar * GScalarDesc * GStreamingCompiled * GTransform * GTypeInfo * ImageThis structure represents an image to draw. * LineThis structure represents a line to draw. * MediaFrame * MediaFrame_IAdapterAn interface class for MediaFrame data adapters. * MediaFrame_ViewProvides access to the MediaFrame’s underlying data. * MosaicThis structure represents a mosaicing operation. * PolyThis structure represents a polygon to draw. * RMat * RMat_IAdapter * RMat_View * RectThis structure represents a rectangle to draw. * Scalar * TextThis structure represents a text string to draw. Parameters match cv::putText(). * any * queue_capacitySpecify queue capacity for streaming execution. 
* use_only\addtogroup gapi_compile_args / Enums --- * Detail_ArgKind * Detail_OpaqueKind * GShape * MediaFormat * MediaFrame_AccessThis enum defines different types of cv::MediaFrame provided access to the underlying data. Note that different flags can’t be combined in this version. * RMat_Access Constants --- * Detail_ArgKind_GARRAY * Detail_ArgKind_GFRAME * Detail_ArgKind_GMAT * Detail_ArgKind_GMATP * Detail_ArgKind_GOBJREF * Detail_ArgKind_GOPAQUE * Detail_ArgKind_GSCALAR * Detail_ArgKind_OPAQUE * Detail_ArgKind_OPAQUE_VAL * Detail_OpaqueKind_CV_BOOL * Detail_OpaqueKind_CV_DOUBLE * Detail_OpaqueKind_CV_DRAW_PRIM * Detail_OpaqueKind_CV_FLOAT * Detail_OpaqueKind_CV_INT * Detail_OpaqueKind_CV_INT64 * Detail_OpaqueKind_CV_MAT * Detail_OpaqueKind_CV_POINT * Detail_OpaqueKind_CV_POINT2F * Detail_OpaqueKind_CV_POINT3F * Detail_OpaqueKind_CV_RECT * Detail_OpaqueKind_CV_SCALAR * Detail_OpaqueKind_CV_SIZE * Detail_OpaqueKind_CV_STRING * Detail_OpaqueKind_CV_UINT64 * Detail_OpaqueKind_CV_UNKNOWN * GShape_GARRAY * GShape_GFRAME * GShape_GMAT * GShape_GOPAQUE * GShape_GSCALAR * MediaFormat_BGR * MediaFormat_GRAY * MediaFormat_NV12 * MediaFrame_Access_RAccess data for reading * MediaFrame_Access_WAccess data for writing * RMat_Access_R * RMat_Access_W Traits --- * DataTraitMutable methods for crate::gapi::Data * DataTraitConstConstant methods for crate::gapi::Data * Detail_ExtractArgsCallbackTraitMutable methods for crate::gapi::Detail_ExtractArgsCallback * Detail_ExtractArgsCallbackTraitConstConstant methods for crate::gapi::Detail_ExtractArgsCallback * Detail_ExtractMetaCallbackTraitMutable methods for crate::gapi::Detail_ExtractMetaCallback * Detail_ExtractMetaCallbackTraitConstConstant methods for crate::gapi::Detail_ExtractMetaCallback * Detail_GArrayUTraitMutable methods for crate::gapi::Detail_GArrayU * Detail_GArrayUTraitConstConstant methods for crate::gapi::Detail_GArrayU * Detail_GOpaqueUTraitMutable methods for crate::gapi::Detail_GOpaqueU * 
Detail_GOpaqueUTraitConstConstant methods for crate::gapi::Detail_GOpaqueU * GArgTraitMutable methods for crate::gapi::GArg * GArgTraitConstConstant methods for crate::gapi::GArg * GArrayDescTraitMutable methods for crate::gapi::GArrayDesc * GArrayDescTraitConstConstant methods for crate::gapi::GArrayDesc * GBackendTraitMutable methods for crate::gapi::GBackend * GBackendTraitConstConstant methods for crate::gapi::GBackend * GCallTraitMutable methods for crate::gapi::GCall * GCallTraitConstConstant methods for crate::gapi::GCall * GCompileArgTraitMutable methods for crate::gapi::GCompileArg * GCompileArgTraitConstConstant methods for crate::gapi::GCompileArg * GCompiledTraitMutable methods for crate::gapi::GCompiled * GCompiledTraitConstConstant methods for crate::gapi::GCompiled * GComputationTraitMutable methods for crate::gapi::GComputation * GComputationTraitConstConstant methods for crate::gapi::GComputation * GFrameDescTraitMutable methods for crate::gapi::GFrameDesc * GFrameDescTraitConstConstant methods for crate::gapi::GFrameDesc * GFrameTraitMutable methods for crate::gapi::GFrame * GFrameTraitConstConstant methods for crate::gapi::GFrame * GFunctorTraitMutable methods for crate::gapi::GFunctor * GFunctorTraitConstConstant methods for crate::gapi::GFunctor * GKernelImplTraitMutable methods for crate::gapi::GKernelImpl * GKernelImplTraitConstConstant methods for crate::gapi::GKernelImpl * GKernelPackageTraitMutable methods for crate::gapi::GKernelPackage * GKernelPackageTraitConstConstant methods for crate::gapi::GKernelPackage * GKernelTraitMutable methods for crate::gapi::GKernel * GKernelTraitConstConstant methods for crate::gapi::GKernel * GMatDescTraitMutable methods for crate::gapi::GMatDesc * GMatDescTraitConstConstant methods for crate::gapi::GMatDesc * GMatPTraitMutable methods for crate::gapi::GMatP * GMatPTraitConstConstant methods for crate::gapi::GMatP * GMatTraitMutable methods for crate::gapi::GMat * GMatTraitConstConstant methods for 
crate::gapi::GMat * GOpaqueDescTraitMutable methods for crate::gapi::GOpaqueDesc * GOpaqueDescTraitConstConstant methods for crate::gapi::GOpaqueDesc * GRunArgTraitMutable methods for crate::gapi::GRunArg * GRunArgTraitConstConstant methods for crate::gapi::GRunArg * GScalarDescTraitMutable methods for crate::gapi::GScalarDesc * GScalarDescTraitConstConstant methods for crate::gapi::GScalarDesc * GScalarTraitMutable methods for crate::gapi::GScalar * GScalarTraitConstConstant methods for crate::gapi::GScalar * GStreamingCompiledTraitMutable methods for crate::gapi::GStreamingCompiled * GStreamingCompiledTraitConstConstant methods for crate::gapi::GStreamingCompiled * GTransformTraitMutable methods for crate::gapi::GTransform * GTransformTraitConstConstant methods for crate::gapi::GTransform * GTypeInfoTraitMutable methods for crate::gapi::GTypeInfo * GTypeInfoTraitConstConstant methods for crate::gapi::GTypeInfo * ImageTraitMutable methods for crate::gapi::Image * ImageTraitConstConstant methods for crate::gapi::Image * MediaFrameTraitMutable methods for crate::gapi::MediaFrame * MediaFrameTraitConstConstant methods for crate::gapi::MediaFrame * MediaFrame_IAdapterTraitMutable methods for crate::gapi::MediaFrame_IAdapter * MediaFrame_IAdapterTraitConstConstant methods for crate::gapi::MediaFrame_IAdapter * MediaFrame_ViewTraitMutable methods for crate::gapi::MediaFrame_View * MediaFrame_ViewTraitConstConstant methods for crate::gapi::MediaFrame_View * PolyTraitMutable methods for crate::gapi::Poly * PolyTraitConstConstant methods for crate::gapi::Poly * RMatTraitMutable methods for crate::gapi::RMat * RMatTraitConstConstant methods for crate::gapi::RMat * RMat_IAdapterTraitMutable methods for crate::gapi::RMat_IAdapter * RMat_IAdapterTraitConstConstant methods for crate::gapi::RMat_IAdapter * RMat_ViewTraitMutable methods for crate::gapi::RMat_View * RMat_ViewTraitConstConstant methods for crate::gapi::RMat_View * ScalarTraitMutable methods for crate::gapi::Scalar 
* ScalarTraitConstConstant methods for crate::gapi::Scalar * TextTraitMutable methods for crate::gapi::Text * TextTraitConstConstant methods for crate::gapi::Text * anyTraitMutable methods for crate::gapi::any * anyTraitConstConstant methods for crate::gapi::any * use_onlyTraitMutable methods for crate::gapi::use_only * use_onlyTraitConstConstant methods for crate::gapi::use_only Functions --- * abs_diffCalculates the per-element absolute difference between two matrices. * abs_diff_cCalculates the absolute value of matrix elements. * addCalculates the per-element sum of two matrices. * add_cCalculates the per-element sum of a matrix and a given scalar. * add_c_1Calculates the per-element sum of a matrix and a given scalar. * add_c_1_def@overload * add_c_defCalculates the per-element sum of a matrix and a given scalar. * add_defCalculates the per-element sum of two matrices. * add_gmat_gmat * add_gmat_gscalar * add_gscalar_gmat * add_weightedCalculates the weighted sum of two matrices. * add_weighted_defCalculates the weighted sum of two matrices. * and_gmat_gmat * and_gmat_gscalar * and_gscalar_gmat * bayer_gr2_rgbConverts an image from BayerGR color space to RGB. The function converts an input image from BayerGR color space to RGB. The conventional ranges for G, R, and B channel values are 0 to 255. * bgrGets the BGR plane from the input frame * bgr2_grayConverts an image from BGR color space to grayscale. * bgr2_i420Converts an image from BGR color space to I420 color space. * bgr2_luvConverts an image from BGR color space to LUV color space. * bgr2_rgbConverts an image from BGR color space to RGB color space. * bgr2_yuvConverts an image from BGR color space to YUV color space. * bilateral_filterApplies the bilateral filter to an image. * bilateral_filter_defApplies the bilateral filter to an image. * bitwise_andcomputes bitwise conjunction of the two matrices (src1 & src2) Calculates the per-element bit-wise logical conjunction of two matrices of the same size. 
* bitwise_and_1computes bitwise conjunction of the two matrices (src1 & src2) Calculates the per-element bit-wise logical conjunction of two matrices of the same size. * bitwise_notInverts every bit of an array. * bitwise_orcomputes bitwise disjunction of the two matrices (src1 | src2) Calculates the per-element bit-wise logical disjunction of two matrices of the same size. * bitwise_or_1computes bitwise disjunction of the two matrices (src1 | src2) Calculates the per-element bit-wise logical disjunction of two matrices of the same size. * bitwise_xorcomputes bitwise logical “exclusive or” of the two matrices (src1 ^ src2) Calculates the per-element bit-wise logical “exclusive or” of two matrices of the same size. * bitwise_xor_1computes bitwise logical “exclusive or” of the two matrices (src1 ^ src2) Calculates the per-element bit-wise logical “exclusive or” of two matrices of the same size. * blurBlurs an image using the normalized box filter. * blur_defBlurs an image using the normalized box filter. * box_filterBlurs an image using the box filter. * box_filter_defBlurs an image using the box filter. * cannyFinds edges in an image using the Canny algorithm. * canny_defFinds edges in an image using the Canny algorithm. * cart_to_polarCalculates the magnitude and angle of 2D vectors. * cart_to_polar_defCalculates the magnitude and angle of 2D vectors. * cmp_eqPerforms the per-element comparison of two matrices, checking if elements from the first matrix are equal to elements in the second. * cmp_eq_1Performs the per-element comparison of two matrices, checking if elements from the first matrix are equal to elements in the second. * cmp_gePerforms the per-element comparison of two matrices, checking if elements from the first matrix are greater than or equal to elements in the second. * cmp_ge_1Performs the per-element comparison of two matrices, checking if elements from the first matrix are greater than or equal to elements in the second. 
* cmp_gtPerforms the per-element comparison of two matrices, checking if elements from the first matrix are greater than elements in the second. * cmp_gt_1Performs the per-element comparison of two matrices, checking if elements from the first matrix are greater than elements in the second. * cmp_lePerforms the per-element comparison of two matrices, checking if elements from the first matrix are less than or equal to elements in the second. * cmp_le_1Performs the per-element comparison of two matrices, checking if elements from the first matrix are less than or equal to elements in the second. * cmp_ltPerforms the per-element comparison of two matrices, checking if elements from the first matrix are less than elements in the second. * cmp_lt_1Performs the per-element comparison of two matrices, checking if elements from the first matrix are less than elements in the second. * cmp_nePerforms the per-element comparison of two matrices, checking if elements from the first matrix are not equal to elements in the second. * cmp_ne_1Performs the per-element comparison of two matrices, checking if elements from the first matrix are not equal to elements in the second. * combineCreates a new package based on `lhs` and `rhs`. * concat_horApplies horizontal concatenation to given matrices. * concat_hor_1Applies horizontal concatenation to given matrices. * concat_vertApplies vertical concatenation to given matrices. * concat_vert_1Applies vertical concatenation to given matrices. * convert_toConverts a matrix to another data depth with optional scaling. * convert_to_defConverts a matrix to another data depth with optional scaling. * copyMakes a copy of the input image. Note that this copy may not be a real one (no actual data copied). Use this function to maintain graph contracts, e.g. when the graph’s input needs to be passed directly to the output, as in Streaming mode. * copy_1Makes a copy of the input frame. Note that this copy may not be a real one (no actual data copied). 
Use this function to maintain graph contracts, e.g. when the graph’s input needs to be passed directly to the output, as in Streaming mode. * cropCrops a 2D matrix. * descr_of * descr_of_1 * descr_of_2 * descr_of_3 * descr_of_4 * desyncStarts a desynchronized branch in the graph. * desync_1 * dilateDilates an image by using a specific structuring element. * dilate3x3Dilates an image by using a 3x3 rectangular structuring element. * dilate3x3_defDilates an image by using a 3x3 rectangular structuring element. * dilate_defDilates an image by using a specific structuring element. * divPerforms per-element division of two matrices. * div_cDivides a matrix by a scalar. * div_c_defDivides a matrix by a scalar. * div_defPerforms per-element division of two matrices. * div_gmat_gmat * div_gmat_gscalar * div_gscalar_gmat * div_rcDivides a scalar by a matrix. * div_rc_defDivides a scalar by a matrix. * empty_array_desc * empty_gopaque_desc * empty_scalar_desc * equalize_histEqualizes the histogram of a grayscale image. * equals_gmat_gmat * equals_gmat_gscalar * equals_gscalar_gmat * erodeErodes an image by using a specific structuring element. * erode3x3Erodes an image by using a 3x3 rectangular structuring element. * erode3x3_defErodes an image by using a 3x3 rectangular structuring element. * erode_defErodes an image by using a specific structuring element. * filter_2dConvolves an image with the kernel. * filter_2d_defConvolves an image with the kernel. * flipFlips a 2D matrix around vertical, horizontal, or both axes. * gaussian_blurBlurs an image using a Gaussian filter. * gaussian_blur_defBlurs an image using a Gaussian filter. * greater_than_gmat_gmat * greater_than_gmat_gscalar * greater_than_gscalar_gmat * greater_than_or_equal_gmat_gmat * greater_than_or_equal_gmat_gscalar * greater_than_or_equal_gscalar_gmat * i4202_bgrConverts an image from I420 color space to BGR color space. * i4202_rgbConverts an image from I420 color space to RGB color space. 
* in_rangeApplies a range-level threshold to each matrix element. * integralCalculates the integral of an image. * integral_defCalculates the integral of an image. * kernels * laplacianCalculates the Laplacian of an image. * laplacian_defCalculates the Laplacian of an image. * less_than_gmat_gmat * less_than_gmat_gscalar * less_than_gscalar_gmat * less_than_or_equal_gmat_gmat * less_than_or_equal_gmat_gscalar * less_than_or_equal_gscalar_gmat * lutPerforms a look-up table transform of a matrix. * luv2_bgrConverts an image from LUV color space to BGR color space. * maskApplies a mask to a matrix. * maxCalculates per-element maximum of two matrices. * meanCalculates an average (mean) of matrix elements. * median_blurBlurs an image using the median filter. * merge3Creates one 3-channel matrix out of 3 single-channel ones. * merge4Creates one 4-channel matrix out of 4 single-channel ones. * minCalculates per-element minimum of two matrices. * morphology_exPerforms advanced morphological transformations. * morphology_ex_defPerforms advanced morphological transformations. * mulCalculates the per-element scaled product of two matrices. * mul_cMultiplies matrix by scalar. * mul_c_1Multiplies matrix by scalar. * mul_c_1_def@overload * mul_c_2Multiplies matrix by scalar. * mul_c_2_def@overload * mul_c_defMultiplies matrix by scalar. * mul_defCalculates the per-element scaled product of two matrices. * mul_f32_gmat * mul_gmat_f32 * mul_gmat_gscalar * mul_gscalar_gmat * negate * norm_infCalculates the absolute infinite norm of a matrix. * norm_l1Calculates the absolute L1 norm of a matrix. * norm_l2Calculates the absolute L2 norm of a matrix. * normalizeNormalizes the norm or value range of an array. * normalize_defNormalizes the norm or value range of an array. * not_equals_gmat_gmat * not_equals_gmat_gscalar * not_equals_gscalar_gmat * nv12to_bg_rpConverts an image from NV12 (YUV420p) color space to BGR. The function converts an input image from NV12 color space to BGR. 
The conventional ranges for Y, U, and V channel values are 0 to 255. * nv12to_bgrConverts an image from NV12 (YUV420p) color space to BGR. The function converts an input image from NV12 color space to BGR. The conventional ranges for Y, U, and V channel values are 0 to 255. * nv12to_grayConverts an image from NV12 (YUV420p) color space to grayscale. The function converts an input image from NV12 color space to grayscale. The conventional ranges for Y, U, and V channel values are 0 to 255. * nv12to_rg_bpConverts an image from NV12 (YUV420p) color space to RGB. The function converts an input image from NV12 color space to RGB. The conventional ranges for Y, U, and V channel values are 0 to 255. * nv12to_rgbConverts an image from NV12 (YUV420p) color space to RGB. The function converts an input image from NV12 color space to RGB. The conventional ranges for Y, U, and V channel values are 0 to 255. * or_gmat_gmat * or_gmat_gscalar * or_gscalar_gmat * phaseCalculates the rotation angle of 2D vectors. * phase_defCalculates the rotation angle of 2D vectors. * polar_to_cartCalculates x and y coordinates of 2D vectors from their magnitude and angle. * polar_to_cart_defCalculates x and y coordinates of 2D vectors from their magnitude and angle. * remapApplies a generic geometrical transformation to an image. * remap_defApplies a generic geometrical transformation to an image. * resizeResizes an image. * resize_defResizes an image. * resize_pResizes a planar image. * resize_p_defResizes a planar image. * rgb2_grayConverts an image from RGB color space to grayscale. * rgb2_gray_1Converts an image from RGB color space to grayscale. * rgb2_hsvConverts an image from RGB color space to HSV. The function converts an input image from RGB color space to HSV. The conventional ranges for R, G, and B channel values are 0 to 255. * rgb2_i420Converts an image from RGB color space to I420 color space. * rgb2_labConverts an image from RGB color space to Lab color space. 
* rgb2_yuvConverts an image from RGB color space to YUV color space. * rgb2_yuv422Converts an image from RGB color space to YUV422. The function converts an input image from RGB color space to YUV422. The conventional ranges for R, G, and B channel values are 0 to 255. * selectSelects values from either the first or second input matrix by the given mask. The function sets each element of the output matrix to the value from the first input matrix if the corresponding mask value is 255, or to the value from the second input matrix if the mask value is 0. * sep_filterApplies a separable linear filter to a matrix (image). * sep_filter_defApplies a separable linear filter to a matrix (image). * sobelCalculates the first, second, third, or mixed image derivatives using an extended Sobel operator. * sobel_defCalculates the first, second, third, or mixed image derivatives using an extended Sobel operator. * sobel_xyCalculates the first, second, third, or mixed image derivatives using an extended Sobel operator. * sobel_xy_defCalculates the first, second, third, or mixed image derivatives using an extended Sobel operator. * split3Divides a 3-channel matrix into 3 single-channel matrices. * split4Divides a 4-channel matrix into 4 single-channel matrices. * sqrtCalculates a square root of array elements. * subCalculates the per-element difference between two matrices. * sub_cCalculates the per-element difference between a matrix and a given scalar. * sub_c_defCalculates the per-element difference between a matrix and a given scalar. * sub_defCalculates the per-element difference between two matrices. * sub_gmat_gmat * sub_gmat_gscalar * sub_gscalar_gmat * sub_rcCalculates the per-element difference between a given scalar and the matrix. * sub_rc_defCalculates the per-element difference between a given scalar and the matrix. * sumCalculates the sum of all matrix elements. * thresholdApplies a fixed-level threshold to each matrix element. 
* threshold_1Applies a fixed-level threshold to each matrix element. * transposeTransposes a matrix. * uvExtracts UV plane from media frame. * validate_input_arg * validate_input_args * warp_affineApplies an affine transformation to an image. * warp_affine_defApplies an affine transformation to an image. * warp_perspectiveApplies a perspective transformation to an image. * warp_perspective_defApplies a perspective transformation to an image. * xor_gmat_gmat * xor_gmat_gscalar * xor_gscalar_gmat * yExtracts Y plane from media frame. * yuv2_bgrConverts an image from YUV color space to BGR color space. * yuv2_rgbConverts an image from YUV color space to RGB. The function converts an input image from YUV color space to RGB. The conventional ranges for Y, U, and V channel values are 0 to 255. Type Aliases --- * GArgs * GCompileArgs * GKinds * GMat2 * GMat3 * GMat4 * GMatScalar * GRunArgs * GShapes * GTypesInfo * ImgProc_GMat2 * ImgProc_GMat3 * ImgProc_cont_method * ImgProc_retr_mode * RMat_Adapter * RMat_View_stepsT Module opencv::hdf === Hierarchical Data Format I/O routines --- This module provides storage routines for Hierarchical Data Format objects. Hierarchical Data Format version 5 --- ### Hierarchical Data Format version 5 In order to use it, the hdf5 library has to be installed, which means cmake should find it using `find_package(HDF5)` . Modules --- * prelude Structs --- * HDF5Hierarchical Data Format version 5 interface. Constants --- * HDF5_H5_GETCHUNKDIMSGet the chunk sizes of a dataset. see also: dsgetsize() * HDF5_H5_GETDIMSGet the dimension information of a dataset. see also: dsgetsize() * HDF5_H5_GETMAXDIMSGet the maximum dimension information of a dataset. 
see also: dsgetsize() * HDF5_H5_NONENo compression, see also: dscreate() * HDF5_H5_UNLIMITEDThe dimension size is unlimited, see also: dscreate() Traits --- * HDF5TraitMutable methods for crate::hdf::HDF5 * HDF5TraitConstConstant methods for crate::hdf::HDF5 Functions --- * openOpens or creates an HDF5 file Module opencv::hfs === Hierarchical Feature Selection for Efficient Image Segmentation --- The opencv hfs module contains an efficient algorithm to segment an image. This module is implemented based on the paper Hierarchical Feature Selection for Efficient Image Segmentation, ECCV 2016. The original project was developed by <NAME>(https://github.com/yun-liu/hfs). ### Introduction to Hierarchical Feature Selection This algorithm is executed in 3 stages: In the first stage, the algorithm uses the SLIC (simple linear iterative clustering) algorithm to obtain superpixels of the input image. In the second stage, the algorithm views each superpixel as a node in a graph. It calculates a feature vector for each edge of the graph, then computes a weight for each edge based on the feature vector and trained SVM parameters. After obtaining the edge weights, it applies the EGB (Efficient Graph-based Image Segmentation) algorithm to merge some nodes in the graph, obtaining a coarser segmentation. After these operations, a post-processing step merges regions that are smaller than a specific number of pixels into their nearby regions. In the third stage, the algorithm applies a similar mechanism to further merge the small regions obtained in the second stage into an even coarser segmentation. After these three stages, we can obtain the final segmentation of the image. 
For further details about the algorithm, please refer to the original paper: Hierarchical Feature Selection for Efficient Image Segmentation, ECCV 2016 Modules --- * prelude Structs --- * HfsSegment Traits --- * HfsSegmentTraitMutable methods for crate::hfs::HfsSegment * HfsSegmentTraitConstConstant methods for crate::hfs::HfsSegment Module opencv::highgui === High-level GUI --- While OpenCV was designed for use in full-scale applications and can be used within functionally rich UI frameworks (such as Qt*, WinForms*, or Cocoa*) or without any UI at all, sometimes it is necessary to try functionality quickly and visualize the results. This is what the HighGUI module has been designed for. It provides an easy interface to: * Create and manipulate windows that can display images and “remember” their content (no need to handle repaint events from the OS). * Add trackbars to the windows, and handle simple mouse events as well as keyboard commands. Flags related to creating and manipulating HighGUI windows and mouse events --- OpenGL support --- Qt New Functions --- ![image](https://docs.opencv.org/4.8.1/qtgui.png) This figure explains the new functionality implemented with the Qt* GUI. The new GUI provides a statusbar, a toolbar, and a control panel. The control panel can have trackbars and buttonbars attached to it. If you cannot see the control panel, press Ctrl+P or right-click any Qt window and select **Display properties window**. * To attach a trackbar, the window name parameter must be NULL. * To attach a buttonbar, a button must be created. If the last bar attached to the control panel is a buttonbar, the new button is added to the right of the last button. If the last bar attached to the control panel is a trackbar, or the control panel is empty, a new buttonbar is created. Then, a new button is attached to it. 
See below the example used to generate the figure: ``` int main(int argc, char *argv[]) { int value = 50; int value2 = 0; namedWindow("main1",WINDOW_NORMAL); namedWindow("main2",WINDOW_AUTOSIZE | WINDOW_GUI_NORMAL); createTrackbar( "track1", "main1", &value, 255, NULL); String nameb1 = "button1"; String nameb2 = "button2"; createButton(nameb1,callbackButton,&nameb1,QT_CHECKBOX,1); createButton(nameb2,callbackButton,NULL,QT_CHECKBOX,0); createTrackbar( "track2", NULL, &value2, 255, NULL); createButton("button5",callbackButton1,NULL,QT_RADIOBOX,0); createButton("button6",callbackButton2,NULL,QT_RADIOBOX,1); setMouseCallback( "main2",on_mouse,NULL ); Mat img1 = imread("files/flower.jpg"); VideoCapture video; video.open("files/hockey.avi"); Mat img2,img3; while( waitKey(33) != 27 ) { img1.convertTo(img2,-1,1,value); video >> img3; imshow("main1",img2); imshow("main2",img3); } destroyAllWindows(); return 0; } ``` WinRT support --- This figure explains new functionality implemented with WinRT GUI. The new GUI provides an Image control, and a slider panel. Slider panel holds trackbars attached to it. Sliders are attached below the image control. Every new slider is added below the previous one. 
See below the example used to generate the figure: ``` void sample_app::MainPage::ShowWindow() { static cv::String windowName("sample"); cv::winrt_initContainer(this->cvContainer); cv::namedWindow(windowName); // not required cv::Mat image = cv::imread("Assets/sample.jpg"); cv::Mat converted = cv::Mat(image.rows, image.cols, CV_8UC4); cv::cvtColor(image, converted, COLOR_BGR2BGRA); cv::imshow(windowName, converted); // this will create window if it hasn't been created before int state = 42; cv::TrackbarCallback callback = [](int pos, void* userdata) { if (pos == 0) { cv::destroyWindow(windowName); } }; cv::TrackbarCallback callbackTwin = [](int pos, void* userdata) { if (pos >= 70) { cv::destroyAllWindows(); } }; cv::createTrackbar("Sample trackbar", windowName, &state, 100, callback); cv::createTrackbar("Twin brother", windowName, &state, 100, callbackTwin); } ``` C API --- Modules --- * prelude Structs --- * QtFontQtFont available only for Qt. See cv::fontQt Enums --- * MouseEventFlagsMouse Event Flags see cv::MouseCallback * MouseEventTypesMouse Events see cv::MouseCallback * QtButtonTypesQt “button” type * QtFontStylesQt font style * QtFontWeightsQt font weight * WindowFlagsFlags for cv::namedWindow * WindowPropertyFlagsFlags for cv::setWindowProperty / cv::getWindowProperty Constants --- * EVENT_FLAG_ALTKEYindicates that ALT Key is pressed. * EVENT_FLAG_CTRLKEYindicates that CTRL Key is pressed. * EVENT_FLAG_LBUTTONindicates that the left mouse button is down. * EVENT_FLAG_MBUTTONindicates that the middle mouse button is down. * EVENT_FLAG_RBUTTONindicates that the right mouse button is down. * EVENT_FLAG_SHIFTKEYindicates that SHIFT Key is pressed. * EVENT_LBUTTONDBLCLKindicates that left mouse button is double clicked. * EVENT_LBUTTONDOWNindicates that the left mouse button is pressed. * EVENT_LBUTTONUPindicates that left mouse button is released. * EVENT_MBUTTONDBLCLKindicates that middle mouse button is double clicked. 
* EVENT_MBUTTONDOWNindicates that the middle mouse button is pressed. * EVENT_MBUTTONUPindicates that the middle mouse button is released. * EVENT_MOUSEHWHEELpositive and negative values mean right and left scrolling, respectively. * EVENT_MOUSEMOVEindicates that the mouse pointer has moved over the window. * EVENT_MOUSEWHEELpositive and negative values mean forward and backward scrolling, respectively. * EVENT_RBUTTONDBLCLKindicates that the right mouse button is double clicked. * EVENT_RBUTTONDOWNindicates that the right mouse button is pressed. * EVENT_RBUTTONUPindicates that the right mouse button is released. * QT_CHECKBOXCheckbox button. * QT_FONT_BLACKWeight of 87 * QT_FONT_BOLDWeight of 75 * QT_FONT_DEMIBOLDWeight of 63 * QT_FONT_LIGHTWeight of 25 * QT_FONT_NORMALWeight of 50 * QT_NEW_BUTTONBARButton should create a new buttonbar * QT_PUSH_BUTTONPush button. * QT_RADIOBOXRadiobox button. * QT_STYLE_ITALICItalic font. * QT_STYLE_NORMALNormal font. * QT_STYLE_OBLIQUEOblique font. * WINDOW_AUTOSIZEthe user cannot resize the window; the size is constrained by the image displayed. * WINDOW_FREERATIOthe image expands as much as it can (no ratio constraint). * WINDOW_FULLSCREENchange the window to fullscreen. * WINDOW_GUI_EXPANDEDstatus bar and tool bar * WINDOW_GUI_NORMALold-fashioned way * WINDOW_KEEPRATIOthe ratio of the image is respected. * WINDOW_NORMALthe user can resize the window (no constraint) / also used to switch a fullscreen window to a normal size. * WINDOW_OPENGLwindow with opengl support. * WND_PROP_ASPECT_RATIOwindow’s aspect ratio (can be set to WINDOW_FREERATIO or WINDOW_KEEPRATIO). * WND_PROP_AUTOSIZEautosize property (can be WINDOW_NORMAL or WINDOW_AUTOSIZE). * WND_PROP_FULLSCREENfullscreen property (can be WINDOW_NORMAL or WINDOW_FULLSCREEN). * WND_PROP_OPENGLopengl support. 
* WND_PROP_TOPMOSTproperty to toggle normal window being topmost or not * WND_PROP_VISIBLEchecks whether the window exists and is visible * WND_PROP_VSYNCenable or disable VSYNC (in OpenGL mode) Traits --- * QtFontTraitMutable methods for crate::highgui::QtFont * QtFontTraitConstConstant methods for crate::highgui::QtFont Functions --- * add_textDraws a text on the image. * add_text_with_fontDraws a text on the image. * add_text_with_font_defDraws a text on the image. * create_buttonAttaches a button to the control panel. * create_button_defAttaches a button to the control panel. * create_trackbarCreates a trackbar and attaches it to the specified window. * destroy_all_windowsDestroys all of the HighGUI windows. * destroy_windowDestroys the specified window. * display_overlayDisplays a text on a window image as an overlay for a specified duration. * display_overlay_defDisplays a text on a window image as an overlay for a specified duration. * display_status_barDisplays a text on the window statusbar during the specified period of time. * display_status_bar_defDisplays a text on the window statusbar during the specified period of time. * font_qtCreates the font to draw a text on an image. * font_qt_defCreates the font to draw a text on an image. * get_mouse_wheel_deltaGets the mouse-wheel motion delta, when handling mouse-wheel events cv::EVENT_MOUSEWHEEL and cv::EVENT_MOUSEHWHEEL. * get_trackbar_posReturns the trackbar position. * get_window_image_rectProvides rectangle of image in the window. * get_window_propertyProvides parameters of a window. * imshowDisplays an image in the specified window. * load_window_parametersLoads parameters of the specified window. * move_windowMoves the window to the specified position * named_windowCreates a window. * named_window_defCreates a window. * poll_keyPolls for a pressed key. 
* resize_windowResizes the window to the specified size * resize_window_sizeResizes the window to the specified size * save_window_parametersSaves parameters of the specified window. * select_ro_isAllows users to select multiple ROIs on the given image. * select_ro_is_defAllows users to select multiple ROIs on the given image. * select_roiAllows users to select a ROI on the given image. * select_roi_1Allows users to select a ROI on the given image. * select_roi_1_def@overload * select_roi_defAllows users to select a ROI on the given image. * set_mouse_callback@example samples/cpp/create_mask.cpp This program demonstrates using mouse events and how to make and use a mask image (black and white). * set_opengl_contextSets the specified window as the current OpenGL context. * set_opengl_draw_callbackSets a callback function to be called to draw on top of the displayed image. * set_trackbar_maxSets the trackbar maximum position. * set_trackbar_minSets the trackbar minimum position. * set_trackbar_posSets the trackbar position. * set_window_propertyChanges parameters of a window dynamically. * set_window_titleUpdates the window title * start_loop * start_window_thread * stop_loop * update_windowForces the window to redraw its context and call the draw callback (see cv::setOpenGlDrawCallback). * wait_keyWaits for a pressed key. * wait_key_defWaits for a pressed key. * wait_key_exSimilar to #waitKey, but returns the full key code. * wait_key_ex_defSimilar to #waitKey, but returns the full key code. Type Aliases --- * ButtonCallbackCallback function for a button created by cv::createButton * MouseCallbackCallback function for mouse events. see cv::setMouseCallback * OpenGlDrawCallbackCallback function defined to be called every frame. See cv::setOpenGlDrawCallback * TrackbarCallbackCallback function for Trackbar see cv::createTrackbar Module opencv::img_hash === The module brings implementations of different image hashing algorithms. 
--- Provides algorithms to extract the hash of images and a fast way to figure out the most similar images in a huge data set. The namespace for all functions is cv::img_hash. #### Supported Algorithms * Average hash (also called Difference hash) * PHash (also called Perceptual hash) * Marr Hildreth Hash * Radial Variance Hash * Block Mean Hash (modes 0 and 1) * Color Moment Hash (this is the only hash algorithm resistant to rotation attacks (-90~90 degrees)) You can study more about image hashing from the following paper and websites: * “Implementation and benchmarking of perceptual image hash functions” zauner2010implementation * “Looks Like It” lookslikeit #### Code Example @include samples/hash_samples.cpp #### Performance under different attacks ![Performance chart](https://docs.opencv.org/4.8.1/attack_performance.JPG) #### Speed comparison with PHash library (100 images from ukbench) ![Hash Computation chart](https://docs.opencv.org/4.8.1/hash_computation_chart.JPG) ![Hash comparison chart](https://docs.opencv.org/4.8.1/hash_comparison_chart.JPG) As you can see, the hash computation speed of the img_hash module outperforms the PHash library by a large margin. PS: I do not list the comparison for Average hash, PHash, and Color Moment hash, because I cannot find them in PHash. #### Motivation This module collects useful image hash algorithms into opencv, so we do not need to rewrite them ourselves again and again or rely on another 3rd-party library (e.g. the PHash library). BOVW or correlation matching are good and robust, but they are very slow compared with image hashes; if you need to deal with a large-scale CBIR (content-based image retrieval) problem, an image hash is a more reasonable solution. #### More info You can learn more about the img_hash module from the following links, which show you how to find similar images in the ukbench dataset and provide a thorough benchmark of different attacks (contrast, blur, noise (Gaussian, salt-and-pepper), jpeg compression, watermark, resize). 
* Introduction to image hash module of opencv * Speed up image hashing of opencv(img_hash) and introduce color moment hash #### Contributors <NAME>, <EMAIL> Modules --- * prelude Structs --- * AverageHashComputes average hash value of the input image * BlockMeanHashImage hash based on block mean. * ColorMomentHashImage hash based on color moments. * ImgHashBaseThe base class for image hash algorithms * MarrHildrethHashMarr-Hildreth Operator Based Hash, slowest but more discriminative. * PHashpHash * RadialVarianceHashImage hash based on Radon transform. Enums --- * BlockMeanHashMode Constants --- * BLOCK_MEAN_HASH_MODE_0use fewer block and generate 16*16/8 uchar hash value * BLOCK_MEAN_HASH_MODE_1use block blocks(step sizes/2), generate 31*31/8 + 1 uchar hash value Traits --- * AverageHashTraitMutable methods for crate::img_hash::AverageHash * AverageHashTraitConstConstant methods for crate::img_hash::AverageHash * BlockMeanHashTraitMutable methods for crate::img_hash::BlockMeanHash * BlockMeanHashTraitConstConstant methods for crate::img_hash::BlockMeanHash * ColorMomentHashTraitMutable methods for crate::img_hash::ColorMomentHash * ColorMomentHashTraitConstConstant methods for crate::img_hash::ColorMomentHash * ImgHashBaseTraitMutable methods for crate::img_hash::ImgHashBase * ImgHashBaseTraitConstConstant methods for crate::img_hash::ImgHashBase * MarrHildrethHashTraitMutable methods for crate::img_hash::MarrHildrethHash * MarrHildrethHashTraitConstConstant methods for crate::img_hash::MarrHildrethHash * PHashTraitMutable methods for crate::img_hash::PHash * PHashTraitConstConstant methods for crate::img_hash::PHash * RadialVarianceHashTraitMutable methods for crate::img_hash::RadialVarianceHash * RadialVarianceHashTraitConstConstant methods for crate::img_hash::RadialVarianceHash Functions --- * average_hashCalculates img_hash::AverageHash in one call * block_mean_hashComputes block mean hash of the input image * block_mean_hash_defComputes block mean hash of 
the input image * color_moment_hashComputes color moment hash of the input, the algorithm comes from the paper “Perceptual Hashing for Color Images Using Invariant Moments” * marr_hildreth_hashComputes Marr-Hildreth hash of the input image * marr_hildreth_hash_defComputes Marr-Hildreth hash of the input image * p_hashComputes pHash value of the input image * radial_variance_hashComputes radial variance hash of the input image * radial_variance_hash_defComputes radial variance hash of the input image Module opencv::imgcodecs === Image file reading and writing --- C API --- Flags used for image file reading and writing --- iOS glue --- MacOS(OSX) glue --- Modules --- * prelude Structs --- * ImageCollectionTo read Multi Page images on demand * ImageCollection_iterator Enums --- * ImreadModesImread flags * ImwriteEXRCompressionFlags * ImwriteEXRTypeFlags * ImwriteFlagsImwrite flags * ImwriteHDRCompressionFlagsImwrite HDR specific values for IMWRITE_HDR_COMPRESSION parameter key * ImwriteJPEGSamplingFactorParams * ImwritePAMFlagsImwrite PAM specific tupletype flags used to define the ‘TUPLETYPE’ field of a PAM file. * ImwritePNGFlagsImwrite PNG specific flags used to tune the compression algorithm. These flags modify the way PNG images are compressed and are passed to the underlying zlib processing stage. Constants --- * IMREAD_ANYCOLORIf set, the image is read in any possible color format. * IMREAD_ANYDEPTHIf set, return 16-bit/32-bit image when the input has the corresponding depth, otherwise convert it to 8-bit. * IMREAD_COLORIf set, always convert image to the 3 channel BGR color image. * IMREAD_GRAYSCALEIf set, always convert image to the single channel grayscale image (codec internal conversion). * IMREAD_IGNORE_ORIENTATIONIf set, do not rotate the image according to EXIF’s orientation flag. * IMREAD_LOAD_GDALIf set, use the gdal driver for loading the image.
* IMREAD_REDUCED_COLOR_2If set, always convert image to the 3 channel BGR color image and the image size reduced 1/2. * IMREAD_REDUCED_COLOR_4If set, always convert image to the 3 channel BGR color image and the image size reduced 1/4. * IMREAD_REDUCED_COLOR_8If set, always convert image to the 3 channel BGR color image and the image size reduced 1/8. * IMREAD_REDUCED_GRAYSCALE_2If set, always convert image to the single channel grayscale image and the image size reduced 1/2. * IMREAD_REDUCED_GRAYSCALE_4If set, always convert image to the single channel grayscale image and the image size reduced 1/4. * IMREAD_REDUCED_GRAYSCALE_8If set, always convert image to the single channel grayscale image and the image size reduced 1/8. * IMREAD_UNCHANGEDIf set, return the loaded image as is (with alpha channel, otherwise it gets cropped). Ignore EXIF orientation. * IMWRITE_AVIF_DEPTHFor AVIF, it can be 8, 10 or 12. If >8, it is stored/read as CV_32F. Default is 8. * IMWRITE_AVIF_QUALITYFor AVIF, it can be a quality between 0 and 100 (the higher the better). Default is 95. * IMWRITE_AVIF_SPEEDFor AVIF, it is between 0 (slowest) and (fastest). Default is 9. * IMWRITE_EXR_COMPRESSIONoverride EXR compression type (ZIP_COMPRESSION = 3 is default) * IMWRITE_EXR_COMPRESSION_B44lossy 4-by-4 pixel block compression, fixed compression rate * IMWRITE_EXR_COMPRESSION_B44Alossy 4-by-4 pixel block compression, flat fields are compressed more * IMWRITE_EXR_COMPRESSION_DWAAlossy DCT based compression, in blocks of 32 scanlines. More efficient for partial buffer access. Supported since OpenEXR 2.2.0. * IMWRITE_EXR_COMPRESSION_DWABlossy DCT based compression, in blocks of 256 scanlines. More efficient space wise and faster to decode full frames than DWAA_COMPRESSION. Supported since OpenEXR 2.2.0. 
* IMWRITE_EXR_COMPRESSION_NOno compression * IMWRITE_EXR_COMPRESSION_PIZpiz-based wavelet compression * IMWRITE_EXR_COMPRESSION_PXR24lossy 24-bit float compression * IMWRITE_EXR_COMPRESSION_RLErun length encoding * IMWRITE_EXR_COMPRESSION_ZIPzlib compression, in blocks of 16 scan lines * IMWRITE_EXR_COMPRESSION_ZIPSzlib compression, one scan line at a time * IMWRITE_EXR_DWA_COMPRESSION_LEVELoverride EXR DWA compression level (45 is default) * IMWRITE_EXR_TYPEoverride EXR storage type (FLOAT (FP32) is default) * IMWRITE_EXR_TYPE_FLOATstore as FP32 (default) * IMWRITE_EXR_TYPE_HALFstore as HALF (FP16) * IMWRITE_HDR_COMPRESSIONspecify HDR compression * IMWRITE_HDR_COMPRESSION_NONE * IMWRITE_HDR_COMPRESSION_RLE * IMWRITE_JPEG2000_COMPRESSION_X1000For JPEG2000, use to specify the target compression rate (multiplied by 1000). The value can be from 0 to 1000. Default is 1000. * IMWRITE_JPEG_CHROMA_QUALITYSeparate chroma quality level, 0 - 100, default is -1 - don’t use. * IMWRITE_JPEG_LUMA_QUALITYSeparate luma quality level, 0 - 100, default is -1 - don’t use. * IMWRITE_JPEG_OPTIMIZEEnable JPEG features, 0 or 1, default is False. * IMWRITE_JPEG_PROGRESSIVEEnable JPEG features, 0 or 1, default is False. * IMWRITE_JPEG_QUALITYFor JPEG, it can be a quality from 0 to 100 (the higher the better). Default value is 95. * IMWRITE_JPEG_RST_INTERVALJPEG restart interval, 0 - 65535, default is 0 - no restart. * IMWRITE_JPEG_SAMPLING_FACTORFor JPEG, set sampling factor. See cv::ImwriteJPEGSamplingFactorParams.
* IMWRITE_JPEG_SAMPLING_FACTOR_4114x1,1x1,1x1 * IMWRITE_JPEG_SAMPLING_FACTOR_4202x2,1x1,1x1(Default) * IMWRITE_JPEG_SAMPLING_FACTOR_4222x1,1x1,1x1 * IMWRITE_JPEG_SAMPLING_FACTOR_4401x2,1x1,1x1 * IMWRITE_JPEG_SAMPLING_FACTOR_4441x1,1x1,1x1(No subsampling) * IMWRITE_PAM_FORMAT_BLACKANDWHITE * IMWRITE_PAM_FORMAT_GRAYSCALE * IMWRITE_PAM_FORMAT_GRAYSCALE_ALPHA * IMWRITE_PAM_FORMAT_NULL * IMWRITE_PAM_FORMAT_RGB * IMWRITE_PAM_FORMAT_RGB_ALPHA * IMWRITE_PAM_TUPLETYPEFor PAM, sets the TUPLETYPE field to the corresponding string value that is defined for the format * IMWRITE_PNG_BILEVELBinary level PNG, 0 or 1, default is 0. * IMWRITE_PNG_COMPRESSIONFor PNG, it can be the compression level from 0 to 9. A higher value means a smaller size and longer compression time. If specified, strategy is changed to IMWRITE_PNG_STRATEGY_DEFAULT (Z_DEFAULT_STRATEGY). Default value is 1 (best speed setting). * IMWRITE_PNG_STRATEGYOne of cv::ImwritePNGFlags, default is IMWRITE_PNG_STRATEGY_RLE. * IMWRITE_PNG_STRATEGY_DEFAULTUse this value for normal data. * IMWRITE_PNG_STRATEGY_FILTEREDUse this value for data produced by a filter (or predictor). Filtered data consists mostly of small values with a somewhat random distribution. In this case, the compression algorithm is tuned to compress them better. * IMWRITE_PNG_STRATEGY_FIXEDUsing this value prevents the use of dynamic Huffman codes, allowing for a simpler decoder for special applications. * IMWRITE_PNG_STRATEGY_HUFFMAN_ONLYUse this value to force Huffman encoding only (no string match). * IMWRITE_PNG_STRATEGY_RLEUse this value to limit match distances to one (run-length encoding). * IMWRITE_PXM_BINARYFor PPM, PGM, or PBM, it can be a binary format flag, 0 or 1. Default value is 1. * IMWRITE_TIFF_COMPRESSIONFor TIFF, use to specify the image compression scheme. See libtiff for integer constants corresponding to compression formats. Note, for images whose depth is CV_32F, only libtiff’s SGILOG compression scheme is used.
For other supported depths, the compression scheme can be specified by this flag; LZW compression is the default. * IMWRITE_TIFF_RESUNITFor TIFF, use to specify which DPI resolution unit to set; see libtiff documentation for valid values * IMWRITE_TIFF_XDPIFor TIFF, use to specify the X direction DPI * IMWRITE_TIFF_YDPIFor TIFF, use to specify the Y direction DPI * IMWRITE_WEBP_QUALITYFor WEBP, it can be a quality from 1 to 100 (the higher the better). By default (without any parameter) and for quality above 100 the lossless compression is used. Traits --- * ImageCollectionTraitMutable methods for crate::imgcodecs::ImageCollection * ImageCollectionTraitConstConstant methods for crate::imgcodecs::ImageCollection * ImageCollection_iteratorTraitMutable methods for crate::imgcodecs::ImageCollection_iterator * ImageCollection_iteratorTraitConstConstant methods for crate::imgcodecs::ImageCollection_iterator Functions --- * have_image_readerReturns true if the specified image can be decoded by OpenCV * have_image_writerReturns true if an image with the specified filename can be encoded by OpenCV * imcountReturns the number of images inside the given file * imcount_defReturns the number of images inside the given file * imdecodeReads an image from a buffer in memory. * imdecode_toReads an image from a buffer in memory. * imdecodemultiReads a multi-page image from a buffer in memory. * imencodeEncodes an image into a memory buffer. * imencode_defEncodes an image into a memory buffer. * imreadLoads an image from a file. * imread_defLoads an image from a file. * imreadmultiLoads a multi-page image from a file. * imreadmulti_defLoads a multi-page image from a file. * imreadmulti_rangeLoads a range of images of a multi-page image from a file. * imreadmulti_range_defLoads a range of images of a multi-page image from a file. * imwriteSaves an image to a specified file. * imwrite_defSaves an image to a specified file.
* imwritemultiThis is an overloaded member function, provided for convenience. It differs from the above function only in what argument(s) it accepts. multi-image overload for bindings * imwritemulti_def@overload multi-image overload for bindings Module opencv::imgproc === Image Processing --- This module includes image-processing functions. Image Filtering --- Functions and classes described in this section are used to perform various linear or non-linear filtering operations on 2D images (represented as Mat’s). It means that for each pixel location ![inline formula](https://latex.codecogs.com/png.latex?%28x%2Cy%29) in the source image (normally, rectangular), its neighborhood is considered and used to compute the response. In case of a linear filter, it is a weighted sum of pixel values. In case of morphological operations, it is the minimum or maximum values, and so on. The computed response is stored in the destination image at the same location ![inline formula](https://latex.codecogs.com/png.latex?%28x%2Cy%29). It means that the output image will be of the same size as the input image. Normally, the functions support multi-channel arrays, in which case every channel is processed independently. Therefore, the output image will also have the same number of channels as the input one. Another common feature of the functions and classes described in this section is that, unlike simple arithmetic functions, they need to extrapolate values of some non-existing pixels. For example, if you want to smooth an image using a Gaussian ![inline formula](https://latex.codecogs.com/png.latex?3%20%5Ctimes%203) filter, then, when processing the left-most pixels in each row, you need pixels to the left of them, that is, outside of the image. You can let these pixels be the same as the left-most image pixels (“replicated border” extrapolation method), or assume that all the non-existing pixels are zeros (“constant border” extrapolation method), and so on. 
OpenCV enables you to specify the extrapolation method. For details, see [border_types] #### Depth combinations | Input depth (src.depth()) | Output depth (ddepth) | | --- | --- | | CV_8U | -1/CV_16S/CV_32F/CV_64F | | CV_16U/CV_16S | -1/CV_32F/CV_64F | | CV_32F | -1/CV_32F | | CV_64F | -1/CV_64F | Note: when ddepth=-1, the output image will have the same depth as the source. Note: if you need double floating-point accuracy and are using single floating-point input data (CV_32F input and CV_64F output depth combination), you can use [Mat].convertTo to convert the input data to the desired precision. Geometric Image Transformations --- The functions in this section perform various geometrical transformations of 2D images. They do not change the image content but deform the pixel grid and map this deformed grid to the destination image. In fact, to avoid sampling artifacts, the mapping is done in the reverse order, from destination to the source. That is, for each pixel ![inline formula](https://latex.codecogs.com/png.latex?%28x%2C%20y%29) of the destination image, the functions compute coordinates of the corresponding “donor” pixel in the source image and copy the pixel value: ![block formula](https://latex.codecogs.com/png.latex?%5Ctexttt%7Bdst%7D%20%28x%2Cy%29%3D%20%5Ctexttt%7Bsrc%7D%20%28f%5Fx%28x%2Cy%29%2C%20f%5Fy%28x%2Cy%29%29) When you specify the forward mapping ![inline formula](https://latex.codecogs.com/png.latex?%5Cleft%3Cg%5Fx%2C%20g%5Fy%5Cright%3E%3A%20%5Ctexttt%7Bsrc%7D%20%5Crightarrow%0A%5Ctexttt%7Bdst%7D), the OpenCV functions first compute the corresponding inverse mapping ![inline formula](https://latex.codecogs.com/png.latex?%5Cleft%3Cf%5Fx%2C%20f%5Fy%5Cright%3E%3A%20%5Ctexttt%7Bdst%7D%20%5Crightarrow%20%5Ctexttt%7Bsrc%7D) and then use the above formula.
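The destination-to-source scheme above can be sketched in a few lines of pure Rust: iterate over destination pixels, evaluate the inverse mapping, round to the nearest source pixel, and fall back to a constant border value outside the image. This is a toy stand-in for the idea behind remap with nearest-neighbor interpolation, not OpenCV's implementation; the function name and signature are invented.

```rust
/// Toy remap: for each destination pixel (x, y), fetch the source pixel at
/// (round(fx(x, y)), round(fy(x, y))). Pixels that map outside the source
/// get `border` (constant-border extrapolation).
fn remap_nearest(
    src: &[Vec<i32>],
    fx: impl Fn(f64, f64) -> f64,
    fy: impl Fn(f64, f64) -> f64,
    border: i32,
) -> Vec<Vec<i32>> {
    let (h, w) = (src.len(), src[0].len());
    let mut dst = vec![vec![border; w]; h];
    for y in 0..h {
        for x in 0..w {
            let sx = fx(x as f64, y as f64).round() as isize;
            let sy = fy(x as f64, y as f64).round() as isize;
            if sx >= 0 && (sx as usize) < w && sy >= 0 && (sy as usize) < h {
                dst[y][x] = src[sy as usize][sx as usize];
            }
        }
    }
    dst
}

fn main() {
    // Shift the image right by 2 pixels: dst(x, y) = src(x - 2, y).
    let src = vec![vec![1, 2, 3, 4]];
    let dst = remap_nearest(&src, |x, _y| x - 2.0, |_x, y| y, 0);
    assert_eq!(dst, vec![vec![0, 0, 1, 2]]); // left edge filled by the border
}
```

Iterating over destination pixels guarantees that every output pixel receives exactly one value, which is precisely why OpenCV computes the inverse mapping first instead of scattering source pixels forward.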
The actual implementations of the geometrical transformations, from the most generic remap to the simplest and fastest resize, need to solve two main problems with the above formula: * Extrapolation of non-existing pixels. Similarly to the filtering functions described in the previous section, for some ![inline formula](https://latex.codecogs.com/png.latex?%28x%2Cy%29), either one of ![inline formula](https://latex.codecogs.com/png.latex?f%5Fx%28x%2Cy%29), or ![inline formula](https://latex.codecogs.com/png.latex?f%5Fy%28x%2Cy%29), or both of them may fall outside of the image. In this case, an extrapolation method needs to be used. OpenCV provides the same selection of extrapolation methods as in the filtering functions. In addition, it provides the method #BORDER_TRANSPARENT. This means that the corresponding pixels in the destination image will not be modified at all. * Interpolation of pixel values. Usually ![inline formula](https://latex.codecogs.com/png.latex?f%5Fx%28x%2Cy%29) and ![inline formula](https://latex.codecogs.com/png.latex?f%5Fy%28x%2Cy%29) are floating-point numbers. This means that ![inline formula](https://latex.codecogs.com/png.latex?%5Cleft%3Cf%5Fx%2C%20f%5Fy%5Cright%3E) can be either an affine or perspective transformation, or radial lens distortion correction, and so on. So, a pixel value at fractional coordinates needs to be retrieved. In the simplest case, the coordinates can be just rounded to the nearest integer coordinates and the corresponding pixel can be used. This is called a nearest-neighbor interpolation.
However, a better result can be achieved by using more sophisticated interpolation methods, where a polynomial function is fit into some neighborhood of the computed pixel ![inline formula](https://latex.codecogs.com/png.latex?%28f%5Fx%28x%2Cy%29%2C%0Af%5Fy%28x%2Cy%29%29), and then the value of the polynomial at ![inline formula](https://latex.codecogs.com/png.latex?%28f%5Fx%28x%2Cy%29%2C%20f%5Fy%28x%2Cy%29%29) is taken as the interpolated pixel value. In OpenCV, you can choose between several interpolation methods. See resize for details. Note: The geometrical transformations do not work with `CV_8S` or `CV_32S` images. Miscellaneous Image Transformations --- Drawing Functions --- Drawing functions work with matrices/images of arbitrary depth. The boundaries of the shapes can be rendered with antialiasing (implemented only for 8-bit images for now). All the functions include the parameter color that uses an RGB value (that may be constructed with the Scalar constructor) for color images and brightness for grayscale images. For color images, the channel ordering is normally *Blue, Green, Red*. This is what imshow, imread, and imwrite expect. So, if you form a color using the Scalar constructor, it should look like: ![block formula](https://latex.codecogs.com/png.latex?%5Ctexttt%7BScalar%7D%20%28blue%20%5C%5F%20component%2C%20green%20%5C%5F%20component%2C%20red%20%5C%5F%20component%5B%2C%20alpha%20%5C%5F%20component%5D%29) If you are using your own image rendering and I/O functions, you can use any channel ordering. The drawing functions process each channel independently and do not depend on the channel order or even on the used color space. The whole image can be converted from BGR to RGB or to a different color space using cvtColor. If a drawn figure is partially or completely outside the image, the drawing functions clip it. Also, many drawing functions can handle pixel coordinates specified with sub-pixel accuracy.
This means that the coordinates can be passed as fixed-point numbers encoded as integers. The number of fractional bits is specified by the shift parameter and the real point coordinates are calculated as ![inline formula](https://latex.codecogs.com/png.latex?%5Ctexttt%7BPoint%7D%28x%2Cy%29%5Crightarrow%5Ctexttt%7BPoint2f%7D%28x%2A2%5E%7B%2Dshift%7D%2Cy%2A2%5E%7B%2Dshift%7D%29). This feature is especially effective when rendering antialiased shapes. Note: The functions do not support alpha-transparency when the target image is 4-channel. In this case, the color[3] is simply copied to the repainted pixels. Thus, if you want to paint semi-transparent shapes, you can paint them in a separate buffer and then blend it with the main image. Color Space Conversions --- ColorMaps in OpenCV --- Human perception isn’t built for observing fine changes in grayscale images. Human eyes are more sensitive to observing changes between colors, so you often need to recolor your grayscale images to get a clue about them. OpenCV now comes with various colormaps to enhance the visualization in your computer vision application. In OpenCV you only need applyColorMap to apply a colormap on a given image. The following sample code reads the path to an image from the command line, applies a Jet colormap on it and shows the result: @include snippets/imgproc_applyColorMap.cpp ### See also [colormap_types] Planar Subdivision --- The Subdiv2D class described in this section is used to perform various planar subdivisions on a set of 2D points (represented as vector of Point2f). OpenCV subdivides a plane into triangles using Delaunay’s algorithm, which corresponds to the dual graph of the Voronoi diagram. In the figure below, the Delaunay triangulation is marked with black lines and the Voronoi diagram with red lines.
![Delaunay triangulation (black) and Voronoi (red)](https://docs.opencv.org/4.8.1/delaunay_voronoi.png) The subdivisions can be used for the 3D piece-wise transformation of a plane, morphing, fast location of points on the plane, building special graphs (such as NNG, RNG), and so forth. Histograms --- Structural Analysis and Shape Descriptors --- Motion Analysis and Object Tracking --- Feature Detection --- Object Detection --- Image Segmentation --- C API --- Hardware Acceleration Layer --- Modules --- * prelude Structs --- * CLAHEBase class for Contrast Limited Adaptive Histogram Equalization. * GeneralizedHoughfinds arbitrary template in the grayscale image using Generalized Hough Transform * GeneralizedHoughBallardfinds arbitrary template in the grayscale image using Generalized Hough Transform * GeneralizedHoughGuilfinds arbitrary template in the grayscale image using Generalized Hough Transform * IntelligentScissorsMBIntelligent Scissors image segmentation * LineIteratorClass for iterating over all pixels on a raster line segment.
* LineSegmentDetectorLine segment detector class * Subdiv2D Enums --- * AdaptiveThresholdTypesadaptive threshold algorithm * ColorConversionCodesthe color conversion codes * ColormapTypesGNU Octave/MATLAB equivalent colormaps * ConnectedComponentsAlgorithmsTypesconnected components algorithm * ConnectedComponentsTypesconnected components statistics * ContourApproximationModesthe contour approximation algorithm * DistanceTransformLabelTypesdistanceTransform algorithm flags * DistanceTransformMasksMask size for distance transform * DistanceTypesDistance types for Distance Transform and M-estimators * FloodFillFlagsfloodfill algorithm flags * GrabCutClassesclass of the pixel in GrabCut algorithm * GrabCutModesGrabCut algorithm flags * HersheyFontsOnly a subset of Hershey fonts https://en.wikipedia.org/wiki/Hershey_fonts are supported * HistCompMethodsHistogram comparison methods * HoughModesVariants of a Hough transform * InterpolationFlagsinterpolation algorithm * InterpolationMasks * LineSegmentDetectorModesVariants of Line Segment Detector * LineTypestypes of line * MarkerTypesPossible set of marker types used for the cv::drawMarker function * MorphShapesshape of the structuring element * MorphTypestype of morphological operation * RectanglesIntersectTypestypes of intersection between rectangles * RetrievalModesmode of the contour retrieval algorithm * ShapeMatchModesShape matching methods * SpecialFilter * TemplateMatchModestype of the template matching operation * ThresholdTypestype of the threshold operation * WarpPolarModeSpecify the polar mapping mode Constants --- * ADAPTIVE_THRESH_GAUSSIAN_Cthe threshold value T(x,y) is a weighted sum (cross-correlation with a Gaussian window) of the blockSize×blockSize neighborhood of (x,y), minus C. The default sigma (standard deviation) is used for the specified blockSize.
See #getGaussianKernel * ADAPTIVE_THRESH_MEAN_Cthe threshold value T(x,y) is the mean of the blockSize×blockSize neighborhood of (x,y), minus C * CCL_BBDTSame as CCL_GRANA. It is preferable to use the flag with the name of the algorithm (CCL_BBDT) rather than the one with the name of the first author (CCL_GRANA). * CCL_BOLELLISpaghetti Bolelli2019 algorithm for 8-way connectivity, Spaghetti4C Bolelli2021 algorithm for 4-way connectivity. The parallel implementation described in Bolelli2017 is available for both Spaghetti and Spaghetti4C. * CCL_DEFAULTSpaghetti Bolelli2019 algorithm for 8-way connectivity, Spaghetti4C Bolelli2021 algorithm for 4-way connectivity. * CCL_GRANABBDT Grana2010 algorithm for 8-way connectivity, SAUF algorithm for 4-way connectivity. The parallel implementation described in Bolelli2017 is available for both BBDT and SAUF. * CCL_SAUFSame as CCL_WU. It is preferable to use the flag with the name of the algorithm (CCL_SAUF) rather than the one with the name of the first author (CCL_WU). * CCL_SPAGHETTISame as CCL_BOLELLI. It is preferable to use the flag with the name of the algorithm (CCL_SPAGHETTI) rather than the one with the name of the first author (CCL_BOLELLI). * CCL_WUSAUF Wu2009 algorithm for 8-way connectivity, SAUF algorithm for 4-way connectivity. The parallel implementation described in Bolelli2017 is available for SAUF. * CC_STAT_AREAThe total area (in pixels) of the connected component * CC_STAT_HEIGHTThe vertical size of the bounding box * CC_STAT_LEFTThe leftmost (x) coordinate which is the inclusive start of the bounding box in the horizontal direction. * CC_STAT_MAXMax enumeration value. Used internally only for memory allocation * CC_STAT_TOPThe topmost (y) coordinate which is the inclusive start of the bounding box in the vertical direction. * CC_STAT_WIDTHThe horizontal size of the bounding box * CHAIN_APPROX_NONEstores absolutely all the contour points.
That is, any 2 subsequent points (x1,y1) and (x2,y2) of the contour will be either horizontal, vertical or diagonal neighbors, that is, max(abs(x1-x2),abs(y2-y1))==1. * CHAIN_APPROX_SIMPLEcompresses horizontal, vertical, and diagonal segments and leaves only their end points. For example, an up-right rectangular contour is encoded with 4 points. * CHAIN_APPROX_TC89_KCOSapplies one of the flavors of the Teh-Chin chain approximation algorithm TehChin89 * CHAIN_APPROX_TC89_L1applies one of the flavors of the Teh-Chin chain approximation algorithm TehChin89 * COLORMAP_AUTUMNautumn * COLORMAP_BONEbone * COLORMAP_CIVIDIScividis * COLORMAP_COOLcool * COLORMAP_DEEPGREENdeepgreen * COLORMAP_HOThot * COLORMAP_HSVHSV * COLORMAP_INFERNOinferno * COLORMAP_JETjet * COLORMAP_MAGMAmagma * COLORMAP_OCEANocean * COLORMAP_PARULAparula * COLORMAP_PINKpink * COLORMAP_PLASMAplasma * COLORMAP_RAINBOWrainbow * COLORMAP_SPRINGspring * COLORMAP_SUMMERsummer * COLORMAP_TURBOturbo * COLORMAP_TWILIGHTtwilight * COLORMAP_TWILIGHT_SHIFTEDtwilight shifted * COLORMAP_VIRIDISviridis * COLORMAP_WINTERwinter * COLOR_BGR2BGR555convert between RGB/BGR and BGR555 (16-bit images) * COLOR_BGR2BGR565convert between RGB/BGR and BGR565 (16-bit images) * COLOR_BGR2BGRAadd alpha channel to RGB or BGR image * COLOR_BGR2GRAYconvert between RGB/BGR and grayscale, [color_convert_rgb_gray] “color conversions” * COLOR_BGR2HLSconvert RGB/BGR to HLS (hue lightness saturation) with H range 0..180 if 8 bit image, [color_convert_rgb_hls] “color conversions” * COLOR_BGR2HLS_FULLconvert RGB/BGR to HLS (hue lightness saturation) with H range 0..255 if 8 bit image, [color_convert_rgb_hls] “color conversions” * COLOR_BGR2HSVconvert RGB/BGR to HSV (hue saturation value) with H range 0..180 if 8 bit image, [color_convert_rgb_hsv] “color conversions” * COLOR_BGR2HSV_FULLconvert RGB/BGR to HSV (hue saturation value) with H range 0..255 if 8 bit image, [color_convert_rgb_hsv] “color conversions” * COLOR_BGR2Labconvert RGB/BGR to 
CIE Lab, [color_convert_rgb_lab] “color conversions” * COLOR_BGR2Luvconvert RGB/BGR to CIE Luv, [color_convert_rgb_luv] “color conversions” * COLOR_BGR2RGB * COLOR_BGR2RGBAconvert between RGB and BGR color spaces (with or without alpha channel) * COLOR_BGR2XYZconvert RGB/BGR to CIE XYZ, [color_convert_rgb_xyz] “color conversions” * COLOR_BGR2YCrCbconvert RGB/BGR to luma-chroma (aka YCC), [color_convert_rgb_ycrcb] “color conversions” * COLOR_BGR2YUVconvert between RGB/BGR and YUV * COLOR_BGR2YUV_I420RGB to YUV 4:2:0 family * COLOR_BGR2YUV_IYUVRGB to YUV 4:2:0 family * COLOR_BGR2YUV_YV12RGB to YUV 4:2:0 family * COLOR_BGR5552BGR * COLOR_BGR5552BGRA * COLOR_BGR5552GRAY * COLOR_BGR5552RGB * COLOR_BGR5552RGBA * COLOR_BGR5652BGR * COLOR_BGR5652BGRA * COLOR_BGR5652GRAY * COLOR_BGR5652RGB * COLOR_BGR5652RGBA * COLOR_BGRA2BGRremove alpha channel from RGB or BGR image * COLOR_BGRA2BGR555 * COLOR_BGRA2BGR565 * COLOR_BGRA2GRAY * COLOR_BGRA2RGB * COLOR_BGRA2RGBA * COLOR_BGRA2YUV_I420RGB to YUV 4:2:0 family * COLOR_BGRA2YUV_IYUVRGB to YUV 4:2:0 family * COLOR_BGRA2YUV_YV12RGB to YUV 4:2:0 family * COLOR_BayerBG2BGRequivalent to RGGB Bayer pattern * COLOR_BayerBG2BGRAequivalent to RGGB Bayer pattern * COLOR_BayerBG2BGR_EAequivalent to RGGB Bayer pattern * COLOR_BayerBG2BGR_VNGequivalent to RGGB Bayer pattern * COLOR_BayerBG2GRAYequivalent to RGGB Bayer pattern * COLOR_BayerBG2RGBequivalent to RGGB Bayer pattern * COLOR_BayerBG2RGBAequivalent to RGGB Bayer pattern * COLOR_BayerBG2RGB_EAequivalent to RGGB Bayer pattern * COLOR_BayerBG2RGB_VNGequivalent to RGGB Bayer pattern * COLOR_BayerBGGR2BGR * COLOR_BayerBGGR2BGRA * COLOR_BayerBGGR2BGR_EA * COLOR_BayerBGGR2BGR_VNG * COLOR_BayerBGGR2GRAY * COLOR_BayerBGGR2RGB * COLOR_BayerBGGR2RGBA * COLOR_BayerBGGR2RGB_EA * COLOR_BayerBGGR2RGB_VNG * COLOR_BayerGB2BGRequivalent to GRBG Bayer pattern * COLOR_BayerGB2BGRAequivalent to GRBG Bayer pattern * COLOR_BayerGB2BGR_EAequivalent to GRBG Bayer pattern * COLOR_BayerGB2BGR_VNGequivalent to 
GRBG Bayer pattern * COLOR_BayerGB2GRAYequivalent to GRBG Bayer pattern * COLOR_BayerGB2RGBequivalent to GRBG Bayer pattern * COLOR_BayerGB2RGBAequivalent to GRBG Bayer pattern * COLOR_BayerGB2RGB_EAequivalent to GRBG Bayer pattern * COLOR_BayerGB2RGB_VNGequivalent to GRBG Bayer pattern * COLOR_BayerGBRG2BGR * COLOR_BayerGBRG2BGRA * COLOR_BayerGBRG2BGR_EA * COLOR_BayerGBRG2BGR_VNG * COLOR_BayerGBRG2GRAY * COLOR_BayerGBRG2RGB * COLOR_BayerGBRG2RGBA * COLOR_BayerGBRG2RGB_EA * COLOR_BayerGBRG2RGB_VNG * COLOR_BayerGR2BGRequivalent to GBRG Bayer pattern * COLOR_BayerGR2BGRAequivalent to GBRG Bayer pattern * COLOR_BayerGR2BGR_EAequivalent to GBRG Bayer pattern * COLOR_BayerGR2BGR_VNGequivalent to GBRG Bayer pattern * COLOR_BayerGR2GRAYequivalent to GBRG Bayer pattern * COLOR_BayerGR2RGBequivalent to GBRG Bayer pattern * COLOR_BayerGR2RGBAequivalent to GBRG Bayer pattern * COLOR_BayerGR2RGB_EAequivalent to GBRG Bayer pattern * COLOR_BayerGR2RGB_VNGequivalent to GBRG Bayer pattern * COLOR_BayerGRBG2BGR * COLOR_BayerGRBG2BGRA * COLOR_BayerGRBG2BGR_EA * COLOR_BayerGRBG2BGR_VNG * COLOR_BayerGRBG2GRAY * COLOR_BayerGRBG2RGB * COLOR_BayerGRBG2RGBA * COLOR_BayerGRBG2RGB_EA * COLOR_BayerGRBG2RGB_VNG * COLOR_BayerRG2BGRequivalent to BGGR Bayer pattern * COLOR_BayerRG2BGRAequivalent to BGGR Bayer pattern * COLOR_BayerRG2BGR_EAequivalent to BGGR Bayer pattern * COLOR_BayerRG2BGR_VNGequivalent to BGGR Bayer pattern * COLOR_BayerRG2GRAYequivalent to BGGR Bayer pattern * COLOR_BayerRG2RGBequivalent to BGGR Bayer pattern * COLOR_BayerRG2RGBAequivalent to BGGR Bayer pattern * COLOR_BayerRG2RGB_EAequivalent to BGGR Bayer pattern * COLOR_BayerRG2RGB_VNGequivalent to BGGR Bayer pattern * COLOR_BayerRGGB2BGR * COLOR_BayerRGGB2BGRA * COLOR_BayerRGGB2BGR_EA * COLOR_BayerRGGB2BGR_VNG * COLOR_BayerRGGB2GRAY * COLOR_BayerRGGB2RGB * COLOR_BayerRGGB2RGBA * COLOR_BayerRGGB2RGB_EA * COLOR_BayerRGGB2RGB_VNG * COLOR_COLORCVT_MAX * COLOR_GRAY2BGR * COLOR_GRAY2BGR555convert between grayscale and BGR555 
(16-bit images) * COLOR_GRAY2BGR565convert between grayscale to BGR565 (16-bit images) * COLOR_GRAY2BGRA * COLOR_GRAY2RGB * COLOR_GRAY2RGBA * COLOR_HLS2BGRbackward conversions HLS to RGB/BGR with H range 0..180 if 8 bit image * COLOR_HLS2BGR_FULLbackward conversions HLS to RGB/BGR with H range 0..255 if 8 bit image * COLOR_HLS2RGB * COLOR_HLS2RGB_FULL * COLOR_HSV2BGRbackward conversions HSV to RGB/BGR with H range 0..180 if 8 bit image * COLOR_HSV2BGR_FULLbackward conversions HSV to RGB/BGR with H range 0..255 if 8 bit image * COLOR_HSV2RGB * COLOR_HSV2RGB_FULL * COLOR_LBGR2Lab * COLOR_LBGR2Luv * COLOR_LRGB2Lab * COLOR_LRGB2Luv * COLOR_Lab2BGR * COLOR_Lab2LBGR * COLOR_Lab2LRGB * COLOR_Lab2RGB * COLOR_Luv2BGR * COLOR_Luv2LBGR * COLOR_Luv2LRGB * COLOR_Luv2RGB * COLOR_RGB2BGR * COLOR_RGB2BGR555 * COLOR_RGB2BGR565 * COLOR_RGB2BGRA * COLOR_RGB2GRAY * COLOR_RGB2HLS * COLOR_RGB2HLS_FULL * COLOR_RGB2HSV * COLOR_RGB2HSV_FULL * COLOR_RGB2Lab * COLOR_RGB2Luv * COLOR_RGB2RGBA * COLOR_RGB2XYZ * COLOR_RGB2YCrCb * COLOR_RGB2YUV * COLOR_RGB2YUV_I420RGB to YUV 4:2:0 family * COLOR_RGB2YUV_IYUVRGB to YUV 4:2:0 family * COLOR_RGB2YUV_YV12RGB to YUV 4:2:0 family * COLOR_RGBA2BGR * COLOR_RGBA2BGR555 * COLOR_RGBA2BGR565 * COLOR_RGBA2BGRA * COLOR_RGBA2GRAY * COLOR_RGBA2RGB * COLOR_RGBA2YUV_I420RGB to YUV 4:2:0 family * COLOR_RGBA2YUV_IYUVRGB to YUV 4:2:0 family * COLOR_RGBA2YUV_YV12RGB to YUV 4:2:0 family * COLOR_RGBA2mRGBAalpha premultiplication * COLOR_XYZ2BGR * COLOR_XYZ2RGB * COLOR_YCrCb2BGR * COLOR_YCrCb2RGB * COLOR_YUV2BGR * COLOR_YUV2BGRA_I420YUV 4:2:0 family to RGB * COLOR_YUV2BGRA_IYUVYUV 4:2:0 family to RGB * COLOR_YUV2BGRA_NV12YUV 4:2:0 family to RGB * COLOR_YUV2BGRA_NV21YUV 4:2:0 family to RGB * COLOR_YUV2BGRA_UYNVYUV 4:2:2 family to RGB * COLOR_YUV2BGRA_UYVYYUV 4:2:2 family to RGB * COLOR_YUV2BGRA_Y422YUV 4:2:2 family to RGB * COLOR_YUV2BGRA_YUNVYUV 4:2:2 family to RGB * COLOR_YUV2BGRA_YUY2YUV 4:2:2 family to RGB * COLOR_YUV2BGRA_YUYVYUV 4:2:2 family to RGB * 
COLOR_YUV2BGRA_YV12YUV 4:2:0 family to RGB * COLOR_YUV2BGRA_YVYUYUV 4:2:2 family to RGB * COLOR_YUV2BGR_I420YUV 4:2:0 family to RGB * COLOR_YUV2BGR_IYUVYUV 4:2:0 family to RGB * COLOR_YUV2BGR_NV12YUV 4:2:0 family to RGB * COLOR_YUV2BGR_NV21YUV 4:2:0 family to RGB * COLOR_YUV2BGR_UYNVYUV 4:2:2 family to RGB * COLOR_YUV2BGR_UYVYYUV 4:2:2 family to RGB * COLOR_YUV2BGR_Y422YUV 4:2:2 family to RGB * COLOR_YUV2BGR_YUNVYUV 4:2:2 family to RGB * COLOR_YUV2BGR_YUY2YUV 4:2:2 family to RGB * COLOR_YUV2BGR_YUYVYUV 4:2:2 family to RGB * COLOR_YUV2BGR_YV12YUV 4:2:0 family to RGB * COLOR_YUV2BGR_YVYUYUV 4:2:2 family to RGB * COLOR_YUV2GRAY_420YUV 4:2:0 family to RGB * COLOR_YUV2GRAY_I420YUV 4:2:0 family to RGB * COLOR_YUV2GRAY_IYUVYUV 4:2:0 family to RGB * COLOR_YUV2GRAY_NV12YUV 4:2:0 family to RGB * COLOR_YUV2GRAY_NV21YUV 4:2:0 family to RGB * COLOR_YUV2GRAY_UYNVYUV 4:2:2 family to RGB * COLOR_YUV2GRAY_UYVYYUV 4:2:2 family to RGB * COLOR_YUV2GRAY_Y422YUV 4:2:2 family to RGB * COLOR_YUV2GRAY_YUNVYUV 4:2:2 family to RGB * COLOR_YUV2GRAY_YUY2YUV 4:2:2 family to RGB * COLOR_YUV2GRAY_YUYVYUV 4:2:2 family to RGB * COLOR_YUV2GRAY_YV12YUV 4:2:0 family to RGB * COLOR_YUV2GRAY_YVYUYUV 4:2:2 family to RGB * COLOR_YUV2RGB * COLOR_YUV2RGBA_I420YUV 4:2:0 family to RGB * COLOR_YUV2RGBA_IYUVYUV 4:2:0 family to RGB * COLOR_YUV2RGBA_NV12YUV 4:2:0 family to RGB * COLOR_YUV2RGBA_NV21YUV 4:2:0 family to RGB * COLOR_YUV2RGBA_UYNVYUV 4:2:2 family to RGB * COLOR_YUV2RGBA_UYVYYUV 4:2:2 family to RGB * COLOR_YUV2RGBA_Y422YUV 4:2:2 family to RGB * COLOR_YUV2RGBA_YUNVYUV 4:2:2 family to RGB * COLOR_YUV2RGBA_YUY2YUV 4:2:2 family to RGB * COLOR_YUV2RGBA_YUYVYUV 4:2:2 family to RGB * COLOR_YUV2RGBA_YV12YUV 4:2:0 family to RGB * COLOR_YUV2RGBA_YVYUYUV 4:2:2 family to RGB * COLOR_YUV2RGB_I420YUV 4:2:0 family to RGB * COLOR_YUV2RGB_IYUVYUV 4:2:0 family to RGB * COLOR_YUV2RGB_NV12YUV 4:2:0 family to RGB * COLOR_YUV2RGB_NV21YUV 4:2:0 family to RGB * COLOR_YUV2RGB_UYNVYUV 4:2:2 family to RGB * COLOR_YUV2RGB_UYVYYUV 
4:2:2 family to RGB * COLOR_YUV2RGB_Y422YUV 4:2:2 family to RGB * COLOR_YUV2RGB_YUNVYUV 4:2:2 family to RGB * COLOR_YUV2RGB_YUY2YUV 4:2:2 family to RGB * COLOR_YUV2RGB_YUYVYUV 4:2:2 family to RGB * COLOR_YUV2RGB_YV12YUV 4:2:0 family to RGB * COLOR_YUV2RGB_YVYUYUV 4:2:2 family to RGB * COLOR_YUV420p2BGRYUV 4:2:0 family to RGB * COLOR_YUV420p2BGRAYUV 4:2:0 family to RGB * COLOR_YUV420p2GRAYYUV 4:2:0 family to RGB * COLOR_YUV420p2RGBYUV 4:2:0 family to RGB * COLOR_YUV420p2RGBAYUV 4:2:0 family to RGB * COLOR_YUV420sp2BGRYUV 4:2:0 family to RGB * COLOR_YUV420sp2BGRAYUV 4:2:0 family to RGB * COLOR_YUV420sp2GRAYYUV 4:2:0 family to RGB * COLOR_YUV420sp2RGBYUV 4:2:0 family to RGB * COLOR_YUV420sp2RGBAYUV 4:2:0 family to RGB * COLOR_mRGBA2RGBAalpha premultiplication * CONTOURS_MATCH_I1block formula * CONTOURS_MATCH_I2block formula * CONTOURS_MATCH_I3block formula * DIST_Cdistance = max(|x1-x2|,|y1-y2|) * DIST_FAIRdistance = c^2(|x|/c-log(1+|x|/c)), c = 1.3998 * DIST_HUBERdistance = |x|<c ? x^2/2 : c(|x|-c/2), c=1.345 * DIST_L1distance = |x1-x2| + |y1-y2| * DIST_L2the simple Euclidean distance * DIST_L12L1-L2 metric: distance = 2(sqrt(1+x*x/2) - 1) * DIST_LABEL_CCOMPeach connected component of zeros in src (as well as all the non-zero pixels closest to the connected component) will be assigned the same label * DIST_LABEL_PIXELeach zero pixel (and all the non-zero pixels closest to it) gets its own label. * DIST_MASK_3mask=3 * DIST_MASK_5mask=5 * DIST_MASK_PRECISE * DIST_USERUser defined distance * DIST_WELSCHdistance = c^2/2(1-exp(-(x/c)^2)), c = 2.9846 * FILLED * FILTER_SCHARR * FLOODFILL_FIXED_RANGEIf set, the difference between the current pixel and seed pixel is considered. Otherwise, the difference between neighbor pixels is considered (that is, the range is floating). * FLOODFILL_MASK_ONLYIf set, the function does not change the image (newVal is ignored), and only fills the mask with the value specified in bits 8-16 of flags as described above. 
This option only makes sense in function variants that have the mask parameter. * FONT_HERSHEY_COMPLEXnormal size serif font * FONT_HERSHEY_COMPLEX_SMALLsmaller version of FONT_HERSHEY_COMPLEX * FONT_HERSHEY_DUPLEXnormal size sans-serif font (more complex than FONT_HERSHEY_SIMPLEX) * FONT_HERSHEY_PLAINsmall size sans-serif font * FONT_HERSHEY_SCRIPT_COMPLEXmore complex variant of FONT_HERSHEY_SCRIPT_SIMPLEX * FONT_HERSHEY_SCRIPT_SIMPLEXhand-writing style font * FONT_HERSHEY_SIMPLEXnormal size sans-serif font * FONT_HERSHEY_TRIPLEXnormal size serif font (more complex than FONT_HERSHEY_COMPLEX) * FONT_ITALICflag for italic font * GC_BGDan obvious background pixel * GC_EVALThe value means that the algorithm should just resume. * GC_EVAL_FREEZE_MODELThe value means that the algorithm should just run the grabCut algorithm (a single iteration) with the fixed model * GC_FGDan obvious foreground (object) pixel * GC_INIT_WITH_MASKThe function initializes the state using the provided mask. Note that GC_INIT_WITH_RECT and GC_INIT_WITH_MASK can be combined. Then, all the pixels outside of the ROI are automatically initialized with GC_BGD. * GC_INIT_WITH_RECTThe function initializes the state and the mask using the provided rectangle. After that it runs iterCount iterations of the algorithm. * GC_PR_BGDa possible background pixel * GC_PR_FGDa possible foreground pixel * HISTCMP_BHATTACHARYYABhattacharyya distance (In fact, OpenCV computes Hellinger distance, which is related to the Bhattacharyya coefficient.) block formula * HISTCMP_CHISQRChi-Square block formula * HISTCMP_CHISQR_ALTAlternative Chi-Square block formula This alternative formula is regularly used for texture comparison. See e.g. Puzicha1997 * HISTCMP_CORRELCorrelation block formula where block formula and inline formula is the total number of histogram bins. 
* HISTCMP_HELLINGERSynonym for HISTCMP_BHATTACHARYYA * HISTCMP_INTERSECTIntersection block formula * HISTCMP_KL_DIVKullback-Leibler divergence block formula * HOUGH_GRADIENTbasically *21HT*, described in Yuen90 * HOUGH_GRADIENT_ALTvariation of HOUGH_GRADIENT to get better accuracy * HOUGH_MULTI_SCALEmulti-scale variant of the classical Hough transform. The lines are encoded the same way as HOUGH_STANDARD. * HOUGH_PROBABILISTICprobabilistic Hough transform (more efficient when the picture contains a few long linear segments). It returns line segments rather than whole lines. Each segment is represented by its starting and ending points, and the matrix must be (the created sequence will be) of the CV_32SC4 type. * HOUGH_STANDARDclassical or standard Hough transform. Every line is represented by two floating-point numbers (ρ, θ), where ρ is the distance between the (0,0) point and the line, and θ is the angle between the x-axis and the normal to the line. Thus, the matrix must be (the created sequence will be) of CV_32FC2 type * INTERSECT_FULLOne of the rectangles is fully enclosed in the other * INTERSECT_NONENo intersection * INTERSECT_PARTIALThere is a partial intersection * INTER_AREAresampling using pixel area relation. It may be a preferred method for image decimation, as it gives moiré-free results. But when the image is zoomed, it is similar to the INTER_NEAREST method. * INTER_BITS * INTER_BITS2 * INTER_CUBICbicubic interpolation * INTER_LANCZOS4Lanczos interpolation over 8x8 neighborhood * INTER_LINEARbilinear interpolation * INTER_LINEAR_EXACTBit exact bilinear interpolation * INTER_MAXmask for interpolation codes * INTER_NEARESTnearest neighbor interpolation * INTER_NEAREST_EXACTBit exact nearest neighbor interpolation. This will produce the same results as the nearest neighbor method in PIL, scikit-image or Matlab. 
* INTER_TAB_SIZE * INTER_TAB_SIZE2 * LINE_44-connected line * LINE_88-connected line * LINE_AAantialiased line * LSD_REFINE_ADVAdvanced refinement. The number of false alarms is calculated, and lines are refined through increased precision, size decrements, etc. * LSD_REFINE_NONENo refinement applied * LSD_REFINE_STDStandard refinement is applied, e.g. breaking arches into smaller, straighter line approximations. * MARKER_CROSSA crosshair marker shape * MARKER_DIAMONDA diamond marker shape * MARKER_SQUAREA square marker shape * MARKER_STARA star marker shape, combination of cross and tilted cross * MARKER_TILTED_CROSSA 45 degree tilted crosshair marker shape * MARKER_TRIANGLE_DOWNA downwards pointing triangle marker shape * MARKER_TRIANGLE_UPAn upwards pointing triangle marker shape * MORPH_BLACKHAT“black hat” block formula * MORPH_CLOSEa closing operation block formula * MORPH_CROSSa cross-shaped structuring element: block formula * MORPH_DILATEsee #dilate * MORPH_ELLIPSEan elliptic structuring element, that is, a filled ellipse inscribed into the rectangle Rect(0, 0, esize.width, esize.height) * MORPH_ERODEsee #erode * MORPH_GRADIENTa morphological gradient block formula * MORPH_HITMISS“hit or miss”. Only supported for CV_8UC1 binary images. A tutorial can be found in the documentation * MORPH_OPENan opening operation block formula * MORPH_RECTa rectangular structuring element: block formula * MORPH_TOPHAT“top hat” block formula * RETR_CCOMPretrieves all of the contours and organizes them into a two-level hierarchy. At the top level, there are external boundaries of the components. At the second level, there are boundaries of the holes. If there is another contour inside a hole of a connected component, it is still put at the top level. * RETR_EXTERNALretrieves only the extreme outer contours. It sets `hierarchy[i][2]=hierarchy[i][3]=-1` for all the contours. 
* RETR_FLOODFILL * RETR_LISTretrieves all of the contours without establishing any hierarchical relationships. * RETR_TREEretrieves all of the contours and reconstructs a full hierarchy of nested contours. * Subdiv2D_NEXT_AROUND_DST * Subdiv2D_NEXT_AROUND_LEFT * Subdiv2D_NEXT_AROUND_ORG * Subdiv2D_NEXT_AROUND_RIGHT * Subdiv2D_PREV_AROUND_DST * Subdiv2D_PREV_AROUND_LEFT * Subdiv2D_PREV_AROUND_ORG * Subdiv2D_PREV_AROUND_RIGHT * Subdiv2D_PTLOC_ERRORPoint location error * Subdiv2D_PTLOC_INSIDEPoint inside some facet * Subdiv2D_PTLOC_ON_EDGEPoint on some edge * Subdiv2D_PTLOC_OUTSIDE_RECTPoint outside the subdivision bounding rect * Subdiv2D_PTLOC_VERTEXPoint coincides with one of the subdivision vertices * THRESH_BINARYblock formula * THRESH_BINARY_INVblock formula * THRESH_MASK * THRESH_OTSUflag, use Otsu algorithm to choose the optimal threshold value * THRESH_TOZEROblock formula * THRESH_TOZERO_INVblock formula * THRESH_TRIANGLEflag, use Triangle algorithm to choose the optimal threshold value * THRESH_TRUNCblock formula * TM_CCOEFF!< block formula where block formula with mask: block formula * TM_CCOEFF_NORMED!< block formula * TM_CCORR!< block formula with mask: block formula * TM_CCORR_NORMED!< block formula with mask: block formula * TM_SQDIFF!< block formula with mask: block formula * TM_SQDIFF_NORMED!< block formula with mask: block formula * WARP_FILL_OUTLIERSflag, fills all of the destination image pixels. If some of them correspond to outliers in the source image, they are set to zero * WARP_INVERSE_MAPflag, inverse transformation * WARP_POLAR_LINEARRemaps an image to/from polar space. * WARP_POLAR_LOGRemaps an image to/from semilog-polar space. 
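Each of the THRESH_* modes above selects a per-pixel rule applied by `threshold`. The following dependency-free sketch illustrates three of those rules on raw 8-bit values; it is only an illustration of the rules, not the crate's API (which operates on `Mat` inputs and returns a `Result`):

```rust
// Illustrative per-pixel rules for three THRESH_* modes (8-bit, single channel).

fn thresh_binary(src: u8, thresh: u8, maxval: u8) -> u8 {
    // THRESH_BINARY: maxval where src > thresh, 0 elsewhere.
    if src > thresh { maxval } else { 0 }
}

fn thresh_trunc(src: u8, thresh: u8) -> u8 {
    // THRESH_TRUNC: values above the threshold are clamped to it.
    if src > thresh { thresh } else { src }
}

fn thresh_tozero(src: u8, thresh: u8) -> u8 {
    // THRESH_TOZERO: values above the threshold are kept, the rest become 0.
    if src > thresh { src } else { 0 }
}

fn main() {
    let row = [10u8, 100, 128, 200, 255];
    let bin: Vec<u8> = row.iter().map(|&p| thresh_binary(p, 127, 255)).collect();
    let trunc: Vec<u8> = row.iter().map(|&p| thresh_trunc(p, 127)).collect();
    let tozero: Vec<u8> = row.iter().map(|&p| thresh_tozero(p, 127)).collect();
    println!("binary: {:?}\ntrunc:  {:?}\ntozero: {:?}", bin, trunc, tozero);
}
```

The _INV variants simply swap the two branches of each rule, and THRESH_OTSU / THRESH_TRIANGLE are flags that compute the threshold value automatically instead of using the one passed in.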
Traits --- * CLAHETraitMutable methods for crate::imgproc::CLAHE * CLAHETraitConstConstant methods for crate::imgproc::CLAHE * GeneralizedHoughBallardTraitMutable methods for crate::imgproc::GeneralizedHoughBallard * GeneralizedHoughBallardTraitConstConstant methods for crate::imgproc::GeneralizedHoughBallard * GeneralizedHoughGuilTraitMutable methods for crate::imgproc::GeneralizedHoughGuil * GeneralizedHoughGuilTraitConstConstant methods for crate::imgproc::GeneralizedHoughGuil * GeneralizedHoughTraitMutable methods for crate::imgproc::GeneralizedHough * GeneralizedHoughTraitConstConstant methods for crate::imgproc::GeneralizedHough * IntelligentScissorsMBTraitMutable methods for crate::imgproc::IntelligentScissorsMB * IntelligentScissorsMBTraitConstConstant methods for crate::imgproc::IntelligentScissorsMB * LineIteratorTraitMutable methods for crate::imgproc::LineIterator * LineIteratorTraitConstConstant methods for crate::imgproc::LineIterator * LineSegmentDetectorTraitMutable methods for crate::imgproc::LineSegmentDetector * LineSegmentDetectorTraitConstConstant methods for crate::imgproc::LineSegmentDetector * Subdiv2DTraitMutable methods for crate::imgproc::Subdiv2D * Subdiv2DTraitConstConstant methods for crate::imgproc::Subdiv2D Functions --- * accumulateAdds an image to the accumulator image. * accumulate_defAdds an image to the accumulator image. * accumulate_productAdds the per-element product of two input images to the accumulator image. * accumulate_product_defAdds the per-element product of two input images to the accumulator image. * accumulate_squareAdds the square of a source image to the accumulator image. * accumulate_square_defAdds the square of a source image to the accumulator image. * accumulate_weightedUpdates a running average. * accumulate_weighted_defUpdates a running average. * adaptive_thresholdApplies an adaptive threshold to an array. * apply_color_mapApplies a GNU Octave/MATLAB equivalent colormap on a given image. 
* apply_color_map_userApplies a user colormap on a given image. * approx_poly_dpApproximates a polygonal curve(s) with the specified precision. * arc_lengthCalculates a contour perimeter or a curve length. * arrowed_lineDraws an arrow segment pointing from the first point to the second one. * arrowed_line_defDraws an arrow segment pointing from the first point to the second one. * bilateral_filterApplies the bilateral filter to an image. * bilateral_filter_defApplies the bilateral filter to an image. * blend_linearPerforms linear blending of two images: block formula * blurBlurs an image using the normalized box filter. * blur_defBlurs an image using the normalized box filter. * bounding_rectCalculates the up-right bounding rectangle of a point set or non-zero pixels of gray-scale image. * box_filterBlurs an image using the box filter. * box_filter_defBlurs an image using the box filter. * box_pointsFinds the four vertices of a rotated rect. Useful to draw the rotated rectangle. * build_pyramidConstructs the Gaussian pyramid for an image. * build_pyramid_defConstructs the Gaussian pyramid for an image. * calc_back_projectCalculates the back projection of a histogram. * calc_histCalculates a histogram of a set of arrays. * calc_hist_def@overload * cannyFinds edges in an image using the Canny algorithm Canny86 . * canny_defFinds edges in an image using the Canny algorithm Canny86 . * canny_derivative\overload * canny_derivative_def\overload * circleDraws a circle. * circle_defDraws a circle. * clip_lineClips the line against the image rectangle. * clip_line_sizeClips the line against the image rectangle. * clip_line_size_i64Clips the line against the image rectangle. * compare_histCompares two histograms. * compare_hist_1Compares two histograms. 
* connected_componentscomputes the connected components labeled image of boolean image * connected_components_def@overload * connected_components_with_algorithmcomputes the connected components labeled image of boolean image * connected_components_with_statscomputes the connected components labeled image of boolean image and also produces a statistics output for each label * connected_components_with_stats_def@overload * connected_components_with_stats_with_algorithmcomputes the connected components labeled image of boolean image and also produces a statistics output for each label * contour_areaCalculates a contour area. * contour_area_defCalculates a contour area. * convert_mapsConverts image transformation maps from one representation to another. * convert_maps_defConverts image transformation maps from one representation to another. * convex_hullFinds the convex hull of a point set. * convex_hull_defFinds the convex hull of a point set. * convexity_defectsFinds the convexity defects of a contour. * corner_eigen_vals_and_vecsCalculates eigenvalues and eigenvectors of image blocks for corner detection. * corner_eigen_vals_and_vecs_defCalculates eigenvalues and eigenvectors of image blocks for corner detection. * corner_harrisHarris corner detector. * corner_harris_defHarris corner detector. * corner_min_eigen_valCalculates the minimal eigenvalue of gradient matrices for corner detection. * corner_min_eigen_val_defCalculates the minimal eigenvalue of gradient matrices for corner detection. * corner_sub_pixRefines the corner locations. * create_claheCreates a smart pointer to a cv::CLAHE class and initializes it. * create_clahe_defCreates a smart pointer to a cv::CLAHE class and initializes it. * create_generalized_hough_ballardCreates a smart pointer to a cv::GeneralizedHoughBallard class and initializes it. * create_generalized_hough_guilCreates a smart pointer to a cv::GeneralizedHoughGuil class and initializes it. 
* create_hanning_windowThis function computes Hanning window coefficients in two dimensions. * create_line_segment_detectorCreates a smart pointer to a LineSegmentDetector object and initializes it. * create_line_segment_detector_defCreates a smart pointer to a LineSegmentDetector object and initializes it. * cvt_colorConverts an image from one color space to another. * cvt_color_defConverts an image from one color space to another. * cvt_color_two_planeConverts an image from one color space to another where the source image is stored in two planes. * demosaicingmain function for all demosaicing processes * demosaicing_defmain function for all demosaicing processes * dilateDilates an image by using a specific structuring element. * dilate_defDilates an image by using a specific structuring element. * distance_transformCalculates the distance to the closest zero pixel for each pixel of the source image. * distance_transform_def@overload * distance_transform_with_labelsCalculates the distance to the closest zero pixel for each pixel of the source image. * distance_transform_with_labels_defCalculates the distance to the closest zero pixel for each pixel of the source image. * div_spectrumsPerforms the per-element division of the first Fourier spectrum by the second Fourier spectrum. * div_spectrums_defPerforms the per-element division of the first Fourier spectrum by the second Fourier spectrum. * draw_contoursDraws contour outlines or filled contours. * draw_contours_defDraws contour outlines or filled contours. * draw_markerDraws a marker on a predefined position in an image. * draw_marker_defDraws a marker on a predefined position in an image. * ellipseDraws a simple or thick elliptic arc or fills an ellipse sector. * ellipse_2_polyApproximates an elliptic arc with a polyline. * ellipse_2_poly_f64Approximates an elliptic arc with a polyline. * ellipse_defDraws a simple or thick elliptic arc or fills an ellipse sector. 
* ellipse_rotated_rectDraws a simple or thick elliptic arc or fills an ellipse sector. * ellipse_rotated_rect_def@overload * emdComputes the “minimal work” distance between two weighted point configurations. * emd_1C++ default parameters * emd_1_defNote * emd_defComputes the “minimal work” distance between two weighted point configurations. * equalize_histEqualizes the histogram of a grayscale image. * erodeErodes an image by using a specific structuring element. * erode_defErodes an image by using a specific structuring element. * fill_convex_polyFills a convex polygon. * fill_convex_poly_defFills a convex polygon. * fill_polyFills the area bounded by one or more polygons. * fill_poly_defFills the area bounded by one or more polygons. * filter_2dConvolves an image with the kernel. * filter_2d_defConvolves an image with the kernel. * find_contoursFinds contours in a binary image. * find_contours_def@overload * find_contours_with_hierarchyFinds contours in a binary image. * find_contours_with_hierarchy_defFinds contours in a binary image. * fit_ellipseFits an ellipse around a set of 2D points. * fit_ellipse_amsFits an ellipse around a set of 2D points. * fit_ellipse_directFits an ellipse around a set of 2D points. * fit_lineFits a line to a 2D or 3D point set. * flood_fillFills a connected component with the given color. * flood_fill_def@overload * flood_fill_maskFills a connected component with the given color. * flood_fill_mask_defFills a connected component with the given color. * gaussian_blurBlurs an image using a Gaussian filter. * gaussian_blur_defBlurs an image using a Gaussian filter. * get_affine_transform * get_affine_transform_sliceCalculates an affine transform from three pairs of the corresponding points. * get_deriv_kernelsReturns filter coefficients for computing spatial image derivatives. * get_deriv_kernels_defReturns filter coefficients for computing spatial image derivatives. 
* get_font_scale_from_heightCalculates the font-specific size to use to achieve a given height in pixels. * get_font_scale_from_height_defCalculates the font-specific size to use to achieve a given height in pixels. * get_gabor_kernelReturns Gabor filter coefficients. * get_gabor_kernel_defReturns Gabor filter coefficients. * get_gaussian_kernelReturns Gaussian filter coefficients. * get_gaussian_kernel_defReturns Gaussian filter coefficients. * get_perspective_transformCalculates a perspective transform from four pairs of the corresponding points. * get_perspective_transform_defCalculates a perspective transform from four pairs of the corresponding points. * get_perspective_transform_sliceCalculates a perspective transform from four pairs of the corresponding points. * get_perspective_transform_slice_def@overload * get_rect_sub_pixRetrieves a pixel rectangle from an image with sub-pixel accuracy. * get_rect_sub_pix_defRetrieves a pixel rectangle from an image with sub-pixel accuracy. * get_rotation_matrix_2dCalculates an affine matrix of 2D rotation. * get_rotation_matrix_2d_matxSee also * get_structuring_elementReturns a structuring element of the specified size and shape for morphological operations. * get_structuring_element_defReturns a structuring element of the specified size and shape for morphological operations. * get_text_sizeCalculates the width and height of a text string. * good_features_to_trackDetermines strong corners on an image. * good_features_to_track_defDetermines strong corners on an image. * good_features_to_track_with_gradientC++ default parameters * good_features_to_track_with_gradient_defNote * good_features_to_track_with_qualitySame as above, but returns also quality measure of the detected corners. * good_features_to_track_with_quality_defSame as above, but returns also quality measure of the detected corners. * grab_cutRuns the GrabCut algorithm. * grab_cut_defRuns the GrabCut algorithm. 
* hough_circlesFinds circles in a grayscale image using the Hough transform. * hough_circles_defFinds circles in a grayscale image using the Hough transform. * hough_linesFinds lines in a binary image using the standard Hough transform. * hough_lines_defFinds lines in a binary image using the standard Hough transform. * hough_lines_pFinds line segments in a binary image using the probabilistic Hough transform. * hough_lines_p_defFinds line segments in a binary image using the probabilistic Hough transform. * hough_lines_point_setFinds lines in a set of points using the standard Hough transform. * hu_momentsCalculates seven Hu invariants. * hu_moments_1Calculates seven Hu invariants. * integralCalculates the integral of an image. * integral2Calculates the integral of an image. * integral2_def@overload * integral3Calculates the integral of an image. * integral3_defCalculates the integral of an image. * integral_def@overload * intersect_convex_convexFinds intersection of two convex polygons * intersect_convex_convex_defFinds intersection of two convex polygons * invert_affine_transformInverts an affine transformation. * is_contour_convexTests a contour convexity. * laplacianCalculates the Laplacian of an image. * laplacian_defCalculates the Laplacian of an image. * lineDraws a line segment connecting two points. * line_defDraws a line segment connecting two points. * linear_polarDeprecatedRemaps an image to polar coordinates space. * log_polarDeprecatedRemaps an image to semilog-polar coordinates space. * match_shapesCompares two shapes. * match_templateCompares a template against overlapped image regions. * match_template_defCompares a template against overlapped image regions. * median_blurBlurs an image using the median filter. * min_area_rectFinds a rotated rectangle of the minimum area enclosing the input 2D point set. * min_enclosing_circleFinds a circle of the minimum area enclosing a 2D point set. 
* min_enclosing_triangleFinds a triangle of minimum area enclosing a 2D point set and returns its area. * momentsCalculates all of the moments up to the third order of a polygon or rasterized shape. * moments_defCalculates all of the moments up to the third order of a polygon or rasterized shape. * morphology_default_border_valuereturns “magic” border value for erosion and dilation. It is automatically transformed to Scalar::all(-DBL_MAX) for dilation. * morphology_exPerforms advanced morphological transformations. * morphology_ex_defPerforms advanced morphological transformations. * phase_correlateThe function is used to detect translational shifts that occur between two images. * phase_correlate_defThe function is used to detect translational shifts that occur between two images. * point_polygon_testPerforms a point-in-contour test. * polylinesDraws several polygonal curves. * polylines_defDraws several polygonal curves. * pre_corner_detectCalculates a feature map for corner detection. * pre_corner_detect_defCalculates a feature map for corner detection. * put_textDraws a text string. * put_text_defDraws a text string. * pyr_downBlurs an image and downsamples it. * pyr_down_defBlurs an image and downsamples it. * pyr_mean_shift_filteringPerforms initial step of meanshift segmentation of an image. * pyr_mean_shift_filtering_defPerforms initial step of meanshift segmentation of an image. * pyr_upUpsamples an image and then blurs it. * pyr_up_defUpsamples an image and then blurs it. * rectangleDraws a simple, thick, or filled up-right rectangle. * rectangle_def@overload * rectangle_pointsDraws a simple, thick, or filled up-right rectangle. * rectangle_points_defDraws a simple, thick, or filled up-right rectangle. * remapApplies a generic geometrical transformation to an image. * remap_defApplies a generic geometrical transformation to an image. * resizeResizes an image. * resize_defResizes an image. 
* rotated_rectangle_intersectionFinds out if there is any intersection between two rotated rectangles. * scharrCalculates the first x- or y- image derivative using Scharr operator. * scharr_defCalculates the first x- or y- image derivative using Scharr operator. * sep_filter_2dApplies a separable linear filter to an image. * sep_filter_2d_defApplies a separable linear filter to an image. * sobelCalculates the first, second, third, or mixed image derivatives using an extended Sobel operator. * sobel_defCalculates the first, second, third, or mixed image derivatives using an extended Sobel operator. * spatial_gradientCalculates the first order image derivative in both x and y using a Sobel operator * spatial_gradient_defCalculates the first order image derivative in both x and y using a Sobel operator * sqr_box_filterCalculates the normalized sum of squares of the pixel values overlapping the filter. * sqr_box_filter_defCalculates the normalized sum of squares of the pixel values overlapping the filter. * stack_blurBlurs an image using the stackBlur. * thresholdApplies a fixed-level threshold to each array element. * warp_affineApplies an affine transformation to an image. * warp_affine_defApplies an affine transformation to an image. * warp_perspectiveApplies a perspective transformation to an image. * warp_perspective_defApplies a perspective transformation to an image. * warp_polar\brief Remaps an image to polar or semilog-polar coordinates space * watershedPerforms a marker-based image segmentation using the watershed algorithm. Module opencv::intensity_transform === The module brings implementations of intensity transformation algorithms to adjust image contrast. --- Namespace for all functions is `cv::intensity_transform`. 
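The power-law (gamma) transformation this module implements raises each normalized intensity s ∈ [0, 1] to the power γ, so γ < 1 brightens midtones and γ > 1 darkens them. A minimal, dependency-free sketch of that mapping on 8-bit values (illustrative only — the crate's `gamma_correction` operates on `Mat` arguments, not slices):

```rust
/// Power-law (gamma) transform on 8-bit intensities:
/// out = 255 * (in / 255)^gamma.
fn gamma_correct(pixels: &[u8], gamma: f64) -> Vec<u8> {
    // Precompute a 256-entry lookup table, as a real implementation would,
    // so the per-pixel work is a single indexed load.
    let lut: Vec<u8> = (0u32..256)
        .map(|v| (255.0 * (v as f64 / 255.0).powf(gamma)).round() as u8)
        .collect();
    pixels.iter().map(|&p| lut[p as usize]).collect()
}

fn main() {
    let dark = [0u8, 32, 64, 128, 255];
    println!("gamma 0.5: {:?}", gamma_correct(&dark, 0.5)); // brightens midtones
    println!("gamma 2.0: {:?}", gamma_correct(&dark, 2.0)); // darkens midtones
}
```

Note that 0 and 255 are fixed points of the transform for any γ, which is why the module can describe it as operating "on domain [0, 255]" without shifting the overall range.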
#### Supported Algorithms * Autoscaling * Log Transformations * Power-Law (Gamma) Transformations * Contrast Stretching * BIMEF, A Bio-Inspired Multi-Exposure Fusion Framework for Low-light Image Enhancement ying2017bio ying2017new References from the following book and websites: * Digital Image Processing 4th Edition Chapter 3 [<NAME>, <NAME>] Gonzalez2018 * http://www.cs.uregina.ca/Links/class-info/425/Lab3/ lcs435lab * https://theailearner.com/2019/01/30/contrast-stretching/ theailearner Modules --- * prelude Functions --- * autoscalingGiven an input bgr or grayscale image, apply autoscaling on domain [0, 255] to increase the contrast of the input image and return the resulting image. * bimefGiven an input color image, enhance low-light images using the BIMEF method (ying2017bio ying2017new). * bimef2Given an input color image, enhance low-light images using the BIMEF method (ying2017bio ying2017new). * bimef_defGiven an input color image, enhance low-light images using the BIMEF method (ying2017bio ying2017new). * contrast_stretchingGiven an input bgr or grayscale image, apply linear contrast stretching on domain [0, 255] and return the resulting image. * gamma_correctionGiven an input bgr or grayscale image and constant gamma, apply power-law transformation, a.k.a. gamma correction, to the image on domain [0, 255] and return the resulting image. * log_transformGiven an input bgr or grayscale image and constant c, apply log transformation to the image on domain [0, 255] and return the resulting image. Module opencv::line_descriptor === Binary descriptors for lines extracted from an image --- ### Introduction One of the most challenging activities in computer vision is the extraction of useful information from a given image. Such information usually comes in the form of points that preserve some kind of property (for instance, they are scale-invariant) and are actually representative of the input image. 
The goal of this module is to seek out a new kind of representative information inside an image and to provide the functionality for its extraction and representation. In particular, differently from previous methods for the detection of relevant elements inside an image, lines are extracted in place of points; a new class is defined ad hoc to summarize a line’s properties, for reuse and plotting purposes. ### Computation of binary descriptors To obtain a binary descriptor representing a certain line detected from a certain octave of an image, we first compute a non-binary descriptor as described in LBD. The algorithm works on lines extracted using the EDLine detector, as explained in EDL. Given a line, we consider a rectangular region centered at it, called the *line support region (LSR)*. This region is divided into a set of bands ![inline formula](https://latex.codecogs.com/png.latex?%5C%7BB%5F1%2C%20B%5F2%2C%20%2E%2E%2E%2C%20B%5Fm%5C%7D), whose length equals that of the line. If we indicate the direction of the line with ![inline formula](https://latex.codecogs.com/png.latex?%5Cbf%7Bd%7D%5FL), the orthogonal, clockwise direction to the line is ![inline formula](https://latex.codecogs.com/png.latex?%5Cbf%7Bd%7D%5F%7B%5Cperp%7D). Later on, a Gaussian function is applied to all the LSR’s pixels along the ![inline formula](https://latex.codecogs.com/png.latex?%5Cbf%7Bd%7D%5F%5Cperp) direction; first, we assign a global weighting coefficient ![inline formula](https://latex.codecogs.com/png.latex?f%5Fg%28i%29%20%3D%20%281%2F%5Csqrt%7B2%5Cpi%7D%5Csigma%5Fg%29e%5E%7B%2Dd%5E2%5Fi%2F2%5Csigma%5E2%5Fg%7D) to the *i*-th row in the LSR, where ![inline formula](https://latex.codecogs.com/png.latex?d%5Fi) is the distance of the *i*-th row from the center row in the LSR, ![inline formula](https://latex.codecogs.com/png.latex?%5Csigma%5Fg%20%3D%200%2E5%28m%20%5Ccdot%20w%20%2D%201%29) and ![inline formula](https://latex.codecogs.com/png.latex?w) is the width of the bands (the same for every band). 
Secondly, considering a band ![inline formula](https://latex.codecogs.com/png.latex?B%5Fj) and its neighbor bands ![inline formula](https://latex.codecogs.com/png.latex?B%5F%7Bj%2D1%7D%2C%20B%5F%7Bj%2B1%7D), we assign a local weighting ![inline formula](https://latex.codecogs.com/png.latex?F%5Fl%28k%29%20%3D%20%281%2F%5Csqrt%7B2%5Cpi%7D%5Csigma%5Fl%29e%5E%7B%2Dd%27%5E2%5Fk%2F2%5Csigma%5Fl%5E2%7D), where ![inline formula](https://latex.codecogs.com/png.latex?d%27%5Fk) is the distance of *k*-th row from the center row in ![inline formula](https://latex.codecogs.com/png.latex?B%5Fj) and ![inline formula](https://latex.codecogs.com/png.latex?%5Csigma%5Fl%20%3D%20w). Using the global and local weights, we obtain, at the same time, the reduction of role played by gradients far from line and of boundary effect, respectively. Each band ![inline formula](https://latex.codecogs.com/png.latex?B%5Fj) in LSR has an associated *band descriptor(BD)* which is computed considering previous and next band (top and bottom bands are ignored when computing descriptor for first and last band). 
Once each band has been assigned its BD, the LBD descriptor of the line is simply given by ![block formula](https://latex.codecogs.com/png.latex?LBD%20%3D%20%28BD%5F1%5ET%2C%20BD%5F2%5ET%2C%20%2E%2E%2E%20%2C%20BD%5ET%5Fm%29%5ET%2E) To compute a band descriptor ![inline formula](https://latex.codecogs.com/png.latex?B%5Fj), each *k*-th row in it is considered and the gradients in that row are accumulated: ![block formula](https://latex.codecogs.com/png.latex?%5Cbegin%7Bmatrix%7D%20%5Cbf%7BV1%7D%5Ek%5Fj%20%3D%20%5Clambda%20%5Csum%5Climits%5F%7B%5Cbf%7Bg%7D%27%5F%7Bd%5F%5Cperp%7D%3E0%7D%5Cbf%7Bg%7D%27%5F%7Bd%5F%5Cperp%7D%2C%20%26%20%20%5Cbf%7BV2%7D%5Ek%5Fj%20%3D%20%5Clambda%20%5Csum%5Climits%5F%7B%5Cbf%7Bg%7D%27%5F%7Bd%5F%5Cperp%7D%3C0%7D%20%2D%5Cbf%7Bg%7D%27%5F%7Bd%5F%5Cperp%7D%2C%20%5C%5C%20%5Cbf%7BV3%7D%5Ek%5Fj%20%3D%20%5Clambda%20%5Csum%5Climits%5F%7B%5Cbf%7Bg%7D%27%5F%7Bd%5FL%7D%3E0%7D%5Cbf%7Bg%7D%27%5F%7Bd%5FL%7D%2C%20%26%20%5Cbf%7BV4%7D%5Ek%5Fj%20%3D%20%5Clambda%20%5Csum%5Climits%5F%7B%5Cbf%7Bg%7D%27%5F%7Bd%5FL%7D%3C0%7D%20%2D%5Cbf%7Bg%7D%27%5F%7Bd%5FL%7D%5Cend%7Bmatrix%7D%2E) with ![inline formula](https://latex.codecogs.com/png.latex?%5Clambda%20%3D%20f%5Fg%28k%29f%5Fl%28k%29).
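The four sums above can be sketched as a small helper (hypothetical, not this crate's API): given the gradient projections (g'_{d_perp}, g'_{d_L}) of each pixel in the *k*-th row and the weight lambda = f_g(k)f_l(k), it accumulates the positive and negative parts separately:

```rust
/// Accumulate a row's gradient projections into (V1, V2, V3, V4):
/// positive and negative components along d_perp, then along d_L,
/// each scaled by lambda. Zero components contribute nothing.
fn accumulate_row(grads: &[(f64, f64)], lambda: f64) -> [f64; 4] {
    let mut v = [0.0f64; 4];
    for &(g_perp, g_line) in grads {
        if g_perp > 0.0 { v[0] += g_perp; } else { v[1] -= g_perp; }
        if g_line > 0.0 { v[2] += g_line; } else { v[3] -= g_line; }
    }
    v.map(|s| s * lambda)
}

fn main() {
    // Two pixels with mixed-sign projections, lambda = 0.5.
    let v = accumulate_row(&[(1.0, -2.0), (-3.0, 4.0)], 0.5);
    println!("{v:?}"); // [0.5, 1.5, 2.0, 1.0]
}
```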
By stacking previous results, we obtain the *band description matrix (BDM)* ![block formula](https://latex.codecogs.com/png.latex?BDM%5Fj%20%3D%20%5Cleft%28%5Cbegin%7Bmatrix%7D%20%5Cbf%7BV1%7D%5Fj%5E1%20%26%20%5Cbf%7BV1%7D%5Fj%5E2%20%26%20%5Cldots%20%26%20%5Cbf%7BV1%7D%5Fj%5En%20%5C%5C%20%5Cbf%7BV2%7D%5Fj%5E1%20%26%20%5Cbf%7BV2%7D%5Fj%5E2%20%26%20%5Cldots%20%26%20%5Cbf%7BV2%7D%5Fj%5En%20%5C%5C%20%5Cbf%7BV3%7D%5Fj%5E1%20%26%20%5Cbf%7BV3%7D%5Fj%5E2%20%26%20%5Cldots%20%26%20%5Cbf%7BV3%7D%5Fj%5En%20%5C%5C%20%5Cbf%7BV4%7D%5Fj%5E1%20%26%20%5Cbf%7BV4%7D%5Fj%5E2%20%26%20%5Cldots%20%26%20%5Cbf%7BV4%7D%5Fj%5En%20%5Cend%7Bmatrix%7D%20%5Cright%29%20%5Cin%20%5Cmathbb%7BR%7D%5E%7B4%5Ctimes%20n%7D%2C) with ![inline formula](https://latex.codecogs.com/png.latex?n) the number of rows in band ![inline formula](https://latex.codecogs.com/png.latex?B%5Fj): ![block formula](https://latex.codecogs.com/png.latex?n%20%3D%20%5Cbegin%7Bcases%7D%202w%2C%20%26%20j%20%3D%201%7C%7Cm%3B%20%5C%5C%203w%2C%20%26%20%5Cmbox%7Belse%7D%2E%20%5Cend%7Bcases%7D) Each ![inline formula](https://latex.codecogs.com/png.latex?BD%5Fj) can be obtained using the standard deviation vector ![inline formula](https://latex.codecogs.com/png.latex?S%5Fj) and mean vector ![inline formula](https://latex.codecogs.com/png.latex?M%5Fj) of ![inline formula](https://latex.codecogs.com/png.latex?BDM%5FJ). Thus, finally: ![block formula](https://latex.codecogs.com/png.latex?LBD%20%3D%20%28M%5F1%5ET%2C%20S%5F1%5ET%2C%20M%5F2%5ET%2C%20S%5F2%5ET%2C%20%5Cldots%2C%20M%5Fm%5ET%2C%20S%5Fm%5ET%29%5ET%20%5Cin%20%5Cmathbb%7BR%7D%5E%7B8m%7D) Once the LBD has been obtained, it must be converted into a binary form. For such purpose, we consider 32 possible pairs of BD inside it; each couple of BD is compared bit by bit and comparison generates an 8 bit string. Concatenating 32 comparison strings, we get the 256-bit final binary representation of a single LBD. 
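The per-band mean/standard-deviation stacking above can be sketched as follows; `band_descriptor` is a hypothetical helper that collapses one 4 × n band description matrix into its 8-dimensional band descriptor, assuming the population standard deviation:

```rust
/// Collapse a 4 x n band description matrix BDM_j into the
/// 8-dimensional band descriptor BD_j = (M_j^T, S_j^T): the mean of
/// each of the four rows, followed by their standard deviations.
fn band_descriptor(bdm: &[Vec<f64>; 4]) -> [f64; 8] {
    let mut bd = [0.0f64; 8];
    for (i, row) in bdm.iter().enumerate() {
        let n = row.len() as f64;
        let mean = row.iter().sum::<f64>() / n;
        let var = row.iter().map(|x| (x - mean).powi(2)).sum::<f64>() / n;
        bd[i] = mean;           // component of M_j
        bd[4 + i] = var.sqrt(); // component of S_j
    }
    bd
}

fn main() {
    // One band's 4 x 2 matrix (n = 2 columns).
    let bdm = [vec![1.0, 3.0], vec![0.0, 0.0], vec![2.0, 2.0], vec![5.0, 5.0]];
    println!("{:?}", band_descriptor(&bdm));
}
```

The full LBD is then the concatenation of these 8-vectors over all m bands, giving the 8m-dimensional descriptor before binarization.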
Modules --- * prelude Structs --- * BinaryDescriptorClass implements both functionalities for detection of lines and computation of their binary descriptor. * BinaryDescriptorMatcherfurnishes all functionalities for querying a dataset provided by user or internal to class (that user must, anyway, populate) on the model of [features2d_match] * BinaryDescriptor_ParamsList of BinaryDescriptor parameters: * KeyLineA class to represent a line * LSDDetector * LSDParamLines extraction methodology Enums --- * DrawLinesMatchesFlags Constants --- * DrawLinesMatchesFlags_DEFAULTOutput image matrix will be created (Mat::create), i.e. existing memory of output image may be reused. Two source images, matches, and single keylines will be drawn. * DrawLinesMatchesFlags_DRAW_OVER_OUTIMGOutput image matrix will not be created (using Mat::create). Matches will be drawn on existing content of output image. * DrawLinesMatchesFlags_NOT_DRAW_SINGLE_LINESSingle keylines will not be drawn. * MLN10 * RELATIVE_ERROR_FACTOR Traits --- * BinaryDescriptorMatcherTraitMutable methods for crate::line_descriptor::BinaryDescriptorMatcher * BinaryDescriptorMatcherTraitConstConstant methods for crate::line_descriptor::BinaryDescriptorMatcher * BinaryDescriptorTraitMutable methods for crate::line_descriptor::BinaryDescriptor * BinaryDescriptorTraitConstConstant methods for crate::line_descriptor::BinaryDescriptor * BinaryDescriptor_ParamsTraitMutable methods for crate::line_descriptor::BinaryDescriptor_Params * BinaryDescriptor_ParamsTraitConstConstant methods for crate::line_descriptor::BinaryDescriptor_Params * LSDDetectorTraitMutable methods for crate::line_descriptor::LSDDetector * LSDDetectorTraitConstConstant methods for crate::line_descriptor::LSDDetector Functions --- * draw_keylinesDraws keylines. * draw_keylines_defDraws keylines. * draw_line_matchesDraws the found matches of keylines from two images. * draw_line_matches_defDraws the found matches of keylines from two images. 
Type Aliases --- * uint8 * uint16 * uint32 * uint64 Module opencv::mcc === Macbeth Chart module --- Color Correction Model --- ### Introduction ColorCharts are a tool for calibrating the color profile of a camera, which depends not only on the intrinsic and extrinsic parameters of the camera but also on the lighting conditions. Calibration is done by taking an image of a chart whose true color values are known; in the image, those color values change depending on many variables. This gives us the colors as they should be and the colors as they appear in the image, and based on this correspondence a suitable algorithm can be applied to recover the actual color of all the objects present in the image. Modules --- * prelude Structs --- * ColorCorrectionModelCore class of ccm model * MCC_CCheckerCChecker * MCC_CCheckerDetectorA class to find the positions of the ColorCharts in the image. * MCC_CCheckerDrawChecker draw * MCC_DetectorParametersParameters for the detectMarker process: Enums --- * CCM_TYPEEnum of the possible types of ccm. * COLOR_SPACE * CONST_COLORMacbeth and Vinyl ColorChecker with 2deg D50 * DISTANCE_TYPEEnum of possible functions to calculate the distance between colors. * INITIAL_METHOD_TYPEEnum of the possible types of initial method. * LINEAR_TYPELinearization transformation type * MCC_TYPECHARTTYPECHART Constants --- * CCM_3x3The CCM with the shape inline formula performs linear transformation on color values. * CCM_4x3The CCM with the shape inline formula performs affine transformation.
* COLORCHECKER_DigitalSGDigitalSG ColorChecker with 140 squares * COLORCHECKER_MacbethMacbeth ColorChecker * COLORCHECKER_VinylDKK ColorChecker * COLOR_SPACE_AdobeRGBhttps://en.wikipedia.org/wiki/Adobe_RGB_color_space , RGB color space * COLOR_SPACE_AdobeRGBLhttps://en.wikipedia.org/wiki/Adobe_RGB_color_space , linear RGB color space * COLOR_SPACE_AppleRGBhttps://en.wikipedia.org/wiki/RGB_color_space , RGB color space * COLOR_SPACE_AppleRGBLhttps://en.wikipedia.org/wiki/RGB_color_space , linear RGB color space * COLOR_SPACE_DCI_P3_RGBhttps://en.wikipedia.org/wiki/DCI-P3 , RGB color space * COLOR_SPACE_DCI_P3_RGBLhttps://en.wikipedia.org/wiki/DCI-P3 , linear RGB color space * COLOR_SPACE_Lab_A_2non-RGB color space * COLOR_SPACE_Lab_A_10non-RGB color space * COLOR_SPACE_Lab_D50_2non-RGB color space * COLOR_SPACE_Lab_D50_10non-RGB color space * COLOR_SPACE_Lab_D55_2non-RGB color space * COLOR_SPACE_Lab_D55_10non-RGB color space * COLOR_SPACE_Lab_D65_2https://en.wikipedia.org/wiki/CIELAB_color_space , non-RGB color space * COLOR_SPACE_Lab_D65_10non-RGB color space * COLOR_SPACE_Lab_D75_2non-RGB color space * COLOR_SPACE_Lab_D75_10non-RGB color space * COLOR_SPACE_Lab_E_2non-RGB color space * COLOR_SPACE_Lab_E_10non-RGB color space * COLOR_SPACE_ProPhotoRGBhttps://en.wikipedia.org/wiki/ProPhoto_RGB_color_space , RGB color space * COLOR_SPACE_ProPhotoRGBLhttps://en.wikipedia.org/wiki/ProPhoto_RGB_color_space , linear RGB color space * COLOR_SPACE_REC_709_RGBhttps://en.wikipedia.org/wiki/Rec._709 , RGB color space * COLOR_SPACE_REC_709_RGBLhttps://en.wikipedia.org/wiki/Rec._709 , linear RGB color space * COLOR_SPACE_REC_2020_RGBhttps://en.wikipedia.org/wiki/Rec._2020 , RGB color space * COLOR_SPACE_REC_2020_RGBLhttps://en.wikipedia.org/wiki/Rec._2020 , linear RGB color space * COLOR_SPACE_WideGamutRGBhttps://en.wikipedia.org/wiki/Wide-gamut_RGB_color_space , RGB color space * COLOR_SPACE_WideGamutRGBLhttps://en.wikipedia.org/wiki/Wide-gamut_RGB_color_space , linear RGB 
color space * COLOR_SPACE_XYZ_A_2non-RGB color space * COLOR_SPACE_XYZ_A_10non-RGB color space * COLOR_SPACE_XYZ_D50_2non-RGB color space * COLOR_SPACE_XYZ_D50_10non-RGB color space * COLOR_SPACE_XYZ_D55_2non-RGB color space * COLOR_SPACE_XYZ_D55_10non-RGB color space * COLOR_SPACE_XYZ_D65_2https://en.wikipedia.org/wiki/CIE_1931_color_space , non-RGB color space * COLOR_SPACE_XYZ_D65_10non-RGB color space * COLOR_SPACE_XYZ_D75_2non-RGB color space * COLOR_SPACE_XYZ_D75_10non-RGB color space * COLOR_SPACE_XYZ_E_2non-RGB color space * COLOR_SPACE_XYZ_E_10non-RGB color space * COLOR_SPACE_sRGBhttps://en.wikipedia.org/wiki/SRGB , RGB color space * COLOR_SPACE_sRGBLhttps://en.wikipedia.org/wiki/SRGB , linear RGB color space * DISTANCE_CIE76The 1976 formula is the first formula that related a measured color difference to a known set of CIELAB coordinates. * DISTANCE_CIE94_GRAPHIC_ARTSThe 1976 definition was extended to address perceptual non-uniformities. * DISTANCE_CIE94_TEXTILES * DISTANCE_CIE2000 * DISTANCE_CMC_1TO1In 1984, the Colour Measurement Committee of the Society of Dyers and Colourists defined a difference measure, also based on the L*C*h color model. * DISTANCE_CMC_2TO1 * DISTANCE_RGBEuclidean distance of rgb color space * DISTANCE_RGBLEuclidean distance of rgbl color space * INITIAL_METHOD_LEAST_SQUAREthe least square method is an optimal solution under the linear RGB distance function * INITIAL_METHOD_WHITE_BALANCEThe white balance method. 
The initial value is: * LINEARIZATION_COLORLOGPOLYFITlogarithmic polynomial fitting of the channels respectively; a value must also be assigned to deg * LINEARIZATION_COLORPOLYFITpolynomial fitting of the channels respectively; a value must also be assigned to deg * LINEARIZATION_GAMMAgamma correction; a value must also be assigned to gamma * LINEARIZATION_GRAYLOGPOLYFITgrayscale logarithmic polynomial fitting; values must also be assigned to deg and dst_whites * LINEARIZATION_GRAYPOLYFITgrayscale polynomial fitting; values must also be assigned to deg and dst_whites * LINEARIZATION_IDENTITYno change is made * MCC_MCC24Standard Macbeth Chart with 24 squares * MCC_SG140DigitalSG with 140 squares * MCC_VINYL18DKK color chart with 12 squares and 6 rectangles Traits --- * ColorCorrectionModelTraitMutable methods for crate::mcc::ColorCorrectionModel * ColorCorrectionModelTraitConstConstant methods for crate::mcc::ColorCorrectionModel * MCC_CCheckerDetectorTraitMutable methods for crate::mcc::MCC_CCheckerDetector * MCC_CCheckerDetectorTraitConstConstant methods for crate::mcc::MCC_CCheckerDetector * MCC_CCheckerDrawTraitMutable methods for crate::mcc::MCC_CCheckerDraw * MCC_CCheckerDrawTraitConstConstant methods for crate::mcc::MCC_CCheckerDraw * MCC_CCheckerTraitMutable methods for crate::mcc::MCC_CChecker * MCC_CCheckerTraitConstConstant methods for crate::mcc::MCC_CChecker * MCC_DetectorParametersTraitMutable methods for crate::mcc::MCC_DetectorParameters * MCC_DetectorParametersTraitConstConstant methods for crate::mcc::MCC_DetectorParameters Module opencv::ml === Machine Learning --- The Machine Learning Library (MLL) is a set of classes and functions for statistical classification, regression, and clustering of data. Most of the classification and regression algorithms are implemented as C++ classes.
As the algorithms have different sets of features (like an ability to handle missing measurements or categorical input variables), there is little common ground between the classes. This common ground is defined by the class cv::ml::StatModel that all the other ML classes are derived from. See detailed overview here: [ml_intro]. Modules --- * prelude Structs --- * ANN_MLPArtificial Neural Networks - Multi-Layer Perceptrons. * BoostBoosted tree classifier derived from DTrees * DTreesThe class represents a single decision tree or a collection of decision trees. * DTrees_NodeThe class represents a decision tree node. * DTrees_SplitThe class represents a split in a decision tree. * EMThe class implements the Expectation Maximization algorithm. * KNearestThe class implements K-Nearest Neighbors model * LogisticRegressionImplements Logistic Regression classifier. * NormalBayesClassifierBayes classifier for normally distributed data. * ParamGridThe structure represents the logarithmic grid range of statmodel parameters. * RTreesThe class implements the random forest predictor. * SVMSupport Vector Machines. * SVMSGDStochastic Gradient Descent SVM classifier * SVM_Kernel * StatModelBase class for statistical models in OpenCV ML. * TrainDataClass encapsulating training data. Enums --- * ANN_MLP_ActivationFunctionspossible activation functions * ANN_MLP_TrainFlagsTrain options * ANN_MLP_TrainingMethodsAvailable training methods * Boost_TypesBoosting type. Gentle AdaBoost and Real AdaBoost are often the preferable choices. * DTrees_FlagsPredict options * EM_TypesType of covariation matrices * ErrorTypesError types * KNearest_TypesImplementations of KNearest algorithm * LogisticRegression_MethodsTraining methods * LogisticRegression_RegKindsRegularization kinds * SVMSGD_MarginTypeMargin type. * SVMSGD_SvmsgdTypeSVMSGD type. ASGD is often the preferable choice.
* SVM_KernelTypes%SVM kernel type * SVM_ParamTypes%SVM params type * SVM_Types%SVM type * SampleTypesSample types * StatModel_FlagsPredict options * VariableTypesVariable types Constants --- * ANN_MLP_ANNEALThe simulated annealing algorithm. See Kirkpatrick83 for details. * ANN_MLP_BACKPROPThe back-propagation algorithm. * ANN_MLP_GAUSSIANGaussian function: inline formula * ANN_MLP_IDENTITYIdentity function: inline formula * ANN_MLP_LEAKYRELULeaky ReLU function: for x>0 inline formula and x<=0 inline formula * ANN_MLP_NO_INPUT_SCALEDo not normalize the input vectors. If this flag is not set, the training algorithm normalizes each input feature independently, shifting its mean value to 0 and making the standard deviation equal to 1. If the network is assumed to be updated frequently, the new training data could be much different from original one. In this case, you should take care of proper normalization. * ANN_MLP_NO_OUTPUT_SCALEDo not normalize the output vectors. If the flag is not set, the training algorithm normalizes each output feature independently, by transforming it to the certain range depending on the used activation function. * ANN_MLP_RELUReLU function: inline formula * ANN_MLP_RPROPThe RPROP algorithm. See RPROP93 for details. * ANN_MLP_SIGMOID_SYMSymmetrical sigmoid: inline formula * ANN_MLP_UPDATE_WEIGHTSUpdate the network weights, rather than compute them from scratch. In the latter case the weights are initialized using the Nguyen-Widrow algorithm. * Boost_DISCRETEDiscrete AdaBoost. * Boost_GENTLEGentle AdaBoost. It puts less weight on outlier data points and for that reason is often good with regression data. * Boost_LOGITLogitBoost. It can produce good regression fits. * Boost_REALReal AdaBoost. It is a technique that utilizes confidence-rated predictions and works well with categorical data. 
* COL_SAMPLEeach training sample occupies a column of samples * DTrees_PREDICT_AUTO * DTrees_PREDICT_MASK * DTrees_PREDICT_MAX_VOTE * DTrees_PREDICT_SUM * EM_COV_MAT_DEFAULTA symmetric positively defined matrix. The number of free parameters in each matrix is about inline formula. It is not recommended to use this option, unless there is pretty accurate initial estimation of the parameters and/or a huge number of training samples. * EM_COV_MAT_DIAGONALA diagonal matrix with positive diagonal elements. The number of free parameters is d for each matrix. This is most commonly used option yielding good estimation results. * EM_COV_MAT_GENERICA symmetric positively defined matrix. The number of free parameters in each matrix is about inline formula. It is not recommended to use this option, unless there is pretty accurate initial estimation of the parameters and/or a huge number of training samples. * EM_COV_MAT_SPHERICALA scaled identity matrix inline formula. There is the only parameter inline formula to be estimated for each matrix. The option may be used in special cases, when the constraint is relevant, or as a first step in the optimization (for example in case when the data is preprocessed with PCA). The results of such preliminary estimation may be passed again to the optimization procedure, this time with covMatType=EM::COV_MAT_DIAGONAL. * EM_DEFAULT_MAX_ITERS * EM_DEFAULT_NCLUSTERS * EM_START_AUTO_STEP * EM_START_E_STEP * EM_START_M_STEP * KNearest_BRUTE_FORCE * KNearest_KDTREE * LogisticRegression_BATCH * LogisticRegression_MINI_BATCHSet MiniBatchSize to a positive integer when using this method. * LogisticRegression_REG_DISABLERegularization disabled * LogisticRegression_REG_L1%L1 norm * LogisticRegression_REG_L2%L2 norm * ROW_SAMPLEeach training sample is a row of samples * SVMSGD_ASGDAverage Stochastic Gradient Descent * SVMSGD_HARD_MARGINMore accurate for the case of linearly separable sets. 
* SVMSGD_SGDStochastic Gradient Descent * SVMSGD_SOFT_MARGINGeneral case, suits to the case of non-linearly separable sets, allows outliers. * SVM_C * SVM_CHI2Exponential Chi2 kernel, similar to the RBF kernel: inline formula. * SVM_COEF * SVM_CUSTOMReturned by SVM::getKernelType in case when custom kernel has been set * SVM_C_SVCC-Support Vector Classification. n-class classification (n inline formula 2), allows imperfect separation of classes with penalty multiplier C for outliers. * SVM_DEGREE * SVM_EPS_SVRinline formula-Support Vector Regression. The distance between feature vectors from the training set and the fitting hyper-plane must be less than p. For outliers the penalty multiplier C is used. * SVM_GAMMA * SVM_INTERHistogram intersection kernel. A fast kernel. inline formula. * SVM_LINEARLinear kernel. No mapping is done, linear discrimination (or regression) is done in the original feature space. It is the fastest option. inline formula. * SVM_NU * SVM_NU_SVCinline formula-Support Vector Classification. n-class classification with possible imperfect separation. Parameter inline formula (in the range 0..1, the larger the value, the smoother the decision boundary) is used instead of C. * SVM_NU_SVRinline formula-Support Vector Regression. inline formula is used instead of p. See LibSVM for details. * SVM_ONE_CLASSDistribution Estimation (One-class %SVM). All the training data are from the same class, %SVM builds a boundary that separates the class from the rest of the feature space. * SVM_P * SVM_POLYPolynomial kernel: inline formula. * SVM_RBFRadial basis function (RBF), a good choice in most cases. inline formula. * SVM_SIGMOIDSigmoid kernel: inline formula. 
* StatModel_COMPRESSED_INPUT * StatModel_PREPROCESSED_INPUT * StatModel_RAW_OUTPUTmakes the method return the raw results (the sum), not the class label * StatModel_UPDATE_MODEL * TEST_ERROR * TRAIN_ERROR * VAR_CATEGORICALcategorical variables * VAR_NUMERICALsame as VAR_ORDERED * VAR_ORDEREDordered variables Traits --- * ANN_MLPTraitMutable methods for crate::ml::ANN_MLP * ANN_MLPTraitConstConstant methods for crate::ml::ANN_MLP * BoostTraitMutable methods for crate::ml::Boost * BoostTraitConstConstant methods for crate::ml::Boost * DTreesTraitMutable methods for crate::ml::DTrees * DTreesTraitConstConstant methods for crate::ml::DTrees * DTrees_NodeTraitMutable methods for crate::ml::DTrees_Node * DTrees_NodeTraitConstConstant methods for crate::ml::DTrees_Node * DTrees_SplitTraitMutable methods for crate::ml::DTrees_Split * DTrees_SplitTraitConstConstant methods for crate::ml::DTrees_Split * EMTraitMutable methods for crate::ml::EM * EMTraitConstConstant methods for crate::ml::EM * KNearestTraitMutable methods for crate::ml::KNearest * KNearestTraitConstConstant methods for crate::ml::KNearest * LogisticRegressionTraitMutable methods for crate::ml::LogisticRegression * LogisticRegressionTraitConstConstant methods for crate::ml::LogisticRegression * NormalBayesClassifierTraitMutable methods for crate::ml::NormalBayesClassifier * NormalBayesClassifierTraitConstConstant methods for crate::ml::NormalBayesClassifier * ParamGridTraitMutable methods for crate::ml::ParamGrid * ParamGridTraitConstConstant methods for crate::ml::ParamGrid * RTreesTraitMutable methods for crate::ml::RTrees * RTreesTraitConstConstant methods for crate::ml::RTrees * SVMSGDTraitMutable methods for crate::ml::SVMSGD * SVMSGDTraitConstConstant methods for crate::ml::SVMSGD * SVMTraitMutable methods for crate::ml::SVM * SVMTraitConstConstant methods for crate::ml::SVM * SVM_KernelTraitMutable methods for crate::ml::SVM_Kernel * SVM_KernelTraitConstConstant methods for crate::ml::SVM_Kernel * 
StatModelTraitMutable methods for crate::ml::StatModel * StatModelTraitConstConstant methods for crate::ml::StatModel * TrainDataTraitMutable methods for crate::ml::TrainData * TrainDataTraitConstConstant methods for crate::ml::TrainData Functions --- * create_concentric_spheres_test_setCreates test set * rand_mv_normalGenerates *sample* from multivariate normal distribution Type Aliases --- * ANN_MLP_ANNEAL Module opencv::objdetect === Object Detection --- Cascade Classifier for Object Detection --- The object detector described below has been initially proposed by Paul Viola Viola01 and improved by <NAME> . First, a classifier (namely a *cascade of boosted classifiers working with haar-like features*) is trained with a few hundred sample views of a particular object (i.e., a face or a car), called positive examples, that are scaled to the same size (say, 20x20), and negative examples - arbitrary images of the same size. After a classifier is trained, it can be applied to a region of interest (of the same size as used during the training) in an input image. The classifier outputs a “1” if the region is likely to show the object (i.e., face/car), and “0” otherwise. To search for the object in the whole image one can move the search window across the image and check every location using the classifier. The classifier is designed so that it can be easily “resized” in order to be able to find the objects of interest at different sizes, which is more efficient than resizing the image itself. So, to find an object of an unknown size in the image the scan procedure should be done several times at different scales. The word “cascade” in the classifier name means that the resultant classifier consists of several simpler classifiers (*stages*) that are applied subsequently to a region of interest until at some stage the candidate is rejected or all the stages are passed. 
The word “boosted” means that the classifiers at every stage of the cascade are complex themselves and they are built out of basic classifiers using one of four different boosting techniques (weighted voting). Currently Discrete AdaBoost, Real AdaBoost, Gentle AdaBoost and LogitBoost are supported. The basic classifiers are decision-tree classifiers with at least 2 leaves. Haar-like features are the input to the basic classifiers, and are calculated as described below. The current algorithm uses the following Haar-like features: ![image](https://docs.opencv.org/4.8.1/haarfeatures.png) The feature used in a particular classifier is specified by its shape (1a, 2b etc.), position within the region of interest and the scale (this scale is not the same as the scale used at the detection stage, though these two scales are multiplied). For example, in the case of the third line feature (2c) the response is calculated as the difference between the sum of image pixels under the rectangle covering the whole feature (including the two white stripes and the black stripe in the middle) and the sum of the image pixels under the black stripe multiplied by 3 in order to compensate for the differences in the size of areas. The sums of pixel values over rectangular regions are calculated rapidly using integral images (see below and the integral description). Check [tutorial_cascade_classifier] “the corresponding tutorial” for more details. The following reference is for the detection part only. There is a separate application called opencv_traincascade that can train a cascade of boosted classifiers from a set of samples. Note: In the new C++ interface it is also possible to use LBP (local binary pattern) features in addition to Haar-like features. .. [Viola01] <NAME> and <NAME>. Rapid Object Detection using a Boosted Cascade of Simple Features. IEEE CVPR, 2001.
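The rectangle sums mentioned above can be evaluated in constant time per query once an integral image (summed-area table) has been built. A minimal sketch of the idea (illustrative only, not the OpenCV `integral` API):

```rust
/// Build a summed-area table with one extra zero row/column so that
/// sat[y][x] holds the sum of all pixels above and to the left of (x, y).
fn integral(img: &[Vec<u64>]) -> Vec<Vec<u64>> {
    let h = img.len();
    let w = if h > 0 { img[0].len() } else { 0 };
    let mut sat = vec![vec![0u64; w + 1]; h + 1];
    for y in 0..h {
        for x in 0..w {
            sat[y + 1][x + 1] = img[y][x] + sat[y][x + 1] + sat[y + 1][x] - sat[y][x];
        }
    }
    sat
}

/// Sum over the rectangle with top-left corner (x, y), width w and
/// height h, using only four table lookups.
fn rect_sum(sat: &[Vec<u64>], x: usize, y: usize, w: usize, h: usize) -> u64 {
    sat[y + h][x + w] + sat[y][x] - sat[y][x + w] - sat[y + h][x]
}

fn main() {
    let img = vec![vec![1u64, 2, 3], vec![4, 5, 6], vec![7, 8, 9]];
    let sat = integral(&img);
    // Sum of the full image and of the bottom-right 2x2 block.
    println!("{} {}", rect_sum(&sat, 0, 0, 3, 3), rect_sum(&sat, 1, 1, 2, 2)); // 45 28
}
```

For the 2c feature, for instance, the response described above would be one `rect_sum` over the whole feature minus three times a `rect_sum` over the middle black stripe.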
The paper is available online at https://github.com/SvHey/thesis/blob/master/Literature/ObjectDetection/violaJones_CVPR2001.pdf HOG (Histogram of Oriented Gradients) descriptor and object detector --- Barcode detection and decoding --- QRCode detection and encoding --- DNN-based face detection and recognition --- Check [tutorial_dnn_face] “the corresponding tutorial” for more details. Common functions and classes --- ArUco markers and boards detection for robust camera pose estimation --- ``` ArUco Marker Detection Square fiducial markers (also known as Augmented Reality Markers) are useful for easy, fast and robust camera pose estimation. The main functionality of ArucoDetector class is detection of markers in an image. If the markers are grouped as a board, then you can try to recover the missing markers with ArucoDetector::refineDetectedMarkers(). ArUco markers can also be used for advanced chessboard corner finding. To do this, group the markers in the CharucoBoard and find the corners of the chessboard with the CharucoDetector::detectBoard(). The implementation is based on the ArUco Library by <NAME> and <NAME> [Aruco2014](https://docs.opencv.org/4.8.1/d0/de3/citelist.html#CITEREF_Aruco2014). Markers can also be detected based on the AprilTag 2 [wang2016iros](https://docs.opencv.org/4.8.1/d0/de3/citelist.html#CITEREF_wang2016iros) fiducial detection method. ``` ### See also Aruco2014 This code has been originally developed by <NAME> as a project for Google Summer of Code 2015 (GSoC 15). Modules --- * prelude Structs --- * ArucoDetectorThe main functionality of ArucoDetector class is detection of markers in an image with detectMarkers() method. 
* BarcodeDetector * BaseCascadeClassifier * BaseCascadeClassifier_MaskGenerator * BoardBoard of ArUco markers * CascadeClassifierCascade classifier class for object detection; its usage is demonstrated in samples/cpp/facedetect.cpp * CharucoBoardChArUco board is a planar chessboard where the markers are placed inside the white squares of a chessboard. * CharucoDetector * CharucoParameters * DetectionBasedTracker * DetectionBasedTracker_ExtObject * DetectionBasedTracker_IDetector * DetectionBasedTracker_Parameters * DetectionROIstruct for detection region of interest (ROI) * DetectorParametersstruct DetectorParameters is used by ArucoDetector * DictionaryDictionary/Set of markers, it contains the inner codification * FaceDetectorYNDNN-based face detector * FaceRecognizerSFDNN-based face recognizer * GraphicalCodeDetector * GridBoardPlanar board with grid arrangement of markers * HOGDescriptorImplementation of HOG (Histogram of Oriented Gradients) descriptor and object detector. * QRCodeDetector * QRCodeDetectorAruco * QRCodeDetectorAruco_Params * QRCodeEncoder * QRCodeEncoder_ParamsQR code encoder parameters. * RefineParametersstruct RefineParameters is used by ArucoDetector * SimilarRectsThis class is used for grouping object candidates detected by Cascade Classifier, HOG etc.
Enums --- * CornerRefineMethod * DetectionBasedTracker_ObjectStatus * FaceRecognizerSF_DisTypeDefinition of distance used for calculating the distance between two face features * HOGDescriptor_DescriptorStorageFormat * HOGDescriptor_HistogramNormType * PredefinedDictionaryTypePredefined markers dictionaries/sets * QRCodeEncoder_CorrectionLevel * QRCodeEncoder_ECIEncodings * QRCodeEncoder_EncodeMode Constants --- * CASCADE_DO_CANNY_PRUNING * CASCADE_DO_ROUGH_SEARCH * CASCADE_FIND_BIGGEST_OBJECT * CASCADE_SCALE_IMAGE * CORNER_REFINE_APRILTAGTag and corners detection based on the AprilTag 2 approach wang2016iros * CORNER_REFINE_CONTOURArUco approach and refine the corners locations using the contour-points line fitting * CORNER_REFINE_NONETag and corners detection based on the ArUco approach * CORNER_REFINE_SUBPIXArUco approach and refine the corners locations using corner subpixel accuracy * DICT_4X4_504x4 bits, minimum hamming distance between any two codes = 4, 50 codes * DICT_4X4_1004x4 bits, minimum hamming distance between any two codes = 3, 100 codes * DICT_4X4_2504x4 bits, minimum hamming distance between any two codes = 3, 250 codes * DICT_4X4_10004x4 bits, minimum hamming distance between any two codes = 2, 1000 codes * DICT_5X5_505x5 bits, minimum hamming distance between any two codes = 8, 50 codes * DICT_5X5_1005x5 bits, minimum hamming distance between any two codes = 7, 100 codes * DICT_5X5_2505x5 bits, minimum hamming distance between any two codes = 6, 250 codes * DICT_5X5_10005x5 bits, minimum hamming distance between any two codes = 5, 1000 codes * DICT_6X6_506x6 bits, minimum hamming distance between any two codes = 13, 50 codes * DICT_6X6_1006x6 bits, minimum hamming distance between any two codes = 12, 100 codes * DICT_6X6_2506x6 bits, minimum hamming distance between any two codes = 11, 250 codes * DICT_6X6_10006x6 bits, minimum hamming distance between any two codes = 9, 1000 codes * DICT_7X7_507x7 bits, minimum hamming distance between any two 
codes = 19, 50 codes * DICT_7X7_1007x7 bits, minimum hamming distance between any two codes = 18, 100 codes * DICT_7X7_2507x7 bits, minimum hamming distance between any two codes = 17, 250 codes * DICT_7X7_10007x7 bits, minimum hamming distance between any two codes = 14, 1000 codes * DICT_APRILTAG_16h54x4 bits, minimum hamming distance between any two codes = 5, 30 codes * DICT_APRILTAG_25h95x5 bits, minimum hamming distance between any two codes = 9, 35 codes * DICT_APRILTAG_36h106x6 bits, minimum hamming distance between any two codes = 10, 2320 codes * DICT_APRILTAG_36h116x6 bits, minimum hamming distance between any two codes = 11, 587 codes * DICT_ARUCO_MIP_36h126x6 bits, minimum hamming distance between any two codes = 12, 250 codes * DICT_ARUCO_ORIGINAL6x6 bits, minimum hamming distance between any two codes = 3, 1024 codes * DetectionBasedTracker_DETECTED * DetectionBasedTracker_DETECTED_NOT_SHOWN_YET * DetectionBasedTracker_DETECTED_TEMPORARY_LOST * DetectionBasedTracker_WRONG_OBJECT * FaceRecognizerSF_FR_COSINE * FaceRecognizerSF_FR_NORM_L2 * HOGDescriptor_DEFAULT_NLEVELSDefault nlevels value. 
* HOGDescriptor_DESCR_FORMAT_COL_BY_COL * HOGDescriptor_DESCR_FORMAT_ROW_BY_ROW * HOGDescriptor_L2HysDefault histogramNormType * QRCodeEncoder_CORRECT_LEVEL_H * QRCodeEncoder_CORRECT_LEVEL_L * QRCodeEncoder_CORRECT_LEVEL_M * QRCodeEncoder_CORRECT_LEVEL_Q * QRCodeEncoder_ECI_UTF8 * QRCodeEncoder_MODE_ALPHANUMERIC * QRCodeEncoder_MODE_AUTO * QRCodeEncoder_MODE_BYTE * QRCodeEncoder_MODE_ECI * QRCodeEncoder_MODE_KANJI * QRCodeEncoder_MODE_NUMERIC * QRCodeEncoder_MODE_STRUCTURED_APPEND Traits --- * ArucoDetectorTraitMutable methods for crate::objdetect::ArucoDetector * ArucoDetectorTraitConstConstant methods for crate::objdetect::ArucoDetector * BarcodeDetectorTraitMutable methods for crate::objdetect::BarcodeDetector * BarcodeDetectorTraitConstConstant methods for crate::objdetect::BarcodeDetector * BaseCascadeClassifierTraitMutable methods for crate::objdetect::BaseCascadeClassifier * BaseCascadeClassifierTraitConstConstant methods for crate::objdetect::BaseCascadeClassifier * BaseCascadeClassifier_MaskGeneratorTraitMutable methods for crate::objdetect::BaseCascadeClassifier_MaskGenerator * BaseCascadeClassifier_MaskGeneratorTraitConstConstant methods for crate::objdetect::BaseCascadeClassifier_MaskGenerator * BoardTraitMutable methods for crate::objdetect::Board * BoardTraitConstConstant methods for crate::objdetect::Board * CascadeClassifierTraitMutable methods for crate::objdetect::CascadeClassifier * CascadeClassifierTraitConstConstant methods for crate::objdetect::CascadeClassifier * CharucoBoardTraitMutable methods for crate::objdetect::CharucoBoard * CharucoBoardTraitConstConstant methods for crate::objdetect::CharucoBoard * CharucoDetectorTraitMutable methods for crate::objdetect::CharucoDetector * CharucoDetectorTraitConstConstant methods for crate::objdetect::CharucoDetector * CharucoParametersTraitMutable methods for crate::objdetect::CharucoParameters * CharucoParametersTraitConstConstant methods for crate::objdetect::CharucoParameters * 
DetectionBasedTrackerTraitMutable methods for crate::objdetect::DetectionBasedTracker * DetectionBasedTrackerTraitConstConstant methods for crate::objdetect::DetectionBasedTracker * DetectionBasedTracker_ExtObjectTraitMutable methods for crate::objdetect::DetectionBasedTracker_ExtObject * DetectionBasedTracker_ExtObjectTraitConstConstant methods for crate::objdetect::DetectionBasedTracker_ExtObject * DetectionBasedTracker_IDetectorTraitMutable methods for crate::objdetect::DetectionBasedTracker_IDetector * DetectionBasedTracker_IDetectorTraitConstConstant methods for crate::objdetect::DetectionBasedTracker_IDetector * DetectionBasedTracker_ParametersTraitMutable methods for crate::objdetect::DetectionBasedTracker_Parameters * DetectionBasedTracker_ParametersTraitConstConstant methods for crate::objdetect::DetectionBasedTracker_Parameters * DetectionROITraitMutable methods for crate::objdetect::DetectionROI * DetectionROITraitConstConstant methods for crate::objdetect::DetectionROI * DetectorParametersTraitMutable methods for crate::objdetect::DetectorParameters * DetectorParametersTraitConstConstant methods for crate::objdetect::DetectorParameters * DictionaryTraitMutable methods for crate::objdetect::Dictionary * DictionaryTraitConstConstant methods for crate::objdetect::Dictionary * FaceDetectorYNTraitMutable methods for crate::objdetect::FaceDetectorYN * FaceDetectorYNTraitConstConstant methods for crate::objdetect::FaceDetectorYN * FaceRecognizerSFTraitMutable methods for crate::objdetect::FaceRecognizerSF * FaceRecognizerSFTraitConstConstant methods for crate::objdetect::FaceRecognizerSF * GraphicalCodeDetectorTraitMutable methods for crate::objdetect::GraphicalCodeDetector * GraphicalCodeDetectorTraitConstConstant methods for crate::objdetect::GraphicalCodeDetector * GridBoardTraitMutable methods for crate::objdetect::GridBoard * GridBoardTraitConstConstant methods for crate::objdetect::GridBoard * HOGDescriptorTraitMutable methods for 
crate::objdetect::HOGDescriptor * HOGDescriptorTraitConstConstant methods for crate::objdetect::HOGDescriptor * QRCodeDetectorArucoTraitMutable methods for crate::objdetect::QRCodeDetectorAruco * QRCodeDetectorArucoTraitConstConstant methods for crate::objdetect::QRCodeDetectorAruco * QRCodeDetectorTraitMutable methods for crate::objdetect::QRCodeDetector * QRCodeDetectorTraitConstConstant methods for crate::objdetect::QRCodeDetector * QRCodeEncoderTraitMutable methods for crate::objdetect::QRCodeEncoder * QRCodeEncoderTraitConstConstant methods for crate::objdetect::QRCodeEncoder * SimilarRectsTraitMutable methods for crate::objdetect::SimilarRects * SimilarRectsTraitConstConstant methods for crate::objdetect::SimilarRects Functions --- * create_face_detection_mask_generator * draw_detected_corners_charucoDraws a set of Charuco corners * draw_detected_corners_charuco_defDraws a set of Charuco corners * draw_detected_diamondsDraw a set of detected ChArUco Diamond markers * draw_detected_diamonds_defDraw a set of detected ChArUco Diamond markers * draw_detected_markersDraw detected markers in image * draw_detected_markers_defDraw detected markers in image * extend_dictionaryExtend base dictionary by new nMarkers * extend_dictionary_defExtend base dictionary by new nMarkers * generate_image_markerGenerate a canonical marker image * generate_image_marker_defGenerate a canonical marker image * get_predefined_dictionaryReturns one of the predefined dictionaries defined in PredefinedDictionaryType * get_predefined_dictionary_i32Returns one of the predefined dictionaries referenced by DICT_*. * group_rectanglesGroups the object candidate rectangles. * group_rectangles_defGroups the object candidate rectangles. * group_rectangles_levelsGroups the object candidate rectangles. * group_rectangles_levels_def@overload * group_rectangles_levelweightsGroups the object candidate rectangles. 
* group_rectangles_meanshiftThis is an overloaded member function, provided for convenience. It differs from the above function only in what argument(s) it accepts. * group_rectangles_meanshift_def@overload * group_rectangles_weightsGroups the object candidate rectangles. * group_rectangles_weights_def@overload Type Aliases --- * DetectionBasedTracker_Object Module opencv::optflow === Optical Flow Algorithms --- Dense optical flow algorithms compute motion for each point: * cv::optflow::calcOpticalFlowSF * cv::optflow::createOptFlow_DeepFlow Motion templates are an alternative technique for detecting motion and computing its direction. See samples/motempl.py. * cv::motempl::updateMotionHistory * cv::motempl::calcMotionGradient * cv::motempl::calcGlobalOrientation * cv::motempl::segmentMotion Functions reading and writing .flo files in “Middlebury” format, see: http://vision.middlebury.edu/flow/code/flow-code/README.txt * cv::optflow::readOpticalFlow * cv::optflow::writeOpticalFlow Modules --- * prelude Structs --- * DenseRLOFOpticalFlowFast dense optical flow computation based on robust local optical flow (RLOF) algorithms and sparse-to-dense interpolation scheme. * DualTVL1OpticalFlow“Dual TV L1” Optical Flow Algorithm. * GPCDetails * GPCMatchingParamsClass encapsulating matching parameters. * GPCPatchDescriptor * GPCPatchSample * GPCTrainingParamsClass encapsulating training parameters. * GPCTrainingSamplesClass encapsulating training samples. * GPCTreeClass for individual tree. * GPCTree_Node * OpticalFlowPCAFlowPCAFlow algorithm. * PCAPriorThis class can be used for imposing a learned prior on the resulting optical flow. The solution will be regularized according to this prior. You need to generate an appropriate prior file with the “learn_prior.py” script beforehand. * RLOFOpticalFlowParameterThis is used to store and set up the parameters of the robust local optical flow (RLOF) algorithm.
* SparseRLOFOpticalFlowClass used for calculating sparse optical flow and feature tracking with robust local optical flow (RLOF) algorithms. Enums --- * GPCDescTypeDescriptor types for the Global Patch Collider. * InterpolationType * SolverType * SupportRegionType Constants --- * GPC_DESCRIPTOR_DCTBetter quality but slow * GPC_DESCRIPTOR_WHTWorse quality but much faster * INTERP_EPIC< Edge-preserving interpolation using ximgproc::EdgeAwareInterpolator, see Revaud2015,Geistert2016. * INTERP_GEO< Fast geodesic interpolation, see Geistert2016 * INTERP_RIC< SLIC based robust interpolation using ximgproc::RICInterpolator, see Hu2017. * SR_CROSS< Apply an adaptive support region obtained by cross-based segmentation as described in Senst2014 * SR_FIXED< Apply a constant support region * ST_BILINEAR< Apply optimized iterative refinement based bilinear equation solutions as described in Senst2013 * ST_STANDART< Apply standard iterative refinement Traits --- * DenseRLOFOpticalFlowTraitMutable methods for crate::optflow::DenseRLOFOpticalFlow * DenseRLOFOpticalFlowTraitConstConstant methods for crate::optflow::DenseRLOFOpticalFlow * DualTVL1OpticalFlowTraitMutable methods for crate::optflow::DualTVL1OpticalFlow * DualTVL1OpticalFlowTraitConstConstant methods for crate::optflow::DualTVL1OpticalFlow * GPCDetailsTraitMutable methods for crate::optflow::GPCDetails * GPCDetailsTraitConstConstant methods for crate::optflow::GPCDetails * GPCPatchDescriptorTraitMutable methods for crate::optflow::GPCPatchDescriptor * GPCPatchDescriptorTraitConstConstant methods for crate::optflow::GPCPatchDescriptor * GPCPatchSampleTraitMutable methods for crate::optflow::GPCPatchSample * GPCPatchSampleTraitConstConstant methods for crate::optflow::GPCPatchSample * GPCTrainingSamplesTraitMutable methods for crate::optflow::GPCTrainingSamples * GPCTrainingSamplesTraitConstConstant methods for crate::optflow::GPCTrainingSamples * GPCTreeTraitMutable methods for crate::optflow::GPCTree *
GPCTreeTraitConstConstant methods for crate::optflow::GPCTree * OpticalFlowPCAFlowTraitMutable methods for crate::optflow::OpticalFlowPCAFlow * OpticalFlowPCAFlowTraitConstConstant methods for crate::optflow::OpticalFlowPCAFlow * PCAPriorTraitMutable methods for crate::optflow::PCAPrior * PCAPriorTraitConstConstant methods for crate::optflow::PCAPrior * RLOFOpticalFlowParameterTraitMutable methods for crate::optflow::RLOFOpticalFlowParameter * RLOFOpticalFlowParameterTraitConstConstant methods for crate::optflow::RLOFOpticalFlowParameter * SparseRLOFOpticalFlowTraitMutable methods for crate::optflow::SparseRLOFOpticalFlow * SparseRLOFOpticalFlowTraitConstConstant methods for crate::optflow::SparseRLOFOpticalFlow Functions --- * calc_global_orientationCalculates a global motion orientation in a selected region. * calc_motion_gradientCalculates a gradient orientation of a motion history image. * calc_motion_gradient_defCalculates a gradient orientation of a motion history image. * calc_optical_flow_dense_rlofFast dense optical flow computation based on robust local optical flow (RLOF) algorithms and sparse-to-dense interpolation scheme. * calc_optical_flow_dense_rlof_defFast dense optical flow computation based on robust local optical flow (RLOF) algorithms and sparse-to-dense interpolation scheme. * calc_optical_flow_sfCalculate an optical flow using “SimpleFlow” algorithm. * calc_optical_flow_sf_1Calculate an optical flow using “SimpleFlow” algorithm. * calc_optical_flow_sparse_rlofCalculates fast optical flow for a sparse feature set using the robust local optical flow (RLOF) similar to optflow::calcOpticalFlowPyrLK(). * calc_optical_flow_sparse_rlof_defCalculates fast optical flow for a sparse feature set using the robust local optical flow (RLOF) similar to optflow::calcOpticalFlowPyrLK(). * calc_optical_flow_sparse_to_denseFast dense optical flow based on PyrLK sparse matches interpolation. 
* calc_optical_flow_sparse_to_dense_defFast dense optical flow based on PyrLK sparse matches interpolation. * create_opt_flow_deep_flowDeepFlow optical flow algorithm implementation. * create_opt_flow_dense_rlofAdditional interface to the Dense RLOF algorithm - optflow::calcOpticalFlowDenseRLOF() * create_opt_flow_dual_tvl1Creates an instance of cv::DenseOpticalFlow * create_opt_flow_farnebackAdditional interface to Farneback’s algorithm - calcOpticalFlowFarneback() * create_opt_flow_pca_flowCreates an instance of PCAFlow * create_opt_flow_simple_flowAdditional interface to the SimpleFlow algorithm - calcOpticalFlowSF() * create_opt_flow_sparse_rlofAdditional interface to the Sparse RLOF algorithm - optflow::calcOpticalFlowSparseRLOF() * create_opt_flow_sparse_to_denseAdditional interface to the SparseToDenseFlow algorithm - calcOpticalFlowSparseToDense() * read * segment_motionSplits a motion history image into a few parts corresponding to separate independent motions (for example, left hand, right hand). * update_motion_historyUpdates the motion history image by a moving silhouette. * write Type Aliases --- * GPCSamplesVector Module opencv::ovis === OGRE 3D Visualiser --- ovis is a simplified rendering wrapper around ogre3d. The Ogre terminology is used in the API and Ogre Script is assumed to be used for advanced customization. Besides the API you see here, there are several environment variables that control the behavior of ovis. They are documented in [createWindow]. ### Loading geometry You can create geometry on the fly or by loading Ogre `.mesh` files. #### Blender For converting/creating geometry, Blender is recommended.
* Blender 2.7x is better tested, but Blender 2.8x should work too * install blender2ogre matching your Blender version * download the Ogre MSVC SDK which contains `OgreXMLConverter.exe` (in `bin/`) and set the path in the blender2ogre settings * get ogre-meshviewer to enable the preview function in blender2ogre as well as for verifying the exported files * in case the exported materials are not exactly how you want them, consult the Ogre Manual #### Assimp When using Ogre 1.12.9 or later, enabling the Assimp plugin allows loading arbitrary geometry. Simply pass `bunny.obj` instead of `bunny.mesh` as `meshname` in [WindowScene::createEntity]. You should still use ogre-meshviewer to verify that the geometry is converted correctly. Modules --- * prelude Structs --- * WindowSceneA 3D viewport and the associated scene Enums --- * EntityProperty * MaterialProperty * SceneSettings Constants --- * ENTITY_AABB_WORLD * ENTITY_ANIMBLEND_MODE * ENTITY_CAST_SHADOWS * ENTITY_MATERIAL * ENTITY_SCALE * MATERIAL_DIFFUSE * MATERIAL_EMISSIVE * MATERIAL_LINE_WIDTH * MATERIAL_OPACITY * MATERIAL_POINT_SIZE * MATERIAL_TEXTURE * MATERIAL_TEXTURE0 * MATERIAL_TEXTURE1 * MATERIAL_TEXTURE2 * MATERIAL_TEXTURE3 * SCENE_AAApply anti-aliasing. The first window determines the setting for all windows. * SCENE_INTERACTIVEallow the user to control the camera. * SCENE_OFFSCREENRender off-screen without a window. Allows separate AA setting. Requires manual update via WindowScene::update * SCENE_SEPARATEthe window will use a separate scene. The scene will be shared otherwise. * SCENE_SHADOWSEnable real-time shadows in the scene. All entities cast shadows by default.
Control via ENTITY_CAST_SHADOWS * SCENE_SHOW_CS_CROSSdraw coordinate system crosses for debugging Traits --- * WindowSceneTraitMutable methods for crate::ovis::WindowScene * WindowSceneTraitConstConstant methods for crate::ovis::WindowScene Functions --- * add_resource_locationAdd an additional resource location that is searched for meshes, textures and materials * create_grid_meshcreates a grid * create_grid_mesh_defcreates a grid * create_plane_meshcreate a 2D plane, X right, Y down, Z up * create_plane_mesh_defcreate a 2D plane, X right, Y down, Z up * create_point_cloud_meshcreates a point cloud mesh * create_point_cloud_mesh_defcreates a point cloud mesh * create_triangle_meshcreates a triangle mesh from vertex-vertex or face-vertex representation * create_triangle_mesh_defcreates a triangle mesh from vertex-vertex or face-vertex representation * create_windowcreate a new rendering window/viewport * create_window_defcreate a new rendering window/viewport * set_material_propertyset the property of a material to the given value * set_material_property_1set the property of a material to the given value * set_material_property_2set the shader property of a material to the given value * set_material_textureset the texture of a material to the given value * update_texture**Deprecated**: use setMaterialProperty * wait_keyupdate all windows and wait for keyboard event * wait_key_defupdate all windows and wait for keyboard event Module opencv::phase_unwrapping === Phase Unwrapping API --- Two-dimensional phase unwrapping is found in different applications like terrain elevation estimation in synthetic aperture radar (SAR), field mapping in magnetic resonance imaging or as a way of finding corresponding pixels in structured light reconstruction with sinusoidal patterns. Given a phase map, wrapped between [-pi; pi], phase unwrapping aims at finding the “true” phase map by adding the right number of 2*pi to each pixel.
The problem is straightforward for a perfectly wrapped phase map, but real data are usually not noise-free. Among the different algorithms that were developed, quality-guided phase unwrapping methods are fast and efficient. They follow a path that unwraps high-quality pixels first, avoiding error propagation from the start. In this module, a quality-guided phase unwrapping is implemented following the approach described in histogramUnwrapping. Modules --- * prelude Structs --- * HistogramPhaseUnwrappingClass implementing two-dimensional phase unwrapping based on histogramUnwrapping This algorithm belongs to the quality-guided phase unwrapping methods. First, it computes a reliability map from second differences between a pixel and its eight neighbours. Reliability values lie between 0 and 16*pi*pi. Then, this reliability map is used to compute the reliabilities of “edges”. An edge is an entity defined by two pixels that are connected horizontally or vertically. Its reliability is found by adding the reliabilities of the two pixels connected through it. Edges are sorted in a histogram based on their reliability values. This histogram is then used to unwrap pixels, starting from the highest-quality pixel. * HistogramPhaseUnwrapping_ParamsParameters of the phaseUnwrapping constructor. * PhaseUnwrappingAbstract base class for phase unwrapping.
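The idea of recovering the true phase by adding the right multiple of 2*pi can be illustrated with a minimal one-dimensional sketch in plain Rust. The `unwrap_phase` helper below is hypothetical and independent of the OpenCV bindings; the quality-guided two-dimensional algorithm implemented by HistogramPhaseUnwrapping is considerably more involved, since it orders pixels by reliability rather than scanning left to right.

```rust
use std::f64::consts::PI;

/// Naive 1-D phase unwrapping: whenever the jump between neighbouring
/// samples exceeds pi in magnitude, a multiple of 2*pi is accumulated so
/// the output sequence becomes continuous. Noisy 2-D data needs the
/// quality-guided approach described above instead.
fn unwrap_phase(wrapped: &[f64]) -> Vec<f64> {
    let mut out = Vec::with_capacity(wrapped.len());
    let mut offset = 0.0;
    for (i, &p) in wrapped.iter().enumerate() {
        if i > 0 {
            let diff = p - wrapped[i - 1];
            if diff > PI {
                offset -= 2.0 * PI;
            } else if diff < -PI {
                offset += 2.0 * PI;
            }
        }
        out.push(p + offset);
    }
    out
}

fn main() {
    // A linear ramp 0, 0.8*pi, 1.6*pi, ... wrapped into [-pi, pi).
    let true_phase: Vec<f64> = (0..8).map(|i| i as f64 * 0.8 * PI).collect();
    let wrapped: Vec<f64> = true_phase
        .iter()
        .map(|p| (p + PI).rem_euclid(2.0 * PI) - PI)
        .collect();
    let unwrapped = unwrap_phase(&wrapped);
    for (t, u) in true_phase.iter().zip(&unwrapped) {
        assert!((t - u).abs() < 1e-9);
    }
    println!("unwrapped ramp matches the true phase");
}
```

On noise-free data like this ramp, the scan-order choice does not matter; quality guidance only becomes essential once noisy pixels could propagate a wrong 2*pi offset to their neighbours.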
Traits --- * HistogramPhaseUnwrappingTraitMutable methods for crate::phase_unwrapping::HistogramPhaseUnwrapping * HistogramPhaseUnwrappingTraitConstConstant methods for crate::phase_unwrapping::HistogramPhaseUnwrapping * PhaseUnwrappingTraitMutable methods for crate::phase_unwrapping::PhaseUnwrapping * PhaseUnwrappingTraitConstConstant methods for crate::phase_unwrapping::PhaseUnwrapping Module opencv::photo === Computational Photography --- This module includes photo processing algorithms. Inpainting --- Denoising --- HDR imaging --- This section describes high dynamic range imaging algorithms, namely tonemapping, exposure alignment, camera calibration with multiple exposures and exposure fusion. Contrast Preserving Decolorization --- Useful links: http://www.cse.cuhk.edu.hk/leojia/projects/color2gray/index.html Seamless Cloning --- Useful links: https://www.learnopencv.com/seamless-cloning-using-opencv-python-cpp Non-Photorealistic Rendering --- Useful links: http://www.inf.ufrgs.br/~eslgastal/DomainTransform https://www.learnopencv.com/non-photorealistic-rendering-using-opencv-python-c/ C API --- Modules --- * prelude Structs --- * AlignExposuresThe base class for algorithms that align images of the same scene with different exposures * AlignMTBThis algorithm converts images to median threshold bitmaps (1 for pixels brighter than median luminance and 0 otherwise) and then aligns the resulting bitmaps using bit operations. * CalibrateCRFThe base class for camera response calibration algorithms. * CalibrateDebevecThe inverse camera response function is extracted for each brightness value by minimizing an objective function as a linear system. The objective function is constructed using pixel values at the same position in all images; an extra term is added to make the result smoother. * CalibrateRobertsonThe inverse camera response function is extracted for each brightness value by minimizing an objective function as a linear system. This algorithm uses all image pixels.
* MergeDebevecThe resulting HDR image is calculated as a weighted average of the exposures considering exposure values and camera response. * MergeExposuresThe base class for algorithms that can merge an exposure sequence into a single image. * MergeMertensPixels are weighted using contrast, saturation and well-exposedness measures, then images are combined using Laplacian pyramids. * MergeRobertsonThe resulting HDR image is calculated as a weighted average of the exposures considering exposure values and camera response. * TonemapBase class for tonemapping algorithms - tools that are used to map an HDR image to the 8-bit range. * TonemapDragoAdaptive logarithmic mapping is a fast global tonemapping algorithm that scales the image in the logarithmic domain. * TonemapMantiukThis algorithm transforms the image to contrast using gradients on all levels of a Gaussian pyramid, transforms contrast values to HVS response and scales the response. After this the image is reconstructed from the new contrast values. * TonemapReinhardThis is a global tonemapping operator that models the human visual system. Constants --- * INPAINT_NSUse Navier-Stokes based method * INPAINT_TELEAUse the algorithm proposed by <NAME> * LDR_SIZE * MIXED_CLONEThe classic method, color-based selection and alpha masking might be time consuming and often leaves an undesirable halo. Seamless cloning, even averaged with the original image, is not effective. Mixed seamless cloning based on a loose selection proves effective. * MONOCHROME_TRANSFERMonochrome transfer allows the user to easily replace certain features of one object by alternative features.
* NORMAL_CLONEThe power of the method is fully expressed when inserting objects with complex outlines into a new background * NORMCONV_FILTERNormalized Convolution Filtering * RECURS_FILTERRecursive Filtering Traits --- * AlignExposuresTraitMutable methods for crate::photo::AlignExposures * AlignExposuresTraitConstConstant methods for crate::photo::AlignExposures * AlignMTBTraitMutable methods for crate::photo::AlignMTB * AlignMTBTraitConstConstant methods for crate::photo::AlignMTB * CalibrateCRFTraitMutable methods for crate::photo::CalibrateCRF * CalibrateCRFTraitConstConstant methods for crate::photo::CalibrateCRF * CalibrateDebevecTraitMutable methods for crate::photo::CalibrateDebevec * CalibrateDebevecTraitConstConstant methods for crate::photo::CalibrateDebevec * CalibrateRobertsonTraitMutable methods for crate::photo::CalibrateRobertson * CalibrateRobertsonTraitConstConstant methods for crate::photo::CalibrateRobertson * MergeDebevecTraitMutable methods for crate::photo::MergeDebevec * MergeDebevecTraitConstConstant methods for crate::photo::MergeDebevec * MergeExposuresTraitMutable methods for crate::photo::MergeExposures * MergeExposuresTraitConstConstant methods for crate::photo::MergeExposures * MergeMertensTraitMutable methods for crate::photo::MergeMertens * MergeMertensTraitConstConstant methods for crate::photo::MergeMertens * MergeRobertsonTraitMutable methods for crate::photo::MergeRobertson * MergeRobertsonTraitConstConstant methods for crate::photo::MergeRobertson * TonemapDragoTraitMutable methods for crate::photo::TonemapDrago * TonemapDragoTraitConstConstant methods for crate::photo::TonemapDrago * TonemapMantiukTraitMutable methods for crate::photo::TonemapMantiuk * TonemapMantiukTraitConstConstant methods for crate::photo::TonemapMantiuk * TonemapReinhardTraitMutable methods for crate::photo::TonemapReinhard * TonemapReinhardTraitConstConstant methods for crate::photo::TonemapReinhard * TonemapTraitMutable methods for crate::photo::Tonemap 
* TonemapTraitConstConstant methods for crate::photo::Tonemap Functions --- * color_changeGiven an original color image, two differently colored versions of this image can be mixed seamlessly. * color_change_defGiven an original color image, two differently colored versions of this image can be mixed seamlessly. * create_align_mtbCreates AlignMTB object * create_align_mtb_defCreates AlignMTB object * create_calibrate_debevecCreates CalibrateDebevec object * create_calibrate_debevec_defCreates CalibrateDebevec object * create_calibrate_robertsonCreates CalibrateRobertson object * create_calibrate_robertson_defCreates CalibrateRobertson object * create_merge_debevecCreates MergeDebevec object * create_merge_mertensCreates MergeMertens object * create_merge_mertens_defCreates MergeMertens object * create_merge_robertsonCreates MergeRobertson object * create_tonemapCreates simple linear mapper with gamma correction * create_tonemap_defCreates simple linear mapper with gamma correction * create_tonemap_dragoCreates TonemapDrago object * create_tonemap_drago_defCreates TonemapDrago object * create_tonemap_mantiukCreates TonemapMantiuk object * create_tonemap_mantiuk_defCreates TonemapMantiuk object * create_tonemap_reinhardCreates TonemapReinhard object * create_tonemap_reinhard_defCreates TonemapReinhard object * decolorTransforms a color image to a grayscale image. It is a basic tool in digital printing, stylized black-and-white photograph rendering, and in many single channel image processing applications CL12. * denoise_tvl1The primal-dual algorithm solves special types of variational problems (that is, finding a function that minimizes some functional). Since image denoising, in particular, may be seen as a variational problem, the primal-dual algorithm can be used to perform denoising, and this is exactly what is implemented.
* denoise_tvl1_defThe primal-dual algorithm solves special types of variational problems (that is, finding a function that minimizes some functional). Since image denoising, in particular, may be seen as a variational problem, the primal-dual algorithm can be used to perform denoising, and this is exactly what is implemented. * detail_enhanceThis filter enhances the details of a particular image. * detail_enhance_defThis filter enhances the details of a particular image. * edge_preserving_filterFiltering is the fundamental operation in image and video processing. Edge-preserving smoothing filters are used in many different applications EM11. * edge_preserving_filter_defFiltering is the fundamental operation in image and video processing. Edge-preserving smoothing filters are used in many different applications EM11. * fast_nl_means_denoisingPerform image denoising using Non-local Means Denoising algorithm http://www.ipol.im/pub/algo/bcm_non_local_means_denoising/ with several computational optimizations.
Noise expected to be a gaussian white noise * fast_nl_means_denoising_1C++ default parameters * fast_nl_means_denoising_1_defNote * fast_nl_means_denoising_coloredModification of fastNlMeansDenoising function for colored images * fast_nl_means_denoising_colored_1C++ default parameters * fast_nl_means_denoising_colored_1_defNote * fast_nl_means_denoising_colored_cudaModification of fastNlMeansDenoising function for colored images * fast_nl_means_denoising_colored_cuda_defModification of fastNlMeansDenoising function for colored images * fast_nl_means_denoising_colored_defModification of fastNlMeansDenoising function for colored images * fast_nl_means_denoising_colored_multiModification of fastNlMeansDenoisingMulti function for colored images sequences * fast_nl_means_denoising_colored_multi_defModification of fastNlMeansDenoisingMulti function for colored images sequences * fast_nl_means_denoising_cudaPerform image denoising using Non-local Means Denoising algorithm http://www.ipol.im/pub/algo/bcm_non_local_means_denoising with several computational optimizations. Noise expected to be a gaussian white noise * fast_nl_means_denoising_cuda_defPerform image denoising using Non-local Means Denoising algorithm http://www.ipol.im/pub/algo/bcm_non_local_means_denoising with several computational optimizations. Noise expected to be a gaussian white noise * fast_nl_means_denoising_defPerform image denoising using Non-local Means Denoising algorithm http://www.ipol.im/pub/algo/bcm_non_local_means_denoising/ with several computational optimizations. Noise expected to be a gaussian white noise * fast_nl_means_denoising_multiModification of fastNlMeansDenoising function for images sequence where consecutive images have been captured in small period of time. For example video. This version of the function is for grayscale images or for manual manipulation with colorspaces. See Buades2005DenoisingIS for more details (open access here). 
* fast_nl_means_denoising_multi_defModification of fastNlMeansDenoising function for images sequence where consecutive images have been captured in small period of time. For example video. This version of the function is for grayscale images or for manual manipulation with colorspaces. See Buades2005DenoisingIS for more details (open access here). * fast_nl_means_denoising_multi_vecModification of fastNlMeansDenoising function for images sequence where consecutive images have been captured in small period of time. For example video. This version of the function is for grayscale images or for manual manipulation with colorspaces. See Buades2005DenoisingIS for more details (open access here). * fast_nl_means_denoising_multi_vec_defModification of fastNlMeansDenoising function for images sequence where consecutive images have been captured in small period of time. For example video. This version of the function is for grayscale images or for manual manipulation with colorspaces. See Buades2005DenoisingIS for more details (open access here). * fast_nl_means_denoising_vecPerform image denoising using Non-local Means Denoising algorithm http://www.ipol.im/pub/algo/bcm_non_local_means_denoising/ with several computational optimizations. Noise expected to be a gaussian white noise * fast_nl_means_denoising_vec_defPerform image denoising using Non-local Means Denoising algorithm http://www.ipol.im/pub/algo/bcm_non_local_means_denoising/ with several computational optimizations. Noise expected to be a gaussian white noise * illumination_changeApplying an appropriate non-linear transformation to the gradient field inside the selection and then integrating back with a Poisson solver, modifies locally the apparent illumination of an image. * illumination_change_defApplying an appropriate non-linear transformation to the gradient field inside the selection and then integrating back with a Poisson solver, modifies locally the apparent illumination of an image. 
* inpaintRestores the selected region in an image using the region neighborhood. * non_local_meansPerforms pure non-local means denoising without any simplification, and thus it is not fast. * non_local_means_1C++ default parameters * non_local_means_1_defNote * non_local_means_defPerforms pure non-local means denoising without any simplification, and thus it is not fast. * pencil_sketch@example samples/cpp/tutorial_code/photo/non_photorealistic_rendering/npr_demo.cpp An example using non-photorealistic line drawing functions * pencil_sketch_def@example samples/cpp/tutorial_code/photo/non_photorealistic_rendering/npr_demo.cpp An example using non-photorealistic line drawing functions * seamless_clone@example samples/cpp/tutorial_code/photo/seamless_cloning/cloning_demo.cpp An example using the seamlessClone function * stylizationStylization aims to produce digital imagery with a wide variety of effects not focused on photorealism. Edge-aware filters are ideal for stylization, as they can abstract regions of low contrast while preserving, or enhancing, high-contrast features. * stylization_defStylization aims to produce digital imagery with a wide variety of effects not focused on photorealism. Edge-aware filters are ideal for stylization, as they can abstract regions of low contrast while preserving, or enhancing, high-contrast features. * texture_flatteningBy retaining only the gradients at edge locations, before integrating with the Poisson solver, one washes out the texture of the selected region, giving its contents a flat aspect. Here the Canny Edge Detector is used. * texture_flattening_defBy retaining only the gradients at edge locations, before integrating with the Poisson solver, one washes out the texture of the selected region, giving its contents a flat aspect. Here the Canny Edge Detector is used.
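The median threshold bitmap representation that the AlignMTB description above relies on can be sketched in a few lines of plain Rust. The `median_threshold_bitmap` helper is hypothetical and is not part of the crate's API; the real AlignMTB implementation additionally builds image pyramids and aligns the bitmaps of differently exposed shots with bit operations.

```rust
/// Build a median threshold bitmap: 1 for pixels brighter than the
/// median luminance, 0 otherwise. Changing the exposure shifts absolute
/// brightness but largely preserves which pixels lie above the median,
/// which is what makes these bitmaps suitable for exposure alignment.
fn median_threshold_bitmap(pixels: &[u8]) -> Vec<u8> {
    let mut sorted = pixels.to_vec();
    sorted.sort_unstable();
    let median = sorted[sorted.len() / 2];
    pixels.iter().map(|&p| u8::from(p > median)).collect()
}

fn main() {
    // A tiny "image" given as a flat slice of 8-bit luminance values.
    let image = [10u8, 200, 30, 220, 40, 250, 50, 60];
    let bitmap = median_threshold_bitmap(&image);
    // Two such bitmaps from differently exposed shots can be compared by
    // XOR-ing and counting set bits at candidate translations.
    println!("{:?}", bitmap);
}
```

In the real algorithm the bitmap comparison is repeated at each level of an image pyramid, refining the estimated translation from coarse to fine.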
Module opencv::platform_types === Reexported platform types that are used by OpenCV Enums --- * FILE Type Aliases --- * clock_t * ptrdiff_t * size_t Module opencv::plot === Plot function for Mat data --- Modules --- * prelude Structs --- * Plot2d Traits --- * Plot2dTraitMutable methods for crate::plot::Plot2d * Plot2dTraitConstConstant methods for crate::plot::Plot2d Module opencv::rapid === silhouette based 3D object tracking --- implements “RAPID-a video rate object tracker” harris1990rapid with the dynamic control point extraction of drummond2002real Modules --- * prelude Structs --- * Rapid_GOSTrackerimplements “Global optimal searching for textureless 3D object tracking” wang2015global * Rapid_OLSTrackerimplements “Optimal local searching for fast and robust textureless 3D object tracking in highly cluttered backgrounds” seo2013optimal * Rapid_Rapidwrapper around rapid function for uniform access * Rapid_TrackerAbstract base class for stateful silhouette trackers Traits --- * Rapid_GOSTrackerTraitMutable methods for crate::rapid::Rapid_GOSTracker * Rapid_GOSTrackerTraitConstConstant methods for crate::rapid::Rapid_GOSTracker * Rapid_OLSTrackerTraitMutable methods for crate::rapid::Rapid_OLSTracker * Rapid_OLSTrackerTraitConstConstant methods for crate::rapid::Rapid_OLSTracker * Rapid_RapidTraitMutable methods for crate::rapid::Rapid_Rapid * Rapid_RapidTraitConstConstant methods for crate::rapid::Rapid_Rapid * Rapid_TrackerTraitMutable methods for crate::rapid::Rapid_Tracker * Rapid_TrackerTraitConstConstant methods for crate::rapid::Rapid_Tracker Functions --- * convert_correspondenciesCollect corresponding 2d and 3d points based on correspondencies and mask * convert_correspondencies_defCollect corresponding 2d and 3d points based on correspondencies and mask * draw_correspondenciesDebug draw markers of matched correspondences onto a lineBundle * draw_correspondencies_defDebug draw markers of matched correspondences onto a lineBundle * draw_search_linesDebug 
draw search lines onto an image * draw_wireframeDraw a wireframe of a triangle mesh * draw_wireframe_defDraw a wireframe of a triangle mesh * extract_control_pointsExtract control points from the projected silhouette of a mesh * extract_line_bundleExtract the line bundle from an image * find_correspondenciesFind corresponding image locations by searching for a maximal sobel edge along the search line (a single row in the bundle) * find_correspondencies_defFind corresponding image locations by searching for a maximal sobel edge along the search line (a single row in the bundle) * rapidHigh level function to execute a single rapid harris1990rapid iteration * rapid_defHigh level function to execute a single rapid harris1990rapid iteration Module opencv::rgbd === RGB-Depth Processing --- [kinfu_icp] Modules --- * prelude Structs --- * ColoredKinfu_ColoredKinFuKinectFusion implementation * ColoredKinfu_Params * DepthCleanerObject that can clean a noisy depth image * Dynafu_DynaFu * FastICPOdometryA faster version of ICPOdometry which is used in the KinectFusion implementation. Partial list of differences: * ICPOdometryOdometry based on the paper “KinectFusion: Real-Time Dense Surface Mapping and Tracking”, <NAME>, <NAME>, et al., SIGGRAPH, 2011. * Kinfu_Detail_PoseGraph * Kinfu_Intr * Kinfu_Intr_ProjectorProjects camera space vector onto screen * Kinfu_Intr_ReprojectorCamera intrinsics. Reprojects screen point to camera space given z coord. * Kinfu_KinFuKinectFusion implementation * Kinfu_Params * Kinfu_Volume * Kinfu_VolumeParams * LargeKinfuLarge Scale Dense Depth Fusion implementation * LineMod_ColorGradientModality that computes quantized gradient orientations from a color image. * LineMod_DepthNormalModality that computes quantized surface normals from a dense depth map. * LineMod_DetectorObject detector using the LINE template matching algorithm with any set of modalities. 
* LineMod_FeatureDiscriminant feature described by its location and label. * LineMod_MatchRepresents a successful template match. * LineMod_ModalityInterface for modalities that plug into the LINE template matching representation. * LineMod_QuantizedPyramidRepresents a modality operating over an image pyramid. * LineMod_Template * OdometryBase class for computation of odometry. * OdometryFrameObject that contains frame data that may be needed for the Odometry. It is used for efficiency (to pass precomputed/cached data of a frame that participates in the Odometry processing several times). * Params * RgbdFrameObject that contains frame data. * RgbdICPOdometryOdometry that merges RgbdOdometry and ICPOdometry by minimizing the sum of their energy functions. * RgbdNormalsObject that can compute the normals in an image. It is an object as it can cache data for speed efficiency. The implemented methods are either: * RgbdOdometryOdometry based on the paper “Real-Time Visual Odometry from Dense RGB-D Images”, <NAME>, <NAME>, <NAME>, ICCV, 2011. 
* RgbdPlaneObject that can compute planes in an image Enums --- * DepthCleaner_DEPTH_CLEANER_METHODNIL method is from `Modeling Kinect Sensor Noise for Improved 3d Reconstruction and Tracking` by <NAME>, <NAME>, <NAME> * Kinfu_VolumeType * RgbdNormals_RGBD_NORMALS_METHOD * RgbdPlane_RGBD_PLANE_METHOD Constants --- * DepthCleaner_DEPTH_CLEANER_NIL * Kinfu_VolumeType_COLOREDTSDF * Kinfu_VolumeType_HASHTSDF * Kinfu_VolumeType_TSDF * OdometryFrame_CACHE_ALL * OdometryFrame_CACHE_DST * OdometryFrame_CACHE_SRC * Odometry_RIGID_BODY_MOTION * Odometry_ROTATION * Odometry_TRANSLATION * RgbdNormals_RGBD_NORMALS_METHOD_FALS * RgbdNormals_RGBD_NORMALS_METHOD_LINEMOD * RgbdNormals_RGBD_NORMALS_METHOD_SRI * RgbdPlane_RGBD_PLANE_METHOD_DEFAULT Traits --- * ColoredKinfu_ColoredKinFuTraitMutable methods for crate::rgbd::ColoredKinfu_ColoredKinFu * ColoredKinfu_ColoredKinFuTraitConstConstant methods for crate::rgbd::ColoredKinfu_ColoredKinFu * ColoredKinfu_ParamsTraitMutable methods for crate::rgbd::ColoredKinfu_Params * ColoredKinfu_ParamsTraitConstConstant methods for crate::rgbd::ColoredKinfu_Params * DepthCleanerTraitMutable methods for crate::rgbd::DepthCleaner * DepthCleanerTraitConstConstant methods for crate::rgbd::DepthCleaner * Dynafu_DynaFuTraitMutable methods for crate::rgbd::Dynafu_DynaFu * Dynafu_DynaFuTraitConstConstant methods for crate::rgbd::Dynafu_DynaFu * FastICPOdometryTraitMutable methods for crate::rgbd::FastICPOdometry * FastICPOdometryTraitConstConstant methods for crate::rgbd::FastICPOdometry * ICPOdometryTraitMutable methods for crate::rgbd::ICPOdometry * ICPOdometryTraitConstConstant methods for crate::rgbd::ICPOdometry * Kinfu_Detail_PoseGraphTraitMutable methods for crate::rgbd::Kinfu_Detail_PoseGraph * Kinfu_Detail_PoseGraphTraitConstConstant methods for crate::rgbd::Kinfu_Detail_PoseGraph * Kinfu_KinFuTraitMutable methods for crate::rgbd::Kinfu_KinFu * Kinfu_KinFuTraitConstConstant methods for crate::rgbd::Kinfu_KinFu * Kinfu_ParamsTraitMutable 
methods for crate::rgbd::Kinfu_Params * Kinfu_ParamsTraitConstConstant methods for crate::rgbd::Kinfu_Params * Kinfu_VolumeParamsTraitMutable methods for crate::rgbd::Kinfu_VolumeParams * Kinfu_VolumeParamsTraitConstConstant methods for crate::rgbd::Kinfu_VolumeParams * Kinfu_VolumeTraitMutable methods for crate::rgbd::Kinfu_Volume * Kinfu_VolumeTraitConstConstant methods for crate::rgbd::Kinfu_Volume * LargeKinfuTraitMutable methods for crate::rgbd::LargeKinfu * LargeKinfuTraitConstConstant methods for crate::rgbd::LargeKinfu * LineMod_ColorGradientTraitMutable methods for crate::rgbd::LineMod_ColorGradient * LineMod_ColorGradientTraitConstConstant methods for crate::rgbd::LineMod_ColorGradient * LineMod_DepthNormalTraitMutable methods for crate::rgbd::LineMod_DepthNormal * LineMod_DepthNormalTraitConstConstant methods for crate::rgbd::LineMod_DepthNormal * LineMod_DetectorTraitMutable methods for crate::rgbd::LineMod_Detector * LineMod_DetectorTraitConstConstant methods for crate::rgbd::LineMod_Detector * LineMod_MatchTraitMutable methods for crate::rgbd::LineMod_Match * LineMod_MatchTraitConstConstant methods for crate::rgbd::LineMod_Match * LineMod_ModalityTraitMutable methods for crate::rgbd::LineMod_Modality * LineMod_ModalityTraitConstConstant methods for crate::rgbd::LineMod_Modality * LineMod_QuantizedPyramidTraitMutable methods for crate::rgbd::LineMod_QuantizedPyramid * LineMod_QuantizedPyramidTraitConstConstant methods for crate::rgbd::LineMod_QuantizedPyramid * LineMod_TemplateTraitMutable methods for crate::rgbd::LineMod_Template * LineMod_TemplateTraitConstConstant methods for crate::rgbd::LineMod_Template * OdometryFrameTraitMutable methods for crate::rgbd::OdometryFrame * OdometryFrameTraitConstConstant methods for crate::rgbd::OdometryFrame * OdometryTraitMutable methods for crate::rgbd::Odometry * OdometryTraitConstConstant methods for crate::rgbd::Odometry * ParamsTraitMutable methods for crate::rgbd::Params * ParamsTraitConstConstant methods 
for crate::rgbd::Params * RgbdFrameTraitMutable methods for crate::rgbd::RgbdFrame * RgbdFrameTraitConstConstant methods for crate::rgbd::RgbdFrame * RgbdICPOdometryTraitMutable methods for crate::rgbd::RgbdICPOdometry * RgbdICPOdometryTraitConstConstant methods for crate::rgbd::RgbdICPOdometry * RgbdNormalsTraitMutable methods for crate::rgbd::RgbdNormals * RgbdNormalsTraitConstConstant methods for crate::rgbd::RgbdNormals * RgbdOdometryTraitMutable methods for crate::rgbd::RgbdOdometry * RgbdOdometryTraitConstConstant methods for crate::rgbd::RgbdOdometry * RgbdPlaneTraitMutable methods for crate::rgbd::RgbdPlane * RgbdPlaneTraitConstConstant methods for crate::rgbd::RgbdPlane Functions --- * colormapDebug function to colormap a quantized image for viewing. * depth_to3dConverts a depth image to an organized set of 3d points. The coordinate system is x pointing left, y down and z away from the camera * depth_to3d_defConverts a depth image to an organized set of 3d points. The coordinate system is x pointing left, y down and z away from the camera * depth_to3d_sparseParameters * draw_featuresDebug function to draw linemod features * draw_features_defDebug function to draw linemod features * get_default_lineFactory function for detector using LINE algorithm with color gradients. * get_default_linemodFactory function for detector using LINE-MOD algorithm with color gradients and depth normals. * is_valid_depthChecks if the value is a valid depth. For CV_16U or CV_16S, the convention is to be invalid if it is a limit. For a float/double, we just check if it is a NaN * is_valid_depth_1 * is_valid_depth_2 * is_valid_depth_3 * is_valid_depth_4 * is_valid_depth_5 * make_volume * register_depthRegisters depth data to an external camera. Registration is performed by creating a depth cloud, transforming the cloud by the rigid body transformation between the cameras, and then projecting the transformed points into the RGB camera. 
* register_depth_defRegisters depth data to an external camera. Registration is performed by creating a depth cloud, transforming the cloud by the rigid body transformation between the cameras, and then projecting the transformed points into the RGB camera. * rescale_depthIf the input image is of type CV_16UC1 (like the Kinect one), the image is converted to floats, divided by depth_factor to get a depth in meters, and the values 0 are converted to std::numeric_limits::quiet_NaN(). Otherwise, the image is simply converted to floats * rescale_depth_defIf the input image is of type CV_16UC1 (like the Kinect one), the image is converted to floats, divided by depth_factor to get a depth in meters, and the values 0 are converted to std::numeric_limits::quiet_NaN(). Otherwise, the image is simply converted to floats * warp_frameWarp the image: compute 3d points from the depth, transform them using a given transformation, then project the color point cloud to an image plane. This function can be used to visualize the results of the Odometry algorithm. * warp_frame_defWarp the image: compute 3d points from the depth, transform them using a given transformation, then project the color point cloud to an image plane. This function can be used to visualize the results of the Odometry algorithm. Type Aliases --- * Dynafu_ParamsBackwards compatibility for old versions Module opencv::saliency === Saliency API --- Many computer vision applications may benefit from understanding where humans focus given a scene. Beyond cognitively understanding the way humans perceive images and scenes, finding salient regions and objects in images helps various tasks such as speeding up object detection, object recognition, object tracking and content-aware image editing. There is a rich literature on saliency, but its development is very fragmented. 
The principal purpose of this API is to provide a single interface and framework into which several saliency algorithms, of very different nature and methodology but sharing the same purpose, can be plugged. The algorithms are organized into three main categories: **Static Saliency**: algorithms belonging to this category exploit different image features that allow detecting salient objects in a non-dynamic scenario. **Motion Saliency**: algorithms belonging to this category are particularly focused on detecting salient objects over time (hence also over frames); a temporal component allows detecting “moving” objects as salient, and more generally detecting changes in the scene. **Objectness**: objectness is usually represented as a value which reflects how likely an image window covers an object of any category. Algorithms belonging to this category avoid making decisions early on by proposing a small number of category-independent proposals that are expected to cover all objects in an image. Being able to perceive objects before identifying them is closely related to bottom-up visual attention (saliency). ![Saliency diagram](https://docs.opencv.org/4.8.1/saliency.png) To see how the API works, try the tracker demo: https://github.com/fpuja/opencv_contrib/blob/saliencyModuleDevelop/modules/saliency/samples/computeSaliency.cpp Note: This API has been designed with PlantUML. If you modify this API please change the UML. Modules --- * prelude Structs --- * MotionSaliency********************************* Motion Saliency Base Class *********************************** * MotionSaliencyBinWangApr2014! 
* Objectness********************************* Objectness Base Class *********************************** * ObjectnessBINGthe Binarized normed gradients algorithm from BING * Saliency********************************* Saliency Base Class *********************************** * StaticSaliency********************************* Static Saliency Base Class *********************************** * StaticSaliencyFineGrainedthe Fine Grained Saliency approach from FGS * StaticSaliencySpectralResidualthe Spectral Residual approach from SR Traits --- * MotionSaliencyBinWangApr2014TraitMutable methods for crate::saliency::MotionSaliencyBinWangApr2014 * MotionSaliencyBinWangApr2014TraitConstConstant methods for crate::saliency::MotionSaliencyBinWangApr2014 * MotionSaliencyTraitMutable methods for crate::saliency::MotionSaliency * MotionSaliencyTraitConstConstant methods for crate::saliency::MotionSaliency * ObjectnessBINGTraitMutable methods for crate::saliency::ObjectnessBING * ObjectnessBINGTraitConstConstant methods for crate::saliency::ObjectnessBING * ObjectnessTraitMutable methods for crate::saliency::Objectness * ObjectnessTraitConstConstant methods for crate::saliency::Objectness * SaliencyTraitMutable methods for crate::saliency::Saliency * SaliencyTraitConstConstant methods for crate::saliency::Saliency * StaticSaliencyFineGrainedTraitMutable methods for crate::saliency::StaticSaliencyFineGrained * StaticSaliencyFineGrainedTraitConstConstant methods for crate::saliency::StaticSaliencyFineGrained * StaticSaliencySpectralResidualTraitMutable methods for crate::saliency::StaticSaliencySpectralResidual * StaticSaliencySpectralResidualTraitConstConstant methods for crate::saliency::StaticSaliencySpectralResidual * StaticSaliencyTraitMutable methods for crate::saliency::StaticSaliency * StaticSaliencyTraitConstConstant methods for crate::saliency::StaticSaliency Module opencv::sfm === Structure From Motion --- The opencv_sfm module contains algorithms to perform 3d reconstruction 
from 2d images. The core of the module is based on a light version of Libmv originally developed by <NAME> and <NAME>. **What is libmv?** libmv, also known as the Library for Multiview Reconstruction (or LMV), is the computer vision backend for Blender’s motion tracking abilities. Unlike other vision libraries with general ambitions, libmv is focused on algorithms for match moving, specifically targeting Blender as the primary customer. Dense reconstruction, reconstruction from unorganized photo collections, image recognition, and other tasks are not a focus of libmv. **Development** libmv is officially under the Blender umbrella, and so is developed on developer.blender.org. The source repository can be checked out independently from Blender. This module was originally developed as a project for Google Summer of Code 2012-2015. Note: * Notice that it is compiled only when Eigen, GLog and GFlags are correctly installed. Check installation instructions in the following tutorial: [tutorial_sfm_installation] Conditioning --- Fundamental --- Input/Output --- Numeric --- Projection --- Robust Estimation --- Triangulation --- Reconstruction --- Note: - Notice that it is compiled only when Ceres Solver is correctly installed. ``` Check installation instructions in the following tutorial: [tutorial_sfm_installation] ``` Simple Pipeline --- Note: - Notice that it is compiled only when Ceres Solver is correctly installed. ``` Check installation instructions in the following tutorial: [tutorial_sfm_installation] ``` Modules --- * prelude Structs --- * BaseSFMbase class BaseSFM declares a common API that would be used in a typical scene reconstruction scenario * SFMLibmvEuclideanReconstructionSFMLibmvEuclideanReconstruction class provides an interface with the Libmv Structure From Motion pipeline. * libmv_CameraIntrinsicsOptionsData structure describing the camera model and its parameters. 
* libmv_ReconstructionOptionsData structure describing the reconstruction options. Constants --- * SFM_DISTORTION_MODEL_DIVISION * SFM_DISTORTION_MODEL_POLYNOMIAL * SFM_IO_BUNDLER * SFM_IO_OPENMVG * SFM_IO_OPENSFM * SFM_IO_THEIASFM * SFM_IO_VISUALSFM * SFM_REFINE_FOCAL_LENGTH * SFM_REFINE_PRINCIPAL_POINT * SFM_REFINE_RADIAL_DISTORTION_K1 * SFM_REFINE_RADIAL_DISTORTION_K2 Traits --- * BaseSFMTraitMutable methods for crate::sfm::BaseSFM * BaseSFMTraitConstConstant methods for crate::sfm::BaseSFM * SFMLibmvEuclideanReconstructionTraitMutable methods for crate::sfm::SFMLibmvEuclideanReconstruction * SFMLibmvEuclideanReconstructionTraitConstConstant methods for crate::sfm::SFMLibmvEuclideanReconstruction Functions --- * apply_transformation_to_pointsApply Transformation to points. * compute_orientationComputes Absolute or Exterior Orientation (Pose Estimation) between 2 sets of 3D points. * depthReturns the depth of a point transformed by a rigid transform. * essential_from_fundamentalGet Essential matrix from Fundamental and Camera matrices. * essential_from_rtGet Essential matrix from Motion (R’s and t’s). * euclidean_to_homogeneousConverts points from Euclidean to homogeneous space. E.g., ((x,y)->(x,y,1)) * fundamental_from_correspondences7_point_robustEstimate robustly the fundamental matrix between two datasets of 2D points (image coords space). * fundamental_from_correspondences7_point_robust_defEstimate robustly the fundamental matrix between two datasets of 2D points (image coords space). * fundamental_from_correspondences8_point_robustEstimate robustly the fundamental matrix between two datasets of 2D points (image coords space). * fundamental_from_correspondences8_point_robust_defEstimate robustly the fundamental matrix between two datasets of 2D points (image coords space). * fundamental_from_essentialGet Fundamental matrix from Essential and Camera matrices. * fundamental_from_projectionsGet Fundamental matrix from Projection matrices. 
* homogeneous_to_euclideanConverts point coordinates from homogeneous to Euclidean pixel coordinates. E.g., ((x,y,z)->(x/z, y/z)) * import_reconstructionImport a reconstruction file. * import_reconstruction_defImport a reconstruction file. * isotropic_preconditioner_from_pointsPoint conditioning (isotropic). * k_rt_from_projectionGet K, R and t from projection matrix P, decompose using the RQ decomposition. * mean_and_variance_along_rowsComputes the mean and variance of a given matrix along its rows. * motion_from_essentialGet Motion (R’s and t’s) from Essential matrix. * motion_from_essential_choose_solutionChoose one of the four possible motion solutions from an essential matrix. * normalize_fundamentalNormalizes the Fundamental matrix. * normalize_isotropic_pointsThis function normalizes points (isotropic). * normalize_pointsThis function normalizes points (non isotropic). * normalized_eight_point_solverEstimate the fundamental matrix between two datasets of 2D points (image coords space). * preconditioner_from_pointsPoint conditioning (non isotropic). * projection_from_k_rtGet projection matrix P from K, R and t. * projections_from_fundamentalGet projection matrices from Fundamental matrix * reconstructReconstruct 3d points from 2d correspondences while performing autocalibration. * reconstruct_1Reconstruct 3d points from 2d correspondences while performing autocalibration. * reconstruct_1_defReconstruct 3d points from 2d correspondences while performing autocalibration. * reconstruct_2Reconstruct 3d points from 2d images while performing autocalibration. * reconstruct_2_defReconstruct 3d points from 2d images while performing autocalibration. * reconstruct_3Reconstruct 3d points from 2d images while performing autocalibration. * reconstruct_3_defReconstruct 3d points from 2d images while performing autocalibration. * reconstruct_defReconstruct 3d points from 2d correspondences while performing autocalibration. 
* relative_camera_motionComputes the relative camera motion between two cameras. * skewReturns the 3x3 skew symmetric matrix of a vector. * triangulate_pointsReconstructs a bunch of points by triangulation. Module opencv::shape === Shape Distance and Matching --- Modules --- * prelude Structs --- * AffineTransformerWrapper class for the OpenCV Affine Transformation algorithm. * ChiHistogramCostExtractorA Chi-based cost extraction. * EMDHistogramCostExtractorAn EMD-based cost extraction. * EMDL1HistogramCostExtractorAn EMD-L1-based cost extraction. * HausdorffDistanceExtractor --- * HistogramCostExtractorAbstract base class for histogram cost algorithms. * NormHistogramCostExtractorA norm-based cost extraction. * ShapeContextDistanceExtractor --- * ShapeDistanceExtractor@example modules/shape/samples/shape_example.cpp An example using shape distance algorithm * ShapeTransformerAbstract base class for shape transformation algorithms. * ThinPlateSplineShapeTransformerDefinition of the transformation Traits --- * AffineTransformerTraitMutable methods for crate::shape::AffineTransformer * AffineTransformerTraitConstConstant methods for crate::shape::AffineTransformer * ChiHistogramCostExtractorTraitMutable methods for crate::shape::ChiHistogramCostExtractor * ChiHistogramCostExtractorTraitConstConstant methods for crate::shape::ChiHistogramCostExtractor * EMDHistogramCostExtractorTraitMutable methods for crate::shape::EMDHistogramCostExtractor * EMDHistogramCostExtractorTraitConstConstant methods for crate::shape::EMDHistogramCostExtractor * EMDL1HistogramCostExtractorTraitMutable methods for crate::shape::EMDL1HistogramCostExtractor * EMDL1HistogramCostExtractorTraitConstConstant methods for crate::shape::EMDL1HistogramCostExtractor * HausdorffDistanceExtractorTraitMutable methods for crate::shape::HausdorffDistanceExtractor * HausdorffDistanceExtractorTraitConstConstant methods for crate::shape::HausdorffDistanceExtractor * HistogramCostExtractorTraitMutable 
methods for crate::shape::HistogramCostExtractor * HistogramCostExtractorTraitConstConstant methods for crate::shape::HistogramCostExtractor * NormHistogramCostExtractorTraitMutable methods for crate::shape::NormHistogramCostExtractor * NormHistogramCostExtractorTraitConstConstant methods for crate::shape::NormHistogramCostExtractor * ShapeContextDistanceExtractorTraitMutable methods for crate::shape::ShapeContextDistanceExtractor * ShapeContextDistanceExtractorTraitConstConstant methods for crate::shape::ShapeContextDistanceExtractor * ShapeDistanceExtractorTraitMutable methods for crate::shape::ShapeDistanceExtractor * ShapeDistanceExtractorTraitConstConstant methods for crate::shape::ShapeDistanceExtractor * ShapeTransformerTraitMutable methods for crate::shape::ShapeTransformer * ShapeTransformerTraitConstConstant methods for crate::shape::ShapeTransformer * ThinPlateSplineShapeTransformerTraitMutable methods for crate::shape::ThinPlateSplineShapeTransformer * ThinPlateSplineShapeTransformerTraitConstConstant methods for crate::shape::ThinPlateSplineShapeTransformer Functions --- * create_affine_transformerComplete constructor * create_chi_histogram_cost_extractorC++ default parameters * create_chi_histogram_cost_extractor_defNote * create_emd_histogram_cost_extractorC++ default parameters * create_emd_histogram_cost_extractor_defNote * create_emdl1_histogram_cost_extractorC++ default parameters * create_emdl1_histogram_cost_extractor_defNote * create_hausdorff_distance_extractorC++ default parameters * create_hausdorff_distance_extractor_defNote * create_norm_histogram_cost_extractorC++ default parameters * create_norm_histogram_cost_extractor_defNote * create_shape_context_distance_extractorC++ default parameters * create_shape_context_distance_extractor_defNote * create_thin_plate_spline_shape_transformerComplete constructor * create_thin_plate_spline_shape_transformer_defComplete constructor * emdl1Computes the “minimal work” distance between two weighted 
point configurations based on the papers “EMD-L1: An efficient and Robust Algorithm for comparing histogram-based descriptors”, by <NAME> and <NAME>; and “The Earth Mover’s Distance is the Mallows Distance: Some Insights from Statistics”, by <NAME> and <NAME>. Module opencv::stereo === Stereo Correspondence Algorithms --- Modules --- * prelude Structs --- * MatchQuasiDense * PropagationParameters * QuasiDenseStereoClass containing the methods needed for Quasi Dense Stereo computation. Constants --- * CV_CS_CENSUS * CV_DENSE_CENSUS * CV_MEAN_VARIATION * CV_MODIFIED_CENSUS_TRANSFORM * CV_MODIFIED_CS_CENSUS * CV_QUADRATIC_INTERPOLATION * CV_SIMETRICV_INTERPOLATION * CV_SPARSE_CENSUS * CV_SPECKLE_REMOVAL_ALGORITHM * CV_SPECKLE_REMOVAL_AVG_ALGORITHM * CV_STAR_KERNEL Traits --- * QuasiDenseStereoTraitMutable methods for crate::stereo::QuasiDenseStereo * QuasiDenseStereoTraitConstConstant methods for crate::stereo::QuasiDenseStereo Functions --- * census_transformTwo variations of census applied on input images. Implementation of a census transform which takes into account just some of the pixels from the census kernel, thus allowing for larger block sizes * census_transform_1single image census transform * modified_census_transformSTANDARD_MCT - Modified census which stores 2 bits for each pixel and includes a tolerance to the pixel comparison. MCT_MEAN_VARIATION - Implementation of a modified census transform which also takes into account the variation to the mean of the window, not just the center pixel * modified_census_transform_1single version of modified census transform descriptor * modified_census_transform_1_defsingle version of modified census transform descriptor * modified_census_transform_defSTANDARD_MCT - Modified census which stores 2 bits for each pixel and includes a tolerance to the pixel comparison. MCT_MEAN_VARIATION - Implementation of a modified census transform which also takes into account the variation to 
the mean of the window, not just the center pixel * star_census_transformin a 9x9 kernel only certain positions are chosen * star_census_transform_1single image version of star kernel * symetric_census_transformThe classical center-symmetric census. A modified version of CS census which compares a pixel with its correspondent after the center * symetric_census_transform_1single version of census transform Module opencv::stitching === Images stitching --- This figure illustrates the stitching module pipeline implemented in the Stitcher class. Using that class it’s possible to configure/remove some steps, i.e. adjust the stitching pipeline according to the particular needs. All building blocks from the pipeline are available in the detail namespace; one can combine and use them separately. The implemented stitching pipeline is very similar to the one proposed in BL07. ![stitching pipeline](https://docs.opencv.org/4.8.1/StitchingPipeline.jpg) ### Camera models There are currently 2 camera models implemented in the stitching pipeline. * *Homography model* expecting perspective transformations between images, implemented in [cv::detail::BestOf2NearestMatcher] cv::detail::HomographyBasedEstimator cv::detail::BundleAdjusterReproj cv::detail::BundleAdjusterRay * *Affine model* expecting affine transformation with 6 DOF or 4 DOF, implemented in [cv::detail::AffineBestOf2NearestMatcher] cv::detail::AffineBasedEstimator cv::detail::BundleAdjusterAffine cv::detail::BundleAdjusterAffinePartial cv::AffineWarper The homography model is useful for creating photo panoramas captured by camera, while the affine-based model can be used to stitch scans and objects captured by specialized devices. Use [cv::Stitcher::create] to get a preconfigured pipeline for one of those models. Note: Certain detailed settings of [cv::Stitcher] might not make sense. 
Especially you should not mix classes implementing affine model and classes implementing Homography model, as they work with different transformations. Features Finding and Images Matching --- Rotation Estimation --- Autocalibration --- Images Warping --- Seam Estimation --- Exposure Compensation --- Image Blenders --- Modules --- * prelude Structs --- * AffineWarperAffine warper factory class. * CompressedRectilinearPortraitWarper * CompressedRectilinearWarper * CylindricalWarperCylindrical warper factory class. * CylindricalWarperGpu * Detail_AffineBasedEstimatorAffine transformation based estimator. * Detail_AffineBestOf2NearestMatcherFeatures matcher similar to cv::detail::BestOf2NearestMatcher which finds two best matches for each feature and leaves the best one only if the ratio between descriptor distances is greater than the threshold match_conf. * Detail_AffineWarperAffine warper that uses rotations and translations * Detail_BestOf2NearestMatcherFeatures matcher which finds two best matches for each feature and leaves the best one only if the ratio between descriptor distances is greater than the threshold match_conf * Detail_BestOf2NearestRangeMatcher * Detail_BlenderBase class for all blenders. * Detail_BlocksChannelsCompensatorExposure compensator which tries to remove exposure related artifacts by adjusting image block on each channel. * Detail_BlocksCompensatorExposure compensator which tries to remove exposure related artifacts by adjusting image blocks. * Detail_BlocksGainCompensatorExposure compensator which tries to remove exposure related artifacts by adjusting image block intensities, see UES01 for details. * Detail_BundleAdjusterAffineBundle adjuster that expects affine transformation represented in homogeneous coordinates in R for each camera param. 
Implements camera parameters refinement algorithm which minimizes sum of the reprojection error squares * Detail_BundleAdjusterAffinePartialBundle adjuster that expects affine transformation with 4 DOF represented in homogeneous coordinates in R for each camera param. Implements camera parameters refinement algorithm which minimizes sum of the reprojection error squares * Detail_BundleAdjusterBaseBase class for all camera parameters refinement methods. * Detail_BundleAdjusterRayImplementation of the camera parameters refinement algorithm which minimizes sum of the distances between the rays passing through the camera center and a feature. : * Detail_BundleAdjusterReprojImplementation of the camera parameters refinement algorithm which minimizes sum of the reprojection error squares * Detail_CameraParamsDescribes camera parameters. * Detail_ChannelsCompensatorExposure compensator which tries to remove exposure related artifacts by adjusting image intensities on each channel independently. * Detail_CompressedRectilinearPortraitProjector * Detail_CompressedRectilinearPortraitWarper * Detail_CompressedRectilinearProjector * Detail_CompressedRectilinearWarper * Detail_CylindricalPortraitProjector * Detail_CylindricalPortraitWarper * Detail_CylindricalProjector * Detail_CylindricalWarperWarper that maps an image onto the x*x + z*z = 1 cylinder. * Detail_CylindricalWarperGpu * Detail_DisjointSets * Detail_DpSeamFinder * Detail_EstimatorRotation estimator base class. * Detail_ExposureCompensatorBase class for all exposure compensators. * Detail_FeatherBlenderSimple blender which mixes images at its borders. * Detail_FeaturesMatcherFeature matchers base class. * Detail_FisheyeProjector * Detail_FisheyeWarper * Detail_GainCompensatorExposure compensator which tries to remove exposure related artifacts by adjusting image intensities, see BL07 and WJ10 for details. * Detail_Graph * Detail_GraphCutSeamFinderMinimum graph cut-based seam estimator. See details in V03 . 
* Detail_GraphCutSeamFinderBaseBase class for all minimum graph-cut-based seam estimators. * Detail_GraphCutSeamFinderGpu * Detail_GraphEdge * Detail_HomographyBasedEstimatorHomography based rotation estimator. * Detail_ImageFeaturesStructure containing image keypoints and descriptors. * Detail_MatchesInfoStructure containing information about matches between two images. * Detail_MercatorProjector * Detail_MercatorWarper * Detail_MultiBandBlenderBlender which uses multi-band blending algorithm (see BA83). * Detail_NoBundleAdjusterStub bundle adjuster that does nothing. * Detail_NoExposureCompensatorStub exposure compensator which does nothing. * Detail_NoSeamFinderStub seam estimator which does nothing. * Detail_PairwiseSeamFinderBase class for all pairwise seam estimators. * Detail_PaniniPortraitProjector * Detail_PaniniPortraitWarper * Detail_PaniniProjector * Detail_PaniniWarper * Detail_PlanePortraitProjector * Detail_PlanePortraitWarper * Detail_PlaneProjector * Detail_PlaneWarperWarper that maps an image onto the z = 1 plane. * Detail_PlaneWarperGpu * Detail_ProjectorBaseBase class for warping logic implementation. * Detail_RotationWarperRotation-only model image warper interface. * Detail_SeamFinderBase class for a seam estimator. * Detail_SphericalPortraitProjector * Detail_SphericalPortraitWarper * Detail_SphericalProjector * Detail_SphericalWarperWarper that maps an image onto the unit sphere located at the origin. * Detail_SphericalWarperGpu * Detail_StereographicProjector * Detail_StereographicWarper * Detail_TransverseMercatorProjector * Detail_TransverseMercatorWarper * Detail_VoronoiSeamFinderVoronoi diagram-based seam estimator. * FisheyeWarper * MercatorWarper * PaniniPortraitWarper * PaniniWarper * PlaneWarperPlane warper factory class. * PlaneWarperGpu * PyRotationWarper * SphericalWarperSpherical warper factory class * SphericalWarperGpu * StereographicWarper * StitcherHigh level image stitcher. 
* TransverseMercatorWarper * WarperCreatorImage warper factories base class. Enums --- * Detail_DpSeamFinder_CostFunction * Detail_GraphCutSeamFinderBase_CostType * Detail_WaveCorrectKind * Stitcher_Mode * Stitcher_Status Constants --- * Detail_Blender_FEATHER * Detail_Blender_MULTI_BAND * Detail_Blender_NO * Detail_DpSeamFinder_COLOR * Detail_DpSeamFinder_COLOR_GRAD * Detail_ExposureCompensator_CHANNELS * Detail_ExposureCompensator_CHANNELS_BLOCKS * Detail_ExposureCompensator_GAIN * Detail_ExposureCompensator_GAIN_BLOCKS * Detail_ExposureCompensator_NO * Detail_GraphCutSeamFinderBase_COST_COLOR * Detail_GraphCutSeamFinderBase_COST_COLOR_GRAD * Detail_SeamFinder_DP_SEAM * Detail_SeamFinder_NO * Detail_SeamFinder_VORONOI_SEAM * Detail_WAVE_CORRECT_AUTO * Detail_WAVE_CORRECT_HORIZ * Detail_WAVE_CORRECT_VERT * Stitcher_ERR_CAMERA_PARAMS_ADJUST_FAIL * Stitcher_ERR_HOMOGRAPHY_EST_FAIL * Stitcher_ERR_NEED_MORE_IMGS * Stitcher_OK * Stitcher_PANORAMAMode for creating photo panoramas. Expects images under perspective transformation and projects resulting pano to sphere. * Stitcher_SCANSMode for composing scans. Expects images under affine transformation and does not compensate exposure by default. 
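For instance, the Detail_Blender_FEATHER constant above selects Detail_FeatherBlender, which mixes images at their borders. A minimal one-dimensional sketch of the feathering idea in plain Rust, with illustrative names (this shows the underlying concept, not the crate's API):

```rust
/// Blend two overlapping 1-D "images" with linear feather weights.
/// `a` covers indices [0, a.len()); `b` covers [offset, offset + b.len()).
/// Inside the overlap, the weight of `a` ramps from 1 down to 0, so the
/// seam between the two images is smoothed out.
fn feather_blend(a: &[f32], b: &[f32], offset: usize) -> Vec<f32> {
    let len = a.len().max(offset + b.len());
    let overlap_start = offset;
    let overlap_end = a.len().min(offset + b.len());
    let mut out = vec![0.0f32; len];
    for i in 0..len {
        let in_a = i < a.len();
        let in_b = i >= offset;
        if in_a && in_b && overlap_end > overlap_start {
            // Linear ramp across the overlap region.
            let t = (i - overlap_start) as f32
                / (overlap_end - overlap_start) as f32;
            out[i] = a[i] * (1.0 - t) + b[i - offset] * t;
        } else if in_a {
            out[i] = a[i];
        } else {
            out[i] = b[i - offset];
        }
    }
    out
}

fn main() {
    let a = vec![1.0; 6]; // bright image
    let b = vec![0.0; 6]; // dark image, overlapping `a` on [2, 6)
    let blended = feather_blend(&a, &b, 2);
    // Values ramp smoothly from 1.0 down to 0.0 inside the overlap.
    println!("{:?}", blended);
}
```

Detail_MultiBandBlender applies the same idea per frequency band of a Laplacian pyramid, which hides seams better for large overlaps.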
Traits --- * AffineWarperTraitMutable methods for crate::stitching::AffineWarper * AffineWarperTraitConstConstant methods for crate::stitching::AffineWarper * CompressedRectilinearPortraitWarperTraitMutable methods for crate::stitching::CompressedRectilinearPortraitWarper * CompressedRectilinearPortraitWarperTraitConstConstant methods for crate::stitching::CompressedRectilinearPortraitWarper * CompressedRectilinearWarperTraitMutable methods for crate::stitching::CompressedRectilinearWarper * CompressedRectilinearWarperTraitConstConstant methods for crate::stitching::CompressedRectilinearWarper * CylindricalWarperGpuTraitMutable methods for crate::stitching::CylindricalWarperGpu * CylindricalWarperGpuTraitConstConstant methods for crate::stitching::CylindricalWarperGpu * CylindricalWarperTraitMutable methods for crate::stitching::CylindricalWarper * CylindricalWarperTraitConstConstant methods for crate::stitching::CylindricalWarper * Detail_AffineBasedEstimatorTraitMutable methods for crate::stitching::Detail_AffineBasedEstimator * Detail_AffineBasedEstimatorTraitConstConstant methods for crate::stitching::Detail_AffineBasedEstimator * Detail_AffineBestOf2NearestMatcherTraitMutable methods for crate::stitching::Detail_AffineBestOf2NearestMatcher * Detail_AffineBestOf2NearestMatcherTraitConstConstant methods for crate::stitching::Detail_AffineBestOf2NearestMatcher * Detail_AffineWarperTraitMutable methods for crate::stitching::Detail_AffineWarper * Detail_AffineWarperTraitConstConstant methods for crate::stitching::Detail_AffineWarper * Detail_BestOf2NearestMatcherTraitMutable methods for crate::stitching::Detail_BestOf2NearestMatcher * Detail_BestOf2NearestMatcherTraitConstConstant methods for crate::stitching::Detail_BestOf2NearestMatcher * Detail_BestOf2NearestRangeMatcherTraitMutable methods for crate::stitching::Detail_BestOf2NearestRangeMatcher * Detail_BestOf2NearestRangeMatcherTraitConstConstant methods for crate::stitching::Detail_BestOf2NearestRangeMatcher 
* Detail_BlenderTraitMutable methods for crate::stitching::Detail_Blender * Detail_BlenderTraitConstConstant methods for crate::stitching::Detail_Blender * Detail_BlocksChannelsCompensatorTraitMutable methods for crate::stitching::Detail_BlocksChannelsCompensator * Detail_BlocksChannelsCompensatorTraitConstConstant methods for crate::stitching::Detail_BlocksChannelsCompensator * Detail_BlocksCompensatorTraitMutable methods for crate::stitching::Detail_BlocksCompensator * Detail_BlocksCompensatorTraitConstConstant methods for crate::stitching::Detail_BlocksCompensator * Detail_BlocksGainCompensatorTraitMutable methods for crate::stitching::Detail_BlocksGainCompensator * Detail_BlocksGainCompensatorTraitConstConstant methods for crate::stitching::Detail_BlocksGainCompensator * Detail_BundleAdjusterAffinePartialTraitMutable methods for crate::stitching::Detail_BundleAdjusterAffinePartial * Detail_BundleAdjusterAffinePartialTraitConstConstant methods for crate::stitching::Detail_BundleAdjusterAffinePartial * Detail_BundleAdjusterAffineTraitMutable methods for crate::stitching::Detail_BundleAdjusterAffine * Detail_BundleAdjusterAffineTraitConstConstant methods for crate::stitching::Detail_BundleAdjusterAffine * Detail_BundleAdjusterBaseTraitMutable methods for crate::stitching::Detail_BundleAdjusterBase * Detail_BundleAdjusterBaseTraitConstConstant methods for crate::stitching::Detail_BundleAdjusterBase * Detail_BundleAdjusterRayTraitMutable methods for crate::stitching::Detail_BundleAdjusterRay * Detail_BundleAdjusterRayTraitConstConstant methods for crate::stitching::Detail_BundleAdjusterRay * Detail_BundleAdjusterReprojTraitMutable methods for crate::stitching::Detail_BundleAdjusterReproj * Detail_BundleAdjusterReprojTraitConstConstant methods for crate::stitching::Detail_BundleAdjusterReproj * Detail_CameraParamsTraitMutable methods for crate::stitching::Detail_CameraParams * Detail_CameraParamsTraitConstConstant methods for crate::stitching::Detail_CameraParams * 
Detail_ChannelsCompensatorTraitMutable methods for crate::stitching::Detail_ChannelsCompensator * Detail_ChannelsCompensatorTraitConstConstant methods for crate::stitching::Detail_ChannelsCompensator * Detail_CompressedRectilinearPortraitProjectorTraitMutable methods for crate::stitching::Detail_CompressedRectilinearPortraitProjector * Detail_CompressedRectilinearPortraitProjectorTraitConstConstant methods for crate::stitching::Detail_CompressedRectilinearPortraitProjector * Detail_CompressedRectilinearPortraitWarperTraitMutable methods for crate::stitching::Detail_CompressedRectilinearPortraitWarper * Detail_CompressedRectilinearPortraitWarperTraitConstConstant methods for crate::stitching::Detail_CompressedRectilinearPortraitWarper * Detail_CompressedRectilinearProjectorTraitMutable methods for crate::stitching::Detail_CompressedRectilinearProjector * Detail_CompressedRectilinearProjectorTraitConstConstant methods for crate::stitching::Detail_CompressedRectilinearProjector * Detail_CompressedRectilinearWarperTraitMutable methods for crate::stitching::Detail_CompressedRectilinearWarper * Detail_CompressedRectilinearWarperTraitConstConstant methods for crate::stitching::Detail_CompressedRectilinearWarper * Detail_CylindricalPortraitProjectorTraitMutable methods for crate::stitching::Detail_CylindricalPortraitProjector * Detail_CylindricalPortraitProjectorTraitConstConstant methods for crate::stitching::Detail_CylindricalPortraitProjector * Detail_CylindricalPortraitWarperTraitMutable methods for crate::stitching::Detail_CylindricalPortraitWarper * Detail_CylindricalPortraitWarperTraitConstConstant methods for crate::stitching::Detail_CylindricalPortraitWarper * Detail_CylindricalProjectorTraitMutable methods for crate::stitching::Detail_CylindricalProjector * Detail_CylindricalProjectorTraitConstConstant methods for crate::stitching::Detail_CylindricalProjector * Detail_CylindricalWarperGpuTraitMutable methods for crate::stitching::Detail_CylindricalWarperGpu * 
Detail_CylindricalWarperGpuTraitConstConstant methods for crate::stitching::Detail_CylindricalWarperGpu * Detail_CylindricalWarperTraitMutable methods for crate::stitching::Detail_CylindricalWarper * Detail_CylindricalWarperTraitConstConstant methods for crate::stitching::Detail_CylindricalWarper * Detail_DisjointSetsTraitMutable methods for crate::stitching::Detail_DisjointSets * Detail_DisjointSetsTraitConstConstant methods for crate::stitching::Detail_DisjointSets * Detail_DpSeamFinderTraitMutable methods for crate::stitching::Detail_DpSeamFinder * Detail_DpSeamFinderTraitConstConstant methods for crate::stitching::Detail_DpSeamFinder * Detail_EstimatorTraitMutable methods for crate::stitching::Detail_Estimator * Detail_EstimatorTraitConstConstant methods for crate::stitching::Detail_Estimator * Detail_ExposureCompensatorTraitMutable methods for crate::stitching::Detail_ExposureCompensator * Detail_ExposureCompensatorTraitConstConstant methods for crate::stitching::Detail_ExposureCompensator * Detail_FeatherBlenderTraitMutable methods for crate::stitching::Detail_FeatherBlender * Detail_FeatherBlenderTraitConstConstant methods for crate::stitching::Detail_FeatherBlender * Detail_FeaturesMatcherTraitMutable methods for crate::stitching::Detail_FeaturesMatcher * Detail_FeaturesMatcherTraitConstConstant methods for crate::stitching::Detail_FeaturesMatcher * Detail_FisheyeProjectorTraitMutable methods for crate::stitching::Detail_FisheyeProjector * Detail_FisheyeProjectorTraitConstConstant methods for crate::stitching::Detail_FisheyeProjector * Detail_FisheyeWarperTraitMutable methods for crate::stitching::Detail_FisheyeWarper * Detail_FisheyeWarperTraitConstConstant methods for crate::stitching::Detail_FisheyeWarper * Detail_GainCompensatorTraitMutable methods for crate::stitching::Detail_GainCompensator * Detail_GainCompensatorTraitConstConstant methods for crate::stitching::Detail_GainCompensator * Detail_GraphCutSeamFinderBaseTraitMutable methods for 
crate::stitching::Detail_GraphCutSeamFinderBase * Detail_GraphCutSeamFinderBaseTraitConstConstant methods for crate::stitching::Detail_GraphCutSeamFinderBase * Detail_GraphCutSeamFinderGpuTraitMutable methods for crate::stitching::Detail_GraphCutSeamFinderGpu * Detail_GraphCutSeamFinderGpuTraitConstConstant methods for crate::stitching::Detail_GraphCutSeamFinderGpu * Detail_GraphCutSeamFinderTraitMutable methods for crate::stitching::Detail_GraphCutSeamFinder * Detail_GraphCutSeamFinderTraitConstConstant methods for crate::stitching::Detail_GraphCutSeamFinder * Detail_GraphEdgeTraitMutable methods for crate::stitching::Detail_GraphEdge * Detail_GraphEdgeTraitConstConstant methods for crate::stitching::Detail_GraphEdge * Detail_GraphTraitMutable methods for crate::stitching::Detail_Graph * Detail_GraphTraitConstConstant methods for crate::stitching::Detail_Graph * Detail_HomographyBasedEstimatorTraitMutable methods for crate::stitching::Detail_HomographyBasedEstimator * Detail_HomographyBasedEstimatorTraitConstConstant methods for crate::stitching::Detail_HomographyBasedEstimator * Detail_ImageFeaturesTraitMutable methods for crate::stitching::Detail_ImageFeatures * Detail_ImageFeaturesTraitConstConstant methods for crate::stitching::Detail_ImageFeatures * Detail_MatchesInfoTraitMutable methods for crate::stitching::Detail_MatchesInfo * Detail_MatchesInfoTraitConstConstant methods for crate::stitching::Detail_MatchesInfo * Detail_MercatorProjectorTraitMutable methods for crate::stitching::Detail_MercatorProjector * Detail_MercatorProjectorTraitConstConstant methods for crate::stitching::Detail_MercatorProjector * Detail_MercatorWarperTraitMutable methods for crate::stitching::Detail_MercatorWarper * Detail_MercatorWarperTraitConstConstant methods for crate::stitching::Detail_MercatorWarper * Detail_MultiBandBlenderTraitMutable methods for crate::stitching::Detail_MultiBandBlender * Detail_MultiBandBlenderTraitConstConstant methods for 
crate::stitching::Detail_MultiBandBlender * Detail_NoBundleAdjusterTraitMutable methods for crate::stitching::Detail_NoBundleAdjuster * Detail_NoBundleAdjusterTraitConstConstant methods for crate::stitching::Detail_NoBundleAdjuster * Detail_NoExposureCompensatorTraitMutable methods for crate::stitching::Detail_NoExposureCompensator * Detail_NoExposureCompensatorTraitConstConstant methods for crate::stitching::Detail_NoExposureCompensator * Detail_NoSeamFinderTraitMutable methods for crate::stitching::Detail_NoSeamFinder * Detail_NoSeamFinderTraitConstConstant methods for crate::stitching::Detail_NoSeamFinder * Detail_PairwiseSeamFinderTraitMutable methods for crate::stitching::Detail_PairwiseSeamFinder * Detail_PairwiseSeamFinderTraitConstConstant methods for crate::stitching::Detail_PairwiseSeamFinder * Detail_PaniniPortraitProjectorTraitMutable methods for crate::stitching::Detail_PaniniPortraitProjector * Detail_PaniniPortraitProjectorTraitConstConstant methods for crate::stitching::Detail_PaniniPortraitProjector * Detail_PaniniPortraitWarperTraitMutable methods for crate::stitching::Detail_PaniniPortraitWarper * Detail_PaniniPortraitWarperTraitConstConstant methods for crate::stitching::Detail_PaniniPortraitWarper * Detail_PaniniProjectorTraitMutable methods for crate::stitching::Detail_PaniniProjector * Detail_PaniniProjectorTraitConstConstant methods for crate::stitching::Detail_PaniniProjector * Detail_PaniniWarperTraitMutable methods for crate::stitching::Detail_PaniniWarper * Detail_PaniniWarperTraitConstConstant methods for crate::stitching::Detail_PaniniWarper * Detail_PlanePortraitProjectorTraitMutable methods for crate::stitching::Detail_PlanePortraitProjector * Detail_PlanePortraitProjectorTraitConstConstant methods for crate::stitching::Detail_PlanePortraitProjector * Detail_PlanePortraitWarperTraitMutable methods for crate::stitching::Detail_PlanePortraitWarper * Detail_PlanePortraitWarperTraitConstConstant methods for 
crate::stitching::Detail_PlanePortraitWarper * Detail_PlaneProjectorTraitMutable methods for crate::stitching::Detail_PlaneProjector * Detail_PlaneProjectorTraitConstConstant methods for crate::stitching::Detail_PlaneProjector * Detail_PlaneWarperGpuTraitMutable methods for crate::stitching::Detail_PlaneWarperGpu * Detail_PlaneWarperGpuTraitConstConstant methods for crate::stitching::Detail_PlaneWarperGpu * Detail_PlaneWarperTraitMutable methods for crate::stitching::Detail_PlaneWarper * Detail_PlaneWarperTraitConstConstant methods for crate::stitching::Detail_PlaneWarper * Detail_ProjectorBaseTraitMutable methods for crate::stitching::Detail_ProjectorBase * Detail_ProjectorBaseTraitConstConstant methods for crate::stitching::Detail_ProjectorBase * Detail_RotationWarperTraitMutable methods for crate::stitching::Detail_RotationWarper * Detail_RotationWarperTraitConstConstant methods for crate::stitching::Detail_RotationWarper * Detail_SeamFinderTraitMutable methods for crate::stitching::Detail_SeamFinder * Detail_SeamFinderTraitConstConstant methods for crate::stitching::Detail_SeamFinder * Detail_SphericalPortraitProjectorTraitMutable methods for crate::stitching::Detail_SphericalPortraitProjector * Detail_SphericalPortraitProjectorTraitConstConstant methods for crate::stitching::Detail_SphericalPortraitProjector * Detail_SphericalPortraitWarperTraitMutable methods for crate::stitching::Detail_SphericalPortraitWarper * Detail_SphericalPortraitWarperTraitConstConstant methods for crate::stitching::Detail_SphericalPortraitWarper * Detail_SphericalProjectorTraitMutable methods for crate::stitching::Detail_SphericalProjector * Detail_SphericalProjectorTraitConstConstant methods for crate::stitching::Detail_SphericalProjector * Detail_SphericalWarperGpuTraitMutable methods for crate::stitching::Detail_SphericalWarperGpu * Detail_SphericalWarperGpuTraitConstConstant methods for crate::stitching::Detail_SphericalWarperGpu * Detail_SphericalWarperTraitMutable methods for 
crate::stitching::Detail_SphericalWarper * Detail_SphericalWarperTraitConstConstant methods for crate::stitching::Detail_SphericalWarper * Detail_StereographicProjectorTraitMutable methods for crate::stitching::Detail_StereographicProjector * Detail_StereographicProjectorTraitConstConstant methods for crate::stitching::Detail_StereographicProjector * Detail_StereographicWarperTraitMutable methods for crate::stitching::Detail_StereographicWarper * Detail_StereographicWarperTraitConstConstant methods for crate::stitching::Detail_StereographicWarper * Detail_TransverseMercatorProjectorTraitMutable methods for crate::stitching::Detail_TransverseMercatorProjector * Detail_TransverseMercatorProjectorTraitConstConstant methods for crate::stitching::Detail_TransverseMercatorProjector * Detail_TransverseMercatorWarperTraitMutable methods for crate::stitching::Detail_TransverseMercatorWarper * Detail_TransverseMercatorWarperTraitConstConstant methods for crate::stitching::Detail_TransverseMercatorWarper * Detail_VoronoiSeamFinderTraitMutable methods for crate::stitching::Detail_VoronoiSeamFinder * Detail_VoronoiSeamFinderTraitConstConstant methods for crate::stitching::Detail_VoronoiSeamFinder * FisheyeWarperTraitMutable methods for crate::stitching::FisheyeWarper * FisheyeWarperTraitConstConstant methods for crate::stitching::FisheyeWarper * MercatorWarperTraitMutable methods for crate::stitching::MercatorWarper * MercatorWarperTraitConstConstant methods for crate::stitching::MercatorWarper * PaniniPortraitWarperTraitMutable methods for crate::stitching::PaniniPortraitWarper * PaniniPortraitWarperTraitConstConstant methods for crate::stitching::PaniniPortraitWarper * PaniniWarperTraitMutable methods for crate::stitching::PaniniWarper * PaniniWarperTraitConstConstant methods for crate::stitching::PaniniWarper * PlaneWarperGpuTraitMutable methods for crate::stitching::PlaneWarperGpu * PlaneWarperGpuTraitConstConstant methods for crate::stitching::PlaneWarperGpu * 
PlaneWarperTraitMutable methods for crate::stitching::PlaneWarper * PlaneWarperTraitConstConstant methods for crate::stitching::PlaneWarper * PyRotationWarperTraitMutable methods for crate::stitching::PyRotationWarper * PyRotationWarperTraitConstConstant methods for crate::stitching::PyRotationWarper * SphericalWarperGpuTraitMutable methods for crate::stitching::SphericalWarperGpu * SphericalWarperGpuTraitConstConstant methods for crate::stitching::SphericalWarperGpu * SphericalWarperTraitMutable methods for crate::stitching::SphericalWarper * SphericalWarperTraitConstConstant methods for crate::stitching::SphericalWarper * StereographicWarperTraitMutable methods for crate::stitching::StereographicWarper * StereographicWarperTraitConstConstant methods for crate::stitching::StereographicWarper * StitcherTraitMutable methods for crate::stitching::Stitcher * StitcherTraitConstConstant methods for crate::stitching::Stitcher * TransverseMercatorWarperTraitMutable methods for crate::stitching::TransverseMercatorWarper * TransverseMercatorWarperTraitConstConstant methods for crate::stitching::TransverseMercatorWarper * WarperCreatorTraitMutable methods for crate::stitching::WarperCreator * WarperCreatorTraitConstConstant methods for crate::stitching::WarperCreator Functions --- * auto_detect_wave_correct_kindTries to detect the wave correction kind depending on whether a panorama spans horizontally or vertically * compute_image_featuresParameters * compute_image_features2Parameters * compute_image_features2_defParameters * compute_image_features_defParameters * create_laplace_pyr * create_laplace_pyr_gpu * create_weight_map * find_max_spanning_tree * leave_biggest_component * matches_graph_as_string/////////////////////////////////////////////////////////////////////////// * normalize_using_weight_map/////////////////////////////////////////////////////////////////////////// * overlap_roi/////////////////////////////////////////////////////////////////////////// * 
restore_image_from_laplace_pyr * restore_image_from_laplace_pyr_gpu * result_roi * result_roi_1 * result_roi_intersection * result_tl * select_random_subset * stitching_log_level * wave_correctTries to make panorama more horizontal (or vertical). Module opencv::structured_light === Structured Light API --- Structured light is considered one of the most effective techniques to acquire 3D models. This technique is based on projecting a light pattern and capturing the illuminated scene from one or more points of view. Since the pattern is coded, correspondences between image points and points of the projected pattern can be quickly found and 3D information easily retrieved. One of the most commonly exploited coding strategies is based on time-multiplexing (see trma). In this case, a set of patterns is successively projected onto the measuring surface. The codeword for a given pixel is usually formed by the sequence of illuminance values for that pixel across the projected patterns. Thus, the codification is called temporal because the bits of the codewords are multiplexed in time (see pattern). In this module a time-multiplexing coding strategy based on Gray encoding is implemented, following the (stereo) approach described in the 3DUNDERWORLD algorithm (see UNDERWORLD). For more details, see [tutorial_structured_light]. Modules --- * prelude Structs --- * GrayCodePatternClass implementing the Gray-code pattern, based on UNDERWORLD. * GrayCodePattern_ParamsParameters of StructuredLightPattern constructor. * SinusoidalPatternClass implementing Fourier transform profilometry (FTP), phase-shifting profilometry (PSP) and Fourier-assisted phase-shifting profilometry (FAPS) based on faps. * SinusoidalPattern_ParamsParameters of SinusoidalPattern constructor. * StructuredLightPatternAbstract base class for generating and decoding structured light patterns. Constants --- * DECODE_3D_UNDERWORLDK<NAME>, <NAME>. 
“3DUNDERWORLD-SLS: An Open-Source Structured-Light Scanning System for Rapid Geometry Acquisition”, arXiv preprint arXiv:1406.6595 (2014). * FAPS * FTP * PSP Traits --- * GrayCodePatternTraitMutable methods for crate::structured_light::GrayCodePattern * GrayCodePatternTraitConstConstant methods for crate::structured_light::GrayCodePattern * GrayCodePattern_ParamsTraitMutable methods for crate::structured_light::GrayCodePattern_Params * GrayCodePattern_ParamsTraitConstConstant methods for crate::structured_light::GrayCodePattern_Params * SinusoidalPatternTraitMutable methods for crate::structured_light::SinusoidalPattern * SinusoidalPatternTraitConstConstant methods for crate::structured_light::SinusoidalPattern * SinusoidalPattern_ParamsTraitMutable methods for crate::structured_light::SinusoidalPattern_Params * SinusoidalPattern_ParamsTraitConstConstant methods for crate::structured_light::SinusoidalPattern_Params * StructuredLightPatternTraitMutable methods for crate::structured_light::StructuredLightPattern * StructuredLightPatternTraitConstConstant methods for crate::structured_light::StructuredLightPattern Module opencv::superres === Super Resolution --- The Super Resolution module contains a set of functions and classes that can be used to solve the problem of resolution enhancement. There are a few methods implemented, most of them described in the papers Farsiu03 and Mitzel09. Modules --- * prelude Structs --- * SuperRes_BroxOpticalFlow * SuperRes_DenseOpticalFlowExt * SuperRes_DualTVL1OpticalFlow * SuperRes_FarnebackOpticalFlow * SuperRes_FrameSource * SuperRes_PyrLKOpticalFlow * SuperRes_SuperResolutionBase class for Super Resolution algorithms. 
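The SuperRes_SuperResolution algorithms are considerably more sophisticated (motion estimation, regularization, deblurring), but the core multi-frame principle, namely that several low-resolution frames taken at different sub-pixel offsets jointly constrain a finer grid, can be sketched in plain Rust. This is an idealized toy under the assumption of exactly known integer-phase offsets, not the module's BTV-L1 method:

```rust
/// Merge `factor` low-res frames onto a grid `factor` times finer.
/// `frames[k]` is assumed to have been sampled at high-res positions
/// `k, k + factor, k + 2*factor, ...` (i.e. known sub-pixel offset k).
fn merge_frames(frames: &[Vec<f32>], factor: usize) -> Vec<f32> {
    assert_eq!(frames.len(), factor);
    let n = frames[0].len();
    let mut hi = vec![0.0f32; n * factor];
    for (k, frame) in frames.iter().enumerate() {
        for (i, &v) in frame.iter().enumerate() {
            // The known offset places each low-res sample exactly
            // on the high-res grid.
            hi[i * factor + k] = v;
        }
    }
    hi
}

fn main() {
    // Ground-truth high-res signal, downsampled with offsets 0 and 1.
    let truth: Vec<f32> = (0..8).map(|x| x as f32).collect();
    let even: Vec<f32> = truth.iter().step_by(2).copied().collect();
    let odd: Vec<f32> = truth.iter().skip(1).step_by(2).copied().collect();
    let restored = merge_frames(&[even, odd], 2);
    assert_eq!(restored, truth); // perfectly recovered in this ideal case
    println!("{:?}", restored);
}
```

Real frames have non-integer motion, blur, and noise, which is why the module estimates optical flow and solves a regularized inverse problem instead of simple sample placement.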
Traits --- * SuperRes_BroxOpticalFlowTraitMutable methods for crate::superres::SuperRes_BroxOpticalFlow * SuperRes_BroxOpticalFlowTraitConstConstant methods for crate::superres::SuperRes_BroxOpticalFlow * SuperRes_DenseOpticalFlowExtTraitMutable methods for crate::superres::SuperRes_DenseOpticalFlowExt * SuperRes_DenseOpticalFlowExtTraitConstConstant methods for crate::superres::SuperRes_DenseOpticalFlowExt * SuperRes_DualTVL1OpticalFlowTraitMutable methods for crate::superres::SuperRes_DualTVL1OpticalFlow * SuperRes_DualTVL1OpticalFlowTraitConstConstant methods for crate::superres::SuperRes_DualTVL1OpticalFlow * SuperRes_FarnebackOpticalFlowTraitMutable methods for crate::superres::SuperRes_FarnebackOpticalFlow * SuperRes_FarnebackOpticalFlowTraitConstConstant methods for crate::superres::SuperRes_FarnebackOpticalFlow * SuperRes_FrameSourceTraitMutable methods for crate::superres::SuperRes_FrameSource * SuperRes_FrameSourceTraitConstConstant methods for crate::superres::SuperRes_FrameSource * SuperRes_PyrLKOpticalFlowTraitMutable methods for crate::superres::SuperRes_PyrLKOpticalFlow * SuperRes_PyrLKOpticalFlowTraitConstConstant methods for crate::superres::SuperRes_PyrLKOpticalFlow * SuperRes_SuperResolutionTraitMutable methods for crate::superres::SuperRes_SuperResolution * SuperRes_SuperResolutionTraitConstConstant methods for crate::superres::SuperRes_SuperResolution Functions --- * create_frame_source_cameraC++ default parameters * create_frame_source_camera_defNote * create_frame_source_empty * create_frame_source_video * create_frame_source_video_cuda * create_opt_flow_brox_cuda * create_opt_flow_dual_tvl1 * create_opt_flow_dual_tvl1_cuda * create_opt_flow_farneback * create_opt_flow_farneback_cuda * create_opt_flow_pyr_lk_cuda * create_super_resolution_btvl1Create Bilateral TV-L1 Super Resolution. 
* create_super_resolution_btvl1_cuda Module opencv::surface_matching === Surface Matching --- ### Note about the License and Patents The following patents have been issued for methods embodied in this software: “Recognition and pose determination of 3D objects in 3D scenes using geometric point pair descriptors and the generalized Hough Transform”, <NAME>, <NAME>, EP Patent 2385483 (Nov. 21, 2012), assignee: MVTec Software GmbH, 81675 Muenchen (Germany); “Recognition and pose determination of 3D objects in 3D scenes”, <NAME>, <NAME>, US Patent 8830229 (Sept. 9, 2014), assignee: MVTec Software GmbH, 81675 Muenchen (Germany). Further patents are pending. For further details, contact MVTec Software GmbH (<EMAIL>). Note that restrictions imposed by these patents (and possibly others) exist independently of and may be in conflict with the freedoms granted in this license, which refers to copyright of the program, not patents for any methods that it implements. Both copyright and patent law must be obeyed to legally use and redistribute this program and it is not the purpose of this license to induce you to infringe any patents or other property right claims or to contest validity of any such claims. If you redistribute or use the program, then this license merely protects you from committing copyright infringement. It does not protect you from committing patent infringement. So, before you do anything with this program, make sure that you have permission to do so not merely in terms of copyright, but also in terms of patent law. Please note that this license is not to be understood as a guarantee either. If you use the program according to this license, but in conflict with patent law, it does not mean that the licensor will refund you for any losses that you incur if you are sued for your patent infringement. ### Introduction to Surface Matching Cameras and similar devices capable of sensing 3D structure are becoming more common. 
Thus, using depth and intensity information for matching 3D objects (or parts) is of crucial importance for computer vision. Applications range from industrial control to guiding everyday actions for visually impaired people. The task of recognition and pose estimation in range images is to identify and localize a queried 3D free-form object by matching it to the acquired database. From an industrial perspective, enabling robots to automatically locate and pick up randomly placed and oriented objects from a bin is an important challenge in factory automation, replacing tedious and heavy manual labor. A system should be able to recognize and locate objects with a predefined shape and estimate their position with the precision necessary for a gripping robot to pick them up. This is where vision guided robotics takes the stage. Similar tools are also capable of guiding robots (and even people) through unstructured environments, leading to automated navigation. These properties make 3D matching from point clouds a ubiquitous necessity. Within this context, I will now describe the OpenCV implementation of a 3D object recognition and pose estimation algorithm using 3D features. ### Surface Matching Algorithm Through 3D Features The state of the art for the 3D matching task is heavily based on drost2010, which is one of the first and main practical methods presented in this area. The approach consists of extracting 3D feature points randomly from depth images or generic point clouds, indexing them, and later querying them efficiently at runtime. Only the 3D structure is considered, and a trivial hash table is used for feature queries. While fully aware that the CAD model structure could be exploited to achieve smarter point sampling, I will leave that aside here in order to preserve the generality of the method (typically, training on a CAD model is not needed for such algorithms; a point cloud is sufficient). 
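The "trivial hash table" used for feature queries can be sketched as follows: each feature, a distance plus three angles, is quantized into a discrete key that maps to the model point pairs that produced it. This is a plain-Rust illustration; the type name, quantization steps, and sample values below are hypothetical, not the crate's implementation:

```rust
use std::collections::HashMap;

type QuantizedPpf = (u32, u32, u32, u32);

/// Quantize a raw feature (distance, three angles in radians) into a
/// discrete hash key. Similar features fall into the same bin.
fn quantize(f: [f32; 4], dist_step: f32, angle_step: f32) -> QuantizedPpf {
    (
        (f[0] / dist_step) as u32,
        (f[1] / angle_step) as u32,
        (f[2] / angle_step) as u32,
        (f[3] / angle_step) as u32,
    )
}

fn main() {
    // Training: index two model point pairs by their quantized feature.
    let mut table: HashMap<QuantizedPpf, Vec<(usize, usize)>> = HashMap::new();
    let model_features = [
        ((0, 1), [0.52f32, 0.30, 0.91, 1.21]),
        ((0, 2), [0.55f32, 0.32, 0.95, 1.25]), // quantizes to the same bin
    ];
    for &(pair, raw) in &model_features {
        table.entry(quantize(raw, 0.1, 0.2)).or_default().push(pair);
    }
    // Runtime: a scene feature close to both model features retrieves both.
    let hits = &table[&quantize([0.53, 0.31, 0.93, 1.23], 0.1, 0.2)];
    println!("matching model pairs: {:?}", hits);
}
```

Each retrieved model pair then votes for a candidate pose, which is where the Hough-like voting described below comes in.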
Below is the outline of the entire algorithm:

![Outline of the Algorithm](https://docs.opencv.org/4.8.1/outline.jpg)

As explained, the algorithm relies on the extraction and indexing of point pair features, which are defined as follows:

$$\mathbf{F}(\mathbf{m}_1, \mathbf{m}_2) = (\|\mathbf{d}\|_2,\; \angle(\mathbf{n}_1, \mathbf{d}),\; \angle(\mathbf{n}_2, \mathbf{d}),\; \angle(\mathbf{n}_1, \mathbf{n}_2))$$

where $\mathbf{m}_1$ and $\mathbf{m}_2$ are two selected feature points on the model (or scene), $\mathbf{d} = \mathbf{m}_2 - \mathbf{m}_1$ is the difference vector, and $\mathbf{n}_1$ and $\mathbf{n}_2$ are the normals at $\mathbf{m}_1$ and $\mathbf{m}_2$. During the training stage, this vector is quantized and indexed. In the test stage, the same features are extracted from the scene and compared to the database. With a few tricks like separation of the rotational components, the pose estimation part can also be made efficient (check the reference for more details). A Hough-like voting and clustering scheme is employed to estimate the object pose. To cluster the poses, the raw pose hypotheses are sorted in decreasing order of the number of votes. Starting from the hypothesis with the highest vote count, a new cluster is created. If the next pose hypothesis is close to one of the existing clusters, it is added to that cluster and the cluster center is updated as the average of the pose hypotheses within the cluster. 
If the next hypothesis is not close to any of the clusters, it creates a new cluster. The proximity testing is done with fixed thresholds in translation and rotation. Distance computation and averaging for translation are performed in the 3D Euclidean space, while those for rotation are performed using the quaternion representation. After clustering, the clusters are sorted in decreasing order of the total number of votes, which determines the confidence of the estimated poses. This pose is further refined using ![inline formula](https://latex.codecogs.com/png.latex?ICP) in order to obtain the final pose. The PPF presented above depends largely on robust computation of angles between 3D vectors. Even though not reported in the paper, the naive way of doing this (![inline formula](https://latex.codecogs.com/png.latex?%5Ctheta%20%3D%20cos%5E%7B%2D1%7D%28%7B%5Cbf%7Ba%7D%7D%5Ccdot%7B%5Cbf%7Bb%7D%7D%29)) is numerically unstable. A better way is to use the inverse tangent instead: ![block formula](https://latex.codecogs.com/png.latex?%3C%28%5Cbf%7Bn1%7D%2C%5Cbf%7Bn2%7D%29%3Dtan%5E%7B%2D1%7D%28%7C%7C%7B%5Cbf%7Bn1%7D%20%20%5Cwedge%20%5Cbf%7Bn2%7D%7D%7C%7C%5F2%2C%20%5Cbf%7Bn1%7D%20%5Ccdot%20%5Cbf%7Bn2%7D%29) ### Rough Computation of Object Pose Given PPF Let me introduce the following notation: * ![inline formula](https://latex.codecogs.com/png.latex?p%5Ei%5Fm): ![inline formula](https://latex.codecogs.com/png.latex?i%5E%7Bth%7D) point of the model (![inline formula](https://latex.codecogs.com/png.latex?p%5Ej%5Fm) accordingly) * ![inline formula](https://latex.codecogs.com/png.latex?n%5Ei%5Fm): Normal of the ![inline formula](https://latex.codecogs.com/png.latex?i%5E%7Bth%7D) point of the model (![inline formula](https://latex.codecogs.com/png.latex?n%5Ej%5Fm) accordingly) * ![inline formula](https://latex.codecogs.com/png.latex?p%5Ei%5Fs): ![inline formula](https://latex.codecogs.com/png.latex?i%5E%7Bth%7D) point of the scene (![inline
formula](https://latex.codecogs.com/png.latex?p%5Ej%5Fs) accordingly) * ![inline formula](https://latex.codecogs.com/png.latex?n%5Ei%5Fs): Normal of the ![inline formula](https://latex.codecogs.com/png.latex?i%5E%7Bth%7D) point of the scene (![inline formula](https://latex.codecogs.com/png.latex?n%5Ej%5Fs) accordingly) * ![inline formula](https://latex.codecogs.com/png.latex?T%5F%7Bm%5Crightarrow%20g%7D): The transformation required to translate ![inline formula](https://latex.codecogs.com/png.latex?p%5Ei%5Fm) to the origin and rotate its normal ![inline formula](https://latex.codecogs.com/png.latex?n%5Ei%5Fm) onto the ![inline formula](https://latex.codecogs.com/png.latex?x)-axis. * ![inline formula](https://latex.codecogs.com/png.latex?R%5F%7Bm%5Crightarrow%20g%7D): Rotational component of ![inline formula](https://latex.codecogs.com/png.latex?T%5F%7Bm%5Crightarrow%20g%7D). * ![inline formula](https://latex.codecogs.com/png.latex?t%5F%7Bm%5Crightarrow%20g%7D): Translational component of ![inline formula](https://latex.codecogs.com/png.latex?T%5F%7Bm%5Crightarrow%20g%7D). * ![inline formula](https://latex.codecogs.com/png.latex?%28p%5Ei%5Fm%29%5E%7B%27%7D): ![inline formula](https://latex.codecogs.com/png.latex?i%5E%7Bth%7D) point of the model transformed by ![inline formula](https://latex.codecogs.com/png.latex?T%5F%7Bm%5Crightarrow%20g%7D). (![inline formula](https://latex.codecogs.com/png.latex?%28p%5Ej%5Fm%29%5E%7B%27%7D) accordingly). * ![inline formula](https://latex.codecogs.com/png.latex?%7B%5Cbf%7BR%5F%7Bm%5Crightarrow%20g%7D%7D%7D): Axis angle representation of rotation ![inline formula](https://latex.codecogs.com/png.latex?R%5F%7Bm%5Crightarrow%20g%7D). * ![inline formula](https://latex.codecogs.com/png.latex?%5Ctheta%5F%7Bm%5Crightarrow%20g%7D): The angular component of the axis angle representation ![inline formula](https://latex.codecogs.com/png.latex?%7B%5Cbf%7BR%5F%7Bm%5Crightarrow%20g%7D%7D%7D). 
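The four-component point pair feature and the numerically stable angle computation described above can be sketched in self-contained C++. The helper names `stableAngle` and `ppf` are illustrative only, not part of the module's API:

```cpp
#include <array>
#include <cmath>

// Numerically stable angle between two 3D vectors: atan2 of the cross-product
// norm and the dot product, instead of acos of the dot product (see text).
static double stableAngle(const std::array<double, 3>& a,
                          const std::array<double, 3>& b) {
    double cx = a[1] * b[2] - a[2] * b[1];
    double cy = a[2] * b[0] - a[0] * b[2];
    double cz = a[0] * b[1] - a[1] * b[0];
    double cross = std::sqrt(cx * cx + cy * cy + cz * cz);
    double dot = a[0] * b[0] + a[1] * b[1] + a[2] * b[2];
    return std::atan2(cross, dot);
}

// F(m1, m2) = (||d||_2, <(n1, d), <(n2, d), <(n1, n2)) for two oriented points.
static std::array<double, 4> ppf(const std::array<double, 3>& m1,
                                 const std::array<double, 3>& n1,
                                 const std::array<double, 3>& m2,
                                 const std::array<double, 3>& n2) {
    std::array<double, 3> d = {m2[0] - m1[0], m2[1] - m1[1], m2[2] - m1[2]};
    double len = std::sqrt(d[0] * d[0] + d[1] * d[1] + d[2] * d[2]);
    return {len, stableAngle(n1, d), stableAngle(n2, d), stableAngle(n1, n2)};
}
```

Note that the atan2 formulation needs no normalization: both the cross-product norm and the dot product scale by the same factor.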
The transformation in a point pair feature is computed by first finding the transformation ![inline formula](https://latex.codecogs.com/png.latex?T%5F%7Bm%5Crightarrow%20g%7D) from the first point, and applying the same transformation to the second one. Transforming each point, together with its normal, to the ground plane leaves a single angle to be determined when comparing against a new point pair. We can now write ![block formula](https://latex.codecogs.com/png.latex?%28p%5Ei%5Fm%29%5E%7B%27%7D%20%3D%20T%5F%7Bm%5Crightarrow%20g%7D%20p%5Ei%5Fm) where ![block formula](https://latex.codecogs.com/png.latex?T%5F%7Bm%5Crightarrow%20g%7D%20%3D%20%2Dt%5F%7Bm%5Crightarrow%20g%7DR%5F%7Bm%5Crightarrow%20g%7D) Note that this is nothing but a stacked transformation. The translational component ![inline formula](https://latex.codecogs.com/png.latex?t%5F%7Bm%5Crightarrow%20g%7D) reads ![block formula](https://latex.codecogs.com/png.latex?t%5F%7Bm%5Crightarrow%20g%7D%20%3D%20%2DR%5F%7Bm%5Crightarrow%20g%7Dp%5Ei%5Fm) and the rotational component is ![block formula](https://latex.codecogs.com/png.latex?%5Ctheta%5F%7Bm%5Crightarrow%20g%7D%20%3D%20%5Ccos%5E%7B%2D1%7D%28n%5Ei%5Fm%20%5Ccdot%20%7B%5Cbf%7Bx%7D%7D%29%5C%5C%0A%20%7B%5Cbf%7BR%5F%7Bm%5Crightarrow%20g%7D%7D%7D%20%3D%20n%5Ei%5Fm%20%5Cwedge%20%7B%5Cbf%7Bx%7D%7D) in axis angle format. Note that bold refers to the vector form. After this transformation, the feature vectors of the model are registered onto the ground plane X, and the angle with respect to ![inline formula](https://latex.codecogs.com/png.latex?x%3D0) is called ![inline formula](https://latex.codecogs.com/png.latex?%5Calpha%5Fm). Similarly, for the scene, it is called ![inline formula](https://latex.codecogs.com/png.latex?%5Calpha%5Fs). #### Hough-like Voting Scheme As shown in the outline, during the training stage, PPFs (point pair features) are extracted from the model, quantized, stored in the hash table, and indexed.
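The quantize-and-index step can be illustrated with a toy hash-key construction. The bin sizes (`distStep`, `angleStep`) and the 16-bit packing are assumptions made for this sketch; the detector's actual bucket layout differs:

```cpp
#include <array>
#include <cmath>
#include <cstdint>

// Hypothetical quantization of a 4D point pair feature into one hash key.
// distStep plays the role of the distance discretization (cf.
// RelativeDistanceStep); angleStep discretizes the three angles.
static std::uint64_t ppfHashKey(const std::array<double, 4>& f,
                                double distStep, double angleStep) {
    std::uint64_t d  = static_cast<std::uint64_t>(f[0] / distStep);
    std::uint64_t a1 = static_cast<std::uint64_t>(f[1] / angleStep);
    std::uint64_t a2 = static_cast<std::uint64_t>(f[2] / angleStep);
    std::uint64_t a3 = static_cast<std::uint64_t>(f[3] / angleStep);
    // Pack the four bin indices into one 64-bit key (16 bits each).
    return (d << 48) | (a1 << 32) | (a2 << 16) | a3;
}
```

Two features that fall into the same bins hash to the same bucket, which is what makes the runtime similarity lookup a constant-time operation.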
During runtime, however, a similar operation is performed on the input scene, with the exception that this time a similarity lookup over the hash table is performed instead of an insertion. This lookup also allows us to compute a transformation to the ground plane for the scene pairs. After this point, computing the rotational component of the pose reduces to computing the difference ![inline formula](https://latex.codecogs.com/png.latex?%5Calpha%3D%5Calpha%5Fm%2D%5Calpha%5Fs). This component carries the cue about the object pose. A Hough-like voting scheme is performed over the local model coordinate vector and ![inline formula](https://latex.codecogs.com/png.latex?%5Calpha). The highest-voted poses obtained for every scene point let us recover the object pose. #### Source Code for PPF Matching

```
// pc is the loaded point cloud of the model
// (Nx6) and pcTest is a loaded point cloud of
// the scene (Mx6)
ppf_match_3d::PPF3DDetector detector(0.03, 0.05);
detector.trainModel(pc);
vector<Pose3DPtr> results;
detector.match(pcTest, results, 1.0/10.0, 0.05);
cout << "Poses: " << endl;
// print the poses
for (size_t i=0; i<results.size(); i++)
{
    Pose3DPtr pose = results[i];
    cout << "Pose Result " << i << endl;
    pose->printPose();
}
```

### Pose Registration via ICP The matching process terminates with the attainment of the pose. However, due to multiple matching points, erroneous hypotheses, pose averaging, etc., this pose is sensitive to noise and often far from perfect. Although the visual results obtained in that stage are pleasing, the quantitative evaluation shows ![inline formula](https://latex.codecogs.com/png.latex?%7E10) degrees of variation (error), which is an acceptable level of matching. Many times, the requirement might be set well beyond this margin and it is desired to refine the computed pose.
Furthermore, in typical RGBD scenes and point clouds, the 3D structure can capture only less than half of the model due to visibility in the scene. Therefore, a robust pose refinement algorithm, which can register occluded and partially visible shapes quickly and correctly, is not an unrealistic wish. At this point, a trivial option would be to use the well-known iterative closest point (ICP) algorithm. However, the basic ICP leads to slow convergence, bad registration, outlier sensitivity, and failure to register partial shapes. Thus, it is definitely not suited to the problem. For this reason, many variants have been proposed. Different variants contribute to different stages of the pose estimation process. ICP is composed of ![inline formula](https://latex.codecogs.com/png.latex?6) stages and the improvements I propose for each stage are summarized below. #### Sampling To improve convergence speed and computation time, it is common to use fewer points than the model actually has. However, sampling the correct points to register is an issue in itself. The naive way would be to sample uniformly and hope to get a reasonable subset. Smarter approaches try to identify the critical points, which are found to contribute strongly to the registration process. Gelfand et al. exploit the covariance matrix in order to constrain the eigenspace, so that a set of points which affect both translation and rotation are used. This is a clever way of subsampling, which I will optionally be using in the implementation. #### Correspondence Search As the name implies, this step is the assignment of the points in the data and the model in a closest-point fashion. Correct assignments lead to a correct pose, whereas wrong assignments strongly degrade the result. In general, KD-trees are used in the search of nearest neighbors, to increase the speed. However, this is not an optimality guarantee and many times causes wrong points to be matched.
Luckily, the assignments are corrected over iterations. To overcome some of the limitations, Picky ICP pickyicp and BC-ICP (ICP using bi-unique correspondences) are two well-known methods. Picky ICP first finds the correspondences in the old-fashioned way and then, among the resulting corresponding pairs, if more than one scene point ![inline formula](https://latex.codecogs.com/png.latex?p%5Fi) is assigned to the same model point ![inline formula](https://latex.codecogs.com/png.latex?m%5Fj), it selects the ![inline formula](https://latex.codecogs.com/png.latex?p%5Fi) that corresponds to the minimum distance. BC-ICP, on the other hand, allows multiple correspondences first and then resolves the assignments by establishing bi-unique correspondences. It also defines a novel no-correspondence outlier, which intrinsically eases the process of identifying outliers. For reference, both methods are implemented. Because Picky ICP is a bit faster, with a not-so-significant performance drawback, it is the method of choice for the refinement of correspondences. #### Weighting of Pairs In my implementation, I currently do not use a weighting scheme, but common approaches involve *normal compatibility* (![inline formula](https://latex.codecogs.com/png.latex?w%5Fi%3Dn%5E1%5Fi%5Ccdot%20n%5E2%5Fj)) or assigning lower weights to point pairs with greater distances (![inline formula](https://latex.codecogs.com/png.latex?w%3D1%2D%5Cfrac%7B%7C%7Cdist%28m%5Fi%2Cs%5Fi%29%7C%7C%5F2%7D%7Bdist%5F%7Bmax%7D%7D)). #### Rejection of Pairs The rejections are done using dynamic thresholding based on a robust estimate of the standard deviation. In other words, in each iteration, I find the MAD estimate of the standard deviation, denoted ![inline formula](https://latex.codecogs.com/png.latex?mad%5Fi). I reject the pairs with distances ![inline formula](https://latex.codecogs.com/png.latex?d%5Fi%3E%5Ctau%20mad%5Fi).
Here ![inline formula](https://latex.codecogs.com/png.latex?%5Ctau) is the threshold of rejection, set by default to ![inline formula](https://latex.codecogs.com/png.latex?3). The weighting is applied prior to the Picky refinement explained in the previous stage. #### Error Metric A linearization of the point-to-plane error metric, as in koklimlow, is used. This both speeds up the registration process and improves convergence. #### Minimization Even though many non-linear optimizers (such as Levenberg-Marquardt) have been proposed, due to the linearization in the previous step, pose estimation reduces to solving a linear system of equations. This is exactly what I do, using cv::solve with the DECOMP_SVD option. #### ICP Algorithm Having described the steps above, here I summarize the layout of the ICP algorithm. ##### Efficient ICP Through Point Cloud Pyramids While the variants proposed so far deal well with some outliers and bad initializations, they require a significant number of iterations. A multi-resolution scheme can help reduce the number of iterations by allowing the registration to start from a coarse level and propagate to the lower, finer levels. Such an approach improves both the performance and the runtime. The search is done through multiple levels, in a hierarchical fashion. The registration starts with a very coarse set of samples of the model. Iteratively, the point set is densified and matched. After each iteration, the previously estimated pose is used as the initial pose and refined with ICP.
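The MAD-based pair rejection from the Rejection of Pairs stage can be sketched as follows. Note two assumptions in this sketch: it thresholds the deviation from the median distance, and it applies the usual 1.4826 consistency factor to turn the MAD into a standard-deviation estimate; both are common formulations that may differ in detail from the module's internal rule:

```cpp
#include <algorithm>
#include <cmath>
#include <cstddef>
#include <vector>

// Median helper (copies the input; fine for a sketch).
static double median(std::vector<double> v) {
    std::sort(v.begin(), v.end());
    std::size_t n = v.size();
    return n % 2 ? v[n / 2] : 0.5 * (v[n / 2 - 1] + v[n / 2]);
}

// Robust rejection of correspondence pairs: estimate the spread of the
// residual distances via the MAD (median absolute deviation), then keep
// only pairs within tau (default 3, see text) robust standard deviations.
static std::vector<double> rejectByMad(const std::vector<double>& dists,
                                       double tau = 3.0) {
    double med = median(dists);
    std::vector<double> dev;
    dev.reserve(dists.size());
    for (double d : dists) dev.push_back(std::fabs(d - med));
    double sigma = 1.4826 * median(dev); // MAD-based estimate of the std. dev.
    std::vector<double> kept;
    for (double d : dists)
        if (std::fabs(d - med) <= tau * sigma) kept.push_back(d);
    return kept;
}
```

Because both the center and the spread are medians, a single gross outlier barely moves the threshold, which is exactly why this rule is preferred over a mean/variance-based one.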
##### Visual Results ###### Results on Synthetic Data In all of the results, the pose is initiated by PPF and the rest is left as: ![inline formula](https://latex.codecogs.com/png.latex?%5B%5Ctheta%5Fx%2C%20%5Ctheta%5Fy%2C%20%5Ctheta%5Fz%2C%20t%5Fx%2C%20t%5Fy%2C%20t%5Fz%5D%3D%5B0%5D) #### Source Code for Pose Refinement Using ICP

```
ICP icp(200, 0.001f, 2.5f, 8);
// Using the previously declared pc and pcTest
// This will perform registration for every pose
// contained in results
icp.registerModelToScene(pc, pcTest, results);
// results now contain the refined poses
```

### Results This section is dedicated to the results of surface matching (point-pair-feature matching and a following ICP refinement): ![Several matches of a single frog model using ppf + icp](https://docs.opencv.org/4.8.1/gsoc_forg_matches.jpg) Matches of different models for the Mian dataset are presented below: ![Matches of different models for Mian dataset](https://docs.opencv.org/4.8.1/snapshot27.jpg) You can check out the video on YouTube here. ### A Complete Sample #### Parameter Tuning The surface matching module treats its parameters relative to the model diameter (diameter of the axis-parallel bounding box) whenever it can. This makes the parameters independent of the model size. This is why both the model and the scene cloud are subsampled such that all points have a minimum distance of ![inline formula](https://latex.codecogs.com/png.latex?RelativeSamplingStep%2ADimensionRange), where ![inline formula](https://latex.codecogs.com/png.latex?DimensionRange) is the distance along a given dimension. All three dimensions are sampled in a similar manner. For example, if ![inline formula](https://latex.codecogs.com/png.latex?RelativeSamplingStep) is set to 0.05 and the diameter of the model is 1 m (1000 mm), the points sampled from the object's surface will be approximately 50 mm apart.
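The sampling arithmetic above can be checked with a short computation (assuming an axis-aligned bounding box; the helper names are illustrative, not part of the API):

```cpp
#include <cmath>

// Minimum spacing between sampled points along one dimension:
// RelativeSamplingStep * DimensionRange (see text).
static double sampleSpacing(double relativeSamplingStep, double dimensionRange) {
    return relativeSamplingStep * dimensionRange;
}

// Upper bound on the number of sampled model points: each dimension is
// divided into roughly 1/RelativeSamplingStep cells, and at most one
// point survives per cell.
static long maxSampledPoints(double relativeSamplingStep) {
    long perDim = std::lround(1.0 / relativeSamplingStep);
    return perDim * perDim * perDim;
}
```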
From another point of view, if the sampling RelativeSamplingStep is set to 0.05, at most ![inline formula](https://latex.codecogs.com/png.latex?20x20x20%20%3D%208000) model points are generated (depending on how the model fills in the volume). Consequently, this results in at most 8000x8000 pairs. In practice, because the models are not uniformly distributed over a rectangular prism, far fewer points are to be expected. Decreasing this value results in more model points and thus a more accurate representation. However, note that the number of point pair features to be computed then increases quadratically, as the complexity is O(N^2). This is especially a concern for 32-bit systems, where large models can easily overshoot the available memory. Typically, values in the range of 0.025 - 0.05 seem adequate for most applications, and the default value is 0.03. (Note that there is a difference between this parameter and the one presented in drost2010. In drost2010 a uniform cuboid is used for quantization and the model diameter is used as the reference for sampling. In my implementation, the cuboid is a rectangular prism, and each dimension is quantized independently. I do not take reference from the diameter but along the individual dimensions.) It would be very wise to remove the outliers from the model and prepare an ideal model initially, because the outliers directly affect the relative computations and degrade the matching accuracy. During the runtime stage, the scene is again sampled by ![inline formula](https://latex.codecogs.com/png.latex?RelativeSamplingStep), as described above. However, this time only a portion of the scene points are used as references. This portion is controlled by the parameter ![inline formula](https://latex.codecogs.com/png.latex?RelativeSceneSampleStep), where ![inline formula](https://latex.codecogs.com/png.latex?SceneSampleStep%20%3D%20%28int%29%281%2E0%2FRelativeSceneSampleStep%29).
In other words, if ![inline formula](https://latex.codecogs.com/png.latex?RelativeSceneSampleStep%20%3D%201%2E0%2F5%2E0), the subsampled scene will once again be uniformly sampled down to 1/5 of the number of points. The maximum value of this parameter is 1; increasing it increases the stability but decreases the speed. Again, because of the initial scene-independent relative sampling, fine-tuning this parameter is not a big concern. It would only be an issue when the model shape occupies a volume uniformly, or when the model shape is condensed in a tiny place within the quantization volume (e.g. an octree representation would have too many empty cells). ![inline formula](https://latex.codecogs.com/png.latex?RelativeDistanceStep) acts as the step of discretization over the hash table. The point pair features are quantized to be mapped to the buckets of the hash table. This discretization involves a multiplication and a cast to integer. Adjusting RelativeDistanceStep in theory controls the collision rate. Note that more collisions in the hash table result in less accurate estimations. Reducing this parameter increases the effect of quantization but starts to assign non-similar point pairs to the same bins. Increasing it, however, weakens the ability to group similar pairs. Generally, because during the sampling stage the training model points are selected uniformly with a distance controlled by RelativeSamplingStep, RelativeDistanceStep is expected to equal this value. Yet again, values in the range of 0.025-0.05 are sensible. This time, however, when the model is dense, it is not advised to decrease this value. For noisy scenes, the value can be increased to improve the robustness of the matching against noisy points. Modules --- * prelude Structs --- * ICPThis class implements a very efficient and robust variant of the iterative closest point (ICP) algorithm.
The task is to register a 3D model (or point cloud) against a set of noisy target data. The variants are put together by myself after certain tests. The task is to be able to match partial, noisy point clouds in cluttered scenes, quickly. You will find that my emphasis is on the performance, while retaining the accuracy. This implementation is based on Tolga Birdal's MATLAB implementation in here: http://www.mathworks.com/matlabcentral/fileexchange/47152-icp-registration-using-efficient-variants-and-multi-resolution-scheme The main contributions come from: * PPF3DDetectorClass allowing the loading and matching of 3D models. Typical Use: * Pose3DClass allowing the storage of a pose. The data structure stores both the quaternion and the matrix forms. It supports IO functionality together with various helper methods to work with poses * PoseCluster3DWhen multiple poses (see Pose3D) are grouped together (contribute to the same transformation), pose clusters occur. This class is a general container for such groups of poses. It is possible to store, load and perform IO on these poses.
Constants --- * ICP_ICP_SAMPLING_TYPE_GELFAND * ICP_ICP_SAMPLING_TYPE_UNIFORM Traits --- * ICPTraitMutable methods for crate::surface_matching::ICP * ICPTraitConstConstant methods for crate::surface_matching::ICP * PPF3DDetectorTraitMutable methods for crate::surface_matching::PPF3DDetector * PPF3DDetectorTraitConstConstant methods for crate::surface_matching::PPF3DDetector * Pose3DTraitMutable methods for crate::surface_matching::Pose3D * Pose3DTraitConstConstant methods for crate::surface_matching::Pose3D * PoseCluster3DTraitMutable methods for crate::surface_matching::PoseCluster3D * PoseCluster3DTraitConstConstant methods for crate::surface_matching::PoseCluster3D Type Aliases --- * Pose3DPtr * PoseCluster3DPtr * key_type Module opencv::text === Scene Text Detection and Recognition --- The opencv_text module provides different algorithms for text detection and recognition in natural scene images. Scene Text Detection --- ### Class-specific Extremal Regions for Scene Text Detection The scene text detection algorithm described below was initially proposed by Lukáš Neumann & Jiří Matas. The main idea behind Class-specific Extremal Regions is similar to MSER in that suitable Extremal Regions (ERs) are selected from the whole component tree of the image. However, this technique differs from MSER in that the selection of suitable ERs is done by a sequential classifier trained for character detection, i.e. dropping the stability requirement of MSERs and selecting class-specific (not necessarily stable) regions. The component tree of an image is constructed by thresholding with an increasing value step-by-step from 0 to 255 and then linking the obtained connected components from successive levels in a hierarchy by their inclusion relation: ![image](https://docs.opencv.org/4.8.1/component_tree.png) The component tree may contain a huge number of regions even for a very simple image, as shown in the previous image.
This number can easily reach the order of 1 x 10^6 regions for an average 1 Megapixel image. In order to efficiently select suitable regions among all the ERs, the algorithm makes use of a sequential classifier with two differentiated stages. In the first stage, incrementally computable descriptors (area, perimeter, bounding box, and Euler's number) are computed (in O(1)) for each region r and used as features for a classifier which estimates the class-conditional probability p(r|character). Only the ERs which correspond to local maxima of the probability p(r|character) are selected (if their probability is above a global limit p_min and the difference between local maximum and local minimum is greater than a delta_min value). In the second stage, the ERs that passed the first stage are classified into character and non-character classes using more informative but also more computationally expensive features (hole area ratio, convex hull ratio, and the number of outer boundary inflexion points). This ER filtering process is done in different single-channel projections of the input image in order to increase the character localization recall. After the ER filtering is done on each input channel, character candidates must be grouped into high-level text blocks (i.e. words, text lines, paragraphs,
etc.). The opencv_text module implements two different grouping algorithms: the Exhaustive Search algorithm proposed in Neumann12 for grouping horizontally aligned text, and the method proposed by Lluís Gómez and Dimosthenis Karatzas in Gomez13 Gomez14 for grouping arbitrarily oriented text (see erGrouping). To see the text detector at work, have a look at the textdetection demo: https://github.com/opencv/opencv_contrib/blob/master/modules/text/samples/textdetection.cpp Scene Text Recognition --- Modules --- * prelude Structs --- * BaseOCR * ERFilterBase class for 1st and 2nd stages of Neumann and Matas scene text detection algorithm Neumann12. : * ERFilter_CallbackCallback with the classifier is made a class. * ERStatThe ERStat structure represents a class-specific Extremal Region (ER). * OCRBeamSearchDecoderOCRBeamSearchDecoder class provides an interface for OCR using the Beam Search algorithm. * OCRBeamSearchDecoder_ClassifierCallbackCallback with the character classifier is made a class. * OCRHMMDecoderOCRHMMDecoder class provides an interface for OCR using Hidden Markov Models. * OCRHMMDecoder_ClassifierCallbackCallback with the character classifier is made a class. * OCRHolisticWordRecognizerOCRHolisticWordRecognizer class provides the functionality of segmented wordspotting. Given a predefined vocabulary, a DictNet is employed to select the most probable word given an input image. * OCRTesseractOCRTesseract class provides an interface with the tesseract-ocr API (v3.02.02) in C++. * TextDetectorAn abstract class providing an interface for text detection algorithms * TextDetectorCNNTextDetectorCNN class provides the functionality of text bounding box detection. This class finds bounding boxes of text words given an input image. It uses the OpenCV dnn module to load a pre-trained model described in LiaoSBWL17. The original repository with the modified SSD Caffe version: https://github.com/MhLiao/TextBoxes. The model can be downloaded from DropBox.
The modified .prototxt file with the model description can be found in `opencv_contrib/modules/text/samples/textbox.prototxt`. Enums --- * classifier_type * decoder_mode * erGrouping_Modestext::erGrouping operation modes * ocr_engine_modeTesseract.OcrEngineMode Enumeration * page_seg_modeTesseract.PageSegMode Enumeration Constants --- * ERFILTER_NM_IHSGrad * ERFILTER_NM_RGBLGrad * ERGROUPING_ORIENTATION_ANYText grouping method proposed in Gomez13 Gomez14 for grouping arbitrarily oriented text. Regions are agglomerated by Single Linkage Clustering in a weighted feature space that combines proximity (x,y coordinates) and similarity measures (color, size, gradient magnitude, stroke width, etc.). SLC provides a dendrogram where each node represents a text group hypothesis. Then the algorithm finds the branches corresponding to text groups by traversing this dendrogram with a stopping rule that combines the output of a rotation-invariant text group classifier and a probabilistic measure for hierarchical clustering validity assessment. * ERGROUPING_ORIENTATION_HORIZExhaustive Search algorithm proposed in Neumann11 for grouping horizontally aligned text. The algorithm models a verification function for all the possible ER sequences. The verification function for ER pairs consists of a set of threshold-based pairwise rules which compare measurements of two regions (height ratio, centroid angle, and region distance). The verification function for ER triplets creates a word text line estimate using Least Median-Squares fitting for a given triplet and then verifies that the estimate is valid (based on thresholds created during training). Verification functions for sequences larger than 3 are approximated by verifying that the text line parameters of all (sub)sequences of length 3 are consistent.
* OCR_CNN_CLASSIFIER * OCR_DECODER_VITERBI * OCR_KNN_CLASSIFIER * OCR_LEVEL_TEXTLINE * OCR_LEVEL_WORD * OEM_CUBE_ONLY * OEM_DEFAULT * OEM_TESSERACT_CUBE_COMBINED * OEM_TESSERACT_ONLY * PSM_AUTO * PSM_AUTO_ONLY * PSM_AUTO_OSD * PSM_CIRCLE_WORD * PSM_OSD_ONLY * PSM_SINGLE_BLOCK * PSM_SINGLE_BLOCK_VERT_TEXT * PSM_SINGLE_CHAR * PSM_SINGLE_COLUMN * PSM_SINGLE_LINE * PSM_SINGLE_WORD Traits --- * BaseOCRTraitMutable methods for crate::text::BaseOCR * BaseOCRTraitConstConstant methods for crate::text::BaseOCR * ERFilterTraitMutable methods for crate::text::ERFilter * ERFilterTraitConstConstant methods for crate::text::ERFilter * ERFilter_CallbackTraitMutable methods for crate::text::ERFilter_Callback * ERFilter_CallbackTraitConstConstant methods for crate::text::ERFilter_Callback * ERStatTraitMutable methods for crate::text::ERStat * ERStatTraitConstConstant methods for crate::text::ERStat * OCRBeamSearchDecoderTraitMutable methods for crate::text::OCRBeamSearchDecoder * OCRBeamSearchDecoderTraitConstConstant methods for crate::text::OCRBeamSearchDecoder * OCRBeamSearchDecoder_ClassifierCallbackTraitMutable methods for crate::text::OCRBeamSearchDecoder_ClassifierCallback * OCRBeamSearchDecoder_ClassifierCallbackTraitConstConstant methods for crate::text::OCRBeamSearchDecoder_ClassifierCallback * OCRHMMDecoderTraitMutable methods for crate::text::OCRHMMDecoder * OCRHMMDecoderTraitConstConstant methods for crate::text::OCRHMMDecoder * OCRHMMDecoder_ClassifierCallbackTraitMutable methods for crate::text::OCRHMMDecoder_ClassifierCallback * OCRHMMDecoder_ClassifierCallbackTraitConstConstant methods for crate::text::OCRHMMDecoder_ClassifierCallback * OCRHolisticWordRecognizerTraitMutable methods for crate::text::OCRHolisticWordRecognizer * OCRHolisticWordRecognizerTraitConstConstant methods for crate::text::OCRHolisticWordRecognizer * OCRTesseractTraitMutable methods for crate::text::OCRTesseract * OCRTesseractTraitConstConstant methods for crate::text::OCRTesseract * 
TextDetectorCNNTraitMutable methods for crate::text::TextDetectorCNN * TextDetectorCNNTraitConstConstant methods for crate::text::TextDetectorCNN * TextDetectorTraitMutable methods for crate::text::TextDetector * TextDetectorTraitConstConstant methods for crate::text::TextDetector Functions --- * compute_nm_channelsCompute the different channels to be processed independently in the N&M algorithm Neumann12. * compute_nm_channels_defCompute the different channels to be processed independently in the N&M algorithm Neumann12. * create_er_filter_nm1Create an Extremal Region Filter for the 1st stage classifier of N&M algorithm Neumann12. * create_er_filter_nm1_defCreate an Extremal Region Filter for the 1st stage classifier of N&M algorithm Neumann12. * create_er_filter_nm1_from_fileReads an Extremal Region Filter for the 1st stage classifier of N&M algorithm from the provided path e.g. /path/to/cpp/trained_classifierNM1.xml * create_er_filter_nm1_from_file_defReads an Extremal Region Filter for the 1st stage classifier of N&M algorithm from the provided path e.g. /path/to/cpp/trained_classifierNM1.xml * create_er_filter_nm2Create an Extremal Region Filter for the 2nd stage classifier of N&M algorithm Neumann12. * create_er_filter_nm2_defCreate an Extremal Region Filter for the 2nd stage classifier of N&M algorithm Neumann12. * create_er_filter_nm2_from_fileReads an Extremal Region Filter for the 2nd stage classifier of N&M algorithm from the provided path e.g. /path/to/cpp/trained_classifierNM2.xml * create_er_filter_nm2_from_file_defReads an Extremal Region Filter for the 2nd stage classifier of N&M algorithm from the provided path e.g. /path/to/cpp/trained_classifierNM2.xml * create_ocrhmm_transitions_tableUtility function to create a tailored language model transitions table from a given list of words (lexicon). * create_ocrhmm_transitions_table_1 * detect_regions * detect_regions_from_fileExtracts text regions from image. 
* detect_regions_from_file_defExtracts text regions from image. * detect_text_swtApplies the Stroke Width Transform operator followed by filtering of connected components of similar Stroke Widths to return letter candidates. It also chains them by proximity and size, saving the result in chainBBs. * detect_text_swt_defApplies the Stroke Width Transform operator followed by filtering of connected components of similar Stroke Widths to return letter candidates. It also chains them by proximity and size, saving the result in chainBBs. * er_groupingFind groups of Extremal Regions that are organized as text blocks. * er_grouping_1C++ default parameters * er_grouping_1_defNote * er_grouping_defFind groups of Extremal Regions that are organized as text blocks. * load_classifier_nm1Allow to implicitly load the default classifier when creating an ERFilter object. * load_classifier_nm2Allow to implicitly load the default classifier when creating an ERFilter object. * load_ocr_beam_search_classifier_cnnAllow to implicitly load the default character classifier when creating an OCRBeamSearchDecoder object. * load_ocrhmm_classifierAllow to implicitly load the default character classifier when creating an OCRHMMDecoder object. * load_ocrhmm_classifier_cnnDeprecatedAllow to implicitly load the default character classifier when creating an OCRHMMDecoder object. * load_ocrhmm_classifier_nmDeprecatedAllow to implicitly load the default character classifier when creating an OCRHMMDecoder object. * mse_rs_to_er_statsConverts MSER contours (vector<Point>) to ERStat regions.
Module opencv::tracking === Tracking API --- Tracking API implementation details --- Legacy Tracking API --- Modules --- * prelude Structs --- * TrackerCSRTthe CSRT tracker * TrackerCSRT_Params * TrackerKCFthe KCF (Kernelized Correlation Filter) tracker * TrackerKCF_Params Enums --- * TrackerKCF_MODEFeature type to be used in the tracking: grayscale, colornames, compressed color-names. The modes available now: Constants --- * TrackerKCF_CN * TrackerKCF_CUSTOM * TrackerKCF_GRAY Traits --- * TrackerCSRTTraitMutable methods for crate::tracking::TrackerCSRT * TrackerCSRTTraitConstConstant methods for crate::tracking::TrackerCSRT * TrackerCSRT_ParamsTraitMutable methods for crate::tracking::TrackerCSRT_Params * TrackerCSRT_ParamsTraitConstConstant methods for crate::tracking::TrackerCSRT_Params * TrackerKCFTraitMutable methods for crate::tracking::TrackerKCF * TrackerKCFTraitConstConstant methods for crate::tracking::TrackerKCF Type Aliases --- * TrackerKCF_FeatureExtractorCallbackFN Module opencv::video === Video Analysis --- Motion Analysis --- Object Tracking --- C API --- Modules --- * prelude Structs --- * BackgroundSubtractorBase class for background/foreground segmentation. * BackgroundSubtractorKNNK-nearest neighbours - based Background/Foreground Segmentation Algorithm. * BackgroundSubtractorMOG2Gaussian Mixture-based Background/Foreground Segmentation Algorithm. * DISOpticalFlowDIS optical flow algorithm. * DenseOpticalFlowBase class for dense optical flow algorithms * FarnebackOpticalFlowClass computing a dense optical flow using Gunnar Farneback’s algorithm. * KalmanFilterKalman filter class. * SparseOpticalFlowBase interface for sparse optical flow algorithms. * SparsePyrLKOpticalFlowClass used for calculating a sparse optical flow. 
* TrackerBase abstract class for the long-term tracker * TrackerDaSiamRPN * TrackerDaSiamRPN_Params * TrackerGOTURNthe GOTURN (Generic Object Tracking Using Regression Networks) tracker * TrackerGOTURN_Params * TrackerMILThe MIL algorithm trains a classifier in an online manner to separate the object from the background. * TrackerMIL_Params * TrackerNanothe Nano tracker is a super lightweight dnn-based general object tracker. * TrackerNano_Params * VariationalRefinementVariational optical flow refinement Constants --- * DISOpticalFlow_PRESET_FAST * DISOpticalFlow_PRESET_MEDIUM * DISOpticalFlow_PRESET_ULTRAFAST * MOTION_AFFINE * MOTION_EUCLIDEAN * MOTION_HOMOGRAPHY * MOTION_TRANSLATION * OPTFLOW_FARNEBACK_GAUSSIAN * OPTFLOW_LK_GET_MIN_EIGENVALS * OPTFLOW_USE_INITIAL_FLOW Traits --- * BackgroundSubtractorKNNTraitMutable methods for crate::video::BackgroundSubtractorKNN * BackgroundSubtractorKNNTraitConstConstant methods for crate::video::BackgroundSubtractorKNN * BackgroundSubtractorMOG2TraitMutable methods for crate::video::BackgroundSubtractorMOG2 * BackgroundSubtractorMOG2TraitConstConstant methods for crate::video::BackgroundSubtractorMOG2 * BackgroundSubtractorTraitMutable methods for crate::video::BackgroundSubtractor * BackgroundSubtractorTraitConstConstant methods for crate::video::BackgroundSubtractor * DISOpticalFlowTraitMutable methods for crate::video::DISOpticalFlow * DISOpticalFlowTraitConstConstant methods for crate::video::DISOpticalFlow * DenseOpticalFlowTraitMutable methods for crate::video::DenseOpticalFlow * DenseOpticalFlowTraitConstConstant methods for crate::video::DenseOpticalFlow * FarnebackOpticalFlowTraitMutable methods for crate::video::FarnebackOpticalFlow * FarnebackOpticalFlowTraitConstConstant methods for crate::video::FarnebackOpticalFlow * KalmanFilterTraitMutable methods for crate::video::KalmanFilter * KalmanFilterTraitConstConstant methods for crate::video::KalmanFilter * SparseOpticalFlowTraitMutable methods for 
crate::video::SparseOpticalFlow * SparseOpticalFlowTraitConstConstant methods for crate::video::SparseOpticalFlow * SparsePyrLKOpticalFlowTraitMutable methods for crate::video::SparsePyrLKOpticalFlow * SparsePyrLKOpticalFlowTraitConstConstant methods for crate::video::SparsePyrLKOpticalFlow * TrackerDaSiamRPNTraitMutable methods for crate::video::TrackerDaSiamRPN * TrackerDaSiamRPNTraitConstConstant methods for crate::video::TrackerDaSiamRPN * TrackerDaSiamRPN_ParamsTraitMutable methods for crate::video::TrackerDaSiamRPN_Params * TrackerDaSiamRPN_ParamsTraitConstConstant methods for crate::video::TrackerDaSiamRPN_Params * TrackerGOTURNTraitMutable methods for crate::video::TrackerGOTURN * TrackerGOTURNTraitConstConstant methods for crate::video::TrackerGOTURN * TrackerGOTURN_ParamsTraitMutable methods for crate::video::TrackerGOTURN_Params * TrackerGOTURN_ParamsTraitConstConstant methods for crate::video::TrackerGOTURN_Params * TrackerMILTraitMutable methods for crate::video::TrackerMIL * TrackerMILTraitConstConstant methods for crate::video::TrackerMIL * TrackerNanoTraitMutable methods for crate::video::TrackerNano * TrackerNanoTraitConstConstant methods for crate::video::TrackerNano * TrackerNano_ParamsTraitMutable methods for crate::video::TrackerNano_Params * TrackerNano_ParamsTraitConstConstant methods for crate::video::TrackerNano_Params * TrackerTraitMutable methods for crate::video::Tracker * TrackerTraitConstConstant methods for crate::video::Tracker * VariationalRefinementTraitMutable methods for crate::video::VariationalRefinement * VariationalRefinementTraitConstConstant methods for crate::video::VariationalRefinement Functions --- * build_optical_flow_pyramidConstructs the image pyramid which can be passed to calcOpticalFlowPyrLK. * build_optical_flow_pyramid_defConstructs the image pyramid which can be passed to calcOpticalFlowPyrLK. * calc_optical_flow_farnebackComputes a dense optical flow using Gunnar Farneback’s algorithm. 
* calc_optical_flow_pyr_lkCalculates an optical flow for a sparse feature set using the iterative Lucas-Kanade method with pyramids. * calc_optical_flow_pyr_lk_defCalculates an optical flow for a sparse feature set using the iterative Lucas-Kanade method with pyramids. * cam_shiftFinds an object center, size, and orientation. * compute_eccComputes the Enhanced Correlation Coefficient value between two images EP08 . * compute_ecc_defComputes the Enhanced Correlation Coefficient value between two images EP08 . * create_background_subtractor_knnCreates KNN Background Subtractor * create_background_subtractor_knn_defCreates KNN Background Subtractor * create_background_subtractor_mog2Creates MOG2 Background Subtractor * create_background_subtractor_mog2_defCreates MOG2 Background Subtractor * estimate_rigid_transformDeprecatedComputes an optimal affine transformation between two 2D point sets. * find_transform_eccFinds the geometric transform (warp) between two images in terms of the ECC criterion EP08 . * find_transform_ecc_1Finds the geometric transform (warp) between two images in terms of the ECC criterion EP08 . * find_transform_ecc_1_def@overload * mean_shiftFinds an object on a back projection image. * read_optical_flowRead a .flo file * write_optical_flowWrite a .flo to disk Module opencv::videoio === Video I/O --- Read and write video or images sequence with OpenCV #### See also: * [videoio_overview] * Tutorials: [tutorial_table_of_content_app]Flags for video I/O --- Additional flags for video I/O API backends --- Hardware-accelerated video decoding and encoding --- C API for video I/O --- iOS glue for video I/O --- WinRT glue for video I/O --- Query I/O API backends registry --- Modules --- * prelude Structs --- * VideoCaptureClass for video capturing from video files, image sequences or cameras. * VideoWriterVideo writer class. Enums --- * VideoAccelerationTypeVideo Acceleration type * VideoCaptureAPIscv::VideoCapture API backends identifier. 
* VideoCaptureOBSensorDataTypeOBSENSOR (for Orbbec 3D-Sensor device/module ) * VideoCaptureOBSensorGeneratorsOBSENSOR stream generator * VideoCaptureOBSensorPropertiesOBSENSOR properties * VideoCapturePropertiescv::VideoCapture generic properties identifier. * VideoWriterPropertiescv::VideoWriter generic properties identifier. Constants --- * CAP_ANDROIDAndroid - not used * CAP_ANYAuto detect == 0 * CAP_ARAVISAravis SDK * CAP_AVFOUNDATIONAVFoundation framework for iOS (OS X Lion will have the same API) * CAP_CMU1394Same value as CAP_FIREWIRE * CAP_DC1394Same value as CAP_FIREWIRE * CAP_DSHOWDirectShow (via videoInput) * CAP_FFMPEGOpen and record video file or stream using the FFMPEG library * CAP_FIREWARESame value as CAP_FIREWIRE * CAP_FIREWIREIEEE 1394 drivers * CAP_GIGANETIXSmartek Giganetix GigEVisionSDK * CAP_GPHOTO2gPhoto2 connection * CAP_GSTREAMERGStreamer * CAP_IEEE1394Same value as CAP_FIREWIRE * CAP_IMAGESOpenCV Image Sequence (e.g. img_%02d.jpg) * CAP_INTELPERCRealSense (former Intel Perceptual Computing SDK) * CAP_INTELPERC_DEPTH_GENERATOR * CAP_INTELPERC_DEPTH_MAPEach pixel is a 16-bit integer. The value indicates the distance from an object to the camera’s XY plane or the Cartesian depth. * CAP_INTELPERC_GENERATORS_MASK * CAP_INTELPERC_IMAGE * CAP_INTELPERC_IMAGE_GENERATOR * CAP_INTELPERC_IR_GENERATOR * CAP_INTELPERC_IR_MAPEach pixel is a 16-bit integer. The value indicates the intensity of the reflected laser beam. * CAP_INTELPERC_UVDEPTH_MAPEach pixel contains two 32-bit floating point values in the range of 0-1, representing the mapping of depth coordinates to the color coordinates. * CAP_INTEL_MFXIntel MediaSDK * CAP_MSMFMicrosoft Media Foundation (via videoInput). See platform specific notes above. 
* CAP_OBSENSORFor Orbbec 3D-Sensor device/module (Astra+, Femto) * CAP_OBSENSOR_BGR_IMAGEData given from BGR stream generator * CAP_OBSENSOR_DEPTH_GENERATOR * CAP_OBSENSOR_DEPTH_MAPDepth values in mm (CV_16UC1) * CAP_OBSENSOR_GENERATORS_MASK * CAP_OBSENSOR_IMAGE_GENERATOR * CAP_OBSENSOR_IR_GENERATOR * CAP_OBSENSOR_IR_IMAGEData given from IR stream generator(CV_16UC1) * CAP_OPENCV_MJPEGBuilt-in OpenCV MotionJPEG codec * CAP_OPENNIOpenNI (for Kinect) * CAP_OPENNI2OpenNI2 (for Kinect) * CAP_OPENNI2_ASTRAOpenNI2 (for Orbbec Astra) * CAP_OPENNI2_ASUSOpenNI2 (for Asus Xtion and Occipital Structure sensors) * CAP_OPENNI_ASUSOpenNI (for Asus Xtion) * CAP_OPENNI_BGR_IMAGEData given from RGB image generator * CAP_OPENNI_DEPTH_GENERATOR * CAP_OPENNI_DEPTH_GENERATOR_BASELINE * CAP_OPENNI_DEPTH_GENERATOR_FOCAL_LENGTH * CAP_OPENNI_DEPTH_GENERATOR_PRESENT * CAP_OPENNI_DEPTH_GENERATOR_REGISTRATION * CAP_OPENNI_DEPTH_GENERATOR_REGISTRATION_ON * CAP_OPENNI_DEPTH_MAPDepth values in mm (CV_16UC1) * CAP_OPENNI_DISPARITY_MAPDisparity in pixels (CV_8UC1) * CAP_OPENNI_DISPARITY_MAP_32FDisparity in pixels (CV_32FC1) * CAP_OPENNI_GENERATORS_MASK * CAP_OPENNI_GRAY_IMAGEData given from RGB image generator * CAP_OPENNI_IMAGE_GENERATOR * CAP_OPENNI_IMAGE_GENERATOR_OUTPUT_MODE * CAP_OPENNI_IMAGE_GENERATOR_PRESENT * CAP_OPENNI_IR_GENERATOR * CAP_OPENNI_IR_GENERATOR_PRESENT * CAP_OPENNI_IR_IMAGEData given from IR image generator * CAP_OPENNI_POINT_CLOUD_MAPXYZ in meters (CV_32FC3) * CAP_OPENNI_QVGA_30HZ * CAP_OPENNI_QVGA_60HZ * CAP_OPENNI_SXGA_15HZ * CAP_OPENNI_SXGA_30HZ * CAP_OPENNI_VALID_DEPTH_MASKCV_8UC1 * CAP_OPENNI_VGA_30HZ * CAP_PROP_APERTUREAperture. Can be readonly, depends on camera program. * CAP_PROP_ARAVIS_AUTOTRIGGERAutomatically trigger frame capture if camera is configured with software trigger * CAP_PROP_AUDIO_BASE_INDEX(read-only) Index of the first audio channel for .retrieve() calls. That audio channel number continues enumeration after video channels. 
* CAP_PROP_AUDIO_DATA_DEPTH(open, read) Alternative definition to bits-per-sample, but with clear handling of 32F / 32S * CAP_PROP_AUDIO_POS(read-only) Audio position is measured in samples. Accurate audio sample timestamp of previous grabbed fragment. See CAP_PROP_AUDIO_SAMPLES_PER_SECOND and CAP_PROP_AUDIO_SHIFT_NSEC. * CAP_PROP_AUDIO_SAMPLES_PER_SECOND(open, read) determined from file/codec input. If not specified, then selected audio sample rate is 44100 * CAP_PROP_AUDIO_SHIFT_NSEC(read only) Contains the time difference between the start of the audio stream and the video stream in nanoseconds. Positive value means that audio is started after the first video frame. Negative value means that audio is started before the first video frame. * CAP_PROP_AUDIO_STREAM(**open-only**) Specify stream in multi-language media files, -1 - disable audio processing or microphone. Default value is -1. * CAP_PROP_AUDIO_SYNCHRONIZE(open, read) Enables audio synchronization. * CAP_PROP_AUDIO_TOTAL_CHANNELS(read-only) Number of audio channels in the selected audio stream (mono, stereo, etc) * CAP_PROP_AUDIO_TOTAL_STREAMS(read-only) Number of audio streams. * CAP_PROP_AUTOFOCUS * CAP_PROP_AUTO_EXPOSUREDC1394: exposure control done by camera, user can adjust reference level using this feature. * CAP_PROP_AUTO_WBenable/ disable auto white-balance * CAP_PROP_BACKENDCurrent backend (enum VideoCaptureAPIs). Read-only property * CAP_PROP_BACKLIGHT * CAP_PROP_BITRATE(read-only) Video bitrate in kbits/s * CAP_PROP_BRIGHTNESSBrightness of the image (only for those cameras that support). * CAP_PROP_BUFFERSIZE * CAP_PROP_CHANNELVideo input or Channel Number (only for those cameras that support) * CAP_PROP_CODEC_EXTRADATA_INDEXPositive index indicates that returning extra data is supported by the video back end. This can be retrieved as cap.retrieve(data, ). E.g. 
When reading from a h264 encoded RTSP stream, the FFmpeg backend could return the SPS and/or PPS if available (if sent in reply to a DESCRIBE request), from calls to cap.retrieve(data, ). * CAP_PROP_CODEC_PIXEL_FORMAT(read-only) codec’s pixel format. 4-character code - see VideoWriter::fourcc . Subset of AV_PIX_FMT_* or -1 if unknown * CAP_PROP_CONTRASTContrast of the image (only for cameras). * CAP_PROP_CONVERT_RGBBoolean flags indicating whether images should be converted to RGB. *GStreamer note*: The flag is ignored in case if custom pipeline is used. It’s user responsibility to interpret pipeline output. * CAP_PROP_DC1394_MAX * CAP_PROP_DC1394_MODE_AUTO * CAP_PROP_DC1394_MODE_MANUALset automatically when a value of the feature is set by the user. * CAP_PROP_DC1394_MODE_ONE_PUSH_AUTO * CAP_PROP_DC1394_OFFturn the feature off (not controlled manually nor automatically). * CAP_PROP_EXPOSUREExposure (only for those cameras that support). * CAP_PROP_EXPOSUREPROGRAMCamera exposure program. * CAP_PROP_FOCUS * CAP_PROP_FORMATFormat of the %Mat objects (see Mat::type()) returned by VideoCapture::retrieve(). Set value -1 to fetch undecoded RAW video streams (as Mat 8UC1). * CAP_PROP_FOURCC4-character code of codec. see VideoWriter::fourcc . * CAP_PROP_FPSFrame rate. * CAP_PROP_FRAME_COUNTNumber of frames in the video file. * CAP_PROP_FRAME_HEIGHTHeight of the frames in the video stream. * CAP_PROP_FRAME_TYPE(read-only) FFmpeg back-end only - Frame type ascii code (73 = ‘I’, 80 = ‘P’, 66 = ‘B’ or 63 = ‘?’ if unknown) of the most recently read frame. * CAP_PROP_FRAME_WIDTHWidth of the frames in the video stream. * CAP_PROP_GAINGain of the image (only for those cameras that support). * CAP_PROP_GAMMA * CAP_PROP_GIGA_FRAME_HEIGH_MAX * CAP_PROP_GIGA_FRAME_OFFSET_X * CAP_PROP_GIGA_FRAME_OFFSET_Y * CAP_PROP_GIGA_FRAME_SENS_HEIGH * CAP_PROP_GIGA_FRAME_SENS_WIDTH * CAP_PROP_GIGA_FRAME_WIDTH_MAX * CAP_PROP_GPHOTO2_COLLECT_MSGSCollect messages with details. 
* CAP_PROP_GPHOTO2_FLUSH_MSGSReadonly, returns (const char *). * CAP_PROP_GPHOTO2_PREVIEWCapture only preview from liveview mode. * CAP_PROP_GPHOTO2_RELOAD_CONFIGTrigger, only by set. Reload camera settings. * CAP_PROP_GPHOTO2_RELOAD_ON_CHANGEReload all settings on set. * CAP_PROP_GPHOTO2_WIDGET_ENUMERATEReadonly, returns (const char *). * CAP_PROP_GSTREAMER_QUEUE_LENGTHDefault is 1 * CAP_PROP_GUID * CAP_PROP_HUEHue of the image (only for cameras). * CAP_PROP_HW_ACCELERATION(**open-only**) Hardware acceleration type (see #VideoAccelerationType). Setting supported only via `params` parameter in cv::VideoCapture constructor / .open() method. Default value is backend-specific. * CAP_PROP_HW_ACCELERATION_USE_OPENCL(**open-only**) If non-zero, create new OpenCL context and bind it to current thread. The OpenCL context is created with the Video Acceleration context attached to it (if not attached yet), for optimized GPU data copy between the HW accelerated decoder and cv::UMat. * CAP_PROP_HW_DEVICE(**open-only**) Hardware device index (select GPU if multiple available). Device enumeration is acceleration type specific. 
* CAP_PROP_IMAGES_BASE * CAP_PROP_IMAGES_LAST * CAP_PROP_INTELPERC_DEPTH_CONFIDENCE_THRESHOLD * CAP_PROP_INTELPERC_DEPTH_FOCAL_LENGTH_HORZ * CAP_PROP_INTELPERC_DEPTH_FOCAL_LENGTH_VERT * CAP_PROP_INTELPERC_DEPTH_LOW_CONFIDENCE_VALUE * CAP_PROP_INTELPERC_DEPTH_SATURATION_VALUE * CAP_PROP_INTELPERC_PROFILE_COUNT * CAP_PROP_INTELPERC_PROFILE_IDX * CAP_PROP_IOS_DEVICE_EXPOSURE * CAP_PROP_IOS_DEVICE_FLASH * CAP_PROP_IOS_DEVICE_FOCUS * CAP_PROP_IOS_DEVICE_TORCH * CAP_PROP_IOS_DEVICE_WHITEBALANCE * CAP_PROP_IRIS * CAP_PROP_ISO_SPEED * CAP_PROP_LRF_HAS_KEY_FRAMEFFmpeg back-end only - Indicates whether the Last Raw Frame (LRF), output from VideoCapture::read() when VideoCapture is initialized with VideoCapture::open(CAP_FFMPEG, {CAP_PROP_FORMAT, -1}) or VideoCapture::set(CAP_PROP_FORMAT,-1) is called before the first call to VideoCapture::read(), contains encoded data for a key frame. * CAP_PROP_MODEBackend-specific value indicating the current capture mode. * CAP_PROP_MONOCHROME * CAP_PROP_N_THREADS(**open-only**) Set the maximum number of threads to use. Use 0 to use as many threads as CPU cores (applicable for FFmpeg back-end only). * CAP_PROP_OBSENSOR_INTRINSIC_CX * CAP_PROP_OBSENSOR_INTRINSIC_CY * CAP_PROP_OBSENSOR_INTRINSIC_FX * CAP_PROP_OBSENSOR_INTRINSIC_FY * CAP_PROP_OPENNI2_MIRROR * CAP_PROP_OPENNI2_SYNC * CAP_PROP_OPENNI_APPROX_FRAME_SYNC * CAP_PROP_OPENNI_BASELINEIn mm * CAP_PROP_OPENNI_CIRCLE_BUFFER * CAP_PROP_OPENNI_FOCAL_LENGTHIn pixels * CAP_PROP_OPENNI_FRAME_MAX_DEPTHIn mm * CAP_PROP_OPENNI_GENERATOR_PRESENT * CAP_PROP_OPENNI_MAX_BUFFER_SIZE * CAP_PROP_OPENNI_MAX_TIME_DURATION * CAP_PROP_OPENNI_OUTPUT_MODE * CAP_PROP_OPENNI_REGISTRATIONFlag that synchronizes the remapping depth map to image map by changing depth generator’s view point (if the flag is “on”) or sets this view point to its normal one (if the flag is “off”). 
* CAP_PROP_OPENNI_REGISTRATION_ON * CAP_PROP_OPEN_TIMEOUT_MSEC(**open-only**) timeout in milliseconds for opening a video capture (applicable for FFmpeg and GStreamer back-ends only) * CAP_PROP_ORIENTATION_AUTOif true - rotates output frames of CvCapture considering video file’s metadata (applicable for FFmpeg and AVFoundation back-ends only) (https://github.com/opencv/opencv/issues/15499) * CAP_PROP_ORIENTATION_META(read-only) Frame rotation defined by stream meta (applicable for FFmpeg and AVFoundation back-ends only) * CAP_PROP_PAN * CAP_PROP_POS_AVI_RATIORelative position of the video file: 0=start of the film, 1=end of the film. * CAP_PROP_POS_FRAMES0-based index of the frame to be decoded/captured next. * CAP_PROP_POS_MSECCurrent position of the video file in milliseconds. * CAP_PROP_PVAPI_BINNINGXHorizontal binning factor. * CAP_PROP_PVAPI_BINNINGYVertical binning factor. * CAP_PROP_PVAPI_DECIMATIONHORIZONTALHorizontal sub-sampling of the image. * CAP_PROP_PVAPI_DECIMATIONVERTICALVertical sub-sampling of the image. * CAP_PROP_PVAPI_FRAMESTARTTRIGGERMODEFrameStartTriggerMode: Determines how a frame is initiated. * CAP_PROP_PVAPI_MULTICASTIPIP for enable multicast master mode. 0 for disable multicast. * CAP_PROP_PVAPI_PIXELFORMATPixel format. * CAP_PROP_READ_TIMEOUT_MSEC(**open-only**) timeout in milliseconds for reading from a video capture (applicable for FFmpeg and GStreamer back-ends only) * CAP_PROP_RECTIFICATIONRectification flag for stereo cameras (note: only supported by DC1394 v 2.x backend currently). * CAP_PROP_ROLL * CAP_PROP_SAR_DENSample aspect ratio: num/den (den) * CAP_PROP_SAR_NUMSample aspect ratio: num/den (num) * CAP_PROP_SATURATIONSaturation of the image (only for cameras). * CAP_PROP_SETTINGSPop up video/camera filter dialog (note: only supported by DSHOW backend currently. The property value is ignored) * CAP_PROP_SHARPNESS * CAP_PROP_SPEEDExposure speed. Can be readonly, depends on camera program. 
* CAP_PROP_STREAM_OPEN_TIME_USEC * CAP_PROP_TEMPERATURE * CAP_PROP_TILT * CAP_PROP_TRIGGER * CAP_PROP_TRIGGER_DELAY * CAP_PROP_VIDEO_STREAM(**open-only**) Specify video stream, 0-based index. Use -1 to disable video stream from file or IP cameras. Default value is 0. * CAP_PROP_VIDEO_TOTAL_CHANNELS(read-only) Number of video channels * CAP_PROP_VIEWFINDEREnter liveview mode. * CAP_PROP_WB_TEMPERATUREwhite-balance color temperature * CAP_PROP_WHITE_BALANCE_BLUE_UCurrently unsupported. * CAP_PROP_WHITE_BALANCE_RED_V * CAP_PROP_XI_ACQ_BUFFER_SIZEAcquisition buffer size in buffer_size_unit. Default bytes. * CAP_PROP_XI_ACQ_BUFFER_SIZE_UNITAcquisition buffer size unit in bytes. Default 1. E.g. Value 1024 means that buffer_size is in KiBytes. * CAP_PROP_XI_ACQ_FRAME_BURST_COUNTSets number of frames acquired by burst. This burst is used only if trigger is set to FrameBurstStart. * CAP_PROP_XI_ACQ_TIMING_MODEType of sensor frames timing. * CAP_PROP_XI_ACQ_TRANSPORT_BUFFER_COMMITNumber of buffers to commit to low level. * CAP_PROP_XI_ACQ_TRANSPORT_BUFFER_SIZEAcquisition transport buffer size in bytes. * CAP_PROP_XI_AEAGAutomatic exposure/gain. * CAP_PROP_XI_AEAG_LEVELAverage intensity of output signal AEAG should achieve(in %). * CAP_PROP_XI_AEAG_ROI_HEIGHTAutomatic exposure/gain ROI Height. * CAP_PROP_XI_AEAG_ROI_OFFSET_XAutomatic exposure/gain ROI offset X. * CAP_PROP_XI_AEAG_ROI_OFFSET_YAutomatic exposure/gain ROI offset Y. * CAP_PROP_XI_AEAG_ROI_WIDTHAutomatic exposure/gain ROI Width. * CAP_PROP_XI_AE_MAX_LIMITMaximum limit of exposure in AEAG procedure. * CAP_PROP_XI_AG_MAX_LIMITMaximum limit of gain in AEAG procedure. * CAP_PROP_XI_APPLY_CMSEnable applying of CMS profiles to xiGetImage (see XI_PRM_INPUT_CMS_PROFILE, XI_PRM_OUTPUT_CMS_PROFILE). * CAP_PROP_XI_AUTO_BANDWIDTH_CALCULATIONAutomatic bandwidth calculation. * CAP_PROP_XI_AUTO_WBAutomatic white balance. * CAP_PROP_XI_AVAILABLE_BANDWIDTHCalculate and returns available interface bandwidth(int Megabits). 
* CAP_PROP_XI_BINNING_HORIZONTALHorizontal Binning - number of horizontal photo-sensitive cells to combine together. * CAP_PROP_XI_BINNING_PATTERNBinning pattern type. * CAP_PROP_XI_BINNING_SELECTORBinning engine selector. * CAP_PROP_XI_BINNING_VERTICALVertical Binning - number of vertical photo-sensitive cells to combine together. * CAP_PROP_XI_BPCCorrection of bad pixels. * CAP_PROP_XI_BUFFERS_QUEUE_SIZEQueue of field/frame buffers. * CAP_PROP_XI_BUFFER_POLICYData move policy. * CAP_PROP_XI_CC_MATRIX_00Color Correction Matrix element [0][0]. * CAP_PROP_XI_CC_MATRIX_01Color Correction Matrix element [0][1]. * CAP_PROP_XI_CC_MATRIX_02Color Correction Matrix element [0][2]. * CAP_PROP_XI_CC_MATRIX_03Color Correction Matrix element [0][3]. * CAP_PROP_XI_CC_MATRIX_10Color Correction Matrix element [1][0]. * CAP_PROP_XI_CC_MATRIX_11Color Correction Matrix element [1][1]. * CAP_PROP_XI_CC_MATRIX_12Color Correction Matrix element [1][2]. * CAP_PROP_XI_CC_MATRIX_13Color Correction Matrix element [1][3]. * CAP_PROP_XI_CC_MATRIX_20Color Correction Matrix element [2][0]. * CAP_PROP_XI_CC_MATRIX_21Color Correction Matrix element [2][1]. * CAP_PROP_XI_CC_MATRIX_22Color Correction Matrix element [2][2]. * CAP_PROP_XI_CC_MATRIX_23Color Correction Matrix element [2][3]. * CAP_PROP_XI_CC_MATRIX_30Color Correction Matrix element [3][0]. * CAP_PROP_XI_CC_MATRIX_31Color Correction Matrix element [3][1]. * CAP_PROP_XI_CC_MATRIX_32Color Correction Matrix element [3][2]. * CAP_PROP_XI_CC_MATRIX_33Color Correction Matrix element [3][3]. * CAP_PROP_XI_CHIP_TEMPCamera sensor temperature. * CAP_PROP_XI_CMSMode of color management system. * CAP_PROP_XI_COLOR_FILTER_ARRAYReturns color filter array type of RAW data. * CAP_PROP_XI_COLUMN_FPN_CORRECTIONCorrection of column FPN. * CAP_PROP_XI_COOLINGStart camera cooling. * CAP_PROP_XI_COUNTER_SELECTORSelect counter. * CAP_PROP_XI_COUNTER_VALUECounter status. * CAP_PROP_XI_DATA_FORMATOutput data format. 
* CAP_PROP_XI_DEBOUNCE_ENEnable/Disable debounce to selected GPI. * CAP_PROP_XI_DEBOUNCE_POLDebounce polarity (pol = 1 t0 - falling edge, t1 - rising edge). * CAP_PROP_XI_DEBOUNCE_T0Debounce time (x * 10us). * CAP_PROP_XI_DEBOUNCE_T1Debounce time (x * 10us). * CAP_PROP_XI_DEBUG_LEVELSet debug level. * CAP_PROP_XI_DECIMATION_HORIZONTALHorizontal Decimation - horizontal sub-sampling of the image - reduces the horizontal resolution of the image by the specified vertical decimation factor. * CAP_PROP_XI_DECIMATION_PATTERNDecimation pattern type. * CAP_PROP_XI_DECIMATION_SELECTORDecimation engine selector. * CAP_PROP_XI_DECIMATION_VERTICALVertical Decimation - vertical sub-sampling of the image - reduces the vertical resolution of the image by the specified vertical decimation factor. * CAP_PROP_XI_DEFAULT_CC_MATRIXSet default Color Correction Matrix. * CAP_PROP_XI_DEVICE_MODEL_IDReturns device model id. * CAP_PROP_XI_DEVICE_RESETResets the camera to default state. * CAP_PROP_XI_DEVICE_SNReturns device serial number. * CAP_PROP_XI_DOWNSAMPLINGChange image resolution by binning or skipping. * CAP_PROP_XI_DOWNSAMPLING_TYPEChange image downsampling type. * CAP_PROP_XI_EXPOSUREExposure time in microseconds. * CAP_PROP_XI_EXPOSURE_BURST_COUNTSets the number of times of exposure in one frame. * CAP_PROP_XI_EXP_PRIORITYExposure priority (0.5 - exposure 50%, gain 50%). * CAP_PROP_XI_FFS_ACCESS_KEYSetting of key enables file operations on some cameras. * CAP_PROP_XI_FFS_FILE_IDFile number. * CAP_PROP_XI_FFS_FILE_SIZESize of file. * CAP_PROP_XI_FRAMERATEDefine framerate in Hz. * CAP_PROP_XI_FREE_FFS_SIZESize of free camera FFS. * CAP_PROP_XI_GAINGain in dB. * CAP_PROP_XI_GAIN_SELECTORGain selector for parameter Gain allows to select different type of gains. * CAP_PROP_XI_GAMMACChromaticity gamma. * CAP_PROP_XI_GAMMAYLuminosity gamma. * CAP_PROP_XI_GPI_LEVELGet general purpose level. * CAP_PROP_XI_GPI_MODESet general purpose input mode. 
* CAP_PROP_XI_GPI_SELECTORSelects general purpose input. * CAP_PROP_XI_GPO_MODESet general purpose output mode. * CAP_PROP_XI_GPO_SELECTORSelects general purpose output. * CAP_PROP_XI_HDREnable High Dynamic Range feature. * CAP_PROP_XI_HDR_KNEEPOINT_COUNTThe number of kneepoints in the PWLR. * CAP_PROP_XI_HDR_T1Position of first kneepoint(in % of XI_PRM_EXPOSURE). * CAP_PROP_XI_HDR_T2Position of second kneepoint (in % of XI_PRM_EXPOSURE). * CAP_PROP_XI_HEIGHTHeight of the Image provided by the device (in pixels). * CAP_PROP_XI_HOUS_BACK_SIDE_TEMPCamera housing back side temperature. * CAP_PROP_XI_HOUS_TEMPCamera housing temperature. * CAP_PROP_XI_HW_REVISIONReturns hardware revision number. * CAP_PROP_XI_IMAGE_BLACK_LEVELLast image black level counts. Can be used for Offline processing to recall it. * CAP_PROP_XI_IMAGE_DATA_BIT_DEPTHbitdepth of data returned by function xiGetImage. * CAP_PROP_XI_IMAGE_DATA_FORMATOutput data format. * CAP_PROP_XI_IMAGE_DATA_FORMAT_RGB32_ALPHAThe alpha channel of RGB32 output image format. * CAP_PROP_XI_IMAGE_IS_COLORReturns 1 for color cameras. * CAP_PROP_XI_IMAGE_PAYLOAD_SIZEBuffer size in bytes sufficient for output image returned by xiGetImage. * CAP_PROP_XI_IS_COOLEDReturns 1 for cameras that support cooling. * CAP_PROP_XI_IS_DEVICE_EXISTReturns 1 if camera connected and works properly. * CAP_PROP_XI_KNEEPOINT1Value of first kneepoint (% of sensor saturation). * CAP_PROP_XI_KNEEPOINT2Value of second kneepoint (% of sensor saturation). * CAP_PROP_XI_LED_MODEDefine camera signalling LED functionality. * CAP_PROP_XI_LED_SELECTORSelects camera signalling LED. * CAP_PROP_XI_LENS_APERTURE_VALUECurrent lens aperture value in stops. Examples: 2.8, 4, 5.6, 8, 11. * CAP_PROP_XI_LENS_FEATUREAllows access to lens feature value currently selected by XI_PRM_LENS_FEATURE_SELECTOR. * CAP_PROP_XI_LENS_FEATURE_SELECTORSelects the current feature which is accessible by XI_PRM_LENS_FEATURE. * CAP_PROP_XI_LENS_FOCAL_LENGTHLens focal distance in mm. 
* CAP_PROP_XI_LENS_FOCUS_DISTANCELens focus distance in cm. * CAP_PROP_XI_LENS_FOCUS_MOVEMoves lens focus motor by steps set in XI_PRM_LENS_FOCUS_MOVEMENT_VALUE. * CAP_PROP_XI_LENS_FOCUS_MOVEMENT_VALUELens current focus movement value to be used by XI_PRM_LENS_FOCUS_MOVE in motor steps. * CAP_PROP_XI_LENS_MODEStatus of lens control interface. This shall be set to XI_ON before any Lens operations. * CAP_PROP_XI_LIMIT_BANDWIDTHSet/get bandwidth(datarate)(in Megabits). * CAP_PROP_XI_LUT_ENActivates LUT. * CAP_PROP_XI_LUT_INDEXControl the index (offset) of the coefficient to access in the LUT. * CAP_PROP_XI_LUT_VALUEValue at entry LUTIndex of the LUT. * CAP_PROP_XI_MANUAL_WBCalculates White Balance(must be called during acquisition). * CAP_PROP_XI_OFFSET_XHorizontal offset from the origin to the area of interest (in pixels). * CAP_PROP_XI_OFFSET_YVertical offset from the origin to the area of interest (in pixels). * CAP_PROP_XI_OUTPUT_DATA_BIT_DEPTHDevice output data bit depth. * CAP_PROP_XI_OUTPUT_DATA_PACKINGDevice output data packing (or grouping) enabled. Packing could be enabled if output_data_bit_depth > 8 and packing capability is available. * CAP_PROP_XI_OUTPUT_DATA_PACKING_TYPEData packing type. Some cameras support only a specific packing type. * CAP_PROP_XI_RECENT_FRAMEGetImage returns most recent frame. * CAP_PROP_XI_REGION_MODEActivates/deactivates Region selected by Region Selector. * CAP_PROP_XI_REGION_SELECTORSelects Region in Multiple ROI which parameters are set by width, height, offset x, offset y and region mode. * CAP_PROP_XI_ROW_FPN_CORRECTIONCorrection of row FPN. * CAP_PROP_XI_SENSOR_BOARD_TEMPCamera sensor board temperature. * CAP_PROP_XI_SENSOR_CLOCK_FREQ_HZSensor clock frequency in Hz. * CAP_PROP_XI_SENSOR_CLOCK_FREQ_INDEXSensor clock frequency index. Sensors with selectable frequencies can set the frequency only by this index. * CAP_PROP_XI_SENSOR_DATA_BIT_DEPTHSensor output data bit depth. * CAP_PROP_XI_SENSOR_FEATURE_SELECTORSelects the current feature which is accessible by XI_PRM_SENSOR_FEATURE_VALUE. * CAP_PROP_XI_SENSOR_FEATURE_VALUEAllows access to sensor feature value currently selected by XI_PRM_SENSOR_FEATURE_SELECTOR. * CAP_PROP_XI_SENSOR_MODECurrent sensor mode. Allows selecting the sensor mode by one integer. Setting of this parameter affects: image dimensions and downsampling. * CAP_PROP_XI_SENSOR_OUTPUT_CHANNEL_COUNTNumber of output channels from sensor used for data transfer. * CAP_PROP_XI_SENSOR_TAPSNumber of taps. * CAP_PROP_XI_SHARPNESSSharpness Strength. * CAP_PROP_XI_SHUTTER_TYPEChange sensor shutter type(CMOS sensor). * CAP_PROP_XI_TARGET_TEMPSet sensor target temperature for cooling. * CAP_PROP_XI_TEST_PATTERNSelects which test pattern type is generated by the selected generator. * CAP_PROP_XI_TEST_PATTERN_GENERATOR_SELECTORSelects which test pattern generator is controlled by the TestPattern feature. * CAP_PROP_XI_TIMEOUTImage capture timeout in milliseconds. * CAP_PROP_XI_TRANSPORT_PIXEL_FORMATCurrent format of pixels on transport layer. * CAP_PROP_XI_TRG_DELAYSpecifies the delay in microseconds (us) to apply after the trigger reception before activating it. * CAP_PROP_XI_TRG_SELECTORSelects the type of trigger. * CAP_PROP_XI_TRG_SOFTWAREGenerates an internal trigger. PRM_TRG_SOURCE must be set to TRG_SOFTWARE. * CAP_PROP_XI_TRG_SOURCEDefines source of trigger. * CAP_PROP_XI_TS_RST_MODEDefines how time stamp reset engine will be armed. * CAP_PROP_XI_TS_RST_SOURCEDefines which source will be used for timestamp reset. 
Writing this parameter will trigger settings of engine (arming). * CAP_PROP_XI_USED_FFS_SIZESize of used camera FFS. * CAP_PROP_XI_WB_KBWhite balance blue coefficient. * CAP_PROP_XI_WB_KGWhite balance green coefficient. * CAP_PROP_XI_WB_KRWhite balance red coefficient. * CAP_PROP_XI_WIDTHWidth of the Image provided by the device (in pixels). * CAP_PROP_ZOOM * CAP_PVAPIPvAPI, Prosilica GigE SDK * CAP_PVAPI_DECIMATION_2OUTOF42 out of 4 decimation * CAP_PVAPI_DECIMATION_2OUTOF82 out of 8 decimation * CAP_PVAPI_DECIMATION_2OUTOF162 out of 16 decimation * CAP_PVAPI_DECIMATION_OFFOff * CAP_PVAPI_FSTRIGMODE_FIXEDRATEFixedRate * CAP_PVAPI_FSTRIGMODE_FREERUNFreerun * CAP_PVAPI_FSTRIGMODE_SOFTWARESoftware * CAP_PVAPI_FSTRIGMODE_SYNCIN1SyncIn1 * CAP_PVAPI_FSTRIGMODE_SYNCIN2SyncIn2 * CAP_PVAPI_PIXELFORMAT_BAYER8Bayer8 * CAP_PVAPI_PIXELFORMAT_BAYER16Bayer16 * CAP_PVAPI_PIXELFORMAT_BGR24Bgr24 * CAP_PVAPI_PIXELFORMAT_BGRA32Bgra32 * CAP_PVAPI_PIXELFORMAT_MONO8Mono8 * CAP_PVAPI_PIXELFORMAT_MONO16Mono16 * CAP_PVAPI_PIXELFORMAT_RGB24Rgb24 * CAP_PVAPI_PIXELFORMAT_RGBA32Rgba32 * CAP_QTQuickTime (obsolete, removed) * CAP_REALSENSESynonym for CAP_INTELPERC * CAP_UEYEuEye Camera API * CAP_UNICAPUnicap drivers (obsolete, removed) * CAP_V4LV4L/V4L2 capturing support * CAP_V4L2Same as CAP_V4L * CAP_VFWVideo For Windows (obsolete, removed) * CAP_WINRTMicrosoft Windows Runtime using Media Foundation * CAP_XIAPIXIMEA Camera API * CAP_XINEXINE engine (Linux) * CV__CAP_PROP_LATEST * CV__VIDEOWRITER_PROP_LATEST * VIDEOWRITER_PROP_DEPTHDefaults to CV_8U. * VIDEOWRITER_PROP_FRAMEBYTES(Read-only): Size of just encoded video frame. Note that the encoding order may be different from representation order. * VIDEOWRITER_PROP_HW_ACCELERATION(**open-only**) Hardware acceleration type (see #VideoAccelerationType). Setting supported only via `params` parameter in VideoWriter constructor / .open() method. Default value is backend-specific. 
* VIDEOWRITER_PROP_HW_ACCELERATION_USE_OPENCL(**open-only**) If non-zero, create a new OpenCL context and bind it to the current thread. The OpenCL context is created with the Video Acceleration context attached to it (if not attached yet) for optimized GPU data copy between cv::UMat and the HW-accelerated encoder. * VIDEOWRITER_PROP_HW_DEVICE(**open-only**) Hardware device index (select GPU if multiple available). Device enumeration is acceleration type specific. * VIDEOWRITER_PROP_IS_COLORIf it is not zero, the encoder will expect and encode color frames, otherwise it will work with grayscale frames. * VIDEOWRITER_PROP_NSTRIPESNumber of stripes for parallel encoding. -1 for auto detection. * VIDEOWRITER_PROP_QUALITYCurrent quality (0..100%) of the encoded videostream. Can be adjusted dynamically in some codecs. * VIDEO_ACCELERATION_ANYPrefer to use H/W acceleration. If none is supported, fall back to software processing. * VIDEO_ACCELERATION_D3D11DirectX 11 * VIDEO_ACCELERATION_MFXlibmfx (Intel MediaSDK/oneVPL) * VIDEO_ACCELERATION_NONEDo not require any specific H/W acceleration, prefer software processing. Reading of this value means that special H/W accelerated handling is not added or not detected by OpenCV. 
* VIDEO_ACCELERATION_VAAPIVAAPI Traits --- * VideoCaptureTraitMutable methods for crate::videoio::VideoCapture * VideoCaptureTraitConstConstant methods for crate::videoio::VideoCapture * VideoWriterTraitMutable methods for crate::videoio::VideoWriter * VideoWriterTraitConstConstant methods for crate::videoio::VideoWriter Functions --- * get_backend_nameReturns backend API name or “UnknownVideoAPI(xxx)” * get_backendsReturns list of all available backends * get_camera_backend_plugin_versionReturns description and ABI/API version of videoio plugin’s camera interface * get_camera_backendsReturns list of available backends which work via `cv::VideoCapture(int index)` * get_stream_backend_plugin_versionReturns description and ABI/API version of videoio plugin’s stream capture interface * get_stream_backendsReturns list of available backends which work via `cv::VideoCapture(filename)` * get_writer_backend_plugin_versionReturns description and ABI/API version of videoio plugin’s writer interface * get_writer_backendsReturns list of available backends which work via `cv::VideoWriter()` * has_backendReturns true if backend is available * is_backend_built_inReturns true if backend is built in (false if backend is used as plugin) Module opencv::videostab === Video Stabilization --- The video stabilization module contains a set of functions and classes that can be used to solve the problem of video stabilization. There are a few methods implemented, most of which are described in the papers OF06 and G11. However, there are some extensions and deviations from the original paper methods. #### References 1. “Full-Frame Video Stabilization with Motion Inpainting” <NAME>, <NAME>, <NAME>, <NAME>, Senior Member, and Heung-<NAME> 2. 
“Auto-Directed Video Stabilization with Robust L1 Optimal Camera Paths” <NAME>, <NAME>, <NAME> # Global Motion Estimation The video stabilization module contains a set of functions and classes for global motion estimation between point clouds or between images. In the latter case features are extracted and matched internally. For the sake of convenience the motion estimation functions are wrapped into classes. Both the functions and the classes are available. The Fast Marching Method Telea04 is used in some of the video stabilization routines to do motion and color inpainting. The method is implemented in a flexible way and it’s made public for other users. Modules --- * prelude Structs --- * ColorAverageInpainter * ColorInpainter * ConsistentMosaicInpainter * DeblurerBase * DensePyrLkOptFlowEstimatorGpu * FastMarchingMethodDescribes the Fast Marching Method implementation. * FromFileMotionReader * GaussianMotionFilter * IDenseOptFlowEstimator * IFrameSource * ILog * IMotionStabilizer * IOutlierRejector * ISparseOptFlowEstimator * ImageMotionEstimatorBaseBase class for global 2D motion estimation methods which take frames as input. * InpainterBase * InpaintingPipeline * KeypointBasedMotionEstimatorDescribes a global 2D motion estimation method which uses keypoint detection and optical flow for matching. * KeypointBasedMotionEstimatorGpu * LogToStdout * LpMotionStabilizer * MaskFrameSource * MoreAccurateMotionWobbleSuppressor * MoreAccurateMotionWobbleSuppressorBase * MoreAccurateMotionWobbleSuppressorGpu * MotionEstimatorBaseBase class for all global motion estimation methods. * MotionEstimatorL1Describes a global 2D motion estimation method which minimizes L1 error. * MotionEstimatorRansacL2Describes a robust RANSAC-based global 2D motion estimation method which minimizes L2 error. 
* MotionFilterBase * MotionInpainter * MotionStabilizationPipeline * NullDeblurer * NullFrameSource * NullInpainter * NullLog * NullOutlierRejector * NullWobbleSuppressor * OnePassStabilizer * PyrLkOptFlowEstimatorBase * RansacParamsDescribes RANSAC method parameters. * SparsePyrLkOptFlowEstimator * SparsePyrLkOptFlowEstimatorGpu * StabilizerBase * ToFileMotionWriter * TranslationBasedLocalOutlierRejector * TwoPassStabilizer * VideoFileSource * WeightingDeblurer * WobbleSuppressorBase Enums --- * MotionModelDescribes motion model between two point clouds. Constants --- * MM_AFFINE * MM_HOMOGRAPHY * MM_RIGID * MM_ROTATION * MM_SIMILARITY * MM_TRANSLATION * MM_TRANSLATION_AND_SCALE * MM_UNKNOWN Traits --- * ColorAverageInpainterTraitMutable methods for crate::videostab::ColorAverageInpainter * ColorAverageInpainterTraitConstConstant methods for crate::videostab::ColorAverageInpainter * ColorInpainterTraitMutable methods for crate::videostab::ColorInpainter * ColorInpainterTraitConstConstant methods for crate::videostab::ColorInpainter * ConsistentMosaicInpainterTraitMutable methods for crate::videostab::ConsistentMosaicInpainter * ConsistentMosaicInpainterTraitConstConstant methods for crate::videostab::ConsistentMosaicInpainter * DeblurerBaseTraitMutable methods for crate::videostab::DeblurerBase * DeblurerBaseTraitConstConstant methods for crate::videostab::DeblurerBase * DensePyrLkOptFlowEstimatorGpuTraitMutable methods for crate::videostab::DensePyrLkOptFlowEstimatorGpu * DensePyrLkOptFlowEstimatorGpuTraitConstConstant methods for crate::videostab::DensePyrLkOptFlowEstimatorGpu * FastMarchingMethodTraitMutable methods for crate::videostab::FastMarchingMethod * FastMarchingMethodTraitConstConstant methods for crate::videostab::FastMarchingMethod * FromFileMotionReaderTraitMutable methods for crate::videostab::FromFileMotionReader * FromFileMotionReaderTraitConstConstant methods for crate::videostab::FromFileMotionReader * GaussianMotionFilterTraitMutable methods 
for crate::videostab::GaussianMotionFilter * GaussianMotionFilterTraitConstConstant methods for crate::videostab::GaussianMotionFilter * IDenseOptFlowEstimatorTraitMutable methods for crate::videostab::IDenseOptFlowEstimator * IDenseOptFlowEstimatorTraitConstConstant methods for crate::videostab::IDenseOptFlowEstimator * IFrameSourceTraitMutable methods for crate::videostab::IFrameSource * IFrameSourceTraitConstConstant methods for crate::videostab::IFrameSource * ILogTraitMutable methods for crate::videostab::ILog * ILogTraitConstConstant methods for crate::videostab::ILog * IMotionStabilizerTraitMutable methods for crate::videostab::IMotionStabilizer * IMotionStabilizerTraitConstConstant methods for crate::videostab::IMotionStabilizer * IOutlierRejectorTraitMutable methods for crate::videostab::IOutlierRejector * IOutlierRejectorTraitConstConstant methods for crate::videostab::IOutlierRejector * ISparseOptFlowEstimatorTraitMutable methods for crate::videostab::ISparseOptFlowEstimator * ISparseOptFlowEstimatorTraitConstConstant methods for crate::videostab::ISparseOptFlowEstimator * ImageMotionEstimatorBaseTraitMutable methods for crate::videostab::ImageMotionEstimatorBase * ImageMotionEstimatorBaseTraitConstConstant methods for crate::videostab::ImageMotionEstimatorBase * InpainterBaseTraitMutable methods for crate::videostab::InpainterBase * InpainterBaseTraitConstConstant methods for crate::videostab::InpainterBase * InpaintingPipelineTraitMutable methods for crate::videostab::InpaintingPipeline * InpaintingPipelineTraitConstConstant methods for crate::videostab::InpaintingPipeline * KeypointBasedMotionEstimatorGpuTraitMutable methods for crate::videostab::KeypointBasedMotionEstimatorGpu * KeypointBasedMotionEstimatorGpuTraitConstConstant methods for crate::videostab::KeypointBasedMotionEstimatorGpu * KeypointBasedMotionEstimatorTraitMutable methods for crate::videostab::KeypointBasedMotionEstimator * KeypointBasedMotionEstimatorTraitConstConstant methods for 
crate::videostab::KeypointBasedMotionEstimator * LogToStdoutTraitMutable methods for crate::videostab::LogToStdout * LogToStdoutTraitConstConstant methods for crate::videostab::LogToStdout * LpMotionStabilizerTraitMutable methods for crate::videostab::LpMotionStabilizer * LpMotionStabilizerTraitConstConstant methods for crate::videostab::LpMotionStabilizer * MaskFrameSourceTraitMutable methods for crate::videostab::MaskFrameSource * MaskFrameSourceTraitConstConstant methods for crate::videostab::MaskFrameSource * MoreAccurateMotionWobbleSuppressorBaseTraitMutable methods for crate::videostab::MoreAccurateMotionWobbleSuppressorBase * MoreAccurateMotionWobbleSuppressorBaseTraitConstConstant methods for crate::videostab::MoreAccurateMotionWobbleSuppressorBase * MoreAccurateMotionWobbleSuppressorGpuTraitMutable methods for crate::videostab::MoreAccurateMotionWobbleSuppressorGpu * MoreAccurateMotionWobbleSuppressorGpuTraitConstConstant methods for crate::videostab::MoreAccurateMotionWobbleSuppressorGpu * MoreAccurateMotionWobbleSuppressorTraitMutable methods for crate::videostab::MoreAccurateMotionWobbleSuppressor * MoreAccurateMotionWobbleSuppressorTraitConstConstant methods for crate::videostab::MoreAccurateMotionWobbleSuppressor * MotionEstimatorBaseTraitMutable methods for crate::videostab::MotionEstimatorBase * MotionEstimatorBaseTraitConstConstant methods for crate::videostab::MotionEstimatorBase * MotionEstimatorL1TraitMutable methods for crate::videostab::MotionEstimatorL1 * MotionEstimatorL1TraitConstConstant methods for crate::videostab::MotionEstimatorL1 * MotionEstimatorRansacL2TraitMutable methods for crate::videostab::MotionEstimatorRansacL2 * MotionEstimatorRansacL2TraitConstConstant methods for crate::videostab::MotionEstimatorRansacL2 * MotionFilterBaseTraitMutable methods for crate::videostab::MotionFilterBase * MotionFilterBaseTraitConstConstant methods for crate::videostab::MotionFilterBase * MotionInpainterTraitMutable methods for 
crate::videostab::MotionInpainter * MotionInpainterTraitConstConstant methods for crate::videostab::MotionInpainter * MotionStabilizationPipelineTraitMutable methods for crate::videostab::MotionStabilizationPipeline * MotionStabilizationPipelineTraitConstConstant methods for crate::videostab::MotionStabilizationPipeline * NullDeblurerTraitMutable methods for crate::videostab::NullDeblurer * NullDeblurerTraitConstConstant methods for crate::videostab::NullDeblurer * NullFrameSourceTraitMutable methods for crate::videostab::NullFrameSource * NullFrameSourceTraitConstConstant methods for crate::videostab::NullFrameSource * NullInpainterTraitMutable methods for crate::videostab::NullInpainter * NullInpainterTraitConstConstant methods for crate::videostab::NullInpainter * NullLogTraitMutable methods for crate::videostab::NullLog * NullLogTraitConstConstant methods for crate::videostab::NullLog * NullOutlierRejectorTraitMutable methods for crate::videostab::NullOutlierRejector * NullOutlierRejectorTraitConstConstant methods for crate::videostab::NullOutlierRejector * NullWobbleSuppressorTraitMutable methods for crate::videostab::NullWobbleSuppressor * NullWobbleSuppressorTraitConstConstant methods for crate::videostab::NullWobbleSuppressor * OnePassStabilizerTraitMutable methods for crate::videostab::OnePassStabilizer * OnePassStabilizerTraitConstConstant methods for crate::videostab::OnePassStabilizer * PyrLkOptFlowEstimatorBaseTraitMutable methods for crate::videostab::PyrLkOptFlowEstimatorBase * PyrLkOptFlowEstimatorBaseTraitConstConstant methods for crate::videostab::PyrLkOptFlowEstimatorBase * RansacParamsTraitMutable methods for crate::videostab::RansacParams * RansacParamsTraitConstConstant methods for crate::videostab::RansacParams * SparsePyrLkOptFlowEstimatorGpuTraitMutable methods for crate::videostab::SparsePyrLkOptFlowEstimatorGpu * SparsePyrLkOptFlowEstimatorGpuTraitConstConstant methods for crate::videostab::SparsePyrLkOptFlowEstimatorGpu * 
SparsePyrLkOptFlowEstimatorTraitMutable methods for crate::videostab::SparsePyrLkOptFlowEstimator * SparsePyrLkOptFlowEstimatorTraitConstConstant methods for crate::videostab::SparsePyrLkOptFlowEstimator * StabilizerBaseTraitMutable methods for crate::videostab::StabilizerBase * StabilizerBaseTraitConstConstant methods for crate::videostab::StabilizerBase * ToFileMotionWriterTraitMutable methods for crate::videostab::ToFileMotionWriter * ToFileMotionWriterTraitConstConstant methods for crate::videostab::ToFileMotionWriter * TranslationBasedLocalOutlierRejectorTraitMutable methods for crate::videostab::TranslationBasedLocalOutlierRejector * TranslationBasedLocalOutlierRejectorTraitConstConstant methods for crate::videostab::TranslationBasedLocalOutlierRejector * TwoPassStabilizerTraitMutable methods for crate::videostab::TwoPassStabilizer * TwoPassStabilizerTraitConstConstant methods for crate::videostab::TwoPassStabilizer * VideoFileSourceTraitMutable methods for crate::videostab::VideoFileSource * VideoFileSourceTraitConstConstant methods for crate::videostab::VideoFileSource * WeightingDeblurerTraitMutable methods for crate::videostab::WeightingDeblurer * WeightingDeblurerTraitConstConstant methods for crate::videostab::WeightingDeblurer * WobbleSuppressorBaseTraitMutable methods for crate::videostab::WobbleSuppressorBase * WobbleSuppressorBaseTraitConstConstant methods for crate::videostab::WobbleSuppressorBase Functions --- * calc_blurriness * calc_flow_mask * complete_frame_according_to_flow * ensure_inclusion_constraint * estimate_global_motion_least_squaresEstimates best global motion between two 2D point clouds in the least-squares sense. * estimate_global_motion_least_squares_defEstimates best global motion between two 2D point clouds in the least-squares sense. * estimate_global_motion_ransacEstimates best global motion between two 2D point clouds robustly (using RANSAC method). 
* estimate_global_motion_ransac_defEstimates best global motion between two 2D point clouds robustly (using RANSAC method). * estimate_optimal_trim_ratio * get_motionComputes motion between two frames assuming that all the intermediate motions are known. Module opencv::viz === 3D Visualizer --- This section describes the 3D visualization window as well as the classes and methods that are used to interact with it. The 3D visualization window (see Viz3d) is used to display widgets (see Widget), and it provides several methods to interact with the scene and widgets. Widget --- In this section, the widget framework is explained. Widgets represent 2D or 3D objects, varying from simple ones such as lines to complex ones such as point clouds and meshes. Widgets are **implicitly shared**. Therefore, one can add a widget to the scene, and modify the widget without re-adding the widget. ``` // Create a cloud widget viz::WCloud cw(cloud, viz::Color::red()); // Display it in a window myWindow.showWidget("CloudWidget1", cw); // Modify it, and it will be modified in the window. cw.setColor(viz::Color::yellow()); ``` Modules --- * prelude Structs --- * CameraThis class wraps intrinsic parameters of a camera. * ColorThis class represents color in BGR order. * KeyboardEventThis class represents a keyboard event. * MeshThis class wraps mesh attributes, and it can load a mesh from a ply file. * MouseEventThis class represents a mouse event. * Viz3dThe Viz3d class represents a 3D visualizer window. This class is implicitly shared. * WArrowThis 3D Widget defines an arrow. * WCameraPositionThis 3D Widget represents camera position in a scene by its axes or viewing frustum. * WCircleThis 3D Widget defines a circle. * WCloudThis 3D Widget defines a point cloud. * WCloudCollectionThis 3D Widget defines a collection of clouds. * WCloudNormalsThis 3D Widget represents normals of a point cloud. * WConeThis 3D Widget defines a cone. * WCoordinateSystemThis 3D Widget represents a coordinate system. 
* WCubeThis 3D Widget defines a cube. * WCylinderThis 3D Widget defines a cylinder. * WGridThis 3D Widget defines a grid. * WImage3DThis 3D Widget represents an image in 3D space. * WImageOverlayThis 2D Widget represents an image overlay. * WLineThis 3D Widget defines a finite line. * WMeshConstructs a WMesh. * WPaintedCloud * WPlaneThis 3D Widget defines a finite plane. * WPolyLineThis 3D Widget defines a poly line. * WSphereThis 3D Widget defines a sphere. * WTextThis 2D Widget represents text overlay. * WText3DThis 3D Widget represents 3D text. The text always faces the camera. * WTrajectoryThis 3D Widget represents a trajectory. * WTrajectoryFrustumsThis 3D Widget represents a trajectory. * WTrajectorySpheresThis 3D Widget represents a trajectory using spheres and lines. * WWidgetMergerThis class allows one to merge several widgets into a single one. * WidgetBase class of all widgets. Widget is implicitly shared. * Widget2DBase class of all 2D widgets. * Widget3DBase class of all 3D widgets. 
Enums --- * KeyboardEvent_Action * MouseEvent_MouseButton * MouseEvent_Type * RenderingPropertiesWidget rendering properties * RepresentationValues * ShadingValues Constants --- * AMBIENT * FONT_SIZE * IMMEDIATE_RENDERING * KeyboardEvent_ALT * KeyboardEvent_CTRL * KeyboardEvent_KEY_DOWN * KeyboardEvent_KEY_UP * KeyboardEvent_NONE * KeyboardEvent_SHIFT * LIGHTING * LINE_WIDTH * Mesh_LOAD_AUTO * Mesh_LOAD_OBJ * Mesh_LOAD_PLY * MouseEvent_LeftButton * MouseEvent_MiddleButton * MouseEvent_MouseButtonPress * MouseEvent_MouseButtonRelease * MouseEvent_MouseDblClick * MouseEvent_MouseMove * MouseEvent_MouseScrollDown * MouseEvent_MouseScrollUp * MouseEvent_NoButton * MouseEvent_RightButton * MouseEvent_VScroll * OPACITY * POINT_SIZE * REPRESENTATION * REPRESENTATION_POINTS * REPRESENTATION_SURFACE * REPRESENTATION_WIREFRAME * SHADING * SHADING_FLAT * SHADING_GOURAUD * SHADING_PHONG * WTrajectory_BOTH * WTrajectory_FRAMES * WTrajectory_PATH Traits --- * CameraTraitMutable methods for crate::viz::Camera * CameraTraitConstConstant methods for crate::viz::Camera * ColorTraitMutable methods for crate::viz::Color * ColorTraitConstConstant methods for crate::viz::Color * KeyboardEventTraitMutable methods for crate::viz::KeyboardEvent * KeyboardEventTraitConstConstant methods for crate::viz::KeyboardEvent * MeshTraitMutable methods for crate::viz::Mesh * MeshTraitConstConstant methods for crate::viz::Mesh * MouseEventTraitMutable methods for crate::viz::MouseEvent * MouseEventTraitConstConstant methods for crate::viz::MouseEvent * Viz3dTraitMutable methods for crate::viz::Viz3d * Viz3dTraitConstConstant methods for crate::viz::Viz3d * WArrowTraitMutable methods for crate::viz::WArrow * WArrowTraitConstConstant methods for crate::viz::WArrow * WCameraPositionTraitMutable methods for crate::viz::WCameraPosition * WCameraPositionTraitConstConstant methods for crate::viz::WCameraPosition * WCircleTraitMutable 
methods for crate::viz::WCircle * WCircleTraitConstConstant methods for crate::viz::WCircle * WCloudCollectionTraitMutable methods for crate::viz::WCloudCollection * WCloudCollectionTraitConstConstant methods for crate::viz::WCloudCollection * WCloudNormalsTraitMutable methods for crate::viz::WCloudNormals * WCloudNormalsTraitConstConstant methods for crate::viz::WCloudNormals * WCloudTraitMutable methods for crate::viz::WCloud * WCloudTraitConstConstant methods for crate::viz::WCloud * WConeTraitMutable methods for crate::viz::WCone * WConeTraitConstConstant methods for crate::viz::WCone * WCoordinateSystemTraitMutable methods for crate::viz::WCoordinateSystem * WCoordinateSystemTraitConstConstant methods for crate::viz::WCoordinateSystem * WCubeTraitMutable methods for crate::viz::WCube * WCubeTraitConstConstant methods for crate::viz::WCube * WCylinderTraitMutable methods for crate::viz::WCylinder * WCylinderTraitConstConstant methods for crate::viz::WCylinder * WGridTraitMutable methods for crate::viz::WGrid * WGridTraitConstConstant methods for crate::viz::WGrid * WImage3DTraitMutable methods for crate::viz::WImage3D * WImage3DTraitConstConstant methods for crate::viz::WImage3D * WImageOverlayTraitMutable methods for crate::viz::WImageOverlay * WImageOverlayTraitConstConstant methods for crate::viz::WImageOverlay * WLineTraitMutable methods for crate::viz::WLine * WLineTraitConstConstant methods for crate::viz::WLine * WMeshTraitMutable methods for crate::viz::WMesh * WMeshTraitConstConstant methods for crate::viz::WMesh * WPaintedCloudTraitMutable methods for crate::viz::WPaintedCloud * WPaintedCloudTraitConstConstant methods for crate::viz::WPaintedCloud * WPlaneTraitMutable methods for crate::viz::WPlane * WPlaneTraitConstConstant methods for crate::viz::WPlane * WPolyLineTraitMutable methods for crate::viz::WPolyLine * WPolyLineTraitConstConstant methods for crate::viz::WPolyLine * WSphereTraitMutable methods for crate::viz::WSphere * 
WSphereTraitConstConstant methods for crate::viz::WSphere * WText3DTraitMutable methods for crate::viz::WText3D * WText3DTraitConstConstant methods for crate::viz::WText3D * WTextTraitMutable methods for crate::viz::WText * WTextTraitConstConstant methods for crate::viz::WText * WTrajectoryFrustumsTraitMutable methods for crate::viz::WTrajectoryFrustums * WTrajectoryFrustumsTraitConstConstant methods for crate::viz::WTrajectoryFrustums * WTrajectorySpheresTraitMutable methods for crate::viz::WTrajectorySpheres * WTrajectorySpheresTraitConstConstant methods for crate::viz::WTrajectorySpheres * WTrajectoryTraitMutable methods for crate::viz::WTrajectory * WTrajectoryTraitConstConstant methods for crate::viz::WTrajectory * WWidgetMergerTraitMutable methods for crate::viz::WWidgetMerger * WWidgetMergerTraitConstConstant methods for crate::viz::WWidgetMerger * Widget2DTraitMutable methods for crate::viz::Widget2D * Widget2DTraitConstConstant methods for crate::viz::Widget2D * Widget3DTraitMutable methods for crate::viz::Widget3D * Widget3DTraitConstConstant methods for crate::viz::Widget3D * WidgetTraitMutable methods for crate::viz::Widget * WidgetTraitConstConstant methods for crate::viz::Widget Functions --- * compute_normalsComputing normals for mesh * get_window_by_nameRetrieves a window by its name. * imshowDisplays image in the specified window * imshow_defDisplays image in the specified window * make_camera_poseConstructs camera pose from position, focal_point and up_vector (see gluLookAt() for more information). * make_transform_to_globalTakes coordinate frame data and builds transform to global coordinate frame. * make_transform_to_global_defTakes coordinate frame data and builds transform to global coordinate frame. 
* read_cloudParameters * read_cloud_defParameters * read_meshReads mesh. Only the ply format is supported for now, with no texture load support * read_poseParameters * read_pose_defParameters * read_trajectorytakes vector<Affine3> with T = float/double and loads poses from a sequence of files * read_trajectory_deftakes vector<Affine3> with T = float/double and loads poses from a sequence of files * unregister_all_windowsUnregisters all Viz windows from the internal database. After this, ‘getWindowByName()’ will create new windows instead of retrieving existing ones from the database. * write_cloudParameters * write_cloud_defParameters * write_poseParameters * write_pose_defParameters * write_trajectorytakes vector<Affine3> with T = float/double and writes to a sequence of files with the given filename format * write_trajectory_deftakes vector<Affine3> with T = float/double and writes to a sequence of files with the given filename format Module opencv::wechat_qrcode === WeChat QR code detector for detecting and parsing QR code. --- Modules --- * prelude Structs --- * WeChatQRCodeWeChat QRCode includes two CNN-based models: an object detection model and a super-resolution model. The object detection model is applied to detect the QR code with a bounding box; the super-resolution model is applied to zoom in on the QR code when it is small. Traits --- * WeChatQRCodeTraitMutable methods for crate::wechat_qrcode::WeChatQRCode * WeChatQRCodeTraitConstConstant methods for crate::wechat_qrcode::WeChatQRCode Module opencv::xfeatures2d === Extra 2D Features Framework --- Experimental 2D Features Algorithms --- This section describes experimental algorithms for 2d feature detection. Non-free 2D Features Algorithms --- This section describes two popular algorithms for 2d feature detection, SIFT and SURF, that are known to be patented. 
You need to set the OPENCV_ENABLE_NONFREE option in cmake to use those. Use them at your own risk. Experimental 2D Features Matching Algorithm --- This section describes the following matching strategies: * GMS: Grid-based Motion Statistics, Bian2017gms * LOGOS: Local geometric support for high-outlier spatial verification, Lowry2018LOGOSLG Modules --- * prelude Structs --- * AffineFeature2DClass implementing affine adaptation for key points. * BEBLIDClass implementing BEBLID (Boosted Efficient Binary Local Image Descriptor), described in Suarez2020BEBLID. * BoostDescClass implementing BoostDesc (Learning Image Descriptors with Boosting), described in Trzcinski13a and Trzcinski13b. * BriefDescriptorExtractorClass for computing BRIEF descriptors described in calon2010. * DAISYClass implementing DAISY descriptor, described in Tola10. * Elliptic_KeyPointElliptic region around an interest point. * FREAKClass implementing the FREAK (*Fast Retina Keypoint*) keypoint descriptor, described in AOV12. * HarrisLaplaceFeatureDetectorClass implementing the Harris-Laplace feature detector as described in Mikolajczyk2004. * LATCHClass for computing the LATCH descriptor. If you find this code useful, please add a reference to the following paper in your work: <NAME> and <NAME>, “LATCH: Learned Arrangements of Three Patch Codes”, arXiv preprint arXiv:1501.03719, 15 Jan. 2015 * LUCIDClass implementing the locally uniform comparison image descriptor, described in LUCID * MSDDetectorClass implementing the MSD (*Maximal Self-Dissimilarity*) keypoint detector, described in Tombari14. * PCTSignaturesClass implementing PCT (position-color-texture) signature extraction as described in KrulisLS16. The algorithm is divided into a feature sampler and a clusterizer. The feature sampler produces samples at a given set of coordinates. The clusterizer then produces clusters of these samples using the k-means algorithm. The resulting set of clusters is the signature of the input image. 
* PCTSignaturesSQFDClass implementing Signature Quadratic Form Distance (SQFD). * SURFClass for extracting Speeded Up Robust Features from an image Bay06. * SURF_CUDAClass used for extracting Speeded Up Robust Features (SURF) from an image. * StarDetectorThe class implements the keypoint detector introduced by Agrawal08, synonym of StarDetector. * TBMRClass implementing the Tree Based Morse Regions (TBMR) as described in Najman2014 extended with scaled extraction ability. * TEBLIDClass implementing TEBLID (Triplet-based Efficient Binary Local Image Descriptor), described in Suarez2021TEBLID. * VGGClass implementing VGG (Oxford Visual Geometry Group) descriptor trained end to end using “Descriptor Learning Using Convex Optimisation” (DLCO) apparatus described in Simonyan14. Enums --- * BEBLID_BeblidSizeDescriptor number of bits, each bit is a boosting weak-learner. The user can choose between 512 or 256 bits. * DAISY_NormalizationType * PCTSignatures_DistanceFunctionLp distance function selector. * PCTSignatures_PointDistributionPoint distributions supported by random point generator. * PCTSignatures_SimilarityFunctionSimilarity function selector. * SURF_CUDA_KeypointLayout * TEBLID_TeblidSizeDescriptor number of bits, each bit is a box average difference. The user can choose between 256 or 512 bits. Constants --- * BEBLID_SIZE_256_BITS * BEBLID_SIZE_512_BITS * BoostDesc_BGM * BoostDesc_BGM_BILINEAR * BoostDesc_BGM_HARD * BoostDesc_BINBOOST_64 * BoostDesc_BINBOOST_128 * BoostDesc_BINBOOST_256 * BoostDesc_LBGM * DAISY_NRM_FULL * DAISY_NRM_NONE * DAISY_NRM_PARTIAL * DAISY_NRM_SIFT * PCTSignatures_GAUSSIANblock formula * PCTSignatures_HEURISTICblock formula * PCTSignatures_L0_5 * PCTSignatures_L0_25 * PCTSignatures_L1 * PCTSignatures_L2 * PCTSignatures_L2SQUARED * PCTSignatures_L5 * PCTSignatures_L_INFINITY * PCTSignatures_MINUSblock formula * PCTSignatures_NORMALGenerate points with normal (gaussian) distribution. 
* PCTSignatures_REGULARGenerate points in a regular grid. * PCTSignatures_UNIFORMGenerate numbers uniformly. * SURF_CUDA_ANGLE_ROW * SURF_CUDA_HESSIAN_ROW * SURF_CUDA_LAPLACIAN_ROW * SURF_CUDA_OCTAVE_ROW * SURF_CUDA_ROWS_COUNT * SURF_CUDA_SIZE_ROW * SURF_CUDA_X_ROW * SURF_CUDA_Y_ROW * TEBLID_SIZE_256_BITS * TEBLID_SIZE_512_BITS * VGG_VGG_48 * VGG_VGG_64 * VGG_VGG_80 * VGG_VGG_120 Traits --- * AffineFeature2DTraitMutable methods for crate::xfeatures2d::AffineFeature2D * AffineFeature2DTraitConstConstant methods for crate::xfeatures2d::AffineFeature2D * BEBLIDTraitMutable methods for crate::xfeatures2d::BEBLID * BEBLIDTraitConstConstant methods for crate::xfeatures2d::BEBLID * BoostDescTraitMutable methods for crate::xfeatures2d::BoostDesc * BoostDescTraitConstConstant methods for crate::xfeatures2d::BoostDesc * BriefDescriptorExtractorTraitMutable methods for crate::xfeatures2d::BriefDescriptorExtractor * BriefDescriptorExtractorTraitConstConstant methods for crate::xfeatures2d::BriefDescriptorExtractor * DAISYTraitMutable methods for crate::xfeatures2d::DAISY * DAISYTraitConstConstant methods for crate::xfeatures2d::DAISY * Elliptic_KeyPointTraitMutable methods for crate::xfeatures2d::Elliptic_KeyPoint * Elliptic_KeyPointTraitConstConstant methods for crate::xfeatures2d::Elliptic_KeyPoint * FREAKTraitMutable methods for crate::xfeatures2d::FREAK * FREAKTraitConstConstant methods for crate::xfeatures2d::FREAK * HarrisLaplaceFeatureDetectorTraitMutable methods for crate::xfeatures2d::HarrisLaplaceFeatureDetector * HarrisLaplaceFeatureDetectorTraitConstConstant methods for crate::xfeatures2d::HarrisLaplaceFeatureDetector * LATCHTraitMutable methods for crate::xfeatures2d::LATCH * LATCHTraitConstConstant methods for crate::xfeatures2d::LATCH * LUCIDTraitMutable methods for crate::xfeatures2d::LUCID * LUCIDTraitConstConstant methods for crate::xfeatures2d::LUCID * MSDDetectorTraitMutable methods for crate::xfeatures2d::MSDDetector * MSDDetectorTraitConstConstant methods 
for crate::xfeatures2d::MSDDetector * PCTSignaturesSQFDTraitMutable methods for crate::xfeatures2d::PCTSignaturesSQFD * PCTSignaturesSQFDTraitConstConstant methods for crate::xfeatures2d::PCTSignaturesSQFD * PCTSignaturesTraitMutable methods for crate::xfeatures2d::PCTSignatures * PCTSignaturesTraitConstConstant methods for crate::xfeatures2d::PCTSignatures * SURFTraitMutable methods for crate::xfeatures2d::SURF * SURFTraitConstConstant methods for crate::xfeatures2d::SURF * SURF_CUDATraitMutable methods for crate::xfeatures2d::SURF_CUDA * SURF_CUDATraitConstConstant methods for crate::xfeatures2d::SURF_CUDA * StarDetectorTraitMutable methods for crate::xfeatures2d::StarDetector * StarDetectorTraitConstConstant methods for crate::xfeatures2d::StarDetector * TBMRTraitMutable methods for crate::xfeatures2d::TBMR * TBMRTraitConstConstant methods for crate::xfeatures2d::TBMR * TEBLIDTraitMutable methods for crate::xfeatures2d::TEBLID * TEBLIDTraitConstConstant methods for crate::xfeatures2d::TEBLID * VGGTraitMutable methods for crate::xfeatures2d::VGG * VGGTraitConstConstant methods for crate::xfeatures2d::VGG Functions --- * fast_for_point_setEstimates cornerness for prespecified KeyPoints using the FAST algorithm * fast_for_point_set_defEstimates cornerness for prespecified KeyPoints using the FAST algorithm * match_gmsGMS (Grid-based Motion Statistics) feature matching strategy described in Bian2017gms . * match_gms_defGMS (Grid-based Motion Statistics) feature matching strategy described in Bian2017gms . * match_logosLOGOS (Local geometric support for high-outlier spatial verification) feature matching strategy described in Lowry2018LOGOSLG . Type Aliases --- * SurfDescriptorExtractor * SurfFeatureDetector Module opencv::ximgproc === Extended Image Processing --- Structured forests for fast edge detection --- This module contains implementations of modern structured edge detection algorithms, i.e. 
algorithms which somehow take into account pixel affinities in natural images. EdgeBoxes --- Filters --- Superpixels --- Image segmentation --- Fast line detector --- EdgeDrawing --- EDGE DRAWING LIBRARY FOR GEOMETRIC FEATURE EXTRACTION AND VALIDATION The Edge Drawing (ED) algorithm is a proactive approach to the edge detection problem. In contrast to many other existing edge detection algorithms, which follow a subtractive approach (i.e. after applying gradient filters onto an image, eliminating pixels w.r.t. several rules, e.g. non-maximal suppression and hysteresis in Canny), the ED algorithm works via an additive strategy, i.e. it picks edge pixels one by one, hence the name Edge Drawing. Those randomly shaped edge segments are then processed to extract higher-level edge features, i.e. lines, circles, ellipses, etc. The popular method of extracting edge pixels from the thresholded gradient magnitudes is non-maximal suppression, which tests whether every pixel has the maximum gradient response along its gradient direction and eliminates it if it does not. However, this method does not check the status of the neighboring pixels, and therefore might result in low-quality (in terms of edge continuity, smoothness, thinness, localization) edge segments. Instead of non-maximal suppression, ED picks a set of edge pixels and joins them by maximizing the total gradient response of the edge segments. Therefore it can extract high-quality edge segments without the need for an additional hysteresis step. Fourier descriptors --- Binary morphology on run-length encoded image --- These functions support morphological operations on binary images. In order to be fast and space efficient, binary images are encoded with a run-length representation. This representation groups continuous horizontal sequences of “on” pixels together in a “run”. A run is characterized by the column position of the first pixel in the run, the column position of the last pixel in the run and the row position.
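The run representation just described can be illustrated with a small standalone sketch in plain Rust (not the crate API): each run is stored as (column begin, column end, row), and a leading entry records the original image size, mirroring the convention used by this module.

```rust
// Standalone sketch of the run-length representation described above
// (illustration only, not the opencv crate API). Each run is
// (column_begin, column_end, row); the first entry stores the size of
// the original image, as the encoding convention requires.
fn rle_encode(image: &[Vec<u8>]) -> Vec<(i32, i32, i32)> {
    let rows = image.len() as i32;
    let cols = if rows > 0 { image[0].len() as i32 } else { 0 };
    // First entry: size of the original (not encoded) image.
    let mut runs = vec![(cols, rows, 0)];
    for (y, row) in image.iter().enumerate() {
        let mut x = 0usize;
        while x < row.len() {
            if row[x] != 0 {
                let begin = x;
                while x < row.len() && row[x] != 0 {
                    x += 1;
                }
                // The run covers columns begin..=x-1 on row y.
                runs.push((begin as i32, (x - 1) as i32, y as i32));
            } else {
                x += 1;
            }
        }
    }
    runs
}

fn main() {
    // 4x2 image with two horizontal runs of "on" pixels.
    let img = vec![vec![0, 1, 1, 0], vec![1, 1, 1, 1]];
    println!("{:?}", rle_encode(&img));
}
```

As the surrounding text notes, this encoding is compact only when "on" pixels form long horizontal stretches; a noise image produces roughly one run per pixel.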
This representation is very compact for binary images which contain large continuous areas of “on” and “off” pixels. A checkerboard pattern would be a good example. The representation is not so suitable for binary images created from random noise images or other images where little correlation between neighboring pixels exists. The morphological operations supported here are very similar to the operations supported in the imgproc module. In general they are fast. However, on several occasions they are slower than the functions from imgproc. The structuring elements cv::MORPH_RECT and cv::MORPH_CROSS have very good support from the imgproc module. Also, small structuring elements are very fast in imgproc (presumably due to OpenCL support). Therefore the functions from this module are recommended for larger structuring elements (cv::MORPH_ELLIPSE or self-defined structuring elements). A sample application (run_length_morphology_demo) is supplied which allows one to compare the speed of some morphological operations for the functions using run-length encoding and the imgproc functions for a given image. Run-length encoded images are stored in standard OpenCV images. Images have a single column of cv::Point3i elements. The number of rows is the number of runs + 1. The first row contains the size of the original (not encoded) image. For the runs the following mapping is used (x: column begin, y: column end (last column), z: row). The size of the original image is required for compatibility with the imgproc functions when the boundary handling requires that pixels outside the image boundary are “on”. Modules --- * prelude Structs --- * AdaptiveManifoldFilterInterface for Adaptive Manifold Filter realizations. * Box * ContourFittingClass for ContourFitting algorithms.
ContourFitting matches two contours inline formula and inline formula minimizing distance block formula where inline formula and inline formula are Fourier descriptors of inline formula and inline formula and s is a scaling factor and inline formula is the angle rotation and inline formula is the starting point factor adjustment * DTFilterInterface for realizations of Domain Transform filter. * DisparityFilterMain interface for all disparity map filters. * DisparityWLSFilterDisparity map filter based on Weighted Least Squares filter (in the form of a Fast Global Smoother, which is a lot faster than traditional Weighted Least Squares filter implementations) and optional use of left-right-consistency-based confidence to refine the results in half-occlusions and uniform areas. * EdgeAwareInterpolatorSparse match interpolation algorithm based on modified locally-weighted affine estimator from Revaud2015 and Fast Global Smoother as post-processing filter. * EdgeBoxesClass implementing the EdgeBoxes algorithm from ZitnickECCV14edgeBoxes : * EdgeDrawingClass implementing the ED (EdgeDrawing) topal2012edge, EDLines akinlar2011edlines, EDPF akinlar2012edpf and EDCircles akinlar2013edcircles algorithms * EdgeDrawing_Params * FastBilateralSolverFilterInterface for implementations of Fast Bilateral Solver. * FastGlobalSmootherFilterInterface for implementations of Fast Global Smoother filter. * FastLineDetector@include samples/fld_lines.cpp * GraphSegmentationGraph Based Segmentation Algorithm. The class implements the algorithm described in PFF2004. * GuidedFilterInterface for realizations of Guided Filter. * RFFeatureGetterHelper class for the training part of [<NAME> and <NAME>. Structured Forests for Fast Edge Detection, 2013]. * RICInterpolatorSparse match interpolation algorithm based on modified piecewise locally-weighted affine estimator called Robust Interpolation method of Correspondences or RIC from Hu2017 and Variational and Fast Global Smoother as post-processing filter.
The RICInterpolator is an extension of the EdgeAwareInterpolator. The main concept of this extension is a piece-wise affine model based on over-segmentation via SLIC superpixel estimation. The method contains an efficient propagation mechanism to estimate among the piece-wise models. * RidgeDetectionFilterApplies Ridge Detection Filter to an input image. Implements Ridge detection similar to the one in Mathematica, using the eigenvalues from the Hessian Matrix of the input image computed with Sobel Derivatives. Additional refinement can be done using Skeletonization and Binarization. Adapted from segleafvein and M_RF * ScanSegmentClass implementing the F-DBSCAN (Accelerated superpixel image segmentation with a parallelized DBSCAN algorithm) superpixels algorithm by Loke SC, et al.; see loke2021accelerated for the original paper. * SelectiveSearchSegmentationSelective search segmentation algorithm The class implements the algorithm described in uijlings2013selective. * SelectiveSearchSegmentationStrategyStrategy for the selective search segmentation algorithm The class implements a generic strategy for the algorithm described in uijlings2013selective. * SelectiveSearchSegmentationStrategyColorColor-based strategy for the selective search segmentation algorithm The class is implemented from the algorithm described in uijlings2013selective. * SelectiveSearchSegmentationStrategyFillFill-based strategy for the selective search segmentation algorithm The class is implemented from the algorithm described in uijlings2013selective. * SelectiveSearchSegmentationStrategyMultipleRegroup multiple strategies for the selective search segmentation algorithm * SelectiveSearchSegmentationStrategySizeSize-based strategy for the selective search segmentation algorithm The class is implemented from the algorithm described in uijlings2013selective.
* SelectiveSearchSegmentationStrategyTextureTexture-based strategy for the selective search segmentation algorithm The class is implemented from the algorithm described in uijlings2013selective. * SparseMatchInterpolatorMain interface for all filters that take sparse matches as an input and produce a dense per-pixel matching (optical flow) as an output. * StructuredEdgeDetectionClass implementing edge detection algorithm from Dollar2013 : * SuperpixelLSCClass implementing the LSC (Linear Spectral Clustering) superpixels algorithm described in LiCVPR2015LSC. * SuperpixelSEEDSClass implementing the SEEDS (Superpixels Extracted via Energy-Driven Sampling) superpixels algorithm described in VBRV14. * SuperpixelSLICClass implementing the SLIC (Simple Linear Iterative Clustering) superpixels algorithm described in Achanta2012. Enums --- * AngleRangeOptionSpecifies the part of Hough space to calculate. Each member specifies primarily the direction of lines (horizontal or vertical) and the direction of angle changes. The direction of angle changes is from multiples of 90 to odd multiples of 45. The image is considered to be written top-down and left-to-right. Angles start from the vertical line and go clockwise. Separate quarters and halves are written in the orientation they would have in the full Hough space. * EdgeAwareFiltersList * EdgeDrawing_GradientOperator * HoughDeskewOptionSpecifies whether or not to skew the Hough transform image, so that there is no cycling in the Hough transform image through the borders of the image. * HoughOpSpecifies binary operations, that is, operations which involve two operands.
Formally, a binary operation f on a set S is a binary relation that maps elements of the Cartesian product S × S to S: f: S × S → S. * LocalBinarizationMethodsSpecifies the binarization method to use in cv::ximgproc::niBlackThreshold * RulesOptionSpecifies the degree of rules validation. This can be used, for example, to choose a proper way of validating input arguments. * SLICType * ThinningTypes * WMFWeightTypeSpecifies weight types of weighted median filter. Constants --- * AM_FILTER * ARO_0_45 * ARO_45_90 * ARO_45_135 * ARO_90_135 * ARO_315_0 * ARO_315_45 * ARO_315_135 * ARO_CTR_HOR * ARO_CTR_VER * BINARIZATION_NIBLACKClassic Niblack binarization. See Niblack1985. * BINARIZATION_NICKNICK technique. See Khurshid2009. * BINARIZATION_SAUVOLASauvola’s technique. See Sauvola1997. * BINARIZATION_WOLFWolf’s technique. See Wolf2004. * DTF_IC * DTF_NC * DTF_RF * EdgeDrawing_LSD * EdgeDrawing_PREWITT * EdgeDrawing_SCHARR * EdgeDrawing_SOBEL * FHT_ADD * FHT_AVE * FHT_MAX * FHT_MIN * GUIDED_FILTER * HDO_DESKEW * HDO_RAW * MSLIC * RO_IGNORE_BORDERSSkip validation of image borders. * RO_STRICTValidate each rule in a proper way.
* SLIC * SLICO * THINNING_GUOHALL * THINNING_ZHANGSUEN * WMF_COSinline formula * WMF_EXPinline formula * WMF_IV1inline formula * WMF_IV2inline formula * WMF_JACinline formula * WMF_OFFunweighted Traits --- * AdaptiveManifoldFilterTraitMutable methods for crate::ximgproc::AdaptiveManifoldFilter * AdaptiveManifoldFilterTraitConstConstant methods for crate::ximgproc::AdaptiveManifoldFilter * ContourFittingTraitMutable methods for crate::ximgproc::ContourFitting * ContourFittingTraitConstConstant methods for crate::ximgproc::ContourFitting * DTFilterTraitMutable methods for crate::ximgproc::DTFilter * DTFilterTraitConstConstant methods for crate::ximgproc::DTFilter * DisparityFilterTraitMutable methods for crate::ximgproc::DisparityFilter * DisparityFilterTraitConstConstant methods for crate::ximgproc::DisparityFilter * DisparityWLSFilterTraitMutable methods for crate::ximgproc::DisparityWLSFilter * DisparityWLSFilterTraitConstConstant methods for crate::ximgproc::DisparityWLSFilter * EdgeAwareInterpolatorTraitMutable methods for crate::ximgproc::EdgeAwareInterpolator * EdgeAwareInterpolatorTraitConstConstant methods for crate::ximgproc::EdgeAwareInterpolator * EdgeBoxesTraitMutable methods for crate::ximgproc::EdgeBoxes * EdgeBoxesTraitConstConstant methods for crate::ximgproc::EdgeBoxes * EdgeDrawingTraitMutable methods for crate::ximgproc::EdgeDrawing * EdgeDrawingTraitConstConstant methods for crate::ximgproc::EdgeDrawing * FastBilateralSolverFilterTraitMutable methods for crate::ximgproc::FastBilateralSolverFilter * FastBilateralSolverFilterTraitConstConstant methods for crate::ximgproc::FastBilateralSolverFilter * FastGlobalSmootherFilterTraitMutable methods for crate::ximgproc::FastGlobalSmootherFilter * FastGlobalSmootherFilterTraitConstConstant methods for crate::ximgproc::FastGlobalSmootherFilter * FastLineDetectorTraitMutable methods for crate::ximgproc::FastLineDetector * FastLineDetectorTraitConstConstant methods for crate::ximgproc::FastLineDetector * 
GraphSegmentationTraitMutable methods for crate::ximgproc::GraphSegmentation * GraphSegmentationTraitConstConstant methods for crate::ximgproc::GraphSegmentation * GuidedFilterTraitMutable methods for crate::ximgproc::GuidedFilter * GuidedFilterTraitConstConstant methods for crate::ximgproc::GuidedFilter * RFFeatureGetterTraitMutable methods for crate::ximgproc::RFFeatureGetter * RFFeatureGetterTraitConstConstant methods for crate::ximgproc::RFFeatureGetter * RICInterpolatorTraitMutable methods for crate::ximgproc::RICInterpolator * RICInterpolatorTraitConstConstant methods for crate::ximgproc::RICInterpolator * RidgeDetectionFilterTraitMutable methods for crate::ximgproc::RidgeDetectionFilter * RidgeDetectionFilterTraitConstConstant methods for crate::ximgproc::RidgeDetectionFilter * ScanSegmentTraitMutable methods for crate::ximgproc::ScanSegment * ScanSegmentTraitConstConstant methods for crate::ximgproc::ScanSegment * SelectiveSearchSegmentationStrategyColorTraitMutable methods for crate::ximgproc::SelectiveSearchSegmentationStrategyColor * SelectiveSearchSegmentationStrategyColorTraitConstConstant methods for crate::ximgproc::SelectiveSearchSegmentationStrategyColor * SelectiveSearchSegmentationStrategyFillTraitMutable methods for crate::ximgproc::SelectiveSearchSegmentationStrategyFill * SelectiveSearchSegmentationStrategyFillTraitConstConstant methods for crate::ximgproc::SelectiveSearchSegmentationStrategyFill * SelectiveSearchSegmentationStrategyMultipleTraitMutable methods for crate::ximgproc::SelectiveSearchSegmentationStrategyMultiple * SelectiveSearchSegmentationStrategyMultipleTraitConstConstant methods for crate::ximgproc::SelectiveSearchSegmentationStrategyMultiple * SelectiveSearchSegmentationStrategySizeTraitMutable methods for crate::ximgproc::SelectiveSearchSegmentationStrategySize * SelectiveSearchSegmentationStrategySizeTraitConstConstant methods for crate::ximgproc::SelectiveSearchSegmentationStrategySize * 
SelectiveSearchSegmentationStrategyTextureTraitMutable methods for crate::ximgproc::SelectiveSearchSegmentationStrategyTexture * SelectiveSearchSegmentationStrategyTextureTraitConstConstant methods for crate::ximgproc::SelectiveSearchSegmentationStrategyTexture * SelectiveSearchSegmentationStrategyTraitMutable methods for crate::ximgproc::SelectiveSearchSegmentationStrategy * SelectiveSearchSegmentationStrategyTraitConstConstant methods for crate::ximgproc::SelectiveSearchSegmentationStrategy * SelectiveSearchSegmentationTraitMutable methods for crate::ximgproc::SelectiveSearchSegmentation * SelectiveSearchSegmentationTraitConstConstant methods for crate::ximgproc::SelectiveSearchSegmentation * SparseMatchInterpolatorTraitMutable methods for crate::ximgproc::SparseMatchInterpolator * SparseMatchInterpolatorTraitConstConstant methods for crate::ximgproc::SparseMatchInterpolator * StructuredEdgeDetectionTraitMutable methods for crate::ximgproc::StructuredEdgeDetection * StructuredEdgeDetectionTraitConstConstant methods for crate::ximgproc::StructuredEdgeDetection * SuperpixelLSCTraitMutable methods for crate::ximgproc::SuperpixelLSC * SuperpixelLSCTraitConstConstant methods for crate::ximgproc::SuperpixelLSC * SuperpixelSEEDSTraitMutable methods for crate::ximgproc::SuperpixelSEEDS * SuperpixelSEEDSTraitConstConstant methods for crate::ximgproc::SuperpixelSEEDS * SuperpixelSLICTraitMutable methods for crate::ximgproc::SuperpixelSLIC * SuperpixelSLICTraitConstConstant methods for crate::ximgproc::SuperpixelSLIC Functions --- * am_filterSimple one-line Adaptive Manifold Filter call. * am_filter_defSimple one-line Adaptive Manifold Filter call. * anisotropic_diffusionPerforms anisotropic diffusion on an image. * bilateral_texture_filterApplies the bilateral texture filter to an image. It performs a structure-preserving texture filter. For more details about this filter see Cho2014. * bilateral_texture_filter_defApplies the bilateral texture filter to an image. It performs a structure-preserving texture filter. For more details about this filter see Cho2014. * bright_edgesC++ default parameters * bright_edges_defNote * color_match_templateCompares a color template against overlapped color image regions. * compute_bad_pixel_percentFunction for computing the percent of “bad” pixels in the disparity map (pixels where the error is higher than a specified threshold) * compute_bad_pixel_percent_defFunction for computing the percent of “bad” pixels in the disparity map (pixels where the error is higher than a specified threshold) * compute_mseFunction for computing mean square error for disparity maps * contour_samplingContour sampling. * covariance_estimationComputes the estimated covariance matrix of an image using the sliding window formulation. * create_am_filterFactory method, creates an instance of AdaptiveManifoldFilter and produces some initialization routines. * create_am_filter_defFactory method, creates an instance of AdaptiveManifoldFilter and produces some initialization routines. * create_contour_fittingCreates a ContourFitting algorithm object * create_contour_fitting_defCreates a ContourFitting algorithm object * create_disparity_wls_filterConvenience factory method that creates an instance of DisparityWLSFilter and sets up all the relevant filter parameters automatically based on the matcher instance. Currently supports only StereoBM and StereoSGBM. * create_disparity_wls_filter_genericMore generic factory method, creates an instance of DisparityWLSFilter and executes basic initialization routines. When using this method you will need to set up the ROI, matchers and other parameters by yourself. * create_dt_filterFactory method, creates an instance of DTFilter and produces initialization routines. * create_dt_filter_defFactory method, creates an instance of DTFilter and produces initialization routines. * create_edge_aware_interpolatorFactory method that creates an instance of the EdgeAwareInterpolator.
* create_edge_boxesCreates an EdgeBoxes * create_edge_boxes_defCreates an EdgeBoxes * create_edge_drawingCreates a smart pointer to an EdgeDrawing object and initializes it * create_fast_bilateral_solver_filterFactory method, creates an instance of FastBilateralSolverFilter and executes the initialization routines. * create_fast_bilateral_solver_filter_defFactory method, creates an instance of FastBilateralSolverFilter and executes the initialization routines. * create_fast_global_smoother_filterFactory method, creates an instance of FastGlobalSmootherFilter and executes the initialization routines. * create_fast_global_smoother_filter_defFactory method, creates an instance of FastGlobalSmootherFilter and executes the initialization routines. * create_fast_line_detectorCreates a smart pointer to a FastLineDetector object and initializes it * create_fast_line_detector_defCreates a smart pointer to a FastLineDetector object and initializes it * create_graph_segmentationCreates a graph-based segmentor * create_graph_segmentation_defCreates a graph-based segmentor * create_guided_filterFactory method, creates an instance of GuidedFilter and produces initialization routines. * create_quaternion_imageCreates a quaternion image. * create_rf_feature_getter * create_ric_interpolatorFactory method that creates an instance of the RICInterpolator. * create_right_matcherConvenience method to set up the matcher for computing the right-view disparity map that is required in case of filtering with confidence. * create_rle_imageCreates a run-length encoded image from a vector of runs (column begin, column end, row) * create_rle_image_defCreates a run-length encoded image from a vector of runs (column begin, column end, row) * create_scan_segmentInitializes a ScanSegment object. * create_scan_segment_defInitializes a ScanSegment object. * create_selective_search_segmentationCreates a new SelectiveSearchSegmentation class.
* create_selective_search_segmentation_strategy_colorCreates a new color-based strategy * create_selective_search_segmentation_strategy_fillCreates a new fill-based strategy * create_selective_search_segmentation_strategy_multipleCreates a new multiple strategy * create_selective_search_segmentation_strategy_multiple_1Creates a new multiple strategy and sets one substrategy * create_selective_search_segmentation_strategy_multiple_2Creates a new multiple strategy and sets two substrategies, with equal weights * create_selective_search_segmentation_strategy_multiple_3Creates a new multiple strategy and sets three substrategies, with equal weights * create_selective_search_segmentation_strategy_multiple_4Creates a new multiple strategy and sets four substrategies, with equal weights * create_selective_search_segmentation_strategy_sizeCreates a new size-based strategy * create_selective_search_segmentation_strategy_textureCreates a new texture-based strategy * create_structured_edge_detection! * create_structured_edge_detection_def! * create_superpixel_lscClass implementing the LSC (Linear Spectral Clustering) superpixels * create_superpixel_lsc_defClass implementing the LSC (Linear Spectral Clustering) superpixels * create_superpixel_seedsInitializes a SuperpixelSEEDS object. * create_superpixel_seeds_defInitializes a SuperpixelSEEDS object. * create_superpixel_slicInitialize a SuperpixelSLIC object * create_superpixel_slic_defInitialize a SuperpixelSLIC object * dilateDilates a run-length encoded binary image by using a specific structuring element. * dilate_defDilates a run-length encoded binary image by using a specific structuring element. * dt_filterSimple one-line Domain Transform filter call. If you have multiple images to filter with the same guided image then use the DTFilter interface to avoid extra computations on the initialization stage. * dt_filter_defSimple one-line Domain Transform filter call.
If you have multiple images to filter with the same guided image then use the DTFilter interface to avoid extra computations on the initialization stage. * edge_preserving_filterSmoothes an image using the Edge-Preserving filter. * erodeErodes a run-length encoded binary image by using a specific structuring element. * erode_defErodes a run-length encoded binary image by using a specific structuring element. * fast_bilateral_solver_filterSimple one-line Fast Bilateral Solver filter call. If you have multiple images to filter with the same guide then use the FastBilateralSolverFilter interface to avoid extra computations. * fast_bilateral_solver_filter_defSimple one-line Fast Bilateral Solver filter call. If you have multiple images to filter with the same guide then use the FastBilateralSolverFilter interface to avoid extra computations. * fast_global_smoother_filterSimple one-line Fast Global Smoother filter call. If you have multiple images to filter with the same guide then use the FastGlobalSmootherFilter interface to avoid extra computations. * fast_global_smoother_filter_defSimple one-line Fast Global Smoother filter call. If you have multiple images to filter with the same guide then use the FastGlobalSmootherFilter interface to avoid extra computations. * fast_hough_transformCalculates the 2D Fast Hough transform of an image. * fast_hough_transform_defCalculates the 2D Fast Hough transform of an image. * find_ellipsesFinds ellipses quickly in an image using projective invariant pruning. * find_ellipses_defFinds ellipses quickly in an image using projective invariant pruning.
* fourier_descriptorFourier descriptors for planar closed curves * fourier_descriptor_defFourier descriptors for planar closed curves * get_disparity_visFunction for creating a disparity map visualization (clamped CV_8U image) * get_disparity_vis_defFunction for creating a disparity map visualization (clamped CV_8U image) * get_structuring_elementReturns a run-length encoded structuring element of the specified size and shape. * gradient_deriche_xApplies the X Deriche filter to an image. * gradient_deriche_yApplies the Y Deriche filter to an image. * gradient_paillou_x * gradient_paillou_yApplies the Paillou filter to an image. * guided_filterSimple one-line Guided Filter call. * guided_filter_defSimple one-line Guided Filter call. * hough_point2_lineCalculates the coordinates of the line segment corresponding to a point in Hough space. * hough_point2_line_defCalculates the coordinates of the line segment corresponding to a point in Hough space. * is_rl_morphology_possibleChecks whether a custom-made structuring element can be used with run-length morphological operations. (It must consist of a continuous array of single runs per row) * joint_bilateral_filterApplies the joint bilateral filter to an image. * joint_bilateral_filter_defApplies the joint bilateral filter to an image. * l0_smoothGlobal image smoothing via L0 gradient minimization. * l0_smooth_defGlobal image smoothing via L0 gradient minimization. * morphology_exApplies a morphological operation to a run-length encoded binary image. * morphology_ex_defApplies a morphological operation to a run-length encoded binary image. * ni_black_thresholdPerforms thresholding on input images using Niblack’s technique or some of the popular variations it inspired. * ni_black_threshold_defPerforms thresholding on input images using Niblack’s technique or some of the popular variations it inspired. * paintPaints a run-length encoded binary image into an image.
* pei_lin_normalizationCalculates an affine transformation that normalizes a given image using Pei&Lin Normalization. * pei_lin_normalization_1Calculates an affine transformation that normalizes a given image using Pei&Lin Normalization. * qconjCalculates the conjugate of a quaternion image. * qdftPerforms a forward or inverse Discrete quaternion Fourier transform of a 2D quaternion array. * qmultiplyCalculates the per-element quaternion product of two arrays * qunitaryDivides each element by its modulus. * radon_transformCalculates the Radon Transform of an image. * radon_transform_defCalculates the Radon Transform of an image. * read_gtFunction for reading ground truth disparity maps. Supports basic Middlebury and MPI-Sintel formats. Note that the resulting disparity map is scaled by 16. * rolling_guidance_filterApplies the rolling guidance filter to an image. * rolling_guidance_filter_defApplies the rolling guidance filter to an image. * thinningApplies a binary blob thinning operation to achieve a skeletonization of the input image. * thinning_defApplies a binary blob thinning operation to achieve a skeletonization of the input image. * thresholdApplies a fixed-level threshold to each array element. * transform_fdTransforms a contour * transform_fd_defTransforms a contour * weighted_median_filterApplies a weighted median filter to an image. * weighted_median_filter_defApplies a weighted median filter to an image. Type Aliases --- * Boxes Module opencv::xobjdetect === Extended object detection --- Modules --- * prelude Structs --- * WBDetectorWaldBoost detector Traits --- * WBDetectorTraitMutable methods for crate::xobjdetect::WBDetector * WBDetectorTraitConstConstant methods for crate::xobjdetect::WBDetector Module opencv::xphoto === Additional photo processing algorithms --- Modules --- * prelude Structs --- * GrayworldWBGray-world white balance algorithm * LearningBasedWBMore sophisticated learning-based automatic white balance algorithm.
* SimpleWBA simple white balance algorithm that works by independently stretching each of the input image channels to the specified range. For increased robustness it ignores the top and bottom inline formula of pixel values. * TonemapDurandThis algorithm decomposes an image into two layers: a base layer and a detail layer, using a bilateral filter, and compresses the contrast of the base layer, thus preserving all the details. * WhiteBalancerThe base class for auto white balance algorithms. Enums --- * Bm3dStepsBM3D algorithm steps * InpaintTypesVarious inpainting algorithms * TransformTypesBM3D transform types Constants --- * BM3D_STEP1Execute only the first step of the algorithm * BM3D_STEP2Execute only the second step of the algorithm * BM3D_STEPALLExecute all steps of the algorithm * HAARUn-normalized Haar transform * INPAINT_FSR_BESTPerforms Frequency Selective Reconstruction (FSR). One of the two quality profiles BEST and FAST can be chosen, depending on the time available for reconstruction. See GenserPCS2018 and SeilerTIP2015 for details.
* INPAINT_FSR_FASTSee #INPAINT_FSR_BEST * INPAINT_SHIFTMAPThis algorithm searches for dominant correspondences (transformations) of image patches and tries to seamlessly fill in the area to be inpainted using these transformations Traits --- * GrayworldWBTraitMutable methods for crate::xphoto::GrayworldWB * GrayworldWBTraitConstConstant methods for crate::xphoto::GrayworldWB * LearningBasedWBTraitMutable methods for crate::xphoto::LearningBasedWB * LearningBasedWBTraitConstConstant methods for crate::xphoto::LearningBasedWB * SimpleWBTraitMutable methods for crate::xphoto::SimpleWB * SimpleWBTraitConstConstant methods for crate::xphoto::SimpleWB * TonemapDurandTraitMutable methods for crate::xphoto::TonemapDurand * TonemapDurandTraitConstConstant methods for crate::xphoto::TonemapDurand * WhiteBalancerTraitMutable methods for crate::xphoto::WhiteBalancer * WhiteBalancerTraitConstConstant methods for crate::xphoto::WhiteBalancer Functions --- * apply_channel_gainsImplements an efficient fixed-point approximation for applying channel gains, which is the last step of multiple white balance algorithms. * bm3d_denoisingPerforms image denoising using the Block-Matching and 3D-filtering algorithm http://www.cs.tut.fi/~foi/GCF-BM3D/BM3D_TIP_2007.pdf with several computational optimizations. Noise is expected to be Gaussian white noise. * bm3d_denoising_1Performs image denoising using the Block-Matching and 3D-filtering algorithm http://www.cs.tut.fi/~foi/GCF-BM3D/BM3D_TIP_2007.pdf with several computational optimizations. Noise is expected to be Gaussian white noise. * bm3d_denoising_1_defPerforms image denoising using the Block-Matching and 3D-filtering algorithm http://www.cs.tut.fi/~foi/GCF-BM3D/BM3D_TIP_2007.pdf with several computational optimizations. Noise is expected to be Gaussian white noise.
* bm3d_denoising_defPerforms image denoising using the Block-Matching and 3D-filtering algorithm http://www.cs.tut.fi/~foi/GCF-BM3D/BM3D_TIP_2007.pdf with several computational optimizations. Noise is expected to be Gaussian white noise. * create_grayworld_wbCreates an instance of GrayworldWB * create_learning_based_wbCreates an instance of LearningBasedWB * create_learning_based_wb_defCreates an instance of LearningBasedWB * create_simple_wbCreates an instance of SimpleWB * create_tonemap_durandCreates a TonemapDurand object * create_tonemap_durand_defCreates a TonemapDurand object * dct_denoisingThe function implements simple dct-based denoising * dct_denoising_defThe function implements simple dct-based denoising * inpaintThe function implements different single-image inpainting algorithms. * oil_paintingoilPainting See the book Holzmann1988 for details. * oil_painting_1oilPainting See the book Holzmann1988 for details. Macro opencv::not_opencv_branch_4 === ``` macro_rules! not_opencv_branch_4 { ($($tt:tt)*) => { ... }; } ``` Conditional compilation macro based on OpenCV branch version for usage in external crates. Examples --- Alternative import: ``` opencv::opencv_branch_4! { use opencv::imgproc::LINE_8; } opencv::not_opencv_branch_4! { use opencv::core::LINE_8; } ``` Alternative function call: ``` opencv::opencv_branch_32! { let mut cam = opencv::videoio::VideoCapture::new_default(0)?; } opencv::not_opencv_branch_32! { let mut cam = opencv::videoio::VideoCapture::new(0, videoio::CAP_ANY)?; } ``` Macro opencv::not_opencv_branch_32 === ``` macro_rules! not_opencv_branch_32 { ($($tt:tt)*) => { ... }; } ``` 👎Deprecated: OpenCV 3.2 is no longer supportedConditional compilation macro based on OpenCV branch version for usage in external crates. Examples --- Alternative import: ``` opencv::opencv_branch_4! { use opencv::imgproc::LINE_8; } opencv::not_opencv_branch_4! { use opencv::core::LINE_8; } ``` Alternative function call: ``` opencv::opencv_branch_32!
{ let mut cam = opencv::videoio::VideoCapture::new_default(0)?; } opencv::not_opencv_branch_32! { let mut cam = opencv::videoio::VideoCapture::new(0, videoio::CAP_ANY)?; } ``` Macro opencv::not_opencv_branch_34 === ``` macro_rules! not_opencv_branch_34 { ($($tt:tt)*) => { ... }; } ``` Conditional compilation macro based on OpenCV branch version for usage in external crates. Examples --- Alternative import: ``` opencv::opencv_branch_4! { use opencv::imgproc::LINE_8; } opencv::not_opencv_branch_4! { use opencv::core::LINE_8; } ``` Alternative function call: ``` opencv::opencv_branch_32! { let mut cam = opencv::videoio::VideoCapture::new_default(0)?; } opencv::not_opencv_branch_32! { let mut cam = opencv::videoio::VideoCapture::new(0, videoio::CAP_ANY)?; } ``` Macro opencv::opencv_branch_4 === ``` macro_rules! opencv_branch_4 { ($($tt:tt)*) => { ... }; } ``` Conditional compilation macro based on OpenCV branch version for usage in external crates. Examples --- Alternative import: ``` opencv::opencv_branch_4! { use opencv::imgproc::LINE_8; } opencv::not_opencv_branch_4! { use opencv::core::LINE_8; } ``` Alternative function call: ``` opencv::opencv_branch_32! { let mut cam = opencv::videoio::VideoCapture::new_default(0)?; } opencv::not_opencv_branch_32! { let mut cam = opencv::videoio::VideoCapture::new(0, videoio::CAP_ANY)?; } ``` Macro opencv::opencv_branch_32 === ``` macro_rules! opencv_branch_32 { ($($tt:tt)*) => { ... }; } ``` 👎Deprecated: OpenCV 3.2 is no longer supported. Conditional compilation macro based on OpenCV branch version for usage in external crates. Examples --- Alternative import: ``` opencv::opencv_branch_4! { use opencv::imgproc::LINE_8; } opencv::not_opencv_branch_4! { use opencv::core::LINE_8; } ``` Alternative function call: ``` opencv::opencv_branch_32! { let mut cam = opencv::videoio::VideoCapture::new_default(0)?; } opencv::not_opencv_branch_32! 
{ let mut cam = opencv::videoio::VideoCapture::new(0, videoio::CAP_ANY)?; } ``` Macro opencv::opencv_branch_34 === ``` macro_rules! opencv_branch_34 { ($($tt:tt)*) => { ... }; } ``` Conditional compilation macro based on OpenCV branch version for usage in external crates. Examples --- Alternative import: ``` opencv::opencv_branch_4! { use opencv::imgproc::LINE_8; } opencv::not_opencv_branch_4! { use opencv::core::LINE_8; } ``` Alternative function call: ``` opencv::opencv_branch_32! { let mut cam = opencv::videoio::VideoCapture::new_default(0)?; } opencv::not_opencv_branch_32! { let mut cam = opencv::videoio::VideoCapture::new(0, videoio::CAP_ANY)?; } ``` Struct opencv::Error === ``` pub struct Error { pub code: i32, pub message: String, } ``` Fields --- `code: i32` `message: String` Implementations --- ### impl Error #### pub fn new(code: i32, message: impl Into<String>) -> Self Trait Implementations --- ### impl Debug for Error #### fn fmt(&self, f: &mut Formatter<'_>) -> Result Formats the value using the given formatter. ### impl Display for Error #### fn fmt(&self, f: &mut Formatter<'_>) -> Result Formats the value using the given formatter. ### impl Error for Error 1.30.0 · #### fn source(&self) -> Option<&(dyn Error + 'static)> The lower-level source of this error, if any. 1.0.0 · #### fn description(&self) -> &str 👎Deprecated since 1.42.0: use the Display impl or to_string() 1.0.0 · #### fn cause(&self) -> Option<&dyn Error> 👎Deprecated since 1.33.0: replaced by Error::source, which can support downcasting #### fn provide<'a>(&'a self, request: &mut Request<'a>) 🔬This is a nightly-only experimental API. (`error_generic_member_access`) Provides type based access to context intended for error reports. 
### impl From<NulError> for Error #### fn from(_: NulError) -> Self Converts to this type from the input type. ### impl From<TryFromCharError> for Error #### fn from(_: TryFromCharError) -> Self Converts to this type from the input type. Auto Trait Implementations --- ### impl RefUnwindSafe for Error ### impl Send for Error ### impl Sync for Error ### impl Unpin for Error ### impl UnwindSafe for Error Blanket Implementations --- ### impl<T> Any for T where T: 'static + ?Sized, #### fn type_id(&self) -> TypeId Gets the `TypeId` of `self`. ### impl<T> Borrow<T> for T where T: ?Sized, #### fn borrow(&self) -> &T Immutably borrows from an owned value. ### impl<T> BorrowMut<T> for T where T: ?Sized, #### fn borrow_mut(&mut self) -> &mut T Mutably borrows from an owned value. ### impl<T> From<T> for T #### fn from(t: T) -> T Returns the argument unchanged. ### impl<T, U> Into<U> for T where U: From<T>, #### fn into(self) -> U Calls `U::from(self)`. That is, this conversion is whatever the implementation of `From<T> for U` chooses to do. ### impl<T> ToString for T where T: Display + ?Sized, #### default fn to_string(&self) -> String Converts the given value to a `String`. ### impl<T, U> TryFrom<U> for T where U: Into<T>, #### type Error = Infallible The type returned in the event of a conversion error. #### fn try_from(value: U) -> Result<T, <T as TryFrom<U>>::Error> Performs the conversion. ### impl<T, U> TryInto<U> for T where U: TryFrom<T>, #### type Error = <U as TryFrom<T>>::Error The type returned in the event of a conversion error. #### fn try_into(self) -> Result<U, <U as TryFrom<T>>::Error> Performs the conversion. Type Alias opencv::Result === ``` pub type Result<T, E = Error> = Result<T, E>; ``` Aliased Type --- ``` enum Result<T, E = Error> { Ok(T), Err(E), } ``` Variants --- 1.0.0 · ### Ok(T) Contains the success value 1.0.0 · ### Err(E) Contains the error value
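The `Error` type and the defaulted `Result` alias above follow a common Rust pattern: the `E = Error` default parameter lets function signatures write `Result<T>` while still allowing any other error type to be named explicitly. A self-contained sketch of the same shape (plain `std` only, not the opencv crate itself; the `parse_positive` function and its error codes are illustrative):

```rust
use std::fmt;

// Mirrors the shape of opencv::Error shown above.
#[derive(Debug)]
pub struct Error {
    pub code: i32,
    pub message: String,
}

impl Error {
    pub fn new(code: i32, message: impl Into<String>) -> Self {
        Self { code, message: message.into() }
    }
}

impl fmt::Display for Error {
    fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
        write!(f, "error {}: {}", self.code, self.message)
    }
}

// Default error parameter: `Result<T>` means `Result<T, Error>`.
pub type Result<T, E = Error> = std::result::Result<T, E>;

// Example function using the bare `Result<T>` form.
fn parse_positive(s: &str) -> Result<i32> {
    let n: i32 = s.parse().map_err(|_| Error::new(-1, "not a number"))?;
    if n > 0 { Ok(n) } else { Err(Error::new(-2, "not positive")) }
}

fn main() {
    assert_eq!(parse_positive("42").unwrap(), 42);
    assert!(parse_positive("-5").is_err());
    // The Display impl renders code and message together.
    println!("{}", parse_positive("abc").unwrap_err());
}
```

Because the alias only defaults the error parameter, `Result<T, io::Error>` and similar remain valid at any call site.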
@ckeditor/ckeditor5-build-multi-root
npm
JavaScript
[CKEditor 5 multi-root editor build](#ckeditor5-multi-root-editor-build) === The multi-root editor build for CKEditor 5. Read more about the [multi-root editor build](https://ckeditor.com/docs/ckeditor5/latest/installation/getting-started/predefined-builds.html#multi-root-editor) and see the [demo](https://ckeditor.com/docs/ckeditor5/latest/examples/builds/multi-root-editor.html). [Documentation](#documentation) --- See: * [Installation](https://ckeditor.com/docs/ckeditor5/latest/installation/getting-started/quick-start.html) for how to install this package and what it contains. * [Editor lifecycle](https://ckeditor.com/docs/ckeditor5/latest/installation/getting-started/editor-lifecycle.html) for how to create an editor and interact with it. * [Configuration](https://ckeditor.com/docs/ckeditor5/latest/installation/getting-started/configuration.html) for how to configure the editor. * [Creating custom builds](https://ckeditor.com/docs/ckeditor5/latest/installation/getting-started/quick-start.html#building-the-editor-from-source) for how to customize the build (configure and rebuild the editor bundle). 
[Quick start](#quick-start) --- First, install the build from npm: ``` npm install --save @ckeditor/ckeditor5-build-multi-root ``` And use it in your website: ``` <div id="toolbar"></div> <div id="header"> <p>Content for header.</p> </div> <div id="content"> <p>Main editor content.</p> </div> <div class="boxes"> <div class="box box-left editor"> <div id="left-side"> <p>Content for left-side box.</p> </div> </div> <div class="box box-right editor"> <div id="right-side"> <p>Content for right-side box.</p> </div> </div> </div> <script src="./node_modules/@ckeditor/ckeditor5-build-multi-root/build/ckeditor.js"></script> <script> MultiRootEditor .create( { header: document.getElementById( 'header' ), content: document.getElementById( 'content' ), leftSide: document.getElementById( 'left-side' ), rightSide: document.getElementById( 'right-side' ) } ) .then( editor => { window.editor = editor; // Append toolbar to a proper container. const toolbarContainer = document.getElementById( 'toolbar' ); toolbarContainer.appendChild( editor.ui.view.toolbar.element ); } ) .catch( error => { console.error( 'There was a problem initializing the editor.', error ); } ); </script> ``` Or in your JavaScript application: ``` import MultiRootEditor from '@ckeditor/ckeditor5-build-multi-root'; // Or using the CommonJS version: // const MultiRootEditor = require( '@ckeditor/ckeditor5-build-multi-root' ); MultiRootEditor .create( { header: document.getElementById( 'header' ), content: document.getElementById( 'content' ), leftSide: document.getElementById( 'left-side' ), rightSide: document.getElementById( 'right-side' ) } ) .then( editor => { window.editor = editor; // Append toolbar to a proper container. 
const toolbarContainer = document.getElementById( 'toolbar' ); toolbarContainer.appendChild( editor.ui.view.toolbar.element ); } ) .catch( error => { console.error( 'There was a problem initializing the editor.', error ); } ); ``` **Note:** If you are planning to integrate CKEditor 5 deeply into your application, it is more convenient and recommended to install and import the source modules directly (as is done in `ckeditor.js`). Read more in the [Advanced setup guide](https://ckeditor.com/docs/ckeditor5/latest/installation/advanced/advanced-setup.html). [License](#license) --- Licensed under the terms of [GNU General Public License Version 2 or later](http://www.gnu.org/licenses/gpl.html). For full details about the license, please check the `LICENSE.md` file or <https://ckeditor.com/legal/ckeditor-oss-license>. Readme --- ### Keywords * ckeditor5-build * ckeditor * ckeditor5 * ckeditor 5 * wysiwyg * rich text * editor * html * contentEditable * editing * operational transformation * ot * collaboration * collaborative * real-time * framework
@thi.ng/rdom-canvas
npm
JavaScript
This project is part of the [@thi.ng/umbrella](https://github.com/thi-ng/umbrella/) monorepo and anti-framework. * [About](#about) + [General usage](#general-usage) + [Control attributes](#control-attributes) * [Status](#status) * [Related packages](#related-packages) * [Installation](#installation) * [Dependencies](#dependencies) * [Usage examples](#usage-examples) * [API](#api) * [Authors](#authors) * [License](#license) [About](#about) --- [@thi.ng/rdom](https://github.com/thi-ng/umbrella/tree/develop/packages/rdom) component wrapper for [@thi.ng/hiccup-canvas](https://github.com/thi-ng/umbrella/tree/develop/packages/hiccup-canvas) and declarative canvas drawing. Please consult these packages' READMEs for further background information... ### [General usage](#general-usage) As with most thi.ng/rdom components, the state (aka geometry/scenegraph) for the canvas component is sourced from a [thi.ng/rstream](https://github.com/thi-ng/umbrella/tree/develop/packages/rstream) subscription. The canvas redraws every time that subscription delivers a new value. The size of the canvas can be given as a subscription too and, if so, will also automatically trigger resizing of the canvas. The geometry to be rendered to the canvas is expressed as [thi.ng/hiccup](https://github.com/thi-ng/umbrella/tree/develop/packages/hiccup), specifically the flavor used by [thi.ng/hiccup-canvas](https://github.com/thi-ng/umbrella/tree/develop/packages/hiccup-canvas), which (not just coincidentally) is the same as that used by [thi.ng/geom](https://github.com/thi-ng/umbrella/tree/develop/packages/geom) shapes. 
``` import { circle, group } from "@thi.ng/geom"; import { $canvas } from "@thi.ng/rdom-canvas"; import { fromRAF } from "@thi.ng/rstream"; import { repeatedly } from "@thi.ng/transducers"; // create geometry stream/subscription const geo = fromRAF().map((t) => // shape group w/ attribs (also see section in readme) group({ __background: "#0ff" }, [ // create 10 circles ...repeatedly( (i) => circle( [ Math.sin(t * 0.01 + i * 0.5) * 150 + 300, Math.sin(t * 0.03 + i * 0.5) * 150 + 300 ], 50, // colors can be given as RGBA vectors or CSS { fill: [i * 0.1, 0, i * 0.05] } ), 10 ) ]) ); // create & mount canvas component (w/ fixed size) $canvas(geo, [600, 600]).mount(document.body); ``` ### [Control attributes](#control-attributes) The root shape/group supports the following special attributes: * `__background`: background color. If given, fills the canvas with the given color before drawing * `__clear`: clear background flag. If true, clears the canvas before drawing Also see relevant section in the [thi.ng/hiccup-canvas README](https://github.com/thi-ng/umbrella/blob/develop/packages/hiccup-canvas/README.md#special-attributes)... 
[Status](#status) --- **ALPHA** - bleeding edge / work-in-progress [Search or submit any issues for this package](https://github.com/thi-ng/umbrella/issues?q=%5Brdom-canvas%5D+in%3Atitle) [Related packages](#related-packages) --- * [@thi.ng/hiccup-canvas](https://github.com/thi-ng/umbrella/tree/develop/packages/hiccup-canvas) - Hiccup shape tree renderer for vanilla Canvas 2D contexts * [@thi.ng/hiccup-svg](https://github.com/thi-ng/umbrella/tree/develop/packages/hiccup-svg) - SVG element functions for [@thi.ng/hiccup](https://github.com/thi-ng/umbrella/tree/develop/packages/hiccup) & related tooling * [@thi.ng/geom](https://github.com/thi-ng/umbrella/tree/develop/packages/geom) - Functional, polymorphic API for 2D geometry types & SVG generation * [@thi.ng/scenegraph](https://github.com/thi-ng/umbrella/tree/develop/packages/scenegraph) - Extensible 2D/3D scene graph with [@thi.ng/hiccup-canvas](https://github.com/thi-ng/umbrella/tree/develop/packages/hiccup-canvas) support [Installation](#installation) --- ``` yarn add @thi.ng/rdom-canvas ``` ES module import: ``` <script type="module" src="https://cdn.skypack.dev/@thi.ng/rdom-canvas"></script> ``` [Skypack documentation](https://docs.skypack.dev/) For Node.js REPL: ``` const rdomCanvas = await import("@thi.ng/rdom-canvas"); ``` Package sizes (brotli'd, pre-treeshake): ESM: 596 bytes [Dependencies](#dependencies) --- * [@thi.ng/adapt-dpi](https://github.com/thi-ng/umbrella/tree/develop/packages/adapt-dpi) * [@thi.ng/api](https://github.com/thi-ng/umbrella/tree/develop/packages/api) * [@thi.ng/associative](https://github.com/thi-ng/umbrella/tree/develop/packages/associative) * [@thi.ng/checks](https://github.com/thi-ng/umbrella/tree/develop/packages/checks) * [@thi.ng/hiccup-canvas](https://github.com/thi-ng/umbrella/tree/develop/packages/hiccup-canvas) * [@thi.ng/rdom](https://github.com/thi-ng/umbrella/tree/develop/packages/rdom) * 
[@thi.ng/rstream](https://github.com/thi-ng/umbrella/tree/develop/packages/rstream) [Usage examples](#usage-examples) --- Several demos in this repo's [/examples](https://github.com/thi-ng/umbrella/tree/develop/examples) directory are using this package. A selection: | Screenshot | Description | Live demo | Source | | --- | --- | --- | --- | | | Interactive visualization of closest points on ellipses | [Demo](https://demo.thi.ng/umbrella/ellipse-proximity/) | [Source](https://github.com/thi-ng/umbrella/tree/develop/examples/ellipse-proximity) | | | 2.5D hidden line visualization of digital elevation files (DEM) | [Demo](https://demo.thi.ng/umbrella/geom-terrain-viz/) | [Source](https://github.com/thi-ng/umbrella/tree/develop/examples/geom-terrain-viz) | | | Quasi-random lattice generator | [Demo](https://demo.thi.ng/umbrella/quasi-lattice/) | [Source](https://github.com/thi-ng/umbrella/tree/develop/examples/quasi-lattice) | | | Minimal rdom-canvas animation | [Demo](https://demo.thi.ng/umbrella/rdom-canvas-basics/) | [Source](https://github.com/thi-ng/umbrella/tree/develop/examples/rdom-canvas-basics) | | | rdom & hiccup-canvas interop test | [Demo](https://demo.thi.ng/umbrella/rdom-lissajous/) | [Source](https://github.com/thi-ng/umbrella/tree/develop/examples/rdom-lissajous) | | | Fitting, transforming & plotting 10k data points per frame using SIMD | [Demo](https://demo.thi.ng/umbrella/simd-plot/) | [Source](https://github.com/thi-ng/umbrella/tree/develop/examples/simd-plot) | | | Multi-layer vectorization & dithering of bitmap images | [Demo](https://demo.thi.ng/umbrella/trace-bitmap/) | [Source](https://github.com/thi-ng/umbrella/tree/develop/examples/trace-bitmap) | [API](#api) --- [Generated API docs](https://docs.thi.ng/umbrella/rdom-canvas/) TODO [Authors](#authors) --- * [<NAME>](https://thi.ng) If this project contributes to an academic publication, please cite it as: ``` @misc{thing-rdom-canvas, title = "@thi.ng/rdom-canvas", author = "<NAME>", note = 
"https://thi.ng/rdom-canvas", year = 2020 } ``` [License](#license) --- © 2020 - 2023 <NAME> // Apache License 2.0 Readme --- ### Keywords * animation * browser * canvas * component * declarative * graphics * hiccup * scenegraph * typescript * ui * wrapper
GRMustache
cocoapods
Objective-C
grmustache.swift === Welcome to the documentation for the grmustache.swift framework! In this guide, we will explore the features, usage, and advantages of using grmustache.swift in your Swift projects. What is grmustache.swift? --- grmustache.swift is a powerful and lightweight implementation of the Mustache templating language for Swift. Mustache is a logic-less template system that allows you to define templates with placeholders that are then replaced with actual values at runtime. This framework provides an intuitive way to render Mustache templates, making it easy to generate dynamic content such as HTML, emails, source code, configuration files, and more. Installation --- To install grmustache.swift, follow these steps: * Open your Xcode project * Go to File > Swift Packages > Add Package Dependency * Enter the repository URL: `https://github.com/your-repo/grmustache.swift` * Choose the latest version of grmustache.swift * Click Next and follow the prompts to complete the installation Usage --- grmustache.swift offers a simple and elegant API for rendering Mustache templates. Here’s an example of how to use it: ``` import GRMustache // Create a Mustache template let template = try Template(string: "Hello, {{name}}!") // Define a context let context = ["name": "<NAME>"] // Render the template with the context let rendered = try template.render(with: Box(context)) print(rendered) // Output: Hello, <NAME>! 
``` Advantages --- grmustache.swift offers several advantages over other templating solutions: * Lightweight and efficient * Excellent performance * Supports advanced Mustache features like lambdas, filters, and partials * Extensive documentation and active community support * Compatible with macOS, iOS, tvOS, and watchOS Additional Resources --- * [grmustache.swift GitHub Repository](https://github.com/your-repo/grmustache.swift) * [Mustache Documentation](https://mustache.github.io/mustache.5.html) * [Swift Programming Language](https://www.swift.org) That’s all you need to know to get started with grmustache.swift. Happy templating!
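To make the "logic-less" claim above concrete: Mustache expresses lists and conditionals through sections rather than explicit control flow. A generic illustration of section and inverted-section syntax (standard Mustache, independent of any particular framework; the `users` data is hypothetical):

```mustache
Template:
  {{#users}}
  - {{name}}
  {{/users}}
  {{^users}}
  No users found.
  {{/users}}

Data:
  { "users": [ { "name": "Ada" }, { "name": "Alan" } ] }

Output:
  - Ada
  - Alan
```

A `{{#key}}…{{/key}}` section renders its block once per item when the value is a list, and an inverted `{{^key}}…{{/key}}` section renders only when the value is empty or falsey — so the "No users found." branch is skipped here.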
atsame54_xpro
rust
Rust
Module atsame54_xpro::pins === SAM E54 XPlained Pro Pin Definitions Structs --- * PinsBSP replacement for the HAL `Pins` type Constants --- * ADC0_ANALOG0_IDDynPinId for the `Adc0Analog0` alias. * ADC0_ANALOG0_MODEDynPinMode for the `Adc0Analog0` alias. * ADC0_ANALOG1_IDDynPinId for the `Adc0Analog1` alias. * ADC0_ANALOG1_MODEDynPinMode for the `Adc0Analog1` alias. * ADC0_ANALOG2_IDDynPinId for the `Adc0Analog2` alias. * ADC0_ANALOG2_MODEDynPinMode for the `Adc0Analog2` alias. * ADC0_ANALOG3_IDDynPinId for the `Adc0Analog3` alias. * ADC0_ANALOG3_MODEDynPinMode for the `Adc0Analog3` alias. * ADC0_ANALOG4_IDDynPinId for the `Adc0Analog4` alias. * ADC0_ANALOG4_MODEDynPinMode for the `Adc0Analog4` alias. * ADC0_ANALOG5_IDDynPinId for the `Adc0Analog5` alias. * ADC0_ANALOG5_MODEDynPinMode for the `Adc0Analog5` alias. * ADC0_ANALOG6_IDDynPinId for the `Adc0Analog6` alias. * ADC0_ANALOG6_MODEDynPinMode for the `Adc0Analog6` alias. * ADC0_ANALOG7_IDDynPinId for the `Adc0Analog7` alias. * ADC0_ANALOG7_MODEDynPinMode for the `Adc0Analog7` alias. * ADC1_ANALOG0_IDDynPinId for the `Adc1Analog0` alias. * ADC1_ANALOG0_MODEDynPinMode for the `Adc1Analog0` alias. * ADC1_ANALOG1_IDDynPinId for the `Adc1Analog1` alias. * ADC1_ANALOG1_MODEDynPinMode for the `Adc1Analog1` alias. * ADC1_ANALOG4_IDDynPinId for the `Adc1Analog4` alias. * ADC1_ANALOG4_MODEDynPinMode for the `Adc1Analog4` alias. * ADC1_ANALOG5_IDDynPinId for the `Adc1Analog5` alias. * ADC1_ANALOG5_MODEDynPinMode for the `Adc1Analog5` alias. * ADC1_ANALOG6_IDDynPinId for the `Adc1Analog6` alias. * ADC1_ANALOG6_MODEDynPinMode for the `Adc1Analog6` alias. * ADC1_ANALOG7_IDDynPinId for the `Adc1Analog7` alias. * ADC1_ANALOG7_MODEDynPinMode for the `Adc1Analog7` alias. * ADC1_ANALOG8_IDDynPinId for the `Adc1Analog8` alias. * ADC1_ANALOG8_MODEDynPinMode for the `Adc1Analog8` alias. * ADC1_ANALOG9_IDDynPinId for the `Adc1Analog9` alias. * ADC1_ANALOG9_MODEDynPinMode for the `Adc1Analog9` alias. 
* ADC1_ANALOG14_IDDynPinId for the `Adc1Analog14` alias. * ADC1_ANALOG14_MODEDynPinMode for the `Adc1Analog14` alias. * ADC1_ANALOG15_IDDynPinId for the `Adc1Analog15` alias. * ADC1_ANALOG15_MODEDynPinMode for the `Adc1Analog15` alias. * ADC_DAC_HEADER_IDDynPinId for the `AdcDacHeader` alias. * ADC_DAC_HEADER_MODEDynPinMode for the `AdcDacHeader` alias. * ATA6561_RX_IDDynPinId for the `Ata6561Rx` alias. * ATA6561_RX_MODEDynPinMode for the `Ata6561Rx` alias. * ATA6561_STANDBY_IDDynPinId for the `Ata6561Standby` alias. * ATA6561_STANDBY_MODEDynPinMode for the `Ata6561Standby` alias. * ATA6561_TX_IDDynPinId for the `Ata6561Tx` alias. * ATA6561_TX_MODEDynPinMode for the `Ata6561Tx` alias. * BUTTON_IDDynPinId for the `Button` alias. * BUTTON_MODEDynPinMode for the `Button` alias. * DGI_I2C_SCL_IDDynPinId for the `DgiI2cScl` alias. * DGI_I2C_SCL_MODEDynPinMode for the `DgiI2cScl` alias. * DGI_I2C_SDA_IDDynPinId for the `DgiI2cSda` alias. * DGI_I2C_SDA_MODEDynPinMode for the `DgiI2cSda` alias. * DGI_SPI_CS_IDDynPinId for the `DgiSpiCs` alias. * DGI_SPI_CS_MODEDynPinMode for the `DgiSpiCs` alias. * DGI_SPI_MISO_IDDynPinId for the `DgiSpiMiso` alias. * DGI_SPI_MISO_MODEDynPinMode for the `DgiSpiMiso` alias. * DGI_SPI_MOSI_IDDynPinId for the `DgiSpiMosi` alias. * DGI_SPI_MOSI_MODEDynPinMode for the `DgiSpiMosi` alias. * DGI_SPI_SCK_IDDynPinId for the `DgiSpiSck` alias. * DGI_SPI_SCK_MODEDynPinMode for the `DgiSpiSck` alias. * EDBG_GPIO0_IN_IDDynPinId for the `EdbgGpio0In` alias. * EDBG_GPIO0_IN_MODEDynPinMode for the `EdbgGpio0In` alias. * EDBG_GPIO0_OUT_IDDynPinId for the `EdbgGpio0Out` alias. * EDBG_GPIO0_OUT_MODEDynPinMode for the `EdbgGpio0Out` alias. * EDBG_GPIO1_IN_IDDynPinId for the `EdbgGpio1In` alias. * EDBG_GPIO1_IN_MODEDynPinMode for the `EdbgGpio1In` alias. * EDBG_GPIO1_OUT_IDDynPinId for the `EdbgGpio1Out` alias. * EDBG_GPIO1_OUT_MODEDynPinMode for the `EdbgGpio1Out` alias. * EDBG_GPIO2_IN_IDDynPinId for the `EdbgGpio2In` alias. 
* EDBG_GPIO2_IN_MODEDynPinMode for the `EdbgGpio2In` alias. * EDBG_GPIO2_OUT_IDDynPinId for the `EdbgGpio2Out` alias. * EDBG_GPIO2_OUT_MODEDynPinMode for the `EdbgGpio2Out` alias. * EDBG_UART_RX_IDDynPinId for the `EdbgUartRx` alias. * EDBG_UART_RX_MODEDynPinMode for the `EdbgUartRx` alias. * EDBG_UART_TX_IDDynPinId for the `EdbgUartTx` alias. * EDBG_UART_TX_MODEDynPinMode for the `EdbgUartTx` alias. * ETH_CRS_DV_IDDynPinId for the `EthCrsDv` alias. * ETH_CRS_DV_MODEDynPinMode for the `EthCrsDv` alias. * ETH_GMDIO_IDDynPinId for the `EthGmdio` alias. * ETH_GMDIO_MODEDynPinMode for the `EthGmdio` alias. * ETH_INTERRUPT_IDDynPinId for the `EthInterrupt` alias. * ETH_INTERRUPT_MODEDynPinMode for the `EthInterrupt` alias. * ETH_MDC_IDDynPinId for the `EthMdc` alias. * ETH_MDC_MODEDynPinMode for the `EthMdc` alias. * ETH_REF_CLK_IDDynPinId for the `EthRefClk` alias. * ETH_REF_CLK_MODEDynPinMode for the `EthRefClk` alias. * ETH_RESET_IDDynPinId for the `EthReset` alias. * ETH_RESET_MODEDynPinMode for the `EthReset` alias. * ETH_RXD0_IDDynPinId for the `EthRxd0` alias. * ETH_RXD0_MODEDynPinMode for the `EthRxd0` alias. * ETH_RXD1_IDDynPinId for the `EthRxd1` alias. * ETH_RXD1_MODEDynPinMode for the `EthRxd1` alias. * ETH_RXER_IDDynPinId for the `EthRxer` alias. * ETH_RXER_MODEDynPinMode for the `EthRxer` alias. * ETH_TXD0_IDDynPinId for the `EthTxd0` alias. * ETH_TXD0_MODEDynPinMode for the `EthTxd0` alias. * ETH_TXD1_IDDynPinId for the `EthTxd1` alias. * ETH_TXD1_MODEDynPinMode for the `EthTxd1` alias. * ETH_TXEN_IDDynPinId for the `EthTxen` alias. * ETH_TXEN_MODEDynPinMode for the `EthTxen` alias. * EXT1_ADC_MINUS_IDDynPinId for the `Ext1AdcMinus` alias. * EXT1_ADC_MINUS_MODEDynPinMode for the `Ext1AdcMinus` alias. * EXT1_ADC_PLUS_IDDynPinId for the `Ext1AdcPlus` alias. * EXT1_ADC_PLUS_MODEDynPinMode for the `Ext1AdcPlus` alias. * EXT1_GPIO1_IN_IDDynPinId for the `Ext1Gpio1In` alias. * EXT1_GPIO1_IN_MODEDynPinMode for the `Ext1Gpio1In` alias. 
* EXT1_GPIO1_OUT_IDDynPinId for the `Ext1Gpio1Out` alias. * EXT1_GPIO1_OUT_MODEDynPinMode for the `Ext1Gpio1Out` alias. * EXT1_GPIO2_IN_IDDynPinId for the `Ext1Gpio2In` alias. * EXT1_GPIO2_IN_MODEDynPinMode for the `Ext1Gpio2In` alias. * EXT1_GPIO2_OUT_IDDynPinId for the `Ext1Gpio2Out` alias. * EXT1_GPIO2_OUT_MODEDynPinMode for the `Ext1Gpio2Out` alias. * EXT1_I2C_SCL_IDDynPinId for the `Ext1I2cScl` alias. * EXT1_I2C_SCL_MODEDynPinMode for the `Ext1I2cScl` alias. * EXT1_I2C_SDA_IDDynPinId for the `Ext1I2cSda` alias. * EXT1_I2C_SDA_MODEDynPinMode for the `Ext1I2cSda` alias. * EXT1_IRQ_GPIO_IN_IDDynPinId for the `Ext1IrqGpioIn` alias. * EXT1_IRQ_GPIO_IN_MODEDynPinMode for the `Ext1IrqGpioIn` alias. * EXT1_IRQ_GPIO_OUT_IDDynPinId for the `Ext1IrqGpioOut` alias. * EXT1_IRQ_GPIO_OUT_MODEDynPinMode for the `Ext1IrqGpioOut` alias. * EXT1_PWM_MINUS_IDDynPinId for the `Ext1PwmMinus` alias. * EXT1_PWM_MINUS_MODEDynPinMode for the `Ext1PwmMinus` alias. * EXT1_PWM_PLUS_IDDynPinId for the `Ext1PwmPlus` alias. * EXT1_PWM_PLUS_MODEDynPinMode for the `Ext1PwmPlus` alias. * EXT1_SPI_CS_A_IDDynPinId for the `Ext1SpiCsA` alias. * EXT1_SPI_CS_A_MODEDynPinMode for the `Ext1SpiCsA` alias. * EXT1_SPI_CS_B_IDDynPinId for the `Ext1SpiCsB` alias. * EXT1_SPI_CS_B_MODEDynPinMode for the `Ext1SpiCsB` alias. * EXT1_SPI_MISO_IDDynPinId for the `Ext1SpiMiso` alias. * EXT1_SPI_MISO_MODEDynPinMode for the `Ext1SpiMiso` alias. * EXT1_SPI_MOSI_IDDynPinId for the `Ext1SpiMosi` alias. * EXT1_SPI_MOSI_MODEDynPinMode for the `Ext1SpiMosi` alias. * EXT1_SPI_SCK_IDDynPinId for the `Ext1SpiSck` alias. * EXT1_SPI_SCK_MODEDynPinMode for the `Ext1SpiSck` alias. * EXT1_UART_CTS_IDDynPinId for the `Ext1UartCts` alias. * EXT1_UART_CTS_MODEDynPinMode for the `Ext1UartCts` alias. * EXT1_UART_RTS_IDDynPinId for the `Ext1UartRts` alias. * EXT1_UART_RTS_MODEDynPinMode for the `Ext1UartRts` alias. * EXT1_UART_RX_IDDynPinId for the `Ext1UartRx` alias. * EXT1_UART_RX_MODEDynPinMode for the `Ext1UartRx` alias. 
* EXT1_UART_TX_IDDynPinId for the `Ext1UartTx` alias. * EXT1_UART_TX_MODEDynPinMode for the `Ext1UartTx` alias. * EXT2_ADC_MINUS_IDDynPinId for the `Ext2AdcMinus` alias. * EXT2_ADC_MINUS_MODEDynPinMode for the `Ext2AdcMinus` alias. * EXT2_ADC_PLUS_IDDynPinId for the `Ext2AdcPlus` alias. * EXT2_ADC_PLUS_MODEDynPinMode for the `Ext2AdcPlus` alias. * EXT2_GPIO1_IN_IDDynPinId for the `Ext2Gpio1In` alias. * EXT2_GPIO1_IN_MODEDynPinMode for the `Ext2Gpio1In` alias. * EXT2_GPIO1_OUT_IDDynPinId for the `Ext2Gpio1Out` alias. * EXT2_GPIO1_OUT_MODEDynPinMode for the `Ext2Gpio1Out` alias. * EXT2_GPIO2_IN_IDDynPinId for the `Ext2Gpio2In` alias. * EXT2_GPIO2_IN_MODEDynPinMode for the `Ext2Gpio2In` alias. * EXT2_GPIO2_OUT_IDDynPinId for the `Ext2Gpio2Out` alias. * EXT2_GPIO2_OUT_MODEDynPinMode for the `Ext2Gpio2Out` alias. * EXT2_I2C_SCL_IDDynPinId for the `Ext2I2cScl` alias. * EXT2_I2C_SCL_MODEDynPinMode for the `Ext2I2cScl` alias. * EXT2_I2C_SDA_IDDynPinId for the `Ext2I2cSda` alias. * EXT2_I2C_SDA_MODEDynPinMode for the `Ext2I2cSda` alias. * EXT2_IRQ_GPIO_IN_IDDynPinId for the `Ext2IrqGpioIn` alias. * EXT2_IRQ_GPIO_IN_MODEDynPinMode for the `Ext2IrqGpioIn` alias. * EXT2_IRQ_GPIO_OUT_IDDynPinId for the `Ext2IrqGpioOut` alias. * EXT2_IRQ_GPIO_OUT_MODEDynPinMode for the `Ext2IrqGpioOut` alias. * EXT2_PWM_MINUS_IDDynPinId for the `Ext2PwmMinus` alias. * EXT2_PWM_MINUS_MODEDynPinMode for the `Ext2PwmMinus` alias. * EXT2_PWM_PLUS_IDDynPinId for the `Ext2PwmPlus` alias. * EXT2_PWM_PLUS_MODEDynPinMode for the `Ext2PwmPlus` alias. * EXT2_SPI_CS_A_IDDynPinId for the `Ext2SpiCsA` alias. * EXT2_SPI_CS_A_MODEDynPinMode for the `Ext2SpiCsA` alias. * EXT2_SPI_CS_B_IDDynPinId for the `Ext2SpiCsB` alias. * EXT2_SPI_CS_B_MODEDynPinMode for the `Ext2SpiCsB` alias. * EXT2_SPI_MISO_IDDynPinId for the `Ext2SpiMiso` alias. * EXT2_SPI_MISO_MODEDynPinMode for the `Ext2SpiMiso` alias. * EXT2_SPI_MOSI_IDDynPinId for the `Ext2SpiMosi` alias. * EXT2_SPI_MOSI_MODEDynPinMode for the `Ext2SpiMosi` alias. 
* EXT2_SPI_SCK_IDDynPinId for the `Ext2SpiSck` alias. * EXT2_SPI_SCK_MODEDynPinMode for the `Ext2SpiSck` alias. * EXT2_UART_RX_IDDynPinId for the `Ext2UartRx` alias. * EXT2_UART_RX_MODEDynPinMode for the `Ext2UartRx` alias. * EXT2_UART_TX_IDDynPinId for the `Ext2UartTx` alias. * EXT2_UART_TX_MODEDynPinMode for the `Ext2UartTx` alias. * EXT3_ADC_MINUS_IDDynPinId for the `Ext3AdcMinus` alias. * EXT3_ADC_MINUS_MODEDynPinMode for the `Ext3AdcMinus` alias. * EXT3_ADC_PLUS_IDDynPinId for the `Ext3AdcPlus` alias. * EXT3_ADC_PLUS_MODEDynPinMode for the `Ext3AdcPlus` alias. * EXT3_GPIO1_IN_IDDynPinId for the `Ext3Gpio1In` alias. * EXT3_GPIO1_IN_MODEDynPinMode for the `Ext3Gpio1In` alias. * EXT3_GPIO1_OUT_IDDynPinId for the `Ext3Gpio1Out` alias. * EXT3_GPIO1_OUT_MODEDynPinMode for the `Ext3Gpio1Out` alias. * EXT3_GPIO2_IN_IDDynPinId for the `Ext3Gpio2In` alias. * EXT3_GPIO2_IN_MODEDynPinMode for the `Ext3Gpio2In` alias. * EXT3_GPIO2_OUT_IDDynPinId for the `Ext3Gpio2Out` alias. * EXT3_GPIO2_OUT_MODEDynPinMode for the `Ext3Gpio2Out` alias. * EXT3_I2C_SCL_IDDynPinId for the `Ext3I2cScl` alias. * EXT3_I2C_SCL_MODEDynPinMode for the `Ext3I2cScl` alias. * EXT3_I2C_SDA_IDDynPinId for the `Ext3I2cSda` alias. * EXT3_I2C_SDA_MODEDynPinMode for the `Ext3I2cSda` alias. * EXT3_IRQ_GPIO_IN_IDDynPinId for the `Ext3IrqGpioIn` alias. * EXT3_IRQ_GPIO_IN_MODEDynPinMode for the `Ext3IrqGpioIn` alias. * EXT3_IRQ_GPIO_OUT_IDDynPinId for the `Ext3IrqGpioOut` alias. * EXT3_IRQ_GPIO_OUT_MODEDynPinMode for the `Ext3IrqGpioOut` alias. * EXT3_PWM_MINUS_IDDynPinId for the `Ext3PwmMinus` alias. * EXT3_PWM_MINUS_MODEDynPinMode for the `Ext3PwmMinus` alias. * EXT3_PWM_PLUS_IDDynPinId for the `Ext3PwmPlus` alias. * EXT3_PWM_PLUS_MODEDynPinMode for the `Ext3PwmPlus` alias. * EXT3_SPI_CS_A_IDDynPinId for the `Ext3SpiCsA` alias. * EXT3_SPI_CS_A_MODEDynPinMode for the `Ext3SpiCsA` alias. * EXT3_SPI_CS_B_IDDynPinId for the `Ext3SpiCsB` alias. * EXT3_SPI_CS_B_MODEDynPinMode for the `Ext3SpiCsB` alias. 
* EXT3_SPI_MISO_IDDynPinId for the `Ext3SpiMiso` alias. * EXT3_SPI_MISO_MODEDynPinMode for the `Ext3SpiMiso` alias. * EXT3_SPI_MOSI_IDDynPinId for the `Ext3SpiMosi` alias. * EXT3_SPI_MOSI_MODEDynPinMode for the `Ext3SpiMosi` alias. * EXT3_SPI_SCK_IDDynPinId for the `Ext3SpiSck` alias. * EXT3_SPI_SCK_MODEDynPinMode for the `Ext3SpiSck` alias. * EXT3_UART_RX_IDDynPinId for the `Ext3UartRx` alias. * EXT3_UART_RX_MODEDynPinMode for the `Ext3UartRx` alias. * EXT3_UART_TX_IDDynPinId for the `Ext3UartTx` alias. * EXT3_UART_TX_MODEDynPinMode for the `Ext3UartTx` alias. * I2S_FS0_IDDynPinId for the `I2sFs0` alias. * I2S_FS0_MODEDynPinMode for the `I2sFs0` alias. * I2S_FS1_IDDynPinId for the `I2sFs1` alias. * I2S_FS1_MODEDynPinMode for the `I2sFs1` alias. * I2S_SDI_IDDynPinId for the `I2sSdi` alias. * I2S_SDI_MODEDynPinMode for the `I2sSdi` alias. * I2S_SDO_IDDynPinId for the `I2sSdo` alias. * I2S_SDO_MODEDynPinMode for the `I2sSdo` alias. * LED_IDDynPinId for the `Led` alias. * LED_MODEDynPinMode for the `Led` alias. * PCC_DOUT00_IDDynPinId for the `PccDout00` alias. * PCC_DOUT00_MODEDynPinMode for the `PccDout00` alias. * PCC_DOUT01_IDDynPinId for the `PccDout01` alias. * PCC_DOUT01_MODEDynPinMode for the `PccDout01` alias. * PCC_DOUT02_IDDynPinId for the `PccDout02` alias. * PCC_DOUT02_MODEDynPinMode for the `PccDout02` alias. * PCC_DOUT03_IDDynPinId for the `PccDout03` alias. * PCC_DOUT03_MODEDynPinMode for the `PccDout03` alias. * PCC_DOUT04_IDDynPinId for the `PccDout04` alias. * PCC_DOUT04_MODEDynPinMode for the `PccDout04` alias. * PCC_DOUT05_IDDynPinId for the `PccDout05` alias. * PCC_DOUT05_MODEDynPinMode for the `PccDout05` alias. * PCC_DOUT06_IDDynPinId for the `PccDout06` alias. * PCC_DOUT06_MODEDynPinMode for the `PccDout06` alias. * PCC_DOUT07_IDDynPinId for the `PccDout07` alias. * PCC_DOUT07_MODEDynPinMode for the `PccDout07` alias. * PCC_DOUT08_IDDynPinId for the `PccDout08` alias. * PCC_DOUT08_MODEDynPinMode for the `PccDout08` alias. 
* PCC_DOUT09_IDDynPinId for the `PccDout09` alias. * PCC_DOUT09_MODEDynPinMode for the `PccDout09` alias. * PCC_HSYNC_IDDynPinId for the `PccHsync` alias. * PCC_HSYNC_MODEDynPinMode for the `PccHsync` alias. * PCC_PCLK_IDDynPinId for the `PccPclk` alias. * PCC_PCLK_MODEDynPinMode for the `PccPclk` alias. * PCC_PWDN_IDDynPinId for the `PccPwdn` alias. * PCC_PWDN_MODEDynPinMode for the `PccPwdn` alias. * PCC_RESET_IDDynPinId for the `PccReset` alias. * PCC_RESET_MODEDynPinMode for the `PccReset` alias. * PCC_VSYNC_IDDynPinId for the `PccVsync` alias. * PCC_VSYNC_MODEDynPinMode for the `PccVsync` alias. * PCC_XCLK_IDDynPinId for the `PccXclk` alias. * PCC_XCLK_MODEDynPinMode for the `PccXclk` alias. * PDEC_INDEX_IDDynPinId for the `PdecIndex` alias. * PDEC_INDEX_MODEDynPinMode for the `PdecIndex` alias. * PDEC_PHASE_A_IDDynPinId for the `PdecPhaseA` alias. * PDEC_PHASE_A_MODEDynPinMode for the `PdecPhaseA` alias. * PDEC_PHASE_B_IDDynPinId for the `PdecPhaseB` alias. * PDEC_PHASE_B_MODEDynPinMode for the `PdecPhaseB` alias. * QSPI_DATA0_IDDynPinId for the `QspiData0` alias. * QSPI_DATA0_MODEDynPinMode for the `QspiData0` alias. * QSPI_DATA1_IDDynPinId for the `QspiData1` alias. * QSPI_DATA1_MODEDynPinMode for the `QspiData1` alias. * QSPI_DATA2_IDDynPinId for the `QspiData2` alias. * QSPI_DATA2_MODEDynPinMode for the `QspiData2` alias. * QSPI_DATA3_IDDynPinId for the `QspiData3` alias. * QSPI_DATA3_MODEDynPinMode for the `QspiData3` alias. * QSPI_SCK_IDDynPinId for the `QspiSck` alias. * QSPI_SCK_MODEDynPinMode for the `QspiSck` alias. * QSPI_SCS_IDDynPinId for the `QspiScs` alias. * QSPI_SCS_MODEDynPinMode for the `QspiScs` alias. * QT_BUTTON_IDDynPinId for the `QtButton` alias. * QT_BUTTON_MODEDynPinMode for the `QtButton` alias. * SD_CD_IDDynPinId for the `SdCd` alias. * SD_CD_MODEDynPinMode for the `SdCd` alias. * SD_CLK_IDDynPinId for the `SdClk` alias. * SD_CLK_MODEDynPinMode for the `SdClk` alias. * SD_CMD_IDDynPinId for the `SdCmd` alias. 
* SD_CMD_MODEDynPinMode for the `SdCmd` alias. * SD_DATA0_IDDynPinId for the `SdData0` alias. * SD_DATA0_MODEDynPinMode for the `SdData0` alias. * SD_DATA1_IDDynPinId for the `SdData1` alias. * SD_DATA1_MODEDynPinMode for the `SdData1` alias. * SD_DATA2_IDDynPinId for the `SdData2` alias. * SD_DATA2_MODEDynPinMode for the `SdData2` alias. * SD_DATA3_IDDynPinId for the `SdData3` alias. * SD_DATA3_MODEDynPinMode for the `SdData3` alias. * SD_WP_IDDynPinId for the `SdWp` alias. * SD_WP_MODEDynPinMode for the `SdWp` alias. * SWCLK_IDDynPinId for the `Swclk` alias. * SWCLK_MODEDynPinMode for the `Swclk` alias. * SWDIO_IDDynPinId for the `Swdio` alias. * SWDIO_MODEDynPinMode for the `Swdio` alias. * SWO_IDDynPinId for the `Swo` alias. * SWO_MODEDynPinMode for the `Swo` alias. * USB_DM_IDDynPinId for the `UsbDm` alias. * USB_DM_MODEDynPinMode for the `UsbDm` alias. * USB_DP_IDDynPinId for the `UsbDp` alias. * USB_DP_MODEDynPinMode for the `UsbDp` alias. * XOSC1_CLOCK_IDDynPinId for the `Xosc1Clock` alias. * XOSC1_CLOCK_MODEDynPinMode for the `Xosc1Clock` alias. * XOSC1_X_IN_IDDynPinId for the `Xosc1XIn` alias. * XOSC1_X_IN_MODEDynPinMode for the `Xosc1XIn` alias. * XOSC1_X_OUT_IDDynPinId for the `Xosc1XOut` alias. * XOSC1_X_OUT_MODEDynPinMode for the `Xosc1XOut` alias. 
Type Definitions --- * Adc0Analog0Alias for a configured `Pin` * Adc0Analog0Id`PinId` for the `Adc0Analog0` alias * Adc0Analog0Mode`PinMode` for the `Adc0Analog0` alias * Adc0Analog1Alias for a configured `Pin` * Adc0Analog1Id`PinId` for the `Adc0Analog1` alias * Adc0Analog1Mode`PinMode` for the `Adc0Analog1` alias * Adc0Analog2Alias for a configured `Pin` * Adc0Analog2Id`PinId` for the `Adc0Analog2` alias * Adc0Analog2Mode`PinMode` for the `Adc0Analog2` alias * Adc0Analog3Alias for a configured `Pin` * Adc0Analog3Id`PinId` for the `Adc0Analog3` alias * Adc0Analog3Mode`PinMode` for the `Adc0Analog3` alias * Adc0Analog4Alias for a configured `Pin` * Adc0Analog4Id`PinId` for the `Adc0Analog4` alias * Adc0Analog4Mode`PinMode` for the `Adc0Analog4` alias * Adc0Analog5Alias for a configured `Pin` * Adc0Analog5Id`PinId` for the `Adc0Analog5` alias * Adc0Analog5Mode`PinMode` for the `Adc0Analog5` alias * Adc0Analog6Alias for a configured `Pin` * Adc0Analog6Id`PinId` for the `Adc0Analog6` alias * Adc0Analog6Mode`PinMode` for the `Adc0Analog6` alias * Adc0Analog7Alias for a configured `Pin` * Adc0Analog7Id`PinId` for the `Adc0Analog7` alias * Adc0Analog7Mode`PinMode` for the `Adc0Analog7` alias * Adc1Analog0Alias for a configured `Pin` * Adc1Analog0Id`PinId` for the `Adc1Analog0` alias * Adc1Analog0Mode`PinMode` for the `Adc1Analog0` alias * Adc1Analog1Alias for a configured `Pin` * Adc1Analog1Id`PinId` for the `Adc1Analog1` alias * Adc1Analog1Mode`PinMode` for the `Adc1Analog1` alias * Adc1Analog4Alias for a configured `Pin` * Adc1Analog4Id`PinId` for the `Adc1Analog4` alias * Adc1Analog4Mode`PinMode` for the `Adc1Analog4` alias * Adc1Analog5Alias for a configured `Pin` * Adc1Analog5Id`PinId` for the `Adc1Analog5` alias * Adc1Analog5Mode`PinMode` for the `Adc1Analog5` alias * Adc1Analog6Alias for a configured `Pin` * Adc1Analog6Id`PinId` for the `Adc1Analog6` alias * Adc1Analog6Mode`PinMode` for the `Adc1Analog6` alias * Adc1Analog7Alias for a configured `Pin` * 
Adc1Analog7Id`PinId` for the `Adc1Analog7` alias * Adc1Analog7Mode`PinMode` for the `Adc1Analog7` alias * Adc1Analog8Alias for a configured `Pin` * Adc1Analog8Id`PinId` for the `Adc1Analog8` alias * Adc1Analog8Mode`PinMode` for the `Adc1Analog8` alias * Adc1Analog9Alias for a configured `Pin` * Adc1Analog9Id`PinId` for the `Adc1Analog9` alias * Adc1Analog9Mode`PinMode` for the `Adc1Analog9` alias * Adc1Analog14Alias for a configured `Pin` * Adc1Analog14Id`PinId` for the `Adc1Analog14` alias * Adc1Analog14Mode`PinMode` for the `Adc1Analog14` alias * Adc1Analog15Alias for a configured `Pin` * Adc1Analog15Id`PinId` for the `Adc1Analog15` alias * Adc1Analog15Mode`PinMode` for the `Adc1Analog15` alias * AdcDacHeaderAlias for a configured `Pin` * AdcDacHeaderId`PinId` for the `AdcDacHeader` alias * AdcDacHeaderMode`PinMode` for the `AdcDacHeader` alias * Ata6561RxAlias for a configured `Pin` * Ata6561RxId`PinId` for the `Ata6561Rx` alias * Ata6561RxMode`PinMode` for the `Ata6561Rx` alias * Ata6561StandbyAlias for a configured `Pin` * Ata6561StandbyId`PinId` for the `Ata6561Standby` alias * Ata6561StandbyMode`PinMode` for the `Ata6561Standby` alias * Ata6561TxAlias for a configured `Pin` * Ata6561TxId`PinId` for the `Ata6561Tx` alias * Ata6561TxMode`PinMode` for the `Ata6561Tx` alias * ButtonAlias for a configured `Pin` * ButtonId`PinId` for the `Button` alias * ButtonMode`PinMode` for the `Button` alias * DgiI2cSclAlias for a configured `Pin` * DgiI2cSclId`PinId` for the `DgiI2cScl` alias * DgiI2cSclMode`PinMode` for the `DgiI2cScl` alias * DgiI2cSdaAlias for a configured `Pin` * DgiI2cSdaId`PinId` for the `DgiI2cSda` alias * DgiI2cSdaMode`PinMode` for the `DgiI2cSda` alias * DgiSpiCsAlias for a configured `Pin` * DgiSpiCsId`PinId` for the `DgiSpiCs` alias * DgiSpiCsMode`PinMode` for the `DgiSpiCs` alias * DgiSpiMisoAlias for a configured `Pin` * DgiSpiMisoId`PinId` for the `DgiSpiMiso` alias * DgiSpiMisoMode`PinMode` for the `DgiSpiMiso` alias * DgiSpiMosiAlias for a 
configured `Pin` * DgiSpiMosiId`PinId` for the `DgiSpiMosi` alias * DgiSpiMosiMode`PinMode` for the `DgiSpiMosi` alias * DgiSpiSckAlias for a configured `Pin` * DgiSpiSckId`PinId` for the `DgiSpiSck` alias * DgiSpiSckMode`PinMode` for the `DgiSpiSck` alias * EdbgGpio0InAlias for a configured `Pin` * EdbgGpio0InId`PinId` for the `EdbgGpio0In` alias * EdbgGpio0InMode`PinMode` for the `EdbgGpio0In` alias * EdbgGpio0OutAlias for a configured `Pin` * EdbgGpio0OutId`PinId` for the `EdbgGpio0Out` alias * EdbgGpio0OutMode`PinMode` for the `EdbgGpio0Out` alias * EdbgGpio1InAlias for a configured `Pin` * EdbgGpio1InId`PinId` for the `EdbgGpio1In` alias * EdbgGpio1InMode`PinMode` for the `EdbgGpio1In` alias * EdbgGpio1OutAlias for a configured `Pin` * EdbgGpio1OutId`PinId` for the `EdbgGpio1Out` alias * EdbgGpio1OutMode`PinMode` for the `EdbgGpio1Out` alias * EdbgGpio2InAlias for a configured `Pin` * EdbgGpio2InId`PinId` for the `EdbgGpio2In` alias * EdbgGpio2InMode`PinMode` for the `EdbgGpio2In` alias * EdbgGpio2OutAlias for a configured `Pin` * EdbgGpio2OutId`PinId` for the `EdbgGpio2Out` alias * EdbgGpio2OutMode`PinMode` for the `EdbgGpio2Out` alias * EdbgUartRxAlias for a configured `Pin` * EdbgUartRxId`PinId` for the `EdbgUartRx` alias * EdbgUartRxMode`PinMode` for the `EdbgUartRx` alias * EdbgUartTxAlias for a configured `Pin` * EdbgUartTxId`PinId` for the `EdbgUartTx` alias * EdbgUartTxMode`PinMode` for the `EdbgUartTx` alias * EthCrsDvAlias for a configured `Pin` * EthCrsDvId`PinId` for the `EthCrsDv` alias * EthCrsDvMode`PinMode` for the `EthCrsDv` alias * EthGmdioAlias for a configured `Pin` * EthGmdioId`PinId` for the `EthGmdio` alias * EthGmdioMode`PinMode` for the `EthGmdio` alias * EthInterruptAlias for a configured `Pin` * EthInterruptId`PinId` for the `EthInterrupt` alias * EthInterruptMode`PinMode` for the `EthInterrupt` alias * EthMdcAlias for a configured `Pin` * EthMdcId`PinId` for the `EthMdc` alias * EthMdcMode`PinMode` for the `EthMdc` alias * 
EthRefClkAlias for a configured `Pin` * EthRefClkId`PinId` for the `EthRefClk` alias * EthRefClkMode`PinMode` for the `EthRefClk` alias * EthResetAlias for a configured `Pin` * EthResetId`PinId` for the `EthReset` alias * EthResetMode`PinMode` for the `EthReset` alias * EthRxd0Alias for a configured `Pin` * EthRxd0Id`PinId` for the `EthRxd0` alias * EthRxd0Mode`PinMode` for the `EthRxd0` alias * EthRxd1Alias for a configured `Pin` * EthRxd1Id`PinId` for the `EthRxd1` alias * EthRxd1Mode`PinMode` for the `EthRxd1` alias * EthRxerAlias for a configured `Pin` * EthRxerId`PinId` for the `EthRxer` alias * EthRxerMode`PinMode` for the `EthRxer` alias * EthTxd0Alias for a configured `Pin` * EthTxd0Id`PinId` for the `EthTxd0` alias * EthTxd0Mode`PinMode` for the `EthTxd0` alias * EthTxd1Alias for a configured `Pin` * EthTxd1Id`PinId` for the `EthTxd1` alias * EthTxd1Mode`PinMode` for the `EthTxd1` alias * EthTxenAlias for a configured `Pin` * EthTxenId`PinId` for the `EthTxen` alias * EthTxenMode`PinMode` for the `EthTxen` alias * Ext1AdcMinusAlias for a configured `Pin` * Ext1AdcMinusId`PinId` for the `Ext1AdcMinus` alias * Ext1AdcMinusMode`PinMode` for the `Ext1AdcMinus` alias * Ext1AdcPlusAlias for a configured `Pin` * Ext1AdcPlusId`PinId` for the `Ext1AdcPlus` alias * Ext1AdcPlusMode`PinMode` for the `Ext1AdcPlus` alias * Ext1Gpio1InAlias for a configured `Pin` * Ext1Gpio1InId`PinId` for the `Ext1Gpio1In` alias * Ext1Gpio1InMode`PinMode` for the `Ext1Gpio1In` alias * Ext1Gpio1OutAlias for a configured `Pin` * Ext1Gpio1OutId`PinId` for the `Ext1Gpio1Out` alias * Ext1Gpio1OutMode`PinMode` for the `Ext1Gpio1Out` alias * Ext1Gpio2InAlias for a configured `Pin` * Ext1Gpio2InId`PinId` for the `Ext1Gpio2In` alias * Ext1Gpio2InMode`PinMode` for the `Ext1Gpio2In` alias * Ext1Gpio2OutAlias for a configured `Pin` * Ext1Gpio2OutId`PinId` for the `Ext1Gpio2Out` alias * Ext1Gpio2OutMode`PinMode` for the `Ext1Gpio2Out` alias * Ext1I2cSclAlias for a configured `Pin` * 
Ext1I2cSclId`PinId` for the `Ext1I2cScl` alias * Ext1I2cSclMode`PinMode` for the `Ext1I2cScl` alias * Ext1I2cSdaAlias for a configured `Pin` * Ext1I2cSdaId`PinId` for the `Ext1I2cSda` alias * Ext1I2cSdaMode`PinMode` for the `Ext1I2cSda` alias * Ext1IrqGpioInAlias for a configured `Pin` * Ext1IrqGpioInId`PinId` for the `Ext1IrqGpioIn` alias * Ext1IrqGpioInMode`PinMode` for the `Ext1IrqGpioIn` alias * Ext1IrqGpioOutAlias for a configured `Pin` * Ext1IrqGpioOutId`PinId` for the `Ext1IrqGpioOut` alias * Ext1IrqGpioOutMode`PinMode` for the `Ext1IrqGpioOut` alias * Ext1PwmMinusAlias for a configured `Pin` * Ext1PwmMinusId`PinId` for the `Ext1PwmMinus` alias * Ext1PwmMinusMode`PinMode` for the `Ext1PwmMinus` alias * Ext1PwmPlusAlias for a configured `Pin` * Ext1PwmPlusId`PinId` for the `Ext1PwmPlus` alias * Ext1PwmPlusMode`PinMode` for the `Ext1PwmPlus` alias * Ext1SpiCsAAlias for a configured `Pin` * Ext1SpiCsAId`PinId` for the `Ext1SpiCsA` alias * Ext1SpiCsAMode`PinMode` for the `Ext1SpiCsA` alias * Ext1SpiCsBAlias for a configured `Pin` * Ext1SpiCsBId`PinId` for the `Ext1SpiCsB` alias * Ext1SpiCsBMode`PinMode` for the `Ext1SpiCsB` alias * Ext1SpiMisoAlias for a configured `Pin` * Ext1SpiMisoId`PinId` for the `Ext1SpiMiso` alias * Ext1SpiMisoMode`PinMode` for the `Ext1SpiMiso` alias * Ext1SpiMosiAlias for a configured `Pin` * Ext1SpiMosiId`PinId` for the `Ext1SpiMosi` alias * Ext1SpiMosiMode`PinMode` for the `Ext1SpiMosi` alias * Ext1SpiSckAlias for a configured `Pin` * Ext1SpiSckId`PinId` for the `Ext1SpiSck` alias * Ext1SpiSckMode`PinMode` for the `Ext1SpiSck` alias * Ext1UartCtsAlias for a configured `Pin` * Ext1UartCtsId`PinId` for the `Ext1UartCts` alias * Ext1UartCtsMode`PinMode` for the `Ext1UartCts` alias * Ext1UartRtsAlias for a configured `Pin` * Ext1UartRtsId`PinId` for the `Ext1UartRts` alias * Ext1UartRtsMode`PinMode` for the `Ext1UartRts` alias * Ext1UartRxAlias for a configured `Pin` * Ext1UartRxId`PinId` for the `Ext1UartRx` alias * 
Ext1UartRxMode`PinMode` for the `Ext1UartRx` alias * Ext1UartTxAlias for a configured `Pin` * Ext1UartTxId`PinId` for the `Ext1UartTx` alias * Ext1UartTxMode`PinMode` for the `Ext1UartTx` alias * Ext2AdcMinusAlias for a configured `Pin` * Ext2AdcMinusId`PinId` for the `Ext2AdcMinus` alias * Ext2AdcMinusMode`PinMode` for the `Ext2AdcMinus` alias * Ext2AdcPlusAlias for a configured `Pin` * Ext2AdcPlusId`PinId` for the `Ext2AdcPlus` alias * Ext2AdcPlusMode`PinMode` for the `Ext2AdcPlus` alias * Ext2Gpio1InAlias for a configured `Pin` * Ext2Gpio1InId`PinId` for the `Ext2Gpio1In` alias * Ext2Gpio1InMode`PinMode` for the `Ext2Gpio1In` alias * Ext2Gpio1OutAlias for a configured `Pin` * Ext2Gpio1OutId`PinId` for the `Ext2Gpio1Out` alias * Ext2Gpio1OutMode`PinMode` for the `Ext2Gpio1Out` alias * Ext2Gpio2InAlias for a configured `Pin` * Ext2Gpio2InId`PinId` for the `Ext2Gpio2In` alias * Ext2Gpio2InMode`PinMode` for the `Ext2Gpio2In` alias * Ext2Gpio2OutAlias for a configured `Pin` * Ext2Gpio2OutId`PinId` for the `Ext2Gpio2Out` alias * Ext2Gpio2OutMode`PinMode` for the `Ext2Gpio2Out` alias * Ext2I2cSclAlias for a configured `Pin` * Ext2I2cSclId`PinId` for the `Ext2I2cScl` alias * Ext2I2cSclMode`PinMode` for the `Ext2I2cScl` alias * Ext2I2cSdaAlias for a configured `Pin` * Ext2I2cSdaId`PinId` for the `Ext2I2cSda` alias * Ext2I2cSdaMode`PinMode` for the `Ext2I2cSda` alias * Ext2IrqGpioInAlias for a configured `Pin` * Ext2IrqGpioInId`PinId` for the `Ext2IrqGpioIn` alias * Ext2IrqGpioInMode`PinMode` for the `Ext2IrqGpioIn` alias * Ext2IrqGpioOutAlias for a configured `Pin` * Ext2IrqGpioOutId`PinId` for the `Ext2IrqGpioOut` alias * Ext2IrqGpioOutMode`PinMode` for the `Ext2IrqGpioOut` alias * Ext2PwmMinusAlias for a configured `Pin` * Ext2PwmMinusId`PinId` for the `Ext2PwmMinus` alias * Ext2PwmMinusMode`PinMode` for the `Ext2PwmMinus` alias * Ext2PwmPlusAlias for a configured `Pin` * Ext2PwmPlusId`PinId` for the `Ext2PwmPlus` alias * Ext2PwmPlusMode`PinMode` for the `Ext2PwmPlus` 
alias * Ext2SpiCsAAlias for a configured `Pin` * Ext2SpiCsAId`PinId` for the `Ext2SpiCsA` alias * Ext2SpiCsAMode`PinMode` for the `Ext2SpiCsA` alias * Ext2SpiCsBAlias for a configured `Pin` * Ext2SpiCsBId`PinId` for the `Ext2SpiCsB` alias * Ext2SpiCsBMode`PinMode` for the `Ext2SpiCsB` alias * Ext2SpiMisoAlias for a configured `Pin` * Ext2SpiMisoId`PinId` for the `Ext2SpiMiso` alias * Ext2SpiMisoMode`PinMode` for the `Ext2SpiMiso` alias * Ext2SpiMosiAlias for a configured `Pin` * Ext2SpiMosiId`PinId` for the `Ext2SpiMosi` alias * Ext2SpiMosiMode`PinMode` for the `Ext2SpiMosi` alias * Ext2SpiSckAlias for a configured `Pin` * Ext2SpiSckId`PinId` for the `Ext2SpiSck` alias * Ext2SpiSckMode`PinMode` for the `Ext2SpiSck` alias * Ext2UartRxAlias for a configured `Pin` * Ext2UartRxId`PinId` for the `Ext2UartRx` alias * Ext2UartRxMode`PinMode` for the `Ext2UartRx` alias * Ext2UartTxAlias for a configured `Pin` * Ext2UartTxId`PinId` for the `Ext2UartTx` alias * Ext2UartTxMode`PinMode` for the `Ext2UartTx` alias * Ext3AdcMinusAlias for a configured `Pin` * Ext3AdcMinusId`PinId` for the `Ext3AdcMinus` alias * Ext3AdcMinusMode`PinMode` for the `Ext3AdcMinus` alias * Ext3AdcPlusAlias for a configured `Pin` * Ext3AdcPlusId`PinId` for the `Ext3AdcPlus` alias * Ext3AdcPlusMode`PinMode` for the `Ext3AdcPlus` alias * Ext3Gpio1InAlias for a configured `Pin` * Ext3Gpio1InId`PinId` for the `Ext3Gpio1In` alias * Ext3Gpio1InMode`PinMode` for the `Ext3Gpio1In` alias * Ext3Gpio1OutAlias for a configured `Pin` * Ext3Gpio1OutId`PinId` for the `Ext3Gpio1Out` alias * Ext3Gpio1OutMode`PinMode` for the `Ext3Gpio1Out` alias * Ext3Gpio2InAlias for a configured `Pin` * Ext3Gpio2InId`PinId` for the `Ext3Gpio2In` alias * Ext3Gpio2InMode`PinMode` for the `Ext3Gpio2In` alias * Ext3Gpio2OutAlias for a configured `Pin` * Ext3Gpio2OutId`PinId` for the `Ext3Gpio2Out` alias * Ext3Gpio2OutMode`PinMode` for the `Ext3Gpio2Out` alias * Ext3I2cSclAlias for a configured `Pin` * Ext3I2cSclId`PinId` for the 
`Ext3I2cScl` alias * Ext3I2cSclMode`PinMode` for the `Ext3I2cScl` alias * Ext3I2cSdaAlias for a configured `Pin` * Ext3I2cSdaId`PinId` for the `Ext3I2cSda` alias * Ext3I2cSdaMode`PinMode` for the `Ext3I2cSda` alias * Ext3IrqGpioInAlias for a configured `Pin` * Ext3IrqGpioInId`PinId` for the `Ext3IrqGpioIn` alias * Ext3IrqGpioInMode`PinMode` for the `Ext3IrqGpioIn` alias * Ext3IrqGpioOutAlias for a configured `Pin` * Ext3IrqGpioOutId`PinId` for the `Ext3IrqGpioOut` alias * Ext3IrqGpioOutMode`PinMode` for the `Ext3IrqGpioOut` alias * Ext3PwmMinusAlias for a configured `Pin` * Ext3PwmMinusId`PinId` for the `Ext3PwmMinus` alias * Ext3PwmMinusMode`PinMode` for the `Ext3PwmMinus` alias * Ext3PwmPlusAlias for a configured `Pin` * Ext3PwmPlusId`PinId` for the `Ext3PwmPlus` alias * Ext3PwmPlusMode`PinMode` for the `Ext3PwmPlus` alias * Ext3SpiCsAAlias for a configured `Pin` * Ext3SpiCsAId`PinId` for the `Ext3SpiCsA` alias * Ext3SpiCsAMode`PinMode` for the `Ext3SpiCsA` alias * Ext3SpiCsBAlias for a configured `Pin` * Ext3SpiCsBId`PinId` for the `Ext3SpiCsB` alias * Ext3SpiCsBMode`PinMode` for the `Ext3SpiCsB` alias * Ext3SpiMisoAlias for a configured `Pin` * Ext3SpiMisoId`PinId` for the `Ext3SpiMiso` alias * Ext3SpiMisoMode`PinMode` for the `Ext3SpiMiso` alias * Ext3SpiMosiAlias for a configured `Pin` * Ext3SpiMosiId`PinId` for the `Ext3SpiMosi` alias * Ext3SpiMosiMode`PinMode` for the `Ext3SpiMosi` alias * Ext3SpiSckAlias for a configured `Pin` * Ext3SpiSckId`PinId` for the `Ext3SpiSck` alias * Ext3SpiSckMode`PinMode` for the `Ext3SpiSck` alias * Ext3UartRxAlias for a configured `Pin` * Ext3UartRxId`PinId` for the `Ext3UartRx` alias * Ext3UartRxMode`PinMode` for the `Ext3UartRx` alias * Ext3UartTxAlias for a configured `Pin` * Ext3UartTxId`PinId` for the `Ext3UartTx` alias * Ext3UartTxMode`PinMode` for the `Ext3UartTx` alias * I2sFs0Alias for a configured `Pin` * I2sFs0Id`PinId` for the `I2sFs0` alias * I2sFs0Mode`PinMode` for the `I2sFs0` alias * I2sFs1Alias for a 
configured `Pin` * I2sFs1Id`PinId` for the `I2sFs1` alias * I2sFs1Mode`PinMode` for the `I2sFs1` alias * I2sSdiAlias for a configured `Pin` * I2sSdiId`PinId` for the `I2sSdi` alias * I2sSdiMode`PinMode` for the `I2sSdi` alias * I2sSdoAlias for a configured `Pin` * I2sSdoId`PinId` for the `I2sSdo` alias * I2sSdoMode`PinMode` for the `I2sSdo` alias * LedAlias for a configured `Pin` * LedId`PinId` for the `Led` alias * LedMode`PinMode` for the `Led` alias * PccDout00Alias for a configured `Pin` * PccDout00Id`PinId` for the `PccDout00` alias * PccDout00Mode`PinMode` for the `PccDout00` alias * PccDout01Alias for a configured `Pin` * PccDout01Id`PinId` for the `PccDout01` alias * PccDout01Mode`PinMode` for the `PccDout01` alias * PccDout02Alias for a configured `Pin` * PccDout02Id`PinId` for the `PccDout02` alias * PccDout02Mode`PinMode` for the `PccDout02` alias * PccDout03Alias for a configured `Pin` * PccDout03Id`PinId` for the `PccDout03` alias * PccDout03Mode`PinMode` for the `PccDout03` alias * PccDout04Alias for a configured `Pin` * PccDout04Id`PinId` for the `PccDout04` alias * PccDout04Mode`PinMode` for the `PccDout04` alias * PccDout05Alias for a configured `Pin` * PccDout05Id`PinId` for the `PccDout05` alias * PccDout05Mode`PinMode` for the `PccDout05` alias * PccDout06Alias for a configured `Pin` * PccDout06Id`PinId` for the `PccDout06` alias * PccDout06Mode`PinMode` for the `PccDout06` alias * PccDout07Alias for a configured `Pin` * PccDout07Id`PinId` for the `PccDout07` alias * PccDout07Mode`PinMode` for the `PccDout07` alias * PccDout08Alias for a configured `Pin` * PccDout08Id`PinId` for the `PccDout08` alias * PccDout08Mode`PinMode` for the `PccDout08` alias * PccDout09Alias for a configured `Pin` * PccDout09Id`PinId` for the `PccDout09` alias * PccDout09Mode`PinMode` for the `PccDout09` alias * PccHsyncAlias for a configured `Pin` * PccHsyncId`PinId` for the `PccHsync` alias * PccHsyncMode`PinMode` for the `PccHsync` alias * PccPclkAlias for a 
configured `Pin` * PccPclkId`PinId` for the `PccPclk` alias * PccPclkMode`PinMode` for the `PccPclk` alias * PccPwdnAlias for a configured `Pin` * PccPwdnId`PinId` for the `PccPwdn` alias * PccPwdnMode`PinMode` for the `PccPwdn` alias * PccResetAlias for a configured `Pin` * PccResetId`PinId` for the `PccReset` alias * PccResetMode`PinMode` for the `PccReset` alias * PccVsyncAlias for a configured `Pin` * PccVsyncId`PinId` for the `PccVsync` alias * PccVsyncMode`PinMode` for the `PccVsync` alias * PccXclkAlias for a configured `Pin` * PccXclkId`PinId` for the `PccXclk` alias * PccXclkMode`PinMode` for the `PccXclk` alias * PdecIndexAlias for a configured `Pin` * PdecIndexId`PinId` for the `PdecIndex` alias * PdecIndexMode`PinMode` for the `PdecIndex` alias * PdecPhaseAAlias for a configured `Pin` * PdecPhaseAId`PinId` for the `PdecPhaseA` alias * PdecPhaseAMode`PinMode` for the `PdecPhaseA` alias * PdecPhaseBAlias for a configured `Pin` * PdecPhaseBId`PinId` for the `PdecPhaseB` alias * PdecPhaseBMode`PinMode` for the `PdecPhaseB` alias * QspiData0Alias for a configured `Pin` * QspiData0Id`PinId` for the `QspiData0` alias * QspiData0Mode`PinMode` for the `QspiData0` alias * QspiData1Alias for a configured `Pin` * QspiData1Id`PinId` for the `QspiData1` alias * QspiData1Mode`PinMode` for the `QspiData1` alias * QspiData2Alias for a configured `Pin` * QspiData2Id`PinId` for the `QspiData2` alias * QspiData2Mode`PinMode` for the `QspiData2` alias * QspiData3Alias for a configured `Pin` * QspiData3Id`PinId` for the `QspiData3` alias * QspiData3Mode`PinMode` for the `QspiData3` alias * QspiSckAlias for a configured `Pin` * QspiSckId`PinId` for the `QspiSck` alias * QspiSckMode`PinMode` for the `QspiSck` alias * QspiScsAlias for a configured `Pin` * QspiScsId`PinId` for the `QspiScs` alias * QspiScsMode`PinMode` for the `QspiScs` alias * QtButtonAlias for a configured `Pin` * QtButtonId`PinId` for the `QtButton` alias * QtButtonMode`PinMode` for the `QtButton` alias * 
SdCdAlias for a configured `Pin` * SdCdId`PinId` for the `SdCd` alias * SdCdMode`PinMode` for the `SdCd` alias * SdClkAlias for a configured `Pin` * SdClkId`PinId` for the `SdClk` alias * SdClkMode`PinMode` for the `SdClk` alias * SdCmdAlias for a configured `Pin` * SdCmdId`PinId` for the `SdCmd` alias * SdCmdMode`PinMode` for the `SdCmd` alias * SdData0Alias for a configured `Pin` * SdData0Id`PinId` for the `SdData0` alias * SdData0Mode`PinMode` for the `SdData0` alias * SdData1Alias for a configured `Pin` * SdData1Id`PinId` for the `SdData1` alias * SdData1Mode`PinMode` for the `SdData1` alias * SdData2Alias for a configured `Pin` * SdData2Id`PinId` for the `SdData2` alias * SdData2Mode`PinMode` for the `SdData2` alias * SdData3Alias for a configured `Pin` * SdData3Id`PinId` for the `SdData3` alias * SdData3Mode`PinMode` for the `SdData3` alias * SdWpAlias for a configured `Pin` * SdWpId`PinId` for the `SdWp` alias * SdWpMode`PinMode` for the `SdWp` alias * SwclkAlias for a configured `Pin` * SwclkId`PinId` for the `Swclk` alias * SwclkMode`PinMode` for the `Swclk` alias * SwdioAlias for a configured `Pin` * SwdioId`PinId` for the `Swdio` alias * SwdioMode`PinMode` for the `Swdio` alias * SwoAlias for a configured `Pin` * SwoId`PinId` for the `Swo` alias * SwoMode`PinMode` for the `Swo` alias * UsbDmAlias for a configured `Pin` * UsbDmId`PinId` for the `UsbDm` alias * UsbDmMode`PinMode` for the `UsbDm` alias * UsbDpAlias for a configured `Pin` * UsbDpId`PinId` for the `UsbDp` alias * UsbDpMode`PinMode` for the `UsbDp` alias * Xosc1ClockAlias for a configured `Pin` * Xosc1ClockId`PinId` for the `Xosc1Clock` alias * Xosc1ClockMode`PinMode` for the `Xosc1Clock` alias * Xosc1XInAlias for a configured `Pin` * Xosc1XInId`PinId` for the `Xosc1XIn` alias * Xosc1XInMode`PinMode` for the `Xosc1XIn` alias * Xosc1XOutAlias for a configured `Pin` * Xosc1XOutId`PinId` for the `Xosc1XOut` alias * Xosc1XOutMode`PinMode` for the `Xosc1XOut` alias Module atsame54_xpro::devices === 
SAM E54 Xplained Pro SERCOM device definitions These type definitions are used by the SAM E54 Xplained Pro for its various SERCOM functions in extensions 1, 2, and 3, as well as the DGI and EDBG ports and USB connections. Functions --- * dgi_i2cSet up the DGI I2C device * dgi_spiSet up the DGI SPI device * edbg_uartSet up the EDBG UART device * ext1_flow_control_uartSet up the extension 1 UART device with flow control * ext1_i2cSet up the extension 1 I2C device * ext1_spiSet up the extension 1 SPI device * ext1_uartSet up the extension 1 UART device * ext2_i2cSet up the extension 2 I2C device * ext2_spiSet up the extension 2 SPI device * ext2_uartSet up the extension 2 UART device * ext3_i2cSet up the extension 3 I2C device * ext3_spiSet up the extension 3 SPI device * ext3_uartSet up the extension 3 UART device Type Definitions --- * DgiI2cDGI I2C device * DgiI2cPadsI2C pads for the DGI connection * DgiI2cSercomAlias for the `SERCOM7` peripheral * DgiSpiDGI SPI device * DgiSpiPadsSPI pads for the DGI connection * DgiSpiSercomAlias for the `SERCOM6` peripheral * EdbgUartEDBG connection UART device * EdbgUartPadsUART pads for the EDBG connection * EdbgUartSercomAlias for the `SERCOM2` peripheral * Ext1FlowControlUartExtension 1 UART device with flow control * Ext1FlowControlUartPadsUART pads for the extension 1 connection with flow control * Ext1I2cExtension 1 I2C device * Ext1I2cPadsI2C pads for the extension 1 connection * Ext1I2cSercomAlias for the `SERCOM3` peripheral * Ext1SpiExtension 1 SPI device * Ext1SpiPadsSPI pads for the extension 1 connection * Ext1SpiSercomAlias for the `SERCOM4` peripheral * Ext1UartExtension 1 UART device * Ext1UartPadsUART pads for the extension 1 connection * Ext1UartSercomAlias for the `SERCOM0` peripheral * Ext2I2cExtension 2 I2C device * Ext2I2cPadsI2C pads for the extension 2 connection * Ext2I2cSercomAlias for the `SERCOM7` peripheral * Ext2SpiExtension 2 SPI device * Ext2SpiPadsSPI pads for the extension 2 connection * 
Ext2SpiSercomAlias for the `SERCOM6` peripheral * Ext2UartExtension 2 UART device * Ext2UartPadsUART pads for the extension 2 connection * Ext2UartSercomAlias for the `SERCOM5` peripheral * Ext3I2cExtension 3 I2C device * Ext3I2cPadsI2C pads for the extension 3 connection * Ext3I2cSercomAlias for the `SERCOM7` peripheral * Ext3SpiExtension 3 SPI device * Ext3SpiPadsSPI pads for the extension 3 connection * Ext3SpiSercomAlias for the `SERCOM6` peripheral * Ext3UartExtension 3 UART device * Ext3UartPadsUART pads for the extension 3 connection * Ext3UartSercomAlias for the `SERCOM1` peripheral Macro atsame54_xpro::periph_alias === ``` macro_rules! periph_alias { ($peripherals:ident . ext1_uart_sercom) => { ... }; ($peripherals:ident . ext3_uart_sercom) => { ... }; ($peripherals:ident . edbg_uart_sercom) => { ... }; ($peripherals:ident . ext1_i2c_sercom) => { ... }; ($peripherals:ident . ext1_spi_sercom) => { ... }; ($peripherals:ident . ext2_uart_sercom) => { ... }; ($peripherals:ident . ext2_spi_sercom) => { ... }; ($peripherals:ident . ext3_spi_sercom) => { ... }; ($peripherals:ident . dgi_spi_sercom) => { ... }; ($peripherals:ident . ext2_i2c_sercom) => { ... }; ($peripherals:ident . ext3_i2c_sercom) => { ... }; ($peripherals:ident . dgi_i2c_sercom) => { ... }; } ``` Refer to fields of the `Peripherals` struct by alternate names This macro can be used to access fields of the `Peripherals` struct by alternate names. 
The available aliases are: * `SERCOM0` can be referred to with the type alias `Ext1UartSercom` and accessed as the field name `ext1_uart_sercom` * `SERCOM1` can be referred to with the type alias `Ext3UartSercom` and accessed as the field name `ext3_uart_sercom` * `SERCOM2` can be referred to with the type alias `EdbgUartSercom` and accessed as the field name `edbg_uart_sercom` * `SERCOM3` can be referred to with the type alias `Ext1I2cSercom` and accessed as the field name `ext1_i2c_sercom` * `SERCOM4` can be referred to with the type alias `Ext1SpiSercom` and accessed as the field name `ext1_spi_sercom` * `SERCOM5` can be referred to with the type alias `Ext2UartSercom` and accessed as the field name `ext2_uart_sercom` * `SERCOM6` can be referred to with the type alias `Ext2SpiSercom` and accessed as the field name `ext2_spi_sercom` * `SERCOM6` can be referred to with the type alias `Ext3SpiSercom` and accessed as the field name `ext3_spi_sercom` * `SERCOM6` can be referred to with the type alias `DgiSpiSercom` and accessed as the field name `dgi_spi_sercom` * `SERCOM7` can be referred to with the type alias `Ext2I2cSercom` and accessed as the field name `ext2_i2c_sercom` * `SERCOM7` can be referred to with the type alias `Ext3I2cSercom` and accessed as the field name `ext3_i2c_sercom` * `SERCOM7` can be referred to with the type alias `DgiI2cSercom` and accessed as the field name `dgi_i2c_sercom` For example, suppose `display_spi` were an alternate name for the `SERCOM4` peripheral. You could use the `periph_alias!` macro to access it like this: ``` let mut peripherals = pac::Peripherals::take().unwrap(); // Replace this let display_spi = peripherals.SERCOM4; // With this let display_spi = periph_alias!(peripherals.display_spi); ``` Macro atsame54_xpro::pin_alias === ``` macro_rules! pin_alias { ($pins:ident . pa02) => { ... }; ($pins:ident . pa03) => { ... }; ($pins:ident . pa04) => { ... }; ($pins:ident . pa05) => { ... }; ($pins:ident . pa06) => { ... }; ($pins:ident . 
pa07) => { ... }; ($pins:ident . pa08) => { ... }; ($pins:ident . pa09) => { ... }; ($pins:ident . pa10) => { ... }; ($pins:ident . pa11) => { ... }; ($pins:ident . pa12) => { ... }; ($pins:ident . pa13) => { ... }; ($pins:ident . pa14) => { ... }; ($pins:ident . pa15) => { ... }; ($pins:ident . pa16) => { ... }; ($pins:ident . pa17) => { ... }; ($pins:ident . pa18) => { ... }; ($pins:ident . pa19) => { ... }; ($pins:ident . pa20) => { ... }; ($pins:ident . pa21) => { ... }; ($pins:ident . pa22) => { ... }; ($pins:ident . pa23) => { ... }; ($pins:ident . pa24) => { ... }; ($pins:ident . pa25) => { ... }; ($pins:ident . pa27) => { ... }; ($pins:ident . pa30) => { ... }; ($pins:ident . pa31) => { ... }; ($pins:ident . pb00) => { ... }; ($pins:ident . pb01) => { ... }; ($pins:ident . pb02) => { ... }; ($pins:ident . pb04) => { ... }; ($pins:ident . pb05) => { ... }; ($pins:ident . pb06) => { ... }; ($pins:ident . pb07) => { ... }; ($pins:ident . pb08) => { ... }; ($pins:ident . pb09) => { ... }; ($pins:ident . pb10) => { ... }; ($pins:ident . pb11) => { ... }; ($pins:ident . pb12) => { ... }; ($pins:ident . pb13) => { ... }; ($pins:ident . pb14) => { ... }; ($pins:ident . pb15) => { ... }; ($pins:ident . pb16) => { ... }; ($pins:ident . pb17) => { ... }; ($pins:ident . pb18) => { ... }; ($pins:ident . pb19) => { ... }; ($pins:ident . pb20) => { ... }; ($pins:ident . pb21) => { ... }; ($pins:ident . pb22) => { ... }; ($pins:ident . pb23) => { ... }; ($pins:ident . pb24) => { ... }; ($pins:ident . pb25) => { ... }; ($pins:ident . pb26) => { ... }; ($pins:ident . pb27) => { ... }; ($pins:ident . pb28) => { ... }; ($pins:ident . pb29) => { ... }; ($pins:ident . pb30) => { ... }; ($pins:ident . pb31) => { ... }; ($pins:ident . pc01) => { ... }; ($pins:ident . pc02) => { ... }; ($pins:ident . pc03) => { ... }; ($pins:ident . pc04) => { ... }; ($pins:ident . pc05) => { ... }; ($pins:ident . pc06) => { ... }; ($pins:ident . pc07) => { ... }; ($pins:ident . pc10) => { ... 
}; ($pins:ident . pc11) => { ... }; ($pins:ident . pc12) => { ... }; ($pins:ident . pc13) => { ... }; ($pins:ident . pc14) => { ... }; ($pins:ident . pc16) => { ... }; ($pins:ident . pc17) => { ... }; ($pins:ident . pc18) => { ... }; ($pins:ident . pc20) => { ... }; ($pins:ident . pc21) => { ... }; ($pins:ident . pc22) => { ... }; ($pins:ident . pc23) => { ... }; ($pins:ident . pc30) => { ... }; ($pins:ident . pc31) => { ... }; ($pins:ident . pd00) => { ... }; ($pins:ident . pd01) => { ... }; ($pins:ident . pd08) => { ... }; ($pins:ident . pd09) => { ... }; ($pins:ident . pd10) => { ... }; ($pins:ident . pd11) => { ... }; ($pins:ident . pd12) => { ... }; ($pins:ident . pd20) => { ... }; ($pins:ident . pd21) => { ... }; ($pins:ident . adc_dac_header) => { ... }; ($pins:ident . adc0_analog0) => { ... }; ($pins:ident . ext2_adc_minus) => { ... }; ($pins:ident . adc0_analog1) => { ... }; ($pins:ident . adc0_analog4) => { ... }; ($pins:ident . ext1_uart_tx) => { ... }; ($pins:ident . adc0_analog5) => { ... }; ($pins:ident . ext1_uart_rx) => { ... }; ($pins:ident . adc0_analog6) => { ... }; ($pins:ident . ext1_gpio1_in) => { ... }; ($pins:ident . ext1_gpio1_out) => { ... }; ($pins:ident . ext1_uart_rts) => { ... }; ($pins:ident . ext1_gpio2_in) => { ... }; ($pins:ident . ext1_gpio2_out) => { ... }; ($pins:ident . adc0_analog7) => { ... }; ($pins:ident . ext1_uart_cts) => { ... }; ($pins:ident . qspi_data0) => { ... }; ($pins:ident . qspi_data1) => { ... }; ($pins:ident . qspi_data2) => { ... }; ($pins:ident . qspi_data3) => { ... }; ($pins:ident . pcc_vsync) => { ... }; ($pins:ident . eth_rxd1) => { ... }; ($pins:ident . pcc_hsync) => { ... }; ($pins:ident . eth_rxd0) => { ... }; ($pins:ident . pcc_pclk) => { ... }; ($pins:ident . eth_ref_clk) => { ... }; ($pins:ident . pcc_xclk) => { ... }; ($pins:ident . eth_rxer) => { ... }; ($pins:ident . qt_button) => { ... }; ($pins:ident . pcc_dout00) => { ... }; ($pins:ident . pcc_dout01) => { ... }; ($pins:ident . 
eth_txen) => { ... }; ($pins:ident . pcc_dout02) => { ... }; ($pins:ident . eth_txd0) => { ... }; ($pins:ident . pcc_dout03) => { ... }; ($pins:ident . eth_txd1) => { ... }; ($pins:ident . sd_cmd) => { ... }; ($pins:ident . i2s_fs0) => { ... }; ($pins:ident . pcc_dout04) => { ... }; ($pins:ident . sd_clk) => { ... }; ($pins:ident . i2s_sdo) => { ... }; ($pins:ident . pcc_dout05) => { ... }; ($pins:ident . ext1_i2c_sda) => { ... }; ($pins:ident . i2s_sdi) => { ... }; ($pins:ident . pcc_dout06) => { ... }; ($pins:ident . ext1_i2c_scl) => { ... }; ($pins:ident . i2s_fs1) => { ... }; ($pins:ident . pcc_dout07) => { ... }; ($pins:ident . usb_dm) => { ... }; ($pins:ident . usb_dp) => { ... }; ($pins:ident . ext1_spi_cs_b) => { ... }; ($pins:ident . swclk) => { ... }; ($pins:ident . swdio) => { ... }; ($pins:ident . ext2_adc_plus) => { ... }; ($pins:ident . ext2_gpio1_in) => { ... }; ($pins:ident . ext2_gpio1_out) => { ... }; ($pins:ident . ext2_spi_cs_b) => { ... }; ($pins:ident . ext1_adc_plus) => { ... }; ($pins:ident . adc1_analog6) => { ... }; ($pins:ident . ext1_adc_minus) => { ... }; ($pins:ident . adc1_analog7) => { ... }; ($pins:ident . ext2_gpio2_in) => { ... }; ($pins:ident . ext2_gpio2_out) => { ... }; ($pins:ident . adc1_analog8) => { ... }; ($pins:ident . ext1_irq_gpio_in) => { ... }; ($pins:ident . ext1_irq_gpio_out) => { ... }; ($pins:ident . adc1_analog9) => { ... }; ($pins:ident . adc0_analog2) => { ... }; ($pins:ident . adc1_analog0) => { ... }; ($pins:ident . ext1_pwm_plus) => { ... }; ($pins:ident . adc0_analog3) => { ... }; ($pins:ident . adc1_analog1) => { ... }; ($pins:ident . ext1_pwm_minus) => { ... }; ($pins:ident . qspi_sck) => { ... }; ($pins:ident . qspi_scs) => { ... }; ($pins:ident . ata6561_tx) => { ... }; ($pins:ident . ata6561_rx) => { ... }; ($pins:ident . ext2_pwm_plus) => { ... }; ($pins:ident . pcc_dout08) => { ... }; ($pins:ident . ext2_pwm_minus) => { ... }; ($pins:ident . pcc_dout09) => { ... }; ($pins:ident . 
ext2_uart_tx) => { ... }; ($pins:ident . ext2_uart_rx) => { ... }; ($pins:ident . sd_data0) => { ... }; ($pins:ident . sd_data1) => { ... }; ($pins:ident . sd_data2) => { ... }; ($pins:ident . sd_data3) => { ... }; ($pins:ident . xosc1_x_in) => { ... }; ($pins:ident . xosc1_clock) => { ... }; ($pins:ident . xosc1_x_out) => { ... }; ($pins:ident . edbg_uart_rx) => { ... }; ($pins:ident . edbg_uart_tx) => { ... }; ($pins:ident . ext1_spi_sck) => { ... }; ($pins:ident . ext1_spi_mosi) => { ... }; ($pins:ident . ext1_spi_cs_a) => { ... }; ($pins:ident . ext1_spi_miso) => { ... }; ($pins:ident . swo) => { ... }; ($pins:ident . button) => { ... }; ($pins:ident . ext3_gpio1_in) => { ... }; ($pins:ident . ext3_gpio1_out) => { ... }; ($pins:ident . ext3_adc_plus) => { ... }; ($pins:ident . adc1_analog4) => { ... }; ($pins:ident . ext3_adc_minus) => { ... }; ($pins:ident . adc1_analog5) => { ... }; ($pins:ident . ext2_spi_mosi) => { ... }; ($pins:ident . ext3_spi_mosi) => { ... }; ($pins:ident . dgi_spi_mosi) => { ... }; ($pins:ident . ext2_spi_sck) => { ... }; ($pins:ident . ext3_spi_sck) => { ... }; ($pins:ident . dgi_spi_sck) => { ... }; ($pins:ident . ext2_spi_cs_a) => { ... }; ($pins:ident . ext2_spi_miso) => { ... }; ($pins:ident . ext3_spi_miso) => { ... }; ($pins:ident . dgi_spi_miso) => { ... }; ($pins:ident . ext3_gpio2_in) => { ... }; ($pins:ident . ext3_gpio2_out) => { ... }; ($pins:ident . pcc_pwdn) => { ... }; ($pins:ident . eth_mdc) => { ... }; ($pins:ident . pcc_reset) => { ... }; ($pins:ident . eth_gmdio) => { ... }; ($pins:ident . ata6561_standby) => { ... }; ($pins:ident . ext3_spi_cs_a) => { ... }; ($pins:ident . pdec_phase_a) => { ... }; ($pins:ident . edbg_gpio0_in) => { ... }; ($pins:ident . edbg_gpio0_out) => { ... }; ($pins:ident . pdec_phase_b) => { ... }; ($pins:ident . edbg_gpio1_in) => { ... }; ($pins:ident . edbg_gpio1_out) => { ... }; ($pins:ident . led) => { ... }; ($pins:ident . pdec_index) => { ... }; ($pins:ident . edbg_gpio2_in) => { ... 
}; ($pins:ident . edbg_gpio2_out) => { ... }; ($pins:ident . eth_crs_dv) => { ... }; ($pins:ident . eth_reset) => { ... }; ($pins:ident . ext3_uart_tx) => { ... }; ($pins:ident . ext3_uart_rx) => { ... }; ($pins:ident . ext3_irq_gpio_in) => { ... }; ($pins:ident . ext3_irq_gpio_out) => { ... }; ($pins:ident . ext3_spi_cs_b) => { ... }; ($pins:ident . ext2_irq_gpio_in) => { ... }; ($pins:ident . ext2_irq_gpio_out) => { ... }; ($pins:ident . adc1_analog14) => { ... }; ($pins:ident . dgi_spi_cs) => { ... }; ($pins:ident . adc1_analog15) => { ... }; ($pins:ident . ext2_i2c_sda) => { ... }; ($pins:ident . ext3_i2c_sda) => { ... }; ($pins:ident . dgi_i2c_sda) => { ... }; ($pins:ident . ext2_i2c_scl) => { ... }; ($pins:ident . ext3_i2c_scl) => { ... }; ($pins:ident . dgi_i2c_scl) => { ... }; ($pins:ident . ext3_pwm_plus) => { ... }; ($pins:ident . ext3_pwm_minus) => { ... }; ($pins:ident . eth_interrupt) => { ... }; ($pins:ident . sd_cd) => { ... }; ($pins:ident . sd_wp) => { ... }; } ``` Refer to fields of the `Pins` struct by alternate names This macro can be used to access fields of the `Pins` struct by alternate names. See the `Pins` documentation for a list of the available pin aliases. For example, suppose `spi_mosi` were an alternate name for the `serial_out` pin of the `Pins` struct. 
You could use the `pin_alias!` macro to access it like this: ``` let mut peripherals = pac::Peripherals::take().unwrap(); let pins = bsp::Pins::new(peripherals.PORT); // Replace this let mosi = pins.serial_out; // With this let mosi = pin_alias!(pins.spi_mosi); ``` Struct atsame54_xpro::pins::Pins === ``` pub struct Pins { pub pa02: Pin<PA02, Reset>, pub pa03: Pin<PA03, Reset>, pub pa04: Pin<PA04, Reset>, pub pa05: Pin<PA05, Reset>, pub pa06: Pin<PA06, Reset>, pub pa07: Pin<PA07, Reset>, pub pa08: Pin<PA08, Reset>, pub pa09: Pin<PA09, Reset>, pub pa10: Pin<PA10, Reset>, pub pa11: Pin<PA11, Reset>, pub pa12: Pin<PA12, Reset>, pub pa13: Pin<PA13, Reset>, pub pa14: Pin<PA14, Reset>, pub pa15: Pin<PA15, Reset>, pub pa16: Pin<PA16, Reset>, pub pa17: Pin<PA17, Reset>, pub pa18: Pin<PA18, Reset>, pub pa19: Pin<PA19, Reset>, pub pa20: Pin<PA20, Reset>, pub pa21: Pin<PA21, Reset>, pub pa22: Pin<PA22, Reset>, pub pa23: Pin<PA23, Reset>, pub pa24: Pin<PA24, Reset>, pub pa25: Pin<PA25, Reset>, pub pa27: Pin<PA27, Reset>, pub pa30: Pin<PA30, Reset>, pub pa31: Pin<PA31, Reset>, pub pb00: Pin<PB00, Reset>, pub pb01: Pin<PB01, Reset>, pub pb02: Pin<PB02, Reset>, pub pb04: Pin<PB04, Reset>, pub pb05: Pin<PB05, Reset>, pub pb06: Pin<PB06, Reset>, pub pb07: Pin<PB07, Reset>, pub pb08: Pin<PB08, Reset>, pub pb09: Pin<PB09, Reset>, pub pb10: Pin<PB10, Reset>, pub pb11: Pin<PB11, Reset>, pub pb12: Pin<PB12, Reset>, pub pb13: Pin<PB13, Reset>, pub pb14: Pin<PB14, Reset>, pub pb15: Pin<PB15, Reset>, pub pb16: Pin<PB16, Reset>, pub pb17: Pin<PB17, Reset>, pub pb18: Pin<PB18, Reset>, pub pb19: Pin<PB19, Reset>, pub pb20: Pin<PB20, Reset>, pub pb21: Pin<PB21, Reset>, pub pb22: Pin<PB22, Reset>, pub pb23: Pin<PB23, Reset>, pub pb24: Pin<PB24, Reset>, pub pb25: Pin<PB25, Reset>, pub pb26: Pin<PB26, Reset>, pub pb27: Pin<PB27, Reset>, pub pb28: Pin<PB28, Reset>, pub pb29: Pin<PB29, Reset>, pub pb30: Pin<PB30, Reset>, pub pb31: Pin<PB31, Reset>, pub pc01: Pin<PC01, Reset>, pub pc02: Pin<PC02, 
Reset>, pub pc03: Pin<PC03, Reset>, pub pc04: Pin<PC04, Reset>, pub pc05: Pin<PC05, Reset>, pub pc06: Pin<PC06, Reset>, pub pc07: Pin<PC07, Reset>, pub pc10: Pin<PC10, Reset>, pub pc11: Pin<PC11, Reset>, pub pc12: Pin<PC12, Reset>, pub pc13: Pin<PC13, Reset>, pub pc14: Pin<PC14, Reset>, pub pc16: Pin<PC16, Reset>, pub pc17: Pin<PC17, Reset>, pub pc18: Pin<PC18, Reset>, pub pc20: Pin<PC20, Reset>, pub pc21: Pin<PC21, Reset>, pub pc22: Pin<PC22, Reset>, pub pc23: Pin<PC23, Reset>, pub pc30: Pin<PC30, Reset>, pub pc31: Pin<PC31, Reset>, pub pd00: Pin<PD00, Reset>, pub pd01: Pin<PD01, Reset>, pub pd08: Pin<PD08, Reset>, pub pd09: Pin<PD09, Reset>, pub pd10: Pin<PD10, Reset>, pub pd11: Pin<PD11, Reset>, pub pd12: Pin<PD12, Reset>, pub pd20: Pin<PD20, Reset>, pub pd21: Pin<PD21, Reset>, /* private fields */ } ``` BSP replacement for the HAL `Pins` type This type is intended to provide more meaningful names for the given pins. Fields --- `pa02: Pin<PA02, Reset>`PA02: ADC/DAC header pin This field can also be accessed using the [`pin_alias!`] macro with the following alternate names: adc_dac_header, adc0_analog0, `pa03: Pin<PA03, Reset>`PA03: Extension 2 ADC pin This field can also be accessed using the [`pin_alias!`] macro with the following alternate names: ext2_adc_minus, adc0_analog1, `pa04: Pin<PA04, Reset>`PA04: Extension 1 UART transmit pin This field can also be accessed using the [`pin_alias!`] macro with the following alternate names: adc0_analog4, ext1_uart_tx, `pa05: Pin<PA05, Reset>`PA05: Extension 1 UART receive pin This field can also be accessed using the [`pin_alias!`] macro with the following alternate names: adc0_analog5, ext1_uart_rx, `pa06: Pin<PA06, Reset>`PA06: ADC 0 analog pin 6, Extension 1 GPIO pin 1, or Extension 1 UART request to send (RTS) pin This field can also be accessed using the [`pin_alias!`] macro with the following alternate names: adc0_analog6, ext1_gpio1_in, ext1_gpio1_out, ext1_uart_rts, `pa07: Pin<PA07, Reset>`PA07: ADC 0 analog 
pin 7, Extension 1 GPIO pin 2, or Extension 1 UART clear to send (CTS) pin This field can also be accessed using the [`pin_alias!`] macro with the following alternate names: ext1_gpio2_in, ext1_gpio2_out, adc0_analog7, ext1_uart_cts, `pa08: Pin<PA08, Reset>`PA08: QSPI data pin 0 This field can also be accessed using the [`pin_alias!`] macro with the following alternate names: qspi_data0, `pa09: Pin<PA09, Reset>`PA09: QSPI data pin 1 This field can also be accessed using the [`pin_alias!`] macro with the following alternate names: qspi_data1, `pa10: Pin<PA10, Reset>`PA10: QSPI data pin 2 This field can also be accessed using the [`pin_alias!`] macro with the following alternate names: qspi_data2, `pa11: Pin<PA11, Reset>`PA11: QSPI data pin 3 This field can also be accessed using the [`pin_alias!`] macro with the following alternate names: qspi_data3, `pa12: Pin<PA12, Reset>`PA12: PCC VSync pin or Ethernet receive data pin 1 This field can also be accessed using the [`pin_alias!`] macro with the following alternate names: pcc_vsync, eth_rxd1, `pa13: Pin<PA13, Reset>`PA13: PCC HSync pin or Ethernet receive data pin 0 This field can also be accessed using the [`pin_alias!`] macro with the following alternate names: pcc_hsync, eth_rxd0, `pa14: Pin<PA14, Reset>`PA14: PCC PCLK pin or Ethernet ref clock pin This field can also be accessed using the [`pin_alias!`] macro with the following alternate names: pcc_pclk, eth_ref_clk, `pa15: Pin<PA15, Reset>`PA15: PCC XCLK pin or Ethernet receive error pin This field can also be accessed using the [`pin_alias!`] macro with the following alternate names: pcc_xclk, eth_rxer, `pa16: Pin<PA16, Reset>`PA16: Capacitive touch button or PCC Data out pin 00 This field can also be accessed using the [`pin_alias!`] macro with the following alternate names: qt_button, pcc_dout00, `pa17: Pin<PA17, Reset>`PA17: PCC data out pin 01 or Ethernet transmit enable pin This field can also be accessed using the [`pin_alias!`] macro with the following 
alternate names: pcc_dout01, eth_txen, `pa18: Pin<PA18, Reset>`PA18: PCC data out pin 02 or Ethernet transmit data pin 0 This field can also be accessed using the [`pin_alias!`] macro with the following alternate names: pcc_dout02, eth_txd0, `pa19: Pin<PA19, Reset>`PA19: PCC data out pin 03 or Ethernet transmit data pin 1 This field can also be accessed using the [`pin_alias!`] macro with the following alternate names: pcc_dout03, eth_txd1, `pa20: Pin<PA20, Reset>`PA20: SD card command pin, I2S frame sync 0 pin, or PCC data out pin 04 This field can also be accessed using the [`pin_alias!`] macro with the following alternate names: sd_cmd, i2s_fs0, pcc_dout04, `pa21: Pin<PA21, Reset>`PA21: SD card clock pin, I2S serial data out pin, or PCC data out pin 05 This field can also be accessed using the [`pin_alias!`] macro with the following alternate names: sd_clk, i2s_sdo, pcc_dout05, `pa22: Pin<PA22, Reset>`PA22: Extension 1 I2c data pin, I2S serial data pin, or PCC data out pin 06 This field can also be accessed using the [`pin_alias!`] macro with the following alternate names: ext1_i2c_sda, i2s_sdi, pcc_dout06, `pa23: Pin<PA23, Reset>`PA23: Extension 1 I2c clock pin, I2S frame sync 1 pin, or PCC data out pin 07 This field can also be accessed using the [`pin_alias!`] macro with the following alternate names: ext1_i2c_scl, i2s_fs1, pcc_dout07, `pa24: Pin<PA24, Reset>`PA24: USB Data- pin This field can also be accessed using the [`pin_alias!`] macro with the following alternate names: usb_dm, `pa25: Pin<PA25, Reset>`PA25: USB Data+ pin This field can also be accessed using the [`pin_alias!`] macro with the following alternate names: usb_dp, `pa27: Pin<PA27, Reset>`PA27: Extension 1 SPI chip select pin B This field can also be accessed using the [`pin_alias!`] macro with the following alternate names: ext1_spi_cs_b, `pa30: Pin<PA30, Reset>`PA30: Serial wire clock pin This field can also be accessed using the [`pin_alias!`] macro with the following alternate names: 
swclk, `pa31: Pin<PA31, Reset>`PA31: Serial wire data in/out This field can also be accessed using the [`pin_alias!`] macro with the following alternate names: swdio, `pb00: Pin<PB00, Reset>`PB00: Extension 2 ADC pin This field can also be accessed using the [`pin_alias!`] macro with the following alternate names: ext2_adc_plus, `pb01: Pin<PB01, Reset>`PB01: Extension 2 GPIO pin 1 This field can also be accessed using the [`pin_alias!`] macro with the following alternate names: ext2_gpio1_in, ext2_gpio1_out, `pb02: Pin<PB02, Reset>`PB02: Extension 2 SPI chip select pin B This field can also be accessed using the [`pin_alias!`] macro with the following alternate names: ext2_spi_cs_b, `pb04: Pin<PB04, Reset>`PB04: Extension 1 ADC plus pin, or ADC 1 analog pin 6 This field can also be accessed using the [`pin_alias!`] macro with the following alternate names: ext1_adc_plus, adc1_analog6, `pb05: Pin<PB05, Reset>`PB05: Extension 1 ADC minus pin, or ADC 1 analog pin 7 This field can also be accessed using the [`pin_alias!`] macro with the following alternate names: ext1_adc_minus, adc1_analog7, `pb06: Pin<PB06, Reset>`PB06: Extension 2 GPIO pin 2, or ADC 1 analog pin 8 This field can also be accessed using the [`pin_alias!`] macro with the following alternate names: ext2_gpio2_in, ext2_gpio2_out, adc1_analog8, `pb07: Pin<PB07, Reset>`PB07: Extension 1 interrupt request/GPIO pin, or ADC 1 analog pin 9 This field can also be accessed using the [`pin_alias!`] macro with the following alternate names: ext1_irq_gpio_in, ext1_irq_gpio_out, adc1_analog9, `pb08: Pin<PB08, Reset>`PB08: Extension 1 PWM+ pin, ADC 0 analog pin 2, or ADC 1 analog pin 0 This field can also be accessed using the [`pin_alias!`] macro with the following alternate names: adc0_analog2, adc1_analog0, ext1_pwm_plus, `pb09: Pin<PB09, Reset>`PB09: Extension 1 PWM- pin, ADC 0 analog pin 3, or ADC 1 analog pin 1 This field can also be accessed using the [`pin_alias!`] macro with the following alternate names: 
adc0_analog3, adc1_analog1, ext1_pwm_minus, `pb10: Pin<PB10, Reset>`PB10: QSPI serial clock This field can also be accessed using the [`pin_alias!`] macro with the following alternate names: qspi_sck, `pb11: Pin<PB11, Reset>`PB11: QSPI chip select pin This field can also be accessed using the [`pin_alias!`] macro with the following alternate names: qspi_scs, `pb12: Pin<PB12, Reset>`PB12: ATA6561 on-board CAN device transmit pin This field can also be accessed using the [`pin_alias!`] macro with the following alternate names: ata6561_tx, `pb13: Pin<PB13, Reset>`PB13: ATA6561 on-board CAN device receive pin This field can also be accessed using the [`pin_alias!`] macro with the following alternate names: ata6561_rx, `pb14: Pin<PB14, Reset>`PB14: Extension 2 PWM+ pin or PCC data out pin 08 This field can also be accessed using the [`pin_alias!`] macro with the following alternate names: ext2_pwm_plus, pcc_dout08, `pb15: Pin<PB15, Reset>`PB15: Extension 2 PWM- pin or PCC data out pin 09 This field can also be accessed using the [`pin_alias!`] macro with the following alternate names: ext2_pwm_minus, pcc_dout09, `pb16: Pin<PB16, Reset>`PB16: Extension 2 UART transmit pin This field can also be accessed using the [`pin_alias!`] macro with the following alternate names: ext2_uart_tx, `pb17: Pin<PB17, Reset>`PB17: Extension 2 UART receive pin This field can also be accessed using the [`pin_alias!`] macro with the following alternate names: ext2_uart_rx, `pb18: Pin<PB18, Reset>`PB18: SD card data pin 0 This field can also be accessed using the [`pin_alias!`] macro with the following alternate names: sd_data0, `pb19: Pin<PB19, Reset>`PB19: SD card data pin 1 This field can also be accessed using the [`pin_alias!`] macro with the following alternate names: sd_data1, `pb20: Pin<PB20, Reset>`PB20: SD card data pin 2 This field can also be accessed using the [`pin_alias!`] macro with the following alternate names: sd_data2, `pb21: Pin<PB21, Reset>`PB21: SD card data pin 3 This 
field can also be accessed using the [`pin_alias!`] macro with the following alternate names: sd_data3, `pb22: Pin<PB22, Reset>`PB22: Xosc1 XIn/clock pin This field can also be accessed using the [`pin_alias!`] macro with the following alternate names: xosc1_x_in, xosc1_clock, `pb23: Pin<PB23, Reset>`PB23: Xosc1 XOut pin This field can also be accessed using the [`pin_alias!`] macro with the following alternate names: xosc1_x_out, `pb24: Pin<PB24, Reset>`PB24: EDBG connection UART receive pin This field can also be accessed using the [`pin_alias!`] macro with the following alternate names: edbg_uart_rx, `pb25: Pin<PB25, Reset>`PB25: EDBG connection UART transmit pin This field can also be accessed using the [`pin_alias!`] macro with the following alternate names: edbg_uart_tx, `pb26: Pin<PB26, Reset>`PB26: Extension 1 SPI serial clock pin This field can also be accessed using the [`pin_alias!`] macro with the following alternate names: ext1_spi_sck, `pb27: Pin<PB27, Reset>`PB27: Extension 1 SPI MOSI pin This field can also be accessed using the [`pin_alias!`] macro with the following alternate names: ext1_spi_mosi, `pb28: Pin<PB28, Reset>`PB28: Extension 1 SPI chip select pin A This field can also be accessed using the [`pin_alias!`] macro with the following alternate names: ext1_spi_cs_a, `pb29: Pin<PB29, Reset>`PB29: Extension 1 SPI MISO pin This field can also be accessed using the [`pin_alias!`] macro with the following alternate names: ext1_spi_miso, `pb30: Pin<PB30, Reset>`PB30: Serial wire out pin This field can also be accessed using the [`pin_alias!`] macro with the following alternate names: swo, `pb31: Pin<PB31, Reset>`PB31: Switch 0, user-controlled button This field can also be accessed using the [`pin_alias!`] macro with the following alternate names: button, `pc01: Pin<PC01, Reset>`PC01: Extension 3 GPIO pin 1 This field can also be accessed using the [`pin_alias!`] macro with the following alternate names: ext3_gpio1_in, ext3_gpio1_out, `pc02: 
Pin<PC02, Reset>`PC02: Extension 3 ADC+ pin, or ADC 1 analog pin 4 This field can also be accessed using the [`pin_alias!`] macro with the following alternate names: ext3_adc_plus, adc1_analog4, `pc03: Pin<PC03, Reset>`PC03: Extension 3 ADC- pin, or ADC 1 analog pin 5 This field can also be accessed using the [`pin_alias!`] macro with the following alternate names: ext3_adc_minus, adc1_analog5, `pc04: Pin<PC04, Reset>`PC04: SPI MOSI pin for extension 2, 3, or data gateway interface This field can also be accessed using the [`pin_alias!`] macro with the following alternate names: ext2_spi_mosi, ext3_spi_mosi, dgi_spi_mosi, `pc05: Pin<PC05, Reset>`PC05: SPI serial clock pin for extension 2, 3, or data gateway interface This field can also be accessed using the [`pin_alias!`] macro with the following alternate names: ext2_spi_sck, ext3_spi_sck, dgi_spi_sck, `pc06: Pin<PC06, Reset>`PC06: Extension 2 SPI chip select pin A This field can also be accessed using the [`pin_alias!`] macro with the following alternate names: ext2_spi_cs_a, `pc07: Pin<PC07, Reset>`PC07: SPI MISO pin for extension 2, 3, or data gateway interface This field can also be accessed using the [`pin_alias!`] macro with the following alternate names: ext2_spi_miso, ext3_spi_miso, dgi_spi_miso, `pc10: Pin<PC10, Reset>`PC10: Extension 3 GPIO pin 2 This field can also be accessed using the [`pin_alias!`] macro with the following alternate names: ext3_gpio2_in, ext3_gpio2_out, `pc11: Pin<PC11, Reset>`PC11: PCC PWDN pin, or Ethernet MDC pin This field can also be accessed using the [`pin_alias!`] macro with the following alternate names: pcc_pwdn, eth_mdc, `pc12: Pin<PC12, Reset>`PC12: PCC reset pin, or ethernet GMDIO pin This field can also be accessed using the [`pin_alias!`] macro with the following alternate names: pcc_reset, eth_gmdio, `pc13: Pin<PC13, Reset>`PC13: ATA6561 on-board CAN device standby pin This field can also be accessed using the [`pin_alias!`] macro with the following alternate names: 
ata6561_standby, `pc14: Pin<PC14, Reset>`PC14: Extension 3 SPI chip select pin A This field can also be accessed using the [`pin_alias!`] macro with the following alternate names: ext3_spi_cs_a, `pc16: Pin<PC16, Reset>`PC16: Position decoder phase A pin, or embedded debugger GPIO pin 0 This field can also be accessed using the [`pin_alias!`] macro with the following alternate names: pdec_phase_a, edbg_gpio0_in, edbg_gpio0_out, `pc17: Pin<PC17, Reset>`PC17: Position decoder phase B pin, or embedded debugger GPIO pin 1 This field can also be accessed using the [`pin_alias!`] macro with the following alternate names: pdec_phase_b, edbg_gpio1_in, edbg_gpio1_out, `pc18: Pin<PC18, Reset>`PC18: LED output, position decoder index pin, or embedded debugger GPIO pin 2 This field can also be accessed using the [`pin_alias!`] macro with the following alternate names: led, pdec_index, edbg_gpio2_in, edbg_gpio2_out, `pc20: Pin<PC20, Reset>`PC20: Ethernet carrier sense / data valid pin This field can also be accessed using the [`pin_alias!`] macro with the following alternate names: eth_crs_dv, `pc21: Pin<PC21, Reset>`PC21: Ethernet reset pin This field can also be accessed using the [`pin_alias!`] macro with the following alternate names: eth_reset, `pc22: Pin<PC22, Reset>`PC22: Extension 3 UART transmit pin This field can also be accessed using the [`pin_alias!`] macro with the following alternate names: ext3_uart_tx, `pc23: Pin<PC23, Reset>`PC23: Extension 3 UART receive pin This field can also be accessed using the [`pin_alias!`] macro with the following alternate names: ext3_uart_rx, `pc30: Pin<PC30, Reset>`PC30: Extension 3 interrupt request/GPIO pin This field can also be accessed using the [`pin_alias!`] macro with the following alternate names: ext3_irq_gpio_in, ext3_irq_gpio_out, `pc31: Pin<PC31, Reset>`PC31: Extension 3 SPI chip select pin B This field can also be accessed using the [`pin_alias!`] macro with the following alternate names: ext3_spi_cs_b, `pd00: 
Pin<PD00, Reset>`PD00: Extension 2 interrupt request/GPIO pin, or ADC 1 analog pin 14 This field can also be accessed using the [`pin_alias!`] macro with the following alternate names: ext2_irq_gpio_in, ext2_irq_gpio_out, adc1_analog14, `pd01: Pin<PD01, Reset>`PD01: Data gateway interface chip select pin, or ADC 1 analog pin 15 This field can also be accessed using the [`pin_alias!`] macro with the following alternate names: dgi_spi_cs, adc1_analog15, `pd08: Pin<PD08, Reset>`PD08: Extension 2, 3, and data gateway interface I2C serial data pin This field can also be accessed using the [`pin_alias!`] macro with the following alternate names: ext2_i2c_sda, ext3_i2c_sda, dgi_i2c_sda, `pd09: Pin<PD09, Reset>`PD09: Extension 2, 3, and data gateway interface I2C serial clock pin This field can also be accessed using the [`pin_alias!`] macro with the following alternate names: ext2_i2c_scl, ext3_i2c_scl, dgi_i2c_scl, `pd10: Pin<PD10, Reset>`PD10: Extension 3 PWM+ pin This field can also be accessed using the [`pin_alias!`] macro with the following alternate names: ext3_pwm_plus, `pd11: Pin<PD11, Reset>`PD11: Extension 3 PWM- pin This field can also be accessed using the [`pin_alias!`] macro with the following alternate names: ext3_pwm_minus, `pd12: Pin<PD12, Reset>`PD12: Ethernet interrupt pin This field can also be accessed using the [`pin_alias!`] macro with the following alternate names: eth_interrupt, `pd20: Pin<PD20, Reset>`PD20: SD card card detect pin This field can also be accessed using the [`pin_alias!`] macro with the following alternate names: sd_cd, `pd21: Pin<PD21, Reset>`PD21: SD card write protect pin This field can also be accessed using the [`pin_alias!`] macro with the following alternate names: sd_wp, Implementations --- ### impl Pins #### pub fn new(port: PORT) -> Self Take ownership of the PAC [`PORT`] and split it into discrete [`Pin`]s. This struct serves as a replacement for the HAL `Pins` struct. 
It is intended to provide more meaningful names for each [`Pin`] in a BSP. Any [`Pin`] not defined by the BSP is dropped. `PORT` `Pin` `Pins` #### pub unsafe fn port(&mut self) -> PORT Take the PAC [`PORT`] The [`PORT`] can only be taken once. Subsequent calls to this function will panic. ##### Safety Direct access to the [`PORT`] could allow you to invalidate the compiler’s type-level tracking, so it is unsafe. `PORT` Auto Trait Implementations --- ### impl RefUnwindSafe for Pins ### impl Send for Pins ### impl !Sync for Pins ### impl Unpin for Pins ### impl UnwindSafe for Pins Blanket Implementations --- ### impl<T> Any for Twhere T: 'static + ?Sized, #### fn type_id(&self) -> TypeId Gets the `TypeId` of `self`. T: ?Sized, #### fn borrow(&self) -> &T Immutably borrows from an owned value. T: ?Sized, #### fn borrow_mut(&mut self) -> &mut T Mutably borrows from an owned value. #### fn from(t: T) -> T Returns the argument unchanged. ### impl<T, U> Into<U> for Twhere U: From<T>, #### fn into(self) -> U Calls `U::from(self)`. That is, this conversion is whatever the implementation of `From<T> for U` chooses to do. ### impl<T> Same<T> for T #### type Output = T Should always be `Self`### impl<T, U> TryFrom<U> for Twhere U: Into<T>, #### type Error = Infallible The type returned in the event of a conversion error.#### fn try_from(value: U) -> Result<T, <T as TryFrom<U>>::ErrorPerforms the conversion.### impl<T, U> TryInto<U> for Twhere U: TryFrom<T>, #### type Error = <U as TryFrom<T>>::Error The type returned in the event of a conversion error.#### fn try_into(self) -> Result<U, <U as TryFrom<T>>::ErrorPerforms the conversion.
Manual Payment === A minimal payment provider that allows merchants to handle payments manually. [Medusa Website](https://medusajs.com/) | [Medusa Repository](https://github.com/medusajs/medusa) Features --- * Provides a restriction-free payment provider that can be used during checkout and while processing orders. --- Prerequisites --- * [Medusa backend](https://docs.medusajs.com/development/backend/install) --- How to Install --- 1. Run the following command in the directory of the Medusa backend: ``` npm install medusa-payment-manual ``` 2. In `medusa-config.js` add the following at the end of the `plugins` array: ``` const plugins = [ // ... `medusa-payment-manual` ] ``` --- Test the Plugin --- 1. Run the following command in the directory of the Medusa backend to run the backend: ``` npm run start ``` 2. Enable the payment provider in a region in the admin. You can refer to [this User Guide](https://docs.medusajs.com/user-guide/regions/providers) to learn how to do that. Alternatively, you can use the [Admin APIs](https://docs.medusajs.com/api/admin#tag/Region/operation/PostRegionsRegion). 3. Place an order using a storefront or the [Store APIs](https://docs.medusajs.com/api/store). You should be able to use manual payment as a payment method. Readme --- ### Keywords * medusa-plugin * medusa-plugin-payment
Package ‘FME’ July 5, 2023 Version 1.3.6.3 Title A Flexible Modelling Environment for Inverse Modelling, Sensitivity, Identifiability and Monte Carlo Analysis Author <NAME> [aut, cre] (<https://orcid.org/0000-0003-4603-7100>), <NAME> [aut] (<https://orcid.org/0000-0002-4951-6468>) Maintainer <NAME> <<EMAIL>> Depends R (>= 2.6), deSolve, rootSolve, coda Imports minpack.lm, MASS, graphics, grDevices, stats, utils, minqa Suggests diagram Description Provides functions to help in fitting models to data and to perform Monte Carlo, sensitivity and identifiability analysis. It is intended to work with models written as a set of differential equations that are solved either by an integration routine from package 'deSolve', or a steady-state solver from package 'rootSolve'. However, the methods can also be used with other types of functions. License GPL (>= 2) URL http://fme.r-forge.r-project.org/ NeedsCompilation yes Repository CRAN Date/Publication 2023-07-05 15:03:04 UTC R topics documented: FME-package, collin, cross2long, gaussianWeights, Grid, Latinhyper, modCost, modCRL, modFit, modMCMC, Norm, obsplot, pseudoOptim, sensFun, sensRange, Unif FME-package A Flexible Modelling Environment for Inverse Modelling, Sensitivity, Identifiability, Monte Carlo Analysis. Description R-package FME contains functions to run complex applications of models that produce output as a function of input parameters. Although it was created to be used with models consisting of ordinary differential equations (ODE), partial differential equations (PDE) or differential algebraic equations (DAE), it can work with other models. It contains: • Functions to allow fitting of the model to data. Function modCost estimates the (weighted) residuals between model output and data, variable and model costs. Function modFit uses the output of modCost to find the best-fit parameters. 
It provides a wrapper around R’s built-in minimisation routines (optim, nlm, nlminb) and nls.lm from package minpack.lm. Package FME also includes an implementation of the pseudo-random search algorithm (function pseudoOptim).
• Function sensFun estimates the sensitivity functions of selected output variables as a function of model parameters. This is the basis of uni-variate, bi-variate and multi-variate sensitivity analysis.
• Function collin uses as input the sensitivity functions and estimates the "collinearity" index for all possible parameter sets. This multivariate sensitivity estimate measures approximate linear dependence and is useful to derive which parameter sets are identifiable given the data set.
• Function sensRange produces ’envelopes’ around the sensitivity variables, consisting of a time series or a 1-dimensional set, as a function of the sensitivity parameters.
• Function modCRL calculates the values of single variables as a function of the sensitivity parameters. This function can be used to run simple "what-if" scenarios.
• Function modMCMC runs a Markov chain Monte Carlo (Bayesian analysis). It implements the delayed rejection - adaptive Metropolis (DRAM) algorithm.
• FME also contains functions to generate multiple parameter values arranged according to a grid (Grid), multinormal (Norm) or uniform (Unif) design, and a latin hypercube sampling (Latinhyper) function.

Details

Bug corrections:
• version 1.3.6, sensFun: corrected calculation of L2 norm (now consistent with help page),
• version 1.3, modCost: minlogp was not correctly estimated if more than one observed variable (used the wrong sd).

Author(s)

<NAME>
<NAME>

References

Soetaert, K. and <NAME>. 2010. Inverse Modelling, Sensitivity and Monte Carlo Analysis in R Using Package FME. Journal of Statistical Software 33(3) 1–28.
doi:10.18637/jss.v033.i03

Examples

## Not run:
## show examples (see respective help pages for details)
example(modCost)
example(sensFun)
example(modMCMC)
example(modCRL)

## open the directory with documents
browseURL(paste(system.file(package = "FME"), "/doc", sep = ""))

## open the directory with examples
browseURL(paste(system.file(package = "FME"), "/doc/examples", sep = ""))

## the vignettes
vignette("FME")
vignette("FMEdyna")
vignette("FMEsteady")
vignette("FMEother")
vignette("FMEmcmc")

edit(vignette("FME"))
edit(vignette("FMEdyna"))
edit(vignette("FMEsteady"))
edit(vignette("FMEother"))
edit(vignette("FMEmcmc"))
## End(Not run)

collin Estimates the Collinearity of Parameter Sets

Description

Based on the sensitivity functions of model variables to a selection of parameters, calculates the "identifiability" of sets of parameters. The sensitivity functions are a matrix whose (i,j)-th element contains

  (∂y_i / ∂Θ_j) · (∆Θ_j / ∆y_i)

where y_i is an output variable at a certain (time) instance i, ∆y_i is the scaling of variable y_i, and ∆Θ_j is the scaling of parameter Θ_j.
Function collin estimates the collinearity, or identifiability, of all parameter sets or of one parameter set.
As a rule of thumb, a collinearity value less than about 20 is "identifiable".

Usage

collin(sensfun, parset = NULL, N = NULL, which = NULL, maxcomb = 5000)
## S3 method for class 'collin'
print(x, ...)
## S3 method for class 'collin'
plot(x, ...)

Arguments

sensfun model sensitivity functions as estimated by sensFun.
parset one selected parameter combination, a vector with their names or with the indices to the parameters.
N the number of parameters in the set; if NULL then all combinations will be tried. Ignored if parset is not NULL.
which the name or the index to the observed variables that should be used. Default = all observed variables.
maxcomb the maximal number of combinations that can be tested. If too large, this may produce a huge output.
The number of combinations of n parameters out of a total of p parameters is choose(p, n).
x an object of class collin.
... additional arguments passed to the methods.

Details

The collinearity is a measure of approximate linear dependence between sets of parameters. The higher its value, the more the parameters are related. "Related" means that several parameter combinations may produce similar values of the output variables.

Value

a data.frame of class collin with one row for each parameter combination (parameters as in sensfun). Each row contains:
... for each parameter whether it is present (1) or absent (0) in the set,
N the number of parameters in the set,
collinearity the collinearity value.
The data.frame returned by collin has methods for the generic functions print and plot.

Note

It is possible to use collin for selecting parameter sets that can be fine-tuned based on a data set. Thus it is a powerful technique to make model calibration routines more robust, because calibration routines often fail when parameters are strongly related.
In general, when the collinearity index exceeds 20, the linear dependence is assumed to be critical (i.e. it will not be possible or easy to estimate all the parameters in the combination together).
The procedure is explained in Omlin et al. (2001).
1. First the function collin is used to test how far a dataset can be used for estimating certain (combinations of) parameters. After selection of an ’identifiable parameter set’ (which has a low "collinearity") they are fine-tuned by calibration.
2. As the sensitivity analysis is a local analysis (i.e. its outcome depends on the current values of the model parameters) and the fitting routine is used to estimate the best values of the parameters, this is an iterative procedure. This means that identifiable parameters are determined, fitted to the data, then a newly identifiable parameter set is determined, fitted, etcetera, until convergence is reached.
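The collinearity index itself can be sketched in a few lines of base R. This is an assumed formulation following Brun et al. (2001) — normalise each column of the sensitivity matrix to unit length, then take 1/sqrt of the smallest eigenvalue of its crossproduct — and the helper name collinIndex is hypothetical; see collin for the package implementation:

```r
## Hypothetical helper illustrating the collinearity index of
## Brun et al. (2001): scale each column of the sensitivity matrix S
## to unit length, then gamma = 1/sqrt(min eigenvalue of t(S) %*% S).
collinIndex <- function(S) {
  Sn <- apply(S, 2, function(s) s / sqrt(sum(s * s)))  # unit-norm columns
  ev <- eigen(crossprod(Sn), symmetric = TRUE, only.values = TRUE)$values
  1 / sqrt(max(min(ev), 0))                            # clamp rounding noise
}

collinIndex(cbind(c(1, 0), c(0, 1)))  # orthogonal columns: index = 1
collinIndex(cbind(1:5, 2 * (1:5)))    # proportional columns: huge (Inf up to rounding)
```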
See the paper by Omlin et al. (2001) for more information.

Author(s)

<NAME> <<EMAIL>>

References

<NAME>., <NAME>. and <NAME>., 2001. Practical Identifiability Analysis of Large Environmental Simulation Models. Water Resour. Res. 37(4): 1015–1030.
<NAME>., <NAME>. and <NAME>., 2001. Biogeochemical Model of Lake Zurich: Sensitivity, Identifiability and Uncertainty Analysis. Ecol. Modell. 141: 105–123.
<NAME>. and <NAME>. 2010. Inverse Modelling, Sensitivity and Monte Carlo Analysis in R Using Package FME. Journal of Statistical Software 33(3) 1–28. doi:10.18637/jss.v033.i03

Examples

## =======================================================================
## Test collinearity values
## =======================================================================

## linearly related set... => Infinity
collin(cbind(1:5, 2*(1:5)))

## unrelated set => 1
MM <- matrix(nr = 4, nc = 2, byrow = TRUE,
             data = c(-0.400, -0.374, 0.255, 0.797,
                       0.690, -0.472, -0.546, 0.049))
collin(MM)

## =======================================================================
## Bacterial model as in Soetaert and Herman, 2009
## =======================================================================

pars <- list(gmax = 0.5, eff = 0.5, ks = 0.5, rB = 0.01, dB = 0.01)

solveBact <- function(pars) {
  derivs <- function(t, state, pars) { # returns rate of change
    with (as.list(c(state, pars)), {
      dBact <- gmax*eff*Sub/(Sub + ks)*Bact - dB*Bact - rB*Bact
      dSub  <- -gmax  *Sub/(Sub + ks)*Bact + dB*Bact
      return(list(c(dBact, dSub)))
    })
  }
  state <- c(Bact = 0.1, Sub = 100)
  tout  <- seq(0, 50, by = 0.5)
  ## ode solves the model by integration...
  return(as.data.frame(ode(y = state, times = tout, func = derivs,
                           parms = pars)))
}

out <- solveBact(pars)

## We wish to estimate parameters gmax and eff by fitting the model to
## these data:
Data <- matrix(nc = 2, byrow = TRUE, data = c(
   2, 0.14,   4, 0.2,    6, 0.38,   8, 0.42,
  10, 0.6,   12, 0.107, 14, 1.3,   16, 2.0,
  18, 3.0,   20, 4.5,   22, 6.15,  24, 11,
  26, 13.8,  28, 20.0,  30, 31,    35, 65,
  40, 61))
colnames(Data) <- c("time", "Bact")
head(Data)

Data2 <- matrix(c(2, 100, 20, 93, 30, 55, 50, 0), ncol = 2, byrow = TRUE)
colnames(Data2) <- c("time", "Sub")

## Objective function to minimise
Objective <- function (x) {    # Model cost
  pars[] <- x
  out <- solveBact(x)
  Cost <- modCost(obs = Data2, model = out)  # observed data in 2 data.frames
  return(modCost(obs = Data, model = out, cost = Cost))
}

## 1. Estimate sensitivity functions - all parameters
sF <- sensFun(func = Objective, parms = pars, varscale = 1)

## 2. Estimate the collinearity
Coll <- collin(sF)

## The larger the collinearity, the less identifiable the data set
Coll
plot(Coll, log = "y")

## 20 = magical number above which there are identifiability problems
abline(h = 20, col = "red")

## select "identifiable" sets with 4 parameters
Coll [Coll[,"collinearity"] < 20 & Coll[,"N"]==4,]

## collinearity of one selected parameter set
collin(sF, c(1, 3, 5))
collin(sF, 1:5)
collin(sF, c("gmax", "eff"))

## collinearity of all combinations of 3 parameters
collin(sF, N = 3)

## The collinearity depends on the value of the parameters:
P <- pars
P[1:2] <- 1 # was: 0.5
collin(sensFun(Objective, P, varscale = 1))

cross2long Convert a dataset in wide (crosstab) format to long (database) format

Description

Rearranges a data frame in cross tab format by putting all relevant columns below each other, replicating the independent variable and, if necessary, other specified columns. Optionally, an err column is added.
Usage

cross2long(data, x, select = NULL, replicate = NULL,
           error = FALSE, na.rm = FALSE)

Arguments

data a data frame (or matrix) with crosstab layout
x name of the independent variable to be replicated
select a vector of column names to be included (see details). All columns are included if not specified.
replicate a vector of names of variables (apart from the independent variable) that have to be replicated for every included column (e.g. experimental treatment specification).
error boolean indicating whether the final dataset in long format should contain an extra column for error values (cf. modCost); here filled with 1’s.
na.rm whether or not to remove the NAs.

Details

The original data frame is converted from a wide (crosstab) layout (one variable per column) to a long (database) layout (all variable values in one column).
As an example of both formats consider the data, called Dat, consisting of two observed variables, called "Obs1" and "Obs2", both containing two observations, at time 1 and 2:

  name  time  val  err
  Obs1     1   50    5
  Obs1     2  150   15
  Obs2     1    1  0.1
  Obs2     2    2  0.2

for the long format and

  time  Obs1  Obs2
     1    50     1
     2   150     2

for the crosstab format.
The parameters x, select, and replicate should be disjoint. Although the independent variable always has to be replicated, it should not be given by the replicate parameter.

Value

A data frame with the following columns:
name Column containing the column names of the original crosstab data frame, data.
x A replication of the independent variable.
y The actual data stacked upon each other in one column.
err Optional column, filled with NA values (necessary for some other functions).
... all other columns from the original dataset that had to be replicated (indicated by the parameter replicate).

Author(s)

<NAME> <<EMAIL>>

References

<NAME>. and <NAME>. 2010. Inverse Modelling, Sensitivity and Monte Carlo Analysis in R Using Package FME. Journal of Statistical Software 33(3) 1–28.
doi:10.18637/jss.v033.i03

Examples

## =======================================================================
## Suppose we have measured sediment oxygen concentration profiles
## =======================================================================

depth  <- 0:7
O2mud  <- c( 6, 1, 0.5, 0.1, 0.05, 0, 0, 0)
O2silt <- c( 6, 5, 3, 2, 1.5, 1, 0.5, 0)
O2sand <- c( 6, 6, 5, 4, 3, 2, 1, 0)
zones  <- c("a", "b", "b", "c", "c", "d", "d", "e")
oxygen <- data.frame(depth = depth, zone = zones,
                     mud = O2mud, silt = O2silt, sand = O2sand)

cross2long(data = oxygen, x = depth, select = c(silt, mud), replicate = zone)

cross2long(data = oxygen, x = depth, select = c(mud, -silt), replicate = zone)

# twice the same column name: replicates
colnames(oxygen)[4] <- "mud"
cross2long(data = oxygen, x = depth, select = mud)

gaussianWeights A kernel average smoother function to weigh residuals according to a Gaussian density function

This function is still experimental... use with care

Description

A calibration dataset in database format (cf. modCost for the database format) is extended in order to fit model output using a weighted least squares approach. To this end, the observations are replicated a certain number of times, and weights are assigned to the replicates according to a Gaussian density function. This density has the relevant observation as mean value. The standard deviation, provided as a parameter, determines the number of inserted replicate observations (see Details).
This weighted regression approach may be interesting when discontinuities exist in the observational data. Under these circumstances small changes in the timing (or, more generally, the position along the axis of the independent variable) of the model output may have a disproportionate impact on the overall goodness-of-fit (e.g. timing of nutrient depletion). Additionally, this approach may be used to model uncertainty in the independent variable (e.g. slices of sediment profiles, or the timing of a sampling).
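The weighting idea — evaluate a normal density at the model points near each observation and normalise so the weights sum to 1 — can be sketched in base R. This is a minimal illustration with made-up numbers (x_obs, spread, x_model are invented for the sketch), not the package implementation:

```r
## One observation at x = 2 is spread over nearby model points; weights
## follow a Gaussian density centred on the observation (made-up values).
x_obs   <- 2
spread  <- 0.5
x_model <- seq(x_obs - 3 * spread, x_obs + 3 * spread, by = 0.25)

w <- dnorm(x_model, mean = x_obs, sd = spread)  # unnormalised weights
w <- w / sum(w)                                 # normalise: sum(w) == 1

round(w, 3)  # largest weight at the observation itself
```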
Usage

gaussianWeights(obs, x = x, y = y, xmodel, spread, weight = "none",
                aggregation = x, ordering)

Arguments

obs dataset in long (database) format as is typically used by modCost
x name of the independent variable (typically x, cf. modCost) in obs. Defaults to x (not given as character string; cf. subset)
y name of the dependent variable in obs. Defaults to y.
xmodel an ordered vector of unique times at which model output is produced. If not given, the independent variable of the observational dataset is used.
spread standard deviation used to calculate the weights from a normal density function. This value also determines the number of points from the model output that are compared to a specific observation in obs (2 * 3 * spread + 1; containing 99.7% of the Gaussian distribution, centered around the observation of interest).
weight scaling factor of the modCost function ("sd", "mean", or "none"). The Gaussian weights are multiplied by this factor to account for differences in units.
aggregation vector of column names from the dataset that are used to aggregate observations while calculating the scaling factor. Defaults to the variable name, "name".
ordering Optional extra grouping and ordering of observations. Given as a vector of variable names. If none given, ordering will be done by variable name and independent variable. If both aggregation and ordering variables are given, ordering will be done as follows: x within ordering (in reverse order) within aggregation (in reverse order). Aggregation and ordering should be disjoint sets of variable names.

Details

Suppose:
spread = 1/24 (days; = 1 hour)
x = time in days, 1 per hour
Then obs_i is replicated 7 times (spread = observational periodicity = 1 hour):
=> obs_i-3 = ... = obs_i-1 = obs_i = obs_i+1 = ...
= obs_i+3
The weights (W_i+j, for j = -3 ... 3) are calculated as follows:

  W’_i+j = 1/(spread * sqrt(2*pi)) * exp(-1/2 * ((obs_i+j - obs_i)/spread)^2)
  W_i+j  = W’_i+j / sum(W’_i-3, ..., W’_i+3)

(such that their sum equals 1)

Value

A modified version of obs is returned with the following extensions:
1. Each observation obs[i] is replicated n times, where n represents the number of xmodel values within the interval [obs_i - 3 * spread, obs_i + 3 * spread].
2. These replicate observations get the same x values as their modeled counterparts (xmodel).
3. Weights are given in a column called "err".
The returned data frame has the following columns:
• "name" or another name specified by the first element of aggregation. Usually this column contains the names of the observed variables.
• "x" or another name specified by x
• "y" or another name specified by y
• "err" containing the calculated weights
• The rest of the columns of the data frame given by obs in that order.

Author(s)

<NAME> <<EMAIL>>

Examples

## =======================================================================
## A Sediment example
## =======================================================================

## Sediment oxygen concentration is measured every
## centimeter in 3 sediment types
depth        <- 0:7
observations <- data.frame(
  profile = rep(c("mud", "silt", "sand"), each = 8),
  depth   = depth,
  O2      = c(c(6, 1, 0.5, 0.1, 0.05, 0, 0, 0),
              c(6, 5, 3, 2, 1.5, 1, 0.5, 0),
              c(6, 6, 5, 4, 3, 2, 1, 0))
)

## A model generates profiles with a depth resolution of 1 millimeter
modeldepths <- seq(0, 9, by = 0.05)

## All these model outputs are compared with weighted observations.
gaussianWeights(obs = observations, x = depth, y = O2,
                xmodel = modeldepths, spread = 0.1,
                weight = "none", aggregation = profile)

# Weights of one observation in silt at depth 2:
Sub <- subset(observations, subset = (profile == "silt" & depth == 2))
plot(Sub[,-1])
SubWW <- gaussianWeights(obs = Sub, x = depth, y = O2,
                         xmodel = modeldepths, spread = 0.5,
                         weight = "none", aggregation = profile)
SubWW[,-1]

Grid Grid Distribution

Description

Generates parameter sets arranged on a regular grid.

Usage

Grid(parRange, num)

Arguments

parRange the range (min, max) of the parameters, a matrix or a data.frame with one row for each parameter, and two columns with the minimum (1st) and maximum (2nd) value.
num the number of parameter sets to generate.

Details

The grid design produces the most regular parameter distribution; there is no randomness involved. The number of parameter sets generated with Grid will be <= num.

Value

a matrix with one row for each generated parameter set, and one column per parameter.

Author(s)

<NAME> <<EMAIL>>

See Also

Norm for (multi)normally distributed random parameter sets.
Latinhyper to generate parameter sets using latin hypercube sampling.
Unif for uniformly distributed random parameter sets.
seq the R-default for generating regular sequences of numbers.

Examples

## 4 parameters
parRange <- data.frame(min = c(0, 1, 2, 3), max = c(10, 9, 8, 7))
rownames(parRange) <- c("par1", "par2", "par3", "par4")

## grid
pairs(Grid(parRange, 500), main = "Grid")

Latinhyper Latin Hypercube Sampling

Description

Generates random parameter sets using a latin hypercube sampling algorithm.

Usage

Latinhyper(parRange, num)

Arguments

parRange the range (min, max) of the parameters, a matrix or a data.frame with one row for each parameter, and two columns with the minimum (1st) and maximum (2nd) value.
num the number of random parameter sets to generate.
Details

In latin hypercube sampling, the space for each parameter is subdivided into num equally-sized segments and one parameter value is drawn randomly in each of the segments.

Value

a matrix with one row for each generated parameter set, and one column per parameter.

Note

The latin hypercube distributed parameter sets give better coverage in parameter space than the uniform random design (Unif). It is a reasonable choice in case the number of parameter sets is limited.

Author(s)

<NAME> <<EMAIL>>

References

Press, <NAME>., <NAME>., <NAME>. and <NAME>. (2007) Numerical Recipes in C. Cambridge University Press.

See Also

Norm for (multi)normally distributed random parameter sets.
Unif for uniformly distributed random parameter sets.
Grid to generate parameter sets arranged on a regular grid.

Examples

## 4 parameters
parRange <- data.frame(min = c(0, 1, 2, 3), max = c(10, 9, 8, 7))
rownames(parRange) <- c("par1", "par2", "par3", "par4")

## Latin hypercube
pairs(Latinhyper(parRange, 100), main = "Latin hypercube")

modCost Calculates the Discrepancy of a Model Solution with Observations

Description

Given a solution of a model and observed data, estimates the residuals, and the variable and model costs (sum of squared residuals).

Usage

modCost(model, obs, x = "time", y = NULL, err = NULL,
        weight = "none", scaleVar = FALSE, cost = NULL, ...)

Arguments

model model output, as generated by the integration routine or the steady-state solver, a matrix or a data.frame, with one column per dependent and independent variable.
obs the observed data, either in long (database) format (name, x, y), a data.frame, or in wide (crosstable, or matrix) format - see details.
x the name of the independent variable; it should be a name occurring both in the obs and model data structures.
y either NULL, the name of the column with the dependent variable values, or an index to the dependent variable values; if NULL then the observations are assumed to be in crosstable (matrix) format, and the names of the independent variables are given by the column names of this matrix.
err either NULL, or the name of the column with the error estimates, used to weigh the residuals (see details); if NULL, then the residuals are not weighed.
cost if not NULL, the output of a previous call to modCost; in this case, the new output will combine both.
weight only if err = NULL: how to weigh the residuals, one of "none", "std", "mean", see details.
scaleVar if TRUE, then the residuals of one observed variable are scaled respectively to the number of observations (see details).
... additional arguments passed to R-function approx.

Details

This function compares model output with observed data.
It computes
1. the weighted residuals, one for each data point.
2. the variable costs, i.e. the sum of squared weighted residuals per variable.
3. the model cost, the scaled sum of variable costs.
There are three steps:
1. For any observed data point, i, the weighted residuals are estimated as:

  res_i = (Mod_i - Obs_i) / error_i

with weight_i = 1/err_i, and where Mod_i and Obs_i are the modeled, respectively observed value of data point i.
The weights are equal to 1/error, where the latter can be inputted, one for each data point, by specifying err as an extra column in the observed data. This can only be done when the data input is in long (database) format.
When err is not inputted, then the weights are specified via argument weight, which is either:
• "none", which sets the weight equal to 1 (the default)
• "std", which sets the weights equal to the reciprocal of the standard deviation of the observed data (can only be used if there is more than 1 data point)
• "mean", which uses 1/mean of the absolute value of the observed data (can only be used if not 0).
2.
Then for each observed variable, j, a variable cost is estimated as the sum of squared weighted residuals for this variable:

  Costvar_j = sum(res_i^2, i = 1 ... n_j)

where n_j is the number of observations for observed variable j.
3. Finally, the model cost is estimated as the scaled sum of variable costs:

  ModCost = sum(Costvar_j / scalevar_j, j = 1 ... nv)

where scalevar_j allows to scale the variable costs relative to the number of observations, and nv is the number of observed variables. This is set by specifying argument scaleVar. If TRUE, then the variable costs are rescaled. The default is NOT to rescale (i.e. scalevar_j = 1).
The models typically consist of (a system of) differential equations, which are either solved by:
• integration routines, e.g. the routines from package deSolve,
• steady-state estimators, as from package rootSolve.
The data can be presented in two formats:
• data table (long) format; this is a two to four column data.frame that contains the name of the observed variable (always the FIRST column), the (optional) value of the independent variable (default column name = "time"), the value of the observation and the (optional) value of the error. For data presented in this format, the names of the column(s) with the independent variable (x) and the name of the column that has the value of the dependent variable y must be passed to function modCost.
• crosstable (wide) format; this is a matrix, where each column denotes one dependent (or independent) variable; the column name is the name of the observed variable. When using this format, only the name of the column that contains the independent variable must be specified (x).
As an example of both formats consider the data, called Dat, consisting of two observed variables, called "Obs1" and "Obs2", both containing two observations, at time 1 and 2:

  name  time  val  err
  Obs1     1   50    5
  Obs1     2  150   15
  Obs2     1    1  0.1
  Obs2     2    2  0.2

for the long format and

  time  Obs1  Obs2
     1    50     1
     2   150     2

for the crosstab format.
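The two layouts can be reproduced as R objects; a minimal sketch (the object names DatLong and DatWide are illustrative only), using base R's reshape to check that both layouts carry the same values:

```r
## long (database) format: one row per observation
DatLong <- data.frame(name = rep(c("Obs1", "Obs2"), each = 2),
                      time = c(1, 2, 1, 2),
                      val  = c(50, 150, 1, 2),
                      err  = c(5, 15, 0.1, 0.2))

## crosstab (wide) format: one column per observed variable
DatWide <- data.frame(time = c(1, 2), Obs1 = c(50, 150), Obs2 = c(1, 2))

## long -> wide with base R; note the err column cannot be carried over,
## which is why the wide format has no per-point errors
w <- reshape(DatLong[, c("name", "time", "val")], idvar = "time",
             timevar = "name", direction = "wide")
stopifnot(all(w[, c("val.Obs1", "val.Obs2")] == DatWide[, c("Obs1", "Obs2")]))
```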
Note, that in the latter case it is not possible to provide separate errors per data point.
By calling modCost several consecutive times (using the cost argument), it is possible to combine both types of data files.

Value

a list of type modCost containing:
model one value, the model cost, which equals the sum of scaled variable costs (see details).
minlogp one value, -log(model probability), where it is assumed that the data are normally distributed, with standard deviation = error.
var the variable costs, a data.frame with, for each observed variable, the following (see details):
• name, the name of the observed variable.
• scale, the scale-factor used to weigh the variable cost, either 1 or 1/(number of observations), defaults to 1.
• N, the number of data points per observed variable.
• SSR.unweighted, the sum of squared residuals per observed variable, unweighted.
• SSR, the sum of weighted squared residuals per observed variable (see details).
residuals the data residuals, a data.frame with several columns:
• name, the name of the observed variable.
• x, the value of the independent variable (if present).
• obs, the observed variable value.
• mod, the corresponding modeled value.
• weight, the factor used to weigh the residuals, 1/error, defaults to 1.
• res, the weighted residuals between model and observations (mod-obs)*weight.
• res.unweighted, the residuals between model and observations (mod-obs).

Note

In the future, it should be possible to have more than one independent variable present. This is not yet implemented, but it should allow e.g. to fit time series of spatially dependent variables.

Author(s)

Karline Soetaert <<EMAIL>>

References

Soetaert, K. and <NAME>. 2010. Inverse Modelling, Sensitivity and Monte Carlo Analysis in R Using Package FME. Journal of Statistical Software 33(3) 1–28.
doi:10.18637/jss.v033.i03

Examples

## =======================================================================
## Type 1 input: name, time, value
## =======================================================================

## Create new data: two observed variables, "a", "b"
Data <- data.frame(name = c(rep("a", 4), rep("b", 4)),
                   time = c(1:4, 2:5), val = c(runif(4), 1:4))

## "a nonsense model"
Mod <- function (t, y, par) {
  da <- 0
  db <- 1
  return(list(c(da, db)))
}

out <- ode(y = c(a = 0.5, b = 0.5), times = 0:6, func = Mod, parms = NULL)

Data # Show out

## The cost function
modCost(model = out, obs = Data, y = "val")

## The cost function with a data error added
Dat2 <- cbind(Data, Err = Data$val*0.1) # error = 10% of value
modCost(model = out, obs = Dat2, y = "val", err = "Err")

## =======================================================================
## Type 2 input: Matrix format; column names = variable names
## =======================================================================

## logistic growth model
TT <- seq(1, 100, 2.5)
N0 <- 0.1
r  <- 0.5
K  <- 100

## analytical solution
Ana <- cbind(time = TT, N = K/(1 + (K/N0 - 1) * exp(-r*TT)))

## numeric solution
logist <- function(t, x, parms) {
  with(as.list(parms), {
    dx <- r * x[1] * (1 - x[1]/K)
    list(dx)
  })
}

time  <- 0:100
parms <- c(r = r, K = K)
x     <- c(N = N0)

## Compare several numerical solutions
Euler <- ode(x, time, logist, parms, hini = 2, method = "euler")
Rk4   <- ode(x, time, logist, parms, hini = 2, method = "rk4")
Lsoda <- ode(x, time, logist, parms)  # lsoda is default method
Ana2  <- cbind(time = time, N = K/(1 + (K/N0 - 1) * exp(-r * time)))

## the SSR and residuals with respect to the "data"
cEuler <- modCost(Euler, Ana)$model
cRk4   <- modCost(Rk4 ,  Ana)$model
cLsoda <- modCost(Lsoda, Ana)$model
cAna   <- modCost(Ana2 , Ana)$model

compare <- data.frame(method = c("euler", "rk4", "lsoda", "Ana"),
                      cost   = c(cEuler, cRk4, cLsoda, cAna))

## Plot Euler, RK and analytic solution
plot(Euler, Rk4, col = c("red",
"blue"), obs = Ana, main = "logistic growth", xlab = "time", ylab = "N") legend("bottomright", c("exact", "euler", "rk4"), pch = c(1, NA, NA), col = c("black", "red", "blue"), lty = c(NA, 1, 2)) legend("right", ncol = 2, title = "SSR", legend = c(as.character(compare[,1]), format(compare[,2], digits = 2))) compare ## ======================================================================= ## Now suppose we do not know K and r and they are to be fitted... ## The "observations" are the analytical solution ## ======================================================================= ## Run the model with initial guess: K = 10, r = 2 parms["K"] <- 10 parms["r"] <- 2 init <- ode(x, time, logist, parms) ## FITTING algorithm uses modFit ## First define the objective function (model cost) to be minimised ## more general: using modFit Cost <- function(P) { parms["K"] <- P[1] parms["r"] <- P[2] out <- ode(x, time, logist, parms) return(modCost(out, Ana)) } (Fit<-modFit(p = c(K = 10, r = 2), f = Cost)) summary(Fit) ## run model with the optimized value: parms[c("K", "r")] <- Fit$par fitted <- ode(x, time, logist, parms) ## show results, compared with "observations" plot(init, fitted, col = c("green", "blue"), lwd = 2, lty = 1, obs = Ana, obspar = list(col = "red", pch = 16, cex = 2), main = "logistic growth", xlab = "time", ylab = "N") legend("right", c("initial", "fitted"), col = c("green", "blue"), lwd = 2) Cost(Fit$par) modCRL Monte Carlo Analysis Description Given a model consisting of differential equations, estimates the global effect of certain (sensitivity) parameters on selected sensitivity variables. This is done by drawing parameter values according to some predefined distribution, running the model with each of these parameter combinations, and calculating the values of the selected output variables at each output interval. This function is useful for “what-if” scenarios. If the output variables consist of a time-series or spatially dependent, use sensRange instead. 
Usage

modCRL(func, parms = NULL, sensvar = NULL, dist = "unif",
       parInput = NULL, parRange = NULL, parMean = NULL,
       parCovar = NULL, num = 100, ...)

## S3 method for class 'modCRL'
summary(object, ...)
## S3 method for class 'modCRL'
plot(x, which = NULL, trace = FALSE, ask = NULL, ...)
## S3 method for class 'modCRL'
pairs(x, which = 1:ncol(x), nsample = NULL, ...)
## S3 method for class 'modCRL'
hist(x, which = 1:ncol(x), ask = NULL, ...)

Arguments

func an R-function that has as first argument parms and that returns a vector with variables whose sensitivity should be estimated.
parms parameters passed to func; should be either a vector, or a list with named elements. If NULL, then the first element of parInput is taken.
sensvar the output variables for which the sensitivity needs to be estimated. Either NULL, the default = all output variables, or a vector with output variable names (which should be present in the vector returned by func), or a vector with indices to output variables as present in the output vector returned by func.
dist the distribution according to which the parameters should be generated, one of "unif" (uniformly random samples), "norm" (normally distributed random samples), "latin" (latin hypercube distribution), "grid" (parameters arranged on a grid). The input parameters for the distribution are specified by parRange (min, max), except for the normally distributed parameters, in which case the distribution is specified by the parameter means parMean and the variance-covariance matrix, parCovar. Note that, if the distribution is "norm" and parRange is given, then a truncated distribution will be generated. (This is useful to prevent, for instance, certain parameters from becoming negative.) Ignored if parInput is specified.
parRange the range (min, max) of the sensitivity parameters, a matrix or (preferred) a data.frame with one row for each parameter, and two columns with the minimum (1st) and maximum (2nd) value.
The rownames of parRange should be parameter names that are known in argument parms. Ignored if parInput is specified. parInput a matrix with dimension (*, npar) with the values of the sensitivity parameters. parMean only when dist is "norm": the mean value of each parameter. Ignored if parInput is specified. parCovar only when dist is "norm": the parameter’s variance-covariance matrix. num the number of times the model has to be run. Set large enough. If parInput is specified, then num parameters are selected randomly (from the rows of parInput). object an object of class modCRL. x an object of class modCRL. which the name or the index to the variables and parameters that should be plotted. Default = all variables and parameters. nsample the number of xy pairs to be plotted on the upper panel in the pairs plot. When NULL all xy pairs are plotted. Set to a lower number in case the graph becomes too dense (and the exported picture too large). This does not affect the histograms on the diagonal plot (which are estimated using all values). trace if TRUE, adds smoothed line to the plot. ask logical; if TRUE, the user is asked before each plot, if NULL the user is only asked if more than one page of plots is necessary and the current graphics device is set interactive, see par(ask=.) and dev.interactive. ... additional arguments passed to function func or to the methods.
Value
a data.frame of type modCRL containing the parameter(s) and the corresponding values of the sensitivity output variables. The list returned by modCRL has a method for the generic functions summary and plot – see note.
Note The following methods are included: • summary, estimates summary statistics for the sensitivity variables, a table with as many rows as there are variables (or elements in the vector returned by func) and the following columns: x, the mapping value, Mean, the mean, sd, the standard deviation, Min, the minimal value, Max, the maximal value, q25, q50, q75, the 25th, 50 and 75% quantile. • plot, produces a plot of the modCRL output, either one plot for each sensitivity variable and with the parameter value on the x-axis. This only works when there is only one parameter! OR one plot for each parameter value on the x-axis. This only works when there is only one variable! • hist, produces a histogram of the modCRL output parameters and variables. • pairs, produces a pairs plot of the modCRL output. The data.frame of type modCRL has several attributes, which remain hidden, and which are generally not of practical use (they are needed for the S3 methods). There is one exception - see notes in help of sensRange. Author(s) <NAME> <<EMAIL>>. References Soetaert, K. and Petzoldt, T. 2010. Inverse Modelling, Sensitivity and Monte Carlo Analysis in R Using Package FME. Journal of Statistical Software 33(3) 1–28. doi:10.18637/jss.v033.i03 Examples ## ======================================================================= ## Bacterial growth model as in Soetaert and Herman, 2009 ## ======================================================================= pars <- list(gmax = 0.5,eff = 0.5, ks = 0.5, rB = 0.01, dB = 0.01) solveBact <- function(pars) { derivs <- function(t,state,pars) { # returns rate of change with (as.list(c(state,pars)), { dBact <- gmax*eff * Sub/(Sub + ks)*Bact - dB*Bact - rB*Bact dSub <- -gmax * Sub/(Sub + ks)*Bact + dB*Bact return(list(c(dBact, dSub))) }) } state <- c(Bact = 0.1, Sub = 100) tout <- seq(0, 50, by = 0.5) ## ode solves the model by integration... 
return(as.data.frame(ode(y = state, times = tout, func = derivs, parms = pars))) } out <- solveBact(pars) plot(out$time, out$Bact, main = "Bacteria", xlab = "time, hour", ylab = "molC/m3", type = "l", lwd = 2) ## Function that returns the last value of the simulation SF <- function (p) { pars["eff"] <- p out <- solveBact(pars) return(out[nrow(out), 2:3]) } parRange <- matrix(nr = 1, nc = 2, c(0.2, 0.8), dimnames = list("eff", c("min", "max"))) parRange CRL <- modCRL(func = SF, parRange = parRange) plot(CRL) # plots both variables plot(CRL, which = c("eff", "Bact"), trace = FALSE) #selects one modFit Constrained Fitting of a Model to Data Description Fitting a model to data, with lower and/or upper bounds Usage modFit(f, p, ..., lower = -Inf, upper = Inf, method = c("Marq", "Port", "Newton", "Nelder-Mead", "BFGS", "CG", "L-BFGS-B", "SANN", "Pseudo", "bobyqa"), jac = NULL, control = list(), hessian = TRUE) ## S3 method for class 'modFit' summary(object, cov=TRUE,...) ## S3 method for class 'modFit' deviance(object, ...) ## S3 method for class 'modFit' coef(object, ...) ## S3 method for class 'modFit' residuals(object, ...) ## S3 method for class 'modFit' df.residual(object, ...) ## S3 method for class 'modFit' plot(x, ask = NULL, ...) ## S3 method for class 'summary.modFit' print(x, digits = max(3, getOption("digits") - 3), ...) Arguments f a function to be minimized, with first argument the vector of parameters over which minimization is to take place. It should return either a vector of residuals (of model versus data) or an element of class modCost (as returned by a call to modCost. p initial values for the parameters to be optimized over. ... additional arguments passed to function f (modFit) or passed to the methods. lower lower bounds on the parameters; if unbounded set equal to -Inf. upper upper bounds on the parameters; if unbounded set equal to Inf. 
method The method to be used, one of "Marq", "Port", "Newton", "Nelder-Mead", "BFGS", "CG", "L-BFGS-B", "SANN", "Pseudo", "bobyqa" - see details. jac A function that calculates the Jacobian; it should be called as jac(x, ...) and return the matrix with derivatives of the model residuals as a function of the parameters. Supplying the Jacobian can substantially improve performance; see last example. hessian TRUE if Hessian is to be estimated. Note that, if set to FALSE, then a summary cannot be estimated. control additional control arguments passed to the optimisation routine - see details of nls.lm ("Marq"), nlminb ("Port"), optim ("Nelder-Mead", "BFGS", "CG", "L-BFGS-B", "SANN"), nlm ("Newton") or pseudoOptim ("Pseudo"). object an object of class modFit. x an object of class modFit. digits number of significant digits in printout. cov when TRUE also calculates the parameter covariances. ask logical; if TRUE, the user is asked before each plot, if NULL the user is only asked if more than one page of plots is necessary and the current graphics device is set interactive, see par(ask=.) and dev.interactive.
Details
Note that arguments after ... must be matched exactly. The method to be used is specified by argument method which can be one of the methods from function optim: • "Nelder-Mead", the default from optim, • "BFGS", a quasi-Newton method, • "CG", a conjugate-gradient method, • "L-BFGS-B", constrained quasi-Newton method, • "SANN", method of simulated annealing. Or one of the following: • "Marq", the Levenberg-Marquardt algorithm (nls.lm from package minpack) - the default. Note that this method is the only least squares method. • "Newton", a Newton-type algorithm (see nlm), • "Port", the Port algorithm (see nlminb), • "Pseudo", a pseudorandom-search algorithm (see pseudoOptim), • "bobyqa", derivative-free optimization by quadratic approximation from package minqa.
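Most of these methods do not handle box constraints natively; as described under Details, modFit then imposes bounds through a tangent / arc tangent transformation of the parameters. A base-R sketch of that mapping (illustrative only, not the FME internals):

```r
## Map an unconstrained value p' in (-Inf, Inf) into (lower, upper)
to_bounded <- function(p0, lower, upper)
  (upper + lower)/2 + (upper - lower) * atan(p0)/pi

## Inverse mapping: (lower, upper) -> (-Inf, Inf)
to_unbounded <- function(p, lower, upper)
  tan(pi/2 * (2*p - upper - lower)/(upper - lower))

## Round trip: a bounded value survives the transform pair
p0 <- to_unbounded(3.7, lower = 0, upper = 10)
to_bounded(p0, lower = 0, upper = 10)   # recovers 3.7

## Extreme unconstrained values stay strictly inside the bounds
to_bounded(1e8, 0, 10); to_bounded(-1e8, 0, 10)
```

Because the optimiser only ever sees the unconstrained value, no explicit constraint handling is needed in the search itself.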
For difficult problems it may be efficient to perform some iterations with Pseudo, which will bring the algorithm near the vicinity of a (the) minimum, after which the default algorithm (Marq) is used to locate the minimum more precisely. The implementation for the routines from optim differs from constrOptim which implements an adaptive barrier algorithm and which allows a more flexible implementation of linear constraints. For all methods except L-BFGS-B, Port, Pseudo, and bobyqa that handle box constraints internally, bounds on parameters are imposed by a transformation of the parameters to be fitted. In case both lower and upper bounds are specified, this is achieved by a tangent and arc tangent transformation. That is, parameter values p′ generated by the optimisation routine, which are located in the range [-Inf, Inf], are transformed before they are passed to f as: p = (upper + lower)/2 + (upper − lower) · arctan(p′)/π, which maps them into the interval [lower, upper]. Before the optimisation routine is called, the original parameter values, as given by argument p, are mapped from [lower, upper] to [-Inf, Inf] by: p′ = tan(π/2 · (2p − upper − lower)/(upper − lower)). In case only lower or upper bounds are specified, this is achieved by a log transformation and a corresponding exponential back transformation. In case parameters are transformed (all methods) or in case the method Port, Pseudo, Marq or bobyqa is selected, the Hessian is approximated as 2 · Jᵀ · J, where J is the Jacobian, estimated by finite differences. This ignores the second derivative terms, but this is reasonable if the method has truly converged to the minimum. Note that finite differences are not extremely precise. In case the Levenberg-Marquardt method (Marq) is used, and parameters are not transformed, 0.5 times the Hessian of the least squares problem is returned by nls.lm, the original Marquardt algorithm.
To make it compatible, this value is multiplied with 2 and the TRUE Hessian is thus returned by modFit.
Value
a list of class modFit containing the results as returned from the called optimisation routines. This includes the following: par the best set of parameters found. ssr the sum of squared residuals, evaluated at the best set of parameters. Hessian A symmetric matrix giving an estimate of the Hessian at the solution found - see note. residuals the result of the last f evaluation; that is, the residuals. ms the mean squared residuals, i.e. ssr/length(residuals). var_ms the weighted and scaled variable mean squared residuals, one value per observed variable; only when f returns an element of class modCost; NA otherwise. var_ms_unscaled the weighted, but not scaled variable mean squared residuals. var_ms_unweighted the raw variable mean squared residuals, unscaled and unweighted. ... any other arguments returned by the called optimisation routine. Note: this means that some return arguments of the original optimisation functions are renamed. More specifically, "objective" and "counts" from routine nlminb (method = "Port") are renamed "value" and "counts"; "niter" and "minimum" from routine nls.lm (method = "Marq") are renamed "counts" and "value"; "minimum" and "estimate" from routine nlm (method = "Newton") are renamed. The list returned by modFit has methods for summary, deviance, coef, residuals, df.residual and print.summary – see note.
Note
The summary method is based on an estimate of the parameter covariance matrix. In computing the covariance matrix of the fitted parameters, the problem is treated as if it were a linear least squares problem, linearizing around the parameter values that minimize Chi². The covariance matrix is estimated as 1/(0.5 · Hessian). This computation relies on several things, i.e.: 1. the parameter values are located at the minimum (i.e. the fitting algorithm has converged). 2.
the observations yj are subject to independent errors whose variances are well estimated by 1/(n − p) times the residual sum of squares (where n = number of data points, p = number of parameters). 3. the model is not too nonlinear. This means that the estimated covariance (correlation) matrix and the confidence intervals derived from it may be worthless if the assumptions behind the covariance computation are invalid. If in doubt about the validity of the summary computations, use Monte Carlo fitting instead, or run a modMCMC. Other methods included are: • deviance, which returns the model deviance, • coef, which extracts the values of the fitted parameters, • residuals, which extracts the model residuals, • df.residual, which returns the residual degrees of freedom, • print.summary, producing a nice printout of the summary. Specifying a function to estimate the Jacobian matrix via argument jac may increase speed. The Jacobian is used in the methods "Marq", "BFGS", "CG", "L-BFGS", "Port", and is also used at the end, to estimate the Hessian at the optimum. Specification of the gradient in routines "BFGS", "CG", "L-BFGS" from optim and "port" from nlminb is not allowed here. Within modFit, the gradient is rather estimated from the Jacobian jac and the function f.
Author(s)
<NAME> <<EMAIL>>, <NAME> <<EMAIL>>
References
<NAME>., <NAME>. and Varadhan, R. 2014. minqa: Derivative-free optimization algorithms by quadratic approximation. R package. https://cran.r-project.org/package=minqa
<NAME>., 1990. Usage Summary for Selected Optimization Routines. Computing Science Technical Report No. 153. AT&T Bell Laboratories, Murray Hill, NJ 07974.
<NAME>. (2009). The BOBYQA algorithm for bound constrained optimization without derivatives. Report No. DAMTP 2009/NA06, Centre for Mathematical Sciences, University of Cambridge, UK. https://www.damtp.cam.ac.uk/user/na/NA_papers/NA2009_06.pdf
Press, <NAME>., <NAME>., <NAME>. and <NAME>., 2007. Numerical Recipes in C.
Cambridge University Press. Price, W.L., 1977. A Controlled Random Search Procedure for Global Optimisation. The Computer Journal, 20: 367-370. doi:10.1093/comjnl/20.4.367 <NAME>. and <NAME>. 2010. Inverse Modelling, Sensitivity and Monte Carlo Analysis in R Using Package FME. Journal of Statistical Software 33(3) 1–28. doi:10.18637/jss.v033.i03 Please see also additional publications on the help pages of the individual algorithms. See Also constrOptim for constrained optimization. Examples ## ======================================================================= ## logistic growth model ## ======================================================================= TT <- seq(1, 60, 5) N0 <- 0.1 r <- 0.5 K <- 100 ## perturbed analytical solution Data <- data.frame( time = TT, N = K / (1+(K/N0-1) * exp(-r*TT)) * (1 + rnorm(length(TT), sd = 0.01)) ) plot(TT, Data[,"N"], ylim = c(0, 120), pch = 16, col = "red", main = "logistic growth", xlab = "time", ylab = "N") ##=================================== ## Fitted with analytical solution # ##=================================== ## initial "guess" parms <- c(r = 2, K = 10, N0 = 5) ## analytical solution model <- function(parms,time) with (as.list(parms), return(K/(1+(K/N0-1)*exp(-r*time)))) ## run the model with initial guess and plot results lines(TT, model(parms, TT), lwd = 2, col = "green") ## FITTING algorithm 1 ModelCost <- function(P) { out <- model(P, TT) return(Data$N-out) # residuals } (Fita <- modFit(f = ModelCost, p = parms)) times <- 0:60 lines(times, model(Fita$par, times), lwd = 2, col = "blue") summary(Fita) ##=================================== ## Fitted with numerical solution # ##=================================== ## numeric solution logist <- function(t, x, parms) { with(as.list(parms), { dx <- r * x[1] * (1 - x[1]/K) list(dx) }) } ## model cost, ModelCost2 <- function(P) { out <- ode(y = c(N = P[["N0"]]), func = logist, parms = P, times = c(0, TT)) return(modCost(out, Data)) # object of class modCost } Fit 
<- modFit(f = ModelCost2, p = parms, lower = rep(0, 3), upper = c(5, 150, 10)) out <- ode(y = c(N = Fit$par[["N0"]]), func = logist, parms = Fit$par, times = times) lines(out, col = "red", lty = 2) legend("right", c("data", "original", "fitted analytical", "fitted numerical"), lty = c(NA, 1, 1, 2), lwd = c(NA, 2, 2, 1), col = c("red", "green", "blue", "red"), pch = c(16, NA, NA, NA)) summary(Fit) plot(residuals(Fit)) ## ======================================================================= ## the use of the Jacobian ## ======================================================================= ## We use modFit to solve a set of linear equations A <- matrix(nr = 30, nc = 30, runif(900)) X <- runif(30) B <- A %*% X ## try to find vector "X"; the Jacobian is matrix A ## Function that returns the vector of residuals FUN <- function(x) as.vector(A %*% x - B) ## Function that returns the Jacobian JAC <- function(x) A ## The port algorithm print(system.time( mf <- modFit(f = FUN, p = runif(30), method = "Port") )) print(system.time( mf <- modFit(f = FUN, p = runif(30), method = "Port", jac = JAC) )) max(abs(mf$par - X)) # should be very small ## BFGS print(system.time( mf <- modFit(f = FUN, p = runif(30), method = "BFGS") )) print(system.time( mf <- modFit(f = FUN, p = runif(30), method = "BFGS", jac = JAC) )) max(abs(mf$par - X)) ## Levenberg-Marquardt print(system.time( mf <- modFit(f = FUN, p = runif(30), jac = JAC) )) max(abs(mf$par - X)) modMCMC Constrained Markov Chain Monte Carlo Description Performs a Markov Chain Monte Carlo simulation, using an adaptive Metropolis (AM) algorithm and including a delayed rejection (DR) procedure. Usage modMCMC(f, p, ..., jump = NULL, lower = -Inf, upper = +Inf, prior = NULL, var0 = NULL, wvar0 = NULL, n0 = NULL, niter = 1000, outputlength = niter, burninlength = 0, updatecov = niter, covscale = 2.4^2/length(p), ntrydr = 1, drscale = NULL, verbose = TRUE) ## S3 method for class 'modMCMC' summary(object, remove = NULL, ...) 
## S3 method for class 'modMCMC' pairs(x, Full = FALSE, which = 1:ncol(x$pars), remove = NULL, nsample = NULL, ...) ## S3 method for class 'modMCMC' hist(x, Full = FALSE, which = 1:ncol(x$pars), remove = NULL, ask = NULL, ...) ## S3 method for class 'modMCMC' plot(x, Full = FALSE, which = 1:ncol(x$pars), trace = TRUE, remove = NULL, ask = NULL, ...)
Arguments
f the function to be evaluated, with first argument the vector of parameters which should be varied. It should return either the model residuals, an element of class modCost (as returned by a call to modCost) or -2*log(likelihood). The latter is equivalent to the sum-of-squares function when using a Gaussian likelihood and prior. p initial values for the parameters to be optimized over. ... additional arguments passed to function f or to the methods. jump jump length, either a number, a vector with length equal to the total number of parameters, a covariance matrix, or a function that takes as input the current values of the parameters and produces as output the perturbed parameters. See details. prior -2*log(parameter prior probability), either a function that is called as prior(p) or NULL; in the latter case a non-informative prior is used (i.e. all parameters are equally likely, depending on lower and upper within min and max bounds). var0 initial model variance; if NULL, it is assumed that the model variance is 1, and the return element from f is -2*log (likelihood). If it has a value, it is assumed that the return element from f contains the model residuals or a list of class modFit. See details. Good options for var0 are to use the model variance (modVariance) as returned by the summary method of modFit. When this option is chosen, and the model has several variables, they will all be scaled similarly. See vignette FMEdyna. In case the model has several variables with different magnitudes, then it may be better to scale each variable independently.
In that case, one can use as var0, the mean of the unweighted squared residuals from the model fit as returned from modFit (var_ms_unweighted). See vignette FME. wvar0 "weight" for the initial model variance – see details. n0 parameter used for weighting the initial model variance - if NULL, it is estimated as n0=wvar0*n, where n = number of observations. See details. lower lower bounds on the parameters; for unbounded parameters set equal to -Inf. upper upper bounds on the parameters; for unbounded parameters set equal to Inf. niter number of iterations for the MCMC. outputlength number of iterations kept in the output; should be smaller than or equal to niter. updatecov number of iterations after which the parameter covariance matrix is (re)evaluated based on the parameters kept thus far, and used to update the MCMC jumps. covscale scale factor for the parameter covariance matrix, used to perform the MCMC jumps. burninlength number of initial iterations to be removed from output. ntrydr maximal number of tries for the delayed rejection procedure. It is generally not a good idea to set this to a too large value. drscale for each try during delayed rejection, the Cholesky decomposition of the proposal matrix is scaled with this amount; if NULL, it is assumed to be c(0.2, 0.25, 0.333, 0.333, ...) verbose if TRUE or 1: prints extra output, if numeric value i > 1, prints status information every i iterations. object an object of class modMCMC. x an object of class modMCMC. Full If TRUE then not only the parameters will be plotted, but also the function value and (if appropriate) the model variance(s). which the name or the index to the parameters that should be plotted. Default = all parameters. If Full=TRUE, setting which = NULL will plot only the function value and the model variance. trace if TRUE, adds smoothed line to the plot. remove a list with indices of the runs that should be removed (e.g. to remove runs during burnin).
nsample the number of xy pairs to be plotted on the upper panel in the pairs plot. When NULL all xy pairs are plotted. Set to a lower number in case the graph becomes too dense (and the exported picture too large). This does not affect the histograms on the diagonal plot (which are estimated using all MCMC draws). ask logical; if TRUE, the user is asked before each plot, if NULL the user is only asked if more than one page of plots is necessary and the current graphics device is set interactive, see par(ask=.) and dev.interactive.
Details
Note that arguments after ... must be matched exactly. R-function f is called as f(p, ...). It should return either -2 times the log likelihood of the model (one value), the residuals between model and data or an item of class modFit (as created by function modFit). In the latter two cases, it is assumed that the prior distribution for θ is either non-informative or gaussian. If gaussian, it can be treated as a sum of squares (SS). If the measurement function is defined as: y = f(θ) + ε, with ε ∼ N(0, σ²), where ε is the measurement error, assumed to be normally distributed, then the posterior for the parameters will be estimated as: p(θ|y, σ²) ∝ exp(−0.5 · (SS(θ)/σ² + SSpri(θ))), where σ² is the error variance and SS is the sum of squares function SS(θ) = Σ(yi − f(θ))². If non-informative priors are used, then SSpri(θ) = 0. The error variance σ² is considered a nuisance parameter. A prior distribution of it should be specified and a posterior distribution is estimated. If wvar0 or n0 is >0, then the variances are sampled as conjugate priors from the inverse gamma distribution with parameters var0 and n0=wvar0*n. Larger values of wvar0 keep these samples closer to var0.
Thus, at each step, the reciprocal of the error variance (σ⁻²) is sampled from a gamma distribution: p(σ⁻²|y, θ) ∼ Γ((n0 + n)/2, (n0 · var0 + SS(θ))/2), where n is the number of data points and where n0 = n · wvar0, and where the second argument to the gamma function is the shape parameter. The prior parameters (var0 and wvar0) are the prior mean for σ² and the prior accuracy. By setting wvar0 equal to 1, equal weight is given to the prior and the current value. If wvar0 is 0 then the prior is ignored. If wvar0 is NULL (the default) then the error variances are assumed to be fixed. var0 estimates the variance of the measured components. In case independent estimates are not available, these variances can be obtained from the mean squares of fitted residuals (e.g. as reported in modFit); see the examples (but note that this is not truly independent information). var0 is either one value, or a value for each observed variable, or a value for each observed data point. When var0 is not NULL, then f is assumed to return the model residuals OR an instance of class modCost. When var0=NULL, then f should return either -2*log(probability of the model), or an instance of class modCost. modMCMC implements the Metropolis-Hastings method. The proposal distribution, which is used to generate new parameter values is the (multidimensional) Gaussian density distribution, with standard deviation given by jump. jump can be either one value, a vector of length = number of parameters or a parameter covariance matrix (nrow = ncol = number parameters). The jump parameter, jump thus determines how much the new parameter set will deviate from the old one. If jump is one value, or a vector, then the new parameter values are generated by sampling a normal distribution with standard deviation equal to jump. A larger value will lead to larger jumps in the parameter space, but acceptance of new points can get very low.
Smaller jump lengths increase the acceptance rate, but the algorithm may move too slowly, and too many runs may be needed to scan the parameter space. If jump is NULL, then the jump length is taken as 10% of the parameter value as given in p. jump can also be a proposal covariance matrix. In this case, the new parameter values are generated by sampling a multidimensional normal distribution. It can be efficient to initialise jump using the parameter covariance as resulting from fitting the model (e.g. using modFit) – see examples. Finally, jump can also be an R-function that takes as input the current values of the parameters and returns the new parameter values. Two methods are implemented to increase the number of accepted runs. 1. In the "adaptive Metropolis" method, new parameters are generated with a covariance matrix that is estimated from the parameters generated (and saved) thus far. The idea behind this is that the MCMC method is more efficient if the proposal covariance (to generate new parameter values) is somehow tuned to the shape and size of the target distribution. Setting updatecov smaller than niter will trigger this functionality. In this case, every updatecov iterations, the jump covariance matrix will be estimated from the covariance matrix of the saved parameter values. The covariance matrix is scaled with (2.4²/npar) where npar is the number of parameters, unless covscale has been given a different value. Thus, Jump = (cov(θ1, θ2, ..., θn) + diag(npar) · 1e−16) · (2.4²/npar), where the small number 1e−16 is added on the diagonal of the covariance matrix to prevent it from becoming singular. Note that a problem of adapting the proposal distribution using the MCMC results so far is that standard convergence results do not apply. One solution is to use adaptation only for the burn-in period and discard the part of the chain where adaptation has been used.
Thus, when using updatecov with a positive value of burninlength, the proposal distribution is only updated during burnin. If burninlength = 0 though, the updates occur throughout the entire simulation. When using the adaptive Metropolis method, it is best to start with a small value of the jump length. 2. In the "delayed rejection" method, new parameter values are tried upon rejection. The process of delaying rejection can be iterated for at most ntrydr trials. Setting ntrydr equal to 1 (the default) toggles off delayed rejection. During the delayed rejection procedure, new parameters are generated from the last accepted value by scaling the jump covariance matrix with a factor as specified in drscale. The acceptance probability of this new set depends on the candidates so far proposed and rejected, in such a way that reversibility of the Markov chain is preserved. See Haario et al. (2005, 2006) for more details. Convergence of the MCMC chain can be checked via plot, which plots for each iteration the values of all parameters, and if Full is TRUE, of the function value (SS) and (if appropriate) the modeled variance. If converged, there should be no visible drift. In addition, the methods from package coda become available by making the object returned by modMCMC of class mcmc, as used in the methods of coda. For instance, if object MCMCres is returned by modMCMC then as.mcmc(MCMCres$pars) will make an instance of class mcmc, usable by coda. The burninlength is the number of initial steps that are not included in the output. It can be useful if the initial value of the parameters is far from the optimal value. Starting the MCMC with the best fit parameter set will alleviate the need for using burninlength.
Value
a list of class modMCMC containing the results as returned from the Markov chain. This includes the following: pars an array with dimension (outputlength, length(p)), containing the parameters of the MCMC at each iteration that is kept.
SS vector with the sum of squares function, one for each row in pars. naccepted the number of accepted runs. sig the sampled error variance σ², a matrix with one row for each row in pars. bestpar the parameter set that gave the highest probability. bestfunp the function value corresponding to bestpar. prior the parameter prior, one value for each row in pars. count information about the MCMC chain: number of delayed rejection steps (dr_steps), the number of alfa steps Alfasteps, the number of accepted runs (num_accepted) and the number of times the proposal covariance matrix has been updated (num_covupdate). settings the settings for error covariance calculation, i.e. arguments var0, n0 and N the number of data points. The list returned by modMCMC has methods for the generic functions summary, plot, pairs – see note.
Note
The following S3 methods are provided: • summary, produces summary statistics of the MCMC results • plot, plots the MCMC results, for all parameters. Use it to check convergence. • pairs, produces a pairs plot of the MCMC results; overrides the default gap = 0, upper.panel = NA, and diag.panel. It is also possible to use the methods from the coda package, e.g. densplot. To do that, first the modMCMC object has to be converted to an mcmc object. See the examples for an application.
Author(s)
<NAME> <<EMAIL>> <NAME> <<EMAIL>>
References
<NAME>., 2008. Adaptive MCMC Methods With Applications in Environmental and Geophysical Models. Finnish Meteorological Institute contributions 69, ISBN 978-951-697-662-7, Finnish Meteorological Institute, Helsinki.
<NAME>., <NAME>. and <NAME>., 2001. An Adaptive Metropolis Algorithm. Bernoulli 7, pp. 223–242. doi:10.2307/3318737
<NAME>., <NAME>., <NAME>. and <NAME>., 2006. DRAM: Efficient Adaptive MCMC. Statistics and Computing, 16(4), 339–354. doi:10.1007/s11222-006-9438-0
<NAME>., <NAME>. and <NAME>., 2005. Componentwise Adaptation for High Dimensional MCMC. Computational Statistics 20(2), 265–274.
doi:10.1007/BF02789703 <NAME>. <NAME>., <NAME>. and <NAME>., 2004. Bayesian Data Analysis. Second edition. Chapman and Hall, Boca Raton. <NAME>. and <NAME>. 2010. Inverse Modelling, Sensitivity and Monte Carlo Analysis in R Using Package FME. Journal of Statistical Software 33(3) 1–28. doi:10.18637/jss.v033.i03 See Also modFit for constrained model fitting Examples ## ======================================================================= ## Sampling a 3-dimensional normal distribution, ## ======================================================================= # mean = 1:3, sd = 0.1 # f returns -2*log(probability) of the parameter values NN <- function(p) { mu <- c(1,2,3) -2*sum(log(dnorm(p, mean = mu, sd = 0.1))) #-2*log(probability) } # simple Metropolis-Hastings MCMC <- modMCMC(f = NN, p = 0:2, niter = 5000, outputlength = 1000, jump = 0.5) # More accepted values by updating the jump covariance matrix... MCMC <- modMCMC(f = NN, p = 0:2, niter = 5000, updatecov = 100, outputlength = 1000, jump = 0.5) summary(MCMC) plot(MCMC) # check convergence pairs(MCMC) ## ======================================================================= ## test 2: sampling a 3-D normal distribution, larger standard deviation... ## noninformative priors, lower and upper bounds imposed on parameters ## ======================================================================= NN <- function(p) { mu <- c(1,2,2.5) -2*sum(log(dnorm(p, mean = mu, sd = 0.5))) #-2*log(probability) } MCMC2 <- modMCMC(f = NN, p = 1:3, niter = 2000, burninlength = 500, updatecov = 10, jump = 0.5, lower = c(0, 2, 1), upper = c(1, 3, 3)) plot(MCMC2) hist(MCMC2, breaks = 20) ## Compare output of p3 with theoretical distribution hist(MCMC2, which = "p3", breaks = 20) lines(seq(1, 3, 0.1), dnorm(seq(1, 3, 0.1), mean = 2.5, sd = 0.5)/pnorm(3, 2.5, 0.5)) summary(MCMC2) # functions from package coda... 
cumuplot(as.mcmc(MCMC2$pars)) summary(as.mcmc(MCMC2$pars)) raftery.diag(MCMC2$pars) ## ======================================================================= ## test 3: sampling a log-normal distribution, log mean=1:4, log sd = 1 ## ======================================================================= NL <- function(p) { mu <- 1:4 -2*sum(log(dlnorm(p, mean = mu, sd = 1))) #-2*log(probability) } MCMCl <- modMCMC(f = NL, p = log(1:4), niter = 3000, outputlength = 1000, jump = 5) plot(MCMCl) # bad convergence cumuplot(as.mcmc(MCMCl$pars)) MCMCl <- modMCMC (f = NL, p = log(1:4), niter = 3000, outputlength = 1000, jump = 2^(2:5)) plot(MCMCl) # better convergence but CHECK it! pairs(MCMCl) colMeans(log(MCMCl$pars)) apply(log(MCMCl$pars), 2, sd) MCMCl <- modMCMC (f = NL, p = rep(1, 4), niter = 3000, outputlength = 1000, jump = 5, updatecov = 100) plot(MCMCl) colMeans(log(MCMCl$pars)) apply(log(MCMCl$pars), 2, sd) ## ======================================================================= ## Fitting a Monod (Michaelis-Menten) function to data ## ======================================================================= # the observations #--------------------- Obs <- data.frame(x=c( 28, 55, 83, 110, 138, 225, 375), # mg COD/l y=c(0.053,0.06,0.112,0.105,0.099,0.122,0.125)) # 1/hour plot(Obs, pch = 16, cex = 2, xlim = c(0, 400), ylim = c(0, 0.15), xlab = "mg COD/l", ylab = "1/hr", main = "Monod") # the Monod model #--------------------- Model <- function(p, x) data.frame(x = x, N = p[1]*x/(x+p[2])) # Fitting the model to the data #--------------------- # define the residual function Residuals <- function(p) (Obs$y - Model(p, Obs$x)$N) # use modFit to find parameters P <- modFit(f = Residuals, p = c(0.1, 1)) # plot best-fit model x <-0:375 lines(Model(P$par, x)) # summary of fit sP <- summary(P) sP[] print(sP) # Running an MCMC #--------------------- # estimate parameter covariances # (to efficiently generate new parameter values) Covar <- sP$cov.scaled * 2.4^2/2 # the model 
variance s2prior <- sP$modVariance # set nprior = 0 to avoid updating model variance MCMC <- modMCMC(f = Residuals, p = P$par, jump = Covar, niter = 1000, var0 = s2prior, wvar0 = NULL, updatecov = 100) plot(MCMC, Full = TRUE) pairs(MCMC) # function from the coda package. raftery.diag(as.mcmc(MCMC$pars)) cor(MCMC$pars) cov(MCMC$pars) # covariances by MCMC sP$cov.scaled # covariances by Hessian of fit x <- 1:400 SR <- summary(sensRange(parInput = MCMC$pars, func = Model, x = x)) plot(SR, xlab = "mg COD/l", ylab = "1/hr", main = "Monod") points(Obs, pch = 16, cex = 1.5) Norm Normal Random Distribution Description Generates random parameter sets that are (multi)normally distributed. Usage Norm(parMean, parCovar, parRange = NULL, num) Arguments parMean a vector, with the mean value of each parameter. parCovar the parameter variance-covariance matrix. parRange the range (min, max) of the parameters, a matrix or a data.frame with one row for each parameter, and two columns with the minimum (1st) and maximum (2nd) value. num the number of random parameter sets to generate. Details Function Norm draws parameter sets from a multivariate normal distribution, as specified through the mean value and the variance-covariance matrix of the parameters. In addition, it is possible to impose a minimum and maximum of each parameter, via parRange. This will generate a truncated distribution. Use this for instance if certain parameters cannot become negative. Value a matrix with one row for each generated parameter set, and one column per parameter. Note For function Norm to work, parCovar must be a valid variance-covariance matrix (i.e. positive definite). If this is not the case, the function will fail. Author(s) <NAME> <<EMAIL>> See Also Unif for uniformly distributed random parameter sets. Latinhyper to generate parameter sets using latin hypercube sampling.
Grid to generate random parameter sets arranged on a regular grid. rnorm the R-default for generating normally distributed random numbers. Examples ## multinormal parameters: variance-covariance matrix and parameter mean parCovar <- matrix(data = c(0.5, -0.2, 0.3, 0.4, -0.2, 1.0, 0.1, 0.3, 0.3, 0.1, 1.5, -0.7, 1.0, 0.3, -0.7, 4.5), nrow = 4) parCovar parMean <- 4:1 ## Generated sample Ndist <- Norm(parCovar = parCovar, parMean = parMean, num = 500) cov(Ndist) # check pairs(Ndist, main = "normal") ## truncated multinormal Ranges <- data.frame(min = rep(0, 4), max = rep(Inf, 4)) pairs(Norm(parCovar = parCovar, parMean = parMean, parRange = Ranges, num = 500), main = "truncated normal") obsplot Plot Method for observed data Description Plot all observed variables in matrix format. Usage obsplot(x, ..., which = NULL, xyswap = FALSE, ask = NULL) Arguments x a matrix or data.frame, containing the observed data to be plotted. The 'x'-values (first axis) should be the first column. Several other matrices or data.frames can be passed in the ..., after x (unnamed) - see second example. If the first column of x consists of factors, or characters (strings), then it is assumed that the data are presented in long (database) format, where the first three columns contain (name, x, y). See last example. which the name(s) or the index to the variables that should be plotted. Default = all variables, except the first column. ask logical; if TRUE, the user is asked before each plot, if NULL the user is only asked if more than one page of plots is necessary and the current graphics device is set interactive, see par(ask = ...) and dev.interactive. xyswap if TRUE, then x- and y-values are swapped and the y-axis is from top to bottom. Useful for drawing vertical profiles. ... additional arguments. The graphical arguments are passed to plot.default and points. The dots may contain other matrices and data.frames with observed data to be plotted on the same graphs as x - see second example.
The arguments after . . . must be matched exactly. Details The number of panels per page is automatically determined up to 3 x 3 (par(mfrow = c(3, 3))). This default can be overwritten by specifying user-defined settings for mfrow or mfcol. Set mfrow equal to NULL to avoid the plotting function to change user-defined mfrow or mfcol settings. Other graphical parameters can be passed as well. Parameters are vectorized, either according to the number of plots (xlab, ylab, main, sub, xlim, ylim, log, asp, ann, axes, frame.plot,panel.first,panel.last, cex.lab,cex.axis,cex.main) or according to the number of lines within one plot (other parame- ters e.g. col, lty, lwd etc.) so it is possible to assign specific axis labels to individual plots, resp. different plotting style. Plotting parameter ylim, or xlim can also be a list to assign different axis limits to individual plots. See Also print.deSolve, ode, deSolve Examples ## 'observed' data AIRquality <- cbind(DAY = 1:153, airquality[, 1:4]) head(AIRquality) obsplot(AIRquality, type="l", xlab="Day since May") ## second set of observed data AIR2 <- cbind( 1:100, Solar.R = 250 * runif(100), Temp = 90-30*cos(2*pi*1:100/365) ) obsplot(AIRquality, AIR2, type = "l", xlab = "Day since May" , lwd = 1:2) obsplot(AIRquality, AIR2, type = "l", xlab = "Day since May" , lwd = 1 : 2, which =c("Solar.R", "Temp"), xlim = list(c(0, 150), c(0, 100))) obsplot(AIRquality, AIR2, type = "l", xlab = "Day since May" , lwd = 1 : 2, which =c("Solar.R", "Temp"), log = c("y", "")) obsplot(AIRquality, AIR2, which = 1:3, xyswap = c(TRUE,FALSE,TRUE)) ## ' a data.frame, with 'treatments', presented in long database format Data <- ToothGrowth[,c(2,3,1)] head (Data) obsplot(Data, ylab = "len", xlab = "dose") # same, plotted as two observed data sets obsplot(subset(ToothGrowth, supp == "VC", select = c(dose, len)), subset(ToothGrowth, supp == "OJ", select = c(dose, len))) pseudoOptim Pseudo-random Search Optimisation Algorithm of Price (1977) Description 
Fits a model to data, using the pseudo-random search algorithm of Price (1977), a random-based fitting technique. Usage pseudoOptim(f, p,..., lower, upper, control = list()) Arguments f function to be minimised; its first argument should be the vector of parameters over which minimization is to take place. It should return a scalar result, the model cost, e.g. the sum of squared residuals. p initial values of the parameters to be optimised. ... arguments passed to function f. lower minimal values of the parameters to be optimised; these must be specified; they cannot be -Inf. upper maximal values of the parameters to be optimised; these must be specified; they cannot be +Inf. control a list of control parameters - see details. Details The control argument is a list that can supply any of the following components: • npop, number of elements in the population. Defaults to max(5*length(p),50). • numiter, maximal number of iterations to be performed. Defaults to 10000. The algorithm either stops when numiter iterations have been performed or when the remaining variation is less than varleft. • centroid, number of elements from which to estimate a new parameter vector, defaults to 3. • varleft, relative variation remaining; if below this value the algorithm stops; defaults to 1e-8. • verbose, if TRUE, more verbose output will contain the parameters in the final population, their respective population costs and the cost at each successful iteration. Defaults to FALSE. See the book of Soetaert and Herman (2009) for a description of the algorithm and for a line-by-line explanation of the function code. Value a list containing: par the optimised parameter values. cost the model cost, or function evaluation associated with the optimised parameter values, i.e. the minimal cost. iterations the number of iterations performed. and if control$verbose is TRUE: poppar all parameter vectors remaining in the population, matrix of dimension (npop,length(par)). 
popcost model costs associated with all population parameter vectors, vector of length npop. rsstrace a 2-columned matrix with the iteration number and the model cost at each successful iteration. Author(s) <NAME> <<EMAIL>> References <NAME>. and <NAME>. J., 2009. A Practical Guide to Ecological Modelling. Using R as a Simulation Platform. Springer, 372 pp. Price, W.L., 1977. A Controlled Random Search Procedure for Global Optimisation. The Computer Journal, 20: 367-370. Examples amp <- 6 period <- 5 phase <- 0.5 x <- runif(20)*13 y <- amp*sin(2*pi*x/period+phase) + rnorm(20, mean = 0, sd = 0.05) plot(x, y, pch = 16) cost <- function(par) sum((par[1] * sin(2*pi*x/par[2]+par[3])-y)^2) p1 <- optim(par = c(amplitude = 1, phase = 1, period = 1), fn = cost) p2 <- optim(par = c(amplitude = 1, phase = 1, period = 1), fn = cost, method = "SANN") p3 <- pseudoOptim(p = c(amplitude = 1, phase = 1, period = 1), lower = c(0, 1e-8, 0), upper = c(100, 2*pi, 100), f = cost, control = c(numiter = 3000, verbose = TRUE)) curve(p1$par[1]*sin(2*pi*x/p1$par[2]+p1$par[3]), lty = 2, add = TRUE) curve(p2$par[1]*sin(2*pi*x/p2$par[2]+p2$par[3]), lty = 3, add = TRUE) curve(p3$par[1]*sin(2*pi*x/p3$par[2]+p3$par[3]), lty = 1, add = TRUE) legend ("bottomright", lty = c(1, 2, 3), c("Price", "Mathematical", "Simulated annealing")) sensFun Local Sensitivity Analysis Description Given a model consisting of differential equations, estimates the local effect of certain parameters on selected sensitivity variables by calculating a matrix of so-called sensitivity functions. In this matrix the (i,j)-th element contains ∂yi/∂Θj · ∆Θj/∆yi, where yi is an output variable (at a certain time instance), Θj is a parameter, ∆yi is the scaling of variable yi and ∆Θj is the scaling of parameter Θj. Usage sensFun(func, parms, sensvar = NULL, senspar = names(parms), varscale = NULL, parscale = NULL, tiny = 1e-8, map = 1, ...) ## S3 method for class 'sensFun' summary(object, vars = FALSE, ...)
## S3 method for class 'sensFun' pairs(x, which = NULL, ...) ## S3 method for class 'sensFun' plot(x, which = NULL, legpos="topleft", ask = NULL, ...) ## S3 method for class 'summary.sensFun' plot(x, which = 1:nrow(x), ...) Arguments func an R-function that has as first argument parms and that returns a matrix or data.frame with the values of the output variables (columns) at certain output intervals (rows), and – optionally – a mapping variable (by default the first col- umn). parms parameters passed to func; should be either a vector, or a list with named ele- ments. If NULL, then the first element of parInput is taken. sensvar the output variables for which the sensitivity needs to be estimated. Either NULL, the default, which selects all variables, or a vector with variable names (which should be present in the matrix returned by func), or a vector with indices to variables as present in the output matrix (note that the column of this matrix with the mapping variable should not be selected). senspar the parameters whose sensitivity needs to be estimated, the default=all parame- ters. Either a vector with parameter names, or a vector with indices to positions of parameters in parms. varscale the scaling (weighing) factor for sensitivity variables, NULL indicates that the variable value is used. parscale the scaling (weighing) factor for sensitivity parameters, NULL indicates that the parameter value is used. tiny the perturbation, or numerical difference, factor, see details. map the column number with the (independent) mapping variable in the output ma- trix returned by func. For dynamic models solved by integration, this will be the (first) column with time. For 1-D spatial output, this column will be some distance variable. Set to NULL if there is no mapping variable. Mapping vari- ables should not be selected for estimating sensitivity functions; they are used for plotting. ... additional arguments passed to func or to the methods. 
object an object of class sensFun. x an object of class sensFun. vars if FALSE: summaries per parameter are returned; if TRUE, summaries per pa- rameter and per variable are returned. which the name or the index to the variables that should be plotted. Default = all variables. legpos position of the legend; set to NULL to avoid plotting a legend. ask logical; if TRUE, the user is asked before each plot, if NULL the user is only asked if more than one page of plots is necessary and the current graphics device is set interactive, see par(ask = ...) and dev.interactive. Details There are essentially two ways in which to use function sensFun. • When func returns a matrix or data frame with output values, sensFun can be used for sensi- tivity analysis, estimating the impact of parameters on output variables. • When func returns an instance of class modCost (as returned by a call to function modCost), then sensFun can be used for parameter identifiability. In this case the results from sensFun are used as input to function collin. See the help file for collin. For each sensitivity parameter, the number of sensitivity functions estimated is: length(sensvar) * length(mapping variable), i.e. one for each element returned by func (except the mapping variable). The sensitivity functions are estimated numerically. This means that each parameter value Θj is perturbed as max (tiny, Θj · (1 + tiny)) Value a data.frame of class sensFun containing the sensitivity functions this is one row for each sensitivity variable at each independent (time or position) value and the following columns: x, the value of the independent (mapping) variable, usually time (solver= "ode.."), or distance (solver= "steady.1D") var, the name of the observed variable, ..., a number of columns, one for each sensitivity parameter The data.frame returned by sensFun has methods for the generic functions summary, plot, pairs – see note. 
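The numerical scheme described under Details can be sketched in a few lines of base R: each parameter is perturbed by a small amount, the model is rerun, and the scaled difference in the output is recorded. The toy exponential-growth model and the function names below are hypothetical illustrations, not part of FME; the perturbation step approximates the max(tiny, Θj · (1 + tiny)) rule above.

```r
## Minimal sketch of numerically estimated, scaled sensitivity functions
## (base R only; 'expgrowth' and 'sensfun_sketch' are made-up names).
expgrowth <- function(parms) {
  t <- seq(0, 2, by = 0.5)
  data.frame(time = t, N = parms[["N0"]] * exp(parms[["r"]] * t))
}

sensfun_sketch <- function(func, parms, tiny = 1e-8) {
  ref <- func(parms)[, -1]                        # drop the mapping column (time)
  sapply(names(parms), function(p) {
    pp      <- parms
    dp      <- max(tiny, abs(parms[[p]]) * tiny)  # small perturbation of parameter p
    pp[[p]] <- parms[[p]] + dp
    pert    <- func(pp)[, -1]
    (pert - ref) / dp * parms[[p]] / ref          # dy/dp, scaled by p/y
  })
}

S <- sensfun_sketch(expgrowth, c(N0 = 0.1, r = 1))
S   # one row per output time, one column per parameter
```

For this model the scaled sensitivity to N0 is 1 at every time, and the sensitivity to r grows as r · t, which the numerical estimates reproduce closely.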
Note Sensitivity functions are generated by perturbing one by one the parameters with a very small amount, and quantifying the differences in the output. It is important that the output is generated with high precision, else it is possible that the sensitivity functions are just noise. For instance, when used with a dynamic model (using solver from deSolve) set the tolerances atol and rtol to a lower value, to see if the sensitivity results make sense. The following methods are provided: • summary. Produces summary statistics of the sensitivity functions, a data.frame with: one row for each parameter and the following columns: – L1: the L1-norm, (1/n) · Σ |Sij|, – L2: the L2-norm, sqrt((1/n) · Σ Sij · Sij), – Mean: the mean of the sensitivity functions, – Min: the minimal value of the sensitivity functions, – Max: the maximal value of the sensitivity functions. • var the summary of the variables' sensitivity functions, a data.frame with the same columns as model and one row for each parameter + variable combination. This is only output if the variable names are effectively known. • plot plots the sensitivity functions for each parameter; each parameter has its own color. By default, the sensitivity functions for all variables are plotted in one figure, unless which gives a selection of variables; in that case, each variable will be plotted in a separate figure, and the figures aligned in a rectangular grid, unless par mfrow is passed as an argument. • pairs produces a pairs plot of the sensitivity results; per parameter. By default, the sensitivity functions for all variables are plotted in one figure, unless which gives a selection of variables. Overrides the default gap = 0, upper.panel = NA, and diag.panel. Author(s) <NAME> <<EMAIL>> References <NAME>. and <NAME>., 2009. A Practical Guide to Ecological Modelling – Using R as a Simulation Platform. Springer, 390 pp. <NAME>., <NAME>. and <NAME>., 2001.
Practical Identifiability Analysis of Large Environmental Simulation Models. Water Resour. Res. 37(4): 1015–1030. doi:10.1029/2000WR900350 <NAME>. and <NAME>. 2010. Inverse Modelling, Sensitivity and Monte Carlo Analysis in R Using Package FME. Journal of Statistical Software 33(3) 1–28. doi:10.18637/jss.v033.i03 Examples ## ======================================================================= ## Bacterial growth model as in Soetaert and Herman, 2009 ## ======================================================================= pars <- list(gmax = 0.5, eff = 0.5, ks = 0.5, rB = 0.01, dB = 0.01) solveBact <- function(pars) { derivs <- function(t, state, pars) { # returns rate of change with (as.list(c(state, pars)), { dBact <- gmax * eff * Sub/(Sub + ks) * Bact - dB * Bact - rB * Bact dSub <- -gmax * Sub/(Sub + ks) * Bact + dB * Bact return(list(c(dBact, dSub))) }) } state <- c(Bact = 0.1, Sub = 100) tout <- seq(0, 50, by = 0.5) ## ode solves the model by integration ... return(as.data.frame(ode(y = state, times = tout, func = derivs, parms = pars))) } out <- solveBact(pars) plot(out$time, out$Bact, ylim = range(c(out$Bact, out$Sub)), xlab = "time, hour", ylab = "molC/m3", type = "l", lwd = 2) lines(out$time, out$Sub, lty = 2, lwd = 2) lines(out$time, out$Sub + out$Bact) legend("topright", c("Bacteria", "Glucose", "TOC"), lty = c(1, 2, 1), lwd = c(2, 2, 1)) ## sensitivity functions SnsBact <- sensFun(func = solveBact, parms = pars, sensvar = "Bact", varscale = 1) head(SnsBact) plot(SnsBact) plot(SnsBact, type = "b", pch = 15:19, col = 2:6, main = "Sensitivity all vars") summary(SnsBact) plot(summary(SnsBact)) SF <- sensFun(func = solveBact, parms = pars, sensvar = c("Bact", "Sub"), varscale = 1) head(SF) tail(SF) summary(SF, var = TRUE) plot(SF) plot(SF, which = c("Sub","Bact")) pm <- par(mfrow = c(1,3)) plot(SF, which = c("Sub", "Bact"), mfrow = NULL) plot(SF, mfrow = NULL) par(mfrow = pm) ## Bivariate sensitivity pairs(SF) # same color pairs(SF, which = "Bact",
col = "green", pch = 15) pairs(SF, which = c("Bact", "Sub"), col = c("green", "blue")) mtext(outer = TRUE, side = 3, line = -2, "Sensitivity functions", cex = 1.5) ## pairwise correlation cor(SnsBact[,-(1:2)]) sensRange Sensitivity Ranges of a Timeseries or 1-D Variables Description Given a model consisting of differential equations, estimates the global effect of certain (sensitivity) parameters on a time series or on 1-D spatial series of selected sensitivity variables. This is done by drawing parameter values according to some predefined distribution, running the model with each of these parameter combinations, and calculating the values of the selected output variables at each output interval. This function thus produces ’envelopes’ around the sensitivity variables. Usage sensRange(func, parms = NULL, sensvar = NULL, dist = "unif", parInput = NULL, parRange = NULL, parMean = NULL, parCovar = NULL, map = 1, num = 100, ...) ## S3 method for class 'sensRange' summary(object, ...) ## S3 method for class 'summary.sensRange' plot(x, xyswap = FALSE, which = NULL, legpos = "topleft", col = c(grey(0.8), grey(0.7)), quant = FALSE, ask = NULL, obs = NULL, obspar = list(), ...) ## S3 method for class 'sensRange' plot(x, xyswap = FALSE, which = NULL, ask = NULL, ...) Arguments func an R-function that has as first argument parms and that returns a matrix or data.frame with the values of the output variables (columns) at certain output intervals (rows), and – optionally – a mapping variable (by default the first col- umn). parms parameters passed to func; should be either a vector, or a list with named ele- ments. If NULL, then the first element of parInput is taken. sensvar the output variables for which the sensitivity needs to be estimated. 
Either NULL, the default, which selects all variables, or a vector with variable names (which should be present in the matrix returned by func), or a vector with indices to variables as present in the output matrix (note that the column of this matrix with the mapping variable should not be selected). dist the distribution according to which the parameters should be generated, one of "unif" (uniformly random samples), "norm", (normally distributed random samples), "latin" (latin hypercube distribution), "grid" (parameters arranged on a grid). The input parameters for the distribution are specified by parRange (min,max), except for the normally distributed parameters, in which case the distribution is specified by the parameter means parMean and the variance- covariance matrix, parCovar. Note that, if the distribution is "norm" and parRange is given, then a truncated distribution will be generated. (This is useful to pre- vent for instance that certain parameters become negative). Ignored if parInput is specified. parRange the range (min, max) of the sensitivity parameters, a matrix or (preferred) a data.frame with one row for each parameter, and two columns with the mini- mum (1st) and maximum (2nd) value. The rownames of parRange should be parameter names that are known in argument parms. Ignored if parInput is specified. parInput a matrix with dimension (*, npar) with the values of the sensitivity parameters. parMean only when dist is "norm": the mean value of each parameter. Ignored if parInput is specified. parCovar only when dist is "norm": the parameter’s variance-covariance matrix. num the number of times the model has to be run. Set large enough. If parInput is specified, then num parameters are selected randomly (from the rows of parInput. map the column number with the (independent) mapping variable in the output ma- trix returned by func. For dynamic models solved by integration, this will be the (first) column with time. 
For 1-D spatial output, this column will be some distance variable. Set to NULL if there is no mapping variable. Mapping variables should not be selected for estimating sensitivity ranges; they are used for plotting. object an object of class sensRange. x an object of class sensRange. legpos position of the legend; set to NULL to avoid plotting a legend. xyswap if TRUE, then x- and y-values are swapped and the y-axis is from top to bottom. Useful for drawing vertical profiles. which the name or the index to the variables that should be plotted. Default = all variables. col the two colors of the polygons that should be plotted. quant if TRUE, then the median surrounded by the quantiles q25-q75 and q05-q95 are plotted, else the min-max and mean ± sd are plotted. ask logical; if TRUE, the user is asked before each plot, if NULL the user is only asked if more than one page of plots is necessary and the current graphics device is set interactive, see par(ask=...) and dev.interactive. obs a data.frame or matrix with "observed data" that will be added as points to the plots. obs can also be a list with multiple data.frames and/or matrices containing observed data. The first column of obs should contain the time or space-variable. If obs is not NULL and which is NULL, then the variables common to both obs and x will be plotted. obspar additional graphics arguments passed to points, for plotting the observed data. If obs is a list containing multiple observed data sets, then the graphics arguments can be a vector or a list (e.g. for xlim, ylim), specifying each data set separately. ... additional arguments passed to func or to the methods. Details Models solved by integration (i.e. by using one of 'ode', 'ode.1D', 'ode.band', 'ode.2D'), have the output already in a form usable by sensRange. Value a data.frame of type sensRange containing the parameter set and the corresponding values of the sensitivity output variables.
The list returned by sensRange has a method for the generic functions summary,plot and plot.summary – see note. Note The following methods are included: • summary, estimates summary statistics for the sensitivity variables, a data.frame with as many rows as there are mapping variables (or rows in the matrix returned by func) and the following columns: x, the mapping value, Mean, the mean, sd, the standard deviation, Min, the minimal value, Max, the maximal value, q25, q50, q75, the 25th, 50 and 75% quantile • plot, produces a "matplot" of the sensRange output, one plot for each sensitivity variable and with the mapping variable on the x-axis. Each variable will be plotted in a separate figure, and the figures aligned in a rectangular grid, unless par mfrow is passed as an argument. • summary.plot, produces a plot of the summary of the sensRange output, one plot for each sensitivity variable and with the ranges and mean +- standard deviation or the quantiles as coloured polygons. Each variable will be plotted in a separate figure, and the figures aligned in a rectangular grid, unless par mfrow is passed as an argument. The output for models solved by a steady-state solver (i.e. one of 'steady', 'steady.1D', 'steady.band', 'steady.2D', needs to be rearranged – see examples. For plot.summary.sensRange and plot.sensRange, the number of panels per page is automati- cally determined up to 3 x 3 (par(mfrow = c(3, 3))). This default can be overwritten by specifying user-defined settings for mfrow or mfcol. Set mfrow equal to NULL to avoid the plotting function to change user-defined mfrow or mfcol settings. Other graphical parameters can be passed as well. Parameters are vectorized, either according to the number of plots (xlab, ylab, main, sub, xlim, ylim, log, asp, ann, axes, frame.plot,panel.first,panel.last, cex.lab,cex.axis,cex.main) or according to the number of lines within one plot (other parame- ters e.g. col, lty, lwd etc.) 
so it is possible to assign specific axis labels to individual plots, resp. different plotting style. Plotting parameter ylim, or xlim can also be a list to assign different axis limits to individual plots. Similarly, the graphical parameters for observed data, as passed by obspar can be vectorized, ac- cording to the number of observed data sets (when obs is a list). The data.frame of type sensRange has several attributes, which remain hidden, and which are generally not of practical use (they are needed for the S3 methods). There is one exception, i.e. if parameter values are imposed via argument parInput, and these parameters are generated by a Markov chain (modMCMC). If the number of draws, num, is less than the number of rows in parInput, then num random draws will be taken. Attribute, "pset" then contains the index to the parameters that have been selected. The sensRange method only represents the distribution of the model response variables as a func- tion of the parameter values. But an additional source of noise is due to the model error, as repre- sented by the sampled values of sigma in the Markov chain. In order to represent also this source of error, gaussian noise should be added to each sensitivity output variables, with a standard deviation that corresponds to the original parameter draw – see vignette "FMEother". Author(s) <NAME> <<EMAIL>> References <NAME>. and <NAME>. 2010. Inverse Modelling, Sensitivity and Monte Carlo Analysis in R Using Package FME. Journal of Statistical Software 33(3) 1–28. 
doi:10.18637/jss.v033.i03 Examples ## ======================================================================= ## Bacterial growth model from Soetaert and Herman, 2009 ## ======================================================================= pars <- list(gmax = 0.5,eff = 0.5, ks = 0.5, rB = 0.01, dB = 0.01) solveBact <- function(pars) { derivs <- function(t,state,pars) { # returns rate of change with (as.list(c(state,pars)), { dBact <- gmax*eff * Sub/(Sub + ks)*Bact - dB*Bact - rB*Bact dSub <- -gmax * Sub/(Sub + ks)*Bact + dB*Bact return(list(c(dBact,dSub))) }) } state <- c(Bact = 0.1,Sub = 100) tout <- seq(0, 50, by = 0.5) ## ode solves the model by integration ... return(as.data.frame(ode(y = state, times = tout, func = derivs, parms = pars))) } out <- solveBact(pars) mf <-par(mfrow = c(2,2)) plot(out$time, out$Bact, main = "Bacteria", xlab = "time, hour", ylab = "molC/m3", type = "l", lwd = 2) ## the sensitivity parameters parRanges <- data.frame(min = c(0.4, 0.4, 0.0), max = c(0.6, 0.6, 0.02)) rownames(parRanges)<- c("gmax", "eff", "rB") parRanges tout <- 0:50 ## sensitivity to rB; equally-spaced parameters ("grid") SensR <- sensRange(func = solveBact, parms = pars, dist = "grid", sensvar = "Bact", parRange = parRanges[3,], num = 50) Sens <-summary(SensR) plot(Sens, legpos = "topleft", xlab = "time, hour", ylab = "molC/m3", main = "Sensitivity to rB", mfrow = NULL) ## sensitivity to all; latin hypercube Sens2 <- summary(sensRange(func = solveBact, parms = pars, dist = "latin", sensvar = c("Bact", "Sub"), parRange = parRanges, num = 50)) ## Plot all variables; plot mean +- sd, min max plot(Sens2, xlab = "time, hour", ylab = "molC/m3", main = "Sensitivity to gmax,eff,rB", mfrow = NULL) par(mfrow = mf) ## Select one variable for plotting; plot the quantiles plot(Sens2, xlab = "time, hour", ylab = "molC/m3", which = "Bact", quant = TRUE) ## Add data data <- cbind(time = c(0,10,20,30), Bact = c(0,1,10,45)) plot(Sens2, xlab = "time, hour", ylab = "molC/m3", quant = 
TRUE, obs = data, obspar = list(col = "darkblue", pch = 16, cex = 2)) Unif Uniform Random Distribution Description Generates uniformly distributed random parameter sets. Usage Unif(parRange, num) Arguments parRange the range (min, max) of the parameters, a matrix or a data.frame with one row for each parameter, and two columns with the minimum (1st) and maximum (2nd) value. num the number of random parameter sets to generate. Details In the uniform sampling, each parameter is uniformly distributed over its range. Value a matrix with one row for each generated parameter set, and one column per parameter. Note For small sample sizes, the latin hypercube distributed parameter sets (Latinhyper) may give better coverage in parameter space than the uniform random design. Author(s) <NAME> <<EMAIL>> See Also Norm for (multi)normally distributed random parameter sets. Latinhyper to generate parameter sets using latin hypercube sampling. Grid to generate random parameter sets arranged on a regular grid. runif the R-default for generating uniformly distributed random numbers. Examples ## 4 parameters parRange <- data.frame(min = c(0, 1, 2, 3), max = c(10, 9, 8, 7)) rownames(parRange) <- c("par1", "par2", "par3", "par4") ## uniform pairs(Unif(parRange, 100), main = "Uniformly random")
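The uniform design described above amounts to one runif call per parameter row. The sketch below mimics Unif's interface in plain base R; it is an illustration, not the package implementation.

```r
## Minimal sketch of uniform random sampling over parameter ranges
## (plain base R; mimics, but is not, the FME Unif code).
unif_sketch <- function(parRange, num) {
  out <- apply(parRange, 1, function(r) runif(num, min = r[1], max = r[2]))
  colnames(out) <- rownames(parRange)   # one column per parameter
  out                                   # one row per generated parameter set
}

parRange <- data.frame(min = c(0, 1), max = c(10, 9),
                       row.names = c("par1", "par2"))
set.seed(1)
U <- unif_sketch(parRange, 100)
```

Every sampled value falls inside its parameter's (min, max) range, and the returned matrix has the same row-per-set, column-per-parameter layout as Unif.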
codablealamofire
===

**CodableAlamofire**
---

**CodableAlamofire** is a library that integrates **Alamofire** with **Codable** to parse JSON responses into Swift objects. It makes working with web services and APIs seamless and efficient by handling data serialization and deserialization for you.

### **Key Features**

* Integrates Alamofire and Codable seamlessly
* Parses JSON responses into Swift objects
* Handles data serialization and deserialization
* Improves code readability and maintainability

### **Installation**

1. Open your project in Xcode and navigate to the **File** menu.
2. Click on **Swift Packages** and then select **Add Package Dependency**.
3. In the search bar, enter **CodableAlamofire**.
4. Select the latest version and click **Add Package**.
5. Finally, import the library in your Swift file using **import CodableAlamofire**.

### **Usage**

To use CodableAlamofire, follow the steps below:

```
import CodableAlamofire

// Make a request and decode the JSON response into MyModel
AF.request("https://api.example.com/data").responseDecodable(of: MyModel.self) { response in
    guard let data = response.value else { return }
    // Use 'data', which is an instance of MyModel
}
```

In the example above, the **request** method makes a network request to the specified URL, and **responseDecodable** parses the JSON response into an instance of **MyModel**, a `Decodable` type you define. The parsed value is available in the completion block via **response.value**.

### **Conclusion**

CodableAlamofire simplifies the integration of Alamofire and Codable. By handling the serialization and deserialization of JSON responses for you, it saves time and effort while improving the readability and maintainability of your code.
Package ‘mtsta’

September 19, 2023

Title Accessing the Red List of Montane Tree Species of the Tropical Andes

Version 0.0.0.1

Description Access the ‘Red List of Montane Tree Species of the Tropical Andes’ Tejedor
Garavito et al. (2014, ISBN:978-1-905164-60-8). This package allows users to search for
globally threatened tree species within the Andean montane forests, including cloud
forests and seasonal (wet) forests above 1500 m a.s.l.

License MIT + file LICENSE

Suggests dplyr, forcats, ggplot2, janitor, knitr, rmarkdown, stringr, testthat, tibble,
tidyr

Config/testthat/edition 3

Encoding UTF-8

LazyData true

LazyDataCompression xz

Date 2023-09-16

RoxygenNote 7.2.3

Maintainer <NAME> <<EMAIL>>

URL https://github.com/PaulESantos/mtsta

BugReports https://github.com/PaulESantos/mtsta/issues

VignetteBuilder knitr

Depends R (>= 2.10)

NeedsCompilation no

Author <NAME> [aut, cre] (<https://orcid.org/0000-0002-6635-0375>)

Repository CRAN

Date/Publication 2023-09-19 14:20:02 UTC

R topics documented:

mtsta_distribution
mtsta_tab
search_mtsta
search_mtsta_distribution

mtsta_distribution      mtsta_distribution: Distribution Data for Species in mtsta_tab

Description

This dataset contains distribution data for species included in A Regional Red List of
Montane Tree Species of the Tropical Andes - 2014. It provides information on the
current accepted names, distribution range, and taxonomic status for each species. The
data combines the original distribution data with the updated distribution information
for the included species.

Usage

mtsta_distribution

Format

A tibble

accepted_name      Character vector with the current accepted name of the species.
distribution       Character vector indicating the distribution range of the species.
distribution_wcvp  Character vector indicating the distribution according to the World
                   Checklist of Vascular Plants (WCVP).
taxonomic_status   Character vector indicating the taxonomic status of the submitted
                   name (e.g., "Accepted").
Details

The dataset contains information on the distribution of species in A Regional Red List
of Montane Tree Species of the Tropical Andes - 2014. The distribution data includes the
geographical range of each species, including the countries where the species can be
found.

Examples

# Load the mtsta_distribution dataset
data("mtsta_distribution")

mtsta_tab               Montane Tree Species of the Tropical Andes - Base Data

Description

This is the curated base data of montane tree species found in the Tropical Andes
region. The data is stored as a tibble with 132 rows and 11 columns. Each row represents
a species and contains information such as the species name, accepted name, accepted
family, accepted genus, IUCN status, distribution, elevation range, assessor,
description, and taxonomic status.

Usage

mtsta_tab

Format

A tibble with 132 rows and 11 columns.

Details

The columns in the base data tibble are as follows:

• species_name: The scientific name of the species.
• accepted_name: The currently accepted name for the species.
• accepted_family: The family to which the species belongs.
• accepted_genus: The genus to which the species belongs.
• taxonomic_status: The taxonomic status of the species.
• iucn: The conservation status of the species according to the IUCN Red List
  Categories.
• distribution: The distribution range of the species.
• unsure_distribution: Information about the uncertainty in the distribution data.
• elevation: The elevation range where the species is found.
• assessor: The person or group responsible for assessing the species.
• description: Additional information or description of the species.

The base species list used in the mtsta package has been carefully reviewed and
validated with the assistance of the Taxonomic Name Resolution Service (TNRS). By
utilizing the TNRS, the base species list in the mtsta package guarantees accuracy and
consistency in the representation of species names, enhancing the reliability of the
package’s functionalities.
Source

The data for this package was obtained from A Regional Red List of Montane Tree Species
of the Tropical Andes - 2014.

See Also

Use the search_mtsta() function to search and match species names using this base data.

Examples

data("mtsta_tab")

search_mtsta            Search for Matching Species Names in the Red List of Montane
                        Tree Species of the Tropical Andes

Description

This function searches for matching species names in the Red List of Montane Tree
Species of the Tropical Andes (mtsta package) based on a provided list of species names.
The function performs approximate matching using the Levenshtein distance algorithm to
find similar names within a specified maximum distance.

Usage

search_mtsta(splist, max_distance = 0.1, ...)

Arguments

splist         A character vector containing the species names to be matched.
max_distance   The distance used is a generalized Levenshtein distance that indicates
               the total number of insertions, deletions, and substitutions allowed to
               match the two names. It can be expressed as an integer or as a fraction
               of the binomial name. A value between 0 and 1 is treated as a percentage
               of the string length; a value greater than 1 is treated as an absolute
               number of allowed changes. For example, for a name of length 10,
               max_distance = 0.1 allows only one change (insertion, deletion, or
               substitution), while max_distance = 2 allows two changes.
...            Additional arguments (currently unused).

Value

A data frame with the following columns:

• name_submitted: The submitted species names.
• name_matched: The matched species names from the mtsta data.
• accepted_name: The accepted scientific name of the matched species.
• accepted_family: The accepted family name of the matched species.
• accepted_genus: The accepted genus name of the matched species.
• taxonomic_status: The taxonomic status of the matched species.
• iucn: The conservation status of the matched species according to the IUCN Red List
  Categories.
• distribution: The distribution range of the matched species.
• unsure_distribution: Information about the uncertainty in the distribution data.
• elevation: The elevation range where the species is found.
• assessor: The person or group responsible for assessing the species.
• description: Additional information or description of the species.
• distance: The Levenshtein distance between the submitted name and the matched name.

If no match is found, the name_matched column will contain "nill", and the other columns
will be empty.

See Also

mtsta::mtsta_tab

Examples

# Assuming you have the mtsta package loaded with the necessary data
search_result <- search_mtsta(c("<NAME>", "<NAME>", "<NAME>", "<NAME>",
                                "Ilex colombiana", "Ilex rimbachii",
                                "Ilex scopulorum"),
                              max_distance = 0.1)

search_mtsta_distribution
                        Search Species Distribution of A Regional Red List of Montane
                        Tree Species of the Tropical Andes

Description

This function searches the Montane Tree Species of the Tropical Andes distribution
database for a list of submitted species names and returns their distribution
information. The search allows for approximate string matching within a given maximum
distance.

Usage

search_mtsta_distribution(splist, max_distance = 0.1, typedf = "main")

Arguments

splist         Character vector of submitted plant species names for which distribution
               data is to be searched.
max_distance   Numeric value representing the maximum allowed distance for approximate
               string matching. The default value is 0.1.
typedf         "main" returns a selection of columns from mtsta_distribution; "full"
               returns all data.

Details

The function uses approximate string matching with a maximum distance specified by the
max_distance argument. It searches the mtsta distribution database for submitted species
names that match the provided names within the given distance. The function then
retrieves distribution information, including the accepted name, distribution region,
unsure distribution (if available), distribution from the World Checklist of Vascular
Plants (WCVP), taxonomic status, and the Levenshtein distance between submitted and
matched species names.

Value

A data frame with the following columns:

name_submitted       Character vector with the submitted species names.
name_matched         Character vector with the matched species names in the database.
accepted_name        Character vector with the accepted names of the matched species.
distribution         Character vector with the distribution of the matched species.
unsure_distribution  Character vector with information about unsure distribution (if
                     available).
distribution_wcvp    Character vector with the distribution from the World Checklist of
                     Vascular Plants (WCVP) database for the matched species.
taxonomic_status     Character vector indicating the taxonomic status of the matched
                     species.
distance             Numeric vector with the Levenshtein distance between submitted and
                     matched species names.

See Also

mtsta_distribution

Examples

# Example usage of search_mtsta_distribution function
species_list <- c("<NAME>", "<NAME>", "<NAME>")
distribution_data <- search_mtsta_distribution(species_list, max_distance = 0.2)
print(distribution_data)
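The max_distance semantics described above — a fraction of the name length when between 0 and 1, an absolute number of edits when greater than 1 — can be sketched outside R. The following Python illustration of the matching rule is hypothetical and is not the package's implementation:

```python
def levenshtein(a, b):
    """Classic dynamic-programming edit distance
    (insertions, deletions, substitutions)."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,                  # deletion
                           cur[j - 1] + 1,               # insertion
                           prev[j - 1] + (ca != cb)))    # substitution
        prev = cur
    return prev[-1]

def allowed_changes(name, max_distance):
    """A max_distance in (0, 1) is a fraction of the name length;
    a value >= 1 is an absolute number of edits."""
    if 0 < max_distance < 1:
        return int(len(name) * max_distance)
    return int(max_distance)

def is_match(submitted, candidate, max_distance=0.1):
    return levenshtein(submitted, candidate) <= allowed_changes(submitted, max_distance)

# "Ilex rimbachi" (13 characters, max_distance = 0.1) tolerates one edit,
# so the single missing "i" still matches:
print(is_match("Ilex rimbachi", "Ilex rimbachii", max_distance=0.1))  # True
```

The same rule applies in both search_mtsta() and search_mtsta_distribution(); only the columns returned differ.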
django-user-accounts 2.0.3 documentation

django-user-accounts[¶](#django-user-accounts)
===

Provides user accounts to a Django project.

Development[¶](#development)
---

The source repository can be found at <https://github.com/pinax/django-user-accounts/>

### Contents[¶](#contents)

#### Installation[¶](#installation)

Install the development version:

```
pip install django-user-accounts
```

Add `account` to your `INSTALLED_APPS` setting:

```
INSTALLED_APPS = (
    # ...
    "account",
    # ...
)
```

See the list of [Settings](index.html#settings) to modify the default behavior of django-user-accounts and make adjustments for your website.

Add `account.urls` to your URLs definition:

```
urlpatterns = patterns("",
    ...
    url(r"^account/", include("account.urls")),
    ...
)
```

Add `account.context_processors.account` to `TEMPLATE_CONTEXT_PROCESSORS`:

```
TEMPLATE_CONTEXT_PROCESSORS = [
    ...
    "account.context_processors.account",
    ...
]
```

Add `account.middleware.LocaleMiddleware` and `account.middleware.TimezoneMiddleware` to `MIDDLEWARE_CLASSES`:

```
MIDDLEWARE_CLASSES = [
    ...
    "account.middleware.LocaleMiddleware",
    "account.middleware.TimezoneMiddleware",
    ...
]
```

Optionally include `account.middleware.ExpiredPasswordMiddleware` in `MIDDLEWARE_CLASSES` if you need password expiration support:

```
MIDDLEWARE_CLASSES = [
    ...
    "account.middleware.ExpiredPasswordMiddleware",
    ...
]
```

Once everything is in place make sure you run `migrate` to modify the database with the `account` app models.

##### Dependencies[¶](#dependencies)

###### `django.contrib.auth`[¶](#django-contrib-auth)

This is bundled with Django. It is enabled by default with all new Django projects, but if you are adding django-user-accounts to an existing project you need to make sure `django.contrib.auth` is installed.

###### `django.contrib.sites`[¶](#django-contrib-sites)

This is bundled with Django. It is enabled by default with all new Django projects.
It is used to provide links back to the site in emails or various places in templates that need an absolute URL.

###### [django-appconf](https://github.com/jezdez/django-appconf)[¶](#django-appconf)

We use django-appconf for app settings. It is listed in `install_requires` and will be installed when pip installs django-user-accounts.

###### [pytz](http://pypi.python.org/pypi/pytz/)[¶](#pytz)

pytz is used for handling timezones for accounts, providing the timezone database the app relies on.

#### Usage[¶](#usage)

This document covers the usage of django-user-accounts. It assumes you’ve read [Installation](index.html#installation).

django-user-accounts has very good default behavior when handling user accounts. It has been designed to be customizable in many aspects. By default this app will:

* enable username authentication
* provide default views and forms for sign up, log in, password reset and account management
* handle log out with POST
* require unique email addresses globally
* require email verification for performing password resets

The rest of this document will cover how you can tweak the default behavior of django-user-accounts.

##### Limiting access to views[¶](#limiting-access-to-views)

To limit view access to logged-in users, normally you would use the Django decorator `django.contrib.auth.decorators.login_required`. Instead you should use `account.decorators.login_required`.

##### Customizing the sign up process[¶](#customizing-the-sign-up-process)

In many cases you need to tweak the sign up process to do some domain-specific tasks. Perhaps you need to update a profile for the new user or something else. The built-in `SignupView` has hooks to enable just about any sort of customization during sign up.
Here’s an example of a custom `SignupView` defined in your project:

```
import account.views

class SignupView(account.views.SignupView):

    def after_signup(self, form):
        self.update_profile(form)
        super(SignupView, self).after_signup(form)

    def update_profile(self, form):
        profile = self.created_user.profile  # replace with your reverse one-to-one profile attribute
        profile.some_attr = "some value"
        profile.save()
```

This example assumes you had a receiver hooked up to the `post_save` signal for the sender `User`, like so:

```
from django.dispatch import receiver
from django.db.models.signals import post_save
from django.contrib.auth.models import User

from mysite.profiles.models import UserProfile

@receiver(post_save, sender=User)
def handle_user_save(sender, instance, created, **kwargs):
    if created:
        UserProfile.objects.create(user=instance)
```

You can define your own form class to add fields to the sign up process:

```
# forms.py
from django import forms
from django.forms.extras.widgets import SelectDateWidget

import account.forms

class SignupForm(account.forms.SignupForm):
    birthdate = forms.DateField(widget=SelectDateWidget(years=range(1910, 1991)))

# views.py
import account.views

import myproject.forms

class SignupView(account.views.SignupView):

    form_class = myproject.forms.SignupForm

    def after_signup(self, form):
        self.create_profile(form)
        super(SignupView, self).after_signup(form)

    def create_profile(self, form):
        profile = self.created_user.profile  # replace with your reverse one-to-one profile attribute
        profile.birthdate = form.cleaned_data["birthdate"]
        profile.save()
```

To hook this up for your project you need to override the URL for sign up:

```
from django.conf.urls import patterns, include, url

import myproject.views

urlpatterns = patterns("",
    url(r"^account/signup/$", myproject.views.SignupView.as_view(), name="account_signup"),
    url(r"^account/", include("account.urls")),
)
```

Note: Make sure your `url` for `/account/signup/` comes *before* the `include` of
`account.urls`. Django will short-circuit on yours.

##### Using email address for authentication[¶](#using-email-address-for-authentication)

django-user-accounts allows you to use email addresses for authentication instead of usernames. You still have the option to continue using usernames or get rid of them entirely.

To enable email authentication do the following:

1. check your settings for the following values:

   ```
   ACCOUNT_EMAIL_UNIQUE = True
   ACCOUNT_EMAIL_CONFIRMATION_REQUIRED = True
   ```

   Note: If you need to change the value of `ACCOUNT_EMAIL_UNIQUE` make sure your database schema is modified to support a unique email column in `account_emailaddress`. `ACCOUNT_EMAIL_CONFIRMATION_REQUIRED` is optional, but highly recommended to be `True`.

2. define your own `LoginView` in your project:

   ```
   import account.forms
   import account.views

   class LoginView(account.views.LoginView):

       form_class = account.forms.LoginEmailForm
   ```

3. ensure `"account.auth_backends.EmailAuthenticationBackend"` is in `AUTHENTICATION_BACKENDS`

If you want to get rid of username you’ll need to do some extra work:

1. define your own `SignupForm` and `SignupView` in your project:

   ```
   # forms.py
   import account.forms

   class SignupForm(account.forms.SignupForm):

       def __init__(self, *args, **kwargs):
           super(SignupForm, self).__init__(*args, **kwargs)
           del self.fields["username"]

   # views.py
   import account.views

   import myproject.forms

   class SignupView(account.views.SignupView):

       form_class = myproject.forms.SignupForm
       identifier_field = 'email'

       def generate_username(self, form):
           # do something to generate a unique username (required by the
           # Django User model, unfortunately)
           username = "<magic>"
           return username
   ```

2. many places will rely on a username for a User instance. django-user-accounts provides a mechanism to add a level of indirection when representing the user in the user interface. Keep in mind not everything you include in your project will do what you expect when removing usernames entirely.
Set `ACCOUNT_USER_DISPLAY` in settings to a callable suitable for your site:

```
ACCOUNT_USER_DISPLAY = lambda user: user.email
```

Your Python code can use `user_display` to handle user representation:

```
from account.utils import user_display
user_display(user)
```

Your templates can use `{% user_display request.user %}`:

```
{% load account_tags %}
{% user_display request.user %}
```

##### Allow non-unique email addresses[¶](#allow-non-unique-email-addresses)

If your site requires that you support non-unique email addresses globally you can tweak the behavior to allow this.

Set `ACCOUNT_EMAIL_UNIQUE` to `False`. If you have already set up the tables for django-user-accounts you will need to migrate the `account_emailaddress` table:

```
ALTER TABLE "account_emailaddress" ADD CONSTRAINT "account_emailaddress_user_id_email_key" UNIQUE ("user_id", "email");
ALTER TABLE "account_emailaddress" DROP CONSTRAINT "account_emailaddress_email_key";
```

`ACCOUNT_EMAIL_UNIQUE = False` will allow duplicate email addresses per user, but not across users.

##### Including accounts in fixtures[¶](#including-accounts-in-fixtures)

If you want to include `account_account` in your fixture, you may notice that when you load that fixture there is a conflict because django-user-accounts defaults to creating a new account for each new user.

Example:

```
IntegrityError: Problem installing fixture \
    ...'/app/fixtures/some_users_and_accounts.json': \
    Could not load account.Account(pk=1): duplicate key value violates unique constraint \
    "account_account_user_id_key" DETAIL: Key (user_id)=(1) already exists.
```

To prevent this from happening, subclass `DiscoverRunner` and in `setup_test_environment` set `CREATE_ON_SAVE` to `False`.
For example, in a file called `lib/tests.py`:

```
from django.test.runner import DiscoverRunner
from account.conf import AccountAppConf

class MyTestDiscoverRunner(DiscoverRunner):

    def setup_test_environment(self, **kwargs):
        super(MyTestDiscoverRunner, self).setup_test_environment(**kwargs)
        aac = AccountAppConf()
        aac.CREATE_ON_SAVE = False
```

And in your settings:

```
TEST_RUNNER = "lib.tests.MyTestDiscoverRunner"
```

##### Enabling password expiration[¶](#enabling-password-expiration)

Password expiration is disabled by default. In order to enable password expiration you must add entries to your settings file:

```
ACCOUNT_PASSWORD_EXPIRY = 60*60*24*5   # seconds until pw expires; this example shows five days
ACCOUNT_PASSWORD_USE_HISTORY = True
```

and include `ExpiredPasswordMiddleware` with your middleware settings:

```
MIDDLEWARE_CLASSES = {
    ...
    "account.middleware.ExpiredPasswordMiddleware",
}
```

`ACCOUNT_PASSWORD_EXPIRY` indicates how long a password stays valid. After that period the password must be reset in order to view any page. If `ACCOUNT_PASSWORD_EXPIRY` is zero (0), passwords never expire.

If `ACCOUNT_PASSWORD_USE_HISTORY` is `False`, no history will be generated and password expiration WILL NOT be checked.

If `ACCOUNT_PASSWORD_USE_HISTORY` is `True`, a password history entry is created each time the user changes their password. This entry links the user with their most recent (encrypted) password and a timestamp. Unless deleted manually, `PasswordHistory` items are saved forever, allowing password history checking for new passwords.

For an authenticated user, `ExpiredPasswordMiddleware` prevents retrieving or posting to any page except the password change page and log out page when the user password is expired. However, if the user is “staff” (can access the Django admin site), the password check is skipped.
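The expiry rule described above — compare the age of the most recent password change against `ACCOUNT_PASSWORD_EXPIRY`, with `0` meaning "never expires" — can be sketched in plain Python. This is a hypothetical illustration of the rule, not the middleware's actual code:

```python
from datetime import datetime, timedelta, timezone

def password_is_expired(last_changed, expiry_seconds):
    """Return True if the password is older than the configured expiry.

    last_changed   -- datetime of the most recent password change
                      (the PasswordHistory timestamp)
    expiry_seconds -- ACCOUNT_PASSWORD_EXPIRY; 0 means passwords never expire
    """
    if expiry_seconds == 0:
        return False
    age = datetime.now(timezone.utc) - last_changed
    return age > timedelta(seconds=expiry_seconds)

# A password set six days ago with a five-day expiry is expired:
six_days_ago = datetime.now(timezone.utc) - timedelta(days=6)
print(password_is_expired(six_days_ago, 60 * 60 * 24 * 5))  # True
print(password_is_expired(six_days_ago, 0))                 # False (expiry disabled)
```

Note that the real check only runs when `ACCOUNT_PASSWORD_USE_HISTORY` is `True`, since without history entries there is no timestamp to compare against.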
#### Settings[¶](#settings)

##### `ACCOUNT_OPEN_SIGNUP`[¶](#account-open-signup)

Default: `True`

##### `ACCOUNT_LOGIN_URL`[¶](#account-login-url)

Default: `"account_login"`

##### `ACCOUNT_SIGNUP_REDIRECT_URL`[¶](#account-signup-redirect-url)

Default: `"/"`

##### `ACCOUNT_LOGIN_REDIRECT_URL`[¶](#account-login-redirect-url)

Default: `"/"`

##### `ACCOUNT_LOGOUT_REDIRECT_URL`[¶](#account-logout-redirect-url)

Default: `"/"`

##### `ACCOUNT_PASSWORD_CHANGE_REDIRECT_URL`[¶](#account-password-change-redirect-url)

Default: `"account_password"`

##### `ACCOUNT_PASSWORD_RESET_REDIRECT_URL`[¶](#account-password-reset-redirect-url)

Default: `"account_login"`

##### `ACCOUNT_PASSWORD_EXPIRY`[¶](#account-password-expiry)

Default: `0`

##### `ACCOUNT_PASSWORD_USE_HISTORY`[¶](#account-password-use-history)

Default: `False`

##### `ACCOUNT_REMEMBER_ME_EXPIRY`[¶](#account-remember-me-expiry)

Default: `60 * 60 * 24 * 365 * 10`

##### `ACCOUNT_USER_DISPLAY`[¶](#account-user-display)

Default: `lambda user: user.username`

##### `ACCOUNT_CREATE_ON_SAVE`[¶](#account-create-on-save)

Default: `True`

##### `ACCOUNT_EMAIL_UNIQUE`[¶](#account-email-unique)

Default: `True`

##### `ACCOUNT_EMAIL_CONFIRMATION_REQUIRED`[¶](#account-email-confirmation-required)

Default: `False`

##### `ACCOUNT_EMAIL_CONFIRMATION_EMAIL`[¶](#account-email-confirmation-email)

Default: `True`

##### `ACCOUNT_EMAIL_CONFIRMATION_EXPIRE_DAYS`[¶](#account-email-confirmation-expire-days)

Default: `3`

##### `ACCOUNT_EMAIL_CONFIRMATION_ANONYMOUS_REDIRECT_URL`[¶](#account-email-confirmation-anonymous-redirect-url)

Default: `"account_login"`

##### `ACCOUNT_EMAIL_CONFIRMATION_AUTHENTICATED_REDIRECT_URL`[¶](#account-email-confirmation-authenticated-redirect-url)

Default: `None`

##### `ACCOUNT_EMAIL_CONFIRMATION_URL`[¶](#account-email-confirmation-url)

Default: `"account_confirm_email"`

##### `ACCOUNT_SETTINGS_REDIRECT_URL`[¶](#account-settings-redirect-url)

Default: `"account_settings"`

##### `ACCOUNT_NOTIFY_ON_PASSWORD_CHANGE`[¶](#account-notify-on-password-change)

Default: `True`

##### `ACCOUNT_DELETION_MARK_CALLBACK`[¶](#account-deletion-mark-callback)

Default: `"account.callbacks.account_delete_mark"`

##### `ACCOUNT_DELETION_EXPUNGE_CALLBACK`[¶](#account-deletion-expunge-callback)

Default: `"account.callbacks.account_delete_expunge"`

##### `ACCOUNT_DELETION_EXPUNGE_HOURS`[¶](#account-deletion-expunge-hours)

Default: `48`

##### `ACCOUNT_HOOKSET`[¶](#account-hookset)

Default: `"account.hooks.AccountDefaultHookSet"`

This setting allows you to define your own hooks for specific functionality that django-user-accounts exposes. Point this to a class using a string and you can override the following methods:

* `send_invitation_email(to, ctx)`
* `send_confirmation_email(to, ctx)`
* `send_password_change_email(to, ctx)`
* `send_password_reset_email(to, ctx)`

##### `ACCOUNT_TIMEZONES`[¶](#account-timezones)

Default: `list(zip(pytz.all_timezones, pytz.all_timezones))`

##### `ACCOUNT_LANGUAGES`[¶](#account-languages)

See the full list in: <https://github.com/pinax/django-user-accounts/blob/master/account/language_list.py>

#### Templates[¶](#templates)

This document covers the implementation of django-user-accounts within Django templates. The [pinax-theme-bootstrap](https://github.com/pinax/pinax-theme-bootstrap) package provides a good [starting point](https://github.com/pinax/pinax-theme-bootstrap/tree/master/pinax_theme_bootstrap/templates/account) to build from.

Note, this document assumes you have read the installation docs.

##### Template Files[¶](#template-files)

By default, django-user-accounts expects the following templates. If you don’t use `pinax-theme-bootstrap`, then you will have to create these templates yourself.
Login/Registration/Signup Templates:

```
account/login.html
account/logout.html
account/signup.html
account/signup_closed.html
```

Email Confirmation Templates:

```
account/email_confirm.html
account/email_confirmation_sent.html
account/email_confirmed.html
```

Password Management Templates:

```
account/password_change.html
account/password_reset.html
account/password_reset_sent.html
account/password_reset_token.html
account/password_reset_token_fail.html
```

Account Settings:

```
account/settings.html
```

Emails (the actual emails themselves):

```
account/email/email_confirmation_message.txt
account/email/email_confirmation_subject.txt
account/email/invite_user.txt
account/email/invite_user_subject.txt
account/email/password_change.txt
account/email/password_change_subject.txt
account/email/password_reset.txt
account/email/password_reset_subject.txt
```

##### Template Tags[¶](#template-tags)

To use the built-in template tags you must first load them within the templates:

```
{% load account_tags %}
```

To display the current logged-in user:

```
{% user_display request.user %}
```

#### Signals[¶](#signals)

##### user_signed_up[¶](#user-signed-up)

Triggered when a user signs up successfully. Provides arguments `user` (User instance) and `form` (form instance).

##### user_sign_up_attempt[¶](#user-sign-up-attempt)

Triggered when a user tried but failed to sign up. Provides arguments `username` (string), `email` (string) and `result` (boolean, False if the form did not validate).

##### user_logged_in[¶](#user-logged-in)

Triggered when a user logs in successfully. Provides arguments `user` (User instance) and `form` (form instance).

##### user_login_attempt[¶](#user-login-attempt)

Triggered when a user tries and fails to log in. Provides arguments `username` (string) and `result` (boolean, False if the form did not validate).

##### signup_code_sent[¶](#signup-code-sent)

Triggered when a signup code was sent.
Provides argument `signup_code` (SignupCode instance).

##### signup_code_used[¶](#signup-code-used)

Triggered when a user used a signup code. Provides argument `signup_code_result` (SignupCodeResult instance).

##### email_confirmed[¶](#email-confirmed)

Triggered when a user confirmed an email. Provides argument `email_address` (EmailAddress instance).

##### email_confirmation_sent[¶](#email-confirmation-sent)

Triggered when an email confirmation was sent. Provides argument `confirmation` (EmailConfirmation instance).

##### password_changed[¶](#password-changed)

Triggered when a user changes their password. Provides argument `user` (User instance).

##### password_expired[¶](#password-expired)

Triggered when a user password is expired. Provides argument `user` (User instance).

#### Management Commands[¶](#management-commands)

##### user_password_history[¶](#user-password-history)

Creates an initial password history for all users who don’t already have a password history. Accepts two optional arguments:

```
-d --days <days>  - Sets the age of the current password. Default is 10 days.
-f --force        - Sets a new password history for ALL users, regardless of prior history.
```

##### user_password_expiry[¶](#user-password-expiry)

Creates a password expiry specific to one user. Password expiration checks use a global value (`ACCOUNT_PASSWORD_EXPIRY`) for the expiration time period. This value can be superseded on a per-user basis by creating a user password expiry. Requires one argument:

```
<username> [<username>]  - username(s) of the user(s) who need a specific password expiry.
```

Accepts one optional argument:

```
-e --expire <seconds>  - Sets the number of seconds for password expiration. Default is the current global ACCOUNT_PASSWORD_EXPIRY value.
```

After creation, you can modify user password expiration from the Django admin. Find the desired user at `/admin/account/passwordexpiry/` and change the `expiry` value.
#### Migration from Pinax[¶](#migration-from-pinax)

django-user-accounts is based on `pinax.apps.account` combining some of the supporting apps. django-email-confirmation, `pinax.apps.signup_codes` and bits of django-timezones have been merged to create django-user-accounts.

This document will outline the changes needed to migrate from Pinax to using this app in your Django project. If you are new to django-user-accounts then this guide will not be useful to you.

##### Database changes[¶](#database-changes)

Due to combining apps, the table layout when converting from Pinax has changed. We’ve also taken the opportunity to update the schema to take advantage of much saner defaults. Here is SQL to convert from Pinax to django-user-accounts.

###### PostgreSQL[¶](#postgresql)

```
ALTER TABLE "signup_codes_signupcode" RENAME TO "account_signupcode";
ALTER TABLE "signup_codes_signupcoderesult" RENAME TO "account_signupcoderesult";
ALTER TABLE "emailconfirmation_emailaddress" RENAME TO "account_emailaddress";
ALTER TABLE "emailconfirmation_emailconfirmation" RENAME TO "account_emailconfirmation";
DROP TABLE "account_passwordreset";
ALTER TABLE "account_signupcode" ALTER COLUMN "code" TYPE varchar(64);
ALTER TABLE "account_signupcode" ADD CONSTRAINT "account_signupcode_code_key" UNIQUE ("code");
ALTER TABLE "account_emailconfirmation" RENAME COLUMN "confirmation_key" TO "key";
ALTER TABLE "account_emailconfirmation" ALTER COLUMN "key" TYPE varchar(64);
ALTER TABLE account_emailconfirmation ADD COLUMN created timestamp with time zone;
UPDATE account_emailconfirmation SET created = sent;
ALTER TABLE account_emailconfirmation ALTER COLUMN created SET NOT NULL;
ALTER TABLE account_emailconfirmation ALTER COLUMN sent DROP NOT NULL;
```

If `ACCOUNT_EMAIL_UNIQUE` is set to `True` (the default value) you need:

```
ALTER TABLE "account_emailaddress" ADD CONSTRAINT "account_emailaddress_email_key" UNIQUE ("email");
ALTER TABLE "account_emailaddress" DROP CONSTRAINT
"emailconfirmation_emailaddress_user_id_email_key";
```

###### MySQL[¶](#mysql)

```
RENAME TABLE `emailconfirmation_emailaddress` TO `account_emailaddress`;
RENAME TABLE `emailconfirmation_emailconfirmation` TO `account_emailconfirmation`;
DROP TABLE account_passwordreset;
ALTER TABLE `account_emailconfirmation` CHANGE `confirmation_key` `key` VARCHAR(64) NOT NULL;
ALTER TABLE `account_emailconfirmation` ADD UNIQUE (`key`);
ALTER TABLE account_emailconfirmation ADD COLUMN created datetime NOT NULL;
UPDATE account_emailconfirmation SET created = sent;
ALTER TABLE `account_emailconfirmation` CHANGE `sent` `sent` DATETIME NULL;
```

If `ACCOUNT_EMAIL_UNIQUE` is set to `True` (the default value) you need:

```
ALTER TABLE `account_emailaddress` ADD UNIQUE (`email`);
ALTER TABLE account_emailaddress DROP INDEX user_id;
```

If you have installed `pinax.apps.signup_codes`:

```
RENAME TABLE `signup_codes_signupcode` TO `account_signupcode`;
RENAME TABLE `signup_codes_signupcoderesult` TO `account_signupcoderesult`;
```

##### URL changes[¶](#url-changes)

Here is a list of all URLs provided by django-user-accounts and how they map from Pinax. This assumes `account.urls` is mounted at `/account/` as it was in Pinax.

| Pinax | django-user-accounts |
| --- | --- |
| `/account/login/` | `/account/login/` |
| `/account/signup/` | `/account/signup/` |
| `/account/confirm_email/` | `/account/confirm_email/` |
| `/account/password_change/` | `/account/password/` [[1]](#id2) |
| `/account/password_reset/` | `/account/password/reset/` |
| `/account/password_reset_done/` | *removed* |
| `/account/password_reset_key/<key>/` | `/account/password/reset/<token>/` |

[[1]](#id1) When an anonymous user makes a GET request, they are redirected to `/account/password/reset/`.

##### View changes[¶](#view-changes)

All views have been converted to class-based views. This is a big departure from the traditional function-based views, but has the benefit of being much more flexible.
@@@ todo: table of changes

##### Settings changes[¶](#settings-changes)

We have cleaned up settings and set saner defaults used by django-user-accounts.

| Pinax | django-user-accounts |
| --- | --- |
| `ACCOUNT_OPEN_SIGNUP = True` | `ACCOUNT_OPEN_SIGNUP = True` |
| `ACCOUNT_UNIQUE_EMAIL = False` | `ACCOUNT_EMAIL_UNIQUE = True` |
| `EMAIL_CONFIRMATION_UNIQUE_EMAIL = False` | *removed* |

##### General changes[¶](#general-changes)

django-user-accounts requires Django 1.4. This means we can take advantage of many of the new features offered by Django. This app implements all of the best practices of Django 1.4. If there is something missing you should let us know!

#### FAQ[¶](#faq)

This document is a collection of frequently asked questions about django-user-accounts.

##### What is the difference between django-user-accounts and django.contrib.auth?[¶](#what-is-the-difference-between-django-user-accounts-and-django-contrib-auth)

django-user-accounts is designed to supplement `django.contrib.auth`. This app provides improved views for log in, log out and password reset, and adds sign up functionality. We try not to duplicate code when Django provides a good implementation. For example, we did not re-implement password reset, but simply provide an improved view which calls into the secure Django password reset code. `django.contrib.auth` still provides many of the supporting elements such as the `User` model, default authentication backends, helper functions and authorization. django-user-accounts takes your Django project from having simple log in, log out and password reset to the full-blown account management system that you would end up building anyway.

##### Why can email addresses get out of sync?[¶](#why-can-email-addresses-get-out-of-sync)

django-user-accounts stores email addresses in two locations. The default `User` model contains an `email` field, and django-user-accounts provides an `EmailAddress` model.
The latter is provided to support multiple email addresses per user. If you use a custom user model you can avoid the duplicate storage, because you can choose not to store an email address on the user model at all. If you use the default user model, take extra care: when editing email addresses in the shell or admin, make sure you update them in both places. Only the primary email address is stored on the `User` model.
Package ‘CorrBin’

September 29, 2023

Title Nonparametrics with Clustered Binary and Multinomial Data
Version 1.6.1
Date 2023-09-28
Encoding UTF-8
Depends R(>= 2.6.0)
Imports boot, combinat, geepack, dirmult, mvtnorm
Suggests lattice
Description Implements non-parametric analyses for clustered binary and multinomial data. The elements of the cluster are assumed exchangeable, and identical joint distribution (also known as marginal compatibility, or reproducibility) is assumed for clusters of different sizes. A trend test based on stochastic ordering is implemented. <NAME>, <NAME>. (2010) <doi:10.1093/biomet/asp077>; <NAME>, <NAME>, <NAME>, <NAME> (2016) <doi:10.1093/biomet/asw009>.
License GPL (>= 2)
LazyLoad yes
RoxygenNote 7.2.3
NeedsCompilation yes
Author <NAME> [aut, cre]
Maintainer <NAME> <<EMAIL>>
Repository CRAN
Date/Publication 2023-09-29 07:40:05 UTC

R topics documented:

CorrBin-package, CBData, CMData, dehp, egde, Extract, GEE.trend.test, jointprobs, mc.est.CMData, mc.test.chisq.CMData, multi.corr, multinom.gen, NOSTASOT, pdf, ran.CBData, ran.CMData, read.CBData, read.CMData, RS.trend.test, shelltox, SO.LRT, SO.mc.est, SO.trend.test, soControl, trend.test, uniprobs, unwrap.CBData

CorrBin-package    Nonparametrics for Correlated Binary and Multinomial Data

Description

This package implements nonparametric methods for analyzing exchangeable binary and multinomial data with variable cluster sizes, with emphasis on trend testing. The input should specify the treatment group, cluster-size, and the number of responses (i.e. the number of cluster elements with the outcome of interest) for each cluster.

Details

• The CBData/CMData and read.CBData/read.CMData functions create a ‘CBData’ or ‘CMData’ object used by the analysis functions.
• ran.CBData and ran.CMData can be used to generate random binary or multinomial data using a variety of distributions.
• mc.test.chisq tests the assumption of marginal compatibility underlying all the methods, while mc.est estimates the distribution of the number of responses under marginal compatibility.
• Finally, trend.test performs three different tests for trend along the treatment groups for binomial data.

Author(s)

<NAME>
Maintainer: <NAME> <<EMAIL>>

References

Szabo A, Ge<NAME>. (2009) On the Use of Stochastic Ordering to Test for Trend with Clustered Binary Data. Biometrika
Stefanescu, C. & <NAME>. (2003) Likelihood inference for exchangeable binary data with varying cluster sizes. Biometrics, 59, 18-24
Pang, Z. & Kuk, A. (2007) Test of marginal compatibility and smoothing methods for exchangeable binary data with unequal cluster sizes. Biometrics, 63, 218-227

CBData    Create a ‘CBData’ object from a data frame.

Description

The CBData function creates an object of class CBData that is used in further analyses. It identifies the variables that define treatment group, clustersize and the number of responses.

Usage

CBData(x, trt, clustersize, nresp, freq = NULL)

Arguments

x    a data frame with one row representing a cluster or potentially a set of clusters of the same size and number of responses
trt    the name of the variable that defines treatment group
clustersize    the name of the variable that defines cluster size
nresp    the name of the variable that defines the number of responses in the cluster
freq    the name of the variable that defines the number of clusters represented by the data row. If NULL, then each row is assumed to correspond to one cluster.

Value

A data frame with each row representing all the clusters with the same trt/size/number of responses, and standardized variable names:

Trt    factor, the treatment group
ClusterSize    numeric, the cluster size
NResp    numeric, the number of responses
Freq    numeric, number of clusters with the same values

Author(s)

<NAME>

See Also

read.CBData for creating a CBData object directly from a file.
Examples

data(shelltox)
sh <- CBData(shelltox, trt="Trt", clustersize="ClusterSize", nresp="NResp")
str(sh)

CMData    Create a ‘CMData’ object from a data frame.

Description

The CMData function creates an object of class CMData that is used in further analyses. It identifies the variables that define treatment group, clustersize and the number of responses for each outcome type.

Usage

CMData(x, trt, nresp, clustersize = NULL, freq = NULL)

Arguments

x    a data frame with one row representing a cluster or potentially a set of clusters of the same size and number of responses for each outcome
trt    the name of the variable that defines treatment group
nresp    either a character vector with the names or a numeric vector with indices of the variables that define the number of responses in the cluster for each outcome type. If clustersize is NULL, then it will be calculated assuming that the nresp vector contains all the possible outcomes. If clustersize is given, then an additional category is created for the excess cluster members.
clustersize    the name or index of the variable that defines cluster size, or NULL. If NULL, its value will be calculated by adding the counts from the nresp variables. If defined, an additional response type will be created for the excess cluster members.
freq    the name or index of the variable that defines the number of clusters represented by the data row. If NULL, then each row is assumed to correspond to one cluster.

Value

A data frame with each row representing all the clusters with the same trt/size/number of responses, and standardized variable names:

Trt    factor, the treatment group
ClusterSize    numeric, the cluster size
NResp.1--NResp.K+1    numeric, the number of responses for each of the K+1 outcome types
Freq    numeric, number of clusters with the same values

Author(s)

<NAME>

See Also

read.CMData for creating a CMData object directly from a file.
Examples

data(dehp)
dehp <- CMData(dehp, trt="Trt", nresp=c("NResp.1","NResp.2","NResp.3"))
str(dehp)

dehp    Developmental toxicology study of DEHP in mice

Description

This data set is based on a National Toxicology Program study on diethylhexyl phthalate, DEHP. Pregnant CD-1 mice were randomly assigned to receive 0, 250, 500, 1000, or 1500 ppm of DEHP in their feed during gestational days 0-17. The uterine contents of the mice were examined for toxicity endpoints prior to normal delivery. The possible outcomes are 1) malformation, 2) death or resorption, 3) no adverse event.

Usage

data(dehp)

Format

A ’CMData’ object, that is a data frame with the following variables

Trt    factor giving treatment group
ClusterSize    the size of the litter
NResp.1    the number of fetuses with a type 1 outcome (malformation)
NResp.2    the number of fetuses with a type 2 outcome (death or resorption)
NResp.3    the number of fetuses with a type 3 outcome (normal)
Freq    the number of litters with the given ClusterSize/NResp.1-NResp.3 combination

Source

National Toxicology Program, NTP Study TER84064.

References

<NAME>., <NAME>., <NAME>., and <NAME>. (1988). Developmental toxicity evaluation of dietary di(2-ethylhexyl)phthalate in Fischer 344 rats and CD-1 mice. Fundamental and Applied Toxicology 10, 395-412.

Examples

data(dehp)
library(lattice)
pl <- xyplot(NResp.1/ClusterSize + NResp.2/ClusterSize + NResp.3/ClusterSize ~ Trt,
             data=dehp, outer=TRUE, type=c("p","a"), jitter.x=TRUE)
pl$condlevels[[1]] <- c("Malformed", "Dead", "Normal")
print(pl)

egde    EGDE data

Description

The data set is based on a developmental toxicity experiment on the effect of ethylene glycol diethyl ether (EGDE) on fetal development of New Zealand white rabbits. In the study, four groups of pregnant does were randomly assigned to dose levels 0, 25, 50, and 100 milligrams per kilogram body weight of EGDE.
For each litter and at each dose level, the adverse response used is the combined number of fetal malformations and fetal deaths.

Usage

data(egde)

Format

A ’CBData’ object, that is a data frame with the following variables

Trt    factor giving treatment group
ClusterSize    the size of the litter
NResp    the number of affected fetuses
Freq    the number of litters with the given ClusterSize/NResp combination

Source

<NAME>., <NAME>., and <NAME>. (1995). Statistical analysis of overdispersed multinomial data from developmental toxicity studies. In Statistics in Toxicology, <NAME>, pp. 151–179. New York: Oxford University Press.

Examples

data(egde)
stripchart(I(NResp/ClusterSize)~Trt, cex=sqrt(egde$Freq), data=egde, pch=1,
           method="jitter", vertical=TRUE, ylab="Proportion affected")

Extract    Extract from a CBData or CMData object

Description

The extracting syntax works as for [.data.frame, and in general the returned object is not a CBData or CMData object. However if the columns are not modified, then the result is still a CBData or CMData object with appropriate attributes preserved, and the unused levels of treatment groups dropped.

Usage

## S3 method for class 'CBData'
x[i, j, drop]
## S3 method for class 'CMData'
x[i, j, drop]

Arguments

x    a CBData or CMData object
i    numeric, row index of extracted values
j    numeric, column index of extracted values
drop    logical. If TRUE the result is coerced to the lowest possible dimension. The default is the same as for [.data.frame: to drop if only one column is left, but not to drop if only one row is left.

Value

a CBData or CMData object

Author(s)

<NAME>

See Also

CBData, CMData

Examples

data(shelltox)
str(shelltox[1:5,])
str(shelltox[1:5, 2:4])
data(dehp)
str(dehp[1:5,])
str(dehp[1:5, 2:4])

GEE.trend.test    GEE-based trend test

Description

GEE.trend.test implements a GEE-based test for linear increasing trend for correlated binary data.
Usage

GEE.trend.test(cbdata, scale.method = c("fixed", "trend", "all"))

Arguments

cbdata    a CBData object
scale.method    character string specifying the assumption about the change in the overdispersion (scale) parameter across the treatment groups: "fixed" - constant scale parameter (default); "trend" - linear trend for the log of the scale parameter; "all" - separate scale parameter for each group.

Details

The actual work is performed by the geese function of the geepack library. This function only provides a convenient wrapper to obtain the results in the same format as RS.trend.test and SO.trend.test. The implementation aims for testing for increasing trend, and a one-sided p-value is reported. The test statistic is asymptotically normally distributed, and a two-sided p-value can be easily computed if needed.

Value

A list with components

statistic    numeric, the value of the test statistic
p.val    numeric, asymptotic one-sided p-value of the test

Author(s)

<NAME>, <EMAIL>

See Also

RS.trend.test, SO.trend.test for alternative tests; CBData for constructing a CBData object.

Examples

data(shelltox)
GEE.trend.test(shelltox, "trend")

jointprobs    Estimate joint event probabilities for multinomial data

Description

An exchangeable multinomial distribution with K + 1 categories O_1, . . . , O_{K+1} can be parameterized by the joint probabilities of events

τ_{r_1,...,r_K | n} = P(X_1 = · · · = X_{r_1} = O_1, . . . , X_{r_1+···+r_{K−1}+1} = · · · = X_{r_1+···+r_K} = O_K)

where r_i ≥ 0 and r_1 + · · · + r_K ≤ n. The jointprobs function estimates these probabilities under various settings. Note that when some of the r_i’s equal zero, then no restriction on the number of outcomes of the corresponding type is imposed, so the resulting probabilities are marginal.
Usage

jointprobs(cmdata, type = c("averaged", "cluster", "mc"))

Arguments

cmdata    a CMData object
type    character string describing the desired type of estimate:
• "averaged" - averaged over the observed cluster-size distribution within each treatment
• "cluster" - separately for each cluster size within each treatment
• "mc" - assuming marginal compatibility, i.e. that τ does not depend on the cluster size

Value

a list with an array of estimates for each treatment. For a multinomial distribution with K + 1 categories the arrays will have either K + 1 or K dimensions, depending on whether cluster-size specific estimates (type="cluster") or pooled estimates (type="averaged" or type="mc") are requested. For the cluster-size specific estimates the first dimension is the cluster size. Each additional dimension is a possible outcome.

See Also

mc.est for estimating the distribution under marginal compatibility, uniprobs and multi.corr for extracting the univariate marginal event probabilities, and the within-multinomial correlations from the joint probabilities.

Examples

data(dehp)
# averaged over cluster-sizes
tau.ave <- jointprobs(dehp, type="ave")
# averaged P(X1=X2=O1, X3=O2) in the 1500 dose group
tau.ave[["1500"]]["2","1"] # there are two type-1, and one type-2 outcome
# plot P(X1=O1) - the marginal probability of a type-1 event over cluster-sizes
tau <- jointprobs(dehp, type="cluster")
ests <- as.data.frame(lapply(tau, function(x)x[,"1","0"]))
matplot(ests, type="b")

mc.est.CMData    Distribution of the number of responses assuming marginal compatibility

Description

The mc.est function estimates the distribution of the number of responses in a cluster under the assumption of marginal compatibility: information from all cluster sizes is pooled. The estimation is performed independently for each treatment group.

Usage

## S3 method for class 'CMData'
mc.est(object, eps = 1e-06, ...)
## S3 method for class 'CBData'
mc.est(object, ...)
mc.est(object, ...)
Arguments

object    a CBData or CMData object
eps    numeric; EM iterations proceed until the sum of squared changes falls below eps
...    other potential arguments; not currently used

Details

The EM algorithm given by Stefanescu and Turnbull (2003) is used for the binary data.

Value

For CMData: A data frame giving the estimated pdf for each treatment and clustersize. The probabilities add up to 1 for each Trt/ClusterSize combination. It has the following columns:

Prob    numeric, the probability of NResp responses in a cluster of size ClusterSize in group Trt
Trt    factor, the treatment group
ClusterSize    numeric, the cluster size
NResp.1 - NResp.K    numeric, the number of responses of each type

For CBData: A data frame giving the estimated pdf for each treatment and clustersize. The probabilities add up to 1 for each Trt/ClusterSize combination. It has the following columns:

Prob    numeric, the probability of NResp responses in a cluster of size ClusterSize in group Trt
Trt    factor, the treatment group
ClusterSize    numeric, the cluster size
NResp    numeric, the number of responses

Note

For multinomial data, the implementation is currently written in R, so it is not very fast.

Author(s)

<NAME>

References

<NAME>, <NAME>, <NAME>, <NAME> (2016) On Exchangeable Multinomial Distributions. Biometrika 103(2), 397-408.
<NAME>. & <NAME>. (2003) Likelihood inference for exchangeable binary data with varying cluster sizes. Biometrics, 59, 18-24

Examples

data(dehp)
dehp.mc <- mc.est(subset(dehp, Trt=="0"))
subset(dehp.mc, ClusterSize==2)
data(shelltox)
sh.mc <- mc.est(shelltox)
library(lattice)
xyplot(Prob~NResp|factor(ClusterSize), groups=Trt, data=sh.mc, subset=ClusterSize>0,
       type="l", as.table=TRUE, auto.key=list(columns=4, lines=TRUE, points=FALSE),
       xlab="Number of responses", ylab="Probability P(R=r|N=n)")

mc.test.chisq.CMData    Test the assumption of marginal compatibility

Description

mc.test.chisq tests whether the assumption of marginal compatibility is violated in the data.
Usage

## S3 method for class 'CMData'
mc.test.chisq(object, ...)
## S3 method for class 'CBData'
mc.test.chisq(object, ...)
mc.test.chisq(object, ...)

Arguments

object    a CBData or CMData object
...    other potential arguments; not currently used

Details

The assumption of marginal compatibility (AKA reproducibility or interpretability) implies that the marginal probability of response does not depend on clustersize. Stefanescu and Turnbull (2003), and Pang and Kuk (2007) developed a Cochran-Armitage type test for trend in the marginal probability of success as a function of the clustersize. mc.test.chisq implements a generalization of that test, extending it to multiple treatment groups.

Value

A list with the following components:

overall.chi    the test statistic; sum of the statistics for each group
overall.p    p-value of the test
individual    a list of the results of the test applied to each group separately:
• chi.sq the test statistic for the group
• p p-value for the group

Author(s)

<NAME>

References

<NAME>. & <NAME>. (2003) Likelihood inference for exchangeable binary data with varying cluster sizes. Biometrics, 59, 18-24
Pang, Z. & <NAME>. (2007) Test of marginal compatibility and smoothing methods for exchangeable binary data with unequal cluster sizes. Biometrics, 63, 218-227

See Also

mc.est for estimating the distribution under marginal compatibility.

Examples

data(dehp)
mc.test.chisq(dehp)
data(shelltox)
mc.test.chisq(shelltox)

multi.corr    Extract correlation coefficients from joint probability arrays

Description

Calculates the within- and between-outcome correlation coefficients for exchangeable correlated multinomial data based on joint probability estimates calculated by the jointprobs function. These determine the variance inflation due to the cluster structure.
Usage

multi.corr(jp, type = attr(jp, "type"))

Arguments

jp    the output of jointprobs - a list of joint probability arrays by treatment
type    one of c("averaged","cluster","mc") - the type of joint probability. By default, the type attribute of jp is used.

Details

If R_i and R_j are the numbers of events of type i and j, respectively, in a cluster of size n, then

Var(R_i) = n p_i (1 − p_i)(1 + (n − 1)φ_ii)
Cov(R_i, R_j) = −n p_i p_j (1 + (n − 1)φ_ij)

where p_i and p_j are the marginal event probabilities and φ_ij are the correlation coefficients computed by multi.corr.

Value

a list of estimated correlation matrices by treatment group. If cluster-size specific estimates were requested (type="cluster"), then each list element is a list of these matrices for each cluster size.

See Also

jointprobs for calculating the joint probability arrays

Examples

data(dehp)
tau <- jointprobs(dehp, type="averaged")
multi.corr(tau)

multinom.gen    Functions for generating multinomial outcomes

Description

These are built-in functions to be used by ran.CMData for generating random multinomial data.

Usage

mg.Resample(n, clustersizes, param)
mg.DirMult(n, clustersizes, param)
mg.LogitNorm(n, clustersizes, param)
mg.MixMult(n, clustersizes, param)

Arguments

n    number of independent clusters to generate
clustersizes    an integer vector specifying the sizes of the clusters
param    a list of parameters for each specific generator

Details

For mg.Resample: the param list should be list(data=...), in which the CMData object to be resampled is passed.
For mg.DirMult: the param list should be list(shape=...), in which the parameter vector of the Dirichlet distribution is passed (see rdirichlet).
For mg.LogitNorm: the param list should be list(mu=...,sigma=...), in which the mean vector and covariance matrix of the underlying Normal distribution are passed. If sigma is NULL (or missing), then an identity matrix is assumed. They should have K-1 dimensions for a K-variate multinomial.
For mg.MixMult: the param list should be list(q=...,m=...), in which the vector of mixture probabilities q and the matrix m of logit-transformed means of each component are passed. For a K-variate multinomial, the matrix m should have K-1 columns and length(q) rows.

NOSTASOT    Finding the NOSTASOT dose

Description

The NOSTASOT dose is the No-Statistical-Significance-Of-Trend dose – the largest dose at which no trend in the rate of response has been observed. It is often used to determine a safe dosage level for a potentially toxic compound.

Usage

NOSTASOT(
  cbdata,
  test = c("RS", "GEE", "GEEtrend", "GEEall", "SO"),
  exact = test == "SO",
  R = 100,
  sig.level = 0.05,
  control = soControl()
)

Arguments

cbdata    a CBData object
test    character string defining the desired test statistic. See trend.test for details.
exact    logical, should an exact permutation test be performed. See trend.test for details.
R    integer, number of permutations for the exact test
sig.level    numeric between 0 and 1, significance level of the test
control    an optional list of control settings for the stochastic order ("SO") test, usually a call to soControl. See there for the names of the settable control values and their effect.

Details

A series of hypotheses about the presence of an increasing trend overall, with all but the last group, all but the last two groups, etc. are tested. Since this set of hypotheses forms a closed family, one can test these hypotheses in a step-down manner with the same sig.level type I error rate at each step and still control the family-wise error rate. The NOSTASOT dose is the largest dose at which the trend is not statistically significant. If the trend test is not significant with all the groups included, the largest dose is the NOSTASOT dose. If the testing sequence goes down all the way to two groups, and a significant trend is still detected, the lowest dose is the NOSTASOT dose.
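The step-down rule in the Details above can be sketched in a few lines. This is a hypothetical Python illustration, not part of the package; it assumes the one-sided trend-test p-values for the nested group subsets have already been computed (CorrBin computes them internally from the chosen trend test):

```python
def nostasot_dose(doses, pvals, sig_level=0.05):
    """Closed-family step-down search for the NOSTASOT dose.

    doses -- dose levels ordered from lowest (control) to highest
    pvals -- pvals[i] is the trend-test p-value computed using groups
             doses[0..i]; pvals[0] is unused (a single group has no trend)
    Returns the largest dose at which no significant trend is seen.
    """
    # start with all groups; after each rejection drop the highest dose
    for i in range(len(doses) - 1, 0, -1):
        if pvals[i] > sig_level:   # trend not significant for doses[0..i]
            return doses[i]
    return doses[0]                # significant all the way down to two groups

# significant with all groups and without the top dose,
# but not once the two highest doses are removed:
print(nostasot_dose([0, 25, 50, 100], [None, 0.30, 0.04, 0.01]))  # -> 25
```

Because the hypotheses are nested, stopping at the first non-rejection keeps the family-wise error rate at sig.level, which is exactly why the closed testing argument in the Details works.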
This assumes that the lowest dose is a control group, and this convention might not be meaningful otherwise.

Value

a list with two components

NOSTASOT    character string identifying the NOSTASOT dose.
p    numeric vector of the p-values of the tests actually performed. The last element corresponds to all doses included, and will not be missing. p-values for tests that were not actually performed due to the procedure stopping are set to NA.

Author(s)

<NAME>, <EMAIL>

References

<NAME>.; <NAME>. & <NAME>. (1985) Testing the statistical certainty of a response to increasing doses of a drug. Biometrics 41, 295-301.

See Also

trend.test for details about the available trend tests.

Examples

data(shelltox)
NOSTASOT(shelltox, test="RS")

pdf    Parametric distributions for correlated binary data

Description

qpower.pdf and betabin.pdf calculate the probability distribution function for the number of responses in a cluster of the q-power and beta-binomial distributions, respectively.

Usage

betabin.pdf(p, rho, n)
qpower.pdf(p, rho, n)

Arguments

p    numeric, the probability of success.
rho    numeric between 0 and 1 inclusive, the within-cluster correlation.
n    integer, cluster size.

Details

The pdf of the q-power distribution is

P(X = x) = choose(n,x) sum_{k=0}^{x} (−1)^k choose(x,k) q^((n−x+k)^γ),  x = 0, . . . , n,

where q = 1 − p, and γ is linked to the intra-cluster correlation via

ρ = (q^(2^γ) − q^2) / (q − q^2).

The pdf of the beta-binomial distribution is

P(X = x) = choose(n,x) B(α + x, n + β − x) / B(α, β),  x = 0, . . . , n,

where α = p(1 − ρ)/ρ and β = (1 − p)(1 − ρ)/ρ.

Value

a numeric vector of length n + 1 giving the value of P(X = x) for x = 0, . . . , n.

Author(s)

<NAME>, <EMAIL>

References

<NAME> (2004) Litter-based approach to risk assessment in developmental toxicity studies via a power family of completely monotone functions. Applied Statistics, 52, 51-61.
<NAME>. (1975) The Analysis of Binary Responses from Toxicological Experiments Involving Reproduction and Teratogenicity. Biometrics, 31, 949-952.
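As a cross-check of the two densities above, here is a minimal Python sketch (hypothetical, not the package code; the ρ-to-γ inversion follows the q-power correlation relation ρ = (q^(2^γ) − q^2)/(q − q^2)). Both functions should return probability vectors that sum to one:

```python
import math

def beta_fn(a, b):
    # Beta function B(a, b) expressed through the Gamma function
    return math.gamma(a) * math.gamma(b) / math.gamma(a + b)

def betabin_pdf(p, rho, n):
    # beta-binomial with alpha = p(1-rho)/rho, beta = (1-p)(1-rho)/rho
    a = p * (1 - rho) / rho
    b = (1 - p) * (1 - rho) / rho
    return [math.comb(n, x) * beta_fn(a + x, n + b - x) / beta_fn(a, b)
            for x in range(n + 1)]

def qpower_pdf(p, rho, n):
    # q-power: P(X=x) = choose(n,x) * sum_k (-1)^k choose(x,k) q^((n-x+k)^gamma)
    q = 1 - p
    # invert rho = (q^(2^gamma) - q^2) / (q - q^2) for gamma (assumed relation)
    gam = math.log2(math.log(rho * (q - q**2) + q**2) / math.log(q))
    return [math.comb(n, x)
            * sum((-1) ** k * math.comb(x, k) * q ** ((n - x + k) ** gam)
                  for k in range(x + 1))
            for x in range(n + 1)]
```

With p=0.3, rho=0.4, n=10 (the values used in the plotted example in the manual), both vectors sum to 1 up to floating-point error.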
See Also

ran.CBData for generating an entire dataset using these functions

Examples

# the distributions have quite different shapes,
# with q-power assigning more weight to the "all affected" event than other distributions
plot(0:10, betabin.pdf(0.3, 0.4, 10), type="o", ylim=c(0,0.34),
     ylab="Density", xlab="Number of responses out of 10")
lines(0:10, qpower.pdf(0.3, 0.4, 10), type="o", col="red")

ran.CBData    Generate random correlated binary data

Description

ran.CBData generates a random CBData object from a given two-parameter distribution.

Usage

ran.CBData(
  sample.sizes,
  p.gen.fun = function(g) 0.3,
  rho.gen.fun = function(g) 0.2,
  pdf.fun = qpower.pdf
)

Arguments

sample.sizes    a dataset with variables Trt, ClusterSize and Freq giving the number of clusters to be generated for each Trt/ClusterSize combination.
p.gen.fun    a function of one parameter that generates the value of the first parameter of pdf.fun (p) given the group number.
rho.gen.fun    a function of one parameter that generates the value of the second parameter of pdf.fun (rho) given the group number.
pdf.fun    a function of three parameters (p, rho, n) giving the PDF of the number of responses in a cluster given the two parameters (p, rho), and the cluster size (n). Functions implementing two common distributions: the beta-binomial (betabin.pdf) and q-power (qpower.pdf) are provided in the package.

Details

p.gen.fun and rho.gen.fun are functions that generate the parameter values for each treatment group; pdf.fun is a function generating the pdf of the number of responses given the two parameters p and rho, and the cluster size n. p.gen.fun and rho.gen.fun expect the parameter value of 1 to represent the first group, 2 - the second group, etc.

Value

a CBData object with randomly generated number of responses with sample sizes specified in the call.
Author(s)

<NAME>, <EMAIL>

See Also

betabin.pdf and qpower.pdf

Examples

set.seed(3486)
ss <- expand.grid(Trt=0:3, ClusterSize=5, Freq=4)
# Trt is converted to a factor
rd <- ran.CBData(ss, p.gen.fun=function(g)0.2+0.1*g)
rd

ran.CMData    Generate a random CMData object

Description

Generates random exchangeably correlated multinomial data based on a parametric distribution or using resampling. The Dirichlet-Multinomial, Logistic-Normal multinomial, and discrete mixture multinomial parametric distributions are implemented. All observations will be assigned to the same treatment group, and there is no guarantee of a specific order of the observations in the output.

Usage

ran.CMData(n, ncat, clustersize.gen, distribution)

Arguments

n    number of independent clusters to generate
ncat    number of response categories
clustersize.gen    either an integer vector specifying the sizes of the clusters, which will be recycled to achieve the target number of clusters n; or a function with one parameter that returns an integer vector of cluster sizes when the target number of clusters n is passed to it as a parameter
distribution    a list with two components: "multinom.gen" and "param" that specifies the generation process for each cluster. The "multinom.gen" component should be a function of three parameters: number of clusters, vector of cluster sizes, and parameter list, that returns a matrix of response counts where each row is a cluster and each column is the number of responses of a given type. The "param" component should specify the list of parameters needed by the multinom.gen function.
Value

a CMData object with randomly generated number of responses with sample sizes specified in the call

Author(s)

<NAME>

See Also

CMData for details about CMData objects; multinom.gen for built-in generating functions

Examples

# Resample from the dehp dataset
data(dehp)
ran.dehp <- ran.CMData(20, 3, 10, list(multinom.gen=mg.Resample, param=list(data=dehp)))

# Dirichlet-Multinomial distribution with two treatment groups and random cluster sizes
binom.cs <- function(n){rbinom(n, p=0.3, size=10)+1}
dm1 <- ran.CMData(15, 4, binom.cs, list(multinom.gen=mg.DirMult, param=list(shape=c(2,3,2,1))))
dm2 <- ran.CMData(15, 4, binom.cs, list(multinom.gen=mg.DirMult, param=list(shape=c(1,1,4,1))))
ran.dm <- rbind(dm1, dm2)

# Logit-Normal multinomial distribution
ran.ln <- ran.CMData(13, 3, 3, list(multinom.gen=mg.LogitNorm,
                                    param=list(mu=c(-1, 1), sigma=matrix(c(1,0.8,0.8,2), nr=2))))

# Mixture of two multinomial distributions
unif.cs <- function(n){sample(5:9, size=n, replace=TRUE)}
ran.mm <- ran.CMData(6, 3, unif.cs, list(multinom.gen=mg.MixMult,
                                         param=list(q=c(0.8,0.2), m=rbind(c(-1,0), c(0,2)))))

read.CBData    Read data from external file into a CBData object

Description

A convenience function to read data from a specially structured file directly into a CBData object.

Usage

read.CBData(file, with.freq = TRUE, ...)

Arguments

file    name of file with data. The first column should contain the treatment group, the second the size of the cluster, the third the number of responses in the cluster. Optionally, a fourth column could give the number of times the given combination occurs in the data.
with.freq    logical indicator of whether a frequency variable is present in the file
...    additional arguments passed to read.table

Value

a CBData object

Author(s)

<NAME>

See Also

CBData

read.CMData    Read data from external file into a CMData object

Description

A convenience function to read data from a specially structured file directly into a CMData object.
There are two basic data format options: either the counts of responses of all categories are given (and the cluster size is the sum of these counts), or the total cluster size is given with the counts of all but one category. The first column should always give the treatment group, then either the counts for each category (first option, chosen by setting with.clustersize = FALSE), or the size of the cluster followed by the counts for all but one category (second option, chosen by setting with.clustersize = TRUE). Optionally, a last column could give the number of times the given combination occurs in the data.

Usage
read.CMData(file, with.clustersize = TRUE, with.freq = TRUE, ...)

Arguments
file              name of file with data. The data in the file should be structured as described above.
with.clustersize  logical indicator of whether a cluster size variable is present in the file
with.freq         logical indicator of whether a frequency variable is present in the file
...               additional arguments passed to read.table

Value
a CMData object

Author(s)
<NAME>

See Also
CMData

RS.trend.test           Rao-Scott trend test

Description
RS.trend.test implements the Rao-Scott adjusted Cochran-Armitage test for linear increasing trend with correlated data.

Usage
RS.trend.test(cbdata)

Arguments
cbdata  a CBData object

Details
The test is based on calculating a design effect for each cluster by dividing the observed variability by the one expected under independence. The number of responses and the cluster size are then divided by the design effect, and a Cochran-Armitage type test statistic is computed based on these adjusted values. The implementation aims for testing for increasing trend, and a one-sided p-value is reported. The test statistic is asymptotically normally distributed, and a two-sided p-value can be easily computed if needed.
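The two-sided computation mentioned in the Details can be sketched directly from the returned components (statistic, p.val - see the Value section). This is a sketch assuming the functions are attached from the package this manual documents (on CRAN as CorrBin) and that the statistic is asymptotically standard normal, as stated above.

```r
# Sketch: derive a two-sided p-value from the one-sided Rao-Scott result,
# assuming the test statistic is asymptotically standard normal (see Details).
library(CorrBin)
data(shelltox)
rs <- RS.trend.test(shelltox)
one.sided <- rs$p.val                        # reported one-sided p-value
two.sided <- 2 * pnorm(-abs(rs$statistic))   # two-sided counterpart
```

For an increasing-trend alternative the reported one-sided p.val is usually what is wanted; the two-sided value is only needed when the direction of the trend is not prespecified.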
Value
A list with components
statistic  numeric, the value of the test statistic
p.val      numeric, asymptotic one-sided p-value of the test

Author(s)
<NAME>, <EMAIL>

References
Rao, <NAME>. & <NAME>. (1992) A Simple Method for the Analysis of Clustered Data. Biometrics, 48, 577-586.

See Also
SO.trend.test, GEE.trend.test for alternative tests; CBData for constructing a CBData object.

Examples
data(shelltox)
RS.trend.test(shelltox)

shelltox                Shell Toxicology data

Description
This is a classical developmental toxicology data set. Pregnant banded Dutch rabbits were treated with one of four levels of a chemical. The actual doses are not known; instead the groups are designated as Control, Low, Medium, and High. Before term the animals were sacrificed, and the total number of fetuses, as well as the number affected by the treatment, was recorded.

Usage
data(shelltox)

Format
A 'CBData' object, that is a data frame with the following variables
Trt          factor giving treatment group
ClusterSize  the size of the litter
NResp        the number of affected fetuses
Freq         the number of litters with the given ClusterSize/NResp combination

Source
<NAME>. (1982) Analysis of proportions of affected foetuses in teratological experiments. Biometrics, 38, 361-370.
This data set has been analyzed (and listed) in numerous papers, including
<NAME>. & <NAME>. (1992) A Simple Method for the Analysis of Clustered Data. Biometrics, 48, 577-586.
<NAME>. & <NAME>. (1996) Tests of Independence, Treatment Heterogeneity, and Dose-Related Trend With Exchangeable Binary Data. Journal of the American Statistical Association, 91, 1602-1610.
<NAME>. (2003) Analysis of the Binary Littermate Data in the One-Way Layout. Biometrical Journal, 45, 195-206.
Examples
data(shelltox)
stripchart(I(NResp/ClusterSize)~Trt, cex=sqrt(shelltox$Freq), data=shelltox, pch=1,
           method="jitter", vertical=TRUE, ylab="Proportion affected")

SO.LRT                  Likelihood-ratio test statistic

Description
SO.LRT computes the likelihood ratio test statistic for stochastic ordering against equality assuming marginal compatibility for both alternatives. Note that this statistic does not have a χ2 distribution, so the p-value computation is not straightforward. The SO.trend.test function implements a permutation-based evaluation of the p-value for the likelihood-ratio test.

Usage
SO.LRT(cbdata, control = soControl())

Arguments
cbdata   a CBData object
control  an optional list of control settings, usually a call to soControl. See there for the names of the settable control values and their effect.

Value
The value of the likelihood ratio test statistic is returned with two attributes:
ll0  the log-likelihood under H0 (equality)
ll1  the log-likelihood under Ha (stochastic order)

Author(s)
<NAME>

See Also
SO.trend.test, soControl

Examples
data(shelltox)
LRT <- SO.LRT(shelltox, control=soControl(max.iter = 100, max.directions = 50))
LRT

SO.mc.est               Order-restricted MLE assuming marginal compatibility

Description
SO.mc.est computes the nonparametric maximum likelihood estimate of the distribution of the number of responses in a cluster P(R = r|n) under a stochastic ordering constraint. Umbrella ordering can be specified using the turn parameter.

Usage
SO.mc.est(cbdata, turn = 1, control = soControl())

Arguments
cbdata   an object of class CBData.
turn     integer specifying the peak of the umbrella ordering (see Details). The default corresponds to a non-decreasing order.
control  an optional list of control settings, usually a call to soControl. See there for the names of the settable control values and their effect.

Details
Two different algorithms: EM and ISDM are implemented.
In general, ISDM (the default) should be faster, though its performance depends on the tuning parameter max.directions: values that are too low or too high slow the algorithm down.
SO.mc.est allows extension to an umbrella ordering: D1 ≥st ... ≥st Dk ≤st ... ≤st Dn by specifying the value of k as the turn parameter. This is an experimental feature, and at this point none of the other functions can handle umbrella orderings.

Value
A list with components:
Components Q and D are unlikely to be needed by the user.
MLest     data frame with the maximum likelihood estimates of P(Ri = r|n)
Q         numeric matrix; estimated weights for the mixing distribution
D         numeric matrix; directional derivative of the log-likelihood
loglik    the achieved value of the log-likelihood
converge  a 2-element vector with the achieved relative error and the performed number of iterations

Author(s)
<NAME>, <EMAIL>

References
Szabo A, George EO. (2010) On the Use of Stochastic Ordering to Test for Trend with Clustered Binary Data. Biometrika 97(1), 95-108.

See Also
soControl

Examples
data(shelltox)
ml <- SO.mc.est(shelltox, control=soControl(eps=0.01, method="ISDM"))
attr(ml, "converge")
require(lattice)
panel.cumsum <- function(x,y,...){
  x.ord <- order(x)
  panel.xyplot(x[x.ord], cumsum(y[x.ord]), ...)}
xyplot(Prob~NResp|factor(ClusterSize), groups=Trt, data=ml, type="s",
       panel=panel.superpose, panel.groups=panel.cumsum, as.table=TRUE,
       auto.key=list(columns=4, lines=TRUE, points=FALSE),
       xlab="Number of responses", ylab="Cumulative Probability R(R>=r|N=n)",
       ylim=c(0,1.1), main="Stochastically ordered estimates\n with marginal compatibility")

SO.trend.test           Likelihood ratio test of stochastic ordering

Description
Performs a likelihood ratio test of stochastic ordering versus equality using permutations to estimate the null-distribution and the p-value. If only the value of the test statistic is needed, use SO.LRT instead.
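The division of labour between the two functions can be sketched as follows; per the See Also note, the LRT component of the permutation test carries the same statistic that SO.LRT returns on its own. This is a sketch assuming the package (CorrBin on CRAN) is attached; R=10 is deliberately tiny, as the SO.trend.test entry below warns.

```r
# SO.LRT returns only the statistic; SO.trend.test additionally estimates a
# permutation p-value for that same statistic.
library(CorrBin)
data(shelltox)
stat <- SO.LRT(shelltox, control = soControl())   # statistic with ll0/ll1 attributes
full <- SO.trend.test(shelltox, R = 10)            # R=10 only for speed; increase in practice
# full$LRT carries the likelihood-ratio statistic (also with ll0/ll1
# attributes), and full$p.val is the permutation-based one-sided p-value.
```

Use SO.LRT when iterating over control settings, and run the (much slower) permutation test only once a configuration is settled.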
Usage
SO.trend.test(cbdata, R = 100, control = soControl())

Arguments
cbdata   a CBData object.
R        an integer - the number of random permutations for estimating the null distribution.
control  an optional list of control settings, usually a call to soControl. See there for the names of the settable control values and their effect.

Details
The test is valid only under the assumption that the cluster-size distribution does not depend on group. During the estimation of the null-distribution the group assignments of the clusters are permuted keeping the group sizes constant; the within-group distribution of the cluster-sizes will vary randomly during the permutation test. The default value of R is probably too low for the final data analysis, and should be increased.

Value
A list with the following components
LRT       the value of the likelihood ratio test statistic. It has two attributes: ll0 and ll1 - the values of the log-likelihood under H0 and Ha respectively.
p.val     the estimated one-sided p-value.
boot.res  an object of class "boot" with the detailed results of the permutations. See boot for details.

Author(s)
<NAME>, <EMAIL>

References
<NAME>, <NAME>. (2010) On the Use of Stochastic Ordering to Test for Trend with Clustered Binary Data. Biometrika 97(1), 95-108.

See Also
SO.LRT for calculating only the test statistic, soControl

Examples
data(shelltox)
set.seed(45742)
sh.test <- SO.trend.test(shelltox, R=10, control=soControl(eps=0.1, max.directions=25))
sh.test
#a plot of the resampled LRT values
#would look better with a reasonable value of R
null.vals <- sh.test$boot.res$t[,1]
hist(null.vals, breaks=10, freq=FALSE, xlab="Test statistic", ylab="Density",
     main="Simulated null-distribution", xlim=range(c(0,20,null.vals)))
points(sh.test$LRT, 0, pch="*", col="red", cex=3)

soControl               Control values for order-constrained fit

Description
The values supplied in the function call replace the defaults and a list with all possible arguments is returned.
The returned list is used as the control argument to the mc.est, SO.LRT, and SO.trend.test functions.

Usage
soControl(
  method = c("ISDM", "EM"),
  eps = 0.005,
  max.iter = 5000,
  max.directions = 0,
  start = ifelse(method == "ISDM", "H0", "uniform"),
  verbose = FALSE
)

Arguments
method          a string specifying the maximization method
eps             a numeric value giving the maximum absolute error in the log-likelihood
max.iter        an integer specifying the maximal number of iterations
max.directions  an integer giving the maximal number of directions considered at one step of the ISDM method. If zero or negative, it is set to the number of non-empty cells. A value of 1 corresponds to the VDM algorithm.
start           a string specifying the starting setup of the mixing distribution; "H0" puts weight only on constant vectors (corresponding to the null hypothesis of no change), "uniform" puts equal weight on all elements. Only a "uniform" start can be used for the "EM" algorithm.
verbose         a logical value; if TRUE details of the optimization are shown.

Value
a list with components for each of the possible arguments.

Author(s)
<NAME> <EMAIL>

See Also
mc.est, SO.LRT, SO.trend.test

Examples
# decrease the maximum number of iterations and
# request the "EM" algorithm
soControl(method="EM", max.iter=100)

trend.test              Test for increasing trend with correlated binary data

Description
The trend.test function provides a common interface to the trend tests implemented in this package: SO.trend.test, RS.trend.test, and GEE.trend.test. The details of each test can be found on their help page.

Usage
trend.test(
  cbdata,
  test = c("RS", "GEE", "GEEtrend", "GEEall", "SO"),
  exact = test == "SO",
  R = 100,
  control = soControl()
)

Arguments
cbdata  a CBData object
test    character string defining the desired test statistic.
"RS" performs the Rao-Scott test (RS.trend.test), "SO" performs the stochastic ordering test (SO.trend.test), "GEE", "GEEtrend", "GEEall" perform the GEE-based test (GEE.trend.test) with constant, linearly modeled, and freely varying scale parameters, respec- tively. exact logical, should an exact permutation test be performed. Only an exact test can be performed for "SO". The default is to use the asymptotic p-values except for "SO". R integer, number of permutations for the exact test control an optional list of control settings for the stochastic order ("SO") test, usually a call to soControl. See there for the names of the settable control values and their effect. Value A list with two components and an optional "boot" attribute that contains the detailed results of the permutation test as an object of class boot if an exact test was performed. statistic numeric, the value of the test statistic p.val numeric, asymptotic one-sided p-value of the test Author(s) <NAME>, <EMAIL> See Also SO.trend.test, RS.trend.test, and GEE.trend.test for details about the available tests. Examples data(shelltox) trend.test(shelltox, test="RS") set.seed(5724) #R=50 is too low to get a good estimate of the p-value trend.test(shelltox, test="RS", R=50, exact=TRUE) uniprobs Extract univariate marginal probabilities from joint probability arrays Description Calculates the marginal probability of each event type for exchangeable correlated multinomial data based on joint probability estimates calculated by the jointprobs function. Usage uniprobs(jp, type = attr(jp, "type")) Arguments jp the output of jointprobs - a list of joint probability arrays by treatment type one of c("averaged","cluster","mc") - the type of joint probability. By default, the type attribute of jp is used. Value a list of estimated probability of each outcome by treatment group. The elements are either matrices or vectors depending on whether cluster-size specific estimates were requested (type="cluster") or not. 
See Also
jointprobs for calculating the joint probability arrays

Examples
data(dehp)
tau <- jointprobs(dehp, type="averaged")
uniprobs(tau)
#separately for each cluster size
tau2 <- jointprobs(dehp, type="cluster")
uniprobs(tau2)

unwrap.CBData           Unwrap a clustered object

Description
unwrap is a utility function that reformats a CBData or CMData object so that each row is one observation (instead of one or more clusters). A new 'ID' variable is added to indicate clusters. This form can be useful for setting up the data for a different package.

Usage
## S3 method for class 'CBData'
unwrap(object, ...)

## S3 method for class 'CMData'
unwrap(object, ...)

unwrap(object, ...)

Arguments
object  a CBData object
...     other potential arguments; not currently used

Value
For unwrap.CMData: a data frame with one row for each cluster element (having a multinomial outcome) with the following standardized column names
Trt          factor, the treatment group
ClusterSize  numeric, the cluster size
ID           factor, each level representing a different cluster
Resp         numeric with integer values giving the response type of the cluster element

For unwrap.CBData: a data frame with one row for each cluster element (having a binary outcome) with the following standardized column names
Trt          factor, the treatment group
ClusterSize  numeric, the cluster size
ID           factor, each level representing a different cluster
Resp         numeric with 0/1 values, giving the response of the cluster element

Author(s)
<NAME>

Examples
data(dehp)
dehp.long <- unwrap(dehp)
head(dehp.long)
data(shelltox)
ush <- unwrap(shelltox)
head(ush)
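The "different package" use case mentioned in the Description can be sketched with an ordinary logistic regression on the unwrapped long format. This is illustrative only: a plain glm ignores the within-litter correlation that the package's own tests are designed to handle, so its standard errors would be too small for real inference.

```r
# Sketch: feed the one-row-per-fetus long format into a standard model fitter
# (naive fit that ignores clustering; for illustration of the format only).
library(CorrBin)
data(shelltox)
ush <- unwrap(shelltox)                    # columns: Trt, ClusterSize, ID, Resp
fit <- glm(Resp ~ Trt, family = binomial, data = ush)
summary(fit)$coefficients
```

The ID column added by unwrap is what a correlation-aware fitter (e.g. a GEE routine) would use as the clustering variable.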
Package ‘r2redux’                               September 19, 2023

Title R2 Statistic
Version 1.0.16
Description R2 statistic for significance test. Variance and covariance of R2 values used to assess the 95% CI and p-value of the R2 difference.
License GPL (>= 3)
URL https://github.com/mommy003/r2redux
Encoding UTF-8
RoxygenNote 7.1.2
NeedsCompilation no
Depends R (>= 2.10)
LazyData true
Suggests testthat (>= 3.0.0)
Config/testthat/edition 3
Author <NAME> [aut, cph], <NAME> [aut, cre, cph]
Maintainer <NAME> <<EMAIL>>
Repository CRAN
Date/Publication 2023-09-19 06:50:02 UTC

R topics documented:
cc_trf, dat1, dat2, olkin12_1, olkin12_13, olkin12_3, olkin12_34, olkin1_2, olkin_beta1_2, olkin_beta_inf, olkin_beta_ratio, r2_beta_var, r2_diff, r2_enrich_beta, r2_var, r_diff

cc_trf                  cc_trf function

Description
This function transforms the predictive ability (R2) and its standard error (se) between the observed scale and liability scale

Usage
cc_trf(R2, se, K, P)

Arguments
R2  R2 or coefficient of determination on the observed or liability scale
se  Standard error of R2
K   Population prevalence
P   The ratio of cases in the study samples

Value
This function will transform the R2 and its se between the observed scale and the liability scale. Output from the command is a list of outcomes.
R2l  Transformed R2 on the liability scale
sel  Transformed se on the liability scale
R2O  Transformed R2 on the observed scale
seO  Transformed se on the observed scale

References
<NAME>., <NAME>., <NAME>., and <NAME>. A better coefficient of determination for genetic profile analysis. Genetic epidemiology, (2012). 36(3): p. 214-224.
Examples
#To get the transformed R2
output=cc_trf(0.06, 0.002, 0.05, 0.05)
output
#output$R2l (transformed R2 on the liability scale)
#0.2679337
#output$sel (transformed se on the liability scale)
#0.008931123
#output$R2O (transformed R2 on the observed scale)
#0.01343616
#output$seO (transformed se on the observed scale)
#0.000447872

dat1                    Phenotypes and 10 sets of PGSs

Description
A dataset containing phenotypes and multiple PGSs estimated from 10 sets of SNPs according to GWAS p-value thresholds

Usage
dat1

Format
A data frame with 1000 rows and 11 variables:
V1   Phenotype, value
V2   PGS1, for p value threshold <=1
V3   PGS2, for p value threshold <=0.5
V4   PGS3, for p value threshold <=0.4
V5   PGS4, for p value threshold <=0.3
V6   PGS5, for p value threshold <=0.2
V7   PGS6, for p value threshold <=0.1
V8   PGS7, for p value threshold <=0.05
V9   PGS8, for p value threshold <=0.01
V10  PGS9, for p value threshold <=0.001
V11  PGS10, for p value threshold <=0.0001

dat2                    Phenotypes and 2 sets of PGSs

Description
A dataset containing phenotypes and 2 sets of PGSs estimated from 2 sets of SNPs from regulatory and non-regulatory genomic regions

Usage
dat2

Format
A data frame with 1000 rows and 3 variables:
V1  Phenotype
V2  PGS1, regulatory region
V3  PGS2, non-regulatory region

olkin12_1               olkin12_1 function

Description
olkin12_1 function

Usage
olkin12_1(omat, nv)

Arguments
omat  3 by 3 matrix having the correlation coefficients between y, x1 and x2, i.e. omat=cor(dat) where dat is N by 3 matrix having variables in the order of cbind(y,x1,x2)
nv    Sample size

Value
This function will be used as source code

olkin12_13              olkin12_13 function

Description
olkin12_13 function

Usage
olkin12_13(omat, nv)

Arguments
omat  3 by 3 matrix having the correlation coefficients between y, x1 and x2, i.e.
omat=cor(dat) where dat is N by 3 matrix having variables in the order of cbind(y,x1,x2)
nv    Sample size

Value
This function will be used as source code

olkin12_3               olkin12_3 function

Description
olkin12_3 function

Usage
olkin12_3(omat, nv)

Arguments
omat  3 by 3 matrix having the correlation coefficients between y, x1 and x2, i.e. omat=cor(dat) where dat is N by 3 matrix having variables in the order of cbind(y,x1,x2)
nv    Sample size

Value
This function will be used as source code

olkin12_34              olkin12_34 function

Description
olkin12_34 function

Usage
olkin12_34(omat, nv)

Arguments
omat  3 by 3 matrix having the correlation coefficients between y, x1 and x2, i.e. omat=cor(dat) where dat is N by 3 matrix having variables in the order of cbind(y,x1,x2)
nv    Sample size

Value
This function will be used as source code

olkin1_2                olkin1_2 function

Description
olkin1_2 function

Usage
olkin1_2(omat, nv)

Arguments
omat  3 by 3 matrix having the correlation coefficients between y, x1 and x2, i.e. omat=cor(dat) where dat is N by 3 matrix having variables in the order of cbind(y,x1,x2)
nv    Sample size

Value
This function will be used as source code

olkin_beta1_2           olkin_beta1_2 function

Description
This function derives the information matrix for beta1^2 and beta2^2, where beta1 and beta2 are regression coefficients from a multiple regression model, i.e. y = x1 * beta1 + x2 * beta2 + e, where y, x1 and x2 are column-standardised (i.e. in the context of correlation coefficients, see Olkin and Finn 1995).

Usage
olkin_beta1_2(omat, nv)

Arguments
omat  3 by 3 matrix having the correlation coefficients between y, x1 and x2, i.e. omat=cor(dat) where dat is N by 3 matrix having variables in the order of cbind(y,x1,x2)
nv    Sample size

Value
This function will give the information (variance-covariance) matrix of beta1^2 and beta2^2, where beta1 and beta2 are regression coefficients from a multiple regression model.
The outputs are listed as follows.
info    2x2 information (variance-covariance) matrix
var1    Variance of beta1_2
var2    Variance of beta2_2
var1_2  Variance of difference between beta1_2 and beta2_2

References
<NAME>. and <NAME>. Correlations redux. Psychological Bulletin, 1995. 118(1): p. 155.

Examples
#To get information (variance-covariance) matrix of beta1_2 and beta2_2 where
#beta1 and 2 are regression coefficients from a multiple regression model.
dat=dat1
omat=cor(dat)[1:3,1:3]
#omat
#1.0000000 0.1958636 0.1970060
#0.1958636 1.0000000 0.9981003
#0.1970060 0.9981003 1.0000000
nv=length(dat$V1)
output=olkin_beta1_2(omat,nv)
output
#output$info (2x2 information (variance-covariance) matrix)
#0.04146276 0.08158261
#0.08158261 0.16111124
#output$var1 (variance of beta1_2)
#0.04146276
#output$var2 (variance of beta2_2)
#0.1611112
#output$var1_2 (variance of difference between beta1_2 and beta2_2)
#0.03940878

olkin_beta_inf          olkin_beta_inf function

Description
This function derives the information matrix for beta1 and beta2, where beta1 and beta2 are regression coefficients from a multiple regression model, i.e. y = x1 * beta1 + x2 * beta2 + e, where y, x1 and x2 are column-standardised (see Olkin and Finn 1995).

Usage
olkin_beta_inf(omat, nv)

Arguments
omat  3 by 3 matrix having the correlation coefficients between y, x1 and x2, i.e. omat=cor(dat) where dat is N by 3 matrix having variables in the order of cbind(y,x1,x2)
nv    Sample size

Value
This function will generate the information (variance-covariance) matrix of beta1 and beta2. The outputs are listed as follows.
info    2x2 information (variance-covariance) matrix
var1    Variance of beta1
var2    Variance of beta2
var1_2  Variance of difference between beta1 and beta2

References
<NAME>. and <NAME>. Correlations redux. Psychological Bulletin, 1995. 118(1): p. 155.

Examples
#To get information (variance-covariance) matrix of beta1 and beta2 where
#beta1 and 2 are regression coefficients from a multiple regression model.
dat=dat1
omat=cor(dat)[1:3,1:3]
#omat
#1.0000000 0.1958636 0.1970060
#0.1958636 1.0000000 0.9981003
#0.1970060 0.9981003 1.0000000
nv=length(dat$V1)
output=olkin_beta_inf(omat,nv)
output
#output$info (2x2 information (variance-covariance) matrix)
#0.2531406 -0.2526212
#-0.2526212 0.2530269
#output$var1 (variance of beta1)
#0.2531406
#output$var2 (variance of beta2)
#0.2530269
#output$var1_2 (variance of difference between beta1 and beta2)
#1.01141

olkin_beta_ratio        olkin_beta_ratio function

Description
This function derives the variance of beta1^2 / R^2, where beta1 and beta2 are regression coefficients from a multiple regression model, i.e. y = x1 * beta1 + x2 * beta2 + e, where y, x1 and x2 are column-standardised (see Olkin and Finn 1995).

Usage
olkin_beta_ratio(omat, nv)

Arguments
omat  3 by 3 matrix having the correlation coefficients between y, x1 and x2, i.e. omat=cor(dat) where dat is N by 3 matrix having variables in the order of cbind(y,x1,x2)
nv    Sample size

Value
This function will generate the variance of the proportion, i.e. beta1_2/R^2. The outputs are listed as follows.
ratio_var  Variance of ratio

References
<NAME>. and <NAME>. Correlations redux. Psychological Bulletin, 1995. 118(1): p. 155.

Examples
#To get the variance of the ratio beta1^2/R^2 where beta1 is a regression
#coefficient from a multiple regression model.
dat=dat2
omat=cor(dat)[1:3,1:3]
#omat
#1.0000000 0.1497007 0.136431
#0.1497007 1.0000000 0.622790
#0.1364310 0.6227900 1.000000
nv=length(dat$V1)
output=olkin_beta_ratio(omat,nv)
output
#r2redux output
#output$ratio_var (Variance of ratio)
#0.08042288

r2_beta_var             r2_beta_var

Description
This function estimates var(beta1^2) and var(beta2^2), where beta1 and beta2 are regression coefficients from a multiple regression model, i.e. y = x1 * beta1 + x2 * beta2 + e, where y, x1 and x2 are column-standardised (see Olkin and Finn 1995).
y is N by 1 matrix having the dependent variable, x1 is N by 1 matrix having the ith explanatory variable, and x2 is N by 1 matrix having the jth explanatory variable. v1 and v2 indicate the ith and jth column in the data (v1 or v2 should be a single integer between 1 and M, see Arguments below).

Usage
r2_beta_var(dat, v1, v2, nv)

Arguments
dat  N by (M+1) matrix having variables in the order of cbind(y,x)
v1   This can be set as v1=1, v1=2, v1=3 or any value between 1 - M based on combination
v2   This can be set as v2=1, v2=2, v2=3, or any value between 1 - M based on combination
nv   Sample size

Value
This function will estimate the variance of beta1^2 and beta2^2, and the covariance between beta1^2 and beta2^2, i.e. the information matrix of squared regression coefficients. beta1 and beta2 are regression coefficients from a multiple regression model, i.e. y = x1 * beta1 + x2 * beta2 + e, where y, x1 and x2 are column-standardised. The outputs are listed as follows.
beta1_sq        beta1_sq
beta2_sq        beta2_sq
var1            Variance of beta1_sq
var2            Variance of beta2_sq
var1_2          Variance of difference between beta1_sq and beta2_sq
cov             Covariance between beta1_sq and beta2_sq
upper_beta1_sq  upper limit of 95% CI for beta1_sq
lower_beta1_sq  lower limit of 95% CI for beta1_sq
upper_beta2_sq  upper limit of 95% CI for beta2_sq
lower_beta2_sq  lower limit of 95% CI for beta2_sq

References
<NAME> <NAME>. Correlations redux. Psychological Bulletin, 1995. 118(1): p. 155.

Examples
#To get the 95% CI of beta1_sq and beta2_sq
#beta1 and beta2 are regression coefficients from a multiple regression model,
#i.e. y = x1 * beta1 + x2 * beta2 + e, where y, x1 and x2 are column-standardised.
dat=dat2
nv=length(dat$V1)
v1=c(1)
v2=c(2)
output=r2_beta_var(dat,v1,v2,nv)
output
#r2redux output
#output$beta1_sq (beta1_sq)
#0.01118301
#output$beta2_sq (beta2_sq)
#0.004980285
#output$var1 (variance of beta1_sq)
#7.072931e-05
#output$var2 (variance of beta2_sq)
#3.161929e-05
#output$var1_2 (variance of difference between beta1_sq and beta2_sq)
#0.000162113
#output$cov (covariance between beta1_sq and beta2_sq)
#-2.988221e-05
#output$upper_beta1_sq (upper limit of 95% CI for beta1_sq)
#0.03037793
#output$lower_beta1_sq (lower limit of 95% CI for beta1_sq)
#-0.00123582
#output$upper_beta2_sq (upper limit of 95% CI for beta2_sq)
#0.02490076
#output$lower_beta2_sq (lower limit of 95% CI for beta2_sq)
#-0.005127546

r2_diff                 r2_diff function

Description
This function estimates var(R2(y~x[,v1]) - R2(y~x[,v2])), where R2 is the R squared value of the model, y is N by 1 matrix having the dependent variable, and x is N by M matrix having M explanatory variables. v1 or v2 indicates the ith column in the x matrix (v1 or v2 can be multiple values between 1 - M, see Arguments below)

Usage
r2_diff(dat, v1, v2, nv)

Arguments
dat  N by (M+1) matrix having variables in the order of cbind(y,x)
v1   This can be set as v1=c(1) or v1=c(1,2)
v2   This can be set as v2=c(2), v2=c(3), v2=c(1,3) or v2=c(3,4)
nv   Sample size

Value
This function will estimate the significant difference between two PGSs (either dependent or independent, and joint or single). To get the test statistics for the difference between R2(y~x[,v1]) and R2(y~x[,v2]) (here we define R2_1=R2(y~x[,v1]) and R2_2=R2(y~x[,v2])). The outputs are listed as follows.
rsq1                 R2_1
rsq2                 R2_2
var1                 Variance of R2_1
var2                 Variance of R2_2
var_diff             Variance of difference between R2_1 and R2_2
r2_based_p           two tailed P-value for significant difference between R2_1 and R2_2
r2_based_p_one_tail  one tailed P-value for significant difference
mean_diff            Differences between R2_1 and R2_2
upper_diff           Upper limit of 95% CI for the difference
lower_diff           Lower limit of 95% CI for the difference

Examples
#To get the test statistics for the difference between R2(y~x[,1]) and
#R2(y~x[,2]). (here we define R2_1=R2(y~x[,1]) and R2_2=R2(y~x[,2]))
dat=dat1
nv=length(dat$V1)
v1=c(1)
v2=c(2)
output=r2_diff(dat,v1,v2,nv)
output
#r2redux output
#output$rsq1 (R2_1)
#0.03836254
#output$rsq2 (R2_2)
#0.03881135
#output$var1 (variance of R2_1)
#0.0001436128
#output$var2 (variance of R2_2)
#0.0001451358
#output$var_diff (variance of difference between R2_1 and R2_2)
#5.678517e-07
#output$r2_based_p (two tailed p-value for significant difference)
#0.5514562
#output$r2_based_p_one_tail (one tailed p-value for significant difference)
#0.2757281
#output$mean_diff (differences between R2_1 and R2_2)
#-0.0004488044
#output$upper_diff (upper limit of 95% CI for the difference)
#0.001028172
#output$lower_diff (lower limit of 95% CI for the difference)
#-0.001925781
#output$p$nested
#1
#output$p$nonnested
#0.5514562
#output$p$LRT
#1

#To get the test statistics for the difference between R2(y~x[,1]+x[,2]) and
#R2(y~x[,1]).
(here R2_1=R2(y~x[,1]+x[,2]) and R2_2=R2(y~x[,1]))
dat=dat1
nv=length(dat$V1)
v1=c(1,2)
v2=c(1)
output=r2_diff(dat,v1,v2,nv)
#r2redux output
#output$rsq1 (R2_1)
#0.03896678
#output$rsq2 (R2_2)
#0.03836254
#output$var1 (variance of R2_1)
#0.0001473686
#output$var2 (variance of R2_2)
#0.0001436128
#output$var_diff (variance of difference between R2_1 and R2_2)
#2.321425e-06
#output$r2_based_p (p-value for significant difference between R2_1 and R2_2)
#0.4366883
#output$mean_diff (differences between R2_1 and R2_2)
#0.0006042383
#output$upper_diff (upper limit of 95% CI for the difference)
#0.00488788
#output$lower_diff (lower limit of 95% CI for the difference)
#-0.0005576171

#When faced with multiple predictors common between two models, for example,
#y = any_cov1 + any_cov2 + ... + any_covN + e vs.
#y = PRS + any_cov1 + any_cov2 + ... + any_covN + e
#a more streamlined approach can be adopted by consolidating the various
#predictors into a single predictor (see R code below).
#R
#dat=dat1
#here let's assume, we wanted to test one PRS (dat$V2)
#with 5 covariates (dat$V7 to dat$V11)
#mod1 <- lm(dat$V1~dat$V2 + dat$V7+ dat$V8+ dat$V9+ dat$V10+ dat$V11)
#merged_predictor1 <- mod1$fitted.values
#mod2 <- lm(dat$V1~ dat$V7+ dat$V8+ dat$V9+ dat$V10+ dat$V11)
#merged_predictor2 <- mod2$fitted.values
#dat=data.frame(dat$V1,merged_predictor1,merged_predictor2)
#the comparison can be equivalently expressed as:
#y = merged_predictor1 + e vs.
#y = merged_predictor2 + e
#This comparison can be simply achieved using the r2_diff function, e.g.
#To get the test statistics for the difference between R2(y~x[,1]) and
#R2(y~x[,2]). (here x[,1]= merged_predictor2 (from full model),
#and x[,2]= merged_predictor1 (from reduced model))
#v1=c(1)
#v2=c(2)
#output=r2_diff(dat,v1,v2,nv)
#note that the merged predictor from the full model (v1) should be the first.
#str(output)
#List of 11
#$ rsq1               : num 0.0428
#$ rsq2               : num 0.042
#$ var1               : num 0.000158
#$ var2               : num 0.000156
#$ var_diff           : num 2.87e-06
#$ r2_based_p         : num 0.658
#$ r2_based_p_one_tail: num 0.329
#$ mean_diff          : num 0.000751
#$ upper_diff         : num 0.00407
#$ lower_diff         : num -0.00257
#$ p                  :List of 3
#..$ nested   : num 0.386
#..$ nonnested: num 0.658
#..$ LRT      : num 0.376
#Importantly note that in this case, merged_predictor1 is nested within
#merged_predictor2 (see mod1 vs. mod2 above). Therefore, this is a
#nested model comparison. So, output$p$nested (0.386) should be used
#instead of output$p$nonnested (0.658).
#Note that r2_based_p is the same as output$p$nonnested (0.658) here.

#For this scenario, alternatively, the outcome variable (y) can be preadjusted
#with covariate(s), following the procedure in R:
#mod <- lm(y ~ any_cov1 + any_cov2 + ... + any_covN)
#y_adj=scale(mod$residuals)
#then, the comparative significance test can be approximated by using
#the following model y_adj = PRS (r2_var(dat, v1, nv))
#R
#dat=dat1
#mod <- lm(dat$V1~dat$V7+ dat$V8+ dat$V9+ dat$V10+ dat$V11)
#y_adj=scale(mod$residuals)
#dat=data.frame(y_adj,dat$V2)
#v1=c(1)
#output=r2_var(dat, v1, nv)
#str(output)
#$ var       : num 2e-06
#$ LRT_p     :Class 'logLik' : 0.98 (df=2)
#$ r2_based_p: num 0.977
#$ rsq       : num 8.21e-07
#$ upper_r2  : num 0.00403
#$ lower_r2  : num -0.000999

#In another scenario where the same covariates, but different
#PRS1 and PRS2 are compared,
#y = PRS1 + any_cov1 + any_cov2 + ... + any_covN + e vs.
#y = PRS2 + any_cov1 + any_cov2 + ... + any_covN + e
#the following approach can be employed (see R code below).
#R #dat=dat1 #here let's assume dat$V2 as PRS1, dat$V3 as PRS2 and dat$V7 to dat$V11 as covariates #mod1 <- lm(dat$V1~dat$V2 + dat$V7+ dat$V8+ dat$V9+ dat$V10+ dat$V11) #merged_predictor1 <- mod1$fitted.values #mod2 <- lm(dat$V1~dat$V3 + dat$V7+ dat$V8+ dat$V9+ dat$V10+ dat$V11) #merged_predictor2 <- mod2$fitted.values #dat=data.frame(dat$V1,merged_predictor2,merged_predictor1) #the comparison can be equivalently expressed as: #y = merged_predictor1 + e vs. #y = merged_predictor2 + e #This comparison can be simply achieved using the r2_diff function, e.g. #To get the test statistics for the difference between R2(y~x[,1]) and #R2(y~x[,2]). (here x[,1]= merged_predictor2, and x[,2]= merged_predictor1) #v1=c(1) #v2=c(2) #output=r2_diff(dat,v1,v2,nv) #str(output) #List of 11 #$ rsq1 : num 0.043 #$ rsq2 : num 0.0428 #$ var1 : num 0.000159 #$ var2 : num 0.000158 #$ var_diff : num 2.6e-07 #$ r2_based_p : num 0.657 #$ r2_based_p_one_tail: num 0.328 #$ mean_diff : num 0.000227 #$ upper_diff : num 0.00123 #$ lower_diff : num 0.000773 #$ p :List of 3 #..$ nested : num 0.634 #..$ nonnested: num 0.657 #..$ LRT : num 0.627 #Importantly note that in this case, merged_predictor1 and merged_predictor2 #are not nested to each other (see mod1 vs. mod2 above). #Therefore, this is nonnested model comparison. #So, output$p$nonnested (0.657) should be used instead of #output$p$nested (0.634). Note that r2_based_p is the same #as output$p$nonnested (0.657) here. #For the above non-nested scenario, alternatively, the outcome variable (y) #can be preadjusted with covariate(s), following the procedure in R: #mod <- lm(y ~ any_cov1 + any_cov2 + ... + any_covN) #y_adj=scale(mod$residuals) #R #dat=dat1 #mod <- lm(dat$V1~dat$V7+ dat$V8+ dat$V9+ dat$V10+ dat$V11) #y_adj=scale(mod$residuals) #dat=data.frame(y_adj,dat$V3,dat$V2) #the comparison can be equivalently expressed as: #y_adj = PRS1 + e vs. 
#y_adj = PRS2 + e
#then, the comparative significance test can be approximated by using the r2_diff function
#to get the test statistics for the difference between R2(y~x[,1]) and
#R2(y~x[,2]) (here x[,1]= PRS1 and x[,2]= PRS2)
#v1=c(1)
#v2=c(2)
#output=r2_diff(dat,v1,v2,nv)
#str(output)
#List of 11
#$ rsq1               : num 5.16e-05
#$ rsq2               : num 4.63e-05
#$ var1               : num 2.21e-06
#$ var2               : num 2.18e-06
#$ var_diff           : num 1.31e-09
#$ r2_based_p         : num 0.884
#$ r2_based_p_one_tail: num 0.442
#$ mean_diff          : num 5.28e-06
#$ upper_diff         : num 7.63e-05
#$ lower_diff         : num -6.57e-05
#$ p                  :List of 3
#..$ nested   : num 0.942
#..$ nonnested: num 0.884
#..$ LRT      : num 0.942

r2_enrich_beta          r2_enrich_beta

Description

This function estimates var(beta1^2/R^2), where beta1 and R^2 are a regression coefficient and the coefficient of determination from a multiple regression model, i.e. y = x1 * beta1 + x2 * beta2 + e, where y, x1 and x2 are column-standardised (see Olkin and Finn 1995). y is an N by 1 matrix having the dependent variable, x1 is an N by 1 matrix having the ith explanatory variable, and x2 is an N by 1 matrix having the jth explanatory variable. v1 and v2 indicate the ith and jth columns in the data (v1 or v2 should be a single integer between 1 and M, see Arguments below).

Usage

r2_enrich_beta(dat, v1, v2, nv, exp1)

Arguments

dat     N by (M+1) matrix having variables in the order of cbind(y,x)
v1      This can be set as v1=1, v1=2, v1=3 or any value between 1 and M based on the combination
v2      This can be set as v2=1, v2=2, v2=3 or any value between 1 and M based on the combination
nv      Sample size
exp1    The expectation of the ratio (e.g. ratio of # SNPs in genomic partitioning)

Value

This function will estimate var(beta1^2/R^2), where beta1 and R^2 are a regression coefficient and the coefficient of determination from a multiple regression model, i.e. y = x1 * beta1 + x2 * beta2 + e, where y, x1 and x2 are column-standardised. The outputs are listed as follows.
beta1_sq             beta1_sq
beta2_sq             beta2_sq
ratio1               beta1_sq/R^2
ratio2               beta2_sq/R^2
ratio_var1           variance of ratio 1
ratio_var2           variance of ratio 2
upper_ratio1         upper limit of 95% CI for ratio 1
lower_ratio1         lower limit of 95% CI for ratio 1
upper_ratio2         upper limit of 95% CI for ratio 2
lower_ratio2         lower limit of 95% CI for ratio 2
enrich_p1            two tailed P-value for whether beta1_sq/R^2 is significantly different from exp1
enrich_p1_one_tail   one tailed P-value for whether beta1_sq/R^2 is significantly different from exp1
enrich_p2            two tailed P-value for whether beta2_sq/R^2 is significantly different from (1-exp1)
enrich_p2_one_tail   one tailed P-value for whether beta2_sq/R^2 is significantly different from (1-exp1)

References

<NAME>. and <NAME>. Correlations redux. Psychological Bulletin, 1995. 118(1): p. 155.

Examples

#To get the test statistic for a ratio which is significantly
#different from the expectation, this function estimates
#var(beta1^2/R^2), where
#beta1^2 and R^2 are regression coefficients and the
#coefficient of determination from a multiple regression model,
#i.e. y = x1 * beta1 + x2 * beta2 + e, where y, x1 and x2 are
#column-standardised.
dat=dat2
nv=length(dat$V1)
v1=c(1)
v2=c(2)
expected_ratio=0.04
output=r2_enrich_beta(dat,v1,v2,nv,expected_ratio)
output
#r2redux output
#output$beta1_sq (beta1_sq)
#0.01118301
#output$beta2_sq (beta2_sq)
#0.004980285
#output$ratio1 (beta1_sq/R^2)
#0.4392572
#output$ratio2 (beta2_sq/R^2)
#0.1956205
#output$ratio_var1 (variance of ratio 1)
#0.08042288
#output$ratio_var2 (variance of ratio 2)
#0.0431134
#output$upper_ratio1 (upper limit of 95% CI for ratio 1)
#0.9950922
#output$lower_ratio1 (lower limit of 95% CI for ratio 1)
#-0.1165778
#output$upper_ratio2 (upper limit of 95% CI for ratio 2)
#0.6025904
#output$lower_ratio2 (lower limit of 95% CI for ratio 2)
#-0.2113493
#output$enrich_p1 (two tailed P-value for whether beta1_sq/R^2 is
#significantly different from exp1)
#0.1591692
#output$enrich_p1_one_tail (one tailed P-value for whether beta1_sq/R^2
#is significantly different from exp1)
#0.07958459
#output$enrich_p2 (two tailed P-value for whether beta2_sq/R^2 is
#significantly different from (1-exp1))
#0.000232035
#output$enrich_p2_one_tail (one tailed P-value for whether beta2_sq/R^2
#is significantly different from (1-exp1))
#0.0001160175

r2_var                  r2_var function

Description

This function estimates var(R2(y~x[,v1])), where R2 is the R squared value of the model, y is an N by 1 matrix having the dependent variable, and x is an N by M matrix having M explanatory variables. v1 indicates the ith column in the x matrix (v1 can be multiple values between 1 and M, see Arguments below).

Usage

r2_var(dat, v1, nv)

Arguments

dat    N by (M+1) matrix having variables in the order of cbind(y,x)
v1     This can be set as v1=c(1), v1=c(1,2) or possibly with more values
nv     Sample size

Value

This function will test the null hypothesis for R2. It provides the test statistics for R2(y~x[,v1]). The outputs are listed as follows.

rsq           R2
var           Variance of R2
r2_based_p    P-value under the null hypothesis, i.e.
R2=0 upper_r2 Upper limit of 95% CI for R2 lower_r2 Lower limit of 95% CI for R2 Examples #To get the test statistics for R2(y~x[,1]) dat=dat1 nv=length(dat$V1) v1=c(1) output=r2_var(dat,v1,nv) output #r2redux output #output$rsq (R2) #0.03836254 #output$var (variance of R2) #0.0001436128 #output$r2_based_p (P-value under the null hypothesis, i.e. R2=0) #1.188162e-10 #output$upper_r2 (upper limit of 95% CI for R2) #0.06433782 #output$lower_r2 (lower limit of 95% CI for R2) #0.01764252 #To get the test statistic for R2(y~x[,1]+x[,2]+x[,3]) dat=dat1 nv=length(dat$V1) v1=c(1,2,3) r2_var(dat,v1,nv) #r2redux output #output$rsq (R2) #0.03836254 #output$var (variance of R2) #0.0001436128 #output$r2_based_p (R2 based P-value) #1.188162e-10 #output$upper_r2 (upper limit of 95% CI for R2) #0.06433782 #output$lower_r2 (lower limit of 95% CI for R2) #0.0176425 #When comparing two independent sets of PGSs #Let’s assume dat1$V1 and dat2$V2 are independent for this example #(e.g. male PGS vs. female PGS) nv=length(dat1$V1) v1=c(1) output1=r2_var(dat1,v1,nv) nv=length(dat2$V1) v1=c(1) output2=r2_var(dat2,v1,nv) #To get the difference between two independent sets of PGSs r2_diff_independent=abs(output1$rsq-output2$rsq) #To get the variance of the difference between two independent sets of PGSs var_r2_diff_independent= output1$var+output2$var sd_r2_diff_independent=sqrt(var_r2_diff_independent) #To get p-value (following eq. 15 in the paper) chi=r2_diff_independent^2/var_r2_diff_independent p_value=pchisq(chi,1,lower.tail=FALSE) #to get 95% CI (following eq. 15 in the paper) uci=r2_diff_independent+1.96*sd_r2_diff_independent lci=r2_diff_independent-1.96*sd_r2_diff_independent r_diff r_diff function Description This function estimates var(R(y~x[,v1]) - R(y~x[,v2])) where R is the correlation between y and x, y is N by 1 matrix having the dependent variable, and x is N by M matrix having M explanatory variables. 
v1 or v2 indicates the ith column in the x matrix (v1 or v2 can be multiple values between 1 and M, see Arguments below).

Usage

r_diff(dat, v1, v2, nv)

Arguments

dat    N by (M+1) matrix having variables in the order of cbind(y,x)
v1     This can be set as v1=c(1) or v1=c(1,2)
v2     This can be set as v2=c(2), v2=c(3), v2=c(1,3) or v2=c(3,4)
nv     Sample size

Value

This function will estimate the significance of the difference between two PGS (either dependent or independent, and joint or single). It provides the test statistics for the difference between R(y~x[,v1]) and R(y~x[,v2]) (here we define R_1=R(y~x[,v1]) and R_2=R(y~x[,v2])). The outputs are listed as follows.

r1                   R_1
r2                   R_2
var1                 Variance of R_1
var2                 Variance of R_2
var_diff             Variance of the difference between R_1 and R_2
r_based_p            P-value for significant difference between R_1 and R_2 (two tailed test)
r_based_p_one_tail   P-value for significant difference between R_1 and R_2 (one tailed test)
mean_diff            Difference between R_1 and R_2
upper_diff           Upper limit of 95% CI for the difference
lower_diff           Lower limit of 95% CI for the difference

Examples

#To get the test statistics for the difference between R(y~x[,1]) and
#R(y~x[,2]).
#(here we define R_1=R(y~x[,1]) and R_2=R(y~x[,2]))
dat=dat1
nv=length(dat$V1)
v1=c(1)
v2=c(2)
output=r_diff(dat,v1,v2,nv)
output
#r2redux output
#output$r1 (R_1)
#0.1958636
#output$r2 (R_2)
#0.197006
#output$var1 (variance of R_1)
#0.0009247466
#output$var2 (variance of R_2)
#0.0001451358
#output$var_diff (variance of the difference between R_1 and R_2)
#3.65286e-06
#output$r_based_p (two tailed p-value for significant difference between R_1 and R_2)
#0.5500319
#output$r_based_p_one_tail (one tailed p-value for significant difference between R_1 and R_2)
#0.2750159
#output$mean_diff (difference between R_1 and R_2)
#-0.001142375
#output$upper_diff (upper limit of 95% CI for the difference)
#0.002603666
#output$lower_diff (lower limit of 95% CI for the difference)
#-0.004888417

#To get the test statistics for the difference between R(y~x[,1]+x[,2]) and
#R(y~x[,2]). (here R_1=R(y~x[,1]+x[,2]) and R_2=R(y~x[,2]))
nv=length(dat$V1)
v1=c(1,2)
v2=c(2)
output=r_diff(dat,v1,v2,nv)
output
#output$r1
#0.1974001
#output$r2
#0.197006
#output$var1
#0.0009235848
#output$var2
#0.0009238836
#output$var_diff
#3.837451e-06
#output$r_based_p
#0.8405593
#output$mean_diff
#0.0003940961
#output$upper_diff
#0.004233621
#output$lower_diff
#-0.003445429
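As the nested vs. nonnested distinction discussed above determines which p-value to report, the choice can be mechanized. A minimal sketch (not part of the r2redux package; `pick_p` is a hypothetical helper, and the example assumes r2redux and its `dat1` data are loaded):

```r
# choose the appropriate p-value from an r2_diff result;
# 'nested' should be TRUE when one model's predictors are a subset of the
# other's (as when comparing PRS + covariates vs. covariates only)
pick_p <- function(output, nested) {
  if (nested) output$p$nested else output$p$nonnested
}

# e.g. for the merged-predictor comparisons earlier in this section:
# out <- r2_diff(dat, v1 = c(1), v2 = c(2), nv)
# pick_p(out, nested = TRUE)    # PRS+covariates vs. covariates only
# pick_p(out, nested = FALSE)   # PRS1+covariates vs. PRS2+covariates
```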
Package ‘flashClust’                                    October 13, 2022

Version 1.01-2
Date 2012-08-21
Title Implementation of optimal hierarchical clustering
Author code by <NAME> and R development team, modifications and packaging by <NAME>
Maintainer <NAME> <<EMAIL>>
Depends R (>= 2.3.0)
ZipData no
License GPL (>= 2)
Description Fast implementation of hierarchical clustering
Repository CRAN
Date/Publication 2012-08-21 18:23:43
NeedsCompilation yes

R topics documented:
flashClust

flashClust              Faster alternative to hclust

Description

This function implements optimal hierarchical clustering with the same interface as hclust.

Usage

hclust(d, method = "complete", members=NULL)
flashClust(d, method = "complete", members=NULL)

Arguments

d        a dissimilarity structure as produced by 'dist'.
method   the agglomeration method to be used. This should be (an unambiguous abbreviation of) one of "ward", "single", "complete", "average", "mcquitty", "median" or "centroid".
members  NULL or a vector with length size of d. See the 'Details' section.

Details

See the description of hclust for details on available clustering methods.

If members!=NULL, then d is taken to be a dissimilarity matrix between clusters instead of dissimilarities between singletons, and members gives the number of observations per cluster. This way the hierarchical cluster algorithm can be 'started in the middle of the dendrogram', e.g., in order to reconstruct the part of the tree above a cut (see examples). Dissimilarities between clusters can be efficiently computed (i.e., without hclust itself) only for a limited number of distance/linkage combinations, the simplest one being squared Euclidean distance and centroid linkage. In this case the dissimilarities between the clusters are the squared Euclidean distances between cluster means.

flashClust is a wrapper for compatibility with older code.

Value

Returned value is the same as that of hclust: An object of class hclust which describes the tree produced by the clustering process.
The object is a list with components:

merge        an n-1 by 2 matrix. Row i of merge describes the merging of clusters at step i of the clustering. If an element j in the row is negative, then observation -j was merged at this stage. If j is positive then the merge was with the cluster formed at the (earlier) stage j of the algorithm. Thus negative entries in merge indicate agglomerations of singletons, and positive entries indicate agglomerations of non-singletons.
height       a set of n-1 non-decreasing real values. The clustering height: that is, the value of the criterion associated with the clustering method for the particular agglomeration.
order        a vector giving the permutation of the original observations suitable for plotting, in the sense that a cluster plot using this ordering and matrix merge will not have crossings of the branches.
labels       labels for each of the objects being clustered.
call         the call which produced the result.
method       the cluster method that has been used.
dist.method  the distance that has been used to create d (only returned if the distance object has a "method" attribute).

Author(s)

<NAME>, adapted and packaged by <NAME>

References

This implementation is mentioned in:

Peter Langfelder, <NAME> (2012) Fast R Functions for Robust Correlations and Hierarchical Clustering. Journal of Statistical Software, 46(11), 1-17. http://www.jstatsoft.org/v46/i11/

F. Murtagh's software web site: http://www.classification-society.org/csna/mda-sw/, section 6

<NAME>., <NAME>. and <NAME>. (1988) The New S Language. Wadsworth & Brooks/Cole. (S version.)

<NAME>. (1974). Cluster Analysis. London: Heinemann Educ. Books.

<NAME>. (1975). Clustering Algorithms. New York: Wiley.

Sneath, <NAME>. and <NAME> (1973). Numerical Taxonomy. San Francisco: Freeman.

Anderberg, <NAME>. (1973). Cluster Analysis for Applications. Academic Press: New York.

<NAME>. (1999). Classification. Second Edition. London: Chapman and Hall / CRC

Murtagh, F. (1985).
"Multidimensional Clustering Algorithms", in COMPSTAT Lectures 4. Wuerzburg: Physica-Verlag (for algorithmic details of algorithms used).

<NAME>. (1966). Similarity Analysis by Reciprocal Pairs for Discrete and Continuous Data. Educational and Psychological Measurement, 26, 825-831.

See Also

hclust

Examples

# generate some data to cluster
set.seed(1);
nNodes = 2000;

# Random "distance" matrix
dst = matrix(runif(n = nNodes^2, min = 0, max = 1), nNodes, nNodes);

# Time the flashClust clustering
system.time( { h1 = hclust(as.dist(dst), method = "average"); } );

# Time the standard R clustering
system.time( { h2 = stats::hclust(as.dist(dst), method = "average"); } );

all.equal(h1, h2)
# What is different:
h1[[6]]
h2[[6]]
# Everything but the 'call' component is the same; in particular, the trees are exactly equal.
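The "started in the middle of the dendrogram" use of members described in the Details can be sketched as follows. This is a sketch in the spirit of the stats::hclust help page, not taken from the flashClust manual; it assumes squared Euclidean distance with centroid linkage, the one combination named above for which between-cluster dissimilarities are easy to compute directly:

```r
# cluster some 2-D points, cut the tree, then restart clustering
# from the cluster means using the 'members' argument
set.seed(42)
x <- rbind(matrix(rnorm(20, mean = 0), ncol = 2),
           matrix(rnorm(20, mean = 5), ncol = 2))

# initial clustering on squared Euclidean distances, cut into 4 clusters
memb <- cutree(hclust(dist(x)^2, method = "centroid"), k = 4)

# per-cluster means and sizes
cent <- apply(x, 2, function(col) tapply(col, memb, mean))
ni   <- as.vector(table(memb))

# squared Euclidean distances between cluster means; 'members' tells
# hclust how many observations each "singleton" actually represents,
# so the result reconstructs the part of the tree above the cut
h <- hclust(dist(cent)^2, method = "centroid", members = ni)
```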
Package ‘bartCause’                                     January 23, 2023

Version 1.0-6
Date 2023-01-23
Title Causal Inference using Bayesian Additive Regression Trees
Depends R (>= 3.1-0)
Imports dbarts (>= 0.9-16), methods, stats, graphics, parallel, utils, grDevices
Suggests testthat (>= 0.9-0), lme4, rpart, tmle, stan4bart
Description Contains a variety of methods to generate typical causal inference estimates using Bayesian Additive Regression Trees (BART) as the underlying regression model (Hill (2012) <doi:10.1198/jcgs.2010.08162>).
License GPL (>= 2)
NeedsCompilation no
URL https://github.com/vdorie/bartCause
BugReports https://github.com/vdorie/bartCause/issues
Author <NAME> [aut, cre] (<https://orcid.org/0000-0002-9576-3064>), <NAME> [aut] (<https://orcid.org/0000-0003-4983-2206>)
Maintainer <NAME> <<EMAIL>>
Repository CRAN
Date/Publication 2023-01-23 19:40:08 UTC

R topics documented:
bartc
bartc-generics
bartc-plot
summary.bartcFit

bartc                   Causal Inference using Bayesian Additive Regression Trees

Description

Fits a collection of treatment and response models using the Bayesian Additive Regression Trees (BART) algorithm, producing estimates of treatment effects.

Usage

bartc(response, treatment, confounders, parametric, data, subset, weights,
      method.rsp = c("bart", "tmle", "p.weight"),
      method.trt = c("bart", "glm", "none"),
      estimand = c("ate", "att", "atc"),
      group.by = NULL,
      commonSup.rule = c("none", "sd", "chisq"),
      commonSup.cut = c(NA_real_, 1, 0.05),
      args.rsp = list(), args.trt = list(),
      p.scoreAsCovariate = TRUE, use.ranef = TRUE, group.effects = FALSE,
      crossvalidate = FALSE,
      keepCall = TRUE, verbose = TRUE,
      seed = NA_integer_,
      ...)

Arguments

response    A vector of the outcome variable, or a reference to such in the data argument. Can be continuous or binary.
treatment   A vector of the binary treatment variable, or a reference to data.
confounders A matrix or data frame of covariates to be used in estimating the treatment and response model.
Can also be the right-hand side of a formula (e.g. x1 + x2 + ...). The data argument will be searched if supplied.

parametric  The right-hand side of a formula (e.g. x1 + x2 + (1 | g) ...) giving the equation of a parametric form to be used for estimating the mean structure. See the details section below.
data        An optional data frame or named list containing the response, treatment, and confounders.
subset      An optional vector used to subset the data. Can refer to data if provided.
weights     An optional vector of population weights used in model fitting and estimating the treatment effect. Can refer to data if provided.
method.rsp  A character string specifying which method to use when fitting the response surface and estimating the treatment effect. Options are: "bart" - fit the response surface with BART and take the average of the individual treatment effect estimates, "p.weight" - fit the response surface with BART but compute the treatment effect estimate by using a propensity score weighted sum of individual effects, and "tmle" - as above, but further adjust the individual estimates using the Targeted Minimum Loss based Estimation (TMLE) adjustment.
method.trt  A character string specifying which method to use when fitting the treatment assignment mechanism, or a vector/matrix of propensity scores. Character string options are: "bart" - fit BART directly to the treatment variable, "glm" - fit a generalized linear model with a binomial response and all confounders added linearly, and "none" - do no propensity score estimation. Cannot be "none" if the response model requires propensity scores. When supplied as a matrix, it should be of dimensions equal to the number of observations times the number of samples used in any response model.
estimand    A character string specifying which causal effect to target. Options are "ate" - average treatment effect, "att" - average treatment effect on the treated, and "atc" - average treatment effect on the controls.
group.by    An optional factor that, when present, causes the treatment effect estimate to be calculated within each group.
commonSup.rule  Rule for exclusion of observations lacking in common support. Options are "none" - no suppression, "sd" - exclude units whose predicted counterfactual standard deviation is extreme compared to the maximum standard deviation under those units' observed treatment condition, where extreme refers to the distribution of all standard deviations of observed treatment conditions, "chisq" - exclude observations according to the ratio of the variance of the posterior predicted counterfactual to the posterior variance of the observed condition, having a Chi Squared distribution with one degree of freedom under the null hypothesis of equal distributions.
commonSup.cut   Cutoffs for commonSup.rule. Ignored for "none"; when commonSup.rule is "sd", refers to how many standard deviations of the distribution of posterior variance for counterfactuals an observation can be above the maximum of posterior variances for that treatment condition. When commonSup.rule is "chisq", it is the p value used for rejection of the hypothesis of equal variances.
p.scoreAsCovariate  A logical such that when TRUE, the propensity score is added to the response model as a covariate. When used, this is equivalent to the 'ps-BART' method described by Hahn, Murray, and Carvalho.
use.ranef   Logical specifying if the group.by variable - when present - should be included as a "random" or "fixed" effect. If true, rbart will be used for BART models. Using random effects for treatment assignment mechanisms of type "glm" requires that the lme4 package be available.
group.effects   Logical specifying if effects should be calculated within groups if the group.by variable is provided. Response methods of "tmle" and "p.weight" are such that if group effects are calculated, then the population effect is not provided.
keepCall    A logical such that when FALSE, the call to bartc is not kept. This can reduce the amount of information printed by summary when passing in data as literals.
crossvalidate   One of TRUE, FALSE, "trt", or "rsp". Enables code to attempt to estimate the optimal end-node sensitivity parameter. This uses a rudimentary Bayesian optimization routine and can be extremely slow.
verbose     A logical that when TRUE prints information as the model is fit.
seed        Optional integer specifying the desired pRNG seed. It should not be needed when running single-threaded - set.seed will suffice - and can be used to obtain reproducible results when multi-threaded. See the Reproducibility section of bart2.
args.rsp, args.trt, ...   Further arguments to the treatment and response model fitting algorithms. Arguments passed to the main function as ... will be used in both models. args.rsp and args.trt can be used to set parameters in a single fit, and will override other values. See glm and bart2 for reference.

Details

bartc represents a collection of methods that primarily use the Bayesian Additive Regression Trees (BART) algorithm to estimate causal treatment effects with binary treatment variables and continuous or binary outcomes. This requires models to be fit to the response surface (the distribution of the response as a function of treatment and confounders, p(Y(1), Y(0) | X)) and optionally for the treatment assignment mechanism (the probability of receiving treatment, i.e. the propensity score, Pr(Z = 1 | X)). The response surface model is used to impute counterfactuals, which may then be adjusted together with the propensity score to produce estimates of effects.

Similar to lm, models can be specified symbolically. When the data term is present, it will be added to the search path for the response, treatment, and confounders variables. The confounders must be specified devoid of any "left hand side", as they appear in both of the models.
Response Surface

The response surface methods included are:

• "bart" - use BART to fit the response surface and produce individual estimates Ŷ_i(1) and Ŷ_i(0). Treatment effect estimates are obtained by averaging the difference of these across the population of interest.
• "p.weight" - individual effects are estimated as in "bart", but treatment effect estimates are obtained by using a propensity score weighted average. For the average treatment effect on the treated, these weights are p(z_i | x_i) / (Σz / n). For ATC, replace z with 1 - z. For ATE, "p.weight" is equal to "bart".
• "tmle" - individual effects are estimated as in "bart" and a weighted average is taken as in "p.weight", however the response surface estimates and propensity scores are corrected by using the Targeted Minimum Loss based Estimation method.

Treatment Assignment

The treatment assignment models are:

• "bart" - fit a binary BART directly to the treatment using all the confounders.
• "none" - no modeling is done. Only applies when using response method "bart" and p.scoreAsCovariate is FALSE.
• "glm" - fit a binomial generalized linear model with logistic link and confounders included as linear terms.
• Finally, a vector or matrix of propensity scores can be supplied. Propensity score matrices should have a number of rows equal to the number of observations in the data and a number of columns equal to the number of posterior samples.

Parametrics

bartc uses the stan4bart package, when available, to fit semiparametric surfaces. Equations can be given as to lm. Grouping structures are also permitted, and use the syntax of lmer.

Generics

For a fitted model, the easiest way to analyze the resulting fit is to use the generics fitted, extract, and predict to analyze specific quantities and summary to aggregate those values into targets (e.g. ATE, ATT, or ATC).
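As a rough illustration of the "glm" treatment-assignment option, the propensity scores it produces correspond to an ordinary logistic regression of treatment on the confounders. This is a sketch, not the package's internal code; X and z are hypothetical confounder and treatment objects:

```r
# hypothetical confounder matrix X and binary treatment vector z;
# the "glm" option amounts to a logistic regression of z on X
ps.fit  <- glm(z ~ X, family = binomial(link = "logit"))
p.score <- fitted(ps.fit)   # estimated Pr(Z = 1 | X) per observation
```

Such externally estimated scores could also be passed to bartc directly through method.trt, which accepts a vector (or matrix of posterior samples) of propensity scores as described above.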
Common Support Rules

Common support, or that the probability of receiving all treatment conditions is non-zero within every area of the covariate space (P(Z = 1 | X = x) > 0 for all x in the inferential sample), can be enforced by excluding observations with high posterior uncertainty. bartc supports two common support rules through the commonSup.rule argument:

• "sd" - observations are cut from the inferential sample if: s_i^f(1-z) > m_z + a × sd(s_j^f(z)), where s_i^f(1-z) is the posterior standard deviation of the predicted counterfactual for observation i, s_j^f(z) is the posterior standard deviation of the prediction for the observed treatment condition of observation j, sd(s_j^f(z)) is the empirical standard deviation of those quantities, and m_z = max_j {s_j^f(z)} for all j in the same treatment group, i.e. Z_j = z. a is a constant to be passed in using commonSup.cut and its default is 1.
• "chisq" - observations are cut from the inferential sample if: (s_i^f(1-z) / s_i^f(z))^2 > q_α, where the s_i are as above and q_α is the upper α percentile of a χ² distribution with one degree of freedom, corresponding to a null hypothesis of equal variance. The default for α is 0.05, and it is specified using the commonSup.cut parameter.

Special Arguments

Some default arguments are unconventional or are passed in a unique fashion.

• If n.chains is missing, unlike in bart2 a default of 10 is used.
• For method.rsp == "tmle", a special args.trt of posteriorOfTMLE determines if the TMLE correction should be applied to each posterior sample (TRUE), or just the posterior mean (FALSE).

Missing Data

Missingness is allowed only in the response. If some response values are NA, the BART models will be trained just for where data are available and those values will be used to make predictions for the missing observations. Missing observations are not used when calculating statistics for assessing common support, although they may still be excluded on those grounds.
Further, missing observations may not be compatible with response method "tmle".

Value

bartc returns an object of class bartcFit. Information about the object can be derived by using the methods summary, plot_sigma, plot_est, plot_indiv, and plot_support. Numerical quantities are recovered with the fitted and extract generics. Predictions for new observations are obtained with predict.

Objects of class bartcFit are lists containing items:

method.rsp       character string specifying the method used to fit the response surface
method.trt       character string specifying the method used to fit the treatment assignment mechanism
estimand         character string specifying the targeted causal effect
fit.rsp          object containing the fitted response model
data.rsp         dbartsData object used when fitting the response model
fit.trt          object containing the fitted treatment model
group.by         optional factor vector containing the groups in which treatment effects are estimated
est              matrix or array of posterior samples of the treatment effect estimate
p.score          the vector of propensity scores used as a covariate in the response model, when applicable
samples.p.score  matrix or array of posterior samples of the propensity score, when applicable
mu.hat.obs       samples from the posterior of the expected value for individual responses under the observed treatment regime
mu.hat.cf        samples from the posterior of the expected value for individual responses under the counterfactual treatment
name.trt         character string giving the name of the treatment variable in the data of fit.rsp
trt              vector of treatment assignments
call             how bartc was called
n.chains         number of independent posterior sampler chains in the response model
commonSup.rule   common support rule used for suppressing observations
commonSup.cut    common support parameter used to set the cutoff when suppressing observations
sd.obs           vector of standard deviations of individual posterior predictors for observed treatment conditions
sd.cf            vector of standard deviations of individual
posterior predictors for counterfactuals
commonSup.sub    logical vector expressing which observations are used when estimating treatment effects
use.ranef        logical for whether ranef models were used; only added when true
group.effects    logical for whether group-level estimates are made; only added when true
seed             a random seed for use when drawing from the posterior predictive distribution

Author(s)

<NAME>: <<EMAIL>>.

References

<NAME>., <NAME>. and <NAME>. (2010) BART: Bayesian additive regression trees. The Annals of Applied Statistics 4(1), 266-298. The Institute of Mathematical Statistics. doi:10.1214/09AOAS285.

<NAME>. (2011) Bayesian Nonparametric Modeling for Causal Inference. Journal of Computational and Graphical Statistics 20(1), 217-240. Taylor & Francis. doi:10.1198/jcgs.2010.08162.

<NAME>. and <NAME>. (2013) Assessing Lack of Common Support in Causal Inference Using Bayesian Nonparametrics: Implications for Evaluating the Effect of Breastfeeding on Children's Cognitive Outcomes. The Annals of Applied Statistics 7(3), 1386-1420. The Institute of Mathematical Statistics. doi:10.1214/13AOAS630.

<NAME>. (2019) Comment: Contributions of Model Features to BART Causal Inference Performance Using ACIC 2016 Competition Data. Statistical Science 34(1), 90-93. The Institute of Mathematical Statistics. doi:10.1214/18STS682.

<NAME>., <NAME>., and <NAME>. (2020) Bayesian Regression Tree Models for Causal Inference: Regularization, Confounding, and Heterogeneous Effects (with Discussion). Bayesian Analysis 15(3), 965-1056. International Society for Bayesian Analysis. doi:10.1214/19BA1195.
See Also

bart2

Examples

## fit a simple linear model
n <- 100L
beta.z <- c(.75, -0.5, 0.25)
beta.y <- c(.5, 1.0, -1.5)
sigma <- 2

set.seed(725)
x <- matrix(rnorm(3 * n), n, 3)
tau <- rgamma(1L, 0.25 * 16 * rgamma(1L, 1 * 32, 32), 16)
p.score <- pnorm(x %*% beta.z)
z <- rbinom(n, 1, p.score)

mu.0 <- x %*% beta.y
mu.1 <- x %*% beta.y + tau

y <- mu.0 * (1 - z) + mu.1 * z + rnorm(n, 0, sigma)

# low parameters only for example
fit <- bartc(y, z, x, n.samples = 100L, n.burn = 15L, n.chains = 2L)
summary(fit)

## example to show refitting under the common support rule
fit2 <- refit(fit, commonSup.rule = "sd")

fit3 <- bartc(y, z, x, subset = fit2$commonSup.sub,
              n.samples = 100L, n.burn = 15L, n.chains = 2L)

bartc-generics          Generic Methods for bartcFit Objects

Description

Visual exploratory data analysis and model fitting diagnostics for causal inference models fit using the bartc function.

Usage

## S3 method for class 'bartcFit'
fitted(object,
       type = c("pate", "sate", "cate", "mu.obs", "mu.cf", "mu.0",
                "mu.1", "y.cf", "y.0", "y.1", "icate", "ite",
                "p.score", "p.weights"),
       sample = c("inferential", "all"),
       ...)

extract(object, ...)

## S3 method for class 'bartcFit'
extract(object,
        type = c("pate", "sate", "cate", "mu.obs", "mu.cf", "mu.0",
                 "mu.1", "y.cf", "y.0", "y.1", "icate", "ite",
                 "p.score", "p.weights", "sigma"),
        sample = c("inferential", "all"),
        combineChains = TRUE,
        ...)

## S3 method for class 'bartcFit'
predict(object, newdata, group.by,
        type = c("mu", "y", "mu.0", "mu.1", "y.0", "y.1",
                 "icate", "ite", "p.score"),
        combineChains = TRUE,
        ...)

refit(object, newresp, ...)

## S3 method for class 'bartcFit'
refit(object, newresp = NULL,
      commonSup.rule = c("none", "sd", "chisq"),
      commonSup.cut = c(NA_real_, 1, 0.05),
      ...)

Arguments

object   Object of class bartcFit.
type     Which quantity to return. See details for a description of possible values.
sample   Return information for either the "inferential" (e.g. treated observations when the estimand is att) or "all" observations.
combineChains If the models were fit with more than one chain, results retain the chain structure unless combineChains is TRUE.
newresp Not presently used, but provided for compatibility with other definitions of the refit generic.
newdata Data corresponding to the confounders in a bartc fit.
group.by Optional grouping variable. See definition of group.by in bartc.
commonSup.rule, commonSup.cut As in bartc.
... Additional parameters passed up the generic method chain.

Details
fitted returns the values that would serve as predictions for an object returned by the bartc function, while extract instead returns the full matrix or array of posterior samples. The possible options are:
• "pate", "sate", "cate" - various target quantities; see summary
• "mu" - predict only: expected value; requires user-supplied treatment variable in newdata
• "y" - predict only: sample of the response; requires user-supplied treatment variable in newdata
• "mu.obs" - (samples from the posterior of) the expected value under the observed treatment condition, i.e. mu.hat_i(1) * z_i + mu.hat_i(0) * (1 - z_i)
• "mu.cf" - the expected value under the counterfactual treatment condition, i.e. mu.hat_i(1) * (1 - z_i) + mu.hat_i(0) * z_i
• "mu.0" - the expected value under the control condition
• "mu.1" - the expected value under the treated condition
• "y.cf" - samples of the response under the counterfactual treatment condition, i.e. y.hat_i(1 - z_i); values are obtained by adding noise to mu.cf using the posterior predictive distribution
• "y.0" - the response under the control condition: observed responses for the controls together with predicted counterfactuals for the treated, i.e. y_i(0) * (1 - z_i) + y.hat_i(0) * z_i
• "y.1" - the response under the treated condition: observed responses for the treated together with predicted counterfactuals for the controls, i.e. y_i(1) * z_i + y.hat_i(1) * (1 - z_i)
• "ite" - (sample) individual treatment effect estimates, i.e.
(y_i(z_i) - y_i(1 - z_i)) * (2 * z_i - 1); uses observed responses and posterior predicted counterfactuals
• "icate" - individual conditional average treatment effect estimates, i.e. mu.hat_i(1) - mu.hat_i(0)
• "p.score" - probability that each observation is assigned to the treatment group
• "p.weights" - weights assigned to each individual difference if the response method is "p.weight"
• "sigma" - residual standard deviation from continuous response models

refit exists to allow the same regressions to be used to calculate estimates under different common support rules. To refit those models on a subset, see the examples in bartc.
predict allows the fitted model to be used to make predictions on an out-of-sample set. Requires the model to be fit with keepTrees equal to TRUE. As 'y' values are all considered out of sample, the posterior predictive distribution is always used when relevant.

Value
For fitted, extract, and predict, a matrix, array, or vector depending on the dimensions of the result and the number of chains. For the following, when n.chains is one the dimension is dropped.
• "pate", "sate", or "cate" - with fitted, a scalar; with extract, n.chains x n.samples
• "p.score" - depending on the fitting method, samples may or may not be present; when samples are absent, a vector is returned for both functions; when present, the same as "y".
• all other types - with fitted, a vector of length equal to the number of observations (n.obs); with extract or predict, a matrix or array of dimensions n.chains x n.samples x n.obs.
For refit, an object of class bartcFit.

Author(s)
<NAME>: <<EMAIL>>.
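The chain-aware shapes described above can be illustrated without bartCause at all. The sketch below builds a simulated n.chains x n.samples x n.obs array of posterior draws and collapses the chain dimension, mirroring what combineChains = TRUE does to an extract result; the names and numbers are illustrative, not package internals.

```r
# Simulated posterior draws shaped like an extract() result with two chains.
# This sketches the reshaping only; it is not bartCause's implementation.
n.chains <- 2L
n.samples <- 100L
n.obs <- 25L
draws <- array(rnorm(n.chains * n.samples * n.obs),
               dim = c(n.chains, n.samples, n.obs))
# Collapse the chain dimension: rows now run over all draws from all chains.
combined <- matrix(aperm(draws, c(2, 1, 3)), ncol = n.obs)
dim(combined)  # 200 25
```

With one chain the leading dimension is dropped, which is why fitted-style summaries reduce to plain vectors of length n.obs.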
See Also
bartc

Examples
## fit a simple linear model
n <- 100L
beta.z <- c(.75, -0.5, 0.25)
beta.y <- c(.5, 1.0, -1.5)
sigma <- 2
set.seed(725)
x <- matrix(rnorm(3 * n), n, 3)
tau <- rgamma(1L, 0.25 * 16 * rgamma(1L, 1 * 32, 32), 16)
p.score <- pnorm(x %*% beta.z)
z <- rbinom(n, 1, p.score)
mu.0 <- x %*% beta.y
mu.1 <- x %*% beta.y + tau
y <- mu.0 * (1 - z) + mu.1 * z + rnorm(n, 0, sigma)
# low parameters only for example
fit <- bartc(y, z, x, n.samples = 100L, n.burn = 15L, n.chains = 2L)
# compare fit to linear model
lm.fit <- lm(y ~ z + x)
plot(fitted(fit, type = "mu.obs"), fitted(lm.fit))
# rank order sample individual treatment effect estimates and plot
ites <- extract(fit, type = "ite")
ite.m <- apply(ites, 2, mean)
ite.sd <- apply(ites, 2, sd)
ite.lb <- ite.m - 2 * ite.sd
ite.ub <- ite.m + 2 * ite.sd
ite.o <- order(ite.m)
plot(NULL, type = "n", xlim = c(1, length(ite.m)), ylim = range(ite.lb, ite.ub),
     xlab = "effect order", ylab = "individual treatment effect")
lines(rbind(seq_along(ite.m), seq_along(ite.m), NA),
      rbind(ite.lb[ite.o], ite.ub[ite.o], NA), lwd = 0.5)
points(seq_along(ite.m), ite.m[ite.o], pch = 20)

bartc-plot Plot methods for bartc

Description
Visual exploratory data analysis and model fitting diagnostics for causal inference models fit using the bartc function.

Usage
plot_sigma(x, main = "Traceplot sigma",
           xlab = "iteration", ylab = "sigma",
           lty = 1:x$n.chains, ...)

plot_est(x, main = paste("Traceplot", x$estimand),
         xlab = "iteration", ylab = x$estimand,
         lty = 1:x$n.chains, col = NULL, ...)

plot_indiv(x, main = "Histogram Individual Quantities",
           type = c("icate", "mu.obs", "mu.cf", "mu.0", "mu.1",
                    "y.cf", "y.0", "y.1", "ite"),
           xlab = "treatment effect", breaks = 20, ...)
plot_support(x, main = "Common Support Scatterplot",
             xvar = "tree.1", yvar = "tree.2",
             sample = c("inferential", "all"),
             xlab = NULL, ylab = NULL,
             pch.trt = 21, bg.trt = "black",
             pch.ctl = pch.trt, bg.ctl = NA,
             pch.sup = pch.trt, bg.sup = NA, col.sup = "red", cex.sup = 1.5,
             legend.x = "topleft", legend.y = NULL, ...)

Arguments
x Object of class bartcFit.
main Character title of plot.
xlab Character label of x axis. For plot_support, if NULL a default will be used.
ylab Character label of y axis. For plot_support, if NULL a default will be used.
lty For line plots (plot_sigma, plot_est), models use the values of lty to visually distinguish each chain.
col For line plots, use col vector to distinguish between groups (if any).
breaks Argument passed to hist.
type The individual quantity to be plotted. See fitted.
xvar Variable for use on x axis. Can be one of "tree.XX", "pca.XX", "css", any individual level quantity accepted by fitted, the number or name of a column used to fit the response model, or a given vector. See below for details.
sample Return information for either the "inferential" (e.g. treated observations when the estimand is att) or "all" observations.
yvar Variable for use on the y axis, of the same form as xvar.
pch.trt pch point value used when plotting treatment observations.
bg.trt bg background value used when plotting treatment observations.
pch.ctl pch point value used when plotting control observations.
bg.ctl bg background value used when plotting control observations.
pch.sup pch point value used when plotting suppressed observations.
bg.sup bg background value used when plotting suppressed observations.
col.sup col color value used when plotting suppressed observations.
cex.sup cex size value used when plotting suppressed observations.
legend.x x value passed to legend. If NULL, legend plotting is skipped.
legend.y Optional y value passed to legend.
... Optional graphical parameters.

Details
Produces various plots using objects fit by bartc.
plot_sigma and plot_est are traditional parameter trace plots that can be used to diagnose the convergence of the posterior sampler. If the bartc model is fit with n.chains greater than one, by default each chain will be plotted with its own line type.
plot_indiv produces a simple histogram of the distribution of the estimates of the individual effects, taken as the average of their posterior samples.
plot_support is used to visualize the common support diagnostic in the form of a scatterplot. Points that the diagnostic excludes are outlined in red. The contents of the x and y axes are controlled by the xvar and yvar arguments respectively and can be of the form:
• "tree.XX" - Variable number "XX" as selected by variable importance in predicting individual treatment effects using a tree fit by rpart. "XX" must be a number not exceeding the number of continuous variables used to fit the response model.
• "pca.XX" - "XX"th principal component axis.
• "css" - The common support statistic.
• "y" - Observed response variable.
• "y0" - Predicted values for the response under the control as obtained by fitted.
• "y1" - Predicted values for the response under the treatment as obtained by fitted.
• "indiv.diff" - Individual treatment effect estimates, i.e. y.hat(1) - y.hat(0).
• "p.score" - Propensity score used to fit the response model (if applicable).
• "p.weights" - Weights used when taking the average of individual differences for response method "p.weight".
• predictor columns - Given by name or number.
• supplied vector - Any vector of length equal to the number of observations.

Value
None, although plotting occurs as a side-effect.

Author(s)
<NAME>: <<EMAIL>>.
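As a rough sketch of what plot_indiv summarizes: each observation's posterior draws are averaged into a single point estimate before being binned. The simulated draws below stand in for an extract result; all names here are illustrative, not bartCause internals.

```r
# Average each observation's draws, then bin the point estimates the way the
# histogram does (breaks = 20 matches plot_indiv's default).
set.seed(1)
n.samples <- 200L
n.obs <- 100L
icate.draws <- matrix(rnorm(n.samples * n.obs, mean = 1), n.samples, n.obs)
icate.hat <- colMeans(icate.draws)  # one point estimate per observation
h <- hist(icate.hat, breaks = 20, plot = FALSE)
sum(h$counts)  # 100, one count per observation
```

Using plot = FALSE returns the binning without drawing, which is convenient for checking that every observation contributes exactly one count.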
See Also
bartc

Examples
# generate fake data using a simple linear model
n <- 100L
beta.z <- c(.75, -0.5, 0.25)
beta.y <- c(.5, 1.0, -1.5)
sigma <- 2
set.seed(725)
x <- matrix(rnorm(3 * n), n, 3)
tau <- rgamma(1L, 0.25 * 16 * rgamma(1L, 1 * 32, 32), 16)
p.score <- pnorm(x %*% beta.z)
z <- rbinom(n, 1, p.score)
mu.0 <- x %*% beta.y
mu.1 <- x %*% beta.y + tau
y <- mu.0 * (1 - z) + mu.1 * z + rnorm(n, 0, sigma)
# run with low parameters only for example
fit <- bartc(y, z, x, n.samples = 100L, n.burn = 15L, n.chains = 2L,
             n.threads = 1L, commonSup.rule = "sd")
## plot specific functions
# sigma plot can be used to assess convergence of chains
plot_sigma(fit)
# show points lacking common support
plot_support(fit, xvar = "tree.1", yvar = "css", legend.x = NULL)
# see example in ?"bartc-generics" for rank-ordered individual effects plot

summary.bartcFit Summary for bartcFit Objects

Description
Summarizes bartc fits by calculating target quantities and uncertainty estimates.

Usage
## S3 method for class 'bartcFit'
summary(object,
        target = c("pate", "sate", "cate"),
        ci.style = c("norm", "quant", "hpd"), ci.level = 0.95,
        pate.style = c("ppd", "var.exp"),
        ...)

Arguments
object Object of class bartcFit.
target Treatment effect to calculate. One of "pate" - population average treatment effect, "sate" - sample average treatment effect, and "cate" - conditional average treatment effect.
ci.style Means of calculating confidence intervals (posterior credible regions). Options include "norm" - use a normal approximation, "quant" - the empirical quantiles of the posterior samples, and "hpd" - region of highest posterior density.
ci.level Level of confidence for intervals.
pate.style For target "pate", calculate uncertainty by using "ppd" - the posterior predictive distribution - or "var.exp" - a variance expansion formula.
... Not used at the moment, but present to match the summary generic signature.

Details
summary produces a numeric and qualitative summary of a bartc fit.
Target Types
The SATE and PATE involve calculating predicted response values under different treatment conditions. When using extract or fitted, these values are simulated directly from the posterior predictive distribution. However, since these quantities all have the same expected value, in order to provide consistent results summary only uses those simulations to derive credible intervals. Thus the estimates for CATE, SATE, and PATE should all be reported as the same but with increasing degrees of uncertainty.

Grouped Data
If a model is fit with a supplied grouping variable and group.effects = TRUE, the estimates will also be reported within groups. When possible, the last line corresponds to the population. Within-group estimates for response methods such as "tmle" cannot easily be extrapolated to the population at large - the means will combine based on the sample sizes but the uncertainty estimates will lack correlations.

Value
An object of class bartcFit.summary equivalent to a list with named items:
call how bartc was called
method.rsp character string specifying the method used to fit the response surface
method.trt character string specifying the method used to fit the treatment assignment mechanism
ci.info a named list with items target, ci.style, ci.level, and pate.style as passed to summary
n.obs total number of observations
n.samples number of samples within any one chain
n.chains total number of chains
commonSup.rule common support rule used when fitting object to produce estimates
estimates a data.frame with columns "estimate" - the target, "sd" - standard deviation of posterior of estimate, "ci.lower" - lower bound of credible region, "ci.upper" - upper bound of credible region, and optionally "n.cut" - how many observations were dropped using the common support rule

Author(s)
<NAME>: <<EMAIL>>.
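The "norm" and "quant" interval styles accepted by summary can be sketched directly on a vector of hypothetical posterior draws. This is a stand-in illustration, not the package's code; the "hpd" style additionally searches for the narrowest interval containing the requested mass.

```r
# Two of summary()'s interval styles, applied to simulated posterior draws.
set.seed(1)
draws <- rnorm(1000, mean = 2, sd = 0.5)
ci.level <- 0.95
alpha <- 1 - ci.level
# "norm": normal approximation around the posterior mean
norm.ci <- mean(draws) + c(-1, 1) * qnorm(1 - alpha / 2) * sd(draws)
# "quant": empirical quantiles of the draws
quant.ci <- unname(quantile(draws, c(alpha / 2, 1 - alpha / 2)))
```

For well-behaved, roughly symmetric posteriors the two styles nearly coincide; they diverge when the posterior is skewed, which is when "quant" or "hpd" is preferable.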
Package ‘velociraptr’ October 12, 2022
Type Package
Title Fossil Analysis
Version 1.1.0
Author <NAME>
Maintainer <NAME> <<EMAIL>>
Description Functions for downloading, reshaping, culling, cleaning, and analyzing fossil data from the Paleobiology Database <https://paleobiodb.org>.
Imports sf
License GPL-3
Encoding UTF-8
LazyData true
RoxygenNote 6.1.1
NeedsCompilation no
Repository CRAN
Date/Publication 2019-07-31 19:20:02 UTC

R topics documented:
abundanceMatrix, ageRanges, cleanTaxonomy, constrainAges, cullMatrix, downloadPaleogeography, downloadPBDB, downloadPlaces, downloadTime, fixedLatitude, multiplicativeBeta, presenceMatrix, taxonAlpha

abundanceMatrix Create a community matrix of taxon abundances

Description
Creates a community matrix of taxon abundances, with samples as rows and species as columns, from a data frame.

Usage
abundanceMatrix(Data, Rows = "geoplate", Columns = "genus")

Arguments
Data A data.frame of taxonomic occurrences. Must have at least two columns. One column representing the samples, and one column representing the taxa.
Rows A character string
Columns A character string

Details
Note that older versions of this function automatically checked for and removed hanging factors. However, this is something that should really be dictated by the user, and that step is no longer a part of the function. This is unlikely to introduce any breaking changes in older scripts, but we note it here for documentation purposes.

Value
A numeric matrix of taxon abundances. Samples as the rownames and species as the column names.

Author(s)
<NAME>

Examples
# Download a test dataset of Pleistocene bivalves.
# DataPBDB<-downloadPBDB(Taxa="Bivalvia", StartInterval="Pleistocene", StopInterval="Pleistocene")
# Clean the genus column
# DataPBDB<-cleanTaxonomy(DataPBDB,"genus")
# Create a community matrix of genera by tectonic plate id
# CommunityMatrix<-abundanceMatrix(Data=DataPBDB, Rows="geoplate", Columns="genus")

ageRanges Find the age range for each taxon in a dataframe

Description
Find the age range (first occurrence and last occurrence) for each taxon in a PBDB dataset. Can be run for any level of the taxonomic hierarchy (e.g., family, genus).

Usage
ageRanges(Data, Taxonomy = "genus")

Arguments
Data A data frame downloaded from the paleobiology database API.
Taxonomy A character string identifying the desired level of the taxonomic hierarchy.

Details
Returns a data frame that gives the time of origination and extinction for each taxon as numeric values. Note that older versions of this function automatically dropped hanging factors and NA's, but that cleaning step should ideally be dictated by the user up-front, so that functionality has been dropped. This may introduce breaking changes in legacy scripts, but is easily fixed by standard data cleaning steps.

Value
A numeric matrix of first and last ages for each taxon, with taxa as rownames.

Author(s)
<NAME>

Examples
# Download a test dataset of Cenozoic bivalves.
# DataPBDB<-downloadPBDB(Taxa="Bivalvia",StartInterval="Cenozoic",StopInterval="Cenozoic")
# Find the first occurrence and last occurrence for all Cenozoic bivalves in DataPBDB
# AgeRanges<-ageRanges(DataPBDB,"genus")

cleanTaxonomy Clean taxonomic names

Description
Removes NAs and subgenera from the genus column.

Usage
cleanTaxonomy(Data, Taxonomy = "genus")

Arguments
Data A data frame of taxonomic occurrences downloaded from the paleobiology database API.
Taxonomy A character string

Details
Will remove NA's and subgenera from the genus column of a PBDB dataset.
It can also be used on other datasets of similar structure to convert species names to genus, or remove NAs.

Value
Will return a data frame identical to the original, but with the genus column cleaned.

Author(s)
<NAME>

Examples
# Download a test dataset of Cenozoic bivalves.
# DataPBDB<-downloadPBDB(Taxa="Bivalvia",StartInterval="Cenozoic",StopInterval="Cenozoic")
# Clean up the genus column.
# CleanedPBDB<-cleanTaxonomy(DataPBDB,"genus")

constrainAges Constrain a dataset to only occurrences within a certain age-range

Description
Assign fossil occurrences to different intervals within a geologic timescale, then remove occurrences that are not temporally constrained to a single interval within that timescale.

Usage
constrainAges(Data, Timescale)
multiplyAges(Data, Timescale)

Arguments
Data A data frame
Timescale A data frame

Details
Cull a paleobiology database data frame to only occurrences temporally constrained to be within a certain level of the geologic timescale (e.g., period, epoch). The geologic timescale should come from the Macrostrat database, but custom time-scales can be used if structured in the same way. See downloadTime for how to download a timescale.

Value
A data frame

Author(s)
<NAME>

Examples
# Download a test dataset of Cenozoic bivalves.
# DataPBDB<-downloadPBDB(Taxa="Bivalvia",StartInterval="Cenozoic",StopInterval="Cenozoic")
# Download the international epochs timescale from macrostrat.org
# Epochs<-downloadTime("international epochs")
# Find only occurrences that are temporally constrained to a single international epoch
# ConstrainedPBDB<-constrainAges(DataPBDB,Timescale=Epochs)
# Create multiple instances of a single occurrence for each epoch it occurs in
# MultipliedPBDB<-multiplyAges(DataPBDB,Timescale=Epochs)

cullMatrix Cull rare taxa and depauperate samples

Description
Functions for recursively culling community matrices of rare taxa and depauperate samples.
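The recursion behind this culling can be sketched in a few lines of base R. This simplified stand-in (not velociraptr's implementation) alternates between dropping samples with too few taxa and taxa found in too few samples until both thresholds hold, or nothing is left:

```r
# Simplified stand-in for the recursive culling idea (illustrative only).
cullSketch <- function(m, Rarity = 2, Richness = 2) {
  repeat {
    keep.rows <- rowSums(m > 0) >= Richness  # samples rich enough to keep
    keep.cols <- colSums(m > 0) >= Rarity    # taxa common enough to keep
    if (all(keep.rows) && all(keep.cols)) return(m)
    m <- m[keep.rows, keep.cols, drop = FALSE]
    if (nrow(m) == 0L || ncol(m) == 0L) return(NULL)  # everything culled
  }
}
m <- rbind(s1 = c(1, 1, 0), s2 = c(1, 1, 1), s3 = c(0, 0, 1))
dim(cullSketch(m))  # 2 2: sample s3 and the third taxon are culled
```

The second check matters: removing sample s3 leaves the third taxon in only one sample, so a second pass removes it too, which is exactly the cascading behavior the Details section warns can empty a matrix when the thresholds are too high.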
Usage
cullMatrix(CommunityMatrix, Rarity = 2, Richness = 2, Silent = FALSE)

Arguments
CommunityMatrix a matrix
Rarity a whole number
Richness a whole number
Silent logical

Details
Takes a community matrix (see presenceMatrix or abundanceMatrix) and removes all samples with fewer than a certain number of taxa and all taxa that occur below a certain threshold of samples. The function operates recursively, and will check to see if removing a rare taxon drops a sample below the input minimum richness and vice-versa. This means that it is possible to eliminate all taxa and samples if the rarity and richness minimums are too high. If the Silent argument is set to FALSE the function will throw an error and print a warning if no taxa or samples are left after culling. If Silent is set to TRUE the function will simply return NULL. The latter case is useful if many matrices are being culled as a part of a loop, and you do not want to break the loop with an error. These functions originally appeared in the R script appendix of Holland, S.M. and <NAME> (2011) "Niche conservatism along an onshore-offshore gradient". Paleobiology 37:270-286.

Value
A community matrix with depauperate samples and rare taxa removed.

Author(s)
<NAME>

Examples
# Download a test dataset of Pleistocene bivalves.
# DataPBDB<-downloadPBDB(Taxa="Bivalvia",StartInterval="Pleistocene",StopInterval="Pleistocene")
# Create a community matrix with tectonic plates as "samples".
# CommunityMatrix<-abundanceMatrix(DataPBDB,"geoplate")
# Remove taxa that occur in less than 5 samples and samples with fewer than 25 taxa.
# cullMatrix(CommunityMatrix,Rarity=5,Richness=25,Silent=FALSE)

downloadPaleogeography Downloads paleogeographic maps

Description
Download a paleogeographic map for an age expressed in millions of years ago.

Usage
downloadPaleogeography(Age = 0)

Arguments
Age A whole number up to 550

Details
Downloads a map of paleocontinents for a specific age from Macrostrat.org as a shapefile.
The given age must be expressed as a whole number. Note that the function makes use of the rgdal and RCurl packages.

Value
A simple features object

Author(s)
<NAME>

Examples
# Download a test dataset of Maastrichtian bivalves.
# DataPBDB<-downloadPBDB(Taxa="Bivalvia",StartInterval="Maastrichtian",StopInterval="Maastrichtian")
# Download a paleogeographic map.
# KTBoundary<-downloadPaleogeography(Age=66)
# Plot the paleogeographic map (uses rgdal) and the PBDB points.
# plot(KTBoundary,col="grey")
# points(x=DataPBDB[,"paleolng"],y=DataPBDB[,"paleolat"],pch=16,cex=2)

downloadPBDB Download Occurrences from the Paleobiology Database

Description
Downloads a data frame of Paleobiology Database fossil occurrences.

Usage
downloadPBDB(Taxa, StartInterval = "Pliocene", StopInterval = "Pleistocene")

Arguments
Taxa a character vector
StartInterval a character vector
StopInterval a character vector

Details
Downloads a data frame of Paleobiology Database fossil occurrences matching certain taxonomic groups and age range. This is simply a convenience function for rapid data download, and only returns the most generically useful fields. Go directly to the Paleobiology Database to make more complex searches or access additional fields. This function makes use of the RCurl package.
• occurrence_no: The Paleobiology Database occurrence number.
• collection_no: The Paleobiology Database collection number.
• reference_no: The Paleobiology Database reference number.
• Classifications: The stated Linnean classification of the occurrence from phylum through genus. See cleanTaxonomy for how to simplify these fields.
• accepted_name: The highest resolution taxonomic name assigned to the occurrence.
• Geologic Intervals: The earliest possible age of the occurrence and latest possible age of the occurrence, expressed in terms of geologic intervals. See constrainAges for how to simplify these fields.
• Numeric Ages: The earliest possible age of the occurrence and latest possible age of the occurrence, expressed as millions of years ago.
• Geolocation: Both present-day and rotated paleocoordinates of the occurrence. The geoplate id used by the rotation model is also included. The key for geoplate ids can be found in the Paleobiology Database API documentation.

Value
a data frame

Author(s)
<NAME>

Examples
# Download a test dataset of Ypresian bivalves.
# DataPBDB<-downloadPBDB(Taxa="Bivalvia",StartInterval="Ypresian",StopInterval="Ypresian")
# Download a test dataset of Ordovician-Silurian trilobites and brachiopods.
# DataPBDB<-downloadPBDB(c("Trilobita","Brachiopoda"),"Ordovician","Silurian")

downloadPlaces Download Shapefile of Places

Description
Download a shapefile of a geolocation from the Macrostrat API's implementation of the Who's on First database by MapZen.

Usage
downloadPlaces(Place = "Wisconsin", Type = "region")

Arguments
Place A character string; the name of a place
Type A character string; a type of place

Details
Download a shapefile of a geolocation from the Macrostrat API. The Macrostrat database provides a GeoJSON of a particular location given the location's name and type. Type can be one of the categories: "continent", "country", "region", "county", and "locality". If multiple locations of the same type share the same name (e.g., Alexandria), the route will return a feature collection of all matching polygons.
Value
An rgdal compatible shapefile

Author(s)
<NAME>

Examples
# Download a polygon of Dane County, Wisconsin, United States, North America
# DaneCounty<-downloadPlaces(Place="Dane",Type="county")
# Download a polygon of Wisconsin, United States, North America
# Wisconsin<-downloadPlaces(Place="Wisconsin",Type="region")
# Download a polygon of North America
# NorthAmerica<-downloadPlaces(Place="North America",Type="continent")

downloadTime Download geologic timescale

Description
Downloads a geologic timescale from the Macrostrat.org database.

Usage
downloadTime(Timescale = "interational epochs")

Arguments
Timescale character string; a recognized timescale in the Macrostrat.org database

Details
Downloads a recognized timescale from the Macrostrat.org database. This includes the name, minimum age, maximum age, midpoint age, and official International Commission on Stratigraphy color hexcode if applicable of each interval in the timescale. Go to https://macrostrat.org/api/defs/timescales?all for a list of recognized timescales.

Value
A data frame

Author(s)
<NAME>

Examples
# Download the ICS recognized periods timescale
Timescale<-downloadTime(Timescale="international periods")

fixedLatitude Download fixed-latitude equal-area grid

Description
Download an equal-area grid of the world with fixed latitudinal spacing and variable longitudinal spacing.

Usage
fixedLatitude(LatSpacing = 5, CellArea = "500000")

Arguments
LatSpacing Number of degrees desired between latitudinal bands
CellArea Desired target area of the cells in km^2 as a character string

Details
Downloads an equal-area grid with fixed latitudinal spacing and variable longitudinal spacing. The distance between longitudinal borders of grids will adjust to the target area size within each band of latitude. The algorithm will adjust the area of the grids to ensure that the total surface of the globe is covered.
Value
A simple features object

Author(s)
<NAME>

Examples
# Download an equal area grid with 10 degree latitudinal spacing and 1,000,000 km^2 grids
# EqualArea<-fixedLatitude(LatSpacing=10,CellArea="1000000")

multiplicativeBeta Multiplicative Diversity Partitioning

Description
Calculates beta diversity under various Multiplicative Diversity Partitioning paradigms.

Usage
multiplicativeBeta(CommunityMatrix)
completeTurnovers(CommunityMatrix)
notEndemic(CommunityMatrix)

Arguments
CommunityMatrix a matrix

Details
Takes a community matrix (see presenceMatrix or abundanceMatrix) and returns one of three types of multiplicative beta diversity. Refer to Tuomisto, H (2010) "A diversity of beta diversities: straightening up a concept gone awry. Part 1. Defining beta diversity as a function of alpha and gamma diversity". Ecography 33:2-22.
• multiplicativeBeta(CommunityMatrix): Calculates the original beta diversity ratio - Gamma/Alpha. It quantifies how many times as rich gamma is than alpha.
• completeTurnovers(CommunityMatrix): The number of complete effective species turnovers observed among compositional units in the dataset - (Gamma-Alpha)/Alpha.
• notEndemic(CommunityMatrix): The proportion of taxa in the dataset not limited to a single sample - (Gamma-Alpha)/Gamma.

Value
A numeric vector

Author(s)
<NAME>

Examples
# Download a test dataset of Pleistocene bivalves from the Paleobiology Database.
# DataPBDB<-downloadPBDB(Taxa="Bivalvia","Pleistocene","Pleistocene")
# Create a community matrix with tectonic plates as "samples".
# CommunityMatrix<-abundanceMatrix(DataPBDB,"geoplate")
# "True local diversity ratio"
# multiplicativeBeta(CommunityMatrix)
# Whittaker's effective species turnover
# completeTurnovers(CommunityMatrix)
# Proportional effective species turnover
# notEndemic(CommunityMatrix)

presenceMatrix Create a matrix of presences and absences

Description
Creates a community matrix of taxon presences and absences from a data frame with a column of sites and a column of species.

Usage
presenceMatrix(Data, Rows = "geoplate", Columns = "genus")

Arguments
Data A dataframe or matrix
Rows A character string
Columns A character string

Value
A presence-absence matrix

Author(s)
<NAME>

Examples
# Download a test dataset of Pleistocene bivalves.
# DataPBDB<-downloadPBDB(Taxa="Bivalvia","Pleistocene","Pleistocene")
# Create a community matrix of genera by plates.
# CommunityMatrix<-presenceMatrix(DataPBDB,Rows="geoplate",Columns="genus")
# Create a community matrix of families by geologic interval.
# CommunityMatrix<-presenceMatrix(DataPBDB,Rows="early_interval",Columns="family")

taxonAlpha Additive Diversity Partitioning functions

Description
Functions for calculating alpha, beta, and gamma richness of a community matrix under the Additive Diversity Partitioning paradigm of R. Lande.

Usage
taxonAlpha(CommunityMatrix)
meanAlpha(CommunityMatrix)
taxonBeta(CommunityMatrix)
sampleBeta(CommunityMatrix)
totalBeta(CommunityMatrix)
totalGamma(CommunityMatrix)

Arguments
CommunityMatrix a matrix

Details
Takes a community matrix (see presenceMatrix or abundanceMatrix) and returns either the alpha, beta, or gamma richness of a community matrix. These functions were originally presented in Holland, SM (2010) "Additive diversity partitioning in palaeobiology: revisiting Sepkoski's question" Paleontology 53:1237-1254.
• taxonAlpha(CommunityMatrix) Calculates the contribution to alpha diversity of each taxon.
• meanAlpha(CommunityMatrix) Calculates the average alpha diversity of all samples.
• taxonBeta(CommunityMatrix) Calculates the contribution to beta diversity of each taxon.
• sampleBeta(CommunityMatrix) Calculates the contribution to beta diversity of each sample.
• totalBeta(CommunityMatrix) Calculates the total beta diversity.
• totalGamma(CommunityMatrix) Calculates the richness of all samples in the community matrix.

Value
A vector of the alpha, beta, or gamma richness of a taxon, sample, or entire community matrix.

Author(s)
<NAME>

Examples
# Download a test dataset of Pleistocene bivalves.
# DataPBDB<-downloadPBDB(Taxa="Bivalvia",StartInterval="Pleistocene",StopInterval="Pleistocene")
# Create a community matrix with tectonic plates as "samples"
# CommunityMatrix<-abundanceMatrix(DataPBDB,"geoplate")
# Calculate the average richness of all samples in a community.
# meanAlpha(CommunityMatrix)
# The beta diversity of all samples in a community.
# totalBeta(CommunityMatrix)
# This is, by definition, equivalent to the gamma diversity - mean alpha diversity.
# totalBeta(CommunityMatrix)==(totalGamma(CommunityMatrix)-meanAlpha(CommunityMatrix))
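The identity in the final example above - total beta diversity equals gamma diversity minus mean alpha diversity - can be verified by hand on a toy presence-absence matrix. The sample and taxon names below are illustrative, and the gamma/alpha computations are plain base R, not the package's functions.

```r
# Toy community matrix: 2 samples (rows) by 3 taxa (columns).
CommunityMatrix <- rbind(
  plate1 = c(Arca = 1, Mya = 1, Ensis = 0),
  plate2 = c(Arca = 1, Mya = 0, Ensis = 1)
)
gamma <- sum(colSums(CommunityMatrix) > 0)        # pooled richness: 3
mean.alpha <- mean(rowSums(CommunityMatrix > 0))  # average sample richness: 2
total.beta <- gamma - mean.alpha
total.beta  # 1
```

One unit of additive beta here reflects the single taxon swap (Mya vs Ensis) between the two samples.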
Package ‘mcbette’ September 27, 2023
Title Model Comparison Using 'babette'
Version 1.15.2
Maintainer <NAME> <<EMAIL>>
Description 'BEAST2' (<https://www.beast2.org>) is a widely used Bayesian phylogenetic tool, that uses DNA/RNA/protein data and many model priors to create a posterior of jointly estimated phylogenies and parameters. 'mcbette' allows one to do a Bayesian model comparison over some site and clock models, using 'babette' (<https://github.com/ropensci/babette/>).
License GPL-3
RoxygenNote 7.2.3
VignetteBuilder knitr
URL https://github.com/ropensci/mcbette/
BugReports https://github.com/ropensci/mcbette/issues
Imports babette (>= 2.3), beautier (>= 2.6.2), beastier (>= 2.4.6), curl, devtools, mauricer (>= 2.5), Rmpfr, testit, txtplot
Suggests ape, ggplot2, hunspell, knitr, lintr, markdown, nLTT, phangorn, rappdirs, rmarkdown, spelling, stringr, testthat (>= 2.1.0), tracerer
Language en-US
Encoding UTF-8
SystemRequirements BEAST2 (https://www.beast2.org/)
NeedsCompilation no
Author <NAME> [aut, cre] (<https://orcid.org/0000-0003-1107-7049>), <NAME> [rev] (Joëlle reviewed the package for rOpenSci, see https://github.com/ropensci/software-review/issues/360), <NAME> [rev] (Vikram reviewed the package for rOpenSci, see https://github.com/ropensci/software-review/issues/360, <https://orcid.org/0000-0002-9367-8974>)
Repository CRAN
Date/Publication 2023-09-27 09:00:02 UTC

R topics documented:
calc_weights, can_run_mcbette, check_beast2_ns_pkg, check_marg_liks, check_mcbette_state, default_params_doc, est_marg_lik, est_marg_liks, get_mcbette_state, get_test_marg_liks, interpret_bayes_factor, interpret_marg_lik_estimates, is_marg_liks, mcbette_report, mcbette_self_test, plot_marg_liks, set_mcbette_state
calc_weights Calculate the weights for each marginal likelihood

Description
Calculate the weights for each marginal likelihood

Usage
calc_weights(marg_liks)

Arguments
marg_liks (non-log) marginal likelihood estimates

Value
the weight of each marginal likelihood estimate, which will sum up to 1.0

Author(s)
<NAME>

Examples
# Evidences (aka marginal likelihoods) can be very small
evidences <- c(0.0001, 0.0002, 0.0003, 0.0004)
# Sum will be 1.0
calc_weights(evidences)
beastier::remove_beaustier_folders()
beastier::check_empty_beaustier_folders()

can_run_mcbette Can 'mcbette' run?

Description
Can 'mcbette' run? Will return TRUE if:
• (1) Running on Linux or MacOS
• (2) BEAST2 is installed
• (3) The BEAST2 NS package is installed

Usage
can_run_mcbette(beast2_folder = beastier::get_default_beast2_folder())

Arguments
beast2_folder the folder where BEAST2 is installed. Note that this is not the folder where the BEAST2 executable is installed: the BEAST2 executable is in a subfolder. Use get_default_beast2_folder to get the default BEAST2 folder. Use get_default_beast2_bin_path to get the full path to the default BEAST2 executable. Use get_default_beast2_jar_path to get the full path to the default BEAST2 jar file.

Author(s)
<NAME>

Examples
can_run_mcbette()
beastier::remove_beaustier_folders()
beastier::check_empty_beaustier_folders()

check_beast2_ns_pkg Checks if the BEAST2 'NS' package is installed

Description
Checks if the BEAST2 'NS' package is installed. Will stop if not.

Usage
check_beast2_ns_pkg(beast2_bin_path = beastier::get_default_beast2_bin_path())

Arguments
beast2_bin_path path to the BEAST2 binary file

check_marg_liks Check if the marg_liks are of the same type as returned by est_marg_liks

Description
Stop if not.

Usage
check_marg_liks(marg_liks)

Arguments
marg_liks a table of (estimated) marginal likelihoods, as, for example, created by est_marg_liks.
This data.frame has the following columns:
• site_model_name: name of the site model, must be an element of get_site_model_names
• clock_model_name: name of the clock model, must be an element of get_clock_model_names
• tree_prior_name: name of the tree prior, must be an element of get_tree_prior_names
• marg_log_lik: estimated marginal (natural) log likelihood
• marg_log_lik_sd: estimated error of marg_log_lik
• weight: relative model weight, a value from 1.0 (all evidence is in favor of this model combination) to 0.0 (no evidence in favor of this model combination)
• ess: effective sample size of the marginal likelihood estimation

Use get_test_marg_liks to get a test marg_liks. Use is_marg_liks to determine if a marg_liks is valid. Use check_marg_liks to check that a marg_liks is valid.

check_mcbette_state Check if the mcbette_state is valid.

Description
Check if the mcbette_state is valid. Will stop otherwise.

Usage
check_mcbette_state(mcbette_state)

Arguments
mcbette_state the mcbette state, which is a list with the following elements:
• beast2_installed: TRUE if BEAST2 is installed, FALSE otherwise
• ns_installed: NA if BEAST2 is not installed, TRUE if the BEAST2 NS package is installed, FALSE if the BEAST2 NS package is not installed

Author(s)
<NAME>

default_params_doc Documentation of general function arguments. This function does nothing. It is intended to inherit function argument documentation.

Description
Documentation of general function arguments. This function does nothing. It is intended to inherit function argument documentation.

Usage
default_params_doc(
  beast2_bin_path,
  beast2_folder,
  beast2_working_dir,
  beast2_options,
  beast2_optionses,
  clock_model,
  clock_models,
  epsilon,
  fasta_filename,
  inference_model,
  inference_models,
  marg_liks,
  mcbette_state,
  mcmc,
  os,
  rng_seed,
  site_model,
  site_models,
  tree_prior,
  tree_priors,
  verbose
)

Arguments
beast2_bin_path path to the BEAST2 binary file
beast2_folder the folder where BEAST2 is installed.
Note that this is not the folder where the BEAST2 executable is installed: the BEAST2 executable is in a subfolder. Use get_default_beast2_folder to get the default BEAST2 folder. Use get_default_beast2_bin_path to get the full path to the default BEAST2 executable. Use get_default_beast2_jar_path to get the full path to the default BEAST2 jar file.
beast2_working_dir folder in which BEAST2 will run and produce intermediate files. By default, this is a temporary folder.
beast2_options a beast2_options structure, as can be created by create_mcbette_beast2_options
beast2_optionses list of one or more beast2_options structures, as can be created by create_mcbette_beast2_options. The reduplicated plural is used to distinguish it from beast2_options.
clock_model a clock model, as can be created by create_clock_model
clock_models a list of one or more clock models, as can be created by create_clock_models
epsilon measure of relative accuracy. Smaller values result in longer, more precise estimations.
fasta_filename name of the FASTA file
inference_model an inference model, as can be created by create_inference_model
inference_models a list of one or more inference models, as can be created by create_inference_model
marg_liks a table of (estimated) marginal likelihoods, as, for example, created by est_marg_liks.
This data.frame has the following columns:
• site_model_name: name of the site model, must be an element of get_site_model_names
• clock_model_name: name of the clock model, must be an element of get_clock_model_names
• tree_prior_name: name of the tree prior, must be an element of get_tree_prior_names
• marg_log_lik: estimated marginal (natural) log likelihood
• marg_log_lik_sd: estimated error of marg_log_lik
• weight: relative model weight, a value from 1.0 (all evidence is in favor of this model combination) to 0.0 (no evidence in favor of this model combination)
• ess: effective sample size of the marginal likelihood estimation

Use get_test_marg_liks to get a test marg_liks. Use is_marg_liks to determine if a marg_liks is valid. Use check_marg_liks to check that a marg_liks is valid.

mcbette_state the mcbette state, which is a list with the following elements:
• beast2_installed: TRUE if BEAST2 is installed, FALSE otherwise
• ns_installed: NA if BEAST2 is not installed, TRUE if the BEAST2 NS package is installed, FALSE if the BEAST2 NS package is not installed
mcmc an MCMC for the Nested Sampling run, as can be created by create_mcmc_nested_sampling
os name of the operating system, must be unix (Linux, Mac) or win (Windows)
rng_seed a random number generator seed used for the BEAST2 inference
site_model a site model, as can be created by create_site_model
site_models a list of one or more site models, as can be created by create_site_models
tree_prior a tree prior, as can be created by create_tree_prior
tree_priors a list of one or more tree priors, as can be created by create_tree_priors
verbose if TRUE show debug output

Note
This is an internal function, so it should be marked with @noRd. This is not done, as that would prevent all functions from finding the documentation of these parameters.

Author(s)
<NAME>

est_marg_lik Estimate the marginal likelihood for an inference model.

Description
Estimate the marginal likelihood for an inference model.
Usage
est_marg_lik(
  fasta_filename,
  inference_model = beautier::create_ns_inference_model(),
  beast2_options = beastier::create_mcbette_beast2_options(),
  os = rappdirs::app_dir()$os
)

Arguments
fasta_filename name of the FASTA file
inference_model an inference model, as can be created by create_inference_model
beast2_options a beast2_options structure, as can be created by create_mcbette_beast2_options
os name of the operating system, must be unix (Linux, Mac) or win (Windows)

Value
a list showing the estimated marginal likelihood (and its estimated error); its items are:
• marg_log_lik: estimated marginal (natural) log likelihood
• marg_log_lik_sd: estimated error of marg_log_lik
• ess: the effective sample size

Author(s)
<NAME>

See Also
• can_run_mcbette: see if 'mcbette' can run
• est_marg_liks: estimate multiple marginal likelihoods

Examples
if (can_run_mcbette()) {
  # An example FASTA file
  fasta_filename <- system.file("extdata", "simple.fas", package = "mcbette")

  # A testing inference model with inaccurate (thus fast) marginal
  # likelihood estimation
  inference_model <- beautier::create_ns_inference_model()

  # Shorten the run, by doing a short (dirty, unreliable) MCMC
  inference_model$mcmc <- beautier::create_test_ns_mcmc()

  # Setup the options for BEAST2 to be able to call BEAST2 packages
  beast2_options <- beastier::create_mcbette_beast2_options()

  # Estimate the marginal likelihood
  est_marg_lik(
    fasta_filename = fasta_filename,
    inference_model = inference_model,
    beast2_options = beast2_options
  )

  beastier::remove_beaustier_folders()
}

est_marg_liks Estimate the marginal likelihoods for one or more inference models

Description
Estimate the marginal likelihoods (aka evidence) for one or more inference models, based on a single alignment. The marginal likelihoods are also compared, resulting in a relative weight for each model, where a relative weight close to 1.0 means that that model is much more likely than the others.
Usage
est_marg_liks(
  fasta_filename,
  inference_models = list(beautier::create_inference_model(mcmc = beautier::create_ns_mcmc())),
  beast2_optionses = rep(list(beastier::create_mcbette_beast2_options()), times = length(inference_models)),
  verbose = FALSE,
  os = rappdirs::app_dir()$os
)

Arguments
fasta_filename name of the FASTA file
inference_models a list of one or more inference models, as can be created by create_inference_model
beast2_optionses list of one or more beast2_options structures, as can be created by create_mcbette_beast2_options. The reduplicated plural is used to distinguish it from beast2_options.
verbose if TRUE show debug output
os name of the operating system, must be unix (Linux, Mac) or win (Windows)

Details
In the process, multiple (temporary) files are created (where [x] denotes the index in a list):
• beast2_optionses[x]$input_filename: path to the BEAST2 XML input file
• beast2_optionses[x]$output_state_filename: path to the BEAST2 XML state file
• inference_models[x]$mcmc$tracelog$filename: path to the BEAST2 trace file with parameter estimates
• inference_models[x]$mcmc$treelog$filename: path to the BEAST2 trees file with the posterior trees
• inference_models[x]$mcmc$screenlog$filename: path to the BEAST2 screen output file

These files can be deleted manually by bbt_delete_temp_files; otherwise they will be deleted automatically by the operating system.

Value
a data.frame showing the estimated marginal likelihoods (and their estimated errors) per combination of models.
Columns are:
• site_model_name: name of the site model
• clock_model_name: name of the clock model
• tree_prior_name: name of the tree prior
• marg_log_lik: estimated marginal (natural) log likelihood
• marg_log_lik_sd: estimated error of marg_log_lik
• weight: relative model weight, a value from 1.0 (all evidence is in favor of this model combination) to 0.0 (no evidence in favor of this model combination)
• ess: effective sample size of the marginal likelihood estimation

Author(s)
<NAME>

See Also
• can_run_mcbette: see if 'mcbette' can run
• est_marg_lik: estimate the marginal likelihood of a single inference model

Examples
if (can_run_mcbette()) {
  # Use an example FASTA file
  fasta_filename <- system.file("extdata", "simple.fas", package = "mcbette")

  # Create two inference models
  inference_model_1 <- beautier::create_ns_inference_model(
    site_model = beautier::create_jc69_site_model()
  )
  inference_model_2 <- beautier::create_ns_inference_model(
    site_model = beautier::create_hky_site_model()
  )

  # Shorten the run, by doing a short (dirty, unreliable) MCMC
  inference_model_1$mcmc <- beautier::create_test_ns_mcmc()
  inference_model_2$mcmc <- beautier::create_test_ns_mcmc()

  # Combine the inference models
  inference_models <- list(inference_model_1, inference_model_2)

  # Create the BEAST2 options, that will write the output
  # to different (temporary) filenames
  beast2_options_1 <- beastier::create_mcbette_beast2_options()
  beast2_options_2 <- beastier::create_mcbette_beast2_options()

  # Combine the two BEAST2 options sets,
  # use reduplicated plural
  beast2_optionses <- list(beast2_options_1, beast2_options_2)

  # Compare the models
  marg_liks <- est_marg_liks(
    fasta_filename,
    inference_models = inference_models,
    beast2_optionses = beast2_optionses
  )

  # Interpret the results
  interpret_marg_lik_estimates(marg_liks)

  beastier::remove_beaustier_folders()
  beastier::check_empty_beaustier_folders()
}

get_mcbette_state Get the current state of mcbette

Description
Get the current state
of mcbette

Usage
get_mcbette_state(beast2_folder = beastier::get_default_beast2_folder())

Arguments
beast2_folder the folder where BEAST2 is installed. Note that this is not the folder where the BEAST2 executable is installed: the BEAST2 executable is in a subfolder. Use get_default_beast2_folder to get the default BEAST2 folder. Use get_default_beast2_bin_path to get the full path to the default BEAST2 executable. Use get_default_beast2_jar_path to get the full path to the default BEAST2 jar file.

Value
a list with the following elements:
• beast2_installed: TRUE if BEAST2 is installed, FALSE otherwise
• ns_installed: TRUE if the BEAST2 NS package is installed, FALSE if BEAST2 or the BEAST2 NS package is not installed

Examples
get_mcbette_state()
beastier::remove_beaustier_folders()
beastier::check_empty_beaustier_folders()

get_test_marg_liks Get testing marg_liks

Description
Get testing marg_liks

Usage
get_test_marg_liks()

Examples
get_test_marg_liks()
beastier::remove_beaustier_folders()
beastier::check_empty_beaustier_folders()

interpret_bayes_factor Interpret a Bayes factor

Description
Interpret a Bayes factor, using the interpretation from [1].

Usage
interpret_bayes_factor(bayes_factor)

Arguments
bayes_factor Bayes factor to be interpreted

Details
• [1] <NAME> (1961). The Theory of Probability (3rd ed.). Oxford. p. 432

Value
a string with the interpretation in English

Author(s)
<NAME>

Examples
interpret_bayes_factor(0.5)
beastier::remove_beaustier_folders()
beastier::check_empty_beaustier_folders()

interpret_marg_lik_estimates Interpret the marginal likelihood estimates

Description
Interpret the marginal likelihood estimates as created by est_marg_liks.

Usage
interpret_marg_lik_estimates(marg_liks)

Arguments
marg_liks a table of (estimated) marginal likelihoods, as, for example, created by est_marg_liks.
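Jeffreys' scale, cited as [1] above, maps a Bayes factor to a verbal strength of evidence. The thresholds below follow the commonly cited form of that table; this is a hedged Python sketch, and the exact label strings are illustrative rather than mcbette's actual return values.

```python
def interpret_bayes_factor(bayes_factor):
    """Verbal interpretation of a Bayes factor K, after Jeffreys (1961).

    Values below 1 favour the other model; values above 1 favour the first.
    """
    if bayes_factor < 1.0:
        return "in favor of the other model"
    if bayes_factor < 10 ** 0.5:      # 1 .. ~3.16
        return "barely worth mentioning"
    if bayes_factor < 10.0:
        return "substantial"
    if bayes_factor < 10 ** 1.5:      # 10 .. ~31.6
        return "strong"
    if bayes_factor < 100.0:
        return "very strong"
    return "decisive"

interpret_bayes_factor(0.5)  # evidence for the other model
```

Because the thresholds are successive half-powers of ten, the interpretation is symmetric: a Bayes factor of 1/K for model A over model B is a factor of K for B over A.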
This data.frame has the following columns:
• site_model_name: name of the site model, must be an element of get_site_model_names
• clock_model_name: name of the clock model, must be an element of get_clock_model_names
• tree_prior_name: name of the tree prior, must be an element of get_tree_prior_names
• marg_log_lik: estimated marginal (natural) log likelihood
• marg_log_lik_sd: estimated error of marg_log_lik
• weight: relative model weight, a value from 1.0 (all evidence is in favor of this model combination) to 0.0 (no evidence in favor of this model combination)
• ess: effective sample size of the marginal likelihood estimation

Use get_test_marg_liks to get a test marg_liks. Use is_marg_liks to determine if a marg_liks is valid. Use check_marg_liks to check that a marg_liks is valid.

Author(s)
<NAME>

is_marg_liks Determine if the marg_liks is valid

Description
Determine if the marg_liks is valid

Usage
is_marg_liks(marg_liks, verbose = FALSE)

Arguments
marg_liks a table of (estimated) marginal likelihoods, as, for example, created by est_marg_liks. This data.frame has the following columns:
• site_model_name: name of the site model, must be an element of get_site_model_names
• clock_model_name: name of the clock model, must be an element of get_clock_model_names
• tree_prior_name: name of the tree prior, must be an element of get_tree_prior_names
• marg_log_lik: estimated marginal (natural) log likelihood
• marg_log_lik_sd: estimated error of marg_log_lik
• weight: relative model weight, a value from 1.0 (all evidence is in favor of this model combination) to 0.0 (no evidence in favor of this model combination)
• ess: effective sample size of the marginal likelihood estimation

Use get_test_marg_liks to get a test marg_liks. Use is_marg_liks to determine if a marg_liks is valid. Use check_marg_liks to check that a marg_liks is valid.
verbose if TRUE show debug output

Value
TRUE if the argument is a valid marg_liks, FALSE otherwise

mcbette_report Create a mcbette report, to be used when reporting bugs

Description
Create a mcbette report, to be used when reporting bugs

Usage
mcbette_report(beast2_folder = beastier::get_default_beast2_folder())

Arguments
beast2_folder the folder where BEAST2 is installed. Note that this is not the folder where the BEAST2 executable is installed: the BEAST2 executable is in a subfolder. Use get_default_beast2_folder to get the default BEAST2 folder. Use get_default_beast2_bin_path to get the full path to the default BEAST2 executable. Use get_default_beast2_jar_path to get the full path to the default BEAST2 jar file.

Value
nothing. It is intended that the output (not the return value) is copy-pasted from the screen.

Author(s)
<NAME>

Examples
if (beautier::is_on_ci()) {
  mcbette_report()
}

mcbette_self_test Performs a minimal mcbette run

Description
Performs a minimal mcbette run

Usage
mcbette_self_test(beast2_folder = beastier::get_default_beast2_folder())

Arguments
beast2_folder the folder where BEAST2 is installed. Note that this is not the folder where the BEAST2 executable is installed: the BEAST2 executable is in a subfolder. Use get_default_beast2_folder to get the default BEAST2 folder. Use get_default_beast2_bin_path to get the full path to the default BEAST2 executable. Use get_default_beast2_jar_path to get the full path to the default BEAST2 jar file.

plot_marg_liks Plot the marg_liks

Description
Plot the marg_liks

Usage
plot_marg_liks(marg_liks)

Arguments
marg_liks a table of (estimated) marginal likelihoods, as, for example, created by est_marg_liks.
This data.frame has the following columns:
• site_model_name: name of the site model, must be an element of get_site_model_names
• clock_model_name: name of the clock model, must be an element of get_clock_model_names
• tree_prior_name: name of the tree prior, must be an element of get_tree_prior_names
• marg_log_lik: estimated marginal (natural) log likelihood
• marg_log_lik_sd: estimated error of marg_log_lik
• weight: relative model weight, a value from 1.0 (all evidence is in favor of this model combination) to 0.0 (no evidence in favor of this model combination)
• ess: effective sample size of the marginal likelihood estimation

Use get_test_marg_liks to get a test marg_liks. Use is_marg_liks to determine if a marg_liks is valid. Use check_marg_liks to check that a marg_liks is valid.

Value
a ggplot

Examples
plot_marg_liks(get_test_marg_liks())
beastier::remove_beaustier_folders()
beastier::check_empty_beaustier_folders()

set_mcbette_state Set the mcbette state.

Description
Set the mcbette state to having BEAST2 installed with or without installing the BEAST2 NS package.

Usage
set_mcbette_state(
  mcbette_state,
  beast2_folder = beastier::get_default_beast2_folder(),
  verbose = FALSE
)

Arguments
mcbette_state the mcbette state, which is a list with the following elements:
• beast2_installed: TRUE if BEAST2 is installed, FALSE otherwise
• ns_installed: NA if BEAST2 is not installed, TRUE if the BEAST2 NS package is installed, FALSE if the BEAST2 NS package is not installed
beast2_folder the folder where BEAST2 is installed. Note that this is not the folder where the BEAST2 executable is installed: the BEAST2 executable is in a subfolder. Use get_default_beast2_folder to get the default BEAST2 folder. Use get_default_beast2_bin_path to get the full path to the default BEAST2 executable. Use get_default_beast2_jar_path to get the full path to the default BEAST2 jar file.
verbose if TRUE show debug output

Note
In newer versions of BEAST2, BEAST2 comes pre-installed with the BEAST2 NS package. For such a version, one cannot install BEAST2 without NS. A warning will be issued if one intends to install only BEAST2 (i.e. without the BEAST2 NS package) and gets the BEAST2 NS package installed as a side effect as well.

Also, installing or uninstalling a BEAST2 package from a BEAST2 installation will affect all installations.

See Also
• Use get_mcbette_state to get the current mcbette state
• Use check_mcbette_state to check the current mcbette state
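Throughout this manual, the Bayes factor between two models follows directly from the marg_log_lik column: it is the exponential of the difference of the two natural-log marginal likelihoods. A minimal Python sketch of that arithmetic (the helper name is illustrative, not part of 'mcbette'):

```python
import math

def bayes_factor_from_log(marg_log_lik_1, marg_log_lik_2):
    """Bayes factor of model 1 over model 2, from natural-log marginal likelihoods.

    Working in log space first means the huge raw evidences (e.g. exp(-5000))
    never need to be represented directly.
    """
    return math.exp(marg_log_lik_1 - marg_log_lik_2)

bayes_factor_from_log(-5000.0, -5002.3)  # exp(2.3), roughly 10: 'strong' on Jeffreys' scale
```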
asphalt-influxdb
readthedoc
Python
asphalt-influxdb 2.1.0.post3 documentation [asphalt-influxdb](index.html#document-index) --- This Asphalt framework component provides connectivity to [InfluxDB](https://www.influxdata.com/) database servers. Table of contents[¶](#table-of-contents) === Configuration[¶](#configuration) --- The typical InfluxDB configuration using a single database at `localhost` on the default port would look like this: ``` components: influxdb: db: mydb ``` The above configuration creates an [`asphalt.influxdb.client.InfluxDBClient`](index.html#asphalt.influxdb.client.InfluxDBClient) instance in the context, available as `ctx.influxdb` (resource name: `default`). If you wanted to connect to `influx.example.org` on port 8886, you would do: ``` components: influxdb: base_urls: http://influx.example.org:8886 db: mydb ``` To connect to an InfluxEnterprise cluster, list all the nodes under `base_urls`: ``` components: influxdb: base_urls: - http://influx1.example.org:8086 - http://influx2.example.org:8086 - http://influx3.example.org:8086 db: mydb ``` To connect to two unrelated InfluxDB servers, you could use a configuration like: ``` components: influxdb: clients: influx1: base_urls: http://influx.example.org:8886 db: mydb influx2: context_attr: influxalter base_urls: https://influxalter.example.org/influxdb db: anotherdb username: testuser password: testpass ``` This configuration creates two [`asphalt.influxdb.client.InfluxDBClient`](index.html#asphalt.influxdb.client.InfluxDBClient) resources, `influx1` and `influx2` (`ctx.influx1` and `ctx.influxalter`) respectively. Note See the documentation of the [`asphalt.influxdb.client.InfluxDBClient`](index.html#asphalt.influxdb.client.InfluxDBClient) class for a comprehensive listing of all connection options. 
Using the InfluxDB client[¶](#using-the-influxdb-client)
---

### Getting the server version[¶](#getting-the-server-version)

You can find out which version of InfluxDB you're running in the following manner:

```
async def handler(ctx):
    version = await ctx.influxdb.ping()
    print('Running InfluxDB version ' + version)
```

### Writing data points[¶](#writing-data-points)

Each data point being written into an InfluxDB database contains the following information:

* name of the measurement (corresponds to a table in a relational database)
* one or more tags (corresponds to indexed, non-nullable string columns in a RDBMS)
* zero or more fields (corresponds to nullable columns in a RDBMS)
* a timestamp (supplied by the server if omitted)

Data points can be written one at a time ([`write()`](index.html#asphalt.influxdb.client.InfluxDBClient.write)) or many at once ([`write_many()`](index.html#asphalt.influxdb.client.InfluxDBClient.write_many)). It should be noted that the former is merely a wrapper for the latter.

To write a single data point:

```
async def handler(ctx):
    await ctx.influxdb.write('cpu_load_short', dict(host='server02'), dict(value=0.67))
```

To write multiple data points, you need to use the [`DataPoint`](index.html#asphalt.influxdb.client.DataPoint) class:

```
async def handler(ctx):
    await ctx.influxdb.write_many([
        DataPoint('cpu_load_short', dict(host='server02'), dict(value=0.67)),
        DataPoint('cpu_load_short', dict(host='server01'), dict(value=0.21))
    ])
```

The data points don't have to be within the same measurement.

### Querying the database[¶](#querying-the-database)

This library offers the ability to query data via both raw InfluxQL queries and a programmatic query builder. To learn how the query builder works, see the next section.
Sending raw queries is done using [`raw_query()`](index.html#asphalt.influxdb.client.InfluxDBClient.raw_query):

```
async def handler(ctx):
    series = await ctx.influxdb.raw_query('SHOW DATABASES')
    print([row[0] for row in series])
```

If you include more than one measurement in the `FROM` clause of a `SELECT` query, you will get a list of [`Series`](index.html#asphalt.influxdb.query.Series) as a result:

```
async def handler(ctx):
    cpu_load, temperature = await ctx.influxdb.raw_query('SELECT * FROM cpu_load,temperature')
    print([row[0] for row in cpu_load])
    print([row[0] for row in temperature])
```

It is possible to send multiple queries by delimiting them with a semicolon (`;`). If you do that, the call will return a list of results for each query, each of which may be a [`Series`](index.html#asphalt.influxdb.query.Series) or a list thereof:

```
async def handler(ctx):
    db_series, m_series = await ctx.influxdb.raw_query('SHOW DATABASES; SHOW MEASUREMENTS')
    print('Databases:')
    print([row[0] for row in db_series])
    print('Measurements:')
    print([row[0] for row in m_series])
```

Warning

Due to [this bug](https://github.com/aio-libs/yarl/issues/34), multiple queries with the same call do not currently work.

Note

If you want to send a query like `SELECT ... INTO ...`, you must call [`raw_query()`](index.html#asphalt.influxdb.client.InfluxDBClient.raw_query) with `http_verb='POST'`. The proper HTTP verb should be automatically detected from the query string for all other queries.

### Using the query builder[¶](#using-the-query-builder)

The query builder allows one to dynamically build queries without having to do error-prone manual string concatenation. The query builder is considered immutable, so every one of its methods returns a new builder with the modifications made to it, just like with [SQLAlchemy](http://sqlalchemy.org/) ORM queries.
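The immutability just described (each method returning a new builder rather than mutating the receiver) can be sketched in miniature. This is an illustrative toy, not asphalt-influxdb's actual `SelectQuery` implementation; its quoting rules are simplified to plain identifiers and keyword-argument conditions are omitted:

```python
class Select:
    """A tiny immutable InfluxQL SELECT builder (illustrative only)."""

    def __init__(self, fields, measurements, conditions=(), groups=()):
        self.fields = tuple(fields)
        self.measurements = tuple(measurements)
        self.conditions = tuple(conditions)
        self.groups = tuple(groups)

    def where(self, *exprs):
        # Calling with no arguments clears the WHERE clause, mirroring the API
        combined = self.conditions + exprs if exprs else ()
        return Select(self.fields, self.measurements, combined, self.groups)

    def group_by(self, *tags):
        return Select(self.fields, self.measurements, self.conditions,
                      self.groups + tags)

    def __str__(self):
        sql = 'SELECT {} FROM {}'.format(
            ','.join(self.fields),
            ','.join(f'"{m}"' for m in self.measurements))
        if self.conditions:
            sql += ' WHERE ' + ' AND '.join(self.conditions)
        if self.groups:
            sql += ' GROUP BY ' + ','.join(f'"{g}"' for g in self.groups)
        return sql

q = Select(['field1', 'SUM(field2)'], ['m1']).group_by('tag1')
str(q)  # 'SELECT field1,SUM(field2) FROM "m1" GROUP BY "tag1"'
```

Because every method returns a fresh `Select`, partially built queries can be shared and branched safely, which is the same design rationale behind SQLAlchemy's query objects.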
For example, to select `field1` and `field2` from measurements `m1` and `m2`:

```
async def handler(ctx):
    query = ctx.influxdb.query(['field1', 'field2'], ['m1', 'm2'])
    return await query.execute()
```

This will produce a query like `SELECT "field1","field2" FROM "m1","m2"`.

More complex expressions can also be given, but then field and tag names are not automatically quoted:

```
async def handler(ctx):
    query = ctx.influxdb.query('field1 + field2', 'm1')
    return await query.execute()
```

The query will look like `SELECT field1 + field2 FROM "m1"`.

Filters can be added by using [`where()`](index.html#asphalt.influxdb.query.SelectQuery.where) with positional and/or keyword arguments:

```
async def handler(ctx):
    query = ctx.influxdb.query(['field1', 'field2'], 'm1').\
        where('field1 > 3.5', 'field2 < 6.2', location='Helsinki')
    return await query.execute()
```

This will produce a query like `SELECT field1,field2 FROM "m1" WHERE field1 > 3.5 AND field2 < 6.2 AND location='Helsinki'`. To use `OR`, you have to manually include it in one of the `WHERE` expressions. Further calls to `.where()` will add conditions to the `WHERE` clause of the query. A call to `.where()` with no arguments will clear the `WHERE` clause.

Grouping by tags works largely the same way:

```
async def handler(ctx):
    query = ctx.influxdb.query(['field1', 'SUM(field2)'], 'm1').group_by('tag1')
    return await query.execute()
```

The resulting query: `SELECT field1,SUM(field2) FROM "m1" GROUP BY "tag1"`

Version history[¶](#version-history)
---

This library adheres to [Semantic Versioning](http://semver.org/).
**2.1.0** (2017-09-20) * Exposed the `Series.values` attribute to enable faster processing of query results **2.0.1** (2017-06-04) * Added compatibility with Asphalt 4.0 * Fixed `DeprecationWarning: ClientSession.close() is not coroutine` warnings * Added Docker configuration for easier local testing **2.0.0** (2017-04-11) * **BACKWARD INCOMPATIBLE** Migrated to Asphalt 3.0 * **BACKWARD INCOMPATIBLE** Migrated to aiohttp 2.0 **1.1.1** (2017-02-09) * Fixed handling of long responses (on InfluxDB 1.2+) **1.1.0** (2016-12-15) * Added the `KeyedTuple._asdict()` method * Fixed wrong quoting of string values (should use single quotes, not double quotes) **1.0.0** (2016-12-12) * Initial release * [API reference](py-modindex.html)
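As a closing illustration of the "Writing data points" section above: each data point is ultimately serialised into InfluxDB's text line protocol (measurement, comma-separated tags, fields, optional timestamp) before being POSTed to the server's `/write` endpoint. The sketch below shows that serialisation for the simple case of string tags and numeric fields; the helper name `to_line` is not part of asphalt-influxdb's API.

```python
def to_line(measurement, tags, fields, timestamp=None):
    """Render one data point in InfluxDB line protocol (simplified)."""
    # Tags and fields are sorted for a deterministic output
    tag_part = ','.join(f'{k}={v}' for k, v in sorted(tags.items()))
    field_part = ','.join(f'{k}={v}' for k, v in sorted(fields.items()))
    line = f'{measurement},{tag_part} {field_part}'
    if timestamp is not None:
        line += f' {timestamp}'
    return line

to_line('cpu_load_short', dict(host='server02'), dict(value=0.67))
# 'cpu_load_short,host=server02 value=0.67'
```

The real protocol additionally escapes spaces and commas in names and distinguishes integer, float, string and boolean field syntax, which this sketch glosses over.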
phpmyadmin-japanese
readthedoc
PHP
phpMyAdmin 6.0.0-dev documentation

### Navigation

* [phpMyAdmin 6.0.0-dev documentation](index.html#document-index) »

Welcome to phpMyAdmin's documentation[¶](#welcome-to-phpmyadmin-s-documentation)
===

Contents:

Introduction[¶](#introduction)
---

phpMyAdmin is a free software tool written in PHP that is intended to handle the administration of a MySQL or MariaDB database server. You can use phpMyAdmin to perform most administration tasks, including creating databases, running queries, and adding user accounts.

### Supported features[¶](#supported-features)

Currently phpMyAdmin can:

* create, browse, and drop databases, tables, views, columns, and indexes
* display multiple result sets through stored procedures or queries
* create, copy, drop, rename, and alter databases, tables, columns, and indexes
* maintain servers, databases, and tables, with proposals on server configuration
* execute, edit, and bookmark any [SQL](index.html#term-sql) statement, even batch queries
* load text files into tables
* create [[1]](#f1) and read dumps of tables
* export [[1]](#f1) data to various formats: [CSV](index.html#term-csv), [XML](index.html#term-xml), [PDF](index.html#term-pdf), [ISO](index.html#term-iso)/[IEC](index.html#term-iec) 26300 - [OpenDocument](index.html#term-opendocument) Text and Spreadsheet, Microsoft Word 2000, and LATEX formats
* import data and [MySQL](index.html#term-47) structures from [OpenDocument](index.html#term-opendocument) spreadsheets, as well as [XML](index.html#term-xml), [CSV](index.html#term-csv), and [SQL](index.html#term-sql) files
* administer multiple servers
* add, edit, and remove MySQL user accounts and privileges
* check referential integrity in MyISAM tables
* create [PDF](index.html#term-pdf) graphics of your database layout
* search globally in a database or a subset of it
* transform stored data into any format using a set of predefined functions, like displaying BLOB data as an image or a download link
* track changes on databases, tables, and views
* support InnoDB tables and foreign keys
* support mysqli, the improved MySQL extension; see [1.17 Which database versions does phpMyAdmin support?](index.html#faq1-17)
* create, edit, call, export, and drop stored procedures and functions
* create, edit, export, and drop events and triggers
* communicate in [80 different languages](https://www.phpmyadmin.net/translations/)

### Shortcut keys[¶](#shortcut-keys)

Currently phpMyAdmin supports the following shortcuts:

* k - Toggle console
* h - Go to the home page
* s - Open settings
* d + s - Go to database structure (provided you are on database-related pages)
* d + f - Search database (provided you are on database-related pages)
* t + s - Go to table structure (provided you are on table-related pages)
* t + f - Search table (provided you are on table-related pages)
* backspace - Takes you to the previous page

### A word about users[¶](#a-word-about-users)
Many people have difficulty understanding the concept of user management with regards to phpMyAdmin. When a user logs in to phpMyAdmin, that username and password are passed directly to MySQL. phpMyAdmin does no account management of its own (other than allowing one to manipulate MySQL user account information); all users must be valid MySQL users.

Footnotes

| [1] | *([1](#footnote-reference-1), [2](#footnote-reference-2))* phpMyAdmin can compress ([ZIP](index.html#term-zip), [GZip](index.html#term-gzip), [RFC 1952](index.html#term-rfc-1952) formats) dumps and [CSV](index.html#term-csv) exports if you use PHP with [Zlib](index.html#term-zlib) support (`--with-zlib`). Proper support may require changes in `php.ini`. |

Requirements[¶](#requirements)
---

### Web server[¶](#web-server)

Since phpMyAdmin's interface is based entirely in your browser, you will need a web server (such as Apache, nginx, or [IIS](index.html#term-iis)) to install phpMyAdmin's files into.

### PHP[¶](#php)

* You need PHP 8.1.2 or newer, with `session` support, the Standard PHP Library (SPL) extension, hash, ctype, and JSON support.
* The `mbstring` extension (see [mbstring](index.html#term-mbstring)) is strongly recommended for performance reasons.
* To support uploading of ZIP files, you need the PHP `zip` extension.
* To display inline thumbnails of JPEG images ("image/jpeg: inline") with their original aspect ratio, you need GD2 support in PHP.
* When using cookie authentication (the default), the [openssl](https://www.php.net/openssl) extension is strongly suggested.
* To support upload progress bars, see [2.9 Seeing an upload progress bar](index.html#faq2-9).
* To support XML and Open Document Spreadsheet importing, you need the [libxml](https://www.php.net/libxml) extension.
* To support reCAPTCHA on the login page, you need the [openssl](https://www.php.net/openssl) extension.
* To support displaying phpMyAdmin's latest version, you need to enable `allow_url_open` in your `php.ini` or to use the [curl](https://www.php.net/curl) extension.

See also [1.17 Which MySQL versions does phpMyAdmin support?](index.html#faq1-31), [Authentication modes](index.html#authentication-modes)

### Database[¶](#database)

phpMyAdmin supports MySQL-compatible databases.

* MySQL 5.5 or newer
* MariaDB 5.0 or newer

See also [1.17 Which database versions does phpMyAdmin support?](index.html#faq1-17)

### Web browser[¶](#web-browser)

To access phpMyAdmin you need a web browser with cookies and JavaScript enabled. You need a browser which is supported by Bootstrap 4.5; see <https://getbootstrap.com/docs/4.5/getting-started/browsers-devices/>.
Changed in version 5.2.0: You need a browser which is supported by Bootstrap 5.0; see <https://getbootstrap.com/docs/5.0/getting-started/browsers-devices/>.

Installation[¶](#installation)
---

phpMyAdmin does not apply any special security methods to the MySQL database server. It is still the system administrator's job to grant permissions on the MySQL databases properly. phpMyAdmin's User accounts page can be used for this.

### Linux distributions[¶](#linux-distributions)

phpMyAdmin is included in most Linux distributions. It is recommended to use distribution packages when possible: they usually provide integration with your distribution and you will automatically get security updates from your distribution.

#### Debian and Ubuntu[¶](#debian-and-ubuntu)

Most Debian and Ubuntu versions include a phpMyAdmin package, but be aware that the configuration file is maintained in `/etc/phpmyadmin` and may differ in some ways from the official phpMyAdmin documentation. Specifically, it does:

* configuration of a web server (works with Apache and lighttpd)
* creation of the [phpMyAdmin configuration storage](#linked-tables) using dbconfig-common
* securing of the setup script; see [Setup script on Debian, Ubuntu and derivatives](#debian-setup)

More specific information about installing Debian or Ubuntu packages is available [in our wiki](https://github.com/phpmyadmin/phpmyadmin/wiki/DebianUbuntu).

See also: More information can be found in [README.Debian](https://salsa.debian.org/phpmyadmin-team/phpmyadmin/blob/debian/latest/debian/README.Debian) (it is installed as `/usr/share/doc/phpmyadmin/README.Debian` with the package).

#### OpenSUSE[¶](#opensuse)

OpenSUSE already comes with a phpMyAdmin package; just install the package from the [openSUSE Build Service](https://software.opensuse.org/package/phpMyAdmin).

#### Gentoo[¶](#gentoo)

Gentoo ships the phpMyAdmin package, both in a near-stock configuration as well as in a `webapp-config` configuration. Use `emerge dev-db/phpmyadmin` to install it.

#### Mandriva[¶](#mandriva)

Mandriva ships the phpMyAdmin package in their `contrib` branch and it can be installed via the usual Control Center.

#### Fedora[¶](#fedora)

Fedora ships the phpMyAdmin package, but be aware that the configuration file is maintained in `/etc/phpMyAdmin/` and may differ in some ways from the official phpMyAdmin documentation.

#### Red Hat Enterprise Linux[¶](#red-hat-enterprise-linux)

Red Hat Enterprise Linux itself and thus derivatives like CentOS don't ship phpMyAdmin, but the Fedora-driven repository [Extra Packages for Enterprise Linux
(EPEL)](https://docs.fedoraproject.org/en-US/epel/) is doing so, if it's [enabled](https://fedoraproject.org/wiki/EPEL/FAQ#howtouse). But be aware that the configuration file is maintained in `/etc/phpMyAdmin/` and may differ in some ways from the official phpMyAdmin documentation. ### Windows ぞのむンストヌル[¶](#installing-on-windows) phpMyAdmin を Windows 䞊で利甚できるようにするもっずも簡単な方法は、 [XAMPP](https://www.apachefriends.org/index.html) のような phpMyAdmin ずデヌタベヌス、りェブサヌバを䞀緒に含むサヌドパヌティ補品をむンストヌルするこずです。 それ以倖の同様の遞択肢は [Wikipedia](https://en.wikipedia.org/wiki/List_of_AMP_packages) を参照しおください。 ### Git からのむンストヌル[¶](#installing-from-git) Git からむンストヌルするためには、いく぀かのサポヌトアプリケヌションが必芁になりたす。 * [Git](https://git-scm.com/downloads) で゜ヌスをダりンロヌドしたす。たたは最新の゜ヌスを [GitHub](https://codeload.github.com/phpmyadmin/phpmyadmin/zip/master) から盎接ダりンロヌドするこずができたす * [Composer](https://getcomposer.org/download/) * [Node.js](https://nodejs.org/en/download/) (version 14 or higher) * [Yarn](https://classic.yarnpkg.com/en/docs/install) 珟圚の phpMyAdmin の゜ヌスを `https://github.com/phpmyadmin/phpmyadmin.git` からクロヌンするこずができたす。 ``` git clone https://github.com/phpmyadmin/phpmyadmin.git ``` 加えお、 [Composer](https://getcomposer.org) を䜿甚しお䟝存するものをむンストヌルする必芁がありたす。 ``` composer update ``` 開発目的でなければ、次のように実行するず開発者ツヌルのむンストヌルをスキップするこずができたす。 ``` composer update --no-dev ``` 最埌に、 [Yarn](https://classic.yarnpkg.com/en/docs/install) を䜿甚しお JavaScript の䟝存モゞュヌルをいく぀かむンストヌルする必芁がありたす。 ``` yarn install --production ``` ### Composer を䜿甚したむンストヌル[¶](#installing-using-composer) phpMyAdmin は [Composer tool](https://getcomposer.org/) を甚いおむンストヌルするこずもでき、 4.7.0 からはデフォルトの [Packagist](https://packagist.org/) リポゞトリに自動的に反映されたす。 泚釈 Composer のリポゞトリの内容は、リリヌスずは独立しお自動的に生成されるため、必ずしもダりンロヌド版ず 100% 同じずは限りたせん。しかし、機胜的な違いはないでしょう。 phpMyAdmin をむンストヌルするには、次のように実行するだけです。 ``` composer create-project phpmyadmin/phpmyadmin ``` 他にもリリヌス版を含む独自の Composer リポゞトリを䜿甚するこずもできたす。 <<https://www.phpmyadmin.net/packages.json>> から利甚するこずができたす。 ``` composer create-project phpmyadmin/phpmyadmin
--repository-url=https://www.phpmyadmin.net/packages.json --no-dev ``` ### Docker を䜿甚したむンストヌル[¶](#installing-using-docker) phpMyAdmin には簡単に配垃するこずができる [Docker official image](https://hub.docker.com/_/phpmyadmin) が付属しおおり、次のようにダりンロヌドするこずができたす。 ``` docker pull phpmyadmin ``` phpMyAdmin サヌバはポヌト80を埅ち受けしたす。デヌタベヌスサヌバぞのリンクを蚭定する方法は耇数あり、デヌタベヌスコンテナを phpMyAdmin の `db` にリンクするDockerのリンク機胜を䜿甚したり (`--link your_db_host:db` を指定)、環境倉数で指定したりしたす (この堎合、 phpMyAdmin コンテナがネットワヌク経由でデヌタベヌスコンテナにアクセスできるように Docker でネットワヌクを蚭定する必芁がありたす)。 #### Docker の環境倉数[¶](#docker-environment-variables) 以䞋の環境倉数を䜿甚しお phpMyAdmin のいく぀かの機胜を蚭定するこずができたす。 `PMA_ARBITRARY`[¶](#envvar-PMA_ARBITRARY) ログむンフォヌムでデヌタベヌスサヌバのホスト名を入力できるようにしたす。 参考 [`$cfg['AllowArbitraryServer']`](index.html#cfg_AllowArbitraryServer) `PMA_HOST`[¶](#envvar-PMA_HOST) 䜿甚するデヌタベヌスサヌバのホスト名ず IP アドレスです。 参考 [`$cfg['Servers'][$i]['host']`](index.html#cfg_Servers_host) `PMA_HOSTS`[¶](#envvar-PMA_HOSTS) カンマ区切りの䜿甚するデヌタベヌスサヌバのホスト名ず IP アドレスです。 泚釈 [`PMA_HOST`](#envvar-PMA_HOST) が空の堎合のみ䜿甚されたす。 `PMA_VERBOSE`[¶](#envvar-PMA_VERBOSE) このデヌタベヌスサヌバの詳现な名前です。 参考 [`$cfg['Servers'][$i]['verbose']`](index.html#cfg_Servers_verbose) `PMA_VERBOSES`[¶](#envvar-PMA_VERBOSES) カンマ区切りのデヌタベヌスサヌバの詳现な名前です。 泚釈 [`PMA_VERBOSE`](#envvar-PMA_VERBOSE) が空の堎合のみ䜿甚されたす。 `PMA_USER`[¶](#envvar-PMA_USER) [config 認蚌モヌド](#auth-config) で䜿甚するナヌザ名です。 `PMA_PASSWORD`[¶](#envvar-PMA_PASSWORD) [config 認蚌モヌド](#auth-config) で䜿甚するパスワヌドです。 `PMA_PORT`[¶](#envvar-PMA_PORT) 䜿甚するデヌタベヌスサヌバのポヌト番号です。 `PMA_PORTS`[¶](#envvar-PMA_PORTS) カンマ区切りで䜿甚するデヌタベヌスサヌバのポヌト番号です。 泚釈 [`PMA_PORT`](#envvar-PMA_PORT) が空の堎合のみ䜿甚されたす。 `PMA_SOCKET`[¶](#envvar-PMA_SOCKET) Socket file for the database connection. `PMA_SOCKETS`[¶](#envvar-PMA_SOCKETS) Comma-separated list of socket files for the database connections. 泚釈 Used only if [`PMA_SOCKET`](#envvar-PMA_SOCKET) is empty. 
`PMA_ABSOLUTE_URI`[¶](#envvar-PMA_ABSOLUTE_URI) phpMyAdmin を利甚できるようにするリバヌスプロキシの完党修食パス (`https://pma.example.net/`) です。 参考 [`$cfg['PmaAbsoluteUri']`](index.html#cfg_PmaAbsoluteUri) `PMA_QUERYHISTORYDB`[¶](#envvar-PMA_QUERYHISTORYDB) true に蚭定するず、 SQL 履歎が [`$cfg['Servers'][$i]['pmadb']`](index.html#cfg_Servers_pmadb) に保存できるようになりたす。 false の堎合、履歎はブラりザに保存され、ログアりト時に消去されたす。 参考 [`$cfg['Servers'][$i]['history']`](index.html#cfg_Servers_history) 参考 [`$cfg['QueryHistoryDB']`](index.html#cfg_QueryHistoryDB) `PMA_QUERYHISTORYMAX`[¶](#envvar-PMA_QUERYHISTORYMAX) 敎数に蚭定するず、履歎項目の数を制埡したす。 参考 [`$cfg['QueryHistoryMax']`](index.html#cfg_QueryHistoryMax) `PMA_CONTROLHOST`[¶](#envvar-PMA_CONTROLHOST) 蚭定するず、 "[phpMyAdmin 環境保管領域](#linked-tables)" デヌタベヌスを栌玍するために䜿甚される代替デヌタベヌスホストを指したす。 参考 [`$cfg['Servers'][$i]['controlhost']`](index.html#cfg_Servers_controlhost) `PMA_CONTROLUSER`[¶](#envvar-PMA_CONTROLUSER) phpMyAdmin が "[phpMyAdmin 環境保管領域](#linked-tables)" デヌタベヌスで䜿甚するナヌザ名を定矩したす。 参考 [`$cfg['Servers'][$i]['controluser']`](index.html#cfg_Servers_controluser) `PMA_CONTROLPASS`[¶](#envvar-PMA_CONTROLPASS) phpMyAdmin が "[phpMyAdmin 環境保管領域](#linked-tables)" デヌタベヌスで䜿甚するためのパスワヌドを定矩したす。 参考 [`$cfg['Servers'][$i]['controlpass']`](index.html#cfg_Servers_controlpass) `PMA_CONTROLPORT`[¶](#envvar-PMA_CONTROLPORT) 蚭定された堎合、制埡ホストに接続するための既定倀 (3306) を䞊曞きしたす。 参考 [`$cfg['Servers'][$i]['controlport']`](index.html#cfg_Servers_controlport) `PMA_PMADB`[¶](#envvar-PMA_PMADB) 蚭定された堎合、 "[phpMyAdmin 環境保管領域](#linked-tables)" デヌタベヌスで䜿甚するデヌタベヌス名を定矩したす。蚭定しない堎合、高床な機胜はデフォルトで有効になりたせん。 [れロ蚭定](#zeroconf) 機胜でログむンした堎合には、ナヌザが朜圚的に有効化するこずができたす。 泚釈 掚奚倀: phpmyadmin たたは pmadb 参考 [`$cfg['Servers'][$i]['pmadb']`](index.html#cfg_Servers_pmadb) `HIDE_PHP_VERSION`[¶](#envvar-HIDE_PHP_VERSION) 定矩するず、 PHP のバヌゞョンを非衚瀺にしたす (expose_php = Off)。任意の倀を蚭定しおください (HIDE_PHP_VERSION=true など)。 `UPLOAD_LIMIT`[¶](#envvar-UPLOAD_LIMIT) 蚭定されおいる堎合、このオプションは apache および php-fpm のデフォルト倀を䞊曞きしたす (これにより、 `upload_max_filesize` および `post_max_size` の倀が倉曎されたす)。 泚釈
`[0-9+](K,M,G)` ずいう曞匏で、デフォルト倀は `2048K` です `MEMORY_LIMIT`[¶](#envvar-MEMORY_LIMIT) 蚭定するず、phpMyAdmin のメモリ制限 [`$cfg['MemoryLimit']`](index.html#cfg_MemoryLimit) ず PHP の memory_limit を䞊曞きするようになりたす。 泚釈 曞匏は `[0-9+](K,M,G)` で、K はキロバむト、M はメガバむト、G はギガバむトを衚し、 1K = 1024 バむトです。デフォルト倀は 512M です。 `MAX_EXECUTION_TIME`[¶](#envvar-MAX_EXECUTION_TIME) 蚭定された堎合、このオプションは phpMyAdmin の [`$cfg['ExecTimeLimit']`](index.html#cfg_ExecTimeLimit) ず PHP の max_execution_time の最倧実行時間 (秒) を䞊曞きするこずになりたす。 泚釈 曞匏は `[0-9+]` です。デフォルト倀は `600` です。 `PMA_CONFIG_BASE64`[¶](#envvar-PMA_CONFIG_BASE64) 蚭定されおいる堎合、このオプションは倉数を base64 デコヌドした内容でデフォルトの config.inc.php を䞊曞きしたす。 `PMA_USER_CONFIG_BASE64`[¶](#envvar-PMA_USER_CONFIG_BASE64) 蚭定されおいる堎合、このオプションは倉数を base64 デコヌドした内容でデフォルトの config.user.inc.php を䞊曞きしたす。 `PMA_UPLOADDIR`[¶](#envvar-PMA_UPLOADDIR) 蚭定された堎合、このオプションは、むンポヌトするためにファむルを保存できるパスを蚭定したす ([`$cfg['UploadDir']`](index.html#cfg_UploadDir)) `PMA_SAVEDIR`[¶](#envvar-PMA_SAVEDIR) 蚭定された堎合、このオプションぱクスポヌトされたファむルを保存するためのパスを蚭定したす ([`$cfg['SaveDir']`](index.html#cfg_SaveDir)) `APACHE_PORT`[¶](#envvar-APACHE_PORT) このオプションを蚭定するず、暩限のないポヌトのような別のポヌトで Apache を実行したい堎合に、デフォルトの Apache のポヌトを 80 から倉曎するこずができたす。任意のポヌト倀 (䟋: APACHE_PORT=8090) を蚭定するこずができたす。 デフォルトでは [クッキヌ認蚌モヌド](#cookie) が䜿甚されたすが、 [`PMA_USER`](#envvar-PMA_USER) ず [`PMA_PASSWORD`](#envvar-PMA_PASSWORD) が蚭定された堎合 [config 認蚌モヌド](#auth-config) に切り替わりたす。 泚釈 ログむンに必芁な資栌情報は MySQL サヌバに保存されたす。 Docker むメヌゞの堎合、さたざたな方法で蚭定できたす (たずえば、 MySQL コンテナの起動時は `MYSQL_ROOT_PASSWORD`)。 [MariaDB コンテナ](https://hub.docker.com/_/mariadb) たたは [MySQL コンテナ](https://hub.docker.com/_/mysql) のドキュメントを確認しおください。 #### 蚭定のカスタマむズ[¶](#customizing-configuration) さらに、蚭定は `/etc/phpmyadmin/config.user.inc.php` で調敎できたす。このファむルが存圚する堎合は、䞊蚘の環境倉数から蚭定が生成された埌に読み蟌たれるため、任意の蚭定倉数を䞊曞きできたす。この蚭定は、 -v /some/local/directory/config.user.inc.php:/etc/phpmyadmin/config.user.inc.php 匕数を䜿甚しお docker を呌び出すずきにボリュヌムずしお远加できたす。 なお、提䟛された蚭定ファむルは [Docker の環境倉数](#docker-vars) の埌で適甚されるため、任意の倀を䞊曞きできたす。 䟋えば、 CSV ゚クスポヌトのデフォルトの動䜜を倉曎する堎合は、以䞋の蚭定ファむルを䜿甚しおください。 ``` <?php
$cfg['Export']['csv_columns'] = true; ``` [Docker の環境倉数](#docker-vars) に曞かれおいる環境倉数を䜿甚する代わりに、サヌバ蚭定を定矩するこずで利甚するこずもできたす。 ``` <?php /* Override Servers array */ $cfg['Servers'] = [ 1 => [ 'auth_type' => 'cookie', 'host' => 'mydb1', 'port' => 3306, 'verbose' => 'Verbose name 1', ], 2 => [ 'auth_type' => 'cookie', 'host' => 'mydb2', 'port' => 3306, 'verbose' => 'Verbose name 2', ], ]; ``` 参考 蚭定オプションの詳现な説明は [蚭定](index.html#config) を参照しおください。 #### Docker ボリュヌム[¶](#docker-volumes) 以䞋のボリュヌムを䜿甚しお、むメヌゞの動䜜をカスタマむズするこずができたす。 `/etc/phpmyadmin/config.user.inc.php` > 远加の蚭定に䜿甚するこずができたす。詳しくは前の節を参照しおください。 `/sessions/` > セッションが保存されるディレクトリです。䟋えば [サむンオン認蚌モヌド](#auth-signon) を䜿甚するずきに共有したくなるかもしれたせん。 `/www/themes/` > phpMyAdmin がテヌマを怜玢するディレクトリです。デフォルトでは、 phpMyAdmin に付属しおいるものだけが含たれおいたすが、 Docker ボリュヌムを利甚しお、远加の phpMyAdmin テヌマ ([カスタムテヌマ](index.html#themes) を参照) を入れるこずができたす。 #### Docker の䟋[¶](#docker-examples) phpMyAdmin を指定されたサヌバに接続させるには、以䞋を䜿甚しおください。 ``` docker run --name phpmyadmin -d -e PMA_HOST=dbhost -p 8080:80 phpmyadmin:latest ``` phpMyAdmin を他のサヌバにも接続させるには、以䞋を䜿甚しおください。 ``` docker run --name phpmyadmin -d -e PMA_HOSTS=dbhost1,dbhost2,dbhost3 -p 8080:80 phpmyadmin:latest ``` 任意のサヌバオプションを䜿甚する堎合です。 ``` docker run --name phpmyadmin -d --link mysql_db_server:db -p 8080:80 -e PMA_ARBITRARY=1 phpmyadmin:latest ``` Docker を甚いおデヌタベヌスコンテナをリンクするこずもできたす。 ``` docker run --name phpmyadmin -d --link mysql_db_server:db -p 8080:80 phpmyadmin:latest ``` 远加の蚭定で実行する方法です。 ``` docker run --name phpmyadmin -d --link mysql_db_server:db -p 8080:80 -v /some/local/directory/config.user.inc.php:/etc/phpmyadmin/config.user.inc.php phpmyadmin:latest ``` 远加のテヌマで実行したす。 ``` docker run --name phpmyadmin -d --link mysql_db_server:db -p 8080:80 -v /some/local/directory/custom/phpmyadmin/themeName/:/var/www/html/themes/themeName/ phpmyadmin:latest ``` #### docker-compose の䜿甚[¶](#using-docker-compose) たたは、 <<https://github.com/phpmyadmin/docker>> の docker-compose.yml で docker-compose を䜿甚するこずもできたす。これにより、 phpMyAdmin 
を任意のサヌバで実行できたす。ログむンペヌゞで MySQL/MariaDB サヌバを指定できたす。 ``` docker compose up -d ``` #### docker-compose を䜿甚した蚭定ファむルのカスタマむズ[¶](#customizing-configuration-file-using-docker-compose) 倖郚ファむルを䜿甚しお phpMyAdmin の蚭定をカスタマむズし、 volumes 蚭定項目を䜿甚しお枡すこずができたす。 ``` phpmyadmin: image: phpmyadmin:latest container_name: phpmyadmin environment: - PMA_ARBITRARY=1 restart: always ports: - 8080:80 volumes: - /sessions - ~/docker/phpmyadmin/config.user.inc.php:/etc/phpmyadmin/config.user.inc.php - /custom/phpmyadmin/theme/:/www/themes/theme/ ``` 参考 [蚭定のカスタマむズ](#docker-custom) #### サブディレクトリの haproxy の背埌で実行[¶](#running-behind-haproxy-in-a-subdirectory) サブディレクトリの Docker コンテナで実行されおいる phpMyAdmin を公開する堎合は、リク゚ストを䞭継するサヌバのリク゚ストパスを曞き換える必芁がありたす。 たずえば、 haproxy を䜿甚するず、次のように実行できたす。 ``` frontend http bind *:80 option forwardfor option http-server-close ### NETWORK restriction acl LOCALNET src 10.0.0.0/8 192.168.0.0/16 172.16.0.0/12 # /phpmyadmin acl phpmyadmin path_dir /phpmyadmin use_backend phpmyadmin if phpmyadmin LOCALNET backend phpmyadmin mode http reqirep ^(GET|POST|HEAD)\ /phpmyadmin/(.*) \1\ /\2 # phpMyAdmin container IP server localhost 172.30.21.21:80 ``` traefik を䜿甚する堎合、次のようなものが機胜するはずです。 ``` defaultEntryPoints = ["http"] [entryPoints] [entryPoints.http] address = ":80" [entryPoints.http.redirect] regex = "(http:\\/\\/[^\\/]+\\/([^\\?\\.]+)[^\\/])$" replacement = "$1/" [backends] [backends.myadmin] [backends.myadmin.servers.myadmin] url="http://internal.address.to.pma" [frontends] [frontends.myadmin] backend = "myadmin" passHostHeader = true [frontends.myadmin.routes.default] rule="PathPrefixStrip:/phpmyadmin/;AddPrefix:/" ``` 次に、 docker-compose の蚭定で [`PMA_ABSOLUTE_URI`](#envvar-PMA_ABSOLUTE_URI) を指定しおください。 ``` version: '2' services: phpmyadmin: restart: always image: phpmyadmin:latest container_name: phpmyadmin hostname: phpmyadmin domainname: example.com ports: - 8000:80 environment: - PMA_HOSTS=172.26.36.7,172.26.36.8,172.26.36.9,172.26.36.10 - 
PMA_VERBOSES=production-db1,production-db2,dev-db1,dev-db2 - PMA_USER=root - PMA_PASSWORD= - PMA_ABSOLUTE_URI=http://example.com/phpmyadmin/ ``` ### IBM クラりド[¶](#ibm-cloud) ナヌザの1人が、 phpMyAdmin を [IBM Cloud プラットフォヌム](https://github.com/KissConsult/phpmyadmin_tutorial#readme) にむンストヌルするための圹立぀ガむドを䜜成しおくれたした。 ### クむックむンストヌル[¶](#quick-install-1) 1. Choose an appropriate distribution kit from the phpmyadmin.net Downloads page. Some kits contain only the English messages, others contain all languages. We'll assume you chose a kit whose name looks like `phpMyAdmin-x.x.x-all-languages.tar.gz`. 2. 正しいアヌカむブをダりンロヌドしたこずを確認しおください。 [phpMyAdmin リリヌスの怜蚌](#verify) を参照しおください。 3. tar たたは zip の配垃ファむルを展開したす (unzip の堎合はサブディレクトリ付きで)。りェブサヌバのドキュメントルヌトで `tar -xzvf phpMyAdmin_x.x.x-all-languages.tar.gz` を実行したす。ドキュメントルヌトぞの盎接アクセス暩がない堎合は、ロヌカルマシンのディレクトリにファむルを解凍し、ステップ 4 たで枈んだらそのディレクトリを FTP などでりェブサヌバに転送しおください。 4. 必ずすべおのスクリプトの所有者を適切に蚭定しおください (PHP がセヌフモヌドで実行されおいる堎合、スクリプトによっお所有者が異なるず問題が発生したす)。解説に぀いおは [4.2 phpMyAdmin を悪意のあるアクセスから守るようにするお勧めの方法はありたすか](index.html#faq4-2) や [1.26 phpMyAdmin を IIS のドキュメントルヌトにむンストヌルしたずころなのですが、phpMyAdmin を実行しようずするず「No input file specified入力ファむルが指定されおいたせん」ずいう゚ラヌが出たす。](index.html#faq1-26) を参照しおください。 5. たず最初に、むンストヌルを蚭定する必芁がありたす。やり方は 2 ぀ありたす。1 ぀は、ナヌザが `config.inc.php` を手動で線集・コピヌする䌝統的な方法ですが、グラフィカルなむンストヌルを奜む人向けにりィザヌド圢匏のセットアップスクリプトも提䟛するようになりたした。珟圚でも `config.inc.php` を䜜成する方がすばやく始められる方法であり、䞀郚の高床な機胜にはこちらが必芁です。 #### 手䜜業でのファむルの䜜成[¶](#manually-creating-the-file) 手䜜業でファむルを䜜成するには、テキスト゚ディタを䜿甚しお `config.inc.php` ファむルを phpMyAdmin のメむン (トップレベル) ディレクトリ (`index.php` があるずころ) に䜜成するだけです (その際、 `config.sample.inc.php` をコピヌするこずで最小限の蚭定ファむルを䜜成するこずもできたす)。最初に phpMyAdmin はデフォルトの蚭定倀を読み蟌み、それから `config.inc.php` 内で芋぀かった倀でデフォルト倀を䞊曞きしたす。蚭定に問題がなければ、デフォルト倀は `config.inc.php` に含める必芁はありたせん。皌動させるにはいく぀かの蚭定項目が必芁ですが、単玔な蚭定では以䞋のようになりたす。 ``` <?php // The string is a hexadecimal representation of a 32-bytes long string of random bytes.
$cfg['blowfish_secret'] = sodium_hex2bin('f16ce59f45714194371b48fe362072dc3b019da7861558cd4ad29e4d6fb13851'); $i=0; $i++; $cfg['Servers'][$i]['auth_type'] = 'cookie'; // if you insist on "root" having no password: // $cfg['Servers'][$i]['AllowNoPassword'] = true; ``` あるいは、ログむンするたびにプロンプトを衚瀺させたくないのであれば以䞋のようにしたす。 ``` <?php $i=0; $i++; $cfg['Servers'][$i]['user'] = 'root'; $cfg['Servers'][$i]['password'] = 'changeme'; // use here your password $cfg['Servers'][$i]['auth_type'] = 'config'; ``` 譊告 蚭定にパスワヌドを保存するず、誰でもデヌタベヌスを操䜜できるようになるため安党ではありたせん。 䜿甚可胜な蚭定倀の詳现な解説に぀いおは、このドキュメントの [蚭定](index.html#config) を参照しおください。 #### セットアップスクリプトの䜿甚[¶](#using-the-setup-script) `config.inc.php` を手動で線集する代わりに、 phpMyAdmin のセットアップ機胜を䜿甚するこずができたす。セットアップを䜿甚しおファむルを生成し、サヌバにアップロヌドするためにダりンロヌドするこずができたす。 次にブラりザを開き、 phpMyAdmin をむンストヌルした堎所の最埌に `/setup` を付けおアクセスしたす。倉曎はサヌバに保存されたせん。 ダりンロヌド ボタンを䜿甚しお倉曎をコンピュヌタに保存しおから、サヌバにアップロヌドする必芁がありたす。 これで、ファむルを䜿甚する準備ができたした。セットアップスクリプトが提䟛しおいない䞀郚の高床なオプションを蚭定したい堎合は、奜きな゚ディタでファむルを確認したり線集したりするこずを遞択できたす。 1. `auth_type` に "config" を䜿甚した堎合は、phpMyAdmin をむンストヌルしたディレクトリを保護するこずをお勧めしたす。なぜなら、config 認蚌を䜿甚した堎合、むンストヌルされた phpMyAdmin にアクセスするのに、ナヌザがパスワヌドの入力を必芁ずしないからです。 [.htaccess](index.html#term-htaccess) ファむルで HTTP 認蚌を蚭定したり、 `auth_type` を cookie たたは http で䜿甚するなど、その他の認蚌方法を䜿うこずをお勧めしたす。詳现に぀いおは [ISP やマルチナヌザのむンストヌル](index.html#faqmultiuser) 、特に [4.4 HTTP 認蚌を䜿甚するず必ず「Access denied (アクセスは拒吊されたした)」になりたす。](index.html#faq4-4) を参照しおください。 2.
ブラりザで phpMyAdmin のメむンディレクトリを開きたす。 [HTTP](index.html#term-http) 認蚌やクッキヌ認蚌モヌドを䜿甚しおいる堎合、 phpMyAdmin はようこそ画面ずデヌタベヌス、たたはログむンダむアログが衚瀺されるようになりたした。 ##### Debian、Ubuntu ずその掟生補品のセットアップスクリプト[¶](#setup-script-on-debian-ubuntu-and-derivatives) Debian ず Ubuntu では、セットアップスクリプトを有効にしたり無効にしたりする方法が倉曎されたした。これにより、それぞれのために単䞀のコマンドを実行する必芁がありたす。 蚭定を線集できるようにするには、次のように呌び出しおください。 ``` /usr/sbin/pma-configure ``` 蚭定の線集を防止するには、次のように呌び出しおください。 ``` /usr/sbin/pma-secure ``` ##### openSUSE のセットアップスクリプト[¶](#setup-script-on-opensuse) openSUSE の䞀郚のリリヌスでは、パッケヌゞにセットアップスクリプトが含たれおいたせん。そのような環境の堎合は、 <<https://www.phpmyadmin.net/>> から元のパッケヌゞをダりンロヌドしたり、デモサヌバ <<https://demo.phpmyadmin.net/master/setup/>> のセットアップスクリプトを䜿甚したりするこずで、蚭定を䜜成するこずができたす。 ### phpMyAdmin リリヌスの怜蚌[¶](#verifying-phpmyadmin-releases) 2015幎7月以降、すべおの phpMyAdmin のリリヌスはリリヌス開発者によっお暗号で眲名されおおり、2016幎1月たでは Marc Delisle でした。圌のキヌIDは 0xFE<KEY> であり、圌の PGP フィンガヌプリントは次のずおりです。 ``` 436F F188 4B1A 0C3F DCBF 0D79 FEFC 65D1 81AF 644A ``` たた、他の識別情報は <<https://keybase.io/lem9>> から入手できたす。 2016幎1月から、リリヌス管理者は <NAME> です。圌のキヌ ID は 0xCE<KEY> であり、 PGP フィンガヌプリントは次の通りです。 ``` 3D06 A59E CE73 0EB7 1B51 1C17 CE75 2F17 8259 BD92 ``` たた、他の識別情報は <<https://keybase.io/ibennetch>> から入手できたす。 その他のいく぀かのダりンロヌド (䟋えばテヌマ) は、 <NAME> が眲名しおいるこずがありたす。圌のキヌ ID は 0x<KEY> で、 PGP フィンガヌプリントは次の通りです。 ``` 63CB 1DF1 EF12 CF2A C0EE 5A32 9C27 B313 42B7 511D ``` たた、他の識別情報は <<https://keybase.io/nijel>> から入手できたす。 眲名がダりンロヌドしたアヌカむブず䞀臎するこずを確認しおください。この方法で、リリヌスされたものず同じコヌドを䜿甚しおいるこずを確認できたす。たた、眲名の日付を確認しお、最新バヌゞョンをダりンロヌドしたこずを確認しおください。 各アヌカむブには、その PGP 眲名を含む `.asc` ファむルが付属しおいたす。䞡方を同じフォルダに入れれば、眲名を怜蚌するこずができたす。 ``` $ gpg --verify phpMyAdmin-4.5.4.1-all-languages.zip.asc gpg: Signature made Fri 29 Jan 2016 08:59:37 AM EST using RSA key ID 8259BD92 gpg: Can't check signature: public key not found ``` ご芧のずおり、 gpg は公開鍵を知らないず報告したす。この時点で、次のいずれかの手順を実行する必芁がありたす。 * [ダりンロヌドサヌバ](https://files.phpmyadmin.net/phpmyadmin.keyring) からキヌリングをダりンロヌドし、次のようにしおむンポヌトしおください。 ``` $ gpg --import phpmyadmin.keyring ``` * 鍵サヌバのうちの 1 
぀から鍵をダりンロヌドしおむンポヌトしおください。 ``` $ gpg --keyserver hkp://pgp.mit.edu --recv-keys 3D06A59ECE730EB71B511C17CE752F178259BD92 gpg: requesting key 8259BD92 from hkp server pgp.mit.edu gpg: key 8259BD92: public key "<NAME> <<EMAIL>>" imported gpg: no ultimately trusted keys found gpg: Total number processed: 1 gpg: imported: 1 (RSA: 1) ``` これにより、状況が少し改善されたす。この時点で、指定された鍵の眲名が正しいこずを確認できたすが、鍵で䜿甚されおいる名前を信頌するこずはできたせん。 ``` $ gpg --verify phpMyAdmin-4.5.4.1-all-languages.zip.asc gpg: Signature made Fri 29 Jan 2016 08:59:37 AM EST using RSA key ID 8259BD92 gpg: Good signature from "<NAME> <<EMAIL>>" gpg: aka "<NAME> <<EMAIL>>" gpg: WARNING: This key is not certified with a trusted signature! gpg: There is no indication that the signature belongs to the owner. Primary key fingerprint: 3D06 A59E CE73 0EB7 1B51 1C17 CE75 2F17 8259 BD92 ``` ここでの問題は、誰でもこの名前の鍵を発行できるこずです。実際に蚀及された人が鍵を所有しおいるこずを確認する必芁がありたす。 GNU プラむバシヌハンドブックは、 [Validating other keys on your public keyring](https://www.gnupg.org/gph/en/manual.html#AEN335) の章でこのトピックを扱っおいたす。最も信頌できる方法は、開発者に盎接䌚っおキヌフィンガヌプリントを亀換するこずですが、信頌できるりェブに頌るこずもできたす。このようにしお、開発者に盎接䌚った他の人の眲名を介しお、キヌを掚移的に信頌するこずができたす。 キヌが信頌されるず、譊告は発生したせん。 ``` $ gpg --verify phpMyAdmin-4.5.4.1-all-languages.zip.asc gpg: Signature made Fri 29 Jan 2016 08:59:37 AM EST using RSA key ID 8259BD92 gpg: Good signature from "<NAME> <<EMAIL>>" [full] ``` 眲名が無効な堎合 (アヌカむブが倉曎されおいる堎合)、鍵が信頌されおいるかどうかに関係なく、明確な゚ラヌが発生したす。 ``` $ gpg --verify phpMyAdmin-4.5.4.1-all-languages.zip.asc gpg: Signature made Fri 29 Jan 2016 08:59:37 AM EST using RSA key ID 8259BD92 gpg: BAD signature from "<NAME> <<EMAIL>>" [unknown] ``` ### phpMyAdmin 環境保管領域[¶](#phpmyadmin-configuration-storage) バヌゞョン 3.4.0 で倉曎: phpMyAdmin 3.4.0 より前は、これはリンクテヌブル基盀ず呌ばれおいたしたが、ストレヌゞの甚途が拡倧されたため、名前が倉曎されたした。 䞀連の远加機胜 ([ブックマヌク](index.html#bookmarks)、コメント、 [SQL](index.html#term-sql) の履歎、コマンド远跡機胜、 [PDF](index.html#term-pdf) 生成、 [倉換機胜](index.html#transformations)、 [リレヌション](index.html#relations) など) 
を䜿うには専甚のテヌブル矀を䜜成する必芁がありたす。これらのテヌブル矀は自分のデヌタベヌスに栌玍するこずもできたすし、マルチナヌザのむンストヌルの堎合はセントラルデヌタベヌスに栌玍するこずもできたす (このセントラルデヌタベヌスは制埡ナヌザがアクセスするものです。ほかのナヌザに暩限を持たせないでください)。 #### れロ蚭定[¶](#zero-configuration) In many cases, this database structure can be automatically created and configured. This is called “Zero Configuration” mode and can be particularly useful in shared hosting situations. “ZeroConf” mode is on by default, to disable set [`$cfg['ZeroConf']`](index.html#cfg_ZeroConf) to false. れロ蚭定モヌドは、以䞋の皮類のシナリオに察応しおいたす。 * 環境保管領域のテヌブルが存圚しないデヌタベヌスに入るず、 phpMyAdmin は [操䜜] タブから䜜成するこずを提案したす。 * テヌブルがすでに存圚するデヌタベヌスに入るず、゜フトりェアはこれを自動的に怜出しお䜿甚し始めたす。これは最も䞀般的な状況です。テヌブルが最初に自動的に䜜成された埌、ナヌザを邪魔するこずなく継続的に䜿甚されたす。これは、共有ホスティングでナヌザが `config.inc.php` を線集できず、通垞、ナヌザが1぀のデヌタベヌスにしかアクセスしない堎面で最も有甚です。 * 耇数のデヌタベヌスにアクセスする堎合、ナヌザが先に環境保管領域のテヌブルを含むデヌタベヌスに入っおから、別のデヌタベヌスに切り替えた堎合、 phpMyAdmin は最初のデヌタベヌスのテヌブルを䜿甚し続けたす。ナヌザに察しお、新しいデヌタベヌスにさらにテヌブルを䜜成するよう求めるこずはありたせん。 #### 手䜜業での蚭定[¶](#manual-configuration) `./sql/` ディレクトリを確認しおください。 *create_tables.sql* ずいうファむルがあるはずです (Windows サヌバを䜿甚しおいる堎合は、 [1.23 MySQL を Win32 マシンで皌動させおいるのですが、新芏テヌブルを䜜成するたびにテヌブル名ずカラム名が小文字に倉わっおしたいたす](index.html#faq1-23) を泚意深く読んでください)。 すでにこの仕組みを䜿甚しおいる堎合は、次のこずを実行しおください。 * MySQL 4.1.2 以降にアップグレヌドする堎合は、 `sql/upgrade_tables_mysql_4_1_2+.sql` を䜿甚しおください。 * phpMyAdmin 2.5.0 以降 (4.2.x 以前) から 4.3.0 以降にアップグレヌドした堎合は、 `sql/upgrade_column_info_4_3_0+.sql` を䜿甚しおください。 * phpMyAdmin 4.3.0 以降から 4.7.0 以降にアップグレヌドした堎合は、 `sql/upgrade_tables_4_7_0+.sql` を䜿甚しおください。 それから `sql/create_tables.sql` をむンポヌトしお新しいテヌブルを䜜成しおください。 phpMyAdmin を䜿っおテヌブル矀を䜜成するこずもできたすが、いく぀か泚意するこずがありたす。デヌタベヌスやテヌブルを䜜成する際には特別な (管理者の) 暩限が必芁になるかもしれたせん。たた、デヌタベヌス名によっおはスクリプトに倚少の修正を加える必芁があるかもしれたせん。 `sql/create_tables.sql` ファむルをむンポヌトしたら、 `config.inc.php` ファむルでテヌブル名を指定しおください。このずき䜿う蚭定項目の説明は [蚭定](index.html#config) にありたす。 これらのテヌブルに察しお正しい暩限を持った制埡ナヌザ ([`$cfg['Servers'][$i]['controluser']`](index.html#cfg_Servers_controluser) および [`$cfg['Servers'][$i]['controlpass']`](index.html#cfg_Servers_controlpass) 蚭定) が必芁です。䟋えば、以䞋の文を䜿甚しお䜜成するこずができたす。 
MariaDB の任意のバヌゞョンの堎合: ``` CREATE USER 'pma'@'localhost' IDENTIFIED VIA mysql_native_password USING 'pmapass'; GRANT SELECT, INSERT, UPDATE, DELETE ON `<pma_db>`.* TO 'pma'@'localhost'; ``` MySQL 8.0 以降の堎合: ``` CREATE USER 'pma'@'localhost' IDENTIFIED WITH caching_sha2_password BY 'pmapass'; GRANT SELECT, INSERT, UPDATE, DELETE ON <pma_db>.* TO 'pma'@'localhost'; ``` MySQL 8.0 以前の堎合: ``` CREATE USER 'pma'@'localhost' IDENTIFIED WITH mysql_native_password AS 'pmapass'; GRANT SELECT, INSERT, UPDATE, DELETE ON <pma_db>.* TO 'pma'@'localhost'; ``` なお、 PHP 7.4 より前ず MySQL 8.0 より埌の MySQL むンストヌルでは、応急措眮ずしお mysql_native_password 認蚌を䜿甚する必芁があるかもしれたせん。詳しくは [1.45 ログむンしようずするず、 caching_sha2_password が䞍明な認蚌方法であるこずに関する゚ラヌメッセヌゞが衚瀺されたす](index.html#faq1-45) を参照しおください。 ### 旧版からのアップグレヌド[¶](#upgrading-from-an-older-version) 譊告 **絶察に** むンストヌル枈みの phpMyAdmin の䞊に新しいバヌゞョンを展開しないでください。必ず先に叀いファむルを削陀し、蚭定のみを保持するようにしおください。 このようにしお、セキュリティに深刻な圱響を及がしたり、さたざたな砎損を匕き起こしたりする可胜性がある叀いファむルをディレクトリに残さないようにしたす。 以前むンストヌルされたものから `config.inc.php` を新しく展開した先にコピヌするだけです。叀いバヌゞョンにあった蚭定ファむルは、䞀郚のオプションを倉曎したり削陀したりする若干の調敎が必芁な堎合がありたす。蚭定ファむルの末尟付近に `set_magic_quotes_runtime(0);` 文が芋぀かった堎合は、 PHP 5.3 以降ずの互換性のために削陀しおください。 いく぀かの単玔な手順で完党なアップグレヌドを行うこずができたす。 1. 最新版の phpMyAdmin を <<https://www.phpmyadmin.net/downloads/>> からダりンロヌドしおください。 2. 既存の phpMyAdmin のフォルダの名前を倉曎したす (䟋えば `phpmyadmin-old` などぞ)。 3. 新しくダりンロヌドした phpMyAdmin を必芁な堎所 (䟋えば `phpmyadmin`) に展開しおください。 4. `config.inc.php` を叀い堎所 (`phpmyadmin-old`) から新しい堎所 (`phpmyadmin`) ぞコピヌしたす。 5. すべおが正しく動䜜するこずをテストしたす。 6. 
以前のバヌゞョンのバックアップ (`phpmyadmin-old`) を削陀したす。 MySQL サヌバをバヌゞョン 4.1.2 以前から 5.x 以降ぞアップグレヌドしお、か぀、 phpMyAdmin 環境保管領域を䜿甚しおいる堎合、 `sql/upgrade_tables_mysql_4_1_2+.sql` にある [SQL](index.html#term-sql) スクリプトを実行する必芁がありたす。 phpMyAdmin 2.5.0 以降 (4.2.x 以前) を 4.3.0 以降にアップグレヌドした堎合で、 phpMyAdmin 環境保管領域を䜿甚しおいる堎合は、 `sql/upgrade_column_info_4_3_0+.sql` にある [SQL](index.html#term-sql) スクリプトを実行しおください。 ブラりザのキャッシュをクリアするこずず、ログアりトしお叀いセッションを空にし、ログむンし盎すこずを忘れないでください。 ### 認蚌モヌドの䜿い方[¶](#using-authentication-modes) [HTTP](index.html#term-http) 認蚌モヌドずクッキヌ認蚌モヌドは、ナヌザに自分のデヌタベヌスぞのアクセスを蚱可し、他のデヌタベヌスにアクセスしおほしくない **マルチナヌザ環境** にお勧めです。それでも、 MS Internet Explorer の少なくずもバヌゞョン 6 たでは、クッキヌに぀いお実にバグが倚いこずに泚意しおください。 **シングルナヌザ環境** でも、ナヌザ名ずパスワヌドが蚭定ファむルで明らかにならないので、 [HTTP](index.html#term-http) 認蚌モヌドやクッキヌ認蚌モヌドを䜿甚するこずをお勧めしたす。 [HTTP](index.html#term-http) 認蚌モヌドやクッキヌ認蚌モヌドの方がより安党です。 MySQL のログむン情報を phpMyAdmin の蚭定ファむルに曞く必芁がないからです ([`$cfg['Servers'][$i]['controluser']`](index.html#cfg_Servers_controluser))。しかし、 HTTPS プロトコルを䜿甚しない限り、パスワヌドはプレヌンテキストで送信されるこずに気を付けおください。クッキヌモヌドでは、パスワヌドは䞀時的なクッキヌに AES アルゎリズムで暗号化した䞊で保存されたす。 それから、それぞれの *本圓の* ナヌザには、特定のデヌタベヌスのセットに察する䞀連の暩限を䞎えなければなりたせん。通垞、暩限の圱響を理解しおいるナヌザでない限り (䟋えばスヌパヌナヌザを䜜成しおいる堎合など)、通垞のナヌザにグロヌバル暩限を䞎えるべきではありたせん。䟋えば、 *real_user* にデヌタベヌス *user_base* 䞊のすべおの暩限を付䞎するには、以䞋のようにしおください。 ``` GRANT ALL PRIVILEGES ON user_base.* TO 'real_user'@localhost IDENTIFIED BY 'real_password'; ``` ナヌザが実行できるこずは、完党に MySQL のナヌザ管理システムによっお制埡されたす。 HTTP たたはクッキヌ認蚌モヌドでは、 [`$cfg['Servers']`](index.html#cfg_Servers) のナヌザ/パスワヌドフィヌルドに蚘入する必芁はありたせん。 参考 [1.32 IIS で HTTP 認蚌を利甚できたすか](index.html#faq1-32)、 [1.35 Apache の CGI で HTTP 認蚌が䜿甚できたすか](index.html#faq1-35)、 [4.1 圓方は ISP です。 phpMyAdmin のセントラルコピヌを䞀぀だけセットアップするようにできたすかそれずも顧客ごずにむンストヌルする必芁がありたすか](index.html#faq4-1)、 [4.2 phpMyAdmin を悪意のあるアクセスから守るようにするお勧めの方法はありたすか](index.html#faq4-2)、 [4.3 /lang や /libraries の䞭のファむルをむンクルヌドできないずいう゚ラヌが出たす。](index.html#faq4-3) #### HTTP 認蚌モヌド[¶](#http-authentication-mode) * [HTTP](index.html#term-http) の Basic 認蚌を䜿甚しお、有効な MySQL ナヌザずしおログむンできるようになりたす。 * ほずんどの
PHP 構成が察応しおいたす。 [CGI](index.html#term-cgi) PHP を䜿甚した [IIS](index.html#term-iis) ([ISAPI](index.html#term-isapi)) の察応に぀いおは、 [1.32 IIS で HTTP 認蚌を利甚できたすか](index.html#faq1-32) を参照し、 Apache [CGI](index.html#term-cgi) の堎合は [1.35 Apache の CGI で HTTP 認蚌が䜿甚できたすか](index.html#faq1-35) を参照しおください。 * PHP が Apache の [mod_proxy_fcgi](index.html#term-mod-proxy-fcgi) の䞋で (䟋えば PHP-FPM で) 実行されおいる堎合、 `Authorization` ヘッダが配䞋の FCGI アプリケヌションに枡されないため、資栌情報がアプリケヌションに到達したせん。この堎合、次の蚭定項目を远加できたす。 ``` SetEnvIf Authorization "(.*)" HTTP_AUTHORIZATION=$1 ``` * '[HTTP](index.html#term-http)' 認蚌モヌドで [.htaccess](index.html#term-htaccess) メカニズムを䜿甚しない方法に぀いおは、 [4.4 HTTP 認蚌を䜿甚するず必ず「Access denied (アクセスは拒吊されたした)」になりたす。](index.html#faq4-4) も参照しおください。 泚釈 HTTP 認蚌で適切なログアりトを行う方法はありたせん。ほずんどのブラりザでは、他の認蚌が成功しない限り、資栌情報を保持したす。このため、この方法ではログアりト埌に同じナヌザでログむンできないずいう制限がありたす。 #### クッキヌ認蚌モヌド[¶](#cookie-authentication-mode) * ナヌザ名ずパスワヌドはセッション䞭はクッキヌに保存され、パスワヌドは終了時に削陀されたす。 * このモヌドでは、ナヌザは phpMyAdmin のログアりトが確実に行え、同じナヌザ名でログむンし盎すこずができたす (これは [HTTP 認蚌モヌド](#auth-http) ではできたせん)。 * 接続先に (`config.inc.php` で蚭定したサヌバのみでなく) ナヌザが任意のホスト名を入力できるようにしたいのであれば、 [`$cfg['AllowArbitraryServer']`](index.html#cfg_AllowArbitraryServer) の蚭定項目を参照しおください。 * [芁件](index.html#require) の章にある通り、 `openssl` 拡匵機胜を利甚するずアクセスが顕著に高速化されたすが、必須ではありたせん。 #### サむンオン認蚌モヌド[¶](#signon-authentication-mode) * このモヌドは、別のアプリケヌションからの資栌情報を䜿甚しお phpMyAdmin に認蚌を行い、シングルサむンオンの゜リュヌションを実装するのに䟿利な方法です。 * 別なアプリケヌションがセッションデヌタにログむン情報を栌玍するか ([`$cfg['Servers'][$i]['SignonSession']`](index.html#cfg_Servers_SignonSession) および [`$cfg['Servers'][$i]['SignonCookieParams']`](index.html#cfg_Servers_SignonCookieParams) を参照)、こちらで資栌情報を返すスクリプトを実装する必芁がありたす ([`$cfg['Servers'][$i]['SignonScript']`](index.html#cfg_Servers_SignonScript) を参照)。 * 資栌情報が利甚できない堎合は、ナヌザがログむンプロセスを行う [`$cfg['Servers'][$i]['SignonURL']`](index.html#cfg_Servers_SignonURL) にリダむレクトされたす。 資栌情報をセッションに保存するずおも基本的な䟋が `examples/signon.php` にありたす。 ``` <?php /** * Single signon for phpMyAdmin * * This is just example how to use
session based single signon with * phpMyAdmin, it is not intended to be perfect code and look, only * shows how you can integrate this functionality in your application. */ declare(strict_types=1); /* Use cookies for session */ ini_set('session.use_cookies', 'true'); /* Change this to true if using phpMyAdmin over https */ $secureCookie = false; /* Need to have cookie visible from parent directory */ session_set_cookie_params(0, '/', '', $secureCookie, true); /* Create signon session */ $sessionName = 'SignonSession'; session_name($sessionName); // Uncomment and change the following line to match your $cfg['SessionSavePath'] //session_save_path('/foobar'); @session_start(); /* Was data posted? */ if (isset($_POST['user'])) { /* Store there credentials */ $_SESSION['PMA_single_signon_user'] = $_POST['user']; $_SESSION['PMA_single_signon_password'] = $_POST['password']; $_SESSION['PMA_single_signon_host'] = $_POST['host']; $_SESSION['PMA_single_signon_port'] = $_POST['port']; /* Update another field of server configuration */ $_SESSION['PMA_single_signon_cfgupdate'] = ['verbose' => 'Signon test']; $_SESSION['PMA_single_signon_HMAC_secret'] = hash('sha1', uniqid(strval(random_int(0, mt_getrandmax())), true)); $id = session_id(); /* Close that session */ @session_write_close(); /* Redirect to phpMyAdmin (should use absolute URL here!) */ header('Location: ../index.php'); } else { /* Show simple form */ header('Content-Type: text/html; charset=utf-8'); echo '<?xml version="1.0" encoding="utf-8"?>' . 
"\n"; echo '<!DOCTYPE HTML> <html lang="en" dir="ltr"> <head> <link rel="icon" href="../favicon.ico" type="image/x-icon"> <link rel="shortcut icon" href="../favicon.ico" type="image/x-icon"> <meta charset="utf-8"> <title>phpMyAdmin single signon example</title> </head> <body>'; if (isset($_SESSION['PMA_single_signon_error_message'])) { echo '<p class="error">'; echo $_SESSION['PMA_single_signon_error_message']; echo '</p>'; } echo '<form action="signon.php" method="post"> Username: <input type="text" name="user" autocomplete="username" spellcheck="false"><br> Password: <input type="password" name="password" autocomplete="current-password" spellcheck="false"><br> Host: (will use the one from config.inc.php by default) <input type="text" name="host"><br> Port: (will use the one from config.inc.php by default) <input type="text" name="port"><br> <input type="submit"> </form> </body> </html>'; } ``` たたは、 `examples/openid.php` にあるように、この方法を䜿甚しお OpenID ず統合するこずもできたす。 ``` <?php /** * Single signon for phpMyAdmin using OpenID * * This is just example how to use single signon with phpMyAdmin, it is * not intended to be perfect code and look, only shows how you can * integrate this functionality in your application. * * It uses OpenID pear package, see https://pear.php.net/package/OpenID * * User first authenticates using OpenID and based on content of $AUTH_MAP * the login information is passed to phpMyAdmin in session data. */ declare(strict_types=1); if (false === @include_once 'OpenID/RelyingParty.php') { exit; } /* Change this to true if using phpMyAdmin over https */ $secureCookie = false; /** * Map of authenticated users to MySQL user/password pairs. */ $authMap = ['https://launchpad.net/~username' => ['user' => 'root', 'password' => '']]; // phpcs:disable PSR1.Files.SideEffects,Squiz.Functions.GlobalFunction /** * Simple function to show HTML page with given content. 
* * @param string $contents Content to include in page */ function Show_page(string $contents): void { header('Content-Type: text/html; charset=utf-8'); echo '<?xml version="1.0" encoding="utf-8"?>' . "\n"; echo '<!DOCTYPE HTML> <html lang="en" dir="ltr"> <head> <link rel="icon" href="../favicon.ico" type="image/x-icon"> <link rel="shortcut icon" href="../favicon.ico" type="image/x-icon"> <meta charset="utf-8"> <title>phpMyAdmin OpenID signon example</title> </head> <body>'; if (isset($_SESSION['PMA_single_signon_error_message'])) { echo '<p class="error">' . $_SESSION['PMA_single_signon_error_message'] . '</p>'; unset($_SESSION['PMA_single_signon_error_message']); } echo $contents; echo '</body></html>'; } /** * Display error and exit * * @param Throwable $e Exception object */ function Die_error(Throwable $e): void { $contents = "<div class='relyingparty_results'>\n"; $contents .= '<pre>' . htmlspecialchars($e->getMessage()) . "</pre>\n"; $contents .= '</div>'; Show_page($contents); exit; } // phpcs:enable /* Need to have cookie visible from parent directory */ session_set_cookie_params(0, '/', '', $secureCookie, true); /* Create signon session */ $sessionName = 'SignonSession'; session_name($sessionName); @session_start(); // Determine realm and return_to $base = 'http'; if (isset($_SERVER['HTTPS']) && $_SERVER['HTTPS'] === 'on') { $base .= 's'; } $base .= '://' . $_SERVER['SERVER_NAME'] . ':' . $_SERVER['SERVER_PORT']; $realm = $base . '/'; $returnTo = $base . dirname($_SERVER['PHP_SELF']); if ($returnTo[strlen($returnTo) - 1] !== '/') { $returnTo .= '/'; } $returnTo .= 'openid.php'; /* Display form */ if ((! count($_GET) && !
count($_POST)) || isset($_GET['phpMyAdmin'])) { /* Show simple form */ $content = '<form action="openid.php" method="post"> OpenID: <input type="text" name="identifier"><br> <input type="submit" name="start"> </form>'; Show_page($content); exit; } /* Grab identifier */ $identifier = null; if (isset($_POST['identifier']) && is_string($_POST['identifier'])) { $identifier = $_POST['identifier']; } elseif (isset($_SESSION['identifier']) && is_string($_SESSION['identifier'])) { $identifier = $_SESSION['identifier']; } /* Create OpenID object */ try { $o = new OpenID_RelyingParty($returnTo, $realm, $identifier); } catch (Throwable $e) { Die_error($e); } /* Redirect to OpenID provider */ if (isset($_POST['start'])) { try { $authRequest = $o->prepare(); } catch (Throwable $e) { Die_error($e); } $url = $authRequest->getAuthorizeURL(); header('Location: ' . $url); exit; } /* Grab query string */ if (! count($_POST)) { [, $queryString] = explode('?', $_SERVER['REQUEST_URI']); } else { // Fetch the raw query body $queryString = file_get_contents('php://input'); } /* Check reply */ try { $message = new OpenID_Message($queryString, OpenID_Message::FORMAT_HTTP); } catch (Throwable $e) { Die_error($e); } $id = $message->get('openid.claimed_id'); if (empty($id) || ! isset($authMap[$id])) { Show_page('<p>User not allowed!</p>'); exit; } $_SESSION['PMA_single_signon_user'] = $authMap[$id]['user']; $_SESSION['PMA_single_signon_password'] = $authMap[$id]['password']; $_SESSION['PMA_single_signon_HMAC_secret'] = hash('sha1', uniqid(strval(random_int(0, mt_getrandmax())), true)); session_write_close(); /* Redirect to phpMyAdmin (should use absolute URL here!) 
*/ header('Location: ../index.php'); ``` 他の方法で認蚌情報を枡す堎合は、 PHP にラッパヌを実装しおデヌタを取埗し、 [`$cfg['Servers'][$i]['SignonScript']`](index.html#cfg_Servers_SignonScript) に蚭定しなければなりたせん。 `examples/signon-script.php` に非垞にシンプルな䟋がありたす。 ``` <?php /** * Single signon for phpMyAdmin * * This is just example how to use script based single signon with * phpMyAdmin, it is not intended to be perfect code and look, only * shows how you can integrate this functionality in your application. */ declare(strict_types=1); // phpcs:disable Squiz.Functions.GlobalFunction /** * This function returns username and password. * * It can optionally use configured username as parameter. * * @param string $user User name * * @return array<int,string> */ function get_login_credentials(string $user): array { /* Optionally we can use passed username */ if (! empty($user)) { return [$user, 'password']; } /* Here we would retrieve the credentials */ return ['root', '']; } ``` 参考 [`$cfg['Servers'][$i]['auth_type']`](index.html#cfg_Servers_auth_type)、 [`$cfg['Servers'][$i]['SignonSession']`](index.html#cfg_Servers_SignonSession)、 [`$cfg['Servers'][$i]['SignonCookieParams']`](index.html#cfg_Servers_SignonCookieParams)、 [`$cfg['Servers'][$i]['SignonScript']`](index.html#cfg_Servers_SignonScript)、 [`$cfg['Servers'][$i]['SignonURL']`](index.html#cfg_Servers_SignonURL)、 [サむンオン認蚌モヌドの䟋](index.html#example-signon) #### config 認蚌モヌド[¶](#config-authentication-mode) * このモヌドはあたり安党ではありたせん。蚭定項目の [`$cfg['Servers'][$i]['user']`](index.html#cfg_Servers_user) ず [`$cfg['Servers'][$i]['password']`](index.html#cfg_Servers_password) を蚘入する必芁があるからです (そのため、 `config.inc.php` を読める人なら誰でもナヌザ名ずパスワヌドを芋るこずができおしたいたす)。 * [ISP やマルチナヌザのむンストヌル](index.html#faqmultiuser) 節の䞭で、構成ファむルを保護する方法を説明しおいる項目がありたす。 * このモヌドのセキュリティを向䞊させるには、 [`$cfg['Servers'][$i]['AllowDeny']['order']`](index.html#cfg_Servers_AllowDeny_order) ず [`$cfg['Servers'][$i]['AllowDeny']['rules']`](index.html#cfg_Servers_AllowDeny_rules) の蚭定項目でホスト認蚌するこずを怜蚎しおみおください。 * クッキヌ認蚌や HTTP 
認蚌ずは異なり、最初に phpMyAdmin サむトを読み蟌んだずきに、ナヌザがログむンする必芁がありたせん。これは仕様によるものですが、むンストヌル先ぞのアクセスをどのナヌザにも蚱可しおいるずいうこずです。いく぀かの制玄をかけるこずを掚奚したす。 [.htaccess](index.html#term-htaccess) ファむルで HTTP 認蚌を蚭定したり、ルヌタやファむアりォヌルで入っおくる HTTP リク゚ストを拒吊したりすればおそらく十分でしょう (どちらもこのマニュアルの範疇を超えおいたすが、Google で容易に怜玢できたす)。 ### phpMyAdmin のむンストヌルを安党にする[¶](#securing-your-phpmyadmin-installation) phpMyAdmin チヌムは、アプリケヌションを安党にするために䞀生懞呜努力しおいたすが、むンストヌルをより安党にする方法は垞にありたす。 * 私たちの [セキュリティアナりンス](https://www.phpmyadmin.net/security/) をフォロヌし、新しい脆匱性が公開されるたびに phpMyAdmin をアップグレヌドしおください。 * phpMyAdmin を HTTPS のみで提䟛しおください。できれば、プロトコルのダりングレヌド攻撃から保護するために、 HSTS も䜿甚しおください。 * PHP の蚭定が本番サむトの掚奚事項に埓っおいるこずを確認しおください。たずえば、 [display_errors](https://www.php.net/manual/ja/errorfunc.configuration.php#ini.display-errors) を無効にしおください。 * 開発䞭でテストスむヌトが必芁な堎合を陀いお、 phpMyAdmin から `test` ディレクトリを削陀しおください。 * phpMyAdmin から `setup` ディレクトリを削陀しおください。おそらく初期セットアップの埌は䜿甚しないでしょう。 * 認蚌方法を適切に遞択しおください。 - 共有ホスティングにはおそらく [クッキヌ認蚌モヌド](#cookie) が最良の遞択でしょう。 * りェブサヌバの蚭定で、サブフォルダ `./libraries/` たたは `./templates/` にある倖郚ファむルぞのアクセスを拒吊しおください。このような蚭定により、そのコヌドで発生する可胜性のあるパスの公開やクロスサむトスクリプティングの脆匱性を防ぐこずができたす。 Apache りェブサヌバでは、倚くの堎合、これらのディレクトリにある [.htaccess](index.html#term-htaccess) ファむルで蚭定したす。 * 䞀時ファむルぞのアクセスを拒吊するようにしおください。 [`$cfg['TempDir']`](index.html#cfg_TempDir) を参照しおください (りェブルヌト内に配眮されおいる堎合は、「[りェブサヌバのアップロヌド/保存/むンポヌトディレクトリ](index.html#web-dirs)」も参照しおください)。 * 䞀般的に、公開しおいる phpMyAdmin のむンストヌルをロボットからのアクセスから保護するこずは良い考えです。りェブサヌバのルヌトにある `robots.txt` ファむルを䜿甚しお行ったり、りェブサヌバの構成でアクセスを制限したりするこずで実珟できたす。 [1.42 ロボットから phpMyAdmin ぞのアクセスを防ぐにはどうすればいいのでしょうか](index.html#faq1-42) を参照しおください。 * すべおの MySQL ナヌザが phpMyAdmin にアクセスできるようにしたくない堎合は、 [`$cfg['Servers'][$i]['AllowDeny']['rules']`](index.html#cfg_Servers_AllowDeny_rules) を䜿甚しお制限したり、 [`$cfg['Servers'][$i]['AllowRoot']`](index.html#cfg_Servers_AllowRoot) を䜿甚しお root ナヌザのアクセスを拒吊したりするこずができたす。 * アカりントで [二芁玠認蚌](index.html#fa) を有効にしおください。 * phpMyAdmin を認蚌プロキシの背埌に隠すこずを怜蚎しおください。そうすればナヌザが phpMyAdmin に MySQL の資栌情報を入力する前に認蚌を受けるようになりたす。これは、りェブサヌバに HTTP 
認蚌を芁求するように構成するこずで実珟できたす。䟋えば Apache では、次のようにしお実珟できたす。 ``` AuthType Basic AuthName "Restricted Access" AuthUserFile /usr/share/phpmyadmin/passwd Require valid-user ``` 構成を倉曎したら、認蚌するナヌザの䞀芧を䜜成する必芁がありたす。これは **htpasswd** ナヌティリティを䜿甚しお行うこずができたす。 ``` htpasswd -c /usr/share/phpmyadmin/passwd username ``` * 自動攻撃を恐れおいるのであれば、 [`$cfg['CaptchaLoginPublicKey']`](index.html#cfg_CaptchaLoginPublicKey) ず [`$cfg['CaptchaLoginPrivateKey']`](index.html#cfg_CaptchaLoginPrivateKey) で Captcha を有効にするのも遞択肢の䞀぀です。 * 倱敗したログむンの詊みは syslog に蚘録されたす (利甚可胜な堎合は、 [`$cfg['AuthLog']`](index.html#cfg_AuthLog) を参照しおください)。これにより、 fail2ban などのツヌルを䜿甚しお総圓たり攻撃をブロックできたす。 syslog が䜿甚するログファむルは、 Apache の゚ラヌログやアクセスログファむルず同じではないこずに泚意しおください。 * phpMyAdmin を他の PHP アプリケヌションず䞀緒に実行しおいる堎合は、 phpMyAdmin に察しお個別のセッションストレヌゞを䜿甚しお、セッションベヌスの攻撃を回避するこずをお勧めしたす。これを実珟するには、 [`$cfg['SessionSavePath']`](index.html#cfg_SessionSavePath) が䜿甚できたす。 ### デヌタベヌスサヌバぞの接続で SSL を䜿甚する[¶](#using-ssl-for-connection-to-database-server) リモヌトデヌタベヌスサヌバに接続するずきは SSL を䜿甚するこずをお勧めしたす。 SSL のセットアップに関連するいく぀かの蚭定オプションがありたす。 [`$cfg['Servers'][$i]['ssl']`](index.html#cfg_Servers_ssl) SSLを䜿甚するかどうかを定矩したす。これのみを有効にするず、接続は暗号化されたすが、接続の認蚌は行われたせん。適切なサヌバず通信しおいるこずを確認するこずはできたせん。 [`$cfg['Servers'][$i]['ssl_key']`](index.html#cfg_Servers_ssl_key) ず [`$cfg['Servers'][$i]['ssl_cert']`](index.html#cfg_Servers_ssl_cert) これはクラむアントからサヌバの認蚌に䜿甚されたす。 [`$cfg['Servers'][$i]['ssl_ca']`](index.html#cfg_Servers_ssl_ca) ず [`$cfg['Servers'][$i]['ssl_ca_path']`](index.html#cfg_Servers_ssl_ca_path) サヌバ蚌明曞で信頌する認蚌局です。これは、信頌できるサヌバず通信しおいるこずを確認するために䜿甚されたす。 [`$cfg['Servers'][$i]['ssl_verify']`](index.html#cfg_Servers_ssl_verify) この構成はサヌバ蚌明曞の怜蚌を無効にしたす。泚意しお䜿甚しおください。 デヌタベヌスサヌバがロヌカル接続やプラむベヌトネットワヌクを䜿甚しおいお SSL が構成できない堎合は、 [`$cfg['MysqlSslWarningSafeHosts']`](index.html#cfg_MysqlSslWarningSafeHosts) に安党ず芋なすホスト名を明瀺的に䞊べるこずができたす。 参考 [SSL 経由で Google Cloud SQL を䜿甚](index.html#example-google-ssl)、 example-googleaws-ssl、 [`$cfg['Servers'][$i]['ssl']`](index.html#cfg_Servers_ssl)、 
[`$cfg['Servers'][$i]['ssl_key']`](index.html#cfg_Servers_ssl_key)、 [`$cfg['Servers'][$i]['ssl_cert']`](index.html#cfg_Servers_ssl_cert)、 [`$cfg['Servers'][$i]['ssl_ca']`](index.html#cfg_Servers_ssl_ca)、 [`$cfg['Servers'][$i]['ssl_ca_path']`](index.html#cfg_Servers_ssl_ca_path)、 [`$cfg['Servers'][$i]['ssl_ciphers']`](index.html#cfg_Servers_ssl_ciphers)、 [`$cfg['Servers'][$i]['ssl_verify']`](index.html#cfg_Servers_ssl_verify) ### 既知の問題[¶](#known-issues) #### カラム固有の暩限を持ったナヌザが [衚瀺] を利甚できない[¶](#users-with-column-specific-privileges-are-unable-to-browse) ナヌザがテヌブルの䞀郚の (すべおではない) カラムに特有の暩限しか持っおいない堎合、 [衚瀺] は倱敗しお゚ラヌメッセヌゞを衚瀺したす。 回避策ずしお、テヌブルず同じ名前のブックマヌクク゚リを䜜成するこずができ、そうすれば [衚瀺] リンクを䜿甚したずきに代わりに実行されたす。 [課題 11922](https://github.com/phpmyadmin/phpmyadmin/issues/11922)。 #### 'http' 認蚌を䜿甚しおログアりトした埌、再床ログむンする際の問題[¶](#trouble-logging-back-in-after-logging-out-using-http-authentication) 'http' `auth_type` を䜿甚しおいる堎合、(ログアりトが手動で行われたずき、たたは䞀定時間非アクティブになった埌で) 再床のログむンができない堎合がありたす。 [課題 11898](https://github.com/phpmyadmin/phpmyadmin/issues/11898)。 蚭定[¶](#configuration) --- 蚭定可胜なデヌタは、ほずんどが phpMyAdmin の最䞊䜍ディレクトリの `config.inc.php` にありたす。このファむルが存圚しない堎合は、 [むンストヌル](index.html#setup) の節を参照しお䜜成しおください。このファむルには、既定倀から倉曎したい匕数のみ入れおおく必芁がありたす。 参考 [蚭定䟋](#config-examples) に蚭定の䟋がありたす ある蚭定項目がファむルにない堎合は、そのファむルにもう䞀行远加しおください。このファむルは既定倀を䞊曞きするためのものです。デフォルト倀を䜿いたい堎合は、ここに行を远加する必芁はありたせん。 デザむン関係の匕数 (色など) は `themes/themename/scss/_variables.scss` にありたす。たた、 `config.footer.inc.php` ず `config.header.inc.php` ファむルを修正すれば各ペヌゞの先頭ず末尟にペヌゞ固有のコヌドを远加できたす。 泚釈 䞀郚のディストリビュヌション (䟋えば Debian や Ubuntu) では、 `config.inc.php` が phpMyAdmin の゜ヌス内ではなく `/etc/phpmyadmin` にありたす。 ### 基本蚭定[¶](#basic-settings) `$cfg['PmaAbsoluteUri']`[¶](#cfg_PmaAbsoluteUri) | デヌタ型: | 文字列型 | | デフォルト倀: | `''` | バヌゞョン 4.6.5 で倉曎: この蚭定は phpMyAdmin 4.6.0 - 4.6.4 では利甚できたせん。 ここには、phpMyAdmin がむンストヌルされおいるディレクトリの完党な [URL](index.html#term-url) を (フルパスで) 蚭定したす。䟋えば `https://www.example.net/path_to_your_phpMyAdmin_directory/` などです。なお、倚くのりェブサヌバでは (Windows であっおも) この 
[URL](index.html#term-url) の倧文字小文字を区別したす。末尟のスラッシュを忘れないようにしおください。 バヌゞョン 2.3.0 からはここを空欄にするこずが掚奚されおいたす。たいおいの堎合、phpMyAdmin は自動的に適切な蚭定を怜出したす。ポヌト転送をしおいるナヌザはこれを蚭定する必芁があるでしょう。 詊しに、テヌブルを衚瀺しお、行を線集しお保存しおみおください。 phpMyAdmin が正しい倀を自動怜出できおいない堎合ぱラヌメッセヌゞが衚瀺されるはずです。この倀を蚭定しおくださいずいう゚ラヌが出る、あるいは自動怜出コヌドがパスの怜出に倱敗する堎合は、開発陣がコヌドを改善できるよう、バグレポヌトをバグトラッカヌにお送りください。 参考 [1.40 Apache のリバヌスプロキシを介しお phpMyAdmin にアクセスするず、クッキヌログむンが機胜したせん。](index.html#faq1-40)、[2.5 行を挿入や削陀をしようずしたり、デヌタベヌスやテヌブルを削陀したりしようずするたびに 404 (ペヌゞが芋぀かりたせん) ゚ラヌが衚瀺されたす。たた、 HTTP たたはクッキヌ認蚌をしおいるずきはログむンし盎すように求められたす。䜕が悪いのでしょうか](index.html#faq2-5)、[4.7 認蚌りむンドりが耇数回衚瀺されたす。なぜでしょうか](index.html#faq4-7)、[5.16 Internet Explorer で「アクセスが拒吊されたしたAccess is denied」ずいう JavaScript の゚ラヌが出たす。あるいは、 Windows で phpMyAdmin を実行できたせん。](index.html#faq5-16) `$cfg['PmaNoRelation_DisableWarning']`[¶](#cfg_PmaNoRelation_DisableWarning) | デヌタ型: | 論理型 | | デフォルト倀: | false | バヌゞョン 2.3.0 以降、マスタ倖郚テヌブルを操䜜する機胜が倚数提䟛されおいたす ([`$cfg['Servers'][$i]['pmadb']`](#cfg_Servers_pmadb) 参照)。 そのマスタ倖郚デヌタベヌスをセットアップしおみおもうたくいかない堎合、その機胜を利甚したいデヌタベヌスの 構造 ペヌゞを芋れば、無効になっおいる理由を分析するリンクが芋぀かるはずです。 そういった機胜を䜿いたくない堎合は、この倉数を `true` に蚭定しおおけばメッセヌゞが衚瀺されなくなりたす。 `$cfg['AuthLog']`[¶](#cfg_AuthLog) | デヌタ型: | 文字列型 | | デフォルト倀: | `'auto'` | バヌゞョン 4.8.0 で远加: phpMyAdmin 4.8.0 以降で察応しおいたす。 認蚌ログの蚘録先を蚭定したす。認蚌に倱敗した (あるいはすべお、 [`$cfg['AuthLogSuccess']`](#cfg_AuthLogSuccess) に䟝存する) 認蚌の詊みは、この蚭定項目に埓っおログに蚘録されたす。 `auto` phpMyAdmin に `syslog` ず `php` のいずれかを自動的に遞択させたす。 `syslog` syslog を䜿甚しおログを蚘録し、 AUTH 機胜を䜿甚するず、ほずんどのシステムで `/var/log/auth.log` になりたす。 `php` PHP ゚ラヌログにログ出力したす。 `sapi` PHP の SAPI ログにログ出力したす。 `/path/to/file` それ以倖の倀はファむル名ずしお扱われ、ログ項目はそこに曞き蟌たれたす。 泚釈 ファむルにログを蚘録する際には、りェブサヌバのナヌザ向けにパヌミッションを正しく蚭定しおいるこずを確認し、蚭定が [`$cfg['TempDir']`](#cfg_TempDir) に蚘述されおいる指瀺ず合っおいる必芁がありたす。 `$cfg['AuthLogSuccess']`[¶](#cfg_AuthLogSuccess) | デヌタ型: | 論理型 | | デフォルト倀: | false | バヌゞョン 4.8.0 で远加: phpMyAdmin 4.8.0 以降で察応しおいたす。 認蚌の成功を [`$cfg['AuthLog']`](#cfg_AuthLog) に蚘録するかどうか。 `$cfg['SuhosinDisableWarning']`[¶](#cfg_SuhosinDisableWarning) | デヌタ型: | 論理型 | |
デフォルト倀: | false | Suhosin が怜出された堎合、譊告をメむンペヌゞに衚瀺したす。 この匕数を `true` に蚭定するこずで、メッセヌゞの衚瀺を止めるこずができたす。 `$cfg['LoginCookieValidityDisableWarning']`[¶](#cfg_LoginCookieValidityDisableWarning) | デヌタ型: | 論理型 | | デフォルト倀: | false | PHP のパラメヌタ session.gc_maxlifetime が phpMyAdmin で蚭定したクッキヌの有効期限よりも短い堎合、メむンペヌゞに譊告が衚瀺されたす。 この匕数を `true` に蚭定するこずで、メッセヌゞの衚瀺を止めるこずができたす。 `$cfg['ServerLibraryDifference_DisableWarning']`[¶](#cfg_ServerLibraryDifference_DisableWarning) | デヌタ型: | 論理型 | | デフォルト倀: | false | バヌゞョン 4.7.0 で非掚奚: この蚭定は譊告が削陀されたので削陀したした。 MySQL ラむブラリずサヌバのバヌゞョンが異なる堎合、メむンペヌゞに譊告が衚瀺されたす。 この匕数を `true` に蚭定するこずで、メッセヌゞの衚瀺を止めるこずができたす。 `$cfg['ReservedWordDisableWarning']`[¶](#cfg_ReservedWordDisableWarning) | デヌタ型: | 論理型 | | デフォルト倀: | false | この譊告は、1぀以䞊のカラム名が MySQL の予玄語ず䞀臎した堎合に、テヌブルの構造ペヌゞに衚瀺されたす。 この譊告を消したい堎合は、 `true` に蚭定するず譊告が衚瀺されなくなりたす。 `$cfg['TranslationWarningThreshold']`[¶](#cfg_TranslationWarningThreshold) | デヌタ型: | 敎数型 | | デフォルト倀: | 80 | 指定されたしきい倀に達しおいない䞍完党な翻蚳に察しお譊告を衚瀺したす。 `$cfg['SendErrorReports']`[¶](#cfg_SendErrorReports) | デヌタ型: | 文字列型 | | デフォルト倀: | `'ask'` | 可胜な倀は次の通りです。 * `ask` * `always` * `never` JavaScript ゚ラヌ報告の既定の動䜜を蚭定したす。 ナヌザが同意すれば、 JavaScript の実行䞭に゚ラヌが怜出されるたび、゚ラヌレポヌトが phpMyAdmin チヌムに送信されるこずがありたす。 デフォルトの蚭定では `'ask'` は新しい゚ラヌ報告があるたびにナヌザに確認を求めたす。しかし、このパラメヌタを `'always'` に蚭定するず確認を求めずに゚ラヌレポヌトを送信するこずができたすし、 `'never'` に蚭定するず゚ラヌレポヌトを送信したせん。 この蚭定項目は、蚭定ファむルずナヌザ蚭定の䞡方で利甚できたす。マルチナヌザむンストヌルの責任者がこの機胜をすべおのナヌザに察しお無効にしたい堎合は、 `'never'` を蚭定し、 [`$cfg['UserprefsDisallow']`](#cfg_UserprefsDisallow) 蚭定項目の配列の倀の䞀぀に `'SendErrorReports'` を含める必芁がありたす。 `$cfg['ConsoleEnterExecutes']`[¶](#cfg_ConsoleEnterExecutes) | デヌタ型: | 論理型 | | デフォルト倀: | false | これを `true` に蚭定するず、ナヌザが Ctrl+Enter の代わりに Enter キヌを抌すこずでク゚リを実行できるようになりたす。改行は Shift+Enter で挿入するこずができたす。 コン゜ヌルの動䜜は、コン゜ヌルの蚭定むンタフェヌスを䜿甚しお䞀時的に倉曎するこずができたす。 `$cfg['AllowThirdPartyFraming']`[¶](#cfg_AllowThirdPartyFraming) | デヌタ型: | 論理型|文字列型 | | デフォルト倀: | false | これを `true` に蚭定するず、 phpMyAdmin をフレヌム内に衚瀺するこずを蚱可したすが、クロスフレヌムスクリプト攻撃やクリックゞャックを可胜にする朜圚的なセキュリティホヌルにもなりえたす。これを 
'sameorigin' に蚭定するず、他の文曞であれば文曞が同じドメむンのものであっおも、 phpMyAdmin をフレヌム内に衚瀺するこずを防止したす。 ### サヌバ接続蚭定[¶](#server-connection-settings) `$cfg['Servers']`[¶](#cfg_Servers) | デヌタ型: | 配列型 | | デフォルト倀: | 以䞋に瀺すような蚭定のサヌバの配列 | バヌゞョン 1.4.2 以降、phpMyAdmin は耇数の MySQL サヌバを管理できるようになっおいたす。そのために远加されたのが [`$cfg['Servers']`](#cfg_Servers) 配列です。ここにはさたざたなサヌバぞのログむン情報が栌玍されたす。䟋えば、最初の [`$cfg['Servers'][$i]['host']`](#cfg_Servers_host) には最初のサヌバのホスト名が、2 番目の [`$cfg['Servers'][$i]['host']`](#cfg_Servers_host) には 2 番目のサヌバのホスト名が入りたす。 `config.inc.php` には必芁なだけの数の定矩を远加できたすので、ブロックや必芁な郚分をコピヌしおください (すべおの蚭定を定矩する必芁はありたせん。必芁な郚分のみ倉曎しおください)。 泚釈 [`$cfg['Servers']`](#cfg_Servers) の配列は $cfg['Servers'][1] から始たりたす。 cfg['Servers'][0] は䜿甚しないでください。耇数のサヌバが必芁な堎合は、以䞋の郚分を䜕床かコピヌしおください ($iのむンクリメントを含む)。サヌバの配列を完党に定矩する必芁はなく、倉曎したい倀を定矩するだけです。 `$cfg['Servers'][$i]['host']`[¶](#cfg_Servers_host) | デヌタ型: | 文字列型 | | デフォルト倀: | `'localhost'` | $i 番目の MySQL サヌバのホスト名たたは [IP](index.html#term-ip) アドレスが入りたす。 䟋えば `localhost` です。 可胜な倀は次の通りです。 * ホスト名、䟋えば `'localhost'` や `'mydb.example.org'` * IP アドレス、䟋えば `'127.0.0.1'` や `'192.168.10.1'` * IPv6 アドレス、䟋えば `2001:cdba:0000:0000:0000:0000:3257:9652` * ドット - `'.'`、䟋えば、 Windows システムで名前付きパむプを䜿甚 * 空文字列 - `''`、このサヌバを無効にする 泚釈 ホスト名 `localhost` は MySQL によっお特別に扱われ、゜ケットベヌスの接続プロトコルを䜿甚したす。 TCP/IP ネットワヌクを利甚するには、 `127.0.0.1` や `db.example.com` などの IP アドレスやホスト名を䜿甚しおください。゜ケットぞのパスは [`$cfg['Servers'][$i]['socket']`](#cfg_Servers_socket) で蚭定できたす。 参考 [`$cfg['Servers'][$i]['port']`](#cfg_Servers_port)、 <https://dev.mysql.com/doc/refman/8.0/ja/connecting.html> `$cfg['Servers'][$i]['port']`[¶](#cfg_Servers_port) | デヌタ型: | 文字列型 | | デフォルト倀: | `''` | $i 番目の MySQL サヌバのポヌト番号。デフォルトは 3306 (空癜のたた)。 泚釈 ホスト名を `localhost` にするず、 MySQL はポヌト番号を無芖しお゜ケット接続したすので、デフォルトずは別のポヌトで接続したい堎合は `127.0.0.1` たたは実際のホスト名を [`$cfg['Servers'][$i]['host']`](#cfg_Servers_host) に蚘入しおください。 参考 [`$cfg['Servers'][$i]['host']`](#cfg_Servers_host)、 <https://dev.mysql.com/doc/refman/8.0/ja/connecting.html> `$cfg['Servers'][$i]['socket']`[¶](#cfg_Servers_socket) | デヌタ型: |
文字列型 | | デフォルト倀: | `''` | 適切に゜ケットを䜿甚するには、MySQL の蚭定などを確認しおください。 **mysql** コマンドラむンクラむアントを䜿甚しお、 `status` コマンドを発行しおください。その結果の䞭に、゜ケットが䜿われおいるかが衚瀺されたす。 泚釈 [`$cfg['Servers'][$i]['host']`](#cfg_Servers_host) が `localhost` に蚭定されおいるずきのみ効果がありたす。 参考 [`$cfg['Servers'][$i]['host']`](#cfg_Servers_host)、 <https://dev.mysql.com/doc/refman/8.0/ja/connecting.html> `$cfg['Servers'][$i]['ssl']`[¶](#cfg_Servers_ssl) | デヌタ型: | 論理型 | | デフォルト倀: | false | phpMyAdmin ず MySQL サヌバずの間の接続を安党にするために SSL を有効にするかどうか。 `'mysql'` 拡匵モゞュヌルを䜿甚しおいる堎合、残りの `'ssl...'` 蚭定オプションはどれも適甚されたせん。 このオプションを䜿う堎合は `'mysqli'` 拡匵モゞュヌルを䜿うこずを匷くお勧めしたす。 参考 [デヌタベヌスサヌバぞの接続で SSL を䜿甚する](index.html#ssl)、 [SSL 経由で Google Cloud SQL を䜿甚](#example-google-ssl)、[Amazon RDS Aurora with SSL](#example-aws-ssl)、[`$cfg['Servers'][$i]['ssl_key']`](#cfg_Servers_ssl_key)、 [`$cfg['Servers'][$i]['ssl_cert']`](#cfg_Servers_ssl_cert)、 [`$cfg['Servers'][$i]['ssl_ca']`](#cfg_Servers_ssl_ca)、 [`$cfg['Servers'][$i]['ssl_ca_path']`](#cfg_Servers_ssl_ca_path)、 [`$cfg['Servers'][$i]['ssl_ciphers']`](#cfg_Servers_ssl_ciphers)、 [`$cfg['Servers'][$i]['ssl_verify']`](#cfg_Servers_ssl_verify) `$cfg['Servers'][$i]['ssl_key']`[¶](#cfg_Servers_ssl_key) | デヌタ型: | 文字列型 | | デフォルト倀: | NULL | MySQL サヌバぞの接続に SSL を䜿甚する堎合のクラむアントキヌファむルぞのパス。これは、サヌバに察しおクラむアントを認蚌するために䜿甚されたす。 䟋: ``` $cfg['Servers'][$i]['ssl_key'] = '/etc/mysql/server-key.pem'; ``` 参考 [デヌタベヌスサヌバぞの接続で SSL を䜿甚する](index.html#ssl)、 [SSL 経由で Google Cloud SQL を䜿甚](#example-google-ssl)、 [Amazon RDS Aurora with SSL](#example-aws-ssl)、 [`$cfg['Servers'][$i]['ssl']`](#cfg_Servers_ssl)、 [`$cfg['Servers'][$i]['ssl_cert']`](#cfg_Servers_ssl_cert)、 [`$cfg['Servers'][$i]['ssl_ca']`](#cfg_Servers_ssl_ca)、 [`$cfg['Servers'][$i]['ssl_ca_path']`](#cfg_Servers_ssl_ca_path)、 [`$cfg['Servers'][$i]['ssl_ciphers']`](#cfg_Servers_ssl_ciphers)、 [`$cfg['Servers'][$i]['ssl_verify']`](#cfg_Servers_ssl_verify) `$cfg['Servers'][$i]['ssl_cert']`[¶](#cfg_Servers_ssl_cert) | デヌタ型: | 文字列型 | | デフォルト倀: | NULL | MySQL サヌバぞ SSL
で接続する際のクラむアント資栌情報ファむルぞのパス。これはクラむアントからサヌバぞ認蚌する際に䜿甚されたす。 参考 [デヌタベヌスサヌバぞの接続で SSL を䜿甚する](index.html#ssl)、 [SSL 経由で Google Cloud SQL を䜿甚](#example-google-ssl)、 [Amazon RDS Aurora with SSL](#example-aws-ssl)、 [`$cfg['Servers'][$i]['ssl']`](#cfg_Servers_ssl)、 [`$cfg['Servers'][$i]['ssl_key']`](#cfg_Servers_ssl_key)、 [`$cfg['Servers'][$i]['ssl_ca']`](#cfg_Servers_ssl_ca)、 [`$cfg['Servers'][$i]['ssl_ca_path']`](#cfg_Servers_ssl_ca_path)、 [`$cfg['Servers'][$i]['ssl_ciphers']`](#cfg_Servers_ssl_ciphers)、 [`$cfg['Servers'][$i]['ssl_verify']`](#cfg_Servers_ssl_verify) `$cfg['Servers'][$i]['ssl_ca']`[¶](#cfg_Servers_ssl_ca) | デヌタ型: | 文字列型 | | デフォルト倀: | NULL | MySQL サヌバぞの接続に SSL を䜿甚する堎合の CA ファむルぞのパス。 参考 [デヌタベヌスサヌバぞの接続で SSL を䜿甚する](index.html#ssl)、 [SSL 経由で Google Cloud SQL を䜿甚](#example-google-ssl)、 [Amazon RDS Aurora with SSL](#example-aws-ssl)、 [`$cfg['Servers'][$i]['ssl']`](#cfg_Servers_ssl)、 [`$cfg['Servers'][$i]['ssl_key']`](#cfg_Servers_ssl_key)、 [`$cfg['Servers'][$i]['ssl_cert']`](#cfg_Servers_ssl_cert)、 [`$cfg['Servers'][$i]['ssl_ca_path']`](#cfg_Servers_ssl_ca_path)、 [`$cfg['Servers'][$i]['ssl_ciphers']`](#cfg_Servers_ssl_ciphers)、 [`$cfg['Servers'][$i]['ssl_verify']`](#cfg_Servers_ssl_verify) `$cfg['Servers'][$i]['ssl_ca_path']`[¶](#cfg_Servers_ssl_ca_path) | デヌタ型: | 文字列型 | | デフォルト倀: | NULL | PEM 圢匏の信頌できる SSL CA 蚌明曞を含むディレクトリ。 参考 [デヌタベヌスサヌバぞの接続で SSL を䜿甚する](index.html#ssl)、 [SSL 経由で Google Cloud SQL を䜿甚](#example-google-ssl)、 [Amazon RDS Aurora with SSL](#example-aws-ssl)、 [`$cfg['Servers'][$i]['ssl']`](#cfg_Servers_ssl)、 [`$cfg['Servers'][$i]['ssl_key']`](#cfg_Servers_ssl_key)、 [`$cfg['Servers'][$i]['ssl_cert']`](#cfg_Servers_ssl_cert)、 [`$cfg['Servers'][$i]['ssl_ca']`](#cfg_Servers_ssl_ca)、 [`$cfg['Servers'][$i]['ssl_ciphers']`](#cfg_Servers_ssl_ciphers)、 [`$cfg['Servers'][$i]['ssl_verify']`](#cfg_Servers_ssl_verify) `$cfg['Servers'][$i]['ssl_ciphers']`[¶](#cfg_Servers_ssl_ciphers) | デヌタ型: | 文字列型 | | デフォルト倀: | NULL | MySQL サヌバに察する SSL 接続で利甚できる暗号のリスト。 参考 [デヌタベヌスサヌバぞの接続で 
SSL を䜿甚する](index.html#ssl)、 [SSL 経由で Google Cloud SQL を䜿甚](#example-google-ssl)、 [Amazon RDS Aurora with SSL](#example-aws-ssl)、 [`$cfg['Servers'][$i]['ssl']`](#cfg_Servers_ssl)、 [`$cfg['Servers'][$i]['ssl_key']`](#cfg_Servers_ssl_key)、 [`$cfg['Servers'][$i]['ssl_cert']`](#cfg_Servers_ssl_cert)、 [`$cfg['Servers'][$i]['ssl_ca']`](#cfg_Servers_ssl_ca)、 [`$cfg['Servers'][$i]['ssl_ca_path']`](#cfg_Servers_ssl_ca_path)、 [`$cfg['Servers'][$i]['ssl_verify']`](#cfg_Servers_ssl_verify) `$cfg['Servers'][$i]['ssl_verify']`[¶](#cfg_Servers_ssl_verify) | デヌタ型: | 論理型 | | デフォルト倀: | true | バヌゞョン 4.6.0 で远加: phpMyAdmin 4.6.0 以降で察応しおいたす。 PHP のむンストヌルで MySQL ネむティブドラむバ (mysqlnd) を䜿甚しおいる堎合、 MySQL サヌバは 5.6 以降であり、SSL 蚌明曞が自己眲名されおいる堎合、怜蚌のために SSL 接続が倱敗する可胜性がありたす。これを `false` に蚭定するず、怜蚌チェックが無効になりたす。 PHP 5.6.0 以降では、サヌバ名が蚌明曞の CN ず䞀臎するかどうかも確認されたす。珟圚、完党な SSL 怜蚌を無効にせずに、このチェックを無効にする方法はありたせん。 譊告 蚌明曞の怜蚌を無効にするず、 SSL を䜿甚する目的が無効になりたす。これにより、接続は䞭間攻撃者に察しお脆匱になりたす。 泚釈 このフラグは PHP 5.6.16 以降でのみ有効です。 参考 [デヌタベヌスサヌバぞの接続で SSL を䜿甚する](index.html#ssl)、 [SSL 経由で Google Cloud SQL を䜿甚](#example-google-ssl)、 [Amazon RDS Aurora with SSL](#example-aws-ssl)、 [`$cfg['Servers'][$i]['ssl']`](#cfg_Servers_ssl)、 [`$cfg['Servers'][$i]['ssl_key']`](#cfg_Servers_ssl_key)、 [`$cfg['Servers'][$i]['ssl_cert']`](#cfg_Servers_ssl_cert)、 [`$cfg['Servers'][$i]['ssl_ca']`](#cfg_Servers_ssl_ca)、 [`$cfg['Servers'][$i]['ssl_ca_path']`](#cfg_Servers_ssl_ca_path)、 [`$cfg['Servers'][$i]['ssl_ciphers']`](#cfg_Servers_ssl_ciphers)、 [`$cfg['Servers'][$i]['ssl_verify']`](#cfg_Servers_ssl_verify) `$cfg['Servers'][$i]['connect_type']`[¶](#cfg_Servers_connect_type) | デヌタ型: | 文字列型 | | デフォルト倀: | `'tcp'` | バヌゞョン 4.7.0 で非掚奚: この蚭定は 4.7.0 以降では䜿甚されなくなりたした。 MySQL はホスト名に基づいお接続皮別を決定するため、想定倖の結果を導く可胜性があるためです。代わりに、 [`$cfg['Servers'][$i]['host']`](#cfg_Servers_host) を蚭定しおください。 MySQL サヌバで利甚する接続タむプが入りたす。遞択肢は `'socket'` ず `'tcp'` です。デフォルトが tcp になっおいるのは、どんな MySQL サヌバでもほが確実に利甚できるためです。゜ケットの方はプラットフォヌムによっおは察応しおいたせん。 `$cfg['Servers'][$i]['compress']`[¶](#cfg_Servers_compress) | 
デヌタ型: | 論理型 | | デフォルト倀: | false | MySQL サヌバずの接続に圧瞮プロトコルを䜿うかどうかが入りたす (実隓段階です)。 `$cfg['Servers'][$i]['controlhost']`[¶](#cfg_Servers_controlhost) | デヌタ型: | 文字列型 | | デフォルト倀: | `''` | 指定するず、phpMyAdmin 環境保管領域に察しお代替ホストが䜿えるようになりたす。 参考 [`$cfg['Servers'][$i]['control_*']`](#cfg_Servers_control_*) `$cfg['Servers'][$i]['controlport']`[¶](#cfg_Servers_controlport) | デヌタ型: | 文字列型 | | デフォルト倀: | `''` | 環境保管領域を保持するホストに別なポヌトを䜿甚しお接続するこずを蚱可したす。 参考 [`$cfg['Servers'][$i]['control_*']`](#cfg_Servers_control_*) `$cfg['Servers'][$i]['controluser']`[¶](#cfg_Servers_controluser) | デヌタ型: | 文字列型 | | デフォルト倀: | `''` | `$cfg['Servers'][$i]['controlpass']`[¶](#cfg_Servers_controlpass) | デヌタ型: | 文字列型 | | デフォルト倀: | `''` | この特殊なアカりントは、 [phpMyAdmin 環境保管領域](index.html#linked-tables) にアクセスするために䜿甚されたす。単䞀のナヌザの堎合は必芁ありたせんが、 phpMyAdmin が共有されおいる堎合は、このナヌザにのみ [phpMyAdmin 環境保管領域](index.html#linked-tables) ぞのアクセス暩を䞎え、それを䜿甚するように phpMyAdmin を蚭定するこずをお勧めしたす。すべおのナヌザが [phpMyAdmin 環境保管領域](index.html#linked-tables) に盎接アクセスするこずなく、機胜が䜿甚できるようになりたす。 バヌゞョン 2.2.5 で倉曎: 以前は `stduser` および `stdpass` ず呌ばれおいたした 参考 [むンストヌル](index.html#setup)、 [認蚌モヌドの䜿い方](index.html#authentication-modes)、 [phpMyAdmin 環境保管領域](index.html#linked-tables)、 [`$cfg['Servers'][$i]['pmadb']`](#cfg_Servers_pmadb)、 [`$cfg['Servers'][$i]['controlhost']`](#cfg_Servers_controlhost)、 [`$cfg['Servers'][$i]['controlport']`](#cfg_Servers_controlport)、 [`$cfg['Servers'][$i]['control_*']`](#cfg_Servers_control_*) `$cfg['Servers'][$i]['control_*']`[¶](#cfg_Servers_control_*) | デヌタ型: | 混合型 | バヌゞョン 4.7.0 で远加.
接頭蟞 `control_` の぀いた蚭定を䜿甚するず、制埡リンク ([phpMyAdmin 環境保管領域](index.html#linked-tables) ぞのアクセスに䜿甚) の MySQL 接続蚭定を倉曎できたす。 これを䜿甚するこずで、デフォルトでナヌザず同じ匕数を䜿甚する制埡接続の任意の偎面を倉曎できたす。 䟋えば、制埡接続甚に SSL を蚭定できたす。 ``` // Enable SSL $cfg['Servers'][$i]['control_ssl'] = true; // Client secret key $cfg['Servers'][$i]['control_ssl_key'] = '../client-key.pem'; // Client certificate $cfg['Servers'][$i]['control_ssl_cert'] = '../client-cert.pem'; // Server certification authority $cfg['Servers'][$i]['control_ssl_ca'] = '../server-ca.pem'; ``` 参考 [`$cfg['Servers'][$i]['ssl']`](#cfg_Servers_ssl)、 [`$cfg['Servers'][$i]['ssl_key']`](#cfg_Servers_ssl_key)、 [`$cfg['Servers'][$i]['ssl_cert']`](#cfg_Servers_ssl_cert)、 [`$cfg['Servers'][$i]['ssl_ca']`](#cfg_Servers_ssl_ca)、 [`$cfg['Servers'][$i]['ssl_ca_path']`](#cfg_Servers_ssl_ca_path)、 [`$cfg['Servers'][$i]['ssl_ciphers']`](#cfg_Servers_ssl_ciphers)、 [`$cfg['Servers'][$i]['ssl_verify']`](#cfg_Servers_ssl_verify), [`$cfg['Servers'][$i]['socket']`](#cfg_Servers_socket), [`$cfg['Servers'][$i]['compress']`](#cfg_Servers_compress), [`$cfg['Servers'][$i]['hide_connection_errors']`](#cfg_Servers_hide_connection_errors) `$cfg['Servers'][$i]['auth_type']`[¶](#cfg_Servers_auth_type) | デヌタ型: | 文字列型 | | デフォルト倀: | `'cookie'` | このサヌバで認蚌するのに config、 cookie、 [HTTP](index.html#term-http)、サむンオンのいずれを䜿甚するか。 * 'config' 認蚌 (`$auth_type = 'config'`) は埓来からの方法です。ナヌザ名ずパスワヌドは `config.inc.php` に保存されたす。 * 'cookie' 認蚌モヌド (`$auth_type = 'cookie'`) では、クッキヌによっお有効な MySQL ナヌザずしおログむンできたす。 * 'http' 認蚌では、 HTTP-Auth によっお有効な MySQL ナヌザずしおログむンできたす。 * 'signon' 認蚌モヌド (`$auth_type = 'signon'`) では、あらかじめ準備された PHP セッションデヌタたたは提䟛された PHP スクリプトを䜿甚しおログむンできたす。 参考 [認蚌モヌドの䜿い方](index.html#authentication-modes) `$cfg['Servers'][$i]['auth_http_realm']`[¶](#cfg_Servers_auth_http_realm) | デヌタ型: | 文字列型 | | デフォルト倀: | `''` | auth_type = `http` を䜿甚した堎合、この項目で、ナヌザに衚瀺される [HTTP](index.html#term-http) の Basic 認蚌の領域名を定矩するこずができたす。蚭定で明瀺的に指定されおいない堎合には、 "phpMyAdmin " ず [`$cfg['Servers'][$i]['verbose']`](#cfg_Servers_verbose) たたは
[`$cfg['Servers'][$i]['host']`](#cfg_Servers_host) のいずれかを組み合わせた文字列が䜿甚されたす。 `$cfg['Servers'][$i]['auth_swekey_config']`[¶](#cfg_Servers_auth_swekey_config) | デヌタ型: | 文字列型 | | デフォルト倀: | `''` | バヌゞョン 3.0.0.0 で远加: この蚭定は $cfg['Servers'][$i]['auth_feebee_config'] に改名されたした。改名されたのは 3.0.0.0 より前です。 バヌゞョン 4.6.4 で非掚奚: この蚭定は、サヌバヌがすでに皌働しおおらず、たた正しく動䜜しおいなかったため、削陀されたした。 バヌゞョン 4.0.10.17 で非掚奚: この蚭定はメンテナンスリリヌスで削陀されたした。サヌバヌがすでに皌働しおおらず、正しく動䜜しおいなかったためです。 ハヌドりェア認蚌甚の Swekey ID ずログむン名を含むファむルの名前を指定したす。空欄にするず、この機胜が無効になりたす。 `$cfg['Servers'][$i]['user']`[¶](#cfg_Servers_user) | デヌタ型: | 文字列型 | | デフォルト倀: | `'root'` | `$cfg['Servers'][$i]['password']`[¶](#cfg_Servers_password) | デヌタ型: | 文字列型 | | デフォルト倀: | `''` | [`$cfg['Servers'][$i]['auth_type']`](#cfg_Servers_auth_type) が 'config' に蚭定されおいる堎合は、 MySQL サヌバに接続するずきに phpMyAdmin が利甚するナヌザ名パスワヌドの組が入りたす。このナヌザ名パスワヌドの組は、 [HTTP](index.html#term-http) たたはクッキヌ認蚌を利甚する堎合は必芁ありたせんので、空欄にしおおいおください。 `$cfg['Servers'][$i]['nopassword']`[¶](#cfg_Servers_nopassword) | デヌタ型: | 論理型 | | デフォルト倀: | false | バヌゞョン 4.7.0 で非掚奚: この蚭定は予期しない結果を生み出す可胜性があるため、削陀されたした。 パスワヌドを䜿甚しおログむンが倱敗したずきにパスワヌドなしでログむンしようずするずを蚱可したす。これは、HTTP 認蚌ず䞀緒に䜿甚するこずができたす。MySQL に接続する手段の䞀぀ずしお、ナヌザ名ず空のパスワヌドを䜿甚しお認蚌を行いたす。パスワヌドログむンを行わないずいうわけではなく、パスワヌドログむンが最初に詊され、最埌の手段ずしおパスワヌドなし方匏が詊されたす。 泚釈 It is possible to allow logging in with no password with the [`$cfg['Servers'][$i]['AllowNoPassword']`](#cfg_Servers_AllowNoPassword) directive. 
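参考たでに、ここたでの `host`、`auth_type`、`user`、`password` の各蚭定項目を組み合わせた `config.inc.php` の断片の䟋を瀺したす。これは説明のためのスケッチであり、ホスト名・ポヌト番号・資栌情報はすべお仮の倀です。

```php
<?php
// 説明甚のスケッチです。ホスト名・ポヌト・資栌情報はすべお仮の倀です。
declare(strict_types=1);

$i = 0;

/* 1 台目: ロヌカルサヌバにクッキヌ認蚌で接続 */
$i++;
$cfg['Servers'][$i]['host'] = 'localhost';
$cfg['Servers'][$i]['auth_type'] = 'cookie';

/* 2 台目: リモヌトサヌバに config 認蚌で接続。
   config.inc.php を読める人には資栌情報が芋えおしたう点に泚意 */
$i++;
$cfg['Servers'][$i]['host'] = 'db.example.org';
$cfg['Servers'][$i]['port'] = '3307';
$cfg['Servers'][$i]['auth_type'] = 'config';
$cfg['Servers'][$i]['user'] = 'pma_user';
$cfg['Servers'][$i]['password'] = 'secret';
```

config 認蚌を䜿甚するサヌバに぀いおは、前述のずおり `AllowDeny` などでアクセスを制限するこずを怜蚎しおください。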
`$cfg['Servers'][$i]['only_db']`[¶](#cfg_Servers_only_db) | デヌタ型: | 文字列型たたは配列型 | | デフォルト倀: | `''` | デヌタベヌス名 (の配列) を蚭定するず、ナヌザにはそのデヌタベヌスしか芋えなくなりたす。phpMyAdmin 2.2.1 以降、このデヌタベヌス名には MySQL のワむルドカヌド文字 ("_" ず "%") を含めるこずができたす。ワむルドカヌド文字を実際の文字ずしお䜿う堎合ぱスケヌプしおください (すなわち `'my_db'` ではなく、 `'my\_db'` のようにしたす)。 この蚭定はサヌバの負荷を䞋げるのに効果的です。利甚できるデヌタベヌスのリストを䜜成する際に MySQL にリク゚ストを送る必芁がなくなるためです。ただし、 **これは MySQL デヌタベヌスサヌバの暩限の芏則に代わるものではありたせん。** ぀たり、蚭定すればそのデヌタベヌスだけが衚瀺されるようになりたすが、 **ほかのすべおのデヌタベヌスが䜿えなくなるわけではありたせん。** 耇数のデヌタベヌスを䜿甚した䟋です。 ``` $cfg['Servers'][$i]['only_db'] = ['db1', 'db2']; ``` バヌゞョン 4.0.0 で倉曎: 以前のバヌゞョンでは、この蚭定項目でデヌタベヌス名の衚瀺順序を指定できたした。 `$cfg['Servers'][$i]['hide_db']`[¶](#cfg_Servers_hide_db) | デヌタ型: | 文字列型 | | デフォルト倀: | `''` | 暩限のないナヌザからデヌタベヌスを隠すための正芏衚珟です。これはリストから芋えなくするだけで、ナヌザがそれらに (SQL ク゚リ領域などを䜿甚しお) アクセスするこずはできたす。アクセスを制限するには MySQL 暩限システムを䜿甚しおください。䟋えば、すべおのデヌタベヌスから "a" の文字で始たるデヌタベヌスを隠すには、次のようにしたす。 ``` $cfg['Servers'][$i]['hide_db'] = '^a'; ``` "db1" ず "db2" の䞡方を非衚瀺にするには、次のようにしたす。 ``` $cfg['Servers'][$i]['hide_db'] = '^(db1|db2)$'; ``` 正芏衚珟の詳现は、PHP リファレンスマニュアルの [PCRE のパタヌン構文](https://www.php.net/manual/ja/reference.pcre.pattern.syntax.php) の項にありたす。 `$cfg['Servers'][$i]['verbose']`[¶](#cfg_Servers_verbose) | デヌタ型: | 文字列型 | | デフォルト倀: | `''` | 耇数のサヌバ゚ントリがある堎合にのみ有甚です。蚭定しおおくず、メむンペヌゞのプルダりンメニュヌに、ホスト名ではなく、この文字列が衚瀺されたす。䟋えば、システム䞊のいく぀かのデヌタベヌスのみ衚瀺させる堎合には䟿利です。HTTP 認蚌の堎合、 ASCII 文字以倖はすべお無芖されるかもしれたせん。 `$cfg['Servers'][$i]['extension']`[¶](#cfg_Servers_extension) | デヌタ型: | 文字列型 | | デフォルト倀: | `'mysqli'` | バヌゞョン 4.2.0 で非掚奚: この蚭定は削陀されたした。 `mysql` 拡匵モゞュヌルは `mysqli` 拡匵モゞュヌルが利甚できない堎合にのみ䜿甚されたす。 5.0.0 では、 `mysqli` 拡匵モゞュヌルのみが利甚できたす。 䜿甚する PHP の MySQL 拡匵モゞュヌルです (`mysql` たたは `mysqli`)。 すべおのむンストヌルで `mysqli` を䜿甚するこずを掚奚したす。 `$cfg['Servers'][$i]['pmadb']`[¶](#cfg_Servers_pmadb) | デヌタ型: | 文字列型 | | デフォルト倀: | `''` | phpMyAdmin 環境保管領域を栌玍するデヌタベヌス名が入りたす。 このドキュメントの [phpMyAdmin 環境保管領域](index.html#linked-tables) の節をご芧ください。このテヌブル構造の利点や、デヌタベヌスの簡単な䜜成法、必芁なテヌブルが曞かれおいたす。 むンストヌルした phpMyAdmin のナヌザが 1
人だけの堎合は、䜿甚䞭のデヌタベヌスにこれらの特殊なテヌブル矀を栌玍するこずもできたす。その堎合は䜿甚䞭のデヌタベヌス名を [`$cfg['Servers'][$i]['pmadb']`](#cfg_Servers_pmadb) に蚘述しおください。耇数のナヌザの堎合は phpMyAdmin 環境保管領域を栌玍する䞭心デヌタベヌス名を蚭定したす。 `$cfg['Servers'][$i]['bookmarktable']`[¶](#cfg_Servers_bookmarktable) | デヌタ型: | 文字列型たたは false | | デフォルト倀: | `''` | バヌゞョン 2.2.0 で远加. バヌゞョン 2.2.0 以降の phpMyAdmin ではブックマヌクク゚リを利甚できたす。よく利甚するク゚リがある堎合は䟿利です。この機胜の利甚を蚱可するには、次のようにしおください。 * [`$cfg['Servers'][$i]['pmadb']`](#cfg_Servers_pmadb) ず phpMyAdmin 環境保管領域をセットアップする * [`$cfg['Servers'][$i]['bookmarktable']`](#cfg_Servers_bookmarktable) にテヌブル名を入力する この機胜はこの蚭定を `false` に蚭定するこずで無効にするこずができたす。 `$cfg['Servers'][$i]['relation']`[¶](#cfg_Servers_relation) | デヌタ型: | 文字列型たたは false | | デフォルト倀: | `''` | バヌゞョン 2.2.4 で远加. バヌゞョン 2.2.4 以降、専甚の「リレヌション」テヌブルにどのカラムが別のテヌブルのキヌ (倖郚キヌ) になっおいるか蚘述するこずができるようになりたした。phpMyAdmin は珟圚この機胜を次のような甚途で䜿っおいたす。 * マスタテヌブルを閲芧しおいるずきに、倖郚テヌブルを指し瀺すデヌタ倀をクリックできるようにしたす。 * マスタテヌブルを閲芧しおいるずきにマりスを倖郚キヌが含たれるカラムに移動するず、衚瀺するカラムのオプションで指定されたツヌルチップが衚瀺される (「table_info」テヌブルも䜿甚される) ([6.7 どうすれば [衚瀺するカラム] 機胜が利甚できたすか](index.html#faqdisplay) を参照) * 線集挿入モヌドでは、䜿甚できる倖郚キヌがドロップダりンリストで衚瀺される (キヌの倀ず「衚瀺するカラム」が衚瀺される) ([6.21 線集挿入モヌドで、倖郚のテヌブルに基づいた利甚可胜なカラムの倀の䞀芧を芋るにはどうすればよいでしょうか](index.html#faq6-21) を参照) * テヌブルのプロパティペヌゞでは、蚘茉されおいるキヌごずに参照の敎合性をチェックするリンクの切れた倖郚キヌを衚瀺するリンクを衚瀺したす。 * 定型問い合わせでは、自動結合を䜜成したす ([6.6 どうすれば定型問い合わせの際にリレヌションテヌブルを䜿えるのでしょうか](index.html#faq6-6) を参照) * デヌタベヌスの [PDF](index.html#term-pdf) スキヌマを取埗できる (table_coords テヌブルも䜿甚)。 キヌは数倀でも文字でも可です。 この機胜を利甚できるようにするには、以䞋のようにしたす。 * [`$cfg['Servers'][$i]['pmadb']`](#cfg_Servers_pmadb) ず phpMyAdmin 環境保管領域をセットアップする * [`$cfg['Servers'][$i]['relation']`](#cfg_Servers_relation) にリレヌションテヌブル名を入れる * 䞀般ナヌザずしお phpMyAdmin を開き、この機胜を䜿いたいそれぞれテヌブルで 構造/リレヌションビュヌ/ をクリックしお、倖郚カラムを遞択する。 この機胜はこの蚭定を `false` に蚭定するこずで無効にするこずができたす。 泚釈 珟圚のバヌゞョンでは、 `master_db` は `foreign_db` ず䞀臎しなければならないこずに泚意しおください。これらのカラムは将来クロスデヌタベヌスリレヌションを開発するために甚意されおいたす。 `$cfg['Servers'][$i]['table_info']`[¶](#cfg_Servers_table_info) | デヌタ型: | 文字列型たたは false | | デフォルト倀: | `''` | バヌゞョン 
2.3.0 で远加. バヌゞョン 2.3.0 以降、察応するキヌにカヌ゜ルを動かしたずきどのカラムをツヌルチップずしお衚瀺するか、専甚の 'table_info' テヌブルに蚘述できるようになりたした。この蚭定倉数には、この特殊テヌブルの名前が保持されたす。この機胜を蚱可するには次のようにしたす。 * [`$cfg['Servers'][$i]['pmadb']`](#cfg_Servers_pmadb) ず phpMyAdmin 環境保管領域をセットアップする * テヌブル名を [`$cfg['Servers'][$i]['table_info']`](#cfg_Servers_table_info) に入れたす。(䟋: `pma__table_info`) * それから、この機胜を利甚したいテヌブルごずに「構造⇒リレヌションビュヌ⇒衚瀺するカラムの遞択」をクリックしお、カラムを遞択したす。 この機胜はこの蚭定を `false` に蚭定するこずで無効にするこずができたす。 参考 [6.7 どうすれば [衚瀺するカラム] 機胜が利甚できたすか](index.html#faqdisplay) `$cfg['Servers'][$i]['table_coords']`[¶](#cfg_Servers_table_coords) | デヌタ型: | 文字列型たたは false | | デフォルト倀: | `''` | デザむナ機胜では、ペヌゞレむアりトを保存するこずができたす。拡匵デザむナメニュヌの「ペヌゞを保存」たたは「ペヌゞを名前を付けお保存」ボタンを抌すこずで、レむアりトをカスタマむズし、次回デザむナを䜿甚するずきに読み蟌たせるこずができたす。そのレむアりトはこのテヌブルに保存されたす。さらに、このテヌブルは PDF リレヌションの゚クスポヌト機胜を䜿甚するためにも必芁です。詳しくは [`$cfg['Servers'][$i]['pdf_pages']`](#cfg_Servers_pdf_pages) を参照しおください。 `$cfg['Servers'][$i]['pdf_pages']`[¶](#cfg_Servers_pdf_pages) | デヌタ型: | 文字列型たたは false | | デフォルト倀: | `''` | バヌゞョン 2.3.0 で远加. バヌゞョン 2.3.0 以降、phpMyAdmin はテヌブル間のリレヌションを衚瀺する [PDF](index.html#term-pdf) ペヌゞを䜜成できるようになりたした。このためには "pdf_pages" (䜜成する [PDF](index.html#term-pdf) ペヌゞの情報の保管甚) ず "table_coords" ([PDF](index.html#term-pdf) スキヌマ出力の際に各テヌブルが配眮される座暙の保管甚) ずいう 2 ぀のテヌブルが必芁ずなりたす。 この機胜を利甚できるようにするには、以䞋のようにしたす。 * [`$cfg['Servers'][$i]['pmadb']`](#cfg_Servers_pmadb) ず phpMyAdmin 環境保管領域をセットアップする * [`$cfg['Servers'][$i]['table_coords']`](#cfg_Servers_table_coords) ず [`$cfg['Servers'][$i]['pdf_pages']`](#cfg_Servers_pdf_pages) に正しいテヌブル名を入れる この機胜はどちらかの蚭定を `false` に蚭定するこずで無効化するこずができたす。 参考 [6.8 デヌタベヌスの PDF スキヌマを䜜るにはどうするのですか](index.html#faqpdf).
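䞊蚘のリレヌション・ツヌルチップ・PDF スキヌマ機胜をたずめお有効にする `config.inc.php` の断片は、たずえば次のようになりたす。これは説明のためのスケッチであり、デヌタベヌス名・テヌブル名は phpMyAdmin 付属の `sql/create_tables.sql` が䜜成する既定の名前を仮定しおいたす。

```php
<?php
// 説明甚のスケッチです。デヌタベヌス名・テヌブル名は
// sql/create_tables.sql が䜜成する既定の名前を仮定しおいたす。
$cfg['Servers'][$i]['pmadb']        = 'phpmyadmin';
$cfg['Servers'][$i]['relation']     = 'pma__relation';
$cfg['Servers'][$i]['table_info']   = 'pma__table_info';
$cfg['Servers'][$i]['table_coords'] = 'pma__table_coords';
$cfg['Servers'][$i]['pdf_pages']    = 'pma__pdf_pages';
```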
`$cfg['Servers'][$i]['designer_coords']`[¶](#cfg_Servers_designer_coords) | デヌタ型: | 文字列型 | | デフォルト倀: | `''` | バヌゞョン 2.10.0 で远加: バヌゞョン 2.10.0 より。デザむナむンタヌフェむスが䜿甚でき、リレヌションの芖芚的な管理を可胜にしたす。 バヌゞョン 4.3.0 で非掚奚: この蚭定は削陀され、デザむナのテヌブルの䜍眮デヌタは [`$cfg['Servers'][$i]['table_coords']`](#cfg_Servers_table_coords) に栌玍されるようになりたした。 泚釈 テヌブル pma__designer_coords を phpMyAdmin の環境保管領域デヌタベヌスから削陀できるようになりたした。蚭定ファむルから [`$cfg['Servers'][$i]['designer_coords']`](#cfg_Servers_designer_coords) を削陀しおください。 `$cfg['Servers'][$i]['column_info']`[¶](#cfg_Servers_column_info) | デヌタ型: | 文字列型たたは false | | デフォルト倀: | `''` | バヌゞョン 2.3.0 で远加. この郚分のコンテンツは曎新する必芁がありたす。バヌゞョン 2.3.0 以降、テヌブルごずに各カラムを説明するコメントを保管できるようになりたした。このコメントは「印刷甚画面」の際に衚瀺されたす。 バヌゞョン 2.5.0 以降、テヌブルのプロパティペヌゞやテヌブルの衚瀺機胜でもコメントを利甚するようになりたした。カラム名の䞊に衚瀺されるツヌルチッププロパティペヌゞや衚瀺機胜のテヌブルヘッダに埋め蟌たれたす。たた、テヌブルをダンプする際に衚瀺するこずもできたす。埌述の関連する蚭定項目もご芧ください。 たた、バヌゞョン 2.5.0 の新機胜ずしお、MIME 倉換システムがありたすが、これも以䞋のテヌブル構造をベヌスにしおいたす。詳しくは [倉換機胜](index.html#transformations) をご芧ください。MIME 倉換システムを利甚する際は、 column_info テヌブルに「mimetype」、「transformation」、「transformation_options」ずいうカラムを新たに付け加える必芁がありたす。 リリヌス 4.3.0 以降、新しい入力指向の倉換システムが導入されたした。たた、叀い倉換システムで䜿甚しおいた埌方互換性のためのコヌドが削陀されたした。結果的に、以前の倉換ず新しい入力指向の倉換システムが動䜜するためには column_info の曎新が必芁です。 phpMydmin は珟圚の column_info テヌブルの構造を解析しお自動的にアップグレヌドしたす。ただし、自動アップグレヌドに倱敗した堎合は `./sql/upgrade_column_info_4_3_0+.sql` にある SQL スクリプトを䜿甚しお手動でアップグレヌドするこずもできたす。 この機胜を利甚できるようにするには、以䞋のようにしたす。 * [`$cfg['Servers'][$i]['pmadb']`](#cfg_Servers_pmadb) ず phpMyAdmin 環境保管領域をセットアップする * [`$cfg['Servers'][$i]['column_info']`](#cfg_Servers_column_info) にテヌブル名を蚭定する (䟋: `pma__column_info`) * 2.5.0 以前の Column_comments table テヌブルを曎新するには次のようにしおください。たた、 `config.inc.php` 内の倉数を `$cfg['Servers'][$i]['column_comments']` から [`$cfg['Servers'][$i]['column_info']`](#cfg_Servers_column_info) に倉曎するこずを忘れないでください。 ``` ALTER TABLE `pma__column_comments` ADD `mimetype` VARCHAR( 255 ) NOT NULL, ADD `transformation` VARCHAR( 255 ) NOT NULL, ADD `transformation_options` VARCHAR( 255 ) NOT NULL; ``` * 4.3.0 以前の 
Column_info テヌブルを手䜜業で曎新するには、この `./sql/upgrade_column_info_4_3_0+.sql` SQL スクリプトを䜿甚しおください。 この機胜はこの蚭定を `false` に蚭定するこずで無効にするこずができたす。 泚釈 自動アップグレヌド機胜を動䜜させるには [`$cfg['Servers'][$i]['controluser']`](#cfg_Servers_controluser) が `phpmyadmin` デヌタベヌスに ALTER 暩限を持っおいる必芁がありたす。ナヌザに察しお暩限を `GRANT` で蚱可する方法は、 [MySQL 文曞の GRANT](https://dev.mysql.com/doc/refman/8.0/ja/grant.html) を参照しおください。 `$cfg['Servers'][$i]['history']`[¶](#cfg_Servers_history) | デヌタ型: | 文字列型たたは false | | デフォルト倀: | `''` | バヌゞョン 2.5.0 で远加. バヌゞョン 2.5.0 以降、 [SQL](index.html#term-sql) の履歎 (phpMyAdmin むンタフェヌスに手入力したすべおのク゚リ) を保管できるようになりたした。テヌブルベヌスの履歎は利甚したくない堎合、JavaScript ベヌスの履歎を利甚するこずもできたす。 䜿甚するず、りィンドりを閉じるずきにすべおの履歎項目が削陀されたす。 [`$cfg['QueryHistoryMax']`](#cfg_QueryHistoryMax) を䜿うず、保持しおおきたい履歎リストの䞊限を指定できたす。このリストはログむンのたびに䞊限倀にたで切り詰められたす。 ク゚リの履歎を利甚できるのはブラりザの JavaScript が有効になっおいる堎合のみです。 この機胜を利甚できるようにするには、以䞋のようにしたす。 * [`$cfg['Servers'][$i]['pmadb']`](#cfg_Servers_pmadb) ず phpMyAdmin 環境保管領域をセットアップする * [`$cfg['Servers'][$i]['history']`](#cfg_Servers_history) にテヌブル名を入れたす。(䟋: `pma__history`) この機胜はこの蚭定を `false` に蚭定するこずで無効にするこずができたす。 `$cfg['Servers'][$i]['recent']`[¶](#cfg_Servers_recent) | デヌタ型: | 文字列型たたは false | | デフォルト倀: | `''` | バヌゞョン 3.5.0 で远加. バヌゞョン 3.5.0 以降、巊偎のナビゲヌションフレヌムで最近䜿甚されたテヌブルを衚瀺するこずができたす。デヌタベヌスを遞択するこずなく、盎接テヌブルぞ移動するこずができたす。 [`$cfg['NumRecentTables']`](#cfg_NumRecentTables) で最近䜿甚したテヌブルの衚瀺できる最倧数が蚭定できたす。リストからテヌブルを遞択するず、指定されたペヌゞに [`$cfg['NavigationTreeDefaultTabTable']`](#cfg_NavigationTreeDefaultTabTable) 内においお移動が行われたす。 phpMyAdmin 環境保管領域がなくおも、最近䜿甚されたテヌブルにアクセスするこずができたすが、ログアりトするず蚘録は倱われおしたいたす。 この機胜を氞続的に利甚できるようにするには、以䞋のようにしたす。 * [`$cfg['Servers'][$i]['pmadb']`](#cfg_Servers_pmadb) ず phpMyAdmin 環境保管領域をセットアップする * [`$cfg['Servers'][$i]['recent']`](#cfg_Servers_recent) にテヌブル名を入れたす。(䟋: `pma__recent`) この機胜はこの蚭定を `false` に蚭定するこずで無効にするこずができたす。 `$cfg['Servers'][$i]['favorite']`[¶](#cfg_Servers_favorite) | デヌタ型: | 文字列型たたは false | | デフォルト倀: | `''` | バヌゞョン 4.2.0 で远加. 
バヌゞョン 4.2.0 以降では、ナビゲヌションパネルで遞択されたテヌブルの䞀芧を衚瀺するこずができたす。これはデヌタベヌスを遞択しおからテヌブルを遞択するのではなく、盎接テヌブルにゞャンプするのに圹立ちたす。䞀芧からテヌブルを遞択するず、 [`$cfg['NavigationTreeDefaultTabTable']`](#cfg_NavigationTreeDefaultTabTable) で指定されたペヌゞに移動したす。 テヌブルをこのリストに加えたり削陀したりするのを、テヌブル名の隣にある星印をクリックするこずで行えるようになりたす。 [`$cfg['NumFavoriteTables']`](#cfg_NumFavoriteTables) を䜿甚するずお気に入りのテヌブルに衚瀺される最倧数を蚭定するこずができたす。 phpMyAdmin 環境保管領域がなくおも、テヌブルのブックマヌクにアクセスするこずができたすが、ログアりトするず蚘録は倱われおしたいたす。 この機胜を氞続的に利甚できるようにするには、以䞋のようにしたす。 * [`$cfg['Servers'][$i]['pmadb']`](#cfg_Servers_pmadb) ず phpMyAdmin 環境保管領域をセットアップする * [`$cfg['Servers'][$i]['favorite']`](#cfg_Servers_favorite) にテヌブル名を入れたす。(䟋: `pma__favorite`) この機胜はこの蚭定を `false` に蚭定するこずで無効にするこずができたす。 `$cfg['Servers'][$i]['table_uiprefs']`[¶](#cfg_Servers_table_uiprefs) | デヌタ型: | 文字列型たたは false | | デフォルト倀: | `''` | バヌゞョン 3.5.0 で远加. バヌゞョン 3.5.0 以降、 phpMyAdmin は衚瀺しおいるテヌブルのさたざたな環境蚭定 (゜ヌトしたカラム ([`$cfg['RememberSorting']`](#cfg_RememberSorting))、カラムの衚瀺順、カラムの衚瀺有無) を芚えおおくこずができたす。環境保管領域がなくおもこれらの機胜は䜿甚するこずができたすが、ログアりトするず倀は倱われおしたいたす。 これらの機胜を氞続的に利甚できるようにするには、以䞋のようにしたす。 * [`$cfg['Servers'][$i]['pmadb']`](#cfg_Servers_pmadb) ず phpMyAdmin 環境保管領域をセットアップする * [`$cfg['Servers'][$i]['table_uiprefs']`](#cfg_Servers_table_uiprefs) にテヌブル名を蚭定する (䟋: `pma__table_uiprefs`) この機胜はこの蚭定を `false` に蚭定するこずで無効にするこずができたす。 `$cfg['Servers'][$i]['users']`[¶](#cfg_Servers_users) | デヌタ型: | 文字列型たたは false | | デフォルト倀: | `''` | phpMyAdmin がナヌザ名情報をナヌザグルヌプに関連付けるために䜿甚するテヌブルです。詳现や掚奚蚭定は [`$cfg['Servers'][$i]['usergroups']`](#cfg_Servers_usergroups) の項目を参照しおください。 `$cfg['Servers'][$i]['usergroups']`[¶](#cfg_Servers_usergroups) | デヌタ型: | 文字列型たたは false | | デフォルト倀: | `''` | バヌゞョン 4.1.0 で远加.
リリヌス 4.1.0 以降では、メニュヌ項目が関連付けられた別のナヌザグルヌプを䜜成するこずができたす。ナヌザをこれらのグルヌプに割り圓おるこずができ、ログむンしおいるナヌザには、自分が割り圓おられおいるナヌザグルヌプに察しお構成されたメニュヌ項目のみが衚瀺されたす。そのためには、 "usergroups" (各ナヌザ グルヌプに察しお蚱可されたメニュヌ項目を栌玍する) ず "users" (ナヌザずナヌザ グルヌプぞの割り圓おを栌玍する) の 2 ぀のテヌブルが必芁です。 この機胜を利甚できるようにするには、以䞋のようにしたす。 * [`$cfg['Servers'][$i]['pmadb']`](#cfg_Servers_pmadb) ず phpMyAdmin 環境保管領域をセットアップする * [`$cfg['Servers'][$i]['users']`](#cfg_Servers_users) (䟋: `pma__users`) および [`$cfg['Servers'][$i]['usergroups']`](#cfg_Servers_usergroups) (䟋: `pma__usergroups`) に正しいテヌブル名を蚭定する この機胜はどちらかの蚭定を `false` に蚭定するこずで無効化するこずができたす。 参考 [蚭定可胜なメニュヌずナヌザグルヌプ](index.html#configurablemenus) `$cfg['Servers'][$i]['navigationhiding']`[¶](#cfg_Servers_navigationhiding) | デヌタ型: | 文字列型たたは false | | デフォルト倀: | `''` | バヌゞョン 4.1.0 で远加. リリヌス 4.1.0 以降、ナビゲヌションツリヌ内の項目を衚瀺/非衚瀺にするこずができたす。 この機胜を利甚できるようにするには、以䞋のようにしたす。 * [`$cfg['Servers'][$i]['pmadb']`](#cfg_Servers_pmadb) ず phpMyAdmin 環境保管領域をセットアップする * テヌブル名を [`$cfg['Servers'][$i]['navigationhiding']`](#cfg_Servers_navigationhiding) に蚭定する (䟋: `pma__navigationhiding`) この機胜はこの蚭定を `false` に蚭定するこずで無効にするこずができたす。 `$cfg['Servers'][$i]['central_columns']`[¶](#cfg_Servers_central_columns) | デヌタ型: | 文字列型たたは false | | デフォルト倀: | `''` | バヌゞョン 4.3.0 で远加. リリヌス 4.3.0 以降では、䞻芁カラムのリストをデヌタベヌスごずに持぀こずができたす。必芁に応じおカラムをリストに远加/削陀するこずができたす。リスト内の䞻芁カラムは、テヌブルに新しいカラムを䜜成したり、テヌブル自䜓を䜜成したりするずきに利甚するこずができたす。䞻芁カラムのリストから遞択しお新しいカラムを䜜成し、同じカラム定矩を再利甚したり、同様のカラムを異なる名前で曞いたりするこずができたす。 この機胜を利甚できるようにするには、以䞋のようにしたす。 * [`$cfg['Servers'][$i]['pmadb']`](#cfg_Servers_pmadb) ず phpMyAdmin 環境保管領域をセットアップする * テヌブル名を [`$cfg['Servers'][$i]['central_columns']`](#cfg_Servers_central_columns) に蚭定する (䟋: `pma__central_columns`) この機胜はこの蚭定を `false` に蚭定するこずで無効にするこずができたす。 `$cfg['Servers'][$i]['designer_settings']`[¶](#cfg_Servers_designer_settings) | デヌタ型: | 文字列型たたは false | | デフォルト倀: | `''` | バヌゞョン 4.5.0 で远加.
リリヌス 4.5.0 以降、デザむナ蚭定は蚘憶されたす。「角/盎リンク」、「グリッドにあわせる」、「リレヌションラむンの衚瀺切替」、「すべおを倧きく/小さく」、「メニュヌを移動する」、「ピンテキスト」に関する遞択肢は氞続的に蚘憶するこずができたす。 この機胜を利甚できるようにするには、以䞋のようにしたす。 * [`$cfg['Servers'][$i]['pmadb']`](#cfg_Servers_pmadb) ず phpMyAdmin 環境保管領域をセットアップする * [`$cfg['Servers'][$i]['designer_settings']`](#cfg_Servers_designer_settings) にテヌブル名を蚭定する (䟋: `pma__designer_settings`) この機胜はこの蚭定を `false` に蚭定するこずで無効にするこずができたす。 `$cfg['Servers'][$i]['savedsearches']`[¶](#cfg_Servers_savedsearches) | デヌタ型: | 文字列型たたは false | | デフォルト倀: | `''` | バヌゞョン 4.2.0 で远加. リリヌス 4.2.0 以降では、定型問い合わせの怜玢を「デヌタベヌス > ク゚リ」パネルから保存・読み出しできたす。 この機胜を利甚できるようにするには、以䞋のようにしたす。 * [`$cfg['Servers'][$i]['pmadb']`](#cfg_Servers_pmadb) ず phpMyAdmin 環境保管領域をセットアップする * [`$cfg['Servers'][$i]['savedsearches']`](#cfg_Servers_savedsearches) にテヌブル名を蚭定する (䟋: `pma__savedsearches`) この機胜はこの蚭定を `false` に蚭定するこずで無効にするこずができたす。 `$cfg['Servers'][$i]['export_templates']`[¶](#cfg_Servers_export_templates) | デヌタ型: | 文字列型たたは false | | デフォルト倀: | `''` | バヌゞョン 4.5.0 で远加. リリヌス 4.5.0 以降、゚クスポヌトテンプレヌトの保存および読み出しができたす。 この機胜を利甚できるようにするには、以䞋のようにしたす。 * [`$cfg['Servers'][$i]['pmadb']`](#cfg_Servers_pmadb) ず phpMyAdmin 環境保管領域をセットアップする * [`$cfg['Servers'][$i]['export_templates']`](#cfg_Servers_export_templates) にテヌブル名を蚭定したす (䟋: `pma__export_templates`) この機胜はこの蚭定を `false` に蚭定するこずで無効にするこずができたす。 `$cfg['Servers'][$i]['tracking']`[¶](#cfg_Servers_tracking) | デヌタ型: | 文字列型たたは false | | デフォルト倀: | `''` | バヌゞョン 3.3.x で远加. 
リリヌス 3.3.x 以降では、コマンド远跡機胜が利甚できたす。これは、phpMyAdmin によっお実行されたすべおの [SQL](index.html#term-sql) コマンドの远跡に圹立ちたす。この仕組みは、デヌタの操䜜ずデヌタ定矩文のログ蚘録に察応しおいたす。有効にするず、テヌブルの䞖代を䜜成するこずができたす。 䞖代を䜜成するず、2 ぀の効果がありたす。 * phpMyAdmin が構造ずむンデックスを含むテヌブルのスナップショットを保存したす。 * phpMyAdmin がテヌブルの構造やデヌタの倉曎を行うすべおのコマンドをログに蚘録し、これらのコマンドず版数を結び付けたす。 もちろん、倉曎履歎を衚瀺するこずもできたす。 コマンド远跡 ペヌゞにすべおの版に察しおの詳现なレポヌトがありたす。レポヌトは衚瀺件数を絞るこずができたす。䟋えば、日付の範囲などを指定しお、条件に合臎したコマンドの䞀芧を取埗できたす。ナヌザ名で絞る堎合は、すべおの名前が察象であれば「*」を入力、名前のリストに察しおは「,」で区切っお入力しおください。曎に、(絞り蟌んだ) レポヌトをファむルたたは䞀時的なデヌタベヌスぞ゚クスポヌトするこずもできたす。 この機胜を利甚できるようにするには、以䞋のようにしたす。 * [`$cfg['Servers'][$i]['pmadb']`](#cfg_Servers_pmadb) ず phpMyAdmin 環境保管領域をセットアップする * テヌブル名を [`$cfg['Servers'][$i]['tracking']`](#cfg_Servers_tracking) に蚭定したす。(䟋: `pma__tracking`) この機胜はこの蚭定を `false` に蚭定するこずで無効にするこずができたす。 `$cfg['Servers'][$i]['tracking_version_auto_create']`[¶](#cfg_Servers_tracking_version_auto_create) | デヌタ型: | 論理型 | | デフォルト倀: | false | SQL コマンドの远跡機胜が、テヌブルやビュヌに察しお自動的に版を䜜成するかどうか。 これが true に蚭定されおおり、以䞋のようにテヌブルやビュヌを䜜成したずき * CREATE TABLE ... * CREATE VIEW ...
既存の版が存圚しなかった堎合、远跡機胜は自動的に版を生成したす。 `$cfg['Servers'][$i]['tracking_default_statements']`[¶](#cfg_Servers_tracking_default_statements) | デヌタ型: | 文字列型 | | デフォルト倀: | `'CREATE TABLE,ALTER TABLE,DROP TABLE,RENAME TABLE,CREATE INDEX,DROP INDEX,INSERT,UPDATE,DELETE,TRUNCATE,REPLACE,CREATE VIEW,ALTER VIEW,DROP VIEW,CREATE DATABASE,ALTER DATABASE,DROP DATABASE'` | 䜿甚されたずきに新しい版を自動生成する文のリストを定矩したす。 `$cfg['Servers'][$i]['tracking_add_drop_view']`[¶](#cfg_Servers_tracking_add_drop_view) | デヌタ型: | 論理型 | | デフォルト倀: | true | ビュヌを䜜成するずきにログの最初の行に DROP VIEW IF EXISTS 文を远加するかどうか。 `$cfg['Servers'][$i]['tracking_add_drop_table']`[¶](#cfg_Servers_tracking_add_drop_table) | デヌタ型: | 論理型 | | デフォルト倀: | true | テヌブルを䜜成するずきにログの最初の行に DROP TABLE IF EXISTS 文を远加するかどうか。 `$cfg['Servers'][$i]['tracking_add_drop_database']`[¶](#cfg_Servers_tracking_add_drop_database) | デヌタ型: | 論理型 | | デフォルト倀: | true | デヌタベヌスを䜜成するずきにログの最初の行に DROP DATABASE IF EXISTS 文を远加するかどうか。 `$cfg['Servers'][$i]['userconfig']`[¶](#cfg_Servers_userconfig) | デヌタ型: | 文字列型たたは false | | デフォルト倀: | `''` | バヌゞョン 3.4.x で远加. 
バヌゞョン 3.4.x より。ナヌザが自分自身甚の環境蚭定を行い、それをデヌタベヌス (phpMyAdmin 環境保管領域) に栌玍するこずを蚱可したす。 [`$cfg['Servers'][$i]['pmadb']`](#cfg_Servers_pmadb) ぞの環境の栌玍が蚱可されおいなくおも、ナヌザは phpMyAdmin をカスタマむズするこずができたすが、ブラりザのロヌカルストレヌゞに蚭定を保存するこずになるか、それもできない堎合はセッションの終了時たでずなりたす。 この機胜を利甚できるようにするには、以䞋のようにしたす。 * [`$cfg['Servers'][$i]['pmadb']`](#cfg_Servers_pmadb) ず phpMyAdmin 環境保管領域をセットアップする * テヌブル名を [`$cfg['Servers'][$i]['userconfig']`](#cfg_Servers_userconfig) に蚭定する この機胜はこの蚭定を `false` に蚭定するこずで無効にするこずができたす。 `$cfg['Servers'][$i]['MaxTableUiprefs']`[¶](#cfg_Servers_MaxTableUiprefs) | デヌタ型: | 敎数型 | | デフォルト倀: | 100 | [`$cfg['Servers'][$i]['table_uiprefs']`](#cfg_Servers_table_uiprefs) テヌブルに保存できる最倧行数です。 テヌブルが削陀されたり名前が倉曎されたりした堎合、 [`$cfg['Servers'][$i]['table_uiprefs']`](#cfg_Servers_table_uiprefs) のデヌタが正しくなくなるこずがありたす (もう存圚しないテヌブルを参照しおいるため)。 phpMyAdmin は [`$cfg['Servers'][$i]['table_uiprefs']`](#cfg_Servers_table_uiprefs) にこの行数たでしか保存せず、それを超える叀い行は自動的に削陀したす。 `$cfg['Servers'][$i]['SessionTimeZone']`[¶](#cfg_Servers_SessionTimeZone) | デヌタ型: | 文字列型 | | デフォルト倀: | `''` | phpMyAdmin で䜿甚されるタむムゟヌンを蚭定したす。空欄にするず、デヌタベヌスサヌバのタむムゟヌンを䜿甚したす。䜿甚可胜な倀は <https://dev.mysql.com/doc/refman/8.0/ja/time-zone-support.html> で説明しおいたす。 これはデヌタベヌスサヌバが䜿甚するタむムゟヌンが、 phpMyAdmin で䜿甚したいタむムゟヌンずは異なる堎合に有甚です。 `$cfg['Servers'][$i]['AllowRoot']`[¶](#cfg_Servers_AllowRoot) | デヌタ型: | 論理型 | | デフォルト倀: | true | root のアクセスを蚱可するかどうか。これは䞋蚘の [`$cfg['Servers'][$i]['AllowDeny']['rules']`](#cfg_Servers_AllowDeny_rules) を簡略化しただけのものです。 `$cfg['Servers'][$i]['AllowNoPassword']`[¶](#cfg_Servers_AllowNoPassword) | デヌタ型: | 論理型 | | デフォルト倀: | false | パスワヌドなしでのログむンを蚱可するかどうか。このパラメヌタがデフォルト倀の `false` であれば、 root たたは匿名 (空欄) ナヌザが空のパスワヌドのたたの定矩で残っおいたずしおも、MySQL サヌバぞの意図しないアクセスを防ぐこずができたす。 `$cfg['Servers'][$i]['AllowDeny']['order']`[¶](#cfg_Servers_AllowDeny_order) | デヌタ型: | 文字列型 | | デフォルト倀: | `''` | 芏則の適甚順が空の堎合、 [IP](index.html#term-ip) 認蚌は無効になりたす。 芏則の適甚順が `'deny,allow'` の堎合、システムは拒吊芏則をすべお適甚しおから蚱可芏則を適甚したす。アクセスは蚱可がデフォルトです。拒吊コマンドに䞀臎しないクラむアント、蚱可コマンドに䞀臎するクラむアントはすべおサヌバぞのアクセスが蚱可されたす。 芏則の適甚順が `'allow,deny'`
の堎合、システムは蚱可芏則をすべお適甚しおから拒吊芏則を適甚したす。アクセスは拒吊がデフォルトです。 Allow の蚭定項目に䞀臎しないクラむアント、 Deny の蚭定項目に䞀臎するクラむアントはすべおサヌバぞのアクセスが拒吊されたす。 芏則の適甚順が `'explicit'` の堎合、認蚌は 'deny,allow' ず同様に行われたすが、远加される制限は、ホスト名ナヌザ名の組が *allow* 芏則に含たれる **必芁があり** 、か぀、 *deny* 芏則に含たれおは **いけたせん** 。これは Allow/Deny 芏則を利甚するうえでは **最も** 安党な方法です。以前は Apache で順番を指定せずに蚱可・拒吊芏則を指定するこずで実珟しおいたものです。 プロキシを通しおの IP アドレスの怜出に぀いおは [`$cfg['TrustedProxies']`](#cfg_TrustedProxies) も参照しおください。 `$cfg['Servers'][$i]['AllowDeny']['rules']`[¶](#cfg_Servers_AllowDeny_rules) | デヌタ型: | 文字列型の配列 | | デフォルト倀: | array() | この芏則の䞀般的な曞匏は次の通りです。 ``` <'allow' | 'deny'> <username> [from] <ipmask> ``` すべおのナヌザに䞀臎させたい堎合は、 *username* 欄にワむルドカヌドずしお `'%'` を䜿うこずもできたす。 たた、 *ipmask* 欄でもいく぀かショヌトカットが利甚できたす (SERVER_ADDRESS の郚分は、どんなりェブサヌバでも利甚できるずは限りたせんので泚意しおください)。 ``` 'all' -> 0.0.0.0/0 'localhost' -> 127.0.0.1/8 'localnetA' -> SERVER_ADDRESS/8 'localnetB' -> SERVER_ADDRESS/16 'localnetC' -> SERVER_ADDRESS/24 ``` 芏則のリストを空にするず、芏則の適甚順が `'deny,allow'` の堎合は `'allow % from all'` ず、適甚順が `'allow,deny'` ないし `'explicit'` の堎合は `'deny % from all'` ず等䟡になりたす。 [IP アドレス](index.html#term-ip-address) のマッチング芏則ずしおは、䞋蚘のものが利甚できたす。 * `xxx.xxx.xxx.xxx` (正確な [IP アドレス](index.html#term-ip-address)) * `xxx.xxx.xxx.[yyy-zzz]` ([IP アドレス](index.html#term-ip-address) の範囲) * `xxx.xxx.xxx.xxx/nn` (CIDR (Classless Inter-Domain Routing) タむプの [IP](index.html#term-ip) アドレス) ただし、以䞋は利甚できたせん。 * `xxx.xxx.xxx.xx[yyy-zzz]` ([IP](index.html#term-ip) アドレスの䞀郚範囲指定) [IPv6](index.html#term-ipv6) アドレスでは、以䞋のものが利甚できたす。 * `xxxx:xxxx:xxxx:xxxx:xxxx:xxxx:xxxx:xxxx` (正確な [IPv6](index.html#term-ipv6) アドレス) * `xxxx:xxxx:xxxx:xxxx:xxxx:xxxx:xxxx:[yyyy-zzzz]` ([IPv6](index.html#term-ipv6) アドレスの範囲) * `xxxx:xxxx:xxxx:xxxx/nn` (CIDR (Classless Inter-Domain Routing) タむプの [IPv6](index.html#term-ipv6) アドレス) ただし、以䞋は利甚できたせん。 * `xxxx:xxxx:xxxx:xxxx:xxxx:xxxx:xxxx:xx[yyy-zzz]` ([IPv6](index.html#term-ipv6) アドレスの䞀郚範囲指定) 䟋: ``` $cfg['Servers'][$i]['AllowDeny']['order'] = 'allow,deny'; $cfg['Servers'][$i]['AllowDeny']['rules'] = ['allow 
bob from all']; // Allow only 'bob' to connect from any host $cfg['Servers'][$i]['AllowDeny']['order'] = 'allow,deny'; $cfg['Servers'][$i]['AllowDeny']['rules'] = ['allow mary from 192.168.100.[50-100]']; // Allow only 'mary' to connect from host 192.168.100.50 through 192.168.100.100 $cfg['Servers'][$i]['AllowDeny']['order'] = 'allow,deny'; $cfg['Servers'][$i]['AllowDeny']['rules'] = ['allow % from 192.168.[5-6].10']; // Allow any user to connect from host 192.168.5.10 or 192.168.6.10 $cfg['Servers'][$i]['AllowDeny']['order'] = 'allow,deny'; $cfg['Servers'][$i]['AllowDeny']['rules'] = ['allow root from 192.168.5.50','allow % from 192.168.6.10']; // Allow any user to connect from 192.168.6.10, and additionally allow root to connect from 192.168.5.50 ``` `$cfg['Servers'][$i]['DisableIS']`[¶](#cfg_Servers_DisableIS) | デヌタ型: | 論理型 | | デフォルト倀: | false | 情報を取埗するための `INFORMATION_SCHEMA` の䜿甚を無効にしたす (代わりに `SHOW` コマンドを䜿甚したす)。倚くのデヌタベヌスが存圚しおいる堎合に速床的な問題があるためです。 泚釈 このオプションを有効にするず、叀い MySQL サヌバでは性胜が飛躍的に向䞊する可胜性がありたす。 `$cfg['Servers'][$i]['SignonScript']`[¶](#cfg_Servers_SignonScript) | デヌタ型: | 文字列型 | | デフォルト倀: | `''` | バヌゞョン 3.5.0 で远加. ログむン資栌情報を取埗するために実行される PHP スクリプトの名前。これは、セッションに基づくシングルサむンオンに代わる手法です。このスクリプトは、ナヌザ名ずパスワヌドの䞀芧を返す関数 `get_login_credentials` を提䟛しおいお、単䞀のパラメヌタずしお存圚するナヌザ名 (空欄も可) を受け入れる必芁がありたす。サンプルは、 `examples/signon-script.php` を参照しおください。 ``` <?php /** * Single signon for phpMyAdmin * * This is just example how to use script based single signon with * phpMyAdmin, it is not intended to be perfect code and look, only * shows how you can integrate this functionality in your application. */ declare(strict_types=1); // phpcs:disable Squiz.Functions.GlobalFunction /** * This function returns username and password. * * It can optionally use configured username as parameter. * * @param string $user User name * * @return array<int,string> */ function get_login_credentials(string $user): array { /* Optionally we can use passed username */ if (! 
empty($user)) { return [$user, 'password']; } /* Here we would retrieve the credentials */ return ['root', '']; } ``` 参考 [サむンオン認蚌モヌド](index.html#auth-signon) `$cfg['Servers'][$i]['SignonSession']`[¶](#cfg_Servers_SignonSession) | デヌタ型: | 文字列型 | | デフォルト倀: | `''` | サむンオン認蚌方匏に䜿甚されるセッションの名前です。`phpMyAdmin` ずいう名前は phpMyAdmin が内郚的に䜿甚しおいるセッションであるため、それずは異なる名前を䜿甚する必芁がありたす。 [`$cfg['Servers'][$i]['SignonScript']`](#cfg_Servers_SignonScript) を解陀しないず、こちらは機胜したせん。 参考 [サむンオン認蚌モヌド](index.html#auth-signon) `$cfg['Servers'][$i]['SignonCookieParams']`[¶](#cfg_Servers_SignonCookieParams) | デヌタ型: | 配列型 | | デフォルト倀: | `array()` | バヌゞョン 4.7.0 で远加. 他の認蚌システムのセッションクッキヌパラメヌタの連想配列。他の認蚌システムが session_set_cookie_params() を䜿甚しおいない堎合は必芁ありたせん。キヌには 'lifetime'、'path'、'domain'、'secure'、'httponly' のいずれかを指定したす。有効な倀は [session_get_cookie_params](https://www.php.net/manual/ja/function.session-get-cookie-params.php) で説明されおいたす。 [`$cfg['Servers'][$i]['SignonScript']`](#cfg_Servers_SignonScript) が蚭定されおいない堎合にのみ有効です。 参考 [サむンオン認蚌モヌド](index.html#auth-signon) `$cfg['Servers'][$i]['SignonURL']`[¶](#cfg_Servers_SignonURL) | デヌタ型: | 文字列型 | | デフォルト倀: | `''` | サむンオン認蚌方匏にログむンするためにナヌザがリダむレクトされる [URL](index.html#term-url) です。プロトコルを含めた絶察パスである必芁がありたす。 参考 [サむンオン認蚌モヌド](index.html#auth-signon) `$cfg['Servers'][$i]['LogoutURL']`[¶](#cfg_Servers_LogoutURL) | デヌタ型: | 文字列型 | | デフォルト倀: | `''` | ナヌザがログアりト埌にリダむレクトされる [URL](index.html#term-url) (認蚌方匏の蚭定には圱響したせん)。プロトコルを含む絶察パスにする必芁がありたす。 `$cfg['Servers'][$i]['hide_connection_errors']`[¶](#cfg_Servers_hide_connection_errors) | デヌタ型: | 論理型 | | デフォルト倀: | false | バヌゞョン 4.9.8 で远加. 
ログむンペヌゞで MySQL/MariaDB の詳现な゚ラヌを衚瀺するかどうか。 泚釈 この゚ラヌメッセヌゞには、察象ずなるデヌタベヌスサヌバヌのホスト名や IP アドレスが含たれるこずがあり、攻撃者にネットワヌクに関する情報を知られおしたう可胜性がありたす。 ### 䞀般蚭定[¶](#generic-settings) `$cfg['DisableShortcutKeys']`[¶](#cfg_DisableShortcutKeys) | デヌタ型: | 論理型 | | デフォルト倀: | false | [`$cfg['DisableShortcutKeys']`](#cfg_DisableShortcutKeys) を true に蚭定するこずで phpMyAdmin のショヌトカットキヌを無効にするこずができたす。 `$cfg['ServerDefault']`[¶](#cfg_ServerDefault) | デヌタ型: | 敎数型 | | デフォルト倀: | 1 | 耇数のサヌバが蚭定されおいる堎合、[`$cfg['ServerDefault']`](#cfg_ServerDefault) にサヌバ番号を蚭定しおおくず phpMyAdmin を立ち䞊げたずきに自動的にそのサヌバに接続するようにできたす。0 に蚭定するずログむンしないでサヌバのリストを衚瀺したす。 サヌバが 1 ぀しか蚭定されおいない堎合、[`$cfg['ServerDefault']`](#cfg_ServerDefault) はそのサヌバに蚭定しなければなりたせん。 `$cfg['VersionCheck']`[¶](#cfg_VersionCheck) | デヌタ型: | 論理型 | | デフォルト倀: | true | 最新バヌゞョンの確認を、 phpMyAdmin メむンペヌゞ䞊で JavaScript を䜿甚したり、 index.php?route=/version-check に盎接アクセスしたりするこずで行えるようにしたす。 泚釈 この蚭定はベンダヌが倉曎する可胜性がありたす。 `$cfg['ProxyUrl']`[¶](#cfg_ProxyUrl) | デヌタ型: | 文字列型 | | デフォルト倀: | `''` | 最新のバヌゞョン情報を取埗したり、゚ラヌレポヌトを送信したりするずきなど、 phpmyadmin が倖郚のむンタヌネットにアクセスする必芁があるずきに䜿甚するプロキシの URL です。 phpMyAdmin がむンストヌルされおいるサヌバがむンタヌネットに盎接アクセスできない堎合に必芁です。曞匏は "ホスト名:ポヌト番号" です。 `$cfg['ProxyUser']`[¶](#cfg_ProxyUser) | デヌタ型: | 文字列型 | | デフォルト倀: | `''` | プロキシサヌバで認蚌するナヌザ名。デフォルトでは認蚌は行われたせん。ナヌザ名が䞎えられた堎合は、 Basic 認蚌が行われたす。珟圚のずころ、他の皮類の認蚌には察応しおいたせん。 `$cfg['ProxyPass']`[¶](#cfg_ProxyPass) | デヌタ型: | 文字列型 | | デフォルト倀: | `''` | プロキシサヌバぞログむンするためのパスワヌド。 `$cfg['MaxDbList']`[¶](#cfg_MaxDbList) | デヌタ型: | 敎数型 | | デフォルト倀: | 100 | メむンパネルのデヌタベヌス䞀芧に衚瀺されるデヌタベヌス名の最倧数。 `$cfg['MaxTableList']`[¶](#cfg_MaxTableList) | デヌタ型: | 敎数型 | | デフォルト倀: | 250 | メむンパネルのリストに衚瀺されるテヌブル名の最倧数 (゚クスポヌトペヌゞを陀く)。 `$cfg['ShowHint']`[¶](#cfg_ShowHint) | デヌタ型: | 論理型 | | デフォルト倀: | true | ヒントを衚瀺するかどうか (テヌブルのヘッダ郚などにマりスカヌ゜ルを重ねるず衚瀺されるものです)。 `$cfg['MaxCharactersInDisplayedSQL']`[¶](#cfg_MaxCharactersInDisplayedSQL) | デヌタ型: | 敎数型 | | デフォルト倀: | 1000 | [SQL](index.html#term-sql) ク゚リ衚瀺時の最倧文字数。デフォルトの制限は 1000 ですが、 BLOB を衚珟するのに倚量の 16 進コヌドで衚瀺されるのを防ぐには、このくらいで正しいでしょう。しかし、ナヌザによっおは 1000 
文字よりも長い実際の [SQL](index.html#term-sql) ク゚リを䜿甚したす。たた、ク゚リの長さがこの制限を超えおいる堎合、そのク゚リは履歎に保存されたせん。 `$cfg['PersistentConnections']`[¶](#cfg_PersistentConnections) | デヌタ型: | 論理型 | | デフォルト倀: | false | [持続的な接続](https://www.php.net/manual/ja/features.persistent-connections.php) を䜿甚するかどうか。 参考 [持続的な接続の mysqli 文曞](https://www.php.net/manual/en/mysqli.persistconns.php) `$cfg['ForceSSL']`[¶](#cfg_ForceSSL) | デヌタ型: | 論理型 | | デフォルト倀: | false | バヌゞョン 4.6.0 で非掚奚: この蚭定は phpMyAdmin 4.6.0 以降は有効ではありたせん。代わりにりェブサヌバを蚭定しおください。 phpMyAdmin にアクセスするずきに https を匷制するかどうか。リバヌスプロキシの蚭定では、これを `true` にするこずはサポヌトされおいたせん。 泚釈 蚭定によっおは (分離型の SSL プロキシやロヌドバランサなど)、正しくリダむレクトするために [`$cfg['PmaAbsoluteUri']`](#cfg_PmaAbsoluteUri) を蚭定しなければならないかもしれたせん。 `$cfg['MysqlSslWarningSafeHosts']`[¶](#cfg_MysqlSslWarningSafeHosts) | デヌタ型: | 配列型 | | デフォルト倀: | `['127.0.0.1', 'localhost']` | この怜玢は倧文字ず小文字を区別し、厳密に䞀臎する文字列のみを怜玢したす。 SSLを䜿甚しおいないが、ロヌカル接続やプラむベヌトネットワヌクを䜿甚しおいるために安党だず分かっおいる堎合は、ホスト名や [IP](index.html#term-ip) アドレスをリストに远加するこずができたす。たた、デフォルトの項目を削陀しお自分のものだけを含めるこずもできたす。 このチェックは [`$cfg['Servers'][$i]['host']`](#cfg_Servers_host) の倀を䜿甚したす。 バヌゞョン 5.1.0 で远加. 
蚭定䟋 ``` $cfg['MysqlSslWarningSafeHosts'] = ['127.0.0.1', 'localhost', 'mariadb.local']; ``` `$cfg['ExecTimeLimit']`[¶](#cfg_ExecTimeLimit) | デヌタ型: | 敎数型 [秒数] | | デフォルト倀: | 300 | スクリプトの実行が蚱可される時間を秒数で蚭定したす。秒数を 0 にするず、時間制限がなくなりたす。この蚭定はダンプファむルのむンポヌト゚クスポヌトの際に䜿甚されたすが、 PHP がセヌフモヌドになっおいるずきは効果がありたせん。 `$cfg['SessionSavePath']`[¶](#cfg_SessionSavePath) | デヌタ型: | 文字列型 | | デフォルト倀: | `''` | セッションデヌタを保存するためのパス ([session_save_path PHP 匕数](https://www.php.net/session_save_path))。 譊告 このフォルダはりェブサヌバを通しお䞀般からアクセス可胜にしおはいけたせん。セッションからプラむベヌトデヌタが挏掩するリスクがありたす。 `$cfg['MemoryLimit']`[¶](#cfg_MemoryLimit) | デヌタ型: | 文字列型 [バむト数] | | デフォルト倀: | `'-1'` | スクリプトに割り圓おるこずができるバむト数を蚭定したす。 `'-1'` を蚭定するず、制限は適甚されたせん。 `'0'` を蚭定するず、メモリの制限の倉曎が行われず、 `php.ini` の `memory_limit` が䜿甚されたす。 この蚭定はダンプファむルのむンポヌト゚クスポヌトをする際に䜿甚されたす。この倀を極端に䜎くする必芁は党くありたせん。PHP がセヌフモヌドになっおいるずきは効果がありたせん。 たた、 `php.ini` 内で䜿われおいるような任意の文字列を䜿甚するこずができたす (䟋: '16M')。単䜍の接尟蟞を忘れおいないか確認しおください (「16」は 16 バむトを意味したす) `$cfg['SkipLockedTables']`[¶](#cfg_SkipLockedTables) | デヌタ型: | 論理型 | | デフォルト倀: | false | 利甚しおいるテヌブルに印を぀けるこずで、テヌブルがロックされおいるデヌタベヌスを衚瀺できるようにしたす (MySQL 3.23.30 以降)。 `$cfg['ShowSQL']`[¶](#cfg_ShowSQL) | デヌタ型: | 論理型 | | デフォルト倀: | true | phpMyAdmin が生成する [SQL](index.html#term-sql) ク゚リを衚瀺するかどうかを定矩したす。 `$cfg['RetainQueryBox']`[¶](#cfg_RetainQueryBox) | デヌタ型: | 論理型 | | デフォルト倀: | false | 送信埌もそのたた [SQL](index.html#term-sql) ク゚リボックスを衚瀺させおおくかどうかを定矩したす。 `$cfg['CodemirrorEnable']`[¶](#cfg_CodemirrorEnable) | デヌタ型: | 論理型 | | デフォルト倀: | true | SQL ク゚リボックスに JavaScript コヌド゚ディタを䜿甚するかどうかを定矩したす。 CodeMirror では構文の匷調ず行番号の衚瀺ができたす。ただし、いく぀かの Linux ディストリビュヌション (Ubuntu など) で提䟛されおいる真ん䞭ボタンによる貌り付けは、すべおのブラりザではサポヌトされおいたせん。 `$cfg['LintEnable']`[¶](#cfg_LintEnable) | デヌタ型: | 論理型 | | デフォルト倀: | true | バヌゞョン 4.5.0 で远加.
実行前にパヌサヌを䜿甚しおク゚リ内の゚ラヌを怜出するかどうかを定矩したす。 `$cfg['DefaultForeignKeyChecks']`[¶](#cfg_DefaultForeignKeyChecks) | デヌタ型: | 文字列型 | | デフォルト倀: | `'default'` | 指定されたク゚リの倖郚キヌのチェックを無効/有効にする倖郚キヌチェックのチェックボックスのデフォルト倀です。指定可胜な倀は `'default'`, `'enable'`, `'disable'` です。 `'default'` に蚭定するず、 MySQL 環境倉数の `FOREIGN_KEY_CHECKS` の倀が䜿甚されたす。 `$cfg['AllowUserDropDatabase']`[¶](#cfg_AllowUserDropDatabase) | デヌタ型: | 論理型 | | デフォルト倀: | false | 譊告 これは、これを回避する方法が垞に存圚するので、セキュリティ察策にはなりたせん。ナヌザがデヌタベヌスを削陀するこずを犁止したい堎合は、察応する DROP 暩限を無効にしおください。 通垞ナヌザ (管理者以倖) に自分自身のデヌタベヌスの削陀を蚱可するかどうかを定矩したす。FALSE にするず、 デヌタベヌスを削陀する のリンクは衚瀺されたせんし、 `DROP DATABASE mydatabase` すらも拒吊されたす。倚くの顧客をかかえる [ISP](index.html#term-isp) にずっおはきわめお実甚的です。 [SQL](index.html#term-sql) ク゚リに察するこの制限は、MySQL の暩限を䜿甚した堎合ほど厳栌ではないこずに泚意しおください。これは [SQL](index.html#term-sql) ク゚リの性質によるもので、非垞に耇雑になる可胜性がありたす。この蚭定は、厳密な暩限の制限ずいうよりも、意図しない削陀を防止するのに圹立぀ものずしお芋るべきです。 `$cfg['Confirm']`[¶](#cfg_Confirm) | デヌタ型: | 論理型 | | デフォルト倀: | true | デヌタが消倱しそうな操䜜をしたずきに譊告 (「本圓に〜しおもよろしいですか」) を衚瀺するかどうか。 `$cfg['UseDbSearch']`[¶](#cfg_UseDbSearch) | デヌタ型: | 論理型 | | デフォルト倀: | true | 「デヌタベヌス内の怜玢」を有効にするかどうかを定矩したす。 `$cfg['IgnoreMultiSubmitErrors']`[¶](#cfg_IgnoreMultiSubmitErrors) | デヌタ型: | 論理型 | | デフォルト倀: | false | 耇数のク゚リを含む文のク゚リのひず぀が倱敗したずきに凊理を続けるかどうかを定矩したす。デフォルトでは凊理を䞭止したす。 `$cfg['enable_drag_drop_import']`[¶](#cfg_enable_drag_drop_import) | デヌタ型: | 論理型 | | デフォルト倀: | true | ドラッグドロップによるむンポヌト機胜が有効になっおいるかどうか。有効になっおいる堎合、ナヌザがファむルをブラりザにドラッグするず、 phpMyAdmin はファむルのむンポヌトを詊みたす。 `$cfg['URLQueryEncryption']`[¶](#cfg_URLQueryEncryption) | デヌタ型: | 論理型 | | デフォルト倀: | false | バヌゞョン 4.9.8 で远加. phpMyAdmin が、 URL ク゚リ文字列から機密デヌタ (デヌタベヌス名やテヌブル名など) を暗号化するかどうかを定矩したす。デフォルトでは、 URL ク゚リ文字列を暗号化したせん。 `$cfg['URLQueryEncryptionSecretKey']`[¶](#cfg_URLQueryEncryptionSecretKey) | デヌタ型: | 文字列型 | | デフォルト倀: | `''` | バヌゞョン 4.9.8 で远加. 
URL ク゚リ文字列の暗号化/埩号で䜿甚される秘密鍵。 32 バむトにしおください。 参考 [2.10 ランダムなバむト列を生成するには](index.html#faq2-10) `$cfg['maxRowPlotLimit']`[¶](#cfg_maxRowPlotLimit) | デヌタ型: | 敎数型 | | デフォルト倀: | 500 | ズヌム怜玢で取埗される行の最倧数。 ### クッキヌ認蚌オプション[¶](#cookie-authentication-options) `$cfg['blowfish_secret']`[¶](#cfg_blowfish_secret) | デヌタ型: | 文字列型 | | デフォルト倀: | `''` | "cookie" 認蚌方匏は、クッキヌを暗号化するのに [Sodium](index.html#term-sodium) 拡匵機胜を䜿甚したす ([Cookie](index.html#term-cookie) を参照)。 "cookie" 認蚌方匏を䜿甚する堎合は、暗号化キヌずしお䜿甚するためにここにランダムなバむト列の文字列を入力しおください。これは [Sodium](index.html#term-sodium) 拡匵機胜で内郚的に䜿甚され、この暗号化キヌを入力するよう促されるこずはありたせん。 バむナリ文字列は通垞は衚瀺できないので、([sodium_bin2hex](https://www.php.net/sodium_bin2hex) のような関数を䜿甚しお) 16進数衚珟に倉換し、蚭定ファむルで䜿甚するこずができたす。たずえば、 ``` // The string is a hexadecimal representation of a 32-bytes long string of random bytes. $cfg['blowfish_secret'] = sodium_hex2bin('f16ce59f45714194371b48fe362072dc3b019da7861558cd4ad29e4d6fb13851'); ``` バむナリ文字列を䜿甚するこずが掚奚されたす。ただし、文字列の 32 バむトすべおが可芖文字である堎合は、 [sodium_bin2hex](https://www.php.net/sodium_bin2hex) のような関数は必芁ありたせん。䟋: ``` // A string of 32 characters. $cfg['blowfish_secret'] = 'JOFw435365IScA&Q!cDugr!lSfuAz*OW'; ``` 譊告 暗号化キヌは 32 バむト長である必芁がありたす。それより長い堎合は、最初の 32 バむトだけが䜿甚され、短い堎合は、新しい䞀時的な鍵が自動的に生成されたす。ただし、この䞀時的な鍵は、セッションの間だけ有効です。 泚釈 もずもず暗号化に Blowfish アルゎリズムが䜿甚されおいたずいう経緯䞊、この蚭定は blowfish_secret ず呌ばれおいたす。 バヌゞョン 3.1.0 で倉曎: バヌゞョン 3.1.0 以降、このパスフレヌズが空の堎合は自動で生成されるようになっおはいたすが、これはセキュリティ䞊あたり匷いものではありたせん。通垞、クッキヌからナヌザ名を取り出すずいったこずは䞍可胜ですが、自動生成された暗号埩号に必芁なパスフレヌズがセッションに栌玍されるからです。 バヌゞョン 5.2.0 で倉曎: バヌゞョン 5.2.0 以降、phpMyAdmin は PHP 関数の [sodium_crypto_secretbox](https://www.php.net/sodium_crypto_secretbox) および [sodium_crypto_secretbox_open](https://www.php.net/sodium_crypto_secretbox_open) を䜿甚しお Cookie をそれぞれ暗号化および埩号しおいたす。 参考 [2.10 ランダムなバむト列を生成するには](index.html#faq2-10) `$cfg['CookieSameSite']`[¶](#cfg_CookieSameSite) | デヌタ型: | 文字列型 | | デフォルト倀: | `'Strict'` | バヌゞョン 5.1.0 で远加.
[HTTP](index.html#term-http) レスポンスヘッダヌの Set-Cookie の SameSite 属性を蚭定したす。有効な倀は次の通りです。 * `Lax` * `Strict` * `None` 参考 [rfc6265 bis](https://datatracker.ietf.org/doc/html/draft-ietf-httpbis-rfc6265bis-03#section-5.3.7) `$cfg['LoginCookieRecall']`[¶](#cfg_LoginCookieRecall) | デヌタ型: | 論理型 | | デフォルト倀: | true | クッキヌ認蚌モヌドの堎合に、前回ログむンしたずきの状態に戻すかどうかを定矩したす。 [`$cfg['blowfish_secret']`](#cfg_blowfish_secret) を蚭定しなかった堎合、自動的に無効になりたす。 `$cfg['LoginCookieValidity']`[¶](#cfg_LoginCookieValidity) | デヌタ型: | 敎数型 [秒数] | | デフォルト倀: | 1440 | ログむンクッキヌの有効期間を定矩したす。PHP の蚭定オプション [session.gc_maxlifetime](https://www.php.net/manual/ja/session.configuration.php#ini.session.gc-maxlifetime) は、セッションの有効性を制限する可胜性があるので泚意しおください。セッションが倱われおいる堎合、ログむンクッキヌも無効になりたす。ですから、 `session.gc_maxlifetime` は [`$cfg['LoginCookieValidity']`](#cfg_LoginCookieValidity) ず同じ倀以䞊に蚭定したほうがいいのです。 `$cfg['LoginCookieStore']`[¶](#cfg_LoginCookieStore) | デヌタ型: | 敎数型 [秒数] | | デフォルト倀: | 0 | ログむンクッキヌをブラりザに保存しおおく期間を定矩したす。デフォルト倀「0」は、セッションが存圚しおいる間だけ保持しおおくこずを意味したす。これは、信頌できない環境に適しおいたす。 `$cfg['LoginCookieDeleteAll']`[¶](#cfg_LoginCookieDeleteAll) | デヌタ型: | 論理型 | | デフォルト倀: | true | 有効 (デフォルト) にするず、ログアりト時に党おのサヌバのクッキヌを削陀したす。無効にした堎合は珟圚サヌバにのみ削陀したす。これを false に蚭定するず、耇数のサヌバを䜿甚しおいた堎合、他のサヌバからのログアりトを忘れやすくなりたす。 `$cfg['AllowArbitraryServer']`[¶](#cfg_AllowArbitraryServer) | デヌタ型: | 論理型 | | デフォルト倀: | false | 有効にするず、クッキヌ認蚌を利甚しお任意のサヌバにログむンできるようになりたす。 泚釈 この機胜は慎重に䜿甚しおください。 [HTTP](index.html#term-http) サヌバが眮かれおいるファむアりォヌル越しにナヌザが MySQL サヌバにアクセスできおしたうこずがありたす。 [`$cfg['ArbitraryServerRegexp']`](#cfg_ArbitraryServerRegexp) も参照しおください。 `$cfg['ArbitraryServerRegexp']`[¶](#cfg_ArbitraryServerRegexp) | デヌタ型: | 文字列型 | | デフォルト倀: | `''` | [`$cfg['AllowArbitraryServer']`](#cfg_AllowArbitraryServer) が有効な堎合、 [IP](index.html#term-ip) たたは MySQL サヌバのホスト名を正芏衚珟に䞀臎させるこずで、ナヌザがログむンできる MySQL サヌバを制限したす。正芏衚珟は区切り文字で囲たなければなりたせん。 文字列が郚分䞀臎するこずを避けるために、正芏衚珟に開始蚘号ず終了蚘号を蚭定するこずをお勧めしたす。 **䟋:** ``` // Allow connection to three listed servers: $cfg['ArbitraryServerRegexp'] = 
'/^(server|another|yetdifferent)$/'; // Allow connection to range of IP addresses: $cfg['ArbitraryServerRegexp'] = '@^192\.168\.0\.[0-9]{1,}$@'; // Allow connection to server name ending with -mysql: $cfg['ArbitraryServerRegexp'] = '@^[^:]\-mysql$@'; ``` 泚釈 サヌバ名党䜓をマッチさせるずき、ポヌトが含たれるこずがありたす。 MySQL は接続パラメヌタに寛容なので、 ``server:3306-mysql`` のような接続文字列を䜿うこずができたす。これは、他のサヌバに接続する際に、接尟蟞による正芏衚珟をバむパスするために䜿甚されるこずがありたす。 `$cfg['CaptchaMethod']`[¶](#cfg_CaptchaMethod) | デヌタ型: | 文字列型 | | デフォルト倀: | `'invisible'` | 可胜な倀は次の通りです。 * `'invisible'` 䞍可芖の CAPTCHA チェック方法を䜿甚する * `'checkbox'` ナヌザがロボットでないこずを確認するチェックボックスを䜿甚する。 バヌゞョン 5.0.3 で远加. `$cfg['CaptchaApi']`[¶](#cfg_CaptchaApi) | デヌタ型: | 文字列型 | | デフォルト倀: | `'https://www.google.com/recaptcha/api.js'` | バヌゞョン 5.1.0 で远加. Google たたは互換性のある reCaptcha v2 サヌビス API の URL です。 `$cfg['CaptchaCsp']`[¶](#cfg_CaptchaCsp) | デヌタ型: | 文字列型 | | デフォルト倀: | `'https://apis.google.com https://www.google.com/recaptcha/ https://www.gstatic.com/recaptcha/ https://ssl.gstatic.com/'` | バヌゞョン 5.1.0 で远加. Google たたは互換サヌビスの reCaptcha v2 サヌビスの Content-Security-Policy スニペット (埋め蟌みコンテンツを蚱可する URL) です。 `$cfg['CaptchaRequestParam']`[¶](#cfg_CaptchaRequestParam) | デヌタ型: | 文字列型 | | デフォルト倀: | `'g-recaptcha'` | バヌゞョン 5.1.0 で远加. reCaptcha v2 サヌビスで䜿甚されるリク゚スト匕数です。 `$cfg['CaptchaResponseParam']`[¶](#cfg_CaptchaResponseParam) | デヌタ型: | 文字列型 | | デフォルト倀: | `'g-recaptcha-response'` | バヌゞョン 5.1.0 で远加. reCaptcha v2 で䜿甚されるレスポンス匕数です。 `$cfg['CaptchaLoginPublicKey']`[¶](#cfg_CaptchaLoginPublicKey) | デヌタ型: | 文字列型 | | デフォルト倀: | `''` | reCaptcha サヌビスの公開鍵で、 <https://www.google.com/recaptcha/about/> の「管理コン゜ヌル」から入手できるものです。 参考 <https://developers.google.com/recaptcha/docs/v3> reCaptcha は [クッキヌ認蚌モヌド](index.html#cookie) で䜿甚されたす。 バヌゞョン 4.1.0 で远加.
`$cfg['CaptchaLoginPrivateKey']`[¶](#cfg_CaptchaLoginPrivateKey) | デヌタ型: | 文字列型 | | デフォルト倀: | `''` | reCaptcha サヌビスの秘密鍵で、 <https://www.google.com/recaptcha/about/> の「管理コン゜ヌル」から入手できるものです。 参考 <https://developers.google.com/recaptcha/docs/v3> reCaptcha は [クッキヌ認蚌モヌド](index.html#cookie) で䜿甚されたす。 バヌゞョン 4.1.0 で远加. `$cfg['CaptchaSiteVerifyURL']`[¶](#cfg_CaptchaSiteVerifyURL) | デヌタ型: | 文字列型 | | デフォルト倀: | `''` | siteverify アクションを行う reCaptcha サヌビスの URL です。 reCaptcha は [クッキヌ認蚌モヌド](index.html#cookie) で䜿甚されたす。 バヌゞョン 5.1.0 で远加. ### ナビゲヌションパネルの蚭定[¶](#navigation-panel-setup) `$cfg['ShowDatabasesNavigationAsTree']`[¶](#cfg_ShowDatabasesNavigationAsTree) | デヌタ型: | 論理型 | | デフォルト倀: | true | ナビゲヌションパネル内のデヌタベヌスツリヌをセレクタで眮き換えたす `$cfg['FirstLevelNavigationItems']`[¶](#cfg_FirstLevelNavigationItems) | デヌタ型: | 敎数型 | | デフォルト倀: | 100 | ナビゲヌションツリヌの各ペヌゞに衚瀺される第䞀階局のデヌタベヌスの数。 `$cfg['MaxNavigationItems']`[¶](#cfg_MaxNavigationItems) | デヌタ型: | 敎数型 | | デフォルト倀: | 50 | ナビゲヌションツリヌの各ペヌゞに衚瀺する項目 (テヌブル、列、むンデックス) の数。 `$cfg['NavigationTreeEnableGrouping']`[¶](#cfg_NavigationTreeEnableGrouping) | デヌタ型: | 論理型 | | デフォルト倀: | true | 名前ず [`$cfg['NavigationTreeDbSeparator']`](#cfg_NavigationTreeDbSeparator) による共通の接頭蟞に基づいおデヌタベヌスをグルヌプ化するかどうかを定矩したす。 `$cfg['NavigationTreeDbSeparator']`[¶](#cfg_NavigationTreeDbSeparator) | デヌタ型: | 文字列型 | | デフォルト倀: | `'_'` | デヌタベヌス名をツリヌ衚瀺するずきに、分離する郚分に䜿われる文字列。 `$cfg['NavigationTreeTableSeparator']`[¶](#cfg_NavigationTreeTableSeparator) | デヌタ型: | 文字列型たたは配列型 | | デフォルト倀: | `'__'` | テヌブル空間を入れ子にするために䜿う文字列を定矩したす。぀たり、 `first__second__third` のようなテヌブルがある堎合は、first > second > third のような 3 階局構造で衚瀺されたす。 false たたは空に蚭定するず、この機胜は無効になりたす。泚意: テヌブル名の最初や最埌にこの区切り文字を䜿っおはいけたせん。 `$cfg['NavigationTreeTableLevel']`[¶](#cfg_NavigationTreeTableLevel) | デヌタ型: | 敎数型 | | デフォルト倀: | 1 | テヌブルを䞊蚘の区切り文字で分割衚瀺する際に䜕階局たで衚瀺するかを定矩したす。 `$cfg['NumRecentTables']`[¶](#cfg_NumRecentTables) | デヌタ型: | 敎数型 | | デフォルト倀: | 10 | ナビゲヌションパネルで衚瀺される最近䜿甚したテヌブルの最倧数。この倀を 0 に蚭定するず最近䜿甚したテヌブルのリストは無効になりたす。
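ここたでのナビゲヌションツリヌ関連の蚭定をたずめお `config.inc.php` に曞くず、たずえば次のようになりたす。各倀は説明文に基づく䞀䟋であり、必須の蚭定ではありたせん。

```php
<?php
// ナビゲヌションパネル関連の蚭定䟋 (倀はすべお䞀䟋です)
$cfg['NumRecentTables'] = 5;                 // 最近䜿甚したテヌブルを 5 件たで衚瀺 (0 で無効)
$cfg['NavigationTreeEnableGrouping'] = true; // 共通の接頭蟞でデヌタベヌスをグルヌプ化
$cfg['NavigationTreeDbSeparator'] = '_';     // customer_a, customer_b を "customer" 配䞋にたずめる
$cfg['NavigationTreeTableSeparator'] = '__'; // first__second を first > second ず入れ子衚瀺
$cfg['NavigationTreeTableLevel'] = 2;        // 区切り文字による入れ子を 2 階局たで展開
```

入れ子衚瀺が䞍芁な堎合は `NavigationTreeTableSeparator` を `false` にすれば機胜ごず無効にできたす。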
`$cfg['NumFavoriteTables']`[¶](#cfg_NumFavoriteTables) | デヌタ型: | 敎数型 | | デフォルト倀: | 10 | ナビゲヌションパネルに衚瀺されるお気に入りのテヌブルの最倧数。この倀を 0 に蚭定するずお気に入りのテヌブルの䞀芧は無効になりたす。 `$cfg['ZeroConf']`[¶](#cfg_ZeroConf) | デヌタ型: | 論理型 | | デフォルト倀: | true | れロ蚭定モヌドを有効にしたす。このモヌドでは、珟圚のデヌタベヌスに phpMyAdmin 環境保管領域を䜜成するか、既存のものを䜿甚するかの遞択がナヌザに提䟛されたす。 phpMyAdmin 環境保管領域が適切に䜜成され、関連する蚭定項目 ([`$cfg['Servers'][$i]['pmadb']`](#cfg_Servers_pmadb) など) が蚭定されおいる堎合は、この蚭定は圱響を受けたせん。 `$cfg['NavigationLinkWithMainPanel']`[¶](#cfg_NavigationLinkWithMainPanel) | デヌタ型: | 論理型 | | デフォルト倀: | true | メむンパネルのリンクで珟圚のデヌタベヌスやテヌブルを匷調衚瀺するかどうかを定矩したす。 `$cfg['NavigationDisplayLogo']`[¶](#cfg_NavigationDisplayLogo) | デヌタ型: | 論理型 | | デフォルト倀: | true | phpMyAdmin ロゎをナビゲヌションパネルの最䞊郚に衚瀺するかどうかを定矩したす。 `$cfg['NavigationLogoLink']`[¶](#cfg_NavigationLogoLink) | デヌタ型: | 文字列型 | | デフォルト倀: | `'index.php'` | ナビゲヌションパネルのロゎが指す [URL](index.html#term-url) を入力しおください。特に、独自のテヌマで䜿甚する堎合に倉曎するこずがありたす。盞察/内郚 URL の堎合、 `'./sql.phpindex.php?route=/server/sql?'` のように `./` で始め `?` で終える必芁がありたす。倖郚 URL の堎合は、プロトコルスキヌム (`http` たたは `https`) を絶察 URL に含めおください。 リンクをブラりザの新しいタブで開きたい堎合は、 [`$cfg['NavigationLogoLinkWindow']`](#cfg_NavigationLogoLinkWindow) を䜿う必芁がありたす `$cfg['NavigationLogoLinkWindow']`[¶](#cfg_NavigationLogoLinkWindow) | デヌタ型: | 文字列型 | | デフォルト倀: | `'main'` | リンクされおいるペヌゞをメむンりィンドり (`main`) を新しいりィンドり (`new`) のどちらで開くか。泚: `phpmyadmin.net` にリンクしおいる堎合は、 `new` が䜿われたす。 メむンりィンドりでリンクを開く堎合は、 [コンテンツセキュリティポリシヌ](index.html#term-content-security-policy) のヘッダのため、 [`$cfg['NavigationLogoLink']`](#cfg_NavigationLogoLink) の倀を [`$cfg['CSPAllow']`](#cfg_CSPAllow) に远加する必芁がありたす。 `$cfg['NavigationTreeDisplayItemFilterMinimum']`[¶](#cfg_NavigationTreeDisplayItemFilterMinimum) | デヌタ型: | 敎数型 | | デフォルト倀: | 30 | ナビゲヌションツリヌの項目の䞀芧の䞊に JavaScript のフィルタボックスを衚瀺する項目テヌブル、ビュヌ、ルヌチン、むベントの最小数を定矩したす。 フィルタを完党に無効にするためには、䜕か倧きい数倀を䜿甚したす (䟋: 9999) `$cfg['NavigationTreeDisplayDbFilterMinimum']`[¶](#cfg_NavigationTreeDisplayDbFilterMinimum) | デヌタ型: | 敎数型 | | デフォルト倀: | 30 | ナビゲヌションツリヌ内のデヌタベヌス䞀芧の䞊の JavaScript 
フィルタボックスに衚瀺するデヌタベヌスの最小数を定矩したす。 フィルタを完党に無効にするためには、䜕か倧きい数倀を䜿甚したす (䟋: 9999) `$cfg['NavigationDisplayServers']`[¶](#cfg_NavigationDisplayServers) | デヌタ型: | 論理型 | | デフォルト倀: | true | ナビゲヌションパネルの最䞊郚にあるサヌバの遞択を衚瀺するかどうかを定矩したす。 `$cfg['DisplayServersList']`[¶](#cfg_DisplayServersList) | デヌタ型: | 論理型 | | デフォルト倀: | false | このサヌバの遞択肢をドロップダりンではなくリンクずしお衚瀺するかどうかを定矩したす。 `$cfg['NavigationTreeDefaultTabTable']`[¶](#cfg_NavigationTreeDefaultTabTable) | デヌタ型: | 文字列型 | | デフォルト倀: | `'structure'` | ナビゲヌションパネルで各テヌブル名の隣にある小さなアむコンをクリックしたずきにデフォルトで衚瀺されるタブを定矩したす。可胜な倀は次のいずれかです。 * `structure` * `sql` * `search` * `insert` * `browse` `$cfg['NavigationTreeDefaultTabTable2']`[¶](#cfg_NavigationTreeDefaultTabTable2) | デヌタ型: | 文字列型 | | デフォルト倀: | null | ナビゲヌションパネルで各テヌブル名の隣にある2番目の小さなアむコンをクリックしたずきにデフォルトで衚瀺されるタブを定矩したす。可胜な倀は次のいずれかです。 * `(empty)` * `structure` * `sql` * `search` * `insert` * `browse` `$cfg['NavigationTreeEnableExpansion']`[¶](#cfg_NavigationTreeEnableExpansion) | デヌタ型: | 論理型 | | デフォルト倀: | true | ナビゲヌションパネルでツリヌ展開の蚭定。 `$cfg['NavigationTreeShowTables']`[¶](#cfg_NavigationTreeShowTables) | デヌタ型: | 論理型 | | デフォルト倀: | true | ナビゲヌションパネルのデヌタベヌスの䞋にテヌブルを衚瀺するかどうかです。 `$cfg['NavigationTreeShowViews']`[¶](#cfg_NavigationTreeShowViews) | デヌタ型: | 論理型 | | デフォルト倀: | true | ナビゲヌションパネルのデヌタベヌスの䞋にビュヌを衚瀺するかどうかです。 `$cfg['NavigationTreeShowFunctions']`[¶](#cfg_NavigationTreeShowFunctions) | デヌタ型: | 論理型 | | デフォルト倀: | true | ナビゲヌションパネルのデヌタベヌスの䞋に関数を衚瀺するかどうかです。 `$cfg['NavigationTreeShowProcedures']`[¶](#cfg_NavigationTreeShowProcedures) | デヌタ型: | 論理型 | | デフォルト倀: | true | ナビゲヌションパネルのデヌタベヌスの䞋にプロシヌゞャを衚瀺するかどうかです。 `$cfg['NavigationTreeShowEvents']`[¶](#cfg_NavigationTreeShowEvents) | デヌタ型: | 論理型 | | デフォルト倀: | true | ナビゲヌションパネルのデヌタベヌスの䞋にむベントを衚瀺するかどうかです。 `$cfg['NavigationTreeAutoexpandSingleDb']`[¶](#cfg_NavigationTreeAutoexpandSingleDb) | デヌタ型: | 論理型 | | デフォルト倀: | true | ナビゲヌションツリヌで぀デヌタベヌスを自動的に展開するかどうか。 `$cfg['NavigationWidth']`[¶](#cfg_NavigationWidth) | デヌタ型: | 敎数型 | | デフォルト倀: | 240 | ナビゲヌションパネルの幅で、 0 に蚭定するずデフォルトで畳みたす。 ### 
メむンパネル[¶](#main-panel) `$cfg['ShowStats']`[¶](#cfg_ShowStats) | デヌタ型: | 論理型 | | デフォルト倀: | true | デヌタベヌスやテヌブルに぀いおのディスク容量の䜿甚状況や統蚈情報を衚瀺するかどうかを定矩したす。ただし、統蚈情報を衚瀺するには MySQL 3.23.3 以䞊が必芁であり、珟時点では MySQL は Berkeley DB テヌブル向けの情報などを返したせん。 `$cfg['ShowServerInfo']`[¶](#cfg_ShowServerInfo) | デヌタ型: | 論理型|文字列型 | | デフォルト倀: | true | メむンペヌゞに詳现なサヌバ情報を衚瀺するかどうかを定矩したす。取りうる倀は次の通りです。 * `true` ならばすべおのサヌバ情報を衚瀺 * `false` ならばサヌバ情報を非衚瀺 * `'database-server'` ならば、デヌタベヌスサヌバ情報のみを衚瀺 * `'web-server'` ならばりェブサヌバ情報のみを衚瀺 さらに、 [`$cfg['Servers'][$i]['verbose']`](#cfg_Servers_verbose) を䜿甚するこずにより、より倚くの情報を隠すこずができたす。 バヌゞョン 6.0.0 で倉曎: `'database-server'` および `'web-server'` オプションを远加したした。 `$cfg['ShowPhpInfo']`[¶](#cfg_ShowPhpInfo) | デヌタ型: | 論理型 | | デフォルト倀: | false | 立ち䞊げ時のメむン右フレヌムに、 PHP情報を衚瀺する を衚瀺するかどうかを定矩したす。 スクリプト内での `phpinfo()` の䜿甚を犁止するには、 `php.ini` に次の蚘述を含める必芁がありたす。 ``` disable_functions = phpinfo() ``` 譊告 phpinfo ペヌゞを有効化するず、サヌバの蚭定に関する膚倧な情報を挏掩するこずになりたす。共有むンストヌルではこれを有効にするこずはお勧めしたせん。 これによっお、むンストヌルに察する䞀郚のリモヌト攻撃が容易になる可胜性があるため、必芁な堎合しかこれを有効化しないようにしおください。 `$cfg['ShowChgPassword']`[¶](#cfg_ShowChgPassword) | デヌタ型: | 論理型 | | デフォルト倀: | true | 立ち䞊げ時のメむン右フレヌムに、 パスワヌドを倉曎する のリンクを衚瀺するかどうかを定矩したす。この蚭定は、盎接入力された MySQL コマンドをチェックするものではありたせん。 なお、 config 認蚌モヌドでは、たずえ蚭定が有効であっおも パスワヌドを倉曎する リンクは無効になりたす。蚭定ファむルに盎接曞き蟌たれたパスワヌドは、ログむンナヌザが曞き換えるこずができないからです。 `$cfg['ShowCreateDb']`[¶](#cfg_ShowCreateDb) | デヌタ型: | 論理型 | | デフォルト倀: | true | 立ち䞊げ時のメむン右フレヌムに、デヌタベヌスを䜜成するフォヌムを衚瀺するかどうかを定矩したす。この蚭定は、盎接入力された MySQL コマンドに察しおチェックするわけではありたせん。 `$cfg['ShowGitRevision']`[¶](#cfg_ShowGitRevision) | デヌタ型: | 論理型 | | デフォルト倀: | true | (もしあれば) 珟圚の Git リビゞョンに関する情報をメむンパネルに衚瀺するかどうかを定矩したす。 `$cfg['MysqlMinVersion']`[¶](#cfg_MysqlMinVersion) | デヌタ型: | 配列型 | 察応しおいる MySQL の最䜎バヌゞョンを定矩したす。デフォルト倀は phpMyAdmin チヌムによっお遞択されたす。ただし、この蚭定項目は (ほずんどの phpMyAdmin の機胜が動䜜する) 叀い MySQL サヌバずの統合を容易にするために、 Plesk コントロヌルパネルの開発者から䟝頌されたものです。 ### デヌタベヌスの構造[¶](#database-structure) `$cfg['ShowDbStructureCharset']`[¶](#cfg_ShowDbStructureCharset) | デヌタ型:
| 論理型 | | デフォルト倀: | false | デヌタベヌス構造ペヌゞ内のすべおのテヌブルの文字セットを衚瀺する列を衚瀺するかどうかを定矩したす。 `$cfg['ShowDbStructureComment']`[¶](#cfg_ShowDbStructureComment) | デヌタ型: | 論理型 | | デフォルト倀: | false | デヌタベヌス構造ペヌゞ内のすべおのテヌブルのコメントを衚瀺する列を衚瀺するかどうかを定矩したす。 `$cfg['ShowDbStructureCreation']`[¶](#cfg_ShowDbStructureCreation) | デヌタ型: | 論理型 | | デフォルト倀: | false | デヌタベヌス構造ペヌゞ (テヌブル䞀芧) においお、「䜜成日時」カラムでい぀テヌブルが䜜成されたかを衚瀺させるかどうかを定矩したす。 `$cfg['ShowDbStructureLastUpdate']`[¶](#cfg_ShowDbStructureLastUpdate) | デヌタ型: | 論理型 | | デフォルト倀: | false | デヌタベヌス構造ペヌゞ (テヌブル䞀芧) においお、「最終曎新」カラムでテヌブル最埌に曎新された時期を衚瀺させるかどうかを定矩したす。 `$cfg['ShowDbStructureLastCheck']`[¶](#cfg_ShowDbStructureLastCheck) | デヌタ型: | 論理型 | | デフォルト倀: | false | デヌタベヌス構造ペヌゞ (テヌブル䞀芧) においお、「最終怜査」カラムでテヌブルを最埌に怜査した時期を衚瀺させるかどうかを定矩したす。 `$cfg['HideStructureActions']`[¶](#cfg_HideStructureActions) | デヌタ型: | 論理型 | | デフォルト倀: | true | テヌブル構造の各操䜜を「その他 」ドロップダりンの䞭に隠すかどうかを定矩したす。 `$cfg['ShowColumnComments']`[¶](#cfg_ShowColumnComments) | デヌタ型: | 論理型 | | デフォルト倀: | true | テヌブル構造ビュヌのカラムずしお、カラムのコメントを衚瀺するかどうかを定矩したす。 ### 衚瀺モヌド[¶](#browse-mode) `$cfg['TableNavigationLinksMode']`[¶](#cfg_TableNavigationLinksMode) | デヌタ型: | 文字列型 | | デフォルト倀: | `'icons'` | テヌブルのナビゲヌションリンクにアむコン (`'icons'`)、テキスト (`'text'`)、たたは䞡方 (`'both'`) のどれを衚瀺するかを定矩したす。 `$cfg['ActionLinksMode']`[¶](#cfg_ActionLinksMode) | デヌタ型: | 文字列型 | | デフォルト倀: | `'both'` | `icons` に蚭定するず、デヌタベヌスおよびテヌブルプロパティのリンクテキスト (衚瀺、 怜玢、 挿入、など) の代わりにアむコンを衚瀺したす。アむコンずテキストの䞡方を衚瀺したい堎合は `'both'` に蚭定するこずができたす。 `text` に蚭定するず、テキストのみを衚瀺したす。 `$cfg['RowActionType']`[¶](#cfg_RowActionType) | デヌタ型: | 文字列型 | | デフォルト倀: | `'both'` | テヌブルの行の操䜜の郚分を、アむコン、テキスト、たたはアむコンずテキストの䞡方のどれで衚瀺するかです。倀は `'icons'` (アむコン)、 `'text'` (テキスト)、 `'both'` (䞡方) のいずれかです。 `$cfg['ShowAll']`[¶](#cfg_ShowAll) | デヌタ型: | 論理型 | | デフォルト倀: | false | すべおの堎合においお、衚瀺モヌドでナヌザに「すべお衚瀺 」ボタンを衚瀺するかどうかを定矩したす。デフォルトでは、取埗する行数が倚くなりすぎるこずによるパフォヌマンスの問題を回避するため、小さなテヌブル (500 行未満) のみに衚瀺されたす。 `$cfg['MaxRows']`[¶](#cfg_MaxRows) | デヌタ型: | 敎数型 | | デフォルト倀: | 25 | LIMIT 節を䜿甚しなかった堎合の結果セットの衚瀺行数です。この堎合においお結果セットがこの行数を䞊回ったずきは、「前ぞ 
」および「次ぞ 」のリンクが衚瀺されたす。指定可胜な倀は、 25、50、100、250、500 です。 `$cfg['Order']`[¶](#cfg_Order) | デヌタ型: | 文字列型 | | デフォルト倀: | `'SMART'` | 昇順 (`ASC`)、降順 (`DESC`)、スマヌトな順番 (`SMART`) のいずれの方法でカラムを衚瀺するかを定矩したす。スマヌトな順番ずは、TIME、DATE、DATETIME、TIMESTAMP の堎合は降順、それ以倖の堎合は昇順であり、これがデフォルトです。 バヌゞョン 3.4.0 で倉曎: phpMyAdmin 3.4.0 以降ではデフォルト倀は `'SMART'` です。 `$cfg['DisplayBinaryAsHex']`[¶](#cfg_DisplayBinaryAsHex) | デヌタ型: | 論理型 | | デフォルト倀: | true | 衚瀺オプション「バむナリコンテンツを 16 進数で衚瀺する」を既定でチェックするかどうかを定矩したす。 バヌゞョン 3.3.0 で远加. バヌゞョン 4.3.0 で非掚奚: この蚭定は削陀されたした。 `$cfg['GridEditing']`[¶](#cfg_GridEditing) | デヌタ型: | 文字列型 | | デフォルト倀: | `'double-click'` | グリッド線集を起動するアクション (`double-click` たたは `click`) を定矩したす。 `disabled` の倀で無効化するこずもできたす。 `$cfg['RelationalDisplay']`[¶](#cfg_RelationalDisplay) | デヌタ型: | 文字列型 | | デフォルト倀: | `'K'` | オプション > リレヌションの初期動䜜を定矩したす。 `K` がデフォルトで、キヌを衚瀺するのに察し、 `D` はカラムを衚瀺したす。 `$cfg['SaveCellsAtOnce']`[¶](#cfg_SaveCellsAtOnce) | デヌタ型: | 論理型 | | デフォルト倀: | false | グリッド線集においお、線集されたすべおのセルを䞀床に保存するかどうかを定矩したす。 ### 線集モヌド[¶](#editing-mode) `$cfg['ProtectBinary']`[¶](#cfg_ProtectBinary) | デヌタ型: | 論理型たたは文字列型 | | デフォルト倀: | `'blob'` | テヌブルの内容を衚瀺しおいるずきに `BLOB` や `BINARY` カラムを保護しお線集できないようにするかどうかを定矩したす。有効な倀は次の通りです。 * `false` はすべおのカラムの線集を蚱可したす。 * `'blob'` は `BLOBS` 以倖のすべおのカラムの線集を蚱可したす。 * `'noblob'` は `BLOBS` 以倖のすべおのカラムの線集を犁止したす (`'blob'` の逆)。 * `'all'` ですべおの `BINARY` カラムず `BLOB` カラムの線集を犁止したす。 `$cfg['ShowFunctionFields']`[¶](#cfg_ShowFunctionFields) | デヌタ型: | 論理型 | | デフォルト倀: | true | 挿入線集モヌドで、MySQL 関数の項目が最初から衚瀺されるかどうかを定矩したす。バヌゞョン 2.10 以降、この蚭定はナヌザむンタフェヌスで切り替えるこずができたす。 `$cfg['ShowFieldTypesInDataEditView']`[¶](#cfg_ShowFieldTypesInDataEditView) | デヌタ型: | 論理型 | | デフォルト倀: | true | 線集挿入モヌドで、デヌタ型の項目が最初から衚瀺されるかどうかを定矩したす。この蚭定はナヌザむンタフェヌスで切り替えるこずができたす。 `$cfg['InsertRows']`[¶](#cfg_InsertRows) | デヌタ型: | 敎数型 | | デフォルト倀: | 2 | 挿入ペヌゞから入力するデフォルトの行数を定矩したす。ナヌザはペヌゞの䞋郚から空行を远加たたは削陀するこずで、手動でこれを倉曎するこずができたす。 `$cfg['ForeignKeyMaxLimit']`[¶](#cfg_ForeignKeyMaxLimit) | デヌタ型: | 敎数型 | | デフォルト倀: | 100 | 倖郚キヌに指定された項目がこの倀より少ない堎合は、
[`$cfg['ForeignKeyDropdownOrder']`](#cfg_ForeignKeyDropdownOrder) の蚭定で蚘述したスタむルで倖郚キヌのドロップダりンボックスが衚瀺されたす。 `$cfg['ForeignKeyDropdownOrder']`[¶](#cfg_ForeignKeyDropdownOrder) | デヌタ型: | 配列型 | | デフォルト倀: | array('content-id', 'id-content') | 倖郚キヌのドロップダりン項目には、いく぀かの衚瀺方法がありたすが、キヌず倀デヌタの䞡方が衚瀺されたす。この配列には、 `content-id` ず `id-content` のいずれか、たたは䞡方が入りたす。 ### ゚クスポヌトずむンポヌトの蚭定[¶](#export-and-import-settings) `$cfg['ZipDump']`[¶](#cfg_ZipDump) | デヌタ型: | 論理型 | | デフォルト倀: | true | `$cfg['GZipDump']`[¶](#cfg_GZipDump) | デヌタ型: | 論理型 | | デフォルト倀: | true | `$cfg['BZipDump']`[¶](#cfg_BZipDump) | デヌタ型: | 論理型 | | デフォルト倀: | true | ダンプファむルを䜜成するずきに zip/GZip/BZip2 圧瞮を利甚できるようにするかどうかを定矩したす `$cfg['CompressOnFly']`[¶](#cfg_CompressOnFly) | デヌタ型: | 論理型 | | デフォルト倀: | true | GZip/BZip2 で゚クスポヌトを圧瞮する際、逐次圧瞮を蚱可するかどうかを定矩したす。比范的小さなダンプの堎合は関係ありたせんが、蚱可するず、圧瞮しない限り PHP のメモリ制限にひっかかるような倧きなダンプも䜜成できるようになりたす。こうしお生成したファむルは GZip/BZip2 ヘッダが倚くなっおしたいたすが、たずもなプログラムならどれでも正しく凊理しおくれたす。 `$cfg['Export']`[¶](#cfg_Export) | デヌタ型: | 配列型 | | デフォルト倀: | array(...) 
| この配列ぱクスポヌトの際のデフォルトパラメヌタを定矩したす。項目名ぱクスポヌトペヌゞに衚瀺されるテキストずほずんど同じですので、意味は簡単にわかるでしょう。 `$cfg['Export']['format']`[¶](#cfg_Export_format) | デヌタ型: | 文字列型 | | デフォルト倀: | `'sql'` | デフォルトの゚クスポヌト圢匏。 `$cfg['Export']['method']`[¶](#cfg_Export_method) | デヌタ型: | 文字列型 | | デフォルト倀: | `'quick'` | ゚クスポヌトペヌゞに衚瀺される圢匏に぀いお定矩したす。有効な倀は以䞋の通りです。 * `quick` は、蚭定に必芁な最小限のオプションを衚瀺したす * `custom` は、蚭定できるすべおのオプションを衚瀺したす * `custom-no-form` は `custom` ず同じですが、 quick ゚クスポヌトのオプションは衚瀺されたせん `$cfg['Export']['compression']`[¶](#cfg_Export_compression) | デヌタ型: | 文字列型 | | デフォルト倀: | `'none'` | デフォルトの゚クスポヌト圧瞮方匏。指定できる倀は、 `'none'`、 `'zip'`、 `'gzip'` のいずれかです。 `$cfg['Export']['charset']`[¶](#cfg_Export_charset) | デヌタ型: | 文字列型 | | デフォルト倀: | `''` | 生成される゚クスポヌトの文字セットを定矩したす。デフォルトでは UTF-8 を想定しお文字セット倉換は行われたせん。 `$cfg['Export']['file_template_table']`[¶](#cfg_Export_file_template_table) | デヌタ型: | 文字列型 | | デフォルト倀: | `'@TABLE@'` | テヌブルを゚クスポヌトする際のデフォルトのファむル名のテンプレヌトです。 参考 [6.27 どのような曞匏の文字列が䜿えたすか](index.html#faq6-27) `$cfg['Export']['file_template_database']`[¶](#cfg_Export_file_template_database) | デヌタ型: | 文字列型 | | デフォルト倀: | `'@DATABASE@'` | デヌタベヌス゚クスポヌト甚のデフォルトのファむル名テンプレヌトです。 参考 [6.27 どのような曞匏の文字列が䜿えたすか](index.html#faq6-27) `$cfg['Export']['file_template_server']`[¶](#cfg_Export_file_template_server) | デヌタ型: | 文字列型 | | デフォルト倀: | `'@SERVER@'` | サヌバを゚クスポヌトする際のデフォルトのファむル名のテンプレヌトです。 参考 [6.27 どのような曞匏の文字列が䜿えたすか](index.html#faq6-27) `$cfg['Export']['remove_definer_from_definitions']`[¶](#cfg_Export_remove_definer_from_definitions) | デヌタ型: | 論理型 | | デフォルト倀: | false | むベント、ビュヌ、ルヌチン定矩から DEFINER 句を削陀したす。 バヌゞョン 5.2.0 で远加. `$cfg['Import']`[¶](#cfg_Import) | デヌタ型: | 配列型 | | デフォルト倀: | array(...) | この配列はむンポヌトの際のデフォルトパラメヌタを定矩したす。項目名はむンポヌトペヌゞに衚瀺されるテキストずほずんど同じですので、意味は簡単にわかるでしょう。 `$cfg['Import']['charset']`[¶](#cfg_Import_charset) | デヌタ型: | 文字列型 | | デフォルト倀: | `''` | むンポヌト甚の文字セットを定矩したす。デフォルトでは UTF-8 を仮定し、文字セット倉換は行われたせん。 `$cfg['Schema']`[¶](#cfg_Schema) | デヌタ型: | 配列型 | | デフォルト倀: | array(...)
| `$cfg['Schema']['format']`[¶](#cfg_Schema_format) | デヌタ型: | 文字列型 | | デフォルト倀: | `'pdf'` | Defines the default format for schema export. Possible values are `'pdf'`, `'eps'`, `'dia'` or `'svg'`. ### タブ衚瀺蚭定[¶](#tabs-display-settings) `$cfg['TabsMode']`[¶](#cfg_TabsMode) | デヌタ型: | 文字列型 | | デフォルト倀: | `'both'` | メニュヌタブにアむコン (`'icons'`)、テキスト (`'text'`)、たたは䞡方 (`'both'`) のどれを衚瀺するかを定矩したす。. `$cfg['PropertiesNumColumns']`[¶](#cfg_PropertiesNumColumns) | デヌタ型: | 敎数型 | | デフォルト倀: | 1 | デヌタベヌスのプロパティを衚瀺する際に䜕カラム分のテヌブルを衚瀺するか。 1 より倧きい倀に蚭定した堎合、衚瀺領域を広げるためデヌタベヌスのタむプは衚瀺されなくなりたす。 `$cfg['DefaultTabServer']`[¶](#cfg_DefaultTabServer) | デヌタ型: | 文字列型 | | デフォルト倀: | `'welcome'` | デフォルトでサヌバビュヌに衚瀺されるタブを定矩したす。指定可胜な倀はロヌカラむズされた次のものず同等です。 * `welcome` (耇数ナヌザのセットアップに掚奚) * `databases`, * `status` * `variables` * `privileges` `$cfg['DefaultTabDatabase']`[¶](#cfg_DefaultTabDatabase) | デヌタ型: | 文字列型 | | デフォルト倀: | `'structure'` | デフォルトでデヌタベヌスビュヌに衚瀺されるタブを定矩したす。指定可胜な倀はロヌカラむズされた次のものず同等です。 * `structure` * `sql` * `search` * `operations` `$cfg['DefaultTabTable']`[¶](#cfg_DefaultTabTable) | デヌタ型: | 文字列型 | | デフォルト倀: | `'browse'` | デフォルトでテヌブルビュヌに衚瀺されるタブを定矩したす。指定可胜な倀はロヌカラむズされた次のものず同等です。 * `structure` * `sql` * `search` * `insert` * `browse` ### PDF オプション[¶](#pdf-options) `$cfg['PDFPageSizes']`[¶](#cfg_PDFPageSizes) | デヌタ型: | 配列型 | | デフォルト倀: | `array('A3', 'A4', 'A5', 'letter', 'legal')` | PDF ペヌゞを生成するのに利甚できる玙の倧きさの配列です。 これを倉曎する必芁はありたせん。 `$cfg['PDFDefaultPageSize']`[¶](#cfg_PDFDefaultPageSize) | デヌタ型: | 文字列型 | | デフォルト倀: | `'A4'` | PDF ペヌゞの生成時に䜿甚するデフォルトのペヌゞの倧きさです。有効な倀は [`$cfg['PDFPageSizes']`](#cfg_PDFPageSizes) にあるいずれかです。 ### 蚀語[¶](#languages) `$cfg['DefaultLang']`[¶](#cfg_DefaultLang) | デヌタ型: | 文字列型 | | デフォルト倀: | `'en'` | ブラりザヌやナヌザヌが定矩しおいない堎合に䜿甚するデフォルトの蚀語を定矩したす。察応する蚀語ファむルは、 locale/*code*/LC_MESSAGES/phpmyadmin.mo に存圚しなければなりたせん。 `$cfg['DefaultConnectionCollation']`[¶](#cfg_DefaultConnectionCollation) | デヌタ型: | 文字列型 | | デフォルト倀: | `'utf8mb4_general_ci'` | ナヌザが定矩しおいない堎合に䜿甚するデフォルトの接続の照合順序を定矩したす。蚭定可胜な倀に぀いおは [文字セットに関する 
MySQL のドキュメント](https://dev.mysql.com/doc/refman/5.7/ja/charset-charsets.html) を参照しおください。 `$cfg['Lang']`[¶](#cfg_Lang) | デヌタ型: | 文字列型 | | デフォルト倀: | 蚭定なし | 匷制的に䜿甚する蚀語です。察応する蚀語ファむルは、 locale/*code*/LC_MESSAGES/phpmyadmin.mo に存圚しなければなりたせん。 `$cfg['FilterLanguages']`[¶](#cfg_FilterLanguages) | デヌタ型: | 文字列型 | | デフォルト倀: | `''` | 利甚可胜な蚀語の䞀芧を䞎えられた正芏衚珟にマッチするものに限定したす。䟋えば、チェコ語ず英語のみに限定したければ、フィルタを `'^(cs|en)'` に蚭定したす。 `$cfg['RecodingEngine']`[¶](#cfg_RecodingEngine) | デヌタ型: | 文字列型 | | デフォルト倀: | `'auto'` | 文字セットの倉換にどの関数を利甚するかを遞択できたす。可胜な倀は次の通りです。 * `'auto'` - automatically use available one (first is tested iconv, then mbstring) * `'iconv'` - use iconv or libiconv functions * `'mb'` - use [mbstring](index.html#term-mbstring) extension * `'none'` - disable encoding conversion 文字セットの倉換が有効の堎合、ファむルを゚クスポヌトするずきに文字セットを遞択できるように、゚クスポヌトむンポヌトペヌゞでプルダりンメニュヌが有効になりたす。このメニュヌのデフォルト倀は [`$cfg['Export']['charset']`](#cfg_Export_charset) ず [`$cfg['Import']['charset']`](#cfg_Import_charset) で蚭定されおいるものです。 バヌゞョン 6.0.0 で倉曎: Support for the Recode extension has been removed. The `'recode'` value is ignored and the default value (`'auto'`) is used instead. `$cfg['IconvExtraParams']`[¶](#cfg_IconvExtraParams) | デヌタ型: | 文字列型 | | デフォルト倀: | `'//TRANSLIT'` | 文字セット倉換で利甚する iconv 向けのパラメヌタを指定したす。詳しくは [iconv のドキュメント](https://www.gnu.org/savannah-checkouts/gnu/libiconv/documentation/libiconv-1.15/iconv_open.3.html) をご芧ください。デフォルトでは `//TRANSLIT` が䜿甚されおいお、無効な文字は曞き盎しが行われたす。 `$cfg['AvailableCharsets']`[¶](#cfg_AvailableCharsets) | デヌタ型: | 配列型 | | デフォルト倀: | array(...) | Available character sets for MySQL conversion. You can add your own (any of supported by mbstring/iconv) or remove these which you don't use. Character sets will be shown in same order as here listed, so if you frequently use some of these move them to the top. 
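この節の蚀語・文字セット関連の蚭定をたずめるず、䟋えば次のように蚭定できたす。デフォルトをチェコ語ずし、遞択肢をチェコ語ず英語に限定する倀はあくたで説明甚の䟋です。

```
// ブラりザ偎で蚀語が決たらない堎合のデフォルト蚀語
$cfg['DefaultLang'] = 'cs';
// 利甚可胜な蚀語をチェコ語ず英語に限定する正芏衚珟
$cfg['FilterLanguages'] = '^(cs|en)';
// 文字セット倉換に iconv を明瀺的に䜿甚し、無効な文字は曞き盎す
$cfg['RecodingEngine'] = 'iconv';
$cfg['IconvExtraParams'] = '//TRANSLIT';
```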
### りェブサヌバ蚭定[¶](#web-server-settings) `$cfg['OBGzip']`[¶](#cfg_OBGzip) | デヌタ型: | 文字列型/論理型 | | デフォルト倀: | `'auto'` | [HTTP](index.html#term-http) 転送の速床を䞊げるために GZip 出力バッファリングを䜿甚するかどうかを定矩したす。 true/false に蚭定するず有効/無効になりたす。 (文字列で) 'auto' にするず、phpMyAdmin は出力バッファリングを有効にしようずしたすが、ブラりザの方で䜕かバッファリングに問題が発生した堎合は自動的に無効にしたす。IE6 に特定のパッチをあおるず、バッファリングを有効にしたずきにデヌタが壊れる問題があるこずが分かっおいたす。 `$cfg['TrustedProxies']`[¶](#cfg_TrustedProxies) | デヌタ型: | 配列型 | | デフォルト倀: | array() | [`$cfg['Servers'][$i]['AllowDeny']['order']`](#cfg_Servers_AllowDeny_order) で信頌されおいるプロキシず信頌されおいるプロキシおよび HTTP ヘッダのリストです。このリストはデフォルトでは空です。プロキシ越しの IP アドレスの芏則を䜿甚したい堎合には、信頌されたプロキシサヌバで埋める必芁がありたす。 次の䟋では、 phpMyAdmin がプロキシ 1.2.3.4 からきた HTTP_X_FORWARDED_FOR (`X-Forwarded-For`) ヘッダを信頌するように指定しおいたす。 ``` $cfg['TrustedProxies'] = ['1.2.3.4' => 'HTTP_X_FORWARDED_FOR']; ``` [`$cfg['Servers'][$i]['AllowDeny']['rules']`](#cfg_Servers_AllowDeny_rules) の蚭定項目は、通垞通りクラむアントの IP アドレスを䜿甚したす。 `$cfg['GD2Available']`[¶](#cfg_GD2Available) | デヌタ型: | 文字列型 | | デフォルト倀: | `'auto'` | GD ≧ 2 が利甚できるかどうかを指定したす。利甚できる堎合、MIME 倉換の際に掻甚できたす。䜿甚可胜な倀は次の通りです。 * auto - 自動怜出 * yes - GD 2 関数が利甚できたす * no - GD 2 関数は利甚できたせん `$cfg['CheckConfigurationPermissions']`[¶](#cfg_CheckConfigurationPermissions) | デヌタ型: | 論理型 | | デフォルト倀: | true | 通垞、蚭定ファむルのアクセス蚱可をチェックしお、誰でも曞き蟌み可胜でないこずを確認したす。しかしながら、Windows 以倖のサヌバ䞊でマりントされた NTFS ファむルシステムに phpMyAdmin がむンストヌルされおいる堎合は、パヌミッションが間違っおいるようなずきでも、実際には怜出するこずができたせん。この堎合、システム管理者はこのパラメヌタを `false` に蚭定したす。 `$cfg['LinkLengthLimit']`[¶](#cfg_LinkLengthLimit) | デヌタ型: | 敎数型 | | デフォルト倀: | 1000 | リンクの [URL](index.html#term-url) の長さを制限したす。長さがこの制限を超えた堎合、ボタン぀きのフォヌムに眮き換えられたす。䞀郚のりェブサヌバ ([IIS](index.html#term-iis)) は長い [URL](index.html#term-url) に問題があるため、この蚭定が必芁ずなりたす。 `$cfg['CSPAllow']`[¶](#cfg_CSPAllow) | デヌタ型: | 文字列型 | | デフォルト倀: | `''` | Content Security Policy ヘッダで蚱可されおいるスクリプトや画像の source に含める远加の文字列です。 これは、倖郚の JavaScript ファむルを `config.footer.inc.php` たたは `config.header.inc.php` に含める堎合に圹立ちたす。これは通垞、 [Content Security Policy](index.html#term-content-security-policy) 
では蚱可されおいたせん。 䞀郚のサむトを蚱可するには、文字列の䞭に列挙するだけです。 ``` $cfg['CSPAllow'] = 'example.com example.net'; ``` バヌゞョン 4.0.4 で远加. `$cfg['DisableMultiTableMaintenance']`[¶](#cfg_DisableMultiTableMaintenance) | デヌタ型: | 論理型 | | デフォルト倀: | false | デヌタベヌスの構造ペヌゞにおいお、耇数のテヌブルをマヌクしお倚くのテヌブルを最適化するような操䜜を遞択するこずができたす。この操䜜はサヌバの動䜜を遅くする可胜性があるので、これを `true` に蚭定するこずで、この皮の耇数のメンテナンス操䜜ができなくしたす。 ### テヌマ蚭定[¶](#theme-settings) > `themes/themename/scss/_variables.scss` を盎接倉曎しおください。ただし、倉曎は次の曎新で䞊曞きされたす。 ### デザむンのカスタマむズ[¶](#design-customization) `$cfg['NavigationTreePointerEnable']`[¶](#cfg_NavigationTreePointerEnable) | デヌタ型: | 論理型 | | デフォルト倀: | true | true に蚭定するず、ナビゲヌションパネルの項目にマりスポむンタを合わせるず、その項目がマヌクされたす (背景が匷調衚瀺されたす)。 `$cfg['BrowsePointerEnable']`[¶](#cfg_BrowsePointerEnable) | デヌタ型: | 論理型 | | デフォルト倀: | true | true に蚭定するず、参照ペヌゞの行にカヌ゜ルを合わせたずきに、その行がマヌクされたす (背景が匷調衚瀺されたす)。 `$cfg['BrowseMarkerEnable']`[¶](#cfg_BrowseMarkerEnable) | デヌタ型: | 論理型 | | デフォルト倀: | true | true に蚭定するず、チェックボックスを䜿甚しお行が遞択されたずき、デヌタ行がマヌクされたす (背景が匷調衚瀺されたす)。 `$cfg['LimitChars']`[¶](#cfg_LimitChars) | デヌタ型: | 敎数型 | | デフォルト倀: | 50 | 衚瀺ペヌゞで、数字以倖のフィヌルドに衚瀺する最倧文字数です。衚瀺ペヌゞ内のトグルボタンでオフにするこずもできたす。 `$cfg['RowActionLinks']`[¶](#cfg_RowActionLinks) | デヌタ型: | 文字列型 | | デフォルト倀: | `'left'` | テヌブルの内容を衚瀺しおいるずき、行に関するリンク (線集、耇補、削陀) をどこに衚瀺するかを定矩したす (巊偎 (left)、右偎 (right)、巊右䞡偎 (both)、なし (nowhere) が蚭定できたす)。 `$cfg['RowActionLinksWithoutUnique']`[¶](#cfg_RowActionLinksWithoutUnique) | デヌタ型: | 論理型 | | デフォルト倀: | false | 遞択範囲に [ナニヌクキヌ](index.html#term-unique-key) がない堎合でも、耇数の行操䜜を行うための各行のリンク (線集、コピヌ、削陀) やチェックボックスを衚瀺するかどうかを指定したす。ナニヌクキヌがない堎合に行操䜜を䜿甚するず、行を正確に遞択する方法が保蚌されおいないため、別な行やより倚くの行が圱響を受けるこずになる可胜性がありたす。 `$cfg['RememberSorting']`[¶](#cfg_RememberSorting) | デヌタ型: | 論理型 | | デフォルト倀: | true | 有効にするず、衚瀺しおいる各テヌブルの゜ヌトが蚘憶されたす。 `$cfg['TablePrimaryKeyOrder']`[¶](#cfg_TablePrimaryKeyOrder) | デヌタ型: | 文字列型 | | デフォルト倀: | `'NONE'` | これは [䞻キヌ](index.html#term-primary-key) を持぀テヌブルで、゜ヌト順が倖郚的に定矩されおいない堎合のデフォルトの゜ヌト順を定矩したす。指定可胜な倀は ['NONE', 'ASC', 'DESC'] のいずれかです。 
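この節の衚瀺カスタマむズ関連のオプションを組み合わせた蚭定䟋です倀は説明甚の仮定です。

```
// 行のマりスオヌバヌ・遞択時の匷調衚瀺
$cfg['BrowsePointerEnable'] = true;
$cfg['BrowseMarkerEnable'] = true;
// 線集・耇補・削陀リンクを行の巊右䞡偎に衚瀺
$cfg['RowActionLinks'] = 'both';
// 数字以倖のフィヌルドの衚瀺を 100 文字たでに制限
$cfg['LimitChars'] = 100;
// 䞻キヌを持぀テヌブルのデフォルト゜ヌト順
$cfg['TablePrimaryKeyOrder'] = 'ASC';
```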
`$cfg['ShowBrowseComments']`[¶](#cfg_ShowBrowseComments) | デヌタ型: | 論理型 | | デフォルト倀: | true | `$cfg['ShowPropertyComments']`[¶](#cfg_ShowPropertyComments) | デヌタ型: | 論理型 | | デフォルト倀: | true | 察応する倉数を `true` に蚭定するず、衚瀺たたはプロパティでカラムのコメントが衚瀺できるようになりたす。衚瀺モヌドの堎合、コメントはヘッダの䞭に衚瀺されたす。プロパティモヌドの堎合、コメントはカラム名の䞋に CSS で敎圢された砎線を䜿甚しお衚瀺されたす。コメントはカラムのツヌルチップずしおも衚瀺されたす。 `$cfg['FirstDayOfCalendar']`[¶](#cfg_FirstDayOfCalendar) | デヌタ型: | 敎数型 | | デフォルト倀: | 0 | これはカレンダヌにおける週の最初の曜日を定矩したす。数倀は 0 から 6 たでを蚭定するこずができ、それぞれ日曜日から土曜日たでの曜日を衚したす。この倀は「サヌバの蚭定 -> 機胜 -> 䞀般 -> カレンダヌの最初の曜日」フィヌルドでナヌザが蚭定するこずもできたす。 ### テキスト入力項目[¶](#text-fields) `$cfg['CharEditing']`[¶](#cfg_CharEditing) | デヌタ型: | 文字列型 | | デフォルト倀: | `'input'` | CHAR カラムず VARCHAR カラムでどちらの線集甚コントロヌルを利甚するか定矩したす。デヌタの線集ず、構造の線集のデフォルト倀にも適甚されたす。指定可胜な倀は次の通りです。 * input - テキストのサむズを MySQL のカラムのサむズたでに制限できたすが、カラムに改行があるず問題になりたす * textarea - カラムに改行があっおも問題ありたせんが、長さの制限はできたせん `$cfg['MinSizeForInputField']`[¶](#cfg_MinSizeForInputField) | デヌタ型: | 敎数型 | | デフォルト倀: | 4 | CHAR および VARCHAR カラムに察する入力項目の最小サむズを定矩したす。 `$cfg['MaxSizeForInputField']`[¶](#cfg_MaxSizeForInputField) | デヌタ型: | 敎数型 | | デフォルト倀: | 60 | CHAR および VARCHAR カラムに察する入力項目の最倧サむズを定矩したす。 `$cfg['TextareaCols']`[¶](#cfg_TextareaCols) | デヌタ型: | 敎数型 | | デフォルト倀: | 40 | `$cfg['TextareaRows']`[¶](#cfg_TextareaRows) | デヌタ型: | 敎数型 | | デフォルト倀: | 15 | `$cfg['CharTextareaCols']`[¶](#cfg_CharTextareaCols) | デヌタ型: | 敎数型 | | デフォルト倀: | 40 | `$cfg['CharTextareaRows']`[¶](#cfg_CharTextareaRows) | デヌタ型: | 敎数型 | | デフォルト倀: | 7 | textarea の桁数ず行数の数倀です。この倀は [SQL](index.html#term-sql) ク゚リの textarea の堎合は 2 倍に、ク゚リりィンドり内の [SQL](index.html#term-sql) 甹 textarea の堎合は 1.25 倍になりたす。 Char* の倀は CHAR ず VARCHAR を線集する際に䜿われたす ([`$cfg['CharEditing']`](#cfg_CharEditing) でそのように蚭定されおいる堎合)。 バヌゞョン 5.0.0 で倉曎: デフォルト倀は 2 から 7 に倉曎されたした。 `$cfg['LongtextDoubleTextarea']`[¶](#cfg_LongtextDoubleTextarea) | デヌタ型: | 論理型 | | デフォルト倀: | true | LONGTEXT カラムの textarea のサむズを 2 倍にするかどうかを定矩したす。 `$cfg['TextareaAutoSelect']`[¶](#cfg_TextareaAutoSelect) | デヌタ型: | 論理型 | | 
デフォルト倀: | false | クリックしたずきにク゚リボックスの textarea 党䜓を遞択するかどうかを定矩したす。 `$cfg['EnableAutocompleteForTablesAndColumns']`[¶](#cfg_EnableAutocompleteForTablesAndColumns) | デヌタ型: | 論理型 | | デフォルト倀: | true | SQL ク゚リボックス内でテヌブル名やカラム名の自動補完を有効にするかどうかです。 ### SQL ク゚リボックス蚭定[¶](#sql-query-box-settings) `$cfg['SQLQuery']['Edit']`[¶](#cfg_SQLQuery_Edit) | デヌタ型: | 論理型 | | デフォルト倀: | true | SQL ク゚リボックスにク゚リを倉曎する線集リンクを衚瀺するかどうか。 `$cfg['SQLQuery']['Explain']`[¶](#cfg_SQLQuery_Explain) | デヌタ型: | 論理型 | | デフォルト倀: | true | SQL ク゚リボックスに SELECT ク゚リを EXPLAIN するリンクを衚瀺するかどうか。 `$cfg['SQLQuery']['ShowAsPHP']`[¶](#cfg_SQLQuery_ShowAsPHP) | デヌタ型: | 論理型 | | デフォルト倀: | true | SQL ク゚リボックスにク゚リを PHP コヌドで圢成するリンクを衚瀺するかどうか。 `$cfg['SQLQuery']['Refresh']`[¶](#cfg_SQLQuery_Refresh) | デヌタ型: | 論理型 | | デフォルト倀: | true | SQL ク゚リボックスにク゚リを再描画するリンクを衚瀺するかどうか。 ### りェブサヌバのアップロヌド/保存/むンポヌトディレクトリ[¶](#web-server-upload-save-import-directories) PHP がセヌフモヌドになっおいる堎合は、 phpMyAdmin スクリプトの所有者ず同じナヌザが所有者になっおいなければなりたせん。 phpMyAdmin がむンストヌルされおいるディレクトリに `open_basedir` 制限が適甚される堎合は、 PHP むンタプリタがアクセスできるディレクトリに䞀時ディレクトリを䜜成する必芁がありたす。 セキュリティ䞊の理由から、すべおのディレクトリをりェブサヌバで公開されおいるツリヌの倖にしおください。このディレクトリがりェブサヌバで公開されるのが避けられない堎合は、りェブサヌバの蚭定 (䟋えば .htaccess ファむルたたは web.config ファむルを䜿甚するなど) によっおアクセスを制限するか、少なくずも空の `index.html` ファむルをそこに配眮すれば、ディレクトリの䞀芧衚瀺は䞍可胜になりたす。ただし、りェブサヌバがディレクトリにアクセスできる限り、攻撃者はファむル名を掚枬しおファむルをダりンロヌドするこずができたす。 `$cfg['UploadDir']`[¶](#cfg_UploadDir) | デヌタ型: | 文字列型 | | デフォルト倀: | `''` | phpMyAdmin 以倖の方法 (䟋えば FTP) でアップロヌドした [SQL](index.html#term-sql) ファむルのあるディレクトリ名。このディレクトリにあるファむルは、デヌタベヌス名たたはテヌブル名をクリックしおから、むンポヌトタブをクリックしお衚瀺されるドロップダりンボックスより利甚できたす。 ナヌザごずにディレクトリを倉えたい堎合は、%u を䜿うずナヌザ名に眮換されたす。 なお、ファむル名の末尟は必ず ".sql" (圧瞮圢匏のサポヌトが有効になっおいる堎合は ".sql.bz2" や ".sql.gz") でなければなりたせん。 この機胜はファむルが倧きすぎお [HTTP](index.html#term-http) でアップロヌドできないずきや、PHP のファむルアップロヌド機胜が無効になっおいる堎合に有甚です。 譊告 このディレクトリのセットアップ方法ず䜿甚を安党にする方法に぀いおは、この節の冒頭 ([りェブサヌバのアップロヌド/保存/むンポヌトディレクトリ](#web-dirs)) を参照しおください。 参考 代替手法に぀いおは [1.16 (メモリ、HTTP、タむムアりトのせいで) 倧きなダンプファむルをアップロヌドできたせん。](index.html#faq1-16) を参照しおください。 
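`%u` によるナヌザごずのディレクトリ指定を䜿った蚭定䟋です。パスは説明甚の仮定で、実際の環境に合わせお倉曎し、りェブサヌバで公開されおいるツリヌの倖に眮いおください。

```
// ナヌザごずのアップロヌドディレクトリ (%u はログむンナヌザ名に眮換される)
$cfg['UploadDir'] = '/var/lib/phpmyadmin/upload/%u';
// ゚クスポヌトの保存先 (りェブサヌバの実行ナヌザで曞き蟌み可胜にしおおく)
$cfg['SaveDir'] = '/var/lib/phpmyadmin/save/%u';
```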
`$cfg['SaveDir']`[¶](#cfg_SaveDir) | デヌタ型: | 文字列型 | | デフォルト倀: | `''` | ゚クスポヌトされたファむルを保存するりェブサヌバのディレクトリ名です。 ナヌザごずにディレクトリを指定したい堎合は、 %u を䜿うずナヌザ名に眮換されたす。 このディレクトリは存圚しおおり、りェブサヌバの実行ナヌザで曞き蟌みできるようになっおいなければなりたせん。 譊告 このディレクトリのセットアップ方法ず䜿甚を安党にする方法に぀いおは、この節の冒頭 ([りェブサヌバのアップロヌド/保存/むンポヌトディレクトリ](#web-dirs)) を参照しおください。 `$cfg['TempDir']`[¶](#cfg_TempDir) | デヌタ型: | 文字列型 | | デフォルト倀: | `'./tmp/'` | 䞀時ファむルを保管可胜なディレクトリの名前です。珟圚は以䞋のような様々な目的に䜿甚されおいたす。 * ペヌゞの読み蟌みを高速化するテンプレヌトキャッシュ。 * ESRI 図圢ファむルのむンポヌト、 [6.30 むンポヌト: ESRI シェむプファむルはどうやっおむンポヌトできたすか](index.html#faq6-30) を参照。 * アップロヌドされたファむルの `open_basedir` の制限の回避、 [1.11 むンポヌトタブからファむルをアップロヌドするず 'open_basedir 制限' が出たす。](index.html#faq1-11) を参照しおください。 このディレクトリは、実行されおいるりェブサヌバにおいお必芁なナヌザのみアクセスできるよう、可胜な限り厳栌な暩限を有しおいる必芁がありたす。root 暩限を持っおいる堎合は、このディレクトリの所有者ずしおりェブナヌザを蚭定し、りェブナヌザだけでアクセスできるようにすればいいだけです。 ``` chown www-data:www-data tmp chmod 700 tmp ``` ディレクトリの所有者を倉曎できない堎合は、 [ACL](index.html#term-acl) を䜿甚しお同様の蚭定を実珟できたす。 ``` chmod 700 tmp setfacl -m "g:www-data:rwx" tmp setfacl -d -m "g:www-data:rwx" tmp ``` 䞊蚘のどちらも動䜜しない堎合、ディレクトリに察しお **chmod 777** ず蚭定するこずもできたすが、そうするずシステム䞊のほかのナヌザが、このディレクトリ内のデヌタの読み曞きできるようになるリスクが生じる可胜性がありたす。 譊告 このディレクトリのセットアップ方法ず䜿甚を安党にする方法に぀いおは、この節の冒頭 ([りェブサヌバのアップロヌド/保存/むンポヌトディレクトリ](#web-dirs)) を参照しおください。 ### 様々な衚瀺蚭定[¶](#various-display-setting) `$cfg['RepeatCells']`[¶](#cfg_RepeatCells) | デヌタ型: | 敎数型 | | デフォルト倀: | 100 | X セルごずにヘッダを挿入したす。0 の堎合は繰り返したせん。 `$cfg['EditInWindow']`[¶](#cfg_EditInWindow) | デヌタ型: | 論理型 | | デフォルト倀: | true | 参考 [Feature request to add a pop-up window back](https://github.com/phpmyadmin/phpmyadmin/issues/11983) バヌゞョン 4.3.0 で非掚奚: この蚭定は削陀されたした。 `$cfg['QueryWindowWidth']`[¶](#cfg_QueryWindowWidth) | デヌタ型: | 敎数型 | | デフォルト倀: | 550 | バヌゞョン 4.3.0 で非掚奚: この蚭定は削陀されたした。 `$cfg['QueryWindowHeight']`[¶](#cfg_QueryWindowHeight) | デヌタ型: | 敎数型 | | デフォルト倀: | 310 | バヌゞョン 4.3.0 で非掚奚: この蚭定は削陀されたした。 `$cfg['QueryHistoryDB']`[¶](#cfg_QueryHistoryDB) | デヌタ型: | 論理型 | | デフォルト倀: | false | `$cfg['QueryWindowDefTab']`[¶](#cfg_QueryWindowDefTab) | デヌタ型: | 文字列型 
| | デフォルト倀: | `'sql'` | バヌゞョン 4.3.0 で非掚奚: この蚭定は削陀されたした。 `$cfg['QueryHistoryMax']`[¶](#cfg_QueryHistoryMax) | デヌタ型: | 敎数型 | | デフォルト倀: | 25 | [`$cfg['QueryHistoryDB']`](#cfg_QueryHistoryDB) を `true` に蚭定した堎合、ナヌザのク゚リはすべおテヌブルに蚘録されたす。ただし、このテヌブルはナヌザが䜜成する必芁がありたす ([`$cfg['Servers'][$i]['history']`](#cfg_Servers_history) を参照)。 false に蚭定した堎合、フォヌムにはすべおのク゚リが远加されたすが、保存されるのはりむンドりが開いおいる間のみです。 JavaScript ベヌスのク゚リりィンドりを䜿甚しおいる堎合は、新しいテヌブルやデヌタベヌスをクリックしお閲芧するたびに曎新され、ク゚リを䜿甚した埌に 線集 をクリックするずフォヌカスされたす。ク゚リのテキスト゚リアの䞋にある りィンドり倖からこのク゚リを䞊曞きしない にチェックを入れるこずで、ク゚リりィンドりの曎新を抑制するこずができたす。そうすれば、テキスト゚リアの内容を倱うこずなくバックグラりンドでテヌブル/デヌタベヌスを閲芧できるので、最初に参照したテヌブルを含むク゚リを䜜成す る際には特に䟿利です。テキスト゚リアの内容を倉曎するず、チェックボックスは自動的にチェックされたす。倉曎したにもかかわらず、どうしおもク゚リりィンドりを曎新したい堎合はチェックを倖しおください。 [`$cfg['QueryHistoryDB']`](#cfg_QueryHistoryDB) が `true` に蚭定されおいる堎合、保存する履歎項目の数は [`$cfg['QueryHistoryMax']`](#cfg_QueryHistoryMax) で指定できたす。 `$cfg['AllowSharedBookmarks']`[¶](#cfg_AllowSharedBookmarks) | デヌタ型: | 論理型 | | デフォルト倀: | true | バヌゞョン 6.0.0 で远加. Allow users to create bookmarks that are available for all other users `$cfg['BrowseMIME']`[¶](#cfg_BrowseMIME) | デヌタ型: | 論理型 | | デフォルト倀: | true | [倉換機胜](index.html#transformations) を有効にしたす。 `$cfg['MaxExactCount']`[¶](#cfg_MaxExactCount) | デヌタ型: | 敎数型 | | デフォルト倀: | 50000 | InnoDB テヌブルにおいお、どの皋床の倧きさのテヌブルたで `SELECT COUNT` で正確な行数を取埗するかを指定したす。`SHOW TABLE STATUS` によっお返される抂算の行数が蚭定した倀より小さい堎合は `SELECT COUNT` が䜿われたすが、そうでない堎合はこの抂算数が䜿われたす。 バヌゞョン 4.8.0 で倉曎: パフォヌマンスの理由から、デフォルト倀は 50000 に䞋げられたした。 バヌゞョン 4.2.6 で倉曎: デフォルト倀は 500000 に倉曎されたした。 参考 [3.11 InnoDB のテヌブルの行数が正しくありたせん。](index.html#faq3-11) `$cfg['MaxExactCountViews']`[¶](#cfg_MaxExactCountViews) | デヌタ型: | 敎数型 | | デフォルト倀: | 0 | ビュヌの堎合は、正確な行数を取埗するずパフォヌマンスに圱響を䞎える可胜性があるので、この倀は `SELECT COUNT ... 
LIMIT` を䜿甚しお衚瀺される最倧倀です。 0 を蚭定した堎合は、行カりントを省略したす。 `$cfg['NaturalOrder']`[¶](#cfg_NaturalOrder) | デヌタ型: | 論理型 | | デフォルト倀: | true | デヌタベヌスやテヌブルの名前を自然な順番で゜ヌトしたす (t1、t2、t10、のように)。いたのずころナビゲヌションパネルずデヌタベヌスビュヌのテヌブル䞀芧に実装されおいたす。 `$cfg['InitialSlidersState']`[¶](#cfg_InitialSlidersState) | デヌタ型: | 文字列型 | | デフォルト倀: | `'closed'` | `'closed'` に蚭定した堎合、ビゞュアルスラむダヌは最初は閉じた状態になりたす。 `'open'` はその逆です。すべおのビゞュアルスラむダヌを無効にするには、 `'disabled'` を䜿甚しおください。 `$cfg['UserprefsDisallow']`[¶](#cfg_UserprefsDisallow) | デヌタ型: | 配列型 | | デフォルト倀: | array() | Contains names of configuration options (keys in `$cfg` array) that users can't set through user preferences. For possible values, refer to classes under `src/Config/Forms/User/`. `$cfg['UserprefsDeveloperTab']`[¶](#cfg_UserprefsDeveloperTab) | デヌタ型: | 論理型 | | デフォルト倀: | false | バヌゞョン 3.4.0 で远加. ナヌザ蚭定環境に phpMyAdmin 開発者向け甚のオプションタブを衚瀺させたす。 ### ペヌゞタむトル[¶](#page-titles) The page title displayed by your browser's window or tab title bar can be customized. You can use [6.27 どのような曞匏の文字列が䜿えたすか](index.html#faq6-27). The following four options allow customizing various parts of the phpMyAdmin interface. Note that the login page title cannot be changed. 
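ペヌゞタむトルのカスタマむズは、䞊蚘の曞匏文字列を組み合わせお行いたす。以䞋は簡略なタむトルにする蚭定䟋倀は説明甚の仮定です。

```
// ブラりザのタブにホスト名のみを衚瀺
$cfg['TitleDefault'] = '@HTTP_HOST@';
// デヌタベヌスビュヌではデヌタベヌス名を先頭に
$cfg['TitleDatabase'] = '@DATABASE@ / @HTTP_HOST@';
```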
`$cfg['TitleTable']`[¶](#cfg_TitleTable) | デヌタ型: | 文字列型 | | デフォルト倀: | `'@HTTP_HOST@ / @VSERVER@ / @DATABASE@ / @TABLE@ | @PHPMYADMIN@'` | `$cfg['TitleDatabase']`[¶](#cfg_TitleDatabase) | デヌタ型: | 文字列型 | | デフォルト倀: | `'@HTTP_HOST@ / @VSERVER@ / @DATABASE@ | @PHPMYADMIN@'` | `$cfg['TitleServer']`[¶](#cfg_TitleServer) | デヌタ型: | 文字列型 | | デフォルト倀: | `'@HTTP_HOST@ / @VSERVER@ | @PHPMYADMIN@'` | `$cfg['TitleDefault']`[¶](#cfg_TitleDefault) | デヌタ型: | 文字列型 | | デフォルト倀: | `'@HTTP_HOST@ | @PHPMYADMIN@'` | ### テヌマ管理蚭定[¶](#theme-manager-settings) `$cfg['ThemeManager']`[¶](#cfg_ThemeManager) | デヌタ型: | 論理型 | | デフォルト倀: | true | ナヌザが遞択可胜なテヌマを有効にしたす。 [2.7 テヌマの䜿い方ず䜜り方](index.html#faqthemes) を参照しおください。 `$cfg['ThemeDefault']`[¶](#cfg_ThemeDefault) | デヌタ型: | 文字列型 | | デフォルト倀: | `'pmahomme'` | The default theme (a subdirectory under `./public/themes/`). `$cfg['ThemePerServer']`[¶](#cfg_ThemePerServer) | デヌタ型: | 論理型 | | デフォルト倀: | false | サヌバごずに別々のテヌマを蚱可するかどうか。 `$cfg['FontSize']`[¶](#cfg_FontSize) | デヌタ型: | 文字列型 | | デフォルト倀: | '82%' | バヌゞョン 5.0.0 で非掚奚: ブラりザがより効率的になったため、この蚭定は削陀されたした。したがっお、このオプションは必芁ありたせん。 䜿甚するフォントの倧きさです。 CSS に適甚されたす。 ### デフォルトク゚リ[¶](#default-queries) `$cfg['DefaultQueryTable']`[¶](#cfg_DefaultQueryTable) | デヌタ型: | 文字列型 | | デフォルト倀: | `'SELECT * FROM @TABLE@ WHERE 1'` | `$cfg['DefaultQueryDatabase']`[¶](#cfg_DefaultQueryDatabase) | デヌタ型: | 文字列型 | | デフォルト倀: | `''` | ナヌザが指定しなかった時に、ク゚リボックスに衚瀺されるデフォルトのク゚リです。暙準の [6.27 どのような曞匏の文字列が䜿えたすか](index.html#faq6-27) を䜿甚するこずができたす。 ### MySQL 蚭定[¶](#mysql-settings) `$cfg['DefaultFunctions']`[¶](#cfg_DefaultFunctions) | デヌタ型: | 配列型 | | デフォルト倀: | `array('FUNC_CHAR' => '', 'FUNC_DATE' => '', 'FUNC_NUMBER' => '', 'FUNC_SPATIAL' => 'GeomFromText', 'FUNC_UUID' => 'UUID', 'first_timestamp' => 'NOW')` | 行の挿入・倉曎時にデフォルトで遞択されおいる関数で、メタタむプ (`FUNC_NUMBER`, `FUNC_DATE`, `FUNC_CHAR`, `FUNC_SPATIAL`, `FUNC_UUID`) ず、テヌブルの最初のタむムスタンプのカラムに䜿甚される `first_timestamp` のための関数が定矩されおいたす。 蚭定䟋 ``` $cfg['DefaultFunctions'] = [ 'FUNC_CHAR' => '', 'FUNC_DATE' => '', 'FUNC_NUMBER' => 
'', 'FUNC_SPATIAL' => 'ST_GeomFromText', 'FUNC_UUID' => 'UUID', 'first_timestamp' => 'UTC_TIMESTAMP', ]; ``` ### デフォルトの倉換オプション[¶](#default-options-for-transformations) `$cfg['DefaultTransformations']`[¶](#cfg_DefaultTransformations) | デヌタ型: | 配列型 | | デフォルト倀: | 以䞋に瀺すキヌず倀の配列 | `$cfg['DefaultTransformations']['Substring']`[¶](#cfg_DefaultTransformations_Substring) | デヌタ型: | 配列型 | | デフォルト倀: | array(0, 'all', '
') | `$cfg['DefaultTransformations']['Bool2Text']`[¶](#cfg_DefaultTransformations_Bool2Text) | デヌタ型: | 配列型 | | デフォルト倀: | array('T', 'F') | `$cfg['DefaultTransformations']['External']`[¶](#cfg_DefaultTransformations_External) | デヌタ型: | 配列型 | | デフォルト倀: | array(0, '-f /dev/null -i -wrap -q', 1, 1) | `$cfg['DefaultTransformations']['PreApPend']`[¶](#cfg_DefaultTransformations_PreApPend) | デヌタ型: | 配列型 | | デフォルト倀: | array('', '') | `$cfg['DefaultTransformations']['Hex']`[¶](#cfg_DefaultTransformations_Hex) | デヌタ型: | 配列型 | | デフォルト倀: | array('2') | `$cfg['DefaultTransformations']['DateFormat']`[¶](#cfg_DefaultTransformations_DateFormat) | デヌタ型: | 配列型 | | デフォルト倀: | array(0, '', 'local') | `$cfg['DefaultTransformations']['Inline']`[¶](#cfg_DefaultTransformations_Inline) | デヌタ型: | 配列型 | | デフォルト倀: | array('100', 100) | `$cfg['DefaultTransformations']['TextImageLink']`[¶](#cfg_DefaultTransformations_TextImageLink) | デヌタ型: | 配列型 | | デフォルト倀: | array('', 100, 50) | `$cfg['DefaultTransformations']['TextLink']`[¶](#cfg_DefaultTransformations_TextLink) | デヌタ型: | 配列型 | | デフォルト倀: | array('', '', '') | ### コン゜ヌル蚭定[¶](#console-settings) 泚釈 以䞋の蚭定は䞻にナヌザが倉曎するためのものです。 `$cfg['Console']['StartHistory']`[¶](#cfg_Console_StartHistory) | デヌタ型: | 論理型 | | デフォルト倀: | false | 開始時にク゚リ履歎を衚瀺 `$cfg['Console']['AlwaysExpand']`[¶](#cfg_Console_AlwaysExpand) | デヌタ型: | 論理型 | | デフォルト倀: | false | ク゚リメッセヌゞを垞に展開する `$cfg['Console']['CurrentQuery']`[¶](#cfg_Console_CurrentQuery) | デヌタ型: | 論理型 | | デフォルト倀: | true | 珟圚のク゚リを衚瀺 `$cfg['Console']['EnterExecutes']`[¶](#cfg_Console_EnterExecutes) | デヌタ型: | 論理型 | | デフォルト倀: | false | Enterでク゚リを実行し、Shift + Enterで新しい行を挿入したす `$cfg['Console']['DarkTheme']`[¶](#cfg_Console_DarkTheme) | デヌタ型: | 論理型 | | デフォルト倀: | false | 暗いテヌマに切り替える `$cfg['Console']['Mode']`[¶](#cfg_Console_Mode) | デヌタ型: | 文字列型 | | デフォルト倀: | 'info' | コン゜ヌルモヌド `$cfg['Console']['Height']`[¶](#cfg_Console_Height) | デヌタ型: | 敎数型 | | デフォルト倀: | 92 | コン゜ヌルの高さ ### 開発者向け[¶](#developer) 譊告 
この蚭定はパフォヌマンスやセキュリティに重倧な圱響を䞎える可胜性がありたす。 `$cfg['environment']`[¶](#cfg_environment) | デヌタ型: | 文字列型 | | デフォルト倀: | `'production'` | 動䜜環境を蚭定したす。 これを倉曎する必芁があるのは、 phpMyAdmin 自䜓を開発しおいるずきのみです。`development` モヌドはいく぀かの堎所でデバッグ情報を衚瀺する可胜性がありたす。 指定可胜な倀は `'production'` たたは `'development'` です。 `$cfg['DBG']`[¶](#cfg_DBG) | デヌタ型: | 配列型 | | デフォルト倀: | [] | `$cfg['DBG']['sql']`[¶](#cfg_DBG_sql) | デヌタ型: | 論理型 | | デフォルト倀: | false | コン゜ヌルの「SQL のデバッグ」タブで、ク゚リず実行時間の衚瀺を有効にしたす。 `$cfg['DBG']['sqllog']`[¶](#cfg_DBG_sqllog) | デヌタ型: | 論理型 | | デフォルト倀: | false | syslog ぞのク゚リず実行時間のログ蚘録を有効にしたす。有効にするには、 [`$cfg['DBG']['sql']`](#cfg_DBG_sql) を有効にする必芁がありたす。 `$cfg['DBG']['demo']`[¶](#cfg_DBG_demo) | デヌタ型: | 論理型 | | デフォルト倀: | false | サヌバが自分自身をデモサヌバずしお衚瀺できるようにしたす。これは、 [phpMyAdmin デモサヌバ](https://www.phpmyadmin.net/try/) に䜿甚されたす。 珟圚のずころ、以䞋の動䜜を倉曎したす。 * メむンペヌゞに歓迎メッセヌゞを衚瀺したす。 * デモサヌバにず䜿甚しおいる Git のリビゞョンに぀いおの情報をフッタに衚瀺したす。 * 蚭定が存圚しおいおも、セットアップスクリプトが有効になりたす。 * セットアップで MySQL サヌバぞの接続を詊みたせん。 `$cfg['DBG']['simple2fa']`[¶](#cfg_DBG_simple2fa) | デヌタ型: | 論理型 | | デフォルト倀: | false | [単玔な二芁玠認蚌](index.html#simple2fa) を䜿甚しお二芁玠認蚌をテストする際に䜿甚されるこずがありたす。 ### 蚭定䟋[¶](#examples) phpMyAdmin の䞀般的な蚭定に぀いおは、以䞋の蚭定スニペットを参照しおください。 #### 基本的な䟋[¶](#basic-example) 蚭定ファむルの䟋で、`config.inc.php` にコピヌするずいく぀かの䞻芁な蚭定レむアりトを埗るこずができたす。このファむルは phpMyAdmin でず䞀緒に `config.sample.inc.php` ずしお配垃されおいたす。なお、ここにはすべおの蚭定オプションが含たれおいるわけではなく、よく䜿甚されるものだけが含たれおいるこずに泚意しおください。 ``` <?php /** * phpMyAdmin sample configuration, you can use it as base for * manual configuration. For easier setup you can use setup/ * * All directives are explained in documentation in the doc/ folder * or at <https://docs.phpmyadmin.net/>. */ declare(strict_types=1); /** * This is needed for cookie based authentication to encrypt the cookie. * Needs to be a 32-bytes long string of random bytes. See FAQ 2.10. */ $cfg['blowfish_secret'] = ''; /* YOU MUST FILL IN THIS FOR COOKIE AUTH! 
*/ /** * Servers configuration */ $i = 0; /** * First server */ $i++; /* Authentication type */ $cfg['Servers'][$i]['auth_type'] = 'cookie'; /* Server parameters */ $cfg['Servers'][$i]['host'] = 'localhost'; $cfg['Servers'][$i]['compress'] = false; $cfg['Servers'][$i]['AllowNoPassword'] = false; /** * phpMyAdmin configuration storage settings. */ /* User used to manipulate with storage */ // $cfg['Servers'][$i]['controlhost'] = ''; // $cfg['Servers'][$i]['controlport'] = ''; // $cfg['Servers'][$i]['controluser'] = 'pma'; // $cfg['Servers'][$i]['controlpass'] = 'pmapass'; /* Storage database and tables */ // $cfg['Servers'][$i]['pmadb'] = 'phpmyadmin'; // $cfg['Servers'][$i]['bookmarktable'] = 'pma__bookmark'; // $cfg['Servers'][$i]['relation'] = 'pma__relation'; // $cfg['Servers'][$i]['table_info'] = 'pma__table_info'; // $cfg['Servers'][$i]['table_coords'] = 'pma__table_coords'; // $cfg['Servers'][$i]['pdf_pages'] = 'pma__pdf_pages'; // $cfg['Servers'][$i]['column_info'] = 'pma__column_info'; // $cfg['Servers'][$i]['history'] = 'pma__history'; // $cfg['Servers'][$i]['table_uiprefs'] = 'pma__table_uiprefs'; // $cfg['Servers'][$i]['tracking'] = 'pma__tracking'; // $cfg['Servers'][$i]['userconfig'] = 'pma__userconfig'; // $cfg['Servers'][$i]['recent'] = 'pma__recent'; // $cfg['Servers'][$i]['favorite'] = 'pma__favorite'; // $cfg['Servers'][$i]['users'] = 'pma__users'; // $cfg['Servers'][$i]['usergroups'] = 'pma__usergroups'; // $cfg['Servers'][$i]['navigationhiding'] = 'pma__navigationhiding'; // $cfg['Servers'][$i]['savedsearches'] = 'pma__savedsearches'; // $cfg['Servers'][$i]['central_columns'] = 'pma__central_columns'; // $cfg['Servers'][$i]['designer_settings'] = 'pma__designer_settings'; // $cfg['Servers'][$i]['export_templates'] = 'pma__export_templates'; /** * End of servers configuration */ /** * Directories for saving/loading files from server */ $cfg['UploadDir'] = ''; $cfg['SaveDir'] = ''; /** * Whether to display icons or text or both icons and text in 
table row * action segment. Value can be either of 'icons', 'text' or 'both'. * default = 'both' */ //$cfg['RowActionType'] = 'icons'; /** * Defines whether a user should be displayed a "show all (records)" * button in browse mode or not. * default = false */ //$cfg['ShowAll'] = true; /** * Number of rows displayed when browsing a result set. If the result * set contains more rows, "Previous" and "Next". * Possible values: 25, 50, 100, 250, 500 * default = 25 */ //$cfg['MaxRows'] = 50; /** * Disallow editing of binary fields * valid values are: * false allow editing * 'blob' allow editing except for BLOB fields * 'noblob' disallow editing except for BLOB fields * 'all' disallow editing * default = 'blob' */ //$cfg['ProtectBinary'] = false; /** * Default language to use, if not browser-defined or user-defined * (you find all languages in the locale folder) * uncomment the desired line: * default = 'en' */ //$cfg['DefaultLang'] = 'en'; //$cfg['DefaultLang'] = 'de'; /** * How many columns should be used for table display of a database? * (a value larger than 1 results in some information being hidden) * default = 1 */ //$cfg['PropertiesNumColumns'] = 2; /** * Set to true if you want DB-based query history.If false, this utilizes * JS-routines to display query history (lost by window close) * * This requires configuration storage enabled, see above. * default = false */ //$cfg['QueryHistoryDB'] = true; /** * When using DB-based query history, how many entries should be kept? * default = 25 */ //$cfg['QueryHistoryMax'] = 100; /** * Whether or not to query the user before sending the error report to * the phpMyAdmin team when a JavaScript error occurs * * Available options * ('ask' | 'always' | 'never') * default = 'ask' */ //$cfg['SendErrorReports'] = 'always'; /** * 'URLQueryEncryption' defines whether phpMyAdmin will encrypt sensitive data from the URL query string. 
* 'URLQueryEncryptionSecretKey' is a 32 bytes long secret key used to encrypt/decrypt the URL query string. */ //$cfg['URLQueryEncryption'] = true; //$cfg['URLQueryEncryptionSecretKey'] = ''; /** * You can find more configuration options in the documentation * in the doc/ folder or at <https://docs.phpmyadmin.net/>. */ ``` 譊告 コントロヌルナヌザヌ 'pma' がただ存圚しない堎合は䜿甚しないでください。たた、パスワヌドずしお 'pmapass' を䜿甚しないでください。 #### サむンオン認蚌モヌドの䟋[¶](#example-for-signon-authentication) この䟋は `examples/signon.php` を䜿甚しお [サむンオン認蚌モヌド](index.html#auth-signon) の䜿い方のデモをしたす。 ``` <?php $i = 0; $i++; $cfg['Servers'][$i]['auth_type'] = 'signon'; $cfg['Servers'][$i]['SignonSession'] = 'SignonSession'; $cfg['Servers'][$i]['SignonURL'] = 'examples/signon.php'; ``` #### IP アドレスを限定した自動ログむンの䟋[¶](#example-for-ip-address-limited-autologin) phpMyAdmin にロヌカルでアクセスするずきには自動的にログむンし、リモヌトアクセスする際にはパスワヌドを芁求するようにしたいのであれば、次のスニペットで実珟するこずができたす。 ``` if ($_SERVER['REMOTE_ADDR'] === '127.0.0.1') { $cfg['Servers'][$i]['auth_type'] = 'config'; $cfg['Servers'][$i]['user'] = 'root'; $cfg['Servers'][$i]['password'] = 'yourpassword'; } else { $cfg['Servers'][$i]['auth_type'] = 'cookie'; } ``` 泚釈 IP アドレスに基づくフィルタリングは、むンタヌネット䞊で信頌できないので、ロヌカル アドレスでのみ䜿甚しおください。 #### 耇数の MySQL サヌバを䜿甚した䟋[¶](#example-for-using-multiple-mysql-servers) [`$cfg['Servers']`](#cfg_Servers) を䜿甚しお任意の数のサヌバを蚭定するこずができたす。この䟋ではそのうち 2 ぀を蚭定しおいたす。 ``` <?php // The string is a hexadecimal representation of a 32-bytes long string of random bytes. $cfg['blowfish_secret'] = sodium_hex2bin('f16ce59f45714194371b48fe362072dc3b019da7861558cd4ad29e4d6fb13851'); $i = 0; $i++; // server 1 : $cfg['Servers'][$i]['auth_type'] = 'cookie'; $cfg['Servers'][$i]['verbose'] = 'no1'; $cfg['Servers'][$i]['host'] = 'localhost'; // more options for #1 ... 
$i++; // server 2 : $cfg['Servers'][$i]['auth_type'] = 'cookie'; $cfg['Servers'][$i]['verbose'] = 'no2'; $cfg['Servers'][$i]['host'] = 'remote.host.addr';//or ip:'10.9.8.1' // this server must allow remote clients, e.g., host 10.9.8.% // not only in mysql.host but also in the startup configuration // more options for #2 ... // end of server sections $cfg['ServerDefault'] = 0; // to choose the server on startup // further general options ... ``` #### SSL 経由で Google Cloud SQL を䜿甚[¶](#google-cloud-sql-with-ssl) Google Cloud SQL に接続するには、珟圚のずころ蚌明曞の怜蚌を無効にする必芁がありたす。これは、蚌明曞が発行された CN がむンスタンス名ず䞀臎しおいるにもかかわらず、 IP アドレスに接続し、 PHP がこれらの2぀を比范しようずするこずが原因です。蚌明曞の怜蚌では、次のような゚ラヌメッセヌゞが衚瀺されたす。 ``` Peer certificate CN=`api-project-851612429544:pmatest' did not match expected CN=`8.8.8.8' ``` 譊告 怜蚌を無効にするず、トラフィックは暗号化されたすが、䞭間者攻撃にさらされる可胜性がありたす。 SSL を䜿甚しお phpMyAdmin を Google Cloud SQL に接続するには、クラむアントずサヌバの蚌明曞をダりンロヌドし、 phpMyAdmin にそれらを䜿甚するように指瀺しおください。 ``` // IP address of your instance $cfg['Servers'][$i]['host'] = '8.8.8.8'; // Use SSL for connection $cfg['Servers'][$i]['ssl'] = true; // Client secret key $cfg['Servers'][$i]['ssl_key'] = '../client-key.pem'; // Client certificate $cfg['Servers'][$i]['ssl_cert'] = '../client-cert.pem'; // Server certification authority $cfg['Servers'][$i]['ssl_ca'] = '../server-ca.pem'; // Disable SSL verification (see above note) $cfg['Servers'][$i]['ssl_verify'] = false; ``` 参考 [デヌタベヌスサヌバぞの接続で SSL を䜿甚する](index.html#ssl)、 [`$cfg['Servers'][$i]['ssl']`](#cfg_Servers_ssl)、 [`$cfg['Servers'][$i]['ssl_key']`](#cfg_Servers_ssl_key)、 [`$cfg['Servers'][$i]['ssl_cert']`](#cfg_Servers_ssl_cert)、 [`$cfg['Servers'][$i]['ssl_ca']`](#cfg_Servers_ssl_ca)、 [`$cfg['Servers'][$i]['ssl_verify']`](#cfg_Servers_ssl_verify)、 <<https://bugs.php.net/bug.php?id=72048>#### Amazon RDS Aurora with SSL[¶](#amazon-rds-aurora-with-ssl) SSL を䜿甚しお phpMyAdmin を Amazon RDS Aurora の MySQL デヌタベヌスむンスタンスに接続するには、 CA サヌバの蚌明曞をダりンロヌドし、 phpMyAdmin にそれらを䜿甚するように指瀺しおください。 ``` // Address of your instance 
$cfg['Servers'][$i]['host'] = 'replace-me-cluster-name.cluster-replace-me-id.replace-me-region.rds.amazonaws.com'; // Use SSL for connection $cfg['Servers'][$i]['ssl'] = true; // You need to have the region CA file and the authority CA file (2019 edition CA for example) in the PEM bundle for it to work $cfg['Servers'][$i]['ssl_ca'] = '../rds-combined-ca-bundle.pem'; // Enable SSL verification $cfg['Servers'][$i]['ssl_verify'] = true; ``` 参考 [デヌタベヌスサヌバぞの接続で SSL を䜿甚する](index.html#ssl)、 [`$cfg['Servers'][$i]['ssl']`](#cfg_Servers_ssl)、 [`$cfg['Servers'][$i]['ssl_ca']`](#cfg_Servers_ssl_ca)、 [`$cfg['Servers'][$i]['ssl_verify']`](#cfg_Servers_ssl_verify) 参考 * 珟圚のすべおのリヌゞョンの RDS CA バンドル <https://s3.amazonaws.com/rds-downloads/rds-combined-ca-bundle.pem> * リヌゞョン eu-west-3 のための芪 CA のない RDS CA (2019 edition) <https://s3.amazonaws.com/rds-downloads/rds-ca-2019-eu-west-3.pem> * [利甚可胜な Amazon RDS CA ファむルの䞀芧](https://s3.amazonaws.com/rds-downloads/) * [Amazon MySQL Aurora セキュリティガむド](https://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/AuroraMySQL.Security.html) * [リヌゞョンごずにバンドルする Amazon 資栌情報](https://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/UsingWithRDS.SSL.html) #### hCaptcha を䜿甚した reCaptcha[¶](#recaptcha-using-hcaptcha) ``` $cfg['CaptchaApi'] = 'https://www.hcaptcha.com/1/api.js'; $cfg['CaptchaCsp'] = 'https://hcaptcha.com https://*.hcaptcha.com'; $cfg['CaptchaRequestParam'] = 'h-captcha'; $cfg['CaptchaResponseParam'] = 'h-captcha-response'; $cfg['CaptchaSiteVerifyURL'] = 'https://hcaptcha.com/siteverify'; // This is the secret key from hCaptcha dashboard $cfg['CaptchaLoginPrivateKey'] = '<KEY>'; // This is the site key from hCaptcha dashboard $cfg['CaptchaLoginPublicKey'] = '<KEY>'; ``` 参考 [hCaptcha りェブサむト](https://www.hcaptcha.com/) 参考 [hCaptcha 開発者ガむド](https://docs.hcaptcha.com/) #### reCaptcha using Turnstile[¶](#recaptcha-using-turnstile) ``` $cfg['CaptchaMethod'] = 'checkbox'; $cfg['CaptchaApi'] = 
'https://challenges.cloudflare.com/turnstile/v0/api.js'; $cfg['CaptchaCsp'] = 'https://challenges.cloudflare.com https://static.cloudflareinsights.com'; $cfg['CaptchaRequestParam'] = 'cf-turnstile'; $cfg['CaptchaResponseParam'] = 'cf-turnstile-response'; $cfg['CaptchaLoginPublicKey'] = '<KEY>'; $cfg['CaptchaLoginPrivateKey'] = '<KEY>'; $cfg['CaptchaSiteVerifyURL'] = 'https://challenges.cloudflare.com/turnstile/v0/siteverify'; ``` 参考 [Cloudflare Dashboard](https://dash.cloudflare.com/) 参考 [Turnstile Developer Guide](https://developers.cloudflare.com/turnstile/get-started/) #### reCaptcha using Google reCaptcha v2/v3[¶](#recaptcha-using-google-recaptcha-v2-v3) ``` $cfg['CaptchaLoginPublicKey'] = '<KEY>'; $cfg['CaptchaLoginPrivateKey'] = '<KEY>'; // Remove it if you dot not want the checkbox mode $cfg['CaptchaMethod'] = 'checkbox'; ``` 参考 [Google reCaptcha Developer's Guide](https://developers.google.com/recaptcha/intro) 参考 [Google reCaptcha types](https://developers.google.com/recaptcha/docs/versions) ナヌザヌガむド[¶](#user-guide) --- ### phpMyAdmin の蚭定[¶](#configuring-phpmyadmin) むンタフェヌスのカスタマむズに䜿甚できる蚭定がたくさんありたす。これらの蚭定は [蚭定](index.html#config) に曞かれおいたす。蚭定にはいく぀かの階局がありたす。 グロヌバル蚭定は [蚭定](index.html#config) にあるように `config.inc.php` で蚭定したす。これはデヌタベヌスぞの接続やその他のシステム党䜓の蚭定を行うための唯䞀の方法です。 これに加えお、ナヌザ毎の蚭定があり、それを [phpMyAdmin 環境保管領域](index.html#linked-tables) に氞続的に保存され、 [れロ蚭定](index.html#zeroconf) で自動的に構成されたす。 [phpMyAdmin 環境保管領域](index.html#linked-tables) が蚭定されおいない堎合、蚭定は䞀時的にセッションデヌタに保存されたす。 たた、ナヌザ蚭定をファむルずしおダりンロヌドしたり、ブラりザのロヌカルストレヌゞに保存しおおいたりするこずもできたす。これらのオプションは 蚭定 タブにありたす。ブラりザのロヌカルストレヌゞに保存された蚭定は、 phpMyAdmin ぞのログむン時に自動的に読み取られたす。 ### 二芁玠認蚌[¶](#two-factor-authentication) バヌゞョン 4.8.0 で远加. 
phpMyAdmin 4.8.0 以降では、ログむン時に二芁玠認蚌を䜿甚するよう蚭定するこずができたす。䜿甚するには、たず [phpMyAdmin 環境保管領域](index.html#linked-tables) を蚭定する必芁がありたす。そうするず、すべおのナヌザヌが 蚭定 で 2 番目の認蚌芁玠を䜿甚するよう遞択するこずができるようになりたす。 Git の゜ヌスリポゞトリから phpMyAdmin を実行しおいる堎合は、䟝存関係を手動でむンストヌルする必芁がありたす。通垞は次のようなコマンドで行いたす。 ``` composer require pragmarx/google2fa-qrcode bacon/bacon-qr-code ``` たたは、 FIDO U2F でハヌドりェアのセキュリティキヌを䜿甚する堎合は次のようになりたす。 ``` composer require code-lts/u2f-php-server ``` #### 認蚌アプリ (2FA)[¶](#authentication-application-2fa) 認蚌のためにアプリケヌションを䜿甚するのは、 HOTP ず [TOTP](https://en.wikipedia.org/wiki/Time-based_One-time_Password_Algorithm) に基づいた非垞に䞀般的なアプロヌチです。これは、 phpMyAdmin から認蚌アプリケヌションに秘密鍵を送信するこずに基づいおおり、アプリケヌションは、その埌、このキヌに基づいおワンタむムコヌドを生成するこずができたす。 phpMyAdmin からアプリケヌションにキヌを入力する最も簡単な方法は、 QR コヌドをスキャンするこずです。 これらの暙準を実装するために携垯電話で利甚できるアプリケヌションは倚数ありたすが、最も広く利甚されおいるのは以䞋のようなものです。 * [FreeOTP (iOS, Android, Pebble)](https://freeotp.github.io/) * [Authy (iOS, Android, Chrome, OS X)](https://authy.com/) * [Google Authenticator (iOS)](https://apps.apple.com/us/app/google-authenticator/id388497605) * [Google Authenticator (Android)](https://play.google.com/store/apps/details?id=com.google.android.apps.authenticator2) * [LastPass Authenticator (iOS, Android, OS X, Windows)](https://lastpass.com/auth/) #### ハヌドりェアセキュリティキヌ (FIDO U2F)[¶](#hardware-security-key-fido-u2f) ハヌドりェアトヌクンを䜿甚するこずは、゜フトりェアベヌスの方法よりも安党ず考えられおいたす。 phpMyAdmin は [FIDO U2F](https://en.wikipedia.org/wiki/Universal_2nd_Factor) トヌクンに察応しおいたす。 このトヌクンには様々な補造元がありたす。䟋を瀺したす。 * [youbico FIDO U2F セキュリティキヌ](https://www.yubico.com/fido-u2f/) * [HyperFIDO](https://www.hypersecu.com/hyperfido) * [Trezor Hardware Wallet](https://trezor.io/?offer_id=12&aff_id=1592&source=phpmyadmin) can act as an [U2F token](https://trezor.io/learn/a/what-is-u2f) * [List of Two Factor Auth (2FA) Dongles](https://www.dongleauth.com/dongles/) #### 単玔な二芁玠認蚌[¶](#simple-two-factor-authentication) この認蚌は、テストずデモンストレヌションだけのために提䟛しおいるものであり、実際には二芁玠認蚌を提䟛せず、ボタンをクリックしおログむンの確認をナヌザに求めるだけです。 本番では䜿甚するべきではなく、 
[`$cfg['DBG']['simple2fa']`](index.html#cfg_DBG_simple2fa) を蚭定しない限り無効にしおください。 ### 倉換機胜[¶](#transformations-1) 泚釈 倉換機胜を䜿甚するには、 [phpMyAdmin 環境保管領域](index.html#linked-tables) を蚭定する必芁がありたす。 #### はじめに[¶](#introduction) 倉換機胜を有効にするには、 `column_info` テヌブルず、適切な蚭定項目を蚭定する必芁がありたす。手順に぀いおは [蚭定](index.html#config) をご芧ください。 phpMyAdmin には2皮類の倉換がありたす。ブラりザ衚瀺倉換は、 phpMyAdmin で閲芧した際にデヌタがどのように衚瀺されるかにのみ圱響を䞎えるもので、入力倉換は phpMyAdmin で挿入される前の倀に圱響を䞎えるものです。各カラムの内容には異なる倉換を適甚するこずができたす。それぞれの倉換には、保存されたデヌタにどのように圱響を䞎えるかを定矩するオプションがありたす。 䟋えば、ファむル名を入れる `filename` ずいうカラムがあるずしたす。ふ぀う phpMyAdmin ではファむル名しか衚瀺されたせんが、衚瀺倉換機胜を䜿うず、このファむル名を HTML リンクに倉換できたす。 phpMyAdmin の構造の䞭でそのカラムのリンクをクリックするず、新しいブラりザりむンドりにそのファむルが衚瀺されたす。倉換オプションを䜿えば、その文字列の前埌に付け加える文字列や、出力を保存する圢匏も指定できたす。 利甚可胜なすべおの倉換ずそのオプションの䞀般的な抂芁に぀いおは、既存のカラムの `倉曎` リンクをクリックするか、新しいカラムを䜜成するためのダむアログを開いおください。どちらの堎合もカラム構造のペヌゞには「ブラりザ衚瀺倉換」ず「入力倉換」のリンクがあり、お䜿いのシステムで利甚可胜な各倉換に぀いおの詳现な情報が衚瀺されたす。 倉換機胜の効果的な䜿い方のチュヌトリアルに぀いおは、phpMyAdmin の公匏ホヌムペヌゞの [Link の節](https://www.phpmyadmin.net/docs/) をご芧ください。 #### 䜿い方[¶](#usage) テヌブルの構造ペヌゞに移動しおください (テヌブルの [構造] リンクをクリックするずたどり着きたす)。 [倉曎] (たたは倉曎アむコン) をクリックするず、項目の最埌の方に倉換に関する新しいフィヌルドが5぀衚瀺されたす。それぞれ「 [メディア型](index.html#term-media-type) 」、「ブラりザ衚瀺倉換」、「倉換オプション」ず呌ばれおいたす。 * 「 [メディア型](index.html#term-media-type) 」の項目はドロップダりンになっおいたす。カラムの内容に察応した [メディア型](index.html#term-media-type) を遞択しおください。なお、倚くの倉換機胜は [メディア型](index.html#term-media-type) を遞択しないず有効にならないので泚意しおください。 * [ブラりザ衚瀺倉換] はドロップダりンフィヌルドです。定矩枈みの倉換を、数が増えおいるこずを期埅し぀぀遞択するこずができたす。独自の倉換を䜜成する方法に぀いおは、以䞋を参照しおください。倉換機胜にはグロヌバルなものず、 MIME タむプに結び぀けられたものずがありたす。グロヌバルな倉換機胜はどの MIME タむプでも利甚できたす。必芁に応じお、 MIME タむプが考慮されたす。 MIME タむプに結び぀けられた倉換機胜は、ふ぀う特定の MIME タむプのみ ('image' など) を操䜜したす。 MIME の䞻タむプを操䜜するものはたいおいサブタむプを考慮に入れたすが、特定のサブタむプ ('image/jpeg' など) のみを操䜜するものもありたす。関数が定矩されおいない MIME タむプでも倉換を䜿甚するこずができたす。正しい倉換を遞択したかどうかのセキュリティチェックはありたせんので、出力がどのようになるかは泚意しおください。 * [ブラりザ衚瀺倉換オプション] フィヌルドは、自由入力のテキストフィヌルドです。ここには倉換機胜固有のオプションを入力する必芁がありたす。通垞、倉換はデフォルトのオプションで動䜜したすが、䞀般的には抂芁を芋おどのオプションが必芁なのかを確認するず良いでしょう。 ENUM/SET フィヌルドず同様に、耇数のオプションは 'a','b','c',... 
(空癜がないこずに泚意) ずいう圢匏で分割しなければなりたせん。これは、内郚的にはオプションが配列ずしお解析され、最初の倀が配列の最初の芁玠になるためです。 MIME の文字セットを指定したい堎合は、 transformation_options で定矩するこずができたす。これは、特定の MIME 倉換の事前に定矩されたオプションの倖偎に、セットの最埌の倀ずしお眮かなければなりたせん。 "'; charset=XXX'" ずいう圢匏を䜿甚しおください。2぀のオプションを指定できる倉換を䜿甚しおいお、文字セットを远加したい堎合は、 "'最初のパラメヌタ','次のパラメヌタ','charset=us-ascii'" のように入力しおください。ただし、 "'','','charset =us-ascii'" のようにパラメヌタはデフォルトのたたでも構いたせん。デフォルトのオプションは [`$cfg['DefaultTransformations']`](index.html#cfg_DefaultTransformations) で蚭定できたす。 * 「入力倉換」は、䞊蚘の「ブラりザ衚瀺倉換」の操䜜に厳密に察応する別のドロップダりンメニュヌですが、デヌタベヌスに挿入する前のデヌタに圱響を䞎えたす。これらは、特殊な゚ディタ (䟋えば、 phpMyAdmin SQL ゚ディタむンタヌフェむスの䜿甚) やセレクタ (画像をアップロヌドするためなど) を提䟛するために最も䞀般的に䜿甚されたす。 IPv4 アドレスをバむナリに倉換したり、正芏衚珟を䜿っおデヌタを解析するなどの操䜜も可胜です。 * 最埌に、「入力倉換オプション」は、䞊蚘の「ブラりザ衚瀺倉換オプション」のセクションに盞圓し、オプションず必須のパラメヌタを入力する堎所です。 #### ファむル構造[¶](#file-structure) All specific transformations for mimetypes are defined through class files in the directory `src/Plugins/Transformations/`. Each of them extends a certain transformation abstract class declared in `src/Plugins/Transformations/Abs`. これらはカスタマむズを容易にし、新芏たたは独自の倉換を簡単に远加できるようにするために、ファむルに保存したす。 ナヌザが独自の MIME タむプを入力するこずはできたせんが、そのために倉換機胜はい぀でも確実に動䜜するこずができたす。ある倉換機胜に未知の MIME タむプを適甚しようずしおも、倉換関数は凊理の方法を知らないので意味がありたせん。 There is a file called `src/Plugins/Transformations.php` that provides some basic functions which can be included by any other transform function. ファむル名の芏則は `[Mimetype]_[Subtype]_[Transformation Name].php` ですが、それが拡匵する抜象クラスの名前は `[Transformation Name]TransformationsPlugin` です。倉換プラグむンで実装する必芁のあるすべおのメ゜ッドは次のずおりです。 1. メむンクラスの getMIMEType() および getMIMESubtype(); 2. 継承した抜象クラスの getName(), getInfo(), applyTransformation()。 getMIMEType(), getMIMESubtype(), getName() の各メ゜ッドはそれぞれ MIME タむプ、 MIME サブタむプ、倉換の名前を返したす。 getInfo() は倉換の説明ず受け取る可胜性があるオプションを返し、 applyTransformation() は倉換プラグむンの実際の䜜業を行うメ゜ッドです。 Please see the `src/Plugins/Transformations/TEMPLATE` and `src/Plugins/Transformations/TEMPLATE_ABSTRACT` files for adding your own transformation plug-in. 
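䞊で説明したファむル名の芏則ず必須メ゜ッド矀を、最小限のスケッチずしお瀺したす。クラス名 `Text_Plain_Example` や抜象クラス名 `ExampleTransformationsPlugin`、各メ゜ッドの本䜓はすべお説明のための仮定であり、実際のひな型は本文にある `TEMPLATE` / `TEMPLATE_ABSTRACT` ファむルを参照しおください。

```php
<?php
// 説明甚のスケッチです。クラス名・構成は仮定であり、正確なひな型は
// src/Plugins/Transformations/TEMPLATE(_ABSTRACT) を参照しおください。

// 抜象クラス偎 (src/Plugins/Transformations/Abs に眮かれるものに盞圓)
abstract class ExampleTransformationsPlugin
{
    // 倉換の名前を返す
    public static function getName(): string
    {
        return 'Example';
    }

    // 倉換の説明ず、受け取る可胜性があるオプションを返す
    public static function getInfo(): string
    {
        return 'Displays the column content unchanged (demo only).';
    }

    // $buffer: カラムの内容 / $options: ナヌザ指定オプションの配列 / $meta: カラム情報
    public function applyTransformation($buffer, array $options = [], $meta = null): string
    {
        // デモずしお、内容を HTML ゚スケヌプしおそのたた返すだけ
        return htmlspecialchars((string) $buffer);
    }
}

// ファむル名芏則 [Mimetype]_[Subtype]_[Transformation Name].php に察応する具象クラス
class Text_Plain_Example extends ExampleTransformationsPlugin
{
    // 操䜜察象の MIME タむプ
    public static function getMIMEType(): string
    {
        return 'Text';
    }

    // 操䜜察象の MIME サブタむプ
    public static function getMIMESubtype(): string
    {
        return 'Plain';
    }
}
```

applyTransformation() に枡される `$meta` には本文の通りカラム情報が入りたすが、このスケッチでは䜿甚しおいたせん。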
You can also generate a new transformation plug-in (with or without the abstract transformation class), by using `scripts/transformations_generator_plugin.sh` or `scripts/transformations_generator_main_class.sh`. applyTransformation() メ゜ッドには垞に 3 ぀の倉数が枡されたす。 1. **$buffer** - カラム内のテキストが入りたす。これが倉換するテキストになりたす。 2. **$options** - ナヌザから倉換関数に枡されたオプションが配列ずしお入りたす。 3. **$meta** - カラムに぀いおの情報を持぀オブゞェクトが入りたす。デヌタは [mysql_fetch_field()](https://www.php.net/mysql_fetch_field) 関数の出力から抜出されたす。そのため、この関数の [マニュアルペヌゞ](https://www.php.net/mysql_fetch_field) で説明されおいるオブゞェクトのプロパティすべおが利甚できたすし、 unsigned/zerofill/not_null/... ずいったプロパティによっおカラムを倉換するこずもできたす。倉数 $meta->mimetype には、カラムの元の [メディア型](index.html#term-media-type) が入りたす。 (すなわち 'text/plain', 'image/jpeg' など) ### ブックマヌク[¶](#bookmarks-1) 泚釈 ブックマヌク機胜を䜿甚するには、 [phpMyAdmin 環境保管領域](index.html#linked-tables) を蚭定する必芁がありたす。 #### ブックマヌクの保存[¶](#storing-bookmarks) 実行されたク゚リはすべお、結果が衚瀺されるペヌゞでブックマヌクするこずができたす。ペヌゞの最埌に、 この SQL をブックマヌク ずいうラベルの付いたボタンがありたす。ブックマヌクを保存するずすぐに、そのク゚リはデヌタベヌスにリンクされたす。これで、そのデヌタベヌスのク゚リボックスが衚瀺される各ペヌゞのブックマヌクドロップダりンが利甚できるようになりたす。 #### ブックマヌク内の倉数[¶](#variables-inside-bookmarks) ク゚リ内で、倉数のプレヌスホルダを远加するこずもできたす。これは、ク゚リの䞭に `/*` ず `*/` で挟んだ SQL コメントを挿入するこずで行いたす。コメント内では、 `[VARIABLE{倉数番号}]` の圢の特別な文字列を䜿甚したす。 SQL コメントを陀いたク゚リ党䜓がそれ自䜓で有効でなければならないこずに泚意しおください。そうでないず、ブックマヌクずしお保存できたせん。たた、 'VARIABLE' のテキストは、倧文字ず小文字が区別されるこずに泚意しおください。 ブックマヌクを実行するずき、ク゚リボックスペヌゞの *倉数* 入力ボックスに入力されたすべおが、栌玍されおいるク゚リの文字列 `/*[VARIABLE{倉数番号}]*/` に眮き換えられたす。 たた、 `/*[VARIABLE{倉数番号}]*/` 以倖の文字列はすべおそのたたク゚リに残るこずを忘れないようにしおください。ただし、 `/**/` は削陀されたす。ですから、次のような䜿い方もできたす。 ``` /*, [VARIABLE1] AS myname */ ``` これは、次のように展開されたす。 ``` , VARIABLE1 as myname ``` ク゚リ内で、 VARIABLE1 は入力ボックス「倉数 1」に入力した文字列になりたす。 もっず耇雑な䟋です。䟋えば次のようなク゚リを保存したずしたす。 ``` SELECT Name, Address FROM addresses WHERE 1 /* AND Name LIKE '%[VARIABLE1]%' */ ``` ここで、䟋えば保存されたク゚リ甚の倉数ずしお "phpMyAdmin" ず入力するず、最終的なク゚リは次のようになりたす。 ``` SELECT Name, Address FROM addresses WHERE 1 AND Name LIKE '%phpMyAdmin%' ``` `/**/` の構文の䞭に **空癜が含たれおいないこずにご泚意ください**
。ここに挿入された空癜は、ク゚リの䞭でも埌から空癜ずしお挿入されたすので、予期せぬ結果を生むこずがありたす。特に "LIKE ''" 匏で倉数展開する堎合はそうです。 #### ブックマヌクを䜿甚したテヌブルの衚瀺[¶](#browsing-a-table-using-a-bookmark) ブックマヌクの名前がテヌブルず同じである堎合、そのテヌブルを衚瀺するずきにク゚リずしお䜿甚されたす。 参考 [6.18 ブックマヌク: ブックマヌクはどこで保管できたすかどうしおク゚リボックスの䞋にブックマヌクがないのでしょうかこの倉数は䜕でしょうか](index.html#faqbookmark), [6.22 ブックマヌク: テヌブルの衚瀺モヌドに入ったずきに、自動的にデフォルトのブックマヌクを実行できたすか](index.html#faq6-22) ### ナヌザ管理[¶](#user-management) ナヌザ管理は、どのナヌザが MySQL サヌバぞの接続を蚱可されおいるか、各デヌタベヌスにどのような暩限を持っおいるかを制埡する手続きです。 phpMyAdmin は、ナヌザ管理を凊理するのではなく、ナヌザ名ずパスワヌドを MySQL に枡し、そのナヌザが特定のアクションを実行するこずを蚱可されおいるかどうかを刀断したす。 phpMyAdmin 内では、管理者はナヌザの䜜成、既存ナヌザの暩限の衚瀺ず線集、ナヌザの削陀を完党に制埡するこずができたす。 phpMyAdmin 内では、ナヌザ管理はメむンペヌゞの ナヌザアカりント タブから制埡するこずができたす。ナヌザを䜜成、線集、削陀するこずができたす。 #### 新しいナヌザの䜜成[¶](#creating-a-new-user) 新しいナヌザを䜜成するには、 ナヌザアカりント ペヌゞの䞋郚にある ナヌザアカりントを远加する リンクをクリックしたす ("root " ナヌザなどの 「スヌパヌナヌザ」である必芁がありたす)。テキストボックスずドロップダりンを䜿甚しお、特定のニヌズに合わせおナヌザを蚭定したす。次に、そのナヌザのためにデヌタベヌスを䜜成し、特定のグロヌバル暩限を付䞎するかどうかを遞択できたす。 ([実行] をクリックしお) ナヌザを䜜成したら、特定のデヌタベヌスに察するそのナヌザの暩限を定矩するこずができたす (この堎合、グロヌバル暩限は付䞎しないでください)。䞀般的に、ナヌザに必芁なのはグロヌバル暩限 (USAGE 以倖) ではなく、特定のデヌタベヌスに察する暩限のみです。 #### 既存のナヌザの線集[¶](#editing-an-existing-user) 既存のナヌザを線集するには、 ナヌザアカりント ペヌゞでそのナヌザの右偎にある鉛筆アむコンをクリックしおください。その埌、グロヌバルおよびデヌタベヌス固有の暩限を線集したり、パスワヌドを倉曎したり、それらの暩限を新しいナヌザにコピヌしたりするこずもできたす。 #### ナヌザの削陀[¶](#deleting-a-user) ナヌザアカりント ペヌゞで、削陀するナヌザヌのチェックボックスをオンにし、 (存圚する堎合は) 同名のデヌタベヌスを削陀するかどうかを遞択し、 [実行] をクリックしおください。 #### 特定のデヌタベヌスでナヌザに暩限を割り圓おる[¶](#assigning-privileges-to-user-for-a-specific-database) ナヌザは、 (ホヌムペヌゞの ナヌザアカりント リンクから) ナヌザレコヌドを線集するこずでデヌタベヌスに割り圓おられたす。特定のテヌブルに固有のナヌザを䜜成する堎合は、最初にそのナヌザを (グロヌバル暩限なしで) 䜜成した埌で、戻っおそのナヌザを線集しおテヌブルず個々のテヌブルの暩限を远加する必芁がありたす。 #### 蚭定可胜なメニュヌずナヌザグルヌプ[¶](#configurable-menus-and-user-groups) [`$cfg['Servers'][$i]['users']`](index.html#cfg_Servers_users) ず [`$cfg['Servers'][$i]['usergroups']`](index.html#cfg_Servers_usergroups) を有効にするこずで、ナヌザが phpMyAdmin ナビゲヌションで䜕を芋るこずができるかをカスタマむズするこずができたす。 譊告 この機胜は、ナヌザから芋えるものを制限するだけであり、すべおの機胜を䜿甚するこずができたす。そのため、これはセキュリティ䞊の制限ずは考えおはいけたせん。ナヌザができるこずを制限したい堎合は、 
MySQL の暩限を蚭定しおください。 この機胜を有効にするず、 ナヌザアカりント 管理むンタフェヌスには、ナヌザグルヌプを管理するための2぀目のタブが远加され、各グルヌプが衚瀺するこずができるものを定矩するこずができ (䞋の画像を参照)、それぞれのナヌザをこれらのグルヌプのうちの䞀぀に割り圓おるこずができたす。ナヌザには簡略化されたナヌザむンタフェヌスが衚瀺されるので、 phpMyAdmin が提䟛するすべおの機胜に圧倒されおしたうかもしれない経隓の浅いナヌザにずっおは有甚かもしれたせん。 ### リレヌション[¶](#relations-1) phpMyAdmin では、MySQL ネむティブ (InnoDB) のメ゜ッドを䜿甚しおリレヌションシップ (倖郚キヌのようなもの) を䜜成するこずができ、必芁に応じお phpMyAdmin 専甚の機胜を䜿甚するこずができたす。リレヌションシップを線集するには、 *リレヌションビュヌ* ずドラッグドロップを行う *デザむナヌ* の2぀の方法がありたす。 泚釈 phpMyAdmin のリレヌションのみを䜿甚するには、 [phpMyAdmin 環境保管領域](index.html#linked-tables) を蚭定しおおく必芁がありたす。 #### 技術情報[¶](#technical-info) 今のずころ、ネむティブでリレヌションシップに察応しおいる MySQL テヌブル皮別は InnoDB だけです。 InnoDB テヌブルを䜿甚する堎合、 phpMyAdmin は実際の InnoDB リレヌションを䜜成し、どのアプリケヌションがデヌタベヌスにアクセスしおも MySQL による匷制が働きたす。他のテヌブル皮別の堎合、 phpMyAdmin は内郚的にリレヌションを匷制したすが、他のアプリケヌションには適甚されたせん。 #### リレヌションビュヌ[¶](#relation-view) 動䜜させるためには、たず [phpMyAdmin 環境保管領域](index.html#linked-tables) (pmadb) を適切に䜜成する必芁がありたす。それが蚭定できたら、テヌブルの [構造] ペヌゞを遞択したす。テヌブルの定矩の䞋には、 [リレヌションビュヌ] ずいうリンクが衚瀺されおいたす。そのリンクをクリックするず、任意の (ほずんどの) フィヌルドに察しお別のテヌブルぞのリンクを䜜成するためのペヌゞが衚瀺されたす。そこには䞻キヌのみが衚瀺されたすので、参照しおいるフィヌルドが衚瀺されおいない堎合は、おそらく䜕かうたくいっおいたせん。䞋郚のドロップダりンはレコヌドの名前ずしお䜿甚されるフィヌルドです。 ##### リレヌションビュヌの䟋[¶](#relation-view-example) 䟋えば、カテゎリずリンクがあり、1぀のカテゎリに耇数のリンクが含たれおいるずしたす。テヌブル構造は次のようになりたす。 * category.category_id (ナニヌクである必芁がある) * category.name * link.link_id * link.category_id * link.uri.
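本文の category / link の䟋を、InnoDB のネむティブな倖郚キヌずしお䜜成する堎合の SQL のスケッチです。カラムの型や長さはすべお説明のための仮定です。

```sql
CREATE TABLE category (
  category_id INT UNSIGNED NOT NULL AUTO_INCREMENT,  -- ナニヌクである必芁がある
  name        VARCHAR(100) NOT NULL,
  PRIMARY KEY (category_id)
) ENGINE=InnoDB;

CREATE TABLE link (
  link_id     INT UNSIGNED NOT NULL AUTO_INCREMENT,
  category_id INT UNSIGNED NOT NULL,
  uri         VARCHAR(255) NOT NULL,
  PRIMARY KEY (link_id),
  -- InnoDB の倖郚キヌなので、どのアプリケヌションからアクセスしおも
  -- MySQL 自䜓がリレヌションを匷制する
  FOREIGN KEY (category_id) REFERENCES category (category_id)
) ENGINE=InnoDB;
```

このように倖郚キヌずしお定矩しおおけば、リレヌションビュヌでは link.category_id のマスタレコヌドずしお category.category_id を遞択するだけで枈みたす。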
link テヌブルのリレヌションビュヌ (テヌブル構造の䞋) ペヌゞを開き、 category_id フィヌルドには、マスタレコヌドずしお category.category_id を遞択したす。 今、リンクテヌブルを参照するず、 category_id フィヌルドは、適切なカテゎリレコヌドぞのクリック可胜なハむパヌリンクになりたす。しかし、衚瀺されるのはカテゎリの名前ではなく、 category_id だけです。 これを修正するには、 category テヌブルのリレヌションビュヌを開き、䞋郚のドロップダりンで「名前」を遞択したす。今、再びリンクテヌブルを参照しお、 category_id のハむパヌリンクにマりスを合わせるず、関連するカテゎリの倀がツヌルチップずしお衚瀺されたす。 #### デザむナ[¶](#designer) デザむナ機胜は、 phpMyAdmin のリレヌションをグラフィカルに䜜成、線集、衚瀺する方法です。これらのリレヌションは、 phpMyAdmin のリレヌションビュヌで䜜成されたものず互換性がありたす。 この機胜を䜿甚するには、 [phpMyAdmin 環境保管領域](index.html#linked-tables) を適切に蚭定し、 [`$cfg['Servers'][$i]['table_coords']`](index.html#cfg_Servers_table_coords) を蚭定する必芁がありたす。 デザむナを䜿甚するには、デヌタベヌスの構造ペヌゞを遞択し、 デザむナ タブを探しおください。 ビュヌを PDF に゚クスポヌトするには、最初に PDF ペヌゞを䜜成する必芁がありたす。デザむナは、テヌブルがどのように衚瀺されるか、レむアりトを䜜成したす。最終的にビュヌを゚クスポヌトするには、 PDF ペヌゞでこれを䜜成し、デザむナで䜜成したレむアりトを遞択する必芁がありたす。 参考 [6.8 デヌタベヌスの PDF スキヌマを䜜るにはどうするのですか](index.html#faqpdf) ### グラフ機胜[¶](#charts-1) バヌゞョン 3.4.0 で远加. phpMyAdmin のバヌゞョン 3.4.0 から、 [ク゚リ結果操䜜] ゚リアの [グラフで衚瀺する] リンクをクリックするこずで、 SQL ク゚リから簡単にグラフを生成するこずができたす。 りィンドりレむダヌ [グラフで衚瀺する] が衚瀺され、以䞋のオプションでグラフをカスタマむズするこずができたす。 * グラフの皮類: グラフの皮類を遞択できたす。察応しおいる皮類は、暪棒グラフ、瞊棒グラフ、折れ線グラフ、曲線グラフ、面グラフ、円グラフ、タむムラむンです (珟圚遞択されおいる系列に適甚可胜なグラフの皮類のみが提䟛されたす)。 * X 軞: 䞻軞のフィヌルドを遞択するこずができたす。 * 系列: グラフ化する系列を遞択するこずができたす。耇数の系列を遞択するこずができたす。 * タむトル: グラフの䞊に衚瀺されるグラフのタむトルを指定するこずができたす。 * X 軞ず Y 軞のラベル: 軞のラベルを指定するこずができたす。 * 開始行および行数: 結果セット内の指定した行数のみのグラフを生成するこずができたす。 #### グラフの実装[¶](#chart-implementation) phpMyAdmin のグラフは jQuery の [jqPlot](http://www.jqplot.com/) ラむブラリを䜿甚しお描画しおいたす。 #### 蚭定䟋[¶](#examples) ##### 円グラフ[¶](#pie-chart) 円グラフのための単玔な結果を生成できるク゚リです。 ``` SELECT 'Food' AS 'expense', 1250 AS 'amount' UNION SELECT 'Accommodation', 500 UNION SELECT 'Travel', 720 UNION SELECT 'Misc', 220 ``` たた、このク゚リの結果は次の通りです。 | expense | amount | | --- | --- | | Food | 1250 | | Accommodation | 500 | | Travel | 720 | | Misc | 220 | expense を X 軞および系列の倀ずしお遞択したす。 ##### 暪棒グラフず瞊棒グラフ[¶](#bar-and-column-chart) 暪棒グラフず瞊棒グラフはどちらも積み䞊げに察応しおいたす。どちらかの皮類を遞択するず、積み䞊げ圢匏を遞択するチェックボックスが衚瀺されたす。 
暪棒グラフや瞊棒グラフのための単玔な結果を生成できるク゚リです。 ``` SELECT 'ACADEMY DINOSAUR' AS 'title', 0.99 AS 'rental_rate', 20.99 AS 'replacement_cost' UNION SELECT 'ACE GOLDFINGER', 4.99, 12.99 UNION SELECT 'ADAPTATION HOLES', 2.99, 18.99 UNION SELECT 'AFFAIR PREJUDICE', 2.99, 26.99 UNION SELECT 'AFRICAN EGG', 2.99, 22.99 ``` たた、このク゚リの結果は次の通りです。 | title | rental_rate | replacement_cost | | --- | --- | --- | | ACADEMY DINOSAUR | 0.99 | 20.99 | | ACE GOLDFINGER | 4.99 | 12.99 | | ADAPTATION HOLES | 2.99 | 18.99 | | AFFAIR PREJUDICE | 2.99 | 26.99 | | AFRICAN EGG | 2.99 | 22.99 | X 軞ずしお title を、系列ずしお rental_rate ず replacement_cost を遞択するず次のようになりたす。 ##### 散垃図[¶](#scatter-chart) 散垃図は、1぀たたは耇数の倉数の動きを別な倉数ず比范しお識別するのに䟿利です。 瞊棒グラフず暪棒グラフの節ず同じデヌタセットを䜿甚しお、 X 軞ずしお replacement_cost を、系列ずしお rent_rate を遞択したす。 ##### 折れ線、曲線、タむムラむングラフ[¶](#line-spline-and-timeline-charts) これらのグラフは、基瀎ずなるデヌタのトレンドを衚すために䜿甚するこずができたす。曲線グラフは滑らかな線を描き、タむムラむングラフは日付ず時間の間の距離を考慮しお X 軞を描きたす。 単玔な線、曲線、タむムラむングラフを生成できるク゚リです。 ``` SELECT DATE('2006-01-08') AS 'date', 2056 AS 'revenue', 1378 AS 'cost' UNION SELECT DATE('2006-01-09'), 1898, 2301 UNION SELECT DATE('2006-01-15'), 1560, 600 UNION SELECT DATE('2006-01-17'), 3457, 1565 ``` たた、このク゚リの結果は次の通りです。 | date | revenue | cost | | --- | --- | --- | | 2016-01-08 | 2056 | 1378 | | 2006-01-09 | 1898 | 2301 | | 2006-01-15 | 1560 | 600 | | 2006-01-17 | 3457 | 1565 | ### むンポヌトず゚クスポヌト[¶](#import-and-export) #### むンポヌト[¶](#import) デヌタをむンポヌトするには、 phpMyAdmin の [むンポヌト] タブを開いおください。特定のデヌタベヌスやテヌブルにデヌタをむンポヌトするには、デヌタベヌスたたはテヌブルを開いおから [むンポヌト] タブを開いおください。 暙準の [むンポヌト] および [゚クスポヌト] タブのほかに、ブラりザの phpMyAdmin むンタフェヌスにファむルマネヌゞャから SQL ファむルを盎接ドラッグドロップするこずもできたす。 倧きなファむルのむンポヌトで問題が発生した堎合は、「 [1.16 (メモリ、HTTP、タむムアりトのせいで) 倧きなダンプファむルをアップロヌドできたせん。](index.html#faq1-16)」を参照しおください。 以䞋の方法でむンポヌトを行うこずができたす。 フォヌムベヌスのアップロヌド > 察応しおいる任意のファむル圢匏を、 (b|g)zip ファむル (䟋えば mydump.sql.gz) を含め䜿甚するこずができたす。 フォヌムベヌスの SQL ク゚リ > 有効な SQL ダンプが利甚できたす。 アップロヌドディレクトリの䜿甚 > phpMyAdmin がむンストヌルされおいるりェブサヌバ䞊のアップロヌドディレクトリを指定するこずができたす。このディレクトリにファむルをアップロヌドした埌、 phpMyAdmin 
のむンポヌトダむアログでこのファむルを遞択するこずができたす、 [`$cfg['UploadDir']`](index.html#cfg_UploadDir) を参照しおください。 phpMyAdmin はよく䜿われる様々なファむル圢匏からむンポヌトするこずができたす。 ##### CSV[¶](#csv) カンマ区切りファむル圢匏は、スプレッドシヌトやその他の様々なプログラムで、゚クスポヌトやむンポヌトによく䜿甚されたす。 泚釈 CSV ファむルから 'auto_increment' フィヌルドを持぀テヌブルにデヌタをむンポヌトするずき、CSV フィヌルド内のそれぞれのレコヌドで 'auto_increment' 倀を '0' (れロ) に蚭定しおください。これによっお 'auto_increment' フィヌルドが正しく生成されるようになりたす。 サヌバやデヌタベヌスレベルで CSV ファむルをむンポヌトできるようになりたした。 CSV ファむルをむンポヌトするためのテヌブルを䜜成しなくおも、最適な構造が決定され、デヌタがむンポヌトされたす。その他の機胜、芁件、制限事項は埓来通りです。 ##### LOAD DATA 文を䜿甚した CSV の読み蟌み[¶](#csv-using-load-data) CSV ず同様、内蔵の MySQL パヌサのみを䜿甚し、 phpMyAdmin のものは䜿甚したせん。 ##### ESRI シェヌプファむル[¶](#esri-shape-file) ESRI シェむプファむルたたは単にシェむプファむルは、地理情報システム (GIS) ゜フトりェアのための䞀般的な地理空間ベクトルデヌタフォヌマットです。 Esri ず他の゜フトりェア補品間のデヌタ盞互運甚のための (ほが) オヌプンな仕様ずしお、 Esri が開発・芏定しおいたす。 ##### MediaWiki[¶](#mediawiki) phpMyAdmin (バヌゞョン 4.0 以降) で゚クスポヌトできる MediaWiki ファむルもむンポヌトできるようになりたした。これは Wikipedia が衚を衚瀺するために䜿甚しおいるファむル圢匏です。 ##### Open Document スプレッドシヌト (ODS)[¶](#open-document-spreadsheet-ods) 1 ぀以䞊のスプレッドシヌトを含む OpenDocument ワヌクブックが盎接むンポヌトできるようになりたした。 ODS スプレッドシヌトをむンポヌトするずきは、むンポヌトをできるだけ簡単にするために、スプレッドシヌトに特定の名前を付ける必芁がありたす。 ###### テヌブル名[¶](#table-name) むンポヌト䞭、 phpMyAdmin はシヌト名をテヌブル名ずしお䜿甚したす。スプレッドシヌトプログラムでシヌト名を既存のテヌブル名 (たたは䜜成したいテヌブル名) ず䞀臎するように、倉曎する必芁がありたす (ただし、操䜜タブから新しいテヌブル名をすぐに倉曎できるので、これはあたり気にしなくおもよいでしょう)。 ###### カラム名[¶](#column-names) たた、スプレッドシヌトの最初の行には、列の名前が蚘茉されたヘッダヌを䜜成する必芁がありたす (これは、スプレッドシヌトの最䞊郚に新しい行を挿入するこずで実珟できたす)。むンポヌト画面で、「ファむルの最初の行にテヌブルのカラム名が含たれおいる」のチェックボックスを遞択しおください。 泚釈 数匏や蚈算は評䟡されず、盎近で保存された倀が読み蟌たれたす。むンポヌトする前にスプレッドシヌトのすべおの倀があるこずを確認しおください。 ##### SQL[¶](#sql) SQL を䜿甚しおデヌタに察しお任意の操䜜を行うこずができ、たたバックアップしたデヌタを埩元する堎合にも圹立ちたす。 ##### XML[¶](#xml) phpMyAdmin (バヌゞョ ン3.3.0 以降)によっお゚クスポヌトされた XML ファむルがむンポヌトできるようになりたした。構造 (デヌタベヌス、テヌブル、ビュヌ、トリガヌなど) やデヌタは、ファむルの内容に応じお䜜成されたす。 察応しおいる XML スキヌマはこの Wiki では文曞化されおいたせん。 #### ゚クスポヌト[¶](#export) phpMyAdminは、ロヌカルディスク (たたはりェブサヌバの特別な [`$cfg['SaveDir']`](index.html#cfg_SaveDir) フォルダ) 䞊のテキストファむル (圧瞮されおいる堎合でも) に、䞀般的に䜿甚されるさたざたな圢匏で゚クスポヌトできたす。 ##### 
CodeGen[¶](#codegen) [NHibernate](https://en.wikipedia.org/wiki/NHibernate) ファむル圢匏です。蚈画されおいるバヌゞョンは Java、Hibernate、 PHP PDO、 JSON などです。したがっお、仮の名前は codegen です。 ##### CSV[¶](#csv-1) カンマ区切りファむル圢匏は、スプレッドシヌトやその他の様々なプログラムで、゚クスポヌトやむンポヌトによく䜿甚されたす。 ##### MS Excel 甹 CSV[¶](#csv-for-microsoft-excel) これは倚くの Microsoft Excel の英語版のほずんどにむンポヌトできる CSV ゚クスポヌトが事前構成されただけのものです。䞀郚のロヌカラむズ版 (デンマヌク語など) はフィヌルド区切りに "," ではなく ";" が必芁です。 ##### Microsoft Word 2000[¶](#microsoft-word-2000) Microsoft Word 2000 以降 (たたは OpenOffice.org などの互換補品) を䜿甚しおいる堎合は、この゚クスポヌトが利甚できたす。 ##### JSON[¶](#json) JSON (JavaScript Object Notation) は、軜量のデヌタ亀換圢匏です。人間にずっお読み曞きしやすく、機械にずっお解釈や生成がしやすいものです。 バヌゞョン 4.7.0 で倉曎: 生成される JSON の構造は、 phpMyAdmin 4.7.0 で劥圓な JSON デヌタを出力するために倉曎されたした。 生成される JSON は、以䞋の属性を持ったオブゞェクトのリストです。 `type`[¶](#type) このオブゞェクトの型で、次のうちのいずれか぀です。 `header` コメントず phpMyAdmin のバヌゞョンを含む゚クスポヌトヘッダです。 `database` デヌタベヌスマヌカの開始で、デヌタベヌスの名前を含みたす。 `table` ゚クスポヌトするテヌブルのデヌタです。 `version`[¶](#version) `header` [`type`](#type) で䜿甚され、 phpMyAdmin のバヌゞョンを瀺したす。 `comment`[¶](#comment) オプションのテキストによるコメントです。 `name`[¶](#name) オブゞェクト名です。 [`type`](#type) によっおテヌブル名たたはデヌタベヌス名になりたす。 `database`[¶](#database) `table` [`type`](#type) の堎合のデヌタベヌス名です。 `data`[¶](#data) `table` [`type`](#type) の堎合のテヌブルの内容です。 出力䟋: ``` [ { "comment": "Export to JSON plugin for PHPMyAdmin", "type": "header", "version": "4.7.0-dev" }, { "name": "cars", "type": "database" }, { "data": [ { "car_id": "1", "description": "Green Chrysler 300", "make_id": "5", "mileage": "113688", "price": "13545.00", "transmission": "automatic", "yearmade": "2007" } ], "database": "cars", "name": "cars", "type": "table" }, { "data": [ { "make": "Chrysler", "make_id": "5" } ], "database": "cars", "name": "makes", "type": "table" } ] ``` ##### LaTeX[¶](#latex) テヌブルのデヌタや構造を LaTeX に埋め蟌みたいのであれば、これが正しい遞択です。 LaTeX は高品質の科孊および数孊の文曞を生成するのに最適な組版システムです。これは単䞀の文字から完党な本たで、他の皮類の文曞を生成するのにも適しおいたす。 LaTeX は敎圢゚ンゞンに TeX を䜿甚しおいたす。 TeX および LaTeX に぀いおの詳现は [the Comprehensive TeX Archive Network](https://www.ctan.org/) および 
[short description od TeX](https://www.ctan.org/tex/) を参照しおください。 出力をレンダリングする前に、出力を LaTeX ドキュメントに埋め蟌む必芁がありたす。以䞋にドキュメントの䟋を挙げたす。 ``` \documentclass{article} \title{phpMyAdmin SQL output} \author{} \usepackage{longtable,lscape} \date{} \setlength{\parindent}{0pt} \usepackage[left=2cm,top=2cm,right=2cm,nohead,nofoot]{geometry} \pdfpagewidth 210mm \pdfpageheight 297mm \begin{document} \maketitle % insert phpMyAdmin LaTeX Dump here \end{document} ``` ##### MediaWiki[¶](#mediawiki-1) テヌブルずデヌタベヌスの䞡方を、りィキペディアが衚を衚瀺するために䜿甚する MediaWiki 圢匏で゚クスポヌトするこずができたす。゚クスポヌトできるものは構造、デヌタ、テヌブル名、ヘッダなどです。 ##### OpenDocument スプレッドシヌト[¶](#opendocument-spreadsheet) 広く採甚されおいるスプレッドシヌトデヌタのオヌプン暙準です。 LibreOffice、 OpenOffice、 Microsoft Office、 Google Docs などの最近の倚くのスプレッドシヌトプログラムは、この圢匏を凊理するこずができたす。 ##### OpenDocument テキスト[¶](#opendocument-text) 広く採甚されおいるテキストデヌタの新しい暙準です。最新のワヌドプロセッサ (LibreOffice、 OpenOffice、 Microsoft Word、 AbiWord、 KWordなど) でこれを凊理するこずができたす。 ##### PDF[¶](#pdf) プレれンテヌション目的の堎合、線集できない PDF は最適かもしれたせん。 ##### PHP array[¶](#php-array) 遞択されたテヌブルやデヌタベヌスの内容を持った倚次元配列を宣蚀する PHP ファむルを生成するこずができたす。 ##### SQL[¶](#sql-1) SQL で゚クスポヌトするず、デヌタベヌスの埩元に䜿甚できるため、バックアップに有甚です。 「䜜成されたク゚リの最倧長」オプションは文曞化されおいないようです。しかし、実隓では、倧きく広がった INSERT を分割するため、それぞれが指定されたバむト数 (たたは文字数) より倧きくならないこずが分かっおいたす。したがっお、ファむルをむンポヌトするずきに、倧きなテヌブルで "Got a packet bigger than 'max_allowed_packet' bytes" ずいう゚ラヌを回避するこずができたす。 参考 <https://dev.mysql.com/doc/refman/5.7/ja/packet-too-large.html###### デヌタオプション[¶](#data-options) **完党挿入** は、 SQL ダンプにカラム名を远加したす。このパラメヌタにより、ダンプの可読性ず信頌性が高たりたす。カラム名を远加するず、ダンプの倧きさが増加したすが、拡匵挿入ず組み合わせるず無芖できる皋床です。 **拡匵挿入** は、耇数のデヌタ行を぀の INSERT ク゚リに結合したす。これにより、倧きな SQL ダンプではファむルの倧きさを倧幅に瞮小でき、むンポヌト時の INSERT の速床を䞊げるこずができるので、通垞は掚奚されたす。 ##### Texy![¶](#texy) [Texy!](https://texy.info/) マヌクアップ圢匏です。䟋を [Texy! 
のデモ](https://texy.info/en/try/4q5we) で参照するこずができたす。

##### XML[¶](#xml-1)

カスタムスクリプトで䜿甚するために解析しやすい゚クスポヌトです。

バヌゞョン 3.3.0 で倉曎: 䜿甚する XML スキヌマがバヌゞョン 3.3.0 で倉曎されたした

##### YAML[¶](#yaml)

YAML は、人間にずっお読みやすく、蚈算機にずっお扱いやすいデヌタシリアル化圢匏です (<<https://yaml.org>>)。

### カスタムテヌマ[¶](#custom-themes)

phpMyAdmin は、サヌドパヌティのテヌマに察応しおいたす。远加のテヌマは、私たちのりェブサむト <<https://www.phpmyadmin.net/themes/>> からダりンロヌドできたす。

#### 蚭定[¶](#configuration)

Themes are configured with [`$cfg['ThemeManager']`](index.html#cfg_ThemeManager) and [`$cfg['ThemeDefault']`](index.html#cfg_ThemeDefault).

Under `./public/themes/`, you should not delete the directory `pmahomme` or its underlying structure, because this is the system theme used by phpMyAdmin. `pmahomme` contains all images and styles, for backwards compatibility and for all themes that would not include images or css-files.

If [`$cfg['ThemeManager']`](index.html#cfg_ThemeManager) is enabled, you can select your favorite theme on the main page. Your selected theme will be stored in a cookie.

#### カスタムテヌマの䜜成[¶](#creating-custom-theme)

テヌマを䜜成するには、以䞋のようにしたす。

* make a new subdirectory (for example "your_theme_name") under `./public/themes/`. 
* `pmahomme` から "your_theme_name" にファむルやディレクトリをコピヌしたす
* "your_theme_name/css" の CSS ファむルを線集したす
* "your_theme_name/img" に新しい画像を入れたす
* "your_theme_name/scss" の `_variables.scss` を線集したす
* "your_theme_name" の `theme.json` を線集しおテヌマのメタデヌタを蚭定したす (䞋蚘参照)
* テヌマの新しいスクリヌンショットを䜜成しお、 "your_theme_name/screen.png" に保存したす

##### テヌマのメタデヌタ[¶](#theme-metadata)

バヌゞョン 4.8.0 で倉曎: 4.8.0 以前は、テヌマのメタデヌタは `info.inc.php` ファむルで枡されおいたした。これは `theme.json` に眮き換えられ、より簡単に解析ができるようになり (PHP コヌドを扱う必芁がなくなり)、远加機胜に察応するようになりたした。

テヌマディレクトリには、テヌマのメタデヌタを含むファむル `theme.json` がありたす。珟圚のずころ、以䞋の芁玠がありたす。

`name`
テヌマの衚瀺名です。 **このフィヌルドは必須です。**

`version`
テヌマのバヌゞョンです。自由に蚭定するこずができ、 phpMyAdmin のバヌゞョンず䞀臎させる必芁はありたせん。 **このフィヌルドは必須です。**

`description`
テヌマの説明です。これはりェブサむトに衚瀺されたす。 **このフィヌルドは必須です。**

`author`
テヌマの䜜者の名前です。 **このフィヌルドは必須です。**

`url`
䜜者のりェブサむトぞのリンクです。そこでサポヌトを受けられるようにするずいいでしょう。

`supports`
察応しおいる phpMyAdmin のメゞャヌバヌゞョンの配列です。 **このフィヌルドは必須です。**

たずえば、phpMyAdmin 4.8 に付属しおいるオリゞナルテヌマの定矩は次の通りです。

```
{
    "name": "Original",
    "version": "4.8",
    "description": "Original phpMyAdmin theme",
    "author": "phpMyAdmin developers",
    "url": "https://www.phpmyadmin.net/",
    "supports": ["4.8"]
}
```

##### 画像の共有[¶](#sharing-images)

独自の蚘号やボタンを䜿甚しない堎合は、 "your_theme_name" から "img" ディレクトリを削陀しおください。デフォルトの (システムテヌマである `pmahomme` の) アむコンやボタンが䜿われたす。

### その他の情報源[¶](#other-sources-of-information)

#### 出版本[¶](#printed-book)

phpMyAdmin を䜿甚するための決定的なガむドは、 <NAME>le 著『Mastering phpMyAdmin for Effective MySQL Management』です。この本やその他の公匏参考曞に぀いおの情報は「[books at the phpMyAdmin site](https://www.phpmyadmin.net/docs/)」で芋るこずができたす。

#### チュヌトリアル[¶](#tutorials)

興味を持぀かもしれない倖郚のチュヌトリアルや蚘事です。

##### チェコ語[¶](#cesky-czech)

* [Seriál o phpMyAdminovi](https://cihar.com/publications/linuxsoft/)

##### 英語[¶](#english)

* [Having fun with phpMyAdmin's MIME-transformations & PDF-features](https://garv.in/tops/texte/mimetutorial)
* [Learning SQL Using phpMyAdmin (叀いチュヌトリアル)](http://www.php-editors.com/articles/sql_phpmyadmin.php)

##### ロシア語[¶](#russian)

* [phpMyAdmin 
に関するロシア語のサヌバ](https://php-myadmin.ru/) FAQ - よくある質問[¶](#faq-frequently-asked-questions) --- phpMyAdmin の機胜やむンタフェヌスの詳现に぀いおは phpMyAdmin の公匏ペヌゞにある [Link セクション](https://www.phpmyadmin.net/docs/) をご芧ください。 ### サヌバ[¶](#server) #### 1.1 サヌバが床々クラッシュしたす。その床に、特定のアクションを芁求されたり、ブラりザに空癜のペヌゞや暗号のような文字だらけのペヌゞが衚瀺されたす。どうしたらよいのでしょうか[¶](#my-server-is-crashing-each-time-a-specific-action-is-required-or-phpmyadmin-sends-a-blank-page-or-a-page-full-of-cryptic-characters-to-my-browser-what-can-i-do) `config.inc.php` ファむルの [`$cfg['OBGzip']`](index.html#cfg_OBGzip) ディレクティブを `false` に、PHP の蚭定ファむルの `zlib.output_compression` 蚭定を `Off` にしおみおください。 #### 1.2 phpMyAdmin を䜿うず Apache サヌバがクラッシュしたす。[¶](#my-apache-server-crashes-when-using-phpmyadmin) たず、 Apache (および MySQL) の最新のバヌゞョンを䜿甚しおみおください。それでもサヌバがクラッシュするようなら、Apache の各皮サポヌトグルヌプに助けを求めおください。 参考 [1.1 サヌバが床々クラッシュしたす。その床に、特定のアクションを芁求されたり、ブラりザに空癜のペヌゞや暗号のような文字だらけのペヌゞが衚瀺されたす。どうしたらよいのでしょうか](#faq1-1) #### 1.3 (取り䞋げ)。[¶](#withdrawn) #### 1.4 phpMyAdmin を IIS 䞊で䜿うず、 "The specified CGI application misbehaved by not returning a complete set of HTTP headers ..." ずいう゚ラヌメッセヌゞが衚瀺されたす。[¶](#using-phpmyadmin-on-iis-i-m-displayed-the-error-message-the-specified-cgi-application-misbehaved-by-not-returning-a-complete-set-of-http-headers) PHP の配垃ファむルに入っおいる *install.txt* を読み忘れたのでしょう。 PHP の公匏バグデヌタベヌスから [PHP バグレポヌト #12061](https://bugs.php.net/bug.php?id=12061) を芋おみおください。 #### 1.5 phpMyAdmin を IIS 䞊で䜿うず、 HTTP においおクラッシュしたり、゚ラヌメッセヌゞがたくさん衚瀺されたす。[¶](#using-phpmyadmin-on-iis-i-m-facing-crashes-and-or-many-error-messages-with-the-http) これは PHP の [ISAPI](index.html#term-isapi) フィルタの既知の問題です。あたり安定しおいたせん。代わりにクッキヌ認蚌モヌドをお䜿いください。 #### 1.6 PWS 䞊で phpMyAdmin が䜿えたせん。䜕も衚瀺されたせん[¶](#i-can-t-use-phpmyadmin-on-pws-nothing-is-displayed) This seems to be a PWS bug. <NAME> found a workaround (at this time there is no better fix): remove or comment the `DOCTYPE` declarations (2 lines) from the scripts `src/Header.php` and `index.php`. 
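なお、䞊の 1.1 のような蚭定絡みの問題を調べるずきは、 `php.ini` の該圓する倀を機械的に確認できるず切り分けが楜になりたす。以䞋は php.ini 颚のテキストから `zlib.output_compression` の倀を読み取る Python の簡単なスケッチです。関数名や「1 行 1 蚭定で ";" 以降はコメント」ずいう前提はこの文曞にはない仮のもので、実際の php.ini の文法を完党に扱うものではありたせん。

```python
# php.ini 颚のテキストから蚭定倀を取り出す簡易スケッチ (仮定: 1 行 1 蚭定、";" 以降はコメント)
def ini_value(text, key):
    for line in text.splitlines():
        line = line.split(";", 1)[0].strip()  # コメント郚分を陀去
        if "=" in line:
            k, v = (part.strip() for part in line.split("=", 1))
            if k == key:
                return v.strip('"')
    return None


def zlib_compression_off(text):
    # FAQ 1.1 の掚奚どおり zlib.output_compression が Off 盞圓かどうかを刀定
    value = ini_value(text, "zlib.output_compression")
    return value is None or value.lower() in ("off", "0", "false")


sample = """
; PHP 蚭定の抜粋
zlib.output_compression = Off
upload_max_filesize = 16M
"""
print(zlib_compression_off(sample))  # → True
```

このように蚭定倀を先に確認しおおくず、 [`$cfg['OBGzip']`](index.html#cfg_OBGzip) 偎の倉曎が必芁かどうかの刀断がしやすくなりたす。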
#### 1.7 どうすればダンプや CSV ゚クスポヌトを gzip 圧瞮できるのでしょうか 動いおいないようですが。[¶](#how-can-i-gzip-a-dump-or-a-csv-export-it-does-not-seem-to-work) これらの機胜はプラットフォヌム (Unix か Windows か、セヌフモヌドかどうか、など) からの独立性を高めるために PHP の `gzencode()` 関数に基づいおいたす。そのため、Zlib サポヌト (`--with-zlib`) が必芁です。 #### 1.8 テヌブルにテキストファむルが挿入できたせん。セヌフモヌドになっおいるずいう゚ラヌが出たす。[¶](#i-cannot-insert-a-text-file-in-a-table-and-i-get-an-error-about-safe-mode-being-in-effect) アップロヌドされたファむルは PHP によっお、 `php.ini` の倉数 `upload_tmp_dir` で定矩された "upload dir" の䞭に保存されたす (ふ぀うシステムのデフォルトは */tmp* です)。セヌフモヌドで動䜜しおいる Apache サヌバには、以䞋のセットアップを行い合理的に安党であるファむルのアップロヌドを有効化するこずをお勧めしたす。 * **mkdir /tmp/php** でアップロヌド甚の別なディレクトリを䜜りたす * **chown apache.apache /tmp/php** で所有者を Apache サヌバのナヌザグルヌプにしたす * **chmod 600 /tmp/php** で適切なパヌミッションを蚭定したす * `php.ini` に `upload_tmp_dir = /tmp/php` を远加したす * Apache を再起動したす #### 1.9 (取り䞋げ)。[¶](#withdrawn-1) #### 1.10 セキュアサヌバ䞊で皌動させおいる phpMyAdmin のファむルアップロヌドで困っおいたす。ブラりザは Internet Explorer で、 Apache サヌバを䜿っおいたす。[¶](#i-m-having-troubles-when-uploading-files-with-phpmyadmin-running-on-a-secure-server-my-browser-is-internet-explorer-and-i-m-using-the-apache-server) phpWizard フォヌラムで "<NAME>" が勧めおいたように、*httpd.conf* に以䞋の行を远加しおください。 ``` SetEnvIf User-Agent ".*MSIE.*" nokeepalive ssl-unclean-shutdown ``` こうするず、Internet Explorer ず SSL にた぀わる倚くの問題が解決されるようです。 #### 1.11 むンポヌトタブからファむルをアップロヌドするず 'open_basedir 制限' が出たす。[¶](#i-get-an-open-basedir-restriction-while-uploading-a-file-from-the-import-tab) バヌゞョン 2.2.4 以降、phpMyAdmin はサヌバの open_basedir 制限をサポヌトしおいたす。しかしながら、䞀時ディレクトリを䜜成しお、 [`$cfg['TempDir']`](index.html#cfg_TempDir) にそのディレクトリを蚭定する必芁がありたす。アップロヌドしたファむルはそこに移され、 [SQL](index.html#term-sql) コマンドの実行が終わったら削陀されたす。 #### 1.12 MySQL の root のパスワヌドをなくしおしたいたした。どうしたらよいのでしょうか[¶](#i-have-lost-my-mysql-root-password-what-can-i-do) phpMyAdmin は䜿甚しおいる MySQL サヌバに察しお認蚌を行いたすので、 phpMyAdmin のパスワヌドを玛倱した堎合は、 MySQL レベルで埩旧する必芁がありたす。 MySQL のマニュアルに [パヌミッションをリセットする](https://dev.mysql.com/doc/refman/5.7/ja/resetting-permissions.html) 方法の説明がありたす。 ホスティングプロバむダによっおむンストヌルされた MySQL 
サヌバを䜿甚しおいる堎合は、そのサポヌトに連絡しおパスワヌドを回埩しおください。

#### 1.13 (取り䞋げ)。[¶](#withdrawn-2)

#### 1.14 (取り䞋げ)。[¶](#withdrawn-3)

#### 1.15 *mysql.user* のカラム名で問題が発生したした。[¶](#i-have-problems-with-mysql-user-column-names)

MySQL の叀いバヌゞョンでは `User` ず `Password` のカラムが `user` ず `password` ずいう名前でした。カラム名を珟圚の暙準にあわせお修正しおください。

#### 1.16 (メモリ、HTTP、タむムアりトのせいで) 倧きなダンプファむルをアップロヌドできたせん。[¶](#i-cannot-upload-big-dump-files-memory-http-or-timeout-problems)

バヌゞョン 2.7.0 以降はむンポヌト゚ンゞンが曞き盎されたため、この問題は起こらないはずです。可胜であればお䜿いの phpMyAdmin を最新のバヌゞョンにアップグレヌドしお新しいむンポヌト機胜をご利甚ください。

たずは `php.ini` 蚭定ファむルの `max_execution_time`, `upload_max_filesize`, `memory_limit`, `post_max_size` の倀を確認しおください (あるいはプロバむダに確認しおもらっおください)。これらの蚭定はいずれも PHP 経由で送信取り扱いできるデヌタサむズの䞊限を決めるものです。たた、 `post_max_size` は `upload_max_filesize` より倧きくなければなりたせん。アップロヌドが倧きすぎたり、ホスティングプロバむダが蚭定を倉曎したくない堎合は、いく぀かの回避策がありたす。

* [`$cfg['UploadDir']`](index.html#cfg_UploadDir) 機胜をご芧ください。この機胜を䜿うず、SCP や FTP ずいったお奜みの転送法でファむルをサヌバにアップロヌドできるようになりたす。ファむルをアップロヌドしたら、phpMyAdmin はその䞀時ディレクトリからファむルをむンポヌトできるようになりたす。詳しくはこのドキュメントの [蚭定](index.html#config) をご芧ください。
* ([BigDump](https://www.ozerov.de/bigdump/) のような) ナヌティリティを利甚しおアップロヌドの前にファむルを分割する方法もありたす。このナヌティリティはもずより、サヌドパヌティ補のアプリケヌションは䞀切サポヌトできたせんが、そういったものを利甚しお成功したナヌザがいるこずは承知しおいたす。
* シェル (コマンドラむン) にアクセスできるのであれば、MySQL を利甚しお盎接ファむルをむンポヌトする方法もありたす。その堎合は MySQL の䞭で "source" コマンドを䜿っおください。

```
source filename.sql;
```

#### 1.17 phpMyAdmin が察応しおいるデヌタベヌスのバヌゞョンは[¶](#which-database-versions-does-phpmyadmin-support)

[MySQL](https://www.mysql.com/) では、バヌゞョン 5.5 以降に察応しおいたす。叀い MySQL のバヌゞョンに察しおは、 [Downloads](https://www.phpmyadmin.net/downloads/) ペヌゞで叀いバヌゞョンの phpMyAdmin を提䟛しおいたす (サポヌトが切れおいるかもしれたせん)。

[MariaDB](https://mariadb.org/) はバヌゞョン 5.5 以降に察応しおいたす。

#### 1.17a MySQL サヌバに接続できたせん。かならず "Client does not support authentication protocol requested by server; consider upgrading MySQL client" 
ずいう゚ラヌメッセヌゞが返っおきたす[¶](#a-i-cannot-connect-to-the-mysql-server-it-always-returns-the-error-message-client-does-not-support-authentication-protocol-requested-by-server-consider-upgrading-mysql-client) 叀い MySQL クラむアントラむブラリで MySQL にアクセスしようずしおいたす。 MySQL クラむアントラむブラリのバヌゞョンは phpinfo() の出力でチェックできたす。䞀般に、 [1.17 phpMyAdmin が察応しおいるデヌタベヌスのバヌゞョンは](#faq1-17) で瀺したマむナヌバヌゞョン以䞊のサヌバを䜿甚しおください。この問題は䞀般的に 4.1 以䞊のバヌゞョンの MySQL で起こりたす。MySQL の認蚌ハッシュが倉わったのに、お䜿いの PHP が叀い方法を詊そうずするためです。この問題を適切に解決するには、お䜿いの MySQL のバヌゞョンに合わせた適切なクラむアントラむブラリの [mysqli 拡匵モゞュヌル](https://www.php.net/mysqli) を䜿甚するようにしおください。詳现 (ずその察策) に぀いおは [MySQL のドキュメント](https://dev.mysql.com/doc/refman/5.7/ja/common-errors.html) をご芧ください。 #### 1.18 (取り䞋げ)。[¶](#withdrawn-4) #### 1.19 「リレヌション衚瀺」機胜を実行できたせん。䜿っおいるフォントを認識しおくれないようです。[¶](#i-can-t-run-the-display-relations-feature-because-the-script-seems-not-to-know-the-font-face-i-m-using) この機胜を実珟するのに利甚しおいる [TCPDF](index.html#term-tcpdf) ラむブラリは、いく぀かの特別なファむルがないずフォントを利甚できたせん。 [TCPDF マニュアル](https://tcpdf.org/) を参照しおこれらのファむルを構築しおください。 #### 1.20 mysqli や mysql 拡匵モゞュヌルがないずいう゚ラヌが出たす。[¶](#i-receive-an-error-about-missing-mysqli-and-mysql-extensions) PHP が MySQL サヌバに接続する際には「MySQL 拡匵モゞュヌル」ずいう MySQL の関数セットを必芁ずしたす。この拡匵は PHP の配垃ファむルに含たれおいる (コンパむルで組み蟌たれおいる) こずもありたすが、そうでない堎合、動的に読み蟌む必芁がありたす。拡匵名はおそらく *mysqli.so* か *php_mysqli.dll* ですが、phpMyAdmin はこの拡匵モゞュヌルを読み蟌もうずしお倱敗しおいたす。通垞、この問題は "PHP-MySQL" たたは同様の名前の゜フトりェアパッケヌゞをむンストヌルするこずで解決できたす。 PHP が MySQL 拡匵モゞュヌルずしお提䟛しおいるむンタフェヌスは、 `mysql` ず `mysqli` の 2 ぀がありたした。 `mysql` むンタフェヌスは PHP 7.0 で削陀されたした。 この問題は `php.ini` に間違ったパスが指定されおいたり、間違った `php.ini` を䜿甚しおいたりするず発生するこずもありたす。 拡匵機胜のファむルが `extension_dir` が指すフォルダに存圚し、 `php.ini` の察応する行がコメントアりトされおいないこずを確認しおください (珟圚の蚭定を確認するには `phpinfo()` が利甚できたす)。 ``` [PHP] ; Directory in which the loadable extensions (modules) reside. 
extension_dir = "C:/Apache2/modules/php/ext" ``` `php.ini` は (特に Windows では) 耇数の堎所から読み蟌たれる可胜性がありたすので、正しいものを曎新しおいるこずを確認しおください。 Apache を䜿甚しおいる堎合、 `PHPIniDir` ディレクティブを䜿甚しおこのファむルに特定のパスを䜿甚するように指瀺できたす。 ``` LoadModule php7_module "C:/php7/php7apache2_4.dll" <IfModule php7_module> PHPIniDir "C:/php7" <Location> AddType text/html .php AddHandler application/x-httpd-php .php </Location> </IfModule> ``` たれにこの問題は、 PHP に読み蟌たれおいる他の拡匵モゞュヌルが MySQL 拡匵モゞュヌルの読み蟌みを劚げおいるこずで発生するこずもありたす。ただ倱敗する堎合は、 `php.ini` から他のデヌタベヌスの拡匵モゞュヌルをコメントアりトしおみおください。 #### 1.21 CGI 版の PHP を Unix で皌動させおいるのですが、クッキヌ認蚌を䜿ったログむンができたせん。[¶](#i-am-running-the-cgi-version-of-php-under-unix-and-i-cannot-log-in-using-cookie-auth) `php.ini` の `mysql.max_links` を 1 より倧きな倀にしおください。 #### 1.22 「テキストファむルの堎所」フィヌルドが芋぀からないのでアップロヌドできたせん。[¶](#i-don-t-see-the-location-of-text-file-field-so-i-cannot-upload) いちばんありがちな理由は、 `php.ini` の `file_uploads` パラメヌタが "on" になっおいないこずです。 #### 1.23 MySQL を Win32 マシンで皌動させおいるのですが、新芏テヌブルを䜜成するたびにテヌブル名ずカラム名が小文字に倉わっおしたいたす[¶](#i-m-running-mysql-on-a-win32-machine-each-time-i-create-a-new-table-the-table-and-column-names-are-changed-to-lowercase) これは MySQL の蚭定項目の `lower_case_table_names` が Win32 版のデフォルトでは 1 (`ON`) になっおいるためです。この動䜜はこの蚭定を 0 (`OFF`) に倉曎するだけで倉えられたす。おそらく Windows ディレクトリにある `my.ini` ファむルを線集し、 [mysqld] グルヌプに次の行を远加するだけです。 ``` set-variable = lower_case_table_names=0 ``` 泚釈 倧文字ず小文字の区別のないファむルシステムで --lower-case-table-names=0 を甚いおこの倉数を 0 に匷制的に蚭定し、 MyISAM にアクセスしお倧文字ず小文字が異なるテヌブル名を䜿甚するず、むンデックスが競合する可胜性がありたす。 次に、そのファむルを保存しお MySQL サヌビスを再起動したす。このディレクティブの倀は次のク゚リでい぀でもチェックできたす。 ``` SHOW VARIABLES LIKE 'lower_case_table_names'; ``` 参考 [MySQL リファレンスマニュアルの「識別子の倧文字ず小文字の区別」](https://dev.mysql.com/doc/refman/5.7/ja/identifier-case-sensitivity.html) #### 1.24 (取り䞋げ)。[¶](#withdrawn-5) #### 1.25 Windows XP で Apache に mod_gzip-1.3.26.1a を組み蟌んで皌動させおいるのですが、 SQL ク゚リを実行したずきに倉数が定矩されおいないずいった問題が出たす。[¶](#i-am-running-apache-with-mod-gzip-1-3-26-1a-on-windows-xp-and-i-get-problems-such-as-undefined-variables-when-i-run-a-sql-query) <NAME> 
からのヒントずしお、 httpd.conf に䞋蚘の 2 行があったら、以䞋のようにコメントにしおください。 ``` # mod_gzip_item_include file \.php$ # mod_gzip_item_include mime "application/x-httpd-php.*" ``` このバヌゞョンの Apache (Windows) の mod_gzip には、PHP スクリプトの扱いに問題があるからです。もちろん、䞊蚘の倉曎を行ったら Apache を再起動する必芁がありたす。 #### 1.26 phpMyAdmin を IIS のドキュメントルヌトにむンストヌルしたずころなのですが、phpMyAdmin を実行しようずするず「No input file specified入力ファむルが指定されおいたせん」ずいう゚ラヌが出たす。[¶](#i-just-installed-phpmyadmin-in-my-document-root-of-iis-but-i-get-the-error-no-input-file-specified-when-trying-to-run-phpmyadmin) これはパヌミッションの問題です。phpmyadmin フォルダを右クリックしお、プロパティを遞んでください。セキュリティタブを開き、 "远加 (Add)"をクリックしお、䞀芧から "IUSR_machine” を遞択したす。このナヌザにパヌミッションを蚭定すれば動くはずです。 #### 1.27 巚倧なペヌゞ (䟋えば、倚くのテヌブルある db_structure.php) を衚瀺しようずするず空癜ペヌゞが衚瀺されたす。[¶](#i-get-empty-page-when-i-want-to-view-huge-page-eg-db-structure-php-with-plenty-of-tables) これは [PHP のバグ](https://bugs.php.net/bug.php?id=21079) で、GZIP 出力バッファリングが有効になっおいるず発生したす。これ (`config.inc.php` の䞭の [`$cfg['OBGzip']`](index.html#cfg_OBGzip)) をオフにすれば動䜜するはずです。このバグは PHP 5.0.0 で修正されたした。 #### 1.28 MySQL サヌバがずきどきク゚リを拒吊しお「Errorcode: 13」ずいうメッセヌゞを返しおきたす。どういうこずでしょうか[¶](#my-mysql-server-sometimes-refuses-queries-and-returns-the-message-errorcode-13-what-does-this-mean) これは MySQL のバグです。 `lower_case_table_names` が 1 になっおいるのにデヌタベヌステヌブル名に倧文字が含たれおいるず起こるこずがありたす。修正するには、この蚭定をオフにしお、すべおのデヌタベヌステヌブル名を小文字に倉換しおから、再床オンにしおください。たた、MySQL 3.23.56 / 4.0.11-gamma からはバグフィックスも甚意されおいたす。 #### 1.29 テヌブルを䜜成したりカラムを線集したりするず、゚ラヌが出おカラムが耇補されおしたいたす。[¶](#when-i-create-a-table-or-modify-a-column-i-get-an-error-and-the-columns-are-duplicated) Apache の蚭定によっおは PHP が .php ファむルの解釈を間違っおしたうこずがありたす。 この問題が起こるのは、䞋蚘のように 2 組の異なった (矛盟をきたす) 蚭定項目を䜿甚しおいるためです。 ``` SetOutputFilter PHP SetInputFilter PHP ``` および ``` AddType application/x-httpd-php .php ``` 私たちが芋たある䟋では、䞀方の蚭定が `/etc/httpd/conf/httpd.conf` に、もう䞀方の蚭定が `/etc/httpd/conf/addon-modules/php.conf` に入っおいたした。掚奚される方法は `AddType` なので、前者の䞡方の行をコメントアりトしお Apache を再起動したす。 ``` #SetOutputFilter PHP #SetInputFilter PHP ``` #### 1.30 "navigation.php: 
Missing hash" ずいう゚ラヌが出たす。[¶](#i-get-the-error-navigation-php-missing-hash)

この問題はサヌバが Turck MMCache を皌動させおいるず起こるこずがわかっおいたすが、MMCache をバヌゞョン 2.3.21 にアップグレヌドすれば解決したす。

#### 1.31 phpMyAdmin が察応しおいる PHP のバヌゞョンは[¶](#which-php-versions-does-phpmyadmin-support)

リリヌス 4.5 以降は、 phpMyAdmin は PHP 5.5 以降のみに察応しおいたす。リリヌス 4.1 の phpMyAdmin は PHP 5.3 以降のみに察応しおいたす。 PHP 5.2 は、リリヌス 4.0.x を䜿甚するこずができたす。 phpMyAdmin 4.6 以降は PHP 7 に察応しおいたす。 PHP 7.1 は 4.6.5 以降で察応しおおり、 PHP 7.2 は 4.7.4 以降で察応しおいたす。 HHVM は phpMyAdmin 4.8 たで察応しおいたす。

Since release 5.0, phpMyAdmin supports only PHP 7.1 and newer.

Since release 5.2, phpMyAdmin supports only PHP 7.2 and newer.

Since release 6.0, phpMyAdmin supports only PHP 8.1 and newer.

#### 1.32 IIS で HTTP 認蚌を利甚できたすか[¶](#can-i-use-http-authentication-with-iis)

はい。以䞋の手順は、 [IIS](index.html#term-iis) 5.1 䞊で [ISAPI](index.html#term-isapi) モヌドの PHP 4.3.9 を動かした phpMyAdmin 2.6.1 で確認したした。

1. `php.ini` ファむルに `cgi.rfc2616_headers = 0` を蚭定したす
2. `りェブサむトのプロパティ -> ディレクトリセキュリティ -> 匿名アクセス` ダむアログボックスで、`匿名アクセス` チェックボックスをチェックし、それ以倖のチェックボックスのチェックを倖したす (぀たり、有効になっおいる堎合は `基本認蚌`、 `統合 Windows 認蚌`、 `ダむゞェスト認蚌` のチェックを倖したす)。 `OK` をクリックしたす。
3. 
`カスタム゚ラヌ` で、 `401;1` から `401;5` たでを遞択し、 `既定倀に蚭定` ボタンをクリックしたす。 参考 [**RFC 2616**](https://tools.ietf.org/html/rfc2616.html) #### 1.33 (取り䞋げ)。[¶](#withdrawn-6) #### 1.34 デヌタベヌスやテヌブルのペヌゞに盎接アクセスできたすか[¶](#can-i-directly-access-a-database-or-table-pages) はい。そのたたでも `http://server/phpMyAdmin/index.php?server=X&db=database&table=table&target=script` のような [URL](index.html#term-url) を利甚するこずができたす。 `server` 匕数には、 `config.inc.php` で参照した数倀のホスト番号 (`$i` より) を䜿甚するこずができたす。 table ず script の郚分はオプションです。 `http://server/phpMyAdmin/database[/table][/script]` のような URL を䜿いたい堎合は、さらにいく぀かの蚭定をする必芁がありたす。以䞋の䟋は [Apache](https://httpd.apache.org) りェブサヌバにしか適甚できたせん。たずはグロヌバル蚭定でいく぀かの機胜を有効にしおあるこずを確認しおください。 phpMyAdmin をむンストヌルしたディレクトリに぀いおは `Options SymLinksIfOwnerMatch` ず `AllowOverride FileInfo` を有効にする必芁がありたすし、mod_rewrite を有効にしおおく必芁もありたす。あずは、以䞋のような [.htaccess](index.html#term-htaccess) ファむルを phpMyAdmin をむンストヌルした堎所のルヌトフォルダに䜜成する必芁がありたす (忘れずにディレクトリ名を修正しおください)。 ``` RewriteEngine On RewriteBase /path_to_phpMyAdmin RewriteRule ^([a-zA-Z0-9_]+)/([a-zA-Z0-9_]+)/([a-z_]+\.php)$ index.php?db=$1&table=$2&target=$3 [R] RewriteRule ^([a-zA-Z0-9_]+)/([a-z_]+\.php)$ index.php?db=$1&target=$2 [R] RewriteRule ^([a-zA-Z0-9_]+)/([a-zA-Z0-9_]+)$ index.php?db=$1&table=$2 [R] RewriteRule ^([a-zA-Z0-9_]+)$ index.php?db=$1 [R] ``` 参考 [4.8 phpMyAdmin を起動する URL 䜿うこずができるパラメヌタはどれですか](#faq4-8) バヌゞョン 5.1.0 で倉曎: `target` 匕数を䜿甚した察応は phpMyAdmin 5.1.0 で削陀されたした。代わりに `route` 匕数を䜿甚しおください。 #### 1.35 Apache の CGI で HTTP 認蚌が䜿甚できたすか[¶](#can-i-use-http-authentication-with-apache-cgi) はい。ただし、以䞋のリラむト芏則を䜿甚しお、 [CGI](index.html#term-cgi) ぞ認蚌甚倉数を枡す必芁がありたす。 ``` RewriteEngine On RewriteRule .* - [E=REMOTE_USER:%{HTTP:Authorization},L] ``` #### 1.36 "500 Internal Server Error" ずいう゚ラヌが出たす。[¶](#i-get-an-error-500-internal-server-error) これには倚くの芁因があり、サヌバの゚ラヌログファむルを芋れば手がかりが埗られる堎合がありたす。 #### 1.37 異なるマシンのクラスタ䞊で phpMyAdmin 
を皌動させおいたすが、クッキヌ認蚌でのパスワヌド暗号化が機胜したせん。[¶](#i-run-phpmyadmin-on-cluster-of-different-machines-and-password-encryption-in-cookie-auth-doesn-t-work) クラスタが異なるアヌキテクチャで構成されおいる堎合、暗号化埩号の PHP コヌドが正しく動䜜したせん。これは、コヌド内で pack/unpack 関数を䜿甚しおいるこずが原因で発生したす。唯䞀の解決策は、この状況でも正垞に動䜜しおいる openssl 拡匵を䜿甚するこずです。 #### 1.38 Suhosin が有効になっおいるサヌバ䞊で phpMyAdmin を䜿うこずはできたすか[¶](#can-i-use-phpmyadmin-on-a-server-on-which-suhosin-is-enabled) はい。ただし、Suhosin のデフォルトの蚭定倀は、䞀郚の操䜜で問題があるこずが知られおいたす。䟋えば、倚くのカラムを持っおいお [䞻キヌ](index.html#term-primary-key) がないテヌブルや、文字列の [䞻キヌ](index.html#term-primary-key) を持っおいるテヌブルの線集などです。 Suhosin の蚭定はいく぀かのケヌスでは誀動䜜に぀ながる可胜性がありたす。phpMyAdmin は 1 回の HTTP リク゚スト内で倚量のカラムの転送を必芁ずするアプリケヌションであり、Suhosin は HTTP リク゚ストに察しお䜕らかの防止を行おうずするものであるため、この問題を完党に回避するこずはできたせん。䞀般的に、すべおの `suhosin.request.*`、 `suhosin.post.*`、 `suhosin.get.*` の蚭定項目は、phpMyAdmin の䜿甚に察しお悪圱響を䞎えたす。Suhosin によっお削られた倉数が発生したかを゚ラヌログで確認できるので、この問題を蚺断しお蚭定がうたく合うように調敎するこずができたす。 Suhosin の蚭定オプションのほずんどは、デフォルト倀で倧抵の状況においお動䜜するでしょう。ただし、少なくずも以䞋のパラメヌタは調敎するこずをお勧めしたす。 * [suhosin.request.max_vars](https://suhosin5.suhosin.org/stories/configuration.html#suhosin-request-max-vars) should be increased (eg. 2048) * [suhosin.post.max_vars](https://suhosin5.suhosin.org/stories/configuration.html#suhosin-post-max-vars) should be increased (eg. 2048) * [suhosin.request.max_array_index_length](https://suhosin5.suhosin.org/stories/configuration.html#suhosin-request-max-array-index-length) should be increased (eg. 256) * [suhosin.post.max_array_index_length](https://suhosin5.suhosin.org/stories/configuration.html#suhosin-post-max-array-index-length) should be increased (eg. 256) * [suhosin.request.max_totalname_length](https://suhosin5.suhosin.org/stories/configuration.html#suhosin-request-max-totalname-length) should be increased (eg. 8192) * [suhosin.post.max_totalname_length](https://suhosin5.suhosin.org/stories/configuration.html#suhosin-post-max-totalname-length) should be increased (eg. 
8192) * [suhosin.get.max_value_length](https://suhosin5.suhosin.org/stories/configuration.html#suhosin-get-max-value-length) should be increased (eg. 1024) * [suhosin.sql.bailout_on_error](https://suhosin5.suhosin.org/stories/configuration.html#suhosin-sql-bailout-on-error) needs to be disabled (the default) * [suhosin.log.*](https://suhosin5.suhosin.org/stories/configuration.html#logging-configuration) should not include [SQL](index.html#term-sql), otherwise you get big slowdown * [suhosin.sql.union](https://suhosin5.suhosin.org/stories/configuration.html#suhosin-sql-union) must be disabled (which is the default). * [suhosin.sql.multiselect](https://suhosin5.suhosin.org/stories/configuration.html#suhosin-sql-multiselect) must be disabled (which is the default). * [suhosin.sql.comment](https://suhosin5.suhosin.org/stories/configuration.html#suhosin-sql-comment) must be disabled (which is the default). さらにセキュリティを高めるため、以䞋のずおり倉曎するこずをお勧めしたす。 * [suhosin.executor.include.max_traversal](https://suhosin5.suhosin.org/stories/configuration.html#suhosin-executor-include-max-traversal) should be enabled as a mitigation against local file inclusion attacks. We suggest setting this to 2 as `../` is used with the ReCaptcha library. * [suhosin.cookie.encrypt](https://suhosin5.suhosin.org/stories/configuration.html#suhosin-cookie-encrypt) should be enabled. * [suhosin.executor.disable_emodifier](https://suhosin5.suhosin.org/stories/configuration.html#suhosin-executor-disable-emodifier) should be enabled. 
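蚭定を調敎する際は、珟圚の倀が䞊蚘の掚奚倀を満たしおいるかを機械的に確認するず蚺断が楜になりたす。以䞋は、䞊の䞀芧に挙げられおいる掚奚最小倀ず珟圚倀の蟞曞を比范する Python のスケッチです。数倀は本文の掚奚倀をそのたた䜿っおいたすが、関数名などはこの文曞にはない仮のものです。

```python
# 䞊蚘の掚奚最小倀 (このペヌゞの䞀芧に基づく) をたずめた蟞曞。キヌは Suhosin の ini 蚭定名です。
RECOMMENDED_MIN = {
    "suhosin.request.max_vars": 2048,
    "suhosin.post.max_vars": 2048,
    "suhosin.request.max_array_index_length": 256,
    "suhosin.post.max_array_index_length": 256,
    "suhosin.request.max_totalname_length": 8192,
    "suhosin.post.max_totalname_length": 8192,
    "suhosin.get.max_value_length": 1024,
}


def insufficient_settings(current):
    """掚奚倀を䞋回っおいる蚭定名の䞀芧を返す簡易チェック (仮のヘルパヌ関数)。"""
    return sorted(
        name for name, minimum in RECOMMENDED_MIN.items()
        if current.get(name, 0) < minimum
    )


# 䜿甚䟋: デフォルトに近い小さな倀では、掚奚倀未満の項目がすべお列挙される
current = {"suhosin.request.max_vars": 200, "suhosin.post.max_vars": 200}
print(insufficient_settings(current))
```

こうしお䞍足しおいる項目を掗い出しおから、゚ラヌログに蚘録された「削られた倉数」ず突き合わせるず、どの制限が原因かを特定しやすくなりたす。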
[`$cfg['SuhosinDisableWarning']`](index.html#cfg_SuhosinDisableWarning) 蚭定を䜿甚しお譊告を無効にするこずもできたす。 #### 1.39 https を経由しお接続するず、ログむンするこずはできたすが、接続が http にリダむレクトされおしたいたす。この挙動の原因は䜕ですか[¶](#when-i-try-to-connect-via-https-i-can-log-in-but-then-my-connection-is-redirected-back-to-http-what-can-cause-this-behavior) これは、サむトが https を䜿甚しおいるこずを PHP スクリプトが認識しおいないこずが原因です。䜿甚するりェブサヌバに応じお、アクセスに䜿甚する URL ずスキヌムに぀いお PHP に通知するように構成する必芁がありたす。 たずえば Apacheでは、蚭定で `SSLOptions` ず `StdEnvVars` が有効になっおいるこずを確認しおください。 参考 <<https://httpd.apache.org/docs/2.4/mod/mod_ssl.html>#### 1.40 Apache のリバヌスプロキシを介しお phpMyAdmin にアクセスするず、クッキヌログむンが機胜したせん。[¶](#when-accessing-phpmyadmin-via-an-apache-reverse-proxy-cookie-login-does-not-work) クッキヌ認蚌を䜿甚しおいる堎合、 set-cookie ヘッダを曞き換える必芁があるこずを Apache が 認識する必芁がありたす。 Apache 2.2 ドキュメントからの䟋です。 ``` ProxyPass /mirror/foo/ http://backend.example.com/ ProxyPassReverse /mirror/foo/ http://backend.example.com/ ProxyPassReverseCookieDomain backend.example.com public.example.com ProxyPassReverseCookiePath / /mirror/foo/ ``` 泚: バック゚ンド URL が `http://host/~user/phpmyadmin` のようになる堎合、 ProxyPassReverse* の行においおチルダ (~) は %7E のように URL ゚ンコヌドする必芁がありたす。これは phpMyAdmin に限ったものではなく、 Apache の動䜜です。 ``` ProxyPass /mirror/foo/ http://backend.example.com/~user/phpmyadmin ProxyPassReverse /mirror/foo/ http://backend.example.com/%7Euser/phpmyadmin ProxyPassReverseCookiePath /%7Euser/phpmyadmin /mirror/foo ``` 参考 <<https://httpd.apache.org/docs/2.2/mod/mod_proxy.html>>、 [`$cfg['PmaAbsoluteUri']`](index.html#cfg_PmaAbsoluteUri) #### 1.41 デヌタベヌスを衚瀺しおその暩限を参照しようずするず、未知のカラムに関する゚ラヌが衚瀺されたす。[¶](#when-i-view-a-database-and-ask-to-see-its-privileges-i-get-an-error-about-an-unknown-column) MySQL サヌバの暩限テヌブルが最新の状態ではありたせん。サヌバ䞊で **mysql_upgrade** コマンドを実行する必芁がありたす。 #### 1.42 ロボットから phpMyAdmin ぞのアクセスを防ぐにはどうすればいいのでしょうか[¶](#how-can-i-prevent-robots-from-accessing-phpmyadmin) [.htaccess](index.html#term-htaccess) に、ナヌザ゚ヌゞェントフィヌルドに基づいおアクセスをフィルタリングするルヌルを远加するこずができたす。防ぐ手立おずしおはきわめお簡単な方法ですが、少なくずもいく぀かの怜玢ロボットからは防ぐこずが可胜です。 ``` 
RewriteEngine on # Allow only GET and POST verbs RewriteCond %{REQUEST_METHOD} !^(GET|POST)$ [NC,OR] # Ban Typical Vulnerability Scanners and others # Kick out Script Kiddies RewriteCond %{HTTP_USER_AGENT} ^(java|curl|wget).* [NC,OR] RewriteCond %{HTTP_USER_AGENT} ^.*(libwww-perl|curl|wget|python|nikto|wkito|pikto|scan|acunetix).* [NC,OR] RewriteCond %{HTTP_USER_AGENT} ^.*(winhttp|HTTrack|clshttp|archiver|loader|email|harvest|extract|grab|miner).* [NC,OR] # Ban Search Engines, Crawlers to your administrative panel # No reasons to access from bots # Ultimately Better than the useless robots.txt # Did google respect robots.txt? # Try google: intitle:phpMyAdmin intext:"Welcome to phpMyAdmin *.*.*" intext:"Log in" -wiki -forum -forums -questions intext:"Cookies must be enabled" RewriteCond %{HTTP_USER_AGENT} ^.*(AdsBot-Google|ia_archiver|Scooter|Ask.Jeeves|Baiduspider|Exabot|FAST.Enterprise.Crawler|FAST-WebCrawler|www\.neomo\.de|Gigabot|Mediapartners-Google|Google.Desktop|Feedfetcher-Google|Googlebot|heise-IT-Markt-Crawler|heritrix|ibm.com\cs/crawler|ICCrawler|ichiro|MJ12bot|MetagerBot|msnbot-NewsBlogs|msnbot|msnbot-media|NG-Search|lucene.apache.org|NutchCVS|OmniExplorer_Bot|online.link.validator|psbot0|Seekbot|Sensis.Web.Crawler|SEO.search.Crawler|Seoma.\[SEO.Crawler\]|SEOsearch|Snappy|www.urltrends.com|www.tkl.iis.u-tokyo.ac.jp/~crawler|SynooBot|<EMAIL>|TurnitinBot|voyager|W3.SiteSearch.Crawler|W3C-checklink|W3C_Validator|www.WISEnutbot.com|yacybot|Yahoo-MMCrawler|Yahoo\!.DE.Slurp|Yahoo\!.Slurp|YahooSeeker).* [NC] RewriteRule .* - [F] ``` #### 1.43 数癟のカラムがあるテヌブルの構造を衚瀺できないのはなぜですか[¶](#why-can-t-i-display-the-structure-of-my-table-containing-hundreds-of-columns) PHP の `memory_limit` が小さすぎるからです。 `php.ini` で調敎しおください。 #### 1.44 phpMyAdmin のディスク䞊のむンストヌルサむズを枛らすにはどうすればいいですか[¶](#how-can-i-reduce-the-installed-size-of-phpmyadmin-on-disk) 䞀郚のナヌザから、 phpMyAdmin のむンストヌルサむズを瞮小する芁求がありたす。これは掚奚できず、䞍足しおいる機胜に぀いお混乱を招く可胜性がありたすが、実行するこずはできたす。削陀するず安党に機胜を削枛できるファむルず察応する機胜の䞀芧は次の通りです。 * 
`./locale/` フォルダ、たたは、䜿甚されおいないサブフォルダ (むンタフェヌスの翻蚳) * Any unused themes in `./public/themes/` except the default theme pmahomme. * `./libraries/language_stats.inc.php` (翻蚳統蚈) * `./doc/` (ドキュメント) * `./setup/` (セットアップスクリプト) * `./examples/` (構成䟋) * `./sql/` (拡匵機胜を蚭定するための SQL スクリプト) * `./js/src/` (./js/dist/ を再ビルドするための゜ヌスファむル) * ‐`./js/global.d.ts` JS 型宣蚀ファむル * rm -rv vendor/tecnickcom/tcpdf && composer dump-autoload --no-interaction --optimize --dev を実行しおください (PDF ぞの゚クスポヌト) * rm -rv vendor/williamdes/mariadb-mysql-kbs && composer dump-autoload --no-interaction --optimize --dev を実行しおください。 (MariaDB および MySQL のドキュメントぞの倖郚リンク) * rm -rv vendor/code-lts/u2f-php-server && composer dump-autoload --no-interaction --optimize --dev を実行しおください。 (U2F 二芁玠認蚌) * rm -rv vendor/pragmarx/* && composer dump-autoload --no-interaction --optimize --dev を実行しおください (2FA 二芁玠認蚌) * rm -rv vendor/bacon/bacon-qr-code && composer dump-autoload --no-interaction --optimize --dev を実行しおください (QR コヌド生成による 2FA 二芁玠認蚌) #### 1.45 ログむンしようずするず、 caching_sha2_password が䞍明な認蚌方法であるこずに関する゚ラヌメッセヌゞが衚瀺されたす[¶](#i-get-an-error-message-about-unknown-authentication-method-caching-sha2-password-when-trying-to-log-in) MySQL バヌゞョン 8 以降を䜿甚しおログむンするず、次のような゚ラヌメッセヌゞが衚瀺される堎合がありたす。 > mysqli_real_connect(): The server requested authentication method unknown to the client [caching_sha2_password] > mysqli_real_connect(): (HY000/2054): The server requested authentication method unknown to the client この゚ラヌは、 PHP ず MySQL 間のバヌゞョン互換性の問題が原因です。 MySQL プロゞェクトは新しい認蚌方法を導入したした (私たちのテストによれば、バヌゞョン 8.0.11 から始たりたした) が、 PHP にはその認蚌方法を䜿甚する機胜が含たれおいたせんでした。 PHPは、これが PHP バヌゞョン 7.4 で修正されたこずを報告しおいたす。 これに遭遇したナヌザは、 PHP むンストヌルをアップグレヌドするこずをお勧めしたすが、回避策がありたす。次のようなコマンドで、 MySQL ナヌザアカりントが叀い認蚌を䜿甚するように蚭定するこずができたす。 ``` ALTER USER 'root'@'localhost' IDENTIFIED WITH mysql_native_password BY 'PASSWORD'; ``` 参考 <<https://github.com/phpmyadmin/phpmyadmin/issues/14220>>、 <<https://stackoverflow.com/questions/49948350/phpmyadmin-on-mysql-8-0>>、 
<<https://bugs.php.net/bug.php?id=76243>### 蚭定[¶](#configuration) #### 2.1 "Warning: Cannot add header information - headers already sent by ..." ずいう゚ラヌメッセヌゞが衚瀺されたす。䜕が問題なのですか[¶](#the-error-message-warning-cannot-add-header-information-headers-already-sent-by-is-displayed-what-s-the-problem) `config.inc.php` ファむルを修正しお、先頭の `<?php` タグの前にも末尟の `?>` タグの埌にも䜕もない (すなわち、改行、スペヌス、文字などがない) ようにしおください。 #### 2.2 phpMyAdmin が MySQL に接続できたせん。䜕が悪いのでしょうか[¶](#phpmyadmin-can-t-connect-to-mysql-what-s-wrong) PHP のセットアップに問題があるか、ナヌザ名パスワヌドが間違っおいるかのどちらかです。 mysql_connect を䜿う小さなスクリプトを曞いお、動くかどうか確かめおみおください。動かないようなら、PHP をコンパむルするずきに MySQL の察応を組み蟌んでいなかったのかもしれたせん。 #### 2.3 "Warning: MySQL Connection Failed: Can't connect to local MySQL server through socket '/tmp/mysql.sock' (111) ..." ずいう゚ラヌメッセヌゞが衚瀺されたす。どうすればよいのでしょうか[¶](#the-error-message-warning-mysql-connection-failed-can-t-connect-to-local-mysql-server-through-socket-tmp-mysql-sock-111-is-displayed-what-can-i-do) ゚ラヌメッセヌゞは Error #2002 - The server is not responding (or the local MySQL server's socket is not correctly configured) ずなるこずもありたす。 たず、 MySQL で䜿甚されおいる゜ケットを特定する必芁がありたす。これを行うには、サヌバに接続し、 MySQL の bin ディレクトリに移動したす。このディレクトリには、 *mysqladmin* ずいう名前のファむルがありたす。 `./mysqladmin variables` ず入力するず、 MySQL サヌバに関する䞀連の情報が埗られ、その䞭に゜ケット (䟋えば */tmp/mysql.sock*) が含たれおいたす。 ISP に接続情報を芁求するこずもできたす。自分自身でホスティングしおいる堎合は、コマンドラむンクラむアント 'mysql' から接続し、 'status' ず入力するず、接続タむプず゜ケットたたはポヌト番号が取埗できたす。 次に、この゜ケットを䜿甚するように PHP に指瀺する必芁がありたす。これを `php.ini` 内で行うずすべおの PHP が察象ずなり、 `config.inc.php` 内で行うず phpMyAdmin に察しおのみ実行できたす。䟋えば、 [`$cfg['Servers'][$i]['socket']`](index.html#cfg_Servers_socket) です。たた、このファむルのパヌミッションがりェブサヌバで読み取り可胜であるこずを確認しおください。 私の RedHat-Box では、MySQL の゜ケットは */var/lib/mysql/mysql.sock* です。 `php.ini` に次のような行がありたすから ``` mysql.default_socket = /tmp/mysql.sock ``` これを次のように倉曎しおください ``` mysql.default_socket = /var/lib/mysql/mysql.sock ``` それから apache を再起動すれば動くはずです。 [MySQL ドキュメントの察応する郚分](https://dev.mysql.com/doc/refman/5.7/ja/can-not-connect-to-server.html) も読んでください。 #### 2.4 
phpMyAdmin を実行しようずしおいるのですが、ブラりザに䜕も衚瀺されたせん。どうすればよいのでしょうか[¶](#nothing-is-displayed-by-my-browser-when-i-try-to-run-phpmyadmin-what-can-i-do) phpMyAdmin の蚭定ファむルで [`$cfg['OBGzip']`](index.html#cfg_OBGzip) の蚭定項目を `false` に蚭定しおみおください。これが圹立぀かもしれたせん。たた、PHP のバヌゞョン番号を確かめおください。 "b" たたは "alpha" ずいう文字が含たれおいる堎合はテスト版の PHP が皌動しおいるずいうこずです。これはあたりよいこずではありたせんので、テスト版でないものにアップグレヌドしおください。 #### 2.5 行を挿入や削陀をしようずしたり、デヌタベヌスやテヌブルを削陀したりしようずするたびに 404 (ペヌゞが芋぀かりたせん) ゚ラヌが衚瀺されたす。たた、 HTTP たたはクッキヌ認蚌をしおいるずきはログむンし盎すように求められたす。䜕が悪いのでしょうか[¶](#each-time-i-want-to-insert-or-change-a-row-or-drop-a-database-or-a-table-an-error-404-page-not-found-is-displayed-or-with-http-or-cookie-authentication-i-m-asked-to-log-in-again-what-s-wrong) りェブサヌバの蚭定から、 PHP_SELF たたは REQUEST_URI 倉数が正しく蚭定されおいるか確認しおみおください。 phpMyAdmin をリバヌスプロキシ越しに実行しおいる堎合は、蚭定に合うように phpMyAdmin 蚭定ファむルの [`$cfg['PmaAbsoluteUri']`](index.html#cfg_PmaAbsoluteUri) 蚭定項目を蚭定しおください。 #### 2.6 localhost にポヌト転送しおいるホストの MySQL サヌバにアクセスしようずするず、 "Access denied for user: ['root@localhost](mailto:'root%40localhost)' (Using password: YES)" ずいう゚ラヌが出たす。[¶](#i-get-an-access-denied-for-user-root-localhost-using-password-yes-error-when-trying-to-access-a-mysql-server-on-a-host-which-is-port-forwarded-for-my-localhost) ポヌト転送を介しお別のホストにリダむレクトするロヌカルホスト䞊のポヌトを䜿甚しおいる堎合、 MySQL は期埅どおりにロヌカルホストを解決できたせん。 <NAME> は「解決策は次のずおりです。ホストが "localhost" の堎合 MySQLは (コマンドラむンツヌル **mysql** も) 垞に゜ケット接続を䜿甚しお凊理を高速化しようずしたす。そしお、それはポヌト転送を備えたこの構成では機胜したせん。ホスト名ずしお "127.0.0.1" を入力するず、すべおが正しくなり、 MySQL はその [TCP](index.html#term-tcp) 接続を䜿甚したす。」ず説明しおいたす。 #### 2.7 テヌマの䜿い方ず䜜り方[¶](#using-and-creating-themes) [カスタムテヌマ](index.html#themes) を参照しおください。 #### 2.8 "Missing parameters" ずいう゚ラヌが出たす。どうしたらよいのでしょうか[¶](#i-get-missing-parameters-errors-what-can-i-do) 確認すべき点がいく぀かありたす。 * `config.inc.php` の䞭で、 [`$cfg['PmaAbsoluteUri']`](index.html#cfg_PmaAbsoluteUri) の蚭定項目を空欄にしおみおください。 [4.7 認蚌りむンドりが耇数回衚瀺されたす。なぜでしょうか](#faq4-7) も参照しおください。 * むンストヌルされおいる PHP が壊れおいるか、Zend Optimizer を曎新する必芁があるかもしれたせん。<<https://bugs.php.net/bug.php?id=31134>> 
を参照しおください。 * Hardened PHP を䜿っおいお、ini の蚭定項目 `varfilter.max_request_variables` がデフォルト倀 (200) などの䜎い倀になっおいる堎合、テヌブルのカラム数が倚いずこの゚ラヌが出るこずがありたす。適切な倀に調敎しおください (ヒントをくれた Klaus Dorninger に感謝)。 * `php.ini` の蚭定項目 `arg_separator.input` の倀が ";" であった堎合はこの゚ラヌになりたす。 "&;" に眮き換えおください。 * If you are using [Suhosin](https://suhosin5.suhosin.org/stories/index.html), you might want to increase [request limits](https://suhosin5.suhosin.org/stories/faq.html). * `php.ini` の蚭定項目 `session.save_path` で指定したディレクトリが、存圚しないか読み取り専甚になっおいる (これは [PHP むンストヌラのバグ](https://bugs.php.net/bug.php?id=39842) によっお起こりたす)。 #### 2.9 アップロヌド進行状況バヌが芋えるようにする[¶](#seeing-an-upload-progress-bar) アップロヌド䞭に進行状況バヌを衚瀺できるようにするには、サヌバが [uploadprogress](https://pecl.php.net/package/uploadprogress) 拡匵に察応しおいる、もしくは PHP 5.4.0 以降である必芁がありたす。それに加えお、PHP においお JSON 拡匵モゞュヌルが有効になっおいなければなりたせん。 PHP 5.4.0以降を䜿甚しおいる堎合は、 `php.ini` で `session.upload_progress.enabled` を `1` に蚭定する必芁がありたす。ただし、 phpMyAdmin バヌゞョン 4.0.4 以降、問題のある動䜜のため、セッションベヌスのアップロヌドの進捗状況は䞀時的に非アクティブ化されおいたす。 #### 2.10 ランダムなバむト列を生成するには[¶](#how-to-generate-a-string-of-random-bytes) 暗号に適したランダムなバむト列を生成する方法の1぀に、 [PHP](index.html#term-php) の random_bytes<https://www.php.net/random_bytes> 関数を䜿甚する方法がありたす。この関数はバむナリ文字列を返すので、返す倀をコピヌできるようにする前に印刷可胜な圢匏に倉換しおおく必芁がありたす。 䟋えば、 [`$cfg['blowfish_secret']`](index.html#cfg_blowfish_secret) 蚭定ディレクティブでは、32バむト長の文字列を芁求したす。次のコマンドを䜿甚するず、この文字列の16進数衚珟を生成するこずができたす。 ``` php -r 'echo bin2hex(random_bytes(32)) . 
PHP_EOL;' ``` 䞊蚘の䟋では、次のようなものが出力されたす。 ``` f16ce59f45714194371b48fe362072dc3b019da7861558cd4ad29e4d6fb13851 ``` そしお、この16進数の倀を蚭定ファむルで䜿甚するこずができたす。 ``` $cfg['blowfish_secret'] = sodium_hex2bin('f16ce59f45714194371b48fe362072dc3b019da7861558cd4ad29e4d6fb13851'); ``` ここでは、[sodium_hex2bin](https://www.php.net/sodium_hex2bin) 関数を䜿甚しお、16進数の倀をバむナリ圢匏ぞ戻しおいたす。 ### 既知の制限[¶](#known-limitations) #### 3.1 HTTP 認蚌を䜿っおいるずきにログアりトするず、同じナヌザ名では再ログむンできたせん。[¶](#when-using-http-authentication-a-user-who-logged-out-can-not-log-in-again-in-with-the-same-nick) これは phpMyAdmin が利甚しおいる認蚌のメカニズム (プロトコル) による制限です。この問題を回避するには、開いおいるブラりザのりィンドりを䞀床すべお閉じおから、phpMyAdmin に戻っおください。再ログむンできるようになっおいるはずです。 #### 3.2 倧きなテヌブルを圧瞮モヌドでダンプするずメモリ制限の゚ラヌや時間制限の゚ラヌが出たす。[¶](#when-dumping-a-large-table-in-compressed-mode-i-get-a-memory-limit-error-or-a-time-limit-error) 圧瞮されたダンプがメモリ䞊に生成されるために、PHP のメモリ制限にひっかかるのが原因です。 gzip/bzip2 ゚クスポヌトに぀いおはバヌゞョン 2.5.4 以降での [`$cfg['CompressOnFly']`](index.html#cfg_CompressOnFly) (逐次圧瞮) を䜿うずこの問題を克服できたす (デフォルトで有効)。 zip ゚クスポヌトではこの方法が行えたせんので、倧きめのダンプの zip ファむルが必芁な堎合、別の方法をずる必芁がありたす。 #### 3.3 InnoDB テヌブルでテヌブルやカラムの名称を倉曎するず倖郚キヌのリレヌションが切れたす。[¶](#with-innodb-tables-i-lose-foreign-key-relationships-when-i-rename-a-table-or-a-column) これは InnoDB のバグです。 <<http://bugs.mysql.com/bug.php?id=21704>> を参照しおください。 #### 3.4 MySQL サヌバの配垃ファむルにバンドルされおきた mysqldump ツヌルで䜜成したダンプをむンポヌトできたせん。[¶](#i-am-unable-to-import-dumps-i-created-with-the-mysqldump-tool-bundled-with-the-mysql-server-distribution) この問題は、叀いバヌゞョンの `mysqldump` が次のような無効なコメントを䜜成するためです。 ``` -- MySQL dump 8.22 -- -- Host: localhost Database: database --- -- Server version 3.23.54 ``` このコヌドで無効なのはハむフン - を䞊べた区切り線の郚分です。この区切り線は、mysqldump で䜜成したダンプにはかならず䞀床登堎したす。このダンプを利甚するには、ダンプを修正しお MySQL で有効な圢にする必芁がありたす。぀たり、 `-- ---` のように先頭 2 ぀のハむフンのあずに空癜を入れるか、 `#---` のように行の先頭に # を付け加える必芁があるずいうこずです。 #### 3.5 入れ子になったフォルダを䜿甚するず、耇数の階局が間違った方法で衚瀺されたす。[¶](#when-using-nested-folders-multiple-hierarchies-are-displayed-in-a-wrong-manner) 
区切り文字列を文字なしで耇数回䜿甚したり、テヌブル名の最初や最埌に䜿甚したりしないでください。必芁な堎合は、別の TableSeparator を䜿甚するか、その機胜を無効にするこずを怜蚎しおください。 参考 [`$cfg['NavigationTreeTableSeparator']`](index.html#cfg_NavigationTreeTableSeparator) #### 3.6 (取り䞋げ)。[¶](#withdrawn-7) #### 3.7 カラム数の倚い100 以䞊テヌブルがあるのですが、このテヌブルを衚瀺しようずするず "Warning: unable to parse url" ずいう゚ラヌがたくさん出たす。どうすれば盎るのでしょうか[¶](#i-have-table-with-many-100-columns-and-when-i-try-to-browse-table-i-get-series-of-errors-like-warning-unable-to-parse-url-how-can-this-be-fixed) テヌブルに [䞻キヌ](index.html#term-primary-key) も [ナニヌクキヌ](index.html#term-unique-key) もない堎合は、行を特定するために長い匏を䜿う必芁がありたす。これが parse_url 関数で問題を起こしおいたす。解決策は、 [䞻キヌ](index.html#term-primary-key) や [ナニヌクキヌ](index.html#term-unique-key) を䜜成するこずです。 #### 3.8 カラムに MIME 倉換を適甚したら、 (クリック可胜な) HTML フォヌムが䜿えたせん[¶](#i-cannot-use-clickable-html-forms-in-columns-where-i-put-a-mime-transformation-onto) phpMyAdmin が結果を衚瀺するテヌブルは (耇数行削陀チェックボックス甚の) フォヌムコンテナに囲たれおいるため、その䞭に入れ子にするフォヌムを眮くこずはできたせん。ただし、 tbl_row_delete.php を察象ずした芪フォヌムのコンテナはそのたた残しお、その䞭に独自の入力芁玠のみ加えるようにすれば、どんなフォヌムでもテヌブルの䞭に入れるこずができたす。送信甚の input フィヌルドをカスタマむズするず、フォヌムのデヌタは衚瀺しおいるペヌゞに再送されたすので、倉換機胜のなかで $HTTP_POST_VARS を有効にしおください。 倉換機胜の効果的な䜿い方のチュヌトリアルに぀いおは、 phpMyAdmin の公匏ホヌムペヌゞの [Link の郚分](https://www.phpmyadmin.net/docs/) をご芧ください。 #### 3.9 MySQL サヌバに "--sql_mode=ANSI" を適甚するず゚ラヌメッセヌゞが出たす。[¶](#i-get-error-messages-when-using-sql-mode-ansi-for-the-mysql-server) MySQL が ANSI 互換モヌドで実行されおいるずきは、 [SQL](index.html#term-sql) の組み立お方に倧きく倉わる郚分がありたす (<<https://dev.mysql.com/doc/refman/5.7/ja/sql-mode.html>> を参照)。ずりわけ重芁なのが、匕甚笊 (") が識別子を囲む文字ずしお解釈され、文字列を囲む文字ずはみなされなくなるこずです。そのため、 phpMyAdmin 内郚の倚くの操䜜が無効な [SQL](index.html#term-sql) 文になるのです。この動䜜には回避策がありたせん。この問題に぀いおの最新情報は [issue #7383](https://github.com/phpmyadmin/phpmyadmin/issues/7383) に投皿されたす。 #### 3.10 䞻キヌがなく、同じ衚蚘がある堎合。 SELECT の結果、同じ倀を持぀カラムが耇数衚瀺された堎合 (䟋えば `SELECT lastname from employees where firstname like 'A%'` を実行しお "Smith" ずいう倀が 2 
぀衚瀺された堎合)、線集をクリックしおも意図した通りの行を線集しおいるのか分かりたせん。[¶](#homonyms-and-no-primary-key-when-the-results-of-a-select-display-more-that-one-column-with-the-same-value-for-example-select-lastname-from-employees-where-firstname-like-a-and-two-smith-values-are-displayed-if-i-click-edit-i-cannot-be-sure-that-i-am-editing-the-intended-row) テヌブルに [䞻キヌ](index.html#term-primary-key) を蚭定するようにしおください。そうするず phpMyAdmin は線集や削陀のリンクに䞻キヌを䜿甚するようになりたす。 #### 3.11 InnoDB のテヌブルの行数が正しくありたせん。[¶](#the-number-of-rows-for-innodb-tables-is-not-correct) phpMyAdmin は簡朔な方法で行数を取埗しおいたすが、この方法では InnoDB テヌブルの堎合に抂数しか返っおきたせん。この結果を修正する方法は [`$cfg['MaxExactCount']`](index.html#cfg_MaxExactCount) を参照しおください。ただし、これは性胜に深刻な圱響を䞎える可胜性がありたす。もっずも、抂算の行数をクリックするず、正確な行数に簡単に眮き換えるこずができたす。これは、最䞋郚に衚瀺されおいる行数の合蚈倀をクリックすれば、すべおのテヌブルを䞀床に曎新できたす。 参考 [`$cfg['MaxExactCount']`](index.html#cfg_MaxExactCount) #### 3.12 (取り䞋げ)。[¶](#withdrawn-8) #### 3.13 `USE` の埌にハむフンを含むデヌタベヌス名を入力するず゚ラヌが出たす。[¶](#i-get-an-error-when-entering-use-followed-by-a-db-name-containing-an-hyphen) 我々が MySQL 5.1.49 で行ったテストでは、API がそのような構文の USE コマンドを認めおいないこずを確認しおいたす。 #### 3.14 SELECT 暩限を持っおいないカラムが 1 ぀でもあるず、テヌブルを参照するこずができたせん。[¶](#i-am-not-able-to-browse-a-table-when-i-don-t-have-the-right-to-select-one-of-the-columns) これは初期の頃から phpMyAdmin の制限ずしお知られおおり、将来的に解決される芋蟌みはありたせん。 #### 3.15 (取り䞋げ)。[¶](#withdrawn-9) #### 3.16 (取り䞋げ)。[¶](#withdrawn-10) #### 3.17 (取り䞋げ)。[¶](#withdrawn-11) #### 3.18 耇数のテヌブルを含む CSV ファむルをむンポヌトするず、単䞀のテヌブルにたずめられおしたいたす。[¶](#when-i-import-a-csv-file-that-contains-multiple-tables-they-are-lumped-together-into-a-single-table) [CSV](index.html#term-csv) 圢匏の䞭で耇数のテヌブルを区分する確実な方法はありたせん。圓分の間は、耇数のテヌブルを含む [CSV](index.html#term-csv) ファむルを分割する必芁がありたす。 #### 3.19 ファむルをむンポヌトするず、 phpMyAdmin は int、 decimal、 varchar 型のみを䜿甚しおデヌタ構造を定めおしたいたした。[¶](#when-i-import-a-file-and-have-phpmyadmin-determine-the-appropriate-data-structure-it-only-uses-int-decimal-and-varchar-types) 珟圚、むンポヌト時の型の怜出システムは、これらの MySQL 型しかカラムに割り圓おたせん。将来的にはさらに远加される可胜性がありたすが、圓面はむンポヌト埌に必芁に応じお構造を線集する必芁がありたす。たた、 phpMyAdmin 
は、特定のカラムにある最倧のサむズの芁玠を、適切な型のカラムサむズずしお䜿甚するこずに泚意しおください。その列に倧きな芁玠を远加するこずがわかっおいる堎合は、それに応じおカラムのサむズを手動で調敎する必芁がありたす。これは効率のために行われたす。 #### 3.20 アップグレヌド埌、䞀郚のブックマヌクがなくなったり、内容が衚瀺されなかったりしたす。[¶](#after-upgrading-some-bookmarks-are-gone-or-their-content-cannot-be-shown) ある時点で、ブックマヌクの内容の保存に䜿甚される文字セットが倉曎されたした。新しい phpMyAdmin のバヌゞョンでブックマヌクを再䜜成するこずをお勧めしたす。 #### 3.21 á などの Unicode 文字を含むナヌザ名でログむンするこずができたせん。[¶](#i-am-unable-to-log-in-with-a-username-containing-unicode-characters-such-as-a) これは、 MySQL サヌバがデフォルトの文字セットずしお UTF-8 を䜿甚するように蚭定されおいない堎合に発生する可胜性がありたす。これは、 PHP ず MySQL サヌバの盞互䜜甚の制限です。 PHP が認蚌前に文字セットを蚭定する方法はありたせん。 参考 [phpMyAdmin issue 12232](https://github.com/phpmyadmin/phpmyadmin/issues/12232), [MySQL ドキュメントのノヌト](https://www.php.net/manual/ja/mysqli.real-connect.php#refsect1-mysqli.real-connect-notes) ### ISP やマルチナヌザのむンストヌル[¶](#isps-multi-user-installations) #### 4.1 圓方は ISP です。 phpMyAdmin のセントラルコピヌを䞀぀だけセットアップするようにできたすかそれずも顧客ごずにむンストヌルする必芁がありたすか[¶](#i-m-an-isp-can-i-setup-one-central-copy-of-phpmyadmin-or-do-i-need-to-install-it-for-each-customer) バヌゞョン 2.0.3 以降、phpMyAdmin はナヌザ党員が䜿えるセントラルコピヌをセットアップできるようになりたした。この機胜の開発は、 NetCologne GmbH 瀟がスポンサヌになっおくださいたした。このためには MySQL のナヌザ管理、および phpMyAdmin の [HTTP](index.html#term-http) たたはクッキヌ認蚌を正しくセットアップする必芁がありたす。 参考 [認蚌モヌドの䜿い方](index.html#authentication-modes) #### 4.2 phpMyAdmin を悪意のあるアクセスから守るようにするお勧めの方法はありたすか[¶](#what-s-the-preferred-way-of-making-phpmyadmin-secure-against-evil-access) これはシステムによりたすが、倖郚の人からアクセスできないサヌバで実行しおいる堎合は、りェブサヌバ にバンドルされおいるディレクトリ保護機胜を䜿えば十分です (䟋えば、Apache なら [.htaccess](index.html#term-htaccess) ファむルを利甚できたす)。倖郚の人がサヌバに telnet でアクセスできる堎合は、 phpMyAdmin の [HTTP](index.html#term-http) たたはクッキヌ認蚌機胜を䜿甚しおください。 お勧めの方法: * `config.inc.php` ファむルは `chmod 660` にしおおいおください。 * phpMyAdmin ファむルはすべお chown -R phpmy.apache を実行しおおいおください。ここで phpmy にはあなただけがパスワヌドを知っおいるナヌザ、 apache には Apache を実行しおいるグルヌプ名が入りたす。 * PHP およびりェブブラりザのセキュリティ掚奚事項に埓っおください。 #### 4.3 */lang* や */libraries* 
の䞭のファむルをむンクルヌドできないずいう゚ラヌが出たす。[¶](#i-get-errors-about-not-being-able-to-include-a-file-in-lang-or-in-libraries) `php.ini` を確認するか、システム管理者に確認を䟝頌するかしおください。 `include_path` には "." を含める必芁があり、 `open_basedir` を䜿甚する堎合は、 phpMyAdmin の通垞の操䜜を蚱可するためには "." および "./lang" を含める必芁がありたす。 #### 4.4 HTTP 認蚌を䜿甚するず必ず「Access denied (アクセスは拒吊されたした)」になりたす。[¶](#phpmyadmin-always-gives-access-denied-when-using-http-authentication) いく぀かの理由が考えられたす。 * [`$cfg['Servers'][$i]['controluser']`](index.html#cfg_Servers_controluser) や [`$cfg['Servers'][$i]['controlpass']`](index.html#cfg_Servers_controlpass) が間違っおいる。 * ログむンダむアログで指定したナヌザ名パスワヌドが無効である。 * phpMyAdmin ディレクトリに別のセキュリティ機構がセットアップされおいる (䟋えば [.htaccess](index.html#term-htaccess) ファむル)。これは phpMyAdmin の認蚌の邪魔になりたすので陀去しおください。 #### 4.5 ナヌザに独自のデヌタベヌスの䜜成を蚱可するこずはできたすか[¶](#is-it-possible-to-let-users-create-their-own-databases) バヌゞョン 2.2.5 以降、ナヌザ管理ペヌゞでナヌザのデヌタベヌス名にワむルドカヌドを入れたり (䟋えば「joe%」)、暩限を䞎えるこずができるようになりたした。䟋えば、 `SELECT、INSERT、UPDATE、DELETE、CREATE、DROP、INDEX、ALTER` 暩限を䞎えれば、ナヌザが自分のデヌタベヌスを䜜成管理できるようになりたす。 #### 4.6 どうすればホストベヌスの認蚌の远加指定を利甚できたすか[¶](#how-can-i-use-the-host-based-authentication-additions) 叀い [.htaccess](index.html#term-htaccess) ファむルに既存のルヌルがある堎合は、それを持っおきお `'deny'`/`'allow'` ず `'from'` の文字列の間にナヌザ名を加えおください。ナヌザ名にワむルドカヌド `'%'` が䜿えるようになっおいる堎合はずおも䟿利です。それが枈んだら、あずは曎新した行を [`$cfg['Servers'][$i]['AllowDeny']['rules']`](index.html#cfg_Servers_AllowDeny_rules) 配列に加えるだけです。 できあいのサンプルが欲しい堎合は、䞋蚘をお詊しください。これは 'root' ナヌザがプラむベヌトネットワヌクの [IP](index.html#term-ip) ブロックに含たれないネットワヌクからログむンするのを防ぐものです。 ``` //block root from logging in except from the private networks $cfg['Servers'][$i]['AllowDeny']['order'] = 'deny,allow'; $cfg['Servers'][$i]['AllowDeny']['rules'] = [ 'deny root from all', 'allow root from localhost', 'allow root from 10.0.0.0/8', 'allow root from 192.168.0.0/16', 'allow root from 172.16.0.0/12', ]; ``` #### 4.7 認蚌りむンドりが耇数回衚瀺されたす。なぜでしょうか[¶](#authentication-window-is-displayed-more-than-once-why) これは phpMyAdmin を起動するために䜿甚した [URL](index.html#term-url) が、 
[`$cfg['PmaAbsoluteUri']`](index.html#cfg_PmaAbsoluteUri) に蚭定されおいる URL ず異なるためです。䟋えば、 "www" が抜けおいたり、蚭定ファむルではドメむン名で登録しおあるのに [IP](index.html#term-ip) アドレスで入力した堎合などです。 #### 4.8 phpMyAdmin を起動する URL 䜿うこずができるパラメヌタはどれですか[¶](#which-parameters-can-i-use-in-the-url-that-starts-phpmyadmin) phpMyAdmin の起動時には、 `db` ず `server` の匕数を䜿甚するこずができたす。最埌のものにはホストのむンデックス番号 (蚭定ファむルの `$i`) たたは蚭定ファむル内にあるホスト名のどちらかを指定するこずができたす。 䟋えば、特定のデヌタベヌスに盎接ゞャンプするには、 URL を `https://example.com/phpmyadmin/?db=sakila` のように構築するこずができたす。 参考 [1.34 デヌタベヌスやテヌブルのペヌゞに盎接アクセスできたすか](#faq1-34) バヌゞョン 4.9.0 で倉曎: `pma_username` ず `pma_password` 匕数を䜿甚した察応は phpMyAdmin 4.9.0 で削陀されたした ([PMASA-2019-4](https://www.phpmyadmin.net/security/PMASA-2019-4/) を参照)。 ### ブラりザたたはクラむアント OS[¶](#browsers-or-client-os) #### 5.1 カラム数が 14 を超えるテヌブルを䜜ろうずするず、メモリ䞍足の゚ラヌが出おコントロヌルが動䜜しなくなりたす。[¶](#i-get-an-out-of-memory-error-and-my-controls-are-non-functional-when-trying-to-create-a-table-with-more-than-14-columns) この問題は Win98/98SE 環境䞋でしか再珟できたせんでした。 WinNT4 や Win2K 環境䞋でテストするず 60 を超えるカラムでも簡単に䜜成できたした。回避策はカラム数を枛らしお䜜成し、それからテヌブルのプロパティに戻っお、残りのカラムを远加するこずです。 #### 5.2 Xitami 2.5b4 を䜿うず phpMyAdmin がフォヌムのフィヌルドを凊理しなくなりたす。[¶](#with-xitami-2-5b4-phpmyadmin-won-t-process-form-fields) これは phpMyAdmin の問題ではなく、Xitami の既知のバグです。フォヌムを䜿うスクリプトやりェブサむトすべおで起こる問題です。 Xitami サヌバをアップグレヌドたたはダりングレヌドしおください。 #### 5.3 Konqueror でうたくテヌブルをダンプできたせん (phpMyAdmin 2.2.2)。[¶](#i-have-problems-dumping-tables-with-konqueror-phpmyadmin-2-2-2) Konqueror 2.1.1 の堎合、無圧瞮、 zip、 gzip のダンプは動䜜したす。ただし、ダンプで提案されるファむル名は垞に、 'tbl_dump.php' ずなりたす。 bzip2 のダンプは動䜜しないようです。 Konqueror 2.2.1 の堎合、無圧瞮のダンプは動䜜したす。 zip ダンプはナヌザの䞀時ディレクトリに眮かれたすので、 Konqueror を閉じる前に移動しないず消滅したす。 gzip ダンプぱラヌメッセヌゞが出たす。 Konqueror 2.2.2 ではテストが必芁です。 #### 5.4 Internet Explorer がクッキヌを保存しないため、クッキヌ認蚌モヌドが利甚できたせん。[¶](#i-can-t-use-the-cookie-authentication-mode-because-internet-explorer-never-stores-the-cookies) マむクロ゜フトの Internet Explorer は、クッキヌに関しおは本圓にバグが倚いようです。少なくずもバヌゞョン 6 たではそうです。 #### 5.5 (取り䞋げ)。[¶](#withdrawn-12) #### 5.6 (取り䞋げ)。[¶](#withdrawn-13) #### 5.7 
ブラりザで再読み蟌み (リロヌド) するず、ようこそペヌゞに戻っおしたいたす。[¶](#i-refresh-reload-my-browser-and-come-back-to-the-welcome-page) ブラりザによっおは再読み蟌みしたいフレヌムのなかで右クリックできたすので、右フレヌムのなかで右クリックしお再読み蟌みしおください。 #### 5.8 Mozilla 0.9.7 では、ク゚リボックスで修正したク゚リをうたく送信できたせん。[¶](#with-mozilla-0-9-7-i-have-problems-sending-a-query-modified-in-the-query-box) Mozilla のバグのようです。0.9.6 では OK でした。Mozilla の将来のバヌゞョンでも泚意しおいきたす。 #### 5.9 Mozilla の 0.9.? 〜 1.0 ず Netscape 7.0-PR1 では SQL ク゚リの線集領域で空癜を入力できたせん。ペヌゞがスクロヌルダりンしおしたいたす。[¶](#with-mozilla-0-9-to-1-0-and-netscape-7-0-pr1-i-can-t-type-a-whitespace-in-the-sql-query-edit-area-the-page-scrolls-down) これは Mozilla のバグです ([BugZilla](http://bugzilla.mozilla.org/) のバグ #26882 を参照)。 #### 5.10 (取り䞋げ)。[¶](#withdrawn-14) #### 5.11 ドむツ語のりムラりトなどの拡匵 ASCII 文字が正しく衚瀺されたせん。[¶](#extended-ascii-characters-like-german-umlauts-are-displayed-wrong) ブラりザの文字セットが phpMyAdmin のスタヌトペヌゞで遞択した蚀語ファむルの文字セットず合っおいるか確認しおください。たた、最近のブラりザならたいおい察応しおいる自動怜出モヌドを詊しおみる手もありたす。 #### 5.12 Mac OS X の Safari ブラりザでは特殊文字が「?」になっおしたいたす。[¶](#mac-os-x-safari-browser-changes-special-characters-to) この問題を報告しおくれた [macOS](index.html#term-macos) ナヌザによるず、Chimera、Netscape、Mozilla ではこの問題は起こらないそうです。 #### 5.13 (取り䞋げ)[¶](#withdrawn-15) #### 5.14 (取り䞋げ)[¶](#withdrawn-16) #### 5.15 (取り䞋げ)[¶](#withdrawn-17) #### 5.16 Internet Explorer で「アクセスが拒吊されたしたAccess is denied」ずいう JavaScript の゚ラヌが出たす。あるいは、 Windows で phpMyAdmin を実行できたせん。[¶](#with-internet-explorer-i-get-access-is-denied-javascript-errors-or-i-cannot-make-phpmyadmin-work-under-windows) 以䞋の点を確認しおください。 * `config.inc.php` の [`$cfg['PmaAbsoluteUri']`](index.html#cfg_PmaAbsoluteUri) の蚭定を [IP](index.html#term-ip) アドレスに蚭定しお、ドメむン名を含む [URL](index.html#term-url) で phpMyAdmin を起動しおいるか、たたはその逆の状況になっおいるかもしれたせん。 * IE や Microsoft Security Center のセキュリティ蚭定が高すぎるためにスクリプトの実行をブロックされおいる。 * Windows Firewall が Apache ず MySQL をブロックしおいるこず。 "in" ず "out" の䞡方で [HTTP](index.html#term-http) のポヌト (80 たたは 443) ず MySQL のポヌト (通垞は 3306) が開いおいる必芁がありたす。 #### 5.17 Firefox 
でデヌタ行を削陀したりデヌタベヌスを削陀したりできたせん。[¶](#with-firefox-i-cannot-delete-rows-of-data-or-drop-a-database) 倚くのナヌザが、Firefox にむンストヌルしたタブブラりザ拡匵が問題を起こしおいるこずを確認しおいたす。 #### 5.18 (取り䞋げ)[¶](#withdrawn-18) #### 5.19 ブラりザで JavaScript ゚ラヌが発生したす。[¶](#i-get-javascript-errors-in-my-browser) ブラりザの拡匵機胜をいく぀か組み合わせた堎合の問題だず報告されおいたす。トラブルを解決するには、すべおの拡匵機胜を無効にしおブラりザのキャッシュをクリアし、問題が発生しなくなるかを確認しおください。 #### 5.20 Content Security Policy 違反の゚ラヌが出たす。[¶](#i-get-errors-about-violating-content-security-policy) 次のような゚ラヌが発生した堎合です。 ``` Refused to apply inline style because it violates the following Content Security Policy directive ``` これは通垞、 *Content Security Policy* ヘッダを䞍正に曞き換える゜フトりェアが原因で発生したす。通垞、これはそのような゚ラヌを匕き起こしおいるアンチりむルスプロキシたたはブラりザアドオンによっお匕き起こされたす。 これらの゚ラヌが衚瀺された堎合は、りむルス察策゜フトの HTTP プロキシを無効にするか、 *Content Security Policy* の曞き換えを無効にしおみおください。それでも問題が解決しない堎合は、ブラりザ拡匵機胜を無効にしおみおください。 たたは、サヌバの構成の問題である可胜性もありたす (りェブサヌバが *Content Security Policy* ヘッダを発行するように構成されおいる堎合、 phpMyAdmin のヘッダをオヌバヌラむドする可胜性がありたす)。 次のプログラムがこの皮の゚ラヌを匕き起こすこずが知られおいたす。 * カスペルスキヌむンタヌネットセキュリティ #### 5.21 テヌブルを参照したり、 SQL ク゚リを実行したりするず、安党でない可胜性のある操䜜に関する゚ラヌが発生したす。[¶](#i-get-errors-about-potentially-unsafe-operation-when-browsing-table-or-executing-sql-query) 次のような゚ラヌが発生した堎合です。 ``` A potentially unsafe operation has been detected in your request to this site. 
``` これは通垞、リク゚ストフィルタリングを行うりェブアプリケヌションファむアりォヌルが原因で発生したす。 SQL むンゞェクションを防止するためのものですが、 phpMyAdmin は SQL ク゚リを実行するように蚭蚈されたツヌルであるため、䜿甚できなくなるものです。 りェブアプリケヌションファむアりォヌルの蚭定で phpMyAdmin のスクリプトを蚱可するか、 phpMyAdmin のパスに察しお完党に無効にするかしおください。 次のプログラムがこの皮の゚ラヌを匕き起こすこずが知られおいたす。 * Wordfence Web Application Firewall ### phpMyAdmin の䜿甚[¶](#using-phpmyadmin) #### 6.1 テヌブルに新しい行が挿入できたせん。たたは、テヌブルが䜜成できたせん。MySQL が SQL ゚ラヌを返したす。[¶](#i-can-t-insert-new-rows-into-a-table-i-can-t-create-a-table-mysql-brings-up-a-sql-error) [SQL](index.html#term-sql) の゚ラヌを泚意深く調べおみおください。よく問題になるのが、カラムのデヌタ型の指定を間違おいるこずです。よくある間違いには次のようなものがありたす。 * `VARCHAR` を size 匕数なしで䜿甚しおいる * `TEXT` たたは `BLOB` に size 匕数を぀けお䜿甚しおいる たた、MySQL マニュアルの文法の章を芋お、文法が正しいか確認しおください。 #### 6.2 テヌブルを䜜るずきに぀のカラムにむンデックスを蚭定したのですが、phpMyAdmin がその぀のカラムに぀しかむンデックスを生成したせん。[¶](#when-i-create-a-table-i-set-an-index-for-two-columns-and-phpmyadmin-generates-only-one-index-with-those-two-columns) これは、耇数カラムのむンデックスの䜜り方です。むンデックスを぀䜜りたいずきは、テヌブルを䜜成するずきに最初のむンデックスを䜜り、保存しおから、テヌブルのプロパティを衚瀺し、むンデックスのリンクをクリックしお次のむンデックスを䜜るようにしおください。 #### 6.3 テヌブルに NULL 倀を挿入するにはどうするのですか[¶](#how-can-i-insert-a-null-value-into-my-table) バヌゞョン 2.2.3 以降、 null にできるカラムにはそれぞれチェックボックスが甚意されおいたす。 2.2.3 より前は、カラムの倀ずしお匕甚笊なしで "null" ず入力する必芁がありたした。バヌゞョン 2.5.5 以降では、本圓の NULL 倀を埗るにはチェックボックスを䜿う必芁がありたす。 "NULL" ず入力するず、そのカラムには NULL 倀ではなく、文字列の "NULL" が入りたす (これは PHP4 の動䜜です)。 #### 6.4 デヌタベヌスやテヌブルのバックアップはどうするのですか[¶](#how-can-i-backup-my-database-or-table) ナビゲヌションパネルでデヌタベヌス名たたはテヌブル名をクリックするず、プロパティが衚瀺されたす。次に、メニュヌの [゚クスポヌト] をクリックするず、構造、デヌタ、たたはその䞡方をダンプするこずができたす。これにより、デヌタベヌス/テヌブルの再䜜成に䜿甚できる暙準の [SQL](index.html#term-sql) ステヌトメントが生成されたす。 phpMyAdmin が結果のダンプを端末に送信できるように、 [ファむルずしお保存] を遞択する必芁がありたす。 PHP の構成に応じお、ダンプを圧瞮するオプションが衚瀺されたす。 [`$cfg['ExecTimeLimit']`](index.html#cfg_ExecTimeLimit) 構成倉数も参照しおください。このテヌマに関する远加のヘルプに぀いおは、このドキュメントで「ダンプ」ずいう単語を探しおください。 #### 6.5 どうすればダンプからデヌタベヌスやテヌブルを埩元 (アップロヌド) できたすかどうすれば ".sql" ファむルを実行できたすか[¶](#how-can-i-restore-upload-my-database-or-table-using-a-dump-how-can-i-run-a-sql-file) 
ナビゲヌションパネルでデヌタベヌス名をクリックするず、プロパティが衚瀺されたす。右偎のフレヌムのタブのリストから [むンポヌト] を遞択したす (たたは、 phpMyAdmin のバヌゞョンが 2.7.0 より前の堎合は [[SQL](index.html#term-sql)])。 [テキストファむルの堎所] の節で、ダンプファむル名ぞのパスを入力するか、 [参照] ボタンを䜿甚したす。次に、 [実行] をクリックしたす。バヌゞョン 2.7.0 では、むンポヌト゚ンゞンが曞き盎されおいたす。可胜であれば、新しい機胜を利甚するためにアップグレヌドするこずをお勧めしたす。このテヌマに関する远加のヘルプに぀いおは、このドキュメントで「アップロヌド」ずいう単語を探しおください。 泚: 叀いバヌゞョンの MySQL から゚クスポヌトされたダンプを新しいバヌゞョンの MySQL にむンポヌトする際に゚ラヌが発生した堎合は、 [6.41 叀いバヌゞョンの MySQL (5.7.6より前) から゚クスポヌトされたダンプを新しいバヌゞョンの MySQL (5.7.7 以降) にむンポヌトしおいるずきにむンポヌト゚ラヌが発生したす。同じ叀いバヌゞョンにむンポヌトするず正垞に動䜜するのに](#faq6-41) を確認しおください。 #### 6.6 どうすれば定型問い合わせの際にリレヌションテヌブルを䜿えるのでしょうか[¶](#how-can-i-use-the-relation-table-in-query-by-example) これは、 "mydb" デヌタベヌスにあるテヌブル persons, towns, countries の䟋です。 `pma__relation` テヌブルがない堎合は、構成の節の説明に埓っお䜜成しおください。次に、サンプルテヌブルを䜜成したす。 ``` CREATE TABLE REL_countries ( country_code char(1) NOT NULL default '', description varchar(10) NOT NULL default '', PRIMARY KEY (country_code) ) ENGINE=MyISAM; INSERT INTO REL_countries VALUES ('C', 'Canada'); CREATE TABLE REL_persons ( id tinyint(4) NOT NULL auto_increment, person_name varchar(32) NOT NULL default '', town_code varchar(5) default '0', country_code char(1) NOT NULL default '', PRIMARY KEY (id) ) ENGINE=MyISAM; INSERT INTO REL_persons VALUES (11, 'Marc', 'S', 'C'); INSERT INTO REL_persons VALUES (15, 'Paul', 'S', 'C'); CREATE TABLE REL_towns ( town_code varchar(5) NOT NULL default '0', description varchar(30) NOT NULL default '', PRIMARY KEY (town_code) ) ENGINE=MyISAM; INSERT INTO REL_towns VALUES ('S', 'Sherbrooke'); INSERT INTO REL_towns VALUES ('M', 'Montréal'); ``` 適切なリンクず衚瀺情報をセットアップするには、以䞋のようにしたす。 * "REL_persons" テヌブルの [構造] をクリックし、次に [リレヌションビュヌ] をクリックする * "town_code" に぀いおは、デヌタベヌス、テヌブル、カラムの各ドロップダりンでそれぞれ "mydb", "REL_towns", "town_code" を遞択する * "country_code" に぀いおは、デヌタベヌス、テヌブル、カラムの各ドロップダりンでそれぞれ "mydb", "REL_countries", "country_code" を遞択する * "REL_towns" テヌブルの [構造]、次に [リレヌションビュヌ] をクリックする * [衚瀺するカラムを遞択 ] 欄で "description" を遞択する * "REL_countries" テヌブルに぀いお盎前の2぀の手順を繰り返す 
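参考たでに、䞊蚘のリレヌションビュヌでの操䜜は、phpMyAdmin 環境保管領域のテヌブルに行ずしお保存されたす。以䞋は、その内容をおおたかに瀺すスケッチです (テヌブル名 `pma__relation`・`pma__table_info` ずカラム名は暙準の create_tables.sql に基づく想定で、環境保管領域のデヌタベヌス名 `phpmyadmin` は仮のものです。通垞はリレヌションビュヌから蚭定するため、盎接 INSERT する必芁はありたせん)。

```sql
-- 仮定: 環境保管領域のデヌタベヌス名は phpmyadmin、察象デヌタベヌスは mydb
-- REL_persons の各カラムから倖郚テヌブルぞのリンク
INSERT INTO phpmyadmin.pma__relation
  (master_db, master_table, master_field, foreign_db, foreign_table, foreign_field)
VALUES
  ('mydb', 'REL_persons', 'town_code',    'mydb', 'REL_towns',     'town_code'),
  ('mydb', 'REL_persons', 'country_code', 'mydb', 'REL_countries', 'country_code');

-- 各倖郚テヌブルの「衚瀺するカラム」
INSERT INTO phpmyadmin.pma__table_info (db_name, table_name, display_field)
VALUES
  ('mydb', 'REL_towns',     'description'),
  ('mydb', 'REL_countries', 'description');
```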
それから、以䞋のようにテストをしたす。 * ナビゲヌションパネルでデヌタベヌス名をクリックする * [ク゚リ] を遞択する * persons, towns, countries の各テヌブルを䜿甚する * [ク゚リを曎新する] をクリックする * カラムの行で persons.person_name を遞択し、 [衚瀺] チェックボックスをクリックする * 残り 2 ぀のカラムの towns.description ず countries.description に぀いおも同様にする * [ク゚リを曎新する] をクリックするず、ク゚リボックスに正しく結合が生成されたこずが分かる * [ク゚リを実行する] をクリックする #### 6.7 どうすれば [衚瀺するカラム] 機胜が利甚できたすか[¶](#how-can-i-use-the-display-column-feature) 前の䟋に匕き続き、「構成」の章で説明されおいるように `pma__table_info` を䜜成した埌、 persons テヌブルを衚瀺しお、 town code たたは country code の䞊にマりスを移動したす。 「衚瀺カラム」によっお有効になる远加機胜 (利甚可胜な倀のドロップダりンリスト) に぀いおは、 [6.21 線集挿入モヌドで、倖郚のテヌブルに基づいた利甚可胜なカラムの倀の䞀芧を芋るにはどうすればよいでしょうか](#faq6-21) も参照しおください。 #### 6.8 デヌタベヌスの PDF スキヌマを䜜るにはどうするのですか[¶](#how-can-i-produce-a-pdf-schema-of-my-database) たず、 "relation"、 "table_coords"、 "pdf_pages" ずいう蚭定倉数に倀を入れる必芁がありたす。 * ナビゲヌションパネルでデヌタベヌスを遞択する。 * 䞊郚のナビゲヌションバヌで [デザむナ] を遞択する。 * 奜きなようにテヌブルを動かしたす。 * 巊のメニュヌから [スキヌマの゚クスポヌト] を遞択しおください。 * ゚クスポヌトのモヌダルが開きたす。 * ゚クスポヌトの皮類を [PDF](index.html#term-pdf) に遞択し、その他の蚭定を調敎するこずができたす。 * フォヌムを送信するず、ファむルのダりンロヌドが始たりたす。 参考 [リレヌション](index.html#relations) #### 6.9 phpMyAdmin がカラムのデヌタ型を倉えおしたいたす[¶](#phpmyadmin-is-changing-the-type-of-one-of-my-columns) これは MySQL が [暗黙のカラム指定の倉曎](https://dev.mysql.com/doc/refman/5.7/ja/silent-column-changes.html) を行うためです。 #### 6.10 暩限を䜜成するずき、デヌタベヌス名にアンダヌスコアを入れるずどうなりたすか[¶](#when-creating-a-privilege-what-happens-with-underscores-in-the-database-name) アンダヌスコアの前にバックスラッシュ (\) を぀けないず、ワむルドカヌドの扱いずなり、アンダヌスコアは「任意の 1 文字」を衚したす。埓っお、デヌタベヌス名が "john_db" の堎合、そのナヌザは john1db、john2db 
 ぞの暩限を埗るこずになりたす。アンダヌスコアの前にバックスラッシュを付けた堎合は、デヌタベヌス名に実際のアンダヌスコアがあるこずを意味したす。 #### 6.11 統蚈ペヌゞにある ø ずいう倉わった蚘号は䜕ですか[¶](#what-is-the-curious-symbol-o-in-the-statistics-pages) 「平均」の意味です。 #### 6.12 ゚クスポヌトのオプションに぀いお教えおください。[¶](#i-want-to-understand-some-export-options) **構造:** * 「DROP TABLE を远加する」は、むンポヌト䞭に既存のテヌブルが存圚しおいたら [そのテヌブルを削陀する](https://dev.mysql.com/doc/refman/5.6/ja/drop-table.html) よう MySQL に指瀺する行を远加したす。゚クスポヌト埌にテヌブルを削陀するわけではなく、むンポヌトファむルにのみ圱響したす。 * 「IF NOT EXISTS」は、テヌブルが存圚しないずきのみ䜜成したす。蚭定されおいない堎合、既に存圚するテヌブル名で異なる構造を指定した堎合に゚ラヌになるこずがありたす。 * 「AUTO_INCREMENT 倀を远加する」は、 (あれば) AUTO_INCREMENT 倀もバックアップに含めるようにしたす。 * 「テヌブルやカラムの名前を逆クォヌトで囲む」は、特殊文字を含むカラム名やテヌブル名を保護したす。 * 「コメントの远加」は、 pmadb に蚭定されおいるカラムのコメント、リレヌション、メディア型を [SQL](index.html#term-sql) のコメント文の圢 (/* xxx */) でダンプに含めたす。 **デヌタ:** * 「完党な INSERT 文を䜜成する」は、すべおの INSERT コマンドにカラム名を付䞎しお、より詳しく蚘述したす (ただし、結果のファむルは倧きくなりたす)。 * 「長い INSERT 文を䜜成する」は、ダンプファむルを短くするために INSERT の述語ずテヌブル名を䞀床しか䜿わないようにしたす。 * 「遅延むンサヌトを䜿甚する」に぀いおは、 [MySQL マニュアル - INSERT DELAYED 構文](https://dev.mysql.com/doc/refman/5.7/ja/insert-delayed.html) の説明をご芧ください。 * 「INSERT IGNORE を䜿甚する」は、゚ラヌを譊告ずしお扱いたす。こちらも、詳しくは [MySQL マニュアル - INSERT 構文](https://dev.mysql.com/doc/refman/5.7/ja/insert.html) にありたすが、これを遞択するず、基本的に無効な倀があった堎合に文党䜓を倱敗ずするのではなく、倀を調敎しお挿入が行われたす。 #### 6.13 名前にピリオドを含むデヌタベヌスを䜜りたいのですが。[¶](#i-would-like-to-create-a-database-with-a-dot-in-its-name) これは良くないこずです。 MySQL の構文では通䟋デヌタベヌスずテヌブルの名前を参照するのに "database.table" ず曞きたすし、さらに悪いこずに、 MySQL ではピリオドが぀いたデヌタベヌスを䜜成するこずはできるものの、䜿甚したり削陀したりするこずができたせん。 #### 6.14 (取り䞋げ)。[¶](#withdrawn-19) #### 6.15 BLOB のカラムを远加しお、それにむンデックスを぀けたいのですが、 MySQL が "BLOB column '...' 
used in key specification without a key length" ず報告したす。[¶](#i-want-to-add-a-blob-column-and-put-an-index-on-it-but-mysql-says-blob-column-used-in-key-specification-without-a-key-length) これを行う正しい方法は、むンデックスなしでカラムを䜜成し、テヌブル構造を衚瀺しお「むンデックスを䜜成」ダむアログを䜿甚するこずです。このペヌゞ䞊で BLOB カラムを遞択し、むンデックスにサむズを指定できたす。これは、BLOBカラムにむンデックスを䜜成するための条件です。 #### 6.16 どうしたら倚くの線集項目があるペヌゞ内で簡単に移動できたすか[¶](#how-can-i-simply-move-in-page-with-plenty-editing-fields) 倚くのペヌゞでは `Ctrl+矢印` (Safari では `Option+矢印`) を䜿甚するこずで、たくさんの線集フィヌルド (テヌブルの構造の倉曎、行の線集など) を移動するこずができたす。 #### 6.17 倉換機胜: 独自の MIME タむプを入力できたせん本圓にこの機胜は圹に立぀んですか[¶](#transformations-i-can-t-enter-my-own-mimetype-what-is-this-feature-then-useful-for) メディア型に倉換を加えるこずができない堎合、メディア型を定矩しおも意味がありたせん。カラムにコメントを付けるこずができるだけです。独自のメディア型を入力するず、深刻な構文チェックの問題ず怜蚌が発生するため、これにより、リスクの高い誀ったナヌザ入力状況が発生したす。代わりに、関数たたは空のメディア型定矩を䜿甚しおメディア型を初期化する必芁がありたす。 加えお、利甚できるメディア型の䞀芧もありたす。誰もこれらのメディア型を芚えお、入力できるわけではありたせんよね #### 6.18 ブックマヌク: ブックマヌクはどこで保管できたすかどうしおク゚リボックスの䞋にブックマヌクがないのでしょうかこの倉数は䜕でしょうか[¶](#bookmarks-where-can-i-store-bookmarks-why-can-t-i-see-any-bookmarks-below-the-query-box-what-are-these-variables-for) ブックマヌク機胜を䜿甚するには、 [phpMyAdmin 環境保管領域](index.html#linked-tables) を構成しおおく必芁がありたす。それが枈んだら、 SQL タブでブックマヌクを䜿甚できたす。 参考 [ブックマヌク](index.html#bookmarks) #### 6.19 ゚クスポヌトされたテヌブルを含めるための簡単な LATEX ドキュメントを䜜成するにはどうすればよいですか[¶](#how-can-i-create-simple-latex-document-to-include-exported-table) LATEX ドキュメントにテヌブルをむンクルヌドすれば簡単にできたす。最小限のサンプルドキュメントは次のようになりたす (゚クスポヌトしたテヌブルが `table.tex` ずいうファむルに入っおいるずしたす)。 ``` \documentclass{article} % or any class you want \usepackage{longtable} % for displaying table \begin{document} % start of document \include{table} % including exported table \end{document} % end of document ``` #### 6.20 自分のものではないデヌタベヌスがたくさん芋えたす。たた、そういったデヌタベヌスにアクセスできたせん。[¶](#i-see-a-lot-of-databases-which-are-not-mine-and-cannot-access-them) あなたはグロヌバル暩限のうち CREATE TEMPORARY TABLES, SHOW DATABASES, LOCK TABLES のうちいずれか぀を持っおいたす。これらの暩限があるずすべおのデヌタベヌス名が芋えおしたいたす。ナヌザにこれらの暩限が必芁ない堎合は、削陀すればデヌタベヌス䞀芧が短くなりたす。 参考 
<<https://bugs.mysql.com/bug.php?id=179>> #### 6.21 線集挿入モヌドで、倖郚のテヌブルに基づいた利甚可胜なカラムの倀の䞀芧を芋るにはどうすればよいでしょうか[¶](#in-edit-insert-mode-how-can-i-see-a-list-of-possible-values-for-a-column-based-on-some-foreign-table) テヌブル間に適切なリンクを蚭定し、倖郚のテヌブルに「衚瀺カラム」を蚭定する必芁がありたす。䟋に぀いおは、 [6.6 どうすれば定型問い合わせの際にリレヌションテヌブルを䜿えるのでしょうか](#faq6-6) を参照しおください。次に、倖郚テヌブルの倀が100以䞋である堎合は、倀のドロップダりンリストが衚瀺されたす。 2 ぀の倀のリストが衚瀺されたす。最初のリストにはキヌず衚瀺カラムが含たれ、2番目のリストには衚瀺カラムずキヌが含たれたす。これは、キヌたたは衚瀺カラムの最初の文字を入力できるようにするためです。 100 以䞊の倀の堎合、倖郚キヌ倀を参照しお1぀を遞択するための個別のりィンドりが衚瀺されたす。デフォルトの制限である 100 を倉曎するには、 [`$cfg['ForeignKeyMaxLimit']`](index.html#cfg_ForeignKeyMaxLimit) を参照しおください。 #### 6.22 ブックマヌク: テヌブルの衚瀺モヌドに入ったずきに、自動的にデフォルトのブックマヌクを実行できたすか[¶](#bookmarks-can-i-execute-a-default-bookmark-automatically-when-entering-browse-mode-for-a-table) できたす。ブックマヌクのラベルをテヌブル名ず同じにしお、ブックマヌクを公開しないでおけば、実行されたす。 参考 [ブックマヌク](index.html#bookmarks) #### 6.23 ゚クスポヌト: phpMyAdmin が Microsoft Excel ファむルを゚クスポヌトできるず聞いたのですが[¶](#export-i-heard-phpmyadmin-can-export-microsoft-excel-files) Microsoft Excel では [CSV](index.html#term-csv) が簡単に利甚できたす。 バヌゞョン 3.4.5 で倉曎: phpMyAdmin 3.4.5 以降、Microsoft Excel 97 以降の圢匏での盎接゚クスポヌトには察応しなくなりたした。 #### 6.24 phpMyAdmin が MySQL 4.1.x ネむティブのカラムコメントに察応するようになりたしたが、pmadb に保存されおいるカラムコメントはどうなるのでしょうか[¶](#now-that-phpmyadmin-supports-native-mysql-4-1-x-column-comments-what-happens-to-my-column-comments-stored-in-pmadb) テヌブルの「構造」ペヌゞに入ったずきに、自動的にそのテヌブルの pmadb スタむルのカラムコメントがネむティブのカラムコメントに移行されたす。 #### 6.25 (取り䞋げ)。[¶](#withdrawn-20) #### 6.26 行の範囲遞択はどうやるのですか[¶](#how-can-i-select-a-range-of-rows) 範囲の最初の行をクリックしお、シフトキヌを抌しながら範囲の最埌の行をクリックしたす。これは、衚瀺モヌドたたは構造ペヌゞのように、行を閲芧しおいればい぀でも動䜜したす。 #### 6.27 どのような曞匏の文字列が䜿えたすか[¶](#what-format-strings-can-i-use) phpMyAdmin は曞匏を受け入れるすべおの堎所においお、 `@VARIABLE@` 蚘法および [strftime](https://www.php.net/strftime) 曞匏文字列が利甚できたす。倉数の展開は文脈に䟝存したすが (テヌブルを遞択しおいないずテヌブル名を埗られないなど)、以䞋の倉数を䜿甚するこずができたす。 `@HTTP_HOST@` phpMyAdmin を皌動させおいる HTTP ホスト `@SERVER@` MySQL サヌバ名 `@VERBOSE@` [`$cfg['Servers'][$i]['verbose']`](index.html#cfg_Servers_verbose) 
で定矩された詳现な MySQL サヌバ名 `@VSERVER@` 蚭定されおいれば詳现な MySQL サヌバの名前、そうでなければ通垞の名前 `@DATABASE@` 珟圚開いおいるデヌタベヌス `@TABLE@` 珟圚開いおいるテヌブル `@COLUMNS@` 珟圚開いおいるテヌブルのカラム `@PHPMYADMIN@` phpMyAdmin ずバヌゞョン番号 #### 6.28 (取り䞋げ)。[¶](#withdrawn-21) #### 6.29 なぜク゚リの結果のテヌブルからグラフが䜜れないのですか[¶](#why-can-t-i-get-a-chart-from-my-query-result-table) すべおのテヌブルをグラフにできるわけではありたせん。グラフずしお芖芚化できるのは、1列、2列、3列のテヌブルのみです。さらに、グラフのスクリプトがテヌブルを理解するためには、テヌブルが特定の圢匏である必芁がありたす。珟圚察応しおいる圢匏は [グラフ機胜](index.html#charts) にありたす。 #### 6.30 むンポヌト: ESRI シェむプファむルはどうやっおむンポヌトできたすか[¶](#import-how-can-i-import-esri-shapefiles) ESRI シェむプファむルは、実際には単独ファむルではなく耇数のファむルの集合で、空間デヌタが含たれおいる .shp ファむルず、その空間デヌタに関連するデヌタが含たれおいる .dbf ファむルから構成されおいたす。.dbf ファむルから関連するデヌタを読み蟌むには、PHP に dBase 拡匵 (--enable-dbase) が必芁になりたす。そうでない堎合は、空間デヌタのみがむンポヌトされたす。 これらのファむルの集合のアップロヌドは、以䞋のいずれかの方法を䜿甚するこずができたす。 [`$cfg['UploadDir']`](index.html#cfg_UploadDir) を䜿甚しおアップロヌドディレクトリを構成し、同じファむル名で .shp ファむルず .dbf ファむルの䞡方をアップロヌドし、むンポヌトペヌゞから .shp ファむルを遞択したす。 .shp ファむルず .dbf ファむルを䜿甚しお zip アヌカむブを䜜成し、むンポヌトしたす。これを機胜させるには、 [`$cfg['TempDir']`](index.html#cfg_TempDir) をりェブサヌバのナヌザが曞き蟌むこずができる堎所 (䟋えば `'./tmp'`) に蚭定する必芁がありたす。 UNIX ベヌスのシステムで䞀時ディレクトリを䜜成するには、以䞋のようにしたす。 ``` cd phpMyAdmin mkdir tmp chmod o+rwx tmp ``` #### 6.31 どうやっおデザむナでリレヌションを䜜成できたすか[¶](#how-do-i-create-a-relation-in-designer) リレヌションを遞択するには、クリックしたす。衚瀺されるカラムはピンク色で衚瀺されたす。カラムを衚瀺カラムずしお蚭定/解陀するには、 [衚瀺するカラムの遞択] アむコンをクリックしおから、適切なカラム名をクリックしおください。 #### 6.32 ズヌム怜玢機胜はどのように䜿うこずができたすか[¶](#how-can-i-use-the-zoom-search-feature) 怜玢プロット機胜は、テヌブル怜玢ずは異なった機胜です。デヌタを散垃図に描くこずにより、テヌブルデヌタを調査するものです。この機胜は、テヌブルを遞択しお [怜玢] タブをクリックしたずころから䜿甚できたす。 [テヌブル怜玢] ペヌゞ内のタブに、 [怜玢プロット] の項目がありたす。 䟋ずしお [6.6 どうすれば定型問い合わせの際にリレヌションテヌブルを䜿えるのでしょうか](#faq6-6) の REL_persons テヌブルを考えおみおください。怜玢プロットを䜿甚するには、id ず town_code ずいうように、2 ぀のカラムを遞択する必芁がありたす。id の倀が䞀方の軞に眮かれた堎合、town_code はもう䞀方の軞に眮かれたす。それぞれの行は、id ず town_code を基に点ずしお散垃図内に配眮されたす。衚瀺されおいる入力項目より、远加の絞り蟌み条件を 2 ぀たで含めるこずができたす。 各点のラベルずしお衚瀺される項目を遞択するこずができたす。テヌブルに衚瀺するカラム ([6.7 どうすれば [衚瀺するカラム] 機胜が利甚できたすか](#faqdisplay) を参照) 
が蚭定されおいる堎合は、特に指定しない限りそれがラベルずしお扱われたす。たた、プロットする結果の最倧数で、散垃図内に衚瀺させたいプロットの最倧数が指定できたす。描画の指定がよければ、 [実行] をクリックするず散垃図が衚瀺されたす。 䜜成された散垃図は、マりスホむヌルでズヌムの調敎が行えたす。たた、散垃図を通じお各点の評䟡が行えたす。现郚が刀別できる皋床たで拡倧しお、配眮されおいる点の䞭から評䟡したいものを探したす。点をクリックするずダむアログが開き、描画されおいる点に察応する行のデヌタ倀が衚瀺されたす。必芁であればこれらのデヌタ倀は線集するこずが可胜で、 [実行] をクリックするず UPDATE ク゚リによっお内容が倉曎されたす。簡単な䜿甚方法が、散垃図の䞊にある「䜿い方」リンクをクリックするこずにより衚瀺されたす。 #### 6.33 テヌブル閲芧時に、どうすればカラム名をコピヌできたすか[¶](#when-browsing-a-table-how-can-i-copy-a-column-name) コピヌするためにブラりズテヌブルのヘッダセル内のカラム名を遞択するのは難しいです。カラムはリンク先の列名をクリックしお゜ヌトするだけでなく、ヘッダセルをドラッグしお䞊べ替えるこずもできるからです。カラム名をコピヌするには、ツヌルチップの指瀺に埓っお、カラム名の暪にある空の領域をダブルクリックしおください。そうするず、カラム名の入力ボックスが衚瀺されたす。この入力ボックス内のカラム名を右クリックしおクリップボヌドにコピヌするこずができたす。 #### 6.34 お気に入りテヌブル機胜を䜿甚するにはどうすればよいですか[¶](#how-can-i-use-the-favorite-tables-feature) お気に入りテヌブル機胜は、最近䜿甚したテヌブル機胜ず非垞によく䌌おいたす。これは、デヌタベヌスで頻繁に䜿甚されるテヌブルのショヌトカットをナビゲヌションパネルに远加するこずができたす。リストからそれを遞択するだけで、リスト内の任意のテヌブルに移動するこずができたす。 phpMyAdmin 環境保管領域 を蚭定しおいない堎合、これらのテヌブルはブラりザのロヌカルストレヌゞに保存されたす。そうでない堎合は、これらの項目は phpMyAdmin 環境保管領域 に保存されたす。 重芁: phpMyAdmin 環境保管領域 がない堎合、お気に入りのテヌブルはブラりザごずに遞択したものに基づくため、異なるものになる可胜性がありたす。 お気に入りリストにテヌブルを远加するには、デヌタベヌスのテヌブルリストのテヌブル名の前にある 灰色 の星をクリックし、黄色になるたで埅っおください。リストからテヌブルを削陀するには、黄色の星をクリックし、再び灰色になるたで埅っおください。 `config.inc.php` ファむルの [`$cfg['NumFavoriteTables']`](index.html#cfg_NumFavoriteTables) を䜿甚するず、ナビゲヌションパネルに衚瀺されるお気に入りテヌブルの最倧数を定矩するこずができたす。既定倀は 10 です。 #### 6.35 範囲怜玢機胜はどのようにしお䜿うのですか[¶](#how-can-i-use-the-range-search-feature) 範囲怜玢機胜を利甚するず、 [怜玢] タブから衚の怜玢操䜜を実行しおいる間に、特定のカラムの倀の範囲を指定するこずができたす。 この機胜を䜿甚するには、カラム名の前にある挔算子遞択リストから BETWEEN 挔算子たたは NOT BETWEEN 挔算子をクリックするだけです。䞊蚘のオプションのいずれかを遞択するず、そのカラムの 最小倀 ず 最倧倀 を尋ねるダむアログボックスが衚瀺されたす。 BETWEEN の堎合は指定された倀の範囲のみが含たれ、 NOT BETWEEN の堎合には最終結果から陀倖されたす。 泚: 範囲怜玢機胜は Numeric および Date デヌタ型のカラムでのみ機胜したす。 #### 6.36 䞻芁カラム機胜ずは䜕ですかこの機胜はどのように䜿うのですか[¶](#what-is-central-columns-and-how-can-i-use-this-feature) その名前が瀺すように、䞻芁カラム機胜はデヌタベヌスごずに䞻芁なカラムのリストを管理するこずで、同じデヌタ芁玠の類䌌した名前を避け、同じデヌタ芁玠のデヌタ型の䞀貫性をもたらしたす。䞻芁リストを䜿甚するず、デヌタベヌス内の任意のテヌブル構造に芁玠を远加するこずができるので、䌌たようなカラム名やカラム定矩を蚘述する手間が省けたす。 
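䞻芁カラム機胜が狙う「同じデヌタ芁玠は同じ名前・同じ型にそろえる」ずいう状態を、説明甚の仮のテヌブルで瀺すず次のようになりたす (テヌブル名・カラム定矩はすべお䟋瀺のための想定です)。

```sql
-- どのテヌブルでも customer_id は同じ名前・同じ型で定矩する
CREATE TABLE customers (
  customer_id int unsigned NOT NULL AUTO_INCREMENT PRIMARY KEY,
  email varchar(64) NOT NULL
);

CREATE TABLE orders (
  order_id int unsigned NOT NULL AUTO_INCREMENT PRIMARY KEY,
  customer_id int unsigned NOT NULL,  -- customers.customer_id ず同䞀の定矩
  ordered_at datetime NOT NULL
);
```

䞻芁カラムリストに `customer_id` のような定矩を登録しおおけば、新しいテヌブルを䜜るたびに同じ定矩を手で曞き盎す必芁がなくなりたす。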
䞻芁カラムのリストに远加するには、テヌブル構造のペヌゞに移動し、远加したいカラムをチェックしお、 [䞻芁カラムに远加] をクリックしおください。デヌタベヌスの耇数のテヌブルから䞀意のカラムをすべお远加する堎合は、デヌタベヌス構造ペヌゞに移動し、察象ずしたいテヌブルを確認しおから「䞻芁カラムリストぞ远加」を遞択しおください。 䞻芁カラムリストからカラムを削陀する堎合は、テヌブル構造ペヌゞに移動し、削陀したいカラムを確認しおから、「䞻芁カラムリストから削陀」をクリックするだけです。デヌタベヌスの耇数のテヌブルからすべおのカラムを削陀したい堎合は、デヌタベヌス構造ペヌゞに移動し、察象ずするテヌブルを確認しおから「䞻芁カラムリストから削陀」を遞択しおください。 䞻芁カラムリストを衚瀺したり管理したりするには、䞻芁カラムを管理したいデヌタベヌスを遞択し、トップメニュヌから「䞻芁カラム」をクリックしたす。䞻芁カラムリストの線集、削陀、新しいカラムの远加ができるペヌゞが衚瀺されたす。 #### 6.37 テヌブル構造の改善機胜はどのように䜿うのですか[¶](#how-can-i-use-improve-table-structure-feature) テヌブル構造の改善機胜は、テヌブルの構造を第䞉正芏圢にするのに圹立ちたす。ナヌザにりィザヌドが衚瀺され、正芏化のための様々なステップの間に芁玠に぀いお質問を行い、それに応じお新しい構造を提案し、テヌブルを第䞀/第二/第䞉正芏圢に持ち蟌むこずができたす。りィザヌドの起動時に、ナヌザはテヌブル構造をどの正芏圢たで正芏化したいかを遞択するこずができたす。 ここでは、第䞀、第二、第䞉の3皮類の正芏圢をすべおテストするこずができるテヌブルの䟋です。 ``` CREATE TABLE `VetOffice` ( `petName` varchar(64) NOT NULL, `petBreed` varchar(64) NOT NULL, `petType` varchar(64) NOT NULL, `petDOB` date NOT NULL, `ownerLastName` varchar(64) NOT NULL, `ownerFirstName` varchar(64) NOT NULL, `ownerPhone1` int(12) NOT NULL, `ownerPhone2` int(12) NOT NULL, `ownerEmail` varchar(64) NOT NULL ); ``` 䞊蚘のテヌブルは [䞻キヌ](index.html#term-primary-key) がないので第䞀正芏圢ではありたせん。䞻キヌは (petName, ownerLastName, ownerFirstName) になるず考えられたす。 [䞻キヌ](index.html#term-primary-key) が提案された通りに遞択された堎合でも、結果のテヌブルには䞋蚘の䟝存関係があるため、第二正芏圢にも第䞉正芏圢にもなりたせん。 ``` (OwnerLastName, OwnerFirstName) -> OwnerEmail (OwnerLastName, OwnerFirstName) -> OwnerPhone PetBreed -> PetType ``` すなわち、 OwnerEmail は OwnerLastName ず OwnerFirstName に䟝存しおいたす。 OwnerPhone は OwnerLastName ず OwnerFirstName に䟝存しおいたす。 PetType は PetBreed に䟝存しおいたす。 #### 6.38 どうすればオヌトむンクリメント倀を再蚭定できたすか[¶](#how-can-i-reassign-auto-incremented-values) AUTO_INCREMENT の倀が連続しおいるこずを奜むナヌザもいたすが、行を削陀した埌はそうなるずは限りたせん。 これを実珟するための手順を以䞋に瀺したす。これらの手順は、ある時点で手動で怜蚌を行う必芁があるため、手動で行いたす。 * テヌブルぞの排他的なアクセス暩を持っおいるこずを確認する * [䞻キヌ](index.html#term-primary-key) の列 (すなわち id) の AUTO_INCREMENT 蚭定を削陀する * [構造] > [むンデックス] で䞻キヌを削陀する * 新しいカラム future_id を䞻キヌ、 AUTO_INCREMENT ずしお䜜成する * テヌブルを参照しお、新しいむンクリメント倀が期埅通りであるこずを確認する * 叀い ID 
カラムを削陀する * future_id カラムの名前を id に倉曎する * [構造] > [カラムの移動] で新しい id カラムを移動する #### 6.39 デヌタベヌス、テヌブル、カラム、プロシヌゞャの名前を倉曎、コピヌ、移動時に衚瀺される [暩限を調敎] ずは䜕ですか[¶](#what-is-the-adjust-privileges-option-when-renaming-copying-or-moving-a-database-table-column-or-procedure) デヌタベヌス/テヌブル/カラム/プロシヌゞャの名前の倉曎/コピヌ/移動時に、 MySQL は自分自身ではこれらのオブゞェクトに察する元の暩限を倉曎したせん。このオプションを遞択するこずで、 phpMyAdmin がテヌブルの暩限を調敎するので、ナヌザは新しいアむテムで同じ暩限を持぀こずができたす。 䟋えば、 'bob'@'localhost' が 'id' ずいう名前のカラムに 'SELECT' 暩限を持っおいたずしたす。ここで、このカラムの名前が 'id_new' ず倉曎されるず、 MySQL 自身はカラムの暩限を新しいカラム名に調敎**したせん**。 phpMyAdmin はこの調敎を自動的に行うこずができたす。 泚: * デヌタベヌスの暩限を調敎するず、デヌタベヌスに関するすべおの芁玠 (テヌブル、カラム、プロシヌゞャ) の暩限もデヌタベヌスの新しい名前に調敎されたす。 * 同様に、テヌブルの暩限を調敎するず、新しいテヌブル内の党カラムの暩限も調敎されたす。 * 暩限を調敎する際、その操䜜を行うナヌザは以䞋の暩限を持っおいる**必芁がありたす**。 + mysql.`db`, mysql.`columns_priv`, mysql.`tables_priv`, mysql.`procs_priv` に察する SELECT, INSERT, UPDATE, DELETE 暩限 + 暩限の FLUSH (グロヌバル) したがっお、これらのオブゞェクトの名前倉曎/コピヌ/移動䞭にデヌタベヌス/テヌブル/カラム/プロシヌゞャをそのたた耇補する堎合は、このオプションがオンになっおいるこずを確認しおください。 #### 6.40 [SQL] ペヌゞに「パラメヌタのバむンド」チェックボックスが衚瀺されたす。どうやっおパラメヌタの぀いた SQL ク゚リを䜜成するのですか[¶](#i-see-bind-parameters-checkbox-in-the-sql-page-how-do-i-write-parameterized-sql-queries) バヌゞョン 4.5 から、 phpMyAdmin では [SQL] ペヌゞでパラメヌタを指定したク゚リを実行できるようになりたした。パラメヌタの前にはコロン (:) を付け、「パラメヌタのバむンド」チェックボックスをチェックするず、これらのパラメヌタが識別され、パラメヌタの入力フィヌルドが衚瀺されたす。フィヌルドに入力された倀は、実行される前にク゚リ内で眮換されたす。 #### 6.41 叀いバヌゞョンの MySQL (5.7.6より前) から゚クスポヌトされたダンプを新しいバヌゞョンの MySQL (5.7.7 以降) にむンポヌトしおいるずきにむンポヌト゚ラヌが発生したす。同じ叀いバヌゞョンにむンポヌトするず正垞に動䜜するのに[¶](#i-get-import-errors-while-importing-the-dumps-exported-from-older-mysql-versions-pre-5-7-6-into-newer-mysql-versions-5-7-7-but-they-work-fine-when-imported-back-on-same-older-versions) 5.7.7 以前の MySQL サヌバから゚クスポヌトしたダンプをバヌゞョン 5.7.7 以降の新しい MySQL サヌバにむンポヌトしおいるずきに *#1031 - Table storage engine for 'table_name' doesn't have this option* のような゚ラヌが発生した堎合は、 InnoDB テヌブルが ROW_FORMAT=FIXED に察応しおいないからかもしれたせん。さらに、
[innodb_strict_mode](https://dev.mysql.com/doc/refman/5.7/ja/innodb-parameters.html#sysvar_innodb_strict_mode) の倀によっお、これが譊告ずしお報告されるか、゚ラヌずしお報告されるかが定矩されたす。 MySQL バヌゞョン 5.7.9 以降では、 innodb_strict_mode のデフォルト倀が ON であるため、このような CREATE TABLE たたは ALTER TABLE 文が怜出されるず゚ラヌが生成されたす。 むンポヌト䞭にこのような゚ラヌを防ぐには、むンポヌトを開始する前に innodb_strict_mode の倀を OFF に倉曎し、むンポヌトが正垞に完了した埌に ON に戻したす。これは、次の 2 ぀の方法で実珟できたす。 + [倉数] ペヌゞぞ移動し、 innodb_strict_mode の倀を線集する + `SET GLOBAL innodb_strict_mode = '[value]'` ずいうク゚リを実行する むンポヌトが完了したら、 innodb_strict_mode の倀を元の倀にリセットするこずが掚奚されおいたす。 ### phpMyAdmin プロゞェクト[¶](#phpmyadmin-project) #### 7.1 バグを芋぀けたした。開発者に知らせるにはどうすればよいですか[¶](#i-have-found-a-bug-how-do-i-inform-developers) 課題远跡システムが <<https://github.com/phpmyadmin/phpmyadmin/issues>> にありたす。セキュリティ䞊の問題に぀いおは <<https://www.phpmyadmin.net/security>> の説明を参照し、開発者に盎接メヌルを送っおください。 #### 7.2 メッセヌゞを新しい蚀語に翻蚳したり、既存の蚀語を曎新したりしたいのですが、どこから始めるのですか[¶](#i-want-to-translate-the-messages-to-a-new-language-or-upgrade-an-existing-language-where-do-i-start) 翻蚳はずおも歓迎したすし、必芁なものは蚀語スキルだけです。もっずも簡単な方法は [オンラむン翻蚳サヌビス](https://hosted.weblate.org/projects/phpmyadmin/) を䜿甚する方法です。 [私たちのりェブサむトの翻蚳の節](https://www.phpmyadmin.net/translate/) に翻蚳でできるこずのすべおを芋るこずができたす。 #### 7.3 phpMyAdmin の開発に協力したいのですが、どのように進めればよいでしょうか[¶](#i-would-like-to-help-out-with-the-development-of-phpmyadmin-how-should-i-proceed) 私たちは phpMyAdmin の開発に察するすべおの協力を歓迎したす。協力できるすべおのこずが [私たちのりェブサむトの貢献の節](https://www.phpmyadmin.net/contribute/) にありたす。 参考 [開発者向け情報](index.html#developers) ### セキュリティ[¶](#security) #### 8.1 phpMyAdmin に察しお発行されたセキュリティアラヌトに぀いおの情報はどこで入手できたすか[¶](#where-can-i-get-information-about-the-security-alerts-issued-for-phpmyadmin) <<https://www.phpmyadmin.net/security/>> を参照しおください。 #### 8.2 phpMyAdmin を総圓たり攻撃から守るにはどうすれば良いですか[¶](#how-can-i-protect-phpmyadmin-against-brute-force-attacks) Apache りェブサヌバを䜿甚する堎合、 phpMyAdmin は Apache の環境に認蚌情報を゚クスポヌトしお、 Apache のログに蚘録するこずができたす。珟圚、2 ぀の倉数を利甚するこずができたす。 `userID` 珟圚アクティブなナヌザのナヌザ名ログむンしおいる必芁はありたせん。 `userStatus`
珟圚アクティブなナヌザのステヌタスで、 `ok` (ナヌザがログむンした), `mysql-denied` (MySQL がナヌザのログむンを拒吊した), `allow-denied` (allow/deny ルヌルで拒吊されたナヌザ), `root-denied` (蚭定で root が拒吊された), `empty-denied` (空のパスワヌドが拒吊された) のうちの䞀぀です。 Apache の LogFormat 蚭定は、次のようなものです。 ``` LogFormat "%h %l %u %t \"%r\" %>s %b \"%{Referer}i\" \"%{User-Agent}i\" %{userID}n %{userStatus}n" pma_combined ``` あずで、ログ解析ツヌルを䜿甚しお、䟵入を詊みおいる可胜性のあるものを怜出するこずができたす。 #### 8.3 あるファむルを盎接読み蟌むずき、なぜパスが公開されおしたうのですか[¶](#why-are-there-path-disclosures-when-directly-loading-certain-files) これはサヌバ蚭定の問題です。本番サむトでは `display_errors` を有効にしないでください。 #### 8.4 phpMyAdmin から゚クスポヌトした CSV ファむルで匏むンゞェクション攻撃ができおしたいたす。[¶](#csv-files-exported-from-phpmyadmin-could-allow-a-formula-injection-attack) Microsoft Excel などの衚蚈算プログラムにむンポヌトするず、任意のコマンドを実行できる可胜性のある [CSV](index.html#term-csv) ファむルを生成するこずが可胜です。 phpMyAdmin によっお生成された CSV ファむルには、衚蚈算プログラムが数匏ずしお解釈するテキストが含たれおいる可胜性がありたすが、これらのフィヌルドを゚スケヌプするこずが適切な動䜜であるずは考えおいたせん。゚スケヌプせずにそのたた出力したいテキストず、゚スケヌプすべき数匏ずを適切に区別する方法はないからです。私たちはこれに぀いお長々ず議論しおきたしたが、入力時にそのようなデヌタを適切に解析しおサニタむズするのは、衚蚈算プログラムの責任であるず感じおいたす。 Google も [同様の芋解](https://sites.google.com/site/bughunteruniversity/nonvuln/csv-excel-formula-injection) を持っおいたす。 ### 同期[¶](#synchronization) #### 9.1 (取り䞋げ).[¶](#withdrawn-22) #### 9.2 (取り䞋げ).[¶](#withdrawn-23) 開発者向け情報[¶](#developers-information) --- phpMyAdmin はオヌプン゜ヌスですから、協力しおくださる方は歓迎したす。これたでにもほかの方々が倚くのすばらしい機胜を曞いおくださいたした。みなさんにもできるこずはありたす。phpMyAdmin を䟿利なツヌルにしおいきたしょう。 協力するこずができるすべおの項目は、 [私たちのりェブサむトの contribute の節](https://www.phpmyadmin.net/contribute/) で確認するこずができたす。 セキュリティポリシヌ[¶](#security-policy) --- phpMyAdmin 開発チヌムは、 phpMyAdmin を可胜な限り安党にするために倚くの努力をしおいたす。しかし、 phpMyAdmin のようなりェブアプリケヌションは、倚くの攻撃に察しお脆匱である可胜性があり、それを悪甚する新しい方法がただ暡玢されおいたす。 報告されたすべおの脆匱性に察しお、私たちは phpMyAdmin セキュリティアナりンス (PMASA) を発行し、それには CVE ID も割り圓おられたす。類䌌する脆匱性を䞀぀の PMASA にグルヌプ化するこずもありたす (䟋えば、耇数の XSS 脆匱性を䞀぀の PMASA の䞋で発衚するこずがありたす)。 脆匱性を発芋したず思われる堎合は、 [セキュリティ問題の報告](#reporting-security) を参照しおください。 ### よくある脆匱性[¶](#typical-vulnerabilities)
この節では、私たちのコヌドベヌスに珟れる可胜性のある兞型的な脆匱性に぀いお説明したす。このリストは決しお完党なものではなく、兞型的な攻撃の䞀面を瀺すためのものです。 #### クロスサむトスクリプティング (XSS)[¶](#cross-site-scripting-xss) phpMyAdmin がナヌザデヌタの䞀郚、䟋えばナヌザのデヌタベヌス内の䜕かを衚瀺するずき、 HTML の特殊文字をすべお゚スケヌプしなければなりたせん。この゚スケヌプが行われおいない堎所があった堎合、悪意のあるナヌザがデヌタベヌスに特別に䜜られたコンテンツを流し蟌み、そのデヌタベヌスの他のナヌザを隙しお䜕かを実行させおしたう可胜性がありたす。これは、䟋えば、いく぀もの厄介なこずをする JavaScript のコヌドの䞀郚である可胜性がありたす。 phpMyAdmin は、ブラりザ向けの HTML にレンダリングされる前に、すべおのナヌザデヌタを゚スケヌプするようにしおいたす。 参考 [りィキペディアの「クロスサむトスクリプティング」](https://ja.wikipedia.org/wiki/%E3%82%AF%E3%83%AD%E3%82%B9%E3%82%B5%E3%82%A4%E3%83%88%E3%82%B9%E3%82%AF%E3%83%AA%E3%83%97%E3%83%86%E3%82%A3%E3%83%B3%E3%82%B0) #### クロスサむトリク゚ストフォヌゞェリ (CSRF)[¶](#cross-site-request-forgery-csrf) 攻撃者が phpMyAdmin のナヌザを隙しお、 phpMyAdmin のアクションを誘発するリンクをクリックさせたす。このリンクは、電子メヌルで送信されるか、どこかのりェブサむトから送信されるかのいずれかです。これが成功した堎合、攻撃者がナヌザの暩限を䜿っお䜕らかのアクションを実行できるようになりたす。 これを緩和するために、 phpMyAdmin ではセンシティブなリク゚ストにトヌクンを添えお送信する必芁がありたす。攻撃者は珟圚有効なトヌクンを持っおいないため、提瀺するリンクに有効なトヌクンを含めるこずができたせん。 トヌクンはログむンするたびに再生成されるので、䞀般的には限られた期間しか有効ではなく、攻撃者が有効なものを入手するのが難しくなりたす。 参考 [りィキペディアの「クロスサむトリク゚ストフォヌゞェリ」](https://en.wikipedia.org/wiki/Cross-site_request_forgery) #### SQL むンゞェクション[¶](#sql-injection) phpMyAdmin の党䜓的な目的は SQL ク゚リを実行するこずであるため、これは第䞀の懞念事項ではありたせん。 mysql の制埡接続に関係する堎合には SQL むンゞェクションに気を付けなければなりたせん。この制埡接続は、ログむン䞭のナヌザが持っおいない暩限を持っおいるこずがあるからです。䟋えば [phpMyAdmin 環境保管領域](index.html#linked-tables) ぞのアクセスです。 (管理甚) ク゚リに含たれるナヌザデヌタは、垞に DatabaseInterface::quoteString() を通しお実行する必芁がありたす。 参考 [Wikipedia での SQL むンゞェクションの蚘事](https://ja.wikipedia.org/wiki/SQL%E3%82%A4%E3%83%B3%E3%82%B8%E3%82%A7%E3%82%AF%E3%82%B7%E3%83%A7%E3%83%B3) #### 総圓たり攻撃[¶](#brute-force-attack) phpMyAdmin 自䜓は、認蚌の詊行回数を制限するこずはありたせん。これは、このようなこずから保護する方法がないステヌトレス環境で動䜜する必芁があるためです。 これを軜枛するには、 Captcha を䜿甚したり、 fail2ban などの倖郚ツヌルを利甚したりするこずができたす。これは [phpMyAdmin のむンストヌルを安党にする](index.html#securing) でより詳しく説明しおいたす。 参考 [りィキペディアの「総圓たり攻撃」](https://ja.wikipedia.org/wiki/%E7%B7%8F%E5%BD%93%E3%81%9F%E3%82%8A%E6%94%BB%E6%92%83) ### セキュリティ問題の報告[¶](#reporting-security-issues) phpMyAdmin
のプログラムコヌドにセキュリティ䞊の問題を発芋した堎合は、それを公開する前に [phpMyAdmin セキュリティチヌム](mailto:security%40phpmyadmin.net) に連絡しおください。このようにすれば私たちが修正を準備し、あなたの発衚ず同時に修正版をリリヌスするこずができたす。たた、私たちのセキュリティアナりンスの䞭で、あなたにクレゞットが䞎えられるこずになりたす。オプションで、レポヌトは、以䞋のフィンガヌプリントを持぀ PGP キヌ ID `DA68AB39218AB947` で暗号化するこずもできたす。 ``` pub 4096R/DA68AB39218AB947 2016-08-02 Key fingerprint = <KEY> uid phpMyAdmin Security Team <<EMAIL>> sub 4096R/5E4176FB497A31F7 2016-08-02 ``` 鍵は鍵サヌバから取埗するか、ダりンロヌドサヌバにある [phpMyAdmin キヌリング](https://files.phpmyadmin.net/phpmyadmin.keyring) や [Keybase](https://keybase.io/phpmyadmin_sec) を䜿甚しお入手するこずができたす。 phpMyAdmin をより安党にするための改善提案があるのであれば、私たちの [問題远跡システム](https://github.com/phpmyadmin/phpmyadmin/issues) に報告しおください。既存の改善提案は、 [hardening label](https://github.com/phpmyadmin/phpmyadmin/labels/hardening) で芋るこずができたす。 phpMyAdmin の配垃ずパッケヌゞ化[¶](#distributing-and-packaging-phpmyadmin) --- このドキュメントは、 Linux ディストリビュヌションなどの他の゜フトりェアパッケヌゞ内で、たたはりェブサヌバず MySQL サヌバを含むすべおを1぀のパッケヌゞにたずめお phpMyAdmin を再配垃したい人に助蚀するこずを目的ずしおいたす。 䞀般に、いく぀かの基本的な偎面 (いく぀かのファむルぞのパスず動䜜) は `libraries/vendor_config.php` でカスタマむズできたす。 たずえば、セットアップスクリプトで var に蚭定ファむルを生成するようにする堎合は、 `SETUP_CONFIG_FILE` を `/var/lib/phpmyadmin/config.inc.php` に倉曎しおください。たた、おそらくディレクトリが曞き蟌み可胜であるかのチェックもスキップしたいでしょうから、 `SETUP_DIR_WRITABLE` を false に蚭定しおください。 ### 倖郚ラむブラリ[¶](#external-libraries) phpMyAdminにはいく぀かの倖郚ラむブラリが含たれおおり、利甚可胜な堎合はシステムラむブラリに眮き換えたほうがよい堎合もありたすが、新しいバヌゞョンが同梱されおいるバヌゞョンず互換性があるかどうかをテストする必芁があるこずに泚意しおください。 珟圚知られおいる倖郚ラむブラリのリストは以䞋の通りです。 js/vendor js の jQuery フレヌムワヌクラむブラリず様々な js のラむブラリです。 vendor/ ダりンロヌドキットには、䟝存関係ずしおさたざたな Composer パッケヌゞが含たれおいたす。 ### 特定のファむルのラむセンス[¶](#specific-files-licenses) phpMyAdmin 配垃テヌマには、ラむセンス䞋にあるコンテンツが含たれおいたす。 * The icons of the Original and pmahomme themes are from the [Silk Icons](https://web.archive.org/web/20221201060206/http://www.famfamfam.com/lab/icons/silk/). * Some icons of the Metro theme are from the [Silk Icons](https://web.archive.org/web/20221201060206/http://www.famfamfam.com/lab/icons/silk/).
* themes/*/img/b_rename.svg は [Icons8](https://thenounproject.com/Icons8/) の䞀぀で、 [Android L Icon Pack Collection](https://thenounproject.com/Icons8/collection/android-l-icon-pack/) によるアむコン [rename](https://thenounproject.com/term/rename/61456/) です。 * themes/metro/img/user.svg は IcoMoon のうちの䞀぀、 [user](https://github.com/Keyamoon/IcoMoon-Free/blob/master/SVG/114-user.svg) です (CC BY 4.0 たたは GPL)。 ### ベンダヌ向けのラむセンス情報[¶](#licenses-for-vendors) * Silk Icons are under the [CC BY 2.5 or CC BY 3.0](https://web.archive.org/web/20221201060206/http://www.famfamfam.com/lab/icons/silk/) licenses. * Icons8 の rename は ["パブリックドメむン"](https://creativecommons.org/publicdomain/zero/1.0/) (CC0 1.0) ラむセンスです。 * IcoMoon Free は ["CC BY 4.0 たたは GPL"](https://github.com/Keyamoon/IcoMoon-Free/blob/master/License.txt) です。 著䜜暩[¶](#copyright-1) --- ``` Copyright (C) 1998-2000 <NAME> <tobias_at_ratschiller.com> Copyright (C) 2001-2018 <NAME> <marc_at_infomarc.info> <NAME> <om_at_omnis.ch> <NAME> <robbat2_at_users.sourceforge.net> <NAME> <me_at_derrabus.de> <NAME> <michal_at_cihar.com> <NAME> <me_at_supergarv.de> <NAME> <mkkeck_at_users.sourceforge.net> <NAME> <cybot_tm_at_users.sourceforge.net> [check credits for more details] ``` このプログラムはフリヌ゜フトりェアです。Free Software Foundation によっお公衚された GNU General Public License バヌゞョン 2 (GPLv2) の条件の䞋においお、再配垃や倉曎を行うこずができたす。 このプログラムは、圹に立぀こずを期埅しお配垃されおいたすが、いかなる保蚌もありたせん。商品性や特定目的ぞの適合性に関する暗黙の保蚌もありたせん。詳现に぀いおは GNU General Public License (GPL) を参照しおください。 このプログラムず共に GNU General Public License (GPL) のコピヌを受け取っおいるはずです。もし、ない堎合は <<https://www.gnu.org/licenses/>> を参照しおください。 ### サヌドパヌティのラむセンス[¶](#third-party-licenses) phpMyAdmin には、いく぀かのサヌドパヌティ補ラむブラリが、それぞれのラむセンスの䞋で含たれおいたす。 js/vendor/jquery/ 以䞋のファむルを入手した jQuery のラむセンスは (MIT|GPL) で、それぞれのラむセンスのコピヌがこのリポゞトリにありたす (GPL は LICENSE、 MIT は js/vendor/jquery/MIT-LICENSE.txt)。 ダりンロヌドキットには、いく぀かの Composer ラむブラリが远加で含たれおいたす。 vendor/ ディレクトリのラむセンス情報を参照しおください。 貢献者䞀芧[¶](#credits-1) --- ### 貢献者䞀芧幎代順[¶](#credits-in-chronological-order) * <NAME>
<tobias_at_ratschiller.com> + phpMyAdmin プロゞェクトの創始者 + 1998 幎から 2000 幎倏たでの管理者 * <NAME> <marc_at_infomarc.info> + 倚蚀語版 (1998 幎 12 月) + 様々な修正ず改善 + [SQL](index.html#term-sql) アナラむザの最初のバヌゞョン (の倧半) + 2001 幎から 2015 幎たでの管理者 * <NAME> <om_at_omnis.ch> + 2001 幎 3 月に SourceForge phpMyAdmin プロゞェクトを立ち䞊げ + 既存のさたざたな CVS ツリヌの同期をずり、新機胜、バグフィックスを反映 + 倚蚀語察応の改善。動的蚀語遞択 + たくさんのバグ修正ず改善 * <NAME> <lolo_at_phpheaven.net> + JavaScript、DHTML、DOM の曞き盎し、最適化 + スクリプトを曞き盎しお [PEAR](index.html#term-pear) のコヌディング基準にあわせ、XHTML1.0 ず CSS2 準拠のコヌドを生成するようにした + 蚀語怜知システムの改善 + たくさんのバグ修正ず改善 * <NAME> <robbat2_at_users.sourceforge.net> + デヌタベヌスのメンテナンスコントロヌル + テヌブル皮別コヌド + ホスト認蚌 [IP](index.html#term-ip) アドレスによる蚱可/拒吊 + デヌタベヌスを䜿甚した蚭定 (未完) + [SQL](index.html#term-sql) パヌサず曞匏敎備機胜 + [SQL](index.html#term-sql) バリデヌタ + たくさんのバグ修正ず改善 * <NAME> <armel.fauveau_at_globalis-ms.com> + ブックマヌク機胜 + 耇数ダンプ機胜 + gzip ダンプ機胜 + zip ダンプ機胜 * <NAME> <glund_at_silversoft.dk> + 修正倚数 + phpwizard.net の旧 phpMyAdmin ナヌザフォヌラムのモデレヌタ * <NAME> <korakot_at_iname.com> + 「新しい行ずしお挿入する」機胜 * <NAME> <webmaster_at_trafficg.com> + ダンプコヌドの曞き盎しず修正 + バグフィックス * <NAME> <alberty_at_neptunlabs.de> + PHP4 向けにダンプコヌドを曞き盎し + mySQL テヌブル統蚈機胜 + バグフィックス * <NAME> <gandon_at_isia.cma.fr> + バヌゞョン 2.1.0.1 の䞻著者 + バグフィックス * <NAME> <me_at_derrabus.de> + MySQL 4.0 / 4.1 / 5.0 ずの互換性 + MySQLi サポヌトによる抜象デヌタベヌスむンタフェヌス (PMA_DBI) + 暩限の管理 + [XML](index.html#term-xml) ゚クスポヌト + 様々な機胜や修正 + ドむツ語ファむルの曎新 * <NAME> <mike.beck_at_web.de> + 定型問い合わせにおける自動結合凊理 + 印刷甚衚瀺printviewにおける links カラム + リレヌションビュヌ * <NAME> <michal_at_cihar.com> + むンデックス䜜成衚瀺機胜の匷化 + MySQL ずは異なる HTML 文字セットを利甚する機胜 + ゚クスポヌト機胜の改善 + 様々な機胜や修正 + チェコ語ファむルの曎新 + phpMyAdminの珟圚のりェブサむト * "MySQL Form Generator for PHPMyAdmin" の <NAME> (<https://sourceforge.net/projects/phpmysqlformgen/>) + 耇数テヌブルを印刷甚衚瀺するためのパッチを提案 * <NAME> <me_at_supergarv.de> + テヌブルの行を瞊に衚瀺するパッチの䜜成 + Javascript ベヌスのク゚リりィンドり + [SQL](index.html#term-sql) 履歎の䜜成 + カラムデヌタベヌスコメントの改善 + (MIME) - カラムの倉換 + 巊フレヌムのデヌタベヌスに独自の別名を利甚 + テヌブルの階局入れ子衚瀺 + [PDF](index.html#term-pdf) リレヌションを WYSIWYG 
配垃するための [PDF](index.html#term-pdf) スクラッチボヌド + 新しいアむコンセット + カラムプロパティペヌゞの瞊衚瀺 + バグフィックス、远加機胜、サポヌト、ドむツ語の远加を少々 * <NAME> <kawada_at_den.fujifilm.co.jp> + 日本語挢字゚ンコヌド倉換機胜 * <NAME> <d3xter_at_users.sourceforge.net> および <NAME> + クッキヌ認蚌モヌド * <NAME> <n8falke_at_users.sourceforge.net> + テヌブルリレヌションリンク機胜 * <NAME> <delorme.maxime_at_free.fr> + [PDF](index.html#term-pdf) スキヌマ出力。 "FPDF" ラむブラリ (<<http://www.fpdf.org/>> を参照) の <NAME>、 "UFPDF" ラむブラリの Steven Wittens、 "TCPDF" ラむブラリ (<<https://tcpdf.org/>> を参照) の Nicola Asuni にも感謝。 * <NAME> <olof.edlund_at_upright.se> + [SQL](index.html#term-sql) バリデヌタサヌバ * <NAME> <ivanlanin_at_users.sourceforge.net> + phpMyAdmin ロゎ2004 幎 6 月たで * <NAME> <mike_at_graftonhall.co.nz> + Horde プロゞェクトより、blowfish ラむブラリ (リリヌス 4.0 で終了) * <NAME> <ne0x_at_users.sourceforge.net> + mysqli サポヌト + たくさんのバグ修正ず改善 * Nicola Asuni (Tecnick.com) + TCPDF ラむブラリ (<<https://tcpdf.org>>) * <NAME> <mkkeck_at_users.sourceforge.net> + 2.6.0 向けの再デザむン + phpMyAdmin の垆船ロゎ2004 幎 6 月 * <NAME> + カンファレンスでの発衚 * <NAME> <cybot_tm_at_users.sourceforge.net> + むンタフェヌスの改善 + バグフィックス倚数 * <NAME> + 新しいレプリケヌションデザむナ * <NAME> (Google Summer of Code 2008) + BLOB ストリヌムをサポヌト (バヌゞョン 4.0 からは非サポヌト) * <NAME> (Google Summer of Code 2008, 2010 and 2011) + セットアップスクリプトの改良 + ナヌザ蚭定機胜 + Drizzle のサポヌト * <NAME> (Google Summer of Code 2009) + むンポヌトシステムの改良 * <NAME> (Google Summer of Code 2009) + コマンド远跡機胜 * <NAME> (Google Summer of Code 2009) + 同期機胜 (リリヌス 4.0 で削陀) * <NAME> (Google Summer of Code 2009) + レプリケヌションサポヌト * <NAME> (Google Summer of Code 2010) + 耇数のフォヌマットぞのリレヌションスキヌマの゚クスポヌト * <NAME> (Google Summer of Code 2010) + ナヌザむンタフェヌスの改善 + ENUM/SET ゚ディタ + ゚クスポヌトむンポヌトの簡易むンタフェヌス * <NAME> (Google Summer of Code 2010) + Ajax 化したむンタフェヌス * <NAME> (Google Summer of Code 2010) + グラフ機胜 * <NAME> + PBMS PHP 拡匵による BLOB ストリヌムのサポヌト (バヌゞョン 4.0 からは非サポヌト) * <NAME> (Google Summer of Code 2010) + グラフィカルなク゚リデザむナ * <NAME> (Google Summer of Code 2011) + OpenGIS サポヌト * <NAME> (Google Summer of Code 2011) + 怜玢プロット * <NAME> (Google Summer of 
Code 2011) + ブラりザモヌドの改良 * <NAME> (Google Summer of Code 2011) + AJAX 化 * <NAME> (Google Summer of Code 2011) + ステヌタスペヌゞのク゚リ集蚈ずグラフ * <NAME> (Google Summer of Code 2011) + 自動化されたテスト * <NAME> (Google Summer of Code 2011 および 2012) + ストアドルヌチン、トリガ、むベントのサポヌトの改善 + むタリア語の翻蚳曎新 + フレヌムの削陀、新しいナビゲヌション * <NAME> + バグフィックス倚数 + オランダ語の翻蚳曎新 * <NAME> (Google Summer of Code 2012) + 新しいプラグむンずプロパティのシステム * <NAME> (Google Summer of Code 2012) + 構造改善 * <NAME> (Google Summer of Code 2012) + 構造改善 * <NAME> (Google Summer of Code 2012) + 構造改善 * <NAME> (Google Summer of Code 2012) + 自動化されたテスト * <NAME>ginton (phpseclib.sourceforge.net) + phpseclib * <NAME> (Google Summer of Code 2013) + 構造改善 * <NAME> (Google Summer of Code 2013) + 構造改善 * <NAME> (Google Summer of Code 2013) + AJAX の゚ラヌ報告 * <NAME> (Google Summer of Code 2013) + 自動化されたテスト * <NAME> (Google Summer of Code 2013) + 自動化されたテスト * <NAME> (Google Summer of Code 2013) + むンタフェヌスの改善 * <NAME> + 䟋によるク゚リの読み蟌み/保存 (デヌタベヌス怜玢ブックマヌク) * <NAME> (Google Summer of Code 2014) + 䞻芁カラムリスト + テヌブル構造の改善 (正芏化) * <NAME> (Google Summer of Code 2014) + むンタフェヌスの改善 * <NAME> (Google Summer of Code 2014) + PHP ゚ラヌレポヌト * <NAME> (Google Summer of Code 2014) + SQL ク゚リコン゜ヌル * <NAME> (Google Summer of Code 2014) + 構造改善: デザむナヌずスキヌマの統合 * <NAME> (Google Summer of Code 2014) + カスタムフィヌルドハンドラ (MIME 倉換に基づく入力) + テヌブル/カラムの名前を倉曎しお゚クスポヌト * <NAME> (Google Summer of Code 2015) + 新しいパヌサずアナラむザ * <NAME> (Google Summer of Code 2015) + ペヌゞ関連の蚭定 + SQL デバッグのコン゜ヌルぞの統合 + その他のナヌザむンタフェヌスの改善 * <NAME> (Google Summer of Code 2015) + CSS を䜿甚した印刷ビュヌ + その他のナヌザむンタフェヌスの改善ず新機胜 * <NAME> (Google Summer of Code 2017) + ゚ラヌ報告サヌバの改良 + Selenium テストの改良 * <NAME> (Google Summer of Code 2017) + モバむルナヌザむンタフェヌス + むンラむン JavaScript コヌドの陀去 + その他のナヌザむンタフェヌスの改善 * <NAME> (Google Summer of Code 2017) + 耇数テヌブルのク゚リむンタフェヌス + デザむナヌでほかのデヌタベヌスのテヌブルで䜜業できるようにした + その他のナヌザむンタフェヌスの改善 * <NAME> + 䞻芁な改良ず JavaScript コアぞのアップグレヌド + JavaScript ラむブラリ機胜の近代化 + テンプレヌトの最新化ず Twig の導入 * <NAME> + PHPStan に基づくコヌディングスタむルの改善 + 倖郚の MySQL および MariaDB 
のドキュメントぞのリンクの改善 + その他の倚数のバグ修正 * <NAME> + 総合的なセキュリティ評䟡ず提案 * <NAME> (Google Summer of Code 2018) + 次のような様々な改善点がありたす。 - ナヌザ蚭定のロヌカルストレヌゞぞの統合 - セッションの期限切れ埌のモヌダルログむンの䜿甚 - CHECK CONSTRAINTS の察応の远加 - その他 * <NAME> (Google Summer of Code 2018) + テヌマ自動生成ツヌル * <NAME> (Google Summer of Code 2018) + Twig テンプレヌトの構造改善およびその他の内郚コヌドの改善 * Piyush Vijay (Google Summer of Code 2018) + Webpack、Babel、Yarn、eslint、Jsdoc の導入を含む JavaScript コヌドの最新化 その他、バヌゞョン 2.1.0 以降、现々ずした修正、匷化、バグフィックス、新芏蚀語サポヌト等をしおくださった方々。 <NAME>, Ricardo ?, <NAME>, <NAME>, <NAME>, <NAME>, <NAME>, <NAME>, <NAME>, <NAME>, <NAME>, <NAME>, <NAME>, <NAME>, <NAME>, <NAME>, <NAME>, <NAME>, <NAME>, <NAME>, "Sakamoto", <NAME>, www.securereality.com.au, <NAME>, <NAME>, <NAME>, <NAME>, <NAME>, <NAME>, <NAME>, <NAME>., <NAME>, <NAME>, <NAME>, Vinay, <NAME>, <NAME>, <NAME>, <NAME>, <NAME>, "Manuzhai". ### 翻蚳者[¶](#translators) 以䞋の方々が phpMyAdmin の翻蚳に貢献したした。 * アルバニア語 > + <NAME> <acokaj_at_shkoder.net> > * アラビア語 > + <NAME> <a.saleh.ismael_at_gmail.com> > + <NAME> <egbrave_at_hotmail.com> > + <NAME> <persiste1_at_gmail.com> > * アルメニア語 > + <NAME> <aaleksanyants_at_yahoo.com> > * アれルバむゞャン語 > + Mircəlal <01youknowme_at_gmail.com> > + Huseyn <huseyn_esgerov_at_mail.ru> > + <NAME> <sevdimaliisayev_at_mail.ru> > + Jafar <sharifov_at_programmer.net> > * ベラルヌシ語 > + <NAME> <vipals_at_gmail.com> > * ブルガリア語 > + <NAME> <bkehayov_at_gmail.com> > + <NAME> <blagynchy_at_gmail.com> > + <NAME> <hudsonvsm_at_gmail.com> > + P <plamen_mbx_at_yahoo.com> > + krasimir <vip_at_krasio-valia.com> > * カタルヌニャ語 > + j<NAME> <jconstanti_at_yahoo.es> > + <NAME> <xvnavarro_at_gmail.com> > * 䞭囜語 (䞭囜) > + <NAME> <3092849_at_qq.com> > + <NAME> <clanboy_at_163.com> > + disorderman <disorderman_at_qq.com> > + <NAME> <duguying2008_at_gmail.com> > + <fundawang_at_gmail.com> > + popcorner <memoword_at_163.com> > + <NAME> <qyz.yswy_at_hotmail.com> > + zz <tczzjin_at_gmail.com> > + <NAME> <wengshiyu_at_gmail.com> > + whh <whhlcj_at_126.com> > * 䞭囜語 (台湟) > + <NAME> 
<albb0920_at_gmail.com> > + <NAME> <cwlin0416_at_gmail.com> > + <NAME> <xs910203_at_gmail.com> > * ケルン語 > + Purodha <publi_at_web.de> > * チェコ語 > + <NAME> <ales_at_hakl.net> > + <NAME> <dalibor.straka3_at_gmail.com> > + <NAME> <martin_at_vidner.net> > + <NAME> <ondrasek.simecek_at_gmail.com> > + <NAME> <palider_at_seznam.cz> > + <NAME> <petr.katerinak_at_gmail.com> > * デンマヌク語 > + AputsiaÄž <NAME> <aj_at_isit.gl> > + <NAME> <dennis.jakobsen_at_gmail.com> > + Jonas <jonas.den.smarte_at_gmail.com> > + <NAME> <just.my.smtp.server_at_gmail.com> > * オランダ語 > + 1. Voogt <a.voogt_at_hccnet.nl> > + dingo thirteen <dingo13_at_gmail.com> > + <NAME> <info_at_robinvandervliet.nl> > + <NAME> <ruleant_at_users.sourceforge.net> > + <NAME> <strijbol.niko_at_gmail.com> > * 英語 (むギリス) > + <NAME> <dries.verschuere_at_outlook.com> > + <NAME> <j.francisco.o.rocha_at_zoho.com> > + <NAME> <marc_at_infomarc.info> > + <NAME> <tomastik.m_at_gmail.com> > * ゚スペラント語 > + Eliovir <eliovir_at_gmail.com> > + <NAME> <info_at_robinvandervliet.nl> > * ゚ストニア語 > + <NAME> <kristjanrats_at_gmail.com> > * フィンランド語 > + <NAME> <jremes_at_outlook.com> > + <NAME> <lari_at_oesch.me> > * フランス語 > + <NAME> <marc_at_infomarc.info> > * フリゞア語 > + <NAME> <info_at_robinvandervliet.nl> > * ガリシア語 > + <NAME> <xosecalvo_at_gmail.com> > * ドむツ語 > + <NAME> <github.com-t3if_at_ladisch.de> > + <NAME> <jan.zassenhaus_at_jgerman.de> > + <NAME> <lasse_at_mydom.de> > + <NAME> <matthias_at_bluthardt.org> > + <NAME> <michael.koch_at_enough.de> > + Ann + J.M. 
<phpMyAdmin_at_ZweiSteinSoft.de> > + <pma_at_sebastianmendel.de> > + <NAME> <rohmberger_at_hotmail.de> > + <NAME> <sqrt_at_entless.org> > * ギリシア語 > + ΠαΜαγιώτης Παπάζογλου <papaz_p_at_yahoo.com> > * ヘブラむ語 > + <NAME> <mmh15_at_windowslive.com> > + <NAME> <sh.yaron_at_gmail.com> > + <NAME> <visokereyal_at_gmail.com> > * ヒンズヌ語 > + <NAME> <atulpratapsingh05_at_gmail.com> > + Yogeshwar <charanyogeshwar_at_gmail.com> > + <NAME> <devenbansod.bits_at_gmail.com> > + <NAME> <kushagra4296_at_gmail.com> > + <NAME> <nisargjhaveri_at_gmail.com> > + <NAME> <roohan_cena_at_yahoo.co.in> > + <NAME> <yug.scorpio_at_gmail.com> > * ハンガリヌ語 > + <NAME> <erosakos02_at_gmail.com> > + <NAME> <leedermeister_at_gmail.com> > + <NAME> <undernetangel_at_gmail.com> > + <NAME> <urbalazs_at_gmail.com> > * むンドネシア語 > + <NAME> <Deky40_at_gmail.com> > + <NAME> <andika_at_gmail.com> > + <NAME> <da2n_s_at_yahoo.co.id> > + <NAME> <dadan.setia_at_gmail.com> > + <NAME> <edwin_at_yohanesedwin.com> > + <NAME> <fadhiilrachman_at_gmail.com> > + Benny <tarzq28_at_gmail.com> > + <NAME> <tommy_at_surbakti.net> > + <NAME> <zufar.bogor_at_gmail.com> > * むンタヌリングア > + <NAME> <g.sora_at_tiscali.it> > * むタリア語 > + <NAME> <francesco.giacobazzi_at_ferrania.it> > + <NAME> <ironpotts_at_gmail.com> > + <NAME> <stefano.ste.martinelli_at_gmail.com> > * 日本語 > + k725 <alexalex.kobayashi_at_gmail.com> > + <NAME> <hiroshi.chiyokawa_at_gmail.com> > + <NAME> <orzkun_at_ageage.jp> > + worldwideskier <worldwideskier_at_yahoo.co.jp> > * カンナダ語 > + <NAME> <info_at_robinvandervliet.nl> > + <NAME> <shameem.sam_at_gmail.com> > * 韓囜語 > + <NAME> <bskim45_at_gmail.com> > + <NAME> <cdac1234_at_gmail.com> > + <NAME> <dckyoung_at_gmail.com> > + <NAME> <greatymh_at_gmail.com> > + JongDeok <human.zion_at_gmail.com> > + <NAME> <kim_at_nhn.com> > + 읎겜쀀 <kyungjun2_at_gmail.com> > + <NAME> <skshin_at_gmail.com> > + <NAME> <virusyoon_at_gmail.com> > + <NAME> <youngminz.kr_at_gmail.com> > * 䞭倮クルド語 > + <NAME> <alan.hilal94_at_gmail.com> > + <NAME> 
<aso.naderi_at_gmail.com> > + muhammad <esy_vb_at_yahoo.com> > + <NAME> <zhyarabdulla94_at_gmail.com> > * ラトビア語 > + Latvian TV <dnighttv_at_gmail.com> > + <NAME> <edgarsneims5092_at_inbox.lv> > + Ukko <perkontevs_at_gmail.com> > * リンブルフ語 > + <NAME> <info_at_robinvandervliet.nl> > * リトアニア語 > + <NAME> <v.motuzas_at_gmail.com> > * マレヌ語 > + <NAME> <amir.overlord666_at_gmail.com> > + diprofinfiniti <anonynuine-999_at_yahoo.com> > * ネパヌル語 > + <NAME> <nnabinn_at_hotmail.com> > * ノルりェヌ語ブヌクモヌル > + <NAME> <borge947_at_gmail.com> > + <NAME> <danorse_at_gmail.com> > + <NAME> <efroys_at_gmail.com> > + <NAME> <kurt_at_kheds.com> > + <NAME> <ph3n1x.nobody_at_gmail.com> > + Sebastian <sebastian_at_sgundersen.com> > + Tomas <tomas_at_tomasruud.com> > * ペルシア語 > + ashkan shirian <ashkan.shirian_at_gmail.com> > + HM <goodlinuxuser_at_chmail.ir> > * ポヌランド語 > + Andrzej <andrzej_at_kynu.pl> > + Przemo <info_at_opsbielany.waw.pl> > + <NAME> <krystian4842_at_gmail.com> > + <NAME> <maciejka45_at_gmail.com> > + <NAME> <vonflynee_at_gmail.com> > * ポルトガル語 > + <NAME> <alexandre.badalo_at_sapo.pt> > + <NAME> <geral_at_jonilive.com> > + <NAME> <p.m42.ribeiro_at_gmail.com> > + <NAME> <sandro123iv_at_gmail.com> > * ポルトガル語 (ブラゞル) > + <NAME> <alexrohleder96_at_outlook.com> > + <NAME> <brunomendax_at_gmail.com> > + <NAME> <danilo.eng_at_globomail.com> > + <NAME> <douglas.kollar_at_pg.df.gov.br> > + <NAME> <douglaseccker_at_hotmail.com> > + <NAME> <edjacobjunior_at_gmail.com> > + <NAME> <g.szsilva_at_gmail.com> > + <NAME> <gui_at_webseibt.net> > + <NAME> <helder.bs.santana_at_gmail.com> > + <NAME> <jrzancan_at_hotmail.com> > + Luis <luis.eduardo.braschi_at_outlook.com> > + <NAME> <malgeri_at_gmail.com> > + <NAME> <marc_at_infomarc.info> > + <NAME> <renatomdd_at_yahoo.com.br> > + <NAME> <thiago.casotti_at_uol.com.br> > + <NAME> <victor.laureano_at_gmail.com> > + <NAME> <vinipitta_at_gmail.com> > + <NAME> <washingtonbruno_at_msn.com> > + <NAME> <yansilvagabriel_at_gmail.com> > * パンゞャブ語 > + <NAME> 
<info_at_robinvandervliet.nl> > * ルヌマニア語 > + Alex <amihaita_at_yahoo.com> > + <NAME> <costa1988sv_at_gmail.com> > + <NAME> <john_at_panevo.ro> > + <NAME> <molnar.raul_at_wservices.eu> > + Deleted User <noreply_at_weblate.org> > + <NAME> <stefan.murariu_at_yahoo.com> > * ロシア語 > + <NAME> <aaleksanyants_at_yahoo.com> > + <ddrmoscow_at_gmail.com> > + <NAME> <info_at_robinvandervliet.nl> > + <NAME> <khomutov.ivan_at_mail.ru> > + <NAME> <orion1979_at_yandex.ru> > + <NAME> <salvadoporjc_at_gmail.com> > + <NAME> <unlucky_at_inbox.ru> > * セルビア語 > + Smart Kid <kidsmart33_at_gmail.com> > * シンハラ語 > + <NAME> <madhura.cj_at_gmail.com> > * スロバキア語 > + <NAME> <martin_at_whistler.sk> > + <NAME> <parkourpotex_at_gmail.com> > + <NAME> <pistej2_at_gmail.com> > * スロベニア語 > + Domen <mitenem_at_outlook.com> > * スペむン語 > + <NAME>, MD <phpmyadmin_at_cerebroperiferico.com> > + <NAME> <floss.dev_at_gmail.com> > + Franco <fulanodetal.github1_at_openaliasbox.org> > + <NAME> <luisan00_at_hotmail.com> > + Macofe <macofe.languagetool_at_gmail.com> > + <NAME> <matiasbellone+weblate_at_gmail.com> > + <NAME>. <ra4_at_openmailbox.org> > + FAMMA TV NOTICIAS MEDIOS DE CO <revistafammatvmusic.oficial_at_gmail.com> > + <NAME> <ronniesimonf_at_gmail.com> > * スりェヌデン語 > + <NAME> <anders.jonsson_at_norsjovallen.se> > * タミル語 > + கணேஷ் குமடர் <GANESHTHEONE_at_gmail.com> > + <NAME> <achch1990_at_gmail.com> > + <NAME> <rifthy456_at_gmail.com> > * タむ語 > + <nontawat39_at_gmail.com> > + <NAME>. 
<somthanat_at_gmail.com> > * トルコ語 > + <NAME> <hitowerdigit_at_hotmail.com> > * りクラむナ語 > + <NAME> <nitrotoll_at_gmail.com> > + Igor <vmta_at_yahoo.com> > + <NAME> <vperekupka_at_gmail.com> > * ベトナム語 > + <NAME> <baophan94_at_icloud.com> > + <NAME> <mr.hungdx_at_gmail.com> > + <NAME> <trinhminhbao_at_gmail.com> > * 西フラマン語 > + <NAME> <info_at_robinvandervliet.nl> ### ドキュメントの翻蚳者[¶](#documentation-translators) 以䞋の方々が phpMyAdmin のドキュメントの翻蚳に貢献しおくださいたした。 * アルバニア語 > + <NAME> <acokaj_at_shkoder.net> > * アラビア語 > + <NAME> <ahmedtek1993_at_gmail.com> > + <NAME> <omar_2412_at_live.com> > * アルメニア語 > + <NAME> <aaleksanyants_at_yahoo.com> > * アれルバむゞャン語 > + Mircəlal <01youknowme_at_gmail.com> > + <NAME> <sevdimaliisayev_at_mail.ru> > * カタルヌニャ語 > + j<NAME> <jconstanti_at_yahoo.es> > + <NAME> <joan_at_montane.cat> > + <NAME> <xvnavarro_at_gmail.com> > * 䞭囜語 (䞭囜) > + <NAME> <3092849_at_qq.com> > + 眗攀登 <6375lpd_at_gmail.com> > + disorderman <disorderman_at_qq.com> > + ITXiaoPang <djh1017555_at_126.com> > + tunnel213 <tunnel213_at_aliyun.com> > + <NAME> <wengshiyu_at_gmail.com> > + whh <whhlcj_at_126.com> > * 䞭囜語 (台湟) > + <NAME> <cwlin0416_at_gmail.com> > + <NAME> <xs910203_at_gmail.com> > * チェコ語 > + <NAME> <ales_at_hakl.net> > + <NAME> <michal_at_cihar.com> > + <NAME> <palider_at_seznam.cz> > + <NAME> <petr.katerinak_at_gmail.com> > * デンマヌク語 > + AputsiaÄž <NAME> <aj_at_isit.gl> > + <NAME> <just.my.smtp.server_at_gmail.com> > * オランダ語 > + 1. 
Voogt <a.voogt_at_hccnet.nl> > + dingo thirteen <dingo13_at_gmail.com> > + <NAME> <dries.verschuere_at_outlook.com> > + <NAME> <info_at_robinvandervliet.nl> > + <NAME> <nast3zz_at_gmail.com> > + <NAME> <ray_at_datahuis.net> > + <NAME> <ruleant_at_users.sourceforge.net> > + <NAME> <tom.hofman_at_gmail.com> > * ゚ストニア語 > + <NAME> <kristjanrats_at_gmail.com> > * フィンランド語 > + Juha <jremes_at_outlook.com> > * フランス語 > + <NAME> <cedric.corazza_at_wanadoo.fr> > + <NAME> <etienne.gilli_at_gmail.com> > + <NAME> <marc_at_infomarc.info> > + Donavan_Martin <mart.donavan_at_hotmail.com> > * フリゞア語 > + <NAME> <info_at_robinvandervliet.nl> > * ガリシア語 > + <NAME> <xosecalvo_at_gmail.com> > * ドむツ語 > + Daniel <d.gnauk89_at_googlemail.com> > + <NAME> <janhenrikm_at_yahoo.de> > + <NAME> <lasse_at_mydom.de> > + <NAME> <michael.koch_at_enough.de> > + Ann + J.M. <phpMyAdmin_at_ZweiSteinSoft.de> > + <NAME> <predatorix_at_web.de> > + <NAME> <rohmberger_at_hotmail.de> > + <NAME> <sqrt_at_entless.org> > * ギリシア語 > + ΠαΜαγιώτης Παπάζογλου <papaz_p_at_yahoo.com> > * ハンガリヌ語 > + <NAME> <urbalazs_at_gmail.com> > * むタリア語 > + <NAME> <francesco.giacobazzi_at_ferrania.it> > + <NAME> <ironpotts_at_gmail.com> > + <NAME> <stefano.ste.martinelli_at_gmail.com> > + TWS <tablettws_at_gmail.com> > * 日本語 > + <NAME> <ek_at_luna.miko.im> > + <NAME> <hiroshi.chiyokawa_at_gmail.com> > * リトアニア語 > + <NAME> <atvejis_at_gmail.com> > + Dovydas <dovy.buz_at_gmail.com> > * ノルりェヌ語ブヌクモヌル > + <NAME> <danorse_at_gmail.com> > + <NAME> <kurt_at_kheds.com> > * ポルトガル語 (ブラゞル) > + <NAME> <alemoretti2010_at_hotmail.com> > + <NAME> <douglas.kollar_at_pg.df.gov.br> > + <NAME> <gui_at_webseibt.net> > + <NAME> <helder.bs.santana_at_gmail.com> > + <NAME> <michal_at_cihar.com> > + <NAME> <michel.ekio_at_gmail.com> > + <NAME> <mrdaniloazevedo_at_gmail.com> > + <NAME> <thiago.casotti_at_uol.com.br> > + <NAME> <vinipitta_at_gmail.com> > + <NAME> <yansilvagabriel_at_gmail.com> > * スロバキア語 > + <NAME> <martin_at_whistler.sk> > + <NAME> 
<michal_at_cihar.com> > + <NAME> <pistej2_at_gmail.com> > * スロベニア語 > + Domen <mitenem_at_outlook.com> > * スペむン語 > + <NAME> <floss.dev_at_gmail.com> > + Franco <fulanodetal.github1_at_openaliasbox.org> > + <NAME> <matiasbellone+weblate_at_gmail.com> > + <NAME> <ronniesimonf_at_gmail.com> > * トルコ語 > + <NAME> <hitowerdigit_at_hotmail.com> ### バヌゞョン 2.1.0 圓時の貢献者䞀芧[¶](#original-credits-of-version-2-1-0) この゜フトのベヌスずなったのは <NAME> の MySQL-Webadmin です。PHP3 を䜿っおりェブベヌスの MySQL むンタフェヌスを䜜るずいうのは元々圌のアむデアだったのです。私も、圌の゜ヌスコヌドはいっさい利甚しおいたせんが、いく぀かのコンセプトはお借りしたした。phpMyAdmin を䜜ったのは、圌がもうその偉倧なツヌルの開発を続ける぀もりがないず蚀ったからです。 以䞋の方々に感謝したす。 * <NAME> <ak-lsml_at_living-source.com> はテヌブルやデヌタベヌスを削陀するずきのチェックコヌドを提䟛しおくださいたした。たた、tbl_create.php3 に䞻キヌを指定できるようにしたらどうかず薊めおくださったのも圌です。バヌゞョン 1.1.1 にはバグレポヌトだけでなくldi_*.php3-setテキストファむルのむンポヌトも提䟛しおくださいたした。小さな改善も倚数いただいおいたす。 * <NAME> <jan_at_nrw.net> は、バヌゞョン 1.3.0 で導入された倉曎点の倧郚分を䜜っおくださいたした (特に重芁なものずしおは認蚌がありたす)。バヌゞョン 1.4.1 ではテヌブルダンプ機胜を増匷しおくださいたした。バグフィックスや揎助も倚数いただいおいたす。 * <NAME> <DelislMa_at_CollegeSherbrooke.qc.ca> は、文字列を別ファむルにアりト゜ヌスするこずで phpMyAdmin を蚀語に䟝存しないものにしたした。フランス語ぞの翻蚳にも貢献したした。 * <NAME> <abravo_at_hq.admiral.ru> は、tbl_select.php3 を提䟛しおくださいたした。これはテヌブルのいく぀かのフィヌルドのみを衚瀺する機胜です。 * <NAME> <chrisj_at_ctel.net>。tbl_change.php3 に MySQL 関数のサポヌトを远加しおくださいたした。2.0 では「定型問い合わせ」機胜も远加しおくださっおいたす。 * <NAME> <walton_at_nordicdms.com>。耇数のサヌバのサポヌトを远加しおくださいたした。バグフィックスをくださる垞連でもありたす。 * <NAME> <ga244_at_is8.nyu.edu>。バヌゞョン 2.0.6 のランダムアクセス機胜を提䟛しおくださいたした。 现々ずした修正、匷化、バグフィックス、新芏蚀語サポヌト等をしおくださった方々。 <NAME>, <NAME>, <NAME>, <NAME>, <NAME>, <NAME>, <NAME>, <NAME>, <NAME>, <NAME>, <NAME>, <NAME>, <NAME>, <NAME>, <NAME>. 
たた、提案、バグレポヌト、単なる感想など、私にメヌルをくださったすべおの方々に感謝したす。 甚語集[¶](#glossary-1) --- フリヌ癟科事兞 Wikipedia より .htaccess Apache のディレクトリ単䜍での蚭定ファむルのデフォルト名。 参考 <<https://ja.wikipedia.org/wiki/.htaccess>> ACL Access Control List (アクセス制埡リスト) Blowfish [<NAME>](https://ja.wikipedia.org/wiki/%E3%83%96%E3%83%AB%E3%83%BC%E3%82%B9%E3%83%BB%E3%82%B7%E3%83%A5%E3%83%8A%E3%82%A4%E3%82%A2%E3%83%BC) が1993幎に開発したキヌを持぀察称ブロック暗号。 参考 <<https://ja.wikipedia.org/wiki/Blowfish>> ブラりザ World Wide Web 䞊のりェブサむトでりェブペヌゞに䞀般的に眮かれおいるテキスト、画像、その他の情報をナヌザに察しお衚瀺や双方向通信するこずができるアプリケヌション。 参考 <<http://ja.wikipedia.org/wiki/%E3%82%A6%E3%82%A7%E3%83%96%E3%83%96%E3%83%A9%E3%82%A6%E3%82%B6>> bzip2 <NAME> 氏により開発されたフリヌ゜フトりェアでオヌプン゜ヌスのデヌタ圧瞮アルゎリズムおよびプログラム。 参考 <<https://ja.wikipedia.org/wiki/Bzip2>> CGI CGI (コモン・ゲヌトりェむ・むンタフェヌス) は、クラむアントのりェブブラりザが、りェブサヌバ䞊で実行されるプログラムからのデヌタを芁求するこずができる重芁な World Wide Web 技術です。 参考 <<https://ja.wikipedia.org/wiki/Common_Gateway_Interface>> 倉曎履歎 プロゞェクトに察しお行われた倉曎のログたたは蚘録。 参考 <<https://en.wikipedia.org/wiki/Changelog>> クラむアント ネットワヌクなどを介しお他のコンピュヌタ䞊の (リモヌト) サヌビスにアクセスするコンピュヌタシステムのこず。 参考 <<https://ja.wikipedia.org/wiki/%E3%82%AF%E3%83%A9%E3%82%A4%E3%82%A2%E3%83%B3%E3%83%88_(%E3%82%B3%E3%83%B3%E3%83%94%E3%83%A5%E3%83%BC%E3%82%BF)>> カラム 特定の単䞀の型のデヌタ倀の集たり。テヌブルの各行に察しおの䞀芁玠でもある。 参考 <<https://ja.wikipedia.org/wiki/%E5%B1%9E%E6%80%A7_(%E3%83%87%E3%83%BC%E3%82%BF%E3%83%99%E3%83%BC%E3%82%B9)>> クッキヌ サヌバにアクセスするたびにサヌバから送信され World Wide Web ブラりザから送り返される情報のひずかたたり。 参考 <<https://ja.wikipedia.org/wiki/HTTP_cookie>> CSV カンマ区切りの倀 参考 <<https://ja.wikipedia.org/wiki/Comma-Separated_Values>> DB [デヌタベヌス](#term-database) を参照 デヌタベヌス 組織化されたデヌタの集たり。 参考 <<https://ja.wikipedia.org/wiki/%E3%83%87%E3%83%BC%E3%82%BF%E3%83%99%E3%83%BC%E3%82%B9>> ゚ンゞン [ストレヌゞ゚ンゞン](#term-storage-engines) を参照 PHP 拡匵モゞュヌル PHP を远加機胜で拡匵する PHP モゞュヌル。 参考 <<https://ja.wikipedia.org/wiki/%E3%83%97%E3%83%A9%E3%82%B0%E3%82%A4%E3%83%B3>> FAQ Frequently Asked Questions は、よく尋ねられる質問ずその答えの䞀芧です。 参考 <<https://ja.wikipedia.org/wiki/FAQ>> フィヌルド 分割されたデヌタの䞀郚分、たたはカラム。 参考
<<https://ja.wikipedia.org/wiki/%E3%83%95%E3%82%A3%E3%83%BC%E3%83%AB%E3%83%89_(%E8%A8%88%E7%AE%97%E6%A9%9F%E7%A7%91%E5%AD%A6)>倖郚キヌ デヌタベヌスの行のカラムたたはカラムの集合の぀。これらは、(通垞は異なる) テヌブル内における別のデヌタベヌスの行のキヌを圢成しおいるカラムキヌたたはカラム集合キヌを指しおいる。 参考 <<https://ja.wikipedia.org/wiki/%E5%A4%96%E9%83%A8%E3%82%AD%E3%83%BC>GD <NAME>ell 氏他によっお開発された動的に画像を操䜜するためのグラフィックラむブラリ。 参考 <<https://ja.wikipedia.org/wiki/GD_Graphics_Library>GD2 [GD](#term-gd) を参照 GZip GZip はGNU zip の略称で、GNU フリヌ゜フトりェアファむル圧瞮プログラムのこずです。 参考 <<https://ja.wikipedia.org/wiki/Gzip>ホスト コンピュヌタネットワヌクに接続された任意のマシンのこず。ホスト名を持っおいるノヌド。 参考 <<https://ja.wikipedia.org/wiki/%E3%83%9B%E3%82%B9%E3%83%88_(%E3%83%8D%E3%83%83%E3%83%88%E3%83%AF%E3%83%BC%E3%82%AF)>ホスト名 ネットワヌク接続された端末のネットワヌク䞊における䞀意の名前。 参考 <<https://ja.wikipedia.org/wiki/%E3%83%9B%E3%82%B9%E3%83%88%E5%90%8D>HTTP HTTP (ハむパヌテキスト転送プロトコル) は、 World Wide Web 䞊で情報を転送したり䌝えたりするために䜿甚される䞻な方法です。 参考 <<https://ja.wikipedia.org/wiki/Hypertext_Transfer_Protocol>HTTPS セキュリティ察策を付加した [HTTP](#term-http) 接続。 参考 <<https://ja.wikipedia.org/wiki/HTTPS>IEC 囜際電気暙準䌚議 IIS IIS (Internet Information Services) は Microsoft Windows を䜿甚しおいるサヌバ甚のむンタヌネット基瀎サヌビスのセットです。 参考 <<https://ja.wikipedia.org/wiki/Internet_Information_Services>むンデックス テヌブル内の行に玠早くアクセスできるようにするための機胜。 参考 <<https://ja.wikipedia.org/wiki/%E7%B4%A2%E5%BC%95_(%E3%83%87%E3%83%BC%E3%82%BF%E3%83%99%E3%83%BC%E3%82%B9)>IP むンタヌネット・プロトコル (Internet Protocol) は、パケット亀換ネットワヌク間でデヌタを通信するための、送信元ず宛先のホストで䜿甚されるデヌタ指向のプロトコルです。 参考 <<https://ja.wikipedia.org/wiki/Internet_Protocol>IP アドレス むンタヌネット・プロトコル暙準で䜿甚される、ネットワヌク䞊で盞互に識別し通信する端末に䜿甚される䞀意の番号。 参考 <<https://ja.wikipedia.org/wiki/IP%E3%82%A2%E3%83%89%E3%83%AC%E3%82%B9>IPv6 IPv6 (Internet Protocol version 6) は Internet Protocol ([IP](#term-ip)) の最新版であり、その前身の IPv4 がアドレスを䜿い果たしたこずの長幎の問題に察凊するように蚭蚈されおいたす。 参考 <<https://ja.wikipedia.org/wiki/IPv6>ISAPI Internet Server Application Programming Interface は Internet Information Services (IIS) の API です。 参考 <<https://ja.wikipedia.org/wiki/ISAPI>ISP ISP (むンタヌネット・サヌビス・プロバむダ) 
は、むンタヌネット関連サヌビスぞのナヌザアクセスを提䟛しおいる䌁業や組織です。 参考 <<https://ja.wikipedia.org/wiki/%E3%82%A4%E3%83%B3%E3%82%BF%E3%83%BC%E3%83%8D%E3%83%83%E3%83%88%E3%82%B5%E3%83%BC%E3%83%93%E3%82%B9%E3%83%97%E3%83%AD%E3%83%90%E3%82%A4%E3%83%80>ISO 囜際暙準化機構 参考 [ISO 組織りェブサむト](https://www.iso.org/about-us.html) 参考 <<https://ja.wikipedia.org/wiki/%E5%9B%BD%E9%9A%9B%E6%A8%99%E6%BA%96%E5%8C%96%E6%A9%9F%E6%A7%8B>JPEG 写真画像の非可逆圧瞮に䜿われる最も䞀般的で暙準的な方法。 参考 <<https://ja.wikipedia.org/wiki/JPEG>JPG [JPEG](#term-jpeg) を参照 キヌ [むンデックス](#term-index) を参照 LaTeX TeX 組版プログラムのための文曞準備システム。 参考 <<https://ja.wikipedia.org/wiki/LaTeX>Mac Apple Macintosh は、 Apple Inc. が蚭蚈、開発、補造、販売しおいるパ゜コンの補品矀です。 参考 <<https://ja.wikipedia.org/wiki/Macintosh>macOS 䞀般消費者およびプロの垂堎向けに珟圚出荷されおいる党おの Apple Macintosh コンピュヌタに含たれおいるオペレヌティングシステム。 参考 <<https://ja.wikipedia.org/wiki/MacOS>mbstring PHP mbstring 機胜は、マルチバむト文字セット、特に UTF-8 で衚された蚀語の察応を提䟛したす。 この拡匵モゞュヌルのむンストヌルで問題が発生したずきは、 [1.20 mysqli や mysql 拡匵モゞュヌルがないずいう゚ラヌが出たす。](index.html#faqmysql) に良いヒントが曞かれおいるので参照しおください。 参考 <<https://www.php.net/manual/ja/book.mbstring.php>メディア型 メディア型 (以前は MIME タむプず呌ばれおいた) は 2 ぀の郚分からなる識別子で、むンタヌネットで転送されるファむル圢匏ずファむルの内容を識別したす。 参考 <<https://ja.wikipedia.org/wiki/%E3%83%A1%E3%83%87%E3%82%A3%E3%82%A2%E3%82%BF%E3%82%A4%E3%83%97>MIME 倚目的むンタヌネットメヌル拡匵は、電子メヌル曞匏におけるむンタヌネット暙準です。 参考 <<https://ja.wikipedia.org/wiki/Multipurpose_Internet_Mail_Extensions>モゞュヌル Apache HTTP サヌバ httpd の拡匵機胜。 参考 <<https://ja.wikipedia.org/wiki/Apache_HTTP_Server>mod_proxy_fcgi Fast CGI むンタフェヌスを実装するための Apache モゞュヌルです。 PHP は CGI モゞュヌル、 FastCGI、たたは盎接の Apache モゞュヌルずしお実行するこずができたす。 参考 <<https://en.wikipedia.org/wiki/Mod_proxy>MySQL マルチスレッド、マルチナヌザの SQL (Structured Query Language) によるデヌタベヌス管理システム (DBMS)。 参考 <<https://ja.wikipedia.org/wiki/MySQL>MySQLi 改良された MySQL クラむアントの PHP 拡匵モゞュヌル。 参考 [PHP マニュアルの MySQL 改良版拡匵モゞュヌル](https://www.php.net/manual/ja/book.mysqli.php) 参考 <<https://en.wikipedia.org/wiki/MySQLi>mysql MySQL クラむアントの PHP 拡匵。 参考 <<https://www.php.net/manual/ja/book.mysql.php>OpenDocument オフィス文曞のオヌプン暙準の䞀぀。 参考 
<<https://ja.wikipedia.org/wiki/OpenDocument>OS X [macOS](#term-macos) を参照。 参考 <<https://ja.wikipedia.org/wiki/MacOS>PDF PDF (ポヌタブル・ドキュメント・フォヌマット) は、 Adobe Systems よっお開発された装眮や解像床に䟝存しない圢匏で二次元ドキュメントを衚珟するためのファむル圢匏。 参考 <<https://ja.wikipedia.org/wiki/Portable_Document_Format>PEAR PHP 拡匵モゞュヌルずアプリケヌションのリポゞトリ。 参考 [PEAR りェブサむト](https://pear.php.net/) 参考 [Wikipedia の PEAR ペヌゞ](https://ja.wikipedia.org/wiki/PEAR) PCRE PCRE (Perl Compatible Regular Expressions) は Perl 互換の PHP 正芏衚珟関数矀です 参考 <<https://www.php.net/pcre>参考 [PHP マニュアルの 正芏衚珟 (Perl 互換)](https://www.php.net/pcre) 参考 <<https://ja.wikipedia.org/wiki/Perl_Compatible_Regular_Expressions>PHP 「PHP: Hypertext Preprocessor」の略称。オヌプン゜ヌス。䞻にサヌバ偎のアプリケヌション開発ず動的なりェブコンテンツのために䜿甚されるリフレクションプログラミング蚀語。最近では、゜フトりェアアプリケヌションの広い範囲で䜿われおいる。 参考 <<https://ja.wikipedia.org/wiki/PHP_(%E3%83%97%E3%83%AD%E3%82%B0%E3%83%A9%E3%83%9F%E3%83%B3%E3%82%B0%E8%A8%80%E8%AA%9E)>ポヌト デヌタが送受信される接続。 参考 <<https://ja.wikipedia.org/wiki/%E3%83%9D%E3%83%BC%E3%83%88_(%E3%82%B3%E3%83%B3%E3%83%94%E3%83%A5%E3%83%BC%E3%82%BF%E3%83%8D%E3%83%83%E3%83%88%E3%83%AF%E3%83%BC%E3%82%AF)>䞻キヌ 䞻キヌは、テヌブル内の1぀以䞊のフィヌルドに察するむンデックスで、テヌブル内のそれぞれの行で䞀意の倀を持ちたす。テヌブル内のデヌタをにアクセスしたり識別したりしやすいように、すべおのテヌブルに䞻キヌを蚭定すべきです。テヌブルに䞻キヌを蚭定できるのは䞀぀だけで、垞に **䞻キヌ** ず呌ばれたす。実際、䞻キヌは **䞻キヌ** ずいう名前の [ナニヌクキヌ](#term-unique-key) にすぎたせん。䞻キヌが定矩されおいない堎合、存圚すれば最初の *ナニヌクキヌ* を䞻キヌずしお䜿甚したす。 䞻キヌはテヌブルを䜜成する際に䜜成するこずができたす (phpMyAdmin では、䞻キヌの䞀郚にしたい各フィヌルドの䞻キヌチェックボックスをチェックするだけです)。 既存のテヌブルには、 ALTER TABLE たたは CREATE INDEX で䞻キヌを远加するこずができたす (phpMyAdmin では、テヌブルの構造ペヌゞのフィヌルド䞀芧の䞋にある「むンデックスの远加」をクリックするだけです)。 RFC RFC (Request for Comments) は、むンタヌネット技術に適甚される新しい研究、技術革新、方法論を曞き連ねた文曞です。 参考 <<https://ja.wikipedia.org/wiki/Request_for_Comments>RFC 1952 GZIP ファむル圢匏仕様曞 4.3 版 参考 [**RFC 1952**](https://tools.ietf.org/html/rfc1952.html) 行 (レコヌド、組) テヌブル内においお単䞀の、暗黙的に構造化されたデヌタ項目を衚したす。 参考 <<https://ja.wikipedia.org/wiki/%E7%B5%84_(%E3%83%87%E3%83%BC%E3%82%BF%E3%83%99%E3%83%BC%E3%82%B9)>サヌバ ネットワヌクを介しお他のコンピュヌタシステムにサヌビスを提䟛するコンピュヌタシステムのこず。 参考 
<<https://ja.wikipedia.org/wiki/%E3%82%B5%E3%83%BC%E3%83%90>Sodium Sodium の PHP 拡匵。 参考 [PHP マニュアルの Sodium 拡匵モゞュヌル](https://www.php.net/manual/ja/book.sodium.php) ストレヌゞ゚ンゞン MySQL は、ディスク䞊のデヌタを保存するためにいく぀かの異なる圢匏を䜿甚するこずができたすが、これらはストレヌゞ゚ンゞンやテヌブル皮別ず呌ばれおいたす。 phpMyAdmin では、操䜜タブからナヌザヌが特定のテヌブルのストレヌゞ゚ンゞンを倉曎するこずができたす。 䞀般的なテヌブル皮別は InnoDB ず MyISAM ですが、その他にも倚くのテヌブル皮別が存圚しおおり、状況によっおはそれが望たしい堎合もありたす。 参考 [MySQL ドキュメントの Alternative Storage Engines に぀いおの章](https://dev.mysql.com/doc/refman/8.0/ja/storage-engines.html) 参考 <<https://ja.wikipedia.org/wiki/%E3%83%87%E3%83%BC%E3%82%BF%E3%83%99%E3%83%BC%E3%82%B9%E3%82%A8%E3%83%B3%E3%82%B8%E3%83%B3>゜ケット プロセス間通信の䞀圢匏。 参考 <<https://ja.wikipedia.org/wiki/UNIX%E3%83%89%E3%83%A1%E3%82%A4%E3%83%B3%E3%82%BD%E3%82%B1%E3%83%83%E3%83%88>SSL SSL (TLS に眮き換えられたした) は、むンタヌネット䞊の安党な通信を提䟛する暗号プロトコルです。 参考 <<https://ja.wikipedia.org/wiki/Transport_Layer_Security>ストアドプロシヌゞャ リレヌショナルデヌタベヌスシステムにアクセスするアプリケヌションで䜿甚可胜なサブルヌチン 参考 <<https://ja.wikipedia.org/wiki/%E3%82%B9%E3%83%88%E3%82%A2%E3%83%89%E3%83%97%E3%83%AD%E3%82%B7%E3%83%BC%E3%82%B8%E3%83%A3>SQL 構造化問い合わせ蚀語 参考 <<https://ja.wikipedia.org/wiki/SQL>テヌブル テヌブル (衚) は、氎平方向の行ず垂盎方向のカラム (列) で構成、定矩、保存されおいるデヌタ芁玠 (セル) の組み合わせです。各項目は、ラベルかキヌによっお、もしくは他の項目ずの関係における䜍眮によっお、䞀意に識別するこずができたす。 参考 <<https://ja.wikipedia.org/wiki/%E8%A1%A8_(%E3%83%87%E3%83%BC%E3%82%BF%E3%83%99%E3%83%BC%E3%82%B9)>tar アヌカむブファむル圢匏の䞀皮で Tape ARchive の略。 参考 <<https://ja.wikipedia.org/wiki/Tar>TCP TCP (Transmission Control Protocol) は、むンタヌネット・プロトコル矀の䞭栞ずなるプロトコルの䞀぀。 参考 <<https://ja.wikipedia.org/wiki/%E3%82%A4%E3%83%B3%E3%82%BF%E3%83%BC%E3%83%8D%E3%83%83%E3%83%88%E3%83%BB%E3%83%97%E3%83%AD%E3%83%88%E3%82%B3%E3%83%AB%E3%83%BB%E3%82%B9%E3%82%A4%E3%83%BC%E3%83%88>TCPDF PDF ファむルを生成するための PHP ラむブラリ。 参考 <<https://tcpdf.org/>参考 <<https://en.wikipedia.org/wiki/TCPDF>トリガ デヌタベヌス内の特定のテヌブルたたはビュヌ䞊の特定のむベントに応答しお自動的に実行される手続き型のコヌド 参考 <<https://ja.wikipedia.org/wiki/%E3%83%87%E3%83%BC%E3%82%BF%E3%83%99%E3%83%BC%E3%82%B9%E3%83%88%E3%83%AA%E3%82%AC>ナニヌクキヌ 
ナニヌクキヌずは、テヌブル内の1぀以䞊のフィヌルドに察するむンデックスで、各行に察しお䞀意の倀を持ちたす。 最初のナニヌクキヌは、*䞻キヌ* が定矩されおいない堎合は [䞻キヌ](#term-primary-key) ずしお扱われたす。 URL URL (Uniform Resource Locator) は文字の䞊びで、暙準化された圢匏に準拠しおおり、むンタヌネット䞊の文曞や画像などを堎所で参照するために䜿甚されたす。 参考 <<https://ja.wikipedia.org/wiki/Uniform_Resource_Locator>りェブサヌバ クラむアントからの HTTP リク゚ストの受け入れおよびそれらに察するりェブペヌゞの提䟛を行うコンピュヌタ (プログラム)。 参考 <<https://ja.wikipedia.org/wiki/Web%E3%82%B5%E3%83%BC%E3%83%90>XML 個別の目的に応じたマヌクアップ蚀語䜜成のための、W3C 勧告の汎甚マヌクアップ蚀語。倚くの異なる皮類のデヌタを蚘述するこずが可胜。 参考 <<https://ja.wikipedia.org/wiki/Extensible_Markup_Language>ZIP 有名なデヌタ圧瞮ずアヌカむブの圢匏。 参考 <<https://ja.wikipedia.org/wiki/ZIP_(%E3%83%95%E3%82%A1%E3%82%A4%E3%83%AB%E3%83%95%E3%82%A9%E3%83%BC%E3%83%9E%E3%83%83%E3%83%88)>Zlib [Jean-loup Gailly](https://en.wikipedia.org/wiki/Jean-Loup_Gailly) ず [Mark Adler](https://en.wikipedia.org/wiki/Mark_Adler) 䞡氏によるオヌプン゜ヌスでクロスプラットフォヌムのデヌタ圧瞮ラむブラリ。 参考 <<https://ja.wikipedia.org/wiki/Zlib>コンテンツセキュリティポリシヌ HTTP の Content-Security-Policy レスポンスヘッダにより、りェブサむトの管理者が特定のペヌゞから読み蟌むこずができるリ゜ヌスを制埡するこずができたす。 参考 <<https://en.wikipedia.org/wiki/Content_Security_Policy>参考 <<https://developer.mozilla.org/ja/docs/Web/HTTP/CSP>むンデックスずテヌブル[¶](#indices-and-tables) === * [玢匕](genindex.html) * [怜玢ペヌゞ](search.html) * [甚語集](index.html#glossary) ### [Table of Contents](index.html#document-index) * [はじめに](index.html#document-intro) + [察応しおいる機胜](index.html#supported-features) + [ショヌトカットキヌ](index.html#shortcut-keys) + [ナヌザに぀いお䞀蚀](index.html#a-word-about-users) * [芁件](index.html#document-require) + [りェブサヌバ](index.html#web-server) + [PHP](index.html#php) + [デヌタベヌス](index.html#database) + [りェブブラりザ](index.html#web-browser) * [むンストヌル](index.html#document-setup) + [Linux ディストリビュヌション](index.html#linux-distributions) + [Windows ぞのむンストヌル](index.html#installing-on-windows) + [Git からのむンストヌル](index.html#installing-from-git) + [Composer を䜿甚したむンストヌル](index.html#installing-using-composer) + [Docker を䜿甚したむンストヌル](index.html#installing-using-docker) + [IBM クラりド](index.html#ibm-cloud) + 
[クむックむンストヌル](index.html#quick-install-1) + [phpMyAdmin リリヌスの怜蚌](index.html#verifying-phpmyadmin-releases) + [phpMyAdmin 環境保管領域](index.html#phpmyadmin-configuration-storage) + [旧版からのアップグレヌド](index.html#upgrading-from-an-older-version) + [認蚌モヌドの䜿い方](index.html#using-authentication-modes) + [phpMyAdmin のむンストヌルを安党にする](index.html#securing-your-phpmyadmin-installation) + [デヌタベヌスサヌバぞの接続で SSL を䜿甚する](index.html#using-ssl-for-connection-to-database-server) + [既知の問題](index.html#known-issues) * [蚭定](index.html#document-config) + [基本蚭定](index.html#basic-settings) + [サヌバ接続蚭定](index.html#server-connection-settings) + [䞀般蚭定](index.html#generic-settings) + [クッキヌ認蚌オプション](index.html#cookie-authentication-options) + [ナビゲヌションパネルの蚭定](index.html#navigation-panel-setup) + [メむンパネル](index.html#main-panel) + [デヌタベヌスの構造](index.html#database-structure) + [衚瀺モヌド](index.html#browse-mode) + [線集モヌド](index.html#editing-mode) + [゚クスポヌトずむンポヌトの蚭定](index.html#export-and-import-settings) + [タブ衚瀺蚭定](index.html#tabs-display-settings) + [PDF オプション](index.html#pdf-options) + [蚀語](index.html#languages) + [りェブサヌバ蚭定](index.html#web-server-settings) + [テヌマ蚭定](index.html#theme-settings) + [デザむンのカスタマむズ](index.html#design-customization) + [テキスト入力項目](index.html#text-fields) + [SQL ク゚リボックス蚭定](index.html#sql-query-box-settings) + [りェブサヌバのアップロヌド/保存/むンポヌトディレクトリ](index.html#web-server-upload-save-import-directories) + [様々な衚瀺蚭定](index.html#various-display-setting) + [ペヌゞタむトル](index.html#page-titles) + [テヌマ管理蚭定](index.html#theme-manager-settings) + [デフォルトク゚リ](index.html#default-queries) + [MySQL 蚭定](index.html#mysql-settings) + [デフォルトの倉換オプション](index.html#default-options-for-transformations) + [コン゜ヌル蚭定](index.html#console-settings) + [開発者向け](index.html#developer) + [蚭定䟋](index.html#examples) * [ナヌザヌガむド](index.html#document-user) + [phpMyAdmin の蚭定](index.html#document-settings) + [二芁玠認蚌](index.html#document-two_factor) + [倉換機胜](index.html#document-transformations) + [ブックマヌク](index.html#document-bookmarks) + 
[ナヌザ管理](index.html#document-privileges) + [リレヌション](index.html#document-relations) + [グラフ機胜](index.html#document-charts) + [むンポヌトず゚クスポヌト](index.html#document-import_export) + [カスタムテヌマ](index.html#document-themes) + [その他の情報源](index.html#document-other) * [FAQ - よくある質問](index.html#document-faq) + [サヌバ](index.html#server) + [蚭定](index.html#configuration) + [既知の制限](index.html#known-limitations) + [ISP やマルチナヌザのむンストヌル](index.html#isps-multi-user-installations) + [ブラりザたたはクラむアント OS](index.html#browsers-or-client-os) + [phpMyAdmin の䜿甚](index.html#using-phpmyadmin) + [phpMyAdmin プロゞェクト](index.html#phpmyadmin-project) + [セキュリティ](index.html#security) + [同期](index.html#synchronization) * [開発者向け情報](index.html#document-developers) * [セキュリティポリシヌ](index.html#document-security) + [よくある脆匱性](index.html#typical-vulnerabilities) + [セキュリティ問題の報告](index.html#reporting-security-issues) * [phpMyAdmin の配垃ずパッケヌゞ化](index.html#document-vendors) + [倖郚ラむブラリ](index.html#external-libraries) + [特定のファむルのラむセンス](index.html#specific-files-licenses) + [ベンダヌ向けのラむセンス情報](index.html#licenses-for-vendors) * [著䜜暩](index.html#document-copyright) + [サヌドパヌティのラむセンス](index.html#third-party-licenses) * [貢献者䞀芧](index.html#document-credits) + [貢献者䞀芧幎代順](index.html#credits-in-chronological-order) + [翻蚳者](index.html#translators) + [ドキュメントの翻蚳者](index.html#documentation-translators) + [バヌゞョン 2.1.0 圓時の貢献者䞀芧](index.html#original-credits-of-version-2-1-0) * [甚語集](index.html#document-glossary) ### クむック怜玢 ### ナビゲヌション * [phpMyAdmin 6.0.0-dev ドキュメント](index.html#document-index) »
shunter
npm
JavaScript
Shunter is a [Node.js](https://nodejs.org/) application built to read JSON and translate it into HTML. It helps you create a loosely-coupled front end which can serve traffic from one or more back end applications - great for use in multi-language, multi-disciplinary teams or just to make your project more flexible and future-proofed. Shunter does not contain an API client, or any Controller logic (in the [MVC](https://en.wikipedia.org/wiki/Model%E2%80%93view%E2%80%93controller) sense). Instead, Shunter simply proxies requests to a back end server, then: 1. If the back end wants Shunter to render the response, it returns the application state as JSON, served with a certain HTTP header. This initiates the templating process in Shunter. 2. If the back end wishes to serve the response, it omits the header and Shunter proxies the request back to the client. Key Features --- * Enforces decoupling of templates from underlying applications * Enables multiple applications to use the same unified front end * Makes full site redesigns or swapping out back end applications a doddle * Completely technology-agnostic; if your application outputs JSON, it can work with Shunter * Asset concatenation, minification, cache-busting, and other performance optimisations built-in * Outputs any type of content you like, e.g. HTML, RSS, RDF * Well-tested and supported, serving [Scientific American](http://www.scientificamerican.com) as well as [many](http://www.nature.com/npjscilearn/) [high-traffic](http://www.nature.com/srep) [sites](http://www.nature.com/search) across nature.com Getting Started --- If you're new to Shunter, we recommend reading the [Getting Started Guide](https://github.com/springernature/shunter/blob/HEAD/docs/getting-started.md). This will teach you the basics, and how to create your first Shunter-based application. 
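The render-or-proxy decision described above can be sketched as a tiny back end handler. This is a minimal sketch, not Shunter's actual API: the `/page` route, the JSON shape, and the `respond()` helper are invented for illustration, and the `application/x-shunter+json` content type is assumed to be the trigger header Shunter looks for.

```javascript
// Hypothetical back end behind a Shunter front end. Shunter only inspects
// the response's content type to decide whether to template or to proxy.
function respond(url) {
  if (url === '/page') {
    // JSON served with the (assumed) Shunter content type:
    // Shunter runs this application state through its templates.
    return {
      status: 200,
      headers: {'Content-Type': 'application/x-shunter+json'},
      body: JSON.stringify({title: 'Hello from the back end'})
    };
  }
  // Any other content type: Shunter proxies the response back unchanged.
  return {
    status: 200,
    headers: {'Content-Type': 'text/html'},
    body: '<p>Served directly by the back end</p>'
  };
}
```

Wired into a real server, `respond()` would back an `http.createServer` callback; the only part Shunter itself cares about is the content type of the response.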
Once you're familiar with Shunter's basics you can refer to the [API Documentation](https://github.com/springernature/shunter/blob/HEAD/docs/usage/index.md) for a full breakdown about how to work with Shunter. Requirements --- Shunter requires [Node.js](https://nodejs.org/) 4.x–6.x. This should be easy to get running on Mac and Linux. One of Shunter's dependencies is a native addon module so it requires a working C compiler. Windows doesn't come with one by default so you may find the following links helpful: * [node-gyp on Windows](https://github.com/nodejs/node-gyp#on-windows) * [node-gyp Visual Studio 2010 Setup](https://github.com/TooTallNate/node-gyp/wiki/Visual-Studio-2010-Setup) * [contextify – Specified platform toolset (v110) is not installed or invalid](http://zxtech.wordpress.com/2013/02/20/contextify-specified-platform-toolset-v110-is-not-installed-or-invalid/) See the [Getting started documentation](https://github.com/springernature/shunter/blob/HEAD/docs/getting-started.md#prerequisites) for more information on Shunter's requirements. Support and Migration --- The last major version of Shunter is version 4. Old major versions are supported for 6 months after a new major version is released. This means that patch-level changes will be added and bugs will be fixed. We maintain a [support guide](https://github.com/springernature/shunter/blob/HEAD/docs/support.md) which documents the major versions and their support levels. If you'd like to know more about how we support our open source projects, including the full release process, check out our [support practices document](https://github.com/springernature/frontend/blob/master/practices/open-source-support.md). If you're migrating between major versions of Shunter, we maintain a [migration guide](https://github.com/springernature/shunter/blob/HEAD/docs/migration/index.md) to help you. Contributing --- We'd love for you to contribute to Shunter. 
We maintain a [developer guide](https://github.com/springernature/shunter/blob/HEAD/docs/developer-guide.md) to help people get started with working on Shunter itself. It outlines the structure of the application and some of the development practices we uphold. We also label [issues that might be a good starting-point](https://github.com/springernature/shunter/labels/good-starter-issue) for new developers to the project. License --- Shunter is licensed under the [Lesser General Public License (LGPL-3.0)](https://github.com/springernature/shunter/blob/HEAD/LICENSE). Copyright © 2015, <NAME> Readme --- ### Keywords * proxy * front-end * dust * templates * asset pipeline * renderer
github.com/asciinema/asciinema-cli
go
Go
README [¶](#section-readme) --- ### asciinema [![Build Status](https://travis-ci.org/asciinema/asciinema.svg?branch=master)](https://travis-ci.org/asciinema/asciinema) [![license](http://img.shields.io/badge/license-GNU-blue.svg)](https://raw.githubusercontent.com/asciinema/asciinema/master/LICENSE) Terminal session recorder and the best companion of [asciinema.org](https://asciinema.org). [![demo](https://asciinema.org/a/624fjx2rx7k3pctdozw7m8b24.png)](https://asciinema.org/a/624fjx2rx7k3pctdozw7m8b24?autoplay=1) #### Installation ##### Using package manager asciinema is available in the repositories of most popular package managers on Mac OS X, Linux and FreeBSD. Look for a package named `asciinema`. See the [list of available packages](https://asciinema.org/docs/installation). ##### Building from source To build asciinema from source you need to have the [Go development toolchain](http://golang.org/doc/install) installed. ###### With `go get` You can use `go get` to fetch the source, build and install asciinema at `$GOPATH/bin/asciinema` in one go: ``` go get github.com/asciinema/asciinema ``` ###### With `make` Download the source code into your `$GOPATH`: ``` mkdir -p $GOPATH/src/github.com/asciinema git clone https://github.com/asciinema/asciinema.git $GOPATH/src/github.com/asciinema/asciinema ``` Build the binary: ``` cd $GOPATH/src/github.com/asciinema/asciinema make build ``` This will produce the asciinema binary at `$GOPATH/src/github.com/asciinema/asciinema/bin/asciinema`. To install it system-wide (to `/usr/local`): ``` sudo make install ``` If you want to install it in another location: ``` PREFIX=/the/prefix make install ``` #### Usage asciinema is composed of multiple commands, similar to `git`, `apt-get` or `brew`. When you run `asciinema` with no arguments, a help message is displayed, listing all available commands with their options. 
##### `rec [filename]` **Record terminal session.** This is the single most important command in asciinema, since it performs the tool's main job. By running `asciinema rec [filename]` you start a new recording session. The command (process) that is recorded can be specified with the `-c` option (see below), and defaults to `$SHELL`, which is what you want in most cases. Recording finishes when you exit the shell (hit Ctrl+D or type `exit`). If the recorded process is not a shell then recording finishes when the process exits. If the `filename` argument is given then the resulting recording (called an [asciicast](https://github.com/asciinema/asciinema-cli/blob/v1.2.0/doc/asciicast-v1.md)) is saved to a local file. It can later be replayed with `asciinema play <filename>` and/or uploaded to asciinema.org with `asciinema upload <filename>`. If the `filename` argument is omitted then (after asking for confirmation) the resulting asciicast is uploaded to asciinema.org for further playback in a web browser. `ASCIINEMA_REC=1` is added to the recorded process's environment variables. This can be used by your shell's config file (`.bashrc`, `.zshrc`) to alter the prompt or play a sound when the shell is being recorded. Available options: * `-c, --command=<command>` - Specify command to record, defaults to $SHELL * `-t, --title=<title>` - Specify the title of the asciicast * `-w, --max-wait=<sec>` - Reduce recorded terminal inactivity to max seconds * `-y, --yes` - Answer "yes" to all prompts (e.g. upload confirmation) * `-q, --quiet` - Be quiet, suppress all notices/warnings (implies -y) ##### `play <filename>` **Replay recorded asciicast in a terminal.** This command replays given asciicast (as recorded by `rec` command) directly in your terminal. 
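As a concrete example of the `ASCIINEMA_REC` hint, a `.bashrc` fragment can mark the prompt while a recording is running (the `(rec)` prefix is just an illustrative choice):

```shell
# ~/.bashrc fragment (sketch): prefix the prompt while `asciinema rec`
# is active. ASCIINEMA_REC=1 is injected into the recorded shell's
# environment by asciinema, so the branch is taken only while recording.
if [ -n "$ASCIINEMA_REC" ]; then
  PS1="(rec) ${PS1}"
fi
```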
Playing from a local file: ``` asciinema play /path/to/asciicast.json ``` Playing from HTTP(S) URL: ``` asciinema play https://asciinema.org/a/22124.json asciinema play http://example.com/demo.json ``` Playing from asciicast page URL (requires `<link rel="alternate" type="application/asciicast+json" href="....json">` in page's HTML): ``` asciinema play https://asciinema.org/a/22124 asciinema play http://example.com/blog/post.html ``` Playing from stdin: ``` cat /path/to/asciicast.json | asciinema play - ssh user@host cat asciicast.json | asciinema play - ``` Playing from IPFS: ``` asciinema play ipfs:/ipfs/<KEY>V asciinema play fs:/ipfs/<KEY> ``` Available options: * `-w, --max-wait=<sec>` - Reduce replayed terminal inactivity to max seconds NOTE: it is recommended to run `asciinema play` in a terminal of dimensions not smaller than the one used for recording as there's no "transcoding" of control sequences for new terminal size. ##### `upload <filename>` **Upload recorded asciicast to asciinema.org site.** This command uploads given asciicast (as recorded by `rec` command) to asciinema.org for further playback in a web browser. `asciinema rec demo.json` + `asciinema play demo.json` + `asciinema upload demo.json` is a nice combo for when you want to review an asciicast before publishing it on asciinema.org. ##### `auth` **Assign local API token to asciinema.org account.** On every machine you install asciinema recorder, you get a new, unique API token. This command connects this local token with your asciinema.org account, and links all asciicasts recorded on this machine with the account. This command displays the URL you should open in your web browser. If you never logged in to asciinema.org then your account will be created when opening the URL. NOTE: it is **necessary** to do this if you want to **edit or delete** your recordings on asciinema.org. You can synchronize your config file (which keeps the API token) across the machines but that's not necessary. 
You can assign new tokens to your account from as many machines as you want. #### Configuration file asciinema uses a config file to keep API token and user settings. In most cases the location of this file is `$HOME/.config/asciinema/config`. When you first run `asciinema`, local API token is generated and saved in the file (unless the file already exists). It looks like this: ``` [api] token = d5a2dce4-173f-45b2-a405-ac33d7b70c5f ``` There are several options you can set in this file. Here's a config with all available options set: ``` [api] token = d5a2dce4-173f-45b2-a405-ac33d7b70c5f url = https://asciinema.example.com [record] command = /bin/bash -l maxwait = 2 yes = true [play] maxwait = 1 ``` The options in `[api]` section are related to API location and authentication. To tell asciinema recorder to use your own asciinema site instance rather than the default one (asciinema.org), you can set `url` option. API URL can also be passed via `ASCIINEMA_API_URL` environment variable. The options in `[record]` and `[play]` sections have the same meaning as the options you pass to `asciinema rec`/`asciinema play` command. If you happen to often use either `-c`, `-w` or `-y` with these commands then consider saving it as a default in the config file. ##### Configuration file locations In fact, the following locations are checked for the presence of the config file (in the given order): * `$ASCIINEMA_CONFIG_HOME/config` - if you have set `$ASCIINEMA_CONFIG_HOME` * `$XDG_CONFIG_HOME/asciinema/config` - on Linux, `$XDG_CONFIG_HOME` usually points to `$HOME/.config/` * `$HOME/.config/asciinema/config` - in most cases it's here * `$HOME/.asciinema/config` - created by asciinema versions prior to 1.1 The first one which is found is used. #### Contributing If you want to contribute to this project check out [Contributing](https://asciinema.org/contributing) page. 
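The lookup order above can be sketched as a small shell function. This is a sketch of the documented precedence only, not asciinema's actual implementation, and `find_asciinema_config` is an invented name:

```shell
# Hypothetical helper mirroring the documented config lookup order.
# Prints the first config file that exists and returns non-zero otherwise.
find_asciinema_config() {
  for c in "${ASCIINEMA_CONFIG_HOME:+$ASCIINEMA_CONFIG_HOME/config}" \
           "${XDG_CONFIG_HOME:+$XDG_CONFIG_HOME/asciinema/config}" \
           "$HOME/.config/asciinema/config" \
           "$HOME/.asciinema/config"; do
    # Skip candidates whose environment variable is unset (empty word).
    if [ -n "$c" ] && [ -f "$c" ]; then
      printf '%s\n' "$c"
      return 0
    fi
  done
  return 1
}
```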
#### Authors Developed with passion by [<NAME>](http://ku1ik.com) and great open source [contributors](https://github.com/asciinema/asciinema/contributors) #### License Copyright © 2011-2016 <NAME>. All code is licensed under the GPL, v3 or later. See LICENSE file for details.
ecan
cran
R
Package ‘ecan’ July 7, 2023 Title Ecological Analysis and Visualization Version 0.2.1 Description Support ecological analyses such as ordination and clustering. Contains consistent and easy wrapper functions of 'stat', 'vegan', and 'labdsv' packages, and visualisation functions of ordination and clustering. License MIT + file LICENSE Encoding UTF-8 RoxygenNote 7.2.3 Depends R (>= 3.5.0) URL https://github.com/matutosi/ecan https://github.com/matutosi/ecan/tree/develop (devel) Imports MASS, cluster, dendextend, dplyr, ggplot2, jsonlite, labdsv, magrittr, purrr, rlang, stringr, tibble, tidyr, vegan Suggests ggdendro, knitr, rmarkdown, testthat (>= 3.0.0) Config/testthat/edition 3 NeedsCompilation no Author <NAME> [aut, cre] Maintainer <NAME> <<EMAIL>> Repository CRAN Date/Publication 2023-07-07 12:20:06 UTC

R topics documented: cluster, d, df2table, draw_layer_construction, gen_example, ind_val, is_one2multi, ordination, pad2longest, read_bis
, shd


cluster Helper function for clustering methods Description Helper function for clustering methods. Helper function for calculating distance. Add group names to hclust labels. Add colors to dendrogram. Usage cluster(x, c_method, d_method) distance(x, d_method) cls_add_group(cls, df, indiv, group, pad = TRUE) cls_color(cls, df, indiv, group) Arguments x A community data matrix. rownames: stands. colnames: species. c_method A string of clustering method. "ward.D", "ward.D2", "single", "complete", "average" (= UPGMA), "mcquitty" (= WPGMA), "median" (= WPGMC), "centroid" (= UPGMC), or "diana". d_method A string of distance method. "correlation", "manhattan", "euclidean", "canberra", "clark", "bray", "kulczynski", "jaccard", "gower", "altGower", "morisita", "horn", "mountford", "raup", "binomial", "chao", "cao", "mahalanobis", "chisq", "chord", "aitchison", or "robust.aitchison". cls A result of cluster or dendrogram. 
df A data.frame to be added into ord scores. indiv, group A string to specify the individual and group name columns in df. pad A logical to specify padding strings. Value cluster() returns the result of clustering. $clustering_method: c_method $distance_method: d_method distance() returns a distance matrix. Inputting cls returns a color vector; inputting dend returns a dendrogram with colors. Examples library(dplyr) df <- tibble::tibble( stand = paste0("ST_", c("A", "A", "A", "B", "B", "C", "C", "C", "C")), species = paste0("sp_", c("a", "e", "d", "e", "b", "e", "d", "b", "a")), abundance = c(3, 3, 1, 9, 5, 4, 3, 3, 1)) cls <- df2table(df) %>% cluster(c_method = "average", d_method = "bray") library(ggdendro) # show standard cluster ggdendro::ggdendrogram(cls) # show cluster with group data(dune, package = "vegan") data(dune.env, package = "vegan") cls <- cluster(dune, c_method = "average", d_method = "bray") df <- tibble::rownames_to_column(dune.env, "stand") cls <- cls_add_group(cls, df, indiv = "stand", group = "Use") ggdendro::ggdendrogram(cls)

d Calculating diversity Description Calculating diversity Usage d(x) h(x, base = exp(1)) Arguments x, base A numeric vector. Value A numeric vector.

df2table Convert data.frame and table to each other. Description Convert data.frame and table to each other. Usage df2table(df, st = "stand", sp = "species", ab = "abundance") table2df(tbl, st = "stand", sp = "species", ab = "abundance") dist2df(dist) Arguments df A data.frame. st, sp, ab A string. tbl A table (community matrix). rownames: stands. colnames: species. dist A distance table. Value df2table() returns a table, table2df() returns a data.frame, and dist2df() returns a data.frame. Examples tibble::tibble( st = paste0("st_", rep(1:2, times = 2)), sp = paste0("sp_", rep(1:2, each = 2)), ab = runif(4)) %>% dplyr::bind_rows(., .) 
%>% print() %>% df2table("st", "sp", "ab")

draw_layer_construction Draw layer construction plot Description Draw layer construction plot. Add mid point and bin width of layer heights. Compute mid point of layer heights. Compute bin width of layer heights. Usage draw_layer_construction( df, stand = "stand", height = "height", cover = "cover", group = "", ... ) add_mid_p_bin_w(df, height = "height") mid_point(x) bin_width(x) Arguments df A dataframe including columns: stand, layer height and cover. Optional column: stand group. stand, height, cover, group A string to specify the stand, height, cover, and group columns. ... Extra arguments for geom_bar(). x A numeric vector. Value draw_layer_construction() returns a gg object, add_mid_p_bin_w() returns a dataframe including mid_point and bin_width columns. mid_point() and bin_width() return a numeric vector. Examples library(dplyr) n <- 10 height_max <- 20 ly_list <- c("B", "S", "K") st_list <- LETTERS[1] sp_list <- letters[1:9] st_group <- NULL sp_group <- rep(letters[24:26], 3) cover_list <- 2^(0:4) df <- gen_example(n = n, use_layer = TRUE, height_max = height_max, ly_list = ly_list, st_list = st_list, sp_list = sp_list, st_group = st_group, sp_group = sp_group, cover_list = cover_list) # select stand and summarise by sp_group df %>% dplyr::group_by(height, sp_group) %>% dplyr::summarise(cover = sum(cover), .groups = "drop") %>% draw_layer_construction(group = "sp_group", colour = "white")

gen_example Generate vegetation example Description Stand, species, and cover are basic. Layer, height, st_group, and sp_group are optional. Usage gen_example( n = 300, use_layer = TRUE, height_max = 20, ly_list = "", st_list = LETTERS[1:9], sp_list = letters[1:9], st_group = NULL, sp_group = NULL, cover_list = 2^(0:6) ) Arguments n A numeric giving the number of occurrences to generate. use_layer A logical. If FALSE, height_max and ly_list will be omitted. height_max A numeric. The highest layer of samples. 
ly_list, st_list, sp_list, st_group, sp_group A string vector. st_group and sp_group are optional (default is NULL). Length of st_list and sp_list should be the same as st_group and sp_group, respectively. cover_list A numeric vector. Value A dataframe with columns: stand, layer, species, cover, st_group and sp_group. Examples n <- 300 height_max <- 20 ly_list <- c("B1", "B2", "S1", "S2", "K") st_list <- LETTERS[1:9] sp_list <- letters[1:9] st_group <- rep(LETTERS[24:26], 3) sp_group <- rep(letters[24:26], 3) cover_list <- 2^(0:6) gen_example(n = n, use_layer = TRUE, height_max = height_max, ly_list = ly_list, st_list = st_list, sp_list = sp_list, st_group = st_group, sp_group = sp_group, cover_list = cover_list)

ind_val Helper function for Indicator Species Analysis Description Calculating diversity indices such as species richness (s), Shannon's H' (h), Simpson's D (d), Simpson's inverse D (i). Usage ind_val( df, stand = NULL, species = NULL, abundance = NULL, group = NULL, row_data = FALSE ) Arguments df A data.frame, which has three cols: stand, species, abundance. Community matrix should be converted using table2df(). stand, species, abundance A text to specify each column. If NULL, the 1st, 2nd, and 3rd columns will be used. group A text to specify the group column. row_data A logical. TRUE: return row result data of labdsv::indval(). Value A data.frame. 
Examples

library(dplyr)
library(tibble)
data(dune, package = "vegan")
data(dune.env, package = "vegan")
df <- dune %>%
  table2df(st = "stand", sp = "species", ab = "cover") %>%
  dplyr::left_join(tibble::rownames_to_column(dune.env, "stand"))
ind_val(df, abundance = "cover", group = "Moisture")

is_one2multi    Check cols one-to-one, or one-to-multi in data.frame

Description

Check whether two columns of a data.frame have a one-to-one, one-to-multi, or multi-to-multi relation.

Usage

is_one2multi(df, col_1, col_2)

is_one2one(df, col_1, col_2)

is_multi2multi(df, col_1, col_2)

cols_one2multi(df, col, inculde_self = TRUE)

select_one2multi(df, col, inculde_self = TRUE)

unique_length(df, col_1, col_2)

Arguments

df          A data.frame.
col, col_1, col_2
            A string to specify a colname.
inculde_self
            A logical. If TRUE, the return value includes the input col.

Value

is_one2multi(), is_one2one(), is_multi2multi() return a logical. cols_one2multi() returns strings of colnames that have a one-to-multi relation to the input col. unique_length() returns a list.

Examples

df <- tibble::tibble(
  x     = rep(letters[1:6], each = 1),
  x_grp = rep(letters[1:3], each = 2),
  y     = rep(LETTERS[1:3], each = 2),
  y_grp = rep(LETTERS[1:3], each = 2),
  z     = rep(LETTERS[1:3], each = 2),
  z_grp = rep(LETTERS[1:3], times = 2))
unique_length(df, "x", "x_grp")
is_one2one(df, "x", "x_grp")
is_one2one(df, "y", "y_grp")
is_one2one(df, "z", "z_grp")

ordination    Helper function for ordination methods

Description

Helper function for ordination methods.

Usage

ordination(tbl, o_method, d_method = NULL, ...)

ord_plot(ord, score = "st_scores", x = 1, y = 2)

ord_add_group(ord, score = "st_scores", df, indiv, group)

ord_extract_score(ord, score = "st_scores", row_name = NULL)

Arguments

tbl         A community data matrix. rownames: stands. colnames: species.
o_method    A string of ordination method: "pca", "ca", "dca", "pcoa", or "nmds". "fspa" is removed, because the package dave was archived.
d_method    A string of distance method.
            "correlation", "manhattan", "euclidean", "canberra", "clark", "bray", "kulczynski", "jaccard", "gower", "altGower", "morisita", "horn", "mountford", "raup", "binomial", "chao", "cao", "mahalanobis", "chisq", "chord", "aitchison", or "robust.aitchison".
...         Other parameters for PCA.
ord         A result of ordination().
score       A string to specify the score used for the plot. "st_scores" means stands and "sp_scores" means species.
x, y        A column number for the x and y axis.
df          A data.frame to be added into ord scores.
indiv, group, row_name
            A string to specify the indiv, group, and row_name column in df.

Value

ordination() returns the result of the ordination. $st_scores: scores for stands. $sp_scores: scores for species. $eig_val: eigenvalues for stands. $results_raw: results of the original ordination function. $ordination_method: o_method. $distance_method: d_method. ord_plot() returns a ggplot2 object. ord_extract_score() extracts stand or species scores from an ordination result. ord_add_group() adds a group data.frame into ordination scores.

Examples

library(ggplot2)
library(vegan)
data(dune)
data(dune.env)
df <- table2df(dune) %>%
  dplyr::left_join(tibble::rownames_to_column(dune.env, "stand"))
sp_dammy <- tibble::tibble("species" = colnames(dune),
                           "dammy_1" = stringr::str_sub(colnames(dune), 1, 1),
                           "dammy_6" = stringr::str_sub(colnames(dune), 6, 6))
df <- dplyr::left_join(df, sp_dammy)
ord_dca <- ordination(dune, o_method = "dca")
ord_pca <- df %>%
  df2table() %>%
  ordination(o_method = "pca")
ord_dca_st <- ord_extract_score(ord_dca, score = "st_scores")
ord_pca_sp <- ord_add_group(ord_pca, score = "sp_scores",
                            df, indiv = "species", group = "dammy_1")

pad2longest    Pad a string to the longest width of the strings

Description

Pad a string to the longest width of the strings.

Usage

pad2longest(string, side = "right", pad = " ")

Arguments

string      Strings.
side        Side on which the padding character is added (left, right or both).
pad         Single padding character (default is a space).

Value

Strings.
Examples

x <- c("a", "ab", "abc")
pad2longest(x, side = "right", pad = " ")

read_biss    Read data from BiSS (Biodiversity Investigation Support System) to data frame

Description

BiSS data is formatted as JSON.

Usage

read_biss(txt, join = TRUE)

Arguments

txt         A JSON string, URL or file.
join        A logical. TRUE: join plot and occurrence; FALSE: do not join.

Value

A data frame.

Examples

library(dplyr)
# path <- "set file path"
path <- "https://raw.githubusercontent.com/matutosi/biodiv/main/man/example.json"
read_biss(path)

shdi    Helper function for calculating diversity

Description

Calculates diversity indices such as species richness (s), Shannon's H' (h), Simpson's D (d), and Simpson's inverse D (i).

Usage

shdi(df, stand = NULL, species = NULL, abundance = NULL)

Arguments

df          A data.frame, which has three cols: stand, species, abundance. A community matrix should be converted using table2df().
stand, species, abundance
            A text to specify each column. If NULL, the 1st, 2nd, and 3rd columns will be used.

Value

A data.frame including species richness (s), Shannon's H' (h), Simpson's D (d), and Simpson's inverse D (i).

Examples

data(dune, package = "vegan")
df <- table2df(dune)
shdi(df)
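For reference, the textbook definitions of the indices named above are sketched here, with p_i the relative abundance of species i among S species in a stand; the exact conventions used by shdi() (e.g. whether Simpson's index is reported as D or 1 - D) should be checked against the package source:

```latex
\begin{aligned}
s &= S                                   && \text{species richness}\\
h &= H' = -\sum_{i=1}^{S} p_i \ln p_i    && \text{Shannon's } H'\\
d &= D   = \sum_{i=1}^{S} p_i^{2}        && \text{Simpson's } D\\
i &= 1/D                                 && \text{Simpson's inverse } D
\end{aligned}
```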
Crate base64url
===

Implementation of the encode and decode operations on base64url (as defined in IETF RFC 4648) Strings. The encoded string is restricted to the 2^6 URL-safe UTF-8 code points, without padding. In short, `-` and `_` are used instead of `+` and `/`, and no `=` padding is appended.

Functions
---

* decode — decode takes in a string and tries to decode it into a Vector of bytes. It returns a base64::DecodeError if `string` is not valid Base64URL.
* encode — encode takes in a slice of bytes and returns the bytes encoded as a base64url String.

Function base64url::decode
===

```
pub fn decode<T: ?Sized + AsRef<[u8]>>(
    base64_url: &T
) -> Result<Vec<u8>, DecodeError>
```

decode takes in a string and tries to decode it into a Vector of bytes. It returns a base64::DecodeError if `string` is not valid Base64URL.

Function base64url::encode
===

```
pub fn encode<T: AsRef<[u8]>>(bytes: T) -> String
```

encode takes in a slice of bytes and returns the bytes encoded as a base64url String.
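To make the alphabet substitution concrete, here is a minimal, self-contained sketch of unpadded base64url *encoding* per RFC 4648 §5 (`-` and `_` in place of `+` and `/`, no `=` padding). This is an illustrative re-implementation using only the standard library, not the crate's actual source (which, as the `base64::DecodeError` return type suggests, builds on the base64 crate):

```rust
// URL-safe base64 alphabet: positions 62 and 63 are '-' and '_'.
const ALPHABET: &[u8; 64] =
    b"ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789-_";

fn encode(bytes: &[u8]) -> String {
    let mut out = String::new();
    for chunk in bytes.chunks(3) {
        // Pack up to 3 input bytes into one 24-bit group.
        let mut buf = [0u8; 3];
        buf[..chunk.len()].copy_from_slice(chunk);
        let n = (u32::from(buf[0]) << 16) | (u32::from(buf[1]) << 8) | u32::from(buf[2]);
        // A chunk of k bytes yields k + 1 six-bit symbols; unlike plain
        // base64 we emit no '=' padding for the final partial group.
        for i in 0..(chunk.len() + 1) {
            let idx = ((n >> (18 - 6 * i)) & 0x3f) as usize;
            out.push(ALPHABET[idx] as char);
        }
    }
    out
}

fn main() {
    // RFC 4648 test vectors, with padding stripped.
    assert_eq!(encode(b""), "");
    assert_eq!(encode(b"f"), "Zg");
    assert_eq!(encode(b"fo"), "Zm8");
    assert_eq!(encode(b"foo"), "Zm9v");
    assert_eq!(encode(b"foobar"), "Zm9vYmFy");
    println!("ok");
}
```

With the real crate, the same result would come from `base64url::encode(b"foo")`, and `base64url::decode("Zm9v")` would recover the original bytes.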